
How To Improve The Linux System’s Security Using Firejail

https://www.ostechnix.com/improve-linux-systems-security-using-firejail


As you already know, the Linux kernel is fairly secure by default. But that doesn't mean the software running on your Linux system is completely secure. For example, an add-on in your web browser may cause serious security issues, or a keylogger could be active in the browser while you do financial transactions over the internet without your knowledge. Even though we can't make our Linux box completely bullet-proof, we can still add an extra pinch of security using an application called Firejail. It is a security utility that sandboxes such applications and lets them run in a controlled environment. To put it simply, Firejail is a SUID (Set owner User ID upon execution) program that reduces the risk of security breaches by restricting the running environment of untrusted applications.
In this brief tutorial, we will discuss how to install Firejail and use it to improve your Linux system's security.

Features

Firejail's notable features include:
  • Easy to install
  • Lets the user set file or directory attributes
  • Customizable security profiles
  • Network support
  • A separate sandbox container for each application
  • Easy to monitor
  • A GUI tool to manage sandboxed applications


Installing Firejail

This security application is easy to install using the apt-get package manager. We will be using Ubuntu 16.04 for demonstration purposes.
Update Ubuntu Linux:
# apt-get update
Install Firejail application with command:
# apt-get install firejail
By default, Firejail configurations and profiles are stored under /etc/firejail. Users can manage these as per their needs. Have a look at the following output:
# ls /etc/firejail

Run applications with firejail

The typical syntax of Firejail is:
# firejail [application-name]
Say for example, to run Firefox web browser using firejail, we can use the following command:
# firejail firefox
When a user launches an application with Firejail, the profile defined in the Firejail configuration gets loaded and events are logged to syslog. By default, Firejail launches the application with a default profile, which you can customize with your own parameters.

Customize a Firejail profile for an application

To create a custom profile for an application or command, create the following directory under the user's home directory.
# cd ~
# mkdir -p  ~/.config/firejail
Copy the generic profile into the newly created directory.
# cp /etc/firejail/generic.profile ~/.config/firejail/example.profile
Then open the copied profile in an editor to customize it:
# vim ~/.config/firejail/example.profile

If you want to hide the Documents folder of a particular user from the sandboxed application entirely, blacklist it:
blacklist /home/user/Documents
If you want to make a directory read-only instead:
read-only /home/user/Downloads
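Putting those directives together, a minimal custom profile might look like the sketch below; the file name and the extra noroot and caps.drop restrictions are illustrative additions, not from the original article.
# ~/.config/firejail/example.profile - minimal sketch, adjust paths as needed
blacklist /home/user/Documents
read-only /home/user/Downloads
noroot
caps.drop all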
Accessing banking sites over the internet should be done in a secured environment, and this can be achieved with Firejail.
Create a directory for the user.
# mkdir /home/user/safe
Firejail will make Firefox treat 'safe' as its home directory.
# firejail --private=/home/user/safe firefox &
To define the network interface the application is allowed to use:
# firejail --net=enp0s3 firefox &
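As for the "easy to monitor" feature mentioned above, you can inspect running sandboxes with two commands that ship with Firejail:
# firejail --list
# firemon
firejail --list prints the active sandboxes, and firemon monitors their events.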

Using firejail GUI tool

For ease of use, a GUI tool for Firejail is available, which can be downloaded from this link.
Download the appropriate package for your hardware and operating system, and use it.

Conclusion

The Firejail tool is a must-have for security-conscious users. Although there are other methods in Linux that can provide a similar level of security, Firejail is one convenient way to improve the security of your Linux environment. We hope you find this article useful.
Stay tuned!!

Linux block I/O tracing

https://www.collabora.com/news-and-blog/blog/2017/03/28/linux-block-io-tracing


Like starting a car with the hood open, sometimes you need to run your program with certain analysis tools attached to get a full sense of what is going wrong – or right. Be it to debug an issue, or simply to learn how that program works, these probing tools can provide a clear picture of what is going on inside the CPU at a given time.
In kernel space, the challenge of debugging a subsystem is greatly increased. You cannot always insert a breakpoint, step through each instruction, print values and de-reference structures to understand the problem, like you would with GDB in userspace. Sometimes, attaching a debugger may require additional hardware, or you may not be able to stop at a specific region at all because of the overhead created. Much like the rule of not calling printk() inside the printk code, debugging at early boot time or analyzing very low-level code poses challenges of its own, and more often than not, you may find yourself with a locked system and no meaningful debug data.
With that in mind, the kernel includes a variety of tools to trace general execution, as well as very specialized mechanisms, which allow the user to understand what is happening on specific subsystems. From tracing I/O requests to snooping network packets, these mechanisms provide developers with a deep insight of the system once an unexpected event occurs, allowing them to better understand issues without relying on the very primitive printk() debug method.
So large is the variety of tools specialized to each subsystem that discussing them all in a single post is counter-productive. The challenges and design choices behind each part of the Linux kernel are so diverse that we eventually need to focus our efforts on specific subsystems to fully understand the code, instead of looking for a one-size-fits-all kind of tool. In this article, we explore the Linux block I/O subsystem, in an attempt to understand what kind of information is available, and what tools we can use to retrieve it.

iostat

iostat is a tool for monitoring and reporting statistics about the I/O operations happening on the system. It generates a device utilization report in real-time, which includes throughput and latency information split by Reads and Writes, as well as accounting of request sizes. The reports generated by iostat are a fast way to verify if the device is behaving as expected performance-wise, or if a specific kind of operation is misbehaving.
iostat can be configured to run periodically, printing reports at a specific frequency. The first report generated provides the accumulated I/O statistics since the system booted, while each of the subsequent reports will print the operations that occurred since the last report.
The tool is packaged in all major distros. In Debian, you can install the sysstat package, which includes iostat and many other tools for I/O monitoring.
sudo apt install sysstat

The output below exemplifies the execution of iostat, which prints a report every two seconds. The first report has the accumulated statistics, while the next has only the delta from the last report. In this case, I started a dd from /dev/sdb at the same time I ran iostat, which explains the sudden increase of Read data in the sdb row.
[krisman@dilma]$ iostat 2
Linux 4.9.0-2-amd64 (dilma)     03/24/2017      _x86_64_        (4 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           6.50    0.01    1.06    0.08    0.00   92.34

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sda               5.64        35.28       158.58    9309088   41836483
dm-0              9.97        35.07       158.57    9251926   41836180
dm-1              0.98         8.70         3.55    2294873     936692
dm-2              0.16         0.15         0.50      38988     130968
dm-3              8.58        26.21       154.53    6915201   40768520
loop0             0.00         0.01         0.00       2125          0
sdb               0.00         0.16         0.00      42704          0

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           3.75    0.00    7.13   20.03    0.00   69.09

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sda               1.00         0.00         6.00          0         12
dm-0              1.50         0.00         6.00          0         12
dm-1              1.50         0.00         6.00          0         12
dm-2              0.00         0.00         0.00          0          0
dm-3              0.00         0.00         0.00          0          0
loop0             0.00         0.00         0.00          0          0
sdb             680.50     43580.00         0.00      87160          0

In the output above, we have two utilization reports. If we left iostat running for longer, we would have reports like these printed every two seconds. The frequency of reports is defined by the number 2 in the command line.
With the default configuration, iostat will print the number of operations per second in the first column (tps), and the rate and total number of Reads and Writes, respectively, in the following columns for each device (rows) in the system. Some more advanced (and interesting) statistics can be obtained using the -x parameter.
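For instance, to watch the extended statistics for a single disk every two seconds, one could run something like the following (a plain usage sketch; the disk name is illustrative):
$ iostat -x 2 /dev/sda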
In this case, the dd was copying data from sdb into a file on sda, which explains the large number of blocks read in the sdb row. But why doesn't the number of blocks read from sdb match the data written to sda in the report above?
That's most likely because the operating system does some caching of Write requests, in order to improve overall system responsiveness. In this case, it notifies dd that the write was completed long before the data is fully committed to the disk. In fact, if we look at reports generated later, we will see matching numbers.
As an experiment to confirm this hypothesis, we can force the system to flush pending write requests using the sync() system call at the same time we execute the dd, for instance, by executing the sync command in the command line.
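One way to set that up, assuming /dev/sdb is the source disk from the example and you have read access to it, is to keep a sync loop running while the dd executes:
$ dd if=/dev/sdb of=/tmp/copy bs=1M count=1024 &
$ while true; do sync; sleep 1; done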
As a result, as shown in the report below, once we issue sync calls, the transfer numbers start to show a much better correlation between what is being read from sdb and what is being written to sda:
[krisman@dilma]$ iostat 2 /dev/sda /dev/sdb
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           4.80    0.00    8.98   20.48    0.00   65.74

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sda              84.00         0.00     33490.00          0      66980
sdb             672.00     43008.00         0.00      86016          0

iostat gives us a good overview of statistics, but it doesn't really open the hood of the kernel and show what is going on inside. For that, we need other tools.

SCSI logs

Depending on what part of the block I/O stack you are interested in examining, tools will vary. Maybe a filesystem specific or a block layer tracer tool will be more helpful. If you are interested in taking a quick look at the execution and the result of a SCSI command, the SCSI layer exposes a simple logging system that doesn't require any additional tools. It will trace the execution of SCSI commands and dump the data into dmesg.
The interface is exposed in /proc/sys/dev/scsi/logging_level, and you can simply echo hexadecimal values to enable and disable configuration options. Instead of doing that, and to make the feature discussion simpler, we'll use the scsi_logging_level script, provided in Debian by the sg3-utils package, which is simply a wrapper around the procfs interface, to configure the logging mechanism.
First, install it using apt:
sudo apt install sg3-utils

One can use the scsi_logging_level command to enable and configure the log level of several tracepoints along the Linux SCSI stack. The example below enables maximum logging for incoming SCSI ioctls and then, after triggering a couple of SG_IO commands, uses dmesg to print the kernel log.
[krisman@dilma]$ scsi_logging_level -s --ioctl=7; send_ioctls; dmesg
sd 1:0:0:0: [sda] sd_ioctl: disk=sda, cmd=0x2285
sd 1:0:0:0: [sda] sd_ioctl: disk=sda, cmd=0x2285
The disk and cmd fields identify the destination block device and the actual IOCTL submitted. As a slightly more complex example, we can use the --midlevel parameter to track SCSI commands as they flow through the SCSI submission and completion path.
[krisman@dilma]$ scsi_logging_level -s --midlevel=7; dd_something; dmesg
sd 1:0:0:0: [sda] tag#23 Send: scmd 0xffff8aa7a056a080
sd 1:0:0:0: [sda] tag#23 CDB: Read(10) 2800 001d 4680 0000 0800
sd 1:0:0:0: [sda] tag#23 Done: SUCCESS Result: hostbyte=DID_OK driverbyte=DRIVER_OK
sd 1:0:0:0: [sda] tag#23 CDB: Read(10) 2800 001d 4680 0000 0800
sd 1:0:0:0: [sda] tag#23 scsi host busy 1 failed 0
sd 1:0:0:0: Notifying upper driver of completion (result 0)

SCSI logging is useful for tracing I/O requests submitted to SCSI devices, but it obviously cannot handle other kinds of devices. It also can't handle complex filtering options, and the amount of output can be overwhelming.
While SCSI logging gives an insight at the SCSI protocol layer, we can use other tools like blktrace to observe the flow of requests in the block layer.

Blktrace

blktrace uses the Linux kernel tracepoint infrastructure to track requests in flight through the block I/O stack. It traces everything that goes through to block devices, while recording timing information. It is a great tool to debug I/O devices and the block subsystem, since it logs what happened at each step with each I/O request in the system, but it is also helpful to identify performance issues when issuing specific commands to devices.
Since it traces every request going to the device, an actual trace session can get very large very fast, making it harder to analyze, especially when running stress workloads. The output of blktrace is also stored in a binary format, requiring another tool to analyze it. That tool is blkparse, which crawls through a blktrace-generated file and prints it in a human-readable way.
A trace can be collected for late analysis or the output can be piped directly into blkparse, for a real-time debug session.
The blktrace suite is packaged for major distros. In Debian, for instance, you can install it by doing:
sudo apt install blktrace

Below is an example of the blktrace output, at the same time an sg_inq command is issued in userspace. The sg_inq, which is part of the sg3_utils package, is a simple tool to issue SCSI INQUIRY commands to devices through the SG_IO ioctl. Since this is a non-filesystem request, we only queried PC requests in the blktrace to reduce the noise from other requests that could have been issued at the same time.
The nice part of sg_inq is that it is a very simple tool, which will only issue a single SCSI request, being very easy to trace in the blkparse output. Let's take a look at the results now:
[root@dilma blkt]$ ./blktrace /dev/sdb -a PC -o - | ./blkparse -i -
8,16 1 1 0.000000000 12285 I R 252 (12 01 00 00 fc 00 ..) [sg_inq]
8,16 1 2 0.000000836 12285 D R 252 (12 01 00 00 fc 00 ..) [sg_inq]
8,16 0 1 0.000199172 3 C R (12 01 00 00 fc 00 ..) [0]

In one terminal, we executed blktrace to trace PC (non-filesystem) requests on the /dev/sdb disk. We pipe the binary output to blkparse, which generates human-readable formatting.
The first column has the Major and Minor number of the device, unequivocally identifying the destination disk. This is followed by the CPU number that executed that function, the sequence number, and a timestamp of the moment it executed. The fifth column has the Process ID of the task that executed the request, and the sixth column has a character describing the action taken, in other words what part of the I/O execution process was logged.
The next fields will describe the type of the request, for instance, whether it was a Read or Write, and then, the payload data, which can be specific to the request executed. In the case above, we did a SCSI Inquiry command, and inside the parenthesis, one can see the CDB data.
In the example above, we can see the flow of the request across the block system. The first line, which has the action I, indicates the moment when the request entered the block layer. Next, the moment when the request was dispatched (D) and completed (C), respectively.
blktrace is the specialized tool that will provide you with the most information about what is happening inside the block layer. It is great for debugging hard problems, but the amount of information it produces can also be overwhelming. For an initial analysis, other tools may be better suited.

BCC tools

The BCC tools are a very flexible set of scripts that use BPF to implement live probepoints in the Linux kernel. You can either implement your own scripts to probe any specific function that interests you, or you can use one of the example scripts that come with the package, some of which are very useful for generic block I/O debugging.
Writing BPF scripts for tracing is a huge topic on itself and, obviously, not specific to block layer debugging. Still, some existing scripts can provide the user with a deep insight of what is happening at several levels of the block I/O stack.
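To give a flavor of what such a script looks like, here is a minimal sketch using bcc's Python front end; the probed function name is an assumption and varies between kernel versions:
from bcc import BPF

# bcc auto-attaches functions named kprobe__<kernel function> as kprobes;
# blk_account_io_start is assumed to exist in this kernel's block layer.
prog = """
int kprobe__blk_account_io_start(struct pt_regs *ctx) {
    bpf_trace_printk("block I/O request started\\n");
    return 0;
}
"""
BPF(text=prog).trace_print()  # stream trace output until interrupted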
These example scripts are part of the IO Visor project and they are not yet packaged for Debian, as far as I know. In Debian, you are better off installing from source, as described here. Be aware that, because of what I believe is a bug in the Debian kernel, you may need to run the unsigned kernel for now.

Trace

The first of these scripts is trace. It is a simple wrapper to trace the execution of specific functions. In the example below, I quickly caught calls to cache_type_store(), which I triggered by writing to the cache_type sysfs file.
root@dilma:/usr/share/bcc/tools# ./trace 'cache_type_store "%s", arg3'
PID TID COMM FUNC -
29959 29959 tee cache_type_store write through
29973 29973 tee cache_type_store writeback

I asked it to print the third argument of cache_type_store, which is a buffer ("%s") that stores the value written to the cache_type file. For a quick understanding, the signature of the function cache_type_store is as follows:
cache_type_store(struct device *dev, struct device_attribute *attr, const char *buf, size_t count)


Snooping on BIOs and Filesystem operations

Instead of reinventing the wheel and tracing individual functions to track the flow of data, the bcc suite provides scripts to track I/O at specific levels, like at the filesystem or at the block layer.
biosnoop and biotop are tools to track operations at the struct BIO and struct request level. The first one, biosnoop, traces requests in-flight, similarly to blktrace, though it only prints completed requests, instead of tracing them at each step of the block subsystem, like blktrace does. biotop provides a top-like interface, printing the processes that are using the disk and how much I/O each one is issuing, in a very familiar way for top users.
Below is an example output of running biosnoop on an almost idle disk. This tool provides an easy way to log what parts of the disk are being accessed and by whom.
root@dilma:/usr/share/bcc/tools# ./biosnoop
TIME(s) COMM PID DISK T SECTOR BYTES LAT(ms)
0.000000 dmcrypt_write 224 sda W 523419576 45056 1.89
0.002782 dmcrypt_write 224 sda W 523419664 4096 0.06
0.941789 dmcrypt_write 224 sda W 70282904 4096 1.77
5.000375 dmcrypt_write 224 sda W 70264440 4096 1.80

At the filesystem level, one can use ext4slower to snoop on slow requests, in a similar way to biosnoop. This tool only prints requests taking longer than a specified threshold. In my case, it looks like syncing my Spam folder from Collabora takes a little longer than expected! :-P
root@dilma:/usr/share/bcc/tools# ./ext4slower
Tracing ext4 operations slower than 10 ms
TIME COMM PID T BYTES OFF_KB LAT(ms) FILENAME
22:45:08 mbsync 1067 S 0 0 10.15 :collabora-remote:Spam:


Wrapping up

The goal of this post is to give a sense of what tools are available and what kind of information can be collected from the block layer. Opening the hood and checking out the system at run-time is not always easy. But, like we did when validating the iostat caching hypothesis, it is a great opportunity to learn how things work and how they can be improved.

Python Inheritance

https://linuxconfig.org/python-inheritance

Introduction

Inheritance is yet another key concept in Object Oriented Programming, and it plays a vital role in building classes. It allows a class to be based on an existing one.

When you first started writing Python classes, you were told to just put "object" in the parentheses of the class definition and not think too much about it. Well, now's the time to start thinking about it.

"Object" is actually the base class that all Python classes inherit from. It defines a basic set of functionality that all Python classes should have. By inheriting from it when you create a new class, you ensure that that class has that basic functionality.

In short, inheritance is a nice way of categorizing classes and making sure that you don't needlessly repeat yourself.

What Is Inheritance?

Inheritance exists in the real world too. The first couple of guides used a car as the example of a class. Well, what if you want more specific types of cars that all share those basic principles of a car? Inheritance can help.

You can start out with a basic "Car" class that has all of the properties that every car shares. These would all be very general.

After you have your "Car," you can create new classes that take "Car" as their base. They will all have the same properties as the base "Car" class. Then, you can add any additional properties that you want to those more specialized classes. For example, you can have "Truck," "Muscle Car," and "SUV" classes that all inherit from "Car."

When you think about it in real world terms, trucks, muscle cars, and SUVs all have the same basic properties as any other car, but they have specialized properties too.

You can also specialize further. There are tons of different types of trucks. So, you can create more specialized classes that inherit from "Truck." They will all start out with everything from "Car" and everything from "Truck."

Using Inheritance In Python

Alright, so now you can try this out with some real code. Set up a basic "Car" class that inherits from object. Try working with the example below.
class Car(object):
    def __init__(self, make = 'Ford', model = 'Pinto', year = '1971', mileage = 253812, color = 'orange'):
        # A single leading underscore marks these as "private" by convention.
        self._make = make
        self._model = model
        self._year = year
        self._mileage = mileage
        self._color = color

    def move_forward(self, speed):
        print("Your %s is moving forward at %s" % (self._model, speed))

    def move_backward(self, speed):
        print("Moving backward at %s" % speed)


class MuscleCar(Car):
    _hp = 300

    def set_hp(self, hp):
        self._hp = hp

    def get_hp(self):
        return self._hp

    def drag_race(self, opponent):
        if (self._hp > opponent.get_hp()):
            return "You Win!"
        else:
            return "You Lose!"

mynewcar = MuscleCar('Ford', 'Mustang', '2016', 3000, 'red')
mynewcar.set_hp(687)
opponent = MuscleCar('Ford', 'Mustang', '2014', 6400, 'green')
opponent.set_hp(465)

mynewcar.move_forward('25mph')
print(mynewcar.drag_race(opponent))
Notice that the MuscleCar objects were able to use the constructor and the move_forward method from the "Car" class even though the class they were instantiated from doesn't explicitly have them.
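You can also confirm the relationship programmatically; assuming the classes above are defined, Python's built-ins report it directly:
print(isinstance(mynewcar, Car))     # True: a MuscleCar is a Car
print(issubclass(MuscleCar, Car))    # True: MuscleCar derives from Car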

Overriding

Just because a class inherits from another one, you're not stuck with all of the functionality of the parent class. You can override parts of the parent class within child classes. The changes applied to the child class will not apply to the original parent class, so you don't have to worry about messing up any other classes.

In the example above, the "MuscleCar" class just had a variable, _hp, floating there with no way to set it on instantiation. Check out the same example, but with the constructor overridden.
class Car(object):
    def __init__(self, make = 'Ford', model = 'Pinto', year = '1971', mileage = 253812, color = 'orange'):
        self._make = make
        self._model = model
        self._year = year
        self._mileage = mileage
        self._color = color

    def move_forward(self, speed):
        print("Your %s is moving forward at %s" % (self._model, speed))

    def move_backward(self, speed):
        print("Moving backward at %s" % speed)


class MuscleCar(Car):
    def __init__(self, make = 'Ford', model = 'Mustang', year = '1965', mileage = 54032, color = 'blue', hp = 325):
        self._make = make
        self._model = model
        self._year = year
        self._mileage = mileage
        self._color = color
        self._hp = hp

    def set_hp(self, hp):
        self._hp = hp

    def get_hp(self):
        return self._hp

    def drag_race(self, opponent):
        if (self._hp > opponent.get_hp()):
            return "You Win!"
        else:
            return "You Lose!"

mynewcar = MuscleCar('Ford', 'Mustang', '2016', 3000, 'red', 687)
opponent = MuscleCar()


mynewcar.move_forward('25mph')
print(mynewcar.drag_race(opponent))
There are two things to notice. First, _hp has become self._hp and is incorporated into the constructor. Because of this, setting it is much easier. Second, the default values for a new "MuscleCar" have been changed. A Pinto isn't a very good default muscle car, is it?

You can do this with any variable or method in a subclass or child class. It adds an additional degree of flexibility and prevents you from being locked into the functionality of the parent or super class.

The Super Method

Sometimes, you need to access the methods found in the parent class from within the child class. Take the previous example, which overrides the constructor. A lot of that code is redundant. Using super() to call the constructor from the "Car" class eliminates that redundancy and makes for a more streamlined class.

super() can also be used to access regular parent-class methods for use in subclass methods. The example below uses super() both ways.
class Car(object):
    def __init__(self, make = 'Ford', model = 'Pinto', year = '1971', mileage = 253812, color = 'orange'):
        self._make = make
        self._model = model
        self._year = year
        self._mileage = mileage
        self._color = color

    def set_make(self, make):
        self._make = make

    def get_make(self):
        return self._make

    def set_model(self, model):
        self._model = model

    def get_model(self):
        return self._model

    def set_year(self, year):
        self._year = year

    def get_year(self):
        return self._year

    def set_mileage(self, mileage):
        self._mileage = mileage

    def get_mileage(self):
        return self._mileage

    def set_color(self, color):
        self._color = color

    def get_color(self):
        return self._color

    def move_forward(self, speed):
        print("Your %s is moving forward at %s" % (self._model, speed))

    def move_backward(self, speed):
        print("Moving backward at %s" % speed)


class MuscleCar(Car):
    def __init__(self, make = 'Ford', model = 'Mustang', year = '1965', mileage = 54032, color = 'blue', hp = 325):
        super().__init__(make, model, year, mileage, color)
        self._hp = hp

    def set_hp(self, hp):
        self._hp = hp

    def get_hp(self):
        return self._hp

    def drag_race(self, opponent):
        if (self._hp > opponent.get_hp()):
            return "You Win!"
        else:
            return "You Lose!"

    def trade_up(self, year, color):
        super().set_year(year)
        super().set_color(color)
        super().set_mileage(0)

mynewcar = MuscleCar('Ford', 'Mustang', '2016', 3000, 'red', 687)
opponent = MuscleCar()

mynewcar.move_forward('25mph')
print(mynewcar.drag_race(opponent))

mynewcar.trade_up('2017', 'black')
print("My new %s muscle car is %s and has %d miles" % (mynewcar.get_year(), mynewcar.get_color(), mynewcar.get_mileage()))

Look at the way the trade_up method makes use of super() to access and call those setter methods from the parent class.

Closing Thoughts

Inheritance allows you to use classes as templates for more specialized classes. You can build out classes in a structure that begins to resemble a family tree, with the "older" and more general members passing on traits to the "younger," more specialized members.

Much of Object Oriented Programming is about reducing the amount of code that has to be written and preventing as much code as possible from being rewritten. Inheritance plays a big role in making this possible.

Exercises

  1. Create a basic class that inherits from the "Car" class.
  2. Instantiate your new class and call one of the methods from "Car."
  3. Create a new method in your child class.
  4. Call your new method.
  5. Use super() to add variables to your child class's constructor.
  6. Create a method using super() to access the parent class's methods.
  7. Call your new method that uses super().

8 Things You Didn’t Know You Could Do with ADB

https://www.maketecheasier.com/things-you-could-do-with-adb

ADB (Android Debug Bridge) is a debugging tool for Android developers. A developer can use it to perform many programming actions and can check the behavior of the system when the app is running. Even if you are just an average user or a non-developer, there are a few ADB commands that can be useful and help you to be more productive and save you time. Here are some cool tricks that you can do with ADB.
The Recovery mode in Android helps you to reset your phone and create backups. However, these backups can only be stored on phone storage or SD card. With the help of ADB, you can create a full backup of your phone on your computer.
Enter the following command to create a full backup of your phone.
adb backup -all -f /backup/location/file.ab
The above command will back up all the apps and their data to the file location provided by you. Make sure you add the ".ab" file extension to the filename.
After you hit Enter, you will have to unlock your phone and give permission to back up the data. You can also enter a password to encrypt the data. The password will be used when restoring the data.
Other options you can add:
  • -apk: Will back up .apk files
  • -noapk: Will not back up .apk files
  • -obb: Will back up .obb files
  • -noobb: Will not back up .obb files
  • -shared: Will back up SD card data
  • -noshared: Will not back up SD card data
  • -nosystem: Will not back up system apps when -all is added
To restore the backup on your phone enter the following command:
adb restore <backup-file-location>
Unlock your phone and enter the password to restore the backup on your phone.
If you want to back up only a specific app and its data, ADB can help with that, too. This can be helpful in cases where you want to play a game on a different phone with your previously saved gameplay. It also stores the app's cache, so it can be useful for apps like YouTube that save offline videos as cached files.
In order to back up the app, you need to first know the package name of the app. You can find the package name using the following command.
adb shell pm list packages
This will list all the package names installed on your phone. Find the name of the app package that you want to back up and copy it.
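If the list is long, you can filter it with grep; for example, to look for the YouTube package (the search term here is just an illustration):
adb shell pm list packages | grep youtube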
Enter the following command to back up the app and its data:
adb backup -f <file-location-for-backup> -apk <package-name>
Replace <package-name> with the previously copied package name and add a file location as shown in the previous section. Hit Enter. You will be asked to permit the execution of the backup command on your phone, just like in the previous section.
To restore the app, enter the following command:
adb restore <backup-file-location>
If you have multiple apps (apk files) stored in a folder, you can easily batch install them on your phone using ADB. One thing to note is that you won't get any prompt screen on your phone, so be careful with the apps you are going to install. Make sure they don't contain malware.
Enter the following command to install multiple apps from a folder:
for %f in (<folder-path>\*.apk) do adb install "%f"
You will get a “Success” message after each app installation.
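The command above is for the Windows command prompt. On Linux or macOS, an equivalent shell loop would be something like this (with the same folder placeholder):
for f in <folder-path>/*.apk; do adb install "$f"; done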
If for some reason you need the APK of an app from your phone, ADB can easily extract it for you.
First, you need to know the package name of the app that you are going to extract. Perform the list package command shown in the 2nd section to get the package name.
adb shell pm list packages
You need to get the path or file location of this package. We’ll use this path to extract the APK from the phone.
adb shell pm path <package-name>
Copy the path and paste it in the below given command:
adb pull <package-location> <path-on-computer-to-store-APK>
This will store “base.apk” (which is the APK of the file selected by you) on your computer. You can rename it later.
There are many apps available on the Play Store for this, but doing it with ADB is always cool. Also, this will save storage space on your phone as you won’t have to install another app for the task.
Enter the following command to start recording the screen on your phone:
adb shell screenrecord <folder-path/filename.mp4>
The path to be added in the above command should be on your phone storage or SD card. Also, there's a slight limitation here – ADB will record the screen for 3 minutes maximum. If you want to stop the recording before that, you can press "Ctrl + C." Apart from that, you can add the --time-limit parameter to set the time limit beforehand.
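For example, to record for 30 seconds and then copy the video to your computer (the paths are illustrative):
adb shell screenrecord --time-limit 30 /sdcard/demo.mp4
adb pull /sdcard/demo.mp4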
DPI (Dots Per Inch) is a value that Android uses to determine the ideal size of images and app icons shown on the screen. This value can be changed to get a larger, zoomed-in display or a smaller one as per your needs. For instance, a screen rendered at the normal 480 dpi shows everything much larger than the same screen at 180 dpi.
To check what the current dpi is on your phone, enter the following command:
adb shell wm density
To change the dpi, just add the value next to it.
adb shell wm density <value>
You can see the change live on the screen, and no reboot is required. You can switch back to original dpi using the same command.
In today’s world where everything is going wireless, why not connect to adb wirelessly too? It’s quite easy to make this happen. However, you’ll first need to connect your phone via USB to enable it. Also, turn on WiFi on your phone and your computer, and your phone should be on the same wireless network.
Enter the following command to make ADB run in TCP/IP mode:
adb tcpip 5555
Get the IP address of your phone from “Settings -> About -> Status -> IP address” and enter it in the next command.
Enter the command to wirelessly connect ADB with your phone.
adb connect <your-ip-address>
You can now disconnect your USB cable.
Enter the following command to check if it’s connected wirelessly:
adb devices
There is a shell command called dumpsys that developers use to check the system behavior when their app is running. You can use this command to get more info about the phone’s system and check various other hardware info for your knowledge.
Enter the following command to get all the sub-commands that can be used with dumpsys.
adb shell dumpsys | grep "DUMP OF SERVICE"
Now, use the sub-commands accordingly with dumpsys to get more information about various hardware on your phone. The following command shows battery information.
adb shell dumpsys battery
Play around with other sub-commands and get more info about the phone hardware and its status.
There are plenty of things that you can do with ADB, and you don’t need to be a developer to tinker with it. You can also check out this page for all other ADB commands. ADB can be even more useful if you have rooted your phone. Root access will open a plethora of tricks that you can do with ADB on your phone.
If you come up with an error or have any issues using ADB, let us know in the comments below.

How to Setup Docker Private Registry on CentOS 7.x / RHEL 7.x

https://www.linuxtechi.com/setup-docker-private-registry-centos-7-rhel-7

UBER CLI : Easily Get Uber Pickup Time & Price Estimates from Linux Command Line

https://www.2daygeek.com/uber-cli-quickly-get-uber-pickup-time-price-estimates-linux-command-line

Uber-Cli: users can now easily get Uber pickup time and price estimates from the Linux command line. The application is at an early stage and doesn't yet have the main feature, Uber booking.
It’s very useful for NIX guys because now a days most of the guys used to connect remote Linux system from mobile & tab. So, they can easily get the Uber information from command line much faster compare with GUI Uber App in phone.
The developer says he is a lazy person and didn't want to open the phone to check the price and pickup time estimates for a ride.
We can create aliases for frequent trips to get the information even more quickly.
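For example, a shell alias along these lines would do (using the pickup address from the example later in this article):
$ alias uber-home="uber time '11th A cross street, kanaka nagar'"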
Uber-Cli requires Node.js & npm. Make a note, it needs a recent version of Node.js.
Install Node.js & npm on your Linux system by adding the official Node.js repository. There is no need to install npm separately because npm is installed along with Node.js.
Node.js 6.x for Debian based systems.
$ curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash -
$ sudo apt-get install -y nodejs
Node.js 7.x for Debian based systems.
$ curl -sL https://deb.nodesource.com/setup_7.x | sudo -E bash -
$ sudo apt-get install -y nodejs
Node.js 6.x for RPM based systems.
# curl --silent --location https://rpm.nodesource.com/setup_6.x | bash -
# yum -y install nodejs

# dnf -y install nodejs
Node.js 7.x for RPM based systems.
# curl --silent --location https://rpm.nodesource.com/setup_7.x | bash -
# yum -y install nodejs

# dnf -y install nodejs

Install Uber Cli

Everything is ready. Now, you can easily install Uber-Cli via npm.
$ sudo npm install uber-cli -g
Use the following format to get time-to-pickup estimates.
$ uber time 'pickup address here'
I’m in Bangalore, India. See the details below.
$ uber time '11th A cross street, kanaka nager'
┌─────────────────────────────────────────────────────────────────────────────┐
│ ? 11th A Cross Rd, Kanaka Nagar, Hebbal, Bengaluru, Karnataka 560032, India │
├──────────────────────────────────┬──────────────────────────────────────────┤
⏳│ ? │
├──────────────────────────────────┼──────────────────────────────────────────┤
│ 4 min. │ UberPOOL,UberGO │
├──────────────────────────────────┼──────────────────────────────────────────┤
│ 8 min. │ UberX │
├──────────────────────────────────┼──────────────────────────────────────────┤
│ 11 min. │ UberXL │
└──────────────────────────────────┴──────────────────────────────────────────┘
Use the following format to get price estimates.
$ uber price -s 'start address' -e 'end address'
$ uber price -s '11th A cross street, kanaka nagar' -e 'silkboard'
┌──────────┬─────────────────────────────┬─────────────────────────────┬──────────────────────────┬────────────────────────────┐
│ ? │ ? │ ? │ ⏳│ ? Surge? │
├──────────┼─────────────────────────────┼─────────────────────────────┼──────────────────────────┼────────────────────────────┤
│ UberPOOL │ ₹185-₹228 │ 11.37 mi. │ 1 hrs. │ ? │
├──────────┼─────────────────────────────┼─────────────────────────────┼──────────────────────────┼────────────────────────────┤
│ UberGO │ ₹259-₹318 │ 11.37 mi. │ 1 hrs. │ ? │
├──────────┼─────────────────────────────┼─────────────────────────────┼──────────────────────────┼────────────────────────────┤
│ UberX │ ₹289-₹354 │ 11.37 mi. │ 1 hrs. │ ? │
├──────────┼─────────────────────────────┼─────────────────────────────┼──────────────────────────┼────────────────────────────┤
│ UberXL │ ₹340-₹416 │ 11.37 mi. │ 1 hrs. │ ? │
├──────────┼─────────────────────────────┴─────────────────────────────┴──────────────────────────┴────────────────────────────┤
│ ? │ 11th A Cross Rd, Kanaka Nagar, Hebbal, Bengaluru, Karnataka 560032, India │
├──────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ ? │ Outer Ring Rd, B R Layout, Central Silk Board Colony, 1st Stage, BTM Layout 1, Bengaluru, Karnataka 560068, India │
└──────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘

Install has-uber-cli

While writing this article about Uber-Cli, I came to know about has-uber-cli, which checks from the CLI whether Uber is available in your city or not.
$ sudo npm install -g has-uber-cli
Usage: just add your city name after the has-uber command, and it will automatically check and tell you whether Uber cabs are available in the mentioned city.
$ has-uber [city]

AutoFS configuration in Linux

http://kerneltalks.com/config/autofs-configuration-linux

On-demand NFS mounting utility: autofs. Learn what autofs is, why and when to use it, and the autofs configuration steps on a Linux server.


The first place to manage mount points on any Linux system is the /etc/fstab file. This file mounts all listed mount points at system startup and makes them available to users. Although I mainly explain how autofs benefits us with NFS mount points, it also works well with native mount points.
NFS mount points are part of /etc/fstab too. Now, the issue is that even if users don't access NFS mount points, they are still mounted via /etc/fstab and continuously leech system resources in the background; for example, NFS services need to keep checking connectivity, permissions and other details of these mount points. If the number of NFS mounts is considerably high, managing them through /etc/fstab becomes a major drawback, since you are allotting a sizable chunk of system resources to a portion of the system that users do not access frequently.

Why use AutoFS?
In such a scenario, AutoFS comes into the picture. AutoFS is an on-demand NFS mounting facility. In short, it mounts NFS mount points when a user tries to access them. Then, once the timeout value is reached (counted from the last activity on that NFS mount), it automatically un-mounts the NFS mount, saving the system resources that would otherwise serve an idle mount point.
It also reduces your system boot time, since the mounting task is done after system boot, when a user demands it.

When to use AutoFS?
  • Your system has a large number of mount points
  • Many of them are not used frequently
  • The system is tight on resources and every single piece of system resource counts

AutoFS configuration steps
First, you need to install the autofs package using yum or apt. The main configuration file for autofs is /etc/auto.master, which is also called the master map file. This file holds the details of the autofs-controlled mount points. The master map file follows the format below:
mount_point map_file options
where –
  • mount_point is directory on which mounts should be mounted
  • map_file (automounter map file) is file containing list of mount points and their file systems from which they should be mounted
  • options are extra options to be applied on mount_point
A sample master map file looks like the one below:
/my_auto_mount /etc/auto.misc --timeout=60
In the above sample, mount points defined in the /etc/auto.misc file can be mounted under the /my_auto_mount directory, with a timeout value of 60 seconds.
The map_file parameter (the automounter map file) referenced in the master map file above is itself a configuration file, with the format below:
mount_point options source_location
where –
  • mount_point is directory on which mounts should be mounted
  • options are mounting options
  • source_location is FS or NFS path from where mount will be mounted
A sample automounter map file entry for data1 looks like this:
data1 -fstype=ext3 :/dev/fd0
The user should be aware of the share paths. That is, in our case, /my_auto_mount and the mount point names linux and data1 should be known to the user in order to access them.
Together, these two configuration files tell autofs the following:
Whenever a user tries to access the mount point linux or data1 –
  1. autofs checks the data1 source (/dev/fd0) with the option (-fstype=ext3)
  2. mounts data1 on /my_auto_mount/data1
  3. un-mounts /my_auto_mount/data1 when there is no activity on the mount for 60 seconds
Once you are done configuring your required mounts, you can start the autofs service and reload its configuration:
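On a systemd-based distribution, the commands would look something like this (assuming the service is named autofs):
# systemctl start autofs
# systemctl reload autofs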
That's it! The configuration is done!

Testing the AutoFS configuration
Once you reload the configuration, check and you will notice that the autofs-defined mount points are not mounted on the system (see the output of df -h).
Now cd into /my_auto_mount/data1 and you will be presented with a listing of the contents of data1 from /dev/fd0!
Another way is to run the watch utility in another session and keep a watch on the mount command. As you access the paths, you will see the mount point get mounted on the system, and after the timeout value it is un-mounted!
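For example, in a second terminal (the grep pattern just narrows the output to our autofs directory):
$ watch -d 'mount | grep my_auto_mount'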

Open source projects for the Internet of Things, from A to Z

http://linuxgizmos.com/open-source-projects-for-the-internet-of-things-from-a-to-z

This guide to 21 open source projects for IoT ranges from standards organizations to open source frameworks and dev tools.


 
An Open Source Perspective on the Internet of Things
Part 2: 21 Open Source Projects for IoT

The Internet of Things market is fragmented, amorphous, and continually changing, and its very nature requires more than the usual attention to interoperability. It’s not surprising then, that open source has done quite well here — customers are hesitant to bet their IoT future on a proprietary platform that may fade or become difficult to customize and interconnect.



(Source: Wikimedia)

In this second entry in a four-part series about open source IoT, I have compiled a guide to major open source software projects, focusing on open source tech for home and industrial automation. I am omitting more vertical projects related to IoT, such as Automotive Grade Linux and Dronecode, and I'm also skipping open source, IoT-oriented OS distributions, such as Brillo, Contiki, Mbed, OpenWrt, Ostro, Riot, Ubuntu Snappy Core, UCLinux, and Zephyr. Next week, I'll cover hardware projects — from smart home hubs to IoT-focused hacker boards — and in the final part of the series, I'll look at distros and the future of IoT.

The list of 21 projects below includes two major Linux Foundation hosted projects — AllSeen (AllJoyn) and the OCF (IoTivity) — and many more end-to-end frameworks that link IoT sensor endpoints with gateways and cloud services. I have also included a smattering of smaller projects that address particular segments of the IoT ecosystem. We could list more, but it's increasingly difficult to determine the difference between IoT software and just plain software. From the embedded world to the cloud, more and more projects have an IoT story to tell.
All 21 projects claim to be open source, although it’s beyond the scope of this article to ensure they fully live up to those claims. They all run Linux on at least one component in the ecosystem, and most support it throughout, from desktop development to cloud/server, gateway, and sensor endpoint components. The vast majority have components that can run on Linux hacker boards like the Raspberry Pi and BeagleBone, and many support Arduino.
There is still plenty of proprietary technology in IoT, especially among the top-down, enterprise platforms. Yet, even some of these offer partially open access. Verizon’s ThingSpace, for example, which targets 4G smart city applications, has a free development API that supports hacker boards, even if the core platform itself is proprietary. Somewhat similarly, Amazon’s AWS IoT suite has a partially open device SDK and open source starter kits.
Other major proprietary platforms include Apple's HomeKit and Microsoft's Azure IoT Suite. Then there's the 230-member Thread Group, which oversees the peer-to-peer Thread networking protocol based on 6LoWPAN. Launched by Nest, which is owned by Alphabet, the umbrella organization over Google, the Thread Group does not offer a comprehensive open source framework like AllSeen and the OCF. However, it's associated with Brillo, as well as the Weave IoT communication protocol. In May, Nest launched an open source version of Thread called OpenThread (see farther below).

Here are 21 open source software projects for the Internet of Things:
  • AllSeen Alliance (AllJoyn) — The AllJoyn interoperability framework overseen by the AllSeen Alliance (ASA) is probably the most widely adopted open source IoT platform around.
  • Bug Labs (dweet and freeboard) — Bug Labs started out making modular, Linux-based Bug hardware gizmos, but it long ago morphed into a hardware-agnostic IoT platform for the enterprise. Bug Labs offers a "dweet" messaging and alerts platform and a "freeboard" IoT design app. Dweet helps publish and describe data using a HAPI web API and JSON. Freeboard is a drag-and-drop tool for designing IoT dashboards and visualizations.
  • DeviceHive — DataArt's AllJoyn-based device management platform runs on cloud services such as Azure, AWS, Apache Mesos, and OpenStack. DeviceHive focuses on Big Data analytics using tools like ElasticSearch, Apache Spark, Cassandra, and Kafka. There's also a gateway component that runs on any device that runs Ubuntu Snappy Core. The modular gateway software interacts with DeviceHive cloud software and IoT protocols, and is deployed as a Snappy Core service.
  • DSA — Distributed Services Architecture facilitates decentralized device inter-communication, logic, and applications. The DSA project is building a library of Distributed Service Links (DSLinks), which allow protocol translation and data integration with third party sources. DSA offers a scalable network topology consisting of multiple DSLinks running on IoT edge devices connected to a tiered hierarchy of brokers.
  • Eclipse IoT (Kura) — The Eclipse Foundation's IoT efforts are built around its Java/OSGi-based Kura API container and aggregation platform for M2M applications running on service gateways. Kura, which is based on Eurotech's Everywhere Cloud IoT framework, is often integrated with Apache Camel, a Java-based rules-based routing and mediation engine. Eclipse IoT sub-projects include the Paho messaging protocol framework (a minimal Paho publish sketch appears after this list), the Mosquitto MQTT stack for lightweight servers, and the Eclipse SmartHome framework. There's also a Java-based implementation of Constrained Application Protocol (CoAP) called Californium, among others.
  • Kaa — The CyberVision-backed Kaa project offers a scalable, end-to-end IoT framework designed for large cloud-connected IoT networks. The platform includes a REST-enabled server function for services, analytics, and data management, typically deployed as a cluster of nodes coordinated by Apache Zookeeper. Kaa's endpoint SDKs, which support Java, C++ and C development, handle client-server communications, authentication, encryption, persistence, and data marshalling. The SDKs contain server-specific, GUI-enabled schemas translated into IoT object bindings. The schemas govern semantics and abstract the functions of a diverse group of devices.
  • Macchina.io — Macchina.io provides a "web-enabled, modular and extensible" JavaScript and C++ runtime environment for developing IoT gateway applications running on Linux hacker boards. Macchina.io supports a wide variety of sensors and connection technologies including Tinkerforge bricklets, XBee ZB sensors, GPS/GNSS receivers, serial and GPIO connected devices, and accelerometers.
  • Mainspring — M2MLabs' Java-based framework is aimed at M2M communications in applications such as remote monitoring, fleet management, and smart grids. Like many IoT frameworks, Mainspring relies heavily on a REST web-service, and offers device configuration and modeling tools.
  • Node-RED — This visual wiring tool for Node.js developers features a browser-based flow editor for designing flows among IoT nodes. The nodes can then be quickly deployed as runtimes, and stored and shared using JSON. Endpoints can run on Linux hacker boards, and cloud support includes Docker, IBM Bluemix, AWS, and Azure.
  • Open Connectivity Foundation (IoTivity) — This amalgamation of the Intel- and Samsung-backed Open Interconnect Consortium (OIC) and the UPnP Forum is working hard to become the leading open source standards group for IoT. The OCF's open source IoTivity project depends on RESTful, JSON, and CoAP.
  • openHAB — This open source smart home framework can run on any device capable of running a JVM. The modular stack abstracts all IoT technologies and components into "items," and offers rules, scripts, and support for persistence — the ability to store device states over time. OpenHAB offers a variety of web-based UIs, and is supported by major Linux hacker boards.
  • OpenIoT — The mostly Java-based OpenIoT middleware aims to facilitate open, large-scale IoT applications using a utility cloud computing delivery model. The platform includes sensor and sensor network middleware, as well as ontologies, semantic models, and annotations for representing IoT objects.
  • OpenRemote — Designed for home and building automation, OpenRemote is notable for its wide-ranging support for smart devices and networking specs such as 1-Wire, EnOcean, xPL, Insteon, and X10. Rules, scripts, and events are all supported, and there are cloud-based design tools for UI, installation, and configuration, and remote updates and diagnostics.
  • OpenThread — Nest's recent open source spin-off of the 6LoWPAN-based Thread wireless networking standard for IoT is also backed by ARM, Microchip's Atmel, Dialog, Qualcomm, and TI. OpenThread implements all Thread networking layers and implements Thread's End Device, Router, Leader, and Border Router roles.
  • Physical Web/Eddystone — Google's Physical Web enables Bluetooth Low Energy (BLE) beacons to transmit URLs to your smartphone. It's optimized for Google's Eddystone BLE beacon, which provides an open alternative to Apple's iBeacon. The idea is that pedestrians can interact with any supporting BLE-enabled device such as parking meters, signage, or retail products.
  • PlatformIO — The Python-based PlatformIO comprises an IDE, a project generator, and a web-based library manager, and is designed for accessing data from microcontroller-based Arduino and ARM Mbed-based endpoints. It offers preconfigured settings for more than 200 boards and integrates with Eclipse, Qt Creator, and other IDEs.
  • Predix — GE's Predix PaaS (Platform as a Service) software for industrial IoT is based on Cloud Foundry. It adds asset management, device security, and real-time, predictive analytics, and supports heterogeneous data acquisition, storage, and access. GE Predix, which GE developed for its own operations, has become one of the most successful of the enterprise IoT platforms, with about $6 billion in revenues. GE recently partnered with HPE, which will integrate Predix within its own services.
  • The Thing System — This Node.js based smart home "steward" software claims to support true automation rather than simple notifications. Its self-learning AI software can handle many collaborative M2M actions without requiring human intervention. The lack of a cloud component provides greater security, privacy, and control.
  • ThingSpeak — The five-year-old ThingSpeak project focuses on sensor logging, location tracking, triggers and alerts, and analysis. ThingSpeak users can tap a version of MATLAB for IoT analysis and visualizations without buying a license from Mathworks.
  • Zetta — Zetta is a server-oriented IoT platform built around Node.js, REST, WebSockets, and a flow-based "reactive programming" development philosophy linked with Siren hypermedia APIs. Devices are abstracted as REST APIs and connected with cloud services that include visualization tools and support for machine analytics tools like Splunk. The platform connects end points such as Linux and Arduino hacker boards with cloud platforms such as Heroku in order to create geo-distributed networks.
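Since MQTT (via Paho and Mosquitto) shows up repeatedly above, here is a minimal Python sketch of publishing a sensor reading with the Eclipse Paho client; the broker address, topic, and reading are placeholders, and this assumes the paho-mqtt package is installed:
import paho.mqtt.publish as publish

# Send one reading to an MQTT broker (host and topic are placeholders).
publish.single("sensors/livingroom/temperature", payload="21.5", hostname="localhost")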

Read all the articles in this series


This article is copyright © 2016 Linux.com and was originally published here. It has been reproduced by this site with the permission of its owner. Please visit Linux.com for up-to-date news and articles about Linux and open source. The Internet of Things image reproduced in this post is licensed under the Creative Commons Attribution 2.0 Generic license, and was obtained here.

Atop – Monitor real time system performance, resources, process & check resource utilization history

https://www.2daygeek.com/atop-system-process-performance-monitoring-tool/2


Atop is an ASCII full-screen system performance monitoring tool for Linux that is capable of reporting the activity of all server processes (even if processes have finished during the interval).
It’s logging of system and process activity for long-term analysis (By default, the log files are preserved for 28 days), highlighting overloaded system resources by using colors, etc. It shows network activity per process/thread with combination of the optional kernel module netatop.
atop is a Linux process monitor tool which is similar to top but provides major advantages compared to other performance monitoring tools such as top, etc.
It shows system resources activity such as CPU utilization, memory utilization, swap utilization, disks (including LVM), disk I/O, network utilization, priority, username, state and exit code for every process (and thread).

Atop Advantages

  • Resource consumption by all processes
  • Utilization of all relevant resources
  • Permanent logging of resource utilization
  • Highlight critical resources
  • Scalable window width
  • Resource consumption by individual threads
  • Watch activity only & Watch deviations only
  • Accumulated process activity per user
  • Accumulated process activity per program
  • Network activity per process

1) Install atop on Linux

The atop package is not available in the official RHEL/CentOS repositories, so we need to install/enable the EPEL repository to get it. On other distributions, it can be installed from the official distribution repository.
[For RHEL/CentOS & upto Fedora 21]
$ sudo yum install atop

[Fedora 22 and later]
$ sudo dnf install atop

[For Debian/Ubuntu/Mint]
$ sudo apt install atop

[Arch Linux Based System]
$ sudo pacman -S atop

[Mageia]
$ sudo urpmi atop

1a) Install atop on openSUSE

The atop package is not available in the official openSUSE repositories either, so we need to add an additional repository.
[For openSUSE Leap 42.1]
$ sudo zypper addrepo http://download.opensuse.org/repositories/server:monitoring/openSUSE_Leap_42.1/server:monitoring.repo
$ sudo zypper refresh
$ sudo zypper install atop
[For openSUSE 13.2]
$ sudo zypper addrepo http://download.opensuse.org/repositories/server:monitoring/openSUSE_13.2/server:monitoring.repo
$ sudo zypper refresh
$ sudo zypper install atop
[For openSUSE 13.1]
$ sudo zypper addrepo http://download.opensuse.org/repositories/server:monitoring/openSUSE_13.1/server:monitoring.repo
$ sudo zypper refresh
$ sudo zypper install atop

2) Adjust Atop configuration

By default, atop stores activity snapshots at a 10-minute interval. If you want to log activity every minute on a critical production server, change the interval value from 600 to 60 in the atop config file.
[Debian based systems]
$ sudo nano /etc/default/atop

[RPM based systems]
$ sudo nano /etc/sysconfig/atop

[Arch Linux based systems]
$ sudo nano /etc/atop/atop.daily

INTERVAL=60
Note: the logs are stored under /var/log/atop/, so you can review the history whenever you want.
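Once logging is enabled, you can replay a day's history with the -r flag. A minimal sketch, assuming the default atop_YYYYMMDD log file naming, with -b and -e narrowing the time window:
$ atop -r /var/log/atop/atop_20170205 -b 10:00 -e 11:00
Inside the replay, press 't' to step forward through the samples and 'T' to step backward.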

3) Atop Usage

After successful installation, just type atop to monitor Linux process activity; the display refreshes every 10 seconds by default. The output is similar to top.
$ atop
Show individual threads: atop with the -y flag shows the individual threads of each process instead of accumulating them.
$ atop -y
Memory utilization information: atop with the -m flag shows per-process memory information such as VSIZE, RSIZE, VGROW, RGROW & MEM.
  • VSIZE Shows total virtual memory usage per process
  • RSIZE Shows total resident memory usage per process
  • VGROW Shows virtual memory growth during the last interval
  • RGROW Shows resident memory growth during the last interval
  • MEM Shows actual memory usage percentage
$ atop -m
Disk utilization information: atop with the -d flag shows disk activity information. RDDSK shows the amount of data read, WRDSK shows the amount of data written, and DSK shows the total read & write activity for the process.
$ atop -d
atop with the -v flag shows various process details such as PID (process identifier), PPID (parent process identifier), RUID (real user identifier), RGID (real group identifier), and start date & time.
$ atop -v
atop with the -c flag shows the full command line for each process.
$ atop -c
atop with the -u flag shows how many processes (the process count) are active for each user.
$ atop -u
atop with the -p flag shows accumulated process information per program.
$ atop -p
Alternatively, you can sort processes by resource consumption using these interactive keys:
  • C Sort processes in order of cpu-consumption (default)
  • M Sort processes in order of memory-consumption
  • D Sort processes in order of disk-activity
  • N Sort processes in order of network-activity
  • A Sort processes in order of most active resource (auto mode)

4) Install & configure netatop on Linux

There is no official package for netatop, so we have to compile it manually on Linux. Install the required dependency packages (kernel-devel and zlib-devel on RPM-based systems, zlib1g-dev on Debian-based systems) to make netatop work.
[For RHEL/CentOS & upto Fedora 21]
$ sudo yum install kernel-devel zlib-devel

[Fedora 22 and later]
$ sudo dnf install kernel-devel zlib-devel

[For Debian/Ubuntu/Mint]
$ sudo apt install zlib1g-dev
Visit the netatop page, download the latest release, and follow the procedure below.
$ wget http://www.atoptool.nl/download/netatop-1.0.tar.gz
$ tar -xvf netatop-1.0.tar.gz
$ cd netatop-1.0
$ sudo make
$ sudo make install
Load the module and start the daemon.
[For SysVinit system]
$ sudo service netatop start

[For systemd system]
$ sudo systemctl start netatop.service
To load the module and start the daemon at boot:
[For RPM based SysVinit system]
$ sudo chkconfig --add netatop

[For Debian based SysVinit system]
$ sudo update-rc.d netatop defaults

[For systemd system]
$ sudo systemctl enable netatop.service
Once netatop is running, check per-process network usage with atop's -n flag, as shown below.
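$ atop -n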

Rolling Release Vs. Fixed Release Distros — Which Linux Distributions Are Better?

https://fossbytes.com/rolling-release-vs-fixed-release-distros-which-linux-distributions-are-better


Short Bytes: Different methods are available for updating Linux distributions. On this basis, we can broadly classify distros into rolling distributions and fixed release distributions. Rolling means that updates are pushed as soon as they are coded; in a fixed release, updates are tested thoroughly and pushed all at once.
Years ago, when I got the first glimpse of the world of Linux and its distributions, all I knew was an operating system named Ubuntu. And still, Ubuntu is one such thing that pops up in many people’s mind when they hear the word Linux.
Soon, I started to get the hang of it and realized that the Linux world wasn’t only about Ubuntu, it also had the GNU software and other important components that constitute a working Linux distribution.
Another important thing I came to know was about how the updates are delivered for the Linux distros. There are two types of Linux distros based on the type of update delivery method, rolling distribution, and fixed release distribution. Both of them have their own pros and cons.

Rolling Distribution

If you’re the one who wants the latest features and services straight out of the production, then, rolling distributions are the best deal for you. A rolling distribution receives new apps and features as soon as they get out of the code factory of its developers. Arch Linux is a well known rolling distro. There are a number of Arch-based, Debian-based, Gentoo-based, as well as standalone rolling distros.

Fixed Release Distribution

These are also known as point release distributions. In this type of distribution, the updates, released as versions, are pushed after a specified time interval. The apps and feature packages are developed in the time between two consecutive releases, then shipped as a combined ISO file or via the built-in update feature of the distribution. The difference shows up in the version numbers of these major upgrades, for instance Ubuntu 16.04 Xenial Xerus and the newer Ubuntu 16.10 Yakkety Yak.
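As a concrete illustration of the two models (assuming one Arch Linux box and one Ubuntu box), the routine update commands look like this:
$ sudo pacman -Syu
$ sudo apt update && sudo apt upgrade
$ sudo do-release-upgrade
The single pacman command keeps a rolling Arch system fully current; on Ubuntu, apt updates packages within the current release, and do-release-upgrade jumps to the next fixed release when it ships.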

Which is better rolling or fixed release?

Rolling distributions are a great option to enjoy the latest treats from developers. But great things do come at a price. The price, in this case, is the reduced testing time for these updates: rolling distributions may become a shelter for various bugs and vulnerabilities. On the other hand, this is kind of an advantage, because bugs surface early and can be removed before they affect the masses.
In the case of fixed release distros, the updates and features are thoroughly tested and tried before making their way to the machines of the users. Most consumer-centric Linux distros are based on fixed release update cycles.
A point of concern for the fixed release distros is that bugs and vulnerabilities undiscovered during the testing phase may be used to compromise the security once the update is released. Security patches and minor updates are pushed for such distros in addition to the regular update cycle which proceeds at its normal pace.
The fixed release distros are more stable than the rolling ones as the features and services causing trouble are repaired during the testing phase. That’s a downside for the rolling distros.
Major versions of the fixed release distros may differentiate themselves in terms of appearance and other noticeable features and software. You need to upgrade to the next major version when it arrives. In the case of rolling distributions, you don’t need to upgrade to the next major version because it doesn’t exist. There is no major version number attached to Arch Linux, like we have in the case of Linux Mint, and Ubuntu.
Some people might not like the idea of updating their system all the time, at least in countries like India, where fast internet access is a luxury. In the end, it is the user's choice: go for quick rolling updates while compromising stability, or ditch the newest features for a bug-free experience.

Smart light using Arduino on Fedora

https://fedoramagazine.org/smart-light-arduino-fedora

The Internet of Things is a new concept to us. But if we think about it, Internet access is nothing new. We come across many “things” in day-to-day life. They help make our life easier each day. For example, take an ordinary light bulb. It consumes energy and produces light, which helps us see every day.
Technology and improvements have stripped down resource consumption to the bare minimum. They optimize the output, and now we have an era where the mobile and telecommunications industry are booming. The speed of the Internet is unimaginable compared to the past. From that, we have the idea of making things “smarter” by connecting them to the Internet, analyzing petabytes of historical and real-time data, and automating their operation. This results in a smarter way of living. The Internet of Things affects almost all major areas of the industry: agriculture, health care, home automation, and many more.

It is now easy to control a light on an Arduino without an Ethernet shield, just over HTTP. The idea is to let you control a single bulb, or a series of bulbs in your home, from a tap of an application on your device.

Ingredients for your homemade light switch

  1. Arduino UNO with USB port
  2. Arduino IDE
  3. An Internet connection
  4. “Root” access to the development machine
  5. Node.js
  6. Johnny-Five and narf
The Arduino UNO is what we will use as the microcontroller for the switch. In this guide, the Arduino board will control a light. To keep things simple, we will use pin 13 of the Arduino as the light source; this is an LED built into the Arduino itself. No Ethernet or WiFi shield is used.

Setting up the Arduino

To get started, you will need the Arduino integrated development environment, or IDE. If you are using Fedora, you can install the official Arduino IDE with a single command in a terminal.
$ sudo dnf install arduino
Once installed, make sure that you plug in your Arduino and check if your system detects it. After inserting the USB into your system, enter the terminal and look for where the system is registering the Arduino. You will need this information later on to execute the code you will make with the Arduino IDE.
The following command should tell you of its place.
$ dmesg | tail
Making a light switch from an Arduino: Locating the Arduino
Look for a bus device and Arduino on the same line. In my screenshot, the line I needed was Bus 002 Device: ID Arduino SA Uno R3. Once you find the board, we can move ahead to setting up the communication protocol.

Setting up Johnny-Five

Johnny-Five is the JavaScript Robotics & IoT platform. Released by Bocoup in 2012, Johnny-Five is maintained by a community of passionate software developers and hardware engineers. Over 75 developers have made contributions towards building a robust, extensible, and composable ecosystem. In this set-up, we will also be using Firmata. The Firmata library implements the Firmata protocol for communicating with software on the host computer. This allows you to write custom firmware without having to create your own protocol and objects for the programming environment that you use.
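To give a feel for the API, here is a minimal Johnny-Five sketch (this is not the article's LED_Server.js, just an illustrative example; it assumes the johnny-five package is installed and that the StandardFirmata firmware described below is already on the board):
// blink.js -- blink the Arduino UNO's on-board LED on pin 13
const five = require("johnny-five");
const board = new five.Board(); // auto-detects the Arduino's serial port

board.on("ready", () => {
  const led = new five.Led(13); // the on-board LED lives on pin 13
  led.blink(500);               // toggle the LED every 500 ms
});
Save it and run it with node blink.js once the firmware from the next step has been uploaded.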
To install and set up Johnny-Five, open the Arduino IDE we installed in the previous step (if you’re using GNOME, it should be in the Applications menu). In the IDE, go to File > Examples > Firmata > StandardFirmata. We’re going to upload the StandardFirmata to the Arduino board for us to use when creating the switch. StandardFirmata is available in all versions of Firmata greater than v2.5.0.
Once you find this in the Arduino IDE, hit the “Upload” button to push the firmware to the Arduino board. If the upload was successful, the board is prepared for us to use, and you can now close the Arduino IDE.
Making a light switch from an Arduino: Using the Arduino IDE

Set up project workspace

You will need to create and set up a project workspace for creating the Arduino application. For our project, we will be using Node.js as the language for creating the switch. There are several ways to create this kind of application, but to help get you started, I created an HTML page and the JavaScript file you can use for your own set up.
You can find my demo code available on GitHub. For this project, you will want a copy of the index.html and LED_Server.js files. You can copy and paste the two files into the project workspace you created earlier.

Setting up node.js

Now that we have our workspace and the files needed for the project, we need to set up a Node.js server to run the application. To run the "light switch server", you will need to install Node.js and npm, the package manager for Node.js applications.
Enter the following commands to install the necessary dependencies.
$ sudo dnf install npm nodejs
$ npm install narf johnny-five
Once all the dependencies are installed, you will now be able to start your light switch server. Making sure you are in the project workspace folder, enter the following command to start the Node.js server.
$ node LED_Server.js
Once you execute the command, your terminal window should look like the following.
Making a light switch from an Arduino: Running the Node.js server
If you receive an error about a serial port not found, you may need another dependency (depending on your environment). To resolve this, run the following command to install serialport via npm.
$ npm install serialport
Making a light switch from an Arduino: Installing and using serialport
The application will now be running. To test if it’s working, open up your favorite browser and point it at http://127.0.0.1:8079/index.html. This is the address of a local page on your system where you can view the virtual power switch we created.

Controlling the Arduino light

Now, you can control the power for pin 13 on your Arduino board from this webpage. In this proof of concept, you will only be able to control the LED light on the Arduino. However, in a more realistic example, perhaps you leave for a vacation and can't remember if you turned off the lamp next to your bed while you were packing. This solution would allow you to power off the lamp from anywhere in the world at any time.
Here is how the web page looks in action from the above example.
Making a light switch from an Arduino: The web page for controlling the light

Feature Image lightbulb based off this icon from the Noun Project– CC-BY

Dstat – Versatile resource statistics tool for Linux

https://www.2daygeek.com/install-dstat-resource-statistics-process-performance-monitoring-tool-on-linux


Dstat is a versatile tool for generating system resource statistics and a replacement for vmstat, iostat, netstat, and ifstat. Dstat is another handy tool for monitoring systems during performance tuning tests, benchmarks, or troubleshooting. It overcomes some limitations of those other tools and adds extra features, more counters, and flexibility.
I was really impressed by the dstat utility while analyzing it to prepare this article. Digging deeper into its usage, I found awesome features that I have not seen in other performance monitoring tools.
You can additionally monitor MySQL database activity, laptop battery percentage, the number of D-Bus connections, fan speed, NFS activity, Postfix, system temperature sensors, power usage, and much more. I personally advise every administrator to give it a try; it will improve your troubleshooting skills a lot.
Dstat allows you to view all of your system resources in real time. You can combine dstat options in any way your requirements demand, e.g. compare bandwidth utilization on a particular Ethernet interface (eth0 or eth1).
Dstat presents the given input in columns, so there is less confusion and fewer mistakes. You can export the details as CSV output to a file for further investigation and graph generation.
By default, dstat prints one update per delay interval (in seconds). Lots of predefined plugins are available to generate reports, and you can also write your own plugins to collect your own counters and extend dstat in ways you never expected.
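To see which plugins (internal and external) your dstat build ships with, you can list them:
$ dstat --list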

Dstat Features

  • Combines vmstat, iostat, ifstat, netstat information and more
  • Shows stats in exactly the same timeframe
  • Enable/order counters as they make most sense during analysis/troubleshooting
  • Modular design
  • Written in python so easily extendable for the task at hand
  • Easy to extend, add your own counters (please contribute those)
  • Includes many external plugins to show how easy it is to add counters
  • Can summarize grouped block/network devices and give total numbers
  • Can show interrupts per device
  • Very accurate timeframes, no timeshifts when system is stressed
  • Shows exact units and limits conversion mistakes
  • Indicate different units with different colors
  • Show intermediate results when delay > 1
  • Allows to export CSV output, which can be imported in Gnumeric and Excel to make graphs

1) Install dstat on Linux

The dstat package is not available in the official RHEL/CentOS repositories, so we need to install/enable the EPEL repository to get it. On other distributions, it can be installed from the official distribution repository.
[For RHEL/CentOS & upto Fedora 21]
$ sudo yum install dstat

[Fedora 22 and later]
$ sudo dnf install dstat

[For Debian/Ubuntu/Mint]
$ sudo apt install dstat

[Arch Linux Based System]
$ sudo pacman -S dstat

[openSUSE/SUSE]
$ sudo zypper in dstat

[Mageia]
$ sudo urpmi dstat

2) dstat Usage

After successfully installing dstat, simply run dstat without any options and you will get output similar to the screen below.
$ dstat
The default output is the combination cdngy:
  • c : cpu : Total cpu usage
  • d : disk : Disk utilization
  • n : net : Total network usage
  • g : page : Page stats
  • y : sys : System stats
To show detailed information about memory (used, buffer, cache & free), swap (used & free) & virtual memory (allocated, free, major & minor page faults) usage:
$ dstat --mem --swap --vm
To show detailed information about each CPU (cpu0, cpu1, etc.) & total usage. It displays each CPU's activity (user time, system time, idle time, steal time & wait time).
$ dstat -C 0,1,2,total
To show detailed information about disk utilization (read & write) & disk I/O (read & write) for a particular disk. If you want to check total disk utilization & I/O, use dstat --disk --io.
$ dstat --disk --io -D sda
To show detailed information about network utilization (data received & sent) for a particular Ethernet interface. If you want to show utilization for all interfaces, use dstat --net.
$ dstat --net -N eth1
To show detailed information about the top CPU consumer, top cputime (process using the most CPU time, in ms), top disk I/O activity, top disk block I/O activity, top memory, and top latency usage.
$ dstat --top-cpu --top-cputime --top-io --top-bio --top-mem --top-latency
To show detailed information about CPU, disk, memory, process, load & network usage, which is very common for basic troubleshooting when the server load is too high.
$ dstat --cpu --mem --proc --load --disk --net
To show detailed information about tcp (listen, established, syn, time_wait, close), udp (listen, active) & socket (total, tcp, udp, raw, ip-fragments) usage.
$ dstat --tcp --udp --socket
To display statistics with a 5-second delay instead of the default 1-second delay, append the delay to any combination you need. The combination below shows CPU & process statistics every 5 seconds.
$ dstat --time --cpu --proc 5
By default the delay is 1 second and the count is unlimited. To display statistics every 2 seconds, 10 times, append both values to any combination you need. The combination below shows CPU & process statistics every 2 seconds with 10 counts.
$ dstat --time --cpu --proc 2 10
If you want to store the report in a file for further investigation, use the following format with any combination.
$ dstat --output /opt/dstat-output.csv --cpu --mem --disk --io
Then open the resulting CSV file to see the collected data.

Which Is The Best Compression Tool For Linux?

https://www.lifewire.com/which-is-the-best-compression-tool-for-linux-4082712

Introduction

When it comes to file compression tools in Linux, you are left with a number of different choices, but which one is the best?
In this guide, I will put zip, gzip, and bzip2 through their paces to see which one comes out on top.
I have conducted a number of tests against different file types, using different settings for each tool; here are the results.

Best Tool For Compressing Windows Documents

Before looking at a more detailed test I wanted to try each compression tool against a single file type so that we could see how each tool handles the file in question.
These tests have been run against the Microsoft DOCX format.
Default Settings
I started with the default settings for each program.
Tool                 File Size
Initial file size    12202 bytes
zip                  9685
gzip                 9537
bzip2                10109
Best Compression
This time I went for maximum compression.
Tool                 File Size
Initial file size    12202 bytes
zip                  9677
gzip                 9530
bzip2                10109
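For reference, maximum compression is typically requested with the -9 flag. The article does not list its exact invocations, but with a hypothetical file name they would look like this:
$ zip -9 report.zip report.docx
$ gzip -9 report.docx
$ bzip2 -9 report.docx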
To make sure this wasn't a fluke I tried the same test against 2 other documents.
File 1:
Tool                 File Size
Initial file size    14913176
zip                  14657475
gzip                 14657328
bzip2                14741042
File 2:
Tool                 File Size
Initial file size    13314
zip                  10814
gzip                 10653
bzip2                11254
Two of the files contained only text, whereas the larger file contained many pages of text with lots of images and heavy formatting.
In this first test, gzip comes out on top in all categories and bzip2 is the least effective.

Best Tool For Compressing Images

This time I am going to show the results of compressing various image formats such as PNG and JPG.
In theory, JPG files are already compressed and therefore may not compress at all and could, in theory, make the file bigger.
PNG File
Tool                 File Size
Initial file size    345265
zip                  345399
gzip                 345247
bzip2                346484
JPEG File
Tool                 File Size
Initial file size    44340
zip                  44165
gzip                 44015
bzip2                44281
Bitmap File
Tool                 File Size
Initial file size    3113334
zip                  495028
gzip                 494883
bzip2                397569
GIF File
Tool                 File Size
Initial file size    6164
zip                  5772
gzip                 5627
bzip2                6051
In all cases gzip came out on top again, except for one: the humble bitmap. There, bzip2 produced a tiny file in comparison to the original.

Best Tool For Compressing Audio Files

The most common audio format is MP3 and in theory, this has already been compressed so the tools may actually end up increasing the file size.
I am going to test two files:
File 1:
Tool                 File Size
Initial file size    5278905
zip                  5270224
gzip                 5270086
bzip2                5270491
File 2:
Tool                 File Size
Initial file size    4135331
zip                  4126138
gzip                 4126000
bzip2                4119410
This time the results were inconclusive. The compression in all cases was minimal but it is interesting that bzip2 came out the worst for file 1 and the best for file 2.

Best Tool For Compressing Video

In this test, I am going to compress 2 video files. As with MP3 the MP4 file already contains a level of compression and so the results will probably prove to be negligible in terms of how well the tools perform.
I have also included an FLV file which will not have any level of compression as it is a lossless format.
MP4:
Tool                 File Size
Initial file size    731908
zip                  478546
gzip                 478407
bzip2                478042

Yet again the bzip2 tool came out better than the others.
At this stage, it would seem that there is little difference as to which tool you use. The results are close across the board for all file types and sometimes gzip is best and others bzip2 is best and the zip command is usually there or thereabouts.
FLV:
Tool                 File Size
Initial file size    7833634
zip                  4339169
gzip                 4339030
bzip2                4300295

It would appear that if you are compressing video, bzip2 is the compression tool of choice.

Executables

The last single category that I will try is executables.
As executables are compiled code, I suspect that they won't compress very well.
File 1:
Tool                 File Size
Initial file size    26557472
zip                  26514031
gzip                 26513892
bzip2                26639209
File 2:
Tool                 File Size
Initial file size    195629144
zip                  193951631
gzip                 193951493
bzip2                194834876

Again we see that gzip comes out on top and bzip2 comes last. For the smaller executable, the bzip2 file actually grew in size.

Complete Folder Test

Thus far I have dealt with individual files. This time I have a folder full of images, documents, spreadsheets, videos, audio files, executables, and many other file formats.
I created a tar file, which makes it easier to compress with all of the tools: the gzip and bzip2 commands work on single files, whereas the zip command can work on folders directly.
Using the tar command, I created a single file that contains all of the folders and files in an uncompressed format.
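The tar step, plus one way to run each compressor over the result (hypothetical names; the -c flag writes to stdout so the original tar file is preserved for the next tool), looks roughly like this:
$ tar -cf test.tar testfolder/
$ zip test.zip test.tar
$ gzip -c test.tar > test.tar.gz
$ bzip2 -c test.tar > test.tar.bz2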
I am going to monitor a number of things in this test:
  • Compress using default compression settings - report results by file size
  • Compress using default compression settings - report results by time taken
  • Compress using best compression - report results by file size
  • Compress using best compression - report results by time taken
  • Compress using fastest compression - report results by file size
  • Compress using fastest compression - report results by time taken
Default Compression
Tool           File Size      Time Taken
Initial file   13330841600
zip            1303177778     1 minute 10 seconds
gzip           1303177637     1 minute 35 seconds
bzip2          1309234947     6 minutes 5 seconds
Maximum Compression
Tool           File Size      Time Taken
Initial file   13330841600
zip            1303107894     1 minute 10 seconds
gzip           1303107753     1 minute 35 seconds
bzip2          1309234947     6 minutes 10 seconds
Fastest Compression
Tool           File Size      Time Taken
Initial file   13330841600
zip            1304163943     1 minute 0 seconds
gzip           1304163802     1 minute 15 seconds
bzip2          1313557595     6 minutes 10 seconds
Summary
Based on the final test, it is clear that bzip2 is not as useful as the other two compression tools here: it takes longer to compress the files and the final file size is larger.
The difference between zip and gzip is negligible, and whilst gzip generally comes out on top, the zip format is more common across different operating systems.
So my verdict: definitely use either zip or gzip; maybe bzip2 has had its day and needs to be confined to history.

How to find your System details using inxi

https://www.ostechnix.com/how-to-find-your-system-details-using-inxi


There are many free and paid applications available to display or find Linux system details. Today, we will discuss how to find your Linux desktop or server details using a simple yet useful tool called "inxi". It is a free, open source, full-featured command-line system information tool. It shows system hardware, CPU, drivers, Xorg, desktop, kernel, GCC version(s), processes, RAM usage, and a wide variety of other useful information. Be it a hard disk or CPU, the motherboard or the complete detail of the entire system, inxi will display it accurately in seconds. Since it is a CLI tool, you can use it on desktop or server editions. inxi is available in the default repositories of most Linux distributions and some BSD systems.

Install inxi

Like I said, the inxi tool is available in most Linux distribution repositories.
On Arch Linux and derivatives:
To install inxi on Arch Linux or its derivatives like Antergos and Manjaro Linux, run:
sudo pacman -S inxi
On Debian / Ubuntu and derivatives:
sudo apt-get install inxi
On Fedora / RHEL / CentOS / Scientific Linux:
inxi is available in the Fedora default repositories. So, just run the following command to install it straight away.
sudo dnf install inxi
In RHEL and its clones like CentOS and Scientific Linux, you need to add the EPEL repository and then install inxi.
To install EPEL repository, just run:
sudo dnf install epel-release
Or,
sudo yum install epel-release
After installing the EPEL repository, install inxi using command:
sudo dnf install inxi
Or,
sudo yum install inxi
On SUSE/openSUSE:
sudo zypper install inxi

How to use inxi?

inxi requires some additional programs to operate properly. They should be installed along with inxi; however, if they are not installed automatically, you need to find and install them.
To list all required programs, run:
inxi --recommends
If you see any missing programs, install them before you start using inxi.
Now, let us see how to use it to reveal the Linux system details. inxi usage is pretty simple and straight forward.
Open up your Terminal and run the following command to find the complete details of your system.
inxi
Sample output:
CPU~Single core Intel Core i3-2350M (-UP-) speed~2294 MHz (max) Kernel~4.4.0-34-generic x86_64 Up~5 min Mem~177.1/992.4MB HDD~21.5GB(17.0% used) Procs~127 Client~Shell inxi~2.2.35
To display complete details of your system, use “-F” switch as shown below.
inxi -F
Sample output:
System: Host: sk Kernel: 4.11.3-1-ARCH x86_64 (64 bit)
Desktop: MATE 1.18.0 Distro: Arch Linux
Machine: Device: portable System: Dell product: Inspiron N5050
Mobo: Dell model: 01HXXJ v: A05 BIOS: Dell v: A05 date: 08/03/2012
Battery BAT0: charge: 3.2 Wh 99.4% condition: 3.2/45.0 Wh (7%)
CPU: Dual core Intel Core i3-2350M (-HT-MCP-) cache: 3072 KB
clock speeds: max: 2300 MHz 1: 1266 MHz 2: 824 MHz 3: 824 MHz
4: 800 MHz
Graphics: Card: Intel 2nd Generation Core Processor Family Integrated Graphics Controller
Display Server: N/A driver: modesetting Resolution: 80x24
Audio: Card Intel 6 Series/C200 Series Family High Definition Audio Controller
driver: snd_hda_intel
Sound: Advanced Linux Sound Architecture v: k4.11.3-1-ARCH
Network: Card-1: Realtek RTL8101/2/6E PCI Express Fast/Gigabit Ethernet controller
driver: r8169
IF: enp5s0 state: down mac: 24:b6:fd:37:8b:29
Card-2: Qualcomm Atheros AR9285 Wireless Network Adapter (PCI-Express)
driver: ath9k
IF: wlp9s0 state: up mac: c0:18:85:50:47:4f
Drives: HDD Total Size: 500.1GB (73.6% used)
ID-1: /dev/sda model: ST9500325AS size: 500.1GB
Partition: ID-1: / size: 457G used: 342G (79%) fs: ext4 dev: /dev/sda2
ID-2: /boot size: 93M used: 49M (57%) fs: ext4 dev: /dev/sda1
ID-3: swap-1 size: 2.15GB used: 0.00GB (0%) fs: swap dev: /dev/sda3
Sensors: System Temperatures: cpu: 68.0C mobo: N/A
Fan Speeds (in rpm): cpu: N/A
Info: Processes: 165 Uptime: 3:23 Memory: 2368.7/3864.3MB Init: systemd
Client: Shell (bash) inxi: 2.3.12
Want to display the details of one particular piece of hardware? That is possible too.
To display hard disk details only, run:
inxi -D
Sample output:
 Drives: HDD Total Size: 21.5GB (17.0% used)
ID-1: /dev/sda model: VBOX_HARDDISK size: 21.5GB
What about Motherboard? Use “-M” flag.
inxi -M
Sample output:
 Machine: System: innotek (portable) product: VirtualBox v: 1.2
Mobo: Oracle model: VirtualBox v: 1.2
Bios: innotek v: VirtualBox date: 12/01/2006
What about graphics card?
inxi -G
Sample output:
 Graphics: Card: InnoTek Systemberatung VirtualBox Graphics Adapter
Display Server: N/A driver: N/A
tty size: 80x24 Advanced Data: N/A out of X
Network card?
inxi -N
Sample output:
Network: Card: Intel 82540EM Gigabit Ethernet Controller driver: e1000
As you can see in the above outputs, You can find almost all hardware details in seconds using inxi.
inxi not only displays the hardware details, it can show some other stuff too.
Let us display the list of repositories configured on your system.
inxi -r
Sample output:
 Repos: Active apt sources in file: /etc/apt/sources.list
deb http://in.archive.ubuntu.com/ubuntu/ xenial main restricted
deb http://in.archive.ubuntu.com/ubuntu/ xenial-updates main restricted
deb http://in.archive.ubuntu.com/ubuntu/ xenial universe
deb http://in.archive.ubuntu.com/ubuntu/ xenial-updates universe
deb http://in.archive.ubuntu.com/ubuntu/ xenial multiverse
deb http://in.archive.ubuntu.com/ubuntu/ xenial-updates multiverse
deb http://in.archive.ubuntu.com/ubuntu/ xenial-backports main restricted universe multiverse
deb http://security.ubuntu.com/ubuntu xenial-security main restricted
deb http://security.ubuntu.com/ubuntu xenial-security universe
deb http://security.ubuntu.com/ubuntu xenial-security multiverse
You can even display the weather details of a given location. Yes, you read it right. Let me show you the weather details of my location.
inxi -W Erode,Tamilnadu
Sample output:
Weather: Conditions: 91 F (33 C) - Thunderstorm Time: August 22, 4:04 PM IST
Really cool, isn’t it?
For more options, refer to the man page.
man inxi

That's all for now. The primary purpose of this tool is IRC or forum support. If you are asking for help on a forum or website and someone wants the specification of your system, just run this command and copy/paste the output.
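For such forum posts, a commonly used combination (assuming a reasonably recent inxi) is -F for full output, -x for extra detail, and -z to filter out identifiers such as MAC addresses and serial numbers:
inxi -Fxz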
And, that’s all. Hope you find this tool useful. More good stuffs to come. Stay tuned!
Cheers!

The Linux File System Structure Explained

http://www.linuxandubuntu.com/home/the-linux-file-system-structure-explained

When I first came to Linux from Windows, I found the Linux file system structure a bit confusing, simply because I had known nothing other than the Windows file system my entire life. But after persisting through the learning curve, the mystery was unraveled, and I can now comfortably switch between Linux and Windows whenever needed. I actually feel I understand the Windows file system better now, after learning the Linux file system.
For me, the biggest difference between the two file systems is to understand where the root of the file system begins. In Windows, the root begins at the drive letter, usually C:\, which basically means it begins at the hard drive. In Linux however, the root of the filesystem doesn’t correspond with a physical device or location, it’s a logical location of simply “/”. See the graphics below for a visual representation.

​Linux File System Structure Tree

Image Courtesy - tldp.org

​Windows File System Tree

Another thing to remember is that in Linux, everything is a file. Or, more accurately, everything is represented as a file, while in Windows it may be displayed as a disk drive.

For example, in Windows the hard drive is typically represented as C:\ in the file explorer, which even displays a little hard drive icon and shows how much space is being used. In Linux, on the other hand, the hard drive is represented merely as /dev/sda, a special device file through which Linux exposes the disk.
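You can see this from a terminal; a quick illustration, assuming your disk is /dev/sda:
$ ls -l /dev/sda
$ ls -l /dev/tty1
In the output, a leading 'b' in the mode field marks a block device (the disk), while a leading 'c' marks a character device such as a console.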

So let’s take some other more practical examples. The Linux equivalent of your Documents folder in Windows would be /home/username/Documents, whereas in Windows it’s typically C:\Users\UserName\Documents. These are actually pretty similar, but you can see where the differences lie.

So using the above Linux file system chart, we need to explore what each folder in the Linux file system is for, which will help us to better understand how Linux works in general. Note that not every folder listed here or pictured above necessarily appears in every Linux distro, but most of them do.
  • / - this is known as “root”, the logical beginning of the Linux file system structure. Every single file path in Linux begins from root in one way or another. / contains the entirety of your operating system.
  • /bin - Pronounced “bin” (as opposed to “bine”), this is where most of your binary files are stored, typically for the Linux terminal commands and core utilities, such as cd (change directory), pwd (print working directory), mv (move), and so on.
  • /boot - This is where all the files needed for Linux to boot are kept. Many people, including myself, like to keep this folder in its own separate partition on the hard drive, especially when dual-booting is involved. A key thing to note is that even when /boot is stored on a different partition, it is still logically located at /boot as far as Linux is concerned.
  • /dev - This is where your physical devices are mounted, such as your hard drives, USB drives, optical drives, and so on. We’ve already explored that typically, your system hard drive is mounted under /dev/sda, whereas your USB thumb drive might be mounted under /dev/sde. You may also have different partitions on your disk, so you’ll see /dev/sda1, /dev/sda2, and so on. In Windows, when you go to “My Computer” or “Computer” and you can see all of the physical devices and drives connected to your computer, this is the equivalent of /dev in Linux file structure.
  • /etc - Pronounced “et-see”, although some also prefer to spell it out, is where configuration files are stored. Configurations stored in /etc will typically affect all users on the system; whereas users can also store configuration files under their own /home folders, which will only affect that particular user.
  • /home - This is where you’ll spend the overwhelming majority of your time, as this is where all of your personal files are kept. The Desktop, Documents, Downloads, Photos, and Videos folders are all stored under the /home/username directory. You can also store files directly in your /home folder without going to a sub-folder, if you wish so. Typically, when you open a command-line terminal in Linux, the default location that the terminal points to is your /home/username folder, unless you’ve manually changed the default location to something else.
  • /lib - This is where libraries are kept. You'll notice that many times when installing Linux software packages, additional libraries are also automatically downloaded, and they almost always start with lib-something. These are basically the files needed for your programs on Linux to work. You can think of this folder as somewhat equivalent to the Program Files folder on Windows, although it's not exactly the same. Unlike Windows, libraries can be shared between many different programs, which results in Linux installations typically being much more lightweight than Windows, because in Windows each program typically needs its own copy of a library installed, even if it's redundant and already exists for another program. Surely a benefit of the Linux file system structure.
  • /media - Another place where external devices such as optical drives and USB drives can be mounted. This varies between different Linux distros.
  • /mnt - This is basically a placeholder folder used for mounting other folders or drives. Typically this is used for Network locations, but you could really use it for anything you want. I used to use it as the mount point for my media server’s hard drive (/mnt/server).
  • /opt - Optional software for your system that is not already managed by your distro's package manager. I don't really ever find myself using this; your mileage may vary.
  • /proc - The “processes” folder where a lot of system information is represented as files (remember, everything is a file). It basically provides a way for the Linux kernel (the core of the operating system) to send and receive information from various processes running in the Linux environment.
  • /root - This is the equivalent to the /home folder specifically for the root user, also called the superuser. You really don’t want to touch anything in here unless you know what you’re doing.
  • /sbin - Similar to /bin, except that it’s dedicated to certain commands that can only be run by the root user, or the superuser.
  • /tmp - This is where temporary files are stored, and they are usually deleted upon shutdown, which saves you from having to manually delete them like is required in Windows.
  • /usr - Contains files and utilities that are shared between users.
  • /var - This is where variable data is kept, usually system logs but can also include other types of data as well.

You can do some more research online and go deeper to learn more about specific applications and usage of each of the above mentioned folders, but for the typical everyday home user, your /home folder is generally the only folder you’ll be directly interacting with. Occasionally you may have to venture into the other folders if you’re trying to do some troubleshooting, but typically modern Linux distros automatically maintain these folders and they require little to no user interference. The exception would be if you’re using a distro like Arch Linux or Gentoo, in which case, you probably didn’t need to read this article in the first place.

Conclusion

To reiterate my previous statement, keep in mind that the Linux file system is a logical system, rather than a physical one. Different folders in the system may be on different partitions on the disk, or even on different disks altogether, but logically everything is still in the same location. The best way to grasp this concept is to simply use Linux as your daily driver, as the best way to learn is through immersion. Ubuntu or Linux Mint are probably the best choices for this task. After using the Linux file system for a while, eventually everything will click and you'll understand what's going on.

Also Read -
Zorin OS 10 A Newbie-Friendly Linux Distribution
PCLinuxOS A Newbie-Friendly Linux Distribution
Linux Mint Best Distro for New Linux users

How to Install Windows PowerShell Core 6.0 in Linux

https://www.ostechnix.com/how-to-install-windows-powershell-in-linux


The CEO of Microsoft, Mr. Satya Nadella, said "Microsoft loves Linux", and he hasn't just said it, he has also proved it. After its partnership with Ubuntu, Microsoft has now open sourced PowerShell and made it available on Linux and Mac OS. Currently, PowerShell supports the CentOS, RHEL, and Ubuntu Linux operating systems (more will follow), and Mac OS X. For those who don't know, PowerShell is a distributed, scalable, heterogeneous configuration and automation framework, consisting of an interactive command-line shell and scripting language, originally for the Windows operating system. It is built on the .NET framework, and it allows users to automate and simplify system tasks. For more details about PowerShell, refer the following link.
In this brief tutorial, let us see how to install PowerShell in Ubuntu 14.04 LTS, Ubuntu 16.04 LTS and CentOS 7 64-bit server editions.

Install Windows PowerShell Core 6.0 in Linux

As of now, PowerShell supports RHEL and its clones like CentOS, as well as Ubuntu. The PowerShell developers have now made installation much easier.
On Ubuntu 14.04 LTS:
Add PowerShell Repository public key:
curl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -
Add PowerShell repository:
curl https://packages.microsoft.com/config/ubuntu/14.04/prod.list | sudo tee /etc/apt/sources.list.d/microsoft.list
Update the software sources list:
sudo apt-get update
Then, install PowerShell using command:
sudo apt-get install -y powershell
On Ubuntu 16.04 LTS:
Add PowerShell Repository public key:
curl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -
Add PowerShell repository:
curl https://packages.microsoft.com/config/ubuntu/16.04/prod.list | sudo tee /etc/apt/sources.list.d/microsoft.list
Update the software sources list:
sudo apt-get update
Then, install PowerShell using command:
sudo apt-get install -y powershell
On CentOS 7:
Add PowerShell repository as root user:
curl https://packages.microsoft.com/config/rhel/7/prod.repo > /etc/yum.repos.d/microsoft.repo
Then, install PowerShell using command:
yum install -y powershell
We have now installed PowerShell. Next, we will see how to use it in real time.

Getting started with PowerShell

Please note that PowerShell for Linux is still in the development stage, so you may encounter some bugs. If you do, join the PowerShell community blog (the link is given at the end of this article) and get help.
Once you have installed PowerShell, run the following command to enter the PowerShell console/session.
powershell
This is how the PowerShell console looks on my CentOS 7 server.
PowerShell 
Copyright (C) 2016 Microsoft Corporation. All rights reserved.

PS /root>
In a PowerShell session, the PowerShell commands are called cmdlets, and the prompt is shown as PS />.
Working in PowerShell is very similar to working in Bash. I ran some Linux commands in PowerShell, and it seems almost all of them work there. PowerShell also has its own set of commands (cmdlets), and TAB autocompletion works just like in Bash.
Let us see a few examples.
View PowerShell version
To view the version of the PowerShell, enter:
$PSVersionTable
Sample output:
Name Value 
---- -----
PSVersion 6.0.0-alpha
PSEdition Core
PSCompatibleVersions {1.0, 2.0, 3.0, 4.0...}
BuildVersion 3.0.0.0
GitCommitId v6.0.0-alpha.15
CLRVersion
WSManStackVersion 3.0
PSRemotingProtocolVersion 2.3
SerializationVersion 1.1.0.1
As you can see in the above output, the version of PowerShell is 6.0.0-alpha.15.
Creating files
To create a new file, use ‘New-Item’ command as shown below.
New-Item ostechnix.txt
Sample output:
 Directory: /root


Mode LastWriteTime Length Name
---- ------------- ------ ----
------ 2/5/17 7:05 PM 0 ostechnix.txt
or simply use ">" as shown below:
"" > ostechnix.txt
Here, "" denotes empty content, and ostechnix.txt is the file name.
To append some contents in the file, run the following command:
Set-Content ostechnix.txt -Value "Welcome to OSTechNix blog!"
Or
"Welcome to OSTechNix blog!"> ostechnix.txt
Viewing the content of a file
We have created some files from PowerShell. How do we view their contents? That's easy.
Simply use the 'Get-Content' cmdlet to display the contents of any file.
Get-Content <filename>
Example:
Get-Content ostechnix.txt
Sample output:
Welcome to OSTechNix blog!
Deleting files
To delete a file or item, use ‘Remove-Item’ command as shown below.
Remove-Item ostechnix.txt
Let us verify whether the item has really been deleted using command:
Get-Content ostechnix.txt
You should see an output like below.
Get-Content : Cannot find path '/root/ostechnix.txt' because it does not exist.
At line:1 char:1
+ Get-Content ostechnix.txt
+ ~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : ObjectNotFound: (/root/ostechnix.txt:String) [Ge
t-Content], ItemNotFoundException
+ FullyQualifiedErrorId : PathNotFound,Microsoft.PowerShell.Commands.GetCo
ntentCommand
Or you can simply use the "ls" command to check whether the file exists.
Viewing the running processes
To view the list of running processes, just run:
Get-Process
Sample output:
 NPM(K) PM(M) WS(M) CPU(s) Id SI ProcessName 
------ ----- ----- ------ -- -- -----------
0 0.00 0.00 0.02 599 599 agetty
0 0.00 0.00 0.00 2385 385 anacron
0 0.00 0.00 0.00 257 0 ata_sff
0 0.00 0.00 0.07 556 556 auditd
0 0.00 0.00 0.03 578 578 avahi-daemon
0 0.00 0.00 0.00 590 578 avahi-daemon
0 0.00 0.00 0.05 2327 327 bash
0 0.00 0.00 0.00 19 0 bioset
0 0.00 0.00 0.00 352 0 bioset
0 0.00 0.00 0.00 360 0 bioset
0 0.00 0.00 0.35 597 597 crond
0 0.00 0.00 0.00 31 0 crypto
0 0.00 0.00 0.11 586 586 dbus-daemon
0 0.00 0.00 0.03 63 0 deferwq
0 0.00 0.01 0.93 585 585 firewalld
0 0.00 0.00 0.00 30 0 fsnotify_mark
0 0.00 0.00 0.00 43 0 ipv6_addrconf
0 0.00 0.00 0.02 94 0 kauditd
0 0.00 0.00 0.00 20 0 kblockd
0 0.00 0.00 0.00 14 0 kdevtmpfs
0 0.00 0.00 0.00 351 0 kdmflush
0 0.00 0.00 0.00 359 0 kdmflush
0 0.00 0.00 0.00 13 0 khelper
0 0.00 0.00 0.03 29 0 khugepaged
0 0.00 0.00 0.00 26 0 khungtaskd
0 0.00 0.00 0.00 18 0 kintegrityd
0 0.00 0.00 0.00 41 0 kmpath_rdacd
0 0.00 0.00 0.00 42 0 kpsmoused
0 0.00 0.00 0.00 28 0 ksmd
0 0.00 0.00 0.17 3 0 ksoftirqd/0
0 0.00 0.00 0.02 27 0 kswapd0
0 0.00 0.00 0.00 2 0 kthreadd
0 0.00 0.00 0.00 39 0 kthrotld
0 0.00 0.00 0.01 2313 0 kworker/0:0
0 0.00 0.00 0.04 2369 0 kworker/0:0H
0 0.00 0.00 0.00 2440 0 kworker/0:1
0 0.00 0.00 0.05 2312 0 kworker/0:2H
0 0.00 0.00 0.28 2376 0 kworker/0:3
0 0.00 0.00 0.25 6 0 kworker/u2:0
0 0.00 0.00 0.00 272 0 kworker/u2:2
0 0.00 0.00 0.01 473 473 lvmetad
0 0.00 0.00 0.02 2036 036 master
0 0.00 0.00 0.00 21 0 md
0 0.00 0.00 0.00 7 0 migration/0
0 0.00 0.00 0.00 15 0 netns
0 0.00 0.00 0.22 653 653 NetworkManager
0 0.00 0.00 0.00 16 0 perf
0 0.00 0.00 0.01 2071 036 pickup
0 0.00 0.00 0.05 799 799 polkitd
0 0.00 0.02 5.02 2401 327 powershell
0 0.00 0.00 0.00 2072 036 qmgr
0 0.00 0.00 0.00 8 0 rcu_bh
0 0.00 0.00 0.73 10 0 rcu_sched
0 0.00 0.00 0.00 9 0 rcuob/0
0 0.00 0.00 0.51 11 0 rcuos/0
0 0.00 0.00 0.06 582 582 rsyslogd
0 0.00 0.00 0.00 267 0 scsi_eh_0
0 0.00 0.00 0.00 271 0 scsi_eh_1
0 0.00 0.00 0.00 275 0 scsi_eh_2
0 0.00 0.00 0.00 269 0 scsi_tmf_0
0 0.00 0.00 0.00 273 0 scsi_tmf_1
0 0.00 0.00 0.00 277 0 scsi_tmf_2
0 0.00 0.00 0.03 1174 174 sshd
0 0.00 0.00 0.79 2322 322 sshd
0 0.00 0.00 1.68 1 1 systemd
0 0.00 0.00 0.24 453 453 systemd-journal
0 0.00 0.00 0.04 579 579 systemd-logind
0 0.00 0.00 0.19 481 481 systemd-udevd
0 0.00 0.00 0.54 1175 175 tuned
0 0.00 0.00 0.02 12 0 watchdog/0
0 0.00 0.00 0.01 798 798 wpa_supplicant
0 0.00 0.00 0.00 17 0 writeback
0 0.00 0.00 0.00 378 0 xfs_mru_cache
0 0.00 0.00 0.00 379 0 xfs-buf/dm-1
0 0.00 0.00 0.00 539 0 xfs-buf/sda1
0 0.00 0.00 0.00 382 0 xfs-cil/dm-1
0 0.00 0.00 0.00 542 0 xfs-cil/sda1
0 0.00 0.00 0.00 381 0 xfs-conv/dm-1
0 0.00 0.00 0.00 541 0 xfs-conv/sda1
0 0.00 0.00 0.00 380 0 xfs-data/dm-1
0 0.00 0.00 0.00 540 0 xfs-data/sda1
0 0.00 0.00 0.51 383 0 xfsaild/dm-1
0 0.00 0.00 0.00 543 0 xfsaild/sda1
0 0.00 0.00 0.00 377 0 xfsalloc
The above command will display the whole list of running processes in your Linux system.
To view any particular running process, use ‘-Name’ option with the above command.
For example, to view the powershell process, run:
Get-Process -Name powershell
Sample output:
 NPM(K) PM(M) WS(M) CPU(s) Id SI ProcessName 
------ ----- ----- ------ -- -- -----------
0 0.00 0.02 5.19 2401 327 powershell
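Cmdlets can be chained together with pipes, much like Linux commands. As a small sketch using standard cmdlets (these ship with PowerShell Core as well), the following shows the five processes that have used the most CPU time:
Get-Process | Sort-Object CPU -Descending | Select-Object -First 5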
Check the following link to learn how to work in Windows PowerShell.
Viewing command aliases
Are you too lazy to type a whole command? Just type a few characters and hit the TAB key; the command will autocomplete, or a list of suggested commands will be displayed, just like in the Linux Bash shell.
Alternatively, there are aliases for some commands.
For example, to clear the screen, you would type: Clear-Host.
Or you can simply type the alias of the above command ‘cls’ or ‘clear’ to clear the screen.
To view the list of available aliases, run:
Get-Alias
Here is the complete list of available aliases:
CommandType Name Version Source 
----------- ---- ------- ------
Alias ? -> Where-Object
Alias % -> ForEach-Object
Alias cd -> Set-Location
Alias chdir -> Set-Location
Alias clc -> Clear-Content
Alias clear -> Clear-Host
Alias clhy -> Clear-History
Alias cli -> Clear-Item
Alias clp -> Clear-ItemProperty
Alias cls -> Clear-Host
Alias clv -> Clear-Variable
Alias cnsn -> Connect-PSSession
Alias copy -> Copy-Item
Alias cpi -> Copy-Item
Alias cvpa -> Convert-Path
Alias dbp -> Disable-PSBreakpoint
Alias del -> Remove-Item
Alias dir -> Get-ChildItem
Alias dnsn -> Disconnect-PSSession
Alias ebp -> Enable-PSBreakpoint
Alias echo -> Write-Output
Alias epal -> Export-Alias
Alias epcsv -> Export-Csv
Alias erase -> Remove-Item
Alias etsn -> Enter-PSSession
Alias exsn -> Exit-PSSession
Alias fc -> Format-Custom
Alias fhx -> Format-Hex 3.1.0.0 Microsoft.PowerShell.Utility
Alias fl -> Format-List
Alias foreach -> ForEach-Object
Alias ft -> Format-Table
Alias fw -> Format-Wide
Alias gal -> Get-Alias
Alias gbp -> Get-PSBreakpoint
Alias gc -> Get-Content
Alias gci -> Get-ChildItem
Alias gcm -> Get-Command
Alias gcs -> Get-PSCallStack
Alias gdr -> Get-PSDrive
Alias ghy -> Get-History
Alias gi -> Get-Item
Alias gin -> Get-ComputerInfo 3.1.0.0 Microsoft.PowerShell.Management
Alias gjb -> Get-Job
Alias gl -> Get-Location
Alias gm -> Get-Member
Alias gmo -> Get-Module
Alias gp -> Get-ItemProperty
Alias gps -> Get-Process
Alias gpv -> Get-ItemPropertyValue
Alias group -> Group-Object
Alias gsn -> Get-PSSession
Alias gsv -> Get-Service
Alias gu -> Get-Unique
Alias gv -> Get-Variable
Alias h -> Get-History
Alias history -> Get-History
Alias icm -> Invoke-Command
Alias iex -> Invoke-Expression
Alias ihy -> Invoke-History
Alias ii -> Invoke-Item
Alias ipal -> Import-Alias
Alias ipcsv -> Import-Csv
Alias ipmo -> Import-Module
Alias kill -> Stop-Process
Alias md -> mkdir
Alias measure -> Measure-Object
Alias mi -> Move-Item
Alias move -> Move-Item
Alias mp -> Move-ItemProperty
Alias nal -> New-Alias
Alias ndr -> New-PSDrive
Alias ni -> New-Item
Alias nmo -> New-Module
Alias nsn -> New-PSSession
Alias nv -> New-Variable
Alias oh -> Out-Host
Alias popd -> Pop-Location
Alias pushd -> Push-Location
Alias pwd -> Get-Location
Alias r -> Invoke-History
Alias rbp -> Remove-PSBreakpoint
Alias rcjb -> Receive-Job
Alias rcsn -> Receive-PSSession
Alias rd -> Remove-Item
Alias rdr -> Remove-PSDrive
Alias ren -> Rename-Item
Alias ri -> Remove-Item
Alias rjb -> Remove-Job
Alias rmo -> Remove-Module
Alias rni -> Rename-Item
Alias rnp -> Rename-ItemProperty
Alias rp -> Remove-ItemProperty
Alias rsn -> Remove-PSSession
Alias rv -> Remove-Variable
Alias rvpa -> Resolve-Path
Alias sajb -> Start-Job
Alias sal -> Set-Alias
Alias saps -> Start-Process
Alias sasv -> Start-Service
Alias sbp -> Set-PSBreakpoint
Alias sc -> Set-Content
Alias select -> Select-Object
Alias set -> Set-Variable
Alias si -> Set-Item
Alias sl -> Set-Location
Alias sls -> Select-String
Alias sp -> Set-ItemProperty
Alias spjb -> Stop-Job
Alias spps -> Stop-Process
Alias spsv -> Stop-Service
Alias sv -> Set-Variable
Alias type -> Get-Content
Alias where -> Where-Object
Alias wjb -> Wait-Job
To view the alias for any particular command, type:
Get-Alias cls
Sample output:
CommandType Name Version Source 
----------- ---- ------- ------
Alias cls -> Clear-Host
Viewing complete list of available commands
To view the list of all available PowerShell commands, run:
Get-Command
Viewing help
Don't know what a particular cmdlet does? No problem, and you don't have to search the Internet. Just run the 'Get-Help' cmdlet along with the PowerShell command. It is similar to the 'man' command in Linux.
For example, to display the help section of a command called “Clear-Host”, run:
Get-Help Clear-Host
Sample output:
NAME
Clear-Host

SYNOPSIS


SYNTAX
Clear-Host [<CommonParameters>]


DESCRIPTION


RELATED LINKS
https://go.microsoft.com/fwlink/?LinkID=225747

REMARKS
To see the examples, type: "get-help Clear-Host -examples".
For more information, type: "get-help Clear-Host -detailed".
For technical information, type: "get-help Clear-Host -full".
For online help, type: "get-help Clear-Host -online"
As you see above, ‘Get-Help’ displays the help section of a specific PowerShell command, like the name of the command, syntax format, aliases, and remarks etc.
To exit from the PowerShell console, just type:
exit
I hope you now have a basic idea of how to install the PowerShell Core alpha on Linux (Ubuntu and CentOS), and of its basic usage.
For further reading:
You might want to download the free resources related to PowerShell and Windows.
That’s all for today. If you find this guide useful, share it on your social networks and support OSTechNix.
Cheers!
Happy weekend!!

Linux dd Command Show Progress Copy Bar With Status

https://www.cyberciti.biz/faq/linux-unix-dd-command-show-progress-while-coping

I am using the dd command for block-level copying and just found out that there’s no built-in way to check the progress. How do I use the Linux or Unix dd command while copying /dev/sda to /dev/sdb and display a progress bar when data goes through a pipe? How do I monitor the progress of dd on Linux?

You need to use the pv command, which allows you to see the progress of data through a pipeline. Install the pv command first. Once installed, type the following commands to see the status bar. Please note that if standard input is not a file and no size was given with the -s option, the progress bar cannot indicate how close to completion the transfer is, so it will just move left and right to indicate that data is moving. It will also show the average MB/s rate:
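When the source is a block device, you can work around that limitation by passing the device size to pv yourself. A minimal sketch, assuming /dev/sda is the source disk:
# blockdev --getsize64 prints the device size in bytes, giving pv a real percentage and ETA
pv -s "$(blockdev --getsize64 /dev/sda)" /dev/sda | dd of=/dev/sdb bs=64M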

Examples: Use the pv command to monitor the progress of dd

WARNING! These examples may crash your computer and may result in data loss if not executed with care.
Copy /dev/sda to /dev/sdb:
pv -tpreb /dev/sda | dd of=/dev/sdb bs=64M
OR
pv -tpreb /dev/sda | dd of=/dev/sdb bs=4096 conv=notrunc,noerror
Sample outputs:

Fig.01: pv and dd in action

You can create and display a progress bar using the dialog command as follows:
(pv -n /dev/sda | dd of=/dev/sdb bs=128M conv=notrunc,noerror) 2>&1 | dialog --gauge "Running dd command (cloning), please wait..." 10 70 0
Sample outputs:

Fig.02: Show the status of the dd command in progress using the pv and dialog commands

Examples: Use the GNU dd command from coreutils version 8.24 or above

Pass the status=progress option to see periodic transfer statistics using the GNU dd command:
# dd if=/dev/sda of=/dev/sdb bs=1024k status=progress
Here is another example from my Mac OS X/MacOS:
$ sudo gdd if=ZeroShell-3.6.0-USB.img of=/dev/disk5 bs=1024k status=progress
Sample outputs:

Fig.03: GNU dd displaying progress
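If your coreutils is older than 8.24 and pv is not available, a commonly used fallback is to signal a running GNU dd so it prints I/O statistics to its stderr (this is GNU dd specific; BSD dd responds to SIGINFO instead). A quick sketch, assuming a dd copy is already running in another terminal:
# re-send USR1 every 10 seconds; dd prints its transfer stats each time
watch -n 10 'kill -USR1 $(pgrep -x dd)'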

How to see CPU temperature on CentOS 7 and RedHat Enterprise Linux 7

https://www.cyberciti.biz/faq/howto-view-cpu-temperature-on-rhel7-centos-linux-7

I am a new sysadmin of CentOS 7 server. How do I get my CPU temperature Information on CentOS Linux 7 or Red Hat Enterprise Linux 7 server? How can I read my CPU temperature on a Laptop powered by CentOS Linux 7 desktop operating system?

You need to install the Linux hardware monitoring tool called lm_sensors. This tool provides some essential command-line utilities for monitoring the health of Linux systems that contain hardware health monitoring chips, including CPU temperature and fan speed sensors.

Find out your os version

$ cat /etc/centos-release
OR
$ cat /etc/redhat-release
Sample outputs:
CentOS Linux release 7.2.1511 (Core) 

Install lm_sensors package on CentOS/RHEL 7

Type the following yum command:
$ sudo yum install lm_sensors
Sample outputs:

Fig.01: Installing lm_sensors on CentOS 7/RHEL 7

How to configure lm_sensors

Type the following command and say YES to all prompts:
$ sudo sensors-detect
Sample outputs:
# sensors-detect revision 6170 (2013-05-20 21:25:22 +0200)
# System: ADI Engineering RCC-VE [1.0]

This program will help you determine which kernel modules you need
to load to use lm_sensors most effectively. It is generally safe
and recommended to accept the default answers to all questions,
unless you know what you're doing.

Some south bridges, CPUs or memory controllers contain embedded sensors.
Do you want to scan for them? This is totally safe. (YES/no): YES
Silicon Integrated Systems SIS5595... No
VIA VT82C686 Integrated Sensors... No
VIA VT8231 Integrated Sensors... No
AMD K8 thermal sensors... No
AMD Family 10h thermal sensors... No
AMD Family 11h thermal sensors... No
AMD Family 12h and 14h thermal sensors... No
AMD Family 15h thermal sensors... No
AMD Family 15h power sensors... No
AMD Family 16h power sensors... No
Intel digital thermal sensor... Success!
(driver `coretemp')
Intel AMB FB-DIMM thermal sensor... No
VIA C7 thermal sensor... No
VIA Nano thermal sensor... No

Some Super I/O chips contain embedded sensors. We have to write to
standard I/O ports to probe them. This is usually safe.
Do you want to scan for Super I/O sensors? (YES/no): YES
Probing for `Maxim MAX6639'... No
Probing for `Analog Devices ADM1029'... No
Probing for `ITE IT8712F'... No
Probing for `Fintek custom power control IC'... No
Probing for `Winbond W83791SD'... No
Client found at address 0x50
Probing for `Analog Devices ADM1033'... No
Probing for `Analog Devices ADM1034'... No
Probing for `SPD EEPROM'... Yes
(confidence 8, not a hardware monitoring chip)
Probing for `EDID EEPROM'... No

Now follows a summary of the probes I have just done.
Just press ENTER to continue:

Driver `coretemp':
* Chip `Intel digital thermal sensor' (confidence: 9)

Do you want to overwrite /etc/sysconfig/lm_sensors? (YES/no): YES
Unloading i2c-dev... OK

How to get CPU temperature information on a CentOS/RHEL 7 Linux

Type the following command:
$ sensors
Sample outputs:
coretemp-isa-0000
Adapter: ISA adapter
Core 0: +48.0°C (high = +98.0°C, crit = +98.0°C)
Core 1: +48.0°C (high = +98.0°C, crit = +98.0°C)
Core 2: +48.0°C (high = +98.0°C, crit = +98.0°C)
Core 3: +47.0°C (high = +98.0°C, crit = +98.0°C)
You can run the following watch command to see data on screen:
$ watch sensors
Sample output:

Gif.01: Sensors command in action
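If you only need the per-core readings, for example in a monitoring script, you can filter the output. A small sketch, assuming the coretemp driver labels its lines ‘Core N’ as in the output above:
$ sensors | grep '^Core'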

How do I get my hard drive temperature information on a CentOS/RHEL 7 Linux?

You can always install the hddtemp command to read hard disk temperature on a CentOS/RHEL 7:
$ hddtemp
Sample outputs:
/dev/sda: Samsung SSD 850 EVO mSATA 500GB: 45°C
/dev/sdb: WDC WDS500G1B0A-00H9H0: 42°C
/dev/sdc: WDC WDS500G1B0A-00H9H0: 42°C
/dev/sdd: WDC WDS500G1B0A-00H9H0: 40°C
/dev/sde: WDC WDS500G1B0A-00H9H0: 39°C
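Note that hddtemp is not installed out of the box; on CentOS/RHEL 7 it typically comes from the EPEL repository, and reading drive temperatures usually requires root. A minimal sketch, assuming EPEL carries the package and your disks are /dev/sda through /dev/sde:
$ sudo yum install epel-release
$ sudo yum install hddtemp
$ sudo hddtemp /dev/sd[a-e]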

Performance profiling with perf

https://fedoramagazine.org/performance-profiling-perf

Performance plays an important role in any computer program. It’s one of the things that makes users stay with your software. Imagine if your software took minutes to start, even on a powerful machine, or showed visible performance drops while doing some important work. Both of these cases would reflect badly on your application. The operating system kernel is even more performance critical, because if it lags, the whole system lags. It’s the developer’s responsibility to write code that provides the highest possible performance.
To write programs that provide good performance, we should know which part of our program is becoming a bottleneck. That way, we can focus our efforts on optimizing that region of our code. There are a lot of tools out there to help you as a developer profile your program, and better understand which part of your code needs attention. This article discusses one of the tools to help you profile your program on Linux.

Introducing perf

The perf command in Linux gives you access to various tools integrated into the Linux kernel. These tools can help you collect and analyze the performance data about your program or system. The perf profiler is fast, lightweight, and precise.
Using the perf command requires the perf package to be installed on your distro. You can install it with this command:
sudo dnf install perf
Once you have the required package installed, fire up your terminal and execute the perf command. You’ll see output similar to below:
perf command
The perf command gives a lot of options you can use to profile your code. Let’s go through some of the commands which can come to our rescue frequently.

Listing the events

perf list
The list command shows the list of events which can be traced by the perf command. The output will look something like below:
perf list
There are a lot of events that can be traced via the perf command. Broadly, these events can be software events such as context switches or page faults, or hardware events that originate from the processor itself, like L1 cache misses, or number of clock cycles.
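The listing is long; perf list also accepts a category argument to narrow it down, for example:
# show only the hardware counter events
perf list hw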

Counting events with perf stat

The perf stat command can be used to count the events related to a particular process or command. Let’s look at a sample of this usage by running the following command:
perf stat -B dd if=/dev/urandom of=/dev/null count=50k
The output of the command lists the counters associated with different types of events that occurred during the execution of the above command.
To get the system wide statistics for all the cores on your system, run the following command:
perf stat -a
The command collects and reports event statistics until you press Ctrl+C.
The stat command gives you the option to select only specific events for reporting. To select the events, run the stat command with the -e option followed by a comma-separated list of events to report. For example:
perf stat -e cycles,page-faults -a
This command provides statistics about the events named cycles and page-faults for all the cores on the system.
To get the stats for a specific process, run the following command, where PID is the process ID of the process for which you want performance statistics:
perf stat -p PID
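When attaching to a running process, a handy idiom is to bound the measurement window with a dummy command instead of pressing Ctrl+C. A sketch, reusing PID 2750 from the Firefox example later in this article:
# count cycles and instructions in PID 2750 for ten seconds
perf stat -e cycles,instructions -p 2750 sleep 10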

Sampling with perf record

The perf record command is used to sample the data of events to a file. The command operates in a per-thread mode and by default records the data for the cycles event. For example, to collect the system wide data for all the CPUs, execute the following command:
perf record -a
perf record collects samples until you press Ctrl+C. The data is stored in a file named perf.data by default. To store the data in some other file, pass the file name to the command using the -o option. To see the recorded data, run the following command:
perf report
This command produces output similar to the following:
perf record

The report contains 4 columns, which have their own specific meaning:
  1. Overhead: the percentage of overall samples collected in the corresponding function
  2. Command: the command to which the samples belong
  3. Shared object: the name of the image from where the samples came
  4. Symbol: the symbol name which constitutes the sample, and the privilege level at which the sample was taken. There are 5 privilege levels: [.] user level, [k] kernel level, [g] guest kernel level (virtualization), [u] guest OS userspace, and [H] hypervisor.
The command helps you display the most costly functions. You can then focus on these functions to optimize them further.
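You can also profile a single program instead of the whole system by passing the command directly to perf record. A sketch, where ./myprog is a hypothetical stand-in for your own binary:
# record with call graphs (-g) into a custom file, then browse that file
perf record -g -o myprog.data ./myprog
perf report -i myprog.data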

Finding code to optimize

For example, let’s examine a firefox process, and sample the data for it. In this example, the firefox process is running as PID 2750.
firefox record
Executing the perf report command produces a screen like this, listing the various symbols in decreasing order of their overhead:
perf report firefox
With this data, we can identify the functions that generate the highest overhead in our code, and start optimizing them.
This has been a brief introduction to using perf to profile programs and systems. The perf command has lots of other options that give you the power to run benchmarks on the system as well as annotate code. For further information on the perf command, visit the Perf wiki.

DNF (Fork of YUM) Command To Manage Packages on Fedora System

https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system

Many of us work as Linux server/system administrators but don’t know about the DNF command and its features. In this article we are going to explain DNF and its usage. DNF stands for Dandified Yum. It is the next generation of the yum package manager (a fork of yum) and uses the hawkey/libsolv library as its backend. Aleš Kozumplík started working on DNF in Fedora 18, and it finally shipped as the default in Fedora 22.
Now, we are going to play with our Fedora 22 box to cover the most-used DNF commands with examples.

DNF Installation in RHEL/CentOS/Scientific Linux

By default, DNF is enabled on Fedora systems. For RHEL/CentOS/Scientific Linux, use the below commands to install DNF.
# enable epel repository #
root@2daygeek [~]# yum install epel-release
or
root@2daygeek [~]# yum -y install epel-release

# install dnf #
root@2daygeek [~]# yum install dnf

1) Common syntax/file location for DNF

See below for the common syntax and file locations of DNF.
# Common syntax for DNF #
root@2daygeek [~]# dnf [options] [commands] [package name]

# most popular dnf commands #
root@2daygeek [~]# [autoremove check-update clean distro-sync downgrade group help history info install list makecache provides reinstall remove repolist repository-packages search updateinfo upgrade upgrade-to]

# dnf config file location #
root@2daygeek [~]# /etc/dnf/dnf.conf

# dnf cached file location #
root@2daygeek [~]# /var/cache/dnf

2) Install a Package or packages

Use the below command to install any package or packages on your system. In this case I’m going to install apache (httpd), MariaDB-server, and MariaDB-client. DNF asks for confirmation before installing each package; if you want to skip the confirmation, add the -y option to dnf.
# Install a single package #
root@2daygeek [~]# dnf install httpd
or
root@2daygeek [~]# dnf -y install httpd

# Install more than one packages #
root@2daygeek [~]# dnf install MariaDB-server MariaDB-client
or
root@2daygeek [~]# dnf -y install MariaDB-server MariaDB-client

Output:
root@2daygeek [~]# dnf install httpd
Last metadata expiration check performed 0:31:59 ago on Tue Jun 9 22:52:44 2015.
Dependencies resolved.
================================================================================
Package Arch Version Repository Size
================================================================================
Installing:
httpd x86_64 2.4.12-1.fc22 fedora 1.2 M

Transaction Summary
================================================================================
Install 1 Package

Total download size: 1.2 M
Installed size: 3.8 M
Is this ok [y/N]: y
Downloading Packages:
httpd-2.4.12-1.fc22.x86_64.rpm 46 kB/s | 1.2 MB 00:27
--------------------------------------------------------------------------------
Total 35 kB/s | 1.2 MB 00:35
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
Installing : httpd-2.4.12-1.fc22.x86_64 1/1
Verifying : httpd-2.4.12-1.fc22.x86_64 1/1

Installed:
httpd.x86_64 2.4.12-1.fc22

Complete!

3) Remove a Package or packages

Use the below command to remove/erase any package or packages on your system. In this case I’m going to remove apache, MariaDB-server, MariaDB-client.
# Remove a single package #
root@2daygeek [~]# dnf remove httpd
or
root@2daygeek [~]# dnf erase httpd

# Remove more than one packages #
root@2daygeek [~]# dnf remove MariaDB-server MariaDB-client
or
root@2daygeek [~]# dnf erase MariaDB-server MariaDB-client

Output:
root@2daygeek [~]# dnf remove httpd
Dependencies resolved.
================================================================================
Package Arch Version Repository Size
================================================================================
Removing:
gnome-user-share x86_64 3.14.2-1.fc22 @System 467 k
httpd x86_64 2.4.12-1.fc22 @System 3.8 M
mod_dnssd x86_64 0.6-12.fc22 @System 53 k
php x86_64 5.6.9-1.fc22 @System 8.6 M

Transaction Summary
================================================================================
Remove 4 Packages

Installed size: 13 M
Is this ok [y/N]: y
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
Erasing : gnome-user-share-3.14.2-1.fc22.x86_64 1/4
Erasing : mod_dnssd-0.6-12.fc22.x86_64 2/4
Erasing : php-5.6.9-1.fc22.x86_64 3/4
Erasing : httpd-2.4.12-1.fc22.x86_64 4/4
Verifying : httpd-2.4.12-1.fc22.x86_64 1/4
Verifying : php-5.6.9-1.fc22.x86_64 2/4
Verifying : mod_dnssd-0.6-12.fc22.x86_64 3/4
Verifying : gnome-user-share-3.14.2-1.fc22.x86_64 4/4

Removed:
gnome-user-share.x86_64 3.14.2-1.fc22 httpd.x86_64 2.4.12-1.fc22
mod_dnssd.x86_64 0.6-12.fc22 php.x86_64 5.6.9-1.fc22

Complete!

4) Update a Package or packages

Use the below command to update any package or packages on your system. In this case I’m going to update openssh, MariaDB-server, MariaDB-client to latest version.
# update single package #
root@2daygeek [~]# dnf update openssh

# update more than one packages #
root@2daygeek [~]# dnf update MariaDB-server MariaDB-client

Output:
root@2daygeek [~]# dnf update openssh
Last metadata expiration check performed 0:18:34 ago on Sat Jun 13 11:48:58 2015.
Dependencies resolved.
================================================================================
Package Arch Version Repository Size
================================================================================
Upgrading:
openssh x86_64 6.8p1-8.fc22 updates 466 k
openssh-askpass x86_64 6.8p1-8.fc22 updates 76 k
openssh-clients x86_64 6.8p1-8.fc22 updates 665 k
openssh-server x86_64 6.8p1-8.fc22 updates 463 k

Transaction Summary
================================================================================
Upgrade 4 Packages

Total download size: 1.6 M
Is this ok [y/N]:y

5) List all repository packages

Use the below command to list all packages available in all repositories. I enabled the EPEL repository earlier, so its packages show up here as well. Note that dnf list and dnf list all give the same result, while dnf list available lists only packages that are not currently installed.
# list all repository packages #
root@2daygeek [~]# dnf list
or
root@2daygeek [~]# dnf list all
or
root@2daygeek [~]# dnf list available

Output:
root@2daygeek [~]# dnf list all | more
Last metadata expiration check performed 0:37:47 ago on Tue Jun 9 22:52:44 2015
.
Installed Packages
GConf2.x86_64 3.2.6-11.fc22 @System
LibRaw.x86_64 0.16.2-1.fc22 @System
ModemManager.x86_64 1.4.6-1.fc22 @System
ModemManager-glib.x86_64 1.4.6-1.fc22 @System
NetworkManager.x86_64 1:1.0.2-1.fc22 @System
NetworkManager-adsl.x86_64 1:1.0.2-1.fc22 @System
NetworkManager-bluetooth.x86_64 1:1.0.2-1.fc22 @System
NetworkManager-config-connectivity-fedora.x86_64
1:1.0.2-1.fc22 @System
NetworkManager-glib.x86_64 1:1.0.2-1.fc22 @System
NetworkManager-libnm.x86_64 1:1.0.2-1.fc22 @System
NetworkManager-openconnect.x86_64 1.0.2-1.fc22 @System
NetworkManager-openvpn.x86_64 1:1.0.2-2.fc22 @System
NetworkManager-openvpn-gnome.x86_64 1:1.0.2-2.fc22 @System
NetworkManager-pptp.x86_64 1:1.1.0-1.20150428git695d4f2.fc22
@System
NetworkManager-pptp-gnome.x86_64 1:1.1.0-1.20150428git695d4f2.fc22
@System
NetworkManager-team.x86_64 1:1.0.2-1.fc22 @System
NetworkManager-vpnc.x86_64 1:1.0.2-1.fc22 @System
--More--

6) Check updates

Use the below command to check for available package updates on your system. In this case, a number of updates are available. Both commands give the same result.
# Checking available package updates #
root@2daygeek [~]# dnf list updates
or
root@2daygeek [~]# dnf check-update
Fedora 22 - x86_64 - Updates 46 kB/s | 7.3 MB 02:44
Last metadata expiration check performed 0:01:38 ago on Sat Jun 13 11:48:58 2015.

autocorr-en.noarch 1:4.4.3.2-6.fc22 updates
createrepo_c.x86_64 0.9.0-1.fc22 updates
createrepo_c-libs.x86_64 0.9.0-1.fc22 updates
evolution.x86_64 3.16.3-2.fc22 updates
evolution-data-server.x86_64 3.16.3-1.fc22 updates
evolution-ews.x86_64 3.16.3-1.fc22 updates
evolution-help.noarch 3.16.3-2.fc22 updates
firefox.x86_64 38.0.5-2.fc22 updates
git.x86_64 2.4.3-1.fc22 updates
gnome-disk-utility.x86_64 3.16.2-2.fc22 updates
gnome-software.x86_64 3.16.3-1.fc22 updates
highlight.x86_64 3.22-1.fc22 updates
libcacard.x86_64 2:2.3.0-5.fc22 updates
libmwaw.x86_64 0.3.5-1.fc22 updates
libpurple.x86_64 2.10.11-12.fc22 updates
libreoffice-calc.x86_64 1:4.4.3.2-6.fc22 updates
libreoffice-core.x86_64 1:4.4.3.2-6.fc22 updates
libreoffice-draw.x86_64 1:4.4.3.2-6.fc22 updates
.
.
stunnel.x86_64 5.16-1.fc22 updates
unbound-libs.x86_64 1.5.3-4.fc22 updates

7) List installed packages

Use the below command to print installed packages on your Linux system.
# List installed packages #
root@2daygeek [~]# dnf list installed
Last metadata expiration check performed 0:41:20 ago on Tue Jun 9 22:52:44 2015.
Installed Packages
GConf2.x86_64 3.2.6-11.fc22 @System
LibRaw.x86_64 0.16.2-1.fc22 @System
ModemManager.x86_64 1.4.6-1.fc22 @System
ModemManager-glib.x86_64 1.4.6-1.fc22 @System
NetworkManager.x86_64 1:1.0.2-1.fc22 @System
NetworkManager-adsl.x86_64 1:1.0.2-1.fc22 @System
NetworkManager-bluetooth.x86_64 1:1.0.2-1.fc22 @System
NetworkManager-config-connectivity-fedora.x86_64
1:1.0.2-1.fc22 @System
NetworkManager-glib.x86_64 1:1.0.2-1.fc22 @System
NetworkManager-libnm.x86_64 1:1.0.2-1.fc22 @System
NetworkManager-openconnect.x86_64 1.0.2-1.fc22 @System
NetworkManager-openvpn.x86_64 1:1.0.2-2.fc22 @System
NetworkManager-openvpn-gnome.x86_64 1:1.0.2-2.fc22 @System
NetworkManager-pptp.x86_64 1:1.1.0-1.20150428git695d4f2.fc22 @System
NetworkManager-pptp-gnome.x86_64 1:1.1.0-1.20150428git695d4f2.fc22 @System
NetworkManager-team.x86_64 1:1.0.2-1.fc22 @System
NetworkManager-vpnc.x86_64 1:1.0.2-1.fc22 @System
--More--

8) Search a package

If you don’t know the exact package name you want to install, use the search option; it returns packages matching the given string. In this case I’m going to search for ftpd.
# Search a package #
root@2daygeek [~]# dnf search ftpd
Last metadata expiration check performed 0:42:28 ago on Tue Jun 9 22:52:44 2015.
============================== N/S Matched: ftpd ===============================
proftpd-utils.x86_64 : ProFTPD - Additional utilities
pure-ftpd-selinux.x86_64 : SELinux support for Pure-FTPD
proftpd-devel.i686 : ProFTPD - Tools and header files for developers
proftpd-devel.x86_64 : ProFTPD - Tools and header files for developers
proftpd-ldap.x86_64 : Module to add LDAP support to the ProFTPD FTP server
proftpd-mysql.x86_64 : Module to add MySQL support to the ProFTPD FTP server
proftpd-postgresql.x86_64 : Module to add PostgreSQL support to the ProFTPD FTP
: server
vsftpd.x86_64 : Very Secure Ftp Daemon
proftpd.x86_64 : Flexible, stable and highly-configurable FTP server
owfs-ftpd.x86_64 : FTP daemon providing access to 1-Wire networks
perl-ftpd.noarch : Secure, extensible and configurable Perl FTP server
pure-ftpd.x86_64 : Lightweight, fast and secure FTP server
pyftpdlib.noarch : Python FTP server library
nordugrid-arc-gridftpd.x86_64 : ARC gridftp server
The above output shows all packages matching the string ‘ftpd’.

9) Check package information

If you want to know detailed package information before proceeding with the installation, use the below command. It gives full information about the package, such as version, size, repo name, etc.
# Check package information #
root@2daygeek [~]# dnf info httpd
Last metadata expiration check performed 0:43:39 ago on Tue Jun 9 22:52:44 2015.
Installed Packages
Name : httpd
Arch : x86_64
Epoch : 0
Version : 2.4.12
Release : 1.fc22
Size : 3.8 M
Repo : @System
From repo : fedora
Summary : Apache HTTP Server
URL : http://httpd.apache.org/
License : ASL 2.0
Description : The Apache HTTP Server is a powerful, efficient, and extensible
: web server.

Available Packages
Name : httpd
Arch : i686
Epoch : 0
Version : 2.4.12
Release : 1.fc22
Size : 1.2 M
Repo : fedora
Summary : Apache HTTP Server
URL : http://httpd.apache.org/
License : ASL 2.0
Description : The Apache HTTP Server is a powerful, efficient, and extensible
: web server.

10) Check whether a package is installed or not

Use the below command to check whether a package is installed or not on your system. In this case I’m going to check the httpd package.
# check whether the package installed on system #
root@2daygeek [~]# dnf list installed httpd
Last metadata expiration check performed 0:44:26 ago on Tue Jun 9 22:52:44 2015.
Installed Packages
httpd.x86_64 2.4.12-1.fc22 @System
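As a quick sanity check you can also query the RPM database directly; this works the same on both yum- and dnf-based systems:
# query the rpm database #
root@2daygeek [~]# rpm -q httpd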

11) dnf provides / whatprovides function

This command finds which package provides a requested file or dependency.
# provides / whatprovides function #
root@2daygeek [~]# dnf provides /etc/passwd
Last metadata expiration check performed 0:47:38 ago on Tue Jun 9 22:52:44 2015.
setup-2.9.6-1.fc22.noarch : A set of system configuration and setup files
Repo : @System

setup-2.9.6-1.fc22.noarch : A set of system configuration and setup files
Repo : fedora
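provides also accepts glob patterns, which is handy when you only know part of a file path. A hypothetical lookup for whichever package ships the htpasswd utility:
# search by file glob #
root@2daygeek [~]# dnf provides '*/htpasswd'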

12) Purpose of makecache

The makecache command downloads and makes usable all the metadata for the currently enabled repositories on your system.
# Purpose of makecache #
root@2daygeek [~]# dnf makecache
Metadata cache created.

13) Print dnf repositories

Use the below command to print the list of repositories available on your system.
# To print enabled repository #
root@2daygeek [~]# dnf repolist
Last metadata expiration check performed 0:50:02 ago on Tue Jun 9 22:52:44 2015.
repo id repo name status
*fedora Fedora 22 - x86_64 44,762
*updates Fedora 22 - x86_64 - Updates 3,677

# To print Disabled repository #
root@2daygeek [~]# dnf repolist disabled
Last metadata expiration check performed 0:52:19 ago on Tue Jun 9 22:52:44 2015.
repo id repo name
fedora-debuginfo
fedora-source
updates-debuginfo
updates-source
updates-testing
updates-testing-debuginfo
updates-testing-source

# To print all repository #
root@2daygeek [~]# dnf repolist all
Last metadata expiration check performed 0:53:17 ago on Tue Jun 9 22:52:44 2015.
repo id repo name status
*fedora Fedora 22 - x86_64 enabled: 44,762
fedora-debuginfo Fedora 22 - x86_64 - Debug disabled
fedora-source Fedora 22 - Source disabled
*updates Fedora 22 - x86_64 - Updates enabled: 3,677
updates-debuginfo Fedora 22 - x86_64 - Updates - Debug disabled
updates-source Fedora 22 - Updates Source disabled
updates-testing Fedora 22 - x86_64 - Test Updates disabled
updates-testing-debuginfo Fedora 22 - x86_64 - Test Updates Debu disabled
updates-testing-source Fedora 22 - Test Updates Source disabled

14) Do full system update

Use the below command to keep your system up to date. It will install all the available updates.
# update all system packages #
root@2daygeek [~]# dnf update
or
# update all system packages #
root@2daygeek [~]# dnf upgrade
Last metadata expiration check performed 1:09:27 ago on Sat Jun 13 11:48:58 2015.
Dependencies resolved.
================================================================================
Package Arch Version Repository
Size
================================================================================
Installing:
git-core x86_64 2.4.3-1.fc22 updates 5.0 M
Upgrading:
autocorr-en noarch 1:4.4.3.2-6.fc22 updates 174 k
createrepo_c x86_64 0.9.0-1.fc22 updates 72 k
createrepo_c-libs x86_64 0.9.0-1.fc22 updates 89 k
evolution x86_64 3.16.3-2.fc22 updates 8.5 M
evolution-data-server x86_64 3.16.3-1.fc22 updates 3.0 M
evolution-ews x86_64 3.16.3-1.fc22 updates 490 k
evolution-help noarch 3.16.3-2.fc22 updates 2.1 M
firefox x86_64 38.0.5-2.fc22 updates 69 M
git x86_64 2.4.3-1.fc22 updates 4.5 M
gnome-disk-utility x86_64 3.16.2-2.fc22 updates 984 k
gnome-software x86_64 3.16.3-1.fc22 updates 2.0 M
highlight x86_64 3.22-1.fc22 updates 669 k
libcacard x86_64 2:2.3.0-5.fc22 updates 73 k
libmwaw x86_64 0.3.5-1.fc22 updates 2.3 M
Transaction Summary
================================================================================
Install 1 Package
Upgrade 68 Packages

Total download size: 258 M
Is this ok [y/N]: y
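If you suspect the cached metadata is stale, you can force a refresh as part of the upgrade; --refresh is a standard dnf option that marks the metadata as expired first:
# refresh metadata, then upgrade #
root@2daygeek [~]# dnf upgrade --refresh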

15) Purpose of autoremove

Removes all “leaf” packages from the system that were originally installed as dependencies of user-installed packages but which are no longer required by any such package.
# Purpose of autoremove #
root@2daygeek [~]# dnf autoremove
Last metadata expiration check performed 0:55:06 ago on Tue Jun 9 22:52:44 2015.
Dependencies resolved.
Nothing to do.
Complete!

16) dnf version checking

Use the below command to check the DNF version installed on your system.
# dnf version checking #
root@2daygeek [~]# dnf --version
1.0.0
Installed: dnf-0:1.0.0-1.fc22.noarch at 2015-06-03 17:40
Built : Fedora Project at 2015-05-02 13:00

Installed: rpm-0:4.12.0.1-9.fc22.x86_64 at 2015-06-03 17:38
Built : Fedora Project at 2015-04-15 09:21

17) Distro-sync command

Synchronize installed packages to the latest stable/available versions from any enabled repository. If no package is given, all installed packages are considered.
# Distro-sync command #
root@2daygeek [~]# dnf distro-sync
Last metadata expiration check performed 0:07:57 ago on Sat Jun 13 11:48:58 2015.
Dependencies resolved.
================================================================================
Package Arch Version Repository
Size
================================================================================
Installing:
git-core x86_64 2.4.3-1.fc22 updates 5.0 M
Upgrading:
autocorr-en noarch 1:4.4.3.2-6.fc22 updates 174 k
createrepo_c x86_64 0.9.0-1.fc22 updates 72 k
createrepo_c-libs x86_64 0.9.0-1.fc22 updates 89 k
evolution x86_64 3.16.3-2.fc22 updates 8.5 M
evolution-data-server x86_64 3.16.3-1.fc22 updates 3.0 M
evolution-ews x86_64 3.16.3-1.fc22 updates 490 k
evolution-help noarch 3.16.3-2.fc22 updates 2.1 M
firefox x86_64 38.0.5-2.fc22 updates 69 M
git x86_64 2.4.3-1.fc22 updates 4.5 M
gnome-disk-utility x86_64 3.16.2-2.fc22 updates 984 k
gnome-software x86_64 3.16.3-1.fc22 updates 2.0 M
highlight x86_64 3.22-1.fc22 updates 669 k
libcacard x86_64 2:2.3.0-5.fc22 updates 73 k
libmwaw x86_64 0.3.5-1.fc22 updates 2.3 M
libpurple x86_64 2.10.11-12.fc22 updates 5.8 M

Transaction Summary
================================================================================
Install 1 Package
Upgrade 68 Packages

Total download size: 258 M
Is this ok [y/N]: y

18) Downgrade a Package

The downgrade command is used to roll back a package to its previous version.
# Downgrade a Package #
root@2daygeek [~]# dnf downgrade firefox
Last metadata expiration check performed 0:25:11 ago on Sat Jun 13 11:48:58 2015.
Dependencies resolved.
================================================================================
Package Arch Version Repository Size
================================================================================
Downgrading:
firefox x86_64 38.0.1-1.fc22 fedora 68 M

Transaction Summary
================================================================================
Downgrade 1 Package

Total download size: 68 M
Is this ok [y/N]: y

19) Reinstall a Package

Used to reinstall an already installed package. In my view it’s rarely necessary, but it can be handy when a package’s files have been damaged.
# Reinstall a Package #
root@2daygeek [~]# dnf reinstall httpd
Last metadata expiration check performed 1:06:15 ago on Tue Jun 9 22:52:44 2015.
Dependencies resolved.
================================================================================
Package Arch Version Repository Size
================================================================================
Reinstalling:
httpd x86_64 2.4.12-1.fc22 fedora 1.2 M

Transaction Summary
================================================================================

Total download size: 1.2 M
Is this ok [y/N]: y
Downloading Packages:
httpd-2.4.12-1.fc22.x86_64.rpm 10 kB/s | 1.2 MB 02:03
--------------------------------------------------------------------------------
Total 9.4 kB/s | 1.2 MB 02:14
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
Reinstalling: httpd-2.4.12-1.fc22.x86_64 1/2
Erasing : httpd-2.4.12-1.fc22.x86_64 2/2
Verifying : httpd-2.4.12-1.fc22.x86_64 1/2
Verifying : httpd-2.4.12-1.fc22.x86_64 2/2

Reinstalled:
httpd.x86_64 2.4.12-1.fc22

Complete!

20) Upgrade-to Command

The upgrade-to command updates the packages to the specified version.
# Upgrade-to Command #
root@2daygeek [~]# dnf upgrade-to openssh-6.8p1-8.fc22
Last metadata expiration check performed 0:23:17 ago on Sat Jun 13 11:48:58 2015.
Dependencies resolved.
================================================================================
Package Arch Version Repository Size
================================================================================
Upgrading:
openssh x86_64 6.8p1-8.fc22 updates 466 k
openssh-askpass x86_64 6.8p1-8.fc22 updates 76 k
openssh-clients x86_64 6.8p1-8.fc22 updates 665 k
openssh-server x86_64 6.8p1-8.fc22 updates 463 k

Transaction Summary
================================================================================
Upgrade 4 Packages

Total download size: 1.6 M
Is this ok [y/N]: y

21) List available group packages

Use the below command to list the package groups available on your system. A group bundles a number of packages under a single name, so you can install them all in one shot instead of installing each piece of software separately.
# group package list #
root@2daygeek [~]# dnf grouplist
Last metadata expiration check performed 0:31:39 ago on Sat Jun 13 11:48:58 2015.
Available environment groups:
Minimal Install
Fedora Server
Fedora Workstation
Fedora Cloud Server
KDE Plasma Workspaces
Xfce Desktop
LXDE Desktop
LXQt Desktop
Cinnamon Desktop
MATE Desktop
Sugar Desktop Environment
Development and Creative Workstation
Web Server
Infrastructure Server
Basic Desktop
Available groups:
3D Printing
Administration Tools
Audio Production
Authoring and Publishing
Books and Guides
C Development Tools and Libraries
Cloud Infrastructure
Cloud Management Tools
Container Management
D Development Tools and Libraries
Design Suite
Development Tools
Domain Membership
Fedora Eclipse
Editors
Educational Software
Electronic Lab
Engineering and Scientific
FreeIPA Server
Games and Entertainment
Headless Management
LibreOffice
MATE Applications
MATE Compiz
Medical Applications
Milkymist
Network Servers
Office/Productivity
Robotics
RPM Development Tools
Security Lab
Sound and Video
System Tools
Text-based Internet
Window Managers

22) Install the group of packages

To install a group of packages, use groupinstall instead of install. In this case I’m going to install ‘Editors’, which bundles lots of editor-related packages and their supporting packages.
# Install the group of packages #
root@2daygeek [~]# dnf groupinstall 'Editors'
Last metadata expiration check performed 0:43:39 ago on Sat Jun 13 11:48:58 2015.
Dependencies resolved.
================================================================================
Group Packages
================================================================================
Marking installed:
Editors xemacs-packages-extra nedit zile
xmlcopyeditor emacs-bbdb geany
emacs-ess emacs jed
psgml code-editor emacs-vm
emacs-muse gobby leafpad
joe cssed vim-X11
vim-enhanced xemacs-xft xemacs-ess
xemacs xemacs-packages-base poedit
emacs-auctex xemacs-muse
Is this ok [y/N]: y
Complete!

23) Update the group packages

Use the below command to update the group of packages to latest version.
# update the group packages #
root@2daygeek [~]# dnf groupupdate 'Editors'
Last metadata expiration check performed 0:08:36 ago on Mon Jun 15 12:15:59 2015.
Group 'Editors' is already installed, skipping.
Dependencies resolved.
Is this ok [y/N]: y
Complete!

24) Remove the group packages

Use the below command to remove a group of packages.
# Remove the group packages #
root@2daygeek [~]# dnf groupremove 'Editors'
or
# Remove the group packages #
root@2daygeek [~]# dnf grouperase 'Editors'
Warning: Group 'Editors' does not exist.
Dependencies resolved.
Is this ok [y/N]: y
Complete!

25) Install a package from particular Repository

Use the below command to install a package from a particular repository. In this case I’m going to install the htop package from the EPEL repository.
# Install a package from particular Repository #
root@2daygeek [~]# dnf --enablerepo=epel install htop
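If you want to be strict that the package comes only from that repository, you can disable all others for a single transaction; both options are standard dnf flags:
# install htop using only the epel repository #
root@2daygeek [~]# dnf --disablerepo='*' --enablerepo=epel install htop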

26) Cleaning dnf cache

Whenever you install packages, dnf creates a cache of metadata and packages. This cache can take up a lot of space. The dnf clean command allows you to clean up these files; all the files dnf clean acts on are normally stored in /var/cache/dnf.
# cleans up any cached db packages #
root@2daygeek [~]# dnf clean dbcache

# cleans up any cached expire-cache packages #
root@2daygeek [~]# dnf clean expire-cache

# cleans up any cached xml metadata #
root@2daygeek [~]# dnf clean metadata

# cleans up any cached packages #
root@2daygeek [~]# dnf clean packages

# cleans up any cached plugins #
root@2daygeek [~]# dnf clean plugins

# Clean all cached files #
root@2daygeek [~]# dnf clean all

27) List Command

The list command can help us print output based on our criteria.
# Lists all packages #
root@2daygeek [~]# dnf list all
or
root@2daygeek [~]# dnf list

# Lists installed packages #
root@2daygeek [~]# dnf list installed

# Lists available packages #
root@2daygeek [~]# dnf list available

# Lists extras, that is packages installed on the system that are not available in any known repository #
root@2daygeek [~]# dnf list extras

# List the packages installed on the system that are obsoleted by packages in any known repository #
root@2daygeek [~]# dnf list obsoletes

# List packages recently added into the repositories #
root@2daygeek [~]# dnf list recent

# List upgrades available for the installed packages #
root@2daygeek [~]# dnf list upgrades

# List packages which will be removed by dnf autoremove command #
root@2daygeek [~]# dnf list autoremove

28) Print dnf history

Use the below command to Print dnf history.
# Print dnf history #
root@2daygeek [~]# dnf history
Last metadata expiration check performed 0:30:01 ago on Mon Jun 15 12:15:59 2015.
ID | Command line | Date and time | Action(s) | Altered
-------------------------------------------------------------------------------
6 | install dnf-automatic | 2015-06-13 13:05 | Install | 1 <
5 | reinstall httpd | 2015-06-10 00:01 | Reinstall | 1 >
4 | install vsftpd | 2015-06-09 23:39 | Install | 1
3 | install httpd | 2015-06-09 23:25 | Install | 1
2 | remove httpd | 2015-06-09 23:22 | Erase | 4
1 | update | 2015-06-09 23:14 | I, U | 81
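Each transaction ID can be inspected or rolled back. A quick sketch using ID 6 from the listing above; undo only works if the involved package versions are still available in the repositories:
# inspect and roll back a transaction #
root@2daygeek [~]# dnf history info 6
root@2daygeek [~]# dnf history undo 6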

29) Enable DNF Automatic

If you want to automate updates on your system, you can do so by installing the dnf-automatic package. After installing the package, edit the /etc/dnf/automatic.conf file and set apply_updates = yes instead of apply_updates = no.
# Install DNF Automatic package #
root@2daygeek [~]# dnf install dnf-automatic

# Enable DNF Automatic #
root@2daygeek [~]# systemctl enable dnf-automatic.timer
Created symlink from /etc/systemd/system/basic.target.wants/dnf-automatic.timer to /usr/lib/systemd/system/dnf-automatic.timer.

# Start DNF Automatic service #
root@2daygeek [~]# systemctl start dnf-automatic.timer
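If you prefer making the config change from the shell, a one-liner like the following works; it assumes the stock file still contains the default apply_updates = no line:
# flip apply_updates from no to yes in place #
root@2daygeek [~]# sed -i 's/^apply_updates = no/apply_updates = yes/' /etc/dnf/automatic.conf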

30) dnf man page

Use the below command for more info about the dnf command.
# dnf man page #
root@2daygeek [~]# dnf --help
or
root@2daygeek [~]# dnf -h
or
root@2daygeek [~]# man dnf
That’s all about DNF for now. I hope everybody learned something useful about DNF.