
How to use syslog-ng to collect logs from remote Linux machines

https://www.techrepublic.com/article/how-to-use-syslog-ng-to-collect-logs-from-remote-linux-machines

Jack Wallen walks you through the process of setting up a centralized Linux log server using syslog-ng.
Let's say your data center is filled with Linux servers and you need to administer them all. Part of that administration job is viewing log files. But if you're looking at numerous machines, that means logging into each machine individually, reading log files, and then moving onto the next. Depending upon how many machines you have, that can take a large chunk of time from your day.
Or, you could set up a single Linux machine to collect those logs. That would make your day considerably more efficient. To do this, you could opt for a number of different systems, one of which is syslog-ng.
The problem with syslog-ng is that the documentation isn't the easiest to comb through. However, I've taken care of that and am going to lay out the installation and configuration in such a way that you can have syslog-ng up and running in no time. I'll be demonstrating on Ubuntu Server 16.04 on a two-system setup:
  • UBUNTUSERVERVM at IP address 192.168.1.118 will serve as log collector
  • UBUNTUSERVERVM2 will serve as a client, sending log files to the collector
Let's install and configure.

Installation

The installation is simple. I'll be installing from the standard repositories, in order to make this as easy as possible. To do this, open up a terminal window and issue the command:
sudo apt install syslog-ng
You must issue the above command on both collector and client. Once that's installed, you're ready to configure.

Configuration for the collector

We'll start with the configuration of the log collector. The configuration file is /etc/syslog-ng/syslog-ng.conf. Out of the box, syslog-ng includes a configuration file. We're not going to use that. Let's rename the default config file with the command sudo mv /etc/syslog-ng/syslog-ng.conf /etc/syslog-ng/syslog-ng.conf.BAK. Now create a new configuration file with the command sudo nano /etc/syslog-ng/syslog-ng.conf. In that file add the following:
@version: 3.5
@include "scl.conf"
@include "`scl-root`/system/tty10.conf"
options {
time-reap(30);
mark-freq(10);
keep-hostname(yes);
};
source s_local { system(); internal(); };
source s_network {
syslog(transport(tcp) port(514));
};
destination d_local {
file("/var/log/syslog-ng/messages_${HOST}"); };
destination d_logs {
file(
"/var/log/syslog-ng/logs.txt"
owner("root")
group("root")
perm(0777)
); };
log { source(s_local); source(s_network); destination(d_logs); };
Do note that we are working with port 514, so you'll need to make sure it is accessible on your network.
Save and close the file. The above configuration will dump the desired log files (denoted with system() and internal()) into /var/log/syslog-ng/logs.txt. Because of this, you need to create the directory and file with the following commands:
sudo mkdir /var/log/syslog-ng
sudo touch /var/log/syslog-ng/logs.txt
Start and enable syslog-ng with the commands:
sudo systemctl start syslog-ng
sudo systemctl enable syslog-ng
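If a firewall such as UFW is running on the collector, remember to open the TCP port 514 mentioned above, for example:
sudo ufw allow 514/tcp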

Configuration for the client

We're going to do the very same thing on the client (moving the default configuration file and creating a new configuration file). Copy the following text into the new client configuration file:
@version: 3.5
@include "scl.conf"
@include "`scl-root`/system/tty10.conf"
source s_local { system(); internal(); };
destination d_syslog_tcp {
syslog("192.168.1.118" transport("tcp") port(514)); };
log { source(s_local);destination(d_syslog_tcp); };
Note: Change the IP address to match the address of your collector server.
Save and close that file. Start and enable syslog-ng in the same fashion you did on the collector.

View the log files

Head back to your collector and issue the command sudo tail -f /var/log/syslog-ng/logs.txt. You should see output that includes log entries for both collector and client (Figure A).
Figure A
Syslog-ng is collecting logs from both the collector and the client.
Congratulations, syslog-ng is working. You can now log into your collector to view logs from both the local machine and the remote client. If you have more Linux servers in your data center, walk through the process of installing syslog-ng and setting each of them up as a client to send their logs to the collector, so you no longer have to log into individual machines to view logs.

Working with Vi/Vim Editor : Advanced concepts

http://linuxtechlab.com/working-vivim-editor-advanced-concepts


Earlier we discussed some basics of the VI/VIM editor, but VI & VIM are both very powerful editors with many other functionalities. In this tutorial, we are going to learn some advanced uses of the VI/VIM editor.

(Recommended Read : Working with VI editor : The Basics )

Opening multiple files with VI/VIM editor

To open multiple files, the command is the same as for a single file; we just add the names of the other files as well.
$ vi file1 file2 file3
Now to move to the next file, we can use
$ :n
or we can also use
$ :e filename

Run external commands inside the editor

We can run external Linux/Unix commands from inside the vi editor, i.e. without exiting the editor. To issue a command from the editor, go back to Command Mode if you are in Insert Mode & use the BANG, i.e. ‘!’, followed by the command that needs to be run. The syntax for running a command is,
$ :! command
An example for this would be
$ :! df -H

Searching for a pattern

To search for a word or pattern in the text file, we use the following two commands in command mode,
  • command ‘/’ searches for the pattern in the forward direction
  • command ‘?’ searches for the pattern in the backward direction
Both of these commands are used for the same purpose, the only difference being the direction in which they search. An example would be,
$ :/ search pattern                          (searches forward, useful from the beginning of the file)
$ :? search pattern                          (searches backward, useful from the end of the file)

Searching & replacing a pattern

We might be required to search & replace a word or a pattern in our text files. So rather than finding each occurrence of the word in the whole text file & replacing it by hand, we can issue a command from the command mode to replace the word automatically. The syntax for search & replace across the whole file is,
$ :%s/pattern_to_be_found/New_pattern/g
Suppose we want to find the word “alpha” & replace it with the word “beta” throughout the file, the command would be
$ :%s/alpha/beta/g
If we want to replace only the first occurrence of the word “alpha” on the current line, then the command would be
$ :s/alpha/beta/

Using Set commands

We can also customize the behaviour and the look and feel of the vi/vim editor by using the set command. Here is a list of some options that can be used with the set command to modify the behaviour of the vi/vim editor,
$ :set ic                             ignores cases while searching
$ :set smartcase            make the search case-sensitive when the pattern contains uppercase letters (used together with ic)
$ :set nu                           display line numbers at the beginning of each line
$ :set hlsearch                highlights the matching words
$ : set ro                           change the file type to read only
$ : set term                      prints the terminal type
$ : set ai                            sets auto-indent
$ :set noai                        unsets the auto-indent
Some other commands to modify vi editors are,
$ :colorscheme                it’s used to change the color scheme of the editor. (for VIM editor only)
$ :syntax on                      will turn on the color syntax for .xml, .html files etc. (for VIM editor only)
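These set commands apply only to the current session; to make them permanent, add the desired options (without the leading ‘:’) to your ~/.vimrc file. For example, a minimal ~/.vimrc might contain:
set nu
set ic
set hlsearch
syntax on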

This completes our tutorial, do mention your queries/questions or suggestions in the comment box below.

How To Compile And Run C/C++ Programs In Linux

https://www.ostechnix.com/compile-run-c-c-programs-linux


Run C, C++ Programs In Linux
This brief tutorial will explain how to compile and run C/C++ programs in GNU/Linux. If you’re a student or a new Linux user coming from the Microsoft platform, then you might be wondering how to run C or C++ programs in a Linux distribution, because compiling and running code on Linux platforms is a little bit different than on Windows. Let us get started, shall we?

Setup Development Environment

As you may already know, to run the code we need to install the necessary tools and compilers, right? Yes! Refer to the following guide to install all development tools in your Linux box.
The development tools include all necessary applications, such as the GNU GCC C/C++ compilers, make, debuggers, man pages and others which are needed to compile and build new software, packages etc.
Also, there is a script named manji that helps you to setup a complete environment in Ubuntu-based systems.
After installing the necessary development tools, verify them using any one of the following commands:
whereis gcc
which gcc
gcc -v
These commands will display the installation path and version of gcc compiler.

Compile And Run C/C++ Programs In Linux

Write your code/program in your favorite CLI/GUI editor. Use extension .c for C programs or .cpp for C++ programs.
Here is a simple “C” program.
nano ostechnix.c
#include <stdio.h>
int main()
{
printf("Welcome To OSTechNix!");
return 0;
}
To compile the program, run:
gcc ostechnix.c -o ostechnix1
Or,
g++ ostechnix.c -o ostechnix1
In the second command above, we used the C++ compiler (g++) to compile the program. To use the C compiler front end instead, use:
cc ostechnix.c -o ostechnix1
If there are any syntax or semantic errors in your code/program, the compiler will display them. You might need to fix them first to proceed further. If there are no errors, then the compiler will successfully generate an executable file named ostechnix1 in the current working directory.
Finally, execute the program using command:
./ostechnix1

To compile multiple source files (e.g., source1.c and source2.c) into an executable, run:
gcc source1.c source2.c -o executable
To enable compiler warnings and optimize the build for debugging:
gcc source.c -Wall -Og -o executable
To compile the source code into Assembler instructions:
gcc -S source.c
To compile the source code without linking:
gcc -c source.c
The above command will create an object file called source.o.
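The resulting object file can then be linked into an executable in a separate step, for example:
gcc source.o -o executable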
If your program contains math functions:
gcc source.c -o executable -lm
For more details, refer the man pages.
man gcc
And, that’s all for now. If you find our guides useful, recommend them to your colleagues and friends and support OSTechNix.
Cheers!

Ansible Tutorial: Introduction to simple Ansible commands

http://linuxtechlab.com/ansible-tutorial-simple-commands


In our earlier Ansible tutorial, we discussed the installation & configuration of Ansible. Now in this ansible tutorial, we will learn some basic examples of ansible commands that we will use to manage our infrastructure. So let us start by looking at the syntax of a complete ansible command,
$ ansible <hosts> -m <module_name> -a <arguments>
Here, <hosts> can be a single host, a group from the inventory, or all; the -m <module_name> & -a <arguments> parts are optional to provide. Now let’s look at some basic commands to use with ansible,
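For example, a complete command against a hypothetical inventory group named testservers would be:
$ ansible testservers -m shell -a 'uptime'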

Check connectivity of hosts

We have used this command in our previous tutorial also. The command to check connectivity of hosts is
$ ansible <hosts> -m ping

Rebooting hosts

$ ansible <hosts> -a "/sbin/reboot"

Checking host’s system information

Ansible collects the system’s information for all the hosts connected to it. To display the information of hosts, run
$ ansible <hosts> -m setup | less
Secondly, to check a particular piece of the collected information, we can pass a filter argument,
$ ansible <hosts> -m setup -a "filter=ansible_distribution"

Transferring files

For transferring files, we use the ‘copy’ module & the complete command used is
$ ansible <hosts> -m copy -a "src=/home/dan dest=/tmp/home"

Managing users

So to manage the users on the connected hosts, we use a module named ‘user’ & the commands to use it are as follows,

Creating a new user

$ ansible <hosts> -m user -a "name=testuser password=<encrypted password>"

Deleting a user

$ ansible <hosts> -m user -a "name=testuser state=absent"
Note:- To create an encrypted password, use the ‘mkpasswd --method=sha-512’ command.
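For example, the hash can be captured in a shell variable first and then passed to the user module (testuser is only an example name):
$ PASSHASH=$(mkpasswd --method=sha-512)
$ ansible <hosts> -m user -a "name=testuser password=$PASSHASH"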

Changing permissions & ownership

So for changing the ownership of files on the connected hosts, we use the module named ‘file’ & the commands used are

Changing permission of a file

$ ansible <hosts> -m file -a "dest=/home/dan/file1.txt mode=777"

Changing ownership of a file

$ ansible <hosts> -m file -a "dest=/home/dan/file1.txt mode=777 owner=dan group=dan"

Managing Packages

So, we can manage the packages installed on all the hosts connected to ansible by using the ‘yum’ & ‘apt’ modules, & the complete commands used are

Check if package is installed & update it

$ ansible <hosts> -m yum -a "name=ntp state=latest"

Check if package is installed & don’t update it

$ ansible <hosts> -m yum -a "name=ntp state=present"

Check if package is at a specific version

$ ansible <hosts> -m yum -a "name=ntp-1.8 state=present"

Check if package is not installed

$ ansible <hosts> -m yum -a "name=ntp state=absent"
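On Debian/Ubuntu hosts, the ‘apt’ module works in the same way; for example, to make sure a package is installed:
$ ansible <hosts> -m apt -a "name=ntp state=present"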

Managing services

So to manage services with ansible, we use the ‘service’ module & the complete commands used are,

Starting a service

$ ansible <hosts> -m service -a "name=httpd state=started"

Stopping a service

$ ansible <hosts> -m service -a "name=httpd state=stopped"

Restarting a service

$ ansible <hosts> -m service -a "name=httpd state=restarted"

So this completes our tutorial of some simple, one-line commands that can be used with ansible. Also, in our future tutorials, we will learn to create plays & playbooks that help us manage our hosts more easily & efficiently.

How to install and setup Docker on RHEL 7/CentOS 7

https://www.cyberciti.biz/faq/install-use-setup-docker-on-rhel7-centos7-linux


How do I install and setup Docker container on an RHEL 7 (Red Hat Enterprise Linux) server? How can I setup Docker on a CentOS 7? How to install and use Docker CE on a CentOS Linux 7 server?

Docker is free and open-source software. It automates the deployment of any application as a lightweight, portable, self-sufficient container that will run virtually anywhere. Typically you develop software on your laptop/desktop. You can build a container with your app, and test run it on your computer. It will scale in cloud, VM, VPS, bare-metal and more. There are three versions of Docker. The first one is bundled with the RHEL/CentOS 7 distro and can be installed with yum. The second version, distributed by the Docker project, is called docker-ce (the free community version) and can be installed from the official Docker project repo. The third version, also distributed by the Docker project, is called docker-ee (the paid Enterprise version) and can likewise be installed from the official Docker project repo. This page shows how to install, set up and use Docker or Docker CE on an RHEL 7 or CentOS 7 server and create your first container.

How to install and use Docker on RHEL 7 or CentOS 7 (method 1)

The procedure to install Docker is as follows:
  1. Open the terminal application or login to the remote box using ssh command:
    ssh user@remote-server-name
  2. Type the following command to install Docker via yum provided by Red Hat:
    sudo yum install docker
  3. Type the following command to install the latest version of Docker CE (community edition):
    sudo yum remove docker docker-common docker-selinux docker-engine
    sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
    sudo yum install docker-ce
Let us see all info in details along with examples.

How to install Docker on CentOS 7 / RHEL 7 using yum

Type the following yum command:
$ sudo yum install docker
Install Docker on RHEL 7 using yum command

How to install Docker CE on CentOS 7 (method 2)

First remove older version of docker (if any):
$ sudo yum remove docker docker-common docker-selinux docker-engine-selinux docker-engine docker-ce
Next install needed packages:
$ sudo yum install -y yum-utils device-mapper-persistent-data lvm2
Configure the docker-ce repo:
$ sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Loaded plugins: fastestmirror
adding repo from: https://download.docker.com/linux/centos/docker-ce.repo
grabbing file https://download.docker.com/linux/centos/docker-ce.repo to /etc/yum.repos.d/docker-ce.repo
repo saved to /etc/yum.repos.d/docker-ce.repo
Finally install docker-ce:
$ sudo yum install docker-ce
Install Docker on CentOS 7 using yum command CE Version

How to enable docker service

$ sudo systemctl enable docker.service
Sample outputs:
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.

How to start/stop/restart docker service on CentOS7/RHEL7

$ sudo systemctl start docker.service      ## start docker ##
$ sudo systemctl stop docker.service       ## stop docker ##
$ sudo systemctl restart docker.service    ## restart docker ##
$ sudo systemctl status docker.service     ## get status of docker ##

Sample outputs:
List status of Docker on CentOS RHEL server

How to find out info about Docker network bridge and IP addresses

The default network bridge is named docker0 and is assigned an IP address. To find this info run the following ip command:
$ ip a
$ ip a list docker0

Sample outputs:
3: docker0:  mtu 1500 qdisc noqueue state DOWN 
link/ether 02:42:cd:c0:6d:4a brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 scope global docker0
valid_lft forever preferred_lft forever

How to run docker commands

The syntax is:
docker command
docker command arg
docker [options] command arg
docker help | more

Get system-wide information about Docker

docker info

Getting help

docker help | more
Sample outputs:
Docker help
Run 'docker COMMAND --help' for more information on a command:
docker ps --help
docker cp --help

How to test your docker installation

Docker images are pulled from a docker cloud/hub such as docker.io or registry.access.redhat.com and so on. Type the following command to verify that your installation is working:
docker run hello-world
Sample outputs:
Run docker hello world for testing

How to search for Docker images

Now you have a working Docker setup. It is time to find images. You can find images for all sorts of open source projects and Linux distributions. To search the Docker Hub/cloud for the nginx image, run:
docker search nginx
Sample outputs:
Docker search image

How to install Docker nginx image

To pull an image named nginx from a registry, run:
docker pull nginx
Sample outputs:
Docker pull nginx image (gif)

How to run Docker nginx image

Now that you have pulled the image, it is time to run it:
docker run --name my-nginx-c1 --detach nginx
Say you want to host a simple static file from /home/vivek/html/ using the nginx container:
docker run --name my-nginx-c2 -p 80:80 -v /home/vivek/html/:/usr/share/nginx/html:ro -d nginx
Where,
  • --name my-nginx-c1 : Assign a name to the container
  • --detach : Run container in background and print container ID
  • -v /home/vivek/html/:/usr/share/nginx/html:ro : Bind mount a volume
  • -p 80:80 : Publish a container's port(s) to the host i.e redirect all traffic coming to port 80 to container traffic
Go ahead and create a file named index.html in /home/vivek/html/:
echo 'Welcome. I am Nginx server locked inside Docker'> /home/vivek/html/index.html
Test it:
curl http://your-host-ip-address/
curl 192.168.122.188

Sample outputs:
Welcome. I am Nginx server locked inside Docker

How to list running Docker containers

docker ps
docker ps -a

Sample outputs:
CONTAINER ID        IMAGE               COMMAND                  CREATED              STATUS              PORTS                NAMES
bb9d85a56a92        nginx               "nginx -g 'daemon of…"   55 seconds ago       Up 54 seconds        0.0.0.0:80->80/tcp   my-nginx-c2
fe0cdbc0225a        nginx               "nginx -g 'daemon of…"   About a minute ago   Up About a minute    80/tcp               my-nginx-c1
You can use CONTAINER ID to stop, pause or login into the container.
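For example, to pause and later resume the my-nginx-c2 container from the listing above:
docker pause my-nginx-c2
docker unpause my-nginx-c2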

How to run a command in a running container

Run ls /etc/nginx command for my-nginx-c1 container
docker exec fe0cdbc0225a ls /etc/nginx
OR
docker exec my-nginx-c1 ls /etc/nginx
Want to gain bash shell for a running container and make changes to nginx image?
docker exec -i -t fe0cdbc0225a bash
OR
docker exec -i -t my-nginx-c1 bash

How to stop running containers

docker stop my-nginx-c1
OR
docker stop fe0cdbc0225a

How to remove docker containers

docker rm my-nginx-c1
docker ps -a

And there you have it, Docker installed and running on a CentOS 7 or RHEL 7 server.

How to Install Docker CE on Your Desktop

https://www.linux.com/blog/learn/intro-to-linux/how-install-docker-ce-your-desktop

Follow these simple steps to install Docker CE on your Linux, Mac, or Windows desktop.
In the previous article, we learned some of the basic terminologies of the container world. That background information will come in handy when we run commands and use some of those terms in follow-up articles, including this one. This article will cover the installation of Docker on desktop Linux, macOS, and Windows, and it is intended for beginners who want to get started with Docker containers. The only prerequisite is that you are comfortable with command-line interface.

Why do I need Docker CE on my local machine?

As a new user, you may wonder why you need containers on your local systems. Aren’t they meant to run in cloud and servers as microservices? While containers have been part of the Linux world for a very long time, it was Docker that made them really consumable with its tools and technologies.
The greatest thing about Docker containers is that you can use your local machine for development and testing. The container images that you create on your local system can then run “anywhere.” There is no conflict between developers and operators about apps running fine on development systems but not in production.
The point is that in order to create containerized applications, you must be able to run and create containers on your local systems.
You can use any of the three platforms -- desktop Linux, Windows, or macOS as the development platform for containers. Once Docker is successfully running on these systems, you will be using the same commands across platforms so it really doesn’t matter which OS you are running underneath.
That’s the beauty of Docker.

Let’s get started

There are two editions of Docker: Docker Enterprise Edition (EE) and Docker Community Edition (CE). We will be using the Docker Community Edition, which is a free-of-cost version of Docker intended for developers and enthusiasts who want to get started with Docker.
There are two channels of Docker CE: stable and edge. As the name implies, the stable version gives you well-tested quarterly updates, whereas the edge version offers new updates every month. After further testing, these edge features are added to the stable release. I recommend the stable version for new users.
Docker CE is supported on macOS, Windows 10, Ubuntu 14.04, 16.04, 17.04 and 17.10; Debian 7.7, 8, 9 and 10; Fedora 25, 26, 27; and CentOS. While you can download Docker CE binaries and install them on your desktop Linux systems, I recommend adding repositories so you continue to receive patches and updates.

Install Docker CE on Desktop Linux

You don’t need a full-blown desktop Linux to run Docker; you can install it on a bare minimal Linux server as well, which you can run in a VM. In this tutorial, I am running it on Fedora 27 and Ubuntu 17.04 on my main systems.

Ubuntu Installation

First things first. Run a system update so your Ubuntu packages are fully updated:
$ sudo apt-get update
Now run system upgrade:
$ sudo apt-get dist-upgrade
Then install the Docker PGP key and add the Docker repository:
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
Update the repository info again:
$ sudo apt-get update
Now install Docker CE:
$ sudo apt-get install docker-ce
Once it's installed, Docker CE runs automatically on Ubuntu based systems. Let’s check if it’s running:
$ sudo systemctl status docker
You should get the following output:
docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2017-12-28 15:06:35 EST; 19min ago
Docs: https://docs.docker.com
Main PID: 30539 (dockerd)
Since Docker is installed on your system, you can now use Docker CLI (Command Line Interface) to run Docker commands. Living up to the tradition, let’s run the ‘Hello World’ command:
$ sudo docker run hello-world
Congrats! You have Docker running on your Ubuntu system.  

Installing Docker CE on Fedora

Things are a bit different on Fedora 27. On Fedora, you first need to install the dnf-plugins-core package, which will allow you to manage your DNF packages from the CLI.
$ sudo dnf -y install dnf-plugins-core
Now install the Docker repo on your system:
$ sudo dnf config-manager \
--add-repo \
https://download.docker.com/linux/fedora/docker-ce.repo
It’s time to install Docker CE:
$ sudo dnf install docker-ce
Unlike Ubuntu, Docker doesn’t start automatically on Fedora. So let’s start it:
$ sudo systemctl start docker
You will have to start Docker manually after each reboot, so let’s configure it to start automatically after reboots:
$ sudo systemctl enable docker
Well, it’s time to run the Hello World command:
$ sudo docker run hello-world
Congrats, Docker is running on your Fedora 27 system.

Cutting your roots

You may have noticed that you have to use sudo to run Docker commands. That’s because the Docker daemon binds to a UNIX socket instead of a TCP port, and that socket is owned by the root user. So, you need sudo privileges to run the docker command. You can add your system user to the docker group so it won’t require sudo:
$ sudo groupadd docker
In most cases, the docker group is automatically created when you install Docker CE, so all you need to do is add your user to that group:
$ sudo usermod -aG docker $USER
To test if the group has been added successfully, run the groups command against the name of the user:
$ groups swapnil
(Here, Swapnil is the user.)
This is the output on my system:
swapnil : swapnil adm cdrom sudo dip plugdev lpadmin sambashare docker
You can see that the user also belongs to the docker group. Log out of your system, so that group changes take effect. Once you log back in, try the Hello World command without sudo:
$ docker run hello-world
You can check system wide info about the installed version of Docker and more by running this command:
$ docker info

Install Docker CE on macOS and Windows

You can easily install Docker CE (and EE) on macOS and Windows. Download the official Docker for Mac and install it the way you install applications on macOS, by simply dragging it into the Applications directory. Once the file is copied, open Docker from Spotlight to start the installation process. Once installed, Docker will start automatically and you can see it in the top bar of macOS.
macOS is UNIX, so you can simply open the terminal app and start using Docker commands natively. Test the hello world app:
$ docker run hello-world
Congrats, you have Docker running on your macOS.

Docker on Windows 10

You need the latest version of Windows 10 Pro or Server in order to run/install Docker on it. If you are not fully updated, Windows won’t install Docker. I got an error on my Windows 10 system and had to run system updates. My version was still behind, and I hit this bug. So, if you fail to install Docker on Windows, just know you are not alone. Keep an eye on that bug to find a solution.
Once you install Docker on Windows, you can either use bash shell via WSL or use PowerShell to run docker commands. Let’s test the “Hello World” command in PowerShell:
PS C:\Users\swapnil> docker run hello-world
Congrats, you have Docker running on Windows.
In the next article, we will talk about pulling images from DockerHub and running containers on our systems. We will also talk about pushing our own containers to Docker Hub.
Learn more about Linux through the free "Introduction to Linux" course from The Linux Foundation and edX.

Linux Filesystem Events with inotify

http://www.linuxjournal.com/content/linux-filesystem-events-inotify

Triggering scripts with incron and systemd.
It is, at times, important to know when things change in the Linux OS. The uses to which systems are placed often include high-priority data that must be processed as soon as it is seen. The conventional method of finding and processing new file data is to poll for it, usually with cron. This is inefficient, and it can tax performance unreasonably if too many polling events are forked too often.
Linux has an efficient method for alerting user-space processes to changes impacting files of interest. The inotify Linux system calls were first discussed here in Linux Journal in a 2005 article by Robert Love, who primarily addressed the behavior of the new features from the perspective of C.
However, there also are stable shell-level utilities and new classes of monitoring dæmons for registering filesystem watches and reporting events. Linux installations using systemd also can access basic inotify functionality with path units. The inotify interface does have limitations—it can't monitor remote, network-mounted filesystems (that is, NFS); it does not report the userid involved in the event; it does not work with /proc or other pseudo-filesystems; and mmap() operations do not trigger it, among other concerns. Even with these limitations, it is a tremendously useful feature.
This article completes the work begun by Love and gives everyone who can write a Bourne shell script or set a crontab the ability to react to filesystem changes.

The inotifywait Utility

Working under Oracle Linux 7 (or similar versions of Red Hat/CentOS/Scientific Linux), the inotify shell tools are not installed by default, but you can load them with yum:

# yum install inotify-tools
Loaded plugins: langpacks, ulninfo
ol7_UEKR4 | 1.2 kB 00:00
ol7_latest | 1.4 kB 00:00
Resolving Dependencies
--> Running transaction check
---> Package inotify-tools.x86_64 0:3.14-8.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

==============================================================
Package Arch Version Repository Size
==============================================================
Installing:
inotify-tools x86_64 3.14-8.el7 ol7_latest 50 k

Transaction Summary
==============================================================
Install 1 Package

Total download size: 50 k
Installed size: 111 k
Is this ok [y/d/N]: y
Downloading packages:
inotify-tools-3.14-8.el7.x86_64.rpm | 50 kB 00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Warning: RPMDB altered outside of yum.
Installing : inotify-tools-3.14-8.el7.x86_64 1/1
Verifying : inotify-tools-3.14-8.el7.x86_64 1/1

Installed:
inotify-tools.x86_64 0:3.14-8.el7

Complete!
The package will include two utilities (inotifywait and inotifywatch), documentation and a number of libraries. The inotifywait program is of primary interest.
Some derivatives of Red Hat 7 may not include inotify in their base repositories. If you find it missing, you can obtain it from Fedora's EPEL repository, either by downloading the inotify RPM for manual installation or adding the EPEL repository to yum.
Any user on the system who can launch a shell may register watches—no special privileges are required to use the interface. This example watches the /tmp directory:

$ inotifywait -m /tmp
Setting up watches.
Watches established.

If another session on the system performs a few operations on the files in /tmp:

$ touch /tmp/hello
$ cp /etc/passwd /tmp
$ rm /tmp/passwd
$ touch /tmp/goodbye
$ rm /tmp/hello /tmp/goodbye

those changes are immediately visible to the user running inotifywait:

/tmp/ CREATE hello
/tmp/ OPEN hello
/tmp/ ATTRIB hello
/tmp/ CLOSE_WRITE,CLOSE hello
/tmp/ CREATE passwd
/tmp/ OPEN passwd
/tmp/ MODIFY passwd
/tmp/ CLOSE_WRITE,CLOSE passwd
/tmp/ DELETE passwd
/tmp/ CREATE goodbye
/tmp/ OPEN goodbye
/tmp/ ATTRIB goodbye
/tmp/ CLOSE_WRITE,CLOSE goodbye
/tmp/ DELETE hello
/tmp/ DELETE goodbye

A few relevant sections of the manual page explain what is happening:

$ man inotifywait | col -b | sed -n '/diagnostic/,/helpful/p'
inotifywait will output diagnostic information on standard error and
event information on standard output. The event output can be config-
ured, but by default it consists of lines of the following form:

watched_filename EVENT_NAMES event_filename


watched_filename
is the name of the file on which the event occurred. If the
file is a directory, a trailing slash is output.

EVENT_NAMES
are the names of the inotify events which occurred, separated by
commas.

event_filename
is output only when the event occurred on a directory, and in
this case the name of the file within the directory which caused
this event is output.

By default, any special characters in filenames are not escaped
in any way. This can make the output of inotifywait difficult
to parse in awk scripts or similar. The --csv and --format
options will be helpful in this case.
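
For example, a watch whose output is simpler to parse could use the --format option (the delimiter chosen here is only an illustration):

$ inotifywait -m --format '%w|%e|%f' -e close_write /tmp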

It also is possible to filter the output by registering particular events of interest with the -e option, the list of which is shown here:
access          create          move_self
attrib          delete          moved_to
close_write     delete_self     moved_from
close_nowrite   modify          open
close           move            unmount
A common application is testing for the arrival of new files. Since inotify must be given the name of an existing filesystem object to watch, the directory containing the new files is provided. A trigger of interest is also easy to provide—new files should be complete and ready for processing when the close_write trigger fires. Below is an example script to watch for these events:

#!/bin/sh
unset IFS                            # default of space, tab and nl
# Wait for filesystem events
inotifywait -m -e close_write \
   /tmp /var/tmp /home/oracle/arch-orcl/ |
while read dir op file
do [[ "${dir}" == '/tmp/' && "${file}" == *.txt ]] &&
      echo "Import job should start on $file ($dir $op)."

   [[ "${dir}" == '/var/tmp/' && "${file}" == CLOSE_WEEK*.txt ]] &&
      echo Weekly backup is ready.

   [[ "${dir}" == '/home/oracle/arch-orcl/' && "${file}" == *.ARC ]] &&
      su - oracle -c 'ORACLE_SID=orcl ~oracle/bin/log_shipper' &

   [[ "${dir}" == '/tmp/' && "${file}" == SHUT ]] && break

   ((step+=1))
done

echo We processed $step events.

There are a few problems with the script as presented—of all the available shells on Linux, only ksh93 (that is, the AT&T Korn shell) will report the "step" variable correctly at the end of the script. All the other shells will report this variable as null.
The reason for this behavior can be found in a brief explanation on the manual page for Bash: "Each command in a pipeline is executed as a separate process (i.e., in a subshell)." The MirBSD clone of the Korn shell has a slightly longer explanation:

# man mksh | col -b | sed -n '/The parts/,/do so/p'
The parts of a pipeline, like below, are executed in subshells. Thus,
variable assignments inside them fail. Use co-processes instead.

foo | bar | read baz # will not change $baz
foo | bar |& read -p baz # will, however, do so

And, the pdksh documentation in Oracle Linux 5 (from which MirBSD mksh emerged) has several more mentions of the subject:

General features of at&t ksh88 that are not (yet) in pdksh:
- the last command of a pipeline is not run in the parent shell
- `echo foo | read bar; echo $bar' prints foo in at&t ksh, nothing
in pdksh (ie, the read is done in a separate process in pdksh).
- in pdksh, if the last command of a pipeline is a shell builtin, it
is not executed in the parent shell, so "echo a b | read foo bar"
does not set foo and bar in the parent shell (at&t ksh will).
This may get fixed in the future, but it may take a while.

$ man pdksh | col -b | sed -n '/BTW, the/,/aware/p'
BTW, the most frequently reported bug is
echo hi | read a; echo $a # Does not print hi
I'm aware of this and there is no need to report it.

This behavior is easy enough to demonstrate—running the script above with the default bash shell and providing a sequence of example events:

$ cp /etc/passwd /tmp/newdata.txt
$ cp /etc/group /var/tmp/CLOSE_WEEK20170407.txt
$ cp /etc/passwd /tmp/SHUT

gives the following script output:

# ./inotify.sh
Setting up watches.
Watches established.
Import job should start on newdata.txt (/tmp/ CLOSE_WRITE,CLOSE).
Weekly backup is ready.
We processed events.

Examining the process list while the script is running, you'll also see two shells, one forked for the control structure:

$ function pps { typeset a IFS=\| ; ps ax | while read a
do case $a in *$1*|+([!0-9])) echo $a;; esac; done }


$ pps inot
PID TTY STAT TIME COMMAND
3394 pts/1 S+ 0:00 /bin/sh ./inotify.sh
3395 pts/1 S+ 0:00 inotifywait -m -e close_write /tmp /var/tmp
3396 pts/1 S+ 0:00 /bin/sh ./inotify.sh

As it was manipulated in a subshell, the "step" variable above was null when control flow reached the echo. Switching this from #!/bin/sh to #!/bin/ksh93 will correct the problem, and only one shell process will be seen:

# ./inotify.ksh93
Setting up watches.
Watches established.
Import job should start on newdata.txt (/tmp/ CLOSE_WRITE,CLOSE).
Weekly backup is ready.
We processed 2 events.


$ pps inot
PID TTY STAT TIME COMMAND
3583 pts/1 S+ 0:00 /bin/ksh93 ./inotify.sh
3584 pts/1 S+ 0:00 inotifywait -m -e close_write /tmp /var/tmp

Although ksh93 behaves properly and in general handles scripts far more gracefully than all of the other Linux shells, it is rather large:

$ ll /bin/[bkm]+([aksh93]) /etc/alternatives/ksh
-rwxr-xr-x. 1 root root 960456 Dec 6 11:11 /bin/bash
lrwxrwxrwx. 1 root root 21 Apr 3 21:01 /bin/ksh ->
/etc/alternatives/ksh
-rwxr-xr-x. 1 root root 1518944 Aug 31 2016 /bin/ksh93
-rwxr-xr-x. 1 root root 296208 May 3 2014 /bin/mksh
lrwxrwxrwx. 1 root root 10 Apr 3 21:01 /etc/alternatives/ksh ->
/bin/ksh93

The mksh binary is the smallest of the Bourne implementations above (some of these shells may be missing on your system, but you can install them with yum). For a long-term monitoring process, mksh is likely the best choice for reducing both processing and memory footprint, and it does not launch multiple copies of itself when idle assuming that a coprocess is used. Converting the script to use a Korn coprocess that is friendly to mksh is not difficult:

#!/bin/mksh
unset IFS # default of space, tab and nl
# Wait for filesystem events
inotifywait -m -e close_write \
/tmp/ /var/tmp/ /home/oracle/arch-orcl/ \
   2>/dev/null |&                    # launch inotifywait as a coprocess
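
A minimal sketch of the matching read loop, assuming the same /tmp triggers as the Bourne version above:

while read -p dir op file            # read -p consumes the coprocess output
do [[ "${dir}" == '/tmp/' && "${file}" == *.txt ]] &&
      echo "Import job should start on $file ($dir $op)."

   [[ "${dir}" == '/tmp/' && "${file}" == SHUT ]] && break

   ((step+=1))
done

echo We processed $step events.

Because read -p runs in the parent shell rather than in a pipeline subshell, the step counter survives to the final echo.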
Note that the Korn and Bolsky reference on the Korn shell outlines the following requirements in a program operating as a coprocess:
Caution: The co-process must:
  • Send each output message to standard output.
  • Have a Newline at the end of each message.
  • Flush its standard output whenever it writes a message.
An fflush(NULL) is found in the main processing loop of the inotifywait source, and these requirements appear to be met.
The mksh version of the script is the most reasonable compromise for efficient use and correct behavior, and I have explained it at some length here to save readers trouble and frustration—it is important to avoid control structures executing in subshells in most of the Bourne family. However, hopefully all of these ersatz shells will someday fix this basic flaw and implement the Korn behavior correctly.

A Practical Application—Oracle Log Shipping

Oracle databases that are configured for hot backups produce a stream of "archived redo log files" that are used for database recovery. These are the most critical backup files that are produced in an Oracle database.
These files are numbered sequentially and are written to a log directory configured by the DBA. An inotifywait watch can trigger activities to compress, encrypt and/or distribute the archived logs to backup and disaster recovery servers for safekeeping. You can configure Oracle RMAN to do most of these functions, but the OS tools are more capable, flexible and simpler to use.
There are a number of important design parameters for a script handling archived logs:
  • A "critical section" must be established that allows only a single process to manipulate the archived log files at a time. Oracle will sometimes write bursts of log files, and inotify might cause the handler script to be spawned repeatedly in a short amount of time. Only one instance of the handler script can be allowed to run—any others spawned during the handler's lifetime must immediately exit. This will be achieved with a textbook application of the flock program from the util-linux package.
  • The optimum compression available for production applications appears to be lzip. The author claims that his archive format is superior to that of many better-known utilities, both in compression ability and in structural integrity. The lzip binary is not in the standard repository for Oracle Linux—it is available in EPEL and is easily compiled from source.
  • Note that 7-Zip uses the same LZMA algorithm as lzip, and it also will perform AES encryption on the data after compression. Encryption is a desirable feature, as it will exempt a business from breach disclosure laws in most US states if the backups are lost or stolen and they contain "Protected Personal Information" (PPI), such as birthdays or Social Security Numbers. The author of lzip does have harsh things to say regarding the quality of 7-Zip archives using LZMA2, and the openssl enc program can be used to apply AES encryption after compression to lzip archives or any other type of file, as I discussed in a previous article. I'm foregoing file encryption in the script below and using lzip for clarity.
  • The current log number will be recorded in a dot file in the Oracle user's home directory. If a log is skipped for some reason (a rare occurrence for an Oracle database), log shipping will stop. A missing log requires an immediate and full database backup (either cold or hot)—successful recoveries of Oracle databases cannot skip logs.
  • The scp program will be used to copy the log to a remote server, and it should be called repeatedly until it returns successfully.
  • I'm calling the genuine '93 Korn shell for this activity, as it is the most capable scripting shell and I don't want any surprises.
Given these design parameters, this is an implementation:

# cat ~oracle/archutils/process_logs

#!/bin/ksh93

set -euo pipefail
IFS=$'\n\t' # http://redsymbol.net/articles/unofficial-bash-strict-mode/

(
flock -n 9 || exit 1 # Critical section-allow only one process.

ARCHDIR=~oracle/arch-${ORACLE_SID}

APREFIX=${ORACLE_SID}_1_

ASUFFIX=.ARC

CURLOG=$(<~oracle/.curlog-$ORACLE_SID)

File="${ARCHDIR}/${APREFIX}${CURLOG}${ASUFFIX}"

[[ ! -f "$File" ]] && exit

while [[ -f "$File" ]]
do ((NEXTCURLOG=CURLOG+1))

NextFile="${ARCHDIR}/${APREFIX}${NEXTCURLOG}${ASUFFIX}"

[[ ! -f "$NextFile" ]] && sleep 60 # Ensure ARCH has finished

nice /usr/local/bin/lzip -9q "$File"

until scp "${File}.lz" "yourcompany.com:~oracle/arch-$ORACLE_SID"
do sleep 5
done

CURLOG=$NEXTCURLOG

File="$NextFile"
done

echo $CURLOG > ~oracle/.curlog-$ORACLE_SID

) 9>~oracle/.processing_logs-$ORACLE_SID

The above script can be executed manually for testing even while the inotify handler is running, as the flock protects it.
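How the handler gets launched is up to you; a minimal sketch that drives it with inotifywait directly (assuming it runs as the oracle user, that ORACLE_SID is set in the environment, and that the script lives at ~oracle/archutils/process_logs as above) might be:

inotifywait -m -e close_write ~oracle/arch-orcl/ |
while read dir op file
do ~oracle/archutils/process_logs   # the flock in the script discards overlapping runs
done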
A standby server, or a DataGuard server in primitive standby mode, can apply the archived logs at regular intervals. The script below forces a 12-hour delay in log application for the recovery of dropped or damaged objects, so inotify cannot be easily used in this case—cron is a more reasonable approach for delayed file processing, and a run every 20 minutes will keep the standby at the desired recovery point:

# cat ~oracle/archutils/delay-lock.sh

#!/bin/ksh93

(
flock -n 9 || exit 1 # Critical section-only one process.

WINDOW=43200 # 12 hours

LOG_DEST=~oracle/arch-$ORACLE_SID

OLDLOG_DEST=$LOG_DEST-applied

function fage { print $(( $(date +%s) - $(stat -c %Y "$1") ))
} # File age in seconds - Requires GNU extended date & stat

cd $LOG_DEST

of=$(ls -t | tail -1) # Oldest file in directory

[[ -z "$of" || $(fage "$of") -lt $WINDOW ]] && exit

for x in $(ls -rt) # Order by ascending file mtime
do if [[ $(fage "$x") -ge $WINDOW ]]
then y=$(basename $x .lz) # lzip compression is optional

[[ "$y" != "$x" ]] && /usr/local/bin/lzip -dkq "$x"

$ORACLE_HOME/bin/sqlplus '/ as sysdba' > /dev/null 2>&1 <<-EOF
recover standby database
cancel
quit
EOF

[[ "$y" != "$x" ]] && rm "$y"

mv "$x" $OLDLOG_DEST
fi
done

) 9> ~oracle/.recovering-$ORACLE_SID

I've covered these specific examples here because they introduce tools to control concurrency, which is a common issue when using inotify, and they advance a few features that increase reliability and minimize storage requirements. Hopefully enthusiastic readers will introduce many improvements to these approaches.

The incron System

Lukas Jelinek is the author of the incron package that allows users to specify tables of inotify events that are executed by the master incrond process. Despite the reference to "cron", the package does not schedule events at regular intervals—it is a tool for filesystem events, and the cron reference is slightly misleading.
The incron package is available from EPEL. If you have installed the repository, you can load it with yum:

# yum install incron
Loaded plugins: langpacks, ulninfo
Resolving Dependencies
--> Running transaction check
---> Package incron.x86_64 0:0.5.10-8.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=================================================================
Package Arch Version Repository Size
=================================================================
Installing:
incron x86_64 0.5.10-8.el7 epel 92 k

Transaction Summary
==================================================================
Install 1 Package

Total download size: 92 k
Installed size: 249 k
Is this ok [y/d/N]: y
Downloading packages:
incron-0.5.10-8.el7.x86_64.rpm | 92 kB 00:01
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : incron-0.5.10-8.el7.x86_64 1/1
Verifying : incron-0.5.10-8.el7.x86_64 1/1

Installed:
incron.x86_64 0:0.5.10-8.el7

Complete!

On a systemd distribution with the appropriate service units, you can start and enable incron at boot with the following commands:

# systemctl start incrond
# systemctl enable incrond
Created symlink from
/etc/systemd/system/multi-user.target.wants/incrond.service
to /usr/lib/systemd/system/incrond.service.

In the default configuration, any user can establish incron schedules. The incrontab format uses three fields:

<path> <mask> <command>

Below is an example entry that was set with the -e option:

$ incrontab -e #vi session follows

$ incrontab -l
/tmp/ IN_ALL_EVENTS /home/luser/myincron.sh $@ $% $#

You can record a simple script and mark it with execute permission:

$ cat myincron.sh
#!/bin/sh

echo -e "path: $1 op: $2 \t file: $3">> ~/op

$ chmod 755 myincron.sh

Then, if you repeat the original /tmp file manipulations at the start of this article, the script will record the following output:

$ cat ~/op

path: /tmp/ op: IN_ATTRIB file: hello
path: /tmp/ op: IN_CREATE file: hello
path: /tmp/ op: IN_OPEN file: hello
path: /tmp/ op: IN_CLOSE_WRITE file: hello
path: /tmp/ op: IN_OPEN file: passwd
path: /tmp/ op: IN_CLOSE_WRITE file: passwd
path: /tmp/ op: IN_MODIFY file: passwd
path: /tmp/ op: IN_CREATE file: passwd
path: /tmp/ op: IN_DELETE file: passwd
path: /tmp/ op: IN_CREATE file: goodbye
path: /tmp/ op: IN_ATTRIB file: goodbye
path: /tmp/ op: IN_OPEN file: goodbye
path: /tmp/ op: IN_CLOSE_WRITE file: goodbye
path: /tmp/ op: IN_DELETE file: hello
path: /tmp/ op: IN_DELETE file: goodbye

While the IN_CLOSE_WRITE event on a directory object is usually of greatest interest, most of the standard inotify events are available within incron, which also offers several unique amalgams:

$ man 5 incrontab | col -b | sed -n '/EVENT SYMBOLS/,/child process/p'

EVENT SYMBOLS

These basic event mask symbols are defined:

IN_ACCESS File was accessed (read) (*)
IN_ATTRIB Metadata changed (permissions, timestamps, extended
attributes, etc.) (*)
IN_CLOSE_WRITE File opened for writing was closed (*)
IN_CLOSE_NOWRITE File not opened for writing was closed (*)
IN_CREATE File/directory created in watched directory (*)
IN_DELETE File/directory deleted from watched directory (*)
IN_DELETE_SELF Watched file/directory was itself deleted
IN_MODIFY File was modified (*)
IN_MOVE_SELF Watched file/directory was itself moved
IN_MOVED_FROM File moved out of watched directory (*)
IN_MOVED_TO File moved into watched directory (*)
IN_OPEN File was opened (*)

When monitoring a directory, the events marked with an asterisk (*)
above can occur for files in the directory, in which case the name
field in the returned event data identifies the name of the file within
the directory.

The IN_ALL_EVENTS symbol is defined as a bit mask of all of the above
events. Two additional convenience symbols are IN_MOVE, which is a com-
bination of IN_MOVED_FROM and IN_MOVED_TO, and IN_CLOSE, which combines
IN_CLOSE_WRITE and IN_CLOSE_NOWRITE.

The following further symbols can be specified in the mask:

IN_DONT_FOLLOW Don't dereference pathname if it is a symbolic link
IN_ONESHOT Monitor pathname for only one event
IN_ONLYDIR Only watch pathname if it is a directory

Additionally, there is a symbol which doesn't appear in the inotify sym-
bol set. It is IN_NO_LOOP. This symbol disables monitoring events until
the current one is completely handled (until its child process exits).

The incron system likely presents the most comprehensive interface to inotify of all the tools researched and listed here. Additional configuration options can be set in /etc/incron.conf to tweak incron's behavior for those that require a non-standard configuration.

Path Units under systemd

When your Linux installation is running systemd as PID 1, limited inotify functionality is available through "path units" as is discussed in a lighthearted article by Paul Brown at OCS-Mag.
The relevant manual page has useful information on the subject:

$ man systemd.path | col -b | sed -n '/Internally,/,/systems./p'

Internally, path units use the inotify(7) API to monitor file systems.
Due to that, it suffers by the same limitations as inotify, and for
example cannot be used to monitor files or directories changed by other
machines on remote NFS file systems.

Note that when a systemd path unit spawns a shell script, the $HOME and tilde (~) operator for the owner's home directory may not be defined. Using the tilde operator to reference another user's home directory (for example, ~nobody/) does work, even when applied to the self-same user running the script. The Oracle script above was explicit and did not reference ~ without specifying the target user, so I'm using it as an example here.
Using inotify triggers with systemd path units requires two files. The first file specifies the filesystem location of interest:

$ cat /etc/systemd/system/oralog.path

[Unit]
Description=Oracle Archivelog Monitoring
Documentation=http://docs.yourserver.com

[Path]
PathChanged=/home/oracle/arch-orcl/

[Install]
WantedBy=multi-user.target

The PathChanged parameter above roughly corresponds to the close-write event used in my previous direct inotify calls. The full collection of inotify events is not (currently) supported by systemd—it is limited to PathExists, PathChanged and PathModified, which are described in man systemd.path.
The second file is a service unit describing a program to be executed. It must have the same name, but a different extension, as the path unit:

$ cat /etc/systemd/system/oralog.service

[Unit]
Description=Oracle Archivelog Monitoring
Documentation=http://docs.yourserver.com

[Service]
Type=oneshot
Environment=ORACLE_SID=orcl
ExecStart=/bin/sh -c '/root/process_logs >> /tmp/plog.txt 2>&1'

The oneshot parameter above alerts systemd that the program that it forks is expected to exit and should not be respawned automatically—the restarts are limited to triggers from the path unit. The above service configuration will provide the best options for logging—divert them to /dev/null if they are not needed.
Use systemctl start on the path unit to begin monitoring—a common error is using it on the service unit, which will directly run the handler only once. Enable the path unit if the monitoring should survive a reboot.
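For the example units above, that means:

# systemctl start oralog.path
# systemctl enable oralog.path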
Although this limited functionality may be enough for some casual uses of inotify, it is a shame that the full functionality of inotifywait and incron are not represented here. Perhaps it will come in time.

Conclusion

Although the inotify tools are powerful, they do have limitations. To repeat them, inotify cannot monitor remote (NFS) filesystems; it cannot report the userid involved in a triggering event; it does not work with /proc or other pseudo-filesystems; mmap() operations do not trigger it; and the inotify queue can overflow resulting in lost events, among other concerns.
Even with these weaknesses, the efficiency of inotify is superior to most other approaches for immediate notifications of filesystem activity. It also is quite flexible, and although the close-write directory trigger should suffice for most usage, it has ample tools for covering special use cases.
In any event, it is productive to replace polling activity with inotify watches, and system administrators should be liberal in educating the user community that the classic crontab is not an appropriate place to check for new files. Recalcitrant users should be confined to Ultrix on a VAX until they develop sufficient appreciation for modern tools and approaches, which should result in more efficient Linux systems and happier administrators.

Sidenote: Archiving /etc/passwd

Tracking changes to the password file involves many different types of inotify triggering events. The vipw utility commonly will make changes to a temporary file, then clobber the original with it. This can be seen when the inode number changes:

# ll -i /etc/passwd
199720973 -rw-r--r-- 1 root root 3928 Jul 7 12:24 /etc/passwd

# vipw
[ make changes ]
You are using shadow passwords on this system.
Would you like to edit /etc/shadow now [y/n]? n

# ll -i /etc/passwd
203784208 -rw-r--r-- 1 root root 3956 Jul 7 12:24 /etc/passwd

The destruction and replacement of /etc/passwd even occurs with setuid binaries called by unprivileged users:

$ ll -i /etc/passwd
203784196 -rw-r--r-- 1 root root 3928 Jun 29 14:55 /etc/passwd

$ chsh
Changing shell for fishecj.
Password:
New shell [/bin/bash]: /bin/csh
Shell changed.

$ ll -i /etc/passwd
199720970 -rw-r--r-- 1 root root 3927 Jul 7 12:23 /etc/passwd

For this reason, all inotify triggering events should be considered when tracking this file. If there is concern with an inotify queue overflow (in which events are lost), then the OPEN, ACCESS and CLOSE_NOWRITE,CLOSE triggers likely can be immediately ignored.
All other inotify events on /etc/passwd might run the following script to version the changes into an RCS archive and mail them to an administrator:

#!/bin/sh

# This script tracks changes to the /etc/passwd file from inotify.
# Uses RCS for archiving. Watch for UID zero.

PWMAILS=Charlie.Root@openbsd.org

TPDIR=~/track_passwd

cd $TPDIR

if diff -q /etc/passwd $TPDIR/passwd
then exit # they are the same
else sleep 5 # let passwd settle
diff /etc/passwd $TPDIR/passwd 2>&1 | # they are DIFFERENT
mail -s "/etc/passwd changes $(hostname -s)" "$PWMAILS"
cp -f /etc/passwd $TPDIR # copy for checkin

# "SCCS, the source motel! Programs check in and never check out!"
# -- Ken Thompson

rcs -q -l passwd # lock the archive
ci -q -m_ passwd # check in new ver
co -q passwd # drop the new copy
fi > /dev/null 2>&1

Here is an example email from the script for the above chsh operation:

-----Original Message-----
From: root [mailto:root@myhost.com]
Sent: Thursday, July 06, 2017 2:35 PM
To: Fisher, Charles J. ;
Subject: /etc/passwd changes myhost

57c57
< fishecj:x:123:456:Fisher, Charles J.:/home/fishecj:/bin/bash
---
> fishecj:x:123:456:Fisher, Charles J.:/home/fishecj:/bin/csh

Further processing on the third column of /etc/passwd might detect UID zero (a root user) or other important user classes for emergency action. This might include a rollback of the file from RCS to /etc and/or SMS messages to security contacts.

How to Install and Configure Foreman 1.16 on Debian 9 / Ubuntu 16.04 Server

$
0
0
https://www.linuxtechi.com/install-configure-foreman-1-16-debian-9-ubuntu-16-04

Foreman is a free and open source configuration and provisioning tool which can be installed on Red Hat, CentOS, Scientific Linux, Debian and Ubuntu systems. With the Foreman tool we can easily provision virtual machines and bare-metal servers and then configure the installed systems using configuration tools like Puppet and Ansible. Whenever we install the Foreman server, it automatically installs a Puppet master on it.
With the help of the Foreman GUI, system administrators can apply specific Puppet modules to registered servers to handle repetitive tasks and can easily automate day-to-day operations.
In this tutorial, we will walk through the installation steps of Foreman 1.16 on a Debian 9 or Ubuntu 16.04 server.
Following are the Minimum System Requirements for Foreman server:
  • 4 GB RAM (When Puppet Master is installed on same foreman Server)
  • 2 Core CPU
  • Freshly installed Debian 9 / Ubuntu 16.04
Below are my lab setup details for the Foreman server:
  • IP address of Foreman Server is “192.168.1.20”
  • Hostname of Foreman Server “foreman.linuxtechi.com”
  • Puppet Master 5 will be installed on Foreman server
  • OS : Debian 9 / Ubuntu 16.04 LTS Server
Let’s login to Debian 9 / Ubuntu 16.04 LTS system

Step:1) Configure Hostname and update its entries in hosts file

Use the hostnamectl command to configure the hostname of your system.
$ sudo hostnamectl set-hostname "foreman.linuxtechi.com"
$ exec bash
Update your system's hostname entry in the /etc/hosts file.
192.168.1.20  foreman.linuxtechi.com foreman

Step:2) Enable required repositories for Foreman & Puppet

For Debian 9 system:
Enable Puppet 5 Repositories using below commands
linuxtechi@foreman:~$ sudo apt-get -y install ca-certificates
linuxtechi@foreman:~$ wget https://apt.puppetlabs.com/puppet5-release-stretch.deb
linuxtechi@foreman:~$ sudo dpkg -i puppet5-release-stretch.deb
Enable Foreman 1.16 repositories using below commands
linuxtechi@foreman:~$ echo "deb http://deb.theforeman.org/ stretch 1.16" | sudo tee /etc/apt/sources.list.d/foreman.list 
linuxtechi@foreman:~$ echo "deb http://deb.theforeman.org/ plugins 1.16" | sudo tee -a /etc/apt/sources.list.d/foreman.list
linuxtechi@foreman:~$ wget -q https://deb.theforeman.org/pubkey.gpg -O- | sudo apt-key add -
OK
linuxtechi@foreman:~$
For Ubuntu 16.04 LTS system
Enable Puppet 5 Repositories
linuxtechi@foreman:~$ sudo  apt-get -y install ca-certificates
linuxtechi@foreman:~$ wget https://apt.puppetlabs.com/puppet5-release-xenial.deb
linuxtechi@foreman:~$ sudo  dpkg -i puppet5-release-xenial.deb
Enable Foreman 1.16 repositories
linuxtechi@foreman:~$ echo "deb http://deb.theforeman.org/ xenial 1.16" | sudo tee /etc/apt/sources.list.d/foreman.list
linuxtechi@foreman:~$ echo "deb http://deb.theforeman.org/ plugins 1.16" | sudo tee -a /etc/apt/sources.list.d/foreman.list
linuxtechi@foreman:~$ wget -q https://deb.theforeman.org/pubkey.gpg -O- | sudo apt-key add -
OK
linuxtechi@foreman:~$

Step:3) Download ‘foreman-installer’ using apt-get command

Run the command below to install foreman-installer:
linuxtechi@foreman:~$ sudo apt-get update && sudo apt-get -y install foreman-installer
foreman-installer is the installation tool for Foreman.

Step:4) Install Foreman using ‘foreman-installer’

Run the foreman-installer command to install the Foreman server. By default, the installer will install and configure the following components:
  • Foreman Web UI ( Apache HTTP with SSL)
  • Smart Proxy
  • Puppet Master
  • Puppet agent
  • TFTP Server
linuxtechi@foreman:~$ sudo foreman-installer --foreman-admin-username admin --foreman-admin-password "Foreman@123#"
Once the installation is completed successfully, we will get output something like below:
Foreman-Installation-Completed-Debain9
In case the OS firewall is enabled and running on your system, open the following ports for the Foreman server:
linuxtechi@foreman:~$ sudo ufw allow 53/tcp
Rule added
Rule added (v6)
linuxtechi@foreman:~$ sudo ufw allow 67:69/tcp
Rule added
Rule added (v6)
linuxtechi@foreman:~$ sudo ufw allow 80/tcp
Rule added
Rule added (v6)
linuxtechi@foreman:~$ sudo ufw allow 443/tcp
Rule added
Rule added (v6)
linuxtechi@foreman:~$ sudo ufw allow 3000/tcp
Rule added
Rule added (v6)
linuxtechi@foreman:~$ sudo ufw allow 3306/tcp
Rule added
Rule added (v6)
linuxtechi@foreman:~$ sudo ufw allow 5910:5930/tcp
Rule added
Rule added (v6)
linuxtechi@foreman:~$ sudo ufw allow 5432/tcp
Rule added
Rule added (v6)
linuxtechi@foreman:~$ sudo ufw allow 8140/tcp
Rule added
Rule added (v6)
linuxtechi@foreman:~$ sudo ufw allow 8443/tcp
Rule added
Rule added (v6)
linuxtechi@foreman:~$
Note: In my case, while installing Foreman I got the error "Error executing SQL; psql returned pid 32532 exit 1: ERROR: invalid locale name: en_US.utf8". I resolved it by executing the command below:
linuxtechi@foreman:~$ sudo dpkg-reconfigure locales
Configure-locales-Debian9
Select “en_US.UTF-8 UTF-8”, then select OK, reboot the machine, and re-run the foreman-installer command.
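If you prefer a non-interactive fix, the same result can usually be achieved from the shell (a sketch, assuming a Debian/Ubuntu system with the locales package installed; re-run foreman-installer afterwards):
linuxtechi@foreman:~$ sudo sed -i 's/^# *en_US.UTF-8 UTF-8/en_US.UTF-8 UTF-8/' /etc/locale.gen
linuxtechi@foreman:~$ sudo locale-gen
linuxtechi@foreman:~$ sudo update-locale LANG=en_US.UTF-8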

Step:5) Access Foreman Web UI

We can access Foreman Web UI using the following url:
https://{Foreman_Server_IP}
or
https://{Hostname_Foreman_Server}
Use the username “admin” and the password that we specified in the foreman-installer command.
Foreman-Dashboard-Debian9
Foreman-Web-UI-Debian9
Go to Hosts Tab –> Click on “All Hosts
All-Hosts-Foreman-GUI-Debian9
As of now, only one host is registered, i.e., our Foreman server. Whenever we register new servers to Foreman, they will be listed here. Apart from this, a production environment is also created by default and all servers are registered to it. You can create environments that suit your organization from the Foreman UI.

Download and Import NTP puppet module on Foreman Server

Use the command below to download the ntp Puppet module from “forge.puppet.com”:
linuxtechi@foreman:~$ sudo su -
root@foreman:~# puppet module install puppetlabs-ntp -i /etc/puppetlabs/code/modules/
We will get the output something like below:
Puppet-Module-Install-Debian9
Import the installed NTP module into the foreman dashboard
From the dashboard go to Configure Tab –> Select Puppet–> Classes , Click on Import
Debian9-PuppetClasses-Dashboard
Select the environments to which you want to attach this module; in my case I am going to attach it to production and development.
Modules-Assigned-environments-foreman-debian9
Click on Update,
We will get the next window something like below:
Puppet-Classes-Environments-Foreman-Dashboard
Let’s register a CentOS 7 host to the Foreman dashboard and then attach the ntp module to it.

Registering a CentOS 7 Server

Log in to the system, enable the Puppet Labs yum repository and then install the puppet package:
[root@mx2 ~]# yum install https://yum.puppetlabs.com/puppet5/puppet5-release-el-7.noarch.rpm -y
[root@mx2 ~]# yum install puppet -y
Note: In case you don’t have a DNS server, add entries to the hosts file. In my case I have added the following lines to the /etc/hosts file:
192.168.1.20  foreman.linuxtechi.com
192.168.1.2    mx2.linuxtechi.com
Run the command below from your CentOS 7 server to register this machine with the Puppet master and Foreman dashboard:
[root@mx2 ~]# /opt/puppetlabs/bin/puppet agent -td --server=foreman.linuxtechi.com
You will get the output of command something like below:
……………………………………………………
Debug: Finishing transaction 22347940
Info: Creating a new SSL key for mx2.linuxtechi.com
Info: Caching certificate for ca
Info: csr_attributes file loading from /etc/puppet/csr_attributes.yaml
Info: Creating a new SSL certificate request for mx2.linuxtechi.com
Info: Certificate Request fingerprint (SHA256): A4:D3:15:0D:8D:10:48:93:96:1D:E4:61:5F:F7:F6:B4:CB:C2:01:F4:4C:02:99:37:03:2C:9E:24:0E:30:CF:CC
Debug: Using cached certificate for ca
Info: Caching certificate for ca
Debug: Using cached certificate_request for mx2.linuxtechi.com
Debug: Using cached certificate for ca
Debug: Using cached certificate for ca
Exiting; no certificate found and waitforcert is disabled
[root@mx2 ~]#
This means we have to manually sign the certificate of the CentOS 7 server from the Foreman machine. To sign the certificate from the Foreman dashboard, follow the steps below.
From the Infrastructure Tab –> Select Smart Proxies and then click on Edit option and select “Certificates
Foreman-Smart-Proxies-Certificates
Now sign the certificate of the machine whose state is pending; an example is shown below:
Sign-certificates-Foreman-Dashboard
Click on sign
To configure autosign, follow the steps below:
From the Infrastructure Tab –> Select Smart Proxies and then click on Edit option and select “Autosign
Create an autosign entry and specify the domain name for which Foreman should automatically sign certificates.
AutoSign-entry-foreman-Server
Click on Save. From now on, whenever any server from the domain “linuxtechi.com” is registered to the Foreman server, its certificate will be signed automatically.
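Behind the dashboard, this typically corresponds to a glob entry in Puppet's autosign.conf on the master (a sketch; the path assumes a standard Puppet 5 install):
# /etc/puppetlabs/puppet/autosign.conf
*.linuxtechi.com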
Now go back to the CentOS 7 server and re-run the puppet agent command; this time the command should execute successfully.
[root@mx2 ~]# /opt/puppetlabs/bin/puppet agent -td --server=foreman.linuxtechi.com
Verify the host from the dashboard: from the Hosts tab, select “All Hosts”; there we will see our newly registered host (the CentOS 7 server).
CentOs7-Host-Foreman-Dashboard
Let’s attach the NTP module to the newly registered server (mx2.linuxtechi.com). Select the host, click on Edit, then go to the “Puppet Classes” tab.
Click on the ‘+’ sign in front of ntp to add the module and then click on ‘Submit’.
Add-NTP-Class-Host-Foreman
Go to the CentOS 7 server and re-run the puppet agent command; this time it will configure ntp on your CentOS 7 server.
[root@mx2 puppet]# /opt/puppetlabs/bin/puppet agent -td --server=foreman.linuxtechi.com
Output of above command will be something like below:
Puppet-Agent-command-CentOS7-Server
This confirms that the CentOS 7 server has been registered and configured successfully via the Foreman server. This concludes the article; please share your thoughts in the comments section below.
Read more on “Bare metal and Virtual Machine Provisioning through Foreman Server

VPN Free DNS Leak Test & Dns Leak Protection

$
0
0
https://anonymster.com/dns-leak-test

Take our test to check if your VPN is Leaking

Test
DNS leak protection is pivotal if you want to stay anonymous online. Even if you are using an anonymity or privacy service like a VPN connection, you can still be the victim of a DNS leak.
Only the best VPNs have a built-in DNS leak protection tool. Is your VPN protecting you?
Our free DNS leak test tool will reveal if you are safe online in a matter of seconds.

How Does The DNS Leak Protection Tool Work

  • First, check what your real IP address is. To do that, go to our free IP address tool while not connected to your VPN. The IP address tool will show your real IP address and the server where you are currently connected. What you see is the server used by your ISP.
  • Take note of your real IP address.
  • Activate your VPN service and run the free DNS leak test above. If the IP you see is still the same as before, you are the victim of a DNS leak. If instead your VPN is working correctly, you will see a different IP address and the VPN server you connected to.
Voila! In just a few seconds you now know if you are safe or your sensitive data is exposed to cybercriminals.

How DNS Works?

The Domain Name System (DNS) is what allows you to surf the Internet and access the websites you want.
Try to think of a DNS server as a big address book where you can find all the websites available online. In this address book, every single website is assigned a specific “phone number”, or IP address. So, every time you type the name of a website, your ISP goes into the “address book”, or DNS server, extracts the IP of the website you are looking for and uses it to connect you to it.
As you can see, throughout this process your ISP knows exactly who you are and what you are doing on the Internet. Moreover, the ISP can even track your data. In short, you are totally exposed.
To avoid this annoying and dangerous problem, you must activate a VPN, which will protect you from prying eyes.
Once you access the Internet through your VPN, you bypass your ISP. The VPN will connect you to the Internet using its servers and DNS. This way, your ISP has no idea which website you are visiting and what information you are looking for.
Moreover, once you activate your VPN, your data is encrypted and therefore nobody can access it and use it with malicious intents.
The VPN works fine unless you are the victim of a DNS leak.

What Is A DNS Leak And Why Should You Care

A DNS leak poses a severe threat to your safety online. When using a VPN or another privacy service, you may assume you are safe and protected, while that may not be the case.
A DNS leak is not a malfunction of your VPN; it usually depends on the machine you are using. DNS leaks are more frequently an issue for Windows users, even though they can also happen if you use Mac or Linux.
The problem you may experience is that your machine uses a default DNS setting that keeps routing your queries to your ISP's DNS server instead of the VPN's. When that happens, your queries are not encrypted and your ISP can keep monitoring your activity.
You are the victim of a DNS leakage.
When that happens, your sensitive data and your private activity are exposed even though you are confident of being shielded by your VPN service. For instance, if you are engaging in P2P or BitTorrent file sharing and downloading copyrighted content, you may incur legal action. It may take you by surprise, since that shouldn't happen when you are using a safe connection through a VPN. Moreover, your sensitive data can unexpectedly fall into the wrong hands.
For these reasons, it is critical to check your DNS from time to time and especially when your connection needs to be 100% protected.
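Linux users can also do a quick manual check from the command line (a sketch; example.com is only a placeholder):
$ cat /etc/resolv.conf                     # the resolvers your system is configured to use
$ resolvectl status | grep 'DNS Servers'   # on systemd-resolved systems
$ dig example.com | grep 'SERVER:'         # the server that actually answered this query
If the server reported here belongs to your ISP while the VPN is up, your queries are leaking.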

How To Avoid A DNS Leak

One of the basic rules to improve your security on the Internet is to always change your DNS settings to bypass your Internet Service Provider. Here you can find a user-friendly tutorial to change DNS on Windows and Mac.
Besides that, you should choose an excellent VPN service that offers an integrated DNS leak protection.

Learn to use Wget command with 12 examples

$
0
0
http://linuxtechlab.com/learn-wget-command-12-examples

Every now and then we have to download files from the Internet; it's easy if you are using a GUI, but from the CLI it can be a bit difficult. The wget command makes it easy for us to download files from the Internet using the CLI. It's an extremely good utility and can handle all kinds of downloads.
So we are going to learn to use the wget command with the help of some examples.
(Recommended Read: Learning GREP command with examples )
(Also Read: Check Linux filesystem for errors: FSCK command with examples)

1- Downloading single file

If we only need to download a single file, use the following (the URL is quoted so the shell does not misinterpret the '&' characters in it):
$ wget "https://downloads.sourceforge.net/project/nagios/nagios-4.x/nagios-4.3.1/nagios-4.3.1.tar.gz?r=&ts=1489637334&use_mirror=excellmedia"
This will start downloading Nagios Core on your system, and during the download you will be able to see the percentage completed, the number of bytes downloaded, the current download speed and the time remaining for the download to complete.

2- Downloading file & storing with a different name

If we want to save the downloaded file under a different name than its default name, we can use the '-O' parameter with the wget command:
$ wget -O nagios_latest "https://downloads.sourceforge.net/project/nagios/nagios-4.x/nagios-4.3.1/nagios-4.3.1.tar.gz?r=&ts=1489637334&use_mirror=excellmedia"
This will save the file with the name nagios_latest.

3- Limit download speed of the files

We can limit the download speed of the files being downloaded, so that the whole network line is not choked up and other network operations are not affected. We can do this by using the '--limit-rate' parameter:
$ wget --limit-rate=500K "https://downloads.sourceforge.net/project/nagios/nagios-4.x/nagios-4.3.1/nagios-4.3.1.tar.gz?r=&ts=1489637334&use_mirror=excellmedia"
This will limit the download speed of the file to a maximum of 500 KB/s.

4- Complete an interrupted download

If a download is interrupted partway through, we can resume it with wget using the '-c' parameter:
$ wget -c "https://downloads.sourceforge.net/project/nagios/nagios-4.x/nagios-4.3.1/nagios-4.3.1.tar.gz?r=&ts=1489637334&use_mirror=excellmedia"
If you don't use '-c', the download will start from the beginning.

5- Download a file in background

If you are downloading a huge file and want to move the download into the background, you can do so by using the '-b' parameter:
$ wget -b "https://downloads.sourceforge.net/project/nagios/nagios-4.x/nagios-4.3.1/nagios-4.3.1.tar.gz?r=&ts=1489637334&use_mirror=excellmedia"

6- Downloading multiple files

If there is a list of URLs you need to download and you don't want to manually start the next download once the previous one completes, you can use the '-i' parameter. Before we start downloading, we need to create a file with all the URLs:
$ vi url.txt
and enter the URLs, one per line. After you have created the file, run the following command:
$ wget -i url.txt
This command will download all the files at those URLs, one after another.
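For example, the list can be built straight from the shell and fetched with resuming and logging enabled (a sketch; the URLs are placeholders):
$ printf '%s\n' "http://example.com/file1.iso" "http://example.com/file2.iso" > url.txt
$ wget -c -i url.txt -o downloads.log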

7- Increase total number of retries for the download URL

To increase the number of retries for the download, we can use the '--tries' parameter:
$ wget --tries=100 "https://downloads.sourceforge.net/project/nagios/nagios-4.x/nagios-4.3.1/nagios-4.3.1.tar.gz?r=&ts=1489637334&use_mirror=excellmedia"

8- For downloading files from anonymous FTP,

To download a file from FTP as an anonymous user:
$ wget FTP-URL
To download a file over FTP with a username and password:
$ wget --ftp-user=dan --ftp-password=********* FTP-URL

9- Replicate whole website

If you need to download all the contents of a website, you can do so by using the '--mirror' parameter:
$ wget --mirror -p --convert-links -P /home/dan xyz.com
Here, wget --mirror downloads the website,
-p downloads all files necessary to display the HTML pages properly,
--convert-links converts the links in the documents for local viewing,
-P /home/dan saves the files in the /home/dan directory.

10-Download only a certain type of files

To download only files of a certain type, use the '-r -A' parameters:
$ wget -r -A.txt Website_url

11-Restrict to download a certain file type

While downloading a website, if you don't want to download a certain file type you can exclude it by using the '--reject' parameter:
$ wget --reject=png Website_url

12- Download a file with custom log file

To download a file with a custom log file, use the '-o' parameter:
$ wget -o wgetfile.log "https://downloads.sourceforge.net/project/nagios/nagios-4.x/nagios-4.3.1/nagios-4.3.1.tar.gz?r=&ts=1489637334&use_mirror=excellmedia"
I hope this helps; with this we end our tutorial on the usage of the wget command. Please feel free to post your comments/queries in the comment box below.

Fix For Meltdown And Spectre

$
0
0
http://www.linuxandubuntu.com/home/how-hackers-can-read-your-websites-passwords-using-meltdown-and-spectre-with-solution

Hackers Can Read Your Websites' Passwords Using Meltdown And Spectre
Everyone is talking about Meltdown and Spectre, the two security flaws found in Intel, AMD (less vulnerable) and ARM CPUs. Using the flaws, attackers can read system memory, which may contain your passwords and other sensitive information. The worst part is that most systems are affected. So you're most likely affected by these flaws. Let's see how much an Internet surfer like you is affected by Meltdown.
First question: are you vulnerable or not? Most probably, yes. The flaws are present in virtually all modern CPUs, so you're most likely affected.

Secondly, how can an attacker read your system's memory? There are three variants that trigger the vulnerabilities, as described by the Google Project Zero team. If you're only an Internet surfer and think you're secure, you may not be. After the vulnerabilities were disclosed on the Google security blog, all software vendors came out and said that they had been working on fixes since they were informed. Luke Wagner from Mozilla confirmed in a blog post that similar techniques can be used from web content (JavaScript code, etc.) to read the private information of a website visitor.
Several recently-published research articles have demonstrated a new class of timing attacks (Meltdown and Spectre) that work on modern CPUs.  Our internal experiments confirm that it is possible to use similar techniques from Web content to read private information between different origins...
So there is no question that users like us, who mostly surf the Internet on our devices, are at risk. All it takes is a visit to a malicious website. Attackers may also start compromising websites to run malicious code on visitors' devices and read sensitive information, such as other sites' passwords saved in the web browser.

Firefox and Chrome have also confirmed that they're working on patches. Chrome will release a Meltdown-protected version on January 23. So will you (Chrome users) have to wait that long? Yes, but here is a quick interim solution as well.

Enable Site Isolation To Protect Browsers Against Meltdown And Spectre

Besides waiting for Chrome to release the Meltdown-protected version, Chrome/Chromium users can also use a solution that is already there: Site Isolation. With Site Isolation enabled, the content of every website is always rendered in a dedicated process, isolated from other websites, so its data is not readable by other sites. If you visit a malicious website that runs code in your browser, it won't be able to see data from other websites.

To enable Site Isolation in Chrome/Chromium, copy the following URL into the URL bar:

chrome://flags/#enable-site-per-process
You will see the highlighted option, Strict site isolation. Enable it, restart your web browser, and Site Isolation is working.
enable site isolation in chrome chromium

Site Isolation For Firefox Users

I also tried searching for an alternative solution for Firefox and only found First-Party Isolation. I'm not sure it will work against these vulnerabilities, because First-Party Isolation separates cookies and makes them inaccessible to other websites; I'm not sure it isolates the entire website content from other sites. Still, I've given instructions below to enable FPI in Firefox, so you can try your luck.

To enable First-Party Isolation, type about:config in the URL bar. Search for firstparty and you'll get the following options:
enable first-party isolation in firefox
As you can see, the value of privacy.firstparty.isolate is set to false. Double-click it to set it to true.
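If you prefer not to flip the preference by hand, the same setting can be pinned in a user.js file in your Firefox profile directory (a sketch; replace <your-profile> with your actual profile folder):
$ echo 'user_pref("privacy.firstparty.isolate", true);' >> ~/.mozilla/firefox/<your-profile>/user.js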
So this is how an attacker could target you and exploit the flaws. I've also mentioned the available workarounds so that you can at least apply what you have. Do share this article with your friends on social media and let them know about this solution.

30 Linux System Monitoring Tools Every SysAdmin Should Know

$
0
0
https://www.cyberciti.biz/tips/top-linux-monitoring-tools.html


Need to monitor Linux server performance? Try these built-in commands and a few add-on tools. Most distributions come with tons of Linux monitoring tools. These tools provide metrics which can be used to get information about system activities. You can use these tools to find the possible causes of a performance problem. The commands discussed below are some of the most fundamental commands when it comes to system analysis and debugging Linux server issues such as:
  1. Finding out system bottlenecks
  2. Disk (storage) bottlenecks
  3. CPU and memory bottlenecks
  4. Network bottleneck.

1. top – Process activity monitoring command

The top command displays Linux processes. It provides a dynamic real-time view of a running system, i.e., actual process activity. By default, it displays the most CPU-intensive tasks running on the server and updates the list every five seconds.
top - Linux monitoring command
Fig.01: Linux top command

Commonly Used Hot Keys With top Linux monitoring tools

Here is a list of useful hot keys:
Hot Key    Usage
t          Displays summary information off and on.
m          Displays memory information off and on.
A          Sorts the display by top consumers of various system resources. Useful for quick identification of performance-hungry tasks on a system.
f          Enters an interactive configuration screen for top. Helpful for setting up top for a specific task.
o          Enables you to interactively select the ordering within top.
r          Issues the renice command.
k          Issues the kill command.
z          Turns color/mono mode on or off.
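top can also be driven from scripts; a quick sketch that grabs a single snapshot for logging (standard procps options):
# top -b -n 1 | head -20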
How do I Find Out Linux CPU Utilization?

2. vmstat – Virtual memory statistics

The vmstat command reports information about processes, memory, paging, block IO, traps, and cpu activity.
# vmstat 3
Sample Outputs:
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
r b swpd free buff cache si so bi bo in cs us sy id wa st
0 0 0 2540988 522188 5130400 0 0 2 32 4 2 4 1 96 0 0
1 0 0 2540988 522188 5130400 0 0 0 720 1199 665 1 0 99 0 0
0 0 0 2540956 522188 5130400 0 0 0 0 1151 1569 4 1 95 0 0
0 0 0 2540956 522188 5130500 0 0 0 6 1117 439 1 0 99 0 0
0 0 0 2540940 522188 5130512 0 0 0 536 1189 932 1 0 98 0 0
0 0 0 2538444 522188 5130588 0 0 0 0 1187 1417 4 1 96 0 0
0 0 0 2490060 522188 5130640 0 0 0 18 1253 1123 5 1 94 0 0

Display Memory Utilization Slabinfo

# vmstat -m

Get Information About Active / Inactive Memory Pages

# vmstat -a
How do I find out Linux Resource utilization to detect system bottlenecks?

3. w – Find out who is logged on and what they are doing

w command displays information about the users currently on the machine, and their processes.
# w username
# w vivek

Sample Outputs:
 17:58:47 up 5 days, 20:28,  2 users,  load average: 0.36, 0.26, 0.24
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
root pts/0 10.1.3.145 14:55 5.00s 0.04s 0.02s vim /etc/resolv.conf
root pts/1 10.1.3.145 17:43 0.00s 0.03s 0.00s w

4. uptime – Tell how long the Linux system has been running

The uptime command can be used to see how long the server has been running. It shows the current time, how long the system has been running, how many users are currently logged on, and the system load averages for the past 1, 5, and 15 minutes.
# uptime
Output:
 18:02:41 up 41 days, 23:42,  1 user,  load average: 0.00, 0.00, 0.00
A load of 1 can be considered optimal for a single-CPU system, but acceptable values vary from system to system: for a single-CPU system a load of 1-3 might be acceptable, and for SMP systems 6-10.
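A rough rule of thumb is to compare the load average against the number of CPU cores; a quick sketch:
# uptime
# nproc     # number of processing units; a load persistently above this suggests CPU saturation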

5. ps – Displays the Linux processes

The ps command reports a snapshot of the current processes. To select all processes, use the -A or -e option:
# ps -A
Sample Outputs:
  PID TTY          TIME CMD
1 ? 00:00:02 init
2 ? 00:00:02 migration/0
3 ? 00:00:01 ksoftirqd/0
4 ? 00:00:00 watchdog/0
5 ? 00:00:00 migration/1
6 ? 00:00:15 ksoftirqd/1
....
.....
4881 ? 00:53:28 java
4885 tty1 00:00:00 mingetty
4886 tty2 00:00:00 mingetty
4887 tty3 00:00:00 mingetty
4888 tty4 00:00:00 mingetty
4891 tty5 00:00:00 mingetty
4892 tty6 00:00:00 mingetty
4893 ttyS1 00:00:00 agetty
12853 ? 00:00:00 cifsoplockd
12854 ? 00:00:00 cifsdnotifyd
14231 ? 00:10:34 lighttpd
14232 ? 00:00:00 php-cgi
54981 pts/0 00:00:00 vim
55465 ? 00:00:00 php-cgi
55546 ? 00:00:00 bind9-snmp-stat
55704 pts/1 00:00:00 ps
ps is just like top but provides more information.

Show Long Format Output

# ps -Al
To turn on extra full mode (it will show command line arguments passed to process):
# ps -AlF

Display Threads ( LWP and NLWP)

# ps -AlFH

Watch Threads After Processes

# ps -AlLm

Print All Process On The Server

# ps ax
# ps axu

Want To Print A Process Tree?

# ps -ejH
# ps axjf
# pstree

Get Security Information of Linux Process

# ps -eo euser,ruser,suser,fuser,f,comm,label
# ps axZ
# ps -eM

Let Us Print Every Process Running As User Vivek

# ps -U vivek -u vivek u

Configure ps Command Output In a User-Defined Format

# ps -eo pid,tid,class,rtprio,ni,pri,psr,pcpu,stat,wchan:14,comm
# ps axo stat,euid,ruid,tty,tpgid,sess,pgrp,ppid,pid,pcpu,comm
# ps -eopid,tt,user,fname,tmout,f,wchan

Try To Display Only The Process IDs of Lighttpd

# ps -C lighttpd -o pid=
OR
# pgrep lighttpd
OR
# pgrep -u vivek php-cgi

Print The Name of PID 55977

# ps -p 55977 -o comm=

Top 10 Memory Consuming Process

# ps -auxf | sort -nr -k 4 | head -10

Show Us Top 10 CPU Consuming Process

# ps -auxf | sort -nr -k 3 | head -10
Show All Running Processes in Linux

6. free – Show Linux server memory usage

free command shows the total amount of free and used physical and swap memory in the system, as well as the buffers used by the kernel.
# free
Sample Output:
            total       used       free     shared    buffers     cached
Mem: 12302896 9739664 2563232 0 523124 5154740
-/+ buffers/cache: 4061800 8241096
Swap: 1052248 0 1052248
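For more readable units or a periodically refreshing view, free accepts a few handy flags (a quick sketch):
# free -h        # human-readable units
# free -m -s 5   # megabytes, refreshed every 5 seconds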
  1. Linux Find Out Virtual Memory PAGESIZE
  2. Linux Limit CPU Usage Per Process
  3. How much RAM does my Ubuntu / Fedora Linux desktop PC have?

7. iostat – Monitor Linux average CPU load and disk activity

The iostat command reports Central Processing Unit (CPU) statistics and input/output statistics for devices, partitions and network filesystems (NFS).
# iostat
Sample Outputs:
Linux 2.6.18-128.1.14.el5 (www03.nixcraft.in) 	06/26/2009

avg-cpu: %user %nice %system %iowait %steal %idle
3.50 0.09 0.51 0.03 0.00 95.86

Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sda 22.04 31.88 512.03 16193351 260102868
sda1 0.00 0.00 0.00 2166 180
sda2 22.04 31.87 512.03 16189010 260102688
sda3 0.00 0.00 0.00 1615 0
Linux Track NFS Directory / Disk I/O Stats

8. sar – Monitor, collect and report Linux system activity

The sar command is used to collect, report, and save system activity information. To see the network counters, enter:
# sar -n DEV | more
The network counters from the 24th:
# sar -n DEV -f /var/log/sa/sa24 | more
You can also display real time usage using sar:
# sar 4 5
Sample Outputs:
Linux 2.6.18-128.1.14.el5 (www03.nixcraft.in) 		06/26/2009

06:45:12 PM CPU %user %nice %system %iowait %steal %idle
06:45:16 PM all 2.00 0.00 0.22 0.00 0.00 97.78
06:45:20 PM all 2.07 0.00 0.38 0.03 0.00 97.52
06:45:24 PM all 0.94 0.00 0.28 0.00 0.00 98.78
06:45:28 PM all 1.56 0.00 0.22 0.00 0.00 98.22
06:45:32 PM all 3.53 0.00 0.25 0.03 0.00 96.19
Average: all 2.02 0.00 0.27 0.01 0.00 97.70

9. mpstat – Monitor multiprocessor usage on Linux

The mpstat command displays activities for each available processor, processor 0 being the first one. Run mpstat -P ALL to display average CPU utilization per processor:
# mpstat -P ALL
Sample Output:
Linux 2.6.18-128.1.14.el5 (www03.nixcraft.in)	 	06/26/2009

06:48:11 PM CPU %user %nice %sys %iowait %irq %soft %steal %idle intr/s
06:48:11 PM all 3.50 0.09 0.34 0.03 0.01 0.17 0.00 95.86 1218.04
06:48:11 PM 0 3.44 0.08 0.31 0.02 0.00 0.12 0.00 96.04 1000.31
06:48:11 PM 1 3.10 0.08 0.32 0.09 0.02 0.11 0.00 96.28 34.93
06:48:11 PM 2 4.16 0.11 0.36 0.02 0.00 0.11 0.00 95.25 0.00
06:48:11 PM 3 3.77 0.11 0.38 0.03 0.01 0.24 0.00 95.46 44.80
06:48:11 PM 4 2.96 0.07 0.29 0.04 0.02 0.10 0.00 96.52 25.91
06:48:11 PM 5 3.26 0.08 0.28 0.03 0.01 0.10 0.00 96.23 14.98
06:48:11 PM 6 4.00 0.10 0.34 0.01 0.00 0.13 0.00 95.42 3.75
06:48:11 PM 7 3.30 0.11 0.39 0.03 0.01 0.46 0.00 95.69 76.89
Linux display each multiple SMP CPU processors utilization individually.

10. pmap – Monitor process memory usage on Linux

The pmap command reports the memory map of a process. Use this command to find the causes of memory bottlenecks.
# pmap -d PID
To display process memory information for pid # 47394, enter:
# pmap -d 47394
Sample Outputs:
47394:   /usr/bin/php-cgi
Address Kbytes Mode Offset Device Mapping
0000000000400000 2584 r-x-- 0000000000000000 008:00002 php-cgi
0000000000886000 140 rw--- 0000000000286000 008:00002 php-cgi
00000000008a9000 52 rw--- 00000000008a9000 000:00000 [ anon ]
0000000000aa8000 76 rw--- 00000000002a8000 008:00002 php-cgi
000000000f678000 1980 rw--- 000000000f678000 000:00000 [ anon ]
000000314a600000 112 r-x-- 0000000000000000 008:00002 ld-2.5.so
000000314a81b000 4 r---- 000000000001b000 008:00002 ld-2.5.so
000000314a81c000 4 rw--- 000000000001c000 008:00002 ld-2.5.so
000000314aa00000 1328 r-x-- 0000000000000000 008:00002 libc-2.5.so
000000314ab4c000 2048 ----- 000000000014c000 008:00002 libc-2.5.so
.....
......
..
00002af8d48fd000 4 rw--- 0000000000006000 008:00002 xsl.so
00002af8d490c000 40 r-x-- 0000000000000000 008:00002 libnss_files-2.5.so
00002af8d4916000 2044 ----- 000000000000a000 008:00002 libnss_files-2.5.so
00002af8d4b15000 4 r---- 0000000000009000 008:00002 libnss_files-2.5.so
00002af8d4b16000 4 rw--- 000000000000a000 008:00002 libnss_files-2.5.so
00002af8d4b17000 768000 rw-s- 0000000000000000 000:00009 zero (deleted)
00007fffc95fe000 84 rw--- 00007ffffffea000 000:00000 [ stack ]
ffffffffff600000 8192 ----- 0000000000000000 000:00000 [ anon ]
mapped: 933712K writeable/private: 4304K shared: 768000K
The last line is very important:
  • mapped: 933712K total amount of memory mapped to files
  • writeable/private: 4304K the amount of private address space
  • shared: 768000K the amount of address space this process is sharing with others
Linux find the memory used by a program / process using pmap command

11. netstat – Linux network and statistics monitoring tool

netstat command displays network connections, routing tables, interface statistics, masquerade connections, and multicast memberships.
# netstat -tulpn
# netstat -nat

12. ss – Network Statistics

The ss command is used to dump socket statistics. It shows information similar to netstat. Please note that netstat is mostly obsolete, hence you should use the ss command instead. To list all TCP or UDP sockets on Linux:
# ss -t -a
OR
# ss -u -a
Show all TCP sockets with process SELinux security contexts:
# ss -t -a -Z
See the following resources about ss and netstat commands:

13. iptraf – Get real-time network statistics on Linux

The iptraf command is an interactive, colorful IP LAN monitor. It is an ncurses-based tool that generates various network statistics, including TCP info, UDP counts, ICMP and OSPF information, Ethernet load info, node stats, IP checksum errors, and more. It can provide the following info in an easy-to-read format:
  • Network traffic statistics by TCP connection
  • IP traffic statistics by network interface
  • Network traffic statistics by protocol
  • Network traffic statistics by TCP/UDP port and by packet size
  • Network traffic statistics by Layer2 address
Fig.02: General interface statistics: IP traffic statistics by network interface
Fig.03 Network traffic statistics by TCP connection
Install IPTraf on a Centos / RHEL / Fedora Linux To Get Network Statistics

14. tcpdump – Detailed network traffic analysis

The tcpdump command is a simple command that dumps traffic on a network. However, you need a good understanding of the TCP/IP protocol to utilize this tool. For example, to display traffic info about DNS, enter:
# tcpdump -i eth1 'udp port 53'
View all IPv4 HTTP packets to and from port 80, i.e. print only packets that contain data, not, for example, SYN and FIN packets and ACK-only packets, enter:
# tcpdump 'tcp port 80 and (((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0)'
Show all FTP session to 202.54.1.5, enter:
# tcpdump -i eth1 'dst 202.54.1.5 and (port 21 or 20)'
Print all HTTP session to 192.168.1.5:
# tcpdump -ni eth0 'dst 192.168.1.5 and tcp and port http'
To capture packets to a file that you can later inspect in detail with Wireshark, enter:
# tcpdump -n -i eth1 -s 0 -w output.txt src or dst port 80

15. iotop – Linux I/O monitor

The iotop command monitors I/O usage information using the Linux kernel. It shows a table of current I/O usage sorted by process or thread on the server.
$ sudo iotop
Sample outputs:
iotop monitoring linux disk read write IO
Linux iotop: Check What’s Stressing And Increasing Load On Your Hard Disks

16. htop – interactive process viewer

htop is a free and open source ncurses-based process viewer for Linux. It is much better than the top command and very easy to use. You can select processes for killing or renicing without typing their PIDs or leaving the htop interface.
$ htop
Sample outputs:
htop process viewer for Linux

17. atop – Advanced Linux system & process monitor

atop is a very powerful, interactive monitor for viewing the load on a Linux system. It displays the most critical hardware resources from a performance point of view, so you can quickly see CPU, memory, disk and network performance. It also shows which processes are responsible for the indicated load at the process level.
$ atop
atop Command Line Tools to Monitor Linux Performance

18. ac and lastcomm – Process accounting

You must monitor process and login activity on your Linux server. The psacct or acct package contains several utilities for monitoring process activities, including:
  1. ac command : Show statistics about users’ connect time
  2. lastcomm command : Show info about previously executed commands
  3. accton command : Turns process accounting on or off
  4. sa command : Summarizes accounting information
How to keep a detailed audit trail of what’s being done on your Linux systems

19. monit – Process supervision

Monit is free and open source software that provides process supervision. It comes with the ability to restart services which have failed. You can use systemd, daemontools or any other such tool for the same purpose. This tutorial shows how to install and configure Monit for process supervision on Debian or Ubuntu Linux.

20. nethogs- Find out PIDs that using most bandwidth on Linux

NetHogs is a small but handy net top tool. It groups bandwidth by process name, such as Firefox, wget and so on. If there is a sudden burst of network traffic, start NetHogs and you will see which PID is causing the bandwidth surge.
$ sudo nethogs
nethogs linux monitoring tools open source
Linux: See Bandwidth Usage Per Process With Nethogs Tool

21. iftop – Show bandwidth usage on an interface by host

iftop command listens to network traffic on a given interface name such as eth0. It displays a table of current bandwidth usage by pairs of hosts.
$ sudo iftop
iftop in action

22. vnstat – A console-based network traffic monitor

vnstat is an easy-to-use, console-based network traffic monitor for Linux. It keeps a log of hourly, daily and monthly network traffic for the selected interface(s).
$ vnstat
vnstat linux network traffic monitor

23. nmon – Linux systems administrator, tuner, benchmark tool

nmon is the Linux sysadmin's ultimate tool for tuning purposes. It can show CPU, memory, network, disks, file systems, NFS, top process resources and partition information from the CLI.
$ nmon
nmon command
Install and Use nmon Tool To Monitor Linux Systems Performance

24. glances – Keep an eye on Linux system

glances is an open source cross-platform monitoring tool. It provides tons of information on the small screen. It can also work in client/server mode.
$ glances
Glances
Linux: Keep An Eye On Your System With Glances Monitor

25. strace – Monitor system calls on Linux

Want to trace Linux system calls and signals? Try the strace command. This is useful for debugging web server and other server problems. See how to use strace to trace a process and see what it is doing.
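A couple of common invocations, as a sketch (the PID and URL are placeholders):
# strace -c -p 12345        # attach to a running process and summarize its system calls on exit
# strace -f -e trace=network wget -q -O /dev/null https://example.com   # trace only network-related calls of a new command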

26. /proc/ file system – Various Linux kernel statistics

The /proc file system provides detailed information about various hardware devices and other Linux kernel internals. See the Linux kernel /proc documentation for further details. Common /proc examples:
# cat /proc/cpuinfo
# cat /proc/meminfo
# cat /proc/zoneinfo
# cat /proc/mounts

27. Nagios – Linux server/network monitoring

Nagios is a popular open source computer system and network monitoring application. You can easily monitor all your hosts, network equipment and services. It can send alerts when things go wrong and again when they get better. FAN is "Fully Automated Nagios"; its goal is to provide a Nagios installation including most tools provided by the Nagios community. FAN provides a CD-ROM image in the standard ISO format, making it easy to install a Nagios server. In addition, a wide range of tools is included in the distribution to improve the user experience around Nagios.

28. Cacti – Web-based Linux monitoring tool

Cacti is a complete network graphing solution designed to harness the power of RRDTool’s data storage and graphing functionality. Cacti provides a fast poller, advanced graph templating, multiple data acquisition methods, and user management features out of the box. All of this is wrapped in an intuitive, easy to use interface that makes sense for LAN-sized installations up to complex networks with hundreds of devices. It can provide data about network, CPU, memory, logged in users, Apache, DNS servers and much more. See how to install and configure Cacti network graphing tool under CentOS / RHEL.

29. KDE System Guard – Real-time Linux systems reporting and graphing

KSysguard is a network enabled task and system monitor application for KDE desktop. This tool can be run over ssh session. It provides lots of features such as a client/server architecture that enables monitoring of local and remote hosts. The graphical front end uses so-called sensors to retrieve the information it displays. A sensor can return simple values or more complex information like tables. For each type of information, one or more displays are provided. Displays are organized in worksheets that can be saved and loaded independently from each other. So, KSysguard is not only a simple task manager but also a very powerful tool to control large server farms.
Fig.05 KDE System Guard {Image credit: Wikipedia}
See the KSysguard handbook for detailed usage.

30. Gnome Linux system monitor

The System Monitor application enables you to display basic system information and monitor system processes, usage of system resources, and file systems. You can also use System Monitor to modify the behavior of your system. Although not as powerful as the KDE System Guard, it provides the basic information which may be useful for new users:
  • Displays various basic information about the computer’s hardware and software.
  • Linux Kernel version
  • GNOME version
  • Hardware
  • Installed memory
  • Processors and speeds
  • System Status
  • Currently available disk space
  • Processes
  • Memory and swap space
  • Network usage
  • File Systems
  • Lists all mounted filesystems along with basic information about each.
Fig.06 The Gnome System Monitor application

Bonus: Additional Tools

A few more tools:
  • nmap– scan your server for open ports.
  • lsof– list open files, network connections and much more.
  • ntop web-based tool – ntop is the best tool to see network usage in a way similar to what the top command does for processes, i.e., it is network traffic monitoring software. You can see network status and protocol-wise distribution of traffic for UDP, TCP, DNS, HTTP and other protocols.
  • Conky– Another good monitoring tool for the X Window System. It is highly configurable and is able to monitor many system variables including the status of the CPU, memory, swap space, disk storage, temperatures, processes, network interfaces, battery power, system messages, e-mail inboxes etc.
  • GKrellM– It can be used to monitor the status of CPUs, main memory, hard disks, network interfaces, local and remote mailboxes, and many other things.
  • mtr– mtr combines the functionality of the traceroute and ping programs in a single network diagnostic tool.
  • vtop– graphical terminal activity monitor on Linux
  • gtop– Awesome system monitoring dashboard for Linux/macOS Unix terminal
Did I miss something? Please add your favorite system monitoring tool in the comments.

Ansible: the Automation Framework That Thinks Like a Sysadmin

$
0
0
http://www.linuxjournal.com/content/ansible-automation-framework-thinks-sysadmin

I've written about and trained folks on various DevOps tools through the years, and although they're awesome, it's obvious that most of them are designed from the mind of a developer. There's nothing wrong with that, because approaching configuration management programmatically is the whole point. Still, it wasn't until I started playing with Ansible that I felt like it was something a sysadmin quickly would appreciate.
Part of that appreciation comes from the way Ansible communicates with its client computers—namely, via SSH. As sysadmins, you're all very familiar with connecting to computers via SSH, so right from the word "go", you have a better understanding of Ansible than the other alternatives.
With that in mind, I'm planning to write a few articles exploring how to take advantage of Ansible. It's a great system, but when I was first exposed to it, it wasn't clear how to start. It's not that the learning curve is steep. In fact, if anything, the problem was that I didn't really have that much to learn before starting to use Ansible, and that made it confusing. For example, if you don't have to install an agent program (Ansible doesn't have any software installed on the client computers), how do you start?

Getting to the Starting Line

The reason Ansible was so difficult for me at first is because it's so flexible with how to configure the server/client relationship, I didn't know what I was supposed to do. The truth is that Ansible doesn't really care how you set up the SSH system; it will utilize whatever configuration you have. There are just a couple things to consider:
  1. Ansible needs to connect to the client computer via SSH.
  2. Once connected, Ansible needs to elevate privilege so it can configure the system, install packages and so on.
Unfortunately, those two considerations really open a can of worms. Connecting to a remote computer and elevating privilege is a scary thing to allow. For some reason, it feels less vulnerable when you simply install an agent on the remote computer and let Chef or Puppet handle privilege escalation. It's not that Ansible is any less secure, but rather, it puts the security decisions in your hands.
Next I'm going to list a bunch of potential configurations, along with the pros and cons of each. This isn't an exhaustive list, but it should get you thinking along the right lines for what will be ideal in your environment. I also should note that I'm not going to mention systems like Vagrant, because although Vagrant is wonderful for building a quick infrastructure for testing and developing, it's so very different from a bunch of servers that the considerations are too dissimilar really to compare.

Some SSH Scenarios

1) SSHing into remote computer as root with password in Ansible config.
I started with a terrible idea. The "pros" of this setup is that it eliminates the need for privilege escalation, and there are no other user accounts required on the remote server. But, the cost for such convenience isn't worth it. First, most systems won't let you SSH in as root without changing the default configuration. Those default configurations are there because, quite frankly, it's just a bad idea to allow the root user to connect remotely. Second, putting a root password in a plain-text configuration file on the Ansible machine is mortifying. Really, I mentioned this possibility because it is a possibility, but it's one that should be avoided. Remember, Ansible allows you to configure the connection yourself, and it will let you do really dumb things. Please don't.
2) SSHing into a remote computer as a regular user, using a password stored in the Ansible config.
An advantage of this scenario is that it doesn't require much configuration of the clients. Most users are able to SSH in by default, so Ansible should be able to use credentials and log in fine. I personally dislike the idea of a password being stored in plain text in a configuration file, but at least it isn't the root password. If you use this method, be sure to consider how privilege escalation will take place on the remote server. I know I haven't talked about escalating privilege yet, but if you have a password in the config file, that same password likely will be used to gain sudo access. So with one slip, you've compromised not only the remote user's account, but also potentially the entire system.
3) SSHing into a remote computer as a regular user, authenticating with a key pair that has an empty passphrase.
This eliminates storing passwords in a configuration file, at least for the logging in part of the process. Key pairs without passphrases aren't ideal, but it's something I often do in an environment like my house. On my internal network, I typically use a key pair without a passphrase to automate many things like cron jobs that require authentication. This isn't the most secure option, because a compromised private key means unrestricted access to the remote user's account, but I like it better than a password in a config file.
4) SSHing into a remote computer as a regular user, authenticating with a key pair that is secured by a passphrase.
This is a very secure way of handling remote access, because it requires two different authentication factors: 1) the private key and 2) the passphrase to decrypt it. If you're just running Ansible interactively, this might be the ideal setup. When you run a command, Ansible should prompt you for the private key's passphrase, and then it'll use the key pair to log in to the remote system. Yes, the same could be done by just using a standard password login and not specifying the password in the configuration file, but if you're going to be typing a password on the command line anyway, why not add the layer of protection a key pair offers?
5) SSHing with a passphrase-protected key pair, but using ssh-agent to "unlock" the private key.
This doesn't perfectly answer the question of unattended, automated Ansible commands, but it does make a fairly secure setup convenient as well. The ssh-agent program authenticates the passphrase one time and then uses that authentication to make future connections. When I'm using Ansible, this is what I think I'd like to be doing. If I'm completely honest, I still usually use key pairs without passphrases, but that's typically because I'm working on my home servers, not something prone to attack.
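If you go the ssh-agent route (scenario 5), the setup on the Ansible machine looks roughly like this (a sketch; the key path is an example):

$ eval "$(ssh-agent -s)"      # start the agent for this shell session
$ ssh-add ~/.ssh/id_rsa       # type the passphrase once; later Ansible runs reuse the unlocked key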
There are some other considerations to keep in mind when configuring your SSH environment. Perhaps you're able to restrict the Ansible user (which is often your local user name) so it can log in only from a specific IP address. Perhaps your Ansible server can live in a different subnet, behind a strong firewall so its private keys are more difficult to access remotely. Maybe the Ansible server doesn't have an SSH server installed on itself so there's no incoming access at all. Again, one of the strengths of Ansible is that it uses the SSH protocol for communication, and it's a protocol you've all had years to tweak into a system that works best in your environment. I'm not a big fan of proclaiming what the "best practice" is, because in reality, the best practice is to consider your environment and choose the setup that fits your situation the best.

Privilege Escalation

Once your Ansible server connects to its clients via SSH, it needs to be able to escalate privilege. If you chose option 1 above, you're already root, and this is a moot point. But since no one chose option 1 (right?), you need to consider how a regular user on the client computer gains access. Ansible supports a wide variety of escalation systems, but in Linux, the most common options are sudo and su. As with SSH, there are a few situations to consider, although there are certainly other options.
1) Escalate privilege with su.
For Red Hat/CentOS users, the instinct might be to use su in order to gain system access. By default, those systems configure the root password during install, and to gain privileged access, you need to type it in. The problem with using su is that although it gives you total access to the remote system, it also gives you total access to the remote system. (Yes, that was sarcasm.) Also, the su program doesn't have the ability to authenticate with key pairs, so the password either must be interactively typed or stored in the configuration file. And since it's literally the root password, storing it in the config file should sound like a horrible idea, because it is.
2) Escalate privilege with sudo.
This is how Debian/Ubuntu systems are configured. A user in the correct group has access to sudo a command and execute it with root privileges. Out of the box, this still has the problem of password storage or interactive typing. Since storing the user's password in the configuration file seems a little less horrible, I guess this is a step up from using su, but it still gives complete access to a system if the password is compromised. (After all, typing sudo su - will allow users to become root just as if they had the root password.)
3) Escalate privilege with sudo and configure NOPASSWD in the sudoers file.
Again, in my local environment, this is what I do. It's not perfect, because it gives unrestricted root access to the user account and doesn't require any passwords. But when I do this, and use SSH key pairs without passphrases, it allows me to automate Ansible commands easily. I'll note again, that although it is convenient, it is not a terribly secure idea.
4) Escalate privilege with sudo and configure NOPASSWD on specific executables.
This idea might be the best compromise of security and convenience. Basically, if you know what you plan to do with Ansible, you can give NOPASSWD privilege to the remote user for just those applications it will need to use. It might get a little confusing, since Ansible uses Python for lots of things, but with enough trial and error, you should be able to figure things out. It is more work, but does eliminate some of the glaring security holes.

Implementing Your Plan

Once you decide how you're going to handle Ansible authentication and privilege escalation, you need to set it up. After you become well versed at Ansible, you might be able to use the tool itself to help "bootstrap" new clients, but at first, it's important to configure clients manually so you know what's happening. It's far better to automate a process you're familiar with than to start with automation from the beginning.
I've written about SSH key pairs in the past, and there are countless articles online for setting it up. The short version, from your Ansible computer, looks something like this:

# ssh-keygen
# ssh-copy-id -i .ssh/id_dsa.pub remoteuser@remote.computer.ip
# ssh remoteuser@remote.computer.ip

If you've chosen to use no passphrase when creating your key pairs, that last step should get you into the remote computer without typing a password or passphrase.
In order to set up privilege escalation in sudo, you'll need to edit the sudoers file. You shouldn't edit the file directly, but rather use:

# sudo visudo

This will open the sudoers file and allow you to make changes safely (it error-checks when you save, so you don't accidentally lock yourself out with a typo). There are examples in the file, so you should be able to figure out how to assign the exact privileges you want.
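For reference, the kinds of sudoers entries described in escalation options 3 and 4 look roughly like this (a sketch; the user name and command paths are examples, not prescriptions):

# Unrestricted, passwordless sudo (option 3)
remoteuser ALL=(ALL) NOPASSWD: ALL

# Passwordless sudo limited to specific executables (option 4)
remoteuser ALL=(ALL) NOPASSWD: /usr/bin/apt-get, /usr/bin/systemctl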
Once it's all configured, you should test it manually before bringing Ansible into the picture. Try SSHing to the remote client, and then try escalating privilege using whatever methods you've chosen. Once you have configured the way you'll connect, it's time to install Ansible.

Installing Ansible

Since the Ansible program gets installed only on the single computer, it's not a big chore to get going. Red Hat/Ubuntu systems do package installs a bit differently, but neither is difficult.
In Red Hat/CentOS, first enable the EPEL repository:

sudo yum install epel-release

Then install Ansible:

sudo yum install ansible

In Ubuntu, first enable the Ansible PPA:

sudo apt-add-repository ppa:ansible/ansible
(press ENTER to access the key and add the repo)

Then install Ansible:

sudo apt-get update
sudo apt-get install ansible

Configuring Ansible Hosts File

The Ansible system has no way of knowing which clients you want it to control unless you give it a list of computers. That list is very simple, and it looks something like this:

# file /etc/ansible/hosts

[webservers]
blogserver ansible_host=192.168.1.5
wikiserver ansible_host=192.168.1.10

[dbservers]
mysql_1 ansible_host=192.168.1.22
pgsql_1 ansible_host=192.168.1.23

The bracketed sections specify groups. Individual hosts can be listed in multiple groups, and Ansible can refer either to individual hosts or to groups. This is also the configuration file where things like plain-text passwords would be stored, if that's the sort of setup you've planned. Each line in the configuration file configures a single host, and you can add multiple declarations after the ansible_host statement. Some useful options are listed below, followed by an illustrative entry:

ansible_ssh_pass
ansible_become
ansible_become_method
ansible_become_user
ansible_become_pass
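For example, reusing the wikiserver line from above, a hypothetical entry combining several of these might look like this (the password values are obvious placeholders, and password-based SSH logins from Ansible also require the sshpass utility on the control machine):

wikiserver ansible_host=192.168.1.10 ansible_ssh_pass=plainTextPassword ansible_become=yes ansible_become_method=sudo ansible_become_user=root ansible_become_pass=plainTextSudoPassword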

The Ansible Vault

I also should note that although the setup is more complex, and not something you'll likely do during your first foray into the world of Ansible, the program does offer a way to encrypt passwords in a vault. Once you're familiar with Ansible and you want to put it into production, storing those passwords in an encrypted Ansible vault is ideal. But in the spirit of learning to crawl before you walk, I recommend starting in a non-production environment and using passwordless methods at first.
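When you do get there, the basic workflow is only a few commands (a sketch; secret.yml is a hypothetical file name):

ansible-vault create secret.yml
ansible-vault view secret.yml
ansible-vault edit secret.yml

Each command prompts for the vault password and handles the encryption and decryption for you.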

Testing Your System

Finally, you should test your system to make sure your clients are connecting. The ping test will make sure the Ansible computer can ping each host:

ansible -m ping all

After running it, you should see a ping: pong message for each defined host if the connection succeeded. The ping module connects over SSH and runs a small piece of code on the remote machine, so a successful pong confirms authentication as well as network connectivity. To confirm you can run real commands on the remote hosts, try this:

ansible -m shell -a 'uptime' webservers

You should see the results of the uptime command for each host in the webservers group.
In a future article, I plan to start digging into Ansible's ability to manage the remote computers. I'll look at various modules and how you can use the ad-hoc mode to accomplish in a few keystrokes what would take a long time to handle individually on the command line. If you didn't get the results you expected from the sample Ansible commands above, take this time to make sure authentication is working. Check out the Ansible docs for more help if you get stuck.

How to Change Your Linux Console Fonts

https://www.linux.com/learn/intro-to-linux/2018/1/how-change-your-linux-console-fonts

Yes, you can change your Linux console fonts; Carla Schroder shows how.
I try to be a peaceful soul, but some things make that difficult, like tiny console fonts. Mark my words, friends, someday your eyes will be decrepit and you won't be able to read those tiny fonts you coded into everything, and then you'll be sorry, and I will laugh.
Fortunately, Linux fans, you can change your console fonts. As always, the ever-changing Linux landscape makes this less than straightforward, and font management on Linux is non-existent, so we'll muddle along as best we can. In this article, I'll show what I've found to be the easiest approach.

What is the Linux Console?

Let us first clarify what we're talking about. When I say Linux console, I mean TTY1-6, the virtual terminals that you access from your graphical desktop with Ctrl+Alt+F1 through F6. To get back to your graphical environment, press Alt+F7. (This is no longer universal, however, and your Linux distribution may have it mapped differently. You may have more or fewer TTYs, and your graphical session may not be at F7. For example, Fedora puts the default graphical session at F2, and an extra one at F1.) I think it is amazingly cool that we can have both X and console sessions running at the same time.
The Linux console is part of the kernel, and does not run in an X session. This is the same console you use on headless servers that have no graphical environments. I call the terminals in a graphical session X terminals, and terminal emulators is my catch-all name for both console and X terminals.
But that's not all. The Linux console has come a long way from the early ANSI days, and thanks to the Linux framebuffer, it has Unicode and limited graphics support. There are also a number of console multimedia applications that we will talk about in a future article.

Console Screenshots

The easy way to get console screenshots is from inside a virtual machine. Then you can use your favorite graphical screen capture program from the host system. You may also make screen captures from your console with fbcat or fbgrab. fbcat creates a portable pixmap format (PPM) image; this is a highly portable uncompressed image format that should be readable on any operating system, and of course you can convert it to whatever format you want. fbgrab is a wrapper script to fbcat that creates a PNG file. There are multiple versions of fbgrab written by different people floating around. Both have limited options and make only a full-screen capture.
fbcat needs root permissions, and its output must be redirected to a file. Do not specify a file extension; just give the filename:
$ sudo fbcat > Pictures/myfile
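The capture is a PPM image even without an extension; if you would rather end up with a PNG, ImageMagick can convert it (a sketch, assuming ImageMagick is installed):
$ convert Pictures/myfile Pictures/myfile.png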
After cropping in GIMP, I get Figure 1.

Figure 1: View after cropping.
It would be nice to have a little padding on the left margin, so if any of you excellent readers know how to do this, please tell us in the comments.
fbgrab has a few more options that you can read about in man fbgrab, such as capturing a different console and adding a time delay. This example makes a screen grab just like fbcat, except you don't have to explicitly redirect:
$ sudo fbgrab Pictures/myOtherfile

Finding Fonts

As far as I know, there is no way to list your installed kernel fonts other than looking in the directories they are stored in: /usr/share/consolefonts/ (Debian/etc.), /lib/kbd/consolefonts/ (Fedora), /usr/share/kbd/consolefonts (openSUSE)...you get the idea.

Changing Fonts

Readable fonts are not a new concept. Embrace the old! Readability matters. And so does configurability, which sometimes gets lost in the rush to the new-shiny.
On Debian/Ubuntu/etc. systems you can run sudo dpkg-reconfigure console-setup to set your console font, then run the setupcon command in your console to activate the changes. setupcon is part of the console-setup package. If your Linux distribution doesn't include it, there might be a package for you at openSUSE.
You can also edit /etc/default/console-setup directly. This example sets the Terminus Bold font at 32 points, which is my favorite, and restricts the width to 80 columns.
ACTIVE_CONSOLES="/dev/tty[1-6]"
CHARMAP="UTF-8"
CODESET="guess"
FONTFACE="TerminusBold"
FONTSIZE="16x32"
SCREEN_WIDTH="80"
The FONTFACE and FONTSIZE values come from the font's filename, TerminusBold32x16.psf.gz. Yes, you have to know to reverse the order for FONTSIZE. Computers are so much fun. Run setupcon to apply the new configuration. You can see the whole character set for your active font with showconsolefont. Refer to man console-setup for complete options.
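If you just want to try a font in the current session without touching any configuration, the setfont command from the kbd package loads it immediately (a sketch; the exact path and file name depend on your distribution's consolefonts directory):
$ sudo setfont /usr/share/consolefonts/TerminusBold32x16.psf.gz
The change lasts only until the next reboot or the next run of setupcon.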

Systemd

Systemd is different from console-setup, and you don't need to install anything, except maybe some extra font packages. All you do is edit /etc/vconsole.conf and then reboot. On my Fedora and openSUSE systems I had to install some extra Terminus packages to get the larger sizes as the installed fonts only went up to 16 points, and I wanted 32. This is the contents of /etc/vconsole.conf on both systems:
KEYMAP="us"
FONT="ter-v32b"
Come back next week to learn some more cool console hacks, and some multimedia console applications.

How To Display Asterisks When You Type Password In terminal

https://www.ostechnix.com/display-asterisks-type-password-terminal

 
When you type passwords in a web browser login or any GUI login, the passwords are masked as asterisks like ******** or bullets like •••••••••••••. This is a built-in security mechanism to prevent the users near you from viewing your password. But when you type a password in the Terminal to perform an administrative task with sudo or su, you won't even see asterisks or bullets as you type. There is no visual indication of entering a password, no cursor movement, nothing at all. You will not know whether you entered all the characters or not. All you will see is a blank screen!
Look at the following screenshot.

As you can see in the above image, I've already entered the password, but there was no indication (either asterisks or bullets). Now I am not sure whether I entered all the characters of my password or not. This security mechanism also prevents the person near you from guessing the length of your password. Of course, this behavior can be changed, and that is what this guide is all about. It is not that difficult. Read on!

Display Asterisks When You Type Password In terminal

To display asterisks as you type your password in the Terminal, we need to make a small modification to the "/etc/sudoers" file. Before making any changes, it is better to back up this file. To do so, just run:
sudo cp /etc/sudoers{,.bak}
The above command will back up the /etc/sudoers file to a new file named /etc/sudoers.bak. You can restore it in case something goes wrong after editing the file.
Next, edit “/etc/sudoers” file using command:
sudo visudo
Find the following line:
Defaults env_reset

Append ",pwfeedback" to the end of that line as shown below.
Defaults env_reset,pwfeedback

Then press "CTRL+x" and "y" to save and close the file. Restart your Terminal for the changes to take effect.
Now, you will see asterisks when you enter password in Terminal.

If you're not comfortable seeing a blank screen when you type passwords in the Terminal, this small tweak will help. Please be aware that other users could guess the length of your password if they watch you type it. If you don't mind that, go ahead and make the changes as described above to make your password visible (masked as asterisks, of course!).
And, that's all for now. More good stuff to come. Stay tuned!
Cheers!

Linux paste Command Explained For Beginners (5 Examples)

https://www.howtoforge.com/linux-paste-command

Sometimes, while working on the command line in Linux, there may arise a situation wherein you have to merge lines of multiple files to create more meaningful/useful data. Well, you'll be glad to know there exists a command line utility paste that does this for you. In this tutorial, we will discuss the basics of this command as well as the main features it offers using easy to understand examples.
But before we do that, it's worth mentioning that all examples mentioned in this article have been tested on Ubuntu 16.04 LTS.

Linux paste command

As already mentioned above, the paste command merges lines of files. Here's the tool's syntax:
paste [OPTION]... [FILE]...
And here's how the man page of paste explains it:
Write lines consisting of the sequentially corresponding lines from each FILE, separated by TABs, 
to standard output. With no FILE, or when FILE is -, read standard input.
The following Q&A-styled examples should give you a better idea on how paste works.

Q1. How to join lines of multiple files using paste command?

Suppose we have three files - file1.txt, file2.txt, and file3.txt - with following contents:
[Screenshot: contents of file1.txt, file2.txt, and file3.txt]
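Since the screenshot isn't reproduced here, assume the following contents (these values are an illustration, not the original files):

file1.txt:
1
2
3

file2.txt:
India
Germany
Brazil

file3.txt:
Asia
Europe
South America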
If the task is to merge the lines of these files so that each row of the final output contains the index, country, and continent, you can do that using paste in the following way:
paste file1.txt file2.txt file3.txt
[Screenshot: result of merging the lines]
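With the assumed files above, the merged output would look like this (the columns are separated by tab characters):

1    India      Asia
2    Germany    Europe
3    Brazil     South America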

Q2. How to apply delimiters when using paste?

Sometimes, there can be a requirement to add a delimiting character between entries of each resulting row. This can be done using the -d command line option, which requires you to provide the delimiting character you want to use.
For example, to apply a colon (:) as a delimiting character, use the paste command in the following way:
paste -d : file1.txt file2.txt file3.txt
Here's the output this command produced on our system:
[Screenshot: output with a colon delimiter]
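With the assumed files above, the output would be:

1:India:Asia
2:Germany:Europe
3:Brazil:South America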

Q3. How to change the way in which lines are merged?

By default, the paste command merges lines so that entries in the first column belong to the first file, those in the second column to the second file, and so on. However, if you want, you can change this so that the merge operation happens row-wise instead.
You can do this using the -s command line option.
paste -s file1.txt file2.txt file3.txt
Following is the output:
[Screenshot: output with the -s option]
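With the assumed files above, each file now becomes a single row (fields separated by tabs):

1        2          3
India    Germany    Brazil
Asia     Europe     South America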

Q4. How to use multiple delimiters?

Yes, you can use multiple delimiters as well. For example, if you want to use both : and |, you can do that in the following way:
paste -d ':|' file1.txt file2.txt file3.txt
Following is the output:
[Screenshot: output with multiple delimiters]
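With the assumed files above, paste cycles through the delimiter list, so the first separator on each row is a colon and the second is a pipe:

1:India|Asia
2:Germany|Europe
3:Brazil|South America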

Q5. How to make sure merged lines are NUL terminated?

By default, lines merged through paste end in a newline. However, if you want, you can make them NUL terminated, something which you can do using the -z option.
paste -z file1.txt file2.txt file3.txt

Conclusion

As most of you'd agree, the paste command isn't difficult to understand and use. It may offer a limited set of command line options, but the tool does what it claims. You may not require it on a daily basis, but paste can be a real time-saver in some scenarios. Just in case you need it, here's the tool's man page.

Tlog - A Tool to Record / Play Terminal IO and Sessions

https://linoxide.com/linux-how-to/tlog-tool-record-play-terminal-io-sessions

Tlog is a terminal I/O recording and playback package for Linux distros. It's suitable for implementing centralized user session recording. It logs everything that passes through as JSON messages. The primary purpose of logging in JSON format is to eventually deliver the recorded data to a storage service such as Elasticsearch, where it can be searched and queried, and from where it can be played back. At the same time, the messages retain all the passed data and timing.
Tlog contains three tools namely tlog-rec, tlog-rec-session and tlog-play.
  • Tlog-rec tool is used for recording terminal input or output of programs or shells in general.
  • Tlog-rec-session tool is used for recording I/O of whole terminal sessions, with protection from recorded users.
  • Tlog-play tool for playing back the recordings.
In this article, I'll explain how to install Tlog on a CentOS 7.4 server.

Installation

Before proceeding with the install, we need to ensure that our system meets all the software requirements for compiling and installing the application. As a first step, update your system repositories and software packages using the command below.
#yum update
We need to install the required dependencies for this software installation. I've installed all dependency packages with these commands prior to the installation.
#yum install wget gcc
#yum install systemd-devel json-c-devel libcurl-devel m4
After completing these installations, we can download the source package for this tool and extract it on your server as required:
#wget https://github.com/Scribery/tlog/releases/download/v3/tlog-3.tar.gz
#tar -xvf tlog-3.tar.gz
# cd tlog-3
Now you can start building this tool using our usual configure and make approach.
#./configure --prefix=/usr --sysconfdir=/etc && make
#make install
#ldconfig
Finally, you need to run ldconfig. It creates the necessary links and cache to the most recent shared libraries found in the directories specified on the command line, in the file /etc/ld.so.conf, and in the trusted directories (/lib and /usr/lib).

Tlog workflow chart

[Diagram: Tlog working process]
Firstly, a user authenticates to log in via PAM. The Name Service Switch (NSS) provides tlog as the user's shell. This starts a tlog session, which collects information from the env/config files about the actual shell and launches that shell in a PTY. It then logs everything passing between the terminal and the PTY via syslog or sd-journal.

Usage

You can test if session recording and playback work in general with a freshly installed tlog, by recording a session into a file with tlog-rec and then playing it back with tlog-play.

Recording to a file

To record a session into a file, execute tlog-rec on the command line as such:
tlog-rec --writer=file --file-path=tlog.log
This command will record our terminal session to a file named tlog.log and save it in the path specified in the command.

Playing back from a file

You can play back the recorded session during or after recording using the tlog-play command.
tlog-play --reader=file --file-path=tlog.log
This command reads the previously recorded file tlog.log from the file path mentioned in the command line.

Wrapping up

Tlog is an open-source package which can be used for implementing centralized user session recording. It is mainly intended to be used as part of a larger user session recording solution but is designed to be independent and reusable. This tool can be a great help for recording everything users do and storing it somewhere safe on the server side for future reference. You can get more details about this package's usage in this documentation. I hope this article is useful to you. Please post your valuable suggestions and comments on this.

Creating an Offline YUM repository for LAN

http://linuxtechlab.com/offline-yum-repository-for-lan


In our earlier tutorial, we discussed "How we can create our own yum repository with ISO image & by mirroring an online yum repository". Creating your own yum repository is a good idea, but not ideal if you are only using 2-3 Linux machines on your network. It definitely has advantages, though, when you have a large number of Linux servers on your network that are updated regularly, or when you have some sensitive Linux machines that can't be exposed to the Internet directly.
When we have a large number of Linux systems and each system updates directly from the Internet, the data consumed will be enormous. To save that data, we can create an offline yum repository and share it over our local network. Other Linux machines on the network will then fetch system updates directly from this local yum, saving data, and transfer speeds will also be very good since we stay on our local network.
We can share our yum repository using any of the following or both methods:
  • Using Web Server (Apache)
  • Using ftp (VSFTPD)
We will be discussing both of these methods but before we start, you should create a YUM repository using my earlier tutorial (READ HERE)

Using Web Server

Firstly we need to install a web server (Apache) on our yum server, which has the IP address 192.168.1.100. Since we have already configured a yum repository for this system, we will install the Apache web server using the yum command,
$ yum install httpd
Next, we need to copy all the rpm packages to the default Apache root directory, i.e. /var/www/html; or, since we have already copied our packages to /YUM, we can create a symbolic link under /var/www/html that points to /YUM
$ ln -s /YUM /var/www/html/CentOS
Restart your web server to apply the changes
$ systemctl restart httpd

Configuring the client machine

Configuration for sharing the yum repository on the server side is complete. Now we will configure our client machine, with IP address 192.168.1.101, to receive updates from our offline yum repository.
Create a file named offline-yum.repo in the /etc/yum.repos.d folder & enter the following details,
$ vi /etc/yum.repos.d/offline-yum.repo
[offline-yum]
name=Local YUM
baseurl=http://192.168.1.100/CentOS/7
gpgcheck=0
enabled=1
Your Linux machine is now configured to receive updates over the LAN from your offline yum repository. To confirm that the repository is working, try installing or updating packages with yum.
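For instance, a quick check might look like this (httpd here is just an example package name):
$ yum clean all
$ yum repolist
$ yum install httpd
The repolist output should show the offline-yum repository, and the install proves that packages can actually be fetched from it.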

Using FTP server

For sharing our YUM repository over ftp, we will first install the required package, i.e. vsftpd
$ yum install vsftpd
The default root directory for vsftpd is /var/ftp/pub, so either copy the rpm packages to this folder or create a symbolic link under /var/ftp/pub that points to /YUM,
$ ln -s /YUM /var/ftp/pub/CentOS
Now, restart the server to apply the changes
$ systemctl restart vsftpd

Configuring the client machine

We will now create a file named offline-yum.repo in /etc/yum.repos.d, as we did above, & enter the following details,
$ vi /etc/yum.repos.d/offline-yum.repo
[offline-yum]
name=Local YUM
baseurl=ftp://192.168.1.100/pub/CentOS/7
gpgcheck=0
enabled=1
Your client machine is now ready to receive updates over ftp. For configuring the vsftpd server to share files with other Linux systems, read the tutorial here.

Both methods for sharing an offline yum repository over the LAN are good, and you can choose either of them; both should work fine. If you have any queries/comments, please share them in the comment box down below.

Making Vim Even More Awesome With These Cool Features

http://www.linuxandubuntu.com/home/making-vim-even-more-awesome-with-these-cool-features

Vim is quite an integral part of every Linux distribution and the most useful tool (of course, after the terminal) for Linux users. At least, that theory holds for me. People might argue that for programming, Vim might not be a good choice, as there are different IDEs and other sophisticated text editors like Sublime Text 3, Atom, etc. that make the programming job much easier.

My Thoughts

But what I think is that Vim works the way we want it to right from the very start, while other editors make us work the way they have been designed, not the way we actually want them to work. I can't say much about other editors because I haven't used them much (I'm biased toward Vim).

Anyway, let's make something out of Vim that really does the job damn well.

​Vim for Programming

Executing the Code

Consider a scenario: what do we do when we are working on C++ code in Vim and need to compile and run it?

(a) We get back to the terminal, either by suspending Vim (Ctrl + Z) or by saving and quitting (:wq).

(b) And the trouble isn't over; we now need to type something on the terminal like { g++ fileName.cxx }.

(c) And after that, execute it by typing { ./a.out }.

Certainly a lot of things need to be done to get our C++ code running from the shell. But it doesn't seem to be the Vim way of doing this (as Vim always tends to keep almost everything within one or two keypresses). So, what is the Vim way of doing this stuff?

The Vim Way

Vim isn't just a text editor; it is sort of a programming language for editing text. And the programming language that helps us extend the features of Vim is "VimScript".

So, with the help of VimScript, we can easily automate the task of compiling and running code with just a keypress.
[Screenshot: the CPP() function in the author's .vimrc]
Above is a snippet out of my .vimrc configuration file where I created a function called CPP().
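Since that screenshot is not reproduced here, the following is a minimal sketch of what such a function might look like, based on the description below; it is an assumption, not the author's verbatim snippet:

func! CPP()
    " % expands to the name of the file in the current buffer
    exec "!clear && g++ % && ./a.out"
endfunc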

Creating Functions in VimScript

The syntax for creating a function in VimScript is pretty easy. It starts with the keyword "func" and is followed by the name of the function (function names must start with a capital letter in VimScript, otherwise Vim will give an error). The end of the function is denoted by the keyword "endfunc".

In the function's body, you can see an exec statement; whatever you write after the exec keyword is executed in Vim's command mode (remember the thing starting with : at the bottom of Vim's window). Now, the string that I passed to exec is -
[Screenshot: the command string passed to exec and the symbols it uses]
What happens is that when this function is called, it first clears the terminal screen, so that only your output is visible, then it runs g++ on the file you are working on, and after that executes the a.out file produced by the compilation.

Mapping Ctrl+r to run C++ code

I mapped the statement :call CPP() to the key-combination (Ctrl+r) so that I could now press Ctrl+r to execute my C++ Code without manually typing :call CPP() and then pressing Enter.
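In .vimrc terms, that mapping might look something like this (a sketch; the author's actual line was only shown as a screenshot):

nnoremap <C-r> :call CPP()<CR>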

End Result

We finally managed to find the Vim way of doing that stuff. So now, you just hit a button and the output of your C++ code is on your screen; you don't need to type that whole lengthy thing. It saves your time too.

We can achieve this sort of functionality for other languages too.
[Screenshot: a similar function in .vimrc for Python]
So for Python: you could now press Ctrl+r to interpret your code.
[Screenshot: a similar function in .vimrc for Java]
For Java: you could now press Ctrl+r; it will first compile your Java code, then interpret the resulting class file and show you the output.

Picture ain’t over, Marching a level deep

So, this was all about how you can manipulate things to work your way in Vim. Now it comes to how we implement all this in Vim. We could use these code snippets directly in Vim, but the other way is by using autocommands in Vim (autocmds). The beauty of autocmd is that these commands need not be called by the user; they execute by themselves whenever a certain condition, provided by the user, is met.

What I want to do with this autocmd thing is that, instead of using different mappings to execute code in different programming languages, I would like a single mapping that executes the code for every language.
[Screenshot: autocmd lines in .vimrc]
What we did here is that I wrote autocommands for all the file types for which I had code-execution functions.

What's going to happen is that as soon as I open a buffer of any of the above-mentioned file types, Vim will automatically map (Ctrl + r) to the function call, with <CR> representing the Enter key, so that I don't need to press Enter every time; pressing (Ctrl + r) alone does the job.
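A sketch of what those autocommands might look like (the PY() and JAVA() function names are assumptions, since the original snippets were screenshots):

autocmd FileType cpp    nnoremap <buffer> <C-r> :call CPP()<CR>
autocmd FileType python nnoremap <buffer> <C-r> :call PY()<CR>
autocmd FileType java   nnoremap <buffer> <C-r> :call JAVA()<CR>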

To achieve this functionality, you just need to add the function snippets to your [dot]vimrc and, after that, add all those autocmds. With that, the next time you open Vim, it will have all the functionality to execute all your code with the very same keybindings.

Conclusion

That's all for now. I hope this makes you love your Vim even more. I am currently exploring things in Vim, reading documentation, and making additions to my [.vimrc] file, and I will reach out to you again when I have something wonderful to share with you all.

If you want to have a look at my current [.vimrc] file, here is the link to my Github account: MyVimrc. Please do comment on how you liked the article.

xfs file system commands with examples

https://kerneltalks.com/commands/xfs-file-system-commands-with-examples

Learn XFS file system commands to create, grow, and repair an XFS file system, along with command examples.

In another article we walked you through what XFS is, its features, etc. In this article we will see some frequently used XFS administrative commands. We will see how to create, grow, repair, and check an XFS filesystem, along with command examples.

Create XFS filesystem

The mkfs.xfs command is used to create an XFS filesystem. Without any special switches, it only needs the device to format.
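A minimal sketch (the device name /dev/sdb1 is hypothetical; substitute your own partition or logical volume):
# mkfs.xfs /dev/sdb1
The command prints the geometry of the new filesystem (block size, block counts, and the log and naming sections); those numbers are what you will need later when growing the filesystem.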
Note: Once an XFS filesystem is created, it cannot be reduced. It can only be extended to a bigger size.

Resize XFS file system

In XFS, you can only extend the file system; you cannot reduce it. To grow an XFS file system, use xfs_growfs. You specify the new size of the mount point with the -D switch, which takes the new size as a number of filesystem blocks. If you don't supply the -D switch, xfs_growfs grows the filesystem to the maximum available size on that device.
In the output above, observe the last line. Since I supplied a new size smaller than the existing size, xfs_growfs didn't change the filesystem. This shows you cannot reduce an XFS file system; you can only extend it.
Now I supplied a new size 1 GB larger, and it successfully grew the file system.
1 GB block calculation:
The current filesystem has bsize=4096, i.e. a block size of 4 KB. 1 GB therefore equals 262144 blocks. Adding 262144 to the current block count of 2883584 gives 3145728, so that is the number to pass to the -D switch.
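A sketch of both forms, assuming the filesystem is mounted at /data (a hypothetical mount point):
# xfs_growfs -D 3145728 /data
# xfs_growfs /data
The first command grows the filesystem to exactly 3145728 blocks; the second, without -D, grows it to the maximum size the underlying device allows.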

Repair XFS file system

File system consistency checking and repair of XFS can be performed using the xfs_repair command. You can run the command with the -n switch so that it will not modify anything on the filesystem; it only scans and reports which modifications would be made. Run without the -n switch, it modifies the file system wherever necessary to make it clean.
Please note that you need to unmount the XFS filesystem before you can run checks on it; otherwise xfs_repair will refuse to run and report that the filesystem is mounted.
Once the file system is successfully unmounted, you can run the command on it.
In the report, you can observe that in each phase the command shows the possible modifications which could be made to bring the file system back to a healthy state. If you want the command to actually make those modifications during the scan, run it without any switch.
In that case you can observe that xfs_repair executes the possible filesystem modifications as well, to make it healthy.
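The sequence looks roughly like this (the device and mount point names are hypothetical):
# umount /data
# xfs_repair -n /dev/sdb1
# xfs_repair /dev/sdb1
# mount /dev/sdb1 /data
The -n run only reports what would be changed; the second run performs the repairs, after which the filesystem can be mounted again.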

Check XFS version and details

Checking the XFS file system version is easy: run the xfs_info command with the -V switch on the mount point.
To view details of an XFS file system, such as the block size and number of blocks (which help you calculate the new block count when growing the filesystem), use xfs_info without any switch.
It displays the same details shown when the XFS file system was created.
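For example (again with a hypothetical mount point):
# xfs_info -V
# xfs_info /data
The first command reports the xfs_info/xfsprogs version; the second prints the block size (bsize) and block count used in the grow calculation above.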

There are other XFS file system management commands which alter and manage its metadata. We will cover them in another article.