Channel: Sameh Attia

Collection of Useful Bash Functions and Aliases

http://www.tuxarena.com/2014/10/collection-of-useful-bash-functions-and-aliases

In this article I'm going to share some of the Bash aliases and functions that I use and find pretty handy every once in a while. An alias is a word to which some longer command is assigned, so whenever you type that word, it is replaced with the longer command. Functions are used for anything that is too long or too complex to fit in an alias; they typically perform more complicated tasks and can take parameters as well. Here is a good explanation of both aliases and functions and how to use them. And here is a short tutorial that I wrote a while ago regarding aliases.
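As a quick illustration of the difference (these two definitions are just examples of mine, not part of the collection below): an alias is plain text substitution, while a function can take parameters:
# alias: fixed substitution, no parameters
alias ll='ls -lh'
# function: takes a parameter and uses it twice
mkcd () {
    mkdir -p "$1" && cd "$1"
}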

What follows is a collection of some of the aliases and functions that I use frequently, and which I believe may be useful to others as well. They are listed here in no particular order.

System Info

This is a function to show some system information (the KDE line will only work if KDE is installed):
myinfo () {
    printf "CPU: "
    grep "model name" /proc/cpuinfo | head -1 | awk '{ for (i = 4; i <= NF; i++) printf "%s ", $i }'
    printf "\n"

    awk '{ printf "OS: %s %s %s %s | ", $1, $2, $3, $4 }' /etc/issue
    uname -a | awk '{ printf "Kernel: %s ", $3 }'
    uname -m | awk '{ printf "%s | ", $1 }'
    kded4 --version | grep "KDE Development Platform" | awk '{ printf "KDE: %s", $4 }'
    printf "\n"
    uptime | awk '{ printf "Uptime: %s %s %s", $3, $4, $5 }' | sed 's/,//g'
    printf "\n"
    cputemp | head -1 | awk '{ printf "%s %s %s\n", $1, $2, $3 }'
    cputemp | tail -1 | awk '{ printf "%s %s %s\n", $1, $2, $3 }'
    #cputemp | awk '{ printf "%s %s", $1 $2 }'
}
And the cputemp alias:
alias cputemp='sensors | grep Core'
myinfo

Killing Processes

The next function will kill processes by name (it does the same thing as pkill, but prints what it did). (Usage: kp NAME)
kp () {
    mypid=$(pidof "$1")
    if [ "$mypid" != "" ]; then
        kill -9 $mypid
        if [[ "$?" == "0" ]]; then
            echo "PID $mypid ($1) killed."
        fi
    else
        echo "None killed."
    fi
}
Another quick function to shorten something like this: ps aux | grep PROCESS_NAME. (Usage: psa NAME)
psa () {
    ps aux | grep "$1"
}
Use it as psa NAME, for example psa firefox, and it will list all the processes matching that name.

Starting and Stopping Services

These are two aliases, one to (re)start and one to stop the Apache web server:
alias runweb='sudo service apache2 restart'
alias stopweb='sudo service apache2 stop'

Download Files Quickly

This may come in handy when you have to download a file from a certain location very often (the file gets updated frequently).
The following is an alias to quickly download the latest winetricks script, save it in the home directory, and make it executable. If the file already exists, it is overwritten:
alias getwinetricks='wget -O $HOME/winetricks http://winetricks.org/winetricks && chmod 755 $HOME/winetricks'

Disk Usage

And here is a function which will parse the output of the df command to only show disk space on /dev/sd* and /mnt/* mounted partitions: (Usage: ssd)
ssd () {
    echo "Device Total Used Free Pct MntPoint"
    df -h | grep "/dev/sd"
    df -h | grep "/mnt/"
}
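A possible variation, if you would rather reuse df's own header row instead of hardcoding one (a sketch; it assumes the default df -h column layout, with the mount point in the sixth field):
ssd () {
    df -h | awk 'NR == 1 || $1 ~ "^/dev/sd" || $6 ~ "^/mnt/"'
}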

System Upgrade

This is an alias to update an Ubuntu system.
alias lmu='sudo apt-get update && sudo apt-get dist-upgrade'
Sometimes, even though the package list updates properly, a public-key-related error causes the first command to exit with a non-zero status, which prevents the second command from running. In that case this may be the preferred form:
alias lmu='sudo apt-get update; sudo apt-get dist-upgrade'

APT Packages

The following function is useful on a Debian/Ubuntu/Mint system to list all the packages in the repositories whose names contain a certain pattern: (Usage: showpkg NAME)
showpkg () {
    apt-cache pkgnames | grep -i "$1" | sort
}

Audio Aliases

The following will completely remove the ID3 tags from all MP3 files in the current directory:
alias stripmp3='id3v2 -d *.mp3; id3v2 -s *.mp3'
And this will encode all FLAC files in the current directory to Ogg Vorbis at a nominal bitrate of 192 kbps:
alias ogg192='oggenc -b 192 *.flac'

Permissions

Give a file execute permissions (rwxr-xr-x) or just read and write (rw-r--r--):
alias chx='chmod 755'
alias chr='chmod 644'

Changing Directory

Goes back to the previous working directory (the Bash built-in cd - does the same thing, and also prints the directory):
alias back='cd "$OLDPWD"'

Listing Files

The way I prefer to have the long listing displayed as:
alias lsh='ls -lhXG' # long listing, human-readable, sort by extension, do not show group info

Removing Non-Empty Directories, Read-Only Files

alias rmf='rm -rf'

Running Emacs

This alias will run Emacs in a terminal, without a graphical window:
alias emw='emacs -nw'

Compressed Files

Two aliases to extract gzip- and bzip2-compressed tar archives:
alias untarz='tar -xzf'
alias untarj='tar -xjf'
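If you would rather not remember which alias goes with which format, a small dispatch function that picks the right flags by extension is a common generalization (a sketch; unzip must be installed for the .zip case):
extract () {
    case "$1" in
        *.tar.gz|*.tgz) tar -xzf "$1" ;;
        *.tar.bz2|*.tbz2) tar -xjf "$1" ;;
        *.tar.xz) tar -xJf "$1" ;;
        *.zip) unzip "$1" ;;
        *) echo "extract: don't know how to handle $1" ;;
    esac
}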

Three Different Prompts

Cycle between three different prompts. Usage: dp N
dp () {
    if [[ $# -eq 0 || $1 -eq 1 ]]; then
        PS1="\[\033[01;32m\]$\[\033[00m\] "
    elif [[ $1 -eq 2 ]]; then
        PS1="${debian_chroot:+($debian_chroot)}\w\[\033[01;32m\]$\[\033[00m\] "
    elif [[ $1 -eq 3 ]]; then
        PS1="\[\033[01;32m\]\u@\H:${debian_chroot:+($debian_chroot)}\w\[\033[01;32m\]$\[\033[00m\] "
    fi
}
Use it as dp N, where N is 1, 2 or 3.

Uptime

Shows uptime using a shorter formula:
myuptime () {
    uptime | awk '{ print "Uptime:", $3, $4, $5 }' | sed 's/,//g'
}
myuptime

Handy KDE Functions

Here are some functions that I use on my KDE desktop. I keep Yakuake running, plus a Plasma terminal widget on the desktop, so I can quickly change the volume of KMix (the KDE audio mixer) and Amarok from the command line. (Usage: kmvol VALUE)
kmvol () {
    if [ "$1" == "" ] || [ $1 -lt 0 ] || [ $1 -gt 100 ]; then
        echo "Usage: kmvol N"
        echo " N - integer between 0 and 100"
    else
        # set custom volume
        qdbus org.kde.kmix /Mixers/PulseAudio__Playback_Devices_1/alsa_output_pci_0000_00_1b_0_analog_stereo org.kde.KMix.Control.volume $1
        echo "KMix volume set to $1"
    fi
}
The above function will change the volume of KMix by typing kmvol N, where N is a value between 0 and 100. For example, kmvol 75 will change KMix volume to 75. It uses qdbus to do so.
Then, I define aliases for some values I need access to quickly. For example:
alias kmmin='kmvol 0'
alias kmv45='kmvol 45'
alias kmv70='kmvol 70'
alias kmv80='kmvol 80'
To show the current volume, I have this function (an alias would work just as well here):
kmshow () {
    qdbus org.kde.kmix /Mixers/PulseAudio__Playback_Devices_1/alsa_output_pci_0000_00_1b_0_analog_stereo org.kde.KMix.Control.volume
}
For Amarok, in a similar way I defined this function: (Usage: amvol VALUE)
amvol () {
    if [ "$1" == "" ] || [ $1 -lt 0 ] || [ $1 -gt 100 ]; then
        echo "Usage: amvol N"
        echo " N - integer between 0 and 100"
    else
        qdbus org.kde.amarok /Player VolumeSet $1
        echo "Amarok volume set to $1"
    fi
}
By typing amvol N I can change the Amarok volume.

Automatically Cycle GNOME Desktop Wallpaper

This is a small script which will cycle through wallpapers in a certain directory and change them every 60 seconds:
while [[ 1 -eq 1 ]]; do
for i in $(echo /usr/share/backgrounds/*.jpg); do
gsettings set org.gnome.desktop.background picture-uri file:///${i}
sleep 60;
done
done
You can set this script to run automatically when GNOME starts and it will cycle through all the wallpapers inside /usr/share/backgrounds every 60 seconds (change this value to something else for a different frequency - in seconds).
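One way to run it automatically is a standard XDG autostart entry. This sketch assumes you saved the script as, say, ~/bin/cycle-wallpaper.sh and made it executable; the file names and paths are only examples:
mkdir -p ~/.config/autostart
cat > ~/.config/autostart/cycle-wallpaper.desktop << 'EOF'
[Desktop Entry]
Type=Application
Name=Wallpaper Cycler
Exec=/home/YOUR_USER/bin/cycle-wallpaper.sh
EOF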
And finally, what follows is some Bash configuration stuff. Put this inside the $HOME/.bashrc file.

Colorful Manpages

To get some fancy, colorful manpages, you can put this inside your $HOME/.bashrc file (for the changes to take effect, source it or reset the terminal, e.g. source ~/.bashrc):
export LESS_TERMCAP_mb=$(printf '\e[01;31m') # enter blinking mode – red
export LESS_TERMCAP_md=$(printf '\e[01;35m') # enter double-bright mode – bold, magenta
export LESS_TERMCAP_me=$(printf '\e[0m') # turn off all appearance modes (mb, md, so, us)
export LESS_TERMCAP_se=$(printf '\e[0m') # leave standout mode
export LESS_TERMCAP_so=$(printf '\e[01;33m') # enter standout mode – yellow
export LESS_TERMCAP_ue=$(printf '\e[0m') # leave underline mode
export LESS_TERMCAP_us=$(printf '\e[04;36m') # enter underline mode – cyan

Prompt

This will set a fancy prompt (PS1):
export PS1="\[\033[01;33m\][$USER@$HOSTNAME]\[\033[0;00m\] \[\033[01;32m\]\w\\$\[\033[0;00m\] "
And a greeting, to be displayed whenever Bash runs interactively (when you open a terminal for example):
echo "Welcome to the dark side of the moon, $USER!"
echo -e "Today is $(date)\nUptime: $(uptime)"
echo "Your personal settings have been loaded successfully."

Suggestions

I'm sure there's more than one way to do the same thing, and I'm sure some stuff here could've probably been written better. Do you have any useful functions or aliases that you'd like to share? Please do so in the comments below.

Easy Watermarking with ImageMagick

http://www.linuxjournal.com/content/easy-watermarking-imagemagick

Let's start with some homework. Go to Google (or Bing) and search for "privacy is dead, get over it". I first heard this from Bill Joy, cofounder of Sun Microsystems, but it's attributed to a number of tech folk, and there's an element of truth to it. Put something on-line and it's in the wild, however much you'd prefer to keep it under control.

Don't believe it? Ask musicians or book authors or film-makers. Now, whether the people who would download a 350-page PDF instead of paying $14 for a print book are hurting sales, that's another question entirely, but the Internet is public and open, even the parts that we wish were not.

This means if you're a photographer or upload images you'd like to protect or control, you have a difficult task ahead of you. Yes, you can add some code to your Web pages that prevents right-clicking to save the image, but it's impossible to shut down theft of intellectual property completely in the on-line world.

This is why a lot of professional photographers don't post images on-line that are bigger than low-resolution thumbnails. You can imagine that wedding photographers who make their money from selling prints (not shooting the wedding) pay very close attention to this sort of thing!
Just as people have learned to accept poor video in the interest of candor and funny content thanks to YouTube, so have people also learned to accept low-res images for free rather than paying even a nominal fee for license rights and a high-res version of the photograph or other artwork.
There is another way, however, that's demonstrated by the stock photography companies on-line: watermarking.

You've no doubt seen photos with embedded copyright notices, Web site addresses or other content that mars the image but makes it considerably harder to separate it from its original source.
It turns out that our friend ImageMagick is terrific at creating these watermarks in a variety of different ways, and that's what I explore in this column. It's an issue for a lot of content producers, and I know the photos I upload constantly are being ripped off and reused on other sites without permission and without acknowledgement.

To do this, the basic idea is to create a watermark-only file and then blend that with the original image to create a new one. Fortunately, creating the new image can be done programmatically with the convert program included as part of ImageMagick.

Having said that, it's really mind-numbingly complex, so I'm going to start with a fairly uninspired but quick way to add a watermark using the label: feature. In a nutshell, you specify what text you want, where you want it on the image, the input image filename and the output image filename. Let's start with an image (Figure 1).
Figure 1. Original Image, Kids at a Party
You can get the dimensions and so forth of the image with identify, of course:

$ identify kids-party.png
kids-party.png PNG 493x360 493x360+0+0 8-bit DirectClass 467KB 0.000u 0:00.000
You can ignore almost all of this; it's just the size that you care about, and that's shown as 493x360.
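If you want just the dimensions and nothing else, identify's -format option prints exactly the fields you ask for (%w is width, %h is height):

$ identify -format "%wx%h\n" kids-party.png
493x360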
Now, let's use composite to add a simple label:

$ composite label:'AskDaveTaylor.com' kids-party.png \
kids-party-labelled.png
Figure 2 shows the image with the label applied.
Figure 2. Label Added, No Styling
That's rather boring, although it's effective in a rudimentary sort of way. Let's do something more interesting now, starting by positioning the text centered on the bottom but also adding space below the image for the caption:

$ convert kids-party.png -background Khaki \
label:'AskDaveTaylor.com' \
-gravity center -append party-khaki.png
Here I've added a background color for the new text (khaki) and tapped the complicated but darn useful gravity capability to center the text within the newly appended image space. Figure 3 shows the result.
Figure 3. Caption against a Khaki Background
I'm not done yet though. For the next example, let's actually have the text superimpose over the image, but with a semi-transparent background.
This is more ninja ImageMagick, so it involves a couple steps, the first of which is to identify the width of the original source image. That's easily done:

width=$(identify -format %w kids-party.png)
Run it, and you'll find out:

$ echo $width
493
Now, let's jump into the convert command again, but this time, let's specify a background color, a fill and a few other things to get the transparency to work properly:

$ convert -background '#0008' -fill white -gravity center \
-size ${width}x30 caption:AskDaveTaylor.com \
kids-party.png +swap -gravity south -composite \
party-watermark.png
I did warn you that it'd be complex, right? Let's just jump to the results so you can see what happened (Figure 4).
Figure 4. Improved Semi-Transparent Label
You can experiment with different backgrounds and colors, but for now, let's work with this and jump to the second part of the task, turning this into a script that can fix a set of images in a folder. The basic structure for this script will be easy actually:

for every image file
calculate width
create new watermarked version
mv original to a hidden directory
rename watermarked version to original image name
done
Because Linux is so "dot file"-friendly, let's have the script create a ".originals" folder in the current folder so that it's a nondestructive watermark process. Here's the script:

savedir=".originals"
mkdir $savedir

if [ $? -ne 0 ] ; then
    echo "Error: failed making $savedir."
    exit 1
fi

for image in *png *jpg *gif
do
    if [ -s "$image" ] ; then # non-zero file size
        width=$(identify -format %w "$image")
        convert -background '#0008' -fill white -gravity center \
            -size ${width}x30 caption:AskDaveTaylor.com \
            "$image" +swap -gravity south -composite "new-$image"
        mv "$image" $savedir
        mv "new-$image" "$image"
        echo "watermarked $image successfully"
    fi
done
You can see that it translates pretty easily into a script, with the shuffle of taking the original images and saving them in .originals.
The output is succinct when I run it in a specific directory:

watermarked figure-01.png successfully
watermarked figure-02.png successfully
watermarked figure-03.png successfully
watermarked figure-04.png successfully
Easily done.
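And because the originals are kept, undoing the whole run is just a matter of moving them back over the watermarked copies:

$ mv .originals/* . && rmdir .originals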
You definitely can go further with all the watermarking in ImageMagick, but my personal preference is to tap into the reference works that already are on-line, including this useful, albeit somewhat confusing, tutorial: http://www.imagemagick.org/Usage/annotating.

However you slice it, if you're going to make your images available on-line in high resolution, or if they're unique and copyrighted intellectual property, knowing how to watermark them from the command line is a darn helpful skill.

How To Use Vagrant To Create Small Virtual Test Lab on a Linux / OS X / MS-Windows

http://www.cyberciti.biz/cloud-computing/use-vagrant-to-create-small-virtual-lab-on-linux-osx

Vagrant is a multi-platform command line tool for creating lightweight, reproducible and portable virtual environments. Vagrant acts as a glue layer between different virtualization solutions (software, hardware, PaaS and IaaS) and different configuration management utilities (Puppet, Chef, etc.). Vagrant was started back in 2010 by Mitchell Hashimoto as a side project and later became one of the first products of HashiCorp, the company Mitchell founded.
While officially described as a tool for setting up development environments, Vagrant can be used for a lot of other purposes by non-developers as well:
  • Creating demo labs
  • Testing configuration management tools
  • Speeding up work with tools that are not multi-platform, such as Docker
In this tutorial I'll show how we can use Vagrant to create a small virtual test lab that we can pass on to our colleagues.

Vagrant installation

Vagrant installation packages are available for OS X, Windows and Linux (deb and rpm formats). Installation is a simple "Next, Next, Next, Done" process and doesn't require any special user interaction.

A note for Linux users

I strongly suggest that you install Vagrant from its website rather than with your package manager. The official repositories often contain very old versions of Vagrant. In this tutorial I will use Vagrant 1.6.5.
Since Vagrant is not virtualization software by itself, it relies on 3rd-party providers to do the virtualization part. For this tutorial I'll assume you have installed Oracle's VirtualBox, a free multi-platform virtualization package that Vagrant supports out of the box.

Vagrant environment variables

By default Vagrant comes with sane default values and you don't need to change any of them to make it work. However, I will point out a number of important ones you should be familiar with:
  • VAGRANT_DOTFILE_PATH - The directory where Vagrant stores VM-specific state. By default it's '.vagrant' in the same directory as your Vagrantfile.
  • VAGRANT_HOME - The directory where Vagrant stores its global state. It can get quite large since it will contain all the base boxes. By default this is '~/.vagrant.d'.
  • VAGRANT_LOG - Verbosity level of Vagrant log messages. The default is 'info'; change this to 'debug' when you really want to know what's going on, or when you want to attach a log to a bug report (see the example below).
  • VAGRANT_DEFAULT_PROVIDER - By default Vagrant uses VirtualBox as its virtualization provider unless requested otherwise. If you use another provider most of the time, it makes sense to change the default.
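For example, to get a verbose log for a single troublesome run, you can set the variable just for that command:
$ VAGRANT_LOG=debug vagrant up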

Understanding Vagrant terminology

A quick terminology break down:

Providers

Providers are the components that enable Vagrant to use different virtualization solutions. Vagrant supports a number of software hypervisors out of the box - VirtualBox and Hyper-V. Vagrant also allows adding providers via its plugin mechanism. For example, you can add support for VMware products as well as IaaS providers such as AWS.

Provisioners

Provisioners allow Vagrant to bring a machine into a desired state by using simple shell scripts or different configuration management tools such as Puppet or Chef. Provisioning in Vagrant usually happens after machine initialization, but can also be initiated on demand.
Configuration Management (CM) vs. shell scripts: There's nothing wrong with using Vagrant with a simple shell script instead of a full-blown CM system. In fact, in this tutorial I will use shell as well, in order not to overcomplicate things. However, CM systems like Puppet or Chef have certain principles behind them which can help with provisioning your environment. For example, one of the main principles of CM is idempotence: the ability to apply an action any number of times and be sure the result won't change after the first application. In an environment where you find provisioning helpful even after the initial provision, you should probably consider using a CM tool instead of a shell script.

Vagrant boxes

Boxes are Vagrant packages that bundle provider-specific machine data (such as vmdk disk images). These packages can be imported on any machine that runs Vagrant.

Shared/Synced folders

Vagrant allows sharing or syncing folders from the host machine to the guest machine. This lets you edit your files locally and see the changes in the guest machine. How folders are shared depends on your provider; for example, VirtualBox provides its own shared folder mechanism. Vagrant can also sync using tools such as rsync, or share over the network using NFS.

Port forwarding

Some providers such as VirtualBox allow running VMs in NAT network mode. In this mode the VM sits in its own private address space, which is not accessible from the host machine. Port forwarding creates rules that forward traffic from a local port on the host to a port of the virtual machine.

Vagrant plugins

Vagrant provides extensibility via its plugin API. This makes it possible to add support for new provisioners, providers and other utilities. For a list of currently available Vagrant plugins take a look here.

Creating your first Vagrantfile

Vagrant's initial goal was to speed up the creation of virtual environments for development. However, virtual environments have broader use cases. One of the common uses of virtualization is to set up practice labs, and I will demonstrate how we can do that by using Vagrant.

Our (not so) imaginary scenario

Your company uses Nagios as its monitoring solution and your team is responsible for maintaining it. You have new personnel on your team who aren't familiar with Nagios, and you'd like to run a few training sessions for them. For the training sessions you want each member to have their own sandbox environment to play with.
One option is to provide your students with virtual machine disk images that they can import and run. This option has its advantages:
  • Almost no time to first boot.
  • Relatively easy process (if you are familiar with Virtualization).
  • Access to the internet is not needed (the image can be saved on USB, local file server, etc).
And some disadvantages too:
  • Users are usually not aware of how the environment inside the virtual machine was created.
  • It can be time-consuming to update the image with new components.
  • You might need to provide additional instructions to users about how to set up the virtual environment (assign more CPUs and RAM, network settings, etc.).
To overcome some of the disadvantages above, let's try to set up our lab environment by using Vagrant and VirtualBox.
Note: I'm not suggesting in any way that using Vagrant doesn't come with its own set of disadvantages.

The Vagrant configuration

Before we start, you should download the tarball with the configuration files from here. Below is the main config file (Vagrantfile). If you don't have any programming/scripting experience it may seem a bit confusing at first:
 
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "ubuntu/trusty64"

  config.vm.provider "virtualbox" do |vb|
    vb.cpus = 2
  end

  config.vm.network "private_network", type: "dhcp"

  config.hostmanager.enabled = true
  config.hostmanager.ip_resolver = proc do |vm, resolving_vm|
    if vm.id
      `VBoxManage guestproperty get #{vm.id} "/VirtualBox/GuestInfo/Net/1/V4/IP"`.split()[1]
    end
  end

  config.vm.define :server do |srv|
    srv.vm.hostname = "nagios-server"
    srv.vm.synced_folder "server/", "/usr/local/nagios/etc", create: true
    srv.vm.network "forwarded_port", guest: 80, host: 8080
    srv.vm.provision "shell", path: "server-provision"
  end

  config.vm.define :client do |cl|
    cl.vm.hostname = "nagios-client"
    cl.vm.synced_folder "client/", "/usr/local/nagios/etc", create: true
    cl.vm.provision "shell", path: "client-provision"
  end
end
 
The Vagrantfile is really a Ruby file (more specifically, a Ruby DSL). While this makes the configuration look a bit weird to the average joe, it's nothing to be worried about and can be a pretty powerful feature when needed.

Let's break this block by block:

 
VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  # this is the outer block
end
 
The outer block creates the main configuration object. Think of it as the creation of an object (named 'config') that will hold all our settings. VAGRANTFILE_API_VERSION is a variable storing the value "2". This represents the configuration version we want our config object to work with, and is mainly done for backwards compatibility between recent versions of Vagrant and older ones. If you are not sure about it, just leave it as is.
 
config.vm.box = "ubuntu/trusty64"
 
Here we use the config object to set our Vagrant base box to ubuntu/trusty64. More about where this box comes from later in the tutorial.
 
config.vm.provider "virtualbox" do |vb|
  vb.cpus = 2
end
 
Here we create a provider-specific config block. In our case we ask that, if machines are run with the VirtualBox provider, each machine should receive 2 vCPUs by default. Each provider exposes a different set of options, so provider-specific settings need to go in separate blocks.
 
config.vm.network "private_network", type: "dhcp"
 
Network config is something that gets special treatment in Vagrant. While it is possible to configure networking via the provider block, Vagrant tries to abstract as much as possible of the (usually) tedious network configuration into a few high-level settings. In our case, we ask Vagrant to configure private_network networking and assign IP addresses via DHCP. private_network means that each machine gets a private IP address from the private address space, and the machines can communicate with one another via these addresses. I chose DHCP instead of static IPs because I want the Vagrantfile to be a bit more portable; static IPs could collide with local networks at home or work.
 
config.hostmanager.enabled = true
config.hostmanager.ip_resolver = proc do |vm, resolving_vm|
  if vm.id
    `VBoxManage guestproperty get #{vm.id} "/VirtualBox/GuestInfo/Net/1/V4/IP"`.split()[1]
  end
end
 
The hostmanager config namespace is not part of the core Vagrant install. It is added by the hostmanager Vagrant plugin. This plugin dynamically edits the hosts file on host and guest machines, which lets us skip hardcoding IPs into our configuration and work with hostnames instead. So later on, when actually working with Nagios, I can refer to the server as nagios-server instead of by its IP.
Note for Windows users: In Windows, VBoxManage.exe is not in the %PATH%. You'll want to add VirtualBox's directory (usually something like C:\Program Files\Oracle\VirtualBox) into the PATH variable for this to work.
 
config.vm.define :server do |srv|
  srv.vm.hostname = "nagios-server"
  srv.vm.synced_folder "server/", "/usr/local/nagios/etc", create: true
  srv.vm.network "forwarded_port", guest: 80, host: 8080
  srv.vm.provision "shell", path: "server-provision"
end
config.vm.define :client do |cl|
  cl.vm.hostname = "nagios-client"
  cl.vm.synced_folder "client/", "/usr/local/nagios/etc", create: true
  cl.vm.provision "shell", path: "client-provision"
end
 
These two blocks do the same thing - define a virtual machine. The first line of each block sets the machine's hostname. The second line defines a synced folder between host and guest; these are the folders of Nagios configurations, which will allow us to quickly reconfigure the Nagios server and client without needing to SSH into the machines. The third line of the first block defines a port forwarding; this is mainly for the sake of example, since we use private_network and can reach the machines by IP instead. The last line of each block says which provisioner we want to use - in our case the simple shell provisioner. Vagrant will copy the files specified in path and execute them after the machine is up and running. The server-provision and client-provision files exist inside the tarball; I will not review them in this tutorial.

Run Vagrant

In this section we will initialize our Vagrant environment.

Vagrant CLI crash course

The Vagrant CLI is quite straightforward. To see the list of all the common commands, type the following in your terminal:
vagrant --help
Sample outputs:
 
Usage: vagrant [options] <command> [<args>]
 
-v, --version Print the version and exit.
-h, --help Print this help.
 
Common commands:
box manages boxes: installation, removal, etc.
connect connect to a remotely shared Vagrant environment
destroy stops and deletes all traces of the vagrant machine
global-status outputs status Vagrant environments for this user
halt stops the vagrant machine
help shows the help for a subcommand
init initializes a new Vagrant environment by creating a Vagrantfile
login log in to Vagrant Cloud
package packages a running vagrant environment into a box
plugin manages plugins: install, uninstall, update, etc.
provision provisions the vagrant machine
rdp connects to machine via RDP
reload restarts vagrant machine, loads new Vagrantfile configuration
resume resume a suspended vagrant machine
share share your Vagrant environment with anyone in the world
ssh connects to machine via SSH
ssh-config outputs OpenSSH valid configuration to connect to the machine
status outputs status of the vagrant machine
suspend suspends the machine
up starts and provisions the vagrant environment
version prints current and latest Vagrant version
 
For help on any individual command run `vagrant COMMAND -h`
 
Additional subcommands are available, but are either more advanced
or not commonly used. To see all subcommands, run the command
`vagrant list-commands`.
 
Primary commands you want to focus on at first:
  1. init - Create a Vagrantfile in the current directory.
  2. up - If the environment has never been created before, create it and run provisioning on the machines. If the machines are stopped (halted), just start them without provisioning.
  3. halt - Shut down the environment. Equivalent to powering off the machines.
  4. provision - (Re)run the provision scripts defined in the Vagrantfile.
  5. destroy - Stop and delete the environment.
  6. reload - Restart the environment and apply new settings from the Vagrantfile without destroying it.
  7. ssh - SSH into a machine.
  8. status - Get the environment status to see whether machines are running or not.
The commands above affect the state of the virtual machines, and the general flow can be described in the following diagram:
Fig.01: Vagrant state
Vagrant commands are executed on all machines in the environment. For example, in our case running vagrant up will cause Vagrant to start and provision two virtual machines (server and client). If we wanted to create only one of them, we could run vagrant up server and only the server virtual machine would be initialized.

Installing plugins

Before we can start our environment, let's install the hostmanager plugin:
$ vagrant plugin install vagrant-hostmanager
Installing the 'vagrant-hostmanager' plugin. This can take a few minutes...
Installed the plugin 'vagrant-hostmanager (1.5.0)'!
You can list all installed vagrant plugins with the following command:
$ vagrant plugin list

Vagrant up

Now we are ready to initialize our environment:
$ vagrant up
Bringing machine 'server' up with 'virtualbox' provider...
Bringing machine 'client' up with 'virtualbox' provider...
1==> server: Box 'ubuntu/trusty64' could not be found. Attempting to find and install...
server: Box Provider: virtualbox
server: Box Version: >= 0
==> server: Loading metadata for box 'ubuntu/trusty64'
server: URL: https://vagrantcloud.com/ubuntu/trusty64
==> server: Adding box 'ubuntu/trusty64' (v14.04) for provider: virtualbox
1 server: Downloading: https://vagrantcloud.com/ubuntu/trusty64/version/1/provider/virtualbox.box
1==> server: Successfully added box 'ubuntu/trusty64' (v14.04) for 'virtualbox'!
2==> server: Importing base box 'ubuntu/trusty64'...
==> server: Matching MAC address for NAT networking...
==> server: Checking if box 'ubuntu/trusty64' is up to date...
3==> server: Setting the name of the VM: nagioslab_server_1410009838995_55673
==> server: Clearing any previously set forwarded ports...
==> server: Clearing any previously set network interfaces...
==> server: Preparing network interfaces based on configuration...
server: Adapter 1: nat
server: Adapter 2: hostonly
4==> server: Forwarding ports...
server: 80 => 8080 (adapter 1)
server: 22 => 2222 (adapter 1)
==> server: Running 'pre-boot' VM customizations...
==> server: Booting VM...
==> server: Waiting for machine to boot. This may take a few minutes...
server: SSH address: 127.0.0.1:2222
server: SSH username: vagrant
server: SSH auth method: private key
server: Warning: Connection timeout. Retrying...
server: Warning: Remote connection disconnect. Retrying...
5==> server: Machine booted and ready!
==> server: Checking for guest additions in VM...
==> server: Setting hostname...
==> server: Configuring and enabling network interfaces...
5==> server: Mounting shared folders...
server: /vagrant => /Users/michael/VagrantLab/nagioslab
server: /usr/local/nagios/etc => /Users/michael/VagrantLab/nagioslab/server
5==> server: Updating /etc/hosts file on active guest machines...
5==> server: Running provisioner: shell...
server: Running: /var/folders/fz/81ddjj6s2bx9s7jg9z7347pr0000gn/T/vagrant-shell20140906-90949-1a8ntti
==> server: stdin: is not a tty
==> server: Ign http://archive.ubuntu.com trusty InRelease
... provisioning log ...
==> server: Starting nagios:
==> server: done.
==> client: Box 'ubuntu/trusty64' could not be found. Attempting to find and install...
client: Box Provider: virtualbox
client: Box Version: >= 0
==> client: Loading metadata for box 'ubuntu/trusty64'
client: URL: https://vagrantcloud.com/ubuntu/trusty64
6==> client: Adding box 'ubuntu/trusty64' (v14.04) for provider: virtualbox
==> client: Importing base box 'ubuntu/trusty64'...
... similar log to server part
==> client: Starting nagios remote plugin daemon: nrpe
==> client: .
Some key points in the log:
  • 1 At first Vagrant looks for the base box (ubuntu/trusty64); when it doesn't find it locally, it downloads it from vagrantcloud.com (unless configured otherwise).
  • 2 After the download is finished, it imports the image into VirtualBox.
  • 3 Vagrant renames the machine in VirtualBox to the name shown in the log. If you open VirtualBox, you'll find a running machine with this name.
  • 4 Port forwarding takes place.
  • 5 After the machine is running and ready, Vagrant mounts the shared folders, edits the hosts file (hostmanager plugin) and executes the provision shell script that was copied to the machine.
  • 6 Notice that the second machine doesn't re-download the box before importing it, since it was saved to Vagrant's cache the first time.
The whole process took between 10 and 15 minutes on my computer, including downloading the base box. Since the box is now cached, re-creating the whole environment from scratch would take less than 10 minutes.

Checking for status

First of all, let's make sure our machines are really up, using the vagrant status command:
 
$ vagrant status
Current machine states:
 
server running (virtualbox)
client running (virtualbox)
 
This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.
 
Now let's check that Nagios is really running on the server:
 
$ vagrant ssh server -c "service nagios status"
nagios (pid 4154) is running...
Connection to 127.0.0.1 closed.
 
Great. The server is up and Nagios is running. You can now open your web browser and point it to http://localhost:8080/nagios. The user is nagiosadmin and the password is nagios:
Fig.02: Nagios home
If you browse around the menus you should see that Nagios is currently only configured to monitor itself. In the next part we'll add the configuration needed for Nagios to start monitoring the client machine as well.

Working with Nagios

If you list your folder contents after provisioning has finished, you should find two new directories that weren't there before we ran vagrant up: the client and server folders, created during initial provisioning, which now contain the configurations of the Nagios server and the NRPE agent.
First we'll add the following configuration directive in server/nagios.cfg:
cfg_file=/usr/local/nagios/etc/objects/client.cfg
Next, we'll create server/objects/client.cfg and put the following configuration inside:
define host{
    use        linux-server
    host_name  client
    hostgroups linux-servers
    alias      client
    address    nagios-client
}
and now restart the nagios daemon:
$ vagrant ssh server -c "sudo service nagios restart"
Running configuration check...
Stopping nagios: /etc/init.d/nagios: 147: kill: No such process
done.
Starting nagios: done.
Connection to 127.0.0.1 closed.
In your browser, under "Host Groups", you should see a new host with the name client, and its status should change to UP after a few seconds. We instructed Nagios to read a new configuration file, and in that file we defined a new host for Nagios to monitor. The address of the new host is nagios-client, and it resolves to the correct address because the hostmanager plugin dynamically edited the /etc/hosts file and added the correct IP of the nagios-client machine.
Now let's do one more thing and configure the client host to provide additional information to Nagios via the NRPE monitoring daemon. First, in client/nrpe.cfg, we need to change line number 81 to:
allowed_hosts=127.0.0.1,nagios-server
Then restart nrpe daemon:
$ vagrant ssh client -c "sudo service nrpe restart"
Restarting nagios remote plugin daemon: nrpe.
Connection to 127.0.0.1 closed.
Now let's go back to server/objects/client.cfg and add the following lines:
 
define command{
    command_name check_nrpe
    command_line $USER1$/check_nrpe -H $HOSTADDRESS$ -c $ARG1$
}

define service{
    use                 generic-service
    host_name           client
    service_description Logged users
    check_command       check_nrpe!check_users
}

define service{
    use                 generic-service
    host_name           client
    service_description Load average
    check_command       check_nrpe!check_load
}

define service{
    use                 generic-service
    host_name           client
    service_description Zombie procs
    check_command       check_nrpe!check_zombie_procs
}
 
Now restart the nagios daemon, and after a few moments you should see the newly added service checks in the Nagios web interface:
Fig.03: Nagios status (click to enlarge)

Rinse and repeat

So we've created our small Nagios lab environment, where we can experiment with different Nagios configurations, plugins and so on. The whole environment is defined in a bunch of script files which you can store in a source repository such as Git. Your co-workers can now simply download these configuration files to their machines, run vagrant up, and at the end of the process get exactly the same environment as you have on your machine.

Things to watch out for

Portability can be a tricky thing, and there are a couple of things you need to watch out for when you work with Vagrant. First of all, make sure you all work with the same Vagrant version. Vagrant fixes a lot of issues between releases (and obviously adds a bug or two as well), so you want to be on the same page when running Vagrant. You can include the Vagrant.require_version helper in your Vagrantfile to force only specific versions of Vagrant to run your config.
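For example, pinning to the version used in this tutorial or newer takes one line at the top of the Vagrantfile:

Vagrant.require_version ">= 1.6.5"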
If your Vagrantfile uses plugins, you want to make sure all your users have those plugins installed. You can add a small Ruby test to your Vagrantfile that looks something like this:
 
if !Vagrant.has_plugin?('plugin-name')
  puts "plugin is missing. Bye!"
  exit 1
end
 
Generally though, you want to keep the number of 3rd-party plugins to the essential minimum.
Vagrant doesn't support running a Vagrantfile with multiple providers at the same time, though you can run the same Vagrantfile with different providers separately (using the --provider flag). Supporting multiple providers in the same Vagrantfile can be tricky, since some options and features differ between providers, so for best compatibility you might want to stick to one specific provider. In this tutorial we assume VirtualBox, since we use the VBoxManage command-line utility to find the IP addresses of our virtual machines; running under VMware would cause hostmanager to act differently.
Another point you want to be extra careful with is running Vagrant on different operating systems. In our tutorial, for example, the path of VBoxManage.exe needs to exist in the PATH variable, or else hostmanager will fail.

How to use NumPy for scientific computing in Linux

http://xmodulo.com/numpy-scientific-computing-linux.html

Get serious with scientific computing in Linux by learning to use NumPy. NumPy is a Python-based open-source scientific computing package released under the BSD license that serves as a free yet powerful alternative to proprietary packages (such as MATLAB) that come with licensing fees. The numerous built-in data analysis tools, extensive documentation, and detailed examples render NumPy an ideal package for use in intensive scientific computing applications. This tutorial will highlight several features of NumPy to demonstrate its capabilities and ease of use.

Features

NumPy offers a vast array (pun intended!) of features, including (but certainly not limited to) the following:
  • Multidimensional array objects
  • Conversion from Python lists and tuples to NumPy arrays (and vice versa)
  • Importing data from text files
  • Math (arithmetic, trigonometry, exponents, logarithms...)
  • Random sampling (Normal, uniform, binomial, Poisson distributions...)
  • Statistics (mean, standard deviation, histograms...)
  • Fourier transforms (discrete, inverse, multidimensional)
  • Linear algebra (dot product, eigenvalues, solving systems of linear equations...)
  • Matrices (sum, product, transpose...)
  • Writing data to text files
  • Integration into existing Python workflows and scripts
NumPy offers an advantage over other scientific computing packages with no licensing fees (such as GNU Octave, released under the GNU General Public License) because you can create Python workflows that utilize NumPy AND any other Python packages, giving you a wide variety of tools at your disposal that are all controlled and connected via Python. Additionally, NumPy's syntax is inherently Pythonic, allowing you to break away from MATLAB-like syntax (used in GNU Octave) and apply your Python skills.

Installation

To install NumPy on Linux, run the following command:
On Debian or Ubuntu:
$ sudo apt-get install python-numpy
On Fedora or CentOS:
$ sudo yum install numpy
You must have Python installed (generally installed by default) in order to use NumPy.

NumPy Examples

This tutorial will provide several examples that demonstrate how to use NumPy:
  • Basic array arithmetic and comparisons
  • Importing data from a comma-delimited text file
  • Sampling uniformly between two values
In these examples we will use NumPy from the command-line via an interactive Python shell. Begin by starting an interactive Python shell, and then importing the NumPy library via the import command and assigning np as a reference to the numpy library:
$ python
Python 2.7.3
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy as np

Example 1: Basic array arithmetic and comparisons

Define a NumPy array object named "A" that has three rows, each of which contains three 32-bit integer values. Print the contents of the array by entering the name of the array object.
>>> A = np.array([[2, 2, 2], [4, 4, 4], [6, 6, 6]], np.int32)
>>> A
array([[2, 2, 2],
       [4, 4, 4],
       [6, 6, 6]], dtype=int32)
Define a second NumPy array object named "B" that has three rows, each of which contains three 32-bit integer values:
>>> B = np.array([[1, 1, 1], [5, 5, 5], [10, 10, 10]], np.int32)
>>> B
array([[ 1,  1,  1],
       [ 5,  5,  5],
       [10, 10, 10]], dtype=int32)
Define a third array as the sum of the first two arrays:
>>> C = A + B
>>> C
array([[ 3,  3,  3],
       [ 9,  9,  9],
       [16, 16, 16]], dtype=int32)
Determine which of the values in the third array are greater than 10 (the comparison C > 10 is equivalent to calling C.__gt__(10)):
>>> C.__gt__(10)
array([[False, False, False],
       [False, False, False],
       [ True,  True,  True]], dtype=bool)
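All arithmetic operators act element-wise on NumPy arrays, so products, differences and so on follow the same pattern as the sum above. For example, multiplying the first two arrays:
>>> A * B
array([[ 2,  2,  2],
       [20, 20, 20],
       [60, 60, 60]], dtype=int32)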

Example 2: Importing data from a comma-delimited text file

Consider a file called data.txt that contains the following comma-delimited data:
1.0,2.0,3.0,4.0,5.0
600.0,700.0,800.0,900.0,1000.0
You can manually create a NumPy array object that contains these data, but that means you need to type in each value individually. With very large datasets, this can be quite tedious and error-prone. Character-delimited data from text files can easily be imported into NumPy arrays.
Define an array named "D" that contains the data from the data.txt file, and specify that the data to be imported are 64-bit floating-point numbers separated (delimited) with commas:
>>> D = np.loadtxt('data.txt', dtype=np.float64, delimiter=',')
>>> D
array([[    1.,    2.,    3.,    4.,    5.],
       [  600.,  700.,  800.,  900., 1000.]])
This feature of NumPy can save a tremendous amount of time that would otherwise be spent manually defining NumPy array objects. If you can format your data of interest into a character-delimited text file then importing these data into a NumPy array is easily accomplished through a single command.
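The reverse direction is just as easy: np.savetxt writes an array back out as character-delimited text (the output filename here is just an example):
>>> np.savetxt('data-out.txt', D, delimiter=',')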

Example 3: Sampling uniformly between two values

Suppose you want to generate 100 randomly-sampled values between 0.0 and 1.0 using a uniform probability distribution (all values between 0.0 and 1.0 have an equal chance of being selected). This is easily performed as follows, with the 100 samples stored in a NumPy array object called "E":
>>> E = np.random.uniform(0.0, 1.0, 100)
>>> E
array([ 0.90319756,  0.39696831,  0.87253663,  0.2541832 ,  0.09188716,
        0.41019978,  0.87418001,  0.13551479,  0.60185788,  0.8717379 ,
        0.91012149,  0.9781284 ,  0.97365995,  0.95618329,  0.25079489,
        0.94314188,  0.92708129,  0.64377239,  0.27262929,  0.63310245,
        0.7315558 ,  0.53799042,  0.04425291,  0.1377755 ,  0.69068289,
        0.9929916 ,  0.56488252,  0.25588388,  0.81735705,  0.98430142,
        0.38541288,  0.81925846,  0.23941429,  0.9996938 ,  0.49898967,
        0.87731326,  0.41729317,  0.08407739,  0.09734557,  0.23217088,
        0.29291853,  0.09453821,  0.05676644,  0.97170175,  0.25987992,
        0.11203194,  0.68670969,  0.77228168,  0.85391461,  0.96315244,
        0.34276206,  0.8918815 ,  0.93095419,  0.33098585,  0.71910359,
        0.73351498,  0.20238829,  0.75232483,  0.12985561,  0.13185072,
        0.99842567,  0.78278125,  0.1550288 ,  0.03083502,  0.34190622,
        0.1755099 ,  0.67803282,  0.31715532,  0.29491133,  0.35878659,
        0.46047523,  0.27475024,  0.24985922,  0.5595999 ,  0.14831301,
        0.20137857,  0.79864609,  0.81361761,  0.22554692,  0.84947817,
        0.48316828,  0.8848909 ,  0.27639724,  0.02182878,  0.95491984,
        0.31427821,  0.6760356 ,  0.27305986,  0.73480237,  0.9581474 ,
        0.5614434 ,  0.12382754,  0.42856939,  0.69581633,  0.39598608,
        0.86023031,  0.59549305,  0.41717616,  0.70233037,  0.66019342])
We can perform a sanity check for these results using NumPy's histogram tool. For the present example, we expect that approximately 50% of the sampled values will lie between 0.0 and 0.5, and that the remaining 50% will lie between 0.5 and 1.0 (given that we have two bins of equal width defined by lower and upper limits of 0.0 and 1.0, respectively):
>>> np.histogram(E, bins=2, range=(0.0, 1.0))
(array([49, 51]), array([ 0. , 0.5, 1. ]))
Our expectations are verified given that the histogram tool indicates that 49 out of the 100 samples (49%) lie in the first bin (0.0 to 0.5) and that 51 out of the 100 samples (51%) lie in the second bin (0.5 to 1.0).

Summary

This tutorial provides an overview of the features of the NumPy scientific computing package, and uses several examples to demonstrate how easy it is to learn and use. Documentation and examples for the NumPy package can be found at the official site.

8 Tips to Solve Linux & Unix Systems Hard Disk Problems Like Disk Full Or Can’t Write to the Disk

http://www.cyberciti.biz/datacenter/linux-unix-bsd-osx-cannot-write-to-hard-disk

Can't write to the hard disk on a Linux or Unix-like system? Want to diagnose corrupt disk issues on a server? Want to find out why you are getting "disk full" messages on screen? Want to learn how to solve full, corrupt, and failed disk issues? Try these eight tips to diagnose hard disk drive problems on a Linux or Unix server.



#1 - Error: No space left on device

When the disk is full on a Unix-like system you get an error message on screen. In this example, I'm running the fallocate command and my system ran out of disk space:
$ fallocate -l 1G test4.img
fallocate: test4.img: fallocate failed: No space left on device
The first step is to run the df command to find out information about the total and available space on a file system, including partitions:
$ df
OR try human readable output format:
$ df -h
Sample outputs:
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda6       117G   54G   57G  49% /
udev            993M  4.0K  993M   1% /dev
tmpfs           201M  264K  200M   1% /run
none            5.0M     0  5.0M   0% /run/lock
none           1002M     0 1002M   0% /run/shm
/dev/sda1       1.8G  115M  1.6G   7% /boot
/dev/sda7       4.7G  145M  4.4G   4% /tmp
/dev/sda9       9.4G  628M  8.3G   7% /var
/dev/sda8        94G  579M   89G   1% /ftpusers
/dev/sda10      4.0G  4.0G     0 100% /ftpusers/tmp
From the df command output it is clear that /dev/sda10 has 4.0GB of total space, of which all 4.0GB is used.

Fixing the problem when the disk is full

  1. Compress uncompressed log and other files using the gzip, bzip2, or tar command:
    gzip /ftpusers/tmp/*.log
    bzip2 /ftpusers/tmp/large.file.name
  2. Delete unwanted files using the rm command on a Unix-like system:
    rm -rf /ftpusers/tmp/*.bmp
  3. Move files to other system or external hard disk using rsync command:
     
    rsync --remove-source-files -azv /ftpusers/tmp/*.mov /mnt/usbdisk/
    rsync --remove-source-files -azv /ftpusers/tmp/*.mov server2:/path/to/dest/dir/
     
  4. Find out the largest directories or files eating disk space on a Unix-like system:
     
    du -a /ftpusers/tmp | sort -n -r | head -n 10
    du -cks * | sort -rn | head
     
  5. Truncate a particular file. This is useful for log files:
     
    truncate -s 0 /ftpusers/ftp.upload.log
    ### bash/sh etc ##
    >/ftpusers/ftp.upload.log
    ## perl ##
    perl -e'truncate "filename", LENGTH'
     
  6. Find and remove large files that are open but have been deleted on Linux or Unix:
    ## Works on Linux/Unix/OSX/BSD etc ##
    lsof -nP | grep '(deleted)'
     
    ## Only works on Linux ##
    find /proc/*/fd -ls | grep '(deleted)'
     
    To truncate it:
    ## works on Linux/Unix/BSD/OSX etc all ##
    > "/path/to/the/deleted/file.name"
    ## works on Linux only ##
    > "/proc/PID-HERE/fd/FD-HERE"
     

#2 - Is the file system in read-only mode?

You may end up getting an error such as follows when you try to create a file or save a file:
$ cat > file
-bash: file: Read-only file system

Run the mount command to find out if the file system is mounted in read-only mode:
$ mount
$ mount | grep '/ftpusers'

To fix this problem, simply remount the file system in read-write mode on a Linux-based system:
# mount -o remount,rw /ftpusers/tmp
Another example, from my FreeBSD 9.x server, remounting / in read-write mode:
# mount -u -o rw /dev/ad0s1a /

#3 - Am I running out of inodes?

Sometimes the df command reports that there is enough free space, but the system claims the file system is full. You then need to check the inodes, which identify files and their attributes on a file system, using the following commands:
$ df -i
$ df -i /ftpusers/

Sample outputs:
Filesystem      Inodes IUsed   IFree IUse% Mounted on
/dev/sda8      6250496 11568 6238928    1% /ftpusers
So /ftpusers has 6,250,496 total inodes but only 11,568 are used. You are free to create another 6,238,928 files on the /ftpusers partition. If 100% of your inodes are used, try the following options (see the snippet below for one way to locate the culprits):
  • Find unwanted files and delete or move them to another server.
  • Find unwanted large files and delete or move them to another server.
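One quick way to locate the inode-hungry directories (a sketch that counts files per top-level directory under /ftpusers; adjust the path for your system):
for d in /ftpusers/*/; do
    printf "%7d %s\n" "$(find "$d" | wc -l)" "$d"
done | sort -rn | head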

#4 - Is my hard drive dying?

I/O errors in a log file (such as /var/log/messages) indicate that something is wrong with the hard disk and it may be failing. You can check the hard disk for errors using the smartctl command, which is a control and monitoring utility for SMART disks on Linux and Unix-like operating systems. The syntax is:
 
smartctl -a /dev/DEVICE
# check for /dev/sda on a Linux server
smartctl -a /dev/sda
 
You can also use "Disk Utility" to get the same information:
Fig. 01: Gnome disk utility (Applications > System Tools > Disk Utility)
Note: Don't expect too much from the SMART tools; they may not work in some cases. Make backups on a regular basis.

#5 - Are my hard drive and server too hot?

High temperatures can cause a server to function poorly, so you need to maintain the proper temperature of the server and its disks. High temperatures can result in server shutdown or damage to file systems and disks. Use the hddtemp or smartctl utility to find out the temperature of your hard disks on a Linux or Unix system by reading data from S.M.A.R.T. on drives that support this feature (only modern hard drives have a temperature sensor). hddtemp supports reading S.M.A.R.T. information from SCSI drives too, and can work as a simple command-line tool or as a daemon that gathers information from all servers:
 
hddtemp /dev/DISK
hddtemp /dev/sg0
 
Sample outputs:
Fig.02: hddtemp in action

You can use the smartctl command as follows too:
 
smartctl -d ata -A /dev/sda | grep -i temperature
 

How do I get the CPU temperature?

You can use Linux hardware monitoring tool such as lm_sensor to get the cpu temperature on a Linux based system:
 
sensors
 
Sample outputs from Debian Linux server:
Fig.03: sensors command providing cpu core temperature and other info on a Linux

#6 - Dealing with corrupted file systems

A file system on a server may get corrupted due to a hard reboot or other errors such as bad blocks. You can repair a corrupted file system with the fsck command (unmount it first):
 
umount /ftpusers
fsck -y /dev/sda8
 
See how to survive Linux filesystem failures for more info.

#7 - Dealing with software RAID on Linux

To find the current status of a Linux software RAID, type the following commands:
## get detail on /dev/md0 raid ##
mdadm --detail /dev/md0
 
## Find status ##
cat /proc/mdstat
watch cat /proc/mdstat
 
Sample outputs:
Fig. 04: Find the status of a Linux software raid command

If you need to replace a failed hard drive, make sure you remove the correct failed drive. In this example, I'm going to replace /dev/sdb (the 2nd hard drive of a RAID 6 array). It is not necessary to take the storage offline to repair the RAID on Linux, but this only works if your server supports hot-swappable hard disks:
## remove disk from an array md0 ##
mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm --manage /dev/md0 --remove /dev/sdb1
 
# Do the same steps again for rest of /dev/sdbX ##
# Power down if not hot-swappable hard disk: ##
shutdown -h now
 
## copy partition table from /dev/sda to newly replaced /dev/sdb ##
sfdisk -d /dev/sda | sfdisk /dev/sdb
fdisk -l
 
## Add it ##
mdadm --manage /dev/md0 --add /dev/sdb1
# do the same steps again for rest of /dev/sdbX ##
 
# Now md0 will sync again. See it on screen ##
watch cat /proc/mdstat
 
See our tips on increasing RAID sync speed on Linux for more information.

#8 - Dealing with hardware RAID

You can use the smartctl command or a vendor-specific command to find out the status of the RAID and the disks in your controller:
 
## SCSI disk
smartctl -d scsi --all /dev/sgX
 
## Adaptec RAID array
/usr/StorMan/arcconf getconfig 1
 
## 3ware RAID Array
tw_cli /c0 show
 
See your vendor specific documentation to replace a failed disk.

Monitoring disk health

See our previous tutorials:
  1. Monitoring hard disk health with smartd under Linux or UNIX operating systems
  2. Shell script to watch the disk space
  3. UNIX get an alert when disk is full
  4. Monitor UNIX / Linux server disk space with a shell script
  5. Perl script to monitor disk space and send an email
  6. NAS backup server disk monitoring shell script

Conclusion

I hope these tips will help you troubleshoot system disk issues on a Linux/Unix server. I also recommend implementing a good backup plan in order to be able to recover from disk failure, accidental file deletion, file corruption, or complete server destruction.

How to run SQL queries against Apache log files on Linux

http://xmodulo.com/sql-queries-apache-log-files-linux.html

One of the distinguishing features of Linux is that, under normal circumstances, you can know what is happening and what has happened on your system by analyzing one or more system logs. Indeed, system logs are the first resource a system administrator tends to look at while troubleshooting system or application issues. In this article, we will focus on the Apache access logs generated by the Apache HTTP web server. We will explore an alternative way of analyzing them using asql, an open-source tool that lets you run SQL queries against the logs in order to view the same information in a friendlier format.

Background on Apache Logs

There are two kinds of Apache logs:
  • Access log: Found at /var/log/apache2/access.log (for Debian) or /var/log/httpd/access_log (for Red Hat). Contains records of every request served by an Apache web server.
  • Error log: Found at /var/log/apache2/error.log (for Debian) or /var/log/httpd/error_log (for Red Hat). Contains records of all error conditions reported by an Apache web server. Error conditions include, but are not limited to, 403 (Forbidden, usually returned after a valid request missing access credentials or insufficient read permissions), and 404 (Not found, returned when the requested resource does not exist).
Although the verbosity of Apache access log file can be customized through Apache's configuration files, we will assume the default format in this article, which is as follows:
Remote IP - Request date - Request type - Response code - Requested resource - Remote browser (may also include operating system)
So a typical Apache log entry looks like:
192.168.0.101 - - [22/Aug/2014:12:03:36 -0300] "GET /icons/unknown.gif HTTP/1.1" 200 519 "http://192.168.0.10/test/projects/read_json/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:30.0) Gecko/20100101 Firefox/30.0"
But what about Apache error log? Since error log entries dealing with particular requests have corresponding entries in the access log (which you can customize), you can use the access log file to obtain more information about error conditions (refer to example 5 for more details).
That being said, please note that access log is a system-wide log file. To find the log files of virtual hosts, you may also need to check their corresponding configuration files (e.g., within /etc/apache2/sites-available/[virtual host name] on Debian).

Installing asql on Linux

asql is written in Perl, and requires two Perl modules: a DBI driver for SQLite and GNU readline.

Install asql on Debian, Ubuntu or their derivatives

asql and its dependencies will automatically be installed with aptitude on Debian-based distributions.
# aptitude install asql

Install asql on Fedora, CentOS or RHEL

On CentOS or RHEL, you will need to enable EPEL repository first, and then run the commands below. On Fedora, proceed to the following commands directly.
# yum install perl-DBD-SQLite perl-Term-ReadLine-Gnu
# wget http://www.steve.org.uk/Software/asql/asql-1.7.tar.gz
# tar xvfz asql-1.7.tar.gz
# cd asql
# make install

How Does asql Work?

As you can guess from the dependencies listed above, asql converts unstructured plain-text Apache log files into a structured SQLite database, which can be queried using standard SQL commands. This database can be populated with the contents of current and past log files, including compressed rotated logs such as access.log.X.gz or access_log.old.
First, launch asql from the command line with the following command:
# asql
You will be entering asql's built-in shell interface.

Let's type help to list the available commands in the asql shell:

We will begin by loading all the access logs in asql, which can be done with:
asql> load
In case of Debian, the following command will do:
asql> load /var/log/apache2/access.*
In case of CentOS/RHEL, use this command instead:
asql> load /var/log/httpd/access_log*
When asql finishes loading access logs, we can start querying the database. Note that the database created after loading is "temporary," meaning that if you exit the asql shell, the database will be lost. If you want to preserve the database, you have to save it to a file first. We will see how to do that later (refer to examples 3 and 4).

The database contains a table named logs. The available fields in the logs table can be displayed using the show command:

The .asql hidden file, which is stored in each user's home directory, records the history of the commands that were typed by the user in an asql shell. Thus, you can browse through it using the arrow keys, and repeat previous commands by just pressing ENTER when you find the right one.

SQL Query Examples with asql

Here are a few examples of running SQL queries against Apache log files with asql.
Example 1: Listing the request sources / dates and HTTP status codes returned during the month of October 2014.
SELECT source, date, status FROM logs WHERE date >= '2014-10-01T00:00:00' ORDER BY source;

Example 2: Displaying the total size (in bytes) of requests served per client in descending order.
SELECT source,SUM(size) AS Number FROM logs GROUP BY source ORDER BY Number DESC;

Example 3: Saving the database to [filename] in the current working directory.
save [filename]

This allows us to avoid waiting for the log parsing performed by the load command, as shown earlier.
Example 4: Restoring the database in a new asql session after exiting the current one.
restore [filename]

Example 5: Returning error conditions logged in the access file. In this example, we will display all the requests that returned a 403 (access forbidden) HTTP code.
SELECT source,date,status,request FROM logs WHERE status='403' ORDER BY date;

This goes to show that although asql only analyzes access logs, we can use the status field of a request to display requests with error conditions.

Summary

We have seen how asql can help us analyze Apache logs and present the results in a user-friendly output format. Although you could obtain similar results by using command line utilities such as cat in conjunction with grep, uniq, sort, and wc (to name a few), asql is a Swiss army knife by comparison because it lets us filter the logs with standard SQL syntax according to our needs.
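For comparison, here is a rough shell-only sketch of Example 2 (total bytes served per client). It assumes the default combined log format and a Debian-style log path, where field $1 is the client IP and $10 is the response size in bytes:
awk '{ bytes[$1] += $10 } END { for (ip in bytes) print bytes[ip], ip }' /var/log/apache2/access.log | sort -rn | head
Unlike asql, though, such a pipeline has to be rewritten from scratch for every new question.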

Ubuntu Linux Create and Add Swap File Tutorial

$
0
0
http://www.cyberciti.biz/faq/ubuntu-linux-create-add-swap-file

I'm a new Ubuntu Linux version 14.04 LTS user. I need additional swap space to improve my Ubuntu server's performance. How can I add swap space on Ubuntu Linux 14.04 LTS using the command line over an ssh-based session?

Tutorial details
DifficultyEasy (rss)
Root privilegesYes
RequirementsNone
Estimated completion time10m
Swap space is nothing but a disk storage used to increase the amount of memory available on the Ubuntu Linux server. In this tutorial, you will learn how to create and use a swap file on an Ubuntu Linux server.

What is a swap file on Ubuntu server or desktop system?

As a sysadmin, it is sometimes necessary to add more swap space on a server after installation. A swap file allows Ubuntu Linux to use the hard disk to increase virtual memory.
Virtual Memory = RAM + Swap space/file
Virtual Memory (1GB) = Actual RAM (512MB) + Swap space/file (512MB)
When the Ubuntu server runs low on memory, it swaps a section of RAM (say, an idle program like foo) onto the hard disk (swap space) to free up memory for other programs. Then, when you need that program (foo) again, the kernel swaps it back in, and it changes places with another program in RAM.
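You can see the RAM and swap that make up virtual memory at any time with the free command (run it before and after this tutorial to watch the swap total grow):
# free -m
# swapon -s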

Procedure to add a swap file on Ubuntu Linux

Open the Terminal app or use an ssh client to get into the remote server. Log in as the root user using the sudo command:
 
sudo -s
 

Create a swap file command

Type the following command to create a 2GB swap file on Ubuntu:
# dd if=/dev/zero of=/swapfile bs=1G count=2
Sample outputs:
2+0 records in
2+0 records out
2147483648 bytes (2.1 GB) copied, 20.2256 s, 106 MB/s
Verify that the file has been created on the server:
# ls -lh /swapfile
Sample outputs:
-rw-r--r-- 1 root root 2.0G Oct 29 14:07 /swapfile

Creating swap space using fallocate command instead of dd command

Instead of the dd command, you can use the faster fallocate command to create a swap file as follows:
# fallocate -l 1G /swapfile-1
# ls -lh /swapfile-1

Sample outputs:
-rw-r--r-- 1 root root 1.0G Oct 29 14:11 /swapfile-1

Secure the swap file

Type the following chown and chmod commands to secure the swap file and set correct file permissions for security reasons:
# chown root:root /swapfile
# chmod 0600 /swapfile
# ls -lh /swapfile

Sample outputs:
-rw------- 1 root root 2.0G Oct 29 14:07 /swapfile
A world-readable swap file is a huge local vulnerability. The above commands make sure only root user can read and write to the file.

Turn on the swap file

First, use the mkswap command as follows to set up the swap area on Ubuntu:
# mkswap /swapfile
Sample outputs:
Setting up swapspace version 1, size = 2097148 KiB
no label, UUID=10231c61-6e55-4dd3-8324-9e2a892e7137
Finally, activate the swap file, enter:
# swapon /swapfile

Verify new swap file and settings on Ubuntu

Type the following command:
# swapon -s
Sample outputs:
Filename    Type  Size Used Priority
/dev/sda5 partition 3998716 704 -1
/swapfile file 2097148 0 -2
You can also run the following commands to verify swap file and its usage:
# grep -i --color swap /proc/meminfo
# top
# htop
# atop

How can I disable swapfile on Ubuntu?

You need to use the swapoff command as follows:
# swapoff /swapfile
# swapon -s

Update /etc/fstab file

You need to make sure the swap file is enabled when the server comes online after a reboot. Edit the /etc/fstab file, enter:
# vi /etc/fstab
Append the following line:
/swapfile none            swap    sw              0       0
Save and close the file.
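To gain some confidence that the fstab entry works without actually rebooting, you can cycle all swap off and back on. Be careful on a busy server, as disabling swap forces all swapped-out pages back into RAM first:
# swapoff -a
# swapon -a
# swapon -s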

Tuning the swap file (i.e. tuning virtual memory)

You can tune the following settings:
  1. swappiness
  2. min_free_kbytes
  3. vfs_cache_pressure

How do I set swappiness on an Ubuntu server?

The syntax is:
# sysctl vm.swappiness=VALUE
# sysctl vm.swappiness=20

OR
# echo VALUE > /proc/sys/vm/swappiness
# echo 30 > /proc/sys/vm/swappiness

The value in the /proc/sys/vm/swappiness file controls how aggressively the kernel will swap memory pages. Higher values increase aggressiveness; lower values decrease it. The default value is 60. To make the change permanent, add the following line to /etc/sysctl.conf:
 
echo 'vm.swappiness=30' >> /etc/sysctl.conf
 
For a database server such as Oracle or MySQL, I suggest you set a swappiness value of 10. For more information see the official Linux kernel virtual memory settings page.
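The other knobs listed above are set the same way. The values below are common starting points rather than official recommendations, so treat this as a hedged example:
## Keep dentry/inode caches around longer (kernel default is 100) ##
# sysctl vm.vfs_cache_pressure=50
## Reserve roughly 64MB of memory that the kernel keeps free ##
# sysctl vm.min_free_kbytes=65536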

Shell Scripting – Checking Conditions with if

$
0
0
http://www.linuxtechi.com/shell-scripting-checking-conditions-with-if

In the Bourne shell, the if statement checks whether a condition is true or not. If so, the shell executes the block of code associated with the if statement. If the condition is not true, the shell jumps beyond the end of the if statement block and continues on.
Syntax of if Statement :
if [ condition_command ]
then
        command1
        command2
        ……..
        last_command
fi
Example:
#!/bin/bash
number=150
if [ $number -eq 150 ]
then
echo "Number is 150"
fi

if-else Statement :

In addition to the normal if statement, we can extend the if statement with an else block. The basic idea is that if the condition is true, the if block is executed; if it is false, the else block is executed.
Syntax :
if [ condition_command ]
then
       command1
       command2
       ……..
       last_command
else
       command1
       command2
       ……..
       last_command
fi
Example:
#!/bin/bash
number=150
if [ $number -gt 250 ]
then
echo "Number is greater"
else
echo "Number is smaller"
fi

If..elif..else..fi Statement (Short for else if)

The Bourne shell syntax for the if statement allows an else block that gets executed if the test is not true. We can nest if statements, allowing for multiple conditions. As an alternative, we can use the elif construct, short for else if.
Syntax :
if [ condition_command ]
then
       command1
       command2
       ……..
       last_command
elif [ condition_command2 ]
then
       command1
       command2
       ……..
       last_command
else
       command1
       command2
       ……..
       last_command
fi
Example :
#!/bin/bash
number=150
if [ $number -gt 300 ]
then
echo "Number is greater"
elif [ $number -lt 300 ]
then
echo "Number is Smaller"
else
echo "Number is equal to actual value"
fi

Nested if statements :

if and else statements can be nested in a bash script. The keyword 'fi' marks the end of an inner if statement, and every if statement must be closed with the keyword 'fi'.
Basic syntax of nested if is shown below :
if [ condition_command ]
then
        command1
        command2
        ……..
        last_command
else
        if [ condition_command2 ]
        then
                command1
                command2
                ……..
                last_command
        else
                command1
                command2
                ……..
                last_command
        fi
fi
Example:
#!/bin/bash
number=150
if [ $number -eq 150 ]
then
echo "Number is 150"
else
if [ $number -gt 150 ]
then
echo "Number is greater"
else
echo "'Number is smaller"
fi
fi
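The same constructs can test files instead of numbers. As a small practical sketch (the path is just an example), -f checks for a regular file and -d for a directory:
#!/bin/bash
file="/etc/passwd"
if [ -f "$file" ]
then
echo "$file is a regular file"
elif [ -d "$file" ]
then
echo "$file is a directory"
else
echo "$file does not exist"
fi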

How to encrypt files and directories with eCryptFS on Linux

$
0
0
http://xmodulo.com/encrypt-files-directories-ecryptfs-linux.html

You do not have to be a criminal or work for the CIA to use encryption. You simply don't want anybody to spy on your financial data, family pictures, unpublished manuscripts, or secret notes where you have jotted down startup ideas which you think can make you super rich.
I have heard people telling me "I'm not important enough to be spied on" or "I have nothing to hide worth caring about." Well, my opinion is that even if I don't have anything to hide, or I could publish a picture of my kids with my dog, I have the right not to do it, and I want to protect my privacy.

Types of Encryption

We have largely two different ways to encrypt files and directories. One method is filesystem-level encryption, where only certain files or directories (e.g., /home/alice) are encrypted selectively. To me, this is a perfect way to start. You don't need to re-install everything to enable or test encryption. Filesystem-level encryption has some disadvantages, though. For example, many modern applications cache (part of) files in unencrypted portions of your hard drive, such as swap partition, /tmp and /var folders, which can result in privacy leaks.
The other way is so-called full-disk encryption, which means that the entire disk is encrypted (possibly except for a master boot record). Full disk encryption works at the physical disk level; every bit written to the disk is encrypted, and anything read from the disk is automatically decrypted on the fly. This will prevent any potential unauthorized access to unencrypted data, and ensure that everything in the entire filesystem is encrypted, including swap partition or any temporarily cached data.

Available Encryption Tools

There are several options to implement encryption in Linux. In this tutorial, I am going to describe one option: eCryptFS, a stacked cryptographic filesystem tool. For your reference, here is a roundup of available Linux encryption tools.

Filesystem-level encryption

  • EncFS: one of the easiest ways to try encryption. EncFS works as a stacked filesystem, so you just create an encrypted folder and mount it to a folder to work with.
  • eCryptFS: a POSIX compliant cryptographic filesystem, eCryptFS works in the same way as EncFS, so you have to mount it.

Full-disk encryption

  • Loop-AES: the oldest disk encryption method. It is really fast and works on old systems (e.g., the kernel 2.0 branch).
  • DMCrypt: the most common disk encryption scheme supported by the modern Linux kernel.
  • CipherShed: an open-source fork of the discontinued TrueCrypt disk encryption program.

Basics of eCryptFS

eCryptFS is a stacked cryptographic filesystem, which has been natively supported by the Linux kernel since 2.6.19 (as the ecryptfs module). An eCryptFS-encrypted pseudo filesystem is mounted on top of your current filesystem. It works perfectly on the EXT filesystem family and others like JFS, XFS, ReiserFS, Btrfs, even NFS/CIFS shares. Ubuntu uses eCryptFS as its default method to encrypt home directories, and so does ChromeOS. Underneath, eCryptFS uses the AES algorithm by default, but it supports other algorithms, such as blowfish, des3, cast5 and cast6. You will be able to choose among them if you create a manual setup of eCryptFS.
Like I said, Ubuntu lets us choose whether to encrypt our /home directory during installation. Well, this is the easiest way to use eCryptFS.

Ubuntu provides a set of user-friendly tools that make our life easier with eCryptFS, but enabling eCryptFS during Ubuntu installation only creates a specific pre-configured setup. So in case the default setup doesn't fit your needs, you will need to perform a manual setup. In this tutorial, I will describe how to set up eCryptFS manually on major Linux distros.

Installation of eCryptFS

Debian, Ubuntu or its derivatives:
$ sudo apt-get install ecryptfs-utils
Note that if you chose to encrypt your home directory during Ubuntu installation, eCryptFS should be already installed.
CentOS, RHEL or Fedora:
# yum install ecryptfs-utils
Arch Linux:
$ sudo pacman -S ecryptfs-utils
After installing the package, it is a good practice to load the eCryptFS kernel module just to be sure:
$ sudo modprobe ecryptfs

Configure eCryptFS

Now let's start encrypting some directory by running eCryptFS configuration tool:
$ ecryptfs-setup-private

It will ask for a login passphrase and a mount passphrase. The login passphrase is the same as your normal login password. The mount passphrase is used to derive a file encryption master key. Leave it blank to generate one as it's safer. Log out and log back in.
You will notice that eCryptFS created two directories by default: Private and .Private in your home directory. The ~/.Private directory contains encrypted data, while you can access corresponding decrypted data in the ~/Private directory. At the time you log in, the ~/.Private directory is automatically decrypted and mapped to the ~/Private directory, so you can access it. When you log out, the ~/Private directory is automatically unmounted and the content in the ~/Private directory is encrypted back into the ~/.Private directory.
The way eCryptFS knows that you own the ~/.Private directory and automatically decrypts it into the ~/Private directory, without you having to type a password, is through an eCryptFS PAM module which does the trick for us.
In case you don't want to have the ~/Private directory automatically mounted upon login, just add the "--noautomount" option when running ecryptfs-setup-private tool. Similarly, if you do not want the ~/Private directory to be automatically unmounted after logout, specify "--noautoumount" option. But then, you will have to mount or unmount ~/Private directory manually by yourself:
$ ecryptfs-mount-private ~/.Private ~/Private
$ ecryptfs-umount-private ~/Private
You can verify that .Private folder is mounted by running:
$ mount

Now we can start putting any sensitive files in ~/Private folder, and they will automatically be encrypted and locked down in ~/.Private folder when we log out.
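A quick way to see the encryption at work is to create a file in ~/Private and then list ~/.Private; the encrypted name below is illustrative, as the actual string will differ on your system:
$ echo "secret notes" > ~/Private/notes.txt
$ ls ~/.Private
ECRYPTFS_FNEK_ENCRYPTED.FWa1cbTl0SXGp... (illustrative)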
All this seems pretty magical. Basically ecryptfs-setup-private tool makes everything easy to set up. If you want to play a little more and set up specific aspects of eCryptFS, go to the official documentation.

Conclusion

To conclude, if you care a great deal about your privacy, the best setup I recommend is to combine eCryptFS-based filesystem-level encryption with full-disk encryption. Always remember though, file encryption alone does not guarantee your privacy.

How to Get Open Source Android

$
0
0
http://www.linux.com/learn/tutorials/792900-how-to-get-open-source-android-

CyanogenMod is one of the best and most popular FOSS Android variants. This is a complete replacement for Google's Android, just like you can replace Debian with Ubuntu or Linux Mint. Image credit: Flickr, creative commons.
Android is an astonishing commercial success, and is often touted as a Linux success. In some ways it is; Google was able to leverage Linux and free/open source software to get Android to market in record time, and to offer a feature set that quickly outstripped the old champion iOS. But it's not Linux as we know it. Most Android devices are locked down, and we can't freely download and install whatever operating systems we want like we can on our Linux PCs, or install whatever apps we want without jailbreaking the devices we own. We can't set up a business to sell Google Android devices without jumping through a lot of expensive hoops (see The hidden costs of building an Android device and Secret Ties in Google's "Open" Android.) We can't even respin Google Android however we want to and redistribute it, because Google requires bundling a set of Google apps.
Figure 1: F-Droid is a FOSS Android repository and an alternative to Google Play for downloading open source applications on Android.
So where do you go to find real open source Android? Does such a thing even exist? Why yes it does.

F-Droid: FOSS Repository

There are quite a few Android repositories other than the Google Play Store, such as the Amazon Appstore for Android, Samsung Galaxy Apps, and the Opera Mobile Store. But there is only one, as far as I know, that stocks only free/open source apps, and that is F-Droid (figure 1).
F-Droid is a pure volunteer effort. It was founded in 2010 by Ciaran Gultnieks, and is now operated by F-Droid Limited, a non-profit organisation registered in England. F-Droid relies on donations and community support. The good F-Droid people perform security and privacy checks on submitted apps, though they wisely warn that there are no guarantees. F-Droid promises to respect your privacy and to not track you, your devices, or what you install. You don't need to register for an account to use the F-Droid client, which sends no identifying information to their servers other than its version number.
To get F-Droid, all you do is download and install the F-Droid client (the download button is on the front page of the site). Easy peasy. You can browse and search apps on the website and in the client.

Other FOSS Android Directories

DroidBreak is a nice resource for finding FOSS Android apps. DroidBreak is not a software repository, but a well-organized place to find apps.
AOpenSource.com is another FOSS Android directory. It gives more information on most of the apps, and has some good links to Android books.
PRISM Break lists alternatives to popular closed-source proprietary apps, and is privacy- and security-oriented.
Now let's look at how to get a FOSS Android operating system.

CyanogenMod

CyanogenMod is one of the best and most popular FOSS Android variants. This is a complete replacement for Google's Android, just like you can replace Debian with Ubuntu or Linux Mint. (Or Mint with Debian. Or whatever.) It is based on the Android Open Source Project.
Figure 2: CyanogenMod is a complete replacement for Google's Android OS.
All CyanogenMod source code is freely available on their Github repository. CyanogenMod supports bales of features including CPU overclocking, controlling permissions on apps, soft buttons, full tethering with no backtalk, easier Wi-Fi, Bluetooth, and GPS management, and absolutely no spyware, which seems to be the #1 purpose of most of the apps in the Play Store. CyanogenMod is more like a real Linux: completely open and modifiable.
CyanogenMod has a bunch of nice user-friendly features: a blacklist for blocking annoying callers, a quick setting ribbon for starting your favorite apps with one swipe, user-themeable, a customizable status bar, profiles for multiple users or multiple workflows, a customizable lockscreen...in short, a completely user-customizable interface. You get a superuser and unprivileged users, all just like your favorite Linux desktop.
CyanogenMod has been ported to a lot of devices, so chances are your phone or tablet is already supported. Amazon Kindle Fire, ASUS, Google Nexus, HTC, LG, Motorola, Samsung, Sony, and lots more. A large and active community supports CyanogenMod, and the Wiki contains bales of good documentation, including help for wannabe developers.
So how do you install CyanogenMod? Isn't that the scary part, where a mistake bricks your device? That is a real risk. So start with nothing-to-lose practice gadgets: look for some older used tablets and smartphones for cheap and practice on them. Don't risk your shiny new stuff until you've gained experience. Anyway, installation is not all that scary as the good CyanogenMod people have built a super-nice reliable installer that does not require that you be a mighty guru. You don't need to root your phone because the installer does that for you. After installation the updater takes care of keeping your installation current.

Replicant

Replicant gets my vote for best name. Please treat yourself to a viewing of the movie "Blade Runner" if you don't get the reference. Even with a Free Android operating system, phones and tablets still use a lot of proprietary blobs, and one of the goals of Replicant is to replace these with Free software. Replicant was originally based on the Android Open Source Project, and then migrated to CyanogenMod to take advantage of its extensive device support.
Fig. 3: Replicant is a more Free software-oriented, CyanogenMod-based OS.
Replicant is a little more work to install, so you'll acquire a deeper knowledge of how to get software on devices that don't want you to. Replicant is sponsored by the Free Software Foundation.
The Google Play Store has over a million apps. This sounds impressive, but many of them are junk, most of them are devoted to data-mining you for all you're worth, and how many Mine Sweeper and Mahjongg ripoffs do you need? Android is destined to be a streamlined general-purpose operating system for a multitude of portable low-power devices (coming to a refrigerator near you! Why? Because!), and this is a great time to get acquainted with it on a deeper level.

How to scan Linux for vulnerabilities with lynis

$
0
0
http://xmodulo.com/how-to-scan-linux-for-vulnerabilities.html

As a system administrator, Linux security technician or system auditor, your responsibility can involve any combination of these: software patch management, malware scanning, file integrity checks, security audit, configuration error checking, etc. If there is an automatic vulnerability scanning tool, it can save you a lot of time checking up on common security issues.
One such vulnerability scanner on Linux is lynis. This tool is open-source (GPLv3), and actually supported on multiple platforms including Linux, FreeBSD, and Mac OS.
To install lynis on Linux, do the following.
$ wget http://cisofy.com/files/lynis-1.6.3.tar.gz
$ sudo tar xvfz lynis-1.6.3.tar.gz -C /opt
To scan Linux for vulnerabilities with lynis, run the following.
$ cd /opt/lynis
$ sudo ./lynis --check-all -Q
Once lynis starts scanning your system, it will perform auditing in a number of categories:
  • System tools: system binaries
  • Boot and services: boot loaders, startup services
  • Kernel: run level, loaded modules, kernel configuration, core dumps
  • Memory and processes: zombie processes, IO waiting processes
  • Users, groups and authentication: group IDs, sudoers, PAM configuration, password aging, default mask
  • Shells
  • File systems: mount points, /tmp files, root file system
  • Storage: usb-storage, firewire ohci
  • NFS
  • Software: name services: DNS search domain, BIND
  • Ports and packages: vulnerable/upgradable packages, security repository
  • Networking: nameservers, promiscuous interfaces, connections
  • Printers and spools: cups configuration
  • Software: e-mail and messaging
  • Software: firewalls: iptables, pf
  • Software: webserver: Apache, nginx
  • SSH support: SSH configuration
  • SNMP support
  • Databases: MySQL root password
  • LDAP services
  • Software: php: php options
  • Squid support
  • Logging and files: syslog daemon, log directories
  • Insecure services: inetd
  • Banners and identification
  • Scheduled tasks: crontab/cronjob, atd
  • Accounting: sysstat data, auditd
  • Time and synchronization: ntp daemon
  • Cryptography: SSL certificate expiration
  • Virtualization
  • Security frameworks: AppArmor, SELinux, grsecurity status
  • Software: file integrity
  • Software: malware scanners
  • Home directories: shell history files
The screenshot of lynis in action is shown below:

Once scanning is completed, the auditing report of your system is generated and stored in /var/log/lynis.log.
The audit report contains warnings for potential vulnerabilities detected by the tool. For example:
$ sudo grep Warning /var/log/lynis.log
[20:20:04] Warning: Root can directly login via SSH [test:SSH-7412] [impact:M]
[20:20:04] Warning: PHP option expose_php is possibly turned on, which can reveal useful information for attackers. [test:PHP-2372] [impact:M]
[20:20:06] Warning: No running NTP daemon or available client found [test:TIME-3104] [impact:M]
The audit report also contains a number of suggestions that can help harden your Linux system. For example:
$ sudo grep Suggestion /var/log/lynis.log
[20:19:41] Suggestion: Install a PAM module for password strength testing like pam_cracklib or pam_passwdqc [test:AUTH-9262]
[20:19:41] Suggestion: When possible set expire dates for all password protected accounts [test:AUTH-9282]
[20:19:41] Suggestion: Configure password aging limits to enforce password changing on a regular base [test:AUTH-9286]
[20:19:41] Suggestion: Default umask in /etc/profile could be more strict like 027 [test:AUTH-9328]
[20:19:42] Suggestion: Default umask in /etc/login.defs could be more strict like 027 [test:AUTH-9328]
[20:19:42] Suggestion: Default umask in /etc/init.d/rc could be more strict like 027 [test:AUTH-9328]
[20:19:42] Suggestion: To decrease the impact of a full /tmp file system, place /tmp on a separated partition [test:FILE-6310]
[20:19:42] Suggestion: Disable drivers like USB storage when not used, to prevent unauthorized storage or data theft [test:STRG-1840]
[20:19:42] Suggestion: Disable drivers like firewire storage when not used, to prevent unauthorized storage or data theft [test:STRG-1846]
[20:20:03] Suggestion: Install package apt-show-versions for patch management purposes [test:PKGS-7394]
. . . .

Scan Your System for Vulnerabilities as a Daily Cron Job

To get the most out of lynis, it’s recommended to run it on a regular basis, for example, as a daily cronjob. When run with "--cronjob" option, lynis runs in automatic, non-interactive scan mode.
The following is a daily cronjob script that runs lynis in automatic mode to audit your system, and archives daily scan reports.
$ sudo vi /etc/cron.daily/scan.sh
#!/bin/sh

AUDITOR="automated"
DATE=$(date +%Y%m%d)
HOST=$(hostname)
LOG_DIR="/var/log/lynis"
REPORT="$LOG_DIR/report-${HOST}.${DATE}"
DATA="$LOG_DIR/report-data-${HOST}.${DATE}.txt"

cd /opt/lynis
./lynis -c --auditor "${AUDITOR}" --cronjob > ${REPORT}

mv /var/log/lynis-report.dat ${DATA}
$ sudo chmod 755 /etc/cron.daily/scan.sh

How to turn your CentOS box into a BGP router using Quagga

$
0
0
http://xmodulo.com/centos-bgp-router-quagga.html

In a previous tutorial, I described how we can easily turn a Linux box into a fully-fledged OSPF router using Quagga, an open source routing software suite. In this tutorial, I will focus on converting a Linux box into a BGP router, again using Quagga, and demonstrate how to set up BGP peering with other BGP routers.
Before we get into details, a little background on BGP may be useful. Border Gateway Protocol (or BGP) is the de-facto standard inter-domain routing protocol of the Internet. In BGP terminology, the global Internet is a collection of tens of thousands of interconnected Autonomous Systems (ASes), where each AS represents an administrative domain of networks managed by a particular provider.
To make its networks globally routable, each AS needs to know how to reach all other ASes in the Internet. That is when BGP comes into play. BGP is the language used by an AS to exchange route information with other neighboring ASes. The route information, often called BGP routes or BGP prefixes, contains AS number (ASN; a globally unique number) and its associated IP address block(s). Once all BGP routes are learned and populated in local BGP routing tables, each AS will know how to reach any public IP addresses on the Internet.
The ability to route across different domains (ASes) is the primary reason why BGP is called an Exterior Gateway Protocol (EGP) or inter-domain routing protocol. Whereas routing protocols such as OSPF, IS-IS, RIP and EIGRP are all Interior Gateway Protocols (IGPs) or intra-domain routing protocols which are responsible for routing within one domain.

Test Scenarios

For this tutorial, let us consider the following topology.

We assume that service provider A wants to establish a BGP peering with service provider B to exchange routes. The details of their ASes and IP address spaces are as follows.
  • Service provider A: ASN (100), IP address space (100.100.0.0/22), IP address assigned to eth1 of a BGP router (100.100.1.1)
  • Service provider B: ASN (200), IP address space (200.200.0.0/22), IP address assigned to eth1 of a BGP router (200.200.1.1)
Router A and router B will be using the 100.100.0.0/30 subnet for connecting to each other. In theory, any subnet reachable from both service providers can be used for interconnection. In real life, it is advisable to use a /30 subnet from service provider A or service provider B's public IP address space.

Installing Quagga on CentOS

If Quagga is not already installed, we install Quagga using yum.
# yum install quagga
If you are using CentOS 7, you need to apply the following policy change for SELinux. Otherwise, SELinux will prevent Zebra daemon from writing to its configuration directory. You can skip this step if you are using CentOS 6.
# setsebool -P zebra_write_config 1
The Quagga software suite contains several daemons that work together. For BGP routing, we will focus on setting up the following two daemons.
  • Zebra: a core daemon responsible for kernel interfaces and static routes.
  • BGPd: a BGP daemon.

Configuring Logging

After Quagga is installed, the next step is to configure Zebra to manage network interfaces of BGP routers. We start by creating a Zebra configuration file and enabling logging.
# cp /usr/share/doc/quagga-XXXXX/zebra.conf.sample /etc/quagga/zebra.conf
On CentOS 6:
# service zebra start
# chkconfig zebra on
For CentOS 7:
# systemctl start zebra
# systemctl enable zebra
Quagga offers a dedicated command-line shell called vtysh, where you can type commands which are compatible with those supported by router vendors such as Cisco and Juniper. We will be using vtysh shell to configure BGP routers in the rest of the tutorial.
To launch vtysh command shell, type:
# vtysh
The prompt will be changed to the hostname, which indicates that you are inside the vtysh shell.
Router-A#
Now we specify the log file for Zebra by using the following commands:
Router-A# configure terminal
Router-A(config)# log file /var/log/quagga/quagga.log
Router-A(config)# exit
Save Zebra configuration permanently:
Router-A# write
Repeat this process on Router-B as well.

Configuring Peering IP Addresses

Next, we configure peering IP addresses on available interfaces.
Router-A# show interface
Interface eth0 is up, line protocol detection is disabled
. . . . .
Interface eth1 is up, line protocol detection is disabled
. . . . .
Configure eth0 interface's parameters:
Router-A# configure terminal
Router-A(config)# interface eth0
Router-A(config-if)# ip address 100.100.0.1/30
Router-A(config-if)# description "to Router-B"
Router-A(config-if)# no shutdown
Router-A(config-if)# exit
Go ahead and configure eth1 interface's parameters:
Router-A(config)# interface eth1
Router-A(config-if)# ip address 100.100.1.1/24
Router-A(config-if)# description "test ip from provider A network"
Router-A(config-if)# no shutdown
Router-A(config-if)# exit
Now verify configuration:
Router-A# show interface
Interface eth0 is up, line protocol detection is disabled
Description: "to Router-B"
inet 100.100.0.1/30 broadcast 100.100.0.3
Interface eth1 is up, line protocol detection is disabled
Description: "test ip from provider A network"
inet 100.100.1.1/24 broadcast 100.100.1.255
Router-A# show interface description
Interface       Status  Protocol  Description
eth0 up unknown "to Router-B"
eth1 up unknown "test ip from provider A network"
If everything looks alright, don't forget to save.
Router-A# write
Repeat to configure interfaces on Router-B as well.
Before moving forward, verify that you can ping each other's IP address.
Router-A# ping 100.100.0.2
PING 100.100.0.2 (100.100.0.2) 56(84) bytes of data.
64 bytes from 100.100.0.2: icmp_seq=1 ttl=64 time=0.616 ms
Next, we will move on to configure BGP peering and prefix advertisement settings.

Configuring BGP Peering

The Quagga daemon responsible for BGP is called bgpd. First, we will prepare its configuration file.
# cp /usr/share/doc/quagga-XXXXXXX/bgpd.conf.sample /etc/quagga/bgpd.conf
On CentOS 6:
# service bgpd start
# chkconfig bgpd on
For CentOS 7
# systemctl start bgpd
# systemctl enable bgpd
Now, let's enter Quagga shell.
# vtysh
First verify that there are no configured BGP sessions. In some versions, you may find a BGP session with AS 7675. We will remove it as we don't need it.
Router-A# show running-config
... ... ...
router bgp 7675
bgp router-id 200.200.1.1
... ... ...
We will remove any pre-configured BGP session, and replace it with our own.
Router-A# configure terminal
Router-A(config)# no router bgp 7675
Router-A(config)# router bgp 100
Router-A(config-router)# no auto-summary
Router-A(config-router)# no synchronization
Router-A(config-router)# neighbor 100.100.0.2 remote-as 200
Router-A(config-router)# neighbor 100.100.0.2 description "provider B"
Router-A(config-router)# exit
Router-A(config)# exit
Router-A# write
Router-B should be configured in a similar way. The following configuration is provided as reference.
Router-B# configure terminal
Router-B(config)# no router bgp 7675
Router-B(config)# router bgp 200
Router-B(config-router)# no auto-summary
Router-B(config-router)# no synchronization
Router-B(config-router)# neighbor 100.100.0.1 remote-as 100
Router-B(config-router)# neighbor 100.100.0.1 description "provider A"
Router-B(config-router)# exit
Router-B(config)# exit
Router-B# write
When both routers are configured, a BGP peering between the two should be established. Let's verify that by running:
Router-A# show ip bgp summary

In the output, look at the "State/PfxRcd" column. If the peering is down, it will show 'Idle' or 'Active'. Remember, the word 'Active' on a router is always bad: it means the router is actively seeking a neighbor, prefix, or route. When the peering is up, the "State/PfxRcd" column should show the number of prefixes received from this particular neighbor.
In this example output, the BGP peering is just up between AS 100 and AS 200. Thus no prefixes are being exchanged, and the number in the rightmost column is 0.
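For more detail on a particular session, such as its state, timers, and message counters, you can also run:
Router-A# show ip bgp neighbors 100.100.0.2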

Configuring Prefix Advertisements

As specified at the beginning, AS 100 will advertise a prefix 100.100.0.0/22, and AS 200 will advertise a prefix 200.200.0.0/22 in our example. Those prefixes need to be added to BGP configuration as follows.
On Router-A:
Router-A# configure terminal
Router-A(config)# router bgp 100
Router-A(config-router)# network 100.100.0.0/22
Router-A(config-router)# exit
Router-A(config)# exit
Router-A# write
On Router-B:
Router-B# configure terminal
Router-B(config)# router bgp 200
Router-B(config-router)# network 200.200.0.0/22
Router-B(config-router)# exit
Router-B(config)# exit
Router-B# write
At this point, both routers should start advertising prefixes as required.

Testing Prefix Advertisements

First of all, let's verify whether the number of prefixes has changed now.
Router-A# show ip bgp summary

To view more details on the prefixes being exchanged, we can use the following command, which shows the prefixes advertised to neighbor 100.100.0.2:
Router-A# show ip bgp neighbors 100.100.0.2 advertised-routes

To check which prefixes we are receiving from that neighbor:
Router-A# show ip bgp neighbors 100.100.0.2 routes

We can also check all the BGP routes:
Router-A# show ip bgp

The commands below can be used to check which routes in the routing table are learned via BGP.
Router-A# show ip route
Codes: K - kernel route, C - connected, S - static, R - RIP, O - OSPF,
I - ISIS, B - BGP, > - selected route, * - FIB route

C>* 100.100.0.0/30 is directly connected, eth0
C>* 100.100.1.0/24 is directly connected, eth1
B>* 200.200.0.0/22 [20/0] via 100.100.0.2, eth0, 00:06:45
Router-A# show ip route bgp
B>* 200.200.0.0/22 [20/0] via 100.100.0.2, eth0, 00:08:13
The BGP-learned routes should also be present in the Linux routing table.
[root@Router-A~]# ip route
100.100.0.0/30 dev eth0  proto kernel  scope link  src 100.100.0.1
100.100.1.0/24 dev eth1 proto kernel scope link src 100.100.1.1
200.200.0.0/22 via 100.100.0.2 dev eth0 proto zebra
Finally, we are going to test with the ping command. The ping should be successful.
[root@Router-A~]# ping 200.200.1.1 -c 2
To sum up, this tutorial focused on how to run basic BGP on a CentOS box. While this should get you started with BGP, there are other advanced settings, such as prefix filters and BGP attribute tuning (e.g., local preference and path prepending). I will be covering these topics in future tutorials.
Hope this helps.

How To Run Android Apps in Chrome on Mac / Linux / Windows

$
0
0
http://www.makeuseof.com/tag/run-android-apps-chrome-mac-linux-windows

It’s now possible to run Android apps in the Chrome browser — it just takes a little bit of work.
Google has officially brought four Android apps to Chromebooks, so it would seem that it’s only a matter of time before more and more Android apps become officially available on the Chrome browser. If you can’t wait, however, let’s run through a few options for running Android apps in Chrome right now.
Note: We’ll be looking at Chrome on Windows here, but the same processes should work on Macs or Linux devices as well.

Prerequisite: ARChon Custom Runtime

Before getting started, you’ll need to download this Chrome extension. This allows Android apps to work properly on Chrome, but it’s still very much unofficial and unstable, so don’t expect everything to work perfectly.
There are three download options available for the runtime that depend on your system. To check if you have a 32-bit or 64-bit browser, you can navigate to chrome://chrome in your address bar, or you can click the three line menu button in the upper right and select About Google Chrome at the bottom.
Once you’ve downloaded the correct version (and this may take a while, as it’s a 100MB file), unzip the folder. Then type chrome://extensions into your address bar to view a list of all your current extensions. Here, select the Developer Mode box in the upper right.

Now you’ll want to press Load unpacked extension and select the folder where you unzipped ARChon. Make sure it is enabled, and you’re good to go. You can now choose from one of the three options below, depending on which you find easiest.

Option 1: APK Conversion in Android App

Your Android apps as they are now on your phone or tablet are not able to run in Chrome. To make this possible, they have to be repackaged to be compatible with ARChon. This would be a pretty complicated task — if it weren’t for this Android app: ARChon Packager.
Once you’ve got the app downloaded and installed, open it up. You’ll be given two options for choosing an app: an installed app, or an APK from your phone’s storage. An APK is the installable file for an app, but you don’t need to worry about that if you just want to use a regular app you already have installed. Select Installed application and choose next.
I chose Pulse as the app I want to try on Chrome. You can then select if you want it to run in phone or tablet mode in Chrome, and if it should be oriented for portrait or landscape. You can also give it access to the files on your PC or enable ADB if you’re a developer.
Once you hit Finish, the app will be converted into a nearly Chrome-ready ZIP file. You then need to transfer that file over to your computer either by using a USB cable, or by selecting the share button at the end of the process to email it or upload it to your preferred cloud storage service.
When the ZIP file is on your computer, unzip it. You’ll then want to go back into chrome://extensions, select Load unpacked extensions, and select the unzipped folder. Once it’s loaded in, click Launch to access the app.
And there you have it. Using this method, Pulse ran perfectly for me.
But if you don’t have an Android device, the next option might be better for you.

Option 2: APK Conversion in Chrome App

For this option, you’ll need to download Twerk from the Chrome Web Store. You will also need an APK file already, the installable file for an app. APKs are notoriously hard to get hold of because of the high likelihood of malware in so-called “cracked” apps, but there are quite a few legitimate APKs available for download straight from the developers over at the XDA forums.
If you have obtained a legitimate APK, this method will work perfectly. Otherwise, move on to option 3.
The process here is simple. Launch Twerk from the Chrome App Launcher or enter chrome://apps into your address bar. Then, locate your APK file in your local file browser and drag it over into the Twerk window.
You can then select several options, like whether to run it in portrait or landscape, and build it by pressing the pink Android at the bottom. Then you’ll choose where to save it.
After that, head back into your Chrome extensions (chrome://extensions in your address bar) and select Load unpacked extension. Find the folder that Twerk created and select it. Your app should now be in Chrome, and you can launch it just like any other Chrome app!

Option 3: Find Converted APK Online

This option is probably the simplest out there because you don’t have to tinker with any of your own apps. For this one, you’re just going to download apps that are already compatible with ARChon — the biggest disadvantage is the limited amount of apps available like this.
Visit this community-created Google Spreadsheet of apps that have been tested with ARChon. Most of them have a download link at the far right to download the files, but you take your own risk when downloading these. There is no guarantee that they are safe files, so exercise regular caution. You can also try browsing this Chrome APKs subreddit.
Once it is downloaded, unzip it if it’s zipped, go to your Chrome extensions page (chrome://extensions in your address bar), and select Load unpacked extension. Find the unzipped downloaded folder and select it to load it into Chrome. You can now find it at chrome://apps to launch like a regular app!

What Is Your Favorite Android App On Chrome?

As we bide our time until Google makes this an official feature, this is your best bet for getting tons of Android apps running on your Chrome browser.
What is your favorite app that you’ve been able to get running? Do you have any other methods of running Android apps on Chrome that you’d recommend? Let us know in the comments!

eBay open sources a big, fast SQL-on-Hadoop database

$
0
0
https://gigaom.com/2014/10/22/ebay-open-sources-a-big-fast-sql-on-hadoop-database

Summary: eBay has open sourced a database technology, called Kylin, that takes advantage of distributed processing and the HBase data store in order to return faster results for SQL queries over Hadoop data.
Online auction site eBay has open sourced a database technology called Kylin that the company says enables fast queries over even petabytes of data stored in Hadoop. eBay isn’t a big data user on par with companies like Google and Facebook, but it does run technologies such as Hadoop at a fairly large scale and Kylin seems a good example of the type of innovation it’s doing on top of them.
eBay details Kylin in a blog post on Wednesday, citing among other features its REST APIs, ANSI-SQL compatibility, connections to analysis tools Tableau and Excel, and sub-second latency on some queries. However, the most unique features of Kylin involve how it deals with scale. eBay says it can query billions of rows of data — on datasets more than 14 terabytes in size — at speeds much faster than using the traditional Apache Hive tool.
The way Kylin works, at a high level, is to take data from Hive; pre-process large queries using MapReduce; and then store those results as key-value "cuboids" in HBase. When a user runs a Kylin query using a particular set of variables, the values are ready to go without requiring them to be processed again. It's not entirely dissimilar from the cubes that analytic databases have been utilizing for years, but Kylin's cuboids are designed with HBase's preferred data structure in mind.
Here's how eBay says Kylin is used within the company:
At the time of open-sourcing Kylin, we already had several eBay business units using it in production. Our largest use case is the analysis of 12+ billion source records generating 14+ TB cubes. Its 90% query latency is less than 5 seconds. Now, our use cases target analysts and business users, who can access analytics and get results through the Tableau dashboard very easily – no more Hive query, shell command, and so on.
It would be interesting to know how Kylin stacks up against next-generation versions of Hive, Spark SQL and other options for SQL analysis in Hadoop that have emerged as a result of the YARN resource manager available in the latest versions of Apache Hadoop. My guess is it’s slower but more scalable than in-memory options or those not requiring MapReduce processing, but that it might be a solid option for the large percentage of Hadoop users still running earlier versions of the software.

Intro to Systemd Runlevels and Service Management Commands

$
0
0
http://www.linux.com/learn/tutorials/794615-systemd-runlevels-and-service-management



In olden times we had static runlevels. systemd has mechanisms for more flexible and dynamic control of your system.
Before we get into learning more useful systemd commands, let's take a little trip down memory lane. There is this weird dichotomy in Linux-land, where Linux and FOSS are always pushing ahead and progressing, and people are always complaining about it. Which is why I am taking all of this anti-systemd uproar with a grain of salt, because I remember when:
  • Packages were evil, because real Linux users built everything from source code and kept strict control of what went on their systems.
  • Dependency-resolving package managers were evil, because real Linux users resolved dependency hells manually.
  • Except for apt-get, which was always good, so only Yum was evil.
  • Because Red Hat was the Microsoft of Linux.
  • Yay Ubuntu!
  • Boo hiss Ubuntu!
And on and on...as I have said lo so many times before, changes are upsetting. They mess with our workflow, which is no small thing because any disruption has a real productivity cost. But we are still in the infant stage of computing, so it's going to keep changing and advancing rapidly for a long time. I'm sure you know people who are stuck in the mindset that once you buy something, like a wrench or a piece of furniture or a pink flamingo lawn ornament, it is forever. These are the people who are still running Windows Vista, or deity help us Windows 95 on some ancient, feeble PC with a CRT monitor, and who don't understand why you keep bugging them to replace it. It still works, right?
Which reminds me of my greatest triumph in keeping an old computer running long after it should have been retired. Once upon a time a friend had this little old 286 running some ancient version of MS-DOS. She used it for a few basic tasks like appointments, diary, and a little old accounting program that I wrote in BASIC for her check register. Who cares about security updates, right? It's not connected to any network. So from time to time I replaced the occasional failed resistor or capacitor, power supply, and CMOS battery. It just kept going. Her tiny old amber CRT monitor grew dimmer and dimmer, and finally it died after 20+ years of service. Now she is using an old Thinkpad running Linux for the same tasks.
If there is a moral to this tangent it escapes me, so let's get busy with systemd.

Runlevels vs. States

SysVInit uses static runlevels to create different states to boot into, and most distros use five:
  • Single-user mode
  • Multi-user mode without network services started
  • Multi-user mode with network services started
  • System shutdown
  • System reboot.
Me, I don't see a lot of practical value in having multiple runlevels, but there they are. Instead of runlevels, systemd allows you to create different states, which gives you a flexible mechanism for creating different configurations to boot into. These states are composed of multiple unit files bundled into targets. Targets have nice descriptive names instead of numbers. Unit files control services, devices, sockets, and mounts. You can see what these look like by examining the prefab targets that come with systemd, for example /usr/lib/systemd/system/graphical.target, which is the default on CentOS 7:
[Unit]
Description=Graphical Interface
Documentation=man:systemd.special(7)
Requires=multi-user.target
After=multi-user.target
Conflicts=rescue.target
Wants=display-manager.service
AllowIsolate=yes
[Install]
Alias=default.target
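Targets are what you work with instead of runlevels. As a quick sketch, these commands show the default target, change it, and switch targets on a running system (multi-user.target is roughly the old non-graphical runlevel 3):
# systemctl get-default
# systemctl set-default multi-user.target
# systemctl isolate multi-user.target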
So what do unit files look like? Let us peer into one. Unit files are in two directories:
  • /etc/systemd/system/
  • /usr/lib/systemd/system/
The first one is for us to play with, and the second one is where packages install unit files. /etc/systemd/system/ takes precedence over /usr/lib/systemd/system/. Hurrah, human over machine. This is the unit file for the Apache Web server:
[Unit]
Description=The Apache HTTP Server
After=network.target remote-fs.target nss-lookup.target
[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/httpd
ExecStart=/usr/sbin/httpd $OPTIONS -DFOREGROUND
ExecReload=/usr/sbin/httpd $OPTIONS -k graceful
ExecStop=/bin/kill -WINCH ${MAINPID}
KillSignal=SIGCONT
PrivateTmp=true
[Install]
WantedBy=multi-user.target
These files are fairly understandable even for systemd newcomers, and unit files are quite a bit simpler than a SysVInit init file, as this snippet from /etc/init.d/apache2 shows:
SCRIPTNAME="${0##*/}"
SCRIPTNAME="${SCRIPTNAME##[KS][0-9][0-9]}"
if [ -n "$APACHE_CONFDIR" ] ; then
if [ "${APACHE_CONFDIR##/etc/apache2-}" != "${APACHE_CONFDIR}" ] ; then
DIR_SUFFIX="${APACHE_CONFDIR##/etc/apache2-}"
else
DIR_SUFFIX=
The whole file is 410 lines.
You can view unit dependencies, and it's always surprising to me how complex they are:
$ systemctl list-dependencies httpd.service

cgroups

cgroups, or control groups, have been present in the Linux kernel for some years, but have not been used very much until systemd. The kernel documentation says: "Control Groups provide a mechanism for aggregating/partitioning sets of tasks, and all their future children, into hierarchical groups with specialized behaviour." In other words, it has the potential to control, limit, and allocate resources in multiple useful ways. systemd uses cgroups, and you can see them. This displays your entire cgroup tree:
$ systemd-cgls
You can generate a different view with the good old ps command:
$ ps xawf -eo pid,user,cgroup,args

Useful Commands

This command reloads the configuration file of a daemon, and not its systemd service file. Use this when you make a configuration change and want to activate it with least disruption, like this example for Apache:
# systemctl reload httpd.service
Restarting a service completely stops and then starts it again. If it is not running, this starts it:
# systemctl restart httpd.service
You can reload systemd's own configuration with one command. This re-reads all unit files and re-creates the whole systemd dependency tree, without restarting any services:
# systemctl daemon-reload
You can reboot, suspend, and poweroff as an ordinary unprivileged user:
$ systemctl reboot
$ systemctl suspend
$ systemctl poweroff
As always, there is much, much more to learn about systemd. Here We Go Again, Another Linux Init: Intro to systemd and Understanding and Using Systemd are good introductions to systemd, with links to more detailed resources.

5 Awesome Open Source Backup Software For Linux and Unix-like Systems

http://www.cyberciti.biz/open-source/awesome-backup-software-for-linux-unix-osx-windows-systems

A good backup plan is essential in order to be able to recover from:
  • Human errors
  • RAID or disk failure
  • File system corruption
  • Data center destruction and more.
In this post I'm going to list some amazingly awesome open source backup software for you.

What to look for when choosing backup software for an enterprise?

Make sure the following features are supported by the backup software you deploy:

  1. Open source software - You must use software for which the original source code is made freely available and may be redistributed and modified. This ensures that you can recover your data in case the vendor or project stops working on the software or refuses to provide patches.
  2. Cross-platform support - Make sure the backup software works well on all of the desktop and server operating systems you have deployed.
  3. Data format - An open data format ensures that you can recover your data even if the vendor or project stops maintaining the software.
  4. Autochangers - Autochanger is an umbrella term for a variety of backup devices, including libraries, near-line storage, and autoloaders. Autochangers allow you to automate the task of loading, mounting, and labeling backup media such as tapes.
  5. Backup media - Make sure you can back up data to tape, disk, DVD, and cloud storage such as AWS.
  6. Encryption datastream - Make sure all client-to-server traffic is encrypted to protect transmissions over the LAN/WAN/Internet.
  7. Database support - Make sure the backup software can back up database servers such as MySQL or Oracle.
  8. Backup span multiple volumes - The backup software should be able to split each backup (dumpfile) into a series of parts, allowing different parts to exist on different volumes. This ensures that large backups (such as a 100TB file) can span more than a single backup device such as a disk or tape volume.
  9. VSS (Volume Shadow Copy) - Microsoft's Volume Shadow Copy Service (VSS) is used to create snapshots of data that is to be backed up. Make sure the backup software supports VSS for MS-Windows clients/servers.
  10. Deduplication - A data compression technique for eliminating duplicate copies of repeating data (for example, images).
  11. License and cost - Make sure you understand the open source license under which the backup software is made available to you.
  12. Commercial support - Open source software can come with community-based support (such as an email list or forum) or professional support (such as subscriptions provided at additional cost). You can use paid professional support for training and consulting purposes.
  13. Reports and alerts - Finally, you must be able to see backup reports and current job status, and get an alert when something goes wrong while making backups.

Bacula - Client/server backup tool for heterogeneous networks

I personally use this software to manage backup and recovery across a network of computers including Linux, OSX and Windows. You can configure it via a CLI, GUI or web interface.
Operating system : Cross-platform
Backup Levels : Full, differential, incremental, and consolidation.
Data format: Custom but fully open.
Autochangers: Yes
Backup media: Tape/Disk/DVD
Encryption datastream: Yes
Database support: MSSQL/PostgreSQL/Oracle
Backup span multiple volumes: Yes
VSS: Yes
License : Affero General Public License v3.0
Download url : bacula.org

Amanda - Another good client/server backup tool

AMANDA is an acronym for Advanced Maryland Automatic Network Disk Archiver. It allows the sysadmin to set up a single backup server to back up other hosts over the network to tape drives, disks, or autochangers.
Operating system : Cross-platform
Backup Levels : Full, differential, incremental, and consolidation.
Data format: Open (can be recovered using tools such as tar).
Autochangers: Yes
Backup media: Tape/Disk/DVD
Encryption datastream: Yes
Database support: MSSQL/Oracle
Backup span multiple volumes: Yes
VSS: Yes
License : GPL, LGPL, Apache, Amanda License
Download url : amanda.org

Backupninja - Lightweight backup system

Backupninja is a simple and easy to use backup system. You can simply drop config files into /etc/backup.d/ to back up multiple hosts.
Operating system : Linux/Unix
Backup Levels : Full and incremental (rsync+hard links)
Data format: Open
Autochangers: N/A
Backup media: Disk/DVD/CD/ISO images
Encryption datastream: Yes (ssh) and encrypted remote backups via duplicity
Database support: MySQL/PostgreSQL/OpenLDAP and subversion or trac repositories.
Backup span multiple volumes: ??
VSS: ??
License : GPL
Download url : riseup.net

Backuppc - High-performance client/server tool

BackupPC can be used to back up Linux and Windows based systems to a master server's disk. It comes with a clever pooling scheme that minimizes disk storage, disk I/O and network I/O.
Operating system : Linux/Unix and Windows
Backup Levels : Full and incremental (rsync+hard links and pooling scheme)
Data format: Open
Autochangers: N/A
Backup media: Disk/RAID storage
Encryption datastream: Yes
Database support: Yes (via custom shell scripts)
Backup span multiple volumes: ??
VSS: ??
License : GPL
Download url : backuppc.sourceforge.net

UrBackup - Easy to setup client/server system

It is an easy-to-set-up open source client/server backup system that, through a combination of image and file backups, accomplishes both data safety and a fast restoration time. Your files can be restored through the web interface or Windows Explorer, while backups of drive volumes can be restored with a bootable CD or USB stick (bare metal restore). A web interface makes setting up your own backup server really easy.
Operating system : Linux/FreeBSD/Unix/Windows/several Linux based NAS operating systems. Client only runs on Linux and Windows.
Backup Levels : Full and incremental
Data format: Open
Autochangers: N/A
Backup media: Disk/Raid storage/DVD
Encryption datastream: Yes
Database support: ??
Backup span multiple volumes: ??
VSS: ??
License : GPL v3+
Download url : urbackup.org

Other awesome open source backup software for your consideration

Amanda, Bacula, and the other software mentioned above are feature rich, but can be complicated to set up for a small network or a single server. I recommend that you study and use the following backup software:
  1. Rsnapshot - A local and remote filesystem snapshot utility that I recommend. See how to set up and use this tool on Debian/Ubuntu Linux and CentOS/RHEL based systems.
  2. rdiff-backup - Another great remote incremental backup tool for Unix-like systems.
  3. Burp - Burp is a network backup and restore program. It uses librsync in order to save network traffic and to save on the amount of space that is used by each backup. It also uses VSS (Volume Shadow Copy Service) to make snapshots when backing up Windows computers.
  4. Duplicity - Great encrypted bandwidth-efficient backup for Unix-like systems. See how to install Duplicity for encrypted backup in the cloud for more information.
  5. SafeKeep - SafeKeep is a centralized and easy to use backup application that combines the best features of a mirror and an incremental backup.
  6. DREBS - DREBS is a tool for taking periodic snapshots of EBS volumes. It is designed to be run on the EC2 host to which the EBS volumes to be snapshotted are attached.
  7. Good old Unix programs like rsync, tar, cpio, mt and dump (see the rsync sketch below).
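As a minimal sketch of that last item, this rsync one-liner mirrors a home directory to a mounted backup disk; the source and destination paths are placeholders for your own setup:
$ rsync -avh --delete /home/user/ /mnt/backup/home/
The -a flag preserves permissions and timestamps, -v and -h make the output readable, and --delete removes files from the backup that no longer exist in the source.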
Conclusion
I hope you find this post useful for backing up your important data. Do not forget to verify your backups and to make multiple backup copies of your data. Also remember that RAID is not a backup solution. Use any one of the above-mentioned programs to back up your servers, desktops/laptops, and personal mobile devices. If you know of any other open source backup software I didn't mention, share them in the comments below.

What are some obscure but useful Vim commands

http://xmodulo.com/useful-vim-commands.html

If my latest post on the topic did not tip you off, I am a Vim fan. So before some of you start stoning me, let me present you a list of "obscure Vim commands." What I mean by that is: a collection of commands that you might not have encountered before, but that might be useful to you. As a second disclaimer, I do not know which commands you might know and which ones you find useful, so this list really is a collection of relatively lesser-known Vim commands which can still probably be useful.

Saving a file and exiting

I am a bit ashamed of myself for that one, but I only recently learned that the command
:x
is equivalent to:
:wq
which is saving and quitting the current file.

Basic calculator

While in insert mode, you can press Ctrl+r then type '=' followed by a simple calculation. Press ENTER, and the result will be inserted in the document. For example, try:
Ctrl+r '=2+2' ENTER

And 4 will be inserted in the document.

Finding duplicate consecutive words

When you type something quickly, it happens that you write a word twice in a row. Just like this this. This kind of error can fool anyone, even when re-reading yourself. Luckily, there is a simple regular expression to catch it. Use the search ('/' by default) and type:
\(\<\w\+\>\)\_s*\1
This should display all the duplicate words. And for maximum effect, don't forget to place:
set hlsearch
in your .vimrc file to highlight all search hits.
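And if you want to fix the duplicates rather than just spot them, the same pattern works in a substitution; this is a sketch, so consider adding the 'c' flag (ending in /gc) first to confirm each change interactively:
:%s/\(\<\w\+\>\)\_s*\1/\1/g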

Abbreviations

Probably one of the most impressive tricks, you can define abbreviations in Vim, which will replace what you type with something else in real time. The syntax is the following:
:ab [abbreviation] [what to replace it with]
The generic example is:
:ab asap as soon as possible
Which will replace "asap" with "as soon as possible" as you write.

Save a file that you forgot to open as root

This is maybe an all time favorite in the forums. Whenever you open a file that you do not have permission to write to (say a system configuration file for example) and make some changes, Vim will not save them with the normal command: ':w'
Instead of redoing the changes after opening it again as root, simply run:
:w !sudo tee %
Which will save it as root directly: '%' stands for the current file name, so the buffer contents are piped through sudo tee, which writes them to the file with root privileges.

Crypt your text on the go

If you do not want someone to be able to read whatever is on your screen, Vim has a built-in option to ROT13-encode your text with the following command:
ggVGg?

'gg' for moving the cursor to the first line of the Vim buffer, 'V' for entering visual mode, and 'G' for moving the cursor to the last line of the buffer. So 'ggVG' will make the visual mode cover the entire buffer. Finally 'g?' applies ROT13 encoding to the selected region.
Notice that this should be mapped to a key for maximum efficiency. It also works best with alphabetical characters. And to undo it, the best is simply to use the undo command: 'u'
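For example, a mapping along these lines in your .vimrc does the whole dance in one keystroke; the choice of F5 is arbitrary:
map <F5> ggVGg?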

Auto-completion

Another one to be ashamed of, but I see a lot of people around me not knowing it. Vim has a basic auto-completion feature by default. Yes, it is very basic and can be enhanced by plugins, but it can still help you. The process is simple. Vim can try to guess the end of your word based on words you wrote earlier. If you are typing the word "compiler" for the second time in the same file, for example, just start typing "com" and, still in insert mode, press Ctrl+n to see Vim finish your word for you. Simple but handy.

Look at the diff between two files

Probably a lot of you know about the vimdiff command, which allows you to open Vim in split mode and compare two files with the syntax:
$ vimdiff [file1] [file2]
But the same result is achievable with the Vim command:
:diffthis
First open your initial file in Vim. Then open the second one in split mode with:
:vsp [file2]
Finally launch:
:diffthis
in the first buffer, switch buffer with Ctrl+w and type:
:diffthis
again.
The two files will then be highlighted with focus on their differences.
To turn the diff off, simply use:
:diffoff

Revert the document in time

Vim keeps track of the changes you make to a file, and can easily revert it to what it was earlier in time. The command is quite intuitive. For example:
:earlier 1m
will revert the document to what it was a minute ago.
Note that you can inverse this with the command:
:later

Delete inside markers

Something that I always wanted to be comfortable doing when I started using Vim: easily delete text between brackets or parentheses. Go to the first marker and simply use the syntax:
di[marker]
So for example, deleting between parenthesis would be:
di(
once your cursor is on the first parenthesis. For brackets or quotation marks, it would be:
di{
and:
di"

Delete until a specific marker

A bit similar to deleting inside a marker, but for a different purpose, the command:
dt[marker]
will delete everything between your cursor and the marker (leaving the marker intact) if the marker is found on the same line. For example:
dt.
will delete the end of your sentence, leaving the '.' intact.

Turn Vim into a hex editor

This is not my favorite trick, but some might find it interesting. You can chain Vim and the xxd utility to convert the text into hexadecimal with the command:
:%!xxd

And similarly, you can revert this with:
:%!xxd -r

Place the text under your cursor in the middle of the screen

Everything is in the title. If you want to force the screen to scroll and place whatever is under your cursor in the middle, use the command:
zz
in normal mode.

Jump to previous/next position

When editing a very big file, it is frequent to make changes somewhere, and jump to another place right after. If you wish to jump back simply, use:
Ctrl+o
to go back to where you were.
And similarly:
Ctrl+i
will move you forward again, reversing such a jump.

Render the current file as a web page

This will generate an HTML page displaying your text, and show the code in a split screen:
:%TOhtml

Very basic but so fancy.
To conclude, this list was assembled after reading some various forum threads and the Vim Tips wiki, which I really recommend if you want to boost your knowledge about the editor.
If you know any Vim command that you find useful and that you think most people do not know about, feel free to share it in the comments. As said in the introduction, an "obscure but useful" command is very subjective, but sharing is always good.

How to search multiple pdf documents for words on Linux

http://xmodulo.com/how-to-search-multiple-pdf-documents-for-words-on-linux.html

When it comes to text search within a pdf document, pretty much every pdf reader software supports it (be it Adobe Reader or any third-party pdf viewer). However, it becomes tricky when there are more than one pdf document to search.
In Linux, there are command-line tools (e.g., pdftotext or pdfgrep) that can be used to do simple searches on multiple pdf documents at once. Compared to these command-line utilities, a desktop application called recoll is a much more advanced and user-friendly text search tool. In this tutorial, I will describe how to search multiple pdf documents for text by using recoll.
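To give a quick taste of the command-line route before moving on, pdfgrep can search a phrase across every pdf in a directory; the phrase and file glob here are placeholders:
$ pdfgrep -i "virtual machine" *.pdf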

What is Recoll?

recoll is an open-source desktop application specializing in text search. recoll maintains a database index for all document files in a target storage location (e.g., a specific folder, home directory, disk drive, etc). The document index contains texts extracted from document files with external helper programs. Using the document index, recoll can perform more advanced queries than simple regular expression based search.
The powerful features of recoll include:
  • Supports multiple document formats (e.g., pdf, doc, text, html, mailbox).
  • Automatically indexes document contents from files, emails, email attachments, compressed archives, etc.
  • Indexes web pages you visited (with the help of a Firefox extension).
  • Supports multiple languages and Unicode-based multi-character sets.
  • Supports advanced search, such as proximity search and filtering based on file type, file system location, modification time, and file size.
  • Supports search with multiple entry fields such as document title, keyword, author, etc.

Install Recoll on Linux

To install recoll and external helper programs on Debian, Ubuntu, or Linux Mint:
$ sudo apt-get install recoll poppler-utils antiword
To install recoll and external helper programs on Fedora:
$ sudo yum install recoll poppler-utils antiword
To install recoll on CentOS or RHEL, first enable EPEL repository, and then run:
$ sudo yum install recoll poppler-utils antiword

Build a Document Index with Recoll

To launch recoll, simply run:
$ recoll
The first time you launch recoll, you will see the screen shown below. Here you are asked to choose one of two menus before indexing starts: (1) "Indexing configuration", which controls how the document database index is built, or (2) "Indexing schedule", which controls how often the index is updated. For now, click on the "Indexing configuration" menu.

In the configuration window, you will see "Top directories" (directories which contain documents to search), and "Skipped paths" (file system paths to avoid when building a document index) under "General parameters" tab. In this example, I add "~/Documents" to "Top directories" field.

Under "Local parameters" tab, you can specify other indexing criteria, such as file names to skip, max file size, etc. Once you are done, go ahead and create a document database index. The document index building process uses external programs (e.g., pdftotext for pdf documents, antiword for MS Word documents) to extract texts from individual documents, and create an index out of the extracted texts.

Once an initial document index is built, you can check what kind of documents have been indexed, by going to "Help"-->"Show indexed types" menu. Make sure that "application/pdf" mime-type is included.

Search Multiple PDF Documents for Text

You are now ready to conduct document search. Enter any word or phrase (with quotes) to search for.

A search result shows a list of pdf documents, along with document snippets and page number information, that match the search query. The example output shows a list of pdf documents that contain the phrase "virtual machine". You can check document previews, or open the matched documents using an external pdf viewer.

Using recoll, you can also search for pdf documents that contain specific word(s) in the document title. For example, by typing "title:kernel" in the search query, you can search for pdf documents which contain "kernel" in their titles.

Using advanced search option, you can define various other search criteria.

As documents are added, updated or removed, you will need to update an existing document index. You can do it manually by clicking on "Update Index" menu.

You can also update an existing document index automatically, either with a periodic cron job or with a background daemon process.
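For the cron route, a crontab entry along these lines would rebuild the index every night at 2am; recollindex is recoll's command-line indexer, and the schedule is an arbitrary example:
0 2 * * * recollindex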

Connect to WiFi network from command line in Linux

http://www.blackmoreops.com/2014/09/18/connect-to-wifi-network-from-command-line-in-linux

How many of you have failed to connect to a WiFi network in Linux? Did you bump into issues like the following in different forums, discussion pages, and blogs? I am sure everyone did at some point. The following list shows just the results from page 1 of a Google search with the keywords “Unable to connect to WiFi network in Linux”.
  1. Cannot connect to wifi at home after upgrade to ubuntu 14.04
  2. Arch Linux not connecting to Wifi anymore
  3. I can’t connect to my wifi
  4. Cannot connect to WiFi
  5. Ubuntu 13.04 can detect wi-fi but can’t connect
  6. Unable to connect to wireless network ath9k
  7. Crazy! I can see wireless network but can’t connect
  8. Unable to connect to Wifi Access point in Debian 7
  9. Unable to connect Wireless
I came across this article on Blogspot and it was one of the best written guides I’ve ever come across. I am slightly changing that post to accommodate all flavors of Linux releases.
The following guide explains how you can connect to a WiFi network in Linux from the command line. It will take you through the steps for connecting to a WPA/WPA2 WiFi network.

WiFi network from command line – Required tools

The following tools are required to connect to a WiFi network in Linux from the command line:
  1. wpa_supplicant
  2. iw
  3. ip
  4. ping
Before we jump into technical jargon, let’s quickly go over each item one at a time.

Linux WPA/WPA2/IEEE 802.1X Supplicant

wpa_supplicant is a WPA Supplicant for Linux, BSD, Mac OS X, and Windows with support for WPA and WPA2 (IEEE 802.11i / RSN). It is suitable for both desktop/laptop computers and embedded systems. Supplicant is the IEEE 802.1X/WPA component that is used in the client stations. It implements key negotiation with a WPA Authenticator and it controls the roaming and IEEE 802.11 authentication/association of the wlan driver.

iw – Linux Wireless

iw is a new nl80211 based CLI configuration utility for wireless devices. It supports all new drivers that have been added to the kernel recently. The old tool iwconfig, which uses the Wireless Extensions interface, is deprecated and it’s strongly recommended to switch to iw and nl80211.

ip – ip program in Linux

ip is used to show / manipulate routing, devices, policy routing and tunnels. It is used for enabling/disabling devices and it helps you to find general networking information. ip was written by Alexey N. Kuznetsov and added in Linux 2.2. Use man ip to see the full help/man page.

ping

Good old ping. For every ping, there shall be a pong … ping-pong – ping-pong – ping-pong … that should explain it.
BTW man ping helps too …

Step 1: Find available WiFi adapters – WiFi network from command line

This actually helps: you need to know your WiFi device name before you can connect to a WiFi network. So just use the following command, which will list all the connected WiFi adapters in your Linux machine.
root@kali:~# iw dev
phy#1
    Interface wlan0
        ifindex 4
        type managed
root@kali:~#
Let me explain the output:
This system has 1 physical WiFi adapter.
  1. Designated name: phy#1
  2. Device name: wlan0
  3. Interface Index: 4. Usually as per connected ports (which can be a USB port).
  4. Type: Managed. Type specifies the operational mode of the wireless device. managed means the device is a WiFi station or client that connects to an access point.

Step 2: Check device status – WiFi network from command line

Next, check the status of the device. You can check whether the wireless device is up or not using the following command:
root@kali:~# ip link show wlan0
4: wlan0: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN mode DORMANT qlen 1000
    link/ether 00:60:64:37:4a:30 brd ff:ff:ff:ff:ff:ff
root@kali:~#

As you can see, the interface wlan0 is in state DOWN.
Look for the word “UP” inside the angle brackets in the first line of the output; here it is missing, so the interface is down.

In the above example, wlan0 is not UP. Execute the command in the next step to bring it up.

Step 3: Bring up the WiFi interface – WiFi network from command line

Use the following command to bring up the WiFi interface:
root@kali:~# ip link set wlan0 up

Note: If you’re using Ubuntu, Linux Mint, CentOS, Fedora etc. use the command with ‘sudo’ prefix
If you run the show link command again, you can tell that wlan0 is now UP.
root@kali:~# ip link show wlan0
4: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DORMANT qlen 1000
    link/ether 00:60:64:37:4a:30 brd ff:ff:ff:ff:ff:ff
root@kali:~#

Step 4: Check the connection status – WiFi network from command line

You can check WiFi network connection status from command line using the following command
root@kali:~# iw wlan0 link
Not connected.
root@kali:~#
The above output shows that you are not connected to any network.

Step 5: Scan to find WiFi Network – WiFi network from command line

Scan to find out what WiFi network(s) are detected
root@kali:~# iw wlan0 scan
BSS 9c:97:26:de:12:37 (on wlan0)
    TSF: 5311608514951 usec (61d, 11:26:48)
    freq: 2462
    beacon interval: 100
    capability: ESS Privacy ShortSlotTime (0x0411)
    signal: -53.00 dBm
    last seen: 104 ms ago
    Information elements from Probe Response frame:
    SSID: blackMOREOps
    Supported rates: 1.0* 2.0* 5.5* 11.0* 18.0 24.0 36.0 54.0
    DS Parameter set: channel 11
    ERP: Barker_Preamble_Mode
    RSN:     * Version: 1
         * Group cipher: CCMP
         * Pairwise ciphers: CCMP
         * Authentication suites: PSK
         * Capabilities: 16-PTKSA-RC (0x000c)
    Extended supported rates: 6.0 9.0 12.0 48.0
---- truncated ----

The 2 important pieces of information from the above are the SSID and the security protocol (WPA/WPA2 vs WEP). The SSID from the above example is blackMOREOps. The security protocol is RSN, also commonly referred to as WPA2. The security protocol is important because it determines what tool you use to connect to the network.

Step 6: Generate a wpa/wpa2 configuration file – WiFi network from command line

Now we will generate a configuration file for wpa_supplicant that contains the pre-shared key (“passphrase“) for the WiFi network.
root@kali:~# wpa_passphrase blackMOREOps >> /etc/wpa_supplicant.conf
abcd1234
root@kali:~#
(where 'abcd1234' was the Network password)
wpa_passphrase takes the SSID as its argument and reads the passphrase from standard input; that means you type in the passphrase for the WiFi network blackMOREOps after you run the command.

Note: If you’re using Ubuntu, Linux Mint, CentOS, Fedora etc. use the command with ‘sudo’ prefix
wpa_passphrase will create the necessary configuration entries based on your input. Each new network will be added as a new configuration (it won’t replace existing configurations) in the configuration file /etc/wpa_supplicant.conf.
root@kali:~# cat /etc/wpa_supplicant.conf 
# reading passphrase from stdin
network={
ssid="blackMOREOps"
#psk="abcd1234"
psk=42e1cbd0f7fbf3824393920ea41ad6cc8528957a80a404b24b5e4461a31c820c
}
root@kali:~#

Step 7: Connect to WPA/WPA2 WiFi network – WiFi network from command line

Now that we have the configuration file, we can use it to connect to the WiFi network. We will be using wpa_supplicant to connect. Use the following command
root@kali:~# wpa_supplicant -B -D wext -i wlan0 -c /etc/wpa_supplicant.conf
ioctl[SIOCSIWENCODEEXT]: Invalid argument
ioctl[SIOCSIWENCODEEXT]: Invalid argument
root@kali:~#
Where,
-B means run wpa_supplicant in the background.
-D specifies the wireless driver. wext is the generic driver.
-c specifies the path for the configuration file.
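On recent kernels the nl80211 driver is generally preferred over the legacy wext interface, so if the command above fails for your hardware, a variant worth trying (not part of the original guide) is:
root@kali:~# wpa_supplicant -B -D nl80211 -i wlan0 -c /etc/wpa_supplicant.conf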


Use the iw command to verify that you are indeed connected to the SSID.
root@kali:~# iw wlan0 link
Connected to 9c:97:00:aa:11:33 (on wlan0)
    SSID: blackMOREOps
    freq: 2412
    RX: 26951 bytes (265 packets)
    TX: 1400 bytes (14 packets)
    signal: -51 dBm
    tx bitrate: 6.5 MBit/s MCS 0

    bss flags:    short-slot-time
    dtim period:    0
    beacon int:    100

Step 8: Get an IP using dhclient – WiFi network from command line

Up to step 7, we’ve spent our time connecting to the WiFi network. Now use dhclient to get an IP address via DHCP:
root@kali:~# dhclient wlan0
Reloading /etc/samba/smb.conf: smbd only.
root@kali:~#
You can use the ip or ifconfig command to verify the IP address assigned by DHCP. In the output below, the IP address is 10.0.0.4.
root@kali:~# ip addr show wlan0
4: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:60:64:37:4a:30 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.4/24 brd 10.0.0.255 scope global wlan0
       valid_lft forever preferred_lft forever
    inet6 fe80::260:64ff:fe37:4a30/64 scope link
       valid_lft forever preferred_lft forever
root@kali:~#

(or)

root@kali:~# ifconfig wlan0
wlan0 Link encap:Ethernet HWaddr 00:60:64:37:4a:30
inet addr:10.0.0.4 Bcast:10.0.0.255 Mask:255.255.255.0
inet6 addr: fe80::260:64ff:fe37:4a30/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:23868 errors:0 dropped:0 overruns:0 frame:0
TX packets:23502 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:22999066 (21.9 MiB) TX bytes:5776947 (5.5 MiB)

root@kali:~#
Add a default routing rule. The last configuration step is to make sure that you have the proper routing rules.
root@kali:~# ip route show 
default via 10.0.0.138 dev wlan0
10.0.0.0/24 dev wlan0  proto kernel  scope link  src 10.0.0.4


Step 9: Test connectivity – WiFi network from command line

Ping Google’s IP to confirm the network connection (or you can just browse the web):
root@kali:~# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_req=3 ttl=42 time=265 ms
64 bytes from 8.8.8.8: icmp_req=4 ttl=42 time=176 ms
64 bytes from 8.8.8.8: icmp_req=5 ttl=42 time=174 ms
64 bytes from 8.8.8.8: icmp_req=6 ttl=42 time=174 ms
^C
--- 8.8.8.8 ping statistics ---
6 packets transmitted, 4 received, 33% packet loss, time 5020ms
rtt min/avg/max/mdev = 174.353/197.683/265.456/39.134 ms
root@kali:~#

Summary

This is a very detailed and long guide. Here is a short summary of all the things you need to do, in just a few lines.
root@kali:~# iw dev
root@kali:~# ip link set wlan0 up
root@kali:~# iw wlan0 scan
root@kali:~# wpa_passphrase blackMOREOps >> /etc/wpa_supplicant.conf
root@kali:~# wpa_supplicant -B -D wext -i wlan0 -c /etc/wpa_supplicant.conf
root@kali:~# iw wlan0 link
root@kali:~# dhclient wlan0
root@kali:~# ping 8.8.8.8
(Where wlan0 is wifi adapter and blackMOREOps is SSID)
(Add Routing manually)
root@kali:~# ip route add default via 10.0.0.138 dev wlan0
At the end of it, you should be able to connect to WiFi network. Depending on the Linux distro you are using and how things go, your commands might be slightly different. Edit commands as required to meet your needs.
Thanks for reading.

How to run a command on Remote server without login to server shell prompt?

http://www.nextstep4it.com/how-to-run-a-command-on-remote-server-without-login-to-server-shell-prompt

By using the ssh command, you can run a command on a remote server without logging in to the server's shell.
SSH Command Format:
# ssh remoteuser@remotehost remotecommand
Suppose I want to run the ls command on the remote server pk.testmail.com:
parveen@Earth:~$ ssh root@pk.testmail.com ls
The authenticity of host ' pk.testmail.com (x.x.x.x)' can't be established.
RSA key fingerprint is 7d:74:91:ed:30:1e:86:0b:69:c9:77:0b:72:0e:ad:4e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'pk.testmail.com' (RSA) to the list of known hosts.
root@pk.testmail.com's password:

anaconda-ks.cfg
downloads
install.log
install.log.syslog
Note: When we run a command on a remote server, by default ssh will not allocate a pseudo-terminal. To use a pseudo-terminal, we have to pass the -t option to ssh, as explained below.
parveen@Earth:~$ ssh -t root@pk.testmail.com ls
root@pk.testmail.com's password:
anaconda-ks.cfg  downloads  install.log  install.log.syslog
Connection to pk.testmail.com closed.
Now you can see that the command output is the same as if we had run the command after logging in to the remote server.
To run multiple commands on the remote server, use semicolons between the commands as shown below:
parveen@Earth:~$ ssh -t root@pk.testmail.com "ls ; df -h ; date ; cal"
root@pk.testmail.com's password:
anaconda-ks.cfg  downloads  install.log  install.log.syslog  server_invalid_mails_cleanup.sh
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda         20G  6.9G   12G  37% /
tmpfs           246M     0  246M   0% /dev/shm
Mon Nov 10 00:52:12 EST 2014
November 2014
Su Mo Tu We Th Fr Sa
1
2  3  4  5  6  7  8
9 10 11 12 13 14 15
16 17 18 19 20 21 22
23 24 25 26 27 28 29
30
Connection to pk.testmail.com closed.
We can also use the echo command to run a remote command on the remote server.
parveen@Earth:~$ echo "ls" | ssh root@pk.testmail.com
Pseudo-terminal will not be allocated because stdin is not a terminal.
root@pk.testmail.com's password:
anaconda-ks.cfg
downloads
install.log
install.log.syslog
Note: We can’t use the -t option with the ssh command when running a remote command via echo, because a pseudo-terminal will not be allocated when stdin is not a terminal.
Use the ssh command with the tar command to move a directory of files between two machines, as an alternative to the scp command. An example is shown below:
parveen@Earth:~$ tar -cvf - images/test1 | ssh root@pk.testmail.com '(cd /tmp/; tar -xf -)'
images/test1/
images/test1/7
images/test1/6
images/test1/5
images/test1/4
images/test1/9
images/test1/8
images/test1/3
images/test1/2
images/test1/1
root@pk.testmail.com's password:

The files are now uploaded to the remote server. Check the files on the remote server as shown below.

[root@pk tmp]# ls -l images/
total 4
drwxrwxr-x 2 1000 1000 4096 Nov 10 01:02 test1
[root@pk tmp]# ls -l images/test1/
total 0
-rw-rw-r-- 1 1000 1000 0 Nov 10 01:02 1
-rw-rw-r-- 1 1000 1000 0 Nov 10 01:02 2
-rw-rw-r-- 1 1000 1000 0 Nov 10 01:02 3
-rw-rw-r-- 1 1000 1000 0 Nov 10 01:02 4
-rw-rw-r-- 1 1000 1000 0 Nov 10 01:02 5
-rw-rw-r-- 1 1000 1000 0 Nov 10 01:02 6
-rw-rw-r-- 1 1000 1000 0 Nov 10 01:02 7
-rw-rw-r-- 1 1000 1000 0 Nov 10 01:02 8
-rw-rw-r-- 1 1000 1000 0 Nov 10 01:02 9
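The same trick works in the other direction. To pull a remote directory down to the local machine, something along these lines should work; the paths mirror the example above and are illustrative only:
parveen@Earth:~$ ssh root@pk.testmail.com 'tar -cf - -C /tmp images' | tar -xf -
This archives /tmp/images on the remote server, streams it over the SSH connection, and unpacks it in the local current directory.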