
How to remove file metadata on Linux

http://xmodulo.com/2014/08/remove-file-metadata-linux.html

A typical data file often has associated "metadata": descriptive information about the file, represented as a set of name-value pairs. Common metadata includes the creator's name, the tools used to generate the file, the file's creation/update dates, the location of creation, editing history, etc. EXIF (images), RDF (web resources), and DOI (digital documents) are some popular metadata standards.
While metadata has its merits in data management, it can adversely affect your privacy. EXIF data in photo images can reveal personally identifiable information such as your camera model, the GPS coordinates of where a shot was taken, your favorite photo editing software, etc. Metadata in documents and spreadsheets contains author/affiliation information and other editing history. Not to be paranoid, but metadata gathering tools such as metagoofil are often exploited during the information-gathering stage of penetration testing.
For those of you who want to strip personalizing metadata from shared data, there are ways to remove it from data files. You can use existing document or image editing software, which typically has built-in metadata editing capabilities. In this tutorial, let me introduce a nice standalone metadata cleaner tool developed for a single goal: anonymizing all metadata for your privacy.
MAT (Metadata Anonymisation Toolkit) is a dedicated metadata cleaner written in Python. It was developed under the umbrella of the Tor project, and comes standard on Tails, a privacy-enhanced live OS.
Compared to other tools such as exiftool which can write to only a limited number of file types, MAT can eliminate metadata from all kinds of files: images (png, jpg), documents (odt, docx, pptx, xlsx, pdf), archives (tar, tar.bz2), audio (mp3, ogg, flac), etc.

Install MAT on Linux

On Debian-based systems (Ubuntu or Linux Mint), MAT comes packaged, so installation is straightforward:
$ sudo apt-get install mat
On Fedora, MAT does not come as a pre-built package, so you need to build it from the source. Here is how I built MAT on Fedora (with some limited success; see the bottom of the tutorial):
$ sudo yum install python-devel intltool python-pdfrw perl-Image-ExifTool python-mutagen
$ sudo pip install hachoir-core hachoir-parser
$ wget https://mat.boum.org/files/mat-0.5.tar.xz
$ tar xf mat-0.5.tar.xz
$ cd mat-0.5
$ python setup.py install

Anonymize Metadata with MAT-GUI

Once installed, MAT is accessible via a GUI as well as from the command line. To launch MAT's GUI, simply type:
$ mat-gui
Let's clean up a sample document file (e.g., private.odt) which has the following metadata embedded.

To add the file to MAT for cleanup, click on the "Add" icon. Once the file is loaded, click on the "Check" icon to scan for any hidden metadata.

Once any metadata is detected by MAT, its "State" will be marked as "Dirty". You can double-click the file to see the detected metadata.

To clean up the metadata in the file, click on the "Clean" icon. MAT will automatically empty all private metadata fields in the file.

The cleaned-up state contains no personally identifiable traces:

Anonymize Metadata from the Command Line

As mentioned before, another way to invoke MAT is from the command line; for that, use the mat command.
To check for any sensitive metadata, first go to the directory where your files are located, and then run:
$ mat -c .
It will scan all files in the current directory and its subdirectories, and report their state (clean or unclean).

You can inspect the actual metadata detected by using the '-d' option:
$ mat -d

If you don't supply any option with the mat command, the default action is to remove metadata from files. If you want to keep a backup of the original files during cleanup, use the '-b' option. The following command cleans up all files, and stores the original files as '*.bak' files.
$ mat -b .

To see a list of all supported file types, run:
$ mat -l

Troubleshooting

Currently I have the following issue with a compiled version of MAT on Fedora. When I attempt to clean up archive/document files (e.g., *.gz, *.odt, *.docx) on Fedora, MAT fails with the following error. If you know how to fix this problem, let me know in the comments.
  File "/usr/lib64/python2.7/zipfile.py", line 305, in __init__
raise ValueError('ZIP does not support timestamps before 1980')
ValueError: ZIP does not support timestamps before 1980

Conclusion

MAT is a simple, yet extremely useful tool for preventing inadvertent privacy leaks from metadata. Note that it is still your responsibility to anonymize the file content itself, if necessary. All MAT does is eliminate the metadata associated with your files; it does nothing to the content. In short, MAT can be a life saver as it handles most common metadata removal, but you shouldn't rely on it alone to guarantee your privacy.

Check a hard drive for bad sectors or bad blocks in Linux

http://www.linuxtechi.com/check-hard-drive-for-bad-sector-linux

badblocks is a utility in Linux-like operating systems that can scan or test hard disks and external drives for bad sectors. Bad sectors, or bad blocks, are areas of a disk that can no longer be used because of permanent damage, or because the OS is unable to access them.
The badblocks command will detect all bad blocks (bad sectors) on a hard disk and save them in a text file, so that we can use it with e2fsck to configure the operating system (OS) not to store our data on these damaged sectors.
Step 1: Use the fdisk command to identify your hard drive
# sudo fdisk -l 
Step 2: Scan your hard drive for bad sectors or bad blocks
# sudo badblocks -v /dev/sdb > /tmp/bad-blocks.txt
Just replace “/dev/sdb” with your own hard disk or partition. When we execute the above command, a text file “bad-blocks.txt” will be created under /tmp, containing all the bad blocks found.
Example: (screenshot of badblocks output)
Step 3: Tell the OS not to use the bad blocks for storing data
Once the scan is complete, if any bad sectors are reported, use the file “bad-blocks.txt” with the e2fsck command to force the OS not to use these bad blocks for storing data.
# sudo e2fsck -l /tmp/bad-blocks.txt  /dev/sdb
Note: Before running the e2fsck command, make sure the drive is not mounted.
For further help on the badblocks and e2fsck commands, read their man pages:
# man badblocks
# man e2fsck 
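As an optional extra, e2fsck can also drive badblocks itself, so the scan and the bad block list update happen in one pass. Here is a minimal sketch, assuming /dev/sdb1 is an unmounted ext2/ext3/ext4 partition (adjust the device name to your own system):
# sudo umount /dev/sdb1
# sudo e2fsck -vck /dev/sdb1
Here '-c' makes e2fsck call badblocks in read-only mode and add anything it finds to the filesystem's bad block list, while '-k' preserves any entries already on that list.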

How to manage a WiFi connection from the command line

http://xmodulo.com/2014/08/manage-wifi-connection-command-line.html

Whenever you install a new Linux distribution on a computer, it is generally recommended that you connect to the internet via a wired connection. There are two main reasons for this: first, your wireless adapter may not have the right driver loaded; second, if you are installing from the command line, managing WiFi is scary. I always tried to avoid dealing with WiFi over the command line. But in the Linux world, there is no place for fear. If you do not know how to do something, that is the only reason you need to go ahead and learn it. So I forced myself to learn how to manage a WiFi connection from the command line on Linux.
There are of course multiple ways to connect to a WiFi network from the command line. But for the sake of this post, and as a piece of advice, I will try to use the most basic way: the one that uses programs and utilities included in the "default packages" of any distribution. Or at least I will try. An obvious reason for this choice is that the process can potentially be reproduced on any Linux computer. The downside is its relative complexity.
First, I will assume that you have the correct drivers loaded for your wireless LAN card. There is no way to start anything without that. And if you don't, you should take a look at the Wiki and documentation for your distribution.
Then you can check which interface supports wireless connections with the command
$ iwconfig

In general, the wireless interface is called wlan0. There are of course exceptions, but for the rest of this tutorial, I will refer to it by that name.
Just in case, you should make sure that the interface is up with:
$ sudo ip link set wlan0 up
Once you know that your interface is operational, you should scan for nearby wireless networks with:
$ sudo iw dev wlan0 scan | less
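If the full scan output is overwhelming, you can filter it down to just the network names, which iw reports on lines labeled "SSID:" (a small convenience, nothing more):
$ sudo iw dev wlan0 scan | grep "SSID:"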

From the output, you can extract the name of the network (its SSID), its signal power, and which type of security it uses (e.g., WEP, WPA/WPA2). From there, the road splits into two: the nice and easy, and the slightly more complicated case.
If the network you want to connect to is not encrypted, you can connect straight to it with:
$ sudo iw dev wlan0 connect [network SSID]
If the network uses WEP encryption, it is also quite easy:
$ sudo iw dev wlan0 connect [network SSID] key 0:[WEP key]
But everything gets worse if the network uses WPA or WPA2 protocols. In this case, you have to use the utility called wpa_supplicant, which is not always included by default. You then have to modify the file at /etc/wpa_supplicant/wpa_supplicant.conf to add the lines:
network={
    ssid="[network ssid]"
    psk="[the passphrase]"
    priority=1
}
I recommend that you append it at the end of the file, and make sure that the other configurations are commented out. Be careful that both the ssid and the passphrase are case sensitive. You can also technically put the name of the access point as the ssid, and wpa_supplicant will replace it with the proper ssid.
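If you prefer not to type this block by hand, the wpa_passphrase helper that ships with wpa_supplicant can generate it for you; a quick sketch, assuming the helper is installed on your system:
$ wpa_passphrase "[network ssid]" "[the passphrase]" | sudo tee -a /etc/wpa_supplicant/wpa_supplicant.conf
This appends an equivalent network block to the configuration file, with the passphrase already converted into a pre-computed psk.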
Once the configuration file is completed, launch wpa_supplicant (the -B flag sends it to the background):
$ sudo wpa_supplicant -B -i wlan0 -c /etc/wpa_supplicant/wpa_supplicant.conf
Finally, whether you connected to an open or a secure network, you have to get an IP address. Simply use:
$ sudo dhcpcd wlan0
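Note that dhcpcd is not installed everywhere. If your distribution ships dhclient instead (something you can check with 'which dhclient'), the equivalent command would be:
$ sudo dhclient wlan0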
If everything goes well, you should get a brand new local IP via DHCP, and the process will fork in the background. If you want to be sure that you are connected, you can always check again with:
$ iwconfig

To conclude, I think that getting over this first hurdle is completely worth it. You never know when your GUI will be down, or when you cannot access a wired connection, so getting ready now seems very important. Also, as mentioned before, there are a lot of other ways (e.g., NetworkManager, wicd, netcfg, wifi) to manage a wireless connection. Because I tried to stick to the most basic way, in some cases the utilities that I used may not even be available to you, and you would have to install them first. On the other side of the balance, there are more advanced programs, definitely not included in the "default packages," which will greatly simplify the whole process. But as a general piece of advice, it is good to stick to the basics at first.
What other ways would you recommend to connect via WiFi from the command line? Please let us know in the comments.

umask - find default permissions in Linux

http://www.nextstep4it.com/magazines/nseditions/2014/June14_Edition/linux1.php

You may be wondering where these default file permissions come from. The answer is the umask. The umask command sets the default permissions for any file or directory you create:

$ touch newfile
$ ls -al newfile
-rw-r--r-- 1 rich rich 0 Sep 20 19:16 newfile
$

The touch command created the file using the default permissions assigned to my user account. The umask command shows and sets the default permissions:

$ umask
0022
$

Unfortunately, the umask command setting isn't overtly clear, and trying to understand exactly how it works makes things even muddier. The first digit represents a special security feature called the sticky bit.


The next three digits represent the octal values of the umask for a file or directory. To understand how umask works, you first need to understand octal mode security settings.

Octal mode security settings take the three rwx permission values and convert them into a 3-bit binary value, represented by a single octal digit. In the binary representation, each position is a binary bit. Thus, if the read permission is the only permission set, the value becomes r--, relating to a binary value of 100, indicating the octal value of 4.

Octal mode takes the octal permissions and lists three of them in order for the three security levels (user, group, and everyone). Thus, the octal mode value 664 represents read and write permissions for the user and group, but read-only permission for everyone else.


Now that you know about octal mode permissions, the umask value becomes even more confusing. The octal mode shown for the default umask on my Linux system is 0022, but the file I created had an octal mode permission of 644. How did that happen?

The umask value is just that, a mask. It masks out the permissions you don't want to give to the security level. Now we have to dive into some octal arithmetic to figure out the rest of the story.

The umask value is subtracted from the full permission set for an object. The full permission for a file is mode 666 (read/write permission for all), but for a directory it's 777 (read/write/execute permission for all). Thus, in the example, the file starts out with permissions 666, and the umask of 022 is applied, leaving a file permission of 644.

The umask value is normally set in the /etc/profile startup file. You can specify a different default umask setting using the umask command:

$ umask 026
$ touch newfile2
$ ls -l newfile2
-rw-r----- 1 rich rich 0 Sep 20 19:46 newfile2
$

By setting the umask value to 026, the default file permissions become 640, so the new file is now restricted to read-only for group members, and everyone else on the system has no permissions to the file. The umask value also applies to making new directories:

$ mkdir newdir
$ ls -l
drwxr-x--x 2 rich rich 4096 Sep 20 20:11 newdir/
$

Because the default permissions for a directory are 777, the resulting permissions from the umask are different from those of a new file. The 026 umask value is subtracted from 777, leaving the 751 directory permission setting.
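Strictly speaking, the umask is applied as a bit mask rather than a true arithmetic subtraction; the two happen to agree for common values such as 022 and 026. You can verify both results above with a little shell arithmetic (a quick sketch, assuming a bash shell):

$ printf '%03o\n' $(( 8#666 & ~8#022 ))
644
$ printf '%03o\n' $(( 8#777 & ~8#026 ))
751
$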

That’s it for this edition. In the coming classroom, we will discuss how to change permissions using the chmod command and how to change the owner of files or directories.

What are useful CLI tools for Linux system admins

http://xmodulo.com/2014/08/useful-cli-tools-linux-system-admins.html

System administrators (sysadmins) are responsible for day-to-day operations of production systems and services. One of the critical roles of sysadmins is to ensure that operational services are available round the clock. For that, they have to carefully plan backup policies, disaster management strategies, scheduled maintenance, security audits, etc. Like every other discipline, sysadmins have their tools of trade. Utilizing proper tools in the right case at the right time can help maintain the health of operating systems with minimal service interruptions and maximum uptime.

This article will present some of the most popular and useful CLI tools recommended for sysadmins in their day to day activities. If you would like to recommend any useful tool which is not listed here, don't forget to share it in the comment section.

Network Tools

1. ping: Check end-to-end connectivity (RTT delay, jitter, packet loss) of a remote host with ICMP echo/reply. Useful to check system status and reachability.
2. hping: Network scanning and testing tool that can generate ICMP/TCP/UDP ping packets. Often used for advanced port scanning, firewall testing, manual path MTU discovery and fragmentation testing.
3. traceroute: Discover a layer-3 forwarding path from a local host to a remote destination host with TTL-limited ICMP/UDP/TCP probe packets. Useful to troubleshoot network reachability and routing problems.
4. mtr: A variation of traceroute which characterizes per-hop packet loss/jitter with running statistics. Useful to characterize routing path delays.
5. netcat/socat: A swiss army knife of TCP/IP networking, allowing to read/write byte streams over TCP/UDP. Useful to troubleshoot firewall policies and service availability.
6. dig: DNS troubleshooting tool that can generate forward queries, reverse queries, find authoritative name servers, check CNAME, MX and other DNS records. Can be instructed to query a specific DNS server of your choosing.
7. nslookup: Another DNS checking/troubleshooting tool. Works with all DNS queries and records. Can query a particular DNS server.
8. dnsyo: A DNS testing tool which checks DNS propagation by performing DNS lookups against a large number of open resolvers located across 1,500 different networks around the world.
9. lsof: Show information about files (e.g., regular files, pipes or sockets) which are opened by processes. Useful to monitor processes or users in terms of their open network connections or opened files.
10. iftop: An ncurses-based TUI utility that can be used to monitor in real time the bandwidth utilization and network connections of individual network interfaces. Useful to keep track of bandwidth-hogging applications, users, destinations and ports.
11. netstat: A network statistics utility that can show status information and statistics about open network connections (TCP/UDP ports, IP addresses), routing tables, TX/RX traffic and protocols. Useful for network related diagnosis and performance tuning.
12. tcpdump: A popular packet sniffer tool based on libpcap packet capture library. Can define packet capturing filters in Berkeley Packet Filters format.
13. tshark: Another CLI packet sniffer software with full compatibility with its GUI counterpart, Wireshark. Supports 1,000 protocols and the list is growing. Useful to troubleshoot, analyze and store information on live packets.
14. ip: A versatile CLI networking tool which is part of the iproute2 package. Used to check and modify routing tables, network device state, and IP tunneling settings. Useful to view routing tables, add/remove static routes, configure network interfaces, and otherwise troubleshoot routing issues.
15. ifup/ifdown: Used to bring up or shut down a particular network interface. Often a preferred alternative to restarting the entire network service.
16. autossh: A program which creates an SSH session and automatically restarts the session should it disconnect. Often useful to create a persistent reverse SSH tunnel across restrictive corporate networks.
17. iperf: A network testing tool which measures maximum bi-directional throughput between a pair of hosts by injecting customizable TCP/UDP data streams in between.
18. elinks/lynx: Text-based web browsers for CLI-based server environments.

Security Tools

19. iptables: A user-space CLI tool for configuring Linux kernel firewall. Provides means to create and modify rules for incoming, transit and outgoing packets within Linux kernel space.
20. nmap: A popular port scanning and network discovery tool used for security auditing purposes. Useful to find out which hosts are up and running on the local network, and what ports are open on a particular host.
21. TCP Wrappers: A host-based network ACL tool that can be used to filter incoming/outgoing requests/replies. Often used alongside iptables as an additional layer of security.
22. getfacl/setfacl: View and customize access control lists of files and directories, as extensions to traditional file permissions.
23. cryptsetup: Used to create and manage LUKS-encrypted disk partitions.
24. lynis: A CLI-based vulnerability scanner tool. Can scan the entire Linux system, and report potential vulnerabilities along with possible solutions.
25. maldet: A malware scanner CLI tool which can detect and quarantine potentially malware-infected files. Can run as a background daemon for continuous monitoring.
26. rkhunter/chkrootkit: CLI tools which scan for potential rootkits, hidden backdoors and suspected exploits on a local system, and disable them.

Storage Tools

27. fdisk: A disk partition editor tool. Used to view, create and modify disk partitions on hard drives and removable media.
28. sfdisk: A variant of fdisk which accesses or updates a partition table in a non-interactive fashion. Useful to automate disk partitioning as part of backup and recovery procedure.
29. parted: Another disk partition editor which can support disks larger than 2TB with GPT (GUID Partition Table). Gparted is a GTK+ GUI front-end of parted.
30. df: Used to check used/available storage and mount point of different partitions or file directories. A user-friendly variant dfc exists.
31. du: Used to view current disk usage associated with different files and directories (e.g., du -sh *).
32. mkfs: A disk formatting command used to build a filesystem on individual disk partitions. Filesystem-specific versions of mkfs exist for a number of filesystems including ext2, ext3, ext4, bfs, ntfs, vfat/fat.
33. fsck: A CLI tool used to check a filesystem for errors and repair them where possible. Typically run automatically upon boot when necessary, but also invoked manually on demand after unmounting a partition.
34. mount: Used to map a physical disk partition, network share or remote storage to a local mount point. Any read/write in the mount point causes the actual data to be read/written on the corresponding underlying storage.
35. mdadm: A CLI tool for managing software RAID devices on top of physical block devices. Can create, build, grow or monitor RAID array.
36. lvm: A suite of CLI tools for managing volume groups and physical/logical volumes, which allows one to create, resize, split and merge volumes on top of multiple physical disks with minimum downtime.

Log Processing Tools

37. tail: Used to monitor trailing part of a (growing) log file. Other variants include multitail (multi-window monitoring) and ztail (inotify support and regex filtering and coloring).
38. logrotate: A CLI tool that can split, compress and mail old/large log files at a pre-defined interval. Useful for the administration of busy servers which may produce a large amount of log files.
39. grep/egrep: Can be used to filter log content for a particular pattern or a regular expression. Variants include user-friendly ack and faster ag.
40. awk: A versatile text scanning and processing tool. Often used to extract certain columns or fields from text/log files, and feed the result to other tools.
41. sed: A text stream editor tool which can filter and transform (e.g., remove line/whitespace, substitute/convert a word, add numbering) text streams and pipeline the result to stdout/stderr or another tool.

Backup Tools

42. rsync: A fast one-way incremental backup and mirroring tool. Often used to replicate a data repository to an offsite storage, optionally over a secure connection such as SSH or stunnel.
43. rdiff-backup: Another bandwidth-efficient, incremental backup tool. Maintains differential of two consecutive snapshots.
44. duplicity: An encrypted incremental backup utility. Uses GnuPG to encrypt a backup, and transfers to a remote server over SSH.

Performance Monitoring Tools

45. top: A CLI-based process viewer program. Can monitor system load, process states, CPU and memory utilization. Variants include more user-friendly htop.
46. ps: Shows a snapshot of all running processes in the system. The output can be customized to show PID, PPID, user, load, memory, cumulative user/system time, start time, and more. Variants include pstree which shows processes in a tree hierarchy.
47. nethogs: A bandwidth monitoring tool which groups active network connections by processes, and reports per-process (upload/download) bandwidth consumption in real-time.
48. ngxtop: A web-server access log parser and monitoring tool whose interface is inspired by top command. It can report, in real time, a sorted list of web requests along with frequency, size, HTTP return code, IP address, etc.
49. vmstat: A simple CLI tool which shows various run-time system properties such as process count, free memory, paging status, CPU utilization, block I/O activities, interrupt/context switch statistics, and more.
50. iotop: An ncurses-based I/O monitoring tool which shows in real time disk I/O activities of all running processes in sorted order.
51. iostat: A CLI tool which reports current CPU utilization, as well as device I/O utilization, where I/O utilization (e.g., block transfer rate, byte read/write rate) is reported on a per-device or per-partition basis.

Productivity Tools

52. screen: Used to split a single terminal into multiple persistent virtual terminals, which can also be made accessible to remote users for TeamViewer-like screen sharing.
53. tmux: Another terminal multiplexer tool which enables multiple persistent sessions, as well as horizontal/vertical splits of a terminal.
54. cheat: A simple CLI tool which allows you to read cheat sheets of many common Linux commands, conveniently right at your fingertips. Pre-built cheat sheets are fully customizable.
55. apropos: Useful when you are searching man pages for descriptions or keywords.

Package Management Tools

56. apt: The de facto package manager for Debian-based systems like Debian, Ubuntu or BackTrack. A life saver.
57. apt-fast: A supporting utility for apt-get, which can significantly improve apt-get's download speed by using multiple concurrent connections.
58. apt-file: Used to find out which .deb package a specific file belongs to, or to show all files in a particular .deb package. Works on both installed and non-installed packages.
59. dpkg: A CLI utility to install a .deb package manually. It is highly advisable to use apt whenever possible.
60. yum: The de facto automatic package manager for Red Hat based systems like RHEL, CentOS or Fedora. Yet another life saver.
61. rpm: The low-level package tool underneath yum; typically I use it together with yum. Has some useful parameters like -q, -f, -l for querying, files and locations, respectively.

Hardware Tools

62. lspci: A command line tool which shows various information about installed PCI devices, such as model names, device drivers, capabilities, memory address, PCI bus address.
63. lshw: A command line tool which queries and displays detailed information of hardware configuration in various categories (e.g., processor, memory, motherboard, network, video, storage). Supports multiple output formats: html, xml, json, text.
64. inxi: A comprehensive hardware reporting tool which gives an overview of various hardware components such as CPU, graphics card, sound card, network card, temperature/fan sensors, etc.
If you would like to recommend any useful tool which is not listed here, feel free to share it in the comment section.

How to Encrypt Email in Linux

http://www.linux.com/learn/tutorials/784165-how-to-encrypt-email-in-linux

Figure 1: KGpg provides a nice GUI for creating and managing your encryption keys.
If you've been thinking of encrypting your email, it is a rather bewildering maze to sort through thanks to the multitude of email services and mail clients. There are two levels of encryption to consider: SSL/TLS encryption protects your login and password to your mailserver. GnuPG is the standard strong Linux encryption tool, and it encrypts and authenticates your messages. It is best if you manage your own GPG encryption and not leave it up to third parties, which we will discuss in a moment.
Encrypting messages still leaves you vulnerable to traffic analysis, as message headers must be in the clear. So that necessitates yet another tool such as the Tor network for hiding your Internet footprints. Let's look at various mail services and clients, and the pitfalls and benefits therein.

Forget Webmail

If you use GMail, Yahoo, Hotmail, or another Web mail provider, forget about it. Anything you type in a Web browser is vulnerable to JavaScript attacks, and to whatever mischief the service provider engages in. GMail, Yahoo, and Hotmail all offer SSL/TLS encryption to protect your messages from wiretapping. But they offer no protection from their own data-mining habits, so they don't offer end-to-end encryption. Yahoo and Google both claim they're going to roll out end-to-end encryption next year. Color me skeptical, because they will wither and die if anything interferes with the data-mining that is their core business.
There are various third-party email security services such as Virtru and SafeMess that claim to offer secure encryption for all types of email. Again I am skeptical, because whoever holds your encryption keys has access to your messages, so you're still depending on trust rather than technology.
Peer messaging avoids many of the pitfalls of using centralized services. RetroShare and Bitmessage are two popular examples of this. I don't know if they live up to their claims, but the concept certainly has merit.
What about Android and iOS? It's safest to assume that the majority of Android and iOS apps are out to get you. Don't take my word for it-- read their terms of service and examine the permissions they require to install on your devices. And even if their terms are acceptable when you first install them, unilateral TOS changes are industry standard, so it is safest to assume the worst.

Zero Knowledge

Proton Mail is a new email service that claims zero-knowledge message encryption. Authentication and message encryption are two separate steps, Proton is under Swiss privacy laws, and they do not log user activity. Zero knowledge encryption offers real security. This means that only you possess your encryption keys, and if you lose them your messages are not recoverable.
There are many encrypted email services that claim to protect your privacy. Read the fine print carefully and look for red flags such as limited user data collection, sharing with partners, and cooperation with law enforcement. These indicate that they collect and share user data, and have access to your encryption keys and can read your messages.

Linux Mail Clients

A standalone open source mail client such as KMail, Thunderbird, Mutt, Claws, Evolution, Sylpheed, or Alpine, set up with your own GnuPG keys that you control gives you the most protection. (The easiest way to set up more secure email and Web surfing is to run the TAILS live Linux distribution. See Protect Yourself Online With Tor, TAILS, and Debian.)
Whether you use TAILS or a standard Linux distro, managing GnuPG is the same, so let's learn how to encrypt messages with GnuPG.

How to Use GnuPG

First, a quick bit of terminology. OpenPGP is an open email encryption and authentication protocol, based on Phil Zimmerman's Pretty Good Privacy (PGP). GNU Privacy Guard (GnuPG or GPG) is the GPL implementation of OpenPGP. GnuPG uses public key (asymmetric) cryptography. This means that you create pairs of keys: a public key that anyone can use to encrypt messages to send to you, and a private key that only you possess to decrypt them. GnuPG performs two separate functions: digitally signing messages to prove they came from you, and encrypting messages. Anyone can read your digitally signed messages, but only people you have exchanged keys with can read your encrypted messages. Remember, never share your private keys! Only public keys.
Seahorse is GNOME's graphical front-end to GnuPG, and KGpg is KDE's graphical GnuPG tool.
Now let's run through the basic steps of creating and managing GnuPG keys. This command creates a new key:
$ gpg --gen-key
This is a multi-step process; just answer all the questions, and the defaults are fine for most people. When you create your passphrase, write it down and keep it in a secure place because if you lose it you cannot decrypt anything. All that advice about never writing down your passwords is wrong. Most of us have dozens of logins and passwords to track, including some that we rarely use, so it's not realistic to remember all of them. You know what happens when people don't write down their passwords? They create simple passwords and re-use them. Anything you store on your computer is potentially vulnerable; a nice little notebook kept in a locked drawer is impervious to everything but a physical intrusion, if an intruder even knew to look for it.
I must leave it as your homework to figure out how to configure your mail client to use your new key, as every one is different. You can list your key or keys:
$ gpg --list-keys
/home/carla/.gnupg/pubring.gpg
------------------------------
pub 2048R/587DD0F5 2014-08-13
uid Carla Schroder (my gpg key)
sub 2048R/AE05E1E4 2014-08-13
This is a fast way to grab necessary information like the location of your keys, and your key name, which is the UID. Suppose you want to upload your public key to a keyserver; this is how it looks using my example key:
$ gpg --send-keys 'Carla Schroder' --keyserver http://example.com
When you create a new key for upload to public key servers, you should also create a revocation certificate. Don't do it later-- create it when you create your new key. You can give it any arbitrary name, so instead of revoke.asc you could give it a descriptive name like mycodeproject.asc:
$ gpg --output revoke.asc --gen-revoke 'Carla Schroder'
Now if your key ever becomes compromised you can revoke it by first importing the revocation certificate into your keyring:
$ gpg --import ~/.gnupg/revoke.asc
Then create and upload a new key to replace it. Any users of your old key will be notified as they refresh their key databases.
You must guard your revocation certificate just as zealously as your private key. Copy it to a CD or USB stick and lock it up, and delete it from your computer. It is a plain-text key, so you could even print it on paper.
If you ever need a copy-and-paste key, for example on public keyrings that allow pasting your key into a web form, or if you want to post your public key on your Web site, then you must create an ASCII-armored version of your public key:
$ gpg --output carla-pubkey.asc --export -a 'Carla Schroder'
This creates the familiar plain-text public key you've probably seen, like this shortened example:
-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v1
mQENBFPrn4gBCADeEXKdrDOV3AFXL7QQQ+i61rMOZKwFTxlJlNbAVczpawkWRC3l
IrWeeJiy2VyoMQ2ZXpBLDwGEjVQ5H7/UyjUsP8h2ufIJt01NO1pQJMwaOMcS5yTS
[...]
I+LNrbP23HEvgAdNSBWqa8MaZGUWBietQP7JsKjmE+ukalm8jY8mdWDyS4nMhZY=
=QL65
-----END PGP PUBLIC KEY BLOCK-----
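Once you have exchanged public keys with a correspondent, day-to-day use boils down to a couple of commands. Here is a quick sketch using my example key as the recipient and a hypothetical message.txt file; substitute your correspondent's UID and your own filenames:
$ gpg --armor --encrypt --recipient 'Carla Schroder' message.txt
$ gpg --decrypt message.txt.asc > decrypted.txt
The first command writes an ASCII-armored message.txt.asc that only the holder of the matching private key can decrypt; the second decrypts such a file on the receiving end. To digitally sign a message without encrypting it, use gpg --clearsign message.txt instead.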
That should get you started learning your way around GnuPG. The GnuPG manuals have complete details on using GnuPG and all of its options.

Top 4 Linux download managers

http://www.linuxuser.co.uk/reviews/top-4-linux-download-managers

Improve and better manage your web downloads for mirroring, mass grabs or just better control over your files

Download managers seem to be old news these days, but there are still some excellent uses for them. We compare the top four of them on Linux.

uGet

Advertised as lightweight and full-featured like a majority of other Linux apps, uGet can handle multi-threaded streams, includes filters and can integrate with an undefined selection of web browsers. It’s been around for over ten years now, starting out as UrlGet, and can also run on Windows.
uGet is actually very full-featured, with a lot of the kind of functions that advanced torrent clients use
Interface
uGet reminds us of any number of torrent client interfaces, with categories for Active, Finished, Paused and so on for the different downloads. Although there is a lot of information to take in, it’s all presented very cleanly and clearly. The main downloading controls are easy to access, with more advanced ones alongside them.
Integration
While it can see into the clipboard for URLs, uGet doesn’t natively integrate into browsers like Chromium and Firefox. Still, there are add-ons for both these browsers that allow them to connect to uGet: Firefox via FlashGot and Chromium with a dedicated plug-in. Not ideal, but good enough.
Features
uGet’s maturity affords it a range of features, including advanced scheduling to switch downloading on and off, batch download via the clipboard and the ability to change which file types it looks for in the clipboard. There are plug-in options, but not a huge amount.
Availability
While it’s also available in most major distro repos, the uGet website includes regularly updated binaries for a variety of popular distributions as well as easily accessible source code. It runs on GTK 3+ so it has a smaller footprint in some desktop environments than others, although we’d say it’s worth the extra dependencies in KDE or other Qt desktops.
Overall
8/10
We very much like uGet – its wide variety of features and popularity have allowed it to develop quite a lot to be an all-encompassing solution to download management, with some decent integration with Linux browsers.

KGet

KDE’s own download manager seems to have been originally designed to work with Konqueror, the KDE web browser. It comes with the kind of features we’re looking for in this test: control of multiple downloads and the ability to run a checksum alongside the downloaded product.
You need to manually activate the ability to keep an eye on the clipboard for links
Interface
As expected of a KDE app, KGet fits the aesthetic style of the desktop environment with similar icons and curves throughout. It’s quite a simple design as well, with only the most necessary functions available on the main toolbars and a minimal view of the current downloads.
Integration
KGet natively integrates with KDE’s Konqueror browser, although it’s not the most popular. Support for it in Firefox is done via FlashGot as usual, but there’s no real way to do it in Chromium. You can turn on a feature that asks if you want to download copied URLs; however, it doesn’t parse the clipboard very well and sometimes wants to download plain text.
Features
The selection of features available is not that large. No scheduling, no batch operations and generally an almost bare-minimum set of downloading features. The clipboard-scanning feature is a nice idea but it’s a bit buggy. It’s a little weird, as the Settings menu looks like it’s designed to have more settings and options.
Availability
While it doesn’t come by default with a KDE install, it is available for any distro that supports KDE. It does need a few KDE libraries to run though, and it’s a bit tricky to find the source code. There isn’t a selection of binaries that you can use with a few distros either.
Overall
6/10
KGet doesn’t really offer users a huge amount more than the download manager in the majority of popular browsers, although at least you can use it while the browsers are otherwise turned off.

DownThemAll!

DownThemAll!, being somewhat platform-independent, comes to Linux by way of Firefox as an add-on. This limits it to use with only Firefox; however, as Firefox is one of the most popular browsers in the world, its tighter integration may be just what some are looking for in a download manager.
There are actually a whole lot of options available for DownThemAll! that make it very flexible
Interface
Part of the integration in Firefox allows DownThemAll! to slot into the standard aesthetic of the browser, with right-clicking bringing up options alongside the normal downloading ones. The extra dialog menus are generally themed after Firefox as well, while the main download window is clean and based on its own design.
Integration
It doesn’t integrate system-wide but its ability to camouflage itself with Firefox makes it seem like an extra part of the original browser. It can also run alongside the normal downloader if you want, and can find specific link types on a webpage with little manual filtering, and no need for copy and pasting.
Features
With the ability to control how many downloads can happen at once, limit bandwidth when not idle and advanced auto or manual filtering, DownThemAll! is full of excellent features that aid mass downloading. The One Click function also allows it to very quickly start downloads to a pre-determined folder faster than normal download functions.
Availability
Firefox is available on just about every distro and other operating system around, which makes DownThemAll! just as prolific. Unfortunately this is a double-edged sword, as Firefox may not be your browser of choice. It also adds a little weight to the browser, which isn’t the lightest to begin with.
Overall
7/10
DownThemAll! is excellent and if you use Firefox you may not need to use anything else. Not everyone uses Firefox as their preferred browser though, and it needs to be left on for the manager to start running.

Steadyflow

Easily available in Ubuntu and some Debian-based distros, Steadyflow may be limited in terms of where you can get it but it’s got a reputation in some circles as one of the better managers available for any distro. It can read the clipboard for URLs, use GNOME’s preset proxies and has many other features.
The settings in Steadyflow are extremely limited and somewhat difficult to access
Interface
Steadyflow is quite simple in appearance with a pleasant, clean interface that doesn’t clutter the download window. The dialog for adding downloads is simple enough, with basic options for how to treat it and where the file should live. It’s nothing we can really complain about, although it does remind us of the lack of features in the app.
Integration
Reading copied URLs is as standard and there’s a plug-in for Chromium to integrate with that. Again, you can use FlashGot to link it up to Firefox if that’s your preferred browser. You can’t really edit what it parses from the clipboard though and there’s no batch ability like in uGet and DownThemAll!
Features
Extremely lacking in features and the Options menu is very limited as well. The Pause and Resume function also doesn’t seem to work – a basic part of any browser’s file download features. Still, notifications and default action on finished files can be edited, along with an option to run a script once downloads are finished.
Availability
Only available on Ubuntu and there’s no easy way to get the source code for the app either. This means while it’s easily obtainable on all Ubuntu-based distros, it’s limited to these types of distros. As it’s not even the best download manager available on Linux, that shouldn’t be too big of a concern.
Overall
5/10
Frankly, not that good. With very basic options and limited to only working on Ubuntu, Steadyflow doesn’t do enough to differentiate itself from the standard downloading options you’ll get on your web browser.
And the winner is…
uGet
In this test we’ve proven that there is a place for download managers on modern computers, even if the better ones have cribbed from the torrent clients that seem to have usurped them. While torrenting may be a more effective way for some, with ISPs getting wiser to torrent traffic some people may get better results with a good download manager. Not only are transfer caps imposed by most major ISPs, some are even beginning to slow down or even block torrent traffic in peak hours – even legal traffic such as distro ISOs and other free software is throttled.
Steadyflow seems to be a very popular solution for this, but our usage and tests showed an underdeveloped and weak product. The much older uGet was the star of the show, with an amazing selection of features that can aid in downloading single items or filtering through an entire webpage for relevant items to grab. The same goes for DownThemAll!, the excellent Firefox add-on that, while stuck with Firefox, has just about the same level of features, albeit with better integration.
If you’re choosing between the two it really comes down to what your preferred browser is and whether you need to have downloads and uploads going around the clock. DownThemAll! requires Firefox running, whereas uGet runs on its own, saving a lot of resources and electricity in the process – obviously this makes uGet a much better prospect for 24-hour data transferring and it really isn’t a major hassle to set up big batch downloads, or even just get the download information from your browser.
Give download managers another chance. You will not be disappointed with the results.

Postfix – Enable logging of email subjects in the maillog

http://www.linuxtechi.com/log-email-subject-maillog

By default, Postfix only captures ‘From’ and ‘To’ details in the log file (/var/log/maillog). There are some scenarios where we want the email’s subject to be captured in the maillog as well. In this article we will discuss how to achieve this:
We are assuming that Postfix is already up and running, and will make the changes below.
 
Step 1: Edit the ‘/etc/postfix/main.cf’ file and uncomment the line below:
#header_checks = regexp:/etc/postfix/header_checks
 
Step 2: Append the line below to ‘/etc/postfix/header_checks’:
/^Subject:/     WARN
 
Step 3: Restart the Postfix server
#service postfix restart
#postmap /etc/postfix/header_checks
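Before sending a real test mail, you can verify that the regexp table matches a subject header as expected. A quick sanity check, using the example subject from the test below:
#postmap -q "Subject: Linux Interview Call Details" regexp:/etc/postfix/header_checks
If the pattern matches, postmap prints the configured action (WARN).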

 
Step 4: Now test by sending a test mail and checking the logs
(Screenshot: sending a test mail using telnet.)
As we can see above, the info user has sent an email to a Gmail address with the subject "Linux Interview Call Details".
Now view the maillog using the command 'tailf /var/log/maillog':
(Screenshot: maillog entry showing the email subject.)
 

The Complete Beginner's Guide to Linux

http://www.linux.com/learn/tutorials/784060-the-complete-beginners-guide-to-linux

From smartphones to cars, supercomputers and home appliances, the Linux operating system is everywhere.
Linux. It’s been around since the mid ‘90s, and has since reached a user-base that spans industries and continents. For those in the know, you understand that Linux is actually everywhere. It’s in your phones, in your cars, in your refrigerators, your Roku devices. It runs most of the Internet, the supercomputers making scientific breakthroughs, and the world's stock exchanges. But before Linux became the platform to run desktops, servers, and embedded systems across the globe, it was (and still is) one of the most reliable, secure, and worry-free operating systems available.
For those not in the know, worry not – here is all the information you need to get up to speed on the Linux platform.

What is Linux?

Just like Windows XP, Windows 7, Windows 8, and Mac OS X, Linux is an operating system. An operating system is software that manages all of the hardware resources associated with your desktop or laptop. To put it simply – the operating system manages the communication between your software and your hardware. Without the operating system (often referred to as the “OS”), the software wouldn’t function.
The OS is comprised of a number of pieces: 
  • The Bootloader: The software that manages the boot process of your computer. For most users, this will simply be a splash screen that pops up and eventually goes away to boot into the operating system.
  • The kernel: This is the one piece of the whole that is actually called “Linux”. The kernel is the core of the system and manages the CPU, memory, and peripheral devices. The kernel is the “lowest” level of the OS.
  • Daemons: These are background services (printing, sound, scheduling, etc) that either start up during boot, or after you log into the desktop.
  • The Shell: You’ve probably heard mention of the Linux command line. This is the shell – a command process that allows you to control the computer via commands typed into a text interface. This is what, at one time, scared people away from Linux the most (assuming they had to learn a seemingly archaic command line structure to make Linux work). This is no longer the case. With modern desktop Linux, there is no need to ever touch the command line.
  • Graphical Server: This is the sub-system that displays the graphics on your monitor. It is commonly referred to as the X server or just “X”.
  • Desktop Environment: This is the piece of the puzzle that the users actually interact with. There are many desktop environments to choose from (Unity, GNOME, Cinnamon, Enlightenment, KDE, XFCE, etc). Each desktop environment includes built-in applications (such as file managers, configuration tools, web browsers, games, etc).
  • Applications: Desktop environments do not offer the full array of apps. Just like Windows and Mac, Linux offers thousands upon thousands of high-quality software titles that can be easily found and installed. Most modern Linux distributions (more on this in a moment) include App Store-like tools that centralize and simplify application installation. For example: Ubuntu Linux has the Ubuntu Software Center (Figure 1) which allows you to quickly search among the thousands of apps and install them from one centralized location. 
Figure 1: The Ubuntu Software Center is a Linux app store that carries thousands of free and commercial applications for Linux.

Why use Linux?

This is the one question that most people ask. Why bother learning a completely different computing environment, when the operating system that ships with most desktops, laptops, and servers works just fine? To answer that question, I would pose another question. Does that operating system you’re currently using really work “just fine”? Or are you constantly battling viruses, malware, slow downs, crashes, costly repairs, and licensing fees?
If you struggle with the above, and want to free yourself from the constant fear of losing data or having to take your computer in for the “yearly clean up,” Linux might be the perfect platform for you. Linux has evolved into one of the most reliable computer ecosystems on the planet. Combine that reliability with zero cost of entry and you have the perfect solution for a desktop platform.
That’s right, zero cost of entry...as in free. You can install Linux on as many computers as you like without paying a cent for software or server licensing (including costly Microsoft Client Access Licenses, or CALs).
Let’s take a look at the cost of a Linux server, in comparison to Windows Server 2012. The price of the Windows Server 2012 software alone can run up to $1,200.00 USD. That doesn’t include CALs, and licenses for other software you may need to run (such as a database, a web server, mail server, etc). With the Linux server...it’s all free and easy to install. In fact, installing a full blown web server (that includes a database server), is just a few clicks or commands away (take a look at “Easy LAMP Server Installation” to get an idea how simple it can be).
If you’re a system administrator, working with Linux is a dream come true. No more daily babysitting of servers. In fact, Linux is as close to “set it and forget it” as you will ever find. And on the off chance that one service on the server requires restarting, re-configuring, or upgrading, most likely the rest of the server won’t be affected.
Be it the desktop or a server, if zero cost isn’t enough to win you over – what about having an operating system that will work, trouble free, for as long as you use it? I’ve personally used Linux for nearly twenty years (as a desktop and server platform) and have not once had an issue with malware, viruses, or random computer slow-downs. It’s that stable. And server reboots? Only if the kernel is updated. It is not out of the ordinary for a Linux server to go years without being rebooted. That’s stability and dependability.
Linux is also distributed under an open source license. Open source follows the following key philosophies:
  • The freedom to run the program, for any purpose.
  • The freedom to study how the program works, and change it to make it do what you wish.
  • The freedom to redistribute copies so you can help your neighbor.
  • The freedom to distribute copies of your modified versions to others.
The above are crucial to understanding the community that comes together to create the Linux platform. It is, without a doubt, an operating system that is “by the people, for the people”. These philosophies are also one of the main reasons a large percentage of people use Linux. It’s about freedom and freedom of choice.

What is a “distribution”?

Linux has a number of different versions to suit nearly any type of user. From new users to hard-core users, you’ll find a “flavor” of Linux to match your needs. These versions are called distributions (or, in the short form, “distros.”) Nearly every distribution of Linux can be downloaded for free, burned onto disk (or USB thumb drive), and installed (on as many machines as you like).
Unity desktop
Ubuntu's Unity desktop.
Each of the most popular Linux distributions has a different take on the desktop. Some opt for very modern user interfaces (such as Ubuntu’s Unity, above, and Deepin’s Deepin Desktop), whereas others stick with a more traditional desktop environment (openSUSE uses KDE). For an easy guide to Linux desktops check out How to Find the Best Linux Desktop for You.
You can check out the top 100 distributions on the Distrowatch site.
And don’t think the server has been left behind. For this arena, you can turn to a number of dedicated server distributions. Some of these are free (such as Ubuntu Server and CentOS) and some have an associated price (such as Red Hat Enterprise Linux and SUSE Enterprise Linux). Those with an associated price also include support.

Which distribution is right for you?

Which distribution you use will depend upon the answer to three simple questions:
  • How skilled of a computer user are you?
  • Do you prefer a modern or a standard desktop interface?
  • Server or desktop?
If your computer skills are fairly basic, you’ll want to stick with a newbie-friendly distribution such as Linux Mint, Ubuntu, or Deepin. If your skill set extends into the above-average range, you could go with a distribution like Debian or Fedora. If, however, you’ve pretty much mastered the craft of computer and system administration, use a distribution like Gentoo.
If you’re looking for a server-only distribution, you will also want to decide if you need a desktop interface, or if you want to do this via command-line only. The Ubuntu Server does not install a GUI interface. This means two things – your server won’t be bogged down loading graphics and you’ll need to have a solid understanding of the Linux command line. However (there is always an “however” with Linux), you can install a GUI package on top of the Ubuntu Server with a single command like sudo apt-get install ubuntu-desktop. System administrators will also want to view a distribution with regards to features. Do you want a server-specific distribution that will offer you, out of the box, everything you need for your server? If so, CentOS might be the best choice. Or, do you want to take a desktop distribution and add the pieces as you need them? If so, Debian or Ubuntu Linux might serve you well.
For new users, check out “The Best Linux Distribution for New Users”, to make the selection a much easier task.

Installing Linux

For most, the idea of installing an operating system might seem like a very daunting task. Believe it or not, Linux offers one of the easiest installations of all operating systems. In fact, most versions of Linux offer what is called a Live distribution – which means you run the operating system from either a CD/DVD or USB flash drive without making any changes to your hard drive. You get the full functionality without having to commit to the installation. Once you’ve tried it out, and decided you wanted to use it, you simply double-click the “Install” icon and walk through the simple installation wizard.
Typically, the installation wizards walk you through the process with the following steps (I’ll illustrate the installation of Ubuntu Linux):
  • Preparation: Make sure your machine meets the requirements for installation. This also may ask you if you want to install third-party software (such as plugins for MP3 playback, video codecs, and more).
    (Screenshot: preparing for your Linux installation.)
  • Wireless Setup (If necessary): If you are using a laptop (or machine with wireless), you’ll need to connect to the network, in order to download third-party software and updates.
  • Hard drive allocation (Figure 4): This step allows you to select how you want the operating system to be installed. Are you going to install Linux alongside another operating system (called “dual booting”), use the entire hard drive, upgrade an existing Linux installation, or install over an existing version of Linux?
    (Screenshot: select your type of installation and click Install Now.)
  • Location: Select your location from the map.
  • Keyboard layout: Select the keyboard layout for your system.
  • User setup: Set up your username and password.
That’s it. Once the system has completed the installation, reboot and you’re ready to go. For a more in-depth guide to installing Linux, take a look at “How to Install and Try Linux the Absolutely Easiest and Safest Way”, or download the Linux Foundation's PDF guide for Linux installation.

Installing software on Linux

Just as the operating system itself is easy to install, so too are applications. Most modern Linux distributions include what most would consider an “app store”. This is a centralized location where software can be searched and installed. Ubuntu Linux has the Ubuntu Software Center, Deepin has the Deepin Software Center, some distributions rely on Synaptic, while others rely on GNOME Software.
Regardless of the name, each of these tools does the same thing: it gives you a central place to search for and install Linux software. Of course, these tools depend upon the presence of a GUI. For GUI-less servers, you will have to depend upon the command-line interface for installation.
Let's look at two different tools to illustrate how easy even command-line installation can be. Our examples are for Debian-based distributions and Fedora-based distributions. The Debian-based distros will use the apt-get tool for installing software and Fedora-based distros will require the use of the yum tool. Both work very similarly. I'll illustrate using the apt-get command. Let's say you want to install the wget tool (which is a handy tool used to download files from the command line). To install this using apt-get, the command would look like this:
sudo apt-get install wget
The sudo command is added because you need super user privileges in order to install software. Similarly, to install the same software on a Fedora-based distribution, you would first su to the super user (literally issue the command su and enter the root password), and issue this command:
yum install wget
That’s it...all there is to installing software on a Linux machine. It’s not nearly as challenging as you might think. Still in doubt? Recall the Easy Lamp Server Installation from earlier? With a single command:
sudo tasksel
You can install a complete LAMP (Linux Apache MySQL PHP) server on either a server or desktop distribution. It really is that easy.

More Resources

If you’re looking for one of the most reliable, secure, and dependable platforms for both the desktop and the server, look no further than one of the many Linux distributions. With Linux you can assure your desktops will be free of trouble, your servers up, and your support requests at a minimum.
If you're looking for more resources to help guide you through your lifetime with Linux, check out the documentation and training materials offered by your distribution's community and by The Linux Foundation.

Monitoring Android Traffic with Wireshark

$
0
0
http://www.linuxjournal.com/content/monitoring-android-traffic-wireshark

The ubiquity and convenience of smartphones has been a real boon for getting information on the go. I love being able to jump on a Wi-Fi hotspot, catch up on my mail, check my banking balance or read the latest tech news—all without having to bring along or boot up a laptop. Now that mobile development is mainstream, most of this access is done via specialized apps, instead of via a Web browser.
This migration away from direct Web access in favor of dedicated smartphone apps has made for a richer user experience, but it also has made knowing exactly what is going on "under the hood" a lot harder. On our Linux boxes, there are many tools to help users peer into the internals of what's going to and from the machine. Our browsers have simple HTTP versus HTTPS checks to see if there's encryption, and there are simple but easy-to-use browser plugins like Firebug that let us view exactly what's being sent and retrieved over the Web. At the operating system level, powerful tools like Wireshark let us drill down even further, capturing all traffic flowing through a network interface. Smartphones usually are locked up to a point where it's almost impossible for a regular user to run any network monitoring or tracing software directly on the phone, so how can a curious user get access to that phone traffic?
Fortunately, with just a little bit of work, you can use Linux to transform almost any laptop into a secret-sharing wireless access point (WAP), connect your phone and view the data flowing to and from the phone with relative ease. All you really need is a laptop running Linux with one wireless and one Ethernet connection.

Intercepting Traffic

The first step is to set up your own "naughty" WAP where you can capture and log all the Internet traffic passing through it—simulating the kind of information that a rogue employee could be obtaining from a coffee-shop Wi-Fi hotspot. Let's do this in a distribution-independent way that doesn't mess around with your existing router (no need to change security settings) and doesn't require rooting or installing anything unseemly on your phone.

False Starts

It may be tempting to try a shortcut for capturing this traffic. Here are a few techniques I tried and discarded before sticking with a hostapd/dnsmasq/iptables solution.
Ubuntu's Built-in Hotspots:
Ubuntu has a handy "Use as Hotspot" feature tucked away in its networking settings. Unfortunately, it creates hotspots in ad hoc mode, which isn't compatible with most versions of Android. I didn't try Fedora's implementation, but the method I recommend instead will work on any distribution.
Monitor Mode:
It's tempting just to put the wireless card in monitor mode and capture all wireless traffic, independent of SSID. This is pretty cool, but there are quite a few "gotchas":
  • The drivers for your wireless card must support monitor mode. Many, but not all cards support this mode.
  • Your capture needs to include the four WPA "handshake" packets.
  • You'll probably have to compile and use airmon-ng to start monitor mode and then capture on the mon0 pseudo-device airmon creates.
  • If the WAP is using encryption, the packets you capture also will be encrypted. Wireshark does have a facility to help decode the packets, but you'll need to enter information about the security scheme used by the WAP and toggle a few sets of options until the decoded packets look right. For a first-time user, it's hard enough making sense out of Wireshark dumps without having to worry about toggling security options on and off.
Capturing with the Android Emulator:
Another approach would be to use an Android emulator on your capture device, install and then run the target application, and capture the traffic from the emulator. Actually getting a banking app onto the emulator is much harder than it sounds, though:
  • Due to recent Android licensing changes, the major Android VMs no longer include the Google Play store. (I tried both the Android SDK and the free product from Genymotion.)
  • If your phone isn't rooted, it's not easy to get the application's .apk off your phone and onto the VM.
To turn a laptop into a WAP, you'll first use hostapd to put the wireless card into access point mode (broadcasting an SSID, authenticating with security and so on). Next, you'll use dnsmasq to provide DNS and DHCP services for clients connecting on the wireless connection. Finally, iptables' masquerading features will be used to direct IP traffic from clients on the wireless connection to the Internet (via your Ethernet connection), and then route responses back to the correct client on the wireless side.

hostapd

hostapd is a small utility that lets you create your own wireless access point. Installation is straightforward, and configuration is just as easy. Most wireless cards and modern kernels will be using the mac80211 driver. Check yours via lsmod|grep mac80211. If that's your driver, find your wireless device via ifconfig, and set up the SSID of your choice as shown below for an unsecured, totally open access point:

===[/etc/hostapd/hostapd.conf]======
interface=wlan0
driver=nl80211
ssid=WatchingU
channel=1
===[/etc/hostapd/hostapd.conf]======
I recommend not using Wi-Fi security for this test; it would be overkill, as your access point will only be temporary. Should you desire a more permanent solution, hostapd supports many different authentication options.
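Before wiring hostapd into the full setup script shown later, it can be worth confirming that your card actually supports AP mode and that the configuration file parses cleanly. A minimal sanity check, assuming the iw utility is installed, might look like this:
# Does the driver support AP ("master") mode?
iw list | grep -A8 "Supported interface modes"
# Run hostapd in the foreground with debug output to catch configuration errors;
# stop it with Ctrl-C once you have seen it come up
sudo hostapd -d /etc/hostapd/hostapd.conf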

dnsmasq

Now that hostapd is ready to start letting clients connect to your wireless connection, you need dnsmasq to serve DHCP and provide DNS for your access point. Fortunately, dnsmasq is also very easy to install and configure. The example below is the minimum required. Make sure the dhcp-range you specify will not conflict with anything already on your network. By default, dnsmasq will read your existing /etc/resolv.conf and propagate the DNS settings listed there to its clients. That's a pretty sane default configuration, but if you need something else, use the no-resolv option and specify the DNS servers manually:

========[/etc/dnsmasq.conf]===============
interface=wlan0
dhcp-range=10.0.0.3,10.0.0.20,12h
========[/etc/dnsmasq.conf]===============
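If you would rather not inherit the DNS servers from /etc/resolv.conf, a variant of the same file might look like the following sketch; the upstream server addresses are only examples, so substitute whichever resolvers you actually use:

========[/etc/dnsmasq.conf]===============
interface=wlan0
dhcp-range=10.0.0.3,10.0.0.20,12h
no-resolv
server=8.8.8.8
server=8.8.4.4
========[/etc/dnsmasq.conf]===============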

iptables

The final piece of your wireless access point is iptables, which will use IP masquerading to take the traffic from the wireless connection, send it over the wired connection and route any responses back to the correct source on the wireless side. There are many distribution-specific ways to save and script iptables rules, but it's simpler to create a distribution-independent shell script to enable iptables and network address translation (NAT). A script for iptables that ties in hostapd and dnsmasq would look like the following (modify the wlan0 and eth0 entries to match your system):

=======[makeWAP.sh]==============
#!/bin/bash
export DEV_IN=wlan0;
export DEV_OUT=eth0;

echo "Bringing up $DEV_IN"
#This address/mask should match how you configured dnsmasq
ifconfig $DEV_IN up 10.0.0.1 netmask 255.255.255.0

echo "Starting dnsmasq"
dnsmasq

echo "Configuring iptables"
#Clear everything in iptables
iptables -Z;
iptables -F;
iptables -X;

#Turn on iptables NAT, forwarding, and enable
#forwarding in the kernel
iptables --table nat --append POSTROUTING --out-interface $DEV_OUT -j MASQUERADE
iptables --append FORWARD --in-interface $DEV_IN -j ACCEPT
sysctl -w net.ipv4.ip_forward=1

echo "Starting hostapd"
hostapd /etc/hostapd/hostapd.conf 1> /dev/null
=======[makeWAP.sh]==============
To test everything, connect your capture laptop to a wired connection with Internet access and disconnect any existing wireless connections. Run the makeWAP.sh script (sudo ./makeWAP.sh) to start up the WAP.
On the phone, turn off mobile data (for Android 4.3, this is done via Settings→Data Usage→Mobile data→Off), turn on Wi-Fi, and connect to the new WAP (in the example above the SSID would be "WatchingU"). Once connected, test a few sites to make sure you can access data from the Internet.
If everything works, congratulations, you have transformed your laptop into the world's most ridiculously overqualified wireless router!
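When you are finished capturing, you may want to undo the temporary changes. The following cleanup script is a rough sketch that mirrors what makeWAP.sh set up (the script name and exact steps are my own, not part of the original setup):

=======[unmakeWAP.sh]==============
#!/bin/bash
export DEV_IN=wlan0;

echo "Stopping hostapd and dnsmasq"
killall hostapd dnsmasq

echo "Flushing iptables and disabling forwarding"
iptables --table nat -F
iptables -F
sysctl -w net.ipv4.ip_forward=0

echo "Bringing down $DEV_IN"
ifconfig $DEV_IN down
=======[unmakeWAP.sh]==============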

Wireshark

Wireshark is a network packet analyzer that you'll use to capture and make sense of the data flowing on your newly created access point. You'll be merely scratching the surface of its capabilities, as it is an extremely powerful tool with abilities stretching well beyond "poke at a few packets" as used in this project.
Install Wireshark for your version of Linux. If at all possible, get version 1.10 or higher, as 1.10 adds support for decoding gzip'ed HTTP data on the fly (and there's a lot of that). Prior to 1.10, you'd have to save the TCP stream to a file, edit out the header and then gunzip it to view the raw data. This becomes tedious quickly, so having Wireshark do all that for you behind the scenes is awesome.
When running Wireshark for the first time, if it complains that there are no devices available for capture, you have to give your ID permissions for the various devices and applications used by Wireshark. For Ubuntu, run sudo dpkg-reconfigure wireshark-common, and select the option to let nonroot users capture packets, and make sure your ID is in the "Wireshark" group. For other distributions, search for which devices and scripts need to be owned by which groups.
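On an Ubuntu-family system, for example, the usual sequence is roughly the following (a sketch; package and group names can differ on other distributions):
sudo dpkg-reconfigure wireshark-common    # answer "Yes" to allow non-root capture
sudo usermod -aG wireshark $USER          # add your ID to the wireshark group
Log out and back in (or run newgrp wireshark) for the group change to take effect.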
Before moving on to capturing traffic, shut down every non-essential app and service on the phone to make it easier to find the traffic of interest. The fewer packets you have to sort through, the better.

Capturing Unencrypted Web Traffic

Before you start looking for sensitive data, let's first get familiar with what unencrypted traffic looks like in Wireshark.
  • From the Wireshark starting screen, select the wireless device (wlan0) and then the "Start" icon to start a new capture.
  • On the phone, use a browser to go to http://www.linuxjournal.com.
  • Once the page has finished loading on the phone, press the "Stop" icon in Wireshark, and save the capture file somewhere safe, called something like "Capture_LJ.pcapng".
Now, let's take a look at this dump. With the dump file open in Wireshark, go to View→Name Resolution and make sure "Enable for Network Layer" is checked. This will improve readability by translating IP addresses to hostnames. The initial view (Figure 1) can be sort of intimidating, but there are some simple tips to make decoding this data easier.
Figure 1. Wireshark Output
As shown in Figure 1, Wireshark's dump screen has one row per TCP packet, but the data is more easily consumed when reassembled into a full TCP stream. To get the full stream, right-click on any row where the source or destination is www.linuxjournal.com, and choose "Follow TCP Stream". This automatically will find all the related packets and group them together in an easier-to-read format.
Figure 2. Follow TCP Stream
In this example, you can see the HTTP GET request from my phone in red, and the HTTP response from the Linux Journal Web server in blue. Here is where you can start to see unencrypted information flowing back and forth from the server. Since the server response's "Content-Type" header indicates that the response is a JPEG image, you can view that image with a little bit of extra manipulation. Press the "Save As" button to save the stream to a temporary file (use RAW format), then use an editor like emacs or vi to trim out the header text from the image binary contents. It takes a little bit of practice, but it's usually pretty obvious where the HTTP header stops and the binary bits begin.
Figure 3. Raw TCP Dump
Once you've removed the header (and any stray footer or additional header sections), you can save the file with a .jpeg extension and view it.
Continue browsing through the dump manually and look for interesting TCP segments. You also could take a more systematic approach by using Wireshark's filtering capabilities. Use a filter like tcp.stream eq 1 (Figure 4), and keep iterating the stream ID until you've seen all the streams, drilling down with "Follow Stream" if the packets look promising.
Figure 4. Filtering to a Single TCP Stream
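A few display filters of the kind used above are worth keeping handy; the IP address below is a hypothetical client address from the dnsmasq range rather than one taken from this capture:
  • tcp.stream eq 1 : a single reassembled TCP conversation
  • http : only HTTP traffic
  • http.request.method == "GET" : only HTTP GET requests
  • ip.addr == 10.0.0.3 : traffic to or from one wireless client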

Capturing Low-Sensitivity Application Traffic

Now that you're getting a little more comfortable with capturing and viewing dumps with Wireshark, let's try peeking at the information coming to and from an Android application. For this next test, I used the app "reddit is fun" since it sends and receives non-sensitive data that is probably not encrypted.
Capture an app search or query using the same technique as before: start Wireshark on the laptop, launch and exercise the app from the phone, then stop Wireshark and save the capture file.
Figure 5 shows an example TCP stream from "reddit is fun".
Figure 5. Gzip-Encoded JSON
Again, the request from the app is in red, and the response from the reddit server is in blue. Note that since the request is not encrypted, anyone monitoring the WAP would be able to detect your interest in "Raspberry Pi" data. The content-type of the response is JSON, and even though the Content-Encoding is set to "gzip", Wireshark is letting you view the content body as pure JSON. If the data in your TCP Stream page looks garbled, you may have an older version of Wireshark that doesn't support on-the-fly gzip decoding. Either save the contents to a file and gunzip on your own, or upgrade your version of Wireshark.
Note: look at that hilarious "Server" header in the response—is some clever reddit engineer sending an SQL injection attack to some script kiddies?

Capturing High-Sensitivity App Data

By now, the process to capture traffic from an app should be pretty straightforward. Let's try running a banking or high-sensitivity app and use the tricks described earlier to see if you can detect the application sending any information in the clear that it shouldn't. To be perfectly honest, the odds of finding such a low-level (and easily avoidable) flaw are going to be very, very low. Android application development is pretty mature now, and the Android libraries make using SSL encryption pretty easy. It feels good to double-check though, so follow the same steps as before, but log on to a banking application of your choice.
Now, as you step through the TCP streams, you should note a few major differences. Most of the traffic will be HTTPS instead of HTTP, and the protocol will be TLS instead of TCP or HTTP. In addition, the TCP stream no longer will contain human-readable content, even after trying the standard gunzip tricks (Figure 6).
Figure 6. Encrypted Traffic
Step through the TCP streams, following each one, and verify that there's no plain text or unencrypted communications that are exposing anything scary.

Next Steps

Now that you've almost certainly not found anything scary, where else can these network monitoring skills be applied? Here are some fun ideas:
  • Attach a console like a Wii or PS3 and see what kind of information it sends at startup and logon.
  • Create a WAP that doesn't actually go anywhere and just see what tries to connect. Maybe there's a device using Wi-Fi that you didn't even know about?
  • Get the SSL certificate for a server you support, and try out Wireshark's SSL decoding.
  • Reverse the wlan0 and eth0 designations in the scripts and set up the system backwards (connect the laptop's Wi-Fi to your existing WAP, and plug a device in to the laptop's Ethernet port) to monitor the output of wired-only devices. My "smart" Blu-ray player was communicating with all sorts of unexpected places at startup!

OpenSSH: Going flexible with forced commands

$
0
0
http://binblog.info/2008/10/20/openssh-going-flexible-with-forced-commands

Filed under: Security, UNIX & Linux by martin @ 9:32 am
As we all know, it is possible to use SSH not only for obtaining an interactive login session on a remote machine, but also for executing commands remotely. For instance, the following command will log on to myserver.example.com, execute “uname -a” and return to the local shell:
ssh myserver.example.com uname -a
(The local SSH client returns the exit code from the remote command, if you’re into this kind of detail.)
You might have some users (or scheduled automatisms) that you don’t want to be able to log on to that machine at all, but who should be permitted to execute only a given command. In order to achieve this, you can configure key-based authentication. Once this has been done, the key can be prefixed with a number of configuration options. Using one of these options, it is possible to enforce execution of a given command when this key is used for authentication.
In this example from ~/.ssh/authorized_keys, the user wants to look at the process list, so we set the command to “ps -ef”.
command="/bin/ps -ef"
Using this, when the user tries to log in, or tries to execute an arbitrary command, “/bin/ps -ef” is executed instead and the SSH session terminates.
In addition to enforcing a command, it is advisable to disable a number of advanced SSH features, such as TCP and X11 forwarding. Assignment of a pseudo terminal to the user’s SSH session may also be suppressed, by adding a number of additional configuration options next to the forced command:
no-port-forwarding,no-X11-forwarding,no-pty
Here’s what a full entry from ~/.ssh/authorized_keys might look like:
command="/bin/ps -ef",no-port-forwarding,no-X11-forwarding,no-pty ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAp0KMipajKK468mfihpZHqmrMk8w+PmzTnJrZUFYZZNmLkRk+icn+m71DdEHmza2cSf9WdiK7TGibGjZTE/Ez0IEhYRj5RM3dKkfYqitKTKlxVhXNda7az6VqAJ/jtaBXAMTjHeD82xlFoghLZOMkScTdWmu47FyVkv/IM1GjgX/I8s4307ds1M+sICyDUmgxUQyNF3UnAduPn1m8ux3V8/xAqPF+bRuFlB0fbiAEsSu4+AkvfX7ggriBONBR6eFexOvRTBWtriHsCybvd6tOpJHN8JYZLxCRYHOGX+sY+YGE4iIePKVf2H54kS5UlpC/fnWgaHbmu/XsGYjYrAFnVw== Test key
This is quite nice: We have successfully limited this user to requesting a process list.
This is called an SSH forced command.
So much for the introduction. :-D
Here's what I'm really getting at today: what if we want the user to be able to execute not just a single command, but a number of commands, such as:
  • Show the process list (ps)
  • Show virtual memory statistics (vmstat)
  • Stop and start the print server (/etc/init.d/cupsys stop/start)
Following the approach described above, this would give us four key pairs, four entries in ~/.ssh/authorized_keys, and four entirely different invocations of SSH on the client side, each of them using a dedicated private key. In other words: An administrative nightmare.
This is where the environment variable $SSH_ORIGINAL_COMMAND comes in. (This nice facility was pointed out to me last week by G., who had read about it somewhere but wondered what it might be useful for.)
Until now, all we know is that with a forced command in place, the SSH server ignores the command requested by the user. This is not entirely true, though. The SSH server does in fact remember the command that was requested, stores it in $SSH_ORIGINAL_COMMAND and thus makes it available within the environment of the forced command.
With this in mind, it is possible to allow more flexibility inside forced commands, without the need to go crazy with countless key pairs. Instead, it is possible to just create a wrapper script that is called as the forced command from within ~/.ssh/authorized_keys and decides what to do, based on the content of $SSH_ORIGINAL_COMMAND:
#!/bin/sh
# Script: /usr/local/bin/wrapper.sh

case "$SSH_ORIGINAL_COMMAND" in
"ps")
ps -ef
;;
"vmstat")
vmstat 1 100
;;
"cups stop")
/etc/init.d/cupsys stop
;;
"cups start")
/etc/init.d/cupsys start
;;
*)
echo "Sorry. Only these commands are available to you:"
echo "ps, vmstat, cupsys stop, cupsys start"
exit 1
;;
esac
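
Tying it together, the entry in ~/.ssh/authorized_keys then points at the wrapper instead of a single command, and the client simply passes one of the allowed keywords. The key material below is abbreviated, and the path matches the header comment in the script above; adjust both to your setup:
command="/usr/local/bin/wrapper.sh",no-port-forwarding,no-X11-forwarding,no-pty ssh-rsa AAAA...rest-of-public-key-omitted... Test key
On the client side, the allowed actions are then invoked like this:
ssh myserver.example.com "cups stop"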

It is important to be aware of potential security issues here, such as the user escaping to a shell prompt from within one of the listed commands. Setting the “no-pty” option already makes this kind of attack somewhat difficult. In addition, some programs, such as “top”, for example, have special options to run them in a “secure” read-only mode. It is advisable to closely examine all programs that are called as SSH forced commands for well-meant “backdoors” and to find out about securing them.
It’s up to you to decide based on your own situation, whether you want to run this wrapper as the root user or if you prefer to use password-less “sudo” commands to raise privileges where needed.
If you encounter problems while debugging $SSH_ORIGINAL_COMMAND, please make absolutely sure that you are authenticating with the correct key. I found it helpful to unset SSH_AUTH_SOCK in the window where I do my testing, in order to prevent intervention from identities stored in the SSH agent.

Unix: Viewing your processes through the eyes of /proc

$
0
0
http://www.itworld.com/operating-systems/432024/unix-viewing-your-processes-through-eyes-proc

The /proc file system brings the processes on your Unix systems into view in some very useful ways, but only if you take the time to cd over to /proc and see all it can tell you.

The /proc virtual file system has been available on Unix systems for going on 20 years now. I think it made its appearance on Solaris in 1996. Providing access to information previously available only in the Unix kernel or through a particular set of commands, /proc made it much easier to access and use information about the system and running processes. Even so, some aspects of /proc are easy to understand while others are something of a challenge to grasp.
The /proc file system is itself a virtual file system. The files are not "real" files like those we're used to, associated with inodes and having space allocated to them on our disks. If you look at the files in /proc, one of the first things you notice is that the files and the directories all show 0 as their size.
$ cd /proc
$ ls -l
total 0
dr-xr-xr-x 6 root root 0 Jul 28 13:39 1
dr-xr-xr-x 6 root root 0 Jul 28 13:39 10
dr-xr-xr-x 6 root root 0 Jul 28 13:39 1020
dr-xr-xr-x 6 root root 0 Jul 28 13:39 11
dr-xr-xr-x 6 root root 0 Jul 28 13:39 111
...
-r-------- 1 root root 0 Aug 17 14:02 vmcore
-r--r--r-- 1 root root 0 Aug 17 14:02 vmstat
-r--r--r-- 1 root root 0 Aug 17 14:02 zoneinfo
These "empty" directories and files provide views into our processes that can be very hard to derive through other means.
When you cd to /proc and list its content, you will notice that many of the directories have names that are simply numbers. Each of these corresponds to a process currently running on your system. Counting the directories with numeric names and counting the running processes should yield the same result, or something very close (the commands you use to do the counting will show up in both lists).
$ cd /proc
$ ls | grep '[0-9]' | wc -l
164
$ ps -ef | wc -l
164
To see an example of what /proc knows about a process, pick out a process from your ps -ef output and then move to the corresponding /proc directory.
$ ps -ef | grep httpd
root 2278 1 0 Jul23 ? 00:00:01 /usr/sbin/httpd
apache 3770 2278 0 04:02 ? 00:00:00 /usr/sbin/httpd
apache 3771 2278 0 04:02 ? 00:00:00 /usr/sbin/httpd
apache 3772 2278 0 04:02 ? 00:00:00 /usr/sbin/httpd
apache 3773 2278 0 04:02 ? 00:00:00 /usr/sbin/httpd
apache 3774 2278 0 04:02 ? 00:00:00 /usr/sbin/httpd
apache 3775 2278 0 04:02 ? 00:00:00 /usr/sbin/httpd
apache 3776 2278 0 04:02 ? 00:00:00 /usr/sbin/httpd
apache 3777 2278 0 04:02 ? 00:00:00 /usr/sbin/httpd
$ ls /proc/2278
ls: cannot read symbolic link /proc/2278/cwd: Permission denied
ls: cannot read symbolic link /proc/2278/root: Permission denied
ls: cannot read symbolic link /proc/2278/exe: Permission denied
attr cpuset fd loginuid mountstats schedstat status
auxv cwd fdinfo maps oom_adj smaps task
cmdline environ io mem oom_score stat wchan
coredump_filter exe limits mounts root statm
Using your personal credentials, you won't be able to look at all the files in /proc (as you can see from the display above) but, using sudo or switching to root, you can look at any file for any process.
The 2278 directory is just one of many, of course.
$ ls /proc
1 1607 199 2462 2844 31293 516 devices mounts
10 17 2 2478 2899 31294 6 diskstats mtrr
1020 1772 201 2479 29 31295 6428 dma net
11 18 202 2486 2900 31298 6430 driver partitions
111 1836 203 2491 2929 359 6486 execdomains schedstat
112 1842 204 2497 2932 3770 6642 fb scsi
113 1843 205 25 2933 3771 7 filesystems self
114 1844 2154 2506 2934 3772 7730 fs slabinfo
117 1845 2156 2559 2935 3773 7838 ide stat
119 1863 2186 2580 2936 3774 7966 interrupts swaps
12 1881 2189 2585 2938 3775 7970 iomem sys
13 1882 2240 26 2961 3776 7972 ioports sysrq-trigger
14 1883 2241 2602 2963 3777 8 irq sysvipc
15 1888 2242 2615 3 4 8006 kallsyms tty
15393 1895 2243 2631 31261 406 8008 kcore uptime
1567 1896 2265 2647 31264 412 8009 keys version
1568 1898 2278 27 31268 413 8059 key-users vmcore
1570 1899 2312 2704 31271 414 9 kmsg vmstat
1571 19 2313 2716 31272 415 acpi loadavg zoneinfo
1572 1907 2314 2719 31274 416 buddyinfo locks
16 1927 2315 2746 31285 437 bus mdstat
1601 1933 2361 2765 31287 458 cmdline meminfo
1603 1934 2391 2775 31288 483 cpuinfo misc
1605 198 2440 28 31289 5 crypto modules
Let's say you want to look at what /proc can tell you about your login shell. The first thing you probably want to do is display your shell's process ID.
$ echo $$
8009
OK, so let's look at its representation in /proc. First, here's the directory:
$ cd /proc
$ ls -ld 8009
dr-xr-xr-x 6 shs staff 0 Aug 17 12:31 8009
If we move into the directory, we can see all of the files that provide some information on the process. Notice that most of these files provide read access to anyone on the system. Notice also that they all have the same creation date and time -- when you logged in.
$ cd 8009
$ ls -l
total 0
dr-xr-xr-x 2 shs staff 0 Aug 17 12:52 attr
-r-------- 1 shs staff 0 Aug 17 12:52 auxv
-r--r--r-- 1 shs staff 0 Aug 17 12:52 cmdline
-rw-r--r-- 1 shs staff 0 Aug 17 12:52 coredump_filter
-r--r--r-- 1 shs staff 0 Aug 17 12:52 cpuset
lrwxrwxrwx 1 shs staff 0 Aug 17 12:52 cwd -> /proc/8009
-r-------- 1 shs staff 0 Aug 17 12:52 environ
lrwxrwxrwx 1 shs staff 0 Aug 17 12:52 exe -> /bin/bash
dr-x------ 2 shs staff 0 Aug 17 12:52 fd
dr-x------ 2 shs staff 0 Aug 17 12:52 fdinfo
-r-------- 1 shs staff 0 Aug 17 12:52 io
-r--r--r-- 1 shs staff 0 Aug 17 12:52 limits
-rw-r--r-- 1 shs staff 0 Aug 17 12:52 loginuid
-r--r--r-- 1 shs staff 0 Aug 17 12:52 maps
-rw------- 1 shs staff 0 Aug 17 12:52 mem
-r--r--r-- 1 shs staff 0 Aug 17 12:52 mounts
-r-------- 1 shs staff 0 Aug 17 12:52 mountstats
-rw-r--r-- 1 shs staff 0 Aug 17 12:52 oom_adj
-r--r--r-- 1 shs staff 0 Aug 17 12:52 oom_score
lrwxrwxrwx 1 shs staff 0 Aug 17 12:52 root -> /
-r--r--r-- 1 shs staff 0 Aug 17 12:52 schedstat
-r--r--r-- 1 shs staff 0 Aug 17 12:52 smaps
-r--r--r-- 1 shs staff 0 Aug 17 12:52 stat
-r--r--r-- 1 shs staff 0 Aug 17 12:52 statm
-r--r--r-- 1 shs staff 0 Aug 17 12:52 status
dr-xr-xr-x 3 shs staff 0 Aug 17 12:52 task
-r--r--r-- 1 shs staff 0 Aug 17 12:52 wchan
Some of these files are easy to use. Others require a lot more effort. The stat file provides information about a process' status, but represents the information in this format:
$ cat stat
8009 (bash) S 8008 8009 8009 34816 9706 4194304 5516 51685 0 1 6 8 37 47 15 0 1
0 1831964390 4898816 370 4294967295 134508544 135222164 3215953072 3215951
908 10765314 0 65536 3686404 1266761467 3225587569 0 0 17 2 0 0 0
The status file is much easier to use. It provides the same kind of information as stat but in a friendlier format.
$ cat status
Name: bash
State: S (sleeping)
SleepAVG: 98%
Tgid: 8009
Pid: 8009
PPid: 8008
TracerPid: 0
Uid: 263 263 263 263
Gid: 100 100 100 100
FDSize: 256
Groups: 100
VmPeak: 4784 kB
VmSize: 4784 kB
VmLck: 0 kB
VmHWM: 1476 kB
VmRSS: 1476 kB
VmData: 316 kB
VmStk: 88 kB
VmExe: 700 kB
VmLib: 1544 kB
VmPTE: 28 kB
StaBrk: 08851000 kB
Brk: 08893000 kB
StaStk: bfaf8cb0 kB
ExecLim: 080f6000
Threads: 1
SigQ: 0/32375
SigPnd: 0000000000000000
ShdPnd: 0000000000000000
SigBlk: 0000000000010000
SigIgn: 0000000000384004
SigCgt: 000000004b813efb
CapInh: 0000000000000000
CapPrm: 0000000000000000
CapEff: 0000000000000000
Cpus_allowed: 0000000f
Mems_allowed: 1
Some of this information is very straightforward -- the name of the process (bash), the process ID and parent process ID. You should be easily able to pick out the UID and GID.
It should come as no surprise that our process is sleeping. At any time, most processes are sleeping and, though we're obviously using the shell when we run this command, we're running another process within the shell. We can also see that our shell is sleeping 98% of the time.
The TracerPid variable set to 0 simply tells you that the process is not being traced.
A lot of the other variables -- those beginning with "Vm" -- relate to memory, while those that start with "Sig" tell you how signals are being handled. Some may be blocked, others ignored, and still others caught. One thing to keep in mind while looking at values like 0000000000384004 is that these are bit maps expressed in hexadecimal, so every digit represents four bits in the overall value.
Take the SigBlk (blocked signals) value as an example. If set to 10000 as shown above, being hex, that's 10000000000000000 in binary. Bit number 17 (counting from the right) is set, and signal 17 is SIGCHLD. Thus, we can tell which signals are being blocked.
Our signals caught variable has a lot more bits set, but we can map them out if we're curious like so:
000000004b813efb ==> 0100 1011 1000 0001 0011 1110 1111 1011
| | || | | || ||| |||| | ||
| | || | | || ||| |||| | |+- 1 = SIGHUP
| | || | | || ||| |||| | +-- 2 = SIGINT
| | || | | || ||| |||| +---- 4 = SIGILL
| | || | | || ||| |||+------ 5 = SIGTRAP
| | || | | || ||| ||+------- 6 = SIGABRT
| | || | | || ||| |+-------- 7 = SIGBUS
| | || | | || ||| +--------- 8 = SIGFPE
| | || | | || ||+------------ 10 = SIGUSR1
| | || | | || |+------------- 11 = SIGSEGV
| | || | | || +-------------- 12 = SIGUSR2
| | || | | |+---------------- 13 = SIGPIPE
| | || | | +----------------- 14 = SIGALRM
| | || | +--------------------- 17 = SIGCHLD
| | || +----------------------------- 24 = SIGXCPU
| | |+------------------------------- 25 = SIGXFSZ
| | +-------------------------------- 26 = SIGVTALRM
| +---------------------------------- 28 = SIGWINCH
+-------------------------------------- 31 = SIGSYS
The FDSize variable may be set to 256, but we can see that the only file descriptors in use are 0, 1, 2 and 255. Just examine the contents of the fd directory.
$ ls fd
0 1 2 255
We can also look at the limits file to see what limits may be placed on our shell.
$ cat limits
Limit Soft Limit Hard Limit Units
Max cpu time unlimited unlimited seconds
Max file size unlimited unlimited bytes
Max data size unlimited unlimited bytes
Max stack size 10485760 unlimited bytes
Max core file size unlimited unlimited bytes
Max resident set unlimited unlimited bytes
Max processes 50 50 processes
Max open files 1024 1024 files
Max locked memory 32768 32768 bytes
Max address space unlimited unlimited bytes
Max file locks unlimited unlimited locks
Max pending signals 32375 32375 signals
Max msgqueue size 819200 819200 bytes
Max nice priority 0 0
Max realtime priority 0 0
The environ file shows some details about our operational environment -- such as our search path.
$ cat environ
USER=shsLOGNAME=shsHOME=/home/staff/shsPATH=/usr/local/bin:/bin:/usr/binMAIL=
/var/mail/shsSHELL=/bin/bashSSH_CLIENT=10.20.30.111 8506 22SSH_CONNECTION=10.20.
30.111 8506 192.168.0.12 22SSH_TTY=/dev/pts/0TERM=xterm
Another interesting file to look at is the io file. As you can see, it's reporting on characters read and written. Note the changes between the first and second running.
$ cat io
rchar: 1953940
wchar: 57247
syscr: 1791
syscw: 917
read_bytes: 8192
write_bytes: 8192
cancelled_write_bytes: 4096
$ cat io
rchar: 1955293
wchar: 57370
syscr: 1804
syscw: 921
read_bytes: 8192
write_bytes: 8192
cancelled_write_bytes: 4096
  • rchar: the number of bytes the process read from files, pipes, etc.
  • wchar: the number of bytes the process wrote
  • syscr: the number of read-like system call invocations.
  • syscw: the number of write-like system call invocations
  • read_bytes: the number of bytes the process read from disk
  • write_bytes: the number of bytes the process caused to be written to disk
  • cancelled_write_bytes: the number of bytes of write I/O that the process caused not to happen, for example by truncating or removing a file before it was flushed (yeah, this one's hard to follow!)
The maps file shows regions in virtual memory that the process is using.
$ cat maps
00572000-0058d000 r-xp 00000000 68:02 4723528 /lib/ld-2.5.so
0058d000-0058e000 r-xp 0001a000 68:02 4723528 /lib/ld-2.5.so
0058e000-0058f000 rwxp 0001b000 68:02 4723528 /lib/ld-2.5.so
00591000-006e8000 r-xp 00000000 68:02 4723531 /lib/libc-2.5.so
006e8000-006ea000 r-xp 00157000 68:02 4723531 /lib/libc-2.5.so
006ea000-006eb000 rwxp 00159000 68:02 4723531 /lib/libc-2.5.so
006eb000-006ee000 rwxp 006eb000 00:00 0
006f0000-006f3000 r-xp 00000000 68:02 4723597 /lib/libdl-2.5.so
006f3000-006f4000 r-xp 00002000 68:02 4723597 /lib/libdl-2.5.so
006f4000-006f5000 rwxp 00003000 68:02 4723597 /lib/libdl-2.5.so
006f7000-006fa000 r-xp 00000000 68:02 4724408 /lib/libtermcap.so.2.0.8
006fa000-006fb000 rwxp 00002000 68:02 4724408 /lib/libtermcap.so.2.0.8
00a44000-00a45000 r-xp 00a44000 00:00 0 [vdso]
00c7e000-00c88000 r-xp 00000000 68:02 4723790 /lib/libnss_files-2.5.so
00c88000-00c89000 r-xp 00009000 68:02 4723790 /lib/libnss_files-2.5.so
00c89000-00c8a000 rwxp 0000a000 68:02 4723790 /lib/libnss_files-2.5.so
08047000-080f6000 r-xp 00000000 68:02 4756156 /bin/bash
080f6000-080fb000 rw-p 000ae000 68:02 4756156 /bin/bash
080fb000-08100000 rw-p 080fb000 00:00 0
08851000-08893000 rw-p 08851000 00:00 0 [heap]
b7d28000-b7f28000 r--p 00000000 68:07 5052872 /usr/lib/locale/locale-archive
b7f28000-b7f2a000 rw-p b7f28000 00:00 0
b7f30000-b7f32000 rw-p b7f30000 00:00 0
b7f32000-b7f39000 r--s 00000000 68:07 5116679 /usr/lib/gconv/gconv-modules.cache
bfae4000-bfaf9000 rw-p bffe9000 00:00 0 [stack]
There's also an uptime file, though you're probably never going to choose it over the uptime command.
$ cat uptime
18321661.93 1308472.21
$ uptime
13:04:14 up 212 days, 1:21, 1 user, load average: 0.00, 0.00, 0.00
The loadavg file, on the other hand, is easy to digest. Yes, the system this was run on is not a very busy one!
$ cat loadavg
0.00 0.00 0.00 1/175 8336
The /proc file system has a lot to tell you, but it's probably going to be most useful when you use it fairly frequently and get used to what your processes look like from the kernel's point of view.

How to configure Access Control Lists (ACLs) on Linux

$
0
0
http://xmodulo.com/2014/08/configure-access-control-lists-acls-linux.html

Working with permissions on Linux is a rather simple task. You can define permissions for users, groups, or others. This works really well when you work on a desktop PC or a virtual Linux instance which typically doesn't have a lot of users, or when users don't share files among themselves. However, what if you are part of a big organization that operates NFS or Samba servers for a diverse set of users? Then you will need more fine-grained configurations and permissions to meet the requirements of your organization.
Linux (and other POSIX-compliant Unixes) has so-called Access Control Lists (ACLs), which are a way to assign permissions beyond the common paradigm. By default you work with three permission classes: owner, group, and others. With ACLs, you can add permissions for additional users or groups beyond the simple "others" class or the file's owning group. For example, you can allow particular users A, B, and C to have write permission without giving write permission to their whole group.
ACLs are available for a variety of Linux filesystems including ext2, ext3, ext4, XFS, Btrfs, etc. If you are not sure whether the filesystem you are using supports ACLs, just read the documentation.

Enable ACLs on your Filesystem

First of all, we need to install the tools to manage ACLs.
On Ubuntu/Debian:
$ sudo apt-get install acl
On CentOS/Fedora/RHEL:
# yum -y install acl
On Archlinux:
# pacman -S acl
For demonstration purposes, I will use an Ubuntu server, but other distributions should work the same.
After installing the ACL tools, it is necessary to enable the ACL feature on our disk partitions so that we can start using it.
First, we can check if the ACL feature is already enabled:
$ mount

As you noticed, my root partition has the ACL attribute enabled. In case yours doesn't, you need to edit your /etc/fstab file: add the acl flag to the mount options of the partition on which you want to enable ACLs.
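For example, an entry with ACL enabled might look like the following line; the UUID is a placeholder, so keep your own device identifier, filesystem type, and other options:
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /  ext4  defaults,acl  0  1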

Now we need to re-mount the partition (I prefer to reboot completely, because I don't like losing data). If you enabled ACL for any other partitions, you have to remount them as well.
$ sudo mount / -o remount
Awesome! Now that we have enabled ACLs in our system, let's start to work with it.

ACL Examples

Basically ACLs are managed by two commands: setfacl which is used to add or modify ACLs, and getfacl which shows assigned ACLs. Let's do some testing.
I created a directory /shared owned by a hypothetical user named freeuser.
$ ls -lh /

I want to share this directory with two other users test and test2, one with full permissions and the other with just read permission.
First, to set ACLs for user test:
$ sudo setfacl -m u:test:rwx /shared
Now user test can create directories, files, and access anything under /shared directory.

Now we will add read-only permission for user test2:
$ sudo setfacl -m u:test2:rx /shared
Note that execution permission is necessary so test2 can read directories.

Let me explain the syntax of setfacl command:
  • -m means modify ACL. You can add new, or modify existing ACLs.
  • u: means user. You can use g to set group permissions (see the example right after this list).
  • test is the name of the user.
  • :rwx represents permissions you want to set.
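The same -m syntax works for groups as well. For instance, to give a hypothetical group named developers full access (a sketch, not part of the original example), you would run:
$ sudo setfacl -m g:developers:rwx /shared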
Now let me show you how to read ACLs.
$ ls -lh /shared

As you noticed, there is a + (plus) sign after normal permissions. It means that there are ACLs set up. To actually read ACLs, we need to run:
$ sudo getfacl /shared
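Since the screenshot is not reproduced here, the output looks roughly like the following sketch (the exact owner, group, and permission bits depend on how the directory was created):
# file: shared
# owner: freeuser
# group: freeuser
user::rwx
user:test:rwx
user:test2:r-x
group::r-x
mask::rwx
other::r-x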

Finally if you want to remove ACL:
$ sudo setfacl -x u:test /shared

If you want to wipe out all ACL entries at once:
$ sudo setfacl -b /shared

One last thing. The commands cp and mv can change their behavior when they work on files or directories with ACLs. In the case of cp, you need to add the '-p' parameter to copy ACLs; if this is not possible, it will show you a warning. mv will always move the ACLs, and if that is not possible, it will likewise show you a warning.

Conclusion

Using ACLs gives you tremendous power and control over the files you want to share, especially on NFS/Samba servers. Moreover, if you administer shared hosting, this tool is a must-have.

Linux Tutorial: Install Ansible Configuration Management And IT Automation Tool

$
0
0
http://www.cyberciti.biz/python-tutorials/linux-tutorial-install-ansible-configuration-management-and-it-automation-tool

Today I will be talking about ansible, a powerful configuration management solution written in Python. There are many configuration management solutions available, all with pros and cons, but ansible stands apart from many of them for its simplicity. What makes ansible different from many of the most popular configuration management systems is that it is agentless: there is no need to set up agents on every node you want to control. Plus, this has the benefit of letting you control your entire infrastructure from more than one place, if needed. Whether that last point is truly a benefit may be debatable, but I find it a positive in most cases. Enough talk, let's get started with Ansible installation and configuration on RHEL/CentOS and Debian/Ubuntu based systems.

Prerequisites

  1. Distro: RHEL/CentOS/Debian/Ubuntu Linux
  2. Jinja2: A modern and designer friendly templating language for Python.
  3. PyYAML: A YAML parser and emitter for the Python programming language.
  4. paramiko: Native Python SSHv2 protocol library.
  5. httplib2: A comprehensive HTTP client library.
  6. Most of the actions listed in this post are written with the assumption that they will be executed by the root user running bash or any other modern shell.

How Ansible works

The ansible tool uses no agents. It requires no additional custom security infrastructure, so it's easy to deploy. All you need is an SSH client and server:
     +----------------------+                      +---------------+
     |Linux/Unix workstation|         SSH          | file_server1  |
     |with Ansible          |<-------------------->| db_server2    |   Unix/Linux servers
     +----------------------+        Modules       | proxy_server3 |   in local/remote
          192.168.1.100                            +---------------+   data centers
Where,
  1. 192.168.1.100 - Install Ansible on your local workstation/server.
  2. file_server1..proxy_server3 - Use 192.168.1.100 and Ansible to automate configuration management of all servers.
  3. SSH - Setup ssh keys between 192.168.1.100 and local/remote servers.

Ansible Installation Tutorial

Installation of ansible is a breeze; many distributions have a package available in their third-party repos which can easily be installed. A quick alternative is to just pip install it or grab the latest copy from github. To install using your package manager on RHEL/CentOS Linux based systems, you will most likely need the EPEL repo first; then:

Install ansible on a RHEL/CentOS Linux based system

Type the following yum command:
$ sudo yum install ansible

Install ansible on a Debian/Ubuntu Linux based system

Type the following apt-get command:
$ sudo apt-get install software-properties-common
$ sudo apt-add-repository ppa:ansible/ansible
$ sudo apt-get update
$ sudo apt-get install ansible

Install ansible using pip

The pip command is a tool for installing and managing Python packages, such as those found in the Python Package Index. The following method works on Linux and Unix-like systems:
$ sudo pip install ansible

Install the latest version of ansible using source code

You can install the latest version from github as follows:
$ cd ~
$ git clone git://github.com/ansible/ansible.git
$ cd ./ansible
$ source ./hacking/env-setup

When running ansible from a git checkout, one thing to remember is that you will need to set up your environment every time you want to use it, or you can add the setup to your .bashrc file:
# ADD TO BASH RC
$ echo "export ANSIBLE_HOSTS=~/ansible_hosts">> ~/.bashrc
$ echo "source ~/ansible/hacking/env-setup">> ~/.bashrc

The hosts file for ansible is basically a list of hosts that ansible is able to perform work on. By default, ansible looks for the hosts file at /etc/ansible/hosts, but there are ways to override that, which can be handy if you are working with multiple installs or are responsible for the datacenters of several different clients. You can pass the hosts file on the command line using the -i option:
$ ansible all -m shell -a "hostname" --ask-pass -i /etc/some/other/dir/ansible_hosts
My preference, however, is to use an environment variable; this can be useful if you source a different file when starting work for a specific client. The environment variable is $ANSIBLE_HOSTS, and it can be set as follows:
$ export ANSIBLE_HOSTS=~/ansible_hosts
Once all requirements are installed and you have your hosts file set up, you can give it a test run. For a quick test, I put 127.0.0.1 into the ansible hosts file as follows:
$ echo "127.0.0.1"> ~/ansible_hosts
Now let's test with a quick ping:
$ ansible all -m ping
OR ask for the ssh password:
$ ansible all -m ping --ask-pass
I have run across a problem a few times regarding initial setup. It is highly recommended that you set up keys for ansible to use, but in the previous test we used --ask-pass; on some machines you will need to install sshpass or add -c paramiko like so:
$ ansible all -m ping --ask-pass -c paramiko
Or you can install sshpass; however, sshpass is not always available in the standard repos, so paramiko can be easier.

Setup SSH Keys

Now that we have gotten the configuration, and other simple stuff, out of the way, let's move on to doing something productive. A lot of the power of ansible lies in playbooks, which are basically scripted ansible runs (for the most part), but we will start with some one-liners before we build out a playbook. Let's start with creating and configuring keys so we can avoid the -c and --ask-pass options:
$ ssh-keygen -t rsa
Sample outputs:
Generating public/private rsa key pair.
Enter file in which to save the key (/home/mike/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/mike/.ssh/id_rsa.
Your public key has been saved in /home/mike/.ssh/id_rsa.pub.
The key fingerprint is:
94:a0:19:02:ba:25:23:7f:ee:6c:fb:e8:38:b4:f2:42 mike@ultrabook.linuxdork.com
The key's randomart image is:
+--[ RSA 2048]----+
|... . . |
|. . + . . |
|= . o o |
|.* . |
|. . . S |
| E.o |
|.. .. |
|o o+.. |
| +o+*o. |
+-----------------+
Now obviously there are plenty of ways to put this in place on the remote machine, but since we are using ansible, let's use that:
$ ansible all -m copy -a "src=/home/mike/.ssh/id_rsa.pub dest=/tmp/id_rsa.pub" --ask-pass -c paramiko
Sample outputs:
SSH password:
127.0.0.1 | success >> {
"changed": true,
"dest": "/tmp/id_rsa.pub",
"gid": 100,
"group": "users",
"md5sum": "bafd3fce6b8a33cf1de415af432774b4",
"mode": "0644",
"owner": "mike",
"size": 410,
"src": "/home/mike/.ansible/tmp/ansible-tmp-1407008170.46-208759459189201/source",
"state": "file",
"uid": 1000
}
Next, add the public key on the remote server; enter:
$ ansible all -m shell -a "cat /tmp/id_rsa.pub >> /root/.ssh/authorized_keys" --ask-pass -c paramiko
Sample outputs:
SSH password:
127.0.0.1 | FAILED | rc=1 >>
/bin/sh: /root/.ssh/authorized_keys: Permission denied
Whoops, we want to be able to run things as root, so let's add a -u option:
$ ansible all -m shell -a "cat /tmp/id_rsa.pub >> /root/.ssh/authorized_keys" --ask-pass -c paramiko -u root
Sample outputs:
SSH password:
127.0.0.1 | success | rc=0 >>
Please note, I wanted to demonstrate a file transfer using ansible; there is, however, a more built-in way of managing keys using ansible:
$ ansible all -m authorized_key -a "user=mike key='{{ lookup('file', '/home/mike/.ssh/id_rsa.pub') }}' path=/home/mike/.ssh/authorized_keys manage_dir=no" --ask-pass -c paramiko
Sample outputs:
SSH password:
127.0.0.1 | success >> {
"changed": true,
"gid": 100,
"group": "users",
"key": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCq+Z8/usprXk0aCAPyP0TGylm2MKbmEsHePUOd7p5DO1QQTHak+9gwdoJJavy0yoUdi+C+autKjvuuS+vGb8+I+8mFNu5CvKiZzIpMjZvrZMhHRdNud7GuEanusTEJfi1pUd3NA2iXhl4a6S9a/4G2mKyf7QQSzI4Z5ddudUXd9yHmo9Yt48/ASOJLHIcYfSsswOm8ux1UnyeHqgpdIVONVFsKKuSNSvZBVl3bXzhkhjxz8RMiBGIubJDBuKwZqNSJkOlPWYN76btxMCDVm07O7vNChpf0cmWEfM3pXKPBq/UBxyG2MgoCGkIRGOtJ8UjC/daadBUuxg92/u01VNEB mike@ultrabook.linuxdork.com",
"key_options": null,
"keyfile": "/home/mike/.ssh/authorized_keys",
"manage_dir": false,
"mode": "0600",
"owner": "mike",
"path": "/home/mike/.ssh/authorized_keys",
"size": 410,
"state": "file",
"uid": 1000,
"unique": false,
"user": "mike"
}
Now that the keys are in place, let's try running an arbitrary command like hostname and hope we don't get prompted for a password:
$ ansible all -m shell -a "hostname" -u root
Sample outputs:
127.0.0.1 | success | rc=0 >>
Success!!! Now that we can run commands as root without being prompted for a password, we are in a good place to easily configure any and all hosts in the ansible hosts file. Let's remove the key from /tmp:
$ ansible all -m file -a "dest=/tmp/id_rsa.pub state=absent" -u root
Sample outputs:
127.0.0.1 | success >> {
"changed": true,
"path": "/tmp/id_rsa.pub",
"state": "absent"
}
Next, I'm going to make sure we have a few packages installed and on the latest version and we will move on to something a little more complicated:
$ ansible all -m zypper -a "name=apache2 state=latest" -u root
Sample outputs:
127.0.0.1 | success >> {
"changed": false,
"name": "apache2",
"state": "latest"
}
Alright, the key we placed in /tmp is now absent and we have the latest version of apache installed. This brings me to the next point, something that makes ansible very flexible and gives more power to playbooks: many may have noticed the -m zypper in the previous commands. Now, unless you use openSUSE or SUSE Enterprise, you may not be familiar with zypper; it is basically the equivalent of yum in the SUSE world. In all of the examples above I have only had one machine in my hosts file, and while everything but the last command should work on any standard *nix system with a standard ssh config, this leads to a problem. What if we had multiple machine types that we wanted to manage? Well, this is where playbooks, and the configurability of ansible, really shine. First let's modify our hosts file a little; here goes:
$ cat ~/ansible_hosts
Sample outputs:
 
[RHELBased]
10.50.1.33
10.50.1.47
 
[SUSEBased]
127.0.0.1
 
First, we create some groups of servers, and give them some meaningful tags. Then we create a playbook that will do different things for the different kinds of servers. You might notice the similarity between the yaml data structures and the command line instructions we ran earlier. Basically the -m is a module, and -a is for module args. In the YAML representation you put the module then :, and finally the args.
 
---
- hosts: SUSEBased
  remote_user: root
  tasks:
    - zypper: name=apache2 state=latest
- hosts: RHELBased
  remote_user: root
  tasks:
    - yum: name=httpd state=latest
 
Now that we have a simple playbook, we can run it as follows:
$ ansible-playbook testPlaybook.yaml -f 10
Sample outputs:
 
PLAY [SUSEBased] **************************************************************
 
GATHERING FACTS ***************************************************************
ok: [127.0.0.1]
 
TASK: [zypper name=apache2 state=latest] **************************************
ok: [127.0.0.1]
 
PLAY [RHELBased] **************************************************************
 
GATHERING FACTS ***************************************************************
ok: [10.50.1.33]
ok: [10.50.1.47]
 
TASK: [yum name=httpd state=latest] *******************************************
changed: [10.50.1.33]
changed: [10.50.1.47]
 
PLAY RECAP ********************************************************************
10.50.1.33 : ok=2 changed=1 unreachable=0 failed=0
10.50.1.47 : ok=2 changed=1 unreachable=0 failed=0
127.0.0.1 : ok=2 changed=0 unreachable=0 failed=0
 
Now you will notice that you will see output from each machine that ansible contacted. The -f is what lets ansible run on multiple hosts in parallel. Instead of specifying all, or the name of a host group, on the command line, the playbook itself states which hosts each play targets. While we no longer need the --ask-pass option since we have ssh keys set up, it comes in handy when setting up new machines, and even new machines can be provisioned from a playbook. To demonstrate this, let's convert our earlier key example into a playbook:
 
---
- hosts: SUSEBased
  remote_user: mike
  sudo: yes
  tasks:
    - authorized_key: user=root key="{{ lookup('file', '/home/mike/.ssh/id_rsa.pub') }}" path=/root/.ssh/authorized_keys manage_dir=no
- hosts: RHELBased
  remote_user: mdonlon
  sudo: yes
  tasks:
    - authorized_key: user=root key="{{ lookup('file', '/home/mike/.ssh/id_rsa.pub') }}" path=/root/.ssh/authorized_keys manage_dir=no
 
Now there are plenty of other options here, for example having the keys dropped during a kickstart, or via some other kind of process involved with bringing up machines on the hosting of your choice, but this can be used in pretty much any situation assuming ssh is set up to accept a password. One thing to think about before writing out too many playbooks: version control can save you a lot of time. Machines need to change over time, but you don't need to re-write a playbook every time a machine changes; just update the pertinent bits and commit the changes. Another benefit of this ties into what I said earlier about being able to manage the entire infrastructure from multiple places. You can easily git clone your playbook repo onto a new machine and be completely set up to manage everything in a repetitive manner.

Real world ansible example

I know a lot of people make great use of services like pastebin, and a lot of companies, for obvious reasons, set up their own internal instance of something similar. Recently, I came across a newish application called showterm, and coincidentally I was asked to set up an internal instance of it for a client. I will spare you the details of this app, but you can google showterm if interested. So, for a reasonably real-world example, I will attempt to set up a showterm server and configure the needed app on the client to use it. In the process we will need a database server as well. So here goes; let's start with the client configuration.
 
---
- hosts: showtermClients
  remote_user: root
  tasks:
    - yum: name=rubygems state=latest
    - yum: name=ruby-devel state=latest
    - yum: name=gcc state=latest
    - gem: name=showterm state=latest user_install=no
 
That was easy; let's move on to the main server:
 
---
- hosts: showtermServers
  remote_user: root
  tasks:
    - name: ensure packages are installed
      yum: name={{item}} state=latest
      with_items:
        - postgresql
        - postgresql-server
        - postgresql-devel
        - python-psycopg2
        - git
        - ruby21
        - ruby21-passenger
    - name: showterm server from github
      git: repo=https://github.com/ConradIrwin/showterm.io dest=/root/showterm
    - name: Initdb
      command: service postgresql initdb
               creates=/var/lib/pgsql/data/postgresql.conf
    - name: Start PostgreSQL and enable at boot
      service: name=postgresql
               enabled=yes
               state=started
    - gem: name=pg state=latest user_install=no
  handlers:
    - name: restart postgresql
      service: name=postgresql state=restarted

- hosts: showtermServers
  remote_user: root
  sudo: yes
  sudo_user: postgres
  vars:
    dbname: showterm
    dbuser: showterm
    dbpassword: showtermpassword
  tasks:
    - name: create db
      postgresql_db: name={{dbname}}
    - name: create user with ALL priv
      postgresql_user: db={{dbname}} name={{dbuser}} password={{dbpassword}} priv=ALL

- hosts: showtermServers
  remote_user: root
  tasks:
    - name: database.yml
      template: src=database.yml dest=/root/showterm/config/database.yml

- hosts: showtermServers
  remote_user: root
  tasks:
    - name: run bundle install
      shell: bundle install
      args:
        chdir: /root/showterm

- hosts: showtermServers
  remote_user: root
  tasks:
    - name: run rake db tasks
      shell: 'bundle exec rake db:create db:migrate db:seed'
      args:
        chdir: /root/showterm

- hosts: showtermServers
  remote_user: root
  tasks:
    - name: apache config
      template: src=showterm.conf dest=/etc/httpd/conf.d/showterm.conf
 
 
Not so bad. Keeping in mind that this is a somewhat random and obscure app, we can now install it in a consistent fashion on any number of machines, and this is where the benefits of configuration management really come to light. Also, in most cases the declarative syntax almost speaks for itself, and wiki pages need not go into as much detail, although a wiki page with too much detail is never a bad thing in my opinion.

Expanding Configuration

We have not touched on everything here; Ansible has many options for configuring your setup. You can do things like embedding variables in your hosts file, so that Ansible will interpolate them on the remote nodes, e.g.:
 
[RHELBased]
10.50.1.33 http_port=443
10.50.1.47 http_port=80 ansible_ssh_user=mdonlon

[SUSEBased]
127.0.0.1 http_port=443
 
While this is really handy for quick configurations, you can also layer variables across multiple files in YAML format. In your hosts file path you can make two subdirectories named group_vars and host_vars. Any files in those paths that match the name of a group of hosts, or a host name in your hosts file, will be interpolated at run time. So the previous example would look like this (a short example of actually consuming one of these variables follows the listing):

ultrabook:/etc/ansible # pwd
/etc/ansible
ultrabook:/etc/ansible # tree
.
├── group_vars
│   ├── RHELBased
│   └── SUSEBased
├── hosts
└── host_vars
    ├── 10.50.1.33
    └── 10.50.1.47

2 directories, 5 files
ultrabook:/etc/ansible # cat hosts
[RHELBased]
10.50.1.33
10.50.1.47

[SUSEBased]
127.0.0.1
ultrabook:/etc/ansible # cat group_vars/RHELBased
ultrabook:/etc/ansible # cat group_vars/SUSEBased
---
http_port: 443
ultrabook:/etc/ansible # cat host_vars/10.50.1.33
---
http_port: 443
ultrabook:/etc/ansible # cat host_vars/10.50.1.47
---
http_port: 80
ansible_ssh_user: mdonlon
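
To actually consume one of these variables, you reference it with {{ }} inside a play or template. As a minimal sketch, a task along these lines (the Apache config path is the stock RHEL one and is only an illustration) would pick up whichever http_port value applies to each host:

---
- hosts: RHELBased
  remote_user: root
  tasks:
    - name: set the Apache listen port from the inventory variable
      lineinfile: dest=/etc/httpd/conf/httpd.conf regexp='^Listen ' line='Listen {{ http_port }}'

Hosts in host_vars override or supplement whatever the group file provides, so 10.50.1.47 would get port 80 while the rest of the group gets 443.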

Refining Playbooks

There are many ways to organize playbooks as well. In the previous examples we used a single file, and everything was really simplified. One commonly used way of organizing things is creating roles: you load a main file as your playbook, and it imports all the data from extra files, which are organized as roles. For example, if you have a WordPress site, you need a web head and a database. The web head will have a web server, the app code, and any needed modules. The database is sometimes run on the same host and sometimes on a remote host, and this is where roles really shine. You make a directory and a small playbook for each role; in this case we could have apache, mysql, wordpress, mod_php, and php roles. The big advantage is that not every role has to be applied on one server; in this case mysql could be applied to a separate machine. This also allows for code reuse; for example, your apache role could be used with Python apps and PHP apps alike. Fully demonstrating this is beyond the scope of this article, and there are many different ways of doing things; I would recommend searching for Ansible playbook examples. Many people are contributing code on GitHub, and I am sure various other sites.
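
That said, a bare-bones sketch of the layout may help make the idea concrete. The directory names below follow Ansible's role conventions, while the role and group names are purely illustrative:

site.yml                  # the main playbook you run with ansible-playbook
roles/
    apache/
        tasks/main.yml    # tasks that install and configure Apache
    mysql/
        tasks/main.yml
    wordpress/
        tasks/main.yml

site.yml then simply maps host groups to roles:

---
- hosts: webservers
  roles:
    - apache
    - wordpress

- hosts: dbservers
  roles:
    - mysql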

Modules

All of the work being done behind the scenes in Ansible is driven by modules. Ansible has an excellent library of built-in modules that do things like package installation, transferring files, and everything we have done in this article. But for some people this will not suit their setup, so Ansible provides a means of adding your own modules. One great thing about the API provided by Ansible is that you are not restricted to Python, the language it was written in; you can use any language, really. Ansible modules work by passing around JSON data structures, so as long as you can build a JSON data structure in your language of choice (which pretty much any scripting language can do), you can begin coding something right away. There is plenty of documentation on the Ansible site about how the module interface works, and many examples of modules on GitHub as well. Keep in mind that some obscure languages may not have great support, but that would only be because not enough people are contributing code in that language; try it out and publish your results somewhere!
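
As a minimal illustration of that contract, here is a sketch of a tiny module written in bash. The module name and the output field are hypothetical, not part of Ansible's standard library:

#!/bin/bash
# A purely illustrative custom module (hypothetical name: uptime_info).
# Ansible transfers the module to the remote node and executes it there; the
# module's only contract is to print a single JSON object on stdout.

seconds=$(cut -d. -f1 /proc/uptime)

# "changed" tells Ansible whether the module modified anything on the host.
printf '{"changed": false, "uptime_seconds": %s}\n' "$seconds"

Dropped into a directory passed with -M (or your configured module path), it could then be invoked like any built-in module, for example with something like: ansible RHELBased -m uptime_info -M /path/to/custom/modules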

Conclusion

In conclusion, there are many systems around for configuration management. I hope this article shows the ease of setup for Ansible, which I believe is one of its strongest points. Please keep in mind that I was trying to show a lot of different ways to do things, and not everything above may be considered best practice in your private infrastructure or in the wider coding world. The official Ansible documentation and the many playbook examples published on GitHub are good places to take your knowledge of Ansible to the next level.

5 Awesome Open Source Cloning Software

$
0
0
http://www.cyberciti.biz/datacenter/5-awesome-open-source-cloning-software

Cloning is nothing but the copying of the contents of a server hard disk to a storage medium (another disk) or to an image file. Disk cloning is quite useful in modern data centers for:
  1. Full system backup.
  2. System recovery.
  3. Reboot and restore.
  4. Hard drive upgrade.
  5. Converting a physical server to virtual machine and more.
In this post, I'm going to list the Free and Open Source Software for Disk Imaging and Cloning that you can use for GNU/Linux, *BSD and Mac OS X desktop operating systems.

Clonezilla - One Partition and disk cloning program to rule them all

Clonezilla is a partition and disk imaging/cloning program similar to True Image and Norton Ghost. I frequently use Clonezilla software to do system deployment, bare-metal backup and recovery. Clonezilla live is good for single-machine backup and restore at home. Clonezilla SE is for massive deployment in a data center; it can clone many (40-plus!) computers simultaneously. Clonezilla saves and restores only used blocks on the hard disk, which increases the cloning efficiency. It supports the following file systems:
  1. ext2, ext3, ext4, reiserfs, xfs, jfs of GNU/Linux
  2. FAT, NTFS of MS Windows
  3. HFS+ of Mac OS
  4. UFS of BSD
  5. minix of Minix and VMFS of VMWare ESX.

=> Download Clonezilla

Redo Backup - Easy to use GUI based backup, recovery and restore for new users

Redo Backup and Recovery is a bootable Linux CD image with a GUI. It is capable of bare-metal backup and recovery of disk partitions. It can use external hard drives and network shares (NFS/CIFS) for storing images. Major features include:
  1. It can save and restore MS-Windows and Linux based servers/desktop systems.
  2. No installation needed; runs from a CD-ROM or a USB stick.
  3. Automatically finds local network shares.
  4. Access your files even if you can't log in.
  5. Recover deleted pictures, documents, and other files.
  6. Internet access with a full-featured browser to download drivers.

=> Download Redo backup

Fog - Perfect cloning solution for Microsoft shop

FOG is a Linux-based, free and open source computer imaging solution for Windows XP, Windows Vista, Windows 7, Windows 8, and Linux (limited) that ties together a few open-source tools with a php-based web interface. FOG doesn't use any boot disks, or CDs; everything is done via TFTP and PXE. Your PC boots via PXE and automatically downloads a small Linux client. From there you can select many activities on the PC, including imaging the hard drive. FOG supports multi-casting, meaning that you can image many PCs from the same stream. So it should be as fast whether you are imaging 1 PC or 40 PCs.

=> Download Fog

Mondo Rescue - Disaster recovery solution for enterprise users

Mondo is reliable disaster recovery software. It backs up your GNU/Linux server/desktop to tape, CD-R, CD-RW, DVD-R[W], DVD+R[W], NFS or a hard disk partition. Mondo is in use by Lockheed-Martin, Nortel Networks, Siemens, HP, IBM, NASA's JPL, the US Dept of Agriculture, dozens of smaller companies, and tens of thousands of users worldwide. It supports LVM 1/2, RAID, ext2, ext3, ext4, JFS, XFS, ReiserFS, VFAT, and can support additional filesystems easily. It supports software RAID as well as most hardware RAID controllers.
Mondo Rescue In Action

=> Download Mondo Rescue

dd and friends - The good ol' *nix utilities

Warning: dd/ddrescue/dcfldd are power tools. You need to understand what it does, and you need to understand some things about the machines it does those things to, in order to use it safely.
The dd command converts and copies a file. You can clone a hard disk "sda" to "sdb":
 
dd if=/dev/sda of=/dev/sdb bs=1M conv=noerror
 
To clone one partition to another:
 
dd if=/dev/sdc3 of=/dev/sdd3 bs=4096 conv=noerror
 

dcfldd: A fork of dd

dcfldd is an enhanced version of GNU dd with features useful for forensics and security. Here is an example of cloning a hard disk "sda" and storing it to an image called "/nfs/sda-image-server2.dd":
 
dcfldd if=/dev/sda hash=md5,sha256 hashwindow=10G md5log=md5.txt sha256log=sha256.txt \
hashconv=after bs=512 conv=noerror,sync split=10G splitformat=aa of=/nfs/sda-image-server2.dd
 
GNU ddrescue is a data recovery tool. It copies data from one file or block device (hard disc, cdrom, etc) to another, trying to rescue the good parts first in case of read errors.
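For instance, a first-pass rescue of a failing disk onto a healthy one, with a mapfile so the run can be interrupted and resumed, might look like this (the device names are placeholders for your actual source and destination):
 
# ddrescue -f -n /dev/sda /dev/sdb rescue.map
 
Here -f forces writing to a block device and -n skips the slow scraping phase so the easily readable areas are copied first.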
=> Download dcfldd and GNU ddrescue (GNU dd is part of the GNU core utilities and installed on most Unix-like systems)

Linux: Hide Processes From Other Users

$
0
0
http://www.cyberciti.biz/faq/linux-hide-processes-from-other-users

I run a multi-user system. Most users access resources using ssh client. How can I stop leaking process information to all users on Linux operating systems? How do I prevent users from seeing processes that do not belong to them on a Debian/Ubuntu/RHEL/CentOS Linux server?

If you are using Linux kernel version 3.2+ (or RHEL/CentOS v6.5+), you can hide processes from other users: only root can see all processes, while regular users see only their own. All you have to do is remount the /proc filesystem with the Linux kernel hardening hidepid option.
Tutorial details:
  Difficulty: Easy
  Root privileges: Yes
  Requirements: Linux kernel v3.2+
  Estimated completion time: 2 minutes

Say hello to hidepid option

This option defines how much info about processes we want to be available for non-owners. The values are as follows:
  1. hidepid=0 - The old behavior: anybody may read all world-readable /proc/PID/* files (default).
  2. hidepid=1 - Users may not access any /proc/PID/ directories but their own. Sensitive files like cmdline, sched*, and status are now protected against other users.
  3. hidepid=2 - Means hidepid=1 plus all /proc/PID/ directories will be invisible to other users. This complicates an intruder's task of gathering information about running processes, whether some daemon runs with elevated privileges, whether another user runs some sensitive program, whether other users run any program at all, and so on.

Linux kernel protection: Hiding processes from other users

Type the following mount command:
# mount -o remount,rw,hidepid=2 /proc
Edit /etc/fstab, enter:
# vi /etc/fstab
Update/append/modify the proc entry as follows so that the protection gets enabled automatically at server boot time:
 
proc /proc proc defaults,hidepid=2 0 0
 
Save and close the file.

Linux demo: Prevent users from seeing processes that do not belong to them

In this example, I'm logging in as vivek@cbz-test:
$ ssh vivek@cbz-test
$ ps -ef
$ sudo -s
# mount -o remount,rw,hidepid=2 /proc
$ ps -ef
$ top
$ htop

Sample outputs:
Animated gif 01: hidepid in action

Tip: Dealing with apps that break when you implement this technique

You need to use the gid=VALUE option:
gid=XXX defines a group that will be able to gather all processes' info (as in hidepid=0 mode). This group should be used instead of putting nonroot user in sudoers file or something. However, untrusted users (like daemons, etc.) which are not supposed to monitor the tasks in the whole system should not be added to the group.
So add the user called monapp to a group (say, admin) that is allowed to see process information, and mount /proc as follows in /etc/fstab:
proc /proc proc defaults,hidepid=2,gid=admin 0 0 
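Assuming the group is called admin and the monitoring application runs as the monapp user, the setup might look like this:
 
# groupadd admin
# usermod -aG admin monapp
# mount -o remount,rw,hidepid=2,gid=admin /proc
 
Members of the admin group can then see every process as before, while everyone else only sees their own.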

How to sniff HTTP traffic from the command line on Linux

$
0
0
http://xmodulo.com/2014/08/sniff-http-traffic-command-line-linux.html

Suppose you want to sniff live HTTP web traffic (i.e., HTTP requests and responses) on the wire for some reason. For example, you may be testing experimental features of a web server. Or you may be debugging a web application or a RESTful service. Or you may be trying to troubleshoot PAC (proxy auto config) or check for any malware files surreptitiously downloaded from a website. Whatever the reason is, there are cases where HTTP traffic sniffing is helpful, for system admins, developers, or even end users.
While packet sniffing tools such as tcpdump are popularly used for live packet dumps, you need to set up proper filtering to capture only HTTP traffic, and even then their raw output typically cannot be interpreted at the HTTP protocol level so easily. Real-time web server log parsers such as ngxtop provide human-readable real-time web traffic traces, but they are only applicable when you have full access to live web server logs.
What would be nice is a tcpdump-like sniffing tool that targets HTTP traffic only. In fact, httpry is exactly that: an HTTP packet sniffing tool. httpry captures live HTTP packets on the wire and displays their content at the HTTP protocol level in a human-readable format. In this tutorial, let's see how we can sniff HTTP traffic with httpry.

Install httpry on Linux

On Debian-based systems (Ubuntu or Linux Mint), httpry is not available in base repositories. So build it from the source:
$ sudo apt-get install gcc make git libpcap0.8-dev
$ git clone https://github.com/jbittel/httpry.git
$ cd httpry
$ make
$ sudo make install
On Fedora, CentOS or RHEL, you can install httpry with yum as follows. On CentOS/RHEL, enable EPEL repo before running yum.
$ sudo yum install httpry
If you still want to build httpry from the source on RPM-based systems, you can easily do that by:
$ sudo yum install gcc make git libpcap-devel
$ git clone https://github.com/jbittel/httpry.git
$ cd httpry
$ make
$ sudo make install

Basic Usage of httpry

The basic use case of httpry is as follows.
$ sudo httpry -i eth0
httpry then listens on a specified network interface, and displays captured HTTP requests/responses in real time.

In most cases, however, you will be swamped with the fast-scrolling output as packets come in and out, so you will want to save captured HTTP packets for offline analysis. For that, use either the '-b' or '-o' option. The '-b' option allows you to save raw HTTP packets into a binary file as is, which can then be replayed with httpry later. The '-o' option, on the other hand, saves human-readable output of httpry into a text file.
To save raw HTTP packets into a binary file:
$ sudo httpry -i eth0 -b output.dump
To replay saved HTTP packets:
$ httpry -r output.dump
Note that when you read a dump file with '-r' option, you don't need root privilege.
To save httpry's output to a text file:
$ sudo httpry -i eth0 -o output.txt

Advanced Usage of httpry

If you want to monitor only specific HTTP methods (e.g., GET, POST, PUT, HEAD, CONNECT, etc), use '-m' option:
$ sudo httpry -i eth0 -m get,head

If you downloaded httpry's source code, you will notice that the source code comes with a collection of Perl scripts which aid in analyzing httpry's output. These scripts are found in httpry/scripts/plugins directory. If you want to write a custom parser for httpry's output, these scripts can be good examples to start from. Some of their capabilities are:
  • hostnames: Display a list of unique host names with counts.
  • find_proxies: Detect web proxies.
  • search_terms: Find and count search terms entered in search services.
  • content_analysis: Find URIs which contain specific keywords.
  • xml_output: Convert output into XML format.
  • log_summary: Generate a summary of log.
  • db_dump: Dump log file data into a MySQL database.
Before using these scripts, first run httpry with '-o' option for some time. Once you obtained the output file, run the scripts on it at once by using this command:
$ cd httpry/scripts
$ perl parse_log.pl -d ./plugins
You may encounter warnings with several plugins. For example, db_dump plugin may fail if you haven't set up a MySQL database with DBI interface. If a plugin fails to initialize, it will automatically be disabled. So you can ignore those warnings.
After parse_log.pl has completed, you will see a number of analysis results (*.txt/xml) in the httpry/scripts directory. For example, log_summary.txt contains a summary of the logged traffic.

To conclude, httpry can be a life saver if you are in a situation where you need to interpret live HTTP packets. That might not be so common for average Linux users, but it never hurts to be prepared. What do you think of this tool?

Linux Performance Tools at LinuxCon North America 2014

$
0
0
http://www.brendangregg.com/blog/2014-08-23/linux-perf-tools-linuxcon-na-2014.html

This week I spoke at LinuxCon North America 2014 in Chicago, which was also my first LinuxCon. I really enjoyed the conference, and it was a privilege to take part and contribute. I'll be returning to work with some useful ideas from talks and talking with attendees.
I included my latest Linux performance observability tools diagram, which I keep updated on my website. But I was really excited to share some new diagrams, which are all in the slides.
I gave a similar talk two years ago at SCaLE11x, where I covered performance observability tools. This time, I covered observability, benchmarking, and tuning tools, providing a more complete picture of the performance tools landscape. I hope these help you in a similar way, when you move from observability to performing load tests with benchmarks, and finally tuning the system.
I also presented an updated summary on the state of tracing, after my recent discoveries with ftrace, which is able to serve some tracing needs in existing kernels. For more about ftrace, see my lwn.net article Ftrace: The hidden light switch, which was made open the same day as my talk.
At one point I included a blank template for observability tools (PNG).
My suggestion was to print this out and fill it in with whatever observability tools make most sense in your environment. This may include monitoring tools, both in-house and commercial, and can be supplemented by the server tools from my diagram above.
At Netflix, we have our own monitoring system to observe our thousands of cloud instances, and this diagram helps to see which Linux and server components it currently measures, and what can be developed next. (This monitoring tool also includes many application metrics.) As I said in the talk, we'll sometimes need to login to an instance using ssh, and run the regular server tools.
This diagram may also help you develop your own monitoring tools, by showing what would ideally be observed. It can also help rank commercial products: next time a salesperson tells you their tool can see everything, hand them this diagram and a pen. :-)
My talk was standing room only, and some people couldn't get in the room and missed out. Unfortunately, it wasn't videoed, either. Sorry, I should have figured this out sooner and arranged something in time. Given how popular it was, I suspect I'll give it again some time, and will hopefully get it on video.
Thanks to those who attended, and the Linux Foundation for having me and organizing a great event!

jBilling tutorial – an open source billing platform

$
0
0
http://www.linuxuser.co.uk/tutorials/jbilling-tutorial-an-open-source-billing-platform

Discover jBilling and make managing invoices, payments and billing simple and stress-free


A lot more people are taking up the entrepreneurial route these days. To the uninitiated it looks very easy; you are your own boss and can do whatever you wish. But someone who has already taken the plunge knows that being an entrepreneur is a lot tougher – whether working as a freelancer or the founder of a start-up, you will almost always find yourself donning several hats. While managing everything is relatively easy when you are small, it can become a daunting task to manage things when you start growing rapidly. Multitasking becomes a real skill as you negotiate with clients, send proposals and work on current assignments. With all this chaos, you certainly don’t want to miss out on payments – after all, that’s what you’re working for!
Today we introduce jBilling, which can help you manage the most important aspect of your business – the income. This is not the typical invoice management kind of tool, rather a full-fledged platform with several innovative features. jBilling helps you manage invoices, track payments, bill your customers and more with little effort on your behalf – just what you want when juggling responsibilities.
In this tutorial we will first cover the necessary steps to install and set up jBilling before having a closer look at the various features that can help you manage your business better. We have used the latest stable community edition of jBilling, version 3.1.0, for demo purposes in this article.
The main menu bar gives you access to all the pages you’ll use most frequently

Step-by-step

Step 01 Installation
jBilling is integrated with the web server out of the box, which helps make the installation process straightforward. Just unzip the downloaded zip file to a folder (where you want the installation to be done), e.g. ‘my_jBilling’. Open the command prompt and navigate to the folder /path/my_jBilling/bin. Assign executable permissions to all the shell script files with the command chmod +x *.sh. Also, remember to set the JAVA_HOME variable with your Java home path. You can then start jBilling by running ./startup.sh. This completes the installation process – note that the process may differ slightly depending on the OS you use. As the startup.sh script executes, the command prompt shows five lines of logs indicating a successful start. You can then access jBilling via your browser at http://localhost:8080/jbilling and log in with the credentials admin/123qwe. You can also access http://localhost:8080/jbilling/signup to create your new signup.
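As a recap of the commands involved, the sequence looks roughly like this (the zip file name and the Java path are placeholders for your actual download and JVM):
 
$ unzip jbilling-community-3.1.0.zip -d my_jBilling
$ cd my_jBilling/bin
$ chmod +x *.sh
$ export JAVA_HOME=/usr/lib/jvm/java
$ ./startup.sh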
Step 02 Customers
No one wants to add a customer’s detail to the system every single time an invoice is sent to them! It is generally a good idea to keep the details of your customer with you and that’s precisely what jBilling lets you do – simply click on the ‘Customer’ button on the main menu to go to the customer page. Here you can view all the details related to the customer – but before that, you need to add a customer. To do so, click on the ‘Add New’ button and then fill in all of the relevant details. Note that once you add a customer, a separate login for the customer is also created and they can then log in to your jBilling system and manage their account as well (to make payments, view invoices and so on). This may seem trivial for smaller organisations with a smaller number of customers, but if you have a huge customer base and would like customers to handle payments themselves, you will definitely like this feature.
Step 03 Products
Besides customers, the other important aspect of a business is what you sell – your products or services. Handling your products in jBilling is nice and straightforward. Simply click on the ‘Products’ button to go to the products page. To add a new product here, you must add product categories first – click on the ‘Add Category’ button to do that. After the category is created, select it to add new products to that particular category or view all the products within it. Once you have all your products listed in the system, you can use them to create orders, invoices and so on.
Step 04 Orders
Before serving your customer you need an order from them. jBilling lets you handle orders in a way that closely resembles real-world scenarios. Clicking on the ‘Orders’ link on the main menu will take you to the orders page where you can view a list of all the orders received up to now. At this point you may be puzzled; unlike other pages there is no button to create an order here. To create an order you must first navigate to the particular customer you plan to create it for (in the customer page) and then click the ‘Create Order’ button (located below the customer details). This arrangement makes sure that there is tight coupling between an order and related customer. Once the order is created you can see it in the Order page. You can then edit orders to add products or create invoices out of it.
Step 05 Invoices
We have tight coupling with customers and orders, so it makes sense that invoices in jBilling should be related to an order too. So, to create an invoice you need to go to the order for which you are raising the invoice and click the ‘Generate Invoice’ button. The invoice is then created – note that you can even apply other orders to an invoice (if it hasn’t been paid). Also, an order can’t be used to generate an invoice if an earlier invoice (related to it) has already been paid. Having generated the invoice, you can send it via email or download it as a PDF. You may find that you want to change the invoice logo – but we’ll get to configuration and customisation later on. We will also see in later steps about how the payments related to an invoice can also be tracked.
Invoices
Step 06 Billing
Billing is the feature that helps you automate the whole process of invoicing and payments. It can come in handy for businesses with a subscription model or other cases where customers are charged in a recurring manner. To set up the billing process, you need to go to the Configuration page first. Once you are on the page, click on ‘Billing Process’ on the left-hand menu bar to set the date and other parameters. With the parameters set, the billing process runs automatically and shows a representation of the invoices. This output (invoices) needs to be approved by the admin – only once this has happened do the real invoices get generated and delivered to the customer. The customers (whose payments are not automatic) can then pay their bills with their own logins.
Step 07 Payments
Any payment made for an invoice is tracked on the Payments page, where you can view a list of all the payments already taken care of. To create a new payment, you need to select the customer (for whom payment is being made) on the Customer page and then click the ‘Make Payment’ button at the very bottom (next to the ‘Create Order’ button). This takes you to a page with details of all the paid/unpaid invoices (raised for that customer). Just select the relevant invoice and fill up the details of payment method to complete the payment process. Later, if there is a need to edit the payment details, you need to unlink the invoice before editing the details.
Step 08 Partners
Partners – for example, any affiliate marketing partners for an eCommerce website – are people or organisations that help your business grow. They are generally paid a mutually agreed percentage of the revenue they bring in. jBilling helps you manage partners in an easy, automated way. Click on the Partners link on the homepage to reach the Partners page and set about adding a new partner. Here you will need to fill in the details related to percentage rate, referral fee, payout date and period and so on. Now whenever a new customer is added (with the Partner ID field filled in), the relevant partner becomes entitled to the commission percentage (as set when adding the partner) and the jBilling system keeps track of the partner’s due payment. Note that, as with customers, partners also get their own login once you add their details to jBilling. It is up to you to give them the login access, though.
Step 09 Reports
The reporting engine of jBilling lets you have a bird’s-eye view of what’s going on with your company’s accounts. Click on the Reports link on the main menu; here there are four report types available – invoice, order, payment and customer. You can select one to reveal the different reports available inside that type. After a report is selected, you can see a brief summary of what the report is supposed to show. Set the end date and then click on the ‘Run Report’ button to run the report. Having done this, the system shows you the output. You can also change the output format to PDF, Excel or HTML.
Reports
Step 10 Configuration
The configuration page lets you fine-tune your jBilling installation settings. Click on the Configuration link and you will see a list of settings available on the left menu bar. The links are somewhat self-explanatory but we’ll run through the more useful ones. The Billing Process link allows you to set the billing run parameters. You can change the invoice logo using the Invoice Display setting. To add new users, simply click on the ‘Users’ link. To set the default currency or add a new currency to the system, click on the ‘Currencies’ link. You can even blacklist customers under the ‘Blacklist’ link. You will find many more settings to customise jBilling as per your tastes and requirements – just keep exploring and make jBilling work for you.

Security Hardening with Ansible

$
0
0
http://www.linuxjournal.com/content/security-hardening-ansible

Ansible is an open-source automation tool developed and released by Michael DeHaan and others in 2012. DeHaan calls it a "general-purpose automation pipeline" (see Resources for a link to the article "Ansible's Architecture: Beyond Configuration Management"). Not only can it be used for automated configuration management, but it also excels at orchestration, provisioning of systems, zero-time rolling updates and application deployment. Ansible can be used to keep all your systems configured exactly the way you want them, and if you have many identical systems, Ansible will ensure they stay identical. For Linux system administrators, Ansible is an indispensable tool in implementing and maintaining a strong security posture.
Ansible can be used to deploy and configure multiple Linux servers (Red Hat, Debian, CentOS, OS X, any of the BSDs and others) using secure shell (SSH) instead of the more common client-server methodologies used by other configuration management packages, such as Puppet and Chef (Chef does have a solo version that does not require a server, per se). Utilizing SSH is a more secure method because the traffic is encrypted. The secure shell transport layer protocol is used for communications between the Ansible server and the target hosts. Authentication is accomplished using Kerberos, public-key authentication or passwords.
When I began working in system administration some years ago, a senior colleague gave me a simple formula for success. He said, "Just remember, automate, automate, automate." If this is true, and I believe it is, then Ansible can be a crucial tool in making any administrator's career successful. If you do not have a few really good automation tools, every task must be accomplished manually. That wastes a lot of time, and time is precious. Ansible makes it possible to manage many servers almost effortlessly.
Ansible uses a very simple method called playbooks to orchestrate configurations. A playbook is a set of instructions written in YAML that tells the Ansible server what "plays" to carry out on the target hosts. YAML is a very simple, human-readable markup language that gives the user fine granularity when setting up configuration schemes. It is installed, along with Ansible, as a dependency. Ansible uses YAML because it is much easier to write than common data formats, like JSON and XML. The learning curve for YAML is very low, hence proficiency can be gained very quickly. For example, the simple playbook shown in Figure 1 keeps the Apache RPM on targeted Web servers up to date and current.
Figure 1. Example Playbook That Will Upgrade Apache to the Latest Version
From the Ansible management server, you can create a cron job to push the playbook to the target hosts on a regular basis, thus ensuring you always will have the latest-and-greatest version of the Apache Web server.
Using YAML, you can instruct Ansible to target a specific group of servers, the remote user you want to run as, tasks to assign and many other details. You can name each task, which makes for easier reading of the playbook. You can set variables, and use loops and conditional statements. If you have updated a configuration file that requires restarting a service, Ansible uses tasks called handlers to notify the system that a service restart is necessary. Handlers also can be used for other things, but this is the most common.
The ability to reuse certain tasks from previously written playbooks is another great feature. Ansible uses a mechanism called roles to accomplish this. Roles are organizational units that are used to implement a specific configuration on a group of hosts. A role can include a set of variable values, handlers and tasks that can be assigned to a host group, or hosts corresponding to specific patterns. For instance, you could create a role for installing and configuring MySQL on a group of targeted servers. Roles make this a very simple task.
Besides intelligent automation, you also can use Ansible for ad hoc commands to contact all your target hosts simultaneously. Ad hoc commands can be performed on the command line. It is a very quick method to use when you want to see a specific type of output from all your target machines, or just a subset of them. For example, if you want to see the uptime for all the hosts in a group called dbservers, you would type, as user root:

# ansible dbservers -a /usr/bin/uptime
The output will look like Figure 2.
Figure 2. Example of ad hoc Command Showing Uptime Output for All Targets
If you want to specify a particular user, use the command in this way:

# ansible dbservers -a /usr/bin/uptime -u username
If you are running the command as a particular user, but want to act as root, you can run it through sudo and have Ansible ask for the root password:

# ansible dbservers -a /usr/bin/uptime -u username --sudo --ask-sudo-pass
You also can switch to a different user by using the -U option:

# ansible dbservers -a /usr/bin/uptime -u username -U otheruser --sudo --ask-sudo-pass
Occasionally, you may want to run the command with 12 parallel forks, or processes:

# ansible dbservers -a /usr/bin/uptime -f 12
This will get the job done faster by using 12 simultaneous processes, instead of the default value of 5. If you would like to set a permanent default for the number of forks, you can set it in the Ansible configuration file, which is located in /etc/ansible/ansible.cfg.
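For example, the setting lives in the [defaults] section of that file:

[defaults]
forks = 12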
It also is possible to use Ansible modules in ad hoc mode by using the -m option. In this example, Ansible pings the target hosts using the ping module:

# ansible dbservers -m ping
Figure 3. In this example, Ansible pings the target hosts using the ping module.
As I write this, Michael DeHaan has announced that, in a few weeks, a new command-line tool will be added to Ansible version 1.5 that will enable the encrypting of various data within the configuration. The new tool will be called ansible-vault. It will be implemented by using the new --ask-vault-pass option. According to DeHaan, anything you write in YAML for your configuration can be encrypted with ansible-vault by using a password.
Server security hardening is crucial to any IT enterprise. We must face the fact that we are protecting assets in what has become an informational war-zone. Almost daily, we hear of enterprise systems that have fallen prey to malevolent individuals. Ansible can help us, as administrators, protect our systems. I have developed a very simple way to use Ansible, along with an open-source project called Aqueduct, to harden RHEL6 Linux servers. These machines are secured according to the standards formulated by the Defense Information Systems Agency (DISA). DISA publishes Security Technical Implementation Guides (STIGs) for various operating systems that provide administrators with solid guidelines for securing systems.
In a typical client-server setup, the remote client dæmon communicates with a server dæmon. Usually, this communication is in the clear (not encrypted), although Puppet and Chef have their own proprietary mechanisms to encrypt traffic. The implementation of public-key authentication (PKI) in SSH has been well vetted for many years by security professionals and system administrators. For my purposes, SSH is strongly preferred. Typically, there is a greater risk in using proprietary client-server dæmons than using SSH. They may be relatively new and could be compromised by malevolent individuals using buffer-overflow attack strategies or denial-of-service attacks. Any time we can reduce the total number of services running on a server, it will be more secure.
To install the current version of Ansible (1.4.3 at the time of this writing), you will need Python 2.4 or later and the Extra Packages for Enterprise Linux (EPEL) repository RPM. For the purposes of this article, I use Ansible along with another set of scripts from an open-source project called Aqueduct. This is not, however, a requirement for Ansible. You also will need to install Git, if you are not already using it. Git will be used to pull down the Aqueduct package.
Vincent Passaro, Senior Security Architect at Fotis Networks, pilots the Aqueduct project, which consists of the development of both bash scripts and Puppet manifests. These are written to deploy the hardening guidelines provided in the STIGs. Also included are CIS (Center for Internet Security) benchmarks and several others. On the Aqueduct home page, Passaro says, "Content is currently being developed (by me) for the Red Hat Enterprise Linux 5 (RHEL 5) Draft STIG, CIS Benchmarks, NISPOM, PCI", but I have found RHEL6 bash scripts there as well. I combined these bash scripts to construct a very basic Ansible playbook to simplify security hardening of RHEL6 systems. I accomplished this by using the included Ansible module called script.
According to the Ansible documentation, "The script module takes the script name followed by a list of space-delimited arguments. The local script at path will be transferred to the remote node and then executed. The given script will be processed through the shell environment on the remote node. This module does not require Python on the remote system, much like the raw module."
Ansible modules are tiny bits of code used for specific purposes by the API to carry out tasks. The documentation states, "Ansible modules are reusable units of magic that can be used by the Ansible API, or by the ansible or ansible-playbook programs." I view them as being very much like functions or subroutines. Ansible ships with many modules ready for use. Administrators also can write modules to fit specific needs using any programming language. Many of the Ansible modules are idempotent, which means they will not make a change to your system if a change does not need to be made. In other words, it is safe to run these modules repeatedly without worrying they will break things. For instance, running a playbook that sets permissions on a certain file will, by default, update the permissions on that file only if its permissions differ from those specified in the playbook.
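As a small illustration of such an idempotent task (the path and mode here are just an example, not a specific STIG requirement), consider a play fragment like the following:

    - name: ensure a restrictive mode on /etc/shadow
      file: path=/etc/shadow owner=root group=root mode=0000

Run it twice and the second run reports no change, because the file module only acts when the current state differs from the requested one.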
For my needs, the script module works perfectly. Each Aqueduct bash script corresponds to a hardening recommendation given in the STIG. The scripts are named according to the numbered sections of the STIG document.
In my test environment, I have a small high-performance compute cluster consisting of one management node and ten compute nodes. For this test, the SSH server dæmon is configured for public-key authentication for the root user. To install Ansible on RHEL6, the EPEL repository must first be installed. Download the EPEL RPM from the EPEL site (see Resources).
Then, install it on your management node:

# rpm -ivh epel-release-6-8.noarch.rpm
Now, you are ready to install Ansible:

# yum install ansible
Ansible's main configuration file is located in /etc/ansible/ansible.cfg. Unless you want to add your own customizations, you can configure it with the default settings.
Now, create a directory in /etc/ansible called prod. This is where you will copy the Aqueduct STIG bash scripts. Also, create a directory in /etc/ansible called plays, where you will keep your Ansible playbooks. Create another directory called manual-check. This will hold scripts with information that must be checked manually. Next, a hosts file must be created in /etc/ansible. It is simply called hosts. Figure 4 shows how I configured mine for the ten compute nodes.
Figure 4. The /etc/ansible/hosts File for My Test Cluster
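The figure itself is not reproduced in this text version; an inventory along the lines described below might look something like this, where the addresses are placeholders and [1:8] uses Ansible's inventory range syntax:

[hosts]
10.0.0.[1:8]

[gpus]
10.0.0.9
10.0.0.10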
Eight of the compute nodes are typical nodes, but two are equipped with GPGPUs, so there are two groups: "hosts" and "gpus". Provide the IP address of each node (the host name also can be given if your DNS is set up properly). With this tiny bit of configuration, Ansible is now functional. To test it, use Ansible in ad hoc mode and execute the following command on your management node:

# ansible all -m ping
If this results in a "success" message from each host, all is well.
The Aqueduct scripts must be downloaded using Git. If you do not have this on your management node, then:

# yum install git 
Git "is a distributed revision control and source code management (SCM) system with an emphasis on speed" (Wikipedia). The command-line for acquiring the Aqueduct package of scripts and manifests goes like this:
# git clone git://git.fedorahosted.org/git/aqueduct.git
This will create a directory under the current directory called aqueduct. The bash scripts for RHEL6 are located in aqueduct/compliance/bash/stig/rhel-6/prod. Now, copy all scripts therein to /etc/ansible/prod. There are some other aspects of the STIG that will need to be checked by either running the scripts manually or reading the script and performing the required actions. These scripts are located in aqueduct/compliance/bash/stig/rhel-6/manual-check. Copy these scripts to /etc/ansible/manual-check.
Now that the scripts are in place, a playbook must be written to deploy them on all target hosts. Copy the playbook to /etc/ansible/plays. Make sure all scripts are executable. Figure 5 shows the contents of my simple playbook called aqueduct.yml.
Figure 5. My Simple Playbook to Execute STIG Scripts on All Targets
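The figure is not reproduced in this text version; a minimal sketch of such a playbook, calling the Aqueduct scripts through the script module with placeholder file names, might look like this:

---
- hosts: all
  remote_user: root
  tasks:
    - name: run the first STIG hardening script
      script: /etc/ansible/prod/stig-check-1.sh
    - name: run the next STIG hardening script
      script: /etc/ansible/prod/stig-check-2.sh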
On a few of the STIG scripts, a few edits were needed to get them to execute correctly. Admittedly, a more eloquent solution would be to replace the STIG scripts by translating them into customized Ansible modules. For now, however, I am taking the easier route by calling the STIG scripts as described from my custom Ansible playbook. The script module makes this possible. Next, simply execute the playbook on the management node with the command:
# ansible-playbook aqueduct.yml
This operation takes about five minutes to run on my ten nodes, with the understanding that the plays run in parallel on the target hosts. Ansible produces detailed output that shows the progress of each play and host. When Ansible finishes running the plays, all of the target machines should be identically hardened, and a summary is displayed. In this case, everything ran successfully.
Figure 6. Output Showing a Successful STIG Playbook Execution
For system security hardening, the combination of Ansible and Aqueduct is a powerfully productive force in keeping systems safe from intruders.
If you've ever worked as a system administrator, you know how much time a tool like this can save. The more I learn about Ansible, the more useful it becomes. I am constantly thinking of new ways to implement it. As my system administration duties drift more toward using virtual technologies, I plan on using Ansible to provision and manage my virtual configurations quickly. I am also looking for more avenues to explore in the way of managing high-performance computing systems, since this is my primary duty. Michael DeHaan has developed another tool called Cobbler, which is excellent for taking advantage of Red Hat's installation method, Kickstart, to build systems quickly. Together, Cobbler and Ansible create an impressive arsenal for system management.
As system administrators, we are living in exciting times. Creative developers are inventing an amazing array of tools that not only make our jobs easier, but also more fun. I can only imagine what the future may hold. One thing is certain: we will be responsible for more and more systems. This is due to the automation wizardry of technologies like Ansible that enable a single administrator to manage hundreds or even thousands of servers. These tools will only improve, as they have continued to do. As security continues to become more and more crucial, their importance will only increase.

Resources

Ansible's Architecture: Beyond Configuration Management: http://blog.ansibleworks.com/2013/11/29/ansibles-architecture-beyond-configuration-management
Michael DeHaan's Blog: http://michaeldehaan.net
Git Home: http://git-scm.com
Aqueduct Home: http://www.vincentpassaro.com/open-source-projects/aqueduct-red-hat-enterprise-linux-security-development
Ansible Documentation: http://docs.ansible.com/index.html
EPEL Repository Home: https://fedoraproject.org/wiki/EPEL
DISA RHEL6 STIG: http://iase.disa.mil/stigs/os/unix/red_hat.html