Channel: Sameh Attia

Check The Number Of MySQL Open Database Connections on Linux Or Unix-like Server

http://www.cyberciti.biz/faq/howto-show-mysql-open-database-connections-on-linux-unix

I'm a new MySQL server user. My server is running CentOS Linux. How can I check the number of active MySQL connections on a Linux based system?

You can use the following commands on Linux or Unix-like systems:
Tutorial details
Difficulty: Easy
Root privileges: No
Requirements: None
Estimated completion time: 1 minute
a) mysqladmin status command
b) MySQL show status command
c) netstat or ss commands

mysqladmin status command example

Open the terminal app or log in to the remote server using ssh:
 
ssh vivek@server1.cyberciti.biz
 
Type the following command to get a short status message from the MySQL server:
 
mysqladmin status
## OR ##
mysqladmin status -u root -p
## OR ##
mysqladmin status -h db1.cyberciti.biz -u root -p
 
Sample outputs:
Uptime: 691356  Threads: 5  Questions: 83237956  Slow queries: 102736  Opens: 3585  Flush tables: 1  Open tables: 1019  Queries per second avg: 120.398
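The Threads figure in this output is the number of currently open connections. If you just want that single counter, mysqladmin's extended-status command can be filtered with grep; a small sketch:

mysqladmin -u root -p extended-status | grep -w 'Threads_connected'

This should print a single table row such as | Threads_connected | 5 |.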

MySQL show status command example to see open database connections

First, connect to your MySQL server:
 
mysql -u root -p
 
Type the following SQL query to see the number of connection attempts to the MySQL server (this includes both failed and successful connection attempts):
mysql> show status like 'Conn%';
Sample outputs:
Fig.01: "show status like 'Conn%';" in action

You can use the following SQL command at the mysql> prompt to see the number of currently open connections:
mysql> show status like '%onn%';
+--------------------------+---------+
| Variable_name            | Value   |
+--------------------------+---------+
| Aborted_connects         | 7       |
| Connections              | 6304067 |
| Max_used_connections     | 85      |
| Ssl_client_connects      | 0       |
| Ssl_connect_renegotiates | 0       |
| Ssl_finished_connects    | 0       |
| Threads_connected        | 7       | <---- Number of currently open connections
+--------------------------+---------+
7 rows in set (0.00 sec)
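If you would rather fetch just that one row instead of everything matching '%onn%', SHOW STATUS also accepts a WHERE clause (standard MySQL syntax):

mysql> show status where Variable_name = 'Threads_connected';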


Use the show processlist SQL command to see the number of open connections


Type the following SQL command at the mysql> prompt to see the number of currently open connections:

mysql> show processlist;
+---------+------------+-------------------+------------+---------+------+-------+------------------+
| Id      | User       | Host              | db         | Command | Time | State | Info             |
+---------+------------+-------------------+------------+---------+------+-------+------------------+
| 6297128 | root       | localhost         | NULL       | Query   |    0 | NULL  | show processlist |
| 6308321 | faqwpblogu | 10.10.29.66:42945 | lesaibkfaq | Sleep   |    1 |       | NULL             |
| 6308323 | faqwpblogu | 10.10.29.74:46993 | lesaibkfaq | Sleep   |    0 |       | NULL             |
| 6308325 | faqwpblogu | 10.10.29.74:46995 | lesaibkfaq | Sleep   |    1 |       | NULL             |
| 6308326 | faqwpblogu | 10.10.29.74:46996 | lesaibkfaq | Sleep   |    0 |       | NULL             |
+---------+------------+-------------------+------------+---------+------+-------+------------------+
5 rows in set (0.00 sec)
The above output indicates four currently open connections for the user 'faqwpblogu', coming from app servers located at 10.10.29.66 and 10.10.29.74.
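If you prefer a plain number over a listing, MySQL 5.1 and later also expose the process list as a table in information_schema, so you can count connections with an ordinary query; a sketch:

mysql> SELECT COUNT(*) FROM information_schema.PROCESSLIST;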

MySQL show status sql command summary

I suggest that you read the following pages for more info:
  1. SHOW STATUS Syntax
  2. Server Status Variables

Use netstat or ss (Linux only) command to list open database connections

The syntax is as follows for netstat command or ss command:
 
netstat -nat | grep 10.10.29.68:3306
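On a modern Linux box, the ss equivalent looks like this (3306 is MySQL's default port; adjust it if your server listens elsewhere); append | wc -l for a quick count, minus one for the header line:

ss -t state established '( sport = :3306 )'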
 
This will just give you an overview. I suggest that you use the above SQL commands instead.

Here is How I Built my First RPM

http://techarena51.com/index.php/build-rpm-without-breaking-head

I was building an RPM package for Tengine, the fork of Nginx with dynamic module loading. As usual, since there was no decent tutorial, I decided to write my own.
Warning: DO NOT try this as root, for obvious reasons.
First, install the necessary packages.
sudo yum install rpm-build

sudo yum install redhat-rpm-config
Create the rpmbuild directories
[userid@hostname ~]$ mkdir -p ~/rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
Beware: this next command will overwrite an existing .rpmmacros file if it exists, so check that you don't already have one before continuing.

[userid@hostname ~]$ echo '%_topdir %(echo $HOME)/rpmbuild' > ~/.rpmmacros
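You can quickly verify that rpmbuild picked up the new top directory (the macro should now expand to the rpmbuild directory in your home):

rpm --eval '%{_topdir}'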

Explanation of these directories as per rpm.org is as follows.
/usr/src/redhat/SOURCES — Contains the original sources, patches, and icon files.
/usr/src/redhat/SPECS — Contains the spec files used to control the build process.
/usr/src/redhat/BUILD — The directory in which the sources are unpacked, and the software is built.
/usr/src/redhat/RPMS — Contains the binary package files created by the build process.
/usr/src/redhat/SRPMS — Contains the source package files created by the build process.
Add the source or TAR file in the SOURCES directory.
Go to the SPECS directory to create your spec file.
The spec file is where you will need to add all the details of the package that needs to be installed, from the files that need to be installed to the version of your package.
When you are creating a SPEC file for the first time, vim or emacs will automatically create a template for you:
vim tengine.spec
Below is the template
Name: Tengine
Version: 1.5.1
Release: 1%{?dist}
Summary: Tengine web server forked out of Nginx

Group: Applications/Internet
License: open BSD license
URL: http://tengine.taobao.org/download.html
Source0: tengine-1.5.1.tar.gz

#BuildRequires:
#Requires:

%description
Tengine by Taobao, which enables DSO support for Nginx

%prep
%setup -n tengine-1.5.1

%build
%_configure
make %{?_smp_mflags}

%install
rm -rf $RPM_BUILD_ROOT
make install DESTDIR=$RPM_BUILD_ROOT

%clean
rm -rf $RPM_BUILD_ROOT

%files
%defattr(-,root,root,-)
%doc LICENSE
%doc README

%config(noreplace) /usr/local/nginx/conf/*
/usr/local/nginx/html/
/usr/local/nginx/sbin/nginx
/usr/local/nginx/sbin/dso_tool

%changelog

The first part, with the name, version, etc., is pretty much self-explanatory, so I am not going to go into it.
Anything prefixed with % is a macro. Macros can be used to set a variable, and there are a few that are already set. To see these macros, look in /usr/lib/rpm/macros or /usr/share/doc/rpm-[version]/macros, where [version] is the version of rpm. The value of a macro is returned by putting the name of the macro in curly braces: %{ }.
%description : Add a short description of your package.
%prep : Here is where your file will be untarred. Just use %setup here (with -n, as above, if the tarball unpacks into a differently named directory) and it will untar it.
%build : This is where your file is built. %_configure will run ./configure in the tengine directory, and you need to add "make %{?_smp_mflags}" to compile the software (the flag enables parallel builds where available).
%install : Where your software is installed with make install
%files : The list of files that will be installed.
Once your spec file is ready, run the command below:
rpmbuild -ba tengine.spec
This will generate the rpm file in the RPMS directory.
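Before installing, it's worth listing the payload of the freshly built package; the exact file name below is hypothetical and will vary with your architecture and release tag:

rpm -qpl ~/rpmbuild/RPMS/x86_64/Tengine-1.5.1-1.el6.x86_64.rpm
sudo yum localinstall ~/rpmbuild/RPMS/x86_64/Tengine-1.5.1-1.el6.x86_64.rpm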
I have added my RPM on GitHub. Feel free to fork it; I would love to hear input from others on this.
If you have a small shell script that you would like to manage via rpm, then you can check out rpmwand. Just remember that this is for packages which do not have a compile process.
Update: The good folks at Reddit suggested fpm; you may want to check that out as well.
Source
http://rpmbuildtut.wordpress.com/getting-started/
http://fedoraproject.org/wiki/How_to_create_an_RPM_package

How To Extract Tar Files To a Different Directory on Linux/Unix-like Systems

http://www.cyberciti.biz/faq/howto-extract-tar-file-to-specific-directory-on-unixlinux

I want to extract a tar file to a specific directory called /tmp/data. How can I extract a tar archive to a different directory using the tar command on Linux or Unix-like systems?

You do not need to change directories with the cd command before extracting files. Untarring a file can be done using the following syntax:
Tutorial details
Difficulty: Easy
Root privileges: No
Requirements: tar
Estimated completion time: 1 minute

Syntax

Typical Unix tar syntax:
tar -xf file.name.tar -C /path/to/directory
GNU/tar syntax:
tar xf file.tar -C /path/to/directory
tar xf file.tar --directory /path/to/directory
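If you are unsure what is inside an archive, you can list its contents first without extracting anything:

tar -tf file.name.tar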

Example: Extract files to another directory

In this example, I'm extracting the $HOME/etc.backup.tar file to a directory called /tmp/data. First, you have to create the directory manually, enter:
 
mkdir /tmp/data
 
To extract the tar archive $HOME/etc.backup.tar into /tmp/data, enter:
 
tar -xf $HOME/etc.backup.tar -C /tmp/data
 
To see progress, pass the -v option:
 
tar -xvf $HOME/etc.backup.tar -C /tmp/data
 
Sample outputs:
Gif 01: tar Command Extract Archive To Different Directory Command

You can extract specific files too. Note that with GNU tar, -C must come before the names of the files you want:
 
tar -xvf $HOME/etc.backup.tar -C /tmp/data file1 file2 file3 dir1
 
To extract a foo.tar.gz (.tgz extension file) tarball to /tmp/bar, enter:
 
mkdir /tmp/bar
tar -zxvf foo.tar.gz -C /tmp/bar
 
To extract a foo.tar.bz2 (.tbz, .tbz2 & .tb2 extension file) tarball to /tmp/bar, enter:
 
mkdir /tmp/bar
tar -jxvf foo.tar.bz2 -C /tmp/bar
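The same pattern holds for xz-compressed tarballs (.tar.xz or .txz extension), which GNU tar handles with the -J flag:

mkdir /tmp/bar
tar -Jxvf foo.tar.xz -C /tmp/bar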
 
See tar command man page for more information.

Get back your privacy and control over your data in just a few hours: build your own cloud for you and your friends

https://www.howtoforge.com/tutorial/build-your-own-cloud-on-debian-wheezy

40'000+ searches over 8 years! That's my Google Search history. How about yours? (you can find out for yourself here) With so many data points across such a long time, Google has a very precise idea of what you've been interested in, what's been on your mind, what you are worried about, and how that all changed over the years since you first got that Google account.

Some of the most personal pieces of your identity are stored on servers around the world beyond your control

Let's say you've been a Gmail user between 2006 and 2013 like me, meaning you received 30'000+ emails and wrote about 5000 emails over that 7 year period. Some of the emails you sent or received are very personal, maybe so personal that you probably wouldn't like even some family members or close friends to go through them systematically. Maybe you also drafted a few emails that you never sent because you changed your mind at the last minute. But even if you never sent them, these emails are still stored somewhere on a server. As a result, it's fair to say that Google servers know more about your personal life than your closest friends or your family.

Statistically, it's a safe bet to consider that you've got a smartphone. You can barely use the phone without using the contacts app, which stores your contacts in Google Contacts on Google servers by default. So not only does Google know about your emails, but also about your offline contacts: who you like to call, who calls you, whom you text, and what you text them about. You don't have to take my word for it; you can verify for yourself by taking a look at the permissions you gave apps such as the Google Play Service to read the list of people that called you and the SMS you got. Do you also use the calendar app that comes with your phone? Unless you explicitly opted out while setting up your calendar, this means that Google knows precisely what you're up to, at every time of the day, day after day, year after year. The same applies if you chose an iPhone over an Android phone, except Apple gets to know about your correspondence, contacts and schedule instead of Google.

Do you also take great care to keep the contacts in your directory up-to-date, updating your friends', colleagues', and family's email addresses and phone numbers when they move to a new job or change carrier? That gives Google an extraordinarily accurate, up-to-date picture of your social network. And you love the GPS of your smartphone, which you use a lot together with Google Maps.

This means Google not only knows what you do from your calendar but also where you are, where you live, and where you work. And by correlating GPS location data across users, Google can also tell with whom you may be socializing right now.

Your daily habit of handing out your most personal information will impact your life in a way that no one can even foresee

To summarize, if you are an average internet user, Google has up-to-date, in-depth information about your interests, worries, passions, questions, over almost 10 years. It has a collection of some of your most personal messages (emails, SMS), an hour-by-hour detail of your daily activities and location, and a high-quality picture of your social network. Such an intimate knowledge of you likely goes beyond what your closest friends, family, or your sweetheart know of you.

It would never occur to you to give this mass of deeply personal information to complete strangers, for instance by putting it all on a USB key and leaving it on a table in a random cafe with a note saying 'Personal data of Olivier Martin, use as you please'. Who knows who might find it and what they would do with it? Yet, we have no problem handing core pieces of our identity to strangers at IT companies with a strong interest in our data (that's how they make their bread) who are world-class experts in data analysis, perhaps just because it happens by default without us thinking about it when we hit that green 'Accept' button.

With so much high-quality information, over the years, Google may well get to know you better than you can ever hope to know yourself: heck, crawling through my digital past right now, I can't remember having written half of the emails I sent five years ago. I am surprised and pleased to rediscover my interest in marxism back in 2005 and my joining ATTAC (an organization which strives to limit speculation and improve social justice by taxing financial transactions) the next year.

And god knows why I was so much into dancing shoes back in 2007. This is pretty harmless information (you wouldn't have expected me to reveal something embarrassing here, would you? ;-). But by connecting the dots between high-quality data over different aspects of your life (what, when, with whom, where, ...) over such time spans, one may extrapolate predictive statements about you.

For instance, from the shopping habits of a 17-year-old girl, supermarkets can tell that she is pregnant before her dad even hears about it (true story). Who knows what will become possible with high-quality data like Google has, which goes well beyond shopping habits? By connecting the dots, maybe one can predict how your tastes or political views will change in the coming years.

Today, companies you have never heard of claim to have 500 data points about you, including religion, sexual orientation and political views. Speaking of politics, what if you decide to go into politics 10 years from now? Your life may change, your views too, and you may even forget, but Google won't. Will you have to worry that your opponent is in touch with someone who has access to your data at Google and can dig up something embarrassing on you from those bottomless wells of personal data you gave away over the years? How long until Google or Facebook get hacked, just like Sony was recently hacked, and all your personal data ends up in the public sphere forever?

One of the reasons most of us have entrusted our personal data to these companies is that they provide their services for free. But how free is it really? The value of the average Google account varies depending on the method used to estimate it: 1000 USD/year accounts for the amount of time you invest in writing emails, while the value of your account to the advertisement industry is somewhere between 220 USD/year and 500 USD/year. So the service is not exactly free: you pay for it through advertisement and the yet unknown uses that our data may find in the future.

I've been writing about Google mostly because that's the company I've entrusted most of my digital identity to so far, and hence the one I know best. But I may well have written Apple or Facebook.

These companies truly changed the world with their fantastic advances in design, engineering and services we love(d) to use, every day. But it doesn't mean we should stack up all our most personal data in their servers and entrust them with our digital lives: the potential for harm is just too large.

Claim back your privacy and that of people you care for in just 5h

It does not have to be this way. You can live in the 21st century, have a smartphone, use email and GPS on a daily basis, and still retain your privacy. All you need to do is get back control over your personal data: emails, calendar, contacts, files, etc. The Prism-Break.org website lists software that helps you control the fate of your personal data. Beyond these options, the safest and most powerful way to get back control over your personal data is to host your cloud yourself, by building your own server. But you may just not have the time and energy to research how exactly to do that and make it work smoothly.

That's where the present article fits in. In just 5 hours, we will set up a server to host your emails, contacts, calendars and files for you, your friends and your family. The server is designed to act as a hub or cloud for your personal data, so that you always retain full control over it. The data will automatically be synchronized between your PC/laptop, your phone and your tablet. Essentially, we will set up a system that replaces Gmail, Google Drive / Dropbox, Google Contacts, Google Calendar and Picasa.

Just doing this for yourself will already be a big step. But then, a significant fraction of your personal information will still leak out and end up on some servers in Silicon Valley, just because so many of the people you interact with every day use Gmail and have smartphones. So it's a good idea to have some of the people you are closest to join the adventure.

We will build a system that
  1. supports an arbitrary number of domains and users. This makes it easy to share your server with family and friends, so that they get control over their personal data too and can share the cost of the server with you. The people sharing your server can use their own domain name or share yours.
  2. lets you send and receive your emails from any network upon successfully logging in to the server. This way, you can send your emails from any of your email addresses, from any device (PC, phone, tablet), and any network (at home, at work, from a public network, ...)
  3. encrypts network traffic when sending and receiving emails so people you don't trust won't fish out your password and won't be able to read your private emails.
  4. offers state-of-the-art antispam, combining blacklists of known spammers, automatic greylisting, and adaptive spam filtering. Re-training the adaptive spam filter when an email is misclassified is simply done by moving the spam in or out of the Junk/Spam folder. Also, the server will contribute to community-based spam fighting efforts.
  5. requires just a few minutes of maintenance once in a while, basically to install security updates and briefly check the server logs. Adding a new email address boils down to adding one record to a database. Apart from that, you can just forget about it and live your life. I set up the system described in this article 14 months ago and the thing has just been running smoothly since then. So I completely forgot about it, until I recently smiled at the thought that casually pressing the 'Check email' button of my phone caused electrons to travel all the way to Iceland (where my server sits) and back.
To go through this article, you'll need a minimum of technical capabilities. If you know the difference between SMTP and IMAP, what DNS is, and have a basic understanding of TCP/IP, you know enough to follow through. You will also need a basic working knowledge of Unix (working with files from the command line, basic system administration). And you'll need a total of 5 hours of time to set it up.

Here's an overview what we will do:
  1. Get a Virtual Private Server, a domain name, and set them up
  2. Set up postfix and dovecot to send and receive email
  3. Prevent SPAM from reaching your INBOX
  4. Make sure the emails you send get through spam filters
  5. Host calendars, contacts, files with Owncloud and set up webmail
  6. Sync your devices to the cloud

This article was inspired by and builds upon previous work

This article draws heavily from two other articles, namely Xavier Claude's and Drew Crawford's introductions to email self-hosting.

The article includes all the features of Xavier's and Drew's articles, except for three features that Drew had and which I didn't need, namely push support for email (I like to check email only when I decide to, otherwise I get distracted all the time), fulltext search in email (which I don't have a use for), and storing emails in an encrypted form (my emails and data are not critical to the point that I have to encrypt them locally on the server). If you need any of these features, feel free to just add them by following the respective section of Drew's article, which is compatible with the present one.

Compared to Xavier's and Drew's work, the present article improves on several aspects:
  • it fixes bugs and typos based on my experience with Drew's article and the numerous comments on his original article. I also went through the present article, setting up the server from scratch several times to replicate it and make sure it would work right out of the box.
  • low maintenance: compared to Xavier's work, the present article adds support for multiple email domains on the server. It does so by requiring the minimum amount of server maintenance possible: basically, to add a domain or a user, just add one row to a mysql table and that's it (no need to add sieve scripts, ...).
  • I added webmail.
  • I added a section on setting up a cloud, to host not just your emails but also your files, your addressbook / contacts (emails, phone numbers, birthdays, ...), calendars and pictures for use across your devices.

Get a Virtual Private Server, a domain name, and set them up

Let's start by setting the basic infrastructure: our virtual private server and our domain name.
I've had an excellent experience with the Virtual Private Servers (VPS) of 1984.is and Linode. In this article, we will use Debian Wheezy, for which both 1984 and Linode provide ready-made images to deploy on your VPS. I like 1984 because the servers are hosted in Iceland and run exclusively on renewable energy (geothermal and hydropower), hence they do not contribute to climate change, unlike the coal power plants on which most US-based datacenters currently run. Also, they put emphasis on civil liberties, transparency, freedom and Free Software.

It could be a good idea to start a file to store the various passwords we will need to set on the server (user accounts, mail accounts, cloud accounts, database accounts). It's definitely a good idea to encrypt this file (maybe with GnuPG), so that it won't be too easy to attack your server even if the computer you use to set up your server gets stolen or compromised.
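For instance, GnuPG's symmetric mode is enough for this purpose; a minimal sketch, assuming your password file is called passwords.txt:

gpg -c passwords.txt        # prompts for a passphrase, writes passwords.txt.gpg
gpg -o passwords.txt -d passwords.txt.gpg   # decrypt it again later

Remember to delete the unencrypted passwords.txt once the .gpg file exists.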

For registering a domain name, I've been using the services of gandi for over 10 years now, also with satisfaction. For this article, we will set up a zone with the name jhausse.net. We then add a host named cloud.jhausse.net to it, and set the MX record to that host. While you're at it, set short Time To Live (TTL) values on your records, like 300 seconds, so that you'll be able to make changes to your zone and test the result rapidly while you're setting up the server.

Finally, set the PTR record (reverse DNS) so that the IP address of the host maps back to its name. If you don't understand the previous sentence, read this article to get the background. If you use Linode, you can set the PTR record in the control panel in the Remote Access section. With 1984, contact the tech support, who will help you with it.
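Once the records are in place, you can check them from any machine with dig; the IP address below is just a placeholder for your server's actual address:

dig +short MX jhausse.net
dig +short -x 94.142.241.111

Both should come back pointing at cloud.jhausse.net.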

On the server, we will start by adding a non-privileged user, so that we don't end up working as root all the time. Also, logging in as root will require an extra layer of security.
adduser roudy

Then, in /etc/ssh/sshd_config, we set
 
PermitRootLogin no

and reload the ssh server
 
service ssh reload

Then, we'll need to change the hostname of the server. Edit /etc/hostname so that it has just a single line with your hostname, in our case
 
cloud

Then, edit the ssh server's public key files /etc/ssh/ssh_host_rsa_key.pub, /etc/ssh/ssh_host_dsa_key.pub, /etc/ssh/ssh_host_ecdsa_key.pub so that the end of the file reflects your hostname, for instance root@cloud. Then restart the system to make sure the hostname is fixed wherever it should be
 
reboot

We will update the system and remove services we don't need to reduce the risk of remote attacks.
apt-get update
apt-get dist-upgrade
service exim4 stop
apt-get remove exim4 rpcbind
apt-get autoremove
apt-get install vim

I like to use vim for editing config files remotely. For this, it helps to automatically turn on syntax highlighting. We do so by adding
syn on
to ~/.vimrc.

Set up postfix and dovecot to send and receive email

apt-get install postfix postfix-mysql dovecot-core dovecot-imapd dovecot-mysql mysql-server dovecot-lmtpd postgrey
In the Postfix configuration menu, we select Internet Site, and set the system mail name to jhausse.net.

We will now set up a database to store the list of domains hosted on our server, the list of users for each of these domains (together with their password), and a list of mail aliases (to forward email from a given address to another one).
 
mysqladmin -p create mailserver
mysql -p mailserver
mysql> GRANT SELECT ON mailserver.* TO 'mailuser'@'localhost' IDENTIFIED BY 'mailuserpass';
mysql> FLUSH PRIVILEGES;
mysql> CREATE TABLE `virtual_domains` (
`id` int(11) NOT NULL auto_increment,
`name` varchar(50) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
mysql> CREATE TABLE `virtual_users` (
`id` int(11) NOT NULL auto_increment,
`domain_id` int(11) NOT NULL,
`password` varchar(106) NOT NULL,
`email` varchar(100) NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `email` (`email`),
FOREIGN KEY (domain_id) REFERENCES virtual_domains(id) ON DELETE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
mysql> CREATE TABLE `virtual_aliases` (
`id` int(11) NOT NULL auto_increment,
`domain_id` int(11) NOT NULL,
`source` varchar(100) NOT NULL,
`destination` varchar(100) NOT NULL,
PRIMARY KEY (`id`),
FOREIGN KEY (domain_id) REFERENCES virtual_domains(id) ON DELETE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
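At this point, you can double-check that the three tables were created:

mysql> SHOW TABLES;
+----------------------+
| Tables_in_mailserver |
+----------------------+
| virtual_aliases      |
| virtual_domains      |
| virtual_users        |
+----------------------+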

We will host the jhausse.net domain. If there are other domains you'd like to host, you can also add them. We also set up a postmaster address for each domain, which forwards to roudy@jhausse.net.
 
mysql> INSERT INTO virtual_domains (`name`) VALUES ('jhausse.net');
mysql> INSERT INTO virtual_domains (`name`) VALUES ('otherdomain.net');
mysql> INSERT INTO virtual_aliases (`domain_id`, `source`, `destination`) VALUES ('1', 'postmaster', 'roudy@jhausse.net');
mysql> INSERT INTO virtual_aliases (`domain_id`, `source`, `destination`) VALUES ('2', 'postmaster', 'roudy@jhausse.net');

We now add a locally hosted email account roudy@jhausse.net. First, we generate a password hash for it:
 
doveadm pw -s SHA512-CRYPT

and then add the hash to the database
 
mysql> INSERT INTO `mailserver`.`virtual_users` (`domain_id`, `password`, `email`) VALUES ('1', '$6$YOURPASSWORDHASH', 'roudy@jhausse.net');

Now that our list of domains, aliases and users are in place, we will set up postfix (SMTP server, for outgoing mail). Replace the contents of /etc/postfix/main.cf with the following:
 
myhostname = cloud.jhausse.net
myorigin = /etc/mailname
mydestination = localhost.localdomain, localhost
mynetworks_style = host

# We disable relaying in the general case
smtpd_recipient_restrictions = permit_mynetworks, reject_unauth_destination
# Requirements on servers that contact us: we verify the client is not a
# known spammer (reject_rbl_client) and use a graylist mechanism
# (postgrey) to help reducing spam (check_policy_service)
smtpd_client_restrictions = permit_mynetworks, reject_rbl_client zen.spamhaus.org, check_policy_service inet:127.0.0.1:10023
disable_vrfy_command = yes
inet_interfaces = all
smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU)
biff = no
append_dot_mydomain = no
readme_directory = no

# TLS parameters
smtpd_tls_cert_file=/etc/ssl/certs/cloud.crt
smtpd_tls_key_file=/etc/ssl/private/cloud.key
smtpd_use_tls=yes
smtpd_tls_auth_only = yes
smtp_tls_security_level=may
smtp_tls_loglevel = 1
smtpd_tls_loglevel = 1
smtpd_tls_received_header = yes
smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache

# Delivery
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
message_size_limit = 50000000
recipient_delimiter = +

# The next lines are useful to set up a backup MX for myfriendsdomain.org
# relay_domains = myfriendsdomain.org
# relay_recipient_maps =

# Virtual domains
virtual_transport = lmtp:unix:private/dovecot-lmtp
virtual_mailbox_domains = mysql:/etc/postfix/mysql-virtual-mailbox-domains.cf
virtual_mailbox_maps = mysql:/etc/postfix/mysql-virtual-mailbox-maps.cf
virtual_alias_maps = mysql:/etc/postfix/mysql-virtual-alias-maps.cf
local_recipient_maps = $virtual_mailbox_maps

Now we need to teach postfix to figure out which domains we would like it to accept emails for, using the database we just set up. Create a new file /etc/postfix/mysql-virtual-mailbox-domains.cf and add the following:
 
user = mailuser
password = mailuserpass
hosts = 127.0.0.1
dbname = mailserver
query = SELECT 1 FROM virtual_domains WHERE name='%s'

We teach postfix to find out whether a given email account exists by creating /etc/postfix/mysql-virtual-mailbox-maps.cf with the following content
 
user = mailuser
password = mailuserpass
hosts = 127.0.0.1
dbname = mailserver
query = SELECT 1 FROM virtual_users WHERE email='%s'

Finally, postfix will use /etc/postfix/mysql-virtual-alias-maps.cf to look up mail aliases
 
user = mailuser
password = mailuserpass
hosts = 127.0.0.1
dbname = mailserver
query = SELECT virtual_aliases.destination as destination FROM virtual_aliases, virtual_domains WHERE virtual_aliases.source='%u' AND virtual_aliases.domain_id = virtual_domains.id AND virtual_domains.name='%d'

With all this in place, it is now time to test if postfix can query our database properly. We can do this using postmap:
 
postmap -q jhausse.net mysql:/etc/postfix/mysql-virtual-mailbox-domains.cf
postmap -q roudy@jhausse.net mysql:/etc/postfix/mysql-virtual-mailbox-maps.cf
postmap -q postmaster@jhausse.net mysql:/etc/postfix/mysql-virtual-alias-maps.cf
postmap -q bob@jhausse.net mysql:/etc/postfix/mysql-virtual-alias-maps.cf

If you set up everything properly, the first two queries should return 1, the third query should return roudy@jhausse.net and the last one should return nothing at all.

Now, let's set up dovecot (the IMAP server, to fetch incoming mail on the server from our devices).

Edit /etc/dovecot/dovecot.conf to set the following parameters:
 
# Enable installed protocol
# !include_try /usr/share/dovecot/protocols.d/*.protocol
protocols = imap lmtp

which will only enable imap (to let us fetch emails) and lmtp (which postfix will use to pass incoming emails to dovecot). Edit /etc/dovecot/conf.d/10-mail.conf to set the following parameters:
 
mail_location = maildir:/var/mail/%d/%n
[...]
mail_privileged_group = mail
[...]
first_valid_uid = 0

which will store emails in /var/mail/domainname/username. Note that these settings are spread across different locations in the file, and are sometimes already there for us to set: we just need to uncomment them. The other settings which are already in the file, you can leave as is. We will have to do the same to update settings in many more files in the remainder of this article. In /etc/dovecot/conf.d/10-auth.conf, set the parameters:
 
disable_plaintext_auth = yes
auth_mechanisms = plain
#!include auth-system.conf.ext
!include auth-sql.conf.ext

In /etc/dovecot/conf.d/auth-sql.conf.ext, set the following parameters:
 
passdb {
driver = sql
args = /etc/dovecot/dovecot-sql.conf.ext
}
userdb {
driver = static
args = uid=mail gid=mail home=/var/mail/%d/%n
}

where we just taught dovecot that users have their emails in /var/mail/domainname/username and to look up passwords from the database we just created. Now we still need to teach dovecot how exactly to use the database. To do so, put the following into /etc/dovecot/dovecot-sql.conf.ext:
 
driver = mysql
connect = host=localhost dbname=mailserver user=mailuser password=mailuserpass
default_pass_scheme = SHA512-CRYPT
password_query = SELECT email as user, password FROM virtual_users WHERE email='%u';

We now fix permissions on config files
 
chown -R mail:dovecot /etc/dovecot
chmod -R o-rwx /etc/dovecot

Almost there! We just need to edit a couple files more. In /etc/dovecot/conf.d/10-master.conf, set the following parameters:
 
service imap-login {
inet_listener imap {
#port = 143
port = 0
}
inet_listener imaps {
port = 993
ssl = yes
}
}

service pop3-login {
inet_listener pop3 {
#port = 110
port = 0
}
inet_listener pop3s {
#port = 995
#ssl = yes
port = 0
}
}

service lmtp {
unix_listener /var/spool/postfix/private/dovecot-lmtp {
mode = 0666
group = postfix
user = postfix
}
user = mail
}

service auth {
unix_listener auth-userdb {
mode = 0600
user = mail
#group =
}

# Postfix smtp-auth
unix_listener /var/spool/postfix/private/auth {
mode = 0666
user = postfix
group = postfix
}

# Auth process is run as this user.
#user = $default_internal_user
user = dovecot
}

service auth-worker {
user = mail
}

Note that we set ports for all services but imaps to 0, which effectively disables them. Then, in /etc/dovecot/conf.d/15-lda.conf, specify an email address for the postmaster:
postmaster_address = postmaster@jhausse.net

Last but not least, we need to generate a public/private key pair for the server, which we will use both in dovecot and postfix:
openssl req -new -newkey rsa:4096 -x509 -days 365 -nodes -out "/etc/ssl/certs/cloud.crt" -keyout "/etc/ssl/private/cloud.key"

Make sure that you specify the Fully Qualified Domain Name (FQDN) of the server, in our case:
 
Common Name (e.g. server FQDN or YOUR name) []:cloud.jhausse.net

If you don't, our clients may complain that the server name in the SSL certificate does not match the name of the server they are connecting to. We tell dovecot to use these keys by setting the following parameters in /etc/dovecot/conf.d/10-ssl.conf:
 
ssl = required
ssl_cert = </etc/ssl/certs/cloud.crt
ssl_key = </etc/ssl/private/cloud.key
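If you want to double-check the certificate you just generated, openssl can print its subject and validity period:

openssl x509 -in /etc/ssl/certs/cloud.crt -noout -subject -dates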

That's it! Now on to testing the postfix and dovecot servers!
 
service dovecot restart
service postfix restart

From the server itself, try to send an email to a local user:
 
telnet localhost 25
EHLO cloud.jhausse.net
MAIL FROM:youremail@domain.com
rcpt to:roudy@jhausse.net
data
Subject: Hallo!

This is a test, to check if cloud.jhausse.net is ready to be an MX!

Cheers, Roudy
.
QUIT

The server should accept our email with a message like
 
250 2.0.0 Ok: queued as 58D54101DB

Check the logs in /var/log/mail.log to see that everything went fine. There should be a line saying something like
 
Nov 14 07:57:06 cloud dovecot: lmtp(4375, roudy@jhausse.net): ... saved mail to INBOX

So far so good? Good. Now, let's try the same from a different machine, like the computer we are using to set up the server. We'll talk to the server using encryption (TLS) this time:
 
openssl s_client -connect cloud.jhausse.net:25 -starttls smtp
EHLO cloud.jhausse.net
MAIL FROM:roudy@jhausse.net
rcpt to:bob@gmail.com

to which the server should respond
 
554 5.7.1 : Relay access denied

That's good: had the server accepted the mail, it would have meant that we had set up postfix as an open relay for all the spammers of the world and beyond to use. Instead of the 'Relay access denied' message, you may get the message
 
554 5.7.1 Service unavailable; Client host [87.68.61.119] blocked using zen.spamhaus.org; http://www.spamhaus.org/query/bl?ip=87.68.61.119

This means that you are trying to contact the server from an IP address that is considered as a spammer's address. I got this message while trying to connect to the server through my regular Internet Service Provider (ISP). To fix this issue, you can try to connect from another host, maybe another server you have access to through SSH. Alternatively, you can reconfigure Postfix's main.cf not to use Spamhaus's RBL, reload postfix, and verify that the above test works. In both cases, it's important that you find a solution that works for you because we'll test other things in a minute. If you chose to reconfigure Postfix not to use RBLs, don't forget to put the RBLs back in and to reload postfix after finishing the article to avoid getting more spam than necessary.
Now let's try to send a valid email by SMTP on port 25, which regular mail servers use to talk to each other:
 
openssl s_client -connect cloud.jhausse.net:25 -starttls smtp
EHLO cloud.jhausse.net
MAIL FROM:youremail@domain.com
rcpt to:roudy@jhausse.net

to which the server should respond
 
Client host rejected: Greylisted, see http://postgrey.schweikert.ch/help/jhausse.net.html
which shows that postgrey is working as it should. What postgrey does is reject emails with a temporary error if the sender has never been seen before. The technical rules of email require email servers to try to deliver the email again. After five minutes, postgrey will accept the email. Legit email servers around the world will repeatedly try to redeliver the email to us, but most spammers won't. So, wait for 5 minutes, try to send the email again using the command above, and verify that postfix now accepts the email.

Afterwards, we'll check that we can fetch the two emails that we just sent ourselves by talking IMAP to dovecot:
 
openssl s_client -crlf -connect cloud.jhausse.net:993
1 login roudy@jhausse.net "mypassword"
2 LIST "" "*"
3 SELECT INBOX
4 UID fetch 1:1 (UID RFC822.SIZE FLAGS BODY.PEEK[])
5 LOGOUT

where you should replace mypassword with the password you set for this email account. If that works, we basically have a functional email server which can receive our incoming emails, and from which we can retrieve these emails from our devices (PC/laptop, tablets, phones, ...). But we can't give it our emails to send unless we send them from the server itself. We'll now allow postfix to forward our emails, but only upon successful authentication, that is, after it has made sure that the email comes from someone who has a valid account on the server. To do so, we'll open a special, SSL-only, SASL-authenticated email submission service. Set the following parameters in /etc/postfix/master.cf:
 
submission inet n       -       -       -       -       smtpd
-o syslog_name=postfix/submission
-o smtpd_tls_security_level=encrypt
-o smtpd_sasl_auth_enable=yes
-o smtpd_client_restrictions=permit_sasl_authenticated,reject
-o smtpd_sasl_type=dovecot
-o smtpd_sasl_path=private/auth
-o smtpd_sasl_security_options=noanonymous
-o smtpd_recipient_restrictions=permit_sasl_authenticated,reject_non_fqdn_recipient,reject_unauth_destination

and reload postfix
 
service postfix reload

Now, let's try to use this service from a different machine than the server, to verify that postfix will now relay our emails and nobody else's:
 
openssl s_client -connect cloud.jhausse.net:587 -starttls smtp
EHLO cloud.jhausse.net

Notice the '250-AUTH PLAIN' capability advertised by the server, which doesn't appear when we connect to port 25.
 
MAIL FROM:asdf@jkl.net
rcpt to:bob@gmail.com
554 5.7.1 : Relay access denied
QUIT

That's good, postfix won't relay our emails if it doesn't know us. So let's authenticate ourselves first.

To do so, we first need to generate an authentication string:
 
echo -ne '\000roudy@jhausse.net\000mypassword'|base64

and let's try to send emails through the server again:
 
openssl s_client -connect cloud.jhausse.net:587 -starttls smtp
EHLO cloud.jhausse.net
AUTH PLAIN DGplYW5AMTk4NGNsb3VQLm5ldAA4bmFmNGNvNG5jOA==
MAIL FROM:asdf@jkl.net
rcpt to:bob@gmail.com

which postfix should now accept. To complete the test, let's verify that our virtual aliases work by sending an email to postmaster@jhausse.net and making sure it goes to roudy@jhausse.net:
 
telnet cloud.jhausse.net 25
EHLO cloud.jhausse.net
MAIL FROM:youremail@domain.com
rcpt to:postmaster@jhausse.net
data
Subject: Virtual alias test

Dear postmaster,
Long time no hear! I hope your MX is working smoothly and securely.
Yours sincerely, Roudy
.
QUIT

Let's check the mail made it all the way to the right inbox:
 
openssl s_client -crlf -connect cloud.jhausse.net:993
1 login roudy@jhausse.net "mypassword"
2 LIST "" "*"
3 SELECT INBOX
* 2 EXISTS
* 2 RECENT
4 LOGOUT

At this point, we have a functional email server, both for incoming and outgoing mails. We can set up our devices to use it.

PS: did you remember to try sending an email to an account hosted by the server through port 25 again, to verify that you are no longer blocked by postgrey?

Prevent SPAM from reaching your INBOX


For the sake of SPAM filtering, we already have Realtime BlackLists (RBLs) and greylisting (postgrey) in place. We'll now take our spam-fighting capabilities up a notch by adding adaptive spam filtering. This means we'll add artificial intelligence to our email server, so that it can learn from experience what is spam and what is not. We will use dspam for that.
 
apt-get install dspam dovecot-antispam postfix-pcre dovecot-sieve

dovecot-antispam is a package that allows dovecot to retrain the spam filter if we find an email that is misclassified by dspam. Basically, all we need to do is to move emails in or out of the Junk/Spam folder. dovecot-antispam will then take care of calling dspam to retrain the filter. As for postfix-pcre and dovecot-sieve, we will use them respectively to pass incoming emails through the spam filter and to automatically move spam to the user's Junk/Spam folder.

In /etc/dspam/dspam.conf, set the following parameters to these values:
 
TrustedDeliveryAgent "/usr/sbin/sendmail"
UntrustedDeliveryAgent "/usr/lib/dovecot/deliver -d %u"
Tokenizer osb
IgnoreHeader X-Spam-Status
IgnoreHeader X-Spam-Scanned
IgnoreHeader X-Virus-Scanner-Result
IgnoreHeader X-Virus-Scanned
IgnoreHeader X-DKIM
IgnoreHeader DKIM-Signature
IgnoreHeader DomainKey-Signature
IgnoreHeader X-Google-Dkim-Signature
ParseToHeaders on
ChangeModeOnParse off
ChangeUserOnParse full
ServerPID /var/run/dspam/dspam.pid
ServerDomainSocketPath "/var/run/dspam/dspam.sock"
ClientHost /var/run/dspam/dspam.sock

Then, in /etc/dspam/default.prefs, change the following parameters to:
 
spamAction=deliver         # { quarantine | tag | deliver } -> default:quarantine
signatureLocation=headers # { message | headers } -> default:message
showFactors=on

Now we need to connect dspam to postfix and dovecot by adding these two lines at the end of /etc/postfix/master.cf:
 
dspam     unix  -       n       n       -       10      pipe
flags=Ru user=dspam argv=/usr/bin/dspam --deliver=innocent,spam --user $recipient -i -f $sender -- $recipient
dovecot unix - n n - - pipe
flags=DRhu user=mail:mail argv=/usr/lib/dovecot/deliver -f ${sender} -d ${recipient}

Now we will tell postfix to filter every new email that gets submitted to the server on port 25 (normal SMTP traffic) through dspam, except if the email is submitted from the server itself (permit_mynetworks). Note that the emails we submit to postfix with SASL authentication won't be filtered through dspam either, as we set up a separate submission service for those in the previous section. Edit /etc/postfix/main.cf to change the smtpd_client_restrictions to the following:
 
smtpd_client_restrictions = permit_mynetworks, reject_rbl_client zen.spamhaus.org, check_policy_service inet:127.0.0.1:10023, check_client_access pcre:/etc/postfix/dspam_filter_access

At the end of the file, also add:
 
# For DSPAM, only scan one mail at a time
dspam_destination_recipient_limit = 1

We now need to specify the filter we defined. Basically, we will tell postfix to send all emails (/./) to dspam through a unix socket. Create a new file /etc/postfix/dspam_filter_access and put the following line into it:
 
/./   FILTER dspam:unix:/run/dspam/dspam.sock

That's it for the postfix part. Now let's set up dovecot for spam filtering. In /etc/dovecot/conf.d/20-imap.conf, edit the imap mail_plugins parameter so that:
 
mail_plugins = $mail_plugins antispam

and add a section for lmtp:
 
protocol lmtp {
# Space separated list of plugins to load (default is global mail_plugins).
mail_plugins = $mail_plugins sieve
}

We now configure the dovecot-antispam plugin. Edit /etc/dovecot/conf.d/90-plugin.conf to add the following content to the plugin section:
 
plugin {
...
# Antispam (DSPAM)
antispam_backend = dspam
antispam_allow_append_to_spam = YES
antispam_spam = Junk;Spam
antispam_trash = Trash;trash
antispam_signature = X-DSPAM-Signature
antispam_signature_missing = error
antispam_dspam_binary = /usr/bin/dspam
antispam_dspam_args = --user;%u;--deliver=;--source=error
antispam_dspam_spam = --class=spam
antispam_dspam_notspam = --class=innocent
antispam_dspam_result_header = X-DSPAM-Result
}

and in /etc/dovecot/conf.d/90-sieve.conf, specify a default sieve script which will apply to all users of the server:
 
sieve_default = /etc/dovecot/default.sieve

What is sieve and why do we need a default script for all users? Sieve lets us automate tasks on the IMAP server. In our case, we want all emails identified as spam to be put in the Junk folder instead of the Inbox. We would like this to be the default behavior for all users on the server; that's why we set this script as the default script. Let's create this script now, by creating a new file /etc/dovecot/default.sieve with the following content:
 
require ["regex", "fileinto", "imap4flags"];
# Catch mail tagged as Spam, except Spam retrained and delivered to the mailbox
if allof (header :regex "X-DSPAM-Result" "^(Spam|Virus|Bl[ao]cklisted)$",
not header :contains "X-DSPAM-Reclassified" "Innocent") {
# Mark as read
# setflag "\\Seen";
# Move into the Junk folder
fileinto "Junk";
# Stop processing here
stop;
}

Now we need to compile this script so that dovecot can run it. We also need to give it appropriate permissions.
 
cd /etc/dovecot
sievec .
chown mail.dovecot default.siev*
chmod 0640 default.sieve
chmod 0750 default.svbin

Finally, we need to fix permissions on two postfix config files that dspam needs to read from:
 
chmod 0644 /etc/postfix/dynamicmaps.cf /etc/postfix/main.cf

That's it! Let's restart dovecot and postfix
 
service dovecot restart
service postfix restart

and test the antispam by contacting the server from a remote host (e.g. the computer we are using to set up the server):
 
openssl s_client -connect cloud.jhausse.net:25 -starttls smtp
EHLO cloud.jhausse.net
MAIL FROM:youremail@domain.com
rcpt to:roudy@jhausse.net
DATA
Subject: DSPAM test

Hi Roudy, how'd you like to eat some ham tonight? Yours, J
.
QUIT

Let's check if the mail arrived:
 
openssl s_client -crlf -connect cloud.jhausse.net:993
1 login roudy@jhausse.net "mypassword"
2 LIST "" "*"
3 SELECT INBOX
4 UID fetch 3:3 (UID RFC822.SIZE FLAGS BODY.PEEK[])

which should return the email with a collection of headers set by DSPAM, looking like this:
X-DSPAM-Result: Innocent
X-DSPAM-Processed: Sun Oct 5 16:25:48 2014
X-DSPAM-Confidence: 1.0000
X-DSPAM-Probability: 0.0023
X-DSPAM-Signature: 5431710c178911166011737
X-DSPAM-Factors: 27,
Received*Postfix+with, 0.40000,
Received*with+#+id, 0.40000,
like+#+#+#+ham, 0.40000,
some+#+tonight, 0.40000,
Received*certificate+requested, 0.40000,
Received*client+certificate, 0.40000,
Received*for+roudy, 0.40000,
Received*Sun+#+#+#+16, 0.40000,
Received*Sun+#+Oct, 0.40000,
Received*roudy+#+#+#+Oct, 0.40000,
eat+some, 0.40000,
Received*5+#+#+16, 0.40000,
Received*cloud.jhausse.net+#+#+#+id, 0.40000,
Roudy+#+#+#+to, 0.40000,
Received*Oct+#+16, 0.40000,
to+#+#+ham, 0.40000,
Received*No+#+#+requested, 0.40000,
Received*jhausse.net+#+#+Oct, 0.40000,
Received*256+256, 0.40000,
like+#+#+some, 0.40000,
Received*ESMTPS+id, 0.40000,
how'd+#+#+to, 0.40000,
tonight+Yours, 0.40000,
Received*with+cipher, 0.40000
5 LOGOUT

Good! You now have adaptive spam filtering set up for the users of your server. Of course, each user will need to train the filter in the first few weeks. To train a message as spam, just move it to a folder called "Spam" or "Junk" using any of your devices (PC, tablet, phone). Otherwise it'll be trained as ham.
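The dovecot-antispam plugin triggers the retraining for you whenever you move a message. Should you ever want to retrain from the command line instead, dspam accepts a raw message on stdin with the same flags we configured above; a sketch, where spam.eml and ham.eml are hypothetical saved message files:

dspam --user roudy@jhausse.net --class=spam --source=error < spam.eml
dspam --user roudy@jhausse.net --class=innocent --source=error < ham.eml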

Make sure the emails you send get through spam filters


Our goal in this section will be to make our mail server appear as clean as possible to the world and to make it harder for spammers to send emails in our name. As a side-effect, this will help us get our emails through the spam filters of other mail servers.

Sender Policy Framework

Sender Policy Framework (SPF) is a record that you add to your zone, declaring which mail servers on the whole internet can send emails for your domain name. Setting it up is very easy: use the SPF wizard at microsoft.com to generate your SPF record, and then add it to your zone as a TXT record. It will look like this:
 
jhausse.net.	300 IN	TXT	v=spf1 mx mx:cloud.jhausse.net -all
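After your zone reloads, you can confirm the record is published:

dig +short TXT jhausse.net

which should print the v=spf1 string above.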

Reverse PTR

We discussed this point earlier in this article: it's a good idea to set up the reverse DNS for your server correctly, so that doing a reverse lookup on the IP address of your server returns its actual name.

OpenDKIM

When we activate OpenDKIM, postfix will sign every outgoing email using a cryptographic key. We will then deposit that key in our zone, on the DNS. That way, every mail server in the world will be able to verify if the email actually came from us, or if it was forged by a spammer. Let's install opendkim:
 
apt-get install opendkim opendkim-tools

And set it up by editing /etc/opendkim.conf so that it looks like this:
 
##
## opendkim.conf -- configuration file for OpenDKIM filter
##
Canonicalization relaxed/relaxed
ExternalIgnoreList refile:/etc/opendkim/TrustedHosts
InternalHosts refile:/etc/opendkim/TrustedHosts
KeyTable refile:/etc/opendkim/KeyTable
LogWhy Yes
MinimumKeyBits 1024
Mode sv
PidFile /var/run/opendkim/opendkim.pid
SigningTable refile:/etc/opendkim/SigningTable
Socket inet:8891@localhost
Syslog Yes
SyslogSuccess Yes
TemporaryDirectory /var/tmp
UMask 022
UserID opendkim:opendkim

We'll need a couple of additional files which we will store in /etc/opendkim:
 
mkdir -pv /etc/opendkim/
cd /etc/opendkim/

Let's create a new file /etc/opendkim/TrustedHosts with the following content
 
127.0.0.1

and a new file called /etc/opendkim/KeyTable with the following content
 
cloudkey jhausse.net:mail:/etc/opendkim/mail.private

This tells OpenDKIM that we want to use an encryption key named 'cloudkey' whose contents can be found in /etc/opendkim/mail.private. We will create another file named /etc/opendkim/SigningTable and add the following line:
 
*@jhausse.net cloudkey

which tells OpenDKIM that every email from the jhausse.net domain should be signed using the key 'cloudkey'. If we have other domains which we want to sign, we can add them here too.

The next step is to generate that key and fix permissions on OpenDKIM's config files.
 
opendkim-genkey -r -s mail [-t]
chown -Rv opendkim:opendkim /etc/opendkim
chmod 0600 /etc/opendkim/*
chmod 0700 /etc/opendkim

At first, it's a good idea to use the -t switch, which signals to other mail servers that you are just in testing mode, and that they shouldn't discard emails based on your OpenDKIM signature (yet). You can get your OpenDKIM key from the mail.txt file:
 
cat mail.txt

and then add it to your zone file as a TXT record, which should look like this
 
mail._domainkey.jhausse.net.	300	IN TXT	v=DKIM1; k=rsa; p=MIGfMA0GCSqG...

Finally, we need to tell postfix to sign outgoing emails. At the end of /etc/postfix/main.cf, add:
 
# Now for OpenDKIM: we'll sign all outgoing emails
smtpd_milters = inet:127.0.0.1:8891
non_smtpd_milters = $smtpd_milters
milter_default_action = accept

And reload the corresponding services
 
service postfix reload
service opendkim restart

Now let's test if our OpenDKIM public key can be found and matches the private key:
 
opendkim-testkey -d jhausse.net -s mail -k mail.private -vvv

which should return
 
opendkim-testkey: key OK

For this, you may need to wait a bit until the name server has reloaded the zone (on Linode, this happens every 15min). You can use dig to check if the zone was reloaded yet.
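For example, the following should print the v=DKIM1 record once the zone is live:

dig +short TXT mail._domainkey.jhausse.net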

If this works, let's test if other servers can validate our OpenDKIM signatures and SPF record. To do this, we can use Brandon Checkett's email test. To send an email to a test address given to us on Brandon's webpage, we can run the following command on the server
 
mail -s CloudCheck ihAdmTBmUH@www.brandonchecketts.com

On Brandon's webpage, you should then see result = pass in the 'DKIM Signature' section, and Result: pass in the 'SPF Information' section. If our emails pass this test, just regenerate an OpenDKIM key without the -t switch, upload the new key to the zone file, and retest to verify that it still passes the tests. If so, congrats! You just successfully set up OpenDKIM and SPF on your server!

Host calendars, contacts, files with Owncloud and set up a webmail with Roundcube


Now that we have a top-notch email server, let's add to it the possibility to store your contacts, calendars, and files in the cloud. These are services that Owncloud provides out of the box. While we're at it, we'll also set up a webmail, so you can check your email even if you're travelling without your electronics, or in case your phone and laptop run out of battery.

Installing Owncloud is straightforward and is well described here. On Debian, it boils down to adding the owncloud repository to your apt sources, downloading owncloud's release key and adding it to your apt keyring, and then installing owncloud itself using apt-get:
 
echo 'deb http://download.opensuse.org/repositories/isv:/ownCloud:/community/Debian_7.0/ /' >> /etc/apt/sources.list.d/owncloud.list
wget http://download.opensuse.org/repositories/isv:ownCloud:community/Debian_7.0/Release.key
apt-key add - < Release.key
apt-get update
apt-get install apache2 owncloud roundcube

When prompted for it, choose dbconfig and then say you want roundcube to use mysql. Then, provide the mysql root password and set a good password for the roundcube mysql user. Then, edit the roundcube config file /etc/roundcube/main.inc.php so that logging in on roundcube will default to using your IMAP server:
 
$rcmail_config['default_host'] = 'ssl://localhost';
$rcmail_config['default_port'] = 993;

Now we will set up the apache2 webserver with SSL so that we can talk to Owncloud and Roundcube using encryption for our passwords and data. Let's turn on Apache's ssl module:
 
a2enmod ssl

and edit /etc/apache2/ports.conf to set the following parameters:
 
NameVirtualHost *:80
Listen 80
ServerName www.jhausse.net

<IfModule mod_ssl.c>
    # If you add NameVirtualHost *:443 here, you will also have to change
    # the VirtualHost statement in /etc/apache2/sites-available/default-ssl
    # to <VirtualHost *:443>
    # Server Name Indication for SSL named virtual hosts is currently not
    # supported by MSIE on Windows XP.
    NameVirtualHost *:443
    Listen 443
</IfModule>

<IfModule mod_gnutls.c>
    Listen 443
</IfModule>


We'll set up a default website for encrypted connections to the webserver as https://www.jhausse.net under /var/www. Edit /etc/apache2/sites-available/default-ssl:
 

<IfModule mod_ssl.c>
<VirtualHost _default_:443>
    ServerAdmin webmaster@localhost

    DocumentRoot /var/www
    ServerName www.jhausse.net
    [...]
    <Directory /var/www/owncloud>
        Deny from all
    </Directory>
    [...]
    SSLCertificateFile /etc/ssl/certs/cloud.crt
    SSLCertificateKeyFile /etc/ssl/private/cloud.key
    [...]
</VirtualHost>
</IfModule>


and let's also set up a website for unencrypted connections to http://www.jhausse.net under /var/www.

Edit /etc/apache2/sites-available/default:
 

<VirtualHost *:80>
    DocumentRoot /var/www
    ServerName www.jhausse.net
    [...]
    <Directory /var/www/owncloud>
        Deny from all
    </Directory>
</VirtualHost>



That way, we can serve pages for www.jhausse.net by putting them in /var/www. The 'Deny from all' directive prevents access to Owncloud through www.jhausse.net: we will set it up to access it through https://cloud.jhausse.net instead.

We will now set up the webmail (roundcube) so that it will be accessed through https://webmail.jhausse.net. Edit /etc/apache2/sites-available/roundcube to have the following content:
 
<VirtualHost *:443>
ServerAdmin webmaster@localhost

DocumentRoot /var/lib/roundcube
# The host name under which you'd like to access the webmail
ServerName webmail.jhausse.net
<Directory />
Options FollowSymLinks
AllowOverride None
</Directory>

ErrorLog ${APACHE_LOG_DIR}/error.log

# Possible values include: debug, info, notice, warn, error, crit,
# alert, emerg.
LogLevel warn

CustomLog ${APACHE_LOG_DIR}/ssl_access.log combined

# SSL Engine Switch:
# Enable/Disable SSL for this virtual host.
SSLEngine on

# do not allow unsecured connections
# SSLRequireSSL
SSLCipherSuite HIGH:MEDIUM

# A self-signed (snakeoil) certificate can be created by installing
# the ssl-cert package. See
# /usr/share/doc/apache2.2-common/README.Debian.gz for more info.
# If both key and certificate are stored in the same file, only the
# SSLCertificateFile directive is needed.
SSLCertificateFile /etc/ssl/certs/cloud.crt
SSLCertificateKeyFile /etc/ssl/private/cloud.key

# Those aliases do not work properly with several hosts on your apache server
# Uncomment them to use it or adapt them to your configuration
Alias /program/js/tiny_mce/ /usr/share/tinymce/www/

# Access to tinymce files
<Directory "/usr/share/tinymce/www/">
Options Indexes MultiViews FollowSymLinks
AllowOverride None
Order allow,deny
allow from all
</Directory>

<Directory /var/lib/roundcube/>
Options +FollowSymLinks
# This is needed to parse /var/lib/roundcube/.htaccess. See its
# content before setting AllowOverride to None.
AllowOverride All
order allow,deny
allow from all
</Directory>

# Protecting basic directories:
<Directory /var/lib/roundcube/config>
Options -FollowSymLinks
AllowOverride None
</Directory>

<Directory /var/lib/roundcube/temp>
Options -FollowSymLinks
AllowOverride None
Order allow,deny
Deny from all
</Directory>

<Directory /var/lib/roundcube/logs>
Options -FollowSymLinks
AllowOverride None
Order allow,deny
Deny from all
</Directory>

<FilesMatch "\.(cgi|shtml|phtml|php)$">
SSLOptions +StdEnvVars
</FilesMatch>
<Directory /usr/lib/cgi-bin>
SSLOptions +StdEnvVars
</Directory>
# SSL Protocol Adjustments:
# The safe and default but still SSL/TLS standard compliant shutdown
# approach is that mod_ssl sends the close notify alert but doesn't wait for
# the close notify alert from client. When you need a different shutdown
# approach you can use one of the following variables:
# o ssl-unclean-shutdown:
# This forces an unclean shutdown when the connection is closed, i.e. no
# SSL close notify alert is send or allowed to received. This violates
# the SSL/TLS standard but is needed for some brain-dead browsers. Use
# this when you receive I/O errors because of the standard approach where
# mod_ssl sends the close notify alert.
# o ssl-accurate-shutdown:
# This forces an accurate shutdown when the connection is closed, i.e. a
# SSL close notify alert is send and mod_ssl waits for the close notify
# alert of the client. This is 100% SSL/TLS standard compliant, but in
# practice often causes hanging connections with brain-dead browsers. Use
# this only for browsers where you know that their SSL implementation
# works correctly.
# Notice: Most problems of broken clients are also related to the HTTP
# keep-alive facility, so you usually additionally want to disable
# keep-alive for those clients, too. Use variable "nokeepalive" for this.
# Similarly, one has to force some clients to use HTTP/1.0 to workaround
# their broken HTTP/1.1 implementation. Use variables "downgrade-1.0" and
# "force-response-1.0" for this.
BrowserMatch "MSIE [2-6]" \
nokeepalive ssl-unclean-shutdown \
downgrade-1.0 force-response-1.0
# MSIE 7 and newer should be able to use keepalive
BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown



and declare the server in your DNS, for instance:
 
webmail.jhausse.net.	300	IN	CNAME	cloud.jhausse.net.

Now let's enable these three websites:
 
a2ensite default default-ssl roundcube
service apache2 restart

and the webmail, accessible under https://webmail.jhausse.net, should basically work. Log in using the full email address (e.g. roudy@jhausse.net) and the password you set in the mailserver DB at the beginning of this article. The first time you connect, the browser will warn you that the certificate was not signed by a certification authority. That's fine; just add an exception.

Last but not least, we will create a virtual host for owncloud by putting the following content in /etc/apache2/sites-available/owncloud:
 
<VirtualHost *:443>
ServerAdmin webmaster@localhost

DocumentRoot /var/www/owncloud
ServerName cloud.jhausse.net
<Directory />
Options FollowSymLinks
AllowOverride None
</Directory>

<Directory /var/www/owncloud>
Options Indexes FollowSymLinks MultiViews
AllowOverride All
Order allow,deny
allow from all
</Directory>

ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
<Directory "/usr/lib/cgi-bin">
AllowOverride None
Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
Order allow,deny
Allow from all
</Directory>

ErrorLog ${APACHE_LOG_DIR}/error.log

# Possible values include: debug, info, notice, warn, error, crit,
# alert, emerg.
LogLevel warn

CustomLog ${APACHE_LOG_DIR}/ssl_access.log combined

# SSL Engine Switch:
# Enable/Disable SSL for this virtual host.
SSLEngine on

# do not allow unsecured connections
# SSLRequireSSL
SSLCipherSuite HIGH:MEDIUM
SSLCertificateFile /etc/ssl/certs/cloud.crt
SSLCertificateKeyFile /etc/ssl/private/cloud.key

<FilesMatch "\.(cgi|shtml|phtml|php)$">
SSLOptions +StdEnvVars
</FilesMatch>
<Directory /usr/lib/cgi-bin>
SSLOptions +StdEnvVars
</Directory>

BrowserMatch "MSIE [2-6]" \
nokeepalive ssl-unclean-shutdown \
downgrade-1.0 force-response-1.0
# MSIE 7 and newer should be able to use keepalive
BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown
</VirtualHost>

and activate owncloud by running
 
a2ensite owncloud
service apache2 reload

Then go ahead and configure owncloud by connecting to https://cloud.jhausse.net/ in a web browser.
That's it! Now you've got your own Google Drive, Calendar, Contacts, Dropbox, and Gmail! Enjoy your freshly recovered privacy! :-)

Sync your devices to the cloud


To sync your emails, you can just use your favorite email client: the standard email program on Android or iOS, k9mail, or Thunderbird on your PC. Or you can also use the webmail we set up.

How to sync your calendar and contacts with the cloud is described in the doc of owncloud. On Android, I'm using the CalDAV-Sync and CardDAV-Sync apps which act as bridges between the Android calendar and contacts apps of the phone and the owncloud server.

For files, there is an Android app called Owncloud to access your files from your phone and automatically upload pictures and videos you take to your cloud. Accessing your files on your Mac/PC is easy and well described in the Owncloud documentation.

Last tips


During the first few weeks, it's a good idea to monitor /var/log/syslog and /var/log/mail.log on a daily basis and make sure everything is running smoothly. It's important to do so before you invite others (friends, family, ...) to be hosted on your server; you might lose their trust in self-hosting for good if they trust you with their data and the server suddenly becomes unavailable.

To add another email user, just add a row to the virtual_users table of the mailserver DB.

To add a domain, just add a row to the virtual_domains table. Then update /etc/opendkim/SigningTable to get outgoing emails signed, upload the OpenDKIM key to the zone, and reload OpenDKIM.
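For illustration, adding a new domain and a user on it might look like this (a sketch only: the column names and the password hashing scheme must match whatever you chose when creating the mailserver DB earlier in this article; here I assume virtual_domains(id, name) and virtual_users(domain_id, email, password) with SHA512-crypt password hashes):
 
mysql -u root -p mailserver <<'EOF'
INSERT INTO virtual_domains (name) VALUES ('newdomain.net');
INSERT INTO virtual_users (domain_id, email, password)
  SELECT id, 'bob@newdomain.net',
         ENCRYPT('ChangeMe!42', CONCAT('$6$', SUBSTRING(SHA(RAND()), -16)))
  FROM virtual_domains WHERE name = 'newdomain.net';
EOF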

Owncloud has its own user DB which can be managed by logging in in Owncloud as administrator.

Finally, it's important to think in advance of a solution in case your server becomes temporarily unavailable. For instance, where would your mails go until your server returns? One solution would be to find a friend who can act as your backup MX, while you act as his backup MX (see the relay_domains and relay_recipient_maps settings in Postfix's main.cf file). Similarly, what if your server is compromised and a malicious individual erases all your files there? For that, it's important to think of a regular backup system. Linode offers backups as an option. On 1984.is, I set up a basic but sufficient automatic backup system using crontab and scp.
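As an illustration, a nightly crontab entry along those lines could look like this (the paths, user, and remote host are all hypothetical; run crontab -e and adapt to your own layout, and note that % must be escaped in crontabs):
 
# back up mail and config at 03:30 every night, then copy it off-site with scp
30 3 * * * tar czf /tmp/mail-backup.tar.gz /var/mail /etc/postfix /etc/dovecot && scp /tmp/mail-backup.tar.gz backup@friend.example.org:backups/mail-$(date +\%F).tar.gz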

20 Unix Command Line Tricks – Part I

$
0
0
http://www.cyberciti.biz/open-source/command-line-hacks/20-unix-command-line-tricks-part-i

Let us start the new year with these Unix command line tricks to increase productivity at the Terminal.

I have found them over the years and I'm now going to share them with you.


Deleting a HUGE file

I had a huge 200GB log file that I needed to delete on a production web server. My rm and ls commands crashed, and I was afraid the huge disk I/O load would bring the system to a crawl. To remove such a HUGE file, enter:
 
> /path/to/file.log
# or use the following syntax
: > /path/to/file.log
 
# finally delete it
rm /path/to/file.log

Want to cache console output?

Try the script command line utility to create a typescript of everything printed on your terminal.
 
script my.terminal.session

Type commands:
 
ls
date
sudo service foo stop

To exit (to end script session) type exit or logout or press control-D
 
exit

To view type:
 
more my.terminal.session
less my.terminal.session
cat my.terminal.session

Restoring deleted /tmp folder

As my journey with the Linux and Unix shell continues, I have made a few mistakes. I accidentally deleted the /tmp folder. To restore it, all you have to do is:
 
mkdir /tmp
chmod 1777 /tmp
chown root:root /tmp
ls -ld /tmp

Locking a directory

For privacy of my data I wanted to lock down /downloads on my file server. So I ran:
 
chmod 0000 /downloads

The root user still has access, but for everyone else the ls and cd commands will not work. To go back to normal:
 
chmod 0755 /downloads

Password protecting file in vim text editor

Afraid that the root user or someone else may snoop into your personal text files? Try adding password protection to a file in vim; type:
 
vim +X filename
 
Or, before quitting in vim use :X vim command to encrypt your file and vim will prompt for a password.
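Note that older Vims default to the weak 'zip' crypt method. If your Vim is 7.4.399 or newer, you can select a stronger cipher before saving (this is standard Vim behaviour, not specific to this tip):
 
:setlocal cm=blowfish2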

Clear gibberish all over the screen

Just type:
 
reset

Becoming human

Pass the -h or -H (and other) options to GNU or BSD utilities to get the output of commands like ls, df, and du in human-readable formats:
 
ls -lh
# print sizes in human readable format (e.g., 1K 234M 2G)
df -h
df -k
# show output in bytes, KB, MB, or GB
free -b
free -k
free -m
free -g
# print sizes in human readable format (e.g., 1K 234M 2G)
du -h
# get file system perms in human readable format
stat -c %A /boot
# compare human readable numbers
sort -h file
# display the CPU information in human readable format on a Linux
lscpu
lscpu -e
lscpu -e=cpu,node
# Show the size of each file but in a more human readable way
tree -h
tree -h /boot

Show information about known users on a Linux based system

Just type:

## linux version ##
lslogins
 
## BSD version ##
logins
 
Sample outputs:

UID USER      PWD-LOCK PWD-DENY LAST-LOGIN GECOS
0 root 0 0 22:37:59 root
1 bin 0 1 bin
2 daemon 0 1 daemon
3 adm 0 1 adm
4 lp 0 1 lp
5 sync 0 1 sync
6 shutdown 0 1 2014-Dec17 shutdown
7 halt 0 1 halt
8 mail 0 1 mail
10 uucp 0 1 uucp
11 operator 0 1 operator
12 games 0 1 games
13 gopher 0 1 gopher
14 ftp 0 1 FTP User
27 mysql 0 1 MySQL Server
38 ntp 0 1
48 apache 0 1 Apache
68 haldaemon 0 1 HAL daemon
69 vcsa 0 1 virtual console memory owner
72 tcpdump 0 1
74 sshd 0 1 Privilege-separated SSH
81 dbus 0 1 System message bus
89 postfix 0 1
99 nobody 0 1 Nobody
173 abrt 0 1
497 vnstat 0 1 vnStat user
498 nginx 0 1 nginx user
499 saslauth 0 1 "Saslauthd user"

How do I fix mess created by accidentally untarred files in the current dir?

So I accidentally untarred a tarball in the /var/www/html/ directory instead of /home/projects/www/current, which created a mess in /var/www/html/. The easiest way to fix this mess:
 
cd /var/www/html/
# list the archive's contents and remove exactly those paths
tar ztf /path/to/file.tar.gz | xargs -d '\n' rm -f
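If you'd rather see what is going to be removed before pulling the trigger, list the tarball's contents first with the same tar flags:
 
tar ztf /path/to/file.tar.gz | less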

Confused on a top command output?

Seriously, you need to try out htop instead of top:
sudo htop

Want to run the same command again?

Just type !!. For example:
 
/myhome/dir/script/name arg1 arg2
 
# To run the same command again
!!
 
## To run the last command again as the root user
sudo !!
 
The !! repeats the most recent command. To run the most recent command beginning with "foo":

!foo
# Run the most recent command beginning with "service" as root
sudo !service
 
Use !$ to run a command with the last argument of the most recent command:
 
# Edit nginx.conf
sudo vi /etc/nginx/nginx.conf
 
# Test nginx.conf for errors
/sbin/nginx -t -c /etc/nginx/nginx.conf
 
# After testing a file with "/sbin/nginx -t -c /etc/nginx/nginx.conf", you
# can edit file again with vi
sudo vi !$

Get a reminder you when you have to leave

If you need a reminder to leave your terminal, type the following command:
 
leave +hhmm
 
Where,
  • hhmm - The time of day in the form hhmm, where hh is the hour (on a 12 or 24 hour clock) and mm the minutes. All times are converted to a 12 hour clock and assumed to be within the next 12 hours.
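For example (both forms are standard leave(1) usage):
 
leave 1730
# remind me to leave at 5:30pm
leave +0200
# remind me in two hours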

Home sweet home

Want to go back to the directory you were just in? Run:
 
cd -

Need to quickly return to your home directory? Enter:
 
cd

The CDPATH variable defines the search path for the cd command:
 
export CDPATH=/var/www:/nas10

Now, instead of typing cd /var/www/html/ I can simply type the following to cd into /var/www/html:
 
cd html

Editing a file being viewed with less pager

To edit a file being viewed with the less pager, press v. The file opens for editing in the editor set by $EDITOR:
 
less *.c
less foo.html
## Press v to edit file ##
## Quit from editor and you would return to the less pager again ##
 

List all files or directories on your system

To see all of the directories on your system, run:
 
find / -type d | less
 
# List all directories in your $HOME
find $HOME -type d -ls | less

To see all of the files, run:
 
find / -type f | less
 
# List all files in your $HOME
find $HOME -type f -ls | less

Build directory trees in a single command

You can create an entire directory tree in one go using the mkdir command with the -p option:
 
mkdir -p /jail/{dev,bin,sbin,etc,usr,lib,lib64}
ls -l /jail/

Copy file into multiple directories

Instead of running:
 
cp /path/to/file /usr/dir1
cp /path/to/file /var/dir2
cp /path/to/file /nas/dir3

Run the following command to copy file into multiple dirs:
 
echo /usr/dir1 /var/dir2 /nas/dir3 | xargs -n 1 cp -v /path/to/file
Creating a shell function is left as an exercise for the reader
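A minimal sketch of such a function (the name cpmulti is my own invention, not a standard command):
 
cpmulti() {
  # copy the first argument into each of the remaining directories
  local file="$1"; shift
  for dir in "$@"; do
    cp -v "$file" "$dir"
  done
}
# usage: cpmulti /path/to/file /usr/dir1 /var/dir2 /nas/dir3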

Quickly find differences between two directories

The diff command compares files line by line. It can also compare two directories:
 
ls -l /tmp/r
ls -l /tmp/s
# Compare two folders using diff ##
diff /tmp/r/ /tmp/s/
 
Fig.: Finding differences between folders

Text formatting

You can reformat each paragraph with fmt command. In this example, I'm going to reformat file by wrapping overlong lines and filling short lines:
 
fmt file.txt

You can also split long lines without refilling them, i.e., wrap overlong lines but do not fill short lines:
 
fmt -s file.txt

See the output and write it to a file

Use the tee command as follows to see the output on screen and also write to a log file named my.log:
 
mycoolapp arg1 arg2 input.file | tee my.log

The tee command ensures that you see mycoolapp's output on the screen and write it to the file at the same time.

The Best Free Tools for Creating a Bootable Windows or Linux USB Drive

$
0
0
http://www.howtogeek.com/127377/the-best-free-tools-for-creating-a-bootable-windows-or-linux-usb-drive

If you need to install Windows or Linux and you don’t have access to a CD/DVD drive, a bootable USB drive is the solution. You can boot to the USB drive, using it to run the OS setup program, just like a CD or DVD.

We have collected some links to free programs that allow you to easily setup a USB drive to install Windows or Linux on a computer.

NOTE: If you have problems getting the BIOS on your computer to let you boot from a USB drive, see our article about booting from a USB drive even if your BIOS won’t let you.

Windows 7 USB/DVD Download Tool

The Windows 7 USB/DVD Download Tool is an official, freeware tool from Microsoft that allows you to install Windows 7 and Windows 8 without having to first run an existing operating system on your computer. You can change the boot order of the drives in your computer’s BIOS so the Windows 7 installation on your USB drive runs automatically when you turn on your computer. Please see the documentation for your computer for information about how to access BIOS and change the boot order of drives.

WiNToBootic

WiNToBootic is another free tool that allows you to create a bootable USB flash drive for installing Windows 7 or Windows 8. It supports an ISO file, a DVD, or a folder as the boot disk source. It’s a standalone tool that doesn’t require installation and it operates very fast.

Windows Bootable Image (WBI) Creator

WBI Creator is a free program that allows you to create a bootable ISO image from Windows XP, Vista, and Windows 7 setup files. It’s a portable tool that’s easy to use. Simply tell the tool where the Windows setup files are and select a target folder for the new ISO file that will get created. Then, you can use one of the other tools mentioned in this article to setup a bootable USB flash drive or CD/DVD for use in setting up a Windows system.

WinToFlash

WinToFlash is a free, portable tool that allows you to create a bootable USB flash drive from a Windows XP, Vista, Windows 7, Server 2003, or Server 2008 installation CD or DVD. It will also transfer a Windows pre-install environment (WinPE), which is handy for troubleshooting and repairs, to a USB flash drive. You can even use WinToFlash to create an MS-DOS bootable USB drive.

XBoot

XBoot is a free utility for creating multiboot USB flash drives or ISO image files. This allows you to combine multiple ISO files (Linux, utilities, and antivirus rescue CDs) onto one USB drive or ISO file, allowing you to create a handy utility drive. Simply drag and drop the ISO files onto the XBoot window and click Create ISO or Create USB.
NOTE: XBoot requires .NET Framework 4.0 (Standalone installer or Web installer) to be installed on your system to run.

UNetbootin

UNetbootin is a free program for Windows, Linux, and Mac OS X that allows you to create bootable Live USB drives for Ubuntu, Fedora, and other Linux distributions instead of burning a CD. Either use UNetbootin to download one of the many Linux distributions it supports or provide the location of your own Linux ISO file.
NOTE: The resulting USB drive is only bootable on PCs, not Macs.

Ubuntu Startup Disk Creator

The Ubuntu Startup Disk Creator allows you to convert a USB flash drive or SD card into a drive from which you can run your Ubuntu system. You don’t have to dedicate the whole drive to the Ubuntu system. You can store other files in the remaining space.
The program also allows you to create a drive for Debian, or any other Debian-based OS for which you have a CD or .iso image.

Universal USB Installer

Universal USB Installer is a program that allows you to choose from several Linux distributions to install on a USB flash drive. Select the Linux distribution, provide a location for the appropriate ISO file, select your USB flash drive, and click Create.
NOTE: The USB flash drive must be formatted as a Fat16, Fat32, or NTFS drive.

Rufus

Rufus is a small, portable program that allows you to create bootable USB drives for Windows and Linux. It also allows you to check the USB device for bad blocks, using up to four passes. Rufus runs in both 32-bit and 64-bit versions of Windows XP, Windows Vista, Windows 7, and Windows 8. You can create bootable USB drives for the listed versions of Windows, as well as almost all popular Linux distributions, such as Ubuntu, Kubuntu, Fedora, and OpenSUSE.
Rufus is very easy to use and the program looks like the default format window shown in Windows when you format a hard disk partition, USB drive, or other external drive.
In addition to Windows and Linux systems, you can also use Rufus to put utilities on USB drives, such as Parted Magic, Ultimate Boot CD, and BartPE.
If there are any other free tools you’ve found useful for creating bootable USB flash drives, let us know.

What is Anacron and usage of Anacron in Linux

$
0
0
http://www.nextstep4it.com/anacron-and-usage-of-anacron-in-linux

Anacron is a service that runs after every system reboot, checking for any scheduled cron jobs that were to run while the system was down and hence have not yet run. It is triggered from the /etc/cron.hourly/0anacron script, which checks three factors to determine whether to run these missed jobs: the presence of the /var/spool/anacron/cron.daily file, an elapsed time of 24 hours since anacron last ran, and the presence of AC power to the system. If all three factors are affirmative, anacron goes ahead and automatically executes the scripts located in the /etc/cron.daily, /etc/cron.weekly, and /etc/cron.monthly directories, based on the settings and conditions defined in anacron's main configuration file /etc/anacrontab. The default contents of the /etc/anacrontab file are displayed below:
nextstep4it@localhost:~$ cat /etc/anacrontab 
SHELL=/bin/sh
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
RANDOM_DELAY=45
START_HOURS_RANGE=3-22

#period in days   delay in minutes   job-identifier   command
1    5    cron.daily        nice run-parts /etc/cron.daily
7    25    cron.weekly        nice run-parts /etc/cron.weekly
@monthly 45    cron.monthly        nice run-parts /etc/cron.monthly

This file has five environment variables defined: the SHELL and PATH variables set the shell and path to be used for executing the scripts (listed at the bottom of this file); MAILTO defines the username or email address to which any output and error messages are sent; RANDOM_DELAY expresses the maximum random delay in minutes (added to the base delay of the jobs as defined in the second column of the last three lines); and START_HOURS_RANGE states the range of hours during which the jobs may begin.

The last three lines, in the above sample output, define the schedule and the scripts to be executed.
 
The first column represents the period in days (or @daily, @weekly, @monthly, or @yearly) which anacron uses to check whether the specified job has been executed within that period; the second specifies the delay in minutes anacron waits before executing the job; the third is a job identifier; and the fourth specifies the command used to execute the contents of the /etc/cron.daily, /etc/cron.weekly, and /etc/cron.monthly directories. Here the run-parts command is used to execute all files under the three directory locations at the default niceness.
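As an illustration, a hypothetical custom entry following the same four-column format might look like this (the script path and job identifier are made up for this example):
 
# period  delay  job-identifier  command
3  10  backup.custom  /usr/local/bin/backup.sh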

For each job, anacron checks whether the job was run previously in the specified days or period (column 1) and executes it after waiting for the number of minutes (column 2) if it was not. Anacron may be run manually at the command prompt. For example, to run all the jobs that are scheduled in the /etc/anacrontab file but were missed, you can issue the following command:
nextstep4it@localhost:~# anacron

Anacron stores its execution date in the files located in the /var/spool/anacron directory for each defined schedule.

How to apply image effects to pictures on Raspberry Pi

$
0
0
http://xmodulo.com/apply-image-effects-pictures-raspberrypi.html

Like a common pocket camera with built-in functions for adding various effects to captured photos, the Raspberry Pi camera board ("raspi cam") can actually do the same. With the help of raspistill camera control options, we can add the image effects function like we have in a pocket camera.

There are three command-line applications which can be utilized for taking videos or pictures with the raspi cam, and one of them is the raspistill application. The raspistill tool offers various camera control options such as sharpness, contrast, brightness, saturation, ISO, exposure, automatic white balance (AWB), and image effects.

In this article I will show how to apply exposure, AWB, and other image effects with raspistill while capturing pictures using raspi cam. To automate the process, I wrote a simple Python script which takes pictures and automatically applies a series of image effects to the pictures. The raspi cam documentation describes available types of the exposure, AWB, and image effects. In total, the raspi cam offers 16 types of image effects, 12 types of exposure, and 10 types of AWB values.

The simple Python script looks like the following.
#!/usr/bin/python
import os
import time
import subprocess

list_ex = ['auto','night']
list_awb = ['auto','cloud','flash']
list_ifx = ['blur','cartoon','colourswap','emboss','film','gpen','hatch','negative','oilpaint','posterise','sketch','solarise','watercolour']
x = 0
for ex in list_ex:
    for awb in list_awb:
        for ifx in list_ifx:
            x = x + 1
            filename = 'img_' + ex + '_' + awb + '_' + ifx + '.jpg'
            cmd = 'raspistill -o ' + filename + ' -n -t 1000 -ex ' + ex + ' -awb ' + awb + ' -ifx ' + ifx + ' -w 640 -h 480'
            pid = subprocess.call(cmd, shell=True)
            print "[" + str(x) + "]-" + ex + "_" + awb + "_" + ifx + ".jpg"
            time.sleep(0.25)
print "End of image capture"

The Python script operates as follows. First, create three list variables for the exposure, AWB, and image effect values. In the example, we use 2 types of exposure, 3 types of AWB, and 13 types of image effects. Then make nested loops to apply every combination of the three variables. Inside the nested loop, execute the raspistill application. We specify (1) the output filename; (2) the exposure value; (3) the AWB value; (4) the image effect value; (5) the time to take a photo, which is set to 1 second; and (6) the size of the photo, which is set to 640x480px. This Python script will create 78 different versions of a captured photo from the combinations of 2 exposures, 3 AWB values, and 13 image effects.

To execute the Python script, simply type:
$ python name_of_this_script.py

Here is the first round of the sample result.

Bonus

For those who are more interested, there is another way to access and control the raspi cam besides raspistill. Picamera is a pure Python interface which provides APIs for accessing and controlling the raspi cam, so that one can build more complex programs around the raspi cam according to their needs. If you are skilled at Python, picamera is a good feature-complete interface for implementing your raspi cam project. The picamera interface is included by default in recent images of Raspbian. If your Raspberry Pi operating system is not new or not Raspbian, you can install it on your system as follows.

First, install pip on your system by following this guideline.

Then, install picamera as follows.
 
$ sudo pip install picamera

Refer to the official documentation on how to use picamera.

A cloud management tool for simple deployments

$
0
0
http://opensource.com/business/15/1/virtkick-new-cloud-management-tool


Image by : 
opensource.com

For the past few years, cloud has been one of the biggest buzzwords among technology enthusiasts. Whether you want data accessibility across devices, need computation power for your business, or want to develop applications, the cloud can help you.

With the growing adoption of cloud computing, almost everyone from individuals to large corporations is leveraging it. For example, CERN, the famous European nuclear lab, uses OpenStack to manage its IT infrastructure. Several open source projects related to cloud computing have also come up in the last few years, prominent among them ownCloud and OpenStack.

Setting up and managing cloud infrastructure can have its own set of challenges. For a novice who has little or no technical background, it can take a lot of work to set up a cloud by themselves.

Another challenge, at least with public clouds, is privacy. Once data is uploaded to a cloud server, you can't really be sure of who is seeing it. The recent iCloud and Sony hacking attacks have shed some light on security best practices in the cloud.

What is Virtkick?

Virtkick, a new cloud infrastructure management tool, aims to solve these two main problems. Its founders claim Virtkick is dead easy to install and use, and by hosting your cloud infrastructure on your own premises you can be sure that no one else has access to your data. Let's first take a look at how the cloud works and where Virtkick fits in.

You may be aware that a cloud is actually a network of computers working together to give users the illusion of interacting with a single computer. To simplify things, you can imagine three layers: first the hardware (the actual servers), second the virtualization layer (the layer which creates the illusion), and third the software layer (software like ownCloud) that users interact with. As you'll have guessed by now, Virtkick belongs to the second layer. But there is more to it: not only does it help in virtualizing the hardware, it also serves as the panel for managing the different virtual machines you create, in a simple and easy to use interface.

See the Virtkick demo.


Some of the interesting Virtkick features are:
  1. Mount ISO packages: Install most popular systems, Ubuntu, CentOS, Arch and more.
  2. Install appliances: Deliver appliances with various Linux distributions (coming soon).
  3. One click integration with sandstorm (coming soon).
Virtkick also opens up a new opportunity for Virtual Private Server (VPS) and data center professionals. It ships with optional e-commerce features, so VPS providers can use them to run their business and sell virtual machines. Since VirtKick is all free, it makes obsolete the costly software that is needed to run such a business, so providers can offer more for less.

Download Virtkick from their GitHub page.

Launched on October 29, 2014, VirtKick has already received 1000 email subscriptions, around 800 GitHub downloads, and 15,000 visitors. A successful example of crowdfunding, Virtkick raised $4,216 via IndieGoGo and $18k from a startup accelerator, taking the total amount to $22,216. Though the IndieGoGo campaign is over now, you can still contribute. You can also read about the team behind Virtkick.

Let us know your views in the comments!

Slow System? iotop Is Your Friend

$
0
0
 http://www.linuxjournal.com/content/slow-system-iotop-your-friend

 Back in 2010, Kyle Rankin did an incredible series on Linux Troubleshooting. In Part 1, he talked about troubleshooting a system struggling with a high load. At that point, I'd already been a system administrator for more than a decade, but it was the first time I'd ever heard of iotop.

If you weren't a subscriber in 2010, I highly recommend you read Kyle's entire series. Either way, I use iotop so often, I felt it was prudent to mention it again all these years later. The concept is pretty simple. It's like the top program, but instead of CPU and memory usage, it monitors disk I/O. If you have a system that is extremely slow to respond, but can't seem to figure out what is going on, give iotop a try. You'll probably have to install it, as I've never found a system with iotop installed by default, but it should be in the software repository of just about every Linux distro. And, if you find it useful? Be sure to read Kyle's entire series; it's just as helpful today as it was five years ago!
Figure 1. The Bitcoin dæmon is notorious for using a lot of disk I/O.
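A couple of handy invocations, for reference (these flags are documented in iotop's man page):
 
sudo iotop -o
# only show processes or threads actually doing I/O
sudo iotop -oPa
# per-process view, with I/O accumulated since iotop started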

The current state of video editing for Linux

$
0
0
http://opensource.com/life/15/1/current-state-linux-video-editing

Image by : 
opensource.com
I often ask myself what the current state of video editing is for free and open source software (FOSS). Here are my thoughts.
I've spent many years in the visual effects (VFX) industry from the perspective of being either an artist, compositor, video editor, or systems engineer. (I've even got film creds on IMDB!) In the past, I had the pleasure of cutting on, training people on, setting up, and supporting Avid Media Composer, the cream of the crop of professional real-time video editing tools for film and TV alike—at least before things like Final Cut Pro and Adobe Premiere became useful enough to professionals.
In the VFX industry these three tools are used extensively among studios for cutting video and film. They are very simple to use for noobs and professionals alike, and can be pushed very far in the hands of guru artists. For most of the last 30 years, the VFX industry has been reliant on Mac and PC for video editing, primarily because all of the Linux-based FOSS tools have been less than great. This is a shame because all of the best 3D and 2D tools, other than video, are entrenched in the Linux environment and perform best there. The lack of decent video editing tools on Linux prevents every VFX studio from becoming a Linux-only shop.
That being said, there are some strides being made to bridge this gap, as I discovered over the last few weeks. They are not Hollywood-big, production-ready strides, but they are useful enough for what I need to do, which is basically a bunch of training and demo videos as a Senior Systems Engineer for Red Hat's Systems Engineering EngOps team.
 
I've installed and tested a number of tools before overcoming my fear of learning how to edit video in Blender. (When I first looked at it, the program seemed convoluted.) So, here's an account of the tools I looked at and what I thought about them. Let me qualify this by letting you know that I'm currently running Fedora 21, KDE, and Gnome (because I can't decide which to stick with) on a Lenovo T440s with an Intel Corporation Haswell-ULT Integrated Graphics Controller (so, no accelerated OpenGL, unfortunately). I approached this as an impatient artist would when trying to find THE tool for the job, with no time for messing about for little or no results.

Pitivi

Pitivi was recommended to me, so it was the first app I tried out. It's written in Python, so I thought maybe I can have fun with scripting this because I have a specific thing I'd like to do with overlaying timecode over the video based on the frame count showing actual passage of time regardless of the cuts made to the clip. (It's a demo thing.) It looked great and professional-esque, almost Avid/premiere like. So, I brought in a video clip... and CRASH! I opened it again, brought in a clip, no crash, so that's great. I added another video track... and CRASH! I tried at least 15 more times before giving up on it. And it's a shame, because it looks like it has potential to be simple to use and not overly garish.
I'll try again when version 1.0 is released. Normally, I persevere with beta versions because I've been involved with beta testing software all of my professional life, but this was frustrating and I wasn't getting anywhere.

OpenShot

For OpenShot: Open it, check. Bring in video, check. Cut video into timeline, check. Playback video, check. Add a title and hit render, then I waited... and waited... and waited. Then, I checked htop: nothing was happening, but I couldn't cancel out of the render. CRASH!! Oh no.
So, my take was that maybe this one can do the job if you don't want titles? Its free, closed-source competitor may possibly be more useful? I don't know, but I moved on.

Lightworks

With Lightworks, I thought: now we're talking. Lightworks played a very large part in the professional video market about 10 years ago and was used by many PC based studios. It has cut some really cool films along the way and was very expensive then as I recall. So, these days they have released a free version for all platforms. This version gives you all the rudimentary things that you may want, and there's an RPM or deb download available. It installed without issues, then when I double-clicked the icon, nothing happened. No OpenGL, no video, no worky.
Could someone try this out and tell me what it's like? Or, if you're feeling generous, throw me a nifty laptop with at least a Nvidia 870M in it please.

Avidemux

For Avidemux, I installed it and opened it. Are people using this for editing? I looked at it because so many other writeups mention it as an editor, which it most definitely isn't. I moved on.

Cinelerra

For Cinelerra, I tried to download it and found the homepage had no download link (at the time). I noted that the team there seems very focused on the Ubuntu user. Then, I downloaded, extracted, and opened it. I brought some video in, hit the garish, big green tick to accept the import, hit play, and found that it didn't work. Bummer.

KDEnlive

KDEnlive is a relatively new discovery for me. I installed it, opened it, lay down some tracks, and cut with my "industry standard" keyboard shortcuts. All seemed pretty smooth. So, then I overlayed the end of one video over the start of another video track so that I could apply a transition, but I couldn't find any. The list of transitions was bare. Hmmm, maybe I have to go back and find out why this is.
So, I'll report back later on this.

Blender

By the time I got to Blender, I was really starting to get disheartened. I've looked at Blender in the past but it was a totally different paradigm than anything I had used before professionally. For a start, the keys were all wrong. But, I was back and not about to be defeated. I searched YouTube for something to help, something that wouldn't take me 365 days to go through the basics.
Here's a list of a few that I found useful. And, after about 30 mins of watching, I got started.
I imported the video clips that I needed, check. I laid down the first video track, check. I played the clip back in the player/viewer, check. I was beginning to get excited. I started cutting my 45 minute clip down to 5 minutes. Blender has markers: awesome! Cutting long clips without markers is an exercise in futility. Avid started the marker trend and it was a godsend. By using markers with the "m" key you can start to map out in real-time, while you're watching, where you want the cuts to happen. And once you're done watching through, you can skip to each marker and make a cut. You can then non-destructively delete the clips that you just cut, and automatically close the gap between each of the cuts so you're not screwing around trying to line up the ends of each consecutive clip.
Creating transitions was really simple too and reminded me of using Adobe Premiere. There are some "normal" transitions too, ones that you would expect to see on a film or TV drama, rather than just the "fractal swirl-over fade-back bubble" transition that all of the other apps seem to love.
Another nice thing about Blender is that the audio can be unlinked from the video. There are many uses for this, and I was happy to see that I could do it so easily. The next thing I tried was titling. You can go the 2D or 3D route. I chose the 3D route as this gives you much more flexibility for reuse. So, I overlayed this over the video perfectly, then chose the format and size that I wanted to render out with, and hit the GO button. It rendered out fast and perfectly.

The winner

I have found my new, open source video editor: Blender! It's not Avid, FCP, or Premiere, but it's more than that. It's a true suite of tools that I would say can go head to head with the best of what I've used in the VFX industry. And, I'm genuinely surprised!
One more great thing about Blender: it's fully scriptable in Python. Wow.

How to ping a specific port of a remote host

$
0
0
http://xmodulo.com/how-to-ping-specific-port-of-remote-host.html

ping is a networking utility used to test the reachability and round-trip time (RTT) delay of a remote host over Internet Protocol (IP).

The ping utility does so by sending out a series of Internet Control Message Protocol (ICMP) echo request packets to a remote host, and waiting for corresponding ICMP response packets from the host.

However, you cannot probe a specific port with the ping command, because ICMP is a layer-3 (IP layer) protocol and has no notion of the port numbers used by layer-4 transport protocols (e.g., TCP/UDP).

In order to ping a specific port of a remote host, you need to use a layer-4 transport protocol which has a notion of port numbers.

There are many command-line tools in Linux that read from and write to network connections using TCP or UDP. You can use some of these tools to ping a remote port, as described below.
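As a quick illustration of the idea before we get to the dedicated tools: bash itself can open a TCP connection to a given port through its /dev/tcp pseudo-device (bash-only, TCP-only, and it reports reachability rather than latency):
 
# succeeds (exit 0) if the TCP handshake to port 80 completes within 3 seconds
timeout 3 bash -c 'cat < /dev/null > /dev/tcp/www.xmodulo.com/80' && echo open || echo closed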

Method One

nmap is a network mapper utility used to check the availability of hosts and their services. Using nmap, you can check whether a specific TCP/UDP port of a remote host is open or not.

To ping a TCP port of a remote host using nmap:
 
$ nmap -p 80 -sT www.xmodulo.com
Starting Nmap 5.00 ( http://nmap.org ) at 2012-08-29 13:43 EDT
Interesting ports on www.xmodulo.com:
PORT STATE SERVICE
80/tcp closed http

Nmap done: 1 IP address (1 host up) scanned in 0.14 seconds

To ping a UDP port of a remote host using nmap:
$ sudo nmap -p 80 -sU www.xmodulo.com
Starting Nmap 5.00 ( http://nmap.org ) at 2012-08-29 13:47 EDT
Interesting ports on www.xmodulo.com:
PORT STATE SERVICE
80/udp closed http

Nmap done: 1 IP address (1 host up) scanned in 0.11 seconds

Note that unlike TCP case, you need root privilege to send out raw UDP packets using nmap.

Method Two

netcat is another powerful tool, nicknamed the "swiss-army knife" of networking. Among its rich features, netcat can do port scanning as follows.

To ping a TCP port of a remote host using netcat:
 
$ nc -zvv mit.edu 80
DNS fwd/rev mismatch: mit.edu != WEB.MIT.EDU
mit.edu [18.9.22.69] 80 (www) open
sent 0, rcvd 0

To ping a UDP port of a remote host using netcat:
 
$ nc -zuvv mit.edu 80
DNS fwd/rev mismatch: mit.edu != WEB.MIT.EDU
mit.edu [18.9.22.69] 80 (www) open
sent 0, rcvd 0

Method Three

The above tools only check whether a given port number is open or closed; they do not measure the RTT delay the way the original ping utility does. If you would like to actually measure network latency as well, you can use paping, a cross-platform TCP port testing tool.
 
$ paping mit.edu -p 80 -c 3
 

5 new guides for using OpenStack

$
0
0
http://opensource.com/business/15/1/openstack-tutorials

Image by : 
opensource.com
Are you interested in creating an open source cloud using the latest and greatest that OpenStack has to offer? We're here to help. We have gathered some of the best howtos, guides, tutorials, and tips published over the past month into this easy-to-use collection. Check out the list, get ready to learn, and if you get tripped up, remember that the official documentation for OpenStack is there to help.

December was a bit slower than some previous months on the OpenStack front, but we've still managed to bring together some great finds for you. This month, we look at migrating Ceph volumes, using the serial console in Nova, getting started with Heat, and more.
  • First up, a quick post from Sébastien Han about how to import existing Ceph volumes in Cinder. If you're moving from one OpenStack installation to another for testing, upgrades, or just a general need to relocate, this may save you some time and trouble.
  • Next, a great piece from Lars Kellogg-Stedman on how to use the serial console feature for accessing Nova servers. A new feature added in the Juno release, the serial console support is easy to set up and use, if you know how.
  • We've covered Heat, the OpenStack orchestration project, a number of times here on Opensource.com. Here, Arthur Berezin walks you through the basics in his guide to getting started with Heat, which takes you through installation, a 'hello world' application, and running heat from the command line. Berezin also looks at some of the new features available in the Juno release.
  • Have a NetApp storage device in your infrastructure setup? NetApp has a new guide to using PackStack to install Cinder with one of their storage backends.
  • Another great new resource is Emily Hugenbruch's guide to testing in OpenStack, which gives some great ways for developers to test their code. Hugenbruch writes "As you begin creating patches for OpenStack, you have two choices: you can run some unit tests yourself and try to figure out your errors before taking them to the community, or you can write the code and then just throw it out there, hoping that reviewers and Jenkins will catch all your bugs. I highly recommend the first option. The second option will only make community members annoyed with you because they have to read your buggy code!"
That's it for this month. Check out our past OpenStack tutorials collection for more great guides and hints. And if we missed your favorite new guide or resource, let us know in the comments!

Going open source on Android with F-Droid

$
0
0
https://opensource.com/life/15/1/going-open-source-android-f-droid

Image by : 
opensource.com

Android. It can be a divisive word in the free and open source software world. Some embrace it, others shun it. Some still use open versions of Android like Cyanogenmod and Replicant. If you do use an Android device—no matter what version of the operating system it is—there's one thing that you need to get the most out of your device: apps. There's just no way around that.
Most people grab their Android apps from the Google Play Store. Some might install apps from the Amazon Appstore or another third-party source. A majority of the apps that you get from Google and Amazon's app stores are proprietary, and many of them collect a lot of information about you.
So what choice do you have if you want to use Android and keep your apps as free and open as possible? You turn to F-Droid.

F-Droid?

F-Droid is:
an installable catalogue of FOSS (Free and Open Source Software) applications for the Android platform.
All of the apps are FOSS and only FOSS. The source code is available, with app listings often pointing you to where you can download it. F-Droid also warns you if an app uses or relies upon a proprietary service.
F-Droid Non-Free Warning

Getting going

You can do this in two ways: Download and install the F-Droid client, or download the .apk installer for the app that you want and install it by hand. In either case, you might need to enable the installation of third-party apps on your device. To do that, tap settings. Then, tap security. Finally, select the unknown sources option.
I prefer to use the F-Droid client because it makes searching for and updating apps a lot easier.

Using F-Droid

Let's assume that you plan to use the F-Droid client to install your apps. Once you've installed the client, fire it up. It may take a few moments (or longer) for the database of apps to refresh and to check whether or not the apps in the catalogue are compatible with your device.
F-Droid Main Window
Once that's done, you're ready to go.

Apps, apps, apps

The first thing you'll notice is the number of apps available with F-Droid: just over 1,300, in contrast to the hundreds of thousands that are available in the Google Play Store. You won't find many of the popular apps that you may have grown to know and love in the F-Droid repository, but that doesn't mean you won't recognize some. These include Firefox, ownCloud, VLC Media Player, DuckDuckGo, K-9 Mail, and FBReader.
The apps are divided into a dozen categories, ranging from education to games, to internet, to office and productivity apps. Tap the right-facing triangle in the top corner of the F-Droid window to open a list of the categories.
F-Droid Categories
From there, tap a category and then tap the app that you want to install. F-Droid downloads the installer. You'll be shown a list of the permissions the app needs (if any) and whether or not you want to proceed. When you're ready, tap Install.

Maintenance and such

The main window of the F-Droid app has tabs that list the available apps, the apps you've installed, and the ones you've installed that have updates available. You can remove unwanted apps and update installed apps, with just a tap (or two). You'll also want to keep the list of available apps up to date. To do that, tap the Refresh icon to load an updated listing.
So why would you want to do this, besides a desire to be on the bleeding edge? While new apps aren't added to F-Droid all that regularly, the number of apps available through F-Droid jumped from about 1,200 to 1,340 in the past month and a half or so.
F-Droid may not have the breadth of apps available in the Google Play Store and other third-party Android software libraries, but if you want to use as many free and open source apps with your Android device as you can, then it's an option you'll want to explore.

A Shell Primer: Master Your Linux, OS X, Unix Shell Environment

$
0
0
http://www.cyberciti.biz/howto/shell-primer-configuring-your-linux-unix-osx-environment

On Linux or Unix-like systems, each user and process runs in a specific environment. An environment includes variables, settings, aliases, functions and more. Following is a very brief introduction to some useful shell environment commands, including examples of how to use each command and set up your own environment to increase productivity at the command prompt.
bash-shell-welcome-image

Finding out your current shell

Type any one of the following commands at the Terminal app:
ps $$
ps -p $$
OR
echo"$0"
Sample outputs:
Fig.01: Finding out your shell name

Finding out installed shells

To find out the full path for an installed shell, type:
type -a zsh
type -a ksh
type -a sh
type -a bash
Sample outputs:
Fig.02: Finding out your shell path

The /etc/shells file contains a list of the shells on the system. For each shell a single line should be present, consisting of the shell's path, relative to root. Type the following cat command to see shell database:
cat /etc/shells
Sample outputs:
# List of acceptable shells for chpass(1).
# Ftpd will not allow users to connect who are not using
# one of these shells.
 
/bin/bash
/bin/csh
/bin/ksh
/bin/sh
/bin/tcsh
/bin/zsh
/usr/local/bin/fish

Changing your current shell temporarily

Just type the shell name. In this example, I'm changing from bash to zsh:
zsh
You just changed your shell temporarily to zsh. This is also known as a subshell. To exit from the subshell/temporary shell, type the following command or hit CTRL-d:
exit

Finding out subshell level/temporary shell nesting level

The $SHLVL variable is incremented by one each time an instance of bash is started. Type the following command:
echo "$SHLVL"
Sample outputs:
Fig. 03: Bash shell nesting level (subshell numbers)

Changing your current shell permanently with chsh command

Want to change your own shell from bash to zsh permanently? Try:
chsh -s /bin/zsh
Want to change the other user's shell from bash to ksh permanently? Try:
sudo chsh -s /bin/ksh userName

Finding out your current environment

You need to use the env command:
env
env | more
env | less
env | grep 'NAME'
Sample outputs:
TERM_PROGRAM=Apple_Terminal
SHELL=/bin/bash
TERM=xterm-256color
TMPDIR=/var/folders/6x/45252d6j1lqbtyy_xt62h40c0000gn/T/
Apple_PubSub_Socket_Render=/tmp/launch-djaOJg/Render
TERM_PROGRAM_VERSION=326
TERM_SESSION_ID=16F470E3-501C-498E-B315-D70E538DA825
USER=vivek
SSH_AUTH_SOCK=/tmp/launch-uQGJ2h/Listeners
__CF_USER_TEXT_ENCODING=0x1F5:0:0
PATH=/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/opt/X11/bin:/usr/local/go/bin:/usr/local/sbin/modemZapp:/Users/vivek/google-cloud-sdk/bin
__CHECKFIX1436934=1
PWD=/Users/vivek
SHLVL=2
HOME=/Users/vivek
LOGNAME=vivek
LC_CTYPE=UTF-8
DISPLAY=/tmp/launch-6hNAhh/org.macosforge.xquartz:0
_=/usr/bin/env
OLDPWD=/Users/vivek
Here is a table of commonly used bash shell variables:
Fig.04: Common bash environment variables
Warning: It is always a good idea not to change the following environment variables. Changing some of them may result in an unstable session:
SHELL
UID
RANDOM
PWD
PPID
SSH_AUTH_SOCK
USER
HOME
LINENO

Displaying the values of environment variables

Use any one of the following commands to show the value of the environment variable called HOME:
## Use printenv ##
printenv HOME
 
## or use echo ##
echo"$HOME"
 
## or use printf for portability ##
printf "%s\n" "$HOME"
Sample outputs:
/home/vivek

Adding or setting new variables

The syntax is as follows in bash or zsh or sh or ksh shell:
## The syntax is ##
VAR=value
FOO=bar
 
## Set the default editor to vim ##
EDITOR=vim
export EDITOR
 
## Set default shell timeout for security ##
TMOUT=300
export TMOUT
 
## You can directly use the export command to set the search path for commands ##
export PATH=$PATH:$HOME/bin:/usr/local/bin:/path/to/mycoolapps
 
Again, use the printenv or echo or printf command to see the values of the environment variables called PATH, EDITOR, and TMOUT:
printenv PATH
echo "$EDITOR"
printf "%s\n" "$TMOUT"

How do I change an existing environment variable?

The syntax is as follows:
export VAR=value
## OR ##
VAR=value
export VAR
 
## Change the default editor from vim to emacs ##
echo "$EDITOR"  ## <--- prints vim
EDITOR=emacs    ## <--- change it
export EDITOR   ## <--- export it for the next session too
echo "$EDITOR"  ## <--- prints emacs
The syntax is as follows for the tcsh shell for adding or changing a variables:
## Syntax 
setenv var value
printenv var
 
## Set foo variable with bar as a value ##
setenv foo bar
echo"$foo"
printenv foo
 
## Set PATH variable ##
setenv PATH $PATH\:$HOME/bin
echo"$PATH"
 
## set PAGER variable ##
setenv PAGER most
printf"%s\n"$PAGER
 

Finding your bash shell configuration files

Type the following command to list your bash shell files, enter:
ls -l ~/.bash* ~/.profile /etc/bash* /etc/profile
Sample output:
Fig.05: List all bash environment configuration files

To look at all your bash config files, enter:
less ~/.bash* ~/.profile /etc/bash* /etc/profile
You can edit bash config files one by one using the text editor such as vim or emacs:
vim ~/.bashrc
To edit files located in /etc/, type:
## first make a backup... just in case
sudo cp -v /etc/bashrc /etc/bashrc.bak.22_jan_15
 
########################################################################
## Alright, edit it to your hearts content and by all means, have fun ##
## with your environment or just increase the productivity :) ##
########################################################################
sudo vim /etc/bashrc

Confused by Bash shell Initialization files?

The following "bash file initialization" graph will help you:
BashStartupfiles
Depending on which shell is set up as your default, your user profile or system profile can be one of the following:

Finding your zsh shell configuration files

The zsh wiki recommend the following command:
strings =zsh | grep zshrc
Sample outputs:
/etc/zshrc
.zshrc
Type the following command to list your zsh shell files, enter:
ls -l /etc/zsh/* /etc/profile ~/.z*
To look at all your zsh config files, enter:
less /etc/zsh/* /etc/profile ~/.z*

Finding your ksh shell configuration files

  1. See ~/.profile or /etc/profile file.

Finding your tcsh shell configuration files

  1. See ~/.login, ~/.cshrc for the C shell.
  2. See ~/.tcshrc and ~/.cshrc for the TC shell.

Can I have a script like this execute automatically every time I login?

Yes, add your commands or aliases or other settings to ~/.bashrc (bash shell) or ~/.profile (sh/ksh/bash) or ~/.login (csh/tcsh) file.

Can I have a script like this execute automatically every time I logout?

Yes, add your commands or aliases or other settings to ~/.bash_logout (bash) or ~/.logout (csh/tcsh) file.
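For instance, a minimal ~/.bash_logout might just tidy up the screen and keep a trace of logouts (the log file name here is arbitrary):
 
# ~/.bash_logout
clear
echo "logged out at $(date)" >> "$HOME/.logout_history"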

History: Getting more info about your shell session

Just type the history command to see session history:
history
Sample outputs:
    9  ls
10 vi advanced-cache.php
11 cd ..
12 ls
13 w
14 cd ..
15 ls
16 pwd
17 ls
....
..
...
91 hddtemp /dev/sda
92 yum install hddtemp
93 hddtemp /dev/sda
94 hddtemp /dev/sg0
95 hddtemp /dev/sg1
96 smartctl -d ata -A /dev/sda | grep -i temperature
97 smartctl -d ata -A /dev/sg1 | grep -i temperature
98 smartctl -A /dev/sg1 | grep -i temperature
99 sensors
Type history 20 to see the last 20 commands from your history:
history 20
Sample outputs:
Fig.06: View session history in the bash shell using the history command

You can reuse commands. Simply hit the [Up] and [Down] arrow keys to see previous commands. Press [CTRL-r] at the shell prompt to search backwards through the history buffer or file for a command. To repeat the last command, just type !! at a shell prompt:
ls -l /foo/bar
!!
To see command #93 (hddtemp /dev/sda) from the above history session, type:
!93

Changing your identity with sudo or su

The syntax is as follows:
su userName
 
## To log in as user tom ##
su tom
 
## To start a new login shell for user tom ##
su - tom
 
## To log in as the root user ##
su -
 
## The sudo command syntax (must be configured on your system) ##
sudo -s
sudo -u tom -s
 
See "Linux Run Command As Another User" post for more on sudo, su and runuser commands.

Shell aliases

An alias is nothing but a shortcut to a command.

Listing aliases

Type the following command:
alias
Sample outputs:
alias ..='cd ..'
alias ...='cd ../../../'
alias ....='cd ../../../../'
alias .....='cd ../../../../'
alias .4='cd ../../../../'
alias .5='cd ../../../../..'
alias bc='bc -l'
alias cd..='cd ..'
alias chgrp='chgrp --preserve-root'
alias chmod='chmod --preserve-root'
alias chown='chown --preserve-root'
alias cp='cp -i'
alias dnstop='dnstop -l 5 eth1'
alias egrep='egrep --color=auto'
alias ethtool='ethtool eth1'

Create an alias

The bash/zsh syntax is:
alias c='clear'
alias down='sudo /sbin/shutdown -h now'
We defined a c alias for the system command clear, so we can type c instead of clear to clear the screen:
c
Or type down to shut down the Linux based server:
 
down
 
You can create as many aliases as you want. See "30 Handy Bash Shell Aliases For Linux / Unix / Mac OS X" for practical usage of aliases on Unix-like systems.

Shell functions

Bash/ksh/zsh functions allow further customization of your environment. In this example, I'm creating a simple bash function called memcpu() to display the top 10 CPU and memory eating processes:
 
memcpu() { echo "*** Top 10 cpu eating process ***"; ps auxf | sort -nr -k 3 | head -10;
echo "*** Top 10 memory eating process ***"; ps auxf | sort -nr -k 4 | head -10; }
 
Just type memcpu to see the info on screen:
memcpu
 
*** Top 10 cpu eating process ***
nginx    39559 13.0  0.2 264020 35168 ?  S  04:26  0:00  \_ /usr/bin/php-cgi
nginx    39545  6.6  0.1 216484 13088 ?  S  04:25  0:04  \_ /usr/bin/php-cgi
nginx    39471  6.2  0.6 273352 81704 ?  S  04:22  0:17  \_ /usr/bin/php-cgi
nginx    39544  5.7  0.1 216484 13084 ?  S  04:25  0:03  \_ /usr/bin/php-cgi
nginx    39540  5.5  0.1 221260 19296 ?  S  04:25  0:04  \_ /usr/bin/php-cgi
nginx    39542  5.4  0.1 216484 13152 ?  S  04:25  0:04  \_ /usr/bin/php-cgi
nixcraft 39543  5.3  0.1 216484 14096 ?  S  04:25  0:04  \_ /usr/bin/php-cgi
nixcraft 39538  5.2  0.1 221248 18608 ?  S  04:25  0:04  \_ /usr/bin/php-cgi
nixcraft 39539  5.0  0.1 216484 16272 ?  S  04:25  0:04  \_ /usr/bin/php-cgi
nixcraft 39541  4.8  0.1 216484 14860 ?  S  04:25  0:04  \_ /usr/bin/php-cgi
 
*** Top 10 memory eating process ***
498      63859  0.5  4.0 2429652 488084 ? Ssl  2014  177:41 memcached -d -p 11211 -u memcached -m 2048 -c 18288 -P /var/run/memcached/memcached.pid -l 10.10.29.68 -L
mysql    64221  4.2  3.4 4653600 419868 ? Sl   2014 1360:40  \_ /usr/libexec/mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --log-error=/var/log/mysqld.log --open-files-limit=65535 --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/lib/mysql/mysql.sock
nixcraft 39418  0.4  1.1  295312 138624 ? S   04:17   0:02  |  \_ /usr/bin/php-cgi
nixcraft 39419  0.5  0.9  290284 113036 ? S   04:18   0:02  |  \_ /usr/bin/php-cgi
nixcraft 39464  0.7  0.8  294356  99200 ? S   04:20   0:02  |  \_ /usr/bin/php-cgi
nixcraft 39469  0.3  0.7  288400  91256 ? S   04:20   0:01  |  \_ /usr/bin/php-cgi
nixcraft 39471  6.2  0.6  273352  81704 ? S   04:22   0:17  \_ /usr/bin/php-cgi
vivek    39261  2.2  0.6  253172  82812 ? S   04:05   0:28  \_ /usr/bin/php-cgi
squid     9995  0.0  0.5  175152  72396 ? S   2014   27:00  \_ (squid) -f /etc/squid/squid.conf
cybercit  3922  0.0  0.4  303380  56304 ? S   Jan10   0:13  |  \_ /usr/bin/php-cgi
 
See "how to write and use shell functions" for more information.

Putting it all together: Customizing your Linux or Unix bash shell working environment

Now, you are ready to configure your environment using the bash shell. I'm only covering bash, but the theory remains the same for zsh, ksh and other common shells. Let us see how to adapt the shell to my needs as a sysadmin. Edit your ~/.bashrc file and append settings. Here are some useful configuration options for you.

#1: Setting up bash path and environment variables

# Set path ##
export PATH=$PATH:/usr/local/bin:/home/vivek/bin:/opt/firefox/bin:/opt/oraapp/bin
 
# Also set path for cd command
export CDPATH=.:$HOME:/var/www
 
Use the less or most command as a pager:
export PAGER=less
Set vim as the default text editor:
export EDITOR=vim
export VISUAL=vim
export SVN_EDITOR="$VISUAL"
Set Oracle database specific stuff:
export ORACLE_HOME=/usr/lib/oracle/xe/app/oracle/product/10.2.0/server
export ORACLE_SID=XE
export NLS_LANG=$($ORACLE_HOME/bin/nls_lang.sh)
Set JAVA_HOME and other paths for java as per the java version:
export JAVA_HOME=/usr/lib/jvm/java-6-sun/jre
 
# Add ORACLE, JAVA to PATH
export PATH=$PATH:$ORACLE_HOME/bin:$JAVA_HOME/bin
 
Secure my remote SSH login using keychain for passwordless login:
# No need to input password again ever
/usr/bin/keychain $HOME/.ssh/id_rsa
source $HOME/.keychain/$HOSTNAME-sh
Finally, turn on bash command completion:
source /etc/bash_completion

#2: Setting up bash command prompt

Set custom bash prompt (PS1):
PS1='{\u@\h:\w }\$'
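PS1 understands many escape sequences, and you can add ANSI colors too. For example, a prompt with a green user@host and a blue working directory (the colors here are just one possible choice) might look like this:
## Green \u@\h, blue \w, then $ ##
PS1='\[\e[32m\]\u@\h\[\e[0m\]:\[\e[34m\]\w\[\e[0m\]\$ '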

#3: Setting default file permissions

## Set default file creation permissions to 644 (755 for directories) ##
umask 022
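To see the effect, create a test file and directory and inspect their permissions:
umask 022
touch /tmp/demo.txt && mkdir /tmp/demo.d
ls -ld /tmp/demo.txt /tmp/demo.d
## -rw-r--r-- ... /tmp/demo.txt
## drwxr-xr-x ... /tmp/demo.d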

#4: Control your shell history settings

# Don't put duplicate lines in the history
HISTCONTROL=ignoreboth
 
# Ignore these commands
HISTIGNORE="reboot:shutdown *:ls:pwd:exit:mount:man *:history"
 
# Set history length via HISTSIZE and HISTFILESIZE
export HISTSIZE=10000
export HISTFILESIZE=10000
 
# Add timestamp to the history file
export HISTTIMEFORMAT="%F %T "
 
# Append to history, don't overwrite
shopt -s histappend

#5: Set the time zone for your session

## Set to IST for my own session ##
export TZ=Asia/Kolkata
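You can verify the setting with date, or use a different zone for a single command without touching your session:
## Show the current time in the session's zone ##
date
 
## One-off: show the time in New York ##
TZ=America/New_York date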

#6: Setting up shell line editing interface

## use a vi-style line editing interface for bash from default emacs mode ##
set -o vi

#7: Setting up your favorite aliases

## add protection ##
alias rm='rm -i'
alias cp='cp -i'
alias mv='mv -i'
 
## Memcached ##
alias mcdstats='/usr/bin/memcached-tool 10.10.29.68:11211 stats'
alias mcdshow='/usr/bin/memcached-tool 10.10.29.68:11211 display'
alias mcdflush='echo "flush_all" | nc 10.10.29.68 11211'
 
## Default command options ##
alias vi='vim'
alias grep='grep --color=auto'
alias egrep='egrep --color=auto'
alias fgrep='fgrep --color=auto'
alias bc='bc -l'
alias wget='wget -c'
alias chown='chown --preserve-root'
alias chmod='chmod --preserve-root'
alias chgrp='chgrp --preserve-root'
alias rm='rm -I --preserve-root'
alias ln='ln -i'
 
Here are some additional OS X Unix bash shell aliases:
# Open desktop apps from bash
alias preview="open -a '$PREVIEW'"
alias safari="open -a safari"
alias firefox="open -a firefox"
alias chrome="open -a google\ chrome"
alias f='open -a Finder '
 
# Get rid of those .DS_Store files
alias dsclean='find . -type f -name .DS_Store -delete'

#8: Colour my world

# Get colored grep output
alias grep='grep --color=auto'
export GREP_COLOR='1;33'
 
# Colored ls too (BSD/OS X color database)
export LSCOLORS='Gxfxcxdxdxegedabagacad'
# GNU/Linux ls
alias ls='ls --color=auto'
 
# BSD/OS X ls command
# alias ls='ls -G'

#9: Setting up your favorite bash functions

# Show the top 10 commands from your history on screen
function ht {
  history | awk '{a[$2]++} END {for (i in a) {print a[i] " " i}}' | sort -rn | head
}
 
# Wrapper for the host and ping commands
# Accept http:// or https:// or ftp:// style names for domains and hostnames
_getdomainnameonly() {
  local h="$1"
  local f="${h,,}"
  # Remove the protocol part of the hostname
  f="${f#http://}"
  f="${f#https://}"
  f="${f#ftp://}"
  f="${f#scp://}"
  f="${f#sftp://}"
  # Remove the username and/or username:password part of the hostname
  f="${f#*:*@}"
  f="${f#*@}"
  # Remove everything after the first / (e.g. /foo/xyz.html)
  f=${f%%/*}
  # Show the domain name only
  echo "$f"
}
 
 
ping() {
  local array=( $@ )              # get all args in an array
  local len=${#array[@]}          # find the length of the array
  local host=${array[$len-1]}     # get the last arg
  local args=${array[@]:0:$len-1} # get all args before the last one
  local _ping="/bin/ping"
  local c=$(_getdomainnameonly "$host")
  [ "$host" != "$c" ] && echo "Sending ICMP ECHO_REQUEST to \"$c\"..."
  # Pass args and host
  $_ping $args $c
}
 
host() {
  local array=( $@ )
  local len=${#array[@]}
  local host=${array[$len-1]}
  local args=${array[@]:0:$len-1}
  local _host="/usr/bin/host"
  local c=$(_getdomainnameonly "$host")
  [ "$host" != "$c" ] && echo "Performing DNS lookups for \"$c\"..."
  $_host $args $c
}

#10: Configure bash shell behavior via shell shopt options command

Finally, you can make changes to your bash shell environment using set and shopt commands:
# Correct dir spellings
shopt -q -s cdspell
 
# Make sure display get updated when terminal window get resized
shopt -q -s checkwinsize
 
# Turn on the extended pattern matching features
shopt -q -s extglob
 
# Append rather than overwrite history on exit
shopt -s histappend
 
# Save multi-line commands as a single history entry
shopt -q -s cmdhist
 
# Get immediate notification of background job termination
set -o notify
 
# Disable [CTRL-D], which is used to exit the shell
set -o ignoreeof
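Before changing anything, you can dump the current state of all these toggles:
## List all shopt options and their on/off state ##
shopt -p
 
## List all set -o options and their state ##
set -o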

Conclusion

This post is by no means comprehensive. It provided a short walkthrough of how to customize your environment. For a thorough look at bash/ksh/zsh/csh/tcsh capabilities, I suggest you read the man pages by typing the following commands:
man bash
man zsh
man tcsh
man ksh
This article was contributed by Aadrika T. J.; editing and additional content added by admin. You, too, can contribute to nixCraft.

Reset The Root Password For A Linux VM Hosted On XenServer

http://www.unixmen.com/reset-root-password-linux-vm-hosted-xenserver

If you have ever tried to reset the root password for a Linux VM hosted on XenServer, this section will guide you through the process.

Follow the procedure to reset the root password of a Linux VM:

You need to boot your virtual machine in single user mode.

1- Shut down your server using the XenCenter controls

2- Right-click on the machine and select Properties

3- Go under Boot options 
There will already be something in the OS Boot Parameters field; take note of it, as you will need to restore it once the password reset is complete.

Change the OS Boot Parameters to rw init=/bin/bash 

Sometimes, for some operating systems (especially CentOS), you will need to enter the word single in the field instead of rw init=/bin/bash, so try both if the first option doesn't work.

4- Save and Start your virtual machine
Your system will boot up in single user mode. To change your password, type this command:
 
 bash# passwd root

5- Type in your new password; you will then be asked to confirm it.
Your password has now been reset.

6- Shut down your virtual machine.
bash# shutdown -h now

Or,

Shut down from the XenCenter controls.


Now go back to the boot options in XenCenter, remove rw init=/bin/bash, and change it back to whatever was there before. Start up your server and you should be able to log on with your new root password.

How to Manage Network using nmcli Tool in RedHat / CentOS 7.x

http://linoxide.com/linux-command/nmcli-tool-red-hat-centos-7

A new feature of Red Hat Enterprise Linux 7 and CentOS 7 is that the default networking service is provided by NetworkManager, a dynamic network control and configuration daemon that attempts to keep network devices and connections up and active when they are available while still supporting the traditional ifcfg type configuration files. NetworkManager can be used with the following types of connections: Ethernet, VLANs, Bridges, Bonds, Teams, Wi-Fi, mobile broadband (such as cellular 3G), and IP-over-InfiniBand. For these connection types, NetworkManager can configure network aliases, IP addresses, static routes, DNS information, and VPN connections, as well as many connection-specific parameters.
NetworkManager can be controlled with the command-line tool nmcli.

General nmcli usage

The general syntax for nmcli is:
# nmcli [ OPTIONS ] OBJECT { COMMAND | help }
One cool thing is that you can use the TAB key to complete actions as you type the command, so if at any time you forget the syntax you can just press TAB to see a list of available options.
Some examples of general nmcli usage:
# nmcli general status
Will display the overall status of NetworkManager.
# nmcli connection show
Will display all connections.
# nmcli connection show -a
Will display only the active connections.
# nmcli device status
Will display a list of devices recognized by NetworkManager and their current state.

Starting / stopping network interfaces

You can use the nmcli tool to start or stop network interfaces from the command line; this is the equivalent of up/down in ifconfig.
To stop an interface use the following syntax:
# nmcli device disconnect eno16777736
To start it you can use this syntax:
# nmcli device connect eno16777736

Adding an ethernet connection with static IP

To add a new ethernet connection with a static IP address you can use the following command:
# nmcli connection add type ethernet con-name NAME_OF_CONNECTION ifname interface-name ip4 IP_ADDRESS gw4 GW_ADDRESS
replacing the NAME_OF_CONNECTION with the name you wish to apply to the new connection, the IP_ADDRESS with the IP address you wish to use and the GW_ADDRESS with the gateway address you use (if you don't use a gateway you can omit this last part).
# nmcli connection add type ethernet con-name NEW ifname eno16777736 ip4 192.168.1.141 gw4 192.168.1.1
To set the DNS servers for this connection you can use the following command:
# nmcli connection modify NEW ipv4.dns "8.8.8.8 8.8.4.4"
To bring up the new Ethernet connection, issue a command as follows:
# nmcli connection up NEW ifname eno16777736
To view detailed information about the newly configured connection, issue a command as follows:
# nmcli -p connection show NEW

Adding a connection that will use DHCP

If you wish to add a new connection that will use DHCP to configure the interface IP address, gateway address and DNS servers, all you have to do is omit the ip/gw address part of the command and NetworkManager will use DHCP to get the configuration details.
For example, to create a DHCP configured connection profile named NEW_DHCP on device eno16777736, you can use the following command:
# nmcli connection add type ethernet con-name NEW_DHCP ifname eno16777736
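You can then bring the profile up and inspect the lease it received, just as with the static example above:
# nmcli connection up NEW_DHCP ifname eno16777736
# nmcli -p connection show NEW_DHCP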

7 Awesome Open Source Cloud Storage Software For Your Privacy and Security

http://www.cyberciti.biz/cloud-computing/7-awesome-open-source-cloud-storage-software-for-your-privacy-and-security

Cloud storage is nothing but an enterprise-level cloud data storage model to store digital data in logical pools across multiple servers. You can use a hosting company such as Amazon, Google, Rackspace, Dropbox and others for keeping your data available and accessible 24x7. You can access data stored on cloud storage via an API, desktop/mobile apps, or web based systems.

In this post, I'm going to list amazingly awesome open source cloud storage engines that you can use to access and sync your data privately for security and privacy reasons.

Why use open source cloud storage software?

The cloud - Source http://www.xkcd.net/908/
  1. Create a cloud on your own server or in a data center.
  2. Control and own your own data.
  3. Privacy protection.
  4. Encryption.
  5. Verify source code for bugs and/or backdoors.
  6. Avoid spying on your files on the server using encryption.
  7. Legal compliance - HIPAA and others.
  8. Good performance, as your data is stored on local storage instead of a remote data center.
  9. Good reliability and availability due to the local LAN. You no longer depend upon WAN bandwidth or the service provider's network.
  10. No artificially imposed limits on storage space or client connections, and more.
  11. Share your files and data with or without a password or time limit. Share them publicly or privately. No 3rd party corporation owns your data.

Suggested sample cloud storage setup for home users

Internet/ISP ----+----------------+
                 |Router/Wireless |
                 +----+-----------+
                      |
                 +----+---+
                 |Home LAN|
                 +----+---+
                      |        +-------------------+
                      |        | Raspberry Pi      |
                      +--------+ or Intel          |
                               | Atom based server |
                               |         +         |
                               |   Cloud storage   |
                               +-------------------+
You can use the Raspberry Pi or an Intel Atom CPU based small server as a home cloud storage system. Use an external USB drive or a secure backup service such as rsync.net/tarsnap.com to back up your cloud server in an encrypted format. This setup ensures that you keep all your data without trusting the entirety of your personal data to a corporation.
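As a minimal sketch of the backup step (all paths and the host name below are placeholders for your own setup), rsync can copy the cloud server's data directory to the external USB drive or to a remote backup host:
## Back up the cloud data directory to an external USB drive ##
rsync -avz --delete /srv/cloud/data/ /mnt/usb-backup/cloud-data/
 
## Or push it to a remote backup host over SSH ##
rsync -avz -e ssh /srv/cloud/data/ backup@backup.example.com:/backups/cloud-data/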

Seafile: Easy to setup cloud storage for home users

Seafile is file hosting cloud storage software. You can synchronize files and data with PCs and mobile devices easily, or use the server's web interface for managing your data files. There are no limits on data storage space (except for hard disk capacity) or the number of connected clients to your private server (except for CPU/RAM capacity).

Operating system: Cross-platform (written in C and Python) - MS-Windows/Raspberry Pi/Linux private server
Desktop clients: Yes (Windows/Mac OS X/Linux)
Mobile clients: Yes (Android/iPad/iPhone)
Type: File cloud storage and data synchronization
Paid support: Yes via Professional Edition
Licence: GPLv3 (Community Edition)
Download: seafile.com

ownCloud: Dropbox replacement

ownCloud is another very popular file hosting cloud storage software, often described as a Dropbox replacement. Just like Dropbox, it synchronizes your files to your private server. Files placed in an ownCloud server are accessible via the mobile and desktop apps. You can add external storage to your ownCloud with Dropbox, SWIFT, FTPs, Google Docs, S3, external WebDAV servers and more.
Enable the encryption app to encrypt data on external storage for improved security and privacy.

Operating system: Cross-platform (written in PHP & JavaScript) - MS-Windows/Linux private server
Desktop clients: Yes (Windows/Mac OS X/Linux)
Mobile clients: Yes (Android/Apple iOS)
Type: File cloud storage and data synchronization
Paid support: Yes via Enterprise Edition
Licence: AGPLv3
Download: owncloud.org

git-annex assistant

The git-annex assistant creates a synchronised folder on each of your OSX and Linux computers, Android devices, removable drives, NAS appliances, and cloud services. You can manage, share, and sync your large files with the power of git and the ease of use of a simple folder you drop files into.

Please note that the software is still under heavy development and new features are added regularly.

Operating system: Cross-platform - MS-Windows(beta)/Linux/OS X/FreeBSD/Docker private server
Desktop clients: No (porting)
Mobile clients: Yes (Android)
Type: File cloud storage and data synchronization
Paid support: ???
Licence: GPL version 3
Download: git-annex.branchable.com

SparkleShare: Easy to use cloud storage with git as a storage backend

It is also a Dropbox clone and very easy to setup. From the project site:
SparkleShare creates a special folder on your computer. You can add remotely hosted folders (or "projects") to this folder. These projects will be automatically kept in sync with both the host and all of your peers when someone adds, removes or edits a file.
Operating system: Cross-platform (written in C#) - MS-Windows/Linux/OS X
Desktop clients: Yes ( MS-Windows/Linux/OS X)
Mobile clients: No (Android/iOS on hold)
Type: File and data synchronization
Paid support: ???
Licence: GPL version 3
Download: sparkleshare.org

Syncthing for private, encrypted & authenticated distribution of data

Syncthing is an open-source file synchronization client/server application, written in Go. It replaces proprietary sync and cloud services with something open, trustworthy and decentralized.

Operating system: Cross-platform (written in Go) - Linux, Mac OS X, Microsoft Windows, Android, BSD, Solaris
Desktop clients: Yes (MS-Windows/Linux/OS X/OpenBSD and Unix-like)
Mobile clients: Yes (Android/F-Droid)
Type: File and data synchronization
Paid support: ???
Licence: GPL version 3
Download: syncthing.net

Stacksync cloud storage

StackSync is an open-source scalable Personal Cloud that can adapt to the needs of organizations. It puts a special emphasis on security by encrypting data on the client side before it is sent to the server.
Operating system: Linux
Desktop clients: Yes (MS-Windows/Linux/)
Mobile clients: Yes (Android)
Type: File and data synchronization
Paid support: ???
Licence: GPL version 2
Download: stacksync.org

OpenStack Object Storage (Swift)

Swift is a scalable redundant storage system. Objects and files are written to multiple disk drives spread throughout servers in the data center, with the OpenStack software responsible for ensuring data replication and integrity across the cluster. Please note that Swift is meant for large or enterprise users only and is not recommended for home users due to its complex setup procedure.

Operating system: Cross-platform (written in Python)
Desktop clients: ???
Mobile clients: ???
Type: File, data synchronization and more
Paid support: ???
Licence: Apache License 2.0
Download: openstack.org

Conclusion

Personally, I'm using ownCloud as a FOSS-based cloud solution for my file sharing with friends and family. It offers me Calendar, Contacts, and Dropbox-like storage. My cloud server has a total of 5 disks, 2 GiB RAM, and an Intel Atom CPU. I use Debian Linux with RAID 6. I back up my cloud to an external USB drive and am currently testing the tarsnap backup service. I'm also planning to try out SparkleShare on the Raspberry Pi soon.

Are you using any other personal FOSS cloud based software? Add your suggestions in the comments below.

World record set for 100 TB sort by open source and public cloud team

http://opensource.com/business/15/1/apache-spark-new-world-record

In October 2014, Databricks participated in the Sort Benchmark and set a new world record for sorting 100 terabytes (TB) of data, or 1 trillion 100-byte records. The team used Apache Spark on 207 EC2 virtual machines and sorted 100 TB of data in 23 minutes.

In comparison, the previous world record set by Hadoop MapReduce used 2100 machines in a private data center and took 72 minutes. Our entry tied with that of a UCSD research team building high-performance systems, and we jointly set the new world record.

Additionally, while no official petabyte (PB) sort competition exists, we pushed Apache Spark (Spark) further to also sort 1 PB of data (10 trillion records) on 190 machines in under 4 hours. This PB time beats previously reported results based on Hadoop MapReduce (16 hours on 3800 machines). To the best of our knowledge, this is the first time a combination of open source software (Spark) and public cloud infrastructure (EC2) was used to set a new record on 100 TB sort, and the first petabyte-scale sort ever done in a public cloud.

Named after Jim Gray, the benchmark workload is resource intensive by any measure: sorting 100 TB of data following the strict rules generates 500 TB of disk I/O and 200 TB of network I/O.

Organizations from around the world often build dedicated sort machines (specialized software and sometimes specialized hardware) to compete in this benchmark.

 
                              Hadoop MR Record     Spark Record          Spark 1 PB
Data Size                     102.5 TB             100 TB                1000 TB
Elapsed Time                  72 mins              23 mins               234 mins
# Nodes                       2100                 206                   190
# Cores                       50400 physical       6592 virtualized      6080 virtualized
Cluster disk throughput       3150 GB/s (est.)     618 GB/s              570 GB/s
Sort Benchmark Daytona Rules  Yes                  Yes                   No
Network                       dedicated data       virtualized (EC2)     virtualized (EC2)
                              center, 10Gbps       10Gbps network        10Gbps network
Sort rate                     1.42 TB/min          4.27 TB/min           4.27 TB/min
Sort rate/node                0.67 GB/min          20.7 GB/min           22.5 GB/min

What is Spark?

Widely deemed the successor to Hadoop MapReduce, Apache Spark is a fast and general engine for large-scale data processing. It provides programming APIs in Java, Python, Scala, and SQL, and can be used to efficiently execute diverse workloads, including common ETL, data streaming, machine learning, graph computation, and SQL.

Spark is one of the most actively developed open source projects. It had over 465 contributors in 2014, making it the most active project in the Apache Software Foundation and among Big Data open source projects.

Sorting

The Sort Benchmark (Benchmark) was initially proposed and sponsored by Jim Gray to measure the state-of-the-art development of computer systems. Since Jim Gray passed away in 2007, the Benchmark has been run by a consortium of past winners. The Benchmark consists of multiple categories, each with a different focus. Daytona Gray (named after Dr. Gray) is the most challenging category, as it requires participating systems to sort 100 terabytes (TB) of data in the fastest time possible, regardless of the computing resources used.

At the core of sorting is the shuffle operation, which moves data across all machines. Shuffle underpins almost all distributed data processing workloads. For example, a SQL query joining two disparate data sources uses shuffle to move tuples that should be joined together onto the same machine, and collaborative filtering algorithms such as ALS rely on shuffle to send user/product ratings and weights across the network.

Most data pipelines start with a large amount of raw data, but as the pipeline progresses, the amount of data is reduced due to filtering out irrelevant data or more compact representation of intermediate data. A SQL query on 100 TB of raw input data most likely only shuffles a tiny fraction of the 100 TB across the network. This pattern is also reflected in the naming of the popular data processing framework MapReduce.

Sorting, however, is one of the most challenging because there is no reduction of data along the pipeline. Sorting 100 TB of input data requires shuffling 100 TB of data across the network. As a matter of fact, the Daytona Gray competition requires us to replicate both input and output data for fault-tolerance, and thus sorting 100 TB of data effectively generates 500 TB of disk I/O and 200 TB of network I/O.

For the above reasons, when we were looking for workloads to measure and improve Spark, sorting, one of the most demanding workloads, became a natural choice to focus on.

What made it possible?

A lot of development has gone into improving Spark for very large scale workloads. In particular, there are three major pieces of work that are highly relevant to this benchmark.
First and foremost, in Spark 1.1 we introduced a new shuffle implementation called sort-based shuffle (SPARK-2045). The previous Spark shuffle implementation was hash-based and required maintaining P (the number of reduce partitions) concurrent buffers in memory. In sort-based shuffle, at any given point only a single buffer is required. This has led to a substantial memory overhead reduction during shuffle and can support workloads with hundreds of thousands of tasks in a single stage (our PB sort used 250,000 tasks).

Second, we revamped the network module in Spark based on Netty’s Epoll native socket transport via JNI (SPARK-2468). The new module also maintains its own pool of memory, thus bypassing JVM’s memory allocator, reducing the impact of garbage collection.

Last but not least, we created a new external shuffle service (SPARK-3796) that is decoupled from the Spark executor itself. This new service builds on the aforementioned network module and ensures that Spark can still serve shuffle files even when the executors are in GC pauses.
Fig: Network activity during sort

With these three changes, our Spark cluster was able to sustain 3GB/s/node I/O activity during the map phase, and 1.1 GB/s/node network activity during the reduce phase, saturating the 10Gbps link available on these machines.

The nitty-gritty

TimSort: In Spark 1.1, we switched our default sorting algorithm from quicksort to TimSort, a derivation of merge sort and insertion sort. It performs better than quicksort in most real-world datasets, especially for datasets that are partially ordered. We use TimSort in both the map and reduce phases.

Exploiting cache locality: In the sort benchmark, each record is 100 bytes, where the sort key is the first 10 bytes. As we were profiling our sort program, we noticed the cache miss rate was high, because each comparison required an object pointer lookup that was random. We redesigned our record in-memory layout to represent each record as one 16-byte record (two longs in the JVM), where the first 10 bytes represent the sort key, and the last 4 bytes represent the position of the record (in reality it is slightly more complicated than this due to endianness and signedness). This way, each comparison only required a cache lookup that was mostly sequential, rather than a random memory lookup. Originally proposed by Chris Nyberg et al. in AlphaSort, this is a common technique used in high-performance systems.

Spark’s nice programming abstraction and architecture allow us to implement these improvements in the user space (without modifying Spark) in a few lines of code. Combining TimSort with our new layout to exploit cache locality, the CPU time for sorting was reduced by a factor of 5.

Fault-tolerance at scale: At scale a lot of things can break. In the course of this experiment, we have seen nodes going away due to network connectivity issues, the Linux kernel spinning in a loop, or nodes pausing due to memory defrag. Fortunately, Spark is fault-tolerant and recovered from these failures.

Power of the cloud: As mentioned previously, we leveraged 206 i2.8xlarge instances to run this I/O intensive experiment. These instances deliver high I/O throughput via SSDs. We put these instances in a placement group in a VPC to enable enhanced networking via single root I/O virtualization (SR-IOV). Enabling enhanced networking results in higher performance (10Gbps), lower latency, and lower jitter. We would like to thank everyone involved at AWS for their help making this happen including: the AWS EC2 services team, AWS EC2 Business Development team, AWS product marketing and AWS solutions architecture team. Without them this experiment would not have been possible.

Why Contributing to the Linux Kernel is Easier Than You Think

http://www.linux.com/news/software/linux-kernel/801601-4-myths-about-linux-kernel-programming-debunked

Konrad Zapalowicz presented at LinuxCon Europe in Dusseldorf, Germany in 2014, about how to get started as a Linux kernel developer.
I gave a talk at LinuxCon Europe in Dusseldorf last year with the main goal of showing people how easy it is to start with Linux kernel development. Despite my fear that the audience might be too advanced and find this topic rather boring, I received good feedback, with several people saying that this kind of guidance and advice is more than welcome. Now, since the room capacity was about 30 people, which is not really much, I have the impression that there are more folks out there who would enjoy this topic. Therefore I decided to turn the presentation into a series of articles. (See the full presentation at Events.LinuxFoundation.org.)
These articles, similar to the talk, will be divided into three parts. In the first, not really technical article, I will explain that Linux kernel development is super easy especially for those who possess the right attitude. In the second part I'm going to show where to get inspiration and the best angles to approach Linux kernel development for newcomers. And in the third and last part, I will describe some of the things that I wish that I knew before I started.

4 Myths

For some reason there is a group of negative opinions or myths describing either Linux kernel programming itself or the effort required to become a Linux kernel developer. In particular these are:
  • Linux Kernel programming is hard and requires special skills.
  • Linux Kernel programming requires access to special hardware.
  • Linux Kernel programming is pointless because all of the drivers have already been written.
  • Linux Kernel programming is time consuming.
Let's put more detail into this way of thinking:
Myth #1: The Linux Kernel programming is hard and requires special skills.
This thinking comes from the fact that many people, especially those without proper knowledge of the kernel internals, tend to view the whole project as one big blob of code, effectively an operating system in itself. Now, we all know that writing an operating system is a damn hard job and requires deep understanding of quite a number of different topics. Usually this is not just a hobby ;) but something that you are well prepared for. Looking at the top-level Linux kernel developers does not help either, because all of them have many years of experience, and judging your own skills using them as a reference leads one to believe that special skills are in fact required.
Myth #2:  Linux Kernel programming requires access to a special hardware.
Jim Zemlin, who is the executive director of the Linux Foundation, said during his LinuxCon keynote that open source software is running on 80 percent of electronic devices. The Linux kernel, as the biggest open source project ever, gets more than a huge bite of this cake. In fact this is the most portable software of this size ever created and it supports an insane number of different hardware configurations. With this in mind one might get the impression that working on the kernel is about running it on different kinds of devices and since the most popular are already supported a successful developer needs to have access to all sorts of odd hardware.
Myth #3: Linux Kernel programming is pointless because all of the drivers have already been written.
The very popular impression of Linux kernel programming is writing drivers for various kinds of peripheral devices. This is in fact the way that many professional kernel hackers nowadays have started their Linux careers. However, with the portability that the kernel offers, it may seem that it is hard to find unsupported devices. Naturally we could look at the USB devices landscape, as here we have the majority of peripherals; however most of those are either already supported or it is better to use libusb and solve the problem from user space, thus requiring no kernel work.
Myth #4: Linux Kernel programming is time consuming.
While reading the LKML or any other kernel-related mailing list, such as the driverdevel list, it is easy to notice that the number of patches sent weekly is significant. For instance, the work on the comedi drivers generates sets with many patches in them. It clearly shows that someone is working really hard out there, and comedi is not alone as an example. For people for whom kernel development is going to be a hobby, not a daily job, this might be off-putting, as they could feel that they just cannot keep up with that pace of development.

The Facts

These myths, either alone or accumulated, can draw a solid, thick line between trying Linux kernel development and letting it go. This is especially true for less experienced individuals, who may therefore fear trying; however the truth is that, to quote Dante, "the devil is not as black as he is painted." All of these myths can be taken down, so let's do it one by one:
Fact:  Linux Kernel programming is fairly easy.
One can view the kernel code as a single blob with rather high complexity, however this blob is highly modularized. Yes, some of the modules are really hardcore (like scheduler), however there are areas of less complexity and the truth is that in order to do very simple maintenance tasks the required skill is a decent knowledge of C.
Not everyone has to redesign kernel core modules; there is plenty of other work that needs to be done. For example, a very popular newbie task is to improve code quality by fixing either code style issues or compiler warnings.
Fact: Special hardware is not required.
Well, the old x86 is still good enough to do some parts of the work and since this architecture is still quite popular I would say that it is more than enough for most people. Those who seek more can buy one of the cheap ARM-based boards such as PandaBoard, BeagleBone or RaspberryPi.
Fact: It is not pointless, there is still work to be done.
The first thing to know is that the Linux kernel is not only about drivers but also the core code, which needs to be taken care of. Second, there is still a vast number of drivers to be completed, and help in this area is more than appreciated.
Fact: It does not have to be time consuming.
Whoever works on the kernel allocates as much time as he or she wants. The people who do it out of passion, aside from their daily duties, use a few evenings a week and still contribute. I started contributing during a period when I ran every second day (in the evening); I still did a complete renovation of part of my apartment, went on holiday, and watched almost every game of the 2014 World Cup and the 2014 World Volleyball Championship. There was not much time left for kernel stuff and still I succeeded in sending a few patches.
The important thing to remember is that unless you are paid for it there is no pressure and no hurry so take it easy and do as much as you can.

A New Mindset

In this first installment of a series aimed at encouraging people to do kernel programming, I introduced a complete change of mindset by explaining that what might have seemed hard is in fact fairly easy to do. Just remember that:
  • Linux kernel programming is fairly easy.
  • It is not required to have access to special hardware.
  • There is still a lot of work to be done.
  • You can allocate as much time as you want and as you can.
Armed with this knowledge we are ready for the next part which will give insight into what could be your starting point in Linux kernel development.
This blog is republished with permission from Zapalowicz.pl.