
Getting Started with Swift on Linux

https://www.twilio.com/blog/2015/12/getting-started-with-swift-on-linux.html

When I wrote my first line of Swift code I immediately had visions of being able to use this beautiful programming language for more than just iOS and OSX apps. Yesterday, Apple officially made Swift open source and my dreams came true. This blog post will help you quickly get started writing your first application using the open source version of Swift on Linux.
Here We Go!
The Linux implementation of Swift currently only runs on Ubuntu 14.04 or Ubuntu 15.10. For our application, I’ll be using Ubuntu 14.04.3. If you don’t have an Ubuntu server sitting around you can always spin one up on your hosting provider of choice (Looking for one? Check out DigitalOcean or Linode). The Swift GitHub page shows you how to build Swift manually but you may want to start writing code without having to wrestle with Linux. Fortunately Apple provides snapshots that you can download and get running with a quickness.
Grab the URL of the Ubuntu 14.04 snapshot and then pull it down to your server using wget:
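(The exact snapshot URL changes with each release, so treat the path below as a placeholder and substitute the URL you copied from swift.org.)
wget https://swift.org/builds/ubuntu1404/<snapshot>/<snapshot>-ubuntu14.04.tar.gz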
This may take a couple minutes to download. While you’re waiting, it’s a good time to watch Stephen Malkmus and the Jicks cover Taylor Swift’s “Blank Space”.
Once you have the snapshot, decompress it:
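(Adjust the file name to match the snapshot you actually downloaded.)
tar xzf <snapshot>-ubuntu14.04.tar.gz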
In order to use Swift from the command line you need to update your path. Make sure to update this command to reference the path to where you downloaded the Swift snapshot:
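(The path below is a placeholder for wherever you extracted the snapshot.)
export PATH=/path/to/<snapshot>-ubuntu14.04/usr/bin:"${PATH}"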
If you’re using a fresh Ubuntu server like I was you may be missing a few packages required for Swift to run correctly. To make sure you have everything in place you can run the following command:
Once that completes, install all the required dependencies with this command:
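(At the time of writing, the prerequisites listed on swift.org were clang and libicu-dev; check the current instructions if that changes.)
sudo apt-get install clang libicu-dev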
More waiting. Perhaps time for an impromptu dance party?

Test that everything is running correctly by running swift --version. You should see a version number like this: Swift version 2.2-dev.
Let’s Sling some Code
Are you ready to write some code? I sure am! We’ll start by firing up the Swift REPL with the swift command. Once the REPL is running you can start throwing down some Swift. Here are some examples to try:
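(A few simple examples; any valid Swift 2.2 expressions will do here.)
1 + 2
let greeting = "Hello, Swift on Linux!"
print(greeting)
let doubled = [1, 2, 3].map { $0 * 2 }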
When you’re ready to exit out of the REPL just type :q. Wow. Wonder where they got that idea?
The time has come to write our first app. First, we'll create a new folder for our application to live in. Within that folder, we'll create another folder called sources where we'll write the code for our application:
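(The application folder name here is just an example; call it whatever you like.)
mkdir RockPaperScissorsLizardSpock
cd RockPaperScissorsLizardSpock
mkdir sources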
Within the sources folder create a file called main.swift. The name of this file is important: by naming it main.swift, our application will automatically be built into an executable when we run swift build:
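Here is a minimal sketch of what such a game might look like in Swift 2.2 era syntax (an illustration, not the article's original code; working out who wins is left as an exercise):
// main.swift
import Glibc

let moves = ["rock", "paper", "scissors", "lizard", "spock"]

// Seed the pseudo-random generator and pick a move for the computer.
srandom(UInt32(time(nil)))
let computerMove = moves[random() % moves.count]

let moveList = moves.joinWithSeparator(", ")
print("Your move (\(moveList)): ", terminator: "")
let playerMove = readLine() ?? ""

print("You played \(playerMove); the computer played \(computerMove).")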
Do you ever wish you could play “Rock, Paper, Scissors, Lizard, Spock” but no one is around? We just built a Swift application that will let you play against the computer whenever you want!
You can play your first game with the following command:
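(At this point we haven't set up the package yet, so one way to try the game is to run the file directly with the Swift interpreter.)
swift sources/main.swift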
One thing I’m very excited about in the open source version of Swift is the Swift Package Manager because it makes it super easy to package and share code like our game with others. We won’t take advantage of some of the advanced features of the package manager in this post but we do need to create a Package.swift file to be able to build our application. The Swift Package Manager is case sensitive so make sure you have the capital P. We can keep this pretty basic for now:
Now that we have everything in place we can build our application by running our build command:
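(Run this from the top-level application folder, next to Package.swift.)
swift build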
The build command will produce an executable that we can run from the command line:
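(With the example package name used above, the binary lands in the .build/debug directory; the name will match whatever you put in Package.swift.)
.build/debug/RockPaperScissorsLizardSpock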
What Will You Build?
You’ve built your first “Hello, Swift!” application using the open source version Swift on Linux. Looking for some inspiration? I’ve found Apple’s A Swift Tour extremely helpful as I’ve started learning the language. There’s a lot of good stuff to be found on Swift.org as well.
There’s a lot left to be added to the Linux version of Swift but I am looking forward to seeing it grow. Maybe you’ll even want to contribute to the project so we can all benefit from what you’ve learned.
I’d love to see what you build now that you have your environment set up!

How to connect your Android device on Ubuntu Linux

https://www.howtoforge.com/tutorial/how-to-connect-your-android-device-on-linux

Buying a media device that needs a special driver and/or connectivity suite to navigate and update its contents is a common case nowadays, and has been ever since manufacturers decided that it would be a good idea to limit the access that users have to the products they bought. This may not be a huge problem for Windows and Mac OS users, who can simply download the manufacturer's suite and use it to connect to their device, but Linux is often (if not always) left unsupported in that regard. The first time I encountered this problem was with the first generation of iPods and Creative Zen players that refused to show any contents in the file manager when connected via the USB port, and then came the newest generations of Android devices, which do the same. In this quick guide, we will see how we can overcome this problem and connect our media device to our Linux system.

MTP - Basic File Transfer Options

The first thing we need to do is to install “libmtp”, which adds support for the Media Transfer Protocol (MTP) over the USB port. If you're using Ubuntu, you can do this by opening a terminal and typing:
sudo apt-get install libmtp
After this is done, you may connect your media device via USB, and then type the following in the terminal:
mtp-detect
Installation of libmtp on Ubuntu.
This command will yield some basic information for the connected device. You may have to wait for a few moments for everything to be displayed and the command to finish running. If your device can't be detected, then you may have to find a newer version of libmtp in the hope that support for your device has been added.
Then enter the command “mtp-connect” followed by “mtp-folders” to see the contained folders and their IDs.
mtp-connect
mtp-folders
Note that you should not attempt to open the device from your file manager in the meantime, as this will interfere and make it “busy” so the “mtp-connect” command won't work.
The mtp-connect and mtp-folders commands.
Using the “mtp-files” command will display all files on your device, their IDs, their parent folder IDs, and their file sizes. Now if you want to copy a file from the media device to your computer, you simply use the “mtp-getfile” command followed by the file's ID and the filename that you want to be used for the newly created file. The exact opposite, which is sending a file from your computer to your USB device, can be done by using the “mtp-sendfile” command.
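For example, assuming mtp-files reported a file with ID 42, copying it to the computer could look like this (the ID and the file name are only placeholders):
mtp-getfile 42 copied-file.ods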
Here's an example where I want to send a file named fg.ods and have it copied without a change in its title.
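(The syntax is the local file first, then the name to create on the device.)
mtp-sendfile fg.ods fg.ods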
Send a file with mtp-sendfile.

MTP – Mount Options and GUI Navigation

Working through the terminal can be cumbersome, especially when your media device contains a large number of files. If you give the “mtp-detect” command a go and you see that it is working with your device, then you have the option to mount it and navigate in its storage more conveniently through your file manager.
For this, we have to install mtpfs by giving “sudo apt-get install mtpfs” on a terminal, make sure the mount point directory exists (e.g. ~/mnt), and then run “sudo mtpfs -o allow_other ~/mnt”.
sudo apt-get install mtpfs
sudo mtpfs -o allow_other ~/mnt
This action should create a new mount point at ~/mnt, which you can also access via the terminal if you prefer. If this doesn't work, you can give Qlix a try, which is a minimalistic GUI MTP device manager.
How to use mtpfs.
mtpfs - part 2.
As we're dealing with Android devices in this tutorial, we should keep in mind that these are not just phones but also mp3 players and cameras. This means that you can access them in a smarter way as well, like through the Clementine music player for example. Open Clementine, go to “Devices” and double-click on the Android icon. This should mount your device and display the contained audio files, which should be perfectly accessible and playable.
Open Clementine and navigate to the Android icon.
If you right-click on the icon of the device and choose the “Properties” option, you will get information such as the device's mount point, the supported formats, and the USB interface. The mount point, in particular, can be used to access the storage of the device with your file manager.
Open the properties of the Android device.

Network Interfaces Name change in Ubuntu 15.10 (Wily Werewolf)

http://www.ubuntugeek.com/network-interfaces-name-change-in-ubuntu-15-10-wily-werewolf.html

Starting with Wily Werewolf, systemd/udev will automatically assign predictable, stable network interface names for all local Ethernet, WLAN and WWAN interfaces.

The following different naming schemes for network interfaces are now supported by udev natively:
1) Names incorporating Firmware/BIOS provided index numbers for on-board devices (example: eno1)
2) Names incorporating Firmware/BIOS provided PCI Express hotplug slot index numbers (example: ens1)
3) Names incorporating physical/geographical location of the connector of the hardware (example: enp2s0)
4) Names incorporating the interface's MAC address (example: enx78e7d1ea46da)
5) Classic, unpredictable kernel-native ethX naming (example: eth0) -- deprecated
Example
In my case, I installed Ubuntu 15.10 server in VirtualBox and my interface name started with "enp0s3". When I added a second interface, I didn't know what its name would be, so I had to use the following command to find out which name it was assigned:
ip link
Output
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
link/ether 08:00:27:5f:dd:a1 brd ff:ff:ff:ff:ff:ff
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
link/ether 08:00:27:30:2d:00 brd ff:ff:ff:ff:ff:ff
From the above output, the new network card's interface name is "enp0s8".
Change the default network interface names
You can change the device name by defining the name manually with a udev rule. These rules will be applied automatically at boot.
First, you need to get the MAC address using the "ip link" command. From the output shown above, I can see the MAC address is "08:00:27:30:2d:00". You then need to create a 10-network.rules file:
sudo vi /etc/udev/rules.d/10-network.rules
Add the following line
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="08:00:27:30:2d:00″,KERNEL=="enp0s8″, NAME="eth1″
In the rule above, replace the MAC address with your interface's MAC address and set KERNEL=="" to whatever name the kernel gives the device when it boots.
Save and exit the file, then reboot your Ubuntu system. After the reboot, the interface name should be eth1.

Linux video editing in real time with Open Broadcast Studio

https://opensource.com/life/15/12/real-time-linux-video-editing-with-obs-studio

OBS Studio video editing

It may be a relatively niche market, but not all video editing is done in post production. There are use cases for live, on-the-fly video editing and basic compositing. You've seen it done yourself, whether you realize it or not—news broadcasts, live webcasts, and live TV events usually use multiple-camera setups controlled by one central software suite.
Open Broadcast Studio (formerly Open Broadcaster Software) is an open source central control room for live, realtime video editing. It features instant encoding using x264 (an open source h.264 encoder) and AAC and streams to services like YouTube, DailyMotion, Twitch, your own streaming server, or just to a file.

Scenes and sources

Assuming you have installed OBS Studio, you can launch it as usual. It is compatible with Pulse Audio, ALSA, and JACK, so you can manage audio however you prefer. ALSA and Pulse are the most straightforward, although JACK offers more options.
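(If you still need to install it on Ubuntu, the project has typically been distributed through its own PPA; check the OBS website for the current instructions for your distribution.)
sudo add-apt-repository ppa:obsproject/obs-studio
sudo apt-get update
sudo apt-get install obs-studio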
The initial window of Open Broadcast Studio is the main control surface for the application.
The OBS Studio user interface
The large video monitor in the middle is your canvas; anything in that screen is being streamed to your delivery destination. The panels in the bottom of the window are quick-access lists to scenes and sources.
You can think of scenes as directories that contain sources, which are clips or streams of media.
The first step is for you to create your scenes. These are probably location-based; if you have a studio setup, some pre-recorded video files, and some on-screen footage, then one scene might be studio, another vids, and the third screencap.
If your studio setup has two cameras, then create two sources within the scene. Sources can be nearly any kind of media you can imagine: still images on your hard drive, webcam feeds, JACK inputs, video files, and more. For example, to add a video camera as a source, click the plus button under the Sources window and select Video Capture Device (V4L2).

Select the camera you want to add and its appropriate settings (or accept the defaults) and click the OK button in the bottom right corner.
Once the source has been added to a scene, it becomes the displayed source. Depending on what kind of camera you are using and your intended destination, there may be a disconnect between the input and your output. If this is the case, scale the image as needed so that it fits into your screen.

Similarly, for your pre-recorded clips, select the vids scene and add a Media Source source. Set the source as a local file and select the video clip you want to add.

Adjust the clip to fit the portion of the screen you need it to fill, and continue adding sources.

Compositing

Sources within a scene are exactly like layers in GIMP or Kdenlive; the top source takes precedence over lower sources, and any source may be made invisible by clicking the eye icon to the source's left.
By clicking and dragging the red bounding box, sources can also be scaled to achieve a picture-in-picture effect.

Text and still images are also acceptable input formats, so standard lower thirds are easy to cobble together. Add text as a Text (Freetype2) source, and add a backdrop for the text created in GIMP. Usually, a PNG file with an alpha channel is best.

If your project wants animated titles, then you'll have to animate separately in Synfig Studio, Blender, or Phil Shapiro's magical blend of Inkscape and Animatron.

Filters

OBS features a select few video filters, too. Currently it's just the bare basics, but the ones that are included are useful:
  • Gain
  • Audio/Video sync adjustment
  • Noise gate
  • Color Correction
  • Mask
  • Keys
There are a few more that I haven't mentioned, and more will be added as soon as they become stable.
Filters can be added either to an entire scene or to an individual source. To add a filter, right-click on the scene or source and select Filters. In the filter window that appears, add either an audio or video filter or a video effect. Effects are not yet keyframe-able, so they affect the entire clip.

Output

The destination of an OBS project is either a file or, more likely, a live stream. Any action you make within OBS that shows in its canvas view will be sent to your output device, whatever it may be. There is currently no built-in time delay, so everything is streamed pretty much as it happens (not counting for network delays).

Streaming

To set up streaming, click the Settings button on the right side of OBS.

In the Settings window that appears, choose Stream from the left column. Select the streaming server you have an account on and enter your credentials. If you run your own streaming server, select the Custom option from the topmost dropdown menu.
Click the Apply and OK buttons when finished.
To go live, click the Start Streaming button in the main OBS interface.

Recording

If you're not streaming, then you're probably recording your "broadcast" to a file.
To set up recording, click the Settings button on the right side of OBS.

In the Settings window that appears, choose Output from the left column. What settings you use are up to you and will depend on all the usual factors: How much space do you want your file to take up? What kind of quality are you looking for? What kind of quality are you capturing in the first place?
For HD streams, I usually set a bitrate of roughly 15,000 kbps (that's about twice the bitrate of a standard-definition DVD, but at the low end of what would be considered Blu-ray quality), and an audio rate of 80 kbps (dialogue is not terribly demanding). If you want advanced options, such as video rescaling and access to x264 profiles, use the Advanced setting in the topmost dropdown menu.
Click the Apply and OK buttons when finished.
To begin recording, click the Start Recording button in the main OBS interface.

Open source broadcasting

Open Broadcast Studio is, to some degree, in a class all its own. While there are certainly other applications that stream video and audio on Linux, none of them are geared so directly toward a professional-style workflow. While it lacks some of the features (like transitions) of advanced software that's been around a lot longer, it is a stable and capable application that allows everyone to be a broadcaster.

How to summarize detailed system resource usage for a given command on Linux or Unix

http://www.cyberciti.biz/faq/linux-unix-summarize-detailed-system-resource-usage-with-time

How do I determine the system resource usage during the execution of a particular command on a Linux, OS X, BSD or Unix-like operating system?

You need to use the /usr/bin/time (hereinafter referred to as "time") command to find the system resource usage during the execution of a particular command. The following information can be obtained from the "time" command:
  1. User time
  2. System time
  3. Percent of CPU this command got
  4. Elapsed time
  5. Average shared text size
  6. Average unshared data size
  7. Average stack size
  8. Average total size
  9. Maximum resident set size
  10. Average resident set size
  11. Major (requiring I/O) page faults
  12. Minor (reclaiming a frame) page faults
  13. Voluntary context switches
  14. Involuntary context switches
  15. Swaps
  16. File system inputs
  17. File system outputs
  18. Socket messages sent
  19. Socket messages received
  20. Signals delivered
  21. Page size (bytes)
  22. Exit status
The above list describes the resources utilized by the current process or command, as reported by the "time" command. The underlying structure is defined as follows in sys/resource.h:
/* taken from OSX/FreeBSD unix */
struct rusage {
    struct timeval ru_utime; /* user time used */
    struct timeval ru_stime; /* system time used */
    long ru_maxrss;          /* max resident set size */
    long ru_ixrss;           /* integral shared text memory size */
    long ru_idrss;           /* integral unshared data size */
    long ru_isrss;           /* integral unshared stack size */
    long ru_minflt;          /* page reclaims */
    long ru_majflt;          /* page faults */
    long ru_nswap;           /* swaps */
    long ru_inblock;         /* block input operations */
    long ru_oublock;         /* block output operations */
    long ru_msgsnd;          /* messages sent */
    long ru_msgrcv;          /* messages received */
    long ru_nsignals;        /* signals received */
    long ru_nvcsw;           /* voluntary context switches */
    long ru_nivcsw;          /* involuntary context switches */
};
 

Syntax

The syntax is as follows on Linux:
 
/usr/bin/time -v command
/usr/bin/time -v command arg1 arg2
 
The syntax is as follows on FreeBSD or OS X unix:
 
/usr/bin/time -l command
/usr/bin/time -l command arg1 arg2
 

Examples

Let us run host command on Linux to find out the resources utilized by the host command during execution:
$ /usr/bin/time -v host www.cyberciti.biz
Sample outputs:
Fig.01: Determine the duration of execution of a particular command on Linux with resource utilization
Let us run date command on OS X or FreeBSD Unix based system to find out the resources utilized by the date command during execution:
$ /usr/bin/time -l date
Sample outputs:
Fig.02: Determine the duration of execution of a particular command on Unix/OSX

A note about "/usr/bin/time" and time command

  1. time is a shell keyword built into bash/ksh.
  2. /usr/bin/time is an external command and provides additional information such as the resources utilized by a particular command.
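On a typical Linux box you can see the difference from an interactive bash session like this (escaping the name bypasses the shell keyword, because reserved words are only recognized when unquoted):
time sleep 1
\time -v sleep 1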
For more information see man pages time(1), getrusage(2), bash(1), ksh(1).

Contribute Anonymously To Git Repositories Over Tor With Gitnonymous Project

http://fossbytes.com/contribute-anonymously-to-git-repositories-over-tor-with-gitnonymous-project

Short Bytes: With the gitnonymous project, you can now obfuscate your true identity while making Git commits and pushing to public repositories. Using the instructions given on the GitHub page, learn how to set up your anonymous account.
Chris McCormick (aka chr15m) released an open source project called gitnonymous, which can help you contribute to any public repository while obfuscating your true identity. You can follow the instructions given on the GitHub page to set up your anonymous account. It can be helpful for developers who are passionate about open source projects and willing to contribute, but whose corporate policies don't allow them to contribute to open source projects.
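The general idea, sketched very roughly (these are not the project's exact scripts, just an illustration of the building blocks it automates), is to commit under a throwaway identity and route Git's network traffic through Tor, for example with torsocks:
git config user.name "anonymous"
git config user.email "anonymous@example.com"
torsocks git push origin master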
Though this project doesn’t provide complete anonymity, the reasons below suggest that it is a good start. As more developers join the project, it will soon achieve maturity and complete anonymity.
@ryancdotorg on Hacker News pointed out the following information leaks that may still be used to try to identify you:
  • Your timezone will appear in Git commits (narrows down location).
  • Commit times will be leaked (narrows down sleeping/working hours).
  • SSH client version will be leaked to servers you connect to (shows Linux distro version and patch level).
chr15m later called it a pseudonymous method.
Read more about gitnonymous on GitHub.
Do you think we need this project? Add your views in the comments below.

How to find out AES-NI (Advanced Encryption) Enabled on Linux System

http://www.cyberciti.biz/faq/how-to-find-out-aes-ni-advanced-encryption-enabled-on-linux-system

The Intel Advanced Encryption Standard New Instructions (AES-NI) engine enables extremely fast hardware encryption and decryption for openssl, ssh, VPNs, Linux/Unix/OS X full disk encryption and more. How do I check whether Intel or AMD AES-NI support is available and enabled on my running Linux based system, including in openssl?

The Advanced Encryption Standard Instruction Set (or the Intel Advanced Encryption Standard New Instructions - "AES-NI") allows certain Intel/AMD and other CPUs to do extremely fast hardware encryption and decryption. "AES-NI" is an extension to the x86 instruction set architecture for microprocessors from Intel and AMD. It increases the speed of apps performing encryption and decryption using the AES. Several server and laptop vendors have shipped BIOS configurations with the AES-NI extension disabled. You may need a BIOS update to enable them or change the BIOS settings. The following CPUs are supported:
  1. Intel Westmere/Westmere-EP (Xeon 56xx)/Clarkdale (except Core i3, Pentium and Celeron)/Arrandale(except Celeron, Pentium, Core i3, Core i5-4XXM).
  2. Intel Sandy Bridge cpus (except Pentium, Celeron, Core i3).
  3. Intel mobile Core i7 and Core i5.
  4. Intel Ivy Bridge processors All i5, i7, Xeon and i3-2115C only.
  5. Intel Haswell processors (all except i3-4000m, Pentium and Celeron).
  6. AMD Bulldozer/Piledriver/Steamroller/Jaguar/Puma-based processors.
  7. AMD Geode LX processors.
  8. VIA PadLock (a different instruction set than Intel AES-NI, but it does the same thing at the end of the day).
  9. ARM - selected Allwinner and Broadcom SoCs that include a security processor, plus a few more ARM based processors.
Please note that AES-NI support is automatically enabled if the detected processor is among the supported list above. For a list of processors that support the AES-NI engine, see the Intel ARK, AMD, ARM (vendor) or VIA PadLock sites and documentation.

How do I find out whether the processor has the AES/AES-NI instruction set?

To find out cpu type and architecture type:
# lscpu
Type the following command to make sure that the processor has the AES instruction set and that it is enabled in the BIOS:
# grep -o aes /proc/cpuinfo
OR
# grep -m1 -o aes /proc/cpuinfo
Sample outputs:
Fig.01: Linux Verify That Processor/CPU Has the AES-NI Instruction

The aes output indicates that I have the AES-NI support enabled by Linux.

How do I verify that all my CPUs support AES-NI?

The output of the following two commands should be the same:
# lscpu | grep '^CPU(s):'
32

And:
# grep -o aes /proc/cpuinfo | wc -l
32

Is the Intel AES-NI optimized driver loaded on my Linux server/laptop/desktop?

Type the following command:
# sort -u /proc/crypto | grep module
Sample outputs:
module       : aesni_intel
module : aes_x86_64

module : crc32_pclmul
module : crct10dif_pclmul
module : ghash_clmulni_intel
module : kernel

Is Intel AES-NI enabled for openssl?

Now that we have verified support, it's time to test it. Is my AES-NI/VIA padlock engine supported?
$ openssl engine
Sample outputs from VIA based cpu that supports the AES:
(padlock) VIA PadLock (no-RNG, no-ACE)
(dynamic) Dynamic engine loading support
Another output from Intel based system that support the AES-NI:
$ openssl engine
(aesni) Intel AES-NI engine
(dynamic) Dynamic engine loading support

Test: AES-NI CPU vs normal CPU without AES-NI/PadLock support

In this example, serverA has the AES-NI and serverB has no support for hardware encryption:
$ dd if=/dev/zero count=1000 bs=1M | ssh -l vivek -c aes128-cbc serverA "cat >/dev/null"
Password:
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 10.6691 s, 98.3 MB/s

And:
$ dd if=/dev/zero count=1000 bs=1M | ssh -l vivek -c aes128-cbc serverB "cat >/dev/null"
vivek@localhost's password:
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 31.6675 s, 33.1 MB/s

Test: How do I benchmark my openssl performance?

Again run the following commands on both the systems:
# openssl speed
OR
# openssl speed aes-128-cbc
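Note that the plain aes-128-cbc test exercises openssl's built-in software implementation; to go through the EVP code path that can actually use AES-NI, benchmark via the EVP interface and compare the two numbers:
# openssl speed -evp aes-128-cbc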

Popular Linux or Unix/BSD applications that can benefit from AES-NI for high speed encryption/decryption

  • dm-crypt for full-disk encryption on Linux.
  • 7-Zip app.
  • Google Chrome and Firefox browsers.
  • FreeBSD's OpenCrypto API i.e aesni driver for zfs and other file systems.
  • OpenSSL 1.0.1 and above.
  • TrueCrypt 7.0 and above or VeraCrypt.
  • Citrix XenClient 1.0 and above.
  • Compilers such as GCC 4.4+, Intel C/C++ compiler 11.1+, Clang 3.3+ and more.
  • Libraries for golang, java, NSS, openssl and more.
  • Linux and BSD firewalls and VPNs, especially easy-to-use options such as pfSense, IPCop and more.
  • Operating systems based on Linux, *BSD, Unix, Microsoft Windows, Android, iOS, Apple OS X and more.

5 open source web app alternatives to Google Drive

https://opensource.com/life/15/12/5-open-source-web-apps-self-hosted


Last year, Kenton Varda and I ran a successful fundraising campaign that let us build Sandstorm. During the campaign, he published a treatise on how open source and indie software has proliferated on desktop and mobile, yet stagnated on the web because decentralized hosting has historically been so difficult.
Non-technical users comprise most of humanity, so that's where Sandstorm is setting the usability bar. We've come a long way since last year—we launched the App Market, managed hosting, and free automated dynamic DNS & SSL certs for self-hosters, to name a few milestones.
I'd like to take a moment to highlight some of my favorite open source web apps that have become part of my work and life routine. Before we dive into the apps, let's give a shoutout to the Sandstorm community, and in particular, the awesome folks who authored and/or packaged these apps. Be sure to check out the demos for each app. Demo accounts last for one hour, but you can sign into Oasis, our managed hosting, to keep your data in a free basic account or self-install on your own hardware.

1. Davros (personal file sync and storage)


Davros is essentially Dropbox or Google Drive style file sync and storage, but running on hardware that you control, wherever you want to install it. As soon as it came out earlier this month, I immediately 1) Installed it and created a grain for my Skitches, 2) Installed the ownCloud client on my laptop, and 3) Copy-pasted the key from my Davros instance ("grain"). In under a minute, I had everything working, and most of that minute was downloading the ownCloud client.
Since I capture and annotate screenshots for filing bug reports and other things almost every day, I've often felt a bit awkward about keeping those hosted by Evernote (who acquired Skitch in 2011). How can I really be sure if they've deleted a backup of someone else's bug (or some other potentially embarrassing photo)? Thanks to Michael Nutt's Davros, I can sync my files to my own server.

2. SandForms (Google Forms alternative)


SandForms is an open source alternative to Google Forms. It was developed by a team at ThoughtWorks that worked closely with the journalists and activists for whom Sandstorm's security features are of vital importance, as well as the Radical Librarians Collective and the Library Freedom Project.
That thing is beautiful. Seriously, give it a spin. I always love it when open source apps are head and shoulders above their conventional brethren in design, usability, and polish.

3. EtherCalc (real-time collaborative spreadsheet)

EtherCalc screenshot
EtherCalc is a real-time collaborative spreadsheet built by Audrey Tang. It does all the things you expect a collaborative spreadsheet like Google Spreadsheets to do, except you can install it on your own server and have control over your data.
Audrey has also written a fascinating history of EtherCalc (and its predecessors, WikiCalc and SocialCalc) for the book series The Architecture of Open Source Applications. These chapters are also available on the EtherCalc web site. It's a technically meaty read that does multiple deep dives into how each feature was implemented, optimizing for the performance of various features, the constraints they were solving for the environments in which each of these spreadsheet apps were designed to be deployed, and more. Great stuff!

4. HackerSlides (Minimalist presentation editor)

HackerSlides screenshot
HackerSlides lets you write your slides in Markdown in an Ace editor while showing you a live preview on the right with Reveal.js. I've used it to write and present just about every presentation I've given since it came out (one exception for a conference that mandated a slide template). Personally, I prefer writing my slides in Markdown as I find it's so much faster to write without fussing about and moving boxes around with my mouse.
HackerSlides' author, Jack Singleton, gave a talk about it at Chaos Communication Camp earlier this year (video) and gathered a team of his colleagues to work on SandForms.

5. Etherpad (Real-time collaborative document editor)

Etherpad screenshot
Etherpad is a real-time collaborative document editor, like Google Docs, but running on your servers, not Google's. After Appjet (the original author) was acquired by Google, they open sourced the Etherpad code, where it is currently maintained for the community by John McLear and friends at the Etherpad Foundation.
Etherpad is one of the most popular open source web apps. Large groups like Mozilla and Wikimedia run instances, and so do small activist groups like La Quadrature du Net. By contrast, I self-host Etherpad on Sandstorm, where I get two practical advantages over using a shared instance: Sandstorm shows me a list of Etherpad documents I've created, and it adds security sandboxing to every app. The security features have mitigated a number of real Etherpad security issues.
This is the only app in the list maintained by someone on the Sandstorm core dev team, by the way. All four other apps are packaged for Sandstorm by the community that created them.

One login, one workspace

When I used to use conventional SaaS apps from various developer-hosts, I had to log in separately into each service, entrust them with my data, and hope that they never pulled the plug on an app that I loved. These days, when I deploy apps I use on Sandstorm, I can have all my data in one place and share access with my collaborators. Most importantly, my data lives on hardware that I control.

Get involved

Want to make your own apps available to the Sandstorm community and self-hosters everywhere, regardless of their sysadmin skills? Check out this packaging tutorial, and drop a line to the sandstorm-dev mailing list with any questions. (Maybe even ask for a community review of your app before it goes live!) By default, your app will also get a one-click live demo to help your users try out a fresh instance of your app before installing it, and we'll even help you out with app icons if you need the help.
Want to try out new apps before they get into the App Market? Want to help app authors test their almost-ready apps? Join the sandstorm-dev mailing list to help out, and be sure to assist open source app authors by reporting bugs and edge cases on their repo. I love community-driven development because everyone gets to pitch in and participate, and we're all in this together, creating better technology for everyone.
Want to connect with other open source enthusiasts who like to self-host? Join or create a Sandstorm meetup group; it's a great opportunity to show and tell your latest and greatest open source app, or learn from local experts and get help on your work in progress. Put your city on the map via this SandForms survey, and I'd love to help you get started.
Stay tuned for my upcoming roundup of open source alternatives to other popular SaaS apps. Or explore the App Market and write about your favorites.

Linux / Unix: jobs Command Examples

http://www.cyberciti.biz/faq/unix-linux-jobs-command-examples-usage-syntax

I am a new Linux and Unix user. How do I show the active jobs on Linux or Unix-like systems using BASH/KSH/TCSH or a POSIX based shell? How can I display the status of jobs in the current session on Unix/Linux?

Job control is nothing but the ability to stop/suspend the execution of processes (command) and continue/resume their execution as per your requirements. This is done using your operating system and shell such as bash/ksh or POSIX shell.
jobs command details
Description: Show the active jobs in your shell
Category: Processes Management
Root privileges: No
Estimated completion time: 10m
Your shell keeps a table of currently executing jobs, which can be displayed with the jobs command.

Purpose

Displays status of jobs in the current shell session.

Syntax

The basic syntax is as follows:
jobs
OR
jobs jobID
OR
jobs [options] jobID

Starting a few jobs for demonstration purposes

Before you start using the jobs command, you need to start a couple of jobs on your system. Type the following commands to start the jobs:
## Start xeyes, calculator, and gedit text editor ###
xeyes &
gnome-calculator &
gedit fetch-stock-prices.py &
 
Finally, run ping command in foreground:
 
ping www.cyberciti.biz
 
To suspend the ping command job, hit the Ctrl-Z key sequence.

jobs command examples

To display the status of jobs in the current shell, enter:
$ jobs
Sample outputs:
[1]   7895 Running                 gpass &
[2]   7906 Running                 gnome-calculator &
[3]-  7910 Running                 gedit fetch-stock-prices.py &
[4]+  7946 Stopped                 ping cyberciti.biz
To display the process ID or jobs for the job whose name begins with "p," enter:
$ jobs -p %p
OR
$ jobs %p
Sample outputs:
[4]-  Stopped                 ping cyberciti.biz
The % character introduces a job specification. In this example, %p matches the job whose command name begins with the string "p", in this case the suspended ping command (you could also use %ping).

How do I show process IDs in addition to the normal information?

Pass the -l (lowercase L) option to the jobs command for more information about each job listed:
$ jobs -l
Sample outputs:
Fig.01: Displaying the status of jobs in the shell

How do I list only processes that have changed status since the last notification?

First, start a new job as follows:
$ sleep 100 &
Now, to show only jobs that have stopped or exited since the last notification, type:
$ jobs -n
Sample outputs:
[5]-  Running                 sleep 100 &

Display process IDs (PIDs) only

Pass the -p option to jobs command to display PIDs only:
$ jobs -p
Sample outputs:
7895
7906
7910
7946
7949

How do I display only running jobs?

Pass the -r option to the jobs command to display running jobs only:
$ jobs -r
Sample outputs:
[1]   Running                 gpass &
[2]   Running                 gnome-calculator &
[3]-  Running                 gedit fetch-stock-prices.py &

How do I display only jobs that have stopped?

Pass the -s option to the jobs command to display stopped jobs only:
$ jobs -s
Sample outputs:
[4]+  Stopped                 ping cyberciti.biz
To resume the ping cyberciti.biz job, enter the following bg command:
$ bg %4

jobs command options

From the bash(1) command man page:
-l : Show process IDs in addition to the normal information.
-p : Show process IDs only.
-n : Show only processes that have changed status since the last notification.
-r : Restrict output to running jobs only.
-s : Restrict output to stopped jobs only.
-x : COMMAND is run after all job specifications that appear in ARGS have been replaced with the process ID of that job's process group leader (see the example below).
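For example (a small illustration based on the sample jobs above, not part of the original man-page excerpt), the following replaces %1 with the process group leader's PID for job 1 (7895 in the sample above) before running echo:
$ jobs -x echo %1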

A note about /usr/bin/jobs and shell builtin

Type the following type command to find out whether jobs is part of shell, external command or both:
$ type -a jobs
Sample outputs:
jobs is a shell builtin
jobs is /usr/bin/jobs
In almost all cases, you need to use the jobs command that is implemented as a BASH/KSH/POSIX shell built-in. The /usr/bin/jobs command cannot be used to inspect the current shell; it operates in a different environment and does not share the parent bash/ksh shell's understanding of jobs.


How do I forcefully unmount a Linux disk partition?

http://www.cyberciti.biz/tips/how-do-i-forcefully-unmount-a-disk-partition.html

Sometimes you try to unmount a disk partition, mounted CD/DVD disc, or device that is being accessed by other users, and you get the error umount: /xxx: device is busy. However, Linux and FreeBSD come with the fuser command, which can be used to kill the processes keeping a mounted partition busy. For example, you can kill all processes accessing the file system mounted at /nas01 using the fuser command.

Understanding the "device is busy" error

Linux / UNIX will not allow you to unmount a device that is busy. There are many reasons for this (such as a program accessing the partition or an open file), but the most important one is to prevent data loss. Try the following command to find out what processes have activities on the device/partition. If your device name is /dev/sda1, enter the following command as the root user:
# lsof | grep '/dev/sda1'
Output:
vi 4453       vivek    3u      BLK        8,1                 8167 /dev/sda1
The above output tells us that user vivek has a vi process running that is using /dev/sda1. All you have to do is stop the vi process and run umount again. As soon as that program terminates its task, the device will no longer be busy and you can unmount it with the following command:
# umount /dev/sda1

How do I list the users on the file-system /nas01/?

Type the following command:
# fuser -u /nas01/
# fuser -u /var/www/

Sample outputs:
/var/www:             3781rc(root)  3782rc(nginx)  3783rc(nginx)  3784rc(nginx)  3785rc(nginx)  3786rc(nginx)  3787rc(nginx)  3788rc(nginx)  3789rc(nginx)  3790rc(nginx)  3791rc(nginx)  3792rc(nginx)  3793rc(nginx)  3794rc(nginx)  3795rc(nginx)  3796rc(nginx)  3797rc(nginx)  3798rc(nginx)  3800rc(nginx)  3801rc(nginx)  3802rc(nginx)  3803rc(nginx)  3804rc(nginx)  3805rc(nginx)  3807rc(nginx)  3808rc(nginx)  3809rc(nginx)  3810rc(nginx)  3811rc(nginx)  3812rc(nginx)  3813rc(nginx)  3815rc(nginx)  3816rc(nginx)  3817rc(nginx)
The following discussion shows how to unmount a device or partition forcefully using the umount or fuser Linux commands.

Linux fuser command to forcefully unmount a disk partition

Suppose you have /dev/sda1 mounted on the /mnt directory; then you can use the fuser command as follows:
WARNING! These examples may result into data loss if not executed properly (see "Understanding device error busy error" for more information).
Type the command to unmount /mnt forcefully:
# fuser -km /mnt
Where,
  • -k : Kill processes accessing the file.
  • -m : Specifies a file on a mounted file system or a block device that is mounted. In the above example, you are using /mnt.
Linux umount command to unmount a disk partition
You can also try the umount command with the -l option on a Linux based system:
# umount -l /mnt
Where,
  • -l : Also known as a lazy unmount. Detach the filesystem from the filesystem hierarchy now, and clean up all references to the filesystem as soon as it is not busy anymore. This option works with kernel version 2.4.11 and above only.
If you would like to unmount an NFS mount point, then try the following command:
# umount -f /mnt
Where,
  • -f: Force unmount in case of an unreachable NFS system
Please note that using these commands or options can cause data loss for open files; programs which access files after the file system has been unmounted will get an error.

How to resume a large SCP file transfer on Linux

http://ask.xmodulo.com/resume-large-scp-file-transfer-linux.html

Question: I was downloading a large file using SCP, but the download transfer failed in the middle because my laptop got disconnected from the network. Is there a way to resume the interrupted SCP transfer where I left off, instead of downloading the file all over again?
Originally based on the BSD RCP protocol, SCP (secure copy) is a mechanism that allows you to transfer a file between two end points over a secure SSH connection. However, as a simple secure copy protocol, SCP does not understand range requests or partial transfers the way HTTP does. As such, popular SCP implementations like the scp command line tool cannot resume aborted downloads from lost network connections.
If you want to resume an interrupted SCP transfer, you need to rely on other programs which support range requests. One popular such program is rsync. Similar to scp, rsync can also transfer files over SSH.
Suppose you were trying to download a file (bigdata.tgz) from a remote host remotehost.com using scp, but the SCP transfer was stopped in the middle due to a stalled SSH connection. You can use the following rsync command to easily resume the stopped transfer. Note that the remote server must have rsync installed as well.
$ cd /path/to/directory/of/partially_downloaded_file
$ rsync -P --rsh=ssh userid@remotehost.com:bigdata.tgz ./bigdata.tgz
The "-P" option is the same as "--partial --progress", allowing rsync to work with partially downloaded files. The "-rsh=ssh" option tells rsync to use ssh as a remote shell.
Once the command is invoked, the rsync processes on the local and remote hosts compare the local file (./bigdata.tgz) with the remote file (userid@remotehost.com:bigdata.tgz), determine among themselves which portions of the file differ, and transfer the difference to the appropriate end. In this case, the bytes missing from the partially downloaded local file are downloaded from the remote host.

If the above rsync session itself gets interrupted, you can resume it as many times as you want by typing the same command. rsync will automatically restart the transfer where it left off.
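The same trick works in the other direction as well; for an interrupted upload you would simply swap the source and destination (again assuming rsync is installed on the remote server):
$ rsync -P --rsh=ssh ./bigdata.tgz userid@remotehost.com:bigdata.tgz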

Take Control of Your PC with UEFI Secure Boot

http://www.linuxjournal.com/content/take-control-your-pc-uefi-secure-boot

UEFI (Unified Extensible Firmware Interface) is the open, multi-vendor replacement for the aging BIOS standard, which first appeared in IBM computers in 1976. The UEFI standard is extensive, covering the full boot architecture. This article focuses on a single useful but typically overlooked feature of UEFI: secure boot.
Often maligned, you've probably encountered UEFI secure boot only when you disabled it during initial setup of your computer. Indeed, the introduction of secure boot was mired with controversy over Microsoft being in charge of signing third-party operating system code that would boot under a secure boot environment.
In this article, we explore the basics of secure boot and how to take control of it. We describe how to install your own keys and sign your own binaries with those keys. We also show how you can build a single standalone GRUB EFI binary, which will protect your system from tampering, such as cold-boot attacks. Finally, we show how full disk encryption can be used to protect the entire hard disk, including the kernel image (which ordinarily needs to be stored unencrypted).

UEFI Secure Boot

Secure boot is designed to protect a system against malicious code being loaded and executed early in the boot process, before the operating system has been loaded. This is to prevent malicious software from installing a "bootkit" and maintaining control over a computer to mask its presence. If an invalid binary is loaded while secure boot is enabled, the user is alerted, and the system will refuse to boot the tampered binary.
On each boot-up, the UEFI firmware inspects each EFI binary that is loaded and ensures that it has either a valid signature (backed by a locally trusted certificate) or that the binary's checksum is present on an allowed list. It also verifies that the signature or checksum does not appear in the deny list. Lists of trusted certificates or checksums are stored as EFI variables within the non-volatile memory used by the UEFI firmware environment to store settings and configuration data.

UEFI Key Overview

The four main EFI variables used for secure boot are shown in Figure a. The Platform Key (often abbreviated to PK) offers full control of the secure boot key hierarchy. The holder of the PK can install a new PK and update the KEK (Key Exchange Key). This is a second key, which either can sign executable EFI binaries directly or be used to sign the db and dbx databases. The db (signature database) variable contains a list of allowed signing certificates or the cryptographic hashes of allowed binaries. The dbx is the inverse of db, and it is used as a blacklist of specific certificates or hashes, which otherwise would have been accepted, but which should not be able to run. Only the KEK and db (shown in green) keys can sign binaries that may boot the system.
Figure a. Secure Boot Keys
The PK on most systems is issued by the manufacturer of the hardware, while a KEK is held by the operating system vendor (such as Microsoft). Hardware vendors also commonly have their own KEK installed (since multiple KEKs can be present). To take full ownership of a computer using secure boot, you need to replace (at a minimum) the PK and KEK, in order to prevent new keys being installed without your consent. You also should replace the signature database (db) if you want to prevent commercially signed EFI binaries from running on your system.
Secure boot is designed to allow someone with physical control over a computer to take control of the installed keys. A pre-installed manufacturer PK can be programmatically replaced only by signing it with the existing PK. With physical access to the computer, and access to the UEFI firmware environment, this key can be removed and a new one installed. Requiring physical access to the system to override the default keys is an important security requirement of secure boot to prevent malicious software from completing this process. Note that some locked-down ARM-based devices implement UEFI secure boot without the ability to change the pre-installed keys.

Testing Procedure

You can follow these procedures on a physical computer, or alternatively in a virtualized instance of the Intel Tianocore reference UEFI implementation. The ovmf package available in most Linux distributions includes this. The QEMU virtualization tool can launch an instance of ovmf for experimentation. Note that the fat: argument specifies that a directory, storage, will be presented to the virtualized firmware as a persistent storage volume. Create this directory in the current working directory, and launch QEMU:

qemu-system-x86_64 -enable-kvm -net none \
-m 1024 -pflash /usr/share/ovmf/ovmf_x64.bin \
-hda fat:storage/
Files present in this folder when starting QEMU will appear as a volume to the virtualized UEFI firmware. Note that files added to it after starting QEMU will not appear in the system—restart QEMU and they will appear. This directory can be used to hold the public keys you want to install to the UEFI firmware, as well as UEFI images to be booted later in the process.

Generating Your Own Keys

Secure boot keys are self-signed 2048-bit RSA keys, in X.509 certificate format. Note that most implementations do not support key lengths greater than 2048 bits at present. You can generate a 2048-bit keypair (with a validity period of 3650 days, or ten years) with the following openssl command:

openssl req -new -x509 -newkey rsa:2048 -keyout PK.key \
-out PK.crt -days 3650 -subj "/CN=My Secure PK/"
The CN subject can be customized as you wish, and its value is not important. The resulting PK.key is a private key, and PK.crt is the corresponding certificate (containing the public key), which you will install into the UEFI firmware shortly. You should store the private key securely on an encrypted storage device in a safe place.
Now you can carry out the same process for both the KEK and for the db key. Note that the db and KEK EFI variables can contain multiple keys (and in the case of db, SHA256 hashes of bootable binaries), although for simplicity, this article considers only storing a single certificate in each. This is more than adequate for taking control of your own computer. Once again, the .key files are private keys, which should be stored securely, and the .crt files are public certificates to be installed into your UEFI system variables.
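Following the same pattern as the PK, the commands would look something like this (the CN values are again arbitrary):

openssl req -new -x509 -newkey rsa:2048 -keyout KEK.key \
-out KEK.crt -days 3650 -subj "/CN=My Secure KEK/"
openssl req -new -x509 -newkey rsa:2048 -keyout DB.key \
-out DB.crt -days 3650 -subj "/CN=My Secure DB/"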

Taking Ownership and Installing Keys

Every UEFI firmware interface differs, and it is therefore not possible to provide step-by-step instructions on how to install your own keys. Refer to your motherboard or laptop's instruction manual, or search on-line for the maker of the UEFI firmware. Enter the UEFI firmware interface, usually by holding a key down at boot time, and locate the security menu. Here there should be a section or submenu for secure boot. Change the mode control to "custom" mode. This should allow you to access the key management menus.
Figure 1. Enabling Secure Boot and Entering Custom Mode
At this point, you should make a backup of the UEFI platform keys currently installed. You should not need this, since there should be an option within your UEFI firmware interface to restore the default keys, but it does no harm to be cautious. There should be an option to export or save the current keys to a USB Flash drive. It is best to format this with the FAT filesystem if you have any issues with it being detected.
After you have copied the backup keys somewhere safe, load the public certificate (.crt) files you created previously onto the USB Flash drive. Take care not to mix them up with the backup certificates from earlier. Enter the UEFI firmware interface, and use the option to reset or clear all existing secure boot keys.
Figure 2. Erasing the Existing Platform Key
This also might be referred to as "taking ownership" of secure boot. Your system is now in secure boot "setup" mode, which will remain until a new PK is installed. At this point, the EFI PK variable is unprotected by the system, and a new value can be loaded in from the UEFI firmware interface or from software running on the computer (such as an operating system).
Figure 3. Loading a New Key from a Storage Device
At this point, you should disable secure boot temporarily, in order to continue following this article. Your newly installed keys will remain in place for when secure boot is enabled.

Signing Binaries

After you have installed your custom UEFI signing keys, you need to sign your own EFI binaries. There are a variety of different ways to build (or obtain) these. Most modern Linux bootloaders are EFI-compatible (for example, GRUB 2, rEFInd or gummiboot), and the Linux kernel itself can be built as a bootable EFI binary since version 3.3. It's possible to sign and boot any valid EFI binary, although the approach you take here depends on your preference.
One option is to sign the kernel image directly. If your distribution uses a binary kernel, you would need to sign each new kernel update before rebooting your system. If you use a self-compiled kernel, you would need to sign each kernel after building it. This approach, however, requires you to keep on top of kernel updates and sign each image. This can become arduous, especially if you use a rolling-release distribution or test mainline release candidates. An alternative, and the approach we used in this article, is to sign a locked-down UEFI-compatible bootloader (GRUB 2 in the case of this article), and use this to boot various kernels from your system.
Some distributions configure GRUB to validate kernel image signatures against a distribution-specified public key (with which they sign all kernel binaries) and disable editing of the kernel cmdline variable when secure boot is in use. You therefore should refer to the documentation for your distribution, as the section on ensuring your boot images are encrypted would not be essential in this case.
The Linux sbsigntools package is available from the repositories of most Linux distributions and is a good first port of call when signing UEFI binaries. UEFI secure boot binaries should be signed with an Authenticode-format signature. The command of interest is sbsign, which is invoked as follows:

sbsign --key DB.key --cert DB.crt unsigned.efi \
--output signed.efi
Due to subtle variations in the implementation of the UEFI standards, some systems may reject a correctly signed binary from sbsign. The best alternative we found was to use the osslsigncode utility, which also generates Authenticode signatures. Although this tool was not specifically intended for use with secure boot, it produces signatures that match the required specification. Since osslsigncode does not appear to be commonly included in distribution repositories, you should build it from its source code. The process is relatively straightforward and simply requires running make, which will produce the executable binary. If you encounter any issues, ensure you have installed openssl and curl, which are dependencies of the package. (See Resources for a link to the source code repository.)
Binaries are signed with osslsigncode in a similar manner to sbsign (note that the hash is defined as sha256 per the UEFI specification; this should not be altered):

osslsigncode -certs DB.crt -key DB.key \
-h sha256 -in unsigned.efi -out signed.efi

Booting with UEFI

After you have signed an EFI binary (such as the GRUB bootloader binary), the obvious next step is to test it. Computers using the legacy BIOS boot technology load the initial operating system bootloader from the MBR (master boot record) of the selected boot device. The MBR contains code to load a further (and larger) bootloader held within the disk, which loads the operating system. In contrast, UEFI is designed to allow for more than one bootloader to exist on one drive, without the need for those bootloaders to cooperate or even know the others exist.
Bootable UEFI binaries are located on a storage device (such as a hard disk) within a standard path. The partition containing these binaries is referred to as the EFI System Partition. It has a partition ID of 0xEF00 in gdisk, the GPT-compatible equivalent to fdisk. This partition is conventionally located at the beginning of the filesystem and formatted with a FAT32 filesystem. UEFI-bootable binaries are then stored as files in the EFI/BOOT/ directory.
This signed binary should now boot if it is placed at EFI/BOOT/BOOTX64.EFI within the EFI system partition or an external drive, which is set as the boot device. It is possible to have multiple EFI binaries available on one EFI system partition, which makes it easier to create a multi-boot setup. For that to work however, the UEFI firmware needs a boot entry created in its non-volatile memory. Otherwise, the default filename (BOOTX64.EFI) will be used, if it exists.
To add a new EFI binary to your firmware's list of available binaries, you should use the efibootmgr utility. This tool can be found in distribution repositories and often is used automatically by the installers for popular bootloaders, such as GRUB.
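As a rough sketch, registering a signed GRUB binary as its own boot entry might look like the following; the disk, partition number, label and loader path are all example values, so adjust them to match your EFI system partition and file layout:

# register a new boot entry pointing at the signed bootloader
efibootmgr --create --disk /dev/sda --part 1 \
        --label "GRUB (signed)" --loader '\EFI\grub\grubx64.efi'
# list the resulting boot entries and boot order
efibootmgr -v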
At this point, you should re-enable secure boot within your UEFI firmware. To ensure that secure boot is operating correctly, you should attempt to boot an unsigned EFI binary. To do so, you can place a binary (such as an unsigned GRUB EFI binary) at EFI/BOOT/BOOTX64.EFI on a FAT32-formatted USB Flash drive. Use the UEFI firmware interface to set this drive as the current boot drive, and ensure that a security warning appears, which halts the boot process. You also should verify that an image signed with the default UEFI secure boot keys does not boot—an Ubuntu 12.04 (or newer) CD or bootable USB stick should allow you to verify this. Finally, you should ensure that your self-signed binary boots correctly and without error.

Installing Standalone GRUB

By default, the GRUB bootloader uses a configuration file stored at /boot/grub/grub.cfg. Ordinarily, this file could be edited by anyone able to modify the contents of your /boot partition, either by booting to another OS or by placing your drive in another computer.

Bootloader Security

Prior to the advent of secure boot and UEFI, someone with physical access to a computer was presumed to have full access to it. User passwords could be bypassed by simply adding init=/bin/bash to the kernel cmdline parameter, and the computer would boot straight up into a root shell, with full access to all files on the system.
Setting up full disk encryption is one way to protect your data from physical attack—if the contents of the hard disk is encrypted, the disk must be decrypted before the system can boot. It is not possible to mount the disk's partitions without the decryption key, so the data is protected.
Another approach is to prevent an attacker from altering the kernel cmdline parameter. This approach is easily bypassed on most computers, however, by installing a new bootloader. This bootloader need not respect the restrictions imposed by the original bootloader. In many cases, replacing the bootloader may prove unnecessary—GRUB and other bootloaders are fully configurable by means of a separate configuration file, which could be edited to bypass security restrictions, such as passwords.
Therefore, there would be no real security advantage in signing the GRUB bootloader, since the signed (and verified) bootloader would then load unsigned modules from the hard disk and use an unsigned configuration file. By having GRUB create a single, bootable EFI binary, containing all the necessary modules and configuration files, you no longer need to trust the modules and configuration file of your GRUB binary. After signing the GRUB binary, it cannot be modified without secure boot rejecting it and refusing to load. This failure would alert you to someone attempting to compromise your computer by modifying the bootloader.
As mentioned earlier, this step may not be necessary on some distributions, as their GRUB bootloader automatically will enforce similar restrictions and checks on kernels when booted with secure boot enabled. So, this section is intended for those who are not using such a distribution or who wish to implement something similar themselves for learning purposes.
To create a standalone GRUB binary, the grub-mkstandalone tool is needed. This tool should be included as part of recent GRUB 2 distribution packages:

grub-mkstandalone -d /usr/lib/grub/x86_64-efi/ \
-O x86_64-efi --modules="part_gpt part_msdos" \
--fonts="unicode" --locales="en@quot" \
--themes="" -o "/home/user/grub-standalone.efi" \
"boot/grub/grub.cfg=/boot/grub/grub.cfg"
A more detailed explanation of the arguments used here is available on the man page for grub-mkstandalone. The significant arguments are -o, which specifies the output file to be used, and the final string argument, specifying the path to the current GRUB configuration file. The resulting standalone GRUB binary is directly bootable and contains a memdisk, which holds the modules and the configuration file. This GRUB binary now can be signed and used to boot the system. Note that this process should be repeated when the GRUB configuration file is re-generated, such as after adding a new kernel, changing boot parameters or after adding a new operating system to the list, since the embedded configuration file will be out of date with the regular system one.

A Licensing Warning

As GRUB 2 is licensed under the GPLv3 (or later), this raises one consideration to be aware of. Although not a consideration for individual users (who simply can install new secure boot keys and boot a modified bootloader), if the GRUB 2 bootloader (or indeed any other GPL-v3-licensed bootloader) was signed with a private signing key, and the distributed computer system was designed to prevent the use of unsigned bootloaders, use of the GPL-v3-licensed software would not be in compliance with the licence. This is a result of the so-called anti-tivo'ization clause of GPLv3, which requires that users be able to install and execute their own modified version of GPLv3 software on a system, without being technically restricted from doing so.

Locking Down GRUB

To prevent a malicious user from modifying the kernel cmdline of your system (for example, to point to a different init binary), a GRUB password should be set. GRUB passwords are stored within the configuration file, after being hashed with a cryptographic hashing function. Generate a password hash with the grub-mkpasswd-pbkdf2 command, which will prompt you to enter a password.
The PBKDF2 function is a slow hash, designed to be computationally intensive and prevent brute-force attacks against the password. Its performance is adjusted using the -c parameter, if desired, to slow the process further on a fast computer by carrying out more rounds of PBKDF2. The default is for 10,000 rounds. After copying this password hash, it should be added to your GRUB configuration files (which normally are located in /etc/grub.d or similar). In the file 40_custom, add the following:

set superusers="root"
password_pbkdf2 root <generated-password-hash>
This will create a GRUB superuser account named root, which is able to boot any GRUB entry, edit existing boot items and enter a GRUB console. Without further configuration, this password also will be required to boot the system. If you prefer to have yet another password on boot-up, you can skip the next step. With full disk encryption in use though, there is little need to require a password on each boot-up.
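For reference, generating the hash is an interactive step; a minimal sketch, where the -c value is an arbitrary example chosen to increase the iteration count, looks like this:

grub-mkpasswd-pbkdf2 -c 100000
Enter password:
Reenter password:
PBKDF2 hash of your password is grub.pbkdf2.sha512.100000.<salt>.<hash>

The resulting grub.pbkdf2.sha512... string is what replaces the placeholder in the 40_custom snippet above.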
To remove the requirement for the superuser password to be entered on a normal boot-up, edit the standard boot menu template (normally /etc/grub.d/10-linux), and locate the line creating a regular menu entry. It should look somewhat similar to this:

echo "menuentry '$(echo "$title" | grub_quote)'
↪${CLASS} \$menuentry_id_option
↪'gnulinux-$version-$type-$boot_device_id' {" | sed
↪"s/^/$submenu_indentation/"
Change this line by adding the argument --unrestricted, before the opening curly bracket. This change tells GRUB that booting this entry does not require a password prompt. Depending on your distribution and GRUB version, the exact contents of the line may differ. The resulting line should be similar to this:

echo "menuentry '$(echo "$title" | grub_quote)' ${CLASS}
↪\$menuentry_id_option
↪'gnulinux-$version-$type-$boot_device_id'
↪--unrestricted {" | sed "s/^/$submenu_indentation/"
After adding a superuser account and configuring the need (or otherwise) for boot-up passwords, the main GRUB configuration file should be re-generated. The command for this is distribution-specific, but is often update-grub or grub-mkconfig. The standalone GRUB binary also should be re-generated and tested.

Protecting the Kernel

At this point, you should have a system capable of booting a signed (and password-protected) GRUB bootloader. An adversary without access to your keys would not be able to modify the bootloader or its configuration or modules. Likewise, attackers would not be able to change the parameters passed by the bootloader to the kernel. They could, however, modify your kernel image (by swapping the hard disk into another computer). This would then be booted by GRUB. Although it is possible for GRUB to verify kernel image signatures, this requires you to re-sign each kernel update.
An alternative approach is to use full disk encryption to protect the full system, including kernel images, the root filesystem and your home directory. This prevents someone from removing your computer's drive and accessing your data or modifying it—without knowing your encryption password, the drive contents will be unreadable (and thus unmodifiable).
Most on-line guides will show full disk encryption but leave a separate, unencrypted /boot partition (which holds the kernel and initrd images) for ease of booting. By only creating a single, encrypted root partition, there won't be an unencrypted kernel or initrd stored on the disk. You can, of course, create a separate boot partition and encrypt it using dm-crypt as normal, if you prefer.
The full process of carrying out full disk encryption including the boot partition is worthy of an article in itself, given the various distribution-specific changes necessary. A good starting point, however, is the ArchLinux Wiki (see Resources). The main difference from a conventional encryption setup is the use of the GRUB_ENABLE_CRYPTODISK=y configuration parameter, which tells GRUB to attempt to decrypt an encrypted volume prior to loading the main GRUB menu.
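As a minimal sketch, the relevant change on a standard GRUB installation is a single line in /etc/default/grub, after which the GRUB configuration (and the standalone image described earlier) should be regenerated:

# /etc/default/grub
GRUB_ENABLE_CRYPTODISK=y

# then regenerate the configuration (command name varies by distribution)
grub-mkconfig -o /boot/grub/grub.cfg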
To avoid having to enter the encryption password twice per boot-up, the system's /etc/crypttab can be used to decrypt the filesystem with a keyfile automatically. This keyfile then can be included in the (encrypted) initrd of the filesystem (refer to your distribution's documentation to find out how to add this to the initrd, so it will be included each time it is regenerated for a kernel update).
This keyfile should be owned by the root user and does not require any user or group to have read access to it. Likewise, you should give the initrd image (in the boot partition) the same protection to prevent it from being accessed while the system is powered up and the keyfile is being extracted.
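A rough sketch of the keyfile approach follows; the device name, keyfile path and mapping name are examples only, and the initrd step remains distribution-specific:

# create a random keyfile readable only by root
dd if=/dev/urandom of=/root/crypto_keyfile.bin bs=512 count=4
chmod 400 /root/crypto_keyfile.bin
# enrol the keyfile as an additional LUKS key slot
cryptsetup luksAddKey /dev/sda2 /root/crypto_keyfile.bin
# /etc/crypttab entry so the volume is unlocked with the keyfile at boot:
# cryptroot  UUID=<luks-uuid>  /root/crypto_keyfile.bin  luks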

Final Considerations

UEFI secure boot allows you to take control over what code can run on your computer. Installing your own keys allows you to prevent malicious people from easily booting their own code on your computer. Combining this with full disk encryption will keep your data protected against unauthorized access and theft, and prevent an attacker from tricking you into booting a malicious kernel.
As a final step, you should apply a password to your UEFI setup interface, in order to prevent a physical attacker from gaining access to your computer's setup interface and installing their own PK, KEK and db key, as these instructions did. You should be aware, however, that a weakness in your motherboard or laptop's implementation of UEFI could potentially allow this password to be bypassed or removed, and that the ability to re-flash the UEFI firmware through a "rescue mode" on your system could potentially clear NVRAM variables. Nonetheless, by taking control of secure boot and using it to protect your system, you should be better protected against malicious software or those with temporary physical access to your computer.

Resources

Information about third-party secure boot keys: http://mjg59.dreamwidth.org/23400.html
More information about the keys and inner workings of secure boot: http://blog.hansenpartnership.com/the-meaning-of-all-the-uefi-keys
osslsigncode repository: http://sourceforge.net/projects/osslsigncode
ArchLinux Wiki instructions for fully encrypted systems: https://wiki.archlinux.org/index.php/Dm-crypt/Encrypting_an_entire_system#Encrypted_boot_partition_.28GRUB.29
Guide for full-disk encryption including kernel image: http://www.pavelkogan.com/2014/05/23/luks-full-disk-encryption
Fedora Wiki on its secure boot implementation: https://fedoraproject.org/wiki/Features/SecureBoot

Linux / Unix Curl: Find Out If a Website Is Using Gzip / Deflate

http://www.cyberciti.biz/faq/linux-unix-curl-gzip-compression-test

How do I find out if a web-page is gzipped or compressed using Unix command line utility called curl? How do I make sure mod_deflate or mod_gzip is working under Apache web server?

When content is compressed, downloads are faster because the files are smaller; in many cases they are less than a quarter of the original size. This is very useful for JavaScript, CSS, and HTML files, because faster downloads translate into faster rendering of web pages for the end user. The mod_deflate or mod_gzip Apache module provides the DEFLATE output filter, which allows output from your server to be compressed before being sent to the client over the network. Most modern web browsers support this feature. You can use the curl command to find out if a web page is gzipped or not using the following simple syntax.

Syntax

The syntax is:

curl -I -H 'Accept-Encoding: gzip,deflate' http://example.com

OR

curl -s -I -L -H 'Accept-Encoding: gzip,deflate' http://example.com

Where,
  1. -s - Don't show progress meter or error messages.
  2. -I - Work on the HTTP-header only.
  3. -H 'Accept-Encoding: gzip,deflate' - Send extra header in the request when sending HTTP to a server.
  4. -L - If the server reports that the requested page has moved to a different location (indicated with a Location: header and a 3XX response code), this option will make curl redo the request on the new location.
  5. http://example.com - Your URL, it can start with http or https.

Examples

Type the following command:
 
curl -I -H 'Accept-Encoding: gzip,deflate' http://www.cyberciti.biz/
 
Sample outputs:
HTTP/1.1 200 OK
Server: nginx
Date: Tue, 06 Nov 2012 18:59:26 GMT
Content-Type: text/html
Connection: keep-alive
X-Whom: l2-com-cyber
Vary: Cookie
Vary: Accept-Encoding
Last-Modified: Tue, 06 Nov 2012 18:51:58 GMT
Cache-Control: max-age=152, must-revalidate
Content-Encoding: gzip
X-Galaxy: Andromeda-1
X-Origin-Type: DynamicViaDAL

Curl command accept-encoding gzip bash test function

Create a bash shell function and add to your ~/.bashrc file:
 
gzipchk(){ curl -I -H 'Accept-Encoding: gzip,deflate' "$@" | grep --color 'Content-Encoding:'; }
 
OR use the silent mode to hide progress bar:
 
gzipchk(){ curl -sILH 'Accept-Encoding: gzip,deflate' "$@" | grep --color 'Content-Encoding:'; }
 
Save and close the file. Reload ~/.bashrc file, run:
$ source ~/.bashrc
Test the gzipchk() as follows:
$ gzipchk www.cyberciti.biz
$ gzipchk http://www.redhat.com

Sample outputs:
Fig.01: Linux curl deflate gzip test in action
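Another rough way to confirm compression is working is to compare the number of bytes transferred with and without the Accept-Encoding header; the URL below is just an example:

# download size without compression
curl -so /dev/null -w 'uncompressed: %{size_download} bytes\n' http://www.cyberciti.biz/
# download size when the server is allowed to gzip the response
curl -so /dev/null -H 'Accept-Encoding: gzip,deflate' -w 'compressed: %{size_download} bytes\n' http://www.cyberciti.biz/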

Linux: Use smartctl To Check Disk Behind Adaptec RAID Controllers

http://www.cyberciti.biz/faq/linux-checking-sas-sata-disks-behind-adaptec-raid-controllers

I can use the "smartctl -d ata -a /dev/sdb" command to read the health status of a hard disk directly connected to my system. But how do I use the smartctl command to check a SAS or SCSI disk behind an Adaptec RAID controller from the shell prompt on a Linux operating system?

You need to use the following syntax to check SATA or SAS disks behind an Adaptec RAID controller. Such controllers typically present one (logical) disk to the OS for each array of (physical) disks. For Adaptec RAID controllers, the /dev/sgX devices can be used as pass-through I/O controls providing direct access to each physical disk.

Is my Adaptec RAID card detected by Linux?

Type the following command:
# lspci | egrep -i 'raid|adaptec'
Sample outputs:
81:00.0 RAID bus controller: Adaptec AAC-RAID (rev 09)

Download and install Adaptec Storage Manager

You need to install Adaptec Storage Manager for your Linux distribution as per installed RAID card. Visit this site to grab the software.

SATA Health Check Disk Syntax

To scan disk, enter:
# smartctl --scan
Sample outputs:
/dev/sda -d scsi # /dev/sda, SCSI device
So /dev/sda is one device reported as a SCSI device. This RAID device is made of 4 disks located at /dev/sg{1,2,3,4}. Type the following smartctl command to check a disk behind the /dev/sda raid:
# smartctl -d sat --all /dev/sgX
# smartctl -d sat --all /dev/sg1

Ask the device to report its SMART health status or pending TapeAlert message if any, run:
# smartctl -d sat --all /dev/sg1 -H
For SAS disk use the following syntax:
# smartctl -d scsi --all /dev/sgX
# smartctl -d scsi --all /dev/sg1
### Ask the device to report its SMART health status or pending TapeAlert message ###
# smartctl -d scsi --all /dev/sg1 -H

Sample outputs:
smartctl version 5.38 [x86_64-redhat-linux-gnu] Copyright (C) 2002-8 Bruce Allen
Home page is http://smartmontools.sourceforge.net/
 
Device: SEAGATE ST3146855SS Version: 0002
Serial number: xxxxxxxxxxxxxxx
Device type: disk
Transport protocol: SAS
Local Time is: Wed Jul 7 04:34:30 2010 CDT
Device supports SMART and is Enabled
Temperature Warning Enabled
SMART Health Status: OK
 
Current Drive Temperature: 24 C
Drive Trip Temperature: 68 C
Elements in grown defect list: 0
Vendor (Seagate) cache information
Blocks sent to initiator = 1857385803
Blocks received from initiator = 1967221471
Blocks read from cache and sent to initiator = 804439119
Number of read and write commands whose size <= segment size = 312098925
Number of read and write commands whose size > segment size = 45998
Vendor (Seagate/Hitachi) factory information
number of hours powered up = 13224.42
number of minutes until next internal SMART test = 42
 
Error counter log:
Errors Corrected by Total Correction Gigabytes Total
ECC rereads/ errors algorithm processed uncorrected
fast | delayed rewrites corrected invocations [10^9 bytes] errors
read: 589840491058984050589840503151.7300
write: 000009921230881.6000
verify: 130800130813080.0000
 
Non-medium error count: 0
No self-tests have been logged
Long (extended) Self Test duration: 1367 seconds [22.8 minutes]
 
Here is another output from SAS based disk called /dev/sg2
# smartctl -d scsi --all /dev/sg2 -H
Sample outputs:
Fig.01: How To Check Hardware Raid Status in Linux Command Line

Replace /dev/sg1 with your disk number. If you have a RAID 10 array with 4 disks, then the pass-through devices typically map as follows (a quick health-check loop over these devices is sketched after the list):
  • /dev/sg0 - RAID 10 controller (you will not get any info or /dev/sg0).
  • /dev/sg1 - First disk in RAID 10 array.
  • /dev/sg2 - Second disk in RAID 10 array.
  • /dev/sg3 - Third disk in RAID 10 array.
  • /dev/sg4 - Fourth disk in RAID 10 array.
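Under the assumption that the member disks appear as /dev/sg1 through /dev/sg4 as listed above, a quick sweep of their health status might look like this:

# loop over the pass-through devices and print each SMART health status
for d in /dev/sg{1..4}; do echo "== $d =="; smartctl -d scsi -H "$d"; done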

How do I run hard disk check?

Type the following command:
# smartctl -t short -d scsi /dev/sg2
# smartctl -t long -d scsi /dev/sg2

Where,
  1. -t short : Run short test.
  2. -t long : Run long test.
  3. -d scsi : Specify scsi as device type.
  4. --all : Show all SMART information for device.

How do I use Adaptec Storage Manager?

Another simple command to just check basic status is as follows:
# /usr/StorMan/arcconf getconfig 1 | more
# /usr/StorMan/arcconf getconfig 1 | grep State
# /usr/StorMan/arcconf getconfig 1 | grep -B 3 State

Sample outputs:
----------------------------------------------------------------------
Device #0
Device is a Hard drive
State : Online
--
S.M.A.R.T. : No
Device #1
Device is a Hard drive
State : Online
--
S.M.A.R.T. : No
Device #2
Device is a Hard drive
State : Online
--
S.M.A.R.T. : No
Device #3
Device is a Hard drive
State : Online
 
Please note that the newer version of arcconf is located in the /usr/Adaptec_Event_Monitor directory, so your full path must be as follows:
# /usr/Adaptec_Event_Monitor/arcconf getconfig [AD | LD [LD#] | PD | MC | [AL]] [nologs]
Where getconfig prints controller configuration information and accepts the following options:

Option AD : Adapter information only
LD : Logical device information only
LD# : Optionally display information about the specified logical device
PD : Physical device information only
MC : Maxcache 3.0 information only
AL : All information (optional)

How do I check the health of my Adaptec RAID array itself on Linux?

Simply use the following command:
# /usr/Adaptec_Event_Monitor/arcconf getconfig 1
OR (older version)
# /usr/StorMan/arcconf getconfig 1
Sample outputs:
Fig.02: Device #1 is Online, while Device #2 is Failed i.e. you have a degraded array.


Protecting Apache Server From Denial-of-Service (Dos) Attack


http://www.unixmen.com/protecting-apache-server-denial-service-dos-attack

A Denial-of-Service (DoS) attack is an attempt to make a machine or network resource unavailable to its intended users, for example by temporarily or indefinitely interrupting or suspending the services of a host connected to the Internet. A distributed denial-of-service (DDoS) attack is one where the attack comes from more than one (often thousands of) unique IP addresses.

What is mod_evasive?

mod_evasive is an evasive maneuvers module for Apache that provides evasive action in the event of an HTTP DoS, DDoS, or brute force attack. It is also designed to be a detection and network management tool, and can easily be configured to talk to ipchains, firewalls, routers, and so on. mod_evasive presently reports abuses via email and syslog facilities.

Installing mod_evasive

  • Server Distro: Debian 8 jessie
  • Server IP: 10.42.0.109
  • Apache Version: Apache/2.4.10
mod_evasive is available in the official Debian repository, so we can install it with apt:
# apt-get update
# apt-get install libapache2-mod-evasive

Setting up mod_evasive

We now have mod_evasive installed but not configured. Its configuration lives at /etc/apache2/mods-available/evasive.conf, which is the file we will be editing; it should look similar to this:

#DOSHashTableSize    3097
#DOSPageCount        2
#DOSSiteCount        50
#DOSPageInterval     1
#DOSSiteInterval     1
#DOSBlockingPeriod   10
#DOSEmailNotify      you@yourdomain.com
#DOSSystemCommand    "su - someuser -c '/sbin/... %s ...'"
#DOSLogDir           "/var/log/mod_evasive"

mod_evasive Configuration Directives

  • DOSHashTableSize
    This directive defines the hash table size, i.e. the number of top-level nodes for each child’s hash table. Increasing this number will provide faster performance by decreasing the number of iterations required to get to the record, but will consume more memory for table space. It is advisable to increase this parameter on heavy load web servers.
  • DOSPageCount:
    This sets the threshold for the total number of hits on the same page (or URI) per page interval. Once this threshold is reached, the client IP is locked out, its requests are answered with 403, and the IP is added to the blacklist.
  • DOSSiteCount:
    This sets the threshold for the total number of requests for any object by the same client IP per site interval. Once this threshold is reached, the client IP is added to the blacklist.
  • DOSPageInterval:
    The page count interval, accepts real number as seconds. Default value is 1 second
  • DOSSiteInterval:
    The site count interval, accepts real number as seconds. Default value is 1 second
  • DOSBlockingPeriod:
    This directive sets the amount of time that a client will be blocked for if they are added to the blocking list. During this time, all subsequent requests from the client will result in a 403 (Forbidden) response and the timer will be reset (e.g. for another 10 seconds). Since the timer is reset for every subsequent request, it is not necessary to have a long blocking period; in the event of a DoS attack, this timer will keep getting reset. The interval is specified in seconds and may be a real number.
  • DOSEmailNotify:
    If provided, an email notification will be sent to this address whenever an IP is blacklisted.
  • DOSSystemCommand:
    A system command that can be executed once an IP is blacklisted, if enabled. %s is replaced with the blacklisted IP; this is designed for system calls to IP filters or other tools.
  • DOSLogDir:
    The directory where mod_evasive stores its logs.
The following configuration is what I'm using; it works well, and I recommend it if you are not sure how to go about the configuration:

DOSHashTableSize    2048
DOSPageCount        5
DOSSiteCount        100
DOSPageInterval     1
DOSSiteInterval     2
DOSBlockingPeriod   10
DOSEmailNotify      you@yourdomain.com
#DOSSystemCommand    "su - someuser -c '/sbin/... %s ...'"
DOSLogDir           "/var/log/mod_evasive"
Be sure to replace you@yourdomain.com with your email address. Since mod_evasive doesn't create the log directory automatically, we need to create it ourselves:
# mkdir /var/log/mod_evasive
# chown :www-data /var/log/mod_evasive
# chmod 771 /var/log/mod_evasive
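If you have trusted hosts that should never be blocked (for example a monitoring server or your own workstation), mod_evasive also provides a DOSWhitelist directive that can be added to the same evasive.conf; wildcards are allowed in the final octets, and the addresses below are only examples:

DOSWhitelist   127.0.0.1
DOSWhitelist   10.42.0.*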
Once setup is done, make sure mod_evasive is enabled by typing:
# a2enmod evasive
Restart Apache for changes to take effect
# systemctl restart apache2

Testing mod_evasive Setup

With mod_evasive set up correctly, we are now going to test whether our web server has protection against DoS attacks using ab (Apache Benchmark). Install ab if you don't have it by typing:
# apt-get install apache2-utils
Current state of our /var/log/mod_evasive directory:
root@debian-server:/var/log/mod_evasive# ls -l
total 0
root@debian-server:/var/log/mod_evasive#
We will now send bulk requests to the server, simulating a DoS attack, by typing:
# ab -n 100 -c 10 http://10.42.0.109/
This is ApacheBench, Version 2.3 <$Revision: 1604373 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 10.42.0.109 (be patient).....done
Server Software:        Apache/2.4.10
Server Hostname:        10.42.0.109
Server Port:            80
Document Path:          /
Document Length:        11104 bytes
Concurrency Level:      10
Time taken for tests:   0.205 seconds
Complete requests:      100
Failed requests:        70
(Connect: 0, Receive: 0, Length: 70, Exceptions: 0)
Non-2xx responses:      70
Total transferred:      373960 bytes
HTML transferred:       353140 bytes
Requests per second:    488.51 [#/sec] (mean)
Time per request:       20.471 [ms] (mean)
Time per request:       2.047 [ms] (mean, across all concurrent requests)
Transfer rate:          1784.01 [Kbytes/sec] received
Connection Times (ms)
min  mean[+/-sd] median   max
Connect:        0    1   1.5      1       7
Processing:     3   15  28.0     10     177
Waiting:        2   14  28.0      9     176
Total:          3   17  28.4     12     182
Percentage of the requests served within a certain time (ms)
50%     12
66%     13
75%     14
80%     15
90%     18
95%     28
98%    175
99%    182
100%    182 (longest request)
After sending 100 requests at a concurrency of 10, the current state of my /var/log/mod_evasive directory is now:
root@debian-server:/var/log/mod_evasive# ls -l
total 4
-rw-r--r-- 1 www-data www-data 5 Dec 15 22:10 dos-10.42.0.1
Checking the Apache access logs at /var/log/apache2/access.log, we can see that all connections from ApacheBench/2.3 were answered with 403:
As you can see, with mod_evasive you can reduce the impact of a DoS attack, something that Nginx doesn't have ;)

A simple way to install and configure puppet on CentOS 6

http://techarena51.com/index.php/a-simple-way-to-install-and-configure-a-puppet-server-on-linux


Puppet is an automation tool which allows you to automate the configuration of software like apache and nginx across multiple servers.
Puppet installation
In this tutorial we will be installing Puppet in the agent/master mode. You can install it in a standalone mode as well.
OS & software Versions
Centos 6.5
Linux kernel 2.6.32
Puppet 3.6.2
Let’s get to it then.
Puppet server configuration
#Add Puppet repos 
[user@puppet ~]# sudo rpm -ivh http://yum.puppetlabs.com/puppetlabs-release-el-6.noarch.rpm

[user@puppet ~]# sudo yum install puppet-server

# Add your puppet server hostnames to the conf file under the [main] section
[user@puppet ~]# sudo vim /etc/puppet/puppet.conf

dns_alt_names = puppet,puppet.yourserver.com

[user@puppet ~]# sudo service puppetmaster start
Puppet listens on port 8140; be sure to open it in CSF or your firewall.
Puppet client configuration
#Add Puppet repos 
[user@client ~]# sudo rpm -ivh http://yum.puppetlabs.com/puppetlabs-release-el-6.noarch.rpm

[user@client ~]# sudo yum install puppet

#Open the conf file and add the puppet server hostname
[user@client ~]#sudo vim /etc/puppet/puppet.conf
[main]
# The puppetmaster server
server=puppet.yourserver.com



[user@client ~]# sudo service puppet start
In the log file you should see the following lines.
info: Creating a new SSL key for vps.client.com
warning: peer certificate won't be verified in this SSL session
info: Caching certificate for ca
warning: peer certificate won't be verified in this SSL session
warning: peer certificate won't be verified in this SSL session
info: Creating a new SSL certificate request for agent1.localdomain
info: Certificate Request fingerprint (md5): FD:E7:41:C9:5C:B7:5C:27:11:0C:8F:9C:1D:F6:F9:46
warning: peer certificate won't be verified in this SSL session
warning: peer certificate won't be verified in this SSL session
warning: peer certificate won't be verified in this SSL session
Exiting; no certificate found and waitforcert is disabled
Puppet uses SSL to communicate with its clients. When you start puppet on a client, it will automatically connect to the puppet server named in its conf file and request that its certificate be signed.
On the puppet server run
[user@puppet ~]# sudo  puppet cert list
vps.client.com (FD:E7:41:C9:2C:B7:5C:27:11:0C:8F:9C:1D:F6:F9:46)

[user@puppet ~]# sudo puppet cert sign vps.client.com
notice: Signed certificate request for vps.client.com
notice: Removing file Puppet::SSL::CertificateRequest vps.client.com at '/etc/puppetlabs/puppet/ssl/ca/requests/vps.client.pem'
Now our client server “vps.client.com” is authorized to fetch and apply configurations from the puppet server. To understand how puppet ssl works and to troubleshoot any issues you can read http://docs.puppetlabs.com/learning/agent_master_basic.html
Let’s look at a sample puppet configuration.
Installing apache web server with puppet
Although puppet server configuration is stored in “/etc/puppet/puppet.conf”, client configurations are stored in files called manifests.
#On the puppet server run
[user@puppet ~]# sudo vim /etc/puppet/manifests/site.pp

node 'vps.client.com' {

package { 'httpd' :
ensure => installed,
}
}
The configuration is pretty self explanatory, the first line indicates that we need to install this configuration on a client machine with the hostname ‘vps.client.com’. If you want to apply the configuration to the puppet server then replace ‘vps.client.com’ with ‘default’ .
Read node definitions for multiple node configurations.
The next two lines tell puppet that we need to ensure that the apache web server is installed. Puppet will check if apache is installed and if not, install it.
Think of a “package” as an object, “httpd” as the name of the object and “ensure => present” as the action to be performed on the object.
So if I wanted puppet to install a mysql database server, the configuration would be
node 'vps.client.com' {
package { 'mysql-server' :
ensure => installed,
}
}
The puppet server will compile this configuration into a catalog and serve it to a client when a request is sent to it.
How do I pull my configuration to a client immediately?
Puppet clients usually pull their configuration once every 30 minutes, but you can pull a configuration immediately by running "service puppet restart" or the following command.
[user@puppet ~]# sudo puppet agent --test
What if I wanted puppet to add a user ‘Tom’?
Then the object would be user, the name of the object would be ‘tom’ and the action would be ‘present’.
node 'vps.client.com' {

user { 'tom' :
ensure => present,
}
}
In puppet terms, these objects are known as Resources, the name of the objects are Titles and the actions are called Attributes.
Puppet has a number of these resources to help ease your automation; you can read about them at http://docs.puppetlabs.com/references/latest/type.html
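If you want to experiment with a resource type without editing site.pp, the puppet resource command can inspect or apply a single resource ad hoc from the shell. A quick sketch (the user name tom is only an example):

#Show how Puppet currently models the user (printed as Puppet code)
[user@client ~]# sudo puppet resource user tom

#Apply the same resource directly from the command line
[user@client ~]# sudo puppet resource user tom ensure=present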
How to ensure a service is running with puppet?
Once you have a package like apache installed, you will want to ensure that it is running. On the command line you can do this with the service command; in puppet, however, you will need to use the manifest file and add the configuration as follows.
node 'vps.client.com' {

package { 'httpd' :
ensure => installed,
}
->
service { 'httpd' : #Our resource and its title
ensure => running, #Action to be performed on the resource (its attribute)
enable => true, # Start apache at boot
}

}
Now you may have noticed that I added an "->" symbol. This is because Puppet is not particular about ordering, but we want the service command to run only after apache is installed and not before; the arrow tells Puppet to apply the service resource only after "httpd" is installed.
To learn more, read the Puppet documentation on relationships and ordering.
How to automate installation of predefined conf files?
You may want to have a customised apache conf file for this client, which will have the vhost entry and other specific parameters you choose. In this case we need to use the file resource.
Before we go into the configuration, you should know how puppet serves files. A Puppet server provides access to custom files via mount points. One such mount point by default is the modules directory.
The modules directory is where you would add your modules. Modules make it easier to reuse configurations, rather than having to write configurations for every node we can store them as a module and call them whenever we like.
In order to write a module, you need to create a subdirectory inside the modules directory with the module name and create a manifest file called init.pp which should contain a class with the same name as the subdirectory.
[user@puppet ~]# cd /etc/puppet/modules
[user@puppet ~]# mkdir httpd
[user@puppet ~]# mkdir -p httpd/manifests httpd/files
[user@puppet ~]# vim httpd/manifests/init.pp


class httpd { #Same name as our Sub Directory

package { 'httpd':
ensure => present,

}
->
file {'/etc/httpd/conf/httpd.conf': #Path to file on the client we want puppet to administer
ensure => file, #Ensure it is a file,
mode => 0644, #Permissions for the file
source => 'puppet:///modules/httpd/httpd.conf', #Path to our customised file on the puppet server
}

->
service { 'httpd':
ensure => running,
enable => true,
subscribe => File['/etc/httpd/conf/httpd.conf'] # Restart the service if any change is made to httpd.conf

}
}
You need to add your custom httpd.conf file in the files subdirectory located at “/etc/puppet/modules/httpd/files/”
To understand how the URI in the source attribute works, read http://docs.puppetlabs.com/guides/file_serving.html
Now call the module in our main manifest file.
[user@puppet ~]#sudo vim /etc/puppet/manifests/site.pp

node 'vps.client.com' {

include httpd

}
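Before waiting for the next scheduled run, you can sanity-check the manifests on the server and then trigger an immediate run on the client. A quick sketch, using the paths from this tutorial:

#On the puppet server, validate the manifest syntax
[user@puppet ~]# sudo puppet parser validate /etc/puppet/manifests/site.pp /etc/puppet/modules/httpd/manifests/init.pp

#On the client, fetch and apply the new catalog right away
[user@client ~]# sudo puppet agent --test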

In case you need a web interface to manage your Linux servers, read my tutorial Using Foreman, an Opensource Frontend for Puppet.
Update: For more Automation and other System Administration/Devops Guides see https://github.com/Leo-G/DevopsWiki
Puppet FAQ
How do I change the time interval for a client to fetch it’s configuration from the server ?
Add “runinterval = 3600 “ under [main] section in “/etc/puppet/puppet.conf” on the client.
Time is in seconds.
How do I install modules from puppet forge?
[user@puppet ~]#sudo puppet module install "full module name"

#Example
[user@puppet ~]#sudo puppet module install puppetlabs-mysql
For more modules, browse the Puppet Forge; for publishing your own modules, read http://docs.puppetlabs.com/puppet/latest/reference/modules_publishing.html

How To Avoid Sudden Outburst Of Backup Shell Script or Program Disk I/O on Linux

http://www.cyberciti.biz/tips/linux-set-io-scheduling-class-priority.html

A sudden outburst of violent disk I/O activity can bring down your email or web server. Usually, a web, mysql, or mail server serving millions and millions of pages (requests) per month is prone to this kind of problem. Backup activity can increase the current system load too. To avoid this kind of sudden outburst problem, run your script with a defined I/O scheduling class and priority. Linux comes with various utilities to manage this kind of madness.

CFQ scheduler

You need Linux kernel 2.6.13+ with the CFQ I/O scheduler. CFQ (Completely Fair Queuing) is an I/O scheduler for the Linux kernel, which is the default in 2.6.18+ kernels. RHEL 4/5 and SuSE Linux have all schedulers built into the kernel, so there is no need to rebuild your kernel. To find out your scheduler name, enter:
# for d in /sys/block/sd[a-z]/queue/scheduler; do echo "$d => $(cat $d)" ; done
Sample output for each disk:
/sys/block/sda/queue/scheduler => noop anticipatory deadline [cfq]
/sys/block/sdb/queue/scheduler => noop anticipatory deadline [cfq]
/sys/block/sdc/queue/scheduler => noop anticipatory deadline [cfq]
CFQ is default and recommended for good performance.
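If one of your disks shows a different scheduler between the brackets, you can switch it to CFQ at runtime; the change is not persistent across reboots, and sda below is just an example device:

# echo cfq > /sys/block/sda/queue/scheduler
# cat /sys/block/sda/queue/scheduler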

Old good nice program

You can run a program with modified scheduling priority using nice command (19 = least favorable):
# nice -n19 /path/to/backup.sh
Sample cronjob:
@midnight /bin/nice -n19 /path/to/backup.sh

Say hello to ionice utility

The ionice command provides better control than the nice command over the I/O scheduling class and priority of a program or script. It supports the following three scheduling classes (quoting from the man page):
  • Idle : A program running with idle io priority will only get disk time when no other program has asked for disk io for a defined grace period. The impact of idle io processes on normal system activity should be zero. This scheduling class does not take a priority argument.
  • Best effort : This is the default scheduling class for any process that hasn’t asked for a specific io priority. Programs inherit the CPU nice setting for io priorities. This class takes a priority argument from 0-7, with a lower number being higher priority. Programs running at the same best effort priority are served in a round-robin fashion. This is usually recommended for most applications.
  • Real time : The RT scheduling class is given first access to the disk, regardless of what else is going on in the system. Thus the RT class needs to be used with some care, as it can starve other processes. As with the best effort class, 8 priority levels are defined denoting how big a time slice a given process will receive on each scheduling window. This should be avoided on heavily loaded systems.

Syntax

The syntax is:
 
ionice options PID
ionice options -p PID
ionice -c1 -n0 PID
 

How do I use the ionice command on Linux?

Linux refers to the scheduling classes using the following numbers and priorities:
Scheduling class   Number   Possible priority
real time          1        8 priority levels are defined denoting how big a time slice a given process will receive on each scheduling window
best-effort        2        0-7, with lower number being higher priority
idle               3        Nil (does not take a priority argument)

Examples

To display the class and priority of the running process, enter:
# ionice -p {PID}
# ionice -p 1

Sample output:
none: prio 0
Dump a full web server disk / mysql or pgsql database backup using the best-effort scheduling class (2) and priority 7:
# /usr/bin/ionice -c2 -n7 /root/scripts/nas.backup.full
Open another terminal and watch disk I/O and network stats using atop or top or your favorite monitoring tool:
# atop
Sample cronjob:
@weekly /usr/bin/ionice -c2 -n7 /root/scripts/nas.backup.full >/dev/null 2>&1
You can set process with PID 1004 as an idle io process, enter:
# ionice -c3 -p 1004
Runs rsync.sh script as a best-effort program with highest priority, enter:
# ionice -c2 -n0 /path/to/rsync.sh
Type the following command to run 'zsh' as a best-effort program with highest priority.
# ionice -c 2 -n 0 zsh
Finally, you can combine both nice and ionice together:
# nice -n 19 ionice -c2 -n7 /path/to/shell.script
Related: the chrt command to set / manipulate real time attributes of a Linux process, and the taskset command to retrieve or set a process's CPU affinity.
To see help on options type:
$ ionice --help
Sample outputs:
 
Sets or gets the IO scheduling class and priority of processes.
 
Usage:
ionice [options] -p ...
ionice [options] -P ...
ionice [options] -u ...
ionice [options]
 
Options:
-c, --class name or number of scheduling class,
0: none, 1: realtime, 2: best-effort, 3: idle
-n, --classdata priority (0..7) in the specified scheduling class,
only for the realtime and best-effort classes
-p, --pid ... act on these already running processes
-P, --pgid ... act on already running processes in these groups
-t, --ignore ignore failures
-u, --uid ... act on already running processes owned by these users
 
-h, --help display this help and exit
-V, --version output version information and exit
 

Other suggestions to improve disk I/O

  1. Use a hardware RAID controller.
  2. Use fast SCSI / SAS (15k RPM) disks.
  3. Use fast SSD based storage (costly option).
  4. Use a slave / passive server to back up MySQL.

How to install RegRipper registry data extraction tool on Linux

http://linuxconfig.org/how-to-install-regripper-registry-data-extraction-tool-on-linux#h5-regripper-command-examples

RegRipper is an open source forensic tool used for Windows Registry data extraction, available as a command line or GUI tool. It is written in Perl, and this article will describe the RegRipper command line tool installation on Linux systems such as Debian, Ubuntu, Fedora, CentOS or Red Hat. For the most part, the installation process of the RegRipper command line tool is OS agnostic, except for the part where we deal with installation prerequisites.

1. Pre-requisites

First we need to install all prerequisites. Choose the relevant command below based on the Linux distribution you are running:
DEBIAN/UBUNTU
# apt-get install cpanminus make unzip wget
FEDORA
# dnf install perl-App-cpanminus.noarch make unzip wget perl-Archive-Extract-gz-gzip.noarch which
CENTOS/REDHAT
# yum install perl-App-cpanminus.noarch make unzip wget perl-Archive-Extract-gz-gzip.noarch which

2. Installation of required libraries

The RegRipper command line tool depends on the Perl Parse::Win32Registry library. The following commands will take care of this prerequisite and install the library into the /usr/local/lib/rip-lib directory:
# mkdir /usr/local/lib/rip-lib
# cpanm -l /usr/local/lib/rip-lib Parse::Win32Registry

3. RegRipper script installation

At this stage we are ready to install the rip.pl script. The script is intended to run on MS Windows systems, so we need to make some small modifications. We will also include a path to the Parse::Win32Registry library installed above. Download the RegRipper source code from https://regripper.googlecode.com/files/. The current version is 2.8:
#  wget -q https://regripper.googlecode.com/files/rrv2.8.zip
Extract rip.pl script:
# unzip -q rrv2.8.zip rip.pl 
Remove the interpreter line and the unwanted DOS newline characters (^M):

# tail -n +2 rip.pl > rip
# perl -pi -e 'tr[\r][]d' rip
Modify the script to include an interpreter relevant to your Linux system and also include the library path to Parse::Win32Registry:
# sed -i "1i #!`which perl`" rip
# sed -i '2i use lib qw(/usr/local/lib/rip-lib/lib/perl5/);' rip
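After these two sed commands, the top of the script should look something like the following; the exact perl path will reflect the output of which perl on your system:

# head -2 rip
#!/usr/bin/perl
use lib qw(/usr/local/lib/rip-lib/lib/perl5/);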
Install your RegRipper rip script and make it executable:
# cp rip /usr/local/bin
# chmod +x /usr/local/bin/rip

4. RegRipper Plugins installation

Lastly, we need to install RegRipper's Plugins.
# wget -q https://regripper.googlecode.com/files/plugins20130429.zip
# mkdir /usr/local/bin/plugins
# unzip -q plugins20130429.zip -d /usr/local/bin/plugins
RegRipper registry data extraction tool is now installed on your system and available via rip command:
# rip
Rip v.2.8 - CLI RegRipper tool
Rip [-r Reg hive file] [-f plugin file] [-p plugin module] [-l] [-h]
Parse Windows Registry files, using either a single module, or a plugins file.

-r Reg hive file...Registry hive file to parse
-g ................Guess the hive file (experimental)
-f [profile].......use the plugin file (default: plugins\plugins)
-p plugin module...use only this module
-l ................list all plugins
-c ................Output list in CSV format (use with -l)
-s system name.....Server name (TLN support)
-u username........User name (TLN support)
-h.................Help (print this information)

Ex: C:\>rip -r c:\case\system -f system
C:\>rip -r c:\case\ntuser.dat -p userassist
C:\>rip -l -c

All output goes to STDOUT; use redirection (ie, > or >>) to output to a file.

copyright 2013 Quantum Analytics Research, LLC

5. RegRipper command examples

A few examples using RegRipper and an NTUSER.DAT registry hive file.

List all available plugins:
$ rip -l -c
List software installed by the user:
$ rip -p listsoft -r NTUSER.DAT
Launching listsoft v.20080324
listsoft v.20080324
(NTUSER.DAT) Lists contents of user's Software key

listsoft v.20080324
List the contents of the Software key in the NTUSER.DAT hive
file, in order by LastWrite time.

Mon Dec 14 06:06:41 2015Z Google
Mon Dec 14 05:54:33 2015Z Microsoft
Sun Dec 29 16:44:47 2013Z Bitstream
Sun Dec 29 16:33:11 2013Z Adobe
Sun Dec 29 12:56:03 2013Z Corel
Thu Dec 12 07:34:40 2013Z Clients
Thu Dec 12 07:34:40 2013Z Mozilla
Thu Dec 12 07:30:08 2013Z MozillaPlugins
Thu Dec 12 07:22:34 2013Z AppDataLow
Thu Dec 12 07:22:34 2013Z Wow6432Node
Thu Dec 12 07:22:32 2013Z Policies
Extract all available information using all applicable plugins and save it to the case1.txt file:
$ for i in $( rip -l -c | grep NTUSER.DAT | cut -d , -f1 ); do rip -p $i -r NTUSER.DAT &>> case1.txt ; done

Getting Started with Docker

https://www.linux.com/news/enterprise/systems-management/873287-getting-started-with-docker

Docker is the excellent new container application that is generating much buzz and many silly stock photos of shipping containers. Containers are not new; so, what's so great about Docker? Docker is built on Linux Containers (LXC). It runs on Linux, is easy to use, and is resource-efficient.
Docker containers are commonly compared with virtual machines. Virtual machines carry all the overhead of virtualized hardware running multiple operating systems. Docker containers, however, dump all that and share only the operating system. Docker can replace virtual machines in some use cases; for example, I now use Docker in my test lab to spin up various Linux distributions, instead of VirtualBox. It's a lot faster, and it's considerably lighter on system resources.
Docker is great for datacenters, as they can run many times more containers on the same hardware than virtual machines. It makes packaging and distributing software a lot easier:
"Docker containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries -- anything you can install on a server. This guarantees that it will always run the same, regardless of the environment it is running in."
Docker runs natively on Linux, and in virtualized environments on Mac OS X and MS Windows. The good Docker people have made installation very easy on all three platforms.

Installing Docker

That's enough gasbagging; let's open a terminal and have some fun. The best way to install Docker is with the Docker installer, which is amazingly thorough. Note how it detects my Linux distro version and pulls in dependencies. The output is abbreviated to show the commands that the installer runs:
$ wget -qO- https://get.docker.com/ | sh
You're using 'linuxmint' version 'rebecca'.
Upstream release is 'ubuntu' version 'trusty'.
apparmor is enabled in the kernel, but apparmor_parser missing
+ sudo -E sh -c sleep 3; apt-get update
+ sudo -E sh -c sleep 3; apt-get install -y -q apparmor
+ sudo -E sh -c apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80
 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
+ sudo -E sh -c mkdir -p /etc/apt/sources.list.d
+ sudo -E sh -c echo deb https://apt.dockerproject.org/repo ubuntu-trusty main > /etc/apt/sources.list.d/docker.list
+ sudo -E sh -c sleep 3; apt-get update; apt-get install -y -q docker-engine
The following NEW packages will be installed:
 docker-engine
As you can see, it uses standard Linux commands. When it's finished, you should add yourself to the docker group so that you can run it without root permissions. (Remember to log out and then back in to activate your new group membership.)

Hello World!

We can run a Hello World example to test that Docker is installed correctly:
$ docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
[snip]
Hello from Docker.
This message shows that your installation appears to be working correctly.
This downloads and runs the hello-world image from the Docker Hub, which hosts a library of Docker images that you can access with a simple registration. You can also upload and share your own images. Docker provides a fun test image to play with, Whalesay. Whalesay is an adaptation of Cowsay that draws the Docker whale instead of a cow (see Figure 1 above).
$ docker run docker/whalesay cowsay "Visit Linux.com every day!"
The first time you run a new image from Docker Hub, it gets downloaded to your computer. After that, Docker uses your local copy. You can see which images are installed on your system:
$ docker images
REPOSITORY       TAG      IMAGE ID      CREATED       VIRTUAL SIZE
hello-world      latest   0a6ba66e537a  7 weeks ago   960 B
docker/whalesay  latest   ded5e192a685  6 months ago  247 MB
So, where, exactly, are these images stored? Look in /var/lib/docker.

Build a Docker Image

Now let's build our own Docker image. Docker Hub has a lot of prefab images to play with (Figure 2), and that's the best way to start because building one from scratch is a fair bit of work. (There is even an empty scratch image for building your image from the ground up.) There are many distro images, such as Ubuntu, CentOS, Arch Linux, and Debian.
Figure 2: Docker Hub.

We'll start with a plain Ubuntu image. Create a directory for your Docker project, change to it, and create a new Dockerfile with your favorite text editor.
$ mkdir dockerstuff
$ cd dockerstuff
$ nano Dockerfile
Enter a single line in your Dockerfile:
FROM ubuntu
Now build your new image and give it a name. In this example the name is testproj. Make sure to include the trailing dot:
$ docker build -t testproj .
Sending build context to Docker daemon 2.048 kB
Step 1 : FROM ubuntu
---> 89d5d8e8bafb
Successfully built 89d5d8e8bafb
Now you can run your new Ubuntu image interactively:
$ docker run -it ubuntu
root@fc21879c961d:/#
And there you are at the root prompt of your image, which in this example is a minimal Ubuntu installation that you can run just like any Ubuntu system. You can see all of your local images:
$ docker images
REPOSITORY       TAG       IMAGE ID        CREATED        VIRTUAL SIZE
testproj         latest    89d5d8e8bafb    6 hours ago    187.9 MB
ubuntu           latest    89d5d8e8bafb    6 hours ago    187.9 MB
hello-world      latest    0a6ba66e537a    8 weeks ago    960 B
docker/whalesay  latest    ded5e192a685    6 months ago   247 MB
The real power of Docker lies in creating Dockerfiles that allow you to create customized images and quickly replicate them whenever you want. This simple example shows how to create a bare-bones Apache server. First, create a new directory, change to it, and start a new Dockerfile that includes the following lines.
FROM ubuntu

MAINTAINER DockerFan version 1.0

ENV DEBIAN_FRONTEND noninteractive

ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2
ENV APACHE_LOCK_DIR /var/lock/apache2
ENV APACHE_PID_FILE /var/run/apache2.pid

RUN apt-get update && apt-get install -y apache2

EXPOSE 8080

CMD ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]
Now build your new project:
$ docker build -t apacheserver  .
This will take a little while as it downloads and installs the Apache packages. You'll see a lot of output on your screen, and when you see "Successfully built 538fea9dda79" (but with a different number, of course) then your image built successfully. Now you can run it. This runs it in the background:
$ docker run -d  apacheserver
8defbf68cc7926053a848bfe7b55ef507a05d471fb5f3f68da5c9aede8d75137
List your running containers:
$ docker ps
CONTAINER ID  IMAGE        COMMAND                 CREATED            
8defbf68cc79  apacheserver "/usr/sbin/apache2ctl" 34 seconds ago
And kill your running container:
$ docker kill 8defbf68cc79
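Note that to reach Apache from the host you need to publish a port when starting the container. The Dockerfile above EXPOSEs 8080, but the Ubuntu apache2 package listens on port 80 inside the container by default, so a quick test might look like this (the container name apachetest is my own choice):
$ docker run -d -p 8080:80 --name apachetest apacheserver
$ curl -I http://localhost:8080/
$ docker rm -f apachetest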
You might want to run it interactively for testing and debugging:
$ docker run -it  apacheserver /bin/bash
root@495b998c031c:/# ps ax
 PID TTY      STAT   TIME COMMAND
   1 ?        Ss     0:00 /bin/bash
  14 ?        R+     0:00 ps ax
root@495b998c031c:/# apachectl start
AH00558: apache2: Could not reliably determine the server's fully qualified
domain name, using 172.17.0.3. Set the 'ServerName' directive globally to
suppress this message
root@495b998c031c:/#
A more comprehensive Dockerfile could install a complete LAMP stack, load Apache modules, configuration files, and everything you need to launch a complete Web server with a single command.
We have come to the end of this introduction to Docker, but don't stop now. Visit docs.docker.com to study the excellent documentation and try a little Web searching for Dockerfile examples. There are thousands of them, all free and easy to try.

HowTo: Speedup ping and traceroute Command Responses under Linux / Unix

http://www.cyberciti.biz/faq/unix-linux-bsd-appleosx-speedup-ping-traceroute-command-probs

The following question was asked in the Unix networking exam:

     How do you speed up ping and traceroute command responses under Unix or Linux operating systems?
How can I speed up my ping or traceroute commands on a Linux?

The ping command line utility acts as a computer network tool. It is used to test whether a particular host is reachable across an IP network. The traceroute command acts as a computer network diagnostic tool for displaying the route (path) and measuring transit delays of packets across an Internet Protocol (IP) network.

Speedup ping command

The syntax is:
 
ping -n -W VALUE -i VALUE host
 
Where,
  1. -n : Disable DNS lookup to speed up queries.
  2. -W NUMBER : Time to wait for a response, in seconds. The option affects only the timeout in the absence of any responses; otherwise ping waits for two RTTs.
  3. -i SECONDS : Wait interval seconds between sending each packet. The default is to wait for one second between each packet normally, or not to wait in flood mode. Only the super-user may set the interval to values less than 0.2 seconds.
The default command will produce output as follows:
$ ping -c 5 www.cyberciti.biz
Sample outputs:
PING www.cyberciti.biz (75.126.153.206) 56(84) bytes of data.
64 bytes from www.cyberciti.biz (75.126.153.206): icmp_req=1 ttl=55 time=293 ms
64 bytes from www.cyberciti.biz (75.126.153.206): icmp_req=2 ttl=55 time=295 ms
64 bytes from www.cyberciti.biz (75.126.153.206): icmp_req=3 ttl=55 time=293 ms
64 bytes from www.cyberciti.biz (75.126.153.206): icmp_req=4 ttl=55 time=294 ms
64 bytes from www.cyberciti.biz (75.126.153.206): icmp_req=5 ttl=55 time=294 ms
--- www.cyberciti.biz ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4005ms
rtt min/avg/max/mdev = 293.571/294.170/295.158/0.869 ms
Now optimize the ping command:
$ ping -c 5 -n -i 0.2 -W1 www.cyberciti.biz
Sample outputs:
PING www.cyberciti.biz (75.126.153.206) 56(84) bytes of data.
64 bytes from 75.126.153.206: icmp_req=1 ttl=55 time=293 ms
64 bytes from 75.126.153.206: icmp_req=2 ttl=55 time=294 ms
64 bytes from 75.126.153.206: icmp_req=3 ttl=55 time=293 ms
64 bytes from 75.126.153.206: icmp_req=4 ttl=55 time=293 ms
64 bytes from 75.126.153.206: icmp_req=5 ttl=55 time=294 ms
--- www.cyberciti.biz ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 810ms
rtt min/avg/max/mdev = 293.279/293.955/294.522/0.799 ms, pipe 2
Here is another output showing the difference between two command line options:
Fig.01: Unix and Linux speedup ping command

Speedup traceroute command

The syntax is:
 
traceroute -n -w SECONDS -q NUMBER host
 
Where,
  1. -n : Disable DNS lookup to speed up queries.
  2. -w seconds : Set the time (in seconds) to wait for a response to a probe (default 5.0 sec).
  3. -q NUMBER : Sets the number of probe packets per hop. The default is 3.
The following example will wait 3 seconds (instead of 5) and only send out 1 query to each hop (instead of 3):
$ traceroute -n -w 3 -q 1 www.cyberciti.biz
The -N option specifies the number of probe packets sent out simultaneously. Sending several probes concurrently can speed up traceroute considerably. The default value is 16:
$ traceroute -n -w 3 -q 1 -N 32 www.cyberciti.biz
Please note that some routers and hosts can use ICMP rate throttling. In such a situation, specifying too large a number can lead to loss of some responses. You can also limit the maximum number of hops to 16 before giving up (instead of the default 30) using the -m option:
$ traceroute -n -w 3 -q 1 -N 32 -m 16 www.cyberciti.biz
Sample outputs:
Fig.02: Unix and Linux speedup traceroute command