
How to Install and Configure PostgreSQL Replication with Hot Standby on Ubuntu 15.04

https://www.howtoforge.com/tutorial/postgresql-replication-on-ubuntu-15-04

PostgreSQL or Postgres is an open source object-relational database management system (ORDBMS) with more than 15 years of active development. It's a powerful database server and can handle high workloads. PostgreSQL can be used on Linux, Unix, BSD and Windows servers.
Master/slave database replication is the process of copying (syncing) data from a database on one server (the master) to a database on another server (the slave). The main benefit of this process is the distribution of databases across multiple machines, so when the master server has a problem, there is a backup machine with the same data available for handling requests without interruption.
PostgreSQL provides several ways to replicate a database. It can be used for backup purposes and to provide a high availability database server. In this tutorial, I will show you how to install and configure PostgreSQL replication using hot standby mode. Hot standby mode is easy to configure, and it's a very good starting point for learning PostgreSQL in depth.
Hot standby mode requires two database servers; we will use Ubuntu as the operating system on both.
  1. Master Server - accepts connections from the client with read and write permissions.
  2. Slave Server - the standby server runs a copy of the data from the master server with read-only permission.
Prerequisites
  • 2 Ubuntu servers - 1 for master and 1 for slave.
  • Root privileges on the servers.
  • Some basic knowledge about Ubuntu, apt, etc.

Step 1 - Setup the Hostname

Login to both servers with ssh:
ssh user@masterip
ssh user@slaveip
Now set the hostname for both servers - master server and slave server - with the hostnamectl command.
On the master server:
sudo hostnamectl set-hostname master-server
On the slave server:
sudo hostnamectl set-hostname slave-server
Next, edit the /etc/hosts file with vim editor:
sudo vim /etc/hosts
Paste this configuration for the master server:
192.168.1.249   master-server
Paste this configuration for the slave server:
192.168.1.248   slave-server
Save the file and exit the editor.

Step 2 - Install PostgreSQL on Master and Slave Server

Before we start to install PostgreSQL, update the Ubuntu repository:
sudo apt-get update
Next, install PostgreSQL with all its dependencies:
sudo apt-get install postgresql postgresql-client postgresql-contrib
After PostgreSQL is installed, set a new password for the postgres user (created automatically during the installation).
passwd postgres
Type your postgres user password.
Now test PostgreSQL:
su - postgres
psql
\conninfo
You will see the result shown below:
Check Postgres connection info.

Step 3 - Configure Master-server

In this step, we will configure the 'master server' with IP address '192.168.1.249'. We will create a new user/role with special permission to perform the replication, then we edit the PostgreSQL configuration file to enable the hot standby replication mode.
From the root privileges, switch to the PostgreSQL user with the su command:
su - postgres
Access the Postgres shell with the psql command and type in this PostgreSQL query to create the new user/role:
psql
CREATE USER replica REPLICATION LOGIN ENCRYPTED PASSWORD 'replicauser@';
Check the new replica user with the PostgreSQL command below:
\du
The new replica user has been created.
Add a replication user in PostgreSQL
Next, go to the PostgreSQL directory '/etc/postgresql/9.4/main' to edit the configuration file.
cd /etc/postgresql/9.4/main/
Open the postgresql.conf file with vim:
vim postgresql.conf
Uncomment line 59 and add the server IP address.
listen_addresses = 'localhost,192.168.1.249'
In the WAL (Write Ahead Log) setting line 175, uncomment and change the value to hot_standby.
wal_level = hot_standby
In the checkpoints section line 199, uncomment the 'checkpoint_segments' and change the value to 8.
checkpoint_segments = 8
In the archive section line 206 and 208, turn on the archiving option and add the archiving command.
archive_mode = on
archive_command = 'cp -i %p /var/lib/postgresql/9.4/main/archive/%f'
In the replication section, lines 224 and 226, uncomment and set the maximum number of WAL sender processes and the number of WAL segments to keep.
max_wal_senders = 3
wal_keep_segments = 8
Save the file and exit vim.
Now create a new directory inside of the 'main' directory for the archive configuration - run the command below as postgres user:
mkdir -p /var/lib/postgresql/9.4/main/archive/
Next, edit pg_hba.conf file to allow the replication connection.
vim pg_hba.conf
At the end of the file, add a new configuration line that allows the user 'replica' to make the replication connection.
host    replication     replica      192.168.1.248/32            md5
#192.168.1.248 is the slave-server IP address; /32 restricts access to that single host
Save and exit.
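Finally, restart PostgreSQL on the master server so the changes in postgresql.conf and pg_hba.conf take effect. Exit the postgres session first and run the restart as a sudo user:
exit
sudo systemctl restart postgresql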

Step 4 - Slave-server Configuration

Configure the slave server like the master server. Use su to become the postgres user and go to the PostgreSQL configuration directory.
su - postgres
cd /etc/postgresql/9.4/main/
Edit the postgresql.conf with vim:
vim postgresql.conf
Uncomment line 59 and add the slave server IP address.
listen_addresses = 'localhost,192.168.1.248'
Go to line 175 and uncomment the wal_level setting, change the value to hot_standby.
wal_level = hot_standby
Uncomment line 199 on the checkpoint section.
checkpoint_segments = 8
Uncomment line 224 and 226 to configure max_wal_sender process.
max_wal_senders = 3
wal_keep_segments = 8
Uncomment line 245 to enable hot_standby mode on the slave server.
hot_standby = on
Save and exit.

Step 5 - Synchronize Data from the Master Server to the Slave Server

In this step, we will move the PostgreSQL data directory '/var/lib/postgresql/9.4/main' to a backup folder and then replace it with the latest master data with 'pg_basebackup' command.
Run all the commands below on the slave server only!
Stop PostgreSQL on the slave server:
systemctl stop postgresql
Now log in as the postgres user and rename the 'main' directory to 'main_original' as a backup.
su - postgres
mv 9.4/main 9.4/main_original
Run the command below to copy data from the master server to slave server:
pg_basebackup -h 192.168.1.249 -D /var/lib/postgresql/9.4/main -U replica -v -P
Start the replication.

Note:
  • 192.168.1.249 is master server IP address.
  • You will be prompted to enter the password of the user 'replica' for the replication.
Go to the new 'main' directory and create the new recovery file 'recovery.conf' with vim:
cd /var/lib/postgresql/9.4/main/
vim recovery.conf
Paste the configuration below:
standby_mode = 'on'
primary_conninfo = 'host=192.168.1.249 port=5432 user=replica password=replicauser@'
restore_command = 'cp /var/lib/postgresql/9.4/main/archive/%f %p'
trigger_file = '/tmp/postgresql.trigger.5432'
Now switch back to the root user with exit and start PostgreSQL with the systemctl command:
exit
systemctl start postgresql
Make sure there are no errors after running the start command.

Step 6 - Testing

Go to the master server and log into the postgres user, then run the command below to see the replication info.
su - postgres
psql -x -c "select * from pg_stat_replication;"
You will see the replication info below:
Check the PostgreSQL replication state.
Next, create a new database on the master server and then check that the database exists on the slave server.
su - postgres
psql
create database howtoforge;
Create a PostgreSQL database on the master server.
Now log in to the slave server and check that the 'howtoforge' database has been mirrored to the slave server automatically.
su - postgres
psql
\list
Check the PostgreSQL replication on the slave server.
The database has been replicated from the master server to the slave server.
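If you ever need to promote the slave to a full read/write master (for example, when the master fails), the recovery.conf we created already defines a trigger file for exactly that. A minimal sketch; promotion is one-way, so treat this as a test-environment exercise (the failover_test database name is just an example):
# on the slave server, as the postgres user: creating the trigger file promotes the standby
touch /tmp/postgresql.trigger.5432
# after a few moments, the promoted slave should accept writes:
psql -c "create database failover_test;"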



Linux: How long has a process been running?

https://www.cyberciti.biz/faq/how-to-check-how-long-a-process-has-been-running


I'm a new Linux system user. How do I check how long a process or PID has been running on my Ubuntu Linux server?

You need to use the ps command to see information about a selection of the active processes. The ps command provides the following two formatting options:
  1. etime Display elapsed time since the process was started, in the form [[DD-]hh:]mm:ss.
  2. etimes Display elapsed time since the process was started, in seconds.

How to check how long a process has been running?

You need to pass the -o etimes or -o etime to the ps command. The syntax is:
ps -p {PID-HERE} -o etime
ps -p {PID-HERE} -o etimes

Step 1: Find PID of a process (say openvpn)

$ pidof openvpn
6176

Step 2: How long has the openvpn process been running?

$ ps -p 6176 -o etime
OR
$ ps -p 6176 -o etimes
To hide header:
$ ps -p 6176 -o etime=
$ ps -p 6176 -o etimes=

Sample outputs:

Fig.01: Linux check how long an openvpn process has been running on a server

The 6176 is the PID of the process you want to check. In this case I'm looking at the openvpn process. Feel free to replace openvpn and PID # 6176 as per your own requirements. In this final example, I am printing the PID, command, elapsed time, user ID, and group ID:
$ ps -p 6176 -o pid,cmd,etime,uid,gid
Sample outputs:
  PID CMD                         ELAPSED   UID   GID
 6176 /usr/sbin/openvpn --daemon    15:25 65534 65534
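The numeric etimes format is handy in scripts. Here is a hedged sketch (the one-day threshold and the openvpn process name are just examples) that flags any openvpn process running longer than a day:
#!/bin/bash
# warn about openvpn processes older than 86400 seconds (one day)
for pid in $(pidof openvpn); do
  age=$(ps -p "$pid" -o etimes=)
  if [ "$age" -gt 86400 ]; then
    echo "PID $pid has been running for $age seconds"
  fi
done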

An Introduction to Iridium, an Open Source Selenium and Cucumber Testing Tool

https://dzone.com/articles/an-introduction-to-iridium-an-open-source-selenium

Learn how to write and execute plain English test scripts that validate your web applications using the Iridium open source testing tool.

Today I would like to introduce Iridium, an open source web testing tool built around Cucumber and Selenium and designed to make automated testing of web sites easy and accessible.
Iridium was built to satisfy a few requirements that we faced as developers working on insurance applications:
  1. It had to be simple to write and run tests. The people writing tests have limited or no understanding of Java, and the people writing the testing software didn't have time to configure individual PCs with custom software.
  2. It had to support applications that were themed. We found ourselves testing the same application over and over with different styles, and the testing software needed to automate this.
  3. The tests had to be readable, even though our applications were not developed with testing in mind. This meant dealing with obtuse XPaths and CSS selectors.
  4. Tests had to be run in a wide variety of browsers, including those run in headless CI environments, as well as supporting testing environments like BrowserStack.
You can find the documentation for Iridium on GitBook, but in this article I’ll demonstrate how Iridium can be used to automate some testing on one of my favorite websites: DZone.
Before we get started, make sure that you have Java 8 installed, and have trusted the S3 bucket that holds the WebStart JAR that we’ll be using for this demo. You can find detailed instructions on how to do this in the Installation chapter of the documentation.
You will also need to ensure that you have Firefox installed.
This is the Web Start file that is used to launch this particular test script.
version="1.0" encoding="UTF-8"?>
<jnlpspec="1.0+"codebase="https://s3-ap-southeast-2.amazonaws.com/ag-iridium/">
<information>
<title>Iridium Web Application Tester</title>
<vendor>Auto and General</vendor>
<homepagehref="https://autogeneral.gitbooks.io/iridiumapplicationtesting-gettingstartedguide/content/"/>
<offline-allowed/>
</information>
<resources>
<j2seversion="1.8+"href="http://java.sun.com/products/autodl/j2se"/>
<propertyname="jnlp.packEnabled"value="true"/>
<propertyname="javaws.configuration"value=""/>
<propertyname="javaws.dataset"value=""/>
<propertyname="javaws.appURLOverride"value="https://dzone.com"/>
<propertyname="javaws.featureGroupName"value=""/>
<propertyname="javaws.testSource"value="https://raw.githubusercontent.com/AutoGeneral/IridiumApplicationTesting/master/examples/17.simplesteps/test.feature"/>
<propertyname="javaws.importBaseUrl"value=""/>
<propertyname="javaws.testDestination"value="FIREFOX"/>
<propertyname="javaws.webdriver.chrome.driver"value=""/>
<propertyname="javaws.webdriver.opera.driver"value=""/>
<propertyname="javaws.phantomjs.binary.path"value=""/>
<propertyname="javaws.leaveWindowsOpen"value="false"/>
<propertyname="javaws.openReportFile"value="true"/>
<propertyname="javaws.saveReportsInHomeDir"value="true"/>
<propertyname="javaws.webdriver.ie.driver"value=""/>
<propertyname="javaws.enableVideoCapture"value="false"/>
<propertyname="javaws.numberOfThreads"value="1"/>
<propertyname="javaws.numberURLs"value="1"/>
<propertyname="javaws.numberDataSets"value="1"/>
<propertyname="javaws.enableScenarioScreenshots"value="true"/>
<propertyname="javaws.tagsOverride"value=""/>
<propertyname="javaws.phantomJSLoggingLevel"value="NONE"/>
<propertyname="javaws.startInternalProxy"value=""/>
<propertyname="jnlp.versionEnabled"value="true"/>
<jarhref="webapptesting-signed.jar"main="true"version="0.0.4"/>
</resources>
<application-desc
name="Web Application tester"
main-class="au.com.agic.apptesting.Main"
width="300"
height="300">
</application-desc>
<updatecheck="background"/>
<security>
<all-permissions/>
</security>
</jnlp>
For those that just want to jump right in, right click and save the webstart jnlp file to your local disk. Opening this file will launch Java Web Start, and then run the test script. Don’t worry if the download progress bar appears to stall, reset to 0% and then jump to 100%; that is expected behaviour. Once you have downloaded the JAR file once, Web Start will cache it and the download will be instantaneous the next time you run the test.
The ability to run Cucumber test scripts from Web Start is a unique feature of Iridium, and makes launching the test scripts trivial for those who might otherwise struggle to work with JAR files and using the command line.
Let’s take a look at the test script that will navigate around the DZone website.
The script is written in Gherkin, the natural language syntax that Cucumber provides for writing tests. Gherkin breaks tests down into features and scenarios.
Feature: Open an application
# This is where we give readable names to the xpaths, ids, classes, name attributes or
# css selectors that this test will be interacting with.
Scenario: Generate Page Object
  Given the alias mappings
    | HomeLink        | //*[@id="ng-app"]/body/div[1]/div/div/div[1]/div/div[1]/div/a      |
    | NoProfileImage  | //*[@id="ng-app"]/body/div[1]/div/div/div[1]/div/div[2]/div[3]/i   |
    | ProfileImage    | //*[@id="ng-app"]/body/div[1]/div/div/div[1]/div/div[2]/div[3]/img |
    | LoginBackground | ngdialog-overlay                                                   |
Our first scenario provides a mapping between human readable alias names and the various strings that are used to identify elements in an HTML page. In this small demo we have used XPaths and class names to identify HTML elements within the DZone website.
These aliases are important for removing obtuse strings like //*[@id="ng-app"]/body/div[1]/div/div/div[1]/div/div[1]/div/a from our test scripts. Often there is no nice way to identify elements in an HTML page generated by code that was not written with automated testing in mind, and these aliases provide a level of abstraction between the raw HTML and the behaviors that the elements expose to the end user.
 # Open up the web page
  Scenario: Launch App
    And I set the default wait time between steps to "2"
    And I open the application
    And I maximise the window
Next we have some instructions that set the default wait time between steps, open the application, and maximize the browser window. The default wait time is important, because it means our test script will not zip through the test at inhuman speeds.
The URL of the application to open (https://dzone.com in this case) is supplied as a system property defined in the Web Start jnlp file with the javaws.appURLOverride property.
# Open the login dialog and close it again
Scenario: Open Profile
  # Click on an element referencing the aliased xpath we set above
  And I click the element found by alias "NoProfileImage"
   # Click on an element referencing the aliased class name we set above
  And I click the element found by alias "LoginBackground"
The next scenario simulates opening and closing the login window. This scenario demonstrates two important features of Iridium.
The first feature is that we have referenced the HTML elements that are clicked on to open and close the login window via an alias. The alias names mean that someone can read the step and understand what is going on without having any understanding of how the HTML is structured.
The second feature is the use of simplified steps that don't require the test writer to specifically identify the selection string as an XPath. When using simple steps (steps that have the string "found by" in them), Iridium will match the selection string against an:
  • ID
  • class
  • name attribute
  • XPath
  • CSS selector
and return the first matching element. You can only use simple steps when the element you want to interact with can be uniquely identified by one of these methods. But unless you are storing XPaths in the ID attribute or have mixed class names with IDs, simple steps are fine to use.
Scenario: Navigate the main links
  And I click the link with the text content of "REFCARDZ"
  And I click the link with the text content of "GUIDES"
  And I click the link with the text content of "ZONES"
  And I click the link with the text content of "AGILE"
  And I click the link with the text content of "BIGDATA"
  And I click the link with the text content of "CLOUD"
  And I click the link with the text content of "DATABASE"
  And I click the link with the text content of "DEVOPS"
  And I click the link with the text content of "INTEGRATION"
  And I click the link with the text content of "IOT"
  And I click the link with the text content of "JAVA"
  And I click the link with the text content of "MOBILE"
  And I click the link with the text content of "PERFORMANCE"
  And I click the link with the text content of "WEB DEV"
Scenario: Open some refcardz
  And I click the element found by alias "HomeLink"
  # WebDriver considers this link to be obscured by another element, so
  # we use a special step to click these "hidden" links
  And I click the hidden link with the text content of "Learn Swift"
  And I go back
  And I wait "30" seconds for the element found by alias "HomeLink" to be displayed
  And I click the hidden link with the text content of "Learn Microservices"
  And I go back
  And I wait "30" seconds for the element found by alias "HomeLink" to be displayed
  And I click the hidden link with the text content of "Learn Scrum"
  And I go back
  And I wait "30" seconds for the element found by alias "HomeLink" to be displayed
The remaining scenarios navigate around the web site using links and the back button. There is no need to use aliases here because the name of the link is descriptive enough.
When you run this test, you'll see Firefox open up and begin navigating the DZone website by itself. Once complete, a number of report files are saved in the WebAppTestingReports directory in your user's home directory. The CucumberThread1.html/index.html file is the easiest to read, and provides an interactive report of the feature that was run and the success or failure of the individual steps.
It is early days for the Iridium project, but I hope from this simple demo you can see how this tool might help you get started with testing your own web applications.

Top 50 Linux System Administrator Interview Questions

http://fossbytes.com/top-50-linux-system-administrator-interview-questions-answers


Short Bytes: Today, the job opportunities for Linux experts are more plentiful than ever. Linux SysAdmin interview questions range from basic Linux questions to networking, DevOps, and MySQL questions. So, one needs to prepare adequately to ensure success in the Linux system administrator interview process.
According to a report, the open source and Linux job market is full of new opportunities. Due to the increasing adoption of open source technologies by the technology giants (Microsoft says HELLO!), there are ample job opportunities for system administrators and DevOps professionals.
While a huge demand continues to exist, just like any other job in the technology world, SysAdmins have to go through a rigorous hiring process that consists of preparing a professional resume, technical exams, and interview questions. Out of these, cracking a job interview is often the most critical test.
During an interview, a candidate’s personal qualities are also checked and it’s evaluated if he/she is a right fit for the company. Apart from being calm and composed, being well-prepared for an interview is the best thing one can do in order to crack a Linux SysAdmin interview.
If you open your web browser and search for the phrase Linux SysAdmin interview questions, you’ll get a long list of search results that will help your practice. Apart from the straightforward conceptual questions like “What does the permission 0750 on a file mean?”, Linux SysAdmin interviews also come loaded with expert questions like “How do you catch a Linux signal on a script?”
To help you out in Linux system administrator interviews, I've compiled a list of my favorite questions of varying difficulty. These questions are framed with different approaches to find out more about the candidate and test his/her problem-solving skills:
1. What does nslookup do?
2. How do you display the process using the most CPU? (see the sketch after this list)
3. How to check all open ports on a Linux machine and block the unused ports?
4. What is Linux? How is it different from UNIX?
5. Explain the boot process of a Unix system in detail.
6. How do you change file permissions? How do you create a read-only file?
7. Explain SUDO in detail. What are its disadvantages?
8. What is the difference between UDP and TCP?
9. Describe the boot order of a Linux machine.
10. Design a 3-tier web application.
11. Sketch how you would route network traffic from the internet into a few subnets.
12. How do you know about virtualization? Is it good to use?
13. What are different levels of RAID and what level will you use for a web server and database server?
14. List some latest developments in open source technologies.
15. Have you ever contributed to an open source project?
16. Systems engineer or a systems administrator? Explain?
17. List some of the common unethical practices followed by a system professional.
18. What is the common size for a swap partition under a Linux system?
19. What does a nameless directory represent in a Linux system?
20. How to list all files, including hidden ones, in a directory?
21. How to add a new system user without login permissions?
22. Explain a hardlink. What happens when a hardlink is removed?
23. What happens when a sysadmin executes this command: chmod 444 chmod
24. How do you determine the private and public IP addresses of a Linux system?
25. How do you send a mail attachment using bash?
26. Tell me something about the Linux distros used on servers.
27. Explain the process to re-install Grub in Linux in the shortest manner.
28. What is an A record, an NS record, a PTR record, a CNAME record, an MX record?
29. What is a zombie process? State its causes?
30. When do we prefer a script over a compiled program?
31. How to create a simple master/slave cluster?
32. What happens when you delete the source to a symlink?
33. How to restrict an IP so that it may not use the FTP Server?
34. Explain the directory structure of Linux. What contents go in /usr/local?
35. What is git? Explain its structure and working.
36. How would you send an automated email to 100 people at 12:00 AM?
37. Tell me about ZFS file system.
38. How to change the default run level in a Linux system?
39. How would you change the kernel parameters in Linux?
40. State the differences between SSH and Telnet.
41. How would you virtualize a physical Linux machine?
42. Tell me about some quirky Linux commands.
43. Explain how HTTPS works.
44. Do you know about TOR browser? Explain its working.
45. How to trigger a forced system check the next time you boot your machine?
46. What backup techniques do you prefer?
47. Tell me something about SWAP partition.
48. Explain Ping of Death attack.
49. How do you sniff the contents of an IP packet?
50. Which OSI layer is responsible for making sure that the packet reaches its correct destination?
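To give a flavor of what interviewers may expect, here is a hedged sketch of command-line answers to questions 2, 3, and 20 (tools and flags vary between distributions):
# Q2: display the process using the most CPU
ps -eo pid,comm,%cpu --sort=-%cpu | head -n 5
# Q3: list all listening ports, then block unused ones with your firewall of choice
ss -tulpn
# Q20: list all files in a directory, including hidden ones
ls -la /path/to/directory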


(Compiled from GitHub, StackOverflow, Quora)
may the foss be with you

Avoiding data disasters with Sanoid

https://opensource.com/life/16/7/sanoid


Sanoid helps to recover from what I like to call "Humpty Level Events." In other words, it can help you put Humpty Dumpty back together again, on ZFS filesystems.
Humpty Dumpty, Tenniel.
Humpty Dumpty sat on the wall,
Humpty Dumpty had a great fall.
All the King's horses and all the King's men
Couldn't put Humpty together again.
As a child, long before I read Lewis Carroll's books, I knew this snatch of doggerel from Mother Goose's nursery rhymes by heart. Humpty Dumpty's fall is probably the best-known nursery rhyme in the English language. Why is this simple verse so popular?
It outlines a fear, and an experience, common to everyone: some seminal, horrible event happened, and in the space of a moment there was no going back. What had been order became chaos, and there was no way to restore it. It sucks, but it's a basic part of the human experience; you can't put an egg back into its shell, you can't unsay the mean thing you said to your friend, and you can't undo the horrible mistake you made on your computer.
Maybe you clicked the wrong link or the wrong email attachment and a cryptomalware payload executed. Or maybe a bad system update came in from your operating system vendor and bricked the boot process for your machine. (I haven't actually seen this particular thing happen under Linux, but it happens with depressing regularity to those of us who manage enough Windows servers.) Perhaps a mission-critical application needed an upgrade, and the vendor emailed you a 150-page PDF with instructions, and it all went south on page 75. Heck, maybe you paid the vendor to do the upgrade and it all went south on page 75.
Like most of the computing experience, none of these things are truly new. They're all just rewritings of Humpty's parable. Entropy happens.

You can't unscramble the egg. (Or can you?)

Humpty Dumpty sat on the wall, Humpty Dumpty had a great fall.
The sysadmin did a rollback! Humpty Dumpty sat on the wall...
If you're a *nix person, rm -rf / is as apocryphal a tale as Humpty's fall itself. You may never have done it yourself, or even seen it done in person. We've all at least heard the stories, though, and cringed at the thought. GNU rm even added a special argument, --no-preserve-root, to try to make it a little more difficult for fast, clumsy fingers to wipe out the system! That still doesn't stop you from accidentally nuking all sorts of important things that aren't root, though: /bin, /var, /home... you name it. (I accidentally destroyed /etc on an important system once. Once. And let us never speak of it again.)
In the most prosaic sense, Sanoid is a snapshot management framework. It takes snapshots of ZFS filesystems, it monitors their presence, and it deletes them when they should go away. You feed it a policy, such as "for this dataset and all its children, take a snapshot every hour, every day, and every month. Keep 30 hourlies, 30 dailies, and 3 monthlies", and it makes that happen for you.
But forget the prose, and let me get a little poetic with you for a moment: Sanoid's real purpose is to rewrite the tale of Humpty's fall.
I used to get a feeling of existential dread when I'd see certain clients' names on my caller ID. There were days where I spent hours trying to pull arcane rabbits out of my hat to rescue broken systems in-place, fielding unanswerable how much longer? questions from anxious users, wondering when it was time to abandon the in-place rescue and begin the laborious restore-from-backup.
Of course, the only thing worse than "we accidentally borked the whole server" is, after you've finished your restore process, hearing that plaintive cry: "Where is SuperFooBarApp? It's mission critical!" ... and SuperFooBarApp is something the client installed themselves, six months ago, and never told you about. And it was outside the scope of your backup process, and now it's. Just. Gone.
Sanoid was the thing I built out of sheer desperation, to make all of that stop happening. And it works! By doing the Real Work™ on virtual machines which are being snapshotted hourly, and keeping the underlying bare metal clean as a whistle, running nothing but Sanoid itself, there is no such thing as a Humpty Level Event any more. Cryptomalware incursion? Rollback. Rogue (or even malicious!) user deleting giant swathes of data? Rollback. Bad system updates hosed the machine? Rollback. Client stealth-installed SuperFooBarApp six months ago in some squirrely location using some squirrely back-end db engine you've never heard of? Doesn't matter; the snapshots are whole-disk-image, it's on there.
In super technical business-y planning terms, using Sanoid makes my recovery point objective (RPO) 59 minutes or less, and my recovery time objective (RTO) roughly 60 seconds from the time it takes me to get to a keyboard. In less technical person-who-has-to-fix-it terms, it means I can always make the client happy, and it means that my day got a whole lot more predictable!
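For the curious, a rollback on ZFS really is a one-liner. A hedged sketch, using one of the datasets shown later in this article and a hypothetical snapshot name of the kind Sanoid creates:
# list the available snapshots for the VM's dataset
zfs list -t snapshot -r banshee/images/win7
# roll the dataset back to the chosen hourly snapshot (discards everything newer!)
zfs rollback -r banshee/images/win7@autosnap_2016-07-01_12:00:02_hourly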

Configuring Sanoid

All you need is one single-line cron job, and a simple, easy to read TOML configuration file.
Cron:
root@banshee:~# crontab -l | grep sanoid
* * * * * /usr/local/bin/sanoid --cron
Configuration:
root@banshee:~# cat /etc/sanoid/sanoid.conf

[banshee/images]
        use_template = production
        recursive = yes

[banshee/home/phonebackup]
        use_template = production

### templates below this line ###

[template_production]
        hourly = 36
        daily = 30
        monthly = 3
        yearly = 0
        autosnap = yes
        autoprune = yes


And Sanoid will apply the policies you've set easily, neatly, and predictably everywhere you've asked it to. That first dataset definition covers all nine of the VMs currently on my workstation, and will automatically pick up any new VMs I create (as long as I place them under /images).


root@banshee:~# zfs list -r banshee/images
NAME                             USED  AVAIL  REFER  MOUNTPOINT
banshee/images                  83.2G  26.4G  15.3G  /images
banshee/images/freebsd          1.32G  26.4G  1.29G  /images/freebsd
banshee/images/freenas            22K  26.4G    20K  /images/freenas
banshee/images/openindiana        22K  26.4G    20K  /images/openindiana
banshee/images/unifi-server     13.7G  26.4G  9.88G  /images/unifi-server
banshee/images/win2012R2-demo   8.79G  26.4G  8.40G  /images/win2012R2-demo
banshee/images/win7             29.5G  26.4G  26.6G  /images/win7
banshee/images/xenial-gold      2.27G  26.4G  1.92G  /images/xenial-gold
banshee/images/xenial-gui-gold  5.80G  26.4G  4.48G  /images/xenial-gui-gold
banshee/images/xenial-test      6.41G  26.4G  4.37G  /images/xenial-test


Not a whole lot to set up, and better yet, not much to forget when new things inevitably get created later! There is still a missing piece to this puzzle, though. What if banshee, the local machine itself, catches on fire?

Look, Humpty didn't just get sick—he broke!

So far, we've been assuming that the hardware underneath the VM stays healthy. Unfortunately, that isn't always the case. Snapshots are great for recovering from soft failures—basically, disasters that happen via software, or users interacting with software. But if you lose the storage hardware, the snapshots go with it. And if you lose the machine running the hardware, you're down for hours, maybe even a day or two, waiting for replacements.
Since our goal is to get rid of all the Humpty Level Events, we need to plan for hard failures too. Hard drives died. The power supply died, and we're out of town and a project is due tonight. Somebody stored food in the server room, and a moth infestation shorted across components on the motherboard. (Laugh it up - that happened to a client this year!)
It can get worse than that, too—what about whole-site disasters? The fire sprinklers came on in the server room. The fire sprinklers didn't come on in the server room, and now the whole building's gone... you get the idea.
So we want snapshots, but we want them on more than one machine, and we want them in more than one place, too. This is where syncoid comes in. syncoid uses filesystem-level snapshot replication to move data from one machine to another, fast. For enormous blobs like virtual machine images, we're talking several orders of magnitude faster than rsync.
If that isn't cool enough already, you don't even necessarily need to restore from backup if you lost the production hardware; you can just boot up the VM directly on the local hotspare hardware, or the remote disaster recovery hardware, as appropriate. So even in case of catastrophic hardware failure, you're still looking at that 59m RPO, <1m RTO.
Backups—and recoveries—don't get much easier than this.
The syntax is dead simple:
root@box1:~# syncoid pool/images/vmname root@box2:poolname/images/vmname
Or if you have lots of VMs, like I usually do... recursion!
root@box1:~# syncoid -r pool/images/vmname root@box2:poolname/images/vmname
This makes it not only possible but easy to replicate multiple-terabyte VM images hourly over a local network, and daily over a VPN. We're not talking enterprise 100Mbps symmetrical fiber, either. Most of my clients have 5Mbps or less available for upload, which doesn't keep them from automated, nightly over-the-air backups, usually to a machine sitting quietly in an owner's house.
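Scheduling that replication is the same kind of one-line cron job used for sanoid itself. A hedged sketch, with hypothetical pool names and hostnames:
# hourly replication to the local hotspare
0 * * * * /usr/local/bin/syncoid -r pool/images root@hotspare:pool/images
# nightly replication over the VPN to the offsite machine
0 2 * * * /usr/local/bin/syncoid -r pool/images root@offsite:pool/images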

Preventing your own Humpty Level Events

Sanoid is open source software, and so are all its dependencies. You can run Sanoid and Syncoid themselves on pretty much anything with ZFS. I developed it and use it on Linux myself, but people are using it (and I support it) on OpenIndiana, FreeBSD, and FreeNAS too.
You can find the GPLv3 licensed code on the website (which actually just redirects to Sanoid's GitHub project page), and there's also a Chef Cookbook and an Arch AUR repo available from third parties.

How to setup thin Provisioned Logical Volumes in CentOS 7 / RHEL 7

https://www.linuxtechi.com/thin-provisioned-logical-volumes-centos-7-rhel-7

LVM (Logical Volume Management) is a good way to use the disk space on a server more efficiently. One of the benefits of LVM is that we can take snapshots of LVM-based partitions and create thinly provisioned logical volumes.
Thin provisioning allows us to create logical volumes larger than the available disk space. To use thin provisioning, we have to create a thin pool from a volume group; we can then create logical volumes from that thin pool.
In this article we will demonstrate how to set up thin provisioned logical volumes in CentOS 7.x and RHEL 7.x step by step.
Let's assume we have a Linux server (CentOS 7.x / RHEL 7.x) to which a new disk of 10 GB has been assigned. We will create a thin pool of 10 GB; from this thin pool we will initially create two logical volumes of 4 GB each and one logical volume of 1 GB.

Refer to the following steps to create thinly provisioned logical volumes.

Step:1 Create the Physical Volume using pvcreate command

Let’s assume new disk is detected as /dev/sdb.
[root@linuxtechi ~]# pvcreate /dev/sdb
Physical volume "/dev/sdb" successfully created
[root@linuxtechi ~]#

Step:2 Create Volume group using vgcreate command

[root@linuxtechi ~]# vgcreate volgrp /dev/sdb
Volume group "volgrp" successfully created
[root@linuxtechi ~]#

Step:3 Create a thin pool from the volume group

A thin pool is like a logical volume; it is created using the lvcreate command.
Syntax :
# lvcreate -L <size> -T <volume_group>/<thin_pool_name>
Where -L is used to specify the size of the pool and -T specifies that a thin pool is being created.
[root@linuxtechi ~]# lvcreate -L 9.90G -T volgrp/lvpool
Rounding up size to full physical extent 9.90 GiB
Logical volume "lvpool" created.
[root@linuxtechi ~]#
Verify the thin pool size.
[root@linuxtechi ~]# lvs /dev/volgrp/lvpool
  LV     VG     Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lvpool volgrp twi-a-tz-- 9.90g             0.00   0.59
[root@linuxtechi ~]#

Step:4 Create Logical Volumes from thin pool.

Let's create two logical volumes of 4 GB each.
Syntax :
# lvcreate -V <size> -T <volume_group>/<thin_pool_name> -n <logical_volume_name>
[root@linuxtechi ~]# lvcreate -V 4G -T volgrp/lvpool -n node1
Logical volume "node1" created.
[root@linuxtechi ~]# lvcreate -V 4G -T volgrp/lvpool -n node2
Logical volume "node2" created.
[root@linuxtechi ~]#
Verify the status of thin pool and logical volumes
[root@linuxtechi ~]# lvs /dev/volgrp/lvpool && lvs /dev/volgrp/node{1..2}
LV     VG     Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
lvpool volgrp twi-aotz-- 9.90g             0.00   0.65
LV    VG     Attr       LSize Pool   Origin Data%  Meta%  Move Log Cpy%Sync Convert
node1 volgrp Vwi-a-tz-- 4.00g lvpool        0.00
node2 volgrp Vwi-a-tz-- 4.00g lvpool        0.00
[root@linuxtechi ~]#

Step:5 Format the thin provisioned logical volumes

Use the mkfs command to create an ext4 file system on the logical volumes.
[root@linuxtechi ~]# mkfs.ext4 /dev/volgrp/node1
[root@linuxtechi ~]# mkfs.ext4 /dev/volgrp/node2
[root@linuxtechi ~]# mkdir /opt/vol1 && mkdir /opt/vol2
[root@linuxtechi ~]# mount /dev/volgrp/node1 /opt/vol1/ && mount /dev/volgrp/node2 /opt/vol2/
[root@linuxtechi ~]#
Check the mount points
[root@linuxtechi ~]# df -Th /opt/vol1/ /opt/vol2/
Filesystem               Type  Size  Used Avail Use% Mounted on
/dev/mapper/volgrp-node1 ext4 3.9G   16M  3.6G   1% /opt/vol1
/dev/mapper/volgrp-node2 ext4 3.9G   16M  3.6G   1% /opt/vol2
[root@linuxtechi ~]#
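If you want these file systems mounted automatically at boot, you could add them to /etc/fstab. A hedged sketch matching the volumes created above:
# /etc/fstab entries for the thin volumes
/dev/volgrp/node1   /opt/vol1   ext4   defaults   0 0
/dev/volgrp/node2   /opt/vol2   ext4   defaults   0 0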
Write some data into the file systems created above:
[root@linuxtechi ~]# dd if=/dev/zero of=/opt/vol1/file.txt bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 3.26031 s, 329 MB/s
[root@linuxtechi ~]# dd if=/dev/zero of=/opt/vol2/file.txt bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 2.70821 s, 396 MB/s
[root@linuxtechi ~]#
Now verify the size of the thinly provisioned logical volumes using the lvs command.
(Screenshot: lvs output for the thin volumes)
As we can see, both logical volumes consume 29% of their data.
Now try to create a third logical volume from the thin pool.
[root@linuxtechi ~]# lvcreate -V 1G -T volgrp/lvpool -n node3
  Logical volume "node3" created.
[root@linuxtechi ~]#

Scenario :

As of now, we have committed the whole thin pool to logical volumes. What if someone asks for one more logical volume of size 2 GB?
Can we create a new logical volume from the thin pool? What will happen? Does it support over-committing?
The answer is yes: we can create the logical volume, because thin provisioning supports over-committing (over-provisioning), but lvcreate will throw a warning message while creating it. An example is shown below:
[root@linuxtechi ~]# lvcreate -V 2G -T volgrp/lvpool -n node4
  WARNING: Sum of all thin volume sizes (11.00 GiB) exceeds the size of thin pool volgrp/lvpool and the size of whole volume group (10.00 GiB)!
  For thin pool auto extension activation/thin_pool_autoextend_threshold should be below 100.
Logical volume "node4" created.
[root@linuxtechi ~]#
Now verify the logical volume status again.
(Screenshot: lvs output after over-provisioning)

Step:6 Extend the size of thin pool using lvextend command

Let's assume one more disk of 5 GB is assigned to the server (/dev/sdc); we will use this disk to extend the thin pool.
Refer to the following steps.
Create the physical volume and extend the volume group (volgrp):
[root@linuxtechi ~]# pvcreate /dev/sdc
Physical volume "/dev/sdc" successfully created
[root@linuxtechi ~]# vgextend volgrp /dev/sdc
Volume group "volgrp" successfully extended
[root@linuxtechi ~]#
As the thin pool is itself a logical volume, we can extend its size with the lvextend command:
[root@linuxtechi ~]# lvextend -L+5G volgrp/lvpool
Now verify the thin pool size; it should be around 15 GB.
(Screenshot: thin pool after extension)
Note: We cannot reduce or shrink a thin pool; only extension is supported.
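With the pool extended, an individual thin volume can be grown the same way; the file system then has to be resized to match. A hedged sketch for the node1 volume created earlier (ext4 supports online growth):
# grow the thin volume node1 by 2 GB
lvextend -L +2G volgrp/node1
# resize the ext4 file system to fill the enlarged volume (works while mounted)
resize2fs /dev/volgrp/node1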

Install Nginx, MariaDB and PHP (FEMP stack) on FreeBSD 11

https://www.howtoforge.com/tutorial/install-nginx-mariadb-and-php-femp-stack-in-freebsd-11x

In this tutorial, I will describe the process of installing and configuring the FEMP stack on FreeBSD 11.x. FEMP is an acronym for a group of programs that are usually installed together on Unix/Linux operating systems and mainly used for deploying dynamic web applications. In this case, the FEMP acronym refers to the FreeBSD Unix-like operating system, on top of which these applications are installed:
  • Nginx web server, a fast-growing, popular web server mainly used for serving HTML content, but which can also provide load balancing, high availability, or reverse proxying for a web server or for other network services.
  • PHP dynamic programming language interpreter, used in the backend to manipulate database data and create dynamic web content that can be embedded into plain HTML. PHP scripts are executed only on the server side, never on the client side (in the browser).
  • MariaDB/MySQL RDBMS, where the data is stored in the backend, while the dynamic processing is handled by PHP. In this tutorial, we'll install and use the MariaDB relational database management system, a community fork of MySQL, instead of the MySQL database, which is now owned and developed by Oracle.
REQUIREMENTS:
  • A minimal installation of FreeBSD 11.x.
  • A static IP Address configured for a network interface.
  • A regular account configured with root privileges or direct access to the system via root account.
  • Preferably, a publicly registered domain name configured with the minimal DNS records (A and CNAME records).

Step 1 – Install MariaDB Database

In the first step, we'll install the MariaDB database system, which is the FEMP component that will be used for storing and managing the dynamic data of the website. MariaDB/MySQL is one of the most used open source relational databases in the world, in conjunction with the Nginx or Apache web server. Both servers are highly utilized for creating and developing complex web applications or dynamic websites. MariaDB can be installed on FreeBSD directly from the binaries provided by the Ports repositories. However, a simple search with the ls command in the databases section of the FreeBSD Ports tree reveals multiple versions of MariaDB, as shown in the following command output. Running the package manager pkg command displays the same results.
ls -al /usr/ports/databases/ | grep mariadb
pkg search mariadb
 MariaDB versions available for FreeBSD 11
In this guide, we’ll install the latest release of the MariaDB database and client by using the pkg command as illustrated in the below excerpt.
pkg install mariadb102-server mariadb102-client
After MariaDB has finished installing, issue the following command to enable the MySQL server system-wide. Also, make sure you start the MariaDB daemon as shown below.
sysrc mysql_enable="YES"
service mysql-server start
Next, we'll need to secure the MariaDB database by running the mysql_secure_installation script. While the script runs, a series of questions will be asked. The purpose of these questions is to provide a level of security for the MySQL engine: set up a password for the MySQL root user, remove the anonymous user, disable remote login for the root user, and delete the test database. After choosing a strong password for the MySQL root user, answer yes to all questions, as illustrated in the below sample of the script. Do not confuse the MariaDB root user with the system root user. Although these accounts have the same name, root, they are not equivalent and are used for different purposes, one for system administration and the other for database administration.
/usr/local/bin/mysql_secure_installation
NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
      SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!

In order to log into MariaDB to secure it, we'll need the current
password for the root user.  If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.

Enter current password for root (enter for none):
OK, successfully used password, moving on...

Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.
Set root password? [Y/n] y
New password:
Re-enter new password:
Password updated successfully!
Reloading privilege tables..
 ... Success!
By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them.  This is intended only for testing, and to make the installation
go a bit smoother.  You should remove them before moving into a
production environment.
Remove anonymous users? [Y/n] y
 ... Success!
Normally, root should only be allowed to connect from 'localhost'.  This
ensures that someone cannot guess at the root password from the network.
Disallow root login remotely? [Y/n] y
 ... Success!
By default, MariaDB comes with a database named 'test' that anyone can
access.  This is also intended only for testing, and should be removed
before moving into a production environment.
Remove test database and access to it? [Y/n] y
 - Dropping test database...
 ... Success!
 - Removing privileges on test database...
 ... Success!
Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.
Reload privilege tables now? [Y/n] y
 ... Success!
Cleaning up...
All done!  If you've completed all of the above steps, your MariaDB
installation should now be secure.
Thanks for using MariaDB!
Finally, after you've finished securing the MariaDB database, test whether you can log in to the database locally as the root account by running the following command. Once connected to the database prompt, just type quit or exit to leave the database console and return to the system user console prompt, as shown in the below screenshot.
mysql -u root -p
MariaDB> quit
 Test The MariaDB database login
Running sockstat command in FreeBSD quickly reveals the fact that MariaDB is opened to external network connections and can be remotely accessed from any network via 3306/TCP port.
sockstat -4 -6
 Check MariaDB socket and port
In order to completely disable remote network connections to MariaDB, you need to force mysql network socket to bind to the loopback interface only by adding the following line to /etc/rc.conf file with the below command.
sysrc mysql_args="--bind-address=127.0.0.1"
Afterwards, restart MariaDB daemon to apply the changes and execute sockstat command again to display the network socket for mysql service. This time, MariaDB service should listen for network connections on localhost:3306 socket only.
service mysql-server restart
sockstat -4 -6|grep mysql
MariaDB is bound to the localhost interface
If you are developing a remote web application that needs access to the database on this machine, revert the MySQL socket changes made so far by removing or commenting out the line mysql_args="--bind-address=127.0.0.1" in the /etc/rc.conf file and restarting the database to reflect the change. In this case, you should consider other alternatives to limit or disallow remote access to MySQL, such as running a local firewall and filtering the IP addresses of the clients that need remote login, or creating MySQL users with grants restricted to the proper IP addresses.
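As a hedged illustration of that last alternative (the webapp user, webappdb database, and client IP address are hypothetical), a MySQL account can be limited to a single remote host like this:
# run as the MySQL root user: allow webapp to connect only from 192.168.1.50
mysql -u root -p -e "CREATE USER 'webapp'@'192.168.1.50' IDENTIFIED BY 'a-strong-password';
GRANT ALL PRIVILEGES ON webappdb.* TO 'webapp'@'192.168.1.50';
FLUSH PRIVILEGES;"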

Step 2 – Install Nginx Web Server

The next important daemon that we’ll install in FreeBSD for our FEMP stack is the web server, represented by Nginx service. The process of installing Nginx web server in FreeBSD is pretty straightforward. Nginx web server can be installed from the binaries provided by FreeBSD 11.x Ports. A simple search through Ports repositories in the www section can show a list of what pre-compiled versions are available for Nginx software, as shown in the below command excerpt.
ls /usr/ports/www/ | grep nginx
Issuing the package management command can display the same results as shown in the below image.
pkg search -o nginx
List Nginx versions on FreeBSD
In order to install the most common version of Nginx in FreeBSD, run the below command. While installing the binary package, the package manager will ask you whether you agree with downloading and installing the Nginx package. Usually, you should type yes or y at the prompt in order to start the installation process. To avoid the prompt, add the -y flag to the command: pkg install -y nginx.
pkg install nginx
 Install Nginx on FreeBSD
After Nginx web server software has been installed on your system, you should enable and run the service by issuing the below commands.
sysrc nginx_enable="yes"
service nginx start
Start Nginx Service
You can execute the sockstat command to check whether the Nginx service is started on your system and which network sockets it binds to. Normally, it should bind by default on the *:80 TCP socket. You can use the grep command line filter to display only the sockets that match the nginx server.
sockstat -4 -6 | grep nginx
 Check if Nginx is started with sockstat command
In order to visit the Nginx default web page, open a browser on a computer in your network and navigate to the IP address of your server via the HTTP protocol. In case you've registered a domain name, or you use a local DNS server at your premises, you can write the fully qualified domain name of your machine or the domain name in the browser's URI field. A title message saying "Welcome to nginx!" alongside a few HTML lines should be displayed in your browser, as shown in the following screenshot.
 Nginx Welcome page
The location where web files are stored for Nginx in FreeBSD 11.x is /usr/local/www/nginx/ directory. This directory is a symbolic link to the nginx-dist directory. To deploy a website, copy the html or php script files into this directory. In order to change Nginx default webroot directory, open Nginx configuration file from /usr/local/etc/nginx/ directory and update root statement line as shown in the below example.
nano /usr/local/etc/nginx/nginx.conf
This will be the new webroot path for Nginx:
root       /usr/local/www/new_html_directory;
 Change Nginx Web root directory

Step 3 – Install PHP Programming Language

By default, the Nginx web server cannot directly parse PHP scripts; Nginx needs to pass the PHP code through the FastCGI gateway to the PHP-FPM daemon, which interprets and executes the PHP scripts. In order to install the PHP-FPM daemon in FreeBSD, search for the available PHP pre-compiled binary packages by issuing the below commands.
ls /usr/ports/lang/ | grep php
pkg search -o php
From the multitude of PHP versions available in FreeBSD Ports repositories, choose to install the latest version of PHP interpreter, currently PHP 7.1 release, by issuing the following command.
pkg install php71
In order to install some extra PHP extensions, which might be needed for deploying complex web applications, issue the below command. A list of officially supported PHP extensions can be found by visiting the following link: http://php.net/manual/en/extensions.alphabetical.php
If you're planning to build a website based on a content management system, review the CMS documentation in order to find out the requirements for your system, especially which PHP modules or extensions are needed.
pkg install php71-mcrypt mod_php71 php71-mbstring php71-curl php71-zlib php71-gd php71-json
Because we are running a database server in our setup, we should also install the PHP database driver extension, which is used by PHP interpreter to connect to MariaDB database.
pkg install php71-mysqli
Next, update the PHP-FPM user and group to match the Nginx runtime user by editing PHP-FPM configuration file. Change the user and group lines variables to www as shown in the below excerpt.
cp /usr/local/etc/php-fpm.d/www.conf{,.backup}
nano /usr/local/etc/php-fpm.d/www.conf
Change the following lines to look as below.
user = www
group = www
 Change PHP user
By default, Nginx daemon runs with privileges of the 'nobody' system user. Change Nginx runtime user to match PHP-FPM runtime user, by editing /usr/local/etc/nginx/nginx.conf file and update the following line:
user www;
User www user
By default, PHP-FPM daemon in FreeBSD opens a network socket on localhost:9000 TCP port in listening state. To display this socket you can use sockstat command as shown in the below example.
sockstat -4 -6| grep php-fpm
 Check php-fpm socket
In order for Nginx web server to exchange PHP scripts with PHP FastCGI gateway on 127.0.0.1:9000 network socket, open Nginx configuration file and update the PHP-FPM block as shown in the below sample.
PHP FastCGI gateway example for Nginx:
        location ~ \.php$ {
        root               /usr/local/www/nginx;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_index  index.php;
        fastcgi_param SCRIPT_FILENAME $request_filename;   
        include        fastcgi_params;
               }
 Nginx PHP configuration
After you’ve made all the above changes, create a configuration file for PHP based on the default production file by issuing the following command. You can change the PHP runtime settings by editing the variables present in php.ini file.
ln -s /usr/local/etc/php.ini-production /usr/local/etc/php.ini
Finally, in order to apply all changes made so far, enable the PHP-FPM daemon system-wide and restart PHP-FPM and Nginx services by issuing the below commands.
sysrc php_fpm_enable=yes
service php-fpm restart
Test nginx configurations for syntax errors:
nginx -t
service nginx restart
 Test nginx syntax and restart nginx
In order to get the current PHP information available for your FEMP stack in FreeBSD, create a phpinfo.php file in your server document root directory by issuing the following command.
echo "" | tee /usr/local/www/nginx/phpinfo.php
Then, open a browser and navigate to the phpinfo.php page by visiting your server's domain name or public IP address followed by /phpinfo.php, as illustrated in the below screenshot.
 PHPinfo output
That’s all! You’ve successfully installed and configured FEMP stack in FreeBSD 11. The environment is now ready and fully functional to start deploying dynamic web applications at your premises.

Server Name Indication (SNI) in Tomcat

https://octopus.com/blog/sni-in-tomcat

Server Name Indication (SNI) has been implemented in Tomcat 8.5 and 9, and it means certificates can be mapped to the hostname of the incoming request. This allows Tomcat to respond with different certificates on a single HTTPS port.
This blog post looks at how to configure SNI in Tomcat 9.

Creating Self Signed Certificates

For this example we'll create two self signed certificates. This is done with the openssl command.
The output below shows how the first self signed certificate is created for the "Internet Widgets" company.
$ openssl req -x509 -newkey rsa:4096 -keyout widgets.key -out widgets.crt -days 365
Generating a 4096 bit RSA private key
......++
........++
writing new private key to 'widgets.key'
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) []:AU
State or Province Name (full name) []:QLD
Locality Name (eg, city) []:Brisbane
Organization Name (eg, company) []:Internet Widgets
Organizational Unit Name (eg, section) []:
Common Name (eg, fully qualified host name) []:
Email Address []:
Octopuss-MBP-2:Development matthewcasperson$ ls
widgets.crt widgets.key
We then create a second self signed certificate for the "Acme" company.
$ openssl req -x509 -newkey rsa:4096 -keyout acme.key -out acme.crt -days 365
Generating a 4096 bit RSA private key
..............................++
.....................................................................++
writing new private key to 'acme.key'
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) []:Au
State or Province Name (full name) []:QLD
Locality Name (eg, city) []:Brisbane
Organization Name (eg, company) []:Acme
Organizational Unit Name (eg, section) []:
Common Name (eg, fully qualified host name) []:
Email Address []:
Octopuss-MBP-2:Development matthewcasperson$ ls
acme.crt acme.key widgets.crt widgets.key
Copy the files acme.crt, acme.key, widgets.crt and widgets.key into the Tomcat 9 conf directory.

Configuring the <Connector>

In the conf/server.xml file we'll add a new <Connector> element to reference these certificates.
<Connector SSLEnabled="true" defaultSSLHostConfigName="acme.com" port="62000" protocol="org.apache.coyote.http11.Http11AprProtocol">
    <SSLHostConfig hostName="acme.com">
        <Certificate certificateFile="${catalina.base}/conf/acme.crt" certificateKeyFile="${catalina.base}/conf/acme.key" certificateKeyPassword="Password01!" type="RSA"/>
    </SSLHostConfig>
    <SSLHostConfig hostName="widgets.com">
        <Certificate certificateFile="${catalina.base}/conf/widgets.crt" certificateKeyFile="${catalina.base}/conf/widgets.key" certificateKeyPassword="Password01!" type="RSA"/>
    </SSLHostConfig>
</Connector>
There are a few important aspects to this configuration block, so we'll work through them one by one.
The defaultSSLHostConfigName="acme.com" attribute defines the acme.com host configuration as the default. This means that when a request comes in for a host that is neither acme.com nor widgets.com, the response will be generated using the acme.com certificate. You must have at least one default host configured.
The protocol="org.apache.coyote.http11.Http11AprProtocol" attribute configures Tomcat to use the Apache Portable Runtime (APR), which means that the openssl engine will be used when generating the HTTPS response. Typically, deferring to openssl results in better performance than using the native Java protocols. The Tomcat documentation has more details on the protocols that are available.
We then have the certificate configuration for each of the hostnames. This is the configuration for the acme.com hostname.
<SSLHostConfig hostName="acme.com">
    <Certificate certificateFile="${catalina.base}/conf/acme.crt" certificateKeyFile="${catalina.base}/conf/acme.key" certificateKeyPassword="Password01!" type="RSA"/>
</SSLHostConfig>
The certificateFile="${catalina.base}/conf/acme.crt" and certificateKeyFile="${catalina.base}/conf/acme.key" attributes define the location of the certificate and the private key, relative to the CATALINA_BASE directory. The Tomcat documentation has more details on what CATALINA_BASE refers to:
The CATALINA_HOME environment variable should be set to the location of the root directory of the "binary" distribution of Tomcat.
The CATALINA_BASE environment variable specifies location of the root directory of the "active configuration" of Tomcat. It is optional. It defaults to be equal to CATALINA_HOME.

Testing the Connection

Since we don't actually own the acme.com or widgets.com domains, we'll edit the hosts file to resolve these addresses to localhost. On Mac and Linux, this file is found at /etc/hosts.
Adding the following lines to the hosts file will direct these domains to localhost. We'll also throw in the somethingelse.com hostname to see which certificate an unmapped host returns.
127.0.0.1    acme.com
127.0.0.1    widgets.com
127.0.0.1    somethingelse.com
We can now open up the link https://widgets.com:62000. In Firefox, we can see that this request has the following certificate details. Notice the Verified by field, which shows Internet Widgets.
widgets.com certificate
Then open up https://acme.com:62000. The Verified by field now shows Acme.
acme.com certificate
Now open up https://somethingelse.com:62000. The Verified by field still shows Acme, because this certificate is the default, and is used for any host that doesn't have a specific mapping defined.
somethingelse.com certificate
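You can also verify the SNI behavior from the command line with openssl (a quick sketch; given the details entered when the certificates were generated, the first command should report the Internet Widgets subject and the second the Acme subject):
$ openssl s_client -connect 127.0.0.1:62000 -servername widgets.com </dev/null 2>/dev/null | openssl x509 -noout -subject
$ openssl s_client -connect 127.0.0.1:62000 -servername acme.com </dev/null 2>/dev/null | openssl x509 -noout -subject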

Conclusion

So we can see that a single instance of Tomcat on a single port can respond with multiple different certificates depending on the host that was requested. This is the benefit that SNI provides to web servers.
If you are interested in automating the deployment of your Java applications, download a trial copy of Octopus Deploy, and take a look at our documentation.

CIA programs to steal your SSH credentials (BothanSpy and Gyrfalcon)

https://www.linux.org/threads/cia-programs-to-steal-your-ssh-credentials-bothanspy-and-gyrfalcon.12645

WikiLeaks yesterday released documentation on two very specific scripts meant to steal OpenSSH login credentials from the client side. One script is for Windows clients, the other for Linux clients.

On the Windows side of things, they have released documentation on a script called BothanSpy. This program targets the SSH client program Xshell on the Microsoft Windows platform and steals user credentials for all active SSH sessions. It works regardless of whether you're using simple user/password authentication, user/key, or user and key with password. It then sends the credentials or key file to a CIA-controlled server.

Similarly, on the Linux side, there is a program called Gyrfalcon. The documentation on this program was written in January 2013 for v1.0 and November 2013 for v2.0. Scanning through the user guide for version 2.0 shows very detailed information on how to prepare and plant the software on the target computer, starting with how to cover your tracks.
The document goes on to detail what the package contains: for instance, Gyrfalcon clients and libraries in both 32-bit and 64-bit flavors for:
  • CentOS 5.6 - 6.4
  • RHEL 4.0 - 6.4
  • Debian 6.0.8
  • Ubuntu 11.10
  • SuSE 10.1
That being said, you have to remember the documentation was dated 2013, so you'd have to assume they have an updated version now to work with current Linux versions.

It continues with details on how to install it on the target system. Installation on the target system also requires the JQC/KitV rootkit, also developed by the CIA.

You can see they had a meeting about JQC as a rootkit in their NERDStech talk series meetings: https://fdik.org/wikileaks/year0/vault7/cms/page_2621796.html



So, secure your systems, people. Attackers trying to use these tools still need to somehow get a shell on your system in order to install this stuff.

Detecting on your system
Detecting this on your system is going to be tough, since:
  • The instructions note to rename the script to something innocuous before uploading/running it
  • We don't have a copy of any of the scripts they're talking about
But we do know a couple of things:
  • It runs in the background. A simple 'ps' will show you running processes, and you should be able to spot something unfamiliar and kill it (see the commands after this list)
  • A missing shell history file would indicate that 'something' happened, though not necessarily this
  • Finding evidence of the JQC/KitV rootkit on your system is another route, though that may be tough
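Some hedged starting points from the shell (these only surface anomalies; they prove nothing by themselves):
ps auxww | less                # scan for unfamiliar background processes
ls -la ~/.bash_history         # a missing or zero-length history file is a weak red flag
last -a | head                 # recent logins you don't recognize are worth a closer look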

More Information
WikiLeaks announcement:
https://wikileaks.org/vault7/#BothanSpy

Gyrfalcon 2.0 User Manual:
https://wikileaks.org/vault7/document/Gyrfalcon-2_0-User_Guide/Gyrfalcon-2_0-User_Guide.pdf

Gyrfalcon 1.0 User Manual:
https://wikileaks.org/vault7/document/Gyrfalcon-1_0-User_Manual/Gyrfalcon-1_0-User_Manual.pdf
 


Cloudwards Guide: The 9 Best Server Backup Solutions


https://www.cloudwards.net/best-server-backup-solutions




Welcome to the Cloudwards.net guide for the best server backup solutions available today. During this roundup, we’ll introduce you to storage solutions designed to aid in disaster recovery for crashed or corrupted servers and planned migrations to new hardware.
If you’re looking for disaster recovery options best suited for all of your office devices, our best online backup for business buyer’s guide is a good place to start.
Servers are used to store critical data and application files, among other things. As any sysadmin can tell you, though, no server lasts forever. With a good disaster recovery plan in place, the inevitable crash is merely an inconvenience; without one, it's quite possibly a catastrophe.
Let’s take a look at the best ways to avoid catastrophe, without breaking the bank or raising your blood pressure as you try and learn how to work it. If you’re in the market for an actual server, as well, we also recommend you check out our best small business server article.

How We Made Our Picks for Top Server Backup

We’ve been evaluating and reviewing backup software for servers for years and pulled from that experience to make our top selection for server backup. While a few picks, including Carbonite and CloudBerry Backup, were no-brainers, a few of the others weren’t as easy.
When evaluating server backup, there are several features we look for. For one, as cloud enthusiasts, we prefer solutions that make it easy to backup server files to remote data centers. However, we also recognize that local backup has plenty of benefits, too, including faster recovery. With that in mind, we tend to prefer tools that support a hybrid backup approach.
We also looked for tools that could perform both file-based and image-based backup, in addition to full, incremental and differential backup. We also considered backup scheduling capabilities that let people plan off-peak backups to limit system resources. Other key features include hot backup, which lets you backup files that are currently in use, and multithreaded backup, which lets you run multiple backup processes at the same time.
Finally, we favor a capability called bare-metal restore. With bare-metal restoration, you can restore an image to a server that doesn’t yet have any software installed, which saves valuable time in the disaster recovery process.

Best Server Backup Pick: IDrive for Business

IDrive for Business consistently ranks as one of our favorite backup options for consumers. Its business backup plans are also excellent, providing platform flexibility and features that match or exceed more expensive options like MozyPro and SOS Online Backup for Business.
                    Plan One     Plan Two     Plan Three
Annual Cost:        $99.50       $199.50      $499.50
Computers:          Unlimited    Unlimited    Unlimited
Servers:            Unlimited    Unlimited    Unlimited
Total Storage:      250GB        500GB        1.25TB
You get a 25 percent discount on the first year of service (or two years if you don’t mind that kind of commitment), making the pricing even more attractive.
There are more subscription options, too, all the way up to 12.5TB. Unlike Carbonite Office and some other options, it's a bit harder to scale storage with IDrive because you have to stick to one of its set storage plans. But its low cost generally makes it a better value, anyway.
IDrive is able to backup most server applications, including Windows Server, Linux Server, MS SQL Server, MS Exchange Server, MS SharePoint Server, Oracle Server, Hyper-V and VMWare.
IDrive supports hybrid backup if you like to keep server files copied both locally and in the cloud. It also has decent monitoring tools to keep you on top of your backups in near real-time. Bare-metal disaster recovery and disk-image backup are both supported, too.
Phone, email and online chat options are all available for customer support, which we've always found to be very timely. Support is also available 24/7, as you can read in our IDrive for Business review.

Number Two: CloudBerry Backup

CloudBerry Backup is a bit more complex than IDrive for Business in that it doesn’t offer its own cloud storage network. Instead, you have to purchase CloudBerry Backup software and then purchase cloud storage space separately from a supported service.
However, with over fifty different cloud storage options supported, those who are okay with getting a bit technical might enjoy the versatility. Supported vendors include Amazon S3, Microsoft Azure, Google Cloud and Backblaze B2. CloudBerry Backup also supports local backup.
You'll have to buy software according to what kind of server or servers you use:
  • Windows Server: $119.99
  • SQL Server: $149.99
  • Exchange: $229.99
The SQL Server and Exchange software options both also support Windows Server. There's also a $229.99 Ultimate Plan that can be used for Windows Server, SQL Server and Exchange.
Image-based backup and bare-metal restore are both supported. CloudBerry Backup also supports restoration to servers with dissimilar hardware and restores directly from the cloud using a USB flash drive.
The software has a user-friendly wizard to walk you through the options, including scheduling times for your backups to run. Both full and differential backup are supported. Files can be compressed when backing up to reduce bandwidth used, too.
CloudBerry Backup supports client-side encryption to restrict access to your files in the cloud; the level of encryption used is 256-bit AES. For more details, check out our CloudBerry Backup review.

Number Three: Carbonite for Office

Carbonite for Office plans include options for both cloud and local backup, making it a hybrid backup tool. It supports physical and virtual servers, plus network-attached storage (NAS), storage area network (SAN) and external drives. It’s a solid backup service, although the platform support isn’t as good as what you get with IDrive — and it costs quite a bit more.
In addition to backing up and restoring individual files, you can perform a full system restore with Carbonite. The service supports both imaging and bare-metal restores. In order to set up a disaster recovery plan for your server, you’ll need to look beyond Carbonite’s basic Core plan, which is limited to computer backup.
Plan        Storage    Annual Cost    No. of Computers    No. of Servers
Core        250GB      $269.99        Unlimited           None
Power       500GB      $599.99        Unlimited           1
Ultimate    500GB      $999.99        Unlimited           Unlimited
Carbonite for Office Power lets you backup one server, while Carbonite for Office Ultimate lets you backup unlimited servers. Both give you 500GB of base storage and let you add 100GB of storage for $99 per year.
Maybe what we love most about Carbonite is how much easier it is to set up compared to most other server backup options. Some of those backup options might come at a lower price tag than Carbonite, but they’re also more difficult to manage. Business owners looking to get a server backup plan in place quickly and keep it running without spending a bunch of money on support staff should be happy.
The interface is easy on the eyes and walks you through most of the process. You can schedule automatic backups with a simple scheduler and manage system resources load by setting up full, incremental and differential backup. Other options include bandwidth throttling, notification setup and monitoring tools.
For security, Carbonite uses 128-bit AES to protect content stored in its data centers. If you’d like, you can also switch to private encryption and maintain control of your encryption keys yourself. That means nobody at Carbonite will ever be able to decrypt your files.
Finally, Carbonite has some of the most helpful support staff we’ve ever dealt with, at least for a cloud backup service. You can call, email or chat online with support seven days a week. Carbonite also offers free valet installations, which means a Carbonite employee will set up and optimize your server backup for free.
Read more about this service in our Carbonite for Office review.

Number Four: Acronis Backup 12.5

Acronis is one of the more popular server backup tools available today because it offers a great online interface and it's extremely easy to use. However, it can also be a bit expensive. Its base subscription price is $469 per year per server, and you also have to pay for storage:
  • 250GB: $299 per year
  • 500GB: $499 per year
  • 1TB: $899 per year
  • 5TB: $4299 per year
The $469 price is for a standard subscription for Windows and Linux server backup. There’s also an advanced plan for $839. Both plans get discounted if you sign up for multiple years in advance.
Both Acronis Backup Standard and Acronis Backup Advanced offer disk-imaging capabilities and hybrid storage. Local storage options include NAS and SAN. Advanced also includes deduplication, tape drive support and much better admin and reporting options.
Acronis can be used to backup physical and virtual servers, and can perform file-based, full and bare-metal recovery. It can also restore to dissimilar hardware if you need to move to a new server.

Number Five: StorageCraft ShadowProtect SPX Server 

ShadowProtect SPX Server is another favorite, though it comes with quite a few more limitations than our first four picks. For one, while many will find its desktop interface pretty easy to use for backup-plan creation, it’s not nearly as easy to use as Carbonite or even IDrive, CloudBerry Backup or Acronis.
The bigger limitation is that it doesn’t integrate easily with cloud storage, meaning you’ll need someone with strong technical knowledge to help you out. The upshot of all this is that it’s best used for local backup.
However, if you can work around those limitations and primarily need an on-premise solution, ShadowProtect SPX provides a nice range of features for securing your physical and virtual Windows Servers for disaster recovery.
Those features include full and incremental backup, backup scheduling and very good monitoring tools. You can backup files or an image, and backup images can be restored to machines with dissimilar hardware.
ShadowProtect SPX Server has another issue which might make it unsuitable for individuals or small businesses in need of server backup: cost. A lifetime license will set you back $1095, and one year of premium support costs an additional $164. Priority support does include 24-hour phone support, however.

Number Six: Macrium Reflect 7

Macrium suffers from some of the same issues as ShadowProtect SPX Server, has fewer features and is less user friendly. However, despite not facilitating cloud backup, it's still popular among IT departments as an on-premise backup tool. That may be because it costs quite a bit less than ShadowProtect SPX.
Macrium Reflect 7 Server costs $275 per server for a perpetual license. As with StorageCraft, support does cost extra, though, even if you elect for standard support over premium. However, you get the first year of standard support free. The first year of premium support adds $13 to the cost.
                           Standard    Premium
Renewal Cost:              $55         $69
Response Time:             24 hours    12 hours
24/7:                      No          Yes
Telephone Support:         No          Yes
Dedicated Case Manager:    No          Yes
The software can be used to backup entire physical and virtual servers into a single compressed image file, or files and folders into a single compressed archive file. A handy scheduler will let you plan full and incremental backups for when you can spare the system resources.
With Macrium ReDeploy, you can restore files directly to dissimilar hardware in case your server disks become corrupted or it’s time to upgrade. Virtual booting is also supported.

Number Seven: NovaBACKUP Server

NovaBACKUP Server lets you backup both locally and to the cloud. Options for cloud backup include NovaStor, which you have to purchase through a managed-service provider, or Amazon S3, which you can set up yourself.
NovaBACKUP can also be integrated with a few file-sharing services, including Dropbox, Google Drive and OneDrive. However, those options aren’t really designed for server backup.
The software itself can be purchased annually for $200, or you can go with a perpetual license for $400. If you do go with a perpetual license, you'll only get one year of NovaCare support included. NovaCare includes email, chat and telephone support channels.
Aside from being a hybrid storage solution, NovaBACKUP supports file-based, image-based, incremental, full and multithreaded backup. It also supports bare-metal restores and restores to dissimilar devices.
The software is also capable of encrypting data using 256-bit AES, whether saving it locally or sending it to the cloud, and provides tools for near real-time monitoring.

Number Eight: EaseUS Todo Backup Server

EaseUS produces some of the most popular tools for IT professionals in the world, including an excellent data recovery tool we’ve written up in our Data Recovery Wizard review. Its server backup software, Todo Backup Server, is also very good.
One of the most attractive things about Todo Backup Server is its cost. A lifetime license for the current version of Todo Backup Server costs $199. A lifetime licence for the current and all future versions costs $359.
Unfortunately, while Todo Backup Server has some very nice features, it doesn’t support simple cloud backup and that prevents it from being much higher on this list. FTP is an option, however, so you can still backup remotely if you’re up for a bit of work.
Todo Backup Server provides full system backup for Windows Server, fast block-level disk imaging and file backup. It can also perform incremental and full backups, and has a nice backup scheduler. Images and files are compressed to take up less space and can be encrypted with 256-bit AES. You can also create notifications to alert you of ongoing, completed or failed processes.
Systems restores can be performed using bootable media and on dissimilar hardware. You can restore the entire image or perform file-based restores.

Final Thoughts

Those are our picks for the best server backup tools. For our money, IDrive for Business has earned the top spot thanks to its platform flexibility and low cost, while Carbonite stands out for a user experience that really simplifies the backup process and excellent support that you don't have to spend extra money on.

Free Chapter of Kali Linux – A Guide to Ethical Hacking by Rassoul Ghaznavi-Zadeh

https://www.vpnmentor.com/blog/kali-linux-a-guide-to-ethical-hacking

Rassoul Ghaznavi-zadeh was kind enough to answer a few questions and share a free chapter from his book "Kali Linux – Hacking Tools Introduction".
Rassoul Ghaznavi-zadeh, author of "Kali Linux – Hacking Tools Introduction", has been an IT security consultant since 1999. He started as a network and security engineer, gathering knowledge on enterprise businesses, security governance, and standards and frameworks such as ISO, COBIT, HIPAA, SOC and PCI. With his assistance, numerous enterprise organizations have reached safe harbor by testing, auditing and following his security recommendations.

What made you write this book?

I have been working on Cybersecurity for more than 10 years now. A couple of years ago, I put together all my notes about penetration and ethical hacking and released them as a book. While I didn’t expect it, I received lots of good comments, and sold a lot of copies. This year, I decided to release a similar book with more details and information which can even be used in academic environments.

The first chapter states that the purpose of your book is to encourage and prepare the readers to act and work as ethical hackers. Can you describe your views on what it means to be an ethical hacker?

Ethical hacking is a process of investigating vulnerabilities in an environment, analyzing them and using the information gathered to tighten security to protect that environment.
An Ethical hacker would have extensive knowledge about a range of devices and systems. Ideally you should have multiple years of experience in the IT industry and be familiar with different hardware, software and networking technologies.
As an Ethical hacker you have a clear responsibility about how you use your knowledge and techniques. It is also important to understand the client’s expectations from an ethical hacker, and consider them when assessing the security of a customer’s organization.

Can you give us a quick tip on starting a penetration project as an ethical hacker?

As hackers, breaking the law or getting into trouble can sometimes be difficult to avoid, so it’s important to act legitimately and get your paperwork ready in advance. This includes signed approvals to access the customer’s network and system, signing an NDA, defining clear goals and timelines for you and your team and notifying appropriate parties, such as the sys admin, security department, legal department etc.

What new knowledge did you gain whilst writing your book?

Obviously writing a book is not an easy task, considering this is not my main job. Writing this book was a good opportunity for me not only to learn more about professional writing, but also to refresh my knowledge about hacking tools and techniques. For every single tool introduced in this book, I have done some manual work by installing and testing the latest version on the newest version of the Kali operating system.

Where can one acquire your book?

The book is available on most online stores like Amazon, Google, iTunes, Barnes & Noble, Kobo, etc. I also have a couple of other books which can be found there, including the original version of this book, "Hacking and Securing Web Applications" and "Enterprise Security Architecture".
Following is the first of three chapters from “Kali Linux- Hacking tools introduction”.

Chapter 1- Ethical Hacking and Steps

By Rassoul Ghaznavi-zadeh
Ethical hacking is a process of investigating vulnerabilities in an environment, analysing them and using the information gathered to protect that environment from those vulnerabilities. Ethical hacking requires a legal and mutual agreement between the ethical hacker and the asset and system owners, with a defined and agreed scope of work. Any act outside of the agreed scope of work is illegal and not considered part of ethical hacking.

What is the purpose of this book?
The purpose of this book is to prepare the readers to be able to act and work as an ethical hacker. The techniques in this book must not be used on any production network without formal approval from the ultimate owners of the systems and assets. Using these techniques without approval can be illegal, can cause serious damage to others' intellectual property, and is a crime.

What are the responsibilities of an Ethical Hacker?
As an Ethical hacker you have a clear responsibility about how you use your knowledge and techniques. It is also very important to understand what the expectations from an Ethical hacker are, and what you should consider when assessing the security of a customer's organization. Below are a couple of important things you must consider as an Ethical hacker:
  • Must use your knowledge and tools only for legal purposes
  • Only hack to identify security issues with the goal of defence
  • Always seek management approval before starting any test
  • Create a test plan with the exact parameters and goals of test and get the management approval for that plan
  • Don't forget, your job is to help strengthen the network and nothing else!

What are the customer’s expectations?
It is very important to understand the customer's expectations before starting any work. As the nature of this work (ethical hacking) is high risk and requires a lot of attention, if you don't have a clear understanding of their requirements and expectations, the end result might not be what they want and your time and effort will be wasted. It could also have legal implications if you don't follow the rules and address the customer's expectations. Below are some important things you should note:
  • You should work with customer to define goals and expectations
  • Don’t surprise or embarrass them by the issues that you might find
  • Keep the results and information confidential all the time
  • The company usually owns the resultant data, not you
  • Customers expect full disclosure on problems and fixes

What are the required skills of the hacker?
To be an Ethical hacker you should have extensive knowledge about a range of devices and systems. Ideally you should have multiple years of experience in the IT industry and be familiar with different hardware, software and networking technologies. Some of the important skills required to be an Ethical hacker are as below:
  • Should already be a security expert in other areas (perimeter security, etc.)
  • Should already have experience as network or systems administrator
  • Experience on wide variety of Operating Systems such as Windows, Linux, UNIX, etc.
  • Extensive knowledge of TCP/IP – Ports, Protocols, Layers
  • Common knowledge about security and vulnerabilities and how to correct them
  • Must be familiar with hacking tools and techniques (We will cover this in this book)

How to get prepared for penetration testing
Once you want to start a penetration project, there are a number of things that you need to consider. Remember, without following the proper steps, getting approvals and finalizing an agreement with the customer, using these techniques is illegal and against the law.
Important things to consider before you start:
  • Get signed approval for all tests from the customer
  • You need to sign confidentiality agreement (NDA)
  • Get approval of collateral parties (ISPs)
  • Put together team and tools and get ready for the tests
  • Define goals (DoS, Penetration, etc.)
  • Set the ground rules (rules of engagement with the customer and team)
  • Set the schedule (non-work hours, weekends?)
  • Notify appropriate parties (Sys admin, Security department, Legal department, law enforcement)

How to Secure Your Public WiFi Connection

https://thebestvpn.com/public-wifi-security

Andrey Doichev
In this post, I'll share some tips and recommendations on how to keep your public Wi-Fi connection secure and safe.
Ever felt uneasy doing your online banking in your favorite coffee shop?
Me too.
Are you sure you want to hit “buy” on that chic blue and black (or was it gold and white?) dress, exposing your credit card details to cyber criminals who may be watching?
Think again.
I don’t blame you if every time you log onto social media, on public Wi-Fi, you worry if some hacker is about to steal your password.
My friend, you’re not alone.
Over half a billion personal records stolen in 2015
According to leading cybersecurity firm Symantec (NASDAQ:SYMC), millions of Americans who access their personal emails (58 percent), log into social media platforms (56 percent) or do their online banking (22 percent) on public Wi-Fi networks are opening themselves up to being spied on and having their passwords and credit card details stolen.
Risk Based Security reports that in 2016 alone, there have been more than 4,100 security breaches totaling 4.2 billion stolen and exposed personal records – emails, passwords, SSNs, addresses etc.
That's over 4 billion records that could enable identity theft, drained bank accounts or worse!
There are many tricks to ensure this never happens to you when using Public Wi-Fi. Best of all, we are going to show you all of them.
Feeling strongly about ensuring your safety, we decided to write this comprehensive article on public Wi-Fi security and show you step by step, how you can thwart hacking attempts on your systems.
Follow our step-by-step guide to a T and take a look at our exhaustive list of additional security strategies. Implement those which you think you can benefit from and your devices will become impenetrable fortresses.
In times where 87 percent of the population is using public networks and free Wi-Fi services, it is imperative to practice good internet hygiene and ensure that every precaution to secure one's personal data has been taken.
Fortunately for me and you, there are some awesome people out there who have made it their mission in life to simplify cybersecurity. Whether you're a 16-year-old programming genius, a flashy management consultant or a single mom – we've got you covered.
Everything you will read below is straightforward and simple to implement.
Let me help you by giving you the necessary knowledge & tools to protect yourself from any and all cyber attackers every time you decide to click “connect” on an open Wi-Fi network.
Ready?

Step-By-Step Public Wi-Fi Security Guide


In this first part of our article I will be guiding you through the three most important steps you need to take to ensure maximum protection in the quickest time possible. These security precautions will take you only minutes to enforce and will make you practically invisible to hackers.

Safe Public WiFi Browsing Infographic

1. Be Mindful & Proactive

Congratulations! By reading these very lines you already completed step one to securing yourself from cyber attacks.
Really, kudos to you for giving a second thought to just how vulnerable you may be while waiting for your flight in that crowded airport.
The first thing you have to accept is that, like it or not, you’re exposing yourself to unnecessary dangers. Prevention is the best cure. Use public Wi-Fi only if you have implemented our next two steps.
Before I get to them, let’s explore how awareness can help you out.
Just like anything in life, being aware and mindful of your surroundings is of utmost importance. An often overlooked part of public Wi-Fi security is that first word – public.
Just because you’re connected to an open network, doesn’t mean all attacks will be carried over online.
Be mindful of who is peeking at your screen and never leave your technology unattended. Hackers can plug in a flash drive, which automatically installs malware, in half the time it takes you to grab your latte from the cute barista.
Public Wi-Fi networks are inherently dangerous and incredibly attractive to no-gooders. The ability to blend in and potentially have physical access to their victim’s machines is too sweet to give up.
Do not, under any circumstance, provide access to your notebook to strangers. Never let anybody you don’t know touch your laptop. It’s your property and nobody has the right to touch it without a warrant!
Further, be proactive in your security habits. If you're connecting to an unknown open Wi-Fi that asks you to download a client in order to connect, move on. Why risk it when you can connect without hassle to another network two blocks down the road?
Sure, most of the time that client is just a method for your network provider to serve you pesky ads. A hacker, however, needs to trick you only once to download his fake client for you to get into deep trouble. More on this later.
If you practice habits of mindfulness and proactive thinking, you will easily be able to recognize sketchy situations to beware of.
They can’t get to you, if you outsmart them at their own game.

2. Turn OFF “Sharing”

Moving onto a more practical and critical step of network security – turning off sharing.
By disabling this option, you will block anybody, connected to the same network, from snooping around your files.
By default, this is already taken care of if the network is marked as public. Sometimes, however, you may mis-click and leave yourself exposed.
Even if you follow all other security techniques, if sharing is not disabled, it's no good. You're practically installing a state-of-the-art home alarm system but leaving your front door open.
Anybody could access your files; they don’t even have to be a hacker.
How to Turn Off Sharing on Windows
  1. Open your control panel and click on Network and Internet.
  2. Click through to Network and Sharing centre.
  3. Find Change advanced sharing settings and click it.
  4. There are two main options you want to disable: Network discovery & File and printer sharing. Turn both of these off.
P.S. Make sure you do this both for “Private” and “Guest or Public” networks!
Turn off sharing in Windows 10
How to Turn Off Sharing on Mac (OS X)
  1. Navigate to “System Preferences”.
  2. Under “Internet & Wireless” click on the “Sharing” folder icon.
  3. Find the list of sharing options on the left hand side of the new screen.
  4. To disable sharing, simply uncheck the option named "File Sharing".
If you’d like, read through the other options and uncheck them all as well.
How long did this take you, a minute? You’ve just made sure that nobody will be able to access or see any of your files. While this will deter low effort intrusions, it will do little for concentrated attacks.
In order to truly protect yourself from hackers, you have to…

3. Use a VPN

How Do Virtual Private Networks Work
Also known as The Holy Grail of identity protection.
The #1 tool you want to own and implement. If I could sum up Wi-Fi security in one sentence it would be USE A VPN!
Not wanting to bore you with technical jargon, let me give you a quick rundown on the benefits of a VPN, how these networks function and why you should spend 15 minutes setting up yours.
VPN stands for Virtual Private Network. Living up to its name, it encrypts your information, keeping it private and anonymous.
In the olden days (read: a few decades ago), VPNs were exclusively used by Fortune 500 companies. Encrypting data, and thereby enabling anonymous communications across vast distances, was a much sought-after solution in the highly competitive world of business.
As the years went on, the advantages of VPNs quickly gained popularity with the outside world. From small businesses to individuals; folks quickly learned the value of remaining anonymous online.
Once the VPN client is installed and set up, it takes only a couple of clicks to connect and start browsing securely.
I promised no technical jargon so I will only touch on how a VPN works. Make sure to read through our in-depth article on how VPN encrypt your data, if you’d like a more detailed look.
In the simplest of terms, a VPN transmits your data packets via a protected tunnel protocol. This protocol is layered with security features which will immediately sever the connection if an intrusion is detected. If an intrusion is attempted, the VPN will immediately reconnect through a different route, staying one step ahead.
It’s a cat and mouse game with the mouse being able to teleport to a new city anytime it spots the cat. The best part – you decide where to teleport. You can mask your traffic and connect to any server that your VPN provider provides.
To find a VPN, use this VPN review chart.
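Once you have picked a provider, connecting usually amounts to feeding its configuration file to a client. A minimal sketch with the open source OpenVPN client, where provider.ovpn is a placeholder for the file your VPN service gives you:
sudo openvpn --config provider.ovpn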

4. Verify Public Wi-Fi Connection

It's a beautiful, sunny day and you sit down in the newly opened French bistro around the corner, "La Vie" (I'm terrible at coming up with fake bistro names). You ask your waiter for the Wi-Fi password as you notice the pretty girl sitting a few feet from you…
Wait!
The first step when connecting to an open Wi-Fi is to verify you’re actually connecting to it. It is incredibly easy for hackers to set up a fake Wi-Fi hotspot resembling the original one. Distracted by the pretty girl, you can easily be fooled into connecting with “La Wie” instead of “La Vie”.
The last thing you want is to willingly connect to said hacker and give him access to your system on a silver platter.
If unsure, call back that waiter and verify the name of their Wi-Fi.

5. Avoid High Profile Websites & Activities

Listen, you’ve been owing aunt Sue those $20 for months; she can wait a few more hours.
Cyber criminals will take whatever they can get, but they are especially interested in banking details, passwords and personal information.
Don’t ever log into your online banking service or Paypal. Delay your banking activities for when you’re at home.
It would be impossible to list all sites and activities you should avoid; here are the most important ones:
  • Online banking & Financial Services
  • Emails
  • Social Media
  • Utilities
  • …and anything that will have you typing in sensitive information – SSN, Address etc
I know this doesn’t leave much room for much else, but this is what you signed up for if you want to be immune to attacks.

6. Remove Sensitive Data

Pretty straightforward, if you know you’re going to use public internet, make sure you remove any sensitive or personal data off your system. Remove banking files, passwords, documents showing your address or social security number from your laptop.
If you have to access such a file, opt for remote access to your home system instead. Just make sure it’s not residing in the laptop you’re using in public.
Hide a Folder In Windows
A less secure alternative is to hide all folders containing sensitive information. On Windows, just right-click a folder and navigate to its Properties. Under Attributes, enable the "Hidden" option.
To display hidden folders, go to your file explorer’s View tab and check Hidden Items in the Show/Hide pane on the far right side.
Find Hidden Folders in Windows 10

7. Use SSL Encrypted Websites (and Avoid Doing Money Transfers)

A Secure Sockets Layer (SSL) is an advanced layer of security for establishing an encrypted link between you and a website. The security features ensure that any data you submit to an SSL-secured website remains private. Millions of websites make use of SSL to protect their clients and readers.
Websites which have set up a SSL are easily recognizable. You need only look at the address bar of your browser. Take a look at ours for example:
No SSL Certificate versus SSL Certificate

SSL certificates are neither cheap nor easy to procure. To be awarded an SSL certificate, a website owner must implement several security measures and answer a number of questions about the identity of their website and company.
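A quick, hedged way to inspect the certificate a site presents is to run curl verbosely; the TLS handshake details are printed on stderr:
curl -vsI https://www.example.com 2>&1 | grep -E 'subject:|issuer:'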

8. Enable Firewall

Also known as a 'packet filter', a firewall is a standard software or hardware network security system built into most operating systems. It monitors network traffic and connection attempts into and out of a network or computer and determines whether or not to allow them to pass. Depending on its sophistication, this can be limited to simple IP/port combinations or extend to full content-aware scans. Think of it as a warden which only permits trusted connections to communicate with you; anything fishy gets blocked by your firewall.
Safe to say, we want this enabled. Here’s how to do it…
How to Enable Firewall on Windows
  1. Open your Control Panel and click on System and Security.
  2. Navigate to your Windows Firewall.
  3. Check out the left panel and click on “Turn Windows Firewall on or off“.
Pretty straightforward from there. This is how it should look:
How to Turn Firewall on in Windows 10
How to Enable Firewall on MacBook
Apple's firewall works differently since the OS X v10.5.1 update: it operates on a per-application basis. Programs and services are often flagged by your firewall as false positives; with OS X you can configure your firewall to allow traffic from these programs and services.
To enable your Firewall on your OS X system follow these steps:
  1. Choose System Preferences from your Apple menu and click on Security or Security & Privacy.
  2. Navigate to the Firewall tab and unlock the pane by clicking the lock in the lower-left corner and enter the administrator username and password.
  3. Click on “Turn On Firewall” or “Start“.
How to Turn on OS X Firewall
To further customize your firewall and enable specific apps and services to circumvent your firewall, click Advanced.
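The same setting can be flipped from Terminal using Apple's bundled command-line tool (a sketch; both flags belong to socketfilterfw):
sudo /usr/libexec/ApplicationFirewall/socketfilterfw --setglobalstate on
sudo /usr/libexec/ApplicationFirewall/socketfilterfw --getglobalstate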

9. Update your Antivirus

If you’re part of the 84.9% of antivirus users with updated software; kudos to you.
For the rest of you, keeping your Antivirus up to date is an imperative. The updates are of vital importance to detecting newly coded malware and staying one step ahead of cyber attacks. I promise you, if you’re to be unlucky and have your data compromised, a system restart will seem a small price to pay in hindsight.
A curious statistic is the percentage of people who do not make full use of their antivirus software. According to a 2015 report by OPSWAT, 91.3% of antivirus users hadn't run a full system scan via their installed antivirus product within the last seven days.
If I have to be honest with you, I am part of that 90+%. That said, if you’re often making use of public Wi-Fi hotspots, a full or quick scan of your system might be a good idea.

10. Update your System & Browser

Out of date browsers may harbor loopholes for hackers to exploit. Make sure, whichever browser you use, it’s kept up to date.
Having said that, not all browsers are created equal. Take a look at this 2016 vulnerabilities chart.
Browser Security
More important than keeping your browser up to date is making sure your system itself has been updated. Whether you’re running Windows, OS X, Android or iOS, checking your up to date status is a healthy habit you should cultivate.

11. Implement Two Factor Authentication

Two Factor Authentication is a genius way to ensure you and only you can access websites or services which require a password.
Two Factor Authentication Systems
A Two Factor Authentication system connects your account to your phone number. Any login attempt will automatically prompt an authentication code to be sent to your phone by text message (free of charge), which you will then have to type in next to your password in order to log in.
Twitter Two Factor Authentication
Though a bit cumbersome at times, Two Factor Authentication has saved me more than once.
63 times to be precise. Since early 2015, someone has attempted to hack my Twitter 63 times (Yes, I counted)
Notice the time stamps, all codes sent at 9:58. Clearly a hacker.
I admire his or her persistence, the last hacking attempt was made just two days ago. Still, no cigar. Bless Two Factor Authentication!

12. Keep Passwords Unique

One of the biggest self-imposed vulnerabilities is using the same password for every service, account and website we use. Once this password leaks, everything we fight hard to protect is exposed.
Tell me if this sounds familiar: you have two or three "main" passwords, each with several variations. One site requires 8-character passwords, another requires numbers and special symbols, and a third one wants fewer than 8 characters with no special symbols at all.
I do my forearm exercises at the gym and college tested my memory skills enough, thank you very much.
Now imagine if every password you used had to be unique to that specific website. Two weeks ago I cleared my browser history, including my saved passwords.
Since then, my Chrome's Password Auto-fill has saved passwords for 56 sites (yeah, I counted again…). Now, I may be a bigger nerd than most people; still – 56 password-protected sites. Imagine having a unique password for every one of these.
Saving your passwords to a notepad is the biggest no-go, since that is the first thing intruders will look for. On the flip side, if a hacker does get access to your password and it’s the same password you use everywhere, you are in BIG trouble.
Data Breach Password Statistics
(Source: Keeper)
The solution to this problem is a secure password manager. A password manager will create high-strength, random passwords, store them securely and auto-fill them, across any and all devices.
Since we’re best friends, let me level with you; do I use a password manager? No.
Should I? Yes, and so should you.
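Until you adopt one, a hedged stopgap for minting a strong, random password is the OpenSSL command line, which is installed on most systems:
openssl rand -base64 20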

13. Use Mobile Data Instead

While your cell phone isn’t hack-proof, hackers who are targeting open networks won’t be able to hijack your mobile data.
This is useful in emergency situations where you’re forced to engage in high-profile activities.
Keeping that in mind, I feel the need to note that any device connected to the internet can never be 100% unhackable and is potentially susceptible to outside attacks.
At least if your mobile data gets hacked you have someone to blame, free iPhones anyone?

14. Turn Off Wi-Fi When Not in Use

I have to admit something – I lied. There is one thing that trumps VPNs – turning off your Wi-Fi.
*Ba Dum Tss*
Dad jokes aside, turning off your Wi-Fi when not in use will block any and all attempts at tampering. It's your ultimate defense. If you're sitting in a train or airport, watching a movie, turn your Wi-Fi off.
Not only will you be safe from no-gooders, you will also enjoy the added benefit of power savings, prolonging your battery life.
Two birds with one stone? Don’t mind if I do.

15. “Forget” Public Networks

As a rule of thumb, you will want to disable auto-connect and delete public Wi-Fi networks once you're done with them.
You can take all the preventative measures in the world, but if you are forced to reset your laptop, reinstall your operating system or otherwise, you have to do it all over again. In these instances, making sure your system doesn’t connect prematurely, before you’ve repeated all security steps, is of utmost importance.
How to “Forget” Public Networks on Windows
  1. Open your Control panel and click on Network and Internet.
  2. Click through to Network and Sharing center.
  3. In the left panel, click on Change adapter settings.
  4. In the new window, double click on your Wi-Fi connection. Another window will open.
  5. Click on the fourth option: Manage known networks.
Do the following for all public networks:
Select a network and click on Forget. This will ensure you have to connect manually next time.
How to forget a network
How to “Forget” Networks on MacBook
  1. Click on the Wi-Fi symbol on the top menu bar and
  2. Then, click on Open Network Preferences at the bottom of the drop down menu.
  3. Click on Wi-Fi in the menu on the left and click Advanced at the bottom right of the new window.
  4. Select the Wi-Fi network you want to forget, and click the minus sign (or use the Terminal commands below).
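The same cleanup can be scripted from Terminal (a sketch; en0 is the usual Wi-Fi interface on a MacBook and the SSID shown is a placeholder):
networksetup -listpreferredwirelessnetworks en0
sudo networksetup -removepreferredwirelessnetwork en0 "CoffeeShopWiFi"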

16. Don’t Install Connection Clients

Unless you’re staying at a hotel or an otherwise trusted Wi-Fi provider, downloading and installing a client in order to connect to a free Wi-Fi network is both unnecessary and fishy.
The primary purpose of a Wi-Fi client is the ability to meter your bandwidth usage. This is entirely useless unless you're paying for your Wi-Fi, or the hotspot provider wishes to limit your internet usage – blocking torrenting, for example.
While this makes sense for a respected hotel chain or paid network, you should be wary anytime this is required by a free network.

17. Find Best Network

Piggybacking off #13, you want to make sure you're choosing the best network to connect to. Paid is not always better than free, and free can often be twice as cumbersome as paid.
If you find yourself in the Parisian Charles de Gaulle airport and you don't wish to pay their exorbitant "20 min for €2.90" fees for Wi-Fi, make sure to shop around. You'll find nearby coffee shops with their own Wi-Fi.
Always make sure you’re choosing the best network. I would rather connect to a free network without installing a download client than an expensive “more secure” one.

18. Use TOR

Tor is free software which enables you to browse anonymously via a volunteer-operated network of servers. Named after the original project, "The Onion Router", Tor relays your data through several points before sending it to your destination. This effectively conceals your location and negates surveillance, tracking and tracing efforts.
TOR Network Structure

Tor is also associated with the "deep web" or "dark web", as once you connect to it, you can browse to websites accessible only within the Tor network. These websites often have obscure, randomly generated domain names and can be distinguished by a .onion ending instead of a .com.
Here is the domain name of the anonymity search engine “DuckDuckGo”: http://3g2upl4pq6kufc4m.onion
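If you have the tor daemon running locally, a hedged way to reach that address from a normal shell is the torsocks wrapper, which routes a command's traffic through Tor:
torsocks curl -s http://3g2upl4pq6kufc4m.onion | head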
There are several disadvantages to using Tor. The first is the very long loading times: with your traffic having to be bounced between several servers, webpages take an eternity to load.
Tor was created in conjunction with the U.S. Navy, and many government agencies use Tor. The network is widely used in countries with censorship laws – by political refugees and journalists, but also by criminals.
Due to the anonymity of the network, many criminals have made it their safe haven. Drugs, guns, false identities and worse are sold on certain websites within the Tor network.
While governments can't simply track what you do on Tor, the network isn't foolproof. Certain software vulnerabilities and website admin errors can be and are exploited by government agencies. In 2013, Tor's biggest black market website – The Silk Road – was busted by the FBI.

19. Read Free Wi-Fi TOC

I can hear your sigh, but stay with me. You’ll want to read this.
In June 2014 the Cyber Security Research Institute conducted an experiment in some of the busiest neighborhoods in London. Backed by the European law enforcement agency Europol and sponsored by F-Secure, security researchers set up a free Wi-Fi hotspot and tested just how attentive Londoners really are.
All you had to do to connect was accept a Terms of Service agreement. Buried in the fine print, however, there was a clause which stated "the recipient agreed to assign their first born child to us for the duration of eternity."
Six Londoners agreed.
This tongue-in-cheek experiment highlights the serious risks you willingly accept in order to connect to free Wi-Fi.
Nobody reads Terms of Service agreements, but you would be well-advised to skim them over and look for any irregularities. Terms of Service are legally binding; the last thing you want to do is connect to a hacker's public Wi-Fi network and agree to hand over sensitive data such as GPS locations or personal information.
Oh, and by the way, Finnish security firm F-Secure said it had decided not to enforce the clause. Phew!

20. Buy a Privacy Screen/Filter

Privacy Filter Viewing Angles
The first time I saw a privacy screen in action, the look on my face must have been awesome.
This guy in a suit was staring at a black screen and typing away. Little did I know, he was using a privacy filter which severely restricts viewing angles. The laptop screen is visible only when looked at dead on. Such filters are often used by bank tellers and installed in ATMs.
Procrastinating roommates, nosy coworkers or even fellow fliers – prying eyes are everywhere. Not only does it become annoying and uncomfortable to have your screen stared at, it's downright dangerous in a public environment where you can't always know just who is looking over your shoulder.

Conclusion

With ever faster internet speeds, wider coverage and the expectation of free Wi-Fi from every millennial, it has become imperative to form good internet security habits and be wary of the dangers that one big, connected globe harbors.
I sincerely hope this article has helped you find your preferred method of keeping yourself safe on the inter-webs. Please share it with your friends and colleagues and let me know about your experiences with public Wi-Fi.
Surf Safe,
Andrey

The Ultimate Guide for Online Privacy

https://www.vpnmentor.com/blog/ultimate-guide-online-privacy

Thanks to Edward Snowden releasing the documents regarding NSA spying activity, we now have an idea of just how vulnerable we are when we are online. The worst part is that it is not only the NSA that is spying on people. There are many governments working to enact laws that would allow them to watch and store information from their citizen’s online activity, as well as what they say over phone calls and write in text messages. While it seems like a dystopian world described by George Orwell is, in fact, upon us, there are still ways that you can protect yourself from the all-seeing eyes of Big Brother.
Encryption is one of the best ways to protect your online behavior, data, and communication. While this method is very effective, it usually flags you to the NSA and other organizations, inviting them to monitor you more closely.
There have been changes to the guidelines surrounding data collection by the NSA. However, the rules and guidelines still allow the NSA to collect and examine the data. The encrypted data is stored until the NSA decrypts it and discards any uninteresting data. For non-US citizens, all of the collected data is allowed to be kept permanently, but for practical reasons, the data that is encrypted will be processed first.
The reality is that if more people used encryption for day-to-day browsing, encrypted data would be less of a red flag. This would make the job of the NSA and other surveillance organizations around the world much harder, as it would take an exorbitant amount of time to decrypt all the data.

Is Encryption Secure?

Since more people started using encryption, the NSA has been working to decrypt the data in a shorter amount of time. At the same time, there are new encryption types that are being worked on to protect data. Encryption is never going to be perfectly secure, but lengthening the amount of time it takes to decrypt the data can help to protect it for a longer time. Let’s look at what is affecting the security of encrypted code.

NIST

The National Institute of Standards and Technology (NIST) works with the NSA to develop new ciphers. It has also developed and/or certified RSA, SHA-1, SHA-2, and AES.
Knowing that the NSA has worked to weaken and leave backdoors into many of the international encryption standards, you should question the integrity and strength of the cipher algorithms that NIST creates or certifies.
NIST denies that it has done anything wrong, and has invited the public to participate in many upcoming events about encryption-related standards. However, both of these moves are just ploys to raise public confidence in its work.
Nearly all trust in NIST was destroyed when news was released that the NSA weakened the Dual Elliptic Curve algorithm, which was NIST’s certified standard for cryptographic programs, twice.
There could also have been a deliberate back door in the Dual Elliptic Curve algorithm. In 2006, at Eindhoven University of Technology, researchers noted that it was simple to launch an attack from any ordinary computer.
While there are many concerns with NIST, many of the industry leaders still have these algorithms in their cryptographic libraries. One major reason for this is that they have to be compliant with NIST standards to obtain contracts from the US government.
With just about every industry in the world using NIST-certified standards, it is rather chilling to think about all the data that is at risk. There is the possibility that the reason we rely so heavily on these standards is that most cryptographic experts are not willing to face the problems in their field.

Key Length of the Encryption

The key length of a cipher is the simplest and crudest way of estimating how long the cipher would take to break; the key itself is just a string of zeros and ones. Likewise, the simplest and crudest way of breaking a cipher is the exhaustive key search method, or brute force attack, which tries every possible key until the right one is found.
Anyone can attempt to break a cipher, but modern ciphers are far more complex than traditional ones, and even the NSA still has problems decrypting data using exhaustive key search alone. Before the leaks from Edward Snowden, it was assumed that breaking 128-bit encryption by brute force was practically impossible, and that, going by Moore’s Law, it would be at least another 100 years before it became possible.
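To get a feel for the numbers, here is a rough back-of-the-envelope calculation with bc, assuming a purely hypothetical rate of one trillion key guesses per second:
# years needed to exhaust a 128-bit keyspace at 10^12 guesses per second
echo "2^128 / (10^12 * 60 * 60 * 24 * 365)" | bc
# prints a 20-digit number - on the order of 10^19 years
Even if the guess rate were off by a factor of a million, the result would still dwarf the age of the universe, which is why attackers tend to target implementations and key management rather than the keyspace itself.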
This theory still holds some truth; however, the amount of resources that the NSA puts into cracking all types of encryption has shaken many encryption experts’ faith in this prediction. This has also led to system administrators scrambling to update and upgrade the key lengths of their ciphers.
With the reality of quantum computing becoming available in the near future, we also have to face the reality that encrypted data could be a thing of the past. The reason for this is that quantum computers will be exponentially faster and more powerful compared to any other computer in existence. In just a few hours or days after the release of quantum computers, all the suites and ciphers will become redundant.
There is a theory that posits that the issue will be fixed by the creation of quantum encryption. This is not going to be the case for some time, however, since quantum computers will be very expensive when they are released. This price tag means that only well-funded businesses and wealthy governments will have this technology. Therefore, at the current time, having strong encryption can still protect your data.
We should note that both the NSA and the United States government use 256-bit encryption for sensitive data, along with 128-bit for all other data. This method uses AES, which we will discuss later because it has many problems of its own.

Ciphers

The key length is the number of bits that make up an encryption key, whereas the cipher is the mathematical algorithm that is used. The strength, or lack thereof, of the cipher’s algorithm is what allows the encryption to be broken.
The most commonly used ciphers that you are likely to have encountered are AES and Blowfish. There is also RSA, which is used to decrypt and encrypt ciphers’ keys. There is also SHA-1 and SHA-2, which are used to authenticate encrypted data and connections.
AES is considered to be the most secure cipher for use in VPNs and is used by the United States government. While AES seems reliable and secure, you should be using more than just AES to protect yourself.

What Can You Do To Improve Your Privacy?

Knowing that nearly all types of encryption can be broken if someone was motivated enough, we can better protect our privacy. Even though not all the recommendations will work perfectly every time, they will improve your online privacy overall.

Anonymizing Internet Use

There are two popular ways in which you can anonymize your use of the internet: using a VPN or using the Tor network. Both hide your internet use from your Internet Service Provider (ISP) and the government. They also hide your location and identity from the websites and services that you visit and use.
While these technologies sound like they serve similar purposes, there is only a tiny amount of overlap. They are used for very different purposes and are coded very differently.

VPNs

Most people use VPNs to hide their internet usage from their ISP and the government. They can also be used to avoid censorship, to “geo-spoof” in order to access the websites of other countries, as well as to protect you from hackers when using public Wi-Fi hotspots.
Depending on the VPN that you are using, you may have to sign up and pay for the service, with most setting you back about $5 to $10 per month. While a VPN provides you with a high level of internet privacy, it does not provide any level of anonymity, because the VPN provider knows what you are doing online.
Check out VPNMentor’s top VPNs here.

Tor Network

If you require a high degree of anonymity online, the Tor network is a great option. However, you lose a lot of the usability of the internet that we use daily. The Tor Network is free to use and is a useful system, as you do not provide your information to anyone. This has made it a popular anti-censorship tool. There are governments that have tried to block the Tor Network, but they have not always been successful.

Using VPN and Tor Together

If you are willing to do some work, you can use Tor and a VPN at the same time. You will need to find a VPN that supports Tor and install the VPN using their guide.

More Ways to Stay Anonymous Online

Tor and VPNs are the most known and popular ways to stay anonymous and avoid censorship, but there are many other ways to do this. The use of proxy servers has become a popular option, but they provide about the same level of anonymity as a VPN.
Psiphon, I2P, Lahana, and JonDonym are all services that could be of interest. Most of these can be used with a VPN and/or Tor for higher levels of security.

Stop Using Search Engines That Track You

Many of the most widely used search engines store information about you. This is especially true for Google. The information that is stored includes your IP address, the time and date you use the website, the search terms, and your computer’s Cookie ID.
The gathered information is then transmitted to the web page owner and the owners of any advertising on the website. This allows these advertisers to collect data on you while you surf the internet. The collected data is then used to create a profile about you, which they use to create targeted advertisements based on your internet history.
Along with giving this data to the website and advertising owners, search engines have to hand over the collected information to courts and governments. This is only done if the information is requested, but these requests are becoming more frequent.
However, some search engines do not collect data on their users. One of the most popular is DuckDuckGo. Along with not collecting your data, this non-tracking search engine avoids the ‘filter bubble’. Many search engines will use your past search terms and other information, like your social media, to profile you. They do this to order the results in a way that puts the websites that are most likely to interest you first. In some cases, this produces search results based on your believed point of view, known as a ‘filter bubble’. Therefore, the alternative options and viewpoints are downgraded, which makes them hard to find. Filter bubbles are dangerous as they limit knowledge and confirm prejudices.

Clearing Your Google Search History

If you are worried about the information that Google is keeping about you, you can clear your Google search history, if you have a Google account. This is not going to stop anyone from spying on you or gathering information on you, but it limits Google’s ability to profile you. Regardless of whether you plan to switch to a non-tracking search engine or stay with Google, you should clear your search history from time to time.
This is simple to do; you simply need to sign into your Google account on www.google.com/history. After logging in, you will find a list of your previous searches. From here, you can select the items that you would like to remove and click the ‘Remove Items’ button.

Anonymity While Making Online Purchases

The first step towards improving your online privacy is paying anonymously. You will still need to provide an address for physical items, so you will not be perfectly anonymous online. Even if you switch to buying only local goods and pay for the items with cash, you are not fully anonymous.
Luckily, the use of Bitcoin and other online services is becoming more common.

Bitcoin

Bitcoin is the largest open source and decentralized virtual currency at the moment. It operates using a type of peer-to-peer technology, which is conducive to online anonymity since no middle man is involved. There are many debates about whether investing in Bitcoins is wise or not. Bitcoin acts like any other currency (meaning it has the potential to lose or gain value at any time) and can be used to buy items and traded as normal currency online. It is not yet ubiquitous, but more businesses are starting to accept it as a valid form of payment.

Other Forms of Anonymous Payment

If buying Bitcoins is not for you, another option is to use pre-paid credit cards from one of many online stores. There are also many options for crypto-currencies other than Bitcoin, but Bitcoin is the most stable as well as the most popular.

Securing Your Browser

The NSA is not the only entity that wants your information; there are also advertisers. They often use sneaky methods to track you, putting a profile together so they can advertise their items to you, or even sell the information that they have on you.
Many people who use the internet know about HTTP cookies. Clearing HTTP cookies is very simple, and most browsers have a Private Browsing mode, such as Chrome’s Incognito Mode. Using this mode stops the browser from saving the internet history and blocks the HTTP cookies. Using this tactic is a good idea when you are browsing the internet, and a step in the right direction, but it is not enough to fully stop the tracking.

Clear Your DNS cache

To speed up your browsing, browsers cache the website IP addresses that are received from your DNS server. Clearing the DNS cache is very simple:
Windows: ipconfig /flushdns (in the command prompt)
OSX 10.5 and above: dscacheutil -flushcache (in the Terminal)
OSX 10.4 and below: lookupd -flushcache (in the Terminal)

Flash Cookies

Flash Cookies are used for some insidious purposes and are sometimes not blocked by disabling cookies. Flash Cookies track you in a similar manner to regular cookies, and they can be located and manually deleted. CCleaner is one of the best options for removing Flash Cookies, along with other trash that is on your computer.
Since most people now know about Flash Cookies and Zombie Cookies, the use of these cookies is on the decline. There are also many browsers that block them when you choose to block cookies. However, they are still a threat even with their lower numbers.

More Internet Tracking Technologies

Due to the amount of money that can be made on the internet by companies tracking their users, more sophisticated and devious methods have been developed in the last few years.

ETags

ETags are markers used by browsers to track changes in the resources at specific URLs. Comparing the changes in the markers allows a website to track you and create a fingerprint. ETags have also been used to create respawning HTML and HTTP cookies, which will also track your browsing.
ETags are nearly impossible to detect, so there is no reliable method of prevention. Clearing your cache after every website and turning off your cache altogether can work; however, these methods are time-consuming and have negative effects on the browsing experience. Firefox has an add-on called Secret Agent that prevents ETag tracking.

History Stealing

This is where a truly scary form of tracking comes in. History stealing exploits the way that the internet is designed, allowing any website to find out your whole browsing history. This is done using a very complicated method and is very bad news, since it is nearly impossible to prevent. The information that is found can be combined with profiles on social networks to create a profile about you.
There is some good news, however: while history stealing can be very effective, it is not reliable. Using a VPN or Tor to mask your IP address makes it much harder to find your identity using just web behavior.

Browser Fingerprinting

Browser Fingerprinting looks at the configuration of your browser and your operating system to track and uniquely identify the user. The major problem with this is that the more measures you use to avoid tracking, the more unique you become to browser fingerprinting.
To avoid browser fingerprinting, you should use the most common operating system and browser that you can. This will, however, leave you open to other types of attacks, as well as limit the functionality of the computer. For most, this is not very practical.

HTML Web Storage

Web Storage is built into HTML5, which is used by most websites. The problem with web storage is that it is much better at storing information than cookies are, and the stored information cannot be monitored or selectively removed the way cookies can.
For all internet browsers, web storage is enabled by default. If you are using Internet Explorer or Firefox, you can simply turn off web storage. You can also use the add-on Better Privacy for Firefox, which removes the web storage information automatically. The Click and Clean extension and Google NotScripts can be used for Google Chrome.

Email

Securing Emails

While most email providers use SSL encryption for the connection between your computer, the email servers, and the other person’s computer, Google has worked to fix the weaknesses that SSL has.
The problem is that many email providers are handing over information to the NSA, especially Microsoft and Google. At the moment, it looks like the smaller email providers are not doing this as of yet. However, with the NSA and other government surveillance extending their reach, this will most likely change in the near future.
The simplest solution to this growing problem of email encryption is to encrypt email in a way that enables only the sender and the recipient to decrypt it. This can be hard to implement, since all of your contacts would have to be willing to adopt it too. It is simply not a feasible option at this point.
Another problem with encrypted email is that not everything is encrypted. The email addresses of the sender and recipient, the time and date when the email was sent, and the subject line are all left unencrypted; only the attachments and the message body are encrypted. This information can be very damaging in the wrong hands.
If you are worried about the NSA spying on you, be aware that encrypting your email more than it already is can itself attract attention. The NSA is most likely going to store that email and your other emails until it has the time to decrypt and read them.

Encrypting Your Emails with GNU Privacy Guard

If you want to encrypt your emails, there are many programs that can help you do so. The most famous is Pretty Good Privacy or PGP; however, GNU Privacy Guard or GPG is recommended, as Symantec now owns PGP.
GPG is open source and is compatible with PGP. It is available on Linux, OSX, and Windows. While the program uses a command line interface, there are more sophisticated front ends: GPGTools for Mac and Gpg4win for Windows. There is also Enigmail, which adds the functions of GPG to the SeaMonkey and Thunderbird email clients.
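As a minimal sketch of the GPG workflow on the command line (the file names and recipient address here are placeholders, and the recipient must have sent you their public key first):
# generate your own key pair (interactive prompts)
gpg --gen-key
# import the recipient's public key
gpg --import alice_pubkey.asc
# encrypt a message so only the recipient can read it; --armor produces text you can paste into an email
gpg --encrypt --armor --recipient alice@example.com message.txt
# this writes message.txt.asc; the recipient decrypts it with their private key
gpg --decrypt message.txt.asc
The front ends mentioned above wrap exactly these operations in a graphical interface.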

Encrypting Webmail

Hushmail was, for a long time, considered the best service for encrypted webmail, as it used PGP encryption in its web-based email service. The problem is that in 2007, the owners used a backdoor to gather emails from three accounts. The data was then handed over to the Canadian courts.
All web-based services can be modified to capture their users’ decryption keys. It is recommended that you use desktop tools like GPG instead.

Encrypting Gmail

If you are using Gmail, you can use Encrypted Communication, which is a Firefox extension. This extension provides 256-bit AES encryption. After the extension is installed, you can just write the email. Once you have finished the email, you can simply right-click on the text and select “Encrypted Communication.” You will need to input a password that the recipient knows, so the message can be decrypted. You should transmit the password using a communication method other than email.
A more secure option is Mailvelope. This service provides full OpenPGP encryption on email services like Hotmail, Gmail, GMX, and Yahoo!, using Chrome and Firefox.
Check out our blog on creating a (practically) uncrackable password here.

Cloud

Cloud Storage

With internet speeds becoming faster, the price of cloud-based storage has become cheaper. This has also led to reduced memory requirements for most devices, and more devices using cloud storage. While cloud storage has been a great move, the question remains how secure the cloud is. Most of the big providers of cloud storage have worked with the NSA in the past. This includes Dropbox, Amazon, Apple, Google, and Microsoft. Most also state in their terms of service that they reserve the right to investigate all uploaded files, and will hand over the files to authorities if they receive a court order. While this is not going to affect most people, the idea of someone looking through our files is creepy, to say the least.
If you want to make sure that your files in the cloud are secure, there are some basic approaches that you can use.

Manually Encrypt the Files before Uploading

The most secure and simple method is to manually encrypt the files, and there are many programs that you can use to do so. The major advantage of this method is that you can use all Cloud storage services without having to worry about your files. As long as you do not upload your encryption keys, your files are safe.
The downside is that some encryption software needs to be online to decrypt your files. This is the case with Wuala and SpiderOak.
There is also the option to use a less mainstream cloud provider. There are many cloud storage providers, and with the technology getting cheaper, there will be more in the future.
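As one possible sketch of the manual approach using GPG's symmetric mode (backup.tar.gz is a placeholder file name):
# encrypt locally with a passphrase; AES256 is the cipher used
gpg --symmetric --cipher-algo AES256 backup.tar.gz
# upload only the resulting backup.tar.gz.gpg; to restore later:
gpg --output backup.tar.gz --decrypt backup.tar.gz.gpg
Since the passphrase never leaves your machine, the cloud provider only ever sees ciphertext.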

Use Cloud Services that Automatically Encrypt

There are some Cloud services that will automatically encrypt the files before they are uploaded to the Cloud. The changes that are made to folders or files are synced with the local versions, then secured and uploaded to the Cloud. There is the chance that the provider has the decryption key, so your data is still at risk; however, this risk is not as high as other Cloud service providers.
Wuala and SpiderOak have apps for Android and iOS. This allows for easy syncing of files to all your mobile devices and computers. There is a small security issue due to the fact that both store your password to authenticate you and direct your files. Wuala uses the same password to decrypt and encrypt files when you are using your mobile device. However, both services delete your password after your session is completed.
There is also Mega, which has been high-profile since it is owned by Kim Dotcom. The service offers 50GB of free encrypted space. Mega uses whatever web browser you have to encrypt your files before they are uploaded, then decrypts the files after they are downloaded. This is one of the most convenient options, but Mega is not as secure as the other methods that we have covered.

Cloudless Syncing With BitTorrent Sync

BitTorrent Sync is free, and was released to the public after a long public testing phase. It is designed to be a replacement for Dropbox. However, BitTorrent Sync does not store files in the Cloud to sync them.
It is very simple to use: you just select the files that you want to share, and BitTorrent Sync will give you a password for them. After you have the password, you can link that folder to a folder on another device, as long as BitTorrent Sync is installed there. You can do this with any number of folders using the same method. The encryption protocol that is used is P2P 256-bit AES.
While BitTorrent Sync is easy to use and free, its limitations mean that it is not a true Cloud-based service, and cannot be used to store data for long periods of time. Depending on your ISP, you could be charged for the extra bandwidth that you are using.

Anti-Malware, Antivirus and Firewall Software

Anti-Malware

There is a huge amount of malicious code on the internet; this is commonly known as malware. If you are not a Windows user, you do not really need to worry much about malware. However, if you do use Windows, it is advisable to have anti-malware software installed. Windows Defender comes installed on all versions of Windows newer than Vista. There are also Malwarebytes and Spybot Search and Destroy, both of which are free.

Antivirus

This is the first program that you should install on a new computer, or after a clean install of an operating system. Viruses can not only mess up your computer, but also have the potential to let hackers in and give them access to everything on it. They can also install keyloggers, which record all your keypresses, enabling your banking information to be stolen.
While most people believe that hackers work for themselves, many of the best hackers in the world work for their country’s government. The Syrian government had a virus called Blackshades created and launched to spy on the people of Syria.
Most people have antivirus software on their computers, but most do not have antivirus software on their phones. At the moment, there are far fewer viruses on mobile devices than on computers. However, with smartphones and tablets becoming more powerful, we could see more attacks in the future. Phones with open-source systems, such as Android phones, are more susceptible than those with closed-source systems, such as iOS (Apple) phones.

Firewall

A firewall on a personal computer monitors the network traffic and can be configured to allow or disallow traffic to your computer. Firewalls can be a pain from time to time, but they are important because they help ensure that no unwanted program or other software is accessing your computer. The problem is that firewalls have a hard time determining which programs are safe and which ones are malicious. However, once you have them set up, they are simple to use.

Securing Your Voice Conversations

Before we talk about anything else, we should make it clear that regular phone calls are not secure, and there is no way to make them secure. Governments around the world are working to record all their citizens’ phone conversations. Unlike using the internet and sending emails, which can usually be protected to at least some extent, phone calls cannot be protected in any way.
There are disposable and anonymous ‘burner phones’ on the market; the problem is that unless you are only calling other ‘burner phones’, data will be collected, which makes those phones very suspicious.

Encryption of VoIP

If you want privacy during a voice conversation, you will need to use encrypted VoIP. VoIP allows you to make phone calls using the internet. In the last few years, VoIP has become popular, due to the fact that it provides free or cheap calling to anywhere in the world. Skype is the largest VoIP provider in the world; however, Skype is a perfect example of the problem that most of these services have, because there is a middleman that can hand over the conversations to the government. This has happened in the past with Skype, as Microsoft owns Skype and gives the NSA a backdoor into Skype conversations.
Like with email, a VoIP needs end-to-end encryption. This does not allow outside sources into the conversation, meaning that the conversation is private. Many popular VoIP services like Jitsi have built-in chat features, and many are designed to offer most of the features of Skype, making them a great replacement.

Do Not Have a Cellphone

While most people cannot bear to be without their cellphones for more than a few minutes, if you do not want to be tracked, you need to get rid of your cellphone. Unless you have an early-era cell phone, your cell phone is tracking you, and it’s not just Google Now and GPS tracking it; it’s also the phone providers and the cellular towers. If you really do not want to be tracked, you will need to leave your cell phone at home, or you can buy an RFID signal-blocking bag. You can also stop Google Now from tracking you by turning off your Google History.

How Much Is Privacy Worth?

This question is worth considering. All the measures described above take time and effort to use every day, and some can bring special attention from the NSA. Additionally, some of the precautions we suggested taking could cause you to lose some of the cooler functions from web-based services that need cookies and other data to perform well.
Google Now is one of the best examples of information being used for good. Google Now is software that can anticipate the information that you need, effectively acting as a “personal assistant” of sorts. Google Glass was designed to use the Google Now software, so it could store information to provide you with better recommendations in the future.
At the moment, the interesting and exciting developments in how we interact with computers rely on allowing all your data to be stored and analyzed by a company. By using privacy protection when we are online, we turn off this technology.
The question of ‘how much is privacy worth?’ is going to be an ongoing discussion as more powerful technology is developed. There will always be a cost for privacy, so you need to know what compromises you are willing to make, as well as which you are not. While privacy is a basic human right, technology has made it harder to maintain it. Businesses and governments are always gathering data to learn more about you. In summation, even taking basic security measures can help protect you, and could force change in the future.

Linux yes Command Tutorial for Beginners (with Examples)

$
0
0
https://www.howtoforge.com/linux-yes-command

Most of the Linux commands you encounter do not depend on other operations for users to unlock their full potential, but there exists a small subset of command line tools which, you could say, are useless when used independently, yet become a must-have or must-know when used with other command line operations. One such tool is yes, and in this tutorial, we will discuss this command with some easy to understand examples.
But before we do that, it's worth mentioning that all examples provided in this tutorial have been tested on Ubuntu 16.04 LTS.

Linux yes command

The yes command in Linux outputs a string repeatedly until killed. Following is the syntax of the command:
yes [STRING]...
yes OPTION
And here's what the man page says about this tool:
Repeatedly output a line with all specified STRING(s), or 'y'.
The following Q&A-type examples should give you a better idea about the usage of yes.

Q1. How does the yes command work?

As the man page says, the yes command produces continuous output - 'y' by default, or any other string if specified by the user. Here's a screenshot that shows the yes command in action:
How yes command works
I could only capture the last part of the output, as the output scrolls very fast, but the screenshot should give you a good idea of what kind of output the tool produces.
You can also provide a custom string for the yes command to use in output. For example:
yes HTF
Repeat word with yes command

Q2. Where does the yes command help the user?

That's a valid question, because from what yes does, it's difficult to imagine the usefulness of the tool. But you'll be surprised to know that yes can not only save your time, but also automate some mundane tasks.
For example, consider the following scenario:
Where yes command helps the user
You can see that the user has to type 'y' for each query. It's in situations like these that yes can help. For the above scenario specifically, you can use yes in the following way:
yes | rm -ri test
yes command in action
So the command made sure the user doesn't have to type 'y' each time rm asks for it. Of course, one could argue that we could have simply removed the '-i' option from the rm command. That's right; I took this example as it's simple enough to make people understand the situations in which yes can be helpful.
Another - and probably more relevant - scenario would be when you're using the fsck command, and don't want to enter 'y' each time the system asks your permission before fixing errors.
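For instance, something along these lines (a sketch - /dev/sdb1 is a placeholder for an unmounted partition you want checked):
# answer 'y' to every repair prompt fsck raises
yes | fsck /dev/sdb1
Note that fsck also has its own -y option for exactly this, but the yes approach works with any interactive tool that lacks such a flag.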

Q3. Is there any use of yes when it's used alone?

Yes, there's at least one use: to tell how well a computer system handles high amounts of load. The tool utilizes 100% of the processor on systems that have a single processor. If you want to apply this test on a system with multiple processors, you need to run one yes process per processor.
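A rough way to do that on a multi-core machine might look like this (a sketch; remember to kill the processes when you are done):
# start one yes process per CPU core, discarding the output
for i in $(seq $(nproc)); do yes > /dev/null & done
# watch the load with top, then clean up
killall yes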

Q4. What command line options does yes offer?

The tool only offers generic command line options: --help and --version. As the names suggest, the former displays help information related to the command, while the latter outputs version related information.
What command line options yes offers

Conclusion

So now you'd agree that there could be several scenarios where the yes command would be of help. There are no command line options unique to yes, so effectively, there's no learning curve associated with the tool. Just in case you need it, here's the command's man page.

How to Deploy a MongoDB Sharded Cluster on CentOS 7

$
0
0
https://www.howtoforge.com/tutorial/deploying-mongodb-sharded-cluster-on-centos-7

Sharding is a MongoDB process to store a data set across different machines. It allows you to scale data horizontally and to partition all data across independent instances. Sharding allows you to add more machines to your stack as your data grows.

Sharding and Replication

Let's make it simple. When you have collections of music, 'Sharding' will save and keep your music collections in different folders on different instances or replica sets while 'Replication' is just syncing your music collections to other instances.

Three Sharding Components

Shard - Used to store all data. In a production environment, each shard is a replica set. This provides high availability and data consistency.
Config Server - Used to store cluster metadata; it contains a mapping of the cluster data set to the shards. This data is used by the mongos/query server to deliver operations. It's recommended to use at least 3 instances in production.
Mongos/Query Router - These are just mongo instances running as an application interface. The application makes requests to the 'mongos' instance, and 'mongos' then delivers the requests to the shard replica sets using the shard key.

Prerequisites

  • 2 CentOS 7 servers as the Config Server Replica Set
      • 10.0.15.31      configsvr1
      • 10.0.15.32      configsvr2
  • 4 CentOS 7 servers as Shard Replica Sets
      • 10.0.15.21      shardsvr1
      • 10.0.15.22      shardsvr2
      • 10.0.15.23      shardsvr3
      • 10.0.15.24      shardsvr4
  • 1 CentOS 7 server as mongos/Query Router
      • 10.0.15.11       mongos
  • Root privileges
  • Each server able to connect to the other servers

Step 1 - Disable SELinux and Configure Hosts

In this tutorial, we will disable SELinux by changing the SELinux configuration from 'enforcing' to 'disabled'.
Connect to all nodes through OpenSSH.
ssh root@SERVERIP
Disable SELinux by editing the configuration file.
vim /etc/sysconfig/selinux
Change SELINUX value to 'disabled'.
SELINUX=disabled
Save and exit.
Next, edit the hosts file on each server.
vim /etc/hosts
Paste the following hosts configuration:
    10.0.15.31      configsvr1
    10.0.15.32      configsvr2
    10.0.15.11      mongos
    10.0.15.21      shardsvr1
    10.0.15.22      shardsvr2
    10.0.15.23      shardsvr3
    10.0.15.24      shardsvr4
Save and exit.
Now restart all servers using the reboot command.
reboot

Step 2 - Install MongoDB on all instances

We will use the latest MongoDB version (3.4) for all instances. Add new MongoDB repository by executing the following command:
cat <<'EOF'>> /etc/yum.repos.d/mongodb.repo
[mongodb-org-3.4]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/3.4/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-3.4.asc
EOF
Now install MongoDB 3.4 from mongodb repository using the following yum command.
sudo yum -y install mongodb-org
After mongodb is installed, you can use the 'mongo' or 'mongod' command.
mongod --version
Check MongoDB version

Step 3 - Create Config Server Replica Set

In the 'prerequisites' section, we've already defined the config server with 2 machines, 'configsvr1' and 'configsvr2'. In this step, we will configure them as a replica set.
If there is a mongod service running on the server, stop it using the systemctl command.
systemctl stop mongod
Edit the default mongodb configuration 'mongod.conf' using the Vim editor.
vim /etc/mongod.conf
Change the DB storage path to your own directory. We will use '/data/db1' for the first server, and '/data/db2' directory for the second config server.
storage:
  dbPath: /data/db1
Change the value of the 'bindIP' line to your internal network address - 'configsvr1' with IP address 10.0.15.31, and the second server with 10.0.15.32.
bindIP: 10.0.15.31
On the replication section, set a replication name.
replication:
  replSetName: "replconfig01"
And under sharding section, define a role of the instances. We will use these two instances as 'configsvr'.
sharding:
  clusterRole: configsvr
Save and exit.
Next, we must create a new directory for MongoDB data, and then change the owner of that directory to 'mongod' user.
mkdir -p /data/db1
chown -R mongod:mongod /data/db1
After this, start the mongod service with the command below.
mongod --config /etc/mongod.conf
You can use the netstat command to check whether or not the mongod service is running on port 27017.
netstat -plntu
Configure MongoDB
Configsvr1 and Configsvr2 are ready for the replica set. Connect to the 'configsvr1' server and access the mongo shell.
ssh root@configsvr1
mongo --host configsvr1 --port 27017
Initiate the replica set with all configsvr members using the query below.
rs.initiate(
  {
    _id: "replconfig01",
    configsvr: true,
    members: [
      { _id : 0, host : "configsvr1:27017" },
      { _id : 1, host : "configsvr2:27017" }
    ]
  }
)
If you get the result '{ "ok" : 1 }', it means the config servers are configured as a replica set.
Initiate the replica set name
And you will be able to see which node is the master and which node is the secondary:
rs.isMaster()
rs.status()
see which node is master and which node is secondary
The configuration of Config Server Replica Set is done.

Step 4 - Create the Shard Replica Sets

In this step, we will configure 4 CentOS 7 servers as 'Shard' servers with 2 'Replica Sets'.
  • 2 servers - 'shardsvr1' and 'shardsvr2' with replica set name: 'shardreplica01'
  • 2 servers - 'shardsvr3' and 'shardsvr4' with replica set name: 'shardreplica02'
Connect to each server, stop the mongod service (if it is running), and edit the MongoDB configuration file.
systemctl stop mongod
vim /etc/mongod.conf
Change the default storage to your specific directory.
storage:
  dbPath: /data/db1
On the 'bindIP' line, change the value to use your internal network address.
bindIP: 10.0.15.21
In the replication section, use 'shardreplica01' for the first and second instances, and 'shardreplica02' for the third and fourth shard servers.
replication:
  replSetName: "shardreplica01"
Next, define the role of the servers. We will use all of these as shardsvr instances.
sharding:
  clusterRole: shardsvr
Save and exit.
Now, create a new directory for MongoDB data.
mkdir -p /data/db1
chown -R mongod:mongod /data/db1
Start the mongod service.
mongod --config /etc/mongod.conf
Check that MongoDB is running using the following command:
netstat -plntu
You will see MongoDB is running on the local network address.
MongoDB is running on the local network address
Next, create a new replica set for these 2 shard instances. Connect to the 'shardsvr1' and access the mongo shell.
ssh root@shardsvr1
mongo --host shardsvr1 --port 27017
Initiate the replica set with the name 'shardreplica01', and the members are 'shardsvr1' and 'shardsvr2'.
rs.initiate(
  {
    _id : "shardreplica01",
    members: [
      { _id : 0, host : "shardsvr1:27017" },
      { _id : 1, host : "shardsvr2:27017" }
    ]
  }
)
If there is no error, you will see results as below.
Results from shardsvr3 and shardsvr4 with replica set name 'shardreplica02'.
Redo this step for the shardsvr3 and shardsvr4 servers with the replica set name 'shardreplica02'.
Now we've created 2 replica sets - 'shardreplica01' and 'shardreplica02' - as the shard.

Step 5 - Configure mongos/Query Router

The 'Query Router' or mongos is just an instance that runs 'mongos'. You can run mongos with a configuration file, or with just a command line.
Login to the mongos server and stop the MongoDB service.
ssh root@mongos 
systemctl stop mongod
Run mongos with the command line as shown below.
mongos --configdb "replconfig01/configsvr1:27017,configsvr2:27017"
Use the '--configdb' option to define the config servers. If you are in production, use at least 3 config servers.
You should see results similar to the following.
Successfully connected to configsvr1:27017
Successfully connected to configsvr2:27017
mongos instances are running.
Configure mongos/Query Router
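For reference, with a third config server the '--configdb' string would simply grow (a sketch; 'configsvr3' is a hypothetical extra host that would also have to be a member of the 'replconfig01' replica set):
mongos --configdb "replconfig01/configsvr1:27017,configsvr2:27017,configsvr3:27017"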

Step 6 - Add shards to mongos/Query Router

Open another shell from the previous step, connect to the mongos server again, and access the mongo shell.
ssh root@mongos
mongo --host mongos --port 27017
Add the shard servers with the 'sh' MongoDB query.
For 'shardreplica01' instances:
sh.addShard( "shardreplica01/shardsvr1:27017")
sh.addShard( "shardreplica01/shardsvr2:27017")
For 'shardreplica02' instances:
sh.addShard( "shardreplica02/shardsvr3:27017")
sh.addShard( "shardreplica02/shardsvr4:27017")
Make sure there is no error and check the shard status.
sh.status()
You will see a sharding status similar to what the following screenshot shows.
Add shards to mongos/Query Router
We have 2 shard replica sets and 1 mongos instance running on our stack.

Step 7 - Testing

To test the setup, access the mongos server mongo shell.
ssh root@mongos
mongo --host mongos --port 27017
Enable Sharding for a Database
Create a new database and enable sharding for the new database.
use lemp
sh.enableSharding("lemp")
sh.status()
Enable Sharding for a Database
Now see the status of the database; it has been partitioned to the replica set 'shardreplica01'.
Enable Sharding for Collections
Next, add new collections to the database with sharding support. We will add a new collection named 'stack' with the shard key 'name', and then look at the database and collection status.
sh.shardCollection("lemp.stack", {"name":1})
sh.status()
Enable Sharding for Collections
A new collection 'stack' with shard key 'name' has been added.
Add documents to the collections 'stack'.
Now insert documents into the collection. When we add documents to a collection on a sharded cluster, we must include the 'shard key'.
In the example below, we are using the shard key 'name', as we defined when enabling sharding for the collection.
db.stack.save({
    "name": "LEMP Stack",
    "apps": ["Linux", "Nginx", "MySQL", "PHP"],
})
As shown in the following screenshots, documents have been successfully added to the collection.
Add documents to the collections 'stack'.
If you want to test the database, you can connect to the replica set 'shardreplica01' PRIMARY server and open the mongo shell. I'm logging in to the 'shardsvr2' PRIMARY server.
ssh root@shardsvr2
mongo --host shardsvr2 --port 27017
Check the databases available on the replica set.
show dbs
use lemp
db.stack.find()
You will see that the database, collections, and documents are available in the replica set.
MongoDB Sharded Cluster on CentOS 7 has been successfully installed and deployed.

Why You Should Still Love Telnet

$
0
0
https://bash-prompt.net/guides/telnet

Telnet, the protocol and the command line tool, was how system administrators used to log into remote servers. However, because there is no encryption, all communication, including passwords, is sent in plaintext. This meant that Telnet was abandoned in favour of SSH almost as soon as SSH was created.
For the purposes of logging into a remote server, you should never use telnet, and probably have never considered it. This does not mean, however, that the telnet command is not a very useful tool when used for debugging remote connection problems.
In this guide, we will explore using telnet to answer the all too common question, “Why can’t I ###### connect‽”.
This frustrated question is usually encountered after installing an application server like a web server, an email server, an SSH server, a Samba server etc., when, for some reason, the client won’t connect to the server.
telnet isn’t going to solve your problem, but it will, very quickly, narrow down where you need to start looking.
telnet is a very simple command to use for debugging network related issues and has the syntax:
telnet <host> <port>
Because telnet initially just establishes a connection to the port without sending any data, it can be used with almost any protocol, including encrypted protocols.
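For example, with a plaintext protocol like HTTP you can even speak the protocol by hand once connected (a quick sketch against example.com):
telnet example.com 80
GET / HTTP/1.1
Host: example.com
Press ENTER twice after the Host line, and the server's full HTTP response, headers and all, is printed to your terminal.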
There are four main errors that you will encounter when trying to connect to a problem server. We will look at all four, explore what they mean and look at how you should fix them.
For this guide we will assume that we have just installed a Samba server at samba.example.com and we can’t get a local client to connect to the server.

Error 1 - The connection that hangs forever

First, we need to attempt to connect to the Samba server with telnet. This is done with the following command (Samba listens on port 445):
telnet samba.example.com 445
Sometimes, the connection will get to this point, stop, and hang indefinitely:
telnet samba.example.com 445
Trying 172.31.25.31...
This means that telnet has not received any response to its request to establish a connection. This can happen for two reasons:
  1. There is a router down between you and the server.
  2. There is a firewall dropping your request.
In order to rule out reason 1, run a quick mtr samba.example.com to the server. If the server is accessible, then it’s a firewall (note: it’s almost always a firewall).
Firstly, check if there are any firewall rules on the server itself with the command iptables -L -v -n. If there are none, then you will get the following output:
iptables -L -v -n
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
If you see anything else, then this is likely the problem. In order to check, stop iptables for a moment and run telnet samba.example.com 445 again to see if you can connect. If you still can’t connect, see if your provider and/or office has a firewall in place that is blocking you.
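On a systemd-based server, temporarily stopping the firewall for the test might look like this (a sketch - the service may be firewalld or iptables depending on your distribution, and you should re-enable it immediately afterwards):
systemctl stop firewalld
telnet samba.example.com 445
systemctl start firewalld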

Error 2 - DNS problems

A DNS issue will occur if the hostname you are using does not resolve to an IP address. The error that you will see is as follows:
telnet samba.example.com 445
Server lookup failure: samba.example.com:445, Name or service not known
The first step here is to substitute the IP address of the server for the hostname. If you can connect to the IP but not the hostname then the problem is the hostname.
This can happen for many reasons (I have seen all of the following):
  1. Is the domain registered? Use whois to find out if it is.
  2. Is the domain expired? Use whois to find out if it is.
  3. Are you using the correct hostname? Use dig or host to ensure that the hostname you are using resolves to the correct IP (see the quick check below).
  4. Is your A record correct? Check that you didn’t accidentally create an A record for something like smaba.example.com.
Always double check the spelling and the correct hostname (is it samba.example.com or samba1.example.com?) as this will often trip you up, especially with long, complicated or foreign hostnames.
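A quick resolution check looks like this (a sketch):
dig +short samba.example.com
An empty result means the name does not resolve at all; a wrong IP means your A record points somewhere unexpected.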

Error 3 - The server isn’t listening on that port

This error occurs when telnet is able to reach the server but there is nothing listening on the port you specified. The error looks like this:
telnet samba.example.com 445
Trying 172.31.25.31...
telnet: Unable to connect to remote host: Connection refused
This can happen for a couple of reasons:
  1. Are you sure you’re connecting to the right server?
  2. Your application server is not listening on the port you think it is. Check exactly what it’s doing by running netstat -plunt on the server and see what port it is, in fact, listening on.
  3. The application server isn’t running. This can happen when the application server exits immediately and silently after you start it. Start the server and run ps auxf or systemctl status application.service to check that it’s running.

Error 4 - The connection was closed by the server

This error happens when the connection was successful, but the application server has a built-in security measure that killed the connection as soon as it was made. This error looks like:
telnet samba.example.com 445
Trying 172.31.25.31...
Connected to samba.example.com.
Escape character is '^]'.
Connection closed by foreign host.
The last line Connection closed by foreign host. indicates that the connection was actively terminated by the server. In order to fix this, you need to look at the security configuration of the application server to ensure your IP or user is allowed to connect to it.

A successful connection

This is what a successful telnet connection attempt looks like:
telnet samba.example.com 445
Trying 172.31.25.31...
Connected to samba.example.com.
Escape character is '^]'.
The connection will stay open for a while depending on the timeout of the application server you are connected to.
A telnet connection is closed by typing CTRL+] and then, when you see the telnet> prompt, typing “quit” and hitting ENTER, i.e.:
telnet samba.example.com 445
Trying 172.31.25.31...
Connected to samba.example.com.
Escape character is '^]'.
^]
telnet> quit
Connection closed.

Conclusion

There are a lot of reasons that a client application can’t connect to a server. The exact reason can be difficult to establish especially when the client is a GUI that offers little or no error information. Using telnet and observing the output will allow you to very rapidly narrow down where the problem lies and save you a whole lot of time.

How to Redirect a Domain

$
0
0
https://www.rosehosting.com/blog/how-to-redirect-a-domain


How to Redirect a Domain
We’ll show you how to redirect a domain. URL redirection, also called URL forwarding, is a World Wide Web technique for making a web page available under more than one URL address. When a web browser attempts to open a URL that has been redirected, a page with a different URL is opened. There are a few ways to redirect a domain, and the right one depends on the web server used. In this tutorial we are going to show you how to redirect a domain with the Apache web server and how to do URL redirection with the NGINX web server.

How to Redirect a Domain with Apache web server

The Apache HTTP Server is free and open-source cross-platform web server software. 92% of Apache HTTP Server copies run on Linux distributions.

Install Apache on your server if it is not installed yet.

On RPM based Linux distributions, like CentOS and Fedora, use the following command to install Apache:
yum install httpd

Verify that mod_rewrite module is enabled:

httpd -M | grep rewrite
rewrite_module (shared)

On Ubuntu and Debian, run:

sudo apt-get update
sudo apt-get install apache2

Activate the apache mod_rewrite module:

sudo a2enmod rewrite

Restart the Apache service:

sudo service apache2 restart

Create a simple virtual host in Apache

Create a simple virtual host in Apache for the old domain that redirects it to the new domain:
Use the Redirect Permanent directive to redirect the web client to the new URL:

<VirtualHost *:80>
ServerName old-domain.com
ServerAlias www.old-domain.com
RedirectPermanent / http://www.new-domain.com/
# optionally add an AccessLog directive here for logging the requests e.g. :
CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>

Restart the Apache server:

Restart the Apache service to apply the changes.
You can also redirect a domain name to a different one using rewrite rules placed in a .htaccess file located in the document root directory of the old domain name. Create a new .htaccess file and add the following rules to it:
RewriteEngine on
RewriteCond %{HTTP_HOST} ^old-domain.com [NC,OR]
RewriteCond %{HTTP_HOST} ^www.old-domain.com [NC]
RewriteRule ^(.*)$ http://new-domain.com/$1 [L,R=301,NC]
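Either way, you can verify the redirect from the command line with curl (substitute your real domains):
curl -I http://old-domain.com/some/page
Look for an 'HTTP/1.1 301 Moved Permanently' status line and a 'Location: http://new-domain.com/some/page' header in the output.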

How to Redirect a Domain with NGINX web server

Nginx is free and open-source web server software, which can also be used as a reverse proxy, load balancer and HTTP cache. A large fraction of web servers use NGINX, very often as a load balancer.

Stop Apache

Stop Apache on your server
service httpd stop

Disable Apache service

Disable the Apache service from automatically starting on boot (CentOS 7):
systemctl disable httpd

Install NGINX on RPM Linux Distros

Install nginx web server. On RPM based Linux distributions, like CentOS and Fedora, use the following commands:
yum install epel-release
yum install nginx
systemctl enable nginx
service nginx start

Install NGINX on Ubuntu

On Ubuntu (and other Debian based Linux distributions), run:
sudo service apache2 stop
sudo apt-get remove --purge apache2 apache2-utils
sudo rm -rf /etc/apache2
sudo apt-get update
sudo apt-get install nginx
If you receive a message that there is no nginx package available, install nginx using the nginx repository:
sudo apt-get install python-software-properties
sudo add-apt-repository ppa:nginx/stable
sudo apt-get update
sudo apt-get install nginx

Start NGINX

Start the nginx service with the following command:
sudo service nginx start

Configure NGINX

Edit the current nginx server block for the old domain, or create a new server block if one does not exist yet.
Add the following lines:
server {
listen 80;
server_name old-domain.com www.old-domain.com;
return 301 http://www.new-domain.com$request_uri;
}
Please note that $request_uri preserves the full original request path and query string, so anything after the domain is redirected to the same location on the new domain.
If you have an older version of nginx (version 0.9.1 or lower) add the following lines:
server {
listen 80;
server_name old-domain.com www.old-domain.com;
rewrite ^ http://www.new-domain.com$request_uri? permanent;
}
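Before restarting, it is worth validating the configuration syntax:
nginx -t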

Restart NGINX

Do not forget to restart the nginx service for the changes to take effect:
service nginx restart


The Fold Command Tutorial With Examples For Beginners

$
0
0
https://www.ostechnix.com/fold-command-tutorial-examples-beginners

Have you ever found yourself in a situation where you want to fold or break the output of a command to fit within a specific width? I have found myself in this situation a few times while running VMs, especially on servers with no GUI. If you ever wanted to limit the output of a command to a particular width, look no further! Here is where the fold command comes in handy! The fold command wraps each line in an input file to fit a specified width and prints it to the standard output.
In this brief tutorial, we are going to see the usage of fold command with practical examples.

The Fold Command Tutorial With Examples

The fold command is part of the GNU coreutils package, so let us not bother about installation.
The typical syntax of fold command:
fold [OPTION]... [FILE]...
Allow me to show you some examples, so you can get a better idea about the fold command. I have a file named linux.txt with some random lines.

To wrap each line in the above file to the default width, run:
fold linux.txt
80 columns per line is the default width. Here is the output of the above command:

As you can see in the above output, fold command has limited the output to a width of 80 characters.
Of course, we can specify our preferred width, for example 50, like below:
fold -w50 linux.txt
Sample output would be:

Instead of just displaying the output, we can also write it to a new file, as shown below:
fold -w50 linux.txt > linux1.txt
The above command will wrap the lines of linux.txt to a width of 50 characters, and write the output to a new file named linux1.txt.
Let us check the contents of the new file:
cat linux1.txt

Did you look closely at the output of the previous commands? Some words are broken between lines. To overcome this issue, we can use the -s flag to break the lines at spaces.
The following command wraps each line in a given file to width “50” and breaks the line at spaces:
fold -w50 -s linux.txt
Sample output:

See? Now the output is much clearer. This command puts each space-separated word on a new line, and words longer than 50 characters are wrapped.
In all the above examples, we limited the output width by columns. However, we can set the width of the output in bytes using the -b option. The following command breaks the output at 20 bytes.
fold -b20 linux.txt
Sample output:

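As a practical aside, fold is often used in the middle of pipelines. A classic example generates a few random 12-character passwords (a sketch; it reads from /dev/urandom, so it assumes a Linux or OSX box):
tr -dc 'a-zA-Z0-9' < /dev/urandom | fold -w 12 | head -n 4
Here tr keeps only alphanumeric bytes, fold chops the endless stream into 12-character lines, and head takes the first four of them.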


For more details, refer to the man page.
man fold
And that’s it for now, folks. You now know how to use the fold command to limit the output of a command to fit a specific width. I hope this was useful. We will be posting more useful guides every day. Stay tuned!
Cheers!

How To Install & Configure OTRS Help Desk Ticketing System On Linux

$
0
0
https://www.2daygeek.com/how-to-install-and-configure-otrs-help-desk-ticketing-system-on-centos-rhel

Every company maintains a ticketing system to track user requests; a ticketing tool is one of the essential applications in the IT industry.
You can find a lot of ticketing tools on the market, each with its own unique features. With such a wide range of options available, you have to pick the one that best suits your requirements.
Every company, from small businesses to large enterprises, manages its ticketing system according to its needs.
Are you looking for a good ticketing system that is free of cost? If so, OTRS is an excellent choice.
Today we are going to discuss the OTRS ticketing system, which is completely free of cost.

What Is OTRS?

OTRS, which stands for Open-source Ticket Request System, is one of the most flexible web-based ticketing systems, used for customer service, help desk, and IT service management.
The free OTRS package offers a wide range of configuration options (more than 1,000) and endless customization and integration possibilities, which turn it into a powerful IT service management tool. It supports 38 languages.
Every ticket has a history that shows detailed information about it. OTRS supports multiple agents, allowing them to work on tickets simultaneously. There is no limit on the number of agents you can create, and an agent can handle any number of tickets per day.
It allows agents to manage incoming inquiries, complaints, support requests, defect reports, and other communications. Users can also merge multiple requests that were opened for the same issue.
OTRS is written in the Perl programming language, and its web interface is made more user-friendly by JavaScript. The web interface uses its own templating mechanism, called DTL (Dynamic Template Language), to facilitate the display of the system's output data.

Prerequisites for OTRS

Make sure your system has a LAMP stack set up. If not, don't worry; install the required packages with the following command.
# yum install httpd httpd-devel gcc mariadb-server
Also enable the EPEL repository, which provides some additional Perl modules needed by OTRS.
# yum install epel-release
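Once the packages are installed, start Apache and MariaDB and enable them at boot. This is a standard step; the service names below assume the CentOS/RHEL 7 packages installed above.

# systemctl start httpd mariadb
# systemctl enable httpd mariadb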

Update your system

It’s good practice to keep your system packages up to date. Run the following command as the root user.
# yum update

Setup the database for OTRS

OTRS supports different database back-ends such as MySQL/MariaDB, PostgreSQL, and Oracle. MariaDB is the most popular database for deploying OTRS and is the one suggested by the OTRS team.
There is no need to create a database manually; we will create it later using the OTRS web interface. For now, just adjust the following settings.
Modify the parameters below to make MariaDB suitable for OTRS.
# vi /etc/my.cnf

[mysqld]
max_allowed_packet = 64M
query_cache_size = 32M
innodb_log_file_size = 256M
Restart the MariaDB service for the changes to take effect.
# systemctl restart mariadb
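To confirm that the new values are active, you can query one of them from the MariaDB client (this assumes root access over the local socket, which is the default on a fresh install):

# mysql -u root -e "SHOW VARIABLES LIKE 'max_allowed_packet';"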

Disable SELinux

SELinux (Security-Enhanced Linux) is a Linux kernel security module that gives users and administrators more control over access control. It adds an extra layer of security to the resources in the system.
Check whether SELinux is enabled or disabled using one of the following commands.
# sestatus
SELinux status: enabled

or

# getenforce
Enforcing
If it's enabled, edit the /etc/sysconfig/selinux file, change SELINUX=enforcing to SELINUX=disabled, then save the file and exit.
# nano /etc/sysconfig/selinux

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
Reboot the system for the changes to take effect.
# shutdown -r now
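If you prefer to avoid an immediate reboot, you can also switch SELinux to permissive mode for the current session; the change in /etc/sysconfig/selinux will still apply at the next boot.

# setenforce 0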

Download and Install OTRS

Download the OTRS RPM package from the OTRS website and install it:
# yum -y install http://ftp.otrs.org/pub/otrs/RPMS/rhel/7/otrs-6.0.3-02.noarch.rpm
Restart the Apache web server to load the OTRS configuration changes.
# systemctl restart httpd

Configure firewall

By default, CentOS/RHEL 7 blocks all incoming HTTP and HTTPS traffic. We need to allow this traffic using the following commands.
# firewall-cmd --add-service=http --permanent 
success

# firewall-cmd --add-service=https --permanent
success

# firewall-cmd --reload
success

# systemctl restart firewalld
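You can verify that both services are now allowed through the firewall; the output should include http and https.

# firewall-cmd --list-services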

Check and install additional Perl Modules

Since OTRS is written in Perl, you may need to install additional Perl modules manually. Run the following command to check which modules are missing.
# /opt/otrs/bin/otrs.CheckModules.pl

Run the following yum command to install the missing Perl modules.
# yum install -y "perl(Crypt::Eksblowfish::Bcrypt)" "perl(DBD::Pg)" "perl(Encode::HanExtra)" "perl(JSON::XS)" "perl(Mail::IMAPClient)" "perl(Authen::NTLM)" "perl(ModPerl::Util)" "perl(Text::CSV_XS)" "perl(YAML::XS)"
Run otrs.CheckModules.pl once again to confirm that all the Perl modules were installed successfully.
# /opt/otrs/bin/otrs.CheckModules.pl
o Apache::DBI......................ok (v1.12)
o Apache2::Reload..................ok (v0.13)
o Archive::Tar.....................ok (v1.92)
o Archive::Zip.....................ok (v1.30)
o Crypt::Eksblowfish::Bcrypt.......ok (v0.009)
o Crypt::SSLeay....................ok (v0.64)
o Date::Format.....................ok (v2.24)
o DateTime.........................ok (v1.04)
o DBI..............................ok (v1.627)
o DBD::mysql.......................ok (v4.023)
o DBD::ODBC........................Not installed! (optional - Required to connect to a MS-SQL database.)
o DBD::Oracle......................Not installed! (optional - Required to connect to a Oracle database.)
o DBD::Pg..........................ok (v2.19.3)
o Digest::SHA......................ok (v5.85)
o Encode::HanExtra.................ok (v0.23)
o IO::Socket::SSL..................ok (v1.94)
o JSON::XS.........................ok (v3.01)
o List::Util::XS...................ok (v1.27)
o LWP::UserAgent...................ok (v6.26)
o Mail::IMAPClient.................ok (v3.37)
    o IO::Socket::SSL................ok (v1.94)
    o Authen::SASL...................ok (v2.15)
    o Authen::NTLM...................ok (v1.09)
o ModPerl::Util....................ok (v2.000010)
o Net::DNS.........................ok (v0.72)
o Net::LDAP........................ok (v0.56)
o Template.........................ok (v2.24)
o Template::Stash::XS..............ok (undef)
o Text::CSV_XS.....................ok (v1.00)
o Time::HiRes......................ok (v1.9725)
o XML::LibXML......................ok (v2.0018)
o XML::LibXSLT.....................ok (v1.80)
o XML::Parser......................ok (v2.41)
o YAML::XS.........................ok (v0.54)

Configure OTRS using the web installer

Point your browser to http://localhost/otrs/installer.pl to configure OTRS using the web installer. (If you are installing on a remote server, replace localhost with the server's hostname or IP address.)
Follow the instructions and enter the required information.
1) Welcome Page: This is the welcome screen, which shows information about the OTRS offices. Click Next to continue.

2) License: Hit the Accept License and Continue button to move forward to the next step.

3) Database Selection: Choose the database that you want to use with OTRS. I'm going to choose MySQL, which is advised by the OTRS team. Then click the Next button to continue.

4) Validate Database Credentials: Input the database credentials and click Check Database settings to validate the given information. If the information is correct, you should see a message stating that the database check was successful; otherwise you will get an error message.

5) Configure Database: If you were able to connect to your database in the above step, it's now time to create the database, database user, and password; then click Next to continue.

6) Database Creation: When you click the Next button in the above step, OTRS will create the database and grant privileges, then show "Database setup successful".

7) System Settings: Enter all the required information on this page and click Next to continue.

8) Mail Configuration: Enter the inbound and outbound mail server information on this page and click Next to continue.

9) Setup Completed: Congratulations! You have completed the OTRS installation. It's time to start working with it.

10) Access OTRS: Point your browser to http://localhost/otrs/index.pl to access OTRS.
To log in as the OTRS administrator, use root@localhost as the username and the password generated by OTRS in the previous step.

You can now start configuring the OTRS system to meet your needs.

11) Kick-start the OTRS daemon and watchdog: Make sure to start the OTRS daemon and watchdog as the otrs user.
# su - otrs
$ /opt/otrs/bin/otrs.Daemon.pl start

Manage the OTRS daemon process.

Daemon started

$ /opt/otrs/bin/Cron.sh start
(using /opt/otrs) done
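To check on the daemon later, the daemon script also provides a status action (run it as the otrs user, like the commands above):

$ /opt/otrs/bin/otrs.Daemon.pl status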
After the OTRS daemon and watchdog have been started, your output should be similar to the one below.

How to Find a File in Linux with the Find Command

https://www.maketecheasier.com/find-a-file-in-linux
The Linux find command is one of the most important and handy commands on Linux systems. As the name suggests, it can find files on your Linux PC based on pretty much whatever conditions and variables you set. You can find files by permissions, users, groups, file type, date, size, and other criteria.
The find command is available on most Linux distros by default, so you do not have to install a package for it.
In this tutorial we will show you how to find files on Linux using various common combinations of search expressions in the command line.
The most obvious way of searching for files is by name. To find a file by name in the current directory, run:
find . -name photo.png
If you want to find a file by name while ignoring case (so both capital and small letters match), use the -iname option:
find . -iname photo.png
If you want to search from the root directory, prefix your command with sudo, which gives you the permissions required to read every directory, and pass the ‘/’ symbol, which tells find to start at the root directory. Finally, the -print expression displays the paths of your search results. If you were looking for gzip, you’d type:
sudo find / -name gzip -print
If you want to find files under a specific directory like “/home,” run:
find /home -name filename.txt
If you want to find files with the “.txt” extension under the “/home” directory, run:
find /home -name "*.txt"
To find files whose name is “test.txt” under multiple directories like “/home” and “/opt,” run:
find /home /opt -name test.txt
To find hidden files in the “/home” directory, run:
find /home -name ".*"
To find a single file called “test.txt” and remove it, run:
find /home -type f -name test.txt -exec rm -f {} \;
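Before deleting anything with -exec rm, it is safer to run the same search with -print first so you can review exactly which files will be removed:

find /home -type f -name test.txt -print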
To find all empty files under the “/opt” directory, run:
find /opt -type f -empty
If you want to find all directories whose name is “testdir” under the “/home” directory, run:
find /home -type d -name testdir
To find all empty directories under “/home,” run:
find /home -type d -empty
The find command can also be used to find files with specific permissions using the -perm option.
To find all files whose permissions are “777” in the “/home” directory, run:
find /home -type f -perm 0777 -print
To find all the files without permission “777,” run:
find . -type f ! -perm 777
To find all files that are readable by their owner, run:
find /home -perm /u=r
To find all executable files, run:
find /home -perm /a=x
To find all files with the sticky bit set and permissions “553,” run:
find /home -perm 1553
To find all SUID set files, run:
find /home -perm /u=s
To find all files whose permissions are “777” and change their permissions to “700,” run:
find /home -type f -perm 0777 -print -exec chmod 700 {} \;
To find all the files under “/opt” that were modified exactly twenty days ago, run:
find /opt -mtime 20
To find all the files under “/opt” that were accessed exactly twenty days ago, run:
find /opt -atime 20
To find all the files under “/opt” that were modified more than thirty but less than fifty days ago, run:
find /opt -mtime +30 -mtime -50
To find all the files under “/opt” which are changed in the last two hours, run:
find /opt -cmin -120
To find all 10MB files under the “/home” directory, run:
find /home -size 10M
To find all the files under the “/home” directory which are greater than 10MB and less than 50MB, run:
find /home -size +10M -size -50M
To find all “.mp4” files under the “/home” directory that are larger than 10MB and delete them with a single command, run:
find /home -type f -name "*.mp4" -size +10M -exec rm {} \;
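These tests can be combined freely. As a final illustrative sketch (the path and pattern are just examples), the following finds log files under “/var/log” that have not been modified in over thirty days and compresses them:

find /var/log -name "*.log" -mtime +30 -exec gzip {} \;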
And there it is: a comprehensive list of ways to find whatever files you’re looking for on Linux. It may not be as simple as a rudimentary Windows search, but it’s much more detailed and specific. Are there any commands we missed? Let us know in the comments!