Channel: Sameh Attia

Linux mv Command Explained for Beginners (8 Examples)

https://www.howtoforge.com/linux-mv-command

Just like cp for copying and rm for deleting, Linux also offers a built-in command for moving and renaming files. It's called mv. In this article, we will discuss the basics of this command-line tool using easy-to-understand examples. Please note that all examples used in this tutorial have been tested on Ubuntu 16.04 LTS.

Linux mv command

As already mentioned, the mv command in Linux is used to move or rename files. Following is the syntax of the command:
mv [OPTION]... [-T] SOURCE DEST
mv [OPTION]... SOURCE... DIRECTORY
mv [OPTION]... -t DIRECTORY SOURCE...
And here's what the man page says about it:
Rename SOURCE to DEST, or move SOURCE(s) to DIRECTORY.
The following Q&A-styled examples will give you a better idea of how this tool works.

Q1. How to use mv command in Linux?

If you want to just rename a file, you can use the mv command in the following way:
mv [filename] [new_filename]
For example:
mv names.txt fullnames.txt
Similarly, if the requirement is to move a file to a new location, use the mv command in the following way:
mv [filename] [dest-dir]
For example:
mv fullnames.txt /home/himanshu/Downloads
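The two uses above can be sketched as a short, self-contained session (the file and directory names here are invented for illustration):

```shell
# Work in a throwaway directory so nothing real is touched
cd "$(mktemp -d)"

# Create a sample file, then rename it in place
echo "Tom Jones" > names.txt
mv names.txt fullnames.txt

# Move the renamed file into a destination directory
mkdir -p downloads
mv fullnames.txt downloads/

ls downloads/    # shows: fullnames.txt
```

The rule of thumb: when the last argument is an existing directory, mv moves the source into it; otherwise, it renames the source.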

Q2. How to make sure mv prompts before overwriting?

By default, the mv command doesn't prompt when the operation involves overwriting an existing file. For example, the following screenshot shows the existing full_names.txt was overwritten by mv without any warning or notification.
[screenshot]
However, if you want, you can force mv to prompt by using the -i command line option.
mv -i [file_name] [new_file_name]
[screenshot]
The screenshot above clearly shows that -i makes mv ask for user permission before overwriting an existing file. Please note that if you want to explicitly specify that mv should not prompt before overwriting, use the -f command line option.

Q3. How to make mv not overwrite an existing file?

For this, you need to use the -n command line option.
mv -n [filename] [new_filename]
The following screenshot shows that the mv operation wasn't successful, as a file named 'full_names.txt' already existed and the command was run with the -n option.
[screenshot]
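The same behavior can be reproduced in a quick, throwaway session (file names invented for illustration; GNU mv assumed):

```shell
cd "$(mktemp -d)"

# Two files with different contents
echo "old data" > full_names.txt
echo "new data" > incoming.txt

# -n: never overwrite an existing destination
mv -n incoming.txt full_names.txt

cat full_names.txt   # still "old data"; incoming.txt was not moved
```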
Note:
If you specify more than one of -i, -f, -n, only the final one takes effect.

Q4. How to make mv remove trailing slashes (if any) from source argument?

To remove any trailing slashes from source arguments, use the --strip-trailing-slashes command line option.
mv --strip-trailing-slashes [source] [dest]
Here's how the official documentation explains the usefulness of this option:
This is useful when a source argument may have a trailing slash and specify a symbolic link to a directory. This scenario is in fact rather common because some shells can automatically append a trailing slash when performing file name completion on such symbolic links. Without this option, mv, for example, (via the system’s rename function) must interpret a trailing slash as a request to dereference the symbolic link and so must rename the indirectly referenced directory and not the symbolic link. Although it may seem surprising that such behavior be the default, it is required by POSIX and is consistent with other parts of that standard.

Q5. How to make mv treat the destination as a normal file?

To be absolutely sure that the destination entity is treated as a normal file (and not a directory), use the -T command line option.
mv -T [source] [dest]
Here's why this command line option exists:
This can help avoid race conditions in programs that operate in a shared area. For example, when the command ‘mv /tmp/source /tmp/dest’ succeeds, there is no guarantee that /tmp/source was renamed to /tmp/dest: it could have been renamed to /tmp/dest/source instead, if some other process created /tmp/dest as a directory. However, if mv -T /tmp/source /tmp/dest succeeds, there is no question that /tmp/source was renamed to /tmp/dest.
In the opposite situation, where you want the last operand to be treated as a directory and want a diagnostic otherwise, you can use the --target-directory (-t) option.
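A quick sketch of the failure mode that -T guards against (GNU mv assumed; names invented): without -T, the file would silently land inside the directory; with -T, the rename is refused.

```shell
cd "$(mktemp -d)"
echo data > source.txt
mkdir dest

# Without -T, mv would move source.txt INTO dest/ (as dest/source.txt).
# With -T, dest must be a plain file, so mv refuses:
if ! mv -T source.txt dest 2>/dev/null; then
    echo "mv -T refused: dest is a directory"
fi

ls    # source.txt and dest/ are both untouched
```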

Q6. How to make mv move a file only when it's newer than the destination file?

Suppose there is a file named fullnames.txt in the Downloads directory of your system, and a file with the same name in your home directory. Now, you want to update ~/Downloads/fullnames.txt with ~/fullnames.txt, but only if the latter is newer. In this case, use the -u command line option.
mv -u ~/fullnames.txt ~/Downloads/fullnames.txt
This option is particularly useful in cases when you need to take such decisions from within a shell script.
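Here is a minimal sketch of -u in action, using touch -d (GNU coreutils) to make the destination copy artificially older than the source:

```shell
cd "$(mktemp -d)"
mkdir Downloads
echo "old copy" > Downloads/fullnames.txt
echo "new copy" > fullnames.txt

# Backdate the destination so the source is clearly newer
touch -d "2020-01-01" Downloads/fullnames.txt

# -u: the move happens only because the source is newer
mv -u fullnames.txt Downloads/fullnames.txt

cat Downloads/fullnames.txt   # "new copy"
```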

Q7. How to make mv emit details of what it is doing?

If you want mv to output information explaining what exactly it's doing, then use the -v command line option.
mv -v [filename] [new_filename]
For example, the following screenshot shows mv emitting some helpful details of what exactly it did.
[screenshot]

Q8. How to force mv to create backup of existing destination files?

You can do this using the -b command line option. The backup file created this way has the same name as the destination file, but with a tilde (~) appended to it. Here's an example:
[screenshot]
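A sketch of the same operation in a terminal (GNU mv assumed; file names invented):

```shell
cd "$(mktemp -d)"
echo "old" > report.txt
echo "new" > draft.txt

# -b: back up the existing destination before overwriting it
mv -b draft.txt report.txt

cat report.txt    # "new"
cat report.txt~   # "old" -- the backup, with a tilde appended
```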

Conclusion

As you have probably guessed by now, mv is as important as cp and rm for the functionality it offers - renaming and moving files around is one of the most basic operations, after all. We've discussed a majority of the command line options this tool offers, so just practice them and start using the command. To know more about mv, head to its man page.

Linux kill Command Tutorial for Beginners (5 Examples)

https://www.howtoforge.com/linux-kill-command

Sometimes, while working on a Linux machine, you'll see that an application or a command line process gets stuck (becomes unresponsive). In those cases, terminating it is the only way out. The Linux command line offers a utility that you can use in these scenarios. It's called kill.
In this tutorial, we will discuss the basics of kill using some easy-to-understand examples. But before we do that, it's worth mentioning that all examples in the article have been tested on an Ubuntu 16.04 machine.

Linux kill command

The kill command is usually used to terminate a process. Internally, it sends a signal, and depending on what you want to do, there are different signals that you can send using this tool. Following is the command's syntax:
kill [options] <pid> [...]
And here's how the tool's man page describes it:
The default signal for kill is TERM. Use -l or -L to list available signals. Particularly useful signals include HUP, INT, KILL, STOP, CONT, and 0. Alternate signals may be specified in three ways: -9, -SIGKILL or -KILL. Negative PID values may be used to choose whole process groups; see the PGID column in ps command output. A PID of -1 is special; it indicates all processes except the kill process itself and init.
The following Q&A-styled examples should give you a better idea of how the kill command works.

Q1. How to terminate a process using kill command?

This is very easy - all you need to do is to get the pid of the process you want to kill, and then pass it to the kill command.
kill [pid]
For example, I wanted to kill the 'gthumb' process on my system, so I first used the ps command to fetch the application's pid, and then passed it to the kill command to terminate it. Here's the screenshot showing all this:
[screenshot]
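Since ps output varies from system to system, here is a reproducible sketch using a background sleep process in place of gthumb:

```shell
# Start a long-running process in the background
sleep 300 &
pid=$!

# Terminate it with the default TERM signal
kill "$pid"

# Reap it; wait returns once the process is gone
wait "$pid" 2>/dev/null || true
echo "process $pid terminated"
```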

Q2. How to send a custom signal?

As already mentioned in the introduction section, TERM is the default signal that kill sends to the application/process in question. However, if you want, you can send any other signal that kill supports using the -s command line option.
kill -s [signal] [pid]
For example, if a process isn't responding to the TERM signal (which allows the process to do final cleanup before quitting), you can go for the KILL signal (which doesn't let the process do any cleanup). Following is the command you need to run in that case.
kill -s KILL [pid]

Q3. Which signals can you send using kill?

Of course, the next logical question that'll come to your mind is how to know which signals you can send using kill. Well, thankfully, there exists a command line option, -l, that lists all supported signals.
kill -l
Following is the output the above command produced on our system:
[screenshot]

Q4. What are the other ways in which a signal can be sent?

In one of the previous examples, we mentioned that if you want to send the KILL signal, you can do it in the following way:
kill -s KILL [pid]
However, there are a couple of other alternatives as well:
kill -s SIGKILL [pid]
kill -s 9 [pid]
The number corresponding to each signal name can be found using the -l option we've already discussed in the previous example.
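The -l option also works in reverse: give it a signal number and it prints the name, which is handy for checking these alternate spellings:

```shell
# Translate signal numbers into signal names
kill -l 9    # prints: KILL
kill -l 15   # prints: TERM
```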

Q5. How to kill all running processes in one go?

In case a user wants to kill all processes that they can (this depends on their privilege level), then instead of specifying a large number of process IDs, they can simply pass -1 as the PID argument.
For example:
kill -s KILL -1

Conclusion

The kill command is pretty straightforward to understand and use. There's a slight learning curve in terms of the list of signal options it offers, but as we explained here, there's an option to take a quick look at that list as well. Just practice whatever we've discussed and you should be good to go. For more information, head to the tool's man page.

How to Set Up and Implement DMARC Email Security

https://www.esecurityplanet.com/applications/how-to-set-up-imlement-dmarc-email-security.html

Curious about DMARC? Learn how to set up a basic DMARC email security policy, including SPF and DKIM, in this eSecurity Planet tutorial.

Domain-based Message Authentication, Reporting and Conformance (DMARC) is an increasingly important approach for helping ensure the integrity of email coming from a given domain.
Unfortunately, DMARC is not turned on by default for every domain, at every web host or every email server. DMARC requires organizations and email administrators to configure and set up policies. While DMARC is not the default for email servers, adoption is now growing, driven by the U.S. government and enterprises around the world.
In the first part of the eSecurityPlanet series on DMARC, we provided an overview of the components that make up DMARC, including Sender Policy Framework (SPF) and Domain Keys Identified Mail (DKIM).
In this installment, we'll go over how to implement a basic DMARC setup on your own domain. It's a set of processes that includes changing DNS records at the domain registrar and optimally configuring email providers to send signed emails.

How to set up Sender Policy Framework (SPF)

Sender Policy Framework (SPF) is one of the easiest parts of a DMARC deployment to set up and configure. SPF is used to specify which email exchanges are authorized to send email for a given domain name.
At its most basic level, SPF just requires a simple one line change to a domain record in order to work.
  1. Log into your domain registrar and click on the option to manage or configure DNS settings
  2. Find and click the 'Add a New Record' option and choose a 'TXT' record
  3. In the host name dialogue, enter either @ or the name of your domain.
The next part is the entry for "value," which defines SPF options. There are multiple options that can be entered for an SPF record to limit and define which email exchanges are able to send email on behalf of a domain and how strictly the policy should be enforced.
Example SPF record:
So, for example, on the domain DMARC.site (a test domain we created just for this article), a basic SPF policy might look like this:
"v=spf1 a:dmarc.site ~all"
In the above policy, only an email exchange hosted on dmarc.site would be allowed to send email for the domain. The ~all piece of the policy is typically the right way to end an SPF DNS entry; it means the policy is complete and no other servers should be sending email on behalf of the domain.
Using the ~ (tilde) in an initial SPF configuration is typically preferred to using a "-" (for example, -all rather than ~all), as the tilde denotes a soft fail. That is, with ~, mail that doesn't meet the policy is still delivered, but it is marked as non-compliant rather than rejected. For a strict policy (after initial testing), the "-" indicates a hard fail if the policy is not met.
In many cases, organizations will have mail servers that are separate from the domain host (i.e. Google, Office 365 or a mail forwarder). So how do you enable an SPF policy to define authorized email servers beyond the domain host? That's where the 'include:' items come in.
For example, to specify an SPF policy that will enable Google's mail servers to send email on behalf of a domain, the following simple policy will work:
"v=spf1 include:_spf.google.com ~all"
There are a lot of other options that can be entered for an SPF record. Another common configuration is to specifically allow email to be sent only from the same email servers that are already defined in the MX (mail exchange) record for the given domain's DNS entry. An example:
"v=spf1 mx mx:DMARC.site ~all"
Once the DNS TXT record for SPF has been inputted and saved, it's time to move to the next step of the DMARC process.

How to set up DomainKeys Identified Mail

DomainKeys Identified Mail (DKIM) is a somewhat more involved and challenging element to implement than SPF. With DKIM, in addition to a DNS entry, organizations also need to make changes on outgoing email servers.
There are two elements to DKIM: a DNS record that includes a public cryptography key to help verify that a sender is allowed to send email for a given domain, and the private key that is used for signing outgoing email.
Adding a DKIM entry to a domain's DNS is the same basic process as it was for the SPF record:
  1. Log into your domain registrar and click on the option to manage or configure DNS settings
  2. Find and click the 'Add a New Record' option and choose a 'TXT' record
  3. For the host name option, DKIM requires what is known as a 'selector,' which is basically a prefix (Example: dmarc._domainkey.dmarc.site)
  4. Instead of inputting a policy (as was the case for SPF), what is needed with the DKIM entry is a public cryptography key
There are multiple ways to generate a public key that can be used for a DKIM record. On Linux systems, a common approach is using the ssh-keygen tool, while on Windows, PuTTYgen is a reasonable option.
There are also multiple online tools that can help generate the public/private key pair; one of the easiest is DKIM Core Tools.
Sample DKIM entry:
[screenshot]
The DNS entry is only half the equation for DKIM. The other half is getting a DKIM signer set up on a mail server, which is a process that isn't all that easy for many email systems. The exception is Google's G Suite, which has a simple how-to guide to get a DKIM signer in place.
Microsoft Office 365 users can benefit from Microsoft's detailed guide on how to implement DKIM signing on that platform.
There are also multiple vendors that can help enable DKIM signing with different approaches, which we will detail in the next article in this series, an outline of vendor solutions.

How to set up a DMARC record

Now that SPF and DKIM have been set up, it's time to finally set up the DMARC policy. It is possible to define a DMARC policy in a DNS record without first setting up SPF and DKIM, but it actually won't be able to do anything.
DMARC policies define how SPF and DKIM records should be handled by email servers. A critically important element of DMARC policy is that it also provides a reporting mechanism so domain administrators can identify if email is failing or if an attacker is attempting to spoof a given domain.
Just like SPF, DMARC is a simple one line entry in the domain's DNS records.
  1. Log into your domain registrar and click on the option to manage or configure DNS settings
  2. Find and click the 'Add a New Record' option and choose a 'TXT' record
Here's a sample DMARC entry for the test domain DMARC.site:
v=DMARC1; p=quarantine; rua=mailto:reports@dmarc.site; ruf=mailto:reports@dmarc.site; adkim=r; aspf=r; rf=afrf
  • The "p" tag has three possible values (none, quarantine, or reject) that specify how email violating the policy should be handled
  • The adkim and aspf tags define how strictly DKIM and SPF policy should be applied, with 's' indicating strict and 'r' indicating relaxed
  • The rua tag provides an address for aggregate data reports, while ruf provides an address for forensic reports
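To double-check a record like the one above before publishing it, the semicolon-separated tag=value pairs can be pulled apart mechanically. Here is a small, illustrative Python sketch (not part of any official DMARC tooling) that parses the sample record from this article:

```python
# Split a DMARC TXT record's "tag=value" pairs into a dict for
# quick sanity-checking before publishing it in DNS.
def parse_dmarc(record):
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if not part:
            continue
        # partition on the FIRST "=" only, so values like
        # "mailto:reports@dmarc.site" survive intact
        key, _, value = part.partition("=")
        tags[key.strip()] = value.strip()
    return tags

record = ("v=DMARC1; p=quarantine; rua=mailto:reports@dmarc.site; "
          "ruf=mailto:reports@dmarc.site; adkim=r; aspf=r; rf=afrf")

parsed = parse_dmarc(record)
print(parsed["p"])      # quarantine
print(parsed["adkim"])  # r
```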

Testing and next steps

Following the steps in this guide, it's possible to set up a basic set of policies that will enable DMARC for a given domain. Simply setting up DMARC in a test implementation is only the beginning of the DMARC journey, however.
It's important to test your configuration for SPF, DKIM and DMARC to make sure that the defined policies work as intended and don't end up blocking legitimate email. That's why the relaxed and quarantine options are likely a good place to start.
DMARC reporting and forensics allow an organization to understand what is going on with its email domains. While it's possible to look at and parse each DMARC report email to see what is going on, that's not an approach that scales.
In the next part of the eSecurityPlanet guide to DMARC, we'll provide an overview of different DMARC vendor solutions that can help organizations with implementation and report monitoring.

Threading in Python

http://www.linuxjournal.com/content/threading-python

Threads can provide concurrency, even if they're not truly parallel.
In my last article, I took a short tour through the ways you can add concurrency to your programs. In this article, I focus on one of those forms that has a reputation for being particularly frustrating for many developers: threading. I explore the ways you can use threads in Python and the limitations the language puts upon you when doing so.
The basic idea behind threading is a simple one: just as the computer can run more than one process at a time, so too can your process run more than one thread at a time. When you want your program to do something in the background, you can launch a new thread. The main thread continues to run in the foreground, allowing the program to do two (or more) things at once.
What's the difference between launching a new process and a new thread? A new process is completely independent of your existing process, giving you more stability (in that the processes cannot affect or corrupt one another) but also less flexibility (in that data cannot easily flow from one thread to another). Because multiple threads within a process share data, they can work with one another more closely and easily.
For example, let's say you want to retrieve all of the data from a variety of websites. My preferred Python package for retrieving data from the web is the "requests" package, available from PyPI. Thus, I can use a for loop, as follows:

length = {}

for one_url in urls:
    response = requests.get(one_url)
    length[one_url] = len(response.content)

for key, value in length.items():
    print("{0:30}: {1:8,}".format(key, value))

How does this program work? It goes through a list of URLs (as strings), one by one, retrieving each one, calculating the length of its content and then storing that length inside a dictionary called length. The keys in length are URLs, and the values are the lengths of the requested URL content.
So far, so good; I've turned this into a complete program (retrieve1.py), which is shown in Listing 1. I put nine URLs into a text file called urls.txt (Listing 2), and then timed how long retrieving each of them took. On my computer, the total time was about 15 seconds, although there was clearly some variation in the timing.
Listing 1. retrieve1.py

#!/usr/bin/env python3

import requests
import time

urls = [one_line.strip()
        for one_line in open('urls.txt')]

length = {}

start_time = time.time()

for one_url in urls:
    response = requests.get(one_url)
    length[one_url] = len(response.content)

for key, value in length.items():
    print("{0:30}: {1:8,}".format(key, value))

end_time = time.time()

total_time = end_time - start_time

print("\nTotal time: {0:.3} seconds".format(total_time))

Listing 2. urls.txt

http://lerner.co.il
http://LinuxJournal.com
http://en.wikipedia.org
http://news.ycombinator.com
http://NYTimes.com
http://Facebook.com
http://WashingtonPost.com
http://Haaretz.co.il
http://thetech.com

Improving the Timing with Threads

How can I improve the timing? Well, Python provides threading. Many people think of Python's threads as fatally flawed, because only one thread actually can execute at a time, thanks to the GIL (global interpreter lock). This is true if you're running a program that is performing serious calculations, and in which you really want the system to be using multiple CPUs in parallel.
However, I have a different sort of use case here. I'm interested in retrieving data from different websites. Python knows that I/O can take a long time, and so whenever a Python thread engages in I/O (that is, the screen, disk or network), it gives up control and hands use of the GIL over to a different thread.
In the case of my "retrieve" program, this is perfect. I can spawn a separate thread to retrieve each of the URLs in the array. I then can wait for the URLs to be retrieved in parallel, checking in with each of the threads one at a time. In this way, I probably can save time.
Let's start with the core of my rewritten program. I'll want to implement the retrieval as a function, and then invoke that function along with one argument—the URL I want to retrieve. I then can invoke that function by creating a new instance of threading.Thread, telling the new instance not only which function I want to run in a new thread, but also which argument(s) I want to pass. This is how that code will look:

for one_url in urls:
    t = threading.Thread(target=get_length, args=(one_url,))
    t.start()

But wait. How will the get_length function communicate the content length to the rest of the program? In a threaded program, you really must not have individual threads modify built-in data structures, such as a list. This is because such data structures aren't thread-safe, and doing something such as an "append" from one thread might cause all sorts of problems.
However, you can use a "queue" data structure, which is thread-safe, and thus guarantees a form of communication. The function can put its results on the queue, and then, when all of the threads have completed their run, you can read those results from the queue.
Here, then, is how the function might look:

from queue import Queue

queue = Queue()

def get_length(one_url):
    response = requests.get(one_url)
    queue.put((one_url, len(response.content)))

As you can see, the function retrieves the content of one_url and then places the URL itself, as well as the length of the content, in a tuple. That tuple is then placed in the queue.
It's a nice little program. The main thread spawns the new threads, each of which runs get_length. In get_length, the information gets put on the queue.
The thing is, now it needs to retrieve things from the queue. But if you do this just after launching the threads, you run the risk of reading from the queue before the threads have completed. So, you need to "join" the threads, which means to wait until they have finished. Once the threads have all been joined, you can read all of their information from the queue.
There are a few different ways to join the threads. An easy one is to create a list where you will store the threads and then append each new thread object to that list as you create it:

threads = [ ]

for one_url in urls:
    t = threading.Thread(target=get_length, args=(one_url,))
    threads.append(t)
    t.start()

You then can iterate over each of the thread objects, joining them:

for one_thread in threads:
    one_thread.join()

Note that when you call one_thread.join() in this way, the call blocks. Perhaps that's not the most efficient way to do things, but in my experiments, it still took about one second—15 times faster—to retrieve all of the URLs.
Python threads are routinely dismissed as terrible and useless. But in this case, you can see that they allowed me to parallelize the program without too much trouble, having different sections execute concurrently.
Listing 3. retrieve2.py

#!/usr/bin/env python3

import requests
import time
import threading
from queue import Queue

urls = [one_line.strip()
        for one_line in open('urls.txt')]

queue = Queue()
start_time = time.time()
threads = [ ]

def get_length(one_url):
    response = requests.get(one_url)
    queue.put((one_url, len(response.content)))

# Launch our function in a thread
print("Launching")
for one_url in urls:
    t = threading.Thread(target=get_length, args=(one_url,))
    threads.append(t)
    t.start()

# Joining all
print("Joining")
for one_thread in threads:
    one_thread.join()

# Retrieving + printing
print("Retrieving + printing")
while not queue.empty():
    one_url, content_length = queue.get()
    print("{0:30}: {1:8,}".format(one_url, content_length))

end_time = time.time()

total_time = end_time - start_time

print("\nTotal time: {0:.3} seconds".format(total_time))
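As an aside, the thread-plus-queue pattern in Listing 3 can also be expressed with the standard library's concurrent.futures module, which handles the queue and the joining for you. Here is a sketch; it substitutes a stand-in workload for requests.get so it runs without network access:

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in for the real worker: in the actual program, this would
# call requests.get(one_url) and measure len(response.content).
def get_length(one_url):
    return one_url, len(one_url) * 100

urls = ['http://example.com', 'http://example.org']

# executor.map runs get_length in worker threads and collects the
# return values in order -- no explicit Queue or join() needed.
with ThreadPoolExecutor(max_workers=4) as executor:
    for one_url, length in executor.map(get_length, urls):
        print("{0:30}: {1:8,}".format(one_url, length))
```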

Considerations

The good news is that this demonstrates how using threads can be effective when you're doing numerous, time-intensive I/O actions. This is especially good news if you're writing a server in Python that uses threads; you can open up a new thread for each incoming request and/or allocate each new request to an existing, pre-created thread. Again, if the threads don't really need to execute in a truly parallel fashion, you're fine.
But, what if your system receives a very large number of requests? In such a case, your threads might not be able to keep up. This is particularly true if the code being executed in each thread is CPU-intensive.
In such a case, you don't want to use threads. A popular option—indeed, the popular option—is to use processes. In my next article, I plan to look at how such processes can work and interact.

Keep Accurate Time on Linux with NTP

https://www.linux.com/learn/intro-to-linux/2018/1/keep-accurate-time-linux-ntp

[Image: USNO]
Learn how to keep the correct time and keep your computers synchronized, without abusing time servers, using NTP and systemd.

What Time is It?

Linux is funky when it comes to telling the time. You might think that the time command tells the time, but it doesn't, because it is a timer that measures how long a process runs. To get the date and time, you run the date command, and to view a calendar, you use cal. Timestamps on files are also a source of confusion, as they are typically displayed in two different ways, depending on your distro defaults. This example is from Ubuntu 16.04 LTS:
$ ls -l
drwxrwxr-x 5 carla carla 4096 Mar 27 2017 stuff
drwxrwxr-x 2 carla carla 4096 Dec 8 11:32 things
-rw-rw-r-- 1 carla carla 626052 Nov 21 12:07 fatpdf.pdf
-rw-rw-r-- 1 carla carla 2781 Apr 18 2017 oddlots.txt
Some entries display the year, some display the time, which makes ordering your files rather a mess. The GNU default is that files dated within the last six months display the time instead of the year. I suppose there is a reason for this. If your Linux does this, try ls -l --time-style=long-iso to display the timestamps all the same way. See How to Change the Linux Date and Time: Simple Commands to learn all manner of fascinating ways to manage the time on Linux.
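For example (GNU coreutils assumed), the long-iso style produces uniform YYYY-MM-DD HH:MM timestamps for every file, regardless of age:

```shell
cd "$(mktemp -d)"
touch newfile.txt

# Force a single, uniform timestamp format
ls -l --time-style=long-iso newfile.txt
# e.g.: -rw-r--r-- 1 carla carla 0 2018-01-22 14:17 newfile.txt
```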

Check Current Settings

NTP, the network time protocol, is the old-fashioned way of keeping correct time on computers. ntpd, the NTP daemon, periodically queries a public time server and adjusts your system time as needed. It's a simple lightweight protocol that is easy to set up for basic use. Systemd has barged into NTP territory with the systemd-timesyncd.service, which acts as a client to ntpd.
Before messing with NTP, let's take a minute to check that current time settings are correct.
There are (at least) two timekeepers on your system: system time, which is managed by the Linux kernel, and the hardware clock on your motherboard, which is also called the real-time clock (RTC). When you enter your system BIOS, you see the hardware clock time and you can change its settings. When you install a new Linux, and in some graphical time managers, you are asked if you want your RTC set to the UTC (Coordinated Universal Time) zone. It should be set to UTC, because all time zone and daylight savings time calculations are based on UTC. Use the hwclock command to check:
$ sudo hwclock --debug
hwclock from util-linux 2.27.1
Using the /dev interface to the clock.
Hardware clock is on UTC time
Assuming hardware clock is kept in UTC time.
Waiting for clock tick...
...got clock tick
Time read from Hardware Clock: 2018/01/22 22:14:31
Hw clock time : 2018/01/22 22:14:31 = 1516659271 seconds since 1969
Time since last adjustment is 1516659271 seconds
Calculated Hardware Clock drift is 0.000000 seconds
Mon 22 Jan 2018 02:14:30 PM PST .202760 seconds
"Hardware clock is kept in UTC time" confirms that your RTC is on UTC, even though it translates the time to your local time. If it were set to local time it would report "Hardware clock is kept in local time."
You should have a /etc/adjtime file. If you don't, sync your RTC to system time:
$ sudo hwclock -w
This should generate the file, and the contents should look like this example:
$ cat /etc/adjtime
0.000000 1516661953 0.000000
1516661953
UTC
The new-fangled systemd way is to run timedatectl, which does not need root permissions:
$ timedatectl
Local time: Mon 2018-01-22 14:17:51 PST
Universal time: Mon 2018-01-22 22:17:51 UTC
RTC time: Mon 2018-01-22 22:17:51
Time zone: America/Los_Angeles (PST, -0800)
Network time on: yes
NTP synchronized: yes
RTC in local TZ: no
"RTC in local TZ: no" confirms that it is on UTC time. What if it is on local time? There are, as always, multiple ways to change it. The easy way is with a nice graphical configuration tool, like YaST in openSUSE. You can use timedatectl:
$ timedatectl set-local-rtc 0
Or edit /etc/adjtime, replacing UTC with LOCAL.

systemd-timesyncd Client

Now I'm tired, and we've just gotten to the good part. Who knew timekeeping was so complex? We haven't even scratched the surface; read man 8 hwclock to get an idea of how time is kept on computers.
Systemd provides the systemd-timesyncd.service client, which queries remote time servers and adjusts your system time. Configure your servers in /etc/systemd/timesyncd.conf. Most Linux distributions provide a default configuration that points to time servers that they maintain, like Fedora:
[Time]
#NTP=
#FallbackNTP=0.fedora.pool.ntp.org 1.fedora.pool.ntp.org
You may enter any other servers you desire, such as your own local NTP server, on the NTP= line in a space-delimited list. (Remember to uncomment this line.) Anything you put on the NTP= line overrides the fallback.
What if you are not using systemd? Then you need only NTP.

Setting up NTP Server and Client

It is a good practice to set up your own LAN NTP server, so that you are not pummeling public NTP servers from all of your computers. On most Linuxes NTP comes in the ntp package, and most of them provide /etc/ntp.conf to configure the service. Consult NTP Pool Time Servers to find the NTP server pool that is appropriate for your region. Then enter 4-5 servers in your /etc/ntp.conf file, with each server on its own line:
driftfile   /var/ntp.drift
logfile /var/log/ntp.log
server 0.europe.pool.ntp.org
server 1.europe.pool.ntp.org
server 2.europe.pool.ntp.org
server 3.europe.pool.ntp.org
The driftfile tells ntpd where to store the information it needs to quickly synchronize your system clock with the time servers at startup, and your logs should have their own home instead of getting dumped into the syslog. Use your Linux distribution defaults for these files if it provides them.
Now start the daemon; on most Linuxes this is sudo systemctl start ntpd. Let it run for a few minutes, then check its status:
$ ntpq -p
remote refid st t when poll reach delay offset jitter
==============================================================
+dev.smatwebdesi 192.168.194.89 3 u 25 64 37 92.456 -6.395 18.530
*chl.la 127.67.113.92 2 u 23 64 37 75.175 8.820 8.230
+four0.fairy.mat 35.73.197.144 2 u 22 64 37 116.272 -10.033 40.151
-195.21.152.161 195.66.241.2 2 u 27 64 37 107.559 1.822 27.346
I have no idea what any of that means, other than your daemon is talking to the remote time servers, and that is what you want. To permanently enable it, run sudo systemctl enable ntpd. If your Linux doesn't use systemd then it is your homework to figure out how to run ntpd.
Now you can set up systemd-timesyncd on your other LAN hosts to use your local NTP server, or install NTP on them and enter your local server in their /etc/ntp.conf files.
NTP servers take a beating, and demand continually increases. You can help by running your own public NTP server. Come back next week to learn how.
Learn more about Linux through the free "Introduction to Linux" course from The Linux Foundation and edX.

How to secure Nginx with Let’s Encrypt on CentOS 7

https://www.cyberciti.biz/faq/how-to-secure-nginx-lets-encrypt-on-centos-7

How do I secure my Nginx web server with Let’s Encrypt free ssl certificate on my CentOS 7 or RHEL 7 server? How to configure Nginx with Let’s Encrypt on CentOS 7?

Let’s Encrypt is a free, automated, and open certificate authority for your website or any other projects. This page shows how to use Let’s Encrypt to install a free SSL certificate for Nginx web server. You will learn how to properly deploy Diffie-Hellman on your server to get SSL labs A+ score on a CentOS/RHEL 7.

How to secure Nginx with Let’s Encrypt on CentOS 7

Our sample setup is as follows:
How to secure configure Nginx with Let's Encrypt on CentOS RHEL 7

The procedure for obtaining an SSL certificate is as follows:
  1. Get acme.sh software:
    git clone https://github.com/Neilpang/acme.sh.git
  2. Create /.well-known/acme-challenge/ directory:
    mkdir -p /var/www/html/.well-known/acme-challenge/
  3. Obtain an SSL certificate for your domain:
    acme.sh --issue -w /DocumentRootPath/ -d your-domain
  4. Configure TLS/SSL on Nginx:
    vi /etc/nginx/sites-available/default
  5. Set up a cron job for auto renewal
  6. Open port 443 (HTTPS):
    sudo firewall-cmd --add-service=https
Let us see how to install acme.sh client and use it on a CentOS/RHEL 7 to get an SSL certificate from Let’s Encrypt.

Step 1 – Install the required software

Install the git, wget, curl and bc packages with the yum command:
$ sudo yum install git bc wget curl

Step 2 – Install acme.sh Let’s Encrypt client

Clone the repo:
$ cd /tmp/
$ git clone https://github.com/Neilpang/acme.sh.git

clone acme.sh git
Install acme.sh client on to your system, run:
$ cd acme.sh/
$ sudo -i
# ./acme.sh --install

install acme.sh client on centos 7 or rhel 7
After install, you must close the current terminal and reopen it to make the alias take effect. Or simply type the following source command:
$ source ~/.bashrc

Step 3 – Create acme-challenge directory

Type the following mkdir command. Make sure you set D to the actual DocumentRoot path as per your needs:
# D=/usr/share/nginx/html
# mkdir -vp ${D}/.well-known/acme-challenge/
###---[ NOTE: Adjust permission as per your setup ]---###
# chown -R nginx:nginx ${D}/.well-known/acme-challenge/
# chmod -R 0555 ${D}/.well-known/acme-challenge/

Also create directory to store SSL certificate:
# mkdir -p /etc/nginx/ssl/cyberciti.biz/

Step 4 – Create dhparams.pem file

Run openssl command:
# cd /etc/nginx/ssl/cyberciti.biz/
# openssl dhparam -out dhparams.pem -dsaparam 4096

Step 5 – Obtain a certificate for domain

Issue a certificate for your domain:
acme.sh --issue -w /path/to/www/htmlRoot/ -d example.com -k 2048
sudo acme.sh --issue -w /usr/local/nginx/html -d server2.cyberciti.biz -k 2048

CentOS Obtain Let's Encrypt certificate for domain

Step 6 – Configure Nginx

You just successfully requested an SSL Certificate from Let’s Encrypt for your CentOS 7 or RHEL 7 server. It is time to configure it. Edit default.ssl.conf:
$ sudo vi /etc/nginx/conf.d/default.ssl.conf
Append the following config:
## START: SSL/HTTPS server2.cyberciti.biz ###
server {
#------- Start SSL config with http2 support ----#
listen 10.21.136.134:443 http2;
server_name server2.cyberciti.biz;
ssl on;
ssl_certificate /etc/nginx/ssl/cyberciti.biz/server2.cyberciti.biz.cer;
ssl_certificate_key /etc/nginx/ssl/cyberciti.biz/server2.cyberciti.biz.key;
ssl_session_timeout 30m;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS;
ssl_session_cache shared:SSL:10m;
ssl_dhparam /etc/nginx/ssl/cyberciti.biz/dhparams.pem;
ssl_prefer_server_ciphers on;
 
## Improves TTFB by using a smaller SSL buffer than the nginx default
ssl_buffer_size 8k;
 
## Enables OCSP stapling
ssl_stapling on;
resolver 8.8.8.8;
ssl_stapling_verify on;
 
## Send header to tell the browser to prefer https to http traffic
add_header Strict-Transport-Security max-age=31536000;
 
## SSL logs ##
access_log /var/log/nginx/ssl_access.log;
error_log /var/log/nginx/ssl_error.log;
#-------- END SSL config -------##
# Add rest of your config below like document root, php and more ##
location / {
root /usr/share/nginx/html;
index index.html index.htm;
}
# Allow php apps
location ~ \.php$ {
root /usr/share/nginx/html;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
}
## END SSL server2.cyberciti.biz ######
Save and close the file in vi/vim text editor.

Step 7 – Install certificate

Install the issued cert to nginx server:
# acme.sh --installcert -d server2.cyberciti.biz \
--keypath /etc/nginx/ssl/cyberciti.biz/server2.cyberciti.biz.key \
--fullchainpath /etc/nginx/ssl/cyberciti.biz/server2.cyberciti.biz.cer \
--reloadcmd 'systemctl reload nginx'

install let us encrupt certifcate in rhel 7
Make sure the port is open with the ss command or netstat command:
# ss -tulpn

Step 8 – Firewall configuration

You need to open port 443 (HTTPS) on your server so that clients can connect it. Update the rules as follows:
$ sudo firewall-cmd --add-service=https
$ sudo firewall-cmd --runtime-to-permanent

Step 9 – Test it

Fire a web browser and type your domain such as:
https://server2.cyberciti.biz
Test it with SSLlabs test site:
https://www.ssllabs.com/ssltest/analyze.html?d=server2.cyberciti.biz
RHEL CentOS 7 Nginx SSL Labs A+ Test result for Nginx with Lets Encrypt Certificate

Step 10 – acme.sh commands

List all certificates:
# acme.sh --list
Renew a cert for the domain named server2.cyberciti.biz:
# acme.sh --renew -d server2.cyberciti.biz
Please note that a cron job will try to renew the certificate for you, too. This is installed by default (no action required on your part). To see the job, run:
# crontab -l
Sample outputs:
8 0 * * * "/root/.acme.sh"/acme.sh --cron --home "/root/.acme.sh" > /dev/null
Upgrade acme.sh client:
# acme.sh --upgrade
Getting help:
# acme.sh --help | more
This entry is 3 of 3 in the Linux, Nginx, MySQL, PHP (LEMP) Stack for CentOS/RHEL 7 Tutorial series. Keep reading the rest of the series:
  1. How to install and use Nginx on CentOS 7 / RHEL 7
  2. How to install PHP 7.2 on CentOS 7/RHEL 7
  3. How to configure Nginx with Let's Encrypt on CentOS 7

This entry is 4 of 4 in the Secure Web Server with Let's Encrypt Tutorial series. Keep reading the rest of the series:
  1. How to configure Nginx with Let's Encrypt on Debian/Ubuntu Linux
  2. How to secure Lighttpd with Let's Encrypt certificate on Debian/Ubuntu
  3. How to secure Nginx with Let's Encrypt certificate on Alpine Linux
  4. How to configure Nginx with Let's Encrypt on CentOS 7

8 ways to generate random password in Linux

https://kerneltalks.com/tips-tricks/8-ways-to-generate-random-password-in-linux

Learn 8 different ways to generate random password in Linux using Linux native commands or third party utilities.
Different ways to generate password in Linux

In this article, we will walk you through different ways to generate a random password in the Linux terminal. A few of them use native Linux commands and others use third-party tools or utilities which can easily be installed on a Linux machine. Here we look at native commands like openssl, dd, md5sum, tr, and urandom, and third-party tools like mkpasswd, randpw, pwgen, spw, gpg, xkcdpass, Diceware, Revelation, KeePassX, and PasswordMaker.
These are actually ways to get a random alphanumeric string which can be utilized as a password. Random passwords can be used for new users so that there will be uniqueness no matter how large your user base is. Without any further delay, let's jump into these different ways to generate a random password in Linux.

Generate password using mkpasswd utility

mkpasswd comes with the install of the expect package on RHEL-based systems. On Debian-based systems, mkpasswd comes with the whois package. Trying to install the mkpasswd package directly results in the error No package mkpasswd available. on RHEL systems and E: Unable to locate package mkpasswd on Debian-based systems.
So install their parent packages as mentioned above and you are good to go.
Run mkpasswd to get passwords.
The command behaves differently on different systems, so work accordingly. There are many switches which can be used to control length and other parameters. You can explore them in the man pages.

Generate password using openssl

openssl comes built in with almost all Linux distributions. We can use its rand function to generate an alphanumeric string which can be used as a password.
Here, we use the rand function with base64 encoding; the last argument tells openssl how many random bytes to generate before base64-encoding them.
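For example, the following one-liner (the byte count of 9 is an arbitrary choice; base64 expands it to 12 printable characters):

```shell
# 9 random bytes, base64-encoded: yields a 12-character password.
openssl rand -base64 9
```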

Generate password using urandom

The device file /dev/urandom is another source of random characters. We use the tr command to filter its output and trim it to get a random string to use as a password.
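A sketch of this approach (the character class and the 12-character length are just illustrative choices):

```shell
# Keep only alphanumeric bytes from /dev/urandom and stop after 12 of them.
tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 12
echo    # add a trailing newline for readability
```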

dd command to generate password

We can even use the /dev/urandom device along with the dd command to get a string of random characters.
We need to pass the output through base64 encoding to make it human-readable. You can play with the count value to get the desired length. For much cleaner output, redirect stderr to /dev/null.
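A clean version of the dd approach might look like this (reading 9 bytes is an arbitrary choice):

```shell
# Read 9 random bytes, silence dd's transfer statistics, base64-encode.
dd if=/dev/urandom bs=9 count=1 2>/dev/null | base64
```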

Using md5sum to generate password

Another way to get an array of random characters usable as a password is to calculate an MD5 checksum. As you know, a checksum value does look like random characters grouped together, so we can use it as a password. Make sure you use a varying source so that you get a different checksum every time you run the command; for example, the date command always yields changing output.
Here we passed the date command output to md5sum and got the checksum hash. You can use the cut command to get the desired length of output.
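Putting those pieces together, one possible pipeline (the 10-character cut is arbitrary):

```shell
# Nanosecond timestamp -> md5 hash -> first 10 hex characters as the password.
date +%s%N | md5sum | cut -c1-10
```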

Generate password using pwgen

The pwgen package comes from repositories like EPEL. pwgen is more focused on generating passwords which are pronounceable but are not dictionary words or plain English. You may not find it in the standard distribution repo. Install the package and run the pwgen command. Boom!
You will be presented with a list of passwords in your terminal! What else do you want? Ok, you still want to explore. pwgen comes with many custom options which can be found in its man page.

Generate password using gpg tool

GPG is an OpenPGP encryption and signing tool. The gpg tool mostly comes pre-installed (at least it is on my RHEL 7). But if not, you can look for the gpg or gpg2 package and install it.
Use the command below to generate a password with the gpg tool.
Here we pass the generate-random-bytes switch (--gen-random) with quality 1 (first argument) and a count of 12 (second argument). The switch --armor ensures the output is base64-encoded.

Generate password using xkcdpass

The famous geek-humor website xkcd published a very interesting post about memorable but still complex passwords (you can view it here). The xkcdpass tool took inspiration from this post and did its work! It's a Python package, available on Python's official website.
All installation and usage instructions are mentioned on that page. Here are the install steps and outputs from my test RHEL server for your reference.
Running the xkcdpass command will give you a random set of dictionary words.
You can use these words as input to other commands, like md5sum, to get a random password, or you can even use the Nth letter of each word to form your password!
Or you can use all those words together as one long password, which is easy for a user to remember and very hard to crack with a computer program.
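For instance, the first-letter trick can be scripted like this (the word list is a made-up sample, not real xkcdpass output):

```shell
# Join the first letter of each word into a short password.
words="correct horse battery staple"
pass=""
for w in $words; do
  pass="$pass$(printf '%s' "$w" | cut -c1)"
done
echo "$pass"    # -> chbs
```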

There are tools like Diceware, KeePassX, Revelation, PasswordMaker for Linux which can be considered for making strong random passwords.

What Is bashrc and Why Should You Edit It

https://www.maketecheasier.com/what-is-bashrc


There are a number of hidden files tucked away in your home directory. If you run macOS or a popular Linux distribution, you’ll see a file named “.bashrc” up near the top of your hidden files. What is bashrc, and why is editing bashrc useful?
finder-find-bashrc
If you run a Unix-based or Unix-like operating system, you likely have bash as your default shell. While many different shells exist, bash is both the most common and, likely, the most popular. If you don't know what that means: bash interprets your typed input in the Terminal program and runs commands based on that input. It allows for some degree of customization using scripting, which is where bashrc comes in.
In order to load your preferences, bash runs the contents of the bashrc file at each launch. This shell script is found in each user’s home directory. It’s used to save and load your terminal preferences and environmental variables.
Terminal preferences can contain a number of different things. Most commonly, the bashrc file contains aliases that the user always wants available. Aliases allow the user to refer to commands by shorter or alternative names, and can be a huge time-saver for those that work in a terminal regularly.
terminal-edit-bashrc-1
You can edit bashrc in any terminal text editor. We will use nano in the following examples.
To edit bashrc using nano, invoke the following command in Terminal:
nano ~/.bashrc
If you’ve never edited your bashrc file before, you might find that it’s empty. That’s fine! If not, you can feel free to put your additions on any line.
Any changes you make to bashrc will be applied next time you launch terminal. If you want to apply them immediately, run the command below:
source ~/.bashrc
You can add to bashrc wherever you like, but feel free to use comments (preceded by #) to organize your code.
Edits in bashrc have to follow bash’s scripting format. If you don’t know how to script with bash, there are a number of resources you can use online. This guide represents a fairly comprehensive introduction into the aspects of bashrc that we couldn’t mention here.
There’s a couple of useful tricks you can do to make your terminal experience more efficient and user-friendly.

Bash Prompt

The bash prompt allows you to style up your terminal and have it show prompts when you run a command. A customized bash prompt can indeed make your work on the terminal more productive and efficient.
Check out some of the useful and interesting bash prompts you can add to your bashrc.

Aliases

terminal-edit-bashrc-3
Aliases can also allow you to access a favored form of a command with a shorthand code. Let's take the command ls as an example. By default, ls displays the contents of your directory. That's useful, but it's often more useful to know more about the directory, or to see its hidden contents. As such, a common alias is ll, which is set to run ls -lha or something similar. That will display the most details about files, revealing hidden files and showing file sizes in "human readable" units instead of blocks.
You’ll need to format your aliases like so:
alias ll="ls -lha"
Type the text you want to replace on the left, and the command on the right between quotes. You can use this to create shorter versions of commands, guard against common typos, or force a command to always run with your favored flags. You can also circumvent annoying or easy-to-forget syntax with your own preferred shorthand. Here are some of the commonly used aliases you can add to your bashrc.
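As a sketch, a small block of such aliases in your bashrc might look like this (the names and flags are common conventions, not requirements):

```shell
alias ll='ls -lha'              # detailed listing, hidden files, readable sizes
alias ..='cd ..'                # go up one directory
alias grep='grep --color=auto'  # always highlight matches
```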

Functions

terminal-edit-bashrc-2
In addition to shorthand command names, you can combine multiple commands into a single operation using bash functions. They can get pretty complicated, but they generally follow this syntax:
function_name (){
command_1
command_2
}
The command below combines mkdir and cd. Typing md folder_name creates a directory named “folder_name” in your working directory and navigates into it immediately.
md () {
  mkdir -p "$1"
  cd "$1"
}
The $1 you see in the function represents the first argument, which is the text you type immediately after the function name.
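Here is the same function with conventional spacing and quoting, plus a quick demonstration (the mktemp directory is used only for the demo):

```shell
md () {
  mkdir -p "$1" && cd "$1"
}

d=$(mktemp -d)
md "$d/project"   # creates the directory and enters it
pwd               # prints the path of the new directory
```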
Unlike some terminal customization tricks, messing with bashrc is fairly straightforward and low-risk. If you mess anything up, you can always delete the bashrc file completely and start over. Try it out now and you will be amazed at your improved productivity.

How Much Swap Should You Use in Linux?

https://itsfoss.com/swap-size

How much should be the swap size? Should the swap be double of the RAM size or should it be half of the RAM size? Do I need swap at all if my system has got several GBs of RAM?
Perhaps these are the most common asked questions about choosing swap size while installing Linux.
It’s nothing new. There has always been a lot of confusion around swap size.
For a long time, the recommended swap size was double the RAM size, but that golden rule is not applicable to modern computers anymore. We have systems with RAM sizes up to 128 GB; many old computers don't even have that much hard disk space.
But what swap size would you allot to a system with 32 GB of RAM? 64 GB? That would be a ridiculous waste of hard disk space, wouldn't it?
Before we see how much swap size you should have, let’s first quickly know a thing or two about swap memory. This will help you understand why swap is used.
The explanation has been simplified for (almost) everyone’s understanding.

What is swap? When is swap used?

Your system uses Random Access Memory (aka RAM) when it runs an application. When there are only a few applications running your system manages with the available RAM.
But if there are too many applications running or if the applications need a lot of RAM, then your system gets into trouble. If an application needs more memory but entire RAM is already in use, the application will crash.
Swap acts as a breather to your system when the RAM is exhausted. What happens here is that when the RAM is exhausted, your Linux system uses part of the hard disk memory and allocates it to the running application.
That sounds cool. This means if you allocate like 50GB of swap size, your system can run hundreds or perhaps thousands of applications at the same time? WRONG!
You see, speed matters here. RAM accesses data in the order of nanoseconds, an SSD in microseconds, and a normal hard disk in milliseconds. This means that RAM is roughly 1,000 times faster than an SSD and on the order of a million times faster than the usual HDD.
If an application relies too much on the swap, its performance will degrade as it cannot access the data at the same speed as it would have in RAM. So instead of taking 1 second for a task, it may take several minutes to complete the same task. It will leave the application almost useless. This is known as thrashing in computing terms.
In other words, a little swap is helpful. A lot of it will be of no good use.

Why is swap needed?

There are several reasons why you would need swap.
  • If your system has RAM less than 1 GB, you must use swap as most applications would exhaust the RAM soon.
  • If your system uses resource heavy applications like video editors, it would be a good idea to use some swap space as your RAM may be exhausted here.
  • If you use hibernation, then you must add swap because the content of the RAM will be written to the swap partition. This also means that the swap size should be at least the size of RAM.
  • Avoid strange events like a program going nuts and eating RAM.

Do you need swap if you have lots of RAM?

This is a good question indeed. If you have 32GB or 64 GB of RAM, chances are that your system would perhaps never use the entire RAM and hence it would never use the swap partition.
But will you take the chance? I am guessing if your system has 32GB of RAM, it should also have a hard disk of hundreds of GB. Allocating a couple of GB for swap won't hurt. It will provide an extra layer of 'stability' if a faulty program starts misusing RAM.

Can you use Linux without swap?

Yes, you can, especially if your system has plenty of RAM. But as explained in the previous section, a little bit of swap is always advisable.

How much should be the swap size?

Now comes the big question. What should be the ideal swap space for a Linux install?
And the problem here is that there is no definite answer to this swap size question. There are just recommendations.
Different people have a different opinion on ideal swap size. Even the major Linux distributions don’t have the same swap size guideline.
If you go by Red Hat’s suggestion, they recommend a swap size of 20% of RAM for modern systems (i.e. 4GB or higher RAM).
CentOS has a different recommendation for the swap partition size. It suggests swap size to be:
  • Twice the size of RAM if RAM is less than 2 GB
  • Size of RAM + 2 GB if RAM size is more than 2 GB i.e. 5GB of swap for 3GB of RAM
Ubuntu has an entirely different perspective on the swap size as it takes hibernation into consideration. If you need hibernation, a swap of the size of RAM becomes necessary for Ubuntu.
Otherwise, it recommends:
  • If RAM is less than 1 GB, swap size should be at least the size of RAM and at most double the size of RAM
  • If RAM is more than 1 GB, swap size should be at least equal to the square root of the RAM size and at most double the size of RAM
  • If hibernation is used, swap size should be equal to size of RAM plus the square root of the RAM size
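The square-root rule is easy to check with a one-liner; for an 8 GB machine, awk's sqrt rounded to the nearest GB gives the minimum recommended swap:

```shell
# Minimum swap (GB) per Ubuntu's no-hibernation rule: sqrt(RAM in GB).
ram_gb=8
awk -v r="$ram_gb" 'BEGIN { printf "%.0f\n", sqrt(r) }'   # -> 3
```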
Confused? I know it is confusing. This is why I have created this table that will give you the Ubuntu-recommended swap size based on your RAM size and hibernation needs.
RAM Size   Swap Size (Without Hibernation)   Swap Size (With Hibernation)
256MB      256MB                             512MB
512MB      512MB                             1GB
1GB        1GB                               2GB
2GB        1GB                               3GB
3GB        2GB                               5GB
4GB        2GB                               6GB
6GB        2GB                               8GB
8GB        3GB                               11GB
12GB       3GB                               15GB
16GB       4GB                               20GB
24GB       5GB                               29GB
32GB       6GB                               38GB
64GB       8GB                               72GB
128GB      11GB                              139GB

How much swap size do you use?

The answer is never simple. As I stated earlier, for a long time, swap has been recommended to be of double the size of RAM. In fact my Dell XPS 13 Ubuntu edition has 16GB of swap size for an 8GB of RAM. So even Dell decided to go with the golden rule of swap=2xRAM.
What swap size do you prefer for your Linux system?

Linux whereis Command Explained for Beginners (5 Examples)

https://www.howtoforge.com/linux-whereis-command

Sometimes, while working on the command line, we just need to quickly find out the location of the binary file for a command. Yes, the find command is an option in this case, but it's a bit time consuming and will likely produce some non-desired results as well. There's a specific command that's designed for this purpose: whereis.
In this article, we will discuss the basics of this command using some easy to understand examples. But before we do that, it's worth mentioning that all examples in this tutorial have been tested on Ubuntu 16.04 LTS.

Linux whereis command

The whereis command lets users locate binary, source, and manual page files for a command. Following is its syntax:
whereis [options] [-BMS directory... -f] name...
And here's how the tool's man page explains it:
whereis locates the binary, source and manual files for the specified command names. The supplied names are first stripped of leading pathname components and any (single) trailing extension of the form .ext (for example: .c). Prefixes of s. resulting from use of source code control are also dealt with. whereis then attempts to locate the desired program in the standard Linux places, and in the places specified by $PATH and $MANPATH.
The following Q&A-styled examples should give you a good idea on how the whereis command works.

Q1. How to find location of binary file using whereis?

Suppose you want to find the location for, let's say, the whereis command itself. Then here's how you can do that:
whereis whereis
How to find location of binary file using whereis
Note that the first path in the output is what you are looking for. The whereis command also produces paths for manual pages and source code (if available, which isn't in this case). So the second path you see in the output above is the path to the whereis manual file(s).

Q2. How to specifically search for binaries, manuals, or source code?

If you want to search specifically for, say binary, then you can use the -b command line option. For example:
whereis -b cp
How to specifically search for binaries, manuals, or source code
Similarly, the -m and -s options are used in case you want to find manuals and sources.

Q3. How to limit whereis search as per requirement?

By default whereis tries to find files from hard-coded paths, which are defined with glob patterns. However, if you want, you can limit the search using specific command line options. For example, if you want whereis to only search for binary files in /usr/bin, then you can do this using the -B command line option.
whereis -B /usr/bin/ -f cp
Note: Since you can pass multiple paths this way, the -f command line option terminates the directory list and signals the start of file names.
Similarly, if you want to limit manual or source searches, you can use the -M and -S command line options.
Q4. How to see paths that whereis uses for search?

There's an option for this as well. Just run the command with the -l option.
whereis -l
Here is the list (partial) it produced for us:
How to see paths that whereis uses for search

Q5. How to find command names with unusual entries?

For whereis, a command becomes unusual if it does not have just one entry of each explicitly requested type. For example, commands with no documentation available, or those with documentation in multiple places are considered unusual. The -u command line option, when used, makes whereis show the command names that have unusual entries.
For example, the following command should display files in the current directory which have no documentation file, or more than one.
whereis -m -u *

Conclusion

Agreed, whereis is not the kind of command line tool that you'll require very frequently. But when the situation arises, it definitely makes your life easy. We've covered some of the important command line options the tool offers, so do practice them. For more info, head to its man page.

A step-by-step guide to Git

https://opensource.com/article/18/1/step-step-guide-git

Don't be nervous. This beginner's guide will quickly and easily get you started using Git.

A step-by-step guide to Git
Image by : 
Internet Archive Book Images. Modified by Opensource.com. CC BY-SA 4.0

If you've never used Git, you may be nervous about it. There's nothing to worry about—just follow along with this step-by-step getting-started guide, and you will soon have a new Git repository hosted on GitHub.
Before we dive in, let's clear up a common misconception: Git isn't the same thing as GitHub. Git is a version-control system (i.e., a piece of software) that helps you keep track of your computer programs and files and the changes that are made to them over time. It also allows you to collaborate with your peers on a program, code, or file. GitHub and similar services (including GitLab and BitBucket) are websites that host a Git server program to hold your code.

Step 1: Create a GitHub account

The easiest way to get started is to create an account on GitHub.com (it's free).
Pick a username (e.g., octocat123), enter your email address and a password, and click Sign up for GitHub. Once you are in, it will look something like this:

Step 2: Create a new repository

A repository is like a place or a container where something is stored; in this case we're creating a Git repository to store code. To create a new repository, select New Repository from the + sign dropdown menu (you can see I've selected it in the upper-right corner in the image above).
Enter a name for your repository (e.g., "Demo") and click Create Repository. Don't worry about changing any other options on this page.
Congratulations! You have set up your first repo on GitHub.com.

Step 3: Create a file

Once your repo is created, it will look like this:
Don't panic, it's simpler than it looks. Stay with me. Look at the section that starts "...or create a new repository on the command line," and ignore the rest for now.
Open the Terminal program on your computer.
Type git and hit Enter. If it says bash: git: command not found, then install Git with the command for your Linux operating system or distribution. Check the installation by typing git and hitting Enter; if it's installed, you should see a bunch of information about how you can use the command.
In the terminal, type:
mkdir Demo
This command will create a directory (or folder) named Demo.
Change your terminal to the Demo directory with the command:
cd Demo
Then enter:
echo "#Demo" >> README.md
This creates a file named README.md and writes #Demo in it. To check that the file was created successfully, enter:
cat README.md
This will show you what is inside the README.md file, if the file was created correctly. Your terminal will look like this:
To tell your computer that Demo is a directory managed by the Git program, enter:
git init
Then, to tell the Git program you care about this file and want to track any changes from this point forward, enter:
git add README.md

Step 4: Make a commit

So far you've created a file and told Git about it, and now it's time to create a commit. Commit can be thought of as a milestone. Every time you accomplish some work, you can write a Git commit to store that version of your file, so you can go back later and see what it looked like at that point in time. Whenever you make a change to your file, you create a new version of that file, different from the previous one. To make a commit, enter:
git commit -m "first commit"
That's it! You just created a Git commit and included a message that says first commit. You must always include a message with a commit; it not only helps you identify the commit, but it also enables you to understand what you did with the file at that point. So tomorrow, if you add a new piece of code to your file, you can write a commit message that says, Added new code, and when you come back in a month to look at your commit history or Git log (the list of commits), you will know what you changed in the files.
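The whole sequence from Steps 3 and 4 can be replayed in a scratch directory; the user name and email flags below are placeholders so the commit succeeds even without a global Git config:

```shell
cd "$(mktemp -d)"                     # scratch directory instead of ~/Demo
git init -q
echo "#Demo" >> README.md
git add README.md
git -c user.name="Demo User" -c user.email="demo@example.com" \
    commit -q -m "first commit"
git log --oneline                     # shows the new commit's message
```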

Step 5: Connect your GitHub repo with your computer

Now, it's time to connect your computer to GitHub with the command:
git remote add origin https://github.com//Demo.git
Let's look at this command step by step. We are telling Git to add a remote called origin with the address https://github.com//Demo.git (i.e., the URL of your Git repo on GitHub.com). This allows you to interact with your Git repository on GitHub.com by typing origin instead of the full URL and Git will know where to send your code. Why origin? Well, you can name it anything else if you'd like.
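As a side note, remotes can be inspected, renamed, or removed at any time; here is a small sketch (upstream-demo is a hypothetical remote name and the URL a placeholder):

```shell
git remote add upstream-demo https://example.com/user/Demo.git
git remote -v                          # list each remote with its fetch/push URL
git remote rename upstream-demo demo   # origin is just a name; rename it freely
git remote remove demo                 # detach the repository from that remote
```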
Now we have connected our local copy of the Demo repository to its remote counterpart on GitHub.com. Your terminal looks like this:
Now that we have added the remote, we can push our code (i.e., upload our README.md file) to GitHub.com, typically with git push -u origin master.
Once you are done, your terminal will look like this:
And if you go to https://github.com//Demo you will see something like this:
That's it! You have created your first GitHub repo, connected it to your computer, and pushed (or uploaded) a file from your computer to your repository called Demo on GitHub.com. Next time, I will write about Git cloning (downloading your code from GitHub to your computer), adding new files, modifying existing files, and pushing (uploading) files to GitHub.

How To Manage NodeJS Packages Using Npm

https://www.ostechnix.com/manage-nodejs-packages-using-npm


Manage NodeJS Packages Using Npm
A while ago, we published a guide to manage Python packages using PIP. Today, we are going to discuss how to manage NodeJS packages using Npm. NPM is the largest software registry, containing over 600,000 packages. Every day, developers around the world share and download packages through npm. In this guide, I will explain the basics of working with npm, such as installing packages (locally and globally), installing a specific version of a package, updating, removing and managing NodeJS packages, and so on.

Manage NodeJS Packages Using Npm

Installing NPM

Since npm is written in NodeJS, we need to install NodeJS in order to use npm. To install NodeJS on different Linux distributions, refer to the following link.
Once installed, ensure that NodeJS and NPM have been properly installed. There are a couple of ways to do this.
To check where node has been installed:
$ which node
/home/sk/.nvm/versions/node/v9.4.0/bin/node
Check its version:
$ node -v
v9.4.0
Log in to Node REPL session:
$ node
> .help
.break Sometimes you get stuck, this gets you out
.clear Alias for .break
.editor Enter editor mode
.exit Exit the repl
.help Print this help message
.load Load JS from a file into the REPL session
.save Save all evaluated commands in this REPL session to a file
> .exit
Check where npm is installed:
$ which npm
/home/sk/.nvm/versions/node/v9.4.0/bin/npm
And the version:
$ npm -v
5.6.0
Great! Node and NPM have been installed and are working! As you may have noticed, I have installed NodeJS and NPM in my $HOME directory to avoid permission issues while installing modules globally. This is the method recommended by the NodeJS team.
Well, let us go ahead and see how to manage NodeJS modules (or packages) using npm.

Installing NodeJS modules

NodeJS modules can be installed either locally or globally (system-wide). First, I am going to show how to install a package locally.
Install packages locally
To manage packages locally, we normally use package.json file.
First, let us create our project directory.
$ mkdir demo
$ cd demo
Create a package.json file inside your project’s directory. To do so, run:
$ npm init
Enter the details of your package such as name, version, author, github page etc., or just hit ENTER key to accept the default values and type YES to confirm.
This utility will walk you through creating a package.json file.
It only covers the most common items, and tries to guess sensible defaults.

See `npm help json` for definitive documentation on these fields
and exactly what they do.

Use `npm install ` afterwards to install a package and
save it as a dependency in the package.json file.

Press ^C at any time to quit.
package name: (demo)
version: (1.0.0)
description: demo nodejs app
entry point: (index.js)
test command:
git repository:
keywords:
author:
license: (ISC)
About to write to /home/sk/demo/package.json:

{
"name": "demo",
"version": "1.0.0",
"description": "demo nodejs app",
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\"&& exit 1"
},
"author": "",
"license": "ISC"
}

Is this ok? (yes) yes
The above command initializes your project and creates the package.json file.
You can also do this non-interactively using the command:
npm init --yes
This will quickly create a package.json file with default values, without user interaction.
Now let us install package named commander.
$ npm install commander
Sample output:
npm notice created a lockfile as package-lock.json. You should commit this file.
npm WARN demo@1.0.0 No repository field.

+ commander@2.13.0
added 1 package in 2.519s
This will create a directory named "node_modules" (if it doesn't exist already) in the project's root directory and download the packages into it.
Let us check the package.json file.
$ cat package.json 
{
"name": "demo",
"version": "1.0.0",
"description": "demo nodejs app",
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\"&& exit 1"
},
"author": "",
"license": "ISC",
"dependencies": {
"commander": "^2.13.0"
}
}
You will see that the dependencies have been added. The caret (^) in front of the version number means that, when installing, npm will pull the highest version it can find that is compatible with the listed one: the minor and patch numbers may change, but not the major version (for example, ^2.13.0 matches 2.14.0 but not 3.0.0).
$ ls node_modules/
commander
The advantage of the package.json file is that, if it is present in your project's directory, you can just type "npm install" and npm will look at the dependencies listed in the file and download all of them. You can even share it with other developers or push it to your GitHub repository, so when they type "npm install", they will get exactly the same packages that you have.
You may also have noticed another json file named package-lock.json. This file ensures that the dependencies remain the same on all systems the project is installed on.
To use the installed package in your program, create a file index.js (or any name of your choice) in the project's directory with the actual code, and then run it using the command:
$ node index.js
Install packages globally
If you want to use a package as a command line tool, then it is better to install it globally. This way, it works no matter which directory you are in.
$ npm install async -g
+ async@2.6.0
added 2 packages in 4.695s
Or,
$ npm install async --global
To install a specific version of a package, we do:
$ npm install async@2.6.0 --global

Updating NodeJS modules

To update the local packages, go to the project's directory where the package.json is located and run:
$ npm update
Then, run the following command to check whether any packages are still outdated:
$ npm outdated
If everything is up to date, the command returns nothing.
To find out which global packages need to be updated, run:
$ npm outdated -g --depth=0
If there is no output, then all packages are updated.
To update a single global package, run:
$ npm update -g 
To update all global packages, run:
$ npm update -g 

Listing NodeJS modules

To list the local packages, go to the project's directory and run:
$ npm list
demo@1.0.0 /home/sk/demo
└── commander@2.13.0
As you see, I have installed “commander” package in local mode.
To list global packages, run this command from any location:
$ npm list -g
Sample output:
/home/sk/.nvm/versions/node/v9.4.0/lib
├─┬ async@2.6.0
│ └── lodash@4.17.4
└─┬ npm@5.6.0
├── abbrev@1.1.1
├── ansi-regex@3.0.0
├── ansicolors@0.3.2
├── ansistyles@0.1.3
├── aproba@1.2.0
├── archy@1.0.0
[...]
This command will list all modules and their dependencies.
To list only the top level modules, use the --depth=0 option:
$ npm list -g --depth=0
/home/sk/.nvm/versions/node/v9.4.0/lib
├── async@2.6.0
└── npm@5.6.0

Searching NodeJS modules

To search for a module, use the "npm search" command:
npm search 
Example:
$ npm search request
This command will display all modules that contains the search string “request”.

Removing NodeJS modules

To remove a local package, go to the project's directory and run the following command to remove the package from your node_modules directory:
$ npm uninstall 
To also remove it from the dependencies in the package.json file, use the --save flag as shown below:
$ npm uninstall --save 
To remove a globally installed package, run:
$ npm uninstall -g 

Cleaning NPM cache

By default, when installing a package, NPM keeps a copy of it in a cache folder named .npm in your $HOME directory, so next time you can install it without having to download it again.
To view the cached modules:
$ ls ~/.npm
The cache folder fills up with old packages over time, so it is a good idea to clean it from time to time.
As of npm@5, the npm cache self-heals from corruption issues and data extracted from the cache is guaranteed to be valid. If you want to make sure everything is consistent, run:
$ npm cache verify
To clear the entire cache, run:
$ npm cache clean --force

Viewing NPM configuration

To view the npm configuration, type:
$ npm config list
Or,
$ npm config ls
Sample output:
; cli configs
metrics-registry = "https://registry.npmjs.org/"
scope = ""
user-agent = "npm/5.6.0 node/v9.4.0 linux x64"

; node bin location = /home/sk/.nvm/versions/node/v9.4.0/bin/node
; cwd = /home/sk
; HOME = /home/sk
; "npm config ls -l" to show all defaults.
To display the current global location:
$ npm config get prefix
/home/sk/.nvm/versions/node/v9.4.0
And, that's all for now. What we have just covered here is just the basics. NPM is a vast topic; for more details, head over to the NPM Getting Started guide.
Hope this was useful. More good stuffs to come. Stay tuned!
Cheers!

Creating a simple GTK+ ToDo application with Ruby

http://iridakos.com/tutorials/2018/01/25/creating-a-gtk-todo-application-with-ruby

Lately I was experimenting with GTK and its Ruby bindings and I decided to write a tutorial introducing this functionality. In this post we are going to create a simple ToDo application (something like what we created here with Ruby on Rails) using the gtk3 gem a.k.a. the GTK+ Ruby bindings.
Note: The code of the tutorial is available at GitHub.

What is GTK+

Quoting the toolkit’s page:
GTK+, or the GIMP Toolkit, is a multi-platform toolkit for creating graphical user interfaces. Offering a complete set of widgets, GTK+ is suitable for projects ranging from small one-off tools to complete application suites.
..and about its creation:
GTK+ was initially developed for and used by the GIMP, the GNU Image Manipulation Program. It is called the “The GIMP ToolKit” so that the origins of the project are remembered. Today it is more commonly known as GTK+ for short and is used by a large number of applications including the GNU project’s GNOME desktop.

Prerequisites

GTK+ version

Make sure you have GTK+ installed.
The OS in which I developed the tutorial’s application is Ubuntu 16.04 which has GTK+ installed by default (version: 3.18).
You can check yours with the following command:
dpkg -l libgtk-3-0

Ruby

You should have ruby installed on your system. I use RVM to manage multiple ruby versions installed on my system. If you want to go with that too, you can find instructions for installing the tool on its homepage, and for installing ruby versions (a.k.a. rubies) on the related documentation page.
This tutorial is using Ruby 2.4.2. You can check yours using: ruby --version or via RVM with rvm list.
rvm list screenshot

Glade

Again, quoting the tool’s page
Glade is a RAD tool to enable quick & easy development of user interfaces for the GTK+ toolkit and the GNOME desktop environment
We will use Glade to design the user interface of our application. If you are on Ubuntu, install glade with:
sudo apt install glade

gtk3 gem

This gem provides the Ruby bindings for the GTK+ toolkit. In other words, it allows us to talk to the GTK+ API using the Ruby language.
Install the gem via:
gem install gtk3

The application specs

We will build an application that:
  • it will have a user interface (desktop application)
  • it will allow users to set miscellaneous properties on each item (such as priority)
  • it will allow users to create and edit ToDo items
    • all items will be saved as files in the user’s home directory in a folder named .gtk-todo-tutorial
  • it will allow users to archive ToDo items
    • archived items should be put in their own folder archived

The application structure

gtk-todo-tutorial        # root directory
|-- application
|   |-- ui               # everything related to the ui of the application
|   |-- models           # our models
|   |-- lib              # the directory to host any utilities we might need
|-- resources            # directory to host the resources of our application
|-- gtk-todo             # the executable that will start our application
Let’s start!

Building the ToDo application

Initializing the application

Create a directory in which we will save all files that the application will need. As shown in the section above, I named mine gtk-todo-tutorial.
In there create a file named gtk-todo (that’s right, no extension) and add the following:
#!/usr/bin/env ruby

require 'gtk3'

app = Gtk::Application.new 'com.iridakos.gtk-todo', :flags_none

app.signal_connect :activate do |application|
  window = Gtk::ApplicationWindow.new(application)
  window.set_title 'Hello GTK+Ruby!'
  window.present
end

puts app.run
This is going to be the script that will start the application.
Note the shebang in the first line. This is how we define which interpreter must be used to execute the script under UNIX/Linux operating systems. This way, we don’t have to use ruby gtk-todo but just the script’s name gtk-todo.
Don’t try it yet though because we haven’t changed the mode of the file so as to be executable. To do so, type the following command in a terminal after navigating to the application’s root directory:
chmod +x ./gtk-todo # make the script executable
Now from the console execute:
./gtk-todo # execute the script
Ta daaaaaa
first gtk+ruby screenshot
Notes
  • The application object we defined above and all of the GTK+ widgets in general, emit signals to trigger events. Once an application starts running for example, it emits a signal to trigger the activate event. All we have to do is to define what we want to happen when this signal is emitted. We accomplished this by using the signal_connect instance method and passing it a block whose code will be executed upon the given event. We will be doing this a lot throughout the tutorial.
  • When we initialized the Gtk::Application object we passed two parameters:
    • com.iridakos.gtk-todo: this is our application’s id and in general it should be a reverse DNS style identifier. For more information about its usage and best practices check here.
    • :flags_none: this is a flag defining the behavior of the application. In our case, we used the default behavior. Check here all the flags and the type of applications they define. You can use the Ruby equivalent flags as defined in Gio::ApplicationFlags.constants. For example, instead of using the :flags_none we could instead use Gio::ApplicationFlags::FLAGS_NONE
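The signal mechanism described in the notes above can be sketched in plain Ruby without GTK+ (Emitter is a hypothetical stand-in, not part of the gtk3 gem): handlers are blocks registered per signal name and invoked when the signal is emitted.

```ruby
# A toy signal emitter mimicking the GTK+ connect/emit pattern.
class Emitter
  def initialize
    @handlers = Hash.new { |hash, key| hash[key] = [] }
  end

  # Register a block to run when the given signal is emitted
  def signal_connect(signal, &block)
    @handlers[signal] << block
  end

  # Invoke every handler registered for the signal
  def emit(signal, *args)
    @handlers[signal].each { |handler| handler.call(*args) }
  end
end

app = Emitter.new
app.signal_connect(:activate) { |name| puts "activated by #{name}" }
app.emit(:activate, 'demo') # prints "activated by demo"
```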
Suppose the application object we previously created (Gtk::Application) had a lot of things to do when the activate signal was emitted, or that we wanted to connect to more signals. We would end up with a huge gtk-todo script file that is hard to read and maintain. Time to refactor.
As described in the The application structure section, create a folder named application along with its sub-folders ui, models and lib.
  • In the ui folder we will place all files related to our user interface.
  • In the models folder we will place all files related to our models.
  • In the lib folder we will place all other files that don’t belong to either of the aforementioned folders.
We are going to define a new subclass of the Gtk::Application class for our application. Create a file named application.rb under application/ui/todo with the following contents.
module ToDo
  class Application < Gtk::Application
    def initialize
      super 'com.iridakos.gtk-todo', Gio::ApplicationFlags::FLAGS_NONE

      signal_connect :activate do |application|
        window = Gtk::ApplicationWindow.new(application)
        window.set_title 'Hello GTK+Ruby!'
        window.present
      end
    end
  end
end
and change the gtk-todo script accordingly:
#!/usr/bin/env ruby

require 'gtk3'

app = ToDo::Application.new

puts app.run
Much cleaner, isn’t it? Yeah, but it doesn’t work. You should be getting something like:
./gtk-todo:5:in `<main>': uninitialized constant ToDo (NameError)
The problem is that we haven’t required any of the ruby files placed in the application folder. Change the script file as follows and execute it again.
#!/usr/bin/env ruby

require 'gtk3'

# Require all ruby files in the application folder recursively
application_root_path = File.expand_path(__dir__)
Dir[File.join(application_root_path, '**', '*.rb')].each { |file| require file }

app = ToDo::Application.new

puts app.run
You should be fine.

Resources

At the beginning of this tutorial we said that we would use Glade to design the user interface of the application. Glade actually produces xml files with the appropriate elements and attributes that reflect what we designed via its user interface. We somehow need to make use of these files so that our application gets the UI we designed.
These files are resources for the application and the GResource API provides a way for packing them all together in a binary file and afterwards accessing them from inside the application with advantages as opposed to manually having to deal with already loaded resources, their location on the file system etc. Read more about the API here.

Describing the resources

First, we need to create a file describing the resources of the application. Create a file named gresources.xml and place it directly under the resources folder.


<?xml version="1.0" encoding="UTF-8"?>
<gresources>
  <gresource prefix="/com/iridakos/gtk-todo">
    <file preprocess="xml-stripblanks">ui/application_window.ui</file>
  </gresource>
</gresources>

In this “description” we actually say: we have a resource which is located under the ui directory (relative to this xml file) with name application_window.ui. Before loading this resource please remove the blanks. Thanks. Of course this is not going to work now since we haven’t created the resource via Glade yet. Don’t worry though, one thing at a time.
Note: the xml-stripblanks directive will use the xmllint command to remove the blanks. In Ubuntu you have to install the package libxml2-utils to obtain it.

Building the resources binary file

In order to produce the binary resources file, we are going to use another utility of the GLib library called glib-compile-resources. Check if you have it installed with dpkg -l libglib2.0-bin. You should be seeing something like this:
ii  libglib2.0-bin     2.48.2-0ubuntu amd64          Programs for the GLib library
If not, then install the package (sudo apt install libglib2.0-bin in Ubuntu).
Let’s build the file. We will add the code in our script so that the resources are getting built every time we execute it. Change the gtk-todo script as follows.
#!/usr/bin/env ruby

require 'gtk3'
require 'fileutils'

# Require all ruby files in the application folder recursively
application_root_path = File.expand_path(__dir__)
Dir[File.join(application_root_path, '**', '*.rb')].each { |file| require file }

# Define the source & target files of the glib-compile-resources command
resource_xml = File.join(application_root_path, 'resources', 'gresources.xml')
resource_bin = File.join(application_root_path, 'gresource.bin')

# Build the binary
system("glib-compile-resources",
       "--target", resource_bin,
       "--sourcedir", File.dirname(resource_xml),
       resource_xml)

at_exit do
  # Before exiting, please remove the binary we produced, thanks.
  FileUtils.rm_f(resource_bin)
end

app = ToDo::Application.new
puts app.run
and execute it. This happens in the console and it’s fine, we’ll fix it later on:
/.../gtk-todo-tutorial/resources/gresources.xml: Failed to locate 'ui/application_window.ui' in any source directory.
What we did:
  • added a require statement for the fileutils library so that we can use it in the at_exit call
  • defined the source and target files of the glib-compile-resources command
  • executed the glib-compile-resources command
  • set a hook so that before the script exits (before the application exits) the binary file gets deleted, so next time it gets built again

Loading the resources binary file

Ok, we described the resources and we packed them in a binary file. Now we have to load them and register them in the application so that we can use them. This is as easy as adding the following two lines before the at_exit hook.
resource = Gio::Resource.load(resource_bin)
Gio::Resources.register(resource)
That's it. From now on, we are able to use the resources from anywhere inside the application (we'll see how later on). For now the script fails, since the binary it tries to load has not been produced yet, but… be patient, we are going to get to the interesting part soon. Actually now.

Designing the main application window

Introducing glade

Open Glade.
Glade empty project screen
A quick description of what you see.
  • On the left section there is a list of widgets which you can drag and drop into the middle section, provided a widget can be placed there. For example, you can't add a top level window inside a label widget. I will call this the Widget section from now on.
  • On the middle section you see your widgets as they will appear (most of the time) in the application. I will call this the Design section from now on.
  • On the right section there are two subsections:
    • the top section contains the hierarchy of the widgets as added to the resource. I will call this the Hierarchy section from now on.
    • the bottom section contains all the properties that you can configure via Glade for the selected widget of the aforementioned top section. I will call this the Properties section from now on.
I will try to describe the steps for building this tutorial’s UI using Glade but if you are interested in building GTK+ applications you should take a look at the resources & tutorials for using the tool on the official page.

Create the application window design

We are going to create the application window. As you can guess, all we have to do is drag the widget ‘Application Window’ from the widget section to the design section.
Glade application window
Gtk::Builder is an object used in GTK+ applications to read textual descriptions of a user interface (like the one we will build via Glade) and build the described objects-widgets.
In the properties section, the first property is the ID and it has a default value of applicationWindow1. If we left this property as is, then later on in our code we would create a Gtk::Builder that loads the file produced by Glade, and in order to obtain the application window we would have to use something like:
application_window = builder.get_object('applicationWindow1')

application_window.signal_connect 'whatever' do |a, b|
  ...
The application_window object would be of class Gtk::ApplicationWindow and thus whatever we had to add to its behavior (like setting its title) would take place out of the original class. Also, as shown in the snippet above, the code to connect to a signal of the window would be placed inside the file that instantiated it.
Good news though: a GTK+ feature introduced in 2013 allows the creation of composite widget templates which, among other advantages, allows us to define a custom class for the widget (one that eventually derives from an existing Gtk::Widget class). Don't worry if you are confused. You are going to understand what is going on after we write some code and view the results.
Now, in order to define our design as a template, check the Composite checkbox in the properties section. Note that the ID property changed to Class Name. Fill in TodoApplicationWindow. This is the class we are going to create in our code to represent this widget.
Glade application window composite
Save the file with the name application_window.ui in a new folder named ui inside the resources folder. If you open the file in an editor you will see this:



<?xml version="1.0" encoding="UTF-8"?>
<interface>
  <requires lib="gtk+" version="3.12"/>
  <template class="TodoApplicationWindow" parent="GtkApplicationWindow">
  </template>
</interface>
As you can see, our template has a class and a parent attribute. Following the class name convention, our class has to be defined inside a module named Todo. Before getting there, let's try to start the application by executing the script (./gtk-todo).
Yeah! It starts!

Create the application window class

While running the application, if you check the contents of the application's root directory you will see the gresource.bin file there. Even though the application starts successfully because the resource bin is present and can be registered, we do not actually use it yet. We still instantiate an ordinary Gtk::ApplicationWindow in our application.rb file and that's all we show. Time to create our custom application window class.
Create a file named application_window.rb, place it under application/ui/todo folder and add the following content.
module Todo
  class ApplicationWindow < Gtk::ApplicationWindow
    # Register the class in the GLib world
    type_register

    class << self
      def init
        # Set the template from the resources binary
        set_template resource: '/com/iridakos/gtk-todo/ui/application_window.ui'
      end
    end

    def initialize(application)
      super application: application

      set_title 'GTK+ Simple ToDo'
    end
  end
end
We defined the init method as a singleton method on the class after opening the eigenclass in order to bind the template of this widget to the previously registered resource file.
Before that, we called the type_register class method, which registers our custom widget class and makes it available to the GLib world.
Finally, each time we create an instance of this window, we set its title to GTK+ Simple ToDo.
Now, let’s go back to the application.rb file and use what we just implemented:
module ToDo
  class Application < Gtk::Application
    def initialize
      super 'com.iridakos.gtk-todo', Gio::ApplicationFlags::FLAGS_NONE

      signal_connect :activate do |application|
        window = Todo::ApplicationWindow.new(application)
        window.present
      end
    end
  end
end
Execute the script.
GTK+ ToDo window

Define the model

For simplicity, we are going to save the ToDo items as files in json format under a dedicated hidden folder in the user's home directory. Of course, in a real life application we would use a database, but that is beyond the scope of this tutorial.
Our Todo::Item model will have the following properties:
  • id: the id of the item
  • title: the title
  • notes: any notes
  • priority: its priority
  • creation_datetime: the date & time the item was created
  • filename: the name of the file that an item is saved to
Create a file named item.rb under the application/models directory, with contents:
require 'securerandom'
require 'json'

module Todo
  class Item
    PROPERTIES = [:id, :title, :notes, :priority, :filename, :creation_datetime].freeze

    PRIORITIES = ['high', 'medium', 'normal', 'low'].freeze

    attr_accessor *PROPERTIES

    def initialize(options = {})
      if user_data_path = options[:user_data_path]
        # New item. When saved, it will be placed under the :user_data_path value
        @id = SecureRandom.uuid
        @creation_datetime = Time.now.to_s
        @filename = "#{user_data_path}/#{id}.json"
      elsif filename = options[:filename]
        # Load an existing item
        load_from_file filename
      else
        raise ArgumentError, 'Please specify the :user_data_path for new item or the :filename to load existing'
      end
    end

    # Loads an item from a file
    def load_from_file(filename)
      properties = JSON.parse(File.read(filename))

      # Assign the properties
      PROPERTIES.each do |property|
        self.send "#{property}=", properties[property.to_s]
      end
    rescue => e
      raise ArgumentError, "Failed to load existing item: #{e.message}"
    end

    # Resolves if an item is new
    def is_new?
      !File.exists? @filename
    end

    # Saves an item to its `filename` location
    def save!
      File.open(@filename, 'w') do |file|
        file.write self.to_json
      end
    end

    # Deletes an item
    def delete!
      raise 'Item is not saved!' if is_new?

      File.delete(@filename)
    end

    # Produces a json string for the item
    def to_json
      result = {}
      PROPERTIES.each do |prop|
        result[prop] = self.send prop
      end

      result.to_json
    end
  end
end
As you can see, we defined methods to:
  • initialize an item
    • as new by defining the :user_data_path in which it will be saved later on
    • as existing by defining the :filename to be loaded from. The filename must be a json file previously generated by an item
  • load an item from a file
  • resolve whether an item is new or not (saved at least once in the :user_data_path or not)
  • save an item by writing its json string to a file
  • delete an item
  • produce the json string of an item as a hash of its properties
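The save/load round-trip these methods implement can be sketched with the standard library alone (a simplified stand-in for Todo::Item, using a temporary directory instead of the real :user_data_path):

```ruby
require 'securerandom'
require 'json'
require 'tmpdir'

Dir.mktmpdir do |user_data_path|
  id = SecureRandom.uuid
  filename = File.join(user_data_path, "#{id}.json")

  # What Item#save! does: serialize the properties hash to the item's file
  item = { 'id' => id, 'title' => 'Buy milk', 'priority' => 'high' }
  File.write(filename, item.to_json)

  # What Item#load_from_file does: parse the file back into properties
  loaded = JSON.parse(File.read(filename))
  puts loaded['title'] # prints "Buy milk"
end
```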

Add a new item

Create the button

Let’s add a button to our application window for adding a new item. Open the resources/ui/application_window.ui file in glade.
  • Drag a Button from the widget section to the design section.
  • In the properties section, set its ID value to add_new_item_button.
  • Near the bottom of the General tab in the properties section there’s a text area just below the Label with optional image option. Change its value from button to Add new item
  • Save the file and execute the script
Add new item button in application window
Don’t worry, we will improve the design later on. Now, let’s see how to connect functionality to our button’s clicked event.
First, we have to update our application window class so that it learns about its new child, the button with id add_new_item_button. Then, we can access the child to alter its behavior.
Change the init method as follows:
def init
  # Set the template from the resources binary
  set_template resource: '/com/iridakos/gtk-todo/ui/application_window.ui'

  bind_template_child 'add_new_item_button'
end
Pretty simple, right? The bind_template_child method does exactly what it says and from now on every instance of our Todo::ApplicationWindow class will have an add_new_item_button method to access the related button. So, let’s alter the initialize method as follows.
def initialize(application)
  super application: application

  set_title 'GTK+ Simple ToDo'

  add_new_item_button.signal_connect 'clicked' do |button, application|
    puts "OMG! I AM CLICKED"
  end
end
As you can see, we access the button by the add_new_item_button method and we define what we want to take place when clicked. Restart the application and try clicking it. In the console you should see the message OMG! I AM CLICKED every time you click the button.
What we actually want to happen when we click this button, though, is to show a new window through which we will save a ToDo item. You guessed right. Glade o'clock.

Create the new item window

  • Create a new project in Glade by pressing the most left icon of the top bar or by selecting File > New from the application menu.
  • Drag a Window from the widget section to the design area.
  • Check its Composite property and name the class TodoNewItemWindow.
GTK+ Todo new item window empty
  • Drag a Grid from the widget section and place it in the window we added in the previous steps.
  • Set its rows number to 5 and its columns number to 2 in the window that popped up.
  • In the General tab of its properties window, set its Rows spacing and Columns spacing to 10 (the numbers are in pixels).
  • In the Common tab of the properties section, set the Widget Spacing > Margins > Top, Bottom, Left, Right all to 10 so that the contents are not stuck to the borders of the window.
GTK+ Todo new item window with grid
  • Drag four times a Label widget from the widget section and place them in each row of the Grid.
  • Change their Label property from top to bottom as:
    • Id:
    • Title:
    • Notes:
    • Priority:
  • In the General tab of the properties section, change the Alignment and Padding > Alignment > Horizontal property from 0.50 to 1 for each label. This will align the label text to the right.
  • This step is optional but I suggest that you do it: We will not bind these labels in our window since we don’t need to alter their state or behavior. So in this context, we don’t need to set a descriptive id for each of them like we did for the add_new_item_button button in the application window. BUT. We are going to add more elements to our design and the hierarchy of the widgets in Glade will be hard to read with all the label1, label2 defaults. So set a descriptive id on each to make our lives easier (like id_label, title_label, notes_label, priority_label). I even set the grid’s id to main_grid cause I don’t like seeing numbers in ids or variable names :)
GTK+ Todo new item with grid and labels
  • Drag a Label from the widget section to the second column of the grid’s first row. The id is automatically generated by our model thus we won’t allow editing so a label to display it is more than enough.
  • Set the ID property to id_value_label.
  • Set the Alignment and Padding > Alignment > Horizontal property to 0 so that the text aligns on the left.
  • We are going to bind this widget to our window class so that we can change its text each time we load the window; setting a label through Glade is therefore not needed, but doing so makes the design look closer to what it’ll look like when rendered with actual data. So you can optionally set a label here to whatever suits you better. I set mine to id-of-the-todo-item-here.
GTK+ Todo new item with grid and labels
  • Drag a Text Entry from the widget section to the second column of the second row of the grid.
  • Set its ‘ID’ property to title_text_entry. As you may have noticed, I prefer obtaining the widget type in the id so that the code in the class becomes more readable later on.
  • In the Common tab of the properties section, check the Widget Spacing > Expand > Horizontal checkbox and turn on the switch which is right next to it. This way, the widget will expand horizontally every time its parent (a.k.a. the grid) is resized.
GTK+ Todo new item with grid and labels
  • Drag a Text View from the widget section to the second column of the third row of the grid.
  • Set its ID to notes. Nope, just testing you. Set its ID property to notes_text_view.
  • In the Common tab of the properties section, check the Widget Spacing > Expand > Horizontal, Vertical checkboxes and turn on the switches which are right next to them. This way, the widget will expand horizontally and vertically every time its parent (a.k.a. the grid) is resized.
GTK+ Todo new item with grid and labels
  • Drag a Combo Box from the widget section to the second column of the fourth row of the grid.
  • Set its ID to priority_combo_box.
  • In the Common tab of the properties section, check the Widget Spacing > Expand > Horizontal checkbox and turn on the switch which is right next to it. This way, the widget will expand horizontally every time its parent (a.k.a. the grid) is resized.
  • This widget is actually a drop down element and we are going to populate its values that can be selected by the user when it shows up inside our window class.
GTK+ Todo new item with grid and labels
  • Drag a Button Box from the widget section to the second column of the last row of the grid.
  • In the popped up window, set the number of items to 2.
  • In the General tab of the properties section set the Box Attributes > Orientation property to Horizontal.
  • In the General tab of the properties section set the Box Attributes > Spacing property to 10.
  • In the Common tab of the properties section set the Widget Spacing > Alignment > Horizontal to Center.
  • Again, this widget won’t be altered by our code but you can give it a descriptive ID for readability. I named mine actions_box
GTK+ Todo new item with grid and labels
  • Drag a Button widget twice and place it to each of the two boxes of the button box widget we added in the previous step.
  • Set their ID properties to cancel_button & save_button respectively.
  • In the General tab of the properties window, set their Button Content > Label with optional image property to Cancel and Save respectively.
GTK+ Todo new item with grid and labels
The window is ready. Save the file under resources/ui/new_item_window.ui.
Time to port it in our application.

Implement the new item window class

Before implementing the new class, we must update our GResource description file a.k.a. resources/gresources.xml to obtain the new resource:


<?xml version="1.0" encoding="UTF-8"?>
<gresources>
  <gresource prefix="/com/iridakos/gtk-todo">
    <file preprocess="xml-stripblanks">ui/application_window.ui</file>
    <file preprocess="xml-stripblanks">ui/new_item_window.ui</file>
  </gresource>
</gresources>

Now we can create the new window class. Create a file under application/ui/todo named new_item_window.rb and set its contents as follows.
module Todo
  class NewItemWindow < Gtk::Window
    # Register the class in the GLib world
    type_register

    class << self
      def init
        # Set the template from the resources binary
        set_template resource: '/com/iridakos/gtk-todo/ui/new_item_window.ui'
      end
    end

    def initialize(application)
      super application: application
    end
  end
end
Nothing special here. We just changed the template resource to point to the correct file of our resources.
We have to change the add_new_item_button code that executes on the clicked signal to show the new item window. Go ahead and change that code in application_window.rb to this:
add_new_item_button.signal_connect 'clicked' do |button|
  new_item_window = NewItemWindow.new(application)
  new_item_window.present
end
Let’s see what we have done. Start the application and click on the Add new item button. Ta daaaaaaa
GTK+ Todo new item with grid and labels
Of course, nothing happens when pressing the buttons. We will change that.
First we will bind the ui widgets in the Todo::NewItemWindow class.
Change the init method to this:
def init
  # Set the template from the resources binary
  set_template resource: '/com/iridakos/gtk-todo/ui/new_item_window.ui'

  # Bind the window's widgets
  bind_template_child 'id_value_label'
  bind_template_child 'title_text_entry'
  bind_template_child 'notes_text_view'
  bind_template_child 'priority_combo_box'
  bind_template_child 'cancel_button'
  bind_template_child 'save_button'
end
This window is going to be shown either when creating a new Todo item or when editing an existing one, so the new_item_window naming is not very accurate. This was intentional though so that we refactor the code later (No it was not :D I made a mistake when writing the tutorial. In any case, we’ll refactor later on).
For now, we will update the initialize method of the window to require one extra parameter, the Todo::Item to be created or edited. We can then set a more meaningful window title and change the children widgets to reflect the current item.
Change the initialize method to this:
def initialize(application, item)
  super application: application
  set_title "ToDo item #{item.id} - #{item.is_new? ? 'Create' : 'Edit'} Mode"

  id_value_label.text = item.id
  title_text_entry.text = item.title if item.title
  notes_text_view.buffer.text = item.notes if item.notes

  # Configure the combo box
  model = Gtk::ListStore.new(String)
  Todo::Item::PRIORITIES.each do |priority|
    iterator = model.append
    iterator[0] = priority
  end

  priority_combo_box.model = model
  renderer = Gtk::CellRendererText.new
  priority_combo_box.pack_start(renderer, true)
  priority_combo_box.set_attributes(renderer, "text" => 0)

  priority_combo_box.set_active(Todo::Item::PRIORITIES.index(item.priority)) if item.priority
end
and add the constant PRIORITIES in the application/models/item.rb file just below the PROPERTIES constant as:
PRIORITIES = ['high', 'medium', 'normal', 'low'].freeze
What did we do here?
  • We set the window’s title to a string containing the id of the current item and the mode depending on whether the item is now being created or edited.
  • We set the id_value_label text to display the current item’s id.
  • We set the title_text_entry text to display the current item’s title.
  • We set the notes_text_view text to display the current item’s notes.
  • We create a model for the priority_combo_box whose entries are going to have only one String value. At a first sight, a Gtk::ListStore model might look a little confusing. I will try to explain how it works now.
    • Suppose we want to display in a combo box a list of country codes and their respective country names.
    • We would create a Gtk::ListStore defining that its entries would consist of two string values: one for the country code and one for the country name. Thus we would initialize the ListStore as:
      model = Gtk::ListStore.new(String, String)
    • In order to fill the model with data we would do something like this (make sure you don’t miss the comments in the snippet):
      [['gr', 'Greece'], ['jp', 'Japan'], ['nl', 'Netherlands']].each do |country_pair|
        entry = model.append
        # Each entry has two string positions since that's how we initialized the Gtk::ListStore
        # Store the country code in position 0
        entry[0] = country_pair[0]
        # Store the country name in position 1
        entry[1] = country_pair[1]
      end
    • We also had to configure the combo box to render two text columns/cells (again, make sure you don’t miss the comments in the snippet):
      country_code_renderer = Gtk::CellRendererText.new
      # Add the first renderer
      combo.pack_start(country_code_renderer, true)
      # Use the value in index 0 of each model entry a.k.a. the country code
      combo.set_attributes(country_code_renderer, 'text' => 0)

      country_name_renderer = Gtk::CellRendererText.new
      # Add the second renderer
      combo.pack_start(country_name_renderer, true)
      # Use the value in index 1 of each model entry a.k.a. the country name
      combo.set_attributes(country_name_renderer, 'text' => 1)
    • I hope that made it a little clearer…
  • We add a simple text renderer in the combo box and instruct it to display the one and only value of each model’s entry (a.k.a. position 0). Imagine that our model is something like [['high'],['medium'],['normal'],['low']] and 0 is actually the first element of each sub-array. I will stop with the model-combo-text-renderer explanations now…
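The single-column model idea can be illustrated with plain Ruby, no GTK required (this is just a sketch, not part of the tutorial's code): each model entry is a one-element array, the text renderer reads position 0, and the index of a priority in the PRIORITIES array doubles as its row index in the combo box.

```ruby
# Plain-Ruby sketch of the single-column combo box model.
PRIORITIES = ['high', 'medium', 'normal', 'low'].freeze

# Building the "model" mirrors appending one iterator per priority
# to the Gtk::ListStore: each entry holds a single String value.
model = PRIORITIES.map { |priority| [priority] }

# The text renderer displays position 0 of each entry.
puts model.map { |entry| entry[0] }.inspect

# PRIORITIES.index gives the row to activate for a saved priority,
# which is exactly what set_active receives in the window code.
puts PRIORITIES.index('normal')
```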

Configure the user data path

Remember that when initializing a new Todo::Item (not an existing one) we had to define a :user_data_path in which it would be saved. We are going to resolve this path when the application starts and make it accessible from all the widgets.
All we have to do is check if the .gtk-todo-tutorial path exists inside the user’s home ~ directory. If not, then we will create it. Then we set this as an instance variable of the application. All widgets have access to the application instance. Sooooo….all widgets have access to this user path variable.
Change the application/application.rb file to this:
module Todo
  class Application < Gtk::Application
    attr_reader :user_data_path

    def initialize
      super 'com.iridakos.gtk-todo', Gio::ApplicationFlags::FLAGS_NONE

      @user_data_path = File.expand_path('~/.gtk-todo-tutorial')
      unless File.directory?(@user_data_path)
        puts "First run. Creating user's application path: #{@user_data_path}"
        FileUtils.mkdir_p(@user_data_path)
      end

      signal_connect :activate do |application|
        window = Todo::ApplicationWindow.new(application)
        window.present
      end
    end
  end
end
One last thing that we have to do before testing what we have done so far is to instantiate the Todo::NewItemWindow when the add_new_item_button is clicked complying with the changes we made a.k.a. change the code in application_window.rb to this:
add_new_item_button.signal_connect 'clicked' do |button|
  new_item_window = NewItemWindow.new(application, Todo::Item.new(user_data_path: application.user_data_path))
  new_item_window.present
end
Start the application and click on the Add new item button. Ta daaaaaa (note the - Create mode part in the title).
New item window

Cancel the item creation/update

In order to close the Todo::NewItemWindow window when user clicks the cancel_button all we have to do is to add this to the window’s initialize method:
cancel_button.signal_connect 'clicked' do |button|
  close
end
close is an instance method of the Gtk::Window class that surprisingly enough closes the window.

Save the item

Saving an item involves two steps:
  • Update the item’s properties based on the widgets’ values
  • Call the save! method on the Todo::Item instance
Again, our code will be placed in the initialize method of the Todo::NewItemWindow:
save_button.signal_connect 'clicked' do |button|
  item.title = title_text_entry.text
  item.notes = notes_text_view.buffer.text
  item.priority = priority_combo_box.active_iter.get_value(0) if priority_combo_box.active_iter
  item.save!
  close
end
Note that we again close the window after saving the item.
Let’s try that out.
New item window
Pressing save and navigating to your ~/.gtk-todo-tutorial folder you should see a file there. Mine had the following contents:
{
  "id": "3d635839-66d0-4ce6-af31-e81b47b3e585",
  "title": "Optimize the priorities model creation",
  "notes": "It doesn't have to be initialized upon each window creation.",
  "priority": "high",
  "filename": "/home/iridakos/.gtk-todo-tutorial/3d635839-66d0-4ce6-af31-e81b47b3e585.json",
  "creation_datetime": "2018-01-25 18:09:51 +0200"
}
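As a plain-Ruby sketch of what producing such a file boils down to (hypothetical illustration; the tutorial's Item#save! is the real implementation, and it stores more properties than shown here):

```ruby
require 'json'
require 'securerandom'
require 'tmpdir'

# Hypothetical sketch: serialize an item's properties to JSON and
# write them to a file named after the item's generated id, inside
# a temporary directory standing in for ~/.gtk-todo-tutorial.
Dir.mktmpdir do |user_data_path|
  item = {
    'id'       => SecureRandom.uuid,
    'title'    => 'Optimize the priorities model creation',
    'priority' => 'high'
  }

  filename = File.join(user_data_path, "#{item['id']}.json")
  File.write(filename, JSON.pretty_generate(item))

  # Reading the file back yields the same properties.
  puts JSON.parse(File.read(filename)) == item
end
```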
Don’t forget to try out the cancel button as well.
Awesome!!!

View ToDo items

We have left the Todo::ApplicationWindow to contain only one button. Time to change that.
We want the window to have the Add new item on the top but below it there should be a list with all of our todo items. To accomplish that we are going to add a Gtk::ListBox in our design which can contain any number of rows.

Update the application window

  • Open the resources/ui/application_window.ui file in Glade.
  • If you try to drag a List Box widget from the widget section directly on the window nothing happens. That is normal. First we have to split the window in two parts. One part for the button and one for the list box. Bear with me.
  • Right click on the new_item_window in the hierarchy section and select Add parent > Box.
  • In the popped up window, set that you need 2 items.
  • The orientation of the box is already vertical so we are fine.
View todo items
  • Now, drag a List Box and place it on the free area of the previously added box.
  • Set its ID property to todo_items_list_box
  • Set its Selection mode to None since we won’t provide such a functionality.
View todo items

Design the ToDo item list box row

Each row of the list box that we created in the previous step is going to be more complex than a row of text. It is going to contain widgets that will allow the user to expand an item’s notes, and to delete or edit the item.
  • Create a new project in Glade as we did for the new_item_window.ui. Save it under resources/ui/todo_item_list_box_row.ui.
  • Unfortunately, at least in my version of Glade, there is no List Box Row widget in the widget section so in order to add one directly as the top level widget of our project, we will do it in a kinda hackish way.
  • Drag a List Box from the widget section to the design area.
  • Inside the hierarchy section right click on the List Box and select Add Row
View todo items
  • Inside the hierarchy section right click on the newly added List Box Row which is nested under the List Box and select Remove parent. There it is. The List Box Row is the top level widget of the project now.
View todo items
  • Check the widget’s Composite property and set its name to TodoItemListBoxRow.
  • Drag a Box from the widget section to the design area inside our List Box Row.
  • Set 2 items in the popped up window.
  • Set its ID property to main_box
View todo items
  • Drag another Box from the widget section to the first row of the previously added box.
  • Set 2 items in the popped up window.
  • Set its ID property to todo_item_top_box.
  • Set its Orientation property to Horizontal.
  • Set its Spacing (General tab) property to 10.
View todo items
  • Drag a Label from the widget section to the first column of the todo_item_top_box.
  • Set its ID property to todo_item_title_label.
  • Set its Alignment and Padding > Alignment > Horizontal property to 0.00.
  • In the Common tab of the properties section, check the Widget Spacing > Expand > Horizontal checkbox and turn on the switch which is right next to it so that the label expands to available space.
View todo items
  • Drag a Button from the widget section to the second column of the todo_item_top_box.
  • Set its ID property to details_button
  • Check the Button Content > Label with optional image radio and type ... (three dots).
View todo items
  • Drag a Revealer widget from the widget section to the second row of the main_box.
  • Turn off the Reveal Child switch in the General tab.
  • Set its ID property to todo_item_details_revealer.
  • Set its Transition type property to Slide Down.
View todo items
  • Drag a Box from the widget section to the reveal space.
  • Set its items to 2 in the popped up window.
  • Set its ID property to details_box.
  • In the Common tab, set its Widget Spacing > Margins > Top property to 10.
View todo items
  • Drag a Button Box from the widget section to the first row of the details_box.
  • Set its ID property to todo_item_action_box.
  • Set its Layout style property to expand.
View todo items
  • Drag two Button widgets to the first and second column of the todo_item_action_box respectively.
  • Set their ID properties to delete_button and edit_button respectively.
  • Set their Button Content > Label with optional image properties to Delete and Edit respectively.
View todo items
  • Drag a Viewport widget from the widget section to the second row of the details_box.
  • Set its ID property to todo_action_notes_viewport.
  • Drag a Text View widget from the widget section to the todo_action_notes_viewport that we just added.
  • Set its ID property to todo_item_notes_text_view.
  • Uncheck its Editable property in the General tab of the properties section.
View todo items

Create the ToDo item list box row class

Now we will create the class reflecting the user interface of the list box row which we just created.
First we have to update our GResource description file to include the newly created design. Change the resources/gresources.xml file as follows:


<?xml version="1.0" encoding="UTF-8"?>
<gresources>
  <gresource prefix="/com/iridakos/gtk-todo">
    <file preprocess="xml-stripblanks">ui/application_window.ui</file>
    <file preprocess="xml-stripblanks">ui/new_item_window.ui</file>
    <file preprocess="xml-stripblanks">ui/todo_item_list_box_row.ui</file>
  </gresource>
</gresources>

Create a file named item_list_box_row.rb inside the application/ui folder and add the following content.
module Todo
  class ItemListBoxRow < Gtk::ListBoxRow
    type_register

    class << self
      def init
        set_template resource: '/com/iridakos/gtk-todo/ui/todo_item_list_box_row.ui'
      end
    end

    def initialize(item)
      super()
    end
  end
end
We will not bind any children at the moment here.
When starting the application, we have to search for files in the :user_data_path and for each file we must create a Todo::Item instance. For each instance, we must also add a new Todo::ItemListBoxRow to the Todo::ApplicationWindow’s todo_items_list_box list box. One thing at a time.
First of all, let’s bind the todo_items_list_box in the Todo::ApplicationWindow class. Change the init method as follows:
def init
  # Set the template from the resources binary
  set_template resource: '/com/iridakos/gtk-todo/ui/application_window.ui'

  bind_template_child 'add_new_item_button'
  bind_template_child 'todo_items_list_box'
end
Next, we will add an instance method in the same class that will be responsible to load the todo list items in the related list box. Add this code in Todo::ApplicationWindow.
def load_todo_items
  todo_items_list_box.children.each { |child| todo_items_list_box.remove child }

  json_files = Dir[File.join(File.expand_path(application.user_data_path), '*.json')]
  items = json_files.map { |filename| Todo::Item.new(filename: filename) }

  items.each do |item|
    todo_items_list_box.add Todo::ItemListBoxRow.new(item)
  end
end
and then call this method at the end of the initialize method.
def initialize(application)
  super application: application

  set_title 'GTK+ Simple ToDo'

  add_new_item_button.signal_connect 'clicked' do |button|
    new_item_window = NewItemWindow.new(application, Todo::Item.new(user_data_path: application.user_data_path))
    new_item_window.present
  end

  load_todo_items
end
Note: We first make sure we empty the list box of its current children rows and we refill it. This way, we will call this method after saving a Todo::Item via the signal_connect of the save_button of the Todo::NewItemWindow and the parent application window will be reloaded! Here’s the updated code (in application/ui/new_item_window.rb):
save_button.signal_connect 'clicked' do |button|
  item.title = title_text_entry.text
  item.notes = notes_text_view.buffer.text
  item.priority = priority_combo_box.active_iter.get_value(0) if priority_combo_box.active_iter
  item.save!

  close

  # Locate the application window
  application_window = application.windows.find { |w| w.is_a? Todo::ApplicationWindow }
  application_window.load_todo_items
end
Previously, we used this code:
json_files = Dir[File.join(File.expand_path(application.user_data_path), '*.json')]
in order to find all the names of the files that exist in the application user data path with json extension.
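Dir[] expands a glob pattern into an array of matching paths. Here is a small self-contained demonstration of the same call, using a temporary directory instead of the real user data path:

```ruby
require 'tmpdir'
require 'fileutils'

Dir.mktmpdir do |user_data_path|
  # Simulate two saved items plus an unrelated file.
  FileUtils.touch(File.join(user_data_path, 'a.json'))
  FileUtils.touch(File.join(user_data_path, 'b.json'))
  FileUtils.touch(File.join(user_data_path, 'notes.txt'))

  # Only the .json files match the glob pattern.
  json_files = Dir[File.join(File.expand_path(user_data_path), '*.json')]
  puts json_files.map { |f| File.basename(f) }.sort.inspect
end
```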
Let’s see what we’ve created. Start the application and try adding a new ToDo item. After pressing the Save button you should see the parent Todo::ApplicationWindow being automatically updated with the new item!
View todo items
What’s left to do, is to complete the functionality of the Todo::ItemListBoxRow.
We will first bind the widgets. Change the init method of the Todo::ItemListBoxRow class as follows:
def init
  set_template resource: '/com/iridakos/gtk-todo/ui/todo_item_list_box_row.ui'

  bind_template_child 'details_button'
  bind_template_child 'todo_item_title_label'
  bind_template_child 'todo_item_details_revealer'
  bind_template_child 'todo_item_notes_text_view'
  bind_template_child 'delete_button'
  bind_template_child 'edit_button'
end
Then, we are going to setup the widgets based on the item of each row.
def initialize(item)
  super()

  todo_item_title_label.text = item.title || ''

  todo_item_notes_text_view.buffer.text = item.notes

  details_button.signal_connect 'clicked' do
    todo_item_details_revealer.set_reveal_child !todo_item_details_revealer.reveal_child?
  end

  delete_button.signal_connect 'clicked' do
    item.delete!

    # Locate the application window
    application_window = application.windows.find { |w| w.is_a? Todo::ApplicationWindow }
    application_window.load_todo_items
  end

  edit_button.signal_connect 'clicked' do
    new_item_window = NewItemWindow.new(application, item)
    new_item_window.present
  end
end

def application
  parent = self.parent
  parent = parent.parent while !parent.is_a? Gtk::Window
  parent.application
end
  • As you can see, when the details_button is clicked, we instruct the todo_item_details_revealer to swap the visibility of its contents.
  • After deleting an item, we find the application’s Todo::ApplicationWindow in order to call its load_todo_items as we did after saving an item.
  • When clicking the edit button, we create a new instance of the Todo::NewItemWindow passing the current item as the item parameter. Works like a charm :D
  • Finally, we had to reach at the application parent of a list box row so we defined a simple instance method application that navigates through the widget’s parents until it reaches a window from which it can obtain the application object.
Save and run the application. There it is.
View todo items
This has been a really long tutorial and even though there are so many items that we haven’t covered I think we better end it here.
Long post, cat photo.
View todo items

Running a Python application on Kubernetes

https://opensource.com/article/18/1/running-python-application-kubernetes

This step-by-step tutorial takes you through the process of deploying a simple Python application on Kubernetes.

Image by: opensource.com
Kubernetes is an open source platform that offers deployment, maintenance, and scaling features. It simplifies management of containerized Python applications while providing portability, extensibility, and self-healing capabilities.
Whether your Python applications are simple or more complex, Kubernetes lets you efficiently deploy and scale them, seamlessly rolling out new features while limiting resources to only those required.
In this article, I will describe the process of deploying a simple Python application to Kubernetes, including:
  • Creating Python container images
  • Publishing the container images to an image registry
  • Working with persistent volume
  • Deploying the Python application to Kubernetes

Requirements

You will need Docker, kubectl, and this source code.
Docker is an open platform to build and ship distributed applications. To install Docker, follow the official documentation. To verify that Docker runs your system:


$ docker info

Containers: 0

Images: 289

Storage Driver: aufs

 Root Dir: /var/lib/docker/aufs

 Dirs: 289

Execution Driver: native-0.2

Kernel Version: 3.16.0-4-amd64

Operating System: Debian GNU/Linux 8(jessie)

WARNING: No memory limit support

WARNING: No swap limit support


kubectl is a command-line interface for executing commands against a Kubernetes cluster. Run the shell script below to install kubectl:


curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl


Deploying to Kubernetes requires a containerized application. Let's review containerizing Python applications.

Containerization at a glance

Containerization involves enclosing an application in a container together with its runtime dependencies. Unlike full machine virtualization, containers share the host's kernel, which keeps them lightweight while still letting an application run on any machine without concerns about dependencies.
Roman Gaponov's article serves as a reference. Let's start by creating a container image for our Python code.

Create a Python container image

To create these images, we will use Docker, which enables us to deploy applications inside isolated Linux software containers. Docker is able to automatically build images using instructions from a Docker file.
This is a Docker file for our Python application:


FROM python:3.6
MAINTAINER XenonStack

# Creating Application Source Code Directory
RUN mkdir -p /k8s_python_sample_code/src

# Setting Home Directory for containers
WORKDIR /k8s_python_sample_code/src

# Installing python dependencies
COPY requirements.txt /k8s_python_sample_code/src
RUN pip install --no-cache-dir -r requirements.txt

# Copying src code to Container
COPY . /k8s_python_sample_code/src/app

# Application Environment variables
ENV APP_ENV development

# Exposing Ports
EXPOSE 5035

# Setting Persistent data
VOLUME ["/app-data"]

# Running Python Application
CMD ["python", "app.py"]


This Docker file contains instructions to run our sample Python code. It uses the Python 3.6 base image.
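The article does not show app.py itself. A minimal stand-in using only the Python standard library (hypothetical; the actual code in the linked repository may differ) that listens on the port the Dockerfile exposes could look like this:

```python
# Hypothetical minimal app.py: serves a plain-text response on the
# port the Dockerfile exposes (EXPOSE 5035).
from http.server import BaseHTTPRequestHandler, HTTPServer

PORT = 5035

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b'Hello from Kubernetes!\n'
        self.send_response(200)
        self.send_header('Content-Type', 'text/plain')
        self.send_header('Content-Length', str(len(body)))
        self.end_headers()
        self.wfile.write(body)

def run(host='0.0.0.0', port=PORT):
    # Call run() to start serving; this blocks until interrupted.
    HTTPServer((host, port), Handler).serve_forever()
```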

Build a Python Docker image

We can now build the Docker image from these instructions using this command:


docker build -t k8s_python_sample_code .


This command creates a Docker image for our Python application.

Publish the container images

We can publish our Python container image to different private/public cloud repositories, like Docker Hub, AWS ECR, Google Container Registry, etc. For this tutorial, we'll use Docker Hub.
Before publishing the image, we need to tag it with a version and prefix it with our Docker Hub username (replace <your-docker-hub-username> with your own):


docker tag k8s_python_sample_code:latest <your-docker-hub-username>/k8s_python_sample_code:0.1


Push the image to a cloud repository

Using a Docker registry other than Docker Hub to store images requires you to add that container registry to the local Docker daemon and Kubernetes Docker daemons. You can look up this information for the different cloud registries. We'll use Docker Hub in this example.
Execute this Docker command to push the image:


docker push <your-docker-hub-username>/k8s_python_sample_code:0.1


Working with CephFS persistent storage

Kubernetes supports many persistent storage providers, including AWS EBS, CephFS, GlusterFS, Azure Disk, NFS, etc. I will cover Kubernetes persistence storage with CephFS.
To use CephFS for persistent data to Kubernetes containers, we will create two files:
persistent-volume.yml


apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-disk1
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteMany
  cephfs:
    monitors:
      - "172.17.0.1:6789"
    user: admin
    secretRef:
      name: ceph-secret
    readOnly: false


persistent-volume-claim.yml


apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: appclaim1
  namespace: k8s-python-sample-code
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi


We can now use kubectl to add the persistent volume and claim to the Kubernetes cluster:


$ kubectl create -f persistent-volume.yml

$ kubectl create -f persistent-volume-claim.yml


We are now ready to deploy to Kubernetes.

Deploy the application to Kubernetes

To manage the last mile of deploying the application to Kubernetes, we will create two important files: a service file and a deployment file.
Create a file and name it k8s_python_sample_code.service.yml with the following content:


apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: k8s-python-sample-code
  name: k8s-python-sample-code
  namespace: k8s-python-sample-code
spec:
  type: NodePort
  ports:
    - port: 5035
  selector:
    k8s-app: k8s-python-sample-code


Create a file and name it k8s_python_sample_code.deployment.yml with the following content:


apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: k8s-python-sample-code
  namespace: k8s-python-sample-code
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: k8s-python-sample-code
    spec:
      containers:
        - name: k8s-python-sample-code
          image: k8s_python_sample_code:0.1
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5035
          volumeMounts:
            - mountPath: /app-data
              name: app-data
      volumes:
        - name: app-data
          persistentVolumeClaim:
            claimName: appclaim1


Finally, use kubectl to deploy the application to Kubernetes:
$ kubectl create -f k8s_python_sample_code.deployment.yml
$ kubectl create -f k8s_python_sample_code.service.yml
Your application was successfully deployed to Kubernetes.
You can verify whether your application is running by inspecting the running services:


kubectl get services


May Kubernetes free you from future deployment hassles!
Want to learn more about Python? Nanjekye's book, Python 2 and 3 Compatibility offers clean ways to write code that will run on both Python 2 and 3, including detailed examples of how to convert existing Python 2-compatible code to code that will run reliably on both Python 2 and 3.

How to Make a Minecraft Server

https://thishosting.rocks/how-to-make-a-minecraft-server

We’ll show you how to make a Minecraft server with beginner-friendly step-by-step instructions. It will be a persistent multiplayer server that you can play on with your friends from all around the world. You don’t have to be in a LAN.

How to Make a Minecraft Server – Quick Guide

This is our “Table of contents” if you’re in a hurry and want to go straight to the point. We recommend reading everything though.
Before going into the actual instructions, a few things you should know:

Reasons why you would NOT use a specialized Minecraft server hosting provider

Since you’re here, you’re obviously interested in hosting your own Minecraft server. There are more reasons why you would not use a specialized Minecraft hosting provider, but here are a few:
  • They’re slow most of the time. This is because you actually share the resources with multiple users. It becomes overloaded at some point. Most of them oversell their servers too.
  • You don’t have full control over the Minecraft server or the actual server. You cannot customize anything you want to.
  • You’re limited. Those kinds of hosting plans are always limited in one way or another.
Of course, there are positives to using a Minecraft hosting provider. The best upside is that you don’t actually have to do all the stuff we’ll write about below. But where’s the fun in that? 🙂

Why you should NOT use your personal computer to make a Minecraft server

We noticed lots of tutorials showing you how to host a server on your own computer. There are downsides to doing that, like:
  • Your home internet is not secured enough to handle DDoS attacks. Game servers are often prone to DDoS attacks, and your home network setup is most probably not secured enough to handle them. It’s most likely not powerful enough to handle a small attack.
  • You’ll need to handle port forwarding. If you’ve tried making a Minecraft server on your home network, you’ve surely stumbled upon port forwarding and had issues with it.
  • You’ll need to keep your computer on at all times. Your electricity bill will sky-rocket and you’ll add unnecessary load to your hardware. The hardware most servers use is enterprise-grade and designed to handle loads, with improved stability and longevity.
  • Your home internet is not fast enough. Home networks are not designed to handle multiplayer games. You’ll need a much larger internet plan to even consider making a small server. Luckily, data centers have multiple high-speed, enterprise-grade internet connections making sure they have (or strive to have) 100% uptime.
  • Your hardware is most likely not good enough. Again, servers use enterprise-grade hardware, latest and fastest CPUs, SSDs, and much more. Your personal computer most likely does not.
  • You probably use Windows/MacOS on your personal computer. Though this is debatable, we believe that Linux is much better for game hosting. Don’t worry, you don’t really need to know everything about Linux to make a Minecraft server (though it’s recommended). We’ll show you everything you need to know.
Our tip is not to use your personal computer, though technically you can. It’s not expensive to buy a cloud server. We’ll show you how to make a Minecraft server on cloud hosting below. It’s easy if you carefully follow the steps.

Making a Minecraft Server – Requirements

There are a few requirements. You should have and know all of this before continuing to the tutorial:
  • You’ll need a Linux cloud server. We recommend Vultr. Their prices are cheap, services are high-quality, customer support is great, all server hardware is high-end. Check the Minecraft server requirements to find out what kind of server you should get (resources like RAM and disk space). We recommend getting the $20 per month server. They support hourly pricing, so if you only need the server temporarily for playing with friends, you’ll pay less. Choose the Ubuntu 16.04 distro during signup. Choose the closest server location to where your players live during the signup process. Keep in mind that you’ll be responsible for your server, so you’ll have to secure it and manage it. If you don’t want to do that, you can get a managed server, in which case the hosting provider will likely make a Minecraft server for you.
  • You’ll need an SSH client to connect to the Linux cloud server. PuTTy is often recommended for beginners, but we also recommend MobaXTerm. There are many other SSH clients to choose from, so pick your favorite.
  • You’ll need to setup your server (basic security setup at least). Google it and you’ll find many tutorials. You can use Linode’s Security Guide and follow the exact steps on your Vultr server.
  • We’ll handle the software requirements like Java below.
And finally, onto our actual tutorial:

How to Make a Minecraft Server on Ubuntu (Linux)

These instructions are written for and tested on an Ubuntu 16.04 server from Vultr. Though they’ll also work on Ubuntu 14.04, Ubuntu 18.04, and any other Ubuntu-based distro, and any other server provider.
We’re using the default Vanilla server from Minecraft. You can use alternatives like CraftBukkit or Spigot that allow more customizations and plugins. Though if you use too many plugins you’ll essentially ruin the server. There are pros and cons to each one. Nevertheless, the instructions below are for the default Vanilla server to keep things simple and beginner-friendly. We may publish a tutorial for CraftBukkit soon if there’s an interest.



1. Login to your server

We’ll use the root user. If you use a limited user, you’ll have to execute most commands with ‘sudo’. You’ll get a warning if you’re doing something you don’t have sufficient permissions for.
You can login to your server via your SSH client. Use your server IP and your port (most likely 22).
After you log in, make sure you secure your server.

2. Update Ubuntu

You should always first update your Ubuntu before you do anything else. You can update it with the following commands:
apt-get update && apt-get upgrade
Hit “enter” and/or “y” when prompted.

3. Install necessary tools

You’ll need a few packages and tools for various things in this tutorial like text editing, making your server persistent etc. Install them with the following command:
apt-get install nano wget screen bash default-jdk ufw
Some of them may already be installed.

4. Download Minecraft Server

First, create a directory where you’ll store your Minecraft server and all other files:
mkdir /opt/minecraft
And navigate to the new directory:
cd /opt/minecraft
Now you can download the Minecraft Server file. Go to the download page and get the link there. Download the file with wget:
wget https://s3.amazonaws.com/Minecraft.Download/versions/1.12.2/minecraft_server.1.12.2.jar

5. Install the Minecraft server

Once you’ve downloaded the server .jar file, you need to run it once; it will generate some files, including an eula.txt license file. The first time you run it, it will return an error and exit. That’s supposed to happen. Run it with the following command:
java -Xms2048M -Xmx3472M -jar minecraft_server.1.12.2.jar nogui
“-Xms2048M” is the minimum RAM that your Minecraft server can use and “-Xmx3472M” is the maximum. Adjust this based on your server’s resources. If you got the 4GB RAM server from Vultr you can leave them as-is, if you don’t use the server for anything else other than Minecraft.
After that command ends with an error, a new eula.txt file will be generated. You need to accept the license in that file by changing “eula=false” to “eula=true”, which the following command does:
sed -i.orig 's/eula=false/eula=true/g' eula.txt
You can now start the server again and access the Minecraft server console with that same java command from before:
java -Xms2048M -Xmx3472M -jar minecraft_server.1.12.2.jar nogui
Make sure you’re in the /opt/minecraft directory, or the directory where you installed your MC server.
You’re free to stop here if you’re just testing this and need it for the short term. If you’re having trouble logging into the server, you’ll need to configure your firewall.
The first time you successfully start the server, it will take a bit longer to generate the world.
We’ll show you how to create a script so you can start the server with it.

6. Start the Minecraft server with a script, make it persistent, and enable it at boot

To make things easier, we’ll create a bash script that will start the server automatically.
So first, create a bash script with nano:
nano /opt/minecraft/startminecraft.sh
A new (blank) file will open. Paste the following:
#!/bin/bash
cd /opt/minecraft/ && java -Xms2048M -Xmx3472M -jar minecraft_server.1.12.2.jar nogui
If you’re new to nano – you can save and close the file with “CTRL + X”, then “Y”, and hitting enter. This script navigates to your Minecraft server directory you created previously and runs the java command for starting the server. You need to make it executable with the following command:
chmod +x startminecraft.sh
Then, you can start the server anytime with the following command:
/opt/minecraft/startminecraft.sh
But, if/when you log out of the SSH session, the server will turn off. To keep the server up without being logged in all the time, you can use a screen session. A screen session keeps running even after you disconnect, until the actual server reboots or turns off.
Start a screen session with this command:
screen -S minecraft
Once you’re in the screen session (looks like you would start a new ssh session), you can use the bash script from earlier to start the server:
/opt/minecraft/startminecraft.sh
To get out of the screen session, press CTRL + A, then D. Even after you detach from the screen session, the server will keep running. You can safely log off your Ubuntu server now, and the Minecraft server you created will keep running.
But, if the Ubuntu server reboots or shuts off, the screen session won’t work anymore. So to do everything we did before automatically at boot, do the following:
Open the /etc/rc.local file:
nano /etc/rc.local
and add the following line above the “exit 0” line:
screen -dm -S minecraft /opt/minecraft/startminecraft.sh
exit 0
Save and close the file.
To access the Minecraft server console, just run the following command to attach to the screen session:
screen -r minecraft
That’s it for now. Congrats and have fun! You can now connect to your Minecraft server or configure/modify it.

Configure your Ubuntu Server

You’ll, of course, need to set up your Ubuntu server and secure it if you haven’t already done so. Follow the guide we mentioned earlier and google it for more info. The configurations you need to do for your Minecraft server on your Ubuntu server are:

Enable and configure the firewall

First, if it’s not already enabled, you should enable UFW that you previously installed:
ufw enable
You should allow the default Minecraft server port:
ufw allow 25565/tcp
You should allow and deny other rules depending on how you use your server. You should deny ports like 80 and 443 if you don’t use the server for hosting websites. Google a UFW/Firewall guide for Ubuntu and you’ll get recommendations. Be careful when setting up your firewall, you may lock yourself out of your server if you block the SSH port.
Since 25565 is the default Minecraft port, it often gets scanned and attacked automatically. You can prevent attacks by blocking access to anyone who is not on your whitelist.
First, you need to enable the whitelist mode in your server.properties file. To do that, open the file:
nano /opt/minecraft/server.properties
And change “white-list” line to “true”:
white-list=true
Save and close the file.
Then restart your server (either by restarting your Ubuntu server or by running the start bash script again):
/opt/minecraft/startminecraft.sh
Access the Minecraft server console:
screen -r minecraft
And if you want someone to be able to join your server, you need to add them to the whitelist with the following command:
whitelist add PlayerUsername
To remove them from the whitelist, use:
whitelist remove PlayerUsername
Exit the screen session (server console) with CTRL + A, then D. It’s worth noting that the whitelist denies access to everyone but the listed usernames.

How to Make a Minecraft Server – FAQs

We’ll answer some frequently asked questions about Minecraft Servers and our guide.

How do I restart the Minecraft server?

If you followed every step from our tutorial, including enabling the server to start on boot, you can just reboot your Ubuntu server. If you didn’t set it up to start at boot, you can just run the start script again which will restart the Minecraft server:
/opt/minecraft/startminecraft.sh

How do I configure my Minecraft server?

You can configure your server using the server.properties file. Check the Minecraft Wiki for more info, though you can leave everything as-is and it will work perfectly fine.
If you want to change the game mode, difficulty and stuff like that, you can use the server console. Access the server console by running:
screen -r minecraft
And execute commands there. Commands like:
difficulty hard
gamemode survival @a
You may need to restart the server depending on what command you used. There are many more commands you can use, check the wiki for more.

How do I upgrade my Minecraft server?

If there’s a new release, you need to do this:
Navigate to the minecraft directory:
cd /opt/minecraft
Download the latest version, example 1.12.3 with wget:
wget https://s3.amazonaws.com/Minecraft.Download/versions/1.12.3/minecraft_server.1.12.3.jar
Next, run and build the new server:
java -Xms2048M -Xmx3472M -jar minecraft_server.1.12.3.jar nogui
Finally, update your start script:
nano /opt/minecraft/startminecraft.sh
And update the version number accordingly:
#!/bin/bash
cd /opt/minecraft/ && java -Xms2048M -Xmx3472M -jar minecraft_server.1.12.3.jar nogui
Now you can restart the server and everything should go well.

Why is your Minecraft server tutorial so long, and yet others are only 2 lines long?!

We tried to make this beginner-friendly and be as detailed as possible. We also showed you how to make the Minecraft server persistent and start it automatically at boot, we showed you how to configure your server and everything. I mean, sure, you can start a Minecraft server with a couple of lines, but it would definitely suck, for more than one reason.

I don’t know Linux or anything you wrote about here, how do I make a Minecraft server?

Just read all of our article and copy and paste the commands. If you really don’t know how to do it all, we can do it for you, or just get a managed server provider and let them do it for you.

How do I install mods on my server? How do I install plugins?

Our article is intended to be a starting guide. You should check the Minecraft wiki for more info, or just google it. There are plenty of tutorials online.

Any other questions?

Leave a comment below and we’ll reply ASAP.
In the meantime, you can read other content:

Get ready to use Linux containers

https://www.networkworld.com/article/3250626/linux/exploring-linux-containers.html

Joe Brockmeier, a senior evangelist at Red Hat, explains the benefits of containers on Linux, how they work, and how to prepare to use them.

Exploring Linux containers
James Saunders (CC BY-SA 2.0)
One of the most exciting things to happen in the Linux world in the past few years is the emergence of containers — self-contained Linux environments that live inside another OS and provide a way to package and isolate applications.
They're not quite virtual systems, since they rely on the host OS to operate, nor are they simply applications. Dan Walsh from Red Hat has said that on Linux, "everything is a container," reminding me of the days when people claimed that everything on Unix was a file. But the vision has less to do with the guts of the OS and more to do with explaining how containers work and how they are different than virtual systems in some very interesting and important ways.
To get some perspective on containers, I spoke with Joe Brockmeier, a senior evangelist at Red Hat. He suggests that we can think of containers as lightweight virtual machines, though he pointed out that we'd not be technically correct. Container runtimes talk to the host's kernel and run applications out of tarballs. They provide a very convenient format for shipping applications — avoiding the pain associated with tracking down dependencies, compiling anything, or struggling with any sort of configuration. Instead, you get the end result you're looking for in one package — the container. It won't interfere with other applications you're running or require you to worry about configuration or work beyond the installation.
None of this is meant to imply that containers don't require work. The work required, however, is on the part of the organization or individuals building each container. Moving a legacy application into a container to run on its own can involve a lot of work and require a lot of expertise. It's just that none of that work gets passed onto the people installing it.

Is there a performance hit when using containers?

The likelihood of a performance hit associated with running a container is very small, especially compared with virtual systems. Containers run with an agility that is comparable to bare metal. Unless the container is flawed because someone upstream made mistakes in putting one together, you should not notice any performance loss.

What about security?

Linux containers offer a lot of advantages when it comes to system security, particularly because they provide a serious way to isolate applications from one another and from other running processes. With containers, you could run 20 different versions of Python at the same time, if you were so inclined, with no problems.
In addition, containers cannot see or be affected by other containers' network traffic. They simply can't interfere with other applications that are running on the system.
Containers allow applications to be moved around with all of the files they require, making it easy to move them from one environment to another — whether from testing to production or from production to a secondary/alternate site.

Where are containers heading?

Linux containers provide an extremely convenient way to ship applications and avoid a lot of the follow-up support that your customers might require if they were to run into problems setting them up and configuring them.
We're probably still just seeing the start of the application-as-container delivery wave as companies begin to recognize the advantages and jump deeply into container technology.

How to start using containers on Linux

Since containers are likely to become critical parts of our networks, this is a good time to investigate the various tools and models that are becoming available — from LXC to Docker and Kubernetes.
You can try out the commands for building LXC containers on one of your Linux systems. For example, using LXC, you can easily set up a container and get a feel for how it works and maintains its isolation. Here are some basic steps:
  • Install LXC: sudo apt-get install lxc
  • Create a container: sudo lxc-create -t fedora -n fed-01
  • List your containers: sudo lxc-ls
  • Start a container: sudo lxc-start -d -n fed-01
  • Get a console for your container: sudo lxc-console -n fed-01
More information on getting started with LXC is available at LinuxContainers.org
You can also look into some of the premier tools for containerization, such as Docker and Kubernetes. These two tools might at first seem to do the same thing, but they work at different layers of the stack and can in some ways actually work together.

Managing containers

Once you and your organization are deploying containers with enthusiasm (or maybe even before), you may find yourself looking into how to best manage a population of containers. Here's a reference on Container orchestration tools to get you started.

What is Linux sticky bit and how to set Linux sticky bit.

https://linuxroutes.com/linux-sticky-bit-how-to-set-linux-sticky-bit

This article will quickly guide you through the Linux sticky bit, and show you how to set it.

Linux Sticky bit

The sticky bit is a special permission on files as well as directories. Whenever you set the sticky bit on a directory, a special restriction applies to the directory and the files inside it: a normal user cannot remove or rename files or directories inside it; only the owner of the file (or of the directory) and the root user can, even though the directory is publicly writable.

Where should we use the Linux sticky bit?

We should set the sticky bit on publicly writable directories. That way, a normal user who is not the owner of a file (or of the directory) cannot remove or rename files inside it.

How to set Linux Sticky bit

In order to set or remove the sticky bit, we must use the “t” flag with the chmod command, as below.
Example of the Linux sticky bit:
Let’s create a test directory, which will be publicly writable, inside /tmp.
Make this directory publicly writable with chmod.
Now set the sticky bit using the chmod command with the “t” flag.
If you now run ls, you will see the special “t” permission on the test directory.
Create sample files 1, 2, 3 and 4 using the touch command inside the test directory.
To test this, I log in as the normal user manmohan and change directory to /tmp.
Now try to remove the “test” directory. The system will deny this with “Operation not permitted”, even though the test directory is publicly writable.
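The original article showed these steps as screenshots. As a sketch, with the directory and file names from the text above, the full sequence looks like this:

```shell
mkdir -p /tmp/test                  # create the test directory
chmod 777 /tmp/test                 # make it publicly writable
chmod +t /tmp/test                  # set the sticky bit
ls -ld /tmp/test                    # mode now ends in "t": drwxrwxrwt
touch /tmp/test/1 /tmp/test/2 /tmp/test/3 /tmp/test/4
# as a non-owner (e.g. user manmohan), removal is now refused:
#   rm -rf /tmp/test
#   rm: cannot remove '/tmp/test/1': Operation not permitted
```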

How to unset sticky bit

In case you want to reverse or unset the sticky bit, use chmod with a minus “t” flag, as below:
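A self-contained sketch (recreating a sticky-bit directory first so it stands on its own):

```shell
mkdir -p /tmp/test && chmod 1777 /tmp/test   # a publicly writable dir with the sticky bit
chmod -t /tmp/test                           # unset the sticky bit
ls -ld /tmp/test                             # mode is back to drwxrwxrwx
```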

In case you want to learn more about securing plain text files, follow this article.


Download Free book

Get your free copy of Linux command line Cheat Sheet!!!!

Download This Book: Click Here!!

How to resolve permission denied Linux error

https://linuxroutes.com/avoid-permission-denied-linux-error

This article will quickly teach you what the permission denied error in Linux is, and the ways you can avoid it.

What is the permission denied Linux error?

This error occurs when you try to list files, or execute a file, inside a directory where you do not have sufficient permission. The Linux operating system is very particular about its security.

Example of Permission denied Linux error

Let’s say you are a normal user trying to list, or change into, a directory inside the /root file system. Since you do not have sufficient permissions, the system will respond with a permission denied error message.
One way to avoid such an error is to switch to the root user using the su - command. However, this is not recommended, since it gives unnecessary access to the entire root file system.

How to resolve Permission denied Error

  • Resolving Permission denied error related to script execution:

Let’s say you have created a shell script to perform some task, but when you try to execute the script you end up with the error below, because it lacks execute permission.
To avoid this, you need to add the execute permission “x” to the file myshell.sh using the chmod command.
If you then list the file, you can see the “x” (execute) permission added by the chmod command. So the next time you try to execute the shell script, it will run without any error.
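As a sketch, the whole sequence looks like this (the name myshell.sh follows the article; its contents here are just a placeholder):

```shell
# create a placeholder script; it is not executable yet
cat > myshell.sh <<'EOF'
#!/bin/bash
echo "task complete"
EOF
# ./myshell.sh at this point would fail with "Permission denied"
chmod +x myshell.sh    # add the execute permission
./myshell.sh           # now runs and prints: task complete
```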

Resolving permission denied Linux error while listing or writing to a file

In this type of permission denied error, you try to list or write a file or directory for which you do not have sufficient permission, as below:
If you look at the permissions of the “myfolder” directory using the ls -l command, you can see what is allowed.
As per the permissions in the output above, only the owner of the directory, who is root, has all permissions, that is read, write and execute. So in such a case you need to grant read permission on the directory using the chmod command:
Now when the normal user manmohan tries to list the directory, he will not get the permission denied error.
If you want to have write permission on this directory, you need to specify the w flag as well in the chmod command:
The same applies to file-level permissions as well.
Another way is to change the ownership of the directory using the chown command. Since in our example user manmohan is getting the error, we will change the ownership of the directory “myfolder” with chown.
Since user manmohan is now the owner of the directory, he can perform any operation on it. If you want the ownership change to be recursive, do not forget to add the -R flag to the chown command:
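A sketch of the permission fixes above, using a local demo directory (the chown line is commented out because it normally requires root; manmohan is the article’s example user):

```shell
mkdir -p myfolder && chmod 700 myfolder   # only the owner can use it
ls -ld myfolder                           # drwx------
chmod o+rx myfolder                       # others can now list and enter it
chmod o+w myfolder                        # ...and write to it, if that is intended
# or hand the directory over entirely (usually needs root):
#   chown -R manmohan myfolder
```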
  • Resolving permission denied Linux error for specific user

In my opinion, the above method of changing permissions using chmod is not ideal, because when you grant permission to “others”, it is open to all users on the system, which is wrong from a security perspective. To resolve this error for a specific user only, you can use an access control list (ACL). Follow my article on access control lists (ACL) for the details.


Extremely Fast MySQL Backup and Restore Using Mydumper/Myloader

https://dotlayer.com/extremely-fast-mysql-backup-restore-using-mydumpermyloader

Mydumper and Myloader are utility programs, written in the C programming language, that allow you to perform extremely fast and reliable multi-threaded MySQL backups and restores.
Mydumper was initially developed by MySQL engineers who later moved to Facebook. It is approximately 10 times faster than the mysqldump tool typically used for backups.
When it comes to backing up and restoring a MySQL database, most people usually use the very popular mysqldump. While mysqldump can be very easy to use for a smaller database, it doesn’t work well with larger ones: it becomes very slow and error prone for huge MySQL databases.
In this article, we discuss how to use Mydumper and Myloader to perform very fast backups and restores for MySQL. Before we begin, we want to highlight the major benefits of Mydumper below:

The main advantages of Mydumper & Myloader

  • Parallelism and performance – Mydumper is able to use multiple threads to perform simultaneous connections and imports at the same time.
  • Easier to manage output (separate files for tables, dump metadata,etc, easy to view/parse data)
  • Consistency – maintains snapshot across all threads, provides accurate master and slave log positions, etc
  • Manageability – supports PCRE for specifying database and tables inclusions and exclusions

Install mydumper on ubuntu

We are going to install Mydumper on Ubuntu using the apt-get package manager, other operating systems use their own package managers. Open the terminal and run the following command
sudo apt-get install mydumper

How to Use Mydumper

Below is the complete breakdown of the MyDumper command with the respective options and what they mean:
Syntax

mydumper [options]

Application Options:
-B, --database Database to dump
-T, --tables-list Comma delimited table list to dump (does not exclude regex option)
-o, --outputdir Directory to output files to
-s, --statement-size Attempted size of INSERT statement in bytes, default 1000000
-r, --rows Try to split tables into chunks of this many rows
-c, --compress Compress output files
-e, --build-empty-files Build dump files even if no data available from table
-x, --regex Regular expression for 'db.table' matching
-i, --ignore-engines Comma delimited list of storage engines to ignore
-m, --no-schemas Do not dump table schemas with the data
-k, --no-locks Do not execute the temporary shared read lock. WARNING: This will cause inconsistent backups
-l, --long-query-guard Set long query timer in seconds, default 60
--kill-long-queries Kill long running queries (instead of aborting)
-b, --binlogs Get a snapshot of the binary logs as well as dump data
-D, --daemon Enable daemon mode
-I, --snapshot-interval Interval between each dump snapshot (in minutes), requires --daemon, default 60
-L, --logfile Log file name to use, by default stdout is used
-h, --host The host to connect to
-u, --user Username with privileges to run the dump
-p, --password User password
-P, --port TCP/IP port to connect to
-S, --socket UNIX domain socket file to use for connection
-t, --threads Number of threads to use, default 4
-C, --compress-protocol Use compression on the MySQL connection
-V, --version Show the program version and exit
-v, --verbose Verbosity of output, 0 = silent, 1 = errors, 2 = warnings, 3 = info, default 2
This is how you would use Mydumper to create a backup of a MySQL database; replace the variables (bash words starting with $) with the actual values. Once this process is complete, you can zip up the output directory and transfer it to the destination.
mydumper \
--database=$DB_NAME \
--host=$DB_HOST \
--user=$DB_USER \
--password=$DB_PASS \
--outputdir=$DB_DUMP \
--rows=500000 \
--compress \
--build-empty-files \
--threads=2 \
--compress-protocol

Description of Mydumper’s output data

Mydumper does not produce a single dump file, but rather a set of files in a directory. The --outputdir option specifies the name of the directory to use.
The output has two parts. Schema: for each table in the database, a file containing the CREATE TABLE statement is created, named dbname.tablename-schema.sql.gz. Data: for each table with a number of rows above the --rows parameter, you will have one or more files called:
dbname.tablename.0000n.sql.gz
Where “n” starts at 0 and increments for each chunk of rows.
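For example, for a hypothetical database `shop` with one large table `orders`, the output directory would contain files along these lines (the names are illustrative of the convention above; the exact listing depends on your data and mydumper version):

```
shop.orders-schema.sql.gz    # CREATE TABLE statement for the table
shop.orders.00000.sql.gz     # first chunk of --rows rows
shop.orders.00001.sql.gz     # next chunk
metadata                     # dump timing and binary log positions
```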
Below is the complete description of Myloader and all the options and their meaning.
Usage:
myloader [OPTION...] multi-threaded MySQL loader

Help Options:
-?, --help Show help options

Application Options:
-d, --directory Directory of the dump to import
-q, --queries-per-transaction Number of queries per transaction, default 1000
-o, --overwrite-tables Drop tables if they already exist
-B, --database An alternative database to restore into
-s, --source-db Database to restore
-e, --enable-binlog Enable binary logging of the restore data
-h, --host The host to connect to
-u, --user Username with privileges to run the dump
-p, --password User password
-P, --port TCP/IP port to connect to
-S, --socket UNIX domain socket file to use for connection
-t, --threads Number of threads to use, default 4
-C, --compress-protocol Use compression on the MySQL connection
-V, --version Show the program version and exit
-v, --verbose Verbosity of output, 0 = silent, 1 = errors, 2 = warnings, 3 = info, default 2
If you want to restore these backups, you can use Myloader:
myloader \
--database=$DB_NAME \
--directory=$DB_DUMP \
--queries-per-transaction=50000 \
--threads=10 \
--compress-protocol \
--verbose=3
We hope this article helped with doing MySQL backups. If you liked this article, then please subscribe to our Facebook/Twitter page for updates.

How To Create Virtual Hosts On Apache Server To Host Multiple Websites

http://www.linuxandubuntu.com/home/how-to-create-virtual-hosts-on-apache-server-to-host-multiple-websites


How To Create Virtual Hosts On Apache Server To Host Multiple Sites
If you have Apache installed, you probably know what localhost is. Localhost allows a single website to be hosted locally. However, by using virtual hosts, you can host multiple websites on a single server. The process is fairly simple and I will demonstrate it here.
I am assuming you are running Ubuntu with apache server.

Step 1

Move to the directory called /etc/apache2/sites-available

You will see a file called 000-default.conf, we need to copy that file to the same place with a change in the name.

I am creating a virtual host for sample.com, so I will just copy and rename it to sample.com.conf using the following command -

cp 000-default.conf sample.com.conf

Step 2

​Now we need to edit this file. I will be using gedit for this. You can see that there are a lot of comments in this file. We need to get rid of all the comments to make it more understandable.
Once the comments are removed, the file contains only the bare VirtualHost block.
​Now we need to add two important configurations: ServerName and ServerAlias. ServerName is the base domain that should match your virtual host. ServerAlias is another name under which the site should answer, typically the www form of the base domain. So both configurations will be as follows -

ServerName sample.com
ServerAlias www.sample.com

Just add both configs to the file and change the DocumentRoot to where you would want the website to be stored. I am using a sub-folder called sample in the /var/www/html directory.

So I will change my document root to this -

DocumentRoot /var/www/html/sample

​So my file now looks like this.
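A minimal sketch of what sample.com.conf might now contain, assuming the names and paths used in this article (your stripped-down 000-default.conf may differ slightly):

```apacheconf
<VirtualHost *:80>
    ServerName sample.com
    ServerAlias www.sample.com
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/html/sample
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
```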

Step 3

​You now need to create an index file for your website. I have created my index.php file.
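The article’s original code was shown as an image. As a hypothetical placeholder (the path assumes the DocumentRoot chosen above; the PHP is just a greeting), you can create a minimal index.php from the shell:

```shell
# create the web root and a one-line placeholder page (run as root, or prefix with sudo)
mkdir -p /var/www/html/sample
echo '<?php echo "Hello from sample.com"; ?>' > /var/www/html/sample/index.php
```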

Step 4

​Just edit your hosts file (/etc/hosts) and map your virtual host domain to your localhost IP (127.0.0.1) by adding a line like -

127.0.0.1    sample.com

Step 5

Enable the virtual host site by typing in the following command -

a2ensite sample.com.conf

​You will then be asked to restart apache -

service apache2 restart

Step 6

Test your website by visiting the domain name you specified.
Hurray! We have successfully created a virtual host on our apache server. If you ever get stuck at any step, feel free to drop a comment below.

