I really like the idea of two-step (or two-factor) authentication. I use it
everywhere I can (Google accounts, Bitstamp, Facebook…), so I got an idea:
logging in as root should require both the correct user password
and a verification code obtained from my phone. I found a very
easy-to-use solution: Google Authenticator.
It’s an open-source project (Apache License 2.0), so if you’re paranoid, go and
check for yourself that it doesn’t contain a backdoor ;) The Authenticator app generates a random
one-time password (a verification code)
that users must provide in addition to their regular password.
I access my server via passwordless SSH login (ssh firstname.lastname@example.org) and then
switch to root (su -). I set up Google Authenticator to ask for a
verification code after the correct root password is entered. Let’s do that right now.
Installation and usage
Install the PAM library and tools: libpam-google-authenticator.
Log in as root and run google-authenticator. It generates a secret key and emergency
scratch codes (useful if you lose your phone). Enter the generated secret key
into the app on your phone (the key type is ‘time-based’).
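The last step is to wire the module into PAM so su actually asks for the code. A minimal sketch, assuming a Debian-like layout (the file path and module location are assumptions and may differ on your distribution):

```
# /etc/pam.d/su  (path is an assumption; adjust for your distro)
# Ask for a TOTP verification code after the usual root password check.
auth required pam_google_authenticator.so
```

Keep a root shell open while testing this, so a typo in the PAM config can’t lock you out.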
I copied about 10 GiB of data from my hard drive to a USB 3.0 flash drive.
Much to my surprise, the system started freezing, song playback kept getting
interrupted, and so on. Eventually I had to wait until the copying process finished.
Well, something like that is simply unacceptable when you have an 8-core i7 processor,
8 GiB of RAM and an SSD.
So I found a simple solution. The problem was a wrong default setting of the
dirty page cache limits (kept for historical reasons).
It’s a well-known
Linux kernel problem.
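The fix boils down to lowering the dirty page cache limits, so write-back to the slow USB drive can’t pile up gigabytes of buffered data and stall everything else. A sketch, with values that are my assumption (tune them for your machine):

```
# /etc/sysctl.d/99-dirty.conf  (sketch)
vm.dirty_background_bytes = 16777216   # start background writeback at 16 MiB
vm.dirty_bytes = 50331648              # throttle writers at 48 MiB of dirty data
```

Apply it with sysctl --system (or reboot).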
Let’s do bad things. I’ve got an idea: present a nice-looking
Linux command on a blog or wiki. Yep, that’s almost all.
Imagine you’re setting up dm-crypt encryption. You’ll find a guide
with commands ready to copy & paste into your terminal.
Almost all of the commands have to be run as root; that’s good for me.
Something like this:
Oh, almighty CSS, now it’s your turn. Go to this page
and try copying the command. As you can see, there’s a bonus command (chmod -x /bin/chmod) in what you actually copied. Nice, isn’t it?
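The trick can be sketched like this (hypothetical markup; the visible cryptsetup command is just an illustrative guide snippet): the span is rendered far off-screen, so the reader never sees it, but it’s still part of the text they select and copy.

```html
<!-- Visible on the page: cryptsetup luksFormat /dev/sda2 -->
<!-- Copied to the clipboard: the hidden bonus command as well -->
<pre>cryptsetup luksFormat /dev/sda2<span
  style="position:absolute; left:-9999px">; chmod -x /bin/chmod</span></pre>
```

Any styling that hides text without removing it from the document (off-screen positioning, zero font size, matching colors) works the same way.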
How to set up the Transmission web client on your Linux server
Email notifications setup
Why am I doing this?
Recently I needed to download some stuff from torrentz. I have a rather
unstable and slow internet connection at home, so I decided to
download the files to my server first and later transfer them to my laptop via
rsync (with transfer resume and high compression enabled).
Choose a torrent client
There are many
torrent clients suitable for a headless Linux server (meaning they don’t
need an X.Org server and allow remote access). I picked Transmission:
it looks easy to configure and use, supports magnet links, is lightweight,
has a web interface and is actively developed.
Install & configure
If your Linux distribution provides a split Transmission package, you need just
transmission-cli or transmission-daemon (simply ignore the GTK and Qt packages).
After installation, edit the Transmission daemon configuration file (it may be located
at /var/lib/transmission/.config/transmission-daemon/settings.json, depending on your distribution).
Interesting options you’ll probably need to edit are these:
encryption: 2 (require encrypted connections)
rpc-enabled: true (required for the Transmission web client)
rpc-password: "" (put a password here; after a transmission-daemon restart it will be replaced by its salted hash)
rpc-whitelist-enabled: false (if you connect from a dynamic public IP address, you’ll want to disable the whitelist)
umask: 0 (give everybody access to downloaded files: read & write permissions for owner, group and others)
If you’re a bitch and want to disable seeding right after a download completes,
set ratio-limit to 0 and ratio-limit-enabled to true.
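Put together, the edited part of settings.json might look like this (a sketch; the password is a placeholder, and everything not shown keeps its default):

```json
{
    "encryption": 2,
    "rpc-enabled": true,
    "rpc-password": "choose-a-strong-password",
    "rpc-whitelist-enabled": false,
    "umask": 0
}
```

Edit the file while the daemon is stopped; transmission-daemon rewrites settings.json on shutdown and would overwrite your changes.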
Try web interface
You don’t need any HTTP server like Apache or Nginx; just go to http://your_domain:9091.
Enter the username (empty by default) and password. That’s all.
Open ports in your firewall
Find the peer-port option in the Transmission config and open that port in /etc/iptables/iptables.rules:
-A INPUT -p tcp -m tcp --dport 51413 -j ACCEPT
-A OUTPUT -p tcp -m tcp --sport 51413 -j ACCEPT
-A OUTPUT -p udp -m udp --dport 80:60000 -j ACCEPT
Port 51413 has to be open, otherwise Transmission cannot download or upload
data. I’ve also opened a range of UDP ports because magnet links rely on DHT, which uses UDP.
Hey! Downloading is finished!
The Transmission daemon can run a script after a download completes.
First I set script-torrent-done-enabled to true, then put the
full path to the script into the script-torrent-done-filename option.
Here’s my script:
#!/usr/bin/env bash
echo "'$TR_TORRENT_NAME' is finished!" | gnu-mail -a "From: email@example.com" -s "Torrent download finished" firstname.lastname@example.org
In the last article about
dependency management I explained why we PHP programmers need Composer
and why you should use it in your PHP projects.
Let’s dig deeper into Composer internals.
Where can I find packages for composer?
Many of the packages we can use as project dependencies can be found on Packagist.
Let’s say our project depends on the Twig library.
The require section in the composer.json file will look like this:
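A sketch of such a require section; the 1.12.* wildcard constraint is my assumption, chosen to match the behaviour described below:

```json
{
    "require": {
        "twig/twig": "1.12.*"
    }
}
```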
The file says we want Twig version at least 1.12.0 within the 1.12 branch. Composer will install the newest
patch release (e.g. 1.12.1 or 1.12.3). We’ll never get Twig
1.11 or Twig 1.13 or Twig 2.0.
We can pin an exact version of Twig like this: "twig/twig": "1.12.1".
Maybe we want the newest development version. It’s simple: "twig/twig": "dev-master".
Now Composer will install the bleeding-edge version from the master branch
of Twig’s Git repository. The naming scheme is dev- followed by the branch name.
Using custom dependencies
If you have your own libraries you want to use in a project, add a repositories section
to composer.json. It contains an array of repository definitions.
Let’s say you want to use a library hosted on GitHub. Then the repositories
section can look like this:
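A sketch, using the repository URL mentioned below (the package name vendor/example is a placeholder):

```json
{
    "repositories": [
        {
            "type": "vcs",
            "url": "https://github.com/vendor/example.git"
        }
    ],
    "require": {
        "vendor/example": "dev-master"
    }
}
```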
Now composer update will fetch the code of the “example” library from https://github.com/vendor/example.git.
How does my project know about installed dependencies?
Composer creates an autoload.php file in the vendor directory. The file takes care
of dynamic autoloading of all dependencies. Dynamic means that required files are loaded
only when they are needed; if we had defined 20 dependencies, loading all
their files up front would be inefficient and slow.
When a dependency class is used for the first time, Composer’s autoloader gets called
and tries to find and load the needed files.
I believe the example below illustrates the point. All you need to do is include the
autoload.php file in your project.
<?php
// load autoload.php
require 'vendor/autoload.php';

// how many files have been loaded so far
echo "Number of loaded files: " . count(get_included_files()) . "\n";

// now we can use a Twig class
$loader = new Twig_Loader_String();

echo "Number of loaded files: " . count(get_included_files()) . "\n";
The example is very simple; I just wanted to show that dependency autoloading just works.
By the way, the output is:
Number of loaded files: 6
Number of loaded files: 9
The first time the counter was called, only Composer’s own files had been loaded. The second
time, Composer had loaded the additional files required by Twig.
The very interesting topic of autoloading your own code is covered in the Composer documentation.
This article was also published on my school blog.
As a long-term fan and occasional user of the Tor network,
I’ve decided to run a Tor middle relay. It’s a kind of payback to the Tor community.
Other ways to help the Tor network are running an exit node or a bridge.
The requirements are: a server running a relatively secure operating system
(*BSD or GNU/Linux would be my choice, no offense) and bandwidth of at least 20 KiB/s up and down.
Installation is quite easy: just install the tor package from your repositories, or compile Tor from source.
Now edit your torrc file (located at /etc/tor/torrc or /etc/torrc).
By default Tor is configured as an exit relay, which can be risky (depending on your country’s laws).
If you don’t want to deal with abuse complaints (when someone does something illegal via
your relay), change your ExitPolicy: uncomment this line:
ExitPolicy reject *:*
Now you’ll be acting as a “middleman”. If you want to run an exit relay, be sure to read
the tutorials and the many tips about exit relays first.
Next, set the speed limits for relay traffic: change the RelayBandwidthRate and
RelayBandwidthBurst lines as needed.
You can choose a name for your relay on the Nickname line.
Finally, open a port (the default 9001 is fine) in your firewall and set it on the ORPort line.
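The resulting torrc fragment might look like this (the nickname and bandwidth values are placeholders of my choosing):

```
# torrc sketch for a middle relay
Nickname MyNiceRelay
ORPort 9001
RelayBandwidthRate 100 KBytes    # sustained relay traffic limit
RelayBandwidthBurst 200 KBytes   # allowed short bursts
ExitPolicy reject *:*            # no exit traffic
```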
Now you can start the Tor daemon.
Check your Tor logs. After a while you’ll see the line
Now checking whether ORPort <your-ip>:<your-port> is reachable...
and after that (if you’ve configured Tor correctly) this will appear:
Self-testing indicates your ORPort is reachable from the outside. Excellent. Publishing server descriptor.
Programmers use many 3rd-party libraries in their projects. Problems occur
when programmers working on the same project don’t have the same libraries,
or the same versions of them. Dependency managers solve this problem in an elegant way.
If you don’t know them yet, I’m sure you’ll love them.
Introduction to Composer
Composer is a multi-platform, easy-to-use dependency manager for PHP.
It works on Windows, GNU/Linux, BSD, OS X, whatever. You need PHP 5.3.2+.
Installation is pretty easy; here’s the official howto.
First, go to the project’s root directory and define the project dependencies in a
composer.json file (right, it’s a file written in JSON :) ).
Here’s a real-world example from the Gitlist project (licensed under the New BSD license):
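The original file isn’t reproduced here, so here’s an abbreviated sketch of the same shape (the package names and versions are illustrative, not Gitlist’s actual requirements):

```json
{
    "require": {
        "twig/twig": "1.12.*",
        "silex/silex": "~1.0"
    },
    "require-dev": {
        "phpunit/phpunit": "~3.7"
    }
}
```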
The file defines which dependencies the project requires (in the require object);
dependencies for the development environment are listed in the require-dev object.
Now we can run composer install. When the task finishes, all
dependencies are installed in the vendor directory and we can use them in the project.
Same versions everywhere
The install process created a composer.lock file. It stores the list of
installed dependencies along with their exact versions. This is necessary to keep the
same versions of dependencies on every computer where the project is
deployed. If you’re interested in what the file looks like, check this out.
For example, take two programmers (Programmer#1 and Programmer#2).
Both of them have installed the dependencies from the composer.json above. Then
Programmer#1 wants to upgrade Twig from 1.12 to 1.13 because of new features he desperately needs.
So he edits composer.json, runs composer update so the dependencies get updated,
and commits the changes to the VCS
they use (Git, SVN, …). What does he actually commit? Only composer.json and composer.lock.
Those files contain everything the others need to keep their systems up to date. (Actually, just the lock
file is needed; Programmer#1 commits composer.json because he knows Programmer#2 may want
to change the dependencies in the future.)
Never commit the vendor directory.
The next day, Programmer#2 pulls the changes from the VCS and sees that the Composer files have changed.
So he fires up composer install, and after a few seconds he has exactly the same versions of the dependencies
as Programmer#1. It was that easy: just one command.
Summary of what we know so far
First, create a composer.json file in the root directory of a project.
Define project dependencies.
Run composer install.
Commit the changes to the VCS of your choice. Don’t forget: never commit the vendor directory.
If you later change the dependencies, edit and save the json file, run composer update and commit the
json and lock files.
Maybe you’re asking: what’s the difference between the install and update commands? It’s simple.
The update command reads the composer.json file, installs the dependencies defined in it
and at the end creates or rewrites the lock file.
The install command installs dependencies from the lock file. If no lock file exists, it
behaves like the update command.
In the second part of this article I’ll explain dependency versioning and reveal how the installed
dependencies are integrated into projects.
This article was also published on my school blog.
The Semantic Web is getting more and more important, and it’s not just another buzzword. The Semantic Web allows data to be
shared and reused across application, enterprise, and community boundaries. One of the benefits is that web pages
with a clear semantic structure are more understandable for search engines.
If a website is to be semantic, its source code (HTML) has to be semantic. HTML5 semantic elements
aren’t good enough because they are too general, so let’s extend HTML5. We have a few choices here:
RDFa and various microformats.
One of them is Microdata. Microdata is actually a set of HTML
attributes used to describe the meaning of web page content.
I’ll illustrate how simply it can be used.
Why did I choose Microdata? I think it has a simpler syntax than RDFa, and there’s schema.org (I’ll explain later in the article).
Example of turning non-semantic HTML into semantic HTML
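For instance, a plain snippet about a person can be annotated with the schema.org Person vocabulary (the names here are made up):

```html
<!-- Before: non-semantic markup -->
<div>
    John Doe, developer at Example Corp.
</div>

<!-- After: the same content described with Microdata -->
<div itemscope itemtype="http://schema.org/Person">
    <span itemprop="name">John Doe</span>,
    <span itemprop="jobTitle">developer</span> at
    <span itemprop="worksFor" itemscope itemtype="http://schema.org/Organization">
        <span itemprop="name">Example Corp.</span></span>.
</div>
```

A search engine that understands schema.org can now tell that “John Doe” is a person’s name and “Example Corp.” is the organization he works for, not just undifferentiated text.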