Welcome to Kotrfa’s guides

Contents:

Introduction

Who am I?

I’m a student at a technical university in the Czech Republic, and one of my hobbies is IT. I use Arch Linux on my laptop and I also run a small home server hosting several websites for my friends and myself.

Everything I know I’ve learned on the internet, which is full of great guides and tips. But sometimes it was really stressful - 10 different guides from different authors covering different versions of a specific piece of software.

There is also a missing bridge between guides for beginners (or ordinary users) and guides for professionals. Unfortunately for me, I’m somewhere in the middle of these two groups.

A second motivation was the fact that I have a terrible memory and started to forget all the useful things I’d learned. So this also serves as a kind of notebook for me.

What are these guides about

So I decided to write guides for this middle group. In these guides I’m going to cover several topics as they come to my mind.

The main areas of these guides are:

  • Linux (mainly Arch Linux)
  • systemd (I’m a happy user :) )
  • Raspberry Pi
  • Server configuration
  • Domains, IP addresses, redirecting

Disclaimer

Keep in mind that I’m not a professional.

Everything you find here comes with absolutely no warranty, and I’m not responsible for any inconvenience or issues that might occur.

The solutions I propose also don’t have to be the best and might be far from perfect! For several parts of these guides there exist different configurations, tools, packages... What you see here is just the one I chose. Feel free to read more about the alternatives and find what fits you best!

Some remarks

When I feel that some topic is covered solidly somewhere else, I will redirect you there. So you can also think of this as a summary of available materials.

In some places it is necessary to know some basics about Linux and the terminal. I’ll try to point them out.

Sometimes I might be imprecise when explaining some terms or details. This is mainly for one of two reasons:

  1. I don’t understand it well enough to explain it better
  2. It’s not necessary for you to know it for our purposes

I hope I’ll be able to explain everything in a reasonably human way. If you want to know the technical details, you can find them on your own :) .

In the GitHub repository of this guide you can find some of my configs from both of my machines - the laptop, which I use as a workstation and for fun, and the RPi, which serves several of my websites, FTP, SSH...

One extremely important remark - USE GOOGLE!

My English

My English isn’t great, but I believe it’s good enough for you to understand what I’m trying to explain.

Feedback

I’ll be really happy to get your feedback - don’t hesitate to ask for additional info or further guides, and if you find any mistakes, please let me know.

Both can be done by submitting an issue on GitHub, where these guides are hosted.

Necessary knowledge

Here I cover two things that I encourage you to use throughout this guide.

Vim

We’ll need a text editor to configure everything, and we’ll spend most of our time on the command line. How do you edit files inside a terminal? There are multiple console-based text editors. I chose one of them, called vim. We will install it in the next chapters, but I’ll cover some basics here (you can try them later).

You can edit a file in vim by typing vim <file>. Vim has three modes; I will tell you about two of them: command mode and insert mode. In command mode, you press special keys to perform actions. It’s the default mode and you cannot edit the file in it. Press ESC to get into command mode. To edit a file, press i. Now you are in insert mode and you can navigate with the arrow keys and type, delete etc. as you know from other text editors.

After you make a change, you can save the file. To do that, get into command mode (ESC) and type :w. This will save the file. To exit, type :q. That’s all you need for now. You can learn more about vim with the command vimtutor.

Of course, feel free to use another editor.

Systemd

systemd is an astonishingly great and also astonishingly hated package, but you don’t need to know about that now. Briefly - systemd takes care of running processes in the background. These are called daemons. For example, in the next chapter we will use SSH - it will run in the background. There is also a component which takes care of automatically connecting to the internet (again, covered in later chapters).

systemd is controlled by systemctl. To start a program, which in this context is called a service (and we will stick to that term), just run systemctl start <unit>. Here are the other useful commands (and that’s 90% of what you need to know about systemctl) - they all start with systemctl and end with the desired unit, as in the example below:

  • enable - allows the service to start after boot (but does not start it immediately)
  • disable - prevents the service from starting after boot
  • start - starts the service immediately (but does not enable it - it won’t run after boot)
  • stop - stops the service immediately (but does not disable it)
  • status - prints out all information in a pretty format - you can see whether the service is enabled, started, whether there are any errors etc.

Example

There is a service which takes care of connecting to the network. We will cover it in a dedicated chapter for the RPi, but let’s just play with it for a minute now. It’s called systemd-networkd. Start it, enable it, disable it, then stop it, and check the status after each step to see what every command does:

  • systemctl start systemd-networkd
  • systemctl status systemd-networkd
  • systemctl enable systemd-networkd
  • systemctl status systemd-networkd
  • systemctl disable systemd-networkd
  • systemctl status systemd-networkd
  • systemctl stop systemd-networkd

The last thing you need to know about systemd for our guide is where these services keep their configuration files. They are all stored in /usr/lib/systemd/system/. For example, I mentioned the SSH service. The configuration file for that service is /usr/lib/systemd/system/sshd.service. You can type cat /usr/lib/systemd/system/sshd.service to see what is inside, and of course it can be edited.

systemctl simply looks inside this directory when you call a command for starting/enabling/... a specific unit.
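To give you an idea of what lives in that directory, here is a minimal sketch of a unit file. The daemon name and path are purely illustrative, not a real package - we will write a real unit file later in the deployment chapter.

```ini
[Unit]
# Human-readable description shown by systemctl status
Description=Example daemon (illustrative only)

[Service]
# The command systemd runs when you call systemctl start
ExecStart=/usr/bin/example-daemon

[Install]
# "enable" hooks the unit into this target so it starts at boot
WantedBy=multi-user.target
```

The three sections map directly to the commands above: ExecStart is what start runs, and WantedBy is what enable wires up.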

How domains, IP addresses and servers work

I’m going to walk you through how part of the internet works as simply as possible.

Domains

How does it happen that someone types example.com and then sees some web pages? Where does this example.com come from?

A little background. On the internet we have IP addresses to give every computer its own specific name. Because we don’t like these awful numbers (like 123.28.13.234), we have nicknames for them - domains. For example - you know that when you type google.com, you get to the Google page. But that happens even if you type 74.125.224.72. In the first case, this is what happens:

  1. Your computer takes the name google.com
  2. It goes to a world data bank of domains and their corresponding IP addresses
  3. It finds out which IP address belongs to the name google.com
  4. It gets you to that IP address
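The steps above can be sketched as a toy Python model, where a simple dictionary plays the role of the "world data bank". Real DNS is a distributed, hierarchical system, and the example.com address below is only illustrative:

```python
# A toy model of DNS: a lookup table mapping domain names to IP addresses.
# Real DNS is distributed across many servers; this is only an illustration.
DNS_TABLE = {
    "google.com": "74.125.224.72",   # the address from the example above
    "example.com": "93.184.216.34",  # illustrative value
}

def resolve(domain):
    """Return the IP address registered for a domain, or None if unknown."""
    return DNS_TABLE.get(domain)

print(resolve("google.com"))  # → 74.125.224.72
```

When your browser "goes to google.com", it performs essentially this lookup first, then connects to the resulting address.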

You can buy your own domain and then point it to your IP address. Pointing is done by adding an A record, usually on the registrar’s administration website. Then, after a few minutes (or hours), it is registered in this world data bank.

There are also free alternatives, for example freeavailabledomains.com. You don’t get a second-level domain (the part right before .com, .eu...), but a third-level one (e.g. yourname.flu.cc, ...). You can use this in our example - just register there and choose your domain name (e.g. mojepks.flu.cc). Then you add the name (the prefix before .flu.cc - in my case mojepks) and the destination - your public IP address.

Public IP

What is a public IP address? It means that this address is unique on the whole internet - it points to one specific place in the world.

If you are connected to some network, for example to your home router, you are on an internal network. Usually your address there is something like 192.168.0.xxx. But this is not the address people see you under from the internet - it’s just the internal one. On Linux you can find it by typing:

ip addr

To find your public IP address, you can use one of the many online "what is my IP" services. But this address is most probably not your PC’s, but your router’s. And not even that - it might be the IP address of some other node your router is connected to. The best way to find this out is to ask your administrator or ISP (the company which provides your internet).

It means, unfortunately, that not everyone has their own public IP address and, even worse, it can change! And that is not what you want - you would have to update your records every time your IP changed. You have to ask your ISP whether your address is "static" or "dynamic". My ISP (UPC CZ) told me that mine is "dynamic". But after a little research I found out that this only means my IP address can theoretically change. In reality, it has been the same for a few years now :) . This is relatively common, so maybe you are lucky.
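You can check which kind of address you are looking at from Python: the standard ipaddress module knows the reserved internal ranges (like 192.168.x.x). A small sketch, using the example addresses from above:

```python
import ipaddress

def is_private(addr):
    """True for internal addresses like 192.168.x.x, False for public ones."""
    return ipaddress.ip_address(addr).is_private

print(is_private("192.168.0.10"))   # → True  (internal / behind your router)
print(is_private("74.125.224.72"))  # → False (public, reachable worldwide)
```

If the address `ip addr` shows you is private, the address the world sees is your router’s (or your ISP’s), not your machine’s.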

If you don’t want to buy a second-level domain (one that is just something.com) and a third-level one is enough (like something.somethingelse.com), then you can take a look at getfreedomain. It will serve our purposes well.

Deploying nginx + django + python 3

Hello,

due to the lack of information about deploying the latest version of django (1.6+) with the latest nginx (1.6+) using gunicorn (18+) inside a python 3 virtual environment (3.4+), it was really hard for a beginner like me to deploy a django project.

I finally made it, and now, after several months, I’ve decided to share my experience with the world. So again:

What this guide is about

We’ll deploy (that means make the website available to the world) a django project with the nginx web server, using gunicorn in between. We’ll also use a python virtual environment, and the installation will be self-contained - it will not depend on your system-wide python installation.

Prerequisites and information

I’m using Linux, and the commands I’m going to introduce are meant to be run in a bash shell. Sometimes root privileges are required; I will not point that out each time. If you are not familiar with Linux, please read my other guides.

You do not need any special knowledge. But keep in mind that this is not a guide to how django, nginx or gunicorn work! This is about how to bring them together so they work.

My choice - why nginx, python 3 etc.

I’m not someone who has tried every possible option. I just tried a lot of them, and this one was the first that worked. So I stuck with it.

I chose nginx over Apache because that seems to be the trend today, given Apache’s age. It also seems easier to me.

I chose django because I love python. I’m quite new to it, so when I decided to learn this great language, I started with python 3. That was somewhat logical, because I could choose whatever I wanted, and the newer version is of course the better investment in the future.

Gunicorn is just easy to use, and I found great documentation and guides for it.

A virtual environment is a necessity. You don’t want all your python projects (not just django websites) to depend on one specific configuration. It’s neither stable (one package can break another) nor safe. We’ll use a fully copied virtualenv because it’s also safer - you can update this python when you want, not every time your distribution tells you to.

How the hell all that works

Here is a little model I made for myself, and I think it’s not a bad starting point for you ;) . We have five terms: nginx, python, virtualenv, gunicorn and django.

First layer (nginx)

nginx is what takes care of requests from the world. It catches a request (e.g. someone opening your site) and redirects it either to the appropriate folder (in the case of a static HTML page with index.html - not our case), or to some application.

Second layer (gunicorn)

In our case, this application is gunicorn. It’s powered by python, and it basically creates a magical communication channel between nginx and the django app. This tunnel is represented by a socket (we’ll get to it). Why can’t nginx do this itself? It’s just not clever enough (or better - it simply doesn’t do that, and that’s absolutely OK in the Unix philosophy). Gunicorn can run a server similar to django’s test server. But it can also serve the django app’s content to nginx, thereby working around nginx’s limitation.

Third layer (django)

Then there is just django - your project with your pages - this is what your website is about. Everything before (and after) is just a supporting layer.

Wrapper for second and third layer

python is the engine of both gunicorn and django. This python runs inside a sandbox called a virtualenv. Gunicorn will activate this virtualenv for us and run the django app.

That’s all. Not that hard, huh? Basically nginx ~ gunicorn ~ django. python powers it, and virtualenv just wraps the python version (you can ignore virtualenv at first, if that makes it easier to understand).
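The "django app" that gunicorn runs is, at the bottom, just a WSGI callable. Here is a minimal hand-written one (not Django itself - the names and response body are made up for illustration) to show what gunicorn actually calls for every request:

```python
# A minimal WSGI application -- the same interface Django's wsgi.py exposes.
# Gunicorn imports a callable like this and calls it once per request.
def application(environ, start_response):
    body = b"Hello from WSGI\n"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

# Simulate one request the way a server (gunicorn) would:
def fake_request(app):
    captured = {}
    def start_response(status, headers):
        captured["status"] = status
    chunks = app({"REQUEST_METHOD": "GET", "PATH_INFO": "/"}, start_response)
    return captured["status"], b"".join(chunks)

print(fake_request(application))  # → ('200 OK', b'Hello from WSGI\n')
```

Django generates such a callable for you in ourcase/wsgi.py; gunicorn just loads it and shuttles requests and responses between it and nginx.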

Let’s do it

nginx configuration

This is covered in another tutorial; you must be able to get to the point where you can see the nginx welcome page.

Installing python and virtualenv

There are tons of guides about virtual environments on the internet - feel free to educate yourself :D . Here is my brief version.

To do this we’ll need to install python-virtualenv. Install it and then create a folder for our test case. Let it be /var/www/test. Also install python version 3, if you haven’t done so before.

We’ll now create a python sandbox for our test case inside this folder (/var/www/test). Why /var/www? It’s just the conventional place to put websites on Unix (and hence Linux). But it can be anywhere, of course. To create a REAL copy (and not just a symlinked variant) of python 3, we use this syntax:

virtualenv --python=python3 --always-copy venv

What happened here? We created a python sandbox called venv in the current directory. It uses python3 as the default interpreter, and we copied all files necessary for the life of this installation (by default they are only symlinked to the system ones).

What now? We need to switch from the system python installation to the venv python installation. First try typing python -c "import sys; print(sys.path)". The output will be similar to this:

['', '/usr/lib/python34.zip', '/usr/lib/python3.4', '/usr/lib/python3.4/plat-linux', '/usr/lib/python3.4/lib-dynload', '/usr/lib/python3.4/site-packages']

where you can notice that the current default python interpreter gets its configuration from somewhere in /usr/lib/....

We will now activate our virtualenv with this command: source /var/www/test/venv/bin/activate. Now try the same command as above (python -c ...) and instead of /usr/lib/... it should print something starting with /var/www/test/venv/.... If so, it’s working :) .

To quit this environment and get back to your system-wide python, type deactivate.
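You can also check from inside Python whether a virtualenv is active: in a virtualenv, sys.prefix points at the sandbox, while sys.base_prefix still points at the system installation. A small sketch (the function name is my own):

```python
import sys

def in_virtualenv():
    """True when the running interpreter lives inside a virtualenv/venv."""
    # In a virtualenv sys.prefix differs from the base (system) prefix.
    return sys.prefix != getattr(sys, "base_prefix", sys.prefix)

print(in_virtualenv())
```

Run it once before and once after source .../activate and you should see the value flip from False to True.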

Installing django and gunicorn with pip

One of the best advantages of python 3.4 is the fact that pip is installed by default. What is pip? pip is the installer for python packages.

All python packages can be found on PyPI. Of course you can find your package there (try it with django, for example), download it and build it with python on your own. But that sounds like a lot of work. Let’s let pip do it for us.

Make sure you are working in our virtualenv (activate it again if needed) and type:

pip install django
pip install gunicorn

You can pin a version by appending == after the package name:

pip install django==1.6.0

but of course this version must exist on pypi.python.org. If there are errors, try adding the -v switch for verbose output.

To list installed packages, type:

pip list

Check that you see django and gunicorn there :) .

And that’s all you need from pip for now (although there isn’t much more to pip anyway).

Sample django project

Now we’ll create a django project for our test case. Go into /var/www/test and activate our virtualenv containing django and gunicorn (again with source /var/www/test/venv/bin/activate).

Create the django project with:

django-admin3.py startproject ourcase

It should create this structure inside /var/www/test:

ourcase
|-- manage.py
`-- ourcase
    |-- __init__.py
    |-- settings.py
    |-- urls.py
    `-- wsgi.py

1 directory, 5 files

Check that it works with the local django testing server: python manage.py runserver. Open 127.0.0.1:8000 in a browser - if you see the django welcome page, it’s good.

Just for convenience, make manage.py executable with chmod +x ourcase/manage.py.

gunicorn and daemonizing it

Now we’ll replace the django testing server, which is just for kids (it’s a great feature, though :) ), with a fully mature nginx for adults.

As stated previously, for that we’ll need gunicorn. Gunicorn has to be running to enable communication between nginx and the django project.

First, we’ll use gunicorn alone to serve our django test project on 127.0.0.1:8000. It’s incredibly easy. Again - make sure you are working in the virtualenv.

Now navigate into /var/www/test/ourcase/ and run this magical command:

gunicorn ourcase.wsgi:application

It will start something like the gunicorn server - you should be able to see your django welcome page on 127.0.0.1:8000.

This is the most primitive configuration, which is enough for this test but not for deploying on a server. For that we’ll want to add much more. Create a start script /var/www/test/gunicorn_start.sh:

#!/bin/bash

NAME="ourcase"                              #Name of the application (*)
DJANGODIR=/var/www/test/ourcase             # Django project directory (*)
SOCKFILE=/var/www/test/run/gunicorn.sock        # we will communicate using this unix socket (*)
USER=nginx                                        # the user to run as (*)
GROUP=webdata                                     # the group to run as (*)
NUM_WORKERS=1                                     # how many worker processes should Gunicorn spawn (*)
DJANGO_SETTINGS_MODULE=ourcase.settings             # which settings file should Django use (*)
DJANGO_WSGI_MODULE=ourcase.wsgi                     # WSGI module name (*)

echo "Starting $NAME as `whoami`"

# Activate the virtual environment
cd $DJANGODIR
source /var/www/test/venv/bin/activate
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPATH=$DJANGODIR:$PYTHONPATH

# Create the run directory if it doesn't exist
RUNDIR=$(dirname $SOCKFILE)
test -d $RUNDIR || mkdir -p $RUNDIR

# Start your Django Unicorn
# Programs meant to be run under supervisor should not daemonize themselves (do not use --daemon)
exec /var/www/test/venv/bin/gunicorn ${DJANGO_WSGI_MODULE}:application \
  --name $NAME \
  --workers $NUM_WORKERS \
  --user $USER \
  --bind=unix:$SOCKFILE

Wow! A lot happened here compared to our primitive variant. Everything marked with (*) in the comments can be changed (or must be changed if your paths differ).

The most important change here is that we added SOCKFILE - a socket. This is the magic thingie which enables nginx to serve the django project (app). Gunicorn runs the server much like in the previous primitive variant, but communicates through this socket file in a language nginx understands. nginx watches this socket file and happily serves everything that appears there.
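A Unix socket is just a file-like rendezvous point two local processes can talk through. Here is a toy Python sketch of the same idea (the socket path and messages are illustrative, and the two threads stand in for gunicorn and nginx):

```python
import os
import socket
import tempfile
import threading

# Illustrative path; in the guide it would be /var/www/test/run/gunicorn.sock
SOCK = os.path.join(tempfile.mkdtemp(), "demo.sock")
ready = threading.Event()

def app_side():
    # The "gunicorn" side: listen on the socket and answer one request.
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as srv:
        srv.bind(SOCK)
        srv.listen(1)
        ready.set()  # signal that the socket is ready for connections
        conn, _ = srv.accept()
        with conn:
            request = conn.recv(1024)
            conn.sendall(b"response to: " + request)

t = threading.Thread(target=app_side)
t.start()
ready.wait()

# The "nginx" side: connect to the same socket file and forward a request.
with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as cli:
    cli.connect(SOCK)
    cli.sendall(b"GET /")
    reply = cli.recv(1024)

t.join()
print(reply)  # → b'response to: GET /'
```

This is all nginx and gunicorn do with gunicorn.sock, just with real HTTP traffic instead of toy strings.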

It’s common practice (and I strongly encourage it) to run the server as a specific dedicated user, for security reasons. So if you haven’t done it before, create a user and a group for this purpose (ALSO COVERED IN MY OTHER TUTORIAL).

The number of workers determines how much computing power you give this website - how many processes serve requests in parallel.

If you are not running the script as the user set in the USER variable, you won’t be able to run it (you’ll get errors). That’s because of permissions. If you’d like to check or debug this script (and that’s recommended), comment out the --user $USER line - it should then work even if you run it as another user. Of course, you also need to make the script executable.

See the gunicorn documentation for more information.

This script floats around the internet in multiple variants. If you have problems running it, try removing some of the option lines in the last part of the script. For example, I wasn’t able to run it with the directive --log-level=warning.

If it works, great! Now we’ll daemonize it using systemd. Of course you can use another init system (like Ubuntu’s upstart); just search for "how to run a script after boot".

Create a new service file /usr/lib/systemd/system/gunicorn_ourcase.service and insert this:

[Unit]
Description=Ourcase gunicorn daemon

[Service]
Type=simple
User=nginx
ExecStart=/var/www/test/gunicorn_start.sh

[Install]
WantedBy=multi-user.target

Now enable it as with other units:

systemctl enable gunicorn_ourcase

Now this script should run after boot. Check that it works (reboot and use systemctl status gunicorn_ourcase).

That’s all for gunicorn.

django project deployment

Deploying a django project is a topic for a longer tutorial than this one. So I’ll keep it as short as possible.

If you’ve only ever developed a django project with the test server, it does a ton of things for you without any notice. In production it’s not that easy - not everything is done automatically. django is prepared for that, but you need to activate these features, since they’re not on by default.

Directories

A nice example is static files. There are, e.g., some CSS styles for the django administration page. These need to be in a special folder, and we’ll tell nginx that when the website asks for the file style.css, it should look in /var/www/test/ourcase/static/style.css.

But how do we find all these static files? Right now they are served from the django installation directory (probably something like /var/www/test/venv/lib/python3.4/django/...). manage.py has a special command for this, but first we need to give it a few details in settings.py.

The most common configuration is to have a special directory for static files where you can edit them, paste into them etc. Then there is the static directory, where you make no changes by hand - it belongs to the manage.py command, which collects files into it from your special directory, from the django installation directory etc. In templates, when you want to use e.g. a static background image, you use {{ STATIC_URL }}/static_images/mybgrnd.png.

To do this, we add the following to settings.py:

STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, "static")
STATICFILES_DIRS = (os.path.join(BASE_DIR, "sfiles"), )

All the static files you use should now be placed inside /var/www/test/ourcase/sfiles. If you just want to try it, create this directory and run touch sfiles/example.png inside it.

Now run ./manage.py collectstatic. It will ask you whether you really want to do it (you do). The process will run, and after it finishes, all static files will be collected inside the static folder. You need to do this every time you change something inside the sfiles folder.

Websites also usually have a media folder, which is used for user files - for example, images for blog posts. We usually use MEDIA_URL for referencing things from the media dir in templates.

The configuration should be the same as with the django testing server, and you don’t need to make any special changes here. Mine looks like this:

MEDIA_ROOT = os.path.join(BASE_DIR, "media")
MEDIA_URL = '/media/'
ADMIN_MEDIA_PREFIX = '/media/admin/'

and all user files (uploaded images, sounds...) go inside the /var/www/test/ourcase/media directory. You don’t need to do anything like collectstatic here.

The steps for other directories should be the same.

Enough about directories. But some other changes are needed to deploy a django project. In some cases I don’t really know exactly why these settings are required, but I know what they do and they just work.

Templates

I had to add this for templates:

TEMPLATE_DIRS = (os.path.join(BASE_DIR, 'templates'),)
TEMPLATE_LOADERS = (
    'django.template.loaders.filesystem.Loader',
    'django.template.loaders.app_directories.Loader',
)

That’s where I put my base.html, which is used by all other templates across the whole website (in every app). If you use flatpages, you can also make a directory inside templates called flatpages, copy base.html there as default.html, and use this template as the base for flatpages.

SITE_ID

For some purposes you need to set SITE_ID. In my case it was because of flatpages. It’s easy:

SITE_ID = 1

ALLOWED_HOSTS

You need to put all your domains here. If your domain is www.example.com (and, I guess, example.com as well), it should look like this:

ALLOWED_HOSTS = ['example.com', 'www.example.com']

DEBUG

This directive should be set to False. But while you are configuring your server for the first time, leave it at True. It helps you find bugs on your site.

That’s it!

nginx server configuration

The last part is configuring nginx to listen on the socket created by gunicorn. It’s not hard.

Edit /etc/nginx/nginx.conf and paste this into http block:

upstream test_server {
  server unix:/var/www/test/run/gunicorn.sock fail_timeout=10s;
}

# This is not necessary - it's just commonly used
# it redirects example.com -> www.example.com
# so it isn't treated as two separate websites
server {
        listen 80;
        server_name example.com;
        return 301 $scheme://www.example.com$request_uri;
}

server {
    listen   80;
    server_name www.example.com;

    client_max_body_size 4G;

    access_log /var/www/test/logs/nginx-access.log;
    error_log /var/www/test/logs/nginx-error.log warn;

    location /static/ {
        autoindex on;
        alias   /var/www/test/ourcase/static/;
    }

    location /media/ {
        autoindex on;
        alias   /var/www/test/ourcase/media/;
    }

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;

        if (!-f $request_filename) {
            proxy_pass http://test_server;
            break;
        }
    }

    #For favicon
    location  /favicon.ico {
        alias /var/www/test/ourcase/static/img/favicon.ico;
    }
    #For robots.txt
    location  /robots.txt {
        alias /var/www/test/ourcase/static/robots.txt;
    }
    # Error pages
    error_page 500 502 503 504 /500.html;
    location = /500.html {
        root /var/www/test/ourcase/static/;
    }
}

OK, that was a lot. I’ll go through it quickly.

First, we tell nginx where the socket file (gunicorn.sock) from gunicorn is.

Then there is a redirect from the non-www domain to the www domain. This can be omitted or solved another way (CNAME).

Then there is the main body of the server configuration:

  • Logs are useful for catching bugs and errors - they take multiple parameters, like how much they should bother you. Don’t forget to create the log directory.
  • The static and media blocks are extremely important - this is why we played all those games with collectstatic etc. They tell nginx where to look when the website asks for e.g. /static/style.css or /media/img/picture_of_my_cat.png.
  • The block with all the proxy settings is also important; it handles the technical background of socket communication and redirecting. Don’t worry about it.
  • Favicon and robots.txt are not necessary, but browsers and web crawlers keep searching for them. So if you don’t like errors in your logs, add these two things.
  • The last block tells nginx where to look for error pages when something doesn’t exist.

Save and exit. Another great feature of nginx is its ability to check the configuration. Type nginx -t (don’t forget root permissions) and you’ll see whether the configuration is syntactically correct. Don’t forget about that pesky ;.

Finally, enable nginx to run after reboot:

systemctl enable nginx

Some sugar candy

Install the package setproctitle with pip. It’s useful for displaying more info about running processes like gunicorn in system process managers (htop, ps, ...).

Debugging

That’s it. Now restart the computer and see if it explodes. You can inspect nginx or gunicorn with systemctl, e.g.:

systemctl status gunicorn_ourcase

and some information should also be in the log files. Try to reach your website from a browser and see what happens. Don’t forget that browsers like caching - press CTRL+r to reload and see the changes you’ve made.

After every change to the nginx configuration you need to reload it by running nginx -s reload.

To see what processes are spawned, you can use a task manager like htop or ps.

Finalization

That’s all! I hope this guide helped you and that you have successfully started up your website! :)

Installing Arch Linux on laptop and how to make it usable

In this tutorial I’d like to cover all the steps from installing the Arch Linux OS to a stable, secure, working system with working Wi-Fi, a window manager (i3) etc.

All my current configs are in this repository. Feel free to take inspiration from them (as I did from others).

Installation

There are tons of step-by-step guides on how to install Arch (the official Arch wiki is a good start), so I will not go too deep here. Anyway, a short summary:

Primary installation

Partitioning

You have to prepare the disc(s) where you’ll install Arch Linux. One disc can be split into multiple partitions. I recommend using two partitions for Arch Linux: one for the system and one for user data (also known as the "home directory"). You can of course have one, or more - as you wish. In this tutorial I will use 2 partitions. It doesn’t matter if there are other partitions with other systems (another Linux, Windows...). How big should the partitions be? I recommend 40-50 GB for the system and the rest for home.

The easiest way to partition is to use GParted. If you are using Linux, you can install it from your distribution and run it from there. Of course, you won’t be able to resize, create etc. partitions on a disc which is currently in use. In that case, or if you don’t have Linux, there is a GParted live distribution - make a bootable USB flash drive with it. In GParted you have to create two partitions with the sizes stated above and format them with the "ext4" file system. Piece of cake. For convenience it’s good to label them as well (when you create a partition, you can add a label).

There might be a problem with your BIOS - booting from a USB flash drive doesn’t have to be the default. You may need to change the boot priority order in your BIOS (the "thing" that runs before the operating system boots up). Google is your friend :) .

Installing Arch Linux

Now you have a disc prepared for installation. Download the latest ISO of Arch Linux and make a bootable USB flash drive with it. When you are done, insert the USB flash drive into the PC and boot from it. It should boot up to the Arch Linux prompt (terminal, console).

Now we need to connect to the internet. You can use the command wifi-menu or just plug in an ethernet cable. To check that you are connected, try ping google.com. If you get a response, it’s working.

Now we need to attach the prepared partitions to the currently running OS. Which ones are they? You can find out by typing lsblk. All partitions will be listed. You care about the two you partitioned earlier; you should recognize them by their size. If you are not sure, you’ll find out in a minute. In my case, /dev/sda5 is for the system (40GB) and /dev/sda6 for home. Of course yours might differ, so substitute accordingly. Do:

  • mount /dev/sda5 /mnt
  • check that it is empty with ls /mnt - if you don’t see anything (or only something like lost+found), it’s our partition :)
  • create a directory for home: mkdir /mnt/home
  • mount /dev/sda6 /mnt/home

Now we can actually install Arch Linux. Type pacstrap /mnt base and wait. It will download and install the packages.

Post install

Now we need to tell the system which partition is the system disc and which is the home partition. genfstab will help us here:

genfstab -U /mnt >> /mnt/etc/fstab

Link some zone info

TODO

Change root settings

Now we will change root into the new system - from the current one (the USB one) we will magically get into the new one. This magic happens with the command: arch-chroot /mnt.

We need to install packages for connecting to the internet, as we did at the start of the installation. For that we will need these packages (which are included in the USB version, but not in the base installation): pacman -S dialog wpa_actiond ifplugd wpa_supplicant sudo zsh. That should be sufficient for making a Wi-Fi or wired connection in our new system once we finish our work here. There are also two useful packages, sudo and zsh. I will cover them in the next paragraph.

In Linux there is always one user who is equivalent to a god. His name is "root", and you are currently logged in as him. We will change his password: type passwd and set a new one. We also want to add a regular user (think of it as a god creating humans). This is done with useradd -m -G wheel -s /usr/bin/zsh username, where username is whatever you wish; I will use "bob" as the default user in the next chapters. About the other switches in the command: -m creates bob's sandbox for his files (his home directory) and -G adds him to the wheel group. Why? Remember installing sudo and the mention of root? It is better to work as bob (being a god all the time means a lot of responsibility), but sometimes you need the superpowers root has. sudo will do that for us: it can grant you superuser privileges. More about it here. Now the wheel group: every user in the wheel group gets the ability to use sudo. Type visudo, find the line # %wheel ALL=(ALL) ALL and delete the # character (for future reference, this is called "uncommenting" a line). It will then look like this: %wheel ALL=(ALL) ALL. Save and exit (in vim just press escape and :x). The last switch of the useradd command was -s /usr/bin/zsh; this will save you time in the terminal (where you will spend a lot of it). We will set the same shell for root with chsh -s /usr/bin/zsh. Last thing - we need to set a password for bob. Do it by typing passwd bob.

Bootloader

We need to tell your PC which systems are installed and give you the ability to choose between them (Windows, other Linux distros...). For that we will need one or two more packages: pacman -S grub. If you have Windows installed on another partition, also install pacman -S os-prober. When you boot your PC, APPROXIMATELY this sequence happens:

  • BIOS - it looks at the beginning of your disc for the first stage of GRUB
  • GRUB first stage - if found, GRUB takes control, looks for further files with more information and passes control to the GRUB second stage
  • GRUB second stage - it gives you the option to choose the system you want to boot and then kicks it off
  • the OS boots up

This is not precise, but it is sufficient for our purposes and, to be honest, for 90% of what you need on a daily basis (personally, I don't know more than this :) ). The BIOS is installed at the factory, so our work is to install the GRUB stages. Decide which disc you want to use - I recommend the first one, usually called /dev/sda. If you have only one disc in your PC, it is this one :) .

CAUTION - notice that I am not speaking about a partition, in which case I would need to add a number after sda. The first stage of GRUB is, in a way, partition independent. OK, now install it: grub-install --target=i386-pc --recheck /dev/sda - again, no number after sda. Of course, change the a to match your case. Now we need to install the second stage of GRUB. It goes onto the current system partition, so run grub-mkconfig -o /boot/grub/grub.cfg. Now you are ready to restart your PC. Do it by typing shutdown now, unplug the USB flash drive and turn the PC on again. If everything went well, you should see a black-and-white menu with the names of the available systems. Choose Arch, of course. If not, just boot again from the USB flash drive, mount the partitions with the already installed system, arch-chroot inside it and try installing GRUB again, or find out what went wrong. Don't panic :).

Making system usable

Login in

You should be looking at the Arch Linux console asking for a username and password. You have two options now: sign in as bob or as root. For now I recommend logging in as root, because we will be maintaining the system for a while. But in the future, always use the regular user for common tasks and, when you need root privileges, use the sudo command. So the username is root and the password is the one you set earlier. If you forgot it, you can again boot from the USB flash drive, arch-chroot in and change it.

Setting connection

We will set up a simple connection manager which will auto-connect to known Wi-Fi networks and also auto-connect when you plug in an ethernet cable. When you want to connect to a new, yet unknown Wi-Fi network, you will use wifi-menu.

So now connect to the internet using wifi-menu. Next we will enable the networking daemons (things which run silently in the background) to start after boot. For that we need to know what your Wi-Fi and ethernet devices inside your laptop are called. We can find out by typing ip addr. The output should be similar to this:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp2s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN group default qlen 1000
    link/ether e8:03:9a:97:b5:a7 brd ff:ff:ff:ff:ff:ff
3: wlp1s0: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 88:53:2e:c1:e4:d1 brd ff:ff:ff:ff:ff:ff

You care about the two of them which start with wlp... and enp.... Let's say they are enp2s0 and wlp3s0.
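With a longer output it can help to filter out just the names; a small sketch, assuming the modern predictable naming scheme (wlp*/enp* prefixes):

```shell
# Print only the wireless (wlp*) and wired (enp*) interface names
ip addr | awk -F': ' '/^[0-9]+: (wlp|enp)/ {print $2}'
```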

Now we are ready to enable auto-connecting to known networks. Let's do that with systemctl enable netctl-auto@wlp3s0 and systemctl enable netctl-ifplugd@enp2s0. That's it. Now, when you want to connect to an unknown Wi-Fi network, just type wifi-menu (needs root), and when you want a cable connection, just plug it in :) .

Graphical environment

Installing i3

As I said before, we are going to use i3. Take a look at their webpage and guide. To make it run, we will need to install these: pacman -S i3 dmenu xorg xorg-xinit. It might ask you about some choices - just install everything. It isn't necessary to have all the cruft from Xorg, but figuring out what is and isn't needed is just pain (Wayland should solve this in the near future). If it asks you about installing i3status, approve it. Xorg is used for all advanced displaying in Linux, and i3 needs it too. Whenever you run a graphical environment on Linux, it means that Xorg is running and on top of it there may be a window manager etc. So now we just tell Xorg to run i3 after it starts. To do that, edit the file ~/.xinitrc (vim ~/.xinitrc) to this:

#! /bin/bash
exec i3

This should be sufficient. From now on, you can start i3 by typing startx (try it :) ). To quit i3 back to the console, press Windows+Shift+E or Ctrl+Alt+Del. How to actually use i3 will be covered in the next part.

We'd like to start i3 (startx) automatically after logging in after boot. Open the file /etc/profile and add this:

# autostart systemd default session on tty1
if [[ "$(tty)" == '/dev/tty1' ]]; then
    exec startx
fi

What does this do? Next time you reboot your computer and log in with your username and password, i3 will start :) . If you don't want to start i3 and just need a console (or i3 is broken), you can simply change the tty (virtual console, switched with Ctrl+Alt+F1 to F7). Linux has 7 of them by default. In the majority of distributions with a DE (desktop environment), Xorg runs on the seventh tty. In our case it will be the first one.

Configuring i3 status bar

i3status is just what it says - a status bar. After installing it you need to edit it a bit. Its configuration lives in ~/.i3status.conf. Usually it is necessary to adjust these:

  • battery - you have to find out the number of your battery. Type ls /sys/class/power_supply. It should show something like ADP1 BAT1. The number after BAT is your lucky number; usually it is 1 or 0.
  • wireless and ethernet device names - here you need to replace wlan0 and eth0 with the ones you have. To find them, again type ip addr. There should be something like wlp1s0 and enp2s0 (on older distros there may still be wlan0 or eth0 - in that case keep it as is :) ).
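For illustration, the relevant fragments might look like this (a sketch using assumed names BAT 0, wlp1s0 and enp2s0 - substitute your own; the format strings are standard i3status options):

```
battery 0 {
        format = "%status %percentage %remaining"
}

wireless wlp1s0 {
        format_up = "W: (%quality at %essid) %ip"
        format_down = "W: down"
}

ethernet enp2s0 {
        format_up = "E: %ip"
        format_down = "E: down"
}
```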

Installing terminal

My choice of terminal with i3 is urxvt. Let's install it: pacman -S rxvt-unicode rxvt-unicode-terminfo. The terminfo part is just for some compatibility issues with ssh and screen.

Now configure it by opening ~/.Xdefaults. Add this:

! urxvt

URxvt*geometry:                115x40
!URxvt*font: xft:Liberation Mono:pixelsize=14:antialias=false:hinting=true
URxvt*font: xft:Inconsolata:pixelsize=17:antialias=true:hinting=true
URxvt*boldFont: xft:Inconsolata:bold:pixelsize=17:antialias=false:hinting=true
!URxvt*boldFont: xft:Liberation Mono:bold:pixelsize=14:antialias=false:hinting=true
URxvt*depth:                24
URxvt*borderless: 1
URxvt*scrollBar:            false
URxvt*saveLines:  2000
URxvt.transparent:      true
URxvt*.shading: 10

! Meta modifier for keybindings
!URxvt.modifier: super

!! perl extensions
URxvt.perl-ext:             default,url-select,clipboard

! url-select (part of urxvt-perls package)
URxvt.keysym.M-u:           perl:url-select:select_next
URxvt.url-select.autocopy:  true
URxvt.url-select.button:    2
URxvt.url-select.launcher:  chromium
URxvt.url-select.underline: true

! Set up copy and paste
URxvt.keysym.Shift-Control-V: perl:clipboard:paste
URxvt.keysym.Shift-Control-C:   perl:clipboard:copy

! disable the stupid ctrl+shift 'feature'
URxvt.iso14755: false
URxvt.iso14755_52: false

!urxvt color scheme:

URxvt*background: #2B2B2B
URxvt*foreground: #DEDEDE

URxvt*colorUL: #86a2b0

! black
URxvt*color0  : #2E3436
URxvt*color8  : #555753
! red
URxvt*color1  : #CC0000
URxvt*color9  : #EF2929
! green
URxvt*color2  : #4E9A06
URxvt*color10 : #8AE234
! yellow
URxvt*color3  : #C4A000
URxvt*color11 : #FCE94F
! blue
URxvt*color4  : #3465A4
URxvt*color12 : #729FCF
! magenta
URxvt*color5  : #75507B
URxvt*color13 : #AD7FA8
! cyan
URxvt*color6  : #06989A
URxvt*color14 : #34E2E2
! white
URxvt*color7  : #D3D7CF
URxvt*color15 : #EEEEEC

Now you have a nice looking terminal for i3. You can start i3 with startx and press Windows+d to open something like a run prompt. There you can type the program you'd like to run and press enter. Open urxvt for now :) .

Install yaourt and AUR

Arch Linux has several official repositories and also the unofficial AUR. It is not trivial to install packages from there, so there are helpers for that, such as yaourt, which is the equivalent of pacman for the official repos.

In the AUR you find useful packages such as the Oracle Java implementation, proprietary software, rarely used software, etc.

To install yaourt, do this:

  • pacman -S base-devel wget
  • wget https://aur.archlinux.org/packages/pa/package-query/package-query.tar.gz
  • wget https://aur.archlinux.org/packages/ya/yaourt/yaourt.tar.gz
  • tar xvf package-query.tar.gz
  • cd package-query
  • makepkg -s
  • pacman -U package-query*
  • cd ..
  • tar xvf yaourt.tar.gz
  • cd yaourt
  • makepkg -s
  • pacman -U yaourt*

(Note: makepkg refuses to run as root, so build as your regular user and use sudo for the pacman commands.)

That's it. We have installed yaourt and package-query from the AUR, and you can see that it is not hard, but it seems a bit...

...ehh - long. Now, to install something from the AUR, for example copy-agent, just type yaourt -S copy-agent. It will do all of this for you :) . Why is this not available by default? It might be dangerous to install something from the AUR, since anyone can upload there. So be aware of that!

Some other useful packages to make the system useful

  • Office suite - my choice of office suite (alternative to MS Office) is LibreOffice: pacman -S libreoffice-writer libreoffice-calc libreoffice-impress (from now on I will omit the pacman -S when talking about installing).
  • PDF viewer - I like the lightweight and fast viewer called zathura. Install zathura zathura-pdf-poppler.
  • Text editor - even though I use vim for 90% of my work, it is sometimes useful to have a simple graphical text editor. I'd recommend geany.
  • Partitioning - just gparted. Great tool.
  • FTP client - filezilla.
  • Graphics - for low-level work use imagemagick. For normal viewing use gpicview. Instead of Photoshop use gimp.
  • Analyzing processes - htop for processes, iotop for writes to disk.
  • LaTeX - all you need in most cases is texlive-core. The rest is optional; install it only if you need it.

As a LaTeX editor I'd recommend texmaker for beginners and texworks for the rest.

  • tree - try it in the terminal :) . It shows the structure of the current folder. To limit the depth, type tree -L <n>.
  • Torrents - transmission-gtk.

  • Console-based browser - lynx. It can be handy when you need a web browser and can't run a graphical environment.
  • Console-based file manager - ranger. Vim-like bindings, tabs, written in Python, and a fast file manager? YES!
  • Media player - vlc should be sufficient.

Fonts

Install ttf-dejavu ttf-inconsolata.

Nice look of GTK2 apps

You may have noticed that apps look a bit awful. For configuration like this there exists a great tool called lxappearance. Also install the simple Greybird theme from the AUR - so we'll need to use yaourt: yaourt -S xfce-theme-greybird.

Now just open lxappearance (by typing Win+d and lxappearance) and set Greybird as the default theme.

Multiple monitors

arandr (xrandr)

For multiple monitor configuration I love app called arandr. Install it :) . Now just run it and you should be able to configure layouts, positions, resolutions etc. as you wish. You can even save your layout.

arandr is just a GUI frontend for xrandr. This means your mouse clicks are converted into a shell command, which is sent to xrandr. The command for placing a monitor connected to HDMI1 directly to the right of the notebook's panel is as follows: xrandr --output HDMI1 --right-of LVDS1 --preferred --primary --output LVDS1 --preferred. This knowledge will be useful in the next chapter.

Automatically detect (dis)connected monitor and change layout

There is a low-level thing called udev which takes care of everything you connect to your PC. We will tell it to run a script which contains the xrandr commands.

Create this file /etc/udev/rules.d/95-monitor-hotplug.rules and add this:

#Rule for executing commands when an external screen is plugged in.
KERNEL=="card0", SUBSYSTEM=="drm", ENV{DISPLAY}=":0", ENV{XAUTHORITY}="/home/USERNAME/.Xauthority", RUN+="/usr/local/bin/hotplug_monitor.sh"

Now we need to create /usr/local/bin/hotplug_monitor.sh with this content:

#! /usr/bin/bash
# Sets right perspective when monitor is plugged in
# Needed by udev rule /etc/udev/rules.d/95-hotplug-monitor
export DISPLAY=:0
export XAUTHORITY=/home/USERNAME/.Xauthority

function connect(){
    xrandr --output HDMI1 --right-of LVDS1 --preferred --primary --output LVDS1 --preferred
}

function disconnect(){
      xrandr --output HDMI1 --off
}

xrandr | grep "HDMI1 connected" &> /dev/null && connect || disconnect

CAUTION - this script is set up for my layout, where LVDS1 is my laptop display and the second monitor is connected over HDMI1 (and sits to the right of LVDS1). You need to adjust it to your case.

If you connect your monitor before boot, there may be no "change" event to trigger this script. To solve that, add this line before exec i3 in ~/.xinitrc:

/usr/local/bin/hotplug_monitor.sh &

Bluetooth

Use bluez and bluez-utils. Configuration and usage are on the Arch wiki. But be aware that bluez, and Bluetooth on Linux generally, is TERRIBLY documented. bluez has no documentation of its own and all you can find is an old mailing list. UAAAAA!!!

Some other tuning

For a nicer look of Java applications and colors in manual pages and less, open .zshenv and add:

export _JAVA_OPTIONS='-Dawt.useSystemAAFontSettings=on'
export EDITOR=/usr/bin/vim

# Coloring less command
export LESS=-R
export LESS_TERMCAP_me=$(printf '\e[0m')
export LESS_TERMCAP_se=$(printf '\e[0m')
export LESS_TERMCAP_ue=$(printf '\e[0m')
export LESS_TERMCAP_mb=$(printf '\e[1;32m')
export LESS_TERMCAP_md=$(printf '\e[1;34m')
export LESS_TERMCAP_us=$(printf '\e[1;32m')
export LESS_TERMCAP_so=$(printf '\e[1;44;1m')

bash/zsh completion - maybe you have noticed that if you type the start of some command, zsh will complete it when you hit the TAB key. It is not supported for all commands, so add it at least for some of them: install vim-systemd.

Automounting discs, mounting and unmounting as a normal user

We will use devmon, which is part of udevil package. Add this line to ~/.i3/config:

exec --no-startup-id "devmon --no-gui"

This runs the daemon which will take care of it for us.

To unmount the most recently mounted disc, type devmon -c. To unmount all removable devices, type devmon -r. To mount a connected disc, type devmon --mount /dev/sdb1 (of course, change sdb1 to your device). Use devmon -h for help.

Writing to NTFS discs

To be able to write to NTFS-formatted drives, it is good to install ntfs-3g. More on the Arch wiki :) .

Power control and power consumption

For laptops there is a great tool called tlp. powertop can also be handy, but don't trust it too much...

Backups

TODO - same as RPI

Sound

To get sound working, install alsa-firmware alsa-utils alsa-plugins pulseaudio-alsa pulseaudio. It usually works out of the box, but it is necessary to run pulseaudio. Add this to ~/.i3/config: exec --no-startup-id "pulseaudio --start"

For graphical control of sound use pavucontrol.

For displaying the current volume in i3status, add this to ~/.i3status.conf:

order += "volume master"
...
...
...

volume master {
        format = "V: %volume"
        device = "default"
        mixer = "Master"
        mixer_idx = 0
}

Using spare memory for browser cache

If you have spare memory (RAM), that's bad :D - use it for something. It's a pity to leave it idle when it could hold something useful, like your browser's cache.

What does that mean? Browsers store tons of data in a cache for faster loading next time. That wears out the disc (too many writes) and it is slow. To move the cache to RAM, follow these links: chromium firefox
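One common approach, sketched here as an illustration (the path and size are my assumptions, not from the linked guides), is to mount a RAM-backed tmpfs over the cache directory via /etc/fstab:

```
# RAM-backed cache directory; contents vanish on reboot, which is
# fine for a browser cache. Adjust the path and size to your case.
tmpfs  /home/bob/.cache  tmpfs  noatime,nodev,nosuid,size=400M  0  0
```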

Making Raspberry Pi usable

Introduction

After 3 months of using the RPi, I decided to write this tutorial for people like me - who look for an easy, understandable way to make the RPi as awesome as possible.

In this tutorial I will walk you through the whole process of turning a Raspberry Pi into a secure, reliable, efficient, fast and easy-to-maintain server for various purposes such as FTP, web hosting, file sharing... All that thanks to the Arch Linux ARM operating system. The device will be "headless" - meaning there will be no fancy windows etc., just the command line. Don't be scared; I will walk you through it and you'll thank me later :) . You don't need any special knowledge about computers and Linux systems.

What you get

From “bare” RPi you’ll get:

  • The ability to safely connect to your RPi from anywhere
  • The possibility of hosting web pages, files, etc.
  • A readable and reliable system (it will do what you want and nothing more)

What you will need

  • Raspberry Pi (doesn't matter which model) with a power supply
  • SD card as the main hard disk for the RPi
  • SD card reader on a computer with internet access
  • Ethernet LAN cable or a USB Wi-Fi dongle
  • Another computer (preferably with Linux, but never mind if you use Windows or Mac)
  • The possibility to physically get to your router and the credentials to log in to it (or the contact of your network administrator :) )
  • A few hours of work

What you don’t need

  • Monitor or ability to connect RPi to some monitor

Start

So you have just a bare RPi, an SD card, a power supply and an ethernet cable (RJ-45). Let's start! There are hundreds of guides out there, but I haven't found them satisfying.

Installing Arch Linux ARM to SD card

Go here and do the first 3 steps. That's it! You are done - you have your Arch Linux ARM SD card :)

Little networking

I guess you probably have some kind of "home router" ("the box with the internet"), and when you want to connect to it, e.g. over Wi-Fi with your laptop or mobile phone, it just connects (after entering the password). First you need to test what happens when you connect by ethernet cable, for example with your laptop. Turn off Wi-Fi and check it. Did your computer connect to the network (or even the internet) as usual?

If yes, great! You can proceed. This is what we need - the RPi, when it boots up, has to connect to the network automatically. Then we will be able to connect to it. You will need to find out one more thing - which IP address the router assigned to you when you connected by cable - it is very probable that the RPi will get the same, or a similar, one. Don't be afraid - the IP address is easy to get; on modern systems, one command :) .

OK, now insert the SD card into the RPi, connect it to your router with the ethernet cable and then turn the RPi on by plugging in the power supply. The diodes start flashing. Now back to your computer - we will try to connect to it using SSH. SSH is just the "magic power" which lets you connect from one computer to another.

The RPi is ready and waiting for a connection. How to use ssh with utilities (Linux, Mac) or programs (Windows) is super easy - you will find tons of tutorials on the internet (keywords: how to use ssh). The IP address is probably the one you noted before; it will be something like 192.168.0.x, 10.0.0.14x or similar. The next thing you need is a username: it is just "root".

If your RPi didn't get this address (ssh is not working), then there are two options.

  1. Log in to your router settings, find the list of all connected devices with their IP addresses and try them.
  2. Use nmap to find active devices in your network.

Example: you had the address 192.168.0.201 assigned. Then you type (on Linux): ssh root@192.168.0.201.

You should end up in the RPi console.

Enough networking for now. We'll set up a proper network configuration later in this guide, but first some must-haves.

First setup

This is covered all over the internet, so I will just redirect you: elinux - from that guide, finish these parts (in the RPi console):

  • Change root password
  • Modify filesystem files
  • Mount extra partitions (if you don’t know what it is, nevermind)
  • Update system
  • Install Sudo
  • Create regular user account

That's enough for now. Log out from ssh (type exit) and connect again, but as the user you created. Similar to before: ssh username@ip.address. From now on, you'll need to type "sudo" in front of every command which is potentially dangerous. I will warn you in the next chapters.

We must be sure that the RPi reconnects after a reboot. Type sudo systemctl status netctl-ifplugd@eth0. It should show something like this:

● netctl-ifplugd@eth0.service - Automatic wired network connection using netctl profiles
   Loaded: loaded (/usr/lib/systemd/system/netctl-ifplugd@.service; enabled)
   Active: active (running) since Thu 2014-06-26 17:38:12 CEST; 4h 26min ago
     Docs: man:netctl.special(7)
 Main PID: 302 (ifplugd)
   CGroup: /system.slice/system-netctl\x2difplugd.slice/netctl-ifplugd@eth0.service
           └─302 /usr/bin/ifplugd -i eth0 -r /etc/ifplugd/netctl.action -bfIns

Jun 26 17:38:12 530uarch ifplugd[302]: ifplugd 0.28 initializing.
Jun 26 17:38:12 530uarch ifplugd[302]: Using interface eth0/E8:03:9A:97:B5:A7 with driver <r8169> (version: 2.3LK-NAPI)
Jun 26 17:38:12 530uarch ifplugd[302]: Using detection mode: SIOCETHTOOL
Jun 26 17:38:12 530uarch ifplugd[302]: Initialization complete, link beat not detected.

The keywords here are active (running) in "Active" and enabled in "Loaded". If it says disabled, just enable it with systemctl enable netctl-ifplugd@eth0.service

Now test whether you are connected to the internet: type ping 8.8.8.8. If you see replies, it's good! If not, your internet connection is not working. Try to find out why - unfortunately that cannot be solved here.
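Scripts can rely on ping's exit status instead of reading its output; a small sketch (8.8.8.8 is Google's public DNS server, used here only as a well-known address):

```shell
# ping exits with status 0 when it got a reply, non-zero otherwise
if ping -c 1 8.8.8.8 >/dev/null 2>&1; then
    echo "network up"
else
    echo "network down"
fi
```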

Warning: also try ping google.com. It may fail even though pinging 8.8.8.8 worked. The reason is bad DNS servers (never mind what that is for now). To solve this you have to find the "DNS servers of your ISP" - try to google it. When you find them, add them to /etc/resolv.conf.
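If you cannot find your ISP's servers, a well-known public fallback is Google's DNS (my example, not something the guide above prescribes):

```
# /etc/resolv.conf
nameserver 8.8.8.8
nameserver 8.8.4.4
```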

Reboot your RPi using systemctl reboot. You should be able to connect to it again after about a minute. If not, something is wrong... In that case you need to find out why the connection stopped working - if you have a keyboard and monitor, you can repair it directly. If not, you can try to fix the mistake on another computer by inserting the SD card. Otherwise, reinstall...

Installing some sugar candy

For our purposes we will install useful things which will help us maintain the system. So, run this: pacman -S vim zsh wget ranger htop lynx

Do you see:

error: you cannot perform this operation unless you are root.

Then you need to type sudo pacman -S .... I will not write it out in the future, and neither do other guides, so you might sometimes be confused when reading a tutorial whose author implicitly uses sudo without mentioning it.

We will also need these in the next chapters: pacman -S nginx sshguard vsftpd

You may notice that this is really few packages - and that's true! Isn't it great? No need for tons of cruft on your device.

What are these? Just a short summary - you can find out more in the manual pages (man <name_of_package>) or on the internet:

  • vim - powerful text editor (that's what you will be doing 99% of the time). The first few days are horrible, but keep using it :)
  • zsh - never mind the details. Just install it, together with oh-my-zsh
  • wget - for downloading things without a browser
  • ranger - file manager (you can browse files, folders...)
  • htop - task manager - you can see which tasks are running, how much CPU/MEM is used, kill processes and so on
  • lynx - browser - no kidding :)

Some configurations

I assume you installed zsh with oh-my-zsh (and changed your shell) and also vim. You are connected as the created user (from now on I will call him bob). You are in bob's home directory - check it by typing pwd. It will print /home/bob.

Make vim usable

Edit .vimrc file: vim .vimrc and insert this:

syntax on
set number
set ruler
set nocompatible
set ignorecase
set backspace=eol,start,indent
set whichwrap+=<,>,h,l
set smartcase
set hlsearch
set incsearch
set magic
set showmatch
set mat=2
set expandtab
set smarttab
set shiftwidth=4
set tabstop=4
set lbr
set tw=500
set ai
set si
set wrap
set paste
set background=dark
vnoremap <silent> * :call VisualSelection('f')<CR>
vnoremap <silent> # :call VisualSelection('b')<CR>

It will customize vim a bit, so editing files in it will be easier.

Journaling

Journaling is one of the most important things to have. It simply records everything systemd does. It is part of systemd and quite customizable. We will keep the journal in memory because of the limited write endurance of SD cards. We will also compress it and limit its size to 40 MB.

Open the file /etc/systemd/journald.conf and uncomment these lines:

[Journal]
Storage=volatile
Compress=yes
...
RuntimeMaxUse=40M

Network configuration

For reasons I will mention later, we need to set the RPi to connect with a static IP. This assures that the IP address of the RPi stays the same and you can always connect to it. Right now it probably gets an automatically assigned IP address from the router (that is called DHCP).

We will use systemd-networkd.

Type ip addr. It should show something like this:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: ifb0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default qlen 32
    link/ether 22:2b:20:5b:8e:b0 brd ff:ff:ff:ff:ff:ff
3: ifb1: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default qlen 32
    link/ether 6a:68:fb:64:2f:c3 brd ff:ff:ff:ff:ff:ff
4: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether b8:27:eb:2d:25:18 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.201/24 brd 192.168.0.255 scope global eth0
       valid_lft forever preferred_lft forever

You are interested just in the name eth0. If it is there, it is OK. In future versions of the system it may change to something else, for example enp1s0. Don't be afraid of that; just use that name instead in the next chapters.

In this part you will need to get the address of your router. How do you obtain it?

And how do you choose a static address? As you know, your router assigns IP addresses automatically (that is called DHCP), but not randomly from the full range - it has some range of IP addresses it can assign. A common setup is this: the router has the IP address 192.168.0.1 and assigns addresses from 192.168.0.2 to 192.168.0.254. Another common one is 10.0.0.138 for the router, assigning addresses from 10.0.0.139 to 10.0.0.254. But it can be anything else.

Interesting - and what the hell should you do with that? I suggest setting an address near the end of this range. You may notice that my "eth0" has the IP address 192.168.0.201.
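A side note on the /24 suffix used in the configuration below: it is CIDR notation for the 255.255.255.0 netmask, which leaves 8 bits for host addresses. A quick shell sanity check of the usable-host count (nothing RPi-specific, just arithmetic):

```shell
# A /24 network keeps 32-24 = 8 bits for hosts: 2^8 = 256 addresses,
# minus the network and broadcast addresses = 254 usable hosts
# (e.g. 192.168.0.1 .. 192.168.0.254).
echo $(( (1 << (32 - 24)) - 2 ))   # prints 254
```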

Open the file /etc/systemd/network/ethernet_static.network (how? just use vim as before - but don't forget to put sudo in front of vim, or you will not be able to save it!) and paste this:

[Match]
Name=eth0

[Network]
Address=the.static.address.rpi/24
Gateway=your.router.ip.address

my example:

[Match]
Name=eth0

[Network]
Address=192.168.0.201/24
Gateway=192.168.0.1

Now we need to try it - we don't want to lock ourselves out. The connection is currently handled by the thing called netctl-ifplugd@eth0. We want to do this:

  • Turn netctl off
  • Turn networkd on
  • Check whether the RPi is connected to the internet
  • If yes, do nothing - we can now connect over ssh
  • If not, turn networkd off and turn the working netctl back on

Why so complicated? Because when you change the network setup, it will disconnect - and of course we will be disconnected from SSH as well. And it is discouraged to run more than one network manager at once, because they would interfere with each other, and you don't want that.

This script will do what we want:

#!/usr/bin/bash
# Switch from netctl to systemd-networkd; fall back if the network dies.
systemctl stop netctl-ifplugd@eth0
systemctl restart systemd-networkd

sleep 10
systemctl status systemd-networkd >> log.txt
# If google.com is unreachable, go back to netctl
if ! ping -c 1 google.com; then
    systemctl stop systemd-networkd
    systemctl start netctl-ifplugd@eth0
fi

To run this script you need to be logged in as root. You can do that by typing sudo -i. Now type vim script.sh and insert the script there. Save and close (in vim using :x). Then type chmod +x script.sh to make the script executable. Finally, run it: ./script.sh.

The connection will drop now. Wait 30 seconds. If everything worked properly, you should be able to connect to the RPi again using the same ssh command as before. In that case, check that it really works as intended - is systemd-networkd taking care of the connection and is netctl stopped?

To find out, type systemctl status systemd-networkd. Does it show "active (running)" and something like gained carrier?

● systemd-networkd.service - Network Service
   Loaded: loaded (/usr/lib/systemd/system/systemd-networkd.service; enabled)
   Active: active (running) since Wed 2014-06-11 18:42:13 CEST; 2 weeks 1 days ago
     Docs: man:systemd-networkd.service(8)
 Main PID: 213 (systemd-network)
   Status: "Processing requests..."
   CGroup: /system.slice/systemd-networkd.service
           └─213 /usr/lib/systemd/systemd-networkd

Jun 17 17:52:01 smecpi systemd-networkd[213]: eth0: lost carrier
Jun 17 17:52:02 smecpi systemd-networkd[213]: eth0: gained carrier

If yes, great! We can get rid of netctl by uninstalling it with pacman -Rnsc netctl and enable networkd at boot with systemctl enable systemd-networkd.

If not, netctl should have started again and saved the day. Check with systemctl status netctl-ifplugd@eth0. It should be active; otherwise there is some other magic power taking care of your connection. Try to find out why networkd didn't work and repair it (probably a bad IP address...). There should be some info in the file log.txt.

If you can't connect at all, don't panic. Just turn the RPi off (pull out the power supply) and on again. It should reconnect normally with netctl-ifplugd. Try to find out why networkd is not working and try again.

Time synchronization

You have maybe noticed that the time on your RPi is quite weird. That is because it has no real hardware clock. Every time the RPi wakes up, it thinks it is 1970 (the Unix epoch). You don't have to care much, but it would be nice if the time were set correctly after boot. You can do that using a really great part of systemd - go ahead and enable the service which takes care of it: systemctl enable systemd-timesyncd. That's all; it will start after the next reboot. If you want it to start now, also run systemctl start systemd-timesyncd.

Configuring SSH

We will open the RPi to the world, and in that case we need to secure it a bit. The service that takes care of SSH is called sshd. “Where” is it? It is run by systemd, so systemctl status sshd will show you some info :). We will configure it a bit. This is not necessary, but highly recommended! Brute-force attacks are really common (hundreds every day on my little unimportant server).

Open file /etc/ssh/sshd_config and edit or add these lines as follows:

Port 1234
PermitRootLogin no
PubkeyAuthentication yes

That’s enough. Restart sshd with systemctl restart sshd.

From now on, you cannot log in as root over ssh, and that’s good. Also, we changed the port of ssh. Think about a “port” as a tunnel that is used for ssh. There are about 65 thousand of them and you can choose whichever you want. By default, port 22 is used for ssh. We changed that to (for example) 1234, because on port 22 there is too big a chance that someone will try to brute-force your credentials.
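To be concrete about those tunnels: ports are numbers from 1 to 65535, and ports up to 1024 are reserved for well-known services. A tiny helper (just a sketch; valid_ssh_port is a name I made up) to check that a custom port is usable:

```shell
# Ports are 16-bit numbers (1-65535); ports <= 1024 are reserved for
# well-known services, so a custom SSH port should be above that range.
valid_ssh_port() {
    [ "$1" -gt 1024 ] && [ "$1" -le 65535 ]
}

valid_ssh_port 1234 && echo "1234 is usable"   # prints: 1234 is usable
```
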

From now on, plain ssh bob@ip.address is not enough. You have to add the port that should be used (by default, port 22 is assumed). ssh -p 1234 bob@ip.address will do it for you :) .

The next thing we are going to do is set up sshguard. More about it here. You don’t need more :) . Just remember to use your port (in my case 1234) in its settings.

It is annoying to keep typing the same username and password when we want to connect to the RPi. And now we have to add “-p 1234” as well. We will make it automatic. Here is a quite good guide on how to do it. On the PC from which you are connecting (not the RPi), edit ~/.ssh/config like this:

Host my_superpc
  HostName ipaddressofRPi
  IdentityFile /home/yourusername/.ssh/name_of_identityfile
  User bob
  Port 1234

From now on, when you want to connect to the RPi, you can just type ssh my_superpc and it will take care of the rest.

Screen

You can live without it, but you shouldn’t! It makes you more productive and you don’t need to be afraid of the mishmash caused by accidentally closing a terminal during an update or losing the connection. Learn more about what screen is (here, here and here), install it (pacman -S screen), use it and love it.

It can be handy to automatically ssh into a screen session. For that I use this command (from the PC I connect to the RPi from):

ssh my_superpc -t screen -dRS "mainScreen". You can make an alias for something shorter (for example adding alias ssh_connect_RPI="ssh my_superpc -t screen -dRS mainScreen" to your .zshrc). Now all you need to do is type ssh_connect_RPI: if no screen session exists yet, it will create a new one; if one does, it will attach to it.

Speeding RPi up

Arch Linux ARM for the RPi is prepared to be tweaked. It is possible to speed the RPi up by overclocking its processor without voiding your warranty. How? Just edit the file /boot/config.txt and find this part:

##None
arm_freq=700
core_freq=250
sdram_freq=400
over_voltage=0

Now comment it out. That means adding “#” in front of every line, so it will be treated as text and not as a setting. It will look like this:

##None
#arm_freq=700
#core_freq=250
#sdram_freq=400
#over_voltage=0

And now uncomment this:

##Turbo
arm_freq=1000
core_freq=500
sdram_freq=500
over_voltage=6

After the next boot, your RPi will be able to go up to 1000 MHz. That means it is faster.

Other tweaks of /boot/config.txt

Since you don’t need any GPU memory (it takes care of shiny things like windows etc.), you can lower it in favor of the regular memory, which we do use.

gpu_mem=16
#gpu_mem_512=316
#gpu_mem_256=128
#cma_lwm=16
#cma_hwm=32
#cma_offline_start=16

Making RPi visible from outside

Now we need to configure access from the outside. You will need to configure your router and set up “port forwarding”. Remember the port from ssh? I told you to think about ports as tunnels. These tunnels are also handy when you need to find out what is on their end.

What we will do here is this: we want to be able to connect to our RPi server from anywhere on the internet.

Example? ssh -p 1234 bob@what.the.hell.is.here. You see? That is definitely not your local address (the one with 192.168...). It must be your “public” IP address (more about this in Domains, take a look there). But this public address points to your router (if you are lucky). Where does it go next?

With every request there is also a port. With the command ssh something, you are sending a username, a port (22 by default, if not stated otherwise) and an IP address. The IP address gets the request to the router. The router then takes the port and looks into its internal database. In this database are pairs: port - internal_ipaddress. In other words: if the router gets a request on a specific port (say, 1234) and it has an IP address in its database for that port, it redirects the request there. In our case, we need to forward the ports we want (for example 1234 for ssh) to the RPi. So find the port forwarding settings of your router (this might be helpful) and set up a forward from the port you chose for ssh to the RPi. You can check whether your port is open (that it accepts requests) here.
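The router’s database can be pictured as a simple lookup from public port to internal address. A toy model in shell (the addresses and ports here are made up for illustration):

```shell
# Toy model of a router's port-forwarding table (values are made up):
# a request arriving on a public port is forwarded to an internal address;
# anything unknown is dropped.
forward_lookup() {
    case "$1" in
        1234) echo "192.168.1.10" ;;   # SSH forwarded to the RPi
        80)   echo "192.168.1.10" ;;   # HTTP forwarded to the RPi
        *)    echo "drop" ;;
    esac
}

forward_lookup 1234   # prints: 192.168.1.10
```
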

From now on, you can ssh from anywhere.

Webserver

Setting up nginx

Similar to sshd handling ssh requests, nginx handles web requests: it is a web server! Install nginx with pacman -S nginx. For security reasons create a special user for it, for example with useradd -m -G wheel -s /usr/bin/zsh nginx, and also a group: groupadd webdata. Now create a folder for it, for example mkdir /var/www/, and make them the owners: chown nginx:webdata /var/www. Of course, enable and start nginx.

systemctl enable nginx. It will start after boot.

Now port forward port number 80 to RPi on your router.

Open /etc/nginx/nginx.conf; it can look like this:

user nginx;
worker_processes  1;

error_log  /var/log/nginx/error.log warn;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;
    server_names_hash_bucket_size 64;

    sendfile        on;

    keepalive_timeout  15;

    server{
        listen  80;
        server_name ~^xxx.xxx.xxx.xxx(.*)$;

        location / {
            root   /var/www/$1;
            index  index.html index.htm;
        }
    }

}

Next, create /var/www/test/index.html:

<html>
  <head>
    <title>Sample "Hello, World" Application</title>
  </head>
  <body bgcolor=white>

    <table border="0" cellpadding="10">
      <tr>
        <td>
          <h1>Sample "Hello, World" Application</h1>
        </td>
      </tr>
    </table>

    <p>This is the home page for the HelloWorld Web application. </p>
    <p>To prove that they work, you can execute either of the following links:
    <ul>
      <li>To a <a href="/">JSP page</a>.
      <li>To a <a href="/">servlet</a>.
    </ul>

  </body>
</html>

where xxx.xxx.xxx.xxx should be your public address. What this does: when you type “your.ip.address:80/test” in your browser, you should see the Hello World index page. Try it without the :80; it does the same! The default port for webpages is 80 (similar to 22 for SSH), so it can be omitted.

FTP

This will cover the easiest solution for FTP. Don’t use this configuration for real, just for test purposes. If you haven’t downloaded vsftpd yet, do it now with pacman -S vsftpd. Now we will pick a directory where all users will end up after connecting. Let it be /var/www/test. Now edit /etc/vsftpd.conf and add this line at the top:

anon_root=/var/www/test

and make sure that this line is uncommented:

anonymous_enable=YES

and just start it: systemctl start vsftpd.

Now we’ll tell nginx about it. Add this to the server blocks in /etc/nginx/nginx.conf:

server{
    listen  80;
    server_name ~^123.123.32.13(.*)$;
    location / {
        ssi on;
        root   /var/www/$1;
        index  index.html index.htm;
    }
}

where you need to change the IP address in the server_name directive to your public IP.

What does this little configuration do? It’s simple. Every time you type your IP address plus something behind it into your browser, it will take you to that “something” in /var/www/.

Example: I created an index.html in /var/www/test/index.html. I now type 123.123.32.13/test into my browser and voilà!

This nginx configuration isn’t necessary for our ftp example (it could be simpler), but I just like it...

You can now connect to ftp by typing this in your browser: ftp://your_ip_address or use your favorite FTP client (e.g. filezilla).

CAUTION: again, don’t use these settings as your defaults. There are great guides on the internet on how to grant access only to some users, add password protection etc.
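As a starting point for a more serious setup, here is a sketch of a stricter /etc/vsftpd.conf. These are standard vsftpd options, but check the vsftpd.conf man page before relying on them:

```
# Stricter vsftpd starting point (sketch):
anonymous_enable=NO      # no anonymous logins
local_enable=YES         # allow local system users to log in
write_enable=YES         # allow uploads for those users
chroot_local_user=YES    # jail users into their home directories
```
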

System analyzing and cleaning

Use your friend systemd-analyze. It will show you which units take a long time to load. Also, systemctl status is great for finding failed units.

Disable things that you don’t need

I guess you don’t use IPv6 (if you don’t know what it is, you don’t need it :D): systemctl disable ip6tables. In case you use sshguard, you also need to edit the file /usr/lib/systemd/system/sshguard.service and delete ip6tables from Wants. Like this:

Wants=iptables.service

Useful utilities

Simple to use, just install them and run:

  • iftop - for network usage
  • iotop - for disk usage

Torrents

Your RPi is maybe running 24/7, so why not use it for torrents? But how, when there is no GUI? It’s pretty simple. We will use transmission, a popular torrent client. Install it with pacman -S transmission-cli. The installation should create a new user and group called transmission. To check that, you can take a look into /etc/passwd and /etc/group. transmission will be run by systemd. Let’s see if its service file is configured properly. Check /usr/lib/systemd/system/transmission.service:

[Unit]
Description=Transmission BitTorrent Daemon
After=network.target

[Service]
User=transmission
Type=notify
ExecStart=/usr/bin/transmission-daemon -f --log-error
ExecReload=/bin/kill -s HUP $MAINPID

[Install]
WantedBy=multi-user.target

User=transmission is important here (for security reasons). The next thing we need to do is check whether transmission has a place where it will live. By default it is in /var/lib/transmission(-daemon). In this dir there should also be the config file settings.json, which holds its configuration. Edit it as you wish; it is covered here and here. Maybe you’ll need to forward ports as we did in previous chapters; you should manage that again without problems :) . Now we can run the transmission daemon with systemctl start transmission. You can give it commands using transmission-remote. The most useful ones (and that’s all I need to know and use :) ) are these:

  • transmission-remote <port> -a "magnetlink/url" - adds a torrent and starts downloading it
  • transmission-remote <port> -l - lists all torrents that are currently running

Files are stored in /var/lib/transmission/Downloads by default. It can be changed in the config file :) .
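For reference, a minimal fragment of settings.json with the options relevant here. The keys exist in transmission, but the values are just examples; don’t copy them blindly:

```json
{
    "download-dir": "/var/lib/transmission/Downloads",
    "rpc-port": 9091,
    "rpc-whitelist": "127.0.0.1"
}
```
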

Backups

For backups I chose rdiff-backup. It’s simple but works (almost) as expected. More about its usage can be found in its manual pages. For my example I’ll redirect you to the dir with configs in this repo. These are inserted into cron (you have it installed by default) to do an SSH backup every day at 4 AM. If I’m on the local network, I also back up to a disk on another PC.
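A daily 4 AM cron entry for such a backup could look like this. This is just a sketch: the source path and the backupuser@backup.host destination are made-up placeholders, and rdiff-backup’s remote target uses the host::path syntax from its man page:

```
# m h dom mon dow  command
0 4 * * * rdiff-backup /home/bob backupuser@backup.host::/backups/rpi
```
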

Final

That’s all for now! I will see if this gets used by someone, and then I will decide whether to continue.

Troubleshooting

  • RPi doesn’t boot - unplug everything from the USB ports (there may not be enough power to boot up and supply USB)

Simple GitHub repo and ReadTheDocs set up

I just wanted to set up a GitHub repository for this guide and found that it’s really unbearable to set up such a simple thing.

Here is short guide which should walk you through:

  1. Creating repository on GitHub
  2. Cloning it into your local machine
  3. Submitting changes from your local machine using SSH
  4. Getting changes from the repository on GitHub
  5. Generating documentation with ReadTheDocs and sphinx (http://sphinx-doc.org/)

Creating repository on GitHub

If you’ve found this guide, I guess you are intelligent enough to create an account on GitHub, so we’ll skip this step. Same with installing git and SSH on your machine. Use Google if you are lost.

To create a repository just go to your profile (e.g. https://github.com/your_username), click on Repositories and then click on New. It’s important to make the repository public (the default choice). You can also create README and LICENCE files; do it if you want.

When the repository is created, copy the clone URL, which can be found in the right panel of the repository view. In my case it’s:

https://github.com/kotrfa/test_repo

Cloning repository

Open terminal and choose folder where you’d like to clone your repository. In my case it is just my home directory. Go to this folder and run:

git clone https://github.com/kotrfa/test_repo

cd inside this folder. You should see LICENCE and README (in case you’ve created them), but it might be empty if you haven’t inserted anything into your repository through the browser yet.

Setting up git

Now we need to initialize git folder. To do this run:

git init

This will create a .git folder with all the important information. You don’t need to mess with it for now.

The next thing we have to do is set up our identity. To do this run:

git config --global user.email "your_email@your_mail.something"
git config --global user.name "your_username"

Let’s test it now by creating file:

touch test_file.txt

Now we have to put this file under git’s eye, so it watches whether the file changed and, if it did, updates it in the remote repository on GitHub. Do that with:

git add test_file.txt

and now tell git that this file is ready to be committed:

git commit test_file.txt -m "testing file"

The -m switch is for the message, and the string "testing file" is the message, which just gives some info about this commit.

Now we will send these changes to the remote repository on GitHub. It’s pretty easy:

git push

and type your username and password as it asks for it.

If this works, we can set up the connection so we don’t have to type our credentials every time.

Setting up SSH

For some reason it’s not really straightforward to set this up.

First you have to go to the GitHub website and open Account settings. Navigate to SSH Keys and click on Add SSH Key.

Title is whatever you want to call it. Key field is what is interesting.

Go again to console and type:

ssh-keygen -t rsa -C "your_email_on_GitHub@mail.something"

and choose a password if you want (or none).

It will generate an SSH key inside ~/.ssh. It has two parts: public (with a .pub ending) and private. The content of the public one must be copied into the Key field on the GitHub page.

Now, when you’d like to work with the GitHub repository, you have to run:

eval `ssh-agent -s`;ssh-add ~/.ssh/github_private_key

This should do the trick. Now you can git push as you wish without having to insert credentials.

Getting changes from the repository

To do that just use:

git pull

Generating docs with sphinx and RtD

Local sphinx generator

Install sphinx using pip and navigate into the git directory. Create a docs folder there and go inside. Run:

sphinx-quickstart

and set to your needs.

Add your source rst files into some directory inside docs, for example source. Now edit index.rst and list the source files there (paths without the .rst extension). In my case:

.. toctree::
   :maxdepth: 3

   source/intro
   source/nec_know
   source/domains_ip_servers
   source/ndg
   source/Arch
   source/RPi

where maxdepth says how many levels the TOC should have. Another useful directive is :glob:. In the previous example I could have just used source/* and it would load all .rst files inside the source dir. If you’d like to have the TOC numbered, just add :numbered:.
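Put together, a toctree using both directives could look like this (a sketch, assuming all your .rst files live in source/):

```rst
.. toctree::
   :maxdepth: 3
   :glob:
   :numbered:

   source/*
```
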

Now just run:

make html

and it will make HTML pages for you inside the build/html directory.

Go to the main git folder (in my case ~/test_repo) and add, commit and push all the changes:

git add --all
git commit -a -m "first docs"
git push

Read the Docs configuration

Go to the ReadTheDocs and create an account there.

Click on the dashboard and then on Import. Name your project and add your git URL into Repo. In my case it’s:

https://github.com/kotrfa/test_repo

The repository type is Git and the documentation type Sphinx Html. The rest is basically optional. Now just click on Create and wait.

Now you just have to wait :) . RtD will rebuild your project every time it detects changes. Usually it is immediate, but sometimes it takes several minutes.

Indices and tables