Quick guide to software RAID1 with mdadm

This guide assumes you are creating a RAID array from new (not already in use) disks, so I will format each full disk as a single partition. As usual, you may need to adapt device names and partition sizes to your setup; I'm just copy-pasting from my terminal as a note to my future self.

Create a new partition table. Since I'm going to use 3TB disks I will use a GPT partition table, because DOS partition tables don't support partitions bigger than 2TB.

$ parted /dev/sdc mklabel gpt

Now I have to create a partition using the full size of the disk.

$ parted -a optimal /dev/sdc mkpart primary 0% 3000GB

And set the raid flag on it; otherwise mdadm will not recognise the partition as a member of an array.

$ parted /dev/sdc set 1 raid on

Repeat on the second drive.

$ parted /dev/sdd mklabel gpt
$ parted -a optimal /dev/sdd mkpart primary 0% 3000GB
$ parted /dev/sdd set 1 raid on
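
It doesn't hurt to double-check the result; parted's print command should show a single partition with the raid flag set on each drive.

$ parted /dev/sdc print
$ parted /dev/sdd print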

Now I can create the array. This is the most basic setup; I could add spare devices to take over in case one fails, but I don't have a spare device available.

$ mdadm --create --verbose /dev/md2 --level=mirror --raid-devices=2 /dev/sdc1 /dev/sdd1

Check that it has been created and is resyncing.

$ cat /proc/mdstat

Even while the array is being rebuilt it is already fully functional and you can use it safely. The first thing to do is to format it; I'm going to use ext4.

$ mkfs.ext4 /dev/md2

Once mkfs has finished, I can mount my /dev/md2 array as if it were a normal device, for example:

$ mount -t ext4 /dev/md2 /mnt
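
If you also want the filesystem mounted automatically at boot, an /etc/fstab entry along these lines should work (just a sketch: /mnt is the mount point used above, adjust it and the mount options to your setup, and note that the array has to be assembled at boot for this to work, see below).

/dev/md2  /mnt  ext4  defaults  0  2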

If you want your array to be assembled automatically at boot, add it to /etc/mdadm/mdadm.conf.

Ask mdadm for the details of your new array:

$ sudo mdadm --examine --scan
ARRAY /dev/md2 UUID=8dc92785:7c0616c8:fecdba62:0dcb047a
ARRAY /dev/md1 UUID=fb123aed:0146ba91:a4d2adc2:26fd5302

In this case /dev/md1 is an array I had previously configured and /dev/md2 is the new one. Add to /etc/mdadm/mdadm.conf any ARRAY line not already present in the config file.
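
A quick way to do that (a sketch, assuming the Debian/Ubuntu layout used here, where the config lives in /etc/mdadm/mdadm.conf) is to append the scan output to the file and rebuild the initramfs so the array is assembled at boot; just remember to remove any duplicated ARRAY lines afterwards.

$ sudo mdadm --examine --scan | sudo tee -a /etc/mdadm/mdadm.conf
$ sudo update-initramfs -u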

Using Gnome 3 with i3 window manager

My dual monitor setup using gnome-session + i3.

i3 is a tiling window manager, completely written from scratch. The target platforms are GNU/Linux and BSD operating systems, our code is Free and Open Source Software (FOSS) under the BSD license. i3 is primarily targeted at advanced users and developers.

fragment from i3wm.org

This post has been updated to work with Ubuntu 13.10

I started using i3 as my window manager eight months ago, and since then I've been using it on its own. At first it was a pain: all the things I was used to in a traditional desktop environment no longer worked as I expected. USB automounting, the sound indicator with media controls, wallpaper, screensaver… all gone. You can get some of those things back, but you have to install and configure them individually; they work pretty well, but they are still replacements and not everything works the way you are used to from other desktops.

A friend of mine who recently started working at the same place as me showed me his desktop setup: xmonad on top of a GNOME 3 session, replacing GNOME's own window manager with a tiling one but keeping all the GNOME goodies. Xmonad installs by default a GNOME session that uses xmonad as the window manager, in addition to a plain xmonad session. So I decided to replicate that for i3, which was dead easy.

If you are using Ubuntu with Unity, you will need to install GNOME 3 and i3. Do it by running:

$ sudo apt-get install gnome-session gnome-settings-daemon gnome-panel i3

then create the file /usr/share/xsessions/gnome-i3.desktop and put this inside

[Desktop Entry]
Name=GNOME with i3
Comment=A GNOME fallback mode session using i3 as the window manager.
Exec=gnome-session --session=i3
TryExec=gnome-session
Icon=
Type=Application

finally create another file in /usr/share/gnome-session/sessions/i3.session containing the following

[GNOME Session]
Name=gnome-i3
RequiredComponents=gnome-settings-daemon;gnome-panel;i3;

Now restart your session manager and select “GNOME with i3” as your session.

$ sudo service lightdm restart

If you don’t like having the Gnome desktop as a window inside your i3 you can disable it with the following command:

$ gsettings set org.gnome.desktop.background show-desktop-icons false

You can restore the setting by repeating the command, changing false to true.
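
That is, to bring the desktop icons back:

$ gsettings set org.gnome.desktop.background show-desktop-icons true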

Set custom location for Spotify data in Android

Recently I bought a 16GB microSD card to expand the storage capacity of my Android device; my intention was to be able to sync all my Spotify playlists for offline playing.

Sadly Spotify won't let you modify the path where its files are stored, but there is a way.

You must have a rooted phone and know how to use adb or a terminal emulator. Then, as root:

Open the Spotify config file in /data/data/com.spotify.mobile.android.ui/shared_prefs/spotify_preferences.xml

The file will look like this.

<?xml version='1.0' encoding='utf-8' standalone='yes' ?>
<map>
<string name="installation_id">qr8vlrvtatrokb0kmpj8gappn3</string>
</map>

You have to add one line:

<?xml version='1.0' encoding='utf-8' standalone='yes' ?>
<map>
<string name="installation_id">qr8vlrvtatrokb0kmpj8gappn3</string>
<string name="storage_location">/mnt/emmc</string>
</map>

Then you will have to close Spotify for the changes to apply: select Spotify in the application manager and tap “Force Close”.

Open Spotify again; it will now store its data in /mnt/emmc/Android/data/com.spotify.mobile.android.ui. You can delete the old files, which are in /mnt/sdcard/Android/data/com.spotify.mobile.android.ui.
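
If you prefer to make the edit from a computer, a rough sketch using adb would be the following (assuming adbd runs as root on your device so it can reach /data/data, and that the paths above match your installation); afterwards, force-close Spotify as described above.

$ adb pull /data/data/com.spotify.mobile.android.ui/shared_prefs/spotify_preferences.xml
# edit spotify_preferences.xml locally, adding the storage_location line
$ adb push spotify_preferences.xml /data/data/com.spotify.mobile.android.ui/shared_prefs/spotify_preferences.xml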

Source: Spotify support forums

MySQL circular master-master replication

Lately I've been working on a contingency plan for when one of our production servers goes down. All our servers have failover IPs, so we can immediately put a new machine where we need it and continue working. If we have the ability to switch machines, we have to be able to do it without losing any data, so after a little investigation I decided to go with MySQL replication in master-master mode.

Master-master replication works as a ring; if you have 3 servers (which is my case) the replication will work as follows:

node 1 --> node 2
node 2 --> node 3
node 3 --> node 1

Configuring MySQL for remote access

All the MySQL servers have to be accessible through an external IP.

$ sudo netstat -nplt | grep mysql
tcp  0  0  127.0.0.1:3306  0.0.0.0:*  LISTEN  1105/mysqld

If the IP shown is your loopback address (127.0.0.1), you have to edit your /etc/mysql/my.cnf file and comment out the bind-address line.

# Instead of skip-networking the default is now to listen only on
# localhost which is more compatible and is not less secure.
# bind-address = 127.0.0.1

Restart MySQL and then check again:

$ sudo service mysql restart
$ sudo netstat -nplt | grep mysql
tcp  0  0 0.0.0.0:3306 0.0.0.0:* LISTEN 23529/mysqld

Now mysql is listening on all network interfaces.

If you don't already have a root password on your MySQL server, set one; it is a very bad idea to have a passwordless root account on a remotely accessible MySQL server.

$ mysqladmin -uroot password

Repeat all of the above on every server you want to take part in the replication.

Configuring MySQL for replication

Open /etc/mysql/my.cnf and set these values.

[...]
[mysqld]
server-id                = 1 # each server needs its own unique id
replicate-same-server-id = 0
auto-increment-increment = 1 # you should set this to the total number of nodes*
auto-increment-offset    = 1 # each server needs its own offset*
log-bin                  = /var/log/mysql/mysql-bin.log
binlog-do-db             = exampledb
replicate-do-db          = exampledb
log-slave-updates        # needed for chain replication
relay-log                = /var/lib/mysql/slave-relay.log
relay-log-index          = /var/lib/mysql/slave-relay-log.index
expire_logs_days         = 10
max_binlog_size          = 500M
[...]

Restart mysql after the changes.

$ sudo service mysql restart

Now open the mysql client and type:

GRANT REPLICATION SLAVE ON *.* TO 'slaveuser'@'%' IDENTIFIED BY 'secret';
FLUSH PRIVILEGES;
quit;

You have to repeat this process on every node, adapting the config file for each one as explained in the comments next to each field (for example, node-2 gets server-id = 2 and auto-increment-offset = 2, node-3 gets 3 for both).

Setting up replication

I’ll make the assumption that you already have a database with data which you want to replicate (which is my case). The database named exampledb exists in node-1 and contains data.

Open the mysql client in node-1 and type

mysql> SHOW MASTER STATUS;
 +------------------+----------+--------------+------------------+
 | File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
 +------------------+----------+--------------+------------------+
 | mysql-bin.000001 |      107 | exampledb    |                  |
 +------------------+----------+--------------+------------------+
 1 row in set (0.00 sec)

You will need this data later.

Now export the database with mysqldump so it can be imported on the other nodes.

$ mysqldump -uroot -p exampledb > exampledb.sql

You will have to manually create the exampledb database on the other nodes and import the data from the dump you have just created.
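
Something along these lines should do it (just a sketch: node-2 stands for the node's hostname or IP, and I'm assuming you can copy the dump over SSH):

$ scp exampledb.sql node-2:
$ ssh node-2
$ mysql -uroot -p -e "CREATE DATABASE exampledb;"
$ mysql -uroot -p exampledb < exampledb.sql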

Now open the mysql client in node-2 and type

CHANGE MASTER TO MASTER_HOST='node-1-ip',
MASTER_USER='slaveuser',
MASTER_PASSWORD='secret',
MASTER_LOG_FILE='mysql-bin.000001',
MASTER_LOG_POS=107;

Here you have to use the file and position that SHOW MASTER STATUS returned on node-1 earlier.

Now (still in node-2)

START SLAVE;
SHOW SLAVE STATUS\G

It is important that both Slave_IO_Running and Slave_SQL_Running show the value Yes in the output; otherwise something went wrong, and you should take a look at /var/log/syslog to find out what.

If everything is OK you can repeat the steps for node-3 and node-1. Notice that the nodes form a replication chain, so node-3 will have node-2 as its master, and node-1 will have node-3 as its master. You will also have to adapt the values of MASTER_LOG_FILE and MASTER_LOG_POS; to obtain them, just execute SHOW MASTER STATUS on node-2 and node-3 after you have imported the dumped data from node-1.

When you have completed all the steps on all your nodes you are ready to test it: change anything on any node and watch the change propagate to all the others.
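
A quick smoke test could look like this (a sketch; repl_test is just a throwaway table name, and remember that only exampledb is replicated because of binlog-do-db). On node-1:

$ mysql -uroot -p exampledb -e "CREATE TABLE repl_test (id INT); INSERT INTO repl_test VALUES (1);"

And then on node-2 and node-3:

$ mysql -uroot -p exampledb -e "SELECT * FROM repl_test;"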

*Personal notes

I'm breaking the rules because each database will only ever be written to from one node at a time; in fact I'm using master-master where master-slave would fit better, but a slave can't have two masters. To explain my situation better, this is how the nodes are laid out:

node-1 --> database-1
node-2 --> database-2
           database-3
node-3 --> backups for database-1, database-2 and database-3

As node-3 can't be a slave of both node-1 and node-2 at the same time, my options are:

  •  Having two MySQL instances running on node-3, one as a slave of node-1 and one as a slave of node-2
  •  A master-master scheme across the three nodes, which also provides more redundancy, as I will have additional copies of each database if something crashes

Creating time lapse videos with mencoder

In the last post I wrote about the intervalometer I was using to create time lapse videos, but I forgot to share how I turn the camera files into a video.

There are lots of tools you can use to assemble a time lapse, like Adobe Premiere or Final Cut, but there are also simpler yet effective ways to do it, such as using the open source mencoder utility.

The first thing you will need is a text file with the filenames of all the photos you want to use as frames. On a Linux box you can use ls -1tr > frames.txt (warning: the first parameter is a “one”, not an “l”). This will create a file frames.txt listing all the files in the current directory, one per line, ordered by modification time, which for camera files usually matches the time the photo was taken. We will use this file to tell mencoder which files to use to create the video.
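
If the directory contains anything besides the photos, you can restrict the listing to the image files; assuming your camera names them with a .JPG extension:

$ ls -1tr *.JPG > frames.txt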

The next thing is to run mencoder to create the video.

$ mencoder -nosound -ovc lavc -lavcopts \
vcodec=mpeg4:mbd=2:trell:autoaspect:vqscale=3 \
-vf scale=1920:1080 -mf type=jpeg:fps=20 \
mf://@frames.txt -o time-lapse.avi

With this command we are asking mencoder to take all the photos from frames.txt and make a full HD video from them at 20 frames per second. You can adjust the frame rate or the quality of the video by modifying the fps and vqscale parameters respectively (lower vqscale values mean higher quality).

Here you can see a demo created as explained above: it's me and a friend doing some clean-up in my bedroom after doing some work.

Arduino intervalometer

These days I've been on holiday, making some progress on a personal project I have with a friend: we are trying to build a camera dolly to record time lapse videos with our DSLR cameras. The project is powered by an Arduino board, but as my experience with electronics is nil, I decided to make a little test project first as an introduction.

The result is a very simple intervalometer built with a few components: a potentiometer to adjust the time between shots, an optoisolator to trigger a Canon camera by cable, and an IR LED to trigger a Nikon camera.
The schematic is the following.

And here is the source code running on the Arduino. I should rewrite it to not make use of the delay function.

#include "NikonRemote.h"

const int potPin = A0;
const int ledPin = 13;
const int irPin = 8;
const int jackPin = 12;

// config values
const int minDelay = 100;
const int blinkLength = 150;
const int canonPulseLength = 40;

NikonRemote camera(irPin);

void setup(){
  Serial.begin(9600);
  pinMode(ledPin, OUTPUT);
  pinMode(jackPin, OUTPUT);
}

void loop() { 
  // scale the pot reading (0-1023) to a delay in milliseconds, up to roughly 20 seconds
  int val = (analogRead(potPin) * 20) - blinkLength;
  if (val < minDelay) {
    val = minDelay;
  }
  
  Serial.print("Analog read: ");
  Serial.println(val, DEC);

  snap();
  delay(val);
}

void snap() {
  // blink status led
  digitalWrite(ledPin, HIGH);
  
  // snap nikon
  camera.Snap();
  
  // snap canon
  digitalWrite(jackPin, HIGH);
  delay(canonPulseLength);
  digitalWrite(jackPin, LOW);
  
  // end blink
  delay(blinkLength - canonPulseLength);
  digitalWrite(ledPin, LOW);
}

Make flash plugin work on Firefox 4 and Ubuntu 64 bits

On 64-bit Ubuntu, Flash support in Firefox 4 is broken after a manual upgrade; the reason is that nspluginwrapper doesn't seem to work well with the new Firefox.

The solution is pretty simple: download the Flash plugin from Adobe here (select the .tar.gz Linux option), unpack it, and replace the file npwrapper.libflashplayer.so in /var/lib/flashplugin-installer with the unpacked libflashplayer.so.

$ tar -xvzf install_flash_player_10_linux.tar.gz
$ sudo cp libflashplayer.so /var/lib/flashplugin-installer/npwrapper.libflashplayer.so

Enjoy.

Open spotify links with native linux client

If you are tired of having to copy and paste every Spotify link into the search field to open it, you can configure Firefox and Chrome to open them automatically.

Just copy and paste these lines into a terminal and enjoy (browser restart required).

$ gconftool-2 -t string -s /desktop/gnome/url-handlers/spotify/command  "spotify -uri %s"
$ gconftool-2 -t bool -s /desktop/gnome/url-handlers/spotify/needs_terminal false
$ gconftool-2 -t bool -s /desktop/gnome/url-handlers/spotify/enabled true

Moving bulma to wordpress

Last Saturday some Bulma members met to start porting the old website to a new WordPress-based site. Right now the site runs on custom software written about 10 years ago in PHP 3. This software runs fine, but we have some new ideas to implement and it would be a lot easier to do so if we were running on a newer environment.

Years ago a lot of Bulma members used to write posts on Bulma, but now only a few keep posting. The reason is that most of the early users now have their own blogs and write there, most of the time forgetting about Bulma. So we are planning to make the new Bulma site a mix between a blog and a planet, allowing users to add their feeds and share all or some of the posts they write whenever they have something to say about free software, coding, new technologies, etc.

To export all the content we decided to generate a fake WordPress export XML, and we built it with pure SQL (with lots of concats). All the content was exported fine, but there is a big problem: the old site allowed users to post HTML without restrictions and a lot of people abused it, so the articles won't look good in the new template. The work for next week is to post-process all the imported data, removing style and font tags so the articles look more uniform.

I hope we can make the new version publicly available soon, so we can get feedback from the remaining staff and the rest of the users.

coding after the pizzas

From left to right, Joanmi, René and me having some geek fun.