Lately I’ve been working on a contingency plan for when one of our production servers goes down. All our servers have failover IPs, so we can immediately put a new machine where we need it and continue working. But being able to switch machines is only useful if we can do it without losing any data, so after a little investigation I decided to go with MySQL replication in master-master mode.
The master-master replication works as a ring: if you have 3 servers (which is my case), the replication works as follows:
node 1 —> node 2
node 2 —> node 3
node 3 —> node 1
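The same pattern generalizes to any number of nodes: each node replicates from the previous one, and the first node closes the ring by replicating from the last. A quick sketch of that mapping (plain Python, just to illustrate the topology; it is not part of the MySQL setup):

```python
# Sketch: which node acts as the master for each node in an N-node
# replication ring, with nodes numbered 1..n.

def master_of(node: int, n: int) -> int:
    """Return the node that `node` replicates from in an n-node ring."""
    return n if node == 1 else node - 1

# With 3 nodes: node 1 replicates from node 3, node 2 from node 1,
# and node 3 from node 2 -- exactly the chain above.
ring = {node: master_of(node, 3) for node in (1, 2, 3)}
print(ring)  # {1: 3, 2: 1, 3: 2}
```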
Configuring MySQL for remote access
All the MySQL servers have to be accessible through an external IP.
$ sudo netstat -nplt | grep mysql
tcp        0      0 127.0.0.1:3306          0.0.0.0:*               LISTEN      1105/mysqld
If the highlighted IP is your loopback address, you have to edit your
/etc/mysql/my.cnf file and comment out the bind-address line.
# Instead of skip-networking the default is now to listen only on
# localhost which is more compatible and is not less secure.
# bind-address = 127.0.0.1
Restart MySQL and then check again:
$ sudo service mysql restart
$ sudo netstat -nplt | grep mysql
tcp        0      0 0.0.0.0:3306            0.0.0.0:*               LISTEN      23529/mysqld
Now mysql is listening on all network interfaces.
If you don’t already have a root password on your MySQL server, set one; it is a very bad idea to have a passwordless root account on a remotely accessible MySQL server.
$ mysqladmin -uroot password
Repeat all the above for all the servers you want to participate in your replication.
Configuring MySQL for replication
Edit /etc/mysql/my.cnf and set these values:
[...]
[mysqld]
server-id                = 1   # each server needs its own unique id
replicate-same-server-id = 0
auto-increment-increment = 3   # set this to the total number of nodes
auto-increment-offset    = 1   # each server needs its own offset
log-bin                  = /var/log/mysql/mysql-bin.log
binlog-do-db             = exampledb
replicate-do-db          = exampledb
log-slave-updates              # needed for chain replication
relay-log                = /var/lib/mysql/slave-relay.log
relay-log-index          = /var/lib/mysql/slave-relay-log.index
expire_logs_days         = 10
max_binlog_size          = 500M
[...]
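The two auto-increment settings are what keep the nodes from handing out clashing AUTO_INCREMENT ids: with auto-increment-increment equal to the node count and a distinct auto-increment-offset per node, each node draws its ids from a disjoint sequence. A small simulation of what the three nodes would generate (plain Python, not MySQL itself):

```python
# Simulate the AUTO_INCREMENT ids each node hands out when
# auto-increment-increment = 3 and auto-increment-offset = 1, 2 or 3.
INCREMENT = 3  # total number of nodes in the ring

def ids_for(offset: int, count: int) -> list[int]:
    """First `count` AUTO_INCREMENT values a node with this offset generates."""
    return [offset + INCREMENT * i for i in range(count)]

for offset in (1, 2, 3):
    print(f"node with offset {offset}: {ids_for(offset, 4)}")
# node with offset 1: [1, 4, 7, 10]
# node with offset 2: [2, 5, 8, 11]
# node with offset 3: [3, 6, 9, 12]
```

The sequences never overlap, so concurrent inserts on different nodes cannot collide on an auto-generated primary key.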
Restart mysql after the changes.
$ sudo service mysql restart
Now open the mysql client and type:
GRANT REPLICATION SLAVE ON *.* TO 'slaveuser'@'%' IDENTIFIED BY 'secret';
FLUSH PRIVILEGES;
quit;
You have to repeat this process for every node, adapting the config file for each one as explained in the comments near every field.
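For the three-node setup described here, the only lines that differ between the nodes' my.cnf files are these (a sketch; everything else, including auto-increment-increment = 3, stays identical on all nodes):

```ini
# node-1
server-id             = 1
auto-increment-offset = 1

# node-2
server-id             = 2
auto-increment-offset = 2

# node-3
server-id             = 3
auto-increment-offset = 3
```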
Setting up replication
I’ll assume that you already have a database with data that you want to replicate (which is my case). The database named exampledb exists on node-1 and contains data.
Open the mysql client in node-1 and type
mysql> SHOW MASTER STATUS;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000001 |      107 | exampledb    |                  |
+------------------+----------+--------------+------------------+
1 row in set (0.00 sec)
You will need this data later.
Now export the database to the other nodes using mysqldump.
$ mysqldump -uroot -p exampledb > exampledb.sql
You will have to manually create the exampledb database in the other nodes and import the data from the dump you have created.
Now open the mysql client in node-2 and type
CHANGE MASTER TO
  MASTER_HOST='node-1-ip',
  MASTER_USER='slaveuser',
  MASTER_PASSWORD='secret',
  MASTER_LOG_FILE='mysql-bin.000001',
  MASTER_LOG_POS=107;
Here you have to use the file and the position returned by node-1 before.
Now (still in node-2)
START SLAVE;
SHOW SLAVE STATUS\G
It is important that both Slave_IO_Running and Slave_SQL_Running have the value Yes in the output; otherwise something went wrong. Take a look at /var/log/syslog to find out about any errors.
If everything is OK you can proceed to repeat the steps for node-3 and node-1. Notice that the nodes are in a replication chain, so node-3 will have node-2 as its master, and node-1 will have node-3 as its master. You will also have to adapt the values for MASTER_LOG_FILE and MASTER_LOG_POS; to obtain them, just execute SHOW MASTER STATUS on node-2 and node-3 after you have imported the dumped data from node-1.
When you have completed all the steps on all your nodes, you are ready to test it: change anything on any node and watch the change propagate to all the others.
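A quick smoke test might look like this (illustrative SQL; the table name is just an example). On node-1:

```sql
-- Run with exampledb as the default database, since binlog-do-db
-- filters statements by the current database.
USE exampledb;
CREATE TABLE replication_test (
  id   INT AUTO_INCREMENT PRIMARY KEY,
  note VARCHAR(50)
);
INSERT INTO replication_test (note) VALUES ('written on node-1');
```

Then running SELECT * FROM replication_test; on node-2 and node-3 should return the row a moment later.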
I’m breaking the rules here, because each database will only ever be written to from one node at a time; in fact I’m using master-master where master-slave would be a better fit, but a slave can’t have two masters. To understand my situation better, I’ll explain how the nodes are used:
node-1 --> database-1
node-2 --> database-2, database-3
node-3 --> backups for database-1, database-2 and database-3
As node-3 can’t be a slave for node-1 and node-2 at the same time, my options are:
- Running two MySQL instances on node-3, one as a slave for node-1 and one as a slave for node-2
- A master-master schema with the three nodes, which also provides more redundancy, as I will have additional copies in a crash situation