MySQL Master - Master

This blog post is the first part of a series of three tutorials dedicated to Rancher High-Availability. The first step towards HA is a fault-tolerant database.

By default Rancher comes with an embedded HSQLDB database. It's fine for testing purposes but not suited to a production environment.

This tutorial explains how to set up a MySQL Master - Master replicated database, secure its access, and connect Rancher to it.


Prerequisites

You need at least two servers for the MySQL database and one for Rancher (can be the same as one of the database servers):

  • MySQL Server One available at IP address 1.1.1.1,
  • MySQL Server Two available at IP address 2.2.2.2,
  • Rancher available at IP address 3.3.3.3.

All servers must have Docker installed.

MySQL Master - Master

MySQL is an open-source relational database management system, and one way to achieve High Availability is Master - Master replication.

Master - Master replication means that we run two servers at the same time, both synchronized with each other:

You can write data to either server and it will be replicated to both. If one of the MySQL servers goes down, the other can take over and no data is lost.

Server One

Configuring the Database

Connect to the first server using SSH and create two folders:

mkdir ~/mysql
mkdir ~/mysql/datadir

Create the file ~/mysql/my.cnf with the following content:

[mysqld]
# Default configuration
skip-host-cache
skip-name-resolve
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
secure-file-priv=/var/lib/mysql-files
user=mysql
symbolic-links=0
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

# Replication configuration
server-id=1
log_bin
# The cattle here is our database name
binlog_do_db=cattle

Note:

cattle is the conventional name for the Rancher database, but you can use any name you want.

Starting the Database

Start the MySQL server using Docker:

docker run --name mysql \
--restart=always \
-e MYSQL_ROOT_PASSWORD=rootpwd \
-e MYSQL_DATABASE=cattle \
-e MYSQL_USER=cattle \
-e MYSQL_PASSWORD=cattle \
-v ~/mysql/datadir:/var/lib/mysql \
-v ~/mysql/my.cnf:/etc/my.cnf \
-p 3306:3306 \
-d mysql/mysql-server:5.7

The -e parameters are environment variables passed to the MySQL Docker container. They set the MySQL root user password (here rootpwd) and the information needed to create a first database (named cattle, accessed by the user cattle with password cattle).

The -v options are used to map a host folder to a container folder (the host being the machine where the Docker container runs). We map our data directory so we don't lose data when the container stops. We also map the configuration file my.cnf to add our replication configuration.

The -p parameter tells Docker to expose port 3306 so that our database is accessible to the two other servers.
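To check that the port is actually reachable from another machine, you can, for example, start a throwaway client container from Server Two or the Rancher host (this verification step is my addition, not part of the original setup):

docker run -it --rm mysql/mysql-server:5.7 \
mysql -h 1.1.1.1 -u cattle --password=cattle cattle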

Database requirements

According to the Rancher documentation our servers need:

  • 1GB RAM,
  • To run MySQL 5.7 with the Barracuda file format, where the default ROW_FORMAT is Dynamic,
  • To have a max_connections setting set to more than 150.

If we check the MySQL documentation about the InnoDB file format, we can see that Barracuda with one file per table has been the default since version 5.7.7.

When run using the Docker image mysql/mysql-server:5.7, I got version 5.7.16 (select version();).

After importing the Rancher data, you can validate that the ROW_FORMAT is Dynamic using the following SQL query (that was the case for me):

SELECT `table_name`, `row_format` FROM `information_schema`.`tables` WHERE `table_schema`=DATABASE();

Regarding the max_connections setting, you can check its value with the query show variables like "max_connections"; and update it with set global max_connections = 200;. Once again, I had a default value of 151, which meets the requirement.

So we can keep the default configuration values and everything will run fine.
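If you ever do need to raise max_connections permanently (a set global change is lost when the container restarts), one option is to add it to ~/mysql/my.cnf under the [mysqld] section; the value 200 below is just an example:

# Keep the connection limit above the 150 required by Rancher
max_connections=200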

Securing the database

The -p 3306:3306 option exposes our database to the whole world! You may want to customize IPTables to restrict access to the two other servers only.

Type iptables -L --line-numbers to list the existing firewall rules. As you can see, Docker adds its own rules:

Chain DOCKER (1 references)
 num  target     prot opt source               destination
 1    ACCEPT     tcp  --  anywhere             172.17.0.2           tcp dpt:mysql

The Docker documentation about networking states that:

Docker will not delete or modify any pre-existing rules from the DOCKER filter chain. This allows the user to create in advance any rules required to further restrict access to the containers.

And that's exactly what we are going to do! Run the following commands to restrict access to port 3306:

iptables -I DOCKER -i eth0 -s 2.2.2.2 -p tcp --dport 3306 -j ACCEPT
iptables -I DOCKER -i eth0 -s 3.3.3.3 -p tcp --dport 3306 -j ACCEPT
iptables -I DOCKER 3 -i eth0 -p tcp --dport 3306 -j DROP

The number 3 on the last line is the rule's position in the chain, so adjust it if you want to allow more than two IPs, as shown below.
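For example, to also allow a hypothetical third IP 4.4.4.4, the DROP rule moves down to position 4:

iptables -I DOCKER -i eth0 -s 2.2.2.2 -p tcp --dport 3306 -j ACCEPT
iptables -I DOCKER -i eth0 -s 3.3.3.3 -p tcp --dport 3306 -j ACCEPT
iptables -I DOCKER -i eth0 -s 4.4.4.4 -p tcp --dport 3306 -j ACCEPT
iptables -I DOCKER 4 -i eth0 -p tcp --dport 3306 -j DROP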

List the IPTables rules again (iptables -L --line-numbers):

Chain DOCKER (1 references)
num  target     prot opt source               destination
1    ACCEPT     tcp  --  2.2.2.2        anywhere             tcp dpt:mysql
2    ACCEPT     tcp  --  3.3.3.3        anywhere             tcp dpt:mysql
3    ACCEPT     tcp  --  anywhere             172.17.0.2           tcp dpt:mysql
4    DROP       tcp  --  anywhere             anywhere             tcp dpt:mysql

There is an issue here: the DROP from anywhere comes after the ACCEPT from anywhere, so the database is not secured at all! Modifications to the IPTables rules must be made before the Docker container starts. To fix the ordering, simply restart the MySQL container: docker restart mysql.

Now our firewall rules are in the proper order:

Chain DOCKER (1 references)
num  target     prot opt source               destination
1    ACCEPT     tcp  --  2.2.2.2        anywhere             tcp dpt:mysql
2    ACCEPT     tcp  --  3.3.3.3        anywhere             tcp dpt:mysql
3    DROP       tcp  --  anywhere             anywhere             tcp dpt:mysql
4    ACCEPT     tcp  --  anywhere             172.17.0.2           tcp dpt:mysql

Note:

The modifications made to IPTables are not persistent! They are lost if the server restarts. In a production environment you should reapply them on startup.
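A minimal sketch, assuming your distribution provides iptables-save and iptables-restore (boot integration varies, e.g. the iptables-persistent package on Debian/Ubuntu):

# Save the current rules to a file
iptables-save > /etc/iptables.rules
# Reapply them at boot, e.g. from a startup script
iptables-restore < /etc/iptables.rules

Keep in mind that Docker rewrites part of the firewall when the daemon starts, so verify the rule ordering after a reboot.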

Replication configuration

To configure replication we need to create a dedicated MySQL user.

Connect to the database container using the command docker exec -ti mysql mysql -u root --password=rootpwd. The mysql> prompt appears and you can now execute SQL queries.

Execute the following queries to create the replication user:

create user 'replicator'@'%' identified by 'replicatorpwd';
grant replication slave on *.* to 'replicator'@'%';
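As an optional sanity check (my addition), you can list the grants; the REPLICATION SLAVE privilege on *.* should appear:

show grants for 'replicator'@'%';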

Restart the MySQL Docker container (docker restart mysql) and display the master status:

docker exec -ti mysql mysql -u root --password=rootpwd -e "show master status;"

+-------------------------+----------+--------------+------------------+-------------------+
| File                    | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+-------------------------+----------+--------------+------------------+-------------------+
| cb3c30d50792-bin.000001 |      599 | cattle       |                  |                   |
+-------------------------+----------+--------------+------------------+-------------------+

Keep track of the displayed information as we will need it to configure the second server.

Server Two

The second database server configuration is the same as Server One except:

  • The ~/mysql/my.cnf configuration file must have server-id=2
  • The IPTables rules are:
iptables -I DOCKER -i eth0 -s 1.1.1.1 -p tcp --dport 3306 -j ACCEPT
iptables -I DOCKER -i eth0 -s 3.3.3.3 -p tcp --dport 3306 -j ACCEPT
iptables -I DOCKER 3 -i eth0 -p tcp --dport 3306 -j DROP
  • The output of show master status; is different and must also be saved, in order to activate the replication from Server Two to Server One.

Activating the replication

Now that our two servers are running and configured, we only need to execute a few MySQL queries to activate the replication.

On both servers, connect to MySQL as the root user (docker exec -ti mysql mysql -u root --password=rootpwd) and execute the following commands:

STOP SLAVE;
CHANGE MASTER TO MASTER_HOST = 'x.x.x.x', MASTER_USER = 'replicator', MASTER_PASSWORD = 'replicatorpwd', MASTER_LOG_FILE = 'xxxxx-bin.000001', MASTER_LOG_POS = 123;
START SLAVE;
  • MASTER_LOG_FILE = 'xxxxx-bin.000001' is the File value displayed when executing show master status; on the other server.
  • MASTER_LOG_POS = 123 is the Position value.
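For example, on Server Two, pointing at Server One with the status values shown earlier (your File and Position will differ):

STOP SLAVE;
CHANGE MASTER TO MASTER_HOST = '1.1.1.1', MASTER_USER = 'replicator', MASTER_PASSWORD = 'replicatorpwd', MASTER_LOG_FILE = 'cb3c30d50792-bin.000001', MASTER_LOG_POS = 599;
START SLAVE;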

Note:

If replication has not been configured yet, the first STOP SLAVE; may return an error or a warning telling you that the slave is not running; this does not prevent the replication from operating.

Restart both databases using docker restart mysql.

Testing the replication

Connect as the cattle user: docker exec -ti mysql mysql -u cattle --password=cattle cattle on both servers.

Create a table and insert values on Server One:

create table example (data VARCHAR(100));
insert into example values ('test');

List the tables and the example table content on Server Two:

show tables;
select * from example;

You should see the modifications replicated on Server Two. You can also insert values from Server Two and check that they are replicated the other way.

Finally drop the table: drop table example;
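Besides this manual test, you can inspect the replication threads on each server; both Slave_IO_Running and Slave_SQL_Running should report Yes:

docker exec -ti mysql mysql -u root --password=rootpwd -e "show slave status\G"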

Running Rancher

Rancher is an open source platform for deploying and managing Docker containers in production. At OctoPerf we use it to dynamically start load generators in the Cloud (on both AWS and Digital Ocean).

Exporting the database

You may already have Rancher installed. In that case, you need to copy its existing internal database into the MySQL setup you just created.

  1. Open the Rancher UI (http://3.3.3.3:8080) and go to the Admin > High-Availability page,
  2. Click on the Export Database button (Step 1),
  3. A SQL dump file is downloaded to your computer,
  4. Upload it to one of your database servers (an scp example follows this list),
  5. Extract it into ~/mysql/datadir/ so it can be accessed from within the Docker container.
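For example, assuming SSH access as root to Server One and the dump file name used in the import step below:

scp rancher-mysql-dump.sql root@1.1.1.1:~/mysql/datadir/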

Note:

If your Rancher server is not secured, you can also download the dump directly to a database server using the command:

wget http://3.3.3.3:8080/v1/haconfigs/haconfig/dbdump

To import it:

  1. Open a shell session on the MySQL Docker container: docker exec -ti mysql bash,
  2. Head to the data directory cd /var/lib/mysql,
  3. Import the dump file: mysql -u cattle --password=cattle cattle < rancher-mysql-dump.sql.

Connect to the database and show the tables:

mysql> show tables;
+-----------------------------------------------+
| Tables_in_cattle                              |
+-----------------------------------------------+
| DATABASECHANGELOG                             |
| DATABASECHANGELOGLOCK                         |
| account                                       |
| agent                                         |
| agent_group                                   |
| audit_log                                     |
| auth_token                                    |
| backup                                        |
...

You should see the Rancher tables.

Rancher with an external database

Running Rancher with an external database is probably the easiest step of this tutorial. Only one command and your server is available at http://3.3.3.3:8080:

docker run -d -p 8080:8080 \
--restart=unless-stopped \
-e CATTLE_DB_CATTLE_MYSQL_HOST=1.1.1.1 \
-e CATTLE_DB_CATTLE_MYSQL_PORT=3306 \
-e CATTLE_DB_CATTLE_MYSQL_NAME=cattle \
-e CATTLE_DB_CATTLE_USERNAME=cattle \
-e CATTLE_DB_CATTLE_PASSWORD=cattle \
-v /var/run/docker.sock:/var/run/docker.sock \
rancher/server
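The command above does not name the container, so to confirm that Rancher reached the external database you can locate it with docker ps and follow its logs (the container id is a placeholder):

# Find the Rancher container
docker ps --filter ancestor=rancher/server
# Follow its logs; database connection errors would show up here
docker logs -f <container-id>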

Conclusion

Using a replicated external database is a first step towards High Availability. But it's not really HA yet!

  • Since Rancher only communicates with one MySQL server, if that server goes down Rancher goes down too. We need a fail-over mechanism here; a Galera database cluster could be one.
  • Also, we only have one Rancher server. We need to install a multi-node Rancher.