MySQL 5.1 NDB Cluster Replication on EC2

As I mentioned previously, once the MySQL Cluster was set up, the next thing to test was MySQL NDB Cluster Replication, i.e. a Master cluster replicating to a Slave cluster.

You would use this type of setup in situations where you want availability and redundancy: even if all of your (n+1) management nodes, data nodes and SQL/API nodes die at one site, you can redirect traffic or load to the replicated site.

Essentially the (row based) replication is done via a pair of NDB SQL nodes, with the master and slave configuration similar to the plain vanilla variety of MySQL replication. I would suggest getting that working first before attempting this, just so you gain some familiarity with how MySQL replication works.

The main differences from normal replication were the creation of the Slave NDB Cluster from a clone (backup) of the Master, and the way you check how far the NDB apply has progressed, so you can position the slave correctly.
The steps for this setup are covered in the MySQL documentation.

I used two pairs of multi-purpose nodes (Management, Data and MySQL combined) to set up the MySQL 5.1 NDB Cluster replication, i.e. a 2-data-node Master cluster and a 2-data-node Slave cluster.
Here are my high level steps:

  1. Configure /etc/hosts, adding the mysql-ndb-master and mysql-ndb-slave IP addresses.
  2. Edit configure.ini to reflect the number of data nodes and MySQL nodes required.
  3. On either Master management node, issue the START BACKUP command to create a backup.
  4. Copy the backup files from all Master NDB data nodes to one of the Slave management nodes.
  5. Restore the NDB database on the Slave with ndb_restore, in data node_id order.
  6. Configure the Master /etc/my.cnf and restart the Master MySQL node.
  7. Configure the Slave /etc/my.cnf and restart the Slave MySQL node.
  8. Verify that SHOW MASTER STATUS and SHOW SLAVE STATUS are correct.
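
The backup/restore part of the steps above follows a fixed pattern, so it can be sketched as a small helper that builds the per-node ndb_restore command lines. This is just an illustration of the procedure, not a ready-made script; the hostnames, backup id and paths are the ones from my setup.

```python
def build_restore_commands(node_ids, backup_id, backup_path, connectstring):
    """Generate one ndb_restore invocation per data node.

    Mirrors the procedure used in this post: the first node also restores
    the table metadata (-m), the last node also restores the epoch info
    (-e), and every node restores its own data (-r).
    """
    commands = []
    for i, node_id in enumerate(node_ids):
        flags = ["-n", str(node_id), "-b", str(backup_id)]
        if i == 0:
            flags.append("-m")   # restore metadata once, on the first node
        if i == len(node_ids) - 1:
            flags.append("-e")   # restore epoch info on the last node
        flags += ["-r", backup_path]
        commands.append("ndb_restore -c %s %s" % (connectstring, " ".join(flags)))
    return commands

for cmd in build_restore_commands([1, 2], 1, "/mnt/mysql-cluster/data/BACKUP/",
                                  "mysql-ndb-slave:1186"):
    print(cmd)
```

For my 2-data-node slave this produces exactly the two ndb_restore commands shown later in the post.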

Once that was done, the Cluster replication was up and running. I then ran mysqlslap on the Master SQL node as a quick test, and replication was fine under light load (more on that later).

As usual, I have dumped my terminal output below so that you can see exactly what I did while setting up.

Have Fun

Paul

Previous articles in the series:
MySQL 5.1 NDB Cluster – Part 1.
MySQL 5.1 NDB Cluster – Part 2.
MySQL 5.1 NDB Cluster – Testing redundancy.
MySQL 5.1 NDB Cluster – sample configuration files.



ndb_mgm> show
Connected to Management Server at: mysql-ndb_mgmd1:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=1 @10.255.39.130 (Version: 5.1.20, Nodegroup: 0, Master)
id=2 @10.255.7.34 (Version: 5.1.20, Nodegroup: 0)

[ndb_mgmd(MGM)] 2 node(s)
id=3 @10.255.39.130 (Version: 5.1.20)
id=4 @10.255.7.34 (Version: 5.1.20)

[mysqld(API)] 2 node(s)
id=5 @10.255.7.34 (Version: 5.1.20)
id=6 @10.255.39.130 (Version: 5.1.20)

Original 2 data node configure.ini file

[root@domU-12-31-35-00-20-74 ~]# cat /usr/local/mysql-cluster/EC2-configure-2node.ini
[ndbd default]
NoOfReplicas= 2
MaxNoOfConcurrentTransactions= 64
MaxNoOfConcurrentOperations= 128
DataMemory= 256M
IndexMemory= 256M
Diskless= false
TimeBetweenWatchDogCheck= 30000
DataDir= /mnt/mysql-cluster/data
MaxNoOfOrderedIndexes= 500
MaxNoOfAttributes= 1000
TimeBetweenGlobalCheckpoints= 500
NoOfFragmentLogFiles= 4
FragmentLogFileSize=12M
DiskPageBufferMemory= 64M
ODirect= 1
# the following parameters just function as a small regression
# test that the parameter exists
InitialNoOfOpenFiles= 27

#
# Increase timeouts to cater for slow test-machines
# (possibly running several tests in parallel)
#
HeartbeatIntervalDbDb= 30000
HeartbeatIntervalDbApi= 30000
#TransactionDeadlockDetectionTimeout= 7500

[ndbd]
HostName= mysql-ndbd1 # hostname is a valid network address

[ndbd]
HostName= mysql-ndbd2 # hostname is a valid network address

[ndb_mgmd]
HostName= mysql-ndb_mgmd1 # hostname is a valid network address
DataDir= /mnt/mysql-cluster/mgmd-data #
[ndb_mgmd]
HostName= mysql-ndb_mgmd2 # hostname is a valid network address
DataDir= /mnt/mysql-cluster/mgmd-data #
#PortNumber= CHOOSE_PORT_MGM

[mysqld]

[mysqld]

Create the replication user on the Master MySQL node

create user myslave IDENTIFIED BY '53cr37';

GRANT REPLICATION SLAVE
ON *.* TO 'myslave'@'mysql-ndb-slave'
IDENTIFIED BY '53cr37';

mysql> SET GLOBAL binlog_format = 'ROW';
Query OK, 0 rows affected (0.01 sec)


mysql> GRANT REPLICATION SLAVE
-> ON *.* TO 'myslave'@'mysql-ndb-slave'
-> IDENTIFIED BY '53cr37';
Query OK, 0 rows affected (0.00 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

Hosts file on each Master and Slave node

cat /etc/hosts

127.0.0.1 localhost
# MySQL Cluster data nodes
10.255.46.240 mysql-ndbd1
10.255.19.225 mysql-ndbd2
# MySQL Cluster mgm nodes
10.255.46.240 mysql-ndb_mgmd1
10.255.19.225 mysql-ndb_mgmd2
# MySQL replication
10.255.46.240 mysql-ndb-master
10.255.11.159 mysql-ndb-slave

Master MySQL node /etc/my.cnf file

[root@domU-12-31-35-00-29-02 BACKUP]# cat /etc/my.cnf
# Options for mysqld process:
[MYSQLD]
ndbcluster # run NDB storage engine
ndb-connectstring=mysql-ndb_mgmd1,mysql-ndb_mgmd2 # location of management server
log-bin=row
server-id=1

# Options for ndbd process:
[MYSQL_CLUSTER]
ndb-connectstring=mysql-ndb_mgmd1,mysql-ndb_mgmd2 # location of management server


Slave /etc/my.cnf file

cat /etc/my.cnf

# Options for mysqld process:
[MYSQLD]
ndbcluster # run NDB storage engine
ndb-connectstring=mysql-ndb_mgmd1,mysql-ndb_mgmd2 # location of management server
master-host='mysql-ndb-master'
master-port=3306
master-user='myslave'
master-password='53cr37'
server_id=2

# Options for ndbd process:
[MYSQL_CLUSTER]
ndb-connectstring=mysql-ndb_mgmd1,mysql-ndb_mgmd2 # location of management server


Checking slave status

mysql -u root -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.1.20-beta MySQL Community Server (GPL)

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> show slave status\G
Empty set (0.00 sec)

mysql> CHANGE MASTER TO
-> MASTER_HOST='mysql-ndb-master',
-> MASTER_PORT=3306,
-> MASTER_USER='myslave',
-> MASTER_PASSWORD='53cr37';
Query OK, 0 rows affected (0.03 sec)

mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State:
Master_Host: mysql-ndb-master
Master_User: myslave
Master_Port: 3306
Connect_Retry: 60
Master_Log_File:
Read_Master_Log_Pos: 4
Relay_Log_File: domU-12-31-35-00-0E-51-relay-bin.000001
Relay_Log_Pos: 4
Relay_Master_Log_File:
Slave_IO_Running: No
Slave_SQL_Running: No
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 0
Relay_Log_Space: 106
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
1 row in set (0.00 sec)

Run a backup on the Master cluster using START BACKUP from the ndb_mgm command line

ndb_mgm> start backup
Connected to Management Server at: mysql-ndb_mgmd1:1186
Waiting for completed, this may take several minutes
Node 1: Backup 1 started from node 3
Node 1: Backup 1 started from node 3 completed
StartGCP: 6681 StopGCP: 6684
#Records: 2053 #LogRecords: 0
Data: 33888 bytes Log: 0 bytes

Copy the backup files from all Master data nodes to the Slave

scp -r BACKUP-1/* mysql-ndb-slave:/mnt/mysql-cluster/data/BACKUP
BACKUP-1.1.log 100% 44 0.0KB/s 00:00
BACKUP-1-0.1.Data 100% 160KB 160.0KB/s 00:00
BACKUP-1.1.ctl 100% 6832 6.7KB/s 00:00

scp -r BACKUP-1/* mysql-ndb-slave:/mnt/mysql-cluster/data/BACKUP
BACKUP-1.2.log 100% 44 0.0KB/s 00:00
BACKUP-1-0.2.Data 100% 160KB 160.0KB/s 00:00
BACKUP-1.2.ctl 100% 6832 6.7KB/s 00:00

Check that all databases are present on the Slave; otherwise use CREATE SCHEMA

Master

Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 6
Server version: 5.1.20-beta MySQL Community Server (GPL)

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| test |
| world |
+--------------------+
4 rows in set (0.00 sec)

Slave

Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.1.20-beta MySQL Community Server (GPL)

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| test |
| world |
+--------------------+
4 rows in set (0.02 sec)
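
The check above is a simple set difference between the Master and Slave database lists; anything missing on the Slave gets a CREATE SCHEMA. A small sketch of that comparison (the list of system schemas to skip is my own assumption here):

```python
# Databases present on every server that never need to be recreated
# (an assumption for this sketch -- adjust for your installation).
SYSTEM_SCHEMAS = {"information_schema", "mysql", "test"}

def missing_schema_statements(master_dbs, slave_dbs):
    """Return CREATE SCHEMA statements for master databases absent on the slave."""
    missing = sorted(set(master_dbs) - set(slave_dbs) - SYSTEM_SCHEMAS)
    return ["CREATE SCHEMA %s;" % db for db in missing]

print(missing_schema_statements(
    ["information_schema", "mysql", "test", "world"],
    ["information_schema", "mysql", "test"]))
```

In my case both clusters already listed the same four databases, so nothing needed creating.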

Restore the backup onto the Slave, in node_id order (-m also restores the metadata; -e also restores the epoch)

ndb_restore -c mysql-ndb-slave:1186 -n 1 -b 1 -m -r /mnt/mysql-cluster/data/BACKUP/
ndb_restore -c mysql-ndb-slave:1186 -n 2 -b 1 -e -r /mnt/mysql-cluster/data/BACKUP/


ndb_restore -c mysql-ndb-slave:1186 -n 1 -b 1 -m -r /mnt/mysql-cluster/data/BACKUP/
Nodeid = 1
Backup Id = 1
backup path = /mnt/mysql-cluster/data/BACKUP/
Ndb version in backup files: Version 5.1.20
Connected to ndb!!
_____________________________________________________
Processing data in table: sys/def/NDB$EVENTS_0(1) fragment 0
_____________________________________________________
Processing data in table: mysql/def/NDB$BLOB_2_3(3) fragment 0
_____________________________________________________
Processing data in table: sys/def/SYSTAB_0(0) fragment 0
_____________________________________________________
Processing data in table: mysql/def/ndb_schema(2) fragment 0
_____________________________________________________
Processing data in table: mysql/def/ndb_apply_status(4) fragment 0
Restored 0 tuples and 0 log entries

NDBT_ProgramExit: 0 - OK

ndb_restore -c mysql-ndb-slave:1186 -n 2 -b 1 -e -r /mnt/mysql-cluster/data/BACKUP/
Nodeid = 2
Backup Id = 1
backup path = /mnt/mysql-cluster/data/BACKUP/
Ndb version in backup files: Version 5.1.20
Connected to ndb!!
_____________________________________________________
Processing data in table: sys/def/NDB$EVENTS_0(1) fragment 1
_____________________________________________________
Processing data in table: mysql/def/NDB$BLOB_2_3(3) fragment 1
_____________________________________________________
Processing data in table: sys/def/SYSTAB_0(0) fragment 1
_____________________________________________________
Processing data in table: mysql/def/ndb_schema(2) fragment 1
_____________________________________________________
Processing data in table: mysql/def/ndb_apply_status(4) fragment 1
Restored 0 tuples and 0 log entries

NDBT_ProgramExit: 0 - OK

Check the apply status on the Slave to find the master binlog file and position to start replication from

SELECT @latest:=MAX(epoch) FROM mysql.ndb_apply_status;


SELECT @file:=SUBSTRING_INDEX(File, '/', -1),
@pos:=Position
FROM mysql.ndb_binlog_index
WHERE epoch > 1110
ORDER BY epoch ASC LIMIT 1;



mysql> SELECT @latest:=MAX(epoch) FROM mysql.ndb_apply_status;
+---------------------+
| @latest:=MAX(epoch) |
+---------------------+
| 1110 |
+---------------------+
1 row in set (0.09 sec)

mysql> SELECT @file:=SUBSTRING_INDEX(File, '/', -1),
-> @pos:=Position
-> FROM mysql.ndb_binlog_index
-> WHERE epoch > 1110
-> ORDER BY epoch ASC LIMIT 1;
Empty set (0.00 sec)
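
What those two queries compute can be modeled in a few lines of Python: take the latest epoch the slave has applied, then find the earliest master binlog file/position with a newer epoch. The sample rows below are made up for illustration; in my case the second query returned an empty set, meaning the slave was already caught up.

```python
def slave_start_position(latest_applied_epoch, binlog_index_rows):
    """Find the master binlog file/position the slave should start from.

    binlog_index_rows is a list of (epoch, file, position) tuples standing
    in for mysql.ndb_binlog_index.  Returns (file, position), or None when
    no epoch newer than the restored one exists (slave already caught up).
    """
    newer = [row for row in binlog_index_rows if row[0] > latest_applied_epoch]
    if not newer:
        return None
    epoch, path, position = min(newer)          # smallest epoch above the cutoff
    return (path.rsplit("/", 1)[-1], position)  # like SUBSTRING_INDEX(File,'/',-1)

rows = [(1100, "./row.000001", 340), (1115, "./row.000001", 520)]
print(slave_start_position(1110, rows))   # -> ('row.000001', 520)
print(slave_start_position(1115, rows))   # -> None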

Restart the Master and Slave MySQL nodes to make sure /etc/my.cnf is OK

service mysql.server restart


On master:

mysql> show master status\G
*************************** 1. row ***************************
File: row.000001
Position: 106
Binlog_Do_DB:
Binlog_Ignore_DB:
1 row in set (0.00 sec)

On Slave:

[root@domU-12-31-35-00-04-51 data]# mysql -u root -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 4
Server version: 5.1.20-beta MySQL Community Server (GPL)

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: mysql-ndb-master
Master_User: myslave
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: row.000001
Read_Master_Log_Pos: 106
Relay_Log_File: domU-12-31-35-00-04-51-relay-bin.000003
Relay_Log_Pos: 245
Relay_Master_Log_File: row.000001
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 106
Relay_Log_Space: 417
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
1 row in set (0.00 sec)

Running mysqlslap to test

mysqlslap --concurrency=1,25 --iterations=10 --number-int-cols=10 \
--number-char-cols=3 --auto-generate-sql --csv=/tmp/mysqlslap.csv --engine=ndb --auto-generate-sql-add-autoincrement \
--auto-generate-sql-load-type=mixed --number-of-queries=1 --user=root --password


mysqlslap --concurrency=1,25 --iterations=10 --number-int-cols=10 --number-char-cols=3 --auto-generate-sql --csv=/tmp/mysqlslap.csv --engine=ndb --auto-generate-sql-add-autoincrement --auto-generate-sql-load-type=mixed --number-of-queries=1 --user=root --password
Enter password:
mysqlslap: Cannot run query SELECT intcol1,intcol2,intcol3,intcol4,intcol5,intcol6,intcol7,intcol8,intcol9,intcol10,charcol1,charcol2,charcol3 FROM t1 WHERE id = '88' ERROR : Got temporary error 4006 'Connect failure - out of connection objects (increase MaxNoOfConcurrentTransactions)' from NDBCLUSTER
mysqlslap: Cannot run query SELECT intcol1,intcol2,intcol3,intcol4,intcol5,intcol6,intcol7,intcol8,intcol9,intcol10,charcol1,charcol2,charcol3 FROM t1 WHERE id = '36' ERROR : Got temporary error 4006 'Connect failure - out of connection objects (increase MaxNoOfConcurrentTransactions)' from NDBCLUSTER
mysqlslap: Cannot run query SELECT intcol1,intcol2,intcol3,intcol4,intcol5,intcol6,intcol7,intcol8,intcol9,intcol10,charcol1,charcol2,charcol3 FROM t1 WHERE id = '38' ERROR : Got temporary error 4006 'Connect failure - out of connection objects (increase MaxNoOfConcurrentTransactions)' from NDBCLUSTER
mysqlslap: Cannot run query SELECT intcol1,intcol2,intcol3,intcol4,intcol5,intcol6,intcol7,intcol8,intcol9,intcol10,charcol1,charcol2,charcol3 FROM t1 WHERE id = '73' ERROR : Got temporary error 4006 'Connect failure - out of connection objects (increase MaxNoOfConcurrentTransactions)' from NDBCLUSTER
mysqlslap: Cannot run query SELECT intcol1,intcol2,intcol3,intcol4,intcol5,intcol6,intcol7,intcol8,intcol9,intcol10,charcol1,charcol2,charcol3 FROM t1 WHERE id = '55' ERROR : Got temporary error 4006 'Connect failure - out of connection objects (increase MaxNoOfConcurrentTransactions)' from NDBCLUSTER
Killed

cat /tmp/mysqlslap.csv
ndb,mixed,0.129,0.102,0.165,1,100
ndb,mixed,0.035,0.011,0.067,1,10
ndb,mixed,0.004,0.001,0.031,1,1
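
The CSV output above is easy to post-process. The column layout here is how I read mysqlslap's --csv output in 5.1 (engine, load type, average/minimum/maximum seconds, client count, queries per client); check the mysqlslap documentation for your version before relying on it.

```python
import csv
import io

# Column layout as I read the 5.1 --csv output (an assumption; verify
# against the mysqlslap docs for your version).
FIELDS = ["engine", "load", "avg", "min", "max", "clients", "queries_per_client"]

def parse_mysqlslap_csv(text):
    """Parse mysqlslap --csv output into a list of typed dicts."""
    rows = []
    for record in csv.reader(io.StringIO(text)):
        row = dict(zip(FIELDS, record))
        for key in ("avg", "min", "max"):
            row[key] = float(row[key])
        for key in ("clients", "queries_per_client"):
            row[key] = int(row[key])
        rows.append(row)
    return rows

sample = "ndb,mixed,1.202,1.040,1.321,25,0\nndb,mixed,2.482,2.386,2.596,50,0\n"
for row in parse_mysqlslap_csv(sample):
    print("%(clients)3d clients: avg %(avg).3fs" % row)
```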

The concurrency settings were too low, so I set them higher in configure.ini:

[ndbd default]
NoOfReplicas= 2
MaxNoOfConcurrentTransactions= 2048
MaxNoOfConcurrentOperations= 4096

Restart the cluster.

mysqlslap --concurrency=1,25,50,100 --iterations=10 --number-int-cols=10 \
--number-char-cols=3 --auto-generate-sql --csv=/tmp/mysqlslap.csv --engine=ndb --auto-generate-sql-add-autoincrement \
--auto-generate-sql-load-type=mixed --number-of-queries=10 --user=root --password


[root@domU-12-31-35-00-29-02 BACKUP]# cat /tmp/mysqlslap.csv
ndb,mixed,0.053,0.012,0.075,1,10
ndb,mixed,1.202,1.040,1.321,25,0
ndb,mixed,2.482,2.386,2.596,50,0
ndb,mixed,10.239,9.398,11.006,100,0


2 thoughts on “MySQL 5.1 NDB Cluster Replication on EC2”

  1. Very interesting post. I also follow dbagym, but I can't figure out if all this is for practice or if you would consider using EC2 for database hosting in a production environment. I have heard that the network latency between the Amazon machines was a problem. I haven't found anybody using MySQL Cluster or replication on EC2 in production.

  2. Hi Frederic, interesting question. It is both: practice and database hosting on EC2. Each new install and feature exploration leaves me with another virtual machine to reuse later. When I first started this, I didn't realize the amount or depth of stuff to try out, even in MySQL. As to MySQL Cluster on EC2, it depends on why you are choosing MySQL Cluster over, say, MySQL replication. If purely for availability then it is a winner: just add more EC2 nodes as MySQL NDB nodes to cover the level of paranoia required. Performance is more difficult to answer, given the network is very important for good performance on generic workloads. However, like Oracle RAC, you can write or rewrite your application to take advantage of the parallel nature of multiple nodes; and unfortunately, like Oracle RAC, if you write bad SQL it won't magically work faster, and in some respects it can perform much worse. Have Fun, Paul
