MySQL Error: error reconnecting to master

Error message:

Slave I/O thread: error reconnecting to master
Last_IO_Error: error connecting to master

Diagnosis:

Check that the slave can connect to the master instance, using the following steps:

  1. Use ping to check that the master is reachable. e.g. ping master.yourdomain.com
  2. Use ping with the IP address to check that DNS isn't broken. e.g. ping 192.168.1.2
  3. Use the mysql client to connect from the slave to the master. e.g. mysql -u repluser -pREPLPASS --host=master.yourdomain.com --port=3306 (substitute whatever port you are connecting to the master on)
  4. If all of those steps work, check that the replication user has the REPLICATION SLAVE privilege. e.g. show grants for 'repluser'@'slave.yourdomain.com';

Resolution:

  • If steps 1 and 2 fail, you have a network or firewall issue. Check with a network/firewall administrator, or check the logs yourself if you wear those hats.
  • If step 1 fails but step 2 works, you have a DNS or name resolution issue. Check that the slave can resolve the master's name, using the mysql client or ssh/telnet/remote desktop.
  • If step 3 fails, check the error reported: it will either be an authentication issue (login failed/denied) or an issue with the TCP port the master is listening on. A good way to verify that the port is open is telnet master.yourdomain.com 3306 (or whichever port the master is listening on); if that fails, there is a firewall in the network blocking that port.
  • If you get to step 4, everything looks fine, and the slave reconnects fine on retrying, then you have probably had a temporary network failure, name resolution failure, firewall failure, or some combination of these.

Continuing Sporadic issues:

Get hold of the network and firewall logs.
If this is not possible, set up a script that periodically pings the master, checks the TCP port and connects with the mysql client, logging the results over time to prove to your friendly network admin that there is a problem with the network.
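
The following is a minimal sketch of such a script; the host, port, credentials and log path are placeholders to substitute for your own, and it assumes ping, nc and the mysql client are installed.

#!/bin/bash
# Sketch: log master reachability (ICMP), port reachability (TCP) and a MySQL login once a minute.
MASTER=master.yourdomain.com
PORT=3306
LOG=/var/log/master_connectivity.log

while true
do
    TS=$(date '+%Y-%m-%d %H:%M:%S')
    ping -c 1 -W 2 $MASTER            > /dev/null 2>&1 && P=OK || P=FAIL
    nc -z -w 2 $MASTER $PORT          > /dev/null 2>&1 && T=OK || T=FAIL
    mysql -u repluser -pREPLPASS --host=$MASTER --port=$PORT -e "select 1" \
                                      > /dev/null 2>&1 && M=OK || M=FAIL
    echo "$TS ping=$P port=$T mysql=$M" >> $LOG
    sleep 60
done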

How MySQL deals with it:

MySQL will try and reconnect by itself after a network failure or query timeout.

The process is governed by a few variables:

master-connect-retry
slave-net-timeout
master-retry-count

In a nutshell, a MySQL slave declares its connection to the master dead after slave-net-timeout seconds of silence, then tries to reconnect every master-connect-retry seconds, but only for the number of times specified in master-retry-count.
By default, a MySQL slave waits one hour before declaring the connection dead, and will then retry every 60 seconds up to 86,400 times. That is every minute for 60 days.

If the one-hour slave-net-timeout is too long for your DR/slave read strategy, you will need to adjust it accordingly.
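
As a rough sketch (the values below are illustrative placeholders, not recommendations), the options can be tightened in /etc/my.cnf, and slave_net_timeout can also be changed at runtime:

cat >> /etc/my.cnf <<'EOF'
[mysqld]
slave-net-timeout    = 60      # declare the connection dead after 60s of silence
master-connect-retry = 10      # wait 10s between reconnect attempts
master-retry-count   = 8640    # give up after 8640 attempts
EOF

# slave_net_timeout is a dynamic variable, so it can also be set without a restart:
mysql -u root -p -e "SET GLOBAL slave_net_timeout = 60;"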

Edit: 2011/02/02

Thanks to leBolide. He discovered that there is a 32 character limit on the password for replication.

Have Fun

Paul

P.S. If you liked this post you might be good enough to try these challenges

MySQL Practice challenges part one.

Top 9 Posts for the last 12 months

If you were ever wondering what other people check out on this site, here are the most popular articles by pageviews for the last 12 months.

Seems most people like the LVM snapshots article, articles about running multiple MySQL instances and the various benchmark articles.

  1. mysql backups using lvm snapshots
  2. oracle 11g on ec2 using silent install
  3. mysql multi master master replication on ec2
  4. mysql master master replication table sync
  5. multiple mysql instances on ec2
  6. mysql dbt2 benchmark on ec2
  7. mysql 51 ndb cluster replication on ec2
  8. sysbench vs mysql on ec2
  9. bonnie io benchmark vs ec2

Have Fun

Paul

OurDelta MySQL on EC2 – updating binaries

Given the amount of time since my last post on installing OurDelta MySQL on EC2, it is a good opportunity to show quickly how to get your OurDelta MySQL install up to date.

Prerequisites:

You have already installed the OurDelta Repository as per this documentation
http://ourdelta.org/centos

To update:

Now just yum update to get the latest version:
http://ourdelta.org/release-5077-d8

yum update MySQL-OurDelta*

It is as simple as that.
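
If you want to double-check what landed, a quick sketch like this will confirm the installed packages and the running server version:

rpm -qa | grep MySQL-OurDelta
mysql -u root -p -e "SELECT VERSION();"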

Comments:

Thanks for the feedback from the last post.

Some people requested the Amazon Machine Image (AMI). The main issue with this is that once you bundle an AMI, it will always start with those binaries (including MySQL), requiring you to run yum update each time you launch an instance. So if I had bundled an AMI back in February, anyone using that AMI now would be way behind on the latest updates for OurDelta and any other CentOS packages.

It is better to go down the path of learning how to bundle an AMI yourself, bringing a base CentOS 4 or CentOS 5 AMI up to date once a month, and using yum update afterwards when you launch the instance.
You can even pass a script which runs after the instance has launched, or use a configuration tool like Puppet.
Believe me when I say that whilst bundling AMIs is straightforward, you do not want large numbers of old/obsolete AMIs floating around to manage later.
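
As a sketch, such a launch-time script might be no more than the following; how you arrange for it to run (EC2 user-data, rc.local, Puppet) is up to your AMI and is an assumption on my part:

#!/bin/bash
# Sketch: bring a freshly launched instance up to date before MySQL is used.
yum -y update MySQL-OurDelta*
yum -y update
/etc/init.d/mysql restart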

Upcoming stuff:

I am going to use this AMI with MySQL Sandbox to show how easy it is to have a test environment if you want to take your existing MySQL 5.0.x versions to OurDelta MySQL or any other version of MySQL.

Have Fun

Paul

Screen log:



[root@domU-12-31-39-04-71-D3 ~]# yum update MySQL-OurDelta*
Setting up Update Process
Setting up repositories
Reading repository metadata in from local files
Resolving Dependencies
--> Populating transaction set with selected packages. Please wait.
---> Package MySQL-OurDelta-client.i386 0:5.0.77.d8-54.el4 set to be updated
---> Package MySQL-OurDelta-shared.i386 0:5.0.77.d8-54.el4 set to be updated
---> Package MySQL-OurDelta-test.i386 0:5.0.77.d8-54.el4 set to be updated
---> Package MySQL-OurDelta-server.i386 0:5.0.77.d8-54.el4 set to be updated
---> Package MySQL-OurDelta-devel.i386 0:5.0.77.d8-54.el4 set to be updated
--> Running transaction check

Dependencies Resolved

=============================================================================
Package Arch Version Repository Size
=============================================================================
Updating:
MySQL-OurDelta-client i386 5.0.77.d8-54.el4 ourdelta 6.0 M
MySQL-OurDelta-devel i386 5.0.77.d8-54.el4 ourdelta 7.5 M
MySQL-OurDelta-server i386 5.0.77.d8-54.el4 ourdelta 17 M
MySQL-OurDelta-shared i386 5.0.77.d8-54.el4 ourdelta 1.7 M
MySQL-OurDelta-test i386 5.0.77.d8-54.el4 ourdelta 6.7 M

Transaction Summary
=============================================================================
Install 0 Package(s)
Update 5 Package(s)
Remove 0 Package(s)
Total download size: 39 M
Downloading Packages:
(1/5): MySQL-OurDelta-cli 100% |=========================| 6.0 MB 00:03
(2/5): MySQL-OurDelta-sha 100% |=========================| 1.7 MB 00:01
(3/5): MySQL-OurDelta-tes 100% |=========================| 6.7 MB 00:03
(4/5): MySQL-OurDelta-ser 100% |=========================| 17 MB 00:09
(5/5): MySQL-OurDelta-dev 100% |=========================| 7.5 MB 00:03
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
Updating : MySQL-OurDelta-client ####################### [ 1/10]
Updating : MySQL-OurDelta-shared ####################### [ 2/10]
Updating : MySQL-OurDelta-test ####################### [ 3/10]
Giving mysqld 5 seconds to exit nicely
Updating : MySQL-OurDelta-server ####################### [ 4/10]
090608 20:19:24 [Warning] option 'sync-mirror-binlog': unsigned value 18446744073709551615 adjusted to 4294967295
090608 20:19:24 [Warning] option 'sync-mirror-binlog': unsigned value 18446744073709551615 adjusted to 4294967295
PLEASE REMEMBER TO SET A PASSWORD FOR THE MySQL root USER !
To do so, start the server, then run:
/usr/bin/mysql_secure_installation

which will also give you the option of removing the test
databases and anonymous user created by default. This is
strongly recommended for production servers.

See the manual for more instructions.

MySQL bug reports should be submitted through http://bugs.mysql.com/
Issues related to the OurDelta patches or packaging should be
submitted in the OurDelta project on Launchpad, simply follow the
links via http://ourdelta.org/

The latest information about MySQL is available on the web at
http://dev.mysql.com/
Other information sources are
- the MySQL Mailing List archives (http://lists.mysql.com/);
- the MySQL Forums (http://forums.mysql.com/).


Notes regarding SELinux on this platform:
=========================================

The default policy might cause server startup to fail because it is
not allowed to access critical files. In this case, please update
your installation.

The default policy might also cause inavailability of SSL related
features because the server is not allowed to access /dev/random
and /dev/urandom. If this is a problem, please do the following:

1) install selinux-policy-targeted-sources from your OS vendor
2) add the following two lines to /etc/selinux/targeted/src/policy/domains/program/mysqld.te:
allow mysqld_t random_device_t:chr_file read;
allow mysqld_t urandom_device_t:chr_file read;
3) cd to /etc/selinux/targeted/src/policy and issue the following command:
make load


Starting MySQL.[ OK ]
Giving mysqld 2 seconds to start
Updating : MySQL-OurDelta-devel ####################### [ 5/10]
Cleanup : MySQL-OurDelta-client ####################### [ 6/10]
Cleanup : MySQL-OurDelta-shared ####################### [ 7/10]
Cleanup : MySQL-OurDelta-test ####################### [ 8/10]
Cleanup : MySQL-OurDelta-server ####################### [ 9/10]
Cleanup : MySQL-OurDelta-devel ####################### [10/10]

Updated: MySQL-OurDelta-client.i386 0:5.0.77.d8-54.el4 MySQL-OurDelta-devel.i386 0:5.0.77.d8-54.el4 MySQL-OurDelta-server.i386 0:5.0.77.d8-54.el4 MySQL-OurDelta-shared.i386 0:5.0.77.d8-54.el4 MySQL-OurDelta-test.i386 0:5.0.77.d8-54.el4
Complete!


Just make sure the mysql instance is secure

[root@domU-12-31-39-04-71-D3 ~]# /usr/bin/mysql_secure_installation




NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MySQL
SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP CAREFULLY!


In order to log into MySQL to secure it, we'll need the current
password for the root user. If you've just installed MySQL, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.

Enter current password for root (enter for none):
OK, successfully used password, moving on...

Setting the root password ensures that nobody can log into the MySQL
root user without the proper authorisation.

You already have a root password set, so you can safely answer 'n'.

Change the root password? [Y/n] n
... skipping.

By default, a MySQL installation has an anonymous user, allowing anyone
to log into MySQL without having to have a user account created for
them. This is intended only for testing, and to make the installation
go a bit smoother. You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n]
... Success!

Normally, root should only be allowed to connect from 'localhost'. This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n]
... Success!

By default, MySQL comes with a database named 'test' that anyone can
access. This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n]
- Dropping test database...
... Success!
- Removing privileges on test database...
... Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n]
... Success!

Cleaning up...



All done! If you've completed all of the above steps, your MySQL
installation should now be secure.

Thanks for using MySQL!

OurDelta MySQL on EC2 – install

Summary:

Arjen would give me an earful if I got this wrong or poorly worded.

“OurDelta produces enhanced builds for MySQL, with OurDelta and third-party patches, for common production platforms” from http://ourdelta.org/about

Over the next series of articles I am going to put the many additions to the MySQL 5.0 baseline through their paces on Amazon EC2.

Using a base CentOS 4.4 image I had lying around on Amazon S3, I had OurDelta installed and secured in no time flat.

This is an outline of installing OurDelta onto CentOS 4.

Install:


yum install yum-plugin-protectbase
rpm --import http://mirror.ourdelta.org/deb/ourdelta.gpg
rpm -Uvh http://mirror.ourdelta.org/yum/CentOS/OurDelta-release-0.0.4-1.noarch.rpm
mkdir downloads
cd downloads/
wget http://mirror.ourdelta.org/yum/CentOS/4/i386/RPMS/MySQL-OurDelta-server-5.0.67.d7-44.el4_7.i386.rpm
wget http://mirror.ourdelta.org/yum/CentOS/4/i386/RPMS/MySQL-OurDelta-shared-5.0.67.d7-44.el4_7.i386.rpm
wget http://mirror.ourdelta.org/yum/CentOS/4/i386/RPMS/MySQL-OurDelta-test-5.0.67.d7-44.el4_7.i386.rpm
wget http://mirror.ourdelta.org/yum/CentOS/4/i386/RPMS/MySQL-OurDelta-client-5.0.67.d7-44.el4_7.i386.rpm
wget http://mirror.ourdelta.org/yum/CentOS/4/i386/RPMS/MySQL-OurDelta-devel-5.0.67.d7-44.el4_7.i386.rpm
yum localinstall MySQL-OurDelta-*
/usr/bin/mysql_secure_installation
ps -ef|grep mysql
mysql -u root -p

Comments:

That's it. Straightforward install.
I am getting lazy in my old age, so I used a pipe combo to generate the install commands above:

history | grep -v history | awk '{ print $2" "$3" "$4 }'

The next article will cover the upgrade of any existing MySQL 5.0 release to OurDelta.
I have saved this Ourdelta MySQL install as an AMI.

If people are interested in making this a public AMI please post a comment.

Have Fun

Paul

Is EC2 useful as a database server

Plenty of people have been excited by the prospect of Amazon EC2 and the ability to scale out your databases as load increases beyond your original configuration. I noticed Morgan Tocker and Carl Mercier are going to be presenting on this topic at the upcoming MySQL Conference.

However, almost immediately people are worried about the lack of persistence of data across instance terminations.
In a sense, people want dedicated hosting services instead of what EC2 really is.

You need to think of Amazon EC2 in terms of an electricity generation metaphor.
Coal, nuclear fission and gas provide the base load electricity which is on 24×7.
Gas and hydro can act as peak load generation as well, so at peak periods, when the demand for electricity is higher, generators can switch on these extra resources to cope.

Amazon EC2 is the same: it is there to service peak load.
Either you use Amazon EC2 as a base load server, or you use a dedicated hosting service to provide base load and add server resources on EC2 during peak periods as required.
As a dedicated hosting service, EC2 is not actually the cheapest option out there; plenty of dedicated hosting providers will give your application and database cheaper base load capacity. That said, many people choose to run both application and database servers on EC2 as base load servers, and the uptime of these instances is good.

Using EC2 for base load means you must implement additional protection for your data to provide persistence, in the form of clustering technologies, replication technologies or both. So running EC2 as a base load database server adds complexity, which is why numerous companies have sprung up: they essentially provide a way to pay someone else to deal with that complexity.

The hidden value here is that, in adopting a more thorough attitude to data persistence and redundancy, your database becomes more robust. So if or when your dedicated hosting provider has an outage, your architectural design is already in a position to handle it.

The danger is that you see any ongoing performance issue (a demand for additional base load) as something to be solved by throwing hardware at it, rather than reviewing whether the demand is justified or whether it can be reduced by tuning the application, database or architecture.

Update: Added Carl as co-presenter at the MySQL conference.

Have Fun

Paul

Sysbench fileio vs EC2 Part 1

Overview:

Peter Zaitsev’s recent article about Evaluating IO subsystem performance for MySQL spurred my interest in doing something similar on EC2.

I have covered running sysbench against MySQL on EC2, however I had not specifically used sysbench to test IO; rather, I had used bonnie++ and iozone for that.

I don't have a lot of respect for the EC2 small instance. Whilst it was reasonable in the middle of 2006 when Amazon EC2 was launched, nowadays you can (even in Australia) pick up dual and quad core CPUs with enough memory for a 32-bit OS for a reasonable price.
The main problem, as I see it for my specific interest, is the lack of grunt in network and IO bandwidth. The idea of Grid or Utility or the latest "Cloud" computing falls apart when you have to hit disk or, worse, need to synchronize across a slow network.

So I was mildly surprised when the small instance was able to match and out-perform Peter’s benchmarks on the local disk at least.

Summary:

Using Peter's benchmark description as a guide, I reran his tests on a small EC2 instance. The local disk matched and outperformed his benchmark with no changes to any settings.
Unfortunately, the mounted filesystem /mnt was much, much worse.

If you are using MySQL or any IO intensive application, I would do the following:

  1. Make your image specify as large a local mount as possible. You will need the raw IO speed for anything which requires fast reads and fast writes.
  2. If you can take the risk, use the Linux file cache, especially if you have to use /mnt.
  3. Treat the raw IO performance of /mnt as slow disk, in some respects like a backup location. Its requests/sec and IO transfer rates are little better than USB 1.0 high speed, and certainly not in the realm of FireWire.
  4. Get as much into memory as possible. This means narrowing table and index sizes, or going with the more expensive large and extra large instances.

Install:

  1. wget sysbench from SourceForge.
  2. yum install gcc (or apt-get install gcc) if not already installed.
  3. configure, make and install sysbench.
  4. Run a simple shell script, sysbench_fileio.sh, to replicate Peter's benchmark (a rough sketch is shown after this list).
  5. Feel free to publish your results, either here as a comment or on your own blog. Link or drop me a note and I will link to your benchmark results, and we can tally up and see for whom the IO tolls.
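
The script itself is not reproduced here, but based on the runs in the raw results below it is roughly a wrapper along these lines (a sketch only; the option values are taken from those runs):

#!/bin/bash
# Rough sketch of sysbench_fileio.sh, reconstructed from the runs shown below.
COMMON="--test=fileio --max-time=60 --max-requests=1000000 --file-extra-flags=direct --file-fsync-freq=0"

# 128M in a single file: random read and random write, 1 and 64 threads.
sysbench $COMMON --file-num=1 --file-total-size=128M prepare
for mode in rndrd rndwr; do
    for threads in 1 64; do
        sysbench $COMMON --file-num=1 --file-total-size=128M \
                 --num-threads=$threads --file-test-mode=$mode run
    done
done
sysbench $COMMON --file-num=1 --file-total-size=128M cleanup

# 128 files totalling 2G: same modes and thread counts.
sysbench $COMMON --file-num=128 --file-total-size=2G prepare
for mode in rndrd rndwr; do
    for threads in 1 64; do
        sysbench $COMMON --file-num=128 --file-total-size=2G \
                 --num-threads=$threads --file-test-mode=$mode run
    done
done
sysbench $COMMON --file-num=128 --file-total-size=2G cleanup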

Documentation (explains the various options)

http://sysbench.sourceforge.net/docs/#fileio_mode

Configuration:

Amazon small EC2 instance, running CentOS 4.4



[root@domU-12-31-38-00-20-02 sysbench-0.4.8]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 9.9G 855M 8.6G 9% /
/dev/sda2 147G 188M 140G 1% /mnt
[root@domU-12-31-38-00-20-02 sysbench-0.4.8]# cat /etc/fstab
# This file is edited by fstab-sync - see 'man fstab-sync' for details
/dev/sda1 / ext3 defaults 1 1
/dev/sda2 /mnt ext3 defaults 1 2
/dev/sda3 swap swap defaults 0 0
none /proc proc defaults 0 0
none /sys sysfs defaults 0 0

[root@domU-12-31-38-00-20-02 sysbench-0.4.8]# uname -a
Linux domU-12-31-38-00-20-02 2.6.16-xenU #1 SMP Mon May 28 03:41:49 SAST 2007 i686 athlon i386 GNU/Linux
[root@domU-12-31-38-00-20-02 sysbench-0.4.8]# cat /etc/redhat-release
CentOS release 4.4 (Final)
[root@domU-12-31-38-00-20-02 sysbench-0.4.8]# cat /proc/cpuinfo
processor : 0
vendor_id : AuthenticAMD
cpu family : 15
model : 65
model name : Dual-Core AMD Opteron(tm) Processor 2218 HE
stepping : 3
cpu MHz : 2600.000
cache size : 1024 KB
fdiv_bug : no
hlt_bug : no
f00f_bug : no
coma_bug : no
fpu : yes
fpu_exception : yes
cpuid level : 1
wp : yes
flags : fpu tsc msr pae mce cx8 apic mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt lm 3dnowext 3dnow pni cx16 lahf_lm cmp_legacy svm cr8legacy ts fid vid ttp tm stc
bogomips : 5201.75

[root@domU-12-31-38-00-20-02 sysbench-0.4.8]# ipcs

------ Shared Memory Segments --------
key shmid owner perms bytes nattch status

------ Semaphore Arrays --------
key semid owner perms nsems

------ Message Queues --------
key msqid owner perms used-bytes messages

[root@domU-12-31-38-00-20-02 sysbench-0.4.8]# free
total used free shared buffers cached
Mem: 1740944 533224 1207720 0 217020 263060
-/+ buffers/cache: 53144 1687800
Swap: 917496 0 917496

Results:

Raw Results:



wget http://optusnet.dl.sourceforge.net/sourceforge/sysbench/sysbench-0.4.8.tar.gz

yum install gcc sysstat

tar -xzvf sysbench-0.4.8.tar.gz
cd sysbench-0.4.8
./configure --without-mysql
make
make install

Sysbench FileIO test options

[root@domU-12-31-38-00-20-02 sysbench-0.4.8]# sysbench --test=fileio help
sysbench v0.4.8: multi-threaded system evaluation benchmark

fileio options:
--file-num=N number of files to create [128]
--file-block-size=N block size to use in all IO operations [16384]
--file-total-size=SIZE total size of files to create [2G]
--file-test-mode=STRING test mode {seqwr, seqrewr, seqrd, rndrd, rndwr, rndrw}
--file-io-mode=STRING file operations mode {sync,async,fastmmap,slowmmap} [sync]
--file-extra-flags=STRING additional flags to use on opening files {sync,dsync,direct} []
--file-fsync-freq=N do fsync() after this number of requests (0 - don't use fsync()) [100]
--file-fsync-all=[on|off] do fsync() after each write operation [off]
--file-fsync-end=[on|off] do fsync() at the end of test [on]
--file-fsync-mode=STRING which method to use for synchronization {fsync, fdatasync} [fsync]
--file-merged-requests=N merge at most this number of IO requests if possible (0 - don't merge) [0]
--file-rw-ratio=N reads/writes ratio for combined test [1.5]


Peter Zaitsev's 120M prepare file command

sysbench --test=fileio --max-time=60 --max-requests=1000000 \
--file-num=1 --file-extra-flags=direct --file-fsync-freq=0 \
--file-total-size=128M prepare

sysbench v0.4.8: multi-threaded system evaluation benchmark

1 files, 131072Kb each, 128Mb total
Creating files for the test...

Running the test with the 120M file

sysbench --test=fileio --max-time=60 --max-requests=1000000 \
--file-num=1 --file-extra-flags=direct \
--file-fsync-freq=0 --file-total-size=128M --file-test-mode=rndrd run

sysbench v0.4.8: multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1

Extra file open flags: 16384
1 files, 128Mb each
128Mb total file size
Block size 16Kb
Number of random requests for random IO: 1000000
Read/Write ratio for combined random IO test: 1.50
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random read test
Threads started!
Time limit exceeded, exiting...
Done.

Operations performed: 590164 Read, 0 Write, 0 Other = 590164 Total
Read 9.0052Gb Written 0b Total transferred 9.0052Gb (153.69Mb/sec)
9836.03 Requests/sec executed

Test execution summary:
total time: 60.0002s
total number of events: 590164
total time taken by event execution: 59.3742
per-request statistics:
min: 0.0001s
avg: 0.0001s
max: 0.0980s
approx. 95 percentile: 0.0001s

Threads fairness:
events (avg/stddev): 590164.0000/0.00
execution time (avg/stddev): 59.3742/0.00

Rerunning with 64 threads

sysbench --num-threads=64 --test=fileio --max-time=60 \
--max-requests=1000000 --file-num=1 --file-extra-flags=direct \
--file-fsync-freq=0 --file-total-size=128M --file-test-mode=rndrd run
sysbench v0.4.8: multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 64

Extra file open flags: 16384
1 files, 128Mb each
128Mb total file size
Block size 16Kb
Number of random requests for random IO: 1000000
Read/Write ratio for combined random IO test: 1.50
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random read test
Threads started!
Time limit exceeded, exiting...
(last message repeated 63 times)
Done.

Operations performed: 745086 Read, 0 Write, 0 Other = 745086 Total
Read 11.369Gb Written 0b Total transferred 11.369Gb (194.02Mb/sec)
12417.46 Requests/sec executed

Test execution summary:
total time: 60.0031s
total number of events: 745086
total time taken by event execution: 3837.4047
per-request statistics:
min: 0.0003s
avg: 0.0052s
max: 0.1228s
approx. 95 percentile: 0.0325s

Threads fairness:
events (avg/stddev): 11641.9688/8.86
execution time (avg/stddev): 59.9594/0.02

Running in single thread with random write i.e. --file-test-mode=rndwr


sysbench --test=fileio --max-time=60 --max-requests=1000000 \
--file-num=1 --file-extra-flags=direct --file-fsync-freq=0 \
--file-total-size=128M --num-threads=1 --file-test-mode=rndwr run

Running the test with following options:
Number of threads: 1

Extra file open flags: 16384
1 files, 128Mb each
128Mb total file size
Block size 16Kb
Number of random requests for random IO: 1000000
Read/Write ratio for combined random IO test: 1.50
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random write test
Threads started!
Time limit exceeded, exiting...
Done.

Operations performed: 0 Read, 93104 Write, 0 Other = 93104 Total
Read 0b Written 1.4207Gb Total transferred 1.4207Gb (24.227Mb/sec)
1550.55 Requests/sec executed

Test execution summary:
total time: 60.0457s
total number of events: 93104
total time taken by event execution: 59.9639
per-request statistics:
min: 0.0000s
avg: 0.0006s
max: 1.8278s
approx. 95 percentile: 0.0001s

Threads fairness:
events (avg/stddev): 93104.0000/0.00
execution time (avg/stddev): 59.9639/0.00

with 64 threads --num-threads=64

sysbench --test=fileio --num-threads=64 --max-time=60 \
--max-requests=1000000 --file-num=1 --file-extra-flags=direct \
--file-fsync-freq=0 --file-total-size=128M \
--file-test-mode=rndwr run

Running the test with following options:
Number of threads: 64

Extra file open flags: 16384
1 files, 128Mb each
128Mb total file size
Block size 16Kb
Number of random requests for random IO: 1000000
Read/Write ratio for combined random IO test: 1.50
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random write test
Threads started!
Time limit exceeded, exiting...
(last message repeated 63 times)
Done.

Operations performed: 0 Read, 109175 Write, 0 Other = 109175 Total
Read 0b Written 1.6659Gb Total transferred 1.6659Gb (24.115Mb/sec)
1543.37 Requests/sec executed

Test execution summary:
total time: 70.7379s
total number of events: 109175
total time taken by event execution: 4329.6293
per-request statistics:
min: 0.0000s
avg: 0.0397s
max: 70.7328s
approx. 95 percentile: 0.0001s

Threads fairness:
events (avg/stddev): 1705.8594/6223.80
execution time (avg/stddev): 67.6505/3.06

Cleaning up for test with multiple files

sysbench --test=fileio --num-threads=64 --max-time=60 \
--max-requests=1000000 --file-num=1 --file-extra-flags=direct \
--file-fsync-freq=0 --file-total-size=128M --file-test-mode=rndwr cleanup
sysbench v0.4.8: multi-threaded system evaluation benchmark

Removing test files...

Preparing files

sysbench --test=fileio --max-time=60 --max-requests=1000000 \
--file-num=128 --file-extra-flags=direct --file-fsync-freq=0 \
--file-total-size=128M --num-threads=64 --file-test-mode=rndwr prepare

sysbench v0.4.8: multi-threaded system evaluation benchmark

128 files, 1024Kb each, 128Mb total
Creating files for the test...

rerunning with 128 files to total of 128M

sysbench --test=fileio --max-time=60 --max-requests=1000000 \
--file-num=128 --file-extra-flags=direct --file-fsync-freq=0 \
--file-total-size=128M --num-threads=64 --file-test-mode=rndwr run

sysbench v0.4.8: multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 64

Extra file open flags: 16384
128 files, 1Mb each
128Mb total file size
Block size 16Kb
Number of random requests for random IO: 1000000
Read/Write ratio for combined random IO test: 1.50
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random write test
Threads started!
Time limit exceeded, exiting...
(last message repeated 63 times)
Done.

Operations performed: 0 Read, 126633 Write, 0 Other = 126633 Total
Read 0b Written 1.9323Gb Total transferred 1.9323Gb (25.607Mb/sec)
1638.85 Requests/sec executed

Test execution summary:
total time: 77.2695s
total number of events: 126633
total time taken by event execution: 4561.7451
per-request statistics:
min: 0.0002s
avg: 0.0360s
max: 28.6922s
approx. 95 percentile: 0.0069s

Threads fairness:
events (avg/stddev): 1978.6406/28.82
execution time (avg/stddev): 71.2773/5.32

Preparing the 128 files to a total of 2Gig test

sysbench --test=fileio --max-time=60 --max-requests=1000000 \
--file-num=128 --file-extra-flags=direct --file-fsync-freq=0 \
--file-total-size=2G --num-threads=1 --file-test-mode=rndwr prepare

The run

sysbench --init-rng=1 --test=fileio --max-time=60 --max-requests=1000000 \
--file-num=128 --file-extra-flags=direct --file-fsync-freq=0 \
--file-total-size=2G --num-threads=1 --file-test-mode=rndrd run

Running the test with following options:
Number of threads: 1

Extra file open flags: 16384
128 files, 16Mb each
2Gb total file size
Block size 16Kb
Number of random requests for random IO: 1000000
Read/Write ratio for combined random IO test: 1.50
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random read test
Threads started!
Time limit exceeded, exiting...
Done.

Operations performed: 14094 Read, 0 Write, 0 Other = 14094 Total
Read 220.22Mb Written 0b Total transferred 220.22Mb (3.6701Mb/sec)
234.89 Requests/sec executed

Test execution summary:
total time: 60.0034s
total number of events: 14094
total time taken by event execution: 59.9868
per-request statistics:
min: 0.0001s
avg: 0.0043s
max: 0.2521s
approx. 95 percentile: 0.0129s

Threads fairness:
events (avg/stddev): 14094.0000/0.00
execution time (avg/stddev): 59.9868/0.00

2Gig file test with 64 threads

sysbench --init-rng=1 --test=fileio --max-time=60 --max-requests=1000000 \
--file-num=128 --file-extra-flags=direct --file-fsync-freq=0 \
--file-total-size=2G --num-threads=64 --file-test-mode=rndrd run

sysbench v0.4.8: multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 64

Extra file open flags: 16384
128 files, 16Mb each
2Gb total file size
Block size 16Kb
Number of random requests for random IO: 1000000
Read/Write ratio for combined random IO test: 1.50
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random read test
Threads started!
Time limit exceeded, exiting...
(last message repeated 63 times)
Done.

Operations performed: 24639 Read, 0 Write, 0 Other = 24639 Total
Read 384.98Mb Written 0b Total transferred 384.98Mb (6.3875Mb/sec)
408.80 Requests/sec executed

Test execution summary:
total time: 60.2719s
total number of events: 24639
total time taken by event execution: 3847.3337
per-request statistics:
min: 0.0023s
avg: 0.1561s
max: 0.5804s
approx. 95 percentile: 0.3490s

Threads fairness:
events (avg/stddev): 384.9844/0.94
execution time (avg/stddev): 60.1146/0.08

2 Gig file test single thread random writes --file-test-mode=rndwr

sysbench --init-rng=1 --test=fileio --max-time=60 \
--max-requests=1000000 --file-num=128 --file-extra-flags=direct \
--file-fsync-freq=0 --file-total-size=2G --num-threads=1 \
--file-test-mode=rndwr run

sysbench v0.4.8: multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1

Extra file open flags: 16384
128 files, 16Mb each
2Gb total file size
Block size 16Kb
Number of random requests for random IO: 1000000
Read/Write ratio for combined random IO test: 1.50
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random write test
Threads started!
Time limit exceeded, exiting...
Done.

Operations performed: 0 Read, 52960 Write, 0 Other = 52960 Total
Read 0b Written 827.5Mb Total transferred 827.5Mb (13.748Mb/sec)
879.89 Requests/sec executed

Test execution summary:
total time: 60.1893s
total number of events: 52960
total time taken by event execution: 60.1396
per-request statistics:
min: 0.0001s
avg: 0.0011s
max: 2.1815s
approx. 95 percentile: 0.0001s

Threads fairness:
events (avg/stddev): 52960.0000/0.00
execution time (avg/stddev): 60.1396/0.00


2 Gig file test 64 threads random writes --file-test-mode=rndwr

sysbench --init-rng=1 --test=fileio --max-time=60 \
--max-requests=1000000 --file-num=128 --file-extra-flags=direct \
--file-fsync-freq=0 --file-total-size=2G --num-threads=64 \
--file-test-mode=rndwr run

sysbench v0.4.8: multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 64

Extra file open flags: 16384
128 files, 16Mb each
2Gb total file size
Block size 16Kb
Number of random requests for random IO: 1000000
Read/Write ratio for combined random IO test: 1.50
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random write test
Threads started!
Time limit exceeded, exiting...
(last message repeated 63 times)
Done.

Operations performed: 0 Read, 38767 Write, 0 Other = 38767 Total
Read 0b Written 605.73Mb Total transferred 605.73Mb (10.038Mb/sec)
642.45 Requests/sec executed

Test execution summary:
total time: 60.3424s
total number of events: 38767
total time taken by event execution: 3861.3838
per-request statistics:
min: 0.0001s
avg: 0.0996s
max: 4.8871s
approx. 95 percentile: 0.8721s

Threads fairness:
events (avg/stddev): 605.7344/14.70
execution time (avg/stddev): 60.3341/0.00

Rerunning the whole test set on /mnt

cd /mnt
vi sysbench_fileio.sh
chmod 700 sysbench_fileio.sh
./sysbench_fileio.sh

sysbench v0.4.8: multi-threaded system evaluation benchmark

1 files, 131072Kb each, 128Mb total
Creating files for the test...
sysbench v0.4.8: multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1

Extra file open flags: 16384
1 files, 128Mb each
128Mb total file size
Block size 16Kb
Number of random requests for random IO: 1000000
Read/Write ratio for combined random IO test: 1.50
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random read test
Threads started!


Time limit exceeded, exiting...
Done.

Operations performed: 9782 Read, 0 Write, 0 Other = 9782 Total
Read 152.84Mb Written 0b Total transferred 152.84Mb (2.547Mb/sec)
163.01 Requests/sec executed

Test execution summary:
total time: 60.0092s
total number of events: 9782
total time taken by event execution: 59.9964
per-request statistics:
min: 0.0003s
avg: 0.0061s
max: 0.1835s
approx. 95 percentile: 0.0100s

Threads fairness:
events (avg/stddev): 9782.0000/0.00
execution time (avg/stddev): 59.9964/0.00

sysbench v0.4.8: multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 64

Extra file open flags: 16384
1 files, 128Mb each
128Mb total file size
Block size 16Kb
Number of random requests for random IO: 1000000
Read/Write ratio for combined random IO test: 1.50
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random read test
Threads started!

Time limit exceeded, exiting...
(last message repeated 63 times)
Done.

Operations performed: 10799 Read, 0 Write, 0 Other = 10799 Total
Read 168.73Mb Written 0b Total transferred 168.73Mb (2.7943Mb/sec)
178.83 Requests/sec executed

Test execution summary:
total time: 60.3856s
total number of events: 10799
total time taken by event execution: 3852.0646
per-request statistics:
min: 0.0098s
avg: 0.3567s
max: 0.9908s
approx. 95 percentile: 0.6044s

Threads fairness:
events (avg/stddev): 168.7344/4.29
execution time (avg/stddev): 60.1885/0.11

sysbench v0.4.8: multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1

Extra file open flags: 16384
1 files, 128Mb each
128Mb total file size
Block size 16Kb
Number of random requests for random IO: 1000000
Read/Write ratio for combined random IO test: 1.50
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random write test
Threads started!
Time limit exceeded, exiting...
Done.

Operations performed: 0 Read, 15274 Write, 0 Other = 15274 Total
Read 0b Written 238.66Mb Total transferred 238.66Mb (3.9774Mb/sec)
254.55 Requests/sec executed

Test execution summary:
total time: 60.0029s
total number of events: 15274
total time taken by event execution: 59.9853
per-request statistics:
min: 0.0003s
avg: 0.0039s
max: 0.3117s
approx. 95 percentile: 0.0106s

Threads fairness:
events (avg/stddev): 15274.0000/0.00
execution time (avg/stddev): 59.9853/0.00

sysbench v0.4.8: multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 64

Extra file open flags: 16384
1 files, 128Mb each
128Mb total file size
Block size 16Kb
Number of random requests for random IO: 1000000
Read/Write ratio for combined random IO test: 1.50
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random write test
Threads started!
Time limit exceeded, exiting...
(last message repeated 63 times)
Done.

Operations performed: 0 Read, 13360 Write, 0 Other = 13360 Total
Read 0b Written 208.75Mb Total transferred 208.75Mb (3.4615Mb/sec)
221.54 Requests/sec executed

Test execution summary:
total time: 60.3062s
total number of events: 13360
total time taken by event execution: 3847.7447
per-request statistics:
min: 0.0003s
avg: 0.2880s
max: 60.2694s
approx. 95 percentile: 0.0175s

Threads fairness:
events (avg/stddev): 208.7500/1157.46
execution time (avg/stddev): 60.1210/0.07

sysbench v0.4.8: multi-threaded system evaluation benchmark

Removing test files...
sysbench v0.4.8: multi-threaded system evaluation benchmark

128 files, 1024Kb each, 128Mb total
Creating files for the test...
sysbench v0.4.8: multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 64

Extra file open flags: 16384
128 files, 1Mb each
128Mb total file size
Block size 16Kb
Number of random requests for random IO: 1000000
Read/Write ratio for combined random IO test: 1.50
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random write test
Threads started!
Time limit exceeded, exiting...
(last message repeated 63 times)
Done.

Operations performed: 0 Read, 14728 Write, 0 Other = 14728 Total
Read 0b Written 230.12Mb Total transferred 230.12Mb (3.8192Mb/sec)
244.43 Requests/sec executed

Test execution summary:
total time: 60.2541s
total number of events: 14728
total time taken by event execution: 3846.4353
per-request statistics:
min: 0.0004s
avg: 0.2612s
max: 1.5574s
approx. 95 percentile: 0.5732s

Threads fairness:
events (avg/stddev): 230.1250/10.67
execution time (avg/stddev): 60.1006/0.03

sysbench v0.4.8: multi-threaded system evaluation benchmark

128 files, 16384Kb each, 2048Mb total
Creating files for the test...
sysbench v0.4.8: multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1

Extra file open flags: 16384
128 files, 16Mb each
2Gb total file size
Block size 16Kb
Number of random requests for random IO: 1000000
Read/Write ratio for combined random IO test: 1.50
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random read test
Threads started!
Time limit exceeded, exiting...
Done.

Operations performed: 8496 Read, 0 Write, 0 Other = 8496 Total
Read 132.75Mb Written 0b Total transferred 132.75Mb (2.2124Mb/sec)
141.60 Requests/sec executed

Test execution summary:
total time: 60.0016s
total number of events: 8496
total time taken by event execution: 59.9901
per-request statistics:
min: 0.0003s
avg: 0.0071s
max: 0.2733s
approx. 95 percentile: 0.0107s

Threads fairness:
events (avg/stddev): 8496.0000/0.00
execution time (avg/stddev): 59.9901/0.00

sysbench v0.4.8: multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 64

Extra file open flags: 16384
128 files, 16Mb each
2Gb total file size
Block size 16Kb
Number of random requests for random IO: 1000000
Read/Write ratio for combined random IO test: 1.50
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random read test
Threads started!
Time limit exceeded, exiting...
(last message repeated 63 times)
Done.

Operations performed: 9497 Read, 0 Write, 0 Other = 9497 Total
Read 148.39Mb Written 0b Total transferred 148.39Mb (2.4566Mb/sec)
157.22 Requests/sec executed

Test execution summary:
total time: 60.4044s
total number of events: 9497
total time taken by event execution: 3854.1290
per-request statistics:
min: 0.0094s
avg: 0.4058s
max: 1.1867s
approx. 95 percentile: 0.7207s

Threads fairness:
events (avg/stddev): 148.3906/3.90
execution time (avg/stddev): 60.2208/0.12

sysbench v0.4.8: multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1

Extra file open flags: 16384
128 files, 16Mb each
2Gb total file size
Block size 16Kb
Number of random requests for random IO: 1000000
Read/Write ratio for combined random IO test: 1.50
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random write test
Threads started!
Time limit exceeded, exiting...
Done.

Operations performed: 0 Read, 12236 Write, 0 Other = 12236 Total
Read 0b Written 191.19Mb Total transferred 191.19Mb (3.1862Mb/sec)
203.91 Requests/sec executed

Test execution summary:
total time: 60.0056s
total number of events: 12236
total time taken by event execution: 59.9917
per-request statistics:
min: 0.0003s
avg: 0.0049s
max: 0.3623s
approx. 95 percentile: 0.0117s

Threads fairness:
events (avg/stddev): 12236.0000/0.00
execution time (avg/stddev): 59.9917/0.00

sysbench v0.4.8: multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 64

Extra file open flags: 16384
128 files, 16Mb each
2Gb total file size
Block size 16Kb
Number of random requests for random IO: 1000000
Read/Write ratio for combined random IO test: 1.50
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random write test
Threads started!
Time limit exceeded, exiting...
(last message repeated 63 times)
Done.

Operations performed: 0 Read, 12251 Write, 0 Other = 12251 Total
Read 0b Written 191.42Mb Total transferred 191.42Mb (3.1792Mb/sec)
203.47 Requests/sec executed

Test execution summary:
total time: 60.2109s
total number of events: 12251
total time taken by event execution: 3847.4821
per-request statistics:
min: 0.0039s
avg: 0.3141s
max: 1.8187s
approx. 95 percentile: 0.6672s

Threads fairness:
events (avg/stddev): 191.4219/8.33
execution time (avg/stddev): 60.1169/0.04

sysbench v0.4.8: multi-threaded system evaluation benchmark

Removing test files...

MySQL Master-Master replication table sync

I saw a post by Baron mentioning that his Maatkit toolkit is best for handling situations where a master-master replication setup has gone out of sync.

If you think Baron was blowing his own trumpet, he has good reason to. I have used his mk-archiver tool, part of Maatkit, to make the problem of archiving and purging data much easier; it was far less work than rolling my own solution.

Anyhow, I have a master-master replication pair just lying around to test this kind of stuff, after finishing the multi-instance master-master replication pair last week. Plus I already have some past experience using the table-checksum tool which is part of Maatkit (or MySQL Toolkit as it used to be known). Amazing how you publish some stuff and then you get the next idea for an article almost immediately.

Overview:

To test Baron’s assertion that Maatkit mk-table-sync is the best tool to re-sync a master-master replication pair.

Install:

  1. Setup a master-master replication pair.
  2. Download and install Maatkit.
  3. Load the Sakila sample database.
  4. Make some changes on one master which are not replicated to the other.
  5. Verify that the tables are out-of-sync using mk-table-checksum
  6. Use mk-table-sync to re-sync the tables.

Summary:

Again Maatkit lives up to my expectations. Both mk-table-checksum and mk-table-sync discovered and re-synced the tables without any issues.
On a side note, without this toolkit, you could still have rebuilt the other master from the dedicated slave, but this is so much faster and easier.

Ideas for the motivated reader:

  1. As always be careful when using set global sql_slave_skip_counter = 1;
  2. Download and test the Maatkit tools for yourself.
  3. Run mk-table-checksum periodically to discover whether your slaves are actually consistent with your master (see the sketch after this list).
  4. Having multiple recovery methods is good, you should be experienced and confident to use them all.
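
For point 3, a sketch of one way to schedule the checks; the host names, credentials, database list and log path are placeholders for your own values:

# /etc/cron.d/mk-table-checksum (sketch only)
# Checksum the sakila schema against both masters nightly and keep the output for comparison.
0 2 * * * root mk-table-checksum h=master1,u=root,p=XXXX h=master2,u=root,p=XXXX,P=3308 --databases=sakila >> /var/log/mk-table-checksum.log 2>&1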

Detailed Screen Dump with comments:



Maatkit Dependencies

yum install perl-DBI perl-DBD-MySQL gcc
Time::HiRes

wget http://search.cpan.org/CPAN/authors/id/J/JH/JHI/Time-HiRes-1.9712.tar.gz

download maatkit-1753 from Sourceforge

perl Makefile.PL
make install

Break the master-master replication

On master2:

stop slave;

On master1:

mysql> use sakila
Database changed

mysql> insert into film values(1001,'Welcome to DBA Dojo','A place on the way towards being a DBA',2007,1,1,30,45,25.00,90000.00,'G','Behind the Scenes',now());
Query OK, 1 row affected, 1 warning (0.00 sec)

mysql> commit;
Query OK, 0 rows affected (0.00 sec)


mysql> select * from film where film_id = 1001;
+---------+---------------------+----------------------------------------+--------------+-------------+----------------------+-----------------+-------------+--------+------------------+--------+-------------------+---------------------+
| film_id | title | description | release_year | language_id | original_language_id | rental_duration | rental_rate | length | replacement_cost | rating | special_features | last_update |
+---------+---------------------+----------------------------------------+--------------+-------------+----------------------+-----------------+-------------+--------+------------------+--------+-------------------+---------------------+
| 1001 | Welcome to DBA Dojo | A place on the way towards being a DBA | 2007 | 1 | 1 | 30 | 45.00 | 25 | 999.99 | G | Behind the Scenes | 2008-03-03 05:42:58 |
+---------+---------------------+----------------------------------------+--------------+-------------+----------------------+-----------------+-------------+--------+------------------+--------+-------------------+---------------------+
1 row in set (0.00 sec)

mysql> select * from film where film_id = 1001\G
*************************** 1. row ***************************
film_id: 1001
title: Welcome to DBA Dojo
description: A place on the way towards being a DBA
release_year: 2007
language_id: 1
original_language_id: 1
rental_duration: 30
rental_rate: 45.00
length: 25
replacement_cost: 999.99
rating: G
special_features: Behind the Scenes
last_update: 2008-03-03 05:42:58
1 row in set (0.00 sec)

On master 2

mysql> set global sql_slave_skip_counter = 1;
Query OK, 0 rows affected (0.00 sec)

mysql> start slave;
Query OK, 0 rows affected (0.00 sec)

mysql> select * from film where film_id = 1001\G
Empty set (0.00 sec)

On Master 1 run a simple table checksum


mk-table-checksum h=master1,u=root,p=$PASSWD h=master1 --databases=sakila

DATABASE TABLE CHUNK HOST ENGINE COUNT CHECKSUM TIME WAIT STAT LAG
sakila actor 0 master1 MyISAM NULL 3596356558 0 0 NULL NULL
sakila actor 0 master1 MyISAM NULL 3596356558 0 0 NULL NULL
sakila address 0 master1 MyISAM NULL 3083097758 0 0 NULL NULL
sakila category 0 master1 MyISAM NULL 2281594170 0 0 NULL NULL
sakila category 0 master1 MyISAM NULL 2281594170 0 0 NULL NULL
sakila city 0 master1 MyISAM NULL 1881669182 0 0 NULL NULL
sakila address 0 master1 MyISAM NULL 3083097758 0 0 NULL NULL
sakila city 0 master1 MyISAM NULL 1881669182 0 0 NULL NULL
sakila country 0 master1 MyISAM NULL 3658016321 0 0 NULL NULL
sakila country 0 master1 MyISAM NULL 3658016321 0 0 NULL NULL
sakila customer 0 master1 MyISAM NULL 1332169016 0 0 NULL NULL
sakila customer 0 master1 MyISAM NULL 1332169016 0 0 NULL NULL
sakila film 0 master1 MyISAM NULL 1490639089 0 0 NULL NULL
sakila film 0 master1 MyISAM NULL 1490639089 0 0 NULL NULL
sakila film_actor 0 master1 MyISAM NULL 3128610213 0 0 NULL NULL
sakila film_actor 0 master1 MyISAM NULL 3128610213 0 0 NULL NULL
sakila film_category 0 master1 MyISAM NULL 3646644932 0 0 NULL NULL
sakila film_category 0 master1 MyISAM NULL 3646644932 0 0 NULL NULL
sakila film_text 0 master1 MyISAM NULL 2391883145 0 0 NULL NULL
sakila film_text 0 master1 MyISAM NULL 2391883145 0 0 NULL NULL
sakila inventory 0 master1 MyISAM NULL 3471334076 0 0 NULL NULL
sakila inventory 0 master1 MyISAM NULL 3471334076 0 0 NULL NULL
sakila language 0 master1 MyISAM NULL 19972916 0 0 NULL NULL
sakila language 0 master1 MyISAM NULL 19972916 0 0 NULL NULL
sakila payment 0 master1 MyISAM NULL 684052380 1 0 NULL NULL
sakila payment 0 master1 MyISAM NULL 684052380 0 0 NULL NULL
sakila rental 0 master1 MyISAM NULL 2658764859 0 0 NULL NULL
sakila rental 0 master1 MyISAM NULL 2658764859 0 0 NULL NULL
sakila staff 0 master1 MyISAM NULL 1172551672 0 0 NULL NULL
sakila staff 0 master1 MyISAM NULL 1172551672 0 0 NULL NULL
sakila store 0 master1 MyISAM NULL 1107595282 0 0 NULL NULL
sakila store 0 master1 MyISAM NULL 1107595282 0 0 NULL NULL

Check the local slave


mk-table-checksum h=master1,u=root,p=$PASSWD h=slave1 --databases=sakila
DATABASE TABLE CHUNK HOST ENGINE COUNT CHECKSUM TIME WAIT STAT LAG
sakila actor 0 master1 MyISAM NULL 3596356558 0 0 NULL NULL
sakila actor 0 slave1 MyISAM NULL 3596356558 0 0 NULL NULL
sakila address 0 master1 MyISAM NULL 3083097758 0 0 NULL NULL
sakila address 0 slave1 MyISAM NULL 3083097758 0 0 NULL NULL
sakila category 0 master1 MyISAM NULL 2281594170 0 0 NULL NULL
sakila category 0 slave1 MyISAM NULL 2281594170 0 0 NULL NULL
sakila city 0 master1 MyISAM NULL 1881669182 0 0 NULL NULL
sakila city 0 slave1 MyISAM NULL 1881669182 0 0 NULL NULL
sakila country 0 master1 MyISAM NULL 3658016321 0 0 NULL NULL
sakila country 0 slave1 MyISAM NULL 3658016321 0 0 NULL NULL
sakila customer 0 master1 MyISAM NULL 1332169016 0 0 NULL NULL
sakila customer 0 slave1 MyISAM NULL 1332169016 0 0 NULL NULL
sakila film 0 master1 MyISAM NULL 1490639089 0 0 NULL NULL
sakila film 0 slave1 MyISAM NULL 1490639089 0 0 NULL NULL
sakila film_actor 0 master1 MyISAM NULL 3128610213 0 0 NULL NULL
sakila film_actor 0 slave1 MyISAM NULL 3128610213 0 0 NULL NULL
sakila film_category 0 master1 MyISAM NULL 3646644932 0 0 NULL NULL
sakila film_category 0 slave1 MyISAM NULL 3646644932 0 0 NULL NULL
sakila film_text 0 master1 MyISAM NULL 2391883145 0 0 NULL NULL
sakila film_text 0 slave1 MyISAM NULL 2391883145 0 0 NULL NULL
sakila inventory 0 master1 MyISAM NULL 3471334076 0 0 NULL NULL
sakila inventory 0 slave1 MyISAM NULL 3471334076 0 0 NULL NULL
sakila language 0 master1 MyISAM NULL 19972916 0 0 NULL NULL
sakila language 0 slave1 MyISAM NULL 19972916 0 0 NULL NULL
sakila payment 0 master1 MyISAM NULL 684052380 0 0 NULL NULL
sakila payment 0 slave1 MyISAM NULL 684052380 0 0 NULL NULL
sakila rental 0 master1 MyISAM NULL 2658764859 0 0 NULL NULL
sakila rental 0 slave1 MyISAM NULL 2658764859 0 0 NULL NULL
sakila staff 0 slave1 MyISAM NULL 1172551672 0 0 NULL NULL
sakila staff 0 master1 MyISAM NULL 1172551672 0 0 NULL NULL
sakila store 0 master1 MyISAM NULL 1107595282 0 0 NULL NULL
sakila store 0 slave1 MyISAM NULL 1107595282 0 0 NULL NULL


Creating the CHECKSUM table

mysql> use mysql
Database changed
mysql> CREATE TABLE checksum (
-> db char(64) NOT NULL,
-> tbl char(64) NOT NULL,
-> chunk int NOT NULL,
-> boundaries char(64) NOT NULL,
-> this_crc char(40) NOT NULL,
-> this_cnt int NOT NULL,
-> master_crc char(40) NULL,
-> master_cnt int NULL,
-> ts timestamp NOT NULL,
-> PRIMARY KEY (db, tbl, chunk)
-> ) ENGINE=InnoDB;

Forget the port and you will get this error

mk-table-checksum h=master1,u=root,p=$PASSWD h=master2,u=root,p=$PASSWD \
--databases=sakila

DBI connect(';host=master2;mysql_read_default_group=mysql','root',...) failed: Can't connect to MySQL server on 'master2

And we have a winner, mk-table-checksum has found the issue with the film table

mk-table-checksum h=master1,u=root,p=$PASSWD \
h=master2,u=root,p=$PASSWD,P=3308 --databases=sakila

DATABASE TABLE CHUNK HOST ENGINE COUNT CHECKSUM TIME WAIT STAT LAG
...
sakila film 0 master1 MyISAM NULL 1490639089 0 0 NULL NULL
sakila film 0 master2 MyISAM NULL 1421174266 0 0 NULL NULL
...

Now use Baron's mk-table-sync example... testing first.

mk-table-sync --synctomaster h=master2,u=root,p=$PASSWD,P=3308,D=sakila,t=film \
--test

# Syncing D=sakila,P=3308,h=master2,p=...,t=film,u=root
# DELETE REPLACE INSERT UPDATE ALGORITHM DATABASE.TABLE
# 0 0 0 0 Chunk sakila.film

Run the real thing

mk-table-sync --synctomaster h=master2,u=root,p=$PASSWD,P=3308,D=sakila,t=film \
--verbose --execute

# Syncing D=sakila,P=3308,h=master2,p=...,t=film,u=root
# DELETE REPLACE INSERT UPDATE ALGORITHM DATABASE.TABLE
# 0 1 0 0 Chunk sakila.film


On master2

mysql> select * from film where film_id = 1001\G
*************************** 1. row ***************************
film_id: 1001
title: Welcome to DBA Dojo
description: A place on the way towards being a DBA
release_year: 2007
language_id: 1
original_language_id: 1
rental_duration: 30
rental_rate: 45.00
length: 25
replacement_cost: 999.99
rating: G
special_features: Behind the Scenes
last_update: 2008-03-03 05:42:58
1 row in set (0.00 sec)

On Slave 2, which is a cascade slave of master 1

mysql -u root -p$PASSWD -S /tmp/mysql2.sock
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 5 to server version: 5.1.20-beta-log

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> use sakila
Database changed
mysql> select * from film where film_id = 1001\G
*************************** 1. row ***************************
film_id: 1001
title: Welcome to DBA Dojo
description: A place on the way towards being a DBA
release_year: 2007
language_id: 1
original_language_id: 1
rental_duration: 30
rental_rate: 45.00
length: 25
replacement_cost: 999.99
rating: G
special_features: Behind the Scenes
last_update: 2008-03-03 05:42:58
1 row in set (0.00 sec)

MySQL Multi Master-Master on EC2

Background:

As I mentioned in the previous post on running multiple MySQL instances on EC2 using mysqld_multi, that was the first step towards running two or more EC2 Amazon Machine Images (AMIs) as a Multi-instance Master-Master (MIMM) replication cluster.
The idea came from a blog article on Apokalyptik.com.
It is about improving the availability of your databases on EC2 and allowing easy backups without affecting either master instance.
The other benefit I see is that both masters are read-write, so you can choose your load-balancer and not worry about what to do if one of the master instances goes down.
I am going to test using mysql-proxy to perform that load-balancing role for a multi master replication cluster next.
Then we are going to hammer the hell out of the configuration using mysqlslap, then sysbench and then the granddaddy of OLTP benchmarks, DBT.

If you follow the screen dump of what I did, you will see I am reusing a bunch of previous work; each of the previous steps and posts builds on the last.

Summary:

  1. The most complex part of this was getting the /etc/my.cnf right. The rest was plain vanilla MySQL replication. You can view myMIMM.cnf
  2. Some of the links below never mention log-slave-updates. This is fine if you're in a circular or one-way replication setup. In this setup each master is a slave of the other, and each master has a dedicated slave. Propagating changes from one master to its dedicated slave needs log-slave-updates on each master instance.
  3. mysqld_multi stop does not seem to stop the instances. I had to use mysqladmin shutdown instead, connecting to specific sockets (see the sketch below).
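
Since mysqld_multi stop was unreliable for me, a minimal sketch of shutting each instance down by its socket with mysqladmin (the socket paths are the ones used throughout this post; adjust to yours):

for SOCK in /tmp/mysql1.sock /tmp/mysql2.sock
do
    mysqladmin shutdown -S $SOCK -u root -p$PASSWD
done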

Other Resources:

http://www.onlamp.com/pub/a/onlamp/2006/04/20/advanced-mysql-replication.html
http://www.howtoforge.com/mysql_master_master_replication
http://capttofu.livejournal.com/1752.html
http://jayant7k.blogspot.com/2006/06/multi-master-replication-in-mysql.html
http://code.google.com/p/mysql-master-master/

Install:

Refer to the above links for more detailed setups. Essentially the high level steps are

  1. Set up your partitioning. I used LVM and my LVM script so I can use snapshots for backups in the future.
  2. Download and install MySQL.
  3. Load your data into that single instance, then shutdown.
  4. Map out your replication topology.
  5. Copy the data to form the basis of all the new master and slave instances.
  6. Modify the /etc/my.cnf so it has the settings for running mysqld_multi. For an example follow my post on Multiple instances or follow the example my.cnf below.
  7. Assign appropriate names, sockets, port numbers and even datafile names so that at 3am you don’t mistake one instance for another.
  8. Modify the /etc/hosts file so you can use nicer names rather than raw IP addresses or external DNS (see the example entries after this list).
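
For step 8, a hedged example of the kind of /etc/hosts entries I mean (the IP addresses are placeholders; in this setup master1/slave1 share one machine and master2/slave2 share the other):

# /etc/hosts on each instance (example addresses only)
10.0.0.1    master1 slave1
10.0.0.2    master2 slave2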

Session Output: comments inline




Run make_mnt_lvm.sh

[root@domU-12-31-38-00-29-35 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 4.0G 1.2G 2.7G 31% /
/dev/mapper/vg-myvmdisk1
30G 77M 28G 1% /data1/mysql
/dev/mapper/vg-myvmdisk2
30G 77M 28G 1% /data2/mysql
/dev/mapper/vg-myvmdisk3
30G 77M 28G 1% /backup/mysql

Copy existing mysql data directory to new homes

cp -R /usr/local/mysql/data/* /data1/mysql/data
cp -R /usr/local/mysql/data/* /data2/mysql/data
mv /usr/local/mysql/data /usr/local/mysql/data_old
chown -R mysql:mysql /data1/mysql/data
chown -R mysql:mysql /data2/mysql/data


[root@domU-12-31-38-00-29-35 mysql]# ls -l /data1/mysql/data
total 1504
-rw-r----- 1 root root 5242880 Feb 25 04:46 ib_logfile0
-rw-r----- 1 root root 5242880 Feb 25 04:46 ib_logfile1
-rw-r----- 1 root root 10485760 Feb 25 04:46 ibdata1
drwxr-x--- 2 root root 4096 Feb 25 04:46 mysql
-rw-r----- 1 root root 263 Feb 25 04:46 mysql-bin.000001
-rw-r----- 1 root root 125 Feb 25 04:46 mysql-bin.000002
-rw-r----- 1 root root 38 Feb 25 04:46 mysql-bin.index
drwxr-x--- 2 root root 4096 Feb 25 04:46 test

Create the multi_mysql aware /etc/my.cnf

[root@domU-12-31-38-00-29-35 mysql]# cat /etc/my.cnf
[client]
port = 3306
socket = /tmp/mysql1.sock

[mysqld_multi]
mysqld = /usr/local/mysql-5.1.20-beta-linux-i686-glibc23/bin/mysqld_safe
mysqladmin = /usr/local/mysql-5.1.20-beta-linux-i686-glibc23/bin/mysqladmin
user = multi_root
password = multi_pass

[mysqld1]
port = 3306
socket = /tmp/mysql1.sock
pid-file = /data1/mysql/data/mysql1.pid
datadir = /data1/mysql/data/
skip-locking
key_buffer = 16M
max_allowed_packet = 1M
table_cache = 64
sort_buffer_size = 512K
net_buffer_length = 8K
read_buffer_size = 256K
read_rnd_buffer_size = 512K
myisam_sort_buffer_size = 8M

# Innodb
innodb_data_home_dir = /data1/mysql/data
innodb_data_file_path = ibdata1:10M:autoextend
innodb_log_group_home_dir = /data1/mysql/data
innodb_log_arch_dir = /data1/mysql/data
innodb_buffer_pool_size = 256M
innodb_additional_mem_pool_size = 20M
# Set .._log_file_size to 25 % of buffer pool size
innodb_log_file_size = 64M
innodb_log_buffer_size = 8M
innodb_flush_log_at_trx_commit = 1
innodb_lock_wait_timeout = 50


log-bin=mysql1-bin

server-id = 1

# For Master-Master replication
auto_increment_increment = 10
auto_increment_offset = 1

# Slave configuration

# The replication master for this slave - required
master-host = master2
#
# The username the slave will use for authentication when connecting
# to the master - required
master-user = repluser
#
# The password the slave will authenticate with when connecting to
# the master - required
master-password = replpassword
#
# The port the master is listening on.
# optional - defaults to 3306
master-port = 3308

[mysqld2]
port = 3307
socket = /tmp/mysql2.sock
pid-file = /data2/mysql/data/mysql2.pid
datadir = /data2/mysql/data
skip-locking
key_buffer = 16M
max_allowed_packet = 1M
table_cache = 64
sort_buffer_size = 512K
net_buffer_length = 8K
read_buffer_size = 256K
read_rnd_buffer_size = 512K
myisam_sort_buffer_size = 8M

# Innodb
innodb_data_home_dir = /data2/mysql/data
innodb_data_file_path = ibdata2:10M:autoextend
innodb_log_group_home_dir = /data2/mysql/data
innodb_log_arch_dir = /data2/mysql/data
innodb_buffer_pool_size = 256M
innodb_additional_mem_pool_size = 20M
# Set .._log_file_size to 25 % of buffer pool size
innodb_log_file_size = 64M
innodb_log_buffer_size = 8M
innodb_flush_log_at_trx_commit = 1
innodb_lock_wait_timeout = 50

log-bin=mysql2-bin
log-slave-updates

server-id = 2

# Slave configuration

# The replication master for this slave - required
master-host = master1
#
# The username the slave will use for authentication when connecting
# to the master - required
master-user = repluser
#
# The password the slave will authenticate with when connecting to
# the master - required
master-password = replpassword
#
# The port the master is listening on.
# optional - defaults to 3306
master-port = 3306


[mysqldump]
quick
max_allowed_packet = 16M

[mysql]
no-auto-rehash

[isamchk]
key_buffer = 20M
sort_buffer_size = 20M
read_buffer = 2M
write_buffer = 2M

[myisamchk]
key_buffer = 20M
sort_buffer_size = 20M
read_buffer = 2M
write_buffer = 2M

[mysqlhotcopy]
interactive-timeout

Second master on separate machine

Notes:
I am using higher server-ids: 3 and 4.
Higher mysqld group names: mysqld3 and mysqld4.
Higher port numbers: 3308 and 3309.

Reason: at 3am you don't want to be working on the wrong instance.


[client]
port = 3308
socket = /tmp/mysql1.sock

[mysqld_multi]
mysqld = /usr/local/mysql-5.1.20-beta-linux-i686-glibc23/bin/mysqld_safe
mysqladmin = /usr/local/mysql-5.1.20-beta-linux-i686-glibc23/bin/mysqladmin
user = multi_root
password = multi_pass

[mysqld3]
port = 3308
socket = /tmp/mysql1.sock
pid-file = /data1/mysql/data/mysql1.pid
datadir = /data1/mysql/data/
skip-locking
key_buffer = 16M
max_allowed_packet = 1M
table_cache = 64
sort_buffer_size = 512K
net_buffer_length = 8K
read_buffer_size = 256K
read_rnd_buffer_size = 512K
myisam_sort_buffer_size = 8M

# Innodb
innodb_data_home_dir = /data1/mysql/data
innodb_data_file_path = ibdata1:10M:autoextend
innodb_log_group_home_dir = /data1/mysql/data
innodb_log_arch_dir = /data1/mysql/data
innodb_buffer_pool_size = 256M
innodb_additional_mem_pool_size = 20M
# Set .._log_file_size to 25 % of buffer pool size
innodb_log_file_size = 64M
innodb_log_buffer_size = 8M
innodb_flush_log_at_trx_commit = 1
innodb_lock_wait_timeout = 50


log-bin=mysql1-bin

server-id = 3

# For Master-Master replication
auto_increment_increment = 10
auto_increment_offset = 2

# Slave configuration

# The replication master for this slave - required
master-host = master1
#
# The username the slave will use for authentication when connecting
# to the master - required
master-user = repluser
#
# The password the slave will authenticate with when connecting to
# the master - required
master-password = replpassword
#
# The port the master is listening on.
# optional - defaults to 3306
master-port = 3306
skip-slave-start

[mysqld4]
port = 3309
socket = /tmp/mysql2.sock
pid-file = /data2/mysql/data/mysql2.pid
datadir = /data2/mysql/data
skip-locking
key_buffer = 16M
max_allowed_packet = 1M
table_cache = 64
sort_buffer_size = 512K
net_buffer_length = 8K
read_buffer_size = 256K
read_rnd_buffer_size = 512K
myisam_sort_buffer_size = 8M

# Innodb
innodb_data_home_dir = /data2/mysql/data
innodb_data_file_path = ibdata2:10M:autoextend
innodb_log_group_home_dir = /data2/mysql/data
innodb_log_arch_dir = /data2/mysql/data
innodb_buffer_pool_size = 256M
innodb_additional_mem_pool_size = 20M
# Set .._log_file_size to 25 % of buffer pool size
innodb_log_file_size = 64M
innodb_log_buffer_size = 8M
innodb_flush_log_at_trx_commit = 1
innodb_lock_wait_timeout = 50

log-bin=mysql2-bin
log-slave-updates

server-id = 4

# Slave configuration

# The replication master for this slave - required
master-host = master2
#
# The username the slave will use for authentication when connecting
# to the master - required
master-user = repluser
#
# The password the slave will authenticate with when connecting to
# the master - required
master-password = replpassword
#
# The port the master is listening on.
# optional - defaults to 3306
master-port = 3308


[mysqldump]
quick
max_allowed_packet = 16M

[mysql]
no-auto-rehash

[isamchk]
key_buffer = 20M
sort_buffer_size = 20M
read_buffer = 2M
write_buffer = 2M

[myisamchk]
key_buffer = 20M
sort_buffer_size = 20M
read_buffer = 2M
write_buffer = 2M

[mysqlhotcopy]
interactive-timeout


Starting up

mysqld_multi start 1 --verbose --password=multi_pass
mysqld_multi start 2 --verbose --password=multi_pass
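
mysqld_multi can also report which groups it thinks are running, which is a quicker first check than ps:

mysqld_multi report --password=multi_pass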


[root@domU-12-31-38-00-29-35 data]# ps -ef|grep mysqld_safe
root 3178 1 0 04:59 ttyp0 00:00:00 /bin/sh

/usr/local/mysql-5.1.20-beta-linux-i686-glibc23/bin/mysqld_safe --port=3306 --socket=/tmp/mysql1.sock

--pid-file=/data1/mysql/data/mysql1.pid --datadir=/data1/mysql/data/ --skip-locking --key_buffer=16M

--max_allowed_packet=1M --table_cache=64 --sort_buffer_size=512K --net_buffer_length=8K --read_buffer_size=256K

--read_rnd_buffer_size=512K --myisam_sort_buffer_size=8M --innodb_data_home_dir=/data1/mysql/data

--innodb_data_file_path=ibdata1:10M:autoextend --innodb_log_group_home_dir=/data1/mysql/data

--innodb_log_arch_dir=/data1/mysql/data --log-bin=mysql1-bin --server-id=1
root 3669 1 0 05:21 ttyp0 00:00:00 /bin/sh

/usr/local/mysql-5.1.20-beta-linux-i686-glibc23/bin/mysqld_safe --port=3307 --socket=/tmp/mysql2.sock

--pid-file=/data2/mysql/data/mysql2.pid --datadir=/data2/mysql/data/ --log-error=/data2/mysql/data/ --skip-locking

--key_buffer=16M --max_allowed_packet=1M --table_cache=64 --sort_buffer_size=512K --net_buffer_length=8K

--read_buffer_size=256K --read_rnd_buffer_size=512K --myisam_sort_buffer_size=8M

--innodb_data_home_dir=/data2/mysql/data --innodb_data_file_path=ibdata2:10M:autoextend

--innodb_log_group_home_dir=/data2/mysql/data --innodb_log_arch_dir=/data2/mysql/data

--innodb_buffer_pool_size=128M --log-bin=mysql2-bin --server-id=2

Checking connectivity

[root@domU-12-31-38-00-29-35 data]# mysql -u root -p -S /tmp/mysql1.sock
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.1.20-beta-log MySQL Community Server (GPL)

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> show variables like '%dir%';
+--------------------+-----------------------------------------------------------------------+
| Variable_name | Value |
+--------------------+-----------------------------------------------------------------------+
| basedir | /usr/local/mysql-5.1.20-beta-linux-i686-glibc23/ |
| character_sets_dir | /usr/local/mysql-5.1.20-beta-linux-i686-glibc23/share/mysql/charsets/ |
| datadir | /data1/mysql/data/ |
| plugin_dir | /usr/local/mysql-5.1.20-beta-linux-i686-glibc23/lib/mysql |
| slave_load_tmpdir | /tmp |
| tmpdir | /tmp |
+--------------------+-----------------------------------------------------------------------+
6 rows in set (0.01 sec)

mysql> exit
Bye

[root@domU-12-31-38-00-29-35 data]# mysql -u root -p -S /tmp/mysql2.sock
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 6
Server version: 5.1.20-beta-log MySQL Community Server (GPL)

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> show variables like '%dir%';
+--------------------+-----------------------------------------------------------------------+
| Variable_name | Value |
+--------------------+-----------------------------------------------------------------------+
| basedir | /usr/local/mysql-5.1.20-beta-linux-i686-glibc23/ |
| character_sets_dir | /usr/local/mysql-5.1.20-beta-linux-i686-glibc23/share/mysql/charsets/ |
| datadir | /data2/mysql/data/ |
| plugin_dir | /usr/local/mysql-5.1.20-beta-linux-i686-glibc23/lib/mysql |
| slave_load_tmpdir | /tmp |
| tmpdir | /tmp |
+--------------------+-----------------------------------------------------------------------+
6 rows in set (0.00 sec)

Fire up the new machine and set it up as a slave of master1 (this will be master2 soon)

[root@ip-10-251-75-150 ~]# mysqld_multi --verbose --password=multi_pass start 3
[root@ip-10-251-75-150 ~]# ps -ef|grep mysqld_safe
root 3006 1 0 05:34 ttyp0 00:00:00 /bin/sh

/usr/local/mysql-5.1.20-beta-linux-i686-glibc23/bin/mysqld_safe --port=3308 --socket=/tmp/mysql1.sock

--pid-file=/data1/mysql/data/mysql1.pid --datadir=/data1/mysql/data/ --skip-locking --key_buffer=16M

--max_allowed_packet=1M --table_cache=64 --sort_buffer_size=512K --net_buffer_length=8K --read_buffer_size=256K

--read_rnd_buffer_size=512K --myisam_sort_buffer_size=8M --innodb_data_home_dir=/data1/mysql/data

--innodb_data_file_path=ibdata1:10M:autoextend --innodb_log_group_home_dir=/data1/mysql/data

--innodb_log_arch_dir=/data1/mysql/data --innodb_buffer_pool_size=256M --innodb_additional_mem_pool_size=20M

--innodb_log_file_size=64M --innodb_log_buffer_size=8M --innodb_flush_log_at_trx_commit=1

--innodb_lock_wait_timeout=50 --log-bin=mysql1-bin --server-id=3 --auto_increment_increment=10

--auto_increment_offset=2 --master-host=master1 --master-user=repluser --master-password=replpassword

--master-port=3306 --skip-slave-start
root 3235 2791 0 05:34 ttyp0 00:00:00 grep mysqld_safe


mysql -u root -p$PASSWD -S /tmp/mysql1.sock

Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.1.20-beta-log MySQL Community Server (GPL)

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> GRANT REPLICATION SLAVE ON *.* to repluser@'slave2' identified by 'replpassword';
Query OK, 0 rows affected (0.00 sec)

mysql> GRANT REPLICATION SLAVE ON *.* to repluser@'master1' identified by 'replpassword';
Query OK, 0 rows affected (0.00 sec)

mysql> flush privileges
-> ;
Query OK, 0 rows affected (0.00 sec)

Add privileges on Master1

mysql> GRANT REPLICATION SLAVE ON *.* to repluser@'master2' identified by 'replpassword';
Query OK, 0 rows affected (0.00 sec)

Test connectivity from master2 to master1

[root@ip-10-251-75-150 ~]# mysql -u repluser -preplpassword --host=master1 --port=3306
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 5
Server version: 5.1.20-beta-log MySQL Community Server (GPL)

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql>

Check master position on master1

mysql> show master status\G
*************************** 1. row ***************************
File: mysql1-bin.000001
Position: 475
Binlog_Do_DB:
Binlog_Ignore_DB:
1 row in set (0.00 sec)
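
Here I am relying on the master-* options in my.cnf to point the new slave at master1. If you would rather set it explicitly (or you are on a version where those options have been removed), the equivalent CHANGE MASTER TO on master2, using the file and position shown above, would look something like this:

CHANGE MASTER TO
    MASTER_HOST='master1',
    MASTER_PORT=3306,
    MASTER_USER='repluser',
    MASTER_PASSWORD='replpassword',
    MASTER_LOG_FILE='mysql1-bin.000001',
    MASTER_LOG_POS=475;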

Start slave on master2

mysql> start slave;
Query OK, 0 rows affected (0.00 sec)

mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: master1
Master_User: repluser
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql1-bin.000001
Read_Master_Log_Pos: 475
Relay_Log_File: mysql1-relay-bin.000003
Relay_Log_Pos: 621
Relay_Master_Log_File: mysql1-bin.000001
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 475
Relay_Log_Space: 883
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
1 row in set (0.00 sec)

Set up master1 as a slave of master2

Test connectivity from master1 to master2

[root@ip-10-251-69-190 ~]# mysql -u repluser -preplpassword --host=master2 --port=3308
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 5
Server version: 5.1.20-beta-log MySQL Community Server (GPL)

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.


Check master status on master2

mysql -u root -p$PASSWD -S /tmp/mysql1.sock

Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 6
Server version: 5.1.20-beta-log MySQL Community Server (GPL)

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> show master status\G
*************************** 1. row ***************************
File: mysql1-bin.000003
Position: 106
Binlog_Do_DB:
Binlog_Ignore_DB:
1 row in set (0.00 sec)

On master1 start slave

[root@ip-10-251-69-190 ~]# mysql -u root -p$PASSWD -P 3306
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 9
Server version: 5.1.20-beta-log MySQL Community Server (GPL)

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> start slave;
Query OK, 0 rows affected (0.00 sec)

mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: master2
Master_User: repluser
Master_Port: 3308
Connect_Retry: 60
Master_Log_File: mysql1-bin.000003
Read_Master_Log_Pos: 106
Relay_Log_File: mysql1-relay-bin.000004
Relay_Log_Pos: 252
Relay_Master_Log_File: mysql1-bin.000003
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 106
Relay_Log_Space: 554
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
1 row in set (0.00 sec)

OK, now fire up the dedicated slave instance on each master machine


Addition of replication settings

# Slave configuration

# The replication master for this slave - required
master-host = master1
#
# The username the slave will use for authentication when connecting
# to the master - required
master-user = repluser
#
# The password the slave will authenticate with when connecting to
# the master - required
master-password = replpassword
#
# The port the master is listening on.
# optional - defaults to 3306
master-port = 3306
log-slave-updates
skip-slave-start


Set up the replication user

mysql -u root -p$PASSWD -S /tmp/mysql1.sock
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 4
Server version: 5.1.20-beta-log MySQL Community Server (GPL)

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> GRANT REPLICATION SLAVE ON *.* to repluser@'slave1' identified by 'replpassword';
Query OK, 0 rows affected (0.00 sec)

mysql> GRANT REPLICATION SLAVE ON *.* to repluser@'master1' identified by 'replpassword';
Query OK, 0 rows affected (0.00 sec)

mysql> flush privileges
-> ;
Query OK, 0 rows affected (0.00 sec)

On Master mysqld 1
mysql> show master status\G
*************************** 1. row ***************************
File: mysql1-bin.000002
Position: 320
Binlog_Do_DB:
Binlog_Ignore_DB:
1 row in set (0.00 sec)

Test connectivity for slave replication user

mysql -u repluser -preplpassword --host=master1 --port=3306

Start slave if it isn't already

mysql> start slave;
Query OK, 0 rows affected (0.00 sec)

Check the status.

Note: Last_IO_Error is just that, the last recorded error; it does not necessarily reflect the current state of replication.
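
A quick one-liner to watch just the interesting replication fields instead of the full output (adjust the socket to the instance you are checking):

mysql -u root -p$PASSWD -S /tmp/mysql2.sock -e "show slave status\G" \
| egrep 'Running|Seconds_Behind_Master|Last_IO_Err|Last_SQL_Err'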



mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: master1
Master_User: repluser
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql1-bin.000002
Read_Master_Log_Pos: 535
Relay_Log_File: mysql2-relay-bin.000003
Relay_Log_Pos: 681
Relay_Master_Log_File: mysql1-bin.000002
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 535
Relay_Log_Space: 983
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 1130
Last_IO_Error: error connecting to master 'repluser@master1:3306' - retry-time: 60 retries: 86400
Last_SQL_Errno: 0
Last_SQL_Error:
1 row in set (0.00 sec)


Load some sample data into the master1 instance

http://dev.mysql.com/doc/sakila/en/sakila.html

[root@ip-10-251-69-190 ~]# cd /backup/mysql/
[root@ip-10-251-69-190 mysql]# wget http://downloads.mysql.com/docs/sakila-db.zip
--05:55:09-- http://downloads.mysql.com/docs/sakila-db.zip
=> `sakila-db.zip'
Resolving downloads.mysql.com... 213.136.52.53
Connecting to downloads.mysql.com|213.136.52.53|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 670,566 (655K) [application/x-zip]

100%[============================================================================>] 670,566 281.15K/s

05:55:13 (280.50 KB/s) - `sakila-db.zip' saved [670566/670566]



[root@domU-12-31-38-00-29-35 sakila-db]# mysql -u root -p$PASSWD -S /tmp/mysql1.sock
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 20
Server version: 5.1.20-beta-log MySQL Community Server (GPL)

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> source sakila-schema.sql

...

mysql> source sakila-data.sql
...

mysql> use sakila
Database changed
mysql> show tables;
+----------------------------+
| Tables_in_sakila |
+----------------------------+
| actor |
| actor_info |
| address |
| category |
| city |
| country |
| customer |
| customer_list |
| film |
| film_actor |
| film_category |
| film_list |
| film_text |
| inventory |
| language |
| nicer_but_slower_film_list |
| payment |
| rental |
| sales_by_film_category |
| sales_by_store |
| staff |
| staff_list |
| store |
+----------------------------+
23 rows in set (0.00 sec)

mysql> SELECT COUNT(*) FROM film;
+----------+
| COUNT(*) |
+----------+
| 1000 |
+----------+
1 row in set (0.00 sec)

mysql> SELECT COUNT(*) FROM film_text;
+----------+
| COUNT(*) |
+----------+
| 1000 |
+----------+
1 row in set (0.00 sec)

Checking that the data propagated to the dedicated slave

mysql -u root -p$PASSWD -S /tmp/mysql2.sock
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 33
Server version: 5.1.20-beta-log MySQL Community Server (GPL)

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| sakila |
| test |
+--------------------+
4 rows in set (0.00 sec)

mysql> use sakila;
Database changed
mysql> show tables;
+----------------------------+
| Tables_in_sakila |
+----------------------------+
| actor |
| actor_info |
| address |
| category |
| city |
| country |
| customer |
| customer_list |
| film |
| film_actor |
| film_category |
| film_list |
| film_text |
| inventory |
| language |
| nicer_but_slower_film_list |
| payment |
| rental |
| sales_by_film_category |
| sales_by_store |
| staff |
| staff_list |
| store |
+----------------------------+
23 rows in set (0.00 sec)

mysql> SELECT COUNT(*) FROM film;
+----------+
| COUNT(*) |
+----------+
| 1000 |
+----------+
1 row in set (0.00 sec)

mysql> SELECT COUNT(*) FROM film_text;
+----------+
| COUNT(*) |
+----------+
| 1000 |
+----------+
1 row in set (0.00 sec)

Checking master2

[root@ip-10-251-75-150 ~]# mysql -u root -p$PASSWD -P 3308
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 8
Server version: 5.1.20-beta-log MySQL Community Server (GPL)

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| sakila |
| test |
+--------------------+
4 rows in set (0.00 sec)

mysql> use sakila
Database changed
mysql> SELECT COUNT(*) FROM film;
+----------+
| COUNT(*) |
+----------+
| 1000 |
+----------+
1 row in set (0.00 sec)

Checking slave2

mysql -u root -p$PASSWD -S /tmp/mysql2.sock
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.1.20-beta-log MySQL Community Server (GPL)

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> start slave;
Query OK, 0 rows affected (0.00 sec)

mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: master2
Master_User: repluser
Master_Port: 3308
Connect_Retry: 60
Master_Log_File: mysql1-bin.000003
Read_Master_Log_Pos: 106
Relay_Log_File: mysql2-relay-bin.000004
Relay_Log_Pos: 252
Relay_Master_Log_File: mysql1-bin.000003
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 106
Relay_Log_Space: 554
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
1 row in set (0.00 sec)

mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| test |
+--------------------+
3 rows in set (0.00 sec)

Where is the sakila database?!?

This is related to log-slave-updates!
http://dev.mysql.com/doc/refman/5.1/en/replication-options.html#option_mysqld_log-slave-updates

Master2 applied the changes from master1; however, without log-slave-updates it never recorded those changes in its own binlog, so slave2 never received them!
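
A hedged sketch of the fix: add log-slave-updates to the master2 group on the second machine, restart that instance, and then re-sync slave2 (the changes applied before the restart are not in master2's binlog, so they have to be copied or mk-table-sync'd across again).

# /etc/my.cnf on the second machine, under [mysqld3]
log-slave-updates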

Stopping the instances...

I found mysqld_multi stop doesn't stop or shut down anything, so you have to shut the
mysql instances down yourself using mysqladmin shutdown -S yourSocket -u root -pyourpassword

mysqladmin shutdown -S /tmp/mysql2.sock -uroot -p$PASSWD
mysqladmin shutdown -S /tmp/mysql1.sock -uroot -p$PASSWD

[root@ip-10-251-75-150 ~]# ps -ef|grep mysql
root 27356 2791 0 06:37 ttyp0 00:00:00 grep mysql

MySQL SolidDB vs MySQLSlap EC2

Overview:

I am always keen to try and test out new technologies. The idea of having a choice of storage engines is a great feature of MySQL. In MySQL 5.1, the pluggable storage engine API is available so you can write your own storage engine.

In the meantime, there are a couple of MySQL storage engines produced by 3rd party companies. SolidDB is one of them.

I wanted to install and then test the SolidDB engine against mysqlslap. I have heard and read a little about SolidDB in the past. However, I was amazed that if you use the SolidDB build you don't get any of the other storage engines. It is SolidDB or the highway; the other 3rd party engines are not usable.
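
A quick way to confirm what you actually get is to check the engine list on the running SolidDB build (output omitted here):

mysql> SHOW ENGINES;
mysql> SHOW VARIABLES LIKE 'have_%';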

Where to get SolidDB:

To download SolidDB use this link http://dev.soliddb.com/en/download/download_files.php.

I used the MySQL SolidDB 5.0.51.
ftp://developers.soliddb.com/pub/5.0.51-0081/solidmysql-5.0.51-linux-i686-0081.tar.gz

Documentation and FAQ:

If in doubt always have the documentation handy.
http://dev.soliddb.com/doc
http://dev.soliddb.com/doc/doc_html/079/soliddb_storageengine.html

Plus the FAQ is always a good idea when using a product for the first time.
http://www.solidtech.com/en/products/relationaldatabasemanagementsoftware/enterprise/faqs.asp

Summary:

The results for the SolidDB engine were as good as InnoDB's; nothing wildly variant in the numbers. I will run another batch of tests on a larger instance and really push it to see if there are any upper limits.
Once that is out of the way, the next thing would be to throw sysbench and then the DBT benchmark at it.
I have no idea of the level of adoption out in the real world; however, only being able to use the SolidDB engine may be limiting that. I am happy to be proven wrong on this. Feel free to comment.

Setup:



Download MySQL SolidDB
tar -zxvf solidmysql-5.0.51-linux-i686-0081.tar.gz -C /var/lib
cd /var/lib/solidmysql-5.0.51-linux-i686-0081
cp /var/lib/solidmysql-5.0.51-linux-i686-0081/support-files/my-soliddb.cnf /etc/my.cnf
/var/lib/solidmysql-5.0.51-linux-i686-0081/bin/mysqld_safe --default-storage-engine=solidDB &

Sample my.cnf file

more support-files/my-soliddb.cnf
# Example MySQL config file.
#
# You can copy this file to
# /etc/my.cnf to set global options,
# mysql-data-dir/my.cnf to set server-specific options (in this
# installation this directory is /usr/local/mysql/var) or
# ~/.my.cnf to set user-specific options.
#
# In this file, you can use all long options that a program supports.
# If you want to know which options a program supports, run the program
# with the "--help" option.

# The following options will be passed to all MySQL clients
[client]
#password = your_password
port = 3306
socket = /tmp/mysql.sock

# Here follows entries for some specific programs

# The MySQL server
[mysqld]
port = 3306
socket = /tmp/mysql.sock
skip-locking
thread_stack = 64K

# These are solidDB parameters
#
default-storage-engine = soliddb
soliddb_cache_size=256M
soliddb_durability_level=3

[mysqldump]
quick
max_allowed_packet = 16M

[mysql]
no-auto-rehash
# Remove the next comment character if you are not familiar with SQL
#safe-updates

Checking out the SolidDB system variables

./mysql -u root
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.0.51 Source distribution

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> create database solid
-> ;
Query OK, 1 row affected (0.01 sec)

mysql> use solid;
Database changed
mysql> show variables like '%solid%';
+------------------------------+-----------+
| Variable_name | Value |
+------------------------------+-----------+
| have_soliddb | YES |
| soliddb_cache_size | 268435456 |
| soliddb_durability_level | 3 |
| soliddb_logdir | |
| soliddb_backupdir | |
| soliddb_admin_command | |
| soliddb_checkpoint_deletelog | ON |
| soliddb_filespec | |
| soliddb_lock_wait_timeout | 30 |
| soliddb_db_block_size | 0 |
| soliddb_log_block_size | 0 |
| soliddb_backup_block_size | 0 |
| soliddb_checkpoint_interval | 50000 |
| soliddb_checkpoint_time | 300 |
| soliddb_io_threads | 5 |
| soliddb_lockhash_size | 1000000 |
| soliddb_pessimistic | ON |
| soliddb_write_threads | 1 |
| soliddb_extend_increment | 500 |
| soliddb_readahead | 4 |
+------------------------------+-----------+
20 rows in set (0.00 sec)

Test and Results:


mysqlslap --concurrency=1,25,50,100,200 --iterations=10 \
--number-int-cols=2 --number-char-cols=3 --auto-generate-sql \
--engine=soliddb --auto-generate-sql-add-autoincrement \
--auto-generate-sql-load-type=mixed --number-of-queries=1000 --user=root

Benchmark
Running for engine soliddb
Average number of seconds to run all queries: 0.364 seconds
Minimum number of seconds to run all queries: 0.303 seconds
Maximum number of seconds to run all queries: 0.557 seconds
Number of clients running queries: 1
Average number of queries per client: 1000

Benchmark
Running for engine soliddb
Average number of seconds to run all queries: 0.391 seconds
Minimum number of seconds to run all queries: 0.311 seconds
Maximum number of seconds to run all queries: 0.508 seconds
Number of clients running queries: 25
Average number of queries per client: 40

Benchmark
Running for engine soliddb
Average number of seconds to run all queries: 0.418 seconds
Minimum number of seconds to run all queries: 0.392 seconds
Maximum number of seconds to run all queries: 0.461 seconds
Number of clients running queries: 50
Average number of queries per client: 20

Benchmark
Running for engine soliddb
Average number of seconds to run all queries: 0.456 seconds
Minimum number of seconds to run all queries: 0.403 seconds
Maximum number of seconds to run all queries: 0.500 seconds
Number of clients running queries: 100
Average number of queries per client: 10

Benchmark
Running for engine soliddb
Average number of seconds to run all queries: 0.633 seconds
Minimum number of seconds to run all queries: 0.599 seconds
Maximum number of seconds to run all queries: 0.683 seconds
Number of clients running queries: 200
Average number of queries per client: 5


mysqlslap --concurrency=1,25,50,100,200 --iterations=10 \
--number-int-cols=2 --number-char-cols=3 --auto-generate-sql \
--engine=soliddb --auto-generate-sql-add-autoincrement \
--auto-generate-sql-load-type=mixed --number-of-queries=1000 \
--user=root --csv=/tmp/mysqlslap_q1000_soliddb.csv

cat /tmp/mysqlslap_q1000_soliddb.csv

soliddb,mixed,0.371,0.247,0.451,1,1000
soliddb,mixed,0.381,0.337,0.462,25,40
soliddb,mixed,0.418,0.369,0.483,50,20
soliddb,mixed,0.523,0.440,0.639,100,10
soliddb,mixed,0.703,0.634,0.804,200,5

Increase soliddb_cache_size from 256M to 750M

soliddb_cache_size=750M


mysqlslap --concurrency=1,25,50,100,200 --iterations=10 \
--number-int-cols=2 --number-char-cols=3 --auto-generate-sql \
--engine=soliddb --auto-generate-sql-add-autoincrement \
--auto-generate-sql-load-type=mixed --number-of-queries=1000 --user=root

Benchmark
Running for engine soliddb
Average number of seconds to run all queries: 0.336 seconds
Minimum number of seconds to run all queries: 0.240 seconds
Maximum number of seconds to run all queries: 0.425 seconds
Number of clients running queries: 1
Average number of queries per client: 1000

Benchmark
Running for engine soliddb
Average number of seconds to run all queries: 0.382 seconds
Minimum number of seconds to run all queries: 0.331 seconds
Maximum number of seconds to run all queries: 0.454 seconds
Number of clients running queries: 25
Average number of queries per client: 40

Benchmark
Running for engine soliddb
Average number of seconds to run all queries: 0.420 seconds
Minimum number of seconds to run all queries: 0.364 seconds
Maximum number of seconds to run all queries: 0.455 seconds
Number of clients running queries: 50
Average number of queries per client: 20

Benchmark
Running for engine soliddb
Average number of seconds to run all queries: 0.491 seconds
Minimum number of seconds to run all queries: 0.426 seconds
Maximum number of seconds to run all queries: 0.518 seconds
Number of clients running queries: 100
Average number of queries per client: 10

Benchmark
Running for engine soliddb
Average number of seconds to run all queries: 0.657 seconds
Minimum number of seconds to run all queries: 0.604 seconds
Maximum number of seconds to run all queries: 0.740 seconds
Number of clients running queries: 200
Average number of queries per client: 5

mysqlslap --concurrency=1,25,50,100,200 --iterations=10 \
--number-int-cols=2 --number-char-cols=3 --auto-generate-sql \
--engine=soliddb --auto-generate-sql-add-autoincrement \
--auto-generate-sql-load-type=mixed --number-of-queries=1000 \
--user=root --csv=/tmp/mysqlslap_q1000_soliddb_750M.csv

cat /tmp/mysqlslap_q1000_soliddb_750M.csv

soliddb,mixed,0.315,0.266,0.392,1,1000
soliddb,mixed,0.397,0.355,0.463,25,40
soliddb,mixed,0.420,0.364,0.475,50,20
soliddb,mixed,0.495,0.446,0.556,100,10
soliddb,mixed,0.701,0.621,0.810,200,5

http://dev.soliddb.com/doc/doc_html/079/soliddb_storageengine.html#solid-autoincrement

Let's test the autoincrement out with a write-only test

mysqlslap --concurrency=1,25,50,100,200 --iterations=10 --number-int-cols=2 \
--number-char-cols=3 --auto-generate-sql --engine=soliddb \
--auto-generate-sql-add-autoincrement --auto-generate-sql-load-type=write \
--number-of-queries=1000 --user=root \
--csv=/tmp/mysqlslap_q1000_soliddb_750M_write.csv

soliddb,write,0.395,0.305,0.468,1,1000
soliddb,write,0.398,0.335,0.460,25,40
soliddb,write,0.507,0.437,0.757,50,20
soliddb,write,0.522,0.453,0.594,100,10
soliddb,write,0.754,0.682,0.808,200,5

Same again with 10,000 queries

mysql> set global max_connections = 500;
Query OK, 0 rows affected (0.00 sec)

mysql> exit
Bye
mysqlslap --concurrency=1,25,50,100,200 --iterations=10 --number-int-cols=2 \
--number-char-cols=3 --auto-generate-sql --engine=soliddb \
--auto-generate-sql-add-autoincrement --auto-generate-sql-load-type=write \
--number-of-queries=10000 --user=root --csv=/tmp/mysqlslap_q10000_soliddb_750M_write.csv

soliddb,write,5.207,4.382,6.535,1,10000
soliddb,write,4.686,4.002,5.141,25,400
soliddb,write,4.792,4.101,5.425,50,200
soliddb,write,5.633,4.472,7.151,100,100
soliddb,write,5.784,4.630,6.733,200,50

And back to a mixed load for 10,000 queries.

mysqlslap --concurrency=1,25,50,100,200 --iterations=10 --number-int-cols=2 \
--number-char-cols=3 --auto-generate-sql --engine=soliddb \
--auto-generate-sql-add-autoincrement --auto-generate-sql-load-type=mixed \
--number-of-queries=10000 --user=root --csv=/tmp/mysqlslap_q10000_soliddb_750M.csv

cat /tmp/mysqlslap_q10000_soliddb_750M.csv

soliddb,mixed,3.784,3.314,4.277,1,10000
soliddb,mixed,3.791,3.624,4.310,25,400
soliddb,mixed,4.043,3.516,4.857,50,200
soliddb,mixed,4.280,3.845,5.159,100,100
soliddb,mixed,4.464,3.969,5.602,200,50


vmstat 1 10

procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
0 1 0 1222700 39820 315480 0 0 16 4566 374 4444 6 3 87 4
1 1 0 1222700 39824 315476 0 0 0 33840 2347 22313 23 11 6 59
19 2 0 1221448 39828 315472 0 0 0 13968 956 12484 55 9 9 26
9 2 0 1221324 39828 315472 0 0 0 7608 568 14670 59 41 0 0
1 1 0 1221324 39832 315468 0 0 0 6012 491 15793 61 20 0 18
7 1 0 1221460 39832 315468 0 0 0 8044 626 14697 60 20 0 20
18 2 0 1221324 39836 315464 0 0 0 13196 923 15057 77 10 6 6
6 2 0 1221200 39836 315464 0 0 0 3088 257 6661 21 5 0 74
1 0 0 1221200 39840 315460 0 0 0 5924 476 15433 70 20 0 9
1 1 0 1221200 39840 315460 0 0 0 7216 562 14654 62 24 0 13


MySQL vs MySQLSlap Round 3

This is a continuing series on using mysqlslap to pound MySQL databases.
http://blog.dbadojo.com/2007/08/mysql-vs-mysqlslap.html
http://blog.dbadojo.com/2008/01/mysql-vs-mysqlslap-round-2.html

I was moving towards running mysqlslap to test any concurrency limits inherent in mysql-proxy. However, I was unable to get the small instances on EC2 to handle even modest concurrency levels (500 concurrent users).

At least this means that if you are using a small spec machine (or EC2 small instance) you should use connection pooling to limit the absolute number of sessions and promote reuse of existing sessions (a crude server-side cap is sketched below). This is web-connected databases 101. Simple stuff.
We are now pushing the envelope, throwing big iron at the problem, scaling up rather than scaling out. This is a common alternative if you are unwilling or unable to rework or redesign your database application or architecture to use a scale-out solution.
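
Proper connection pooling belongs in the application tier, but as a crude stop-gap you can at least cap sessions on the server side. A hedged sketch (the webuser account is hypothetical):

mysql> SET GLOBAL max_connections = 200;
mysql> GRANT USAGE ON *.* TO 'webuser'@'%' WITH MAX_USER_CONNECTIONS 20;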

OK, on with the show. I fired up the largest instance available in EC2.
This is Fedora Core 8 64-bit running mysql-5.1.22-rc-linux-x86_64-glibc23

We have 4 CPUs, 15 GB of RAM and plenty of disk space.

Results:

  1. MySQL running the Innodb storage engine scaled reasonably well.
  2. MySQL had no difficulty handling as many as 8000 concurrent sessions.
  3. Running more queries with fewer connections was slower than the opposite, i.e. running more sessions in parallel reduced the time to completion. This is good and should be expected.
  4. In MySQLslap the total number of queries is just that. It is not the number of queries per individual session.
  5. Running many concurrent sessions without any queries will hang mysqlslap... a bug or a feature? If a session doesn't run anything, it apparently never returns.

Coming up:

  1. MySQLSlap vs mysql-proxy: load balancing
  2. MySQLSlap vs mysql-proxy: read/write split
  3. MySQLSlap vs MySQL SolidDB.
  4. MySQLSlap vs MySQL PrimeBase XT (PBXT) storage engine.

Showing the specification of the box: df for disk, top for CPU and memory.

df -h

Filesystem Size Used Avail Use% Mounted on
/dev/sda1 9.9G 767M 8.6G 9% /
/dev/sdb 414G 199M 393G 1% /mnt

top - 03:54:46 up 2 min, 1 user, load average: 0.02, 0.02, 0.00
Tasks: 47 total, 2 running, 45 sleeping, 0 stopped, 0 zombie
Cpu0 : 0.9%us, 0.7%sy, 0.0%ni, 95.9%id, 2.4%wa, 0.0%hi, 0.0%si, 0.1%st
Cpu1 : 0.1%us, 0.2%sy, 0.0%ni, 99.5%id, 0.2%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu2 : 0.1%us, 0.2%sy, 0.0%ni, 99.4%id, 0.3%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu3 : 0.4%us, 0.2%sy, 0.0%ni, 99.3%id, 0.1%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 15728640k total, 373744k used, 15354896k free, 3668k buffers
Swap: 0k total, 0k used, 0k free, 37608k cached

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1 root 16 0 10312 712 592 S 0 0.0 0:00.03 init
2 root RT 0 0 0 0 S 0 0.0 0:00.01 migration/0
3 root 34 19 0 0 0 S 0 0.0 0:00.00 ksoftirqd/0
4 root RT 0 0 0 0 S 0 0.0 0:00.00 watchdog/0
5 root 10 -5 0 0 0 S 0 0.0 0:00.00 events/0

cat /proc/meminfo

MemTotal: 15728640 kB
MemFree: 15355020 kB
Buffers: 3676 kB
Cached: 37600 kB
SwapCached: 0 kB
Active: 29308 kB
Inactive: 15452 kB
HighTotal: 0 kB
HighFree: 0 kB
LowTotal: 15728640 kB
LowFree: 15355020 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 4 kB
Writeback: 0 kB
Mapped: 6584 kB
Slab: 10016 kB
CommitLimit: 7864320 kB
Committed_AS: 9436 kB
PageTables: 592 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 124 kB
VmallocChunk: 34359738243 kB

my.cnf files for the test...


grep -v "#" my-huge.cnf|sed -e '/^$/d'

[client]
port = 3306
socket = /tmp/mysql.sock
[mysqld]
port = 3306
socket = /tmp/mysql.sock
skip-locking
key_buffer = 384M
max_allowed_packet = 1M
table_cache = 512
sort_buffer_size = 2M
read_buffer_size = 2M
read_rnd_buffer_size = 8M
myisam_sort_buffer_size = 64M
thread_cache_size = 8
query_cache_size = 32M
thread_concurrency = 8
log-bin=mysql-bin
server-id = 1
[mysqldump]
quick
max_allowed_packet = 16M
[mysql]
no-auto-rehash
[isamchk]
key_buffer = 256M
sort_buffer_size = 256M
read_buffer = 2M
write_buffer = 2M
[myisamchk]
key_buffer = 256M
sort_buffer_size = 256M
read_buffer = 2M
write_buffer = 2M
[mysqlhotcopy]
interactive-timeout


heavy INNODB my.cnf

grep -v "#" my-innodb-heavy-4G.cnf|sed -e '/^$/d'
[client]
port = 3306
socket = /tmp/mysql.sock
[mysqld]
port = 3306
socket = /tmp/mysql.sock
back_log = 50
max_connections = 100
max_connect_errors = 10
table_cache = 2048
max_allowed_packet = 16M
binlog_cache_size = 1M
max_heap_table_size = 64M
sort_buffer_size = 8M
join_buffer_size = 8M
thread_cache_size = 8
thread_concurrency = 8
query_cache_size = 64M
query_cache_limit = 2M
ft_min_word_len = 4
default_table_type = MYISAM
thread_stack = 192K
transaction_isolation = REPEATABLE-READ
tmp_table_size = 64M
log-bin=mysql-bin
log_slow_queries
long_query_time = 2
log_long_format
server-id = 1
key_buffer_size = 32M
read_buffer_size = 2M
read_rnd_buffer_size = 16M
bulk_insert_buffer_size = 64M
myisam_sort_buffer_size = 128M
myisam_max_sort_file_size = 10G
myisam_max_extra_sort_file_size = 10G
myisam_repair_threads = 1
myisam_recover
innodb_additional_mem_pool_size = 16M
innodb_buffer_pool_size = 2G
innodb_data_file_path = ibdata1:10M:autoextend
innodb_file_io_threads = 4
innodb_thread_concurrency = 16
innodb_flush_log_at_trx_commit = 1
innodb_log_buffer_size = 8M
innodb_log_file_size = 256M
innodb_log_files_in_group = 3
innodb_max_dirty_pages_pct = 90
innodb_lock_wait_timeout = 120
[mysqldump]
quick
max_allowed_packet = 16M
[mysql]
no-auto-rehash
[isamchk]
key_buffer = 512M
sort_buffer_size = 512M
read_buffer = 8M
write_buffer = 8M
[myisamchk]
key_buffer = 512M
sort_buffer_size = 512M
read_buffer = 8M
write_buffer = 8M
[mysqlhotcopy]
interactive-timeout
[mysqld_safe]
open-files-limit = 8192

My.cnf file for test.

diff my-innodb-heavy-4G_withoutcomments.cnf /etc/my.cnf

8c8
< max_connections = 100
---
> max_connections = 1024
30c30
< key_buffer_size = 32M
---
> key_buffer_size = 256M
40c40
< innodb_buffer_pool_size = 2G
---
> innodb_buffer_pool_size = 8G
45c45
< innodb_log_buffer_size = 8M
---
> innodb_log_buffer_size = 16M

First run, simple run up to 1000 concurrent sessions:

mysqlslap --concurrency=1,25,50,100,200,500,1000 --iterations=10 --number-int-cols=2 \
--number-char-cols=3 --auto-generate-sql --csv=/tmp/mysqlslap_q1000.csv --engine=blackhole,myisam,innodb \
--auto-generate-sql-add-autoincrement --auto-generate-sql-load-type=mixed --number-of-queries=1000 \
--user=root --password=$PASSWD

tail -f /tmp/mysqlslap_q1000.csv

blackhole,mixed,0.072,0.068,0.084,1,1000
blackhole,mixed,0.044,0.018,0.059,25,40
blackhole,mixed,0.041,0.035,0.069,50,20
blackhole,mixed,0.051,0.043,0.067,100,10
blackhole,mixed,0.078,0.059,0.102,200,5
blackhole,mixed,0.116,0.096,0.183,500,2
blackhole,mixed,0.204,0.164,0.311,1000,1
myisam,mixed,0.356,0.327,0.456,1,1000
myisam,mixed,0.168,0.157,0.184,25,40
myisam,mixed,0.187,0.140,0.227,50,20
myisam,mixed,0.193,0.175,0.207,100,10
myisam,mixed,0.198,0.189,0.215,200,5

MyISAM locks up at 500 concurrent sessions...


mysql -u root -p$PASSWD

Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 19323
Server version: 5.1.22-rc-log MySQL Community Server (GPL)

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> show processlist;
+-------+------+-----------+-----------+---------+------+--------+------------------------------------------------------------------------------------------------------+
| Id | User | Host | db | Command | Time | State | Info |
+-------+------+-----------+-----------+---------+------+--------+------------------------------------------------------------------------------------------------------+
| 2 | root | localhost | mysqlslap | Sleep | 77 | | NULL |
| 19275 | root | localhost | mysqlslap | Query | 77 | Locked | INSERT INTO t1 VALUES (NULL,1839719074,1147388315,'jbPuYPJuWuAF3FKMtuKwCtCBdruTJKwuvLhKms6Hm9OhOlwHX |
...
| 19322 | root | localhost | mysqlslap | Query | 77 | Locked | INSERT INTO t1 VALUES (NULL,2101913295,1100418235,'XQ0K9x30cYaFq2RvMZzKHKgmxy4uKBpreh4fX7f7XEEM8a9Nz |
| 19323 | root | localhost | NULL | Query | 0 | NULL | show processlist |
+-------+------+-----------+-----------+---------+------+--------+------------------------------------------------------------------------------------------------------+
48 rows in set (0.00 sec)
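
The Locked state above is MyISAM's table-level locking at work: the concurrent INSERTs queue behind a single table lock. A hedged way to see how often sessions have had to wait on table locks:

mysql> SHOW GLOBAL STATUS LIKE 'Table_locks%';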

Rerunning with just INNODB engine.

mysqlslap --concurrency=1,25,50,100,200,500,1000 --iterations=10 --number-int-cols=2 --number-char-cols=3 \
--auto-generate-sql --csv=/tmp/mysqlslap_q1000_innodb.csv --engine=innodb \
--auto-generate-sql-add-autoincrement --auto-generate-sql-load-type=mixed \
--number-of-queries=1000 --user=root --password=$PASSWD

tail -f /tmp/mysqlslap_q1000_innodb.csv

innodb,mixed,0.195,0.186,0.232,1,1000
innodb,mixed,0.193,0.161,0.228,25,40
innodb,mixed,0.306,0.206,0.745,50,20
innodb,mixed,0.301,0.226,0.355,100,10
innodb,mixed,0.354,0.256,0.565,200,5
innodb,mixed,0.404,0.355,0.462,500,2
innodb,mixed,0.622,0.496,0.860,1000,1

Trying for 2000 concurrent sessions; unix barfs with a socket error, so I needed to increase the open files limit.

mysqlslap --concurrency=2000 --iterations=10 --number-int-cols=2 --number-char-cols=3 \
--auto-generate-sql --csv=/tmp/mysqlslap_q1000_innodb.csv --engine=innodb \
--auto-generate-sql-add-autoincrement --auto-generate-sql-load-type=mixed \
--number-of-queries=1000 --user=root --password=$PASSWD

mysqlslap: Error when connecting to server: 2001 Can't create UNIX socket (24)
mysqlslap: Error when connecting to server: 2001 Can't create UNIX socket (24)
...

ulimit -a

core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
max nice (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 122944
max locked memory (kbytes, -l) 32
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
max rt priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 122944
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
[root@domU-12-31-39-00-00-61 ~]# ulimit -a mysql
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
max nice (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 122944
max locked memory (kbytes, -l) 32
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
max rt priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 122944
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited

Set the open files limit to 4096

ulimit -n 4096

ulimit -a mysql
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
max nice (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 122944
max locked memory (kbytes, -l) 32
max memory size (kbytes, -m) unlimited
open files (-n) 4096
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
max rt priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 122944
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
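
Note that ulimit -n only affects the current shell. To make a higher limit stick across logins and for mysqld_safe itself, a hedged sketch (values are examples):

# /etc/security/limits.conf - for whichever users run mysqld and mysqlslap
root     soft    nofile    16384
root     hard    nofile    16384
mysql    soft    nofile    16384
mysql    hard    nofile    16384

# and for the server itself, in /etc/my.cnf
[mysqld_safe]
open-files-limit = 16384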

Results:

innodb,mixed,10.311,9.758,10.759,2000,0

OK, going crazy. Increasing the concurrency test to 8000.

ulimit -n 16384

mysql -u root -p$PASSWD

Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 59768
Server version: 5.1.22-rc-community-log MySQL Community Edition (GPL)

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> set global max_connections = 8192;
Query OK, 0 rows affected (0.00 sec)

mysql> show variables like '%open%';
+-------------------+----------+
| Variable_name | Value |
+-------------------+----------+
| have_openssl | DISABLED |
| innodb_open_files | 300 |
| open_files_limit | 8192 |
| table_open_cache | 2048 |
+-------------------+----------+
4 rows in set (0.00 sec)

The test, starting at 3000 and stopping at 8000

mysqlslap --concurrency=3000,4000,5000,6000,7000,8000 --iterations=10 \
--number-int-cols=2 --number-char-cols=3 --auto-generate-sql --csv=/tmp/mysqlslap_q1000_innodb.csv \
--engine=innodb --auto-generate-sql-add-autoincrement --auto-generate-sql-load-type=mixed \
--number-of-queries=1000 --user=root --password=$PASSWD


show innodb status (output excerpt)

mysql tables in use 1, locked 0
MySQL thread id 73478, query id 628793 localhost root statistics
SELECT intcol1,intcol2,charcol1,charcol2,charcol3 FROM t1 WHERE id = '86'
---TRANSACTION 0 559503, ACTIVE 0 sec, process no 24914, OS thread id 1231399232 waiting in InnoDB queue
mysql tables in use 1, locked 0
MySQL thread id 72432, query id 630648 localhost root statistics
SELECT intcol1,intcol2,charcol1,charcol2,charcol3 FROM t1 WHERE id = '37'
--------
FILE I/O
--------
I/O thread 0 state: waiting for i/o request (insert buffer thread)
I/O thread 1 state: waiting for i/o request (log thread)
I/O thread 2 state: waiting for i/o request (read thread)
I/O thread 3 state: waiting for i/o request (write thread)
Pending normal aio reads: 0, aio writes: 0,
ibuf aio reads: 0, log i/o's: 0, sync i/o's: 0
Pending flushes (fsync) log: 1; buffer pool: 0
26 OS file reads, 639254 OS file writes, 639129 OS fsyncs
0.00 reads/s, 0 avg bytes/read, 1816.98 writes/s, 1816.71 fsyncs/s
-------------------------------------
INSERT BUFFER AND ADAPTIVE HASH INDEX
-------------------------------------
Ibuf: size 1, free list len 0, seg size 2,
0 inserts, 0 merged recs, 0 merges
Hash table size 17700857, used cells 14, node heap has 1 buffer(s)
0.02 hash searches/s, 1784.49 non-hash searches/s
---
LOG
---
Log sequence number 0 273601811
Log flushed up to 0 273601795
Last checkpoint at 0 200657790
1 pending log writes, 0 pending chkp writes
639290 log i/o's done, 1786.10 log i/o's/second
----------------------
BUFFER POOL AND MEMORY
----------------------
Total memory allocated 9232961490; in additional pool allocated 16773376
Dictionary memory allocated 37240
Buffer pool size 524288
Free buffers 522840
Database pages 1447
Modified db pages 1394
Pending reads 0
Pending writes: LRU 0, flush list 0, single page 0
Pages read 64, created 1383, written 2709
0.00 reads/s, 0.00 creates/s, 4.62 writes/s
Buffer pool hit rate 1000 / 1000
--------------
ROW OPERATIONS
--------------
16 queries inside InnoDB, 1998 queries in queue
1 read views open inside InnoDB
Main thread process no. 24914, id 1140881728, state: sleeping
Number of rows inserted 319416, updated 0, deleted 0, read 10852873
910.30 inserts/s, 0.00 updates/s, 0.00 deletes/s, 789.73 reads/s
----------------------------
END OF INNODB MONITOR OUTPUT
============================

1 row in set, 1 warning (0.24 sec)

Results: Notice how the whole timing slows down when there aren't enough queries
for the sessions.

innodb,mixed,0.195,0.186,0.232,1,1000
innodb,mixed,0.193,0.161,0.228,25,40
innodb,mixed,0.306,0.206,0.745,50,20
innodb,mixed,0.301,0.226,0.355,100,10
innodb,mixed,0.354,0.256,0.565,200,5
innodb,mixed,0.404,0.355,0.462,500,2
innodb,mixed,0.622,0.496,0.860,1000,1
innodb,mixed,10.311,9.758,10.759,2000,0
innodb,mixed,20.512,16.948,25.606,3000,0

mysqlslap --concurrency=3000,4000,5000,6000,7000,8000 --iterations=10 \
--number-int-cols=2 --number-char-cols=3 --auto-generate-sql \
--csv=/tmp/mysqlslap_q1000_innodb.csv --engine=innodb \
--auto-generate-sql-add-autoincrement --auto-generate-sql-load-type=mixed \
--number-of-queries=10000 --user=root --password=$PASSWD


innodb,mixed,5.466,4.771,6.007,3000,3

Killed it at 3000, as running no queries was slower than running at least 1 query... interesting.



Ok. Going harder again, increasing the number of queries to 100,000.



mysqlslap --concurrency=25,50,100,200,500,1000,2000,5000,7000,8000 --iterations=10 \
--number-int-cols=2 --number-char-cols=3 --auto-generate-sql \
--csv=/tmp/mysqlslap_q100000_innodb.csv --engine=innodb --auto-generate-sql-add-autoincrement \
--auto-generate-sql-load-type=mixed --number-of-queries=100000 --user=root --password=$PASSWD

innodb,mixed,22.739,22.073,24.245,25,4000
innodb,mixed,25.742,24.546,27.208,50,2000
innodb,mixed,28.168,26.855,30.160,100,1000
innodb,mixed,29.032,28.212,31.196,200,500
innodb,mixed,32.129,31.216,33.788,500,200
innodb,mixed,36.847,35.449,38.028,1000,100

2nd Run

innodb,mixed,23.062,22.516,23.560,25,4000
innodb,mixed,26.703,25.016,28.788,50,2000
innodb,mixed,28.460,26.001,30.368,100,1000
innodb,mixed,29.599,28.589,31.178,200,500
innodb,mixed,32.085,31.154,35.011,500,200
innodb,mixed,35.461,34.482,36.585,1000,100
innodb,mixed,18.829,18.180,19.518,2000,50
innodb,mixed,8.149,7.231,9.889,5000,20
innodb,mixed,6.093,5.681,6.338,7000,14
innodb,mixed,5.527,5.219,6.042,8000,12