MySQL Practice challenges part one.

Are you ready to accept the challenge? Really?

Can you prove that you have got what it takes to be an effective DBA?

Go grab the tests and see if you do…


These tests are designed to test your ability to do basic restores and recoveries of a MySQL database.

Each restore gets progressively more complex.

The key feature is that once you have successfully restored the MySQL database, you will get the encryption key/passphrase
for the next test.

Use: ccrypt -d <filename>.cpt to decrypt the file.


You will need:

  • VirtualBox and Vagrant, if using a virtual machine to run the MySQL instance.
  • MySQL 5.6.
  • ccrypt, or an equivalent tool that can read ccrypt-encrypted files.

A Vagrantfile is provided as an example to spawn a simple VirtualBox VM with 1 GB of memory to run the small MySQL database required for the tests.

Once you have proved you are awesome…

Once you have completed all the current tests, email us and we will keep you informed when the next set becomes ready.

For the uber awesome DBAs: if you have ideas for more tests, email them, comment here, or send a merge request.

P.S. There are more and harder restores to come… stay tuned.

MySQL upgrade 5.6 with innodb_fast_checksum=1


My checklist for performing an in-place MySQL upgrade to 5.6.


In my previous post, I discussed the problem I had when doing an in-place MySQL upgrade from 5.5 to 5.6 when the database had been running with innodb_fast_checksum=1.

The solution was to use the MySQL 5.7 version of the innochecksum tool. Running this tool against a shut-down database, you can force the checksums on the InnoDB datafiles to be rewritten in either INNODB or CRC32 format.

Once the MySQL 5.6 upgrade is done, the 5.6 version of mysqld will be able to read the datafiles correctly and not fail with an error.

There is already plenty of good documentation on the MySQL website on how to upgrade from 5.x to 5.6.


My checklist for in-place upgrading to MySQL 5.6:

  1. Perform application and database performance testing on your test environment to make sure your application performance doesn’t get worse when running on MySQL 5.6.
  2. Make sure you have backups and have verified that they are good, i.e. that you have restored databases from those backups.
  3. Check that all users have updated their passwords to use the new MySQL password hash (see the password-hashing plugin documentation).
  4. Organize downtime in advance.
  5. If running with innodb_fast_checksum=1, proceed with steps to replace the fast checksums with INNODB or CRC32.
    Note: if you use CRC32, you will need to make sure your cnf file is updated for 5.6 to use innodb_checksum_algorithm = CRC32, because innodb_checksum_algorithm = INNODB is the default setting. See this post for a sample procedure.
  6. Run a quick search of all existing .cnf files to find any other system variables which have been removed and either replace or remove them.
  7. Run the in-place upgrade.
  8. Run mysql_upgrade; it will tell you if it does not need to be run again.
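Step 6 of the checklist can be scripted. A minimal sketch, using innodb_fast_checksum as the example of a removed variable; the paths in the example invocation are assumptions, and you should extend the pattern with other removed variables from the 5.6 release notes:

```shell
# check_cnf FILE... -> print any line that still sets a variable removed in 5.6.
check_cnf() {
    for f in "$@"; do
        grep -Hn '^[[:space:]]*innodb_fast_checksum' "$f" \
            && echo "-> $f needs editing before the 5.6 upgrade"
    done
    return 0
}

# Example invocation (paths are assumptions, adjust for your layout):
# check_cnf /etc/my.cnf /etc/mysql/conf.d/*.cnf
```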

I am trying something new with a poll. Enjoy.

innodb_fast_checksum=1 and upgrading to MySQL 5.6

The Percona version of MySQL has been such a good replacement for generic MySQL that many of the features and options that existed in Percona have been merged into generic MySQL.

One of these, innodb_fast_checksum, was an option added to improve the performance of checksums.

The system variable was replaced by innodb_checksum_algorithm in 5.6.

Unfortunately, when you go to upgrade from Percona 5.x to Percona (or generic MySQL) 5.6, an in-place upgrade will fail.

The errors are generally mysqld complaining that it can’t read a file, because the 5.6 version cannot read fast checksums.

Example errors:

InnoDB: checksum mismatch in data file
InnoDB: Could not open

The recommended option is to do the default upgrade process: use mysqldump to dump your data out and reload it after you replace the binaries.

For large datasets or servers suffering poor IO performance, the time it takes to do that, even with a parallel dump-and-load tool, is prohibitive.

So are you looking for a workaround?

How about innochecksum, a MySQL tool which has been around for a while?

This tool can check your datafiles to make sure the checksums are correct, or in our case, force the checksums to be written a specific way. I was thinking: the prep work is done, now it is just process work. But alas, the versions of innochecksum for 5.5 and 5.6 don’t support file sizes over 2 GB.

Luckily, innochecksum for 5.7 does support larger file sizes and, best of all, it works on datafiles from older versions too. For people hitting this article in the future: at the time, 5.7 was still a Release Candidate (RC).

To use this method:

  1. Back up your db, or have good backups.
  2. Organize downtime for your db (a slave preferably, so you aren’t affecting traffic).
  3. Shut down mysql.
  4. Repeat for each InnoDB datafile. Example command: innochecksum -vS --no-check --write=innodb <path to innodb datafile>
  5. Replace innodb_fast_checksum = 1 with innodb_fast_checksum = 0 in your my.cnf (and chef/puppet/ansible repo).
  6. Restart mysql.
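The per-file loop in step 4 can be sketched like this. The datadir path and the location of the 5.7 innochecksum binary are assumptions; adjust both for your install, and note that with innodb_file_per_table you also have one .ibd file per table:

```shell
# innodb_datafiles DATADIR -> list the shared tablespace files and any
# per-table .ibd files under the datadir.
innodb_datafiles() {
    for f in "$1"/ibdata* "$1"/*/*.ibd; do
        [ -f "$f" ] && printf '%s\n' "$f"
    done
    return 0
}

# With mysqld stopped, force INNODB-format checksums onto every file
# (the binary path is an assumption -- point it at your 5.7 build):
# innodb_datafiles /var/lib/mysql | while read -r f; do
#     /opt/mysql-5.7/bin/innochecksum -vS --no-check --write=innodb "$f"
# done
```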

I will cover the whole procedure for upgrading from Percona MySQL 5.5 to Percona MySQL 5.6 in more detail in a later post.

Fun tool tip:

I have had to compile the MySQL 5.7 innochecksum for an older Linux system running a glibc older than 2.14, and it works fine as well. The biggest headache was sorting out cmake, boost etc. to enable compilation of the MySQL 5.7 source code.

Have Fun

Prewarm your EBS backed EC2 MySQL slaves

This is the story of cold blocks and mismatched instances and how they will cause you pain and cost you money until you understand why.

Most of the clients that we support run on the Amazon cloud, using either RDS or MySQL on plain EC2 instances with Provisioned IOPS (PIOPS) EBS for data storage.

As expected the common architecture is running a master with one or more slaves handling the read traffic.

A common problem is that after the slaves are provisioned (normally created from an EBS snapshot) they lag badly due to slow IO performance.

Unfortunately, what tends to be lost in the “speed of provisioning new resources” fetish are some limitations in the data persistence layer (EBS).

If you are using EBS and you have created the volume from a snapshot, or created a new volume, you have to pre-warm the EBS volume, otherwise you will suffer a bad (I mean seriously bad) first-usage penalty. Bad? I am talking up to a 50% performance drop[1]. So that expensive PIOPS EBS volume you created is going to perform like rubbish every time it reads/writes a cold block.

The other thing which tends to happen is pairing the wrong instance type (network performance) with PIOPS EBS. This is the classic networked-storage problem: the network is the bottleneck. If your instance type has limited network performance, having higher PIOPS than the network can handle means you are wasting money on PIOPS you can’t use. A bit like the old days of dedicated servers and SAN storage, where the SAN could deliver 200-300 MB per sec but the 1 Gigabit network could only do 40-50 MB per sec.

Here is the real downside: using the cloud, you can provision new resources to handle peak load (in this case more MySQL slaves to handle read load) as fast as you can click, or faster using API calls, or even automagically if you have some algo forecasting the need for additional resources. But… the EBS is all cold blocks, so these new instances will be up and available in minutes, while the IO performance will be poor until you either pre-warm the volume or the slave gets around to writing/reading all blocks.

So the common solution is to pre-warm the blocks by using dd to read the EBS device (warming each block) into /dev/null.

eg: sudo dd if=/dev/xvdf of=/dev/null bs=1M

Consider how long this will take for a reasonably sized DB (200 GB) on an instance with a 1 Gigabit network:

200 GB read at 50 MB/sec = 200,000 MB / 50 = 4,000 secs ≈ 1 hour 7 minutes.
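The same back-of-envelope sum as a short shell snippet (decimal units, as in the sum above; actual throughput varies by instance type, so treat the 50 MB/sec as an assumption):

```shell
# prewarm_secs SIZE_GB THROUGHPUT_MBPS -> seconds to read the whole volume
prewarm_secs() {
    echo $(( $1 * 1000 / $2 ))
}

prewarm_secs 200 50    # 200 GB at 50 MB/sec -> 4000 seconds
```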

So you or your algo provisioned a new EC2 instance for the database in minutes, but either your IO will be rubbish for an extended period, or you wait more than an hour per 200 GB to have the EBS pre-warmed.

What are the solutions?

  1. Forecast further in advance, depending on the size of your db (or any other persistent storage layer, eg NoSQL etc).
  2. Use ephemeral storage and manage the increased risk of data loss in the event of instance termination.
  3. Break your DB or your application into smaller pieces, aka microservices.[2]
  4. Pay more $ and have your databases stay around longer, so waiting for an instance to be ready in the beginning is not a problem.

As you might expect, most businesses are happy with option 4. Pay more, leave instances around like they were dedicated servers (base load). Amazon is happy too.

Option 3, whilst requiring some thought (argh) and additional complexity, is where the real speed of provisioning, dare I say it the agile nature of the cloud, will bear the most fruit.




mysqlslap howto

I noticed that people were hitting the site for information on how to run mysqlslap.

To help out those searchers, here is a quick mysqlslap howto.

  1. Make sure you have mysql 5.1.4 or higher. Download MySQL from the MySQL website
  2. Make sure your MySQL database is running.
  3. Run mysqlslap, using progressively more concurrent threads:
    mysqlslap  --concurrency=1,25,50,100 --iterations=10 --number-int-cols=2 \
    --number-char-cols=3 --auto-generate-sql --csv=/tmp/mysqlslap.csv \
    --engine=blackhole,myisam,innodb --auto-generate-sql-add-autoincrement \
    --auto-generate-sql-load-type=mixed --number-of-queries=100 --user=root

For detailed descriptions of each parameter, see the MySQL documentation.

If you want to see how I used mysqlslap to test MySQL performance on Amazon EC2, see my earlier series of posts.

MySQL Error: error reconnecting to master

Error message:

Slave I/O thread: error reconnecting to master
Last_IO_Error: error connecting to master


Check that the slave can connect to the master instance, using the following steps:

  1. Use ping to check the master is reachable, eg: ping <master-hostname>
  2. Use ping with the IP address to check that DNS isn’t broken, eg: ping <master-ip-address>
  3. Use the mysql client to connect from slave to master, eg: mysql -u repluser -pREPLPASS -h <master-host> --port=3306 (substitute whatever port you are connecting to the master on)
  4. If all steps work, then check that repluser (the replication user) has the REPLICATION SLAVE privilege, eg: show grants for 'repl'@'<slave-host>';


  • If steps 1 and 2 fail, you have a network or firewall issue. Check with a network/firewall administrator, or check the logs if you wear those hats.
  • If step 1 fails but step 2 works, you have a DNS or name resolution issue. Check that the slave can connect to and resolve other hosts using the mysql client or ssh/telnet/remote desktop.
  • If step 3 fails, check the error reported. It will either be an authentication issue (login failed/denied) or an issue with the TCP port the master is listening on. A good way to verify the port is open is: telnet <master-host> 3306 (or the port the master is listening on). If that fails, there is a firewall in the network blocking that port.
  • If you get to step 4, everything looks fine, and the slave does reconnect fine on retrying, then you probably had a temporary network failure, name resolution failure, firewall failure, or some combination of these.

Continuing Sporadic issues:

Get hold of the network and firewall logs.
If this is not possible, set up a script to periodically ping, telnet, and connect with the mysql client, logging the results over time to prove to your friendly network admin that there is a problem with the network.
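A minimal sketch of such a script. The host, user, and password are placeholders for your environment, and the mysql check assumes the client is installed on the slave:

```shell
# check_master HOST -> one timestamped status line suitable for a log file
check_master() {
    # Network reachability check
    ping -c1 -W2 "$1" >/dev/null 2>&1 && net=ok || net=FAIL
    # MySQL connectivity check (repluser/REPLPASS are placeholders)
    mysql -h "$1" -u repluser -pREPLPASS -e 'SELECT 1' >/dev/null 2>&1 \
        && sql=ok || sql=FAIL
    echo "$(date '+%Y-%m-%d %H:%M:%S') host=$1 ping=$net mysql=$sql"
}

# Wrap the function in a script and run it from cron every minute,
# appending to a log you can hand to the network admin:
# * * * * * /usr/local/bin/check_master.sh >> /var/log/repl_check.log
```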

How MySQL deals with it:

MySQL will try and reconnect by itself after a network failure or query timeout.

The process is governed by a few variables: slave-net-timeout, master-connect-retry and master-retry-count.

In a nutshell, a MySQL slave will try to reconnect after getting a timeout (slave-net-timeout), waiting the number of seconds in master-connect-retry between attempts, but only for the number of times specified in master-retry-count.

By default, a MySQL slave waits one hour before retrying, and will then retry every 60 seconds, up to 86,400 times. That is every minute for 60 days.

If the one-hour slave-net-timeout is too long for your DR/slave read strategy, you will need to adjust it accordingly.
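For example, to detect a dead master sooner, you might set something like the following on the slave. The values are illustrative only, and note that on newer versions the retry settings are usually given via CHANGE MASTER TO rather than the cnf file:

```ini
# my.cnf on the slave -- detect a dead master after 60s, retry every 10s
[mysqld]
slave-net-timeout    = 60
master-connect-retry = 10
master-retry-count   = 86400
```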

Edit: 2011/02/02

Thanks to leBolide. He discovered that there is a 32 character limit on the password for replication.

Have Fun


P.S. If you liked this post, you might be good enough to try these challenges.

Top 9 Posts for the last 12 months

If you were ever wondering what other people check out on this site, here are the most popular articles by pageviews for the last 12 months.

Seems most people like the LVM snapshots article, the articles about running multiple MySQL instances, and the various benchmark articles.

  1. mysql backups using lvm snapshots
  2. oracle 11g on ec2 using silent install
  3. mysql multi master master replication on ec2
  4. mysql master master replication table sync
  5. multiple mysql instances on ec2
  6. mysql dbt2 benchmark on ec2
  7. mysql 51 ndb cluster replication on ec2
  8. sysbench vs mysql on ec2
  9. bonnie io benchmark vs ec2

Have Fun


OurDelta MySQL on EC2 – updating binaries

Given the amount of time since my last post on installing OurDelta MySQL on EC2, this is a good opportunity to show quickly how to get your OurDelta MySQL install up to date.


This assumes you have already installed the OurDelta repository as per the OurDelta documentation.

To update, just run yum update to get the latest version:

yum update MySQL-OurDelta*

It is as simple as that.


Thanks for the feedback from the last post.

Some people requested the Amazon Machine Image (AMI). The main issue with this is that once you bundle an AMI, it is going to start with those binaries (including MySQL) and require you to run yum update each time you launch an instance. So if I had bundled an AMI back in February, anyone using that AMI now would be way behind on the latest updates for OurDelta and also any other CentOS packages.

It is better to go down the path of learning how to bundle an AMI yourself, getting a base CentOS 4 or CentOS 5 AMI up to date once a month, and using yum update after you launch the instance.
You can even pass a script which runs after the instance has launched, or use a configuration tool like Puppet.
Believe me when I say that whilst bundling AMIs is straightforward, you do not want large numbers of old/obsolete AMIs floating around to manage later.

Upcoming stuff:

I am going to use this AMI with MySQL Sandbox to show how easy it is to have a test environment if you want to take your existing MySQL 5.0.x versions to OurDelta MySQL or any other version of MySQL.

Have Fun


Screen log:

[root@domU-12-31-39-04-71-D3 ~]# yum update MySQL-OurDelta*
Setting up Update Process
Setting up repositories
Reading repository metadata in from local files
Resolving Dependencies
--> Populating transaction set with selected packages. Please wait.
---> Package MySQL-OurDelta-client.i386 0:5.0.77.d8-54.el4 set to be updated
---> Package MySQL-OurDelta-shared.i386 0:5.0.77.d8-54.el4 set to be updated
---> Package MySQL-OurDelta-test.i386 0:5.0.77.d8-54.el4 set to be updated
---> Package MySQL-OurDelta-server.i386 0:5.0.77.d8-54.el4 set to be updated
---> Package MySQL-OurDelta-devel.i386 0:5.0.77.d8-54.el4 set to be updated
--> Running transaction check

Dependencies Resolved

Package Arch Version Repository Size
MySQL-OurDelta-client i386 5.0.77.d8-54.el4 ourdelta 6.0 M
MySQL-OurDelta-devel i386 5.0.77.d8-54.el4 ourdelta 7.5 M
MySQL-OurDelta-server i386 5.0.77.d8-54.el4 ourdelta 17 M
MySQL-OurDelta-shared i386 5.0.77.d8-54.el4 ourdelta 1.7 M
MySQL-OurDelta-test i386 5.0.77.d8-54.el4 ourdelta 6.7 M

Transaction Summary
Install 0 Package(s)
Update 5 Package(s)
Remove 0 Package(s)
Total download size: 39 M
Downloading Packages:
(1/5): MySQL-OurDelta-cli 100% |=========================| 6.0 MB 00:03
(2/5): MySQL-OurDelta-sha 100% |=========================| 1.7 MB 00:01
(3/5): MySQL-OurDelta-tes 100% |=========================| 6.7 MB 00:03
(4/5): MySQL-OurDelta-ser 100% |=========================| 17 MB 00:09
(5/5): MySQL-OurDelta-dev 100% |=========================| 7.5 MB 00:03
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
Updating : MySQL-OurDelta-client ####################### [ 1/10]
Updating : MySQL-OurDelta-shared ####################### [ 2/10]
Updating : MySQL-OurDelta-test ####################### [ 3/10]
Giving mysqld 5 seconds to exit nicely
Updating : MySQL-OurDelta-server ####################### [ 4/10]
090608 20:19:24 [Warning] option 'sync-mirror-binlog': unsigned value 18446744073709551615 adjusted to 4294967295
090608 20:19:24 [Warning] option 'sync-mirror-binlog': unsigned value 18446744073709551615 adjusted to 4294967295
To do so, start the server, then run:

which will also give you the option of removing the test
databases and anonymous user created by default. This is
strongly recommended for production servers.

See the manual for more instructions.

MySQL bug reports should be submitted through
Issues related to the OurDelta patches or packaging should be
submitted in the OurDelta project on Launchpad, simply follow the
links via

The latest information about MySQL is available on the web at
Other information sources are
- the MySQL Mailing List archives (;
- the MySQL Forums (

Notes regarding SELinux on this platform:

The default policy might cause server startup to fail because it is
not allowed to access critical files. In this case, please update
your installation.

The default policy might also cause inavailability of SSL related
features because the server is not allowed to access /dev/random
and /dev/urandom. If this is a problem, please do the following:

1) install selinux-policy-targeted-sources from your OS vendor
2) add the following two lines to /etc/selinux/targeted/src/policy/domains/program/mysqld.te:
allow mysqld_t random_device_t:chr_file read;
allow mysqld_t urandom_device_t:chr_file read;
3) cd to /etc/selinux/targeted/src/policy and issue the following command:
make load

Starting MySQL.[ OK ]
Giving mysqld 2 seconds to start
Updating : MySQL-OurDelta-devel ####################### [ 5/10]
Cleanup : MySQL-OurDelta-client ####################### [ 6/10]
Cleanup : MySQL-OurDelta-shared ####################### [ 7/10]
Cleanup : MySQL-OurDelta-test ####################### [ 8/10]
Cleanup : MySQL-OurDelta-server ####################### [ 9/10]
Cleanup : MySQL-OurDelta-devel ####################### [10/10]

Updated: MySQL-OurDelta-client.i386 0:5.0.77.d8-54.el4 MySQL-OurDelta-devel.i386 0:5.0.77.d8-54.el4 MySQL-OurDelta-server.i386 0:5.0.77.d8-54.el4 MySQL-OurDelta-shared.i386 0:5.0.77.d8-54.el4 MySQL-OurDelta-test.i386 0:5.0.77.d8-54.el4

Now just make sure the MySQL instance is secure:

[root@domU-12-31-39-04-71-D3 ~]# /usr/bin/mysql_secure_installation


In order to log into MySQL to secure it, we'll need the current
password for the root user. If you've just installed MySQL, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.

Enter current password for root (enter for none):
OK, successfully used password, moving on...

Setting the root password ensures that nobody can log into the MySQL
root user without the proper authorisation.

You already have a root password set, so you can safely answer 'n'.

Change the root password? [Y/n] n
... skipping.

By default, a MySQL installation has an anonymous user, allowing anyone
to log into MySQL without having to have a user account created for
them. This is intended only for testing, and to make the installation
go a bit smoother. You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n]
... Success!

Normally, root should only be allowed to connect from 'localhost'. This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n]
... Success!

By default, MySQL comes with a database named 'test' that anyone can
access. This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n]
- Dropping test database...
... Success!
- Removing privileges on test database...
... Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n]
... Success!

Cleaning up...

All done! If you've completed all of the above steps, your MySQL
installation should now be secure.

Thanks for using MySQL!

OurDelta MySQL on EC2 – install


Arjen would give me an earful if I got this wrong or poorly worded.

“OurDelta produces enhanced builds for MySQL, with OurDelta and third-party patches, for common production platforms.”

Over the next series of articles I am going to put the many additions to the MySQL 5.0 baseline through their paces on Amazon EC2.

Using a base CentOS 4.4 image I had lying around on Amazon S3, I had OurDelta installed and secured in no time flat.

This is an outline for installing OurDelta onto CentOS 4:


yum install yum-plugin-protectbase
rpm --import
rpm -Uvh
mkdir downloads
cd downloads/
yum localinstall MySQL-OurDelta-*
ps -ef|grep mysql
mysql -u root -p


That’s it. A straightforward install.
I am getting lazy in my old age; I used a pipe combo to generate the install commands above:

history | grep -v history | awk '{ print $2" "$3" "$4 }'

The next article will cover upgrading any existing MySQL 5.0 release to OurDelta.
I have saved this OurDelta MySQL install as an AMI.

If people are interested in making this a public AMI please post a comment.

Have Fun


Is EC2 useful as a database server?

Plenty of people have been excited by the prospect of Amazon EC2 and the ability to scale out your databases as load increases beyond your original configuration. I noticed Morgan Tocker and Carl Mercier are going to be presenting on this topic at the upcoming MySQL Conference.

However, almost immediately, people worry about the lack of persistence of data across instance terminations.
In a sense, people want dedicated hosting services instead of what EC2 really is.

You need to think of Amazon EC2 in an electricity-generation metaphor.
Coal, nuclear fission and gas provide the base-load electricity which is on 24×7.
Gas and hydro can act as peak-load generation as well, so at peak periods, when the requirement for electricity is higher, generators can switch on these extra resources to cope.

Amazon EC2 is the same, it is there to service peak load.
Either you are using Amazon EC2 as a base load server or you are using a dedicated hosting service to provide base load. You add additional server resources during peak periods as required.
As a dedicated hosting service, EC2 is not actually the cheapest option out there. There are plenty of dedicated hosting providers who will give your application and database cheaper base-load capacity. That said, many people choose to run both application and database servers on EC2 as base-load servers, and the uptime of these instances is good.

What this means is that using EC2 for base load requires you to implement additional protections for your data to provide persistence. This may be in the form of clustering technologies, replication technologies, or both. So running EC2 as a base-load database server adds complexity, which is why numerous companies have sprung up: they are essentially providing a way for businesses to pay someone else to deal with it.

The hidden value here is, in adopting a more thorough attitude to data persistence and redundancy, your database is more robust. So if or when your dedicated hosting provider has an outage, your architectural design is already in a position to handle it.

The danger is that you see any ongoing performance issue (a demand for additional base load) as solved by throwing hardware at it, rather than reviewing whether the demand is justified or whether it can be reduced by tuning the application, database or architecture.

Update: Added Carl as co-presenter at the MySQL conference.

Have Fun