Sorry for the recent loss of the site. Blogger just didn’t play nicely with the new hosting provider, no matter what I did. Out with the old and in with the new.
So what is on the agenda after such a long period without posts?
Some ideas for future posts:
- Hosted database solutions: the good, the bad and the ugly.
- Running databases on virtual machines/clouds.
- DBA toolsets: do custom scripts still have a place in a world containing percona-tools?
- DBAs: is this the beginning of the end, or the end of the beginning? Or am I just running out of clichés?
- SQL is dead, long live SQL.
- Interviews with DBAs: 10 questions answered by the finest DBAs.
- Remote DBA work: is it the promised land?
- Databases and machine learning: is the self-tuning database in sight?
If you ever need to re-construct the directory structure on Linux/Unix on a different machine, you can just run this command:
# Generates a list of mkdir commands to re-construct the directory structure from the current location
find . -type d | while read -r line; do echo "mkdir -p \"$line\""; done
If you want to copy the files as well, just use scp or rsync.
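If you want the directories created directly, rather than a list of commands to run later, the same find loop can create them under a new root. A minimal sketch (the /tmp/src and /tmp/dest paths are made up for the demo):

```shell
# Build a small sample tree so the example is self-contained
mkdir -p /tmp/src/a/b "/tmp/src/c d"

# Mirror the directory structure (directories only, no files) under /tmp/dest
cd /tmp/src
find . -type d | while read -r line; do mkdir -p "/tmp/dest/$line"; done

# Check the result
find /tmp/dest -type d
```

Quoting "$line" keeps directory names containing spaces intact.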
The use case for these kinds of commands is greatly reduced nowadays: if you are using DevOps tools such as Puppet or Chef, they will do this kind of thing automagically, out-of-the-box. If you are running your databases on VMs (with datafiles inside the VM), most of the time you can clone the image and everything is the same.
The aim of all these tools is to make the job of sysadmins and DBAs easier whilst producing an environment where the state is consistent/known.
I had some fun recently with an Oracle database choosing a poor execution plan.
The problem was with a view that had a column explicitly cast to a different type.
create or replace view vw_temp as
select cast(a.ID as NUMBER(19,0)) as ID
from very_large_table a
join large_table b on a.ID = b.ID
where a.Name = 'whatever';
In this case Oracle was unable to push the predicates down into the view, so the joins could not be optimized.
So the moral of the story is: be careful if you are doing casts/converts, or using any function that changes a column, in a view.
For more info about predicate push-down, have a read of this blog entry.
Or this short entry in the documentation.
I was reading some RSS feeds the other day and noticed that Jeremy Schneider over at Ardent Performance Computing was working on getting Oracle RAC working on Amazon EC2.
He looks to have solved the whole Virtual IP issue by using another instance. Nice solution!
When I get a spare moment (don’t believe for a minute that the lack of posts means I am not busy) it would be good to take the scripts and get the whole Oracle RAC working in Amazon EC2 finally!
I have had the chance to play with some columnar databases, Vertica and Ingres VectorWise, and the performance is good. I used the TPC-H benchmark at scale factor 20 (small, only due to a lack of disk space). So the results are nothing like the recent scale factor 100 benchmarks that Ingres VectorWise published, but useful nonetheless. Sadly I can’t publish any scripts or results, as the IP is owned by my current employer.
Currently I am focused on improving my skills in predictive modeling and analytics. This means using the data, rather than just supporting, recovering and hand-holding the data, a.k.a. being a DBA.
Listening to trance, in the zone and most definitely Having Fun!
I noticed that people were hitting the site for information on how to run mysqlslap.
To help out those searchers, here is a quick mysqlslap howto.
- Make sure you have MySQL 5.1.4 or higher (the release that introduced mysqlslap). Download MySQL from the MySQL website.
- Make sure your MySQL database is running.
- Run mysqlslap, using progressively more concurrent threads:
mysqlslap --concurrency=1,25,50,100 --iterations=10 --number-int-cols=2 \
--number-char-cols=3 --auto-generate-sql --csv=/tmp/mysqlslap.csv \
--engine=blackhole,myisam,innodb --auto-generate-sql-add-autoincrement \
--auto-generate-sql-load-type=mixed --number-of-queries=100 --user=root
For detailed descriptions of each parameter, see the MySQL documentation.
If you want to see how I used mysqlslap to test MySQL performance on Amazon EC2, here is the list of posts.
Slave I/O thread: error reconnecting to master
Last_IO_Error: error connecting to master
Check that the slave can connect to the master instance, using the following steps:
- Use ping to check the master is reachable, e.g. ping master.yourdomain.com
- Use ping with the IP address to check that DNS isn’t broken, e.g. ping 192.168.1.2
- Use the mysql client to connect from the slave to the master, e.g. mysql -u repluser -pREPLPASS --host=master.yourdomain.com --port=3306 (substitute whatever port you are connecting to the master on)
- If all steps work, then check that repluser (the slave replication user) has the REPLICATION SLAVE privilege, e.g. show grants for 'repluser'@'slave.yourdomain.com';
- If Steps 1 and 2 fail, you have a network or firewall issue. Check with a network/firewall administrator, or check the logs if you wear those hats.
- If Step 1 fails but Step 2 works, you have a DNS or name resolution issue. Check that the slave can connect to other hosts and resolve names, using the mysql client or ssh/telnet/remote desktop.
- If Step 3 fails, check the error reported: it will either be an authentication issue (login failed/denied) or an issue with the TCP port the master is listening on. A good way to verify that the port is open is telnet master.yourdomain.com 3306 (or whatever port the master is listening on); if that fails, then a firewall in the network is blocking that port.
- If you get to Step 4, everything looks fine, and the slave does reconnect on retrying, then you probably had a temporary network failure, name resolution failure, firewall failure, or some combination of these.
Continuing sporadic issues:
Get hold of the network and firewall logs.
If this is not possible, set up a script that periodically pings, connects with the mysql client, and logs the results over time, to prove to your friendly network admin that there is a problem with the network.
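A minimal sketch of such a script, assuming bash and a /tmp log path (it defaults to 127.0.0.1 purely so the example runs standalone; point it at your master and run it from cron):

```shell
#!/bin/bash
# Connectivity logger: pass your master host and port as arguments,
# e.g. ./check_master.sh master.yourdomain.com 3306
HOST=${1:-127.0.0.1}
PORT=${2:-3306}
LOG=/tmp/master-connectivity.log

stamp=$(date '+%Y-%m-%d %H:%M:%S')

# 1. Is the host reachable at all?
if ping -c 1 -W 2 "$HOST" >/dev/null 2>&1; then
    echo "$stamp ping $HOST OK" >> "$LOG"
else
    echo "$stamp ping $HOST FAILED" >> "$LOG"
fi

# 2. Is the MySQL port open? (bash's /dev/tcp; use telnet or nc elsewhere)
if (exec 3<>"/dev/tcp/$HOST/$PORT") 2>/dev/null; then
    echo "$stamp tcp $HOST:$PORT OK" >> "$LOG"
else
    echo "$stamp tcp $HOST:$PORT FAILED" >> "$LOG"
fi
```

Add a mysql client login test in the same style if credentials are available; after a day or two the log gives your network admin hard numbers instead of anecdotes.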
How MySQL deals with it:
MySQL will try to reconnect by itself after a network failure or query timeout.
The process is governed by a few variables: slave-net-timeout, master-connect-retry and master-retry-count.
In a nutshell, a MySQL slave will try to reconnect after getting a timeout (slave-net-timeout seconds), waiting master-connect-retry seconds between attempts, but only for the number of attempts specified in master-retry-count.
By default, a MySQL slave waits one hour before retrying, and will then retry every 60 seconds for 86,400 attempts. That is every minute for 60 days.
If the one-hour slave-net-timeout is too long for your DR/slave-read strategy, you will need to adjust it accordingly.
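For example, a my.cnf fragment on the slave might look like this (the values are illustrative only; these are server options on older MySQL versions, while newer versions set the retry behaviour via CHANGE MASTER TO):

```
[mysqld]
# Declare the master connection dead after 60 seconds instead of one hour
slave-net-timeout    = 60
# Then retry the connection every 10 seconds...
master-connect-retry = 10
# ...for up to 86,400 attempts before giving up
master-retry-count   = 86400
```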
Thanks to leBolide, who discovered that there is a 32-character limit on the password for replication.
I was reviewing what I had written over the last two years: how people had reacted via comments and page views, and even which keywords were most popular.
Here is the first draft (pending input from interested readers via the comments):
- More dbt2 benchmark articles.
- Amazon EC2 LVM snapshots vs EBS snapshots.
- Backup software for MySQL and specifically on EC2.
- Benchmarking Amazon EBS using iozone.
- Installing, testing and benchmarking the non-standard MySQL engines such as OurDelta and XtraDB and plugins.
- Using EC2 as a test bed for new versions of MySQL and Oracle and other dbs.
- New: Using Microsoft SQL server on EC2.
- New: Using PostgreSQL on EC2.
- New: Using DB2 on EC2.
New stuff over the next 12 months:
The newest theme for the next 12 months will be revisiting the original idea behind the blog: publishing useful methods and recipes for DBAs, DBA/developers, and people wearing DBA, developer, sysadmin, network admin, or even more hats.
A dojo is commonly known as a place for training in martial arts. This dojo is a training place for Database Administrators (DBAs).
A common method in martial arts is the kata: the idea of honing your skills (as a DBA, in this case) through training and practice. So articles will be published under the kata category covering exercises to help people become more polished and skilful in the common tasks and responsibilities of a DBA.