IOZone benchmark vs EC2 heat maps


I have been using the IOZone benchmarking tool to test the I/O performance of EC2 running CentOS 4.

In the last post I showed 3D surface charts of how I/O performance degrades as the file size grows, dropping quite sharply as the file migrates from CPU cache to memory cache to disk.

I redid the charts as what Excel calls contour charts, though they remind me more of heat maps.

The change was striking: suddenly you can easily see the boundaries that IOZone has found for
various file and record sizes. The other standout feature was the appearance of holes, or cool spots, in the charts at specific file and record sizes.

I went back and specifically tested those file and record combinations. For example, to run the write, read and random read/write tests on a 16M file with a 1M record size, I used this command:

iozone -R -r 1m -s 16m -i 0 -i 1 -i 2

I have used similar settings for the throughput test as well.
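Spot checks like the one above can be scripted as a loop over the grid of sizes. A minimal sketch (the sizes here are illustrative, not the full set IOZone's -a mode covers; it prints the commands, so pipe the output to sh to actually run them):

```shell
# Print an iozone command for each file/record size combination in the grid.
# Pipe to sh to run; append the results to one file for charting.
for fs in 16m 64m; do
  for rs in 64k 1m; do
    echo "iozone -R -r $rs -s $fs -i 0 -i 1 -i 2"
  done
done
```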

Comments:

The whole reason for stepping back and running the benchmark tools is that the results suggest that choosing the appropriate column datatypes, row size, table size and memory buffers could have a large impact on your database's performance.

Look at these contour maps and you will start to see what I mean. The ridge that appears, for a band of record sizes, as the file size increases is also very interesting.

Previous Articles:

IOZone Benchmark vs EC2 – Part 1

Resources:
http://s3.amazonaws.com/dbadojo_benchmark/iozone_heatmap_writes.JPG
http://s3.amazonaws.com/dbadojo_benchmark/iozone_heatmap_reads.JPG
http://s3.amazonaws.com/dbadojo_benchmark/iozone_heatmap_random_writes.JPG
http://s3.amazonaws.com/dbadojo_benchmark/iozone_heatmap_random_reads.JPG
Zipped Excel Spreadsheet for IOZone Benchmark data and charts

Have Fun
Paul


IOzone benchmark vs EC2

Here are some pretty surface area graphs from the EC2 benchmark. The steps down mark the transitions from CPU cache to memory cache; the final cliff is the drop to disk once the file was larger than the available memory.

As I mentioned yesterday, I was running an IOzone benchmark on EC2 to see how the disk performs, after reading about it in this online benchmark article.
There are a couple of nice features of this benchmark:

  1. Output is saved in a format ready for surface area graphs in Excel
  2. Gnuplot options available as well
  3. It tests stride size to see if there are any stripe boundary or IO library issues.

Another nice touch: you can download my results in this file, iozone_benchmark_ec2.zip, served from Amazon S3 (right click, Save As).

The IOzone documentation [PDF] is short but detailed.

Installing IOzone:

  1. wget http://www.iozone.org/src/current/iozone-3-283.i386.rpm
  2. rpm -Uvh iozone-3-283.i386.rpm
  3. export PATH=$PATH:/opt/iozone/bin

Running a benchmark:

Note: the file should be larger than available memory; -g 2G sets the maximum file size to 2 gigabytes.

  1. iozone -Ra -g 2G > iozone.out
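The "larger than available memory" rule can be derived rather than hard-coded. A minimal sketch, assuming a Linux /proc/meminfo (it prints the resulting command; pipe to sh to run it):

```shell
# Derive a maximum file size of roughly twice physical RAM, so the test
# is sure to spill past the page cache. MemTotal in /proc/meminfo is in kB.
mem_kb=$(awk '/^MemTotal/ {print $2}' /proc/meminfo)
echo "iozone -Ra -g $((mem_kb * 2))k > iozone.out"
```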

The full size graphs can be found via Amazon S3

http://s3.amazonaws.com/dbadojo_benchmark/iozone_ec2_write.GIF
http://s3.amazonaws.com/dbadojo_benchmark/iozone_ec2_read.GIF
http://s3.amazonaws.com/dbadojo_benchmark/iozone_ec2_random_read.GIF
http://s3.amazonaws.com/dbadojo_benchmark/iozone_ec2_random_write.GIF

Have Fun

Paul

Bonnie IO Benchmark vs EC2

Andy, a reader of the blog left a comment asking if I could run some benchmarking of EC2.

If someone takes the time to comment, making the effort to respond is always worthwhile. Feedback drives most conversation, business and innovation.

So I went off and googled for the most appropriate and easiest benchmarking tools.

http://www.tux.org/pub/benchmarks/
http://oss.sgi.com/LDP/HOWTO/Benchmarking-HOWTO.html
http://www.coker.com.au/bonnie++/
http://www.acnc.com/benchmarks.html
http://portal.acm.org/citation.cfm?id=71309 IOBench

I settled on bonnie and bonnie++. Both may seem a little long in the tooth given when they were developed, but they serve the need: testing the raw speed of both the root partition and the /mnt partition that come with an EC2 virtual machine, or Amazon Machine Image (AMI).

If you want to see other web posts on benchmarking EC2, I found a couple of good articles as well.

DeCare Systems has a bunch of articles on EC2; this one has information on using a Java benchmarking tool, Javolution:
http://blog.decaresystems.ie/index.php/2007/01/29/amazon-web-services-the-future-of-datacenter-computing-part-1/

Other articles on benchmarking on EC2:
http://paul-m-jones.com/blog/?p=238

Comments:

  1. Bonnie required fewer dependent packages than bonnie++.
  2. Both tools were easy to install and run.
  3. Both tools saturated IO, and therefore bypassed any caching effects, once the file size was sufficiently large.

I will follow up with some more benchmarks and analysis of the results in the next couple of days, and then it is back to MySQL and Oracle.

Installing bonnie on CentOS 4.4

  1. Download bonnie: wget http://www.tux.org/pub/benchmarks/Disk_IO/bonnie.tar.gz or from Google Code
  2. Install GCC: yum install gcc
  3. Compile: gcc -O2 -o bonnie bonnie.c
  4. Run with 100M file: ./bonnie -d /mnt/bonnie -s 100 -m centos4
  5. Run with 1G file: ./bonnie -d /mnt/bonnie -s 1024 -m centos4

Results for Bonnie

100M file:

              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
centos4 100 25464 53.0 166352 47.1 189359 55.5 25127 53.7 412332 52.3 36552.4 45.7
centos4 100 25038 52.6 216190 61.2 193490 54.8 24317 52.2 418285 49.0 34320.0 51.5
centos4 100 25535 53.4 123481 37.4 188139 57.0 25472 54.5 417667 48.9 72301.4 90.4
centos4 100 25118 52.7 130512 39.5 191710 54.3 25546 53.6 576862 62.0 80402.0 100.5
centos4 100 24205 52.9 183853 53.9 223497 61.1 24852 51.9 400162 54.7 35898.3 35.9
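Five runs at each size are easier to compare as an average. A quick awk sketch, assuming the five result lines above are saved to a hypothetical bonnie_100m.txt (field 5 is the sequential block-output K/sec):

```shell
# Average field 5 (sequential block output, K/sec) over the result rows.
# bonnie_100m.txt is assumed to hold the five centos4 lines above.
awk '/^centos4/ {sum += $5; n++} END {printf "%.1f K/sec over %d runs\n", sum/n, n}' bonnie_100m.txt
```

The same one-liner works for any column by changing the field number.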

1 Gig file:


              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
centos4 1024 23573 50.0 28648 6.2 19739 4.5 6892 12.1 409836 38.7 38105.0 38.1

Installing bonnie++ on CentOS 4.4

  1. Download: wget http://www.coker.com.au/bonnie++/bonnie++-1.03a.tgz
  2. Install dependencies: yum install compat-gcc-32-c++.i386 gcc-c++.i386 libstdc++.i386
  3. Configure bonnie++: ./configure
  4. Make bonnie++: make
  5. Run on /mnt: ./bonnie++ -d /mnt/oracle -s 3000 -n 1 -m centOS4 -x 3 -r 1500 -u oracle
  6. Run on /: ./bonnie++ -d /home/oracle -s 3000 -n 1 -m centOS4 -x 3 -r 150

Results for Bonnie++


/ mountpoint:

name,file_size,putc,putc_cpu,put_block,put_block_cpu,rewrite,rewrite_cpu,getc,getc_cpu,get_block,get_block_cpu,seeks,seeks_cpu,num_files,seq_create,seq_create_cpu,seq_stat,seq_stat_cpu,seq_del,seq_del_cpu,ran_create,ran_create_cpu,ran_stat,ran_stat_cpu,ran_del,ran_del_cpu
centOS4,3000M,13469,28,59124,15,19772,1,21629,38,51205,1,254.5,0,1,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++
centOS4,3000M,17203,36,57555,15,20025,1,22490,40,49618,0,247.9,0,1,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++
centOS4,3000M,23918,49,54411,14,19845,1,23120,41,52089,1,246.4,0,1,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++

/mnt mountpoint:

name,file_size,putc,putc_cpu,put_block,put_block_cpu,rewrite,rewrite_cpu,getc,getc_cpu,get_block,get_block_cpu,seeks,seeks_cpu,num_files,seq_create,seq_create_cpu,seq_stat,seq_stat_cpu,seq_del,seq_del_cpu,ran_create,ran_create_cpu,ran_stat,ran_stat_cpu,ran_del,ran_del_cpu
centOS4,3000M,23847,48,45251,12,15149,2,19438,38,41982,5,199.8,0,1,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++
centOS4,3000M,23189,49,42246,11,16938,4,21007,41,52733,1,183.9,0,1,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++
centOS4,3000M,24195,49,44167,11,19923,2,20465,40,47364,1,185.0,0,1,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++

MySQL DBT2 Benchmark on EC2 part 1

In the last couple of articles I have been using mysqlslap, the load simulation/generation tool shipped with MySQL 5.1.

I read around on some other blogs and thought it might also be useful to use a benchmarking tool. DBT2 is a TPC-C-like benchmark tool provided by OSDL. You can download the software from the DBT SourceForge site.

TPC-C is an online transaction processing benchmark whose overall measure is Transactions per Minute, or TPM. The higher, the better.

I chose the DBT2 benchmark because it is the tool MySQL AB themselves have used to benchmark MySQL Cluster. I also found plenty of useful information on Peter Zaitsev's MySQL Performance Blog, specifically this presentation (PDF).

On with the show. Once again, we need to test a plain standalone MySQL database first; once that is done we can build on it to test MySQL replication (master-slave) and MySQL NDB Cluster.

The documentation for the tool is essentially the README and README-MYSQL files in the tarball. The README mentions a user manual, which unfortunately I couldn't find anywhere.
There is scant mention of prerequisites either; hopefully this article will help fill the void for a while.
I will probably need to write a proper HOWTO, as the README files contained a bunch of conflicting information that differed from the files actually available.

Prerequisites:

  1. Perl 5.8
  2. Perl Modules: Chart::Graph::Gnuplot, Test::Parser, Test::Reporter, XML::Simple, XML::Twig
  3. Linux packages: gcc gnuplot sysstat
  4. MySQL 5.0 for stored procedures.

Comments:

  • I used CentOS 4.4 again as the base linux distro and added the required packages.
  • Use CPAN to install the Perl modules, as it will handle any dependencies.
  • Some of the names are different from the README files.
  • The TPM results for 20 warehouses and 20 concurrent sessions scaled reasonably with the number of terminal threads: the lower the thread count, the lower the TPM.
  • For the most part the benchmark was constrained by IO waits. I was using the /mnt mountpoint on EC2, and whilst a test write of a 100M file using dd if=/dev/zero of=/mnt/data/test1 count=1 bs=100M was quick, the random nature of the IO was a killer.
  • InnoDB logfiles were separated from the ibdata file.
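The CPAN step can be scripted over the module list from the prerequisites. A small sketch (it prints one install command per module; pipe to sh to run, and CPAN resolves dependencies as it goes):

```shell
# Print a CPAN install command for each required Perl module.
for m in Chart::Graph::Gnuplot Test::Parser Test::Reporter XML::Simple XML::Twig; do
  echo "perl -MCPAN -e \"install '$m'\""
done
```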

Results:

  • Used an optimized my.cnf based on the my-innodb-heavy-4G.cnf sample file (see listing below)
  • Data generated 20 warehouses which equated to a 3 Gigabyte database
  • Benchmark settings of 100 terminal threads, 20 concurrent sessions, 20 warehouses
  • Duration of 30 minutes.
  • 2016.02 TPM (transactions per minute)

As I have time, I will try to replicate the size (200 warehouses) of Peter Zaitsev's article; however, the data generation step took a fair amount of time.

Have Fun

Paul



Get and install the prerequisites


cd /mnt
wget http://optusnet.dl.sourceforge.net/sourceforge/osdldbt/dbt2-0.40.tar.gz
tar -xzvf dbt2-0.40.tar.gz
yum install gnuplot gcc sysstat
cd dbt2-0.40
./configure --with-mysql --with-mysql-libs=/usr/local/mysql/lib/ \
--with-mysql-includes=/usr/local/mysql/include
make

Generate the 20 warehouse dataset

mkdir -p /mnt/data
src/datagen -w 20 -d /mnt/data --mysql

warehouses = 20
districts = 10
customers = 3000
items = 100000
orders = 3000
stock = 100000
new_orders = 900

Output directory of data files: /mnt/data

Generating data files for 20 warehouse(s)...
Generating item table data...
Finished item table data...
Generating warehouse table data...
Finished warehouse table data...
Generating stock table data...
Finished stock table data...
Generating district table data...
Finished district table data...
Generating customer table data...
Finished customer table data...
Generating history table data...
Finished history table data...
Generating order and order-line table data...

Finished order and order-line table data...
Generating new-order table data...
Finished new-order table data...

Load the dataset after adding an additional index, as suggested by Zaitsev's presentation

cd /mnt/dbt2-0.40/scripts/mysql

vi build_db.sh

Modified the CREATE TABLE to add an index to the NEW_ORDER table:

NEW_ORDER="CREATE TABLE new_order (
no_o_id int(11) NOT NULL default '0',
no_d_id int(11) NOT NULL default '0',
no_w_id int(11) NOT NULL default '0',
PRIMARY KEY (no_d_id,no_w_id,no_o_id),
KEY ix_no_wid_did (no_w_id,no_d_id)
)"

Load the dataset

sh build_db.sh -d dbt2 -f /mnt/data -s /tmp/mysql.sock -u root -p $MYSQLPASS

Loading of DBT2 dataset located in /mnt/data to database dbt2.

DB_ENGINE: INNODB
DB_SCHEME: OPTIMIZED
DB_HOST: localhost
DB_USER: root
DB_SOCKET: /tmp/mysql.sock

Creating table STOCK
Creating table ITEM
Creating table ORDER_LINE
Creating table ORDERS
Creating table NEW_ORDER
Creating table HISTORY
Creating table CUSTOMER
Creating table DISTRICT
Creating table WAREHOUSE

Loading table customer
Loading table district
Loading table history
Loading table item
Loading table new_order
Loading table order_line
Loading table orders
Loading table stock
Loading table warehouse

Edit the MySQL stored procedures to fix delimiter (replacing |; with |)

cd /mnt/dbt2-0.40/storedproc/mysql
sed -i -e 's/|\;/|/' *.sql
mysql -u root -p$MYSQLPASS -D dbt2 < new_order.sql
mysql -u root -p$MYSQLPASS -D dbt2 < new_order_2.sql
mysql -u root -p$MYSQLPASS -D dbt2 < order_status.sql
mysql -u root -p$MYSQLPASS -D dbt2 < payment.sql
mysql -u root -p$MYSQLPASS -D dbt2 < stock_level.sql

Check that MySQL is ready to go

mysql -u root -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 39
Server version: 5.1.20-beta-log MySQL Community Server (GPL)

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> use dbt2
Database changed
mysql> show tables;
+----------------+
| Tables_in_dbt2 |
+----------------+
| customer |
| district |
| history |
| item |
| new_order |
| order_line |
| orders |
| stock |
| warehouse |
+----------------+
9 rows in set (0.00 sec)

mysql> show table status
-> ;
+------------+--------+---------+------------+---------+----------------+-------------+-----------------+--------------+-----------+----------------+---------------------+-------------+------------+-------------------+----------+----------------+------------------------+
| Name | Engine | Version | Row_format | Rows | Avg_row_length | Data_length | Max_data_length | Index_length | Data_free | Auto_increment | Create_time | Update_time | Check_time | Collation | Checksum | Create_options | Comment |
+------------+--------+---------+------------+---------+----------------+-------------+-----------------+--------------+-----------+----------------+---------------------+-------------+------------+-------------------+----------+----------------+------------------------+
| customer | InnoDB | 10 | Compact | 603562 | 665 | 401604608 | 0 | 46907392 | 0 | NULL | 2007-08-31 03:18:15 | NULL | NULL | latin1_swedish_ci | NULL | | InnoDB free: 229376 kB |
| district | InnoDB | 10 | Compact | 43 | 1524 | 65536 | 0 | 0 | 0 | NULL | 2007-08-31 03:18:15 | NULL | NULL | latin1_swedish_ci | NULL | | InnoDB free: 229376 kB |
| history | InnoDB | 10 | Compact | 600452 | 83 | 49889280 | 0 | 0 | 0 | NULL | 2007-08-31 03:18:15 | NULL | NULL | latin1_swedish_ci | NULL | | InnoDB free: 229376 kB |
| item | InnoDB | 10 | Compact | 100160 | 110 | 11026432 | 0 | 0 | 0 | NULL | 2007-08-31 03:18:15 | NULL | NULL | latin1_swedish_ci | NULL | | InnoDB free: 229376 kB |
| new_order | InnoDB | 10 | Compact | 152246 | 51 | 7880704 | 0 | 3686400 | 0 | NULL | 2007-08-31 03:18:15 | NULL | NULL | latin1_swedish_ci | NULL | | InnoDB free: 229376 kB |
| order_line | InnoDB | 10 | Compact | 5697509 | 95 | 545259520 | 0 | 0 | 0 | NULL | 2007-08-31 03:18:15 | NULL | NULL | latin1_swedish_ci | NULL | | InnoDB free: 229376 kB |
| orders | InnoDB | 10 | Compact | 600343 | 60 | 36257792 | 0 | 27885568 | 0 | NULL | 2007-08-31 03:18:15 | NULL | NULL | latin1_swedish_ci | NULL | | InnoDB free: 229376 kB |
| stock | InnoDB | 10 | Compact | 2000025 | 382 | 764411904 | 0 | 0 | 0 | NULL | 2007-08-31 03:18:15 | NULL | NULL | latin1_swedish_ci | NULL | | InnoDB free: 229376 kB |
| warehouse | InnoDB | 10 | Compact | 20 | 819 | 16384 | 0 | 0 | 0 | NULL | 2007-08-31 03:18:15 | NULL | NULL | latin1_swedish_ci | NULL | | InnoDB free: 229376 kB |
+------------+--------+---------+------------+---------+----------------+-------------+-----------------+--------------+-----------+----------------+---------------------+-------------+------------+-------------------+----------+----------------+------------------------+
9 rows in set (0.77 sec)

mysql> SHOW PROCEDURE STATUS
-> ;
+------+--------------+-----------+----------------+---------------------+---------------------+---------------+---------+
| Db | Name | Type | Definer | Modified | Created | Security_type | Comment |
+------+--------------+-----------+----------------+---------------------+---------------------+---------------+---------+
| dbt2 | delivery | PROCEDURE | root@localhost | 2007-08-30 21:21:11 | 2007-08-30 21:21:11 | DEFINER | |
| dbt2 | new_order | PROCEDURE | root@localhost | 2007-08-30 21:22:33 | 2007-08-30 21:22:33 | DEFINER | |
| dbt2 | new_order_2 | PROCEDURE | root@localhost | 2007-08-30 21:22:38 | 2007-08-30 21:22:38 | DEFINER | |
| dbt2 | order_status | PROCEDURE | root@localhost | 2007-08-30 21:22:44 | 2007-08-30 21:22:44 | DEFINER | |
| dbt2 | payment | PROCEDURE | root@localhost | 2007-08-30 21:22:51 | 2007-08-30 21:22:51 | DEFINER | |
| dbt2 | stock_level | PROCEDURE | root@localhost | 2007-08-30 21:22:56 | 2007-08-30 21:22:56 | DEFINER | |
+------+--------------+-----------+----------------+---------------------+---------------------+---------------+---------+
6 rows in set (0.00 sec)

Listing of my.cnf

grep -v "#" /etc/my.cnf|sed -e '/^$/d'

[client]
port = 3306
socket = /tmp/mysql.sock
[mysqld]
port = 3306
socket = /tmp/mysql.sock
back_log = 50
max_connections = 200
max_connect_errors = 10
table_cache = 2048
max_allowed_packet = 16M
binlog_cache_size = 1M
max_heap_table_size = 64M
sort_buffer_size = 8M
join_buffer_size = 8M
thread_cache_size = 8
thread_concurrency = 8
query_cache_size = 64M
query_cache_limit = 2M
ft_min_word_len = 4
default_table_type = MyISAM
thread_stack = 192K
transaction_isolation = REPEATABLE-READ
tmp_table_size = 64M
log-bin=mysql-bin
log_slow_queries
long_query_time = 2
log_long_format
server-id = 1
key_buffer_size = 32M
read_buffer_size = 2M
read_rnd_buffer_size = 16M
bulk_insert_buffer_size = 64M
myisam_sort_buffer_size = 128M
myisam_max_sort_file_size = 10G
myisam_max_extra_sort_file_size = 10G
myisam_repair_threads = 1
myisam_recover
innodb_additional_mem_pool_size = 16M
innodb_buffer_pool_size = 1G
innodb_data_file_path = ibdata1:10M:autoextend
innodb_data_home_dir =/mnt/mysql/data/
innodb_file_io_threads = 4
innodb_thread_concurrency = 16
innodb_flush_log_at_trx_commit = 1
innodb_log_buffer_size = 8M
innodb_log_file_size = 256M
innodb_log_files_in_group = 3
innodb_max_dirty_pages_pct = 90
innodb_lock_wait_timeout = 120
[mysqldump]
quick
max_allowed_packet = 16M
[mysql]
no-auto-rehash
[isamchk]
key_buffer = 512M
sort_buffer_size = 512M
read_buffer = 8M
write_buffer = 8M
[myisamchk]
key_buffer = 512M
sort_buffer_size = 512M
read_buffer = 8M
write_buffer = 8M
[mysqlhotcopy]
interactive-timeout
[mysqld_safe]
open-files-limit = 8192

Benchmark runs:

sh run_workload.sh -c 20 -t 20 -d 300 -w 20 -u root -x $MYSQLPASS

MySQL pid file 'yes/var/localhost.pid' does not exist.
MySQL was not stopped, if it was running.
/mnt/dbt2-0.40/scripts/mysql/start_db.sh: illegal option -- p
************************************************************************
* DBT-2 test for mysql started
* *
* Results can be found in output/16 directory
************************************************************************
* *
* Test consists of 3 stages: *
* *
* 1. Start of client to create pool of databases connections *
* 2. Start of driver to emulate terminals and transactions generation *
* 3. Processing of results *
* *
************************************************************************

DATABASE SYSTEM: localhost
DATABASE NAME: dbt2
DATABASE USER: root
DATABASE PASSWORD: *******
DATABASE CONNECTIONS: 20
TERMINAL THREADS: 400
TERMINALS PER WAREHOUSE: 20
SCALE FACTOR(WAREHOUSES): 20
DURATION OF TEST (in sec): 300
1 client stared every 1000 millisecond(s)

Stage 1. Starting up client...
Sleeping 21 seconds

Stage 2. Starting up driver...
1000 threads started per millisecond
estimated rampup time: Sleeping 210 seconds
estimated rampup time has elapsed
estimated steady state time: Sleeping 300 seconds

Stage 3. Processing of results...
Killing client...
MySQL pid file 'yes/var/localhost.pid' does not exist.
MySQL was not stopped, if it was running.
run_workload.sh: line 548: [: -eq: unary operator expected
run_workload.sh: line 498: 10227 Terminated ${abs_top_srcdir}/src/driver ${DRIVER_COMMAND_ARGS} >${OUTPUT_DIR}/driver.out 2>&1
run_workload.sh: line 461: 10185 Terminated ${abs_top_srcdir}/src/client ${CLIENT_COMMAND_ARGS} >${OUTPUT_DIR}/client.out 2>&1
chmod: cannot access `/mnt/dbt2-0.40/scripts/output/16/db/log': No such file or directory
Test completed.
Results are in: /mnt/dbt2-0.40/scripts/output/16

Response Time (s)
Transaction % Average : 90th % Total Rollbacks %
------------ ----- --------------------- ----------- --------------- -----
Delivery 3.53 0.263 : 0.533 71 0 0.00
New Order 45.58 0.028 : 0.076 918 10 1.10
Order Status 3.67 0.071 : 0.173 74 0 0.00
Payment 41.46 0.035 : 0.102 835 0 0.00
Stock Level 5.76 0.003 : 0.004 116 37 46.84
------------ ----- --------------------- ----------- --------------- -----

503.55 new-order transactions per minute (NOTPM)
1.8 minute duration
0 total unknown errors
400 second(s) ramping up
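As a rough cross-check of the NOTPM figure, assuming (my guess at the tool's accounting) that only successful new-order transactions over the measured interval count:

```shell
# (918 new orders - 10 rollbacks) / 1.8 minutes, close to the reported 503.55 NOTPM
awk 'BEGIN {printf "%.2f\n", (918 - 10) / 1.8}'
```

The small residual presumably comes from how the measured interval is rounded in the report.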

sh run_workload.sh -c 20 -t 10 -d 900 -w 20 -u root -x $MYSQLPASS

MySQL pid file 'yes/var/localhost.pid' does not exist.
MySQL was not stopped, if it was running.
/mnt/dbt2-0.40/scripts/mysql/start_db.sh: illegal option -- p
************************************************************************
* DBT-2 test for mysql started
* *
* Results can be found in output/18 directory
************************************************************************
* *
* Test consists of 3 stages: *
* *
* 1. Start of client to create pool of databases connections *
* 2. Start of driver to emulate terminals and transactions generation *
* 3. Processing of results *
* *
************************************************************************

DATABASE SYSTEM: localhost
DATABASE NAME: dbt2
DATABASE USER: root
DATABASE PASSWORD: *******
DATABASE CONNECTIONS: 20
TERMINAL THREADS: 200
TERMINALS PER WAREHOUSE: 10
SCALE FACTOR(WAREHOUSES): 20
DURATION OF TEST (in sec): 900
1 client stared every 1000 millisecond(s)

Stage 1. Starting up client...
Sleeping 21 seconds

Stage 2. Starting up driver...
1000 threads started per millisecond
estimated rampup time: Sleeping 210 seconds
estimated rampup time has elapsed
estimated steady state time: Sleeping 900 seconds

Stage 3. Processing of results...
Killing client...
run_workload.sh: line 461: 13344 Terminated ${abs_top_srcdir}/src/client ${CLIENT_COMMAND_ARGS} >${OUTPUT_DIR}/client.out 2>&1
run_workload.sh: line 498: 13382 Terminated ${abs_top_srcdir}/src/driver ${DRIVER_COMMAND_ARGS} >${OUTPUT_DIR}/driver.out 2>&1
MySQL pid file 'yes/var/localhost.pid' does not exist.
MySQL was not stopped, if it was running.
run_workload.sh: line 548: [: -eq: unary operator expected
chmod: cannot access `/mnt/dbt2-0.40/scripts/output/18/db/log': No such file or directory
Test completed.
Results are in: /mnt/dbt2-0.40/scripts/output/18

1188549933
8597
Response Time (s)
Transaction % Average : 90th % Total Rollbacks %
------------ ----- --------------------- ----------- --------------- -----
Delivery 3.99 0.117 : 0.227 344 0 0.00
New Order 45.19 0.041 : 0.086 3900 33 0.85
Order Status 3.90 0.042 : 0.106 337 0 0.00
Payment 42.79 0.013 : 0.023 3693 0 0.00
Stock Level 4.13 0.003 : 0.003 356 0 0.00
------------ ----- --------------------- ----------- --------------- -----

254.19 new-order transactions per minute (NOTPM)
15.1 minute duration
0 total unknown errors
199 second(s) ramping up


sh run_workload.sh -c 20 -t 5 -d 900 -w 20 -u root -x $MYSQLPASS

MySQL pid file 'yes/var/localhost.pid' does not exist.
MySQL was not stopped, if it was running.
/mnt/dbt2-0.40/scripts/mysql/start_db.sh: illegal option -- p
************************************************************************
* DBT-2 test for mysql started
* *
* Results can be found in output/19 directory
************************************************************************
* *
* Test consists of 3 stages: *
* *
* 1. Start of client to create pool of databases connections *
* 2. Start of driver to emulate terminals and transactions generation *
* 3. Processing of results *
* *
************************************************************************

DATABASE SYSTEM: localhost
DATABASE NAME: dbt2
DATABASE USER: root
DATABASE PASSWORD: *******
DATABASE CONNECTIONS: 20
TERMINAL THREADS: 100
TERMINALS PER WAREHOUSE: 5
SCALE FACTOR(WAREHOUSES): 20
DURATION OF TEST (in sec): 900
1 client stared every 1000 millisecond(s)

Stage 1. Starting up client...
Sleeping 21 seconds

Stage 2. Starting up driver...
1000 threads started per millisecond
estimated rampup time: Sleeping 210 seconds
estimated rampup time has elapsed
estimated steady state time: Sleeping 900 seconds

Stage 3. Processing of results...
Killing client...
run_workload.sh: line 461: 14070 Terminated ${abs_top_srcdir}/src/client ${CLIENT_COMMAND_ARGS} >${OUTPUT_DIR}/client.out 2>&1
MySQL pid file 'yes/var/localhost.pid' does not exist.
MySQL was not stopped, if it was running.
run_workload.sh: line 548: [: -eq: unary operator expected
chmod: cannot access `/mnt/dbt2-0.40/scripts/output/19/db/log': No such file or directory
Test completed.
Results are in: /mnt/dbt2-0.40/scripts/output/19

1188551055
4260
Response Time (s)
Transaction % Average : 90th % Total Rollbacks %
------------ ----- --------------------- ----------- --------------- -----
Delivery 3.81 0.085 : 0.174 163 0 0.00
New Order 47.02 0.030 : 0.080 2011 17 0.85
Order Status 3.95 0.038 : 0.113 169 0 0.00
Payment 41.55 0.010 : 0.021 1777 0 0.00
Stock Level 3.67 0.002 : 0.003 157 0 0.00
------------ ----- --------------------- ----------- --------------- -----

130.21 new-order transactions per minute (NOTPM)
15.2 minute duration
0 total unknown errors
98 second(s) ramping up

Adjust ulimit and filesize

sh run_workload.sh -c 20 -t 200 -d 300 -w 20 -u root -x $MYSQLPASS

error: you're open files ulimit is too small, must be at least 8020

ulimit -a

core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) 10000
pending signals (-i) 13664
max locked memory (kbytes, -l) 32
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 13664
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited

ulimit -n 9000
ulimit -a

core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) 10000
pending signals (-i) 13664
max locked memory (kbytes, -l) 32
max memory size (kbytes, -m) unlimited
open files (-n) 9000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 13664
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited

sh run_workload.sh -c 20 -t 100 -d 3600 -w 20 -u root -x $MYSQLPASS

MySQL pid file 'yes/var/localhost.pid' does not exist.
MySQL was not stopped, if it was running.
/mnt/dbt2-0.40/scripts/mysql/start_db.sh: illegal option -- p
************************************************************************
* DBT-2 test for mysql started
* *
* Results can be found in output/20 directory
************************************************************************
* *
* Test consists of 3 stages: *
* *
* 1. Start of client to create pool of databases connections *
* 2. Start of driver to emulate terminals and transactions generation *
* 3. Processing of results *
* *
************************************************************************

DATABASE SYSTEM: localhost
DATABASE NAME: dbt2
DATABASE USER: root
DATABASE PASSWORD: *******
DATABASE CONNECTIONS: 20
TERMINAL THREADS: 2000
TERMINALS PER WAREHOUSE: 100
SCALE FACTOR(WAREHOUSES): 20
DURATION OF TEST (in sec): 3600
1 client stared every 1000 millisecond(s)

Stage 1. Starting up client...
Sleeping 21 seconds

Stage 2. Starting up driver...
1000 threads started per millisecond
estimated rampup time: Sleeping 210 seconds
estimated rampup time has elapsed
estimated steady state time: Sleeping 3600 seconds

Stage 3. Processing of results...
Killing client...
run_workload.sh: line 461: 14572 Terminated ${abs_top_srcdir}/src/client ${CLIENT_COMMAND_ARGS} >${OUTPUT_DIR}/client.out 2>&1
run_workload.sh: line 498: 14610 Terminated ${abs_top_srcdir}/src/driver ${DRIVER_COMMAND_ARGS} >${OUTPUT_DIR}/driver.out 2>&1
MySQL pid file 'yes/var/localhost.pid' does not exist.
MySQL was not stopped, if it was running.
run_workload.sh: line 548: [: -eq: unary operator expected
run_workload.sh: line 554: 18714 File size limit exceeded${abs_top_srcdir}/scripts/post-process --dir ${OUTPUT_DIR} --xml >${DRIVER_OUTPUT_DIR}/results.out
chmod: cannot access `/mnt/dbt2-0.40/scripts/output/20/db/log': No such file or directory
Test completed.
Results are in: /mnt/dbt2-0.40/scripts/output/20

Run the post-process step manually

./post-process --dir /mnt/dbt2-0.40/scripts/output/20 --xml

Use of uninitialized value at /usr/lib/perl5/site_perl/5.8.5/Test/Parser/Dbt2.pm line
Response Time (s)
Transaction % Average : 90th % Total Rollbacks %
------------ ----- --------------------- ----------- --------------- -----
Delivery 3.79 5.578 : 5.167 5309 0 0.00
New Order 43.84 6.023 : 5.492 61347 618 1.02
Order Status 3.74 5.270 : 5.021 5240 0 0.00
Payment 41.30 5.276 : 5.005 57799 1 0.00
Stock Level 7.33 5.004 : 5.513 10252 4883 90.95
------------ ----- --------------------- ----------- --------------- -----

2016.02 new-order transactions per minute (NOTPM)
29.8 minute duration
0 total unknown errors
2015 second(s) ramping up