Streaming backups in parallel using tee

So you need to build a new set of databases, perhaps in a new location or geographical zone. Management wants it done yesterday because the newly updated service hit the front page of reddit, and your website and its back-end databases are getting smashed.

The standard method would be to stream a backup from the master, or from a dedicated backup slave, and build each new read-only slave from that backup.

You tried streaming the backups in parallel using pssh, because some random database blog said you could: https://dbadojo.com/2020/08/26/streaming-backups-in-parallel-using-pssh/

But that failed with a Python memory error, you don’t want to use the split workaround, and management is still screaming at you… so you search again…

Thinking about the basic requirements: you want one stream of data (from the backup) to go to multiple locations. So you want a Linux utility which can redirect streams… enter the venerable tee command.
tee will redirect a stream from a pipe to as many locations as you want, with the final stream going to a file, or to that webscale file called /dev/null.

Example of streaming a backup to two locations:

mariabackup --backup --slave-info --tmpdir=/tmp --stream=xbstream \
--parallel=4 --datadir=/var/lib/mysql 2>backup.log | tee \
>(ssh -q 192.168.56.112 -t "mbstream --directory=/var/lib/mysql -x --parallel=4") \
>(ssh -q 192.168.56.113 -t "mbstream --directory=/var/lib/mysql -x --parallel=4") \
> /dev/null

This command duplicates the streamed backup into two ssh commands, which connect to two hosts and run mbstream -x to create an unprepared backup in each datadir.
If you have more hosts, it is just a matter of adding redirections and the associated commands.

One space and performance improvement would be to add pigz to compress the stream in parallel before the tee, and to decompress inside each tee’d ssh command.
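
To keep an example copy-paste runnable without a database, here is a local stand-in for that compressed fan-out, using tar as the producer and gzip as the compressor (assumptions: bash for the >(…) process substitution; in the real pipeline you would swap in mariabackup, pigz, and the ssh/mbstream commands):

```shell
# Local stand-in for the backup pipeline: tar is the producer (in place of
# mariabackup --stream), gzip the compressor (pigz is a drop-in parallel
# replacement), and the two process substitutions are the consumers
# (in place of ssh ... "mbstream -x").
mkdir -p /tmp/tee_demo/src /tmp/tee_demo/dest1 /tmp/tee_demo/dest2
seq 1 1000 > /tmp/tee_demo/src/data.txt

tar -C /tmp/tee_demo/src -cf - . | gzip -c | tee \
  >(gzip -dc | tar -C /tmp/tee_demo/dest1 -xf -) \
  >(gzip -dc | tar -C /tmp/tee_demo/dest2 -xf -) \
  > /dev/null

# Process substitutions run in the background; give them a moment to drain.
sleep 1
```

The real variant would place pigz straight after the mariabackup command, and a pigz -dc before each mbstream inside the ssh quotes.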

Until next time.


Full example: Parallel streaming backup using tee.

-- Split standard input into separate parallel streams using tee

mariabackup --backup --slave-info --tmpdir=/tmp --stream=xbstream --parallel=4 --datadir=/var/lib/mysql 2>backup.log | tee >(ssh -q 192.168.56.112 -t "mbstream --directory=/var/lib/mysql -x --parallel=4") >(ssh -q 192.168.56.113 -t "mbstream --directory=/var/lib/mysql -x --parallel=4") > /dev/null

-- Use pssh to run prepare in parallel. This is a better use case for pssh

[root@db1 test_db]# pssh -i --host='192.168.56.112 192.168.56.113' "mariabackup --prepare --target-dir=/var/lib/mysql"
[1] 05:10:23 [SUCCESS] 192.168.56.112
Stderr: mariabackup based on MariaDB server 10.4.14-MariaDB Linux (x86_64)
[00] 2020-08-31 05:10:22 cd to /var/lib/mysql/
[00] 2020-08-31 05:10:22 This target seems to be not prepared yet.
[00] 2020-08-31 05:10:22 mariabackup: using the following InnoDB configuration for recovery:
[00] 2020-08-31 05:10:22 innodb_data_home_dir = .
[00] 2020-08-31 05:10:22 innodb_data_file_path = ibdata1:12M:autoextend
[00] 2020-08-31 05:10:22 innodb_log_group_home_dir = .
[00] 2020-08-31 05:10:22 InnoDB: Using Linux native AIO
[00] 2020-08-31 05:10:22 Starting InnoDB instance for recovery.
[00] 2020-08-31 05:10:22 mariabackup: Using 104857600 bytes for buffer pool (set by --use-memory parameter)
2020-08-31 5:10:22 0 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2020-08-31 5:10:22 0 [Note] InnoDB: Uses event mutexes
2020-08-31 5:10:22 0 [Note] InnoDB: Compressed tables use zlib 1.2.7
2020-08-31 5:10:22 0 [Note] InnoDB: Number of pools: 1
2020-08-31 5:10:22 0 [Note] InnoDB: Using SSE2 crc32 instructions
2020-08-31 5:10:22 0 [Note] InnoDB: Initializing buffer pool, total size = 100M, instances = 1, chunk size = 100M
2020-08-31 5:10:22 0 [Note] InnoDB: Completed initialization of buffer pool
2020-08-31 5:10:22 0 [Note] InnoDB: page_cleaner coordinator priority: -20
2020-08-31 5:10:22 0 [Note] InnoDB: Starting crash recovery from checkpoint LSN=2008409552
2020-08-31 5:10:22 0 [Note] InnoDB: Last binlog file './mysql-bin.000003', position 172310127
[00] 2020-08-31 05:10:22 Last binlog file ./mysql-bin.000003, position 172310127
[00] 2020-08-31 05:10:23 completed OK!
[2] 05:10:23 [SUCCESS] 192.168.56.113
Stderr: mariabackup based on MariaDB server 10.4.14-MariaDB Linux (x86_64)
[00] 2020-08-31 05:10:22 cd to /var/lib/mysql/
[00] 2020-08-31 05:10:22 This target seems to be not prepared yet.
[00] 2020-08-31 05:10:22 mariabackup: using the following InnoDB configuration for recovery:
[00] 2020-08-31 05:10:22 innodb_data_home_dir = .
[00] 2020-08-31 05:10:22 innodb_data_file_path = ibdata1:12M:autoextend
[00] 2020-08-31 05:10:22 innodb_log_group_home_dir = .
[00] 2020-08-31 05:10:22 InnoDB: Using Linux native AIO
[00] 2020-08-31 05:10:22 Starting InnoDB instance for recovery.
[00] 2020-08-31 05:10:22 mariabackup: Using 104857600 bytes for buffer pool (set by --use-memory parameter)
2020-08-31 5:10:22 0 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2020-08-31 5:10:22 0 [Note] InnoDB: Uses event mutexes
2020-08-31 5:10:22 0 [Note] InnoDB: Compressed tables use zlib 1.2.7
2020-08-31 5:10:22 0 [Note] InnoDB: Number of pools: 1
2020-08-31 5:10:22 0 [Note] InnoDB: Using SSE2 crc32 instructions
2020-08-31 5:10:22 0 [Note] InnoDB: Initializing buffer pool, total size = 100M, instances = 1, chunk size = 100M
2020-08-31 5:10:22 0 [Note] InnoDB: Completed initialization of buffer pool
2020-08-31 5:10:22 0 [Note] InnoDB: page_cleaner coordinator priority: -20
2020-08-31 5:10:22 0 [Note] InnoDB: Starting crash recovery from checkpoint LSN=2008409552
2020-08-31 5:10:22 0 [Note] InnoDB: Last binlog file './mysql-bin.000003', position 172310127
[00] 2020-08-31 05:10:23 Last binlog file ./mysql-bin.000003, position 172310127
[00] 2020-08-31 05:10:23 completed OK!

-- change ownership in parallel.

[root@db1 test_db]# pssh -i --host='192.168.56.112 192.168.56.113' "chown -R mysql:mysql /var/lib/mysql"
[1] 05:10:41 [SUCCESS] 192.168.56.112
[2] 05:10:41 [SUCCESS] 192.168.56.113

-- start databases in parallel.

[root@db1 test_db]# pssh -i --host='192.168.56.112 192.168.56.113' "systemctl start mariadb"
[1] 05:10:48 [SUCCESS] 192.168.56.112
[2] 05:10:48 [SUCCESS] 192.168.56.113

-- check the status of databases ... you guessed it, in parallel.

[root@db1 test_db]# pssh -i --host='192.168.56.112 192.168.56.113' "systemctl status mariadb"
[1] 05:10:53 [SUCCESS] 192.168.56.112
● mariadb.service - MariaDB 10.4.14 database server
Loaded: loaded (/usr/lib/systemd/system/mariadb.service; disabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/mariadb.service.d
└─migrated-from-my.cnf-settings.conf
Active: active (running) since Mon 2020-08-31 05:10:48 UTC; 4s ago
Docs: man:mysqld(8)
https://mariadb.com/kb/en/library/systemd/
Process: 4219 ExecStartPost=/bin/sh -c systemctl unset-environment _WSREP_START_POSITION (code=exited, status=0/SUCCESS)
Process: 4163 ExecStartPre=/bin/sh -c [ ! -e /usr/bin/galera_recovery ] && VAR= || VAR=`cd /usr/bin/..; /usr/bin/galera_recovery`; [ $? -eq 0 ] && systemctl set-environment _WSREP_START_POSITION=$VAR || exit 1 (code=exited, status=0/SUCCESS)
Process: 4161 ExecStartPre=/bin/sh -c systemctl unset-environment _WSREP_START_POSITION (code=exited, status=0/SUCCESS)
Main PID: 4186 (mysqld)
Status: "Taking your SQL requests now..."
CGroup: /system.slice/mariadb.service
└─4186 /usr/sbin/mysqld

Aug 31 05:10:48 db2 systemd[1]: Starting MariaDB 10.4.14 database server...
Aug 31 05:10:48 db2 mysqld[4186]: 2020-08-31 5:10:48 0 [Note] /usr/sbin/mysqld (mysqld 10.4.14-MariaDB-log) starting as process 4186 ...
Aug 31 05:10:48 db2 mysqld[4186]: 2020-08-31 5:10:48 0 [Warning] Could not increase number of max_open_files to more than 16384 (request: 32183)
Aug 31 05:10:48 db2 systemd[1]: Started MariaDB 10.4.14 database server.
[2] 05:10:53 [SUCCESS] 192.168.56.113
● mariadb.service - MariaDB 10.4.14 database server
Loaded: loaded (/usr/lib/systemd/system/mariadb.service; disabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/mariadb.service.d
└─migrated-from-my.cnf-settings.conf
Active: active (running) since Mon 2020-08-31 05:10:48 UTC; 4s ago
Docs: man:mysqld(8)
https://mariadb.com/kb/en/library/systemd/
Process: 4225 ExecStartPost=/bin/sh -c systemctl unset-environment _WSREP_START_POSITION (code=exited, status=0/SUCCESS)
Process: 4169 ExecStartPre=/bin/sh -c [ ! -e /usr/bin/galera_recovery ] && VAR= || VAR=`cd /usr/bin/..; /usr/bin/galera_recovery`; [ $? -eq 0 ] && systemctl set-environment _WSREP_START_POSITION=$VAR || exit 1 (code=exited, status=0/SUCCESS)
Process: 4167 ExecStartPre=/bin/sh -c systemctl unset-environment _WSREP_START_POSITION (code=exited, status=0/SUCCESS)
Main PID: 4192 (mysqld)
Status: "Taking your SQL requests now..."
CGroup: /system.slice/mariadb.service
└─4192 /usr/sbin/mysqld

Aug 31 05:10:48 db3 systemd[1]: Starting MariaDB 10.4.14 database server...
Aug 31 05:10:48 db3 mysqld[4192]: 2020-08-31 5:10:48 0 [Note] /usr/sbin/mysqld (mysqld 10.4.14-MariaDB-log) starting as process 4192 ...
Aug 31 05:10:48 db3 mysqld[4192]: 2020-08-31 5:10:48 0 [Warning] Could not increase number of max_open_files to more than 16384 (request: 32183)
Aug 31 05:10:48 db3 systemd[1]: Started MariaDB 10.4.14 database server.

Streaming backups in parallel using pssh

So you need to build a new set of databases, perhaps in a new location or geographical zone, and the business wants it done yesterday because the newly launched product hit the front page of Hacker News, and your website and back-end databases are getting the hug of death.

The standard method would be to stream a backup from the master, or from a dedicated backup slave, and build each new read-only slave from that backup.

What if you have network bandwidth to burn? Perhaps there is a way to run the streaming backups in parallel. Ivan G came up with one way to do it. See his article “Streaming Percona XtraBackup for MySQL to Multiple Destinations”.

His article gave me the spark of an idea: could we do something similar in parallel?

Enter pssh (parallel-ssh), which was originally hosted on Google Code but is now available in most Linux distros.

pssh runs ssh commands on a list of hosts in parallel.

Update:

pssh uses Python and os.fork to create the parallel processes.

This means the data being copied is streamed in one chunk. So if the data is larger than the available RAM (or roughly three times RAM, if compression is used), the command will fail with “OSError: [Errno 12] Cannot allocate memory”.

Workarounds for when memory << data:

  1. Use gzip or pigz to compress the stream of data. This will allow the stream to be up to 2-3 times the size of available RAM.
  2. Use split to split stream into chunks and then combine those chunks on the destination. See example below.
  3. See my next post for a different method.
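
Workaround 1 would look something like this (an untested sketch; it assumes pigz is installed on the source and all targets — plain gzip also works, just single-threaded):

```shell
# Compress once at the source so pssh only has to buffer the compressed
# stream, then decompress on each target before mbstream unpacks it.
mariabackup --backup --slave-info --tmpdir=/tmp --stream=xbstream \
  --parallel=4 --datadir=/var/lib/mysql 2>backup.log \
  | pigz \
  | pssh -I --host='192.168.56.112 192.168.56.113' \
      "pigz -dc | mbstream --directory=/var/lib/mysql -x --parallel=4"
```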

Here is a simple pssh example:

[root@db1 ~]# pssh -i --host='192.168.56.111 192.168.56.112 192.168.56.113' "hostname"
[1] 01:04:40 [SUCCESS] 192.168.56.112
db2
[2] 01:04:40 [SUCCESS] 192.168.56.111
db1
[3] 01:04:40 [SUCCESS] 192.168.56.113
db3

So to stream a backup from one db to another you run something like this:

mariabackup --backup --slave-info --tmpdir=/tmp --stream=xbstream --parallel=4 --datadir=/var/lib/mysql 2>backup.log | ssh -q 192.168.56.112 -t "mbstream --directory=/var/lib/mysql -x --parallel=4"

Example of streaming a backup to multiple hosts using pssh:

Note: the critical option is -I (capital i), which tells pssh to read from standard input.

mariabackup --backup --slave-info --tmpdir=/tmp --stream=xbstream --parallel=4 --datadir=/var/lib/mysql 2>backup.log | pssh -I --host='192.168.56.112 192.168.56.113' "mbstream --directory=/var/lib/mysql -x --parallel=4"

Preparing the backup on multiple hosts:

pssh -i --host='192.168.56.112 192.168.56.113' "mariabackup --prepare --target-dir=/var/lib/mysql"

Change ownership of multiple hosts:

pssh -i --host='192.168.56.112 192.168.56.113' "chown -R mysql:mysql /var/lib/mysql"

Start the db on multiple hosts:

pssh -i --host='192.168.56.112 192.168.56.113' "systemctl start mariadb.service"

Thanks to Ivan for the idea. Most people would now use Ansible or something similar to configure servers, rather than plain ssh command lines, but thanks to pssh that option exists as well.

Until next time.

Full example (assumes small dataset):

[root@db1 ~]# mariabackup --backup --slave-info --tmpdir=/tmp --stream=xbstream --parallel=4 --datadir=/var/lib/mysql 2>backup.log | pssh -I --host='192.168.56.112 192.168.56.113' "mbstream --directory=/var/lib/mysql -x --parallel=4"
[1] 04:43:16 [SUCCESS] 192.168.56.112
[2] 04:43:16 [SUCCESS] 192.168.56.113


[root@db1 ~]# pssh -i --host='192.168.56.112 192.168.56.113' "ls -l /var/lib/mysql"
[1] 04:43:23 [SUCCESS] 192.168.56.112
total 12352
-rw-r-----. 1 root root 32768 Aug 26 04:43 aria_log.00000001
-rw-r-----. 1 root root 52 Aug 26 04:43 aria_log_control
-rw-r-----. 1 root root 324 Aug 26 04:43 backup-my.cnf
-rw-r-----. 1 root root 972 Aug 26 04:43 ib_buffer_pool
-rw-r-----. 1 root root 12582912 Aug 26 04:43 ibdata1
-rw-r-----. 1 root root 2560 Aug 26 04:43 ib_logfile0
drwx------. 2 root root 4096 Aug 26 04:43 mysql
drwx------. 2 root root 20 Aug 26 04:43 performance_schema
drwx------. 2 root root 20 Aug 26 04:43 test
-rw-r-----. 1 root root 29 Aug 26 04:43 xtrabackup_binlog_info
-rw-r-----. 1 root root 73 Aug 26 04:43 xtrabackup_checkpoints
-rw-r-----. 1 root root 554 Aug 26 04:43 xtrabackup_info
[2] 04:43:23 [SUCCESS] 192.168.56.113
total 12352
-rw-r-----. 1 root root 32768 Aug 26 04:43 aria_log.00000001
-rw-r-----. 1 root root 52 Aug 26 04:43 aria_log_control
-rw-r-----. 1 root root 324 Aug 26 04:43 backup-my.cnf
-rw-r-----. 1 root root 972 Aug 26 04:43 ib_buffer_pool
-rw-r-----. 1 root root 12582912 Aug 26 04:43 ibdata1
-rw-r-----. 1 root root 2560 Aug 26 04:43 ib_logfile0
drwx------. 2 root root 4096 Aug 26 04:43 mysql
drwx------. 2 root root 20 Aug 26 04:43 performance_schema
drwx------. 2 root root 20 Aug 26 04:43 test
-rw-r-----. 1 root root 29 Aug 26 04:43 xtrabackup_binlog_info
-rw-r-----. 1 root root 73 Aug 26 04:43 xtrabackup_checkpoints
-rw-r-----. 1 root root 554 Aug 26 04:43 xtrabackup_info

-- prepare

[root@db1 ~]# pssh -i --host='192.168.56.112 192.168.56.113' "mariabackup --prepare --target-dir=/var/lib/mysql"
[1] 04:43:56 [SUCCESS] 192.168.56.112
Stderr: mariabackup based on MariaDB server 10.4.14-MariaDB Linux (x86_64)
[00] 2020-08-26 04:43:55 cd to /var/lib/mysql/
[00] 2020-08-26 04:43:55 This target seems to be not prepared yet.
[00] 2020-08-26 04:43:55 mariabackup: using the following InnoDB configuration for recovery:
[00] 2020-08-26 04:43:55 innodb_data_home_dir = .
[00] 2020-08-26 04:43:55 innodb_data_file_path = ibdata1:12M:autoextend
[00] 2020-08-26 04:43:55 innodb_log_group_home_dir = .
[00] 2020-08-26 04:43:55 InnoDB: Using Linux native AIO
[00] 2020-08-26 04:43:55 Starting InnoDB instance for recovery.
[00] 2020-08-26 04:43:55 mariabackup: Using 104857600 bytes for buffer pool (set by --use-memory parameter)
2020-08-26 4:43:55 0 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2020-08-26 4:43:55 0 [Note] InnoDB: Uses event mutexes
2020-08-26 4:43:55 0 [Note] InnoDB: Compressed tables use zlib 1.2.7
2020-08-26 4:43:55 0 [Note] InnoDB: Number of pools: 1
2020-08-26 4:43:55 0 [Note] InnoDB: Using SSE2 crc32 instructions
2020-08-26 4:43:55 0 [Note] InnoDB: Initializing buffer pool, total size = 100M, instances = 1, chunk size = 100M
2020-08-26 4:43:55 0 [Note] InnoDB: Completed initialization of buffer pool
2020-08-26 4:43:55 0 [Note] InnoDB: page_cleaner coordinator priority: -20
2020-08-26 4:43:55 0 [Note] InnoDB: Starting crash recovery from checkpoint LSN=60967
[00] 2020-08-26 04:43:55 Last binlog file , position 0
[00] 2020-08-26 04:43:56 completed OK!
[2] 04:43:56 [SUCCESS] 192.168.56.113
Stderr: mariabackup based on MariaDB server 10.4.14-MariaDB Linux (x86_64)
[00] 2020-08-26 04:43:55 cd to /var/lib/mysql/
[00] 2020-08-26 04:43:55 This target seems to be not prepared yet.
[00] 2020-08-26 04:43:55 mariabackup: using the following InnoDB configuration for recovery:
[00] 2020-08-26 04:43:55 innodb_data_home_dir = .
[00] 2020-08-26 04:43:55 innodb_data_file_path = ibdata1:12M:autoextend
[00] 2020-08-26 04:43:55 innodb_log_group_home_dir = .
[00] 2020-08-26 04:43:55 InnoDB: Using Linux native AIO
[00] 2020-08-26 04:43:55 Starting InnoDB instance for recovery.
[00] 2020-08-26 04:43:55 mariabackup: Using 104857600 bytes for buffer pool (set by --use-memory parameter)
2020-08-26 4:43:55 0 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2020-08-26 4:43:55 0 [Note] InnoDB: Uses event mutexes
2020-08-26 4:43:55 0 [Note] InnoDB: Compressed tables use zlib 1.2.7
2020-08-26 4:43:55 0 [Note] InnoDB: Number of pools: 1
2020-08-26 4:43:55 0 [Note] InnoDB: Using SSE2 crc32 instructions
2020-08-26 4:43:55 0 [Note] InnoDB: Initializing buffer pool, total size = 100M, instances = 1, chunk size = 100M
2020-08-26 4:43:55 0 [Note] InnoDB: Completed initialization of buffer pool
2020-08-26 4:43:55 0 [Note] InnoDB: page_cleaner coordinator priority: -20
2020-08-26 4:43:55 0 [Note] InnoDB: Starting crash recovery from checkpoint LSN=60967
[00] 2020-08-26 04:43:55 Last binlog file , position 0
[00] 2020-08-26 04:43:56 completed OK!

-- change ownership and start

[root@db1 ~]# pssh -i --host='192.168.56.112 192.168.56.113' "chown -R mysql:mysql /var/lib/mysql"
[1] 04:44:25 [SUCCESS] 192.168.56.112
[2] 04:44:25 [SUCCESS] 192.168.56.113

[root@db1 ~]# pssh -i --host='192.168.56.112 192.168.56.113' "systemctl start mariadb.service"
[1] 04:44:34 [SUCCESS] 192.168.56.112
[2] 04:44:34 [SUCCESS] 192.168.56.113

-- Bonus: Generate the change master statement from the xtrabackup_binlog_info file
-- Change the host, user and password.
-- \047 is the octal escape for a single quote. (The original used \x27, but some
-- awk implementations let a hex escape swallow the digits that follow it,
-- which mangles the IP address.)

awk '{ print "change master to master_host=\047192.168.56.111\047,master_user=\047repl\047,master_password=\047lper\047,master_log_file=\047"$1"\047,master_log_pos="$2 }' xtrabackup_binlog_info
change master to master_host='192.168.56.111',master_user='repl',master_password='lper',master_log_file='mysql-bin.000001',master_log_pos=2194
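
The one-liner can be sanity-checked against a hand-made sample file (host, user and password here are the placeholder values from the example; \047 is the octal escape for a single quote, which is portable across awk flavours):

```shell
# Fake a two-column xtrabackup_binlog_info: "<binlog file> <position>"
printf 'mysql-bin.000001\t2194\n' > /tmp/xtrabackup_binlog_info

# \047 emits a single quote without any shell-quoting headaches.
out=$(awk '{ print "change master to master_host=\047192.168.56.111\047,master_user=\047repl\047,master_password=\047lper\047,master_log_file=\047"$1"\047,master_log_pos="$2 }' /tmp/xtrabackup_binlog_info)
echo "$out"
```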

Full example: using split to get around the memory issue

[root@db1 test_db]# du -sh /var/lib/mysql
1.7G /var/lib/mysql

[root@db1 test_db]# mariabackup --backup --slave-info --tmpdir=/tmp --stream=xbstream --parallel=4 --datadir=/var/lib/mysql 2>backup.log | pssh -I --host='192.168.56.112 192.168.56.113' "mbstream --directory=/var/lib/mysql -x --parallel=4"
Traceback (most recent call last):
File "/bin/pssh", line 118, in <module>
do_pssh(hosts, cmdline, opts)
File "/bin/pssh", line 89, in do_pssh
statuses = manager.run()
File "/usr/lib/python2.7/site-packages/psshlib/manager.py", line 68, in run
self.update_tasks(writer)
File "/usr/lib/python2.7/site-packages/psshlib/manager.py", line 133, in update_tasks
self._start_tasks_once(writer)
File "/usr/lib/python2.7/site-packages/psshlib/manager.py", line 146, in _start_tasks_once
task.start(self.taskcount, self.iomap, writer, self.askpass_socket)
File "/usr/lib/python2.7/site-packages/psshlib/task.py", line 99, in start
close_fds=False, preexec_fn=os.setsid, env=environ)
File "/usr/lib64/python2.7/subprocess.py", line 711, in __init__
errread, errwrite)
File "/usr/lib64/python2.7/subprocess.py", line 1224, in _execute_child
self.pid = os.fork()
OSError: [Errno 12] Cannot allocate memory

-- Split in 100Mbyte chunks

[root@db1 test_db]# mariabackup --backup --slave-info --tmpdir=/tmp --stream=xbstream --parallel=4 --datadir=/var/lib/mysql 2>backup.log | split -b 100M --filter="exec cat $FILE |pssh -I --host='192.168.56.112 192.168.56.113' 'cat $FILE >> /var/lib/mysql/streamfile ' "
[1] 04:19:28 [SUCCESS] 192.168.56.112
[2] 04:19:28 [SUCCESS] 192.168.56.113
[1] 04:19:30 [SUCCESS] 192.168.56.113
[2] 04:19:30 [SUCCESS] 192.168.56.112
[1] 04:19:33 [SUCCESS] 192.168.56.112
[2] 04:19:33 [SUCCESS] 192.168.56.113
[1] 04:19:35 [SUCCESS] 192.168.56.112
[2] 04:19:35 [SUCCESS] 192.168.56.113
[1] 04:19:36 [SUCCESS] 192.168.56.113
[2] 04:19:37 [SUCCESS] 192.168.56.112
[1] 04:19:39 [SUCCESS] 192.168.56.113
[2] 04:19:39 [SUCCESS] 192.168.56.112
[1] 04:19:41 [SUCCESS] 192.168.56.112
[2] 04:19:41 [SUCCESS] 192.168.56.113
[1] 04:19:43 [SUCCESS] 192.168.56.112
[2] 04:19:43 [SUCCESS] 192.168.56.113
[1] 04:19:45 [SUCCESS] 192.168.56.113
[2] 04:19:45 [SUCCESS] 192.168.56.112
[1] 04:19:47 [SUCCESS] 192.168.56.112
[2] 04:19:47 [SUCCESS] 192.168.56.113
[1] 04:19:50 [SUCCESS] 192.168.56.112
[2] 04:19:50 [SUCCESS] 192.168.56.113
[1] 04:19:53 [SUCCESS] 192.168.56.112
[2] 04:19:53 [SUCCESS] 192.168.56.113

[root@db1 test_db]# pssh -i --host='192.168.56.112 192.168.56.113' "ls -l /var/lib/mysql/"
[1] 04:20:02 [SUCCESS] 192.168.56.112
total 1221516
-rw-r--r--. 1 root root 1250829999 Aug 31 04:19 streamfile
[2] 04:20:02 [SUCCESS] 192.168.56.113
total 1221516
-rw-r--r--. 1 root root 1250829999 Aug 31 04:19 streamfile

-- Run the streamfile through mbstream to produce the unprepared datafiles

[root@db1 test_db]# pssh -i --host='192.168.56.112 192.168.56.113' "cat /var/lib/mysql/streamfile | mbstream --directory=/var/lib/mysql -x --parallel=4"
[1] 04:23:13 [SUCCESS] 192.168.56.112
[2] 04:23:13 [SUCCESS] 192.168.56.113
[root@db1 test_db]# pssh -i --host='192.168.56.112 192.168.56.113' "ls -l /var/lib/mysql/"
[1] 04:23:35 [SUCCESS] 192.168.56.112
total 1496016
-rw-r-----. 1 root root 32768 Aug 31 04:23 aria_log.00000001
-rw-r-----. 1 root root 52 Aug 31 04:23 aria_log_control
-rw-r-----. 1 root root 324 Aug 31 04:23 backup-my.cnf
drwx------. 2 root root 4096 Aug 31 04:23 employees
-rw-r-----. 1 root root 976 Aug 31 04:23 ib_buffer_pool
-rw-r-----. 1 root root 281018368 Aug 31 04:23 ibdata1
-rw-r-----. 1 root root 2560 Aug 31 04:23 ib_logfile0
drwx------. 2 root root 4096 Aug 31 04:23 mysql
drwx------. 2 root root 20 Aug 31 04:23 performance_schema
-rw-r--r--. 1 root root 1250829999 Aug 31 04:19 streamfile
drwx------. 2 root root 20 Aug 31 04:23 test
-rw-r-----. 1 root root 35 Aug 31 04:23 xtrabackup_binlog_info
-rw-r-----. 1 root root 83 Aug 31 04:23 xtrabackup_checkpoints
-rw-r-----. 1 root root 565 Aug 31 04:23 xtrabackup_info
[2] 04:23:35 [SUCCESS] 192.168.56.113
total 1496016
-rw-r-----. 1 root root 32768 Aug 31 04:23 aria_log.00000001
-rw-r-----. 1 root root 52 Aug 31 04:23 aria_log_control
-rw-r-----. 1 root root 324 Aug 31 04:23 backup-my.cnf
drwx------. 2 root root 4096 Aug 31 04:23 employees
-rw-r-----. 1 root root 976 Aug 31 04:23 ib_buffer_pool
-rw-r-----. 1 root root 281018368 Aug 31 04:23 ibdata1
-rw-r-----. 1 root root 2560 Aug 31 04:23 ib_logfile0
drwx------. 2 root root 4096 Aug 31 04:23 mysql
drwx------. 2 root root 20 Aug 31 04:23 performance_schema
-rw-r--r--. 1 root root 1250829999 Aug 31 04:19 streamfile
drwx------. 2 root root 20 Aug 31 04:23 test
-rw-r-----. 1 root root 35 Aug 31 04:23 xtrabackup_binlog_info
-rw-r-----. 1 root root 83 Aug 31 04:23 xtrabackup_checkpoints
-rw-r-----. 1 root root 565 Aug 31 04:23 xtrabackup_info

-- Prepare the backup

[root@db1 test_db]# pssh -i --host='192.168.56.112 192.168.56.113' "mariabackup --prepare --target-dir=/var/lib/mysql"
[1] 04:24:14 [SUCCESS] 192.168.56.112
Stderr: mariabackup based on MariaDB server 10.4.14-MariaDB Linux (x86_64)
[00] 2020-08-31 04:24:14 cd to /var/lib/mysql/
[00] 2020-08-31 04:24:14 This target seems to be not prepared yet.
[00] 2020-08-31 04:24:14 mariabackup: using the following InnoDB configuration for recovery:
[00] 2020-08-31 04:24:14 innodb_data_home_dir = .
[00] 2020-08-31 04:24:14 innodb_data_file_path = ibdata1:12M:autoextend
[00] 2020-08-31 04:24:14 innodb_log_group_home_dir = .
[00] 2020-08-31 04:24:14 InnoDB: Using Linux native AIO
[00] 2020-08-31 04:24:14 Starting InnoDB instance for recovery.
[00] 2020-08-31 04:24:14 mariabackup: Using 104857600 bytes for buffer pool (set by --use-memory parameter)
2020-08-31 4:24:14 0 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2020-08-31 4:24:14 0 [Note] InnoDB: Uses event mutexes
2020-08-31 4:24:14 0 [Note] InnoDB: Compressed tables use zlib 1.2.7
2020-08-31 4:24:14 0 [Note] InnoDB: Number of pools: 1
2020-08-31 4:24:14 0 [Note] InnoDB: Using SSE2 crc32 instructions
2020-08-31 4:24:14 0 [Note] InnoDB: Initializing buffer pool, total size = 100M, instances = 1, chunk size = 100M
2020-08-31 4:24:14 0 [Note] InnoDB: Completed initialization of buffer pool
2020-08-31 4:24:14 0 [Note] InnoDB: page_cleaner coordinator priority: -20
2020-08-31 4:24:14 0 [Note] InnoDB: Starting crash recovery from checkpoint LSN=2008409552
2020-08-31 4:24:14 0 [Note] InnoDB: Last binlog file './mysql-bin.000003', position 172310127
[00] 2020-08-31 04:24:14 Last binlog file ./mysql-bin.000003, position 172310127
[00] 2020-08-31 04:24:14 completed OK!
[2] 04:24:14 [SUCCESS] 192.168.56.113
Stderr: mariabackup based on MariaDB server 10.4.14-MariaDB Linux (x86_64)
[00] 2020-08-31 04:24:14 cd to /var/lib/mysql/
[00] 2020-08-31 04:24:14 This target seems to be not prepared yet.
[00] 2020-08-31 04:24:14 mariabackup: using the following InnoDB configuration for recovery:
[00] 2020-08-31 04:24:14 innodb_data_home_dir = .
[00] 2020-08-31 04:24:14 innodb_data_file_path = ibdata1:12M:autoextend
[00] 2020-08-31 04:24:14 innodb_log_group_home_dir = .
[00] 2020-08-31 04:24:14 InnoDB: Using Linux native AIO
[00] 2020-08-31 04:24:14 Starting InnoDB instance for recovery.
[00] 2020-08-31 04:24:14 mariabackup: Using 104857600 bytes for buffer pool (set by --use-memory parameter)
2020-08-31 4:24:14 0 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2020-08-31 4:24:14 0 [Note] InnoDB: Uses event mutexes
2020-08-31 4:24:14 0 [Note] InnoDB: Compressed tables use zlib 1.2.7
2020-08-31 4:24:14 0 [Note] InnoDB: Number of pools: 1
2020-08-31 4:24:14 0 [Note] InnoDB: Using SSE2 crc32 instructions
2020-08-31 4:24:14 0 [Note] InnoDB: Initializing buffer pool, total size = 100M, instances = 1, chunk size = 100M
2020-08-31 4:24:14 0 [Note] InnoDB: Completed initialization of buffer pool
2020-08-31 4:24:14 0 [Note] InnoDB: page_cleaner coordinator priority: -20
2020-08-31 4:24:14 0 [Note] InnoDB: Starting crash recovery from checkpoint LSN=2008409552
2020-08-31 4:24:14 0 [Note] InnoDB: Last binlog file './mysql-bin.000003', position 172310127
[00] 2020-08-31 04:24:14 Last binlog file ./mysql-bin.000003, position 172310127
[00] 2020-08-31 04:24:14 completed OK!

[root@db1 test_db]# pssh -i --host='192.168.56.112 192.168.56.113' "chown -R mysql:mysql /var/lib/mysql"
[1] 04:25:08 [SUCCESS] 192.168.56.112
[2] 04:25:08 [SUCCESS] 192.168.56.113
[root@db1 test_db]# pssh -i --host='192.168.56.112 192.168.56.113' "ls -l /var/lib/mysql/"
[1] 04:25:21 [SUCCESS] 192.168.56.112
total 1496012
-rw-r-----. 1 mysql mysql 32768 Aug 31 04:23 aria_log.00000001
-rw-r-----. 1 mysql mysql 52 Aug 31 04:23 aria_log_control
-rw-r-----. 1 mysql mysql 324 Aug 31 04:23 backup-my.cnf
drwx------. 2 mysql mysql 4096 Aug 31 04:23 employees
-rw-r-----. 1 mysql mysql 976 Aug 31 04:23 ib_buffer_pool
-rw-r-----. 1 mysql mysql 281018368 Aug 31 04:24 ibdata1
-rw-r-----. 1 mysql mysql 0 Aug 31 04:24 ib_logfile0
drwx------. 2 mysql mysql 4096 Aug 31 04:23 mysql
drwx------. 2 mysql mysql 20 Aug 31 04:23 performance_schema
-rw-r--r--. 1 mysql mysql 1250829999 Aug 31 04:19 streamfile
drwx------. 2 mysql mysql 20 Aug 31 04:23 test
-rw-r-----. 1 mysql mysql 35 Aug 31 04:23 xtrabackup_binlog_info
-rw-r-----. 1 mysql mysql 81 Aug 31 04:24 xtrabackup_checkpoints
-rw-r-----. 1 mysql mysql 565 Aug 31 04:23 xtrabackup_info
[2] 04:25:21 [SUCCESS] 192.168.56.113
total 1496012
-rw-r-----. 1 mysql mysql 32768 Aug 31 04:23 aria_log.00000001
-rw-r-----. 1 mysql mysql 52 Aug 31 04:23 aria_log_control
-rw-r-----. 1 mysql mysql 324 Aug 31 04:23 backup-my.cnf
drwx------. 2 mysql mysql 4096 Aug 31 04:23 employees
-rw-r-----. 1 mysql mysql 976 Aug 31 04:23 ib_buffer_pool
-rw-r-----. 1 mysql mysql 281018368 Aug 31 04:24 ibdata1
-rw-r-----. 1 mysql mysql 0 Aug 31 04:24 ib_logfile0
drwx------. 2 mysql mysql 4096 Aug 31 04:23 mysql
drwx------. 2 mysql mysql 20 Aug 31 04:23 performance_schema
-rw-r--r--. 1 mysql mysql 1250829999 Aug 31 04:19 streamfile
drwx------. 2 mysql mysql 20 Aug 31 04:23 test
-rw-r-----. 1 mysql mysql 35 Aug 31 04:23 xtrabackup_binlog_info
-rw-r-----. 1 mysql mysql 81 Aug 31 04:24 xtrabackup_checkpoints
-rw-r-----. 1 mysql mysql 565 Aug 31 04:23 xtrabackup_info

[root@db1 test_db]# pssh -i --host='192.168.56.112 192.168.56.113' "systemctl start mariadb"
[1] 04:26:59 [SUCCESS] 192.168.56.112
[2] 04:26:59 [SUCCESS] 192.168.56.113
[root@db1 test_db]# pssh -i --host='192.168.56.112 192.168.56.113' "systemctl status mariadb"
[1] 04:27:05 [SUCCESS] 192.168.56.113
● mariadb.service - MariaDB 10.4.14 database server
Loaded: loaded (/usr/lib/systemd/system/mariadb.service; disabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/mariadb.service.d
└─migrated-from-my.cnf-settings.conf
Active: active (running) since Mon 2020-08-31 04:26:59 UTC; 6s ago
Docs: man:mysqld(8)
https://mariadb.com/kb/en/library/systemd/
Process: 4072 ExecStartPost=/bin/sh -c systemctl unset-environment _WSREP_START_POSITION (code=exited, status=0/SUCCESS)
Process: 4016 ExecStartPre=/bin/sh -c [ ! -e /usr/bin/galera_recovery ] && VAR= || VAR=`cd /usr/bin/..; /usr/bin/galera_recovery`; [ $? -eq 0 ] && systemctl set-environment _WSREP_START_POSITION=$VAR || exit 1 (code=exited, status=0/SUCCESS)
Process: 4014 ExecStartPre=/bin/sh -c systemctl unset-environment _WSREP_START_POSITION (code=exited, status=0/SUCCESS)
Main PID: 4039 (mysqld)
Status: "Taking your SQL requests now..."
CGroup: /system.slice/mariadb.service
└─4039 /usr/sbin/mysqld

Aug 31 04:26:59 db3 systemd[1]: Starting MariaDB 10.4.14 database server...
Aug 31 04:26:59 db3 mysqld[4039]: 2020-08-31 4:26:59 0 [Note] /usr/sbin/mysqld (mysqld 10.4.14-MariaDB-log) starting as process 4039 ...
Aug 31 04:26:59 db3 mysqld[4039]: 2020-08-31 4:26:59 0 [Warning] Could not increase number of max_open_files to more than 16384 (request: 32183)
Aug 31 04:26:59 db3 systemd[1]: Started MariaDB 10.4.14 database server.
[2] 04:27:05 [SUCCESS] 192.168.56.112
● mariadb.service - MariaDB 10.4.14 database server
Loaded: loaded (/usr/lib/systemd/system/mariadb.service; disabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/mariadb.service.d
└─migrated-from-my.cnf-settings.conf
Active: active (running) since Mon 2020-08-31 04:26:59 UTC; 6s ago
Docs: man:mysqld(8)
https://mariadb.com/kb/en/library/systemd/
Process: 4026 ExecStartPost=/bin/sh -c systemctl unset-environment _WSREP_START_POSITION (code=exited, status=0/SUCCESS)
Process: 3970 ExecStartPre=/bin/sh -c [ ! -e /usr/bin/galera_recovery ] && VAR= || VAR=`cd /usr/bin/..; /usr/bin/galera_recovery`; [ $? -eq 0 ] && systemctl set-environment _WSREP_START_POSITION=$VAR || exit 1 (code=exited, status=0/SUCCESS)
Process: 3968 ExecStartPre=/bin/sh -c systemctl unset-environment _WSREP_START_POSITION (code=exited, status=0/SUCCESS)
Main PID: 3993 (mysqld)
Status: "Taking your SQL requests now..."
CGroup: /system.slice/mariadb.service
└─3993 /usr/sbin/mysqld

Aug 31 04:26:59 db2 systemd[1]: Starting MariaDB 10.4.14 database server...
Aug 31 04:26:59 db2 mysqld[3993]: 2020-08-31 4:26:59 0 [Note] /usr/sbin/mysqld (mysqld 10.4.14-MariaDB-log) starting as process 3993 ...
Aug 31 04:26:59 db2 mysqld[3993]: 2020-08-31 4:26:59 0 [Warning] Could not increase number of max_open_files to more than 16384 (request: 32183)
Aug 31 04:26:59 db2 systemd[1]: Started MariaDB 10.4.14 database server.

Getting core files and systemd Restart

So you have waited two weeks (because the crash isn’t easily repeatable) and finally you get the crash again. You check your non-datadir core file directory, the one with loads of free space, and discover nothing was written.

When MySQL crashes, you want it to produce, at a minimum, a stack trace in the error log, or a core dump file so you can use gdb to produce the stack trace yourself. The stack trace is an important diagnostic tool for determining what part of the code was running.

One of the goals of this blog is to spread information that will make your job supporting databases easier. This is one of those posts.

So why do I mention systemd?

The Problem:

systemd can restart a service when it fails, so if MySQL crashes, systemd will automatically restart mysql, sometimes preventing the core dump process from completing.
If you are in the situation above, i.e. everything is already set up to dump a core file, you have double-checked the settings, and you still get nothing, you may need to disable automatic restarts in systemd.

Risks:

The risk of turning off the automatic restart is that your db stays down until you manually restart it, so make sure you are monitoring your db appropriately.

Setup to dump a core file:

What needs to be set up to get a core file? Here are some useful articles.

https://www.percona.com/blog/2011/08/26/getting-mysql-core-file-on-linux/

https://mariadb.com/kb/en/enabling-core-dumps/

https://dev.mysql.com/doc/refman/8.0/en/innodb-buffer-pool-in-core-file.html

https://mariadb.com/kb/en/how-to-produce-a-full-stack-trace-for-mysqld/

The Change:

Make sure every other setting to dump a core file is correct before making this change.

In the drop-in conf file under your service, e.g. /etc/systemd/system/mysql.service.d

[Service]
# we need the damn core file
Restart=no

Like any change, follow the gold standard for changes on production systems…

How to apply the change:

  1. Email to stakeholders (developers, reporting users, managers)
  2. Prepare a maintenance plan and gain approval to change.
  3. Organize maintenance window with specific time and date and gain approval for change.
  4. Follow the maintenance plan, which will be something like this
    – Announce start of maintenance window in your chat/communication channel.
    – Apply the change to systemd and reload the daemon.
    – Announce the end of the maintenance window in your chat/communication channel.
  5. Wait for your next mysql crash.
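The change in step 4 can be sketched as a couple of shell commands. The unit and file names here are assumptions (adjust mariadb.service.d and norestart.conf to match your setup); the $ROOT variable defaults to a scratch directory so the sketch is safe to dry-run, and on the real host you would run it as root with ROOT=/.

```shell
# Create a systemd drop-in that disables automatic restart of MariaDB.
# ROOT defaults to a scratch directory for a safe dry-run; use ROOT=/ for real.
ROOT="${ROOT:-$(mktemp -d)}"
mkdir -p "$ROOT/etc/systemd/system/mariadb.service.d"
cat > "$ROOT/etc/systemd/system/mariadb.service.d/norestart.conf" <<'EOF'
[Service]
# we need the damn core file
Restart=no
EOF
# Show what was written
cat "$ROOT/etc/systemd/system/mariadb.service.d/norestart.conf"
# On the real host, make systemd pick up the drop-in:
# systemctl daemon-reload
```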

Until next time.

Credit for this tip must go to Rick Pizzi.

Security: Use roles to manage user privileges

MySQL 8.0 finally allows you to use roles to grant privileges. Most companies must now fulfill stricter security requirements; one of those requirements is that each user must log in with their own username. Roles allow you to grant a group of permissions to individual users in one go, rather than trying to maintain permissions per user.

Sidenote: if your users are coming in with a shared user, understand that in the future that kind of access is going away. If you get audited for security, shared users are a red flag: if something goes wrong or the wrong activity occurs, it can’t easily be determined who did it.

This may seem like a mundane topic next to cracking password hashes, or discovering that the mysql_native_password plugin can be broken easily using rainbow tables, but roles will make managing privileges much, much easier. Think of roles as removing a layer of redundancy (granting every user all the required permissions individually). Spending the time to set up roles now means less work controlling user privileges in the future.

Task 7: Identify groups of permissions, create a role, grant the role to individual usernames.

This is a multi stage task.

  1. Get a list of current usernames and host and their privileges.
  2. Review this list and generate a list of privileges which are required or are in common for the current usernames.
  3. Create roles to cover all groups of privileges.
  4. Grant this role the privileges identified in step 2.
  5. Revoke individual privileges from the user so the user has permissions only via the roles granted to them.
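As a sketch of steps 3–5 (the role, database and user names here are hypothetical; MySQL 8.0 syntax):

```sql
-- Hypothetical example: a read-only reporting role
CREATE ROLE 'reporting_ro';
GRANT SELECT ON appdb.* TO 'reporting_ro';

-- Grant the role instead of individual privileges
GRANT 'reporting_ro' TO 'alice'@'10.0.0.%';
SET DEFAULT ROLE 'reporting_ro' TO 'alice'@'10.0.0.%';

-- Revoke the now-redundant direct privilege
REVOKE SELECT ON appdb.* FROM 'alice'@'10.0.0.%';
```

MariaDB's role syntax differs slightly (e.g. `SET DEFAULT ROLE reporting_ro FOR 'alice'@'10.0.0.%';`), so check the linked docs for your server.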

For more details, MySQL roles and MariaDB roles

Until next time.

Security tasks: Migrate from the mysql_native_password authentication plugin

If you are still running the old MySQL authentication plugin, commonly called “mysql_native_password”, on MySQL 5.7 and older, then you need to migrate to something better.

The mysql_native_password plugin (also used in MariaDB) is based on SHA1. This hash function simply isn’t good enough anymore unless the minimum password length is much longer than the current default of 8 characters.
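To see why rainbow tables bite here, note how the authentication string is built: an unsalted double SHA1 of the password, so equal passwords always hash to the same value. A minimal Python sketch of the computation (the function name is mine, not MySQL's):

```python
import hashlib

def native_password_hash(password: str) -> str:
    """Compute a mysql_native_password authentication string:
    '*' + UPPER(HEX(SHA1(SHA1(password)))) -- note: no salt anywhere."""
    stage1 = hashlib.sha1(password.encode("utf-8")).digest()
    stage2 = hashlib.sha1(stage1).hexdigest().upper()
    return "*" + stage2

# Because there is no salt, the hash is fully determined by the password,
# which is exactly what makes precomputed rainbow tables effective.
print(native_password_hash("password"))
```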

Why the paranoia, you ask? Rainbow tables, ever cheaper and bigger disk storage, and the machine learning/deep learning boom driving GPU performance.

Rainbow tables are the time/space trade-off you reach for once you realize that storing pre-computed hashes of every possible plain text password quickly consumes an enormous amount of disk space.

Task 6: Use a stronger authentication plugin than the mysql_native_password.

Check SQL: 

select plugin, count(*) from mysql.user group by plugin;

The Fix:
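A hedged sketch of the migration (the username and password here are hypothetical; confirm your client libraries support the new plugin before switching anyone over):

```sql
-- MySQL 8.0: caching_sha2_password is the default and a good target;
-- MySQL 5.7 can use sha256_password instead.
ALTER USER 'appuser'@'10.0.0.%'
  IDENTIFIED WITH caching_sha2_password BY 'N3w!Str0ngPassw0rd#2020';

-- MariaDB: ed25519 is the stronger option
-- ALTER USER 'appuser'@'10.0.0.%'
--   IDENTIFIED VIA ed25519 USING PASSWORD('N3w!Str0ngPassw0rd#2020');
```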

Until next time.

Security Tasks: Looking for existing bad or guessable passwords.

So you are running MySQL with the validate_password plugin. What about existing users and their passwords? How do you check that those passwords are OK and aren’t bad like ‘123456’ or easily guessable like ‘Tuesday2020!’? And how do you fix them?

If your application and database has been around for a while, you may have bad or guessable passwords. The validate_password plugin can’t help you with those if they were created prior to using the plugin.

In this article we see how we can check those existing passwords and fix them.

Task 6: Make sure existing passwords are strong passwords:

Simple Steps: 

  1. Ask the owner of the username if their password is strong enough to fulfill the requirements. eg. minimum length 8, upper and lowercase characters, at least 1 special character. This is a simple yes/no answer.
  2. Organize, announce, gain approval and expire all non-application user passwords to force new passwords. This step will allow the new passwords to be checked against the validate_password plugin.
  3. Check application user passwords, are they known? are they strong passwords?

Harder Steps:

  • What happens if no one knows the passwords?
  • What if people are using a bad password but don’t care and won’t tell you?
  • What if you can’t get approval to expire passwords?

In the remote chance you are in this situation, you need to start checking passwords against a known or generated list, or resort to brute-forcing the password check using a tool like hashcat.

Before you whip out your credit card to pay for that GPU-enabled instance on your favourite cloud provider to brute force those passwords, how about working smarter using what we already know about passwords.

We already loaded a bunch of password lists into the db. Why not check existing passwords against those lists?

SQL: Add the MySQL auth_string to an existing password list: 

mysql> alter table 10million_passwords add column auth_string text;
Query OK, 0 rows affected (1.72 sec)
Records: 0 Duplicates: 0 Warnings: 0

mysql> update 10million_passwords set auth_string = CONCAT('*', UPPER(SHA1(UNHEX(SHA1(password)))));
Query OK, 999999 rows affected (13.39 sec)
Rows matched: 999999 Changed: 999999 Warnings: 0

SQL: Check all existing passwords against a known password list

mysql> select u.user,u.host,p.password from mysql.user u
-> join password_list.10million_passwords p on u.authentication_string = p.auth_string;
+------------------+-----------+----------+
| user             | host      | password |
+------------------+-----------+----------+
| BeatsValidPasswd | localhost | P@ssw0rd |
+------------------+-----------+----------+
1 row in set (0.96 sec)

SQL: Build a password list yourself. See appendix for full details

insert into season_dayname_passwords
select DaynamePassword2Special, CONCAT('*', UPPER(SHA1(UNHEX(SHA1(DaynamePassword2Special))))) as Auth_string from (
select distinct replace(concat(dayname(selected_date),date_format(selected_date,'%m%d'),'!'),'a','@') as DaynamePassword2Special from 
(select adddate('1970-01-01',t4.i*10000 + t3.i*1000 + t2.i*100 + t1.i*10 + t0.i) selected_date from
(select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t0,
(select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t1,
(select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t2,
(select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t3,
(select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t4) v
where selected_date between '2018-01-01' and '2026-01-01') gen_pass;

Query OK, 2499 rows affected (0.07 sec)
Records: 2499 Duplicates: 0 Warnings: 0

SQL: Search for passwords using the season or dayname combination.

select u.user,u.host,p.password from mysql.user u 
join password_list.season_dayname_passwords p on u.authentication_string = p.auth_string;

+-------------------+-----------+--------------+
| user              | host      | password     |
+-------------------+-----------+--------------+
| BeatsValidPasswd2 | localhost | January2020! |
| admin             | localhost | June2020!    |
+-------------------+-----------+--------------+
2 rows in set (0.00 sec)

We aren’t done yet… but until next time.

Appendix: Worklog:

-- Checking passwords.

mysql> alter table 10million_passwords add column auth_string text;
Query OK, 0 rows affected (1.72 sec)
Records: 0 Duplicates: 0 Warnings: 0

mysql> update 10million_passwords set auth_string = CONCAT('*', UPPER(SHA1(UNHEX(SHA1(password)))));
Query OK, 999999 rows affected (13.39 sec)
Rows matched: 999999 Changed: 999999 Warnings: 0

-- Create user with a password which scores 100 vs validate_password but is in the password list

create user 'BeatsValidPasswd'@'localhost' identified by 'P@ssw0rd';
create user 'BeatsValidPasswd2'@'localhost' identified by 'January2020!';

-- Now search for mysql users with passwords in the 10million_passwords list

select u.user,u.host,p.password from mysql.user u 
join password_list.10million_passwords p on u.authentication_string = p.auth_string;

mysql> select u.user,u.host,p.password from mysql.user u
-> join password_list.10million_passwords p on u.authentication_string = p.auth_string;
+------------------+-----------+----------+
| user             | host      | password |
+------------------+-----------+----------+
| BeatsValidPasswd | localhost | P@ssw0rd |
+------------------+-----------+----------+
1 row in set (0.96 sec)

-- Add season, dayname passwords 

create table season_dayname_passwords (password varchar(50), auth_string text) engine innodb;

insert into season_dayname_passwords
select SeasonPassword, CONCAT('*', UPPER(SHA1(UNHEX(SHA1(SeasonPassword))))) as Auth_string from (
select distinct concat(monthname(selected_date),year(selected_date),'!') as SeasonPassword from 
(select adddate('1970-01-01',t4.i*10000 + t3.i*1000 + t2.i*100 + t1.i*10 + t0.i) selected_date from
 (select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t0,
 (select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t1,
 (select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t2,
 (select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t3,
 (select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t4) v
where selected_date between '2018-01-01' and '2026-01-01') gen_pass;

Query OK, 97 rows affected (0.05 sec)
Records: 97  Duplicates: 0  Warnings: 0

insert into season_dayname_passwords
select DaynamePassword, CONCAT('*', UPPER(SHA1(UNHEX(SHA1(DaynamePassword))))) as Auth_string from (
select distinct concat(dayname(selected_date),date_format(selected_date,'%m%y'),'!') as DaynamePassword from 
(select adddate('1970-01-01',t4.i*10000 + t3.i*1000 + t2.i*100 + t1.i*10 + t0.i) selected_date from
 (select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t0,
 (select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t1,
 (select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t2,
 (select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t3,
 (select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t4) v
where selected_date between '2018-01-01' and '2026-01-01') gen_pass;

Query OK, 673 rows affected (0.06 sec)
Records: 673  Duplicates: 0  Warnings: 0

insert into season_dayname_passwords
select DaynamePassword, CONCAT('*', UPPER(SHA1(UNHEX(SHA1(DaynamePassword))))) as Auth_string from (
select distinct concat(dayname(selected_date),date_format(selected_date,'%m%d'),'!') as DaynamePassword from 
(select adddate('1970-01-01',t4.i*10000 + t3.i*1000 + t2.i*100 + t1.i*10 + t0.i) selected_date from
 (select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t0,
 (select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t1,
 (select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t2,
 (select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t3,
 (select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t4) v
where selected_date between '2018-01-01' and '2026-01-01') gen_pass;

Query OK, 2499 rows affected (0.07 sec)
Records: 2499  Duplicates: 0  Warnings: 0

insert into season_dayname_passwords
select SeasonPassword2Special, CONCAT('*', UPPER(SHA1(UNHEX(SHA1(SeasonPassword2Special))))) as Auth_string from (
select distinct replace(concat(monthname(selected_date),year(selected_date),'!'),'a','@') as SeasonPassword2Special from 
(select adddate('1970-01-01',t4.i*10000 + t3.i*1000 + t2.i*100 + t1.i*10 + t0.i) selected_date from
 (select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t0,
 (select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t1,
 (select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t2,
 (select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t3,
 (select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t4) v
where selected_date between '2018-01-01' and '2026-01-01') gen_pass;

Query OK, 97 rows affected (0.06 sec)
Records: 97  Duplicates: 0  Warnings: 0

insert into season_dayname_passwords
select DaynamePassword2Special, CONCAT('*', UPPER(SHA1(UNHEX(SHA1(DaynamePassword2Special))))) as Auth_string from (
select distinct replace(concat(dayname(selected_date),date_format(selected_date,'%m%d'),'!'),'a','@') as DaynamePassword2Special from 
(select adddate('1970-01-01',t4.i*10000 + t3.i*1000 + t2.i*100 + t1.i*10 + t0.i) selected_date from
 (select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t0,
 (select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t1,
 (select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t2,
 (select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t3,
 (select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t4) v
where selected_date between '2018-01-01' and '2026-01-01') gen_pass;

Query OK, 2499 rows affected (0.07 sec)
Records: 2499  Duplicates: 0  Warnings: 0

-- Now search for mysql users with passwords in the season_dayname password list

select u.user,u.host,p.password from mysql.user u 
join password_list.season_dayname_passwords p on u.authentication_string = p.auth_string;

+-------------------+-----------+--------------+
| user              | host      | password     |
+-------------------+-----------+--------------+
| BeatsValidPasswd2 | localhost | January2020! |
| admin             | localhost | June2020!    |
+-------------------+-----------+--------------+
2 rows in set (0.00 sec)

Security Tasks: Passwords

In this continuing series of posts of fundamental DBA security tasks,  we will look at passwords again, the good, the bad and the outright ugly.
This group of tasks will help you find insecure users with poor passwords and provide recommendations on how to fix them.

Note:

For most new installations a lot of the checks are going to return nothing.
If however you are running an older version of MySQL say for that legacy app written back in 2007, then these kind of security problems can still occur.

Task 3: Find users with the same password as username

The problem:

The first password to guess is the username. Can it get any easier to hack your db…

SQL to check:

SELECT user,authentication_string,host 
FROM mysql.user
WHERE authentication_string=CONCAT('*', UPPER(SHA1(UNHEX(SHA1(user)))));

The Fix:

  • Discuss with the owner of the application/database.
  • Change the password to a strong password (minimum length 16, with uppercase, numbers and special characters) using ALTER USER or SET PASSWORD FOR.
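For example (the username and password here are hypothetical):

```sql
-- Works on MySQL 5.7/8.0 and MariaDB
ALTER USER 'appuser'@'localhost' IDENTIFIED BY 'L0ng&Rand0m#Passw0rd2020';

-- MySQL 8.0 equivalent with SET PASSWORD (the string is the cleartext password)
SET PASSWORD FOR 'appuser'@'localhost' = 'L0ng&Rand0m#Passw0rd2020';
```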

Task 4: Find users without a password set

The problem:

Users without passwords are a joy for hackers (external or internal). The easiest password to guess is no password.

SQL to check:

SELECT User,host
FROM mysql.user
WHERE authentication_string='';

The Fix:

  • Discuss with the owner of the application/database.
  • Add a strong password (minimum length 16, with uppercase, numbers and special characters) using ALTER USER or SET PASSWORD FOR.

Task 5: Set the password expiry policy for non-application users to 90 days.

The problem:

You know that Post-it note with “Str0ngP@55w0rd!” you stuck on your monitor 3 years ago, which you are still using and everyone knows about… yeah, that is a problem.

Changing passwords every 90 days is good security practice.
This is the reason your LAN/LDAP/corporate login password prompts you after 90 days.

The SQL to check:

SELECT user, host, password_lifetime 
from mysql.user where
password_lifetime IS NULL;

The Fix: 

  • Add a password expire interval to non-application users.

Example: For username, SecureSam

ALTER USER 'securesam'@'localhost' PASSWORD EXPIRE INTERVAL 90 DAY;

How to apply the fixes:

  1. Prepare a security audit report/review.
  2. Present or email to stakeholders (developers, reporting users, managers)
  3. Prepare a maintenance plan and gain approval to change.
  4. Organize maintenance window with specific time and date and gain approval for change.
  5. Follow the maintenance plan, which will be something like this
    – Announce start of maintenance window in your chat/communication channel.
    –  run the fixes you have listed in the maintenance plan.
    – Get developers or report users to test to make sure their application works
    – Confirm good.
    – Announce the end of the maintenance window in your chat/communication channel.

Security Tasks: Only 0.13% of passwords pass the validate_password plugin MEDIUM policy score.

Passwords lists abound on the net. They exist to:

  1. Get a laugh out of silly passwords… 123456 anyone?
  2. Develop a dictionary file to check user passwords either for validation or cracking.

The validate_password plugin has been available since MySQL 5.6. It checks password strength against known security best practices for passwords.

I downloaded two password lists from https://github.com/danielmiessler/SecLists
10-million-password-list-top-1000000.txt
xato-net-10-million-passwords-1000000.txt

Then I loaded them into a MySQL 5.7 db and checked how many would pass the MEDIUM password policy. The MEDIUM policy is basically: length of at least 8, at least one numeric, at least one lowercase and one uppercase letter, and at least one special character.

https://dev.mysql.com/doc/refman/5.7/en/validate-password-options-variables.html#sysvar_validate_password_policy

Result:

At most 0.13% of the passwords scored 75 or higher using the validate_password_strength function.
https://dev.mysql.com/doc/refman/8.0/en/encryption-functions.html#function_validate-password-strength

So if you were going to use those lists to build a dictionary file for the validate_password plugin, you are mostly wasting your time. Most entries are already blocked by the MEDIUM policy.
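The 0.13% figure follows directly from the counts in the appendix worklog (1314 of 999999 passwords in the first list scored 100, and 672 of 999998 in the second):

```python
# Share of passwords scoring >= 75 (i.e. the 100-strength bucket)
lists = {
    "10-million-password-list-top-1000000": (1314, 999999),
    "xato-net-10-million-passwords-1000000": (672, 999998),
}
for name, (strong, total) in lists.items():
    print(f"{name}: {100 * strong / total:.2f}%")
# The larger of the two shares rounds to 0.13%.
```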

Recommendation: make sure the validate_password plugin is on (it is on by default in MySQL 5.7 and MySQL 8.0), and it will force strong passwords.
Disabling it is a super bad idea. Do you really want passwords like ‘123456’ or ‘qwerty’?

So what bad passwords remain?

Enter the Western Australian Information Systems Audit Report 2018.
https://audit.wa.gov.au/wp-content/uploads/2018/08/report2018_14-IS-GCC-App-Pass.pdf

They found people were using nominally strong passwords, which were still too easy to guess!
Stuff like ‘October2017’ and ‘Summer2016’

So maybe we need a dictionary file for those kinds of passwords, because they pass the strength requirement but are too easy to guess.

Generating guessable strong passwords in the db.

Note: The trick using select 1 union select 2 is a common method for generating rows; if you don’t understand it, run the inner selects first to see what is happening. The SQL below will return the dates between the two dates in the where clause.

SQL to generate Season and year type guessable passwords:

select distinct concat(monthname(selected_date),year(selected_date),'!') as SeasonPassword from 
(select adddate('1970-01-01',t4.i*10000 + t3.i*1000 + t2.i*100 + t1.i*10 + t0.i) selected_date from
(select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t0,
(select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t1,
(select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t2,
(select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t3,
(select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t4) v
where selected_date between '2018-01-01' and '2026-01-01' limit 10;

Sample output:

+----------------+
| SeasonPassword |
+----------------+
| January2018!   |
| February2018!  |
| March2018!     |
| April2018!     |
| May2018!       |
| June2018!      |
| July2018!      |
| August2018!    |
| September2018! |
| October2018!   |
+----------------+
10 rows in set (0.01 sec)
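If you prefer to build the candidate list outside the database, the same month-name generator can be sketched in Python (the function name is mine, not from the post):

```python
import calendar
from datetime import date

def season_passwords(start: date, end: date) -> list[str]:
    """Generate MonthnameYYYY! candidates, e.g. 'January2018!',
    for every month whose first day falls in [start, end]."""
    out = []
    for year in range(start.year, end.year + 1):
        for month in range(1, 13):
            d = date(year, month, 1)
            if start <= d <= end:
                # calendar.month_name avoids locale-dependent strftime output
                out.append(f"{calendar.month_name[month]}{year}!")
    return out

print(season_passwords(date(2018, 1, 1), date(2018, 3, 31)))
```

Run over the same 2018-01-01 to 2026-01-01 range as the SQL, it produces the same 97 distinct candidates.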

SQL to generate guessable passwords with dayname and date:

select distinct concat(dayname(selected_date),date_format(selected_date,'%m%y'),'!') as DaynamePassword from 
(select adddate('1970-01-01',t4.i*10000 + t3.i*1000 + t2.i*100 + t1.i*10 + t0.i) selected_date from
(select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t0,
(select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t1,
(select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t2,
(select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t3,
(select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t4) v
where selected_date between '2018-01-01' and '2026-01-01' limit 10;

Sample Output:

+-----------------+
| DaynamePassword |
+-----------------+
| Monday0118!     |
| Tuesday0118!    |
| Wednesday0118!  |
| Thursday0118!   |
| Friday0118!     |
| Saturday0118!   |
| Sunday0118!     |
| Thursday0218!   |
| Friday0218!     |
| Saturday0218!   |
+-----------------+
10 rows in set (0.01 sec)

SQL to generate guessable passwords using l33t character replacements:

select distinct replace(concat(dayname(selected_date),date_format(selected_date,'%m%d'),'!'),'a','@') as DaynamePassword2Special from 
(select adddate('1970-01-01',t4.i*10000 + t3.i*1000 + t2.i*100 + t1.i*10 + t0.i) selected_date from
(select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t0,
(select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t1,
(select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t2,
(select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t3,
(select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t4) v
where selected_date between '2018-01-01' and '2026-01-01' limit 10;

Sample output:

+-------------------------+
| DaynamePassword2Special |
+-------------------------+
| Mond@y0101!             |
| Tuesd@y0102!            |
| Wednesd@y0103!          |
| Thursd@y0104!           |
| Frid@y0105!             |
| S@turd@y0106!           |
| Sund@y0107!             |
| Mond@y0108!             |
| Tuesd@y0109!            |
| Wednesd@y0110!          |
+-------------------------+
10 rows in set (0.01 sec)

Got this far? Seriously you must be in the 0.13% of all readers. Truly l33t.

If you would like a 1 Mbyte dictionary file to pass to the validate_password plugin, drop a comment below or like the post.
If there is enough interest I will post a dictionary file…

Until next time..

Appendix:

The worklog:

https://github.com/danielmiessler/SecLists

Using https://dev.mysql.com/doc/refman/8.0/en/encryption-functions.html#function_validate-password-strength

mysql>  load data infile '/var/lib/mysql-files/10-million-password-list-top-1000000.txt' into table 10million_passwords;
Query OK, 999999 rows affected (1.68 sec)
Records: 999999  Deleted: 0  Skipped: 0  Warnings: 0

mysql> select validate_password_strength(password),count(*) from 10million_passwords group by validate_password_strength(password);
+--------------------------------------+----------+
| validate_password_strength(password) | count(*) |
+--------------------------------------+----------+
|                                    0 |      854 |
|                                   25 |   511016 |
|                                   50 |   486815 |
|                                  100 |     1314 |
+--------------------------------------+----------+
mysql> select password from 10million_passwords where validate_password_strength(password) >=75 limit 10;
+-----------------+
| password        |
+-----------------+
| L58jkdjP!       |
| P@ssw0rd        |
| !QAZ2wsx        |
| 1qaz!QAZ        |
| 1qaz@WSX        |
| ZAQ!2wsx        |
| !QAZxsw2        |
| NICK1234-rem936 |
| xxPa33bq.aDNA   |
| !QAZ1qaz        |
+-----------------+
10 rows in set (0.05 sec)
mysql> load data infile '/var/lib/mysql-files/xato-net-10-million-passwords-1000000.txt' into table xato_passwords;
Query OK, 999998 rows affected (1.54 sec)
Records: 999998  Deleted: 0  Skipped: 0  Warnings: 0


mysql> select validate_password_strength(password),count(*) from xato_passwords group by validate_password_strength(password);
+--------------------------------------+----------+
| validate_password_strength(password) | count(*) |
+--------------------------------------+----------+
|                                    0 |      830 |
|                                   25 |   510787 |
|                                   50 |   487709 |
|                                  100 |      672 |
+--------------------------------------+----------+
4 rows in set (0.68 sec)

mysql> select password from xato_passwords where validate_password_strength(password)>=75 limit 10;
+-----------------+
| password        |
+-----------------+
| L58jkdjP!       |
| P@ssw0rd        |
| 1qaz!QAZ        |
| !QAZ2wsx        |
| 1qaz@WSX        |
| ZAQ!2wsx        |
| !QAZxsw2        |
| NICK1234-rem936 |
| xxPa33bq.aDNA   |
| g00dPa$$w0rD    |
+-----------------+
10 rows in set (0.04 sec)

Information Systems Audit Report 2018
https://audit.wa.gov.au/wp-content/uploads/2018/08/report2018_14-IS-GCC-App-Pass.pdf

Variants of date and season like Summer2017 or October2017

mysql> select validate_password_strength('Summer2017');
+------------------------------------------+
| validate_password_strength('Summer2017') |
+------------------------------------------+
|                                       50 |
+------------------------------------------+
1 row in set (0.00 sec)

mysql> select validate_password_strength('October2017');
+-------------------------------------------+
| validate_password_strength('October2017') |
+-------------------------------------------+
|                                        50 |
+-------------------------------------------+
1 row in set (0.00 sec)

mysql>  select validate_password_strength('Summer2017!');
+-------------------------------------------+
| validate_password_strength('Summer2017!') |
+-------------------------------------------+
|                                       100 |
+-------------------------------------------+
1 row in set (0.00 sec)

Security Tasks part 2

In the second part of this series of DBA security tasks, we will continue to look at fixing mysql db users with an insecure setup.

These are not tips that you read with your morning coffee and then forget about; these are important tasks to add to your project or task list and work through until they are completed, and signed off if necessary.

Task 2: Remove anonymous users

The Problem: 

An anonymous user allows anyone connecting from the matching host string to log in without a specific username.

SQL to check:

select user,host from mysql.user where user = '';
+------+-----------+
| user | host      |
+------+-----------+
|      | 192.168.% |
+------+-----------+
1 row in set (0.00 sec)

The Fix:

Following the previous post’s recommendations, all connections should use a specific user and, if possible, a specific hostname or subnet mask.
Remove any anonymous users.
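For the example above, that means dropping the user with an empty username:

```sql
DROP USER ''@'192.168.%';
```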

How to apply the fix:

  1. Prepare a security audit report/review.
  2. Present or email to stakeholders (developers, reporting users, managers)
  3. Prepare a maintenance plan and gain approval to change.
  4. Organize maintenance window with specific time and date and gain approval for change.
  5. Follow the maintenance plan, which will look something like this:
    – Announce the start of the maintenance window in your chat/communication channel.
    – Drop any anonymous users.
    – Get developers or reporting users to test that their applications still work.
    – Confirm all is good.
    – Announce the end of the maintenance window in your chat/communication channel.

Note:

Recent versions of MySQL, MariaDB and Percona Server recognize the problem with anonymous users, and they are not created (or are removed) if you follow the standard secure-installation steps when the database is first set up. Sadly this doesn't help if your database has been upgraded over time from much older versions.

Next time we will look at passwords… the good, the bad and the ugly.

Faster Galera SST times using [sst] options.

This is a follow-up to the post where I outlined how to use the power of Linux pipes and pigz to compress and decompress your data in parallel for faster streaming backups.

Out of the box, the SST scripts used by Galera don't try to optimize for speed. They just try to be robust and not fail at the job they are required to do: getting a Galera node back in sync with the cluster. But we can do better…

So I looked into the options you can pass to the SST process by adding an [sst] section to your .cnf file. I tested them in a lab running the same OS and DB version as the production database we supported, to make sure the SST wouldn't break.
And the results? We saw a similar speed-up in SST duration to the one I had seen when streaming backups to build a backup replica.

The time difference when dealing with large datasets is huge.

Notes:

  • These options are added to a .cnf file on every DB server in the Galera cluster.
  • pigz needs to be installed on all db servers in the Galera cluster.
  • WARNING: Test this in a lab environment before rolling this out to production.
  • WARNING: Test this in a lab environment before rolling this out to production.
  • WARNING: Test this in a lab environment before rolling this out to production.
  • getting the idea yet… just test it first already.
  • These options will work with either:
    wsrep_sst_method=mariabackup
    or
    wsrep_sst_method=xtrabackup-v2
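Since pigz must be installed on every node, a small pre-flight check on each server saves a failed SST later. A sketch, assuming nothing beyond a POSIX shell (`check_tool` is just an illustrative helper name):

```shell
# check_tool NAME: report whether NAME is on the PATH.
check_tool() {
    if command -v "$1" >/dev/null 2>&1; then
        echo "$1: found"
    else
        echo "$1: MISSING" >&2
        return 1
    fi
}

# Run on every Galera node before adding the [sst] options:
check_tool pigz || echo "install pigz before enabling the [sst] options"
```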

SST options to speed up using pigz and parallel threads:

[sst]
compressor='pigz -p 8'
decompressor='pigz -dc -p 8'
inno-backup-opts="--parallel=8"

Explanation:

compressor='pigz -p 8' : compress the streaming backup with pigz using 8 threads
decompressor='pigz -dc -p 8' : decompress the streaming backup with pigz using 8 threads
inno-backup-opts="--parallel=8" : back up the db using 8 parallel copy threads
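To picture what the compressor/decompressor options change, here is a hand-built equivalent of the pipe the SST scripts assemble. This is a sketch only; gzip stands in when pigz is not installed, since pigz reads and writes the same gzip format:

```shell
# Compress the stream with 8 threads on the donor side,
# decompress with 8 threads on the joiner side.
if command -v pigz >/dev/null 2>&1; then
    COMPRESS='pigz -p 8'; DECOMPRESS='pigz -dc -p 8'
else
    COMPRESS='gzip'; DECOMPRESS='gzip -dc'
fi
echo 'galera sst payload' | $COMPRESS | $DECOMPRESS
# → galera sst payload
```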

Documentation:

https://mariadb.com/kb/en/mariabackup-options/#-parallel
https://www.percona.com/doc/percona-xtrabackup/2.3/innobackupex/innobackupex_option_reference.html

Until next time.