Security Tasks: Looking for existing bad or guessable passwords.

So you're running MySQL with the validate_password plugin. What about existing users and their passwords? How do you check that those passwords are OK and aren't bad like ‘123456’ or easily guessable like ‘Tuesday2020!’? And how do you fix them?

If your application and database have been around for a while, you may have bad or guessable passwords. The validate_password plugin can't help you with those if they were created prior to using the plugin.

In this article we'll see how to check those existing passwords and fix them.

Task 6: Make sure existing passwords are strong passwords

Simple Steps: 

  1. Ask the owner of the username if their password is strong enough to fulfill the requirements, e.g. minimum length 8, upper- and lowercase characters, at least 1 special character. This is a simple yes/no answer.
  2. Organize, announce, gain approval and expire all non-application user passwords to force new passwords. This step will allow the new passwords to be checked against the validate_password plugin.
  3. Check application user passwords: are they known? Are they strong passwords?

Harder Steps:

  • What happens if no one knows the passwords?
  • What if people are using a bad password but don’t care and won’t tell you?
  • What if you can’t get approval to expire passwords?

In the remote chance you are in this situation, you need to start checking passwords against either a known or generated list, or you resort to brute-forcing the hashes using a tool like hashcat.

Before you whip out your credit card to play/pay for that GPU-enabled instance on your favourite cloud provider to brute force those passwords, how about working smarter using what we already know about passwords.

We already loaded a bunch of password lists into the db. Why not check existing passwords against those lists?

SQL: Add the MySQL auth_string to an existing password list: 

mysql> alter table 10million_passwords add column auth_string text;
Query OK, 0 rows affected (1.72 sec)
Records: 0 Duplicates: 0 Warnings: 0

mysql> update 10million_passwords set auth_string = CONCAT('*', UPPER(SHA1(UNHEX(SHA1(password)))));
Query OK, 999999 rows affected (13.39 sec)
Rows matched: 999999 Changed: 999999 Warnings: 0
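The transform above is the mysql_native_password hash: '*' plus the upper-cased SHA1 of the raw SHA1 digest of the password. A minimal Python sketch of the same computation (the helper name is mine), handy if you'd rather build the auth_string column in an external script:

```python
import hashlib

def mysql_native_auth_string(password: str) -> str:
    # SQL equivalent: CONCAT('*', UPPER(SHA1(UNHEX(SHA1(password)))))
    inner = hashlib.sha1(password.encode()).digest()   # raw SHA1 bytes (the UNHEX step)
    return '*' + hashlib.sha1(inner).hexdigest().upper()

print(mysql_native_auth_string('password'))
# → *2470C0C06DEE42FD1618BB99005ADCA2EC9D1E19
```

Note this only matches mysql_native_password accounts; caching_sha2_password (the MySQL 8.0 default) uses a salted scheme, so its authentication_string values can't be precomputed this way.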

SQL: Check all existing passwords against a known password list

mysql> select u.user,u.host,p.password from mysql.user u
-> join password_list.10million_passwords p on u.authentication_string = p.auth_string;
+------------------+-----------+----------+
| user             | host      | password |
+------------------+-----------+----------+
| BeatsValidPasswd | localhost | P@ssw0rd |
+------------------+-----------+----------+
1 row in set (0.96 sec)

SQL: Build a password list yourself. See appendix for full details

insert into season_dayname_passwords
select DaynamePassword2Special, CONCAT('*', UPPER(SHA1(UNHEX(SHA1(DaynamePassword2Special))))) as Auth_string from (
select distinct replace(concat(dayname(selected_date),date_format(selected_date,'%m%d'),'!'),'a','@') as DaynamePassword2Special from 
(select adddate('1970-01-01',t4.i*10000 + t3.i*1000 + t2.i*100 + t1.i*10 + t0.i) selected_date from
(select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t0,
(select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t1,
(select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t2,
(select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t3,
(select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t4) v
where selected_date between '2018-01-01' and '2026-01-01') gen_pass;

Query OK, 2499 rows affected (0.07 sec)
Records: 2499 Duplicates: 0 Warnings: 0

SQL: Search for passwords using the season or dayname combination.

select u.user,u.host,p.password from mysql.user u 
join password_list.season_dayname_passwords p on u.authentication_string = p.auth_string;

+-------------------+-----------+--------------+
| user              | host      | password     |
+-------------------+-----------+--------------+
| BeatsValidPasswd2 | localhost | January2020! |
| admin             | localhost | June2020!    |
+-------------------+-----------+--------------+
2 rows in set (0.00 sec)

We aren’t done yet… but until next time.

Appendix: Worklog:

-- Checking passwords.

mysql> alter table 10million_passwords add column auth_string text;
Query OK, 0 rows affected (1.72 sec)
Records: 0 Duplicates: 0 Warnings: 0

mysql> update 10million_passwords set auth_string = CONCAT('*', UPPER(SHA1(UNHEX(SHA1(password)))));
Query OK, 999999 rows affected (13.39 sec)
Rows matched: 999999 Changed: 999999 Warnings: 0

-- Create user with a password which scores 100 vs validate_password but is in the password list

create user 'BeatsValidPasswd'@'localhost' identified by 'P@ssw0rd';
create user 'BeatsValidPasswd2'@'localhost' identified by 'January2020!';

-- Now search for mysql users with passwords in the 10million_passwords list

select u.user,u.host,p.password from mysql.user u 
join password_list.10million_passwords p on u.authentication_string = p.auth_string;

mysql> select u.user,u.host,p.password from mysql.user u
-> join password_list.10million_passwords p on u.authentication_string = p.auth_string;
+------------------+-----------+----------+
| user             | host      | password |
+------------------+-----------+----------+
| BeatsValidPasswd | localhost | P@ssw0rd |
+------------------+-----------+----------+
1 row in set (0.96 sec)

-- Add season, dayname passwords 

create table season_dayname_passwords (password varchar(50), auth_string text) engine innodb;

insert into season_dayname_passwords
select SeasonPassword, CONCAT('*', UPPER(SHA1(UNHEX(SHA1(SeasonPassword))))) as Auth_string from (
select distinct concat(monthname(selected_date),year(selected_date),'!') as SeasonPassword from 
(select adddate('1970-01-01',t4.i*10000 + t3.i*1000 + t2.i*100 + t1.i*10 + t0.i) selected_date from
 (select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t0,
 (select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t1,
 (select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t2,
 (select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t3,
 (select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t4) v
where selected_date between '2018-01-01' and '2026-01-01') gen_pass;

Query OK, 97 rows affected (0.05 sec)
Records: 97  Duplicates: 0  Warnings: 0

insert into season_dayname_passwords
select DaynamePassword, CONCAT('*', UPPER(SHA1(UNHEX(SHA1(DaynamePassword))))) as Auth_string from (
select distinct concat(dayname(selected_date),date_format(selected_date,'%m%y'),'!') as DaynamePassword from 
(select adddate('1970-01-01',t4.i*10000 + t3.i*1000 + t2.i*100 + t1.i*10 + t0.i) selected_date from
 (select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t0,
 (select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t1,
 (select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t2,
 (select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t3,
 (select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t4) v
where selected_date between '2018-01-01' and '2026-01-01') gen_pass;

Query OK, 673 rows affected (0.06 sec)
Records: 673  Duplicates: 0  Warnings: 0

insert into season_dayname_passwords
select DaynamePassword, CONCAT('*', UPPER(SHA1(UNHEX(SHA1(DaynamePassword))))) as Auth_string from (
select distinct concat(dayname(selected_date),date_format(selected_date,'%m%d'),'!') as DaynamePassword from 
(select adddate('1970-01-01',t4.i*10000 + t3.i*1000 + t2.i*100 + t1.i*10 + t0.i) selected_date from
 (select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t0,
 (select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t1,
 (select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t2,
 (select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t3,
 (select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t4) v
where selected_date between '2018-01-01' and '2026-01-01') gen_pass;

Query OK, 2499 rows affected (0.07 sec)
Records: 2499  Duplicates: 0  Warnings: 0

insert into season_dayname_passwords
select SeasonPassword2Special, CONCAT('*', UPPER(SHA1(UNHEX(SHA1(SeasonPassword2Special))))) as Auth_string from (
select distinct replace(concat(monthname(selected_date),year(selected_date),'!'),'a','@') as SeasonPassword2Special from 
(select adddate('1970-01-01',t4.i*10000 + t3.i*1000 + t2.i*100 + t1.i*10 + t0.i) selected_date from
 (select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t0,
 (select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t1,
 (select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t2,
 (select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t3,
 (select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t4) v
where selected_date between '2018-01-01' and '2026-01-01') gen_pass;

Query OK, 97 rows affected (0.06 sec)
Records: 97  Duplicates: 0  Warnings: 0

insert into season_dayname_passwords
select DaynamePassword2Special, CONCAT('*', UPPER(SHA1(UNHEX(SHA1(DaynamePassword2Special))))) as Auth_string from (
select distinct replace(concat(dayname(selected_date),date_format(selected_date,'%m%d'),'!'),'a','@') as DaynamePassword2Special from 
(select adddate('1970-01-01',t4.i*10000 + t3.i*1000 + t2.i*100 + t1.i*10 + t0.i) selected_date from
 (select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t0,
 (select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t1,
 (select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t2,
 (select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t3,
 (select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t4) v
where selected_date between '2018-01-01' and '2026-01-01') gen_pass;

Query OK, 2499 rows affected (0.07 sec)
Records: 2499  Duplicates: 0  Warnings: 0

-- Now search for mysql users with passwords in the season_dayname password list

select u.user,u.host,p.password from mysql.user u 
join password_list.season_dayname_passwords p on u.authentication_string = p.auth_string;

+-------------------+-----------+--------------+
| user              | host      | password     |
+-------------------+-----------+--------------+
| BeatsValidPasswd2 | localhost | January2020! |
| admin             | localhost | June2020!    |
+-------------------+-----------+--------------+
2 rows in set (0.00 sec)

Security Tasks: Passwords

In this continuing series of posts on fundamental DBA security tasks, we will look at passwords again: the good, the bad and the outright ugly.
This group of tasks will help you find insecure users with poor passwords and provide recommendations on how to fix them.

Note:

For most new installations a lot of the checks are going to return nothing.
If however you are running an older version of MySQL, say for that legacy app written back in 2007, then these kinds of security problems can still occur.

Task 3: Find users with the same password as username

The problem:

The first password to guess is the username. Could it be any easier to hack your db…

SQL to check:

SELECT user,authentication_string,host 
FROM mysql.user
WHERE authentication_string=CONCAT('*', UPPER(SHA1(UNHEX(SHA1(user)))));

The Fix:

  • Discuss with the owner of the application/database.
  • Change the password to a strong password (minimum length 16, with uppercase, numbers and special characters) using ALTER USER or SET PASSWORD FOR.
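If you have a batch of these to fix, you'll want replacement passwords generated rather than invented. A small Python sketch meeting the policy above (the helper and the special-character set are my own choices, not from this post):

```python
import secrets
import string

def strong_password(length: int = 16) -> str:
    """Generate a random password: length 16+, with at least one
    uppercase, lowercase, digit and special character."""
    specials = '!@#$%^&*'
    pools = [string.ascii_uppercase, string.ascii_lowercase, string.digits, specials]
    # one guaranteed character from each class, the rest from the combined pool
    chars = [secrets.choice(pool) for pool in pools]
    combined = ''.join(pools)
    chars += [secrets.choice(combined) for _ in range(length - len(chars))]
    secrets.SystemRandom().shuffle(chars)
    return ''.join(chars)
```

Using the secrets module (not random) matters here: it draws from the OS CSPRNG, which is what you want for credentials.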

Task 4: Find users without a password set

The problem:

Users without passwords are a joy for hackers (external or internal). The easiest password to guess is no password.

SQL to check:

SELECT User,host
FROM mysql.user
WHERE authentication_string='';

The Fix:

  • Discuss with the owner of the application/database.
  • Add a strong password (minimum length 16, with uppercase, numbers and special characters) using ALTER USER or SET PASSWORD FOR.

Task 5: Set the password expiry policy for non-application users to 90 days.

The problem:

You know that sticky note with “Str0ngP@55w0rd!” you stuck on your monitor 3 years ago, which you are still using and everyone knows… yeah, that is a problem.

Changing passwords every 90 days is good security practice.
This is the reason your LAN/LDAP/corporate login password prompts you after 90 days.

The SQL to check:

SELECT user, host, password_lifetime 
from mysql.user where
password_lifetime IS NULL;

The Fix: 

  • Add a password expire interval to non-application users.

Example: For username, SecureSam

ALTER USER 'securesam'@'localhost' PASSWORD EXPIRE INTERVAL 90 DAY;

How to apply the fixes:

  1. Prepare a security audit report/review.
  2. Present or email to stakeholders (developers, reporting users, managers)
  3. Prepare a maintenance plan and gain approval to change.
  4. Organize maintenance window with specific time and date and gain approval for change.
  5. Follow the maintenance plan, which will be something like this
    – Announce start of maintenance window in your chat/communication channel.
    – Run the fixes you have listed in the maintenance plan.
    – Get developers or report users to test to make sure their application works
    – Confirm good.
    – Announce the end of the maintenance window in your chat/communication channel.

Security Tasks: Only 0.13% of passwords pass the validate_password plugin MEDIUM policy score.

Passwords lists abound on the net. They exist to:

  1. Get a laugh out of silly passwords… 123456 anyone?
  2. Develop a dictionary file to check user passwords either for validation or cracking.

Since MySQL 5.6 the validate_password plugin has been available. It enables password strength to be checked against known security best practices.

I downloaded two password lists from https://github.com/danielmiessler/SecLists
10-million-password-list-top-1000000.txt
xato-net-10-million-passwords-1000000.txt

Then I loaded them into a MySQL 5.7 db and checked how many would pass the MEDIUM password policy. The MEDIUM policy is basically: minimum length 8; at least one numeric, one lowercase, one uppercase, and at least one special character.

https://dev.mysql.com/doc/refman/5.7/en/validate-password-options-variables.html#sysvar_validate_password_policy
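As a rough Python approximation of those MEDIUM checks (a sketch only; the real plugin also compares the password against the user name and an optional dictionary file, which this skips):

```python
import string

def passes_medium_policy(password: str) -> bool:
    """Approximate validate_password MEDIUM: length >= 8 plus at least
    one digit, one lowercase, one uppercase and one special character."""
    return (len(password) >= 8
            and any(c.isdigit() for c in password)
            and any(c.islower() for c in password)
            and any(c.isupper() for c in password)
            and any(c in string.punctuation for c in password))

print(passes_medium_policy('123456'))      # False
print(passes_medium_policy('P@ssw0rd'))    # True
print(passes_medium_policy('Summer2017'))  # False: no special character
```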

Result:

Only 0.13% of the passwords, at most, scored at least 75 using the validate_password_strength function.
https://dev.mysql.com/doc/refman/8.0/en/encryption-functions.html#function_validate-password-strength

So if you were going to use those lists to build a dictionary file for the validate_password plugin you are mostly wasting your time. Most will already be blocked by the MEDIUM policy.

Recommendation: Make sure the validate_password plugin is on (it is on by default in MySQL 5.7 and MySQL 8.0) and it will force strong passwords.
Disabling it is a super bad idea. Do you really want passwords like ‘123456’ or ‘qwerty’?

So what bad passwords remain?

Enter the Western Australian Information Systems Audit Report 2018.
https://audit.wa.gov.au/wp-content/uploads/2018/08/report2018_14-IS-GCC-App-Pass.pdf

They found people were using nominally strong passwords, which were still too easy to guess!
Stuff like ‘October2017’ and ‘Summer2016’

So maybe we need a dictionary file for those kinds of passwords, because they pass the strength requirement but are too easy to guess.

Generating guessable strong passwords in the db.

Note: The trick using select 1 union select 2 is a common method; if you don't understand it, run the inner selects first to see what is happening. The SQL below will return the dates between the two dates in the where clause.

SQL to generate Season and year type guessable passwords:

select distinct concat(monthname(selected_date),year(selected_date),'!') as SeasonPassword from 
(select adddate('1970-01-01',t4.i*10000 + t3.i*1000 + t2.i*100 + t1.i*10 + t0.i) selected_date from
(select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t0,
(select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t1,
(select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t2,
(select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t3,
(select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t4) v
where selected_date between '2018-01-01' and '2026-01-01' limit 10;

Sample output:

+----------------+
| SeasonPassword |
+----------------+
| January2018!   |
| February2018!  |
| March2018!     |
| April2018!     |
| May2018!       |
| June2018!      |
| July2018!      |
| August2018!    |
| September2018! |
| October2018!   |
+----------------+
10 rows in set (0.01 sec)
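For comparison, the same month-name generator can be sketched in Python (function name is mine); it walks the date range day by day and keeps the first occurrence of each Monthname+Year+'!' combination, just like the distinct in the SQL:

```python
from datetime import date, timedelta

def season_passwords(start: date, end: date):
    """Yield MonthnameYear! passwords for each month between two dates,
    mirroring the monthname()/year() SQL generator above."""
    seen = set()
    d = start
    while d <= end:
        pw = d.strftime('%B%Y') + '!'   # e.g. January2018!
        if pw not in seen:
            seen.add(pw)
            yield pw
        d += timedelta(days=1)

passwords = list(season_passwords(date(2018, 1, 1), date(2026, 1, 1)))
print(passwords[:3])   # ['January2018!', 'February2018!', 'March2018!']
print(len(passwords))  # 97, matching the SQL row count in the appendix
```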

SQL to generate guessable passwords with dayname and date:

select distinct concat(dayname(selected_date),date_format(selected_date,'%m%y'),'!') as DaynamePassword from 
(select adddate('1970-01-01',t4.i*10000 + t3.i*1000 + t2.i*100 + t1.i*10 + t0.i) selected_date from
(select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t0,
(select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t1,
(select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t2,
(select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t3,
(select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t4) v
where selected_date between '2018-01-01' and '2026-01-01' limit 10;

Sample Output:

+-----------------+
| DaynamePassword |
+-----------------+
| Monday0118!     |
| Tuesday0118!    |
| Wednesday0118!  |
| Thursday0118!   |
| Friday0118!     |
| Saturday0118!   |
| Sunday0118!     |
| Thursday0218!   |
| Friday0218!     |
| Saturday0218!   |
+-----------------+
10 rows in set (0.01 sec)

SQL to generate guessable passwords using l33t character replacements:

select distinct replace(concat(dayname(selected_date),date_format(selected_date,'%m%d'),'!'),'a','@') as DaynamePassword2Special from 
(select adddate('1970-01-01',t4.i*10000 + t3.i*1000 + t2.i*100 + t1.i*10 + t0.i) selected_date from
(select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t0,
(select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t1,
(select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t2,
(select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t3,
(select 0 i union select 1 union select 2 union select 3 union select 4 union select 5 union select 6 union select 7 union select 8 union select 9) t4) v
where selected_date between '2018-01-01' and '2026-01-01' limit 10;

Sample output:

+-------------------------+
| DaynamePassword2Special |
+-------------------------+
| Mond@y0101!             |
| Tuesd@y0102!            |
| Wednesd@y0103!          |
| Thursd@y0104!           |
| Frid@y0105!             |
| S@turd@y0106!           |
| Sund@y0107!             |
| Mond@y0108!             |
| Tuesd@y0109!            |
| Wednesd@y0110!          |
+-------------------------+
10 rows in set (0.01 sec)
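And the l33t dayname variant in the same Python style (again a sketch; names are mine), where str.replace stands in for the SQL replace() call:

```python
from datetime import date, timedelta

def leet_dayname_passwords(start: date, end: date):
    """Return distinct Dayname+MMDD+'!' passwords with 'a' replaced
    by '@', mirroring the replace(concat(dayname(...),...)) SQL above."""
    seen, out = set(), []
    d = start
    while d <= end:
        pw = (d.strftime('%A%m%d') + '!').replace('a', '@')   # e.g. Mond@y0101!
        if pw not in seen:
            seen.add(pw)
            out.append(pw)
        d += timedelta(days=1)
    return out

pws = leet_dayname_passwords(date(2018, 1, 1), date(2026, 1, 1))
print(pws[:2])   # ['Mond@y0101!', 'Tuesd@y0102!'] — 2018-01-01 was a Monday
```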

Got this far? Seriously you must be in the 0.13% of all readers. Truly l33t.

If you would like a 1 Mbyte dictionary file to pass to the validate_password plugin, drop a comment below or like the post.
If there is enough interest I will post a dictionary file…

Until next time..

Appendix:

The worklog:

https://github.com/danielmiessler/SecLists

Using https://dev.mysql.com/doc/refman/8.0/en/encryption-functions.html#function_validate-password-strength

mysql>  load data infile '/var/lib/mysql-files/10-million-password-list-top-1000000.txt' into table 10million_passwords;
Query OK, 999999 rows affected (1.68 sec)
Records: 999999  Deleted: 0  Skipped: 0  Warnings: 0

mysql> select validate_password_strength(password),count(*) from 10million_passwords group by validate_password_strength(password);
+--------------------------------------+----------+
| validate_password_strength(password) | count(*) |
+--------------------------------------+----------+
|                                    0 |      854 |
|                                   25 |   511016 |
|                                   50 |   486815 |
|                                  100 |     1314 |
+--------------------------------------+----------+
mysql> select password from 10million_passwords where validate_password_strength(password) >=75 limit 10;
+-----------------+
| password        |
+-----------------+
| L58jkdjP!       |
| P@ssw0rd        |
| !QAZ2wsx        |
| 1qaz!QAZ        |
| 1qaz@WSX        |
| ZAQ!2wsx        |
| !QAZxsw2        |
| NICK1234-rem936 |
| xxPa33bq.aDNA   |
| !QAZ1qaz        |
+-----------------+
10 rows in set (0.05 sec)
mysql> load data infile '/var/lib/mysql-files/xato-net-10-million-passwords-1000000.txt' into table xato_passwords;
Query OK, 999998 rows affected (1.54 sec)
Records: 999998  Deleted: 0  Skipped: 0  Warnings: 0


mysql> select validate_password_strength(password),count(*) from xato_passwords group by validate_password_strength(password);
+--------------------------------------+----------+
| validate_password_strength(password) | count(*) |
+--------------------------------------+----------+
|                                    0 |      830 |
|                                   25 |   510787 |
|                                   50 |   487709 |
|                                  100 |      672 |
+--------------------------------------+----------+
4 rows in set (0.68 sec)

mysql> select password from xato_passwords where validate_password_strength(password)>=75 limit 10;
+-----------------+
| password        |
+-----------------+
| L58jkdjP!       |
| P@ssw0rd        |
| 1qaz!QAZ        |
| !QAZ2wsx        |
| 1qaz@WSX        |
| ZAQ!2wsx        |
| !QAZxsw2        |
| NICK1234-rem936 |
| xxPa33bq.aDNA   |
| g00dPa$$w0rD    |
+-----------------+
10 rows in set (0.04 sec)

Information Systems Audit Report 2018
https://audit.wa.gov.au/wp-content/uploads/2018/08/report2018_14-IS-GCC-App-Pass.pdf

Variants of date and season like Summer2017 or October2017

mysql> select validate_password_strength('Summer2017');
+------------------------------------------+
| validate_password_strength('Summer2017') |
+------------------------------------------+
|                                       50 |
+------------------------------------------+
1 row in set (0.00 sec)

mysql> select validate_password_strength('October2017');
+-------------------------------------------+
| validate_password_strength('October2017') |
+-------------------------------------------+
|                                        50 |
+-------------------------------------------+
1 row in set (0.00 sec)

mysql>  select validate_password_strength('Summer2017!');
+-------------------------------------------+
| validate_password_strength('Summer2017!') |
+-------------------------------------------+
|                                       100 |
+-------------------------------------------+
1 row in set (0.00 sec)


Security Tasks part 2

In the second part of this series of DBA security tasks, we will continue to look at fixing mysql db users with an insecure setup.

These are not tips that you read with your morning coffee and then forget about; rather, these are important tasks which you add to your project or task list and work through until they are completed, and signed off if necessary.

Task 2: Remove anonymous users


The Problem: 

Anonymous users allow any user from the specific host string to connect.

SQL to check:

select user,host from mysql.user where user = '';
+------+-----------+
| user | host      |
+------+-----------+
|      | 192.168.% |
+------+-----------+
1 row in set (0.00 sec)

The Fix:

Using the previous post's recommendations, all connections should use a specific user and, if possible, a specific hostname or subnet mask.
Remove any anonymous users.

How to apply the fix:

  1. Prepare a security audit report/review.
  2. Present or email to stakeholders (developers, reporting users, managers)
  3. Prepare a maintenance plan and gain approval to change.
  4. Organize maintenance window with specific time and date and gain approval for change.
  5. Follow the maintenance plan, which will be something like this
    – Announce start of maintenance window in your chat/communication channel.
    – Drop any anonymous users.
    – Get developers or report users to test to make sure their application works
    – Confirm good.
    – Announce the end of the maintenance window in your chat/communication channel.

Note:

Most recent versions of MySQL, MariaDB and Percona Server have recognized the problem with anonymous users, and they are removed if you follow the correct instructions when the database is first created. Sadly this doesn't help if your database has been upgraded over time from much older versions.

Next time we will look at passwords… the good, the bad and the ugly.

Faster Galera SST times using [sst] options.

This is a follow-up post to the article where I outlined how to use the power of Linux pipes and pigz to compress and decompress your data in parallel for faster streaming backups.

Out of the box, the SST scripts used by Galera don't try to optimize for speed. They are just trying to be robust and not fail at the job they are required to do. That job is to get a Galera node back in sync with the cluster. But we can do better…

So I looked into the options you can pass to the SST process by adding an [sst] section to your .cnf file. I tested them in a lab running the same OS and same DB version as the production database we were supporting. This was to make sure the SST wouldn't break.
And the results? We got a similar speed-up in SST duration to the one I had seen when streaming backups to build a backup replica.

The time difference when dealing with large datasets is huge.

Notes:

  • Options are added to a .cnf file on every db server in the Galera cluster.
  • pigz needs to be installed on all db servers in the Galera cluster.
  • WARNING: Test this is a lab environment before rolling this out to production.
  • WARNING: Test this is a lab environment before rolling this out to production.
  • WARNING: Test this is a lab environment before rolling this out to production.
  • getting the idea yet… just test it first already.
  • These options will work with either:
    wsrep_sst_method=mariabackup
    or
    wsrep_sst_method=xtrabackup-v2

SST options to speed up using pigz and parallel threads:

[sst]
compressor='pigz -p 8'
decompressor='pigz -dc -p 8'
inno-backup-opts="--parallel=8"

Explanation:

compressor='pigz -p 8' : compress the streaming backup with pigz using 8 CPUs
decompressor='pigz -dc -p 8' : decompress the streaming backup with pigz using 8 CPUs
inno-backup-opts="--parallel=8" : back up the db using 8 parallel threads

 Documentation:

https://mariadb.com/kb/en/mariabackup-options/#-parallel
https://www.percona.com/doc/percona-xtrabackup/2.3/innobackupex/innobackupex_option_reference.html

Until next time.

Faster streaming backups using mariabackup and pigz

Are you looking for a faster way to stream a backup from one db server to another?

Maybe you are rebuilding a replica from the master after someone wrote to the slave and now the data is out of sync.

I had a look at what was out there in terms of articles about streaming backups.

Most of them got part of the way but clearly haven’t had people asking in chat “are we there yet?” over and over, wondering when their replicas will be back so they can direct the read traffic at them.
The key here is to use the parallel options that come with mariabackup and use pigz to compress and decompress in parallel.

Do you want speed?
Do you want to saturate the 10Gigabit bonded link or InfiniBand (I wish…) ?
Do you have spare CPUs to use?

WARNING:

  • This command assumes the destination directory is empty.
  • The command will use 8 CPUs (4 for mariabackup and 4 for pigz) on the source.
  • The command can use up to 8 CPUs (4 for pigz to decompress, 4 for mbstream) on the destination.
  • Don't run pigz without the -p option, as by default it will use all available CPUs.

The command:

mariabackup --backup --slave-info --tmpdir=/tmp --stream=xbstream --parallel=4 --datadir=/var/lib/mysql 2>backup.log | pigz -p 4 | ssh -q replica -t "pigz -dc -p 4 | mbstream --directory=/var/lib/mysql -x --parallel=4"

Explanation of the command:

  1. This command will take a streaming backup (using xbstream) in parallel from the database in /var/lib/mysql.
  2. It will write its diagnostic output to backup.log (just in case you need to debug…)
  3. Then it will pipe that to pigz (parallel gzip), which will compress in parallel.
  4. This stream of compressed xbstream will be sent encrypted via ssh to the host called replica.
  5. On the replica, the compressed xbstream will be uncompressed by pigz, then piped through mbstream into the /var/lib/mysql directory.

Next steps:

  1. Prepare the backup using the appropriate amount of RAM.
    mariabackup --prepare --use-memory=16G --target-dir=/var/lib/mysql/
  2. Change the ownership of the destination datadir
    chown -R mysql:mysql /var/lib/mysql
  3. Start the db
    systemctl start mariadb
  4. Look at xtrabackup_slave_info to get the slave info
    cat /var/lib/mysql/xtrabackup_slave_info
  5. Change the master with CHANGE MASTER TO using that info.
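The five steps above can be wrapped in a small script. This is just a sketch, assuming the same datadir and 16G memory budget as above; with DRYRUN set (the default here) it only prints the commands so you can review them before running for real.

```shell
#!/bin/sh
# Sketch of the post-stream restore steps on the replica.
# With DRYRUN set, commands are printed instead of executed.
set -e
DRYRUN=${DRYRUN:-1}

run() {
  echo "+ $*"
  [ -n "$DRYRUN" ] || "$@"
}

# 1. Apply the redo log so the backup is consistent.
run mariabackup --prepare --use-memory=16G --target-dir=/var/lib/mysql/
# 2. mbstream ran as root over ssh, so hand the files back to mysql.
run chown -R mysql:mysql /var/lib/mysql
# 3. Start the db.
run systemctl start mariadb
# 4. Show the binlog coordinates for CHANGE MASTER TO.
run cat /var/lib/mysql/xtrabackup_slave_info
```

Unset DRYRUN once you are happy with what it prints.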

Until next time.


Fixing Error: InnoDB: (Duplicate key) writing word node to FTS auxiliary index table.

This error appears in your error log. It means there is a duplicate key in your full text index; sadly, it doesn’t tell you which table or which full text index.

HINT for MySQL/MariaDB/Percona devs: just add the index_id to the error message.

SQL to find all text indexes in your db:

select table_schema,table_name,index_name, index_type from information_schema.statistics
where index_type = 'FULLTEXT' order by table_schema;

+-----------------+-----------------------+-------------+------------+
| table_schema    | table_name            | index_name  | index_type |
+-----------------+-----------------------+-------------+------------+
| db0001_fullname | Activity_fullname_log | message     | FULLTEXT   |
| db0002_fullname | Activity_fullname_log | message     | FULLTEXT   |
+-----------------+-----------------------+-------------+------------+
2 rows in set (0.715 sec)

So what is the next step?

If you follow the documentation it will tell you to rebuild the table or use OPTIMIZE TABLE with innodb_optimize_fulltext_only=ON to rebuild just the text index.

https://dev.mysql.com/doc/refman/8.0/en/fulltext-fine-tuning.html#fulltext-rebuild-innodb-indexes

https://dev.mysql.com/doc/refman/8.0/en/fulltext-fine-tuning.html#fulltext-optimize

My recommendation: Rebuild the table (including the full text indexes)

-- Plain old ALTER TABLE
use db0001_fullname;
ALTER ONLINE TABLE Activity_fullname_log ENGINE=INNODB;

-- Or pt-osc equivalent

pt-online-schema-change --dry-run --alter="ENGINE=INNODB" \
--alter-foreign-keys-method=rebuild_constraints --max-lag=60 \
--max-load Threads_running=50 --critical-load Threads_running=60 \
h=localhost,D=db0001_fullname,t=Activity_fullname_log

pt-online-schema-change --execute --alter="ENGINE=INNODB" \
--alter-foreign-keys-method=rebuild_constraints --max-lag=60 \
--max-load Threads_running=50 --critical-load Threads_running=60 \
h=localhost,D=db0001_fullname,t=Activity_fullname_log

The OPTIMIZE TABLE method:

The trouble with this method is that it only processes a set number of words (innodb_ft_num_word_optimize, default 2000) per execution of OPTIMIZE TABLE.

SQL to set the specific table so you can query INNODB_FT_INDEX_TABLE

set global innodb_ft_aux_table = 'db0001_fullname/Activity_fullname_log';
Query OK, 0 rows affected (0.002 sec)

SQL to find the number of distinct words in a specific text index:

SELECT count(distinct word) FROM INFORMATION_SCHEMA.INNODB_FT_INDEX_TABLE;
+----------------------+
| count(distinct word) |
+----------------------+
|               600000 |
+----------------------+
1 row in set (1 min 53.333 sec)

Depending on how much impact you want per OPTIMIZE TABLE run, set innodb_ft_num_word_optimize appropriately.

The remaining steps to rebuilding your full text index using OPTIMIZE TABLE

set GLOBAL innodb_ft_num_word_optimize=10000;
set GLOBAL innodb_optimize_fulltext_only=ON;
OPTIMIZE TABLE Activity_fullname_log; 
OPTIMIZE TABLE Activity_fullname_log;
OPTIMIZE TABLE Activity_fullname_log;
OPTIMIZE TABLE Activity_fullname_log;
OPTIMIZE TABLE Activity_fullname_log;
... -- repeat round(distinct words/innodb_ft_num_word_optimize)+1
OPTIMIZE TABLE Activity_fullname_log;
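To size that loop, divide the distinct word count by innodb_ft_num_word_optimize and round up (the +1 in the comment above adds a safety run). A quick sketch using the numbers from this example, 600,000 words in batches of 10,000:

```shell
# Ceiling division: how many OPTIMIZE TABLE runs walk the whole index?
words=600000
batch=10000    # innodb_ft_num_word_optimize
runs=$(( (words + batch - 1) / batch ))
echo "OPTIMIZE TABLE runs needed: $runs"
```

With those numbers you need 60 runs, plus one more if you want the safety margin; a shell loop around mysql -e can issue them for you.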

Until next time…

Find a corrupted innodb table or a corrupted index from index id

Your favourite monitoring software throws an alert or sends an email after it finds the following in the mysql error log:

InnoDB: page [page id: space=8858, page number=206777] (95 records, index id 29361)

How do you find which index is corrupted from this line?

Example using the above index id listed in the error log message:

 select * from information_schema.INNODB_SYS_INDEXES where index_id = 29361;

+----------+--------------------------------+----------+------+----------+---------+-------+-----------------+
| INDEX_ID | NAME                           | TABLE_ID | TYPE | N_FIELDS | PAGE_NO | SPACE | MERGE_THRESHOLD |
+----------+--------------------------------+----------+------+----------+---------+-------+-----------------+
|    29361 | blah_longwinded_name_for_index |    11238 |    0 |        3 |       4 |  8858 |              50 |
+----------+--------------------------------+----------+------+----------+---------+-------+-----------------+
1 row in set (0.009 sec)

How do you find which table is corrupted?

Example using the above table_id listed in the output from the above SQL statement.

select * from information_schema.INNODB_SYS_TABLES where table_id = 11238;

+----------+-----------------------------------------------------+------+--------+-------+------------+---------------+------------+
| TABLE_ID | NAME                                                | FLAG | N_COLS | SPACE | ROW_FORMAT | ZIP_PAGE_SIZE | SPACE_TYPE |
+----------+-----------------------------------------------------+------+--------+-------+------------+---------------+------------+
|    11238 | database0001_production/blah_longwinded_name_table  |   41 |      8 |  8858 | Compressed |          8192 | Single     |
+----------+-----------------------------------------------------+------+--------+-------+------------+---------------+------------+

What about just as one statement…

select t.name as table_name,i.name as index_name,t.table_id,i.index_id 
from information_schema.INNODB_SYS_TABLES t
JOIN information_schema.INNODB_SYS_INDEXES i on t.table_id=i.table_id
where index_id = 29361;

+-----------------------------------------------------+--------------------------------+----------+----------+
| table_name                                          | index_name                     | table_id | index_id |
+-----------------------------------------------------+--------------------------------+----------+----------+
| database0001_production/blah_longwinded_name_table  | blah_longwinded_name_for_index | 11238    | 29361    |
+-----------------------------------------------------+--------------------------------+----------+----------+
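If your monitoring hands you the raw error-log line, extracting the index id and building the lookup can be scripted. A small sketch; the sed pattern assumes the log line format shown at the top of this post:

```shell
# Pull the index id out of the error-log line, then emit the lookup SQL.
line='InnoDB: page [page id: space=8858, page number=206777] (95 records, index id 29361)'
index_id=$(printf '%s\n' "$line" | sed -n 's/.*index id \([0-9]*\).*/\1/p')

echo "select t.name as table_name, i.name as index_name
from information_schema.INNODB_SYS_TABLES t
JOIN information_schema.INNODB_SYS_INDEXES i on t.table_id=i.table_id
where i.index_id = $index_id;"
```

Pipe the emitted statement into mysql to get the table and index names in one go.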

How do I fix this corrupted index?

Use a null alter to rebuild the table.

WARNING:

  1. Check the size first.
  2. Use a tool like pt-online-schema-change if the table is large or accessed heavily, or you are running on Galera, Percona Cluster or equivalent.
  3. Did you get permission to run this on production?

eg:

ALTER TABLE <tablename> ENGINE=INNODB;

Database security tasks.

How serious are companies and the people who support their databases about security? Not serious enough it seems.
How many databases are running with users with excessive privileges, OS privileges elevated?
What about securing via encryption your network traffic both inside your network and to external networks?

http://www.informationisbeautiful.net/visualizations/worlds-biggest-data-breaches-hacks/
https://en.wikipedia.org/wiki/Data_breach
https://en.wikipedia.org/wiki/List_of_data_breaches

Over the next series of posts, rather than give you a list of security tips, I am going to detail how to make your database more secure.
These are not tips that you read with your morning coffee and then forget about, but rather tasks. Tasks which you add to your project or task list
and work through until they are completed, and signed off if necessary.

Task 1: Fix database users with wildcard hosts.

The Problem:

Database users with wildcard hosts who can potentially connect from any host from anywhere.
The SQL to check:

select user,host from mysql.user where host = '%';

The Excuses:

  •  Users outside our datacentre (or cloud) network such as corporate headquarters or via home/vpn connections need access to the database.
  • It is too hard to determine which user ip ranges or ip subnets will connect to our database.
  • Monitoring software needs to connect and its ip address could change.
  • If we change from wildcards, “things” could break.
  • We don’t need to worry about wildcard hosts as our firewall will save us/stop any problems.

The example:

Company X has grown from a startup hacked up over a few weekends to a global tech VC darling with a huge, or rather HUUGGEE, valuation.
The company’s application and database originally ran on a PC under the founder’s desk, but over time have migrated to the cloud to become an Agile, Resilient and Scalable Enterprise.
You, the humble, meek, and lowly DBA, have been hired and, after onboarding, are tasked amongst other things with securing the database.

You get your credentials for the host and database sorted out and connect to the db and run the following SQL.

MariaDB [(none)]> select user,host from mysql.user where host = '%';
+-------------+------+
| user        | host |
+-------------+------+
| hackmeplz   | %    |
| replication | %    |
+-------------+------+
2 rows in set (0.00 sec)

The fix:

Change all users with wildcard host to a specific ip address or a specific ip subnet.

If all application traffic comes from your datacentre or cloud network, for example from ip subnet 10.0.0.X, then in our example
'hackmeplz'@'%' becomes 'hackmeplz'@'10.0.0.%'

If all application traffic comes via an Enterprise Load Balancer (ELB) or haproxy or a Virtual IP (VIP) such as 10.0.0.3, then in our example
'hackmeplz'@'%' becomes 'hackmeplz'@'10.0.0.3'
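The mechanics of the swap look something like this. This is only a sketch: the grants are placeholders (copy the real ones from SHOW GRANTS), '&lt;hash&gt;' stands in for the existing password hash, and the IDENTIFIED BY PASSWORD syntax that lets you reuse that hash exists in MySQL 5.5/5.6 and MariaDB but was removed in MySQL 8.0, so adjust for your version.

```shell
# Print the SQL for the user swap; review it, then pipe it into mysql.
# The grants and '<hash>' below are placeholders for your real values.
sql=$(cat <<'SQL'
-- Create the replacement user, reusing the existing password hash
-- (SELECT password FROM mysql.user WHERE user='hackmeplz' AND host='%';)
CREATE USER 'hackmeplz'@'10.0.0.%' IDENTIFIED BY PASSWORD '<hash>';
GRANT SELECT, INSERT, UPDATE, DELETE ON appdb.* TO 'hackmeplz'@'10.0.0.%';
-- After the application is confirmed working from the new host pattern:
DROP USER 'hackmeplz'@'%';
SQL
)
printf '%s\n' "$sql"
```

Printing first rather than executing gives you something concrete to attach to the maintenance plan below.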

How to do the fix and not get fired for being a cowboy:

  1. Prepare a security audit report/review.
  2. Present or email to stakeholders (developers, reporting users, managers)
  3. Prepare a maintenance plan and gain approval to change.
  4. Organize maintenance window with specific time and date and gain approval for change.
  5. Follow the maintenance plan which will be something like this
    – Announce the start of the maintenance window in your chat/communication channel.
    – Create a new user with a host part that contains an ip subnet.
    – Drop the old user with the wildcard host.
    – Get developers or report users to test to make sure their application works.
    – Confirm good.
    – Announce the end of the maintenance window in your chat/communication channel.
  6. Go home and bask in the feeling that you have proactively and pre-emptively prevented a more than probable private pillager
    from causing your company a publicly painful punishment.[1]

[1] Yahoo USD 1 Billion discount
https://en.wikipedia.org/wiki/Data_breach#cite_note-17
https://techcrunch.com/2016/10/06/report-verizon-wants-1-billion-discount-after-yahoo-privacy-concerns/


Lookout: crashbug using innodb_track_changed_pages with O_DIRECT

Summary:

If you are using innodb_flush_method = O_DIRECT (which is highly recommended for a bunch of reasons) and innodb_track_changed_pages the instance will crash if you query any tables related to that feature.

So innodb_track_changed_pages is a Percona system variable which is used to make incremental backups much faster.

So if you are using Percona’s great backup and recovery software suite and running at least MySQL 5.5.27 you might have enabled this variable to make your incremental backups run faster.

You may want to reconsider your choice and turn it off … continue to read why…

 

Background:

I was working with a client a while back (12 months ago), looking to improve their incremental backup durations.

When I turned on the feature and restarted the MySQL instance, as soon as I queried the table the MySQL instance crashed.

After a couple of tests, the change was hastily rolled back and I raised this bug ticket with Percona.

https://bugs.launchpad.net/percona-server/+bug/1538403

Testing, preparing and getting a bug raised took time, but apart from Rick Pizzi also posting that he had a similar crash, there has been no activity on this. It is like the crickets chirping in the Simpsons.

http://tvtropes.org/pmwiki/pmwiki.php/Main/ChirpingCrickets

Zero activity, zero response; the bug’s status has been New and unassigned since Jan 27th 2016.

Yes, this is an older version, Percona Server 5.5, but the bug also affects versions up to 5.7.

How many installations are running between 5.5 and 5.7? I would have thought the bulk of the installed base.

Goal of this post:

Hopefully, this article will prompt someone to take a look at the bug.

If it seems like some of my recent blog posts have been raising issues with Percona, it is because I use, support and recommend Percona MySQL databases and tools every day even though I don’t work at Percona. I love that Percona provide the tools and software they do. Having a crash bug open for more than 12 months is disappointing.

The bug in action:

mysql> select @@version;
+-----------------+
| @@version       |
+-----------------+
| 5.5.54-38.6-log | <== The latest 5.5 version
+-----------------+
1 row in set (0.00 sec)

mysql> show global variables like 'innodb%track%';
+----------------------------+-------+
| Variable_name              | Value |
+----------------------------+-------+
| innodb_track_changed_pages | OFF   |
+----------------------------+-------+
1 row in set (0.00 sec)

-- restart after adding variable to .cnf file

mysql> show global variables like 'innodb%track%';
+----------------------------+-------+
| Variable_name              | Value |
+----------------------------+-------+
| innodb_track_changed_pages | ON    |
+----------------------------+-------+
1 row in set (0.00 sec)

mysql> select count(*) from INFORMATION_SCHEMA.INNODB_CHANGED_PAGES;
+----------+
| count(*) |
+----------+
|        0 |
+----------+
1 row in set (0.00 sec)

-- it works because there is nothing in the bitmap file

root@db1:~# ls -l /var/lib/mysql/
total 28704
-rw-r--r-- 1 root  root         0 Feb 18 03:01 debian-5.5.flag
-rw-rw---- 1 mysql mysql 18874368 Feb 18 03:06 ibdata1
-rw-rw---- 1 mysql mysql  5242880 Feb 18 03:06 ib_logfile0
-rw-rw---- 1 mysql mysql  5242880 Feb 18 03:01 ib_logfile1
-rw-rw---- 1 mysql mysql        0 Feb 18 03:06 ib_modified_log_1_0.xdb <=== BITMAP FILE
drwx------ 2 mysql root      4096 Feb 18 03:01 mysql
-rw-rw---- 1 mysql mysql      126 Feb 18 03:05 mysql-bin.000001
-rw-rw---- 1 mysql mysql      126 Feb 18 03:06 mysql-bin.000002
-rw-rw---- 1 mysql mysql      107 Feb 18 03:06 mysql-bin.000003
-rw-rw---- 1 mysql mysql       96 Feb 18 03:06 mysql-bin.index
-rw------- 1 root  root        11 Feb 18 03:01 mysql_upgrade_info
drwx------ 2 mysql mysql     4096 Feb 18 03:01 performance_schema
drwx------ 2 mysql root      4096 Feb 18 03:01 test


-- nothing wrong at this point... now change some rows


mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test               |
+--------------------+
4 rows in set (0.00 sec)

mysql> use test;
Database changed
mysql> show tables;
Empty set (0.00 sec)

-- Don't use column names like this at home folks...

mysql> create table test (blah varchar(10));
Query OK, 0 rows affected (0.02 sec)

-- Fred the world renowned tester

mysql> insert into test values ('fred');
Query OK, 1 row affected (0.00 sec)

mysql> exit
Bye
root@db1:~# ls -l /var/lib/mysql/
total 28708
-rw-r--r-- 1 root  root         0 Feb 18 03:01 debian-5.5.flag
-rw-rw---- 1 mysql mysql 18874368 Feb 18 03:08 ibdata1
-rw-rw---- 1 mysql mysql  5242880 Feb 18 03:08 ib_logfile0
-rw-rw---- 1 mysql mysql  5242880 Feb 18 03:01 ib_logfile1
-rw-rw---- 1 mysql mysql     4096 Feb 18 03:08 ib_modified_log_1_0.xdb <=== BITMAP FILE
drwx------ 2 mysql root      4096 Feb 18 03:01 mysql
-rw-rw---- 1 mysql mysql      126 Feb 18 03:05 mysql-bin.000001
-rw-rw---- 1 mysql mysql      126 Feb 18 03:06 mysql-bin.000002
-rw-rw---- 1 mysql mysql      396 Feb 18 03:08 mysql-bin.000003
-rw-rw---- 1 mysql mysql       96 Feb 18 03:06 mysql-bin.index
-rw------- 1 root  root        11 Feb 18 03:01 mysql_upgrade_info
drwx------ 2 mysql mysql     4096 Feb 18 03:01 performance_schema
drwx------ 2 mysql root      4096 Feb 18 03:08 test

-- still works as innodb_flush_method isn't set

mysql> show global variables like 'innodb_flush_method';
+---------------------+-------+
| Variable_name       | Value |
+---------------------+-------+
| innodb_flush_method |       |
+---------------------+-------+
1 row in set (0.00 sec)

mysql> select count(*) from INFORMATION_SCHEMA.INNODB_CHANGED_PAGES;
+----------+
| count(*) |
+----------+
|       17 |
+----------+
1 row in set (0.01 sec)

-- OK now turn on O_DIRECT

root@db1:~# vi /etc/mysql/my.cnf 
root@db1:~# service mysql restart
 * Stopping MySQL (Percona Server) mysqld                                                                        [ OK ] 
 * Starting MySQL (Percona Server) database server mysqld                                                        [ OK ] 
 * Checking for corrupt, not cleanly closed and upgrade needing tables.

mysql> show global variables like 'innodb_flush_method';
+---------------------+----------+
| Variable_name       | Value    |
+---------------------+----------+
| innodb_flush_method | O_DIRECT |
+---------------------+----------+
1 row in set (0.00 sec)

-- Bingo! Houston we have a problem.

mysql> select count(*) from INFORMATION_SCHEMA.INNODB_CHANGED_PAGES;
ERROR 2013 (HY000): Lost connection to MySQL server during query