Friday, January 25, 2013

[ERROR] Slave SQL: Error 'Lock wait timeout exceeded; try restarting transaction'

When a transaction on the slave hits a table/row lock or a deadlock, the slave SQL thread lets the transaction wait for the time configured in innodb_lock_wait_timeout and retries it up to the number of times configured in slave_transaction_retries. After all retries and lock waits are exhausted, the slave server throws the following error in its logs.

130125 13:24:30 [ERROR] Slave SQL: Error 'Lock wait timeout exceeded; try restarting transaction' on query. Default database: 'live'. Query: 'INSERT into users (user_id, login, password, first_name, last_name, email, is, created, changed, picture, last_login, token) values (1, 'asad9@yahoo.com', '35ce1d4eb0', 'Prince A', 'S', 'asad9@yahoo.com', 1, 13, 13, 'photo_13.jpg', 13, NULL)', Error_code: 1205

Get the details of recent deadlocks, the locking transactions, and the tables involved in the locking using:

mysql> show engine innodb status\G

To avoid further locking you need to make changes on the application side so that bulk updates and selects do not run simultaneously. Also consider changing the storage engine from MyISAM, which uses table-level locking, to InnoDB, which uses row-level locking.
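For example, a minimal sketch of finding the remaining MyISAM tables and converting one of them (the database name 'live' and the table 'users' are taken from the error log above; adjust them to your schema):

mysql> SELECT table_name FROM information_schema.tables
    ->  WHERE table_schema = 'live' AND engine = 'MyISAM';
mysql> ALTER TABLE users ENGINE=InnoDB;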

The simple workaround for this problem is to increase the values of innodb_lock_wait_timeout and slave_transaction_retries.
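For example, a sketch of raising both values at runtime (the numbers are only illustrative; innodb_lock_wait_timeout is dynamic on MySQL 5.5+, older versions need a change in my.cnf and a restart, and either way add the values to my.cnf so they survive a restart):

mysql> SET GLOBAL innodb_lock_wait_timeout = 120;
mysql> SET GLOBAL slave_transaction_retries = 20;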

Another possible reason for this error is a corrupt table; in that case repair the table.

http://dev.mysql.com/doc/innodb/1.1/en/innodb-other-changes-innodb_lock_wait_timeout.html
http://dev.mysql.com/doc/refman/5.0/en/replication-options-slave.html#sysvar_slave_transaction_retries

Saturday, January 19, 2013

mysqldump: Got error: 1146: Table 'tablename' doesn’t exist when doing LOCK TABLES

You may receive this error message while taking a MySQL database backup with mysqldump.

There are a few possible reasons for this error:


  • Permissions and ownership on the table files in /var/lib/mysql are not correct; ideally the files should be owned by the mysql user with permission 660.
  • The table does not exist; check with SHOW TABLES.
  • The table was created in uppercase; set lower_case_table_names as per your application's requirement.
  • The table is corrupt; repair the table.

If all of the above points are fine, use the --skip-lock-tables option with mysqldump.
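For example, a hedged sketch of such a dump command (the database name, credentials, and output file are placeholders; --single-transaction keeps InnoDB tables consistent without locking):

mysqldump --skip-lock-tables --single-transaction -u root -p mydb > mydb.sql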

I am sure this will resolve the error.

 

Friday, January 18, 2013

mysqldump: Got error: 1066: Not unique table/alias: foo when using LOCK TABLES

mysqldump tries to lock the tables before starting a database dump; it is not able to lock them and throws this error.
After analyzing the table names carefully I found that a few duplicate tables exist, with names differing only in uppercase/lowercase, while lower_case_table_names=1:

mysql> show tables;
+-----------------------------------+
| Tables_in_power                   |
+-----------------------------------+
| Message_seq                       |
| User_seq                          |
| message_seq                       |
| user_seq                          |
+-----------------------------------+


Resolved by removing the duplicate tables message_seq and user_seq.

This issue of duplicate tables occurs when the application issues a CREATE TABLE command with the table name in a different case (upper or lower) than an existing table of the same name while lower_case_table_names=1.

For a permanent resolution, change lower_case_table_names=0 if the application doesn't have any dependency on it.
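A minimal sketch of checking the current value and making the change permanent (change this setting only with a consistent data directory, and restart mysqld afterwards):

mysql> show variables like 'lower_case_table_names';

# /etc/my.cnf
[mysqld]
lower_case_table_names=0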

 

Tuesday, January 15, 2013

native code=1030 ** Got error 139 from storage engine

Recently, in one of my application logs, I saw inserts failing with the "native code=1030 ** Got error 139 from storage engine" error. The database had been in production for more than 2 years and had never seen this error. It looks like a simple problem, but due to its nature it may only show up after you already have a database running in production, which was my case. This is a storage-engine-specific error, from InnoDB.
The problem is related to InnoDB row size limitations and how the tablespace is stored on disk.
In short, InnoDB gives this error when it can't store all the variable-length columns of a given row on a single database page. So,

What is a database page?
What is the InnoDB row size limit?
How does a database page relate to row length?

Database pages are the basic internal structure used to organize data in the database files. The data files you define in the configuration file are logically concatenated to form the tablespace, and the tablespace consists of database pages with a default size of 16 KB. (In MySQL 5.6 it is possible to change the page size to 4 KB or 8 KB, in addition to the original 16 KB, by putting innodb_page_size=n in the configuration file or by passing --innodb-page-size=n when starting mysqld. Previously, this could only be done by recompiling the engine with a different value for UNIV_PAGE_SIZE_SHIFT and UNIV_PAGE_SIZE.)

MySQL stores at least two rows on a single 16 KB page, so the maximum row length, excluding variable-length columns (VARBINARY, VARCHAR, BLOB and TEXT), is slightly less than half of a database page, that is, about 8000 bytes. LONGBLOB and LONGTEXT columns must be less than 4 GB, and the total row length, including BLOB and TEXT columns, must be less than 4 GB.
If a row is shorter than half a page (~8000 bytes), all of it is stored locally within a single database page. If it exceeds half a page, variable-length columns are chosen for external off-page storage until the row fits within half a page. For a column chosen for off-page storage, InnoDB stores the first 768 bytes locally in the row and the rest externally in overflow pages.

Now the real problem: if you have more than 10 variable-length columns of type TEXT or BLOB, and each of them exceeds the local prefix size, InnoDB needs 768 x 11 = 8448 bytes just to store the local prefixes of those variable-length columns, not counting the other fixed-length columns. This exceeds the ~8000-byte limit, and therefore you get the error.
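As an illustration, a hypothetical table like the one below can trigger the error under the default Antelope/Compact format: the CREATE TABLE succeeds, but an INSERT in which every TEXT column holds more than 768 bytes fails with "Got error 139 from storage engine", because 11 x 768 bytes of local prefixes no longer fit in half a page.

-- hypothetical demo table: 11 variable-length columns
CREATE TABLE t_error139_demo (
  id INT PRIMARY KEY,
  c1 TEXT, c2 TEXT, c3 TEXT, c4 TEXT, c5 TEXT, c6 TEXT,
  c7 TEXT, c8 TEXT, c9 TEXT, c10 TEXT, c11 TEXT
) ENGINE=InnoDB;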

A few suggestions to solve this issue:

1. Upgrade to the Barracuda file format: with Barracuda and ROW_FORMAT=DYNAMIC or COMPRESSED, InnoDB stores only a 20-byte pointer for an off-page variable-length column on the database page rather than a 768-byte prefix, so more data per row can be stored on a page without hitting the size error (to persist these settings across a restart, see the my.cnf sketch after this list).

SET GLOBAL innodb_file_format=Barracuda; 
SET GLOBAL innodb_file_per_table=ON; 
ALTER TABLE <your_table> ROW_FORMAT=COMPRESSED;

mysql> show table status like 'company_info'\G
*************************** 1. row ***************************
           Name: company_info
         Engine: InnoDB
        Version: 10
     Row_format: Compact
           Rows: 13
 Avg_row_length: 32768
    Data_length: 425984
Max_data_length: 0
   Index_length: 32768
      Data_free: 0
 Auto_increment: 39
    Create_time: 2013-01-11 17:49:55
    Update_time: NULL
     Check_time: NULL
      Collation: latin1_swedish_ci
       Checksum: NULL
 Create_options:
        Comment: InnoDB free: 0 kB
1 row in set (0.00 sec)


mysql> alter table company_info row_format=compressed;
Query OK, 22 rows affected (0.83 sec)
Records: 22  Duplicates: 0  Warnings: 0

mysql> show table status like 'company_info'\G
*************************** 1. row ***************************
           Name: company_info
         Engine: InnoDB
        Version: 10
     Row_format: Compact
           Rows: 13
 Avg_row_length: 31507
    Data_length: 409600
Max_data_length: 0
   Index_length: 32768
      Data_free: 0
 Auto_increment: 39
    Create_time: 2013-01-11 18:36:45
    Update_time: NULL
     Check_time: NULL
      Collation: latin1_swedish_ci
       Checksum: NULL
 Create_options: row_format=COMPRESSED
        Comment: InnoDB free: 0 kB
1 row in set (0.00 sec)


2. Limit the size of variable-length columns; that might need changes on the application side.
3. Limit the table to at most 10 variable-length columns.
4. Upgrade to the latest MySQL version, 5.6, which adds the flexibility of changing the database page size from the default 16 KB via the innodb_page_size variable.
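As noted in suggestion 1, the SET GLOBAL commands above do not survive a server restart; a minimal my.cnf sketch for persisting them (MySQL 5.5 syntax, values illustrative):

# /etc/my.cnf
[mysqld]
innodb_file_format    = Barracuda
innodb_file_per_table = 1
# MySQL 5.6 only, and only when initializing a brand-new data directory:
# innodb_page_size = 8k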

Friday, January 11, 2013

MySQL Performance Analysis

I have been facing MySQL database server slowness, slow application response, and database CPU load shooting up all of a sudden, and sometimes a reboot was needed to recover from the high load.

How can we analyze database performance, and how can we fine-tune it?
Here are a few important analyses we need to do to improve database performance:


  • Slow query analysis (http://mysqlyogi.blogspot.in/2013/01/slow-query-analysis.html)
  • Worker thread analysis (http://mysqlyogi.blogspot.in/2013/01/worker-thread-analysis.html)
  • Connections usage analysis
  • Innodb status/data size analysis
  • Memory usage analysis
  • Key buffer analysis
  • Query cache analysis
  • Sort/join operation analysis
  • Open file analysis
  • Table cache analysis
  • Temp table analysis
  • Table scan analysis
  • Table locking analysis
  • Queries (insert/select/delete/update) analysis 
  • Storage engine analysis 


Thursday, January 10, 2013

Slow query analysis

Slow query analysis is very important for digging into low MySQL performance, high CPU utilization, slow query response, and poor application performance. The process of analyzing slow queries involves enabling the slow query log on the MySQL server and then processing the logs with scripts or query analysis tools.

How to Enable Slow Query log:

Add the following lines in /etc/my.cnf (the MySQL configuration file):
slow_query_log=ON
slow_query_log_file=slowquery_logfile_path
then restart mysql.

Versions 5.1 and later allow enabling the slow query log without restarting MySQL:

mysql> set global slow_query_log=ON;
Query OK, 0 rows affected (0.00 sec)


If no file_name value is given for --log-slow-queries, the default name is host_name-slow.log. The server creates the file in the data directory unless 
an absolute path name is given to specify a different directory. 

By default MySQL logs slow queries taking more than 10 seconds; we can reduce this threshold by setting the variable long_query_time in /etc/my.cnf or at runtime on the MySQL console:

mysql> set global long_query_time=5.000000;
Query OK, 0 rows affected (0.00 sec)

Now all SQL queries/statements taking more than 5 seconds will be logged to the log file in plain text with additional information.
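A minimal sketch of processing the resulting log with mysqldumpslow, the summarizing script shipped with the server (the log path is an assumption; use your slow_query_log_file value):

# top 10 statements sorted by total query time
mysqldumpslow -s t -t 10 /var/lib/mysql/host_name-slow.log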

Sunday, January 6, 2013

Mysql Worker thread analysis


Every connection to the MySQL database needs a thread to process the request and send back results, and each connection uses some CPU resources to execute its thread. If your application makes lots of separate connections to the database over short periods of time, or creates and closes a lot of connections every second, spawning new threads at a high rate causes a lot of CPU overhead for allocating and deallocating the stack associated with each connection or thread. To solve this problem MySQL implements a thread cache, which allows it to save threads from connections that are being closed and reuse them for new connections. The parameter thread_cache_size defines how many unused threads can be kept in memory so that the application can reuse them without creating a new thread.

To check the currently configured value of thread_cache_size:

 mysql> select @@thread_cache_size;
+---------------------+
| @@thread_cache_size |
+---------------------+
|                 512 | 
+---------------------+
1 row in set (0.00 sec)
 Or
mysql> show variables like '%thread%';
+----------------------------+--------+
| Variable_name              | Value  |
+----------------------------+--------+
| max_delayed_threads        | 20     | 
| max_insert_delayed_threads | 20     | 
| myisam_repair_threads      | 1      | 
| thread_cache_size          | 512    | 
+----------------------------+--------+

To determine what the thread_cache_size variable should be set to, we need to check the thread status:

mysql> show status like '%thread%';
+---------------------------+-------+
| Variable_name             | Value |
+---------------------------+-------+
| Slow_launch_threads       | 0     |
| Threads_cached            | 88    |
| Threads_connected         | 27    |
| Threads_created           | 115   |
| Threads_running           | 5     |
+---------------------------+-------+
8 rows in set (0.00 sec)

mysql> show status like '%connections%';
+----------------------+--------+
| Variable_name        | Value  |
+----------------------+--------+
| Connections          | 291200 |
| Max_used_connections | 115    |
+----------------------+--------+
2 rows in set (0.00 sec)

thread_cache_size should be at least equal to the configured max_connections, 150 in the above case.

Or we need to monitor the behaviour of thread creation on MySQL for some time and then derive the following ratios/values:


thread cache hit rate = 100 - ((Threads_created / Connections) * 100)

current threads created per second = Threads_created (now) - Threads_created (one second earlier)

historical threads created per second = Threads_created / Uptime

Given the above value of 115 for Threads_created and the value of 291200 that Connections showed, my thread cache hit ratio is above 99%, which is good.
The ideal situation is to keep Threads_created as close as possible to thread_cache_size, so that no new connection has to wait for a new thread to be allocated, staying as close to a 99% hit ratio as you can.
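thread_cache_size is a dynamic variable, so it can be raised without a restart; a small sketch (the value 150 follows the max_connections figure mentioned above, and should also be added to my.cnf to persist):

mysql> SET GLOBAL thread_cache_size = 150;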


What is the thread_concurrency variable in MySQL? After googling and reading the MySQL manual, I found it does nothing on GNU/Linux, so don't waste time on this variable. There are two problems with it: first, it is deprecated in the latest MySQL version, 5.6, and second, it only works on old Solaris versions.

From the MySQL manual:

This variable is specific to Solaris systems, for which mysqld invokes the thr_setconcurrency() with the variable value. This function enables applications to give the threads system a hint about the desired number of threads that should be run at the same time. This variable was added in MySQL 3.23.7.
What is the initial memory taken by a thread when a client connects to MySQL?
MySQL allocates memory to each thread efficiently whenever a new connection is made to the server. Initially a connection uses the memory specified by thread_stack plus a connection buffer and a result buffer, both governed by net_buffer_length. The connection buffer and result buffer begin with a size equal to net_buffer_length and dynamically expand up to max_allowed_packet when needed, and that memory is released after the results have been sent to the client. A thread keeps its initial memory until the client disconnects, at which point the memory is released to the system; if the thread is moved to the thread cache, the memory remains allocated to the thread.

A thread consumes memory in addition to its initial buffers whenever a query performs specific operations such as sorting, joining, caching, temporary table creation, scanning, updating, etc. For these operations a thread allocates memory as defined by other buffers: read_buffer_size, join_buffer_size, sort_buffer_size, etc.

We can calculate the maximum memory that can be allocated to a thread if we assume the thread is doing all of these operations:

total per-thread buffer usage = read_buffer_size + read_rnd_buffer_size + sort_buffer_size + thread_stack + join_buffer_size + binlog_cache_size + net_buffer_length
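A quick way to evaluate this formula against the current settings is to sum the variables directly on the MySQL console (a sketch; the result is in bytes and represents a worst-case figure per thread):

mysql> SELECT @@global.read_buffer_size + @@global.read_rnd_buffer_size
    ->      + @@global.sort_buffer_size + @@global.thread_stack
    ->      + @@global.join_buffer_size + @@global.binlog_cache_size
    ->      + @@global.net_buffer_length AS max_per_thread_bytes;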

Let's calculate the minimum memory assigned to threads at any point in time:

Get the current counts of threads running/connected, created, and cached:


mysql> show status like '%thread%';
+------------------------------------------+-------+
| Variable_name                            | Value |
+------------------------------------------+-------+
| Threads_cached                           | 136   |
| Threads_connected                        | 4     |
| Threads_created                          | 140   |
| Threads_running                          | 2     |
+------------------------------------------+-------+


Get the values of thread_stack and net_buffer_length:

mysql> show variables like '%thread_stack%';
+---------------+--------+
| Variable_name | Value  |
+---------------+--------+
| thread_stack  | 262144 |
+---------------+--------+ 


mysql> show variables like '%net_buffer_length%';
+-------------------+-------+
| Variable_name     | Value |
+-------------------+-------+
| net_buffer_length | 16384 |
+-------------------+-------+


So the minimum memory allocated to threads is: (262144 + 16384) * 140 = 38,993,920 bytes ≈ 37 MB.
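The same figure can be computed on the server itself (a sketch for MySQL 5.1-5.6, where the thread counters are exposed in information_schema.GLOBAL_STATUS; the result is in bytes):

mysql> SELECT (@@global.thread_stack + @@global.net_buffer_length) * VARIABLE_VALUE
    ->        AS min_thread_memory_bytes
    ->   FROM information_schema.GLOBAL_STATUS
    ->  WHERE VARIABLE_NAME = 'THREADS_CREATED';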