Tuning MySQL with tuning-primer.sh
by Leon Miller-Out
One of our clients has a MySQL server that sees moderate load. The MySQL server process often spikes one of the server's two CPUs. Without a lot of prior experience in tuning MySQL, I set out to figure out what was causing this load and whether I could reduce it at all.
My colleague Dave had already grabbed most of the low-hanging fruit in tuning the Rails app in general, and the home page in particular, so I wasn't able to find many inefficiencies there. We had also recently consolidated the three cron'ed rake tasks that were running every minute, so that took a big load off the server.
I experimented with adding some "missing" database indexes (a common performance culprit), but found that adding them slowed the test suite down by almost 20 seconds! So much for that.
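For the curious, the experiment looked something like the following; the table and column names here are stand-ins, not the client's actual schema:

```sh
# Hypothetical index experiment -- "posts" and "user_id" are placeholders.
# See how MySQL executes the query before adding anything:
mysql -u root -p app_production -e "EXPLAIN SELECT * FROM posts WHERE user_id = 42\G"

# Add the candidate index, then re-run EXPLAIN and time the test suite again:
mysql -u root -p app_production -e "ALTER TABLE posts ADD INDEX index_posts_on_user_id (user_id);"
```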
I then tried using MySQL's log_slow_queries option to see if specific queries were bogging down the server. No luck there: the occasional two-second query shows up every few minutes, but not often enough to explain the load on MySQL.
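If you want to try the same thing, enabling the slow query log looks roughly like this (5.0-era option names; the file paths and the two-second threshold are assumptions for illustration):

```sh
# Sketch only -- appends a [mysqld] group to my.cnf (MySQL merges repeated
# groups); adjust the paths for your distribution.
cat >> /etc/mysql/my.cnf <<'EOF'

[mysqld]
log_slow_queries = /var/log/mysql/mysql-slow.log
long_query_time  = 2
EOF

# Restart MySQL to pick up the change, then watch what lands in the log:
/etc/init.d/mysql restart
tail -f /var/log/mysql/mysql-slow.log
```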
My next step was to analyze the queries on the home page. One of them in particular was taking a long time (~1s) to run on my MacBook Pro. The cause turned out to be that it selects a TEXT column, which forces any temporary table MySQL needs for the query onto disk. Since the home page only displays the first ten words of that text, I could have remedied this by adding a short summary column of type VARCHAR and keeping it updated with an ActiveRecord before_save hook. However, further testing on the production server showed that the same query ran very quickly (0.0s) on subsequent runs, thanks to MySQL's query caching, so I decided to leave it alone.
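If you want to check the on-disk temporary table theory against one of your own queries, watching the Created_tmp_disk_tables counter around the query is a rough but effective test; the table and column names below are placeholders for the real home-page query:

```sh
# Run the counters and the suspect query in one mysql session, since the
# SESSION status counters are per-connection. SQL_NO_CACHE keeps the query
# cache from hiding the work on repeat runs.
mysql -u root -p app_production <<'EOF'
SHOW SESSION STATUS LIKE 'Created_tmp%tables';
SELECT SQL_NO_CACHE id, body FROM articles ORDER BY created_at DESC LIMIT 10;
SHOW SESSION STATUS LIKE 'Created_tmp%tables';
EOF
# If Created_tmp_disk_tables went up, the TEXT column is pushing the
# temporary table to disk.
```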
I ran MySQL for several minutes with the general query log (the log option) turned on, which produced a log file containing every query run during that time. I used some of my favorite *nix tools (grep, sed, uniq, and sort) to count the occurrences of each query, and found a pretty even distribution. In other words, I couldn't find any specific query that was overwhelming the server.
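The pipeline was along these lines; the log path is an assumption and the sed normalization is approximate, but the idea is to strip literal values so identical query shapes get counted together:

```sh
# Count query "shapes" in the general query log: pull out the Query lines,
# replace numbers and quoted strings with placeholders, then tally.
grep Query /var/log/mysql/mysql.log \
  | sed -e 's/^.*Query[[:space:]]*//' \
        -e 's/[0-9][0-9]*/N/g' \
        -e "s/'[^']*'/'?'/g" \
  | sort | uniq -c | sort -rn | head -20
```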
Finally, I discovered the very helpful MySQL tuning-primer.sh. This awesome script checks your MySQL configuration and statistics, and makes suggestions about parameters you might want to tweak. In this case, it suggested raising query_cache_size, since a lot of queries were being bumped from the cache. I've now raised it from the default 16M to 128M, and I'm still seeing some queries dropped from the cache, so it was definitely woefully inadequate before! We also raised table_cache from the default of 64 to 256, which appears to be adequate.
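For reference, the whole exchange boils down to running the script and applying its suggestions; the SET GLOBAL commands below are one way to apply them without a restart (the same values should also go into my.cnf under [mysqld] so they survive one):

```sh
# Run the tuning script against the local server and read its suggestions:
sh tuning-primer.sh

# Apply the values we settled on at runtime; mirror them in my.cnf
# (query_cache_size = 128M, table_cache = 256) to make them permanent.
mysql -u root -p -e "SET GLOBAL query_cache_size = 128 * 1024 * 1024;"
mysql -u root -p -e "SET GLOBAL table_cache = 256;"
```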
So far, MySQL is using about three times as much memory as before (a good thing on the 4GB server), and still using quite a bit of CPU time. Due to the diminishing returns of increasing query_cache_size, I don't think I want to push it any higher. The server load is now below 1.0 (for the first time that I've seen), and the site feels pretty zippy, so I feel good about sticking a fork in this task and calling it done!
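For anyone who wants to keep an eye on the cache going forward, the Qcache status counters are the ones to watch; in particular, Qcache_lowmem_prunes counts queries evicted because the cache ran out of room:

```sh
# A rising Qcache_lowmem_prunes means queries are still being dropped for
# lack of space; comparing Qcache_hits to Com_select gives a rough hit rate.
mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'Qcache%'; SHOW GLOBAL STATUS LIKE 'Com_select';"
```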