The system slowdown was due, we think, to a database query not using an index to find the information it needed… After digging into the MySQL manual we discovered we could have the database tell us which queries were running slowly. From there we can ask the database (manually) which indexes it thinks it could use and whether it is actually using them, and if it's NOT using an index for some reason, we can work out why and hopefully create one it can use.
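For anyone curious, the two MySQL features involved are the slow query log and EXPLAIN. A minimal sketch of how that investigation looks (the column name `entry_id` and the log path are just illustrative, not our actual schema or config):

```sql
-- In my.cnf, turn on the slow query log and flag anything over
-- two seconds (exact option names vary between MySQL versions):
--   log-slow-queries = /var/log/mysql/slow.log
--   long_query_time  = 2

-- Then, for a query the log flags, ask MySQL which indexes it
-- considered and which one (if any) it actually chose:
EXPLAIN SELECT * FROM entries WHERE entry_id = 12345;

-- In the output, "possible_keys" lists candidate indexes and "key"
-- shows the one in use; key = NULL together with type = ALL means
-- a full table scan, which is what was hurting us.
```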
This is what we did… The missing index was on the entries table (which now has over 43,000 rows – or in layman's terms, there are now over 43,000 diary entries on the system), so whenever it was asked to fetch a specific diary entry it was searching the whole TABLE (96 megabytes)… Clearly that situation would only get worse and worse as the table grew. It now uses the index, which means it scans somewhere around 1 megabyte of data instead of 96. This actually speeds things up by far more than 96 times, because as you start to lift larger amounts of data into memory the machine begins to run out of RAM, so it pushes other things out to temporary disk storage. Disk storage is really slow, so you lose there too.
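The fix itself is a one-liner. A sketch, assuming the lookup was by a diary-entry ID column (the real column and index names on our system may differ):

```sql
-- Create an index so lookups by entry ID no longer scan the table.
CREATE INDEX idx_entry_id ON entries (entry_id);

-- Re-running EXPLAIN on the same SELECT should now show the new
-- index in the "key" column, and a type of ref or const instead
-- of ALL (the full table scan).
```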
Add to that the fact that the server is actually getting quite busy in the evenings and you end up with a fairly quick cascading collapse: requests come in faster than the server can handle them, making the requests currently processing run even slower than they normally would, which means the ones behind them fight for the remaining time and run even slower, etc etc etc…
My apologies for not spotting this earlier. I had a gut feeling that something was not right with the system but couldn’t put my finger on what it was…
Either way, we’ve tweaked a couple of other queries that were running slowly as well, so hopefully the system is back to being fairly snappy, and should stay that way… At least until the next thing comes along as we grow 🙂
Have a good weekend!