MySQL CPU maxing out due to leap second, and AWS US-E1 outage

Wow, US-EAST-1 has the worst luck doesn’t it?

I had CPU consumption alerts fire off for ALL of my AWS instances running Percona Server (MySQL).

I couldn’t for the life of me figure it out. I re-mounted all but the root EBS volumes, restarted the services, and confirmed there was no legitimate load on any of them, yet MySQL stayed pinned at 100% on a single core on every machine. I tried everything: lsof, netstat, SHOW PROCESSLIST, re-mounting the data partitions.
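
For the record, the triage looked roughly like this (commands are illustrative and from memory; exact paths and options vary by distro):

    # Which thread inside mysqld is actually burning the core?
    top -H -p "$(pidof mysqld)"

    # Is anyone really talking to the server?
    netstat -tnp | grep 3306
    lsof -p "$(pidof mysqld)" | less

    # Any real queries running?
    mysql -e 'SHOW FULL PROCESSLIST;'

All of it came back clean: no queries, no traffic, just one spinning thread per box.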

WEIRD. It turns out, however, that AWS was not the cause of this one, even in light of the recent AZ issue in US-EAST-1.

My issues started sharply at 00:00 UTC. Hmm, odd timing, since we’re having a leap second today!


A quick touch to NTP and we’re as good as gold.
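
If you’re hitting the same thing, the widely circulated workaround was to briefly stop ntpd and reset the system clock, which clears the stuck leap-second state in the kernel. Roughly something like this (init script names are distro specific, so adjust to taste):

    # Stop ntpd so it doesn't immediately re-arm the leap second flag
    /etc/init.d/ntp stop

    # Re-setting the clock to "now" clears the stuck leap second state
    date -s "$(date)"

    # Bring NTP back up
    /etc/init.d/ntp start

As soon as the clock was reset, mysqld dropped back to idle on its own; no restart of MySQL itself was needed.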

These servers have plenty of cores, so everything was still running fine (and it was the weekend). Seems like bad things happen at the least opportune times, like when you’re out on a date with the wife and the baby is with grandma…

At any rate, it’s amusing to watch the haters who clearly don’t understand the concept of AWS Availability Zones (AZs). Funny to watch them all scoff and huff about how they run a better data center out of their camper than AWS does.

If anything, I want to see a good root cause analysis and maybe some restitution for the trouble.
