I got a feeling these guys are going to be a long term problem.
So the shit hit the fan last night.
At about 10pm I started having serious issues with the connection to my BNC. Then my SSH started seriously lagging. The last time I had this issue it was my “server”. So I ran the uptime command to see what my load averages were.
They were all over 30.
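For reference, the same 1/5/15-minute numbers that uptime reports can be read straight from the kernel’s /proc interface (Linux-only…but that’s what an OpenVZ container is):

```shell
# Read the 1, 5, and 15 minute load averages from the kernel.
# These are the same figures uptime and top display.
read one five fifteen rest < /proc/loadavg
echo "1m: $one  5m: $five  15m: $fifteen"
```

As a very rough rule of thumb, a load average above the number of CPU cores means processes are waiting on resources…so “all over 30” on a small VPS is firmly in panic territory.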
I started to panic a little…wondering if this was a repeat of the TragicServers tragedy. I managed to get top running…but saw no rogue processes eating CPU. In fact most of my processes were sleeping. If something had gone awry…I should have seen massive CPU usage.
I tried…as a last resort…to issue shutdown and reboot commands from my provider’s control panel…but they all resulted in a time-out.
At this point all I knew was that the OpenVZ node I’m on was having some real serious issues. My load averages were so high because something was just eating all the resources.
I gave up on it and went to bed. I was pretty sure it wasn’t my container that had been compromised. It’s been running stable since I set it up and I did some basic lockdown procedures…and I don’t think an attack on WordPress would show me a bunch of idle processes.
So…I suspect I’m on a very oversubscribed node and can probably expect this kind of instability and downtime on a semi-regular basis.
OpenVZ was probably a mistake. Getting it from ChicagoVPS was probably a larger mistake. But we’ll see…I guess.
Things were back to normal this morning. My server had been rebooted. Still not sure if that’s a result of the node needing rebooting or what. I need to look into some of my own monitoring solutions…so I can see if and when my server is causing issues.
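A dead-simple starting point for that kind of monitoring (just a sketch, assuming a Linux box with cron…the log path and threshold here are hypothetical, tune them for your container):

```shell
# Sample the load average and tag anything above a threshold.
# THRESHOLD is a made-up cutoff -- pick one that suits your VPS.
THRESHOLD=10

# Grab the 1/5/15-minute averages from the kernel.
read one five fifteen rest < /proc/loadavg
line="$(date '+%F %T') load $one $five $fifteen"

# Flag the sample if the 1-minute average crosses the threshold.
if awk -v l="$one" -v t="$THRESHOLD" 'BEGIN{exit !(l+0 > t)}'; then
    line="$line HIGH"
fi
echo "$line"

# From cron, append each sample to a log, e.g.:
#   * * * * * /path/to/loadcheck.sh >> /var/log/loadavg.log
```

With a minute-by-minute log like that, the next time the node goes sideways at 10pm I’d at least be able to tell whether my container was the one misbehaving.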