on 01-Dec-2010 14:33
With the rapid growth of DevCentral, we continue to get great suggestions for how we can improve the site (BTW – you too can provide suggestions via the Feedback form here). We also have a nonstop stream of our own ideas about cool stuff we would like to see. All of this means that we’ve become more dedicated to regular updates and upgrades to DevCentral, in both the software we use AND the customization we build on top of it. As a result, you may have seen a few more maintenance pages over the past few months than usual. If you’re interested in what we’ve been doing, read on!
Yesterday, we ran our second update in the past couple of months. For this update, our major focus was streamlining performance at the application layer. By all measures, the upgrade went smoothly. A few of the things we focused on:
After working through development over the past month or so as well as our usual Staging/QA process, we pushed the button yesterday and rolled out the updates. However, it wasn’t until later in the day that things got a little bit interesting…
Without going into the gory details (we’ll probably do that later, since it’s likely interesting to some of you), we run our application in redundant datacenters behind a whole host of F5 gear (GTM, LTM, WA, ASM). We use iRules extensively. We’re sort of biased, but we think F5 gear rocks (and it would be lame if we didn’t use it extensively…).
As part of our infrastructure, our IT team manages a pretty extensive monitoring system that helps us know what’s happening with our application, servers, and infrastructure. Around 5pm (PST) yesterday, we started getting some funky alerts. Nothing serious, but enough to warrant closer monitoring. Eventually, through F5 health monitors on LTM and GTM, we were able to fail over between datacenters automagically and keep users connecting to the application. All good.
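To give a flavor of what those monitors buy us (this is a generic Python sketch, not our actual F5 configuration – the hostnames and two-datacenter layout are made up, and the real decision is made by the LTM/GTM health monitors themselves), the logic boils down to: probe the application in each datacenter, and steer users to one that still answers.

```python
# Generic illustration only: probe the app in each datacenter and pick a
# healthy one. In our setup, LTM/GTM health monitors do this for real.
import socket

DATACENTERS = [
    ("primary",   ("dc1.example.com", 80)),   # hypothetical primary datacenter
    ("secondary", ("dc2.example.com", 80)),   # hypothetical secondary datacenter
]

def app_is_healthy(addr, timeout=3.0):
    """A crude application-layer probe: can we complete a TCP connection?"""
    try:
        with socket.create_connection(addr, timeout=timeout):
            return True
    except OSError:
        return False

def pick_datacenter():
    """Send users to the first datacenter whose application still answers."""
    for name, addr in DATACENTERS:
        if app_is_healthy(addr):
            return name
    return None  # everything is down; nowhere to send traffic

if __name__ == "__main__":
    print("steering traffic to:", pick_datacenter())
```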
But, in email exchanges with MVPs and other active users, we learned that all was not completely ideal…
Jeff: “Hey – we’re seeing some funky alerts about the application. What are you seeing?”
DevCentral Member: “I was getting TCP resets consistently tonight. The IP seemed to respond very consistently to pings. So I was guessing it was an app layer issue.”
Hmmm. Thanks to our ninja IT team and the DC gang, we took some measures that seemed to resolve/stabilize things and we went to sleep. However, this morning, the issues reappeared and we dug deeper.
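The member’s reasoning, by the way, is a handy pattern: if the address answers pings but TCP connections to the web port keep getting reset, the network and OS are fine and the problem is almost certainly up at the application layer. A rough sketch of that kind of check (our own illustration, not the member’s actual test, and the hostname is just a placeholder):

```python
# Quick layer-by-layer check: ICMP reachability vs. an actual TCP connection.
# The hostname is a placeholder; "ping -c" is the Linux/macOS flag (use -n on Windows).
import socket
import subprocess

HOST = "devcentral.example.com"  # placeholder

def answers_ping(host):
    """One ICMP echo via the system ping command; exit code 0 means we got a reply."""
    result = subprocess.run(["ping", "-c", "1", host],
                            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return result.returncode == 0

def answers_tcp(host, port=80):
    """Try to complete a TCP handshake to the web port."""
    try:
        with socket.create_connection((host, port), timeout=5):
            return True
    except OSError:  # reset, refused, or timed out
        return False

if __name__ == "__main__":
    print("ping:", "reply" if answers_ping(HOST) else "no reply")
    print("tcp :", "connected" if answers_tcp(HOST) else "reset/refused/timeout")
```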
It turns out that the upgrade flipped a bit in the database that told certain scheduled jobs to run on multiple servers. Combine that with the fact that a couple of these jobs were pretty resource-intensive and ran against very large tables, and you end up with some DB deadlocking. Deadlocking is bad: it will drag a server to its knees quickly even when it’s not under load, let alone when it’s serving thousands of pages.
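For anyone who hasn’t run into it, the failure mode looks roughly like the toy below (a deliberately simplified Python stand-in, not our actual jobs or schema): two copies of the same maintenance job, each started on a different server, grab locks on the same tables and then sit waiting on each other.

```python
# Toy deadlock: the two locks stand in for row/table locks on big tables
# (say, logs and notifications). Each "server" takes one lock and then blocks
# waiting for the other's -- the classic textbook ordering problem. The
# timeouts exist only so the demo can report the deadlock instead of hanging.
import threading
import time

log_table = threading.Lock()
notification_table = threading.Lock()

def job(server, first, second, first_name, second_name):
    with first:
        print(f"{server}: locked {first_name}")
        time.sleep(0.1)                      # ensure both servers hold their first lock
        if second.acquire(timeout=2):
            second.release()
            print(f"{server}: finished cleanly")
        else:
            print(f"{server}: deadlocked waiting for {second_name}")

a = threading.Thread(target=job, args=("server A", log_table, notification_table,
                                       "log_table", "notification_table"))
b = threading.Thread(target=job, args=("server B", notification_table, log_table,
                                       "notification_table", "log_table"))
a.start(); b.start()
a.join(); b.join()
```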
This took a while to find because the only symptom presenting itself was pegged CPUs on the DB systems. Fortunately, we’ve got an ace team of infrastructure and app folks who work together quite well, so we were able to track this down quickly. It can’t be stressed enough how important it is to have a combined team that can crack down on these kinds of issues from both angles (infrastructure and application).
Once the bit was set back to its intended value and centralized job scheduling for log and notification management was back in place, the issue went away completely, and it was back to business as usual.
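The general shape of that fix is the useful part: keep the “who runs scheduled jobs” decision in one central place, rather than as a per-server flag that an upgrade can quietly flip. A minimal sketch of the idea (the SQLite-backed Python, table name, and setting name below are purely illustrative; our platform handles this in its own scheduler):

```python
# Minimal single-runner guard: the name of the one server allowed to run
# scheduled jobs lives in a single central row, so a stray per-server setting
# can't fan the jobs out across the farm. Table and setting names are made up.
import socket
import sqlite3

def setup_demo_db():
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE app_settings (name TEXT PRIMARY KEY, value TEXT)")
    db.execute("INSERT INTO app_settings VALUES ('job_owner', ?)",
               (socket.gethostname(),))   # pretend this box is the designated job server
    return db

def current_job_owner(db):
    row = db.execute("SELECT value FROM app_settings WHERE name = 'job_owner'").fetchone()
    return row[0] if row else None

def run_scheduled_job(db, job):
    """Run the job only if this server is the centrally designated owner."""
    if current_job_owner(db) != socket.gethostname():
        return  # some other server owns scheduled work; stay quiet here
    job()

if __name__ == "__main__":
    db = setup_demo_db()
    run_scheduled_job(db, lambda: print("log/notification cleanup running on",
                                        socket.gethostname()))
```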
This was a bit of a wild goose chase, and it cropped up from a place we never would have expected. Nothing had changed in the app code surrounding the jobs that got switched on across every server; it was just a complication of the upgrade itself. A few thoughts we’ve come away from this with:
So, there you have it – a little insight into what we’ve been up to. We believe this continued focus on enhancements will deliver an even better community resource for you. And maybe some of the lessons we learned from this most recent upgrade will even help your next upgrade go more smoothly.