Getting to production is just the first step. Delivering exceptional user experience is the next, and that means there's work to be done - in the network. Bring on the NetOps!
It doesn't matter whether we're talking about profit or productivity; both are equally impacted by poorly performing apps. I could include some eye candy with charts showing abandonment rates based on app response times or adoption rates for corporate apps based on availability and performance, but you've seen them all anyway and you know it's true. When we asked organizations what they'd give up to get a more secure network, performance came in dead last, with fewer than 10% willing to abandon it.
(Lack of) speed kills.
We'd all like to point our fingers at those app developers, but the reality is that most of the time they really can't do anything about it. Lag (or latency, if you prefer) is introduced in the network by a variety of factors, including the protocols apps use to communicate.
Let's face the bitter truth. The reason there are more than 100 different RFCs related to TCP is that we keep trying to improve performance of a protocol that was designed primarily with reliability in mind, not performance.
Sure, you can tweak the TCP stack under the hood of whatever platform the app is deployed on (that's you, DevOps peoples) but there are a lot of things upstream, in the network, that also need attention (that's you, NetOps peoples). To get you going, here are five tips on services "in the network" that can help improve application performance:
Performance tip #1: Check which load balancing algorithm your availability service is using.
For example, the load balancing algorithm you use matters. Round robin is about the most rudimentary algorithm there is, and it absolutely doesn't care about performance or load on individual app instances. Given operational axiom #2 (performance decreases as load increases), you might imagine that a more load-aware or response time-based algorithm would give you a boost in performance.
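To make the difference concrete, here's a minimal sketch (instance names and connection counts are made up for illustration) contrasting round robin with a load-aware least-connections pick:

```python
import itertools

# Hypothetical per-instance state: active connection counts the
# availability service would track for each app instance.
instances = {"app-1": 0, "app-2": 0, "app-3": 0}

_rr = itertools.cycle(instances)

def pick_round_robin() -> str:
    # Blindly rotates through instances; ignores current load entirely.
    return next(_rr)

def pick_least_connections() -> str:
    # Load-aware: choose the instance with the fewest active connections.
    return min(instances, key=instances.get)
```

If app-2 is buried under ten active connections, least-connections steers new requests to app-1 or app-3, while round robin keeps sending app-2 its "fair" share regardless.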
Performance tip #2: Check cacheable content headers and, if they're missing, use an intelligent performance service to add them.
Perhaps the developers aren't properly marking content as cacheable. There's an application service for that which can automagically insert the appropriate headers into the outbound response, improving overall performance by eliminating those pesky round trips to grab duplicate content.
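A toy stand-in for that "insert caching headers on the way out" behavior might look like this. The suffix list and the one-day `max-age` policy are assumptions for illustration, not any product's defaults:

```python
# File types we'll assume are safe to cache (an assumption, not a rule).
CACHEABLE_SUFFIXES = (".css", ".js", ".png", ".jpg", ".woff2")

def add_cache_headers(path: str, headers: dict) -> dict:
    """Insert a Cache-Control header into a response if the app didn't set one."""
    if "Cache-Control" not in headers and path.endswith(CACHEABLE_SUFFIXES):
        # 86400 seconds = one day; the browser can reuse the asset
        # without a round trip back to the server.
        headers["Cache-Control"] = "public, max-age=86400"
    return headers
```

Note it only fills the gap: a developer-set `Cache-Control` (say, `no-store` on sensitive content) passes through untouched.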
Performance tip #3: Check if TCP multiplexing can be used to eliminate expensive session management from app response times.
TCP overhead, too, can introduce lag into the communication stream that will negatively impact the total performance as seen by end users. Taking advantage of TCP multiplexing offloads TCP session management to the availability service and reduces the amount of time required to receive and respond to HTTP requests (of which there are many, many, MANY more per page these days than ever before).
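The payoff is easy to see with a toy model (a sketch of the idea, not any particular product's implementation): many client requests funneled over a small pool of persistent backend connections means the backend pays the TCP handshake cost per pooled connection, not per request:

```python
# Toy model of TCP multiplexing: the availability service holds a small
# pool of persistent backend connections and reuses them across requests.

class MultiplexedPool:
    def __init__(self, size: int):
        self.size = size
        self.handshakes = 0        # backend TCP handshakes actually performed
        self._open = 0             # pooled connections currently open

    def send_request(self) -> None:
        if self._open < self.size:
            self._open += 1
            self.handshakes += 1   # lazily open a new pooled connection
        # otherwise: reuse an already-open backend connection, no handshake

naive_handshakes = 1000            # one handshake per request, no reuse
pool = MultiplexedPool(size=8)
for _ in range(1000):
    pool.send_request()
```

A thousand requests cost the backend eight handshakes instead of a thousand; the session-setup time drops out of the user-visible response time.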
Performance tip #4: Enable image optimization based on device type in your performance services.
Images. Oh, images. We love 'em, we use 'em, but we don't always make sure they're optimized for the device to which they're being delivered. This is not the fault of the app developers. Image optimization generally requires more than just changing HTML element attributes to make the image appear smaller on a mobile device. It requires manipulation of the actual content and, in the case of EXIF data, removal. That actually makes the image smaller in size, which means it transfers faster and doesn't linger as a silly broken-image "X" in the web app while it's still arriving. Performance services generally include image optimization capabilities that can be enabled with a simple checkbox.
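Real optimization services do much more (resizing, recompression, format conversion), but the EXIF-removal piece alone is simple enough to sketch: EXIF lives in a JPEG's APP1 marker segment, so stripping it is a matter of walking the segment headers and dropping that one:

```python
def strip_exif(jpeg: bytes) -> bytes:
    """Remove APP1 (EXIF) segments from a JPEG byte stream.

    A minimal sketch: JPEG segments are <0xFF marker><2-byte length><payload>,
    where length includes its own two bytes. We copy every segment except APP1.
    """
    assert jpeg[:2] == b"\xff\xd8", "not a JPEG (missing SOI marker)"
    out = bytearray(b"\xff\xd8")
    i = 2
    while i < len(jpeg):
        if jpeg[i] != 0xFF:
            out += jpeg[i:]        # unexpected byte; copy rest verbatim
            break
        marker = jpeg[i + 1]
        if marker == 0xDA:         # SOS: entropy-coded image data follows
            out += jpeg[i:]
            break
        length = int.from_bytes(jpeg[i + 2:i + 4], "big")
        if marker != 0xE1:         # drop APP1 (EXIF), keep everything else
            out += jpeg[i:i + 2 + length]
        i += 2 + length
    return bytes(out)
```

Camera photos routinely carry kilobytes of EXIF (thumbnails, GPS, lens data) that a browser never needs; dropping it is pure transfer-time savings.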
Performance tip #5: Consider using an HTTP/2 or SPDY gateway to improve the user experience without needing to go through a forklift upgrade of all app servers.
Finally, maybe it's time to retire HTTP/1. Oh, I know it's not feasible to rip and replace every app platform you have in order to support HTTP/2 or SPDY, but that doesn't mean you can't support either or both externally. HTTP/2 and SPDY gateways (which live in the network) can speak these new (and very performance-conscious) protocols to clients while still talking good old HTTP/1 to all the apps in the data center. Both protocols improve performance by compressing headers and making more efficient use of HTTP streams, and can provide a pretty nice boost to application performance.
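The header-compression win is easy to ballpark. HTTP/1.x resends the full header block on every request, while HTTP/2's HPACK lets a repeated header be referenced by a small index instead of retransmitted. This toy arithmetic (a deliberate simplification of HPACK's dynamic table, with made-up header values) shows the shape of it:

```python
# Hypothetical headers a browser resends on every request to the same site.
headers = {
    "user-agent": "Mozilla/5.0 (a realistically long UA string goes here...)",
    "accept": "text/html,application/xhtml+xml",
    "cookie": "session=abcdef0123456789; theme=dark",
}

def naive_bytes(n_requests: int) -> int:
    # HTTP/1.x: the full header block rides along on every single request.
    per_request = sum(len(k) + len(v) for k, v in headers.items())
    return per_request * n_requests

def indexed_bytes(n_requests: int) -> int:
    # HPACK-style (simplified): send headers once, then roughly one
    # index byte per repeated header on each later request.
    full = sum(len(k) + len(v) for k, v in headers.items())
    return full + (n_requests - 1) * len(headers)

wire_naive = naive_bytes(50)     # 50 requests for one page's worth of assets
wire_hpack = indexed_bytes(50)
```

With dozens of requests per page, the repeated-header bytes dominate, which is exactly the overhead these protocols were designed to squeeze out.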
Now, all these services generally live in "the network." They aren't part of the app infrastructure where DevOps lives, so it's important for the network guys (who we're going to call NetOps for this post, at least) to get in the game and collaborate with their operations and developer counterparts to see which of these options (or maybe all of them) would help improve the performance of apps after they're in production. Sometimes an option you expect to be beneficial turns out to be just the opposite, which means monitoring, measuring, and, most of all, communicating with the other groups to make sure performance actually is improving.
It's certainly also important for these network services to be provisioned as rapidly as the application they're supporting, which is where a lot of the DevOps mentality comes into play. Operationalizing the provisioning of these services is essential to getting to market, but monitoring and measuring and tweaking those services after getting to market is just as critical to realizing the gains in profit and productivity the business hoped to see.
'Cause getting to market first is easier than staying in the lead if you aren't focused on delivering the best-performing app there is.