On Cloud, Integration and Performance

Application performance is more and more about dependencies in the delivery chain, not the application itself. 

When an article appeared citing cloud performance problems and an associated average $1M in losses, it caused an uproar in the Twittersphere, at least amongst the Clouderati. There was much gnashing of teeth and pounding of fists that led to questioning first the methodology and ultimately the veracity of the report.

If you were worried about the performance of cloud-based applications, here's fair warning: You'll probably be even more so when you consider findings from a recent survey conducted by Vanson Bourne on the behalf of Compuware.

On average, IT directors at 378 large enterprises in North America reported their organizations lost almost $1 million annually due to poorly performing cloud-based applications. More specifically, close to one-third of these companies are losing $1.5 million or more per year, with 14% reporting that losses reach $3 million or more a year.

In a newly released white paper, called "Performance in the Cloud," Compuware notes its previous research showing that 33% of users will abandon a page and go elsewhere when response times reach six seconds. Response times are a notable worry, then, when the application rides over the complex and extended delivery chain that is cloud, the company says.

-- The cost of bad cloud-based application performance, Beth Schultz, Network World

I had a chance to chat with Compuware about the survey after the hoopla and dug up some interesting tidbits – all of which were absolutely valid points and concerns regarding the nature of performance and cloud-based applications.

I SAY CLOUD, YOU SAY TROPOSPHERE

I’ve written some rather terse commentary regarding the use of language, and in particular the use of the term “cloud” to refer to all things cloud-related, because it – if you’ll pardon the pun – clouds conversation and makes it difficult to discuss the technology with any kind of common understanding. That’s primarily what happened with this report. The article in question used the words “cloud-based application performance,” which many folks interpreted to mean merely “applications deployed in a public cloud.”

[ Insert pithy reference of choice to the adage about what happens when you assume ]

If you dug into the white paper you might have discovered some details that are, for anyone intimately familiar with application integration, unsurprising. In fact I’d argue that the premise of the report is not really new, just the use of the term “cloud” to describe those external services upon which applications have relied for nearly a decade now. If you don’t like that “guilt by association”, then take care to be more precise in your language. Those who decided to label every service or application delivered over the Internet “cloud” are to blame, not the messenger.

The findings of the report aren’t actually anything we haven’t heard before, nor anything I haven’t said before. The performance of an application is highly dependent upon the services it relies on. Applications are not islands, they are not silos; they are one cog in a much bigger wheel of interdependent services. If the performance of one of them suffers, they all suffer. If the Internets are slow, the application is slow. If a network component is overloaded and becomes a bottleneck, impairing its ability to pass packets at an expected rate, the application is slow. If the third-party payment processor upon which an application relies is slow – because of high traffic, or an impaired router, or the degraded performance of one of its own dependent services – the application is slow.
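To put that concretely, here is a minimal TypeScript sketch (the service names and latencies are invented for illustration, not anyone's actual stack) of a request handler that serially awaits three dependencies. The handler cannot respond faster than the sum of their response times, so the slowest link sets the pace:

```typescript
// Stand-in for a dependent service with a fixed response time (made up).
const callService = (name: string, latencyMs: number): Promise<string> =>
  new Promise((resolve) => setTimeout(() => resolve(`${name} ok`), latencyMs));

async function handleRequest(): Promise<void> {
  const start = Date.now();

  // Serial calls: each dependency's latency adds directly to the response.
  await callService("auth", 40);       // internal, fast
  await callService("inventory", 60);  // internal, fast
  await callService("payments", 900);  // third-party processor having a bad day

  // ~1000ms total: the application code itself contributed almost nothing,
  // yet the user experiences "the application" as slow.
  console.log(`responded in ${Date.now() - start}ms`);
}

handleRequest();
```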

One of the challenges we faced back in the days of Network Computing and its “Real World Labs” was creating just such an environment for performance testing. It wasn’t enough for us to test a network component; we needed to know how the component performed in the context of a real, integrated IT architecture. As the editor responsible for maintaining NWC Inc, where we built out applications and integrated them with standard IT application and service components, I’d be the first one to stand up and say it still didn’t go far enough. We never integrated outside the lab because we had no control over, and no visibility into, the performance of externally hosted services. We couldn’t adequately measure those services, and therefore their impact on the performance of whatever we were testing could not be controlled for in a scientific measurement sense. And yet the vast majority of enterprise architectures are dependent on at least some off-premise services over which they have no control. And some of them are going to be slow sometimes.

Slow applications lead to lost revenue. Yes, it’s only lost potential revenue (maybe visitors who abandoned their shopping carts were just engaged in the digital version of window shopping), but organizations have for many years now been treating that loss of potential revenue as a loss. They’ve also used that “loss”, as it were, to justify the need for solutions that improve performance of applications. To justify a new router, a second switch, another server, even new development tools. The loss of revenue or loyalty of customers due to “slow” applications is neither new nor a surprise. If it is, I’d suggest you’ve been living on some other planet.

And that’s what Compuware found: that the increasing dependence of applications on “cloud-based” services is problematic. That a poorly performing “cloud-based” service impacts the performance of every application that has integrated it. That “application performance” is really “application delivery chain performance,” and that we ought to be focused on optimizing the entire service delivery chain rather than pointing our fingers at developers who, when it comes down to it, can’t do anything about the performance of a third-party service.

APPLICATION PERFORMANCE IS A SUMMATION

The performance of an application is not the performance of the application alone. It’s the performance of the application as an aggregate view of a request/response pair as served by a complete and fully functional infrastructure, with all dependent services included. Every microsecond of latency introduced by service requests on the back-end, i.e. the integrated services on- and off-premise, is counted against the performance of the “application” as a whole.

Even services integrated at the browser, through included ad services or Twitter streams or Facebook authentication, count against the performance of the application, because the user doesn’t necessarily stop to check out the status bar in their browser when a “web page” is loading slowly. They don’t, and often can’t, distinguish between the response you’re delivering and the data being delivered by a third-party service. It’s your application; it’s your performance problem.
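The browser's standard Resource Timing API makes that mix visible, if you care to look. A quick sketch (the own-versus-third-party attribution logic here is mine, for illustration) that breaks a page's resource load times down by origin:

```typescript
// Run in a browser context: attribute resource load time by origin using
// the standard Resource Timing API (performance.getEntriesByType).
function thirdPartyBreakdown(): void {
  const entries = performance.getEntriesByType(
    "resource"
  ) as PerformanceResourceTiming[];

  for (const entry of entries) {
    const origin = new URL(entry.name).origin;
    const external = origin !== window.location.origin;
    // Note: detailed timing fields for cross-origin resources are hidden
    // unless the provider sends Timing-Allow-Origin; duration still shows
    // how long the browser spent fetching the resource.
    console.log(
      `${external ? "third-party" : "own origin "} ${origin}: ` +
        `${entry.duration.toFixed(0)}ms`
    );
  }
}

thirdPartyBreakdown();
```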

What the Compuware performance survey does is highlight the very real problem of measuring that performance from a provider point of view. It’s one thing to suggest that IT find a way to measure applications holistically, and application performance vendors like Compuware will be quick to point out that agents are more than capable of measuring the performance of the individual services comprising an application. But that’s only part of the performance picture. As we grow increasingly dependent on third-party, off-premise, and cloud-based services for application functionality and business processing, we will need to find a better way to integrate performance monitoring into IT as well. And therein lies the biggest challenge of a hyper-connected, distributed application: without some kind of standardized measurement and monitoring service for those application and business-related services, there is no consistency in measurement across customers. No measurement means no visibility, and no visibility means a more challenging chore for IT operations to optimize, manage, and provision appropriately in the face of degrading performance.
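In miniature, that agent-style measurement amounts to wrapping every outbound call so latency is recorded per dependent service. The sketch below is illustrative only, with hypothetical service names and URLs; it is not any vendor's actual agent:

```typescript
// Per-service latency samples, keyed by a name you assign each dependency.
const timings = new Map<string, number[]>();

// Wrap any outbound call so its elapsed time is recorded per service.
async function instrumented<T>(
  service: string,
  call: () => Promise<T>
): Promise<T> {
  const start = Date.now();
  try {
    return await call();
  } finally {
    // Record the elapsed time whether the call succeeded or failed.
    const samples = timings.get(service) ?? [];
    samples.push(Date.now() - start);
    timings.set(service, samples);
  }
}

// Hypothetical usage: which dependency is actually slow?
// await instrumented("payments", () => fetch("https://payments.example.com/charge"));
// console.log(timings);
```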

Application performance monitoring and management doesn’t scale well in the face of off-premise, distributed, third-party provided services. Cloud-based applications IT deploys and controls can employ agents or other enterprise-standard monitoring and management as part of the deployment, but IT has no visibility into, let alone control over, Twitter or a supply-chain provider’s services.

It’s a challenge that will continue to plague IT for the foreseeable future, until some method of gaining visibility into those services, at least, emerges, such that IT and operations can make the appropriate adjustments (compensatory controls) internal to the data center to address any performance issues arising from the use of third-party provided services.
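In the meantime, those compensatory controls mostly mean bounding the damage a slow dependency can do from inside the data center. One common pattern is a per-call time budget with graceful degradation; in this sketch the 500ms budget and the empty fallback are illustrative assumptions, not recommendations:

```typescript
// Give a third-party call a time budget; degrade gracefully if exceeded.
async function withTimeout<T>(
  call: Promise<T>,
  budgetMs: number,
  fallback: T
): Promise<T> {
  const timer = new Promise<T>((resolve) =>
    setTimeout(() => resolve(fallback), budgetMs)
  );
  // Whichever settles first wins: a slow dependency now costs at most
  // budgetMs of response time (the losing call is not cancelled, only ignored).
  return Promise.race([call, timer]);
}

// Hypothetical usage: ship the page without recommendations rather than late.
// const recs = await withTimeout(fetchRecommendations(userId), 500, []);
```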



Published Apr 04, 2011