application performance management
F5 Friday: The Rules for the Game of Application Performance Tag
It’s an integration thing. One of the advantages of deploying an application delivery controller (ADC) instead of a regular old load balancer is that it is programmable – or at least it is if it’s an F5 BIG-IP. That means you have some measure of control over application data as it’s being delivered to end-users and can manipulate that data in various ways depending on the context of the request and the response. While an ADC has insight into the end-user environment – from network connection type and conditions to platform and location – and can therefore adjust delivery policies dynamically based on that information, it can’t gather the kind of end-to-end application performance metrics both business stakeholders and developers need.

This data, the end-user view of application performance, is increasingly important. Back in April Google noted that “page speed” would be incorporated into its ranking algorithm, with faster-loading pages being ranked higher than slower pages. And we all know that the increasingly digital generation joining the ranks of corporate end-users has grown accustomed to fast, responsive web applications no matter what device they’re using or what location they’re coming from. Application performance is a big deal.

The trick, however, is that you have to know how fast (or slow) your pages are loading before you can do something about them. That generally means employing the services of an application performance monitoring solution like Keynote or Gomez (now a part of Compuware). These solutions collect application performance data through instrumentation of web applications, i.e. every page for which you want a measurement must be modified to include a small piece of JavaScript that reports performance back to the central system. Administrators and business stakeholders can then generate reports (and/or alerts) based on that data.

IT’S an INVESTMENT

The time investment required to instrument every page for which you (or the business) desire metrics is significant, especially if the instrumentation occurs after deployment rather than as part of the development lifecycle. Instrumentation at any time also incurs the risk of error, as it’s generally done by hand, in code. With any code-based solution – whether operational script or part of an application – there’s always the chance of making a mistake. In the case of web applications that mistake can be more costly because it’s visible to the end-user (customer) and may cause the application to be “unavailable”. It’s no surprise, then, that while most agree on the importance of understanding and measuring application performance, many organizations do not place a high priority on actually implementing a solution. dynaTrace recently conducted a study on performance management in large and small companies. The quick facts paint a horrible picture:

Sixty percent of the companies admit that they do not have any performance management processes installed, or what they have is ineffective. Half of the companies who answered that they have performance management processes admitted that they are doing it only in a reactive way when problems occur. One third of all companies said that management is not supporting performance management properly. From this data we can obviously conclude that performance management is not a primary interest in most companies.
-- Week 22 – Is There a Business Case for Application Performance?
While the data from the dynaTrace study is interesting, it ignores how the reality of implementing APM solutions impacts the ability and/or desire to deploy them. One of the reasons many companies have no performance management processes is that they are unable to instrument the application in the first place. For some, it’s because the application is “packaged”; it’s a closed-source, third-party application that can’t be instrumented via traditional code-based extension. For others, instrumentation may be possible, but the application is frequently updated, so instrumentation must be redone with every update and necessarily extends the application development lifecycle.

AUTOMATED INSTRUMENTATION for ALL APPLICATIONS

It is at this point that the game of application performance “tag” comes in handy: it alleviates the risk associated with manually instrumenting pages and enables the monitoring of packaged and other closed-source applications. Even though we often refer to BIG-IP as a “load balancer,” it really is an application delivery controller. It’s a platform, imbued with the ability to programmatically modify application data on demand. Using iRules as a network-side scripting solution, architects, developers, and administrators can control and manage application data in real time without requiring modification to applications. It is by leveraging this capability that we are able to automatically inject the instrumentation code required by Gomez to monitor application performance. The BIG-IP “tags” application response data with the appropriate end-user monitoring code, which enables APM providers like Gomez to monitor any application.

This is particularly interesting (and useful) when you consider the inherent difficulties of measuring performance not only from packaged applications but from an off-premise cloud computing environment. By deploying a virtual BIG-IP (a virtual network appliance), the same network-side script used to inject the appropriate client-side script can instrument off-premise, cloud-deployed applications on demand. This means consistent application performance measurement regardless of location. Even if the application “moves” from the local data center to an off-premise cloud, the functionality can remain simply by including a virtual BIG-IP with the appropriate iRules deployed.

The GOMEZ SOLUTION

F5 has partnered with Gomez to make the process of implementing this instrumentation for their service almost trivial. The nature of iRules, however, makes it possible to duplicate this effort with any application performance monitoring solution that relies upon a client-side script to collect performance metrics from end-users. The programmatic nature of iRules gives organizations complete control over the instrumentation, including the ability to extend the functionality so that the JavaScript is injected only in certain situations. For example, it might be done based on the type of client or client network conditions. It might be injected only for a new end-user, as determined by the existence of a cookie. It might be injected only for certain pages of an application. It can, essentially, be based on any contextual variable to which the ADC, the BIG-IP, has access. A minimal sketch of such a conditional injection iRule follows.
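To make the “tagging” concrete, here is a minimal iRule sketch – a hypothetical illustration, not the actual Gomez implementation (that one is linked below) – that injects a client-side monitoring script into HTML responses only for end-users without an existing cookie. The cookie name and beacon URL are placeholder assumptions, and the rule assumes a stream profile is assigned to the virtual server:

```tcl
# Hypothetical sketch only: the cookie name and beacon URL are placeholders,
# not the actual Gomez values. Assumes a stream profile on the virtual server.
when HTTP_REQUEST {
    # Decide up front whether to tag this response: here, only for
    # end-users we haven't seen before (no monitoring cookie present).
    set inject [expr { ![HTTP::cookie exists "apm_seen"] }]
    # Ask for an uncompressed response so the HTML body can be rewritten.
    HTTP::header remove "Accept-Encoding"
    STREAM::disable
}
when HTTP_RESPONSE {
    # Only instrument HTML responses that qualified above.
    if { $inject && [HTTP::header value "Content-Type"] contains "text/html" } {
        # Insert the client-side monitoring script just before </head>.
        STREAM::expression {@</head>@<script src="/monitor/beacon.js"></script></head>@}
        STREAM::enable
    }
}
```

Because the condition is just iRules logic, the same skeleton can key off client type, network conditions, or specific URIs instead of (or in addition to) the cookie check.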
You can read more details (including the code and the specific “rules” for implementation) in this tech tip from Joe Pruitt, “Automated Gomez Performance Monitoring,” and you can download the full implementation in the iRules CodeShare under GomezInjection.

F5 Friday: Anti-Fail
I recently expounded on my disappointment with cloud computing services that fail to recognize that server metrics are not necessarily enough to properly auto-scale applications in “I Find Your Lack of Win Disturbing”. One of the (very few) frustrating things about working for F5 is that we’re doing so much in so many different areas of application delivery that sometimes I’m not aware we have a solution to a broader problem until I say “I wish …” (I guess in a way that’s kind of cool in and of itself, right?) Such is apparently the case with auto-scaling and application metrics.

I know we integrate with IIS and Apache and Oracle and a host of other web and application servers to collect very detailed and specific application metrics, but what I didn’t know was how well integrated these are with our management solution. Shortly after posting I got an e-mail from Joel Hendrickson, one of our senior software engineers, who pointed out that “all of the ingredients in ‘Grandma’s Auto-Scaling Recipe’ and much more are available when using the F5MP [F5 Management Pack].” Joel says, “I think you’re essentially saying that hardware-derived metrics are too simplistic for decisions such as scale-out, and that integrating/aggregating data from the various ‘authoritative sources’ in the application is key to making informed decisions.” Yes, that’s exactly what I was saying, only not quite so well. Joel went on to direct my attention to one of his recent blog posts on the subject, detailing how the F5MP does exactly that. Given that Joel already did such an excellent job of explaining the solution and what it can do, I’ve summarized the main metrics available here but will let you peruse his blog entry for the meaty details (including some very nice network diagrams) and links to download the extension (it’s free!), video tutorials, and the F5 Management Pack Application Designer Wiki Documentation.

Learn How to Play Application Performance Tag at Interop
It’s all fun and games until application performance can’t be measured. We talk a lot about measuring application performance and its importance to load balancing, scalability, meeting SLAs (service level agreements), and even to the implementation of more advanced concepts like cloud balancing and location-based global application delivery, but we don’t often talk about how hard it is to actually get that performance data. Part of the reason it’s so difficult is that the performance metrics you want are ones that represent end-user experience as accurately as possible. You know, customers and visitors, the users of your application who must access your application over what may be a less than phenomenal network connection.

This performance data is vital. Increasingly, customers and visitors are basing business choices on application performance:

Unacceptable Web site performance during peak traffic times led to actions and perceptions that negatively impacted businesses’ revenue and reputation:
-- 78 percent of consumers have switched to a competitor’s Web site because they encountered slowdowns, errors and transaction problems during peak traffic times.
-- After a poor online experience, 88 percent are less likely to return to a site, 47 percent have a less positive perception of the company and 42 percent have discussed it with family, friends and peers, or online on social networks.
-- Survey Finds Consumer Frustration with Web Site Performance During Peak Traffic Times Negatively Impacts Business Results

And don’t forget that Google recently decided to add performance as a factor in its ranking algorithms. If your application and site perform poorly, this could have an even bigger negative impact on your bottom line.

What’s problematic about ensuring application performance is that applications are now being distributed not just across data centers but across deployment models. The term “hybrid” is usually used in conjunction with public and private cloud to denote a marriage between the two, but the reality is that today’s IT operations span legacy, web-based, client-server, and cloud models. Making things more difficult, organizations also have a cross-section of application types – open source, closed source, packaged, and custom applications are all deployed and operating across various deployment models and in environments without a consistent, centrally manageable solution for measuring performance in the first place.

The solution to gathering accurate end-user experience performance data has been, to date, to leverage service providers who specialize in gathering this data. But implementing a common application performance monitoring solution across all applications and environments in such a scenario is quite problematic, because most of these solutions rely upon the ability to instrument the application/site. Organizations, too, may be reluctant to instrument applications for a specific solution – that can result in de facto lock-in, as the time and effort necessary to remove and replace the instrumentation may be unacceptable. A dynamic infrastructure, capable of intercepting, inspecting, and, if necessary, modifying the application data stream en route, is necessary to unify application performance measurement efforts across all application types and locations.
A dynamic infrastructure that’s capable of tagging application data with the appropriate information, so that end-user monitoring services – necessary to determine more accurately the end-user experience in terms of response and page load time – can effectively perform their duties across the myriad application deployments upon which businesses and their customers depend. At Interop we’ll be happy to show you how to teach your application delivery infrastructure – physical and virtual – to play a game of “tag” with your applications that can provide just such measurements. Measurements that are vital to identifying potential performance bottlenecks that may negatively impact application performance and, ultimately, the business’ bottom line. Even better, we’ll not only show you how to play the game, but how to win by architecting an even more dynamic, intelligent infrastructure through which application performance-enhancing solutions can be implemented, no matter where those applications may reside – today or tomorrow.

I Find Your Lack of Win Disturbing
Are you scaling applications or servers? Auto-scaling cloud brokerages appear to be popping up left and right. Following in the footsteps of folks like RightScale, these startups provide automated monitoring and scalability services for cloud computing customers. That’s all well and good, because the flexibility and control over scalability in many cloud computing environments is, shall we say, somewhat lacking in the mechanisms necessary to efficiently make use of the “elastic scalability” offered by cloud computing providers.

The problem is (and you knew there was a problem, didn’t you?) that most of these companies are still scaling servers, not applications. Their offerings still focus on minutiae germane to server-based architectures, not application-centric architectures like cloud computing. For example, the following statement was found on one of the aforementioned auto-scaling cloud brokerage service startups: “Company X monitors basic statistics like CPU usage, memory usage or load average on every instance.”

Wrong. Wrong. More Wrong. Right, at least, in that it very clearly confesses these are basic statistics, but wrong in the sense that it’s just not the right combination of statistics necessary to efficiently scale an application. It’s server / virtual machine layer monitoring, not application monitoring. This method of determining capacity results in scalability based on virtual machine capacity rather than application capacity, which is a whole different ball of wax. Application behavior and complexity of logic cannot necessarily be tied directly to memory or CPU usage (and certainly not to load average). Application capacity is about concurrent users and response time, and the intimate relationship between the two.

-- An application may be overwhelmed while not utilizing a majority of its container’s allocated memory and CPU.
-- An application may exhibit increasingly poor performance (response times) as the number of concurrent connections increases toward its limits while still using less than 70, 80, or 90% of its CPU resources.
-- An application may exhibit slower response time or lower connection capacity due to the latency incurred by exorbitantly computationally expensive application logic.
-- An application may have all the CPU and memory resources it needs but still be unable to perform and scale properly due to the limitations imposed by database or service dependencies.

There are so many variables involved in the definition of “application capacity” that it sometimes boggles the mind. One thing is for certain: CPU and memory usage is not an accurate enough indicator of the need to scale out an application.
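To make the contrast concrete, here is a hypothetical sketch in plain Tcl (the same language iRules are built on) of a scale-out check driven by application capacity – concurrency and response time – rather than CPU or memory. The threshold values and metric names are illustrative assumptions, not taken from any real auto-scaling product:

```tcl
# Hypothetical sketch: thresholds and metric names are illustrative
# assumptions, not taken from any real auto-scaling product.
proc should_scale_out {concurrent_users avg_response_ms} {
    # Application capacity expressed in application terms: how many
    # concurrent users can be served within an acceptable response time.
    set max_users       500   ;# tested concurrency limit for this app
    set max_response_ms 800   ;# SLA threshold for response time

    # Scale out when the app nears its concurrency limit or responses
    # slow past the SLA -- regardless of what CPU and memory report.
    if { $concurrent_users >= [expr { $max_users * 0.9 }] } { return 1 }
    if { $avg_response_ms  >= $max_response_ms }            { return 1 }
    return 0
}

# Example: CPU might be at 40%, yet this app is already out of capacity.
puts [should_scale_out 470 950]   ;# -> 1 (scale out)
puts [should_scale_out 120 200]   ;# -> 0 (healthy)
```

The point of the sketch is the inputs: a decision like this can fire while CPU sits at 40%, and stay quiet while CPU spikes on work that isn’t hurting response time.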
AppDynamics Puts the Management in Application Performance Management

The future of application performance management is in real-time visibility, action, and integration. For a very long time now, APM (Application Performance Management) has been a misnomer. It’s always really been application performance monitoring, with very little management occurring outside of triggering manual processes requiring the attention of operators and developers. APM solutions have always been great at generating eye-candy reports about response time and components and, in later implementations, dependencies on application and even network infrastructure. But it has rarely been the case that APM solutions have really been about, well, managing application performance. Certainly they’ve attempted to provide the data necessary to manually manage applications, and they do an excellent job of correlation and even, in some cases, deep troubleshooting of root-cause performance problems. But real-time, dynamic, on-demand performance management? Not so much. AppDynamics, however, is attempting to change that. Its focus is on managing the performance and availability of applications in real time (or at least near-time), and it does so across cloud computing environments as well as internal to the data center.