#webperf #devops Is your monitoring strategy evolving along with your application and infrastructure architectures?
As "applications" continue to morph into what we once might have called "mashups" but no longer do because, well, SOA is officially dead, dontcha know, it is increasingly important for a variety of constituents within organizations - from business stakeholders to application owners to devops - to understand the overall "health" of an application.
Traditional monitoring techniques approach the problem from a very infrastructure-centric point of view. That is, the technique is really more of a pool and resource monitor than it is an application monitor. Each individual service that makes up an application is monitored individually, with no real view of how the "application" itself is performing.
Now the problem with this approach is that different applications may share the same services (especially in an API-driven model) but have very different performance and availability requirements. It may be completely acceptable for an internal application to respond more slowly than a consumer-facing application, for example.
Thus organizations are left with a view that accurately informs them as to the current health of individual services, but no real way to use that data to get a picture of how the application is performing.
What we really need is the ability to monitor not only the performance and health of individual services but also the application as a concept - even if that application is just a mashup of other applications or services.
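To make that concrete, here's a minimal sketch (in Python, with hypothetical service names, metrics, and SLA thresholds) of what application-level health rolled up from shared services might look like. Note how the same service reading can be perfectly healthy for one application and a breach for another:

```python
# A minimal sketch of per-application health derived from shared services.
# Names, metrics, and thresholds are illustrative, not any product's API.
from dataclasses import dataclass

@dataclass
class ServiceMetrics:
    name: str
    available: bool
    latency_ms: float        # current observed response time for this service

@dataclass
class Application:
    name: str
    services: list           # the services this application depends on
    max_latency_ms: float    # this application's own SLA, not the service's

def application_health(app: Application) -> str:
    """Roll individual service metrics up into one application-level status."""
    if not all(s.available for s in app.services):
        return "down"
    # The same service latency can be fine for one app and a breach for another.
    if any(s.latency_ms > app.max_latency_ms for s in app.services):
        return "degraded"
    return "healthy"

# One shared service, judged against two very different requirements:
catalog = ServiceMetrics("catalog-api", available=True, latency_ms=350)
internal = Application("internal-reports", [catalog], max_latency_ms=2000)
consumer = Application("storefront", [catalog], max_latency_ms=200)

print(application_health(internal))  # healthy  (350 ms is fine internally)
print(application_health(consumer))  # degraded (350 ms breaches the consumer SLA)
```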
Important to remember, too, is that applications aren't limited to a single protocol, like HTTP. Consider an application like Microsoft Exchange, which can be - and frequently is - accessed via multiple protocols. It may be necessary to monitor a variety of services in order to determine the actual health and availability of the application.
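As a rough illustration, here's a sketch that treats an application like Exchange as available only when every access path answers. It uses plain TCP connects as a cheap stand-in for real protocol-level probes, and the host and ports are hypothetical:

```python
# A minimal sketch of multi-protocol availability checking. A production
# monitor would speak each protocol (HTTP, SMTP, IMAP), not just connect.
import socket

PROBES = {
    "HTTPS (OWA)":   ("mail.example.com", 443),
    "SMTP":          ("mail.example.com", 25),
    "IMAP over TLS": ("mail.example.com", 993),
}

def probe(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

results = {name: probe(host, port) for name, (host, port) in PROBES.items()}
# The "application" is only healthy if every access path is.
app_available = all(results.values())
print(results, "->", "available" if app_available else "impaired")
```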
The key is to monitor not just individual services (that's important, but it's not the whole enchilada) but also the application as a whole. This gives business and application stakeholders a better view of how IT is servicing their needs, and it offers IT significant value in understanding the impact of individual services on applications and business services.
For example, if the same service is used by multiple applications and the service starts degrading, it should (logically) impact the health of every application that uses it. Noticing this early on enables IT to deal with the situation proactively, up to and including notifying all the application owners that there's an issue with a core service and IT is already on the case, before the call comes in. Being able to monitor and analyze performance across time also enables earlier identification of outliers. By spotting these leading indicators of trouble, it may be possible to head off an outage or performance degradation before it occurs, leaving application and business stakeholders blissfully ignorant of what might have been a disastrous incident.
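One simple way to catch those leading indicators is to flag samples that drift well outside the recent norm. Here's a sketch using a rolling window and a z-score test; the window size and threshold are illustrative assumptions, not recommendations:

```python
# A minimal sketch of spotting leading indicators: flag latency samples that
# sit well outside the recent norm before they become a hard outage.
from collections import deque
from statistics import mean, stdev

def make_outlier_detector(window: int = 60, z_threshold: float = 3.0):
    history = deque(maxlen=window)      # rolling window of "normal" samples

    def observe(latency_ms: float) -> bool:
        """Return True if this sample is an outlier vs. the rolling window."""
        is_outlier = False
        if len(history) >= 2:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and (latency_ms - mu) / sigma > z_threshold:
                is_outlier = True
        if not is_outlier:              # only learn from normal samples
            history.append(latency_ms)
        return is_outlier

    return observe

observe = make_outlier_detector()
samples = [100, 102, 99, 101, 103, 100, 98, 102, 180]  # trouble creeping in
for s in samples:
    if observe(s):
        print(f"leading indicator: {s} ms is well above the recent norm")
```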
It can also be the case that sudden demand for one application negatively impacts the performance or availability of a shared service, which in turn, of course, impacts other applications that use that service. By monitoring all the pieces of the application, the source of the increased demand can be more easily identified and a strategy to address it formulated.
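For illustration, here's a sketch of that correlation step: given a shared service's latency and per-application request rates over the same interval, rank the applications by how closely their demand tracks the degradation. The time series are made up, and statistics.correlation requires Python 3.10 or later:

```python
# A minimal sketch of correlating per-application demand with shared-service
# degradation. In practice these series would come from your metrics store.
from statistics import correlation  # Python 3.10+

service_latency_ms = [100, 105, 110, 240, 260, 255, 120, 110]

requests_per_min = {
    "storefront":       [500, 510, 505, 495, 500, 505, 500, 498],
    "mobile-api":       [200, 210, 215, 900, 950, 930, 250, 220],  # the spike
    "internal-reports": [50,  55,  52,  54,  51,  53,  50,  52],
}

# Rank applications by how closely their demand tracks the service's latency.
ranked = sorted(
    ((correlation(rates, service_latency_ms), app)
     for app, rates in requests_per_min.items()),
    reverse=True,
)
for r, app in ranked:
    print(f"{app}: correlation with service latency = {r:+.2f}")
# mobile-api should top the list, pointing at the likely source of the demand.
```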
Monitoring is a critical (and sadly often overlooked and underappreciated) function in the data center; without it, modern methods of scalability (elasticity) and orchestrated responses to failure would not be possible. Because it is so critical, it's important to ensure that your monitoring capabilities and your use of them are keeping pace with modern architectures, networks, and services.
Without monitoring, there's really no way to recognize and react to failures, overloads, and outages. So make sure your monitoring strategy is evolving along with your data center infrastructure and applications.