We need to start focusing on improving the application deployment processes that all too often consume the bulk of the time spent trying to get an application out the door.
Oh, I know it looks like it’s actually improving, but it’s not. Virtualization came along and took the low-hanging fruit off the application deployment tree and paid no mind to those still waiting in the upper branches. While applications are easy to provision today thanks to the wonders of virtualization, the rest of the infrastructure still is not. That’s problematic, because while we’ve made it easy to spin up an application, we haven’t made it easy to spin up the services required to actually deliver it to the consumer – whether that consumer be internal or external end-users.
One could actually view the provisioning of an application instance as the event that triggers a series of other events, all required to provide for the security, scalability and availability of that application. We’ve automated the beginning, but we’re still for the most part manually configuring the nitty-gritty infrastructure details. Within the industry we hear the term “application development lifecycle” but we rarely hear the term “application deployment lifecycle” and that’s an important distinction to make. The development lifecycle is only one piece of a much larger and too often lengthier process.
Consider all the services existing within the infrastructure and then which of those services might need to be updated, changed or otherwise modified to support a new application:
DNS Services
Some applications will be delivered via their own host name; if so, DNS must be configured to support it. If the application is being deployed across multiple sites for resiliency or performance-related global load balancing, those systems, too, must be configured.
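To make that configuration repeatable rather than a manual zone edit, the DNS decisions for an application can be captured declaratively. A minimal sketch in Python – the host names, site names, and fields here are hypothetical, purely for illustration:

```python
# Hypothetical sketch: the DNS and global load balancing decisions for a
# new application captured as data, so provisioning can trigger them
# instead of relying on a manual zone edit.
dns_spec = {
    "hostname": "orders.example.com",
    "sites": ["dc-east", "dc-west"],  # sites participating in global load balancing
    "policy": "geo_proximity",        # route users to the nearest site
    "ttl": 300,
}

def records_for(spec: dict) -> list[str]:
    """Render one GSLB-managed alias record per site from the spec."""
    return [
        f"{spec['hostname']} {spec['ttl']} IN CNAME {site}.gslb.example.com."
        for site in spec["sites"]
    ]
```

Because the specification is data, deploying the same application to another environment is a matter of re-applying it, not re-keying it.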
Network Security Services
The application in question may be running on an unusual port, which requires firewall configuration. Request rate monitoring may need to be configured to recognize possible attacks, and other edge security services may require attention as well.
Load Balancing Service
If the application requires high availability or will need to scale to support high volume, load balancing services must be configured. Algorithms, failover strategies, and persistence settings will all need to be determined and configured.
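Those decisions become repeatable when they are expressed as data tied to the application rather than as device commands tied to an IP address. A hedged sketch in Python – the class and field names are invented for illustration, not any vendor’s API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LoadBalancingPolicy:
    """Per-application load balancing decisions captured as data, so the
    same policy can be versioned and re-applied on every deployment."""
    application: str
    algorithm: str = "round_robin"       # or least_connections, weighted
    persistence: Optional[str] = None    # e.g. "cookie" or "source_ip"
    failover: str = "active_passive"     # failover strategy between members
    health_check_path: str = "/health"   # probe used to mark members up or down

    def validate(self) -> None:
        allowed = {"round_robin", "least_connections", "weighted"}
        if self.algorithm not in allowed:
            raise ValueError(f"unknown algorithm: {self.algorithm}")

# The policy belongs to the application, not to an address.
policy = LoadBalancingPolicy(
    application="orders-web",
    algorithm="least_connections",
    persistence="cookie",
)
policy.validate()
```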
Web Application Platform
If the topology demands it, the web application platform – the OS and even the hypervisor if it’s virtualized – may need tweaks to its network stack to ensure proper routing through the infrastructure.
Web Access Management Service
Web access management is today often centralized outside the application. If access to the application is restricted in any way or single-sign on is required, additional configuration may be necessary.
APM (Application Performance Management) Services
Services providing performance monitoring – both internal and external – may need to be configured to recognize the application; alerting thresholds may need to be set and report schedules configured.
Web Application Firewall Services
If the application is protected against SQLi and other malicious inbound attack patterns by a web application firewall, those policies must be created and tested before going live.
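One way to make that create-and-test step repeatable is to define the policy as data and stage it in a non-blocking mode first. A minimal, vendor-neutral sketch in Python – the rule names and the detect/block modes are illustrative assumptions:

```python
# Hypothetical sketch: a web application firewall policy defined as data,
# deployed first in a log-only "detect" mode for testing, then promoted
# to "block" once false positives have been worked out.
waf_policy = {
    "application": "orders-web",
    "rules": ["sql_injection", "xss", "path_traversal"],
    "mode": "detect",  # log-only while the policy is under test
}

def promote(policy: dict) -> dict:
    """Return a blocking copy of a policy that has passed testing."""
    if policy["mode"] != "detect":
        raise ValueError("only a tested detect-mode policy can be promoted")
    return {**policy, "mode": "block"}

live_policy = promote(waf_policy)
```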
Event and Log Correlation
Event and log correlation, especially in large enterprise deployments, is a necessity for troubleshooting and auditing. Configuration may be necessary to uniquely identify the application, and notifications and alerts for noteworthy events must be configured appropriately.
This is by no means an exhaustive list, and it is already lengthy. There are many, many moving parts that need to be configured – and tested – before “deployment” of the application can be considered complete. These are concerns into which developers have little visibility and over which they have little input. One might claim they have very little interest in such aspects of the deployment, as infrastructure services are often outside their realm of not only expertise but experience – and control. Yet many of these services require information and an understanding of the application that is best gathered from the application expert: the developer. Thus the cooperation and collaboration of development with operations during the application deployment lifecycle becomes critical to ensuring the successful deployment of that application.
But even if we assume we can get that cooperation, we are still left with a mostly manual, labor-intensive deployment process. While devops has begun leveraging development methodologies such as agile to improve the operational deployment of applications, it has not embraced the reality that deployment processes are about architectures, not applications, and that a much more infrastructure-inclusive approach is necessary to make the rest of the deployment lifecycle more efficient – and repeatable.
We’ve seen the efficiency gains from repeatable application provisioning that come from devops. It’s an excellent affirmation of the positive impact of applying development methodologies to operations. But we can’t stop at the application platform; we need to keep moving the concept of repeatable – and therefore consistent – architectures into the infrastructure services that make up the bulk of the deployment lifecycle.
The problem, many might say (and they’d be right), is that infrastructure itself does not adequately support the notion of “repeatable”. Configurations are often unwieldy, difficult-to-parse files full of component-specific language, which makes them difficult to automate. While Infrastructure 2.0 components, with their service-enabled control planes, can certainly be configured in a more granular fashion, these methods, like infrastructure in general, use component-specific language that is difficult for operations to translate – let alone for developers, completely unfamiliar with the inner workings of network and infrastructure devices, to adopt.
This is why we can’t have nice things.
Infrastructure must support a more service- and application-friendly means of configuration; it must treat policies more like services that are invoked during the application delivery process based on context. It needs to use language that’s familiar to both developers and operations and allow for management based on the particular management paradigm of the organization.
One of the ways in which virtualization has aided this style of operational configuration management is to allow the entire application stack – from OS to platform – to be configured specifically to support a single application. It’s a self-contained, fully configured, working environment. That’s part of what devops creates today during the initial stages of the deployment process. We need to extend that concept to infrastructure. A load balancing service needs to scale and provide availability for an application – not an IP address – and it must be able to describe itself in terms more common to both operations and development.
It may be that virtualization will again come to the rescue – whether through an architectural infrastructure approach that leverages virtual network appliances as the core unit of infrastructure deployment, or through virtual instances of infrastructure services initiated on demand. The reason auto-scaling works is that repeatable deployment of applications became possible by “packaging” up the application and its immediate environs – the web and application platforms – along with its configuration. We need the same style of deployment packaging in infrastructure; either in a fashion similar to application “packaging” through virtualization of a configured environment, or through the ability to create, manage and maintain application delivery and network policies as a “package” that is easily repeated on-demand.
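What such an infrastructure “package” might look like can be sketched simply: the delivery and security policies an application needs, bundled alongside it so the whole set can be re-applied on demand, much as a virtual machine image packages the OS and platform with the application. All names and fields below are hypothetical:

```python
import json

# Hypothetical sketch of an infrastructure deployment "package": every
# delivery, security, and monitoring policy the application needs,
# bundled into one artifact that travels with the application.
package = {
    "application": "orders-web",
    "dns": {"hostname": "orders.example.com"},
    "load_balancing": {"algorithm": "least_connections", "persistence": "cookie"},
    "waf": {"rules": ["sql_injection", "xss"], "mode": "block"},
    "monitoring": {"latency_alert_ms": 500},
}

# Serializing the package makes it versionable: the same artifact can be
# checked in, diffed, and re-deployed to produce the same infrastructure
# every time.
artifact = json.dumps(package, sort_keys=True)
```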
Only by extending the notion of services into the network can we hope to reduce and make more efficient the most time-consuming portions of the application deployment process. Only by extending the notion of services to policies and processes within infrastructure components can we make deployments consistent and thus repeatable – and ultimately provisionable as a service. It will be these building-block services that form the foundation for IT as a Service and lay the groundwork for developers to self-service the entire application deployment process in the future.