Infrastructure 2.0: The Diseconomy of Scale Virus
The diseconomy of scale so adversely affecting the IP address management space isn't limited to network infrastructure; it's steadily crawling up the stack and infecting every layer of the data center like some kind of unstoppable infrastructure management virus.
That is why the cost of even the simple act of managing an enterprise network's IP addresses, a task critical to the availability and proper functioning of the network, actually goes up as IP addresses are added. As TCP/IP continues to spread and take productivity to new heights, management costs are already escalating. -- Greg Ness, "What Are the Barriers to Entry and IT Diseconomies?"
Greg does a great job of explaining exactly why the costs of management escalate with each IP address added to the infrastructure, and in cloud computing environments those additions can be many.
What isn't often explained is how that diseconomy of scale at the IP address layer travels upward quickly to escalate management costs and increase complexity for traditional scaling infrastructure as well.
THE TRADITIONAL SCALING MODEL
Traditional scaling models take advantage of an application delivery controller (load balancer) to horizontally scale applications. In this model, the application and its server (web or application) are replicated a number of times, and the application delivery controller acts as a virtual copy of the application externally, distributing requests across the replicated copies of the server internally. If three applications are being scaled, then there are three virtual servers on the outside, with a set number of application servers on the inside actually serving up the application. So there may be ten physical instances of Application A, ten instances of Application B, and ten of Application C. The number may deviate periodically based on maintenance windows and unplanned outages, but generally speaking the number of instances and the physical servers on which those instances are deployed stay constant.
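To make the static model concrete, here is a minimal sketch of how such a fixed configuration might be represented. The virtual IPs, pool addresses, and the simple round-robin algorithm are all illustrative assumptions, not a depiction of any particular product's configuration.

```python
from itertools import cycle

# Hypothetical static configuration for the traditional scaling model:
# each externally visible virtual server fronts a fixed pool of
# application server instances. All names and addresses are illustrative.
STATIC_CONFIG = {
    "vs_app_a": {"virtual_ip": "203.0.113.10",
                 "pool": [f"10.0.0.{i}" for i in range(1, 11)]},
    "vs_app_b": {"virtual_ip": "203.0.113.11",
                 "pool": [f"10.0.1.{i}" for i in range(1, 11)]},
    "vs_app_c": {"virtual_ip": "203.0.113.12",
                 "pool": [f"10.0.2.{i}" for i in range(1, 11)]},
}

# Round-robin is the simplest algorithm a controller might use to
# distribute requests across the replicated copies of each application.
_rotors = {name: cycle(cfg["pool"]) for name, cfg in STATIC_CONFIG.items()}

def pick_server(virtual_server: str) -> str:
    """Return the next pool member that should receive a request."""
    return next(_rotors[virtual_server])
```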
In a virtualized or cloud computing model, these same principles of scaling are used, but the servers inside the data center are virtual and dynamic. The three applications in the previous example still require three virtual servers on the application delivery controller, but the number of servers on the inside (in each application pool|farm|cluster) is not static. There may be four servers for Application A while there are ten servers for Application B. At another time there may be seven servers for Application A and only two for Application B.
Making the situation even more complex is the fact that not only is there a variable number of application servers in each virtual server's application pool|farm|cluster, but those application servers may also reside on different physical servers at any given time.
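A pair of hypothetical snapshots makes that churn visible. The host names below are invented purely for illustration, though the instance counts mirror the example above.

```python
# Two snapshots of the same data center taken at different times: both
# the number of instances per application and their physical placement
# change between them. Host names are invented for illustration.
snapshot_t0 = {
    "app_a": ["host-03", "host-07", "host-12", "host-21"],            # 4 instances
    "app_b": ["host-01", "host-05", "host-09", "host-14", "host-15",
              "host-18", "host-22", "host-25", "host-27", "host-30"], # 10 instances
}
snapshot_t1 = {
    "app_a": ["host-02", "host-07", "host-11", "host-16", "host-19",
              "host-24", "host-28"],                                  # 7 instances
    "app_b": ["host-06", "host-20"],                                  # 2 instances
}
```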
Using traditional scaling technology, each virtual server instance on the application delivery controller would need to be configured with every possible physical server instance on which the application server could be running. If the virtual data center contains thirty physical servers, the resources of which will be shared by those three applications, then each pool|farm|cluster for each application on the application delivery controller must necessarily be configured to contain and monitor every physical server in the infrastructure. This results in increased configuration and management overhead, and has adverse effects on the network infrastructure, as each virtual server must necessarily ping|query each server in its associated pool|farm|cluster in order to determine available instances of the application it is representing.
This means a lot of additional configuration and network traffic as the application delivery controller attempts to manage the applications it is tasked with delivering.
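A rough back-of-the-envelope calculation, using the thirty-server example above and an assumed five-second health-monitor interval, shows how quickly that traffic adds up:

```python
# Monitoring cost of the static approach: every pool must list every
# physical server on which its application *might* run. The five-second
# health-check interval is an assumption for illustration.
virtual_servers = 3      # one per application
physical_servers = 30    # shared by all three applications
interval_s = 5           # assumed health-monitor interval

# Static model: each of the 3 pools health-checks all 30 hosts.
static_checks = virtual_servers * physical_servers   # 90 per interval

# Dynamic (I2.0) model: each pool checks only its live instances,
# e.g. 4 for Application A and 10 for Application B, as in the example
# above, plus an assumed 2 for Application C.
dynamic_checks = 4 + 10 + 2                          # 16 per interval

per_hour = 3600 // interval_s
print(f"static : {static_checks * per_hour} health checks/hour")   # 64800
print(f"dynamic: {dynamic_checks * per_hour} health checks/hour")  # 11520
```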
THE INFRASTRUCTURE 2.0 SCALING MODEL
The Infrastructure 2.0 scaling model mirrors the traditional scaling model in its behavior but extends it to better fit the dynamic, elastic nature of emerging data center architectures. What makes the I2.0 model much more efficient, and able to scale from a management perspective, is the ability of the application delivery controller to be as dynamic as the infrastructure it supports.
Rather than configuring every instance of a virtual application with every possible physical server, the application delivery controller is notified through standards-based control mechanisms (APIs) when an application is brought online or taken offline, and it automatically configures itself appropriately.
This behavior results in a more efficient architecture: the application delivery controller need only monitor the application servers actually executing applications at the time it performs its status inquiries, which reduces the amount of traffic on the network inside the data center. An agile, adaptable application delivery controller also improves efficiency by reducing the number of pings, connections, and queries it must make of application servers, thus reducing the burden on those servers and ensuring that resources are consumed only when truly necessary.
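As a sketch of what such a notification might look like, the snippet below assumes a controller exposing a REST-style control interface; the endpoint paths and payloads are hypothetical, and real control APIs (F5's iControl, for example) differ in their details. The caller here would typically be an orchestration or provisioning layer acting as instances are started and stopped.

```python
import requests

# Hypothetical control endpoint for the application delivery controller.
CONTROLLER = "https://adc.example.com/api"

def register_instance(pool: str, address: str) -> None:
    """Notify the controller that a new application instance is online."""
    requests.post(f"{CONTROLLER}/pools/{pool}/members",
                  json={"address": address}, timeout=5)

def deregister_instance(pool: str, address: str) -> None:
    """Notify the controller that an application instance has gone offline."""
    requests.delete(f"{CONTROLLER}/pools/{pool}/members/{address}",
                    timeout=5)

# Example: an orchestration layer bringing a new Application A instance
# online on host 10.0.0.42, then retiring it later.
register_instance("app_a", "10.0.0.42")
deregister_instance("app_a", "10.0.0.42")
```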
Implementing an I2.0 application delivery model also requires less rigid control over the IP address space, because it is no longer necessary to hardwire that information into the application delivery controller; the controller can adapt in real time, automatically configuring itself as applications are brought online and offline to deal with increases and decreases in capacity.
The I2.0 model can be implemented by instrumenting applications with standards-based APIs, or through a separate, integrated management mechanism that provides additional functionality by taking advantage of that same standards-based API.
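For the instrumented-application approach, a minimal, self-contained sketch might look like the following; the controller URL, pool name, and endpoint layout are the same hypothetical assumptions as above.

```python
import atexit
import socket

import requests

CONTROLLER = "https://adc.example.com/api"  # hypothetical control endpoint
POOL = "app_a"                              # pool this instance belongs to
MY_ADDRESS = socket.gethostbyname(socket.gethostname())

def join_pool() -> None:
    """Register this instance with the controller at application startup."""
    requests.post(f"{CONTROLLER}/pools/{POOL}/members",
                  json={"address": MY_ADDRESS}, timeout=5)

def leave_pool() -> None:
    """Deregister this instance when the application shuts down."""
    requests.delete(f"{CONTROLLER}/pools/{POOL}/members/{MY_ADDRESS}",
                    timeout=5)

if __name__ == "__main__":
    join_pool()
    atexit.register(leave_pool)  # leave the pool even on a normal exit
    # ... the application's serving loop would run here ...
```

The separate management mechanism differs only in where this logic runs: an orchestration engine makes the same calls on the application's behalf, leaving the application code untouched.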
As Greg so often notes, the increase in IP addresses due to virtualization and cloud computing can quickly result in escalating costs to manage and increased complexity in data center architecture. This is also true at the application layer due to the traditionally static nature of networks and load balancing infrastructure.
Application delivery solutions are necessarily elastic; they are agile and adaptable infrastructure devices capable of responding in real-time to server, network, and application conditions both internal and external to the data center. The use of an application delivery controller to implement a scaling solution for traditional and virtualized environments greatly reduces the burden on servers, on administrators, on the network, and on the client by optimizing, accelerating, and securing the applications it delivers in the most operationally efficient manner possible.