Examining architectures on which hybrid clouds are based…
IT professionals, in general, appear to consider themselves well along the path toward IT as a Service, with a significant plurality of them engaged in implementing the building blocks necessary to support the effort. IaaS, PaaS, and hybrid cloud computing models are essential if IT is to realize an environment in which (manageable) IT as a Service can become reality.
That IT professionals (65% of them, to be exact) report their organization is in the midst of, or has already completed, a hybrid cloud implementation is telling, as it indicates a desire to leverage resources from a public cloud provider.
What the simple “hybrid cloud” moniker doesn’t illuminate is how IT organizations are implementing such a beast. To be sure, integration is always a rough road, and integrating not just resources but their supporting infrastructure is certainly a non-trivial task. That’s especially true given that no “standard” or even “best practices” means of integrating the infrastructure between a cloud and a corporate data center exists.
Specifications designed to address this gap are emerging and there are a number of commercial solutions available that provide the capability to transparently bridge cloud-hosted resources with the corporate data center.
Without diving into the mechanism – standards-based or product solution – we can still examine the integration model from the perspective of its architectural goals, its advantages and disadvantages.
The basic premise of a bridged-cloud integration architecture is to transparently enable communication with and use of cloud-deployed resources. While the most common resources to be integrated are compute, they may also be network- or storage-focused. A bridged-cloud integration architecture provides a seamless view of those resources: infrastructure and applications deployed within the data center are able to communicate in an environment-agnostic manner, with no awareness of location required.
This is the premise of the network-oriented standards emerging as a solution: they promise the ability to extend the corporate data center network into a public cloud (or other geographically disparate location) network and make the two appear as a single, logical network.
Because infrastructure components rely on network topology, this is an important capability. Infrastructure within the data center, and the services it provides, can interact with and continue to enforce or apply policies to resources located outside the data center. Those resources can be treated as being “on” the local network by infrastructure and applications without modification.
Basically, bridging normalizes the IP address space across disparate environments.
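This normalization can be illustrated with a short sketch. The subnet and host addresses below are hypothetical, chosen only for illustration: after bridging, cloud-hosted instances carry addresses from the same logical network as data-center hosts, so nothing upstream can distinguish them by location.

```python
import ipaddress

# Hypothetical logical network spanning data center and cloud after bridging.
LOGICAL_NET = ipaddress.ip_network("10.1.0.0/16")

datacenter_hosts = ["10.1.0.10", "10.1.0.11"]
cloud_hosts = ["10.1.200.5", "10.1.200.6"]  # bridged into 10.1.0.0/16

def on_local_network(addr: str) -> bool:
    """Infrastructure sees only the logical network, not physical location."""
    return ipaddress.ip_address(addr) in LOGICAL_NET

# Every host, local or cloud-hosted, appears "on" the same local network.
for host in datacenter_hosts + cloud_hosts:
    assert on_local_network(host)
```

The point of the sketch is that a policy keyed to the logical address space applies uniformly, regardless of where the resource physically runs.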
Obviously, this approach affords IT a greater measure of control over cloud-deployed resources than would otherwise be available. Resources and applications in “the cloud” can be integrated with corporate-deployed services in a way that is far less disruptive to the end user. For example, a load balancing service can easily extend its pool of resources into the cloud to scale an application without adjusting its network configuration (VLANs, routing, ACLs, etc.) because all resources are available on existing logical networks. This has the added benefit of maintaining operational consistency, especially from a security perspective, as existing access and application security controls are applied inline.
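The load balancing example can be sketched as follows. The pool class and member addresses are hypothetical; the point is that in a bridged architecture, adding a cloud-hosted member is indistinguishable from adding a local one, so no network reconfiguration accompanies the scale-out.

```python
class Pool:
    """Minimal round-robin load balancing pool (illustrative only)."""

    def __init__(self):
        self.members = []
        self._next = 0

    def add_member(self, addr: str) -> None:
        # No VLAN, routing, or ACL changes required: in a bridged
        # architecture the new member is already on the logical network.
        self.members.append(addr)

    def next_member(self) -> str:
        addr = self.members[self._next % len(self.members)]
        self._next += 1
        return addr

pool = Pool()
pool.add_member("10.1.0.10")   # data-center instance
pool.add_member("10.1.200.5")  # cloud instance, bridged in transparently
```

Requests are then distributed across both instances exactly as they would be across two local servers.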
All is not rosy in bridging land, however, as there are negatives to this approach. The most obvious is the impact on performance. Latency across the Internet, implied by the integration of cloud-based resources, must be considered when deciding which uses those remote resources should be put to. Scaling a highly latency-sensitive application with remote resources in a bridged architecture may incur too high a performance penalty. Applications integrated using out-of-band processing, on the other hand (say, an application that periodically polls for new data and processes it in bulk, behind the scenes) may be well suited to such an architecture, as latency is not usually an issue there.
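The placement decision can be reduced to a simple latency-budget check. The latency figures below are invented for illustration; the actual numbers depend on the provider, the path, and the workload.

```python
# Hypothetical round-trip latencies in milliseconds, for illustration only.
LATENCY_MS = {
    "datacenter": 1.0,      # same-rack / same-LAN hop
    "bridged_cloud": 45.0,  # WAN round trip to the bridged provider
}

def suitable_for(budget_ms: float, location: str) -> bool:
    """A workload fits a location only if the hop stays within its budget."""
    return LATENCY_MS[location] <= budget_ms
```

An interactive request path with, say, a 10 ms budget would be kept local, while an out-of-band batch job with a 500 ms budget tolerates the bridged cloud comfortably.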
The bridging model also does not address the need for fault tolerance. If you are relying on remote resources to ensure scalability, such that without them failure may result, you run the risk of a connectivity issue causing an outage. It may be necessary to employ a tertiary provider, which could increase network complexity and require changes to infrastructure to support.
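One way to frame the failover logic is as an ordered walk down a provider list, falling through when connectivity to a provider is lost. The provider names and the reachability map are hypothetical stand-ins for whatever health-checking mechanism is actually in place.

```python
def select_provider(providers: list, reachable: dict) -> str:
    """Return the first provider with live connectivity, in priority order.

    `reachable` stands in for real health checks (hypothetical here).
    """
    for provider in providers:
        if reachable.get(provider, False):
            return provider
    raise RuntimeError("no provider reachable: outage")

# Priority order: primary cloud first, tertiary provider as fallback.
providers = ["primary-cloud", "tertiary-provider"]
```

The complexity cost shows up not in this selection logic, which is trivial, but in the network and infrastructure changes needed to bridge each additional provider into the logical network in the first place.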
Next time we’ll examine a second approach to cloud infrastructure integration: virtualization.