Convergence, consolidation, and common sense.
When WAN optimization was getting its legs under it as a niche in the broader networking industry, it got a little boost from the fact that remote/branch office connectivity was the big focus of data centers and C-level execs in the enterprise. Latency and congested WAN links between corporate data centers and remote offices around the globe were a source of lost productivity. The obvious solution – get thee a fatter pipe – was at the time far too expensive a proposition and, in some cases, not feasible at all. We’d had bandwidth management and other asymmetric solutions in the past, and while they worked well enough for web-based content, the problem now was fat files and the transfer of “big data” across the WAN.
We needed something else.
The problem, it was posited, was simply that there was too much data to traverse the constrained network links tying organizations to remote offices. Thus the answer, logically, was to do away with trying to juggle it all in some sort of priority order and simply make less data. A sound proposition, and one that was nearly simultaneously gaining traction on the consumer side of the equation in the form of real-time web application data compression.
Here we are, many years later, and the proposition is still sound: if the problem is limited bandwidth in the face of applications and their ever-growing data girth, then it behooves the infrastructure to reduce the size of that data as much as possible. This solution – whether implemented through traditional compression techniques, data deduplication, or optimization of transport and application protocols – is effective. It produces faster response times and thus the appearance, at least, of more responsive applications. As the specter of intercloud and cloud computing – and the need to transport ginormous data sets (“big data”) and virtual machine images – continues to loom large on the horizon of most organizations, it makes sense that folks would turn to solutions that are, by definition, focused on the reduction of data as a means to improve performance and transfer success across increasingly constrained networks.
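The data-reduction idea is easy to see in miniature. The sketch below (a toy illustration, not any vendor's implementation; the payload is made up) shows how a highly redundant payload – the kind "big data" transfers are full of – shrinks dramatically under standard compression, which is one of the techniques in the WAN optimization toolbox:

```python
# Minimal sketch of data reduction: the same redundant payload,
# before and after standard (gzip) compression.
import gzip

# Hypothetical payload: repetitive structured records compress well.
payload = b"<record id='1' status='active'/>\n" * 10_000

compressed = gzip.compress(payload)
ratio = len(compressed) / len(payload)

print(f"original:   {len(payload):>8} bytes")
print(f"compressed: {len(compressed):>8} bytes")
print(f"ratio:      {ratio:.2%}")
```

Fewer bytes on the wire means fewer round trips on a constrained link, which is where the "make less data" proposition earns its keep.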
No argument there.
#GartnerDC Major IT Trend #2 is: 'Big Data - The Elephant in the Room'. Growth 800% over next 5 years - w/80% unstructured. Tiering critical
The argument begins when we start looking at the changes in connectivity between then and now. The “internet” is the primary connectivity between users and applications today, even when those users are working from a “remote office.” Cloud computing changes the equation from which the solution of WAN optimization was derived and renders it a less than optimal solution on its own, because it does not fit the connectivity paradigm upon which cloud computing is based – one that is increasingly unmanageable on both ends of the pipe. Luckily, decreasing data size is just one of many methods that can be used to improve application performance, and it should be used in conjunction with those other methods based on context.
Because of the way in which WAN optimization solutions work (in pairs) they are generally the last hop in the corporate network and the first hop into the remote network. This is a static implementation, one that leaves little flexibility. It also assumes the existence of a matching WAN optimization solution – whether hardware or software deployed – on the other end of the pipe. This is not a practical implementation for the most constrained and growing environments – mobile devices – because as an organization you have very little control over the endpoint (device) in the first place (consider the consumerization of IT) and absolutely no control over the network on which it operates.
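The reason the solutions must come in pairs is worth making concrete. Deduplication only works if both ends of the pipe maintain a matching chunk dictionary; the receiver cannot expand a hash reference it has never seen. The following is a deliberately simplified sketch of that symmetry (no real vendor's protocol; chunk size and framing are invented for illustration):

```python
# Toy symmetric deduplication: sender and receiver each keep a chunk
# store, and repeat data travels as 32-byte hash references.
import hashlib

CHUNK = 4096

def chunks(data):
    return [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]

def encode(data, sender_store):
    """Sender side: replace chunks the peer already has with hashes."""
    out = []
    for c in chunks(data):
        h = hashlib.sha256(c).digest()
        if h in sender_store:
            out.append(("ref", h))     # peer has it: send only the hash
        else:
            sender_store[h] = c
            out.append(("raw", c))     # first sight: send the bytes
    return out

def decode(stream, receiver_store):
    """Receiver side: rebuild the data from its matching store."""
    data = b""
    for kind, value in stream:
        if kind == "raw":
            receiver_store[hashlib.sha256(value).digest()] = value
            data += value
        else:
            # This lookup FAILS if there is no paired store on this end --
            # which is exactly why the deployment must be symmetric.
            data += receiver_store[value]
    return data

# Both "appliances" start with matching (empty) dictionaries.
a_store, b_store = {}, {}
data = b"x" * 4096 + b"y" * 4096
first = encode(data, a_store)    # first transfer: raw chunks on the wire
second = encode(data, a_store)   # repeat transfer: only tiny references
assert decode(first, b_store) == data
assert decode(second, b_store) == data
```

Remove either store – say, because the far end is an iPhone on a carrier network you don't control – and the scheme simply has nothing to reference against.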
A traditional WAN optimization solution may be able to help specific classes of mobile devices if the user has installed the appropriate “soft client” that allows the WAN optimization solution to do its data deduplication trick. That’s feasible for corporate users over whom you have control. What about the millions of end-users out there on iPhones, BlackBerries, and tablets over whom you do not have control? They are just as important, and it is performance on which they will judge your organization/offering/solution. They’re an impatient lot, according to both Amazon and Google, and there are no studies to indicate that those conclusions are wrong. Their devices, meanwhile, have garnered enough mindshare to be awarded the right to run even the most stolid of enterprise applications:
Ellen Messmer, Network World
Roughly 75% of senior IT executives plan to make internal applications available to employees on a variety of smartphones and mobile devices, according to new research from McAfee's Trust Digital unit.
In particular, 57% of respondents said they intend to mobilize beyond e-mail and make CRM, ERP and proprietary in-house applications available to mobile devices. In addition, 45% are planning to support the iPhone and Android smartphones due to employee demand, even though many of these organizations already support BlackBerry devices.
Even if the end-user is not using a mobile device, it’s likely that their connection to the Internet exhibits very different characteristics than those experienced by corporate end-users. While download “speeds” have been increasing in the consumer market, we know there’s a difference between throughput and bandwidth, and that there is a relationship between the ability of servers to serve and of consumers to consume. That relationship is often impeded by congestion, packet loss, endpoint resource constraints, and the shared nature of broadband networks. It is simply no longer the case that we can assume ownership of any kind over the endpoint, and certainly not over the network on which it resides.
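The throughput-versus-bandwidth gap can be put in rough numbers using the well-known Mathis et al. approximation for steady-state TCP throughput, throughput ≤ (MSS / RTT) · (1 / √p), where p is the packet loss rate. The link figures below are hypothetical consumer-broadband values, chosen only to illustrate the point:

```python
# Rough upper bound on a single TCP flow's throughput (Mathis et al.):
#     throughput <= (MSS / RTT) * (1 / sqrt(p))
from math import sqrt

MSS = 1460  # bytes per segment (typical Ethernet MTU payload)

def tcp_throughput_bps(rtt_s, loss_rate):
    """Loss-limited TCP throughput bound, in bits per second."""
    return (MSS * 8 / rtt_s) * (1 / sqrt(loss_rate))

# A nominally 20 Mbit/s pipe with 100 ms RTT and 1% packet loss:
bound = tcp_throughput_bps(rtt_s=0.100, loss_rate=0.01)
print(f"achievable: {bound / 1e6:.2f} Mbit/s on a 20 Mbit/s link")
```

On those (assumed) numbers a single flow tops out near 1.2 Mbit/s – a fraction of the advertised bandwidth – which is why loss and latency on networks you don't control matter more than the size of the pipe.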
And then you’ve got cloud. Cloud, oh cloud, wherefore art thou cloud? If you can deploy WAN optimization as a virtual network appliance, then you have to be careful to choose a cloud that supports whatever virtualization platform the vendor currently supports. If you’ve already invested time and effort in a cloud provider and only later determined you need WAN optimization to improve the increased traffic between you and the provider (over the open, unmanaged Internet), you may be in for an unpleasant surprise.
But the even larger problem with WAN optimization as an individual solution is that it loses context. It assumes that it will always need to do its thing on the data. It’s generally automatic, with very little intelligence built into it. The architecture on which such point solutions were developed is not the same data center architecture we’re working with today. As we continue to push the envelope of cloud computing and how it integrates with our data center architectures we find that it may be the case that a user on the LAN is directed to a cloud-hosted application while a user on the WAN is directed to the local corporate data center. In both cases it is (today) difficult to leverage a symmetric WAN optimization solution because in the first case you have little control over the infrastructure deployment and in the latter you probably have no control over the user’s network endpoint.
What you need is a solution that is aware of when it is symmetric and when it isn’t. Atop that, you need a solution that can simultaneously service both users while providing the best possible response time by applying the appropriate optimization and acceleration policies to their responses. That’s context, on-demand. It’s about the application and the user and the network; it’s a collaborative, integrated, unified method of applying delivery policies in real time. It’s not about simply decreasing the amount of data – that’s just one of many varied techniques and methods of improving performance. As with compression, introducing WAN optimization into the flow of data might actually impede performance: deduplication may require more cycles than it would take to just transfer the data across a LAN. And it’s possible that, given the network conditions, decreasing size isn’t enough; you may need to apply TCP optimization and acceleration to the session to improve the transfer at that time.
WAN optimization techniques are just that – techniques – and they should be applied on-demand, as necessary and dictated by the conditions under which requests are made and responses must be delivered.
It’s beneficial to examine the relative importance (and applicability) of WAN optimization solutions in the context of the “big picture.” That picture includes transport- and application-layer impedances that also need to be addressed, as well as the general difficulty of deploying such solutions in the increasingly mobile and virtual environments in which applications are being deployed. A unified approach to application delivery, which encompasses WAN optimization as a service rather than an individual solution, is better suited to interpreting context and applying the appropriate policies in a way that makes sense.
Cloud really brings to the fore the architectural issues with most WAN optimization solutions. Because of the way they work, they must be paired (not an insurmountable obstacle, but an obstacle nonetheless) and they must be the “last hop,” which makes multi-tenant support an interesting proposition. Contrast that with WAN application delivery services, which recognize that WAN links (really, any higher-latency, constrained links) require different behavior at the network, transport, and even application layers in terms of delivery, and you’ll find that the latter makes much more sense in the dynamic, services-oriented cloud computing environments available today. It’s just part of the bigger picture – the application delivery picture – and it has to become more integrated if it’s going to be useful for multi-tenant, dynamic environments like cloud computing.
Just as load balancing is no longer a solution of its own, WAN optimization has become a feature of a broader, holistic unified application delivery solution.