Solutions are Strategic. Technology is Tactical.
And it all begins with the business. Last week was one of those weeks where my to-do list was growing twice as fast as I was checking things off. And when that happens you know some things end up deprioritized and just don’t get the attention you know they deserve. Such was the case with a question from eBizQ regarding the relationship between strategy and technology: Does strategy always trump technology? As Joe Shepley wonders in this interesting post, Strategy Trumps Technology Every Time, could you have an enterprise content management strategy without ECM technology. So do you think strategy trumps technology every time? I answered with a short response because, well, it was a very long week: I wish I had more time to expound on this one today but essentially technology is a tactical means to implement a solution as part of the execution on a strategy designed to address a business need/problem. That definitely deserves more exploration and explanation. STRATEGY versus TACTICS The reason this was my answer is the difference between strategy and tactics. Strategy is the overarching goal; it’s the purpose to which you are working. Tactics, on the other hand, are specific details regarding how you’re going to achieve that goal. Let’s apply it to something more mundane. For example: The focus of the strategy may be very narrow – consuming a sammich – or it may be very broad and vague, as it often is when applied to military or business strategy. Regardless, a strategy is always in response to some challenge and defines the goal, the solution, to addressing the challenge. Business analysts don’t sit around, after all, and posit that the solution to increasing call duration in the call center is to implement software X deployed on a cloud computing framework. The solution is to improve the productivity of the customer service representatives. That may result in the implementation of a new CRM system, i.e. technology, but it just as well may be a more streamlined business process that requires changes in the integration of the relevant IT systems. The implementation, the technology, is tactical. Tactics are more specific. In military strategy the tactics are often refined as the strategy is imparted down the chain of command. If the challenge is to stop the enemy from crossing a bridge, the tactics will be very dependent on the resources and personnel available to each commander as they receive their orders. A tank battalion, for example, is going to use different tactics than the engineer corps, because they have different resources, equipment and ultimately perspectives on how to go about achieving any stated goal. The same is true for IT organizations. The question posed was focused on enterprise content management, but you can easily abstract this out to an enterprise architecture strategy or application delivery strategy or cloud computing strategy. Having a strategy does not require a related technology because technology is tactical, solutions are strategic. The challenge for an organization may be too much content or it may be that it’s process-related, e.g. the approval process for content as it moves through the publication cycle is not well-defined, or has a single point of failure in it that causes delays in publication. The solution is the strategy. 
For the former it may be to implement an enterprise content management solution; for the latter it may be to sit down and hammer out a better process, and even to acquire and deploy a workflow or BPM (Business Process Management) solution that is better able to manage fluctuations in people and the process. The tactics are the technology; it's the how we're going to do it as opposed to the what we're going to do.

CHALLENGE –> SOLUTION –> TECHNOLOGY

This is an important distinction: to separate solutions from technology, strategy from tactics. If the business declares that the risk of a data breach is too high to bear, the enterprise IT strategy is not to implement a specific technology but to discover and plug all the possible "holes" in the strategic lines of defense. The solution to a vulnerability in an application is "web application security". The technology may be a web application firewall (WAF) or it may be vulnerability scanning solutions run on pre-deployed code to identify potential vulnerabilities.

When we talk about strategic points of control we aren't necessarily talking about specific technology but rather solutions and those locations within the data center that are best able to be leveraged tactically for a wide variety of strategic solutions. The strategic trifecta is a good example of this model because it's based on the same concepts: that a strategy is driven by a business challenge or need and executed upon using technology. The solution is not the implementation; it's not the tactical response. Technology doesn't enter into the picture until we get down to the implementation, to the specific products and platforms we need to implement a strategy consistent with meeting the defined business goal or challenge.

The question remains whether "strategy trumps technology" or not, and what I was trying to impart is what a subsequent response said much more eloquently and concisely:

The question isn't which one trumps but how should they be aligned in order to provide value to the customer. -- Kathy Long

There shouldn't be a struggle between the two for top billing honors. They are related, after all; a strategy needs to be implemented, to be executed upon, and that requires technology. It's more a question of which comes first in a process that should be focused on solving a specific problem or meeting some business challenge. Strategy needs to be defined before implementation because if you don't know what the end-goal is, you really can't claim victory or admit defeat.

A solution is strategic, technology is tactical. This distinction can help IT by forcing more attention on the business and solutions layer, as it is at the strategic layer that IT is able to align itself with the business and provide greater value to the entire organization.

Does strategy always trump technology?
What CIOs Can Learn from the Spartans
Operational Risk Comprises More Than Just Security
The Strategy Not Taken: Broken Doesn't Mean What You Think It Means
What is a Strategic Point of Control Anyway?
Cloud is the How not the What

Applying 'Centralized Control, Decentralized Execution' to Network Architecture
#SDN brings to the fore some critical differences between concepts of control and execution

While most discussions with respect to SDN are focused on a variety of architectural questions (me included) and technical capabilities, there's another very important concept that needs to be considered: control and execution. SDN definitions include the notion of centralized control through a single point of control in the network, a controller. It is through the controller that all important decisions are made regarding the flow of traffic through the network, i.e. execution. This is not feasible, at least not in very large (or even just large) networks. Nor is it feasible beyond simple L2/3 routing and forwarding.

HERE COMES the SCIENCE (of WAR)

There is very little more dynamic than combat operations. People, vehicles, supplies – all are distributed across what can be very disparate locations. One of the lessons the military has learned over time (sometimes quite painfully through experience) is the difference between control and execution. This has led to decisions to employ what is called "Centralized Control, Decentralized Execution."

Joint Publication (JP) 1-02, Department of Defense Dictionary of Military and Associated Terms, defines centralized control as follows: "In joint air operations, placing within one commander the responsibility and authority for planning, directing, and coordinating a military operation or group/category of operations." JP 1-02 defines decentralized execution as "delegation of execution authority to subordinate commanders."

Decentralized execution is the preferred mode of operation for dynamic combat operations. Commanders who clearly communicate their guidance and intent through broad mission-based or effects-based orders rather than through narrowly defined tasks maximize that type of execution. Mission-based or effects-based guidance allows subordinates the initiative to exploit opportunities in rapidly changing, fluid situations.

-- Defining Decentralized Execution in Order to Recognize Centralized Execution, Lt Col Woody W. Parramore, USAF, Retired

Applying this to IT network operations means a single point of control is contradictory to the "mission" and actually interferes with the ability of subordinates (strategic points of control) to dynamically adapt to rapidly changing, fluid situations such as those experienced in virtual and cloud computing environments. Not only does a single, centralized point of control (which in the SDN scenario implies control over execution through admittedly dynamically configured but rigidly executed policies) abrogate responsibility for adapting to "rapidly changing, fluid situations," but it also becomes the weakest link.

Clausewitz, in the highly read and respected "On War", defines a center of gravity as "the hub of all power and movement, on which everything depends. That is the point against which all our energies should be directed." Most military scholars and strategists logically infer from the notion of a Clausewitzian center of gravity the existence of a critical weak link. If the "controller" in an SDN is the center of gravity, then it follows that it is likely a critical, weak link. This does not mean the model is broken, or poorly conceived of, or a bad idea. What it means is that this issue needs to be addressed. The modern strategy of "Centralized Control, Decentralized Execution" does just that.
Centralized Control, Decentralized Execution in the Network The major issue with the notion of a centralized controller is the same one air combat operations experienced in the latter part of the 20th century: agility, or more appropriately, lack thereof. Imagine a large network adopting fully an SDN as defined today. A single controller is responsible for managing the direction of traffic at L2-3 across the vast expanse of the data center. Imagine a node, behind a Load balancer, deep in the application infrastructure, fails. The controller must respond and instruct both the load balancing service and the core network how to react, but first it must be notified. It’s simply impossible to recover from a node or link failure in 50 milliseconds (a typical requirement in networks handling voice traffic) when it takes longer to get a reply from the central controller. There’s also the “slight” problem of network devices losing connectivity with the central controller if the primary uplink fails. -- OpenFlow/SDN Is Not A Silver Bullet For Network Scalability, Ivan Pepelnjak (CCIE#1354 Emeritus) Chief Technology Advisor at NIL Data Communications The controller, the center of network gravity, becomes the weak link, slowing down responses and inhibiting the network (and IT) from responding in a rapid manner to evolving situations. This does not mean the model is a failure. It means the model must adapt to take into consideration the need to adapt more quickly. This is where decentralized execution comes in, and why predictions that SDN will evolve into an overarching management system rather than an operational one are likely correct. There exist today, within the network, strategic points of control; locations within the data center architecture at which traffic (data) is aggregated, forcing all data to traverse, from which control over traffic and data is maintained. These locations are where decentralized execution can fulfill the “mission-based guidance” offered through centralized control. Certainly it is advantageous to both business and operations to centrally define and codify the operating parameters and goals of data center networking components (from L2 through L7), but it is neither efficient nor practical to assume that a single, centralized controller can achieve both managing and executing on the goals. What the military learned in its early attempts at air combat operations was that by relying on a single entity to make operational decisions in real time regarding the state of the mission on the ground, missions failed. Airmen, unable to dynamically adjust their actions based on current conditions, were forced to watch situations deteriorate rapidly while waiting for central command (controller) to receive updates and issue new orders. Thus, central command (controller) has moved to issuing mission or effects-based objectives and allowing the airmen (strategic points of control) to execute in a way that achieves those objectives, in whatever way (given a set of constraints) they deem necessary based on current conditions. This model is highly preferable (and much more feasible given today’s technology) than the one proffered today by SDN. It may be that such an extended model can easily be implemented by distributing a number of controllers throughout the network and federating them with a policy-driven control system that defines the mission, but leaves execution up to the distributed control points – the strategic control points. 
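To make the distinction concrete, here is a minimal, purely illustrative sketch of the model described above: a controller publishes mission-style objectives once, and each strategic point of control executes against them using only locally observed state — no round-trip to the controller is needed to route around a failed node. The class names, thresholds, and pool layout are assumptions invented for this sketch, not any particular controller's or vendor's API.

```python
import random
from dataclasses import dataclass


@dataclass
class MissionPolicy:
    """Mission-style guidance published by the central controller: it states
    the objective (acceptable response time, healthy members only) rather
    than dictating each individual forwarding decision."""
    max_response_ms: float = 200.0


@dataclass
class PoolMember:
    name: str
    healthy: bool = True
    response_ms: float = 50.0


class StrategicPointOfControl:
    """A local enforcement point that executes the mission on its own,
    using locally observed health data."""

    def __init__(self, policy: MissionPolicy, members: list[PoolMember]):
        self.policy = policy
        self.members = members

    def pick(self) -> PoolMember | None:
        """Decentralized execution: choose a member that satisfies the
        mission right now, without asking the controller."""
        eligible = [m for m in self.members
                    if m.healthy and m.response_ms <= self.policy.max_response_ms]
        return random.choice(eligible) if eligible else None


# Centralized control: intent is defined once, centrally...
policy = MissionPolicy(max_response_ms=200.0)

# ...and executed locally against local state.
pool = [PoolMember("app-1"), PoolMember("app-2"), PoolMember("app-3")]
spc = StrategicPointOfControl(policy, pool)

pool[0].healthy = False            # a node fails
survivor = spc.pick()              # the local decision routes around it
print(survivor.name if survivor else "no eligible members")
```

The point of the sketch is the division of labor: the controller owns the "what" (the mission), the distributed points of control own the "how" and can react at local speed.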
SDN is new, it's exciting, it's got potential to be the "next big thing." Like all nascent technology and models, it will go through some evolutionary massaging as we dig into it and figure out where and why and how it can be used to its greatest potential and organizations' greatest advantage. One thing we don't want to do is replicate erroneous strategies of the past. No network model abrogating all control over execution has ever really worked. All successful models have been distributed, federated models in which control may be centralized, but execution is decentralized.

Can we improve upon that? I think SDN does in its recognition that static configuration is holding us back. But its decision to rein in all control while addressing that issue may very well give rise to new issues that will need resolution before SDN can become a widely adopted model of networking.

QoS without Context: Good for the Network, Not So Good for the End user
Cyclomatic Complexity of OpenFlow-Based SDN May Drive Market Innovation
SDN, OpenFlow, and Infrastructure 2.0
OpenFlow/SDN Is Not A Silver Bullet For Network Scalability
Prediction: OpenFlow Is Dead by 2014; SDN Reborn in Network Management
OpenFlow and Software Defined Networking: Is It Routing or Switching?
Cloud Security: It's All About (Extreme Elastic) Control
Ecosystems are Always in Flux
The Full-Proxy Data Center Architecture

Does your virtualization strategy create an SEP field?
There is a lot of hype around all types of virtualization today, with one of the primary drivers often cited being a reduction in management costs. I was pondering whether or not that hype was true, given the amount of work that goes into setting up not only the virtual image, but the infrastructure necessary to properly deliver the images and the applications they contain. We've been using imaging technology for a long time, especially in lab and testing environments. It made sense then because a lot of work goes into setting up a server and the applications running on it before it's "imaged' for rapid deployment use. Virtual images that run inside virtualization servers like VMWare brought not just the ability to rapidly deploy a new server and its associated applications, but the ability to do so in near real-time. But it's not the virtualization of the operating system that really offers a huge return on investment, it's the virtualization of the applications that are packaged up in a virtual image that offers the most benefits. While there's certainly a lot of work that goes into deploying a server OS - the actual installation, configuration, patching, more patching, and licensing - there's even more work that goes into deploying an application simply because they can be ... fussy. So once you have a server and application configured and ready to deploy, it certainly makes sense that you'd want to "capture" it so that it can be rapidly deployed in the future. Without the proper infrastructure, however, the benefits can be drastically reduced. Four questions immediately come to mind that require some answers: Where will the images be stored? How will you manage the applications running on deployed virtual images? What about updates and patches to not only the server OS but the applications themselves? What about changes to your infrastructure? The savings realized by reducing the management and administrative costs of building, testing, and deploying an application in a virtual environment can be negated by a simple change to your infrastructure, or the need to upgrade/patch the application or operating system. Because the image is a basically a snapshot, that snapshot needs to change as the environment in which it runs changes. And the environment means more than just the server OS, it means the network, application, and delivery infrastructure. Addressing the complexity involved in such an environment requires an intelligent, flexible infrastructure that supports virtualization. And not just OS virtualization, but other forms of virtualization such as server virtualization and storage or file virtualization. There's a lot more to virtualization than just setting up a VMWare server, creating some images and slapping each other on the back for a job well done. If your infrastructure isn't ready to support a virtualized environment then you've simply shifted the costs - and responsibility - associated with deploying servers and applications to someone else and, in many cases, several someone elses. If you haven't considered how you're going to deliver the applications on those virtual images then you're in danger of simply shifting the costs of delivering applications elsewhere. 
Without a solid infrastructure that can support the dynamic environment created by virtual imaging, the benefits you think you're getting quickly diminish as other groups are suddenly working overtime to configure and manage the rest of the infrastructure necessary to deliver those images and applications to servers and users. We often talk about silos in terms of network and application groups, but virtualization has the potential to create yet another silo, and that silo may be taller and more costly than anyone has yet considered.

Virtualization has many benefits to you and your organization. Consider carefully whether your infrastructure is prepared to support virtualization or risk discovering that implementing a virtualized solution is creating an SEP (Somebody Else's Problem) field around delivering and managing those images.

Operational Risk Comprises More Than Just Security
Recognizing the relationship between and subsequently addressing the three core operational risks in the data center will result in a stronger operational posture.

Risk is not a synonym for lack of security. Neither is managing risk a euphemism for information security. Risk – especially operational risk – comprises a lot more than just security. In operational terms, the chance of loss is not just about data/information, but of availability. Of performance. Of customer perception. Of critical business functions. Of productivity. Operational risk is not just about security; it's about the potential damage incurred by a loss of availability or performance as measured by the business. Downtime costs the business; both hard and soft costs are associated with downtime, and the numbers can be staggering depending on the particular vertical industry in which a business may operate. But in all cases, regardless of industry, the end result is the same: downtime and poor performance are risks that directly impact the bottom line.

Operational risk comprises concerns regarding:
Performance
Availability / reliability
Security

These three concerns are intimately bound up in one another. For example, a denial of service attack left unaddressed and able to penetrate to the database tier in the data center can degrade performance, which may impact availability – whether by directly causing an outage or through deterioration of performance such that systems are no longer able to meet service level agreements mandating specific response times. The danger in assuming operational risk is all about security is that it leads to a tunnel-vision view through which other factors that directly impact operational reliability may be obscured. The notion of operational risk is most often discussed as it relates to cloud computing, but it is only because cloud computing raises the components of operational risk to a visible level that the two are put hand-in-hand.

CONSISTENT REPETITION of SUCCESSFUL DEPLOYMENTS

When we talk about repeatable deployment processes and devops, it's not the application deployment itself that we necessarily seek to make repeatable – although in cases where scaling processes may be automated that certainly aids in operational efficiency and addresses all facets of operational risk. It's the processes – the configuration and policy deployment – involving the underlying network and application network infrastructure that we seek to make repeatable, to avoid the inevitable introduction of errors and subsequently downtime due to human error. This is not to say that security is not part of that repeatable process, because it is. It's to say that it is only one piece of a much larger set of processes that must be orchestrated in such a way as to provide for consistent repetition of successful deployments that alleviates operational risk associated with the deployment of applications.

Human error by contractor Northrop Grumman Corp. was to blame for a computer system crash that idled many state government agencies for days in August, according to an external audit completed at Gov. Bob McDonnell's request. The audit, by technology consulting firm Agilysis and released Tuesday, found that Northrop Grumman had not planned for an event such as the failure of a memory board, aggravating the failure. It also found that the data loss and the delay in restoration resulted from a failure to follow industry best practices.
At least two dozen agencies were affected by the late-August statewide crash of the Virginia Information Technologies Agency. The crash paralyzed the departments of Taxation and Motor Vehicles, leaving people unable to renew drivers licenses. The disruption also affected 13 percent of Virginia's executive branch file servers. -- Audit: Contractor, Human Error Caused Va Outage (ABC News, February 2011) There are myriad points along the application deployment path at which an error might be introduced. Failure to add the application node to the appropriate load balancing pool; failure to properly monitor the application for health and performance; failure to apply the appropriate security and/or network routing policies. A misstep or misconfiguration at any point in this process can result in downtime or poor performance, both of which are also operational risks. Virtualization and cloud computing can complexify this process by adding another layer of configuration and policies that need to be addressed, but even without these technologies the risk remains. There are two sides to operational efficiency – the deployment/configuration side and the run-time side. During deployment it is configuration and integration that is the focus of efforts to improve efficiency. Leveraging devops and automation as a means to create a repeatable infrastructure deployment process is critical to achieving operational efficiency during deployment. Achieving run-time operational efficiency often utilizes a subset of operational deployment processes, addressing the critical need to dynamically modify security policies and resource availability based on demand. Many of the same processes that enable a successful deployment can be – and should be – reused as a means to address changes in demand. Successfully leveraging repeatable sub-processes at run-time, dynamically, requires that operational folks – devops – takes a development-oriented approach to abstracting processes into discrete, repeatable functions. It requires recognition that some portions of the process are repeated both at deployment and run-time and then specifically ensuring that the sub-process is able to execute on its own such that it can be invoked as a separate, stand-alone process. This efficiency allows IT to address operational risks associated with performance and availability by allowing IT to react more quickly to changes in demand that may impact performance or availability as well as failures internal to the architecture that may otherwise cause outages or poor performance which, in business stakeholder speak, can be interpreted as downtime. RISK FACTOR: Repeatable deployment processes address operational risk by reducing possibility of downtime due to human error. ADAPTION within CONTEXT Performance and availability are operational concerns and failure to sustain acceptable levels of either incur real business loss in the form of lost productivity or in the case of transactional-oriented applications, revenue. These operational risks are often addressed on a per-incident basis, with reactive solutions rather than proactive policies and processes. A proactive approach combines repeatable deployment processes to enable appropriate auto-scaling policies to combat the “flash crowd” syndrome that so often overwhelms unprepared sites along with a dynamic application delivery infrastructure capable of automatically adjusting delivery policies based on context to maintain consistent performance levels. 
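One way to make the repeatable sub-process idea concrete is to treat "ensure this node is in the right pool with the right health monitor" as a single idempotent routine that both the initial deployment workflow and later run-time scaling events call. The sketch below is illustrative only; the pool model and function names are assumptions, not any vendor's API.

```python
from dataclasses import dataclass, field


@dataclass
class Pool:
    """A simplified load balancing pool: a set of members plus a monitor."""
    name: str
    members: set[str] = field(default_factory=set)
    monitor: str | None = None


def ensure_member(pool: Pool, address: str, monitor: str) -> bool:
    """Idempotently ensure a node is in the pool and health-monitored.

    Safe to call from the initial deployment run *and* from an auto-scaling
    event later; calling it twice changes nothing, which is what takes the
    error-prone manual step out of the process. Returns True if anything
    actually changed."""
    changed = False
    if address not in pool.members:
        pool.members.add(address)
        changed = True
    if pool.monitor != monitor:
        pool.monitor = monitor
        changed = True
    return changed


def remove_member(pool: Pool, address: str) -> bool:
    """The matching de-scaling step, so capacity can shrink as well as grow."""
    if address in pool.members:
        pool.members.discard(address)
        return True
    return False


web_pool = Pool("web-app")

# Deployment time: the repeatable process adds the first instances.
for node in ("10.0.1.10:80", "10.0.1.11:80"):
    ensure_member(web_pool, node, monitor="http-200-check")

# Run time: the very same sub-process handles a demand spike...
ensure_member(web_pool, "10.0.1.12:80", monitor="http-200-check")
# ...and is harmless if an automation step retries it.
ensure_member(web_pool, "10.0.1.12:80", monitor="http-200-check")

print(sorted(web_pool.members), web_pool.monitor)
```

Because the routine is idempotent, the same call can be wired into the deployment pipeline and the auto-scaling trigger without either having to know what the other has already done.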
Downtime and slowdown can and will happen to all websites. However, sometimes the timing can be very bad, and a flower website having problems during business hours on Valentine's Day, or even the days leading up to Valentine's Day, is a prime example of bad timing. In most cases this could likely have been avoided if the websites had been better prepared to handle the additional traffic. Instead, some of these sites have ended up losing sales and goodwill (slow websites tend to be quite a frustrating experience).

-- Flower sites hit hard by Valentine's Day

At run-time this includes not only auto-scaling, but appropriate load balancing and application request routing algorithms that leverage intelligent and context-aware health-monitoring implementations that enable a balance between availability and performance to be struck. This balance results in consistent performance and the maintaining of availability even as new resources are added and removed from the available "pool" from which responses are served. Whether these additional resources are culled from a cloud computing provider or an internal array of virtualized applications is not important; what is important is that the resources can be added and removed dynamically, on-demand, and their "health" monitored during usage to ensure the proper operational balance between performance and availability.

By leveraging a context-aware application delivery infrastructure, organizations can address the operational risk of degrading performance or outright downtime by codifying operational policies that allow components to determine how to apply network and protocol-layer optimizations to meet expected operational goals. A proactive approach has "side effect" benefits of shifting the burden of policy management from people to technology, resulting in a more efficient operational posture.

RISK FACTOR: Dynamically applying policies and making request routing decisions based on context addresses operational risk by improving performance and assuring availability.

Operational risk comprises much more than simply security and it's important to remember that because all three primary components of operational risk – performance, availability and security – are very much bound up and tied together, like the three strands that come together to form a braid. And for the same reasons a braid is stronger than its composite strands, an operational strategy that addresses all three factors will be far superior to one in which each individual concern is treated as a stand-alone issue.

It's Called Cloud Computing not Cheap Computing
Challenging the Firewall Data Center Dogma
There Is No Such Thing as Cloud Security
The Inevitable Eventual Consistency of Cloud Computing
The Great Client-Server Architecture Myth
IDC Survey: Risk In The Cloud
Risk is not a Synonym for "Lack of Security"
When Everything is a Threat Nothing is a Threat
The Corollary to Hoff's Law

Amazon Outage Casts a Shadow on SDN
#SDN #AWS #Cloud Amazon's latest outage casts a shadow on the ability of software-defined networking to respond to catastrophic failure

Much of the chatter regarding the Amazon outage has been focused on issues related to global reliability and failover and multi-region deployments. The issue of costs associated with duplicating storage and infrastructure services has been raised, and much advice given on how to avoid the negative impact of a future outage at any cloud provider. But reading through the issues discovered during the outages caused specifically by Amazon's control plane for EC2 and EBS, one discovers a more subtle story.

After reading, it seems easy to come to the conclusion that Amazon's infrastructure is, in practice if not theory, an SDN-based network architecture. Control planes (with which customers and systems interact via its API) are separated from the actual data planes, and used to communicate constantly to assure service quality and perform more mundane operations across the entire cloud. After power was restored, the problems with this approach to such a massive system became evident in the inability of its control plane to scale.

The duration of the recovery time for the EC2 and EBS control planes was the result of our inability to rapidly fail over to a new primary datastore. Because the ELB control plane currently manages requests for the US East-1 Region through a shared queue, it fell increasingly behind in processing these requests; and pretty soon, these requests started taking a very long time to complete.

-- Summary of the AWS Service Event in the US East Region

This architecture is similar to the one described by SDN proponents, where control is centralized and orders are dispatched through a single controller. In the case of Amazon, that single controller is a shared queue. As we know now, this did not scale well. While recovery time duration may be tied to the excessive time it took to fail over to a new primary data store, the excruciating slowness with which services were ultimately restored to customers' customers was almost certainly due exclusively to the inability of the control plane to scale under load.

This is not a new issue. The inability of SDN to ultimately scale in the face of very high loads has been noted by many experts, who cite the bottleneck such an architecture inserts into networking infrastructure – in conjunction with inadequate response times – as the primary cause of its failure to scale. Traditional load balancing services – both global and local – deal with failure through redundancy and state mirroring. ELB mimics state mirroring through the use of a shared data store, much in the same way applications share state by sharing a data store. The difference is that the traditional load balancing services are able to detect and react to failures in sub-second time, whereas a distributed, shared application-based system cannot. In fact, one instance of ELB is unlikely to be aware another has failed by design – only the controller of the overarching system is aware of such failures, as it is the primary mechanism through which such failures are addressed. Traditional load balancing services are instantly aware of such failures, and enact counter-measures automatically – without being required to wait for customers to move resources from one zone to another to compensate.
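Some rough, back-of-the-envelope arithmetic shows why a single shared queue becomes the bottleneck once a large failure triggers a storm of control-plane requests: when requests arrive faster than they can be serviced, backlog — and therefore every customer's wait — grows without bound. The arrival and service rates below are invented purely for illustration; they are not Amazon's numbers.

```python
def backlog_after(arrival_per_sec: float, service_per_sec: float,
                  seconds: float) -> float:
    """Outstanding requests in a single shared queue after `seconds`,
    assuming it starts empty and no work is shed."""
    return max(0.0, (arrival_per_sec - service_per_sec) * seconds)


# Illustrative numbers only: a recovery storm generates 500 control-plane
# requests per second while the shared queue can service 200 per second.
arrival, service = 500.0, 200.0

for minutes in (1, 10, 60):
    queued = backlog_after(arrival, service, minutes * 60)
    wait_min = queued / service / 60      # wait seen by the newest request
    print(f"after {minutes:>2} min: {queued:>9,.0f} queued, "
          f"~{wait_min:.0f} min wait")

# By contrast, a distributed health-monitoring model has no shared queue:
# each data-plane device detects a failed member itself, typically in
# sub-second time, no matter how many other failures are in flight.
```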
A traditional load balancing architecture is designed to address such a failure automatically; it is one of the primary purposes for which load balancers are designed and used across the globe today. These differences are not necessarily apparent or all that important in day-to-day operations when things are running smoothly. They only rise to the surface in the event of a catastrophic failure, and even then, in a well-architected system they are not cause for concern, but rather relief.

One can extend the issues with this SDN-like model for load balancing to the L2-3 network services SDN is designed to serve. The same issues with shared queues and a centralized model will be exposed in the event of a catastrophic failure. Excessive requests in the shared queue (or bus) result in the inability of the control plane to adequately scale to meet the demand experienced when the entire network must "come back online" after an outage. Even if the performance of an SDN is acceptable during normal operations, its ability to restore the network after a failure may not be.

It would be unwise to ignore the issues experienced by Amazon because it does not call its ELB architecture SDN. In every sense of the term, it acts like an SDN for L4+, and this outage has exposed a potentially fatal flaw in the architecture that must be addressed moving forward.

LESSON LEARNED: SDN requires that both the control and data planes be architected for failure, and able to respond and scale instantaneously.

Applying 'Centralized Control, Decentralized Execution' to Network Architecture
WILS: Virtualization, Clustering, and Disaster Recovery
OpenFlow/SDN Is Not A Silver Bullet For Network Scalability
Summary of the AWS Service Event in the US East Region
After The Storm: Architecting AWS for Reliability
QoS without Context: Good for the Network, Not So Good for the End user
SDN, OpenFlow, and Infrastructure 2.0

What CIOs Can Learn from the Spartans
When your data center is constantly under pressure to address operational risks, try leveraging some ancient wisdom from King Leonidas and William Wallace The Battle of Thermopylae is most often remembered for the valiant stand of the "300". In case you aren't familiar, three hundred Spartans (and a supporting cast of city-state nations) held off the much more impressively numbered armies of Prince Xerces for a total of seven days before being annihilated. A Greek force of approximately 7,000 men marched north to block the pass in the summer of 480 BC. The Persian army, alleged by the ancient sources to have numbered in the millions but today considered to have been much smaller (various figures are given by scholars ranging between about 100,000 and 300,000), arrived at the pass in late August or early September. Vastly outnumbered, the Greeks held off the Persians for seven days in total (including three of battle), before the rear-guard was annihilated in one of history's most famous last stands. During two full days of battle, the small force led by King Leonidas I of Sparta blocked the only road by which the massive Persian army could pass. After the second day of battle, a local resident named Ephialtes betrayed the Greeks by revealing a small path that led behind the Greek lines. Aware that his force was being outflanked, Leonidas dismissed the bulk of the Greek army, and remained to guard the rear with 300 Spartans, 700 Thespians, 400 Thebans and perhaps a few hundred others, the vast majority of whom were killed. -- Wikipedia, The Battle of Thermopylae [emphasis added] Compare that to the Battle of Stirling Bridge, where William Wallace and his much smaller force of Scots prepared to make a stand against Edward I and his English forces. He chose a battleground that afforded him a view of the surrounding area for twenty miles, enabling him to not only see exactly what challenges he faced, but to make his plans accordingly. Leveraging the very narrow bridge at Stirling and some somewhat unconventional tactics at the time, he managed to direct his resources in a way that allowed him to not only control the flow of opponents but ensure victory for the Scottish forces. What CIOs should take away from even a cursory study of these battles is this: strategic control can enable you to meet your goals with far fewer resources than expected. The choice of terrain and tools is commonly accepted as a force multiplier in military tactics. The difference between the two was in visibility; ultimately it was a lack of visibility that caused Leonidas' strategy to fail where Wallace was successful. Leonidas, unable to see sooner that he was being outflanked, could not provision resources or apply tactics in a way that enabled him to defeat the Persians. Wallace, on the other hand, had both visibility and control and ultimately succeeded. What's needed in the data center is similar: finding strategic points of control and leverage them to achieve a positive operational posture that not only addresses implementation and architectural requirements but business requirements as well. IT has to align itself as a means to align with the business. THE STRATEGIC TRIFECTA There inherently exist in the data center strategic points of control; that is, locations at which it's most beneficial to apply and enforce a broad variety of policies to achieve operational and business goals. Like terrain, these points of control can be force multipliers – improving the efficiency and effectiveness of fewer resources. 
Like high ground, it affords IT the visibility necessary to redeploy resources dynamically. This strategic trifecta comprises business value, architecture and implementation, and when identified, these strategic locations can be a powerful tool in realizing IT operational and business goals.

Strategic points of control are almost always natural aggregation points within an architecture: physical and topological locations at which traffic is forced, for one reason or another, to flow. The locations are ones within the data center in which all three strategic advantages can be achieved simultaneously. Applications and data cannot be controlled, nor policies enforced upon them to align with business goals, on a per-instance basis. Applications and storage resources today are constructs, comprising multiple infrastructure and application services that cannot be managed effectively to meet business goals individually. Strategic points of control within the data center afford a unique opportunity to view, manage and enforce policies upon application and storage services as a holistic unit.

You'll note the similarity here with the battlegrounds chosen by Leonidas and Wallace: Thermopylae and Stirling. Thermopylae was a naturally occurring location that narrowed the path through which the invading army had to travel. Mountains on one side, cliffs on the other, Xerces had no choice but to send his army straight into the eager arms of the Spartans. Stirling is located within the folds of a river with a single, narrow bridge. Edward I had no choice but to send his men two by two across that bridge to form up on the chosen battleground, allowing Wallace and the Scots to control the flow and ultimately decide the moment of attack when it was most likely that the Scots could prevail.

As a data center technique, the strategy remains much the same: apply policies regarding security, performance, and reliability in those places where traffic and resources naturally converge. Use the right equipment in the right locations and the investment can multiply the efficiency of the entire data center, just as both become force multipliers on the battlefield. The policies implemented at each strategic point of control enable better management of resources, better direction of traffic, and improved control over access to those resources. Each point essentially virtualizes resources, and policies that govern how those resources are accessed, distributed and consumed can be enforced. They optimize the end-to-end delivery of resources across vastly disparate conditions and environments. Such points of control, especially when collaborative in nature, provide a holistic view of and control over top-level business concerns: reliability, availability and performance.

Leveraging strategic points of control also creates a more agile operational posture in which policies can be adjusted dynamically and rapidly to address a wide variety of data center concerns. All three foci are required; a lack of visibility caused by concentrating on individual performance, availability and capacity (operational risks) does not afford the opportunity to meet business goals. It is the performance of the application as a whole, not its individual components, that is of import to the business. It is the cost to deliver and secure the application as a whole that determines efficiency, not that of individual components.
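As a deliberately simplified sketch of what that means in practice — policy attached to the application as a holistic unit at the point where its traffic converges, rather than to each server, instance, or image — consider the following; the class and policy names are invented for illustration and do not correspond to any specific product.

```python
from dataclasses import dataclass, field


@dataclass
class ApplicationService:
    """One 'application' as the business sees it: several components
    sitting behind a single strategic point of control."""
    name: str
    components: list[str]
    policies: set[str] = field(default_factory=set)


def enforce(app: ApplicationService, *policies: str) -> None:
    """Attach policies once, at the aggregation point, for the whole
    application -- not per server, instance, or virtual image."""
    app.policies.update(policies)


storefront = ApplicationService(
    name="storefront",
    components=["web-1", "web-2", "api-1", "reports-db"],
)

# Security, performance and reliability policies applied as a unit;
# adding web-3 tomorrow changes nothing about how policy is managed.
enforce(storefront, "tls-required", "waf-standard",
        "compress-responses", "health-monitor:http-200")

print(sorted(storefront.policies))
```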
These strategic points of control also offer the advantage of being contextually aware, which enables policies to be applied based on the resources, the network or the clients. Policies might be applied to all tablets or all applications of a specific type or they might be dynamic based on current operational – or business – parameters. Strategic points of control enable resources to be more effectively and efficiently managed by policies instead of people. This has the effect of tipping the imbalance of burden that currently lies primarily on the shoulders of people toward technology.

The goal of IT as a Service and a more dynamic data center is wholly supported by such a strategic trifecta, as it provides the means by which resources can be managed, provisioned, and secured without disruption. The virtualization of resources and their associated policies enables a more responsive IT organization by making it possible to manage resources in a very service-oriented fashion, applying and enforcing policies on an "application" rather than on individual servers, instances, or virtual images.

A strategic point of control in the data center is the equivalent of a modern Thermopylae. Like ancient but successful battles whose tactics and strategy have become standard templates for efficiently using resources by leveraging location and visibility, their modern equivalents in the data center can enable a CIO to align IT not only with the business, but its own operational and architectural goals as well.

What is a Strategic Point of Control Anyway?
Cloud is the How not the What
Cloud Control Does Not Always Mean 'Do it yourself'
The Strategy Not Taken: Broken Doesn't Mean What You Think It Means
Data Center Feng Shui: Process Equally Important as Preparation
Some Services are More Equal than Others
The Battle of Economy of Scale versus Control and Flexibility

So You Put an Application in the Cloud. Now what?
We need to stop thinking of cloud as an autonomous system and start treating it as part of a global application delivery architecture. When you decided you needed another garage to house that third car (the one your teenager is so proud of) you probably had a couple choices in architecture. You could build a detached garage that, while connected to your driveway, was not connected to any existing structures or you could ensure that the garage was in some way connected to either the house or the garage. In both cases the new garage is a part of your location in that both are accessed (most likely) from the same driveway. The only real question is whether you want to extend your existing dwellings or not. When you decide to deploy an application to the cloud you have a very similar decision: do you extend your existing dwelling, essentially integrating the environment with your own or do you maintain a separate building that is still “on the premises” but not connected in any way except that it’s accessible via your shared driveway. In both cases the cloud-deployed application is still located at your “address” – or should be – and you’ll need to ensure that it looks to consumers of that application like it’s just another component of your data center. THE OFF-SITE GARAGE Global application delivery (a.k.a. Global Server Load Balancing) has been an integral part of a multi-datacenter deployment model for many years. Whether a secondary or tertiary data center is leveraged for business continuity, a.k.a. “OMG our main site is down”, or as a means to improve performance of applications for a more global user base is irrelevant. In both cases all “sites” have been integrated to appear as a single, seamless data center through the use of global application delivery infrastructure. So why, when we start talking about “cloud” do we treat it as some external, disconnected entity rather than as the extension of your data center that it is? Like building a new garage you have a couple choices in architecture. There is, of course, the continued treatment of a cloud-deployed application as some external entity that is not under the management or control of the organization. That’s like using an off-site garage. That doesn’t make a lot of sense (unless your homeowners association has judged the teenager’s pride and joy an eyesore and forbids it be parked on premise) and neither does it make a lot of sense to do the same with a cloud-deployed application. You need at a minimum the ability to direct customers/users to the application in whatever situation you find yourself using it – backup, failover, performance, geo-location, on-demand bursting. Even if you’re only using off-premise cloud environments today for development or testing, it may be that in the future you’ll want to leverage the on-demand nature of off-premise cloud computing for more critical business cases such as failover or bursting. In those cases a completely separate, unmanaged (in that you have no real operational control) off-premise cloud is not going to provide the control necessary for you to execute successfully on such an initiative. You need something more, something more integrated, something more strategic rather than tactical. Instead, you want to include cloud as a part of your greater, global (multi-site) application delivery strategy. It’s either detached or attached, but in both cases it is just an extension of your existing property. 
ATTACHED CLOUD In the scenario in which the cloud is “attached” to your data center it actually becomes an extension of your existing architecture. This is the “private virtual cloud” scenario in which the resources provisioned in a public cloud computing environment are not accessible to the general Internet public directly. In fact, customers/users should have no idea that you are leveraging public cloud computing as the resources are obfuscated by leveraging the many-to-one virtualization offered by an application delivery controller (load balancer). The data center is extended and connected to this pool of resources in the cloud via a secured (encrypted) and accelerated tunnel that bridges the network layer and provides whatever routing may be necessary to treat the remote application instances as local resources. This is simply a resource-focused use of VPN (virtual private network), one that was often used to integrate remote offices with the corporate data center as opposed to individual use of VPNs to access a corporate network. Amazon, for example, uses IPSEC as a means to integrate resources allocated in its environments with your data center, but other cloud computing providers may provide SSL or a choice of either. In the case that the provider offers no option, it may be necessary to deploy a virtual VPN endpoint in the cloud in order to achieve this level of seamless connectivity. Once the cloud resources are “attached” they can be treated like any other pool of resources by the application delivery controller (load balancer). [ This is depicted by connection (2) in the diagram ] DETACHED CLOUD A potentially simpler exercise (in both the house and cloud scenarios) is to treat the cloud-deployed resources as “detached” from the core networking and data center infrastructure and integrating the applications served by those resources at the global application delivery layer. [ This is depicted by connection (1) in the diagram ] In this scenario the application delivery network and resources it is managing are all deployed within an external cloud environment and can be accessed publicly (if one were to determine which public IP address was fronting them, of course). You don’t want users/customers accessing those resources by some other name (you’d prefer www.example.com/remoteapp over 34.35.164.4-cloud1.provider.com of course) and further more you want to be able to make the decision when a customer will be using the detached cloud and when they will be using local data center resources. Even if the application deployed is new and no copy exists in the local data center you still want to provide a consistent corporate naming scheme to ensure brand identity and trust that the application is yours. Regardless, in this case the detached cloud resources require the means by which customers/users can be routed to them; hence the use of global application delivery infrastructure. In this case users attempt to access www.example.com/remoteapp and are provided with an IP address that is either local (in your data center) or remote (in a detached cloud environment). This resolution may be static in that it does not change based on user location, capacity of applications, or performance or it may take into consideration such variables as are available to it: location, performance, security, device, etc… (context). Yes, you could just slap a record in your DNS of choice and resolve the issue. 
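To make the contrast with a bare DNS record concrete, here is a minimal sketch of the kind of decision a global application delivery layer can make at resolution time — choosing between the data center and the detached cloud based on health, load, and client location. The site data, thresholds, and function names are illustrative assumptions, not any GSLB product's configuration.

```python
from dataclasses import dataclass


@dataclass
class Site:
    name: str
    virtual_ip: str
    healthy: bool
    load_pct: float
    region: str


SITES = [
    Site("datacenter", "203.0.113.10", healthy=True, load_pct=62.0, region="us"),
    Site("cloud-east", "198.51.100.20", healthy=True, load_pct=18.0, region="us"),
]


def resolve(hostname: str, client_region: str) -> str:
    """Answer a lookup for e.g. www.example.com with the 'best' site's IP.

    Preference order here: healthy sites only, then same-region, then
    least-loaded -- a stand-in for the richer context (performance,
    security, device) a real global application delivery layer can use."""
    candidates = [s for s in SITES if s.healthy] or SITES
    candidates.sort(key=lambda s: (s.region != client_region, s.load_pct))
    return candidates[0].virtual_ip


print(resolve("www.example.com", "us"))   # least-loaded healthy site (the cloud)
SITES[1].healthy = False
print(resolve("www.example.com", "us"))   # cloud fails; answer falls back to the DC
```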
A static DNS record does not, however, lay a foundation for more dynamic and flexible integration of off-premise cloud-deployed applications in the future.

FOUR REASONS to LEVERAGE GLOBAL APPLICATION DELIVERY

There are many reasons to include a global load balancing architecture in a global application delivery strategy, but these four stand out as the ones that provide the most benefit to both the organization and the user:

Avoid unnecessary application changes due to changes in providers. If all your images or a certain type of content are served by applications deployed in an external cloud computing environment, normalizing your global namespace eliminates the need to change the application references to that namespace in the case of a change of providers. The change is made at the global application delivery layer and is propagated quickly. This eliminates a form of vendor lock-in that is rarely remembered until a change in providers is desired. Developers should never be codifying domain names in applications, but legacy and third-party applications still need support, and these often draw their name and information from configuration files that effectively codify the operational location of the server and application. These configurations are less important when the platforms are deployed behind a global application delivery controller and virtualized.

Normalization of global name spaces preserves corporate identity and enables trust. Applications served by a trusted domain are desirable in an age when phishing and malicious code often re/directs users to oddly named domains for the purposes of delivering a malware payload. Global application delivery normalizes global domain namespaces and provides a consistent naming scheme for applications regardless of physical deployment location.

Enhances decision making processes. Leveraging global application delivery enables more control over the use of resources at a user level as well as at a business and technical layer. Decisions regarding which resources will be used by whom and when are the purview of global application delivery controllers (GSLB) and provide the organization with flexibility to determine which application instance is best suited to serve any given request based on the context of the request.

Foundational component. Like load balancing, global load balancing (application delivery) is a foundational component of a well-rounded cloud computing architecture. It provides the means by which the first integrations of off-site cloud computing will be accomplished, e.g. cloud bursting, and lays the foundation upon which more advanced location-selection algorithms will be applied, e.g. cloud balancing. Without an intelligent, integrated global application delivery strategy it will be difficult to implement and execute on strategies which leverage external and internal cloud computing deployments that are more application and business focused.

External (detached) cloud computing environments need not be isolated (silo'd) from the rest of your architecture. A much better way to realize the benefits associated with public cloud computing is to incorporate them into a flexible, global application delivery strategy that leverages existing architectural principles and best practices to architect an integrated, collaborative and seamless application delivery architecture.

Related Posts
from tag DNS
The One Problem Cloud Can't Solve. Or Can It?
It's DNSSEC Not DNSSUX
Windows Vista Performance Issue Illustrates Importance of Context
from tag strategy
The Devil is in the Details
The Myth of 100% IT Efficiency
Greedy (IT) Algorithms
If you aren't asking "what if" now you'll be asking "why me" later
from tag F5
Madness? THIS. IS. SOA!
WILS: How can a load balancer keep a single server site available?
Optimize Prime: The Self-Optimizing Application Delivery Network
F5 Friday: Eavesdropping on Availability
WILS: Automation versus Orchestration

About that 'Unassailable Economic Argument' for Public Cloud Computing
Turns out that ‘unassailable’ economic argument for public cloud computing is very assailable The economic arguments are unassailable. Economies of scale make cloud computing more cost effective than running their own servers for all but the largest organisations. Cloud computing is also a perfect fit for the smart mobile devices that are eating into PC and laptop market. -- Tim Anderson, “Let the Cloud Developer Wars Begin” Ah, Tim. The arguments are not unassailable and, in fact, it appears you might be guilty of having tunnel vision – seeing only the list price and forgetting to factor in the associated costs that make public cloud computing not so economically attractive under many situations. Yes, on a per hour basis, per CPU cycle, per byte of RAM, public cloud computing is almost certainly cheaper than any other option. But that doesn’t mean that arguments for cloud computing (which is much more than just cheap compute resources) are economically unassailable. Ignoring for a moment that it isn’t as clear cut as basing a deployment strategy purely on costs, the variability in bandwidth and storage costs along with other factors that generate both hard and soft costs associated with applications must be considered . MACRO versus MICRO ECONOMICS The economic arguments for cloud computing almost always boil down to the competing views of micro versus macro economics. Those in favor of public cloud computing are micro-economic enthusiasts, narrowing in on the cost per cycle or hour of a given resource. But micro-economics don’t work for an application because an application is not an island of functionality; it’s an integrated, dependent component that is part of a larger, macro-economic environment in which other factors impact total costs. The lack of control over resources in external environments can be problematic for IT organizations seeking to leverage cheaper, commodity resources in public cloud environments. Failing to impose constraints on auto-scaling – as well as defining processes for de-scaling – and the inability to track and manage developer instances launched and left running are certainly two of the more common causes of “cloud sprawl.” Such scenarios can certainly lead to spiraling costs that, while not technically the fault of cloud computing or providers, may engender enough concern in enterprise IT to keep from pushing the “launch” button. The touted cost savings associated with cloud services didn't pan out for Ernie Neuman, not because the savings weren't real, but because the use of the service got out of hand. When he worked in IT for the Cole & Weber advertising firm in Seattle two and a half years ago, Neuman enlisted cloud services from a provider called Tier3, but had to bail because the costs quickly overran the budget, a victim of what he calls cloud sprawl - the uncontrolled growth of virtual servers as developers set them up at will, then abandoned them to work on other servers without shutting down the servers they no longer need. Whereas he expected the developers to use up to 25 virtual servers, the actual number hit 70 or so. "The bills were out of control compared with what the business planned to spend," he says. 
-- Unchecked usage can kill cost benefits of cloud services

But these are not the only causes of cost overruns in public cloud computing environments. In fact, uncontrolled provisioning, whether due to auto-scaling or developer forgetfulness, is not peculiar to public cloud; it can be a problem in private cloud computing implementations as well. Without the proper processes and policies, and the right infrastructure and systems to enforce them, cloud sprawl will certainly impact even those large enterprises for whom private cloud is becoming such an attractive option.

While it is vastly more difficult to implement such processes and procedures automatically in public cloud environments than in private ones, owing to the immaturity of infrastructure services in the public arena, there are other, hotter issues in public cloud that will just as quickly burn up an IT or business budget if not recognized and addressed before deployment. These are issues that public cloud computing cannot necessarily address even by offering infrastructure services, which makes private cloud all the more attractive.

TRAFFIC SPRAWL

Though not quite technically accurate, we'll use traffic sprawl to describe the increasing amounts of unrelated traffic a cloud-deployed application must process. It's the extra traffic, the malicious attacks and the leftovers from the last application that occupied an IP address, that the application must field and ultimately reject. This traffic is nothing less than a money pit, burning up CPU cycles and RAM that translate directly into dollars for customers. Every request an application handles, good or bad, costs money.

The traditional answer to preventing unnecessary consumption of server resources by malicious or unwanted traffic is a web application firewall (WAF) and basic firewalling services. Both do, in fact, prevent that traffic from consuming resources on the server: they reject it, preventing it from ever being seen by the application. So far so good. But in a public cloud computing environment you also pay for the resources those services consume. In other words, you're paying per hour to process illegitimate and unwanted traffic no matter what. Even if IaaS providers were to offer WAF and firewall services, you would pay for them, and all the unwanted, malicious traffic that comes your way would still cost you, burning up your budget faster than you can say "technological money pit."

This is not to say that both types of firewall services are a bad idea in a public cloud environment; they are a valuable resource regardless and should be part and parcel of any dynamic infrastructure. But in a public cloud environment they address only security issues; they are unlikely to redress cost overruns and may instead help you further along the path to budget burnout.

HYBRID WILL DOMINATE CLOUD COMPUTING

I've made the statement before and I'll make it again: hybrid models will dominate cloud computing, due primarily to issues around control. Control over processes, over budgets, and over services. The inability to effectively control traffic at the network layer imposes higher processing and server consumption rates in public environments than in private, controlled environments, even when public resources are leveraged in the private environment through hybrid architectures enabled by virtual private cloud computing technologies.
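To put rough numbers on that last point, here is a back-of-the-envelope sketch of what "paying to process traffic you reject anyway" can look like on metered, per-hour instances. Every figure in it (junk request rate, per-instance capacity, hourly rate) is an illustrative assumption, not any provider's actual pricing; the point is only that the cost of unwanted traffic scales with its volume and never drops to zero.

```python
# Back-of-the-envelope estimate of the cost of "traffic sprawl" when every
# request -- wanted or not -- must be fielded by metered, per-hour instances.
# All figures are illustrative assumptions, not any provider's actual pricing.

JUNK_RPS = 200.0               # assumed rate of malicious/leftover requests per second
INSTANCE_CAPACITY_RPS = 500.0  # assumed requests per second one instance can absorb
HOURLY_RATE = 0.12             # assumed cost per instance-hour (USD)
HOURS_PER_MONTH = 24 * 30

def monthly_junk_cost(junk_rps: float, capacity_rps: float, hourly_rate: float) -> float:
    """Cost of the capacity burned just fielding traffic the application rejects."""
    capacity_burned = junk_rps / capacity_rps   # fraction of an instance, on average
    return capacity_burned * hourly_rate * HOURS_PER_MONTH

if __name__ == "__main__":
    cost = monthly_junk_cost(JUNK_RPS, INSTANCE_CAPACITY_RPS, HOURLY_RATE)
    print(f"~${cost:,.2f} per month spent processing traffic that should never "
          "have reached the application")
```

Scale the junk rate up during an attack, or add per-request bandwidth charges, and the number grows accordingly. In a private or hybrid architecture, that same traffic is screened at a fixed-cost control point instead.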
Traffic sprawl initiated by shared IP addresses in public cloud computing environments is simply not a factor in private and even hybrid-style architectures where public resources are never exposed via a publicly accessible IP address. Malicious traffic is never processed by applications and servers in a well-secured and well-architected private environment because firewalls and application firewalls screen out such traffic, preventing it from unnecessarily increasing compute and network resource consumption and effectively expanding the capacity of existing resources. The costs of such technology and controls are shared across the organization and are fixed, leading to better forecasting in budgeting and planning and eliminating the concern that these essential services might themselves become the cause of a budget overrun.

Control over provisioning of resources in private environments is more easily achieved through existing and emerging technology, while public cloud computing environments still struggle to offer even the most rudimentary data center infrastructure services. Without the ability to apply enterprise-class controls and limits to public cloud computing resources, organizations are likely to find that the macro-economic costs of cloud end up negating the benefits initially realized by cheap, easy-to-provision resources. A clear strategy with defined boundaries and processes, both technical and people-related, must be in place before making the leap, lest sprawl overrun budgets and eliminate the micro-economic benefits that could be realized by public cloud computing.
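What might such an enterprise-class control look like in practice? Below is a minimal, provider-agnostic sketch of a "sprawl guard" that flags instances violating a simple provisioning policy: a per-owner cap and an idle grace period. The record fields (owner, last_used), the thresholds, and the assumption that flagged instances get reviewed or stopped by some separate control API are all illustrative, not a reference to any particular provider's interface.

```python
from datetime import datetime, timedelta, timezone

# Provider-agnostic sketch of a "sprawl guard": flag any instance that is
# unowned, idle past a grace period, or over a per-owner cap. The record fields
# (owner, last_used) and the thresholds are illustrative assumptions; acting on
# the flagged instances (notify, stop, reclaim) is left to whatever inventory
# and control APIs your environment actually provides.

MAX_INSTANCES_PER_OWNER = 25
IDLE_GRACE = timedelta(hours=8)

def find_sprawl(instances: list[dict], now: datetime) -> list[dict]:
    """Return the instance records that violate the provisioning policy."""
    violations = []
    per_owner: dict[str, int] = {}
    for inst in instances:
        owner = inst.get("owner")
        if not owner:
            violations.append(inst)   # unowned: nobody will remember to shut it down
            continue
        per_owner[owner] = per_owner.get(owner, 0) + 1
        last_used = inst.get("last_used")
        if per_owner[owner] > MAX_INSTANCES_PER_OWNER:
            violations.append(inst)   # over the per-owner cap
        elif last_used is not None and now - last_used > IDLE_GRACE:
            violations.append(inst)   # idle past the grace period
    return violations

if __name__ == "__main__":
    now = datetime.now(timezone.utc)
    inventory = [
        {"id": "web-01",  "owner": "alice", "last_used": now - timedelta(hours=1)},
        {"id": "test-17", "owner": None,    "last_used": now - timedelta(days=3)},
        {"id": "dev-42",  "owner": "bob",   "last_used": now - timedelta(days=2)},
    ]
    for inst in find_sprawl(inventory, now):
        print(f"flag {inst['id']} for review or shutdown")
```

The specific policy matters less than the fact that one exists and is enforced automatically, whether the resources live in a private data center or in someone else's.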
How to Earn Your Data Center Merit Badge

Two words: be prepared.

Way back when, Don was the Scoutmaster for our local Boy Scout Troop. He'd been a Scout and earned his Eagle and, as we had a son entering scouting age, it was a great opportunity for Don to give back and for me to get involved. I helped out in many ways, not the least of which was helping the boys memorize the Scout promise and be able to repeat on demand its Motto (Be Prepared) and its Slogan (Do a good turn daily). Back then there was no Robotics Merit Badge (it was eerily introduced while I was writing this post, not kidding), but Scouts embracing the concept of being prepared were surely able to apply that principle to other aspects of their lives, covered by merit badges or not. I was excited reading about this newest merit badge, of course, as our pre-schooler is an avid lover of robots, and knowing he may be able to merge the two was, well, very cool for a #geek parent.

Now, the simple motto of the Boy Scouts is one that will always serve IT well, especially when it comes to operational efficiency and effectiveness in dealing with unanticipated challenges. It was just such a motto put forward in different terms by a director in the US Federal Government working on "emergency preparedness plans." In a nutshell, he said, "Think about what you would do the day after and do it the day before." That was particularly good advice, and it expands well on what it means to "Be Prepared."

Obviously IT has to respond to potential outages or other issues in the data center faster than the next day. But the advice still holds if we reduce it to this: put in place the policies and processes you would use to address a given challenge before it becomes a challenge, or at least be prepared to implement those policies and processes should they become necessary. The deciding factor in when to implement pre-challenge policies is likely the time required. For example, if you lose your primary ISP connection, what would you do? Provision a secondary connection to provide connectivity until the primary is returned to service, most likely. Given how long it takes to provision such a resource, it's probably best to provision it before you need it. Similarly, the time to consider how you'll respond to a flash crowd is before it happens, not after. Ask yourself how you would maintain performance and availability, then determine how best to ensure that the pieces of the solution that cannot be provisioned or implemented on demand are in place before they are needed.

EARNING the DATA CENTER MERIT BADGE

It is certainly the case that some policies, if pre-implemented as a mitigation technique to address future challenges, might interrupt normal operations in the data center. To alleviate this possibility, such policies should be implemented in such a way that they trigger only in the event of an emergency; in other words, based on context and with a full understanding of the current conditions within and without the data center. Contextually-aware policies implemented at a strategic point of control offer the means by which IT can "be prepared" to handle an emergency situation: suddenly constrained capacity, performance degradation, or even attacks against the data center network or the applications delivered from it.
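As a concrete illustration, here is a minimal sketch of what such a context-triggered policy might look like in logic form. The thresholds, metric names, and the activate/deactivate hooks are hypothetical placeholders; in a real deployment they would map to whatever metrics and control-plane calls your delivery or orchestration layer actually exposes.

```python
from dataclasses import dataclass

# Sketch of a context-triggered mitigation policy: the "day after" response is
# codified in advance but enforced only when observed conditions demand it.
# Thresholds, metric names, and the activate/deactivate hooks are hypothetical
# placeholders for whatever your delivery or orchestration layer exposes.

@dataclass
class Context:
    capacity_used: float     # fraction of pool capacity in use, 0.0 - 1.0
    avg_latency_ms: float    # observed application response time
    requests_per_sec: float  # current request rate

CAPACITY_LIMIT = 0.85
LATENCY_LIMIT_MS = 800.0
REQUEST_LIMIT_RPS = 5000.0

def emergency(ctx: Context) -> bool:
    """True when any observed condition crosses its emergency threshold."""
    return (ctx.capacity_used >= CAPACITY_LIMIT
            or ctx.avg_latency_ms >= LATENCY_LIMIT_MS
            or ctx.requests_per_sec >= REQUEST_LIMIT_RPS)

def activate_policy(name: str) -> None:
    print(f"activating pre-staged mitigation policy: {name}")  # stand-in for the real control-plane call

def deactivate_policy(name: str) -> None:
    print(f"relaxing mitigation policy: {name}")               # stand-in for the real control-plane call

def enforce(ctx: Context, policy_active: bool) -> bool:
    """Trigger the pre-staged policy on emergency; relax it when conditions clear."""
    if emergency(ctx) and not policy_active:
        activate_policy("shed-noncritical-traffic")
        return True
    if not emergency(ctx) and policy_active:
        deactivate_policy("shed-noncritical-traffic")
        return False
    return policy_active

if __name__ == "__main__":
    active = False
    for sample in (Context(0.40, 120.0, 900.0),    # normal day: nothing happens
                   Context(0.92, 950.0, 6200.0)):  # flash crowd: policy triggers
        active = enforce(sample, active)
```

The value is not in the particular thresholds but in having the response codified, tested, and staged before the emergency, so that triggering it is a matter of context rather than a war-room scramble.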
Policies like these, and the processes by which they are deployed, have traditionally been manual operations tasks: push a new configuration, provision a new server, or force an update to a routing table. But contextually aware solutions provide a mechanism for encapsulating much of the process and policy required to address challenges that arise occasionally in the data center. You need infrastructure components capable of adapting the enforcement of policies with little to no manual intervention, such that availability, security, and performance levels are maintained at all times. That's Infrastructure 2.0, for the uninitiated. These components must be aware of all factors that might degrade the operational posture of any one of the three, incurring operational risk that is unacceptable to the business. By leveraging strategic points of control to deploy contextually-aware policies, you can automatically respond to the unexpected, in many cases without disruption. This leads to consistent application performance, behavior, and availability, and ensures that IT is meeting the challenges of the business.

Similarly, when considering deploying an application in a public cloud computing environment, part of the process needs to be asking serious questions about the management and future integration needs of that application. Today it may not be business critical, but if and when it is, what then? How would you integrate that application's data with your internal systems? How would you integrate processes that rely upon that application with business or operational processes inside the data center? How might you extend identity and application access management systems such that cloud-hosted applications can leverage them?

Being prepared in the data center means having the strategic platforms in place before they're necessary, and then laying out a set of tactical plans that address specific challenges that may arise along the way. Note the specific conditions that "trigger" the need for such measures, so that the "day after" procedures are codified in a way that lets them be provisioned automatically when necessary. Doing so improves the responsiveness of IT, a major driver toward IT as a Service for both IT and the business.

Fulfilling the requirements for a data center merit badge is a lot easier than you might think: consider the challenges you may need to address, formulate a plan, and then implement it. Then wear your badge proudly. You'll have earned it.

Related blogs & articles:
Cloud Chemistry 101
Data Center Feng Shui: Reliability is not the Absence of Failure
Now Witness the Power of this Fully Operational Feedback Loop
Solutions are Strategic. Technology is Tactical.
What CIOs Can Learn from the Spartans
Operational Risk Comprises More Than Just Security
The Strategy Not Taken: Broken Doesn't Mean What You Think It Means
What is a Strategic Point of Control Anyway?

The Mobile Chimera
#mobile #vdi #IPv6 In the case of technology, as with mythology, the whole is often greater (and more challenging) than the sum of its parts.

The chimera is a mythological beast of scary proportions. Not only is it fairly large, but it's also got three independent heads: traditionally a lion, a goat, and a snake. Some variations on this theme exist, but the basic principle remains: it's a three-headed, angry beast that should not be taken lightly should one encounter it in the hallway. Individually, one might have a strategy to meet the challenge of a lion or a goat head on. But when they converge into one very angry and dangerous beast, the strategies and tactics employed to best any one of them will almost certainly not work against all three of them simultaneously.

The world of mobility is rapidly approaching its own technological chimera, one composed of three individual technology trends. While successful stratagems and tactics exist that address each one individually, taken together they form a new challenge requiring a new strategic approach.

THE MOBILE CHIMERA

Three technology trends - VDI, mobile, and IPv6 - are rapidly converging upon the enterprise. Each is driven in part by the others, and each requires in part the functionality and support of another. Addressing the challenges accompanying this trifecta requires a serious evaluation of the enterprise infrastructure with an eye toward performance, scalability, and flexibility, lest it be overwhelmed by demand originating both internally and externally.

Mobile

The myriad articles, blogs, and editorial orations on mobile device growth have to date focused on the need for organizations to step up and deliver device-ready enterprise applications. This focus has thus far ignored the reality of a highly diverse device client base, the ramifications of which those with long careers in IT will painfully recall from the client-server era. Thus it is no surprise that interest in and adoption of technology such as VDI is on the rise, as virtualization is a popular solution to the problem of delivering applications to a highly diverse set of clients. But virtualization, as popular a solution as it may be, is not a panacea.

Security and control over corporate resources and applications is a growing necessity today because of the ease with which users can take advantage of mobile technology to access them. Access control does not entirely solve the challenges of a diverse mobile client audience, as attackers turn their attention to mobile platforms as a means to gain access to resources and data previously beyond their reach. The need for endpoint security inspection continues to grow as the threat posed by mobile devices continues to rear its ugly head.

VDI

It was inevitable that as mobile device usage in the enterprise continued to grow, so too would VDI, as the most efficient way to deliver applications without requiring mobile platform-specific versions. The desire of business owners and security practitioners to keep data securely within the data center "walls" is also a factor in the rising desire to deploy VDI. VDI enables organizations to deliver applications remotely while maintaining control over data inside the data center, preserving enforcement of corporate security policies and minimizing risk. But VDI deployments are not trivial, regardless of the virtualization platform chosen.
Each virtualization solution has its challenges, and most of those challenges revolve around the infrastructure necessary to support such an initiative. Scalability and flexibility are important facets of VDI delivery infrastructure, and performance cannot be overlooked if such deployments are to be considered successful.

IPv6

Who could forget that the Internet is being pressured to move to IPv6 sooner rather than later, in part because of the growth of mobile clients? The strain placed on service providers to maintain IPv4 support so as not to "break the Internet" can only be borne so long before IPv6 becomes, as has been predicted, the Y2K for the network. The ability to deliver applications via VDI to mobile devices will soon require support for IPv6, but it will not obviate the need to support IPv4 just yet. A dual-stack approach will be required during the transition period (a minimal sketch follows at the end of this post), putting delivery infrastructure again front and center in the battle to deploy and support applications for mobile devices.

With mobile devices by all accounts numbering in the four billion range across multiple platforms, and effectively zero IPv4 addresses left to assign to them, it should be no surprise that as these three technology trends collide the result will be the need for a new mobility strategy. This is why solutions are strategic and technology is tactical. Individual products exist that easily solve each of these problems in isolation, but very few solutions address the juggernaut the three form in combination. It is necessary to coordinate and architect a solution that solves all three challenges simultaneously, as a means to combat complexity and its best friend forever, operational risk. A flexible and scalable delivery strategy will be necessary to ensure performance and security without sacrificing operational efficiency.

Related blogs & articles:
I Scream, You Scream, We all Scream for Ice Cream (Sandwich)
The Full-Proxy Data Center Architecture
Scaling VDI Architectures
Virtualization and Cloud Computing: A Technological El Niño
The Future of Cloud: Infrastructure as a Platform
Strategic Trifecta: Access Management
From a Network Perspective, What Is VDI, Really?
F5 Friday: A Single Namespace to Rule Them All
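As promised above, here is a minimal dual-stack sketch: a single IPv6 listening socket that also accepts IPv4 clients as IPv4-mapped addresses. It illustrates the transition-period requirement only, not how a production delivery tier should terminate traffic; the port number is arbitrary, and whether IPV6_V6ONLY can be disabled this way varies by operating system.

```python
import socket

# Minimal dual-stack listener: one IPv6 socket that also accepts IPv4 clients
# as IPv4-mapped addresses (::ffff:a.b.c.d). The default for IPV6_V6ONLY varies
# by operating system, so it is set explicitly here; some platforms do not
# allow it to be disabled, in which case separate v4 and v6 sockets are needed.

def dual_stack_listener(port: int = 8080) -> socket.socket:
    sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)  # 0 = accept v4-mapped too
    sock.bind(("::", port))   # IPv6 wildcard address
    sock.listen(128)
    return sock

if __name__ == "__main__":
    listener = dual_stack_listener()
    print("listening on [::]:8080 for both IPv4 and IPv6 clients")
    conn, addr = listener.accept()
    print("connection from", addr)   # IPv4 clients show up as ::ffff:x.x.x.x
    conn.close()
```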