interop
Reactive, Proactive, Predictive: SDN Models
#SDN #openflow A session at #interop sheds some light on SDN operational models.

One of the downsides of speaking at conferences is that your session inevitably conflicts with another session you'd really like to attend. Interop NY was no exception, except I was lucky enough to catch the tail end of a session I was interested in after finishing my own. I jumped into "OpenFlow and Software Defined Networks: What Are They and Why Do You Care?" just as a discussion of an SDN implementation at CERN labs was going on, and was quite happy to sit through the presentation.

CERN labs has implemented an SDN, focusing on the use of OpenFlow to manage the network. They partner with HP for the control plane and use a mix of OpenFlow-enabled switches for their very large switching fabric. All that's interesting, but what was really interesting (to me, anyway) was the answer to my question about the rate of change and how it's handled. We know, after all, that there are currently limitations on the number of inserts per second into OpenFlow-enabled switches, and CERN's environment is generally considered pretty volatile. The response became a discussion of SDN models for handling change. The speaker presented three approaches that essentially describe SDN models for OpenFlow-based networks:

Reactive
Reactive models are those we generally associate with SDN and OpenFlow. Reactive models are constantly adjusting and are in flux as changes are made immediately in reaction to current network conditions. This is the base volatility-management model, in which there is a high rate of change in the location of end-points (usually virtual machines) and OpenFlow is used to continually update the location of, and path through the network to, each end-point. The speaker noted that this model is not scalable for any organization, and certainly not for CERN.

Proactive
Proactive models anticipate issues in the network and attempt to address them before they become a real problem (which would require reaction). Proactive models can be based on details such as increasing utilization in specific parts of the network, indicating potential forthcoming bottlenecks. Making changes to the routing of data through the network before utilization becomes too high can mitigate potential performance problems. CERN takes advantage of sFlow and NetFlow to gather this data.

Predictive
A predictive approach uses historical data regarding the performance of the network to adjust routes and flows periodically. This approach is less disruptive, as it occurs with less frequency than a reactive model, but still allows trends in flow and data volume to inform appropriate routes.

CERN uses a combination of proactive and predictive methods for managing its network and indicated satisfaction with current outcomes. I walked out with two takeaways. First was validation that a reactive, real-time network operational model based on OpenFlow is inadequate for managing high rates of change. Second was that using OpenFlow as an operational management toolset, rather than as an automated, real-time self-routing network system, is a realistic option for addressing the operational complexity introduced by virtualization, cloud, and even very large traditional networks.
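The three models differ mostly in when flow entries get pushed to the switches. Here is a minimal Python sketch of that difference – push_flow(), the thresholds, and the match/action strings are all invented for illustration, not CERN's (or HP's) actual implementation:

```python
import statistics

# Hypothetical stand-in for whatever controller API (OpenFlow or otherwise)
# actually programs a switch; a real deployment would make a controller call here.
def push_flow(switch, match, action):
    print(f"[{switch}] match={match} -> action={action}")

# Reactive: program a path only when a packet misses the flow table.
def on_table_miss(switch, packet):
    push_flow(switch, match=packet["dst"], action="forward:computed_port")

# Proactive: watch live utilization (e.g., fed by sFlow/NetFlow collectors)
# and re-route *before* a link becomes a bottleneck.
def proactive_adjust(link_utilization, threshold=0.8):
    for link, util in link_utilization.items():
        if util > threshold:
            push_flow(link.split(":")[0], match="bulk-traffic",
                      action=f"reroute-around:{link}")

# Predictive: use historical samples to pre-program routes on a schedule.
def predictive_adjust(history, threshold=0.7):
    for link, samples in history.items():
        if statistics.mean(samples) > threshold:
            push_flow(link.split(":")[0], match="bulk-traffic",
                      action=f"pre-provision-alternate:{link}")

if __name__ == "__main__":
    on_table_miss("sw1", {"dst": "10.0.0.5"})                 # reactive
    proactive_adjust({"sw1:port2": 0.91, "sw2:port1": 0.42})  # proactive
    predictive_adjust({"sw1:port2": [0.65, 0.72, 0.80]})      # predictive
```

The point of the sketch is the trigger, not the rule format: reactive pushes on a table miss, proactive pushes on live telemetry, predictive pushes on history.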
F5 Friday: In the NOC at Interop
#interop #fasterapp #adcfw #ipv6 Behind the scenes in the Interop network.

Interop Las Vegas expects somewhere in the realm of 10,000+ attendees this year. Most of them will no doubt be carrying smartphones, many tablets, and of course the old standby, the laptop. Nearly every one will want access to some service – inside or out. The Interop network provides that access – and more.

F5 solutions provide IT services, including IPv4–IPv6 translation, firewall, SSL VPN, and web optimization technologies, for the Network Operations Center (NOC) at Interop. The Interop 2012 network comprises the show-floor Network Operations Center (NOC) and three co-location sites: Colorado (DEN), California (SFO), and New Jersey (EWR). The NOC moves with the show to its four venues: Las Vegas, Tokyo, Mumbai, and New York.

F5 has taken a hybrid application delivery network architectural approach – leveraging both physical devices (in the NOC) and virtual equivalents (in the Denver DC). Both physical and virtual instances of F5 solutions are managed via a BIG-IP Enterprise Manager 4000, providing operational consistency across the various application delivery services provided: DNS, SMTP, NTP, global traffic management (GSLB), remote access via SSL VPNs, local caching of conference materials, and data center firewall services in the NOC DMZ.

Because the Interop network supports both IPv6 and IPv4, F5 is also providing NAT64 and DNS64 services.

NAT64: Network address translation is performed between IPv6 and IPv4 on the Interop network, to allow IPv6-only clients and servers to communicate with hosts on IPv4-only networks.

DNS64: IPv6-to-IPv4 DNS translations are also performed by these BIG-IPs, allowing A records originating from IPv4-only DNS servers to be converted into AAAA records for IPv6 clients.

F5 is also providing SNMP, SYSLOG, and NETFLOW services to vendors at the show for live demonstrations. This is accomplished by cloning the incoming traffic and replicating it out through the network. At the network layer, such functionality is often implemented by simply mirroring ports. While this is sometimes necessary, it does not necessarily provide the level of granularity (and thus control) required. Mirrored traffic does not distinguish between SNMP and SMTP, for example, unless specifically configured to do so. While cloning via an F5 solution can be configured to act in a manner consistent with port mirroring, it also allows intermediary devices to intelligently replicate traffic based on information gleaned from deep content inspection (DCI). For example, traffic can be cloned to a specific pool of devices based on the URI, client IP address, client device type, or destination IP. Virtually any contextual data can be used to determine whether or not to clone traffic.

You can poke around with more detail, photos, and network diagrams at F5's microsite supporting its Interop network services. Dashboards are available, along with documentation, pictures, and more information in general on the network and the F5 services supporting the show. And of course, if you're going to be at Interop, stop by the booth and say "hi"!
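The DNS64 piece is easy to illustrate: a synthetic AAAA record is built by embedding the IPv4 address returned in the A record into an IPv6 prefix. A minimal sketch follows, assuming the RFC 6052 well-known prefix 64:ff9b::/96 (the prefix actually configured on the show network isn't stated here):

```python
import ipaddress

# RFC 6052 well-known NAT64 prefix; deployments may use a network-specific prefix instead.
NAT64_PREFIX = ipaddress.IPv6Network("64:ff9b::/96")

def synthesize_aaaa(ipv4_literal: str) -> str:
    """Embed an IPv4 address (from an A record) into the NAT64 prefix,
    producing the address handed to IPv6-only clients as an AAAA record."""
    v4 = ipaddress.IPv4Address(ipv4_literal)
    v6 = ipaddress.IPv6Address(int(NAT64_PREFIX.network_address) | int(v4))
    return str(v6)

if __name__ == "__main__":
    print(synthesize_aaaa("192.0.2.1"))   # 64:ff9b::c000:201
```

An IPv6-only client then sends traffic to that synthesized address, and the NAT64 function translates it back to the embedded IPv4 destination.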
I’ll keep the light on for ya…

When Big Data Meets Cloud Meets Infrastructure
#stirling #interop #infosec #bigdata Bridging the gap between big data and business agility.

I'm a huge fan of context-aware networking. You know, the ability to interpret requests in the context in which they were made – examining user identity, location, and client device along with network condition and server/application status. It's what imbues the application delivery tier with the agility necessary to make decisions that mitigate operational risk (security, availability, performance) in real time.

In the past, almost all context could be deduced from the transport (connection) and application layers. The application delivery tier couldn't necessarily "reach out" and take advantage of the vast amount of data "out there" that provides more insight into the conversation being initiated by a user. Much of this data falls into the realm of "big data" – untold amounts of information collected by this site and that site that offer valuable nuggets of information about any given interaction.

"Because of its expanded computing power and capacity, cloud can store information about user preferences, which can enable product or service customization. The context-driven variability provided via cloud allows businesses to offer users personal experiences that adapt to subtle changes in user-defined context, allowing for a more user-centric experience."
-- "The power of cloud", IBM Global Business Services

All this big data is a gold mine – but only if you can take advantage of it. For infrastructure, and specifically application delivery systems, that means somehow being able to access data relevant to an individual user from a variety of sources and applying some operational logic to determine, say, level of access or permission to interact with a service. It's collaboration. It's integration. It's an ecosystem. It's enabling context-aware networking in a new way.

It's really about being able to consume big data via an API that's relevant to the task at hand. If you're trying to determine whether a request is coming from a legitimate user or a node in a known botnet, you can do that. If you want to understand the current security posture of your public-facing web applications, you can do that. If you want to verify that your application delivery controller is configured optimally and is up to date with the latest software, you can do that.

What's more important, however, is perhaps that such a system is a foundation for integrating services that reside in the cloud, where petabytes of pertinent data have already been collected, analyzed, and categorized for consumption. Reputation, health, location. These characteristics barely scratch the surface of the kind of information available through services today that can dramatically improve the operational posture of the entire data center.

Imagine, too, if you could centralize the acquisition of that data and feed it to every application without substantially modifying the application. What if you could build an architecture that enables collaboration between the application delivery tier and application infrastructure in a service-focused way? One that enables every application to enquire as to the location or reputation or personal preferences of a user – stored "out there, in the cloud" – and use that information to make decisions about what components or data the application includes?
Knowing a user prefers Apple or Microsoft products, for example, would allow an application to tailor data or integrate ads or other functionality specifically targeted for that user, that fits the user's preferences. This user-centric data is out there, waiting to be used to enable a more personal experience. An application delivery tier-based architecture in which such data is aggregated and shared with all applications shortens the development life-cycle for such personally tailored application features and ensures consistency across the entire application portfolio.

It is these kinds of capabilities that drive the integration of big data with infrastructure: first as a means to provide better control and flexibility in real time over access to corporate resources by employees and consumers alike, and with an eye toward future capabilities that focus on collaboration inside the data center, better enabling a more personal, tailored experience for all users.

It's a common refrain across the industry that network infrastructure needs to be smarter, make more intelligent decisions, and leverage available information to do it. But actually integrating that data in a way that makes it possible for organizations to codify operational logic is something that's rarely seen. Until now.
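As a concrete (and entirely hypothetical) illustration of what consuming that kind of data at the application delivery tier might look like, the sketch below stubs out a reputation lookup and combines its answer with request context to decide how a request is handled. The service, scores, and thresholds are assumptions for illustration, not a real API:

```python
from dataclasses import dataclass

@dataclass
class RequestContext:
    client_ip: str
    device_type: str          # e.g. "desktop" or "mobile"
    requested_resource: str

# Stand-in for an external, cloud-hosted reputation service.
# Scores: 0.0 (clean) .. 1.0 (known botnet member).
REPUTATION_STUB = {"203.0.113.7": 0.95, "198.51.100.23": 0.10}

def lookup_reputation(ip: str) -> float:
    return REPUTATION_STUB.get(ip, 0.0)

def decide(ctx: RequestContext) -> str:
    score = lookup_reputation(ctx.client_ip)
    if score > 0.8:
        return "deny"                 # likely botnet node: never reaches the application
    if ctx.requested_resource.startswith("/admin") and ctx.device_type == "mobile":
        return "step-up-auth"         # sensitive resource from a riskier device class
    return "forward-to-pool"          # normal delivery path

if __name__ == "__main__":
    print(decide(RequestContext("203.0.113.7", "desktop", "/index.html")))    # deny
    print(decide(RequestContext("198.51.100.23", "mobile", "/admin/users")))  # step-up-auth
```

The interesting part isn't the rules themselves but where they live: centralized in the delivery tier, so every application behind it benefits without being modified.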
The IT Optical Illusion

Everyone has likely seen the optical illusion of the vase in which, depending on your focus, you either see a vase or two faces. This particular optical illusion is probably the best allegorical image for IT, and in particular cloud computing, I can imagine. Depending on your focus within IT you're either focused on – to borrow some terminology from SOA – design-time or run-time management of the virtualized systems and infrastructure that make up your data center. That focus determines what particular aspect of management you view as most critical, and unfortunately makes it difficult to see the "big picture": both are critical components of a successful cloud computing initiative.

I realized how endemic to the industry this "split" is while prepping for today's "Connecting On-Premise and On-Demand with Hybrid Clouds" panel at the Enterprise Cloud Summit @ Interop, on which I have the pleasure to sit with some very interesting – but differently focused – panelists. See, as soon as someone starts talking about "connectivity" the focus almost immediately drops to … the network. Right. That actually makes a great deal of sense, and it is, absolutely, a critical component of building out a successful hybrid cloud computing architecture. But that's only half of the picture, the design-time picture. What about run-time? What about the dynamism of cloud computing and virtualization? The fluid, adaptable infrastructure? You know, the connectivity that's required at the application layers, like access control and request distribution and application performance.

Part of the reason you're designing a hybrid architecture is to retain control: control over when those cloud resources are used, and how, and by whom. In most cloud computing environments today, at least public ones, there's no way for you to maintain that control because the infrastructure services are simply not in place to do so. Yet. At least I hope yet; one wishes to believe that some day they will be there. But today, they are not. Thus, in order to maintain control over those resources there needs to be a way to manage the run-time connectivity between the corporate data center (over which you have control) and the public cloud computing environment (which you do not). That's going to take some serious architecture work, and it's going to require infrastructure services from infrastructure capable of intercepting requests, inspecting each request in the context of the user and the resource requested, and applying the policies and processes that ensure only the clients you want accessing those resources can access them, while those you prefer not access them are denied.

It will become increasingly important that IT be able to view its network in terms of both design-time and run-time connectivity if it is going to successfully incorporate public cloud computing resources into its corporate cloud computing – or traditional – network and application delivery network strategy.
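To make the run-time half of the picture a little more concrete, here is a rough sketch of the kind of per-request policy logic such infrastructure services would need to apply – deciding which pool (on-premise or public cloud) a request is even allowed to land in. The classification rules, roles, and pool names are invented for illustration:

```python
# Illustrative data classification: paths whose responses must stay on-premise.
SENSITIVE_PREFIXES = ("/payroll", "/patient-records")

def select_pool(user_role: str, path: str, on_prem_utilization: float) -> str:
    """Run-time decision: where may this request be served?"""
    if path.startswith(SENSITIVE_PREFIXES):
        return "on-premise-pool"        # sensitive data never bursts to the cloud
    if user_role == "partner":
        return "on-premise-pool"        # external partners stay behind full controls
    if on_prem_utilization > 0.85:
        return "public-cloud-pool"      # burst non-sensitive traffic when capacity is tight
    return "on-premise-pool"

if __name__ == "__main__":
    print(select_pool("employee", "/catalog", on_prem_utilization=0.90))         # public-cloud-pool
    print(select_pool("employee", "/payroll/report", on_prem_utilization=0.90))  # on-premise-pool
```

The design-time work decides that such a decision point exists and where it sits; the run-time work is evaluating it, request by request, against current conditions.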
Is PaaS Just Outsourced Application Server Platforms?

There's a growing focus on PaaS (Platform as a Service), particularly as Microsoft has been rolling out Azure and VMware continues to push forward with its SpringSource acquisition. Amazon, though generally labeled as IaaS (Infrastructure as a Service), is also a "player" with its SimpleDB and SQS (Simple Queue Service) and, more recently, its SNS (Simple Notification Service). But there's also Force.com, the SaaS (Software as a Service) giant Salesforce.com's incarnation of a "platform", as well as Google's App Engine. As is the case with "cloud" in general, the definition of PaaS is varied and depends entirely on to whom you're speaking at the moment.

What's interesting about SpringSource and Azure and many other PaaS offerings is that, as far as the customer is concerned, they're very much like an application server platform. The biggest difference, of course, is that the customer need not concern themselves with the underlying management and scalability. The application, however, is still the customer's problem.

That's not that dissimilar from what enterprise-class organizations build out in their own data centers using traditional application server platforms like .NET and JavaEE. The application server platform is, well, a platform, in which multiple applications are deployed in their own cozy little isolated containers. You might even recall that JavaEE containers are called, yeah, "virtual machines." And even though Force.com and Google App Engine are proprietary platforms (and generally unavailable for deployment elsewhere), they still bear many of the characteristic marks of an application server platform.
Hindsight is Always Twenty-Twenty

There have been many significant events over the past decade, but looking back these are the ones still having a significant impact on the industry.

Next week is Interop. Again. This year it's significant in that it's my tenth anniversary attending Interop. It's also the end of a decade's worth of technological change in the application delivery industry, the repercussions and impact of which in some cases are just beginning to be felt. We called it load balancing back in the day, but it's grown considerably since then and now encompasses a wide variety of application-focused concerns: security, optimization, acceleration, and instrumentation to name a few. And its importance to cloud computing and dynamic infrastructure is only now beginning to be understood, which means the next ten years ought to be one heck of a blast.

Over these past ten years there have been a lot of changes and movement and events that have caused quite the stir. But reflecting on those ten years and all those events and changes brings to the fore a very small subset of events that, in hindsight, have shaped application delivery and set the stage for the next ten years. I'm going to list these events in order of appearance, and to do that we're going to have to go all the way back to the turn of the century (doesn't that sound awful?).

THE BIRTH of INFRASTRUCTURE 2.0

In 2001 F5 introduced iControl, a standards-based API that allowed customers, partners, and third-party developers to control BIG-IP programmatically. Being a developer by trade and a network jockey by experience, this concept blew my mind. Control the network? Programmatically? How awesome was that? Turns out more awesome than even I could realize, because it wasn't long before other application delivery focused vendors were doing the same, and from this has grown the foundation for Infrastructure 2.0. The new network. The dynamic infrastructure necessary for cloud computing and the answer to the myriad challenges raised by virtualization in the data center. Like most Web 2.0 applications today, an API is nearly considered "table stakes" for new or updated products; a must-have if a solution is going to fit in with the increasingly integrated networks that drive data center and, in particular, network automation. Looking ahead, Infrastructure 2.0 and these control planes are increasingly important to service-based cloud computing offerings and to organizations desiring to automate and orchestrate their virtualized data centers for maximum efficiency. These APIs are the means by which the "new network" will be implemented, how the "network" will be integrated with cloud frameworks.

THE DEATH of NAUTICUS NETWORKS

Nauticus Networks had a dream; a dream of a virtualized layer 7 switch. Not the veneer kind of virtualization but real, honest-to-goodness virtualization of the entire hardware. It was amazing. And in 2004 Sun Microsystems acquired the company and promptly starved a solution that might have had a very bright reign in the virtualized data center. I won't go into the details of how Sun killed the platform; suffice to say no one saw then that it was a multi-tenant load balancing king waiting to ascend its (cloud) throne. The death of Nauticus impacted the market primarily because it never had a chance to grow into its legs and show the value of a virtualized hardware platform. It was way, way before its time which, in the case of a start-up, can be deadly.
Its subsequent death at the hands of Sun made it appear that no one was interested in virtualized hardware platforms, which ultimately meant no one else really picked up on the concept. (Cisco's virtualization is not nearly as thorough as Nauticus' implementation, I assure you.) The lack of truly virtualized hardware platforms has led instead to architectural infrastructure virtualization, which is almost certainly the future of cloud computing infrastructure for a variety of reasons, with portability and architectural homogeneity at the top of the list.

THE NETWORK AS A SERVICE

The introduction of Cisco's SONA in 2005 was the talk of the tech industry for months thereafter. Despite the fact that it never really gained traction outside of Cisco (and the press), it did kick-start interest in what might be called today "Network as a Service." The idea that network functionality might be available to developers and applications "as a service" is one that extends naturally today into IaaS offerings. Where F5 introduced the concept of the control plane necessary to implement dynamic cloud computing infrastructure, Cisco introduced the concept of applying that infrastructure functionality in a service-oriented manner. As a service, like cloud computing. Looking into the next decade you can probably see that this concept is one that must be embraced by cloud computing providers in order to differentiate their offerings. By packaging up and serving "application acceleration" or "protocol security" or "web application firewall" as a service, applications in cloud computing environments will eventually gain unprecedented control over, and integration with, the network.

There were many, many other acquisitions (Citrix –> NetScaler, Nortel –> Alteon and then Radware –> Nortel, to name a few) and many other innovations in the past ten years, but these three stand out as influencing where we stand today and, perhaps, where we're going tomorrow. We'll see what the playing field looks like ten years from now, when hindsight again rears its mocking head and clearly shows what was and was not influential.

This Interop promises to be another good one. Not because of all the new products (always cool, of course, and I'm looking forward to being able to talk about them!) but, like these events, because it's focusing more on how than what. It's a new decade, and with it comes a new era of application delivery. This one looks to be another that raises the bar in terms of the importance of architecture and the network to the next generation of applications and data center models.
Learn How to Play Application Performance Tag at Interop

It's all fun and games until application performance can't be measured.

We talk a lot about measuring application performance and its importance to load balancing, scalability, meeting SLAs (service level agreements), and even to the implementation of more advanced concepts like cloud balancing and location-based global application delivery, but we don't often talk about how hard it is to actually get that performance data. Part of the reason it's so difficult is that the performance metrics you want are the ones that represent, as accurately as possible, the end-user experience. You know, customers and visitors, the users of your application who must access your application over what may be a less than phenomenal network connection.

This performance data is vital. Increasingly, customers and visitors are basing business choices on application performance:

"Unacceptable Web site performance during peak traffic times led to actions and perceptions that negatively impacted businesses' revenue and reputation:
-- 78 percent of consumers have switched to a competitor's Web site because they encountered slowdowns, errors and transaction problems during peak traffic times.
-- After a poor online experience, 88 percent are less likely to return to a site, 47 percent have a less positive perception of the company and 42 percent have discussed it with family, friends and peers, or online on social networks."
-- Survey Finds Consumer Frustration with Web Site Performance During Peak Traffic Times Negatively Impacts Business Results

And don't forget that Google recently decided to add performance as a factor in its ranking algorithms. If your application and site perform poorly, this could have an even bigger negative impact on your bottom line.

What's problematic about ensuring application performance is that applications are now being distributed not just across data centers but across deployment models. The term "hybrid" is usually used in conjunction with public and private cloud to denote a marriage between the two, but the reality is that today's IT operations span legacy, web-based, client-server, and cloud models. Making things more difficult is that organizations also have a cross-section of application types – open source, closed source, packaged, and custom applications are all deployed and operating across various deployment models, and in environments without a consistent, centrally manageable solution for measuring performance in the first place.

The solution to gathering accurate end-user experience performance data has been, to date, to leverage service providers who specialize in gathering this data. But implementing a common application performance monitoring solution across all applications and environments in such a scenario is quite problematic, because most of these solutions rely upon the ability to instrument the application or site. Organizations, too, may be reluctant to instrument applications for a specific solution – that can result in de facto lock-in, as the time and effort necessary to remove and replace the instrumentation may be unacceptable. A dynamic infrastructure, capable of intercepting, inspecting, and modifying, if necessary, the application data stream en route is necessary in order to unify application performance measurement efforts across all application types and locations.
A dynamic infrastructure that's capable of tagging application data with the appropriate information, such that end-user monitoring services – necessary to determine more accurately the end-user experience in terms of response and page-load time – can effectively perform their duties across the myriad application deployments upon which businesses and their customers depend.

At Interop we'll be happy to show you how to teach your application delivery infrastructure – physical and virtual – how to play a game of "tag" with your applications that can provide just such measurements. Measurements that are vital to identifying potential performance bottlenecks that may negatively impact application performance and, ultimately, the business' bottom line. Even better, we'll not only show you how to play the game, but how to win, by architecting an even more dynamic, intelligent infrastructure through which application performance-enhancing solutions can be implemented, no matter where those applications may reside – today or tomorrow.
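In an F5 deployment that tagging would live in the traffic path itself (an iRule, for instance); the Python below is only a language-neutral sketch of the idea – rewriting an HTML response in flight to insert a small timing beacon so an end-user monitoring service can collect real page-load times. The beacon endpoint and snippet are invented for illustration:

```python
# Hypothetical beacon: reports (loadEventStart - navigationStart) to /eum/beacon.
TIMING_SNIPPET = (
    "<script>window.addEventListener('load',function(){"
    "var t=performance.timing;"
    "(new Image()).src='/eum/beacon?load='+(t.loadEventStart-t.navigationStart);"
    "});</script>"
)

def tag_response(html: str) -> str:
    """Insert the timing beacon just before </body> so measurements reflect
    the end user's actual page-load experience, not just server response time."""
    marker = "</body>"
    if marker in html:
        return html.replace(marker, TIMING_SNIPPET + marker, 1)
    return html   # non-HTML or fragment responses pass through untouched

if __name__ == "__main__":
    page = "<html><body><h1>Hello Interop</h1></body></html>"
    print(tag_response(page))
```

Because the injection happens in the delivery tier, no application has to be instrumented individually, and swapping monitoring providers means changing one snippet rather than every application.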
At Interop You Can Find Out How Five "Ates" Can Net You Three "Ables"

The biggest disadvantage organizations have when embarking on a "we're going cloud" initiative is that they're already saddled with an existing infrastructure and legacy applications. That's no surprise, as it's almost always true that longer-lived enterprises are bound to have some "legacy" applications and infrastructure sitting around that's still running just fine (and is a source of pride for many administrators – it's no small feat to still have a Novell file server running, after all). Applications themselves are almost certainly bound to rely on some of that "legacy" infrastructure and integration, and let's not even discuss the complex web of integration that binds applications together across time and servers. So it is highly unlikely that an organization is going to go from its existing morass of infrastructure to an elegant, efficient "cloud-based" architecture overnight. Like raising children, it takes an investment of time, effort, and yes, money. But for that investment the organization will eventually get from point A (legacy architecture) to point Z (cloud computing) and realize the benefits associated with an on-demand, automated data center.

There are some milestones that are easily recognizable in enterprise data centers as you traverse the path between here and there; steps, if you will, on the journey to free the data center from its previously static and brittle infrastructure and processes on its way to a truly dynamic infrastructure. There are, you guessed it, five steps, and they all end with (how'd you ever guess?) "ate":

1. SEPARATE test and development
2. CONSOLIDATE servers
3. AGGREGATE capacity on demand
4. AUTOMATE operational processes (a sketch of this step follows at the end of this post)
5. LIBERATE the data center with a cloud computing model

And for your efforts in raising up this data center you'll achieve a dynamic infrastructure that is scalable, reliable, and keeps applications available. Yes, the three "ables". Modern "math" says five "ates" = three "ables", at least in the realm of the data center.

To get there a new paradigm in data center and networking design is required; one that allows the customer, on their terms, to add, remove, grow, and shrink application and data/storage services on demand. It's the type of network that can understand the context of the user, location, situation, device, and application and dynamically adjust to those conditions. It's the type of network that can be provisioned in hours, not weeks or months, to support new business applications. It's an Infrastructure 2.0 enabled data center: integrated, collaborative, and services-based.

What's necessary is a new architecture and a new way of looking at infrastructure. But to build that architecture you first need a blueprint, a map, that helps you get there – building codes that help navigate the construction of a dynamic infrastructure capable of responding to demand based on the operational and business processes that have always been the real competitive advantage IT brings to the business table. That blueprint, the architecture, is infinitely more important than its individual components. It's not just the components, it's the way in which the components are networked together that brings to life the dynamic data center. And it's those architectural blueprints, the building codes, that we're bringing to Interop.
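Of the five steps, AUTOMATE is the one that lends itself to a quick sketch: a minimal, demand-driven capacity loop. The provision/retire calls are hypothetical placeholders for whatever orchestration API a given environment actually exposes:

```python
def provision_instance(pool: str) -> str:
    # Placeholder for a real orchestration call (hypothetical).
    print(f"provisioning a new member for {pool}")
    return "member-new"

def retire_instance(pool: str, member: str) -> None:
    print(f"retiring {member} from {pool}")

def autoscale(pool: str, members: list, avg_utilization: float,
              scale_up_at: float = 0.80, scale_down_at: float = 0.30) -> list:
    """One pass of a demand-driven capacity loop (step 4: AUTOMATE)."""
    if avg_utilization > scale_up_at:
        members = members + [provision_instance(pool)]
    elif avg_utilization < scale_down_at and len(members) > 1:
        retired = members[-1]
        members = members[:-1]
        retire_instance(pool, retired)
    return members

if __name__ == "__main__":
    pool = ["member-1", "member-2"]
    pool = autoscale("app-pool", pool, avg_utilization=0.92)   # adds capacity on demand
    print(pool)
```

The loop itself is trivial; the hard part, and the point of the five steps, is having consolidated, aggregated infrastructure underneath it that can actually honor the provision and retire calls.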
A Hardware Platform and a Virtual Appliance Walk into a Bar at Interop…

Invariably, when new technology is introduced it causes an upheaval. When that technology has the power to change the way in which we architect networks and application infrastructure, it can be disruptive but beneficial. When that technology simultaneously requires that you abandon advances and best practices in architecture in order to realize those benefits, that's not acceptable.

Virtualization at the server level is disruptive, but in a good way. It forces organizations to reconsider the applications deployed in their data center and turn a critical eye toward the resources available and how they're partitioned across applications, projects, and departments. It creates an environment in which the very make-up of the data center can be re-examined, with the goal of making more efficient the network, storage, and application network infrastructure over which those applications are delivered.

Virtualization at the network layer is even more disruptive. From a network infrastructure perspective few changes are required in the underlying infrastructure to support server virtualization, because the application and its behavior don't really change when moving from a physical deployment to a virtual one. But the network, ah, the network does require changes when it moves from a physical to a virtual form factor. The scale, fault-tolerance, and availability of the network infrastructure – from storage to application delivery network – are all impacted by the simple change from physical to virtual. In some cases this impact is a positive one; in others, it's not so positive. Understanding how to take advantage of virtual network appliances such that core network characteristics like fault-tolerance, reliability, and security are not negatively impacted is one of the key factors in the successful adoption of virtual network technology.

Combining virtualization of "the data center network" with the deployment of applications in a public cloud computing environment brings to the fore the core issues of lack of control and visibility in externalized environments. While the benefits of public cloud computing are undeniable (though perhaps not nearly as world-shaking as some would have us believe), the inclusion of externally controlled environments in the organization's data center strategy will prove to have its challenges. Many of these challenges can be addressed thanks to the virtualization of the network (despite the lack of choice and dearth of services available in today's cloud computing offerings).
Virtual Server Sprawl: FUD or FACT?

At Interop this week, security experts have begun sounding the drum regarding the security risks of virtualization, reminding us that virtual server sprawl magnifies that risk because, well, there are more virtual servers to manage and put at risk.

"Virtual sprawl isn't defined by numbers; it's defined as the proliferation of virtual machines without adequate IT control," [David] Lynch said.

That's good, because the numbers as often cited just don't add up. A NetworkWorld article in December 2007 cited two different sets of numbers from Forrester Research on the implementation of virtualization in surveyed organizations. First we are told that:

"IT departments already using virtualization have virtualized 24% of servers, and that number is expected to grow to 45% by 2009."

And later in the article we are told:

"The latest report finds that 37% of IT departments have virtualized servers already, and another 13% plan to do so by July 2008. An additional 15% think they will virtualize x86 servers by 2009."

It's not clear where the first data point is coming from, but it appears to come from a Forrester Research survey cited in the first paragraph, while the latter data set appears to come from the same recent study. The Big Hairy Question is: how many virtual servers does that mean?

This sounds a lot like the great BPM (Business Process Management) scare of 2005, when it was predicted that business users would be creating SOA-based composite applications willy-nilly using BPM tools because doing so required no development skills, just a really good mouse finger with which you could drag and drop web services to create your own customized application. Didn't happen. Or if it did, it happened in development and test and local environments and never made it to the all-important production environment, where IT generally maintains strict control.

Every time you hear virtual server sprawl mentioned it goes something like this: "When your users figure out how easy it is…" "Users", whether IT or business, are not launching virtual servers in production in the data center. If they are, then the organization has bigger concerns on its hands than sprawl. Are they launching virtual servers on their desktops? Might be. On a test or development machine? Probably. In production? Not likely. And that's where management and capacity issues matter; that's where the bottom line is potentially impacted by a technological black plague like virtual server sprawl; that's where the biggest security and management risks associated with virtualization are going to show themselves.

None of the research cited ever discusses the number of virtual servers running, just the number of organizations in which virtualization has been implemented. That could mean 1 or 10 or 100 virtual servers. We just don't know, because no one has real numbers to back it up; nothing but limited anecdotal evidence has been presented to indicate that there is a problem with virtual server sprawl.

I see problems with virtualization. I see the potential for virtualizing solutions that shouldn't be virtualized, for myriad reasons. I see the potential problems inherent in virtualizing everything from the desktop to the data center. But I don't see virtual server sprawl as the Big Hairy Monster hiding under the virtual bed.
So as much as I'd like to jump on the virtual sprawl bandwagon and make scary faces in your general direction about the dangers that lie within the virtual world - because many of them are very real and you do need to be aware of them - there just doesn't seem to be any real data to back up the claim that virtual sprawl is - or will become - a problem.