dynamic data center
19 Topics

It's On: Stacks versus Flows
#OpenStack #CloudStack #OpenFlow #SDN It's a showdown of model versus control – or is it?

There's a lot of noise about "wars" in the networking world these days. OpenStack versus CloudStack versus OpenFlow-based SDN. But while there are definitely aspects of "stacks" that share similarities with "flows", they are not the same model and ultimately they aren't even necessarily attempting to solve the same problems. Understanding the two models and what they're intended to do can go a long way toward resolving any perceived conflicts.

The Stack Model

Stack models, such as CloudStack and OpenStack, are more accurately placed in the category of "cloud management frameworks" because they are designed for provisioning and management of the infrastructure services that comprise a cloud computing (or highly dynamic) environment. Stacks are aptly named, as they attempt to provide management, and specifically automation of provisioning, for the complete network stack. Both CloudStack and OpenStack, along with Eucalyptus, Amazon, and VMware vCloud, provide a framework API that can (ostensibly) be used to provision infrastructure services irrespective of vendor implementation. The vision is (or should be) to enable implementers (whether service provider or enterprise) to switch out architectural elements (routers, switches, hypervisors, load balancers, etc.) transparently*. That is, moving from Dell to HP to Cisco (or vice-versa) as an environment's switching fabric should not be disruptive. Physical changes should be able to occur without impacting the provisioning and management of the actual services provided by the infrastructure. And yes, such a strategy should also allow heterogeneity of infrastructure. In many ways, such "stacks" are the virtualization of the data center, enabling abstraction of the actual implementation from the configuration and automation of the hardware (or software) elements. This, more than anything, is what enables a comparison with flow-based models.

The Flow Model

Flow-based models, in particular OpenFlow-based SDN, also abstract implementation from configuration by decoupling the control plane from the data plane. This allows any OpenFlow-enabled device (mostly switches today, as SDN and OpenFlow focus on network layers) to be configured and managed via a centralized controller using a common API. Flows are "installed" or "inserted" into OpenFlow-enabled elements via OpenFlow, an open protocol designed for this purpose, and support real-time updates that enable on-demand optimization or fault isolation of flows through the network. OpenFlow and SDN are focused on managing the flow of traffic through a network. Flow-based models purport to offer the same benefits as a stack model in terms of heterogeneity and interoperability. Moving from one OpenFlow-enabled switch to another (or mixing and matching) should ostensibly have no impact on the network whatsoever. What flow-based models offer above and beyond a stack model is extensibility. OpenFlow-based SDN models using a centralized controller also carry with them the promise of being able to programmatically add new services to the network without vendor assistance.
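To make the difference in control flow concrete, here is a minimal, hypothetical sketch (in Python) of the two interaction patterns. The endpoint path, field names, controller class, and forwarding actions are invented for illustration; they do not reflect any particular vendor's API or the OpenFlow wire protocol.

```python
# Hypothetical sketch contrasting the two models; names and APIs are
# illustrative, not any vendor's actual interface.

import requests  # used only to illustrate a "push"-style provisioning call


def provision_via_stack(api_base: str, token: str) -> None:
    """Stack model: an external framework pushes configuration to a component."""
    # The framework initiates the change; the firewall/load balancer is passive.
    rule = {"protocol": "tcp", "public_port": 443, "private_port": 8443}
    requests.post(
        f"{api_base}/portforwardingrules",
        json=rule,
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )


class FlowController:
    """Flow model: the switch asks the controller what to do on a table miss."""

    def __init__(self) -> None:
        self.flow_table: dict[tuple, str] = {}  # match fields -> forwarding action

    def packet_in(self, src_ip: str, dst_ip: str, dst_port: int) -> str:
        # The element initiates the exchange; the controller answers with a
        # flow entry that is installed for future packets matching this pattern.
        match = (src_ip, dst_ip, dst_port)
        action = self.flow_table.get(match)
        if action is None:
            action = "forward:port2" if dst_port == 443 else "drop"
            self.flow_table[match] = action  # "install" the flow for next time
        return action
```

The point of contrast is who initiates: the stack pushes configuration from outside the component, while the flow model has the element query a controller and cache the answer as a forwarding entry.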
"Applications" deployed on an SDN controller platform (for lack of a better term) can extend existing services or add new ones and there is no need to change anything in the network fabric, because ultimately every "application" distills flows into a simple forwarding decision that can then be applied like a pattern to future flows by the switches. The Differences This is markedly different from the focus of a stack, which is on provisioning and management, even though both may be occurring in real-time. While it's certainly the case that through the CloudStack API you can create or delete port forwarding rules on a firewall, these actions are pushed (initiated) external to the firewall. It is not the case that the firewall receives a packet and asks the cloud framework for the appropriate action, which is the model in play for a switch in an OpenFlow-based SDN. Another (relatively unmentioned but important) distinction is who bears responsibility for integration. A stack-based model puts the onus on the stack to integrate (via what are usually called "plug-ins" or "drivers") with the component's existing API (assuming one exists). A flow-based model requires the vendor to take responsibility for enabling OpenFlow support natively. Obviously the ecosystem of available resources to perform integration is a magnitude higher with a stack model than with a flow model. While vendors are involved in development of drivers/plug-ins for stacks now, the impact on the product itself is minimal, if any at all, because the integration occurs external to the component. Enabling native OpenFlow support on components requires a lot more internal resources be directed at such a project. Do these differences make for an either-or choice? Actually, they don't. The models are not mutually exclusive and, in fact, might be used in conjunction with one another quite well. A stack based approach to provisioning and management might well be complemented by an OpenFlow SDN in which flows through the network can be updated in real time or, as is often proffered as a possibility, the deployment of new protocols or services within the network. The War that Isn't While there certainly may be a war raging amongst the various stack models, it doesn't appear that a war between OpenFlow and *-Stack is something that's real or ever will be The two foci are very different, and realistically the two could easily be deployed in the same network and solve multiple problems. Network resources may be provisioned and initially configured via a stack but updated in real-time or extended by an SDN controller, assuming such network resources were OpenFlow-enabled in the first place. * That's the vision (and the way it should be) at least. Reality thus far is that the OpenStack API doesn't support most network elements above L3 yet, and CloudStack is tightly coupling API calls to components, rendering this alleged benefit well, not a benefit at all, at least at L4 and above.288Views0likes1CommentF5 Friday on Tuesday: Getting You One Step Closer to a SDDC
#SDN #vmworld F5 Solutions Combine with VMware VXLAN to Support Software Defined Networking

As efforts around SDN (Software-Defined Networking) continue to explode faster than the price of gas, they have begun to diverge into several different focal points. Each area of focus tends to zero in on a narrowly defined set of problems that are in need of being solved. One of those focal points is the layer 2 domain, where limitations both physical and logical constrain mobility of virtual machines across the network. In an increasingly network-agnostic approach to resource provisioning, the limitations imposed by traditional logical networking standards can be a serious impediment to realizing the benefits of a truly elastic, cloud computing-based architectural model. To address the specific issues related to VLAN limitations and topological constraints on rapid provisioning processes, several competing standards have been proposed. The two most recognizable are certainly VXLAN (primarily driven by VMware) and NVGRE (primarily driven by Microsoft).

Organizations are pursuing increasingly dynamic IT deployment models, with software defined data centers (SDDC) becoming top of mind as the end goal. As a strategic point of control in the data center, F5's approach is to seamlessly interoperate with a wide variety of network topologies, including traditional VLANs and emerging SDN-related frameworks such as VXLAN and NVGRE. Such standards efforts are focused on decoupling virtual machines from the underlying network as a way to enable more flexible, scalable and manageable pools of resources across the entire data center. The applications residing in those resource pools, however, must still be delivered. End-users and IT alike expect the same performance, reliability, and security for those applications regardless of where they might be deployed across the data center. That means the ADN must be able to seamlessly transition between both traditional and emerging virtual networking technology so as to consistently deliver applications without compromising on performance or security. By supporting emerging standards in the ADC, customers can create isolated broadcast domains across the data center, enabling dynamic logical networks to span physical boundaries.

F5 recently announced its support for NVGRE with our Microsoft Network Virtualization Gateway, and today we're announcing that we will also support VXLAN by adding VXLAN virtual tunneling endpoint (vTEP) capabilities to BIG-IP. BIG-IP natively supports VXLAN today, but the addition of vTEP capabilities means BIG-IP can act as a gateway, bridging VXLAN and non-VXLAN networks with equal alacrity (a simplified sketch of the encapsulation a vTEP performs appears at the end of this post). That means the ability to use either the physical or virtual BIG-IP form factor to leverage all of F5's ADN services, such as security, acceleration, and optimization, across both VXLAN and traditional networks. New support means organizations can:

Simplify the Expansion of Virtual Networks
With BIG-IP solutions as the bridge, organizations will be able to extend their existing networks from using VLAN to using VXLAN-based topologies. This enables a transitory approach to migration of resources and systems that avoids the disruption otherwise required by technical requirements of VXLAN.

Apply Services across Heterogeneous Networks for Optimized Performance
F5's BIG-IP platform can serve as a networking gateway for all ADN services, making them available to application workloads irrespective of the underlying network topology.
Networks comprised of multiple network technologies will find that a unified gateway approach to providing services yields more predictable results for application delivery.

Improve Application Mobility and Business Continuity
Because VXLAN-based networks can provide functional isolation from one another, virtual machines do not need to change IP addresses while migrating between different data centers or clouds. Eliminating this requirement is a boon for enterprise-class IP-dependent applications that were previously restricted in mobility between environments.

You can learn more about BIG-IP's support for VXLAN at #VMworld Europe this week at booth G100.

Related blogs & articles:
Hybrid Architectures Do Not Require Private Cloud
F5 Friday: Automating Operations with F5 and VMware
F5 ... Wednesday: VMware Business Process Desktop and F5 BIG-IP
The Full-Proxy Data Center Architecture
F5 Friday: A Single Namespace to Rule Them All
F5 Friday: Cookie Cutter vApps Realized
F5 SOLUTIONS COMBINE WITH VXLAN TO SUPPORT SDN
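As referenced above, here is a minimal sketch of what a VXLAN tunnel endpoint does to a frame. The constants reflect the VXLAN specification (RFC 7348: an 8-byte header carrying a 24-bit VNI, carried over UDP on well-known port 4789); the frame contents and helper names are illustrative only.

```python
# Minimal sketch of VXLAN encapsulation as performed by a tunnel endpoint
# (vTEP). Header layout follows RFC 7348; everything else is illustrative.
import socket
import struct

VXLAN_PORT = 4789  # IANA-assigned UDP port for VXLAN


def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Prepend the 8-byte VXLAN header to an inner Ethernet frame."""
    flags = 0x08 << 24                              # "I" flag set: VNI is valid
    header = struct.pack("!II", flags, (vni & 0xFFFFFF) << 8)
    return header + inner_frame


def send_to_remote_vtep(inner_frame: bytes, vni: int, remote_vtep_ip: str) -> None:
    """Ship the encapsulated frame to the remote vTEP over UDP."""
    packet = vxlan_encapsulate(inner_frame, vni)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(packet, (remote_vtep_ip, VXLAN_PORT))
```

A gateway vTEP, as described above, is essentially a device that performs this encapsulation on one side and forwards a plain Ethernet frame on the other, which is what lets VXLAN and non-VXLAN segments behave as one logical broadcast domain.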
A More Practical View of Cloud Brokers

#cloud The conventional view of cloud brokers misses the need to enforce policies and ensure compliance.

During a dinner at VMworld organized by Lilac Schoenbeck of BMC, we had the chance to chat up cloud and related issues with Kia Behnia, CTO at BMC. Discussion turned, naturally I think, to process. That could be because BMC is heavily invested in automating and orchestrating processes. Despite the nomenclature used (business process management), for IT this is a focus on operational process automation, though eventually IT will have to raise the bar and focus on the more businessy aspects of IT and operations.

Alex Williams postulated the decreasing need for IT in an increasingly cloudy world. On the surface this generally seems to be an accurate observation. After all, when business users can provision applications a la SaaS to serve their needs, do you really need IT? Even in cases where you're deploying a fairly simple web site, the process has become so abstracted as to comprise the push of a button, dragging some components after specifying a template, and voila! Web site deployed, no IT necessary.

While from a technical difficulty perspective this may be true (and if we say it is, it is for only the smallest of organizations), there are many responsibilities of IT that are simply overlooked and, as we all know, underappreciated for what they provide, not the least of which is being able to understand the technical implications of regulations and requirements like HIPAA, PCI-DSS, and SOX – all of which have some technical aspect to them and need to be enforced, well, with technology.

See, choosing a cloud deployment environment is not just about "will this workload run in cloud X". It's far more complex than that, with many more variables that are often hidden from the end-user, a.k.a. the business peoples. Yes, cost is important. Yes, performance is important. And these are characteristics we may be able to gather with a cloud broker. But what we can't know is whether or not a particular cloud will be able to enforce other policies – those handed down by governments around the globe and those put into writing by the organization itself. Imagine the horror of a CxO upon discovering an errant employee with a credit card has just violated a regulation that will result in Severe Financial Penalties or worse – jail. These are serious issues that conventional views of cloud brokers simply do not take into account. It's one thing to violate an organizational policy regarding e-mailing confidential data to your Gmail account; it's quite another to violate some of the government regulations that govern not only data at rest but in flight.

A PRACTICAL VIEW of CLOUD BROKERS

Thus, it seems a more practical view of cloud brokers is necessary; a view that enables such solutions to not only consider performance and price, but also the ability to adhere to and enforce corporate and regulatory policies. Such a data center-hosted cloud broker would be able to take into consideration these very important factors when making decisions regarding the optimal deployment environment for a given application. That may be a public cloud, it may be a private cloud – it may be a dynamic data center. The resulting decision (and options) are not nearly as important as the ability for IT to ensure that the technical aspects of policies are included in the decision-making process.
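A hypothetical sketch of the kind of decision logic such an enterprise-hosted broker might apply follows. The provider attributes, capability labels, and scoring are invented for illustration; the point is that compliance constraints filter candidates before price and performance are ever compared.

```python
from dataclasses import dataclass, field


@dataclass
class CloudOffering:
    name: str
    cost_per_hour: float
    avg_latency_ms: float
    # Capabilities the provider can attest to (illustrative labels only)
    capabilities: set[str] = field(default_factory=set)


def select_deployment_target(offerings, required_policies, latency_budget_ms):
    """Filter on policy first, then rank the survivors on cost."""
    compliant = [
        o for o in offerings
        if required_policies <= o.capabilities
        and o.avg_latency_ms <= latency_budget_ms
    ]
    if not compliant:
        return None  # nothing may be deployed; IT must provide a local option
    return min(compliant, key=lambda o: o.cost_per_hour)


# Example: a PCI-scoped workload that needs a web application firewall
offerings = [
    CloudOffering("public-cloud-a", 0.12, 40, {"waf", "encryption-at-rest"}),
    CloudOffering("public-cloud-b", 0.08, 35, {"encryption-at-rest"}),
    CloudOffering("private-cloud", 0.20, 10, {"waf", "encryption-at-rest", "deny-all-default"}),
]
target = select_deployment_target(offerings, {"waf", "deny-all-default"}, 50)
print(target.name if target else "no compliant target")  # -> private-cloud
```

The design choice this illustrates is the ordering: compliance is a hard filter codified by IT, while cost and performance are soft ranking criteria the business can reason about.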
And it must be IT that codifies those requirements into a policy that can be leveraged by the broker and ultimately the end-user to help make deployment decisions. Business users, when faced with requirements for web application firewalls in PCI-DSS, for example, or ensuring a default "deny all" policy on firewalls and routers, are unlikely to be able to evaluate public cloud offerings for the ability to meet such requirements. That's the role of IT, and even wearing rainbow-colored cloud glasses can't eliminate the very real and important role IT has to play here.

The role of IT may be changing, transforming, but it is in no way being eliminated or decreasing in importance. In fact, given the nature of today's environments and threat landscape, the importance of IT in helping to determine deployment locations that at a minimum meet organizational and regulatory requirements is paramount to enabling business users to have more control over their own destiny, as it were.

So while cloud brokers currently appear to be external services, often provided by SIs with a vested interest in cloud migration and the services they bring to the table, ultimately these beasts will become enterprise-deployed services capable of making policy-based decisions that include the technical details and requirements of application deployment along with the more businessy details such as costs. The role of IT will never really be eliminated. It will morph, it will transform, it will expand and contract over time. But business and operational regulations cannot be encapsulated into policies without IT. And for those applications that cannot be deployed into public environments without violating those policies, there needs to be a controlled, local environment into which they can be deployed.

Related blogs and articles:
The Social Cloud - now, with appetizers
The Challenges of Cloud: Infrastructure Diaspora
The Next IT Killer Is… Not SDN
The Cloud Integration Stack
Curing the Cloud Performance Arrhythmia
F5 Friday: Avoiding the Operational Debt of Cloud
The Half-Proxy Cloud Access Broker
The Dynamic Data Center: Cloud's Overlooked Little Brother

Lori MacVittie is a Senior Technical Marketing Manager, responsible for education and evangelism across F5's entire product suite. Prior to joining F5, MacVittie was an award-winning technology editor at Network Computing Magazine. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University. She is the author of XAML in a Nutshell and a co-author of The Cloud Security Rules.

The Dynamic Data Center: Cloud's Overlooked Little Brother
It may be heresy, but not every organization needs or desires all the benefits of cloud.

There are multiple trends putting pressure on IT today to radically change the way they operate. From SDN to cloud, market pressure on organizations to adopt new technological models or utterly fail is immense. That's not to say that new technological models aren't valuable or won't fulfill promises to add value, but it is to say that the market often overestimates the urgency with which organizations must view emerging technology. Markets, mired in their own importance and benefits, also often overlook that not every organization has the same needs or goals or business drivers. After all, everyone wants to reduce their costs and simplify provisioning processes! And yet goals can often be met through application of other technologies that carry less risk, which is another factor in the overall enterprise adoption formula – and one that's often overlooked.

DYNAMIC DATA CENTER versus CLOUD COMPUTING

There are two models competing for data center attention today: the dynamic data center and cloud computing. They are closely related, and both promise similar benefits, with cloud computing offering "above and beyond" benefits that may or may not be needed or desired by organizations in search of efficiency. The dynamic data center originates with the same premises that drive cloud computing: the static, inflexible data center models of the past inhibit growth, promote inefficiency, and are fraught with operational risk. Both seek to address these issues with more flexible, dynamic models of provisioning, scale and application deployment. The differences are actually quite subtle. The dynamic data center is focused on NOC and administration, enabling elasticity and shared infrastructure services that improve efficiency and decrease time to market. Cloud computing, even private cloud, is focused on the tenant, enabling for them self-service capabilities across the entire application deployment lifecycle. A dynamic data center is able to rapidly respond to events because it is integrated and automated to enable responsiveness. Cloud computing is able to rapidly respond to events because it necessarily must provide entry points into the processes that drive elasticity and provisioning to enable the self-service aspects that have become the hallmark of cloud computing.

DATA CENTER TRANSFORMATION: PHASE 4

You may recall the cloud maturity model, comprising five distinct steps of maturation from initial virtualization efforts through a fully cloud-enabled infrastructure. A highly virtualized data center, managed via one of the many available automation and orchestration frameworks, may be considered a dynamic data center. When the operational processes codified by those frameworks are made available as services to consumers (business and developers) within the organization, the model moves from dynamic data center to private cloud. This is where the dynamic data center fits in the overall transformational model. The thing is that some organizations may never desire or need to continue beyond phase 4, the dynamic data center. While cloud computing certainly brings additional benefits to the table, these may be benefits that, when evaluated against the risks and costs to implement (or adopt, if it's public), simply do not measure up. And that's okay.
These organizations are not technological pariahs because they choose not to embark on a journey toward a destination that does not, in their estimation, offer the value necessary to compel an investment. Their business will not, as is too often predicted with an overabundance of hyperbole, disappear or be in danger of being eclipsed by other more agile, younger versions who take to cloud like ducks take to water. If you're not sure about that, consider this employment ad from the most profitable insurance company in 2012, United Health Group – also #22 on the Fortune 500 list – which lists among its requirements "3+ years of COBOL programming." Nuff said.

Referenced blogs & articles:
Is Your Glass of Cloud Half-Empty or Half-Full?
Fortune 500 Snapshot: United Health Group
Hybrid Architectures Do Not Require Private Cloud

1024 Words: Why Devops is Hard
#devops #cloud #virtualization Just because you don't detail the complexity doesn't mean it isn't there.

It should be noted that this is one – just one – Java EE application, and even it is greatly simplified (architectural flowcharts coming from the dev side of the house are extraordinarily complicated, owing to the number of interconnects and integrations required). Most enterprises have multiple applications that require just as many interconnects, with large enterprises numbering their applications in the hundreds.

Related blogs & articles:
Devops Proverb: Process Practice Makes Perfect
Devops is a Verb
1024 Words: The Devops Butterfly Effect
Devops is Not All About Automation
Application Security is a Stack
Capacity in the Cloud: Concurrency versus Connections
Ecosystems are Always in Flux
The Pythagorean Theorem of Operational Risk

SDN, OpenFlow, and Infrastructure 2.0
#infra2 #openflow #sdn #devops As cloud recedes, it reveals what it hid when the hype took it big: a focus on the network.

Like cloud two or three years ago, SDN and OpenFlow dominated the talk at Interop. During a show that's (in theory at least) dedicated to networking, this should be no surprise. Is it making networking sexy again? Yes, insomuch as we're at least talking about networking again, which is about it, considering that the network is an integral component of all the other technology and models that took the spotlight from it in the first place. Considering recent commentary on SDN* and OpenFlow, it seems folks are still divided on OpenFlow and SDN and are trying to figure out where it fits – or if it fits – in modern data center architectures.

Prediction: OpenFlow Is Dead by 2014; SDN Reborn in Network Management
Of course, many of the problems that the SDN vendors state – VM mobility, the limited range of VLAN IDs, the inability to move L2/L3 networking among data centers and the inflexibility of current networking command and control -- are problems faced only by cloud providers and a handful of large, large companies with big, global data centers. In other words: a rather small number of customers.

I think Mike is pretty spot on with this prediction. Essentially, the majority of organizations will end up leveraging SDN for something more akin to network management in a hybrid network architecture, though not necessarily for the reasons he cites. It won't necessarily be a lack of need; it'll be a lack of universal need and the cost of such a massive disruption to the data center. With that in mind, we need to spend some time thinking about where SDN fits in the overall data center architecture. Routing and switching are only one part of the puzzle that is dynamic data centers, after all, and while its target problems include the dynamism inherent in on-demand provisioning of resources, alone it cannot solve this problem. Its current focus lies most often on solving how to get from point A to point B through the network when point B is a moving target – and doing so dynamically, to adjust flow in a way that's optimal given … well, given basic networking principles like shortest path routing. Not that it will remain that way, mind you, but for the nonce that's the way it is. Greg Ferro sums up and explains in his typical straight-to-the-point manner the core concepts behind OpenFlow and SDN in a recent post.

OpenFlow and Software Defined Networking: Is It Routing or Switching?
OpenFlow defines a standard for sending flow rules to network devices so that the Control Plane can add them to the forwarding table for the Data Plane. These flow rules contain fields for elements such as source and destination MAC, source and destination IP, source and destination TCP, VLAN, QoS and MPLS tags and more. The flow rules are then added to the existing forwarding table in the network device. The forwarding table is what all routers and switches use to dispatch frames and packets to their egress ports. OpenFlow value is realised in the Controller, and the most interesting changes are because the Controller will get new capabilities and truly granular control of the traffic flows. Therefore, OpenFlow is neither routing nor switching; it's about forwarding.
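A toy sketch of that forwarding idea: a controller installs match/action entries, and the data plane simply looks packets up against them. The match fields mirror the ones listed above, but the structure is purely illustrative and is not the OpenFlow wire format.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class Match:
    """A subset of the header fields a flow rule can match on."""
    src_mac: Optional[str] = None
    dst_mac: Optional[str] = None
    src_ip: Optional[str] = None
    dst_ip: Optional[str] = None
    dst_tcp_port: Optional[int] = None
    vlan: Optional[int] = None

    def covers(self, packet: dict) -> bool:
        # A wildcard (None) field matches anything; set fields must match exactly.
        return all(
            value is None or packet.get(name) == value
            for name, value in vars(self).items()
        )


class ForwardingTable:
    """The data plane: ordered flow entries installed by a controller."""

    def __init__(self) -> None:
        self.entries: list[tuple[Match, str]] = []

    def install(self, match: Match, action: str) -> None:
        self.entries.append((match, action))  # what a "flow mod" amounts to here

    def forward(self, packet: dict) -> str:
        for match, action in self.entries:
            if match.covers(packet):
                return action
        return "send-to-controller"  # table miss: ask the control plane


table = ForwardingTable()
table.install(Match(dst_ip="10.0.0.5", dst_tcp_port=80), "output:port3")
print(table.forward({"dst_ip": "10.0.0.5", "dst_tcp_port": 80, "vlan": 10}))  # output:port3
```

The interesting part is the last return value: a table miss is what hands the decision back to the centralized controller, which is where the "value is realised" in the quote above.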
It’s About Forwarding This simple statement is central to the “big picture” when you step back and try to put SDN and OpenFlow into the perspective of where it fits in an existing, modern data center architecture because it’s designed to solve specific problems, not necessarily replace the entire network (if you’re starting from scratch, that’s likely a different story). It’s about forwarding and, in particular, it’s about forwarding in a dynamic, volatile environment such as exist in cloud computing models. Where SDN and OpenFlow appear to offer the most value to existing data centers with experiencing this problem is in the network pockets that must deal with the volatility inside the data center at the application infrastructure (server) tiers, where resource lifecycle management in large installations is likely to cause the most disruption. The application delivery tier already includes the notion of separation of control from data plane. That’s the way it’s been for many years, though the terminology did not always exist to describe it as such. That separation has always been necessary to abstract the notion of an “application” or “service” from its implementation and allow for the implementation of reliability and availability strategies through technology like load balancing and failover to be transparent to the consumer. The end-point in the application delivery tier is static; it’s not rigid, but it is static because there’s no need for it to change dynamically. What was dynamic were the resources which have become even more dynamic today, specifically the resources that comprise the abstracted application: the various application instances (virtual machines) that make up the “application”. Elasticity is implemented in the application delivery tier, by seamlessly ensuring that consumers are able to access resources whether demand is high or low. In modern data center models, the virtualization management systems – orchestration, automation, provisioning – are part of the equation, ensuring elasticity is possible by managing the capacity of resources in a manner commensurate with demand seen at the application delivery tier. As resources in the application infrastructure tier are launched and shut down, as they move from physical location to physical location across the network, there is chaos. The diseconomy of scale that has long been mentioned in conjunction with virtualization and cloud computing happens here, inside the bowels of the data center. It is the network that connects the application delivery tier to the application infrastructure tier that is constantly in motion in large installations and private cloud computing environments, and it is here that SDN and OpenFlow show the most promise to achieve the operational efficiencies needed to contain costs and reduce potential errors due to overwhelmingly high volumes of changes in network configurations. What’s missing is how that might happen. While the mechanisms and protocols used to update forwarding and routing tables on switches and routers is well-discussed, the impetus for such updates and changes is not. From where do such changes originate? In a fully automated, self-aware data center (one that does not exist and may never do so) the mere act of provisioning a virtual machine (application) would trigger such changes. 
In more evolutionary data centers (which is more likely), such changes will be initiated by provisioning system events, whether triggered automatically or at the behest of a user (in IT as a Service scenarios). Perhaps through data or options contained in existing network discovery protocols, or through integration between the virtualization management systems and the SDN management plane. One of the core value propositions of SDN and OpenFlow being centralized control, one assumes that such functionality would be realized via integration between the two and not through modification and extension of existing protocols (although both methods would be capable, if we're careful, of maintaining compatibility with non-SDN enabled networking components). This is being referred to in some cases as the "northbound" API, while the connectivity between the controller and the network components is referred to as the "southbound" API.

OpenFlow, the southbound API between the controller and the switch, is getting most of the attention in the current SDN hype-fest, but the northbound API, between the controller and the data center automation system (orchestration), will yield the biggest impact for users. SDN has the potential to be extremely powerful because it provides a platform to develop new, higher level abstractions. The right abstraction can free operators from having to deal with layers of implementation detail that are not scaling well as networks increasingly need to support "Hyper-Scale" data centers.
-- A change is blowing in from the North (-bound API)

In this way, SDN and OpenFlow provide the means by which the diseconomy of scale and volatility inherent in cloud computing and optimized resource utilization models can be brought under control and even reversed.

Infrastructure 2.0

Isn't that the problem Infrastructure 2.0 has been, in part, trying to address? Early on we turned to a similar, centralized model in which IFMAP provided the controller necessary to manage changes in the underlying network. An SDN/OpenFlow-based model simply replaces that central controller with another, and distributes the impact of the scale of change across all network devices by delegating responsibility for implementation to individual components upon an "event" that changes the network topology.

Dynamic infrastructure [aka Infrastructure 2.0] is an evolution of traditional network and application network solutions to be more adaptable, support integration with its environment and other foundational technologies, and to be aware of context (connectivity intelligence).
-- Infrastructure 2.0: As a matter of fact that isn't what it means

What some SDN players are focusing on is a more complete architecture – one that's entirely SDN and unfortunately only likely to happen in green field environments, or over time. That model, too, is interesting in that traditional data center tiers will still "exist" but would not necessarily be hierarchical, and would instead use the programmable nature of the network to ensure proper forwarding within the data center. Which is why this is going to ultimately fall into the realm of expertise owned by devops. But all this is conjecture at this point, with the only implementations truly out there still housed in academia. Whether it will make it into the data center depends on how disruptive and difficult it will be to integrate with existing network architectures. Because just like cloud, enterprises don't rip and replace – they move cautiously in a desired direction.
As in the case of cloud computing, strategies will likely revolve around hybrid architectures enabled by infrastructure integration and collaboration. Which is what infrastructure 2.0 has been about from the beginning.

* Everyone right now is fleshing out definitions of SDN and jockeying for position, each to their own benefit of course. How it plays out remains to be seen, but I'm guessing we're going to see a cloud-like evolution. In other words, chaos of definitions. I don't have a clear one as of yet, so I'm content (for now) to take at face value the definition offered by ONS and pursue how – and where – that might benefit the enterprise. I don't see that it has to be all or nothing, obviously.

Related blogs & articles:
Searching for an SDN Definition: What Is Software-Defined Networking?
OpenFlow/Software-Defined Networking (SDN)
A change is blowing in from the North (-bound API)
Infrastructure 2.0 + Cloud + IT as a Service = An Architectural Parfait
Will DevOps Fork?
The World Doesn't Care About APIs

Your Network Architecture Now on e-Bay for $9.95
Stateless infrastructure and highly dynamic networks may eliminate this issue.

There is great awareness in both consumer and corporate culture with respect to data and second-hand markets. We know that data stored on devices of all shapes and sizes can be a potential source of sensitive information loss if not carefully eliminated before sale or disposal. But consider, too, the potential value of picking up a second-hand switch or router from e-Bay that has not been carefully wiped of all configuration data. ACLs, routing tables, VLANs, comments. These configuration details are often left on infrastructure even as the devices are put out to pasture and sold on the secondary market. These configuration details hold a wealth of information that can provide insight into the architecture of your organization, and make it much easier for attackers to piece together a successful plan of attack in penetrating your defenses.

A switch formerly used by the UK's air-traffic service which still held networking configurations and passwords has been sold on eBay, raising security concerns. The £20 Cisco Catalyst switch was bought by security consultant Michael Kemp, co-founder at Xiphos Research Labs, who quickly discovered that it had been used at the National Air Traffic Services (NATS) centre in Prestwick. Data on the switch included supervisor credentials, internal VLAN and other networking configurations and upstream switch addresses as well as domains, gateways and syslogs.
-- Network Switch Bought on eBay Contained Air Traffic Control Data

Decoupling services from IP addresses eliminates the topology-based configuration that can lead to the discovery of a path through your defenses to your data. Imagine that instead of routing and processing decisions being triggered by traffic arriving on a port, some piece of data inside the traffic triggered the execution of the proper policy, including where the next "hop" in the network should be. Essentially, dynamically building the path through the network based on the content rather than on static, preconfigured paths. Not only would you be able to switch out pieces of infrastructure at any time without disruption, you'd be able to more efficiently process data based on what it is rather than on how the network is connected. Because you'd be making decisions in real time, changes to the network – especially in cloud computing environments leveraging virtual network components – would have minimal if any impact.

In its most basic form, this was the vision of Infrastructure 2.0: a highly dynamic network in which decisions were made intelligently and in real time rather than based on static network designs of the past. The usefulness of such a stateless architecture, however, goes far beyond simply managing volatility in highly virtualized architectures. Eliminating static configurations addresses what is admittedly an esoteric security risk, but it is a risk that exists (obviously) nonetheless. Similarly, the inability to scale a network via configuration to meet the demand of a carrier-class environment is challenging, especially as enterprise-class data centers continue to expand and grow to the point that they are as unwieldy as their carrier-class counterparts. Stateless infrastructure has the potential to address many of the obstacles that stand in the way of the truly dynamic, self-directed network necessary to support the data center of the future.
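A minimal sketch of the content-based decision the post describes: the next "hop" is chosen from what the message says it is, not from which port or VLAN it arrived on. The message fields, service names, and registry contents are all hypothetical.

```python
# Hypothetical sketch: forwarding decided by message content rather than by a
# statically configured topology. Service names and fields are illustrative.

# A dynamic registry mapping "what the data is" to "where it should go next".
# In practice this would be populated and updated at runtime, not hard-coded.
service_registry = {
    "payment": ["10.1.4.21:8443", "10.1.4.22:8443"],
    "catalog": ["10.1.7.10:8080"],
}


def next_hop(message: dict) -> str:
    """Pick a destination based on the declared content type of the message."""
    candidates = service_registry.get(message.get("content_type", ""), [])
    if not candidates:
        return "drop"  # nothing claims this content; refuse rather than guess
    # Trivial spread across candidates; a real implementation would weigh
    # health, load, and policy gathered in real time.
    return candidates[hash(message.get("request_id", "")) % len(candidates)]


print(next_hop({"content_type": "payment", "request_id": "abc-123"}))
```

Because nothing in the decision references a physical port or device identity, swapping out the underlying switch leaves no architectural fingerprint behind on the hardware itself.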
Related blogs & articles:
The Future of Cloud: Infrastructure as a Platform
The Cloud Configuration Management Conundrum
IT as a Service: A Stateless Infrastructure Architecture Model
Multi-Tenancy Requires More Than Just Isolating Customers
You Can't Have IT as a Service Until IT Has Infrastructure as a Service
The Full-Proxy Data Center Architecture
At the Intersection of Cloud and Control…

The Full-Proxy Data Center Architecture
Why a full-proxy architecture is important to both infrastructure and data centers.

In the early days of load balancing and application delivery there was a lot of confusion about proxy-based architectures and in particular the definition of a full-proxy architecture. Understanding what a full-proxy is will be increasingly important as we continue to re-architect the data center to support a more mobile, virtualized infrastructure in the quest to realize IT as a Service.

THE FULL-PROXY PLATFORM

The reason there is a distinction made between "proxy" and "full-proxy" stems from the handling of connections as they flow through the device. All proxies sit between two entities – in the Internet age almost always "client" and "server" – and mediate connections. While all full-proxies are proxies, the converse is not true. Not all proxies are full-proxies, and it is this distinction that needs to be made when making decisions that will impact the data center architecture.

A full-proxy maintains two separate session tables – one on the client-side, one on the server-side. There is effectively an "air gap" isolation layer between the two internal to the proxy, one that enables focused profiles to be applied specifically to address issues peculiar to each "side" of the proxy. Clients often experience higher latency because of lower bandwidth connections, while the servers are generally low latency because they're connected via a high-speed LAN. The optimizations and acceleration techniques used on the client side are far different than those on the LAN side because the issues that give rise to performance and availability challenges are vastly different. A full-proxy, with separate connection handling on either side of the "air gap", can address these challenges. A proxy, which may be a full-proxy but more often than not simply uses a buffer-and-stitch methodology to perform connection management, cannot optimally do so. A typical proxy buffers a connection, often through the TCP handshake process and potentially into the first few packets of application data, but then "stitches" a connection to a given server on the back-end using either layer 4 or layer 7 data, perhaps both. The connection is a single flow from end-to-end and must choose which characteristics of the connection to focus on – client or server – because it cannot simultaneously optimize for both.

The second advantage of a full-proxy is its ability to perform more tasks on the data being exchanged over the connection as it is flowing through the component. Because specific action must be taken to "match up" the connection as it's flowing through the full-proxy, the component can inspect, manipulate, and otherwise modify the data before sending it on its way on the server-side. This is what enables termination of SSL, enforcement of security policies, and performance-related services to be applied on a per-client, per-application basis. This capability translates to broader usage in data center architecture by enabling the implementation of an application delivery tier in which operational risk can be addressed through the enforcement of various policies. In effect, we've created a full-proxy data center architecture in which the application delivery tier as a whole serves as the "full proxy" that mediates between the clients and the applications.
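A minimal sketch of the distinction, using nothing but the standard library: the proxy below terminates the client's TCP connection and opens an entirely separate one to the server, so each side can be tuned, inspected, and policed independently. It is a teaching toy under those assumptions, not a production proxy (single-threaded, no TLS, trivial policy), and the addresses are placeholders.

```python
import socket

LISTEN_ADDR = ("0.0.0.0", 8080)       # client-side listener (illustrative)
BACKEND_ADDR = ("10.0.0.5", 80)        # server-side target (illustrative)


def handle(client_sock: socket.socket) -> None:
    # The server-side connection is created independently of the client-side
    # one: two session "tables", with the proxy as the air gap between them.
    backend = socket.create_connection(BACKEND_ADDR, timeout=5)
    try:
        request = client_sock.recv(65536)
        # Inspection/modification happens here, before anything is forwarded.
        if b"/admin" in request.split(b"\r\n", 1)[0]:
            client_sock.sendall(b"HTTP/1.1 403 Forbidden\r\nContent-Length: 0\r\n\r\n")
            return
        backend.sendall(request)
        while chunk := backend.recv(65536):
            client_sock.sendall(chunk)
    finally:
        backend.close()


def serve() -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as listener:
        listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        listener.bind(LISTEN_ADDR)
        listener.listen()
        while True:
            client, _ = listener.accept()
            with client:
                handle(client)
```

Contrast this with a buffer-and-stitch proxy, which after choosing a server essentially splices the two sockets together and steps out of the data path; the full proxy stays in the path and can keep applying policy in both directions.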
THE FULL-PROXY DATA CENTER ARCHITECTURE

A full-proxy data center architecture installs a digital "air gap" between the client and applications by serving as the aggregation (and conversely disaggregation) point for services. Because all communication is funneled through virtualized applications and services at the application delivery tier, it serves as a strategic point of control at which delivery policies addressing operational risk (performance, availability, security) can be enforced.

A full-proxy data center architecture further has the advantage of isolating end-users from the volatility inherent in highly virtualized and dynamic environments such as cloud computing. It enables solutions such as those used to overcome limitations with virtualization technology, such as those encountered with pod-architectural constraints in VMware View deployments. Traditional access management technologies, for example, are tightly coupled to host names and IP addresses. In a highly virtualized or cloud computing environment, this constraint may spell disaster for either performance or the ability to function, or both. By implementing access management in the application delivery tier – on a full-proxy device – volatility is managed through virtualization of the resources, allowing the application delivery controller to worry about details such as IP address and VLAN segments, freeing the access management solution to concern itself with determining whether this user on this device from that location is allowed to access a given resource.

Basically, we're taking the concept of a full-proxy and expanding it outward to the architecture. Inserting an "application delivery tier" allows for an agile, flexible architecture more supportive of the rapid changes today's IT organizations must deal with. Such a tier also provides an effective means to combat modern attacks. Because of its ability to isolate applications, services, and even infrastructure resources, an application delivery tier improves an organization's capability to withstand the onslaught of a concerted DDoS attack. The magnitude of difference between the connection capacity of an application delivery controller and most infrastructure (and all servers) gives the entire architecture a higher resiliency in the face of overwhelming connections. This ensures better availability and, when coupled with virtual infrastructure that can scale on-demand when necessary, can also maintain performance levels required by business concerns.

A full-proxy data center architecture is an invaluable asset to IT organizations in meeting the challenges of volatility both inside and outside the data center.

Related blogs & articles:
The Concise Guide to Proxies
At the Intersection of Cloud and Control…
Cloud Computing and the Truth About SLAs
IT Services: Creating Commodities out of Complexity
What is a Strategic Point of Control Anyway?
The Battle of Economy of Scale versus Control and Flexibility
F5 Friday: When Firewalls Fail…
F5 Friday: Platform versus Product

F5 Friday: Engineering, Experience, and Bacon?
#iApp #v11 If you were wondering what these three things have to do with F5, read on …

What has a strange sense of humor, an unhealthy love of bacon and donuts, and has held a wide variety of IT roles and responsibilities for a whole lot of years? If you said "the F5 Product Management Engineering team", give yourself a cookie (or better yet some bacon). The question is, why should you care? To understand that, you first have to understand the role that "PME" has within F5. Many of the solutions F5 offers are based not only on the group's effort and experiences, but many are the product of that effort and those experiences. If you ever wondered who was behind our Application Ready Solutions (detailed, step-by-step application-focused deployment guides), now you have your answer: it's PME.

Our most recent release of BIG-IP, v11, also brought with it iApp. A key facet of iApp is the portability of iApp templates and scripts, especially with respect to the ability of F5 and its customers to share existing iApp implementations. The iApp packages that come from F5 after many months of development, collaboration with partners, and lots of testing are almost exclusively created by? You got it, PME.

That's why it was particularly exciting to see Karen Jester, who manages the Product Management Engineering team, begin blogging. If you were looking for insight and an expert voice on iApp – from technical details to business benefits – then Karen's recently launched blog will definitely be right up your alley. She's kicked off a series of blog posts on iApp that are definitely worth a read. What's also helpful is that she's putting iApp into the context of the BIG-IP system as a whole. After all, iApp isn't a disconnected technology – it's part of a larger ecosystem that makes up the F5 control plane comprising application, data, and management. These interconnects and integrations are an important aspect of BIG-IP in general, as it offers operational consistency across a multitude of architectures and environments, ultimately enabling the dynamic data center and IT as a Service. Give Karen's posts a read, bookmark her blog or subscribe to the feed. You won't be disappointed with the insight and information that someone who's inside – both the technology and the organization – can provide.

Karen Jester's iApp Blog Series:
iApp – What is it?
iApp – How they help business
iApp – Benefits
iApp – Full Application Lifecycle Management

Related blogs & articles:
All iApp related posts
All v11 related posts
All F5 Friday posts
iApp Wiki
iApp Codeshare
F5 iApp: Moving Application Delivery Beyond the Network
iApp Information

Live Migration versus Pre-Positioning in the Cloud
The secret to live migration isn't just a fat, fast pipe – it's a dynamic infrastructure.

Very early on in the cloud computing hype cycle we posited different use cases for the "cloud". One that remains intriguing, and increasingly possible thanks to a better understanding of the challenges associated with the process, is cloud bursting. The first time I wrote about cloud bursting and detailed the high-level process, the inevitable question that remained was, "Well, sure, but how did the application get into the cloud in the first place?" Back then there was no good answer because no one had really figured it out yet. Since that time, however, many niche solutions have grown up that provide just that functionality, in addition to the ability to achieve such a "migration" using virtualization technologies. You just choose a cloud and click a button and voila! Yeah. Right. It may look that easy, but under the covers there's a lot more detail required than might at first meet the eye. Especially when we're talking about live migration.

LIVE MIGRATION versus PRE-POSITIONING

Many architectural-based cloud bursting solutions require pre-positioning of the application. In other words, the application must have been transferred into the cloud before it was needed to fulfill additional capacity demands on applications experiencing suddenly high volume. It assumed, in a way, that operators were prescient and budgets were infinite. While it's true you only pay when an image is active in the cloud, there can be storage costs associated with pre-positioning, as well as the inevitable wait time between seeing the need and filling the need for additional capacity. That's because launching an instance in a cloud computing environment is never immediate. It takes time, sometimes as long as ten minutes or more. So either your operators must be able to see ten minutes into the future, or it's possible that the challenge for which you're implementing a cloud bursting strategy (handling overflow) won't be addressed by such a strategy.

Enter live migration. Live migration of applications attempts to remove the issues inherent with pre-positioning (or no positioning at all) by migrating on-demand to a cloud computing environment while maintaining availability of the application. What that means is the architecture must be capable of:

Transferring a very large virtual image across a constrained WAN connection in a relatively short period of time
Launching the cloud-hosted application
Recognizing the availability of the cloud-hosted application and somehow directing users to it
When demand decreases, siphoning users off (quiescing) the cloud-hosted application instance
When no more users are connected to the cloud-hosted application, taking it down

Reading between the lines you should see a common theme: collaboration. The ability to recognize and act on what are essentially "events" occurring in the process requires awareness of the process and a level of collaboration traditionally not found in infrastructure solutions.

CLOUD is an EXERCISE in INFRASTRUCTURE INTEGRATION

Sound familiar? It should. Live migration, and even the ability to leverage pre-positioned content in a cloud computing environment, is at its core an exercise in infrastructure integration.
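The burst lifecycle enumerated above lends itself to a simple control loop, sketched hypothetically below. The thresholds, the provider client, and the load balancer interface are all invented for illustration, and a real implementation would have to account for the multi-minute launch delay discussed earlier.

```python
import time

# Illustrative thresholds; a real deployment would derive these from capacity
# planning and the observed time it takes the provider to launch an instance.
BURST_AT = 0.85       # fraction of local capacity in use before bursting
QUIESCE_AT = 0.50     # fraction below which cloud instances are drained


def burst_controller(local_pool, cloud_provider, balancer, poll_seconds=30):
    cloud_instance = None
    while True:
        utilization = local_pool.current_load() / local_pool.capacity()
        if utilization >= BURST_AT and cloud_instance is None:
            # Launch is not instantaneous; the instance only joins the pool
            # once the provider reports it healthy.
            cloud_instance = cloud_provider.launch("app-image")
            balancer.add_member(cloud_instance.address)
        elif utilization <= QUIESCE_AT and cloud_instance is not None:
            # Stop sending new users, wait for existing sessions to drain,
            # then release the cloud capacity.
            balancer.quiesce_member(cloud_instance.address)
            if balancer.active_connections(cloud_instance.address) == 0:
                cloud_provider.terminate(cloud_instance)
                cloud_instance = None
        time.sleep(poll_seconds)
```

Every branch in that loop depends on information owned by a different component (application load, provider state, connection counts), which is exactly the collaboration problem described next.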
There must be collaboration and sharing of context, automation as well as orchestration of processes, to realize the benefits of applications deployed in "the cloud." Global application delivery services must be able to monitor and infer the health at the site level, and in turn local application delivery services must monitor and infer the health and capacity of the application, if cloud bursting is to successfully support the resiliency and performance requirements of application stakeholders, i.e. the business.

The relationship between capacity, location, and performance of applications is well known. The problem is pulling all the disparate variables together from the client, application, and network components, which individually hold some of the necessary information – but not all. These variables comprise context, and it requires collaboration across all three "tiers" of an application interaction to determine on-demand where any given request should be directed in order to meet service level expectations. That sharing, that collaboration, requires integration of the infrastructure components responsible for directing, routing, and delivering application data between clients and servers, especially when they may be located in physically diverse locations.

As customers begin to really explore how to integrate and leverage cloud computing resources and services with their existing architectures, it will become more and more apparent that at the heart of cloud computing is a collaborative and much more dynamic data center architecture. Without the ability not just to automate and orchestrate, but to integrate and collaborate infrastructure across highly diverse environments, cloud computing – aside from SaaS – will not achieve the successes predicted for it.

Related blogs & articles:
Cloud is an Exercise in Infrastructure Integration
IT as a Service: A Stateless Infrastructure Architecture Model
Cloud is the How not the What
Cloud-Tiered Architectural Models are Bad Except When They Aren't
Cloud Chemistry 101
You Can't Have IT as a Service Until IT Has Infrastructure as a Service
Cloud Computing Making Waves
All Cloud Computing Posts on DevCentral