What is server offload and why do I need it?
One of the tasks of an enterprise architect is to design a framework atop which developers can implement and deploy applications consistently and easily. That consistency is important for internal business continuity and reuse; common objects, operations, and processes can be reused across applications to make development and integration with other applications and systems easier. Architects also often decide where functionality resides and design the base application infrastructure framework. Application servers, identity management, messaging, and integration are all often part of such architecture designs.

Rarely does the architect concern himself or herself with the network infrastructure, as that is the purview of "that group" – the "you know who I'm talking about" group. And for the most part there's no need for architects to concern themselves with network-oriented architecture. Applications should not need to know on which VLAN they will be deployed or what their default gateway might be. But what architects might need to know – and probably should know – is whether the network infrastructure supports "server offload" of some application functions or not, and how that can benefit their enterprise architecture and the applications that will be deployed atop it.

WHAT IT IS

Server offload is a generic term used by the networking industry to indicate functionality designed to improve the performance or security of applications. We use the term "offload" because the functionality is "offloaded" from the server and moved to an application network infrastructure device instead. Server offload works because the application network infrastructure is almost always deployed in front of the web/application servers these days and is in fact acting as a broker (proxy) between the client and the server. Server offload is generally offered by load balancers and application delivery controllers.

You can think of server offload like a relay race. The application network infrastructure device runs the first leg and then hands off the baton (the request) to the server. When the server is finished, the application network infrastructure device gets to run another leg, and then the race is done as the response is sent back to the client.

There are basically two kinds of server offload functionality: protocol processing offload and application-oriented offload.

Protocol processing offload

Protocol processing offload includes functions like SSL termination and TCP optimizations. Rather than enable SSL communication on the web/application server, it can be "offloaded" to an application network infrastructure device and shared across all applications requiring secured communications. Offloading SSL to an application network infrastructure device improves application performance because the device is generally optimized to handle the complex calculations involved in encryption and decryption of secured data, and web/application servers are not.

TCP optimization is a little different. We say TCP session management is "offloaded" from the server, but that's not quite what happens, as TCP connections are obviously still opened, closed, and managed on the server as well. Offloading TCP session management means that the application network infrastructure manages the connections between itself and the server in such a way as to reduce the total number of connections needed without impacting the capacity of the application. This is more commonly referred to as TCP multiplexing, and it "offloads" the overhead of TCP connection management from the web/application server to the application network infrastructure device, with the server effectively giving up control over those connections. By allowing an application network infrastructure device to decide how many connections to maintain and which ones to use to communicate with the server, it can manage thousands of client-side connections using merely hundreds of server-side connections.
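To make that concrete, here's a minimal Python sketch of the server-side half of the idea: a small pool of long-lived connections reused across many client requests. The backend address, pool size, and single-recv() response handling are illustrative assumptions, not how a production multiplexing proxy is actually built.

```python
import queue
import socket

# Backend address and pool size are illustrative assumptions.
BACKEND = ("127.0.0.1", 8080)
POOL_SIZE = 4

class ServerSidePool:
    """A fixed set of long-lived server-side TCP connections, reused to
    service many client-side requests (the essence of TCP multiplexing)."""

    def __init__(self, size=POOL_SIZE):
        self._idle = queue.Queue()
        for _ in range(size):
            self._idle.put(socket.create_connection(BACKEND))

    def forward(self, request_bytes):
        # Borrow an already-open server-side connection rather than
        # paying for a new TCP handshake per client request.
        conn = self._idle.get()
        try:
            conn.sendall(request_bytes)
            return conn.recv(65536)  # simplified: one recv per response
        finally:
            self._idle.put(conn)  # return it for reuse by the next request
```

However many clients call forward() concurrently, the server only ever sees POOL_SIZE connections; that is where the reclaimed socket overhead comes from.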
Reducing the overhead associated with opening and closing TCP sockets on the web/application server improves application performance and actually increases the user capacity of servers. TCP offload is beneficial to all TCP-based applications, but is particularly beneficial for Web 2.0 applications making use of AJAX and other near-real-time technologies that maintain one or more connections to the server for their functionality. Protocol processing offload does not require any modifications to the applications.

Application-oriented offload

Application-oriented offload includes the ability to implement shared services on an application network infrastructure device. This is often accomplished via a network-side scripting capability, but some functionality has become so commonplace that it is now built into the core features available on application network infrastructure solutions. Application-oriented offload can include functions like cookie encryption/decryption, compression, caching, URI rewriting, HTTP redirection, DLP (data leak prevention), selective data encryption, application security functionality, and data transformation. When network-side scripting is available, virtually any kind of pre- or post-processing can be offloaded to the application network infrastructure and thereafter shared with all applications.

Application-oriented offload works because the application network infrastructure solution is mediating between the client and the server and has the ability to inspect and manipulate the application data. The benefits are that the services implemented can be shared across multiple applications, and in many cases the functionality removes the need for the web/application server to handle a specific request at all. For example, HTTP redirection can be fully accomplished on the application network infrastructure device. HTTP redirection is often used as a means to handle application upgrades or commonly mistyped URIs, or as part of the application logic when certain conditions are met.

Application security offload usually falls into this category because it is application-specific – or at least application-data-specific. It can include scanning URIs and data for malicious content, validating the existence of specific cookies/data required for the application, and so on. This kind of offload improves server efficiency and performance, but a bigger benefit is consistent, shared security across all applications for which the service is enabled.

Some application-oriented offload can require modification to the application, so it is important to design such features into the application architecture before development and deployment. While it is certainly possible to add such functionality into the architecture after deployment, it is always easier to do so at the beginning.
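As a concrete illustration of the redirection example above, here's a hedged Python sketch of an intermediary that answers certain requests itself and never bothers the server pool. The paths and target URL are made up; a real application delivery controller does this in optimized native code or network-side script rather than Python.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Illustrative redirect map; in a real deployment this policy lives on
# the application network infrastructure device, not in the application.
REDIRECTS = {"/old-app": "https://www.example.com/new-app"}

class OffloadHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        target = REDIRECTS.get(self.path)
        if target:
            # Fully answered at the intermediary; the web/application
            # server never sees this request at all.
            self.send_response(301)
            self.send_header("Location", target)
            self.end_headers()
        else:
            # A real proxy would forward to the server pool here;
            # this sketch just signals "not handled".
            self.send_response(502)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8081), OffloadHandler).serve_forever()
```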
WHY YOU NEED IT

Server offload is a way to increase the efficiency of servers and improve application performance and security. It increases the efficiency of servers by alleviating the need for the web/application server to consume resources performing tasks that can be performed more efficiently on an application network infrastructure solution. The two best examples of this are SSL encryption/decryption and compression. Both are CPU-intensive operations that can consume 20-40% of a web/application server's resources. By offloading these functions to an application network infrastructure solution, servers "reclaim" those resources and can use them instead to execute application logic, serve more users, handle more requests, and do so faster.

Server offload improves application performance by allowing the web/application server to concentrate on what it is designed to do – serve applications – while putting the onus for performing ancillary functions on a platform that is more optimized to handle those functions.

Server offload provides these benefits whether you have a traditional client-server architecture or have moved (or are moving) toward a virtualized infrastructure. Applications deployed on virtual servers still use TCP connections and SSL and run applications, and therefore will benefit the same as those deployed on traditional servers.

3 Really good reasons you should use TCP multiplexing
SOA & Web 2.0: The Connection Management Challenge
Understanding network-side scripting
I am in your HTTP headers, attacking your application
Infrastructure 2.0: As a matter of fact that isn't what it means

Infrastructure 2.0 + Cloud + IT as a Service = An Architectural Parfait
Infrastructure 2.0 ≠ cloud computing ≠ IT as a Service. There is a difference between Infrastructure 2.0 and cloud. There is also a difference between cloud and IT as a Service. But they do go together, like a parfait. And everybody likes a parfait…

The introduction of the newest member of the cloud computing buzzword family is "IT as a Service." It is understandably causing some confusion because, after all, isn't that just another way to describe "private cloud"? No, actually it isn't. There's a lot more to it than that, and it's very applicable to both private and public models. Furthermore, equating "cloud computing" to "IT as a Service" does both a big disservice, as does making synonyms of "Infrastructure 2.0" and "cloud computing." These three [ concepts | models | technologies ] are highly intertwined and in some cases even interdependent, but they are not the same. In the simplest explanation possible: Infrastructure 2.0 enables cloud computing, which enables IT as a Service. Now that we've got that out of the way, let's dig in.

ENABLE DOES NOT MEAN EQUAL TO

One of the core issues seems to be the rush to equate "enable" with "equal." There is a relationship between these three technological concepts, but they are in no wise equivalent, nor should they be treated as such. Like SOA, the differences between them revolve primarily around the level of abstraction and the layers at which they operate. Not the layers of the OSI model or the technology stack, but the layers of a data center architecture. Let's start at the bottom, shall we?

INFRASTRUCTURE 2.0

At the very lowest layer of the architecture is Infrastructure 2.0. Infrastructure 2.0 is focused on enabling dynamism and collaboration across the network and application delivery network infrastructure. It is the way in which traditionally disconnected (from a communication and management point of view) data center foundational components are imbued with the ability to connect and collaborate. This is primarily accomplished via open, standards-based APIs that provide a granular set of operational functions that can be invoked from a variety of programmatic methods such as orchestration systems, custom applications, and integration with traditional data center management solutions. Infrastructure 2.0 is about making the network smarter, both from a management and a run-time (execution) point of view, but in the case of its relationship to cloud and IT as a Service the view is primarily focused on management. Infrastructure 2.0 includes the service-enablement of everything from routers to switches, from load balancers to application acceleration, from firewalls to web application security components to server (physical and virtual) infrastructure. It is, distilled to its core essence, API-enabled components.

CLOUD COMPUTING

Cloud computing is the closest to SOA in that it is about enabling operational services in much the same way as SOA was about enabling business services. Cloud computing takes the infrastructure-layer services and orchestrates them together to codify an operational process that provides a more efficient means by which compute, network, storage, and security resources can be provisioned and managed. This, like Infrastructure 2.0, is an enabling technology. Alone, these operational services are generally discrete and are packaged up specifically as the means to an end – on-demand provisioning of IT services. Cloud computing is the service-enablement of operational services and also carries along the notion of an API.
In the case of cloud computing, this API serves as a framework through which specific operations can be accomplished in a push-button-like manner.

IT as a SERVICE

At the top of our technology pyramid – as is likely obvious at this point, we are building up to the "pinnacle" of IT by laying more aggressively focused layers atop one another – we have IT as a Service. IT as a Service, unlike cloud computing, is designed to be consumed not only by other IT-minded folks, but also by (allegedly) business folks. IT as a Service broadens the provisioning and management of resources and begins to include not only operational services but those services that are more, well, businessy, such as identity management and access to resources.

IT as a Service builds on the services provided by cloud computing, which is often called a "cloud framework" or a "cloud API," and provides the means by which resources can be provisioned and managed. Now that sounds an awful lot like "cloud computing," but the abstraction is a bit higher than what we expect with cloud. Even in a cloud computing API we are still interacting more directly with operational and compute-type resources. We're provisioning, primarily, infrastructure services, but we are doing so at a much higher layer and in a way that makes it easy for business folks, application developers, and analysts alike to do so. An example is probably in order at this point.

THE THREE LAYERS in the ARCHITECTURAL PARFAIT

Let us imagine a simple "application" which itself requires only one server and which must be available at all times. That's the "service" IT is going to provide to the business. In order to accomplish this seemingly simple task, there's a lot that actually has to go on under the hood, within the bowels of IT.

LAYER ONE

Consider, if you will, what fulfilling that request means. You need at least two servers and a load balancer, you need storage, and you need – albeit unknown to the business user – firewall rules to ensure the application is only accessible to those whom you designate. So at the bottom layer of the stack (Infrastructure 2.0) you need a set of components that match these functions, and they must all be enabled with an API (or at a minimum be able to be automated via traditional scripting methods). Now the actual task of configuring a load balancer is not just a single API call. Ask RackSpace, or GoGrid, or Terremark, or any other cloud provider. It takes multiple steps to authenticate and configure – in the right order – that component. The same is true of many components at the infrastructure layer: the APIs are necessarily granular enough to provide the flexibility needed to be combined in such a way as to be customizable for each unique environment in which they may be deployed. So what you end up with is a set of infrastructure services that comprise the appropriate API calls for each component based on the specific operational policies in place.
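To see why the layering matters, consider a sketch like the following. Every function name and step here is hypothetical – the point is only the shape: granular layer-one API calls, invoked in the right order, composed into a single layer-two operational service.

```python
# All names and endpoints below are hypothetical, for illustration only.

def authenticate(host: str, user: str, password: str) -> dict:
    """Layer-one call: obtain a session/token from one device."""
    return {"host": host, "token": "..."}  # placeholder session object

# Granular layer-one operations; bodies elided in this sketch.
def create_pool(session, name, members): ...
def create_virtual_server(session, name, pool): ...
def open_firewall(session, port): ...

def provision_web_service(session, app: str, servers: list):
    """Layer-two 'cloud' service: one push-button operation that hides
    the ordering and granularity of the underlying layer-one calls."""
    create_pool(session, f"{app}-pool", servers)
    create_virtual_server(session, f"{app}-vs", f"{app}-pool")
    open_firewall(session, 443)
```

The layer above (IT as a Service) would expose provision_web_service() as a catalog item with a size and a price, never the individual calls.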
LAYER TWO

At the next layer up you're providing even more abstract frameworks. The "cloud API" at this layer may provide services such as "auto-scaling" that require a great deal of configuration and registration of components with other components. There's automation and orchestration occurring at this layer of the IT Service Stack, as it were, that is much more complex, but more narrowly focused, than at the previous infrastructure layer. It is at this layer that the services become more customized and able to provide business- and customer-specific options. It is also at this layer that things become more operationally focused, with the provisioning of "application resources" comprising perhaps the provisioning of both compute and storage resources. This layer also lays the foundation for metering and monitoring (because you want to provide visibility, right?), which essentially overlays – i.e., makes a service of – multiple infrastructure resource monitoring services.

LAYER THREE

At the top layer is IT as a Service, and this is where systems become very abstracted and get turned into the IT King "A La Carte" Menu that is the ultimate goal according to everyone who's anyone (and a few people who aren't). This layer offers an interface to the cloud in such a way as to make self-service possible. It may not be Infrabook or even very pretty, but as long as it gets the job done, cosmetics are just enhancing the value of what exists in the first place. IT as a Service is the culmination of all the work done at the previous layers to fine-tune services until they are at the point where they are consumable – in the sense that they are easy to understand and require no real technical understanding of what's actually going on. After all, a business user or application developer doesn't really need to know how the server and storage resources are provisioned, just in what sizes and how much it's going to cost. IT as a Service ultimately enables the end user – whomever that may be – to easily "order" IT services to fulfill the application-specific requirements associated with an application deployment. That means availability, scalability, security, monitoring, and performance.

A DYNAMIC DATA CENTER ARCHITECTURE

One of the first questions that should come to mind is: why does it matter? After all, one could cut out the "cloud computing" layer and go straight from infrastructure services to IT as a Service. While that's technically true, it eliminates one of the biggest benefits of a layered and highly abstracted architecture: agility. By presenting each layer to the layer above as services, we are effectively employing the principles of a service-oriented architecture and separating the implementation from the interface. This provides the ability to modify the implementation without impacting the interface, which means less downtime and very little – if any – modification in the layers above the layer being modified.

This translates into, at the lowest level, vendor agnosticism and the ability to avoid vendor lock-in. If two components, say a Juniper switch and a Cisco switch, are enabled with the means by which they can be exposed as services, then it becomes possible to swap the two at the implementation layer without requiring the changes to trickle upward through the interface and into the higher layers of the architecture. It's polymorphism applied to a data center operation rather than a single object's operations, to put it in developer's terms. It's SOA applied to a data center rather than an application, to put it in an architect's terms. It's an architectural parfait and, as we all know, everybody loves a parfait, right?

Related blogs & articles:
Applying Scalability Patterns to Infrastructure Architecture
The Other Hybrid Cloud Architecture
The New Distribution of The 3-Tiered Architecture Changes Everything
Infrastructure 2.0: Aligning the network with the business (and ...
Infrastructure 2.0: As a matter of fact that isn't what it means
Infrastructure 2.0: Flexibility is Key to Dynamic Infrastructure
Infrastructure 2.0: The Diseconomy of Scale Virus
Lori MacVittie - Infrastructure 2.0
Infrastructure 2.0: Squishy Name for a Squishy Concept
Pay No Attention to the Infrastructure Behind the Cloudy Curtain
Making Infrastructure 2.0 reality may require new standards
The Inevitable Eventual Consistency of Cloud Computing
Cloud computing is not Burger King. You can't have it your way. Yet.

SDN and OpenFlow are not Interchangeable
#SDN #OpenFlow They aren't, seriously. They are not synonyms. Stop conflating them.

New technology always runs into problems with terminology if it's lucky enough to become the "next big thing." SDN is currently in that boat, with a nearly cloud-like variety of definitions and accompanying benefits. I've seen SDN defined so tightly as to exclude any model that doesn't include OpenFlow. Conversely, I've seen it defined so vaguely as to include pretty much any network that might have a virtual network appliance deployed somewhere in the data path. It's important to remember that SDN and OpenFlow are not synonymous. SDN is an architectural model. OpenFlow is an implementation API. So is XMPP, the southbound protocol used by Arista's CloudVision solution. So are the potentially vendor-specific southbound protocols that might be included in OpenDaylight's model.

SDN is an architectural model. OpenFlow is an implementation API. It is one possible southbound API protocol, admittedly one that is rapidly becoming the favored son of SDN. It's certainly gaining mindshare, with a plurality of respondents to a recent InformationWeek survey on SDN having at least a general idea what OpenFlow is all about, and nearly half indicating familiarity with the protocol.

The reason it is important not to conflate OpenFlow with SDN is that both the API and the architecture are individually beneficial on their own. There is no requirement that an OpenFlow-enabled network infrastructure must be part of an SDN, for example. Organizations looking for benefits around management and automation of the network might simply choose to implement an OpenFlow-based management framework using custom scripts or software, without adopting an SDN architecture wholesale. Conversely, there are plenty of examples of SDN offerings that do not rely on OpenFlow, but rather on some other protocol of choice. OpenFlow is, after all, a work in progress, and there are capabilities required by organizations that simply don't exist yet in the current specification – and thus in implementations.

OpenFlow Lacks Scope

Even ignoring the scalability issues with OpenFlow, there are other reasons why OpenFlow might not be THE protocol – or the only protocol – used in SDN implementations. Certainly for layer 2-3, OpenFlow makes a lot of sense. It is designed specifically to carry L2-3 forwarding information from the controller to the data plane. What it is not designed to do is transport or convey forwarding information that occurs in the higher layers of the stack, such as L4-7, which might require application-specific details on which the data plane will make forwarding decisions. That means there's room for another protocol, or an extension of OpenFlow, in order to enable inclusion of critical L4-7 data path elements in an SDN architecture.
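A rough sketch makes the scope gap visible. The match fields below approximate what an OpenFlow 1.0-era flow entry can express (the names are illustrative, not the spec's exact fields); the routing function after it is the kind of L4-7 decision that simply has no home in such a match, because it requires payload inspection by a proxy rather than packet-header matching.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FlowMatch:
    """Roughly the L2-4 header fields an OpenFlow 1.0-style match can
    express. Field names here are illustrative approximations."""
    in_port: Optional[int] = None
    eth_src: Optional[str] = None
    eth_dst: Optional[str] = None
    ip_src: Optional[str] = None
    ip_dst: Optional[str] = None
    ip_proto: Optional[int] = None
    tcp_src: Optional[int] = None
    tcp_dst: Optional[int] = None

def l7_route(http_host: str, uri: str) -> str:
    """An L4-7 forwarding decision no FlowMatch can encode: it depends
    on HTTP payload content, not on packet headers."""
    if http_host == "api.example.com" and uri.startswith("/v2/"):
        return "pool-b"
    return "pool-a"
```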
The fact that OpenFlow does not address L4-7 (and is not likely to anytime soon) is evident in the recent promulgation of service chaining proposals. Service chaining is rising as the way in which L4-7 services will be included in SDN architectures. Lest we lay all the blame on OpenFlow for this direction, remember that there are issues around scaling and depth of visibility with SDN controllers as they relate to directing L4-7 traffic, and thus it was likely that the SDN architecture would have evolved to alleviate those issues anyway. But the lack of support in OpenFlow for L4-7 is another line-item justification for why the architecture is being extended: it lacks the scope to deal with the more granular, application-focused rules required.

Thus, it is important to recognize that SDN is an architectural model, and OpenFlow an implementation detail. The two are not interchangeable, and as SDN itself matures we will see more changes to the core assumptions on which the architecture is based that will require adaptation.

F5 and Versafe: Because Mobility Matters
#F5 #security #cloud #mobile #context Context means more visibility into devices, networks, and applications, even when they're unmanaged.

Mobility is a significant driver of technology today. Whether it's mobility of applications between data center and cloud, between web and mobile device platforms, or of users moving from corporate to home to publicly available networks, mobility is a significant factor impacting all aspects of application delivery – but in particular, security.

    Server virtualization, BYOD, SaaS, and remote work all create new security problems for data center managers. No longer can IT build a security wall around its data center; security must be provided throughout the data and application delivery process. This means that the network must play a key role in securing data center operations as it "touches" and "sees" all traffic coming in and out of the data center.
    -- Lee Doyle, GigaOM, "Survey: SDN benefits unclear to enterprise network managers" 8/29/2013

It's a given that corporate data and access to applications need to be protected when delivered to locations outside corporate control. Personal devices, home networks, and cloud storage all introduce the risk of information loss through a variety of attack vectors. But that's not all that poses a risk. Mobility of customers, too, is a source of potential disaster waiting to happen, as control over behavior as well as technology is completely lost. Industries based on consumers and using technology to facilitate business transactions are particularly at risk from consumer mobility and, more importantly, from the attackers that target them. If the risk posed by successful attacks – phishing, pharming and social engineering – isn't enough to give the CISO an ulcer, the cost of supporting sometimes technically challenged consumers will be. Customer service and support has become in recent years not only a help line for the myriad web and mobile applications offered by an organization, but a security help desk as well, as consumers confused by e-mail and web attacks make use of such support lines.

F5 and Security

F5 views security as a holistic strategy that must be able to dig into not just the application and corporate network, but into the device and application as well, along with the networks over which users access both mobile and web applications. That's where Versafe comes in, with its unique combination of client-side intelligent visibility and subscription-based security service. Versafe's technology employs its client-side visibility and logic together with expert-driven security operations to ensure real-time detection of a variety of threat vectors common to web and mobile applications alike. Its coverage of browsers, devices and users is comprehensive. Every platform, every user and every device can be protected from a vast array of threats, including those not covered by traditional solutions, such as session hijacking. Versafe approaches web fraud by monitoring the integrity of the session data that the application expects to see between itself and the browser. This method isn't vulnerable to "zero-day" threats: malware variants, new proxy/masking techniques, or fraudulent activity originating from devices, locations or users who haven't yet accumulated digital fraud fingerprints.

Continuous Delivery Meets Continuous Security

Versafe's solution can accomplish such comprehensive coverage because it's clientless, relying on injection into web content in real time. That's where F5 comes in.
Using F5 iRules, the appropriate Versafe code can be injected dynamically into web pages to scan and detect potential application threats, including script injection, trojans, and pharming attacks. Injection in real time through F5 iRules eliminates reliance on scanning and updating heterogeneous endpoints and, of course, on relying on consumers to install and maintain such agents. This allows the delivery process to scale seamlessly along with users and devices and reasserts control over processes and devices not under IT control, essentially securing unsecured devices and lines of communication. Injection-based delivery also means no impact on application developers or applications, which means it won't reduce application development and deployment velocity.
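For the curious, here's a minimal sketch of what response-side injection looks like in principle. This is plain Python WSGI middleware, not iRules syntax, and the script URL is invented; it simply shows how a mediating proxy can add client-side code to every page without touching the application itself.

```python
# Hypothetical monitoring script URL -- a stand-in, not a real endpoint.
SNIPPET = b'<script src="https://example.com/monitor.js"></script>'

class InjectMiddleware:
    """WSGI middleware that injects a script tag into HTML responses,
    analogous in spirit to injection performed at the delivery tier."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        captured = {}

        def capture(status, headers, exc_info=None):
            captured["status"], captured["headers"] = status, headers

        body = b"".join(self.app(environ, capture))
        # Insert just before </body>, so every page carries the
        # client-side code with zero changes to the application.
        body = body.replace(b"</body>", SNIPPET + b"</body>", 1)
        headers = [(k, v) for k, v in captured["headers"]
                   if k.lower() != "content-length"]
        headers.append(("Content-Length", str(len(body))))
        start_response(captured["status"], headers)
        return [body]
```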
It also enables real-time, up-to-the-minute detection of and protection against threats, because the injected Versafe code is always communicating with the latest security information maintained by Versafe at its cloud-based Security Operations Center. User protection is always on, no matter where the user might be or on what device, and doesn't require updating or action on the part of the user. The clientless aspect of Versafe means it has no impact on user experience. Versafe further takes advantage of modern browser technology to execute with no performance impact on the user experience. That's a big deal, because a variety of studies on real behavior indicate that performance hits of even a second on load times can impact revenue and user satisfaction with web applications.

Both the web and mobile offerings from Versafe further ensure transaction integrity by assessing a variety of device-specific and behavioral variables such as device ID, mouse and click patterns, sequencing of and timing between actions, and continuous monitoring of JavaScript functions. These kinds of checks are sort of an automated Turing test: a system able to determine whether an end user is really a human being – or a bot bent on carrying out a malicious activity. But it's not just about the mobility of customers; it's also about the mobility – and versatility – of modern attackers. To counter a variety of brand, web and domain abuse, Versafe's cloud-based 24x7x365 Security Operations Center and Malware Analysis Team proactively monitors for organization-specific fraud and attack scheming across all major social and business networks to enable rapid detection and real-time alerting of suspected fraud.

EXPANDING the F5 ECOSYSTEM

The acquisition of Versafe and its innovative security technologies expands the F5 ecosystem by exploiting the programmable nature of its platform. Versafe technology supports and enhances F5's commitment to delivering context-aware application services by further extending our visibility into the user and device domain. Its cloud-based subscription service complements F5's IP Intelligence Service, which provides a variety of similar service-based data that augments F5 customers' ability to make context-aware decisions based on security and location data. Coupled with existing application security services such as web application and application delivery firewalls, Versafe adds to the existing circle of F5 application security services comprising user, network, device and application, while adding brand and reputation protection to its already robust security service catalog. We're excited to welcome Versafe into the F5 family and to have the opportunity to expand our portfolio of delivery services.

More information on Versafe:
Versafe
Versafe | Anti-Fraud Solution (Anti Phishing, Anti Trojan, Anti Pharming)
Versafe Identifies Significant Joomla CMS Vulnerability & Corresponding Spike in Phishing, Malware Attacks
Joomla Exploit Enabling Malware, Phishing Attacks to be Hosted from Genuine Sites
'Eurograbber' online banking scam netted $47 million

4 Things You Need in a Cloud Computing Infrastructure
Cloud computing is, at its core, about delivering applications or services in an on-demand environment. Cloud computing providers will need to support hundreds of thousands of users and applications/services and ensure that they are fast, secure, and available. In order to accomplish this goal, they'll need to build a dynamic, intelligent infrastructure with four core properties in mind: transparency, scalability, monitoring/management, and security.

Transparency

One of the premises of cloud computing is that services are delivered transparently, regardless of the physical implementation within the "cloud." Transparency is one of the foundational concepts of cloud computing, in that the actual implementation of services in the "cloud" is obscured from the user. This is actually another version of virtualization, where multiple resources appear to the user as a single resource. It is unlikely that a single server or resource will always be enough to satisfy demand for a given provisioned resource, which means transparent load balancing and application delivery will be required to enable the transparent horizontal scaling of applications on demand. The application delivery solution used to provide transparent load-balancing services will need to be automated and integrated into the provisioning workflow process such that resources can be provisioned on demand at any time.

For example, when a service is provisioned to a user or organization, it may need only a single server (real or virtual) to handle demand. But as more users access that service it may require the addition of more servers (real or virtual). Transparency allows those additional servers to be added to the provisioned service without interrupting the service or requiring reconfiguration of the application delivery solution. If the application delivery solution is integrated via a management API with the provisioning workflow system, then transparency is also achieved through the automated provisioning and de-provisioning of resources.

Scalability

Obviously cloud computing service providers are going to need to scale up and build out "mega data centers." Scalability is easy enough if you've deployed the proper application delivery solution, but what about scaling the application delivery solution itself? That's often not so easy, and it usually isn't a transparent process; there's configuration work and, in many cases, re-architecting of the network. The potential to interrupt services is huge and, assuming that cloud computing service providers are servicing hundreds of thousands of customers, unacceptable. The application delivery solution is going to need to provide the ability to transparently scale not only the service infrastructure, but itself as well. That's a tall order, and something very rarely seen in an application delivery solution. Making things even more difficult will be the need to scale on demand in real time in order to make the most efficient use of application infrastructure resources. Many postulate that this will require a virtualized infrastructure such that resources can be provisioned and de-provisioned quickly, easily and, one hopes, automatically. The "control node" often depicted in high-level diagrams of the "cloud computing mega data center" will need to provide on-demand dynamic application scalability. This means integration with the virtualization solution and the ability to be orchestrated into a workflow or process that manages provisioning.
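A toy sketch of what that control-node behavior might look like: a loop that measures application response times and, when the pool looks overwhelmed, hands off to a provisioning hook. The URLs, threshold, and provisioning callback are all assumptions for illustration, not a real provider's implementation.

```python
import time
import urllib.request

# Illustrative health-check targets and threshold.
HEALTH_URLS = ["http://10.0.0.11/health", "http://10.0.0.12/health"]
MAX_LATENCY = 0.5  # seconds

def measure(url):
    """Return response time in seconds, or None if unreachable."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=2):
            return time.monotonic() - start
    except OSError:
        return None  # unreachable counts as unhealthy

def control_loop(provision_instance):
    """When every member looks slow or down, participate in provisioning
    another instance -- the scalability/monitoring role described above."""
    while True:
        samples = [measure(u) for u in HEALTH_URLS]
        degraded = [s for s in samples if s is None or s > MAX_LATENCY]
        if len(degraded) == len(samples):
            provision_instance()  # hand off to the orchestration system
        time.sleep(10)
```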
Intelligent Monitoring

In order to achieve the on-demand scalability and transparency required of a mega data center in the cloud, the control node – i.e., the application delivery solution – will need to have intelligent monitoring capabilities. It will need to understand when a particular server is overwhelmed and when network conditions are adversely affecting application performance. It needs to know the applications and services being served from the cloud and understand when behavior is outside accepted norms. While this functionality can certainly be implemented externally in a massive management and monitoring system, if the control node sees clients, the network, and the state of the applications, it is in the best position to understand the real-time conditions and performance of all involved parties without requiring the heavy lifting of correlation that would be required by an external monitoring system. But more than just knowing when an application or service is in trouble, the application delivery mechanism should be able to take action based on that information. If an application is responding slowly and that is detected by the monitoring mechanism, then the delivery solution should adjust application requests accordingly. If the number of concurrent users accessing a service is reaching capacity, then the application delivery solution should be able not only to detect that through intelligent monitoring but to participate in the provisioning of another instance of the service in order to ensure service to all clients.

Security

Cloud computing is somewhat risky in that if the security of the cloud is compromised, potentially all services and associated data within the cloud are at risk. That means that the mega data center must be architected with security in mind, and security must be considered a priority for every application, service, and network infrastructure solution that is deployed. The application delivery solution, as the "control node" in the mega data center, is necessarily one of the first entry points into the cloud data center and must itself be secure. It should also provide full application security – from layer 2 to layer 7 – in order to thwart potential attacks at the edge. Network security, protocol security, transport layer security, and application security should be prime candidates for implementation at the edge of the cloud, in the control node. While there certainly will be, and should be, additional security measures deployed within the data center, stopping as many potential threats as possible at the edge of the cloud will alleviate much of the risk to the internal service infrastructure.

Related Articles from around the Web:
What cloud computing really means
How Cloud & Utility Computing Are Different
The dangers of cloud computing
Guide To Cloud Computing

F5 Friday: Programmability and Infrastructure as Code
#SDN #ADN #cloud #devops What does that mean, anyway?

SDN and devops share some common themes. Both focus heavily on the notion of programmability in network devices as a means to achieve specific goals. For SDN it's flexibility and rapid adaptation to changes in the network. For devops, it's more a focus on the ability to treat "infrastructure as code" as a way to integrate into automated deployment processes. Each of these notions is just different enough that systems supporting one don't automatically support the other. An API focused on management or configuration doesn't necessarily provide the flexibility of execution exhorted by SDN proponents as a significant benefit to organizations. And vice versa.

INFRASTRUCTURE as CODE

Devops is a verb; it's something you do. Optimizing application deployment lifecycle processes is a primary focus, and to do that many would say you must treat "infrastructure as code." Doing so enables the integration and automation of deployment processes (including configuration and integration) that allow operations to scale along with the environment and demand. The result is automated best practices: the codification of policy and process that assures repeatable, consistent and successful application deployments. F5 supports the notion of infrastructure as code (and has since 2003 or so) in two ways: iControl and iApp.

iControl

iControl, the open, standards-based API for the entire BIG-IP platform, remains the primary integration point for partners and customers alike. Whether it's inclusion in Opscode Chef recipes or pre-packaged solutions with systems from HP, Microsoft, or VMware, iControl offers the ability to manage the control plane of BIG-IP from just about anywhere. iControl is service-enabled and has been accessed and integrated through more programmatic languages than you can shake a stick at. Python, PERL, Java, PHP, C#, PowerShell… if it can access web-based services, it can communicate with BIG-IP via iControl.

iApp

A later addition to the BIG-IP platform, iApp is best-practice application delivery service deployment, codified. iApps are service- and application-oriented, enabling operations and consumers of IT as a Service to more easily deploy requisite application delivery services without requiring intimate knowledge of the hundreds of individual network attributes that must be configured. iApp is also used in conjunction with iControl to better automate and integrate application delivery services into an IT as a Service environment. Using iApp to codify performance and availability policies based on application and business requirements, consumers – through pre-integrated solutions – can simply choose an appropriate application delivery "profile" along with their application to ensure not only deployment but production success.
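As a hedged illustration of what "infrastructure as code" looks like in practice, here's a short sketch that codifies a known-good pool configuration through an iControl REST-style call. The host, credentials, and member addresses are placeholders; the endpoint and payload shape follow iControl REST conventions but should be verified against the BIG-IP version in use (and note the original post's integrations centered on the earlier SOAP-based iControl, whose calls differ).

```python
import requests  # third-party; pip install requests

# Placeholder host and credentials -- illustrative only.
BIGIP = "https://bigip.example.com"
AUTH = ("admin", "admin")

def create_pool(name, members):
    """Codify a known-good pool configuration instead of hand-editing it,
    so every deployment gets the same policy."""
    payload = {
        "name": name,
        "monitor": "http",
        "members": [{"name": m} for m in members],  # e.g. "10.0.0.11:80"
    }
    r = requests.post(f"{BIGIP}/mgmt/tm/ltm/pool", json=payload,
                      auth=AUTH, verify=False)  # verify=False: lab only
    r.raise_for_status()

create_pool("web-pool", ["10.0.0.11:80", "10.0.0.12:80"])
```

Because the configuration lives in version-controllable code, it can be invoked from a Chef recipe or an orchestration workflow exactly as the post describes.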
Infrastructure as code is an increasingly important view to take of the provisioning and deployment processes for network and application delivery services, as it enables more consistent, accurate policy configuration and deployment. Consider research from Dimension Data that found the "total number of configuration violations per device has increased from 29 to 43 year over year -- and that the number of security-related configuration errors (such as AAA Authentication, Route Maps and ACLS, Radius and TACACS+) also increased. AAA Authentication errors in particular jumped from 9.3 per device to 13.6, making it the most frequently occurring policy violation." The ability to automate a known "good" configuration and policy when deploying application and network services can decrease the risk of these violations and ensure a more consistent, stable (and ultimately secure) network environment.

PROGRAMMABILITY

Less so with "infrastructure as code" (devops) and more so with SDN comes the notion of programmability. On the one hand, this notion squares well with the "infrastructure as code" concept, as it requires infrastructure to be enabled in such a way as to provide the means to modify behavior at run time, most often through support for a common standard (OpenFlow is the darling standard du jour for SDN). For SDN, this tends to focus on the forwarding information base (FIB), but broader applicability has been noted at times and will no doubt continue to gain traction. The ability to "tinker" with emerging and experimental protocols, for example, is one application of programmability of the network. Rather than wait for vendor support, it is proposed that organizations can deploy and test support for emerging protocols through OpenFlow-enabled networks. While this capability is likely not something large production networks would really undertake, still, the notion that emerging protocols could be supported on demand, rather than on a vendor-driven timeline, is often desirable.

Consider support for SIP, before UC (unified communications) became nearly ubiquitous in enterprise networks. SIP is a message-based protocol, requiring deep content inspection (DCI) capabilities to extract AVP codes as a means to determine routing to specific services. Long before SIP was natively supported by BIG-IP, it was supported via iRules, F5's event-driven network-side scripting language. iRules enabled customers requiring support for SIP (for load balancing and high-availability architectures) to program the network by intercepting, inspecting, and ultimately routing based on the AVP codes in SIP payloads. Over time, this functionality was productized and became a natively supported protocol on the BIG-IP platform. Similarly, iRules enables a wide variety of dynamism in application routing and control by providing a robust environment in which to programmatically determine which flows should be directed where, and how. Leveraging programmability in conjunction with DCI affords organizations the flexibility to do – or do not – as they so desire, without requiring them to wait for hot fixes, new releases, or new products.
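To illustrate the kind of logic such an iRule encoded – without pretending this is iRules syntax – here's a Python sketch that parses SIP-style headers out of a payload and picks a pool based on one of them. The header values and pool names are invented for illustration.

```python
def parse_sip_headers(payload: bytes) -> dict:
    """Pull header name/value pairs out of a SIP message; the first line
    (request line) is skipped, and parsing stops at the blank line."""
    headers = {}
    for line in payload.decode(errors="replace").split("\r\n")[1:]:
        if not line:
            break  # blank line marks the end of the headers
        name, _, value = line.partition(":")
        headers[name.strip().lower()] = value.strip()
    return headers

def choose_pool(payload: bytes) -> str:
    """The routing decision a network-side script makes after deep
    content inspection: steer registration traffic to its own pool."""
    headers = parse_sip_headers(payload)
    if "registrar.example.com" in headers.get("to", ""):
        return "sip-registrar-pool"
    return "sip-proxy-pool"
```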
SDN and ADN – BIRDS of a FEATHER

The very same trends driving SDN at layers 2-3 are the ones that have been driving ADN (application delivery networking) for nearly a decade:

    Five trends in networking are driving the transition to software-defined networking and programmability. They are:
    • User, device and application mobility;
    • Cloud computing and service;
    • Consumerization of IT;
    • Changing traffic patterns within data centers;
    • And agile service delivery.
    The trends stretch across multiple markets, including enterprise, service provider, cloud provider, massively scalable data centers -- like those found at Google, Facebook, Amazon, etc. -- and academia/research. And they require dynamic network adaptability and flexibility and scale, with reduced cost, complexity and increasing vendor independence, proponents say.
    -- Five needs driving SDNs

Each of these trends applies equally to the higher layers of the networking stack, and each is addressed by a fully programmable ADN platform like BIG-IP. Mobile mediation, cloud access brokers, cloud bursting and balancing, context-aware access policies, granular traffic control and steering, and a service-enabled approach to application delivery are all part and parcel of an ADN. From devops to SDN to mobility to cloud, the programmability and service-oriented nature of the BIG-IP platform enables them all.

The Half-Proxy Cloud Access Broker
Devops is a Verb
SDN, OpenFlow, and Infrastructure 2.0
Devops is Not All About Automation
Applying 'Centralized Control, Decentralized Execution' to Network Architecture
Identity Gone Wild! Cloud Edition
Mobile versus Mobile: An Identity Crisis

Port Knocking: What are you hiding in there?
I read with interest an article on port knocking as a mechanism for securing SOA services on CIO.com. If you aren't familiar with port knocking (I wasn't), then you'll find it somewhat interesting. From Nicholas Petreley's "There is More to SOA Security Than Authorization and Authentication":

    For the sake of argument, let's say you have an SOA server component for your custom client software that uses port 4000. Port knocking can close off port 4000 (and every other port) to anyone who doesn't know the "secret method" for opening it. Any cracker who scans your server for open ports will never discover that you have an SOA service available on that port. All ports will appear unresponsive, which makes your server appear to offer no services at all. Ironically, your client gains access to port 4000 in a way similar to the way crackers discover existing open ports. As described above, port scanners step through all available ports sequentially, knocking on each one to see if there's an answer. By default, a port-knocking-enabled firewall never answers on any port. The secret to unlocking any given port is in the non-sequential order your client uses to check for open ports. For example, your client software might check ports 22, 8000, 45, 1056, in that order. Each time, there will be no answer. But the server will recognize that your device – running the legitimate client software – knocked on just the right ports in the right order, like the key to a combination lock. Having gotten the right combination, the firewall will open port 4000 to the authenticated device and only to that device. Port 4000 will continue to look closed and unused to the rest of the world.

A great description is also available here, along with client- and server-side software.
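For flavor, here's what the client side of that description might look like as a minimal Python sketch, using the article's example sequence. The host and timing are assumptions, and real implementations often knock with raw SYN packets rather than full connect attempts. Note how many extra round-trip delays precede the real connection – which is exactly the latency complaint that follows.

```python
import socket
import time

# The knock sequence and service port mirror the quoted example;
# the host address and delay are placeholder assumptions.
HOST = "192.0.2.10"
KNOCK_SEQUENCE = [22, 8000, 45, 1056]
SERVICE_PORT = 4000

def knock(host, ports, delay=0.2):
    """Attempt a (deliberately failing) TCP connect to each port in
    order. The port-knocking firewall records the sequence; the connects
    themselves time out because every port looks closed."""
    for port in ports:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(0.5)
        try:
            s.connect((host, port))
        except OSError:
            pass  # no answer is the expected behavior
        finally:
            s.close()
        time.sleep(delay)

knock(HOST, KNOCK_SEQUENCE)
# Only after the correct sequence will this connection succeed:
service = socket.create_connection((HOST, SERVICE_PORT), timeout=5)
```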
At first I thought "this is way cool." Then I thought about it some more and thought "Wow. That's going to destroy performance, increase development and support costs, and put a big target on your services. Still cool from a technical perspective, but from an enterprise architecture point of view? Not so cool."

While the initial value of port knocking – making it more difficult to find services to attack in the first place – seems obvious, there are better ways to secure access to services than by hiding them and requiring modifications to clients (or requiring custom clients in the first place) to access them. The description from Nicholas includes "running the legitimate client software," which implies there is some method of identifying legitimate – and therefore illegitimate – clients. There are much less intrusive methods of determining the legitimacy of clients, such as client certificates, custom HTTP headers, and, for SOA-specific applications, digital signatures.

The use of port knocking necessarily adds additional requests, which degrades the responsiveness of applications simply because it requires more time to "get into" the application. SOA applications aren't all that speedy in the first place, so adding more latency into the equation is certainly not going to improve user adoption or satisfaction with the application's performance. Requiring special clients increases the support costs for any application and introduces variability that can delay development and deployment, such as OS platform support. Will all the clients be .NET? Java? What versions? What about patching and upgrades? What do the port-knocking libraries support? How secure are they? Do they even exist?

It also destroys ubiquitous web access to services through a browser, which destroys the benefits of reusability of services. If you need a special client, it's not likely that browser-based access is going to be possible. And what about changes to your infrastructure? Port knocking requires a port-knocking-enabled firewall or server, and it almost certainly will need to be deployed on the outside perimeter, in front of your enterprise firewall, because all ports need to be "open and answerable" in order to port-forward the client to the "next door" in the sequence. This changes your network infrastructure and requires that you add yet another point of failure to the security chain.

SOA has some well-defined security that goes further than just authentication and authorization in WS-Security 1.1. The use of encryption, digital signatures, and SSL goes a long way toward securing your SOA. Hiding the existence of services makes them harder to find, as the article points out. But attackers aren't stupid; if you go to extreme lengths to hide those services, then they must be worth finding. Will it stop script kiddies? Certainly. Will it stop the hard-core organized attack squads? Absolutely not. Hiding your services with port knocking is a lot like turning off all the lights and pretending not to be home. Thieves generally don't go after the well-lit, occupied homes. They like the empty ones, or the ones just pretending to be empty.

The IPv6 Application Integration Factor
#IPv6 Integration with partners, suppliers and cloud providers will make migration to IPv6 even more challenging than we might think…

My father was in the construction business most of the time I was growing up. He used to joke with us when we were small that there was a single nail in every house that – if removed – would bring down the entire building. That's not true in construction, of course, but when the analogy is applied to IPv6 it may be more true than we'd like to think, especially when that nail is named "integration."

Most of the buzz around IPv6 thus far has been about the network; it's been focused on getting routers, switches and application delivery network components supporting the standard in ways that make it possible to migrate to IPv6 while maintaining support for IPv4 because, well, we aren't going to turn the Internet off for a day in order to flip from IPv4 to IPv6. Not many discussions have asked the very important question: "Are your applications ready for IPv6?" It's been ignored so long that many, likely, are not even sure what that might mean, let alone what they need to do to ready their applications for IPv6.

IT'S the INTEGRATION

The bulk of the issues that will need to be addressed in the realm of applications when the inevitable migration takes off lie in integration. This will be particularly true for applications integrating with cloud computing services. Whether the integration is at the network level – i.e., cloud bursting – or at the application layer – i.e., integration with SaaS such as Salesforce.com or with PaaS services – once a major point of integration migrates, it will likely cause a chain reaction, forcing enterprises to migrate whether they're ready or not.

Consider, for example, that cloud bursting assumes a single, shared application "package" that can be pushed into a cloud computing environment as a means to increase capacity. If – when – a cloud computing provider decides to migrate to IPv6, this process could become a lot more complicated than it is today. Suddenly the "package" that assumed IPv4 internal to the corporate data center must assume IPv6 internal to the cloud computing provider. Reconfiguration of the OS, platform and even application layer becomes necessary for a successful migration. Enterprises reliant on SaaS for productivity and business applications will likely be first to experience the teetering of the house of (integration) cards.

    Enterprises are moving to the cloud, according to Yankee Group's 2011 US FastView: Cloud Computing Survey. Approximately 48 percent of the respondents said remote/mobile user connectivity is driving the enterprises to deploy software as a service. This is significant, as there is a 92 percent increase over 2010. Around 38 percent of enterprises project the deployment of over half of their software applications on a cloud platform within three years, compared to 11 percent today, Yankee Group said in its "2011 Fast View Survey: Cloud Computing Motivations Evolve to Mobility and Productivity."
    -- Enterprise SaaS Adoption Almost Doubles in 2011: Yankee Group Survey

Enterprises don't just adopt SaaS and cloud services; they integrate them. Data stored in cloud-hosted software is invaluable to business decision makers, but it first must be loaded – integrated – into the enterprise-deployed systems responsible for assisting in analysis of that data. Secondary integration is also often required to enable business processes to flow naturally between on- and off-premise deployed systems.
It is that integration that will likely first be hit by a migration on either side of the equation. If the enterprise moves first, it must address the challenge of integrating two systems that speak incompatible versions of the network protocol. Gateways and dual-stack strategies – even potentially translators – will be necessary to enable a smooth transition, regardless of who blinks first in the migratory journey toward IPv6 deployment.

Even that may not be enough. Peruse RFC 4038, "Application Aspects of IPv6 Transition," and you'll find a good number of issues that are going to be as knots in wood to a nail, including DNS, conversion functions between hostnames and IP addresses (implying underlying changes to development frameworks that would certainly need to be replicated in PaaS environments – which, according to a recent report from Gartner, have seen a 267% increase in inquiries this year alone), and storage of IP addresses – whether for user identification, access policies or integration purposes.
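RFC 4038's core prescription for application code is small but pervasive: resolve names with getaddrinfo() and iterate over whatever address families come back, rather than assuming a 32-bit IPv4 address anywhere. A minimal Python sketch of that pattern follows; the host and port are placeholders.

```python
import socket

def connect_by_name(host="service.example.com", port=443):
    """Protocol-version-agnostic connect, per RFC 4038 guidance: try each
    resolved address in turn, whether it came from an A or AAAA record."""
    last_err = None
    for family, socktype, proto, _, sockaddr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        try:
            s = socket.socket(family, socktype, proto)
            s.connect(sockaddr)
            return s  # works unchanged for IPv4 and IPv6 endpoints
        except OSError as err:
            last_err = err
    raise last_err or OSError("no addresses resolved")
```

Code written this way survives either side of an integration migrating first; code that stores and compares dotted-quad strings does not.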
The Problem with Consumer Cloud Services...

…is that they're consumer #cloud services.

While we're all focused heavily on the challenges of managing BYOD in the enterprise, we should not overlook or understate the impact of consumer-grade services within the enterprise. Just as employees bring their own devices to the table, so too do they bring a smattering of consumer-grade "cloud" services to the enterprise.

Such services are generally woefully inappropriate for enterprise use. They are focused on serving a single consumer, with authentication and authorization models that support that focus. There are no roles, generally no group membership, and there's certainly no oversight from any mediating authority other than the service provider. This is problematic for enterprises: it eliminates the ability to manage access for large groups of people and to verify authority to access based on employee role and status, and it provides no means of integration with existing ID management systems. Integrating consumer-oriented cloud services into enterprise workflows and systems is a Sisyphean task.

Cloud services replicating what have traditionally been considered enterprise-class services, such as CRM and ERP, are designed with the need to integrate in mind. Consumer-oriented services are designed with the notion of integration with other consumer-grade services, not enterprise systems. They lack even the most rudimentary enterprise-class concepts such as RBAC, group-based policy and managed access. SaaS supporting traditionally enterprise-class concerns such as CRM and e-mail has begun to enable the integration with the enterprise necessary to overcome what is, according to a survey conducted by CloudConnect and Everest Group, the number two inhibitor of cloud adoption amongst respondents. The lack of integration points into consumer-grade services is problematic for both IT and the service provider.

For the enterprise, there is a need to integrate with, and control the processes associated with, consumer-grade cloud services. As with many SaaS solutions, the ability to collaborate with data-center-hosted services as a means to integrate with existing identity and access control services is paramount to assuaging the concerns raised by the more lax approach to access and identity in consumer-grade services. Integration capabilities – APIs – that enable enterprises to exercise even rudimentary control over access are a must for consumer-grade SaaS looking to find a path into the enterprise. Not only is this a path to monetization (enterprise organizations are a far more consistent source of revenue than ads or income derived from the sale of personal data), but it also provides the opportunity to overcome the stigma associated with consumer-grade services, which has already resulted in "bans" on such offerings within large organizations.

There are fundamentally three functions consumer-grade SaaS needs to offer to entice enterprise customers:

1. Control over AAA. Enterprises need the ability to control who accesses services and to correlate that access with authoritative sources of identity and role. That means the ability to coordinate a log-in process that relies primarily upon corporate IT systems to assert access rights, and the capability of the cloud service to accept that assertion as valid. APIs, SAML, and other identity management techniques are invaluable tools in enabling this integration (a sketch of the assertion hand-off appears at the end of this post).
Alternatively, enterprise-grade management within the tools themselves can provide the level of control required by enterprises to ensure compliance with a variety of security and business-oriented requirements.

2. Monitoring. Organizations need visibility into what employees (or machines) may be storing "in the cloud" and what data is being exchanged with what system. This visibility is necessary for a variety of reasons, with regulatory compliance most often cited.

3. Mobile Device Management (MDM) and Security. Because one of the most alluring aspects of consumer cloud services is nearly ubiquitous access from any device and any location, the ability to integrate #1 and #2 via MDM and mobile-friendly security policies is paramount to enabling (willing) enterprise adoption of consumer cloud services.

While most of the "consumerization" of IT tends to focus on devices, "bring your own services" should also be a very real concern for IT. And if consumer cloud service providers think about it, they'll realize there's a very large market opportunity for them to support the needs of enterprise IT while maintaining their gratis offerings to consumers.
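As promised under #1, here is a minimal sketch of the assertion hand-off. Everything in it is assumed for illustration: real deployments would use SAML or a comparable federation standard rather than a bare HMAC-signed token, and the key, user names and roles are invented. The shape is what matters: the corporate IdP authenticates the user and asserts identity plus role; the cloud service verifies the assertion and applies role-based policy instead of relying on a per-consumer account.

```python
import base64, hashlib, hmac, json, time

SHARED_KEY = b"provisioned-out-of-band"  # hypothetical key exchanged at onboarding

def idp_assert(user, role):
    """Corporate IdP side: authenticate locally, then sign an assertion."""
    claims = json.dumps({"sub": user, "role": role, "exp": time.time() + 300})
    sig = hmac.new(SHARED_KEY, claims.encode(), hashlib.sha256).hexdigest()
    return base64.b64encode(claims.encode()).decode() + "." + sig

def service_accept(assertion, permission):
    """Cloud-service side: verify the signature, then apply role policy."""
    payload, sig = assertion.rsplit(".", 1)
    claims_raw = base64.b64decode(payload)
    expected = hmac.new(SHARED_KEY, claims_raw, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # assertion was not issued by the corporate IdP
    claims = json.loads(claims_raw)
    if claims["exp"] < time.time():
        return False  # stale assertion
    # Rudimentary RBAC: the service trusts the asserted role, so IT,
    # not the individual consumer account, controls who may do what.
    role_permissions = {"employee": {"read"}, "analyst": {"read", "export"}}
    return permission in role_permissions.get(claims["role"], set())

token = idp_assert("alice@example.com", "analyst")
print(service_accept(token, "export"))  # True
print(service_accept(token, "delete"))  # False
```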
Ecosystems are Always in Flux

#devops An ecosystem-based data center approach means accepting the constancy of change…

It is an interesting fact of life for aquarists that the term "stable" does not actually mean a lack of change. On the contrary, it means that the core system is maintaining equilibrium at a constant rate. That is, the change is controlled and managed automatically, either by the system itself or through the use of mechanical and chemical assistance. Sometimes those systems need modifications, or break (usually when you're away from home, don't know it, and couldn't do anything about it if you did – and when you come back, whoa, you're in a state of panic about it), and must be repaired or replaced and then reinserted into the system. The removal and subsequent replacement introduce more change, as the system attempts to realign itself first to the temporary measures put into place and then again when the permanent solution is reintroduced.

A recent automatic top-off system failure reminded me of this valuable lesson as I tried to compensate for the loss while waiting for a replacement. This 150-gallon tank is its own ecosystem, and it tried to compensate for the fluctuations in salinity (the salt-to-water ratio) caused by a less-than-perfect stop-gap measure I used while waiting for a more permanent solution. As I was checking things out after the replacement pump had been put in place, it occurred to me that the data center is in a similar position: an ecosystem constantly in flux, and one in which devops needs to automate as much as possible in a repeatable fashion to avoid incurring operational risk.

PROCESS is KEY

The reason my temporary, stop-gap measure was less than perfect was that the pump I used to simulate the auto-top-off process was not the same as the failed one. The two systems were operationally incompatible. One monitored the water level and automatically pumped fresh water into the tank to keep the level stable, while the other required an interval-based cycle that pumped fresh water for a specified period of time and then shut off. Configuring it correctly meant determining the actual flow rate (as opposed to the stated maximum flow rate) and doing some math to figure out how much water was actually lost on a daily basis (which is variable) and how long to run the pump to replace that loss over a 24-hour period.

Needless to say, I did not get this right, and it had unintended consequences. Because the water level rose too far, a siphon break failed, which resulted in even more water being pumped into the system, driving it close to hypo-salinity (not enough salt in the water) and threatening the creatures sensitive to salinity levels (many corals and some invertebrates are particularly sensitive to fluctuations in salinity, among other variables). The end result? By not nailing down the process I'd opened a hole through which the stability of the ecosystem could be compromised. Luckily, I discovered it quickly because I monitor the system on a daily basis; if I'd been away, well, disaster might have greeted me on my return.

The process in this tale of near-disaster was key: the failure was the poor automation of (what should be) a simple process.
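To make the interval math concrete, here is a worked version with assumed numbers (the post doesn't give the tank's actual figures): the daily run time falls out of daily loss divided by measured flow rate, and a modest error in the flow-rate estimate compounds every single day, which is exactly how the salinity drifted.

```python
# Worked interval-dosing math with assumed figures.
daily_loss_gal = 1.5        # evaporation varies day to day -- the core problem
measured_rate_gph = 10.0    # actual measured flow, not the rated maximum
rated_rate_gph = 15.0       # what the pump's label claims

# Minutes per day the pump must run to replace the loss exactly.
minutes_per_day = daily_loss_gal / measured_rate_gph * 60
print(f"run pump {minutes_per_day:.1f} min/day")  # 9.0 min/day

# If the pump actually moves its rated 15 gal/h during those 9 minutes,
# the tank gets 0.75 extra gallons of fresh water every day -- salinity
# drifts lower daily until something (like a siphon break) gives out.
overdose_gal = (rated_rate_gph - measured_rate_gph) * (minutes_per_day / 60)
print(f"over-replacement: {overdose_gal:.2f} gal/day")
```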
This is not peculiar to the ecosystem of an aquarium, a fact of which Tufin Technologies recently reminded us when it published the results of a survey focused on change management. The survey found that organizations are acutely aware of the impact of poorly implemented processes and of the (often negative) impact of manual processes in the realm of security:

66% of the sample felt their change management processes do or could place the organization at risk of a breach. The main reasons cited were lack of formal processes (56%), followed by manual processes with too many steps or people in the process (29%).

-- Tufin Technologies Survey Reveals Most Organizations Believe Their Change Management Processes Could Lead to a Network Security Breach

DEVOPS is CRITICAL to MAINTAINING a HEALTHY DATA CENTER ECOSYSTEM

The Tufin survey focused on security change management (Tufin is a security-focused organization, so no surprise there), but as security, performance, and availability are intimately related, it seems logical to extrapolate that similar results would surface if we surveyed folks on whether their change management processes might incur some form of operational risk.

One of the goals of devops is to enable successful and repeatable application deployments through automation of the operational processes associated with a deployment. That means provisioning the appropriate security, performance, and availability services and policies required to support the delivery of the application. Change management processes are a part of the deployment process – or if they aren't, they should be, to ensure success and to avoid the risks associated with a lack of formal processes or with too many cooks in the kitchen following highly complex manual recipes.

Automation of configuration and policy-related tasks, as well as orchestration of accepted processes, is critical to maintaining a healthy data center ecosystem in the face of application updates, changes in security and access policies, and the adjustments necessary to combat attacks or absorb legitimate sudden spikes in demand. More focus on services and policy as a means to not only deploy but also maintain application deployments is necessary to enable IT to continue transforming from its traditional static, manual environment into a dynamic, more fluid ecosystem able to adapt to the natural fluctuations that occur in any ecosystem, including that of the data center.
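None of what follows comes from the post; it is a minimal sketch of the desired-state pattern underlying the automation argument above, with invented setting names. The point is repeatability: every run converges the live configuration toward the declared policy, so running it once or fifty times produces the same result, which is precisely what a manual multi-step recipe cannot guarantee.

```python
# Minimal desired-state convergence loop -- the pattern behind most
# configuration automation tools. All setting names are illustrative.
desired_state = {
    "tls_offload": True,
    "tcp_multiplexing": True,
    "health_monitor": "http",
}

def current_state(device):
    """Read the live configuration (stubbed; real tools query an API)."""
    return dict(device)

def apply_setting(device, key, value):
    """Apply a single setting (stubbed; real tools call a management API)."""
    device[key] = value
    print(f"changed {key} -> {value!r}")

def converge(device, desired):
    """Idempotent: touch only the settings that differ from policy."""
    live = current_state(device)
    for key, value in desired.items():
        if live.get(key) != value:
            apply_setting(device, key, value)

device = {"tls_offload": False, "health_monitor": "http"}
converge(device, desired_state)  # changes tls_offload and tcp_multiplexing
converge(device, desired_state)  # second run: already converged, no changes
```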