F5 Friday: It is now safe to enable File Upload
Web 2.0 is about sharing content – user generated content. How do you enable that kind of collaboration without opening yourself up to the risk of infection? Turns out developers and administrators have a couple of options…

The goal of many a miscreant is to get files onto your boxen. The second step after that is often remote execution, or merely the hope that someone else will look at/execute the file and spread chaos (and viruses) across your internal network. It’s a malicious intent, to be sure, and it makes developing/deploying Web 2.0 applications a risky proposition. After all, Web 2.0 is about collaboration and sharing of content, and if you aren’t allowing the latter it’s hard to enable the former.

Most developers know about and have used the ability to upload files of just about any type through a web form. Photos, documents, presentations – these types of content are almost always shared through an application that takes advantage of the ability to upload data via a simple web form. But if you allow users to share legitimate content, it’s a sure bet (more sure even than answering “yes” to the question “Will it rain in Seattle today?”) that miscreants will find and exploit the ability to share content. Needless to say, information security professionals are not particularly fond of this particular “feature”, and in some organizations it is strictly verboten (that’s forbidden, for you non-German speakers).

So wouldn’t it be nice if developers could continue to leverage this nifty capability to enable collaboration? Well, all you really need to do is integrate with an anti-virus scanning solution and only accept the content that is deemed safe, right? After all, that’s good enough for e-mail systems, and developers should be able to argue that the same is good enough for web content, too. The bigger problem is the integration. Luckily, ICAP (Internet Content Adaptation Protocol) is a fairly ready answer to that problem.
SOLUTION: INTEGRATE ANTI-VIRUS SCANNING via ICAP

The Internet Content Adaptation Protocol (ICAP) is a lightweight HTTP-based protocol specified in RFC 3507, designed to off-load specific content to dedicated servers, thereby freeing up resources and standardizing the way in which features are implemented. ICAP is generally used in proxy servers to integrate with third-party products like anti-virus software, malicious content scanners and URL filters.

   ICAP in its most basic form is a "lightweight" HTTP based remote procedure call protocol. In other words, ICAP allows its clients to pass HTTP based (HTML) messages (Content) to ICAP servers for adaptation. Adaptation refers to performing the particular value added service (content manipulation) for the associated client request/response.

   -- Wikipedia, ICAP

Now obviously developers can take advantage of ICAP and integrate with an anti-virus scanning solution directly. All that’s required is to extract every file in a multi-part request, send each of them to an AV-scanning service, and determine based on the result whether to continue processing or toss those bits into /dev/null. This assumes, of course, that it can be integrated: packaged applications may not offer the ability, and even open source applications which ostensibly do may be written in a language or use frameworks that require skills the organization simply does not have. Or perhaps the cost over time of constantly modifying the application after every upgrade/patch is just not worth the effort. For applications to which you can add this integration, it should be fairly simple, as developers are generally familiar with HTTP and RPC and understand how to use “services” in their applications.

Of course, this being an F5 Friday post, you can probably guess that I have an alternative (and of course more efficient) solution than integration into the code.
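For the direct-integration path just described, the message an application sends to an ICAP server is simple enough to construct by hand. The sketch below (Python, purely illustrative – the "avscan" service name and the embedded HTTP headers are assumptions, not taken from any particular AV product) builds a minimal RESPMOD request in the shape RFC 3507 describes, wrapping the uploaded file in the chunked encoding ICAP requires:

```python
def build_respmod_request(icap_host: str, service: str, file_bytes: bytes) -> bytes:
    """Build a minimal ICAP RESPMOD request carrying one uploaded file."""
    # Embedded HTTP request/response headers the ICAP server will inspect;
    # the URL and host here are placeholders.
    req_hdr = b"GET /upload HTTP/1.1\r\nHost: www.example.com\r\n\r\n"
    res_hdr = b"HTTP/1.1 200 OK\r\nContent-Length: %d\r\n\r\n" % len(file_bytes)
    # Encapsulated offsets are byte positions within the encapsulated section.
    encapsulated = "req-hdr=0, res-hdr=%d, res-body=%d" % (
        len(req_hdr), len(req_hdr) + len(res_hdr))
    icap_hdr = ("RESPMOD icap://%s/%s ICAP/1.0\r\n"
                "Host: %s\r\n"
                "Encapsulated: %s\r\n\r\n"
                % (icap_host, service, icap_host, encapsulated))
    # The body travels in HTTP chunked encoding, terminated by a zero chunk.
    chunked = b"%x\r\n" % len(file_bytes) + file_bytes + b"\r\n0\r\n\r\n"
    return icap_hdr.encode("ascii") + req_hdr + res_hdr + chunked
```

Sent over TCP (ICAP's default port is 1344), a 204 response conventionally means the content passed through unmodified – i.e. clean – while a 200 response carries a modified or blocked payload the application should treat as a rejection.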
An external solution that works for custom as well as packaged applications and requires a lot less long-term maintenance – a WAF (Web Application Firewall).

BETTER SOLUTION: WEB APPLICATION FIREWALL INTEGRATION

The latest, greatest version (v10.2) of F5 BIG-IP Application Security Manager (ASM) includes a little-touted feature that makes integration with an ICAP-enabled anti-virus scanning solution take approximately 15.7 seconds to configure (YMMV). Most of that time is likely logging in and navigating to the right place. The rest is typing the information required (server host name, IP address, and port number) and hitting “save”.

[Image: F5 Application Security Manager (ASM) v10 includes easy integration with A/V solutions]

It really is that simple. The configuration is actually an HTTP “class”, which can be thought of as a classification of sorts. In most BIG-IP products a “class” defines a type of traffic closely based on a specific application protocol, like HTTP. It’s quite polymorphic in that a custom HTTP class inherits the behavior and attributes of the “parent” HTTP class; your configuration extends that behavior and those attributes and, in some cases, overrides default (parent) behavior. The ICAP integration is derived from an HTTP class, so it can be “assigned” to a virtual server, a URI, a cookie, etc.

In most ASM configurations an HTTP class is assigned to a virtual server and therefore sees all requests sent to that server. In such a configuration ASM sees all traffic – and thus every file uploaded in a multipart payload – and will automatically extract each file and send it via ICAP to the designated anti-virus server, where it is scanned. The action taken upon a positive result, i.e. the file contains bad juju, is configurable. ASM can block the request and present an informational page to the user while logging the discovery internally, externally or both.
It can also forward the request – virus and all – to the web/application server and log it, allowing the developer to determine how best to proceed. Using the “Guarantee Enforcement” option, ASM can be configured to never allow requests that have not been scanned for viruses to reach the web/application server. When this is configured, if the anti-virus server is unavailable or doesn’t respond, requests will be blocked. This gives administrators a “fail closed” option that absolutely requires AV scanning before a request can be processed.

A STRATEGIC POINT of CONTROL

Leveraging a strategic point of control to provide AV scanning integration and apply security policies regarding the quality of content has several benefits over its application-modifying, code-based integration cousin:

- Allows integration of AV scanning in applications for which it is not feasible to modify the application, for whatever reason (third-party, lack of skills, lack of time, long-term maintenance after upgrades/patches)
- Reduces the resource requirements of web/application servers by offloading the integration process and only forwarding valid uploads to the application. In a cloud-based or other pay-per-use model this reduces costs by eliminating the processing of invalid requests by the application.
- Aggregates logging/auditing and provides consistency of logs for compliance and reporting, especially to prove “due diligence” in preventing infection.

Related Posts
- All F5 Friday Entries on DevCentral
- All About ASM

F5 Friday: HP Cloud Maps Help Navigate Server Flexing with BIG-IP
The economy of scale realized in enterprise cloud computing deployments is as much (if not more) about process as it is products. HP Cloud Maps simplify the former by automating the latter.

When the notion of “private” or “enterprise” cloud computing first appeared, it was dismissed as a non-viable model because the economy of scale necessary to realize the true benefits was simply not present in the data center. What those arguments ignored was that the economy of scale desired by enterprises large and small was not necessarily that of technical resources, but of people. The widening gap between people and budgets on one side and data center components on the other was a primary cause of data center inefficiency. Enterprise cloud computing promised to relieve the increasing burden on people by moving it back to technology through automation and orchestration.

Achieving such a feat – and it is a non-trivial feat – required an ecosystem. No single vendor could hope to achieve the automation necessary to relieve the administrative and operational burden on enterprise IT staff, because no data center is ever comprised of components provided by a single vendor. Partnerships – technological and practical partnerships – were necessary to enable the automation of processes spanning multiple data center components and achieve the economy of scale promised by enterprise cloud computing models. HP, while providing a wide variety of data center components itself, has nurtured such an ecosystem of partners. Combined with HP Operations Orchestration, such technologically focused partnerships have built out an ecosystem enabling the automation of common operational processes, effectively shifting the burden from people to technology and resulting in a more responsive IT organization.

HP CLOUD MAPS

One of the ways in which HP enables customers to take advantage of such automation capabilities is through Cloud Maps.
Cloud Maps are similar in nature to F5’s Application Ready Solutions: a package of configuration templates, guides and scripts that enable repeatable architectures and deployments. According to HP’s description:

   HP Cloud Maps are an easy-to-use navigation system which can save you days or weeks of time architecting infrastructure for applications and services. HP Cloud Maps accelerate automation of business applications on the BladeSystem Matrix so you can reliably and consistently fast-track the implementation of service catalogs.

HP Cloud Maps enable practitioners to navigate the complex operational tasks that must be accomplished to achieve even what seems like the simplest of tasks: server provisioning. They enable automation of incident resolution, change orchestration and routine maintenance tasks in the data center, providing the consistency necessary for more predictable and repeatable deployments and responses to data center incidents.

Key components of HP Cloud Maps include:
- Templates for hardware and software configuration that can be imported directly into BladeSystem Matrix
- Tools to help guide planning
- Workflows and scripts designed to automate installation more quickly and in a repeatable fashion
- Reference whitepapers to help customize Cloud Maps for specific implementations

HP CLOUD MAPS for F5 NETWORKS

The partnership between F5 and HP has resulted in many data center solutions and architectures. HP’s Cloud Maps for F5 Networks today focuses on what HP calls server flexing – the automation of server provisioning and de-provisioning on demand in the data center. It is designed specifically to work with F5 BIG-IP Local Traffic Manager (LTM) and provides the configuration and deployment templates, scripts and guides necessary to implement server flexing in the data center.
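The server-flexing decision itself is conceptually simple, which is exactly why it is worth automating end to end. The sketch below (Python, illustrative only – the thresholds and the list-as-pool model are assumptions, not part of the HP or F5 products) shows the provision/de-provision decision such a workflow makes on each capacity check:

```python
def flex(pool, utilization, scale_up=0.80, scale_down=0.30, min_servers=2):
    """Decide whether to provision or de-provision a server based on
    average pool utilization (0.0-1.0). Returns the action taken."""
    if utilization > scale_up:
        # Provision a new server and add it to the load balancing pool.
        pool.append("server-%d" % (len(pool) + 1))
        return "scale-up"
    if utilization < scale_down and len(pool) > min_servers:
        # Drain and de-provision one server, keeping a minimum on hand.
        pool.pop()
        return "scale-down"
    return "steady"
```

In the real architecture, the "append" and "pop" steps are where the orchestration workflow provisions hardware via BladeSystem Matrix and updates the BIG-IP LTM pool membership, rather than a human doing either by hand.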
The Cloud Map for F5 Networks can be downloaded free of charge from HP and comprises:
- The F5 Networks BIG-IP reference template, to be imported into HP Matrix infrastructure orchestration
- A workflow to be imported into HP Operations Orchestration (OO)
- An XSL file to be installed on the Matrix CMS (Central Management Server)
- A Perl configuration script for BIG-IP

White papers with specific instructions on importing reference templates, importing workflows and configuring BIG-IP LTM are also available from the same site. The result is an automation providing server flexing capabilities that greatly reduces the manual intervention necessary to auto-scale and respond to capacity-induced events within the data center.

Happy Flexing!

Related Posts
- Server Flexing with F5 BIG-IP and HP BladeSystem Matrix
- HP Cloud Maps for F5 Networks
- F5 Friday: The Dynamic Control Plane
- F5 Friday: The Evolution of Reference Architectures to Repeatable Architectures
- All F5 Friday Posts on DevCentral
- Infrastructure 2.0 + Cloud + IT as a Service = An Architectural Parfait
- What is a Strategic Point of Control Anyway?
- The F5 Dynamic Services Model
- Unleashing the True Potential of On-Demand IT

F5 Friday: IT Brand Pulse Awards
IT Brand Pulse publishes a series of reports, based upon surveys conducted amongst IT professionals, that attempt to ferret out the impressions those working in IT have of the various vendors in any given market space. Their free sample of such a report is the November 2010 FCoE Switch Market Leader Report, and it is an interesting read, though I admit it made me want to paw through some more long-form answers from the participants to see what shaped these perceptions. The fun part is trying to read between the lines: since this measures the perceived leader, you have to ask how much boost Cisco and Brocade received in the FCoE space just because they’re the FC vendors of choice. But of course, no one source of information is ever a complete picture, and this does give you some information about how your peers feel – whether that impression is accurate or not – about the various vendors out there. It’s not the same as taking some peers to lunch, but it does give you an idea of the overall perceptions of the industry in one handy set of charts.

This February, F5 was honored by those who responded to the Load Balancer Market Leader Report with wins in three separate categories of load balancing – price, performance, and innovation – and took the overall title of “Market Leader” in load balancing. We, of course, prefer to call ourselves an Application Delivery Controller (ADC), but when you break out the different needs of users, doing a survey on load balancing is fair enough. After all, BIG-IP Local Traffic Manager (LTM) has its roots in load balancing, and we think it’s tops.

IT Brand Pulse is an analyst firm that provides a variety of services to help vendors and IT staff make intelligent decisions. While F5 is not, to my knowledge, an IT Brand Pulse customer, I (personally) worked with the CEO, Frank Berry, while he was at QLogic and I was writing for Network Computing.
Frank has a long history in the high-tech industry and a clue about what is going on, so I trust his company’s reports more than I trust those of most analyst firms. We at F5 are pleased to have this validation to place next to the large array of other awards, recognition, and customer satisfaction we have earned, and we intend to keep working hard to earn more such awards. It is definitely our customers that place us so highly, and for that we are very grateful, because in the end it is what customers do with our gear that matters. And we’ll continue to innovate to meet customer needs, while keeping our commitment to quality.

Support for NIST 800-53?
#f5friday There’s an iApp for that!

NIST publication 800-53 is a standard defined to help government agencies (and increasingly enterprises) rein in sprawling security requirements while maintaining a solid grip on the lockdown lever. It defines the concept of a “security control” that ranges from physical security to AAA, and then subdivides controls into “common” – those used frequently across an organization, “custom” – those defined explicitly for use by a single application or device, and “hybrid” – those that start with a common control and then customize it for the needs of a specific application or device. The standard lists literally hundreds of controls, their purpose, and when they’re needed. When these controls are “common”, they can be reused by multiple applications or devices.

For government entities and their contractors, this standard is not optional, but there is a lot of good in here for everyone else also. Of course external access to applications should be considered, allowed or not, and if allowed, locked down to only those who absolutely need it. That is the type of thing you’ll find in the common controls, and any enterprise could benefit from studying and understanding the standard. Using this standard and the ones it is based upon, IT can develop a security checklist for applications.

The thing is that for hardware devices, support is very difficult to add from the outside. It is far better if the device – particularly a networking device that runs outside of the application context – implements the information security portions internally. And F5 BIG-IP does. With an iApp. No doubt you’ve heard us talk about how great we think iApps are; well, now we have a solid example to point out, where we use one to configure the objects that allow access to our own device.
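The common/custom/hybrid taxonomy described above maps naturally onto a simple data model: a hybrid control is just a common control plus application-specific overrides. The sketch below (Python, purely illustrative – the control fields and values are invented examples, not taken from the 800-53 text) shows that inheritance relationship:

```python
def make_hybrid(common: dict, overrides: dict) -> dict:
    """A hybrid control starts from a common (organization-wide)
    control and customizes it for one application or device."""
    control = dict(common)     # inherit the common baseline
    control.update(overrides)  # then apply app-specific customizations
    control["kind"] = "hybrid"
    return control

# Hypothetical common control: remote administrative access.
remote_admin = {
    "kind": "common",
    "external_access": False,
    "allowed_roles": ["admin"],
}

# One application needs external access, but only from a management subnet.
app_control = make_hybrid(remote_admin,
                          {"external_access": True,
                           "source_networks": ["10.1.2.0/24"]})
```

The point of the model is reuse: the common baseline stays untouched and auditable, while each hybrid derivative records only what it changed.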
Since iApps are excellent at manipulating data heading for an application, and the BIG-IP UI is itself a web application, it should be easy to understand how we quickly built support for 800-53 through an iApp that configures all of the right objects/settings for you, if you but answer a few questions. 800-53 is a big standard, and iApps were written with the intent of configuring the objects that BIG-IP manipulates more than the BIG-IP configuration objects themselves, so there are a couple of caveats in the free downloadable iApp – check the help section after you install and before you configure the iApp. But even if the caveats affect you, they are not show-stoppers that invalidate compliance with the standard, so the iApp is still useful for you.

One of my co-workers was kind enough to give me a couple of enhanced screenshots to share with you. If you already know you need to support this standard in your organization, these will show you the type of support we’ve built. If you’re not sure whether you’ll be supporting 800-53, they’re still pretty information-packed, and you’ll get why I say this stuff is useful for any organization.

The thing is that this iApp is not designed as a “yes, now you can check that box” solution; it aims to actually give you control over who has access to the BIG-IP system, how they have access, and from where, while utilizing the language and format of the standard. All of these things can be done without the iApp, but this tool makes it far easier to implement and maintain compliance because, under the covers, it changes several necessary settings for you, and you do not have to search down each individual object and configure it by hand.

The iApp is free, and iApp support is built into BIG-IP. If you need to comply with the standard for regulatory reasons, or have decided as an organization to comply, download and install it. Then run the iApp and off you go to configuration land.
Note that this iApp makes changes to the configuration of your BIG-IP. It should be used by knowledgeable staff who are aware of how their choices will impact the existing BIG-IP configuration.

F5 Friday: Mitigating the THC SSL DoS Threat
The THC #SSL #DoS tool exploits the rapid resource consumption nature of the handshake required to establish a secure session using SSL.

A new attack tool was announced this week, and it continues to follow in the footsteps of resource exhaustion as a means to achieve a DoS against target sites. Recent trends in attacks show an increasing interest in maximizing effect while minimizing effort. This means a move away from traditional denial of service attacks that focus on overwhelming sites with traffic and toward attacks that focus on rapidly consuming resources instead. Both have the same ultimate goal: overwhelming infrastructure, whether server or router or <insert infrastructure component of choice>.

The latest SSL-based attack falls into the modern category of denial of service attacks in that it’s not an attempt to overwhelm with traffic, but rather to consume resources on servers such that capacity and the ability to respond to legitimate requests is eliminated. The blog post announcing the exploit tool explains:

   Establishing a secure SSL connection requires 15x more processing power on the server than on the client. THC-SSL-DOS exploits this asymmetric property by overloading the server and knocking it off the Internet. This problem affects all SSL implementations today. The vendors are aware of this problem since 2003 and the topic has been widely discussed. This attack further exploits the SSL secure Renegotiation feature to trigger thousands of renegotiations via single TCP connection.

   -- THC SSL DOS Tool Released

As the blog points out, there is no resolution to this exploit. Common mitigation techniques include the use of an SSL accelerator, i.e. a reverse-proxy-capable device with specialized hardware designed to improve the processing capability of SSL and associated cryptographic functions.
Modern application delivery controllers like BIG-IP include such hardware by default and make use of its performance- and capacity-enhancing abilities to offset the operational costs of supporting SSL-secured communication.

BIG-IP MITIGATION

There are actually several ways in which BIG-IP can mitigate the potential impact of this kind of attack. First and foremost is simply its higher capacity for connections and processing of SSL/RSA operations. BIG-IP can manage myriad more connections – secure or not – than a typical web server, and thus it may be, depending on the hardware platform on which BIG-IP is deployed, that the mitigation rests merely on having a BIG-IP in the path of the attack. In the case that it is not, or if organizations desire a more proactive approach to mitigation, there are two additional options:

1. SSL renegotiation, which is in part the basis for the attack (it’s what allows a relatively few clients to force the server to consume more and more resources), can be disabled in BIG-IP v11 and v10.2.3. This may break some applications and/or clients, so this option may be best left as a “last resort”, or the risks carefully weighed before deploying such a configuration.

2. An iRule that drops connections over which a client attempts to renegotiate more than five times in a given 60-second interval can be deployed. As noted by David Holmes and the iRule author, Jason Rahm, “By silently dropping the client connection, the iRule causes the attack tool to stall for long periods of time, fully negating the attack. There should be no false-positives dropped, either, as there are very few valid use cases for renegotiating more than once a minute.” The full details and code for the iRule can be found in the DevCentral article “SSL Renegotiation DOS attack – an iRule Countermeasure”.

UPDATE 11/1/2011: David Holmes has included an optimized version of the iRule in his latest blog, “The SSL Renegotiation Attack is Back.”
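The renegotiation-counting logic at the heart of that second option can be sketched generically. (The real countermeasure is an iRule written in TCL on the BIG-IP itself; the Python below is only an illustration of the sliding-window counting idea, with the same five-per-60-seconds thresholds the article describes.)

```python
import time
from collections import defaultdict, deque

class RenegotiationGuard:
    """Flag connections that renegotiate SSL more than `limit` times
    within `window` seconds, mirroring the iRule's counting logic."""

    def __init__(self, limit=5, window=60.0, now=time.monotonic):
        self.limit = limit
        self.window = window
        self.now = now                      # injectable clock, eases testing
        self.history = defaultdict(deque)   # conn_id -> renegotiation times

    def on_renegotiation(self, conn_id) -> bool:
        """Record one renegotiation; return True if the connection
        should be silently dropped."""
        t = self.now()
        q = self.history[conn_id]
        q.append(t)
        # Evict renegotiations that fell out of the sliding window.
        while q and t - q[0] > self.window:
            q.popleft()
        return len(q) > self.limit
```

Silently dropping (rather than resetting) the connection is what stalls the attack tool, since it keeps waiting on a connection that will never answer.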
His version uses the normal flow key (instead of a random key), adds a log message, and optimizes memory consumption. Regardless of the mitigating technique used, BIG-IP can provide the operational security necessary to prevent such consumption-leeching attacks from negatively impacting applications by defeating the attack before it reaches the application infrastructure. Stay safe!

F5 Friday: We've Got You (Virtually) Covered
#F5 #virtualization #SDAS #webperf You've all heard the news, right? Load balancers are dead. But that doesn't mean load balancing is dead – in fact, it's a pretty critical piece of today's emerging technologies. That's because when you look out at what's going on and what's growing like weeds in a Midwest corn field, it's applications. Applications need load balancing to scale because at some point operational axioms start proving themselves true, and load on an individual application causes performance to plummet – or worse.

Scale is an integral piece of the puzzle, and it's driving a number of technological changes today, including SDN (scaling networks) and cloud (scaling applications). Coupled with these external pressures are those coming from within application development proper; the architecture of applications is changing, forced by the mounting pressure from business and consumers to deliver faster, more efficiently and more frequently. Both devops and traditional network operations are feeling the pinch, and that means shaking things up when it comes to designing enterprise network architectures capable of keeping up with the tectonic shifts that threaten to squeeze the network in between them.

That's why it's important to be able to support both core and application networks with a variety of options – options that include virtualized network elements capable of supporting, with alacrity, architectures such as SDN, cloud and devops-inspired patterns like canary deployments.

BIG-IP LTM VE is the core of application delivery (which, if you recall, is how load balancing is delivered these days – as an integral service of a more comprehensive approach to application delivery) and you can download and try it out for free for 90 days at http://f5.com/trials.
This 90-day free trial gives you the ability to run BIG-IP LTM and 10 concurrent BIG-IP Access Policy Manager user sessions with your network applications in a pre-production environment to test configurations. When you are ready to use BIG-IP in production, back up your configuration and restore it to a production BIG-IP device to experience the ultimate in flexibility.

The BIG-IP LTM VE free trial (for a VMware hypervisor environment) allows you to try out such functionality as:

Intelligent traffic management
Data path programmability and flexible, programmatic services are the basis for F5 Synthesis Software Defined Application Services (SDAS), and that starts with intelligent, programmable traffic management provided by BIG-IP LTM. Application-driven routing, scalability and flexible application health monitoring ensure the scale and performance of applications for both mobile and traditional delivery paths.

Increased operational efficiency
Control plane programmability and F5's application-driven templates, iApps, ensure consistent, repeatable and successful deployments of application services. Integrate with automation frameworks leveraged by successful devops initiatives, such as Puppet, Chef, and OpenStack.

Peak application performance
When load balancing services are deployed atop a full-proxy architecture, a variety of performance-related services become available to improve the performance of any application. TCP multiplexing and application-specific optimization profiles can be applied to increase the efficiency of web and application servers while addressing the most common performance-impeding issues.

Access control for 10 concurrent BIG-IP Access Policy Manager user sessions
The next generation of data centers will rely heavily on application access management. Get a head start on how applying programmable access policies will simplify and scale the coming onslaught of application requests by consumers, partners and employees alike.
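The TCP multiplexing mentioned above is easy to picture with a toy model: many short-lived client requests are served over a small pool of persistent server-side connections, so the servers spend their cycles on requests rather than connection setup. The sketch below (Python, purely illustrative – not any F5 API) models that reuse:

```python
class MultiplexingPool:
    """Toy model of TCP multiplexing: client requests share a small
    pool of persistent server-side connections."""

    def __init__(self, size):
        self.size = size    # max idle connections kept alive
        self.opened = 0     # server-side connections ever opened
        self.idle = []

    def acquire(self):
        if self.idle:
            return self.idle.pop()      # reuse an existing connection
        self.opened += 1                # otherwise open a new one
        return "conn-%d" % self.opened

    def release(self, conn):
        if len(self.idle) < self.size:
            self.idle.append(conn)      # keep alive for the next request
```

With sequential traffic, hundreds of client requests can ride on a single server-side connection; the pool only grows when requests actually overlap.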
F5 is pleased to announce three different ways you can try out BIG-IP LTM VE:

- A free 90-day download is available at http://f5.com/trials
- If you need more features or want to try the latest version, just contact F5 to get free, full-featured 30-day evaluation licenses of any BIG-IP solution.
- If you need to build, test, and configure BIG-IP in your dev lab, we have low-cost BIG-IP Lab licenses available for a nominal fee. Just contact F5 to get started.

Need more information before you download? No problem, you can check out the BIG-IP LTM VE data sheet right here [pdf].

F5 Friday: I am in UR HTTP Headers Sharing Geolocation Data
#DNS #bigdata #F5 #webperf How'd you like some geolocation data with that HTTP request?

Application developers are aware (you are aware, aren't you?) that when applications are scaled using most modern load balancing services, the IP addresses on application requests actually belong to the load balancing service. Application developers are further aware that this means they must somehow extract the actual client IP address from somewhere else, like the X-Forwarded-For HTTP header. Now, that's pretty much old news. Like I said, application developers are aware of this already.

What's new (and why I'm writing today) is the rising use of geolocation to support localized (and personalized) content. To do this, application developers need access to the geographical location indicated by either GPS coordinates or IP address. In most cases, application developers have to get this information themselves. This generally requires integration with some service that can provide it, despite the fact that infrastructure like BIG-IP and its DNS services already has it and has paid the price (in terms of response time) to get it. Which means, ultimately, that applications pay the performance tax for geolocation data twice – once on the BIG-IP and once in the application.

Why, you are certainly wondering, can't the BIG-IP just forward that information in an HTTP header, just like it does the client IP address? Good question. The answer is that technically, there's no reason it can't. Licensing, however, is another story. BIG-IP includes, today, a database of IP addresses that locates clients, geographically, based on client IP address.
The F5 EULA, today, allows customers to use this information for a number of purposes, including GSLB load balancing decisions, access control decisions with location-based policies, identification of threats by country, location-based blocking of application requests, and redirection of traffic based on the client’s geographic location. However, all decisions had to be made on the BIG-IP itself; geographic information could not be shared or transmitted to any other device. A new agreement now allows customers an option to use the geolocation data outside of BIG-IP, subject to fees and certain restrictions. That means BIG-IP can pass State, Province, or Region geographic data to applications using an easily accessible HTTP header.

How does that work? Customers can now obtain a EULA waiver which permits certain off-box use cases. This allows customers to use the geolocation data included with BIG-IP in applications residing on a server or servers in an “off-box” fashion. For example, location information may be embedded into an HTTP header or similar and then sent on to the server for it to perform some geolocation-specific action. Customers (existing or new) can contact their F5 sales representative to start the process of obtaining the waiver necessary to enable the legal use of this data in an off-box fashion.

All that's necessary from a technical perspective is to determine how you want to share the data with the application. For example, you (meaning you, BIG-IP owner, and you, application developer) will have to agree upon which HTTP header you'll want to use to share the data. Then voila! Developers have access to the data and can leverage it for existing or new applications to provide greater location-awareness and personalization. If your organization has a BIG-IP (and that's a lot of organizations out there), check into this opportunity to reduce the performance tax on your applications that comes from double-dipping into geolocation data.
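On the application side, consuming the agreed-upon headers is the easy part. The sketch below (Python, illustrative only – "X-Geo-Region" is a hypothetical header name; whatever name you and the BIG-IP administrator agree on goes there) shows both pieces of the pattern: recovering the true client IP from X-Forwarded-For and reading the injected location data:

```python
def client_ip(headers: dict) -> str:
    """The left-most X-Forwarded-For entry is the original client;
    later entries are intermediate proxies that appended themselves."""
    xff = headers.get("X-Forwarded-For", "")
    first = xff.split(",")[0].strip()
    return first or headers.get("Remote-Addr", "")

def client_region(headers: dict, default="unknown") -> str:
    # "X-Geo-Region" is an assumed, agreed-upon header name, not a
    # standard -- use whatever your BIG-IP configuration injects.
    return headers.get("X-Geo-Region", default)
```

Note that both headers come from infrastructure you control; if either could arrive directly from untrusted clients, the application should only honor them when the request came through the load balancing tier.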
Your users (especially your mobile users) will appreciate it.

F5 Friday: Node in the Network
#LineRate #NodeSummit #node #devops #SDN It's node. It's in the network.

I've been watching the ADC space a long time – over a decade now, from both the publishing and vendor sides of the table. Having spent half my "life" as a developer and the other half in the network, there's something about technology that marries the two that really gets me hyper excited. The last time someone introduced a technology that did just that was, perhaps unsurprisingly, F5, when it introduced iRules and iControl. Programmability was the name of the game, but it was perhaps a bit early to the game, as few recognized just how important the ability to programmatically control and interact with the network was going to be (this was over 10 years ago). If you still aren't sure, take some time to read about SDN and devops and automation and, well, you get the picture.

So it makes sense to me that F5 would lead the game again when the need for high-performance, programmable proxies made itself known. You see, the network is bifurcating. More and more developers and devops practitioners are looking for solutions that sit deep in the data center, right next to applications. Application and API proxies are the new black in the data center, and they're the infrastructure du jour that makes the transition from monolithic application to decoupled, service- and API-based applications a whole lot easier for developers and operators who have to deal with nearly constant change.

New, more agile architectures are necessary to support this transition as well as the need for 101% uptime. Flexible infrastructure models, networks, and application proxies are now required. Static proxies aren't going to cut it. Configuration of such systems is messy, manual and time consuming. Configurations don't necessarily fit well into a code-driven deployment model, where code artifacts are pulled automatically from repositories to build applications and services on a daily basis.
What devops and developers need is a programmable proxy; one that's able to integrate well into not only the infrastructure but the processes and systems that are responsible for building and publishing APIs and applications. That means an accessible, familiar API (REST is preferred) and the ability to code instead of config for critical infrastructure components like application and API proxies.

That's LineRate in a nutshell. Its control plane is a very modern REST API, not a thin veneer over an existing API or remote management system. You don't configure LineRate to do what you need it to do, you code it. A few lines of node.js and voila! It's doing application routing. Or data transformation (XML to JSON, anyone?). Or maybe it's enabling you to test a new application with real data - without concern it might screw up the real thing.

LineRate represents the very real manifestation of the marriage of applications and the network. It's the bridge over the gap between two worlds that are very much dependent on each other. Without the network, applications today are pretty much unusable. But without applications, networks have no reason to exist. It's a symbiotic relationship, with each fueling the other - neither can really stand alone any more. LineRate brings both together and provides not just a way to interact with the data path in the network, but to control that data path, to change the behavior of the data path, to do things in the network that have not really been possible before. Where technologies today promote the ability to innovate in the network, LineRate actually provides the ability. Where technologies today claim to be able to extend the network, LineRate actually confers the ability on those so inclined. It's no longer about configuration, it's about code. With configuration you're limited to what someone else thinks you should be able to do. With code, you can do anything. That's what's so exciting about putting node in the network.
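The "code instead of config" idea can be made concrete with a small sketch. This is not LineRate's actual scripting API; it's a plain JavaScript fragment with hypothetical backend addresses that simply illustrates expressing an application-routing decision as code rather than as static configuration:

```javascript
// Illustrative sketch only -- not LineRate's real scripting API.
// Routing expressed as code: inspect the request path, pick a backend.
// The pool addresses below are hypothetical.
function routeRequest(path) {
  if (path.startsWith('/api/')) {
    return { host: '10.0.0.10', port: 8080 }; // API service pool
  }
  if (path.startsWith('/static/')) {
    return { host: '10.0.0.20', port: 8081 }; // static asset pool
  }
  return { host: '10.0.0.30', port: 80 };     // default application pool
}
```

Because the decision is code, changing it is a commit and a redeploy rather than a configuration ticket - which is exactly the property a code-driven deployment model needs.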
You can download your free copy of LineRate today.

F5 Friday: Gracefully Scaling Down
What goes up, must come down. The question is how much it hurts (the user).

An oft-ignored side of elasticity is scaling down. Everyone associates scaling out/up with the elasticity of cloud computing, but the other side of the coin is just as important, maybe more so. After all, what goes up must come down. The trick is to scale down gracefully, i.e. to do it in such a way as to prevent the disruption of service to existing users while scaling back down after a spike in demand. The ramifications of not scaling down are real in terms of utilization and therefore cost. Scaling up without the means to scale back down means higher costs, and simply shutting down an instance that is currently in use can result in angry users as service is disrupted. What's necessary is to be able to gracefully scale down: to indicate somehow to the load balancing solution that a particular instance is no longer necessary and begin preparation for eventually shutting it down. Doing so gracefully requires that you are somehow able to quiesce, or bleed off, the connections. You want to continue to service those users who are currently connected to the instance while not accepting any new connections. This is one of the benefits of leveraging an application-aware application delivery controller versus a simple load balancer: the ability to receive instruction in-process to begin preparation for shutdown without interrupting existing connections.

SERVING UP ACTIONABLE DATA

BIG-IP users have always had the ability to specify whether disabling a particular "node" or "member" results in the rejection of all connections (including existing ones) or in refusing new connections while allowing old ones to continue to completion. The latter technique is often used in preparation for maintenance on a particular server for applications (and businesses) that are sensitive to downtime. This method maintains availability while accommodating necessary maintenance.
In version 10.2 of the core BIG-IP platform, a new option was introduced that more easily enables the process of draining a server/application's connections in preparation for taking it offline. Whether the purpose is maintenance or simply the scaling-down side of elastic scalability is really irrelevant; the process is much the same. Being able to direct, from the application, the way in which a load balancing service handles connections is an increasingly important capability, especially in a public cloud computing environment, because you are unlikely to have the direct access to the load balancing system necessary to manually engage this process. By providing the means by which an application can not only report to but direct the load balancing service, some measure of customer control over the deployment environment is re-established without introducing the complexity of requiring the provider to manage the thousands (or more) of credentials that would otherwise be required to allow this level of control over the load balancer's behavior.

HOW IT WORKS

For specific types of monitors in LTM (Local Traffic Manager) - HTTP, HTTPS, TCP, and UDP - there is a new option called "Receive Disable String." This "string" is just that: a string that is found within the content returned from the application as a result of the health check. In phase one we have three instances of an application (physical or virtual, doesn't matter) that are all active. They all have active connections and are all receiving new connections. In phase two, a health check on one server returns a response that includes the string "DISABLE ME." BIG-IP sees this and, because of its configuration, knows that this means the instance of the application needs to gracefully go offline.
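As a rough sketch of how such a monitor might look in configuration: the property names, send string, and hostname below are illustrative and from memory, not verified syntax; consult the LTM documentation for your version.

```
create ltm monitor http app_drain_monitor {
    defaults-from http
    send "GET /health HTTP/1.1\r\nHost: app.example.com\r\nConnection: close\r\n\r\n"
    recv "OK"
    recv-disable "DISABLE ME"
}
```

With a monitor like this attached to a pool, a health-check response containing "OK" keeps the instance fully available, while one containing "DISABLE ME" tells BIG-IP to stop sending new connections to the instance while allowing existing ones to complete.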
LTM therefore continues to direct existing connections (sessions) with that instance to the right application (phase 3), but subsequently directs all new connection requests to the other instances in the pool (farm, cluster). When there are no more existing connections, the instance can be taken offline or shut down with zero impact to users.

The combination of "receive string" and "receive disable string" determines how BIG-IP interprets the instruction. A "receive string" typically describes the content received that indicates an available and properly executing application. This can be as simple as "HTTP 200 OK" or as complex as looking for a specific string in the response. Similarly, the "receive disable" string indicates a particular string of text that signals a desire to disable the node and begin the process of bleeding off connections. This could be as simple as "DISABLE", as indicated in the above diagram, or it could just as easily be based solely on HTTP status codes. If an application instance starts returning 50x errors because it's at capacity, the load balancing policy might include a live disable of the instance to allow it time to cool down - maintaining existing connections while not allowing new ones. Because action is based on matching a specific string, the possibilities are pretty much wide open. The following table describes the possible interactions between the two receive string types:

LEVERAGING as a PROVIDER

One of the ways in which a provider could leverage this functionality to provide differentiated value-added cloud services (as Randy Bias calls them) would be to define an application health monitoring API of sorts that allows customers to add to their application a specific set of URIs used solely for monitoring, and thus control the behavior of the load balancer without requiring per-customer access to the infrastructure itself. That's a win-win, by the way. The customer gets control, but so does the provider.
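The string-matching behavior can also be sketched from the application's side. This hypothetical fragment (the status strings, threshold, and function name are all illustrative, not any F5 API) shows an instance deciding what its health check should return:

```javascript
// Hypothetical application-side health logic -- not an F5 API.
// In this sketch "ENABLE" would be configured as the monitor's
// receive string and "QUIESCE" as its receive disable string;
// any other response matches neither and marks the instance down.
function healthStatus(activeConnections, draining, maxConnections) {
  if (draining) {
    return 'QUIESCE'; // drain: keep existing connections, refuse new ones
  }
  if (activeConnections >= maxConnections) {
    return 'DISABLE'; // at capacity: matches neither string, marked down
  }
  return 'ENABLE';    // healthy: instance stays in rotation
}
```

An orchestration layer could flip the draining flag when it decides to scale down, and the load balancer follows suit on its next health check - no direct access to the load balancer's configuration required.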
Consider a health monitoring API that is a single URI: http://$APPLICATION_INSTANCE_HOSTNAME/health/check. Now provide a set of three options for customers to return (these are likely oversimplified for illustration purposes, but not by much): ENABLE, QUIESCE, or DISABLE. For all application instances the BIG-IP will automatically use an HTTP-derived monitor that calls $APP_INSTANCE/health/check and examines the result. The monitor would use "ENABLE" as the "receive string" and "QUIESCE" as the "receive disable" string. Based on the string returned by the application, the BIG-IP takes the appropriate action (as defined by the table above). Of course this can also easily be accomplished by providing a button on the cloud management interface to do the same via iControl, but this option can be programmatically driven by customers and is thus more dynamic and amenable to automation. And of course such an implementation isn't relegated only to service providers; IT organizations in any environment can take advantage of it, especially if they're working toward an automated data center and/or self-service provisioning/management of IT services. That is infrastructure as a service.

Yes, this means modification to the application being deployed. No, I don't think that's a problem - cloud and Infrastructure as a Service (IaaS), at least real IaaS, is necessarily going to require modifications to existing applications, and new applications will need to include this type of integration in the future if we are to take advantage of the benefits afforded by a more application-aware infrastructure and, conversely, a more infrastructure-aware application architecture.

F5 Friday: SDN, Layer 123 SDN & OpenFlow World Congress and LineRate Systems (A Chat with F5's John Giacomoni)
#SDN #OpenFlow John G (that's what we call him here on the inside) answers some burning questions about F5 and SDN before jetting off to SDN & OpenFlow World Congress...

We get a lot of questions these days, not just about F5 and what we're doing with SDN, but also about LineRate and where it fits in. So we thought we'd chat with our resident expert, John Giacomoni. John not only co-founded LineRate and joined us after the acquisition, but has since emerged as one of our key subject matter experts on SDN in general. I caught up with John via e-mail just before he left for Germany to attend Layer 123 SDN & OpenFlow World Congress, where he'll be presenting and mingling with other SDN folks.

Q: I heard that you'll be speaking at the Layer 123 SDN & OpenFlow World Congress in Germany next week. Can you tell us a little bit about that?

Sure, I'll be presenting a talk on October 17th focusing on Application Delivery and SDN, which gives an overview of how SDN architectures embrace both applications and Layer 4-7 services and presents a few Layer 7 use cases that bring powerful traffic management into data centers built around the SDN architecture. I'll also be participating on the 16th in the lunchtime debating table focused on "Transitioning from a Connection/Transport Model to an Application Delivery Model using Application Layer SDN," hosted by F5's VP for Service Provider Solutions, Dr. Mallik Tatipamula.

Q: You recently joined us in February as part of our acquisition of LineRate Systems. Can you tell us a little bit about your role at F5?

Since transitioning to F5, my role has been an evolution of my former LineRate role, such that I now wear two very different hats. The most visible hat is that of "lead" SDN strategist and evangelist. I have been evangelizing our vision through presentations at conferences, participation in the ONF's L4-7 working group, and authoring white papers and case studies.
The less visible hats are my dual roles as architect for the LineRate kernel and as part of the LineRate go-to-market leadership team.

Q: Can you briefly summarize F5's SDN story?

Sure. The most important thing to understand is that SDN is an architecture, not any particular collection of technologies. That is to say, the central idea behind SDN is to realize operational benefits by centralizing network control into a single entity, typically referred to as an SDN Controller. It is the job of the SDN Controller, along with support from SDN plug-ins (applications), to do the work of implementing the architect's intent throughout the network. The SDN Controller accomplishes this by using open APIs that allow for programmatic configuration and by extending the data path elements' directly programmable data paths (extensibility). F5 is extending SDN architectural discussions by introducing the concept of stateful packet-forwarding data path elements that complement the much-discussed stateless data path elements. There are a number of reasons, as I presented at the Layer 123 SDN & OpenFlow APAC Congress, for needing stateful L4-7 data path elements. The biggest is that to handle all the state transitions needed for L4 and L7 services, one effectively makes the SDN Controller an integral part of the data path, which creates scalability issues for the controller and latency issues for traffic, and violates the core architectural principle of separation of the data and control planes.

Q: Can you give us a sense of how else you've been promoting F5's SDN vision?

I've been presenting at conferences, participating in the ONF's L4-7 working group, and authoring printed marketing collateral. My evangelism has been most noticeable at conferences, beginning in Singapore at the Layer 123 SDN & OpenFlow APAC Congress back in June, where I discussed how an SDN architecture is incomplete without application layer SDN - that is, stateful data path elements operating at Layers 4-7.
I've also provided primary coverage for our booths at the Open Networking Summit and at Intel IDF in the Software Defined Infrastructure pavilion.

Q: So how does LineRate fit into SDN?

LineRate fits into SDN the same way as the rest of the F5 portfolio: with APIs that allow it to be fully automated by a controller (programmable configuration) and to extend the data path in novel ways without vendor support (directly programmable). F5 has supported programmable configuration with its iControl API since its introduction in 2001 and has been directly programmable since 2004 with our introduction of iRules. Both APIs have been fully published and open since launch. F5 has also demonstrated integration with VMware vShield Manager and IBM SmartCloud Orchestration (SCO). The F5 SDC product has a SOAP API for configuration and a Groovy (Java-derived) scripting capability for extensibility. The LineRate products have a REST API and a node.js API for JavaScript. The point is that F5 has a history of providing products that seamlessly integrate with implementations of the SDN architecture, and LineRate is no different.

Q: So how does Network Functions Virtualization (NFV) fit into F5's vision?

NFV is an interesting addition to the landscape, introduced by service providers at last year's Layer123 SDN & OpenFlow World Congress in Germany. Since then, NFV has become a pseudo-standards process in the care of ETSI, of which F5 is a member. The core idea is to virtualize all network functions in their networks so that they can be dynamically provisioned and scaled on commodity hardware. This has the potential to lead to significant efficiencies in terms of both CAPEX and OPEX. CAPEX savings would be realized by avoiding the capacity-planning trap, as services can be scaled as fast as a computer program can detect the need for additional capacity and order it. OPEX savings come in the form of being able to run all data centers in a lights-out model.
So NFV is a closely related sibling to SDN: SDN is focused on optimizing "topology" issues, while NFV is focused on optimizing the nodes in the network. Working together, they give rise to a fully software-defined data center/network that can be completely orchestrated from a central point of control. It is also worth noting that all the principles of NFV apply in other data centers, and there has been a long-standing movement toward moving everything to software.

Q: For a bit of historical context, can you tell us a bit about the genesis and motivation behind LineRate Systems?

Certainly. In 2008 I co-founded LineRate Systems with my co-founder Manish Vachharajani on the then-disruptive idea of replacing "big-iron" network appliances with a pure software solution. The goal was to deliver all the flexibility advantages of a software application with the streamlined manageability of a turn-key network appliance. We also made the decision to build the entire product around the idea of a REST API so that our system could be easily integrated into remote configuration management and orchestration systems without users ever needing to touch the box. Eventually the space we had entered would be called Software Defined Networking (SDN) and Network Functions Virtualization (NFV). So that was the motivation; the genesis was rooted in a research project that I began as a professional research assistant at CU Boulder back in 2003, in high-performance intrusion detection on commodity multi-core x86 servers. Later, as an MS and PhD student, I connected with my future co-founder Manish Vachharajani as my PhD advisor, and we advanced the techniques from functioning research to practice, founding LineRate in 2008.

Q: What about your previous role at LineRate? You mentioned that your role is similar but evolved.
At LineRate, Manish and I split our duties. Manish was biased toward the technical side as our Chief Software Architect, responsible for overall architecture with a fair amount of business responsibilities as well, while I began skewed toward the business side as founding CEO, eventually transitioning to a more balanced business/technical position as CTO. As founding CEO I led the company's raise of our seed round of capital and implemented our high-performance kernel, which gave us a hardware-class platform in pure software. As CTO I spent a lot of time with customers, driving our SDN messaging, and leading kernel architecture.

If you're attending Layer 123 SDN & OpenFlow World Congress, take a moment to track John G down and have a chat, or attend his session. If you're not, you can track down John on Twitter.