application delivery controller
WILS: Virtual Server versus Virtual IP Address
Load balancing intermediaries have long used the terms "virtual server" and "virtual IP address". With the widespread adoption of virtualization, these terms have become even more confusing to the uninitiated. Here's how load balancing and application delivery use the terminology.

I often find it easiest to explain the difference between a "virtual server" and a "virtual IP address (VIP)" by walking through the flow of traffic as it is received from the client. When a client queries for "www.yourcompany.com", it gets back an IP address, of course. In many cases, if the site is served by a load balancer or application delivery controller, that IP address is a virtual IP address. That simply means the IP address is not tied to a specific host. It's kind of floating out there, waiting for requests. It's more like a taxi than a public bus, in that a public bus has a predefined route from which it does not deviate. A taxi, however, can take you wherever you want within the confines of its territory. In the case of a virtual IP address, that territory is the set of virtual servers and services offered by the organization.

The client (the browser, probably) uses the virtual IP address to make a request to "www.yourcompany.com" for a particular resource such as a web application (HTTP) or to send an e-mail (SMTP). Using the VIP and a TCP port appropriate for the resource, the application delivery controller directs the request to a "virtual server". The virtual server is also an abstraction. It doesn't really "exist" anywhere but in the application delivery controller's configuration. The virtual server determines – via myriad options – which pool of resources will best serve the user's request. That pool of resources contains "nodes", which ultimately map to one (or more) physical or virtual web/application servers (or mail servers, or X servers).

A virtual IP address can represent multiple virtual servers, and the correct mapping between them is generally accomplished by further delineating virtual servers by TCP destination port. So a single virtual IP address can point to a virtual "HTTP" server, a virtual "SMTP" server, a virtual "SSH" server, etc. Each virtual "X" server is a separate instantiation, all essentially listening on the same virtual IP address. It is also true, however, that a single virtual server can be represented by multiple virtual IP addresses. So "www1" and "www2" may represent different virtual IP addresses, but they might both use the same virtual server. This allows an application delivery controller to make routing decisions based on the host name, so "images.yourcompany.com" and "content.yourcompany.com" might resolve to the same virtual IP address and the same virtual server, but the "pool" of resources to which requests for images are directed will be different than the "pool" of resources to which requests for content are directed. This allows for greater flexibility in architecture and scalability of resources at the content-type and application level rather than at the server level. (A minimal sketch of this VIP-to-virtual-server-to-pool mapping follows below.)

WILS: Write It Like Seth. Seth Godin always gets his point across with brevity and wit. WILS is an attempt to be concise about application delivery topics and just get straight to the point. No dilly dallying around.
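To make the mapping concrete, here is a minimal, hypothetical sketch in Python (not F5 configuration; the addresses, ports, and pool names are invented for illustration) of one virtual IP address fronting several virtual servers keyed by destination port, each of which selects from its own pool of nodes:

```python
# Hypothetical illustration only: one virtual IP address (VIP) fronting
# several "virtual servers", distinguished by destination TCP port.
# Each virtual server selects from its own pool of back-end nodes.

from itertools import cycle

VIP = "203.0.113.10"  # the advertised, "floating" address for www.yourcompany.com

# virtual servers keyed by (VIP, port); each owns a pool of nodes
virtual_servers = {
    (VIP, 80): {"name": "vs_http", "pool": cycle(["10.0.0.11:8080", "10.0.0.12:8080"])},
    (VIP, 25): {"name": "vs_smtp", "pool": cycle(["10.0.1.21:25"])},
    (VIP, 22): {"name": "vs_ssh",  "pool": cycle(["10.0.2.31:22"])},
}

def direct_request(dest_ip, dest_port):
    """Map an incoming (VIP, port) to a virtual server, then pick a node from its pool."""
    vs = virtual_servers.get((dest_ip, dest_port))
    if vs is None:
        raise LookupError("no virtual server listening on that VIP/port")
    node = next(vs["pool"])  # simple round-robin selection for the sketch
    return f"{vs['name']} -> {node}"

print(direct_request(VIP, 80))  # e.g. vs_http -> 10.0.0.11:8080
print(direct_request(VIP, 80))  # e.g. vs_http -> 10.0.0.12:8080
print(direct_request(VIP, 25))  # e.g. vs_smtp -> 10.0.1.21:25
```

Host-name-based routing, as described above for images.yourcompany.com and content.yourcompany.com, is conceptually the same lookup with the host name as part of the key, selecting a different pool for each.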
Server Virtualization versus Server Virtualization
Architects Need to Better Leverage Virtualization
Using "X-Forwarded-For" in Apache or PHP
SNAT Translation Overflow
WILS: Client IP or Not Client IP, SNAT is the Question
WILS: Why Does Load Balancing Improve Application Performance?
WILS: The Concise Guide to *-Load Balancing
WILS: Network Load Balancing versus Application Load Balancing
All WILS Topics on DevCentral
If Load Balancers Are Dead Why Do We Keep Talking About Them?

F5 Friday: IT Brand Pulse Awards
IT Brand Pulse carries a series of reports, based upon surveys conducted amongst IT professionals, that attempt to ferret out the impression those working in IT have of the various vendors in any given market space. Their free sample of such a report is the November 2010 FCoE Switch Market Leader Report, and it is an interesting read, though I admit it made me want to paw through some more long-form answers from the participants to see what shaped these perceptions. The fun part is trying to read between the lines: since this is aimed at the perceived leader, you have to ask how much boost Cisco and Brocade received in the FCoE space just because they're the FC vendors of choice. But of course, no one source of information is ever a complete picture, and this does give you some information about how your peers feel – whether that impression is real or not – about the various vendors out there. It's not the same as taking some peers to lunch, but it does give you an idea of the overall perceptions of the industry in one handy set of charts.

This February, F5 was honored by those who responded to the Load Balancer Market Leader Report with wins in three separate categories of load balancing – price, performance, and innovation – and took the overall title of "Market Leader" in load balancing. We, of course, prefer to call ourselves an Application Delivery Controller (ADC) vendor, but when you break out the different needs of users, doing a survey on load balancing is fair enough. After all, BIG-IP Local Traffic Manager (LTM) has its roots in load balancing, and we think it's tops.

IT Brand Pulse is an analyst firm that provides a variety of services to help vendors and IT staff make intelligent decisions. While F5 is not, to my knowledge, an IT Brand Pulse customer, I (personally) worked with the CEO, Frank Berry, while he was at QLogic and I was writing for Network Computing. Frank has a long history in the high-tech industry and a clue about what is going on, so I trust his company's reports more than I trust most analyst firms'.

We at F5 are pleased to have this validation to place next to the large array of other awards, recognition, and customer satisfaction we have earned, and we intend to keep working hard to earn more such awards. It is definitely our customers who place us so highly, and for that we are very grateful. Because in the end it is about what customers do with our gear that matters. And we'll continue to innovate to meet customer needs, while keeping our commitment to quality.

WILS: How can a load balancer keep a single server site available?
Most people don't start thinking they need a "load balancer" until they need a second server. But even if you've only got one server, a "load balancer" can help with availability and performance, and it can make the transition later on to a multiple-server site a whole lot easier. Before we reveal the secret sauce, let me first say that if you have only one server and the application crashes or the network stack flakes out, you're out of luck. There are a lot of things load balancers/application delivery controllers can do with only one server, but automagically fixing application crashes or network connectivity issues ain't in the list. If these are concerns, then you really do need a second server. But if you're just worried about standing up to the load, then a load balancer for even a single server can definitely give you a boost.

Bare Metal Blog: Throughput Sometimes Has Meaning
#BareMetalBlog Knowing what to test is half the battle. Knowing how it was tested is the other. Knowing what that means is the third.

That's some testing, real clear numbers.

In most countries, top speed is no longer the thing that auto manufacturers want to talk about. Top speed is great if you need it, but for the vast bulk of us, we'll never need it. Since the flow of traffic dictates that too much speed is hazardous on the vast bulk of roads, automobile manufacturers have correctly moved the conversation to other things – cup holders (did you know there is a magic number of them for female purchasers? Did you know people actually debate not the existence of such a number, but what it is?), USB/Bluetooth connectivity, backup cameras, etc. Safety and convenience features have supplanted top speed as the things to discuss.

The same is true of networking gear. While I was at Network Computing, focus was shifting from "speeds and feeds", as the industry called it, to overall performance in a real enterprise environment. Not only was it getting increasingly difficult and expensive to push ever-larger switches until they could no longer handle the throughput, enterprise IT staff was more interested in what the capabilities of the box were than in how fast it could go. Capabilities is a vague term that I used on purpose. The definition is a moving target across both time and market, with a far different set of criteria for, say, an ADC versus a WAP.

There are times, however, when you really do want to know about the straight-up throughput, even if you know it is the equivalent of a professional driver on a closed course, and your network will never see the level of performance that is claimed for the device. There are actually several cases where you will want to know about the maximum performance of an ADC, using the tools I pay the most attention to at the moment as an example. WAN optimization is a good one. In WANOpt, the goal is to shrink the amount of data being transferred between two dedicated points to try to maximize the amount of throughput. When "maximize the amount of throughput" is in the description, speeds and feeds matter.

WANOpt is a pretty interesting example too, because there's more than just "how much data did I send over the wire in that fifteen minute window". It's more complex than that (isn't it always?). The best testing I've seen for WANOpt starts with "how many bytes were sent by the originating machine", then measures that the same number of bytes were received by the WANOpt device, then measures how much is going out the Internet port of the WANOpt device – to measure compression levels and bandwidth usage – then measures the number of bytes the receiving machine at the remote location receives, to make sure it matches the originating machine. So even though I say "speeds and feeds matter", there is a caveat. You want to measure latency introduced with compression and dedupe (and possibly with encryption, since WANOpt is almost always over the public Internet these days), throughput, and bandwidth usage. All technically "speeds and feeds" numbers, but taken together they give you an overall picture of what good the WANOpt device is doing. There are scenarios where the "good" is astounding. I've seen numbers that range as high as 95x the performance. If you're sending a ton of data over WANOpt connections, even 4x or 5x is a huge savings in connection upgrades; anything higher than that is astounding. (A quick sketch of the measurement math follows below.)
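As a rough illustration of that measurement approach (the byte counts below are invented, not results from any test), the arithmetic for compression ratio, bandwidth reduction, and effective throughput looks something like this:

```python
# Hypothetical sketch of the WAN optimization math described in the text.
# Four measurement points: bytes sent by the origin server, bytes entering the
# WANOpt device, bytes leaving its Internet port, and bytes received remotely.

def wanopt_report(server_sent, wanopt_in, wanopt_out, remote_received, transfer_seconds):
    """Summarize a single WANOpt transfer test; all byte counts are illustrative."""
    assert server_sent == wanopt_in, "WANOpt device should receive every byte the server sent"
    assert remote_received == server_sent, "remote machine should reconstruct the original bytes"
    reduction = 1 - (wanopt_out / server_sent)       # fraction of bytes removed on the wire
    multiplier = server_sent / wanopt_out            # e.g. 4x, 5x ... 95x in the best cases
    effective_throughput_mbps = (remote_received * 8) / (transfer_seconds * 1_000_000)
    return {
        "wire_bytes": wanopt_out,
        "bandwidth_reduction_pct": round(reduction * 100, 1),
        "improvement_multiplier": round(multiplier, 1),
        "effective_throughput_mbps": round(effective_throughput_mbps, 1),
    }

# Example with made-up numbers: 1 GB sent, 200 MB actually crossed the WAN, in 60 seconds.
print(wanopt_report(1_000_000_000, 1_000_000_000, 200_000_000, 1_000_000_000, 60.0))
```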
This is an (older) diagram of WAN Optimization I've marked up to show where the testing took place, because sometimes a picture is indeed worth a thousand words. And yeah, I used F5 gear for the example image… that really should not surprise you.

So basically, you count the bytes the server sends, the bytes the WANOpt device sends (which will be less for 99.99% of loads if compression and de-dupe are used), and the total number of bytes received by the target server. Then you know what percentage improvement you got out of the WANOpt device (by comparing server-out bytes to WANOpt-out bytes), that the WANOpt device functioned as expected (server received bytes == server sent bytes), and what the overall throughput improvement was (server received bytes / time to transfer).

There are other scenarios where simple speeds and feeds matter, but fewer of them than there used to be, and the trend is continuing. When a device designed to improve application traffic is introduced, there are certainly fewer of them. The ability to handle a gazillion connections per second that I've mentioned before is a good guardian against DDoS attacks, but what those connections can do is a different question. Lots of devices in many networking market spaces show little or even no latency introduction on their glossy sales hand-outs, but make those devices do the job they're purchased for and see what the latency numbers look like. It can be ugly, or you could be pleasantly surprised, but you need to know. Because you're not going to use it in a pristine lab with perfect conditions; you're going to slap it into a network where all sorts of things are happening and it is expected to carry its load.

So again, I'll wrap with the acknowledgement that you all are smart puppies and know where speeds and feeds matter, so make sure you have realistic performance numbers for those cases too.

Technorati Tags: Testing, Application Delivery Controller, WAN Optimization, throughput, latency, compression, deduplication, Bare Metal Blog, F5 Networks, Don MacVittie

The Whole Bare Metal Blog series:
Bare Metal Blog: Introduction to FPGAs | F5 DevCentral
Bare Metal Blog: Testing for Numbers or Performance? | F5 ...
Bare Metal Blog: Test for reality. | F5 DevCentral
Bare Metal Blog: FPGAs The Benefits and Risks | F5 DevCentral
Bare Metal Blog: FPGAs: Reaping the Benefits | F5 DevCentral
Bare Metal Blog: Introduction | F5 DevCentral

Bare Metal Blog: Test for reality.
#BareMetalBlog #F5 Test results provided by vendors and "independent testing labs" often test for things that just don't matter in the datacenter. Know what you're getting.

When working in medicine, you don't glance over a patient and then ask them "so how do you feel when you're at your best?" You ask them what is wrong, then run a ton of tests – even if the patient thinks they know what is wrong – then let the evidence determine the best course of treatment. Sadly, when determining the best tools for your IT staff to use, we rarely follow the same course. We invite a salesperson in, ask them "so, what do you do?", and let them tell us with their snippets of reality or almost-reality why their product rocks the known world. Depending upon the salesperson and the company, their personal moral code or corporate standards could mean anything from simply not bringing up the weak points of their products to outright lying about their capabilities.

"But Don!", you say, "you're being a bit extreme, aren't you?" Not in my experience, I am not. From being an enterprise architect to doing comparative reviews, I have seen it all. Vendor culture seems to infiltrate how employees interact with the world outside their HQ – or more likely (though I've never seen any research on it), vendors tend to hire to fit their culture, which ranges from straight-up truth about everything to wild claims that fall apart the first time the device is put into an actual production-level network.

The most common form of disinformation out there is to set up tests so they simply show the device operating at peak efficiency. This is viewed as almost normal by most vendors ("why would you showcase your product in less than its best light?") and as a necessary evil by most of the few who don't share that view ("every other vendor in the space is using this particular test metric; we'd better too, or we'll look bad"). Historically, in network gear, nearly empty communications streams have been the standard for high connection rates, and very large window sizes the standard for manipulating throughput rates. While there are many other games vendors in the space play to look better than they are, it is easy to claim you handle X million connections per second if those connections aren't actually doing anything. It is also easier to claim you handle a throughput of Y Mbps if you set the window size larger than reality would ever see it. The problem with this kind of testing is that it seeps into the blood; after a while, those test results start to be sold as actual… And then customers put the device into their network and, needless to say, they see nothing of the kind. You would be surprised at the number of times, when we were testing for Network Computing, that a vendor downright failed to operate as expected when put into a live network, let alone meet the numbers the vendor was telling customers about performance.

One of the reasons I came to F5 way back when was that they did not play these games. They were willing to make the marketing match the product and put a new item on the roadmap for things that weren't as good as they wanted. We're still out there today helping IT staff understand testing, and what testing will show relevant numbers in the real world. By way of example, there is the Testing Configuration Guide on F5 DevCentral. (To see why window size in particular is such an effective lever in throughput tests, see the back-of-the-envelope sketch below.)
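The window-size game works because, for a single TCP connection, achievable throughput is bounded by roughly the window size divided by the round-trip time. A back-of-the-envelope sketch (the window sizes and RTT are arbitrary examples, not measurements of any product):

```python
# Rough TCP throughput ceiling for one connection: window_bytes / round_trip_time.
# Inflate the window in a test and the claimed Mbps climbs, whether or not
# production traffic will ever see windows that large.

def max_throughput_mbps(window_bytes, rtt_seconds):
    """Upper bound on single-connection throughput, in megabits per second."""
    return (window_bytes * 8) / rtt_seconds / 1_000_000

rtt = 0.050  # 50 ms round trip, an arbitrary WAN-ish figure
for window in (64 * 1024, 256 * 1024, 16 * 1024 * 1024):   # 64 KB, 256 KB, 16 MB
    print(f"window {window // 1024:>6} KB -> ~{max_throughput_mbps(window, rtt):8.1f} Mbps ceiling")
```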
As Application Delivery Controllers have matured and solidified, there has been much change in how they approach network traffic. This has created an area we are now starting to talk more about, which is the validity of throughput testing in regard to ADCs in general. The thing is, we've progressed to the point that simply "we can handle X Mbps!" is no longer a valid indication of the workloads an ADC will be required to handle in production scenarios. The real measure of application throughput that matters is requests per second. Vendors generally avoid this kind of testing, because response is also limited by the capacity of the server doing the actual responding, so it is easy to get artificially low numbers.

At this point in the evolution of the network, we have reached the reality of that piece of utility computing. Your network should be like electricity. You should be able to expect that it will be on, and that you will have enough throughput to handle incoming load. Mbps is like measuring amperage… when you don't have enough, you'll know it, but you should, generally speaking, have enough. It is time to focus more on what uses you put that bandwidth to, and how to conserve it. Switching to LED bulbs is like putting in an ADC that is provably able to improve app performance: LEDs use less electricity, the ADC reduces bandwidth usage… except that throughput or packets per second isn't measuring actual improvements in bandwidth usage. It's more akin to turning off your lights after installing LED bulbs, and then saying "lookie how much electricity those new bulbs saved!"

Seriously, do you care if your ADC can deliver 20 million megabits per second in throughput, or that it allows your servers to respond to requests in a timely manner? The purpose of an Application Delivery Controller is to facilitate and accelerate the delivery of applications, which means responses to requests. If you're implementing WAN optimization functionality, throughput is still a very valid test. If you're looking at the application delivery portion of the ADC, though, it really has no basis in reality, because requests and responses are messy, not "as large a string of ones as I can cram through here". From an application perspective – particularly from a web application perspective – there is a lot of "here's a ton of HTML, hold on, sending images, wait, I have a video lookup…" Mbps or MBps just doesn't measure the variety of most web applications. But users are going to feel requests per second just as much as testing will show positive or negative impacts.

To cover the problem of application servers actually having a large impact on testing, do what you do with everything else in your environment: control for change. When evaluating ADCs, simply use the same application infrastructure and change only the ADC out. Then you are testing apples-to-apples, and the relative values of those test results will give you a gauge for how well a given ADC will perform in your environment.

In short, of course the ADC looks better if it isn't doing anything. But ADCs do a ton in production networks, and that needs to be reflected in testing. If you're fortunate enough to have the time and equipment, get a test scheduled with your prospective vendor, and make sure that test is based upon the usage your actual network will expose the device to. If you are not, then identify the test scenarios that stress what's most important to you, and insist that your vendor give you test results in those scenarios. (A trivial sketch of measuring requests per second rather than raw throughput follows below.)
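As a trivial, hypothetical illustration of testing for requests per second rather than raw Mbps (the target URL, worker count, and duration are placeholders, and this is nowhere near a production-grade load generator):

```python
# Toy requests-per-second measurement: hammer one URL with a few worker threads
# for a fixed interval and report completed requests per second and mean latency.
# Standard library only; the target URL is a placeholder.

import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET = "http://test-vip.example.com/"    # hypothetical virtual server under test
DURATION = 10.0                            # seconds
WORKERS = 8

def worker(deadline):
    count, total_latency = 0, 0.0
    while time.monotonic() < deadline:
        start = time.monotonic()
        try:
            with urllib.request.urlopen(TARGET, timeout=5) as resp:
                resp.read()
            count += 1
            total_latency += time.monotonic() - start
        except OSError:
            pass  # count only completed requests in this toy example
    return count, total_latency

deadline = time.monotonic() + DURATION
with ThreadPoolExecutor(max_workers=WORKERS) as pool:
    results = list(pool.map(worker, [deadline] * WORKERS))

completed = sum(c for c, _ in results)
latency_sum = sum(l for _, l in results)
print(f"{completed / DURATION:.1f} requests/second, "
      f"mean latency {1000 * latency_sum / max(completed, 1):.1f} ms")
```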
In the end, you know your network far better than they ever will, and you know they're at least not telling you the whole story, so make sure you can get it. Needless to say, this is a segue into the next segment of our #BareMetalBlog series, but first I'm going to finish educating myself about our use of FPGAs and finish that segment up.

Community: Force Multiplier for Network Programmability
#SDN Programmability on its own is not enough; a strong community is required to drive the future of network capabilities.

One of the lesser mentioned benefits of an OpenFlow SDN as articulated by ONF is the ability to "customize the network":

"It promotes rapid service introduction through customization, because network operators can implement the features they want in software they control, rather than having to wait for a vendor to put it in plan in their proprietary products." -- Key Benefits of OpenFlow-Based SDN

This ability is not peculiar to SDN or OpenFlow; rather, it's tied to the concept of a programmable, centralized control model architecture. It's an extension of the decoupling of control and data planes, as doing so affords an opportunity to insert a programmable layer or framework at a single, strategic point of control in the network. It's ostensibly going to be transparent and non-disruptive to the network because any extension of functionality will be deployed in a single location in the network rather than on every network element in the data center.

This is actually a much more powerful benefit than it is often given credit for. The ability to manipulate data in-flight is the foundation for a variety of capabilities – from security to acceleration to load distribution. Being able to direct flows in real-time has become, for many organizations, a critical capability in enabling the dynamism required to implement modern solutions, including cloud computing. This is very true at layers 4-7, where ADN provides the extensibility of functionality for application-layer flows, and it will be true at layers 2-3, where SDN will ostensibly provide the same for network-layer flows.

One of the keys to success in real-time flow manipulation, a.k.a. network programmability, will be a robust community supporting the controller. Community is vital to such efforts because it provides organizations with broader access to experts across various domains as well as in the controller's programmatic environment. Community experts will be vital to assisting in optimization, troubleshooting, and even development of the customized solutions for a given controller.

THE PATH to PRODUCTIZATION

What ONF does not go on to say about this particular benefit is that eventually customizations end up incorporated into the controller as native functionality. That's important, because no matter how you measure it, software-defined flow manipulation will never achieve the same level of performance as the same manipulations implemented in hardware. And while many organizations can accept a few milliseconds of latency, others cannot or will not. Also true is that some customized functionality eventually becomes so broadly adopted that it requires a more turn-key solution; one that does not require the installation of additional code to enable.

This was the case, for example, with session persistence – the capability of an ADC (application delivery controller) to ensure session affinity with a specific server. Such a capability is considered core to load balancing services and is required for a variety of applications, including VDI. Originally, this capability was provided via real-time flow manipulation. It was code that extended the functionality of the ADC that had to be implemented individually by every organization that needed it – which was most of them. (A toy sketch of what such persistence logic boils down to appears below.)
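To make the session persistence example concrete, here is a toy, hypothetical sketch (not the community code referenced here, and not an iRule) of what session affinity boils down to: pin a client's session to the node that served its first request, and only make a fresh load balancing decision when no persistence record exists.

```python
# Toy sketch of session persistence (session affinity) at a load balancing proxy:
# remember which back-end node served a session and keep sending that session there.

from itertools import cycle

pool = cycle(["10.0.0.11", "10.0.0.12", "10.0.0.13"])   # illustrative pool members
persistence_table = {}                                   # session id -> pinned node

def pick_node(session_id=None):
    """Return (node, session_id); reuse the pinned node if the session is already known."""
    if session_id in persistence_table:
        return persistence_table[session_id], session_id
    node = next(pool)                                    # the normal load balancing decision
    session_id = session_id or f"sess-{len(persistence_table) + 1}"
    persistence_table[session_id] = node                 # pin the session for later requests
    return node, session_id

node, sid = pick_node()          # first request: load balanced, then pinned
print(node, sid)
print(pick_node(sid))            # follow-up requests: same node every time
```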
The code providing this functionality was shared and refined over and over by the community and eventually became so much in demand that it was rolled into the ADC as a native capability. This improved performance, of course, but it also offered a turn-key "checkbox" configuration for something that had previously required code to be downloaded and "installed" on the controller. The same path will need to be available for SDN as has been afforded for ADN, to mitigate complexity of deployment as well as to address the potential performance implications of implementing network functionality in software.

That path will be a powerful one, if it is leveraged correctly. While organizations always maintain the ability to extend network services through programmability, if community support exists to assist in refinement and optimization and, ultimately, provide a path to productization, the agility of network services increases ten- or a hundred-fold over the traditional vendor-driven model. There are four requirements to enable such a model to be successful for customers and vendors alike:

1. A community that encourages sharing and refinement of "applications".
2. A repository of "applications" that is integrated with the controller and enables simple deployment of "applications". Such a repository may require oversight to certify or verify applications as being non-malicious or error-free.
3. A means by which applications can be rated by consumers. This is the feedback mechanism through which the market indicates to vendors which features and functionality are in high demand and would be valuable implemented as native capabilities.
4. A basic level of configuration management control that enables roll-back of "applications" on the controller. This affords protection against the introduction of applications that contain errors or that interact poorly when deployed in a given environment.

The programmability of the network, like the programmability of the application delivery network, is a powerful capability for customers and vendors alike. Supporting a robust, active community of administrators and operators who develop, share, and refine "control-plane applications" that manipulate flows in real-time to provide additional value and functionality when it's needed is critical to the success of such a model. Building and supporting such a community should be a top priority, and integrating it into the product development cycle should be right behind it.

HTML5 WebSockets Illustrates Need for Programmability in the Network
Midokura – The SDN with a Hive Mind
Reactive, Proactive, Predictive: SDN Models
SDN is Network Control. ADN is Application Control.
F5 Friday: Programmability and Infrastructure as Code
Integration Topologies and SDN

The Game of Musical (ADC) Chairs
Now things are starting to get interesting… Nearly everyone has played musical chairs – as a child if not as an adult with a child. When that music stops there's a desperate scrambling to pair up with a chair lest you end up sitting on the sidelines watching the game while others continue to play until finally, one stands alone.

The ADC market has recently been a lot like a game of musical chairs, with players scrambling every few months for a chair upon which they can plant themselves and stay in the game. While many of the players in adjacent markets – storage, WAN optimization, switching – have been scrambling for chairs for months, it is only today, when the big kids have joined the game, that folks are really starting to pay attention. While a deepening Cisco-Citrix partnership is certainly worthy of such attention, what's likely to be missed in the distraction caused by such an announcement is that the ADC has become such a critical component in data center and cloud architectures that it leaves would-be and has-been players scrambling for an ADC chair so they can stay in the game.

ADCs have become critical to architectures because of the strategic position they maintain: they are the point through which all incoming application and service traffic flows. They are the pivotal platform upon which identity and access management, security, and cloud integration heavily rely. And with application and device growth continuing unabated, as well as a growing trend toward offering not only applications but APIs, the incoming flows that must be secured, managed, directed, and optimized are only going to increase in the future.

F5 has been firmly attached to a chair for the past 16 years, providing market-leading application and context-aware infrastructure that improves the delivery and security of applications, services, and APIs, irrespective of location or device. That has made the past three months particularly exciting for us (and that is the corporate "us") as the market has continued to be shaken up by a variety of events. The lawsuit between A10 and Brocade was particularly noteworthy, putting a damper on A10's ability not only to continue its evolution but to compete in the market. Cisco's "we're out, we're not, well, maybe we are" message regarding its ACE product line shook things up again, and was both surprising and yet not surprising. After all, for those of us who've been keeping score, ACE was the third attempt from Cisco at grabbing an ADC chair. Its track record in the ADC game hasn't been all that inspiring. Unlike Brocade and Riverbed, players in peripherally related games who've recognized the critical nature of an ADC and jumped into the market through acquisition (Brocade with Foundry, Riverbed with Zeus), Cisco is now trying a new tactic to stay in a game it recognizes as critical: a deeper, more integrated relationship with Citrix.

It would be foolish to assume that either party is a big winner in forging such a relationship. Citrix is struggling simply to maintain NetScaler. Revised market share figures for CYQ1 show a player struggling to prop NetScaler up and doing so primarily through VDI and XenApp opportunities, opportunities that are becoming more and more difficult to execute on for Citrix. This is particularly true for customers moving to dual-vendor strategies in their virtualization infrastructure.
Strategies that require an ADC capable of providing feature parity across virtual environments in addition to the speeds and feeds required to support a heterogeneous environment. Strategies that include solutions capable of addressing operational complexity; that enable cloud and software-defined data centers with a strong, integrated, and programmable platform.

While Microsoft applications and Apache continue to be the applications BIG-IP is most often tasked with delivering, virtualization is growing rapidly and Citrix XenApp on BIG-IP is no exception. In fact, we've seen almost 200% growth of Citrix XenApp on BIG-IP from Q2 to Q3 (FY12), owing to BIG-IP's strength and experience in not just delivery optimization and the ability to solve core architectural challenges associated with VDI, but also compelling security and performance capabilities coupled with integration with the orchestration and automation platforms driving provisioning and management of virtualization across desktop and server infrastructure.

Citrix's announcement makes much of a lot of integration that is forthcoming, of ecosystems and ongoing development. Yet Cisco has made such announcements in the past, and it leaves one curious as to why it would put so many resources toward integrating with Citrix when it could have done so at any time with its own solution. Integration via partnership is a much more difficult and lengthy task to undertake than integration with one's own products, for which one has complete control over source code and entry points. If you think about it, Cisco is asking the market to believe that it will be successful with a partner where it has been unsuccessful with its own products. What we have is a vendor struggling to sell its ADC solution asking for help from a vendor who is struggling to sell its own ADC solution.

It's a great vision, don't get me wrong; one that sounds as magically delicious as AON. But it's a vision that relies on integration and development efforts, which require resources; resources that, if Cisco has them, could have been put toward ACE and integration, but either do not exist or do not align with Cisco priorities. It's a vision that puts Citrix's CloudStack at the center of a combined cloud strategy that conflicts with other efforts, such as the recent release of Cisco's own version of OpenStack, which, of course, is heavily supported by competing virtualization partner VMware.

In the game of musical ADC chairs, only one player has remained consistently in step with the beat of the market drum, and that player is F5.

Cisco ACE Trade-In Program
Latest F5 Information
F5 News Articles
F5 Press Releases
F5 Events
F5 Web Media
F5 Technology Alliance Partners
F5 YouTube Feed

Block Attack Vectors, Not Attackers
When an army is configuring defenses, it is not merely the placement of troops and equipment that must be considered, but the likely avenues of attack, the directions the attack could develop in if it is successful, the terrain around the avenues of attack – because the most likely avenues of attack will be those most favorable to the attacker – and emplacements. Emplacements include such things as barricades, bunkers, barbed wire, tank traps, and land mines. While the long-term effects of land mines on civilian populations have recently become evident, there is no denying that they hinder an enemy, and they will continue to be used for the foreseeable future. That is because the emplacement category contains several things, land mines being one of the primary ones, known as "force multipliers".

I've mentioned force multipliers before, but those of you who are new to my blog, and those who missed that entry, might want a quick refresher. Force multipliers swell the effect of your troops (as if multiplying the number of troops) by impacting the enemy or making your troops more powerful. While the term is relatively recent, the idea has been around for a while. Limit the number of attackers that can actually participate in an attack, and you have a force multiplier, because you can bring more defenses to bear than the attacker can overcome. Land mines are a force multiplier because they channel attackers away from themselves and into areas more suited to defense. They can also slow down attackers and leave them in a pre-determined field of fire longer than would have been their choice. No one likes to stand in a field full of bombs, picking their way through, while the enemy is raining fire down upon them. A study of the North African campaign in World War II gives a good understanding of the ways force multipliers can be employed to astounding effect. By cutting off avenues of attack and channeling attackers where they wanted them, the defenders of Tobruk, for example – mostly from the Australian 9th Infantry Division – held off repeated, determined attacks because the avenues left open for attack were tightly controlled by the defenders.

And that is possibly the most effective form of defense available to IT security as well. It is not enough to detect that you're being attacked and then try to block it any more. The sophistication of attackers means that if they can get to your web application from the Internet, they can attack application and OS in very rapid succession looking for known vulnerabilities. While "script kiddie" is a phrase of scorn in the hacker community, the fact is that running a scripted attack to see if there are any easy penetrations is simple these days, and script kiddies are as real a threat as full-on, high-skill hackers. Particularly if you don't patch on the day a vulnerability is announced, for any reason.

(Picture courtesy of Wikipedia)

Let's start talking about detecting malevolent connections before they touch your server, about asking for login credentials before they can footprint what OS you are running, and about sending those who are not trusted off to a completely different server, isolated from the core datacenter network. While we're at it, let's start talking about an interface to the public Internet that can withstand huge DDoS and 3DoS attacks without failing, so that not only is the attack averted, it never actually makes it to the server it was intended for, and is shunted off to a different location and/or dropped. (A toy sketch of this kind of pre-server triage follows below.)
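As a purely illustrative sketch of that idea (the reputation prefixes, rate threshold, and pool names are invented, and this is not a description of any product's behavior), the triage logic amounts to deciding, before a request ever reaches the real servers, whether a client should be dropped, challenged in isolation, or passed through:

```python
# Toy sketch of "block the attack vector, not the attacker": triage a connection
# before it ever touches the real application servers. Thresholds and lists are
# invented for illustration only.

KNOWN_BAD_NETWORKS = {"198.51.100."}      # e.g. a reputation feed, simplified to prefixes
RATE_LIMIT_PER_MINUTE = 600               # arbitrary per-client ceiling
QUARANTINE_POOL = "isolated-challenge-servers"
PRODUCTION_POOL = "application-servers"

def triage(client_ip, requests_last_minute, authenticated):
    """Decide where (or whether) to send a new connection."""
    if any(client_ip.startswith(prefix) for prefix in KNOWN_BAD_NETWORKS):
        return "drop"                                  # never reaches the datacenter core
    if requests_last_minute > RATE_LIMIT_PER_MINUTE:
        return "drop"                                  # absorb the flood at the edge
    if not authenticated:
        return QUARANTINE_POOL                         # ask for credentials first, away
                                                       # from the real servers
    return PRODUCTION_POOL

print(triage("198.51.100.23", 5, False))   # drop
print(triage("203.0.113.7", 2000, True))   # drop
print(triage("203.0.113.7", 12, False))    # isolated-challenge-servers
print(triage("203.0.113.7", 12, True))     # application-servers
```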
Just like force multipliers in the military world, these channel traffic the way you want, stop the attack before it gets rolling, and leave your servers and security staff free to worry about other things. Like serving legitimate customers.

It really is easy as a security professional to get cynical. After all, it is the information security professional's job to deal with ne'er-do-wells all of the time. And to play the bad cop whenever the business or IT has a "great new idea". Between the two it could drag you down. But if you have these two force multipliers in place, more of those great ideas can get past you because you have a solid wall of protection in place. In fact, add in a Web Application Firewall (WAF) for added protection at the application layer, and you've got a solid architecture that will allow you to be more flexible when a "great idea" really sounds like one. And it might just return some optimism, because the bad guys will have fewer avenues of attack, and you'll feel just that bit ahead of them.

If information technology is undervalued in the organization, information security is really undervalued. But from someone who knows: thank you. It's a tough job that has to be approached from a "we stopped them today" perspective, and you're keeping us safe – from the bad guys, and often from ourselves. I've done it, and I'm glad you're doing it. Hopefully technological advances will force you to do less that resembles this picture.

DISCLAIMER: Yes, F5 makes products that vaguely fill all of the spaces I mention above. That doesn't mean no one else does. For some of the spaces, anyway. This blog was inspired by a whitepaper I'm working on, so it's no surprise the areas top-of-mind while writing it are things we do. That doesn't make them bad ideas; in fact, I would argue the opposite. It makes them better ideas than fluff thrown out there to attract you with no solutions available.

PS: Trying out a new "Related Articles and Blogs" plug-in that Lori found. Let me know if you like the results better.

Related Articles and Blogs:
F5 at RSA: Multilayer Security without Compromise
Making Security Understandable: A New Approach to Internet Security
Committing to Overhead: Proceed With Caution.
F5 Enables Mobile Device Management Security On-Demand
RSA 2012 - Interview with Jeremiah Grossman
RSA 2012 - BIG-IP Data Center Firewall Solution
RSA 2012 - F5 MDM Solutions
The Conspecific Hybrid Cloud
Why BYOD Doesn't Always Work In Healthcare
Federal Cybersecurity Guidelines Now Cover Cloud, Mobility

Network Virtualization Reality Check.
There are quite a few pundits out there who would like to convince you that a purely virtual infrastructure is the wave of the future. Most of them have a bias driving them to this conclusion, and they're hoping you'll overlook it. Others just want to see everything virtualized because they're aware of the massive benefits that server – and even, in most cases, desktop – virtualization has brought to the enterprise. But there's always a caveat with people who look ahead and see One True Way. The current state of high tech rarely allows for a single architectural solution to emerge, if for no other reason than the existence of a preponderance of legacy code, devices, etc. Ask anyone in storage networking about that. There have been several attempts at One True Way to access your storage. Unfortunately for those who propound them, the market continues to purchase what's best for its needs, and that varies greatly based upon the needs of an organization – or even an application within an organization.

Network appliances – software running on Commercial Off The Shelf (COTS) server hardware – have been around forever. F5 BIG-IP devices used to fall into this category, and like most networking companies that survive their first few years, we eventually created purpose-built hardware to handle the networking speeds required of a high-performance device. The software IP stacks available weren't fast enough, and truth be told, the other built-in bottlenecks of commodity hardware were causing performance problems too. And that, in a nutshell, is why network virtualization everywhere will not be the future.

There are certainly places where virtualized networking gear makes sense – like in the cloud, where you don't have physical hardware deployed. But there are places – like your primary datacenter – where it does not. The volume that has to be supported on a VM – which will have at least two functional operating systems (the VM host and the VM client) between the code and the hardware, and physical hardware that is more than likely shared with other client images – is just not feasible to handle in many situations. You can scale out, that is truth, but how many VMs equal a physical box? Because it's not just the cost of the VMs; the server they're residing on costs money to acquire and maintain too, and as you scale out it takes more and more of them. Placing a second instance of a VM on the server to alleviate a problem with network throughput would be… Counterproductive, after all. (A back-of-the-envelope sketch of that scale-out math follows below.)
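As a back-of-the-envelope illustration of that scale-out question (every number here is invented purely to show the arithmetic, not a benchmark of any product):

```python
# Invented numbers, purely to show the shape of the scale-out math:
# how many virtual ADC instances, and how many host servers, does it take to
# match one purpose-built appliance, and what does that acquisition cost?

import math

APPLIANCE_GBPS = 40          # hypothetical hardware ADC throughput
VADC_GBPS = 4                # hypothetical throughput of one virtual ADC instance
VADC_PER_HOST = 2            # instances per host before they contend for the same NICs/CPU

APPLIANCE_COST = 90_000      # illustrative acquisition costs only
VADC_LICENSE_COST = 8_000
HOST_COST = 12_000

instances = math.ceil(APPLIANCE_GBPS / VADC_GBPS)
hosts = math.ceil(instances / VADC_PER_HOST)
scale_out_cost = instances * VADC_LICENSE_COST + hosts * HOST_COST

print(f"{instances} virtual instances on {hosts} hosts "
      f"to match one {APPLIANCE_GBPS} Gbps appliance")
print(f"scale-out acquisition: ${scale_out_cost:,} vs appliance: ${APPLIANCE_COST:,}")
```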
(Image courtesy of iPadWalls.com)

So there are plenty of reasons to make use of a hybrid environment in networking architecture, and those reasons aren't going away any time soon. So as I often say, treat pundits who are trying to tell you there is only one wave of the future with a bit of skepticism; they normally have a vested interest in seeing a particular solution be the One True Way. Just like in the storage networking space, ignore those voices that don't suit your needs and choose solutions that are going to address your architecture and solve your problems. And they'll eventually stop trying to tell you what to do, because they'll realize the futility of doing so. And you'll still be rocking the house, because in the end it is about you, serving the needs of the business, in the manner that is most efficient.

*** Disclaimer: yes, F5 sells both physical and virtual ADCs, which are in the category "network infrastructure", but I don't feel that creates a bias in my views; it simply seems odd to me to claim that all solutions are best served by one or the other. Rather, I think that F5 in general, like me in particular, sees the need for both types of solutions and is fulfilling that need. Think of it like this… I reject One True Way in my Roleplaying, and I reject it in my technology. The two things I most enjoy, so working at F5 isn't the cause of my belief, just a happy coincidence.

F5 Friday: When Firewalls Fail…
New survey shows firewalls falling to application and network DDoS with alarming frequency…

With the increasing frequency of successful DDoS attacks there have come a few studies focusing on organizational security posture – readiness, awareness, and incident rate, as well as the costs of successful attacks. When Applied Research conducted a study this fall on the topic, it came back with some expected results but also uncovered some disturbing news – firewalls fail. Often. More often, in fact, than we might like to acknowledge. That's troubling because it necessarily corresponds to the success rate of attacks and, interestingly, the increasing success of multi-layer attacks. The results were not insignificant: 36% of respondents indicated failure of a firewall due to an application layer DDoS attack, while 42% indicated failure as a result of a network layer DDoS attack. That makes the conclusion of the 11 in 12 who said traditional safeguards are not enough a reasonable one.

There is a two-part challenge in addressing operational risk when it comes to mitigating modern attacks. First, traditional firewalls aren't able to withstand the flood of traffic being directed their way, and second, stateful firewalls – even with deep packet inspection capabilities – aren't adequately enabled to detect and respond to application layer DDoS attacks. Because of this, stateful firewalls are simply melting down in the face of overwhelming connections, and when they aren't, they're allowing the highly impactful application layer DDoS attacks to reach web and application services and shut them down. The result? An average cost to organizations of $682,000 in the past twelve months. Lost productivity (50%) and loss of data (43%) topped the sources of financial costs, but loss of revenue (31%) and loss of customer trust (30%) were close behind, with regulatory fines cited by 24% of respondents as a source of financial costs.

A new strategy is necessary to combat the new techniques of attackers. Today's modern threat stack spans the entire network stack – from layer one to layer seven. It is no longer enough to protect against one attack or even three; it's necessary to mitigate the entire multi-layer threat spectrum in a more holistic, intelligent way. Only 8% of respondents believe traditional stateful firewalls are enough to defend against the entire landscape of modern attacks. Nearly half see application delivery controllers as able to replace many or most traditional safeguards. Between one-third and one-half of respondents are already doing just that, with 100% of those surveyed discussing the possibility. While that may sound drastic, it makes sense to those who understand the strategic point of control the application delivery controller topologically occupies, and its ability to intercept, inspect, and interpret the context of every request – from the network to the application layers. Given that information, an ADC is eminently better positioned to detect and react to the application DDoS attacks that so easily bypass and ultimately overwhelm traditional firewall solutions. (A toy sketch of the kind of per-client, application-layer visibility involved follows below.)
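As a toy illustration of why application-layer context matters here (the threshold and window are invented, and this is not how any particular product implements detection), a connection-counting firewall sees nothing odd about a client that holds a few connections open, while an application-aware proxy can notice that the same client is issuing an abnormal number of requests:

```python
# Toy sliding-window tracker for application-layer request rates per client.
# A connection-counting firewall would see nothing unusual about a client that
# opens few connections but issues thousands of expensive requests over them.

import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10
MAX_REQUESTS_PER_WINDOW = 200           # invented threshold, purely for the sketch

_history = defaultdict(deque)           # client ip -> timestamps of recent requests

def record_request(client_ip, now=None):
    """Record one HTTP request; return True if the client looks like an L7 flood source."""
    now = time.monotonic() if now is None else now
    q = _history[client_ip]
    q.append(now)
    while q and now - q[0] > WINDOW_SECONDS:    # discard requests older than the window
        q.popleft()
    return len(q) > MAX_REQUESTS_PER_WINDOW

# Simulated burst: one client issuing 250 requests within a single second trips the check.
flagged = [record_request("203.0.113.50", now=i / 250) for i in range(250)]
print("flagged as suspicious:", flagged[-1])
```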
Certainly it's possible to redress application layer DDoS attacks with yet another point solution, but it has always been the case that every additional device through which traffic must pass between the client and the server introduces not only latency – which impedes optimal performance – but another point of failure. It is much more efficient in terms of performance, and provides a higher level of fault tolerance, to reduce the number of devices in the path between client and server. An advanced application delivery platform like BIG-IP, with an internally integrated, high-speed interconnect across network- and application-focused solutions, provides a single point at which application and network layer protections can be applied, without introducing additional points of failure or latency. The methods of attackers are evolving; shouldn't your security strategy evolve along with them?

2011 ADC Security Survey Resources:
F5 Networks 2011 ADC Security Study – Findings Report
2011 ADC Security Study
2011 ADC Security Study – Infographic
Study Finds Traditional Security Safeguards Failing, Application Delivery Controllers Viewed as an Effective Alternative