Virtualization
Virtual Patching: What is it and why you should be doing it
Yesterday I was privileged to co-host a webinar with WhiteHat Security's Jeremiah Grossman on preventing SQL injection and cross-site scripting using a technique called "virtual patching". While I was familiar with F5's partnership with WhiteHat and our integrated solution, I wasn't familiar with the term.

Virtual patching should put an end to the endless religious warring that goes on between the secure coding and web application firewall camps whenever the topic of web application security is raised. The premise of virtual patching is that a web application firewall is not, I repeat, is not a replacement for secure coding. It is, in fact, an augmentation of existing security systems and practices that enables secure development to occur without being rushed or outright ignored in favor of pushing a fix out the door.

"The remediation challenges most organizations face are the time consuming process of allocating the proper personnel, prioritizing the tasks, QA / regression testing the fix, and finally scheduling a production release." -- WhiteHat Security, "WhiteHat Website Security Statistic Reports", December 2008

The WhiteHat report goes on to discuss the average number of days it took for organizations to address the top five urgent - not critical, not high, but urgent - severity vulnerabilities discovered. The fewest number of days to resolve a vulnerability (SQL injection) was 28 in 2008, which is actually an improvement over previous years. 28 days. That's a lifetime on the Internet when your site is vulnerable to exploitation and attackers are massing at the gates faster than ants to a picnic. But you can't rush finding and fixing the vulnerability, and shutting down the web application may not be an option at all, especially if you rely on that application as a revenue stream, as an integration point with partners, or as part of a critical business process with a strict SLA governing its uptime.

So do you leave it vulnerable? According to WhiteHat's data, apparently that's the decision made for many organizations given the limited options. The heads of many security professionals just exploded. My apologies if any of the detritus mussed your screen.

If you're one of the ones whose head is still intact, there is a solution. Virtual patching provides the means by which you can prevent the exploitation of the vulnerability while it is addressed through whatever organizational processes are required to resolve it. Virtual patching is essentially the process of putting in place a rule on a web application firewall to prevent the exploitation of a vulnerability. This process is oftentimes a manual one, but in the case of WhiteHat and F5 it has been made as easy as clicking a button. When WhiteHat's Sentinel, which provides vulnerability scanning as a service, uncovers a vulnerability, the operator (that's you) can decide to virtually patch the hole by adding a rule to the appropriate policy on F5's BIG-IP Application Security Manager (ASM) with the click of a button. Once the vulnerability has been addressed, you can remove the rule from the policy or leave it in place, as is your wont. It's up to you.

Virtual patching provides the opportunity to close a vulnerability quickly but doesn't require that you abandon secure coding practices. Virtual patching actually enables and encourages secure coding by giving developers some breathing room in which to implement a thorough, secure solution to the vulnerability.
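To make that concrete, here's a minimal, hand-rolled sketch of what a virtual patch might look like as an iRule. The URI and parameter name are hypothetical placeholders, and in the WhiteHat-F5 integration the equivalent ASM policy rule is generated for you rather than written by hand:

when HTTP_REQUEST {
    # Hypothetical virtual patch: the scanner flagged SQL injection in the
    # "id" parameter of /account/view.php. Block suspicious values here
    # until the code-level fix is deployed, then remove this rule.
    if { ([HTTP::path] eq "/account/view.php") and ([URI::query [HTTP::uri] "id"] contains "'") } {
        HTTP::respond 403 content "Request blocked by virtual patch."
    }
}

Checking for a single quote is obviously a crude filter - the point is that a rule like this can be put in place (and later removed) in minutes, independent of the application's release cycle.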
It isn't an either-or solution; it's both, leveraging both approaches to provide the most comprehensive security coverage possible. And given statistics regarding the number of sites infected of late, that's something everyone should be able to get behind.

Virtual patching as a technique does not require WhiteHat or F5, but other solutions will require a manual process to put rules in place to address vulnerabilities. The advantage of the WhiteHat-F5 solution is its tight integration via iControl, its ability to immediately close discovered security holes, and of course the lengthy list of cool security options and features available with ASM to further secure web applications. You can read more about the integration between WhiteHat and F5 here or here, or view a short overview of the way virtual patching works between Sentinel and ASM.

Building an elastic environment requires elastic infrastructure
One of the reasons behind some folks pushing for infrastructure as virtual appliances is the on-demand nature of a virtualized environment. When network and application delivery infrastructure hits capacity in terms of throughput - regardless of the layer of the application stack at which it happens - it's frustrating to think you might need to upgrade the hardware rather than just add more compute power via a virtual image. The truth is that this makes sense. The infrastructure supporting a virtualized environment should be elastic. It should be able to dynamically expand without requiring a new network architecture, a higher performing platform, or new configuration. You should be able to just add more compute resources and walk away. The good news is that this is possible today. It just requires that you carefully consider your choices in network and application network infrastructure when you build out your virtualized infrastructure.

ELASTIC APPLICATION DELIVERY INFRASTRUCTURE

Last year F5 introduced VIPRION, an elastic, dynamic application delivery networking platform capable of expanding capacity without requiring any changes to the infrastructure. VIPRION is a chassis-based, bladed application delivery controller, and its bladed system behaves much the same way a virtualized equivalent would. Say you start with one blade in the system, and soon after you discover you need more throughput and more processing power. Rather than bring online a new virtual image of such an appliance to increase capacity, you add a blade to the system and voila! VIPRION immediately recognizes the blade and simply adds it to its pools of processing power and capacity. There's no need to reconfigure anything; VIPRION essentially treats each blade like a virtual image and automatically distributes requests and traffic across the network and application delivery capacity available on the blade. Just like a virtual appliance model would, but without the concerns about the reliability and security of the platform.

Traditional application delivery controllers can also be scaled out horizontally to provide similar functionality and behavior. By deploying additional application delivery controllers in what is often called an active-active model, you can rapidly deploy and synchronize the configuration of the master system to add more throughput and capacity. Meshed deployments comprising more than a pair of application delivery controllers can also provide additional network compute resources beyond what is offered by a single system. The latter option (the traditional scaling model) requires more work to deploy than the former (VIPRION), simply because it requires additional hardware and all the overhead such a solution entails. The elastic option with bladed, chassis-based hardware is really the best option in terms of elasticity and the ability to grow on demand as your infrastructure needs increase over time.

ELASTIC STORAGE INFRASTRUCTURE

Often overlooked in the network diagrams detailing virtualized infrastructures is the storage layer. The increase in storage needs in a virtualized environment can be overwhelming, as there is a need to standardize the storage access layer so that virtual images of applications can be deployed in a common, unified way regardless of which server they might need to be executing on at any given time. This means a shared, unified storage layer on which to store images that are necessarily large. This unified storage layer must also be expandable.
As more applications and associated images are made available, storage needs increase. What's needed is a system in which additional storage can be added in a non-disruptive manner. If you have to modify the automation and orchestration systems driving your virtualized environment whenever additional storage is added, you've lost some of the benefits of a virtualized storage infrastructure.

F5's ARX series of storage virtualization solutions provides that layer of unified storage infrastructure. By normalizing the namespaces through which files (images) are accessed, the systems driving a virtualized environment can be assured that images are available via the same access method regardless of where the file or image is physically located. Virtualized storage infrastructure systems are dynamic; additional storage can be added to the infrastructure and "plugged in" to the global namespace to increase the storage available in a non-disruptive manner. An intelligent virtualized storage infrastructure can make the use of the available storage even more efficient by tiering it. Images and files accessed more frequently can be stored on fast, tier one storage so they load and execute more quickly, while less frequently accessed files and images can be moved to less expensive and perhaps less performant storage systems.

By deploying elastic application delivery network infrastructure instead of virtual appliances you maintain stability, reliability, security, and performance across your virtualized environment. Elastic application delivery network infrastructure is already dynamic, and offers a variety of options for integration into automation and orchestration systems via standards-based control planes, many of which are nearly turn-key solutions. The reasons why some folks might desire a virtual appliance model for their application delivery network infrastructure are valid. But the reality is that the elasticity and on-demand capacity offered by a virtual appliance are already available in proven, reliable hardware solutions today that do not require sacrificing performance, security, or flexibility.

Related articles by Zemanta:
How to instrument your Java EE applications for a virtualized environment
Storage Virtualization Fundamentals
Automating scalability and high availability services
Building a Cloudbursting Capable Infrastructure
EMC unveils Atmos cloud offering
Are you (and your infrastructure) ready for virtualization?

Would you risk $31,000 for milliseconds of application response time?
Keep in mind that the time it takes a human being to blink is an average of 300 - 400 milliseconds.

I just got back from Houston, where I helped present on F5's integration with web application security vendor WhiteHat - a.k.a. virtual patching. As almost always happens whenever anyone mentions the term web application firewall, the question of performance degradation was raised. To be precise: How much will a web application firewall degrade performance? Not will it, but how much will it, degrade performance.

My question back to those of you with the same question is, "How much are you willing to accept to mitigate the risk?" Or perhaps more precisely, how much are your users and customers - and therefore your business - willing to accept to mitigate the risk? Because in most cases today they are really the target, and thus bearing the risk, of today's web application attacks. As Jeremiah Grossman often points out, mass SQL injection and XSS attacks are not designed to expose your data; they're designed to exploit your customers and users by infecting them with malware built to steal their personal data. So the people who are really bearing the burden of risk when browsing your site are your customers and users. It's their risk we're playing with more than our own. So the question has to be asked with them in mind: how much latency are your users and customers willing to accept in order to mitigate the risk of being infected and the potential for becoming the next statistic in one of the many fraud-oriented organizations tracking identity theft?

SIX OF ONE, HALF-DOZEN OF THE OTHER

No matter where you implement a security strategy that involves the deep inspection of application data, you are going to incur latency. If you implement it in code, you're increasing the amount of time it takes to execute on the server - which increases response time. If you implement it in a web application firewall, you're increasing the amount of time it takes to get to the server - which will undoubtedly increase response time. The interesting thing is that this time is generally measured in milliseconds, and is barely noticeable to the user. It literally happens in the blink of an eye and is only obvious to someone tasked with reporting on application performance, who is used to dealing with network response times that are almost always sub-second. The difference between 2 ms and 5 ms is not noticeable to the human brain. The impact of this level of latency is almost unnoticeable to the end user and does not radically affect his or her experience one way or another. Even 10 ms - or 100 ms - is still sub-second latency and is not noticeable unless it appears on a detailed application performance report.

But let's say that a web application firewall did increase latency to a noticeable degree. Let's say it added 2 seconds to the overall response time. Would the user notice? Perhaps. The question then becomes, are they willing to accept that in exchange for better protection against malicious code? Are they willing to accept that in exchange for not becoming the next victim of identity theft due to malicious code that was inserted into your database via an SQL injection attack and delivered to them the next time they visited your site? Are they willing to accept 5 seconds? 10 seconds? Probably not (I wouldn't either), but what did they say? If you can't answer, it's probably because you haven't asked. That's okay, because no one has to my knowledge.
It's not a subject we freely discuss with customers, because we assume they are, for the most part, ignorant of the very risks associated with just visiting our sites.

THE MYTH OF SUB-SECOND LATENCY

Too, we often forget that sub-second latency does not really matter in a world served up by the Internet. We're hard-wired to the application via the LAN and expect it to instantly appear on our screens the moment we try to access it or hit "submit". We forget that in the variable, crazy world of the Internet the user is often subjected to myriad events of which they are blissfully unaware that affect the performance of our sites and applications. They do not expect sub-second response times because experience tells them it's going to vary from day to day and hour to hour, and much of the reason for it is out of their - and our - control.

Do we want the absolute best performance for our customers and users? Yes. But not necessarily at the risk of leaving them - and our data - exposed. If we were really worried about performance we'd get rid of all the firewalls and content scanners and A/V gateways and IPS and IDS and just deliver applications raw across the Internet, the way nature intended them to be delivered: naked and bereft of all protection. But they'd be damned fast, wouldn't they?

We don't do that, of course, because we aren't fruitcakes. We've weighed the benefits of the protection afforded by such systems against the inherent latency incurred by the solutions and decided that the benefits outweighed the risk. We need to do the same thing with web application firewalls, and really any security solution that needs to sit between the application and the user; we need to weigh the risk against the benefit. We first need to really understand what the risks are for us and our customers, and then make a decision how to address that risk - either by ignoring it or mitigating it. But we need to stop fooling ourselves into discarding possible solutions over what is almost always a non-issue.

We may think our customers or users will raise hell if the response time of their favorite site or application increases by 5 or 50 or even 500 ms, but will they really? Will they really even notice it? And if we asked them, would they accept it in exchange for better protection against identity theft? Against viruses and worms? Against key-loggers and the cost of a trip to the hospital when their mother has a heart attack because she happened to look over as three hundred pop-ups full of porn images filled the screen, because they were infected with malware by your site?

We need to start considering not only the risk to our own organizations and the customer data we must protect, but also to our customers' and users' environments, and then evaluate solutions that are going to effectively address that risk in a way that satisfies everyone. To do that, we need to involve the customer and the business more in the decision-making process and stop focusing only on the technical aspects of how much latency might be involved or whether we like the technology or not.

Go ahead. Ask your customers and users if they're willing to risk $31,000 - the estimated cost of identity theft today to an individual - to save 500 milliseconds of response time. And when they ask how long that is, tell them the truth: "about the time it takes to blink an eye". As potentially one of your customers or visitors, I'll start out your data set by saying, "No. No, I'm not."
Server Virtualization versus Server Virtualization
No, that's not a typo. That's the reality of virtualization terminology today: a single term means multiple technology implementations. Server virtualization is used to describe at least two (and probably more) types of virtualization:

1. Server virtualization a la load balancing and application delivery
2. Server virtualization a la VMWare and Microsoft

Server virtualization as implemented by load balancers/application delivery controllers is an M:1 virtualization scheme. An application delivery controller like BIG-IP can make many servers look like one server: a virtual server. This type of server virtualization is used to architect better performing application infrastructures, to provide load balancing, high availability, and failover capabilities, to seamlessly scale applications horizontally, and to centralize security and acceleration functions.

Server virtualization as implemented by virtualization folks like VMWare and Microsoft is actually more properly called operating system virtualization, because it's really virtualizing at the operating system level, not the server level. Regardless of what you call it, the second form of server virtualization implements a 1:M scheme, making one physical server appear to be many.

What you have is a very interesting situation. You have a technology that makes one server appear to be many (operating system virtualization) and another technology that makes many servers appear to be one (server virtualization). I'm sure you've guessed that this makes these two types of virtualization extremely complementary. Basically, you can make all those virtual servers created via operating system virtualization appear to be one server using server virtualization. This makes it easier to scale up an application dynamically, because clients are talking to the virtual server on the application delivery controller, and it talks to the virtual servers deployed on the physical servers inside the data center. The number of servers inside the data center can change without ever affecting the security, acceleration, and availability of the application, because those functions are centralized on the application delivery controller, and it can be automated to seamlessly add and remove the servers inside the data center.

There are more types of virtualization, at least six more, and they all fit into the big picture that is the next generation data center. For a great overview of eight of the most common categories of virtualization, check out this white paper.

Does your virtualization strategy create an SEP field?
There is a lot of hype around all types of virtualization today, with one of the primary drivers often cited being a reduction in management costs. I was pondering whether or not that hype was true, given the amount of work that goes into setting up not only the virtual image, but the infrastructure necessary to properly deliver the images and the applications they contain.

We've been using imaging technology for a long time, especially in lab and testing environments. It made sense then because a lot of work goes into setting up a server and the applications running on it before it's "imaged" for rapid deployment use. Virtual images that run inside virtualization servers like VMWare brought not just the ability to rapidly deploy a new server and its associated applications, but the ability to do so in near real-time. But it's not the virtualization of the operating system that really offers a huge return on investment; it's the virtualization of the applications packaged up in a virtual image that offers the most benefits. While there's certainly a lot of work that goes into deploying a server OS - the actual installation, configuration, patching, more patching, and licensing - there's even more work that goes into deploying an application, simply because they can be ... fussy. So once you have a server and application configured and ready to deploy, it certainly makes sense that you'd want to "capture" it so that it can be rapidly deployed in the future.

Without the proper infrastructure, however, the benefits can be drastically reduced. Four questions immediately come to mind that require some answers:

Where will the images be stored?
How will you manage the applications running on deployed virtual images?
What about updates and patches to not only the server OS but the applications themselves?
What about changes to your infrastructure?

The savings realized by reducing the management and administrative costs of building, testing, and deploying an application in a virtual environment can be negated by a simple change to your infrastructure, or by the need to upgrade or patch the application or operating system. Because the image is basically a snapshot, that snapshot needs to change as the environment in which it runs changes. And the environment means more than just the server OS; it means the network, application, and delivery infrastructure.

Addressing the complexity involved in such an environment requires an intelligent, flexible infrastructure that supports virtualization. And not just OS virtualization, but other forms of virtualization such as server virtualization and storage or file virtualization. There's a lot more to virtualization than just setting up a VMWare server, creating some images, and slapping each other on the back for a job well done. If your infrastructure isn't ready to support a virtualized environment then you've simply shifted the costs - and responsibility - associated with deploying servers and applications to someone else and, in many cases, several someone elses. If you haven't considered how you're going to deliver the applications on those virtual images then you're in danger of simply shifting the costs of delivering applications elsewhere.
Without a solid infrastructure that can support the dynamic environment created by virtual imaging, the benefits you think you're getting quickly diminish as other groups are suddenly working overtime to configure and manage the rest of the infrastructure necessary to deliver those images and applications to servers and users. We often talk about silos in terms of network and application groups, but virtualization has the potential to create yet another silo, and that silo may be taller and more costly than anyone has yet considered.

Virtualization has many benefits for you and your organization. Consider carefully whether your infrastructure is prepared to support virtualization, or risk discovering that implementing a virtualized solution is creating an SEP (Somebody Else's Problem) field around delivering and managing those images.

Making the most of your IP address space with layer 7 switching
Organizations trying to make their presence known on the Internet today run into an interesting dilemma - there just aren't enough IP addresses to go around. Long gone are the days when any old organization could nab a huge chunk of a Class A or even Class B network. Today they're relegated to a small piece of a Class C, which is often barely enough to run their business. This is especially true for smaller businesses, who are lucky if they can get a /29 at a reasonable rate. While we wait for IPv6 to be fully adopted and solve most of this problem (a solution that seems to always be on the horizon but is never fully realized), there is something you can do to resolve this situation right now. That something is layer 7 - or URI - switching, which is the topic on which a reader wrote for help this morning.

A reader asks... "Using the iRule we can choose the pool based on the URI, but how to choose the pool based on URL?"

It's a great question! Choosing pools based on URI, i.e. URI switching, is something we talk a lot about, but we don't always talk about the other, less exciting HTTP headers upon which you can base your request routing decisions. Basically, we're talking about hosting support.example.com and sales.example.com on the same IP address (as far as the outside world is concerned) but physically deploying them on separate servers inside the organization/data center. Because both hosts appear in DNS entries to be the same IP address, we can use layer 7 switching to get the requests to the right host inside the organization. (On a side note, this is a function made possible by "server virtualization", one of the umpteen types of virtualization out there today and supported by application delivery controllers and load balancers since, oh, the mid 1990s.)

Using iRules you can route requests based on any HTTP header. You can also route requests based on anything in the payload, i.e. the application message/request, but right now we're just going to look at the HTTP header options, as there are more than enough to fill up this post today. What's cool about iRules is that you can switch on any HTTP header, and that includes custom headers, cookies, and even the HTTP version. If it's a header, you can choose a pool based on the value of the header. Here's a quick iRule solution to the problem of switching based on the host portion of a URL. The general flow of this iRule is:

when HTTP_REQUEST {
    # Select a pool based on the (lowercased) value of the Host header
    switch [string tolower [HTTP::host]] {
        "support.example.com" { pool pool_1 }
        "sales.example.com"   { pool pool_2 }
    }
}

If you'd like to switch on, say, the HTTP request method, you could just replace the HTTP::host portion with HTTP::method and adjust the values upon which you are switching to "get" and "post" and "delete". iRules includes an HTTP class that makes it easy to retrieve the value of the most commonly accessed HTTP headers, such as host, path, method, and version. But you can use the HTTP::header method to extract any HTTP header you'd like.
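For instance, a minimal sketch of that method-based variation might look like the following (the pool names here are hypothetical placeholders for pools defined in your own configuration):

when HTTP_REQUEST {
    # Route on the HTTP request method instead of the Host header
    switch [string tolower [HTTP::method]] {
        "get"    { pool pool_read }
        "post"   { pool pool_write }
        "delete" { pool pool_write }
        default  { pool pool_default }
    }
}

The same pattern works for any other header: swap in [HTTP::header "some-header-name"] as the switch expression and adjust the match values accordingly.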
The HTTP class includes, among others:

HTTP::host - Returns the value of the HTTP Host header.
HTTP::cookie - Queries for or manipulates cookies in HTTP requests and responses.
HTTP::is_keepalive - Returns a true value if this is a Keep-Alive connection.
HTTP::is_redirect - Returns a true value if the response is a redirect.
HTTP::method - Returns the type of HTTP request method.
HTTP::password - Returns the password part of HTTP basic authentication.
HTTP::path - Returns or sets the path part of the HTTP request.
HTTP::payload - Queries for or manipulates HTTP payload information.
HTTP::query - Returns the query part of the HTTP request.
HTTP::uri - Returns or sets the URI part of the HTTP request.
HTTP::username - Returns the username part of HTTP basic authentication.
HTTP::version - Returns or sets the HTTP version of the request or response.

Even if you have a plethora of IP addresses available, architecting your application infrastructure is made easier by the capability to perform layer 7 switching on HTTP requests. It allows you to make better use of resources and to optimize servers for specific types of content. A server serving up only images can be specifically configured for binary image content, while other servers can be better optimized to serve up HTML and other types of content. Whether you have enough IP addresses or not, there's something to be gained in the areas of efficiency and simplification of your application infrastructure using layer 7 switching.

For a deeper dive into HTTP headers (and HTTP in general), check out the HTTP RFC specification.

Imbibing: Coffee

Videos from F5's recent Agility customer / partner conference in London
A week or so ago, F5 in EMEA held our annual customer / partner conference in London. I meant to do a little write-up sooner but after an incredibly busy conference week I flew to F5's HQ in Seattle and didn't get round to posting there either. So...better late than never? One of the things we wanted to do at Agility was take advantage of the DevCentral team's presence at the event. They pioneered social media as a community tool, kicking off F5's DevCentral community (now c. 100,000 strong) in something like 2004. They are very experienced and knowledgeable about how to use rich media to get a message across. So we thought we'd ask them to do a few videos with F5's customers and partners about what drives them and how F5 fits in. Some of them are below, and all of them can be found here.

A Rose By Any Other Name. Appliances Are More Than Systems.
One of the majors that Lori's and my oldest son is pursuing is philosophy. I've never been a huge fan of philosophy, but as he and Lori talked, I decided to find out more, and picked up one of The Great Courses on The Philosophy of Science to try and understand where philosophy split off from hard sciences and became irrelevant or an impediment. I wasn't disappointed, for at some point in the fifties, a philosopher posed the "If you're a chicken, you assume when the farmer comes that he will bring food, so the day he comes with an axe, you are surprised" question. Philosophers know this tale, and to them, it disproves everything, for by his argument, all empirical data is suspect, and all of our data is empirical at one level or another. At that point, science continued forward, and philosophy got completely lost. The instructor for the class updated the example to "what if the next batch of copper pulled out of the ground doesn't conduct electricity?" This is where it shows that either (a) I'm a hard scientist, or (b) I'm too slow-witted to hang with the philosophers, because my immediate answer (and the one I still hold today) was "Duh. It wouldn't be called copper."

For the Shakespearian lament "that which we call a rose by any other name would smell as sweet" has a corollary: "Any other thing, when called a rose, would not smell as sweet." And that's the truth. If we pulled a metal out of the ground that looked like copper but didn't share this property or that property, then while philosophers were slapping each other on the back and seeing vindication for years of arguments, scientists would simply declare it a new material and give it a name. Nothing in the world would change.

This is true of appliances too. Once you virtualize an appliance, you have two things - a virtualized appliance AND a virtual computer. This is significant, because while people have learned how many virtuals can be run on server X given their average and peak loads, the same doesn't yet appear to be true about virtual appliances. I've talked to some IT shops, and seen email exchanges from others, that are throwing virtual appliances - be they a virtualized ADC like BIG-IP LTM VE from F5 or a virtualized Cloud Storage Gateway from someone like Nasuni - onto servers without considering their very special needs as a "computer". In general, you can't consider them to be "applications" or "servers", as their resource utilization is certainly very different than your average app server VM. These appliances are built for a special purpose, and both of the ones I just used for reference will use a lot more networking resources than your average server, just being what they are.

When deploying virtualized appliances, think about what the appliance is designed to do, and start with it on a dedicated server. This is non-intuitive, and kind of defeats the purpose, but it is a temporary situation. Note that I said "Start with". My reasoning is that the process of virtualizing the appliance changed it, and when it was an appliance, you didn't care about its performance as long as it did the job.
By running it on dedicated hardware, you can evaluate what resources it uses in a pristine environment. Then, when you move it onto a server with multiple virtual machines running, you know what the "best case" is, so you'll know just how much your other VMs are impacting it, and you have a head start troubleshooting problems - the resource it used the most on dedicated hardware is certainly the most likely to be your problem in a shared environment.

Appliances are generally more susceptible to certain resource sharing scenarios than a general-service server is. These devices were designed to perform a specific job and have been optimized to do that job. Putting one on hardware with other VMs - even other instances of the appliance - can cause it to degrade in performance, because the very thing it is optimized for is the resource that it needs the most, be it memory, disk, or networking. Even CPUs, depending upon what the appliance does, can be a point of high contention between the appliance and whatever other VM is running.

In the end, yes, they are just computers. But you bought them because they were highly specialized computers, and when virtualized, that doesn't change. Give them a chance to strut their stuff on hardware you know, without interference, and only after you've taken their measure on your production network (or a truly equivalent test network, which is rare), start running them on machines with select VMs. Even then, check with your vendor. Plenty of vendors don't recommend that you run a virtualized appliance that was originally designed for high performance on shared hardware at all. Since doing so against your vendor's advice can create support issues, check with them first, and if you don't like the answer, pressure them either for details of why, or to change their advice. Yes, that includes F5. I don't know the details of our support policy, but both LTM-VE and ARX-VE are virtualized versions of high-performance systems, so it wouldn't surprise me if our support staff said "first, shut down all other VMs on the hardware...", but since we have multi-processing on VIPRION, it wouldn't surprise me if they didn't either.

It is no different than any other scenario when it comes down to it: know what you have and, unlike the philosophers, expect it to behave tomorrow like it does today; anything else is an error of some kind.

Do you control your application network stack? You should.
Owning the stack is important to security, but it's also integral to a lot of other application delivery functions. And in some cases, it's downright necessary.

Hoff rants with his usual finesse in a recent posting with which I could not agree more. Not only does he point out the wrongness of equating SaaS with "The Cloud", but he also points out the importance of "owning the stack" to security.

Those that have control/ownership over the entire stack naturally have the opportunity for much tighter control over the "security" of their offerings. Why? because they run their business and the datacenters and applications housed in them with the same level of diligence that an enterprise would. They have context. They have visibility. They have control. They have ownership of the entire stack.

Owning the stack has broader implications than just security. The control, visibility, and context-awareness implicit in owning the stack provide much more flexibility in all aspects of the delivery of applications. Whether we're talking about emerging or traditional data center architectures, the importance of owning the application networking stack should not be underestimated. The arguments over whether virtualized application delivery makes more sense in a cloud computing-based architecture fail to recognize that a virtualized application delivery network forfeits that control over the stack. While it certainly maintains some control at higher levels, it relies upon other software - the virtual machine, hypervisor, and operating system - which shares control of that stack and, in fact, processes all requests before they reach the virtual application delivery controller. This is quite different from a hardened application delivery controller that maintains control over the stack and provides the means by which security, network, and application experts can tweak, tune, and exert that control in myriad ways to better protect their unique environment. If you don't completely control layer 4, for example, how can you accurately detect and thus prevent layer 4 focused attacks, such as denial of service and manipulation of the TCP stack? You can't. If you don't have control over the stack at the point of entry into the application environment, you are risking a successful attack.

As the entry point into the application, whether it's in "the" cloud, "a" cloud, or a traditional data center architecture, a properly implemented application delivery network can offer the control necessary to detect and prevent myriad attacks at every layer of the stack, without concern that an OS- or hypervisor-targeted attack will manage to penetrate before the application delivery network can stop it. The visibility, control, and contextual awareness afforded by application delivery solutions also provide the means by which finer-grained control over protocols, users, and applications may be exercised in order to improve performance at the network and application layers. As a full proxy implementation, these solutions are capable of enforcing compliance with RFCs for protocols up and down the stack, implementing additional technological solutions that improve the efficiency of TCP-based applications, and offering customized solutions through network-side scripting that can be used to immediately address security risks and architectural design decisions.
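As a rough illustration of that kind of network-side scripting (a sketch only - the allowed methods and the size limit are assumptions you would tune for your own applications), an iRule at the point of entry can enforce policy before a request ever reaches a server:

when HTTP_REQUEST {
    # Allow only the methods this application actually uses
    if { ([HTTP::method] ne "GET") and ([HTTP::method] ne "POST") } {
        reject
        return
    }
    # Refuse oversized request bodies before they consume server resources
    if { [HTTP::header exists "Content-Length"] and [HTTP::header "Content-Length"] > 1048576 } {
        HTTP::respond 413 content "Request entity too large"
    }
}

The point isn't that these two checks are sufficient; it's that they live at the point of entry, under your control, and can be changed in minutes without touching the application.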
The importance of owning the stack, particularly at the perimeter of the data center, cannot and should not be underestimated. The loss of control, the addition of processing points at which the stack may be exploited, and the inability to change the very behavior of the stack at the point of entry all come from putting into place solutions incapable of controlling the stack. If you don't own the stack you don't have control. And if you don't have control, who does?