Server Virtualization
Server Virtualization versus Server Virtualization
No, that's not a typo. That's the reality of virtualization terminology today: a single term means multiple technology implementations. Server virtualization is used to describe at least two (and probably more) types of virtualization:

1. Server virtualization a la load balancing and application delivery
2. Server virtualization a la VMware and Microsoft

Server virtualization as implemented by load balancers/application delivery controllers is an M:1 virtualization scheme. An application delivery controller like BIG-IP can make many servers look like one server: a virtual server. This type of server virtualization is used to architect better-performing application infrastructures, to provide load balancing, high availability, and failover capabilities, to seamlessly scale applications horizontally, and to centralize security and acceleration functions.

Server virtualization as implemented by virtualization folks like VMware and Microsoft is more properly called operating system virtualization, because it's really virtualizing at the operating system level, not the server level. Regardless of what you call it, this second form of server virtualization implements a 1:M scheme, making one physical server appear to be many.

What you have is a very interesting situation: one technology makes one server appear to be many (operating system virtualization), and another makes many servers appear to be one (server virtualization). I'm sure you've guessed that this makes these two types of virtualization extremely complementary. Basically, you can make all those virtual servers created via operating system virtualization appear to be one server using server virtualization. This makes it easier to scale up an application dynamically, because clients talk to the virtual server on the application delivery controller, and it talks to the virtual servers deployed on the physical servers inside the data center. The number of servers inside the data center can change without ever affecting the security, acceleration, or availability of the application, because those functions are centralized on the application delivery controller, which can be automated to seamlessly add and remove the servers inside the data center.

There are more types of virtualization, at least six more, and they all fit into the big picture that is the next generation data center. For a great overview of eight of the most common categories of virtualization, check out this white paper.
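Going back to the M:1 side of the story for a moment: the sketch below is a minimal iRule, assuming a BIG-IP virtual server that already has a pool of back-end servers assigned. The pool, addresses, and log messages are illustrative placeholders, not anything from the original post. It does nothing but log the mapping, making visible that clients always hit the one virtual server while any of the many pool members may answer:

when CLIENT_ACCEPTED {
    # Clients always connect to the one address they know: the virtual server on the BIG-IP
    log local0. "client [IP::client_addr] connected to virtual server [IP::local_addr]"
}
when SERVER_CONNECTED {
    # Behind that single address, any one of the many pool members may answer
    log local0. "request load balanced to pool member [LB::server addr]:[LB::server port]"
}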
Does your virtualization strategy create an SEP field?

There is a lot of hype around all types of virtualization today, with one of the primary drivers often cited being a reduction in management costs. I was pondering whether or not that hype was true, given the amount of work that goes into setting up not only the virtual image, but the infrastructure necessary to properly deliver the images and the applications they contain.

We've been using imaging technology for a long time, especially in lab and testing environments. It made sense then because a lot of work goes into setting up a server and the applications running on it before it's "imaged" for rapid deployment use. Virtual images that run inside virtualization servers like VMware brought not just the ability to rapidly deploy a new server and its associated applications, but the ability to do so in near real time.

But it's not the virtualization of the operating system that really offers a huge return on investment; it's the virtualization of the applications packaged up in a virtual image that offers the most benefits. While there's certainly a lot of work that goes into deploying a server OS - the actual installation, configuration, patching, more patching, and licensing - there's even more work that goes into deploying an application, simply because applications can be ... fussy. So once you have a server and application configured and ready to deploy, it certainly makes sense that you'd want to "capture" it so that it can be rapidly deployed in the future.

Without the proper infrastructure, however, the benefits can be drastically reduced. Four questions immediately come to mind that require some answers:

1. Where will the images be stored?
2. How will you manage the applications running on deployed virtual images?
3. What about updates and patches to not only the server OS but the applications themselves?
4. What about changes to your infrastructure?

The savings realized by reducing the management and administrative costs of building, testing, and deploying an application in a virtual environment can be negated by a simple change to your infrastructure, or by the need to upgrade or patch the application or operating system. Because the image is basically a snapshot, that snapshot needs to change as the environment in which it runs changes. And the environment means more than just the server OS; it means the network, application, and delivery infrastructure.

Addressing the complexity involved in such an environment requires an intelligent, flexible infrastructure that supports virtualization - and not just OS virtualization, but other forms of virtualization such as server virtualization and storage or file virtualization. There's a lot more to virtualization than just setting up a VMware server, creating some images, and slapping each other on the back for a job well done. If your infrastructure isn't ready to support a virtualized environment, then you've simply shifted the costs - and responsibility - associated with deploying servers and applications to someone else and, in many cases, several someone elses. If you haven't considered how you're going to deliver the applications on those virtual images, then you're in danger of simply shifting the costs of delivering applications elsewhere.

Without a solid infrastructure that can support the dynamic environment created by virtual imaging, the benefits you think you're getting quickly diminish as other groups are suddenly working overtime to configure and manage the rest of the infrastructure necessary to deliver those images and applications to servers and users. We often talk about silos in terms of network and application groups, but virtualization has the potential to create yet another silo, and that silo may be taller and more costly than anyone has yet considered.

Virtualization has many benefits for you and your organization. Consider carefully whether your infrastructure is prepared to support virtualization, or risk discovering that implementing a virtualized solution is creating an SEP (Somebody Else's Problem) field around delivering and managing those images.
Making the most of your IP address space with layer 7 switching

Organizations trying to make their presence known on the Internet today run into an interesting dilemma - there just aren't enough IP addresses to go around. Long gone are the days when any old organization could nab a huge chunk of a Class A or even Class B network. Today they're relegated to a small piece of a Class C, which is often barely enough to run their business. This is especially true for smaller businesses, who are lucky if they can get a /29 at a reasonable rate. While we wait for IPv6 to be fully adopted and solve most of this problem (a solution that seems to always be on the horizon but never fully realized), there is something you can do to resolve this situation right now. That something is layer 7 - or URI - switching, which is the topic on which a reader wrote for help this morning.

A reader asks: "Using the iRule we can choose the pool based on the URI, but how to choose the pool based on URL?"

It's a great question! Choosing pools based on URI, i.e. URI switching, is something we talk a lot about, but we don't always talk about the other, less exciting HTTP headers upon which you can base your request routing decisions. Basically, we're talking about hosting support.example.com and sales.example.com on the same IP address (as far as the outside world is concerned) but physically deploying them on separate servers inside the organization/data center. Because both hosts appear in DNS entries to be the same IP address, we can use layer 7 switching to get the requests to the right host inside the organization. (On a side note, this is a function made possible by "server virtualization", one of the umpteen types of virtualization out there today and supported by application delivery controllers and load balancers since, oh, the mid 1990s.)

Using iRules you can route requests based on any HTTP header. You can also route requests based on anything in the payload, i.e. the application message/request, but right now we're just going to look at the HTTP header options, as there are more than enough to fill up this post today. What's cool about iRules is that you can switch on any HTTP header, and that includes custom headers, cookies, and even the HTTP version. If it's a header, you can choose a pool based on the value of the header.

Here's a quick iRule solution to the problem of switching based on the host portion of a URL. The general flow of this iRule is:

when HTTP_REQUEST {
    switch [string tolower [HTTP::host]] {
        "support.example.com" { pool pool_1 }
        "sales.example.com" { pool pool_2 }
    }
}

If you'd like to switch on, say, the HTTP request method, you could just replace the HTTP::host portion with HTTP::method and adjust the values upon which you are switching to "get" and "post" and "delete".

iRules includes an HTTP class that makes it easy to retrieve the value of the most commonly accessed HTTP headers, such as host, path, method, and version. But you can use the HTTP::header method to extract any HTTP header you'd like (a sketch using it follows the list below).

HTTP::host - Returns the value of the HTTP Host header.
HTTP::cookie - Queries for or manipulates cookies in HTTP requests and responses.
HTTP::is_keepalive - Returns a true value if this is a Keep-Alive connection.
HTTP::is_redirect - Returns a true value if the response is a redirect.
HTTP::method - Returns the type of HTTP request method.
HTTP::password - Returns the password part of HTTP basic authentication.
HTTP::path - Returns or sets the path part of the HTTP request.
HTTP::payload - Queries for or manipulates HTTP payload information.
HTTP::query - Returns the query part of the HTTP request.
HTTP::uri - Returns or sets the URI part of the HTTP request.
HTTP::username - Returns the username part of HTTP basic authentication.
HTTP::version - Returns or sets the HTTP version of the request or response.
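As promised above, here is a minimal sketch of routing on an arbitrary header with HTTP::header. The header name (X-Client-Type) and pool names are hypothetical placeholders, not anything from the original post; the point is simply that any header value can drive pool selection, and requests that don't match fall through to the virtual server's default pool:

when HTTP_REQUEST {
    # Route on a custom (hypothetical) header; a missing or unrecognized value
    # simply falls through to the virtual server's default pool.
    switch [string tolower [HTTP::header "X-Client-Type"]] {
        "mobile"  { pool mobile_pool }
        "partner" { pool partner_pool }
    }
}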
Even if you have a plethora of IP addresses available, the ability to architect your application infrastructure is made even easier if you have the capability to perform layer 7 switching on HTTP requests. It allows you to make better use of resources and to optimize servers for specific types of content. A server serving up only images can be specifically configured for binary image content, while other servers can be better optimized to serve up HTML and other types of content. Whether you have enough IP addresses or not, there's something to be gained in the areas of efficiency and simplification of your application infrastructure using layer 7 switching.

For a deeper dive into HTTP headers (and HTTP in general), check out the HTTP RFC specification.

Imbibing: Coffee
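As a closing sketch of the content-specialization idea above: the iRule below sends requests for common image extensions to a pool of image-optimized servers and everything else to the general web pool. The pool names and extension list are hypothetical placeholders, not anything from the original post:

when HTTP_REQUEST {
    # Send image requests to servers tuned for static binary content,
    # everything else to the general-purpose web servers.
    switch -glob [string tolower [HTTP::path]] {
        "*.jpg" -
        "*.jpeg" -
        "*.gif" -
        "*.png" { pool image_pool }
        default { pool web_pool }
    }
}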