virtual appliance
5 Topics

Building an elastic environment requires elastic infrastructure
One of the reasons some folks push for infrastructure as virtual appliances is the on-demand nature of a virtualized environment. When network and application delivery infrastructure hits capacity in terms of throughput - regardless of the layer of the application stack at which it happens - it's frustrating to think you might need to upgrade hardware rather than just add more compute power via a virtual image. And the underlying premise makes sense: the infrastructure supporting a virtualized environment should be elastic. It should be able to dynamically expand without requiring a new network architecture, a higher performing platform, or new configuration. You should be able to just add more compute resources and walk away.

The good news is that this is possible today. It just requires that you carefully consider your choices in network and application network infrastructure when you build out your virtualized infrastructure.

ELASTIC APPLICATION DELIVERY INFRASTRUCTURE

Last year F5 introduced VIPRION, an elastic, dynamic application delivery platform capable of expanding capacity without requiring any changes to the infrastructure. VIPRION is a chassis-based, bladed application delivery controller, and its bladed system behaves much the same way a virtualized equivalent would. Say you start with one blade in the system, and soon after you discover you need more throughput and more processing power. Rather than bringing online a new virtual image of such an appliance to increase capacity, you add a blade to the system and voila! VIPRION immediately recognizes the blade and simply adds it to its pools of processing power and capacity. There's no need to reconfigure anything; VIPRION essentially treats each blade like a virtual image and automatically distributes requests and traffic across the network and application delivery capacity available on the blade. Just like a virtual appliance model would, but without concern for the reliability and security of the platform.

Traditional application delivery controllers can also be scaled out horizontally to provide similar functionality and behavior. By deploying additional application delivery controllers in what is often called an active-active model, you can rapidly deploy new capacity and synchronize the configuration of the master system to add more throughput. Meshed deployments comprising more than a pair of application delivery controllers can provide additional network compute resources beyond what a single system offers. The traditional scaling model requires more work to deploy than VIPRION does, simply because it requires additional hardware and all the overhead such a solution entails. The elastic option with bladed, chassis-based hardware is really the best option in terms of elasticity and the ability to grow on demand as your infrastructure needs increase over time.
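The behavior worth noticing here is that new capacity joins a shared pool and traffic is redistributed with no reconfiguration. Below is a minimal sketch of that idea - a toy scheduler of my own, not VIPRION's actual distribution logic - where "adding a blade" is nothing more than registering another pool member.

```python
# Toy model of an elastic capacity pool: adding a "blade" requires no
# reconfiguration; the scheduler simply starts using the new capacity.
# Illustrative only -- not how VIPRION actually distributes traffic.

class CapacityPool:
    def __init__(self):
        self.members = {}          # member name -> active connection count

    def add_member(self, name):
        """Hot-add capacity; existing members and traffic are unaffected."""
        self.members[name] = 0

    def assign(self):
        """Send the next request to the least-loaded member."""
        if not self.members:
            raise RuntimeError("no capacity available")
        name = min(self.members, key=self.members.get)
        self.members[name] += 1
        return name

    def release(self, name):
        self.members[name] -= 1

pool = CapacityPool()
pool.add_member("blade-1")
print([pool.assign() for _ in range(3)])   # all traffic lands on blade-1

pool.add_member("blade-2")                 # "insert" a second blade
print([pool.assign() for _ in range(4)])   # new capacity is used immediately
```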
ELASTIC STORAGE INFRASTRUCTURE

Often overlooked in the network diagrams detailing virtualized infrastructures is the storage layer. The increase in storage needs in a virtualized environment can be overwhelming, as there is a need to standardize the storage access layer such that virtual images of applications can be deployed in a common, unified way regardless of which server they need to be executing on at any given time. This means a shared, unified storage layer on which to store images that are necessarily large - and that unified storage layer must also be expandable.

As the number of applications and associated images grows, storage needs increase. What's needed is a system in which additional storage can be added in a non-disruptive manner. If you have to modify the automation and orchestration systems driving your virtualized environment every time storage is added, you've lost some of the benefits of a virtualized storage infrastructure.

F5's ARX series of storage virtualization solutions provides that layer of unified storage infrastructure. By normalizing the namespaces through which files (images) are accessed, the systems driving a virtualized environment can be assured that images are available via the same access method regardless of where the file or image is physically located. Virtualized storage infrastructure systems are dynamic; additional storage can be added to the infrastructure and "plugged in" to the global namespace to increase the storage available, all in a non-disruptive manner. An intelligent virtualized storage infrastructure can make even more efficient use of the available storage by tiering it: images and files accessed frequently can be stored on fast, tier-one storage so they load and execute more quickly, while less frequently accessed files and images can be moved to less expensive and perhaps less performant storage systems. (A toy sketch of this namespace-and-tiering idea appears at the end of this post.)

By deploying elastic application delivery network infrastructure instead of virtual appliances, you maintain stability, reliability, security, and performance across your virtualized environment. Elastic application delivery network infrastructure is already dynamic, and it offers a variety of options for integration into automation and orchestration systems via standards-based control planes, many of which are nearly turn-key solutions.

The reasons some folks might desire a virtual appliance model for their application delivery network infrastructure are valid. But the reality is that the elasticity and on-demand capacity offered by a virtual appliance are already available in proven, reliable hardware solutions today, without sacrificing performance, security, or flexibility.
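As promised above, here is a toy sketch of the namespace-and-tiering idea - my own illustration, not how ARX actually works. The point it demonstrates: callers address files by a stable logical path, while a tiering policy silently moves the physical copy between fast and slow storage based on access frequency. All paths and thresholds are invented for the example.

```python
# Hypothetical model of a virtualized storage namespace with tiering.
# Logical paths never change; only the physical location behind them does.

class VirtualNamespace:
    PROMOTE_THRESHOLD = 3   # accesses before a file moves to fast storage

    def __init__(self):
        self.location = {}      # logical path -> (tier, physical path)
        self.accesses = {}      # logical path -> access count

    def add_file(self, logical, physical, tier="tier2"):
        self.location[logical] = (tier, physical)
        self.accesses[logical] = 0

    def open(self, logical):
        """Callers use the stable logical path; tiering is invisible."""
        self.accesses[logical] += 1
        tier, physical = self.location[logical]
        if tier == "tier2" and self.accesses[logical] >= self.PROMOTE_THRESHOLD:
            physical = "/fast" + logical      # simulate migration to tier 1
            self.location[logical] = ("tier1", physical)
        return physical

ns = VirtualNamespace()
ns.add_file("/images/app.img", "/slow/images/app.img")
for _ in range(4):
    print(ns.open("/images/app.img"))
# Early accesses resolve to /slow/..., later ones to /fast/..., but the
# logical path the orchestration system uses never changes.
```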
How to test APM access policy locally?

Very new to BIG-IP. I have downloaded the BIG-IP VE and imported it into VirtualBox, then followed the basic configuration described here: https://f5.bravais.com/s/vf4XckakhbTbCRUwa2sb

I created an APM access policy to use AD, but is there any way to test it? Some internal URL that simulates real-world behavior? Since I am on VirtualBox, any virtual server I create is directly accessible locally. How do I get BIG-IP into the picture?
GTM Split-Brain DNS Setup

I'm trying to set up an F5 GTM/LTM virtual appliance. I have two data centers, one for production and one for DR; this would be an active/passive setup to some degree. The ISP connection is tied to the firewall, and the F5 GTM/LTM sits behind the firewall in a DMZ, with the application in a separate email DMZ. I'm using Exchange for my initial testing (owa.domain.com, etc.).

What do I have to set up so the F5 will publish public IPs to external users and private IPs to internal users? I was trying to follow this document: https://support.f5.com/kb/en-us/solutions/public/14000/400/sol14421.html

I set up two topology regions based on subnet: one called RFC1918-Internal, which contains all the standard private IP subnets, and another called Catchall-External with Continent (all) on it. The problem is that the GTM pools are created from LTM virtual servers with private IPs. In order to set up a topology record that uses the regions and directs queries to the right pool, I need some way of getting the public IPs in there. I'm a little lost on what steps to take to make that happen.
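For readers new to the concept the poster is describing: split-brain (split-horizon) DNS means answering the same name with different IPs depending on where the query originates. The decision logic reduces to the sketch below; the addresses are illustrative placeholders, not values from the poster's environment.

```python
# Minimal sketch of split-horizon DNS resolution logic.
# Addresses are illustrative placeholders, not from the original post.
import ipaddress

RFC1918 = [ipaddress.ip_network(n)
           for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

ANSWERS = {
    # name -> (answer for internal clients, answer for external clients)
    "owa.domain.com": ("192.168.10.25", "203.0.113.25"),
}

def resolve(name, client_ip):
    """Return the internal answer for RFC1918 sources, else the public one."""
    internal, external = ANSWERS[name]
    src = ipaddress.ip_address(client_ip)
    if any(src in net for net in RFC1918):
        return internal
    return external

print(resolve("owa.domain.com", "192.168.1.50"))   # -> 192.168.10.25
print(resolve("owa.domain.com", "198.51.100.7"))   # -> 203.0.113.25
```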
A Rose By Any Other Name. Appliances Are More Than Systems.

One of the majors Lori's and my oldest son is pursuing is philosophy. I've never been a huge fan of philosophy, but as he and Lori talked, I decided to find out more, and picked up one of The Great Courses on The Philosophy of Science to try to understand where philosophy split off from the hard sciences and became irrelevant, or an impediment. I wasn't disappointed, for at some point in the fifties a philosopher posed the "If you're a chicken, you assume when the farmer comes that he will bring food, so the day he comes with an axe, you are surprised" question. Philosophers know this tale, and to them it disproves everything, for by this argument all empirical data is suspect, and all of our data is empirical at one level or another. At that point, science continued forward and philosophy got completely lost. The instructor for the class updated the example to "what if the next batch of copper pulled out of the ground doesn't conduct electricity?" This is where it shows that either (a) I'm a hard scientist, or (b) I'm too slow-witted to hang with the philosophers, because my immediate answer (and the one I still hold today) was "Duh. It wouldn't be called copper." The Shakespearean lament "that which we call a rose by any other name would smell as sweet" has a corollary: "Any other thing, when called a rose, would not smell as sweet." And that's the truth. If we pulled a metal out of the ground that looked like copper but didn't share this property or that property, then while philosophers were slapping each other on the back and seeing vindication for years of arguments, scientists would simply declare it a new material and give it a name. Nothing in the world would change.

This is true of appliances too. Once you virtualize an appliance, you have two things - a virtualized appliance AND a virtual computer. This is significant because while people have learned how many virtuals can be run on server X given their average and peak loads, the same doesn't yet appear to be true of virtual appliances. I've talked to some, and seen email exchanges from other, IT shops that are throwing virtual appliances - be they a virtualized ADC like BIG-IP LTM VE from F5 or a virtualized cloud storage gateway from someone like Nasuni - onto servers without considering their very special needs as a "computer". In general, you can't consider them to be "applications" or "servers", as their resource utilization is certainly very different from your average app server VM. These appliances are built for a special purpose, and both of the ones I just referenced will use a lot more networking resources than your average server, just being what they are.

When deploying virtualized appliances, think about what the appliance is designed to do, and start with it on a dedicated server. This is non-intuitive, and kind of defeats the purpose, but it is a temporary situation. Note that I said "start with". My reasoning is that the process of virtualizing the appliance changed it, and when it was an appliance, you didn't care about its performance as long as it did the job.
By running it on dedicated hardware, you can evaluate what it uses for resources in a pristine environment. Then, when you move it onto a server with multiple virtual machines running, you know what the "best case" is, so you'll know just how much your other VMs are impacting it, and you have a head start on troubleshooting problems - the resource it used the most on dedicated hardware is the one most likely to be your problem in a shared environment. (A minimal way to capture such a baseline is sketched after this post.)

Appliances are generally more susceptible to certain resource sharing scenarios than a general-service server is. These devices were designed to perform a specific job and have been optimized to do that job. Putting one on hardware with other VMs - even other instances of the appliance - can cause it to degrade in performance, because the very thing it is optimized for is the resource it needs the most, be it memory, disk, or networking. Even CPUs, depending upon what the appliance does, can be a point of high contention between the appliance and whatever other VM is running.

In the end, yes, they are just computers. But you bought them because they were highly specialized computers, and when virtualized, that doesn't change. Give them a chance to strut their stuff on hardware you know, without interference, and only after you've taken their measure on your production network (or a truly equivalent test network, which is rare) start running them on machines with select VMs. Even then, check with your vendor. Plenty of vendors don't recommend running a virtualized appliance that was originally designed for high performance on shared hardware at all. Since doing so against your vendor's advice can create support issues, check with them first, and if you don't like the answer, pressure them either for details of why, or to change their advice. Yes, that includes F5. I don't know the details of our support policy, but both LTM VE and ARX VE are virtualized versions of high-performance systems, so it wouldn't surprise me if our support staff said "first, shut down all other VMs on the hardware..." but since we have multi-processing on VIPRION, it wouldn't surprise me if they didn't either. It is no different than any other scenario: when it comes down to it, know what you have, and unlike the philosophers, expect it to behave tomorrow like it does today. Anything else is an error of some kind.
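As referenced above, capturing that baseline can be as simple as the following sketch. It assumes the third-party psutil package (an assumption on my part; any monitoring agent you already run would do): take a snapshot while the appliance runs on dedicated hardware, save it, then take the same snapshot in the shared environment and compare the deltas.

```python
# Minimal baseline snapshot for a virtualized appliance, using the
# third-party psutil package (pip install psutil). Run it while the
# appliance is on dedicated hardware, save the output, then run it
# again in the shared environment and compare.
import json
import time

import psutil

def snapshot(interval=5.0):
    """Sample CPU, memory, disk, and network usage over `interval` seconds."""
    disk0, net0 = psutil.disk_io_counters(), psutil.net_io_counters()
    cpu = psutil.cpu_percent(interval=interval)   # blocks while sampling
    disk1, net1 = psutil.disk_io_counters(), psutil.net_io_counters()
    return {
        "cpu_percent": cpu,
        "mem_percent": psutil.virtual_memory().percent,
        "disk_write_bytes_per_s": (disk1.write_bytes - disk0.write_bytes) / interval,
        "net_sent_bytes_per_s": (net1.bytes_sent - net0.bytes_sent) / interval,
        "taken_at": time.time(),
    }

if __name__ == "__main__":
    print(json.dumps(snapshot(), indent=2))
```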
Do you control your application network stack? You should.

Owning the stack is important to security, but it's also integral to a lot of other application delivery functions. And in some cases, it's downright necessary.

Hoff rants with his usual finesse in a recent posting, with which I could not agree more. Not only does he point out the wrongness of equating SaaS with "The Cloud", but he points out the importance of "owning the stack" to security:

Those that have control/ownership over the entire stack naturally have the opportunity for much tighter control over the "security" of their offerings. Why? Because they run their business and the datacenters and applications housed in them with the same level of diligence that an enterprise would. They have context. They have visibility. They have control. They have ownership of the entire stack.

Owning the stack has broader implications than just security. The control, visibility, and context-awareness implicit in owning the stack provide much more flexibility in all aspects of delivering applications. Whether we're talking about emerging or traditional data center architectures, the importance of owning the application networking stack should not be underestimated.

The arguments over whether virtualized application delivery makes more sense in a cloud computing-based architecture fail to recognize that a virtualized application delivery network forfeits that control over the stack. While it certainly maintains some control at higher layers, it relies upon other software - the virtual machine, hypervisor, and operating system - which shares control of that stack and, in fact, processes all requests before they reach the virtual application delivery controller. This is quite different from a hardened application delivery controller that maintains control over the stack and provides the means by which security, network, and application experts can tweak, tune, and exert that control in myriad ways to better protect their unique environment. If you don't completely control layer 4, for example, how can you accurately detect and thus prevent layer 4-focused attacks, such as denial of service and manipulation of the TCP stack? You can't. If you don't have control over the stack at the point of entry into the application environment, you are risking a successful attack.

As the entry point into the application, whether it's in "the" cloud, "a" cloud, or a traditional data center architecture, a properly implemented application delivery network can offer the control necessary to detect and prevent myriad attacks at every layer of the stack, without concern that an OS- or hypervisor-targeted attack will manage to penetrate before the application delivery network can stop it.

The visibility, control, and contextual awareness afforded by application delivery solutions also provide the means by which finer-grained control over protocols, users, and applications may be exercised in order to improve performance at the network and application layers. As full proxy implementations, these solutions are capable of enforcing compliance with RFCs for protocols up and down the stack, implementing additional technological solutions that improve the efficiency of TCP-based applications, and offering customized solutions through network-side scripting that can be used to immediately address security risks and architectural design decisions. (A toy sketch of the full-proxy idea follows at the end of this post.) The importance of owning the stack, particularly at the perimeter of the data center, cannot and should not be underestimated.
The loss of control, the addition of processing points at which the stack may be exploited, and the inability to change the very behavior of the stack at the point of entry all come from putting into place solutions incapable of controlling the stack. If you don't own the stack, you don't have control. And if you don't have control, who does?
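As promised above, here is a toy sketch of the full-proxy idea - my illustration, not F5's implementation. Because the proxy terminates the client's TCP connection itself, it can apply policy at layer 4 (here, a trivial "client must speak first, promptly" rule) before the application's stack ever sees the connection. The backend address and timeout are invented for the example.

```python
# Toy full-proxy: terminates the client connection, applies layer 4
# policy, and only then opens a separate connection to the backend.
# Illustration only -- a real ADC enforces far more than this.
import asyncio

BACKEND = ("127.0.0.1", 8080)   # hypothetical application address
FIRST_BYTES_TIMEOUT = 5.0       # slow/idle clients never reach the app

async def pump(reader, writer):
    """Copy bytes one way until EOF, then close the destination."""
    try:
        while data := await reader.read(65536):
            writer.write(data)
            await writer.drain()
    finally:
        writer.close()

async def handle_client(client_r, client_w):
    try:
        # Policy at the point of entry: client must speak first, promptly.
        first = await asyncio.wait_for(client_r.read(65536), FIRST_BYTES_TIMEOUT)
        if not first:
            raise ConnectionError("client sent nothing")
    except (asyncio.TimeoutError, ConnectionError):
        client_w.close()        # the backend never saw this connection
        return
    backend_r, backend_w = await asyncio.open_connection(*BACKEND)
    backend_w.write(first)
    await backend_w.drain()
    # Two independent TCP connections, bridged by the proxy.
    await asyncio.gather(pump(client_r, backend_w), pump(backend_r, client_w),
                         return_exceptions=True)

async def main():
    server = await asyncio.start_server(handle_client, "0.0.0.0", 9090)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```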