hyper-v
5 Topics

Why such high ping latencies in VE LTM?
Hello, I'm evaluating a VE LTM Trial (25 Mbps, BIG-IP 12.1.1 Build 2.0.204 Hotfix HF2) running on Hyper-V on Windows Server 2012 R2. When I run ping from the Hyper-V console window of the LTM VM, I measure the following times:

- ping -I 172.27.50.1 172.27.50.151 = **7 ms .. 30 ms** (from the LTM internal static self-IP to another VM attached to the same Virtual Switch)
- ping -I 172.27.50.1 172.27.50.161 = **7 ms .. 30 ms** (from the LTM internal static self-IP to another VM reached through the external network, via a physical switch)
- ping -I 172.27.50.1 172.27.51.1 < 1 ms (from the LTM internal static self-IP to the LTM external static self-IP)
- ping -I 172.27.50.1 172.27.52.1 < 1 ms (from the LTM internal static self-IP to the LTM management address)
- ping -I 172.27.50.1 172.27.51.51 = **2 ms .. 4 ms** (from the LTM internal static self-IP to any of the configured LTM Virtual Servers)

Pings between the two devices over the HA VLAN are even higher: tens of milliseconds!

I reserved what I judge to be the recommended amounts of vCPU and memory for the LTM VE, and I have also disabled Virtual Machine Queues on the physical NICs and on the LTM vNICs. Does anyone have suggestions for configurations to check or change, or troubleshooting procedures to reveal the cause of the high latencies above? Many thanks!
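One way to quantify the jitter before and after a change (for example, toggling VMQ on the physical NICs) is to script a batch of pings and compare the statistics. Below is a minimal Python sketch that wraps a Linux-style ping (the -I source-address and -c count flags, with "time=<ms> ms" in the output); the addresses are the ones quoted in the post and are only placeholders.

```python
# Minimal latency-sampling sketch. Assumes a Linux-style ping that accepts
# "-I <source-ip>" and "-c <count>" and prints "time=<ms> ms" per reply.
# The addresses below are the ones quoted in the post and are placeholders.
import re
import subprocess

SOURCE_IP = "172.27.50.1"                      # internal self-IP to ping from
TARGETS = ["172.27.50.151", "172.27.50.161", "172.27.51.1", "172.27.51.51"]
COUNT = 20

def sample(target: str) -> list[float]:
    """Run COUNT pings from SOURCE_IP to target and return the RTTs in ms."""
    out = subprocess.run(
        ["ping", "-I", SOURCE_IP, "-c", str(COUNT), target],
        capture_output=True, text=True, check=False,
    ).stdout
    return [float(m) for m in re.findall(r"time=([\d.]+) ms", out)]

for t in TARGETS:
    rtts = sample(t)
    if rtts:
        print(f"{t}: min={min(rtts):.1f}  avg={sum(rtts)/len(rtts):.1f}  "
              f"max={max(rtts):.1f} ms  ({len(rtts)}/{COUNT} replies)")
    else:
        print(f"{t}: no replies")
```

Running the same sampling from the LTM self-IP and from a plain Linux VM on the same Virtual Switch makes it easier to tell whether the latency is introduced by the VE itself or by the Hyper-V virtual networking path.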
"Managed by BIG-IQ" message not removed from BIG-IP LTM

I added a BIG-IP LTM device to BIG-IQ and now want to add this device to another BIG-IQ, but it is not allowing this. When I removed the device from the BIG-IQ system, it no longer appears in BIG-IQ; however, the top left of the LTM GUI still shows "Managed by BIG-IQ". When I click on "Managed by BIG-IQ", it opens the same old BIG-IQ, even though the device has been removed from there.
I CAN HAS DEFINISHUN of SoftADC and vADC?

In the networking side of the world, vendors often seek to differentiate their solutions not just based on features and functionality, but on form factor as well. Using a descriptor to impart an understanding of the deployment form factor of a particular solution has always been quite common: appliance, hardware, platform, etc. Sometimes these terms come from analysts; other times they come from vendors themselves. Regardless of where they originate, they quickly propagate, and unfortunately often do so without the benefit of a clear definition. A reader recently asked a question that reminded me we've done just that as cloud computing and virtualization creep into our vernacular. Quite simply, the question was, "What's the definition of a Soft ADC and vADC?"

That's actually an interesting question, as it's more broadly applicable than just to ADCs. For example, over the last several years we've been hearing about "Soft WOC (WAN Optimization Controller)" in addition to just plain old WOC, and the definition of Soft WOC is very similar to Soft ADC. The definitions, even if not always well understood, are consistent across the entire application delivery realm, from WAN to LAN to cloud. So this post addresses the question in relation to ADCs more broadly, as there's an emerging "xADC" model that should probably be mentioned as well. Let's start with the basic definition of an Application Delivery Controller (ADC) and go from there, shall we?

ADC

An application delivery controller is a device that is typically placed in a data center between the firewall and one or more application servers (an area known as the DMZ). First-generation application delivery controllers primarily performed application acceleration and handled load balancing between servers. The latest generation of application delivery controllers handles a much wider variety of functions, including rate shaping and SSL offloading, as well as serving as a web application firewall.

If you said an application delivery controller was a "load balancer on steroids" (which is how I usually describe them to the uninitiated), you wouldn't be far from the truth. The core competency of an ADC is load balancing, and from that core functionality has been derived, over time, the means by which optimization, acceleration, security, remote access, and a wealth of other functions directly related to application delivery in scalable architectures can be applied in a unified fashion. Hence the use of the term "Unified Application Delivery." If you prefer a gaming metaphor, an application delivery controller is like a multi-classed D&D character, probably a 3e character, because many of the "extra" functions available in an ADC are more like skills or feats than class abilities.

SOFT ADC

A "Soft ADC" is simply an ADC in software form, deployed on commodity hardware. That hardware may or may not have additional hardware processing (like PCI-based SSL acceleration) to assist in offloading compute-intensive processes, and the integration of the software with that hardware varies from vendor to vendor. Soft ADCs are sometimes offered as "softpliances" (many people hate this term), that is, appliances comprised of commodity hardware pre-loaded and configured with the ADC software. This option allows the vendor to harden and optimize the operating system on which the Soft ADC runs, which can be advantageous to the organization, as it will not need to worry about upgrades and/or patches to the solution impacting the functionality of the Soft ADC.
This option can also result in higher capacity and better performance for the ADC and the applications it manages, as the operating system's network stack is often "tweaked" and "tuned" to support the application delivery functions of the Soft ADC.

VIRTUAL ADC (vADC)

A "vADC" is a virtualized version of an ADC. The ADC may or may not have first been a "Soft ADC"; BIG-IP, for example, is not available as a "Soft ADC" but is available as a traditional hardware ADC or a virtual ADC. vADCs are ADCs deployed in a virtual network appliance (VNA) form factor, as an image compatible with modern hypervisors (VMware, Xen, Hyper-V).

ADC as a SERVICE

There is an additional "type" of ADC emerging, mainly because of proprietary virtual image formats in clouds like Amazon's: the "ADC as a service", which is offered as a provisionable service within a specific cloud computing environment and is not portable (or usable) outside that environment. In all other respects the "ADC as a service" is indistinguishable from the vADC, as it, too, is deployed on commodity hardware and lacks integration with the underlying hardware platform or available acceleration chipsets.

A PLACE for EVERYTHING and EVERYTHING in its PLACE

In the general category of application delivery (and most networking solutions as well) we can make the following abstractions regarding these definitions:

| Form factor | Definition |
| --- | --- |
| "Solution" | A traditional hardware-based "solution". |
| Soft "Solution" | A traditional hardware-based solution in a software form factor that can be deployed on an "appliance" or commodity hardware. |
| v"Solution" | A traditional hardware-based solution in a virtualized form factor that can be deployed as a virtual network appliance (VNA) on a variety of virtualization platforms. |
| "Solution" as a Service* | A traditional hardware-based solution in a proprietary form factor (software or virtual) that is not usable or portable outside the environment in which it is offered. |

So if we were to tackle "Soft WOC" as well, we'd find that the general definition (a traditional hardware-based solution in a software form factor) also fits that category of solution well.

It may seem to follow logically that any version of an ADC (or network solution) is "as good" as the next, given that the core functionality is almost always the same regardless of form factor. There are, however, pros and cons to each form factor that should be taken into consideration when designing an architecture that may take advantage of an ADC. In some cases a Soft ADC or vADC will provide the best value, in others a traditional hardware ADC, and in many cases a highly scalable and flexible architecture will take advantage of both in the appropriate places within the architecture (the toy sketch below tries to make these trade-offs concrete).

*Some solutions offered "as a service" are more akin to SaaS in that they are truly web services, regardless of underlying implementation, that are "portable" because they can be accessed from anywhere, though they cannot be "moved" or integrated internally as private solutions.
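As a toy illustration of those trade-offs, the Python sketch below encodes the four form factors from the table with two deliberately simplified properties and naively filters them against two needs. The attribute values are simplifications distilled from the definitions in this post, not authoritative product characteristics.

```python
# Toy sketch: the four form factors from the table above, reduced to two
# illustrative properties, plus a naive filter. The values are simplifications
# drawn from this post's definitions, not authoritative product data.
FORM_FACTORS = {
    "hardware ADC":     {"hw_acceleration": True,  "portable_outside_cloud": True},
    # Soft ADCs *may* have PCI SSL cards; treated as "no" here for simplicity.
    "Soft ADC":         {"hw_acceleration": False, "portable_outside_cloud": True},
    "vADC":             {"hw_acceleration": False, "portable_outside_cloud": True},
    "ADC as a service": {"hw_acceleration": False, "portable_outside_cloud": False},
}

def candidates(need_hw_offload: bool, need_portability: bool) -> list[str]:
    """Return the form factors that satisfy two (deliberately simplified) needs."""
    return [
        name for name, props in FORM_FACTORS.items()
        if (not need_hw_offload or props["hw_acceleration"])
        and (not need_portability or props["portable_outside_cloud"])
    ]

print(candidates(need_hw_offload=True, need_portability=True))    # ['hardware ADC']
print(candidates(need_hw_offload=False, need_portability=False))  # all four
```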
Cloud Computing and Infrastructure 2.0
Not every infrastructure vendor needs new capabilities to support cloud computing and infrastructure 2.0.

Greg Ness of Infoblox has an excellent article on "The Next Tech Boom: Infrastructure 2.0" that is showing up everywhere. That's because it raises some interesting questions and points out some real problems that will need to be addressed as we move further into cloud computing and virtualized environments. What is really interesting, however, is the fact that some infrastructure vendors are already there and have been for quite some time. One thing Greg mentions that's not quite accurate (at least in the case of F5) is regarding the ability of "appliances" to "look inside servers (for other servers) or dynamically keep up with fluid meshes of hypervisors". From Greg's article:

The appliances that have been deployed across the last thirty years simply were not architected to look inside servers (for other servers) or dynamically keep up with fluid meshes of hypervisors powering servers on and off on demand and moving them around with mouse clicks. Enterprises already incurring dis-economies of scale today will face sheer terror when trying to manage and secure the dynamic environments of tomorrow. Rising management costs will further compromise the economics of static network infrastructure.

I must disagree. Not with the sheer terror statement - that's almost certainly true - but with the claim about the capabilities of infrastructure devices to handle a virtualized environment. Some appliances and network devices have long been able to look inside servers and dynamically keep up with the rapid changes occurring in a hypervisor-driven application infrastructure. We call one of those capabilities "intelligent health monitoring", for example, and others certainly have their own special name for a similar capability. On the dynamic front, when you combine an intelligent application delivery controller with the ability to be orchestrated from within applications or within the OS, you get the ability to dynamically modify the configuration of application delivery in real time based on current conditions within the data center. And if your monitoring is intelligent enough, you can sense within seconds when an application - whether virtualized or not - has disappeared or, conversely, when it has come back online. F5 has been supporting this kind of dynamic, flexible application infrastructure for years. It's not really new, except that its importance has suddenly skyrocketed due to exactly the scenario Greg points out using virtualization.
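To make "intelligent health monitoring" and dynamic reconfiguration a little more concrete, here is a minimal Python sketch of the idea: probe the application itself on a short interval and flip a member's availability the moment it disappears or comes back. The pool addresses, health-check path, and polling interval are hypothetical placeholders, and the print statements stand in for whatever management interface the ADC actually exposes; this illustrates the concept, not any product's implementation.

```python
# Minimal sketch of "intelligent health monitoring" driving dynamic pool state.
# Pool addresses, the health-check path, and the interval are hypothetical; the
# prints stand in for whatever management interface the ADC actually exposes.
import time
import urllib.request

POOL = {"10.0.0.11": True, "10.0.0.12": True}   # member IP -> currently enabled?
HEALTH_PATH = "/healthz"                        # hypothetical application check
INTERVAL = 5                                    # seconds between probes

def is_healthy(ip: str) -> bool:
    """Probe the application itself, not just the VM, so a guest that has been
    powered off or moved is noticed within one polling interval."""
    try:
        with urllib.request.urlopen(f"http://{ip}{HEALTH_PATH}", timeout=2) as resp:
            return resp.status == 200
    except OSError:
        return False

while True:                                     # runs until interrupted
    for ip, enabled in list(POOL.items()):
        healthy = is_healthy(ip)
        if healthy != enabled:
            POOL[ip] = healthy
            # Here a real deployment would enable/disable the member on the ADC.
            print(f"member {ip} marked {'UP' if healthy else 'DOWN'}")
    time.sleep(INTERVAL)
```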
WHAT ABOUT THE VIRTSEC PIECE?

There has never been a better case for centralized web application security through a web application firewall and an application delivery controller. The application delivery controller - which necessarily sits between clients and those servers - provides security at layers 2 through 7. The full stack. There's nothing really that special about a virtualized environment as far as the architecture for delivering applications running on those virtual servers goes; the protocols are still the same, and the same vulnerabilities that have plagued non-virtualized applications will also plague virtualized ones. That means that existing solutions can address those vulnerabilities in either environment, or a mix. Add in a web application firewall to centralize application security and it really doesn't matter whether applications are going up and down like the stock market over the past week. By deploying the security at the edge, rather than within each application, you can let the application delivery controller manage the availability state of the application and concentrate on cleaning up and scanning requests for malicious content. Centralizing security for those applications - again, whether they are deployed on a "real" or "virtual" server - has a wealth of benefits, including improving performance and reducing the very complexity Greg points out that makes information security folks reach for a valium.

BUT THEY'RE DYNAMIC!

Yes, yes they are. The assumption is that, given the opportunity to move virtual images around, organizations will do so - and do so on a frequent basis. I think that assumption is likely a poor one for the enterprise, and probably not nearly as willy-nilly for cloud computing providers, either. Certainly there will be some movement, some changes, but it's not likely to be every few minutes, as is often implied. Even if it were, some infrastructure is already prepared to deal with that dynamism. Dynamism is just another term for agility, and it makes the case well for loose coupling of security and delivery with the applications living in the infrastructure. If we just apply the lessons we've learned from SOA to virtualization and cloud computing, 90% of the "Big Hairy Questions" can be answered by existing technology. We just may have to change our architectures a bit to adapt to these new computing models. Network infrastructure, specifically application delivery, has had to deal with applications coming online and going offline since their inception. It's the nature of applications to have outages, and application delivery infrastructure, at least, already deals with those situations. It's merely the frequency of those "outages" that is increasing, not the general concept.

But what if they change IP addresses? That would indeed make things more complex. This requires even more intelligence, but again, we've got that covered. While the functionality necessary to handle this kind of scenario is not "out of the box" (yet), it is certainly not that difficult to implement if the infrastructure vendor provides the right kind of integration capability, which most already do. (A rough sketch of that kind of reconciliation appears after the related links below.)

Greg isn't wrong in his assertions. There are plenty of pieces of network infrastructure that need to take a look at these new environments and adjust how they deal with the dynamic nature of virtualization and cloud computing in general. But it's not all infrastructure that needs to "get up to speed". Some infrastructure has been ready for this scenario for years, and it's just now that the application infrastructure and deployment models (SOA, cloud computing, virtualization) have actually caught up and made those features even more important to a successful application deployment. Application delivery in general has stayed ahead of the curve and is already well suited to cloud computing and virtualized environments. So I guess some devices are already "Infrastructure 2.0" ready. I guess what we really need is a sticker to slap on the product that says so.

Related Links

- Are you (and your infrastructure) ready for virtualization?
- Server virtualization versus server virtualization
- Automating scalability and high availability services
- The Three "Itys" of Cloud Computing
- 4 things you need in a cloud computing infrastructure
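Picking up the "what if they change IP addresses" question: below is a rough Python sketch of the kind of reconciliation an orchestrated ADC could perform, assuming some discovery source (hypervisor inventory, an orchestration API, and so on) can report where the application currently lives. The discover_app_instances() function, the addresses, and the print statements are all hypothetical stand-ins, not any vendor's actual interface.

```python
# Rough sketch: reconcile ADC pool membership when virtual machines move or
# change IP addresses. discover_app_instances() is a hypothetical stand-in for
# whatever discovery/integration the platform provides; the prints stand in
# for calls to the ADC's management interface.

def discover_app_instances() -> set[str]:
    """Placeholder: return the IPs currently hosting the application."""
    return {"10.0.0.21", "10.0.0.23"}

def reconcile(current_members: set[str]) -> set[str]:
    """Add newly discovered instances and drop members that no longer exist."""
    discovered = discover_app_instances()
    for ip in discovered - current_members:
        print(f"adding new member {ip}")      # would enable the member on the ADC
    for ip in current_members - discovered:
        print(f"removing stale member {ip}")  # would disable/remove it
    return discovered

if __name__ == "__main__":
    members = {"10.0.0.21", "10.0.0.22"}      # what the ADC currently knows about
    members = reconcile(members)
    print(f"pool is now {sorted(members)}")
```

Run on a timer or triggered by the orchestration layer, this loose coupling is exactly the "right kind of integration capability" the article argues for: the application can move, and the delivery infrastructure follows.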