virtualization
How to get an F5 BIG-IP VE Developer Lab License
(applies to BIG-IP TMOS Edition)

To help DevOps teams improve their development for the BIG-IP platform, F5 offers a low-cost developer lab license. This license can be purchased from your authorized F5 vendor. If you do not have an F5 vendor, you can purchase a lab license online:

- CDW BIG-IP Virtual Edition Lab License
- CDW Canada BIG-IP Virtual Edition Lab License

Once completed, the order is sent to F5 for fulfillment and your license will be delivered shortly after via e-mail. F5 is investigating ways to improve this process.

To download the BIG-IP Virtual Edition, please log into downloads.f5.com (a separate login from DevCentral) and navigate to the appropriate virtual edition, for example:

- For VMware Fusion, Workstation, or ESX/i: BIGIP-16.1.2-0.0.18.ALL-vmware.ova
- For Microsoft Hyper-V: BIGIP-16.1.2-0.0.18.ALL.vhd.zip
- For KVM on RHEL/CentOS: BIGIP-16.1.2-0.0.18.ALL.qcow2.zip

Note: There are also 1 Slot versions of the above images for cases where a second boot partition for in-place upgrades is not needed. These images include _1SLOT- in the image name instead of ALL.

The guides below will help get you started with F5 BIG-IP Virtual Edition for development on VMware Fusion, AWS, Azure, Hyper-V, or VMware vCloud/ESX. These guides follow standard practices for installing in production environments; performance recommendations can be relaxed for the lower-use, non-critical needs of dev/lab environments. Similar to driving a tank, use your best judgement.

- Deploying F5 BIG-IP Virtual Edition on VMware Fusion
- Deploying F5 BIG-IP in Microsoft Azure for Developers
- Deploying F5 BIG-IP in AWS for Developers
- Deploying F5 BIG-IP in Windows Server Hyper-V for Developers
- Deploying F5 BIG-IP in VMware vCloud Director and ESX for Developers

Note: F5 Support maintains authoritative Azure, AWS, Hyper-V, and ESX/vCloud installation documentation. VMware Fusion is not an official F5-supported hypervisor, so DevCentral publishes the Fusion guide with the help of our Field Systems Engineering teams.

F5 in AWS Part 1 - AWS Networking Basics
Updated for Current Versions and Documentation

- Part 1: AWS Networking Basics
- Part 2: Running BIG-IP in an EC2 Virtual Private Cloud
- Part 3: Advanced Topologies and More on Highly-Available Services
- Part 4: Orchestrating BIG-IP Application Services with Open-Source Tools
- Part 5: Cloud-init, Single-NIC, and Auto Scale Out of BIG-IP in v12

If you work in IT, and you haven't been living under a rock, then you have likely heard of Amazon Web Services (AWS). There has been a substantial increase in the maturity and stability of the AWS Elastic Compute Cloud (EC2), and you may be wondering: can I continue to leverage F5 services in AWS? In this series of blog posts, we will discuss the how and why of running F5 BIG-IP in EC2. In this specific article, we'll start with the basics of EC2 and the Virtual Private Cloud (VPC). Later in the series, we will discuss some of the considerations associated with running BIG-IP as a compute instance in this environment, outline the best deployment models for your application in EC2, and show how these deployment models can be automated using open-source tools.

Note: AWS uses the terms "public" and "private" to refer to what F5 Networks has typically referred to as "external" and "internal", respectively. We will use these terms interchangeably.

First, what is AWS? If you have read the history, you will know that the EC2 project began with an internal interest at Amazon in moving away from messy, multi-tenant networks using VLANs for segregation. Instead, network engineers at Amazon wanted to build an entirely IP-based architecture. This vision morphed into the universe of application services available today. Of course, building multi-tenant, purely L3 networks at massive scale had implications for both security and redundancy (we'll get to this later). Today, EC2 enables users to run applications and services on top of virtualized network, storage, and compute infrastructure, where hosts are deployed in the form of Amazon Machine Images (AMIs). These AMIs can either be private to the user or launched from the public AWS marketplace. Hosts can be added to elastic load balancing (ELB) groups and associated with publicly accessible IPs to implement a simple horizontal model for availability.

AWS became truly relevant for the enterprise with the introduction of the Virtual Private Cloud service. VPCs enable users to build virtual private networks at the IP layer. These private networks can be connected to on-premise configurations by way of a VPN Gateway, or connected to the internet via an Internet Gateway. When deploying hosts within a VPC, the user has a significant amount of control over how each host is attached to the network. For example, a host can be attached to multiple networks and given several public or private IPs on one or multiple interfaces. Further, users can control many of the security aspects they are used to configuring in an on-premise environment (albeit in a slightly different way), including network ACLs, routing, simple firewalling, DHCP options, etc.
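As a quick illustration of that level of control, here is a minimal boto3 sketch (not from the original article) that launches a host with two interfaces in two different subnets. All resource IDs and addresses are hypothetical placeholders.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Launch one instance with two ENIs, each in a different subnet.
    # Every ID below is a placeholder -- substitute your own.
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # an AMI you are entitled to run
        InstanceType="m5.xlarge",
        MinCount=1,
        MaxCount=1,
        NetworkInterfaces=[
            {
                "DeviceIndex": 0,          # primary interface
                "SubnetId": "subnet-aaaa1111",
                "Groups": ["sg-0000aaaa"],
                "PrivateIpAddress": "10.0.1.10",
            },
            {
                "DeviceIndex": 1,          # second interface, second subnet
                "SubnetId": "subnet-bbbb2222",
                "Groups": ["sg-0000aaaa"],
                "PrivateIpAddress": "10.0.2.10",
            },
        ],
    )
    print(response["Instances"][0]["InstanceId"])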
Let's talk about these and other important EC2 aspects and try to understand how they affect our application deployment strategy.

L2 Restrictions

As we mentioned above, one of the design goals of AWS was to remove layer 2 networking. This is a worthy accomplishment, but we lose access to certain useful protocols, including ARP (and gratuitous ARP), broadcast and multicast groups, and 802.1Q tagging. We can no longer use VLANs for some availability models, for quality of service management, or for tenant isolation.

Network Interfaces

For larger topologies, one of the largest impacts of the removal of 802.1Q protocol support is the number of subnets we can attach to a node in the network. Because in AWS each interface is attached as a layer 3 endpoint, we must add an interface for each subnet. This contrasts with traditional networks, where you can add VLANs to your trunk for each subnet via tagging. Even though we're in a virtual world, the number of virtual network interfaces (or Elastic Network Interfaces (ENIs) in AWS terminology) is also limited according to the EC2 instance size. Together, the limits on the number of interfaces and the one-to-one mapping between interface and subnet effectively limit the number of directly connected networks we can attach to a device (like BIG-IP, for example).

IP Addressing

AWS offers two kinds of globally routable IP addresses: "Public IP Addresses" and "Elastic IP Addresses". In the table below, we outline some of the differences between these two types of IP addresses. You can probably figure out for yourself why we will want to use Elastic IPs with BIG-IP. Like interfaces, AWS limits the number of IPs in several ways, including the number of IPs that can be attached to an interface and the number of Elastic IPs per AWS account.

    Table 1: Differences between Public and Elastic IP Addresses

                                                     Public IP   Elastic IP
    Released on device termination/disassociation    YES         NO
    Assignable to secondary interfaces               NO          YES
    Can be associated after launch                   NO          YES

Amazon provides more information on public and elastic IP addresses here: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-instance-addressing.html#concepts-public-addresses

Each interface on an EC2 instance is given a private IP address. This IP address is routable locally through your subnet and assigned from the address range associated with the subnet to which your interface is attached. Multiple private secondary IP addresses can be attached to an interface, and this is a useful technique for creating more complex topologies. The number of interfaces and private IPs per interface within an Amazon VPC are listed here: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#AvailableIpPerENI

NAT Instances, Subnets and Routing

When creating a VPC using the wizard available in the AWS VPC web portal, several default configurations are possible. One of these configurations is "VPC with Public and Private subnets". In this configuration, what if the instances on our private subnet wish to access the outside world? Because we cannot attach public or elastic IP addresses to instances within the private subnet, we must use NAT provided by AWS. Like BIG-IP and other network devices in EC2, the NAT instance will live as a compute node within your VPC. This is a good way to allow outbound traffic from your internal servers, but to prevent those servers from receiving inbound traffic. When you create subnets manually or through the VPC wizard, you'll note that each subnet has an associated routing table. These route tables may be updated to control traffic flow between instances and subnets in your VPC.
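For reference, here is a minimal boto3 sketch (not part of the original article) that builds the skeleton of the wizard's "VPC with Public and Private subnets" configuration by hand. The CIDR blocks are illustrative.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Create the VPC and two subnets (CIDRs are illustrative).
    vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]
    public_id = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.0.0/24")["Subnet"]["SubnetId"]
    private_id = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")["Subnet"]["SubnetId"]

    # Attach an Internet Gateway; only the public subnet will route through it.
    igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
    ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

    # A route table with a default route out the IGW, associated only with
    # the public subnet. The private subnet keeps the VPC's main route
    # table, which only knows about local traffic.
    rtb_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
    ec2.create_route(RouteTableId=rtb_id, DestinationCidrBlock="0.0.0.0/0", GatewayId=igw_id)
    ec2.associate_route_table(RouteTableId=rtb_id, SubnetId=public_id)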
Regions and Availability Zones

We know quite a few people who have been confused by the concept of availability zones in EC2. To put it clearly, an availability zone is a physically isolated datacenter in a region. Regions may contain multiple availability zones. Availability zones run on different networking and storage infrastructure, and depend on separate power supplies and internet connections. Striping your application deployments across availability zones is a great way to provide redundancy and, perhaps, a hot standby; but please note that these are not the same thing. Amazon does not mirror any data between zones on behalf of the customer. While VPCs can span availability zones, subnets may not.

To close this blog post, we are fortunate enough to get a video walkthrough from Vladimir Bojkovic, Solution Architect at F5 Networks. He shows how to create a VPC with internal and external subnets as a practical demonstration of the concepts we discussed above.

The Limits of Cloud: Gratuitous ARP and Failover
#Cloud is great at many things. At other things, not so much. Understanding the limitations of cloud will better enable a successful migration strategy.

One of the truisms of technology is that it takes a few years of adoption before folks really start figuring out what it excels at, and conversely what it doesn't. That's generally because early adoption is focused on lab-style experimentation that rarely extends beyond basic needs. It's when adoption reaches critical mass and folks start trying to use the technology to implement more advanced architectures that the "gotchas" start to be discovered. Cloud is no exception. A few of the things we've learned over the past years of adoption are that cloud is always on, it's simple to manage, and it makes applications and infrastructure services easy to scale. Some of the things we're learning now are that cloud isn't so great at supporting application mobility, at monitoring of deployed services, and at providing advanced networking capabilities. The reason that last part is so important is that a variety of enterprise-class capabilities we've come to rely upon are ultimately enabled by some of the advanced networking techniques cloud simply does not support. Take gratuitous ARP, for example. Most cloud providers do not allow or support this feature, which ultimately means an inability to take advantage of higher-level functions traditionally taken for granted in the enterprise, like failover.

GRATUITOUS ARP AND ITS IMPLICATIONS

For those unfamiliar with gratuitous ARP, let's get you familiar with it quickly. A gratuitous ARP is an unsolicited ARP request made by a network element (host, switch, device, etc.) to resolve its own IP address. The source and destination IP addresses are identical to the source IP address assigned to the network element. The destination MAC is a broadcast address. Gratuitous ARP is used for a variety of reasons. For example, if there is an ARP reply to the request, it means there exists an IP conflict. When a system first boots up, it will often send a gratuitous ARP to indicate it is "up" and available. And finally, it is used as the basis for load balancing failover. To ensure availability of load balancing services, two load balancers will share an IP address (often referred to as a floating IP). Upstream devices recognize the "primary" device by means of a simple ARP entry associating the floating IP with the active device. If the active device fails, the secondary immediately notices (due to heartbeat monitoring between the two) and will send out a gratuitous ARP indicating it is now associated with the IP address and won't the rest of the network please send subsequent traffic to it rather than the failed primary. VRRP and HSRP may also use gratuitous ARP to implement router failover.

Most cloud environments do not allow broadcast traffic of this nature. After all, it's practically guaranteed that you are sharing a network segment with other tenants, and broadcasting traffic could certainly disrupt other tenants' traffic. Additionally, as security-minded folks will be eager to remind us, it is fairly well-established that the default for accepting gratuitous ARPs on the network should be "don't do it". The astute observer will realize the reason for this: there is no security, no ability to verify, no authentication, nothing.
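To underline how little verification is involved, the sketch below forges a gratuitous ARP with the Python scapy library. This example is not from the original article; the IP and MAC values are placeholders, and it is shown purely to illustrate that nothing in the packet authenticates the sender. Only run something like this on a lab network you own.

    from scapy.all import ARP, Ether, sendp

    # A gratuitous ARP: source and destination IP are identical, and the
    # frame is broadcast. Nothing in the packet proves the sender actually
    # owns 10.0.0.100 -- any host on the segment can emit this.
    ip = "10.0.0.100"                  # placeholder floating IP
    mac = "de:ad:be:ef:00:01"          # placeholder MAC of the sender

    garp = Ether(src=mac, dst="ff:ff:ff:ff:ff:ff") / ARP(
        op=2,        # ARP reply ("is-at"); op=1 for the request form
        psrc=ip,     # sender IP == target IP
        pdst=ip,
        hwsrc=mac,
        hwdst="ff:ff:ff:ff:ff:ff",
    )
    sendp(garp, iface="eth0")          # requires root privileges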
A network element configured to accept gratuitous ARPs does so at the risk of being tricked into trusting, explicitly, every gratuitous ARP – even those that may be attempting to fool the network into believing it is a device it is not supposed to be. That, in essence, is ARP poisoning, and it's one of the security risks associated with the use of gratuitous ARP. Granted, someone needs to be physically on the network to pull this off, but in a cloud environment that's not nearly as difficult as it might be on a locked-down corporate network. Gratuitous ARP can further be used to execute denial of service, man-in-the-middle, and MAC flooding attacks, none of which have particularly pleasant outcomes, especially in a cloud environment where such attacks would be against shared infrastructure, potentially impacting many tenants. Thus cloud providers are understandably leery about allowing network elements to willy-nilly announce their own IP addresses.

That said, most enterprise-class network elements have implemented protections against these attacks precisely because of the reliance on gratuitous ARP for various infrastructure services. Most of these protections use a technique that will tentatively accept a gratuitous ARP, but not enter it in the ARP cache unless it has a valid IP-to-MAC mapping, as defined by the device configuration. Validation can take the form of matching against DHCP-assigned addresses or existence in a trusted database. Obviously these techniques would put an undue burden on a cloud provider's network, given that any IP address on a network segment might be assigned to a very large set of MAC addresses. Simply put, gratuitous ARP is not cloud-friendly, and thus you will be hard-pressed to find a cloud provider that supports it.

What does that mean? That means, ultimately, that failover mechanisms in the cloud cannot be based on traditional techniques unless a means to replicate gratuitous ARP functionality without its negative implications can be designed. Which means, unfortunately, that traditional failover architectures – even using enterprise-class load balancers in cloud environments – cannot really be implemented today. What that means for IT preparing to migrate business-critical applications and services to cloud environments is a careful review of their requirements and of the cloud environment's capabilities to determine whether availability and uptime goals can – or cannot – be met using a combination of cloud and traditional load balancing services.

IP::addr and IPv6
Did you know that all addresses internal to tmm are kept in IPv6 format? If you've written external monitors, I'm guessing you knew this. In external monitors, for IPv4 networks the IPv6 "header" is removed with the line:

    IP=`echo $1 | sed 's/::ffff://'`

IPv4 addresses are stored in what's called "IPv4-mapped" format. An IPv4-mapped address has its first 80 bits set to zero and the next 16 set to one, followed by the 32 bits of the IPv4 address. The prefix looks like this: 0000:0000:0000:0000:0000:ffff: (abbreviated as ::ffff:, which looks strikingly similar—ok, identical—to the pattern stripped above). Notation of the IPv4 section of an IPv4-mapped address varies between implementations, from ::ffff:192.168.1.1 to ::ffff:c0a8:c8c8, but only the latter notation (in hex) is supported. If you need the decimal version, you can extract it like so:

    % puts $x
    ::ffff:c0a8:c8c8
    % if { [string range $x 0 6] == "::ffff:" } {
          scan [string range $x 7 end] "%2x%2x:%2x%2x" ip1 ip2 ip3 ip4
          set ipv4addr "$ip1.$ip2.$ip3.$ip4"
      }
    192.168.200.200

Address Comparisons

The text format is not what controls whether the IP::addr command (nor the class command) does an IPv4 or IPv6 comparison. Whether or not the IP address is IPv4-mapped is what controls the comparison. The text format merely controls how the text is then translated into the internal IPv6 format (i.e., whether it becomes an IPv4-mapped address or not). Normally this is not an issue; however, if you are trying to compare an IPv6 address against an IPv4 address, then you really need to understand this mapping business. Also, it is not recommended to use 0.0.0.0/0.0.0.0 for testing whether something is IPv4 versus IPv6, as that is not really a valid IP address: using the 0.0.0.0 mask (technically the same as /0) is a loophole, and ultimately what you are doing is loading the equivalent form of an IPv4-mapped mask. Rather, you should just use the following to test whether it is an IPv4-mapped address:

    if { [IP::addr $IP1 equals ::ffff:0000:0000/96] } {
        log local0. "Yep, that's an IPv4 address"
    }

These notes are covered in the IP::addr wiki entry. Any updates to the command and/or supporting notes will exist there, so keep the links handy.

Related Articles

- F5 Friday: 'IPv4 and IPv6 Can Coexist' or 'How to eat your cake ...
- Service Provider Series: Managing the ipv6 Migration
- IPv6 and the End of the World
- No More IPv4. You do have your IPv6 plan running now, right ...
- Question about IPv6 - BIGIP - DevCentral - F5 DevCentral ...
- Insert IPv6 address into header - DevCentral - F5 DevCentral ...
- Business Case for IPv6 - DevCentral - F5 DevCentral > Community ...
- We're sorry. The IPv4 address you are trying to reach has been ...
- Don MacVittie - F5 BIG-IP IPv6 Gateway Module

BIG-IP Configuration Conversion Scripts
Kirk Bauer, John Alam, and Pete White created a handful of perl and/or python scripts aimed at easing your migration from some of the "other guys" to BIG-IP. While they aren't going to map every nook and cranny of the configurations to a BIG-IP feature, they will get you well along the way, taking out as much of the human error element as possible. Links to the codeshare articles below.

- Cisco ACE (perl)
- Cisco ACE via tmsh (perl)
- Cisco ACE (python)
- Cisco CSS (perl)
- Cisco CSS via tmsh (perl)
- Cisco CSM (perl)
- Citrix Netscaler (perl)
- Radware via tmsh (perl)
- Radware (python)

Ten Steps to iRules Optimization
Ever wondered whether to use switch or if/else? How about the effects of regular expressions? Or even what the overhead of variables is? We've come up with a list of the top 10 optimization techniques you can follow to write a healthier, happier iRule.

#1 - Use a Profile First

This one seems obvious enough, but we continually get questions on the forums about iRules that reimplement features already built into the product directly. iRules do add extra processing overhead and will be slower than executing the same features in native code. Granted, they won't be that slow, but any little bit you can save will help you in the long run. So, if you aren't doing custom conditional testing, let the profiles do the work. The following common features implemented in iRules can be handled by native profiles:

- HTTP header insert and erase
- HTTP fallback
- HTTP compress uri <exclude|include>
- HTTP redirect rewrite
- HTTP insert X-Forwarded-For
- HTTP ramcache uri <exclude|include|pinned>
- Stream profile for content replacement
- Class profile for URI matching

Even if you do need to do some conditional testing, several of the profiles, such as the stream profile, will allow you to enable/disable and control the expression directly from the iRule.

#2 - A Little Planning Goes A Long Way

iRules are fun, and it is really easy to find yourself jumping right in and coding without giving proper thought to what you are trying to accomplish. Just like any software project, it will pay off in the long run if you come up with some sort of feature specification and try to determine the best way to implement it. Common questions you can ask yourself include:

- What are the protocols involved?
- What commands should I use to achieve the desired result?
- How can I achieve the desired result in the least amount of steps?
- What needs to be logged?
- Where and how will I test it?

By taking a few minutes up front, you will save yourself a bunch of rewrites later on.

#3 - Understand Control Statements

Common control statements include if/elseif/else, switch, and the iRule matchclass/findclass commands with data groups. The problem is that most of these do the same thing but, depending on their usage, their performance can vary quite a bit. Here are some general rules of thumb when considering which control statement to use:

- Always think "switch", "data group", and then "if/elseif", in that order. If you think in this order, then in most cases you will be better off.
- Use switch for < 100 comparisons. Switches are fastest for fewer than 100 comparisons, but if you are making changes to the list frequently, then a data group might be more manageable.
- Use data groups for > 100 comparisons. Not only will your iRule be easier to read by externalizing the comparisons, but it will be easier to maintain and update.
- Order your if/elseif's with the most frequent hits at the top. If/elseif's must perform the full comparison for each if and elseif. If you can move the most frequently hit match to the top, you'll have much less processing overhead. If you don't know which hits are the highest, think about creating a Statistics profile to dynamically store your hit counts and then modify your iRule accordingly.
- Combine switch/data group/if's when appropriate. No one said you have to pick only one of the above. When it makes sense, embed a switch in an if/elseif or put a matchclass inside a switch.
- Use your operators wisely. "equals" is better than "contains", and "string match"/"switch -glob" is better than regular expressions.
  The "contains" operator checks to see whether string "a" is contained in string "b". If you are trying to do a true equality comparison, that overhead is unnecessary; use the "equals" command instead. As for regular expressions, see #4.
- Use break, return, and EVENT::disable to avoid unnecessary iRule processing. Since iRules cannot be broken into procedures, they tend to be lengthy in certain situations. When applicable, use the "break", "return", or "EVENT::disable" commands to halt further iRule processing.

Now, these are not set in stone, just general rules of thumb, so use your best judgement. There will always be exceptions to the rule, and it is best if you know the costs and benefits to make the right decision for you.

#4 - Regex is EVIL!

Regexes are cool and all, but they are CPU hogs and should be considered pure evil. In most cases there are better alternatives.

Bad:

    when HTTP_REQUEST {
        if { [regexp {^/myPortal} [HTTP::uri]] } {
            regsub {/myPortal} [HTTP::uri] "/UserPortal" newUri
            HTTP::uri $newUri
        }
    }

Good:

    when HTTP_REQUEST {
        if { [HTTP::uri] starts_with "/myPortal" } {
            set newUri [string map {myPortal UserPortal} [HTTP::uri]]
            HTTP::uri $newUri
        }
    }

But sometimes they are a necessary evil, as illustrated in the CreditCardScrubber iRule. In the cases where it would take dozens or more commands to recreate the functionality of a regular expression, a regex might be your best option.

    when HTTP_RESPONSE_DATA {
        # Find ALL the possible credit card numbers in one pass
        set card_indices [regexp -all -inline -indices \
            {(?:3[4|7]\d{2})(?:[ ,-]?(?:\d{5}(?:\d{1})?)){2}|(?:4\d{3})(?:[ ,-]?(?:\d{4})){3}|(?:5[1-5]\d{2})(?:[ ,-]?(?:\d{4})){3}|(?:6011)(?:[ ,-]?(?:\d{4})){3}} \
            [HTTP::payload]]
    }

#5 - Don't Use Variables

The use of variables incurs a small amount of overhead that in some cases can be eliminated. Consider the following example, where the values of the Host header and the URI are stored in variables and later compared and logged. These take up additional memory for each connection.

    when HTTP_REQUEST {
        set host [HTTP::host]
        set uri [HTTP::uri]
        if { $host equals "bob.com" } {
            log "Host = $host; URI = $uri"
            pool http_pool1
        }
    }

Here is the exact same functionality as above, but without the overhead of the variables.

    when HTTP_REQUEST {
        if { [HTTP::host] equals "bob.com" } {
            log "Host = [HTTP::host]; URI = [HTTP::uri]"
            pool http_pool1
        }
    }

You may ask yourself: "Self, who cares about a spare 30 bytes or so?" Well, for a single connection it's likely no big deal, but you need to think in terms of hundreds of thousands of connections per second. That's where even the smallest amount of overhead you can save will add up. Removing those variables could gain you a couple thousand connections per second on your application. And never fear: there is no more overhead in using built-in commands like [HTTP::host] than in a variable reference, as both are just direct pointers to the data in memory.

#6 - Use Variables

Now that I've told you not to use variables, here's the case where using variables makes sense: when you need to do repetitive, costly evaluations on a value and can avoid it by storing that evaluation in a variable, then by all means do so.
Bad - too many memory allocations:

    when HTTP_REQUEST {
        if { [string tolower [HTTP::uri]] starts_with "/img" } {
            pool imagePool
        } elseif { ([string tolower [HTTP::uri]] ends_with ".gif") || \
                   ([string tolower [HTTP::uri]] ends_with ".jpg") } {
            pool imagePool
        }
    }

Bad - the length of a variable name can actually consume more overhead due to the way TCL stores variables in lookup tables:

    when HTTP_REQUEST {
        set theUriThatIAmMatchingInThisiRule [string tolower [HTTP::uri]]
        if { $theUriThatIAmMatchingInThisiRule starts_with "/img" } {
            pool imagePool
        } elseif { ($theUriThatIAmMatchingInThisiRule ends_with ".gif") || \
                   ($theUriThatIAmMatchingInThisiRule ends_with ".jpg") } {
            pool imagePool
        }
    }

Good - keep your variable names to a handful of characters:

    when HTTP_REQUEST {
        set uri [string tolower [HTTP::uri]]
        if { $uri starts_with "/img" } {
            pool imagePool
        } elseif { ($uri ends_with ".gif") || ($uri ends_with ".jpg") } {
            pool imagePool
        }
    }

#7 - Understand Polymorphism

TCL is polymorphic in that variables can "morph" back and forth from data type to data type depending on how they are used. This makes loosely-typed languages like TCL easier to work with, since you don't need to declare variables as a certain type such as integer or string, but it does come at a cost. You can minimize those costs by using the correct operators on your data types:

- Use eq, ne when comparing strings.
- Use ==, != when comparing numbers.
- Use [IP::addr] when comparing IP addresses.

If you aren't careful, things may not be what they seem, as illustrated below:

    set x 5
    if { $x == 5 } { }   ;# evaluates to true
    if { $x eq 5 } { }   ;# evaluates to true
    if { $x == 05 } { }  ;# evaluates to true
    if { $x eq 05 } { }  ;# evaluates to false

#8 - Timing

The timing iRule command is a great tool to help you eke the best performance out of your iRule. CPU cycles are stored at a per-event level, and you can turn "timing on" for any and all of your events. These CPU metrics are stored and can be retrieved via the bigpipe CLI, the administrative GUI, or the iControl API. The iRule Editor can take these numbers and convert them into the following statistics:

- CPU Cycles/Request
- Run Time (ms)/Request
- Percent CPU Usage/Request
- Max Concurrent Requests

Just make sure you turn those timing values off when you are done optimizing your iRule, as that will have overhead as well!

#9 - Use The Community

DevCentral is a great resource whether you are just starting your first iRule or are a seasoned veteran looking for advice on something you may have overlooked. The forums and wikis can be a lifesaver when trying to squeeze every bit of performance out of your iRule.

#10 - Learn About The Newest Features

Read the release documentation for new BIG-IP releases, as we are constantly adding new functionality to make your iRules faster. For instance, look for the upcoming "class" command to make accessing your data groups lightning fast.

Conclusion

Take these 10 tips with you when you travel down the road of iRule development and you will likely have a faster iRule to show for it. As always, if you have any tips that you've found to be helpful when optimizing your iRules, please pass them along and we'll add them to this list.

(Audio: 20081030-iRulesOpt_TenSteps.mp3)

Why the BIG-IP VE OVA does not work on Virtualbox
I've seen this question asked on this forum as well as on several other forums across the internet, so I wanted to put together this article to answer folks' questions and to settle the discussion. I'll also discuss options that you have for working around it. The VE images I am working with are from downloads.f5.com.

Perspective

Sometimes things like this get lost in the noise that PM should be channeling back to the maintainers of these products. Other times, PM does exactly what needs to be done, but then it gets filed as "meh, not that important" and it gets checked on every 6 months or so to see if there has been any movement on it. In this case, I just don't think enough people were making a fuss about it to attract any development effort. If something is listed as "not supported", that's usually why it's not supported. There are many times, though, that a person in-house will need to scratch an itch. This is one of those cases. I do not work on the team that builds the images that are available for download, but I have a great working relationship with them, so they gave me the access needed to squirrel around and find the answer.

Build process insight

The build process itself is encapsulated in a biiiiiig shell script. It throws off a number of different VE images. To understand why the OVA fails, you need to understand what the build process does and a little something about Virtualbox.

The process to build the VEs starts with QEMU. For what I will call the "standard" VEs (ALL-ide.ova, ALL-scsi.ova, ALL.qcow2, etc.), the QEMU command attaches two disk drives during the build: one for the system and one for a data store. For the VEs that are known as 1SLOT VEs, that second disk (the data store one) is never created. 1SLOTs are intended only to be used for LTM. As such, adequate disk space is not allocated for "everything else". You get ~7G of space and that's it. To create those drives and attach them as needed by QEMU, we simply append more and more "-drive=blah blah blah" parameters.

Ok, so that collection of QEMU commands eventually gets us a RAW image that we can transform into a number of different formats. One of those formats is OVA, which is created by use of the ovftool command. The output of that command is an OVA file. For those unfamiliar with it, an OVA file is just a tarball that includes an OVF file, a number of disks, and some other "stuff" that you might want to include (certificates, EULA files, yada yada yada). An OVA file is picky, though, as to the order in which files are added to it. The OVF file must come first.

Virtualbox

Virtualbox will happily support more than one disk on a single controller. It will not, however, support more than one controller. On the surface, you would think this is not a problem. After all, when you "Import Appliance", it will happily list both of the controllers there. In fact, the OVF file can even include multiple controllers in it, as we'll see below.

The Problem

So the problem stems from one of these tools, qemu or ovftool, attaching the second disk to the second controller. If the disk were attached to the first controller, then everything would be fine. It's the lack of multiple-controller support in Virtualbox that gums up the machine. VMware allows multiple controllers, so this would explain why nobody in CM saw this problem when they created the OVA. So can you work around this problem in any way so that you get a working OVA on Virtualbox? Answer: Yes.
It's not even a difficult workaround, but I wonder if it changes anything in the functionality of the OVA as imported into Virtualbox? I suspect not, but if you're willing to hack on the VE, then I assume you're not all that concerned with the VE working 100% as designed either. I just needed it to give me a REST and SOAP API that I could throw scripts at to test stuff. I'm OK with hacking on the VE.

Workarounds

Since I'm not privy to the CM team's work pipeline, I can't even begin to guess at when (or if) this will be fixed. I suspect that "since it works in our supported platforms" it won't be fixed any time soon. Who knows. In the meantime, here is a way to fix the problem if you are unable to resort to using the 1SLOT OVA. I am going to use the IDE version in this example, but similar steps will apply to the SCSI image.

Starting with the VE image, we'll use tar to extract its contents to view them.

    SEA-ML-RUPP1:tim trupp$ tar xvf BIGIP-12.0.0.0.0.606.ALL-ide.ova
    x BIGIP-12.0.0.0.0.606-ide.ovf
    x BIGIP-12.0.0.0.0.606-ide.mf
    x BIGIP-12.0.0.0.0.606-ide.cert
    x BIGIP-12.0.0.0.0.606-disk1.vmdk
    x BIGIP-12.0.0.0.0.606-disk2.vmdk
    SEA-ML-RUPP1:tim trupp$

Now, let's take a moment to copy those file names and put them in a temporary file. The reason we do this is that we will need to recreate the OVA later, and because it has a specific file order, we need to specify that list when we recreate the file. I just stick the file list in a file called "files.txt":

    BIGIP-12.0.0.0.0.606-ide.ovf
    BIGIP-12.0.0.0.0.606-ide.mf
    BIGIP-12.0.0.0.0.606-ide.cert
    BIGIP-12.0.0.0.0.606-disk1.vmdk
    BIGIP-12.0.0.0.0.606-disk2.vmdk

From here on out, the remaining work will be done in the OVF file, so open that in your favorite editor. With it open, search around the bottom of the file for this block:

    <Item>
      <rasd:AddressOnParent>0</rasd:AddressOnParent>
      <rasd:ElementName>disk2</rasd:ElementName>
      <rasd:HostResource>ovf:/disk/vmdisk2</rasd:HostResource>
      <rasd:InstanceID>6</rasd:InstanceID>
      <rasd:Parent>3</rasd:Parent>
      <rasd:ResourceType>17</rasd:ResourceType>
    </Item>

There are two values we want to change here. They are:

    <rasd:AddressOnParent>0</rasd:AddressOnParent>
    <rasd:Parent>3</rasd:Parent>

You want to change those values to the following:

    <rasd:AddressOnParent>1</rasd:AddressOnParent>
    <rasd:Parent>4</rasd:Parent>

Then, save the file. What we have accomplished with this change is that we have removed the disk from the second controller and attached it as a second disk on the first controller. With all disks on one controller, the OVA will now boot fine in Virtualbox. Let's repack the OVA:

    tar -cvf BIGIP-12.0.0.0.0.606.ALL-ide-FIXED.ova -T files.txt

This should deliver output that resembles the following:

    SEA-ML-RUPP1:tim trupp$ tar -cvf BIGIP-12.0.0.0.0.606.ALL-ide-FIXED.ova -T files.txt
    a BIGIP-12.0.0.0.0.606-ide.ovf
    a BIGIP-12.0.0.0.0.606-ide.mf
    a BIGIP-12.0.0.0.0.606-ide.cert
    a BIGIP-12.0.0.0.0.606-disk1.vmdk
    a BIGIP-12.0.0.0.0.606-disk2.vmdk
    SEA-ML-RUPP1:tim trupp$

And that should be all you need to do. With the fixed OVA made, open it up in Virtualbox and have at it.
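If you need to fix more than one image, the manual edit is easy to script. Below is a rough Python sketch of the same workaround, not part of the original article. It assumes the file layout shown above and that disk2's <Item> block is the only one matching the pattern, so sanity-check the resulting OVF before using it.

    import re
    import tarfile

    SRC = "BIGIP-12.0.0.0.0.606.ALL-ide.ova"
    DST = "BIGIP-12.0.0.0.0.606.ALL-ide-FIXED.ova"

    # Unpack the OVA and remember the member order: the OVF must stay first.
    with tarfile.open(SRC) as tar:
        names = tar.getnames()
        tar.extractall(".")

    ovf = next(n for n in names if n.endswith(".ovf"))
    with open(ovf) as f:
        text = f.read()

    def fix_disk2(match):
        # Move disk2 off the second controller (Parent 3) onto the first
        # (Parent 4), as the second device (AddressOnParent 1) there.
        item = match.group(0)
        item = item.replace("<rasd:AddressOnParent>0<", "<rasd:AddressOnParent>1<")
        item = item.replace("<rasd:Parent>3<", "<rasd:Parent>4<")
        return item

    # Rewrite only the <Item> block that mentions disk2.
    text = re.sub(r"<Item>(?:(?!</Item>).)*?disk2(?:(?!</Item>).)*?</Item>",
                  fix_disk2, text, flags=re.S)

    with open(ovf, "w") as f:
        f.write(text)

    # Repack in the original order, OVF first.
    with tarfile.open(DST, "w") as tar:
        for name in names:
            tar.add(name)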
One last thing. Remember I mentioned that multiple controllers could be specified as long as no disks were attached to the second controller? Take a look at the OVF file again and you will see this block:

    <Item>
      <rasd:Address>1</rasd:Address>
      <rasd:Description>IDE Controller</rasd:Description>
      <rasd:ElementName>ideController1</rasd:ElementName>
      <rasd:InstanceID>3</rasd:InstanceID>
      <rasd:ResourceType>5</rasd:ResourceType>
    </Item>
    <Item>
      <rasd:Address>0</rasd:Address>
      <rasd:Description>IDE Controller</rasd:Description>
      <rasd:ElementName>ideController0</rasd:ElementName>
      <rasd:InstanceID>4</rasd:InstanceID>
      <rasd:ResourceType>5</rasd:ResourceType>
    </Item>

Proof that more than one controller can exist, but when you boot the fixed OVA, it still works, right? Weird. Anyways, I hope this was useful and answers any outstanding questions you may have had about why the OVA doesn't work on Virtualbox.

The Full-Proxy Data Center Architecture
Why a full-proxy architecture is important to both infrastructure and data centers.

In the early days of load balancing and application delivery there was a lot of confusion about proxy-based architectures and, in particular, the definition of a full-proxy architecture. Understanding what a full-proxy is will be increasingly important as we continue to re-architect the data center to support a more mobile, virtualized infrastructure in the quest to realize IT as a Service.

THE FULL-PROXY PLATFORM

The reason there is a distinction made between "proxy" and "full-proxy" stems from the handling of connections as they flow through the device. All proxies sit between two entities – in the Internet age almost always "client" and "server" – and mediate connections. While all full-proxies are proxies, the converse is not true. Not all proxies are full-proxies, and it is this distinction that needs to be made when making decisions that will impact the data center architecture.

A full-proxy maintains two separate session tables – one on the client-side, one on the server-side. There is effectively an "air gap" isolation layer between the two internal to the proxy, one that enables focused profiles to be applied specifically to address issues peculiar to each "side" of the proxy. Clients often experience higher latency because of lower-bandwidth connections, while the servers are generally low latency because they're connected via a high-speed LAN. The optimizations and acceleration techniques used on the client side are far different from those on the LAN side because the issues that give rise to performance and availability challenges are vastly different.

A full-proxy, with separate connection handling on either side of the "air gap", can address these challenges. A proxy, which may be a full-proxy but more often than not simply uses a buffer-and-stitch methodology to perform connection management, cannot optimally do so. A typical proxy buffers a connection, often through the TCP handshake process and potentially into the first few packets of application data, but then "stitches" a connection to a given server on the back-end using either layer 4 or layer 7 data, perhaps both. The connection is a single flow from end-to-end and must choose which characteristics of the connection to focus on – client or server – because it cannot simultaneously optimize for both.

The second advantage of a full-proxy is its ability to perform more tasks on the data being exchanged over the connection as it is flowing through the component. Because specific action must be taken to "match up" the connection as it's flowing through the full-proxy, the component can inspect, manipulate, and otherwise modify the data before sending it on its way on the server-side. This is what enables termination of SSL, enforcement of security policies, and performance-related services to be applied on a per-client, per-application basis. This capability translates to broader usage in data center architecture by enabling the implementation of an application delivery tier in which operational risk can be addressed through the enforcement of various policies. In effect, we've created a full-proxy data center architecture in which the application delivery tier as a whole serves as the "full proxy" that mediates between the clients and the applications.
THE FULL-PROXY DATA CENTER ARCHITECTURE

A full-proxy data center architecture installs a digital "air gap" between the client and applications by serving as the aggregation (and conversely disaggregation) point for services. Because all communication is funneled through virtualized applications and services at the application delivery tier, it serves as a strategic point of control at which delivery policies addressing operational risk (performance, availability, security) can be enforced.

A full-proxy data center architecture further has the advantage of isolating end-users from the volatility inherent in highly virtualized and dynamic environments such as cloud computing. It enables solutions such as those used to overcome limitations with virtualization technology, like the pod-architectural constraints encountered in VMware View deployments. Traditional access management technologies, for example, are tightly coupled to host names and IP addresses. In a highly virtualized or cloud computing environment, this constraint may spell disaster for performance, for the ability to function, or both. By implementing access management in the application delivery tier – on a full-proxy device – volatility is managed through virtualization of the resources, allowing the application delivery controller to worry about details such as IP addresses and VLAN segments, freeing the access management solution to concern itself with determining whether this user on this device from that location is allowed to access a given resource.

Basically, we're taking the concept of a full-proxy and expanding it outward to the architecture. Inserting an "application delivery tier" allows for an agile, flexible architecture more supportive of the rapid changes today's IT organizations must deal with. Such a tier also provides an effective means to combat modern attacks. Because of its ability to isolate applications, services, and even infrastructure resources, an application delivery tier improves an organization's capability to withstand the onslaught of a concerted DDoS attack. The magnitude of difference between the connection capacity of an application delivery controller and most infrastructure (and all servers) gives the entire architecture a higher resiliency in the face of overwhelming connections. This ensures better availability and, when coupled with virtual infrastructure that can scale on-demand when necessary, can also maintain performance levels required by business concerns.

A full-proxy data center architecture is an invaluable asset to IT organizations in meeting the challenges of volatility both inside and outside the data center.

Related blogs & articles:

- The Concise Guide to Proxies
- At the Intersection of Cloud and Control…
- Cloud Computing and the Truth About SLAs
- IT Services: Creating Commodities out of Complexity
- What is a Strategic Point of Control Anyway?
- The Battle of Economy of Scale versus Control and Flexibility
- F5 Friday: When Firewalls Fail…
- F5 Friday: Platform versus Product

F5 in AWS Part 4 - Orchestrating BIG-IP Application Services with Open-Source Tools
Updated for Current Versions and Documentation

- Part 1: AWS Networking Basics
- Part 2: Running BIG-IP in an EC2 Virtual Private Cloud
- Part 3: Advanced Topologies and More on Highly-Available Services
- Part 4: Orchestrating BIG-IP Application Services with Open-Source Tools
- Part 5: Cloud-init, Single-NIC, and Auto Scale Out of BIG-IP in v12

The following post references code hosted at F5's Github repository f5networks/aws-deployments. This code provides a demonstration of using open-source tools to configure and orchestrate BIG-IP. Full documentation for F5 BIG-IP cloud work can be found at Cloud Docs: F5 Public Cloud Integrations.

So far we have talked about AWS networking basics, how to run BIG-IP in a VPC, and highly-available deployment footprints. In this post, we'll move on to my favorite topic, orchestration. By this point, you probably have several VMs running in AWS. You've lost track of which configuration is set up on which VM, and you have found yourself slowly going mad as you toggle between the AWS web portal and several SSH windows. I call this "point-and-click" purgatory. Let's be blunt: why would you move to cloud without realizing the benefits of automation, of which cloud is a large enabler?

If you remember our second article, we mentioned CloudFormation templates as a great way to deploy a standardized set of resources (perhaps BIG-IP plus the additional virtualized network resources) in EC2. This is a great start, but we need to configure these resources once they have started, and we need a way to define and execute workflows which will run across a set of hosts, perhaps even hosts which are external to the AWS environment. Enter the use of open-source configuration management and workflow tools that have been popularized by the software development community.

Open-source configuration management and AWS APIs

Lately, I have been playing with Ansible, which is a Python-based, agentless workflow engine for IT automation. By agentless, I mean that you don't need to install an agent on hosts under management. Ansible, like the other tools, provides a number of libraries (or "modules") which provide the ability to manage a diverse collection of remote systems. These modules are typically implemented through the use of API calls, often over HTTP. Out of the box, Ansible comes with several modules for managing resources in AWS. While the EC2 libraries provided are useful for basic orchestration use cases, we decided it would be easier to atomically manage sets of resources using the CloudFormation module. In doing so, we were able to deploy entire CloudFormation stacks which would include items like VPCs, networking elements, BIG-IP, app servers, etc. Underneath the covers, the Ansible CloudFormation module and our own project use the boto Python library to interact with AWS service endpoints.

Ansible provides some basic modules for managing BIG-IP configuration resources. These, along with libraries for similar tools, can be found here:

- Ansible
- Puppet
- SaltStack

In the rest of this post, I'll discuss some work colleagues and I have done to automate BIG-IP deployments in AWS using Ansible. While we chose to use Ansible, we readily admit that Puppet, Chef, Salt, and whatever else you use are all appropriate choices for implementing deployment and configuration management workflows for your network. Each has its upsides and downsides, and different tools may lend themselves to different use cases for your infrastructure. Browse the web to figure out which tool is right for you.
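Whichever tool you pick, most of their AWS modules reduce to the same underlying API calls. As a point of reference, here is a minimal boto3 sketch of deploying a CloudFormation stack from Python, similar in spirit to what the Ansible CloudFormation module does under the covers. The template file name and parameter keys are hypothetical placeholders, not values from the project.

    import boto3

    cfn = boto3.client("cloudformation", region_name="us-east-1")

    # Deploy a stack from a local template. The template and its
    # parameter names are placeholders -- substitute your own.
    with open("bigip-stack.json") as f:
        template_body = f.read()

    cfn.create_stack(
        StackName="bigip-demo",
        TemplateBody=template_body,
        Parameters=[
            {"ParameterKey": "InstanceType", "ParameterValue": "m4.xlarge"},
            {"ParameterKey": "KeyName", "ParameterValue": "my-ssh-key"},
        ],
        Capabilities=["CAPABILITY_IAM"],  # needed if the template creates IAM resources
    )

    # Block until the stack finishes creating (or fails).
    cfn.get_waiter("stack_create_complete").wait(StackName="bigip-demo")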
Using Standardized BIG-IP Interfaces

Speaking of APIs, for years F5 has provided the ability to programmatically configure BIG-IP using iControlSOAP. As the audiences performing automation work have matured, so have the weapons of choice. The new hot ticket is REST (Representational State Transfer), and guess what: BIG-IP has a REST interface (you can probably figure out what it is called). Together, iControlSOAP and iControlREST give you the power to manage nearly every configuration element and feature of BIG-IP. These interfaces become extremely powerful when you combine them with your favorite open-source configuration management tool and a cloud that allows you to spin up and down compute and networking resources.

In the project described below, we have also made use of iApps via iControlREST as a way to create a standard virtual server configuration with the correct policies and profiles. The documentation in Github describes this in detail, but our approach shows how iApps provide a strongly supported way of managing network policy across engineering teams. For example, imagine that a team of software engineers has written a framework to deploy applications. You can package the network policy into iApps for various types of apps, and pass these to the teams writing the deployment framework.

Implementing a Service Catalog

To pull the above concepts together, a colleague and I put together the aws-deployments project. The goal was to build a simple service catalog which would enable a user to deploy a containerized application in EC2 with BIG-IP network services sitting in front. This is example code that is not supported by F5 Support, but it is a proof of concept that shows how you can fully automate production-like deployments in AWS. Some highlights of the project include:

- Use of iControlREST and iControlSOAP within Ansible playbooks to set up advanced topologies of BIG-IP in AWS.
- Automated deployment of a basic ASM web application firewall policy to protect a vulnerable web app (Hackazon).
- Use of iApps to manage virtual server configurations, including the WAF policy mentioned above.

Figure 1 - Generic Architecture for automating application deployments in public or private cloud

In examination of the code, you will see that we provide the opportunity to provision all the deployment models outlined in our earlier post (a single standalone VE, standalone BIG-IP VEs striped across availability zones, clusters within an availability zone, etc.). We used Ansible and the interfaces on BIG-IP to orchestrate the workflows associated with these deployment models. To perform the clustering step, we have used the iControlSOAP interface on BIG-IP. The final set of technology used is depicted in Figure 2.

Figure 2 - Technologies used in the aws-deployments project on Github

Read the Code and Test It Yourself

All the code I have mentioned is available at f5networks/aws-deployments. We encourage you to download and run the code for yourself. Setting up a development environment with the necessary dependencies is easy; we have packaged all the dependencies for use with either Vagrant or Docker as development tools. The instructions for either of these approaches can be found in the README.md or in the /docs directory. The following video shows an end-to-end usage example. (Keep in mind that the code has been updated since this video was produced.)
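To give a flavor of the iControlREST calls used throughout the project, here is a minimal sketch that creates an LTM pool using Python's requests library. It is not taken from the project code; the management address, credentials, and member addresses are placeholders, and in production you would verify TLS certificates rather than disable the check.

    import requests

    BIGIP = "https://192.0.2.10"       # placeholder management address
    AUTH = ("admin", "admin")          # placeholder credentials

    # Create an LTM pool with two members via iControl REST.
    resp = requests.post(
        f"{BIGIP}/mgmt/tm/ltm/pool",
        auth=AUTH,
        json={
            "name": "web_pool",
            "monitor": "http",
            "members": [
                {"name": "10.0.2.10:80"},
                {"name": "10.0.2.11:80"},
            ],
        },
        verify=False,                  # lab only; validate certs in production
    )
    resp.raise_for_status()
    print(resp.json()["fullPath"])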
At the end of the day, our goal for this work was to collect customer feedback. Please provide some by leaving a comment below, or by filing "pull requests" or "issues" in Github. In the next few weeks, we will be updating the project to include the Hackazon app mentioned above, to show how to cluster BIG-IP across availability zones, and to show how to deploy an ASM profile with an iApp. Have fun!

F5 in AWS Part 2 - Running BIG-IP in an EC2 Virtual Private Cloud
Updated for Current Versions and Documentation

- Part 1: AWS Networking Basics
- Part 2: Running BIG-IP in an EC2 Virtual Private Cloud
- Part 3: Advanced Topologies and More on Highly-Available Services
- Part 4: Orchestrating BIG-IP Application Services with Open-Source Tools
- Part 5: Cloud-init, Single-NIC, and Auto Scale Out of BIG-IP in v12

Previously in this series, we discussed the networking fundamentals of Virtual Private Clouds (VPC) in Amazon's Elastic Compute Cloud (EC2). Some of the topics we touched on include the impact of the removal of layer 2 access, limits on network elements like the number of interfaces and publicly routable IP addresses, and how to manage routing within your subnets. Today we'll cover licensing models and images available in Amazon, sizing requirements (including the number of interfaces assignable to BIG-IP), some basic network topologies, and how you can use Amazon CloudFormation templates to make your life easier when deploying BIG-IP.

Licensing Models

There are two ways you can run BIG-IP in AWS: at a utility rate or with a Bring Your Own License (BYOL) model.

Utility model:

- You pay Amazon both for the compute and disk requirements of the instances and for the BIG-IP software license, at an hourly rate.
- There are two forms: hourly or annual subscriptions. Using annual licenses you can save 37%. Follow the instructions on AWS to purchase an annual subscription.
- When launching hourly instances, the devices boot into a licensed state and are immediately ready for provisioning.

BYOL model:

- You pay Amazon only for the compute and disk footprint, not for the F5 software license.
- Version Plus licenses (like "V12" or "V13") can be reused in Amazon if you have them from previous deployments.
- You must license the device after it launches, either manually or in an orchestrated manner.
- Licenses are available individually, or in volume as license pools.

All in all, the utility licensing model offers significant flexibility to scale up your infrastructure to meet demand, while reducing the amount you pay for base traffic throughput. It may be advantageous to use this model if you experience large traffic swings. In contrast, you may be able to achieve this flexibility at a lower cost using BYOL license pools. With volume (pool) licensing, licenses can be reused across devices as you ramp these instances up and down.

In addition to choosing between utility and BYOL license models, you'll also need to choose the licensed features and the throughput level. When taking a BYOL approach, the license (which you may already have) will have a maximum throughput level and will be associated with a Good/Better/Best (GBB) package. For more information on GBB, see Simplified Licensing: Compare our Good, Better, Best product bundles. When deciding on the throughput level, you may license up to 1Gbit/s using hourly AMIs. It is possible to import a 3Gbit/s VE license in AWS, but note that AWS caps the throughput on an instance at 2Gbit/s, so you will be limited by Amazon EC2 restrictions rather than by F5. Driving 2Gbit/s through your virtual instance in AWS will require careful implementation of your configuration in BIG-IP. Also, note that the throughput restrictions on each image include both data plane and management traffic. You can read more about throughput restrictions for virtual instances here: K14810: Overview of BIG-IP VE license and throughput limits.

Once you have chosen a license model, GBB package, and throughput, select the corresponding AMI in the Amazon Marketplace.
Disk and Compute Recommendations

An astute individual will wonder why there exist separate images for each GBB package. In an effort to maintain the smallest footprint possible, each AMI includes just enough disk volume for the licensed features. Each GBB package has different disk requirements, which are built into the AMI. For evidence of this, use the AWS CLI to see details on a specific image:

    aws ec2 describe-images --filter "Name=name,Values=*F5 Networks BYOL BIGIP-13.1.0.2.0.0.6*"

Truncated output:

    {
        "Images": [
            {
                "ProductCodes": [
                    {
                        "ProductCodeId": "91wwm31qya4s3rkc5bv4jq9b3",
                        "ProductCodeType": "marketplace"
                    }
                ],
                "Description": "F5 Networks BYOL BIGIP-13.1.0.2.0.0.6 - Better - Jan 16 2018 10_13_53AM",
                "VirtualizationType": "hvm",
                "Hypervisor": "xen",
                "ImageOwnerAlias": "aws-marketplace",
                "EnaSupport": true,
                "SriovNetSupport": "simple",
                "ImageId": "ami-3bbd0243",
                "State": "available",
                "BlockDeviceMappings": [
                    {
                        "DeviceName": "/dev/xvda",
                        "Ebs": {
                            "Encrypted": false,
                            "DeleteOnTermination": true,
                            "VolumeType": "gp2",
                            "VolumeSize": 82,
                            "SnapshotId": "snap-0c9beaa9345422784"
                        }
                    }
                ],
                "Architecture": "x86_64",
                "ImageLocation": "aws-marketplace/F5 Networks BYOL BIGIP-13.1.0.2.0.0.6 - Better - Jan 16 2018 10_13_53AM-98eb3c1e-ab48-41ff-9c94-d71a5d08e49f-ami-0c93b176.4",
                "RootDeviceType": "ebs",
                "OwnerId": "679593333241",
                "RootDeviceName": "/dev/xvda",
                "CreationDate": "2018-01-24T19:58:31.000Z",
                "Public": true,
                "ImageType": "machine",
                "Name": "F5 Networks BYOL BIGIP-13.1.0.2.0.0.6 - Better - Jan 16 2018 10_13_53AM-98eb3c1e-ab48-41ff-9c94-d71a5d08e49f-ami-0c93b176.4"
            },

Comparing such output across packages, you can see that the Good BYOL image configures a single 31 GB Elastic Block Store (EBS) volume, whereas the Best image comes with two EBS volumes totaling 124 GB of space (the Better image shown above uses a single 82 GB volume).

On the discussion of storage, we would like to take a moment to focus on analytics. While the analytics module is licensed in the "Good" package, you may need additional disk space in order to provision this module. See this link (https://support.f5.com/kb/en-us/products/big-ip_ltm/manuals/product/bigip-ve-setup-msft-hyper-v-11-5-0/3.html) for increasing the disk space on specific volumes. Another option for working around this issue is to use a "Better" AMI. This will ensure you have enough space to provision the analytics module.

In addition to storage, running BIG-IP as a compute node in EC2 also requires a minimum number of interfaces, vCPUs, and RAM. AskF5's Virtual Edition and Supported Hypervisors Matrix provides a list of recommended instance types, although you can choose alternatives as long as they support your architecture's configurations. In short, as you choose higher-performance instance types in EC2, you get more RAM and more network interfaces. This will allow you to create more advanced topologies and services.

Basic Network Topologies

So with a limited number of interfaces, how do you build a successful multi-tier application architecture? Many customers might start with a directly connected architecture like that shown below. In this architecture, 10.0.0.100 is the virtual server. The address matches an EC2 private IP on the external interface, either the first assigned to that interface (the primary private IP) or a secondary private IP. Not shown is the Elastic IP address (EIP) which maps to this private IP. We recommend using the primary private IP as the external self-IP on BIG-IP. An Elastic IP can then be attached to this primary private IP to allow outbound calls.
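In code, carving out one of those per-virtual-server addresses looks roughly like the boto3 sketch below, which adds a secondary private IP to the external interface and maps a newly allocated EIP to it. This is not from the original article, and the ENI ID and addresses are hypothetical placeholders.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    EXTERNAL_ENI = "eni-0abc12345"     # placeholder: BIG-IP external interface

    # Add a secondary private IP to host a virtual server address.
    ec2.assign_private_ip_addresses(
        NetworkInterfaceId=EXTERNAL_ENI,
        PrivateIpAddresses=["10.0.0.100"],
    )

    # Allocate an EIP and map it to that secondary private IP.
    eip = ec2.allocate_address(Domain="vpc")
    ec2.associate_address(
        AllocationId=eip["AllocationId"],
        NetworkInterfaceId=EXTERNAL_ENI,
        PrivateIpAddress="10.0.0.100",
    )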
The 1:1 NAT performed by Amazon between the public (elastic) IP and the private IP is invisible to BIG-IP. Keep in mind that a publicly routable self-IP is required to use the BIG-IP failover mechanism, which makes HTTP calls to AWS. We'll discuss failover in a few moments. Secondary private IPs and corresponding EIPs on the external interface can then be used for each virtual server. Given this discussion about interfaces and EIPs, be sure to consider that the instance type you choose in Amazon will dictate how many virtual servers you can run on BIG-IP. For example, given an m3.xlarge (which allows three interfaces) and the default account limit of 5 EIPs, you will be limited to 3 virtual servers. In this case, one interface will be used to attach each of the management, external, and internal subnets. On the external interface, you would attach 3 secondary private IPs, each with an EIP. The other two EIPs would be used for the management port and the external self-IP. To get more interfaces, move up in instance size (e.g., to m3.2xlarge). To get more EIPs, request them from Amazon. If you do use an EIP for the management port, be sure to ACL it appropriately.

The benefit of the directly connected architecture shown above, where BIG-IP can serve as the default gateway, is that each node in the tier can communicate with other nodes in the tiers and leverage virtual listeners on BIG-IP without having to be SNATed. This is sometimes preferred, as it makes it simpler to implement E-W security and analytics. The problem, as shown below, is that as application or tenant density increases, so does the number of required interfaces.

Alternatively, routed architectures (shown below), where pool members live on remote networks, are more easily migrated and better suited to situations with limited network interfaces. In this case, the route table for all pool members must contain a default route that leads back to BIG-IP. By doing so, you can:

- leverage BIG-IP for the outbound use case (securing outbound traffic)
- return internet traffic back through BIG-IP and avoid SNATing your internet-facing VIPs (note: this requires disabling the SRC/DST check on your BIG-IP instances/interfaces)

An alternate, and perhaps more realistic, view of the above looks like the next diagram. Finally, it may make sense to attach additional interfaces for each application to increase the application density on BIG-IP. These routed architectures allow you to reduce the number of interfaces used to connect internal networks, which then enables you to leverage the remaining interfaces to increase application density. Two potential drawbacks include the requirement for SNAT (as BIG-IP is no longer inline to intercept response traffic) and the addition of an extra network hop. The upstream or downstream router will generally intercept the return traffic because the client is also on a directly connected or closer network.

Elastic IPs = Floating IPs and API-Based Failover

After you have figured out how to incorporate BIG-IP into your network, the final step before deploying applications and network services will be ensuring you can maintain high availability. One of the challenges in adapting BIG-IP for public clouds was that the availability model of BIG-IP ("Device Service Groups", or DSCs) was tightly coupled to sharing L2/L3 floating addresses in the same L2 segment. An active device made an L2 broadcast (GARP) to take over "Active" ownership of IP addresses and other network listeners. In accordance with the removal of L2, BIG-IP has adapted and replaced the GARP failover method with API calls to Amazon. These API calls toggle ownership of Amazon secondary private IP addresses between devices. Any EIPs which map to these secondary IP addresses will then point to the new active device. Note here that floating IPs in BIG-IP speak are now equivalent to secondary private IPs in the EC2 world.
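The following boto3 sketch shows the essence of that ownership toggle. To be clear, this is an illustration of the AWS API involved, not BIG-IP's actual failover implementation; the ENI ID and floating IP are hypothetical placeholders.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    FLOATING_IP = "10.0.0.100"         # placeholder floating (secondary) IP
    STANDBY_ENI = "eni-0standby999"    # placeholder: new active device's ENI

    # Claim the floating IP on the newly active device. AllowReassignment
    # lets the call take the address away from the failed peer's interface;
    # any EIP associated with this private IP follows it automatically.
    ec2.assign_private_ip_addresses(
        NetworkInterfaceId=STANDBY_ENI,
        PrivateIpAddresses=[FLOATING_IP],
        AllowReassignment=True,
    )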
One issue to be aware of with the API-based failover mechanism is the increase in failover time to roughly 10 seconds per EIP. This is the time it takes for changes to propagate in AWS's network. While this downtime is still significantly less than a DNS timeout, it is troublesome, as BIG-IP's Device Service Group (DSC) feature was specifically introduced to provide sub-second failover. Newer applications built for cloud are typically designed to handle these changes in availability behavior, but this makes it more challenging to shift traditional workloads to layer-3-only environments like AWS.

Historically, the DSC group feature has also allowed the use of BIG-IP as a highly available default gateway. This was accomplished by directing the default route to the internal floating self-IP on a cluster, or by directly connecting application servers. In Amazon, the default route may point to an internet gateway or a device interface, but not to a statically named IP address. We'll leave the fix for this problem for the next article, where we will also talk about other deployment models of BIG-IP in AWS, including those which span availability zones.

CloudFormation Templates

To close this article, we've decided to provide examples of how BIG-IP can be deployed using CloudFormation Templates (CFTs) in AWS. CloudFormation is an AWS service that enables you to define a set of EC2 resources that can be automatically and deterministically deployed in your account. These application "stacks" are defined in JSON, making them easier to read and share. F5 provides several CFTs with options for licensing model, high availability, and auto-scaling (for LTM and WAF modules). Please review the Github BIG-IP Version Matrix for AWS CFT Templates document within our f5-aws-cloudformation repository to determine your deployment requirements. Enjoy!