The Full-Proxy Data Center Architecture
Why a full-proxy architecture is important to both infrastructure and data centers.

In the early days of load balancing and application delivery there was a lot of confusion about proxy-based architectures, and in particular the definition of a full-proxy architecture. Understanding what a full-proxy is will be increasingly important as we continue to re-architect the data center to support a more mobile, virtualized infrastructure in the quest to realize IT as a Service.

THE FULL-PROXY PLATFORM

The reason a distinction is made between "proxy" and "full-proxy" stems from the handling of connections as they flow through the device. All proxies sit between two entities – in the Internet age almost always "client" and "server" – and mediate connections. While all full-proxies are proxies, the converse is not true. Not all proxies are full-proxies, and it is this distinction that needs to be made when making decisions that will impact the data center architecture.

A full-proxy maintains two separate session tables – one on the client side, one on the server side. There is effectively an "air gap" isolation layer between the two internal to the proxy, one that enables focused profiles to be applied specifically to address issues peculiar to each "side" of the proxy. Clients often experience higher latency because of lower-bandwidth connections, while the servers are generally low latency because they're connected via a high-speed LAN. The optimizations and acceleration techniques used on the client side are far different from those on the LAN side, because the issues that give rise to performance and availability challenges are vastly different.

A full-proxy, with separate connection handling on either side of the "air gap", can address these challenges. A proxy, which may be a full-proxy but more often than not simply uses a buffer-and-stitch methodology to perform connection management, cannot optimally do so. A typical proxy buffers a connection, often through the TCP handshake process and potentially into the first few packets of application data, but then "stitches" a connection to a given server on the back end using either layer 4 or layer 7 data, perhaps both. The connection is a single flow from end to end, and the proxy must choose which characteristics of the connection to focus on – client or server – because it cannot simultaneously optimize for both.

The second advantage of a full-proxy is its ability to perform more tasks on the data being exchanged over the connection as it flows through the component. Because specific action must be taken to "match up" the connection as it's flowing through the full-proxy, the component can inspect, manipulate, and otherwise modify the data before sending it on its way on the server side. This is what enables termination of SSL, enforcement of security policies, and performance-related services to be applied on a per-client, per-application basis. This capability translates to broader usage in data center architecture by enabling the implementation of an application delivery tier in which operational risk can be addressed through the enforcement of various policies. In effect, we've created a full-proxy data center architecture in which the application delivery tier as a whole serves as the "full proxy" that mediates between the clients and the applications.
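On a BIG-IP, the "different optimizations per side" idea shows up as separate profiles applied in client-side and server-side contexts on the same virtual server. The stanza below is only a minimal sketch of what that looks like in the configuration; the virtual server name, address, and pool are placeholders:

ltm virtual vs_app_example {
    destination 192.0.2.10:443
    pool pool_app_example
    profiles {
        http { }
        tcp-wan-optimized { context clientside }   # WAN-tuned TCP stack facing clients
        tcp-lan-optimized { context serverside }   # LAN-tuned TCP stack facing the servers
    }
}

The point is simply that each side of the "air gap" gets its own TCP behavior, independent of the other.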
THE FULL-PROXY DATA CENTER ARCHITECTURE

A full-proxy data center architecture installs a digital "air gap" between the client and applications by serving as the aggregation (and conversely disaggregation) point for services. Because all communication is funneled through virtualized applications and services at the application delivery tier, it serves as a strategic point of control at which delivery policies addressing operational risk (performance, availability, security) can be enforced.

A full-proxy data center architecture further has the advantage of isolating end users from the volatility inherent in highly virtualized and dynamic environments such as cloud computing. It enables solutions such as those used to overcome limitations with virtualization technology, such as those encountered with pod-architectural constraints in VMware View deployments. Traditional access management technologies, for example, are tightly coupled to host names and IP addresses. In a highly virtualized or cloud computing environment, this constraint may spell disaster for performance, for the ability to function at all, or both. By implementing access management in the application delivery tier – on a full-proxy device – volatility is managed through virtualization of the resources, allowing the application delivery controller to worry about details such as IP addresses and VLAN segments, and freeing the access management solution to concern itself with determining whether this user on this device from that location is allowed to access a given resource.

Basically, we're taking the concept of a full-proxy and expanding it outward to the architecture. Inserting an "application delivery tier" allows for an agile, flexible architecture more supportive of the rapid changes today's IT organizations must deal with. Such a tier also provides an effective means to combat modern attacks. Because of its ability to isolate applications, services, and even infrastructure resources, an application delivery tier improves an organization's capability to withstand the onslaught of a concerted DDoS attack. The magnitude of difference between the connection capacity of an application delivery controller and most infrastructure (and all servers) gives the entire architecture a higher resiliency in the face of overwhelming connections. This ensures better availability and, when coupled with virtual infrastructure that can scale on demand when necessary, can also maintain the performance levels required by the business.

A full-proxy data center architecture is an invaluable asset to IT organizations in meeting the challenges of volatility both inside and outside the data center.

Related blogs & articles:
The Concise Guide to Proxies
At the Intersection of Cloud and Control…
Cloud Computing and the Truth About SLAs
IT Services: Creating Commodities out of Complexity
What is a Strategic Point of Control Anyway?
The Battle of Economy of Scale versus Control and Flexibility
F5 Friday: When Firewalls Fail…
F5 Friday: Platform versus Product
Routing traffic by URI using iRule

The DevCentral team recently developed a solution for routing traffic to a specific server based on the URI path. A BIG-IP Local Traffic Manager (LTM) sits between the client and two servers to load balance the traffic for those servers.

The Challenge: When a user conducts a search on a website and is directed to one of the servers, the search information is cached on that server. If another user searches for that same data but the LTM load balances to the other server, the cached data from the first server does him no good. So, to solve this caching problem, the customer wants traffic that contains a specific search parameter to be routed to the second server (as long as that server is available). Specifically in this case, when a user loads a page and the URI starts with /path/* that traffic should be sent to Server_2. The picture below shows a representation of what the customer wants to accomplish:

The Solution: So, the question becomes: how does the customer ensure all /path/* traffic is sent to a specific server? Well, you guessed it...the ubiquitous and loveable iRule! Everyone had a pretty good idea an iRule would be used to solve this problem, but what does that iRule look like? Well, here it is!!

when HTTP_REQUEST {
    if { [string tolower [HTTP::path]] starts_with "/path/" } {
        persist none
        set pm [lsearch -inline [active_members -list <pool name>] x.x.x.x*]
        catch { pool <pool name> member [lindex $pm 0] [lindex $pm 1] }
    }
}

Let's talk through the specifics of this solution...

For efficiency, start by checking the least likely condition. If an HTTP_REQUEST comes in, immediately check for the "/path/" string. Keep in mind the "string tolower" command on the HTTP::path before the comparison to "/path/" to ensure the cases match correctly. Also, notice the use of HTTP::path instead of the full URI for the comparison...there's no need to use the full URI for this check.

Next, turn off persistence just in case another profile or iRule is forcing the connection to persist to a place other than the beloved Server_2. Then, search all active members in the pool for the Server_2 IP address and port. The "lsearch -inline" ensures the matching value is returned instead of just the index. The "active_members -list" is used to ensure we get a list of IP addresses and ports, not just the number of active members. Note the asterisk behind the IP address in the search command...this is needed to ensure the port number is included in the search. Based on the searches, the resulting values are set in a variable called "pm".

Next, use the catch command to stop any TCL errors from causing problems. Because we are getting the active members list, it's possible that the pool member we are trying to match is NOT active and therefore the pool member listed in the pool command may not be there...this is what might cause that TCL error. Then send the traffic to the correct pool member, which requires the IP and port. The astute observer, especially one familiar with the output of "active_members -list", will notice that each pool member returned in the list is already pre-formatted in "ip port" format. However, just using the pm variable in the pool command returns a TCL error, likely because the pm variable is a single object instead of two unique objects. So, the lindex is used to pull out each element individually to avoid the TCL error.

Testing: Our team tested the iRule by adding it to a development site and then accessing several pages on that site.
We made sure the pages included "/path/" in the URIs! We used tcpdump on the BIG-IP to capture the transactions (tcpdump -ni 0.0 -w/var/tmp/capture1.pcap tcp port 80 -s0) and then downloaded them locally and used Wireshark for analysis. Using these tools, we determined that all the "/path/" traffic routed to Server_2 and all other traffic was still balanced between Server_1 and Server_2. So, the iRule worked correctly and it was ready for prime time!

Special thanks to Jason Rahm and Joe Pruitt for their outstanding technical expertise and support in solving this challenge! For more information on this iRule or any other LTM issues, you can submit questions/comments in the "Comments" section below, or you can contact the DevCentral team at https://devcentral.f5.com/s/community/contact-us.
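As a side note, a quick way to confirm which pool members are currently active before (or during) a test like this is to check the pool from tmsh; the placeholder pool name below matches the one used in the iRule:

# Show availability status and connection counts for each member of the pool
tmsh show ltm pool <pool name> members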
F5 in AWS Part 1 - AWS Networking Basics

Updated for Current Versions and Documentation
Part 1: AWS Networking Basics
Part 2: Running BIG-IP in an EC2 Virtual Private Cloud
Part 3: Advanced Topologies and More on Highly-Available Services
Part 4: Orchestrating BIG-IP Application Services with Open-Source Tools
Part 5: Cloud-init, Single-NIC, and Auto Scale Out of BIG-IP in v12

If you work in IT, and you haven't been living under a rock, then you have likely heard of Amazon Web Services (AWS). There has been a substantial increase in the maturity and stability of the AWS Elastic Compute Cloud (EC2), but you may be wondering: can I continue to leverage F5 services in AWS? In this series of blog posts, we will discuss the how and why of running F5 BIG-IP in EC2. In this specific article, we'll start with the basics of AWS EC2 and the Virtual Private Cloud (VPC). Later in the series, we will discuss some of the considerations associated with running BIG-IP as a compute instance in this environment, outline the best deployment models for your application in EC2, and show how these deployment models can be automated using open-source tools.

Note: AWS uses the terms "public" and "private" to refer to what F5 Networks has typically referred to as "external" and "internal" respectively. We will use these terms interchangeably.

First, what is AWS? If you have read the story, you will know that the EC2 project began with an internal interest at Amazon to move away from messy, multi-tenant networks using VLANs for segregation. Instead, network engineers at Amazon wanted to build an entirely IP-based architecture. This vision morphed into the universe of application services available today. Of course, building multi-tenant, purely L3 networks at massive scale had implications for both security and redundancy (we'll get to this later). Today, EC2 enables users to run applications and services on top of virtualized network, storage, and compute infrastructure, where hosts are deployed in the form of Amazon Machine Images (AMIs). These AMIs can either be private to the user or launched from the public AWS marketplace. Hosts can be added to elastic load balancing (ELB) groups and associated with publicly accessible IPs to implement a simple horizontal model for availability.

AWS became truly relevant for the enterprise with the introduction of the Virtual Private Cloud service. VPCs enable users to build virtual private networks at the IP layer. These private networks can be connected to on-premise configurations by way of a VPN Gateway, or connected to the internet via an Internet Gateway. When deploying hosts within a VPC, the user has a significant amount of control over how each host is attached to the network. For example, a host can be attached to multiple networks and given several public or private IPs on one or multiple interfaces. Further, users can control many of the security aspects they are used to configuring in an on-premise environment (albeit in a slightly different way), including network ACLs, routing, simple firewalling, DHCP options, etc. Let's talk about these and other important EC2 aspects and try to understand how they affect our application deployment strategy.

L2 Restrictions

As we mentioned above, one of the design goals of AWS was to remove layer 2 networking. This is a worthy accomplishment, but we lose access to certain useful protocols, including ARP (and gratuitous ARP), broadcast and multicast groups, and 802.1Q tagging.
We can no longer use VLANs for some availability models, for quality of service management, or for tenant isolation.

Network Interfaces

For larger topologies, one of the biggest impacts of losing 802.1Q protocol support is the number of subnets we can attach to a node in the network. Because in AWS each interface is attached as a layer 3 endpoint, we must add an interface for each subnet. This contrasts with traditional networks, where you can add VLANs to your trunk for each subnet via tagging. Even though we're in a virtual world, the number of virtual network interfaces (or Elastic Network Interfaces (ENIs) in AWS terminology) is also limited according to the EC2 instance size. Together, the limits on the number of interfaces and the one-to-one mapping between interface and subnet effectively limit the number of directly connected networks we can attach to a device (like BIG-IP, for example).

IP Addressing

AWS offers two kinds of globally routable IP address: "Public IP Addresses" and "Elastic IP Addresses". In the table below, we outline some of the differences between these two types of IP addresses. You can probably figure out for yourself why we will want to use Elastic IPs with BIG-IP. Like interfaces, AWS limits the number of IPs in several ways, including the number of IPs that can be attached to an interface and the number of Elastic IPs per AWS account.

Table 1: Differences between Public and Elastic IP Addresses

                                                  Public IP    Elastic IP
Released on device termination/disassociation    YES          NO
Assignable to secondary interfaces               NO           YES
Can be associated after launch                   NO           YES

Amazon provides more information on public and elastic IP addresses here: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-instance-addressing.html#concepts-public-addresses

Each interface on an EC2 instance is given a private IP address. This IP address is routable locally through your subnet and assigned from the address range associated with the subnet to which your interface is attached. Multiple secondary private IP addresses can be attached to an interface, which is a useful technique for creating more complex topologies. The number of interfaces and private IPs per interface within an Amazon VPC are listed here: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#AvailableIpPerENI

NAT Instances, Subnets and Routing

When creating a VPC using the wizard available in the AWS VPC web portal, several default configurations are possible. One of these configurations is "VPC with Public and Private subnets". In this configuration, what if the instances on our private subnet wish to access the outside world? Because we cannot attach public or elastic IP addresses to instances within the private subnet, we must use NAT provided by AWS. Like BIG-IP and other network devices in EC2, the NAT instance will live as a compute node within your VPC. This is a good way to allow outbound traffic from your internal servers, but to prevent those servers from receiving inbound traffic. When you create subnets manually or through the VPC wizard, you'll note that each subnet has an associated route table. These route tables may be updated to control traffic flow between instances and subnets in your VPC.
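For readers who prefer the command line to the VPC wizard, a rough sketch of the same public/private layout with the AWS CLI looks like the following. The CIDR blocks are examples only, and the vpc-, igw-, rtb-, and subnet- IDs are illustrative placeholders for the IDs the earlier calls return:

# Create the VPC and two subnets (one public, one private)
aws ec2 create-vpc --cidr-block 10.0.0.0/16
aws ec2 create-subnet --vpc-id vpc-1111 --cidr-block 10.0.1.0/24   # public
aws ec2 create-subnet --vpc-id vpc-1111 --cidr-block 10.0.2.0/24   # private

# Attach an Internet Gateway and give the public subnet a default route through it
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-2222 --vpc-id vpc-1111
aws ec2 create-route-table --vpc-id vpc-1111
aws ec2 create-route --route-table-id rtb-3333 --destination-cidr-block 0.0.0.0/0 --gateway-id igw-2222
aws ec2 associate-route-table --route-table-id rtb-3333 --subnet-id subnet-4444

The private subnet keeps the VPC's main route table (no direct route to the Internet Gateway), which is exactly why a NAT instance is needed for its outbound traffic.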
Regions and Availability Zones

We know quite a few people who have been confused by the concept of availability zones in EC2. To put it plainly, an availability zone is a physically isolated data center within a region, and a region may contain multiple availability zones. Availability zones run on different networking and storage infrastructure, and depend on separate power supplies and internet connections. Striping your application deployments across availability zones is a great way to provide redundancy and, perhaps, a hot standby, but please note that these are not the same thing: Amazon does not mirror any data between zones on behalf of the customer. While VPCs can span availability zones, subnets may not.

To close this blog post, we are fortunate enough to get a video walkthrough from Vladimir Bojkovic, Solution Architect at F5 Networks. He shows how to create a VPC with internal and external subnets as a practical demonstration of the concepts we discussed above.
Infrastructure Architecture: Whitelisting with JSON and API Keys

Application delivery infrastructure can be a valuable partner in architecting solutions.

AJAX and JSON have changed the way in which we architect applications, especially with respect to their ascendancy to rule the realm of integration, i.e. the API. Policies are generally focused on the URI, which has effectively become the exposed interface to any given application function. It's REST-ful, it's service-oriented, and it works well.

Because we've taken to leveraging the URI as a basic building block, as the entry point into an application, it affords the opportunity to optimize architectures and make more efficient use of the compute power available for processing. This is an increasingly important point, as capacity has become a focal point around which cost and efficiency are measured. By offloading functions to other systems when possible, we are able to increase the useful processing capacity of any given application instance and ensure a higher ratio of valuable processing to resources. The ability of application delivery infrastructure to intercept, inspect, and manipulate the exchange of data between client and server should not be underestimated. A full-proxy based infrastructure component can provide valuable services to the application architect that enhance the performance and reliability of applications while abstracting functionality in a way that alleviates the need to modify applications to support new initiatives.

AN EXAMPLE

Consider, for example, a business requirement specifying that only certain authorized partners (in the integration sense) are allowed to retrieve certain dynamic content via an exposed application API. There are myriad ways in which such a requirement could be implemented, including requiring authentication and subsequent tokens to authorize access – likely the most common means of providing such access management in conjunction with an API. Most of these options require several steps, however, and direct interaction with the application to examine credentials and determine authorization to requested resources. This consumes valuable compute that could otherwise be used to serve requests.

An alternative approach would be to provide authorized consumers with a more standards-based method of access that includes, in the request, the very means by which authorization can be determined. Taking a lesson from the credit card industry, for example, an algorithm can be used to determine the validity of a particular customer ID or authorization token. An API key, if you will, that is not stored in a database (and thus requires no lookup) but rather is algorithmic and therefore able to be verified as valid without needing a specific lookup at run time. Assuming such a token or API key were embedded in the URI, the application delivery service can then extract the key, verify its authenticity using an algorithm, and subsequently allow or deny access based on the result.

This architecture is based on the premise that the application delivery service is capable of responding with the appropriate JSON in the event that the API key is determined to be invalid. Such a service must therefore be network-side scripting capable. Assuming such a platform exists, one can easily implement this architecture and enjoy the improved capacity and resulting performance boost from the offload of authorization and access management functions to the infrastructure.
1. A request is received by the application delivery service.
2. The application delivery service extracts the API key from the URI and determines validity.
3. If the API key is not legitimate, a JSON-encoded response is returned.
4. If the API key is valid, the request is passed on to the appropriate web/application server for processing.

(A small iRule sketch of this flow appears at the end of this article.)

Such an approach can also be used to enable or disable functionality within an application, including live streams. Assume a site that serves up streaming content, but only to authorized (registered) users. When requests for that content arrive, the application delivery service can dynamically determine, using an embedded key or some portion of the URI, whether to serve up the content or not. If it deems the request invalid, it can return a JSON response that effectively "turns off" the streaming content, thereby eliminating the ability of non-registered (or non-paying) customers to access live content. Such an approach could also be useful in the event of a service failure; if content is not available, the application delivery service can easily turn off and/or respond to the request, providing feedback to the user that is valuable in reducing their frustration with AJAX-enabled sites that too often simply "stop working" without any kind of feedback or message to the end user. The application delivery service could, of course, perform other actions based on the validity of the request, such as directing the request to be fulfilled by a service generating older or non-dynamic streaming content, using its ability to perform application-level routing. The possibilities are quite extensive, and implementation depends entirely on the goals and requirements to be met. Such features become more appealing when they are, through their capabilities, able to intelligently make use of resources in various locations. Cloud-hosted services may be more or less desirable for use in an application, and thus leveraging application delivery services to either enable or reduce the traffic sent to such services may be financially and operationally beneficial.

ARCHITECTURE is KEY

The core principle to remember here is that infrastructure architecture plays (or can and should play) a vital role in designing and deploying applications today. With the increasing interest in and use of cloud computing and APIs, it is rapidly becoming necessary to leverage resources and services external to the application as a means to rapidly deploy new functionality and support for new features. The abstraction offered by application delivery services provides an effective, cross-site and cross-application means of enabling what were once application-only services within the infrastructure. This abstraction and service-oriented approach reduces the burden on the application as well as its developers. The application delivery service is almost always the first service in the oft-times lengthy chain of services required to respond to a client's request. Leveraging its capabilities to inspect and manipulate as well as route and respond to those requests allows architects to formulate new strategies and ways to provide their own services, as well as leveraging existing and integrated resources for maximum efficiency, with minimal effort.
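As a rough illustration of the four-step flow above, here is a minimal iRule sketch. The query-string parameter name ("apikey"), the validation check, and the JSON error body are all placeholders; a real deployment would substitute its own algorithmic key check:

when HTTP_REQUEST {
    # Pull a hypothetical "apikey" parameter from the query string
    set api_key [URI::query [HTTP::uri] "apikey"]

    # Placeholder validation -- substitute your own algorithmic check here
    if { [string length $api_key] != 16 } {
        # Invalid key: answer directly with JSON, never touching the servers
        HTTP::respond 403 content "{\"error\":\"invalid API key\"}" "Content-Type" "application/json"
        return
    }
    # Valid key: the request continues to the default pool for processing
}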
Related blogs & articles:
HTML5 Going Like Gangbusters But Will Anyone Notice?
Web 2.0 Killed the Middleware Star
The Inevitable Eventual Consistency of Cloud Computing
Let's Face It: PaaS is Just SOA for Platforms Without the Baggage
Cloud-Tiered Architectural Models are Bad Except When They Aren't
The Database Tier is Not Elastic
The New Distribution of The 3-Tiered Architecture Changes Everything
Sessions, Sessions Everywhere
Scheduling BIG-IP Configuration Backups via the GUI with an iApp

Beginning with BIG-IP version 11, the idea of templates has not only changed in amazing and powerful ways, it has been extended to be far more than just templates. The replacement for templates is called iApp™. But to call the iApp™ just a template would be woefully inaccurate and narrow. It does templates well, and takes the concept further by allowing you to re-enter a templated application and make changes. Previously, deploying an application via a template was sort of like the Ron Popeil rotisserie: "Set it, and forget it!" Once it was executed, the template process was over; it was up to you to track and potentially clean up all those objects. Now, the application service you create based on an iApp™ template effectively "owns" all the objects it created, so any change to the deployment adds/changes/deletes objects as necessary. The other exciting change from the template perspective is the idea of strictness. Once an application service is configured, any object created that is owned by that service cannot be changed outside of the service itself. This means that if you want to add a pool member, it must be done within the application service, not within the pool. You can turn this off, but what a powerful protection of your services!

Update for v11.2 - https://devcentral.f5.com/s/Tutorials/TechTips/tabid/63/articleType/ArticleView/articleId/1090565/Archiving-BIG-IP-Configurations-with-an-iApp-in-v112.aspx

The Problem

I received a request from one of our MVPs that he'd really like to be able to allow his users to schedule configuration backups without dropping to the command line. Knowing that the iApp™ feature was releasing soon with version 11, I started to see how I might be able to coax a command-line configuration from the GUI. In training, I was told that "anything you can do in tmsh, you can do with an iApp™." This is excellent, and the basis for why I think they are going to be incredibly popular not only for controlling and managing applications, but also for extending CLI functions to the GUI. Anyway, in order to schedule a configuration backup, I need:

A backup script
A cron job to call said script

That's really all there is to it.

The Solution

Thankfully, the background work is already done courtesy of a config backup codeshare entry by community user Colin Stubbs in the Advanced Design & Config Wiki. I did have to update the following bigpipe lines from the script:

bigpipe export oneline "${SCF_FILE}"  ->  tmsh save /sys config one-line file "${SCF_FILE}"
bigpipe export "${SCF_FILE}"  ->  tmsh save /sys config file "${SCF_FILE}"
bigpipe config save "${UCS_FILE}" passphrase "${UCS_PASSPHRASE}"  ->  tmsh save /sys ucs "${UCS_FILE}" passphrase "${UCS_PASSPHRASE}"
bigpipe config save "${UCS_FILE}"  ->  tmsh save /sys ucs "${UCS_FILE}"

Also, I created (according to the script comments from the codeshare entry) a /var/local/bin directory to place the script and a /var/local/backups directory for the script to dump the backup files in. These are optional and can be changed as necessary in your deployment; you'll just need to update the script to reflect your file system preferences.
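If you follow the same layout, the directory preparation is just a couple of shell commands on the BIG-IP. This sketch assumes the codeshare script has been saved locally as f5backup.sh (the same name the cron entries below call); adjust paths to taste:

# Create the script and backup directories, stage the script, and make it executable
mkdir -p /var/local/bin /var/local/backups
cp f5backup.sh /var/local/bin/f5backup.sh
chmod 755 /var/local/bin/f5backup.sh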
Now that I have everything I need to support a backup, I can move on to the iApp™ template configuration.

iApp™ Components

A template consists of three parts: implementation, presentation, and help. You can create an empty template, or start with just the presentation or help if you like. The implementation is a tmsh script, based on the Tcl language so loved by all of us iRulers. Please reference the tmsh wiki for the available tmsh extensions to the Tcl language. The presentation is written with the Application Presentation Language, or APL, which is new and custom-built for templates. It is defined on the APL page in the iApp™ wiki. The help is written in HTML, and is used to guide users in the use of the template. I'll focus on the presentation first, and then the implementation; I'll forego the help section in this article.

Presentation

The reason I'm starting with the presentation section of the template is that the implementation section's Tcl variables reflect the presentation's naming conventions. I want to accomplish a few things in the template presentation: ask users for the frequency of backups (daily, weekly, monthly); if weekly, ask for the day of the week; if monthly, ask for the day of the month and provide a warning about days 29-31; and for all frequencies, ask for the hour and minute the backup should occur. The APL code for this looks like this:

section time_select {
    choice day_select display "large" { "Daily", "Weekly", "Monthly" }
    optional ( day_select == "Weekly" ) {
        choice dow_select display "medium" { "Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday" }
    }
    optional ( day_select == "Monthly" ) {
        message dom_warning "The day of the month should be the 1st-28th. Selecting the 29th-31st will result in missed backups on some months."
        choice dom_select display "medium" tcl {
            for { set x 1 } { $x < 32 } { incr x } { append dom "$x\n" }
            return $dom
        }
    }
    choice hr_select display "medium" tcl {
        for { set x 0 } { $x < 24 } { incr x } { append hrs "$x\n" }
        return $hrs
    }
    choice min_select display "medium" tcl {
        for { set x 0 } { $x < 60 } { incr x } { append mins "$x\n" }
        return $mins
    }
}
text {
    time_select "Backup Schedule"
    time_select.day_select "Choose the frequency the backup should occur:"
    time_select.dow_select "Choose the day of the week the backup should occur:"
    time_select.dom_warning "WARNING: "
    time_select.dom_select "Choose the day of the month the backup should occur:"
    time_select.hr_select "Choose the hour the backup should occur:"
    time_select.min_select "Choose the minute the backup should occur:"
}

A few things to point out. First, the sections (which can't be nested) provide a way to set apart functional differences in your form. I only needed one here, but it's very useful if I were to build on and add options for selecting a UCS or SCF format, or specifying a mail address for the backups to be mailed to. Second, order matters. The objects will be displayed in the template as you define them. Third, the optional command allows me to hide questions that wouldn't make sense given previous answers. If you dig into some of the canned templates shipping with v11, you'll also see another use case for the optional command. Fourth, you can use Tcl commands to populate fields for you. This can be generated data like I did above, or you can loop through configuration objects to present in the template as well. Finally, the text section is where you define the language you want to appear with each of your objects. The nomenclature here is section.variable. To give you an idea what this looks like, here is a screenshot of a monthly backup configuration:

Once my template (f5.archiving) is saved, I can configure it in the Application Services section by selecting the template. At this point, I have a functioning presentation, but with no implementation, it's effectively useless.
Implementation

Now that the presentation is complete, I can move on to the implementation. I need to do a couple things in the implementation: grab the data entered into the application service; convert the day-of-week information from long name to the appropriate 0-6 (or 1-7) number for cron; and use that data to build a cron file (statically assigned at this point to /etc/cron.d/f5backups). Here is the implementation section:

array set dow_map {
    Sunday    0
    Monday    1
    Tuesday   2
    Wednesday 3
    Thursday  4
    Friday    5
    Saturday  6
}
set hr $::time_select__hr_select
set min $::time_select__min_select
set infile [open "/etc/cron.d/f5backups" "w" "0755"]
puts $infile "SHELL=\/bin\/bash"
puts $infile "PATH=\/sbin:\/bin:\/usr\/sbin:\/usr\/bin"
puts $infile "#MAILTO=user@somewhere"
puts $infile "HOME=\/var\/tmp\/"
if { $::time_select__day_select == "Daily" } {
    puts $infile "$min $hr * * * root \/bin\/bash \/var\/local\/bin\/f5backup.sh 1>\/var\/tmp\/f5backup.log 2>\&1"
} elseif { $::time_select__day_select == "Weekly" } {
    puts $infile "$min $hr * * $dow_map($::time_select__dow_select) root \/bin\/bash \/var\/local\/bin\/f5backup.sh 1>\/var\/tmp\/f5backup.log 2>\&1"
} elseif { $::time_select__day_select == "Monthly" } {
    puts $infile "$min $hr $::time_select__dom_select * * root \/bin\/bash \/var\/local\/bin\/f5backup.sh 1>\/var\/tmp\/f5backup.log 2>\&1"
}
close $infile

A few notes: The dow_map array converts the selected day (e.g., Saturday) to a number for cron (6). The variables in the implementation section reference the data supplied from the presentation like so: $::<section>__<presentation variable name> (note the double underscore between them; as such, DO NOT use double underscores in your presentation variables). tmsh special characters need to be escaped if you're using them in strings.

A successful configuration of the application service results in this file configuration for /etc/cron.d/f5backups:

SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
#MAILTO=user@somewhere
HOME=/var/tmp/
54 15 * * * root /bin/bash /var/local/bin/f5backup.sh 1>/var/tmp/f5backup.log 2>&1

So this backup is scheduled to run daily at 15:54. This is confirmed with this directory listing on my BIG-IP:

[root@golgotha:Active] bin # ls -las /var/local/backups
total 2168
   8 drwx------ 2 root root    4096 Aug 16 15:54 .
   8 drwxr-xr-x 9 root root    4096 Aug  3 14:44 ..
1076 -rw-r--r-- 1 root root 1091639 Aug 15 15:54 f5backup-golgotha.test.local-20110815155401.tar.bz2
1076 -rw-r--r-- 1 root root 1092259 Aug 16 15:54 f5backup-golgotha.test.local-20110816155401.tar.bz2

Conclusion

This is just scratching the surface of what can be done with the new iApp™ feature in v11. I didn't even cover the ability to use presentation and implementation libraries, but that will be covered in due time. If you're impatient, there are already several examples (including this one here) in the codeshare.
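Once the application service has been deployed from the template, you can confirm it (and its strict-updates protection) from tmsh; the service name below is a placeholder for whatever you called your deployment:

# List the deployed application service; strict-updates shows whether the
# objects it owns are protected from changes made outside the service
tmsh list sys application service <service name>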
Related Articles:
F5 DevCentral > Community > Group Details - iApp
iApp Wiki Home - DevCentral Wiki
F5 Agility 2011 - James Hendergart on iApp
iApp Codeshare - DevCentral Wiki
iApp Lab 5 - Priority Group Activation - DevCentral Wiki
iApp Template Development Tips and Techniques - DevCentral Wiki
Lori MacVittie - iApp

Back to Basics: Health Monitors and Load Balancing

#webperf #ado Because every connection counts

One of the truisms of architecting highly available systems is that you never, ever want to load balance a request to a system that is down. Therefore, some sort of health (status) monitoring is required. For applications, that means not just pinging the network interface or opening a TCP connection; it means querying the application and verifying that the response is valid. This, obviously, requires the application to respond. And respond often. Best practices suggest determining availability every 5 seconds or so. That means every X seconds the load balancing service is going to open up a connection to the application and make a request, just like a user would do. That adds load to the application. It consumes network, transport, application and (possibly) database resources; resources that cannot be used to service customers. While the impact on a single application may appear trivial, it's not. Remember, as load increases, performance decreases. And no matter how trivial it may appear, health monitoring is adding load to what may be an already heavily loaded application.

But Lori, you may be thinking, you expound on the importance of monitoring and visibility all the time! Are you saying we shouldn't be monitoring applications? Nope, not at all. Visibility is paramount, providing the actionable data necessary to enable highly dynamic, automated operations such as elasticity. Visibility through health monitoring is a critical means of ensuring availability at both the local and global level. What we may need to do, however, is move from active to passive monitoring.

PASSIVE MONITORING

Passive monitoring, as the modifier suggests, is not an active process. The load balancer does not open up connections nor query an application itself. Instead, it snoops on responses being returned to clients and from that infers the current status of the application. For example, if a request for content results in an HTTP error message, the load balancer can determine whether or not the application is available and capable of processing subsequent requests. If the load balancer is a BIG-IP, it can mark the service as "down" and invoke an active monitor to probe the application status, as well as retrying the request to another available instance, ensuring end users do not see an error.

Passive (inband) monitors are not binary. That is, they aren't a simple "on" or "off" based on HTTP status codes. Such monitors can be configured to track the number of failures and evaluate failure rates against a configurable failure interval. When such thresholds are exceeded, the application can then be marked as "down". Passive monitors aren't restricted to availability status, either. They can also monitor for performance (response time): failure to meet response time expectations results in a failure, and the application continues to be watched for subsequent failures. Passive monitors are, like most inline/inband technologies, transparent. They quietly monitor traffic and act upon that traffic without adding overhead to the process. Passive monitoring gives operations the visibility necessary to enable predictable performance and to meet or exceed user expectations with respect to uptime, without negatively impacting the performance or capacity of the applications it is monitoring.
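On BIG-IP, this kind of passive monitoring maps to the inband monitor type. A minimal sketch from tmsh follows; the monitor name and pool name are placeholders and the thresholds are purely illustrative:

# Mark a member down after 3 failures within a 30-second window, treat responses
# slower than 10 seconds as failures, and re-check the member after 5 minutes
tmsh create ltm monitor inband mon_inband_example failures 3 failure-interval 30 response-time 10 retry-time 300
tmsh modify ltm pool <pool name> monitor mon_inband_example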
What is server offload and why do I need it?

One of the tasks of an enterprise architect is to design a framework atop which developers can implement and deploy applications consistently and easily. The consistency is important for internal business continuity and reuse; common objects, operations, and processes can be reused across applications to make development and integration with other applications and systems easier. Architects also often decide where functionality resides and design the base application infrastructure framework. Application server, identity management, messaging, and integration are all often a part of such architecture designs. Rarely does the architect concern him/herself with the network infrastructure, as that is the purview of "that group"; the "you know who I'm talking about" group. And for the most part there's no need for architects to concern themselves with network-oriented architecture. Applications should not need to know on which VLAN they will be deployed or what their default gateway might be. But what architects might need to know – and probably should know – is whether the network infrastructure supports "server offload" of some application functions or not, and how that can benefit their enterprise architecture and the applications which will be deployed atop it.

WHAT IT IS

Server offload is a generic term used by the networking industry to indicate functionality designed to improve the performance or security of applications. We use the term "offload" because the functionality is "offloaded" from the server and moved to an application network infrastructure device instead. Server offload works because the application network infrastructure is almost always deployed in front of the web/application servers these days and is in fact acting as a broker (proxy) between the client and the server. Server offload is generally offered by load balancers and application delivery controllers. You can think of server offload like a relay race. The application network infrastructure device runs the first leg and then hands off the baton (the request) to the server. When the server is finished, the application network infrastructure device gets to run another leg, and then the race is done as the response is sent back to the client. There are basically two kinds of server offload functionality:

Protocol processing offload

Protocol processing offload includes functions like SSL termination and TCP optimizations. Rather than enable SSL communication on the web/application server, it can be "offloaded" to an application network infrastructure device and shared across all applications requiring secured communications. Offloading SSL to an application network infrastructure device improves application performance because the device is generally optimized to handle the complex calculations involved in encryption and decryption of secured data, and web/application servers are not. TCP optimization is a little different. We say TCP session management is "offloaded" from the server, but that's not entirely what happens, as TCP connections are obviously still opened, closed, and managed on the server as well. Offloading TCP session management means that the application network infrastructure is managing the connections between itself and the server in such a way as to reduce the total number of connections needed without impacting the capacity of the application. This is more commonly referred to as TCP multiplexing, and it "offloads" the overhead of TCP connection management from the web/application server to the application network infrastructure device by effectively giving up control over those connections. By allowing an application network infrastructure device to decide how many connections to maintain and which ones to use to communicate with the server, it can manage thousands of client-side connections using merely hundreds of server-side connections. Reducing the overhead associated with opening and closing TCP sockets on the web/application server improves application performance and actually increases the user capacity of servers. TCP offload is beneficial to all TCP-based applications, but is particularly beneficial for Web 2.0 applications making use of AJAX and other near real-time technologies that maintain one or more connections to the server for their functionality. Protocol processing offload does not require any modifications to the applications.
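On a BIG-IP, protocol processing offload of this kind is largely a matter of which profiles are attached to the virtual server. The stanza below is only a rough sketch; the names, address, and certificate behind the clientssl profile are placeholders, and on older TMOS versions compression settings live in the http profile rather than a separate httpcompression profile:

ltm virtual vs_offload_example {
    destination 203.0.113.10:443
    pool pool_app_example
    profiles {
        clientssl { context clientside }   # SSL terminated here, not on the servers
        http { }
        oneconnect { }                     # TCP multiplexing toward the pool members
        httpcompression { }                # compression offloaded from the servers
    }
}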
Application-oriented offload

Application-oriented offload includes the ability to implement shared services on an application network infrastructure device. This is often accomplished via a network-side scripting capability, but some functionality has become so commonplace that it is now built into the core features available on application network infrastructure solutions. Application-oriented offload can include functions like cookie encryption/decryption, compression, caching, URI rewriting, HTTP redirection, DLP (Data Leak Prevention), selective data encryption, application security functionality, and data transformation. When network-side scripting is available, virtually any kind of pre- or post-processing can be offloaded to the application network infrastructure and thereafter shared with all applications. Application-oriented offload works because the application network infrastructure solution is mediating between the client and the server, and it has the ability to inspect and manipulate the application data.

The benefits of application-oriented offload are that the services implemented can be shared across multiple applications and, in many cases, the functionality removes the need for the web/application server to handle a specific request. For example, HTTP redirection can be fully accomplished on the application network infrastructure device. HTTP redirection is often used as a means to handle application upgrades, commonly mistyped URIs, or as part of the application logic when certain conditions are met. Application security offload usually falls into this category because it is application – or at least application data – specific. Application security offload can include scanning URIs and data for malicious content, validating the existence of specific cookies/data required for the application, etc. This kind of offload improves server efficiency and performance, but a bigger benefit is consistent, shared security across all applications for which the service is enabled. Some application-oriented offload can require modification to the application, so it is important to design such features into the application architecture before development and deployment. While it is certainly possible to add such functionality into the architecture after deployment, it is always easier to do so at the beginning.
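The HTTP redirection example above is one of the simplest things to push onto the proxy with network-side scripting. A minimal iRule sketch (the hostnames are placeholders) that answers the redirect itself, so the request never reaches a server:

when HTTP_REQUEST {
    # Send clients of a retired hostname to the new one without involving the servers
    if { [string tolower [HTTP::host]] eq "old.example.com" } {
        HTTP::redirect "https://www.example.com[HTTP::uri]"
    }
}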
WHY YOU NEED IT

Server offload is a way to increase the efficiency of servers and improve application performance and security. Server offload increases the efficiency of servers by alleviating the need for the web/application server to consume resources performing tasks that can be performed more efficiently on an application network infrastructure solution. The two best examples of this are SSL encryption/decryption and compression. Both are CPU-intense operations that can consume 20-40% of a web/application server's resources. By offloading these functions to an application network infrastructure solution, servers "reclaim" those resources and can use them instead to execute application logic, serve more users, handle more requests, and do so faster.

Server offload improves application performance by allowing the web/application server to concentrate on what it is designed to do – serve applications – and putting the onus for performing ancillary functions on a platform that is more optimized to handle those functions. Server offload provides these benefits whether you have a traditional client-server architecture or have moved (or are moving) toward a virtualized infrastructure. Applications deployed on virtual servers still use TCP connections and SSL and run applications, and therefore will benefit the same as those deployed on traditional servers.

Related articles:
I am wondering why not all websites enabling this great feature GZIP?
3 Really good reasons you should use TCP multiplexing
SOA & Web 2.0: The Connection Management Challenge
Understanding network-side scripting
I am in your HTTP headers, attacking your application
Infrastructure 2.0: As a matter of fact that isn't what it means
F5 Friday: Load Balancing MySQL with F5 BIG-IP

Scaling MySQL just got a whole lot easier.

Load balancing MySQL – any database, really – is not a trivial task. Generally speaking, one does not simply round-robin through a cluster of MySQL databases as a means to achieve scalability. It is databases, in fact, that have driven a wide variety of scalability patterns such as sharding and partitioning to achieve the ultimate goal of high performance and scalability simultaneously.

Unfortunately, most folks don't architect their applications with scalability in mind. A single database is all that's necessary at first, and because of the way in which the application interacts with the database, it doesn't make sense to code in support for multiple database instances, such as is often implemented with a MySQL master-slave cluster. That's because the application has to actually open a connection to the database in question. If you're only starting with one database, you really can't code in a connection to a separate instance. Eventually that application's usage grows and the demands upon the database require a more scalable approach. Enter the MySQL master/slave relationship. A typical configuration is to maintain the master as the "write" database, i.e. all updates and/or inserts must use the master, while the slave instance is used as a "read-only" instance. Obviously this means the application code must be changed to support this kind of functional sharding.

Unless you leverage network server virtualization from a load balancing service capable of acting as a full proxy at layer 7 (application), like BIG-IP. This solution leverages iRules to implement database load balancing. While this specific example is designed to perform the common functional sharding pattern of read-write separation for a master-slave MySQL cluster, the flexibility of iRules is such that other architectural solutions can easily be designed using the same basic functions. Location-based sharding is another popular means of scaling databases, and using the GeoLocation capabilities of BIG-IP along with iRules to inspect and route database requests, it should be a fairly trivial architectural task to implement.

The ability to further extend sharding or other distribution methodologies for scaling databases without modifying the application itself is a huge bonus for both developers and operations. By decoupling the application from the database, it provides a more flexible set of scalability domains in which technology-targeted scalability strategies can be leveraged independent of the other layers. This is an important facet of agile infrastructure architecture and should not be underestimated as a benefit of network server virtualization.
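The MySQL Proxy iRule referenced in the resources below handles the protocol inspection itself; the sketch here only shows the kind of surrounding objects such an iRule would typically rely on: a write pool pointing at the master, a read pool for the slaves, and a virtual server the iRule is attached to. All names and addresses are placeholders, and "mysql_proxy_irule" stands in for whatever you name the imported iRule:

# Master (writes) and slaves (reads) as separate pools -- addresses are examples only
tmsh create ltm pool pool_mysql_write members add { 10.0.10.10:3306 }
tmsh create ltm pool pool_mysql_read members add { 10.0.10.11:3306 10.0.10.12:3306 }

# Virtual server the read/write-splitting iRule would be attached to
tmsh create ltm virtual vs_mysql destination 10.0.20.5:3306 ip-protocol tcp pool pool_mysql_write rules { mysql_proxy_irule }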
MySQL Load Balancing Resources:
MySQL Proxy iRule
MySQL Proxy iApp (deployment package for BIG-IP v11)
The Full-Proxy Data Center Architecture
Infrastructure Scalability Pattern: Sharding Streams
Infrastructure Scalability Pattern: Sharding Sessions
Infrastructure Scalability Pattern: Partition by Function or Type
IT as a Service: A Stateless Infrastructure Architecture Model
F5 Friday: Platform versus Product
At the Intersection of Cloud and Control…
What is a Strategic Point of Control Anyway?
All F5 Friday Posts on DevCentral
Why Single-Stack Infrastructure Sucks

Full examples of iControlREST for device and application service deployment

In a previous article, I highlighted a proof-of-concept where we fully automated the deployment of BIG-IP in AWS using the web interfaces of BIG-IP in conjunction with Ansible. The goal of this article is to focus in more detail on the use of iControlREST within that project, in order to show how it can be extremely useful for automating various aspects of your ITOM workflows. There are four main workflows we execute in order to configure BIG-IP in AWS and to deploy services:

Basic system configuration
AWS-specific system configuration
Network attachment
Application service provisioning

For now, we avoid the discussion about which configuration elements are part of the infrastructure deployment and which are part of the application deployment. This is an important discussion, but one that can only take place after we understand how to provision the elements that will be a part of either workflow. For each of these workflows, we have provided the log output which shows REST calls and responses. To gain from these examples, it is important to understand the following:

These iControlREST calls were scraped from an execution of the aws-deployments code where we captured log output. In that project, we have written a custom Ansible module called 'bigip_config' (see /library/bigip_config.py in the project directory on GitHub). This module is used to provision config objects within TMOS using iControlREST.

In order to make it easier to use this 'bigip_config' Ansible module, we aimed to make it idempotent. This means that we need only identify the resources we wish to create or update, and identify the state of the resource after our module is run. We don't need to worry about whether the object exists when calling our module, or the procedural set of calls that should be made in order to get it there. An example might be: "create iApp service 'my_app' with parameters X, Y, and Z". We don't care whether 'my_app' already exists. In order to implement such behavior, the module internally does an HTTP GET against the resource or collection it is modifying. Subsequently, if the resource already exists, a PATCH call is made; otherwise a POST call is made. (A small curl sketch of this GET-then-PATCH-or-POST pattern appears at the end of this article.)

In many cases in the code, we repeat a call until it returns successfully or returns the state we are expecting. You can see this in the examples, where the same call seems to be made repeatedly.

Basic system configuration

The examples below show how we are configuring basic device settings with REST. First, because BIG-IP was just started, we wait until the first iCR call, "GET mgmt/tm/sys/db", succeeds before we continue. The final step of the workflow involves provisioning modules on BIG-IP. Note the 30-second wait between provisioning of AVR and ASM reflected in the timestamps.
2015-11-12 09:46:03 : Disabling Setup Utility in GUI GET mgmt/tm/sys/db "" 2015-11-12 09:46:10 : Disabling Setup Utility in GUI GET mgmt/tm/sys/db "" 2015-11-12 09:46:17 : Disabling Setup Utility in GUI GET mgmt/tm/sys/db "" Method GET mgmt/tm/sys/db returned: {"kind":"tm:sys:db:dbcollectionstate","selfLink":"https://localhost/mgmt/tm/sys/db?ver=11.6.0","items":[....<a whole bunch of database variables>...]} 2015-11-12 09:46:19 : Disabling Setup Utility in GUI PATCH mgmt/tm/sys/db/setup.run?ver=11.6.0 {"value": "false"} Method PATCH mgmt/tm/sys/db/setup.run?ver=11.6.0 returned: {"kind":"tm:sys:db:dbstate","name":"setup.run","fullPath":"setup.run","generation":62,"selfLink":"https://localhost/mgmt/tm/sys/db/setup.run?ver=11.6.0","defaultValue":"true","scfConfig":"false","value":"false","valueRange":"false true"} 2015-11-12 09:46:22 : Configuring NTP servers PATCH mgmt/tm/sys/ntp {"timezone": "America/Los_Angeles", "servers": ["0.pool.ntp.org", "1.pool.ntp.org"]} Method PATCH mgmt/tm/sys/ntp returned: {"kind":"tm:sys:ntp:ntpstate","selfLink":"https://localhost/mgmt/tm/sys/ntp?ver=11.6.0","servers":["0.pool.ntp.org","1.pool.ntp.org"],"timezone":"America/Los_Angeles","restrictReference":{"link":"https://localhost/mgmt/tm/sys/ntp/restrict?ver=11.6.0","isSubcollection":true}} 2015-11-12 09:46:25 : Configuring syslog logging destinations PATCH mgmt/tm/sys/syslog {"include": "destination loghost { udp( 10.0.3.32 port (514));};"} Method PATCH mgmt/tm/sys/syslog returned: {"kind":"tm:sys:syslog:syslogstate","selfLink":"https://localhost/mgmt/tm/sys/syslog?ver=11.6.0","authPrivFrom":"notice","authPrivTo":"emerg","consoleLog":"enabled","cronFrom":"warning","cronTo":"emerg","daemonFrom":"notice","daemonTo":"emerg","include":"destination loghost { udp( 10.0.3.32 port (514));};","isoDate":"disabled","kernFrom":"debug","kernTo":"emerg","local6From":"notice","local6To":"emerg","mailFrom":"notice","mailTo":"emerg","messagesFrom":"notice","messagesTo":"warning","userLogFrom":"notice","userLogTo":"emerg"} 2015-11-12 09:46:28 : Configuring HTTP mgmt access PATCH mgmt/tm/sys/httpd {"allow": ["ALL"]} Method PATCH mgmt/tm/sys/httpd returned: {"kind":"tm:sys:httpd:httpdstate","selfLink":"https://localhost/mgmt/tm/sys/httpd?ver=11.6.0","allow":["ALL"],"authName":"BIG-IP","authPamDashboardTimeout":"off","authPamIdleTimeout":1200,"authPamValidateIp":"on","fastcgiTimeout":300,"hostnameLookup":"off","logLevel":"warn","maxClients":10,"redirectHttpToHttps":"disabled","requestBodyMaxTimeout":0,"requestBodyMinRate":500,"requestBodyTimeout":60,"requestHeaderMaxTimeout":40,"requestHeaderMinRate":500,"requestHeaderTimeout":20,"sslCertfile":"/etc/httpd/conf/ssl.crt/server.crt","sslCertkeyfile":"/etc/httpd/conf/ssl.key/server.key","sslCiphersuite":"DEFAULT:!aNULL:!eNULL:!LOW:!RC4:!MD5:!EXP","sslOcspDefaultResponder":"http://127.0.0.1","sslOcspEnable":"off","sslOcspOverrideResponder":"off","sslOcspResponderTimeout":300,"sslOcspResponseMaxAge":-1,"sslOcspResponseTimeSkew":300,"sslProtocol":"all -SSLv2 -SSLv3","sslVerifyClient":"no","sslVerifyDepth":10} 2015-11-12 09:46:31 : Configuring SSH mgmt access PATCH mgmt/tm/sys/sshd {"allow": ["ALL"]} Method PATCH mgmt/tm/sys/sshd returned: {"kind":"tm:sys:sshd:sshdstate","selfLink":"https://localhost/mgmt/tm/sys/sshd?ver=11.6.0","allow":["ALL"],"banner":"disabled","inactivityTimeout":0,"logLevel":"info","login":"enabled"} 2015-11-12 09:46:33 : Configuring SNMP access PATCH mgmt/tm/sys/snmp {"allowedAddresses": ["172.16.0.0/16"]} Method PATCH mgmt/tm/sys/snmp returned: 
{"kind":"tm:sys:snmp:snmpstate","selfLink":"https://localhost/mgmt/tm/sys/snmp?ver=11.6.0","agentAddresses":["tcp6:161","udp6:161"],"agentTrap":"enabled","allowedAddresses":["172.16.0.0/16"],"authTrap":"disabled","bigipTraps":"enabled","l2forwardVlan":"none","loadMax1":12,"loadMax15":12,"loadMax5":12,"sysContact":"Customer Name <admin@customer.com>","sysLocation":"Network Closet 1","sysServices":78,"trapCommunity":"public","trapSource":"none","communitiesReference":{"link":"https://localhost/mgmt/tm/sys/snmp/communities?ver=11.6.0","isSubcollection":true},"diskMonitors":[{"name":"root","partition":"Common","minspace":2000,"minspaceType":"size","path":"/"},{"name":"var","partition":"Common","minspace":10000,"minspaceType":"size","path":"/var"}],"processMonitors":[{"name":"bigd","partition":"Common","maxProcesses":"1","minProcesses":1,"process":"bigd"},{"name":"chmand","partition":"Common","maxProcesses":"1","minProcesses":1,"process":"chmand"},{"name":"httpd","partition":"Common","maxProcesses":"infinity","minProcesses":1,"process":"httpd"},{"name":"mcpd","partition":"Common","maxProcesses":"1","minProcesses":1,"process":"mcpd"},{"name":"sod","partition":"Common","maxProcesses":"1","minProcesses":1,"process":"sod"},{"name":"tmm","partition":"Common","maxProcesses":"infinity","minProcesses":1,"process":"tmm"}],"trapsReference":{"link":"https://localhost/mgmt/tm/sys/snmp/traps?ver=11.6.0","isSubcollection":true},"usersReference":{"link":"https://localhost/mgmt/tm/sys/snmp/users?ver=11.6.0","isSubcollection":true}} 2015-11-12 09:46:36 : Configuring FastL4 profiles ... fastL4-route-friendly GET mgmt/tm/ltm/profile/fastl4 "" Method GET mgmt/tm/ltm/profile/fastl4 returned: {"kind":"tm:ltm:profile:fastl4:fastl4collectionstate","selfLink":"https://localhost/mgmt/tm/ltm/profile/fastl4?ver=11.6.0","items":[{"kind":"tm:ltm:profile:fastl4:fastl4state","name":"fastL4","partition":"Common","fullPath":"/Common/fastL4","generation":1,"selfLink":"https://localhost/mgmt/tm/ltm/profile/fastl4/~Common~fastL4?ver=11.6.0","clientTimeout":30,"explicitFlowMigration":"disabled","hardwareSynCookie":"enabled","idleTimeout":"300","ipTosToClient":"pass-through","ipTosToServer":"pass-through","keepAliveInterval":"disabled","lateBinding":"disabled","linkQosToClient":"pass-through","linkQosToServer":"pass-through","looseClose":"disabled","looseInitialization":"disabled","mssOverride":0,"priorityToClient":"pass-through","priorityToServer":"pass-through","pvaAcceleration":"full","pvaDynamicClientPackets":1,"pvaDynamicServerPackets":0,"pvaFlowAging":"enabled","pvaFlowEvict":"enabled","pvaOffloadDynamic":"enabled","pvaOffloadState":"embryonic","reassembleFragments":"disabled","receiveWindowSize":0,"resetOnTimeout":"enabled","rttFromClient":"disabled","rttFromServer":"disabled","serverSack":"disabled","serverTimestamp":"disabled","softwareSynCookie":"disabled","synCookieWhitelist":"disabled","tcpCloseTimeout":"5","tcpGenerateIsn":"disabled","tcpHandshakeTimeout":"5","tcpStripSack":"disabled","tcpTimestampMode":"preserve","tcpWscaleMode":"preserve","timeoutRecovery":"disconnect"}]} 2015-11-12 09:46:37 : Configuring FastL4 profiles ... 
fastL4-route-friendly POST mgmt/tm/ltm/profile/fastl4 {"looseClose": "enabled", "resetOnTimeout": "disabled", "name": "fastL4-route-friendly", "looseInitialization": "enabled"} Method POST mgmt/tm/ltm/profile/fastl4 returned: {"kind":"tm:ltm:profile:fastl4:fastl4state","name":"fastL4-route-friendly","fullPath":"fastL4-route-friendly","generation":68,"selfLink":"https://localhost/mgmt/tm/ltm/profile/fastl4/fastL4-route-friendly?ver=11.6.0","clientTimeout":30,"defaultsFrom":"/Common/fastL4","explicitFlowMigration":"disabled","hardwareSynCookie":"enabled","idleTimeout":"300","ipTosToClient":"pass-through","ipTosToServer":"pass-through","keepAliveInterval":"disabled","lateBinding":"disabled","linkQosToClient":"pass-through","linkQosToServer":"pass-through","looseClose":"enabled","looseInitialization":"enabled","mssOverride":0,"priorityToClient":"pass-through","priorityToServer":"pass-through","pvaAcceleration":"full","pvaDynamicClientPackets":1,"pvaDynamicServerPackets":0,"pvaFlowAging":"enabled","pvaFlowEvict":"enabled","pvaOffloadDynamic":"enabled","pvaOffloadState":"embryonic","reassembleFragments":"disabled","receiveWindowSize":0,"resetOnTimeout":"disabled","rttFromClient":"disabled","rttFromServer":"disabled","serverSack":"disabled","serverTimestamp":"disabled","softwareSynCookie":"disabled","synCookieWhitelist":"disabled","tcpCloseTimeout":"5","tcpGenerateIsn":"disabled","tcpHandshakeTimeout":"5","tcpStripSack":"disabled","tcpTimestampMode":"preserve","tcpWscaleMode":"preserve","timeoutRecovery":"disconnect"} ... 2015-11-12 09:46:44 : PATCH mgmt/tm/sys/provision/asm {"level": "nominal"} Method PATCH mgmt/tm/sys/provision/asm returned: {"kind":"tm:sys:provision:provisionstate","name":"asm","fullPath":"asm","generation":71,"selfLink":"https://localhost/mgmt/tm/sys/provision/asm?ver=11.6.0","cpuRatio":0,"diskRatio":0,"level":"nominal","memoryRatio":0} 2015-11-12 09:47:18 : PATCH mgmt/tm/sys/provision/avr {"level": "nominal"} Method PATCH mgmt/tm/sys/provision/avr returned: {"kind":"tm:sys:provision:provisionstate","name":"avr","fullPath":"avr","generation":114,"selfLink":"https://localhost/mgmt/tm/sys/provision/avr?ver=11.6.0","cpuRatio":0,"diskRatio":0,"level":"nominal","memoryRatio":0} AWS-specific System Configuration This next workflow is very small and simple. We are adding some variables to global-settings which are only necessary because BIG-IP is running in AWS. We've obfuscated the AWS Access Key and Secret Key in the output. 
2015-11-12 09:48:12 : Adding/updating AWS access and secret keys PATCH mgmt/tm/sys/global-settings {"awsAccessKey": "...<my access key>...", "awsSecretKey": "...<my secret key>..."} Method PATCH mgmt/tm/sys/global-settings returned: {"kind":"tm:sys:global-settings:global-settingsstate","selfLink":"https://localhost/mgmt/tm/sys/global-settings?ver=11.6.0","awsAccessKey":"...<my access key>...","awsApiMaxConcurrency":1,"awsSecretKey":"...<my secret key>...","consoleInactivityTimeout":0,"customAddr":"none","failsafeAction":"go-offline-restart-tm","fileLocalPathPrefix":"{/shared/} {/tmp/}","guiSecurityBanner":"enabled","guiSecurityBannerText":"Welcome to the BIG-IP Configuration Utility.\n\nLog in with your username and password using the fields on the left.","guiSetup":"disabled","hostAddrMode":"management","hostname":"ip-172-16-11-77.ec2.internal","lcdDisplay":"enabled","mgmtDhcp":"enabled","netReboot":"disabled","passwordPrompt":"Password","quietBoot":"enabled","usernamePrompt":"Username"} Network Attachment Setup of self-IPs, VLANs, and other network specific configuration is relatively straight forward. 2015-11-12 09:48:15 : Disabling dhcp PATCH mgmt/tm/sys/db/dhclient.mgmt {"value": "disable"} Method PATCH mgmt/tm/sys/db/dhclient.mgmt returned: {"kind":"tm:sys:db:dbstate","name":"dhclient.mgmt","fullPath":"dhclient.mgmt","generation":154,"selfLink":"https://localhost/mgmt/tm/sys/db/dhclient.mgmt?ver=11.6.0","defaultValue":"disable","scfConfig":"true","value":"disable","valueRange":"disable enable"} 2015-11-12 09:48:18 : Adding/updating internal vlan GET mgmt/tm/net/vlan "" Method GET mgmt/tm/net/vlan returned: {"kind":"tm:net:vlan:vlancollectionstate","selfLink":"https://localhost/mgmt/tm/net/vlan?ver=11.6.0"} 2015-11-12 09:48:19 : Adding/updating internal vlan POST mgmt/tm/net/vlan {"interfaces": "1.2", "name": "private"} Method POST mgmt/tm/net/vlan returned: {"kind":"tm:net:vlan:vlanstate","name":"private","fullPath":"private","generation":167,"selfLink":"https://localhost/mgmt/tm/net/vlan/private?ver=11.6.0","autoLasthop":"default","cmpHash":"default","dagRoundRobin":"disabled","dagTunnel":"outer","failsafe":"disabled","failsafeAction":"failover-restart-tm","failsafeTimeout":90,"ifIndex":80,"learning":"enable-forward","mtu":1500,"sflow":{"pollInterval":0,"pollIntervalGlobal":"yes","samplingRate":0,"samplingRateGlobal":"yes"},"sourceChecking":"disabled","tag":4094,"interfacesReference":{"link":"https://localhost/mgmt/tm/net/vlan/~Common~private/interfaces?ver=11.6.0","isSubcollection":true}} 2015-11-12 09:48:21 : Adding/updating external vlan GET mgmt/tm/net/vlan "" Method GET mgmt/tm/net/vlan returned: {"kind":"tm:net:vlan:vlancollectionstate","selfLink":"https://localhost/mgmt/tm/net/vlan?ver=11.6.0","items":[{"kind":"tm:net:vlan:vlanstate","name":"private","partition":"Common","fullPath":"/Common/private","generation":167,"selfLink":"https://localhost/mgmt/tm/net/vlan/~Common~private?ver=11.6.0","autoLasthop":"default","cmpHash":"default","dagRoundRobin":"disabled","dagTunnel":"outer","failsafe":"disabled","failsafeAction":"failover-restart-tm","failsafeTimeout":90,"ifIndex":80,"learning":"enable-forward","mtu":1500,"sflow":{"pollInterval":0,"pollIntervalGlobal":"yes","samplingRate":0,"samplingRateGlobal":"yes"},"sourceChecking":"disabled","tag":4094,"interfacesReference":{"link":"https://localhost/mgmt/tm/net/vlan/~Common~private/interfaces?ver=11.6.0","isSubcollection":true}}]} ... 
2015-11-12 09:48:24 : Adding/updating internal selfip GET mgmt/tm/net/self "" Method GET mgmt/tm/net/self returned: {"kind":"tm:net:self:selfcollectionstate","selfLink":"https://localhost/mgmt/tm/net/self?ver=11.6.0"} 2015-11-12 09:48:24 : Adding/updating internal selfip POST mgmt/tm/net/self {"allowService": "default", "vlan": "private", "trafficGroup": "traffic-group-local-only", "name": "private", "address": "172.16.12.44/24"} Method POST mgmt/tm/net/self returned: {"kind":"tm:net:self:selfstate","name":"private","fullPath":"private","generation":177,"selfLink":"https://localhost/mgmt/tm/net/self/private?ver=11.6.0","address":"172.16.12.44/24","floating":"disabled","inheritedTrafficGroup":"false","trafficGroup":"/Common/traffic-group-local-only","unit":0,"vlan":"/Common/private","allowService":["default"]} 2015-11-12 09:48:26 : Adding/updating external selfip GET mgmt/tm/net/self "" Method GET mgmt/tm/net/self returned: {"kind":"tm:net:self:selfcollectionstate","selfLink":"https://localhost/mgmt/tm/net/self?ver=11.6.0","items":[{"kind":"tm:net:self:selfstate","name":"private","partition":"Common","fullPath":"/Common/private","generation":177,"selfLink":"https://localhost/mgmt/tm/net/self/~Common~private?ver=11.6.0","address":"172.16.12.44/24","floating":"disabled","inheritedTrafficGroup":"false","trafficGroup":"/Common/traffic-group-local-only","unit":0,"vlan":"/Common/private","allowService":["default"]}]} 2015-11-12 09:48:27 : Adding/updating external selfip POST mgmt/tm/net/self {"allowService": ["tcp:4353"], "vlan": "public", "trafficGroup": "traffic-group-local-only", "name": "public", "address": "172.16.13.83/24"} Method POST mgmt/tm/net/self returned: {"kind":"tm:net:self:selfstate","name":"public","fullPath":"public","generation":178,"selfLink":"https://localhost/mgmt/tm/net/self/public?ver=11.6.0","address":"172.16.13.83/24","floating":"disabled","inheritedTrafficGroup":"false","trafficGroup":"/Common/traffic-group-local-only","unit":0,"vlan":"/Common/public","allowService":["tcp:4353"]} 2015-11-12 09:48:29 : Setting default route using default_gateway or gateway_pool GET /mgmt/tm/net/route "" Method GET /mgmt/tm/net/route returned: {"kind":"tm:net:route:routecollectionstate","selfLink":"https://localhost/mgmt/tm/net/route?ver=11.6.0"} 2015-11-12 09:48:29 : Setting default route using default_gateway or gateway_pool POST /mgmt/tm/net/route {"gw": "172.16.13.1", "name": "default_route", "network": "default"} Method POST /mgmt/tm/net/route returned: {"kind":"tm:net:route:routestate","name":"default_route","fullPath":"default_route","generation":0,"selfLink":"https://localhost/mgmt/tm/net/route/default_route?ver=11.6.0"}

Application Service Provisioning

This workflow is where things really get interesting. Let's break it down. We are deploying two sets of virtual servers (the pool members are the same, but the VIP is different). For virtual 1 (VIP = 172.16.13.128), we use an iApp to deploy an HTTPS virtual with an ASM policy. To do so, we:

1. Deploy all the resources needed to support the iApp deployment:
- A high-speed logging pool
- An LTM logging profile (which will send logs to Splunk on port 514)
- An ASM logging profile (which will send logs to Splunk on port 515)
- An analytics profile, in case we want to inspect traffic with AVR on-box
- Base64-encoded images uploaded to an iRule data-group
- iRules to support a sorry page and the analytics profile
- An ASM policy (we've encoded the XML policy file into base64).
Deploying the ASM policies requires first making a new policy via a POST command, then importing the policy over the defaults for the one we have just created. Note that we check the status of the asynchronous REST tasks which are started during the policy 'create' and 'apply' steps.
- An LTM policy which attaches the ASM policy above using a ruleset.
2. Deploy the iApp template (look here to understand how we built the JSON payload for the iApp template).
3. Finally, deploy the iApp service, an instantiation of the template that references all the above content (look here to understand how we built the JSON payload for the iApp service).

For virtual 2 (VIP = 172.16.13.124), we just deploy the web server pool, iRule, and virtual server directly (without an iApp).

2015-11-12 09:49:13 : Deploying/updating Webserver Pool GET mgmt/tm/ltm/pool "" Method GET mgmt/tm/ltm/pool returned: {"kind":"tm:ltm:pool:poolcollectionstate","selfLink":"https://localhost/mgmt/tm/ltm/pool?ver=11.6.0"} 2015-11-12 09:49:14 : Deploying/updating Webserver Pool POST mgmt/tm/ltm/pool {"name": "Vip1_pool", "members": [{"description": "Name=/boring_lovelace,ContainerHostname=a0085832ad28,Image=mutzel/all-in-one-hackazon:postinstall", "name": "172.16.14.87:80", "address": "172.16.14.87"}], "monitor": "http"} Method POST mgmt/tm/ltm/pool returned: {"kind":"tm:ltm:pool:poolstate","name":"Vip1_pool","fullPath":"Vip1_pool","generation":236,"selfLink":"https://localhost/mgmt/tm/ltm/pool/Vip1_pool?ver=11.6.0","allowNat":"yes","allowSnat":"yes","ignorePersistedWeight":"disabled","ipTosToClient":"pass-through","ipTosToServer":"pass-through","linkQosToClient":"pass-through","linkQosToServer":"pass-through","loadBalancingMode":"round-robin","minActiveMembers":0,"minUpMembers":0,"minUpMembersAction":"failover","minUpMembersChecking":"disabled","monitor":"/Common/http ","queueDepthLimit":0,"queueOnConnectionLimit":"disabled","queueTimeLimit":0,"reselectTries":0,"serviceDownAction":"none","slowRampTime":10,"membersReference":{"link":"https://localhost/mgmt/tm/ltm/pool/~Common~Vip1_pool/members?ver=11.6.0","isSubcollection":true}} 2015-11-12 09:49:18 : Deploying/updating High Speed Logging pool to send to Analytics Server GET mgmt/tm/ltm/pool "" Method GET mgmt/tm/ltm/pool returned: {"kind":"tm:ltm:pool:poolcollectionstate","selfLink":"https://localhost/mgmt/tm/ltm/pool?ver=11.6.0","items":[{"kind":"tm:ltm:pool:poolstate","name":"Vip1_pool","partition":"Common","fullPath":"/Common/Vip1_pool","generation":236,"selfLink":"https://localhost/mgmt/tm/ltm/pool/~Common~Vip1_pool?ver=11.6.0","allowNat":"yes","allowSnat":"yes","ignorePersistedWeight":"disabled","ipTosToClient":"pass-through","ipTosToServer":"pass-through","linkQosToClient":"pass-through","linkQosToServer":"pass-through","loadBalancingMode":"round-robin","minActiveMembers":0,"minUpMembers":0,"minUpMembersAction":"failover","minUpMembersChecking":"disabled","monitor":"/Common/http ","queueDepthLimit":0,"queueOnConnectionLimit":"disabled","queueTimeLimit":0,"reselectTries":0,"serviceDownAction":"none","slowRampTime":10,"membersReference":{"link":"https://localhost/mgmt/tm/ltm/pool/~Common~Vip1_pool/members?ver=11.6.0","isSubcollection":true}}]} 2015-11-12 09:49:19 : Deploying/updating High Speed Logging pool to send to Analytics Server POST mgmt/tm/ltm/pool {"name": "syslog_pool", "members": [{"name": "172.16.14.180:514", "address": "172.16.14.180"}], "monitor": "tcp"} Method POST mgmt/tm/ltm/pool returned:
{"kind":"tm:ltm:pool:poolstate","name":"syslog_pool","fullPath":"syslog_pool","generation":239,"selfLink":"https://localhost/mgmt/tm/ltm/pool/syslog_pool?ver=11.6.0","allowNat":"yes","allowSnat":"yes","ignorePersistedWeight":"disabled","ipTosToClient":"pass-through","ipTosToServer":"pass-through","linkQosToClient":"pass-through","linkQosToServer":"pass-through","loadBalancingMode":"round-robin","minActiveMembers":0,"minUpMembers":0,"minUpMembersAction":"failover","minUpMembersChecking":"disabled","monitor":"/Common/tcp ","queueDepthLimit":0,"queueOnConnectionLimit":"disabled","queueTimeLimit":0,"reselectTries":0,"serviceDownAction":"none","slowRampTime":10,"membersReference":{"link":"https://localhost/mgmt/tm/ltm/pool/~Common~syslog_pool/members?ver=11.6.0","isSubcollection":true}} ... 2015-11-12 09:49:21 : Deploying/updating ASM Logging Profile to send to Remote Analytics Server POST mgmt/tm/security/log/profile {"application": [{"guaranteeLogging": "enabled", "guaranteeResponseLogging": "disabled", "logicOperation": "or", "protocol": "tcp", "name": "asm_log_to_splunk", "format": {"fieldDelimiter": ",", "type": "predefined"}, "reportAnomalies": "disabled", "facility": "local0", "partition": "Common", "filter": [{"values": ["all"], "name": "protocol"}, {"values": ["all"], "name": "request-type"}, {"name": "search-all"}], "maximumHeaderSize": "any", "localStorage": "enabled", "maximumQuerySize": "any", "maximumEntryLength": "2k", "servers": [{"name": "172.16.14.180:515"}], "remoteStorage": "splunk", "maximumRequestSize": "any", "responseLogging": "none"}], "name": "asm_log_to_splunk"} Method POST mgmt/tm/security/log/profile returned: {"kind":"tm:security:log:profile:profilestate","name":"asm_log_to_splunk","fullPath":"asm_log_to_splunk","generation":240,"selfLink":"https://localhost/mgmt/tm/security/log/profile/asm_log_to_splunk?ver=11.6.0","applicationReference":{"link":"https://localhost/mgmt/tm/security/log/profile/~Common~asm_log_to_splunk/application?ver=11.6.0","isSubcollection":true}} 2015-11-12 09:49:23 : Deploying/updating Analytics Profile GET mgmt/tm/ltm/profile/analytics "" Method GET mgmt/tm/ltm/profile/analytics returned: {"kind":"tm:ltm:profile:analytics:analyticscollectionstate","selfLink":"https://localhost/mgmt/tm/ltm/profile/analytics?ver=11.6.0","items":[{"kind":"tm:ltm:profile:analytics:analyticsstate","name":"analytics","partition":"Common","fullPath":"/Common/analytics","generation":1,"selfLink":"https://localhost/mgmt/tm/ltm/profile/analytics/~Common~analytics?ver=11.6.0","capturedTrafficExternalLogging":"disabled","capturedTrafficInternalLogging":"disabled","collectGeo":"disabled","collectIp":"disabled","collectMaxTpsAndThroughput":"disabled","collectMethods":"enabled","collectPageLoadTime":"disabled","collectResponseCodes":"enabled","collectSubnets":"disabled","collectUrl":"disabled","collectUserAgent":"disabled","collectUserSessions":"disabled","collectedStatsExternalLogging":"disabled","collectedStatsInternalLogging":"enabled","notificationByEmail":"disabled","notificationBySnmp":"disabled","notificationBySyslog":"disabled","publishIruleStatistics":"disabled","sampling":"enabled","sessionCookieSecurity":"ssl-only","sessionTimeoutMinutes":"5","alertsReference":{"link":"https://localhost/mgmt/tm/ltm/profile/analytics/~Common~analytics/alerts?ver=11.6.0","isSubcollection":true},"trafficCaptureReference":{"link":"https://localhost/mgmt/tm/ltm/profile/analytics/~Common~analytics/traffic-capture?ver=11.6.0","isSubcollection":true}}]} ... 
2015-11-12 09:49:26 : Uploading Datagroup ... background for sorry page GET mgmt/tm/ltm/data-group/internal "" Method GET mgmt/tm/ltm/data-group/internal returned: {"kind":"tm:ltm:data-group:internal:internalcollectionstate","selfLink":"https://localhost/mgmt/tm/ltm/data-group/internal?ver=11.6.0","items":[{"kind":"tm:ltm:data-group:internal:internalstate","name":"aol","partition":"Common","fullPath":"/Common/aol","generation":1,"selfLink":"https://localhost/mgmt/tm/ltm/data-group/internal/~Common~aol?ver=11.6.0","type":"ip","records":[{"name":"64.12.96.0/19"},{"name":"195.93.16.0/20"},{"name":"195.93.48.0/22"},{"name":"195.93.64.0/19"},{"name":"195.93.96.0/19"},{"name":"198.81.0.0/22"},{"name":"198.81.8.0/23"},{"name":"198.81.16.0/20"},{"name":"202.67.65.128/25"},{"name":"205.188.112.0/20"},{"name":"205.188.146.144/30"},{"name":"205.188.192.0/20"},{"name":"205.188.208.0/23"},{"name":"207.200.112.0/21"}]},{"kind":"tm:ltm:data-group:internal:internalstate","name":"images","partition":"Common","fullPath":"/Common/images","generation":1,"selfLink":"https://localhost/mgmt/tm/ltm/data-group/internal/~Common~images?ver=11.6.0","type":"string","records":[{"name":".bmp"},{"name":".gif"},{"name":".jpg"}]},{"kind":"tm:ltm:data-group:internal:internalstate","name":"private_net","partition":"Common","fullPath":"/Common/private_net","generation":1,"selfLink":"https://localhost/mgmt/tm/ltm/data-group/internal/~Common~private_net?ver=11.6.0","type":"ip","records":[{"name":"10.0.0.0/8"},{"name":"172.16.0.0/12"},{"name":"192.168.0.0/16"}]}]} 2015-11-12 09:49:26 : Uploading Datagroup ... background for sorry page POST mgmt/tm/ltm/data-group/internal {"records": [{"name": "...<base64 image>..."}], "type": "string", "name": "background_images"} Method POST mgmt/tm/ltm/data-group/internal returned: {"kind":"tm:ltm:data-group:internal:internalstate","name":"background_images","fullPath":"background_images","generation":244,"selfLink":"https://localhost/mgmt/tm/ltm/data-group/internal/background_images?ver=11.6.0","type":"string","records":[{"name":"....large base64 image...."}]} 2015-11-12 09:49:29 : Uploading Datagroup ... 
image for sorry page GET mgmt/tm/ltm/data-group/internal "" Method GET mgmt/tm/ltm/data-group/internal returned: {"kind":"tm:ltm:data-group:internal:internalcollectionstate","selfLink":"https://localhost/mgmt/tm/ltm/data-group/internal?ver=11.6.0","items":[{"kind":"tm:ltm:data-group:internal:internalstate","name":"aol","partition":"Common","fullPath":"/Common/aol","generation":1,"selfLink":"https://localhost/mgmt/tm/ltm/data-group/internal/~Common~aol?ver=11.6.0","type":"ip","records":[{"name":"64.12.96.0/19"},{"name":"195.93.16.0/20"},{"name":"195.93.48.0/22"},{"name":"195.93.64.0/19"},{"name":"195.93.96.0/19"},{"name":"198.81.0.0/22"},{"name":"198.81.8.0/23"},{"name":"198.81.16.0/20"},{"name":"202.67.65.128/25"},{"name":"205.188.112.0/20"},{"name":"205.188.146.144/30"},{"name":"205.188.192.0/20"},{"name":"205.188.208.0/23"},{"name":"207.200.112.0/21"}]},{"kind":"tm:ltm:data-group:internal:internalstate","name":"background_images","partition":"Common","fullPath":"/Common/background_images","generation":244,"selfLink":"https://localhost/mgmt/tm/ltm/data-group/internal/~Common~background_images?ver=11.6.0","type":"string","records":[{"name":"...<base64 image>..."}]},{"kind":"tm:ltm:data-group:internal:internalstate","name":"images","partition":"Common","fullPath":"/Common/images","generation":1,"selfLink":"https://localhost/mgmt/tm/ltm/data-group/internal/~Common~images?ver=11.6.0","type":"string","records":[{"name":".bmp"},{"name":".gif"},{"name":".jpg"}]},{"kind":"tm:ltm:data-group:internal:internalstate","name":"private_net","partition":"Common","fullPath":"/Common/private_net","generation":1,"selfLink":"https://localhost/mgmt/tm/ltm/data-group/internal/~Common~private_net?ver=11.6.0","type":"ip","records":[{"name":"10.0.0.0/8"},{"name":"172.16.0.0/12"},{"name":"192.168.0.0/16"}]}]} 2015-11-12 09:49:30 : Uploading Datagroup ... image for sorry page POST mgmt/tm/ltm/data-group/internal {"records": [{"name": "...<base64 image>..."}], "type": "string", "name": "sorry_images"} Method POST mgmt/tm/ltm/data-group/internal returned: {"kind":"tm:ltm:data-group:internal:internalstate","name":"sorry_images","fullPath":"sorry_images","generation":245,"selfLink":"https://localhost/mgmt/tm/ltm/data-group/internal/sorry_images?ver=11.6.0","type":"string","records":[{"name":"....base 64 image...."}]} 2015-11-12 09:49:32 : Uploading iRules ... sorry_page_rule GET mgmt/tm/ltm/rule "" Method GET mgmt/tm/ltm/rule returned: {"kind":"tm:ltm:rule:rulecollectionstate","selfLink":"https://localhost/mgmt/tm/ltm/rule?ver=11.6.0","items":[<a whole bunch of irules>]} 2015-11-12 09:49:35 : Uploading iRules ... demo_analytics_rule GET mgmt/tm/ltm/rule "" Method GET mgmt/tm/ltm/rule returned: {"kind":"tm:ltm:rule:rulecollectionstate","selfLink":"https://localhost/mgmt/tm/ltm/rule?ver=11.6.0","items":[<a whole bunnch of irules>]} 2015-11-12 09:49:38 : Create the ASM policy GET mgmt/tm/asm/policies "" Method GET mgmt/tm/asm/policies returned: {"selfLink":"https://localhost/mgmt/tm/asm/policies","kind":"tm:asm:policies:policycollectionstate","totalItems":0,"items":[]} 2015-11-12 09:49:39 : Create the ASM policy POST mgmt/tm/asm/policies {"caseInsensitive": true, "name": "linux_high-Vip1", "applicationLanguage": "utf-8"} ... 
2015-11-12 09:49:53 : Import our policy over the one existing above POST mgmt/tm/asm/tasks/import-policy {"policyReference": {"link": "https://localhost/mgmt/tm/asm/policies/qnU5A8PUMuPurLRLUt8VHg"}, "isBase64": true, "file": "...<base64 policy file>...","lastUpdateMicros":1.447350597e+15,"selfLink":"https://localhost/mgmt/tm/asm/tasks/import-policy/hP37L9EM650WeWKkgX7law","kind":"tm:asm:tasks:import-policy:import-policy-taskstate","policyReference":{"link":"https://localhost/mgmt/tm/asm/policies/qnU5A8PUMuPurLRLUt8VHg"},"startTime":"2015-11-12T17:49:57Z","id":"hP37L9EM650WeWKkgX7law"} 2015-11-12 09:50:00 : Determine whether the asm policy import task is complete GET mgmt/tm/asm/tasks/import-policy/hP37L9EM650WeWKkgX7law "" Method GET mgmt/tm/asm/tasks/import-policy/hP37L9EM650WeWKkgX7law returned: {"isBase64":true,"status":"STARTED","file":"...<base64 policy file>...","lastUpdateMicros":1.447350597e+15,"selfLink":"https://localhost/mgmt/tm/asm/tasks/import-policy/hP37L9EM650WeWKkgX7law","kind":"tm:asm:tasks:import-policy:import-policy-taskstate","policyReference":{"link":"https://localhost/mgmt/tm/asm/policies/qnU5A8PUMuPurLRLUt8VHg"},"startTime":"2015-11-12T17:49:57Z","id":"hP37L9EM650WeWKkgX7law"} 2015-11-12 09:50:05 : Determine whether the asm policy import task is complete GET mgmt/tm/asm/tasks/import-policy/hP37L9EM650WeWKkgX7law "" Method GET mgmt/tm/asm/tasks/import-policy/hP37L9EM650WeWKkgX7law returned: {"isBase64":true,"status":"COMPLETED","file":"...<base64 policy file>...","lastUpdateMicros":1.447350605e+15,"selfLink":"https://localhost/mgmt/tm/asm/tasks/import-policy/hP37L9EM650WeWKkgX7law","kind":"tm:asm:tasks:import-policy:import-policy-taskstate","policyReference":{"link":"https://localhost/mgmt/tm/asm/policies/qnU5A8PUMuPurLRLUt8VHg"},"endTime":"2015-11-12T17:50:05Z","startTime":"2015-11-12T17:49:57Z","id":"hP37L9EM650WeWKkgX7law","result":{"policyReference":{"link":"https://localhost/mgmt/tm/asm/policies/qnU5A8PUMuPurLRLUt8VHg"},"message":"Security policy version information will be ignored, since the file has been modified since it was exported.\nSignature Set linux-high (previously used in this security policy) was added to this system.\nThe operation was completed successfully. 
The security policy name is '/Common/linux_high-Vip1'."}} 2015-11-12 09:50:08 : Apply the ASM policy POST mgmt/tm/asm/tasks/apply-policy {"policyReference": {"link": "https://localhost/mgmt/tm/asm/policies/qnU5A8PUMuPurLRLUt8VHg"}} Method POST mgmt/tm/asm/tasks/apply-policy returned: {"selfLink":"https://localhost/mgmt/tm/asm/tasks/apply-policy/38B8slfPm1_lBBRG1STNeg","kind":"tm:asm:tasks:apply-policy:apply-policy-taskstate","policyReference":{"link":"https://localhost/mgmt/tm/asm/policies/qnU5A8PUMuPurLRLUt8VHg"},"status":"NEW","lastUpdateMicros":1.447350608e+15,"startTime":"2015-11-12T17:50:08Z","id":"38B8slfPm1_lBBRG1STNeg"} 2015-11-12 09:50:10 : Determine whether the asm policy apply task is complete GET mgmt/tm/asm/tasks/apply-policy/38B8slfPm1_lBBRG1STNeg "" Method GET mgmt/tm/asm/tasks/apply-policy/38B8slfPm1_lBBRG1STNeg returned: {"selfLink":"https://localhost/mgmt/tm/asm/tasks/apply-policy/38B8slfPm1_lBBRG1STNeg","kind":"tm:asm:tasks:apply-policy:apply-policy-taskstate","policyReference":{"link":"https://localhost/mgmt/tm/asm/policies/qnU5A8PUMuPurLRLUt8VHg"},"status":"STARTED","lastUpdateMicros":1.447350608e+15,"startTime":"2015-11-12T17:50:08Z","id":"38B8slfPm1_lBBRG1STNeg"} 2015-11-12 09:50:14 : Determine whether the asm policy apply task is complete GET mgmt/tm/asm/tasks/apply-policy/38B8slfPm1_lBBRG1STNeg "" Method GET mgmt/tm/asm/tasks/apply-policy/38B8slfPm1_lBBRG1STNeg returned: {"selfLink":"https://localhost/mgmt/tm/asm/tasks/apply-policy/38B8slfPm1_lBBRG1STNeg","kind":"tm:asm:tasks:apply-policy:apply-policy-taskstate","policyReference":{"link":"https://localhost/mgmt/tm/asm/policies/qnU5A8PUMuPurLRLUt8VHg"},"status":"STARTED","lastUpdateMicros":1.447350608e+15,"startTime":"2015-11-12T17:50:08Z","id":"38B8slfPm1_lBBRG1STNeg"} 2015-11-12 09:50:20 : Create an LTM policy for use with by iApp which associates the ASM policy GET mgmt/tm/ltm/policy "" Method GET mgmt/tm/ltm/policy returned: 
{"kind":"tm:ltm:policy:policycollectionstate","selfLink":"https://localhost/mgmt/tm/ltm/policy?ver=11.6.0","items":[{"kind":"tm:ltm:policy:policystate","name":"_sys_CEC_SSL_client_policy","partition":"Common","fullPath":"/Common/_sys_CEC_SSL_client_policy","generation":1,"selfLink":"https://localhost/mgmt/tm/ltm/policy/~Common~_sys_CEC_SSL_client_policy?ver=11.6.0","controls":["classification"],"hints":["no-write","no-delete","no-exclusion"],"requires":["ssl-persistence"],"strategy":"/Common/first-match","rulesReference":{"link":"https://localhost/mgmt/tm/ltm/policy/~Common~_sys_CEC_SSL_client_policy/rules?ver=11.6.0","isSubcollection":true}},{"kind":"tm:ltm:policy:policystate","name":"_sys_CEC_SSL_server_policy","partition":"Common","fullPath":"/Common/_sys_CEC_SSL_server_policy","generation":1,"selfLink":"https://localhost/mgmt/tm/ltm/policy/~Common~_sys_CEC_SSL_server_policy?ver=11.6.0","controls":["classification"],"hints":["no-write","no-delete","no-exclusion"],"requires":["ssl-persistence"],"strategy":"/Common/first-match","rulesReference":{"link":"https://localhost/mgmt/tm/ltm/policy/~Common~_sys_CEC_SSL_server_policy/rules?ver=11.6.0","isSubcollection":true}},{"kind":"tm:ltm:policy:policystate","name":"_sys_CEC_video_policy","partition":"Common","fullPath":"/Common/_sys_CEC_video_policy","generation":1,"selfLink":"https://localhost/mgmt/tm/ltm/policy/~Common~_sys_CEC_video_policy?ver=11.6.0","controls":["classification"],"hints":["no-write","no-delete","no-exclusion"],"requires":["http"],"strategy":"/Common/first-match","rulesReference":{"link":"https://localhost/mgmt/tm/ltm/policy/~Common~_sys_CEC_video_policy/rules?ver=11.6.0","isSubcollection":true}}]} 2015-11-12 09:50:23 : Create an LTM policy for use with by iApp which associates the ASM policy POST mgmt/tm/ltm/policy {"name": "ltm_policy_w_asm_linux_high-Vip1", "rules": [{"ordinal": 1, "conditions": [], "name": "rule-1", "actions": [{"status": 0, "enable": true, "name": "0", "request": true, "vlanId": 0, "code": 0, "policy": "/Common/linux_high-Vip1", "port": 0, "asm": true}]}], "partition": "Common", "controls": ["asm"], "strategy": "/Common/first-match", "requires": ["http"]} Method POST mgmt/tm/ltm/policy returned: {"kind":"tm:ltm:policy:policystate","name":"ltm_policy_w_asm_linux_high-Vip1","partition":"Common","fullPath":"/Common/ltm_policy_w_asm_linux_high-Vip1","generation":318,"selfLink":"https://localhost/mgmt/tm/ltm/policy/~Common~ltm_policy_w_asm_linux_high-Vip1?ver=11.6.0","controls":["asm"],"requires":["http"],"strategy":"/Common/first-match","rulesReference":{"link":"https://localhost/mgmt/tm/ltm/policy/~Common~ltm_policy_w_asm_linux_high-Vip1/rules?ver=11.6.0","isSubcollection":true}} 2015-11-12 09:50:25 : Deploy the iApp template, since we are not using a default iApp on box GET mgmt/tm/sys/application/template "" Method GET mgmt/tm/sys/application/template returned: {"kind":"tm:sys:application:template:templatecollectionstate","selfLink":"https://localhost/mgmt/tm/sys/application/template?ver=11.6.0","items":[...<list of iApp templates on the box>...]} 2015-11-12 09:50:30 : Deploy the iApp service from the f5 http backport template GET mgmt/tm/sys/application/service "" Method GET mgmt/tm/sys/application/service returned: {"kind":"tm:sys:application:service:servicecollectionstate","selfLink":"https://localhost/mgmt/tm/sys/application/service?ver=11.6.0"} 2015-11-12 09:51:37 : Deploying/updating webserver pool GET mgmt/tm/ltm/pool "" Method GET mgmt/tm/ltm/pool returned: 
{"kind":"tm:ltm:pool:poolcollectionstate","selfLink":"https://localhost/mgmt/tm/ltm/pool?ver=11.6.0","items":[{"kind":"tm:ltm:pool:poolstate","name":"Vip1_pool","partition":"Common","fullPath":"/Common/Vip1_pool","generation":236,"selfLink":"https://localhost/mgmt/tm/ltm/pool/~Common~Vip1_pool?ver=11.6.0","allowNat":"yes","allowSnat":"yes","ignorePersistedWeight":"disabled","ipTosToClient":"pass-through","ipTosToServer":"pass-through","linkQosToClient":"pass-through","linkQosToServer":"pass-through","loadBalancingMode":"round-robin","minActiveMembers":0,"minUpMembers":0,"minUpMembersAction":"failover","minUpMembersChecking":"disabled","monitor":"/Common/http ","queueDepthLimit":0,"queueOnConnectionLimit":"disabled","queueTimeLimit":0,"reselectTries":0,"serviceDownAction":"none","slowRampTime":10,"membersReference":{"link":"https://localhost/mgmt/tm/ltm/pool/~Common~Vip1_pool/members?ver=11.6.0","isSubcollection":true}},{"kind":"tm:ltm:pool:poolstate","name":"syslog_pool","partition":"Common","fullPath":"/Common/syslog_pool","generation":239,"selfLink":"https://localhost/mgmt/tm/ltm/pool/~Common~syslog_pool?ver=11.6.0","allowNat":"yes","allowSnat":"yes","ignorePersistedWeight":"disabled","ipTosToClient":"pass-through","ipTosToServer":"pass-through","linkQosToClient":"pass-through","linkQosToServer":"pass-through","loadBalancingMode":"round-robin","minActiveMembers":0,"minUpMembers":0,"minUpMembersAction":"failover","minUpMembersChecking":"disabled","monitor":"/Common/tcp ","queueDepthLimit":0,"queueOnConnectionLimit":"disabled","queueTimeLimit":0,"reselectTries":0,"serviceDownAction":"none","slowRampTime":10,"membersReference":{"link":"https://localhost/mgmt/tm/ltm/pool/~Common~syslog_pool/members?ver=11.6.0","isSubcollection":true}}]} 2015-11-12 09:51:37 : Deploying/updating webserver pool POST mgmt/tm/ltm/pool {"name": "Vip2_pool", "members": [{"description": "Name=/boring_lovelace,ContainerHostname=a0085832ad28,Image=mutzel/all-in-one-hackazon:postinstall", "name": "172.16.14.87:80", "address": "172.16.14.87"}], "monitor": "http"} Method POST mgmt/tm/ltm/pool returned: {"kind":"tm:ltm:pool:poolstate","name":"Vip2_pool","fullPath":"Vip2_pool","generation":352,"selfLink":"https://localhost/mgmt/tm/ltm/pool/Vip2_pool?ver=11.6.0","allowNat":"yes","allowSnat":"yes","ignorePersistedWeight":"disabled","ipTosToClient":"pass-through","ipTosToServer":"pass-through","linkQosToClient":"pass-through","linkQosToServer":"pass-through","loadBalancingMode":"round-robin","minActiveMembers":0,"minUpMembers":0,"minUpMembersAction":"failover","minUpMembersChecking":"disabled","monitor":"/Common/http ","queueDepthLimit":0,"queueOnConnectionLimit":"disabled","queueTimeLimit":0,"reselectTries":0,"serviceDownAction":"none","slowRampTime":10,"membersReference":{"link":"https://localhost/mgmt/tm/ltm/pool/~Common~Vip2_pool/members?ver=11.6.0","isSubcollection":true}} 2015-11-12 09:51:39 : Uploading iRules ... irule_random_snat GET mgmt/tm/ltm/rule "" Method GET mgmt/tm/ltm/rule returned: {"kind":"tm:ltm:rule:rulecollectionstate","selfLink":"https://localhost/mgmt/tm/ltm/rule?ver=11.6.0","items":[...<list of irules on the box>...]} ... 
2015-11-12 09:51:43 : Setup the HTTP virtual server POST mgmt/tm/ltm/virtual {"name": "Vip2_http", "rules": ["/Common/irule_random_snat"], "translateAddress": "enabled", "destination": "/Common/172.16.13.145:80", "mask": "255.255.255.255", "sourceAddressTranslation": {"type": "automap"}, "profiles": [{"name": "http"}, {"name": "tcp-wan-optimized", "context": "clientside"}, {"name": "tcp-lan-optimized", "context": "serverside"}], "translatePort": "enabled", "ipProtocol": "tcp", "pool": "/Common/Vip2_pool"} Method POST mgmt/tm/ltm/virtual returned: {"kind":"tm:ltm:virtual:virtualstate","name":"Vip2_http","fullPath":"Vip2_http","generation":355,"selfLink":"https://localhost/mgmt/tm/ltm/virtual/Vip2_http?ver=11.6.0","addressStatus":"yes","autoLasthop":"default","cmpEnabled":"yes","connectionLimit":0,"destination":"/Common/172.16.13.145:80","enabled":true,"gtmScore":0,"ipProtocol":"tcp","mask":"255.255.255.255","mirror":"disabled","mobileAppTunnel":"disabled","nat64":"disabled","pool":"/Common/Vip2_pool","rateLimit":"disabled","rateLimitDstMask":0,"rateLimitMode":"object","rateLimitSrcMask":0,"source":"0.0.0.0/0","sourceAddressTranslation":{"type":"automap"},"sourcePort":"preserve","synCookieStatus":"not-activated","translateAddress":"enabled","translatePort":"enabled","vlansDisabled":true,"vsIndex":4,"rules":["/Common/irule_random_snat"],"policiesReference":{"link":"https://localhost/mgmt/tm/ltm/virtual/~Common~Vip2_http/policies?ver=11.6.0","isSubcollection":true},"profilesReference":{"link":"https://localhost/mgmt/tm/ltm/virtual/~Common~Vip2_http/profiles?ver=11.6.0","isSubcollection":true}} 2015-11-12 09:51:45 : Setup the HTTPS virtual server POST mgmt/tm/ltm/virtual {"name": "Vip2_https", "rules": ["/Common/irule_random_snat"], "translateAddress": "enabled", "destination": "/Common/172.16.13.145:443", "mask": "255.255.255.255", "sourceAddressTranslation": {"type": "automap"}, "profiles": [{"name": "tcp-ssl-wan-optimized", "context": "clientside"}, {"name": "tcp-ssl-lan-optimized", "context": "serverside"}], "translatePort": "enabled", "ipProtocol": "tcp", "pool": "/Common/Vip2_pool"} Method POST mgmt/tm/ltm/virtual returned: {"kind":"tm:ltm:virtual:virtualstate","name":"Vip2_https","fullPath":"Vip2_https","generation":356,"selfLink":"https://localhost/mgmt/tm/ltm/virtual/Vip2_https?ver=11.6.0","addressStatus":"yes","autoLasthop":"default","cmpEnabled":"yes","connectionLimit":0,"destination":"/Common/172.16.13.145:443","enabled":true,"gtmScore":0,"ipProtocol":"tcp","mask":"255.255.255.255","mirror":"disabled","mobileAppTunnel":"disabled","nat64":"disabled","pool":"/Common/Vip2_pool","rateLimit":"disabled","rateLimitDstMask":0,"rateLimitMode":"object","rateLimitSrcMask":0,"source":"0.0.0.0/0","sourceAddressTranslation":{"type":"automap"},"sourcePort":"preserve","synCookieStatus":"not-activated","translateAddress":"enabled","translatePort":"enabled","vlansDisabled":true,"vsIndex":5,"rules":["/Common/irule_random_snat"],"policiesReference":{"link":"https://localhost/mgmt/tm/ltm/virtual/~Common~Vip2_https/policies?ver=11.6.0","isSubcollection":true},"profilesReference":{"link":"https://localhost/mgmt/tm/ltm/virtual/~Common~Vip2_https/profiles?ver=11.6.0","isSubcollection":true}} Deploying an iApp template using iControlREST Because we recognize that it may be not obvious how we are deploying iApp templates using iControlREST, we break it down into more detail here. 
First, note that there is no 'import' action we can invoke via REST to import the iApp template which mirrors the action in the Configuration Utility (GUI). This means that we need to create the JSON payload containing the iApp and POST it. Given an iApp template, like those found on DevCentral, here are the steps to create the JSON body:

1. On a pre-existing BIG-IP install (or one you have created in your build process), import the iApp template in the Configuration Utility into the 'Common' partition.
2. Do an HTTP GET to retrieve the iApp template payload. Make sure that you use expandSubcollections=true as a query parameter, as we want to include the contents of the 'actionsReference' sub-collection.

curl -sku <user>:<password> -X GET https://<management ip>/mgmt/tm/sys/application/template/~Common~<name of your iApp>?expandSubcollections=true

You should get something back that looks like the following (which is the payload for the f5.http backport iApp). I have truncated the 'implementation', 'presentation' and 'htmlHelp' actions:

{ "actionsReference": { "isSubcollection": true, "items": [ { "fullPath": "definition", "generation": 4672, "htmlHelp": "....", "implementation": "...", "kind": "tm:sys:application:template:actions:actionsstate", "name": "definition", "presentation": "...", "roleAcl": [ "admin", "manager", "resource-admin" ], "selfLink": "https://localhost/mgmt/tm/sys/application/template/~Common~f5.http.backport.1.1.2/actions/definition?ver=11.6.0" } ], "link": "https://localhost/mgmt/tm/sys/application/template/~Common~f5.http.backport.1.1.2/actions?ver=11.6.0" }, "fullPath": "/Common/f5.http.backport.1.1.2", "generation": 4672, "ignoreVerification": "false", "kind": "tm:sys:application:template:templatestate", "name": "f5.http.backport.1.1.2", "partition": "Common", "requiresBigipVersionMin": "11.6.0", "selfLink": "https://localhost/mgmt/tm/sys/application/template/~Common~f5.http.backport.1.1.2?expandSubcollections=true&ver=11.6.0", "totalSigningStatus": "not-all-signed", "verificationStatus": "none" }

Before we can POST this payload back to any BIG-IP, we need to clean up a few things:

1. Remove the extraneous fields, including 'verificationStatus', 'totalSigningStatus', 'selfLink', 'partition', 'kind', 'generation', and 'fullPath'.
2. Make a new top-level key in the payload called 'actions'. The value for this key should be everything in the 'items' array under the top-level key 'actionsReference'.
3. Finally, delete the 'actionsReference' key/value pair from the JSON body.

The final JSON payload should look like:

{ "actions": [ { "htmlHelp": "....", "implementation": "...", "name": "definition", "presentation": "...", "roleAcl": [ "admin", "manager", "resource-admin" ] } ], "ignoreVerification": "false", "name": "f5.http.backport.1.1.2", "requiresBigipVersionMin": "11.6.0", "totalSigningStatus": "not-all-signed" }

Finally, we can use this to deploy an iApp template on BIG-IP. In the example below, the iApp_template.json file is formatted like the above. I have also attached it to this page for inspection.

curl -sku rest_admin:<obfuscated> -H "Content-type: application/json" -X POST -d@./iApp_template.json https://52.23.149.16//mgmt/tm/sys/application/template
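If you are scripting this workflow rather than editing the JSON by hand, the cleanup steps above are easy to automate. The snippet below is a minimal sketch using Python's standard json module; the file names are hypothetical and the field list simply mirrors the steps described above, so treat it as an illustration rather than the project's actual code.

import json

# Read-only fields returned by the GET that we do not want to POST back
# (see the cleanup steps above).
EXTRANEOUS = ("verificationStatus", "totalSigningStatus", "selfLink",
              "partition", "kind", "generation", "fullPath")

with open("template_from_get.json") as f:   # hypothetical: saved GET output
    template = json.load(f)

for field in EXTRANEOUS:
    template.pop(field, None)

# Promote the 'items' array from 'actionsReference' to a new top-level
# 'actions' key, stripping the same read-only fields from each action,
# then drop 'actionsReference' itself.
actions = template.pop("actionsReference", {}).get("items", [])
for action in actions:
    for field in EXTRANEOUS:
        action.pop(field, None)
template["actions"] = actions

with open("iApp_template.json", "w") as f:  # ready for the curl POST above
    json.dump(template, f, indent=2)

The same approach works for the iApp service payload described in the next section; only the list of fields to strip changes slightly.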
Before you go POSTing iApps to any old version of TMOS, be aware that there are still some remaining issues you might have to solve. Some of the official iApps found on DevCentral are prepended with a TCL library that defines functions used within the iApp. The iApp solutions team made this design decision so that newer iApps will work against older versions of TMOS. For example, see the 'F5 HTTP' template, which starts with the library definition on line 0: "cli script f5.iapp.1.3.0.cli {....". When you export the iApp template to JSON using REST as we have documented above, this library will not be included in the payload. Because newer versions of BIG-IP (11.6) might already include a version of this 'iApp' library, you can work around this issue by updating the function references to use the existing library on-box. Here are the high-level steps:

1. Download the iApp from DevCentral.
2. Change the function references to leverage the library that is installed on your BIG-IP. See an example of this by comparing the F5 HTTP template on the codeshare with the one attached to this page. You'll probably have to do some "find and replace" like the following:
f5.iapp.1.3.0.cli:iapp_get_provisioned -> iapp::get_provisioned
There may be some references to functions that do not exist yet. These will have to be dealt with on a case-by-case basis.
3. Upload the iApp to BIG-IP and export it as we documented above.

Deploying an iApp service using iControlREST

Fortunately, using iControlREST to manage instances of iApps (also known as iApp services) is much easier than managing templates. The high-level steps are similar:

1. Deploy an iApp service via the Configuration Utility.
2. Do an HTTP GET to acquire the JSON representation (notice the URL formatting!).

curl -sku <user>:<password> -X GET https://<management ip>/mgmt/tm/sys/application/service/~Common~<your iapp name>.app~<your iapp name>

Depending on the variables presented by the iApp template, the JSON payload for the iApp service might look something like:

{ "deviceGroup": "none", "fullPath": "/Common/Vip1_iApp.app/Vip1_iApp", "generation": 4674, "inheritedDevicegroup": "true", "inheritedTrafficGroup": "true", "kind": "tm:sys:application:service:servicestate", "lists": [ { "encrypted": "no", "name": "irules__irules", "value": [ "/Common/irule_demo_analytics", "/Common/irule_sorry_page" ] } ], "name": "Vip1_iApp", "partition": "Common", "selfLink": "https://localhost/mgmt/tm/sys/application/service/~Common~Vip1_iApp.app~Vip1_iApp?ver=11.6.0", "strictUpdates": "enabled", "subPath": "Vip1_iApp.app", "tables": [ { "name": "basic__snatpool_members" }, { "name": "net__snatpool_members" }, { "name": "optimizations__hosts" }, { "columnNames": [ "name" ], "name": "pool__hosts", "rows": [ { "row": [ "demo.example.com" ] } ] }, { "name": "pool__members" }, { "name": "server_pools__servers" } ], "template": "/Common/f5.http.backport.1.1.2", "templateModified": "yes", "trafficGroup": "/Common/traffic-group-1", "variables": [ { "encrypted": "no", "name": "asm__security_logging", "value": "asm_log_to_splunk" }, { "encrypted": "no", "name": "asm__use_asm", "value": "/Common/ltm_policy_w_asm_linux_high-Vip1" }, { "encrypted": "no", "name": "client__http_compression", "value": "/#do_not_use#" }, { "encrypted": "no", "name": "client__standard_caching_without_wa", "value": "/#do_not_use#" }, { "encrypted": "no", "name": "client__tcp_wan_opt", "value": "/Common/tcp-ssl-wan-optimized" }, { "encrypted": "no", "name": "net__client_mode", "value": "wan" }, { "encrypted": "no", "name": "net__route_to_bigip", "value": "no" }, { "encrypted": "no", "name": "net__same_subnet", "value": "no" }, { "encrypted": "no", "name": "net__server_mode", "value": "lan" }, { "encrypted": "no", "name": "net__snat_type", "value": "automap" }, { "encrypted":
"no", "name": "net__vlan_mode", "value": "all" }, { "encrypted": "no", "name": "pool__addr", "value": "172.16.13.128" }, { "encrypted": "no", "name": "pool__http", "value": "/#create_new#" }, { "encrypted": "no", "name": "pool__mask", "value": "none" }, { "encrypted": "no", "name": "pool__persist", "value": "/#cookie#" }, { "encrypted": "no", "name": "pool__pool_to_use", "value": "/Common/Vip1_pool" }, { "encrypted": "no", "name": "pool__port_secure", "value": "443" }, { "encrypted": "no", "name": "pool__redirect_port", "value": "80" }, { "encrypted": "no", "name": "pool__redirect_to_https", "value": "yes" }, { "encrypted": "no", "name": "pool__xff", "value": "yes" }, { "encrypted": "no", "name": "server__oneconnect", "value": "/#do_not_use#" }, { "encrypted": "no", "name": "server__tcp_lan_opt", "value": "/Common/tcp-wan-optimized" }, { "encrypted": "no", "name": "server__tcp_req_queueing", "value": "no" }, { "encrypted": "no", "name": "ssl__cert", "value": "/Common/default.crt" }, { "encrypted": "no", "name": "ssl__client_ssl_profile", "value": "/#create_new#" }, { "encrypted": "no", "name": "ssl__key", "value": "/Common/default.key" }, { "encrypted": "no", "name": "ssl__mode", "value": "client_ssl" }, { "encrypted": "no", "name": "ssl__use_chain_cert", "value": "/#do_not_use#" }, { "encrypted": "no", "name": "ssl_encryption_questions__advanced", "value": "yes" }, { "encrypted": "no", "name": "ssl_encryption_questions__help", "value": "hide" }, { "encrypted": "no", "name": "stats__analytics", "value": "/Common/Vip1-demo_analytics" }, { "encrypted": "no", "name": "stats__request_logging", "value": "/#do_not_use#" } ] } As ealier, remove some of the fields that don't make sense to re-post. This includes 'deviceGroup', 'fullPath', 'generation', 'kind', 'partition', 'selfLink', and 'subPath'. You can now use this JSON body with updates to the variable values as needed. Example python code for deploying iApp Templates In addition to the above procedures, we'd like you point you to some python examples which show how to push iApp templates using REST. Hitesh Patel, another monster F5er, has put together the following code: https://github.com/0xHiteshPatel/appsvcs_integration_iapp/tree/80cc40dcf85e352a25c7ec44d9e4dcc253e51e69/scripts In his words: "that's 152 lines of awesome right there". His examples run against 11.5.x, 11.6.x and 12.0. Debugging When trying to create or update an instance of an iApp via REST, you will get error messages in the HTTP response if your POST is unsuccessful. In addition to the HTTP payload in the response, the following debug steps can be helpful: 1) Set the scriptd log level to debug: modify sys scriptd log-level debug 2) Look at the TMSH output from the iApp printed to /var/log/scriptd.out. Typically the last line will show the error that has occured. In closing The above examples should bring you one step closer to automating the delivery of advanced network services for your applications. We're looking forward to doing future posts on how to automate your deployment. Finally, if you haven't checked out the Application Services Integration iApp, also by Hitesh, you should probably do so now:https://github.com/0xHiteshPatel/appsvcs_integration_iapp. Cheers!2.5KViews0likes1CommentConverting a Cisco ACE configuration file to F5 BIG-IP Format
In September, Cisco announced that it was ceasing development and pulling back on sales of its Application Control Engine (ACE) load balancing modules. Customers of Cisco’s ACE product line will now have to look for a replacement product to solve their load balancing and application delivery needs. One of the first questions that comes up when a customer starts looking into replacement products surrounds the issue of upgradability: will the customer be able to import their current configuration into the new technology, or will they have to start with the new product from scratch? For smaller businesses, starting over can be a refreshing way to clean up some of the things you’ve been meaning to address but weren’t able to for one reason or another. But for a large majority of the users out there, starting over from nothing with a new product is a daunting task.

To help those users considering a move to the F5 universe, DevCentral has included several scripts to assist with the configuration migration process. In our Codeshare section we created some scripts useful in converting ACE configurations into their respective F5 counterparts.

https://devcentral.f5.com/s/articles/cisco-ace-to-f5-big-ip
https://devcentral.f5.com/s/articles/Cisco-ACE-to-F5-Conversion-Python-3
https://devcentral.f5.com/s/articles/cisco-ace-to-f5-big-ip-via-tmsh

We also have scripts covering Cisco’s CSS (https://devcentral.f5.com/s/articles/cisco-css-to-f5-big-ip) and CSM (https://devcentral.f5.com/s/articles/cisco-csm-to-f5-big-ip) products as well.

In this article, I’m going to focus on the “ace2f5-tmsh” script in the ace2f5.zip script library. The script takes an ACE configuration as input and creates a TMSH script that builds the corresponding F5 BIG-IP objects.

ace2f5-tmsh.pl

$ perl ace2f5-tmsh.pl ace_config > tmsh_script

We could leave it at that, but I’ll use this article to discuss the components of the ACE configuration and how they map to F5 objects.

ip

The ip object in the ACE configuration is defined like this:

ip route 0.0.0.0 0.0.0.0 10.211.143.1

It equates to a tmsh “net route” command:

net route 0.0.0.0-0 {
 network 0.0.0.0/0
 gw 10.211.143.1
}

rserver

An “rserver” is basically a node: it contains a server address plus an optional “inservice” attribute indicating whether it’s active or not.

ACE Configuration

rserver host R190-JOEINC0060
 ip address 10.213.240.85
rserver host R191-JOEINC0061
 ip address 10.213.240.86
 inservice
rserver host R192-JOEINC0062
 ip address 10.213.240.88
 inservice
rserver host R193-JOEINC0063
 ip address 10.213.240.89
 inservice

The script uses this section to find the IP address for a given rserver hostname.

serverfarm

A serverfarm is an LTM pool, except that it doesn’t have a port assigned to it yet.

ACE Configuration

serverfarm host MySite-JoeInc
 predictor hash url
 rserver R190-JOEINC0060
 inservice
 rserver R191-JOEINC0061
 inservice
 rserver R192-JOEINC0062
 inservice
 rserver R193-JOEINC0063
 inservice

F5 Configuration

ltm pool Insiteqa-JoeInc {
 load-balancing-mode predictive-node
 members { 10.213.240.86:any { address 10.213.240.86 }}
 members { 10.213.240.88:any { address 10.213.240.88 }}
 members { 10.213.240.89:any { address 10.213.240.89 }}
}
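The actual ace2f5 scripts are written in Perl (with a Python 3 variant linked above) and handle far more of the configuration than this. Purely to give a feel for what the rserver and serverfarm mapping just shown involves, here is a simplified, hypothetical Python sketch; the function names and output format are illustrative only and are not taken from the real scripts.

import re

def parse_rservers(ace_config):
    # Map rserver hostname -> IP address, keeping only hosts marked 'inservice'.
    rservers, current, addr = {}, None, None
    for line in ace_config.splitlines():
        line = line.strip()
        m = re.match(r"rserver host (\S+)", line)
        if m:
            current, addr = m.group(1), None
        elif current and line.startswith("ip address "):
            addr = line.split()[-1]
        elif current and line == "inservice" and addr:
            rservers[current] = addr
    return rservers

def serverfarms_to_pools(ace_config, rservers):
    # Emit a tmsh 'ltm pool' command for each 'serverfarm host' stanza,
    # resolving member addresses through the rserver lookup table.
    pools, name = {}, None
    for line in ace_config.splitlines():
        line = line.strip()
        m = re.match(r"serverfarm host (\S+)", line)
        if m:
            name = m.group(1)
            pools[name] = []
        elif name:
            m = re.match(r"rserver (\S+)", line)
            if m and m.group(1) in rservers:
                pools[name].append(rservers[m.group(1)])
    commands = []
    for name, addrs in pools.items():
        members = " ".join(
            "members { %s:any { address %s }}" % (a, a) for a in addrs)
        commands.append("ltm pool %s { %s }" % (name, members))
    return commands

# Example usage, given the ACE text above in a string called config_text:
#   for cmd in serverfarms_to_pools(config_text, parse_rservers(config_text)):
#       print(cmd)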
probe

A “probe” is an LTM monitor, except that it does not have a port.

ACE Configuration

probe tcp MySite-JoeInc
 interval 5
 faildetect 2
 passdetect interval 10
 passdetect count 2

It will map to the TMSH “ltm monitor” command.

F5 Configuration

ltm monitor Insiteqa-JoeInc {
 defaults from tcp
 interval 5
 timeout 10
 retry 2
}

sticky

The “sticky” object is a way to create a persistence profile. First you tie the serverfarm to the persistence profile, then you tie the profile to the virtual server.

ACE Configuration

sticky ip-netmask 255.255.255.255 address source MySite-JoeInc-sticky
 timeout 60
 replicate sticky
 serverfarm MySite-JoeInc

class-map

A “class-map” assigns a listener, that is, a virtual IP address and port number, which is used for the client side and server side of the connection.

ACE Configuration

class-map match-any vip-MySite-JoeInc-12345
 2 match virtual-address 10.213.238.140 tcp eq 12345
class-map match-any vip-MySite-JoeInc-1433
 2 match virtual-address 10.213.238.140 tcp eq 1433
class-map match-any vip-MySite-JoeInc-31314
 2 match virtual-address 10.213.238.140 tcp eq 31314
class-map match-any vip-MySite-JoeInc-8080
 2 match virtual-address 10.213.238.140 tcp eq 8080
class-map match-any vip-MySite-JoeInc-http
 2 match virtual-address 10.213.238.140 tcp eq www
class-map match-any vip-MySite-JoeInc-https
 2 match virtual-address 10.213.238.140 tcp eq https

policy-map

A policy-map of type loadbalance simply ties the persistence profile to the virtual server. The “multi-match” attribute constructs the virtual server by tying a number of objects together.

ACE Configuration

policy-map type loadbalance first-match vip-pol-MySite-JoeInc
 class class-default
 sticky-serverfarm MySite-JoeInc-sticky
policy-map multi-match lb-MySite-JoeInc
 class vip-MySite-JoeInc-http
 loadbalance vip inservice
 loadbalance policy vip-pol-MySite-JoeInc
 loadbalance vip icmp-reply
 class vip-MySite-JoeInc-https
 loadbalance vip inservice
 loadbalance vip icmp-reply
 class vip-MySite-JoeInc-12345
 loadbalance vip inservice
 loadbalance policy vip-pol-MySite-JoeInc
 loadbalance vip icmp-reply
 class vip-MySite-JoeInc-31314
 loadbalance vip inservice
 loadbalance policy vip-pol-MySite-JoeInc
 loadbalance vip icmp-reply
 class vip-MySite-JoeInc-1433
 loadbalance vip inservice
 loadbalance policy vip-pol-MySite-JoeInc
 loadbalance vip icmp-reply
 class reals
 nat dynamic 1 vlan 240
 class vip-MySite-JoeInc-8080
 loadbalance vip inservice
 loadbalance policy vip-pol-MySite-JoeInc
 loadbalance vip icmp-reply

F5 Configuration

ltm virtual vip-Insiteqa-JoeInc-12345 {
 destination 10.213.238.140:12345
 pool Insiteqa-JoeInc
 persist my_source_addr
 profiles { tcp {} }
}
ltm virtual vip-Insiteqa-JoeInc-1433 {
 destination 10.213.238.140:1433
 pool Insiteqa-JoeInc
 persist my_source_addr
 profiles { tcp {} }
}
ltm virtual vip-Insiteqa-JoeInc-31314 {
 destination 10.213.238.140:31314
 pool Insiteqa-JoeInc
 persist my_source_addr
 profiles { tcp {} }
}
ltm virtual vip-Insiteqa-JoeInc-8080 {
 destination 10.213.238.140:8080
 pool Insiteqa-JoeInc
 persist my_source_addr
 profiles { tcp {} }
}
ltm virtual vip-Insiteqa-JoeInc-http {
 destination 10.213.238.140:http
 pool Insiteqa-JoeInc
 persist my_source_addr
 profiles { tcp {} http {} }
}
ltm virtual vip-Insiteqa-JoeInc-https {
 destination 10.213.238.140:https
 profiles { tcp {} }
}

Conclusion

If you are considering migrating from Cisco’s ACE to F5, I’d recommend you take a look at the Cisco conversion scripts to assist with the conversion.