Scheduling BIG-IP Configuration Backups via the GUI with an iApp
Beginning with BIG-IP version 11, the idea of templates has not only changed in amazing and powerful ways, it has been extended to be far more than just templates. The replacement for templates is called iApp™. But to call the iApp just a template would be woefully inaccurate and narrow. It does templates well, and takes the concept further by allowing you to re-enter a templated application and make changes. Previously, deploying an application via a template was sort of like the Ron Popeil rotisserie: “Set it, and forget it!” Once it was executed, the template process was over; it was up to you to track and potentially clean up all those objects. Now, the application service you create based on an iApp template effectively “owns” all the objects it created, so any change to the deployment adds/changes/deletes objects as necessary.

The other exciting change from the template perspective is the idea of strictness. Once an application service is configured, any object created that is owned by that service cannot be changed outside of the service itself. This means that if you want to add a pool member, it must be done within the application service, not within the pool. You can turn this off, but what a powerful protection of your services!

Update for v11.2 - https://devcentral.f5.com/s/Tutorials/TechTips/tabid/63/articleType/ArticleView/articleId/1090565/Archiving-BIG-IP-Configurations-with-an-iApp-in-v112.aspx

The Problem

I received a request from one of our MVPs that he’d really like to be able to allow his users to schedule configuration backups without dropping to the command line. Knowing that the iApp feature was releasing soon with version 11, I started to see how I might be able to coax a command line configuration from the GUI.
In training, I was told that “anything you can do in tmsh, you can do with an iApp.” This is excellent, and the basis for why I think they are going to be incredibly popular for not only controlling and managing applications, but also for extending CLI functions to the GUI. Anyway, in order to schedule a configuration backup, I need:

- A backup script
- A cron job to call said script

That’s really all there is to it.

The Solution

Thankfully, the background work is already done courtesy of a config backup codeshare entry by community user Colin Stubbs in the Advanced Design & Config Wiki. I did have to update the following bigpipe lines from the script:

- bigpipe export oneline “${SCF_FILE}” becomes tmsh save /sys config one-line file “${SCF_FILE}”
- bigpipe export “${SCF_FILE}” becomes tmsh save /sys config file "${SCF_FILE}"
- bigpipe config save “${UCS_FILE}” passphrase “${UCS_PASSPHRASE}” becomes tmsh save /sys ucs "${UCS_FILE}" passphrase "${UCS_PASSPHRASE}"
- bigpipe config save “${UCS_FILE}” becomes tmsh save /sys ucs "${UCS_FILE}"

Also, I created (according to the script comments from the codeshare entry) a /var/local/bin directory to place the script in and a /var/local/backups directory for the script to dump the backup files in. These are optional and can be changed as necessary in your deployment; you’ll just need to update the script to reflect your file system preferences. Now that I have everything I need to support a backup, I can move on to the iApp template configuration.

iApp Components

A template consists of three parts: implementation, presentation, and help. You can create an empty template, or just start with presentation or help if you like. The implementation is tmsh script language, based on the Tcl language so loved by all of us iRulers. Please reference the tmsh wiki for the available tmsh extensions to the Tcl language. The presentation is written with the Application Presentation Language, or APL, which is new and custom-built for templates.
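Putting those pieces together, the scheduled job essentially reduces to a timestamped tmsh save into the backup directory. Here's a minimal illustrative sketch (not Colin's actual script, which adds retention and error handling); the tmsh call is commented out since it only works on a BIG-IP:

```shell
#!/bin/bash
# Minimal sketch of a config backup script, assuming the /var/local/backups
# directory described above. The filename embeds hostname and timestamp so
# successive backups never collide.
BACKUP_DIR="/var/local/backups"
TIMESTAMP=$(date +%Y%m%d%H%M%S)
HOST="${HOSTNAME:-bigip1.example.com}"
UCS_FILE="${BACKUP_DIR}/f5backup-${HOST}-${TIMESTAMP}.ucs"

# On a real BIG-IP you would now run:
#   tmsh save /sys ucs "${UCS_FILE}"
echo "${UCS_FILE}"
```

The real codeshare script also supports SCF exports and a passphrase-protected UCS, per the tmsh commands listed above.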
It is defined on the APL page in the iApp wiki. The help is written in HTML, and is used to guide users in the use of the template. I’ll focus on the presentation first, and then the implementation. I’ll forego the help section in this article.

Presentation

The reason I’m starting with the presentation section of the template is that the implementation section’s Tcl variables reflect the presentation naming conventions. I want to accomplish a few things in the template presentation:

- Ask users for the frequency of backups (daily, weekly, monthly)
- If weekly, ask for the day of the week
- If monthly, ask for the day of the month and provide a warning about days 29-31
- For all frequencies, ask for the hour and minute the backup should occur

The APL code for this looks like this:

section time_select {
    choice day_select display "large" { "Daily", "Weekly", "Monthly" }
    optional ( day_select == "Weekly" ) {
        choice dow_select display "medium" { "Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday" }
    }
    optional ( day_select == "Monthly" ) {
        message dom_warning "The day of the month should be the 1st-28th. Selecting the 29th-31st will result in missed backups on some months."
        choice dom_select display "medium" tcl {
            for { set x 1 } { $x < 32 } { incr x } { append dom "$x\n" }
            return $dom
        }
    }
    choice hr_select display "medium" tcl {
        for { set x 0 } { $x < 24 } { incr x } { append hrs "$x\n" }
        return $hrs
    }
    choice min_select display "medium" tcl {
        for { set x 0 } { $x < 60 } { incr x } { append mins "$x\n" }
        return $mins
    }
}
text {
    time_select "Backup Schedule"
    time_select.day_select "Choose the frequency the backup should occur:"
    time_select.dow_select "Choose the day of the week the backup should occur:"
    time_select.dom_warning "WARNING: "
    time_select.dom_select "Choose the day of the month the backup should occur:"
    time_select.hr_select "Choose the hour the backup should occur:"
    time_select.min_select "Choose the minute the backup should occur:"
}

A few things to point out. First, the sections (which can’t be nested) provide a way to set apart functional differences in your form. I only needed one here, but it’s very useful if I were to build on and add options for selecting a UCS or SCF format, or specifying a mail address for the backups to be mailed to. Second, order matters. The objects will be displayed in the template as you define them. Third, the optional command allows me to hide questions that wouldn’t make sense given previous answers. If you dig into some of the canned templates shipping with v11, you’ll also see another use case for the optional command. Fourth, you can use Tcl commands to populate fields for you. This can be generated data like I did above, or you can loop through configuration objects to present in the template as well. Finally, the text section is where you define the language you want to appear with each of your objects. The nomenclature here is section.variable. To give you an idea what this looks like, here is a screenshot of a monthly backup configuration. Once my template (f5.archiving) is saved, I can configure it in the Application Services section by selecting the template.
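The Tcl loops in hr_select and min_select above just build newline-separated value lists for the drop-downs. The same output is easy to sanity-check outside of APL, e.g. in shell:

```shell
# The hr_select loop produces the values 0 through 23, one per line;
# seq generates the identical list.
hrs=$(seq 0 23)
echo "$hrs" | wc -l    # 24 entries for the hour drop-down
```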
At this point, I have a functioning presentation, but with no implementation, it’s effectively useless.

Implementation

Now that the presentation is complete, I can move on to an implementation. I need to do a couple things in the implementation:

- Grab the data entered into the application service
- Convert the day of week information from long name to the appropriate 0-6 (or 1-7) number for cron
- Use that data to build a cron file (statically assigned at this point to /etc/cron.d/f5backups)

Here is the implementation section:

array set dow_map { Sunday 0 Monday 1 Tuesday 2 Wednesday 3 Thursday 4 Friday 5 Saturday 6 }
set hr $::time_select__hr_select
set min $::time_select__min_select
set infile [open "/etc/cron.d/f5backups" "w" "0755"]
puts $infile "SHELL=\/bin\/bash"
puts $infile "PATH=\/sbin:\/bin:\/usr\/sbin:\/usr\/bin"
puts $infile "#MAILTO=user@somewhere"
puts $infile "HOME=\/var\/tmp\/"
if { $::time_select__day_select == "Daily" } {
    puts $infile "$min $hr * * * root \/bin\/bash \/var\/local\/bin\/f5backup.sh 1>\/var\/tmp\/f5backup.log 2>\&1"
} elseif { $::time_select__day_select == "Weekly" } {
    puts $infile "$min $hr * * $dow_map($::time_select__dow_select) root \/bin\/bash \/var\/local\/bin\/f5backup.sh 1>\/var\/tmp\/f5backup.log 2>\&1"
} elseif { $::time_select__day_select == "Monthly" } {
    puts $infile "$min $hr $::time_select__dom_select * * root \/bin\/bash \/var\/local\/bin\/f5backup.sh 1>\/var\/tmp\/f5backup.log 2>\&1"
}
close $infile

A few notes:

- The dow_map array is to convert the selected day (e.g., Saturday) to a number for cron (6).
- The variables in the implementation section reference the data supplied from the presentation like so: $::<section>__<presentation variable name>. Note the double underscore between them; as such, DO NOT use double underscores in your presentation variable names.
- tmsh special characters need to be escaped if you’re using them in strings.
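The dow_map conversion and cron line assembly above can be mimicked in shell to see exactly what a weekly schedule produces (an illustrative sketch; the real work happens in the Tcl implementation):

```shell
#!/bin/bash
# Mirror of the implementation's dow_map: day name -> cron's 0-6 field,
# then emit the weekly cron entry the iApp would write.
declare -A DOW_MAP=( [Sunday]=0 [Monday]=1 [Tuesday]=2 [Wednesday]=3
                     [Thursday]=4 [Friday]=5 [Saturday]=6 )
MIN=54; HR=15; DAY="Saturday"
CRON_LINE="${MIN} ${HR} * * ${DOW_MAP[$DAY]} root /bin/bash /var/local/bin/f5backup.sh 1>/var/tmp/f5backup.log 2>&1"
echo "${CRON_LINE}"
# -> 54 15 * * 6 root /bin/bash /var/local/bin/f5backup.sh 1>/var/tmp/f5backup.log 2>&1
```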
A successful configuration of the application service results in this file configuration for /etc/cron.d/f5backups:

SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
#MAILTO=user@somewhere
HOME=/var/tmp/
54 15 * * * root /bin/bash /var/local/bin/f5backup.sh 1>/var/tmp/f5backup.log 2>&1

So this backup is scheduled to run daily at 15:54. This is confirmed with this directory listing on my BIG-IP:

[root@golgotha:Active] bin # ls -las /var/local/backups
total 2168
   8 drwx------ 2 root root    4096 Aug 16 15:54 .
   8 drwxr-xr-x 9 root root    4096 Aug  3 14:44 ..
1076 -rw-r--r-- 1 root root 1091639 Aug 15 15:54 f5backup-golgotha.test.local-20110815155401.tar.bz2
1076 -rw-r--r-- 1 root root 1092259 Aug 16 15:54 f5backup-golgotha.test.local-20110816155401.tar.bz2

Conclusion

This is just scratching the surface of what can be done with the new iApp feature in v11. I didn’t even cover the ability to use presentation and implementation libraries, but that will be covered in due time. If you’re impatient, there are already several examples (including this one here) in the codeshare.

Related Articles

- F5 DevCentral > Community > Group Details - iApp
- iApp Wiki Home - DevCentral Wiki
- F5 Agility 2011 - James Hendergart on iApp
- iApp Codeshare - DevCentral Wiki
- iApp Lab 5 - Priority Group Activation - DevCentral Wiki
- iApp Template Development Tips and Techniques - DevCentral Wiki
- Lori MacVittie - iApp

Converting a Cisco ACE configuration file to F5 BIG-IP Format
In September, Cisco announced that it was ceasing development and pulling back on sales of its Application Control Engine (ACE) load balancing modules. Customers of Cisco’s ACE product line will now have to look for a replacement product to solve their load balancing and application delivery needs. One of the first questions that will come up when a customer starts looking into replacement products surrounds the issue of upgradability. Will the customer be able to import their current configuration into the new technology, or will they have to start with the new product from scratch? For smaller businesses, starting over can be a refreshing way to clean up some of the things you’ve been meaning to but weren’t able to for one reason or another. But for a large majority of the users out there, starting over from nothing with a new product is a daunting task. To help those users considering a move to the F5 universe, DevCentral has included several scripts to assist with the configuration migration process. In our Codeshare section we created some scripts useful in converting ACE configurations into their respective F5 counterparts.

https://devcentral.f5.com/s/articles/cisco-ace-to-f5-big-ip
https://devcentral.f5.com/s/articles/Cisco-ACE-to-F5-Conversion-Python-3
https://devcentral.f5.com/s/articles/cisco-ace-to-f5-big-ip-via-tmsh

We also have scripts covering Cisco’s CSS (https://devcentral.f5.com/s/articles/cisco-css-to-f5-big-ip) and CSM (https://devcentral.f5.com/s/articles/cisco-csm-to-f5-big-ip) products as well.

In this article, I’m going to focus on the “ace2f5-tmsh” script in the ace2f5.zip script library. The script takes an ACE configuration as input and creates a tmsh script to create the corresponding F5 BIG-IP objects.

$ perl ace2f5-tmsh.pl ace_config > tmsh_script

We could leave it at that, but I’ll use this article to discuss the components of the ACE configuration and how they map to F5 objects.
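Under the hood, a conversion like this is mostly mechanical text transformation. As a toy illustration (this is not the actual ace2f5-tmsh.pl logic), here's how an ACE rserver block can be parsed to collect names, addresses, and inservice state:

```shell
#!/bin/bash
# Toy sketch: parse ACE "rserver" blocks and print only the servers
# marked "inservice" (a hypothetical subset of what a real converter does).
members=$(awk '
    /^rserver host/ { name = $3 }
    /ip address/    { ip[name] = $3 }
    /inservice/     { print name " " ip[name] }
' <<'EOF'
rserver host R190-JOEINC0060
  ip address 10.213.240.85
rserver host R191-JOEINC0061
  ip address 10.213.240.86
  inservice
EOF
)
echo "$members"
# -> R191-JOEINC0061 10.213.240.86
```

R190 is skipped because it lacks the inservice attribute, mirroring how out-of-service rservers should not become active pool members.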
ip

The ip object in the ACE configuration is defined like this:

ip route 0.0.0.0 0.0.0.0 10.211.143.1

It equates to a tmsh “net route” command:

net route 0.0.0.0-0 {
    network 0.0.0.0/0
    gw 10.211.143.1
}

rserver

An “rserver” is basically a node containing a server address, including an optional “inservice” attribute indicating whether it’s active or not.

ACE Configuration

rserver host R190-JOEINC0060
  ip address 10.213.240.85
rserver host R191-JOEINC0061
  ip address 10.213.240.86
  inservice
rserver host R192-JOEINC0062
  ip address 10.213.240.88
  inservice
rserver host R193-JOEINC0063
  ip address 10.213.240.89
  inservice

It will be used to find the IP address for a given rserver hostname.

serverfarm

A serverfarm is an LTM pool, except that it doesn’t have a port assigned to it yet.

ACE Configuration

serverfarm host MySite-JoeInc
  predictor hash url
  rserver R190-JOEINC0060
    inservice
  rserver R191-JOEINC0061
    inservice
  rserver R192-JOEINC0062
    inservice
  rserver R193-JOEINC0063
    inservice

F5 Configuration

ltm pool Insiteqa-JoeInc {
    load-balancing-mode predictive-node
    members {
        10.213.240.86:any { address 10.213.240.86 }
    }
    members {
        10.213.240.88:any { address 10.213.240.88 }
    }
    members {
        10.213.240.89:any { address 10.213.240.89 }
    }
}

probe

A “probe” is an LTM monitor, except that it does not have a port.

ACE Configuration

probe tcp MySite-JoeInc
  interval 5
  faildetect 2
  passdetect interval 10
  passdetect count 2

This will map to the tmsh “ltm monitor” command.

F5 Configuration

ltm monitor Insiteqa-JoeInc {
    defaults from tcp
    interval 5
    timeout 10
    retry 2
}

sticky

The “sticky” object is a way to create a persistence profile. First you tie the serverfarm to the persist profile, then you tie the profile to the virtual server.
ACE Configuration

sticky ip-netmask 255.255.255.255 address source MySite-JoeInc-sticky
  timeout 60
  replicate sticky
  serverfarm MySite-JoeInc

class-map

A “class-map” assigns a listener, or virtual IP address and port number, which is used for the client side and server side of the connection.

ACE Configuration

class-map match-any vip-MySite-JoeInc-12345
  2 match virtual-address 10.213.238.140 tcp eq 12345
class-map match-any vip-MySite-JoeInc-1433
  2 match virtual-address 10.213.238.140 tcp eq 1433
class-map match-any vip-MySite-JoeInc-31314
  2 match virtual-address 10.213.238.140 tcp eq 31314
class-map match-any vip-MySite-JoeInc-8080
  2 match virtual-address 10.213.238.140 tcp eq 8080
class-map match-any vip-MySite-JoeInc-http
  2 match virtual-address 10.213.238.140 tcp eq www
class-map match-any vip-MySite-JoeInc-https
  2 match virtual-address 10.213.238.140 tcp eq https

policy-map

A policy-map of type loadbalance simply ties the persistence profile to the virtual. The “multi-match” attribute constructs the virtual server by tying a bunch of objects together.
ACE Configuration

policy-map type loadbalance first-match vip-pol-MySite-JoeInc
  class class-default
    sticky-serverfarm MySite-JoeInc-sticky
policy-map multi-match lb-MySite-JoeInc
  class vip-MySite-JoeInc-http
    loadbalance vip inservice
    loadbalance policy vip-pol-MySite-JoeInc
    loadbalance vip icmp-reply
  class vip-MySite-JoeInc-https
    loadbalance vip inservice
    loadbalance vip icmp-reply
  class vip-MySite-JoeInc-12345
    loadbalance vip inservice
    loadbalance policy vip-pol-MySite-JoeInc
    loadbalance vip icmp-reply
  class vip-MySite-JoeInc-31314
    loadbalance vip inservice
    loadbalance policy vip-pol-MySite-JoeInc
    loadbalance vip icmp-reply
  class vip-MySite-JoeInc-1433
    loadbalance vip inservice
    loadbalance policy vip-pol-MySite-JoeInc
    loadbalance vip icmp-reply
  class reals
    nat dynamic 1 vlan 240
  class vip-MySite-JoeInc-8080
    loadbalance vip inservice
    loadbalance policy vip-pol-MySite-JoeInc
    loadbalance vip icmp-reply

F5 Configuration

ltm virtual vip-Insiteqa-JoeInc-12345 {
    destination 10.213.238.140:12345
    pool Insiteqa-JoeInc
    persist my_source_addr
    profiles {
        tcp {}
    }
}
ltm virtual vip-Insiteqa-JoeInc-1433 {
    destination 10.213.238.140:1433
    pool Insiteqa-JoeInc
    persist my_source_addr
    profiles {
        tcp {}
    }
}
ltm virtual vip-Insiteqa-JoeInc-31314 {
    destination 10.213.238.140:31314
    pool Insiteqa-JoeInc
    persist my_source_addr
    profiles {
        tcp {}
    }
}
ltm virtual vip-Insiteqa-JoeInc-8080 {
    destination 10.213.238.140:8080
    pool Insiteqa-JoeInc
    persist my_source_addr
    profiles {
        tcp {}
    }
}
ltm virtual vip-Insiteqa-JoeInc-http {
    destination 10.213.238.140:http
    pool Insiteqa-JoeInc
    persist my_source_addr
    profiles {
        tcp {}
        http {}
    }
}
ltm virtual vip-Insiteqa-JoeInc-https {
    destination 10.213.238.140:https
    profiles {
        tcp {}
    }
}

Conclusion

If you are considering migrating from Cisco’s ACE to F5, I’d recommend you take a look at the Cisco conversion scripts to assist with the conversion.

SSL Orchestrator Advanced Use Cases: Reducing Complexity with Internal Layered Architecture
Introduction

Sir Isaac Newton said, "Truth is ever to be found in the simplicity, and not in the multiplicity and confusion of things". The world we live in is...complex. No getting around that. But at the very least, we should strive for simplicity where we can achieve it. As IT folk, we often find ourselves mired in the complexity of things until we lose sight of the big picture, the goal. How many times have you created an additional route table entry, or firewall exception, or virtual server, because the alternative meant having a deeper understanding of the existing (complex) architecture? Sure, sometimes it's unavoidable, but this article describes at least one way that you can achieve simplicity in your architecture.

SSL Orchestrator sits as an inline point of presence in the network to decrypt, re-encrypt, and dynamically orchestrate that traffic to the security stack. You need rules to govern how to handle specific types of traffic, so you create security policy rules in the SSL Orchestrator configuration to describe and take action on these traffic patterns. It's definitely easy to create a multitude of traffic rules to match discrete conditions, but if you step back and look at the big picture, you may notice that the different traffic patterns basically all perform the same core actions: they allow or deny traffic, intercept or bypass TLS (decrypt/not-decrypt), and send to one or a few service chains. If you were to write down all of the combinations of these actions, you'd very likely discover a small subset of discrete "functions". As usual, F5 BIG-IP and SSL Orchestrator provide some innovative and unique ways to optimize this, and so in this article we will explore SSL Orchestrator topologies "as functions" to reduce complexity. Specifically, you can reduce the complexity of security policy rules, and in doing so, quite likely increase the performance of your SSL Orchestrator architecture.
SSL Orchestrator Use Case: Reducing Complexity with Internal Layered Architectures

The idea is simple. Instead of a single topology with a multitude of complex traffic pattern matching rules, create small atomic topologies as static functions and steer traffic to the topologies by virtue of "layered" traffic pattern matching. Granted, if your SSL Orchestrator configuration is already pretty simple, then please keep doing what you're doing. You've got this, Tiger. But if your environment is getting complex, and you're not quite convinced yet that topologies as functions is a good idea, here are a few additional benefits you'll get from this topology layering:

- Dynamic egress selection: topologies as functions can define different egress paths.
- Dynamic CA selection: topologies as functions can use different local issuing CAs for different traffic flows.
- Dynamic traffic bypass: certain types of traffic can be challenging to handle internally. For example, mutual TLS traffic can be bypassed globally with the "Bypass on client cert failure" option in the SSL configuration, but bypassing mutual TLS sites by hostname is more complex. A layered architecture can steer traffic (by SNI) through a bypass topology, with a service chain.
- More flexible pattern recognition: for all of its flexibility, SSL Orchestrator security policy rules cannot catch every possible use case. External traffic pattern recognition, via iRules or CPM (LTM policies), offers near-infinite pattern matching options. You could, for example, steer traffic based on incoming tenant VLAN or route domain for multi-tenancy configurations.
- More flexible automation strategies: as iRules, data groups, and CPM policies are fully automatable across many AnO platforms (ex. AS3, Ansible, Terraform, etc.), it becomes exceedingly easy to automate SSL Orchestrator traffic processing, and removes the need to manage individual topology security policy rules.
Hopefully these benefits give you a pretty clear indication of the value in this architecture strategy. So without further ado, let's get started.

Configuration

Before we begin, I'd like to make note of the following caveats:

- While every effort has been made to simplify the layered architecture, there is still a small element of complexity. If you are at all averse to creating virtual servers or modifying iRules, then maybe this isn't for you. But as you are reading this in a forum dedicated to programmability, I'm guessing you the reader are ready for a challenge.
- This is a "field contributed" solution, so not officially supported by F5.
- This topology layering architecture is applicable to all modern versions of SSL Orchestrator, from 5.0 to 8.0.
- While topology layering can be used for inbound topologies, it is most appropriate for outbound. The configuration below also only describes the layer 3 implementation, but layer 2 layering is also possible.

With this said, there are just a few core concepts to understand:

- Basic layered architecture configuration - how the pieces fit together
- The iRules - how traffic moves through the architecture
- Or the CPM policies - an alternative to iRules

Note again that this is primarily useful in outbound topologies. Inbound topologies are typically more atomic on their own already. I will cover both transparent and explicit forward proxy configurations below.

Basic layered architecture configuration

A layered architecture takes advantage of a powerful feature of the BIG-IP called "VIP targeting". The idea is that one virtual server calls another, with negligible latency between the two VIPs. The "external" virtual server is client-facing. The SSL Orchestrator topology virtual servers are thus "internal". Traffic enters the external VIP, and traffic rules pass control to any of a number of internal "topology function" VIPs. You certainly don't have to use the iRule implementation presented here.
You just need a client-facing virtual server with an iRule that VIP-targets to one or more SSL Orchestrator topologies. Each outbound topology is represented by a virtual server that includes the application service name. You can see these if you navigate to Local Traffic -> Virtual Servers in the BIG-IP UI. So then the most basic topology layering architecture might just look like this:

when CLIENT_ACCEPTED {
    virtual "/Common/sslo_my_topology.app/sslo_my_topology-in-t-4"
}

This iRule doesn't do anything interesting, except get traffic flowing across your layered architecture. To be truly useful you'll want to include conditions and evaluations to steer different types of traffic to different topologies (as functions). As the majority of security policy rules are meant to define TLS traffic patterns, the provided iRules match on TLS traffic and pass any non-TLS traffic to a default (intercept/inspection) topology. These iRules are intended to simplify topology switching by moving all of the complexity of traffic pattern matching to a library iRule. You should then only need to modify the "switching" iRule to use the functions in the library, which all return Boolean true or false results. Here are the simple steps to create your layered architecture:

Step 1: Build a set of "dummy" VLANs. A topology must be bound to a VLAN. But since the topologies in this architecture won't be listening on client-facing VLANs, you will need to create a separate VLAN for each topology you intend to create. A dummy VLAN is a VLAN with no interface assigned. In the BIG-IP UI, under Network -> VLANs, click Create. Give your VLAN a name and click Finished. It will ask you to confirm since you're not attaching an interface. Repeat this step, creating a unique VLAN name, for each topology you are planning to use.

Step 2: Build a set of static topologies as functions.
You'll want to create a normal "intercept" topology and a separate "bypass" topology, though you can create as many as you need to encompass the unique topology functions. Your intercept topology is configured as such:

- L3 outbound topology configuration, normal topology settings, SSL configuration, services, service chains
- No security policy rules - just the ALL rule with TLS intercept action (and service chain), and optionally remove the built-in Pinners rule
- Attach to a dummy VLAN (a VLAN with no assigned interfaces)

Your bypass topology should then look like this:

- L3 outbound topology configuration, skip the SSL Configuration settings, optionally re-use services and service chains
- No security policy rules - just the ALL rule with TLS bypass action (and service chain)
- Attach to a separate dummy VLAN (a VLAN with no assigned interfaces)

Note the name you use for each topology, as this will be called explicitly in the iRule. For example, if you name the topology "myTopology", that's the name you will use in each "call SSLOLIB::target" function (more on this in a moment). If you look in the SSL Orchestrator UI, you will see that it prepends "sslo_" (ex. sslo_myTopology). Don't include the "sslo_" portion in the iRule.

Step 3: Import the SSLOLIB iRule (attached here). Name it "SSLOLIB". This is the library rule, so no modifications are needed. The functions within (as described below) will return a true or false, so you can mix these together in your switching rule as needed.

Step 4: Import the traffic switching iRule (attached here). You will modify this iRule as required, but the SSLOLIB library rule makes this very simple.

Step 5: Create your external layered virtual server. This is the client-facing virtual server that will catch the user traffic and pass control to one of the internal SSL Orchestrator topology listeners.
- Type: Standard
- Source: 0.0.0.0/0
- Destination: 0.0.0.0/0
- Service Port: 0
- Protocol: TCP
- VLAN: client-facing VLAN
- Address Translation: disabled
- Port Translation: disabled
- Default Persistence Profile: ssl
- iRule: the traffic switching iRule

Note that the ssl persistence profile is enabled here to allow the iRules to handle client side SSL traffic without SSL profiles attached. Also make sure that Address and Port Translation are disabled before clicking Finished.

Step 6: Modify the traffic switching iRule to meet your traffic matching requirements (see below).

You have the basic layered architecture created. The only remaining step is to modify the traffic switching iRule as required, and that's pretty easy too.

The iRules

I'll repeat: there are near infinite options here. At the very least you need to VIP target from the external layered VIP to at least one of the SSL Orchestrator topology VIPs. The iRules provided here have been cultivated to make traffic selection and steering as easy as possible by pushing all of the pattern functions to a library iRule (SSLOLIB). The idea is that you will call a library function for a specific traffic pattern and, if true, call a separate library function to steer that flow to the desired topology. All of the build instructions are contained inside the SSLOLIB iRule, with examples.

SSLOLIB iRule: https://github.com/f5devcentral/sslo-script-tools/blob/main/internal-layered-architecture/transparent-proxy/SSLOLIB
Switching iRule: https://github.com/f5devcentral/sslo-script-tools/blob/main/internal-layered-architecture/transparent-proxy/sslo-layering-rule

The function to steer to a topology (SSLOLIB::target) has three parameters:

- <topology name>: this is the name of the desired topology. Use the basic topology name as defined in the SSL Orchestrator configuration (ex. "intercept").
- ${sni}: this is static and should be left alone. It's used to convey the SNI information for logging.
- <message>: this is a message to send to the logs.
In the examples, the message indicates the pattern matched (ex. "SRCIP"). Note, include an optional 'return' statement at the end to cancel any further matching. Without the 'return', the iRule will continue to process matches and settle on the value from the last evaluation. Example (sending to a topology named "bypass"): call SSLOLIB::target "bypass" ${sni} "DSTIP" ; return There are separate traffic matching functions for each pattern: SRCIP IP:<ip/subnet> SRCIP DG:<data group name> (address-type data group) SRCPORT PORT:<port/port-range> SRCPORT DG:<data group name> (integer-type data group) DSTIP IP:<ip/subnet> DSTIP DG:<data group name> (address-type data group) DSTPORT PORT:<port/port-range> DSTPORT DG:<data group name> (integer-type data group) SNI URL:<static url> SNI URLGLOB:<glob match url> (ends_with match) SNI CAT:<category name or list of categories> SNI DG:<data group name> (string-type data group) SNI DGGLOB:<data group name> (ends_with match) Examples: # SOURCE IP if { [call SSLOLIB::SRCIP IP:10.1.0.0/16] } { call SSLOLIB::target "bypass" ${sni} "SRCIP" ; return } if { [call SSLOLIB::SRCIP DG:my-srcip-dg] } { call SSLOLIB::target "bypass" ${sni} "SRCIP" ; return } # SOURCE PORT if { [call SSLOLIB::SRCPORT PORT:5000] } { call SSLOLIB::target "bypass" ${sni} "SRCPORT" ; return } if { [call SSLOLIB::SRCPORT PORT:1000-60000] } { call SSLOLIB::target "bypass" ${sni} "SRCPORT" ; return } # DESTINATION IP if { [call SSLOLIB::DSTIP IP:93.184.216.34] } { call SSLOLIB::target "bypass" ${sni} "DSTIP" ; return } if { [call SSLOLIB::DSTIP DG:my-destip-dg] } { call SSLOLIB::target "bypass" ${sni} "DSTIP" ; return } # DESTINATION PORT if { [call SSLOLIB::DSTPORT PORT:443] } { call SSLOLIB::target "bypass" ${sni} "DSTPORT" ; return } if { [call SSLOLIB::DSTPORT PORT:443-9999] } { call SSLOLIB::target "bypass" ${sni} "DSTPORT" ; return } # SNI URL match if { [call SSLOLIB::SNI URL:www.example.com] } { call SSLOLIB::target "bypass" ${sni} "SNIURLGLOB" ; return } if 
{ [call SSLOLIB::SNI URLGLOB:.example.com] } { call SSLOLIB::target "bypass" ${sni} "SNIURLGLOB" ; return } # SNI CATEGORY match if { [call SSLOLIB::SNI CAT:$static::URLCAT_list] } { call SSLOLIB::target "bypass" ${sni} "SNICAT" ; return } if { [call SSLOLIB::SNI CAT:/Common/Government] } { call SSLOLIB::target "bypass" ${sni} "SNICAT" ; return } # SNI URL DATAGROUP match if { [call SSLOLIB::SNI DG:my-sni-dg] } { call SSLOLIB::target "bypass" ${sni} "SNIDGGLOB" ; return } if { [call SSLOLIB::SNI DGGLOB:my-sniglob-dg] } { call SSLOLIB::target "bypass" ${sni} "SNIDGGLOB" ; return } To combine these, you can use simple AND|OR logic. Example: if { ( [call SSLOLIB::DSTIP DG:my-destip-dg] ) and ( [call SSLOLIB::SRCIP DG:my-srcip-dg] ) } Finally, adjust the static configuration variables in the traffic switching iRule RULE_INIT event: ## User-defined: Default topology if no rules match (the topology name as defined in SSLO) set static::default_topology "intercept" ## User-defined: DEBUG logging flag (1=on, 0=off) set static::SSLODEBUG 0 ## User-defined: URL category list (create as many lists as required) set static::URLCAT_list { /Common/Financial_Data_and_Services /Common/Health_and_Medicine } CPM policies LTM policies (CPM) can work here too, but with the caveat that LTM policies do not support URL category lookups. You'll probably want to either keep the Pinners rule in your intercept topologies, or convert the Pinners URL category to a data group. A "url-to-dg-convert.sh" Bash script can do that for you. url-to-dg-convert.sh: https://github.com/f5devcentral/sslo-script-tools/blob/main/misc-tools/url-to-dg-convert.sh As with iRules, infinite options exist. But again for simplicity here is a good CPM configuration. For this you'll still need a "helper" iRule, but this requires minimal one-time updates. when RULE_INIT { ## Default SSLO topology if no rules match. 
    ## Enter the name of the topology here.
    set static::SSLO_DEFAULT "intercept"

    ## Debug flag
    set static::SSLODEBUG 0
}
when CLIENT_ACCEPTED {
    ## Set default topology (if no rules match)
    virtual "/Common/sslo_${static::SSLO_DEFAULT}.app/sslo_${static::SSLO_DEFAULT}-in-t-4"
}
when CLIENTSSL_CLIENTHELLO {
    if { ( [POLICY::names matched] ne "" ) and ( [info exists ACTION] ) and ( ${ACTION} ne "" ) } {
        if { $static::SSLODEBUG } {
            log -noname local0. "SSLO Switch Log :: [IP::client_addr]:[TCP::client_port] -> [IP::local_addr]:[TCP::local_port] :: [POLICY::rules matched [POLICY::names matched]] :: Sending to $ACTION"
        }
        virtual "/Common/sslo_${ACTION}.app/sslo_${ACTION}-in-t-4"
    }
}

The only thing you need to do here is update the static::SSLO_DEFAULT variable to indicate the name of the default topology, for any traffic that does not match a traffic rule.

For the comparable set of CPM rules, navigate to Local Traffic -> Policies in the BIG-IP UI and create a new CPM policy. Set the strategy to "Execute First matching rule", and give each rule a useful name, as the iRule can send this name in the logs.

For source IP matches, use the "TCP address" condition at ssl client hello time.
For source port matches, use the "TCP port" condition at ssl client hello time.
For destination IP matches, use the "TCP address" condition at ssl client hello time. Click on the Options icon and select "Local" and "External".
For destination port matches, use the "TCP port" condition at ssl client hello time. Click on the Options icon and select "Local" and "External".
For SNI matches, use the "SSL Extension server name" condition at ssl client hello time.

For each of the conditions, add a simple "Set variable" action at ssl client hello time. Name the variable "ACTION" and give it the name of the desired topology. Apply the helper iRule and CPM policy to the external traffic steering virtual server.
The "first" matching rule strategy is applied here, and all rules trigger on ssl client hello, so you can drag them around and re-order as necessary.

Note again that all of the above only evaluates TLS traffic. Any non-TLS traffic will flow through the "default" topology that you identify in the iRule. It is possible to re-configure the above to evaluate HTTP traffic, but honestly the only significant use case here might be to allow or drop traffic at the policy.

Layered architecture for an explicit forward proxy

You can use the same logic to support an explicit proxy configuration. The only difference will be that the frontend layered virtual server will perform the explicit proxy functions. The backend SSL Orchestrator topologies will continue to be in layer 3 outbound (transparent proxy) mode. Normally SSL Orchestrator would build this for you, but it's pretty easy and I'll show you how. You could technically configure all of the SSL Orchestrator topologies as explicit proxies, and configure the client facing virtual server as a layer 3 pass-through, but that adds unnecessary complexity. If you also need to add explicit proxy authentication, that is done in the one frontend explicit proxy configuration.

Use the settings below to create an explicit proxy LTM configuration. If not mentioned, settings can be left as defaults.

Under SSL Orchestrator -> Configuration in the UI, click on the gear icon in the top right corner. This will expose the DNS resolver configuration. The easiest option here is to select "Local Forwarding Nameserver" and then enter the IP address of the local DNS service. Click "Save & Next" and then "Deploy" when you're done.

Under Network -> Tunnels in the UI, click Create. This will create a TCP tunnel for the explicit proxy traffic.

Profile: select tcp-forward

Under Local Traffic -> Profiles -> Services -> HTTP in the UI, click Create. This will create the HTTP explicit proxy profile.
Proxy Mode: Explicit
Explicit Proxy: DNS Resolver: select the ssloGS-net-resolver
Explicit Proxy: Tunnel Name: select the TCP tunnel created earlier

Under Local Traffic -> Virtual Servers, click Create. This will create the client-facing explicit proxy virtual server.

Type: Standard
Source: 0.0.0.0/0
Destination: enter an IP the client can use to access the explicit proxy interface
Service Port: enter the explicit proxy listener port (ex. 3128, 8080)
HTTP Profile: the HTTP explicit profile created earlier
VLANs and Tunnel Traffic: set to "Enable on..." and select the client-facing VLAN
Address Translation: enabled
Port Translation: enabled

Under Local Traffic -> Virtual Servers, click Create again. This will create the TCP tunnel virtual server.

Type: Standard
Source: 0.0.0.0/0
Destination: 0.0.0.0/0
Service Port: *
VLANs and Tunnel Traffic: set to "Enable on..." and select the TCP tunnel created earlier
Address Translation: disabled
Port Translation: disabled
iRule: select the SSLO switching iRule
Default Persistence Profile: select ssl

Note: make sure that Address and Port Translation are disabled before clicking Finished.

Under Local Traffic -> iRules, click Create. This will create a small iRule for the explicit proxy VIP to forward non-HTTPS traffic through the TCP tunnel. Change "<name-of-TCP-tunnel-VIP>" to reflect the name of the TCP tunnel virtual server created in the previous step.

when HTTP_REQUEST {
    virtual "/Common/<name-of-TCP-tunnel-VIP>" [HTTP::proxy addr] [HTTP::proxy port]
}

Add this new iRule to the explicit proxy virtual server. To test, point your explicit proxy client at the defined IP:port and give it a go. HTTP and HTTPS explicit proxy traffic arriving at the explicit proxy VIP will flow into the TCP tunnel VIP, where the SSLO switching rule will process traffic patterns and send to the appropriate backend SSL Orchestrator topology-as-function.
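If you prefer to build the same objects from the command line, the GUI steps above can be sketched in tmsh. This is a rough, hedged outline only: the object names (ep-tunnel, ep-http, ep-proxy-vip, ep-tunnel-vip, client-vlan, sslo-switching-rule) and the destination address are hypothetical, the ssloGS-net-resolver is assumed to already exist from the DNS resolver step, and some options are abbreviated, so verify against your own version before using.

```
# Hypothetical names throughout; adjust addresses, VLANs, and iRule name to your environment.
tmsh create net tunnels tunnel ep-tunnel profile tcp-forward
tmsh create ltm profile http ep-http defaults-from http-explicit \
    proxy-type explicit \
    explicit-proxy { dns-resolver ssloGS-net-resolver tunnel-name ep-tunnel }
tmsh create ltm virtual ep-proxy-vip destination 10.1.10.150:3128 ip-protocol tcp \
    profiles add { tcp ep-http } vlans-enabled vlans add { client-vlan }
tmsh create ltm virtual ep-tunnel-vip destination 0.0.0.0:0 ip-protocol tcp \
    profiles add { tcp } vlans-enabled vlans add { ep-tunnel } \
    translate-address disabled translate-port disabled \
    rules { sslo-switching-rule } persist replace-all-with { ssl }
```

The two virtual servers mirror the GUI settings above: the explicit proxy VIP keeps address/port translation enabled (the defaults), while the tunnel VIP explicitly disables both and carries the switching iRule and ssl persistence.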
Testing and Considerations

Assuming you have the default topology defined in the switching iRule's RULE_INIT, and no traffic matching rules defined, all traffic from the client should pass effortlessly through that topology. If it does not:

Ensure the name defined in the static::default_topology variable is the actual name of the topology, without the prepended "sslo_".
Enable debug logging in the iRule and observe the LTM log (/var/log/ltm) for any anomalies.
Worst case, remove the client facing VLAN from the frontend switching virtual server and attach it to one of your topologies, effectively bypassing the layered architecture. If traffic does not pass in this configuration, then it cannot in the layered architecture, and you need to troubleshoot the SSL Orchestrator topology itself. Once you have that working, put the dummy VLAN back on the topology and add the client facing VLAN to the switching virtual server.

Considerations

The above provides a unique way to solve for complex architectures. There are, however, a few minor considerations:

This is defined outside of SSL Orchestrator, so it would not be included in the internal HA sync process. However, this architecture places very little administrative burden on the topologies directly. It is recommended that you create and sync all of the topologies first, then create the layered virtual server and iRules, and then manually sync the boxes.
If you make any changes to the switching iRule (or CPM policy), that should not affect the topologies. You can initiate a manual BIG-IP HA sync to copy the changes to the peer.
If upgrading to a new version of SSL Orchestrator (only), no additional changes are required.
If upgrading to a new BIG-IP version, it is recommended to break HA (SSL Orchestrator 8.0 and below) before performing the upgrade. The external switching virtual server and iRules should migrate natively.

Summary

And there you have it.
In just a few steps you've been able to reduce complexity and add capabilities, and along the way you have hopefully recognized the immense flexibility at your command.

Monitoring Your Network with PRTG - Overview, Installation, and Configuration
A few months back, our team moved DevCentral.f5.com from our corporate datacenter to a cloud service provider. As part of this project, we were required to take over the tasks previously performed by our IT department. One of the tasks that fell into my hands was to look at the monitoring and alerting for the health of the systems in our environment. These systems included our application tier (application, database, and storage) as well as the F5 network infrastructure (GTM, LTM, ASM, WA, APM, etc). After looking at several product offerings, we ended up choosing PRTG Network Monitor from Paessler. We chose PRTG for its cost model as well as its ability to be customized and extended. This is the first in a series of articles covering PRTG and how we use it to monitor the production environment of DevCentral.f5.com. In this article, I’ll go over the features in the product, the installation process, and touch on configuration. In future articles, I’ll talk in more detail about configuration and the development of custom monitoring (to get data from our F5 products) and alerting (sending out customized email/SMS notifications of the monitor results).

PRTG

The PRTG Network Monitor runs as a service on a Windows machine on your network. It collects statistics from the components in your network (which it can auto-discover) and retains that data for historical performance reporting and analysis. The product has the following features:

Quick Download, Installation, and Configuration
Choose Between 5 Easy To Use User Interfaces
Comprehensive Network Monitoring with more than 130 sensor types covering all aspects of network monitoring
Flexible Alerting
Support for Clustered configuration of Network Monitors
Distributed Monitoring Using Remote Probes
Data Publishing and Maps
In-Depth Reporting

Installation

Paessler offers a couple of ways for you to test their software and we opted for the 30 day no-limitation trial.
They also offer a completely freeware version for 10 sensors. Be careful if you use the network discovery feature as it will easily bypass the 10 sensor limit. Also keep in mind that the commercial product is a separate installation, so you will need to re-install the product if you decide to go with a commercial license (meaning you need more sensors). You can download the install package from the product website. It consists of a self-contained installer that does it all for you. Since PRTG uses its own internal database, no 3rd party products are required to use the software. I ran the install with all the defaults and within a few minutes, the software was ready to use. When the install finished, a web browser was opened with the Configuration page. After configuring a few things I was ready to start adding sensors to my network gear. I opted not to do the auto-scanning of my network but went into the Sensors page by clicking on the “Review Results” button.

Sensor Configuration

PRTG allows you to build hierarchies of devices. I opted to lay them out in the following device tree:

Root
  Local Probe (I’m only using a single probe so everything under this item is monitored locally)
    management server (this is the local machine PRTG is running on; PRTG can monitor itself, by the way)
    DC1
      F5 Devices
        asm-1 (device 1 in our Application Security/Local Traffic HA pair)
        asm-2 (device 2)
        egw-1 (device 1 in our Edge Gateway/Web Acceleration HA pair)
        egw-2 (device 2)
        gtm-1 (our Global availability DNS device)
      Virtuals
        devcentral.f5.com (external virtual for application monitoring)
      Servers - DOMAIN
        adc-1 (windows domain controller #1)
        adc-2 (#2)
      Servers - APPS
        app-1 (Application server #1)
        app-2 (Application server #2)
        … (Application server #n)
        db-1 (Database server #1)
        db-2 (Database server #2)
        fil-1 (Network file storage)

Each of those tree items allows for multiple sensors attached to them. I made use of memory, CPU, disk, Database, and system health checks.
In the next article in this series, I’ll break down the various health checks I used and cover a few custom ones we developed to determine our application’s health.

BIG-IP Configuration Conversion Scripts
Kirk Bauer, John Alam, and Pete White created a handful of perl and/or python scripts aimed at easing your migration from some of the “other guys” to BIG-IP. While they aren’t going to map every nook and cranny of the configurations to a BIG-IP feature, they will get you well along the way, taking out as much of the human error element as possible. Links to the codeshare articles below.

Cisco ACE (perl)
Cisco ACE via tmsh (perl)
Cisco ACE (python)
Cisco CSS (perl)
Cisco CSS via tmsh (perl)
Cisco CSM (perl)
Citrix Netscaler (perl)
Radware via tmsh (perl)
Radware (python)

BIG-IP and Merge File Configuration Changes
Early in my exposure to F5 BIG-IP I was involved in a load balancer migration project from our existing hardware to F5 Load Balancers. This involved the migration of several hundred configurations, so manual configuration of each new one was just not an option. Direct modification of the configuration file (bigip.conf) was also not an option, since it would require a “b load” command to activate any modifications, which is disruptive to active traffic during a reload. A merge file, on the other hand, is non-service-disrupting, meaning that the BIG-IP will not have to reload the entire configuration file and disrupt all other traffic in order to implement your change. It will only make the changes that you specify. If your change is disruptive (removing servers from a pool, putting up a maintenance page or redirect, etc.) those changes will be disruptive for other reasons and not caused by the merge process. The documentation of the command is pretty vague on usage, and it does not appear that very many people in the community are utilizing this feature, which I have found immensely useful.

MERGE(1)                                                BIG-IP Manual

NAME
    merge command - Loads the specified configuration file, which modifies the running configuration.

SYNTAX
    bigpipe merge (<file> | -)

DESCRIPTION
    The bigpipe merge command loads the specified configuration file or data. This modifies the running configuration. After you run the bigpipe merge command, if you want to save the modified running configuration in the stored configuration files, run the bigpipe save all command. It is important to note that if you want to replace the running configuration of the BIG-IP system, rather than modify it, you use the bigpipe load command. For more information, see the man page for the bigpipe load command.

Merge files can be utilized to create or modify virtually anything that is contained within the bigip.conf configuration file (Virtual Servers, Pools, Nodes, iRules, Profiles, Classes, etc.).
The basic structure of each can be found in the bigip.conf and copied to create a modification merge file (and a back-out merge file to restore the current configuration, so you can adhere to best practices and comply with most ITIL change management processes), or act as a template for a new configuration item.

Commands:

Merge file verification: bigpipe verify merge <file.name>
Merge specified file: bigpipe merge <file.name>

Implementation

I have a new set of Virtual Servers to implement, but none of the supporting pieces have been implemented yet. Below is an example of the merge file, followed by an attempted merge with the error that you will receive when verifying a merge file that has missing configurations or errors. The verify command will verify the syntax of the content and the existence of the items called in the merge file.

NOTE: The top line of the merge file specifies the target partition. If you have implemented Administrative Domains and Partitions on your BIG-IP, this is where you will specify which partition to place things. You can implement multiple items in different partitions in the same merge file. If you have not implemented Administrative Domains and Partitions you can specify the Common partition.

NOTE: The file extension (.txt) is not necessary. These would actually be .tcl files if you wanted language recognition in an editor like Notepad++.

After merging the supporting configuration items the verification will return a good result:

Modification with Back-out

If you have a situation where a modification is necessary, you can retrieve the original (current) configuration from your bigip.conf and place it into one or more merge files. I have discovered that it is a best practice to keep a copy of the original configuration for back-out purposes and a second copy to act as a template for a modification.
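To make the shape of a merge file concrete, here is a minimal, illustrative sketch of a pool and virtual server in the bigpipe-era style this article describes. All names and addresses are hypothetical, the exact stanza syntax varies by BIG-IP version, and the partition line is omitted here — copy the structure (including the partition declaration) from your own bigip.conf, as recommended above.

```
pool example_pool {
   lb method round robin
   member 10.10.10.11:80
   member 10.10.10.12:80
}
virtual example_vs {
   destination 10.10.10.100:80
   ip protocol tcp
   pool example_pool
}
```

Saved as, say, vs_create.txt, this would be checked and applied with the commands above: bigpipe verify merge vs_create.txt, then bigpipe merge vs_create.txt.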
Merge files can also be change management friendly, allowing application configurations to be exported and kept in source control (and even versioned) with the application code, providing a more holistic configuration repository, or attaching the configurations to whatever change management processes you may use.

Lastly, merge order matters. The order in which you merge files will impact the success or failure of a group of merge files because of the configuration dependencies (example: you cannot merge a Virtual Server first if it is attempting to utilize a Pool that does not exist yet). I have discovered the following order:

Classes (iRule Data Groups)
Pools
iRules
Virtual Servers

Conclusion

Merge file implementations are fast (almost instant), and implementations/modifications can be done on numerous items at a time.
They are non-service disruptive to other traffic or configurations.
They allow for pre-implementation verification.
Configurations can be exported and kept with an application in source control or used with change management processes.

I hope that everyone finds this ability as useful as I have and that it makes your implementations easier. If anyone has any questions about the merge ability of bigpipe please join the Advanced Design and Configuration group on DevCentral and start a new thread in the forum.

Looking for Setup Advice
Hello, I am looking for some advice for setting up an F5 BIG-IP that can accomplish the following things. I only have one public IP address but will be hosting multiple services. I am looking at setting up one VIP that's open to the public with the required ports, and then when hitting a FQDN it redirects to the VIP that is hosting that service. Example: mysite1.domain.com goes to VIP 10.10.10.100, mysite2.domain.com goes to VIP 10.10.10.110, and so on. Is this done by iRule, reverse proxy, or policy? What's the best practice for setting something up like this? Thanks in advance for the help.

Simple balancing doesn't work
Good morning community, I have to configure, for my work, an F5 VE. So, I downloaded F5 VE 13.1.4 in my lab @home and installed it on VMware to practice and understand the F5 basics. What I did is configure the internal and external network VLANs and assign the related IPs, then the pool/nodes and a Virtual Server listening on port 5000. Everything looks good: from the F5 I can reach both nodes, even with a simple telnet on port 5000. From the external network I can reach the external F5 interface. The problem is that the F5 doesn't route connections to the pool. This is my network topology: as I wrote, the external network can reach the VSERVER at 10.3.0.100 on port 5000, and from the F5 I can reach the nodes in the pool, always on port 5000. The problem here is when, from a client (external network), I try to connect to the VSERVER: it seems the connection is ESTABLISHED for a while, but not forwarded to the internal network. While I tried to establish a connection from a client on the external network (10.3.0.128), this is what happened:

 1   0.000000  10.3.0.128 → 10.3.0.100  TCP 70  61440 → 5000 [SYN] Seq=0 Win=64240
 2   0.000219  10.3.0.100 → 10.3.0.128  TCP 66  5000 → 61440 [SYN, ACK] Seq=0 Ack=1
 3   0.002661  10.3.0.128 → 10.3.0.100  TCP 58  61440 → 5000 [ACK] Seq=1 Ack=1
 4   0.006505  10.3.0.128 → 10.2.0.129  TCP 66  61440 → 5000 [SYN] Seq=0 Win=4380
 5   0.059742  10.3.0.128 → 10.3.0.100  IPA 115  unknown 0x30
 6   0.059768  10.3.0.100 → 10.3.0.128  TCP 58  5000 → 61440 [ACK] Seq=1 Ack=58
 7   3.003461  10.3.0.128 → 10.2.0.129  TCP 66  [TCP Retransmission] 61440 → 5000 [SYN] Seq=0 Win=4380 Len=0 MSS=1460 SACK_PERM=1
10  12.004963  10.3.0.100 → 10.3.0.128  TCP 113  5000 → 61440 [RST, ACK] Seq=1 Ack=58
11  12.004980  10.3.0.128 → 10.2.0.129  TCP 106  61440 → 5000 [RST, ACK] Seq=1 Ack=1

I'm going crazy since the configuration should be OK. Could someone help me? Thank you very much, Lucas

BIG-IP Configuration Visualizer - iControl Style
I posted almost two years ago to the day on a cool tool called BIG-IP Config Visualizer, or BCV, that one of our field engineers put together that utilizes a BIG-IP config parser and GraphViz to create images visualizing the relationship of configuration objects for a particular virtual server. Well, I’m here to report that another community user, Russell Moore, has taken that work to the next level. Rather than trying to figure out the nuances of configuration objects amongst all the versions of BIG-IP, he converted the script to utilize iControl! In this tech tip, I’ll walk through the installation steps necessary to get this tool off the ground.

The Setup

Install a few libraries and GraphViz via apt-get:

apt-get install libssl-dev libcrypt-ssleay-perl libio-socket-ssl-perl libgraph-writer-graphviz-perl

Open a CPAN shell and install SOAP::Lite and Net::Netmask:

perl -MCPAN -e shell
install SOAP::Lite
install Net::Netmask

After installing those libraries and tools, grab the BCV-iControl source from the codeshare, save it as an executable (bcv.pl on my system) and set these variables (I only changed the ones in bold type):

#Declare CLI $vars
my $vs1;
my $new_dir = 'NO_DIR';
my $extension = 'NO_EXT';
my $ltm_host = "172.16.99.5";
my $ltm_port = '443';
my $user_id = "admin";
my $req_partition;
my $user_password = "admin";
my $ltm_protocol = 'https';
my $path;
my $dir;

Finally, some command-line options:

root@ubuntu:/home/jrahm# ./bcv.pl -h
Thank you for using BIG-IP Configuration Visualizer (BCV 1.16.1-revisited with soap)
-v <VS_NAME> this prints the specified virtual server and requires option -c. Default is to print all
-c Specify the partition/container to look in for option -v
-t <iControl host LTM> specify ltm_host IP we will connect to
-d specifies a directory you want the images in.
   (Has to be in current working directory: /home/jrahm. Default is /img)
-e Define image format options: svg, png (default is jpg)
-help for help, but you already found it

The Payoff

Now that all the legwork is complete, we can play!

root@ubuntu:/home/jrahm# ./bcv.pl
Please wait while we build some maps of your system.
Retrieving SelfIPs in Partition: ** Common **
Mapping Partition: ** Common ** routes to gateways
Mapping Partition: ** Common ** selfIPs and VLANs..
Mapping Partition: ** Common ** pools and iRule references to pools............
Mapping Partition: ** Common ** virtual servers and properties...
Drawing VS: dc.hashtest which is 1 of 3 in Partition: Common
Drawing VS: testvip1 which is 2 of 3 in Partition: Common
Drawing VS: management_vip which is 3 of 3 in Partition: Common
All drawings completed! They can be found in: /home/jrahm/img

Taking a look at the virtual server I used for the hashing algorithm distribution tech tip:

Conclusion

Visual representations of configurations are incredibly helpful in identifying issues quickly. An interesting next step would be to track the state of objects from iteration of the drawings, and build a page to include all the images. That would make a nice and cheap dashboard for application owners or operating centers. Any takers? Thanks to community user Russell Moore, who took a great contributed tool and made it better with iControl!

SSL Orchestrator Advanced Use Cases: Reducing Complexity
Introduction

Think back to any time in your career when one IT security solution solved all of your problems right out of the box. Go ahead, I'll wait. Chances are you're struggling to think of one, or at least more than one. Trust me when I say it doesn't matter how many other industry professionals are doing what you do, many of your security challenges are absolutely unique to you. "Best of breed" means little if it can't solve all of your problems. That's why an entire ecosystem of best-of-breed security solutions exists... because each is best at a subset of the total problem space. The problem with this inescapable truth is that using multiple solutions to solve for the entire security conundrum also breeds complexity. The spectacular beauty of an F5 SSL Orchestrator solution is that it can drastically reduce that complexity and at the same time increase manageability, scalability, and proficiency. With great ease and finesse, a sprawl of heavily burdened security products can be transformed into dynamically addressable and independently scalable malware munching monsters, void of the weight of decryption, each focused purely on their strengths. Best of all, SSL Orchestrator sits on top of the F5 BIG-IP, which itself provides unparalleled flexibility. If the SSL Orchestrator itself doesn't meet your every need, it's almost certain there's some configuration or programmatic way to achieve what you're looking for. In this article we will explore some of that immense flexibility. I should note here that the following guidance stands equally as a set of best practices for configuring and managing SSL Orchestrator, and for reducing complexity in an existing environment. Let's go.
SSL Orchestrator Use Case: Reducing Complexity

We have already solved the complexity of integrating multiple security products, so this article specifically tackles another challenge: what happens when the SSL Orchestrator configuration itself gets to be complicated? The goal is to reduce overall complexity, but as with all other things we tend to think about challenges one at a time, creating separate solutions for each problem as it comes up. This becomes troublesome when we build large numbers of nearly duplicate topologies, or create super-complex security policies. The most important consideration here is that an SSL Orchestrator configuration creates and manages ALL of the objects it needs, and very often that can be a lot. For example, a layer 3 outbound topology with one inline service will create no less than 700 dependent configuration objects. An inbound topology will create about 300 objects. Compare that to a typical LTM virtual server, where common objects are often re-used, and that's around 20. The net result is that creating a ton of SSL Orchestrator configurations can put a strain on the control plane, but it also sort of defeats the point of reducing complexity. But fear not, there are a number of really interesting ways to reduce all of this without losing anything, and in some cases even increase capacity and flexibility.
The solutions I am about to present are broken down by topology direction:

Reducing Complexity for Reverse Proxy Topologies (Inbound)
  Reducing complexity by using shared security policies
  Reducing complexity with the Existing Applications topology
  Reducing complexity with existing applications and shared policies
  Reducing complexity with gateway mode
  Reducing complexity with existing application, SNI switching and address lists
Reducing Complexity for Forward Proxy Topologies (Outbound)
  Reducing complexity with layered architectures

For the purpose of illustration, I've also extracted the average number of SSL Orchestrator security policy objects for comparison, where a minimal outbound security policy is about 80 objects, and an inbound policy is around 60 objects. I will use these numbers to make basic comparisons throughout, but of course this will always vary per environment.

You can find the total number of control plane objects using this command:

tmsh -q -c 'cd /; list recursive one-line' | wc -l

If you don't have access to the BIG-IP command line, but do have a copy of the bigip.conf, you can do this:

cat bigip.conf | egrep '^[a-z]' | wc -l

And the total number of access (security policy) objects using this command:

cat bigip.conf | egrep '^apm ' | wc -l

Reducing Complexity for Reverse Proxy Topologies

There are a number of optimizations that can be done for reverse proxy topologies, but let us start with the simplest and most profound update.

Option 1: Reducing complexity by using shared security policies

One single topology will create its own security policy, but very often your security policies are all going to be doing basically the same thing. You can make a dramatic impact on complexity and overall object count by simply reducing the total number of security policies and reusing these across your topologies.

figure 1: Reducing complexity in a reverse proxy with shared security policies

Let's break it down to show the benefit of this approach.
We'll illustrate using twelve separate SSL Orchestrator inbound topologies. As I mentioned earlier, a basic inbound layer 3 topology with one inline service will create about 300 control plane objects. Of that, less than half of these are the security policy, service chain and the inline service, around 120. An inline layer 2 service will create around 60 objects, and as there's a finite number of security services to begin with, they'll generally be shared between the security policies, so we won't count these. If we then focus just on the security policy and service chain, that's around 60 unique objects. If each of the above twelve topologies creates its own security policy, reducing that down to just 3 unique security policies (for example) can remove over 500 objects, a nearly 75% reduction, and by the way you have also just made overall policy management simpler. Configuration object reuse has tremendous benefit.

If you are building a new SSL Orchestrator environment, consider this option if you simply cannot reduce the total number of inbound topologies, but can reasonably reduce security policies down to a smaller set of unique policies that can be shared. If you have an existing environment and need to reduce complexity, setting this up is also very simple:

Use the above scripts to get a total object count before taking any action.
Identify a set of security policies that perform the same steps, pick one or create a new one, and then under each respective topology replace the existing security policy with this single shared security policy. You can then delete the old unused security policies, then repeat this action for each topology until all duplicate policies are removed.
Run the object count tools one more time and compare the difference.

Option 2: Reducing complexity with the Existing Applications topology

Shrinking the number of active security policies is an easy win, but even that isn't where the bulk of objects are.
Recall again that a minimal inbound topology is about 300 objects, so minus the 120 from the security policy, service chain and inline service objects, 180 remain in the topology and its other dependent configurations. A layer 3 inbound SSL Orchestrator topology is, basically, a reverse proxy virtual server, SSL profiles, security policies, and various other dependent profiles and configurations. If you have LTM licensed on the BIG-IP, you can further reduce this by switching to an Existing Application. The Existing Application topology simplifies the configuration by only building the security policy, service chain and security services. You then attach that security policy to an existing LTM application virtual server.

figure 2: Reducing complexity in a reverse proxy with existing application topologies

Let's break it down again. Using the same twelve topologies with individual policies, disregarding the security services, you're seeing around 2800 objects. If you were to replace each FULL topology with an LTM application virtual server using an Existing Application security policy, you can see in the image above a pretty significant drop in configuration object count, to about 960 objects. For twelve topologies that's a 77% reduction in configuration objects. Full topology creation is, by design, constrained to the most essential (optimal) elements and thus limits some flexibility of manual configuration. Using a manually defined LTM VIP with an Existing Application topology instead of a full inbound topology will provide the full flexibility of LTM together with security policies and inline services. One caveat: the existing application option requires an LTM license.

If you are building a new SSL Orchestrator environment, consider this option if you can see the benefits of deploying existing application virtual servers, where this technique provides additional capabilities and near unlimited customization.
If you have an existing environment and need to reduce complexity, putting this one together requires a bit more work, but it's not insurmountable. (If this is a new environment, then clearly starting with this configuration approach could be beneficial.) To test the configuration, we'll use the virtual server's Source Address parameter:

1. Use the above scripts to get a total object count before taking any action.
2. Create a new Existing Application topology to define your security service(s), service chain, and corresponding security policy.
3. Disable strictness on the existing topology. Assuming traffic is already flowing to this topology, we'll need to make a quick switch at the command line, which requires strictness to be disabled.
4. Create an LTM application virtual server that matches the topology listener (i.e., destination address, port, pool, etc.), but for testing purposes enter just YOUR client IP/mask in the Source Address field. BIG-IP virtual servers follow a "most specific" order of precedence, so by adding a specific source address, only your traffic will flow to the new VIP and nothing else. Also attach your new Existing Application security policy to this virtual server: under the Access Policy section, in the Access Profile setting, select "ssloDefault_accessProfile", and then under Per-Request Profile, select your new security policy.

figure 3: SSL Orchestrator Existing Application topology selection

You should now have an LTM virtual server that matches the topology virtual server, except that it only listens for traffic from your address. You can alternatively apply a source Address List here to allow multiple testers to access the new VIP. Make sure that you are able to access the application and that decrypted traffic is passing to the security service(s).
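The test virtual server from the steps above might look like the following in tmsh. This is a hedged sketch: all names and addresses are placeholders, and your profile list will differ.

```shell
# An LTM virtual that mirrors the topology listener's destination,
# pool, and profiles, but only accepts traffic sourced from the
# tester's address (192.0.2.50 is a placeholder).
tmsh create ltm virtual app1-ea-vip \
    destination 10.10.10.10:443 \
    source 192.0.2.50/32 \
    ip-protocol tcp \
    pool app1-pool \
    profiles add { http app1-clientssl }
```

Because of the most specific order of precedence, this /32-source virtual wins over the topology's wildcard-source listener for your traffic only.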
When you are ready to make the switch for all traffic, you'll need to change the Source Address field in the topology listener virtual to something specific, and then change the Source Address field in your new LTM virtual to 0.0.0.0%0/0, again manipulating the order of precedence. You can do this manually, or via a simple TMSH transaction in a Bash script to switch both virtuals simultaneously:

#!/bin/bash
tmsh << EOF
create cli transaction
modify ltm virtual sslo_app1.app/sslo_app1 source 10.10.10.10/32
modify ltm virtual app1-ea-vip source 0.0.0.0%0/0
submit cli transaction
EOF

The first modify command points at the existing topology listener virtual and changes the source to something unique. The second modify command changes your new LTM application virtual's source to allow traffic from all (0.0.0.0%0/0). All of this is done inside a transaction, so the change is immediate for both. Perform the steps above for each inbound layer 3 topology that you need to replace with an LTM virtual and Existing Application policy. When you're satisfied that everything is flowing through your LTM application VIPs, go back and delete the unused topologies, security policies, service chains, and SSL configurations. Run the object count tools one more time and compare the difference.

Option 3: Reducing complexity with existing applications and shared policies

Okay, so far we've separately reduced complexity by either sharing security policies or using Existing Application topologies with LTM VIPs. What if we combined these two methods?

figure 4: Reducing complexity in a reverse proxy with existing application topologies and shared policies

Again, if we remove the security services from the equation, you should see an absolutely enormous drop in object count: twelve separate topologies and security policies at around 2800 objects, consolidated to twelve LTM VIPs and three unique security policies, is an 86% reduction in configuration objects!
Plus, again, you've expanded your flexibility by switching to LTM application VIPs and made SSL Orchestrator policy management much simpler. Migration to this configuration option is more or less the same as the last, except that instead of twelve individual Existing Application topologies, you'll only need to create a smaller set of unique policies and share them accordingly. If you are building a new SSL Orchestrator environment, consider this option if you can see the benefits of deploying existing application virtual servers and can reasonably reduce security policies down to a smaller set of unique policies that can be shared.

Option 4: Reducing complexity with gateway mode

At this point, you've potentially removed 86% of the total configuration objects and made your environment clean, lean, simple, and more flexible. What more could you ask for? Well, I'm glad you asked. The following isn't for everyone, but if it makes sense in your environment, it can create a HUGE simplification of resources and management. An SSL Orchestrator inbound "gateway mode" topology is intended as a routed path. Conversely, what we've been talking about so far are "application mode" topologies, where the destination IP:port is configured within the topology or LTM application virtual server, and external clients target this address directly. In gateway mode, the listener is a network address such as 0.0.0.0/0, so the destination address will actually be behind the BIG-IP: possibly a separate BIG-IP, another load balancer, a router, or the application server itself. Traffic is routed through the BIG-IP to get there. You may see this as an advantage if the SSL Orchestrator sits closer to the edge of the network and/or is not owned and managed by the same teams. In any case, by virtue of the single 0.0.0.0/0 listener, you really only need ONE topology and ONE security policy, though your security policy may tend to be a bit more complex.
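When you later test a gateway mode listener from a client, one hedged way to push a request for an arbitrary destination through the routed path is curl's --resolve option, which overrides name resolution for a single request; the hostname and addresses below are placeholders:

```shell
# Force the hostname to resolve to an address that is routed through
# the BIG-IP, without touching DNS. All values are placeholders.
curl -vk --resolve www9.example.com:443:203.0.113.25 https://www9.example.com/
```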
figure 5: Reducing complexity in a reverse proxy with gateway mode

If you are building a new SSL Orchestrator environment, consider this option if you can insert the BIG-IP as a routed hop in the application path. If you have an existing environment and need to reduce complexity, we can use that most specific order of precedence again to our advantage. As long as your application mode topologies are configured with specific destination addresses, only new traffic flows that don't match one of them will flow to your wildcard gateway mode topology.

1. Create a gateway mode SSL Orchestrator inbound topology by ensuring that the destination address is 0.0.0.0/0 (or another appropriate network address) and no pool is assigned. This also automatically disables address and port translation, so this effectively becomes a forwarding virtual server.
2. Test this topology by sending a request to a destination address that isn't defined by one of the application mode topologies, but does exist beyond the BIG-IP.
3. When you're ready to move traffic over to the gateway topology, either start deleting the application mode topologies (or LTM virtual servers), or simply disable them.

Now there's one other thing you have to address in the gateway mode configuration: how to handle server certificates. In inbound application mode, each topology has its own SSL configuration with its applied server certificate and key. But what do you do if all HTTPS traffic flows through a single virtual server? There are two options. In both cases you'll need to create multiple client SSL profiles, which you can do either in the SSL Orchestrator UI or manually in the BIG-IP.

Certificate selection option 1: Attach multiple SSL profiles to the topology listener

You will need to disable strictness on the topology to use this approach, as you'll be editing the virtual server directly to add all of the client SSL profiles.
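A hedged tmsh sketch of that per-site profile creation and attachment follows; the certificate, key, profile, and virtual server names are all placeholders:

```shell
# One client SSL profile per site, selected by SNI via the
# Server Name field; exactly one profile should be sni-default.
tmsh create ltm profile client-ssl www1-clientssl \
    cert-key-chain replace-all-with { www1 { cert www1.crt key www1.key } } \
    server-name www1.foo.com \
    sni-default true

# Attach the profiles to the (non-strict) topology listener virtual.
tmsh modify ltm virtual sslo_gateway.app/sslo_gateway \
    profiles add { www1-clientssl www2-clientssl }
```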
The benefit of this method is that it is fast. The downsides are that you have to disable topology strictness (and leave it disabled), and you also have to keep track of which client SSL profile has the "Default for SNI" setting.

1. Create each client SSL profile and specify a unique server certificate and key. You'll also need to edit the "Server Name" field and enter the unique SNI hostname. The BIG-IP will select the correct client SSL profile automatically by virtue of the client's TLS ClientHello "servername" extension (SNI).
2. Select one of the client SSL profiles as "Default for SNI". This is the profile the BIG-IP will select in the very rare case that the client does not present an SNI. It's also worth noting that as of BIG-IP 14.1, SSL profile selection can also be based on the subject and subject alternative name (SAN) values in the applied certificate, rather than on the single static "Server Name" entry in the client SSL profile. You should still enter a Server Name value in the profile, but profile selection will only use it if the SNI does not match the subject or SAN in the applied certificate.
3. Attach all of the client SSL profiles to the topology listener virtual server.

Further detail: K13452: Configure a virtual server to serve multiple HTTPS sites using the TLS Server Name Indication feature

Certificate selection option 2: Dynamically assign the SSL profiles via client SNI

Using a slightly different approach, we'll employ an iRule on the topology interception rule that dynamically selects the client SSL profile based on the client's ClientHello SNI. The benefits here are that this doesn't require non-strict changes to the topology, and you don't have to set a Server Name value in the SSL profiles or worry about the Default for SNI setting. The disadvantage is that iRule-based dynamic profile selection generates a small amount of additional runtime overhead.
You'll need to weigh these two options against how heavily loaded your system will be.

1. Create each client SSL profile and specify a unique server certificate and key.
2. Create a string data group that maps the SNI server name value to the name of the client SSL profile. Example:

   www1.foo.com := /Common/www1.foo.com-clientssl
   www2.foo.com := /Common/www2.foo.com-clientssl
   www3.foo.com := /Common/www3.foo.com-clientssl

3. Navigate to https://github.com/f5devcentral/sslo-script-tools/tree/main/sni-switching, grab the "library-rule.tcl" iRule, and import it to your BIG-IP. Name it "library-rule".
4. On that same page, also grab and import the "in-t-rule-datagroup.tcl" iRule. Modify the data group name on line 23 with the name of your data group.
5. Finally, navigate to the Interception Rules tab in the SSL Orchestrator UI and edit the corresponding interception rule. At the bottom of the page add the new "in-t-rule-datagroup" iRule and re-deploy.

As TLS traffic enters the gateway mode topology listener, the above iRule will parse the client's ClientHello SNI, look up the corresponding client SSL profile in the data group, and switch to that profile. Whenever you need to add a new site, simply create a new client SSL profile and add the respective key and value to the data group.

Option 5: Reducing complexity with existing application, SNI switching and address lists

I've saved this one for last so that I could first introduce you to some of the underlying techniques, specifically address lists and SNI switching. You saw mention of address lists in testing the migration to existing application security policies, and we covered SNI switching in the previous gateway mode solution. If you can't employ a gateway mode (routed path) architecture in your environment, you can still reduce the total number of LTM virtual servers in the Existing Application topology option by using address lists.
figure 6: Reducing complexity in a reverse proxy with existing app, SNI switching, and address lists

An LTM virtual server supports either a single static source or destination IP, or an address list that you can define in the UI under Shared Objects >> Address Lists. We use address lists here to consolidate multiple LTM virtual servers into a single virtual server with an address list of multiple destination IPs. Since this is an LTM virtual server and not a full topology, you can also easily add the multiple client SSL profiles without dealing with topology strictness, though you do still need to define one client SSL profile as Default for SNI. In lieu of that, you can use the SNI switching iRules described above. If you then also reduce and reuse a unique set of security policies, you can drop control plane configuration objects as low as they can go. I've stated this a few times already, but by using LTM virtual servers instead of full inbound topologies, you not only cut down on control plane congestion but also regain some flexibility. And moving to shared security policies means configuration management becomes a breeze.

Reducing Complexity for Forward Proxy Topologies

Where inbound reverse proxy complexity usually manifests as lots of topologies and lots of security policies, there's usually only one (or a small few) forward proxy topologies. Most of the complexity in a forward proxy topology is actually in the security policy itself, or in situations where specific environmental requirements necessitate non-strict customization. To solve this sort of complexity you can employ what is referred to as an "internal layered architecture." This idea is covered in great detail here: https://devcentral.f5.com/s/articles/SSL-Orchestrator-Advanced-Use-Cases-Reducing-Complexity-with-Internal-Layered-Architecture.
Essentially this method uses a "steering" virtual server positioned in front of a set of SSL Orchestrator topologies (on the same BIG-IP) that directs traffic to one of the topologies via a local steering policy decision.

figure 7: Reducing complexity in a forward proxy with an internal layered architecture

This architecture has some pretty interesting advantages in its own right:

- The steering policy is simple, flexible, and highly automatable.
- In the absence of policy decisions, the SSL Orchestrator topologies are reduced to essentially static "functions": a unique combination of actions (allow/block, TLS intercept/bypass, service chain, egress).
- Topology objects are reusable, so while creating multiple topology functions you'll likely reuse most of the underlying objects, like SSL configurations and services.
- Topology functions support dynamic egress (egress via different paths).
- Topology functions support dynamic local CA issuer selection (for example, using a different local CA for different "tenants").
- Topology functions support more flexible traffic bypass options.
- Topologies as static functions will likely not require any non-strict customization, which makes management and upgrades simpler.

The above DevCentral article provides much more insight and offers a set of helper utilities to make steering policy super simple to deploy. To migrate to an internal layered architecture, we can use the most specific order of precedence again. Create a set of "dummy" VLANs: just empty VLANs with no assigned interfaces. SSL Orchestrator topologies require a VLAN, but these are going to be "internal". Then create your set of SSL Orchestrator topologies. The key idea here is that each will have a very basic security policy. The Pinners rule is actually covered in the helper utility, so you could reduce each security policy to a single "All Traffic" rule and apply the unique combination of actions. Last, assign a dummy VLAN to the topology and deploy it.
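The dummy VLANs can be created quickly in tmsh; a hedged sketch with illustrative names (no interfaces are assigned, since these VLANs carry no external traffic):

```shell
# Empty VLANs to satisfy each internal topology's VLAN requirement.
tmsh create net vlan dummy-intercept
tmsh create net vlan dummy-bypass
tmsh create net vlan dummy-block
```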
Create your steering VIP as described in the other article, but give it a source address filter that matches your local client address, or use an address list to let your local team test it. Once you're confident that traffic is flowing through the steering virtual and to the correct internal topologies via policy, you can use the same TMSH transaction technique from the reverse proxy Existing Application use case above to move the 0.0.0.0/0 source address filter from your existing forward proxy topology to the steering VIP. Once you've done that, you can delete the unused client-facing topology and any other unused objects.

Summary

If you are just getting started with SSL Orchestrator, you have a lot of options to choose from. The objective here is to encourage simplicity. For SSL Orchestrator specifically, that can mean reducing inbound topologies down to a single gateway mode topology, or using common BIG-IP capabilities like address lists to reduce the number of existing application virtual servers. Or, at the very least, consolidating to a common shared set of unique security policies. In fact, many of the above techniques for reducing and simplifying architectures are useful well beyond SSL Orchestrator. Any time you have an environment with an enormous number of virtual servers, you can almost always consolidate many of them with practices like address lists and SNI switching. That cleans up the control plane and can simplify configuration, since "common" settings can now be managed in one place. This article was long, and I sincerely thank you for bearing with me to the end. Hopefully you can use some of this information as a best practice in managing what is easily one of the most flexible application delivery controllers on the planet, and in doing so, vastly reduce the complexity in your security stack with SSL Orchestrator.