SSL Orchestrator
WAFaaS with SSL Orchestrator
Introduction

Note: This article applies to SSL Orchestrator versions prior to 11.0. If you are using version 11.0, refer to the article HERE.

This use case allows you to insert F5 WAF functionality as a service in the SSL Orchestrator inspection zone. WAFaaS is the ability to insert ASM profiles into the SSL Orchestrator service chain for inbound topologies. This configuration is specific to a WAF policy running on the SSL Orchestrator device. WAF and SSL Orchestrator both consume significant CPU cycles, so take care when deploying them together. It is also possible to deploy WAF as a service on a separate BIG-IP device, in which case you would simply configure an inline transparent proxy service. The ability to insert F5's WAF into the service chain presents a significant customer benefit.

This guide assumes you already have WAF/ASM profile(s) configured, licensed and provisioned on the BIG-IP, and that you wish to add this functionality to an inbound topology. In order to run WAF and SSL Orchestrator on the same device you will need an LTM license with SSL Orchestrator as an add-on option. You cannot add a WAF license to an SSL Orchestrator stand-alone license.

SSL Orchestrator does not directly support inserting F5 WAF policies into the service chain. However, the F5 platform is flexible enough to handle many custom use cases. In this case, the ICAP service configuration exposes a framework that is useful for any number of specialized patterns, including adding a WAF policy to an SSL Orchestrator service chain. We will configure an ICAP service and attach the WAF policy to it.

Steps:
- Create the ICAP service
- Disable strictness on the service
- Disable the TCP monitor for the ICAP pool
- Remove the ICAP Adapt profiles from the virtual server
- Enable Application Security Policy and assign a policy under Security

Step #1: Create ICAP Service

Note: These instructions assume an SSL Orchestrator topology and service chain are already deployed and working properly. They simply add WAFaaS to the existing service chain. It is entirely possible to create the WAFaaS service during the initial topology creation, in which case you would create the service during the workflow, then make the necessary changes after the topology has been created.

- From the SSL Orchestrator Guided Configuration click Services, then Add.
- Scroll to the bottom, select Generic ICAP Service and click Add.
- Give it a name, WAFaaS in this example.
- For ICAP Devices, click Add on the right.
- Enter an IP address, 198.19.97.1 in this example, and click Done.

Note: the IP address you use does not have to be the one above. It is just a local, non-routable address used as a placeholder in the service definition, and it will never receive traffic. Addresses 198.19.97.0 to 198.19.97.255 fall within a block reserved for network benchmark testing and are not routed on the Internet.

- Scroll to the bottom and click Save & Next.
- The next screen is the Service Chain List. Click the name of the service chain you wish to add WAF functionality to, ssloSC_ServiceChain in this example.

Note: The order of the services in the Selected column is the order in which SSL Orchestrator will pass decrypted data to each device. This can be an important consideration if you want some devices to see, or not see, the actions taken by the WAF service.

- Select the WAFaaS service and click the right arrow to move it to Selected. Click Save.
- Click Save & Next.
- Click Deploy.
- You should receive a Success message.

Step #2: Disable Strictness on the Service

- From the SSL Orchestrator Configuration screen select Services.
- Click the padlock to Unprotect Configuration.
Note: Disabling strictness on the ICAP service is needed in order to modify it and attach the WAF policy. Strictness must remain disabled on this service; disabling strictness here has no effect on any other part of the SSL Orchestrator configuration.

- Click OK to Unprotect the Configuration.

Step #3: Disable the tcp monitor for the ICAP Pool

- From Local Traffic select Pools > Pool List.
- Select the WAFaaS pool.
- Under Active Health Monitors, select tcp and click >> to move it to Available. This removes the pool's monitor; otherwise the pool would be marked down or unavailable.
- Click Update.

Note: The health monitor needs to be removed because there is no actual ICAP service to monitor.

Step #4: Remove the ICAP Adapt profiles from the Virtual Server

- From Local Traffic select Virtual Servers > Virtual Server List.
- Locate the WAFaaS ICAP virtual server whose name ends in "-t-4" and select it.
- Set the Request Adapt Profile and Response Adapt Profile to None to disable the default ICAP profiles.
- Click Update.

Step #5: Enable Application Security Policy and assign a Policy under Security

- For the WAFaaS -t-4 virtual server, click the Security tab.
- Set Application Security Policy to Enabled.
- Select the security policy you wish to use. Click Update when done.

Note: In specific versions of SSL Orchestrator there is one extra configuration item that must be modified. This is NOT required in other versions. If you make this change and later upgrade, it is not necessarily required to back it out. Affected versions:
- SSL Orchestrator 5.9.15, available on TMOS 14.1.4
- SSL Orchestrator 6.0-6.5, available on TMOS 15.0.x

For the affected versions:
- Navigate to Local Traffic > Profiles : Other : Service.
- Select the Service profile named "ssloS_WAFaaS-service".
- Change the Type from "ICAP" to "F5 Module".

Conclusion

The configuration is now complete. Using WAFaaS this way is functionally the same as using the WAF by itself. There are no known limitations to this configuration.
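As a command-line footnote to Steps 3 and 4 above, the same changes can be sketched in tmsh. The pool, virtual server and profile names below are assumptions based on the example service name WAFaaS; list the generated objects first and substitute the exact names SSL Orchestrator created in your environment. Step 5 is most easily completed in the GUI as described above.

# Confirm the object names SSL Orchestrator generated (names below are placeholders)
tmsh list ltm pool one-line | grep -i wafaas
tmsh list ltm virtual ssloS_WAFaaS-t-4 profiles

# Step 3: remove the health monitor from the ICAP pool
tmsh modify ltm pool ssloS_WAFaaS-icap-pool monitor none

# Step 4: remove the default request/response adapt profiles from the -t-4 virtual server
tmsh modify ltm virtual ssloS_WAFaaS-t-4 profiles delete { ssloS_WAFaaS-adapt-req ssloS_WAFaaS-adapt-res }

tmsh save sys config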
Packet Analysis with Scapy and tcpdump: Checking Compatibility with F5 SSL Orchestrator

In this guide I want to demonstrate how you can use Scapy (https://scapy.net/) and tcpdump to reproduce and troubleshoot layer 2 issues with F5 BIG-IP devices. Just in case you get into a finger-pointing situation...

Starting situation

This is a quite recent story from the trenches: my customer uses a Bypass Tap to forward or mirror data traffic to inline tools such as IDS/IPS, WAF or threat intelligence systems. This Bypass Tap offers a feature called Network Failsafe (also known as Fail-to-Wire). This is a fault tolerance feature that protects the flow of data in the event of a power outage and/or system failure. It allows traffic to be rerouted while the inline tools (IDS/IPS, WAF or threat intelligence systems) are shutting down, restarting, or unexpectedly losing power (see the red line named Fallback in the picture below).

Since the Bypass Tap itself does not support SSL decryption and re-encryption, an F5 BIG-IP SSL Orchestrator was to be introduced as an inline tool in a layer 2 inbound topology. Tools directly connected to the Bypass Tap will be connected to the SSL Orchestrator for better visibility. To check the status of the inline tools, the Bypass Tap sends health checks through them. What is sent on one interface must be seen on the other interface, and vice versa. So if all is OK (the health check is green), traffic will be forwarded to the SSL Orchestrator, decrypted and sent to the IDS/IPS and the TAP, and then re-encrypted and sent back to the Bypass Tap. If the Bypass Tap detects that the SSL Orchestrator is in a failure state, it will simply forward the traffic to the switch.

This is the traffic flow of the health checks:

Target topology

This results in the following topology:

Problem description

During commissioning of the new topology, it turned out that the health check packets were not forwarded through the vWire configured on the BIG-IP. A packet analysis with Wireshark revealed that the manufacturer uses ARP-like packets with opcode 512 (hex 02 00). This opcode is not defined in the RFC that describes ARP (https://datatracker.ietf.org/doc/html/rfc826); the RFC only defines the opcodes Request (1, hex 00 01) and Reply (2, hex 00 02).

NOTE: Don't get confused that you see ARP packets on port 1.1 and 1.2. They are not passing through; the Bypass Tap is simply sending those packets from both sides of the vWire, as explained above. The source MAC addresses on port 1.1 and 1.2 are different.

Since the Bypass Tap is located right behind the customer's edge firewall, lengthy and time-consuming tests on the live system were not an option, as they would have caused a massive service interruption. Therefore, a BIG-IP i5800 (the same model as the customer's) was set up as an SSL Orchestrator and a vWire configuration was built in my employer's lab. The vWire configuration can be found in this guide (https://clouddocs.f5.com/sslo-deployment-guide/chapter2/page2.7.html).

INFO: For those not familiar with vWire: "Virtual wire … creates a layer 2 bridge across the defined interfaces. Any traffic that does not match a topology listener will pass across this bridge."

Lab Topology

The following topology was used for the lab: I built a vWire configuration on the SSL Orchestrator, as in the customer's environment. A Linux system with Scapy installed was connected to interface 1.1. With Scapy, TCP, UDP and ARP packets can be crafted or sent like a replay from a Wireshark capture. Interface 1.3 was connected to another Linux system that should receive the ARP packets.
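Before replaying anything, it can also help to confirm on the BIG-IP itself what the Bypass Tap is actually sending, by filtering on the ARP opcode field (offset 6 of the ARP header). This is a minimal check under the lab assumptions above; the interface names are the vWire members from this topology, so adjust them to yours.

# Show only ARP frames whose opcode is not a standard Request (1) or Reply (2)
tcpdump -lnnei 1.1 'arp[6:2] > 2'
# Run the same filter on the opposite vWire interface to see whether the frame was forwarded
tcpdump -lnnei 1.3 'arp[6:2] > 2'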
All tcpdumps were captured on the F5 and analyzed on the admin system (not plotted).

Validating the vWire Configuration

To check the functionality of the F5 and the vWire configuration, two tests were performed: a replay of the health check packets from the Bypass Tap, and a test with RFC-compliant ARP requests.

Use Scapy to resend the faulty packets

First, I used Wireshark to extract a single packet from the packet analysis we took in the customer environment and saved it to a pcap file. I then replayed this pcap file to the F5 with Scapy. The sendp() function works at layer 2; it requires the parameters rdpcap (location of the pcap file to replay) and iface (the interface to send from).

webserverdude@tux480:~$ sudo scapy -H
WARNING: IPython not available. Using standard Python shell instead.
AutoCompletion, History are disabled.
Welcome to Scapy (2.5.0)
>>> sendp(rdpcap("/home/webserverdude/cusomter-case/bad-example.pcap"),iface="enp0s31f6")
.
Sent 1 packets.

This test confirmed the behavior that was observed in the customer's environment: the F5 BIG-IP does not forward this packet.

Use PING and Scapy to send RFC-compliant ARP packets

To create RFC-compliant ARP requests, I first sent an ARP request (opcode 1) through the vWire via the PING command. As expected, this was forwarded through the vWire. To ensure that this also works with Scapy, I resent the same packet with Scapy.

>>> sendp(rdpcap("/home/webserverdude/cusomter-case/good-example.pcap"),iface="enp0s31f6")
.
Sent 1 packets.

In the Wireshark analysis it can be seen that this packet arrives on port 1.1 and is then forwarded to port 1.3 through the vWire.

Solving the issue with the help of the vendor

It became evident that the BIG-IP was dropping ARP packets that failed to meet RFC compliance, rendering the Bypass Tap from this particular vendor seemingly incompatible with the BIG-IP. Following my analysis, the vendor was able to develop and provide a new firmware release addressing this issue. To verify that the issue was resolved in this firmware release, my customer's setup (the exact same model of the Bypass Tap and a BIG-IP i5800) was deployed in my lab, where the new firmware underwent thorough testing. With this approach I could test the functionality and compatibility of the systems under controlled conditions.

In this Wireshark analysis it can be seen that the health check packets arrive on port 1.1 and are forwarded to port 1.3 through the vWire (marked in green), and also the other way round, arriving on port 1.3 and forwarded to port 1.1 (marked in pink). You can also see that the packet is now a proper gratuitous ARP reply (https://wiki.wireshark.org/Gratuitous_ARP).

Because the health check packets were no longer dropped by the BIG-IP but were forwarded through the vWire, the Bypass Tap subsequently marked the BIG-IP as healthy and available. The new firmware resolved the issue. Consequently, my customer could confidently proceed with this project, free from the constraints imposed by the compatibility issue.
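As a final, optional acceptance check (this workflow note is an assumption, not part of the original lab write-up), one capture per vWire interface, for example in two SSH sessions, makes it easy to confirm the health checks now pass in both directions after the firmware upgrade. Interface names follow the lab topology above.

# Session 1: one side of the vWire
tcpdump -lnnei 1.1 arp
# Session 2: the other side - every health-check frame seen on 1.1 should also appear on 1.3, and vice versa
tcpdump -lnnei 1.3 arp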
SSL Orchestrator Advanced Use Cases: Outbound SNAT Persistence

Introduction

F5 BIG-IP is synonymous with "flexibility". You likely have few other devices in your architecture that provide the breadth of capabilities that come native with the BIG-IP platform. And for each and every BIG-IP product module, the opportunities to expand functionality are almost limitless. In this article series we examine the flexibility options of the F5 SSL Orchestrator in a set of "advanced" use cases.

If you haven't noticed, the world has been steadily moving toward encrypted communications. Everything from web, email, voice, video, chat, and IoT is now wrapped in TLS, and that's a good thing. The problem is, malware - that thing that creates havoc in your organization, that exfiltrates personnel records to the Dark Web - isn't stopped by encryption. TLS 1.3 and multi-factor authentication don't eradicate malware. The only reasonable way to defend against it is to catch it in the act, and an entire industry of security products is designed for just this task. But ironically, encryption makes this hard. You can't protect against what you can't see. F5 SSL Orchestrator simplifies traffic decryption and malware inspection, and dynamically orchestrates traffic to your security stack. But it does much more than that. SSL Orchestrator is built on top of F5's BIG-IP platform, and as stated earlier, is abound with flexibility.

SSL Orchestrator Use Case: Outbound SNAT Persistence

It may not be the most obvious thing to think about persistence in the vein of outbound traffic. We are all groomed to accept that any given load balancer can handle persistence (or "affinity", or "stickiness") to backend servers. This is an important characteristic for sure. But in an outbound scenario, you don't load balance remote servers, so why on Earth would you need persistence? Well, I'm glad you asked. There indeed happens to be a somewhat unique, albeit infrequent, use case where two different servers need to persist on YOUR IP address. The classic example is a site that requires federated authentication, where the service provider (SP) generates a token (perhaps a SAML authentication request) with the client IP embedded inside. The client receives this message and is redirected to the IdP to authenticate. But in this case the client is talking to the outside world through a forward proxy, and outbound source NAT (SNAT) could be required in this environment. That means there's a potential that the client IP address, as seen from the two remote servers, could be different. So if the IdP needs to verify the client IP based on what's embedded in the authentication request token, that could possibly fail. The good news here is that federated authentication doesn't normally require client IP verification, and there aren't many other similar use cases, but it can happen.

The F5 BIG-IP, as with ANY proxy server, load balancer, or ADC device, clearly supports server affinity, and in a highly flexible way. But, as with ANY proxy server, load balancer, or ADC device, that doesn't apply to SNAT addresses. Nevertheless, the F5 BIG-IP can be configured to do this, which is exactly what this article is about. We're going to flex some BIG-IP muscle to derive a unique and innovative way to enable outbound SNAT persistence. What we're basically talking about is ensuring that a single internal client persists to a single outbound SNAT IP address, when and where needed, and as long as possible.
It's important to note here that we're not really talking about persistence in the same way you think about load balanced server affinity. With affinity, you're stapling a single (remote) client "session" to a single load balanced server. With SNAT persistence, you're stapling a single outbound SNAT IP to a single internal client so that all remote servers see that same source address. Same-same but different-different.

To do this we'll need a SNAT pool and an iRule. We need the SNAT pool to define the SNAT addresses we can use. And since SNAT pools don't provide a persistence option like regular pools do, we'll use an iRule to provide the stickiness. It's also worth noting here, again since we're not really talking about load balancing stickiness, that the IP persistence mechanism in the iRule may not (likely will not) evenly distribute the IPs in the SNAT pool. Your best bet is to provide as many SNAT pool IPs as possible and reasonable. The good news here is that, because you're using a BIG-IP, you can define exactly how you assert that IP stickiness. In most cases, you'll probably just want to persist on the internal client IP, but you could also persist on:

- Client source address and remote server port
- Client source address and remote destination address
- Client source, day of the week, the year+month+day % mod 2, a hash of the word-of-the-day... and hopefully you get the idea. Lots of options.

To make this work, let's start with the SNAT pool. Navigate to Local Traffic -> Address Translation -> SNAT Pool List in the BIG-IP and click Create. In the Member List section, add as many SNAT IPs as you can afford. Remember, these are going to be IPs on your outbound VLAN, so in the same subnet as your outbound VLAN self-IP.

Figure: SNAT pool list

You don't need to assign the SNAT pool to anything directly. The iRule will handle that. And now onto the iRule. Navigate to Local Traffic -> iRules -> iRule List in the BIG-IP, and click Create. Copy the following into the iRule editor:

when RULE_INIT {
    ## This iRule should be applied to your SSLO interception rule ending with in-t-4.
    catch { unset -nocomplain static::snat_ips }

    ## For each SNAT IP needed define the IP versus dynamically looking it up.
    ## These need to be in the real SNAT pool as well so ARP works.
    set static::snat_ips(0) 10.1.20.50
    set static::snat_ips(1) 10.1.20.51
    set static::snat_ips(2) 10.1.20.52
    set static::snat_ips(3) 10.1.20.53
    set static::snat_ips(4) 10.1.20.54

    ## Set to how many SNAT IPs were added
    set static::array_size 5
}
when CLIENT_ACCEPTED priority 100 {
    ## Select and uncomment only ONE of the below SNAT persistence options

    ## Persist SNAT based on client address only
    snat $static::snat_ips([expr {[crc32 [IP::client_addr]] % $static::array_size}])

    ## Persist SNAT based on client address and remote port
    #snat $static::snat_ips([expr {[crc32 [IP::client_addr] [TCP::remote_port]] % $static::array_size}])

    ## Persist SNAT based on client address and remote address
    #snat $static::snat_ips([expr {[crc32 [IP::client_addr] [IP::local_addr]] % $static::array_size}])
}

Let's take a moment to explain what this iRule is actually doing, and it is fairly straightforward. In RULE_INIT, which fires ONCE when you update the iRule, the members of the defined SNAT pool are read into an array (indexed 0 through 4, so the modulus below always lands on a valid key). Then a second static variable is created to store the size of the array. These values are stored as static, global variables.
In CLIENT_ACCEPTED we set a priority of 100 to control the order of execution under SSL Orchestrator, as there is already a CLIENT_ACCEPTED iRule event on the topology (we want our new event to run first). Below that you're provided with three choices for persistence: persist on source IP only, source IP and destination port, or source IP and destination IP. You'll want to uncomment only ONE of these. Each basically performs a quick CRC hash on the selected value, then calculates a modulus based on the array size. This returns a number within the size of the array, which is then used as the index into the array to extract one of the array values. This calculation is always the same for the same input value(s), so it effectively persists on that value. The selected SNAT IP is then fed to the 'snat' command, and there you have it. As stated, you're probably only going to need the source-only persistence option. Using either of the others will pin a SNAT IP to a client IP and protocol port (ex. client IP:443 or client IP:80), or pin a SNAT IP to a specific host (ex. client IP:www.example.com), respectively. At the end of the day, you can insert any reasonable expression that will result in the selection of one of the values in the SNAT pool array, so the sky is really the limit here.

The last step is easiest of all. You need to attach this iRule to your SSL Orchestrator topology. To do that, navigate to SSL Orchestrator -> Configuration in the UI, select the Interception Rules tab, and click to edit the respective outbound interception rule. Scroll to the bottom of this page, and under Resources, add the new iRule to the Selected column. The order doesn't matter. Click Deploy to complete the change, and you're done.

You can do a packet capture on your outbound VLAN to see what is happening.

tcpdump -lnni [outbound vlan] host 93.184.216.34

And then access https://www.example.com to test. For your IP address you should see a consistent outgoing SNAT IP. If you have access to a Linux client, you can add multiple IP addresses to an interface and test with each:

ifconfig eth0:1 10.1.10.51
ifconfig eth0:2 10.1.10.52
ifconfig eth0:3 10.1.10.53
ifconfig eth0:4 10.1.10.54
ifconfig eth0:5 10.1.10.55

curl -vk https://www.example.com --interface 10.1.10.51
curl -vk https://www.example.com --interface 10.1.10.52
curl -vk https://www.example.com --interface 10.1.10.53
curl -vk https://www.example.com --interface 10.1.10.54
curl -vk https://www.example.com --interface 10.1.10.55

And again there you have it. In just a few steps you've been able to enable outbound SNAT persistence, and along the way you have hopefully recognized the immense flexibility at your command.
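To go a step further than the packet capture, you can also confirm the mapping from the BIG-IP's connection table. This is a sketch under the same lab assumptions as the curl tests above (client alias IPs 10.1.10.51-53 and the example.com test site); for a given source address, the serverside client (SNAT) address shown should remain constant across repeated requests.

# On the client: generate a continuous stream of requests from each source address
for src in 10.1.10.51 10.1.10.52 10.1.10.53; do
  ( while true; do curl -sk https://www.example.com --interface $src -o /dev/null; done ) &
done

# On the BIG-IP, while the loops run: check which SNAT address each internal client is pinned to
tmsh show sys connection cs-client-addr 10.1.10.51
tmsh show sys connection cs-client-addr 10.1.10.52
tmsh show sys connection cs-client-addr 10.1.10.53

# Back on the client: stop the background loops
kill $(jobs -p)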
SSL Orchestrator Advanced Use Cases: Client Certificate Constrained Delegation (C3D) Support

Introduction

F5 BIG-IP is synonymous with "flexibility". You likely have few other devices in your architecture that provide the breadth of capabilities that come native with the BIG-IP platform. And for each and every BIG-IP product module, the opportunities to expand functionality are almost limitless. In this article series we examine the flexibility options of the F5 SSL Orchestrator in a set of "advanced" use cases.

If you haven't noticed, the world has been steadily moving toward encrypted communications. Everything from web, email, voice, video, chat, and IoT is now wrapped in TLS, and that's a good thing. The problem is, malware - that thing that creates havoc in your organization, that exfiltrates personnel records to the Dark Web - isn't stopped by encryption. TLS 1.3 and multi-factor authentication don't eradicate malware. The only reasonable way to defend against it is to catch it in the act, and an entire industry of security products is designed for just this task. But ironically, encryption makes this hard. You can't protect against what you can't see. F5 SSL Orchestrator simplifies traffic decryption and malware inspection, and dynamically orchestrates traffic to your security stack. But it does much more than that. SSL Orchestrator is built on top of F5's BIG-IP platform, and as stated earlier, is abound with flexibility.

SSL Orchestrator Use Case: Client Certificate Constrained Delegation (C3D)

Using certificates to authenticate is one of the oldest and most reliable forms of authentication. While not every application supports modern federated access or multi-factor schemes, you'll be hard-pressed to find something that doesn't minimally support authentication over TLS with certificates. And coupled with hardware tokens like smart cards, certificates can enable one of the most secure multi-factor methods available.

But certificate-based authentication has always presented a unique challenge to security architectures. Certificate "mutual TLS" authentication requires an end-to-end TLS handshake. When a server indicates a requirement for the client to submit its certificate, the client must send both its certificate and a digitally-signed hash value. This hash value is signed (i.e. encrypted) with the client's private key. Should a device between the client and server attempt to decrypt and re-encrypt, it would be unable to satisfy the server's authentication request, by virtue of not having access to the client's private key (to create the signed hash). This makes encrypted malware inspection complicated, often requiring a total bypass of inspection for sites that require mutual TLS authentication.

Fortunately, F5 has an elegant solution to this challenge in the form of Client Certificate Constrained Delegation, affectionately referred to as "C3D". The concept is reasonably straightforward. In very much the same way that SSL forward proxy re-issues a remote server certificate to local clients, C3D can re-issue a remote client certificate to local servers. A local server can continue to enforce secure mutual TLS authentication, while allowing the BIG-IP to explicitly decrypt and re-encrypt in the middle. This presents an immediate advantage in basic load balancing, where access to the unencrypted data allows the BIG-IP greater control over persistence. In the absence of this, persistence would typically be limited to IP address affinity. But of course, access to the unencrypted data also allows the content to be scanned for malware.
C3D actually takes this concept of certificate re-signing to a higher level, though. The "constrained delegation" portion of the name implies a characteristic much like Kerberos constrained delegation, where (arbitrary) attributes can be inserted into the re-signed token, like the PAC attributes in a Kerberos ticket, to inform the server about the client. Servers, for their part, can then simply filter on client certificates issued by the BIG-IP (to prevent direct access), and consume any additional attributes in the certificate to understand how better to handle the client. With C3D you can maintain strong mutual TLS authentication all the way through to your servers, while allowing the BIG-IP to more effectively manage availability. And combined with SSL Orchestrator, C3D can enable decryption and inspection of content for malware detection.

This article describes how to configure SSL Orchestrator to enable C3D for inbound decrypted inspection. Arguably, most of what follows is the C3D configuration itself, as the integration with SSL Orchestrator is pretty simple. Note that Client Certificate Constrained Delegation (C3D) is included with Local Traffic Manager (LTM) 13.0 and beyond, but for integration with SSL Orchestrator you should be running 14.1 or later. To learn more about C3D, please see the following resources:

- K14065425: Configuring Client Certificate Constrained Delegation (C3D): https://support.f5.com/csp/article/K14065425
- Manual Chapter: SSL Traffic Management: https://techdocs.f5.com/en-us/bigip-15-1-0/big-ip-system-ssl-administration/ssl-traffic-management.html#GUID-B4D2529E-D1B0-4FE2-8C7F-C3774ADE1ED2
- SSL::c3d iRule reference - not required to use C3D, but adds powerful functionality: https://clouddocs.f5.com/api/irules/SSL__c3d.html

The integration of C3D with SSL Orchestrator involves effectively replacing the client and server SSL profiles that the SSL Orchestrator topology creates with C3D SSL profiles. This is done programmatically with an iRule, so no "non-strict" customization is required at the topology. Also note that an inbound (reverse proxy) SSL Orchestrator topology will take the form of a "gateway mode" deployment (a routed path to multiple applications), or an "application mode" deployment (a single application instance hosted at the BIG-IP). See section 2.5 of the SSL Orchestrator deployment guide for a deeper examination of gateway and application modes: https://clouddocs.f5.com/sslo-deployment-guide/. The C3D integration is only applicable to application mode deployments.

Configuration

C3D itself involves the creation of client and server SSL profiles.

Create a new Client SSL profile:

Configuration
- Certificate Key Chain: the public-facing server certificate and private key. This is the certificate and key presented to the client on the inbound request, and will likely be the same certificate and key defined in the SSL Orchestrator inbound topology.

Client Authentication
- Client Certificate: require
- Trusted Certificate Authorities: a bundle that can validate the client certificate. This bundle is used to verify the client's certificate and should contain all of the client certificate issuer CAs.
- Advertised Certificate Authorities: optional CA hints bundle. Not expressly required, but this certificate bundle is forwarded to the client during the TLS handshake to "hint" at the correct certificate, based on issuer.
Client Certificate Constrained Delegation
- Client Certificate Constrained Delegation: Enabled
- Client Fallback Certificate (new in 15.1): option to select a default client certificate if the client does not send one. This option was introduced in 15.1 and provides the means to select an alternate (local) certificate if the client does not present one. The primary use case here might be to select a "template" certificate, and use an iRule function to insert arbitrary attributes.
- OCSP: optional client certificate revocation control. This option defines an OCSP revocation provider for the client certificate.
- Unknown OCSP Response Control (new in 15.1): determines what happens when OCSP returns Unknown. If an OCSP revocation provider is selected, this option defines what to do if the response to the OCSP query is "unknown".

Create a new Server SSL profile:

Configuration
- Certificate: default.crt. The certificate and key here are used as "templates" for the re-signed client certificate.
- Key: default.key

Client Certificate Constrained Delegation
- Client Certificate Constrained Delegation: Enabled
- CA Certificate: the local forging CA certificate. This is the CA certificate used to re-sign the client certificate. This CA must be trusted by the local servers.
- CA Key: the local forging CA key
- CA Passphrase: optional CA passphrase
- Certificate Extensions: extensions from the real client certificate to be included in the forged certificate. This is the list of certificate extensions to be copied from the original certificate to the re-issued certificate.
- Custom Extension: additional extensions to copy from the real certificate to the forged certificate, specified as OID values.

Additional considerations: Under normal conditions, the F5 and backend server attempt to resume existing SSL sessions, whereby the server doesn't send a Certificate Request message. The effect is that all connections to the backend server use the same forged client certificate. There are two ways to get around this:

- Set a zero-length cache value in the server SSL profile, or
- Set the server authentication frequency to 'always' in the server SSL profile

CA certificate considerations: A valid signing CA certificate should possess the following attributes. While it can work in some limited scenarios, a self-signed server certificate is generally not an adequate option for the signing CA.

- keyUsage: certificate extension containing the "keyCertSign" and "digitalSignature" attributes
- basicConstraints: certificate extension containing "CA = true" (for Yes), marked as "Critical"

With the client and server SSL profiles built, the C3D configuration is basically done. To integrate with an inbound SSL Orchestrator topology, create a simple iRule and add it to the topology's Interception Rule configuration. Modify the SSL profile paths below to reflect the profiles you created earlier.

### Modify the SSL profile paths below to match real C3D SSL profiles
when CLIENT_ACCEPTED priority 250 {
    ## set clientssl
    set cmd1 "SSL::profile /Common/c3d-clientssl" ; eval $cmd1
}
when SERVER_CONNECTED priority 250 {
    ## set serverssl
    SSL::profile "/Common/c3d-serverssl"
}

In the SSL Orchestrator UI, either from the topology workflow, or directly from the corresponding Interception Rule configuration, add the above iRule and deploy. The above iRule programmatically overrides the SSL profiles applied to the Interception Rule (virtual server), effectively enabling C3D support.
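Once the iRule is attached and the topology re-deployed, a quick client-side sanity check (not part of the original walkthrough) is to complete a mutual TLS handshake against the topology's virtual server with openssl. The VIP address and file names below are placeholders for illustration.

# Present a valid client certificate to the inbound topology VIP (address and file names are assumptions)
openssl s_client -connect 203.0.113.10:443 -cert client.crt -key client.key -CAfile issuing-ca.crt -state
# A successful handshake ends with "Verify return code: 0 (ok)"; a capture on the server-side VLAN
# should then show the re-issued (C3D-minted) client certificate being presented to the backend.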
At this point, the virtual server will request a client certificate, perform revocation checks if defined, and then mint a local copy of the client certificate to pass to the backend server. Optionally, you can insert additional certificate attributes via the server SSL profile configuration, or more dynamically through additional iRule logic:

### Added in 15.1 - allows you to send a forged cert to the server
### irrespective of the client side authentication (ex. APM SSO),
### and insert arbitrary values
when SERVERSSL_SERVERCERT {
    ### The following options allow you to override/replace a submitted
    ### client cert. For example, a minted client certificate can be sent
    ### to the server irrespective of the client side authentication method.
    ### This certificate "template" could be defined locally in the iRule
    ### (Base64-encoded), pulled from an iFile, or some other certificate source.
    # set cert1 [b64decode "LS0tLS1a67f7e226f..."]
    # set cert1 [ifile get template-cert]

    ### In order to use a template cert, it must first be converted to DER format
    # SSL::c3d cert [X509::pem2der $cert1]

    ### Insert arbitrary attributes (OID:value)
    SSL::c3d extension 1.3.6.1.4.1.3375.3.1 "TEST"
}

If you've configured the above, a server behind SSL Orchestrator that requires mutual TLS authentication can receive minted client certificates from external users, and SSL Orchestrator can explicitly decrypt and pass traffic to the set of malware inspection tools. You can look at the certificate sent to the server by taking a tcpdump capture between the BIG-IP and the server, then opening it in Wireshark.

tcpdump -lnni [VLAN] -Xs0 -w capture.pcap [additional filters]

Finally, you might be asking what to do with certificate attributes injected by C3D, and really it depends on what the server can support. The below is a basic example in an Apache config file to block a client certificate that doesn't contain your defined attribute.

<Directory />
    SSLRequire "HTTP/%h" in PeerExtList("1.3.6.1.4.1.3375.3.1")
    RewriteEngine on
    RewriteCond %{SSL:SSL_CLIENT_VERIFY} !=SUCCESS
    RewriteRule .? - [F]
    ErrorDocument 403 "Delegation to SPN HTTP/%h failed. Please pass a valid client certificate"
</Directory>

And there you have it. In just a few steps you've configured your SSL Orchestrator to integrate with Client Certificate Constrained Delegation to support mutual TLS authentication, and along the way you have hopefully recognized the immense flexibility at your command.

Updates

As of F5 BIG-IP 16.1.3, there are some new C3D capabilities:

- C3D has been updated to encode and return the commonName (CN) found in the client certificate subject field in printableString format if possible; otherwise the value will be encoded as UTF8.
- C3D has been updated to support inserting a subject commonName (CN) via the 'SSL::c3d subject commonName' command:

when CLIENTSSL_HANDSHAKE {
    if {[SSL::cert count] > 0} {
        SSL::c3d subject commonName [X509::subject [SSL::cert 0] commonName]
    }
}

- C3D has been updated to support inserting a Subject Alternative Name (SAN) via the 'SSL::c3d extension SAN' command:

when CLIENTSSL_HANDSHAKE {
    SSL::c3d extension SAN "DNS:*.test-client.com, IP:1.1.1.1"
}

- C3D has been updated to add the Authority Key Identifier (AKI) extension to the client certificate if the CA certificate has a Subject Key Identifier (SKI) extension.

Another interesting use case is copying the real client certificate's Subject Key Identifier (SKI) to the minted client certificate.
By default, the minted client certificate will not contain an SKI value, but it's easy to configure C3D to copy the origin certificate's SKI by modifying the C3D server SSL profile. In the "Custom extension" field of the C3D section, add 2.5.29.14 as an available extension.

As of F5 BIG-IP 17.1.0 (SSL Orchestrator 11.0), C3D has been integrated natively. Now, for a deployed inbound topology, the C3D SSL profiles are listed in the Protocol Settings section of the Interception Rules tab. You can replace the client and server SSL profiles created by SSL Orchestrator with C3D SSL profiles in the Interception Rules tab to support C3D. C3D support is now extended to both gateway and application modes.
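Circling back to the CA certificate considerations listed earlier, before testing it is worth confirming that the signing CA referenced in the C3D server SSL profile actually carries the required attributes. A quick openssl check; the file name is a placeholder.

# Inspect the signing CA for the attributes C3D needs
openssl x509 -in c3d-ca.crt -noout -text | grep -A2 -E 'Key Usage|Basic Constraints'
# Expect "Certificate Sign" and "Digital Signature" under Key Usage, and "CA:TRUE" under Basic Constraints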
SSL Orchestrator Advanced Use Cases: Integrating SOCKS Proxy

Introduction

When we talk about security, and in particular malware defenses, we spend a lot of time focusing on the capabilities of a product or an entire security stack. We want to know what types of malware a solution covers to get a sense of how effective that solution is. But beyond that, and something that's not always obvious, is exactly how a solution "sees" the traffic. Now I'm not talking about decryption. Clearly that needs to happen somewhere in the stack for the malware solutions to actually be useful. What I'm referring to, though, is how traffic actually gets TO the security stack. For an Internet-facing web application that's usually obvious - it's routed to an IP address, it's TCP-based, it's wrapped in TLS encryption, and it's application layer HTTP. Client browsers send a request that's routed across the Internet, arriving at a firewall, and then transported internally to a web server, maybe passing through a load balancer, proxy server, WAF and/or DDoS solution to get there. In any case, this describes a common "reverse proxy" scenario. But this is not the ONLY way that malware security solutions are employed. For example, in an enterprise scenario, it is just as important to apply defenses to the outbound flow for traffic leaving the network to the Internet (or perhaps other business units). But in a forward proxy, the "how" isn't always as obvious.

To get traffic to the security stack, SSL Orchestrator natively supports inbound and outbound flows, layer 3 (standard routing), layer 2 (bump-in-the-wire), and explicit HTTP proxy. You can also turn on WCCP, BGP, OSPF and others to enable dynamic routing, and WPAD and proxy PAC files to enable dynamic proxying. You can deploy SSL Orchestrator inline of the traffic flow, or "on-a-stick". You can even deploy SSL Orchestrator as a load balanced solution for greater scale, using another BIG-IP or even ECMP*. Truth be told, most other TLS visibility products only handle a small subset of those just listed. But there is something I subtly didn't mention. SSL Orchestrator supports explicit proxy for HTTP/HTTPS traffic. An explicit proxy is a type of proxy that the client is aware of and must target directly. The proxy creates a TCP connection to another server behind the firewall on the client's behalf, and is usually employed when the client is not permitted to establish TCP connections to outside servers directly. This would be the proxy server settings in your browser configuration. The native HTTP explicit proxy support is fine for anything a browser needs to talk to, but there may be scenarios where a proxy server is required in an environment for other protocols. For that, the "Socket Secure" (SOCKS) protocol could be a reasonable option. SOCKS isn't built into the SSL Orchestrator configuration, but BIG-IP has had it forever, and as you've no doubt heard me say a few times, the flexibility of the BIG-IP platform prevails. In this article I'm going to show just how easy it is to enable a SOCKS proxy listener for SSL Orchestrator. Let's go see what that looks like!

SSL Orchestrator Use Case: Integrating SOCKS Proxy

To start, it's important to understand that SSL Orchestrator creates an HTTP explicit forward proxy with two virtual servers:

- The first is the actual proxy virtual server. This contains the IP address and port that the client will configure in its proxy settings. All proxied traffic from clients will enter through this virtual server.
- The second is the TCP tunnel virtual server.
The TCP tunnel virtual server is where the SSL Orchestrator decryption and service chaining magic happens. It listens on an internal "tunnel" VLAN provided by the proxy virtual. Traffic enters through the proxy virtual and is forwarded (wrapped) through the TCP tunnel virtual. The TCP tunnel virtual server is basically identical to a transparent proxy, except that it only listens internally for traffic.

To create a SOCKS proxy implementation, we are simply going to:

- Create the SOCKS proxy virtual server configuration manually
- Create an SSL Orchestrator transparent proxy topology
- Add the internal proxy tunnel VLAN to the resulting transparent proxy virtual server(s)

In the same way that the native HTTP explicit proxy works, traffic will enter the SOCKS proxy VIP and wrap around to the TCP tunnel (transparent proxy) virtual server that SSL Orchestrator created for us.

Licensing and Version Requirements

The beauty here is that you simply need SSL Orchestrator to do this. The SOCKS proxy profile is a function of Local Traffic Manager (LTM), which is also included in the standalone SSL Orchestrator license. As for versions, your best bet is any SSL Orchestrator version 9.0 or higher. It will certainly work with earlier versions, but in those you'll need to disable configuration strictness. Versions 9.0 and above remove this requirement.

Configuring a SOCKS Proxy

To build a SOCKS proxy virtual server, you'll need a DNS resolver, a TCP tunnel, a SOCKS profile, and a virtual server.

DNS resolver: In the BIG-IP UI, under Network – DNS Resolvers – DNS Resolver List, click Create. Give this DNS resolver a name and then click Finished. Now click to edit this resolver, move to the Forward Zones tab, and click Add.
- Name: enter a single period (".", without the quotes)
- Address: enter the IP address of a DNS nameserver
- Service Port: enter the listening port of that DNS nameserver (almost always 53)
Click Add to complete.

TCP tunnel: In the BIG-IP UI, under Network – Tunnels – Tunnel List, click Create. Give this tunnel a name.
- Profile: tcp-forward
Click Finished to complete.

SOCKS profile: In the BIG-IP UI, under Local Traffic – Profiles – Services – SOCKS, click Create. Give the SOCKS profile a name.
- Protocol Versions: select the required versions to support, usually just Socks5
- DNS Resolver: select the previously created DNS resolver
- IPv6: enable this if IPv6 is required
- Route Domain: set this if a unique (non-zero) route domain is required
- Tunnel Name: select the previously created TCP tunnel
- Default Connect Handling: set to Deny
Click Finished to complete.

SOCKS virtual server: In the BIG-IP UI, under Local Traffic – Virtual Servers – Virtual Server List, click Create. Give the virtual server a name.
- Source: 0.0.0.0/0
- Destination Address/Mask: enter the proxy IP address that clients will access
- Service Port: enter the port for this SOCKS proxy instance
- SOCKS Profile: select the previously created SOCKS profile
- VLANs and Tunnels: select the client-facing VLAN
- Source Address Translation: set to None
- Address Translation: enabled
- Port Translation: enabled
Click Finished to complete.

This is all that you really must do to create the SOCKS proxy configuration. The next step is to create the TCP tunnel virtual server(s), and for that, we'll let SSL Orchestrator do the work.

Configuring SSL Orchestrator

You're going to create an L3 Outbound SSL Orchestrator topology.
This will minimally create one TCP forward proxy listener, but you can also create separate listeners for other StartTLS protocols (e.g., SMTP, IMAP, POP3, FTPS). In the interest of brevity, we won't go through the entire topology configuration in detail. For deeper insights into all of the available settings, hop on over to the SSL Orchestrator deployment guide: https://clouddocs.f5.com/sslo-deployment-guide/. The following will serve as a skeleton for the full topology workflow. In the SSL Orchestrator UI, under Configuration, start a new topology configuration.

- Topology Properties: provide a name and select L3 Outbound.
- SSL Configurations: set your CA certificate key chain.
- Services List: create the security services.
- Service Chain List: create the service chain(s) and attach the services as required.
- Security Policy: adjust the security policy as required.
- Interception Rule: select any VLAN (we will modify this later). Optionally select additional L7 Interception Rules for FTP, IMAP, POP3, and SMTP.
- Egress Settings: select SNAT as required, and select gateway settings as required.

Review the settings and then deploy this configuration. If you're on a version of SSL Orchestrator older than 9.0, you will need to disable strictness on the new topology. This is the little lock icon to the far right of the listed topology. Note that you'll need to leave this unlocked, and any changes you make outside of the SSL Orchestrator configuration could be overwritten if you re-deploy the topology. In versions 9.0 and higher, the strictness lock has been removed, so you can freely make the required SOCKS proxy modifications.

Once the SSL Orchestrator topology is deployed, head over to Local Traffic – Virtual Servers – Virtual Server List in the BIG-IP UI. You will potentially see a lot of new virtual servers here. Most will have names that correspond to the security services you created, prefixed with "ssloS_". But you're looking for virtual servers that are prefixed with "sslo_" followed by the name of the topology you just created. For example, if you named the topology "test", you would see a virtual server named "sslo_test-in-t-4". This is the standard transparent proxy (TCP tunnel) virtual. If you opted to create the additional L7 interception rules, you'll see separate virtual servers for each of these. Again for example, "sslo_test-pop3-4" would represent the POP3 tunnel listener.

To enable the SOCKS proxy, edit each of these "sslo_" virtual servers and replace the existing selected VLAN with the TCP tunnel you created in the above SOCKS proxy configuration steps. That should be it. Configure a client to point to your SOCKS proxy and test. That traffic should arrive at the SOCKS proxy virtual server, and then wrap back through the respective TCP tunnel virtual. The SSL Orchestrator configuration on that tunnel will handle decryption and service chaining to the security stack.
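Before moving on to client tests, you can quickly confirm from tmsh that the VLAN swap took effect on each of the generated listeners. The topology name "test" below follows the example above; the exact object paths may vary by version, so substitute your own topology and tunnel names.

# List the generated virtual servers (they may live inside an application-service folder, depending on version)
tmsh list ltm virtual recursive | grep '^ltm virtual sslo_'
# Then confirm the SOCKS TCP tunnel (not the original client VLAN) is attached to each, for example:
tmsh list ltm virtual sslo_test.app/sslo_test-in-t-4 vlans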
Testing your SOCKS Proxy

Now, if you're reading this article, you're either the highly motivated, super inquisitive type, which is awesome by the way, and/or have a real need to implement a SOCKS proxy for your decrypted malware security stack. For the latter, you probably already have a set of tools that needs a proxy to get to the Internet, so you can use those to test. But if you don't, I've taken the liberty of adding a few below to get you started. Nothing fancy here, just some simple command line utilities to test your new SSL Orchestrator SOCKS proxy implementation.

Let's assume for the examples below that the SOCKS proxy is configured to listen at 10.1.10.150:2222. We'll use the set of test services at rebex.net to try these out. When a password is required, it's 'password'.

SSH:
ssh -o ProxyCommand='nc -X5 -x 10.1.10.150:2222 %h %p' demo@test.rebex.net

SFTP:
sftp -o ProxyCommand='nc -v -x10.1.10.150:2222 %h %p' demo@test.rebex.net

HTTP and HTTPS:
curl -vk --socks5 10.1.10.150:2222 https://www.example.com

FTP and FTPS (implicit):
curl --socks5 10.1.10.150:2222 -1 -v --disable-epsv --ftp-skip-pasv-ip -u demo --ftp-ssl -k ftps://test.rebex.net:990/pub/example/readme.txt -o readme.txt

Caveats

As you can tell, we're just swapping out the HTTP explicit proxy front end for a SOCKS proxy front end. This layer's job is primarily to establish the client's server-side TCP socket (on the client's behalf), and then forward everything through the established tunnel. All of the SSL Orchestrator magic happens inside that tunnel. The SOCKS proxy itself does not introduce any new constraints. It'll work with mostly any protocol that an SSL Orchestrator L3 Outbound topology will handle. The only significant caveats are:

- BIG-IP cannot decrypt outbound SSH (and SFTP). It can, however, be service chained.
- BIG-IP's FTP profile does not support passive mode FTP through a SOCKS proxy configuration. To get FTP traffic to flow, remove the FTP virtual server configuration from the SSL Orchestrator topology (sslo_xxxx-ftp-4). This removes the FTP profile and allows the use of SSL Orchestrator as a SOCKS proxy for FTP. Note also that for a standard explicit outbound SSL Orchestrator topology, passive FTP works correctly with the FTP virtual server and FTP profile. Active mode FTP is not supported in any outbound proxy configuration.

Resources

Below is a small list of resources to further your SSL Orchestrator and SOCKS proxy education:

- Not that you'll need any of this to implement the above solution, but if you just want to get nerdy with details on how SOCKS works, try this: https://en.wikipedia.org/wiki/SOCKS
- If you have any questions about setting up SSL Orchestrator, the official deployment guide is your friend: https://clouddocs.f5.com/sslo-deployment-guide/
- To see a full list of the test services available at rebex.net, see the following: https://test.rebex.net

Summary

And there you have it. In just a few steps you've configured your SSL Orchestrator solution to work as a SOCKS proxy, and along the way you have hopefully recognized some of the immense flexibility at your command.
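As an addendum to the tests above, one more optional check exercises the DNS resolver attached to the SOCKS profile: curl's socks5-hostname mode hands the hostname to the proxy instead of resolving it locally, so a successful request confirms proxy-side name resolution. The proxy address matches the example above.

# The hostname is resolved by the BIG-IP's DNS resolver, not by the client
curl -vk --socks5-hostname 10.1.10.150:2222 https://www.example.com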
SSL Orchestrator Advanced Use Cases: DNS-over-HTTPS Detection

Introduction

F5 BIG-IP is synonymous with "flexibility". You likely have few other devices in your architecture that provide the breadth of capabilities that come native with the BIG-IP platform. And for each and every BIG-IP product module, the opportunities to expand functionality are almost limitless. In this article series we examine the flexibility options of the F5 SSL Orchestrator in a set of "advanced" use cases.

If you haven't noticed, the world has been steadily moving toward encrypted communications. Everything from web, email, voice, video, chat, and IoT is now wrapped in TLS, and that's a good thing. The problem is, malware - that thing that creates havoc in your organization, that exfiltrates personnel records to the Dark Web - isn't stopped by encryption. TLS 1.3 and multi-factor authentication don't eradicate malware. The only reasonable way to defend against it is to catch it in the act, and an entire industry of security products is designed for just this task. But ironically, encryption makes this hard. You can't protect against what you can't see. F5 SSL Orchestrator simplifies traffic decryption and malware inspection, and dynamically orchestrates traffic to your security stack. But it does much more than that. SSL Orchestrator is built on top of F5's BIG-IP platform, and as stated earlier, is abound with flexibility. Let us now explore one perspective of that flexibility and how SSL Orchestrator can be used to handle DNS-over-HTTPS and DNS-over-TLS.

SSL Orchestrator Use Case: DNS-over-HTTPS and DNS-over-TLS Handling

Despite the rapid evolution of Internet standards, and the increasing amount of encryption, there's one aspect of our daily online world that hasn't really changed that much in its nearly 4 decades of existence. That of course is DNS. We don't tend to think about it often, which is probably why it hasn't evolved as much as other things, but it truly is the heart and backbone of everything we do online. That is, unless you want to memorize "2607:f8b0:400a:0804:0000:0000:0000:2004" as the way to get to Google, you had better have a working DNS.

But DNS is inherently insecure. It's been shown to be vulnerable to all manner of attacks, and for the purposes of this discussion specifically, it also exposes where you're going. That is, while the HTTP payload may be encrypted, there's still that (visible) DNS request that goes out first. That's not to say that there haven't been any improvements, though. DNSSEC was developed to help secure DNS and prevent spoofing, but in the many years since its introduction it still isn't as widespread as we'd have hoped. DNS-over-HTTPS (DoH) and DNS-over-TLS (DoT) are more recent developments that focus mainly on the privacy aspect of DNS communications (or lack thereof). With DoH and DoT, clients and servers forego the typical DNS protocol request over UDP or TCP port 53 and embed the request inside an encrypted HTTPS or pure TLS connection. But... while that sounds pretty cool, there can be additional consequences to encrypting DNS, both good and bad:

- [Good] DoH and DoT can protect against ISPs that use your DNS information for targeted advertising.
- [Bad] DoH and DoT can effectively blind local DNS security and filtering controls. DNS monitoring is often an effective defense against spam and malware infections.
- [Bad] DoH and DoT providers are public services (e.g., Cloudflare and Google), so while DNS is protected from eavesdropping along the way, the providers themselves have access to your DNS requests.
The US National Security Agency (NSA) actually published a warning on this: https://media.defense.gov/2021/Jan/14/2002564889/-1/-1/0/CSI_ADOPTING_ENCRYPTED_DNS_U_OO_102904_21.P...

The basic idea in this document is that while your DNS privacy is a good thing, encrypted DoH and DoT can also mask malware command-and-control. If you strictly follow the NSA guidance, you could either block DoH and/or DoT altogether, or force users to use only a local enterprise DoH resolver. If you don't have access to a local DoH resolver, though, what else could you do? Well, I'm so glad you asked. DoH and DoT are of course encrypted, so unless you disable either protocol at the client, set up a local DoH resolver and force clients to it, or attempt to do encrypted analysis to find DoH traffic, you need to decrypt and inspect. And as it turns out, F5 has an elegant solution to do just that. In this blog post I am going to present a few solutions on how to handle these protocols through SSL Orchestrator decrypted analysis. The protocol implementations of DoH and DoT are a little different, so I will address them separately.

Handling DNS-over-HTTPS

For all intents and purposes, DoH looks and feels like normal HTTPS traffic. There are some semi-unique patterns you might be able to follow to infer that something is DoH rather than regular HTTPS (e.g., packet size, frequency), but it's never going to be completely accurate. You could also potentially filter on the IP addresses of the known public DoH services, but that list is rather large and growing (https://github.com/curl/curl/wiki/DNS-over-HTTPS), and clients can ultimately choose the service they point to. If you're serious about actually detecting DoH traffic with reasonable accuracy, then there's no better way than through decrypted analysis. Now, once you've decrypted HTTPS and detected that it's DoH, there are a number of things you can do:

- Simply detect and block: there are actually three forms of DoH requests - Wireformat GET and POST, and JSON (detailed here: https://developers.cloudflare.com/1.1.1.1/dns-over-https), though the vast majority of clients use the Wireformat GET and POST methods. In any case, this option follows the NSA's first recommendation to simply detect and block DoH requests. A browser that receives a reject on a DoH request will natively revert to normal TCP/UDP DNS, allowing your local DNS security tools to do what they do best.
- Detect and log: you may simply want to log that DoH is passing through your network, who's asking, and what they're asking for, as an extension to your current DNS monitoring protocols.
- Detect and blackhole: also as a possible extension to your current DNS protection protocols, you may need to block some DoH requests from happening. As mentioned earlier, if you simply block DoH, the client will revert to DNS. One of the best ways to block a DoH/DNS request is to provide a good but fake response instead, a technique called "blackholing". In this approach, you can flag specific hostnames (matching on URL categories) to provide a blackhole response.

Let's now look at each of the above options in turn. The first option is actually pretty straightforward. You just need to add the below iRule to an SSL Orchestrator outbound layer 3 topology*, on the "-in-t" TCP tunnel virtual server. On any decrypted HTTPS traffic, the iRule examines the HTTP request methods and headers for the three signature DoH patterns and sends a reject on a match. Easy peasy.
when HTTP_REQUEST priority 750 {
    if { ( [HTTP::method] equals "GET" and [HTTP::header exists "accept"] and [HTTP::header "accept"] equals "application/dns-json" ) or \
         ( [HTTP::method] equals "GET" and [HTTP::header exists "accept"] and [HTTP::header "accept"] equals "application/dns-message" ) or \
         ( [HTTP::method] equals "POST" and [HTTP::header exists "content-type"] and [HTTP::header "content-type"] equals "application/dns-message" ) } {
        reject
    }
}

The second two options above are available in a single iRule here: https://github.com/f5devcentral/sslo-script-tools/tree/main/sslo-dns-over-https-detection. As with the previous, import the iRule to the BIG-IP, then associate that iRule with an SSL Orchestrator outbound layer 3 topology*. You'll need to make a few adjustments in the iRule to suit your environment, and those are all presented as simple static variable assignments:

- static::LOCAL_LOG: enable or disable local Syslog logging (0 = off, 1 = on).
- static::HSL: enable or disable remote high-speed logging (0 = off, 1 = on).
- static::URLDB_LICENSED: set to 1 (on) if you've licensed and provisioned the subscription-based URLDB, or 0 (off) if you have not and want to just use custom URL categories for DoH blackholing.
- static::BLACKHOLE_URLCAT: leave empty to disable, or add a list of URL categories to search for URL-based DoH blackholing (ex. /Common/block-doh-urls).
- static::BLACKHOLE_RTYPE_A: enable or disable blackholing of matched A record requests (0 = off, 1 = on).
- static::BLACKHOLE_RTYPE_AAAA: enable or disable blackholing of matched AAAA record requests (0 = off, 1 = on).
- static::BLACKHOLE_RTYPE_TXT: enable or disable blackholing of matched TXT record requests (0 = off, 1 = on).

You can quickly test the solution by enabling DoH support in a browser. Please refer to the following instructions for each: https://developers.cloudflare.com/1.1.1.1/encryption/dns-over-https/encrypted-dns-browsers/. With the static::LOCAL_LOG value set to 1 (enabled), you can tail the BIG-IP LTM log (tail -f /var/log/ltm) and watch the DoH traffic flow across the SSL Orchestrator topology.

10.1.10.50:59804-104.18.42.171:443 :: DoH (WireFormat POST) Request: A:www.google.com
10.1.10.50:59804-104.18.42.171:443 :: DoH (WireFormat POST) Request: A:encrypted-tbn0.gstatic.com
10.1.10.50:59804-104.18.42.171:443 :: DoH (WireFormat POST) Request: A:fonts.gstatic.com
10.1.10.50:59812-104.18.42.171:443 :: DoH (WireFormat POST) Request: A:www.gstatic.com
10.1.10.50:59812-104.18.42.171:443 :: DoH (WireFormat POST) Request: A:id.google.com
10.1.10.50:59812-104.18.42.171:443 :: DoH (WireFormat POST) Request: A:play.google.com

Handling DNS-over-TLS

DNS-over-TLS (DoT) is essentially plain DNS wrapped in TLS. If you decrypt it, you'll see a regular DNS request. DoT travels on TCP port 853 by standard, so one very simple way to block DoT traffic is to just block that port. But if you want to actually log the DoT requests flowing through the BIG-IP, or blackhole some URLs, you again have to decrypt. The aforementioned DoH logging iRule can also handle DoT requests, inspecting any decrypted TCP:853 traffic: https://github.com/f5devcentral/sslo-script-tools/tree/main/sslo-dns-over-https-detection. The same static variable flags apply.
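Beyond the browser settings linked above, you can also generate DoH and DoT test traffic from the command line on a client routed through the outbound topology. These are optional checks under a couple of assumptions: curl 7.62 or later for the --doh-url option, and the knot-dnsutils package for kdig; the Cloudflare endpoints are just examples of public resolvers.

# DoH: name resolution for this request is performed over DoH (Wireformat), through the topology
curl -vk --doh-url https://cloudflare-dns.com/dns-query https://www.example.com

# DoH: exercise the JSON API directly, which the detection iRule also recognizes
curl -sk -H 'accept: application/dns-json' 'https://cloudflare-dns.com/dns-query?name=www.example.com&type=A'

# DoT: send a query over TLS to port 853
kdig @1.1.1.1 +tls www.example.com A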
You get to explicitly decrypt here, so you get another opportunity to log and control the DNS. DoH server – where the BIG-IP proxies client DoH requests to external TCP or UDP DNS resources. As with the proxy option, you get to decrypt the client's DoH request. You can find out more about both of these capabilities here: https://support.f5.com/csp/article/K05451012 To proxy DoT to DoT, you simply need a TCP:853 virtual server with client and server SSL profiles. There are also various programmatic options to convert DoT to DNS, DNS to DoT, and DNS to DoH. But an important consideration here is that to proxy, you would need clients to direct their DNS/DoT/DoH requests to your local resource. In some cases you can do that through DHCP, or through enterprise browser policy management, but other than simply blocking outbound access to these protocols, it can be non-trivial to prevent clients from trying to use any of the external providers. That is specifically where decrypted analysis can be beneficial. Summary With an iRule and just a few configuration changes we have been able to implement a capability on top of SSL Orchestrator to log and actively control encrypted DNS. Decrypted inspection of DoH and DoT is just one of the many interesting benefits of an SSL Orchestrator solution. A demo of the above is available here: SSL Orchestrator Advanced Use Case: DNS over HTTPS * The iRules will also technically work for outbound layer 2 topologies.
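One practical footnote to the DoT discussion above: if all you want is the simple port-based block described earlier (rather than logging or blackholing), that can be done with a very small iRule on the same "-in-t" topology virtual server. This is only a minimal sketch, and it assumes DoT in your environment rides the standard TCP/853 port; the maintained GitHub iRule referenced above remains the more complete option.

when CLIENT_ACCEPTED priority 750 {
    ## Assumption: this iRule is attached to the SSL Orchestrator "-in-t" topology virtual server
    ## Reject anything destined to TCP/853 (standard DoT)
    if { [TCP::local_port] == 853 } {
        reject
    }
}

Much like a rejected DoH request, a rejected opportunistic DoT client will typically fall back to regular TCP/UDP DNS, where your existing DNS security controls apply; strict-mode DoT clients will simply fail.

SSL Orchestrator Advanced Use Cases: Reducing Complexity with Internal Layered Architecture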
Introduction Sir Isaac Newton said, "Truth is ever to be found in the simplicity, and not in the multiplicity and confusion of things". The world we live in is...complex. No getting around that. But at the very least, we should strive for simplicity where we can achieve it. As IT folk, we often find ourselves mired in the complexity of things until we lose sight of the big picture, the goal. How many times have you created an additional route table entry, or firewall exception, or virtual server, because the alternative meant having a deeper understanding of the existing (complex) architecture? Sure, sometimes it's unavoidable, but this article describes at least one way that you can achieve simplicity in your architecture. SSL Orchestrator sits as an inline point of presence in the network to decrypt, re-encrypt, and dynamically orchestrate that traffic to the security stack. You need rules to govern how to handle specific types of traffic, so you create security policy rules in the SSL Orchestrator configuration to describe and take action on these traffic patterns. It's definitely easy to create a multitude of traffic rules to match discrete conditions, but if you step back and look at the big picture, you may notice that the different traffic patterns basically all perform the same core actions. They allow or deny traffic, intercept or bypass TLS (decrypt/not-decrypt), and send to one or a few service chains. If you were to write down all of the combinations of these actions, you'd very likely discover a small subset of discrete "functions". As usual, F5 BIG-IP and SSL Orchestrator provide some innovative and unique ways to optimize this. And so in this article we will explore SSL Orchestrator topologies "as functions" to reduce complexity. Specifically, you can reduce the complexity of security policy rules, and in doing so, quite likely increase the performance of your SSL Orchestrator architecture. SSL Orchestrator Use Case: Reducing Complexity with Internal Layered Architectures The idea is simple. Instead of a single topology with a multitude of complex traffic pattern matching rules, create small atomic topologies as static functions and steer traffic to the topologies by virtue of "layered" traffic pattern matching. Granted, if your SSL Orchestrator configuration is already pretty simple, then please keep doing what you're doing. You've got this, Tiger. But if your environment is getting complex, and you're not quite convinced yet that topologies as functions is a good idea, here are a few additional benefits you'll get from this topology layering: Dynamic egress selection: topologies as functions can define different egress paths. Dynamic CA selection: topologies as functions can use different local issuing CAs for different traffic flows. Dynamic traffic bypass: certain types of traffic can be challenging to handle internally. For example, mutual TLS traffic can be bypassed globally with the "Bypass on client cert failure" option in the SSL configuration, but bypassing mutual TLS sites by hostname is more complex. A layered architecture can steer traffic (by SNI) through a bypass topology, with a service chain. More flexible pattern recognition: for all of its flexibility, SSL Orchestrator security policy rules cannot catch every possible use case. External traffic pattern recognition, via iRules or CPM (LTM policies) offer near infinite pattern matching options. 
You could, for example, steer traffic based on incoming tenant VLAN or route domain for multi-tenancy configurations. More flexible automation strategies: as iRules, data groups, and CPM policies are fully automate-able across many AnO platforms (ex. AS3, Ansible, Terraform, etc.), it becomes exceedingly easy to automate SSL Orchestrator traffic processing, and removes the need to manage individual topology security policy rules. Hopefully these benefits give you a pretty clear indication of the value in this architecture strategy. So without further ado, let's get started. Configuration Before we begin, I'd like to make note of the following caveats: While every effort has been made to simplify the layered architecture, there is still a small element of complexity. If you are at all averse to creating virtual servers or modifying iRules, then maybe this isn't for you. But as you are reading this in a forum dedicated to programmability, I'm guessing you the reader are ready for a challenge. This is a "field contributed" solution, so not officially supported by F5. This topology layering architecture is applicable to all modern versions of SSL Orchestrator, from 5.0 to 8.0. While topology layering can be used for inbound topologies, it is most appropriate for outbound. The configuration below also only describes the layer 3 implementation. But layer 2 layering is also possible. With this said, there are just a few core concepts to understand: Basic layered architecture configuration - how the pieces fit together The iRules - how traffic moves through the architecture Or the CPM policies - an alternative to iRules Note again that this is primarily useful in outbound topologies. Inbound topologies are typically more atomic on their own already. I will cover both transparent and explicit forward proxy configurations below. Basic layered architecture configuration A layered architecture takes advantage of a powerful feature of the BIG-IP called "VIP targeting". The idea is that one virtual server calls another with negligible latency between the two VIPs. The "external" virtual server is client-facing. The SSL Orchestrator topology virtual servers are thus "internal". Traffic enters the external VIP and traffic rules pass control to any of a number of internal "topology function" VIPs. You certainly don't have to use the iRule implementation presented here. You just need a client-facing virtual server with an iRule that VIP-targets to one or more SSL Orchestrator topologies. Each outbound topology is represented by a virtual server that includes the application server name. You can see these if you navigate to Local Traffic -> Virtual Servers in the BIG-IP UI. So then the most basic topology layering architecture might just look like this: when CLIENT_ACCEPTED { virtual "/Common/sslo_my_topology.app/sslo_my_topology-in-t-4" } This iRule doesn't do anything interesting, except get traffic flowing across your layered architecture. To be truly useful you'll want to include conditions and evaluations to steer different types of traffic to different topologies (as functions). As the majority of security policy rules are meant to define TLS traffic patterns, the provided iRules match on TLS traffic and pass any non-TLS traffic to a default (intercept/inspection) topology. These iRules are intended to simplify topology switching by moving all of the complexity of traffic pattern matching to a library iRule. 
You should then only need to modify the "switching" iRule to use the functions in the library, which all return Boolean true or false results. Here are the simple steps to create your layered architecture: Step 1: Build a set of "dummy" VLANs. A topology must be bound to a VLAN. But since the topologies in this architecture won't be listening on client-facing VLANs, you will need to create a separate VLAN for each topology you intend to create. A dummy VLAN is a VLAN with no interface assigned. In the BIG-IP UI, under Network -> VLANs, click Create. Give your VLAN a name and click Finished. It will ask you to confirm since you're not attaching an interface. Repeat this step by creating unique VLAN names for each topology you are planning to use. Step 2: Build a set of static topologies as functions. You'll want to create a normal "intercept" topology and a separate "bypass" topology, though you can create as many as you need to encompass the unique topology functions. Your intercept topology is configured as such: L3 outbound topology configuration, normal topology settings, SSL configuration, services, service chains No security policy rules - just the ALL rule with TLS intercept action (and service chain), and optionally remove the built-in Pinners rule Attach to a dummy VLAN (a VLAN with no assigned interfaces) Your bypass topology should then look like this: L3 outbound topology configuration, skip the SSL Configuration settings, optionally re-use services and service chains No security policy rules - just the ALL rule with TLS bypass action (and service chain) Attach to a separate dummy VLAN (a VLAN with no assigned interfaces) Note the name you use for each topology, as this will be called explicitly in the iRule. For example, if you name the topology "myTopology", that's the name you will use in each "call SSLOLIB::target" function (more on this in a moment) . If you look in the SSL Orchestrator UI, you will see that it prepends "sslo_" (ex. sslo_myTopology). Don't include the "sslo_" portion in the iRule. Step 3: Import the SSLOLIB iRule (attached here). Name it "SSLOLIB". This is the library rule, so no modifications are needed. The functions within (as described below) will return a true or false, so you can mix these together in your switching rule as needed. Step 4: Import the traffic switching iRule (attached here). You will modify this iRule as required, but the SSLOLIB library rule makes this very simple. Step 5: Create your external layered virtual server. This is the client-facing virtual server that will catch the user traffic and pass control to one of the internal SSL Orchestrator topology listeners. Type: Standard Source: 0.0.0.0/0 Destination: 0.0.0.0/0 Service Port: 0 Protocol: TCP VLAN: client-facing VLAN Address Translation: disabled Port Translation: disabled Default Persistence Profile: ssl iRule: the traffic switching iRule Note that the ssl persistence profile is enabled here to allow the iRules to handle client side SSL traffic without SSL profiles attached. Also make sure that Address and Port Translation are disabled before clicking Finished. Step 6: Modify the traffic switching iRule to meet your traffic matching requirements (see below). You have the basic layered architecture created. The only remaining step is to modify the traffic switching iRule as required, and that's pretty easy too. The iRules I'll repeat, there are near infinite options here. 
At the very least you need to VIP target from the external layered VIP to at least one of the SSL Orchestrator topology VIPs. The iRules provided here have been cultivated to make traffic selection and steering as easy as possible by pushing all of the pattern functions to a library iRule (SSLOLIB). The idea is that you will call a library function for a specific traffic pattern and if true, call a separate library function to steer that flow to the desired topology. All of the build instructions are contained inside the SSLOLIB iRule, with examples. SSLOLIB iRule: https://github.com/f5devcentral/sslo-script-tools/blob/main/internal-layered-architecture/transparent-proxy/SSLOLIB Switching iRule: https://github.com/f5devcentral/sslo-script-tools/blob/main/internal-layered-architecture/transparent-proxy/sslo-layering-rule The function to steer to a topology (SSLOLIB::target) has three parameters: <topology name>: this is the name of the desired topology. Use the basic topology name as defined in the SSL Orchestrator configuration (ex. "intercept"). ${sni}: this is static and should be left alone. It's used to convey the SNI information for logging. <message>: this is a message to send to the logs. In the examples, the message indicates the pattern matched (ex. "SRCIP"). Note, include an optional 'return' statement at the end to cancel any further matching. Without the 'return', the iRule will continue to process matches and settle on the value from the last evaluation. Example (sending to a topology named "bypass"): call SSLOLIB::target "bypass" ${sni} "DSTIP" ; return There are separate traffic matching functions for each pattern: SRCIP IP:<ip/subnet> SRCIP DG:<data group name> (address-type data group) SRCPORT PORT:<port/port-range> SRCPORT DG:<data group name> (integer-type data group) DSTIP IP:<ip/subnet> DSTIP DG:<data group name> (address-type data group) DSTPORT PORT:<port/port-range> DSTPORT DG:<data group name> (integer-type data group) SNI URL:<static url> SNI URLGLOB:<glob match url> (ends_with match) SNI CAT:<category name or list of categories> SNI DG:<data group name> (string-type data group) SNI DGGLOB:<data group name> (ends_with match) Examples: # SOURCE IP if { [call SSLOLIB::SRCIP IP:10.1.0.0/16] } { call SSLOLIB::target "bypass" ${sni} "SRCIP" ; return } if { [call SSLOLIB::SRCIP DG:my-srcip-dg] } { call SSLOLIB::target "bypass" ${sni} "SRCIP" ; return } # SOURCE PORT if { [call SSLOLIB::SRCPORT PORT:5000] } { call SSLOLIB::target "bypass" ${sni} "SRCPORT" ; return } if { [call SSLOLIB::SRCPORT PORT:1000-60000] } { call SSLOLIB::target "bypass" ${sni} "SRCPORT" ; return } # DESTINATION IP if { [call SSLOLIB::DSTIP IP:93.184.216.34] } { call SSLOLIB::target "bypass" ${sni} "DSTIP" ; return } if { [call SSLOLIB::DSTIP DG:my-destip-dg] } { call SSLOLIB::target "bypass" ${sni} "DSTIP" ; return } # DESTINATION PORT if { [call SSLOLIB::DSTPORT PORT:443] } { call SSLOLIB::target "bypass" ${sni} "DSTPORT" ; return } if { [call SSLOLIB::DSTPORT PORT:443-9999] } { call SSLOLIB::target "bypass" ${sni} "DSTPORT" ; return } # SNI URL match if { [call SSLOLIB::SNI URL:www.example.com] } { call SSLOLIB::target "bypass" ${sni} "SNIURLGLOB" ; return } if { [call SSLOLIB::SNI URLGLOB:.example.com] } { call SSLOLIB::target "bypass" ${sni} "SNIURLGLOB" ; return } # SNI CATEGORY match if { [call SSLOLIB::SNI CAT:$static::URLCAT_list] } { call SSLOLIB::target "bypass" ${sni} "SNICAT" ; return } if { [call SSLOLIB::SNI CAT:/Common/Government] } { call SSLOLIB::target "bypass" ${sni} "SNICAT" ; 
return } # SNI URL DATAGROUP match if { [call SSLOLIB::SNI DG:my-sni-dg] } { call SSLOLIB::target "bypass" ${sni} "SNIDGGLOB" ; return } if { [call SSLOLIB::SNI DGGLOB:my-sniglob-dg] } { call SSLOLIB::target "bypass" ${sni} "SNIDGGLOB" ; return } To combine these, you can use simple AND|OR logic. Example: if { ( [call SSLOLIB::DSTIP DG:my-destip-dg] ) and ( [call SSLOLIB::SRCIP DG:my-srcip-dg] ) } Finally, adjust the static configuration variables in the traffic switching iRule RULE_INIT event: ## User-defined: Default topology if no rules match (the topology name as defined in SSLO) set static::default_topology "intercept" ## User-defined: DEBUG logging flag (1=on, 0=off) set static::SSLODEBUG 0 ## User-defined: URL category list (create as many lists as required) set static::URLCAT_list { /Common/Financial_Data_and_Services /Common/Health_and_Medicine } CPM policies LTM policies (CPM) can work here too, but with the caveat that LTM policies do not support URL category lookups. You'll probably want to either keep the Pinners rule in your intercept topologies, or convert the Pinners URL category to a data group. A "url-to-dg-convert.sh" Bash script can do that for you. url-to-dg-convert.sh: https://github.com/f5devcentral/sslo-script-tools/blob/main/misc-tools/url-to-dg-convert.sh As with iRules, infinite options exist. But again for simplicity here is a good CPM configuration. For this you'll still need a "helper" iRule, but this requires minimal one-time updates. when RULE_INIT { ## Default SSLO topology if no rules match. Enter the name of the topology here set static::SSLO_DEFAULT "intercept" ## Debug flag set static::SSLODEBUG 0 } when CLIENT_ACCEPTED { ## Set default topology (if no rules match) virtual "/Common/sslo_${static::SSLO_DEFAULT}.app/sslo_${static::SSLO_DEFAULT}-in-t-4" } when CLIENTSSL_CLIENTHELLO { if { ( [POLICY::names matched] ne "" ) and ( [info exists ACTION] ) and ( ${ACTION} ne "" ) } { if { $static::SSLODEBUG } { log -noname local0. "SSLO Switch Log :: [IP::client_addr]:[TCP::client_port] -> [IP::local_addr]:[TCP::local_port] :: [POLICY::rules matched [POLICY::names matched]] :: Sending to $ACTION" } virtual "/Common/sslo_${ACTION}.app/sslo_${ACTION}-in-t-4" } } The only thing you need to do here is update the static::SSLO_DEFAULT variable to indicate the name of the default topology, for any traffic that does not match a traffic rule. For the comparable set of CPM rules, navigate to Local Traffic -> Policies in the BIG-IP UI and create a new CPM policy. Set the strategy as "Execute First matching rule", and give each rule a useful name as the iRule can send this name in the logs. For source IP matches, use the "TCP address" condition at ssl client hello time. For source port matches, use the "TCP port" condition at ssl client hello time. For destination IP matches the "TCP address" condition at ssl client hello time. Click on the Options icon and select "Local" and "External". For destination port matches the "TCP port" condition at ssl client hello time. Click on the Options icon and select "Local" and "External". For SNI matches, use the "SSL Extension server name" condition at ssl client hello time. For each of the conditions, add a simple "Set variable" action as ssl client hello time. Name the variable "ACTION" and give it the name of the desired topology. Apply the helper iRule and CPM policy to the external traffic steering virtual server. 
The "first" matching rule strategy is applied here, and all rules trigger on ssl client hello, so you can drag them around and re-order as necessary. Note again that all of the above only evaluates TLS traffic. Any non-TLS traffic will flow through the "default" topology that you identify in the iRule. It is possible to re-configure the above to evaluate HTTP traffic, but honestly the only significant use case here might be to allow or drop traffic at the policy. Layered architecture for an explicit forward proxy You can use the same logic to support an explicit proxy configuration. The only difference will be that the frontend layered virtual server will perform the explicit proxy functions. The backend SSL Orchestrator topologies will continue to be in layer 3 outbound (transparent proxy) mode. Normally SSL Orchestrator would build this for you, but it's pretty easy and I'll show you how. You could technically configure all of the SSL Orchestrator topologies as explicit proxies, and configure the client facing virtual server as a layer 3 pass-through, but that adds unnecessary complexity. If you also need to add explicit proxy authentication, that is done in the one frontend explicit proxy configuration. Use the settings below to create an explicit proxy LTM configuration. If not mentioned, settings can be left as defaults. Under SSL Orchestrator -> Configuration in the UI, click on the gear icon in the top right corner. This will expose the DNS resolver configuration. The easiest option here is to select "Local Forwarding Nameserver" and then enter the IP address of the local DNS service. Click "Save & Next" and then "Deploy" when you're done. Under Network -> Tunnels in the UI, click Create. This will create a TCP tunnel for the explicit proxy traffic. Profile: select tcp-forward Under Local Traffic -> Profiles -> Services -> HTTP in the UI, click Create. This will create the HTTP explicit proxy profile. Proxy Mode: Explicit Explicit Proxy: DNS Resolver: select the ssloGS-net-resolver Explicit Proxy: Tunnel Name: select the TCP tunnel created earlier Under Local Traffic -> Virtual Servers, click Create. This will create the client-facing explicit proxy virtual server. Type: Standard Source: 0.0.0.0/0 Destination: enter an IP the client can use to access the explicit proxy interface Service Port: enter the explicit proxy listener port (ex. 3128, 8080) HTTP Profile: HTTP explicit profile created earlier VLANs and Tunnel Traffic: set to "Enable on..." and select the client-facing VLAN Address Translation: enabled Port Translation: enabled Under Local Traffic -> Virtual Servers, click Create again. This will create the TCP tunnel virtual server. Type: Standard Source: 0.0.0.0/0 Destination: 0.0.0.0/0 Service Port: * VLANs and Tunnel Traffic: set to "Enable on..." and select the TCP tunnel created earlier Address Translation: disabled Port Translation: disabled iRule: select the SSLO switching iRule Default Persistence Profile: select ssl Note, make sure that Address and Port Translation are disabled before clicking Finished. Under Local Traffic -> iRules, click Create. This will create a small iRule for the explicit proxy VIP to forward non-HTTPS traffic through the TCP tunnel. Change "<name-of-TCP-tunnel-VIP>" to reflect the name of the TCP tunnel virtual server created in the previous step. when HTTP_REQUEST { virtual "/Common/<name-of-TCP-tunnel-VIP>" [HTTP::proxy addr] [HTTP::proxy port] } Add this new iRule to the explicit proxy virtual server. 
To test, point your explicit proxy client at the IP:port defined above and give it a go. HTTP and HTTPS explicit proxy traffic arriving at the explicit proxy VIP will flow into the TCP tunnel VIP, where the SSLO switching rule will process traffic patterns and send to the appropriate backend SSL Orchestrator topology-as-function. Testing and Considerations Assuming you have the default topology defined in the switching iRule's RULE_INIT, and no traffic matching rules defined, all traffic from the client should pass effortlessly through that topology. If it does not:
Ensure the name defined in the static::default_topology variable is the actual name of the topology, without the prepended "sslo_".
Enable debug logging in the iRule and observe the LTM log (/var/log/ltm) for any anomalies.
Worst case, remove the client-facing VLAN from the frontend switching virtual server and attach it to one of your topologies, effectively bypassing the layered architecture. If traffic does not pass in this configuration, then it cannot in the layered architecture, and you need to troubleshoot the SSL Orchestrator topology itself. Once you have that working, put the dummy VLAN back on the topology and add the client-facing VLAN to the switching virtual server.
Considerations The above provides a unique way to solve for complex architectures. There are, however, a few minor considerations: This is defined outside of SSL Orchestrator, so it would not be included in the internal HA sync process. However, this architecture places very little administrative burden on the topologies directly. It is recommended that you create and sync all of the topologies first, then create the layered virtual server and iRules, and then manually sync the boxes. If you make any changes to the switching iRule (or CPM policy), that should not affect the topologies. You can initiate a manual BIG-IP HA sync to copy the changes to the peer. If upgrading to a new version of SSL Orchestrator (only), no additional changes are required. If upgrading to a new BIG-IP version, it is recommended to break HA (SSL Orchestrator 8.0 and below) before performing the upgrade. The external switching virtual server and iRules should migrate natively. Summary And there you have it. In just a few steps you've been able to reduce complexity and add capabilities, and along the way you have hopefully recognized the immense flexibility at your command.
SSL Orchestrator Advanced Use Cases: Fun with SaaS Tenant Isolation
Introduction Introduced in SSL Orchestrator 10.1, you can now seamlessly integrate Microsoft Office 365 Tenant Restrictions as a service function in the decrypted service chain. Before I go any further, you may be asking, what exactly are Tenant Restrictions? And why would I want to use them? Well, I'm so glad you asked. Think of a tenant restriction as a way to isolate a tenant (i.e., a set of enterprise accounts) on a SaaS platform. This is super useful for SaaS resources like Office 365, where your enterprise users will have corporate access to Office 365, but may also have their own personal O365 accounts, or accounts with another corporate entity. An Office 365 Tenant Restriction essentially prevents users from accessing any O365 accounts other than those specifically allowed. Most important, a tenant restriction is a really good way to prevent some forms of data exfiltration - i.e., by preventing users from saving proprietary information to a personal or other non-sanctioned Office 365 account. A tenant restriction essentially works by injecting one or more HTTP headers into an HTTP request for a specific set of request URLs, and enabling this feature in SSL Orchestrator 10.1 is super easy. You just create the service object, apply that service to a service chain, and put that service chain in the path of decrypted traffic. It's important to highlight here that injecting an HTTP header requires decrypted access to the traffic, so your SSL Orchestrator policy configuration must put this in a decrypted (intercept) path. To give you a quick example, for Office 365, you'll look for the following three request URLs: "login.microsoftonline.com" "login.microsoft.com" "login.windows.net" And if you see one of these, you'll inject the following two headers: "Restrict-Access-To-Tenants" "<permitted-tenant-list>" "Restrict-Access-Context" "<directory-id>" The above is all described in great and glorious detail here: https://learn.microsoft.com/en-us/azure/active-directory/manage-apps/tenant-restrictions. Okay, so now that we've established what it is, and how it works, we can get to the FUN part of this article. As it turns out, there are actually a few SaaS applications that support "tenant isolation", and they all do so in more or less the same way - inject one or more HTTP headers for a specified set of URLs. The common ones include: Webex Google G-Suite Dropbox Youtube Slack I'm sure there are more out there, but the above are what we'll focus on here. Let's see exactly how to set this up. SSL Orchestrator Advanced Use Case: Fun with SaaS Tenant Isolation The basic idea is to perform the following three steps: Deploy Office 365 Tenant Restrictions in SSL Orchestrator 10.1 Replace the iRule that the service generates with a new iRule Configure that iRule with the SaaS tenant isolation values that you require Step 1: Deploy Office 365 Tenant Restrictions in SSL Orchestrator 10.1 Starting in SSL Orchestrator 10.1, in the UI go to Services and click the Add button. Navigate to the F5 tab, click on "Office 365 Tenant Restrictions" and then hit the Add button. You won't actually be using the iRule that this service creates, so put anything you want into the two required header fields and click Save & Next. On the subsequent Service Chains List page, select or create a service chain and add this new service to that service chain. Click Save & Next. At this point, just make sure this service chain is assigned to a security policy rule that intercepts (decrypts) traffic.
Deploy and move on to the next step. Step 2: Replace the iRule that the service generates with a new iRule Head over to the following Github repo to get the new iRule: https://github.com/f5devcentral/sslo-script-tools/tree/main/saas-tenant-isolation. In the BIG-IP UI, under Local Traffic, iRules, create a new iRule and paste the content there. Now back in the SSL Orchestrator UI, go to the Services tab, select your Office 365 Tenant Restrictions service to edit, and then at the bottom of the page, replace the selected iRule with your new tenant isolation iRule. Deploy and move on to the next step. Step 3: Configure the new iRule with the SaaS tenant isolation values that you require You're now going to need to make a few modifications to the new iRule. In the BIG-IP UI, under Local Traffic, iRules, click on your new tenant isolation iRule to edit. The iRule comes pre-built to support all of the aforementioned SaaS applications. Step 3a: In the RULE_INIT section, there is a separate array defined for each SaaS application. Inside each array is the required tenant isolation header(s) for that application. If you need to deploy tenant isolation headers for a given SaaS application, replace the "enter-value-here" string next to each header name with the required value. I've provided additional references below to help with those values. Step 3b: At the bottom of the RULE_INIT section, there is a set of static variables that enable or disable header injection for each SaaS application. Set the value to 1 to enable, or 0 to disable. That's basically it. If you've configured the header values correctly, and the service is in the direct path of decrypted outbound traffic through SSL Orchestrator, then the iRule should be doing its job to prevent data exfiltration in your environment. Good job! Summary With an iRule and just a few configuration changes we have been able to implement a capability on top of SSL Orchestrator to extend the built-in Office 365 Tenant Restrictions service to support a set of additional SaaS applications. This flexibility is just one of the many interesting benefits of an SSL Orchestrator solution. Additional Resources As previously mentioned, there are more than just the described SaaS applications out in the wild that support tenant isolation, but these are the most common. If you know of any that might be useful to add, please let me know. Below is a quick reference for each of the included SaaS applications and the required match-on and send values.
Office 365
Ref: https://learn.microsoft.com/en-us/azure/active-directory/manage-apps/tenant-restrictions
Match on: "login.microsoftonline.com" "login.microsoft.com" "login.windows.net"
Send: "Restrict-Access-To-Tenants" "<permitted-tenant-list>" "Restrict-Access-Context" "<directory-id>"
_________________________________________________
Webex
Ref: https://help.webex.com/en-us/article/m0jby2/Configure-a-list-of-allowed-domains-to-access-Webex-while-on-your-corporate-network-
Match on: identity.webex.com identity-eu.webex.com idbroker.webex.com idbroker-secondary.webex.com idbroker-b-us.webex.com idbroker-au.webex.com atlas-a.wbx2.com
Send: CiscoSpark-Allowed-Domains = <enterprise domain>
Ex.
CiscoSpark-Allowed-Domains: example.com,example1.com,example2.com
_________________________________________________
Google / G-Suite
Ref: https://support.google.com/a/answer/1668854?hl=en
Match on: *.google.com *.googleusercontent.com *.gstatic.com
Send: X-GoogApps-Allowed-Domains = <comma separated list of allowed domains>
Ex. X-GoogApps-Allowed-Domains: mydomain1.com, mydomain2.com
_________________________________________________
Dropbox
Ref: https://learn.akamai.com/en-us/webhelp/enterprise-threat-protector/enterprise-threat-protector/GUID-634616ED-28D3-4B23-B2C7-1E12ABE47E4A.html
Match on: dropbox.com *.dropbox.com
Send: X-Dropbox-allowed-Team-Ids = <dropbox team ID>
_________________________________________________
Youtube
Ref: https://support.google.com/a/answer/6214622?hl=en
Match on: www.youtube.com m.youtube.com youtubei.googleapis.com youtube.googleapis.com www.youtube-nocookie.com
Send: YouTube-Restrict = Strict (provides access to a limited collection of video content; this is the most restricted mode) or Moderate (provides some restricted access but is less strict than Strict mode, allowing users to access a larger collection of video content)
_________________________________________________
Slack
Ref: https://slack.com/help/articles/360024821873-Approve-Slack-workspaces-for-your-network
Match on: *.slack.com
Send: X-Slack-Allowed-Workspaces-Requester = <workspace or org ID (one value)> X-Slack-Allowed-Workspaces = <comma separated list of allowed workspaces and/or org IDs>
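To make the injection mechanics more concrete, below is a minimal, hypothetical iRule sketch of what the Office 365 case amounts to. It is not the packaged service or the GitHub iRule referenced above (use those for a supported deployment); it simply illustrates the match-on/send model from the quick reference, and the tenant list and directory ID values shown are placeholders you would replace with your own.

when HTTP_REQUEST priority 750 {
    ## Placeholder values -- substitute your own permitted tenant list and directory ID
    set tenants "example.com,example.onmicrosoft.com"
    set dir_id "00000000-0000-0000-0000-000000000000"

    ## Only touch the Office 365 login endpoints (traffic is already decrypted by SSL Orchestrator upstream)
    switch -- [string tolower [HTTP::host]] {
        "login.microsoftonline.com" -
        "login.microsoft.com" -
        "login.windows.net" {
            HTTP::header remove "Restrict-Access-To-Tenants"
            HTTP::header insert "Restrict-Access-To-Tenants" $tenants
            HTTP::header remove "Restrict-Access-Context"
            HTTP::header insert "Restrict-Access-Context" $dir_id
        }
    }
}

The same pattern - match the vendor's hostnames, then insert the vendor's header(s) - applies to each of the other SaaS applications in the quick reference above.

Orchestrated Infrastructure Security - Change at the Speed of Business - Cisco Firepower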
Editor's Note: The F5 Beacon capabilities referenced in this article hosted on F5 Cloud Services are planning a migration to a new SaaS Platform - Check out the latest here Introduction This article is part of a series on implementing Orchestrated Infrastructure Security. It includes High Availability, Central Management with BIG-IQ, Application Visibility with Beacon and the protection of critical assets using F5 Advanced WAF, Protocol Inspection (IPS) with AFM as well as leading Security Solutions like Cisco Firepower and WSA. It is assumed that SSL Orchestrator is already deployed, and basic network connectivity is working. If you need help setting up SSL Orchestrator for the first time, refer to the Dev/Central article series on Implementing SSL Orchestrator here or the CloudDocs Deployment Guide here. This article focuses on using SSL Orchestrator as a tool to assist with simplifying Change Management processes and procedures, and shortening the duration of the entire process. Configuration files for Cisco Firepower can be downloaded from GitLab here. Please forgive me for using SSL and TLS interchangeably in this article. Click here for a demo video of this Dev/Central article This article is divided into the following high level sections:
·Create a new Topology to perform testing
·Monitor Firepower statistics – change the weight ratio – check Firepower stats again
·Remove a single Firepower device from the Service
·Perform maintenance on the Firepower device
·Add the Firepower device to the new Topology
·Test functionality with a single client
·Add the Firepower device back to the original Topology
·Test functionality again
Create a new Topology to perform testing A new Topology will be used to safely test the Service after maintenance is performed. The Topology should be similar to the one used for production traffic. This Topology can be re-used in the future. From the BIG-IP Configuration Utility select SSL Orchestrator > Configuration. Click Add under Topologies. Scroll to the bottom of the next screen and click Next. Give it a name, Topology_Staging in this example. Select L2 Inbound as the Topology type then click Save & Next. For the SSL Configurations you can leave the default settings. Click Save & Next at the bottom. Click Save & Next at the bottom of the Services List. Click the Add button under Services Chain List. A new Service Chain is needed so we can remove Firepower1 from the Production Service and add it here. Give the Service Chain a name, Staging_Chain in this example. Click Save at the bottom. Note: The Service will be added to this Service Chain later. Click Save & Next. Click the Add button on the right to add a new rule. For Conditions select Client IP Subnet Match. Enter the Client IP and mask, 10.1.11.52/32 in this example. Click New to add the IP/Subnet. Set the SSL Proxy Action to Intercept. Set the Service Chain to the one created previously. Click OK. Note: This rule is written so that a single client computer (10.1.11.52) will match and can be used for testing. Select Save & Next at the bottom. For the Interception Rule set the Source Address to 10.1.11.52/32. Set the Destination Address/Mask to 10.4.11.0/24. Set the port to 443. Select the VLAN for your Ingress Network and move it to Selected. Set the L7 Profile to Common/http. Click Save & Next. For Log Settings, scroll to the bottom and select Save & Next. Click Deploy. 
Monitor Firepower statistics – change the weight ratio – check Firepower statistics again Check the statistics on the Firepower device we will be performing maintenance on. It's "Firepower1" in this example. Connect to the CLI via SSH. At the prompt enter 'capture-traffic'. Select the correct 'inlineset' (2 in this example) and hit Enter for no tcpdump options:
> capture-traffic
Please choose domain to capture traffic from:
0 - management0
1 - inlineset1 inline set
2 - inlineset2 inline set
Selection? 2
Please specify tcpdump options desired.
(or enter '?' for a list of supported options)
Options:
You should see an output similar to the following: This Firepower device is actively processing connections. Change the Weight Ratio Back to the SSL Orchestrator Configuration Utility. Click SSL Orchestrator > Configuration > Services > then the Service name, ssloS_Firepower in this example. Click the pencil icon to edit the Service. Click the pencil icon to edit the Network Configuration for Firepower2. Set the ratio to 65535 and click Done. Click Save & Next at the bottom. Click OK if presented with the following warning. Click Deploy. Click OK when presented with the Success message. Check Firepower Statistics Again Check the statistics on "Firepower1" again. With the Weight Ratio change there should be little to no active connections. It should look like the following: Note: The connections above represent the health checks from SSL Orchestrator to the inline Service. Remove a single Firepower device from the Service Back to the SSL Orchestrator Configuration Utility. Click SSL Orchestrator > Configuration > Services > then the Service name, ssloS_Firepower in this example. Click the pencil icon to edit the Service. Under Network Configuration, delete Firepower1. Click Save & Next at the bottom. Click OK if presented with the following warning. Click Deploy. Click OK when presented with the Success message. Perform maintenance on the Firepower device At this point Firepower1 has been removed from the Incoming_Security Topology and is no longer handling production traffic. Firepower2 is now handling all of the production traffic. We can now perform a variety of maintenance tasks on Firepower1 without disrupting production traffic. When done with the task(s) we can then safely test/verify the health of Firepower1 prior to moving it back into production. Some examples of maintenance tasks:
·Perform a software upgrade to a newer version.
·Make policy changes and verify they work as expected.
·Physically move the device.
·Replace a hard drive, fan, and/or power supply.
Add the Firepower device to the new Topology This will allow us to test its functionality with a single client computer, prior to moving it back to production. From the SSL Orchestrator Configuration Utility click SSL Orchestrator > Configuration > Topologies > sslo_Topology_Staging. Click the pencil icon on the right to edit the Service. Click Add Service. Select the Cisco Firepower Threat Defense Inline Layer 2 Service and click Add. Give it a name or leave the default. Click Add under Network Configuration. Set the FROM and TO VLANS to the following and click Done. Click Save at the bottom. Click the Service Chain icon. Click the Staging_Chain. Move the CSCO Service from Available to Selected and click Save. Click OK. Click Deploy. Click OK. Test functionality with a single client We created a policy with source IP = 10.1.11.52 to use the new Firepower Service that we just performed maintenance on. 
Go to that client computer and verify that everything is still working as expected. As you can see this is the test client with IP 10.1.11.52. The page still loads for one of the web servers. You can view the Certificate and see that it is not the same as the Production Certificate. Add the Firepower device back to the original Topology From the SSL Orchestrator GUI select SSL Orchestrator > Configuration > Service Chains. Select the Staging_Chain. Select ssloS_CSCO on the right and click the left arrow to remove it from Selected. Click Deploy when done. Click OK. Click OK to the Success message. From the SSL Orchestrator Guided Configuration select SSL Orchestrator > Configuration > Services. Select the CSCO Service and click Delete. Click OK to the Warning. When that is done click the ssloS_Firepower Service. Click the Pencil icon to edit the Service. Under Network Configuration click Add. Set the Ratio to the same value as Firepower2, 65535 in this example. Set the From and To VLANs to the following and click Done. Click Save & Next at the bottom. Click OK. Click Deploy. Click OK. Test functionality again Make sure Firepower1 is working properly. To ensure that everything is working as expected you can view the Statistics on Firepower1 again. This Firepower device is actively processing connections. Repeat these steps to perform maintenance on the other Firepower device (not covered in this guide):
·Create a new Topology to perform testing
·Monitor Firepower statistics – change the weight ratio – check Firepower stats again
·Remove a single Firepower device from the Service
·Perform maintenance on the Firepower device
·Add the Firepower device to the new Topology
·Test functionality with a single client
·Add the Firepower device back to the original Topology
·Test functionality again
SSL Orchestrator Advanced Use Cases: Inbound Authentication
Introduction In your many adventures as an IT pro, you've undoubtedly come across the term "Swiss Army Knife" when describing the F5 BIG-IP. You don't have to be an expert at F5 products...or Swiss Army knives...to understand what this means. The term itself ubiquitously describes the idea of versatility, the ability to solve any problem with one of many tools included in a single shiny package. Now of course the naysayers will argue that this versatility breeds complexity. And while there's no argument that a BIG-IP can be complex, just take a look at your current network and security architectures and count how many different tools are used to solve a single set of challenges. The subtle reality is that there's really no such thing as "one-size-fits-all". Homogenizing technologies like the various public cloud offerings will give you "good enough" capabilities, but then you have to ask yourself, is my competitor's good enough the same as my good enough? Do we really have the same exact challenges? This is where versatility can be a critical advantage. Versatility, for example, can help to stop zero-day attacks before your security products have a chance to roll out their own solutions. Versatility can solve complex software issues that might otherwise require a multitude of expensive vendor tools. And versatility can very often create capabilities (application, authentication, security, etc.), where no formal vendor solution exists. In this article I'll be addressing a specific set of BIG-IP (versatility) characteristics: authentication and orchestration. And in doing so, I will also be showing you some powerful capabilities that you probably didn't know were there. Let's get started! SSL Orchestrator Use Case: Inbound Authentication The basic premise of this use case is that an SSL Orchestrator security policy is built on top of a set of "stateless" Access per-session and per-request policies. Access Policy Manager (APM) is the module you use on a BIG-IP to perform client authentication, and this requires "stateful" per-session and per-request policies. Therefore, as an application virtual server can only contain ONE access policy, the APM and SSL Orchestrator policies cannot coexist. In other words, you cannot add APM authentication to an SSL Orchestrator virtual server (or SSL Orchestrator security policy to an APM virtual server). SSL Orchestrator technically allows for authentication in outbound (forward proxy) topologies, because the explicit or transparent forward proxy authentication policy does not sit on the same virtual server as the SSL Orchestrator security policy. What we're focusing on here, though, is inbound (reverse proxy) authentication where there's generally just the one application virtual. There are fundamentally two ways to address this challenge: Layering virtual servers - often referred to as "VIP targeting", or "VIP-target-VIP", this is where one (external) virtual server uses an iRule command to push traffic to another internal virtual server. This is the simple approach. You put your authentication policy and client-side SSL offload on the external virtual, and an iRule to do the VIP targeting. The targeted internal virtual contains the SSL Orchestrator security policies, the application server pool, and optionally server SSL if you need to re-encrypt. 
figure: apm-sslo-vip-target Connector profile - a connector profile is a proxy element that was added to BIG-IP in 14.1, and that inserts itself in the client-side proxy flow after layer 5/6 (SSL decryption) and before layer 7 (HTTP). The connector is flow-based, so can be assigned once at flow initiation. Essentially, the connector can "tee" traffic out of the original proxy flow, and then back. The connector itself points to an internal virtual server that can perform any number of functions before returning back to the original proxy flow. figure: apm-sslo-connector For those of you that have spent any time digging around in the guts of an SSL Orchestrator configuration, you may recognize the connector profile. The connector was specifically created for SSL Orchestrator to handle third-party security service insertion. This is the thing that tees decrypted traffic off to the security devices. The connector is fundamentally an LTM object, but with LTM you can attach a single connector to a virtual server. In other words, you can attach a single security device to an LTM virtual server. SSL Orchestrator gives you dynamic assignment of multiple connectors (the service chain), a robust security policy (that attaches the flow to the server chain), dynamic decryption, and a guided configuration user interface to build all of this coolness. In the context of this use case, we'll attach a connector profile to the APM application virtual that points to an internal virtual, and that internal virtual will contain the SSL Orchestrator security policy. It is important to note here that the following solutions will minimally require: LTM base + APM add-on SSL Orchestrator You will be using the SSL Orchestrator "Existing Application" topology option here. This creates the security policy, services and service chain, without also creating the virtual servers and SSL. We'll leave application traffic management and decryption to the APM virtual server. Before I dig into each of these options, let's understand why you would select one over the other as both have pros and cons. The VIP target solution is fundamentally easy. It's two virtual servers and a simple VIP target iRule. However, there's a tiny bit of overhead in a VIP target as you engage the TCP proxy twice. And with multiple applications, a VIP target isn't really re-usable. You have to create a separate virtual server pair for each application. The frontend virtual contains the client-facing destination IP, VLAN, client SSL, and iRule. The internal virtual contains the SSL Orchestrator security policy, application pool, and optional server SSL. The connector solution is re-usable. You simply attach the same connector profile to each application virtual server. It's also going to be slightly more efficient than the VIP target. However, the connector configuration is going to be more complex. Inbound Authentication through VIP targeting We will start with the easiest option first. Before doing anything else, navigate to SSL Orchestrator and create an "Existing Application" topology. Here you'll define the security services, service chain(s), and a security policy. On completion you'll have two "stateless" access policies that will get attached to one of the virtual servers. Create a client SSL profile - assuming you're building an HTTPS site, you'll need a client SSL profile to perform HTTPS decryption. 
Optionally create a server SSL profile - if you're going to re-encrypt to the application servers, you can either create a custom server SSL profile, or just use the built-in "serverssl" profile. Create the application pool - this is the pool that sends traffic to the application servers. Create the internal virtual server - this is the virtual server that will contain the SSL Orchestrator security policy and application pool. Type: Standard Source: 0.0.0.0/0 Destination: 0.0.0.0/0 Port: * SSL Profile (Server): optional server SSL profile VLAN and Tunnel Traffic: enabled on (empty) Source Address Translation: SNAT as required Address/Port Translation: enabled Access Profile: SSL Orchestrator base policy (ssloDefault_accessProfile) Per-Request Policy: select the SSL Orchestrator security policy Default Pool: select the application pool Create the VIP target iRule - the iRule will pass the flow from the external to the internal virtual server: when ACCESS_ACL_ALLOWED { ## Enter the full name and path of the internal virtual server here virtual "/Common/internal-vip" } Create the authentication per-session access policy - this is a standard APM authentication per-session access policy, and can be anything you need. Create the client-facing external virtual server - this is the application virtual server that the client will communicate with directly. Type: Standard Source: 0.0.0.0/0 Destination: enter the IP address clients will use to access the application Port: enter the port for this application SSL Profile (Client): select the client SSL profile VLAN and Tunnel Traffic: enabled on the client-facing VLAN Address/Port Translation: disabled Access Profile: APM authentication policy iRule: select the VIP target iRule That's it. Client traffic will arrive via HTTPS to the external virtual server, get decrypted by the client SSL profile, and then pass to the authentication access profile. The client authenticates, and then the iRule passes the flow to the internal virtual server. The internal virtual server contains the SSL Orchestrator security policy, so decrypted traffic flows to the security services, returns to the BIG-IP, and then flows out to the application servers. Inbound Authentication through a Connector The connector profile is at the heart of SSL Orchestrator and how it drives traffic to security devices. But we're going to use a connector here in a novel way. We're going to create a connector that points to an internal virtual server, and that virtual server will contain the SSL Orchestrator security policy (see image above). It is effectively "tee-ing" the traffic out of the original proxy flow, across the security stack, and then back into the flow. The beauty here is that, aside from being slightly more efficient than a VIP target, the connector is re-usable across multiple APM application virtual servers. It's worth noting here that the traffic to the SSL Orchestrator security policy will have already been decrypted at the APM virtual, so the security policy should not contain rules specific to TLS handling (i.e. SSL bypass). But as we're talking about inbound traffic, it's very likely you won't be needing any of that complexity in the security policy anyway. The objective of the security policy here is to pass decrypted traffic to a service chain of security devices. As with the VIP target approach, first create an SSL Orchestrator "Existing Application" topology. 
This creates the security policy, services and service chain, without also creating the virtual servers and SSL. Let's build the connector configuration, which includes three things: Create a Service profile - a Service profile essentially defines the type of connector, and how traffic is processed. Here we will be using the F5 Module service. Navigate to Local Traffic :: Profiles :: Other :: Service, and click create. Give it a name and select F5 Module as the Type. Create the internal virtual server - the internal virtual server will host the SSL Orchestrator security policy: Type: Internal HTTP Profile (client): http Service Profile: select the service profile Access Profile: select the SSL Orchestrator profile (ssloDefault_accessProfile) Per-Request Policy: select the SSL Orchestrator security policy Create the connector profile - navigate to Local Traffic :: Profiles :: Other :: Connector, and click create. Simply select the internal virtual server here. To make all of the above slightly easier, you can simply run the following commands in a BIG-IP shell: tmsh create ltm profile service sslo-service type f5-module tmsh create ltm virtual sslo-internal-vip internal profiles add { http sslo-service } tmsh create ltm profile connector sslo-connector entry-virtual-server sslo-internal-vip Note that prior to BIG-IP 16.0, the Access Profile selection won't be available in the UI for Internal virtual servers, but you can still add via TMSH: tmsh create ltm virtual sslo-internal-vip internal profiles add { http sslo-service ssloDefault_accessProfile } per-flow-request-access-policy [name of policy] Example: tmsh create ltm virtual sslo-internal-vip internal profiles add { http sslo-service ssloDefault_accessProfile } per-flow-request-access-policy ssloP_sslotest.app/ssloP_sslotest_per_req_policy Now just create your APM application virtual server as usual, including client-facing destination IP/port, VLAN, client (and optional server) SSL, APM authentication policy, SNAT (as required), and the application pool. On top of that, attach the connector profile in the field labeled Connector Profile. For each additional APM application virtual server, you can re-use this same connector profile. Client traffic will arrive via HTTPS to the APM virtual server, get decrypted by the client SSL profile, pass through the connector, and then to the authentication profile. Note that there's one other subtle difference between these methods that I didn't touch on earlier, and that's the order of events. In the VIP target option, authentication is attempted and completed before any traffic passes to the SSL Orchestrator security policy. So the security devices only see application traffic flows. In the connector option, authentication is engaged after the connector, so the security policy and devices see the entire authentication process. In this case, they will see the APM /my.policy redirects and the APM session cookies. Summary The connector profile presents a lot of really interesting capabilities, even beyond what we've seen here. For example, anywhere that you may have some mutual exclusivity, like APM and ASM policies on a virtual server, you could potentially use the connector attached to an APM virtual to pass traffic to a WAF policy. The connector basically gives you a single "tee" for free in LTM. For multiple connectors, dynamic connector assignment, dynamic decryption, and a robust policy to handle that assignment, you'd use the SSL Orchestrator. 
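If you want to quickly confirm that the connector tee is actually engaging, one simple (and admittedly assumption-laden) trick is to drop a temporary logging iRule on the internal virtual server created above, "sslo-internal-vip" in the tmsh example. Anything logged there has already been decrypted by the APM virtual and is being steered through the connector. This is purely a debugging aid, not part of the formal solution; remove it once you've verified the flow, as local logging is not appropriate for production traffic volumes.

when HTTP_REQUEST {
    ## Temporary visibility only: confirm decrypted application traffic is traversing the connector tee
    log local0. "SSLO connector tee: [IP::client_addr] -> [HTTP::host][HTTP::URI]"
}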
In either case, whether using the VIP target or connector approach to inbound authentication with SSL Orchestrator, hopefully you can see some of the immense versatility at your command.