Implementing SSL Orchestrator - High Level Considerations
Introduction
This article is the beginning of a multi-part series on implementing BIG-IP SSL Orchestrator. It includes high availability and central management with BIG-IQ. Implementing SSL/TLS decryption is not a trivial task. There are many factors to keep in mind and account for, from the network topology and insertion point, to SSL/TLS keyrings, certificates, ciphersuites, and so on. This article focuses on pre-deployment tasks and preparations for SSL Orchestrator. This article is divided into the following high-level sections:
- Solution Overview
- Customer Use Case
- Architecture & Network Topology
Please forgive me for using SSL and TLS interchangeably in this article. Software versions used in this article:
- BIG-IP Version: 14.1.2
- SSL Orchestrator Version: 5.5
- BIG-IQ Version: 7.0.1
Solution Overview
Data transiting between clients (PCs, tablets, phones, etc.) and servers is predominantly encrypted with Secure Socket Layer (SSL) and its evolution, Transport Layer Security (TLS) (ref. Google Transparency Report). Pervasive encryption means that threats are now predominantly hidden and invisible to security inspection unless traffic is decrypted. The decryption and encryption of data by different devices performing security functions potentially adds overhead and latency. The picture below shows a traditional chaining of security inspection devices such as a filtering web gateway, a data loss prevention (DLP) tool, an intrusion detection system (IDS), and a next-generation firewall (NGFW). Also, TLS/SSL operations are computationally intensive and stress the security devices' resources. This leads to a sub-optimal use of resources, where compute time is spent encrypting/decrypting rather than inspecting. F5's BIG-IP SSL Orchestrator offers a solution to optimize resource utilization, remove latency, and add resilience to the security inspection infrastructure. F5 SSL Orchestrator ensures encrypted traffic can be decrypted, inspected by security controls, then re-encrypted, delivering enhanced visibility to mitigate threats traversing the network. As a result, you can maximize your security services investment for malware, data loss prevention (DLP), ransomware, and next-generation firewalls (NGFW), thereby preventing inbound and outbound threats, including exploitation, callback, and data exfiltration. The SSL Orchestrator decrypts the traffic and forwards unencrypted traffic to the different security devices for inspection, leveraging its optimized and hardware-accelerated SSL/TLS stack. As shown below, the BIG-IP SSL Orchestrator classifies traffic and selectively decrypts it. It then forwards the traffic to the appropriate security functions for inspection. Finally, once duly inspected, the traffic is re-encrypted and sent on its way to the resource the client is accessing. Deploying F5 and inline security tools together has the following benefits:
- Traffic distribution for load sharing: improve the scalability of inline security by distributing the traffic across multiple security appliances, allowing them to share the load and inspect more traffic.
- Agile deployment: add, remove, and/or upgrade security appliances without disrupting network traffic; convert security appliances from out-of-band monitoring to inline inspection on the fly without rewiring.
Customer Use Case
This document focuses on the implementation of BIG-IP SSL Orchestrator to process SSL/TLS encrypted traffic and forward it to security inspection/enforcement devices. The decryption and forwarding behavior is determined by the security policy.
This ensures that only targeted traffic is decrypted, in compliance with corporate and regulatory policy, data privacy requirements, and other relevant factors. The configuration supports encrypted traffic that originates from within the data center or the corporate network. It also supports traffic originating from clients outside of the security perimeter accessing resources inside the corporate network or demilitarized zone (DMZ), as depicted below. The decrypted traffic transits through different inspection devices for inbound and outbound traffic. As an example, inbound traffic is decrypted and processed by F5's Advanced Web Application Firewall (F5 Advanced WAF) as shown below. *Can be encrypted or cleartext as needed. As an example, outbound traffic is decrypted and sent to a next-generation firewall (NGFW) for inspection as shown in the diagram below. The BIG-IP SSL Orchestrator solution offers six different configuration templates. The following topologies are discussed in Network Insertion Use Cases:
- L2 Outbound
- L2 Inbound
- L3 Outbound
- L3 Inbound
- L3 Explicit Proxy
- Existing Application
In the use case described herein, the BIG-IP is inserted as a layer 3 (L3) network device and is configured with an L3 Outbound topology.
Architecture & Network Topology
The assumption is that, prior to the insertion of BIG-IP SSL Orchestrator into the network (in a brownfield environment), the network looks like the one depicted below. It is understood that actual networks will vary and that IP addressing, L2 and L3 connectivity will differ; however, this is deemed to be a representative setup. Note: all IP addressing in this document is provided as examples only. Private IP addressing (RFC 1918) is used, as in most corporate environments. Note: the management network is not depicted in the picture above. Further discussion about management and visibility is the subject of Centralized Management below. The following is a description of the different reference points shown in the diagram above.
a. This is the connection of the border routers that connect to the Internet and other WAN and private links. Typically, private IP addressing space is used from the border routers to the firewalls.
b. The border switching connects to the corporate/infrastructure firewall. Resilience is built into this switching layer by implementing two link aggregates (LAG or Port Channel®).
c. The "demilitarized zone" (DMZ) switches are connected to the firewall. The DMZ network hosts applications that are accessible from untrusted networks such as the Internet.
d. Application servers connect into the DMZ switch fabric.
e. Firewalls connect into the switch fabric. Typically, core and distribution infrastructure switching will provide L2 and L3 switching to the enterprise (in some cases there may be additional L3 routing for larger enterprises/entities that require dynamic routing and other advanced L3 services).
f. The connection between the core and distribution layers is represented by a bus in the figure above because the actual connection schema is too intricate to picture. The writer has taken the liberty of drawing a simplified representation. Switches actually interconnect with a mixture of link aggregation and provide differentiated switching using virtualization (e.g. VLAN tagging, 802.1q), and possibly further frame/packet encapsulation (e.g. QinQ, VXLAN).
g. The core and distribution switching are used to create two broadcast domains. One is the client network, and the other is the internal application network.
h. The internal applications are connected to their own subnet.
The BIG-IP SSL Orchestrator solution is implemented as depicted below. In the diagram above, new network connections are depicted in orange (vs. blue for existing connections). Similar to the diagram showing the original network, the switching for the DMZ is depicted using a bus representation to keep the diagram simple. The following discusses the different reference points in the diagram above:
a. The BIG-IP SSL Orchestrator is connected to the core switching infrastructure. A new VLAN and network are created on the core switching infrastructure to connect the firewalls (North) to the BIG-IP SSL Orchestrator devices.
b. The client network (South) is connected to the BIG-IP via a second VLAN and network.
c. The SSL Orchestrator devices are connected to a newly created inspection network. This network is kept separate from the rest of the infrastructure, as client traffic transits through the inspection devices unencrypted. As an example, Web Application Firewalls (BIG-IP ASM) are used to filter inbound traffic.
d. The LAN configuration for the connection to the BIG-IP ASM is as depicted below.
e. The NGFW is connected to the INSPECTION switching network in such a manner that traffic traverses it when the BIG-IP SSL Orchestrator is configured to push traffic for inspection.
Summary
This article should be a good starting point for planning your initial SSL Orchestrator deployment. We covered the solution overview and use cases. The network topology and architecture were explained with the help of diagrams.
Next Steps
Click Next to proceed to the next article in the series.
SSL Orchestrator Advanced Use Cases: Integrating SOCKS Proxy
Introduction When we talk about security, and in particular, malware defenses, we spend a lot of time focusing on the capabilities of a product or an entire security stack. We want to know what types of malware a solution covers to get a sense of how effective that solution is. But beyond that, and something that’s not always obvious, is exactly how a solution “sees” the traffic. Now I’m not talking about decryption. Clearly that needs to happen somewhere in the stack for the malware solutions to actually be useful. What I’m referring to though is how traffic actually gets TO the security stack. For an Internet-facing web application that’s usually obvious – it’s routed to an IP address, it's TCP-based, it's wrapped in TLS encryption, and it's application layer HTTP. Client browsers send a request that’s routed across the Internet, arriving at a firewall, and then transported internally to a web server, maybe passing through a load balancer, proxy server, WAF and/or DDoS solution to get there. In any case, this describes a common “reverse proxy” scenario. But this is not the ONLY way that malware security solutions are employed. For example, in an enterprise scenario, it is just as important to apply defenses to the outbound flow for traffic leaving the network to the Internet (or perhaps other business units). But in a forward proxy, the how isn’t always as obvious. To get traffic to the security stack, SSL Orchestrator natively supports inbound and outbound flows, layer 3 (standard routing), layer 2 (bump-in-the-wire), and explicit HTTP proxy. You can also turn on WCCP, BGP, OSPF and others to enable dynamic routing, and WPAD and Proxy PAC files to enable dynamic proxying. You can deploy SSL Orchestrator inline of the traffic flow, or “on-a-stick”. You can even deploy SSL Orchestrator as a load balanced solution for greater scale, using another BIG-IP or even ECMP*. Truth be told, most other TLS visibility products only handle a small subset of those just listed. But there is something I subtly didn’t mention. SSL Orchestrator supports explicit proxy for HTTP/HTTPS traffic. An explicit proxy is a type of proxy that the client is aware of and must target directly. The proxy creates a TCP connection to another server behind the firewall on the client’s behalf, usually employed when the client is not permitted to establish TCP connections to outside servers directly. This would be the proxy server settings in your browser configuration. The native HTTP explicit proxy support is fine for anything a browser needs to talk to, but there may be scenarios where a proxy server is required in an environment for other protocols. For that, the “Socket Secure” (SOCKS) protocol could be a reasonable option. SOCKS isn’t built into the SSL Orchestrator configuration, but BIG-IP has had it forever, and as you’ve no doubt heard me say a few times the flexibility of the BIG-IP platform prevails. In this article I’m going to show just how easy it is to enable a SOCKS proxy listener for SSL Orchestrator. Let’s go see what that looks like! SSL Orchestrator Use Case: Integrating SOCKS Proxy To start, it’s important to understand that SSL Orchestrator creates an HTTP explicit forward proxy with two virtual servers: The first is the actual proxy virtual server. This contains the IP address and port that the client will configure in its proxy settings. All proxied traffic from clients will enter through this virtual server. The second is the TCP tunnel virtual server. 
This is where the SSL Orchestrator decryption and service chaining magic happens. This virtual server listens on an internal "tunnel" VLAN provided by the proxy virtual. Traffic enters through the proxy virtual and is forwarded (wrapped) through the TCP tunnel virtual. The TCP tunnel virtual server is basically identical to a transparent proxy, but one that is just listening internally for traffic. To create a SOCKS proxy implementation, we are simply going to:
- Create the SOCKS proxy virtual server configuration manually
- Create an SSL Orchestrator transparent proxy topology
- Add the internal proxy tunnel VLAN to the resulting transparent proxy virtual server(s)
In the same way that the native HTTP explicit proxy works, traffic will enter the SOCKS proxy VIP and wrap around to the TCP tunnel (transparent proxy) virtual server that the SSL Orchestrator created for us.
Licensing and Version Requirements
The beauty here is that you simply need SSL Orchestrator to do this. The SOCKS proxy profile is a function of Local Traffic Manager, which is also included in the standalone SSL Orchestrator license. As for versions, your best bet is any SSL Orchestrator version 9.0 or higher. It will certainly work with earlier versions, but in those you'll need to disable configuration strictness. Versions 9.0 and above remove this requirement.
Configuring a SOCKS Proxy
To build a SOCKS proxy virtual server, you'll need a DNS resolver, a TCP tunnel, a SOCKS profile, and a virtual server:
DNS resolver: In the BIG-IP UI, under Network – DNS Resolvers – DNS Resolver List, click Create. Give this DNS resolver a name and then click Finished. Now click to edit this resolver, move to the Forward Zones tab, and click Add.
- Name: enter a single period (".", without the quotes)
- Address: enter the IP address of a DNS nameserver
- Service Port: enter the listening port of that DNS nameserver (almost always 53)
Click Add to complete.
TCP tunnel: In the BIG-IP UI, under Network – Tunnels – Tunnel List, click Create. Give this tunnel a name.
- Profile: tcp-forward
Click Finished to complete.
SOCKS profile: In the BIG-IP UI, under Local Traffic – Profiles – Services – SOCKS, click Create. Give the SOCKS profile a name.
- Protocol Versions: select the required versions to support, usually just Socks5
- DNS Resolver: select the previously created DNS resolver
- IPv6: enable this if IPv6 is required
- Route Domain: set this if a unique (non-zero) route domain is required
- Tunnel Name: select the previously created TCP tunnel
- Default Connect Handling: set to Deny
Click Finished to complete.
SOCKS virtual server: In the BIG-IP UI, under Local Traffic – Virtual Servers – Virtual Server List, click Create. Give the virtual server a name.
- Source: 0.0.0.0/0
- Destination Address/Mask: enter the proxy IP address that clients will access
- Service Port: enter the port for this SOCKS proxy instance
- SOCKS Profile: select the previously created SOCKS profile
- VLANs and Tunnels: select the client-facing VLAN
- Source Address Translation: set to None
- Address Translation: set to enabled
- Port Translation: set to enabled
Click Finished to complete.
This is all that you really must do to create the SOCKS proxy configuration (a TMSH sketch of these same objects follows below). The next step is to create the TCP tunnel virtual server(s), and for that, we'll let SSL Orchestrator do the work.
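If you prefer to build these objects from the command line, the following TMSH sketch creates roughly the same configuration as the UI steps above. The object names, the example nameserver (10.1.20.53), the proxy listener address (10.1.10.150:2222), and the client VLAN name are assumptions for illustration; adjust them to your environment, and note that attribute names can vary slightly between BIG-IP versions.

# DNS resolver with a single "." forward zone pointing at an example nameserver
tmsh create net dns-resolver socks-resolver forward-zones replace-all-with { . { nameservers add { 10.1.20.53:53 } } }

# TCP tunnel using the tcp-forward profile
tmsh create net tunnels tunnel socks-tunnel profile tcp-forward

# SOCKS profile bound to the resolver and tunnel (Default Connect Handling: deny, per the UI steps above)
tmsh create ltm profile socks socks-proxy protocol-versions replace-all-with { socks5 } dns-resolver socks-resolver tunnel-name socks-tunnel default-connect-handling deny

# SOCKS proxy virtual server listening on the client-facing VLAN
tmsh create ltm virtual socks-proxy-vip destination 10.1.10.150:2222 ip-protocol tcp profiles add { tcp socks-proxy } vlans-enabled vlans add { client-vlan } source-address-translation { type none } translate-address enabled translate-port enabled

tmsh save sys config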
Configuring SSL Orchestrator
You're going to create an L3 Outbound SSL Orchestrator topology. This will minimally create one TCP forward proxy listener, but you can also create separate listeners for other StartTLS protocols (i.e., SMTP, IMAP, POP3, FTPS). In the interest of brevity, we won't go through the entire topology configuration in detail. For deeper insights into all of the available settings, hop on over to the SSL Orchestrator deployment guide: https://clouddocs.f5.com/sslo-deployment-guide/. The following will serve as a skeleton for the full topology workflow. In the SSL Orchestrator UI, under Configuration, start a new topology configuration.
- Topology Properties: provide a name and select L3 Outbound
- SSL Configurations: set your CA certificate key chain
- Services List: create the security services
- Service Chain List: create the service chain(s) and attach the services as required
- Security Policy: adjust the security policy as required
- Interception Rule: select any VLAN (we will modify this later). Optionally select additional L7 Interception Rules for FTP, IMAP, POP3, and SMTP.
- Egress Settings: select SNAT as required, and select gateway settings as required
Review the settings and then deploy this configuration. If you're on a version of SSL Orchestrator older than 9.0, you will need to disable strictness on the new topology. This is the little lock icon to the far right of the listed topology. Note that you'll need to leave this unlocked, and any changes you make outside of the SSL Orchestrator configuration could be overwritten if you re-deploy the topology. In versions 9.0 and higher, the strictness lock has been removed, so you can freely make the required SOCKS proxy modifications. Once the SSL Orchestrator topology is deployed, head over to Local Traffic – Virtual Servers – Virtual Server List in the BIG-IP UI. You will potentially see a lot of new virtual servers here. Most will have names that correspond to the security services you created, prefixed with "ssloS_". But you're looking for virtual servers that are prefixed with "sslo_" and then the name of the topology you just created. For example, if you named the topology "test", you would see a virtual server named "sslo_test-in-t-4". This is the standard transparent proxy (TCP tunnel) virtual. If you opted to create the additional L7 interception rules, you'll see separate virtual servers for each of these. Again, for example, "sslo_test-pop3-4" would represent the POP3 tunnel listener. To enable the SOCKS proxy, edit each of these "sslo_" virtual servers and replace the existing selected VLAN with the TCP tunnel you created in the above SOCKS proxy configuration steps (a TMSH sketch of this change follows below). That should be it. Configure a client to point to your SOCKS proxy and test. That traffic should arrive at the SOCKS proxy virtual server, and then wrap back through the respective TCP tunnel virtual. The SSL Orchestrator configuration on that tunnel will handle decryption and service chaining to the security stack.
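If you'd rather make the VLAN-to-tunnel swap from the command line, a sketch like the following should do it. The topology name "test" and the tunnel name "socks-tunnel" are carried over from the examples above; substitute your own names, and repeat the command for any additional "sslo_" L7 listeners (POP3, SMTP, FTP, etc.) you created.

# Point the SSL Orchestrator transparent proxy (TCP tunnel) virtual at the SOCKS proxy's internal tunnel
tmsh modify ltm virtual sslo_test-in-t-4 vlans-enabled vlans replace-all-with { socks-tunnel }

# Example: the same change for the optional POP3 listener
tmsh modify ltm virtual sslo_test-pop3-4 vlans-enabled vlans replace-all-with { socks-tunnel }

tmsh save sys config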
Testing your SOCKS Proxy
Now, if you're reading this article, you're either the highly motivated, super inquisitive type, which is awesome by the way, and/or have a real need to implement a SOCKS proxy for your decrypted malware security stack. For the latter, you probably already have a set of tools that needs a proxy to get to the Internet, so you can use those to test. But if you don't, I've taken the liberty to add a few below to get you started. Nothing fancy here, just some simple command line utilities to test your new SSL Orchestrator SOCKS proxy implementation. Let's also assume for the examples below that the SOCKS proxy is configured to listen at 10.1.10.150:2222. We'll use the set of test services at rebex.net to try these out. When a password is required, it's 'password'.
SSH:
ssh -o ProxyCommand='nc -X5 -x 10.1.10.150:2222 %h %p' demo@test.rebex.net
SFTP:
sftp -o ProxyCommand='nc -v -x10.1.10.150:2222 %h %p' demo@test.rebex.net
HTTP and HTTPS:
curl -vk --socks5 10.1.10.150:2222 https://www.example.com
FTP and FTPS (implicit):
curl --socks5 10.1.10.150:2222 -1 -v --disable-epsv --ftp-skip-pasv-ip -u demo --ftp-ssl -k ftps://test.rebex.net:990/pub/example/readme.txt -o readme.txt
Caveats
As you can tell, we're just swapping out the HTTP explicit proxy front end for a SOCKS proxy front end. This layer's job is primarily to establish the client's server-side TCP socket (on the client's behalf), and then forward everything through the established tunnel. All of the SSL Orchestrator magic happens inside that tunnel. The SOCKS proxy itself does not introduce any new constraints. It'll work with mostly any protocol that an SSL Orchestrator L3 Outbound topology will handle. The only significant caveats are:
- BIG-IP cannot decrypt outbound SSH (and SFTP). It can, however, be service chained.
- BIG-IP's FTP profile does not support passive mode FTP through a SOCKS proxy configuration. To get FTP traffic to flow, remove the FTP virtual server configuration from the SSL Orchestrator topology (sslo_xxxx-ftp-4). This removes the FTP profile and allows the use of SSL Orchestrator as a SOCKS proxy for FTP. Note also that for a standard explicit outbound SSL Orchestrator topology, passive FTP works correctly with the FTP virtual server and FTP profile. Active mode FTP is not supported in any outbound proxy configuration.
Resources
Below is a small list of resources to further your SSL Orchestrator and SOCKS proxy education:
- Not that you'll need any of this to implement the above solution, but if you just want to get nerdy with details on how SOCKS works, try this: https://en.wikipedia.org/wiki/SOCKS.
- If you have any questions about setting up SSL Orchestrator, the official deployment guide is your friend: https://clouddocs.f5.com/sslo-deployment-guide/.
- To see a full list of test services available at rebex.net, see the following: https://test.rebex.net.
Summary
And there you have it. In just a few steps you've configured your SSL Orchestrator solution to work as a SOCKS proxy, and along the way you have hopefully recognized some of the immense flexibility at your command.
SSL Orchestrator Advanced Use Cases: Client Certificate Constrained Delegation (C3D) Support
Introduction
F5 BIG-IP is synonymous with "flexibility". You likely have few other devices in your architecture that provide the breadth of capabilities that come native with the BIG-IP platform. And for each and every BIG-IP product module, the opportunities to expand functionality are almost limitless. In this article series we examine the flexibility options of the F5 SSL Orchestrator in a set of "advanced" use cases. If you haven't noticed, the world has been steadily moving toward encrypted communications. Everything from web, email, voice, video, chat, and IoT is now wrapped in TLS, and that's a good thing. The problem is, malware - that thing that creates havoc in your organization, that exfiltrates personnel records to the Dark Web - isn't stopped by encryption. TLS 1.3 and multi-factor authentication don't eradicate malware. The only reasonable way to defend against it is to catch it in the act, and an entire industry of security products is designed for just this task. But ironically, encryption makes this hard. You can't protect against what you can't see. F5 SSL Orchestrator simplifies traffic decryption and malware inspection, and dynamically orchestrates traffic to your security stack. But it does much more than that. SSL Orchestrator is built on top of F5's BIG-IP platform, and as stated earlier, abounds with flexibility.
SSL Orchestrator Use Case: Client Certificate Constrained Delegation (C3D)
Using certificates to authenticate is one of the oldest and most reliable forms of authentication. While not every application supports modern federated access or multi-factor schemes, you'll be hard-pressed to find something that doesn't minimally support authentication over TLS with certificates. And coupled with hardware tokens like smart cards, certificates can enable one of the most secure multi-factor methods available. But certificate-based authentication has always presented a unique challenge to security architectures. Certificate "mutual TLS" authentication requires an end-to-end TLS handshake. When a server indicates a requirement for the client to submit its certificate, the client must send both its certificate and a digitally-signed hash value. This hash value is signed (i.e. encrypted) with the client's private key. Should a device between the client and server attempt to decrypt and re-encrypt, it would be unable to satisfy the server's authentication request by virtue of not having access to the client's private key (to create the signed hash). This makes encrypted malware inspection complicated, often requiring a total bypass of inspection to sites that require mutual TLS authentication. Fortunately, F5 has an elegant solution to this challenge in the form of Client Certificate Constrained Delegation, affectionately referred to as "C3D". The concept is reasonably straightforward. In very much the same way that SSL forward proxy re-issues a remote server certificate to local clients, C3D can re-issue a remote client certificate to local servers. A local server can continue to enforce secure mutual TLS authentication, while allowing the BIG-IP to explicitly decrypt and re-encrypt in the middle. This presents an immediate advantage in basic load balancing, where access to the unencrypted data allows the BIG-IP greater control over persistence. In the absence of this, persistence would typically be limited to IP address affinity. But of course, access to the unencrypted data also allows the content to be scanned for malware.
C3D actually takes this concept of certificate re-signing to a higher level though. The "constrained delegation" portion of the name implies a characteristic much like Kerberos constrained delegation, where (arbitrary) attributes can be inserted into the re-signed token, like the PAC attributes in a Kerberos ticket, to inform the server about the client. Servers, for their part, can then simply filter on client certificates issued by the BIG-IP (to prevent direct access), and consume any additional attributes in the certificate to understand how better to handle the client. With C3D you can maintain strong mutual TLS authentication all the way through to your servers, while allowing the BIG-IP to more effectively manage availability. And combined with SSL Orchestrator, C3D enables decryption of the content for malware inspection. This article describes how to configure SSL Orchestrator to enable C3D for inbound decrypted inspection. Arguably, most of what follows is the C3D configuration itself, as the integration with SSL Orchestrator is pretty simple. Note that Client Certificate Constrained Delegation (C3D) is included with Local Traffic Manager (LTM) 13.0 and beyond, but for integration with SSL Orchestrator you should be running 14.1 or later. To learn more about C3D, please see the following resources:
- K14065425: Configuring Client Certificate Constrained Delegation (C3D): https://support.f5.com/csp/article/K14065425
- Manual Chapter: SSL Traffic Management: https://techdocs.f5.com/en-us/bigip-15-1-0/big-ip-system-ssl-administration/ssl-traffic-management.html#GUID-B4D2529E-D1B0-4FE2-8C7F-C3774ADE1ED2
- SSL::c3d iRule reference - not required to use C3D, but adds powerful functionality: https://clouddocs.f5.com/api/irules/SSL__c3d.html
The integration of C3D with SSL Orchestrator involves effectively replacing the client and server SSL profiles that the SSL Orchestrator topology creates with C3D SSL profiles. This is done programmatically with an iRule, so no "non-strict" customization is required at the topology. Also note that an inbound (reverse proxy) SSL Orchestrator topology will take the form of a "gateway mode" deployment (a routed path to multiple applications) or an "application mode" deployment (a single application instance hosted at the BIG-IP). See section 2.5 of the SSL Orchestrator deployment guide for a deeper examination of gateway and application modes: https://clouddocs.f5.com/sslo-deployment-guide/. The C3D integration is only applicable to application mode deployments.
Configuration
C3D itself involves the creation of client and server SSL profiles.
Create a new Client SSL profile:
Configuration
- Certificate Key Chain: public-facing server certificate and private key. This will be the certificate and key presented to the client on the inbound request. It will likely be the same certificate and key defined in the SSL Orchestrator inbound topology.
Client Authentication
- Client Certificate: require
- Trusted Certificate Authorities: bundle that can validate the client certificate. This is a certificate bundle used to verify the client's certificate, and will contain all of the client certificate issuer CAs.
- Advertised Certificate Authorities: optional CA hints bundle. Not expressly required, but this certificate bundle is forwarded to the client during the TLS handshake to "hint" at the correct certificate, based on issuer.
Client Certificate Constrained Delegation
- Client Certificate Constrained Delegation: Enabled
- Client Fallback Certificate (new in 15.1): option to select a default client certificate if the client does not send one. This option was introduced in 15.1 and provides the means to select an alternate (local) certificate if the client does not present one. The primary use case here might be to select a "template" certificate, and use an iRule function to insert arbitrary attributes.
- OCSP: optional client certificate revocation control. This option defines an OCSP revocation provider for the client certificate.
- Unknown OCSP Response Control (new in 15.1): determines what happens when OCSP returns Unknown. If an OCSP revocation provider is selected, this option defines what to do if the response to the OCSP query is "unknown".
Create a new Server SSL profile:
Configuration
- Certificate: default.crt. The certificate and key here are used as "templates" for the re-signed client certificate.
- Key: default.key
Client Certificate Constrained Delegation
- Client Certificate Constrained Delegation: Enabled
- CA Certificate: local forging CA cert. This is the CA certificate used to re-sign the client certificate. This CA must be trusted by the local servers.
- CA Key: local forging CA key
- CA Passphrase: optional CA passphrase
- Certificate Extensions: extensions from the real client cert to be included in the forged cert. This is the list of certificate extensions to be copied from the original certificate to the re-issued certificate.
- Custom Extension: additional extensions to copy to the forged cert from the real cert (OID). This option allows you to insert additional extensions to be copied, as OID values.
Additional considerations: Under normal conditions, the F5 and backend server attempt to resume existing SSL sessions, whereby the server doesn't send a Certificate Request message. The effect is that all connections to the backend server use the same forged client cert. There are two ways to get around this:
- Set a zero-length cache value in the server SSL profile, or
- Set server authentication frequency to 'always' in the server SSL profile
CA certificate considerations: A valid signing CA certificate should possess the following attributes. While it can work in some limited scenarios, a self-signed server certificate is generally not an adequate option for the signing CA.
- keyUsage: certificate extension containing "keyCertSign" and "digitalSignature" attributes
- basicConstraints: certificate extension containing "CA = true" (for Yes), marked as "Critical"
With the client and server SSL profiles built, the C3D configuration is basically done. To integrate with an inbound SSL Orchestrator topology, create a simple iRule and add it to the topology's Interception Rule configuration. Modify the SSL profile paths below to reflect the profiles you created earlier.

### Modify the SSL profile paths below to match real C3D SSL profiles
when CLIENT_ACCEPTED priority 250 {
    ## set clientssl
    set cmd1 "SSL::profile /Common/c3d-clientssl" ; eval $cmd1
}
when SERVER_CONNECTED priority 250 {
    ## set serverssl
    SSL::profile "/Common/c3d-serverssl"
}

In the SSL Orchestrator UI, either from the topology workflow, or directly from the corresponding Interception Rule configuration, add the above iRule and deploy. The above iRule programmatically overrides the SSL profiles applied to the Interception Rule (virtual server), effectively enabling C3D support.
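Relating to the CA certificate considerations above, it can save a troubleshooting cycle to sanity-check the signing CA's extensions before you deploy. The commands below are just a quick check with openssl; the file name c3d-ca.crt is an assumed local copy of your forging CA certificate.

# Expect keyUsage to include "Certificate Sign" and "Digital Signature"
openssl x509 -in c3d-ca.crt -noout -text | grep -A1 "X509v3 Key Usage"

# Expect basicConstraints to show "CA:TRUE" and be marked critical
openssl x509 -in c3d-ca.crt -noout -text | grep -A1 "X509v3 Basic Constraints"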
At this point, the virtual server will request a client certificate, perform revocation checks if defined, and then mint a local copy of the client certificate to pass to the backend server. Optionally, you can insert additional certificate attributes via the server SSL profile configuration, or more dynamically through additional iRule logic:

### Added in 15.1 - allows you to send a forged cert to the server
### irrespective of the client side authentication (ex. APM SSO),
### and insert arbitrary values
when SERVERSSL_SERVERCERT {
    ### The following options allow you to override/replace a submitted
    ### client cert. For example, a minted client certificate can be sent
    ### to the server irrespective of the client side authentication method.
    ### This certificate "template" could be defined locally in the iRule
    ### (Base64-encoded), pulled from an iFile, or some other certificate source.
    # set cert1 [b64decode "LS0tLS1a67f7e226f..."]
    # set cert1 [ifile get template-cert]
    ### In order to use a template cert, it must first be converted to DER format
    # SSL::c3d cert [X509::pem2der $cert1]
    ### Insert arbitrary attributes (OID:value)
    SSL::c3d extension 1.3.6.1.4.1.3375.3.1 "TEST"
}

If you've configured the above, a server behind SSL Orchestrator that requires mutual TLS authentication can receive minted client certificates from external users, and SSL Orchestrator can explicitly decrypt and pass traffic to the set of malware inspection tools. You can look at the certificate sent to the server by taking a tcpdump packet capture between the BIG-IP and the server, then opening it in Wireshark.

tcpdump -lnni [VLAN] -Xs0 -w capture.pcap [additional filters]

Finally, you might be asking what to do with certificate attributes injected by C3D, and really it depends on what the server can support. The below is a basic example in an Apache config file to block a client certificate that doesn't contain your defined attribute.

<Directory />
    SSLRequire "HTTP/%h" in PeerExtList("1.3.6.1.4.1.3375.3.1")
    RewriteEngine on
    RewriteCond %{SSL::SSL_CLIENT_VERIFY} !=SUCCESS
    RewriteRule .? - [F]
    ErrorDocument 403 "Delegation to SPN HTTP/%h failed. Please pass a valid client certificate"
</Directory>

And there you have it. In just a few steps you've configured your SSL Orchestrator to integrate with Client Certificate Constrained Delegation to support mutual TLS authentication, and along the way you have hopefully recognized the immense flexibility at your command.
Updates
As of F5 BIG-IP 16.1.3, there are some new C3D capabilities:
- C3D has been updated to encode and return the commonName (CN) found in the client certificate subject field in printableString format if possible; otherwise the value will be encoded as UTF8.
- C3D has been updated to support inserting a subject commonName (CN) via the 'SSL::c3d subject commonName' command:
when CLIENTSSL_HANDSHAKE {
    if {[SSL::cert count] > 0} {
        SSL::c3d subject commonName [X509::subject [SSL::cert 0] commonName]
    }
}
- C3D has been updated to support inserting a Subject Alternative Name (SAN) via the 'SSL::c3d extension SAN' command:
when CLIENTSSL_HANDSHAKE {
    SSL::c3d extension SAN "DNS:*.test-client.com, IP:1.1.1.1"
}
- C3D has been updated to add the Authority Key Identifier (AKI) extension to the client certificate if the CA certificate has a Subject Key Identifier (SKI) extension.
Another interesting use case is copying the real client certificate Subject Key Identifier (SKI) to the minted client certificate.
By default, the minted client certificate will not contain an SKI value, but it's easy to configure C3D to copy the origin cert's SKI by modifying the C3D server SSL profile. In the "Custom Extension" field of the C3D section, add 2.5.29.14 as an available extension. As of F5 BIG-IP 17.1.0 (SSL Orchestrator 11.0), C3D has been integrated natively. Now, for a deployed Inbound topology, the C3D SSL profiles are listed in the Protocol Settings section of the Interception Rules tab. You can replace the client and server SSL profiles created by SSL Orchestrator with C3D SSL profiles in the Interception Rules tab to support C3D. The C3D support is now extended to both Gateway and Application modes.
SSL Orchestrator Advanced Use Cases: Forward Proxy Authentication
Introduction
F5 BIG-IP is synonymous with "flexibility". You likely have few other devices in your architecture that provide the breadth of capabilities that come native with the BIG-IP platform. And for each and every BIG-IP product module, the opportunities to expand functionality are almost limitless. In this article series we examine the flexibility options of the F5 SSL Orchestrator in a set of "advanced" use cases. If you haven't noticed, the world has been steadily moving toward encrypted communications. Everything from web, email, voice, video, chat, and IoT is now wrapped in TLS, and that's a good thing. The problem is, malware - that thing that creates havoc in your organization, that exfiltrates personnel records to the Dark Web - isn't stopped by encryption. TLS 1.3 and multi-factor authentication don't eradicate malware. The only reasonable way to defend against it is to catch it in the act, and an entire industry of security products is designed for just this task. But ironically, encryption makes this hard. You can't protect against what you can't see. F5 SSL Orchestrator simplifies traffic decryption and malware inspection, and dynamically orchestrates traffic to your security stack. But it does much more than that. SSL Orchestrator is built on top of F5's BIG-IP platform, and as stated earlier, abounds with flexibility.
SSL Orchestrator Use Case: Forward Proxy Authentication
Arguably, authentication is an easy one for BIG-IP, but I'm going to ease into this series slowly. There's no better place to start than with an examination of some of the many ways you can configure an F5 BIG-IP to authenticate user traffic.
Forward Proxy Overview
Forward proxy authentication isn't exclusive to SSL Orchestrator, but it is a vital component if you need to authenticate inspected outbound client traffic to the Internet. In this article, we are simply going to explore the act of authenticating in a forward proxy in general - how it works, and how it's applied. For detailed instructions on setting up Kerberos and NTLM forward proxy authentication, please see the SSL Orchestrator deployment guide. Let's start with a general characterization of "forward proxy" to level set. The semantics of forward and reverse proxy can change depending on your environment, but generally when we talk about a forward proxy, we're talking about something that controls outbound (usually Internet-bound) traffic. This is typically internal organizational traffic to the Internet. It is an important distinction, because it also implicates the way we handle encryption. In a forward proxy, clients are accessing remote Internet resources (ex. https://www.f5.com). For TLS to work, the client expects to receive a valid certificate from that remote resource, though the inspection device in the middle does not own that certificate and private key. So for decryption to work in an "SSL forward proxy", the middle device must re-issue ("forge") the remote server's certificate to the client using a locally-trusted CA certificate (and key). This is essentially how every SSL visibility product works for outbound traffic, and is a native function of the SSL Orchestrator. Now, for any of this to work, traffic must of course be directed through the forward proxy, and there are generally two ways that this is accomplished:
- Explicit proxy - where the browser is configured to access the Internet through a proxy server. This can also be accomplished through auto-configuration scripts (PAC and WPAD).
- Transparent proxy - where the client is blissfully unaware of the proxy and simply routes to the Internet through a local gateway.
It should be noted here that SSL visibility products that deploy at layer 2 are effectively limited to one traffic flow option, and lack the level of control that a true proxy solution provides, including authentication. Also note that BIG-IP forward proxy authentication requires the Access Policy Manager (APM) module to be licensed and provisioned.
Explicit Forward Proxy Authentication
The option you choose for outbound traffic flow will have an impact on how you authenticate that traffic, as each works a bit differently. Again, we're not getting into the details of Kerberos or NTLM here. The goal is to derive an essential understanding of the forward proxy authentication mechanisms, how they work, how traffic flows through them, and ultimately how to build them and apply them to your SSL Orchestrator configurations. And as each is different, let us start with explicit proxy. Explicit forward proxy authentication for HTTP traffic is governed by a "407" authentication model. In this model, the user agent (i.e. a browser) authenticates to the proxy server before passing any user request traffic to the remote server. This is an important distinction from other user-based authentication mechanisms, as the browser is generally limited in the types of authentication it can perform here (on the user's behalf). In fact, most modern browsers, with some exceptions, are limited to the set of "Windows Integrated" methods (NTLM, Kerberos, and Basic). Explicit forward proxy authentication will look something like this:
Figure: 407-based HTTPS and HTTP authentication
The upside here is that the Windows Integrated methods are usually "transparent". That is, silently handled by the browser and invisible to the user. If you're logged into a domain-joined workstation with a domain user account, the browser will use this access to generate an NTLM token or fetch a Kerberos ticket on your behalf. If you build an SSL Orchestrator explicit forward proxy topology, you may notice it builds two virtual servers. One of these is the explicit proxy itself, listening on the defined explicit proxy IP and port. And the other is a TCP tunnel VIP. All client traffic arrives at the explicit proxy VIP, then wraps around through the TCP tunnel VIP. The SSL Orchestrator security policy, SSL configurations, and service chains are all connected to the TCP tunnel VIP.
Figure: SSL Orchestrator explicit proxy VIP configuration
As explicit proxy authentication happens at the proxy connection layer, to do authentication you simply need to attach your authentication policy to the explicit proxy VIP. This is actually selected directly inside the topology configuration, on the Interception Rules page.
Figure: SSL Orchestrator explicit proxy authentication policy selection
But before you can do this, you must first create the authentication policy. Head on over to Access -> Profiles / Policies -> Access Profiles (Per-session policies), and click the Create button.
Settings:
- Name: provide a unique name
- Profile Type: SWG-Explicit
- Profile Scope: leave it at 'Profile'
- Customization Type: leave it at 'Modern'
Don't let the name confuse you. Secure Web Gateway (SWG) is not required to perform explicit forward proxy authentication. Click Finished to complete. You'll be taken back to the profile list. To the right of the new profile, click the Edit link to open a new tab to the Visual Policy Editor (VPE).
Now, before we dive into the VPE, let's take a moment to talk about how authentication is going to work here. As previously stated, we are not going to dig into things like Kerberos or NTLM, but we still need something to authenticate to. Once you have something simple working, you can quickly shim in the actual authentication protocol. So let's do basic LocalDB authentication to prove out the configuration. Hop down to Access -> Authentication -> Local User DB -> Instances, and click Create New Instance. Create a simple LocalDB instance:
Settings:
- Name: provide a unique name
Leave the remaining settings as is and click OK. Now go to Access -> Authentication -> Local User DB -> Users, and click Create New User.
Settings:
- User Name: provide a unique user name
- Password: provide a password
- Instances: select the LocalDB instance
Leave everything else as is and click OK. Now go back to the VPE. You're ready to define your authentication policy. With some exceptions, most explicit forward proxy authentication policies will minimally include a 407 Proxy-Authenticate agent and an authentication agent. The 407 Proxy-Authenticate agent will issue the 407 Proxy-Authenticate response to the client, and pass the user's submitted authentication data (Basic Authorization header, NTLM token, Kerberos ticket) to the auth agent behind it. The auth agent is then responsible for validating that submission and allowing (or denying) access. Since we're using a simple LocalDB to test this, we'll configure this for Basic authentication.
Figure: 407-based SWG-Explicit authentication policy
407 HTTP Response Agent Settings:
- Properties
  - Basic Auth: enter unique text here
  - HTTP Auth Level: select Basic
- Branch Rules
  - Delete the existing Negotiate Branch
Authentication Agent Settings:
- Type: LocalDB Auth
- LocalDB Instance: your LocalDB instance
Note again that this is a simple explicit forward proxy test using a local database for HTTP Basic authentication. Once you have this working, it is super easy to replace the LocalDB method with the authentication protocol you need. Now head back to your SSL Orchestrator explicit proxy configuration. Navigate to the Interception Rules page. On that page you will see a setting for Access Profile. Select your SWG-Explicit access policy here. And that's it. Deploy the configuration and you're done. Configure your browser to point to the SSL Orchestrator explicit proxy IP and port, if you haven't already, and attempt to access an external URL (ex. https://www.f5.com). Since this is configured for HTTP Basic authentication, you should see a popup dialog in the browser requesting username and password. Enter the values you created in the LocalDB user properties. In later articles, I will show you how to configure Kerberos and NTLM for forward proxy authentication. If you want to see what this communication actually looks like on the wire, you can either enable your browser's developer tools (Network tab), or for a cleaner view, head over to a command line on your client and use the cURL command (you'll need cURL installed on your workstation):
curl -vk --proxy [PROXY IP:PORT] https://www.example.com --proxy-basic --proxy-user '[username:password]'
Transparent Forward Proxy Authentication I intentionally started with explicit proxy authentication because it's usually the easiest to get your head around. Transparent forward proxy authentication is a bit different, but you very likely see it all the time. If you've ever connected to hotel, airport/airplane, or coffee shop WiFi, and you were presenting with a webpage or popup screen that asked for username, room number, or asked you to agree to some terms of use, you were using transparent authentication. In this case though, it is commonly referred to as a "Captive Portal". Note that captive portal authentication was introduced to SSL Orchestrator in version 6.0. Captive portal authentication basically works like this: On first time connecting, you navigate to a remote URL (ex. www.f5.com), which passes through a security device (a proxy server, or in the case of hotel/coffee shop WiFi, an access point). The device has never seen you before, so issues an HTTP redirect to a separate URL. This URL will present an authentication point, usually a web page with some form of identity verification, user agreement, etc. You do what you need to do there, and the authentication page redirects you back to the original URL (ex. www.f5.com) and either stores some information about you, or sends something back with you in the redirect (a token). On passing back through the proxy (or access point), you are recognized as an authenticated user and allowed to pass. The token is stored for the life of your sessions so that you are not sent back to the captive portal. Figure: Captive-portal Authentication Process The real beauty here is that you are not at limited in the mechanisms you use to authenticate, like you are in an explicit proxy. The captive portal URL is essentially a webpage, so you could use NTLM, Kerberos, Basic, certificates, federation, OAuth, logon page, basically anything. Configuring this in APM is also super easy. Head on over to Access -> Profiles / Policies -> Access Profiles (Per-session policies), and click the Create button. Settings: Name: provide a unique name Profile Type: SWG-Transparent Profile Scope: leave it at 'Named' Named Scope: enter a unique value here (ex. SSO) Customization Type: set this to 'Standard' Again, don't let the name confuse you. Secure Web Gateway (SWG) is not required to perform transparent forward proxy authentication. Click Finished to complete. You'll be taken back to the profile list. To the right of the new profile, click the Edit link to open a new tab to the Visual Policy Editor (VPE). We are going to continue to use the LocalDB authentication method here to keep the configuration simple. But in this case, you could extend that to do Basic authentication or a logon page. If you do Basic, Kerberos, or NTLM, you'll be using a "401 authentication model". This is very similar to the 407 model, except that 401 interacts directly with the user. And again, this is just an example. Captive portal authentication isn't dependent on browser proxy authentication capabilities, and can support pretty much any user authentication method you can throw at it. Figure: 401-based SWG-Transparent authentication policy 401 Authentication Agent Settings: Properties Basic Auth: enter unique text here HTTP Auth Level: select Basic Branch Rules Delete the existing Negotiate Branch Authentication Agent Settings: Type: LocalDB Auth LocalDB Instance: your Local DB instance Now, there are a few additional things to do here. 
Transparent proxy (captive portal) authentication actually requires two access profiles. The authentication profile you just created gets applied to the captive portal (authentication URL). You need a separate access profile on the proxy listener to redirect the user to the captive portal if no token exists for that user. As it turns out, an SSL Orchestrator security policy is indeed a type of access profile, so it simply gets modified to point to the captive portal URL. The 'named' profile scope you selected in the above authentication profile defines how the two profiles share user identity information, thus both will have a named profile scope, and must use the same named scope value (ex. SSO). You will now create the second access profile:
Settings:
- Name: provide a unique name
- Profile Type: SSL Orchestrator
- Profile Scope: leave it at 'Named'
- Named Scope: enter a unique value here (ex. SSO)
- Customization Type: set this to 'Standard'
- Captive Portals: select 'Enabled'
- Primary Authentication URI: enter the URL of the captive portal (ex. https://login.f5labs.com)
You now need to create a virtual server to hold your captive portal. This is the URL that users are redirected to for authentication (ex. https://login.f5labs.com). The steps are as follows:
- Create a certificate and private key to enable TLS
- Create a client SSL profile that contains the certificate and private key
- Create a virtual server
  - Destination Address/Mask: enter the IP address that the captive portal URL resolves to
  - Service Port: enter 443
  - HTTP Profile (Client): select 'http'
  - SSL Profile (Client): select your client SSL profile
  - VLANs and Tunnels: enable for your client-facing VLAN
  - Access Profile: select your captive portal access profile
Head back into your SSL Orchestrator outbound transparent proxy topology configuration, and go to the Interception Rules page. Under the 'Access Profile' setting, select your new SSL Orchestrator access profile and re-deploy. That's it. Now open a browser and attempt to access a remote resource. Since this is using Basic authentication with LocalDB, you should get prompted for username and password. If you look closely, you will see that you've been redirected to your captive portal URL. 401 Basic authentication is not connection based, so APM stores the user session information by client IP. If you do not get prompted for authentication, it's likely you have an active session already. Navigate to Access -> Overview -> Active Sessions. If you see your LocalDB user account name listed there, delete it and try again (close and re-open the browser). A quick command line check of the redirect behavior is also sketched below.
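If you'd rather verify the redirect from a command line than a browser, a quick curl from a client that routes through the transparent proxy should show the redirect to the captive portal on the first, unauthenticated request. This is only an illustrative sketch; the destination site and the captive portal URL (https://login.f5labs.com from the example above) will of course be your own, and the exact response codes depend on your policy.

# From an unauthenticated client routed through the SSL Orchestrator topology:
# expect a redirect toward the captive portal URL
curl -vk https://www.example.com 2>&1 | grep -iE "^< (HTTP|location)"

# After authenticating (sessions are tracked by client IP), the same request
# should return the real site instead of the redirect.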
And there you have it. In just a few steps you've configured your SSL Orchestrator outbound topology to perform user authentication, and along the way you have hopefully recognized the immense flexibility at your command. Thanks.
SSL Orchestrator Advanced Use Cases: Inbound Authentication
Introduction
In your many adventures as an IT pro, you've undoubtedly come across the term "Swiss Army Knife" when describing the F5 BIG-IP. You don't have to be an expert at F5 products...or Swiss Army knives...to understand what this means. The term itself ubiquitously describes the idea of versatility, the ability to solve any problem with one of many tools included in a single shiny package. Now of course the naysayers will argue that this versatility breeds complexity. And while there's no argument that a BIG-IP can be complex, just take a look at your current network and security architectures and count how many different tools are used to solve a single set of challenges. The subtle reality is that there's really no such thing as "one-size-fits-all". Homogenizing technologies like the various public cloud offerings will give you "good enough" capabilities, but then you have to ask yourself, is my competitor's good enough the same as my good enough? Do we really have the same exact challenges? This is where versatility can be a critical advantage. Versatility, for example, can help to stop zero-day attacks before your security products have a chance to roll out their own solutions. Versatility can solve complex software issues that might otherwise require a multitude of expensive vendor tools. And versatility can very often create capabilities (application, authentication, security, etc.) where no formal vendor solution exists. In this article I'll be addressing a specific set of BIG-IP (versatility) characteristics: authentication and orchestration. And in doing so, I will also be showing you some powerful capabilities that you probably didn't know were there. Let's get started!
SSL Orchestrator Use Case: Inbound Authentication
The basic premise of this use case is that an SSL Orchestrator security policy is built on top of a set of "stateless" Access per-session and per-request policies. Access Policy Manager (APM) is the module you use on a BIG-IP to perform client authentication, and this requires "stateful" per-session and per-request policies. Therefore, as an application virtual server can only contain ONE access policy, the APM and SSL Orchestrator policies cannot coexist. In other words, you cannot add APM authentication to an SSL Orchestrator virtual server (or an SSL Orchestrator security policy to an APM virtual server). SSL Orchestrator technically allows for authentication in outbound (forward proxy) topologies, because the explicit or transparent forward proxy authentication policy does not sit on the same virtual server as the SSL Orchestrator security policy. What we're focusing on here, though, is inbound (reverse proxy) authentication where there's generally just the one application virtual. There are fundamentally two ways to address this challenge:
- Layering virtual servers - often referred to as "VIP targeting", or "VIP-target-VIP", this is where one (external) virtual server uses an iRule command to push traffic to another internal virtual server. This is the simple approach. You put your authentication policy and client-side SSL offload on the external virtual, and an iRule to do the VIP targeting. The targeted internal virtual contains the SSL Orchestrator security policies, the application server pool, and optionally server SSL if you need to re-encrypt.
Figure: apm-sslo-vip-target
- Connector profile - a connector profile is a proxy element that was added to BIG-IP in 14.1, and that inserts itself in the client-side proxy flow after layer 5/6 (SSL decryption) and before layer 7 (HTTP). The connector is flow-based, so it can be assigned once at flow initiation. Essentially, the connector can "tee" traffic out of the original proxy flow, and then back. The connector itself points to an internal virtual server that can perform any number of functions before returning back to the original proxy flow.
Figure: apm-sslo-connector
For those of you that have spent any time digging around in the guts of an SSL Orchestrator configuration, you may recognize the connector profile. The connector was specifically created for SSL Orchestrator to handle third-party security service insertion. This is the thing that tees decrypted traffic off to the security devices. The connector is fundamentally an LTM object, but with LTM you can attach a single connector to a virtual server. In other words, you can attach a single security device to an LTM virtual server. SSL Orchestrator gives you dynamic assignment of multiple connectors (the service chain), a robust security policy (that attaches the flow to the service chain), dynamic decryption, and a guided configuration user interface to build all of this coolness. In the context of this use case, we'll attach a connector profile to the APM virtual that points to an internal virtual, and that internal virtual will contain the SSL Orchestrator security policy. It is important to note here that the following solutions will minimally require:
- LTM base + APM add-on
- SSL Orchestrator
You will be using the SSL Orchestrator "Existing Application" topology option here. This creates the security policy, services and service chain, without also creating the virtual servers and SSL. We'll leave application traffic management and decryption to the APM virtual server. Before I dig into each of these options, let's understand why you would select one over the other, as both have pros and cons. The VIP target solution is fundamentally easy. It's two virtual servers and a simple VIP target iRule. However, there's a tiny bit of overhead in a VIP target as you engage the TCP proxy twice. And with multiple applications, a VIP target isn't really re-usable. You have to create a separate virtual server pair for each application. The frontend virtual contains the client-facing destination IP, VLAN, client SSL, and iRule. The internal virtual contains the SSL Orchestrator security policy, application pool, and optional server SSL. The connector solution is re-usable. You simply attach the same connector profile to each application virtual server. It's also going to be slightly more efficient than the VIP target. However, the connector configuration is going to be more complex.
Inbound Authentication through VIP targeting
We will start with the easiest option first. Before doing anything else, navigate to SSL Orchestrator and create an "Existing Application" topology. Here you'll define the security services, service chain(s), and a security policy. On completion you'll have two "stateless" access policies that will get attached to one of the virtual servers.
Create a client SSL profile - assuming you're building an HTTPS site, you'll need a client SSL profile to perform HTTPS decryption.
Optionally create a server SSL profile - if you're going to re-encrypt to the application servers, you can either create a custom server SSL profile, or just use the built-in "serverssl" profile. Create the application pool - this is the pool that sends traffic to the application servers. Create the internal virtual server - this is the virtual server that will contain the SSL Orchestrator security policy and application pool. Type: Standard Source: 0.0.0.0/0 Destination: 0.0.0.0/0 Port: * SSL Profile (Server): optional server SSL profile VLAN and Tunnel Traffic: enabled on (empty) Source Address Translation: SNAT as required Address/Port Translation: enabled Access Profile: SSL Orchestrator base policy (ssloDefault_accessProfile) Per-Request Policy: select the SSL Orchestrator security policy Default Pool: select the application pool Create the VIP target iRule - the iRule will pass the flow from the external to the internal virtual server: when ACCESS_ACL_ALLOWED { ## Enter the full name and path of the internal virtual server here virtual "/Common/internal-vip" } Create the authentication per-session access policy - this is a standard APM authentication per-session access policy, and can be anything you need. Create the client-facing external virtual server - this is the application virtual server that the client will communicate with directly. Type: Standard Source: 0.0.0.0/0 Destination: enter the IP address clients will use to access the application Port: enter the port for this application SSL Profile (Client): select the client SSL profile VLAN and Tunnel Traffic: enabled on the client-facing VLAN Address/Port Translation: disabled Access Profile: APM authentication policy iRule: select the VIP target iRule That's it. Client traffic will arrive via HTTPS to the external virtual server, get decrypted by the client SSL profile, and then pass to the authentication access profile. The client authenticates, and then the iRule passes the flow to the internal virtual server. The internal virtual server contains the SSL Orchestrator security policy, so decrypted traffic flows to the security services, returns to the BIG-IP, and then flows out to the application servers. Inbound Authentication through a Connector The connector profile is at the heart of SSL Orchestrator and how it drives traffic to security devices. But we're going to use a connector here in a novel way. We're going to create a connector that points to an internal virtual server, and that virtual server will contain the SSL Orchestrator security policy (see image above). It is effectively "tee-ing" the traffic out of the original proxy flow, across the security stack, and then back into the flow. The beauty here is that, aside from being slightly more efficient than a VIP target, the connector is re-usable across multiple APM application virtual servers. It's worth noting here that the traffic to the SSL Orchestrator security policy will have already been decrypted at the APM virtual, so the security policy should not contain rules specific to TLS handling (i.e. SSL bypass). But as we're talking about inbound traffic, it's very likely you won't be needing any of that complexity in the security policy anyway. The objective of the security policy here is to pass decrypted traffic to a service chain of security devices. As with the VIP target approach, first create an SSL Orchestrator "Existing Application" topology. 
This creates the security policy, services and service chain, without also creating the virtual servers and SSL. Let's build the connector configuration, which includes three things: Create a Service profile - a Service profile essentially defines the type of connector, and how traffic is processed. Here we will be using the F5 Module service. Navigate to Local Traffic :: Profiles :: Other :: Service, and click create. Give it a name and select F5 Module as the Type. Create the internal virtual server - the internal virtual server will host the SSL Orchestrator security policy: Type: Internal HTTP Profile (client): http Service Profile: select the service profile Access Profile: select the SSL Orchestrator profile (ssloDefault_accessProfile) Per-Request Policy: select the SSL Orchestrator security policy Create the connector profile - navigate to Local Traffic :: Profiles :: Other :: Connector, and click create. Simply select the internal virtual server here. To make all of the above slightly easier, you can simply run the following commands in a BIG-IP shell: tmsh create ltm profile service sslo-service type f5-module tmsh create ltm virtual sslo-internal-vip internal profiles add { http sslo-service } tmsh create ltm profile connector sslo-connector entry-virtual-server sslo-internal-vip Note that prior to BIG-IP 16.0, the Access Profile selection won't be available in the UI for Internal virtual servers, but you can still add via TMSH: tmsh create ltm virtual sslo-internal-vip internal profiles add { http sslo-service ssloDefault_accessProfile } per-flow-request-access-policy [name of policy] Example: tmsh create ltm virtual sslo-internal-vip internal profiles add { http sslo-service ssloDefault_accessProfile } per-flow-request-access-policy ssloP_sslotest.app/ssloP_sslotest_per_req_policy Now just create your APM application virtual server as usual, including client-facing destination IP/port, VLAN, client (and optional server) SSL, APM authentication policy, SNAT (as required), and the application pool. On top of that, attach the connector profile in the field labeled Connector Profile. For each additional APM application virtual server, you can re-use this same connector profile. Client traffic will arrive via HTTPS to the APM virtual server, get decrypted by the client SSL profile, pass through the connector, and then to the authentication profile. Note that there's one other subtle difference between these methods that I didn't touch on earlier, and that's the order of events. In the VIP target option, authentication is attempted and completed before any traffic passes to the SSL Orchestrator security policy. So the security devices only see application traffic flows. In the connector option, authentication is engaged after the connector, so the security policy and devices see the entire authentication process. In this case, they will see the APM /my.policy redirects and the APM session cookies. Summary The connector profile presents a lot of really interesting capabilities, even beyond what we've seen here. For example, anywhere that you may have some mutual exclusivity, like APM and ASM policies on a virtual server, you could potentially use the connector attached to an APM virtual to pass traffic to a WAF policy. The connector basically gives you a single "tee" for free in LTM. For multiple connectors, dynamic connector assignment, dynamic decryption, and a robust policy to handle that assignment, you'd use the SSL Orchestrator. 
In either case, whether using the VIP target or connector approach to inbound authentication with SSL Orchestrator, hopefully you can see some of the immense versatility at your command.

Implementing SSL Orchestrator - Certificate Considerations
Introduction This article is part of a series on implementing BIG-IP SSL Orchestrator. It includes high availability and central management with BIG-IQ. Implementing SSL/TLS Decryption is not a trivial task. There are many factors to keep in mind and account for, from the network topology and insertion point, to SSL/TLS keyrings, certificates, ciphersuites and on and on. This article focuses on SSL certificates and everything you need to know about them. This article is divided into the following high level sections: Using OpenSSL Using Microsoft CA Importing a private key and certificate into SSL Orchestrator Manually Installing Certificates in browsers Creating a Certificate Signing Request (CSR) for Inbound Topology Using Group Policy Objects (GPO) to distribute certificates Please forgive me for using SSL and TLS interchangeably in this article. Software versions used in this article: BIG-IP Version: 14.1.2 SSL Orchestrator Version: 5.5 BIG-IQ Version: 7.0.1 Using OpenSSL OpenSSL can be used to sign a CSR.It can also be used to generate a self-signed certificate.When creating a CSR for production, you might need to use OpenSSL with a template in order to populate certain fields like the Digital Signature. This information is provided as a courtesy. OpenSSL contains an open-source implementation of the SSL and TLS protocols. The core library, written in the C programming language, implements basic cryptographic functions and provides various utility functions. Wrappers allowing the use of the OpenSSL library in a variety of computer languages are available. OpenSSL can be used to create private keys, certificates and more.Here’s an example of the syntax used to create a self-signed certificate: openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout privateKey.key -out certificate.crt Full instructions about how to use OpenSSL are beyond the scope of this article.However, the links below contain excellent information on usage: OpenSSL - Command Line Utilities SSLShopper- Common OpenSSL Commands Note: If you want to create your own OpenSSL Certificate Authority the following Dev/Central Article is excellent: Building an OpenSSL Certificate Authority - Creating Your Root Certificate Using Microsoft Certificate Authority This method is generally preferred to using self-signed certificates. Rather than reinvent the wheel, the Virtually There Blog does an excellent job of explaining the process to sign a CSR with a local Certificate Authority.Click the link below to learn more: VirtuallyThere - Signing a CSR with your Microsoft Certificate Authority Note: If you’re looking for information about how to setup your own local Microsoft CA see this previous blog: VirtuallyThere - Building a Microsoft Certificate Authority for your lab Note: the blog author has given f5 permission to include the links above. Installing signed certificate into SSL Orchestrator From the Configuration Utility > SSL Orchestrator > Certificate Management > Traffic Certificate Management.Click on the certificate created earlier (my_certificate). Click Import. Click Choose File and select the signed certificate from the CA.Click OK/Open.Click Import. Note: Using Certificate Chains or Subordinate CAs If using Certificate Chains be sure to include all intermediate certificates in the chain.For more information on Certificate Chains, see this Microsoft article. 
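To tie the OpenSSL and CA sections above together, here is a minimal command-line sketch that generates a private key and CSR, signs the CSR with a local OpenSSL CA, and then optionally imports the results from the BIG-IP shell instead of the GUI. The file names, subject values, and the ca.crt/ca.key pair are assumptions for illustration only; substitute your own values, and prefer your enterprise CA where one exists.

# Generate a 2048-bit private key and a CSR (example file names and subject)
openssl req -new -newkey rsa:2048 -nodes -keyout sslo.example.com.key -out sslo.example.com.csr -subj "/C=US/ST=WA/O=Example/CN=sslo.example.com"

# Sign the CSR with a local OpenSSL CA (assumes ca.crt and ca.key already exist)
openssl x509 -req -in sslo.example.com.csr -CA ca.crt -CAkey ca.key -CAcreateserial -sha256 -days 365 -out sslo.example.com.crt

# Optional alternative to the GUI import steps in the next section: install the key and certificate with tmsh
tmsh install sys crypto key sslo.example.com from-local-file /var/tmp/sslo.example.com.key
tmsh install sys crypto cert sslo.example.com from-local-file /var/tmp/sslo.example.com.crt

The tmsh object names above are arbitrary; once imported, the key and certificate appear under SSL Orchestrator > Certificate Management > Certificates and Keys just as if they had been imported through the Configuration Utility.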
Import private key and certificate into SSL Orchestrator
Follow the steps below if you already have the private key and certificate you want to use for SSL decryption. From the BIG-IP Configuration Utility click SSL Orchestrator > Certificate Management > Certificates and Keys. On the far right, click Import. For Import Type click Select. Different types of import options are available. For this example, select Key. Give it a name, in this example SSL.key. You can upload the key from a local file or paste it in as text. Choose the method you prefer and click Import when done. The example below shows the local file method. Click the name of the Key you created. Click Import. You can upload the certificate from a local file or paste it in as text. Choose the method you prefer and click Import when done. The example below shows the local file method. You have successfully imported the private key and certificate. Note: most Enterprise customers will have their own local Certificate Authority (CA).
Creating a Certificate Signing Request (CSR) for Inbound Topology
If you are creating an Inbound Topology you can use this method to create a CSR. From the F5 Configuration Utility go to SSL Orchestrator > Certificate Management > Certificates and Keys. Click Create on the top right. Give the certificate a name. For Issuer select Certificate Authority. Fill in the rest of the form. Click Finished when done. The page should look like the following. Click Download my_certificate to download it as a file. You can optionally copy the text output to the Clipboard. Download the CSR so it can be signed by your Local Certificate Authority.
Manually Installing Certificates in browsers
Certificates generated by SSL Orchestrator need to be trusted by the client computers. If using a Microsoft Certificate Authority (CA) to sign the SSL certificates, the clients will trust it automatically, assuming they are members of the same domain as the CA. If using self-signed certificates you need to install them in the Certificate store on all client computers. Most Enterprise customers won't do this in production but it's often used for testing or demos. Either way, it's important to know these procedures. Firefox has its own Certificate store. Click the icon on the top right then Preferences. Note: Firefox version 70.0.1 was used in the configuration below. Scroll to the bottom of the next screen. Under Security click View Certificates. Click Import. Find the Certificate on your computer. Select it and click Open. Select the option to Trust this CA to identify websites. Click OK. Internet Explorer/Edge and Chrome use the Windows Certificate store. Locate the Certificate on your computer and double click it. Click Install Certificate. Click Next at the Import Wizard. Select the option to Place all certificates in the following store. Click Browse. Select Trusted Root Certification Authorities then OK. Click Next. Click Finish. You should see a Security Warning like the following. Click Yes. Click OK to the Successful Import message.
Using GPO to distribute certificates
Microsoft has a variety of support articles and documentation for how to do this with GPO: Distribute Certificates to Client Computers by using Group Policy
Summary
In this article we covered the most common tasks associated with SSL certificates and how to use them with SSL decryption.
Next Steps
The next article in this series will cover the Guided Configuration component of SSL Orchestrator.

WAFaaS with SSL Orchestrator
Introduction Note: This article applies to SSL Orchestrator versions prior to 11.0. If using version 11.0 refer to the articleHERE This use case allows you to insert F5 WAF functionality as a Service in the SSL Orchestrator inspection zone. WAFaaS is the ability to insert ASM profiles into the SSL Orchestrator Service Chain for Inbound Topologies.This configuration is specific to a WAF policy running on the SSL Orchestrator device.WAF and SSL Orchestrator consume significant CPU cycles so care should be given when deploying both together.It is also possible to deploy WAF as a service on a separate BIG-IP device, in which case you’d simply configure an inline transparent proxy service.The ability to insert F5’s WAF into the Service Chain presents a significant customer benefit. This guide assumes you already have WAF/ASM profile(s) configured, licensed and provisioned on BIG-IP and wish to add this functionality to an Inbound Topology.In order to run WAF and SSL Orchestrator on the same device you will need an LTM license with SSL Orchestrator as an add-on option.You cannot add a WAF license to an SSL Orchestrator stand-alone license. SSL Orchestrator does not directly support inserting F5 WAF policies into the Service Chain.However, the F5 platform is flexible enough to handle many custom use cases.In this case, the ICAP service configuration exposes a framework that is useful for any number of specialized patterns, including adding a WAF policy to an SSLO service chain.We will configure an ICAP Service and attach the WAF policy to it. Steps: Create ICAP Service Disable Strictness on the Service Disable TCP monitor for the ICAP Pool ICAP Adapt profiles removed from the Virtual Server Application Security Policy enabled and a Policy assigned under Security Step #1: Create ICAP Service Note: These instructions assume an SSL Orchestrator Topology and Service Chain are already deployed and working properly.These instructions simply add WAFaaS to the existing Service Chain.It is entirely possible to create the WAFaaS during the initial Topology creation, in which case you would create the service during the workflow, then make the necessary changes after the topology has been created. From the SSL Orchestrator Guided Configuration click Services then Add Scroll to the bottom, select Generic ICAP Service and click Add Give it a name, WAFaaS in this example For ICAP Devices click Add on the right Enter an IP Address, 198.19.97.1 in this example and click Done. Note:the IP address you use does not have to be the one above.It’s just a local, non-routable address used as a placeholder in the service definition.This IP address will not be used. IP addresses 198.19.97.0 to 198.19.97.255 are owned by network benchmark tests and located in private networks. Scroll to the bottom and click Save & Next. The next screen is the Services Chain List.Click the name of the Service Chain you wish to add WAF functionality to, ssloSC_ServiceChain in this example. Note: The order of the Services in the Selected column is the order in which SSL Orchestrator will pass decrypted data to the device.This can be an important consideration if you want some devices to see, or not see, the actions taken by the WAF Service. Select the WAFaaS Service and click the right arrow to move it to Selected.Click Save. Click Save & Next Click Deploy You should receive a Success message Step #2: Disable Strictness on the Service From the SSL Orchestrator Configuration screen select Services.Click the padlock to Unprotect Configuration. 
Note: Disabling Strictness on the ICAP Service is needed to modify it and attach the WAFaaS policy. Strictness must remain disabled on this service, and disabling strictness on the service has no effect on any other part of the SSL Orchestrator configuration. Click OK to Unprotect the Configuration.
Step #3: Disable tcp monitor for the ICAP Pool
From Local Traffic select Pools > Pool List. Select the WAFaaS Pool. Under Active Health Monitors select tcp and click >> to move it to Available. This removes the Pool's Monitor because otherwise it would be marked as down or unavailable. Click Update. Note: The Health Monitor needs to be removed because there is no actual ICAP service to monitor.
Step #4: ICAP Adapt profiles removed from the Virtual Server
From Local Traffic select Virtual Servers > Virtual Server List. Locate the WAFaaS ICAP service virtual server that ends in "-t-4" and select it. Set the Request Adapt Profile and Response Adapt Profile to None to disable the default ICAP Profiles. Click Update.
Step #5: Application Security Policy enabled and a Policy assigned under Security
For the WAFaaS-t-4 Virtual Server click the Security tab. Set Application Security Policy to Enabled. Select the Security Policy you wish to use. Click Update when done.
Note: In specific versions of SSL Orchestrator there is one extra configuration item that needs to be modified. This is NOT required in other versions. If this change is made, it is not necessarily required to back it out when performing an upgrade.
Required versions:
SSLO version 5.9.15 available on TMOS 14.1.4
SSLO versions 6.0-6.5 available on TMOS 15.0.x
Navigate to Local Traffic ›› Profiles : Other : Service. Select the Service profile named "ssloS_WAFaaS-service". Change the "Type" from "ICAP" to "F5 Module".
Conclusion
The configuration is now complete. Using the WAFaaS this way is functionally the same as using it by itself. There are no known limitations to this configuration.

Implementing SSL Orchestrator - Guided Configuration
Introduction This article is part of a series on implementing BIG-IP SSL Orchestrator. It includes high availability and central management with BIG-IQ. Implementing SSL/TLS Decryption is not a trivial task. There are many factors to keep in mind and account for, from the network topology and insertion point, to SSL/TLS keyrings, certificates, ciphersuites and on and on. This article focuses on the SSL Orchestrator Guided Configuration and everything you need to know about it. This article is divided into the following high level sections: Configuration prerequisites Deployment Topology SSL certificate and key settings Service properties Security policy Interception rule Please forgive me for using SSL and TLS interchangeably in this article. Software versions used in this article: BIG-IP Version: 14.1.2 SSL Orchestrator Version: 5.5 BIG-IQ Version: 7.0.1 Configuration Prerequisites From the BIG-IP Configuration Utility click SSL Orchestrator > Configuration.This is the default landing page when SSL Orchestrator is not configured.The configuration options are presented on this page.Notice the Required Configuration settings on the top right.For DNS click the link to configure. Enter the IP address of the DNS server you wish to use and click Add.You can add multiple DNS servers.Click Update when done. Click SSL Orchestrator > Configuration.For NTP click the link to configure. Enter the IP address or hostname of the NTP server you wish to use and click Add.You can add multiple NTP servers.Click Update when done. Click SSL Orchestrator > Configuration.For Route click the link to configure. Name it.In this example it’s default_route.Enter the correct Destination and Netmask.In this example we’re using 0.0.0.0 as this is a default route.Enter the Gateway IP Address, in this example 10.0.0.1. Click Finished. Click SSL Orchestrator > Configuration.The Required Configuration section should look like the following. Deployment Topology We are now ready to begin the Guided Configuration.Click Next at the bottom. Choose the Topology you would like to deploy.In this example we will configure an L3 Outbound Topology. Name the Topology.For the Protocol choose Any.Select L3 Outbound then click Save & Next Note: some of the available Topologies might be greyed out if not supported by your platform.As an example, virtual machines don’t support L2. SSL Certificate and Key Settings Leave the Certificate Key Chain settings to their defaults. Edit the existing CA Certificate Key Chain by clicking the pencil icon. In a previous article you installed your own private key and certificate.Click the down arrow on the right to select that Certificate and Key.Click Done. Click Save & Next Notes: The difference between the Cert Key Chain and the CA Cert Key Chain: Certificate Key Chain – the certificate key chain represents the certificate and private key used as the “template” for forged server certificates. While re-issuing server certificates on-the-fly is generally easy, private key creation tends to be a CPU-intensive operation. For that reason, the underlying SSL Forward Proxy engine forges server certificates from a single defined private key. This setting gives customers the opportunity to apply their own template private key, and optionally store that key in a FIPS-certified HSM for additional protection. The built-in “default” certificate and private key uses 2K RSA and is generated from scratch when the BIG-IP system is installed. The pre-defined default.crt and default.key can be left as is. 
CA Certificate Key Chain – an SSL forward proxy must re-sign, or “forge” remote server certificate to local clients using a local certificate authority (CA) certificate, and local clients must trust this local CA. This setting defines the local CA certificate and private key used to perform the forging operation. Service Properties Click Add Service You can choose from many pre-defined templates from different security vendors.In this example select Palo Alto Networks NGFW Inline Layer 2 then click Add. Give it a name.Under Network Configuration click Add. Here you define the VLANS that the Palo Alto is connected to (or will be connected to).You can use existing ones or create new VLANS.We will create new ones by choosing the Create New radio button. Give each VLAN a unique name to help remember which device it’s connected to and in which direction data flows.Select the Interface from the drop-down menu.Click Done. Enable the Port Remap option and leave the port at 80. Click Save Notes: SSL Orchestrator allows for the insertion of additional iRule logic at different points. An iRule defined at the service only affects traffic flowing across this service. It is important to understand, however, that these iRules must not be used to control traffic flow (ex. pools, nodes, virtuals, etc.), but rather should be used to view/modify application layer protocol traffic. For example, an iRule assigned here could be used to view and modify HTTP traffic flowing to/from the service. Click Save & Next Click the button to Add a Service Chain List Give it a name.Click the arrow in the middle to move the Palo Alto Service to the Selected side.Click Save. Note: When you have multiple Services in a Service Chain you can adjust the order that they are used. Your screen should like the following.Click Save & Next. Security Policy The Security Policy is next.Notice you have the option to create new or use an existing one.By default, the policy should look like this. The Name is populated automatically but can be changed.If you click the pencil icon to the far right of the Pinners_Rule you can see the contents of the rule. The Pinners_Rule checks to make sure the content is SSL/TLS.It also checks the category “Pinners” which contains websites with Pinned Certificates.Sites in the category Pinners are automatically set to Bypass decryption.It is recommended to keep this setting. Notes: If you have a URL categorization database you can also bypass decryption based on website category. Conditions can be toggled between Match Any and Match All.Actions can be to Allow or Reject.Also note the Service Chain is bypassed by default.However, you can choose to send the encrypted content through the Security Chain. Click Cancel. The Pinners Category can be viewed/edited from SSL Orchestrator > Policies > URL Categories.Expand Custom Categories and you will see the Pinners category.Click Pinners (custom) to view and/or edit the sites. Back to the Security Policy.Click the pencil icon to the far right of the All Traffic rule to edit it. Set the Service Chain to the one created previously, in this case it’s ssloSC_SecurityServiceChain.Click OK. Click Save & Next at the bottom. Interception Rule Next is the Interception Rule.For Ingress Network select the VLAN that internal clients connect through.In this example select INTERNAL and click the arrow to move it to Selected.Note that you can also create VLANS from this screen. Click Save & Next at the bottom. 
Notes: L7 Interception Rules – FTP and email protocol traffic are all "server-speaks-first" protocols, and therefore SSL Orchestrator must process these separately from typical client-speaks-first protocols like HTTP. This selection enables processing of each of these protocols, which creates separate port-based listeners for each. As required, selectively enable the additional protocols that need to be decrypted and inspected through SSL Orchestrator.
Egress Settings
For the Egress Settings click Save & Next at the bottom. Last is the Summary screen. You can review and edit any of the Configurations we just went through. Click Deploy when done. The next screen should look like the image below. Click OK and you should see something like the following:
Notes: The Palo Alto Service is shown as DOWN in red. This is because we haven't configured it yet. We'll do that in the next article.
Summary
In this article you learned how to use the SSL Orchestrator Guided Configuration to do the following:
Configuration prerequisites
Deployment Topology
SSL certificate and key settings
Service properties
Security policy
Interception rule
Next Steps
Click Next to proceed to the next article in the series.

SSL Orchestrator Advanced Use Cases: Outbound SNAT Persistence
Introduction F5 BIG-IP is synonymous with "flexibility". You likely have few other devices in your architecture that provide the breadth of capabilities that come native with the BIG-IP platform. And for each and every BIG-IP product module, the opportunities to expand functionality are almost limitless. In this article series we examine the flexibility options of the F5 SSL Orchestrator in a set of "advanced" use cases. If you haven't noticed, the world has been steadily moving toward encrypted communications. Everything from web, email, voice, video, chat, and IoT is now wrapped in TLS, and that's a good thing. The problem is, malware - that thing that creates havoc in your organization, that exfiltrates personnel records to the Dark Web - isn't stopped by encryption. TLS 1.3 and multi-factor authentication don't eradicate malware. The only reasonable way to defend against it is to catch it in the act, and an entire industry of security products are designed for just this task. But ironically, encryption makes this hard. You can't protect against what you can't see. F5 SSL Orchestrator simplifies traffic decryption and malware inspection, and dynamically orchestrates traffic to your security stack. But it does much more than that. SSL Orchestrator is built on top of F5's BIG-IP platform, and as stated earlier, is abound with flexibility. SSL Orchestrator Use Case: Outbound SNAT Persistence It may not be the most obvious thing to think about persistence in the vein of outbound traffic. We are all groomed to accept that any given load balancer can handle persistence (or "affinity", or "stickiness") to backend servers. This is an important characteristic for sure. But in an outbound scenario, you don't load balance remote servers, so why on Earth would you need persistence? Well, I'm glad you asked. There indeed happens to be a somewhat unique, albeit infrequent use case where two different servers need to persist on YOUR IP address. The classic example is a site that requires federated authentication, where the service provider (SP) generates a token (perhaps a SAML auth request) and inside of that request the SP has embedded the client IP. The client receives this message and is redirected to the IdP to authenticate. But in this case the client is talking to the outside world through a forward proxy, and outbound source NAT (SNAT) could be required in this environment. That means there's a potential that the client IP address as seen from the two remote servers could be different. So if the IdP needs to verify the client IP based on what's embedded in the authentication request token, that could possibly fail. The good news here is that federated authentication doesn't normally require client IP verification, and there aren't many other similar use cases, but it can happen. The F5 BIG-IP, as with ANY proxy server, load balancer, or ADC device, clearly supports server affinity, and in a highly flexible way. But, as with ANY proxy server, load balancer, or ADC device, that doesn't apply to SNAT addresses. Nevertheless, the F5 BIG-IP can be configured to do this, which is exactly what this article is about. We're going to flex some BIG-IP muscle to derive a unique and innovative way to enable outbound SNAT persistence. What we're basically talking about is ensuring that a single internal client persists a single outbound SNAT IP address, when and where needed, and as long as possible. 
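As a concrete preview of the configuration described next, here is a minimal tmsh sketch of the SNAT pool you'll create. The pool name and member addresses are examples only; they must live in the same subnet as your outbound VLAN self-IP, and they must match the addresses you later hardcode in the iRule.

# Example only: create a SNAT pool with addresses in the outbound VLAN subnet
tmsh create ltm snatpool outbound-snat-pool members add { 10.1.20.50 10.1.20.51 10.1.20.52 10.1.20.53 10.1.20.54 }
tmsh save sys config

The pool itself is never attached to a virtual server. As explained below, it exists so the BIG-IP answers ARP for those addresses, while the iRule makes the per-client selection.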
It's important to note here that we're not really talking about persistence in the same way you think about load balanced server affinity. With affinity, you're stapling a single (remote) client "session" to a single load balanced server. With SNAT persistence, you're stapling a single outbound SNAT IP to a single internal client so that all remote servers see that same source address. Same-same but different-different. To do this we'll need a SNAT pool and an iRule. We need the SNAT pool to define the SNAT addresses we can use. And since SNAT pools don't provide a persistence option like regular pools do, we'll use an iRule to provide the stickiness. It's also worth noting here, again since we're not really talking about load balancing stickiness, that the IP persistence mechanism in the iRule may not (likely will not) evenly distribute the IPs in the SNAT pool. Your best bet is to provide as many SNAT pool IPs as possible and reasonable. The good news here is that, because you're using a BIG-IP, you can define exactly how you assert that IP stickiness. In most cases, you'll probably just want to persist on the internal client IP, but you could also persist on:
Client source address and remote server port
Client source address and remote destination addresses
Client source, day of the week, the year+month+day % mod 2, a hash of the word-of-the-day...and hopefully you get the idea. Lots of options.
To make this work, let's start with the SNAT pool. Navigate to Local Traffic -> Address Translation -> SNAT Pool List in the BIG-IP and click Create. In the Member List section, add as many SNAT IPs as you can afford. Remember, these are going to be IPs on your outbound VLAN, so in the same subnet as your outbound VLAN self-IP. Figure: SNAT pool list
You don't need to assign the SNAT pool to anything directly. The iRule will handle that. And now onto the iRule. Navigate to Local Traffic -> iRules -> iRule List in the BIG-IP, and click Create. Copy the following into the iRule editor:

when RULE_INIT {
    ## This iRule should be applied to your SSLO interception rule ending with in-t-4.
    catch { unset -nocomplain static::snat_ips }

    ## For each SNAT IP needed, define the IP here rather than dynamically looking it up.
    ## These addresses need to be in the real SNAT pool as well so ARP works.
    set static::snat_ips(1) 10.1.20.50
    set static::snat_ips(2) 10.1.20.51
    set static::snat_ips(3) 10.1.20.52
    set static::snat_ips(4) 10.1.20.53
    set static::snat_ips(5) 10.1.20.54

    ## Set to how many SNAT IPs were added
    set static::array_size 5
}
when CLIENT_ACCEPTED priority 100 {
    ## Select and uncomment only ONE of the below SNAT persistence options.
    ## The modulus yields 0 to (array_size - 1), so add 1 to map onto the array keys 1 through array_size.

    ## Persist SNAT based on client address only
    snat $static::snat_ips([expr {([crc32 [IP::client_addr]] % $static::array_size) + 1}])

    ## Persist SNAT based on client address and remote port
    #snat $static::snat_ips([expr {([crc32 [IP::client_addr] [TCP::remote_port]] % $static::array_size) + 1}])

    ## Persist SNAT based on client address and remote address
    #snat $static::snat_ips([expr {([crc32 [IP::client_addr] [IP::local_addr]] % $static::array_size) + 1}])
}

Let's take a moment to explain what this iRule is actually doing, and it is fairly straightforward. In RULE_INIT, which fires ONCE when you update the iRule, the SNAT addresses (the same ones you placed in the SNAT pool) are defined in an array. Then a second static variable is created to store the size of the array. These values are stored as static, global variables.
In CLIENT_ACCEPTED we set a priority of 100 to control the order of execution under SSL Orchestrator, as there is already a CLIENT_ACCEPTED iRule event on the topology (we want our new event to run first). Below that you're provided with three choices for persistence: persist on source IP only, source IP and destination port, or source IP and destination IP. You'll want to uncomment only ONE of these. Each basically performs a quick CRC hash on the selected value, then calculates a modulus based on the array size. This returns a number between 0 and one less than the array size, which is shifted by 1 and then applied as the index to the array to extract one of the array values. This calculation is always the same for the same input value(s), so effectively persisting on that value. The selected SNAT IP is then fed to the 'snat' command, and there you have it. As stated, you're probably only going to need the source-only persistence option. Using either of the others will pin a SNAT IP to a client IP and protocol port (ex. client IP:443 or client IP:80), or pin a SNAT IP to a specific host (ex. client IP:www.example.com), respectively. At the end of the day, you can insert any reasonable expression that will result in the selection of one of the values in the SNAT pool array, so the sky is really the limit here.
The last step is easiest of all. You need to attach this iRule to your SSL Orchestrator topology. To do that, navigate to SSL Orchestrator -> Configuration in the UI, select the Interception Rules tab, and click to edit the respective outbound interception rule. Scroll to the bottom of this page, and under Resources, add the new iRule to the Selected column. The order doesn't matter. Click Deploy to complete the change, and you're done.
You can do a packet capture on your outbound VLAN to see what is happening.

tcpdump -lnni [outbound vlan] host 93.184.216.34

And then access https://www.example.com to test. For your IP address you should see a consistent outgoing SNAT IP. If you have access to a Linux client, you can add multiple IP addresses to an interface and test with each:

ifconfig eth0:1 10.1.10.51
ifconfig eth0:2 10.1.10.52
ifconfig eth0:3 10.1.10.53
ifconfig eth0:4 10.1.10.54
ifconfig eth0:5 10.1.10.55

curl -vk https://www.example.com --interface 10.1.10.51
curl -vk https://www.example.com --interface 10.1.10.52
curl -vk https://www.example.com --interface 10.1.10.53
curl -vk https://www.example.com --interface 10.1.10.54
curl -vk https://www.example.com --interface 10.1.10.55

And again there you have it. In just a few steps you've been able to enable outbound SNAT persistence, and along the way you have hopefully recognized the immense flexibility at your command.

SSL Orchestrator Advanced Use Cases: Reducing Complexity with Internal Layered Architecture
Introduction Sir Isaac Newton said, "Truth is ever to be found in the simplicity, and not in the multiplicity and confusion of things". The world we live in is...complex. No getting around that. But at the very least, we should strive for simplicity where we can achieve it. As IT folk, we often find ourselves mired in the complexity of things until we lose sight of the big picture, the goal. How many times have you created an additional route table entry, or firewall exception, or virtual server, because the alternative meant having a deeper understanding of the existing (complex) architecture? Sure, sometimes it's unavoidable, but this article describes at least one way that you can achieve simplicity in your architecture. SSL Orchestrator sits as an inline point of presence in the network to decrypt, re-encrypt, and dynamically orchestrate that traffic to the security stack. You need rules to govern how to handle specific types of traffic, so you create security policy rules in the SSL Orchestrator configuration to describe and take action on these traffic patterns. It's definitely easy to create a multitude of traffic rules to match discrete conditions, but if you step back and look at the big picture, you may notice that the different traffic patterns basically all perform the same core actions. They allow or deny traffic, intercept or bypass TLS (decrypt/not-decrypt), and send to one or a few service chains. If you were to write down all of the combinations of these actions, you'd very likely discover a small subset of discrete "functions". As usual, F5 BIG-IP and SSL Orchestrator provide some innovative and unique ways to optimize this. And so in this article we will explore SSL Orchestrator topologies "as functions" to reduce complexity. Specifically, you can reduce the complexity of security policy rules, and in doing so, quite likely increase the performance of your SSL Orchestrator architecture. SSL Orchestrator Use Case: Reducing Complexity with Internal Layered Architectures The idea is simple. Instead of a single topology with a multitude of complex traffic pattern matching rules, create small atomic topologies as static functions and steer traffic to the topologies by virtue of "layered" traffic pattern matching. Granted, if your SSL Orchestrator configuration is already pretty simple, then please keep doing what you're doing. You've got this, Tiger. But if your environment is getting complex, and you're not quite convinced yet that topologies as functions is a good idea, here are a few additional benefits you'll get from this topology layering: Dynamic egress selection: topologies as functions can define different egress paths. Dynamic CA selection: topologies as functions can use different local issuing CAs for different traffic flows. Dynamic traffic bypass: certain types of traffic can be challenging to handle internally. For example, mutual TLS traffic can be bypassed globally with the "Bypass on client cert failure" option in the SSL configuration, but bypassing mutual TLS sites by hostname is more complex. A layered architecture can steer traffic (by SNI) through a bypass topology, with a service chain. More flexible pattern recognition: for all of its flexibility, SSL Orchestrator security policy rules cannot catch every possible use case. External traffic pattern recognition, via iRules or CPM (LTM policies) offer near infinite pattern matching options. 
You could, for example, steer traffic based on incoming tenant VLAN or route domain for multi-tenancy configurations. More flexible automation strategies: as iRules, data groups, and CPM policies are fully automate-able across many AnO platforms (ex. AS3, Ansible, Terraform, etc.), it becomes exceedingly easy to automate SSL Orchestrator traffic processing, and removes the need to manage individual topology security policy rules. Hopefully these benefits give you a pretty clear indication of the value in this architecture strategy. So without further ado, let's get started. Configuration Before we begin, I'd like to make note of the following caveats: While every effort has been made to simplify the layered architecture, there is still a small element of complexity. If you are at all averse to creating virtual servers or modifying iRules, then maybe this isn't for you. But as you are reading this in a forum dedicated to programmability, I'm guessing you the reader are ready for a challenge. This is a "field contributed" solution, so not officially supported by F5. This topology layering architecture is applicable to all modern versions of SSL Orchestrator, from 5.0 to 8.0. While topology layering can be used for inbound topologies, it is most appropriate for outbound. The configuration below also only describes the layer 3 implementation. But layer 2 layering is also possible. With this said, there are just a few core concepts to understand: Basic layered architecture configuration - how the pieces fit together The iRules - how traffic moves through the architecture Or the CPM policies - an alternative to iRules Note again that this is primarily useful in outbound topologies. Inbound topologies are typically more atomic on their own already. I will cover both transparent and explicit forward proxy configurations below. Basic layered architecture configuration A layered architecture takes advantage of a powerful feature of the BIG-IP called "VIP targeting". The idea is that one virtual server calls another with negligible latency between the two VIPs. The "external" virtual server is client-facing. The SSL Orchestrator topology virtual servers are thus "internal". Traffic enters the external VIP and traffic rules pass control to any of a number of internal "topology function" VIPs. You certainly don't have to use the iRule implementation presented here. You just need a client-facing virtual server with an iRule that VIP-targets to one or more SSL Orchestrator topologies. Each outbound topology is represented by a virtual server that includes the application server name. You can see these if you navigate to Local Traffic -> Virtual Servers in the BIG-IP UI. So then the most basic topology layering architecture might just look like this: when CLIENT_ACCEPTED { virtual "/Common/sslo_my_topology.app/sslo_my_topology-in-t-4" } This iRule doesn't do anything interesting, except get traffic flowing across your layered architecture. To be truly useful you'll want to include conditions and evaluations to steer different types of traffic to different topologies (as functions). As the majority of security policy rules are meant to define TLS traffic patterns, the provided iRules match on TLS traffic and pass any non-TLS traffic to a default (intercept/inspection) topology. These iRules are intended to simplify topology switching by moving all of the complexity of traffic pattern matching to a library iRule. 
You should then only need to modify the "switching" iRule to use the functions in the library, which all return Boolean true or false results. Here are the simple steps to create your layered architecture: Step 1: Build a set of "dummy" VLANs. A topology must be bound to a VLAN. But since the topologies in this architecture won't be listening on client-facing VLANs, you will need to create a separate VLAN for each topology you intend to create. A dummy VLAN is a VLAN with no interface assigned. In the BIG-IP UI, under Network -> VLANs, click Create. Give your VLAN a name and click Finished. It will ask you to confirm since you're not attaching an interface. Repeat this step by creating unique VLAN names for each topology you are planning to use. Step 2: Build a set of static topologies as functions. You'll want to create a normal "intercept" topology and a separate "bypass" topology, though you can create as many as you need to encompass the unique topology functions. Your intercept topology is configured as such: L3 outbound topology configuration, normal topology settings, SSL configuration, services, service chains No security policy rules - just the ALL rule with TLS intercept action (and service chain), and optionally remove the built-in Pinners rule Attach to a dummy VLAN (a VLAN with no assigned interfaces) Your bypass topology should then look like this: L3 outbound topology configuration, skip the SSL Configuration settings, optionally re-use services and service chains No security policy rules - just the ALL rule with TLS bypass action (and service chain) Attach to a separate dummy VLAN (a VLAN with no assigned interfaces) Note the name you use for each topology, as this will be called explicitly in the iRule. For example, if you name the topology "myTopology", that's the name you will use in each "call SSLOLIB::target" function (more on this in a moment) . If you look in the SSL Orchestrator UI, you will see that it prepends "sslo_" (ex. sslo_myTopology). Don't include the "sslo_" portion in the iRule. Step 3: Import the SSLOLIB iRule (attached here). Name it "SSLOLIB". This is the library rule, so no modifications are needed. The functions within (as described below) will return a true or false, so you can mix these together in your switching rule as needed. Step 4: Import the traffic switching iRule (attached here). You will modify this iRule as required, but the SSLOLIB library rule makes this very simple. Step 5: Create your external layered virtual server. This is the client-facing virtual server that will catch the user traffic and pass control to one of the internal SSL Orchestrator topology listeners. Type: Standard Source: 0.0.0.0/0 Destination: 0.0.0.0/0 Service Port: 0 Protocol: TCP VLAN: client-facing VLAN Address Translation: disabled Port Translation: disabled Default Persistence Profile: ssl iRule: the traffic switching iRule Note that the ssl persistence profile is enabled here to allow the iRules to handle client side SSL traffic without SSL profiles attached. Also make sure that Address and Port Translation are disabled before clicking Finished. Step 6: Modify the traffic switching iRule to meet your traffic matching requirements (see below). You have the basic layered architecture created. The only remaining step is to modify the traffic switching iRule as required, and that's pretty easy too. The iRules I'll repeat, there are near infinite options here. 
At the very least you need to VIP target from the external layered VIP to at least one of the SSL Orchestrator topology VIPs. The iRules provided here have been cultivated to make traffic selection and steering as easy as possible by pushing all of the pattern functions to a library iRule (SSLOLIB). The idea is that you will call a library function for a specific traffic pattern and if true, call a separate library function to steer that flow to the desired topology. All of the build instructions are contained inside the SSLOLIB iRule, with examples. SSLOLIB iRule: https://github.com/f5devcentral/sslo-script-tools/blob/main/internal-layered-architecture/transparent-proxy/SSLOLIB Switching iRule: https://github.com/f5devcentral/sslo-script-tools/blob/main/internal-layered-architecture/transparent-proxy/sslo-layering-rule The function to steer to a topology (SSLOLIB::target) has three parameters: <topology name>: this is the name of the desired topology. Use the basic topology name as defined in the SSL Orchestrator configuration (ex. "intercept"). ${sni}: this is static and should be left alone. It's used to convey the SNI information for logging. <message>: this is a message to send to the logs. In the examples, the message indicates the pattern matched (ex. "SRCIP"). Note, include an optional 'return' statement at the end to cancel any further matching. Without the 'return', the iRule will continue to process matches and settle on the value from the last evaluation. Example (sending to a topology named "bypass"): call SSLOLIB::target "bypass" ${sni} "DSTIP" ; return There are separate traffic matching functions for each pattern: SRCIP IP:<ip/subnet> SRCIP DG:<data group name> (address-type data group) SRCPORT PORT:<port/port-range> SRCPORT DG:<data group name> (integer-type data group) DSTIP IP:<ip/subnet> DSTIP DG:<data group name> (address-type data group) DSTPORT PORT:<port/port-range> DSTPORT DG:<data group name> (integer-type data group) SNI URL:<static url> SNI URLGLOB:<glob match url> (ends_with match) SNI CAT:<category name or list of categories> SNI DG:<data group name> (string-type data group) SNI DGGLOB:<data group name> (ends_with match) Examples: # SOURCE IP if { [call SSLOLIB::SRCIP IP:10.1.0.0/16] } { call SSLOLIB::target "bypass" ${sni} "SRCIP" ; return } if { [call SSLOLIB::SRCIP DG:my-srcip-dg] } { call SSLOLIB::target "bypass" ${sni} "SRCIP" ; return } # SOURCE PORT if { [call SSLOLIB::SRCPORT PORT:5000] } { call SSLOLIB::target "bypass" ${sni} "SRCPORT" ; return } if { [call SSLOLIB::SRCPORT PORT:1000-60000] } { call SSLOLIB::target "bypass" ${sni} "SRCPORT" ; return } # DESTINATION IP if { [call SSLOLIB::DSTIP IP:93.184.216.34] } { call SSLOLIB::target "bypass" ${sni} "DSTIP" ; return } if { [call SSLOLIB::DSTIP DG:my-destip-dg] } { call SSLOLIB::target "bypass" ${sni} "DSTIP" ; return } # DESTINATION PORT if { [call SSLOLIB::DSTPORT PORT:443] } { call SSLOLIB::target "bypass" ${sni} "DSTPORT" ; return } if { [call SSLOLIB::DSTPORT PORT:443-9999] } { call SSLOLIB::target "bypass" ${sni} "DSTPORT" ; return } # SNI URL match if { [call SSLOLIB::SNI URL:www.example.com] } { call SSLOLIB::target "bypass" ${sni} "SNIURLGLOB" ; return } if { [call SSLOLIB::SNI URLGLOB:.example.com] } { call SSLOLIB::target "bypass" ${sni} "SNIURLGLOB" ; return } # SNI CATEGORY match if { [call SSLOLIB::SNI CAT:$static::URLCAT_list] } { call SSLOLIB::target "bypass" ${sni} "SNICAT" ; return } if { [call SSLOLIB::SNI CAT:/Common/Government] } { call SSLOLIB::target "bypass" ${sni} "SNICAT" ; 
return } # SNI URL DATAGROUP match if { [call SSLOLIB::SNI DG:my-sni-dg] } { call SSLOLIB::target "bypass" ${sni} "SNIDGGLOB" ; return } if { [call SSLOLIB::SNI DGGLOB:my-sniglob-dg] } { call SSLOLIB::target "bypass" ${sni} "SNIDGGLOB" ; return } To combine these, you can use simple AND|OR logic. Example: if { ( [call SSLOLIB::DSTIP DG:my-destip-dg] ) and ( [call SSLOLIB::SRCIP DG:my-srcip-dg] ) } Finally, adjust the static configuration variables in the traffic switching iRule RULE_INIT event: ## User-defined: Default topology if no rules match (the topology name as defined in SSLO) set static::default_topology "intercept" ## User-defined: DEBUG logging flag (1=on, 0=off) set static::SSLODEBUG 0 ## User-defined: URL category list (create as many lists as required) set static::URLCAT_list { /Common/Financial_Data_and_Services /Common/Health_and_Medicine } CPM policies LTM policies (CPM) can work here too, but with the caveat that LTM policies do not support URL category lookups. You'll probably want to either keep the Pinners rule in your intercept topologies, or convert the Pinners URL category to a data group. A "url-to-dg-convert.sh" Bash script can do that for you. url-to-dg-convert.sh: https://github.com/f5devcentral/sslo-script-tools/blob/main/misc-tools/url-to-dg-convert.sh As with iRules, infinite options exist. But again for simplicity here is a good CPM configuration. For this you'll still need a "helper" iRule, but this requires minimal one-time updates. when RULE_INIT { ## Default SSLO topology if no rules match. Enter the name of the topology here set static::SSLO_DEFAULT "intercept" ## Debug flag set static::SSLODEBUG 0 } when CLIENT_ACCEPTED { ## Set default topology (if no rules match) virtual "/Common/sslo_${static::SSLO_DEFAULT}.app/sslo_${static::SSLO_DEFAULT}-in-t-4" } when CLIENTSSL_CLIENTHELLO { if { ( [POLICY::names matched] ne "" ) and ( [info exists ACTION] ) and ( ${ACTION} ne "" ) } { if { $static::SSLODEBUG } { log -noname local0. "SSLO Switch Log :: [IP::client_addr]:[TCP::client_port] -> [IP::local_addr]:[TCP::local_port] :: [POLICY::rules matched [POLICY::names matched]] :: Sending to $ACTION" } virtual "/Common/sslo_${ACTION}.app/sslo_${ACTION}-in-t-4" } } The only thing you need to do here is update the static::SSLO_DEFAULT variable to indicate the name of the default topology, for any traffic that does not match a traffic rule. For the comparable set of CPM rules, navigate to Local Traffic -> Policies in the BIG-IP UI and create a new CPM policy. Set the strategy as "Execute First matching rule", and give each rule a useful name as the iRule can send this name in the logs. For source IP matches, use the "TCP address" condition at ssl client hello time. For source port matches, use the "TCP port" condition at ssl client hello time. For destination IP matches the "TCP address" condition at ssl client hello time. Click on the Options icon and select "Local" and "External". For destination port matches the "TCP port" condition at ssl client hello time. Click on the Options icon and select "Local" and "External". For SNI matches, use the "SSL Extension server name" condition at ssl client hello time. For each of the conditions, add a simple "Set variable" action as ssl client hello time. Name the variable "ACTION" and give it the name of the desired topology. Apply the helper iRule and CPM policy to the external traffic steering virtual server. 
The "first" matching rule strategy is applied here, and all rules trigger on ssl client hello, so you can drag them around and re-order as necessary. Note again that all of the above only evaluates TLS traffic. Any non-TLS traffic will flow through the "default" topology that you identify in the iRule. It is possible to re-configure the above to evaluate HTTP traffic, but honestly the only significant use case here might be to allow or drop traffic at the policy. Layered architecture for an explicit forward proxy You can use the same logic to support an explicit proxy configuration. The only difference will be that the frontend layered virtual server will perform the explicit proxy functions. The backend SSL Orchestrator topologies will continue to be in layer 3 outbound (transparent proxy) mode. Normally SSL Orchestrator would build this for you, but it's pretty easy and I'll show you how. You could technically configure all of the SSL Orchestrator topologies as explicit proxies, and configure the client facing virtual server as a layer 3 pass-through, but that adds unnecessary complexity. If you also need to add explicit proxy authentication, that is done in the one frontend explicit proxy configuration. Use the settings below to create an explicit proxy LTM configuration. If not mentioned, settings can be left as defaults. Under SSL Orchestrator -> Configuration in the UI, click on the gear icon in the top right corner. This will expose the DNS resolver configuration. The easiest option here is to select "Local Forwarding Nameserver" and then enter the IP address of the local DNS service. Click "Save & Next" and then "Deploy" when you're done. Under Network -> Tunnels in the UI, click Create. This will create a TCP tunnel for the explicit proxy traffic. Profile: select tcp-forward Under Local Traffic -> Profiles -> Services -> HTTP in the UI, click Create. This will create the HTTP explicit proxy profile. Proxy Mode: Explicit Explicit Proxy: DNS Resolver: select the ssloGS-net-resolver Explicit Proxy: Tunnel Name: select the TCP tunnel created earlier Under Local Traffic -> Virtual Servers, click Create. This will create the client-facing explicit proxy virtual server. Type: Standard Source: 0.0.0.0/0 Destination: enter an IP the client can use to access the explicit proxy interface Service Port: enter the explicit proxy listener port (ex. 3128, 8080) HTTP Profile: HTTP explicit profile created earlier VLANs and Tunnel Traffic: set to "Enable on..." and select the client-facing VLAN Address Translation: enabled Port Translation: enabled Under Local Traffic -> Virtual Servers, click Create again. This will create the TCP tunnel virtual server. Type: Standard Source: 0.0.0.0/0 Destination: 0.0.0.0/0 Service Port: * VLANs and Tunnel Traffic: set to "Enable on..." and select the TCP tunnel created earlier Address Translation: disabled Port Translation: disabled iRule: select the SSLO switching iRule Default Persistence Profile: select ssl Note, make sure that Address and Port Translation are disabled before clicking Finished. Under Local Traffic -> iRules, click Create. This will create a small iRule for the explicit proxy VIP to forward non-HTTPS traffic through the TCP tunnel. Change "<name-of-TCP-tunnel-VIP>" to reflect the name of the TCP tunnel virtual server created in the previous step. when HTTP_REQUEST { virtual "/Common/<name-of-TCP-tunnel-VIP>" [HTTP::proxy addr] [HTTP::proxy port] } Add this new iRule to the explicit proxy virtual server. 
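If you prefer the command line, the following is a hedged tmsh sketch of the explicit proxy objects described in the GUI steps above. The object names, listener IP/port, and VLAN name are assumptions for illustration; the DNS resolver name (ssloGS-net-resolver) comes from the resolver step earlier, and exact profile attribute names can vary slightly by TMOS version.

# TCP tunnel for explicit proxy traffic
tmsh create net tunnels tunnel sslo-explicit-tunnel profile tcp-forward

# Explicit HTTP proxy profile tied to the resolver and tunnel
tmsh create ltm profile http sslo-http-explicit defaults-from http-explicit proxy-type explicit explicit-proxy { dns-resolver ssloGS-net-resolver tunnel-name sslo-explicit-tunnel }

# Client-facing explicit proxy virtual server (example IP, port 3128, client VLAN)
tmsh create ltm virtual sslo-explicit-proxy-vip destination 10.1.10.150:3128 ip-protocol tcp profiles add { tcp sslo-http-explicit } vlans-enabled vlans add { client-vlan }

The TCP tunnel virtual server, the switching iRule assignment, and the small HTTP_REQUEST forwarding iRule above still need to be created as described in the steps.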
To test, point your explicit proxy client at the IP address and port defined above and give it a go. HTTP and HTTPS explicit proxy traffic arriving at the explicit proxy VIP will flow into the TCP tunnel VIP, where the SSLO switching rule will process traffic patterns and send to the appropriate backend SSL Orchestrator topology-as-function.
Testing and Considerations
Assuming you have the default topology defined in the switching iRule's RULE_INIT, and no traffic matching rules defined, all traffic from the client should pass effortlessly through that topology. If it does not:
Ensure the name defined in the static::default_topology variable is the actual name of the topology, without the prepended "sslo_".
Enable debug logging in the iRule and observe the LTM log (/var/log/ltm) for any anomalies.
Worst case, remove the client facing VLAN from the frontend switching virtual server and attach it to one of your topologies, effectively bypassing the layered architecture. If traffic does not pass in this configuration, then it cannot in the layered architecture. You need to troubleshoot the SSL Orchestrator topology itself. Once you have that working, put the dummy VLAN back on the topology and add the client facing VLAN to the switching virtual server.
Considerations
The above provides a unique way to solve for complex architectures. There are however a few minor considerations:
This is defined outside of SSL Orchestrator, so it would not be included in the internal HA sync process. However, this architecture places very little administrative burden on the topologies directly. It is recommended that you create and sync all of the topologies first, then create the layered virtual server and iRules, and then manually sync the boxes.
If you make any changes to the switching iRule (or CPM policy), that should not affect the topologies. You can initiate a manual BIG-IP HA sync to copy the changes to the peer.
If upgrading to a new version of SSL Orchestrator (only), no additional changes are required. If upgrading to a new BIG-IP version, it is recommended to break HA (SSL Orchestrator 8.0 and below) before performing the upgrade. The external switching virtual server and iRules should migrate natively.
Summary
And there you have it. In just a few steps you've been able to reduce complexity and add capabilities, and along the way you have hopefully recognized the immense flexibility at your command.