SSL Orchestrator
Implementing SSL Orchestrator - High Level Considerations
Introduction
This article is the beginning of a multi-part series on implementing BIG-IP SSL Orchestrator. It includes high availability and central management with BIG-IQ. Implementing SSL/TLS decryption is not a trivial task. There are many factors to keep in mind and account for, from the network topology and insertion point, to SSL/TLS keyrings, certificates, cipher suites, and on and on. This article focuses on pre-deployment tasks and preparations for SSL Orchestrator, and is divided into the following high-level sections:
Solution Overview
Customer Use Case
Architecture & Network Topology
Please forgive me for using SSL and TLS interchangeably in this article. Software versions used in this article:
BIG-IP Version: 14.1.2
SSL Orchestrator Version: 5.5
BIG-IQ Version: 7.0.1
Solution Overview
Data transiting between clients (PCs, tablets, phones, etc.) and servers is predominantly encrypted with Secure Sockets Layer (SSL) and its evolution, Transport Layer Security (TLS) (ref. Google Transparency Report). Pervasive encryption means that threats are now predominantly hidden and invisible to security inspection unless traffic is decrypted. The decryption and encryption of data by different devices performing security functions potentially adds overhead and latency. The picture below shows a traditional chaining of security inspection devices such as a filtering web gateway, a data loss prevention (DLP) tool, an intrusion detection system (IDS), and a next-generation firewall (NGFW). Also, TLS/SSL operations are computationally intensive and stress the security devices' resources. This leads to sub-optimal usage of resources, where compute time is spent encrypting/decrypting rather than inspecting. F5's BIG-IP SSL Orchestrator offers a solution to optimize resource utilization, remove latency, and add resilience to the security inspection infrastructure. F5 SSL Orchestrator ensures encrypted traffic can be decrypted, inspected by security controls, then re-encrypted—delivering enhanced visibility to mitigate threats traversing the network. As a result, you can maximize your security services investment for malware, data loss prevention (DLP), ransomware, and next-generation firewalls (NGFW), thereby preventing inbound and outbound threats, including exploitation, callback, and data exfiltration. The SSL Orchestrator decrypts the traffic and forwards unencrypted traffic to the different security devices for inspection, leveraging its optimized and hardware-accelerated SSL/TLS stack. As shown below, the BIG-IP SSL Orchestrator classifies traffic and selectively decrypts it, forwards it to the appropriate security functions for inspection, and finally, once duly inspected, re-encrypts the traffic and sends it on its way to the resource the client is accessing. Deploying F5 and inline security tools together has the following benefits:
Traffic Distribution for load sharing - Improve the scalability of inline security by distributing the traffic across multiple security appliances, allowing them to share the load and inspect more traffic.
Agile Deployment - Add, remove, and/or upgrade security appliances without disrupting network traffic; convert security appliances from out-of-band monitoring to inline inspection on the fly without rewiring.
Customer Use Case
This document focuses on the implementation of BIG-IP SSL Orchestrator to process SSL/TLS encrypted traffic and forward it to security inspection/enforcement devices. The decryption and forwarding behavior are determined by the security policy.
This ensures that only targeted traffic is decrypted in compliance with corporate and regulatory policy, data privacy requirements, and other relevant factors. The configuration supports encrypted traffic that originates from within the data center or the corporate network. It also supports traffic originating from clients outside of the security perimeter accessing resources inside the corporate network or demilitarized zone (DMZ), as depicted below. The decrypted traffic transits through different inspection devices for inbound and outbound traffic. As an example, inbound traffic is decrypted and processed by F5's Advanced Web Application Firewall (F5 Advanced WAF) as shown below. *Can be encrypted or cleartext as needed. As an example, outbound traffic is decrypted and sent to a next-generation firewall (NGFW) for inspection as shown in the diagram below. The BIG-IP SSL Orchestrator solution offers 5 different configuration templates. The following topologies are discussed in Network Insertion Use Cases:
L2 Outbound
L2 Inbound
L3 Outbound
L3 Inbound
L3 Explicit Proxy
Existing Application
In the use case described herein, the BIG-IP is inserted as a layer 3 (L3) network device and is configured with an L3 Outbound topology.
Architecture & Network Topology
The assumption is that, prior to the insertion of BIG-IP SSL Orchestrator into the network (in a brownfield environment), the network looks like the one depicted below. It is understood that actual networks will vary and that IP addressing, L2 and L3 connectivity will differ; however, this is deemed to be a representative setup. Note: All IP addressing in this document is provided as examples only. Private IP addressing (RFC 1918) is used, as in most corporate environments. Note: the management network is not depicted in the picture above. Further discussion about management and visibility is the subject of Centralized Management below. The following is a description of the different reference points shown in the diagram above.
a. This is the connection of the border routers that connect to the internet and other WAN and private links. Typically, private IP addressing space is used from the border routers to the firewalls.
b. The border switching connects to the corporate/infrastructure firewall. Resilience is built into this switching layer by implementing 2 link aggregates (LAG or Port Channel®).
c. The "demilitarized zone" (DMZ) switches are connected to the firewall. The DMZ network hosts applications that are accessible from untrusted networks such as the Internet.
d. Application servers connect into the DMZ switch fabric.
e. Firewalls connect into the switch fabric. Typically, core and distribution infrastructure switching will provide L2 and L3 switching to the enterprise (in some cases there may be additional L3 routing for larger enterprises/entities that require dynamic routing and other advanced L3 services).
f. The connections between the core and distribution layers are represented by a bus in the figure above because the actual connection schema is too intricate to picture. The writer has taken the liberty of drawing a simplified representation. Switches actually interconnect with a mixture of link aggregation and provide differentiated switching using virtualization (e.g. VLAN tagging, 802.1q), and possibly further frame/packet encapsulation (e.g. QinQ, VxLAN).
g. The core and distribution switching are used to create 2 broadcast domains. One is the client network, and the other is the internal application network.
h. The internal applications are connected to their own subnet.
The BIG-IP SSL Orchestrator solution is implemented as depicted below. In the diagram above, new network connections are depicted in orange (vs. blue for existing connections). Similarly to the diagram showing the original network, the switching for the DMZ is depicted using a bus representation to keep the diagram simple. The following discusses the different reference points in the diagram above:
a. The BIG-IP SSL Orchestrator is connected to the core switching infrastructure. A new VLAN and network are created on the core switching infrastructure to connect the firewalls (North) to the BIG-IP SSL Orchestrator devices.
b. The client network (South) is connected to the BIG-IP via a second VLAN and network.
c. The SSL Orchestrator devices are connected to a newly created inspection network. This network is kept separate from the rest of the infrastructure because client traffic transits through the inspection devices unencrypted. As an example, Web Application Firewalls (BIG-IP ASM) are used to filter inbound traffic.
d. The LAN configuration for the connection to the BIG-IP ASM is as depicted below.
e. The NGFW is connected to the INSPECTION switching network in such a manner that traffic traverses it when the BIG-IP SSL Orchestrator is configured to push traffic for inspection.
Summary
This article should be a good starting point for planning your initial SSL Orchestrator deployment. We covered the solution overview and use cases, and the network topology and architecture were explained with the help of diagrams.
Next Steps
Click Next to proceed to the next article in the series.
SSL Orchestrator Advanced Use Cases: Reducing Complexity with Internal Layered Architecture
Introduction
Sir Isaac Newton said, "Truth is ever to be found in the simplicity, and not in the multiplicity and confusion of things". The world we live in is...complex. No getting around that. But at the very least, we should strive for simplicity where we can achieve it. As IT folk, we often find ourselves mired in the complexity of things until we lose sight of the big picture, the goal. How many times have you created an additional route table entry, or firewall exception, or virtual server, because the alternative meant having a deeper understanding of the existing (complex) architecture? Sure, sometimes it's unavoidable, but this article describes at least one way that you can achieve simplicity in your architecture. SSL Orchestrator sits as an inline point of presence in the network to decrypt, re-encrypt, and dynamically orchestrate traffic to the security stack. You need rules to govern how to handle specific types of traffic, so you create security policy rules in the SSL Orchestrator configuration to describe and take action on these traffic patterns. It's definitely easy to create a multitude of traffic rules to match discrete conditions, but if you step back and look at the big picture, you may notice that the different traffic patterns basically all perform the same core actions. They allow or deny traffic, intercept or bypass TLS (decrypt/not-decrypt), and send to one or a few service chains. If you were to write down all of the combinations of these actions, you'd very likely discover a small subset of discrete "functions". As usual, F5 BIG-IP and SSL Orchestrator provide some innovative and unique ways to optimize this. And so in this article we will explore SSL Orchestrator topologies "as functions" to reduce complexity. Specifically, you can reduce the complexity of security policy rules, and in doing so, quite likely increase the performance of your SSL Orchestrator architecture.
SSL Orchestrator Use Case: Reducing Complexity with Internal Layered Architectures
The idea is simple. Instead of a single topology with a multitude of complex traffic pattern matching rules, create small atomic topologies as static functions and steer traffic to the topologies by virtue of "layered" traffic pattern matching. Granted, if your SSL Orchestrator configuration is already pretty simple, then please keep doing what you're doing. You've got this, Tiger. But if your environment is getting complex, and you're not quite convinced yet that topologies as functions are a good idea, here are a few additional benefits you'll get from this topology layering:
Dynamic egress selection: topologies as functions can define different egress paths.
Dynamic CA selection: topologies as functions can use different local issuing CAs for different traffic flows.
Dynamic traffic bypass: certain types of traffic can be challenging to handle internally. For example, mutual TLS traffic can be bypassed globally with the "Bypass on client cert failure" option in the SSL configuration, but bypassing mutual TLS sites by hostname is more complex. A layered architecture can steer traffic (by SNI) through a bypass topology, with a service chain.
More flexible pattern recognition: for all of its flexibility, SSL Orchestrator security policy rules cannot catch every possible use case. External traffic pattern recognition, via iRules or CPM (LTM policies), offers near infinite pattern matching options.
You could, for example, steer traffic based on incoming tenant VLAN or route domain for multi-tenancy configurations.
More flexible automation strategies: as iRules, data groups, and CPM policies are fully automatable across many AnO platforms (ex. AS3, Ansible, Terraform, etc.), it becomes exceedingly easy to automate SSL Orchestrator traffic processing, and it removes the need to manage individual topology security policy rules.
Hopefully these benefits give you a pretty clear indication of the value in this architecture strategy. So without further ado, let's get started.
Configuration
Before we begin, I'd like to make note of the following caveats:
While every effort has been made to simplify the layered architecture, there is still a small element of complexity. If you are at all averse to creating virtual servers or modifying iRules, then maybe this isn't for you. But as you are reading this in a forum dedicated to programmability, I'm guessing you the reader are ready for a challenge.
This is a "field contributed" solution, so it is not officially supported by F5.
This topology layering architecture is applicable to all modern versions of SSL Orchestrator, from 5.0 to 8.0.
While topology layering can be used for inbound topologies, it is most appropriate for outbound. The configuration below also only describes the layer 3 implementation, but layer 2 layering is also possible.
With this said, there are just a few core concepts to understand:
Basic layered architecture configuration - how the pieces fit together
The iRules - how traffic moves through the architecture
Or the CPM policies - an alternative to iRules
Note again that this is primarily useful in outbound topologies. Inbound topologies are typically more atomic on their own already. I will cover both transparent and explicit forward proxy configurations below.
Basic layered architecture configuration
A layered architecture takes advantage of a powerful feature of the BIG-IP called "VIP targeting". The idea is that one virtual server calls another with negligible latency between the two VIPs. The "external" virtual server is client-facing. The SSL Orchestrator topology virtual servers are thus "internal". Traffic enters the external VIP and traffic rules pass control to any of a number of internal "topology function" VIPs. You certainly don't have to use the iRule implementation presented here. You just need a client-facing virtual server with an iRule that VIP-targets to one or more SSL Orchestrator topologies. Each outbound topology is represented by a virtual server that includes the application service name. You can see these if you navigate to Local Traffic -> Virtual Servers in the BIG-IP UI. So then the most basic topology layering architecture might just look like this:
when CLIENT_ACCEPTED {
    virtual "/Common/sslo_my_topology.app/sslo_my_topology-in-t-4"
}
This iRule doesn't do anything interesting, except get traffic flowing across your layered architecture. To be truly useful you'll want to include conditions and evaluations to steer different types of traffic to different topologies (as functions). As the majority of security policy rules are meant to define TLS traffic patterns, the provided iRules match on TLS traffic and pass any non-TLS traffic to a default (intercept/inspection) topology. These iRules are intended to simplify topology switching by moving all of the complexity of traffic pattern matching to a library iRule.
You should then only need to modify the "switching" iRule to use the functions in the library, which all return Boolean true or false results. Here are the simple steps to create your layered architecture:
Step 1: Build a set of "dummy" VLANs. A topology must be bound to a VLAN. But since the topologies in this architecture won't be listening on client-facing VLANs, you will need to create a separate VLAN for each topology you intend to create. A dummy VLAN is a VLAN with no interface assigned. In the BIG-IP UI, under Network -> VLANs, click Create. Give your VLAN a name and click Finished. It will ask you to confirm since you're not attaching an interface. Repeat this step by creating unique VLAN names for each topology you are planning to use.
Step 2: Build a set of static topologies as functions. You'll want to create a normal "intercept" topology and a separate "bypass" topology, though you can create as many as you need to encompass the unique topology functions. Your intercept topology is configured as such:
L3 outbound topology configuration, normal topology settings, SSL configuration, services, service chains
No security policy rules - just the ALL rule with TLS intercept action (and service chain), and optionally remove the built-in Pinners rule
Attach to a dummy VLAN (a VLAN with no assigned interfaces)
Your bypass topology should then look like this:
L3 outbound topology configuration, skip the SSL Configuration settings, optionally re-use services and service chains
No security policy rules - just the ALL rule with TLS bypass action (and service chain)
Attach to a separate dummy VLAN (a VLAN with no assigned interfaces)
Note the name you use for each topology, as this will be called explicitly in the iRule. For example, if you name the topology "myTopology", that's the name you will use in each "call SSLOLIB::target" function (more on this in a moment). If you look in the SSL Orchestrator UI, you will see that it prepends "sslo_" (ex. sslo_myTopology). Don't include the "sslo_" portion in the iRule.
Step 3: Import the SSLOLIB iRule (attached here). Name it "SSLOLIB". This is the library rule, so no modifications are needed. The functions within (as described below) will return a true or false, so you can mix these together in your switching rule as needed.
Step 4: Import the traffic switching iRule (attached here). You will modify this iRule as required, but the SSLOLIB library rule makes this very simple.
Step 5: Create your external layered virtual server. This is the client-facing virtual server that will catch the user traffic and pass control to one of the internal SSL Orchestrator topology listeners (a tmsh sketch of Steps 1 and 5 follows this list).
Type: Standard
Source: 0.0.0.0/0
Destination: 0.0.0.0/0
Service Port: 0
Protocol: TCP
VLAN: client-facing VLAN
Address Translation: disabled
Port Translation: disabled
Default Persistence Profile: ssl
iRule: the traffic switching iRule
Note that the ssl persistence profile is enabled here to allow the iRules to handle client side SSL traffic without SSL profiles attached. Also make sure that Address and Port Translation are disabled before clicking Finished.
Step 6: Modify the traffic switching iRule to meet your traffic matching requirements (see below). You have the basic layered architecture created. The only remaining step is to modify the traffic switching iRule as required, and that's pretty easy too.
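If you prefer to script the plumbing, here is a hedged tmsh sketch of Steps 1 and 5 above (the dummy VLANs and the external switching virtual server). The object names, the client-facing VLAN name, and the iRule name are placeholders/assumptions, and exact property syntax can vary slightly between BIG-IP versions, so treat this as a starting point rather than a definitive recipe:
# Step 1 (sketch): create one dummy VLAN (no interfaces) per topology
tmsh create net vlan dummy_vlan_intercept
tmsh create net vlan dummy_vlan_bypass
# Step 5 (sketch): client-facing external switching virtual server
# wildcard listener, no address/port translation, ssl persistence, switching iRule attached
# (command split across lines for readability)
tmsh create ltm virtual layered_external_vip \
    destination 0.0.0.0:any \
    source 0.0.0.0/0 \
    ip-protocol tcp \
    profiles add { tcp } \
    translate-address disabled \
    translate-port disabled \
    vlans-enabled vlans add { client_vlan } \
    persist replace-all-with { ssl } \
    rules { sslo-layering-rule }
tmsh save sys config
Whether you build these in the UI or with tmsh, the end result should match the settings listed in Step 5.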
The iRules
I'll repeat, there are near infinite options here. At the very least you need to VIP target from the external layered VIP to at least one of the SSL Orchestrator topology VIPs. The iRules provided here have been cultivated to make traffic selection and steering as easy as possible by pushing all of the pattern functions to a library iRule (SSLOLIB). The idea is that you will call a library function for a specific traffic pattern and, if true, call a separate library function to steer that flow to the desired topology. All of the build instructions are contained inside the SSLOLIB iRule, with examples.
SSLOLIB iRule: https://github.com/f5devcentral/sslo-script-tools/blob/main/internal-layered-architecture/transparent-proxy/SSLOLIB
Switching iRule: https://github.com/f5devcentral/sslo-script-tools/blob/main/internal-layered-architecture/transparent-proxy/sslo-layering-rule
The function to steer to a topology (SSLOLIB::target) has three parameters:
<topology name>: this is the name of the desired topology. Use the basic topology name as defined in the SSL Orchestrator configuration (ex. "intercept").
${sni}: this is static and should be left alone. It's used to convey the SNI information for logging.
<message>: this is a message to send to the logs. In the examples, the message indicates the pattern matched (ex. "SRCIP").
Note, include an optional 'return' statement at the end to cancel any further matching. Without the 'return', the iRule will continue to process matches and settle on the value from the last evaluation. Example (sending to a topology named "bypass"):
call SSLOLIB::target "bypass" ${sni} "DSTIP" ; return
There are separate traffic matching functions for each pattern:
SRCIP IP:<ip/subnet>
SRCIP DG:<data group name> (address-type data group)
SRCPORT PORT:<port/port-range>
SRCPORT DG:<data group name> (integer-type data group)
DSTIP IP:<ip/subnet>
DSTIP DG:<data group name> (address-type data group)
DSTPORT PORT:<port/port-range>
DSTPORT DG:<data group name> (integer-type data group)
SNI URL:<static url>
SNI URLGLOB:<glob match url> (ends_with match)
SNI CAT:<category name or list of categories>
SNI DG:<data group name> (string-type data group)
SNI DGGLOB:<data group name> (ends_with match)
Examples:
# SOURCE IP
if { [call SSLOLIB::SRCIP IP:10.1.0.0/16] } { call SSLOLIB::target "bypass" ${sni} "SRCIP" ; return }
if { [call SSLOLIB::SRCIP DG:my-srcip-dg] } { call SSLOLIB::target "bypass" ${sni} "SRCIP" ; return }
# SOURCE PORT
if { [call SSLOLIB::SRCPORT PORT:5000] } { call SSLOLIB::target "bypass" ${sni} "SRCPORT" ; return }
if { [call SSLOLIB::SRCPORT PORT:1000-60000] } { call SSLOLIB::target "bypass" ${sni} "SRCPORT" ; return }
# DESTINATION IP
if { [call SSLOLIB::DSTIP IP:93.184.216.34] } { call SSLOLIB::target "bypass" ${sni} "DSTIP" ; return }
if { [call SSLOLIB::DSTIP DG:my-destip-dg] } { call SSLOLIB::target "bypass" ${sni} "DSTIP" ; return }
# DESTINATION PORT
if { [call SSLOLIB::DSTPORT PORT:443] } { call SSLOLIB::target "bypass" ${sni} "DSTPORT" ; return }
if { [call SSLOLIB::DSTPORT PORT:443-9999] } { call SSLOLIB::target "bypass" ${sni} "DSTPORT" ; return }
# SNI URL match
if { [call SSLOLIB::SNI URL:www.example.com] } { call SSLOLIB::target "bypass" ${sni} "SNIURLGLOB" ; return }
if { [call SSLOLIB::SNI URLGLOB:.example.com] } { call SSLOLIB::target "bypass" ${sni} "SNIURLGLOB" ; return }
# SNI CATEGORY match
if { [call SSLOLIB::SNI CAT:$static::URLCAT_list] } { call SSLOLIB::target "bypass" ${sni} "SNICAT" ; return }
if { [call SSLOLIB::SNI CAT:/Common/Government] } { call SSLOLIB::target "bypass" ${sni} "SNICAT" ; return }
# SNI URL DATAGROUP match
if { [call SSLOLIB::SNI DG:my-sni-dg] } { call SSLOLIB::target "bypass" ${sni} "SNIDGGLOB" ; return }
if { [call SSLOLIB::SNI DGGLOB:my-sniglob-dg] } { call SSLOLIB::target "bypass" ${sni} "SNIDGGLOB" ; return }
To combine these, you can use simple AND|OR logic. Example:
if { ( [call SSLOLIB::DSTIP DG:my-destip-dg] ) and ( [call SSLOLIB::SRCIP DG:my-srcip-dg] ) }
Finally, adjust the static configuration variables in the traffic switching iRule RULE_INIT event:
## User-defined: Default topology if no rules match (the topology name as defined in SSLO)
set static::default_topology "intercept"
## User-defined: DEBUG logging flag (1=on, 0=off)
set static::SSLODEBUG 0
## User-defined: URL category list (create as many lists as required)
set static::URLCAT_list { /Common/Financial_Data_and_Services /Common/Health_and_Medicine }
CPM policies
LTM policies (CPM) can work here too, but with the caveat that LTM policies do not support URL category lookups. You'll probably want to either keep the Pinners rule in your intercept topologies, or convert the Pinners URL category to a data group. A "url-to-dg-convert.sh" Bash script can do that for you.
url-to-dg-convert.sh: https://github.com/f5devcentral/sslo-script-tools/blob/main/misc-tools/url-to-dg-convert.sh
As with iRules, infinite options exist. But again, for simplicity, here is a good CPM configuration. For this you'll still need a "helper" iRule, but it requires minimal one-time updates.
when RULE_INIT {
    ## Default SSLO topology if no rules match. Enter the name of the topology here
    set static::SSLO_DEFAULT "intercept"
    ## Debug flag
    set static::SSLODEBUG 0
}
when CLIENT_ACCEPTED {
    ## Set default topology (if no rules match)
    virtual "/Common/sslo_${static::SSLO_DEFAULT}.app/sslo_${static::SSLO_DEFAULT}-in-t-4"
}
when CLIENTSSL_CLIENTHELLO {
    if { ( [POLICY::names matched] ne "" ) and ( [info exists ACTION] ) and ( ${ACTION} ne "" ) } {
        if { $static::SSLODEBUG } {
            log -noname local0. "SSLO Switch Log :: [IP::client_addr]:[TCP::client_port] -> [IP::local_addr]:[TCP::local_port] :: [POLICY::rules matched [POLICY::names matched]] :: Sending to $ACTION"
        }
        virtual "/Common/sslo_${ACTION}.app/sslo_${ACTION}-in-t-4"
    }
}
The only thing you need to do here is update the static::SSLO_DEFAULT variable to indicate the name of the default topology, for any traffic that does not match a traffic rule. For the comparable set of CPM rules, navigate to Local Traffic -> Policies in the BIG-IP UI and create a new CPM policy. Set the strategy to "Execute First matching rule", and give each rule a useful name as the iRule can send this name in the logs.
For source IP matches, use the "TCP address" condition at ssl client hello time.
For source port matches, use the "TCP port" condition at ssl client hello time.
For destination IP matches, use the "TCP address" condition at ssl client hello time. Click on the Options icon and select "Local" and "External".
For destination port matches, use the "TCP port" condition at ssl client hello time. Click on the Options icon and select "Local" and "External".
For SNI matches, use the "SSL Extension server name" condition at ssl client hello time.
For each of the conditions, add a simple "Set variable" action at ssl client hello time. Name the variable "ACTION" and give it the name of the desired topology. Apply the helper iRule and CPM policy to the external traffic steering virtual server.
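One practical note: the data-group based matches above (the DG: and DGGLOB: functions), as well as any CPM conditions that reference data groups, assume the referenced data groups already exist. Here is a hedged tmsh sketch of creating them; the names and records are placeholders taken from the examples and are assumptions, not part of the published solution itself:
# Address-type data group for SRCIP/DSTIP DG: matches (example record)
tmsh create ltm data-group internal my-srcip-dg type ip records add { 10.1.10.0/24 { } }
# String-type data group for SNI DG:/DGGLOB: matches (example record)
tmsh create ltm data-group internal my-sni-dg type string records add { www.example.com { } }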
The "first" matching rule strategy is applied here, and all rules trigger on ssl client hello, so you can drag them around and re-order as necessary. Note again that all of the above only evaluates TLS traffic. Any non-TLS traffic will flow through the "default" topology that you identify in the iRule. It is possible to re-configure the above to evaluate HTTP traffic, but honestly the only significant use case here might be to allow or drop traffic at the policy. Layered architecture for an explicit forward proxy You can use the same logic to support an explicit proxy configuration. The only difference will be that the frontend layered virtual server will perform the explicit proxy functions. The backend SSL Orchestrator topologies will continue to be in layer 3 outbound (transparent proxy) mode. Normally SSL Orchestrator would build this for you, but it's pretty easy and I'll show you how. You could technically configure all of the SSL Orchestrator topologies as explicit proxies, and configure the client facing virtual server as a layer 3 pass-through, but that adds unnecessary complexity. If you also need to add explicit proxy authentication, that is done in the one frontend explicit proxy configuration. Use the settings below to create an explicit proxy LTM configuration. If not mentioned, settings can be left as defaults. Under SSL Orchestrator -> Configuration in the UI, click on the gear icon in the top right corner. This will expose the DNS resolver configuration. The easiest option here is to select "Local Forwarding Nameserver" and then enter the IP address of the local DNS service. Click "Save & Next" and then "Deploy" when you're done. Under Network -> Tunnels in the UI, click Create. This will create a TCP tunnel for the explicit proxy traffic. Profile: select tcp-forward Under Local Traffic -> Profiles -> Services -> HTTP in the UI, click Create. This will create the HTTP explicit proxy profile. Proxy Mode: Explicit Explicit Proxy: DNS Resolver: select the ssloGS-net-resolver Explicit Proxy: Tunnel Name: select the TCP tunnel created earlier Under Local Traffic -> Virtual Servers, click Create. This will create the client-facing explicit proxy virtual server. Type: Standard Source: 0.0.0.0/0 Destination: enter an IP the client can use to access the explicit proxy interface Service Port: enter the explicit proxy listener port (ex. 3128, 8080) HTTP Profile: HTTP explicit profile created earlier VLANs and Tunnel Traffic: set to "Enable on..." and select the client-facing VLAN Address Translation: enabled Port Translation: enabled Under Local Traffic -> Virtual Servers, click Create again. This will create the TCP tunnel virtual server. Type: Standard Source: 0.0.0.0/0 Destination: 0.0.0.0/0 Service Port: * VLANs and Tunnel Traffic: set to "Enable on..." and select the TCP tunnel created earlier Address Translation: disabled Port Translation: disabled iRule: select the SSLO switching iRule Default Persistence Profile: select ssl Note, make sure that Address and Port Translation are disabled before clicking Finished. Under Local Traffic -> iRules, click Create. This will create a small iRule for the explicit proxy VIP to forward non-HTTPS traffic through the TCP tunnel. Change "<name-of-TCP-tunnel-VIP>" to reflect the name of the TCP tunnel virtual server created in the previous step. when HTTP_REQUEST { virtual "/Common/<name-of-TCP-tunnel-VIP>" [HTTP::proxy addr] [HTTP::proxy port] } Add this new iRule to the explicit proxy virtual server. 
To test, point your explicit proxy client at the IP:port defined above and give it a go. HTTP and HTTPS explicit proxy traffic arriving at the explicit proxy VIP will flow into the TCP tunnel VIP, where the SSLO switching rule will process traffic patterns and send traffic to the appropriate backend SSL Orchestrator topology-as-function.
Testing and Considerations
Assuming you have the default topology defined in the switching iRule's RULE_INIT, and no traffic matching rules defined, all traffic from the client should pass effortlessly through that topology. If it does not:
Ensure the name defined in the static::default_topology variable is the actual name of the topology, without the prepended "sslo_".
Enable debug logging in the iRule and observe the LTM log (/var/log/ltm) for any anomalies.
Worst case, remove the client-facing VLAN from the frontend switching virtual server and attach it to one of your topologies, effectively bypassing the layered architecture. If traffic does not pass in this configuration, then it cannot in the layered architecture, and you need to troubleshoot the SSL Orchestrator topology itself. Once you have that working, put the dummy VLAN back on the topology and add the client-facing VLAN to the switching virtual server.
Considerations
The above provides a unique way to solve for complex architectures. There are, however, a few minor considerations:
This is defined outside of SSL Orchestrator, so it would not be included in the internal HA sync process. However, this architecture places very little administrative burden on the topologies directly. It is recommended that you create and sync all of the topologies first, then create the layered virtual server and iRules, and then manually sync the boxes.
If you make any changes to the switching iRule (or CPM policy), that should not affect the topologies. You can initiate a manual BIG-IP HA sync to copy the changes to the peer.
If upgrading to a new version of SSL Orchestrator (only), no additional changes are required. If upgrading to a new BIG-IP version, it is recommended to break HA (SSL Orchestrator 8.0 and below) before performing the upgrade. The external switching virtual server and iRules should migrate natively.
Summary
And there you have it. In just a few steps you've been able to reduce complexity and add capabilities, and along the way you have hopefully recognized the immense flexibility at your command.
SSL Orchestrator Advanced Use Cases: Inbound Authentication
Introduction
In your many adventures as an IT pro, you've undoubtedly come across the term "Swiss Army Knife" when describing the F5 BIG-IP. You don't have to be an expert at F5 products...or Swiss Army knives...to understand what this means. The term itself ubiquitously describes the idea of versatility, the ability to solve any problem with one of many tools included in a single shiny package. Now of course the naysayers will argue that this versatility breeds complexity. And while there's no argument that a BIG-IP can be complex, just take a look at your current network and security architectures and count how many different tools are used to solve a single set of challenges. The subtle reality is that there's really no such thing as "one-size-fits-all". Homogenizing technologies like the various public cloud offerings will give you "good enough" capabilities, but then you have to ask yourself, is my competitor's good enough the same as my good enough? Do we really have the same exact challenges? This is where versatility can be a critical advantage. Versatility, for example, can help to stop zero-day attacks before your security products have a chance to roll out their own solutions. Versatility can solve complex software issues that might otherwise require a multitude of expensive vendor tools. And versatility can very often create capabilities (application, authentication, security, etc.) where no formal vendor solution exists. In this article I'll be addressing a specific set of BIG-IP (versatility) characteristics: authentication and orchestration. And in doing so, I will also be showing you some powerful capabilities that you probably didn't know were there. Let's get started!
SSL Orchestrator Use Case: Inbound Authentication
The basic premise of this use case is that an SSL Orchestrator security policy is built on top of a set of "stateless" Access per-session and per-request policies. Access Policy Manager (APM) is the module you use on a BIG-IP to perform client authentication, and this requires "stateful" per-session and per-request policies. Therefore, as an application virtual server can only contain ONE access policy, the APM and SSL Orchestrator policies cannot coexist. In other words, you cannot add APM authentication to an SSL Orchestrator virtual server (or an SSL Orchestrator security policy to an APM virtual server). SSL Orchestrator technically allows for authentication in outbound (forward proxy) topologies, because the explicit or transparent forward proxy authentication policy does not sit on the same virtual server as the SSL Orchestrator security policy. What we're focusing on here, though, is inbound (reverse proxy) authentication where there's generally just the one application virtual. There are fundamentally two ways to address this challenge:
Layering virtual servers - often referred to as "VIP targeting", or "VIP-target-VIP", this is where one (external) virtual server uses an iRule command to push traffic to another internal virtual server. This is the simple approach. You put your authentication policy and client-side SSL offload on the external virtual, and an iRule to do the VIP targeting. The targeted internal virtual contains the SSL Orchestrator security policies, the application server pool, and optionally server SSL if you need to re-encrypt.
figure: apm-sslo-vip-target
Connector profile - a connector profile is a proxy element that was added to BIG-IP in 14.1, and that inserts itself in the client-side proxy flow after layer 5/6 (SSL decryption) and before layer 7 (HTTP). The connector is flow-based, so it can be assigned once at flow initiation. Essentially, the connector can "tee" traffic out of the original proxy flow, and then back. The connector itself points to an internal virtual server that can perform any number of functions before returning back to the original proxy flow.
figure: apm-sslo-connector
For those of you that have spent any time digging around in the guts of an SSL Orchestrator configuration, you may recognize the connector profile. The connector was specifically created for SSL Orchestrator to handle third-party security service insertion. This is the thing that tees decrypted traffic off to the security devices. The connector is fundamentally an LTM object, but with LTM you can attach a single connector to a virtual server. In other words, you can attach a single security device to an LTM virtual server. SSL Orchestrator gives you dynamic assignment of multiple connectors (the service chain), a robust security policy (that attaches the flow to the service chain), dynamic decryption, and a guided configuration user interface to build all of this coolness. In the context of this use case, we'll attach a connector profile to the APM application virtual that points to an internal virtual, and that internal virtual will contain the SSL Orchestrator security policy. It is important to note here that the following solutions will minimally require:
LTM base + APM add-on
SSL Orchestrator
You will be using the SSL Orchestrator "Existing Application" topology option here. This creates the security policy, services and service chain, without also creating the virtual servers and SSL. We'll leave application traffic management and decryption to the APM virtual server. Before I dig into each of these options, let's understand why you would select one over the other, as both have pros and cons. The VIP target solution is fundamentally easy. It's two virtual servers and a simple VIP target iRule. However, there's a tiny bit of overhead in a VIP target as you engage the TCP proxy twice. And with multiple applications, a VIP target isn't really re-usable. You have to create a separate virtual server pair for each application. The frontend virtual contains the client-facing destination IP, VLAN, client SSL, and iRule. The internal virtual contains the SSL Orchestrator security policy, application pool, and optional server SSL. The connector solution is re-usable. You simply attach the same connector profile to each application virtual server. It's also going to be slightly more efficient than the VIP target. However, the connector configuration is going to be more complex.
Inbound Authentication through VIP targeting
We will start with the easiest option. Before doing anything else, navigate to SSL Orchestrator and create an "Existing Application" topology. Here you'll define the security services, service chain(s), and a security policy. On completion you'll have two "stateless" access policies that will get attached to one of the virtual servers.
Create a client SSL profile - assuming you're building an HTTPS site, you'll need a client SSL profile to perform HTTPS decryption.
Optionally create a server SSL profile - if you're going to re-encrypt to the application servers, you can either create a custom server SSL profile, or just use the built-in "serverssl" profile.
Create the application pool - this is the pool that sends traffic to the application servers.
Create the internal virtual server - this is the virtual server that will contain the SSL Orchestrator security policy and application pool.
Type: Standard
Source: 0.0.0.0/0
Destination: 0.0.0.0/0
Port: *
SSL Profile (Server): optional server SSL profile
VLAN and Tunnel Traffic: enabled on (empty)
Source Address Translation: SNAT as required
Address/Port Translation: enabled
Access Profile: SSL Orchestrator base policy (ssloDefault_accessProfile)
Per-Request Policy: select the SSL Orchestrator security policy
Default Pool: select the application pool
Create the VIP target iRule - the iRule will pass the flow from the external to the internal virtual server:
when ACCESS_ACL_ALLOWED {
    ## Enter the full name and path of the internal virtual server here
    virtual "/Common/internal-vip"
}
Create the authentication per-session access policy - this is a standard APM authentication per-session access policy, and can be anything you need.
Create the client-facing external virtual server - this is the application virtual server that the client will communicate with directly.
Type: Standard
Source: 0.0.0.0/0
Destination: enter the IP address clients will use to access the application
Port: enter the port for this application
SSL Profile (Client): select the client SSL profile
VLAN and Tunnel Traffic: enabled on the client-facing VLAN
Address/Port Translation: disabled
Access Profile: APM authentication policy
iRule: select the VIP target iRule
That's it. Client traffic will arrive via HTTPS to the external virtual server, get decrypted by the client SSL profile, and then pass to the authentication access profile. The client authenticates, and then the iRule passes the flow to the internal virtual server. The internal virtual server contains the SSL Orchestrator security policy, so decrypted traffic flows to the security services, returns to the BIG-IP, and then flows out to the application servers. A hedged tmsh sketch of that internal virtual server follows below.
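For reference, here is a rough tmsh sketch of the internal virtual server described above. The virtual server, pool, and policy names are placeholders/assumptions (your SSL Orchestrator "Existing Application" policy names will differ), and attribute availability can vary by version, so treat it as illustrative only:
# Internal virtual server for the VIP target approach (names are examples)
# wildcard listener, optional server-side re-encryption, SSLO access/per-request policies, app pool, SNAT
# (command split across lines for readability)
tmsh create ltm virtual internal-vip \
    destination 0.0.0.0:any \
    source 0.0.0.0/0 \
    ip-protocol tcp \
    profiles add { tcp serverssl ssloDefault_accessProfile } \
    per-flow-request-access-policy ssloP_myApp.app/ssloP_myApp_per_req_policy \
    pool my-app-pool \
    source-address-translation { type automap } \
    vlans-enabled
Address and port translation are left at their defaults (enabled), which matches the UI settings listed above.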
Inbound Authentication through a Connector
The connector profile is at the heart of SSL Orchestrator and how it drives traffic to security devices. But we're going to use a connector here in a novel way. We're going to create a connector that points to an internal virtual server, and that virtual server will contain the SSL Orchestrator security policy (see image above). It is effectively "tee-ing" the traffic out of the original proxy flow, across the security stack, and then back into the flow. The beauty here is that, aside from being slightly more efficient than a VIP target, the connector is re-usable across multiple APM application virtual servers. It's worth noting here that the traffic to the SSL Orchestrator security policy will have already been decrypted at the APM virtual, so the security policy should not contain rules specific to TLS handling (i.e. SSL bypass). But as we're talking about inbound traffic, it's very likely you won't be needing any of that complexity in the security policy anyway. The objective of the security policy here is to pass decrypted traffic to a service chain of security devices. As with the VIP target approach, first create an SSL Orchestrator "Existing Application" topology. This creates the security policy, services and service chain, without also creating the virtual servers and SSL. Let's build the connector configuration, which includes three things:
Create a Service profile - a Service profile essentially defines the type of connector, and how traffic is processed. Here we will be using the F5 Module service. Navigate to Local Traffic :: Profiles :: Other :: Service, and click Create. Give it a name and select F5 Module as the Type.
Create the internal virtual server - the internal virtual server will host the SSL Orchestrator security policy:
Type: Internal
HTTP Profile (client): http
Service Profile: select the service profile
Access Profile: select the SSL Orchestrator profile (ssloDefault_accessProfile)
Per-Request Policy: select the SSL Orchestrator security policy
Create the connector profile - navigate to Local Traffic :: Profiles :: Other :: Connector, and click Create. Simply select the internal virtual server here.
To make all of the above slightly easier, you can simply run the following commands in a BIG-IP shell:
tmsh create ltm profile service sslo-service type f5-module
tmsh create ltm virtual sslo-internal-vip internal profiles add { http sslo-service }
tmsh create ltm profile connector sslo-connector entry-virtual-server sslo-internal-vip
Note that prior to BIG-IP 16.0, the Access Profile selection won't be available in the UI for Internal virtual servers, but you can still add it via TMSH:
tmsh create ltm virtual sslo-internal-vip internal profiles add { http sslo-service ssloDefault_accessProfile } per-flow-request-access-policy [name of policy]
Example:
tmsh create ltm virtual sslo-internal-vip internal profiles add { http sslo-service ssloDefault_accessProfile } per-flow-request-access-policy ssloP_sslotest.app/ssloP_sslotest_per_req_policy
Now just create your APM application virtual server as usual, including client-facing destination IP/port, VLAN, client (and optional server) SSL, APM authentication policy, SNAT (as required), and the application pool. On top of that, attach the connector profile in the field labeled Connector Profile. For each additional APM application virtual server, you can re-use this same connector profile. Client traffic will arrive via HTTPS to the APM virtual server, get decrypted by the client SSL profile, pass through the connector, and then to the authentication profile. Note that there's one other subtle difference between these methods that I didn't touch on earlier, and that's the order of events. In the VIP target option, authentication is attempted and completed before any traffic passes to the SSL Orchestrator security policy, so the security devices only see application traffic flows. In the connector option, authentication is engaged after the connector, so the security policy and devices see the entire authentication process. In this case, they will see the APM /my.policy redirects and the APM session cookies.
Summary
The connector profile presents a lot of really interesting capabilities, even beyond what we've seen here. For example, anywhere that you may have some mutual exclusivity, like APM and ASM policies on a virtual server, you could potentially use the connector attached to an APM virtual to pass traffic to a WAF policy. The connector basically gives you a single "tee" for free in LTM. For multiple connectors, dynamic connector assignment, dynamic decryption, and a robust policy to handle that assignment, you'd use the SSL Orchestrator.
In either case, whether using the VIP target or connector approach to inbound authentication with SSL Orchestrator, hopefully you can see some of the immense versatility at your command.
WAFaaS with SSL Orchestrator
Introduction
Note: This article applies to SSL Orchestrator versions prior to 11.0. If using version 11.0, refer to the article HERE.
This use case allows you to insert F5 WAF functionality as a Service in the SSL Orchestrator inspection zone. WAFaaS is the ability to insert ASM profiles into the SSL Orchestrator Service Chain for Inbound Topologies. This configuration is specific to a WAF policy running on the SSL Orchestrator device. WAF and SSL Orchestrator both consume significant CPU cycles, so care should be given when deploying them together. It is also possible to deploy WAF as a service on a separate BIG-IP device, in which case you'd simply configure an inline transparent proxy service. The ability to insert F5's WAF into the Service Chain presents a significant customer benefit. This guide assumes you already have WAF/ASM profile(s) configured, licensed and provisioned on BIG-IP and wish to add this functionality to an Inbound Topology. In order to run WAF and SSL Orchestrator on the same device you will need an LTM license with SSL Orchestrator as an add-on option. You cannot add a WAF license to an SSL Orchestrator stand-alone license. SSL Orchestrator does not directly support inserting F5 WAF policies into the Service Chain. However, the F5 platform is flexible enough to handle many custom use cases. In this case, the ICAP service configuration exposes a framework that is useful for any number of specialized patterns, including adding a WAF policy to an SSLO service chain. We will configure an ICAP Service and attach the WAF policy to it.
Steps:
Create ICAP Service
Disable Strictness on the Service
Disable TCP monitor for the ICAP Pool
ICAP Adapt profiles removed from the Virtual Server
Application Security Policy enabled and a Policy assigned under Security
Step #1: Create ICAP Service
Note: These instructions assume an SSL Orchestrator Topology and Service Chain are already deployed and working properly. These instructions simply add WAFaaS to the existing Service Chain. It is entirely possible to create the WAFaaS during the initial Topology creation, in which case you would create the service during the workflow, then make the necessary changes after the topology has been created.
From the SSL Orchestrator Guided Configuration click Services, then Add.
Scroll to the bottom, select Generic ICAP Service and click Add.
Give it a name, WAFaaS in this example.
For ICAP Devices click Add on the right.
Enter an IP Address, 198.19.97.1 in this example, and click Done.
Note: the IP address you use does not have to be the one above. It's just a local, non-routable address used as a placeholder in the service definition. This IP address will not be used. IP addresses 198.19.97.0 to 198.19.97.255 fall within a range reserved for network benchmark testing and are not routed on the Internet.
Scroll to the bottom and click Save & Next.
The next screen is the Services Chain List. Click the name of the Service Chain you wish to add WAF functionality to, ssloSC_ServiceChain in this example.
Note: The order of the Services in the Selected column is the order in which SSL Orchestrator will pass decrypted data to the devices. This can be an important consideration if you want some devices to see, or not see, the actions taken by the WAF Service.
Select the WAFaaS Service and click the right arrow to move it to Selected. Click Save.
Click Save & Next.
Click Deploy. You should receive a Success message.
Step #2: Disable Strictness on the Service
From the SSL Orchestrator Configuration screen select Services. Click the padlock to Unprotect Configuration.
Note: Disabling Strictness on the ICAP Service is needed to modify it and attach the WAFaaS policy. Strictness must remain disabled on this service, and disabling strictness on the service has no effect on any other part of the SSL Orchestrator configuration.
Click OK to Unprotect the Configuration.
Step #3: Disable TCP monitor for the ICAP Pool
From Local Traffic select Pools > Pool List.
Select the WAFaaS Pool.
Under Active Health Monitors, select tcp and click >> to move it to Available. This removes the Pool's Monitor because otherwise the pool would be marked as down or unavailable.
Click Update.
Note: The Health Monitor needs to be removed because there is no actual ICAP service to monitor.
Step #4: ICAP Adapt profiles removed from the Virtual Server
From Local Traffic select Virtual Servers > Virtual Server List.
Locate the WAFaaS ICAP service virtual server that ends in "-t-4" and select it.
Set the Request Adapt Profile and Response Adapt Profile to None to disable the default ICAP Profiles.
Click Update.
Step #5: Application Security Policy enabled and a Policy assigned under Security
For the WAFaaS-t-4 Virtual Server click the Security tab.
Set Application Security Policy to Enabled.
Select the Security Policy you wish to use. Click Update when done.
Note: In specific versions of SSL Orchestrator there is one extra configuration item that needs to be modified. This is NOT required in other versions. If this change is made, it is not necessarily required to back it out when performing an upgrade.
Required versions:
SSLO version 5.9.15 available on TMOS 14.1.4
SSLO versions 6.0-6.5 available on TMOS 15.0.x
Navigate to "Local Traffic ›› Profiles : Other : Service".
Select the Service profile named "ssloS_WAFaaS-service".
Change the "Type" from "ICAP" to "F5 Module".
Conclusion
The configuration is now complete. Using the WAFaaS this way is functionally the same as using WAF by itself. There are no known limitations to this configuration.
SSL Orchestrator Advanced Use Cases: Integrating F5 Advanced Firewall Manager (AFM)
Introduction
There are plenty of examples where "better together" is the right way to go. Where would M&Ms be if chocolatiers Forrest Mars Sr. and Bruce Murrie hadn't shaken hands? They were better together. But also, cheese and crackers, peanut butter and jelly, rice and beans, are arguably better together. Okay, clearly, I need to go eat lunch after this… You could say the same thing for lots of non-food things, though. These days, with the immense volume of TLS traffic on the Internet, application layer security solutions don't make a lot of sense without decryption, and decryption alone isn't an adequate security solution by itself. They're better together. So, what does this have to do with F5 SSL Orchestrator? Well, as a rule, the primary purpose of the SSL Orchestrator is to provide real-time, dynamic layering of scalable security solutions to combat the wide variety of malware pouring into or out of the enterprise through encrypted protocols. SSL Orchestrator supports inbound and outbound traffic flows, and a diverse set of third-party security products. And this works extremely well under normal network conditions. However, SSL Orchestrator alone does not deal with volumetric attacks – those events that flood the network with bad traffic to create a denial-of-service. That is usually the job of a firewall, and/or a dedicated DDoS appliance, and F5 BIG-IP Advanced Firewall Manager (AFM) happens to be the sort of product that can protect against those volumetric bad-actor strikes. It also just so happens that AFM and SSL Orchestrator will happily run on the same appliance, where AFM protects your BIG-IP and even the backend infrastructure components within your security inspection zones from voluminous and otherwise bad network traffic, and SSL Orchestrator then decrypts and inspects the remaining good traffic. They're better together.
SSL Orchestrator Use Case: Integrating AFM
In this article I'll explore a few different ways to deploy and integrate AFM into an SSL Orchestrator architecture. Let's go see what that looks like!
Licensing Requirements
The license requirements are pretty simple here. You need SSL Orchestrator and Advanced Firewall Manager licensed and provisioned on the BIG-IP. You can license SSL Orchestrator as the base product and add AFM, or license AFM as the base product and add SSL Orchestrator. Either works fine.
Configuring SSL Orchestrator
The beauty here is that not much additional work is required to integrate AFM and SSL Orchestrator. As you will see below, you may need to edit a virtual server, or nothing at all. Build your SSL Orchestrator topologies as you normally would. Please reference the SSL Orchestrator deployment guide for a deep-dive into all that's possible: https://clouddocs.f5.com/sslo-deployment-guide/.
Configuring AFM
It probably goes without saying, F5's Advanced Firewall Manager is a wildly powerful product, and this simple article couldn't begin to do justice to all of its features and capabilities. For that, I would kindly suggest the following excellent AFM resources:
BIG-IP AFM: Network Firewall Policies and Implementations
BIG-IP AFM: DoS/DDoS Protection Implementations
AFM Operations Guide
Keep in mind the goal of this scenario is to protect the BIG-IP itself. AFM is not taking the place of a traditional perimeter firewall in this case, but purely keeping the BIG-IP…and SSL Orchestrator out of harm's way.
In that sense, there are generally two ways you'd consider deploying AFM for this use case:
Per-application – where you would assert an AFM Network Firewall policy directly at the BIG-IP virtual server(s). In this case, that would be the SSL Orchestrator topology VIP. The benefit here is that you can have different network firewall policies for different applications and/or traffic paths.
Globally – where you would assert an AFM Network Firewall policy in the global context. The benefit here is that one policy will protect all traffic flows, and no customization is needed at the virtual servers.
Let us first see how to build a very simple AFM policy, and then attach that to each of the above contexts. In the BIG-IP UI, under Security -> Network Firewall -> Policies, click the Create button. Give that policy a name and click Finished. Now click the created policy to edit it. Click the Add Rule button. Again, please reference the above resources for broader coverage of the AFM policy settings. At a bare minimum though, and just for testing:
Name: provide a unique name
Auto Generate UUID: click to enable (if desired)
State: Enabled
Protocol: Any
Source: type in your local client IP address or IP subnet
Destination: Any
Actions: Reject/Drop
Logging: Optionally enable
All other settings can be left blank. As you can see, this is a highly contrived example to simply demonstrate a policy that blocks all traffic from your IP address. Click the Commit Changes to System button to complete the policy. Now, to actually use this policy, let's explore the separate context options:
Per-application – Here you will assign this policy directly to an application virtual server. For SSL Orchestrator, that's the topology VIP. If you're on an SSL Orchestrator version 8.x and lower, you'll need to disable strictness on the topology to make custom edits to the virtual servers. In that case I would strongly recommend the global context option. If running 9.0 and above, you can freely edit the topology objects. In the BIG-IP UI, under Local Traffic -> Virtual Servers, find and edit your SSL Orchestrator topology VIP. This will be a VIP with the "sslo_" prefix. On the top Security tab, go to Network Firewall and set Enforcement (or Staging) to Enabled. In the Policy drop-down, select your AFM network firewall policy. Update the VIP. At this point you can test, and requests from your IP address should indeed get blocked.
Global context – Instead of attaching the AFM policy to virtual servers, you can set it in a global context so that it applies to ALL traffic flowing through the BIG-IP. In the BIG-IP UI, under Security -> Network Firewall -> Active Rules, select Global from the Context drop-down. Now click on the Global link below to edit. Go to Network Firewall and set Enforcement (or Staging) to Enabled. In the Policy drop-down, select your AFM network firewall policy, then update. At this point you can test, and requests from your IP address should indeed get blocked. If you have multiple applications created, the policy will apply to all of them.
Testing your AFM integration
We have already covered a contrived example of blocking by client IP address.
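For reference, a rough tmsh equivalent of that contrived policy, attached per-application, might look like the sketch below. The policy, rule, source subnet, and virtual server names are placeholders/assumptions, AFM must be provisioned, and the exact rule syntax can vary by TMOS version, so validate it in your environment before relying on it:
# Create a network firewall policy with a single reject rule for a test client subnet (names are examples)
tmsh create security firewall policy sslo-afm-test rules add { block-test-client { action reject ip-protocol any source { addresses add { 10.1.10.0/24 { } } } log yes } }
# Attach it as the enforced policy on an SSL Orchestrator topology virtual server
tmsh modify ltm virtual sslo_my_inbound_topology fw-enforced-policy sslo-afm-test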
There are of course MANY different ways to define AFM policies:
You can filter by protocol
You can filter by source IP address, subnet, geolocation, and blacklist categories
You can filter by destination IP address, subnet, geolocation, and blacklist categories
You can insert iRule logic to extend validation and customize response and policy action behaviors
You can apply an additional service policy
You can apply an additional protocol inspection policy
You can apply an additional traffic classification policy
You can divert traffic to different virtual servers
You can apply schedules to your policies
And you can create sets of rules (Rule Lists) that are re-usable across multiple policies and contexts
Sending to virtual servers
The third-to-last item above, diverting to different virtual servers, is an interesting concept I’d like to expand on. There’s a unique configuration pattern called the “Internal Layered Architecture” that can be used to create additional functionality and flexibility in an SSL Orchestrator deployment. The general idea is explained here: https://github.com/f5devcentral/sslo-script-tools/tree/main/internal-layered-architecture, but it is basically where a frontend virtual server steers traffic to internal SSL Orchestrator topologies based on its own external policy. That policy can derive from iRules and LTM policies, and can pull from data groups, URL categories, and pretty much anything else you can imagine to define a topology as a “function”. In any case, AFM enables this capability right out of the box, without iRules. Each rule in an AFM policy can “Send to Virtual”, where a topology VIP can be that destination. Simply put:
Create your SSL Orchestrator topologies as simple “functions” with a single “All Traffic” rule that does something (e.g., Allow/block, Intercept/Bypass, service chain), then assign a “dummy” VLAN to the topology. You’ll minimally create one topology for TLS intercept and one for TLS bypass, but then you can create additional topologies that do different things:
Different combinations of rule actions
Different service chains
Different SSL properties to allow for per-tenant certificate signing
Different egress paths
Create a client-facing virtual server and apply your AFM network policy. In that policy, define a set of rules, where each rule sends to a different topology VIP.
Done. Traffic coming to the BIG-IP will hit the AFM VIP, match a policy rule, and internally steer to an SSL Orchestrator topology.
Summary
In just a few short steps you should now be able to wield the full power of a world-class network firewall and do so with a fully integrated, better together F5 solution. Pretty cool, yes? I urge you to spend some time in the AFM Operations Guide and other AFM references to get a better understanding of the full set of its capabilities. And with that, I hope you can see some of the immense versatility and power of an integrated SSL Orchestrator and AFM solution. Thanks!
SSL Orchestrator Advanced Use Cases: Forward Proxy Authentication
Introduction F5 BIG-IP is synonymous with "flexibility". You likely have few other devices in your architecture that provide the breadth of capabilities that come native with the BIG-IP platform. And for each and every BIG-IP product module, the opportunities to expand functionality are almost limitless.In this article series we examine the flexibility options of the F5 SSL Orchestrator in a set of "advanced" use cases. If you haven't noticed, the world has been steadily moving toward encrypted communications. Everything from web, email, voice, video, chat, and IoT is now wrapped in TLS, and that's a good thing. The problem is, malware - that thing that creates havoc in your organization, that exfiltrates personnel records to the Dark Web - isn't stopped by encryption. TLS 1.3 and multi-factor authentication don't eradicate malware. The only reasonable way to defend against it is to catch it in the act, and an entire industry of security products are designed for just this task. But ironically, encryption makes this hard. You can't protect against what you can't see. F5 SSL Orchestrator simplifies traffic decryption and malware inspection, and dynamically orchestrates traffic to your security stack. But it does much more than that. SSL Orchestrator is built on top of F5's BIG-IP platform, and as stated earlier, is abound with flexibility. SSL Orchestrator Use Case: Forward Proxy Authentication Arguably, authentication is an easy one for BIG-IP, but I'm going to ease into this series slowly. There's no better place to start than with an examination of some of the many ways you can configure an F5 BIG-IP to authenticate user traffic. Forward Proxy Overview Forward proxy authentication isn't exclusive to SSL Orchestrator, but a vital component if you need to authenticate inspected outbound client traffic to the Internet. In this article, we are simply going to explore the act of authenticating in a forward proxy, in general. - how it works, and how it's applied. For detailed instructions on setting up Kerberos and NTLM forward proxy authentication, please see the SSL Orchestrator deployment guide. Let's start with a general characterization of "forward proxy" to level set. The semantics of forward and reverse proxy can change depending on your environment, but generally when we talk about a forward proxy, we're talking about something that controls outbound (usually Internet-bound) traffic. This is typically internal organizational traffic to the Internet. It is an important distinction, because it also implicates the way we handle encryption. In a forward proxy, clients are accessing remote Internet resources (ex. https://www.f5.com). For TLS to work, the client expects to receive a valid certificate from that remote resource, though the inspection device in the middle does not own that certificate and private key. So for decryption to work in an "SSL forward proxy", the middle device must re-issue ("forge") the remote server's certificate to the client using a locally-trusted CA certificate (and key). This is essentially how every SSL visibility product works for outbound traffic, and a native function of the SSL Orchestrator. Now, for any of this to work, traffic must of course be directed through the forward proxy, and there are generally two ways that this is accomplished: Explicit proxy - where the browser is configured to access the Internet through a proxy server. This can also be accomplished through auto-configuration scripts (PAC and WPAD). 
Transparent proxy - where the client is blissfully unaware of the proxy and simply routes to the Internet through a local gateway. It should be noted here that SSL visibility products that deploy at layer 2 are effectively limited to one traffic flow option, and lack the level of control that a true proxy solution provides, including authentication. Also note, BIG-IP forward proxy authentication requires the Access Policy Manager (APM) module licensed and provisioned. Explicit Forward Proxy Authentication The option you choose for outbound traffic flow will have an impact on how you authenticate that traffic, as each works a bit differently. Again, we're not getting into the details of Kerberos or NTLM here. The goal is to derive an essential understanding of the forward proxy authentication mechanisms, how they work, how traffic flows through them, and ultimately how to build them and apply them to your SSL Orchestrator configurations. And as each is different, let us start with explicit proxy. Explicit forward proxy authentication for HTTP traffic is governed by a "407" authentication model. In this model, the user agent (i.e. a browser) authenticates to the proxy server before passing any user request traffic to the remote server. This is an important distinction from other user-based authentication mechanisms, as the browser is generally limited in the types of authentication it can perform here (on the user's behalf). In fact most modern browsers, with some exceptions, are limited to the set of "Windows Integrated" methods (NTLM, Kerberos, and Basic). Explicit forward proxy authentication will look something like this: Figure: 407-based HTTPS and HTTP authentication The upside here is that the Windows Integrated methods are usually "transparent". That is, silently handled by the browser and invisible to the user. If you're logged into a domain-joined workstation with a domain user account, the browser will use this access to generate an NTLM token or fetch a Kerberos ticket on your behalf. If you build an SSL Orchestrator explicit forward proxy topology, you may notice it builds two virtual servers. One of these is the explicit proxy itself, listening on the defined explicit proxy IP and port. And the other is a TCP tunnel VIP. All client traffic arrives at the explicit proxy VIP, then wraps around through the TCP tunnel VIP. The SSL Orchestrator security policy, SSL configurations, and service chains are all connected to the TCP tunnel VIP. Figure: SSL Orchestrator explicit proxy VIP configuration As explicit proxy authentication is happening at the proxy connection layer, to do authentication you simply need to attach your authentication policy to the explicit proxy VIP. This is actually selected directly inside the topology configuration, Interception Rules page. Figure: SSL Orchestrator explicit proxy authentication policy selection But before you can do this, you must first create the authentication policy. Head on over to Access -> Profiles / Policies -> Access Profiles (Per-session policies), and click the Create button. Settings: Name: provide a unique name Profile Type: SWG-Explicit Profile Scope: leave it at 'Profile' Customization Type: leave it at 'Modern' Don't let the name confuse you. Secure Web Gateway (SWG) is not required to perform explicit forward proxy authentication. Click Finished to complete. You'll be taken back to the profile list. To the right of the new profile, click the Edit link to open a new tab to the Visual Policy Editor (VPE). 
Now, before we dive into the VPE, let's take a moment to talk about how authentication is going to work here. As previously stated, we are not going to dig into things like Kerberos or NTLM, but we still need something to authenticate to. Once you have something simple working, you can quickly shim in the actual authentication protocol. So let's do basic LocalDB authentication to prove out the configuration. Hop down to Access -> Authentication -> Local User DB -> Instances, and click Create New Instance. Create a simple LocalDB instance: Settings: Name: provide a unique name Leave the remaining settings as is and click OK. Now go to Access -> Authentication -> Local User DB -> Users, and click Create New User. Settings: User Name: provide a unique user name Password: provide a password Instances: selected the LocalDB instance Leave everything else as is and click OK. Now go back to the VPE. You're ready to define your authentication policy. With some exceptions, most explicit forward proxy authentication policies will minimally include a 407 Proxy-Authenticate agent and an authentication agent. The 407 Proxy-Authenticate agent will issue the 407 Proxy-Authenticate response to the client, and pass the user's submitted authentication data (Basic Authorization header, NTLM token, Kerberos ticket) to the auth agent behind it. The auth agent is then responsible for validating that submission and allowing (or denying) access. Since we're using a simple LocalDB to test this, we'll configure this for Basic authentication. Figure: 407-based SWG-Explicit authentication policy 407 HTTP Response Agent Settings: Properties Basic Auth: enter unique text here HTTP Auth Level: select Basic Branch Rules Delete the existing Negotiate Branch Authentication Agent Settings: Type: LocalDB Auth LocalDB Instance: your Local DB instance Note again that this is a simple explicit forward proxy test using a local database for HTTP Basic authentication. Once you have this working, it is super easy to replace the LocalDB method with the authentication protocol you need. Now head back to your SSL Orchestrator explicit proxy configuration. Navigate to the Interception Rules page. On that page you will see a setting for Access Profile. Select your SWG-Explicit access policy here. And that's it. Deploy the configuration and you're done. Configure your browser to point to the SSL Orchestrator explicit proxy IP and port, if you haven't already, and attempt to access an external URL (ex. https://www.f5.com). Since this is configured for HTTP Basic authentication, you should see a popup dialog in the browser requesting username and password. Enter the values you created in the LocalDB user properties. In following articles, I will show you how to configure Kerberos and NTLM for forward proxy authentication. If you want to see what this communication actually looks like on the wire, you can either enable your browser's developer tools, network tab. Or for a cleaner view, head over to a command line on your client and use the cURL command (you'll need cURL installed on your workstation): curl -vk --proxy [PROXY IP:PORT] https://www.example.com --proxy-basic --proxy-user '[username:password]' Figure: cURL explicit proxy output What you see in the output should look pretty close to the explicit proxy diagram from earlier. And if your SSL Orchestrator security policy is defined to intercept TLS, you will see your local CA as the example.com CA issuer. 
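Two more quick checks can be run from the same client, using the same placeholder proxy address. The first simply omits the credentials so you can see the raw 407 challenge and the Proxy-Authenticate: Basic header come back from the proxy. The second uses openssl (the -proxy option requires OpenSSL 1.1.0 or newer) to print the issuer of the certificate presented through the proxy; with TLS interception enabled, this should be your local signing CA. Note that s_client sends an unauthenticated CONNECT, so run that check before the authentication policy is attached, or from a source the policy does not challenge.
# Expect an HTTP/1.1 407 Proxy Authentication Required response with a Proxy-Authenticate header
curl -vk --proxy [PROXY IP:PORT] https://www.example.com
# Print the issuer of the forged server certificate seen through the proxy
openssl s_client -proxy [PROXY IP:PORT] -connect www.example.com:443 < /dev/null 2>/dev/null | openssl x509 -noout -issuer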
Transparent Forward Proxy Authentication
I intentionally started with explicit proxy authentication because it’s usually the easiest to get your head around. Transparent forward proxy authentication is a bit different, but you very likely see it all the time. If you’ve ever connected to hotel, airport/airplane, or coffee shop WiFi, and you were presented with a webpage or popup screen that asked for a username, a room number, or asked you to agree to some terms of use, you were using transparent authentication. In this case though, it is commonly referred to as a “Captive Portal”. Note that captive portal authentication was introduced to SSL Orchestrator in version 6.0. Captive portal authentication basically works like this:
On first connecting, you navigate to a remote URL (ex. www.f5.com), which passes through a security device (a proxy server, or in the case of hotel/coffee shop WiFi, an access point).
The device has never seen you before, so it issues an HTTP redirect to a separate URL. This URL will present an authentication point, usually a web page with some form of identity verification, user agreement, etc.
You do what you need to do there, and the authentication page redirects you back to the original URL (ex. www.f5.com) and either stores some information about you, or sends something back with you in the redirect (a token).
On passing back through the proxy (or access point), you are recognized as an authenticated user and allowed to pass. The token is stored for the life of your session so that you are not sent back to the captive portal.
Figure: Captive-portal Authentication Process
The real beauty here is that you are not as limited in the mechanisms you use to authenticate as you are in an explicit proxy. The captive portal URL is essentially a webpage, so you could use NTLM, Kerberos, Basic, certificates, federation, OAuth, a logon page, basically anything. Configuring this in APM is also super easy. Head on over to Access -> Profiles / Policies -> Access Profiles (Per-session policies), and click the Create button.
Settings:
Name: provide a unique name
Profile Type: SWG-Transparent
Profile Scope: leave it at 'Named'
Named Scope: enter a unique value here (ex. SSO)
Customization Type: set this to 'Standard'
Again, don’t let the name confuse you. Secure Web Gateway (SWG) is not required to perform transparent forward proxy authentication. Click Finished to complete. You’ll be taken back to the profile list. To the right of the new profile, click the Edit link to open a new tab to the Visual Policy Editor (VPE). We are going to continue to use the LocalDB authentication method here to keep the configuration simple. But in this case, you could extend that to do Basic authentication or a logon page. If you do Basic, Kerberos, or NTLM, you’ll be using a “401 authentication model”. This is very similar to the 407 model, except that 401 interacts directly with the user. And again, this is just an example. Captive portal authentication isn’t dependent on browser proxy authentication capabilities, and can support pretty much any user authentication method you can throw at it.
Figure: 401-based SWG-Transparent authentication policy
401 Authentication Agent Settings:
Properties
Basic Auth: enter unique text here
HTTP Auth Level: select Basic
Branch Rules
Delete the existing Negotiate Branch
Authentication Agent Settings:
Type: LocalDB Auth
LocalDB Instance: your Local DB instance
Now, there are a few additional things to do here.
Transparent proxy (captive portal) authentication actually requires two access profiles. The authentication profile you just created gets applied to the captive portal (authentication URL). You need a separate access profile on the proxy listener to redirect the user to the captive portal if no token exists for that user. As it turns out, an SSL Orchestrator security policy is indeed a type of access profile, so it simply gets modified to point to the captive portal URL. The 'named' profile scope you selected in the above authentication profile defines how the two profiles share user identity information, thus both will have a named profile scope, and must use the same named scope value (ex. SSO). You will now create the second access profile:
Settings:
Name: provide a unique name
Profile Type: SSL Orchestrator
Profile Scope: leave it at 'Named'
Named Scope: enter a unique value here (ex. SSO)
Customization Type: set this to 'Standard'
Captive Portals: select 'Enabled'
Primary Authentication URI: enter the URL of the captive portal (ex. https://login.f5labs.com)
You now need to create a virtual server to hold your captive portal. This is the URL that users are redirected to for authentication (ex. https://login.f5labs.com). The steps are as follows:
Create a certificate and private key to enable TLS
Create a client SSL profile that contains the certificate and private key
Create a virtual server
Destination Address/Mask: enter the IP address that the captive portal URL resolves to
Service Port: enter 443
HTTP Profile (Client): select 'http'
SSL Profile (Client): select your client SSL profile
VLANs and Tunnels: enable for your client-facing VLAN
Access Profile: select your captive portal access profile
Head back into your SSL Orchestrator outbound transparent proxy topology configuration, and go to the Interception Rules page. Under the 'Access Profile' setting, select your new SSL Orchestrator access profile and re-deploy. That’s it. Now open a browser and attempt to access a remote resource. Since this is using Basic authentication with LocalDB, you should get prompted for username and password. If you look closely, you will see that you’ve been redirected to your captive portal URL. 401 Basic authentication is not connection based, so APM stores the user session information by client IP. If you do not get prompted for authentication, it’s likely you have an active session already. Navigate to Access -> Overview -> Active Sessions. If you see your LocalDB user account name listed there, delete it and try again (close and re-open the browser). And there you have it. In just a few steps you’ve configured your SSL Orchestrator outbound topology to perform user authentication, and along the way you have hopefully recognized the immense flexibility at your command. Thanks.
SSL Orchestrator Advanced Use Cases: Local DNS Proxy Cache
Introduction F5 BIG-IP is synonymous with "flexibility". You likely have few other devices in your architecture that provide the breadth of capabilities that come native with the BIG-IP platform. And for each and every BIG-IP product module, the opportunities to expand functionality are almost limitless. In this article series we examine the flexibility options of the F5 SSL Orchestrator in a set of "advanced" use cases. If you haven't noticed, the world has been steadily moving toward encrypted communications. Everything from web, email, voice, video, chat, and IoT is now wrapped in TLS, and that's a good thing. The problem is, malware - that thing that creates havoc in your organization, that exfiltrates personnel records to the Dark Web - isn't stopped by encryption. TLS 1.3 and multi-factor authentication don't eradicate malware. The only reasonable way to defend against it is to catch it in the act, and an entire industry of security products are designed for just this task. But ironically, encryption makes this hard. You can't protect against what you can't see. F5 SSL Orchestrator simplifies traffic decryption and malware inspection, and dynamically orchestrates traffic to your security stack. But it does much more than that. SSL Orchestrator is built on top of F5's BIG-IP platform, and as stated earlier, is abound with flexibility. SSL Orchestrator Use Case: Local DNS Proxy Cache A basic tenet of an explicit proxy is DNS resolution. That is, the proxy functions to perform DNS requests on the client's behalf. This is a useful security characteristic as it abstracts DNS away from the client and generally prevents IP spoofing. For example, a client cannot spoof the IP of a known-bypassed host because an explicit proxy client does not control the IP address. The client passes a URL to the proxy, and the proxy resolves the URL to the correct address. Needless to say, however, if you have a thousand hosts perform 100 DNS requests per hour (each) in an environment, and you shift that to an explicit proxy, you now have a proxy server that's performing potentially 100,000 DNS requests per hour. Now fortunately, a large portion of the traffic from each of these clients will be going to the same places (ex. google.com), so the proxy server can locally cache a lot of these. But at some point, a dedicated DNS proxy cache can be useful to offload this burden, and in a configurable way. In an SSL Orchestrator environment, you may also have an explicit proxy security device plugged into a decrypted service chain, and that explicit proxy will also need DNS resolution. So if you've configured an SSL Orchestrator explicit proxy topology, and sending decrypted traffic to an explicit proxy in a service chain, you now have two devices that need DNS. As you've probably guessed by now, there's an elegant solution to this problem. Using a DNS cache profile on the BIG-IP, you can point the topology explicit proxy resolver, and the inline service resolver at the same cache. As traffic arrives at the topology, an initial DNS request flows to the DNS cache. If an answer doesn't exist, the request is forwarded to a defined authority and the answer cached. When the decrypted traffic arrives at the inline service, it attempts a DNS request to the same place and gets an immediate response from the cache. 
This has additional benefits in both optimizing traffic flow through the inline proxy device, as it doesn't have to wait for a DNS response from an external source, and also removes the need for this inline obfuscated security device to have to communicate outside of its secure enclave (for DNS). It may still have to communicate beyond the enclave for software, signature and licensing updates, but those are not real-time traffic concerns. This article provides a simple set of steps to build a local DNS proxy cache for SSL Orchestrator. [figure 1: SSL Orchestrator with DNS proxy cache] ** Note that a DNS proxy cache requires the DNS license, which also requires SSL Orchestrator to be licensed as an add-on to base LTM. ** Configuration Configuring a local DNS proxy cache involves creating the DNS cache and virtual server to hold it. This virtual server basically load balances external DNS and enables optimization through caching. You will point both the SSL Orchestrator resolver and the inline proxy DNS at this virtual server to take full benefit of the optimization. Also note that this could simply be used by an inline proxy service, in the absence of an explicit proxy SSL Orchestrator topology. This allows you to both optimize DNS for the inline proxy, and also not have to build a service control channel for inline proxy DNS to get out to an external DNS resource. Create DNS cache - Located under DNS -> Caches -> Cache List, click the Create button. Resolver Type: Transparent (None) Create DNS profile - Located under Local Traffic -> Profiles -> Services -> DNS, click the Create button. DNS Cache: Enabled DNS Cache Name: <above DNS cache> Create DNS proxy pool - This is the actual DNS resource (ex. 8.8.8.8:53). Find this under Local Traffic -> Pools. Create DNS proxy VIP (on XP service's outbound subnet) - This is the virtual server that load balances the DNS resource. Find this under Local Traffic Virtual Servers. It is important to create the inline proxy service first as that will establish the VLAN and IP subnet that this virtual server will attach to. Type: Standard Source Address: 0.0.0.0/0 Destination Address/Mask: select a unique IP address on the inline proxy service's outbound (from-service) subnet (ex. 198.19.96.153) Service Port: 53 Protocol: UDP Protocol Profile (Client): udp DNS Profile: select the previously-created DNS profile VLAN: select the inline proxy service's outbound (from-service) VLAN Source Address Translation: SNAT as required for routing Address Translation: enabled Port Translation: enabled Default Pool: select the DNS server pool Configure the SSL Orchestrator resolver to point to DNS proxy VIP - Navigate to the SSL Orchestrator UI and in the top right corner click on the gear icon to access the DNS resolver configuration. Enter this same virtual server IP address. Configure the inline explicit proxy service to point to the DNS proxy VIP - DNS requests will leave the inline proxy service on the service's from-service VLAN to the DNS proxy VIP. Optionally create a system route if the DNS pool requires a gateway route - If you need a route to get to the actual DNS resource, create a system route. This is found under Network -> Routes. To test, initiate a tcpdump capture on your outbound VLAN and look for port 53 traffic. Assuming an explicit proxy SSL Orchestrator topology, you should see a single outbound DNS request. The DNS request from the inline proxy device will be served directly from the newly cached data. 
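For reference, a rough tmsh sketch of the objects described above follows. All object names, the 198.19.96.153 listener address, the 8.8.8.8 upstream resolver, and the VLAN placeholder are examples only, and the attribute names should be double-checked against your TMOS version; the SSL Orchestrator resolver and the inline proxy's DNS setting still need to be pointed at this listener as described above. The tcpdump command below can then confirm that only a single upstream query leaves the BIG-IP.
# Transparent DNS cache and a DNS profile that uses it
tmsh create ltm dns cache transparent sslo_dns_cache
tmsh create ltm profile dns sslo_dns_profile enable-cache yes cache sslo_dns_cache
# Pool containing the real upstream resolver
tmsh create ltm pool dns_upstream_pool members add { 8.8.8.8:53 }
# UDP/53 listener on the inline proxy service's outbound (from-service) VLAN
tmsh create ltm virtual dns_proxy_vip destination 198.19.96.153:53 ip-protocol udp profiles add { udp sslo_dns_profile } pool dns_upstream_pool source-address-translation { type automap } translate-address enabled translate-port enabled vlans-enabled vlans add { [from-service-vlan] }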
tcpdump -lnni [outbound-vlan] port 53
And there you have it. In just a few steps you've configured your SSL Orchestrator security policy to take advantage of a local DNS proxy cache, and along the way you have hopefully recognized the immense flexibility at your command.
Packet Analysis with Scapy and tcpdump: Checking Compatibility with F5 SSL Orchestrator
In this guide I want to demonstrate how you can use Scapy (https://scapy.net/) and tcpdump for reproducing and troubleshooting layer 2 issues with F5 BIG-IP devices. Just in case you get into a finger-pointing situation...
Starting situation
This is a quite recent story from the trenches: My customer uses a Bypass Tap to forward or mirror data traffic to inline tools such as IDS/IPS, WAF or threat intelligence systems. This Bypass Tap offers a feature called Network Failsafe (also known as Fail-to-Wire). This is a fault tolerance feature that protects the flow of data in the event of a power outage and/or system failure. It allows traffic to be rerouted while the inline tools (IDS/IPS, WAF or threat intelligence systems) are shutting down, restarting, or unexpectedly losing power (see the red line named Fallback in the picture below). Since the Bypass Tap itself does not have support for SSL decryption and re-encryption, an F5 BIG-IP SSL Orchestrator shall be introduced as an inline tool in a Layer 2 inbound topology. Tools directly connected to the Bypass Tap will be connected to the SSL Orchestrator for better visibility. To check the status of the inline tools, the Bypass Tap sends health checks through the inline tools. What is sent on one interface must be seen on the other interface and vice versa. So if all is OK (health check is green), traffic will be forwarded to the SSL Orchestrator, decrypted and sent to the IDS/IPS and the TAP, and then re-encrypted and sent back to the Bypass Tap. If the Bypass Tap detects that the SSL Orchestrator is in a failure state, it will just forward the traffic to the switch. This is the traffic flow of the health checks:
Target topology
This results in the following topology:
Problem description
During commissioning of the new topology, it turned out that the health check packets are not forwarded through the vWire configured on the BIG-IP. A packet analysis with Wireshark revealed that the manufacturer uses ARP-like packets with opcode 512 (HEX 02 00). This opcode is not defined in the RFC that describes ARP (https://datatracker.ietf.org/doc/html/rfc826); the RFC only describes the opcodes Request (1 or HEX 00 01) and Reply (2 or HEX 00 02). NOTE: Don't get confused that you see ARP packets on port 1.1 and 1.2. They are not passing through; the Bypass Tap is just sending those packets from both sides of the vWire, as explained above. The source MACs on ports 1.1 and 1.2 are different. Since the Bypass Tap is located right behind the customer's edge firewall, lengthy and time-consuming tests on the live system are not an option, as they would result in a massive service interruption. Therefore, a BIG-IP i5800 (the same model as the customer's) was set up as SSL Orchestrator and a vWire configuration was built in my employer's lab. The vWire configuration can be found in this guide (https://clouddocs.f5.com/sslo-deployment-guide/chapter2/page2.7.html). INFO: For those not familiar with vWire: "Virtual wire … creates a layer 2 bridge across the defined interfaces. Any traffic that does not match a topology listener will pass across this bridge."
Lab Topology
The following topology was used for the lab: I built a vWire configuration on the SSL Orchestrator, as in the customer's environment. A Linux system with Scapy installed was connected to Interface 1.1. With Scapy, TCP, UDP and ARP packets can be crafted or sent like a replay from a Wireshark capture. Interface 1.3 was connected to another Linux system that should receive the ARP packets.
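If you want to reproduce the packet inspection part yourself, a plain tcpdump on the BIG-IP is enough to read the ARP opcode straight off the wire. The command below is a sketch: interface 1.1 matches the lab wiring above, but depending on platform and version you may need to capture on the vWire VLAN or on 0.0 instead of the physical interface. The -e flag prints the Ethernet header and -XX dumps the raw bytes, so a standard reply carries 00 02 in the ARP opcode field while the vendor's health check frames carry 02 00.
# Run on the BIG-IP: capture ARP frames on the ingress vWire interface and dump the raw bytes
tcpdump -nnei 1.1 -XX arp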
All tcpdumps were captured on the F5 and analyzed on the admin system (not plotted).
Validating vWire Configuration
To check the functionality of the F5 and the vWire configuration, two tests were performed: a replay of the health check packets from the Bypass Tap, and a test with RFC-compliant ARP requests.
Use Scapy to resend the faulty packets
First, I used Wireshark to extract a single packet from the packet analysis we took in the customer environment and saved it to a pcap file. I replayed this pcap file to the F5 with Scapy. The sendp() function works at layer 2; it requires the parameters rdpcap (location of the pcap file for replay) and iface (which interface to use for sending).
webserverdude@tux480:~$ sudo scapy -H
WARNING: IPython not available. Using standard Python shell instead. AutoCompletion, History are disabled.
Welcome to Scapy (2.5.0)
>>> sendp(rdpcap("/home/webserverdude/cusomter-case/bad-example.pcap"),iface="enp0s31f6")
.
Sent 1 packets.
This test confirmed the behavior that was observed in the customer's environment. The F5 BIG-IP does not forward this packet.
Use PING and Scapy to send RFC-compliant ARP packets
To create RFC-compliant ARP requests, I first sent an ARP request (opcode 1) through the vWire via the PING command. As expected, this was sent through the vWire. To ensure that this also works with Scapy, I resent this packet with Scapy as well.
>>> sendp(rdpcap("/home/webserverdude/cusomter-case/good-example.pcap"),iface="enp0s31f6")
.
Sent 1 packets.
In the Wireshark analysis it can be seen that this packet is incoming on port 1.1 and then forwarded to port 1.3 through the vWire.
Solving the issue with the help of the vendor
It became evident that the BIG-IP was dropping ARP packets that failed to meet RFC compliance, rendering the Bypass Tap from this particular vendor seemingly incompatible with the BIG-IP. Following my analysis, the vendor was able to develop and provide a new firmware release addressing this issue. To verify that the issue was resolved in this firmware release, my customer's setup, the exact same model of the Bypass Tap and a BIG-IP i5800, were deployed in my lab, where the new firmware underwent thorough testing. With this approach I could test the functionality and compatibility of the systems under controlled conditions. In this Wireshark analysis it can be seen that the health check packets are incoming on port 1.1 and then forwarded to port 1.3 through the vWire (marked in green) and also the other way round, coming in on port 1.3 and then forwarded to port 1.1 (marked in pink). You can also now see that the packet is a proper gratuitous ARP reply (https://wiki.wireshark.org/Gratuitous_ARP). Because the health check packets were no longer dropped by the BIG-IP, but were forwarded through the vWire, the Bypass Tap subsequently marked the BIG-IP as healthy and available. The new firmware resolved the issue. Consequently, my customer could confidently proceed with this project, free from the constraints imposed by the compatibility issue.
Data transiting between clients (e.g. PCs, tablets, phones, etc.) and servers is predominantly encrypted with Secure Socket Layer (SSL) or the newer Transport Layer Security (TLS) (For reference, see the 2019 TLS Telemetry Report Summary from F5 Labs). Pervasive encryption results in threats being hidden and invisible to security inspection unless traffic is decrypted. This creates serious risks, leaving organizations vulnerable to costly data breaches and loss of intellectual property. An integrated F5® SSL Orchestrator™ and McAfee Data Loss Prevention (DLP) solution solves this SSL/TLS challenge across cloud, mobile, and on-premises environments. SSL Orchestrator centralizes SSL inspection throughout the complex security architectures, providing high-performance decryption of web traffic for security services like McAfee DLP to detect and block data breaches hidden by encryption. This joint solution thus eliminates the blind spots introduced by SSL and closes any opportunity for attackers. Solution Overview F5 SSL Orchestrator, deployed inline to the wire traffic, intercepts any outbound secure web request and establishes two separate SSL connections, one each with the client (the user device) and the requested web server. This creates a decryption zone between the client and the server for inspection. Within the inspection zone, both unencrypted HTTP and decrypted HTTPS requests are encapsulated within Internet Content Adaptation Protocol (ICAP, RFC3507) and steered to the McAfee DLP systems for inspection and possible request modification (REQMOD). In this context, SSL Orchestrator is the ICAP client and McAfee DLP is the ICAP server. After inspection, user HTTPS requests are re-encrypted by SSL Orchestrator, on their way to the web server. The same process of decryption, inspection, and re-encryption takes place for the return response from the web server to the client. Bill of Materials F5 SSL Orchestrator 16.0 Optional functional add-ons include URL filtering subscription, IP intelligence subscription and network hardwaresecurity module (HSM) McAfee Data Loss Prevention 11.4 Pre-requisites F5 SSL Orchestrator is licensed and set up with internal and external VLANs, and self-IP addresses. An SSL certificate—preferably a subordinate certificate authority (CA)—and private key are imported into SSL Orchestrator. The CA certificate chain with root certificate is imported into the client browser. Solution Configuration Steps The solution deployment involves policy creation on McAfee DLP and configuration of SSL Orchestrator on the F5 system. I. Configure DLP Policy Log in to the McAfee ePolicy Orchestrator [ePO] system and create a rule set to block PII related violations and assign it to a DLP policy. II. Deploy SSL Orchestrator using Guided Configuration The SSL Orchestrator guided configuration presents a completely new and streamlined user experience. This workflow-based architecture provides guided configuration steps tailored to a selected topology. Step 1: Topology Properties SSL Orchestrator creates discreet configurations based on the selected topology. Selecting explicit forward proxy topology (as shown in the example) will create an explicit proxy listener. Step 2: SSL Configuration Select the previously imported subordinate CA certificate (see Prerequisites, above) for signing and issuing certificates to the end-host for client-requested HTTPS websites that are intercepted by SSL Orchestrator. 
Step 3: Create the McAfee DLP ICAP Service
The services list section defines the security services that interact with SSL Orchestrator. The guided configuration includes a services catalog that contains common product integrations. In the service catalog, double click the McAfee DLP ICAP service and configure the service settings: McAfee DLP IP address, port, URI paths and preview maximum length. Using the service catalog, create additional security services as required before proceeding to the next step.
Step 4: Service Chains
Create a service chain, which is an arbitrarily ordered list of security devices. The service chain determines which services receive decrypted traffic.
Step 5: Security Policy
SSL Orchestrator's guided configuration presents an intuitive rule-based, drag-and-drop user interface for the definition of security policies. In the background, SSL Orchestrator maintains these security policies as visual per-request policies. If traffic processing is required that exceeds the capabilities of the rule-based user interface, the underlying per-request policy can be managed directly. Use this section to create custom rules as required.
Step 6: Intercept Rule
Interception rules are based on the selected topology and define the listeners (analogous to BIG-IP Local Traffic Manager virtual servers) that accept and process different types of traffic, such as TCP, UDP, or other. The resulting listeners will bind the SSL settings, VLANs, IPs, and security policies created in the topology workflow.
Step 7: Egress Settings
The egress settings section defines topology-specific egress characteristics like NAT and outbound route.
Step 8: Summary
Review the settings and click Deploy SSL Orchestrator.
III. Verification
Open a browser and navigate to https://dlptest.com (DLPTest.com is a DLP testing resource that focuses on testing to make sure your DLP software is working correctly). In the HTTPS Post section, input some PII data in the text box (an example of PII data is 'ABC Smith, 123-45-6789, 123 Main St, Seattle WA 98008') and click on the Submit button. You will see the 'Access Denied' message in the response. The DLP Incident Manager web page reports the PII violation. If you prefer to drive this test from a command line, a curl sketch is included at the end of this article.
Additional Resources
Learn more about SSL Orchestrator on f5.com
Recommended best practices guide: F5 SSL Orchestrator and McAfee DLP solution
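As mentioned in the verification step, the same PII test can be driven from a command line instead of a browser. The sketch below assumes an SSL Orchestrator explicit forward proxy; the proxy address, the target URL path, and the form field name are placeholders only, since any HTTPS POST carrying the sample PII string through the decrypted path should be handed to McAfee DLP over ICAP and rejected just like the browser test.
# Post sample PII through the SSL Orchestrator explicit proxy; expect the DLP block response
curl -vk --proxy [SSLO PROXY IP:PORT] https://dlptest.com/https-post/ --data 'payload=ABC Smith, 123-45-6789, 123 Main St, Seattle WA 98008'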