Published on 06-Jan-2020 10:13
This article is the beginning of a multi-part series on implementing BIG-IP SSL Orchestrator. It includes high availability and central management with BIG-IQ.
Implementing SSL/TLS Decryption is not a trivial task. There are many factors to keep in mind and account for, from the network topology and insertion point, to SSL/TLS keyrings, certificates, ciphersuites and on and on. This article focuses on pre-deployment tasks and preparations for SSL Orchestrator.
This article is divided into the following high level sections:
Please forgive me for using SSL and TLS interchangeably in this article.
Software versions used in this article:
BIG-IP Version: 14.1.2
SSL Orchestrator Version: 5.5
BIG-IQ Version: 7.0.1
Data transiting between clients (PCs, tablets, phones, etc.) and servers is predominantly encrypted with Secure Sockets Layer (SSL) and its successor, Transport Layer Security (TLS) (ref. Google Transparency Report).
Pervasive encryption means that threats are now predominantly hidden and invisible to security inspection unless traffic is decrypted. The decryption and encryption of data by different devices performing security functions potentially adds overhead and latency. The picture below shows a traditional chaining of security inspection devices such as a filtering web gateway, a data loss prevention (DLP) tool, an intrusion detection system (IDS), and a next-generation firewall (NGFW).
Also, TLS/SSL operations are computationally intensive and stress the security devices’ resources. This leads to sub-optimal use of resources, where compute time is spent encrypting and decrypting rather than inspecting.
F5’s BIG-IP SSL Orchestrator offers a solution to optimize resource utilization, reduce latency, and add resilience to the security inspection infrastructure.
F5 SSL Orchestrator ensures encrypted traffic can be decrypted, inspected by security controls, then re-encrypted—delivering enhanced visibility to mitigate threats traversing the network. As a result, you can maximize your security services investment for malware, data loss prevention (DLP), ransomware, and next-generation firewalls (NGFW), thereby preventing inbound and outbound threats, including exploitation, callback, and data exfiltration.
The SSL Orchestrator decrypts traffic and forwards the unencrypted traffic to the different security devices for inspection, leveraging its optimized and hardware-accelerated SSL/TLS stack. As shown below, the BIG-IP SSL Orchestrator classifies traffic and selectively decrypts it, then forwards it to the appropriate security functions for inspection. Finally, once duly inspected, the traffic is re-encrypted and sent on its way to the resource the client is accessing.
Deploying F5 and inline security tools together has the following benefits:
Improve the scalability of inline security by distributing the traffic across multiple security appliances, allowing them to share the load and inspect more traffic.
Add, remove, and/or upgrade security appliances without disrupting network traffic; convert security appliances from out-of-band monitoring to inline inspection on the fly, without rewiring.
This document focuses on the implementation of BIG-IP SSL Orchestrator to process SSL/TLS-encrypted traffic and forward it to security inspection/enforcement devices. The decryption and forwarding behavior is determined by the security policy. This ensures that only targeted traffic is decrypted, in compliance with corporate and regulatory policy, data privacy requirements, and other relevant factors.
The configuration supports encrypted traffic that originates from within the data center or the corporate network. It also supports traffic originating from clients outside of the security perimeter accessing resources inside the corporate network or demilitarized zone (DMZ) as depicted below.
The decrypted traffic transits through different inspection devices for inbound and outbound traffic. As an example, inbound traffic is decrypted and processed by F5’s Advanced Web Application Firewall (F5 Advanced WAF) as shown below.
* Can be encrypted or cleartext as needed
As an example, outbound traffic is decrypted and sent to a next generation firewall (NGFW) for inspection as shown in the diagram below.
The BIG-IP SSL Orchestrator solution offers five different configuration templates. The following topologies are discussed in Network Insertion Use Cases.
In the use case described herein, the BIG-IP is inserted as a layer 3 (L3) network device and is configured with an L3 Outbound Topology.
The assumption is that, prior to the insertion of BIG-IP SSL Orchestrator into the network (a brownfield environment), the network looks like the one depicted below. Actual networks will vary in IP addressing and L2/L3 connectivity; however, this is deemed to be a representative setup.
Note: All IP addressing in this document is provided as examples only. Private IP addressing (RFC 1918) is used as in most corporate environments.
Note: The management network is not depicted in the picture above. Management and visibility are discussed further in Centralized Management below.
The following is a description of the different reference points shown in the diagram above.
a. This is the connection of the border routers that connect to the internet and other WAN and private links. Typically, private IP addressing space is used from the border routers to the firewalls.
b. The border switching connects to the corporate/infrastructure firewall. Resilience is built into this switching layer by implementing two link aggregates (LAG, or Port Channel®).
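If the BIG-IP devices are themselves attached with link aggregation, the equivalent construct on BIG-IP is a trunk. The following is a minimal tmsh sketch; the trunk name and interface numbers are hypothetical and will differ per platform and cabling.

```shell
# Create a two-member link aggregate (trunk) with LACP on the BIG-IP.
# "border_lag" and interfaces 1.1/1.2 are example values only.
tmsh create net trunk border_lag interfaces add { 1.1 1.2 } lacp enabled
```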
c. The “demilitarized zone” (DMZ) switches are connected to the firewall. The DMZ network hosts applications that are accessible from untrusted networks such as the Internet.
d. Application servers connect into the DMZ switch fabric.
e. Firewalls connect into the switch fabric. Typically, core and distribution infrastructure switching provides L2 and L3 switching to the enterprise (in some cases there may be additional L3 routing for larger enterprises/entities that require dynamic routing and other advanced L3 services).
f. The connection between the core and distribution layers is represented by a bus in the figure above because the actual connection schema is too intricate to picture; the writer has taken the liberty of drawing a simplified representation. Switches actually interconnect with a mixture of link aggregation and provide differentiated switching using virtualization (e.g. VLAN tagging, 802.1Q), and possibly further frame/packet encapsulation (e.g. QinQ, VXLAN).
g. The core and distribution switching are used to create two broadcast domains: one is the client network, and the other is the internal application network.
h. The internal applications are connected to their own subnet.
The BIG-IP SSL Orchestrator solution is implemented as depicted below.
In the diagram above, new network connections are depicted in orange (vs. blue for existing connections). As in the diagram showing the original network, the switching for the DMZ is depicted using a bus representation to keep the diagram simple.
The following discusses the different reference points in the diagram above:
a. The BIG-IP SSL Orchestrator is connected to the core switching infrastructure. A new VLAN and network are created on the core switching infrastructure to connect the firewalls (North) to the BIG-IP SSL Orchestrator devices.
b. The client network (South) is connected to the BIG-IP via a second VLAN and network.
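As a rough tmsh sketch of the North and South attachment points, the two VLANs and their self IPs might be created as follows. All names, VLAN tags, interfaces, and RFC 1918 addresses below are hypothetical examples, not values from this deployment.

```shell
# North VLAN and self IP: connects the firewalls to the SSL Orchestrator devices.
tmsh create net vlan north_vlan interfaces add { 1.1 { tagged } } tag 60
tmsh create net self north_self address 10.0.60.245/24 vlan north_vlan

# South VLAN and self IP: connects the client network to the BIG-IP.
tmsh create net vlan south_vlan interfaces add { 1.2 { tagged } } tag 70
tmsh create net self south_self address 10.0.70.245/24 vlan south_vlan
```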
c. The SSL Orchestrator devices are connected to a newly created inspection network. This network is kept separate from the rest of the infrastructure as client traffic transits through the inspection devices unencrypted. As an example, Web Application Firewalls (BIG-IP ASM) are used to filter inbound traffic.
d. The LAN configuration for the connection to the BIG-IP ASM is as depicted below.
e. The NGFW is connected to the INSPECTION switching network in such a manner that traffic traverses it when the BIG-IP SSL Orchestrator is configured to push traffic for inspection.
This article should be a good starting point for planning your initial SSL Orchestrator deployment. We covered the solution overview and use cases, and the network topology and architecture were explained with the help of diagrams.
Click Next to proceed to the next article in the series.
Great article, Kevin. This is what customers and partners have been asking for. Looking forward to the other deployment methods.
Great series, it's exactly what I was looking for 🙂 Have some questions (as usual):
Sorry for so many questions. If you covered those in other parts (I have not yet been able to read them all), just ignore my questions.
I will try to answer your questions above.
1.- A couple of things:
-the article should read "d. The LAN configuration for the connection to the BIG-IP ASM is as depicted below"
-the article should also read "e. The NGFW is connected to the INSPECTION switching network [...]"
-the letters above refer to the figure showing the different reference points.
We'll work on correcting the formatting and typo in the article.
2.-Dynamic routing was not considered for this article series. Are you talking about routing on ingress/egress, or for services in the chain? I do not know what the implications are on ingress/egress; we would need to try that and possibly create another article on how to set that up successfully.
3.-There is confusion there. As you caught, there is no alternate path drawn in the figure to allow for policy-based routing. The insertion of services in existing networks is tricky. The article suggests that you can have an alternate path between the North and South switches (not shown in the diagram) and that the administrator can manipulate routing to selectively send traffic to the BIG-IP SSL Orchestrator. Please consider the following diagram for more information:
4.-Please refer to K7820 for SNAT uses and best practices. With BIG-IP configurations (SSLO or other modules), whenever a large number of connections are going to require SNAT, you want to make sure that SNAT pools are used to avoid port collisions (running out of ephemeral ports to initiate the connection). There are also some caveats to using SNAT Automap along with floating IP addresses.
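As a minimal tmsh sketch of that recommendation (the pool name and member addresses are hypothetical), a SNAT pool with several translation addresses spreads connections across members instead of relying on a single self IP:

```shell
# A SNAT pool with multiple translation addresses; each member contributes
# its own range of ephemeral ports, reducing the risk of port exhaustion.
tmsh create ltm snatpool inspection_snat members add { 10.0.80.10 10.0.80.11 10.0.80.12 }
```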
5.-BIG-IP is configured in L3 mode in this case and is a hop in the network, so the BIG-IP is not transparent from a network perspective and is not configured with vWire. The NGFW is configured to act as a transparent L2 device in the inspection/service chain.
I hope this helps clarify some things in this article.
Thanks a lot!! Great explanations 👍 Now everything is clear. Regarding SNAT, I just hadn't noticed that it's related only to Auto Map. Of course I understand that using a single IP for SNAT could be a limiting factor, but somehow I didn't consider it important without this hint 😌