What Is BIG-IP?
tl;dr - BIG-IP is a collection of hardware platforms and software solutions providing services focused on security, reliability, and performance.

F5's BIG-IP is a family of products covering software and hardware designed around application availability, access control, and security solutions. That's right, the BIG-IP name is interchangeable between F5's software and hardware application delivery controller and security products. This is different from BIG-IQ, a suite of management and orchestration tools, and F5 Silverline, F5's SaaS platform. When people refer to BIG-IP, they can mean a single software module in BIG-IP's software family or a hardware chassis sitting in your datacenter. This can cause a lot of confusion when people say they have a question about "BIG-IP," so we'll break it down here to reduce the confusion.

BIG-IP Software

BIG-IP software products are licensed modules that run on top of F5's Traffic Management Operating System (TMOS). This custom, event-driven operating system is designed specifically to inspect network and application traffic and make real-time decisions based on the configurations you provide. The BIG-IP software can run on hardware or in virtualized environments. Virtualized systems provide BIG-IP software functionality where hardware implementations are unavailable, including public clouds and various managed infrastructures where rack space is a critical commodity.

BIG-IP Primary Software Modules

- BIG-IP Local Traffic Manager (LTM) - Central to F5's full traffic proxy functionality, LTM provides the platform for creating virtual servers and performance, service, protocol, authentication, and security profiles to define and shape your application traffic. Most other modules in the BIG-IP family use LTM as a foundation for enhanced services.
- BIG-IP DNS - Formerly Global Traffic Manager, BIG-IP DNS provides security and load balancing features similar to those LTM offers, but at a global/multi-site scale. BIG-IP DNS offers services to distribute and secure DNS traffic advertising your application namespaces.
- BIG-IP Access Policy Manager (APM) - Provides federation, SSO, application access policies, and secure web tunneling. Allow granular access to your various applications and virtualized desktop environments, or just go full VPN tunnel.
- Secure Web Gateway Services (SWG) - Paired with APM, SWG enables access policy control for internet usage. You can allow, block, verify, and log traffic with APM's access policies, giving you flexibility around your acceptable internet and public web application use. You know... contractors and interns shouldn't use Facebook, but you don't want to be the one responsible when the CFO can't access their cat pics.
- BIG-IP Application Security Manager (ASM) - This is F5's web application firewall (WAF) solution. Traditional firewalls and layer 3 protection don't understand the complexities of many web applications. ASM allows you to tailor acceptable and expected application behavior on a per-application basis. Zero-day, DoS, and click-fraud attacks all rely on traditional security devices' inability to protect unique application needs; ASM fills the gap between traditional firewalls and tailored, granular application protection.
- BIG-IP Advanced Firewall Manager (AFM) - AFM is designed to reduce the hardware and extra hops required when ADCs are paired with traditional firewalls. Operating at L3/L4, AFM helps protect traffic destined for your data center.
Paired with ASM, AFM lets you implement protection services at L3-L7 for a full ADC and security solution in one box or virtual environment.

BIG-IP Hardware

BIG-IP hardware offers several types of purpose-built custom solutions, all designed in-house by our fantastic engineers; no white boxes here. BIG-IP hardware is offered via series releases, each offering improvements in performance and features determined by customer requirements. These may include increased port capacity, traffic throughput, CPU performance, FPGA feature functionality for hardware-based scalability, and virtualization capabilities. There are two primary variations of BIG-IP hardware: single-chassis designs and VIPRION modular designs. Each offers unique advantages for internal and colocated infrastructures. Updates in processor architecture, FPGA, and interface performance are common, so we recommend referring to F5's hardware page for more information.
Configure the F5 BIG-IP as an Explicit Forward Web Proxy Using LTM

In a previous article, I provided a guide on using F5's Access Policy Manager (APM) and Secure Web Gateway (SWG) to provide forward web proxy services. While that guide was for organizations looking to provide secure internet access for their internal users, URL filtering, and protection against both inbound and outbound malware, this guide will use only F5's Local Traffic Manager to allow internal clients external internet access. This week I was working with F5's very talented professional services team and we were presented with a requirement to allow workstation agents internet access to known secure sites to provide logs and analytics. Of course, this capability can be used to meet a number of other use cases, but this was a real-world use case I wanted to share. So with that, let's get to it!

Creating a DNS Resolver

Navigate to Network > DNS Resolvers > click Create
- Name: DemoDNSResolver
- Leave all other settings at their defaults and click Finished
Click the newly created DNS resolver object, click Forward Zones, then click Add. In this use case, we will be forwarding all requests to this DNS resolver.
- Name: .
- Address: 8.8.8.8 Note: Please use the correct DNS server for your use case.
- Service Port: 53
- Click Add and Finished

Creating a Network Tunnel

Navigate to Network > Tunnels > Tunnel List > click Create
- Name: DemoTunnel
- Profile: tcp-forward
- Leave all other settings at their defaults and click Finished

Create an HTTP Profile

Navigate to Local Traffic > Profiles > Services > HTTP > click Create
- Name: DemoExplicitHTTP
- Proxy Mode: Explicit
- Parent Profile: http-explicit
- Scroll until you reach the Explicit Proxy settings.
- DNS Resolver: DemoDNSResolver
- Tunnel Name: DemoTunnel
- Leave all other settings at their defaults and click Finished

Create an Explicit Proxy Virtual Server

Navigate to Local Traffic > Virtual Servers > click Create
- Name: explicit_proxy_vs
- Type: Standard
- Destination Address/Mask: 10.1.20.254 Note: This must be an IP address the internal clients can reach.
- Service Port: 8080
- Protocol: TCP Note: This use case was for TCP traffic directed at known hosts on the internet. If you require other protocols or all, select the correct option for your use case from the drop-down menu.
- Protocol Profile (Client): f5-tcp-progressive
- Protocol Profile (Server): f5-tcp-wan
- HTTP Profile: DemoExplicitHTTP
- VLAN and Tunnel Traffic: Enabled on: Internal
- Source Address Translation: Auto Map
- Leave all other settings at their defaults and click Finished

Create a Fast L4 Profile

Navigate to Local Traffic > Profiles > Protocol > Fast L4 > click Create
- Name: demo_fastl4
- Parent Profile: fastL4
- Enable Loose Initiation and Loose Close as shown in the screenshot below.
- Click Finished

Create a Wild Card Virtual Server

In order to catch and forward all traffic to the BIG-IP's default gateway, we will create a virtual server to accept traffic from our explicit proxy virtual server created in the previous steps. Navigate to Local Traffic > Virtual Servers > Virtual Server List > click Create
- Name: wildcard_VS
- Type: Forwarding (IP)
- Source Address: 0.0.0.0/0
- Destination Address: 0.0.0.0/0
- Protocol: *All Protocols
- Service Port: 0 *All Ports
- Protocol Profile: demo_fastl4
- VLAN and Tunnel Traffic: Enabled on... DemoTunnel
- Source Address Translation: Auto Map
- Leave all other settings at their defaults and click Finished.
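For readers who prefer the command line, here is roughly the same configuration expressed in tmsh. This is a minimal sketch using the same object names as the GUI steps above; tmsh syntax can vary between TMOS versions, so verify each command against your release before relying on it (comments are included for readability):

# DNS resolver with a catch-all forward zone pointing at 8.8.8.8
create net dns-resolver DemoDNSResolver forward-zones add { . { nameservers add { 8.8.8.8:53 } } }
# tcp-forward tunnel used by the explicit proxy
create net tunnels tunnel DemoTunnel profile tcp-forward
# Explicit HTTP proxy profile tying the resolver and tunnel together
create ltm profile http DemoExplicitHTTP defaults-from http-explicit proxy-type explicit explicit-proxy { dns-resolver DemoDNSResolver tunnel-name DemoTunnel }
# Explicit proxy virtual server listening on the internal VLAN
create ltm virtual explicit_proxy_vs destination 10.1.20.254:8080 ip-protocol tcp profiles add { f5-tcp-progressive { context clientside } f5-tcp-wan { context serverside } DemoExplicitHTTP } vlans-enabled vlans add { internal } source-address-translation { type automap }
# FastL4 profile with loose initiation/close for the wildcard forwarder
create ltm profile fastl4 demo_fastl4 defaults-from fastL4 loose-initialization enabled loose-close enabled
# Wildcard IP-forwarding virtual server listening on the tunnel
create ltm virtual wildcard_VS destination 0.0.0.0:any mask any ip-forward ip-protocol any profiles add { demo_fastl4 } vlans-enabled vlans add { DemoTunnel } source-address-translation { type automap }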
Testing and Validation

Navigate to a workstation on your internal network. Launch Internet Explorer or the browser of your preference. Modify the proxy settings to reflect the explicit_proxy_vs created in previous steps. Attempt to access several sites and validate you are able to reach them. Whether successful or unsuccessful, navigate to Local Traffic > Virtual Servers > Virtual Server List > click the Statistics tab. Validate traffic is hitting both of the virtual servers created above. If it is not, for troubleshooting purposes only, configure the virtual servers to accept traffic on All VLANs and Tunnels, and use tools such as curl and tcpdump. You have now successfully configured your F5 BIG-IP to act as an explicit forward web proxy using LTM only. As stated above, this configuration is not meant to fulfill all forward proxy use cases. If URL filtering and malware protection are required, APM and SWG integration should be considered. Until next time!
ADFS Proxy Replacement on F5 BIG-IP

With MS-ADFSPIP support released in BIG-IP v13.1, BIG-IP Access Policy Manager can now replace the Web Application Proxy servers that provide security for your modern AD FS deployment. This article will provide a one-stop shop for you to gather information on the solution and leverage it in your environment.

What is an AD FS Proxy?

AD FS proxies are Windows servers that provide external users with access to the AD FS farm in the internal network. This is done on a server called a Web Application Proxy (WAP). More recent versions of Active Directory Federation Services require the proxy to support MS-ADFSPIP (ADFS Proxy Integration Protocol), which involves client certificate auth between proxy and AD FS, trust establishment, header injection, and more. As noted above, BIG-IP APM v13.1 has support for MS-ADFSPIP. You can see Microsoft's notes on this and supported third-party proxies here, noting that F5 is on the list. Here's a typical ADFS deployment:

So what does BIG-IP do for me?

Glad you asked! Here's an example of the single-tier deployment architecture. You can also split these roles into a two-tier architecture. As you can see, BIG-IP is taking the roles of both load balancer and the web application proxies protecting AD FS. In this diagram we're adding additional security with Advanced WAF, DDoS, and Network Firewall services. You can see the F5/Microsoft announcement of this new feature at Ignite here. If you want to understand more about the architecture, check out John Wagnon's awesome lightboard lesson here.

How do I deploy it?

There are a few ways to do it. The simplest is with the latest iApp template, available from https://downloads.f5.com, to help you deploy everything. Make sure you're using at least v1.2.0rc6. You can also get the related deployment guide here. If you want to deploy manually, there are instructions in the deployment guide. The support article here also covers basic deployment and how the pieces work. Who doesn't love reading support articles? For the admin, the new feature comes down to this amazingly simple checkbox:

Checking a box and entering credentials is WAY easier than deploying multiple Windows servers, configuring them as WAPs, establishing trust, then maintaining and securing them going forward. Access Policy Manager will maintain that trust, exchanging certificates automatically with AD FS before they expire. Note that no access profile is assigned above. If you want one in order to add more security flexibility, an access profile is supported as well. Check the deployment guide for requirements. If you don't use one, no access sessions are used. Here's a quick video explaining the solution and demoing deployment using the iApp.

What else can I do?

You can add more security using access profiles to add preauthentication, multifactor, etc. A basic access policy (with Azure MFA optional) is included in the iApp. Also included in the iApp is network firewall policy deployment. You can add Advanced WAF features like brute force, credential stuffing, and bot protection if desired too.
The BIG-IP Application Security Manager Part 1: What is the ASM?

tl;dr - BIG-IP Application Security Manager (ASM) is a layer 7 web application firewall (WAF) available on F5's BIG-IP platforms.

Introduction

This article series was written a while back, but we are re-introducing it as a part of our Security Month on DevCentral. I hope you enjoy all the features of this very powerful module on the BIG-IP! This is the first of a 10-part series on the BIG-IP ASM. This module is a very powerful and effective tool for defending your applications and your peace of mind, but what is it really? And how do you configure it correctly and efficiently? How can you take advantage of all the features it has to offer? Well, the purpose of this article series is to answer these fundamental questions. So, join me as we dive into this really cool technology called the BIG-IP ASM!

The Basics

The BIG-IP ASM is a Layer 7, ICSA-certified Web Application Firewall (WAF) that provides application security in traditional, virtual, and private cloud environments. It is built on TMOS...the universal product platform shared by all F5 BIG-IP products. Among its core characteristics:
- It can run on any of the F5 Application Delivery Platforms...BIG-IP Virtual Edition, the BIG-IP 2000 through 11050 appliances, and all the VIPRION blades.
- It protects your applications from a myriad of network attacks, including the OWASP Top 10 most critical web application security risks.
- It is able to adapt to constantly-changing applications in very dynamic network environments.
- It can run standalone or integrated with other modules like BIG-IP LTM, BIG-IP DNS, BIG-IP APM, etc.

Why A Layer 7 Firewall?

Traditional network firewalls (Layer 3-4) do a great job preventing outsiders from accessing internal networks. But these firewalls offer little to no support in the protection of application layer traffic. As David Holmes points out in his article series on F5 firewalls, threat vectors today are being introduced at all layers of the network. For example, the Slowloris and HTTP Flood attacks are Layer 7 attacks...a traditional network firewall would never stop these attacks. But, nonetheless, your application would still go down if/when it gets hit by one of these. So, it's important to defend your network with more than just a traditional Layer 3-4 firewall. That's where the ASM comes in...

Some Key Features

The ASM comes pre-loaded with over 2,200 attack signatures. These signatures form the foundation for the intelligence used to allow or block network traffic. If these 2,200+ signatures don't quite do the job for you, never fear...you can also build your own user-defined signatures. And, as we all know, network threats are always changing, so the ASM is configured to download updated attack signatures on a regular basis.

Also, the ASM offers several different policy building features. Policy building can be difficult and time consuming, especially for sites that have a large number of pages. For example, DevCentral has over 55,000 pages...who wants to hand-write the policy for that?!? No one has that kind of time. Instead, you can let the system automatically build your policy based on what it learns from your application traffic, you can manually build a policy based on what you know about your traffic, or you can use external security scanning tools (WhiteHat Sentinel, QualysGuard, IBM AppScan, Cenzic Hailstorm, etc) to build your policy. In addition, the ASM comes configured with pre-built policies for several popular applications (SharePoint, Exchange, Oracle Portal, Oracle Application, Lotus Domino, etc).

Did you know?
The BIG-IP ASM was the first WAF to integrate with a scanner. WhiteHat approached all the WAFs and asked about the concept of building a security policy around known vulnerabilities in the apps. All the other WAFs said "no"...F5 said "of course!" and thus began the first WAF-scanner integration.

The ASM also utilizes Geolocation and IP address intelligence to allow for more sophisticated and targeted defense measures. You can allow/block users from specific locations around the world, and you can block IP addresses that have built a bad reputation on other sites around the Internet. If they were doing bad things on some other site, why let them access yours?

The ASM is also built for Payment Card Industry Data Security Standard (PCI DSS) compliance. In fact, you can generate a real-time PCI compliance report at the click of a button! The ASM also comes loaded with the Data Guard feature that automatically blocks sensitive data (credit card numbers, SSNs, etc) from being displayed in a browser.

In addition to the PCI reports, you can generate on-demand charts and graphs that show just about every detail of traffic statistics that you need. The following screenshot is a representative sample of some real traffic that I pulled off a site that uses the ASM. Pretty powerful stuff!

I could go on for days here...and I know you probably want me to, but I'll wrap it up for this first article. I hope you can see the value of the ASM both as a technical solution in the defense of your network and also as a critical asset in the long-term strategic vision of your company. So, if you already have an ASM and want to know more about it, or if you don't have one yet and want to see what you're missing, come on back for the next article where I will talk about the cool features of policy building. The full series:
- What is the BIG-IP ASM?
- Policy Building
- The Importance of File Types, Parameters, and URLs
- Attack Signatures
- XML Security
- IP Address Intelligence and Whitelisting
- Geolocation
- Data Guard
- Username and Session Awareness Tracking
- Event Logging
SSL Profiles Part 8: Client Authentication

This is the eighth article in a series of Tech Tips that highlight SSL Profiles on the BIG-IP LTM:
- SSL Overview and Handshake
- SSL Certificates
- Certificate Chain Implementation
- Cipher Suites
- SSL Options
- SSL Renegotiation
- Server Name Indication
- Client Authentication
- Server Authentication
- All the "Little" Options

This article will discuss the concept of Client Authentication, how it works, and how the BIG-IP system allows you to configure it for your environment.

Client Authentication

In a TLS handshake, the client and the server exchange several messages that ultimately result in an encrypted channel for secure communication. During this handshake, the client authenticates the server's identity by verifying the server certificate (for more on the TLS handshake, see SSL Overview and Handshake - Article 1 in this series). Although the client always authenticates the server's identity, the server is not required to authenticate the client's identity. However, there are some situations that call for the server to authenticate the client. Client authentication is a feature that lets you authenticate users that are accessing a server. In client authentication, a certificate is passed from the client to the server and is verified by the server. Client authentication allows you to rest assured that the person represented by the certificate is the person you expect. Many companies want to ensure that only authorized users can gain access to the services and content they provide. As more personal and access-controlled information moves online, client authentication becomes more of a reality and a necessity.

How Does Client Authentication Work?

Before we jump into client authentication, let's make sure we understand server authentication. During the TLS handshake, the client authenticates the identity of the server by verifying the server's certificate and using the server's public key to encrypt data that will be used to compute the shared symmetric key. The server can only generate the symmetric key used in the TLS session if it can decrypt that data with its private key. The following diagram shows an abbreviated version of the TLS handshake that highlights some of these concepts.

Ultimately, the client and server need to use a symmetric key to encrypt all communication during their TLS session. In order to calculate that key, the server shares its certificate with the client (the certificate includes the server's public key), and the client sends a random string of data to the server (encrypted with the server's public key). Now that the client and server each have the random string of data, they can each calculate (independently) the symmetric key that will be used to encrypt all remaining communication for the duration of that specific TLS session. In fact, the client and server both send a "Finished" message at the end of the handshake...and that message is encrypted with the symmetric key that they have both calculated on their own. So, if all that stuff works and they can both read each other's "Finished" message, then the server has been authenticated by the client and they proceed along with smiles on their collective faces (encrypted smiles, of course).

You'll notice in the diagram above that the server sent its certificate to the client, but the client never sent its certificate to the server. When client authentication is used, the server still sends its certificate to the client, but it also sends a "Certificate Request" message to the client.
This lets the client know that it needs to get its certificate ready because the next message from the client to the server (during the handshake) will need to include the client certificate. The following diagram shows the added steps needed during the TLS handshake for client authentication.

So, you can see that when client authentication is enabled, the public and private keys are still used to encrypt and decrypt critical information that leads to the shared symmetric key. In addition to the public and private keys being used for authentication, the client and server both send certificates and each verifies the certificate of the other. This certificate verification is also part of the authentication process for both the client and the server. The certificate verification process includes four important checks. If any of these checks does not return a valid response, the certificate verification fails (which makes the TLS handshake fail) and the session will terminate. These checks are as follows:
- Check digital signature
- Check certificate chain
- Check expiration date and validity period
- Check certificate revocation status

Here's how the client and server accomplish each of the checks for client authentication:
- Digital Signature: The client sends a "Certificate Verify" message that contains a digitally signed copy of the previous handshake message. This message is signed using the client certificate's private key. The server can validate the message digest of the digital signature by using the client's public key (which is found in the client certificate). Once the digital signature is validated, the server knows that the public key belonging to the client matches the private key used to create the signature.
- Certificate Chain: The server maintains a list of trusted CAs, and this list determines which certificates the server will accept. The server will use the public key from the CA certificate (which it has in its list of trusted CAs) to validate the CA's digital signature on the certificate being presented. If the message digest has changed or if the public key doesn't correspond to the CA's private key used to sign the certificate, the verification fails and the handshake terminates.
- Expiration Date and Validity Period: The server compares the current date to the validity period listed in the certificate. If the expiration date has not passed and the current date is within the period, everything is good. If not, the verification fails and the handshake terminates.
- Certificate Revocation Status: The server compares the client certificate to the list of revoked certificates on the system. If the client certificate is on the list, the verification fails and the handshake terminates.

As you can see, a bunch of stuff has to happen in just the right way for the client-authenticated TLS handshake to complete correctly. But all this is in place for your own protection. After all, you want to make sure that no one else can steal your identity and impersonate you on a critically important website!

BIG-IP Configuration

Now that we've established the foundation for client authentication in a TLS handshake, let's figure out how the BIG-IP is set up to handle this feature. The following screenshot shows the user interface for configuring Client Authentication. To get here, navigate to Local Traffic > Profiles > SSL > Client. The Client Certificate drop-down menu has three settings: Ignore (default), Require, and Request.
The "Ignore" setting specifies that the system will ignore any certificate presented and will not authenticate the client before establishing the SSL session. This effectively turns off client authentication. The "Require" setting enforces client authentication. When this setting is enabled, the BIG-IP will request a client certificate and attempt to verify it. An SSL session is established only if a valid client certificate from a trusted CA is presented. Finally, the "Request" setting enables optional client authentication. When this setting is enabled, the BIG-IP will request a client certificate and attempt to verify it. However, an SSL session will be established regardless of whether or not a valid client certificate from a trusted CA is presented. The Request option is often used in conjunction with iRules in order to provide selective access depending on the certificate that is presented. For example: let's say you would like to allow clients who present a certificate from a trusted CA to gain access to the application while clients who do not provide the required certificate be redirected to a page detailing the access requirements. If you are not using iRules to enforce a different outcome based on the certificate details, there is no significant benefit to using the "Request" setting versus the default "Ignore" setting. In both cases, an SSL session will be established regardless of the certificate presented. Frequency specifies the frequency of client authentication for an SSL session. This menu offers two options: Once (default) and Always. The "Once" setting specifies that the system will authenticate the client only once for an SSL session. The "Always"setting specifies that the system will authenticate the client once when the SSL session is established as well as each time that session is reused. The Retain Certificate box is checked by default. When checked, the client certificate is retained for the SSL session. Certificate Chain Traversal Depth specifies the maximum number of certificates that can be traversed in a client certificate chain. The default for this setting is 9. Remember that "Certificate Chain" part of the verification checks? This setting is where you configure the depth that you allow the server to dig for a trusted CA. For more on certificate chains, see article 2 of this SSL series. Trusted Certificate Authorities setting is used to specify the BIG-IP's Trusted Certificate Authorities store. These are the CAs that the BIG-IP trusts when it verifies a client certificate that is presented during client authentication. The default value for the Trusted Certificate Authorities setting is None, indicating that no CAs are trusted. Don't forget...if the BIG-IP Client Certificate menu is set to Require but the Trusted Certificate Authorities is set to None, clients will not be able to establish SSL sessions with the virtual server. The drop down list in this setting includes the name of all the SSL certificates installed in the BIG-IP's /config/ssl/ssl.crt directory. A newly-installed BIG-IP system will include the following certificates: default certificate and ca-bundle certificate. The default certificate is a self-signed server certificate used when testing SSL profiles. This certificate is not appropriate for use as a Trusted Certificate Authorities certificate bundle. The ca-bundle certificate is a bundle of CA certificates from most of the well-known PKIs around the world. 
This certificate may be appropriate for use as a Trusted Certificate Authorities certificate bundle. However, if this bundle is specified as the Trusted Certificate Authorities certificate store, any valid client certificate that is signed by one of the popular Root CAs included in the default ca-bundle.crt will be authenticated. This provides some level of identification, but it provides very little access control since almost any valid client certificate could be authenticated. If you want to trust only certificates signed by a specific CA or set of CAs, you should create and install a bundle containing the certificates of the CAs whose certificates you trust. The bundle must also include the entire chain of CA certificates necessary to establish a chain of trust. Once you create this new certificate bundle, you can select it in the Trusted Certificate Authorities drop down menu. The Advertised Certificate Authorities setting is used to specify the CAs that the BIG-IP advertises as trusted when soliciting a client certificate for client authentication. The default value for the Advertised Certificate Authorities setting is None, indicating that no CAs are advertised. When set to None, no list of trusted CAs is sent to a client with the certificate request. If the Client Certificate menu is set to Require or Request, you can configure the Advertised Certificate Authorities setting to send clients a list of CAs that the server is likely to trust. Like the Trusted Certificate Authorities list, the Advertised Certificate Authorities drop down list includes the name of all the SSL certificates installed in the BIG-IP /config/ssl/ssl.crt directory. A newly-installed BIG-IP system includes the following certificates: default certificate and ca-bundle certificate. The default certificate is a self-signed server certificate used for testing SSL profiles. This certificate is not appropriate for use as an Advertised Certificate Authorities certificate bundle. The ca-bundle certificate is a bundle of CA certificates from most of the well-known PKIs around the world. This certificate may be appropriate for use as an Advertised Certificate Authorities certificate bundle. If you want to advertise only a specific CA or set of CAs, you should create and install a bundle containing the certificates of the CA to advertise. Once you create this new certificate bundle, you can select it in the Advertised Certificate Authorities setting drop down menu. You are allowed to configure the Advertised Certificate Authorities setting to send a different list of CAs than that specified for the Trusted Certificate Authorities. This allows greater control over the configuration information shared with unknown clients. You might not want to reveal the entire list of trusted CAs to a client that does not automatically present a valid client certificate from a trusted CA. Finally, you should avoid specifying a bundle that contains a large number of certificates when you configure the Advertised Certificate Authorities setting. This will cut down on the number of certificates exchanged during a client SSL handshake. The maximum size allowed by the BIG-IP for native SSL handshake messages is 14,304 bytes. Most handshakes don't result in large message lengths, but if the SSL handshake is negotiating a native cipher and the total length of all messages in the handshake exceeds the 14,304 byte threshold, the handshake will fail. 
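To tie these settings together, here is a minimal tmsh sketch of a Client SSL profile that requires client certificates. The profile, certificate, and bundle names are hypothetical, and attribute names can vary slightly between TMOS versions, so treat this as a starting point rather than a definitive configuration:

# Client SSL profile enforcing mutual TLS (hypothetical file names)
create ltm profile client-ssl demo_mutual_tls defaults-from clientssl cert example.com.crt key example.com.key peer-cert-mode require authenticate once authenticate-depth 9 ca-file my-trusted-cas.crt client-cert-ca my-advertised-cas.crt retain-certificate true

Here peer-cert-mode maps to the Client Certificate menu (ignore/request/require), authenticate maps to Frequency, authenticate-depth to Certificate Chain Traversal Depth, ca-file to Trusted Certificate Authorities, and client-cert-ca to Advertised Certificate Authorities.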
The Certificate Revocation List (CRL) setting allows you to specify a CRL that the BIG-IP will use to check the revocation status of a certificate prior to authenticating a client. If you want to use a CRL, you must upload it to the /config/ssl/ssl.crl directory on the BIG-IP. The name of the CRL file may then be entered in the CRL setting dialog box. Note that this box will offer no drop-down menu options until you upload a CRL file to the BIG-IP. Since CRLs can quickly become outdated, you should use either OCSP or CRLDP profiles for more robust and current verification functionality.

Conclusion

Well, that wraps up our discussion on Client Authentication. I hope the information helped, and I hope you can use this to configure your BIG-IP to meet the needs of your specific network environment. Be sure to come back for our next article in the SSL series. As always, if you have any other questions, feel free to post a question here or contact us directly. See you next time!
SSL Profiles Part 7: Server Name Indication

This is the seventh article in a series of Tech Tips that highlight SSL Profiles on the BIG-IP LTM. The other articles are:
- SSL Overview and Handshake
- SSL Certificates
- Certificate Chain Implementation
- Cipher Suites
- SSL Options
- SSL Renegotiation
- Server Name Indication
- Client Authentication
- Server Authentication
- All the "Little" Options

This article will discuss the concept of Server Name Indication (SNI) and how the BIG-IP system allows you to configure it for your environment.

What is Server Name Indication?

SNI (listed in RFC 4366) is an extension to the TLS protocol that allows the client to include the requested hostname in the first message of the SSL handshake (Client Hello). This allows the server to determine the correct named host for the request and set up the connection accordingly from the start. Prior to the introduction of SNI, the client could not establish secure connections to multiple virtual servers hosted on a single IP address. This happened because the destination server name can only be decoded from the HTTP request header after the SSL connection has been established.

As you can see from the following diagram (taken from Jason Rahm's first article in this series), the standard TLS handshake involves several messages between the client and the server. The server sends the certificate (step 3) to the client long before the handshake is complete. If a web server is used to host multiple DNS hostnames on a single IP address, the certificate passing between the server and the client could get problematic. Using the standard TLS protocol, the server might send the wrong certificate to the client because it does not yet know which certificate the client is looking for. Then, if the client receives a certificate with the wrong name, it will either abort the connection (assuming a Man-in-the-Middle attack) or at least display a warning page to the user stating that there is a problem with the certificate. This scenario is shown in the following figure. A client wants to visit https://www.securesite1.com, but the virtual web server doesn't know to pass the certificate for securesite1.com until it finishes the TLS handshake and reads the HTTP request header from the client.

This is where SNI comes in really handy. With the introduction of SNI, the client can indicate the name of the server to which it is attempting to connect as part of the "Client Hello" message in the handshake process. The server then uses this information to select the appropriate certificate to return to the client when it sends back the "Server Hello" packet during the handshake. This allows a server to present multiple certificates on the same IP address and port number and thus allows multiple secure (HTTPS) sites to be served off the same IP address without requiring all the sites to use the same certificate.

Having this extension available (per RFC 4366) is a great feature, but it does no good if a client is using a browser that doesn't send the correct SNI message to the server. So, to make SNI practical, the vast majority of your users must use web browsers that support it. Browsers that do not support SNI will be presented with the server's default certificate and are likely to receive a certificate warning. According to Wikipedia, the following browsers support SNI:
- Internet Explorer 7 or later, on Windows Vista or higher. Does not work on Windows XP, even with Internet Explorer 8 (support for this feature is not browser-version dependent; it depends on the SChannel system component, which introduced support for the TLS SNI extension starting with Windows Vista, not XP).
- Mozilla Firefox 2.0 or later
- Opera 8.0 (2005) or later (the TLS 1.1 protocol must be enabled)
- Opera Mobile at least version 10.1 beta on Android
- Google Chrome (Vista or higher; XP on Chrome 6 or newer; OS X 10.5.7 or higher on Chrome 5.0.342.1 or newer)
- Safari 3.0 or later (Mac OS X 10.5.6 or higher and Windows Vista or higher)
- Konqueror/KDE 4.7 or later
- MobileSafari in Apple iOS 4.0 or later
- Android default browser on Honeycomb (v3.x) or newer
- Windows Phone 7
- MicroB on Maemo
- Odyssey on MorphOS

Now that you know the background of SNI, let's dig into how the BIG-IP is set up for SNI configuration.

BIG-IP Configuration

Beginning in v11.1.0, the BIG-IP allows you to assign multiple SSL profiles to a virtual server to support the use of the TLS SNI feature. The TLS SNI feature is not available in previous BIG-IP versions, so you'll want to upgrade if you are not on v11.1.0 or higher! To support this feature, a virtual server must be assigned a default SSL profile for fallback as well as one SSL profile per HTTPS site. The fallback SSL profile is used when the server name does not match the client request or when the browser does not support the SNI extensions. Using the example from the figures above, suppose you need to host the three domains securesite1.com, securesite2.com, and securesite3.com on the same HTTPS virtual server. Each domain has its own server certificate to use, such as securesite1.crt, securesite2.crt, and securesite3.crt, and each has different security requirements. To ensure that the BIG-IP presents the correct certificate to the browser, you enable SNI, which sends the name of a domain as part of the TLS negotiation. To enable SNI, you configure the Server Name and other settings on an SSL profile, and then assign the profile to a virtual server. For SSL profiles (Client and Server), you type the name for the HTTPS site in the Server Name box. SNI configuration is found by navigating to Local Traffic > Profiles > SSL > Client | Server. The following screenshot shows the three settings used for SNI in the BIG-IP.

Server Name specifies the fully qualified DNS hostname of the server that is used in SNI communications. Using the server name, the Local Traffic Manager can choose from multiple SSL profiles prior to the SSL handshake. If no value is specified, the system uses the Common Name value from the default certificate. The default for this setting leaves the name blank.

Default SSL Profile for SNI indicates that the system uses this profile as the default SSL profile when there is no match to the server name, or when the client provides no SNI extension support. Note that when assigning multiple SSL profiles to a single virtual server, you can enable this setting on one Client SSL profile only and on one Server SSL profile only. The default for this setting is unchecked.

Require Peer SNI Support requires that all network peers provide SNI support as well. If you enable both "Default SSL Profile for SNI" and "Require Peer SNI Support," the system terminates the connection when the client provides no SNI extension. The default for this setting is unchecked.
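As a quick illustration, here is a minimal tmsh sketch of two Client SSL profiles configured for SNI, reusing the hypothetical securesite1 certificate from the example above. Attribute names may vary slightly by TMOS version:

# Per-site profile selected when the client's SNI matches the server-name
create ltm profile client-ssl securesite1_clientssl defaults-from clientssl cert securesite1.crt key securesite1.key server-name www.securesite1.com
# Fallback profile used when no SNI is sent or no server-name matches
create ltm profile client-ssl fallback_clientssl defaults-from clientssl cert default.crt key default.key sni-default true

Both profiles (plus one more per additional HTTPS site) are then assigned to the same virtual server; only one profile per virtual server may have sni-default set to true.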
Conclusion

I hope this helps with setting up this great feature in your environment. SNI is a powerful tool, and it could go a long way in saving you precious IP addresses for your secure sites! As we noted before, this feature is only supported in BIG-IP version 11.1.0 and later. If you are using a version prior to 11.1.0, you can read this article on DevCentral that shows how you can use an iRule to take advantage of SNI on previous versions. Finally, you can read more on our DevCentral Wiki about using iRules for SNI. Well, thanks for reading about SNI. Be sure to come back for the next article in the SSL series where I will talk about SSL Forward Proxy. See you then!
SNI Routing with BIG-IP

In the previous article, The Three HTTP Routing Patterns, Lori MacVittie covers 3 methods of routing. Today we will look at Server Name Indication (SNI) routing as an additional method of routing HTTPS, or any protocol that uses TLS and SNI. Using SNI we can route traffic to a destination without having to terminate the SSL connection. This enables several benefits, including:
- Reduced number of public IPs
- Simplified configuration
- More intelligent routing of TLS traffic

Terminating SSL Connections

When you have an SSL certificate and key, you can perform the cryptographic actions required to encrypt traffic using TLS. This is what I refer to as "terminating the SSL connection" throughout this article. When you want to route traffic, this is a chicken-and-egg problem, because for TLS traffic you want to be able to route the traffic by inspecting its contents, but this normally requires being able to "terminate the SSL connection". The goal of this article is to layer in traffic routing for TLS traffic without requiring possession or knowledge of the original SSL certificate and key.

Server Name Indication (SNI)

SNI is a TLS extension that makes it possible to "share" certificates on a single IP address. This is possible due to the client using a TLS extension that requests a specific name before the server responds with an SSL certificate. Prior to SNI, the other options would be a wildcard certificate or a Subject Alternative Name (SAN) certificate that allows you to specify multiple names with a single certificate.

SNI with Virtual Servers

It has been possible to use SNI on F5 BIG-IP since TMOS 11.3.0. KB13452 outlines how it can be configured. In this scenario (from the KB article) the BIG-IP is terminating the SSL connection. Not all clients support SNI, and you will always need to specify a "fallback" profile that will be used if a SNI name is not used or matched. The next example will look at how to use SNI without terminating the SSL connection.

SNI Routing

Occasionally you may need a hybrid configuration of terminating SSL connections on the BIG-IP and sending connections without terminating SSL. One method is to create two separate virtual servers: one for SSL connections that the BIG-IP will handle (using a clientssl profile) and one where it will not handle SSL (just TCP). This works OK for a small number of backends, but does not scale well if you have many backends (you run out of public IP addresses). Using SNI Routing we can handle everything on a single virtual server / public IP address.

There are 3 methods for performing SNI Routing with BIG-IP:
1. iRule with binary scan
   a. Article by Colin Walker, code attributed to Joel Moses
   b. Code Share by Stanislas Piron
2. iRule with SSL::extensions
3. Local Traffic Policy

Option #1 is for folks that prefer complete control of the TLS protocol. It only requires the use of a TCP profile. Options #2 and #3 only require the use of an SSL persistence profile and a TCP profile.

SNI Routing with Local Traffic Policy

We will skip options #1 and #2 in this article and look at using a Local Traffic Policy for SNI Routing. For a review of Local Traffic Policies, check out the following DevCentral articles:
- LTM Policy (Jan 2015)
- Simplifying Local Traffic Policies in BIG-IP 12.1 (June 2016)

In previous articles about Local Traffic Policies the focus was on routing HTTP traffic, but today we will use it to route SSL connections using SNI.
In the following example, using a Local Traffic Policy named "sni_routing", we are setting a condition on the SSL Extension "servername" and sending the traffic to a pool without terminating the SSL connection. The pool member could be another server or another BIG-IP device. The next example will forward the traffic to another virtual server that is configured with a clientssl profile. This uses VIP targeting to send traffic to another virtual server on the same device. In both examples it is important to note that the "condition"/"action" has been changed from occurring on "request" (which maps to an HTTP L7 request) to "ssl client hello". By performing the action prior to any L7 functions occurring, we can forward the traffic without terminating the SSL connection. The previous example policy, "sni_routing", can be attached to a virtual server that only has a TCP profile and an SSL persistence profile. No HTTP or clientssl profile is required! (A tmsh sketch of such a policy appears at the end of this article.) This method can also be used to solve the issue of how to consolidate multiple SSL virtual servers that have different APM and/or ASM policies behind a single virtual server. This is similar to the architecture that is used by the Container Connector for Cloud Foundry, creating a two-tier load balancing solution on a single device.

Routed Correctly?

TLS 1.3 has interesting proposals on how to obscure the servername (TLS in TLS?), but for now this is a useful and practical method of handling multiple SSL certs on a single IP. In the future this may still be possible as well with TLS 1.3. For example, the use of an HTTP Fronting service could be a tier 1 virtual server (this is just my personal speculation; I have not tried it, and at the time of publishing this was still a draft proposal). In other news, it has been demonstrated that a combination of SNI and a different host header can be used for "domain fronting". A method to enforce consistent policy (and prevent domain fronting) would be to layer in additional conditions that match the requested SNI servername (TLS extension) against the requested Host header (L7 HTTP header). This would help enforce that a tenant is using a certificate that is associated with their application and not "borrowing" the name and certificate that is being used by an adjacent service. We don't think of a TLS extension as an attribute that can be used to route application traffic, but it is useful and possible on BIG-IP.
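As promised, here is a rough tmsh sketch of a policy like "sni_routing". The pool name is hypothetical, and LTM policy syntax is notoriously version-sensitive (on 12.1 and later you create the policy as a draft and then publish it), so treat this as an illustration of the shape of the configuration rather than something to paste verbatim:

# Match the SNI server-name at the SSL Client Hello and forward without decryption
create ltm policy Drafts/sni_routing controls add { forwarding } requires add { ssl } strategy first-match rules add { site1 { conditions add { 0 { ssl-extension server-name ssl-client-hello values { www.securesite1.com } } } actions add { 0 { forward ssl-client-hello select pool site1_pool } } } }
publish ltm policy Drafts/sni_routing

The key detail, as noted above, is that both the condition and the action fire at ssl-client-hello rather than at request, which is why no HTTP or clientssl profile is needed on the virtual server.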
Understanding IPSec IKEv1 negotiation on Wireshark

Related Articles: Understanding IPSec IKEv2 negotiation on Wireshark

1. The Big Picture

The first 6 Identity Protection (Main Mode) messages negotiate security parameters to protect the next 3 messages (Quick Mode), and whatever is negotiated in Phase 2 is used to protect production traffic (ESP or AH, normally ESP for site-to-site VPN). We call the first 6 messages Phase 1 and the last 3 messages Phase 2.

Sample pcap: IPSEC-tunnel-capture-1.pcap (for instructions on how to decrypt it, just go to the website where I got this sample capture: http://ruwanindikaprasanna.blogspot.com/2017/04/ipsec-capture-with-decryption.html)

2. Phase 1

2.1 Policy Negotiation

Both peers add a unique SPI just to uniquely identify each side's Security Association (SA). In frame #1, the Initiator (.70) sends a set of Proposals containing a set of security parameters (Transforms) that the Responder (.71) can pick from if they match its local policies:

Fair enough, in frame #2 the Responder (.71) picks one of the Transforms:

2.2 DH Key Exchange

Then, in the next 2 Identity Protection packets, both peers exchange Diffie-Hellman public key values and nonces (random numbers), which will then allow both peers to agree on a shared secret key:

With the DH public key value and the nonce, both peers will generate a seed key called SKEYID. A further 3 session keys will be generated using this seed key for different purposes:
- SKEYID_d (d for derivative): not used by Phase 1. It is used as the seed key for Phase 2 keys, i.e. the seed key for production traffic keys in plain English.
- SKEYID_a (a for authentication): this key is used to protect message integrity in every subsequent packet as soon as both peers are authenticated (peers will authenticate each other in the next 2 packets). Yes, I know, we verify integrity by using a hash, but throwing a key into the hash adds stronger security; this construction is called HMAC.
- SKEYID_e (e for encryption): you'll see that the next 2 packets are also encrypted. As the encryption algorithm selected for this phase was AES-CBC (128-bit), we use AES with this key to symmetrically encrypt further data.

The nonce is just to protect against replay attacks by adding some randomness to key generation.
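For reference, RFC 2409 defines how these keys are derived when pre-shared key authentication is used. Here prf is the negotiated pseudo-random function (typically HMAC of the negotiated hash), Ni_b/Nr_b are the Initiator/Responder nonces, g^xy is the Diffie-Hellman shared secret, and CKY-I/CKY-R are the Initiator/Responder cookies (the SPIs from the ISAKMP header):

SKEYID   = prf(pre-shared-key, Ni_b | Nr_b)
SKEYID_d = prf(SKEYID, g^xy | CKY-I | CKY-R | 0)
SKEYID_a = prf(SKEYID, SKEYID_d | g^xy | CKY-I | CKY-R | 1)
SKEYID_e = prf(SKEYID, SKEYID_a | g^xy | CKY-I | CKY-R | 2)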
2.3 Authentication

The purpose of this exchange is to confirm each other's identity. If we said we're going to do this using pre-shared keys, then verification consists of checking whether both sides have the same pre-shared key. If it is RSA certificates, then peers exchange RSA certificates and, assuming the CA that signed each side is trusted, verification completes successfully. In our case, this is done via pre-shared keys:

In packet #5 the Initiator sends a hash generated using the pre-shared key as key material, so that only those who possess the pre-shared key can produce it:

The Responder performs the same calculation and confirms the hash is correct. The Responder also sends a similar packet back to the Initiator in frame #6, but I skipped it for brevity. Now we're ready for Phase 2.

3. Phase 2

The purpose of this phase is to establish the security parameters that will be used for production traffic (IPSec SA):

Now, the Initiator sends its proposals to negotiate the security parameters for production traffic as mentioned (the highlighted yellow proposal is just a sample as the rest is collapsed - this is frame #7):

Note: the Identification payload carries the source and destination tunnel IP addresses, and if this doesn't match what is configured on both peers then IPSec negotiation will not proceed.

Then, in frame #8 we see that the Responder picked one of the Proposals:

Frame #9 is just an ACK to the picked proposal confirming that the Initiator accepted it:

I just highlighted the Hash here to reinforce the fact that since both peers were authenticated in Phase 1, all subsequent messages are authenticated and a new hash (HMAC) is generated for each packet.
Understanding IPSec IKEv2 negotiation on Wireshark

Related Articles: Understanding IPSec IKEv1 negotiation on Wireshark

1 The Big Picture

There are just 4 messages. Summary:
- IKE_SA_INIT: negotiates security parameters to protect the next 2 messages (IKE_AUTH). It also creates a seed key (known as SKEYSEED) from which further keys are produced:
  - SK_e (encryption): computed for each direction (one for outbound and one for inbound) to encrypt IKE_AUTH messages.
  - SK_a (authentication): computed for each direction (one for outbound and one for inbound) to hash (using HMAC) IKE_AUTH messages.
  - SK_d (derivation): handed to IPSec to generate encryption and, optionally, authentication keys for production traffic.
- IKE_AUTH: negotiates security parameters to protect production traffic (CHILD_SA). More specifically: the IPSec protocol used (ESP or AH - typically ESP, as AH doesn't support encryption), the encryption algorithm (AES128? AES256?), and the authentication algorithm (HMAC_SHA256? HMAC_SHA384?).

2 IKE_SA_INIT

First, the Initiator sends Security Association -> Proposal -> Transform, Transform... payloads, which contain the required security settings to protect the IKE_AUTH phase as well as to generate the seed key (SK_d) for production traffic (child SA):

In this case the Initiator only sent one option each for Encryption, Integrity, Pseudo-Random Function (PRF), and Diffie-Hellman group, so there are only 4 corresponding Transforms, but there could be more. The Responder picked the 4 available security options, also confirmed in Security Association -> Proposal -> Transform, Transform... payloads, as seen above.

3 IKE_AUTH

These are immediately applied to the next 2 IKE_AUTH messages, as seen below:

The above payload is encrypted using SK_e and integrity-protected using SK_a (these keys are different for each direction). The first IKE_AUTH message negotiates the security parameters for production traffic (child SAs), authenticates each side, and informs the peer of the source/destination IP/port that is supposed to go through the IPSec tunnel:

Now, the last IKE_AUTH message, sent by the Responder, confirms which security parameters it picked (Security Association payload), repeats the same Traffic Selector payloads (if correctly configured), and sends a hash of the message using the pre-shared key (Authentication payload).

Note that I highlighted 2 Notify messages. INITIAL_CONTACT signals to the Initiator that this is the only IKE_SA currently active between these peers, and if there is any other IKE_SA it should be terminated in favour of this one. SET_WINDOW_SIZE is a flow control mechanism introduced in IKEv2 that allows the other side to send as many outstanding requests as it wants within the window size without receiving any message acknowledging the receipt.

From now on, if additional CHILD_SAs are needed, a message called CREATE_CHILD_SA can be used to establish them. It can also be used to rekey the IKE_SA, where a Notification payload of type REKEY_SA is sent, followed by CREATE_CHILD_SA with new key information, so a new SA is established and the old one is subsequently deleted.
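For reference, RFC 7296 defines how SKEYSEED and the session keys mentioned above are derived. Here Ni/Nr are the nonces, g^ir is the Diffie-Hellman shared secret, SPIi/SPIr are the IKE SPIs, and prf+ is the iterated expansion of the negotiated pseudo-random function:

SKEYSEED = prf(Ni | Nr, g^ir)
{SK_d | SK_ai | SK_ar | SK_ei | SK_er | SK_pi | SK_pr} = prf+(SKEYSEED, Ni | Nr | SPIi | SPIr)

Note that IKEv2 derives separate authentication (SK_ai/SK_ar) and encryption (SK_ei/SK_er) keys for each direction, plus SK_pi/SK_pr, which are used when generating the AUTH payloads.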
What is Load Balancing?

tl;dr - Load Balancing is the process of distributing data across disparate services to provide redundancy, reliability, and improved performance.

The entire intent of load balancing is to create a system that virtualizes the "service" from the physical servers that actually run that service. A more basic definition is to balance the load across a bunch of physical servers and make those servers look like one great big server to the outside world. There are many reasons to do this, but the primary drivers can be summarized as "scalability," "high availability," and "predictability."

Scalability is the capability of dynamically, or easily, adapting to increased load without impacting existing performance. Service virtualization presented an interesting opportunity for scalability; if the service, or the point of user contact, was separated from the actual servers, scaling of the application would simply mean adding more servers or cloud resources, which would not be visible to the end user.

High Availability (HA) is the capability of a site to remain available and accessible even during the failure of one or more systems. Service virtualization also presented an opportunity for HA; if the point of user contact was separated from the actual servers, the failure of an individual server would not render the entire application unavailable.

Predictability is a little less clear as it represents pieces of HA as well as some lessons learned along the way. However, predictability can best be described as the capability of having confidence and control in how the services are being delivered and when they are being delivered in regards to availability, performance, and so on.

A Little Background

Back in the early days of the commercial Internet, many would-be dot-com millionaires discovered a serious problem in their plans. Mainframes didn't have web server software (not until the AS/400e, anyway) and even if they did, they couldn't afford them on their start-up budgets. What they could afford was standard, off-the-shelf server hardware from one of the ubiquitous PC manufacturers. The problem for most of them? There was no way that a single PC-based server was ever going to handle the amount of traffic their idea would generate, and if it went down, they were offline and out of business. Fortunately, some of those folks actually had plans to make their millions by solving that particular problem; thus was born the load balancing market.

In the Beginning, There Was DNS

Before there were any commercially available, purpose-built load balancing devices, there were many attempts to utilize existing technology to achieve the goals of scalability and HA. The most prevalent, and still used, technology was DNS round-robin. The domain name system (DNS) is the service that translates human-readable names (www.example.com) into machine-recognized IP addresses. DNS also provided a way in which each request for name resolution could be answered with multiple IP addresses in different order.

Figure 1: Basic DNS response for redundancy

The first time a user requested resolution for www.example.com, the DNS server would hand back multiple addresses (one for each server that hosted the application) in order, say 1, 2, and 3. The next time, the DNS server would give back the same addresses, but this time as 2, 3, and 1.
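As a concrete illustration, DNS round-robin is nothing more than multiple A records for the same name. A minimal BIND-style zone snippet (with example addresses) might look like this:

; three servers hosting the same application
www.example.com.  30  IN  A  192.0.2.10
www.example.com.  30  IN  A  192.0.2.11
www.example.com.  30  IN  A  192.0.2.12

Each response cycles the order of the records, so successive clients are pointed at a different server first. The short TTL (30 seconds here) is a common trick to limit how long clients cache any one ordering.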
This solution was simple and provided the basic characteristics customers were looking for by distributing users sequentially across multiple physical machines using the name as the virtualization point. From a scalability standpoint, this solution worked remarkably well; that is probably why derivatives of this method are still in use today, particularly in regards to global load balancing, or the distribution of load to different service points around the world. As the service needed to grow, all the business owner needed to do was add a new server, include its IP address in the DNS records, and voila, increased capacity. One note, however, is that DNS responses do have a maximum length that is typically allowed, so there is a potential to outgrow or scale beyond this solution.

This solution did little to improve HA. First off, DNS has no capability of knowing if the servers listed are actually working or not, so if a server became unavailable and a user tried to access it before the DNS administrators knew of the failure and removed it from the DNS list, they might get an IP address for a server that didn't work.

Proprietary Load Balancing in Software

One of the first purpose-built solutions to the load balancing problem was the development of load balancing capabilities built directly into the application software or the operating system (OS) of the application server. While there were as many different implementations as there were companies who developed them, most of the solutions revolved around basic network trickery. For example, one such solution had all of the servers in a cluster listen to a "cluster IP" in addition to their own physical IP address.

Figure 2: Proprietary cluster IP load balancing

When the user attempted to connect to the service, they connected to the cluster IP instead of to the physical IP of the server. Whichever server in the cluster responded to the connection request first would redirect them to a physical IP address (either their own or another system in the cluster) and the service session would start. One of the key benefits of this solution is that the application developers could use a variety of information to determine which physical IP address the client should connect to. For instance, they could have each server in the cluster maintain a count of how many sessions each clustered member was already servicing and have any new requests directed to the least utilized server.

Initially, the scalability of this solution was readily apparent. All you had to do was build a new server, add it to the cluster, and you grew the capacity of your application. Over time, however, the scalability of application-based load balancing came into question. Because the clustered members needed to stay in constant contact with each other concerning who the next connection should go to, the network traffic between the clustered members increased exponentially with each new server added to the cluster. The scalability was great as long as you didn't need to exceed a small number of servers.

HA was dramatically increased with these solutions. However, since each iteration of intelligence-enabling HA characteristics had a corresponding server and network utilization impact, this also limited scalability. The other negative HA impact was in the realm of reliability.

Network-Based Load Balancing Hardware

The second iteration of purpose-built load balancing came about as network-based appliances.
These are the true founding fathers of today's Application Delivery Controllers. Because these boxes were application-neutral and resided outside of the application servers themselves, they could achieve their load balancing using much more straightforward network techniques. In essence, these devices would present a virtual server address to the outside world, and when users attempted to connect, they would forward the connection on to the most appropriate real server, doing bi-directional network address translation (NAT).

Figure 3: Load balancing with network-based hardware

The load balancer could control exactly which server received which connection and employed "health monitors" of increasing complexity to ensure that the application server (a real, physical server) was responding as needed; if not, it would automatically stop sending traffic to that server until it produced the desired response (indicating that the server was functioning properly). A minimal tmsh sketch of this basic virtual server and pool pattern appears at the end of this article. Although the health monitors were rarely as comprehensive as the ones built by the application developers themselves, the network-based hardware approach could provide at least basic load balancing services to nearly every application in a uniform, consistent manner, finally creating a truly virtualized service entry point unique to the application servers serving it.

Scalability with this solution was only limited by the throughput of the load balancing equipment and the networks attached to it. It was not uncommon for organizations replacing software-based load balancing with a hardware-based solution to see a dramatic drop in the utilization of their servers. HA was also dramatically reinforced with a hardware-based solution. Predictability was a core component added by the network-based load balancing hardware, since it was much easier to predict where a new connection would be directed and much easier to manipulate.

The advent of the network-based load balancer ushered in a whole new era in the architecture of applications. HA discussions that once revolved around "uptime" quickly became arguments about the meaning of "available" (if a user has to wait 30 seconds for a response, is it available? What about one minute?). This is the basis from which Application Delivery Controllers (ADCs) originated.

The ADC

Simply put, ADCs are what all good load balancers grew up to be. While ADC conversations rarely mention load balancing, without the capabilities of the network-based hardware load balancer, they would be unable to affect application delivery at all. Today, we talk about security, availability, and performance, but the underlying load balancing technology is critical to the execution of all.

Next Steps

Ready to plunge into the next level of Load Balancing? Take a peek at these resources:
- Go Beyond POLB (Plain Old Load Balancing)
- The Cloud-Ready ADC
- BIG-IP Virtual Edition Products, The Virtual ADCs Your Application Delivery Network Has Been Missing
- Cloud Balancing: The Evolution of Global Server Load Balancing
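As a closing illustration of the virtual server and pool pattern described above, here is a minimal tmsh sketch on a modern BIG-IP. The names and addresses are hypothetical, and syntax can vary by TMOS version:

# Pool of two real servers with a basic HTTP health monitor
create ltm pool app_pool members add { 10.0.0.11:80 10.0.0.12:80 } monitor http
# Virtual server presenting the service address to the outside world
create ltm virtual app_vs destination 203.0.113.20:80 ip-protocol tcp profiles add { tcp http } pool app_pool source-address-translation { type automap }

The virtual server is the single point of user contact, and the health monitor automatically removes a pool member that stops responding: exactly the health-monitor behavior the early network-based appliances pioneered.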