Client SSL Authentication on BIG-IP as in-depth as it can go
Quick Intro

In this article, I'm going to explain how SSL client certificate authentication works on BIG-IP, showing what actually happens during client authentication as in-depth as I can, including the TLS headers in Wireshark. This article is about the client side of BIG-IP (the Client SSL profile) authenticating a client connecting to BIG-IP.

The Topology

For reference, so we can follow the Wireshark output:

How to Configure Client Certificate Authentication on the Client SSL profile

Essentially, what we're doing here is making BIG-IP verify the client's credentials before allowing the TLS handshake to proceed, where those credentials take the form of a client certificate. The way to do this is to configure BIG-IP by:

Adding a CA file to Trusted Certificate Authorities (ca-file in tmsh) to validate the client certificate
Optionally adding the same CA that signed the client certificate to Advertised Certificate Authorities
Enforcing client certificate validation by setting the Client Certificate option on BIG-IP to require
Optionally setting the Frequency of such checks if we don't want to stick to the defaults

I'll go through each option now.

Adding CA file to Trusted Certificate Authorities

We should add to Trusted Certificate Authorities a single certificate file (*.crt) containing one CA, or a concatenated file with 2 or more CAs, for the purpose of validating the client certificate, i.e. confirming the client's identity. Upon receiving the client certificate, BIG-IP will go through this list of CAs and confirm the client's identity. This setting has another purpose, which is to authenticate BIG-IP to the client, but that's out of the scope of this article.

Optionally add CA file to Advertised Certificate Authorities

Trusted Certificate Authorities explicitly tells BIG-IP the CA or chain of CAs it will use to validate the client certificate, whereas Advertised Certificate Authorities tells the client in advance what kind of CA BIG-IP trusts, so that the client can decide which certificate to send to BIG-IP.

Why would we use Advertised Certificate Authorities? Let's imagine a situation where the client's application has more than one client certificate configured. How is it going to figure out which certificate to send to BIG-IP? That's where Advertised Certificate Authorities comes to rescue us! When we add our CA bundle to Advertised Certificate Authorities, we're telling BIG-IP to add it to a header field named Distinguished Names within the Certificate Request message. I dedicated the Appendix section to showing in more depth how changing Advertised Certificate Authorities affects the Certificate Request header.

Configuring BIG-IP to enforce Client Certificate validation

To enable client certificate authentication on BIG-IP we change Local Traffic > Profiles : SSL : Client > Client Certificate to request. The default is ignore, where client certificate authentication is disabled. If we truly want to enable client certificate validation we need to select require, because request makes BIG-IP request a client certificate from the client but does not validate the certificate the client sends. The following subsections explain each option.

Ignore

Client certificate authentication is disabled (the default). BIG-IP never sends a Certificate Request to the client, and therefore the client does not need to send its certificate to BIG-IP.
In this case, the TLS handshake proceeds successfully without any client authentication.
pcap: ssl-sample-peer-cert-mode-ignore.pcap
Wireshark filter used: frame.number == 5 or frame.number == 6

Request

BIG-IP requests a client certificate by sending a Certificate Request message but does not check whether the client certificate is valid, which is not really client authentication, is it? This means that ca-file (Trusted Certificate Authorities in the GUI) will not be used to validate the client certificate, and any certificate sent to us will be considered valid. For example, in ssl-sample-peer-cert-mode-request-with-no-client-cert-sent.pcap we now see that BIG-IP sends a Certificate Request message and the client responds with a Certificate message this time. Because I didn't add any client certificate to my browser, it sent a blank certificate to BIG-IP. Again, BIG-IP did not perform any validation whatsoever, so the TLS handshake proceeded successfully. We can conclude that this setting only makes BIG-IP request a client certificate, and that's it.

Require

This behaves just like Request, but BIG-IP also performs client certificate validation, i.e. BIG-IP will use the CA we hopefully added to ca-file (Trusted Certificate Authorities in the GUI) to confirm that the client certificate is valid. This means that if we don't add a CA to Trusted Certificate Authorities (ca-file in tmsh), validation will fail.

Setting the Frequency of Client Certificate Requests

This setting specifies how often BIG-IP authenticates the client, by enabling or disabling TLS session resumption. It has only 2 options:

once: BIG-IP requests the client certificate during the first handshake and no longer re-authenticates the client as long as the TLS session is reused and valid. BIG-IP accomplishes this using session resumption/reuse. During the first TLS handshake from the client, BIG-IP sends a Session ID to the client within the Server Hello header, and in subsequent TLS connections, assuming the session ID is still in BIG-IP's cache and the client re-sends it back to BIG-IP, the session will be resumed every time the client tries to establish a TLS session (respecting the cache timeout). The first time, the client sends a Client Hello with a blank session ID as its cache is empty, and it is then assigned a Session ID by BIG-IP (409f...).
pcap: ssl-sample-clientcert-auth-once-enabled.pcap
Wireshark filter used: !ip.addr == 172.16.199.254 and frame.number > 1 and frame.number < 7
Certificate Request confirms BIG-IP is trying to authenticate the client. Notice the Session ID BIG-IP sent to the client is 409f7... Then, when the client goes through another TLS handshake and sends the above session ID in its Client Hello (packet #70 below), BIG-IP confirms the session is being resumed by sending the same session ID in the Server Hello back to the client.
Wireshark filter used: !ip.addr == 172.16.199.254 and frame.number > 66 and frame.number < 73
This resumed TLS handshake means we will not go through a full handshake and will no longer need to exchange keys, select ciphers or re-authenticate the client, as we're reusing those already negotiated in the full TLS handshake where we first received Session ID 409f...

always: BIG-IP requests the client certificate, i.e. re-authenticates the client, at every handshake. On BIG-IP, this is accomplished by disabling session reuse, which makes BIG-IP not send a Session ID back to the client in the beginning, forcing a full TLS handshake every time.
pcap: ssl-sample-clientcert-auth-always-enabled.pcap
Wireshark filter used: !ip.addr == 172.16.199.254 and ((frame.number > 1 and frame.number < 7) or (frame.number > 74 and frame.number < 80))
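To tie these settings together, here is a minimal tmsh sketch. It assumes a Client SSL profile named my_clientssl already exists and the CA bundle was imported as myCAbundle.crt (both names are placeholders): ca-file maps to Trusted Certificate Authorities, client-cert-ca to Advertised Certificate Authorities, peer-cert-mode to the Client Certificate option, and authenticate to the Frequency setting.

tmsh modify ltm profile client-ssl my_clientssl ca-file myCAbundle.crt client-cert-ca myCAbundle.crt peer-cert-mode require authenticate once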
Appendix: Understanding how the Advertised Certificate Authority field affects the Certificate Request header

For this test, I've got the following:

myCAbundle.crt: concatenation of root_ca.crt and ltm2.crt (signed by root_ca.crt)
client_cert.crt: added to Firefox and signed by ltm2.crt

I've also added myCAbundle.crt to Trusted Certificate Authorities so BIG-IP is able to verify that client_cert.crt is valid. For each test, I will change Advertised Certificate Authorities so we can see what happens. We'll go through 3 tests here:

Setting Advertised Certificate Authority to None
Setting Advertised Certificate Authority to a certificate that didn't sign the client cert
Setting Advertised Certificate Authority to a bundle that signed the client cert

Setting Advertised Certificate Authority to None

Note that even though no CA was advertised in the Certificate Request message, BIG-IP still advertises Certificate types and Signature Hash Algorithms, so that the client knows in advance what kind of certificate (RSA, DSS or ECDSA) and hash algorithms BIG-IP supports. If the client certificate had not been signed using any of the certificate types and hashing algorithms listed, the handshake would have failed. However, in this case validation is successful, as we can see on frame #8 that the client certificate is RSA type and hashed with SHA1.
pcap: ssl-sample-advcert-none.pcap
Wireshark filter used: !ip.addr == 172.16.199.254 and frame.number > 1 and frame.number < 16
It's worth noting that Distinguished Names is NOT populated and has a length of zero because we didn't attach a bundle to Advertised Certificate Authorities. In this case, it worked fine because my client browser had only one certificate attached, so there was no ambiguity about which certificate to send.

Setting Advertised Certificate Authority to a certificate that didn't sign the client cert

pcap: ssl-sample-advcert-default-firefox.pcap
Wireshark filter used: None
I've set Advertised Certificate Authority to default.crt, as this is NOT the CA that signed the client's certificate. The difference here, compared to None, is that Distinguished Names is now populated with the certificate I added (default.crt). However, even though I added the correct certificate to my Firefox browser, it sent a blank certificate instead. Why? Because BIG-IP signalled in Distinguished Names that default.crt is the CA that signed the certificate BIG-IP is looking for, and as Firefox doesn't have any certificate signed by default.crt, it just sent a blank certificate back to BIG-IP. Also, because BIG-IP is now performing proper validation, i.e. comparing whatever client certificate is sent to it against the CA list added to Trusted Certificate Authorities, it knows a blank certificate is not valid and terminates the TLS handshake with a Fatal Alert.
Setting Advertised Certificate Authority to a bundle that signed the client cert

pcap: ssl-sample-advcert-ltm2chainedwithrootca.pcap
Wireshark filter used: !ip.addr == 172.16.199.254 and frame.number > 1
Now I've set Advertised Certificate Authority to the correct bundle that signed my client certificate, and indeed the handshake succeeds because:

BIG-IP advertises myCAbundle.crt in the Certificate Request >> Distinguished Names header, as per the Advertised Certificate Authority configuration
By reading the Distinguished Names field, the client sends the correct client certificate back to BIG-IP
BIG-IP correctly validates the client certificate using myCAbundle configured in Trusted Certificate Authorities

Hope this article provides some clarification about these mysterious TLS headers.

Azure Active Directory and BIG-IP APM Integration
Introduction

Security is one of the primary considerations for organizations in determining whether or not to migrate applications to the public cloud. For organizations with applications in the cloud, in a data center, managed, or consumed as a service, the challenge is to create a cost-effective hybrid architecture that delivers secure application access and a consistent, easy user experience, with single sign-on (SSO) tied to a central identity and authentication strategy. Some applications are not good candidates for modernization, and others are not suited for, or are incapable of, cloud migration. Many on-premises apps do not support modern authentication and authorization, including standards and protocols such as SAML, OAuth, or OpenID Connect (OIDC), and an organization may not have the staff talent or time to modernize its on-premises apps. With thousands of apps in use daily, hosted in all or any combination of these locations, how can organizations ensure secure, appropriate user access without requiring users to log in multiple times? In addition, how can organizations terminate user access to each application without having to access each app individually?

By deploying Microsoft Azure Active Directory, Microsoft's comprehensive cloud-based identity platform, along with F5's trusted application access solution, Access Policy Manager (APM), organizations are able to federate user identity, authentication, and authorization and bridge the identity gap between cloud-based (IaaS), SaaS, and on-premises applications.

Figure 1: Secure hybrid application access

This guide discusses the following use cases:

·Users single sign on to applications that require Kerberos-based authentication.
·Users single sign on to applications that require header-based authentication.

Microsoft Azure Active Directory and F5 BIG-IP APM Design

Organizations with high security demands and low risk tolerance need to keep all aspects of user authentication on premises. The Microsoft Azure Active Directory and F5 BIG-IP APM solution integrates directly with AAD and works cooperatively with existing Kerberos-based, header-based, or a variety of other authentication methods. The solution has these components:

•BIG-IP Access Policy Manager (APM)
•Microsoft Domain Controller/ Active Directory (AD)
•Microsoft Azure Active Directory (AAD)
•Application (Kerberos-/header-based authentication)

Figure 2: APM bridge SAML to Kerberos/header authentication components
Figure 3: APM bridge SAML to Kerberos authentication process flow

Deploying Azure Active Directory and BIG-IP APM integration

The joint Microsoft and F5 solution allows legacy applications incapable of supporting modern authentication and authorization to interoperate with Azure Active Directory. Even if an app doesn't support SAML and can only handle header- or Kerberos-based authentication, it can still be enabled with single sign-on (SSO) and support multi-factor authentication (MFA) through the F5 APM and Azure Active Directory combination. Azure Active Directory as an IDaaS delivers a trusted root of identity to APM, creating a bridge between modern and legacy applications that delivers SSO and secures the app with MFA.

Adding F5 from the gallery

To configure the integration of BIG-IP APM into Azure AD, you need to add F5 from the gallery to your list of managed SaaS apps.
Sign on to the Azure portal using either a work or school account, or a personal Microsoft account.
On the left navigation pane, select the Azure Active Directory service.
Navigate to Enterprise Applications and then select All Applications.
To add a new application, select New application.
In the Add from the gallery section, type F5 in the search box.
Select F5 from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.

Configuring Microsoft Azure Active Directory

Configure and test Azure AD SSO with F5 using a test user called A.Vandelay. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in F5. To configure and test Azure AD SSO with F5, complete the following building blocks:

Configure Azure AD SSO - to enable your users to use this feature.
Create an Azure AD test user - to test Azure AD single sign-on with A.Vandelay.
Assign the Azure AD test user - to enable A.Vandelay to use Azure AD single sign-on.

Configure Azure AD SSO

Follow these steps to enable Azure AD SSO in the Azure portal.

In the Azure portal, on the F5 application integration page, find the Manage section and select single sign-on.
On the Select a single sign-on method page, select SAML.
On the Set up single sign-on with SAML page, click the edit/pen icon for Basic SAML Configuration to edit the settings.
In the Basic SAML Configuration section, if you wish to configure the application in IDP initiated mode, enter the values for the following fields:
In the Identifier text box, type a URL using the following pattern: https://<YourCustomFQDN>.f5.com/
In the Reply URL text box, type a URL using the following pattern: https://<YourCustomFQDN>.f5.com/
Click Set additional URLs and perform the following step if you wish to configure the application in SP initiated mode:
In the Sign-on URL text box, type a URL using the following pattern: https://<YourCustomFQDN>.f5.com/

Note: These values are used only for illustration. Replace them with the actual Identifier, Reply URL and Sign-on URL. Refer to the patterns shown in the Basic SAML Configuration section in the Azure portal.

On the Set up single sign-on with SAML page, in the SAML Signing Certificate section, find Federation Metadata XML and select Download to download the certificate and save it on your computer.
On the Set up F5 section, copy the appropriate URL(s) based on your requirement.

Create an Azure AD test user

In this section, you'll create a test user in the Azure portal called A.Vandelay.

From the left pane in the Azure portal, select Azure Active Directory, select Users, and then select All users.
Select New user at the top of the screen.
In the User properties, follow these steps:
In the Name field, enter A.Vandelay.
In the User name field, enter the username@companydomain.extension. For example, A.Vandelay@contoso.com.
Select the Show password check box, and then write down the value that's displayed in the Password box.
Click Create.

Assign the Azure AD test user

In this section, you'll enable A.Vandelay to use Azure single sign-on by granting access to F5.

In the Azure portal, select Enterprise Applications, and then select All applications.
In the applications list, select F5.
In the app's overview page, find the Manage section and select Users and groups.
Select Add user, then select Users and groups in the Add Assignment dialog.
In the Users and groups dialog, select A.Vandelay from the Users list, then click the Select button at the bottom of the screen.
If you're expecting any role value in the SAML assertion, in the Select Role dialog, select the appropriate role for the user from the list and then click the Select button at the bottom of the screen.
In the Add Assignment dialog, click the Assign button.

Configure F5 BIG-IP APM

Configure your on-premises applications based on the authentication type.

Configure F5 single sign-on for Kerberos-based application

Open your browser and access BIG-IP. You need to import the Metadata Certificate into the F5 (Kerberos), which will be used later in the setup process. Go to System > Certificate Management > Traffic Certificate Management >> SSL Certificate List and click on Import in the right-hand corner. You also need an SSL Certificate for the hostname (Kerbapp.superdemo.live); in this example we used a wildcard certificate.

Go to F5 BIG-IP, click Access > Guided Configuration > Federation > SAML Service Provider.
Specify the Entity ID (same as what you configured on the Azure AD application configuration).
Create a new Virtual Server and specify the Destination Address.
Choose the wildcard certificate (or the cert you uploaded for the application) that we uploaded earlier and the associated private key.
Upload the configuration Metadata and specify a new Name for the SAML IDP Connector; you will also need to specify the Federation Certificate that was uploaded earlier.
Create a new Backend App Pool and specify the IP address(es) of the backend application servers.
Under Single Sign-on Settings, choose Kerberos and select Advanced Settings. The request needs to be created in user@domain.suffix form. Under the username source, specify session.saml.last.attr.name.http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname. Refer to the Appendix for the complete list of variables and values. The Account Name is the F5 delegation account created earlier (check the F5 documentation).
Under Endpoint Checks Properties, click Save & Next.
Under Timeout Settings, leave the default settings and click Save & Next.
Review the Summary and click Deploy.

Configure F5 single sign-on for Header-based application

Open your browser and access BIG-IP. You need to import the Metadata Certificate into the F5 (Header Based), which will be used later in the setup process. Go to System > Certificate Management > Traffic Certificate Management >> SSL Certificate List and click on Import in the right-hand corner. You also need an SSL Certificate for the hostname (headerapp.superdemo.live); in this example we used a wildcard certificate.

Go to F5 (Header Based) BIG-IP, click Access > Guided Configuration > Federation > SAML Service Provider.
Specify the Entity ID (same as what you configured on the Azure AD application configuration).
Create a new Virtual Server and specify the Destination Address; the Redirect Port is optional.
Choose the wildcard certificate (or the cert you uploaded for the application) that we uploaded earlier and the associated private key.
Upload the configuration Metadata and specify a new Name for the SAML IDP Connector; you will also need to specify the Federation Certificate that was uploaded earlier.
Create a new Backend App Pool and specify the IP address(es) of the backend application servers.
Under Single Sign-on, choose HTTP header-based. You can add other headers based on your application. See the Appendix for the list of SAML session variables.
Under Endpoint Checks Properties, click Save & Next.
Under Timeout Settings, leave the default settings and click Save & Next.
Review the Summary and click Deploy.
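If your application needs a header the guided configuration does not expose, an iRule attached to the virtual server can inject one from any APM session variable. This is only a sketch: the SAML attribute name is reused from the Kerberos example above, and X-Remote-User is a hypothetical header name; substitute whatever your application actually expects.

when HTTP_REQUEST {
    # Read the SAML attribute from the APM session (attribute name assumed from the example above)
    set user [ACCESS::session data get "session.saml.last.attr.name.http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname"]
    if { $user ne "" } {
        # Strip any client-supplied copy of the header, then insert our own
        HTTP::header remove "X-Remote-User"
        HTTP::header insert "X-Remote-User" $user
    }
}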
Resources

BIG-IP Knowledge Center
BIG-IP APM Knowledge Center
Configuring Single Sign-On with Access Policy Manager

Summary

By centralizing access to all your applications, you can manage them more securely. Through the F5 BIG-IP APM and Azure AD integration, you can centralize and use single sign-on (SSO) and multi-factor authentication for on-premises applications.

Validated Products and Versions

Product: BIG-IP APM
Version: 15.0

Configuring APM Client Side NTLM Authentication
Introduction

There have been a ton of requests on the boards for a simplified client side NTLM configuration, so based on Michael Koyfman's excellent Leveraging BIG-IP APM for seamless client NTLM Authentication, I've put together this article to show the very basic requirements for setting up APM client side NTLM authentication. So why APM client side NTLM? Like Kerberos, NTLM can provide seamless (i.e. transparent) logon for local clients. It's arguably not as secure as Kerberos as authentication protocols go, but if you've ever worked with Kerberos, you'll likely appreciate that NTLM is simpler and a bit more flexible. APM's client side NTLM authentication is also considerably different than the other client side methods, which generally include visual policy authentication agents and a AAA configuration. This difference allows client side NTLM to be enabled and disabled per request, as needed by Microsoft Exchange and Secure Web Gateway access features. APM's client side NTLM is controlled by a feature called ECA, or External Client Authentication. Let's jump right in and look at how ECA is used to configure NTLM. There have been some changes to the way client side NTLM functions between APM versions, so this article will focus on 11.6.

Configuration

Step 1: Create a virtual server

It may seem counter-intuitive to create the virtual server first, but you'll need it in step 5 below. Nothing fancy here, just create a standard virtual server and assign an HTTP profile and pool. Of course if you're offloading SSL (which you should be) also include a client SSL profile.

Step 2: Configure system DNS

APM needs to be able to resolve the Active Directory domain controller(s), so you'll need to specify the DNS for the domain. In the BIG-IP management GUI, navigate to System -> Configuration -> Device -> DNS. In the DNS Lookup Server List section, add the IP address of any AD DNS server(s). You can do the same in the TM shell as follows:

tmsh modify sys dns name-servers replace-all-with { [IP address of DC DNS server] }

Step 3: Create an Active Directory NTLM machine account

This step creates a computer account in the Active Directory that APM will use to validate the NTLM tokens from clients. This machine account will be used in a pass-through authentication capacity. Please see the following Microsoft article for more information on NTLM Pass-Through Authentication. In the BIG-IP management GUI, navigate to Access Policy -> Access Profiles -> NTLM -> Machine Account. Click the Create button. At a minimum you need an arbitrary profile object name, the name of the computer account to create (it must not already exist), the domain FQDN (the DCs will be resolved via DNS SRV queries), and the credentials of a user with privileges to create computer accounts. To specify a single domain controller, enter that DC's FQDN into the Domain Controller FQDN field. Click the Join button to proceed. You can do the same in the TM shell as follows:

tmsh create apm ntlm machine-account [profile object name] domain-fqdn [domain FQDN] machine-account-name [new machine account name] administrator-name [admin name] administrator-password ["admin password"]

If the command is successful you'll see a new computer account in the Active Directory.

Step 4: Create an NTLM Auth Configuration

Now let's add that computer account object to an NTLM Auth configuration. In the BIG-IP management GUI, navigate to Access Policy -> Access Profiles -> NTLM -> NTLM Auth Configuration. Click the Create button.
Give it an arbitrary profile object name and specify the previously-created machine account name. Add the FQDN for a domain controller to the Domain Controller FQDN List field. Note: You must add at least one domain controller here. If you don't specify anything, users will be allowed access without any NTLM token validation. In some APM versions the Domain Controller FQDN List field displays as mandatory and in others optional. In all cases it is absolutely required. Click the Finished button to proceed. You can do the same in the TM shell as follows:

tmsh create apm ntlm ntlm-auth [profile object name] machine-account-name [machine account object name]

Step 5: Enable the ECA profile for a specific virtual server

Unlike the other APM client side authentication methods, there's no GUI option to enable APM client side NTLM. To do that you need to apply the ECA profile to a virtual via the TM shell:

tmsh modify ltm virtual [virtual server name] profiles add { eca }

Step 6: Create an iRule to enable client side NTLM

This is where all of the magic happens. Create this very simple iRule and apply it to your virtual server:

when HTTP_REQUEST {
    ECA::enable
    ECA::select select_ntlm:/Common/my-ntlm-machine-auth
}

That's it. You need to enable ECA and then specify the name of the NTLM Auth configuration object (created in step 4 above). I would also point out here that client side NTLM authentication is a bit different from Kerberos in that ECA is generally going to issue a 401 Unauthorized NTLM challenge on every new request. If this proves to add too much overhead, the following modification to the above iRule will allow NTLM to be processed once at the beginning of the session. The APM session cookie is used thereafter to maintain the session.

when HTTP_REQUEST {
    if { [ACCESS::session data get session.ntlm.last.result] eq 1 } {
        ECA::disable
    } else {
        ECA::enable
        ECA::select select_ntlm:/Common/my-ntlm-machine-auth
    }
}

Step 7: Create an access profile

Create a standard access profile and configure the visual policy like this:

There's nothing to configure inside the NTLM Auth Result agent, so it's just there to validate the returned session.ntlm.last.result session variable. Add this access profile to the virtual server.

Step 8: Modify the client browser to support NTLM authentication

By default modern browsers do not, for obvious security reasons, send credentials to any web site that asks for them, so you have to explicitly define which sites the browser can send credentials to. For Microsoft Internet Explorer that's as simple as adding the site to the Local Intranet Sites list. In Internet Explorer, go to Tools -> Internet Options. Open the Security tab, select Local intranet and then click the Sites button. Now click the Advanced button. Add the APM virtual server's URL here. You can specify the exact URL (ex. https://ntlm-test.domain.com) or a wildcard (ex. *.domain.com) to cover everything under a given domain. This should also cover Chrome and Opera running in Windows. For Firefox, navigate to the config URL at about:config and type "trusted" in the Search field. You'll see a short list of keys with the word "trusted" in them. Double click the "network.automatic-ntlm-auth.trusted-uris" key and add either the full FQDN of a specific APM virtual server (ex. ntlm-test.domain.com) or, for an entire domain, just the domain component itself (ex. .domain.com).

Testing and Troubleshooting

With your browser(s) configured, go ahead and test.
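While testing, it can also help to log what ECA decides per request. As a hedged sketch, the iRule events and commands below (described in more detail near the end of this article) can be appended to the step 6 iRule:

when ECA_REQUEST_ALLOWED {
    # Log the identity details ECA extracted from the NTLM exchange
    log local0. "NTLM auth OK: user=[ECA::username] domain=[ECA::domainname] client=[IP::client_addr]"
}
when ECA_REQUEST_DENIED {
    log local0. "NTLM auth denied for client [IP::client_addr]"
}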
If you get prompted for username and password, NTLM authentication has failed. Probably one of the best first things to do is to open a console (SSH) shell and tail the APM and LTM logs. In many cases the error will be screaming at you from one of these logs.

# cd /var/log
# tail -f ltm apm

Here are some possible causes for client side NTLM authentication to fail:

Your browser isn't correctly configured to send credentials to this site: Review the browser's security configurations. I mentioned earlier that IE's Local intranet sites list must include the FQDN of the APM virtual server. Make sure you're indeed using the Local intranet sites list and not one of the other sites lists. There is also another setting in IE that can cause issues. This setting is enabled by default, but you could be in an environment where it has been disabled by policy. Under the Security tab of IE's Internet Options, click on the "Custom level..." button, scroll all the way to the bottom of that list and look for the "Automatic logon only in Intranet zone" option. This needs to be checked and is the reason why you add the APM virtual server to the Local intranet sites list.

You're not logged in as a domain member and/or your workstation is not domain joined: Check that you're actually logged into the domain from a domain-joined workstation.

APM can't access the specified domain controller in the NTLM Auth config: You may see messages in the LTM log like "NT_STATUS_INVALID_COMPUTER_NAME". This indicates that APM cannot resolve the address of the domain controller specified in the NTLM Auth configuration. If the domain controller just isn't available, you may see the "NT_STATUS_HOST_UNREACHABLE" error message.

NTLM is not enabled or configured correctly in the Active Directory: You may see messages in the APM log like "NT_STATUS_NETWORK_ACCESS_DENIED" or "NT_STATUS_PIPE_NOT_AVAILABLE". These indicate that there's probably something wrong on the Windows domain controller side. You may want to take a look at local or domain group policy settings and NTLM event logs.

Worst case, the NTLM machine account is corrupt: If all else fails, delete and recreate the NTLM machine account. You'll need to delete it in APM and in the domain. Wait for replication to finish if there are multiple DCs and you're using the same machine name, or simply use a different machine name.

Let's now see what client side NTLM authentication looks like on the wire. For this part I'm going to fire up Wireshark and filter on "ntlmssp". The ECA profile is responsible for generating the 401 Unauthorized response to the client's initial request. In that 401 response is the WWW-Authenticate header and NTLM challenge. If there's something wrong you may also see error messages manifest in the Wireshark captures.

Cross-domain Considerations

Client side APM NTLM authentication natively works across domains if the domains are in a forest trust, external two-way trust, or external one-way outgoing trust. For non-trusting domains, since the NTLM Auth config profile is assigned first, there's no native way to switch between profiles based on something in the client's NTLM challenge response. If you can identify the clients by some other characteristic, a unique IP subnet for example, that might be an option (see the sketch below).
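As a sketch of that subnet idea, the iRule below picks a different NTLM Auth configuration by client source address. The subnets and configuration names are hypothetical; each select_ntlm target must point to an NTLM Auth configuration built against the matching domain, as in step 4:

when HTTP_REQUEST {
    ECA::enable
    # Hypothetical subnets: map each client network to the NTLM Auth config for its domain
    if { [IP::addr [IP::client_addr] equals 10.1.0.0/16] } {
        ECA::select select_ntlm:/Common/domain-a-ntlm-auth
    } else {
        ECA::select select_ntlm:/Common/domain-b-ntlm-auth
    }
}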
Expanding Possibilities

Now that we have APM client side NTLM humming along, let's look at some other things you can do with it. A successful client side NTLM authentication will produce a few interesting session values:

session.logon.last.username - the APM session variable that contains the logged in username
session.logon.last.domain - the APM session variable that contains the domain of the logged in user
session.logon.last.machinename - the APM session variable that contains the logged in user's workstation name

These session variables can be used and evaluated inside the visual policy and in iRules. For example, let's say you want to do an AD query with the username to see if the user is in a specific AD group:

The ECA profile also creates a few other values that can be used in iRules:

ECA::status - the authentication status for that individual request. It's usually either STATUS_SUCCESS or STATUS_ACCESS_DENIED
ECA::username - the same information as the username session variable
ECA::domainname - the same information as the domain session variable
ECA::client_machine_name - the same information as the machine name session variable

Since the ECA profile is also enabled at will, there are some other things you can do with it. One example would be to enable and disable client side NTLM based on the requested URI:

when HTTP_REQUEST {
    if { [HTTP::uri] starts_with "/protected_uri" } {
        ECA::enable
        ECA::select select_ntlm:/Common/my-ntlm-machine-auth
    } else {
        ECA::disable
    }
}

And finally you have two ECA events, ECA_REQUEST_ALLOWED and ECA_REQUEST_DENIED, that can be used to trigger specific actions on successful authentication or denied logon attempts, respectively. Well that's it for now. If you have any questions please post them below and I'll try to answer. Otherwise stay tuned for more BIG-IP APM articles. Thanks.

How to Extract the UPN from a Digital Certificate on a CAC card using F5 APM
Introduction

This article describes two different methods to extract the UPN from a digital certificate for further processing by the BIG-IP. While there are other excellent articles that show you how to build out the entire access policy, this article concentrates on the methods for extracting the UPN.

Some Context

The CAC card is a "smart" card about the size of a credit card. It is the standard identification for active duty uniformed Service personnel, Selected Reserve, DoD civilian employees, and eligible contractor personnel. It is also the principal card used to enable physical access to buildings and controlled spaces, and it provides access to DoD computer networks and systems. Accessing DoD PKI-protected information is most commonly achieved using the PKI certificates stored on your Common Access Card (CAC). The certificates on your CAC can allow you to perform routine activities such as accessing OWA, signing documents, and viewing other PKI-protected information online. In F5, the typical authentication motion for F5 Access Policy Manager when dealing with common access cards is to:

Present a DoD Warning Banner
Validate the certificate through TLS
Validate that the certificate has not been revoked
Pull the UPN field of the certificate to search for the user in LDAP

The F5 Access Policy Manager uses the Universal Principal Name (UPN) taken from the Subject Alternative Name (SAN) field of the Signature Certificate to search for the user in LDAP and allow or deny access based on the information found. The diagram below shows the value that we will be pulling from the certificate to use for further authentication. On a DoD CAC the UPN would be of the format EDIPI@mil. The following describes two methods to extract the UPN as part of an APM policy. If you prefer, I have published a 5-minute demonstration video outlining the steps presented in this article. Otherwise, you may continue reading, or refer back at your desired pace, to the step-by-step presented below.

Method One – a Variable Assign within an access policy

This method relies on the use of an access policy item called a "Variable Assign" that contains a custom expression. In the diagram below we are placing a Variable Assign access policy item after checking that the certificate is valid through mutual TLS with the On Demand Cert Auth agent, and then checking with an OCSP server that the certificate has not been revoked. To add the Variable Assign, click the '+' icon in the visual policy editor and select the Variable Assign item; it is under the Assignment tab in the Visual Policy Editor. In the Variable Assign, give the access policy item a name, for instance "upn_extract", and then click the "Add New Entry" button. Ensure that Custom Variable is selected and create a variable name, for instance session.custom.upn. On the right side select "Custom Expression" and place the following expression in the entry field below; this expression parses the x509 certificate attributes on the CAC card for the UPN.

set x509e_fields [split [mcget {session.ssl.cert.x509extension}] "\n"];
# For each element in the list:
foreach field $x509e_fields {
    # If the element contains UPN:
    if { $field contains "othername:UPN" } {
        ## set start of UPN variable
        set start [expr {[string first "othername:UPN<" $field] +14}]
        # UPN format is <user@domain>
        # Return the UPN, by finding the index of opening and closing brackets, then use string range to get everything between.
        return [string range $field $start [expr { [string first ">" $field $start] - 1 } ] ];
    }
}
# Otherwise return UPN Not Found:
return "UPN-NOT-FOUND";

Click "Finished".

Method Two – an Access Policy Agent Event with an iRule

The second method relies on the use of an access policy item called an "iRule Event" that uses an iRule to extract the UPN. In the diagram below we are placing an iRule Event access policy item after checking that the certificate is valid through mutual TLS with the On Demand Cert Auth agent, and then checking with an OCSP server that the certificate has not been revoked. To add the iRule Event, click the '+' icon in the visual policy editor and select the iRule Event item; it is under the General Purpose tab in the Visual Policy Editor. Then provide a name and a Custom iRule Event Agent ID. I like to make the name the same as the identifier, but they can be different. The Custom iRule Event Agent ID ties the visual policy editor iRule Event item to an iRule.

Now create the iRule. Under Local Traffic/iRules click the Create button, provide a name for the iRule, and place the following iRule in the entry field; this iRule parses the x509 certificate attributes on the CAC card for the UPN.

when ACCESS_POLICY_AGENT_EVENT {
    if { [ACCESS::policy agent_id] eq "CERTPROC" } {
        # This event extracts the user principal name from a client-certificate and places it into a session variable.
        if { [ACCESS::session data get session.ssl.cert.x509extension] contains "othername:UPN<" } {
            ACCESS::session data set session.custom.upn [findstr [ACCESS::session data get session.ssl.cert.x509extension] "othername:UPN<" 14 ">"]
        }
    }
}

Click "Finished". Then on the virtual server that provides the service, select "Resources" and then select "Manage". Finally, move the CERTPROC iRule from Available to Enabled.

Conclusion

Both of these methods will ultimately result in the user principal name, the "UPN", being stored in a session variable within the access policy. This session variable can then be used in an LDAP lookup that can verify that the user exists within the directory, and can also be used to pull further information from the directory that will enable additional verification and authentication. Examples might be performing single sign-on to an application or determining group membership.

Which one is better? (Editorial Time)

While both methods are completely valid, I prefer the Variable Assign within an access policy, as it provides a single place in the VPE where the configuration resides. It also allows for a more rapid understanding of the configuration from a troubleshooting perspective, as the expression resides within the visual policy. The iRule method means there will be multiple locations where the configuration resides. An experienced APM administrator will be able to quickly determine that an iRule is being used; for a less experienced APM administrator this may take more time to work out, which could hinder future troubleshooting. On the other hand, the iRule method is more performant than the expression method and may be a better fit for a high-traffic APM VIP.
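As a brief illustration of the LDAP lookup mentioned in the conclusion, an LDAP Query item placed after either method can locate the user by referencing the new session variable in its search filter. This is a sketch that assumes a standard Active Directory schema:

(userPrincipalName=%{session.custom.upn})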
Configuring Smart Card Authentication to BIG-IP Management Interface

Developed on BIG-IP Version 13.1

It's been quite a while since my last article, so I wanted to come up with something that I know would benefit all current, future and past customers. Over the past few years of deploying and managing BIG-IPs, I always got the same question from my federal customers: how do we smart card enable our BIG-IP management interface? Well, I'm here to not only tell you but to show you how it's done. I will also share some of the troubleshooting steps, logs and tools I used to overcome my own issues while attempting this. So, with that, let's get started.

Configuring Remote Role Groups

So, how many of you today are still using local credentials or defining administrative users one by one within the BIG-IP TMUI? Did you know you could use Active Directory security groups to make managing administrative access a whole lot easier? Thank goodness the answer is yes, and it's called Remote Role Groups. Before we begin defining the authentication method, you must configure Remote Role Groups, since this will be referenced immediately after changing authentication to Remote - ClientCert LDAP.

Navigate to System > Users > Select Remote Role Groups and click Create Group.

Name: BIGIPAdmins
Line Order: 1
Attribute String: memberOF=CN=BIGIPadmins,OU=Groups,DC=demo,DC=lab
Note: Use the full DN of the Active Directory security group you are defining, with a preceding 'memberOF='.
Assigned Role: Administrator
Partition Access: All
Terminal Access: tmsh

Validating Certificate Revocation Using OCSP

You might be wondering: wouldn't this be a part of the troubleshooting steps after we configure the TMUI to support smart cards? You would be right, though doing this in a lab I do not consider myself an expert at deploying and configuring a PKI infrastructure within a Windows 2012 environment, so if this is helpful, great; if not, continue to the next step. Using a copy of my user certificate, I am going to run a command to obtain the AIA information and perform a revocation check against my local OCSP responder to validate that I am able to successfully verify my certificate.

certutil -URL path\user.cer

You will then be prompted with a URL Retrieval Tool. Select OCSP (from AIA) and click Retrieve. If valid, you will receive a status of Verified as shown above.

Obtain a CA or CA Bundle in PEM Encoded Format

For DoD customers, navigate to https://iase.disa.mil/pki-pke/Pages/index.aspx
Select For Administrators, Integrators and Developers
Select Tools and continue to browse until you locate PKI CA Certificate Bundles: PEM Self-Extracting ZIP
Select the .exe that is appropriate for your organization; as an example I have selected For DoD PKI Only.
Run the executable to extract all CA certs into a new empty directory
Launch a command prompt from Start > Run > cmd
Change directories until you are at the location where you extracted all CA certificate files.
Run the command copy /B *.cer DoDCABundle.cer

Import CA Bundle into BIG-IP

Log into the BIG-IP TMUI > System > Certificate Management > Device Certificate Management > Device CA Certificate List > Import
Browse to the directory that you stored the CA bundle in
Provide a Name and select Import

Configure User Authentication

Navigate to System > Users > Authentication > Change
From the User Directory drop down select Remote - ClientCert LDAP
Host: IP address of your directory services server
Port: 389
Remote Directory Tree: DC=demo,DC=lab
Scope: Sub
Bind DN: CN=admin,CN=Users,DC=demo,DC=lab
Provide Password and Confirm
Check Member Attribute in Group: Enabled
SSL: Disabled
CA Certificate: Select the CA certificate bundle created in the previous steps.
Login Name: Can leave empty
Login LDAP Attribute: userPrincipalName (Case Sensitive)
Login Filter: [a-zA-Z0-9]\\w*(\?=@)
Depth: 10
Client Certificate Name Field: Other Name...
OID: 1.3.6.1.4.1.311.20.2.3
OCSP Override: On
OCSP Responder: http://IP/ocsp
OCSP Response Max Age: -1
OCSP Response Time Skew: 300
OCSP Response Timeout: 300
External Users: Leave Defaults
Select Finished

If successful, you will be prompted for a client certificate and re-authenticated with no issues. But let's just say I wasn't that lucky the first few times I attempted this config. The big scary error I got and continued to get is below... Therefore I will provide some of the troubleshooting tips and tricks that assisted me in determining why authentication was failing.

Troubleshooting

So, let's start with the one issue that really scared me the most, even though it was just a development environment: httpd. During my first attempt at ClientCert authentication I followed the instructions as they were laid out in the deployment guide, though for some reason not only could I not authenticate using a certificate, but when turning SSO off I still couldn't log in. So, to save you all the trouble of determining what configuration item was causing this issue, I can tell you it was the httpd service not starting due to a non-PEM CA certificate. First, I ran a bigstart status httpd and noticed it was not running. When attempting a bigstart start httpd it would fail. Luckily httpd has its own log file, though honestly it didn't help much.

Launch a putty session and log in using root or similar credentials that allow access to both the shell and tmsh.
Navigate to the httpd directory by running cd /var/log/httpd
Perform a tail on the httpd_errors log by running tail -f httpd_errors

The error that I just could not figure out was "Unable to configure verify locations for client authentication." Believe it or not, I didn't find much on DevCentral or internal resources on this error, so I started rolling my configuration back one item at a time until I ran into the ssl-ca-cert-file within the httpd config. After modifying this to none, I was able to log in. That is of course after disabling SSO by running tmsh modify auth cert-ldap system-auth sso off. So, before moving forward, ensure your CA file is in PEM format before configuring client-cert LDAP auth.

Now, moving on. I won't get into it too much, but as mentioned at the beginning of the article, it is probably a good idea to ensure the client certificate can be validated by running certutil -URL path\user.cer. The other issue that I ran into, because I am running Windows OCSP, is that I did not have Nonce extension support enabled on my responder. Therefore I was receiving the error messages below in the httpd_errors log.
Jan 22 15:47:18 bigip1 err httpd[20075]: [error] OCSP response not successful: 0
Jan 22 15:47:18 bigip1 err httpd[20075]: [error] [client IP] Certificate Verification: Error (50): application verification failure

After configuring Nonce extension support, I thought I was good...nope, more hurdles. So now that I have a CA bundle in PEM format, Nonce extension support enabled, and a valid user certificate, I still couldn't log on. This time, after seeing successful responses from my OCSP responder, I went to my secure log because I was getting prompted for a cert and then username and password. From the shell, navigate to /var/log and run a tail -f secure. In the secure log I was continuously getting unknown user, though I knew the DN for my group was correct and I certainly have an AD account, so what now? This is where I went old school and downloaded an archived version of Netmon! Thank you for teaching me this a long long time ago @Mike Melone! After downloading Netmon (you can of course use Wireshark) I started a packet capture with a filter of tcp.port == 389 so that I could see the LDAP requests and responses. Based on the help for the deployment of ClientCert - LDAP, my interpretation was that you MUST use sAMAccountName if authenticating against AD. However, clearly I misinterpreted, because after looking at the capture there was no way the filter was my sAMAccountName; it was my UPN. With that information, I modified the cert-ldap login-attribute from sAMAccountName to userPrincipalName and boom, it worked! Now, by no means would this be the only attribute you could use, but rather the attribute that I utilized for successful logon when the UPN on my cert references an alternate UPN suffix than my actual Active Directory domain name.

That wraps another article that I hope the community finds helpful. Below you can find my actual config that you would find under tmsh list auth cert-ldap and tmsh list sys httpd. Until next time.

root@(bigip1)(cfg-sync Standalone)(ModuleNotLicensed::Active)(/Common)(tmos)# list sys httpd
sys httpd {
    auth-pam-idle-timeout 12000
    ssl-ca-cert-file /Common/CABase64
    ssl-ocsp-default-responder http://10.1.20.10/ocsp
    ssl-ocsp-enable on
    ssl-ocsp-override-responder on
    ssl-verify-client require
}

root@(bigip1)(cfg-sync Standalone)(ModuleNotLicensed::Active)(/Common)(tmos)# list auth cert-ldap
auth cert-ldap system-auth {
    bind-dn CN=admin,CN=Users,DC=demo,DC=lab
    bind-pw $M$Sh$JrUPQrhEhMicK39ZostQJQ==
    check-roles-group enabled
    debug enabled
    login-attribute userPrincipalName
    login-filter [a-zA-Z0-9]\\\\w*(\\\?=@)
    search-base-dn DC=demo,DC=lab
    servers { 10.1.20.10 }
    ssl-ca-cert-file Base64CA.crt
    ssl-cname-field san-other
    ssl-cname-otheroid 1.3.6.1.4.1.311.20.2.3
    sso on
}

Reference Articles

BIG-IP Remote User Account Management

Yubikey Authentication Modes and Azure AD integration via the APM
Introduction

The Yubikey (https://www.yubico.com/) supports three major functions: authentication, signing and encryption. As far as authentication goes, it supports the following mechanisms:

OTP (one-time password)
Yubikey OTP (Time based OTP)
OATH HOTP (HMAC based OTP)
FIDO U2F (Universal 2nd Factor)
FIDO2
PIV (Smartcard)

Each of the above-mentioned protocols has its own set of requirements and is therefore not universally supported everywhere.

OTP

OTP is probably the simplest, with a one-time password being used, typically as the 2nd factor. However, it is also the weakest, as it does not mitigate against MITM attacks. For example, a fake site impersonating a legitimate site can trick the user into entering the OTP and subsequently forward it to the real site. All Yubikeys by default have manufacturer-assigned secrets registered with Yubico's own validation servers. Yubico provides a tool that allows you to re-program the key, giving it a different secret. However, the new secret has to be uploaded to Yubico's validation servers (https://upload.yubico.com/), otherwise OTP will stop working. Yubikey OTP integrates with a large number of services (e.g., Gmail, LastPass). When a service receives an OTP, it reaches out to Yubico for validation. In the case of Okta, the secrets can be uploaded directly into Okta and validation happens within Okta.

FIDO U2F

FIDO U2F, or U2F for short, mitigates MITM. This method requires the user to register the authenticator (e.g., Yubikey) with the application (e.g., Gmail) first, during which a key pair is generated by the authenticator and the public key is sent to and stored on the application. Once the registration is complete, the user can then use the authenticator as the 2nd factor. In the case of Gmail, once the user's credentials are verified, the user touches the Yubikey for the 2nd factor. No code is entered by the user.

FIDO2

FIDO2 enables passwordless authentication. Once the Yubikey is registered with an application (e.g., Azure Portal) for FIDO2 authentication, the user touches the Yubikey, optionally provides a PIN code for the key, and logs straight in; no username is entered. FIDO2 is an evolution of U2F and is dependent upon WebAuthn (a client API implemented within the browser) and CTAP2 (an authenticator API that enables FIDO2-capable devices to interface to external/roaming authenticators over Bluetooth, USB or Near Field Communication (NFC)). Per the FIDO Alliance (https://fidoalliance.org/fido2/fido2-web-authentication-webauthn/#:~:text=WebAuthn%20is%20currently%20supported%20in,Windows%2010%20and%20Android%20platforms), browser support for U2F and WebAuthn is shown below.

PIV

The YubiKey can also present itself as a PIV smartcard that contains a client certificate.

APM Integration

There are a number of articles on DevCentral that cover programming the APM to receive the Yubikey OTP and then send it over to Yubico's validation servers via a side band connection for OTP verification. My particular use case is to leverage an IDaaS (e.g., Azure AD) as the IDP and use the APM as the SP. My choice of integration is via OIDC, but SAML is an equally valid option. Since authentication is offloaded to Azure AD, both the OTP and FIDO2 passwordless authentication methods are now available. Azure AD does not support U2F.

Azure AD and User Configuration

OTP

If the Yubikey is used for OTP, Azure AD needs to have MFA enabled, and a 'Conditional Access' policy created to 'Require multi-factor authentication' for your selected apps.
This process is documented by Microsoft (https://docs.microsoft.com/en-us/azure/active-directory/authentication/tutorial-enable-azure-mfa). The user must also add an authenticator app via self-registration (https://mysignins.microsoft.com/); be sure to click on the highlighted option, which allows you to enrol the 'Yubico Authenticator' (https://www.yubico.com/products/services-software/download/yubico-authenticator/). The Yubico Authenticator works with the Yubikey to generate the OTP. Yubico argues that it is more secure because, unlike a soft authenticator, the secrets are not saved within the authenticator itself, but rather in a secure element within the Yubikey. Note 'Touch your Yubikey', which is needed before an OTP is generated. The OTP method does not impose special requirements on the browser, which means it works on any browser, as well as on the APM Edge client, which leverages certain browser functions.

FIDO2

With FIDO2-based passwordless authentication, the 'FIDO2 Security Key' option within Azure AD (e.g., under Security) has to be enabled first. Again, the user goes through self-registration (https://mysignins.microsoft.com/) prior to the Yubikey becoming available. I have tried different browsers on the Mac; the only browsers that work are Chrome and Edge. Mileage may vary in Windows. If FIDO2 works, the highlighted option will appear. In the case of Azure AD, a PIN is required (configured during the self-registration process). Once it's entered, the user logs in after touching the Yubikey. At this time, the latest (v7210) APM Edge client's embedded browser for login does not work with FIDO2, likely due to WebAuthn not being supported. If the user uses a supported browser, passwordless authentication should work, after which a webtop is presented.

SSL Orchestrator Advanced Use Cases: Forward Proxy Authentication
Introduction

F5 BIG-IP is synonymous with "flexibility". You likely have few other devices in your architecture that provide the breadth of capabilities that come native with the BIG-IP platform. And for each and every BIG-IP product module, the opportunities to expand functionality are almost limitless. In this article series we examine the flexibility options of the F5 SSL Orchestrator in a set of "advanced" use cases.

If you haven't noticed, the world has been steadily moving toward encrypted communications. Everything from web, email, voice, video, chat, and IoT is now wrapped in TLS, and that's a good thing. The problem is, malware - that thing that creates havoc in your organization, that exfiltrates personnel records to the Dark Web - isn't stopped by encryption. TLS 1.3 and multi-factor authentication don't eradicate malware. The only reasonable way to defend against it is to catch it in the act, and an entire industry of security products is designed for just this task. But ironically, encryption makes this hard. You can't protect against what you can't see. F5 SSL Orchestrator simplifies traffic decryption and malware inspection, and dynamically orchestrates traffic to your security stack. But it does much more than that. SSL Orchestrator is built on top of F5's BIG-IP platform, and as stated earlier, is abound with flexibility.

SSL Orchestrator Use Case: Forward Proxy Authentication

Arguably, authentication is an easy one for BIG-IP, but I'm going to ease into this series slowly. There's no better place to start than with an examination of some of the many ways you can configure an F5 BIG-IP to authenticate user traffic.

Forward Proxy Overview

Forward proxy authentication isn't exclusive to SSL Orchestrator, but it is a vital component if you need to authenticate inspected outbound client traffic to the Internet. In this article, we are simply going to explore the act of authenticating in a forward proxy in general - how it works, and how it's applied. For detailed instructions on setting up Kerberos and NTLM forward proxy authentication, please see the SSL Orchestrator deployment guide.

Let's start with a general characterization of "forward proxy" to level set. The semantics of forward and reverse proxy can change depending on your environment, but generally when we talk about a forward proxy, we're talking about something that controls outbound (usually Internet-bound) traffic. This is typically internal organizational traffic to the Internet. It is an important distinction, because it also dictates the way we handle encryption. In a forward proxy, clients are accessing remote Internet resources (ex. https://www.f5.com). For TLS to work, the client expects to receive a valid certificate from that remote resource, though the inspection device in the middle does not own that certificate and private key. So for decryption to work in an "SSL forward proxy", the middle device must re-issue ("forge") the remote server's certificate to the client using a locally-trusted CA certificate (and key). This is essentially how every SSL visibility product works for outbound traffic, and a native function of the SSL Orchestrator. Now, for any of this to work, traffic must of course be directed through the forward proxy, and there are generally two ways that this is accomplished:

Explicit proxy - where the browser is configured to access the Internet through a proxy server. This can also be accomplished through auto-configuration scripts (PAC and WPAD).
It should be noted here that SSL visibility products that deploy at layer 2 are effectively limited to one traffic flow option, and lack the level of control that a true proxy solution provides, including authentication. Also note that BIG-IP forward proxy authentication requires the Access Policy Manager (APM) module to be licensed and provisioned.

Explicit Forward Proxy Authentication

The option you choose for outbound traffic flow will have an impact on how you authenticate that traffic, as each works a bit differently. Again, we're not getting into the details of Kerberos or NTLM here. The goal is to derive an essential understanding of the forward proxy authentication mechanisms, how they work, how traffic flows through them, and ultimately how to build them and apply them to your SSL Orchestrator configurations. Let us start with explicit proxy.

Explicit forward proxy authentication for HTTP traffic is governed by a "407" authentication model. In this model, the user agent (i.e. a browser) authenticates to the proxy server before passing any user request traffic to the remote server. This is an important distinction from other user-based authentication mechanisms, as the browser is generally limited in the types of authentication it can perform here (on the user's behalf). In fact, most modern browsers, with some exceptions, are limited to the set of "Windows Integrated" methods (NTLM, Kerberos, and Basic). Explicit forward proxy authentication will look something like this:

Figure: 407-based HTTPS and HTTP authentication

The upside here is that the Windows Integrated methods are usually "transparent", that is, silently handled by the browser and invisible to the user. If you're logged into a domain-joined workstation with a domain user account, the browser will use this access to generate an NTLM token or fetch a Kerberos ticket on your behalf. If you build an SSL Orchestrator explicit forward proxy topology, you may notice it builds two virtual servers. One of these is the explicit proxy itself, listening on the defined explicit proxy IP and port. The other is a TCP tunnel VIP. All client traffic arrives at the explicit proxy VIP, then wraps around through the TCP tunnel VIP. The SSL Orchestrator security policy, SSL configurations, and service chains are all connected to the TCP tunnel VIP.

Figure: SSL Orchestrator explicit proxy VIP configuration

Because explicit proxy authentication happens at the proxy connection layer, you simply need to attach your authentication policy to the explicit proxy VIP. This is selected directly inside the topology configuration, on the Interception Rules page.

Figure: SSL Orchestrator explicit proxy authentication policy selection

But before you can do this, you must first create the authentication policy. Head on over to Access -> Profiles / Policies -> Access Profiles (Per-session policies), and click the Create button.

Settings:
Name: provide a unique name
Profile Type: SWG-Explicit
Profile Scope: leave it at 'Profile'
Customization Type: leave it at 'Modern'

Don't let the name confuse you. Secure Web Gateway (SWG) is not required to perform explicit forward proxy authentication. Click Finished to complete. You'll be taken back to the profile list. To the right of the new profile, click the Edit link to open a new tab to the Visual Policy Editor (VPE).
Now, before we dive into the VPE, let's take a moment to talk about how authentication is going to work here. As previously stated, we are not going to dig into things like Kerberos or NTLM, but we still need something to authenticate against. Once you have something simple working, you can quickly shim in the actual authentication protocol. So let's do basic LocalDB authentication to prove out the configuration. Hop down to Access -> Authentication -> Local User DB -> Instances, and click Create New Instance. Create a simple LocalDB instance:

Settings:
Name: provide a unique name

Leave the remaining settings as is and click OK. Now go to Access -> Authentication -> Local User DB -> Users, and click Create New User.

Settings:
User Name: provide a unique user name
Password: provide a password
Instances: select the LocalDB instance

Leave everything else as is and click OK. Now go back to the VPE. You're ready to define your authentication policy. With some exceptions, most explicit forward proxy authentication policies will minimally include a 407 Proxy-Authenticate agent and an authentication agent. The 407 Proxy-Authenticate agent will issue the 407 Proxy-Authenticate response to the client, and pass the user's submitted authentication data (Basic Authorization header, NTLM token, Kerberos ticket) to the auth agent behind it. The auth agent is then responsible for validating that submission and allowing (or denying) access. Since we're using a simple LocalDB to test this, we'll configure this for Basic authentication.

Figure: 407-based SWG-Explicit authentication policy

407 HTTP Response Agent Settings:
Properties
Basic Auth: enter unique text here
HTTP Auth Level: select Basic
Branch Rules
Delete the existing Negotiate Branch

Authentication Agent Settings:
Type: LocalDB Auth
LocalDB Instance: your LocalDB instance

Note again that this is a simple explicit forward proxy test using a local database for HTTP Basic authentication. Once you have this working, it is super easy to replace the LocalDB method with the authentication protocol you need. Now head back to your SSL Orchestrator explicit proxy configuration and navigate to the Interception Rules page. On that page you will see a setting for Access Profile. Select your SWG-Explicit access policy here. And that's it. Deploy the configuration and you're done. Configure your browser to point to the SSL Orchestrator explicit proxy IP and port, if you haven't already, and attempt to access an external URL (ex. https://www.f5.com). Since this is configured for HTTP Basic authentication, you should see a popup dialog in the browser requesting username and password. Enter the values you created in the LocalDB user properties. In following articles, I will show you how to configure Kerberos and NTLM for forward proxy authentication. If you want to see what this communication actually looks like on the wire, you can either enable your browser's developer tools (network tab), or for a cleaner view, head over to a command line on your client and use the cURL command (you'll need cURL installed on your workstation):

curl -vk --proxy [PROXY IP:PORT] https://www.example.com --proxy-basic --proxy-user '[username:password]'

Figure: cURL explicit proxy output

What you see in the output should look pretty close to the explicit proxy diagram from earlier. And if your SSL Orchestrator security policy is defined to intercept TLS, you will see your local CA as the example.com CA issuer.
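Another quick way to confirm that TLS interception is happening is to pull the server certificate through the proxy and inspect its issuer (a minimal sketch; the proxy IP and port are hypothetical, the -proxy option requires OpenSSL 1.1.0 or newer, and s_client cannot answer a 407 challenge, so this check is easiest while Client Certificate-style proxy auth is relaxed or against a policy path that permits unauthenticated CONNECT):

# Fetch the certificate presented through the explicit proxy and print its
# subject and issuer; with interception active, the issuer is your local CA
openssl s_client -connect www.example.com:443 -proxy 10.1.10.150:3128 </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer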
Transparent Forward Proxy Authentication

I intentionally started with explicit proxy authentication because it's usually the easiest to get your head around. Transparent forward proxy authentication is a bit different, but you very likely see it all the time. If you've ever connected to hotel, airport/airplane, or coffee shop WiFi, and you were presented with a webpage or popup screen that asked for a username or room number, or asked you to agree to some terms of use, you were using transparent authentication. In this case, though, it is commonly referred to as a "Captive Portal". Note that captive portal authentication was introduced to SSL Orchestrator in version 6.0. Captive portal authentication basically works like this:

On first connecting, you navigate to a remote URL (ex. www.f5.com), which passes through a security device (a proxy server, or in the case of hotel/coffee shop WiFi, an access point). The device has never seen you before, so it issues an HTTP redirect to a separate URL. This URL presents an authentication point, usually a web page with some form of identity verification, user agreement, etc. You do what you need to do there, and the authentication page redirects you back to the original URL (ex. www.f5.com) and either stores some information about you, or sends something back with you in the redirect (a token). On passing back through the proxy (or access point), you are recognized as an authenticated user and allowed to pass. The token is stored for the life of your session so that you are not sent back to the captive portal.

Figure: Captive-portal Authentication Process

The real beauty here is that you are not limited in the mechanisms you use to authenticate, like you are in an explicit proxy. The captive portal URL is essentially a webpage, so you could use NTLM, Kerberos, Basic, certificates, federation, OAuth, a logon page, basically anything. Configuring this in APM is also super easy. Head on over to Access -> Profiles / Policies -> Access Profiles (Per-session policies), and click the Create button.

Settings:
Name: provide a unique name
Profile Type: SWG-Transparent
Profile Scope: leave it at 'Named'
Named Scope: enter a unique value here (ex. SSO)
Customization Type: set this to 'Standard'

Again, don't let the name confuse you. Secure Web Gateway (SWG) is not required to perform transparent forward proxy authentication. Click Finished to complete. You'll be taken back to the profile list. To the right of the new profile, click the Edit link to open a new tab to the Visual Policy Editor (VPE). We are going to continue to use the LocalDB authentication method here to keep the configuration simple. But in this case, you could extend that to do Basic authentication or a logon page. If you do Basic, Kerberos, or NTLM, you'll be using a "401 authentication model". This is very similar to the 407 model, except that 401 interacts directly with the user. And again, this is just an example. Captive portal authentication isn't dependent on browser proxy authentication capabilities, and can support pretty much any user authentication method you can throw at it.

Figure: 401-based SWG-Transparent authentication policy

401 Authentication Agent Settings:
Properties
Basic Auth: enter unique text here
HTTP Auth Level: select Basic
Branch Rules
Delete the existing Negotiate Branch

Authentication Agent Settings:
Type: LocalDB Auth
LocalDB Instance: your Local DB instance

Now, there are a few additional things to do here.
Transparent proxy (captive portal) authentication actually requires two access profiles. The authentication profile you just created gets applied to the captive portal (authentication URL). You need a separate access profile on the proxy listener to redirect the user to the captive portal if no token exists for that user. As it turns out, an SSL Orchestrator security policy is indeed a type of access profile, so it simply gets modified to point to the captive portal URL. The 'Named' profile scope you selected in the above authentication profile defines how the two profiles share user identity information; thus both will have a named profile scope, and must use the same named scope value (ex. SSO). You will now create the second access profile:

Settings:
Name: provide a unique name
Profile Type: SSL Orchestrator
Profile Scope: leave it at 'Named'
Named Scope: enter a unique value here (ex. SSO)
Customization Type: set this to 'Standard'
Captive Portals: select 'Enabled'
Primary Authentication URI: enter the URL of the captive portal (ex. https://login.f5labs.com)

You now need to create a virtual server to host your captive portal. This is the URL that users are redirected to for authentication (ex. https://login.f5labs.com). The steps are as follows:

Create a certificate and private key to enable TLS
Create a client SSL profile that contains the certificate and private key
Create a virtual server:
Destination Address/Mask: enter the IP address that the captive portal URL resolves to
Service Port: enter 443
HTTP Profile (Client): select 'http'
SSL Profile (Client): select your client SSL profile
VLANs and Tunnels: enable for your client-facing VLAN
Access Profile: select your captive portal access profile

Head back into your SSL Orchestrator outbound transparent proxy topology configuration, and go to the Interception Rules page. Under the 'Access Profile' setting, select your new SSL Orchestrator access profile and re-deploy. That's it. Now open a browser and attempt to access a remote resource. Since this is using Basic authentication with LocalDB, you should get prompted for username and password. If you look closely, you will see that you've been redirected to your captive portal URL. 401 Basic authentication is not connection based, so APM stores the user session information by client IP. If you do not get prompted for authentication, it's likely you have an active session already. Navigate to Access -> Overview -> Active Sessions. If you see your LocalDB user account name listed there, delete it and try again (close and re-open the browser). And there you have it. In just a few steps you've configured your SSL Orchestrator outbound topology to perform user authentication, and along the way you have hopefully recognized the immense flexibility at your command. Thanks.
Using OpenID Connect to authenticate users

Hello all, I want to use OpenID Connect to authenticate my users before they gain access to one of my applications. I want to use my BIG-IP as the OpenID Provider (i.e. the entity that authenticates the users). My issue is the following: the OpenID Provider (my BIG-IP) never provides me with an ID Token. All I have is an "Access Token" and a "Refresh Token" but no "ID Token". Below is a description of my lab:

Resource owner: IP address 10.10.255.1
BIG-IP OpenID Provider: virtual server IP 10.10.255.221 (see below for the access policy)
*The agents are left with their default values.
Client: I use the OpenID Connect debugger (https://oidcdebugger.com/) to construct the request for the authorization code. Then I request the tokens using an HTML form.

I do the following for testing: I request the authorization code from the authorization server (i.e. my virtual server). For this I use the OpenID Connect debugger to construct the request for me. Here is the request that I send:

? client_id=e1111098ccff5f81859f9fc83eaa000c29267803efc0bb5b
& redirect_uri=https://oidcdebugger.com/debug
& scope=openid myscope
& response_type=code
& response_mode=form_post
& state=toto
& nonce=7r8saarltr

After sending this request I enter my credentials (in the logon agent) and click on authorize; the virtual server then correctly redirects me to: "https://oidcdebugger.com/debug code=cae85f6fd33c6b27f56f536b6fd9af2ca0fc78e69ac3d13799533c143b38b4ac&state=toto". I now have a correct authorization code that I can exchange for an "Access Token" AND an "ID Token". I then send a POST to get the access and ID tokens using the following HTML code (*note the presence of "openid" in the scope parameter). However, this is what I get from the authorization server (see in the comment): I have no "ID Token" ☹. Could you please help me configure my access policy so that it supports OpenID Connect and sends me an "ID Token"?
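For reference, the token request I am making is roughly equivalent to the following (a sketch; /f5-oauth2/v1/token is APM's default token endpoint URI, and the client secret is a placeholder):

# Exchange the authorization code for tokens; "openid" must be present in the
# original authorization request's scope for an ID token to be issued
curl -vk https://10.10.255.221/f5-oauth2/v1/token \
  -d 'grant_type=authorization_code' \
  -d 'code=cae85f6fd33c6b27f56f536b6fd9af2ca0fc78e69ac3d13799533c143b38b4ac' \
  -d 'client_id=e1111098ccff5f81859f9fc83eaa000c29267803efc0bb5b' \
  -d 'client_secret=REPLACE_WITH_CLIENT_SECRET' \
  -d 'redirect_uri=https://oidcdebugger.com/debug'

One thing I still need to verify: I have read that APM only issues an ID token when OpenID Connect support is explicitly enabled in the OAuth Profile (added in APM 13.1), so that may be the setting I am missing.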
SSL Orchestrator Advanced Use Cases: Inbound Authentication

Introduction

In your many adventures as an IT pro, you've undoubtedly come across the term "Swiss Army Knife" when describing the F5 BIG-IP. You don't have to be an expert at F5 products...or Swiss Army knives...to understand what this means. The term itself ubiquitously describes the idea of versatility, the ability to solve any problem with one of many tools included in a single shiny package. Now of course the naysayers will argue that this versatility breeds complexity. And while there's no argument that a BIG-IP can be complex, just take a look at your current network and security architectures and count how many different tools are used to solve a single set of challenges. The subtle reality is that there's really no such thing as "one-size-fits-all". Homogenizing technologies like the various public cloud offerings will give you "good enough" capabilities, but then you have to ask yourself: is my competitor's good enough the same as my good enough? Do we really have the same exact challenges? This is where versatility can be a critical advantage. Versatility, for example, can help to stop zero-day attacks before your security products have a chance to roll out their own solutions. Versatility can solve complex software issues that might otherwise require a multitude of expensive vendor tools. And versatility can very often create capabilities (application, authentication, security, etc.) where no formal vendor solution exists. In this article I'll be addressing a specific set of BIG-IP (versatility) characteristics: authentication and orchestration. And in doing so, I will also be showing you some powerful capabilities that you probably didn't know were there. Let's get started!

SSL Orchestrator Use Case: Inbound Authentication

The basic premise of this use case is that an SSL Orchestrator security policy is built on top of a set of "stateless" Access per-session and per-request policies. Access Policy Manager (APM) is the module you use on a BIG-IP to perform client authentication, and this requires "stateful" per-session and per-request policies. Therefore, as an application virtual server can only contain ONE access policy, the APM and SSL Orchestrator policies cannot coexist. In other words, you cannot add APM authentication to an SSL Orchestrator virtual server (or an SSL Orchestrator security policy to an APM virtual server). SSL Orchestrator technically allows for authentication in outbound (forward proxy) topologies, because the explicit or transparent forward proxy authentication policy does not sit on the same virtual server as the SSL Orchestrator security policy. What we're focusing on here, though, is inbound (reverse proxy) authentication, where there's generally just the one application virtual. There are fundamentally two ways to address this challenge:

Layering virtual servers - often referred to as "VIP targeting", or "VIP-target-VIP", this is where one (external) virtual server uses an iRule command to push traffic to another internal virtual server. This is the simple approach. You put your authentication policy and client-side SSL offload on the external virtual, and an iRule to do the VIP targeting. The targeted internal virtual contains the SSL Orchestrator security policies, the application server pool, and optionally server SSL if you need to re-encrypt.
Figure: apm-sslo-vip-target

Connector profile - a connector profile is a proxy element that was added to BIG-IP in 14.1, and that inserts itself in the client-side proxy flow after layer 5/6 (SSL decryption) and before layer 7 (HTTP). The connector is flow-based, so it can be assigned once at flow initiation. Essentially, the connector can "tee" traffic out of the original proxy flow, and then back. The connector itself points to an internal virtual server that can perform any number of functions before returning to the original proxy flow.

Figure: apm-sslo-connector

For those of you that have spent any time digging around in the guts of an SSL Orchestrator configuration, you may recognize the connector profile. The connector was specifically created for SSL Orchestrator to handle third-party security service insertion. This is the thing that tees decrypted traffic off to the security devices. The connector is fundamentally an LTM object, but with LTM alone you can attach only a single connector to a virtual server. In other words, you can attach a single security device to an LTM virtual server. SSL Orchestrator gives you dynamic assignment of multiple connectors (the service chain), a robust security policy (that attaches the flow to the service chain), dynamic decryption, and a guided configuration user interface to build all of this coolness. In the context of this use case, we'll attach a connector profile to the APM application virtual that points to an internal virtual, and that internal virtual will contain the SSL Orchestrator security policy. It is important to note here that the following solutions will minimally require:

LTM base + APM add-on
SSL Orchestrator

You will be using the SSL Orchestrator "Existing Application" topology option here. This creates the security policy, services, and service chain, without also creating the virtual servers and SSL. We'll leave application traffic management and decryption to the APM virtual server. Before I dig into each of these options, let's understand why you would select one over the other, as both have pros and cons. The VIP target solution is fundamentally easy. It's two virtual servers and a simple VIP target iRule. However, there's a tiny bit of overhead in a VIP target, as you engage the TCP proxy twice. And with multiple applications, a VIP target isn't really re-usable. You have to create a separate virtual server pair for each application. The frontend virtual contains the client-facing destination IP, VLAN, client SSL, and iRule. The internal virtual contains the SSL Orchestrator security policy, application pool, and optional server SSL. The connector solution is re-usable. You simply attach the same connector profile to each application virtual server. It's also going to be slightly more efficient than the VIP target. However, the connector configuration is more complex.

Inbound Authentication through VIP targeting

We will start with the easiest option. Before doing anything else, navigate to SSL Orchestrator and create an "Existing Application" topology. Here you'll define the security services, service chain(s), and a security policy. On completion you'll have two "stateless" access policies that will get attached to one of the virtual servers.

Create a client SSL profile - assuming you're building an HTTPS site, you'll need a client SSL profile to perform HTTPS decryption.
Optionally create a server SSL profile - if you're going to re-encrypt to the application servers, you can either create a custom server SSL profile, or just use the built-in 'serverssl' profile.

Create the application pool - this is the pool that sends traffic to the application servers.

Create the internal virtual server - this is the virtual server that will contain the SSL Orchestrator security policy and application pool:
Type: Standard
Source: 0.0.0.0/0
Destination: 0.0.0.0/0
Port: *
SSL Profile (Server): optional server SSL profile
VLAN and Tunnel Traffic: enabled on (empty)
Source Address Translation: SNAT as required
Address/Port Translation: enabled
Access Profile: SSL Orchestrator base policy (ssloDefault_accessProfile)
Per-Request Policy: select the SSL Orchestrator security policy
Default Pool: select the application pool

Create the VIP target iRule - the iRule will pass the flow from the external to the internal virtual server:

when ACCESS_ACL_ALLOWED {
    ## Enter the full name and path of the internal virtual server here
    virtual "/Common/internal-vip"
}

Create the authentication per-session access policy - this is a standard APM authentication per-session access policy, and can be anything you need.

Create the client-facing external virtual server - this is the application virtual server that the client will communicate with directly:
Type: Standard
Source: 0.0.0.0/0
Destination: enter the IP address clients will use to access the application
Port: enter the port for this application
SSL Profile (Client): select the client SSL profile
VLAN and Tunnel Traffic: enabled on the client-facing VLAN
Address/Port Translation: disabled
Access Profile: APM authentication policy
iRule: select the VIP target iRule

That's it. Client traffic will arrive via HTTPS to the external virtual server, get decrypted by the client SSL profile, and then pass to the authentication access profile. The client authenticates, and then the iRule passes the flow to the internal virtual server. The internal virtual server contains the SSL Orchestrator security policy, so decrypted traffic flows to the security services, returns to the BIG-IP, and then flows out to the application servers. A consolidated tmsh sketch of the same pair follows.
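For those who prefer the command line, here is a minimal tmsh sketch of the VIP-target pair. All object names, addresses, and the per-request policy path are hypothetical placeholders (your SSL Orchestrator per-request policy name will differ), and the VIP target iRule is assumed to already exist as shown above:

# Internal virtual: SSL Orchestrator access profile + per-request policy + app pool
tmsh create ltm virtual internal-vip destination 0.0.0.0:any mask any ip-protocol tcp profiles add { tcp serverssl ssloDefault_accessProfile } per-flow-request-access-policy ssloP_myapp.app/ssloP_myapp_per_req_policy pool app-pool source-address-translation { type automap }

# External virtual: client-facing IP/port, client SSL, APM auth profile, VIP-target iRule
tmsh create ltm virtual external-vip destination 10.1.10.100:443 ip-protocol tcp profiles add { tcp http app-clientssl my-auth-profile } rules { vip-target-rule } translate-address disabled translate-port disabled vlans add { client-vlan } vlans-enabled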
Inbound Authentication through a Connector

The connector profile is at the heart of SSL Orchestrator and how it drives traffic to security devices. But we're going to use a connector here in a novel way. We're going to create a connector that points to an internal virtual server, and that virtual server will contain the SSL Orchestrator security policy (see image above). It is effectively "tee-ing" the traffic out of the original proxy flow, across the security stack, and then back into the flow. The beauty here is that, aside from being slightly more efficient than a VIP target, the connector is re-usable across multiple APM application virtual servers. It's worth noting here that the traffic to the SSL Orchestrator security policy will have already been decrypted at the APM virtual, so the security policy should not contain rules specific to TLS handling (i.e. SSL bypass). But as we're talking about inbound traffic, it's very likely you won't be needing any of that complexity in the security policy anyway. The objective of the security policy here is to pass decrypted traffic to a service chain of security devices. As with the VIP target approach, first create an SSL Orchestrator "Existing Application" topology. This creates the security policy, services, and service chain, without also creating the virtual servers and SSL. Let's build the connector configuration, which includes three things:

Create a Service profile - a Service profile essentially defines the type of connector and how traffic is processed. Here we will be using the F5 Module service. Navigate to Local Traffic :: Profiles :: Other :: Service, and click Create. Give it a name and select F5 Module as the Type.

Create the internal virtual server - the internal virtual server will host the SSL Orchestrator security policy:
Type: Internal
HTTP Profile (client): http
Service Profile: select the service profile
Access Profile: select the SSL Orchestrator profile (ssloDefault_accessProfile)
Per-Request Policy: select the SSL Orchestrator security policy

Create the connector profile - navigate to Local Traffic :: Profiles :: Other :: Connector, and click Create. Simply select the internal virtual server here.

To make all of the above slightly easier, you can simply run the following commands in a BIG-IP shell:

tmsh create ltm profile service sslo-service type f5-module
tmsh create ltm virtual sslo-internal-vip internal profiles add { http sslo-service }
tmsh create ltm profile connector sslo-connector entry-virtual-server sslo-internal-vip

Note that prior to BIG-IP 16.0, the Access Profile selection won't be available in the UI for Internal virtual servers, but you can still add it via tmsh:

tmsh create ltm virtual sslo-internal-vip internal profiles add { http sslo-service ssloDefault_accessProfile } per-flow-request-access-policy [name of policy]

Example:

tmsh create ltm virtual sslo-internal-vip internal profiles add { http sslo-service ssloDefault_accessProfile } per-flow-request-access-policy ssloP_sslotest.app/ssloP_sslotest_per_req_policy

Now just create your APM application virtual server as usual, including client-facing destination IP/port, VLAN, client (and optional server) SSL, APM authentication policy, SNAT (as required), and the application pool. On top of that, attach the connector profile in the field labeled Connector Profile. For each additional APM application virtual server, you can re-use this same connector profile. Client traffic will arrive via HTTPS to the APM virtual server, get decrypted by the client SSL profile, pass through the connector, and then to the authentication profile. Note that there's one other subtle difference between these methods that I didn't touch on earlier, and that's the order of events. In the VIP target option, authentication is attempted and completed before any traffic passes to the SSL Orchestrator security policy, so the security devices only see application traffic flows. In the connector option, authentication is engaged after the connector, so the security policy and devices see the entire authentication process. In this case, they will see the APM /my.policy redirects and the APM session cookies.
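To round this out, here is a minimal sketch of the APM application virtual with the connector attached. All names and the address are hypothetical placeholders, and this assumes the connector profile can be added to the virtual server's profile list via tmsh; if your version rejects that, attach it in the GUI's Connector Profile field instead:

# APM application virtual: client SSL, APM auth profile, and the re-usable connector
tmsh create ltm virtual apm-app-vip destination 10.1.10.120:443 ip-protocol tcp profiles add { tcp http app-clientssl my-apm-profile sslo-connector } pool app-pool source-address-translation { type automap } vlans add { client-vlan } vlans-enabled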
Summary

The connector profile presents a lot of really interesting capabilities, even beyond what we've seen here. For example, anywhere that you may have some mutual exclusivity, like APM and ASM policies on a virtual server, you could potentially use the connector attached to an APM virtual to pass traffic to a WAF policy. The connector basically gives you a single "tee" for free in LTM. For multiple connectors, dynamic connector assignment, dynamic decryption, and a robust policy to handle that assignment, you'd use the SSL Orchestrator. In either case, whether using the VIP target or connector approach to inbound authentication with SSL Orchestrator, hopefully you can see some of the immense versatility at your command.
Radius Authentication role not working

Hi Guys, we set up remote authentication using this article: https://support.f5.com/csp/article/K14324#3. But when we log in using accounts from the RADIUS server, the BIG-IP gives the user the admin role, even though the account should be read-only. Are we missing some configuration?
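A sketch of the two settings that usually matter in this situation, based on the remote-role pattern from the referenced article (the group name and attribute value are hypothetical placeholders, and the RADIUS server must actually return the matching attribute):

# Make users who match no remote-role entry fall back to no access,
# instead of inheriting a permissive default role
tmsh modify auth remote-user default-role no-access

# Map a RADIUS-returned attribute value to a read-only (guest) role
tmsh modify auth remote-role role-info add { readonly-users { attribute F5-LTM-User-Info-1=readonly line-order 1000 role guest user-partition All } }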