authentication
Yubikey Authentication Modes and Azure AD integration via the APM
Introduction

The Yubikey (https://www.yubico.com/) supports three major functions: authentication, signing, and encryption. For authentication, it supports the following mechanisms:

- OTP (one-time password): Yubikey OTP, OATH TOTP (time-based OTP), and OATH HOTP (HMAC-based OTP)
- FIDO U2F (Universal 2nd Factor)
- FIDO2
- PIV (smartcard)

Each of these protocols has its own set of requirements and is therefore not universally supported everywhere.

OTP

OTP is probably the simplest, with a one-time password being used, typically as the 2nd factor. However, it is also the weakest, as it does not mitigate MITM attacks. For example, a fake site impersonating a legitimate site can trick the user into entering the OTP and then forward it to the real site. All Yubikeys by default have manufacturer-assigned secrets registered with Yubico's own validation servers. Yubico provides a tool that allows you to re-program the key, giving it a different secret. However, the new secret has to be uploaded to Yubico's validation servers (https://upload.yubico.com/), otherwise OTP will stop working. Yubikey OTP integrates with a large number of services (e.g., Gmail, LastPass). When a service receives an OTP, it reaches out to Yubico for validation. In the case of Okta, the secrets can be uploaded directly into Okta and validation happens within Okta.

FIDO U2F

FIDO U2F, or U2F for short, mitigates MITM. This method requires the user to register the authenticator (e.g., Yubikey) with the application (e.g., Gmail) first, during which a key pair is generated by the authenticator and the public key is sent to and stored on the application. Once the registration is complete, the user can then use the authenticator as the 2nd factor. In the case of Gmail, once the user's credentials are verified, the user touches the Yubikey for the 2nd factor. No code is entered by the user.

FIDO2

FIDO2 enables passwordless authentication. Once the Yubikey is registered with an application (e.g., Azure Portal) for FIDO2 authentication, the user touches the Yubikey, optionally provides a PIN code for the key, and logs straight in; no username is entered. FIDO2 is an evolution of U2F and depends on WebAuthn (a client API implemented within the browser) and CTAP2 (an authenticator API that enables FIDO2-capable devices to interface with external/roaming authenticators over Bluetooth, USB, or Near Field Communication (NFC)). Per the FIDO Alliance (https://fidoalliance.org/fido2/fido2-web-authentication-webauthn/#:~:text=WebAuthn%20is%20currently%20supported%20in,Windows%2010%20and%20Android%20platforms), browser support for U2F and WebAuthn varies by browser and platform.

PIV

The YubiKey can also present itself as a PIV smartcard that contains a client certificate.

APM Integration

There are a number of articles on DevCentral that cover programming the APM to receive the Yubikey OTP and then send it over to Yubico's validation servers via a sideband connection for OTP verification. My particular use case is to leverage an IDaaS (e.g., Azure AD) as the IdP and use the APM as the SP. My choice of integration is via OIDC, but SAML is an equally valid option. Since authentication is offloaded to Azure AD, both the OTP and FIDO2 passwordless authentication methods are now available. Azure AD does not support U2F.
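As a side note on the OTP validation flow mentioned above, a service that accepts Yubikey OTPs typically calls Yubico's public verification API (YubiCloud). The following is a minimal, hedged sketch of that call using curl; the client ID, OTP string, and nonce are placeholders you would replace with your own values, and API-key response-signature checking is omitted for brevity:

    # Hypothetical values: "12345" is a client ID obtained from Yubico, the otp
    # value is whatever the key emitted on touch, and the nonce is a random
    # 16-40 character alphanumeric string generated per request.
    curl -sG "https://api.yubico.com/wsapi/2.0/verify" \
      --data-urlencode "id=12345" \
      --data-urlencode "otp=ccccccexampleotpvaluefromthekeytouch" \
      --data-urlencode "nonce=$(openssl rand -hex 16)"
    # The response is plain-text key=value pairs; status=OK means the OTP
    # validated, while status=REPLAYED_OTP means it was already used.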
Azure AD and User Configuration

OTP

If the Yubikey is used for OTP, Azure AD needs to have MFA enabled, and a 'Conditional Access' policy is created to 'Require multi-factor authentication' for your selected apps. This process is documented by Microsoft (https://docs.microsoft.com/en-us/azure/active-directory/authentication/tutorial-enable-azure-mfa). The user must also add an authenticator app via self-registration (https://mysignins.microsoft.com/); be sure to click on the highlighted option, which allows you to enrol the 'Yubico Authenticator' (https://www.yubico.com/products/services-software/download/yubico-authenticator/). The Yubico Authenticator works with the Yubikey to generate the OTP. Yubico argues that it is more secure because, unlike a soft authenticator, the secrets are not saved within the authenticator itself, but rather in a secure element within the Yubikey. Note the 'Touch your Yubikey' prompt, which is needed before an OTP is generated. The OTP method does not impose special requirements on the browser, which means it works in any browser, as well as in the APM Edge client, which leverages certain browser functions.

FIDO2

With FIDO2-based passwordless authentication, the 'FIDO2 Security Key' option within Azure AD (e.g., under Security) has to be enabled first. Again, the user goes through self-registration (https://mysignins.microsoft.com/) before the Yubikey becomes available. I have tried different browsers on the Mac; the only browsers that work are Chrome and Edge. Mileage may vary on Windows. If FIDO2 works, the highlighted option will appear. In the case of Azure AD, a PIN is required (configured during the self-registration process). Once it's entered, the user logs in after touching the Yubikey. At this time, the latest (v7210) APM Edge client's embedded browser for login does not work with FIDO2, likely due to WebAuthn not being supported. If the user uses a supported browser, passwordless authentication should work, after which a webtop is presented.
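As a closing practical note, if you want to confirm which of the interfaces discussed in this article (OTP, FIDO2, PIV, OATH) are actually enabled on a given key, the YubiKey Manager CLI can report that. A minimal sketch, assuming the ykman tool is installed on your workstation; exact output varies by key model and firmware:

    # Show firmware version and which applications (OTP, FIDO2, PIV, OATH, etc.)
    # are enabled over USB and NFC for the attached YubiKey.
    ykman info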
SSL Orchestrator Advanced Use Cases: Inbound Authentication

Introduction

In your many adventures as an IT pro, you've undoubtedly come across the term "Swiss Army Knife" when describing the F5 BIG-IP. You don't have to be an expert at F5 products...or Swiss Army knives...to understand what this means. The term itself ubiquitously describes the idea of versatility, the ability to solve any problem with one of many tools included in a single shiny package. Now of course the naysayers will argue that this versatility breeds complexity. And while there's no argument that a BIG-IP can be complex, just take a look at your current network and security architectures and count how many different tools are used to solve a single set of challenges. The subtle reality is that there's really no such thing as "one-size-fits-all". Homogenizing technologies like the various public cloud offerings will give you "good enough" capabilities, but then you have to ask yourself: is my competitor's good enough the same as my good enough? Do we really have the same exact challenges? This is where versatility can be a critical advantage. Versatility, for example, can help to stop zero-day attacks before your security products have a chance to roll out their own solutions. Versatility can solve complex software issues that might otherwise require a multitude of expensive vendor tools. And versatility can very often create capabilities (application, authentication, security, etc.) where no formal vendor solution exists. In this article I'll be addressing a specific set of BIG-IP (versatility) characteristics: authentication and orchestration. And in doing so, I will also be showing you some powerful capabilities that you probably didn't know were there. Let's get started!

SSL Orchestrator Use Case: Inbound Authentication

The basic premise of this use case is that an SSL Orchestrator security policy is built on top of a set of "stateless" Access per-session and per-request policies. Access Policy Manager (APM) is the module you use on a BIG-IP to perform client authentication, and this requires "stateful" per-session and per-request policies. Therefore, as an application virtual server can only contain ONE access policy, the APM and SSL Orchestrator policies cannot coexist. In other words, you cannot add APM authentication to an SSL Orchestrator virtual server (or an SSL Orchestrator security policy to an APM virtual server). SSL Orchestrator technically allows for authentication in outbound (forward proxy) topologies, because the explicit or transparent forward proxy authentication policy does not sit on the same virtual server as the SSL Orchestrator security policy. What we're focusing on here, though, is inbound (reverse proxy) authentication, where there's generally just the one application virtual. There are fundamentally two ways to address this challenge:

Layering virtual servers - often referred to as "VIP targeting", or "VIP-target-VIP", this is where one (external) virtual server uses an iRule command to push traffic to another internal virtual server. This is the simple approach. You put your authentication policy and client-side SSL offload on the external virtual, and an iRule to do the VIP targeting. The targeted internal virtual contains the SSL Orchestrator security policies, the application server pool, and optionally server SSL if you need to re-encrypt.
Figure: apm-sslo-vip-target

Connector profile - a connector profile is a proxy element that was added to BIG-IP in 14.1, and that inserts itself in the client-side proxy flow after layer 5/6 (SSL decryption) and before layer 7 (HTTP). The connector is flow-based, so it can be assigned once at flow initiation. Essentially, the connector can "tee" traffic out of the original proxy flow, and then back. The connector itself points to an internal virtual server that can perform any number of functions before returning to the original proxy flow.

Figure: apm-sslo-connector

For those of you that have spent any time digging around in the guts of an SSL Orchestrator configuration, you may recognize the connector profile. The connector was specifically created for SSL Orchestrator to handle third-party security service insertion. This is the thing that tees decrypted traffic off to the security devices. The connector is fundamentally an LTM object, but with LTM you can attach a single connector to a virtual server. In other words, you can attach a single security device to an LTM virtual server. SSL Orchestrator gives you dynamic assignment of multiple connectors (the service chain), a robust security policy (that attaches the flow to the service chain), dynamic decryption, and a guided configuration user interface to build all of this coolness. In the context of this use case, we'll attach a connector profile to the APM application virtual that points to an internal virtual, and that internal virtual will contain the SSL Orchestrator security policy.

It is important to note here that the following solutions will minimally require:

- LTM base + APM add-on
- SSL Orchestrator

You will be using the SSL Orchestrator "Existing Application" topology option here. This creates the security policy, services and service chain, without also creating the virtual servers and SSL. We'll leave application traffic management and decryption to the APM virtual server.

Before I dig into each of these options, let's understand why you would select one over the other, as both have pros and cons. The VIP target solution is fundamentally easy. It's two virtual servers and a simple VIP target iRule. However, there's a tiny bit of overhead in a VIP target as you engage the TCP proxy twice. And with multiple applications, a VIP target isn't really re-usable. You have to create a separate virtual server pair for each application. The frontend virtual contains the client-facing destination IP, VLAN, client SSL, and iRule. The internal virtual contains the SSL Orchestrator security policy, application pool, and optional server SSL. The connector solution is re-usable. You simply attach the same connector profile to each application virtual server. It's also going to be slightly more efficient than the VIP target. However, the connector configuration is going to be more complex.

Inbound Authentication through VIP targeting

We will start with the easiest option first. Before doing anything else, navigate to SSL Orchestrator and create an "Existing Application" topology. Here you'll define the security services, service chain(s), and a security policy. On completion you'll have two "stateless" access policies that will get attached to one of the virtual servers.

Create a client SSL profile - assuming you're building an HTTPS site, you'll need a client SSL profile to perform HTTPS decryption.
Optionally create a server SSL profile - if you're going to re-encrypt to the application servers, you can either create a custom server SSL profile, or just use the built-in "serverssl" profile.

Create the application pool - this is the pool that sends traffic to the application servers.

Create the internal virtual server - this is the virtual server that will contain the SSL Orchestrator security policy and application pool.

- Type: Standard
- Source: 0.0.0.0/0
- Destination: 0.0.0.0/0
- Port: *
- SSL Profile (Server): optional server SSL profile
- VLAN and Tunnel Traffic: enabled on (empty)
- Source Address Translation: SNAT as required
- Address/Port Translation: enabled
- Access Profile: SSL Orchestrator base policy (ssloDefault_accessProfile)
- Per-Request Policy: select the SSL Orchestrator security policy
- Default Pool: select the application pool

Create the VIP target iRule - the iRule will pass the flow from the external to the internal virtual server:

    when ACCESS_ACL_ALLOWED {
        ## Enter the full name and path of the internal virtual server here
        virtual "/Common/internal-vip"
    }

Create the authentication per-session access policy - this is a standard APM authentication per-session access policy, and it can be anything you need.

Create the client-facing external virtual server - this is the application virtual server that the client will communicate with directly.

- Type: Standard
- Source: 0.0.0.0/0
- Destination: enter the IP address clients will use to access the application
- Port: enter the port for this application
- SSL Profile (Client): select the client SSL profile
- VLAN and Tunnel Traffic: enabled on the client-facing VLAN
- Address/Port Translation: disabled
- Access Profile: APM authentication policy
- iRule: select the VIP target iRule

That's it. Client traffic will arrive via HTTPS to the external virtual server, get decrypted by the client SSL profile, and then pass to the authentication access profile. The client authenticates, and then the iRule passes the flow to the internal virtual server. The internal virtual server contains the SSL Orchestrator security policy, so decrypted traffic flows to the security services, returns to the BIG-IP, and then flows out to the application servers.
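For reference, the same external/internal pair described above can be sketched in tmsh. This is a minimal, hedged example only; the virtual server names, IP addresses, VLAN, profile names, and iRule name are assumptions, and the SSL Orchestrator access and per-request policy names will match whatever your "Existing Application" topology actually created:

    # Application pool (assumed member address)
    tmsh create ltm pool app_pool members add { 10.10.10.10:443 }

    # Internal virtual: SSL Orchestrator policies plus the pool, wildcard listener
    tmsh create ltm virtual internal-vip destination 0.0.0.0:any \
        profiles add { tcp ssloDefault_accessProfile } \
        per-flow-request-access-policy ssloP_myapp.app/ssloP_myapp_per_req_policy \
        translate-address enabled translate-port enabled pool app_pool

    # External virtual: client SSL, APM authentication profile, and the VIP target iRule
    tmsh create ltm virtual external-vip destination 10.10.10.100:443 \
        profiles add { tcp http app-clientssl my-apm-auth-profile } \
        rules { vip_target_rule } translate-address disabled \
        vlans-enabled vlans add { client-vlan }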
Inbound Authentication through a Connector

The connector profile is at the heart of SSL Orchestrator and how it drives traffic to security devices. But we're going to use a connector here in a novel way. We're going to create a connector that points to an internal virtual server, and that virtual server will contain the SSL Orchestrator security policy (see image above). It is effectively "tee-ing" the traffic out of the original proxy flow, across the security stack, and then back into the flow. The beauty here is that, aside from being slightly more efficient than a VIP target, the connector is re-usable across multiple APM application virtual servers. It's worth noting here that the traffic to the SSL Orchestrator security policy will have already been decrypted at the APM virtual, so the security policy should not contain rules specific to TLS handling (i.e. SSL bypass). But as we're talking about inbound traffic, it's very likely you won't be needing any of that complexity in the security policy anyway. The objective of the security policy here is to pass decrypted traffic to a service chain of security devices.

As with the VIP target approach, first create an SSL Orchestrator "Existing Application" topology. This creates the security policy, services and service chain, without also creating the virtual servers and SSL. Let's build the connector configuration, which includes three things:

Create a Service profile - a Service profile essentially defines the type of connector, and how traffic is processed. Here we will be using the F5 Module service. Navigate to Local Traffic :: Profiles :: Other :: Service, and click Create. Give it a name and select F5 Module as the Type.

Create the internal virtual server - the internal virtual server will host the SSL Orchestrator security policy:

- Type: Internal
- HTTP Profile (client): http
- Service Profile: select the service profile
- Access Profile: select the SSL Orchestrator profile (ssloDefault_accessProfile)
- Per-Request Policy: select the SSL Orchestrator security policy

Create the connector profile - navigate to Local Traffic :: Profiles :: Other :: Connector, and click Create. Simply select the internal virtual server here.

To make all of the above slightly easier, you can simply run the following commands in a BIG-IP shell:

    tmsh create ltm profile service sslo-service type f5-module
    tmsh create ltm virtual sslo-internal-vip internal profiles add { http sslo-service }
    tmsh create ltm profile connector sslo-connector entry-virtual-server sslo-internal-vip

Note that prior to BIG-IP 16.0, the Access Profile selection won't be available in the UI for Internal virtual servers, but you can still add it via TMSH:

    tmsh create ltm virtual sslo-internal-vip internal profiles add { http sslo-service ssloDefault_accessProfile } per-flow-request-access-policy [name of policy]

Example:

    tmsh create ltm virtual sslo-internal-vip internal profiles add { http sslo-service ssloDefault_accessProfile } per-flow-request-access-policy ssloP_sslotest.app/ssloP_sslotest_per_req_policy

Now just create your APM application virtual server as usual, including client-facing destination IP/port, VLAN, client (and optional server) SSL, APM authentication policy, SNAT (as required), and the application pool. On top of that, attach the connector profile in the field labeled Connector Profile. For each additional APM application virtual server, you can re-use this same connector profile. Client traffic will arrive via HTTPS to the APM virtual server, get decrypted by the client SSL profile, pass through the connector, and then to the authentication profile.

Note that there's one other subtle difference between these methods that I didn't touch on earlier, and that's the order of events. In the VIP target option, authentication is attempted and completed before any traffic passes to the SSL Orchestrator security policy, so the security devices only see application traffic flows. In the connector option, authentication is engaged after the connector, so the security policy and devices see the entire authentication process. In this case, they will see the APM /my.policy redirects and the APM session cookies.

Summary

The connector profile presents a lot of really interesting capabilities, even beyond what we've seen here. For example, anywhere that you may have some mutual exclusivity, like APM and ASM policies on a virtual server, you could potentially use the connector attached to an APM virtual to pass traffic to a WAF policy. The connector basically gives you a single "tee" for free in LTM. For multiple connectors, dynamic connector assignment, dynamic decryption, and a robust policy to handle that assignment, you'd use the SSL Orchestrator.
In either case, whether using the VIP target or connector approach to inbound authentication with SSL Orchestrator, hopefully you can see some of the immense versatility at your command.
Azure Active Directory and BIG-IP APM Integration

Introduction

Security is one of the primary considerations for organizations in determining whether or not to migrate applications to the public cloud. The problem for organizations with applications in the cloud, in a data center, managed, or as a service, is to create a cost-effective hybrid architecture that produces secure application access and a great experience: one that allows users to access apps easily, have consistent user experiences, and enjoy easy access with single sign-on (SSO) tied to a central identity and authentication strategy. Some applications are not favorable to modernization. There are applications that are not suited for, or incapable of, cloud migration. Many on-premises apps do not support modern authentication and authorization, including standards and protocols such as SAML, OAuth, or OpenID Connect (OIDC). An organization may not have the staff talent or time to perform application modernization for their on-premises apps. With thousands of apps in use daily, hosted in all or any combination of these locations, how can organizations ensure secure, appropriate user access without requiring users to log in multiple times? In addition, how can organizations terminate user access to each application without having to access each app individually?

By deploying Microsoft Azure Active Directory, Microsoft's comprehensive cloud-based identity platform, along with F5's trusted application access solution, Access Policy Manager (APM), organizations are able to federate user identity, authentication, and authorization and bridge the identity gap between cloud-based (IaaS), SaaS, and on-premises applications.

Figure 1: Secure hybrid application access

This guide discusses the following use cases:

- Users use single sign-on to access applications that require Kerberos-based authentication.
- Users use single sign-on to access applications that require header-based authentication.

Microsoft Azure Active Directory and F5 BIG-IP APM Design

For organizations with high security demands and low risk tolerance, keeping all aspects of user authentication on premises is required. The Microsoft Azure Active Directory and F5 BIG-IP APM solution integrates directly into Azure AD and is configured to work cooperatively with an existing Kerberos-based, header-based, or other authentication method. The solution has these components:

- BIG-IP Access Policy Manager (APM)
- Microsoft Domain Controller / Active Directory (AD)
- Microsoft Azure Active Directory (AAD)
- Application (Kerberos-/header-based authentication)

Figure 2: APM bridge SAML to Kerberos/header authentication components

Figure 3: APM bridge SAML to Kerberos authentication process flow

Deploying Azure Active Directory and BIG-IP APM integration

The joint Microsoft and F5 solution allows legacy applications incapable of supporting modern authentication and authorization to interoperate with Azure Active Directory. Even if an app doesn't support SAML and can only support header- or Kerberos-based authentication, it can still be enabled with single sign-on (SSO) and support multi-factor authentication (MFA) through the F5 APM and Azure Active Directory combination. Azure Active Directory as an IDaaS delivers a trusted root of identity to APM, creating a bridge between modern and legacy applications, delivering SSO and securing the app with MFA.

Adding F5 from the gallery

To configure the integration of BIG-IP APM into Azure AD, you need to add F5 from the gallery to your list of managed SaaS apps.
1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
2. On the left navigation pane, select the Azure Active Directory service.
3. Navigate to Enterprise Applications and then select All Applications.
4. To add a new application, select New application.
5. In the Add from the gallery section, type F5 in the search box.
6. Select F5 from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.

Configuring Microsoft Azure Active Directory

Configure and test Azure AD SSO with F5 using a test user called A.Vandelay. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in F5. To configure and test Azure AD SSO with F5, complete the following building blocks:

- Configure Azure AD SSO - to enable your users to use this feature.
- Create an Azure AD test user - to test Azure AD single sign-on with A.Vandelay.
- Assign the Azure AD test user - to enable A.Vandelay to use Azure AD single sign-on.

Configure Azure AD SSO

Follow these steps to enable Azure AD SSO in the Azure portal.

1. In the Azure portal, on the F5 application integration page, find the Manage section and select Single sign-on.
2. On the Select a single sign-on method page, select SAML.
3. On the Set up single sign-on with SAML page, click the edit/pen icon for Basic SAML Configuration to edit the settings.
4. In the Basic SAML Configuration section, if you wish to configure the application in IDP-initiated mode, enter the values for the following fields:
   - In the Identifier text box, type a URL using the following pattern: https://<YourCustomFQDN>.f5.com/
   - In the Reply URL text box, type a URL using the following pattern: https://<YourCustomFQDN>.f5.com/
5. Click Set additional URLs and perform the following step if you wish to configure the application in SP-initiated mode:
   - In the Sign-on URL text box, type a URL using the following pattern: https://<YourCustomFQDN>.f5.com/
   Note: These values are used only for illustration. Replace them with the actual Identifier, Reply URL, and Sign-on URL. Refer to the patterns shown in the Basic SAML Configuration section in the Azure portal.
6. On the Set up single sign-on with SAML page, in the SAML Signing Certificate section, find Federation Metadata XML and select Download to download the certificate and save it on your computer.
7. On the Set up F5 section, copy the appropriate URL(s) based on your requirement.

Create an Azure AD test user

In this section, you'll create a test user in the Azure portal called A.Vandelay.

1. From the left pane in the Azure portal, select Azure Active Directory, select Users, and then select All users.
2. Select New user at the top of the screen.
3. In the User properties, follow these steps:
   - In the Name field, enter A.Vandelay.
   - In the User name field, enter the username@companydomain.extension. For example, A.Vandelay@contoso.com.
   - Select the Show password check box, and then write down the value that's displayed in the Password box.
4. Click Create.
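If you prefer the command line, the same test user can be created with the Azure CLI. This is a hedged sketch only; the display name, UPN, and password below are placeholder values, and it assumes you have already run az login against the correct tenant:

    # Create the test user (values are examples; replace with your own)
    az ad user create \
      --display-name "A.Vandelay" \
      --user-principal-name "A.Vandelay@contoso.com" \
      --password "ChangeMe-Str0ngPassw0rd"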
Assign the Azure AD test user

In this section, you'll enable A.Vandelay to use Azure single sign-on by granting access to F5.

1. In the Azure portal, select Enterprise Applications, and then select All applications.
2. In the applications list, select F5.
3. In the app's overview page, find the Manage section and select Users and groups.
4. Select Add user, then select Users and groups in the Add Assignment dialog.
5. In the Users and groups dialog, select A.Vandelay from the Users list, then click the Select button at the bottom of the screen.
6. If you're expecting any role value in the SAML assertion, in the Select Role dialog, select the appropriate role for the user from the list and then click the Select button at the bottom of the screen.
7. In the Add Assignment dialog, click the Assign button.

Configure F5 BIG-IP APM

Configure your on-premises applications based on the authentication type.

Configure F5 single sign-on for Kerberos-based application

1. Open your browser and access BIG-IP. You need to import the Metadata Certificate into the F5 (Kerberos), which will be used later in the setup process. Go to System > Certificate Management > Traffic Certificate Management >> SSL Certificate List. Click on Import in the right-hand corner.
2. You also need an SSL certificate for the hostname (Kerbapp.superdemo.live); in this example we used a wildcard certificate.
3. On the F5 BIG-IP, click Access > Guided Configuration > Federation > SAML Service Provider.
4. Specify the Entity ID (same as what you configured on the Azure AD application configuration).
5. Create a new Virtual Server and specify the Destination Address.
6. Choose the wildcard certificate (or the cert you uploaded for the application) that we uploaded earlier and the associated private key.
7. Upload the configuration Metadata and specify a new Name for the SAML IDP Connector; you will also need to specify the Federation Certificate that was uploaded earlier.
8. Create a new Backend App Pool and specify the IP address(es) of the backend application servers.
9. Under Single Sign-on Settings, choose Kerberos and select Advanced Settings. The request needs to be created in user@domain.suffix format. Under the username source, specify session.saml.last.attr.name.http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname. Refer to the Appendix for the complete list of variables and values. Account Name is the F5 delegation account created earlier (check the F5 documentation).
10. Under Endpoint Checks Properties, click Save & Next.
11. Under Timeout Settings, leave the default settings and click Save & Next.
12. Review the Summary and click on Deploy.

Configure F5 single sign-on for Header-based application

1. Open your browser and access BIG-IP. You need to import the Metadata Certificate into the F5 (Header Based), which will be used later in the setup process. Go to System > Certificate Management > Traffic Certificate Management >> SSL Certificate List. Click on Import in the right-hand corner.
2. You also need an SSL certificate for the hostname (headerapp.superdemo.live); in this example we used a wildcard certificate.
3. On the F5 (Header Based) BIG-IP, click Access > Guided Configuration > Federation > SAML Service Provider.
4. Specify the Entity ID (same as what you configured on the Azure AD application configuration).
5. Create a new Virtual Server and specify the Destination Address; the Redirect Port is optional.
6. Choose the wildcard certificate (or the cert you uploaded for the application) that we uploaded earlier and the associated private key.
7. Upload the configuration Metadata and specify a new Name for the SAML IDP Connector; you will also need to specify the Federation Certificate that was uploaded earlier.
8. Create a new Backend App Pool and specify the IP address(es) of the backend application servers.
9. Under Single Sign-on, choose HTTP header-based. You can add other headers based on your application. See the Appendix for the list of SAML session variables.
10. Under Endpoint Checks Properties, click Save & Next.
11. Under Timeout Settings, leave the default settings and click Save & Next.
12. Review the Summary and click on Deploy.
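When mapping SAML attributes to the Kerberos username source or to HTTP headers as described above, it helps to see exactly which session variables a test logon actually produced. A hedged sketch from the BIG-IP bash shell follows; the grep pattern is just an example, and the variable names you see will depend on the attributes your IdP sends:

    # List all APM session variables for active sessions and filter for SAML attributes
    sessiondump --allkeys | grep -i saml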
Resources

- BIG-IP Knowledge Center
- BIG-IP APM Knowledge Center
- Configuring Single Sign-On with Access Policy Manager

Summary

By centralizing access to all your applications, you can manage them more securely. Through the F5 BIG-IP APM and Azure AD integration, you can centralize and use single sign-on (SSO) and multi-factor authentication for on-premises applications.

Validated Products and Versions

Product: BIG-IP APM
Version: 15.0
SSL Orchestrator Advanced Use Cases: Forward Proxy Authentication

Introduction

F5 BIG-IP is synonymous with "flexibility". You likely have few other devices in your architecture that provide the breadth of capabilities that come native with the BIG-IP platform. And for each and every BIG-IP product module, the opportunities to expand functionality are almost limitless. In this article series we examine the flexibility options of the F5 SSL Orchestrator in a set of "advanced" use cases.

If you haven't noticed, the world has been steadily moving toward encrypted communications. Everything from web, email, voice, video, chat, and IoT is now wrapped in TLS, and that's a good thing. The problem is, malware - that thing that creates havoc in your organization, that exfiltrates personnel records to the Dark Web - isn't stopped by encryption. TLS 1.3 and multi-factor authentication don't eradicate malware. The only reasonable way to defend against it is to catch it in the act, and an entire industry of security products is designed for just this task. But ironically, encryption makes this hard. You can't protect against what you can't see. F5 SSL Orchestrator simplifies traffic decryption and malware inspection, and dynamically orchestrates traffic to your security stack. But it does much more than that. SSL Orchestrator is built on top of F5's BIG-IP platform, and as stated earlier, abounds with flexibility.

SSL Orchestrator Use Case: Forward Proxy Authentication

Arguably, authentication is an easy one for BIG-IP, but I'm going to ease into this series slowly. There's no better place to start than with an examination of some of the many ways you can configure an F5 BIG-IP to authenticate user traffic.

Forward Proxy Overview

Forward proxy authentication isn't exclusive to SSL Orchestrator, but it is a vital component if you need to authenticate inspected outbound client traffic to the Internet. In this article, we are simply going to explore the act of authenticating in a forward proxy, in general - how it works, and how it's applied. For detailed instructions on setting up Kerberos and NTLM forward proxy authentication, please see the SSL Orchestrator deployment guide.

Let's start with a general characterization of "forward proxy" to level set. The semantics of forward and reverse proxy can change depending on your environment, but generally when we talk about a forward proxy, we're talking about something that controls outbound (usually Internet-bound) traffic. This is typically internal organizational traffic to the Internet. It is an important distinction, because it also implicates the way we handle encryption. In a forward proxy, clients are accessing remote Internet resources (ex. https://www.f5.com). For TLS to work, the client expects to receive a valid certificate from that remote resource, though the inspection device in the middle does not own that certificate and private key. So for decryption to work in an "SSL forward proxy", the middle device must re-issue ("forge") the remote server's certificate to the client using a locally-trusted CA certificate (and key). This is essentially how every SSL visibility product works for outbound traffic, and it is a native function of the SSL Orchestrator.
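A quick way to confirm from the client's point of view that this re-issuing/forging is actually happening, once traffic is flowing through the proxy (explicit or transparent, discussed next), is to look at the issuer of the certificate handed back. A hedged sketch with curl; the proxy address is a placeholder and this assumes no proxy authentication is enforced yet:

    # The verbose TLS output includes the server certificate details; when SSL
    # forward proxy decryption is working, the issuer shown should be your
    # local (internal) CA rather than the site's public CA.
    curl -vk --proxy 10.10.10.150:3128 https://www.example.com 2>&1 | grep -i issuer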
Now, for any of this to work, traffic must of course be directed through the forward proxy, and there are generally two ways that this is accomplished:

- Explicit proxy - where the browser is configured to access the Internet through a proxy server. This can also be accomplished through auto-configuration scripts (PAC and WPAD).
- Transparent proxy - where the client is blissfully unaware of the proxy and simply routes to the Internet through a local gateway.

It should be noted here that SSL visibility products that deploy at layer 2 are effectively limited to one traffic flow option, and lack the level of control that a true proxy solution provides, including authentication. Also note, BIG-IP forward proxy authentication requires the Access Policy Manager (APM) module licensed and provisioned.

Explicit Forward Proxy Authentication

The option you choose for outbound traffic flow will have an impact on how you authenticate that traffic, as each works a bit differently. Again, we're not getting into the details of Kerberos or NTLM here. The goal is to derive an essential understanding of the forward proxy authentication mechanisms, how they work, how traffic flows through them, and ultimately how to build them and apply them to your SSL Orchestrator configurations. And as each is different, let us start with explicit proxy.

Explicit forward proxy authentication for HTTP traffic is governed by a "407" authentication model. In this model, the user agent (i.e. a browser) authenticates to the proxy server before passing any user request traffic to the remote server. This is an important distinction from other user-based authentication mechanisms, as the browser is generally limited in the types of authentication it can perform here (on the user's behalf). In fact most modern browsers, with some exceptions, are limited to the set of "Windows Integrated" methods (NTLM, Kerberos, and Basic). Explicit forward proxy authentication will look something like this:

Figure: 407-based HTTPS and HTTP authentication

The upside here is that the Windows Integrated methods are usually "transparent". That is, silently handled by the browser and invisible to the user. If you're logged into a domain-joined workstation with a domain user account, the browser will use this access to generate an NTLM token or fetch a Kerberos ticket on your behalf.

If you build an SSL Orchestrator explicit forward proxy topology, you may notice it builds two virtual servers. One of these is the explicit proxy itself, listening on the defined explicit proxy IP and port. The other is a TCP tunnel VIP. All client traffic arrives at the explicit proxy VIP, then wraps around through the TCP tunnel VIP. The SSL Orchestrator security policy, SSL configurations, and service chains are all connected to the TCP tunnel VIP.

Figure: SSL Orchestrator explicit proxy VIP configuration

As explicit proxy authentication is happening at the proxy connection layer, to do authentication you simply need to attach your authentication policy to the explicit proxy VIP. This is actually selected directly inside the topology configuration, on the Interception Rules page.

Figure: SSL Orchestrator explicit proxy authentication policy selection

But before you can do this, you must first create the authentication policy. Head on over to Access -> Profiles / Policies -> Access Profiles (Per-session policies), and click the Create button.

Settings:
- Name: provide a unique name
- Profile Type: SWG-Explicit
- Profile Scope: leave it at 'Profile'
- Customization Type: leave it at 'Modern'

Don't let the name confuse you. Secure Web Gateway (SWG) is not required to perform explicit forward proxy authentication. Click Finished to complete. You'll be taken back to the profile list. To the right of the new profile, click the Edit link to open a new tab to the Visual Policy Editor (VPE).
Now, before we dive into the VPE, let's take a moment to talk about how authentication is going to work here. As previously stated, we are not going to dig into things like Kerberos or NTLM, but we still need something to authenticate to. Once you have something simple working, you can quickly shim in the actual authentication protocol. So let's do basic LocalDB authentication to prove out the configuration. Hop down to Access -> Authentication -> Local User DB -> Instances, and click Create New Instance. Create a simple LocalDB instance:

Settings:
- Name: provide a unique name

Leave the remaining settings as is and click OK. Now go to Access -> Authentication -> Local User DB -> Users, and click Create New User.

Settings:
- User Name: provide a unique user name
- Password: provide a password
- Instances: select the LocalDB instance

Leave everything else as is and click OK. Now go back to the VPE. You're ready to define your authentication policy. With some exceptions, most explicit forward proxy authentication policies will minimally include a 407 Proxy-Authenticate agent and an authentication agent. The 407 Proxy-Authenticate agent will issue the 407 Proxy-Authenticate response to the client, and pass the user's submitted authentication data (Basic Authorization header, NTLM token, Kerberos ticket) to the auth agent behind it. The auth agent is then responsible for validating that submission and allowing (or denying) access. Since we're using a simple LocalDB to test this, we'll configure this for Basic authentication.

Figure: 407-based SWG-Explicit authentication policy

407 HTTP Response Agent Settings:
- Properties
  - Basic Auth: enter unique text here
  - HTTP Auth Level: select Basic
- Branch Rules
  - Delete the existing Negotiate branch

Authentication Agent Settings:
- Type: LocalDB Auth
- LocalDB Instance: your LocalDB instance

Note again that this is a simple explicit forward proxy test using a local database for HTTP Basic authentication. Once you have this working, it is super easy to replace the LocalDB method with the authentication protocol you need. Now head back to your SSL Orchestrator explicit proxy configuration. Navigate to the Interception Rules page. On that page you will see a setting for Access Profile. Select your SWG-Explicit access policy here. And that's it. Deploy the configuration and you're done.

Configure your browser to point to the SSL Orchestrator explicit proxy IP and port, if you haven't already, and attempt to access an external URL (ex. https://www.f5.com). Since this is configured for HTTP Basic authentication, you should see a popup dialog in the browser requesting username and password. Enter the values you created in the LocalDB user properties. In following articles, I will show you how to configure Kerberos and NTLM for forward proxy authentication. If you want to see what this communication actually looks like on the wire, you can either use your browser's developer tools (Network tab), or, for a cleaner view, head over to a command line on your client and use the cURL command (you'll need cURL installed on your workstation):

    curl -vk --proxy [PROXY IP:PORT] https://www.example.com --proxy-basic --proxy-user '[username:password]'

Figure: cURL explicit proxy output

What you see in the output should look pretty close to the explicit proxy diagram from earlier. And if your SSL Orchestrator security policy is defined to intercept TLS, you will see your local CA as the example.com CA issuer.
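For reference, the 407 exchange buried in that verbose cURL output boils down to something like the following. This is an illustrative sketch only; headers are trimmed, and the realm text and credentials are placeholders:

    CONNECT www.example.com:443 HTTP/1.1                <- client asks the proxy for a tunnel
    HTTP/1.1 407 Proxy Authentication Required          <- proxy challenges the client
    Proxy-Authenticate: Basic realm="sslo-proxy"

    CONNECT www.example.com:443 HTTP/1.1                <- client retries with credentials
    Proxy-Authorization: Basic dXNlcjpwYXNzd29yZA==        (base64 of "user:password")
    HTTP/1.1 200 Connection established                 <- tunnel is up, TLS handshake follows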
Transparent Forward Proxy Authentication

I intentionally started with explicit proxy authentication because it's usually the easiest to get your head around. Transparent forward proxy authentication is a bit different, but you very likely see it all the time. If you've ever connected to hotel, airport/airplane, or coffee shop WiFi and were presented with a webpage or popup screen that asked for a username, room number, or asked you to agree to some terms of use, you were using transparent authentication. In this case, though, it is commonly referred to as a "Captive Portal". Note that captive portal authentication was introduced to SSL Orchestrator in version 6.0. Captive portal authentication basically works like this:

1. On first connecting, you navigate to a remote URL (ex. www.f5.com), which passes through a security device (a proxy server, or in the case of hotel/coffee shop WiFi, an access point).
2. The device has never seen you before, so it issues an HTTP redirect to a separate URL. This URL will present an authentication point, usually a web page with some form of identity verification, user agreement, etc.
3. You do what you need to do there, and the authentication page redirects you back to the original URL (ex. www.f5.com) and either stores some information about you, or sends something back with you in the redirect (a token).
4. On passing back through the proxy (or access point), you are recognized as an authenticated user and allowed to pass. The token is stored for the life of your session so that you are not sent back to the captive portal.

Figure: Captive-portal Authentication Process

The real beauty here is that you are not as limited in the mechanisms you use to authenticate as you are in an explicit proxy. The captive portal URL is essentially a webpage, so you could use NTLM, Kerberos, Basic, certificates, federation, OAuth, a logon page - basically anything. Configuring this in APM is also super easy. Head on over to Access -> Profiles / Policies -> Access Profiles (Per-session policies), and click the Create button.

Settings:
- Name: provide a unique name
- Profile Type: SWG-Transparent
- Profile Scope: leave it at 'Named'
- Named Scope: enter a unique value here (ex. SSO)
- Customization Type: set this to 'Standard'

Again, don't let the name confuse you. Secure Web Gateway (SWG) is not required to perform transparent forward proxy authentication. Click Finished to complete. You'll be taken back to the profile list. To the right of the new profile, click the Edit link to open a new tab to the Visual Policy Editor (VPE). We are going to continue to use the LocalDB authentication method here to keep the configuration simple. But in this case, you could extend that to do Basic authentication or a logon page. If you do Basic, Kerberos, or NTLM, you'll be using a "401 authentication model". This is very similar to the 407 model, except that 401 interacts directly with the user. And again, this is just an example. Captive portal authentication isn't dependent on browser proxy authentication capabilities, and can support pretty much any user authentication method you can throw at it.

Figure: 401-based SWG-Transparent authentication policy

401 Authentication Agent Settings:
- Properties
  - Basic Auth: enter unique text here
  - HTTP Auth Level: select Basic
- Branch Rules
  - Delete the existing Negotiate branch

Authentication Agent Settings:
- Type: LocalDB Auth
- LocalDB Instance: your LocalDB instance

Now, there are a few additional things to do here.
Transparent proxy (captive portal) authentication actually requires two access profiles. The authentication profile you just created gets applied to the captive portal (authentication URL). You need a separate access profile on the proxy listener to redirect the user to the captive portal if no token exists for that user. As it turns out, an SSL Orchestrator security policy is indeed a type of access profile, so it simply gets modified to point to the captive portal URL. The 'named' profile scope you selected in the above authentication profile defines how the two profiles share user identity information, thus both will have a named profile scope and must use the same named scope value (ex. SSO). You will now create the second access profile:

Settings:
- Name: provide a unique name
- Profile Type: SSL Orchestrator
- Profile Scope: leave it at 'Named'
- Named Scope: enter a unique value here (ex. SSO)
- Customization Type: set this to 'Standard'
- Captive Portals: select 'Enabled'
- Primary Authentication URI: enter the URL of the captive portal (ex. https://login.f5labs.com)

You now need to create a virtual server to hold your captive portal. This is the URL that users are redirected to for authentication (ex. https://login.f5labs.com). The steps are as follows:

- Create a certificate and private key to enable TLS
- Create a client SSL profile that contains the certificate and private key
- Create a virtual server
  - Destination Address/Mask: enter the IP address that the captive portal URL resolves to
  - Service Port: enter 443
  - HTTP Profile (Client): select 'http'
  - SSL Profile (Client): select your client SSL profile
  - VLANs and Tunnels: enable for your client-facing VLAN
  - Access Profile: select your captive portal access profile

Head back into your SSL Orchestrator outbound transparent proxy topology configuration, and go to the Interception Rules page. Under the 'Access Profile' setting, select your new SSL Orchestrator access profile and re-deploy. That's it. Now open a browser and attempt to access a remote resource. Since this is using Basic authentication with LocalDB, you should get prompted for username and password. If you look closely, you will see that you've been redirected to your captive portal URL. 401 Basic authentication is not connection based, so APM stores the user session information by client IP. If you do not get prompted for authentication, it's likely you have an active session already. Navigate to Access -> Overview -> Active Sessions. If you see your LocalDB user account name listed there, delete it and try again (close and re-open the browser). And there you have it. In just a few steps you've configured your SSL Orchestrator outbound topology to perform user authentication, and along the way you have hopefully recognized the immense flexibility at your command. Thanks.
Lightboard Lessons: OWASP Top 10 - Broken Authentication

The OWASP Top 10 is a list of the most common security risks on the Internet today. Broken Authentication comes in at the #2 spot in the latest edition of the OWASP Top 10. In this video, John discusses broken authentication and outlines some mitigation steps to make sure your web application doesn't give access to the wrong users.

Related Resources:
- Securing against the OWASP Top 10: Broken Authentication
Giving back to the Dev Community: ssldump data decrypt

Now with TLS1.2 support, #infosec

In my previous post, I announced that I've traded in my compilers for frequent flier miles and spell chekkers. (ha, joke). That's right; I'm in the marketing department now. But before I go, I wanted to give one last salute to the development community that I served in for so long. There's an open-source tool called ssldump, written by the Chairman of the IETF TLS committee himself, Eric Rescorla. The tool is actually quite mature (by which we mean old) and the primary development work was done right after SSL3 was tweaked, formalized, and named TLS over ten years ago. If you look at the changelog, one of the very last changes was submitted by F5 Networks' very own Jeffrey Hafey, who added VLAN tag support back in 2000. Jeffrey was one of our most brilliant Enterprise Network Engineers at the time; I remember being on the phone with him and I asked what times of day I should call him, because at the time, he was based out of Japan. "Call me anytime" he said. "No, really, when can I call?" I said. "Seriously, anytime, I don't sleep." And I don't think he did.

F5 has included the ssldump utility on the BIG-IP platform since those early days. You can even run it from our TMOS shell command interpreter like so:

    (tmos) # run util ssldump -i external -s0 -d -k /config/ssl/ssl.key/default.key port 443

In the example above, the syntax means "decrypt the SSL traffic coming from the external interface using this key." It's a handy tool to have when debugging HTTPS issues. Also, F5's version of the ssldump utility even decrypts traffic encrypted with your FIPS 140 keys! Don't worry, that's not a security leak; it's a diagnostic feature that only works when you run it on the same device that has the key.

Like many browsers and other SSL tools, ssldump doesn't support the newer versions of the TLS protocol (1.1 and 1.2) when decrypting application data. Until now. I'm posting a patch to SourceForge that adds TLS 1.1 and 1.2 support for decrypting application data. Basic unit testing has been run on it, and before you chide me about using magic numbers for my buffer lengths, the patch follows the conventions of the existing code to keep the changes as manageable as possible. BIG-IP versions 10.2.4 and 11.2.0 should have these fixes. You can download the patch yourself and apply it to your own Ubuntu, Debian, or CentOS release; if you find deficiencies in the patch, just email me and we'll see if we can make it better. There is no doubt that our competitors will pick up the changes and integrate them into their own tools, but you know what? That's okay. F5 is the market leader in SSL processing and this is just how we roll. In a future post we'll take a look at why there's suddenly a lot of interest in TLS1.2.

Related:
- Troubleshooting TLS Problems With ssldump
- Mutations in the TLS Protocol
Client SSL Authentication on BIG-IP as in-depth as it can go

Quick Intro

In this article, I'm going to explain how SSL client certificate authentication works on BIG-IP and explain what actually happens during client authentication as in-depth as I can, showing the TLS headers in Wireshark. This article is about the client side of BIG-IP (the Client SSL profile) authenticating a client connecting to BIG-IP.

The Topology

For reference, so we can follow the Wireshark output:

How to Configure Client Certificate Authentication on the Client SSL profile

Essentially, what we're doing here is making BIG-IP verify the client's credentials before allowing the TLS handshake to proceed. However, such credentials are in the form of a client certificate. The way to do this is to configure BIG-IP by:

- Adding a CA file to Trusted Certificate Authorities (ca-file in tmsh) to validate the client certificate
- Optionally adding the same CA that signed the client certificate to Advertised Certificate Authorities
- Enforcing client certificate validation by setting the Client Certificate option on BIG-IP to require
- Optionally setting the Frequency of such checks if we don't want to stick to the defaults

I'll go through each option now.

Adding CA file to Trusted Certificate Authorities

We should add to Trusted Certificate Authorities a single certificate file (*.crt) with one CA, or a concatenated file with 2 or more CAs, with the purpose of validating the client certificate, i.e. confirming the client's identity. Upon receiving the client certificate, BIG-IP will go through this list of CAs and confirm the client's identity. It also has another purpose, which is to authenticate BIG-IP to the client, but that is out of the scope of this article.

Optionally add CA file to Advertised Certificate Authorities

Trusted Certificate Authorities explicitly tells BIG-IP the CA or chain of CAs it will use to validate the client certificate, whereas Advertised Certificate Authorities tells the client in advance what kind of CA BIG-IP trusts, so that the client can make the decision about which certificate to send to BIG-IP.

Why would we use Advertised Certificate Authorities? Let's imagine a situation where the client's application has more than one client certificate configured. How is it going to figure out which certificate to send to BIG-IP? That's where Advertised Certificate Authorities comes to rescue us! When we add our CA bundle to Advertised Certificate Authorities, we're telling BIG-IP to add it to a header field named Distinguished Names within the Certificate Request message. I dedicated the Appendix section to showing you in more depth how changing Advertised Certificate Authorities affects the Certificate Request header.

Configuring BIG-IP to enforce Client Certificate validation

To enable client certificate authentication on BIG-IP we change Local Traffic ›› Profiles : SSL : Client ›› Client Certificate to request:

The default is set to ignore, where client certificate authentication is disabled. If we truly want to enable client certificate validation we need to select require. The reason why is that request makes BIG-IP request a client certificate from the client, but BIG-IP will not perform the validation to confirm whether the certificate sent is valid in this mode. The following subsections explain each option.

Ignore

Client certificate authentication is disabled (the default). BIG-IP never sends a Certificate Request to the client, and therefore the client does not need to send its certificate to BIG-IP.
In this case, the TLS handshake proceeds successfully without any client authentication:

pcap: ssl-sample-peer-cert-mode-ignore.pcap
Wireshark filter used: frame.number == 5 or frame.number == 6

Request

BIG-IP requests the client certificate by sending a Certificate Request message, but does not check whether the client certificate is valid, which is not really client authentication, is it? This means that ca-file (Trusted Certificate Authorities in the GUI) will not be used to validate the client certificate, and we will consider any certificate sent to us to be valid. For example, in ssl-sample-peer-cert-mode-request-with-no-client-cert-sent.pcap we now see that BIG-IP sends a Certificate Request message and the client responds with a Certificate message this time:

Because I didn't add any client certificate to my browser, it sent a blank certificate to BIG-IP. Again, BIG-IP did not perform any validation whatsoever, so the TLS handshake proceeded successfully. We can then conclude that this setting only makes BIG-IP request the client certificate, and that's it.

Require

It behaves just like Request, but BIG-IP also performs client certificate validation, i.e. BIG-IP will use the CA we hopefully added to ca-file (Trusted Certificate Authorities in the GUI) to confirm whether the client certificate is valid. This means that if we don't add a CA to Trusted Certificate Authorities (ca-file in tmsh), then validation will fail.

Setting the Frequency of Client Certificate Requests

This setting specifies how frequently BIG-IP authenticates the client, by enabling or disabling TLS session resumption. It has only 2 options:

once - BIG-IP requests the client certificate during the first handshake and no longer re-authenticates the client as long as the TLS session is reused and valid. The way BIG-IP does this is by using session resumption/reuse. During the first TLS handshake from the client, BIG-IP sends a Session ID to the client within the Server Hello, and in subsequent TLS connections, assuming the Session ID is still in BIG-IP's cache and the client sends it back to BIG-IP, the session will be resumed every time the client tries to establish a TLS session (respecting the cache timeout).

The first time, the client sends a Client Hello with a blank Session ID, as its cache is empty, and is then assigned a Session ID by BIG-IP (409f...):

pcap: ssl-sample-clientcert-auth-once-enabled.pcap
Wireshark filter used: !ip.addr == 172.16.199.254 and frame.number > 1 and frame.number < 7

The Certificate Request confirms BIG-IP is trying to authenticate the client. Notice the Session ID BIG-IP sent to the client is 409f7... Then, when the client goes through another TLS handshake and sends the above Session ID in its Client Hello (packet #70 below), BIG-IP confirms the session is being resumed by sending the same Session ID in the Server Hello back to the client:

Wireshark filter used: !ip.addr == 172.16.199.254 and frame.number > 66 and frame.number < 73

This resumed TLS handshake just means we will not go through a full handshake and will no longer need to exchange keys, select ciphers, or re-authenticate the client, as we're reusing those already negotiated in the full TLS handshake where we first received Session ID 409f...

always - BIG-IP requests the client certificate, i.e. re-authenticates the client, at every handshake. On BIG-IP, this is accomplished by disabling session reuse, which makes BIG-IP not send a Session ID back to the client in the beginning, forcing a full TLS handshake every time.

pcap: ssl-sample-clientcert-auth-always-enabled.pcap
Wireshark filter used: !ip.addr == 172.16.199.254 and ((frame.number > 1 and frame.number < 7) or (frame.number > 74 and frame.number < 80))
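Pulling the settings discussed above together, here is a minimal, hedged tmsh sketch of a Client SSL profile with client certificate authentication enforced. The profile and file names are assumptions, and you should verify the attribute names against your TMOS version before relying on them:

    # Clientssl profile that requires and validates a client certificate on every handshake
    tmsh create ltm profile client-ssl clientssl_mtls \
        defaults-from clientssl \
        peer-cert-mode require \
        ca-file myCAbundle.crt \
        client-cert-ca myCAbundle.crt \
        authenticate always

Here peer-cert-mode maps to the Client Certificate option (ignore/request/require), ca-file to Trusted Certificate Authorities, client-cert-ca to Advertised Certificate Authorities, and authenticate to the Frequency setting (once/always).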
Appendix: Understanding how the Advertised Certificate Authority field affects the Certificate Request header

For this test, I've got the following:

myCAbundle.crt: concatenation of root_ca.crt and ltm2.crt (signed by root_ca.crt)
client_cert.crt: added to Firefox and signed by ltm2.crt

I've also added myCAbundle.crt to Trusted Certificate Authorities so BIG-IP is able to verify that client_cert.crt is valid. For each test, I will change Advertised Certificate Authorities so we can see what happens. We'll go through 3 tests here:

Setting Advertised Certificate Authority to None
Setting Advertised Certificate Authority to a certificate that didn't sign the client cert
Setting Advertised Certificate Authority to a bundle that signed the client cert

Setting Advertised Certificate Authority to None

Note that even though no CA was advertised in the Certificate Request message, BIG-IP still advertises Certificate types and Signature Hash Algorithms, so that the client knows in advance what kind of certificate (RSA, DSS or ECDSA) and which hash algorithms BIG-IP supports. If the client certificate had not been signed using any of the certificate types and hashing algorithms listed, the handshake would have failed. However, in this case validation is successful, as we can see on frame #8 that the client certificate is RSA type and hashed with SHA1:

pcap: ssl-sample-advcert-none.pcap
Filter used: !ip.addr == 172.16.199.254 and frame.number > 1 and frame.number < 16

It's worth noting that Distinguished Names is NOT populated and has a length of zero, because we didn't attach a bundle to Advertised Certificate Authorities. In this case it worked fine anyway, because my client browser had only one certificate attached, so there was no ambiguity about which certificate to send.

Setting Advertised Certificate Authority to a certificate that didn't sign client cert

pcap: ssl-sample-advcert-default-firefox.pcap
Wireshark filter used: None

I've set Advertised Certificate Authority to default.crt, as this is NOT the CA that signed the client's certificate:

The difference here, when compared to None, is that Distinguished Names is now populated with the certificate I added (default.crt):

However, even though I added the correct certificate to my Firefox browser, it sent a blank certificate instead. Why? Because BIG-IP signalled in Distinguished Names that default.crt is the CA that signed the certificate BIG-IP is looking for, and as Firefox doesn't have any certificate signed by default.crt, it just sent a blank certificate back to BIG-IP. Also, because BIG-IP is now performing proper validation, i.e. comparing whatever client certificate is sent to it with the CA list added to Trusted Certificate Authorities, it knows a blank certificate is not valid and terminates the TLS handshake with a Fatal Alert.
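Before the last test, here is how a bundle like myCAbundle.crt can be assembled and sanity-checked off-box with standard OpenSSL tooling. This is a hedged sketch reusing the file names above; it simply confirms the chain is coherent before the bundle is uploaded to BIG-IP.

# Concatenate the root CA and the intermediate CA into a single bundle file
cat root_ca.crt ltm2.crt > myCAbundle.crt

# Verify that the client certificate chains up to the bundle; expect "client_cert.crt: OK"
openssl verify -CAfile myCAbundle.crt client_cert.crt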
Setting Advertised Certificate Authority to a bundle that signed client cert

pcap: ssl-sample-advcert-ltm2chainedwithrootca.pcap
Filter used: !ip.addr == 172.16.199.254 and frame.number > 1

Now I've set Advertised Certificate Authority to the correct bundle, the one that signed my client certificate:

And indeed the handshake succeeds because:

BIG-IP advertises myCAbundle.crt in the Certificate Request >> Distinguished Names header, as per the Advertised Certificate Authority configuration
By reading the Distinguished Names field, the client sends the correct client certificate back to BIG-IP
BIG-IP validates the client certificate using the myCAbundle configured in Trusted Certificate Authorities

Hope this article provides some clarification about these mysterious TLS headers.
How to Extract the UPN from a Digital Certificate on a CAC card using F5 APM

Introduction

This article describes two different methods to extract the UPN from a digital certificate for further processing by the BIG-IP. While there are other excellent articles that show you how to build out the entire access policy, this article concentrates on the methods for extracting the UPN.

Some Context

The CAC card is a "smart" card about the size of a credit card. It is the standard identification for active duty uniformed Service personnel, Selected Reserve, DoD civilian employees, and eligible contractor personnel. It is also the principal card used to enable physical access to buildings and controlled spaces, and it provides access to DoD computer networks and systems. Accessing DoD PKI-protected information is most commonly achieved using the PKI certificates stored on your Common Access Card (CAC). The certificates on your CAC can allow you to perform routine activities such as accessing OWA, signing documents, and viewing other PKI-protected information online.

In F5, the typical authentication flow for F5 Access Policy Manager when dealing with Common Access Cards is to:

Present a DoD warning banner
Validate the certificate through TLS
Validate that the certificate has not been revoked
Pull the UPN field of the certificate to search for the user in LDAP

The F5 Access Policy Manager uses the User Principal Name (UPN), taken from the Subject Alternative Name (SAN) field of the Signature Certificate, to search for the user in LDAP and allow or deny access based on the information found. The diagram below shows the value that we will be pulling from the certificate to use for further authentication. On a DoD CAC the UPN is of the format EDIPI@mil.

The following describes two methods to extract the UPN as part of an APM policy. If you prefer, I have published a 5-minute demonstration video outlining the steps presented in this article. Otherwise, you may continue reading, or refer back at your desired pace, to the step-by-step presented below.

Method One – a Variable Assign within an access policy

This method relies on the use of an access policy item called a "Variable Assign" that contains a custom expression. In the diagram below we place a Variable Assign access policy item after checking that the certificate is valid through mutual TLS with the On Demand Cert Auth item, and then checking with an OCSP server that the certificate has not been revoked.

To add the Variable Assign, click the '+' item in the visual policy editor and select the Variable Assign item; it is under the Assignment tab in the Visual Policy Editor.

In the Variable Assign, give the access policy item a name, for instance "upn_extract", and then click the "Add New Entry" button. Ensure that Custom Variable is selected and create a variable name, for instance session.custom.upn. On the right side select "Custom Expression" and place the following expression in the entry field below. This expression parses the x509 certificate attributes on the CAC card for the UPN:

set x509e_fields [split [mcget {session.ssl.cert.x509extension}] "\n"];
# For each element in the list:
foreach field $x509e_fields {
    # If the element contains UPN:
    if { $field contains "othername:UPN" } {
        # Set start of UPN variable
        set start [expr {[string first "othername:UPN<" $field] +14}]
        # UPN format is <user@domain>
        # Return the UPN, by finding the index of opening and closing brackets, then use string range to get everything between.
        return [string range $field $start [expr { [string first ">" $field $start] - 1 } ] ];
    }
}
# Otherwise return UPN Not Found:
return "UPN-NOT-FOUND";

Click "Finished".

Method Two – an Access Policy Agent Event with an iRule

The second method relies on the use of an access policy item called an "iRule Event" that uses an iRule to extract the UPN. In the diagram below we place an iRule Event access policy item after checking that the certificate is valid through mutual TLS with the On Demand Cert Auth item, and then checking with an OCSP server that the certificate has not been revoked.

To add the iRule Event, click the '+' item in the visual policy editor and select the iRule Event item; it is under the General Purpose tab in the Visual Policy Editor.

Then provide a name and a Custom iRule Event Agent ID. I like to make the name the same as the identifier, but they can be different. The Custom iRule Event Agent ID ties the visual policy editor iRule Event item to an iRule.

Now create the iRule. Under Local Traffic / iRules click the Create button, provide a name for the iRule, and place the following iRule in the entry field. This iRule parses the x509 certificate attributes on the CAC card for the UPN:

when ACCESS_POLICY_AGENT_EVENT {
    if { [ACCESS::policy agent_id] eq "CERTPROC" } {
        # This event extracts the user principal name from a client certificate and places it into a session variable.
        if { [ACCESS::session data get session.ssl.cert.x509extension] contains "othername:UPN<" } {
            ACCESS::session data set session.custom.upn [findstr [ACCESS::session data get session.ssl.cert.x509extension] "othername:UPN<" 14 ">"]
        }
    }
}

Click "Finished". Then, on the virtual server that provides the service, select "Resources" and then "Manage". Finally, move the CERTPROC iRule from Available to Enabled.

Conclusion

Both of these methods ultimately result in the user principal name, the "UPN", being stored in a session variable within the access policy. This session variable can then be used in an LDAP lookup that verifies the user exists within the directory, and it can also be used to pull further information from the directory that enables additional verification and authentication. Examples might be performing single sign-on to an application or determining group membership. A sample LDAP search filter is sketched at the end of this article.

Which one is better? (Editorial Time)

While both methods are completely valid, I prefer the Variable Assign within an access policy, as it provides a single place in the VPE where the configuration resides. It also allows for a more rapid understanding of the configuration from a troubleshooting perspective, as the expression resides within the visual policy. The iRule method means there are multiple locations where the configuration resides; an experienced APM administrator will quickly determine that an iRule is being used, but for a less experienced APM administrator this may take more time and could hinder future troubleshooting. On the other hand, the iRule method is more performant than the expression method and may be a better fit for a high-traffic APM VIP.
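As referenced in the Conclusion above, here is one way the extracted value is commonly consumed. This is a hedged sketch, not taken from the original article: in an APM LDAP Query agent you would typically point SearchDN at your user container and use a SearchFilter that references the session variable populated by either method.

SearchFilter: (userPrincipalName=%{session.custom.upn})

Attributes returned by the query (for example memberOf) can then drive branch rules or resource assignment later in the policy.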
Azure Active Directory and BIG-IP APM Integration with SAP ERP

Introduction

Despite recent advances in security and identity management, controlling and managing access to applications through the web, whether by onsite employees, remote employees or contractors, customers, partners, or the public, is as difficult as ever. IT teams are challenged to control access based on granular characteristics such as user role while still providing fast authentication and, preferably, unified access with single sign-on (SSO) capabilities. The ability to audit access and to recognize and stop attempts at unauthorized access is also critical in today's security environment.

F5® BIG-IP® Local Traffic Manager™ (LTM) and F5 BIG-IP® Access Policy Manager® (APM) address these challenges, providing extended access management capabilities when used in conjunction with the Microsoft Azure Active Directory (AAD) identity management platform. The integrated solution allows AAD to support applications with header-based and Kerberos-based authentication and multifactor authentication using a variety of factor types. In addition, the BIG-IP system can act as a reverse proxy for publishing on-premises applications beyond the firewall, where they can be accessed through AAD. This document discusses the process of configuring AAD and F5 BIG-IP to meet this requirement while still providing the flexibility and power of the cloud.

Audience

This guide is written for IT professionals who need to design an F5 network. These IT professionals can fill a variety of roles:

·Systems engineers who need a standard set of procedures for implementing solutions
·Project managers who create statements of work for F5 implementations
·F5 partners who sell technology or create implementation documentation

Customer Use Cases

Security is one of the primary considerations for organizations in determining whether or not to migrate applications to the public cloud. For organizations with applications in the cloud, in a data center, managed, or delivered as a service, the challenge is to create a cost-effective hybrid architecture that provides secure application access and a great user experience: easy access, consistent behavior, and single sign-on (SSO) tied to a central identity and authentication strategy.

Some applications are not good candidates for modernization; they are not suited for, or are incapable of, cloud migration. Many on-premises apps do not support modern authentication and authorization, including standards and protocols such as SAML, OAuth, or OpenID Connect (OIDC). An organization may also lack the staff or time to perform application modernization for its on-premises apps.

With thousands of apps in use daily, hosted in any combination of these locations, how can organizations ensure secure, appropriate user access without requiring users to log in multiple times? In addition, how can organizations terminate user access to each application without having to access each app individually?

By deploying Microsoft Azure Active Directory, Microsoft's comprehensive cloud-based identity platform, along with F5's trusted application access solution, Access Policy Manager (APM), organizations are able to federate user identity, authentication, and authorization and bridge the identity gap between cloud-based (IaaS), SaaS, and on-premises applications.
Figure 1: Secure hybrid application access

This guide discusses the following use case:

·Users use single sign-on to access an SAP ERP application that requires Kerberos-based authentication.

Microsoft Azure Active Directory and F5 BIG-IP APM Design

Organizations with high security demands and low risk tolerance may need to keep all aspects of user authentication on premises. The Microsoft Azure Active Directory and F5 BIG-IP APM solution integrates directly with AAD and works cooperatively with existing Kerberos-based, header-based, or other authentication methods. The solution has these components:

•BIG-IP Access Policy Manager (APM)
•Microsoft Domain Controller / Active Directory (AD)
•Microsoft Azure Active Directory (AAD)
•SAP ERP Application (Kerberos-based authentication)

Figure 2: APM bridge SAML to Kerberos authentication components
Figure 3: APM bridge SAML to Kerberos authentication process flow

Deploying Azure Active Directory and BIG-IP APM integration

The joint Microsoft and APM solution allows legacy applications incapable of supporting modern authentication and authorization to interoperate with Azure Active Directory. Even if an app doesn't support SAML and can only support header- or Kerberos-based authentication, it can still be enabled with single sign-on (SSO) and support multi-factor authentication (MFA) through the F5 APM and Azure Active Directory combination. Azure Active Directory, as an IDaaS, delivers a trusted root of identity to APM, creating a bridge between modern identity and legacy applications such as SAP ERP, delivering SSO and securing the app with MFA.

Configuring Microsoft Azure Active Directory

These instructions configure Azure AD SSO with APM to be used with SAP ERP. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in F5. To configure and test Azure AD SSO with APM, complete the following tasks:

·Create an Azure AD user, to add users to Azure AD.
·Assign the Azure AD user, to enable users to use Azure AD single sign-on.
·Configure Azure AD SSO, to enable your users to use this feature.

Create an Azure AD user

In this section, you'll create a test user in the Azure portal named Harvey Winn.

1. From the left pane in the Azure portal, click Users, and then select All users.
2. Click + New user at the top of the screen.
3. In the User properties, enter the following: User name: harvey@aserracorp.com; Name: Harvey Winn.
4. Select the Show password check box, and then write down the value that's displayed in the Password box.
5. Click Create.

Assign Azure AD users to application

1. In the search field, type "enterprise applications" and click on Enterprise applications.
2. Click on New application.
3. In the search field under Add from the gallery, type "f5", click on SAP ERP Central Component (ECC), and then click Add.
4. In the SAP ERP Central Component (ECC) - Protected by F5 Networks BIG-IP APM | Overview window, click 1. Assign users and groups, and in the next screen, click + Add user.
5. In the Home > SAP ERP Central Component (ECC) - Protected by F5 Networks BIG-IP APM | Users and groups > Add Assignment page, click Users and groups.
6. In the search field under Users and groups, search for "harvey", click on the user Harvey Winn, click Select and then click Assign.

Configure Azure AD SSO

1. Click on Single sign-on.
2. Click on SAML.
3. In the Home > SAP ERP Central Component (ECC) - Protected by F5 Networks BIG-IP APM | Single sign-on > SAML-based Sign-on page, under Basic SAML Configuration, click the edit icon.
4. Complete the following information and click Save:
·Identifier (Entity ID): https://saperp.aserracorp.com/
·Reply URL (Assertion Consumer Service URL): https://saperp.aserracorp.com/saml/sp/profile/post/acs
·Relay State: https://saperp.aserracorp.com/irj/portal
·Logout Url: https://saperp.aserracorp.com/saml/sp/profile/redirect/slo
5. In the Home > SAP ERP Central Component (ECC) - Protected by F5 Networks BIG-IP APM | Single sign-on > SAML-based Sign-on page, under User Attributes & Claims, click the edit icon, and click + Add new claim.
6. In the Home > SAP ERP Central Component (ECC) - Protected by F5 Networks BIG-IP APM | Single sign-on > SAML-based Sign-on > User Attributes & Claims > Manage claim page, complete the following information and click Save:
·Name: sAMAccountName
·Source attribute: user.onpremisessamaccountname
7. Return to the SAML-based Sign-on page to verify the information.
8. Under SAML Signing Certificate, next to Federation Metadata XML, right-click Download and select Save Link As…
9. Rename the file to SAPEP.xml and click Save. Note: APM Guided Configuration will not accept spaces in the file name.
10. Azure AD configuration completed.

Configure F5 BIG-IP APM

These instructions configure APM to be used with Azure AD SSO for SAP ERP application access. For SSO to work, you need to establish a link relationship between APM and Azure AD in relation to the SAP ERP. To configure and test Azure AD SSO with APM, complete the following tasks:

Configure the Service Provider (SAP ERP): the Service Provider can sign authentication requests and decrypt assertions.
Configure a Virtual Server: when clients send application traffic to a virtual server, the virtual server listens for that traffic, processes the configuration associated with the server, and directs the traffic according to the policy result and the settings in the configuration.
Configure the External Identity Provider Connector: define settings for an external SAML IdP. When acting as a SAML Service Provider, the BIG-IP system sends authentication requests to, and consumes assertions from, the external SAML IdPs that you specify.
Configure the Pool Properties: configure a pool of one or more servers. If you have a suitable pool configured already, select it. Otherwise, create a new one; add servers, select a load balancing method, and, optionally, assign a health monitor to the pool.
Configure Single Sign-On: leverages credential caching and credential proxying technology so users can enter their credentials once to access their secured web applications. This SSO mechanism allows APM to obtain a Kerberos ticket on the user's behalf and present it transparently to the backend application. You must know the Kerberos Realm, Account Name, and Account Password before proceeding.

1. In BIG-IP, click Access > Guided Configuration > Federation > SAML Service Provider.
2. Click Next.
3. In the Service Provider Properties page, configure the following information, leave the remaining settings at their defaults, and click Save & Next:
• Configuration Name: saperp
• Entity ID: https://saperp.aserracorp.com/
• Scheme: https
• Host: saperp.aserracorp.com
• Relay State: https://saperp.aserracorp.com/irj/portal
4. In the Virtual Server Properties page, configure the following information, leave the remaining settings at their defaults, and click Save & Next:
• Destination Address: 206.124.129.129
• Service Port: 443 HTTPS (default)
• Enable Redirect Port: Checked (default)
• Redirect Port: 80 HTTP (default)
• Client SSL Profile: Create new
• Client SSL Certificate: saperp.aserracorp.com
• Associated Private Key: saperp.aserracorp.com
5. In the External Identity Provider Connector Settings page, configure the following information, leave the remaining settings at their defaults, and click Save & Next:
• Select method to configure your IdP Connector: Metadata
• Upload a file in the format name.xml: Choose File SAPEP.xml
• Name: saperp_aad_idp
6. In the Pool Properties page, configure the following information, leave the remaining settings at their defaults, and click Save & Next:
• Select a Pool: Create New
• Load Balancing Method: Least Connections (member)
• Pool Servers
• IP Address/Node Name: /Common/172.31.23.14
• Port: 50000
7. In the Single Sign-On Settings page, click Enable Single Sign-On, then click Show Advanced Settings, configure the following information, leave the remaining settings at their defaults, and click Save & Next:
• Select Single Sign-On Type: Kerberos
• Credentials Source
• Username Source: session.saml.last.attr.name.sAMAccountName
• SSO Method Configuration
• Kerberos Realm: ASERRACORP.COM
• Account Name: sapsrvacc
• Account Password: password
• Confirm Account Password: password
• KDC: 172.16.60.5
• SPN Pattern: HTTP/sapsrv.aserracorp.com@ASERRACORP.COM
• Ticket Lifetime: 600 (default)
• Send Authorization: Always (default)
8. In the Endpoint Checks Properties page, leave the default settings and click Save & Next.
9. In the Timeout Settings page, leave the default settings and click Save & Next.
10. In the "Your application is ready to be deployed" page, click Deploy.
11. APM configuration completed.

Resources

BIG-IP Knowledge Center
BIG-IP APM Knowledge Center
Configuring Single Sign-On with Access Policy Manager

Summary

By centralizing access to all your applications, you can manage them more securely. Through the F5 BIG-IP APM and Azure AD integration, you can centralize access and use single sign-on (SSO) and multi-factor authentication for SAP ERP.
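A note on the Kerberos SSO settings in step 7 above: Kerberos constrained delegation assumes some Active Directory plumbing that this guide does not show. In particular, the target service SPN (HTTP/sapsrv.aserracorp.com here) must be registered on the account that runs the SAP web service, and the APM delegation account (sapsrvacc) must be trusted for delegation to that SPN. The commands below are a hedged sketch only, run on a domain controller; svc-sap is a hypothetical service account introduced for illustration, so adjust the names to your environment.

rem Register the SAP web service SPN on the account running the SAP service (svc-sap is hypothetical)
setspn -S HTTP/sapsrv.aserracorp.com ASERRACORP\svc-sap

rem List the SPNs registered on the APM delegation account
setspn -L ASERRACORP\sapsrvacc

Delegation rights for sapsrvacc (the "Trust this user for delegation to specified services" setting) are then granted in Active Directory Users and Computers or via PowerShell.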
Use of BIG-IP to authenticate API calls based on oAuth2.0 framework

Introduction

There is an earlier article in the series that shows how to use the NGINX Controller for authentication of API calls (see Use of NGINX Controller to Authenticate API Calls). It is also possible to use the BIG-IP to perform authentication of API calls. This is usually the preferred method if a translation needs to occur between the authentication method used by clients and the one used by the API.

Topology

Picture 1 below represents the overall solution topology. In previous articles, I've explained how to use the API security features available on BIG-IP. In this article, we'll take a look at how to use BIG-IP to authenticate calls using the OAuth 2.0 framework before they get forwarded to a WAF. Order is important here: a flood of unauthorized calls may put a significant load on the WAF, so it is vital to authorize calls before passing them to the WAF.

Picture 1.

Configuration

A protection profile associated with a virtual server configures authentication of API calls, as well as the policies that secure the API. A per-request policy is automatically created along with the profile. The policy gets most of its configuration from the profile but requires explicit specification of the provider list. The following diagram shows how all the configuration pieces interact. Let us go through all the steps to configure BIG-IP to authenticate API calls using the OAuth 2.0 framework.

First, the resolver. It allows BIG-IP to resolve domain names, e.g. the API server hostname to which API calls are forwarded.

Next is the identity provider for OAuth. In this example, Okta is used. Okta is therefore responsible for:

Issuing JWT tokens to API clients
Issuing JWK keys to BIG-IP

As soon as the OpenID URI for Okta is specified in the BIG-IP configuration, other related information is automatically retrieved, including the JWK keys.

The provider list aggregates multiple identity providers. This is useful if you want to accept JWT tokens from more than one provider.

The API protection profile contains the primary configuration with the following parameters:

OpenAPI file
Resolver
Per-request policy

Note: the per-request policy gets created and configured with most properties implicitly, based on options selected in the protection profile. However, the identity provider needs to be configured explicitly. Open the "Access control" tab of the profile to access the per-request policy.

The per-request policy diagram shows how every incoming API call gets processed. First, BIG-IP checks identity. If the JWT token is valid, BIG-IP then checks whether the endpoint is in the allowed URL list. If both tests pass, BIG-IP forwards the call toward its destination.

Click on "OAuth scope" to specify a provider list. Specify the provider list and change the response to "response2", which returns the appropriate response code and authentication failure reason.

The last step is to assign the API protection profile to a virtual server. From this point, BIG-IP will verify the identity of every incoming call before forwarding it to its destination.
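Before looking at the example calls below, it can help to see what BIG-IP actually retrieves when you point it at the provider's OpenID URI. The following is a hedged sketch using a hypothetical Okta tenant name (dev-123456); the discovery document advertises, among other things, the jwks_uri from which the JWK signing keys are fetched.

# Fetch the OpenID Provider configuration (hypothetical tenant; adjust to your own)
curl -s https://dev-123456.okta.com/oauth2/default/.well-known/openid-configuration

# Follow the advertised jwks_uri to see the public signing keys used to verify JWT signatures
curl -s https://dev-123456.okta.com/oauth2/default/v1/keys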
Following is an example of a call with a valid JWT token that gets forwarded to the destination, and the response that is received:

$ curl -sv https://7a583404-3e51-4cf4-935d-f9f84f108b17.com/uuid -H "Authorization: Bearer eyJra...omitted...f8b_Q"
> GET /uuid HTTP/1.1
> Host: 7a583404-3e51-4cf4-935d-f9f84f108b17.com
> User-Agent: curl/7.54.0
> Accept: */*
> Authorization: Bearer eyJra...omitted...f8b_Q
>
< HTTP/1.1 200 OK
< date: Thu, 23 Jan 2020 19:42:33 GMT
< content-type: application/json
< content-length: 53
< connection: keep-alive
< access-control-allow-origin: *
< access-control-allow-credentials: true
<
{
  "uuid": "c9f949a6-7fca-477a-9345-8cfc61a73d7b"
}
* Connection #0 to host 7a583404-3e51-4cf4-935d-f9f84f108b17.com left intact

Following is an example of a situation where the token is invalid or the API endpoint is not in the allowed URL list. In this case, the call is blocked with an appropriate error message.

$ curl -sv https://7a583404-3e51-4cf4-935d-f9f84f108b17.com/uuid -H "Authorization: Bearer eyJra...omitted...BAD...TOKEN...rGF-w"
> GET /uuid HTTP/1.1
> Host: 7a583404-3e51-4cf4-935d-f9f84f108b17.com
> User-Agent: curl/7.54.0
> Accept: */*
> Authorization: Bearer eyJra...omitted...BAD...TOKEN...rGF-w
>
< HTTP/1.1 401 Unauthorized
< www-authenticate: Bearer error="invalid_token",error_description="Internal error during signature verification"
< content-length: 0
< connection: Close
< Date: Thu, 23 Jan 2020 19:06:35 GMT
<
* Closing connection 0

$ curl -sv https://7a583404-3e51-4cf4-935d-f9f84f108b17.com/NO_SUCH_ENDPOINT -H "Authorization: Bearer eyJra...omitted...f8b_Q"
> GET /NO_SUCH_ENDPOINT HTTP/1.1
> Host: 7a583404-3e51-4cf4-935d-f9f84f108b17.com
> User-Agent: curl/7.54.0
> Accept: */*
> Authorization: Bearer eyJra...omitted...f8b_Q
>
< HTTP/1.1 403 Forbidden
< content-length: 0
< connection: Close
< Date: Thu, 23 Jan 2020 19:44:52 GMT
<
* Closing connection 0
* TLSv1.2 (OUT), TLS alert, Client hello (1):

Hopefully this was useful. See you in the comments! Good luck!
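As a closing aside, when a token is rejected with "Internal error during signature verification" as in the example above, a common first step is to decode the token locally and compare its kid and iss claims with the keys and issuer retrieved from the provider. This is a hedged sketch using standard shell tooling: the token is read from a TOKEN environment variable, and because JWTs are base64url-encoded, padding warnings from base64 can be ignored for inspection purposes.

# Decode the JWT header (first dot-separated segment) to see alg and kid
echo "$TOKEN" | cut -d '.' -f1 | tr '_-' '/+' | base64 -d 2>/dev/null; echo

# Decode the payload (second segment) to see iss, aud and exp claims
echo "$TOKEN" | cut -d '.' -f2 | tr '_-' '/+' | base64 -d 2>/dev/null; echo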