zero trust
F5's API Security Alignment with NIST SP 800-228
Introduction

F5 has positioned itself as a comprehensive API security leader with solutions that directly address the emerging NIST SP 800-228 "Guidelines for API Protection for Cloud-Native Systems." F5's multi-layered approach covers the entire API lifecycle, from development to runtime protection, and is closely aligned with NIST's recommended controls and architectural patterns.

F5's product portfolio comprehensively addresses NIST SP 800-228 requirements

F5's current API security ecosystem includes BIG-IP Advanced WAF, F5 Distributed Cloud Services, and NGINX Plus. Together, these create a unified platform that addresses all 22 NIST recommended controls (REC-API-1 through REC-API-22). The company's 2024 acquisition of Wib Security strengthened its pre-runtime protection capabilities, while Heyhack enhanced its penetration testing offerings. These strategic moves demonstrate F5's commitment to comprehensive API security coverage.

The F5 Distributed Cloud Services API Security platform is a comprehensive WAAP solution. The platform provides AI-powered API discovery, real-time threat detection, advanced bot protection, a web application firewall, DoS/DDoS protection, and automated policy enforcement. This directly supports NIST's focus on continuous monitoring and adaptive security.

Comprehensive mapping to the NIST SP 800-228 control framework

F5's solutions address all seven thematic groups outlined in NIST SP 800-228. These "target" objectives include security controls that address the OWASP API Top 10, with mitigations for broken object level authorization, sensitive information disclosure, input validation failures, and other vulnerabilities. If you haven't read the new document, I encourage you to do so. You can find the document here.

The following may seem confusing at first, but the REC-API headings map to the NIST document. These are high-level target controls. You can further group them by thinking of Pre-Runtime Protections (REC-API-1 through REC-API-8) and Runtime Protections (REC-API-9 through REC-API-22). We have done our best to map F5's capabilities at a high level to the target controls below. In a future article, we will provide specific configuration controls mapping to each target level.

API specification and inventory management (REC-API-1 to REC-API-4)

F5's AI/ML-powered API discovery automatically identifies and catalogs API endpoints, including shadow APIs that pose security risks. The platform generates OpenAPI specifications from traffic analysis and maintains a real-time API inventory with risk scoring. The F5 Distributed Cloud Services platform provides external domain crawling and comprehensive API lifecycle tracking, directly addressing NIST's requirements for preventing unauthorized APIs from becoming attack vectors.

[Image: API Discovery of API Endpoints]

Schema validation and input handling (REC-API-5 to REC-API-8)

F5 implements a positive security model that enforces OpenAPI specifications at runtime. F5 platforms provide granular parameter validation, content-type enforcement, and request size limiting. The platform automatically validates request/response schemas against predefined specifications and uses machine learning to detect schema drift, ensuring continuous compliance with API contracts. In cases where a predefined schema is not available, the platform can "learn" through discovery and build an OpenAPI spec that can later be imported into the platform for adding security controls.
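To make the positive security model concrete, here is a minimal sketch of contract-style input constraints expressed with core NGINX directives. It is illustrative only, not F5's managed schema-validation engine, and the hostname, upstream address, endpoint path, and size limit are all assumptions:

map "$request_method:$content_type" $reject_content_type {
    default                    0;
    "~^POST:application/json"  0;  # JSON POST bodies pass
    "~^POST:"                  1;  # any other POST body type is rejected
}

upstream orders_backend {
    server 10.0.0.10:8080;  # hypothetical backend
}

server {
    listen 443 ssl;
    server_name api.example.com;  # hypothetical API host
    ssl_certificate /etc/ssl/nginx/default.crt;
    ssl_certificate_key /etc/ssl/nginx/default.key;

    client_max_body_size 64k;  # request size limiting

    location /v1/orders {
        limit_except GET POST { deny all; }        # method allow-list per the API contract
        if ($reject_content_type) { return 415; }  # content-type enforcement
        proxy_pass http://orders_backend;
    }
}

Production schema enforcement in F5's platforms goes further (full parameter and body validation against the imported OpenAPI spec), but the positive-model idea is the same: anything the contract does not explicitly allow is rejected.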
Authentication and authorization (REC-API-9 to REC-API-12)

F5's authentication architecture supports OAuth 2.0, OpenID Connect, SAML, and JWT validation with comprehensive scope checking. The F5 Application Delivery and Security platform provides per-request policy enforcement with role-based access control (RBAC) and attribute-based access control (ABAC). The platform's cryptographic X.509 identity bootstrapping ensures every component receives unique identity credentials, supporting NIST's emphasis on strong authentication mechanisms.

Sensitive data protection (REC-API-13 to REC-API-15)

F5's data classification engine automatically identifies and protects PII, HIPAA, GDPR, and PCI-DSS data types flowing through APIs. The platform implements real-time data flow policies with redaction mechanisms and monitors for potential data exfiltration. F5 Distributed Cloud Services provides context-aware data protection that goes beyond traditional PII to include business-sensitive information.

[Image: Sensitive Information Discovery and Redaction]

Access control and request flow (REC-API-16 to REC-API-18)

F5's real-time response capabilities enable immediate blocking of specific keys or users on demand. The platform implements mature token management with hardened API behavior detection for abnormal usage patterns. The behavioral analytics engine continuously monitors API usage patterns to detect compromised credentials and automated attacks.

Rate limiting and abuse prevention (REC-API-19 to REC-API-21)

F5 provides granular rate limiting by user, IP, application ID, method, and field through multiple implementation approaches. The NGINX Plus leaky bucket algorithm ensures smooth traffic management, while BIG-IP APM offers sophisticated quota management with spike-arrest capabilities. The platform's L7 DDoS protection uses machine learning to detect and mitigate application-layer attacks accurately.

[Image: API Endpoint Rate Limiting Settings]

Logging and observability (REC-API-22)

F5's comprehensive logging framework captures all API interactions, authentication events, and data access with contextual information. The platform provides real-time analytics with application performance monitoring, security event correlation, and business intelligence capabilities. Integration with SIEM platforms like Splunk and Datadog ensures actionable intelligence connects to operational response capabilities.

Implementation of NIST's three API gateway patterns

F5's architecture uniquely supports all three API gateway patterns outlined in NIST SP 800-228.

Centralized gateway pattern

The F5 Distributed Cloud ADN provides a global application delivery network with centralized policy management through a unified SaaS console. This approach ensures consistent security policy enforcement across all endpoints while leveraging F5's global network infrastructure for optimal performance and threat intelligence sharing.

Hybrid gateway pattern

F5's distributed data plane with centralized control represents the optimal balance between centralized management and distributed performance. F5 Distributed Cloud Customer Edge nodes deployed at customer sites provide local API processing with global policy synchronization. This enables organizations to maintain data sovereignty while benefiting from centralized security management.

Decentralized gateway pattern

The NGINX Plus deployment model enables lightweight API gateways positioned close to applications, perfect for microservices architectures; a sketch of such a gateway follows.
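As an illustration of this decentralized pattern, here is a hedged sketch of a lightweight NGINX Plus per-service gateway combining JWT validation with the leaky-bucket rate limiting (limit_req) referenced above. Service names, file paths, and rates are assumptions, not a prescribed F5 configuration:

limit_req_zone $jwt_claim_sub zone=per_user:10m rate=10r/s;  # leaky bucket keyed on the JWT sub claim

server {
    listen 8443 ssl;
    server_name inventory.svc.example.com;  # hypothetical microservice host
    ssl_certificate /etc/ssl/nginx/default.crt;
    ssl_certificate_key /etc/ssl/nginx/default.key;

    location /api/ {
        auth_jwt "inventory";                        # NGINX Plus JWT validation
        auth_jwt_key_file /etc/nginx/idp_jwks.json;  # assumed local JWKS file

        limit_req zone=per_user burst=20;  # queue short bursts, smooth the rest
        proxy_pass http://127.0.0.1:8080;  # the co-located service instance
    }
}

Because the zone key is a JWT claim rather than a client IP, each authenticated user gets an independent bucket, which matches the per-user granularity described above.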
The NGINX Ingress Controller provides Kubernetes-native API management with per-service gateway deployment in service mesh environments. This ensures policy enforcement occurs as close to individual service instances as possible. In addition, BIG-IP can be deployed to provide API security with many of the same mitigations listed above. This can be beneficial, as most modern enterprises already have F5 BIG-IPs in their environments.

Advanced zero trust and identity-based segmentation

F5's zero trust architecture implements NIST's identity-centric security principles through cryptography. TLS is a cornerstone of F5 technologies: our platforms are purpose-built for cryptography, including TLS 1.3 and post-quantum cryptography, and mTLS can be used to authenticate both sides of the TLS handshake. F5's strong authentication and authorization features fit naturally into an API security zero trust design. The continuous verification model ensures no implicit trust based on network location, while least-privilege enforcement provides granular access control based on identity and attributes. F5's integration with enterprise identity providers like Microsoft Entra ID and Okta enables seamless implementation of zero trust principles across existing infrastructure.

Comprehensive pre-runtime and runtime protection

F5's pre-runtime protection includes integration with CI/CD pipelines through the recent Wib Security acquisition, enabling vulnerability detection during development. The platform provides automated security reconnaissance through Heyhack's capabilities and API scanning before production deployment. For runtime protection, F5's behavioral analytics engine establishes baseline API behavior and detects anomalies in real time. Threat intelligence integration protects against coordinated attack campaigns, and API endpoint markup automatically identifies and tokenizes dynamic URL components for enhanced protection.

Implementation recommendations

Organizations implementing F5 solutions for NIST SP 800-228 compliance should consider a phased approach: start with API discovery and inventory management, follow with authentication and authorization controls, and culminate in comprehensive monitoring and analytics. For a purely SaaS solution, Distributed Cloud presents a mature API security offering with cutting-edge capabilities. For enterprises requiring on-premises deployment, BIG-IP Advanced WAF and Access Policy Manager provide the most robust capabilities with enterprise-grade performance and extensive customization options. A hybrid deployment model of SaaS and on-premises often provides the optimal balance of cost, performance, and security for large organizations with complex infrastructure requirements.

Conclusion

F5's API security portfolio represents a mature, comprehensive solution that directly addresses the full spectrum of NIST SP 800-228 requirements. F5's strategic acquisitions, innovative AI integration, and proven enterprise scalability position it as a leading choice for organizations seeking to implement comprehensive API security aligned with emerging federal guidelines. With continued investment in cloud-native capabilities and AI-powered threat detection, F5 is well positioned to maintain its leadership as API security requirements continue evolving.

Zero Trust Application Access for Federal Agencies
Introduction

Zero Trust Network Access (ZTNA) and Zero Trust Application Access (ZTAA) represent two distinct architectural approaches to implementing zero trust application access. ZTAA is emerging as the superior choice for enterprises seeking high-performance, application-centric protection. While both operate under the "never trust, always verify" principle, ZTAA can deliver better performance, lower costs, and greater granular control at the application layer, where business-critical assets reside.

As a leader in application access, F5 provides strong authentication and authorization through its mature BIG-IP Access Policy Manager platform. Access Policy Manager, or APM, helps organizations pursue zero trust by following many of the principles documented by organizations like the DoD, CISA, and NIST. Capabilities like strong encryption, user interrogation, conditional and contextual access, device posture, risk scoring, and API integration with third-party security vendors all contribute to a modern zero trust access solution. It can be said that F5 and APM were original zero trust access solutions long before Forrester coined the term "zero trust" back in 2010.

Understanding the Architectural Divide

ZTNA operates as a network-centric model, creating secure tunnels from users to applications through centralized trust brokers and gateways. This approach can necessitate substantial modifications to the network infrastructure, client software deployment, and, in some cases, re-routing all traffic through tunnel concentration points. ZTNA is well established and has mature vendor ecosystems. However, it can cause performance problems, increase latency, and require significant changes to the network architecture.

Zero Trust Application Access differs by focusing on individual applications. It protects them directly using reverse proxies already in place in the business environments where the applications live, or at cloud gateways for cloud-based workloads. This architecture lets users connect directly to applications without tunneling, which adds no extra overhead, preserves existing network investments, and provides control at the application layer. ZTAA operates agentless in many scenarios and integrates seamlessly with cloud-native, containerized, and microservices architectures.

[Image: F5 Zero Trust Direct Application Access]

The technical differences create distinct performance profiles. ZTNA's tunnel concentration can create bottlenecks for high-volume applications and add latency from traffic backhauling, while ZTAA eliminates these performance issues through direct application access and a distributed proxy architecture. Organizations with large application portfolios, cloud-native environments, or performance-sensitive applications find that ZTAA delivers superior user experience and operational efficiency. It is worth noting that ZTNA solutions are, at their core, just a proxy, and use encryption such as TLS or IPsec for transport.

ZTAA or ZTNA?

Application portfolio size serves as a strong decision criterion; cost and complexity are also strong considerations. Organizations with fewer than 20 applications, primarily legacy systems, and uniform user bases typically find ZTNA's network-centric approach adequate. However, enterprises with 20 or more applications, cloud-native architectures, and diverse user requirements achieve better outcomes with ZTAA's application-specific controls.
Performance requirements strongly favor ZTAA for high-volume, real-time, or latency-sensitive applications. Cost considerations also favor ZTAA adoption: it can often be implemented for a fraction of ZTNA costs (depending on the vendor's approach) while preserving current network infrastructure investments. Organizations prioritizing rapid deployment, application-by-application rollout, or cloud-first strategies find ZTAA's minimal infrastructure impact and flexible deployment models advantageous.

Infrastructure strategy alignment matters significantly. ZTNA suits large network overhauls and unified SASE plans; ZTAA suits application-first approaches, DevOps cultures, and cloud-native transformations. The regulatory environment also influences decisions: some compliance frameworks require network-level controls that favor ZTNA, while others benefit from ZTAA's granular application-level security audit trails.

F5's ZTAA Leadership Position for Federal Agencies

F5 has a strong security position in both the federal and commercial landscapes: nearly all of the Fortune 50 trust F5 to protect their most mission-critical applications. In addition, federal organizations like the DoD and civilian agencies trust F5 to protect our nation's most critical infrastructure.

The federal sector was an early adopter of zero trust principles, and NIST and CISA were instrumental in designing zero trust reference architectures. The NIST SP 800-207 document was a landmark, describing how organizations can approach the implementation of a zero trust architecture in their environments. The DoD Zero Trust Strategy builds on this architecture and gets specific by calling out controls under each zero trust pillar, outlining 152 targets and requirements for achieving a mature zero trust implementation. F5 today meets or partially meets 57 of those targets. In addition, the NCCoE/NIST recently published work describing a completely independent, tested solution utilizing F5 for Zero Trust Application Access.

[Image: CISA 5 Pillar Maturity Model - Optimal Level]

F5 Key Capabilities for Zero Trust Application Access

F5 BIG-IP APM Identity Aware Proxy (ZTAA) uses per-request access control that checks each application access attempt individually, moving from session-based authentication to transaction-level verification. The platform provides context-aware authentication, evaluating user identity, device posture, location, and application sensitivity for each request. Continuous device posture checking maintains real-time, ongoing assessments throughout user sessions, with adaptive multi-factor authentication and risk-based step-up authentication.

F5's Privileged User Access (PUA) solution complements ZTAA with DoD-approved capabilities for both privileged and unprivileged user authentication to government systems. The agent-free deployment adds strong authentication, including CAC/PKI and MFA, to legacy systems that lack native support. It also manages temporary passwords and maintains extensive audit trails to keep the system compliant and secure. The solution is truly zero trust: neither the end user nor the endpoint knows the ephemeral password used during the session. Passwords are never stored on disk and are destroyed when the session terminates, creating a strong access solution.

Full-proxy architecture brings visibility into your network data plane.
Protocols like TLS 1.3 and post-quantum cryptography strengthen your network security posture, but they also bring potential blind spots. The TLS 1.3 key structure is ephemeral by design. This protocol feature is excellent for application security, but it creates potential blind spots for threat hunters. Traditionally, packet capture inspection happens out of band, potentially at a future date. With TLS 1.3, out-of-band packet inspection becomes increasingly difficult: because TLS 1.3 provides perfect forward secrecy by default, the symmetric key used during a session is ephemeral, so you would need every ephemeral key generated during a session to decrypt out of band. This creates challenges for the SOC and your threat hunters. F5 can help with its SSL Orchestrator solution. By orchestrating decrypted traffic to your security inspection stack and re-encrypting it to your applications, you can utilize all the strong security features of TLS 1.3 and PQC while still providing complete visibility into your data-plane traffic.

Additional Distinctions

F5's full-proxy architecture enables comprehensive traffic inspection and control that competitors cannot match. F5 provides a unified platform integrating ZTAA, application delivery, and enterprise-grade security capabilities. The platform offers fast TLS decryption at large scale without degrading performance, and it supports both legacy applications and modern web services. F5 adds advanced bot detection, fraud prevention, and API security capabilities that pure-play ZTNA vendors lack.

F5's extensive identity provider partnerships include deep Microsoft Entra ID (Azure AD) integration with Conditional Access policies, native Okta SAML/OIDC federation, and comprehensive custom LDAP/Active Directory support. Protocol support spans SAML, OAuth, OIDC, RADIUS, LDAP, and Active Directory, with flexible deployment across on-premises, cloud, hybrid, and managed service models.

[Image: Identity Aware Proxy - Key Capabilities]

APM's Identity Aware Proxy is F5's Zero Trust Application Access solution. We throw around a lot of acronyms in the IT industry, so I just wanted to get that out of the way and make it clear. As I mentioned earlier in this post, F5 currently meets or partially meets 57 of the 152 targets listed in the DoD Zero Trust Strategy, and APM's IAP solution helps meet many of those targets. Let's look at some of these features in the Access Guided Configuration, found in the APM (Access Policy Manager) GUI. If you would like to see a full walk-through sample config, check out this page for a great write-up and lab.

Authentication and Authorization

Authentication and authorization are at the forefront of any zero trust solution. APM provides robust authentication and authorization integration out of the box, with deep Active Directory integration and support for many identity SaaS providers, such as Okta, Ping, SailPoint, and Microsoft Entra ID. In the image above, MFA is a capability built into the GUI, which makes it very easy to implement a two-factor solution within your ZTAA deployment. MFA should be a component of every zero trust solution, and F5 makes it easy to integrate with your favorite identity provider.

Conditional and Contextual Access

Another key component of any ZTAA solution is conditional and contextual access. The new perimeter in a zero trust world doesn't really exist; we should prioritize protecting the data and application rather than focusing on network perimeter security.
This is not completely true (we will keep using network firewalls), but the main idea of zero trust is about data and strong identity, not gateways into our networks. It follows that we must be able to interrogate both the user and the device they are accessing from. This involves checking a device's posture, such as whether its firewall is active, or determining its location and the time of day of access. Users should be required to provide strong identity, including MFA and ABAC controls. The image below shows the contextual configuration options for Identity Aware Proxy; this capability makes it easy to configure complex if-then logic flows.

Another strong capability, sometimes overlooked, is APM's ability to query third-party systems for additional context. The HTTP Connector, shown below, allows the administrator to configure a third-party risk score provider or additional telemetry for access decisions. This is all done via API calls, making interoperability with other ecosystem vendors seamless.

Conclusion

ZTAA is the shift from zero trust architecture to application-focused security. It offers better performance, strong identity, lower costs, and more flexibility than traditional ZTNA approaches. F5 leads this transformation through its authentication and authorization technology platform, comprehensive application security capabilities, and proven enterprise deployment success across federal and civilian agencies.

Organizations evaluating zero trust solutions should prioritize ZTAA for their application portfolios, cloud-native environments, and performance-critical deployments. F5's unified platform approach, technical differentiators, and market-leading capabilities make it the clear choice for enterprises seeking comprehensive zero trust application access solutions that scale with business growth and digital transformation initiatives.

Securing and Scaling Hybrid Apps with F5/NGINX (Part 3)
In part 2 of our series, I demonstrated how to configure Zero Trust (ZT) use cases centering around authentication with NGINX Plus in hybrid environments. We deployed NGINX Plus as the external LB to route and authenticate users connecting to my Kubernetes applications. In this article, we explore other areas of the ZT spectrum configurable on the external LB service, including:

Authorization and Access
Encryption
mTLS
Monitoring/Auditing

ZT Use case #1: Authorization

Many people think that authentication and authorization can be used interchangeably; however, they mean different things. Authentication is the process of verifying user identities based on the credentials presented. Even though authenticated users are verified by the system, they do not necessarily have the authority to access protected applications. That is where authorization comes into play: verifying the authority of an identity before granting access to an application.

Authorization in the context of OIDC authentication involves retrieving claims from user ID tokens and setting conditions to validate whether the user is authorized to enter the system. An authenticated user is granted an ID token from the IdP with specific user information through JWT claims. The configuration of these claims is typically set from the IdP.

Revisiting the OIDC auth use case configured in the previous section, we can retrieve the ID tokens of authenticated users from the NGINX key-value store:

$ curl -i http://localhost:8010/api/9/http/keyvals/oidc_access_tokens

Then we can view the decoded value of the ID token using jwt.io. Below is an example of decoded payload data from an ID token:

{
  "exp": 1716219261,
  "iat": 1716219201,
  "admin": true,
  "name": "Micash",
  "zone_info": "America/Los_Angeles",
  "jti": "9f8ff4bd-4857-4e12-9634-e5876f786f98",
  "iss": "http://idp.f5lab.com:8080/auth/realms/master",
  "aud": "account",
  "typ": "Bearer",
  "azp": "appworld2024",
  "nonce": "gMNK3tu06j6tp5-jGa3aRhkj4F0P-Z3e04UfcFeqbes"
}

NGINX Plus has access to these claims as embedded variables, accessed by prefixing $jwt_claim_ to the desired field (for example, $jwt_claim_admin for the admin claim). We can easily set conditions on these claims and block unauthorized users before they even reach the back-end applications.

Going back to our frontend.conf file from the previous part of our series, we set the $jwt_status variable to 0 or 1 based on the value of the admin JWT claim. We then use the auth_jwt_require directive to validate the ID token; ID tokens with the admin claim set to false will be rejected.

map $jwt_claim_admin $jwt_status {
    "true"  1;
    default 0;
}

server {
    include conf.d/openid_connect.server_conf; # Authorization code flow and Relying Party processing
    error_log /var/log/nginx/error.log debug;  # Reduce severity level as required

    listen [::]:443 ssl ipv6only=on;
    listen 443 ssl;
    server_name example.work.gd;

    ssl_certificate /etc/ssl/nginx/default.crt; # self-signed for example only
    ssl_certificate_key /etc/ssl/nginx/default.key;

    location / {
        # This site is protected with OpenID Connect
        auth_jwt "" token=$session_jwt;
        error_page 401 = @do_oidc_flow;

        auth_jwt_key_request /_jwks_uri; # Enable when using URL
        auth_jwt_require $jwt_status;

        proxy_pass https://cluster1-https; # The backend site/app
    }
}

Note: Authorization with NGINX Plus is not restricted to JWT tokens. You can also set conditions on a variety of attributes (a short sketch follows the list), such as:

Session cookies
HTTP headers
Source/Destination IP addresses
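As a hedged sketch of those non-JWT conditions (the location, cookie name, and address range are illustrative assumptions, not part of the part 2 configuration), source-IP and cookie checks can be expressed with geo and map blocks alongside the claim-based rule above:

geo $trusted_network {
    default     0;
    10.0.0.0/8  1;  # assumed internal range
}

map $cookie_app_session $has_session {
    default  0;
    "~.+"    1;  # any non-empty session cookie
}

server {
    listen 443 ssl;
    server_name example.work.gd;
    ssl_certificate /etc/ssl/nginx/default.crt;
    ssl_certificate_key /etc/ssl/nginx/default.key;

    location /admin/ {
        if ($trusted_network = 0) { return 403; }  # source IP condition
        if ($has_session = 0)     { return 401; }  # session cookie condition
        proxy_pass https://cluster1-https;
    }
}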
ZT use case #2: Mutual TLS Authentication (mTLS)

When it comes to ZT, mTLS is one of the mainstream use cases falling under the zero trust umbrella. For example, enterprises are using service mesh technologies to stay compliant with ZT standards, because service meshes aim to secure service-to-service communication using mTLS. In many ways, mTLS is similar to the OIDC use case we implemented in the previous section; only here, we are leveraging digital certificates to encrypt and authenticate traffic. The underlying framework is PKI (Public Key Infrastructure).

To explain this framework in simple terms, consider the driver's license you carry in your wallet. Your driver's license can be used to validate your identity, the same way digital certificates can be used to validate the identity of applications. Similarly, only the state can issue valid driver's licenses, the same way only Certificate Authorities (CAs) can issue valid certificates to applications. Because only the CA may issue valid certificates, every CA must hold a secure private key to sign and issue them.

Configuring mTLS with NGINX can be broken down into two parts:

Ingress mTLS: securing SSL client traffic and validating client certificates against a trusted CA.
Egress mTLS: securing SSL upstream traffic and offloading authentication of TLS material to a trusted HTTPS back-end server.

Ingress mTLS

You can configure ingress mTLS on the NLK deployment by referencing the trusted certificate authority with the ssl_client_certificate directive in the server context and requiring verification with ssl_verify_client. This configures NGINX to validate client certificates against the referenced CA.

Note: If you do not have a CA, you can create one using OpenSSL or the Cloudflare PKI and TLS toolkits.

server {
    listen 443 ssl;
    status_zone https://cafe.example.com;
    server_name cafe.example.com;

    ssl_certificate /etc/ssl/nginx/default.crt;
    ssl_certificate_key /etc/ssl/nginx/default.key;

    ssl_client_certificate /etc/ssl/ca.crt;
    ssl_verify_client on;  # required for NGINX to actually demand and verify client certificates
}

Egress mTLS

Egress mTLS is the counterpart to ingress mTLS, where NGINX verifies certificates of upstream applications rather than certificates originating from clients. This feature can be enabled by adding the proxy_ssl_trusted_certificate directive to the server context, together with proxy_ssl_verify. You can reference the same trusted CA we used for ingress mTLS or a different one. In addition to verifying server certificates, NGINX as a reverse proxy can present its own cert/key pair and offload authentication of TLS material to HTTPS upstream applications. This is done by adding the proxy_ssl_certificate and proxy_ssl_certificate_key directives in the server context.

server {
    listen 443 ssl;
    status_zone https://cafe.example.com;
    server_name cafe.example.com;

    ssl_certificate /etc/ssl/nginx/default.crt;
    ssl_certificate_key /etc/ssl/nginx/default.key;

    # Ingress mTLS
    ssl_client_certificate /etc/ssl/ca.crt;
    ssl_verify_client on;

    # Egress mTLS
    proxy_ssl_certificate /etc/nginx/secrets/default-egress.crt;
    proxy_ssl_certificate_key /etc/nginx/secrets/default-egress.key;
    proxy_ssl_trusted_certificate /etc/nginx/secrets/default-egress-ca.crt;
    proxy_ssl_verify on;  # required for NGINX to verify the upstream certificate
}
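To sanity-check ingress mTLS, you can issue a client certificate from the same CA and test with curl. The file names below are assumptions, and -k is used only because the server certificate in this example is self-signed:

# Issue a client certificate signed by the trusted CA
$ openssl req -new -newkey rsa:2048 -nodes -keyout client.key -out client.csr -subj "/CN=demo-client"
$ openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out client.crt -days 365

# Without a client certificate the request should be rejected...
$ curl -k https://cafe.example.com/

# ...and with one it should succeed
$ curl -k --cert client.crt --key client.key https://cafe.example.com/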
ZT use case #3: Security Assertion Markup Language (SAML)

SAML is an alternative SSO solution to OIDC. Many organizations choose between SAML and OIDC depending on their requirements and the IdPs they currently run in production. SAML requires a SP (Service Provider) to exchange XML messages via HTTP POST binding with a SAML IdP. Once exchanges between the SP and IdP succeed, the user has session access to the protected back-end applications with one set of user credentials.

In this section, we will configure NGINX Plus as the SP and enable SAML with the IdP. This is similar to how we configured NGINX Plus as the relying party in an OIDC authorization code flow (see ZT use case #1).

Setting up the IdP

The one prerequisite is setting up your IdP. In our example, we will set up Microsoft Entra ID on Azure; you can use the SAML IdP of your choosing. Once the SAML application is created in your IdP, you can access the SSO fields necessary to link your SP (NGINX Plus) to your IdP (Microsoft Entra ID). You will need to edit the basic SAML configuration by clicking on the pencil icon next to Edit in Basic SAML Configuration, as seen in the figure above.

Add the following values and click Save:

Identifier (Entity ID): https://fourth.run.place
Reply URL (Assertion Consumer Service URL): https://fourth.run.place/saml/acs
Sign on URL: https://fourth.run.place
Logout URL (Optional): https://fourth.run.place/saml/sls

Finally, download the Certificate (Raw) from Microsoft Entra ID and save it to your NGINX Plus instance. This certificate is used to verify signed SAML assertions received from the IdP. Once the certificate is saved on the NGINX Plus instance, extract the public key from the downloaded certificate and convert it to SPKI format. We will use this certificate later when we configure NGINX Plus in the next section.

$ openssl x509 -in demo-nginx.der -outform DER -out demo-nginx.der
$ openssl x509 -inform DER -in demo-nginx.der -pubkey -noout > demo-nginx.spki

Configuring NGINX Plus as the SAML Service Provider

After the IdP is set up, we can configure NGINX Plus as the SP to exchange and validate XML messages with the IdP. Once logged into the NGINX Plus instance, clone the nginx-saml GitHub repo:

$ git clone https://github.com/nginxinc/nginx-saml.git && cd nginx-saml

Copy the config files into the /etc/nginx/conf.d directory:

$ cp frontend.conf saml_sp.js saml_sp.server_conf saml_sp_configuration.conf /etc/nginx/conf.d/

Notice that by default, frontend.conf listens on port 8010 with cleartext HTTP. You can merge kube_lb.conf into frontend.conf to enable TLS termination, and update the upstream context with the application endpoints you wish to protect with SAML.

Finally, edit the saml_sp_configuration.conf file and update the variables in the map context based on the parameters of your SP and IdP:

$saml_sp_entity_id: https://fourth.run.place
$saml_sp_acs_url: https://fourth.run.place/saml/acs
$saml_sp_sign_authn: false
$saml_sp_want_signed_response: false
$saml_sp_want_signed_assertion: true
$saml_sp_want_encrypted_assertion: false
$saml_idp_entity_id: unique identifier that identifies the IdP to the SP; retrieved from your IdP
$saml_idp_sso_url: the login URL, also retrieved from the IdP
$saml_idp_verification_certificate: references the certificate downloaded in the previous section, used to verify signed assertions received from the IdP. Use the full path (/etc/nginx/conf.d/demo-nginx.spki)
$saml_sp_slo_url: https://fourth.run.place/saml/sls
$saml_idp_slo_url: the logout URL, retrieved from the IdP
$saml_sp_want_signed_slo: true

The remaining variables defined in saml_sp_configuration.conf can be left unchanged unless there is a specific requirement for enabling them. Once the variables are set appropriately, reload NGINX Plus:

$ nginx -s reload

Testing

Now we will verify the SAML flow. Open your browser and enter https://fourth.run.place in the address bar. This should redirect you to the IdP login page. Once you log in with your credentials, you should be granted access to the protected application.

ZT use case #4: Monitoring/Auditing

NGINX logs and metrics can be exported to a variety of third-party providers, including Splunk, Prometheus/Grafana, cloud providers (AWS CloudWatch and Azure Monitor Logs), Datadog, the ELK stack, and more. You can also monitor NGINX metrics and logs natively with NGINX Instance Manager or NGINX SaaS. The NGINX Plus API provides a lot of flexibility by exporting metrics to any third-party tool that accepts JSON. For example, you can export NGINX Plus API metrics to our native real-time dashboard from part 1.

Whichever tool you choose, monitoring and auditing the data generated by your IT systems is key to understanding and optimizing your applications.

Conclusion

Cloud providers offer a convenient way to expose Kubernetes Services to the internet: simply create a Kubernetes Service of type LoadBalancer, and external users connect to your services via a public entry point. However, cloud load balancers do nothing more than basic TCP/HTTP load balancing. You can configure NGINX Plus with many zero trust capabilities as you scale out your environment to multiple clusters in different regions, which is what we will cover in the next part of our series.

Securing and Scaling Hybrid apps with F5 NGINX (Part 2)
If you attended a cybersecurity trade show lately, you may have noticed the term "Zero Trust (ZT)" advertised on almost every booth. It may seem like most security companies offer the same value proposition: "securing apps with ZT". Its commonality stems from the fact that ZT is a broad term that can span endless use cases. ZT is not a feature or capability, but rather a philosophy embraced by IT security leaders based on the idea that all traffic entering and exiting a system is untrusted and must be scrutinized before passing through. Organizations are shifting to a zero trust mindset due to the increased complexity of cyber-attacks. Perimeter-based firewalls are no longer sufficient for securing digital resources.

In Part 1 of our series, we configured NGINX Plus as an external load balancer to route and terminate TLS traffic to cluster nodes. In this part of the series, we leverage the same NGINX Plus deployment to enable ZT use cases that will improve the security posture of your hybrid applications.

NOTE: Part 1 of the series is a prerequisite for enabling the ZT use cases in our examples. Please ensure that part 1 is completed before starting with part 2.

ZT Use case #1: OIDC Authentication

OIDC (OpenID Connect) is an authentication layer on top of the OAuth 2.0 framework. Many organizations choose OIDC to authenticate digital identities and enable SSO (Single Sign-On) for consumer applications. With single sign-on technologies, users gain access to multiple applications with one set of user credentials by authenticating their identities through an IdP (Identity Provider). We can configure NGINX Plus to operate as an OIDC relying party to exchange and validate ID tokens with the IdP, in addition to the basic reverse-proxy load balancing configured in part 1. We will extend the architecture from part 1 with an IdP and configure NGINX Plus as the identity-aware proxy.

Prerequisites for NGINX Plus

Before configuring NGINX Plus as the OIDC identity-aware proxy:

1. Install the NJS module.

$ sudo apt-get install nginx-plus-module-njs

2. Load the NJS module into the NGINX configuration by adding the following line at the top of your nginx.conf file:

load_module modules/ngx_http_js_module.so;

3. Clone the OIDC GitHub repository in your directory of choice:

$ cd /home/ubuntu && git clone --branch R28 https://github.com/nginxinc/nginx-openid-connect.git

Setting up the IdP

The IdP manages and stores digital identities, mitigating attackers who impersonate users to steal sensitive information. There are many IdP vendors to choose from: Okta, Ping Identity, Azure AD. We chose Okta as the IdP in our example moving forward. If you do not have access to an IdP, you can quickly get started with the Okta Command Line Interface (CLI) and run the okta register command to sign up for a new account. Once account creation is successful, we will use the Okta CLI to preconfigure Okta as the IdP, creating what Okta calls an app integration. Other IdPs have different nomenclature for an application integration; for example, Azure AD calls them app registrations. If you are not using Okta, you can follow the documentation of the IdP you are using and skip to the next section (Configuring NGINX as the OpenID Connect relying party).

1. Run the okta login command to authenticate the Okta CLI with your Okta developer account.
Enter your Okta domain and API token at the prompts:

$ okta login
Okta Org URL: https://your-okta-domain
Okta API token: your-api-token

2. Create the app integration:

$ okta apps create --app-name=mywebapp --redirect-uri=https://<nginx-plus-hostname>:443/_codexch

where

--app-name defines the application name (here, mywebapp)
--redirect-uri defines the URI to which sign-ins are redirected on NGINX Plus. <nginx-plus-hostname> should resolve to the NGINX Plus external IP configured in part 1. We use port 443 since TLS termination is configured on NGINX Plus from part 1. Recall that we used self-signed certificates and keys to configure TLS; in a production environment, we recommend using certs/keys issued by a trusted certificate authority such as Let's Encrypt.

Once the command from step #2 completes, the client ID and secret generated from the app integration can be found in ${HOME}/.okta.env

Configuring NGINX as the OpenID Connect relying party

Now that we have finished setting up our IdP, we can start configuring NGINX Plus as the OpenID Connect relying party. Once logged into the NGINX Plus instance, run the configuration script from your home directory:

$ ./nginx-openid-connect/configure.sh -h <nginx-plus-hostname> -k request -i <YOURCLIENTID> -s <YOURCLIENTSECRET> -x https://dev-xxxxxxx.okta.com/.well-known/openid-configuration

where

-h defines the hostname of NGINX Plus
-k defines how NGINX will retrieve JWK files to validate JWT signatures; the JWK file is retrieved from a subrequest to the IdP
-i defines the client ID generated from the IdP
-s defines the client secret generated from the IdP
-x defines the URL of the OpenID configuration endpoint. Using Okta as the example, the URL starts with your Okta organization domain, followed by the path URI /.well-known/openid-configuration

The configure script will generate OIDC config files for NGINX Plus. We will copy the generated config files into the /etc/nginx/conf.d directory from part 1:

$ sudo cp frontend.conf openid_connect.js openid_connect.server_conf openid_connect_configuration.conf /etc/nginx/conf.d/

You will notice that by default, frontend.conf listens on port 8010 with cleartext HTTP. We need to merge kube_lb.conf into frontend.conf to enable both use cases from parts 1 and 2. The resulting frontend.conf should look something like this: https://gist.github.com/nginx-gists/af067326734063da6a4ff42146873262

Finally, edit the openid_connect_configuration.conf file and change the client secret to the one generated by your Okta IdP. Reload NGINX Plus for the new config to take effect:

$ nginx -s reload

Testing the Environment

Now we are ready to test our environment in action. To summarize, we set up an IdP and configured NGINX Plus as the identity-aware proxy to validate user ID tokens before they enter the Kubernetes cluster. To test the environment, open a browser and enter the hostname of NGINX Plus into the address field. You should be redirected to your IdP login page.

Note: The hostname should resolve to the public IP of the NGINX Plus machine.

Once you are prompted with the IdP login page from your browser, you can access the Kubernetes pods after the user credentials are validated. User credentials should be defined in the IdP. Once you are logged into your application, the ID token of the authenticated user is stored in the NGINX Plus key-value store.
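To confirm the token is actually in the key-value store, you can query the NGINX Plus API. This is a hedged example: the API must be enabled, and the port, API version, and keyval zone name below are assumptions that depend on your configuration:

$ curl -s http://localhost:8010/api/9/http/keyvals/oidc_id_tokens

The returned value can be pasted into jwt.io to inspect its claims, the same technique that part 3 of this series builds on for authorization decisions.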
Enabling PKCE with OIDC

In the previous section, we learned how to configure NGINX Plus as the OIDC relying party to authenticate user identities attempting connections to protected Kubernetes applications. However, there are cases where attackers can intercept code exchange requests issued from the IdP, hijack your ID tokens, and gain access to your sensitive applications.

PKCE is an extension of the OIDC authorization code flow designed to protect against authorization code interception and theft. PKCE provides an extra layer of security in which the client must present a code verifier, in addition to the authorization code, in exchange for the ID token from the IdP.

With PKCE enabled, NGINX Plus sends a hashed code challenge (derived from a randomly generated code verifier) as a query parameter when redirecting users to the IdP login page. The IdP uses this code challenge as extra validation when the authorization code is later exchanged for the ID token.

PKCE needs to be enabled on both NGINX Plus and the IdP. To enable PKCE verification on NGINX Plus, edit the openid_connect_configuration.conf file, set $oidc_pkce_enable to 1, and reload NGINX Plus. Depending on the IdP you are using, a checkbox should be available to enable PKCE.

Testing the Environment

To test that PKCE is working, open a browser and enter the NGINX Plus hostname once again. You should be redirected to the login page, only this time you will notice the URL has changed slightly, with additional query parameters:

code_challenge_method: method used to hash the plain code verifier (most likely SHA256)
code_challenge: the hashed value of the plain code verifier

NGINX Plus provides the plain code verifier along with the authorization code in exchange for the ID token. NGINX Plus then validates the ID token and stores it in cache.

Extending OIDC with 3rd-party Systems

Customers may need to integrate their OIDC workflow with proprietary AuthN/AuthZ systems already in production. For example, additional metadata pertaining to a user may need to be collected from an external Redis cache or JFrog Artifactory. We can fill this gap by extending our diagram from the previous section: in addition to token validation with NGINX Plus, we pull response data from JFrog Artifactory and pass it to the back-end applications once users are authenticated.

Note: I am using JFrog Artifactory as an example 3rd-party endpoint here; I can technically use any endpoint I want.

Testing the Environment

To test our environment in action, I will make a few updates to my NGINX OIDC configuration. You can pull our updated GitHub repository and use it as a reference for updating your NGINX configuration.

Update #1: Adding the njs source code

The first update extends NGINX Plus with njs to retrieve response data from our 3rd-party system. Add the KvOperations.js source file in your /etc/nginx/njs directory.

Update #2: frontend.conf

I am adding lines 37-39 to frontend.conf so that NGINX Plus initiates the sub-request to our 3rd-party system after users are authenticated with OIDC. We set the URI of this sub-request to /kvtest. More on this in the next update.

Update #3: openid_connect.server_conf

We are adding lines 35-48 to openid_connect.server_conf, consisting of two internal redirects in NGINX:

/kvtest: internal redirect for sub-requests with URI /kvtest, handled by functions in KvOperations.js
/auth: internal redirect for sub-requests with URI /auth, proxied to the 3rd-party endpoint. You can replace the <artifactory-url> in line 47 with your own endpoint.
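For orientation, this is the general shape of the /auth internal redirect described in Update #3. It is a simplified sketch, not the repository's exact contents, and <artifactory-url> stays a placeholder for your own endpoint:

location /auth {
    internal;                  # reachable only via NGINX subrequests, never by clients
    proxy_ssl_server_name on;  # send SNI when the 3rd-party endpoint is HTTPS
    proxy_pass https://<artifactory-url>;
}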
Update #4: openid_connect_configuration.conf

This update is optional and applies when passing dynamic variables to your 3rd-party endpoints. You can update variables on the fly by sending POST requests to the NGINX Plus key-value store:

$ curl -iX POST -d '{"admin": "<input-value>"}' http://localhost:9000/api/7/http/<keyval-zone-name>

We define and instantiate the new key-value store in lines 102-104. Once the updates are complete, I can test the optimized OIDC environment by verifying that the application is on the receiving end of my dynamic input values.

Wrapping it up

I covered a subset of ZT use cases with NGINX in hybrid architectures. The use cases presented in this article center around authentication. In the next part of our series, I will cover more ZT use cases, including:

Alternative authentication methods (SAML)
Encryption
Authorization and Access Control
Monitoring/Auditing

Verify, but Never Trust?
Much is being written lately about so-called "Zero Trust Model" security, which prompts me to ask, "Since when did we security folk trust anyone?" On the NIST site, you'll find a thorough report NIST commissioned from Forrester. A main theme of this report is that the old security axiom of "trust, but verify" is now obsolete. Hardened perimeters, once successfully traversed, leave infrastructures that trust the user and traffic implicitly, to their unending peril.

What does all this mean for those of us tasked with security? Well, it's not a new concept, just a new label. We have known for years that the notion of a perimeter in a data center is evaporating, largely due to the increasingly browser-driven nature of all apps, and threats moving up the stack to the application. The network "perimeter" is largely intact, but with seemingly everything of importance transported via HTTP (and increasingly TLS-encrypted), our infrastructures may as well be open at the network level.

Let's consider the fundamental tenets set forth in the report linked above:

Zero Trust is applicable for every organization/industry.
Zero Trust is technology and vendor agnostic.
Zero Trust is scalable.
Zero Trust protects civil liberties by protecting personal/confidential data.

First, if we're in security, we should be considering how Zero Trust applies to and can improve our organization's security posture. We should evangelize this new way of thinking internally, in an effort to educate all aspects of the organization: networking, platform, application development, and any other team that may have a vested stake. Since Zero Trust is vendor- and technology-agnostic, it's incumbent upon everyone to evaluate current technologies, solutions, and architectures to determine whether current implementations adhere to a Zero Trust model. No one piece of technology or single vendor will bring you to Zero Trust nirvana.

Next, we must consider what is meant by "scalable" in this context. F5 has long been in the business of highly scalable solutions, whether for offloading encryption, web application security, access management, or good old-fashioned load balancing. However, that's only part of what is meant by scalable here. Does our implementation of a Zero Trust model scale across the organization? Does it apply to both internal and external users and applications? Is access to data cumbersome and overwhelmed by security controls? Does it consider all paths to sensitive data?

On that last question, regarding paths to data, we hit upon the most important tenet above: the protection of data. In the end, "data wants to be free," and it is up to the security measures in place to ensure that it still travels freely, but only to those individuals who are properly authorized. This implies that web-based access paths (Internet and intranet apps), along with other non-HTTP paths such as drive mounts or direct database access, must all be considered and properly secured. Protecting data then requires good access management, good input validation, and at-rest data encryption. In order to be scalable, these security measures must be more or less frictionless from a UX perspective. These are high bars, indeed.

The BIG-IP platform is uniquely instrumented to deliver business applications and facilitate a Zero Trust model.
Whether it is providing good input validation to prevent data exfiltration via CSRF or SQL injection with Application Security Manager (ASM), or integrating diverse access management mechanisms via Access Policy Manager (APM) without need of any special clients or portals, BIG-IP has a part to play in your Zero Trust implementation.

Zero Trust is nothing new; we have been working for years to improve our application-layer defenses through better coding, better frameworks, and new web technologies. What Zero Trust does provide is a codified framework to measure our success in developing highly secure and scalable infrastructures.

Has your organization begun considering Zero Trust Model security? What challenges are you seeing, and how are F5 technologies factoring in (or not) along the way to overcoming those challenges? I look forward to your comments below.