BIG-IP Next Edge Firewall CNF for Edge workloads
Introduction

The CNF architecture aligns with cloud-native principles by enabling horizontal scaling, ensuring that applications can expand seamlessly without compromising performance. It preserves the deterministic reliability essential for telecom environments, balancing scalability with the stringent demands of real-time processing. For more background on the value CNFs bring to the environment, see https://community.f5.com/kb/technicalarticles/from-virtual-to-cloud-native-infrastructure-evolution/342364

Telecom service providers make use of CNFs to:
- Enable efficient and secure processing of N6-LAN traffic at the edge to meet the stringent requirements of 5G networks.
- Optimize AI-RAN deployments with dynamic scaling and enhanced security, ensuring that AI workloads are processed efficiently and securely at the edge, improving overall network performance.
- Deploy advanced AI applications at the edge with the confidence of carrier-grade security and traffic management, ensuring real-time processing and analytics for a variety of edge use cases.

CNF Firewall Implementation Overview

Let's start by understanding how the different Custom Resources (CRs) are enabled within a CNF implementation; this is what allows a CNF to achieve more optimized performance, CapEx, and OpEx. Where the traditional approach inserts each service into Kubernetes as its own proxy layer, moving to a consolidated data plane approach saved 60% of the Kubernetes environment's performance overhead.

The F5BigFwPolicy Custom Resource (CR) applies industry-standard firewall rules to the Traffic Management Microkernel (TMM), ensuring that only connections initiated by trusted clients are accepted. When a new F5BigFwPolicy CR configuration is applied, the firewall rules are first sent to the Advanced Firewall Manager (AFM) Pod, where they are compiled into Binary Large Objects (BLOBs) to enhance processing performance. Once the firewall BLOB is compiled, it is sent to the TMM Proxy Pod, which begins inspecting and filtering network packets based on the defined rules.

Enabling AFM within BIG-IP Controller

Let's explore how we can enable and configure the CNF firewall. Below is an overview of the steps needed to set up the environment, up to the installation of the CNF CRs.

[Enabling the AFM]

Enable the AFM CR within the BIG-IP Controller definition:

global:
  afm:
    enabled: true
  pccd:
    enabled: true
f5-afm:
  enabled: true
  cert-orchestrator:
    enabled: true
  afm:
    pccd:
      enabled: true
      image:
        repository: "local.registry.com"

[Configuration]

Example firewall policy settings:

apiVersion: "k8s.f5net.com/v1"
kind: F5BigFwPolicy
metadata:
  name: "cnf-fw-policy"
  namespace: "cnf-gateway"
spec:
  rule:
  - name: allow-10-20-http
    action: "accept"
    logging: true
    servicePolicy: "service-policy1"
    ipProtocol: tcp
    source:
      addresses:
      - "2002::10:20:0:0/96"
      zones:
      - "zone1"
      - "zone2"
    destination:
      ports:
      - "80"
      zones:
      - "zone3"
      - "zone4"
  - name: allow-10-30-ftp
    action: "accept"
    logging: true
    ipProtocol: tcp
    source:
      addresses:
      - "2002::10:30:0:0/96"
      zones:
      - "zone1"
      - "zone2"
    destination:
      ports:
      - "20"
      - "21"
      zones:
      - "zone3"
      - "zone4"
  - name: allow-us-traffic
    action: "accept"
    logging: true
    source:
      geos:
      - "US:California"
    destination:
      geos:
      - "MX:Baja California"
      - "MX:Chihuahua"
  - name: drop-all
    action: "drop"
    logging: true
    ipProtocol: any
    source:
      addresses:
      - "::0/0"
      - "0.0.0.0/0"
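Applying and verifying the policy is a standard kubectl workflow. A minimal sketch, assuming the manifest above is saved as cnf-fw-policy.yaml (the filename and the resource name used with kubectl get are assumptions; check the CRD names shipped with your CNF release):

kubectl apply -f cnf-fw-policy.yaml

# Confirm the CR was accepted (resource name assumed)
kubectl get f5bigfwpolicy cnf-fw-policy -n cnf-gateway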
apiVersion: "k8s.f5net.com/v1" kind: F5BigLogProfile metadata: name: "cnf-log-profile" namespace: "cnf-gateway" spec: name: "cnf-logs" firewall: enabled: true network: publisher: "cnf-hsl-pub" events: aclMatchAccept: true aclMatchDrop: true tcpEvents: true translationFields: true Verifying the CNF firewall settings can be done through the sidecar container kubectl exec -it deploy/f5-tmm -c debug -n cnf-gateway – bash tmctl -d blade fw_rule_stat context_type context_name ------------ ------------------------------------------ virtual cnf-gateway-cnf-fw-policy-SecureContext_vs rule_name micro_rules counter last_hit_time action ------------------------------------ ----------- ------- ------------- ------ allow-10-20-http-firewallpolicyrule 1 2 1638572860 2 allow-10-30-ftp-firewallpolicyrule 1 5 1638573270 2 Conclusion To conclude our article, we showed how CNFs with consolidated data planes help with optimizing CNF deployments. In this article we went through the overview of BIG-IP Next Edge Firewall CNF implementation, sample configuration and monitoring capabilities. More use cases to cover different use cases to be following. Related content F5BigFwPolicy BIG-IP Next Cloud-Native Network Functions (CNFs) CNF Home24Views1like0CommentsLeverage F5 BIG-IP APM and Azure AD Conditional Access Easy button
Integrating F5 BIG-IP APM's Identity Aware Proxy (IAP) with Microsoft Entra ID (previously called Azure AD) Conditional Access enables fine-grained, adaptable, zero trust access to any application, regardless of location and authentication method, with continuous monitoring and verification.
Zero Trust building blocks - Leverage Microsoft Intune endpoint Compliance with F5 BIG-IP APM Access
Use case summary

Let's walk through a real-life scenario. Company A is building its Zero Trust strategy, and of course it makes sense to leverage existing solutions to reach that target. Microsoft Intune introduces a great source of intelligence and compliance enforcement for endpoints; combined with F5 BIG-IP Access Policy Manager (APM) integrated with Microsoft Entra ID (previously called Azure AD), this extends enforcement to the endpoints accessing Company A's resources, whether SaaS or locally hosted.

Below are the flows of some use cases showing how F5 BIG-IP APM and Microsoft Intune pave the way to a Zero Trust strategy. We have an endpoint managed by Microsoft Intune. Intune contains a device compliance policy that determines the conditions under which the machine is considered compliant, and a configuration profile that defines the configuration for specific applications, in our case F5 Access VPN.

We have the following use cases.

1. Web application access
- The user tries to access a web application through F5 BIG-IP APM, which is already integrated with Microsoft Intune and Microsoft Entra ID.
- F5 BIG-IP APM acts as the SAML Service Provider (SP) and directs the user's request to Microsoft Entra ID for authentication and compliance checking.
- If the user authenticates successfully and passes the compliance policy, the user is redirected back to the application with a SAML assertion response; otherwise, access is denied.

A demo was created by our awesome Access guru Matt_Dierick.

2. SSL VPN access to corporate resources
- The user clicks the F5 Access VPN connection pushed to the endpoint via the configuration profile in Microsoft Intune.
- The user selects the proper authentication method (username and password, smart card, or certificate-based authentication).
- Once the user authenticates successfully and passes the compliance check, a temporary certificate is pushed to the machine.
- The temporary certificate is used to authenticate with F5 BIG-IP APM, and the user is then granted access to the SSL VPN connection.

A demo was created for this use case as well by our awesome Access guru Matt_Dierick. As the Microsoft Intune portal has been updated, endpoint management tasks may now be performed through the endpoint.microsoft.com portal instead of portal.azure.com; make sure to follow Microsoft's documentation for any new updates.

Conclusion

As the highlighted use cases show, we can make use of existing solutions and extend their capabilities, with ease of integration, to achieve our organization's Zero Trust strategy. F5 BIG-IP in general allows the organization to decouple the client-side connection from the server side, which simplifies further service integrations that boost the organization's security posture. F5 BIG-IP APM can integrate with different parties and extend their capabilities, whether endpoint compliance, risk factor, or IDaaS, and use those insights to secure application or network access. In addition to corporate secure access, if you have customers accessing applications and need integration with Google or another OpenID Connect (OIDC) provider, you can make use of F5 BIG-IP APM's OIDC integration with that third party for customer access.

Additional resources
- Configuring Access Policy Manager for MDM applications
- BIG-IP Access Policy Manager: Third-Party Integration
- OAuth and OpenID Connect - Made easy with Access Guided Configurations templates

Powering Progressive Deployment in Kubernetes with NGINX and Argo Rollouts
This article demonstrates how you can use NGINX Gateway Fabric combined with Argo Rollouts to perform progressive delivery. The Canary pattern is introduced and employed as we explore and execute rollout scenarios.
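As a quick illustration of the canary pattern the article employs, here is a minimal Argo Rollouts manifest of the kind the controller acts on. This is a generic sketch, not the article's own manifest: the name and image are placeholders, and the trafficRouting wiring to NGINX Gateway Fabric is omitted (without it, Argo Rollouts approximates the canary weight by scaling replicas).

apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: demo-app
spec:
  replicas: 4
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: demo-app
        image: nginx:1.27        # placeholder image; updating it triggers a rollout
  strategy:
    canary:
      steps:
      - setWeight: 25            # shift a quarter of traffic to the new version
      - pause: {}                # hold for manual promotion
      - setWeight: 50
      - pause: {duration: 60s}   # then promote automatically after a minute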
How I did it - "F5 BIG-IP Observability with Dynatrace and F5 Telemetry Streaming"

Welcome back to another edition of "How I Did It." It's been a while since we looked at observability... Oh wait, I just said that. Anyway, in this post I'll walk through how I integrated F5 Telemetry Streaming with Dynatrace. To show the results, I've included sample dashboards that highlight how the ingested telemetry data can be visualized effectively. Let's dive in before I repeat myself again.

Thinking Outside the Box: Rewriting Web Pages with F5 Distributed Cloud (XC)
This article demonstrates how to dynamically rewrite web page content, such as updating links or replacing text, by using native features in F5 Distributed Cloud (XC). It provides a creative workaround that leverages JavaScript injection to modify pages on the fly, avoiding the need for a separate proxy like NGINX or BIG-IP.

F5 NGINX Plus R36 Release Now Available
We're thrilled to announce the availability of F5 NGINX Plus Release 36 (R36). NGINX Plus R36 adds advanced capabilities like an HTTP CONNECT forward proxy, richer OpenID Connect (OIDC) capabilities, expanded ACME options, and new container images packaged with popular modules. In addition, NGINX Plus inherits all the latest capabilities from NGINX Open Source, the only all-in-one software web server, load balancer, reverse proxy, content cache, and API gateway.

Here's a summary of the most important updates in R36:

HTTP CONNECT Forward Proxy
NGINX Plus can now tunnel HTTP traffic via the HTTP CONNECT method, making it easy to centralize egress policies through a trusted NGINX Plus server. Advanced features enable capabilities that limit proxied traffic to specific origin clients, ports, or destination hosts and networks.

Native OIDC Support for PKCE, Front-Channel Logout, and POST Client Authentication
We've made additional enhancements to the popular OpenID Connect module by adding support for PKCE enforcement, the ability to log out of all applications a client was signed in to, and support for authenticating the OIDC client using the client_secret_post method.

ACME Enhancements for Certificate Automation
Certificate automation capabilities are more powerful than ever, as we continue to make improvements to the ACME (Automatic Certificate Management Environment) module. New features include support for the TLS-ALPN-01 challenge method, the ability to select a specific certificate chain, IP-based certificates, ACME profiles, and external account bindings.

TLS Certificate Compression
High-volume or high-latency connections can now benefit from a new capability that optimizes TLS handshakes by compressing certificates and associated CA chains.

Container Images with Popular Modules
Container images now include our most popular modules, making it even easier to deploy NGINX Plus in container environments. Alongside the previously included njs module, images now ship with the ACME, OpenTelemetry Tracing, and Prometheus Exporter modules.

Key features inherited from NGINX Open Source include:
- Support for 0-RTT in QUIC
- Inheritance control for headers and trailers
- Support for OpenSSL 3.5

New Features in Detail

HTTP CONNECT Forward Proxy
NGINX Plus R36 adds native HTTP CONNECT support via a new ngx_http_tunnel_module that enables NGINX Plus to operate as a forward proxy for outbound HTTP/HTTPS traffic. You can restrict clients, ports, and hosts, as well as which networks are reachable, via centralized egress policies. This new capability allows you to monitor outbound connections through one or more NGINX Plus instances instead of relying on separate proxy products or DIY open source projects.

Why it matters
- Enforces consistent egress policies without introducing another proxy to your stack.
- Centralizes proxy rules and threat monitoring for outbound connections.
- Reduces the need for custom modules or sidecar proxies to tunnel outbound TLS.

Who it helps
- Platform and network teams that must route all outbound traffic through a controlled proxy.
- Security teams that want a single place to enforce and audit egress rules.
- Operators consolidating L7 functions (inbound and outbound) on NGINX Plus.

The following is a sample forward proxy configuration that filters ports, destination hosts, and networks. Note the requirement to specify a DNS resolver. For simplicity, we've configured a popular, publicly available DNS.
However, doing so is not recommended for certain production environments, due to access limitations and unpredictable availability.

num_map $request_port $allowed_ports {
    443        1;
    8440-8445  1;
}

geo $upstream_last_addr $allowed_nets {
    52.58.199.0/24    1;
    3.125.197.172/24  1;
}

map $host $allowed_hosts {
    hostnames;
    nginx.org    1;
    *.nginx.org  1;
}

server {
    listen 3128;
    resolver 1.1.1.1; # unsafe
    tunnel_allow_upstream $allowed_nets $allowed_ports $allowed_hosts;
    tunnel_pass;
}

Native OpenID Connect Support for PKCE, Front-Channel Logout, and POST Client Authentication
R36 extends the native OIDC features introduced in R34 with PKCE enforcement, front-channel logout, and support for authenticating clients via the client_secret_post method.

PKCE for the authorization code flow can now be auto-enabled for a configured identity provider when S256 is advertised in "code_challenge_methods_supported" in the federated metadata. Alternatively, it can be explicitly toggled with a simple directive:

pkce on;   # force PKCE even if the OP does not advertise it
pkce off;  # disable PKCE even if the OP advertises S256

A new front-channel logout endpoint lets an identity provider trigger a browser-based sign-out of all applications that a client is signed in to.

oidc_provider okta_app {
    issuer https://dev.okta.com/oauth2/default;
    client_id <client_id>;
    client_secret <client_secret>;
    logout_uri /logout;
    logout_token_hint on;
    frontchannel_logout_uri /front_logout;
    userinfo on;
}

Support for client_secret_post improves compatibility with identity providers that require POST-based client authentication per OpenID Connect Core 1.0.

Why it matters
- Brings NGINX Plus in line with modern identity provider expectations that treat PKCE as a best practice for all authorization code flows.
- Enables reliable logout across multiple applications from a single identity provider trigger.
- Makes NGINX Plus easier to integrate with providers that insist on POST-based client authentication.

Who it helps
- Identity access management and security teams standardizing on OIDC and PKCE.
- Application and API owners who need consistent login and logout behavior across many services.
- Architects integrating with cloud identity providers (Entra ID, Okta, etc.) that require specific OIDC patterns.

ACME Enhancements for Certificate Automation
Building on the ACME support shipped in our previous release, NGINX Plus R36 incorporates new features from the nginx acme project, including:
- TLS-ALPN-01 challenge support
- Selection of a specific certificate chain
- IP-based certificates
- ACME profiles
- External account bindings

These capabilities align NGINX Plus with what is available in NGINX Open Source via the ngx_http_acme_module.

Why it matters
- Lets you keep more of the certificate lifecycle inside NGINX instead of relying on external tooling.
- Makes it easier to comply with CA policies and organizational PKI standards.
- Supports more complex environments such as IP-only endpoints and specific chain requirements.

Who it helps
- SRE and platform teams running large fleets that rely on automated certificate renewal.
- Security and PKI teams that need tighter control over certificate chains, profiles, and CA bindings.
- Operators who want to standardize on a single ACME implementation across NGINX Plus and NGINX Open Source.

TLS Certificate Compression
TLS certificate compression introduced in NGINX Open Source 1.29.1 is now available in NGINX Plus R36. The ssl_certificate_compression directive controls certificate compression during TLS handshakes; it remains off by default and must be explicitly enabled.
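Enabling it is a single directive inside an SSL-enabled server block. A minimal sketch (the server name and certificate paths are placeholders):

server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/nginx/certs/example.crt;
    ssl_certificate_key /etc/nginx/certs/example.key;

    # Off by default; compresses the certificate chain during the TLS handshake
    ssl_certificate_compression on;
}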
Why it matters
- Reduces the size of TLS handshakes when certificate chains are large.
- Can optimize performance in high-connection-volume or high-latency environments.
- Offers incremental performance optimization without changing application behavior.

Who it helps
- Performance-focused operators running services with many short-lived TLS connections.
- Teams supporting mobile or high-latency clients where every byte in the handshake matters.
- Operators who want to experiment with handshake optimizations while retaining control via explicit configuration.

Container Images with Popular Modules Included
Finally, NGINX Plus R36 introduces container images that bundle NGINX Plus with commonly requested first-party modules such as OpenTelemetry (OTel) Tracing, ACME, Prometheus Exporter, and NGINX JavaScript (njs). Options include images with or without F5 NGINX Agent, and images running NGINX in privileged or unprivileged mode. Additionally, a slim NGINX Plus-only image will be made available for teams who prefer to build bespoke custom containers.

Additional Enhancements Available in NGINX Plus R36
- Upstream-specific request headers
- Dynamically assigned proxy_bind pools

Changes Inherited from NGINX Open Source
NGINX Plus R36 is based on the NGINX 1.29.3 mainline release and inherits all functional changes, features, and bug fixes made since NGINX Plus R35 was released (which was based on the 1.29.0 mainline release). For the full list of new changes, features, bug fixes, and workarounds inherited from recent releases, see the NGINX changes.

Changes to Platform Support

Added Platforms
Support for the following platforms has been added:
- Debian 13
- Rocky Linux 10
- SUSE Linux Enterprise Server 16

Removed Platforms
Support for the following platforms has been removed:
- Alpine Linux 3.19 (reached end of support in November 2025)

Deprecated Platforms
Support for the following platforms will be removed in a future release:
- Alpine Linux 3.20

Important Warning
NGINX Plus is built on the latest minor release of each supported operating system platform. In many cases, the latest revisions of these operating systems are adapting their platforms to support OpenSSL 3.5 (e.g., RHEL 9.7 and 10.1). In these situations, NGINX Plus requires that OpenSSL 3.5.0 or later is installed for proper operation.

F5 NGINX in F5's Application Delivery & Security Platform
NGINX One is part of F5's Application Delivery & Security Platform. It helps organizations deliver, improve, and secure new applications and APIs. This platform is a unified solution designed to ensure reliable performance, robust security, and seamless scalability for applications deployed across cloud, hybrid, and edge architectures. NGINX One is the all-in-one, subscription-based package that unifies all of NGINX's capabilities, bringing together the features of NGINX Plus, F5 NGINX App Protect, and NGINX Kubernetes and management solutions into a single, easy-to-consume package. NGINX Plus, a key component of NGINX One, adds features to NGINX Open Source that are designed for enterprise-grade performance, scalability, and security.

Follow this guide for more information on installing and deploying NGINX Plus R36 or NGINX Open Source.

Using n8n To Orchestrate Multiple Agents
I've been heads-down building a series of AI step-by-step labs, and this one might be my favorite so far: a practical, cost-savvy "mixture of experts" architectural pattern you can run with n8n and self-hosted models on Ollama.

The idea is simple. Not every prompt needs a heavyweight reasoning model. In fact, most don't. So we put a small, fast model in front to classify the user's request (coding, reasoning, or something else) and then hand that prompt to the right expert. That way, you keep your spend and latency down, and only bring out the big guns when you really need them.

Architecture at a glance:
- Two hosts: one for your models (Ollama) and one for your n8n app. Keeping these separate helps n8n stay snappy while the model server does the heavy lifting.
- Docker everywhere, with persistent volumes for both Ollama and n8n so nothing gets lost across restarts.
- Optional but recommended: an NVIDIA GPU on the model host, configured with the NVIDIA Container Toolkit to get the most out of inference.

On the model server, we spin up Ollama and pull a small set of targeted models (the pull commands are sketched after the build notes below):
- deepseek-r1:1.5b for the classifier and general chit-chat
- deepseek-r1:7b for the reasoning agent (this is your "brains-on" model)
- codellama:latest for coding tasks (Python, JSON, Node.js, iRules, etc.)
- llama3.2:3b as an alternative generalist

On the app server, we run n8n. Inside n8n, the flow starts with the "On Chat Message" trigger. I like to immediately send a test prompt so there's data available in the node inspector as I build. It makes mapping inputs easier and speeds up debugging.

Next up is the Text Classifier node. The trick here is a tight system prompt and clear categories:
- Categories: Reasoning and Coding
- Options: when there's no clear match, send the prompt to an "Other" branch
- Optional: you can allow multiple matches if you want the same prompt to hit more than one expert. I've tried both approaches, and for certain ambiguous asks, allowing multiple can yield surprisingly strong results.

I attach deepseek-r1:1.5b to the classifier. It's inexpensive and fast, which is exactly what you want for routing. In the System Prompt Template, I tell it:
- If a prompt explicitly asks for coding help, classify it as Coding
- If it explicitly asks for reasoning help, classify it as Reasoning
- Otherwise, pass the original chat input to a Generalist

From there, each classifier output connects to its own AI Agent node:
- Reasoning Agent: deepseek-r1:7b
- Coding Agent: codellama:latest
- Generalist Agent (the "Other" branch): deepseek-r1:1.5b or llama3.2:3b

I enable "Retry on Fail" on the classifier and each agent. In my environment (cloud and long-lived connections), a few retries smooth out transient hiccups. It's not a silver bullet, but it prevents a lot of unnecessary red Xs while you're iterating.

Does this actually save money? If you're paying per token on hosted models, absolutely. You're deferring the expensive reasoning calls until a small model decides they're justified. Even self-hosted, you'll feel the difference in throughput and latency. CodeLlama crushes most code-related queries without dragging a reasoning model into it. And for general questions ("How do I make this sandwich?"), a small generalist is plenty.

A few practical notes from the build:
- Good inputs help. If you know you're asking for code, say so. Your classifier and downstream agent will have an easier time.
- Tuning beats guessing. Spend time on the classifier's system prompt. Small changes go a long way.
- Non-determinism is real. You'll see variance run-to-run. Between retries, better prompts, and a firm "no clear match" path, you can keep that variance sane.
- Bigger models, better answers. If you have the budget or hardware, plugging in something like Claude, GPT, or a higher-parameter DeepSeek will lift quality. The routing pattern stays the same.
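For reference, pulling the models onto the Ollama host is a set of one-liners, with tags exactly as used in the lab:

ollama pull deepseek-r1:1.5b   # classifier and general chit-chat
ollama pull deepseek-r1:7b     # reasoning agent
ollama pull codellama:latest   # coding agent
ollama pull llama3.2:3b        # alternative generalist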
Where to take it next:
- Wire this to Slack so an engineering channel can drop prompts and get routed answers in place.
- Add more "experts" (e.g., a data-analysis agent or an internal knowledge agent) and expand your classifier categories.
- Log token counts and latency per branch so you can actually measure savings and adjust thresholds and models over time.

This is a lab, not production, but the pattern is production-worthy with the right guardrails. Start small, measure, tune, and only scale up the heavy models where you're seeing real business value. Let me know what you build, especially if you try multi-class routing and send prompts to more than one expert; some of the combined answers I've seen are pretty great.

Here's the lab in our git, if you'd like to try it out for yourself. If video is more your thing, try this:

Thanks for building along, and I'll see you in the next lab.
Automate Let's Encrypt Certificates on BIG-IP
To quote the evil emperor Zurg: "We meet again, for the last time!" It's hard to believe it's been six years since my first rodeo with Let's Encrypt and BIG-IP, but (uncompromised) timestamps don't lie. And maybe this won't be my last look at Let's Encrypt, but it will likely be the last time I do so as a standalone effort, which I'll come back to at the end of this article. The first project was a compilation of shell scripts, python scripts, and config files, and well, this is no different. But it's all updated to meet the ACME protocol version requirements for Let's Encrypt. Here's a quick table to connect all the dots:

Description           What's Out                 What's In
acme client           letsencrypt.sh             dehydrated
python library        f5-common-python           bigrest
BIG-IP functionality  creating the SSL profile   utilizing an iRule for the HTTP challenge

The f5-common-python library has not been maintained or enhanced for at least a year now, and I have an affinity for the good work Leo did with bigrest, which I enjoy using. I opted not to carry the SSL profile configuration forward because that functionality is more app-specific than the certificates themselves. And finally, whereas my initial project used the DNS challenge with the name.com API, in this proof of concept I chose to use an iRule on the BIG-IP to serve the challenge for Let's Encrypt to validate against.

Whereas my solution is new, the way Let's Encrypt works has not changed, so I've carried forward the process from my previous article, which I've now archived. I'll defer to their how-it-works page for details, but basically the steps are:

1. Define a list of domains you want to secure
2. Your client reaches out to the Let's Encrypt servers to initiate a challenge for those domains
3. The servers will issue an http or dns challenge based on your request
4. You need to place a file on your web server or a txt record in the dns zone file with that challenge information
5. The servers will validate your challenge information and notify you
6. You will clean up your challenge files or txt records
7. The servers will issue the certificate and certificate chain to you
8. You now have the key, cert, and chain, and can deploy them to your web servers, or in our case, to the BIG-IP

Before kicking off a validation and generation event, the client registers your account based on your settings in the config file. The files in this project are as follows:

/etc/dehydrated/config           # Dehydrated configuration file
/etc/dehydrated/domains.txt      # Domains to sign and generate certs for
/etc/dehydrated/dehydrated       # acme client
/etc/dehydrated/challenge.irule  # iRule configured and deployed to BIG-IP by the hook script
/etc/dehydrated/hook_script.py   # Python script called by dehydrated for special steps in the cert generation process

# Environment Variables
export F5_HOST=x.x.x.x
export F5_USER=admin
export F5_PASS=admin

You add your domains to the domains.txt file (expect more work if you're signing a lot of domains; I tested the one I have access to). The dehydrated client, of course, is required, as is the hook script that dehydrated interacts with for the special steps in the cert generation process, aptly named hook_script.py. For my hook, I'm deploying a challenge iRule to be applied only during the challenge; it is modified each time, specific to the challenge supplied from the Let's Encrypt service, and is cleaned up after the challenge is tested. And finally, there are a few environment variables I set so the information is not in text files. You could also move these into a credential vault.
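For reference, a minimal /etc/dehydrated/config for this kind of setup might look like the following sketch. CA, CHALLENGETYPE, and HOOK are standard dehydrated settings, but treat the values as assumptions to adapt to your environment:

# Use the Let's Encrypt staging CA while testing; switch to production when ready
CA="https://acme-staging-v02.api.letsencrypt.org/directory"

# This project answers the http-01 challenge from the BIG-IP via an iRule
CHALLENGETYPE="http-01"

# Hook that deploys the challenge iRule and the resulting certs to the BIG-IP
HOOK="/etc/dehydrated/hook_script.py"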
So to recap, you first register your client, then you can kick off a challenge to generate and deploy certificates. On the client side, it looks like this:

./dehydrated --register --accept-terms
./dehydrated -c

Now, for testing, make sure you use the Let's Encrypt staging service instead of production. And since I want to force action on every request while testing, I run the second command a little differently:

./dehydrated -c --force --force-validation

Depicted graphically, here are the moving parts for the http challenge: issued by Let's Encrypt at the request of the dehydrated client, deployed to the F5 BIG-IP, and validated by the Let's Encrypt servers. The Let's Encrypt servers then generate and return certs to the dehydrated client, which, via the hook script, deploys the certs and keys to the F5 BIG-IP to complete the process. And here's the output of the dehydrated client and hook script in action from the CLI:

# ./dehydrated -c --force --force-validation
# INFO: Using main config file /etc/dehydrated/config
Processing example.com
 + Checking expire date of existing cert...
 + Valid till Jun 20 02:03:26 2022 GMT (Longer than 30 days). Ignoring because renew was forced!
 + Signing domains...
 + Generating private key...
 + Generating signing request...
 + Requesting new certificate order from CA...
 + Received 1 authorizations URLs from the CA
 + Handling authorization for example.com
 + A valid authorization has been found but will be ignored
 + 1 pending challenge(s)
 + Deploying challenge tokens...
 + (hook) Deploying Challenge
 + (hook) Challenge rule added to virtual.
 + Responding to challenge for example.com authorization...
 + Challenge is valid!
 + Cleaning challenge tokens...
 + (hook) Cleaning Challenge
 + (hook) Challenge rule removed from virtual.
 + Requesting certificate...
 + Checking certificate...
 + Done!
 + Creating fullchain.pem...
 + (hook) Deploying Certs
 + (hook) Existing Cert/Key updated in transaction.
 + Done!

This results in a deployed certificate/key pair on the F5 BIG-IP, modified in a transaction for future updates. This proof of concept is on github in the f5devcentral org if you'd like to take a look. Before closing, however, I'd like to mention a couple things:

This is an update to an existing solution from years ago. It works, but it probably isn't the best way to automate today if you're just getting started, or if you've already started pursuing a more modern approach to automation. A better path would be something like Ansible. On that note, there are several solutions you can take a look at, posted below in resources.

Resources
- https://github.com/EquateTechnologies/dehydrated-bigip-ansible
- https://github.com/f5devcentral/ansible-bigip-letsencrypt-http01
- https://github.com/s-archer/acme-ansible-f5
- https://github.com/s-archer/terraform-modular/tree/master/lets_encrypt_module (Terraform instead of Ansible)
- https://community.f5.com/t5/technical-forum/let-s-encrypt-with-cloudflare-dns-and-f5-rest-api/m-p/292943 (Similar solution to mine, only slightly more robust with OCSP stapling, the DNS challenge instead of HTTP, and bash instead of python)
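One last housekeeping note if you run this proof of concept: because dehydrated skips certificates that aren't near expiry on a normal run (as the "Longer than 30 days" message above shows), scheduling renewals is a one-line cron entry. A sketch, using the client path from the file list and an arbitrary daily schedule:

# Run the renewal check daily at 03:00; only certs nearing expiry are renewed
0 3 * * * /etc/dehydrated/dehydrated -c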
CNSA 2.0 Implementation Guide with OpenSSL: How to Build a Quantum-Resistant Certificate Authority

Are your certificates quantum-ready? Explore DevCentral's ML-DSA post-quantum cryptography PKI tutorial, which meets NSA quantum-safe requirements, and build your own internal certificate authority with PQC support.
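As a taste of what the guide covers: with an OpenSSL build that includes ML-DSA support (3.5 or later), generating a quantum-resistant CA key and self-signed root can be sketched in a few commands. This is an assumption-laden illustration, not the guide's full procedure: ML-DSA-87 is the CNSA 2.0 signature parameter set, while the subject, lifetime, and filenames are placeholders.

# Requires OpenSSL 3.5+ with ML-DSA (FIPS 204) support
openssl genpkey -algorithm ML-DSA-87 -out pqc-ca.key
openssl req -x509 -new -key pqc-ca.key -subj "/CN=Internal PQC Root CA" -days 3650 -out pqc-ca.crt

# Verify the signature algorithm on the new root
openssl x509 -in pqc-ca.crt -noout -text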