Win Big in Vegas: The iRules Contest is back with $5k on the line at AppWorld 2026
Hey there, community, iRules Contest here... did you miss me? Well I’m back in business, baby, in Vegas, no less! At AppWorld 2026, we’re challenging DevCentral community members in attendance to design and build innovative iRules that solve real-world problems, improve performance, and enhance customer experiences. Whether you’re a seasoned iRules veteran or just getting started, we can’t wait to see what you create.

Note: participation in this edition of the iRules Contest is limited to AppWorld 2026 attendees. But fear not! We’re hitting the road this year as well.

The Challenge

Plan out and write an iRule that goes beyond BIG-IP’s built-in capabilities. Think of the future: the possibilities are wide open. We’ll drop a couple of hints leading up to the event, and you’ll have a final hint in your registration swag bag, so keep your eyes peeled. There might even be a hint in an iRules-related article releasing this week, who knows?

$5,000 to the Grand Prize Winner -- Are You In?

Total prize money is $10,000, with the other $5,000 distributed across 2nd place, 3rd place, and five category awards.

Place | Prize
Grand Prize | $5,000
2nd Place | $2,500
3rd Place | $1,000
Five Category Awards | $300/ea

What Makes for a Winning Entry?

Submissions are judged on a 100-point scale across five categories:

Innovation & Creativity (25 points)
Does this solution show original thinking? Consider:
- Novel use of iRule features or creative problem-solving
- Fresh perspective on common challenges
- Unique approach that stands out from typical solutions

Business Impact (20 points)
Would customers actually use this? Consider:
- Solves a real operational problem or customer need
- Practical applicability and potential adoption
- Clear business value

Technical Excellence (25 points)
Is it well-built and production-ready?
Consider:
- Works correctly and handles edge cases
- Performance-conscious (efficient, minimal resource impact)
- Follows security best practices
- Clean, readable code

Theme & Requirements Alignment (20 points)
Does it address the contest theme using required technologies (to be announced at the event)? Consider:
- Relevance to the specified theme
- Effective use of required technology
- How well the chosen technology fits the solution

Presentation (10 points)
Can you understand what it does and why it matters? Consider:
- Clear explanation of the problem and solution
- Quality of demo or presentation
- Documentation sufficient to implement

Important Dates

- Contest Opens: 6:00 PM Pacific Time, March 10, 2026
- Submission Deadline: 11:59 PM Pacific Time, March 10, 2026
- Winners Announced: March 12, 2026, during general sessions

How to Enter

1. Register for AppWorld 2026 — you must be a registered attendee.
2. Register for the Contest — registration will open on the AppWorld event app soon. The contest is open to all F5 partners, customers, and DevCentral members registered for and in attendance at the contest on March 10, 2026 at F5 AppWorld 2026, except as described in the Official Rules. Please see the Official Rules for complete terms, including conditions for participation and eligibility.
3. Build and submit — during the six-hour window on contest night, before 11:59 PM. Edit your draft entry as much as you like, but once you submit, that’s what we’ll review. There is an example entry pinned at the top of the Contest Entries page you should follow. Make sure to add these tags to your entry: "appworld 2026", "vegas", and "irules", as shown on that example.

This contest is BYOD: bring your own device to develop and submit your iRules submission. However, a lab environment in our UDF platform will be provided if you need a development environment to test your code against.

New to iRules? No problem. We welcome participants at all skill levels.
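If you've never written one, here is a deliberately simple, illustrative iRule (nowhere near a contest-caliber entry, and the path names are made up) just to show the event-driven shape:

```tcl
# Illustrative only: redirect a retired URI path to its new home.
# The HTTP_REQUEST event fires once per client HTTP request on the virtual server.
when HTTP_REQUEST {
    if { [HTTP::path] starts_with "/legacy" } {
        # Send a permanent redirect to the replacement path, preserving the URI
        HTTP::respond 301 Location "https://[HTTP::host]/app[HTTP::uri]"
    }
}
```

A contest entry will go well beyond this, but every iRule follows the same pattern: an event, a condition, an action.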
If you’re just getting started, check out our Getting Started with iRules: Basic Concepts guide. This contest is a great opportunity to learn by doing. Also, feel free to bring your favorite AI buddy with you to help craft your entry. The goal is innovation and impact, not syntax expertise.

Questions?

Post any and all of your contest-related questions to the pinned thread in the Contests group on DevCentral. We’ll be monitoring, but allow a business day for a response in the lead-up to AppWorld.

The iRules Contest has a history of surfacing creative solutions from the community. Some of the best ideas we’ve seen came from people who approached problems differently, and we’re looking forward to seeing what you build this year.

Register. Prepare. Compete. See you at AppWorld!

Running bigip to terraform resources
Hi, posting here in the hopes someone finds this useful. This is not a product; it's a small open source tool that I've made to help manage our BigIPs.

TL;DR: Running BigIP to Terraform resources: https://github.com/schibsted/bigip-to-terraform

We recently started talking about managing our BigIP in a more DevOpsy way at work. We have been using the web GUI most of the time, and recently it has become more and more tricky to do transformations on the config text file to make large-scale changes. We use Terraform for AWS and some other things, and since I've not used it much myself I thought I'd give Terraform for BigIP a go.

After looking at the docs, comparing with our running config, and speaking to some colleagues, I found I wanted to see a Terraform representation of our running config to see how new resources could be configured. So I wrote a script to dump our running config to Terraform resources. It uses the Python API to extract VIPs, pools, and attendant nodes, writes a skeleton resource file, and then "terraform import"s each resource. After that it uses "terraform show", with some light processing, to generate a complete and valid Terraform .tf file for all the resources found.

There is one specific bug in the BigIP plugin for Terraform (see the "issues" on GitHub) that stops me from getting a complete automatic extract in our environment. Also, for our full configuration (once I've removed the VIP resources that cause problems), "terraform plan" takes between 15 and 25 minutes. So I added an option to extract just VIPs matching a string or RE pattern, as well as their attendant pools and nodes. I've been able to "terraform apply" these back to a BigIP.

The README file is quite complete; basically do `./runner` to get it all, or `./runner -v 'pattern'` for a substring match in the VIP name, full path, or IP number.
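To make the skeleton-then-import approach concrete, here is a minimal Python sketch of the idea (not the actual script from the repo). The resource type names follow the terraform-provider-bigip convention (e.g. `bigip_ltm_virtual_server`, `bigip_ltm_pool`); the VIP list and name-mangling scheme are assumptions for illustration:

```python
def skeleton_resources(vips):
    """Emit empty Terraform resource blocks for each VIP and its pool.

    'terraform import' refuses to import into resources that are not
    declared, so an empty block per resource is written first.
    """
    lines = []
    for vip in vips:
        # Turn a BIG-IP full path like /Common/web_vip into a valid TF name
        safe = vip["name"].replace("/", "_").strip("_")
        lines.append(f'resource "bigip_ltm_virtual_server" "{safe}" {{}}')
        if vip.get("pool"):
            pool_safe = vip["pool"].replace("/", "_").strip("_")
            lines.append(f'resource "bigip_ltm_pool" "{pool_safe}" {{}}')
    return "\n".join(lines)


def import_commands(vips):
    """Emit the 'terraform import' commands that back-fill the real state."""
    cmds = []
    for vip in vips:
        safe = vip["name"].replace("/", "_").strip("_")
        cmds.append(f'terraform import bigip_ltm_virtual_server.{safe} {vip["name"]}')
    return cmds


if __name__ == "__main__":
    # In the real tool this list comes from the BIG-IP Python API
    vips = [{"name": "/Common/web_vip", "pool": "/Common/web_pool"}]
    print(skeleton_resources(vips))
    print("\n".join(import_commands(vips)))
```

After the imports succeed, "terraform show" (plus some cleanup) turns the populated state back into a full .tf file, which is essentially what the runner script automates.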
This is not a migration tool, since it does not extract or handle iRules, policies, and such at all; they have to exist in the target environment already.

BIG-IP Next for Kubernetes CNF 2.2 what's new
Introduction

BIG-IP Next CNF v2.2.0 offers new enhancements to BIG-IP Next for Kubernetes CNFs, with a focus on analytics capabilities, traffic distribution, subscriber management, and operational improvements that address real-world challenges in high-scale deployments.

High-Speed Logging for Traffic Analysis

The Reporting feature introduces high-speed logging (HSL) capabilities that capture session- and flow-level metrics in CSV format. Key data points include subscriber identifiers, traffic volumes, transaction counts, video resolution metrics, and latency measurements, exported via Syslog (RFC 5424, RFC 3164, or legacy-BIG-IP formats) over TCP or UDP. Fluent Bit handles TMM container log processing, forwarding to Fluentd for external analytics servers. Custom Resources simplify configuration of log publishers, reporting intervals, and enforcement policies, making it straightforward to integrate into existing Kubernetes workflows.

DNS Cache Inspection and Management

New utilities provide detailed visibility into DNS cache operations. The bdt_cli tool supports listing, counting, and selectively deleting cache records using filters for domain names, TTL ranges, response codes, and cache types (RRSet, message, or nameserver). Complementing this, dns-cache-stats delivers performance metrics including hit/miss ratios, query volumes, response time distributions across intervals, and nameserver behavior patterns. These tools enable systematic cache analysis and maintenance directly from debug sidecars.

Stateless and Bidirectional DAG Traffic Distribution

Stateless DAG implements pod-based hashing to distribute traffic evenly across TMM pods without maintaining flow state. This approach embeds directly within the CNE installation, eliminating separate DAG infrastructure. Bidirectional DAG extends this with symmetric routing for client-to-server and return flows, using consistent redirect VLANs and hash tables.
Deployments must align TMM pod counts with self-IP configurations on pod_hash-enabled VLANs to ensure balanced distribution.

Dynamic GeoDB Updates for Edge Firewall Policies

Edge Firewall Geo Location policies now support dynamic GeoDB updates, replacing static country/region lists embedded in container images. The Controller and PCCD components automatically incorporate new locations and handle deprecated entries with appropriate logging. Firewall Policy CRs can reference newly available geos immediately, enabling responsive policy adjustments without container restarts or rebuilds. This maintains policy currency in environments requiring frequent threat intelligence updates.

Subscriber Creation and CGNAT Logging

RADIUS-triggered subscriber creation integrates with distributed session storage (DSSM) for real-time synchronization across TMM pods. Subscriber records capture identifiers like IMSI, MSISDN, or NAI, enabling automated session lifecycle management. CGNAT logging enhancements include the Subscriber ID in translation events, providing clear IP-to-subscriber mapping. This facilitates correlation of network activity with individual users, supporting troubleshooting, auditing, and regulatory reporting requirements.

Kubernetes Secrets Integration for Sensitive Configuration

Custom Resources now reference sensitive data through Kubernetes’ native Secrets using secretRef fields (name, namespace, key). The cne-controller fetches secrets securely via mTLS, monitors for updates, and propagates changes to consuming components. This supports certificate rotation through cert-manager without CR reapplication. RBAC controls ensure appropriate access while eliminating plaintext sensitive data from YAML manifests.

Dynamic Log Management and Storage Optimization

REST API endpoints and ConfigMap watching enable runtime log level adjustments per pod without restarts. Changes propagate through pod-specific ConfigMaps monitored by the F5 logging library.
An optional Folder Cleaner CronJob automatically removes orphaned log directories, preventing storage exhaustion in long-running deployments with heavy Fluentd usage.

Key Enhancements Overview

Several refinements have improved operational aspects:

- CNE Controller RBAC: Configurable CRD monitoring via ConfigMap eliminates cluster-wide list permissions, with a manual controller restart required for list changes.
- CGNAT/DNAT HA: F5Ingress automatically distributes VLAN configurations to standby TMM pods (excluding self-IPs) for seamless failover.
- Memory Optimization: 1GB huge page support via the tmm.hugepages.preferredhugepagesize parameter.
- Diagnostics: QKView requests can be canceled by ID, generating partial diagnostics from collected data.
- Metrics Control: Per-table aggregation modes (Aggregated, Semi-Aggregated, Diagnostic) with configurable export intervals via the f5-observer-operator-config ConfigMap.

Related content

- BIG-IP Next for Kubernetes CNF - latest release
- BIG-IP Next Cloud-Native Network Functions (CNFs)
- BIG-IP Next for Kubernetes CNFs deployment walkthrough | DevCentral
- BIG-IP Next Edge Firewall CNF for Edge workloads | DevCentral
- F5 BIG-IP Next CNF solutions suite of Kubernetes native 5G Network Functions

What’s New in BIG-IQ v8.4.1?
Introduction

F5 BIG-IQ Centralized Management, a key component of the F5 Application Delivery and Security Platform (ADSP), helps teams maintain order and streamline administration of BIG-IP app delivery and security services. Effective management of this complex application landscape requires a single point of control that combines visibility, simplified management, and automation tools. In this article, I’ll highlight some of the key features, enhancements, and use cases introduced in the BIG-IQ v8.4.1 release and cover the value of these updates.

Demo Video

New Features in BIG-IQ 8.4.1

Support for F5 BIG-IP v17.5.1.X and BIG-IP v21.0

BIG-IQ 8.4.1 provides full support for the latest versions of BIG-IP (BIG-IP 17.5.1.X and 21.0), ensuring seamless discovery and compatibility across all modules. Users who upgrade to BIG-IP 17.5.1.X+ or 21.0 retain the same functionality without disruptions, maintaining consistency in their management operations. As you look to upgrade BIG-IP instances to the latest versions, our recommendation is to use BIG-IQ. By leveraging the BIG-IQ device/software upgrade workflows, teams get a repeatable, standardized, and auditable process for upgrades in a single location. In addition to upgrades, BIG-IQ also enables teams to handle backups, licensing, and device certificate workflows in the same tool—creating a one-stop shop for BIG-IP device management. Note that BIG-IQ works with BIG-IP appliances and Virtual Editions (VEs).

Updated TMOS Layer

In the 8.4.1 release, BIG-IQ's underlying TMOS version has been upgraded to v17.5.1.2, which enhances control plane performance, improves security efficacy, and enables better resilience of the BIG-IQ solution.

MCP Support

BIG-IP v21.0 introduced MCP Profile support—enabling teams to support AI/LLM workloads with BIG-IP to drive better performance and security. Additionally, v21.0 also introduces support for S3-optimized profiles, enhancing the performance of data delivery for AI workloads.
BIG-IQ 8.4.1 and its interoperability with v21.0 help teams streamline and scale management of these BIG-IP instances—enabling them to support AI adoption plans and ensure fast and secure data delivery.

Enhanced BIG-IP and F5OS Visibility and Management

BIG-IQ 8.4.1 introduces the ability to provision, license, configure, deploy, and manage the latest BIG-IP devices and app services (v17.5.1.X and v21.0). In 8.4, BIG-IQ introduced new visibility fields—including model, serial number, count, slot tenancy, and SW version—to help teams effectively plan device strategy from a single source of truth. These enhancements also improved license visibility and management workflows, including exportable reports. BIG-IQ 8.4.1 continues to offer this enhanced visibility and management experience for the latest BIG-IP versions.

Better Security Administration

BIG-IQ 8.4.1 includes general support for SSL Orchestrator 13.0 to help teams manage encrypted traffic and potential threats. BIG-IQ includes dedicated dashboards and management workflows for SSL Orchestrator. In BIG-IQ 8.4, F5 introduced support for and management of the Venafi Trust Protection Platform v22.x-24.x, a leading platform for certificate management and certificate authority services. This integration enables teams to automate and centrally manage BIG-IP SSL device certificates and keys. BIG-IQ 8.4.1 continues this support.

Finally, BIG-IQ 8.4.1 continues to align with AWS security protocols so customers can confidently partner with F5. In BIG-IQ 8.4, F5 introduced support for IMDSv2, which uses session-oriented authentication to access EC2 instance metadata, as opposed to the request/response method of IMDSv1. This session/token-based method is more secure because it reduces the likelihood of attackers successfully using application vulnerabilities to access instance metadata.
Enhanced Automation Integration & Protocol Support

BIG-IQ 8.4.1 continues BIG-IQ's support for the latest version of AS3 and templates (v3.55+). By supporting the latest Automation Toolchain (AS3/DO), BIG-IQ is aligned with current BIG‑IP APIs and schemas, enabling reliable, repeatable app and device provisioning. It reduces deployment failures from version mismatches, improves security via updated components, and speeds operations through standardized, CI/CD-friendly automation at scale.

BIG-IQ 8.4 (and 8.4.1) provides support for IPv6. IPv6 provides vastly more IP addresses, simpler routing, and end‑to‑end connectivity as IPv4 runs out. BIG‑IQ's IPv6 profile support centralizes configuration, visibility, and policy management for IPv6 traffic across BIG‑IP devices—reducing errors and operational overhead while enabling consistent, secure IPv6 adoption.

Upgrading to v8.4.1

You can upgrade from BIG-IQ 8.X to BIG-IQ 8.4.1.

- BIG-IQ Centralized Management Compatibility Matrix: refer to Knowledge Article K34133507
- BIG-IQ Virtual Edition Supported Platforms: provides a matrix describing the compatibility between BIG-IQ VE versions and the supported hypervisors and platforms

Conclusion

Effective management—orchestration, visibility, and compliance—relies on consistent app services and security policies across on-premises and cloud deployments. Easily control all your BIG-IP devices and services with a single, unified management platform, F5® BIG-IQ®. F5® BIG-IQ® Centralized Management reduces complexity and administrative burden by providing a single platform to create, configure, provision, deploy, upgrade, and manage F5® BIG-IP® security and application delivery services.

Related Content

- Boosting BIG-IP AFM Efficiency with BIG-IQ: Technical Use Cases and Integration Guide
- Five Key Benefits of Centralized Management
- F5 BIG-IQ What's New in v8.4.0?
F5 AppWorld 2026 Registration - early bird pricing
Join us March 10–12 at Fontainebleau Las Vegas and Meet the Moment at F5 AppWorld 2026. Connect with your community and explore how the F5 Application Delivery and Security Platform gives you control without compromise. Over three days you will experience inspiring keynotes, learn new approaches in breakouts, deepen your skills in hands-on labs, and connect with peers, F5 leaders, and partners.

Register early and save:
- Conference pass: $499
- Conference pass + F5 Academy labs: $899
- Team pass: 4 for the price of 3

Take advantage of early bird pricing and register today! We look forward to seeing you in Vegas.

Your DevCentral Team

---
** Early bird pricing expires Feb 13, 2026.

Implementing F5 NGINX STIGs: A Practical Guide to DoD Security Compliance
Introduction

In today’s security-conscious environment, particularly within federal and DoD contexts, Security Technical Implementation Guides (STIGs) have become the gold standard for hardening systems and applications. For organizations deploying NGINX—whether as a web server, reverse proxy, or load balancer—understanding and implementing NGINX STIGs is critical for maintaining compliance and securing your infrastructure. This guide walks through the essential aspects of NGINX STIG implementation, providing practical insights for security engineers and system administrators tasked with meeting these stringent requirements.

Understanding STIGs and Their Importance

STIGs are configuration standards created by the Defense Information Systems Agency (DISA) to enhance the security posture of DoD information systems. These guides provide detailed technical requirements for securing software, hardware, and networks against known vulnerabilities and attack vectors. For NGINX deployments, STIG compliance ensures:

- Protection against common web server vulnerabilities
- Proper access controls and authentication mechanisms
- Secure configuration of cryptographic protocols
- Comprehensive logging and auditing capabilities
- A defense-in-depth security posture

Key NGINX STIG Categories

Access Control and Authentication

Critical Controls: The STIG mandates strict access controls for NGINX configuration files and directories. All NGINX configuration files should be owned by root (or the designated administrative user) with permissions set to 600 or more restrictive.
# Verify permissions
sudo chmod 600 /etc/nginx/nginx.conf

Client Certificate Authentication: For environments requiring mutual TLS authentication, NGINX must be configured to validate client certificates:

# Include the following lines in the server {} block of nginx.conf:
ssl_certificate /etc/nginx/ssl/server_cert.pem;
ssl_certificate_key /etc/nginx/ssl/server_key.pem;

# Enable client certificate verification
ssl_client_certificate /etc/nginx/ca_cert.pem;
ssl_verify_client on;

# Optional: Set verification depth for client certificates
ssl_verify_depth 2;

location / {
    proxy_pass http://backend_service;

    # Restrict access to valid PIV credentials
    if ($ssl_client_verify != SUCCESS) {
        return 403;
    }
}

Certificate Management:
- All certificates must be signed by a DoD-approved Certificate Authority
- Private keys must be protected with appropriate file permissions (400)
- Certificate expiration dates must be monitored and renewed before expiry

Cryptographic Protocols and Ciphers

One of the most critical STIG requirements involves configuring approved cryptographic protocols and cipher suites.

Approved TLS Versions: STIGs typically require TLS 1.2 as a minimum, with TLS 1.3 preferred:

ssl_protocols TLSv1.2 TLSv1.3;

FIPS-Compliant Cipher Suites: When operating in FIPS mode, NGINX must use only FIPS 140-2 validated cipher suites:

ssl_ciphers 'TLS_AES_256_GCM_SHA384:TLS_AES_128_GCM_SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256';
ssl_prefer_server_ciphers on;

Logging and Auditing

Comprehensive logging is mandatory for STIG compliance, enabling security monitoring and incident response.
Required Log Formats:

log_format security_log '$remote_addr - $remote_user [$time_local] '
                        '"$request" $status $body_bytes_sent '
                        '"$http_referer" "$http_user_agent" '
                        '$request_time $upstream_response_time '
                        '$ssl_protocol/$ssl_cipher';

access_log /var/log/nginx/access.log security_log;
error_log /var/log/nginx/error.log info;

Key Logging Requirements:
- Log all access attempts (successful and failed)
- Capture client IP addresses and authentication details
- Record timestamps in UTC or local time consistently
- Ensure logs are protected from unauthorized modification (600 permissions)
- Implement log rotation and retention policies

Pass Security Attributes via a Proxy

STIGs require implementation of security attributes to enforce security policy for access control and flow control for users, data, and traffic:

# Include the "proxy_pass" directive as well as the "proxy_set_header" values as required:
proxy_pass http://backend_service;
proxy_set_header X-Security-Classification "Confidential";
proxy_set_header X-Data-Origin "Internal-System";
proxy_set_header X-Access-Permissions "Read,Write";

Request Filtering and Validation

Protecting against malicious requests is a core STIG requirement:

# Limit request methods
if ($request_method !~ ^(GET|POST|PUT|DELETE|HEAD)$) {
    return 405;
}

# Request size limits
client_max_body_size 10m;
client_body_buffer_size 128k;

# Timeouts to prevent slowloris attacks
client_body_timeout 10s;
client_header_timeout 10s;
keepalive_timeout 5s 5s;
send_timeout 10s;

# Rate limiting
limit_req_zone $binary_remote_addr zone=req_limit:10m rate=10r/s;
limit_req zone=req_limit burst=20 nodelay;

SIEM Integration

Forward NGINX logs to SIEM platforms for centralized monitoring:

# Syslog integration
error_log syslog:server=siem.example.com:514,facility=local7,tag=nginx,severity=info;
access_log syslog:server=siem.example.com:514,facility=local7,tag=nginx;

NGINX Plus Specific STIG Considerations

Organizations using NGINX Plus have additional capabilities to meet
STIG requirements:

Active Health Checks

upstream backend {
    zone backend 64k;
    server backend1.example.com;
    server backend2.example.com;
}

match server_ok {
    status 200-399;
    header Content-Type ~ "text/html";
    body ~ "Expected Content";
}

server {
    location / {
        proxy_pass http://backend;
        health_check match=server_ok;
    }
}

JWT Authentication

For API security, NGINX Plus can validate JSON Web Tokens:

location /api {
    auth_jwt "API Authentication";
    auth_jwt_key_file /etc/nginx/keys/jwt_public_key.pem;
    auth_jwt_require exp iat;
}

Dynamic Configuration API

The NGINX Plus API must be secured and access-controlled:

location /api {
    api write=on;
    allow 10.0.0.0/8;  # Management network only
    deny all;

    # Require client certificate
    ssl_verify_client on;
}

Best Practices for STIG Implementation

1. Start with Baseline Configuration: Use DISA's STIG checklist as your starting point and customize for your environment.
2. Implement Defense in Depth: STIGs are minimum requirements; layer additional security controls where appropriate.
3. Automate Validation: Use configuration management and automated scanning to maintain continuous compliance.
4. Document Deviations: When technical controls aren't feasible, document risk acceptances and compensating controls.
5. Regular Updates: STIGs are updated periodically; establish a process to review and implement new requirements.
6. Test Before Production: Validate STIG configurations in development/staging before deploying to production.
7. Monitor and Audit: Implement continuous monitoring to detect configuration drift and security events.

Conclusion

Achieving and maintaining NGINX STIG compliance requires a comprehensive approach combining technical controls, process discipline, and ongoing vigilance. While the requirements can seem daunting initially, properly implemented STIGs significantly enhance your security posture and reduce risk exposure.
By treating STIG compliance as an opportunity to improve security rather than merely a checkbox exercise, organizations can build robust, defensible NGINX deployments that meet the most stringent security requirements while maintaining operational efficiency. Remember: security is not a destination but a journey. Regular reviews, updates, and continuous improvement are essential to maintaining compliance and protecting your infrastructure in an ever-evolving threat landscape.

Additional Resources

- DISA STIG Library: https://public.cyber.mil/stigs/
- NGINX Security Controls: https://docs.nginx.com/nginx/admin-guide/security-controls/
- NIST Cybersecurity Framework: https://www.nist.gov/cyberframework

Have questions about implementing NGINX STIGs in your environment? Share your challenges and experiences in the comments below.

What is the best practice for migrating from iseries to rseries?
Hi, we plan to migrate from a legacy i-series appliance (v13.x.x) to a new r-series F5 (v15.1.x). We will create the same VLANs and IP address config, but the physical interfaces will be different. The new r-series appliance is already licensed. What is the best practice for this migration?

Option 1: import the whole UCS file to the new r-series appliance. After importing the UCS to the new appliance, what are the next steps to complete the whole migration?

Option 2: copy the config for every module, for example: copy the LTM config first, then GTM, and finally AFW...

Can someone please advise? Thanks in advance!