deployment
BIG-IP VE: 40G Throughput from 4x10G physical NICs
Hello F5 Community,

I'm designing a BIG-IP VE deployment and need to achieve 40G throughput from 4x10G physical NICs. After extensive research (including reading K97995640), I've created this flowchart to summarize the options. Can you verify if this understanding is correct?

**My Environment:**
- Physical server: 4x10G NICs
- ESXi 7.0
- BIG-IP VE (Performance LTM license)
- Goal: Maximize throughput for the data plane

**Research Findings:**

From F5 K97995640: "Trunking is supported on BIG-IP VE... intended to be used with SR-IOV interfaces but not with the default vmxnet3 driver."

```
          [Need 40G to F5 VE]
         ┌─────────┴─────────┐
         │                   │
   [F5 controls]      [ESXi controls]
   (F5 does LACP)     (ESXi does LACP)
         │                   │
    Only SR-IOV       Link Aggregation
         │                   │
    ┌────┴────┐         ┌────┴─────┐
    │ 40G per │         │ 40G agg  │
    │  flow   │         │ 10G/flow │
    └─────────┘         └──────────┘
```
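For reference, here is a minimal sketch of what the F5-controlled option (left branch) might look like on the VE itself, assuming the four SR-IOV virtual functions appear as TMM interfaces 1.1 through 1.4; the interface numbering, object names, and VLAN tag are illustrative only, and this sketch does not confirm the per-flow throughput figures in the flowchart.

```bash
# Create an LACP trunk over the four SR-IOV interfaces, then bind a data VLAN to it
tmsh create net trunk data_trunk interfaces add { 1.1 1.2 1.3 1.4 } lacp enabled
tmsh create net vlan data_vlan interfaces add { data_trunk { tagged } } tag 100

# Confirm the trunk members and LACP state
tmsh list net trunk data_trunk
```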
Import the iSeries UCS file into the rSeries BIG IP tenant.

Hello, I have a UCS file from my iSeries and am trying to import it into a BIG-IP tenant on rSeries, but I get this error. I have tried the manual procedure and F5 Journeys, but the result is always the same.

```
load_config_files[4900]: "/usr/bin/tmsh -n -g -a load sys config partitions all " - failed. -- 010713d0:3: Symmetric Unit Key decrypt failure - decrypt failure
Unexpected Error: Loading configuration process failed.
2025 Nov 20 21:05:12 localhost.localdomain load_config_files[4900]: "/usr/bin/tmsh -n -g -a load sys config partitions all " - failed. -- 010713d0:3: Symmetric Unit Key decrypt failure - decrypt failure

Broadcast message from systemd-journald@localhost.localdomain (Thu 2025-11-20 21:05:14 EAT):

load_config_files[5614]: "/usr/bin/tmsh -n -g -a load sys config partitions all base " - failed. -- 010713d0:3: Symmetric Unit Key decrypt failure - decrypt failure
Unexpected Error: Loading configuration process failed.
2025 Nov 20 21:05:14 localhost.localdomain load_config_files[5614]: "/usr/bin/tmsh -n -g -a load sys config partitions all base " - failed. -- 010713d0:3: Symmetric Unit Key decrypt failure - decrypt failure

Configuration loading error: base-config-load-failed
```

For additional details, please see messages in /var/log/ltm
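The "Symmetric Unit Key decrypt failure" generally means the encrypted secrets in the UCS cannot be decrypted because the target system's master key differs from the source. Purely as a hedged sketch (placeholders below, and check F5's published guidance on moving UCS files that contain encrypted passphrases before trying this on a real system), one commonly referenced approach is to align the master key on the target before retrying the load:

```bash
# On the source iSeries BIG-IP: display the current master key
f5mku -K

# On the target rSeries tenant: install the same master key, then retry the load
f5mku -r <master-key-value-from-source>
tmsh load sys ucs <your-ucs-file>.ucs platform-migrate
```

The platform-migrate option is often used when moving a UCS between different hardware platforms; whether it is required here depends on your versions, so treat it as an assumption to verify.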
Building a Secure Application DMZ with F5 Distributed Cloud and Equinix Network Edge

**Why: Establishing a Secure Application DMZ**

Enterprises increasingly need to deliver their own applications directly to customers across geographies. Relying solely on external providers for Points of Presence (PoPs) can limit control, visibility, and flexibility. A secure Application Demilitarized Zone (DMZ) empowers organizations to:

- Establish their own PoPs for internet-facing applications.
- Maintain control over security, compliance, and performance.
- Deliver applications consistently across regions.
- Reduce dependency on third-party infrastructure.

This approach enables enterprises to build a globally distributed application delivery footprint tailored to their business needs.

**What: A Unified Solution to Secure Global Application Delivery**

The joint solution integrates F5 Distributed Cloud (F5XC) Customer Edge (CE), deployed via the Equinix Network Edge Marketplace, with Equinix Fabric to create a strategic point of control for secure, scalable application delivery.

**Key Capabilities**

- Secure Ingress/Egress: CE devices serve as secure gateways for public-facing applications, integrating WAF, API protection, and DDoS mitigation.
- Global Reach: Equinix's infrastructure enables CE deployment in strategic locations worldwide.
- Multicloud Networking: Seamless connectivity across public clouds, private data centers, and edge locations.
- Centralized Management: F5XC Console provides unified visibility, policy enforcement, and automation.

Together, these components form a cohesive solution that supports enterprise-grade application delivery with security, performance, and control.

**How: Architectural Overview**

**Core Components**

- F5XC Customer Edge (CE): Deployed as a virtual network function at Equinix PoPs, CE serves as the secure entry point for applications.
- F5 Distributed Cloud Console: Centralized control plane for managing CE devices, policies, and analytics.
- Equinix Network Edge Marketplace: Enables rapid provisioning of CE devices as virtual appliances.
- Equinix Fabric: High-performance interconnectivity between CE devices, clouds, and data centers.

**Key Tenets of the Solution**

- Strategic Point of Control - CE becomes the enterprise's own PoP, enabling secure and scalable delivery of applications.
- Unified Security Posture - Integrated WAF, API security, and DDoS protection across all CE locations.
- Consistent Policy Enforcement - Centralized control plane ensures uniform security and compliance policies.
- Multicloud and Edge Flexibility - Seamless connectivity across AWS, Azure, GCP, private clouds, and data centers.
- Rapid Deployment - CE provisioning via Equinix Marketplace reduces time-to-market and operational overhead.
- Partner and Customer Connectivity - Supports business partner exchanges and direct customer access without traditional networking complexity.

**Additional Links**

- Multicloud chaos ends at the Equinix Edge with F5 Distributed Cloud CE
- F5 and Equinix Partnership
- Equinix Fabric Overview
- Secure Extranet with Equinix Fabric and F5 Distributed Cloud
- Additional Equinix and F5 partner information
BIG-IP Next Edge Firewall CNF for Edge workloads

**Introduction**

The CNF architecture aligns with cloud-native principles by enabling horizontal scaling, ensuring that applications can expand seamlessly without compromising performance. It preserves the deterministic reliability essential for telecom environments, balancing scalability with the stringent demands of real-time processing.

More background information on the value CNFs bring to the environment: https://community.f5.com/kb/technicalarticles/from-virtual-to-cloud-native-infrastructure-evolution/342364

Telecom service providers make use of CNFs for performance optimization:

- Enable efficient and secure processing of N6-LAN traffic at the edge to meet the stringent requirements of 5G networks.
- Optimize AI-RAN deployments with dynamic scaling and enhanced security, ensuring that AI workloads are processed efficiently and securely at the edge, improving overall network performance.
- Deploy advanced AI applications at the edge with the confidence of carrier-grade security and traffic management, ensuring real-time processing and analytics for a variety of edge use cases.

**CNF Firewall Implementation Overview**

Let's start by understanding how different CRs are enabled within a CNF implementation; this allows CNF to achieve more optimized performance, CapEx, and OpEx. The traditional way of inserting services into Kubernetes is shown below. Moving to a consolidated data plane approach saved 60% of the Kubernetes environment's performance.

The F5BigFwPolicy Custom Resource (CR) applies industry-standard firewall rules to the Traffic Management Microkernel (TMM), ensuring that only connections initiated by trusted clients will be accepted. When a new F5BigFwPolicy CR configuration is applied, the firewall rules are first sent to the Application Firewall Management (AFM) Pod, where they are compiled into Binary Large Objects (BLOBs) to enhance processing performance. Once the firewall BLOB is compiled, it is sent to the TMM Proxy Pod, which begins inspecting and filtering network packets based on the defined rules.

**Enabling AFM within BIG-IP Controller**

Let's explore how we can enable and configure the CNF Firewall. Below is an overview of the steps needed to set up the environment, up to the installation of the CNF CRs.

[Enabling the AFM]

Enabling the AFM CR within the BIG-IP Controller definition:

```yaml
global:
  afm:
    enabled: true
  pccd:
    enabled: true
f5-afm:
  enabled: true
cert-orchestrator:
  enabled: true
afm:
  pccd:
    enabled: true
    image:
      repository: "local.registry.com"
```

[Configuration]

Example firewall policy settings:

```yaml
apiVersion: "k8s.f5net.com/v1"
kind: F5BigFwPolicy
metadata:
  name: "cnf-fw-policy"
  namespace: "cnf-gateway"
spec:
  rule:
    - name: allow-10-20-http
      action: "accept"
      logging: true
      servicePolicy: "service-policy1"
      ipProtocol: tcp
      source:
        addresses:
          - "2002::10:20:0:0/96"
        zones:
          - "zone1"
          - "zone2"
      destination:
        ports:
          - "80"
        zones:
          - "zone3"
          - "zone4"
    - name: allow-10-30-ftp
      action: "accept"
      logging: true
      ipProtocol: tcp
      source:
        addresses:
          - "2002::10:30:0:0/96"
        zones:
          - "zone1"
          - "zone2"
      destination:
        ports:
          - "20"
          - "21"
        zones:
          - "zone3"
          - "zone4"
    - name: allow-us-traffic
      action: "accept"
      logging: true
      source:
        geos:
          - "US:California"
      destination:
        geos:
          - "MX:Baja California"
          - "MX:Chihuahua"
    - name: drop-all
      action: "drop"
      logging: true
      ipProtocol: any
      source:
        addresses:
          - "::0/0"
          - "0.0.0.0/0"
```
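As a small usage sketch, the policy above would be applied and checked with standard kubectl commands. This assumes the CNF CRDs are already installed; the resource name used with kubectl get/describe may differ slightly depending on how the CRD defines its plural and short names.

```bash
# Apply the firewall policy CR and confirm it was accepted in the gateway namespace
kubectl apply -f cnf-fw-policy.yaml
kubectl get f5bigfwpolicy -n cnf-gateway
kubectl describe f5bigfwpolicy cnf-fw-policy -n cnf-gateway
```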
[Logging & Monitoring]

CNF firewall settings allow not only local logging but also HSL logging to external logging destinations.

```yaml
apiVersion: "k8s.f5net.com/v1"
kind: F5BigLogProfile
metadata:
  name: "cnf-log-profile"
  namespace: "cnf-gateway"
spec:
  name: "cnf-logs"
  firewall:
    enabled: true
    network:
      publisher: "cnf-hsl-pub"
      events:
        aclMatchAccept: true
        aclMatchDrop: true
        tcpEvents: true
        translationFields: true
```

Verifying the CNF firewall settings can be done through the sidecar debug container:

```
kubectl exec -it deploy/f5-tmm -c debug -n cnf-gateway -- bash

tmctl -d blade fw_rule_stat

context_type context_name
------------ ------------------------------------------
virtual      cnf-gateway-cnf-fw-policy-SecureContext_vs

rule_name                            micro_rules counter last_hit_time action
------------------------------------ ----------- ------- ------------- ------
allow-10-20-http-firewallpolicyrule  1           2       1638572860    2
allow-10-30-ftp-firewallpolicyrule   1           5       1638573270    2
```

**Conclusion**

To conclude our article, we showed how CNFs with consolidated data planes help optimize CNF deployments. In this article we went through an overview of the BIG-IP Next Edge Firewall CNF implementation, a sample configuration, and its monitoring capabilities. More articles covering additional use cases will follow.

**Related content**

- F5BigFwPolicy
- BIG-IP Next Cloud-Native Network Functions (CNFs)
- CNF Home
BIG-IP device fails to install node-inspector

Hi all, when I followed the steps in 'Steps to Setup Node-Inspector on BIG-IP' and executed the following command, an error occurred.

Command:

```
[root@bigip1:Active:Standalone] ~ # npm install -g node-inspector@0.12.8
```

Errors:

```
npm ERR! Linux 3.10.0-862.14.4.el7.ve.x86_64
npm ERR! argv "/usr/bin/node" "/usr/bin/.npm__" "install" "-g" "node-inspector@0.12.8"
npm ERR! node v6.9.1
npm ERR! npm  v3.10.8
npm ERR! path /usr/lib/node_modules
npm ERR! code EROFS
npm ERR! errno -30
npm ERR! syscall access
npm ERR! rofs EROFS: read-only file system, access '/usr/lib/node_modules'
npm ERR! rofs This is most likely not a problem with npm itself
npm ERR! rofs and is related to the file system being read-only.
npm ERR! rofs
npm ERR! rofs Often virtualized file systems, or other file systems
npm ERR! rofs that don't support symlinks, give this error.
npm ERR! Please include the following file with any support request:
npm ERR!     /root/npm-debug.log
```

Logs:

```
[root@bigip1:Active:Standalone] ~ # tail -30 /root/npm-debug.log
7616 silly idealTree | `-- lodash@3.10.1
7616 silly idealTree +-- xmldom@0.1.31
7616 silly idealTree +-- xtend@4.0.2
7616 silly idealTree +-- y18n@3.2.2
7616 silly idealTree `-- yargs@3.32.0
7617 silly generateActionsToTake Starting
7618 silly install generateActionsToTake
7619 warn checkPermissions Missing write access to /usr/lib/node_modules
7620 silly rollbackFailedOptional Starting
7621 silly rollbackFailedOptional Finishing
7622 silly runTopLevelLifecycles Finishing
7623 silly install printInstalled
7624 verbose stack Error: EROFS: read-only file system, access '/usr/lib/node_modules'
7624 verbose stack     at Error (native)
7625 verbose cwd /root
7626 error Linux 3.10.0-862.14.4.el7.ve.x86_64
7627 error argv "/usr/bin/node" "/usr/bin/.npm__" "install" "-g" "node-inspector@0.12.8"
7628 error node v6.9.1
7629 error npm v3.10.8
7630 error path /usr/lib/node_modules
7631 error code EROFS
7632 error errno -30
7633 error syscall access
7634 error rofs EROFS: read-only file system, access '/usr/lib/node_modules'
7635 error rofs This is most likely not a problem with npm itself
7635 error rofs and is related to the file system being read-only.
7635 error rofs
7635 error rofs Often virtualized file systems, or other file systems
7635 error rofs that don't support symlinks, give this error.
7636 verbose exit [ -30, true ]
```

This seems like a directory access permission issue, but I can't change the read/write permissions on the F5 device. How should this be resolved?

Reference: f5-appsvcs-extension/contributing/node_inspector_profiling_as3.md at v3.54.2 · F5Networks/f5-appsvcs-extension
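The EROFS error is expected because BIG-IP mounts /usr read-only by design. As a hedged, lab-only sketch (not a supported change for production systems, and the linked contributing guide should take precedence), the usual workaround is to temporarily remount /usr read-write for the install and restore it afterwards:

```bash
# Temporarily allow writes to /usr, install the module, then restore read-only
mount -o remount,rw /usr
npm install -g node-inspector@0.12.8
mount -o remount,ro /usr
```

An alternative that avoids touching /usr is to install into a writable location such as /shared by pointing npm at a local prefix, then running the binary from there.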
F5 HA deployment in Azure using Azure Load Balancer

I just created an HA (Active/Standby) peer for one of our customers, adding an F5 to their current standalone infrastructure in Azure. We are using a 3-NIC deployment model, using the external interface for the VIPs and the internal for our HA peering. We are also using secondary IP addresses on the external NIC, which are in turn used for the VIPs on the F5.

✔ 3-NIC BIG-IP deployment (Management, Internal, External)
✔ Secondary IPs on the external NIC
✔ Those secondary IPs are mapped to BIG-IP Virtual Servers (VIPs)
✔ Internal NIC is used only for HA sync (not for traffic)

For redundancy I have suggested using CFE for failover, but the customer wants to use an Azure load balancer with the F5s as backend pool members. They do not want to use CFE.

Is it possible to deploy an F5 HA pair in Azure using an Azure Load Balancer while the VIPs are using secondary IPs on the external interface? I'm afraid using an ALB would require making changes to the current VIP configurations on F5 to support a wildcard. Any other HA deployment models within Azure given the current infrastructure would also be helpful.

Thank You
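For illustration of the wildcard change mentioned above (the virtual server name, port, and pool are hypothetical, and this is not a recommendation for or against the ALB design), a shared listener that accepts whatever destination the Azure LB forwards might look like this on the BIG-IP:

```bash
# One wildcard listener instead of one virtual server per secondary IP
tmsh create ltm virtual vs_alb_shared_443 destination 0.0.0.0:443 ip-protocol tcp profiles add { tcp } pool pool_app source-address-translation { type automap }
```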
Using n8n To Orchestrate Multiple Agents

I've been heads-down building a series of AI step-by-step labs, and this one might be my favorite so far: a practical, cost-savvy "mixture of experts" architectural pattern you can run with n8n and self-hosted models on Ollama.

The idea is simple. Not every prompt needs a heavyweight reasoning model. In fact, most don't. So we put a small, fast model in front to classify the user's request—coding, reasoning, or something else—and then hand that prompt to the right expert. That way, you keep your spend and latency down, and only bring out the big guns when you really need them.

Architecture at a glance:

- Two hosts: one for your models (Ollama) and one for your n8n app. Keeping these separate helps n8n stay snappy while the model server does the heavy lifting.
- Docker everywhere, with persistent volumes for both Ollama and n8n so nothing gets lost across restarts.
- Optional but recommended: NVIDIA GPU on the model host, configured with the NVIDIA Container Toolkit to get the most out of inference.

On the model server, we spin up Ollama and pull a small set of targeted models:

- deepseek-r1:1.5b for the classifier and general chit-chat
- deepseek-r1:7b for the reasoning agent (this is your "brains-on" model)
- codellama:latest for coding tasks (Python, JSON, Node.js, iRules, etc.)
- llama3.2:3b as an alternative generalist
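A minimal sketch of that two-host setup, assuming Docker and the NVIDIA Container Toolkit are already in place; the lab guide itself is the authoritative source, and the container names, volume names, and flags here are illustrative:

```bash
# Model host: Ollama with GPU access and a persistent volume
docker run -d --gpus=all --name ollama -p 11434:11434 -v ollama:/root/.ollama ollama/ollama
docker exec -it ollama ollama pull deepseek-r1:1.5b
docker exec -it ollama ollama pull deepseek-r1:7b
docker exec -it ollama ollama pull codellama:latest
docker exec -it ollama ollama pull llama3.2:3b

# App host: n8n with a persistent volume for workflows and credentials
docker volume create n8n_data
docker run -d --name n8n -p 5678:5678 -v n8n_data:/home/node/.n8n docker.n8n.io/n8nio/n8n
```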
On the app server, we run n8n. Inside n8n, the flow starts with the "On Chat Message" trigger. I like to immediately send a test prompt so there's data available in the node inspector as I build. It makes mapping inputs easier and speeds up debugging.

Next up is the Text Classifier node. The trick here is a tight system prompt and clear categories:

- Categories: Reasoning and Coding
- Options: When no clear match → Send to an "Other" branch
- Optional: You can allow multiple matches if you want the same prompt to hit more than one expert. I've tried both approaches. For certain, ambiguous asks, allowing multiple can yield surprisingly strong results.

I attach deepseek-r1:1.5b to the classifier. It's inexpensive and fast, which is exactly what you want for routing. In the System Prompt Template, I tell it:

- If a prompt explicitly asks for coding help, classify it as Coding
- If it explicitly asks for reasoning help, classify it as Reasoning
- Otherwise, pass the original chat input to a Generalist

From there, each classifier output connects to its own AI Agent node:

- Reasoning Agent → deepseek-r1:7b
- Coding Agent → codellama:latest
- Generalist Agent (the "Other" branch) → deepseek-r1:1.5b or llama3.2:3b

I enable "Retry on Fail" on the classifier and each agent. In my environment (cloud and long-lived connections), a few retries smooth out transient hiccups. It's not a silver bullet, but it prevents a lot of unnecessary red Xs while you're iterating.

Does this actually save money? If you're paying per token on hosted models, absolutely. You're deferring the expensive reasoning calls until a small model decides they're justified. Even self-hosted, you'll feel the difference in throughput and latency. CodeLlama crushes most code-related queries without dragging a reasoning model into it. And for general questions—"How do I make this sandwich?"—a small generalist is plenty.

A few practical notes from the build:

- Good inputs help. If you know you're asking for code, say so. Your classifier and downstream agent will have an easier time.
- Tuning beats guessing. Spend time on the classifier's system prompt. Small changes go a long way.
- Non-determinism is real. You'll see variance run-to-run. Between retries, better prompts, and a firm "When no clear match" path, you can keep that variance sane.
- Bigger models, better answers. If you have the budget or hardware, plugging in something like Claude, GPT, or a higher-parameter DeepSeek will lift quality. The routing pattern stays the same.

Where to take it next:

- Wire this to Slack so an engineering channel can drop prompts and get routed answers in place.
- Add more "experts" (e.g., a data-analysis agent or an internal knowledge agent) and expand your classifier categories.
- Log token counts/latency per branch so you can actually measure savings and adjust thresholds/models over time.

This is a lab, not production, but the pattern is production-worthy with the right guardrails. Start small, measure, tune, and only scale up the heavy models where you're seeing real business value.

Let me know what you build—especially if you try multi-class routing and send prompts to more than one expert. Some of the combined answers I've seen are pretty great.

Here's the lab in our git, if you'd like to try it out for yourself. If video is more your thing, try this:

Thanks for building along, and I'll see you in the next lab.
Big-IP LTM integration with Big-IP DNS in Azure
We are deploying BIG-IPs to Azure. We are going with 3-NIC (mgmt/client/server) BIG-IP LTM/APM nodes. They will integrate with existing BIG-IP DNS nodes.

Which NIC should be used, not only for the initial bigip_add (port 22), but also for iQuery on port 4353? What is best practice? I understand big3d will listen on the self IPs and on management. https://clouddocs.f5.com/cloud/public/v1/azure/Azure_multiNIC.html mentions port 4353 communication on the internal network for config sync, etc. What about BIG-IP DNS integration and iQuery communications?

Does anybody have any experience with this configuration and/or best practice recommendations?
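For illustration of the discovery step itself (the self IP address and self IP name below are hypothetical; which interface to target is exactly the question being asked), device discovery and the iQuery mesh are typically exercised like this, with the chosen LTM self IP needing TCP 4353 in its port lockdown list:

```bash
# On the BIG-IP DNS system: exchange certificates with the new LTM (runs over SSH, port 22)
bigip_add 10.1.20.5

# On the LTM: make sure the targeted self IP permits iQuery
tmsh modify net self internal_self allow-service add { tcp:4353 }

# Back on the DNS side: verify the iQuery connection state
tmsh show gtm iquery
```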
Requirement for BIG-IQ VM Deployment in AWS

Can anyone please advise on the below?

We have a requirement to deploy a BIG-IQ VM in the AWS cloud to manage our existing LTM, GTM, and WAF devices. We are planning to manage approximately 100 F5 devices using BIG-IQ. Could you please share the recommended system requirements (RAM and disk space) for the BIG-IQ instance to support this scale, along with any other details required?

Kind regards
Cannot ping external interface

Hi All, first post here, first-time F5 devices, and a complete novice. I have a couple of BIG-IP devices and the luxury to play and learn before we go live. I have a question I am sure is going to be simple (and probably stupid).

On our LAN I have been able to set up one device with a management interface and a virtual server, and all the hosts and nodes are connecting fine. This is a typical round-robin setup.

The thing I cannot figure out is the external port and address. For brevity's sake and simplicity, I have one physical interface connected directly to the gateway provided by our ISP, and we have a block of static public IPs provided. I have assigned, or want to assign, one of the spare IP addresses to this interface. This is the method we use with our other (non-F5) firewalls and it works, but not here.

I have created a VLAN called external, set it to untagged, and assigned the interface connected to the gateway to this VLAN. I then assigned that VLAN to my virtual server. However, I cannot ping or reach the external IP address in any fashion, and I am not sure why.
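For comparison, here is a minimal sketch of the pieces involved on the external side; the interface number, addresses, and object names are hypothetical. On BIG-IP, an address is not assigned to the interface itself but to the VLAN via a self IP, and the self IP's port lockdown setting controls which services it answers on:

```bash
# VLAN on the ISP-facing interface, then a self IP from the public block on that VLAN
tmsh create net vlan external interfaces add { 1.1 { untagged } }
tmsh create net self external_self address 203.0.113.10/29 vlan external allow-service default

# Review what was created
tmsh list net self external_self
```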