BNK
F5 CNF/BNK issue with DNS Express tmm scaling and zone notifications
I ran into an interesting issue with DNS Express on BIG-IP Next for Kubernetes while playing in a test environment. DNS zone mirroring is done by the zxfrd pod, and you need to create an "F5BigDnsApp" listener, as shown in https://clouddocs.f5.com/cnfs/robin/latest/cnf-dnsexpress.html#create-a-dns-zone-to-answer-dns-queries, for the optional NOTIFY that feeds the zone transfer to the TMM and then on to the zxfrd pod.

The issue happens when you have two or more TMM pods in the same namespace. The "F5BigDnsApp" behaves like a virtual server/listener, so on the internal VLANs there is an ARP conflict: the two TMMs on two different Kubernetes/OpenShift nodes advertise the same IP address at layer 2. You can see this with "kubectl logs" ("oc logs" on OpenShift) on the TMM pods, which report that a duplicate ARP was detected. Interestingly, the same does not happen for the normal listener on the external VLAN (the one that captures and responds to client DNS queries); I think ARP is suppressed by default for the external listener, which can sit on two or more TMMs because ECMP BGP is used by design to distribute the traffic to the TMMs.

I see four possible solutions:

1. Make ARP controllable on the "F5BigDnsApp" CRD for internal or external VLANs (with BGP ECMP then also used on the server side).
2. Allow the "F5BigDnsApp" to be deployed on just one TMM even when there are more.
3. Allow the listener IP address to be outside the internal self-IP range. Currently, as I see with "kubectl logs" on the ingress controller (f5ing-tmm-pod-manager), the config is then not pushed to the TMM, and "configview" from the debug sidecar container on the TMM pods shows no listener at all. The manager logs suggest the listener IP is rejected because it is not part of the self-IP range on the internal VLAN. This may be a system limitation, and perhaps no one thought about this use case; on classic BIG-IP it is supported to have a VIP outside the self-IP range, and such a VIP is not advertised with ARP for exactly this reason.
4. The solution that works at the moment: run the TMMs in different namespaces on different Kubernetes nodes, with anti-affinity rules that place each TMM on a different node even across namespaces by matching a configured label (see the example below). Maybe this is the current intended design, one zxfrd pod with one TMM pod per namespace, but then auto-scaling may not work, since autoscaling would need to create a new TMM pod in the same namespace.

Example:

```yaml
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: tmm
        # Match Pods in any namespace that have this label
        namespaceSelector: {}  # empty selector = all namespaces
        topologyKey: "kubernetes.io/hostname"
```

It should also be considered whether the zxfrd pod can push the DNS zone into the RAM of more than one TMM pod; maybe it can't, and maybe only one-to-one is currently supported. And maybe it was never tested what happens when you have a Security Context IP address on the internal network and multiple TMM pods. Interesting stuff that I just wanted to share, as this was just testing things out 😄
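One practical note on that anti-affinity workaround: the rule only takes effect if every TMM pod, in every namespace, actually carries the matching label. A minimal sketch of the pod side (the pod name, namespace, and image below are my placeholders, not values from the CNF Helm charts):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: f5-tmm              # placeholder name; the CNF Helm chart sets this in practice
  namespace: cnf-zone-a     # one TMM (plus its zxfrd) per namespace in this workaround
  labels:
    app: tmm                # must match the podAntiAffinity labelSelector above
spec:
  containers:
    - name: tmm
      image: example/tmm:latest   # placeholder image reference
```

With that in place, the scheduler refuses to put two pods labeled app: tmm on the same node (topologyKey "kubernetes.io/hostname"), regardless of their namespace.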
F5 Kubernetes CNF/BNK GSLB functionality?

Hello everyone, is there GSLB functionality in F5 CNF/BNK? I see the containers gslb-engine (probably the main GTM/DNS module) and gslb-probe-agent (probably the big3d in a container/pod), but no CR/CRD definitions for them. Also, can this data be shared between F5 TMMs in different clusters (something like DNS sync groups), or used for probing normal F5 BIG-IP devices (not in Kubernetes)?

https://clouddocs.f5.com/cnfs/robin/latest/cnf-software-install.html
https://clouddocs.f5.com/cnfs/robin/latest/intro.html
F5 CNF/SPK/BNK etc. support for custom URL classifications/apps/IPS signatures?

While playing with CNF/SPK/BNK etc., I didn't see anything about this in the docs: https://clouddocs.f5.com/cnfs/robin/latest/

I think it is an important feature: if a URL is wrongly classified by the Brightcloud DB, you should be able to add it to a custom URL category, for example to allow it, as shown in https://clouddocs.f5.com/cnfs/aon/latest/cnf-pe-url-categorization.html. I think this is hidden somewhere, as there is an option called "customdb", so maybe the downloader pod can be configured to pull the custom URL classification. Since iRules for CNF do not support the "HTTP_REQUEST" and "HTTP_RESPONSE" events, as mentioned in https://clouddocs.f5.com/cnfs/openshift/latest/cnf-irule-crd.html, this seems important.

Beyond that, custom IPS signatures like on the normal AFM would be nice. Since there is an IPS pod, I think that, like IP Intelligence, it could connect to an external feed list that carries the custom signatures (and the same for the URL category): https://clouddocs.f5.com/cnfs/robin/latest/cnf-ipi-feedlist-crd.html

As for the custom apps that PEM builds with iRules (https://techdocs.f5.com/en-us/bigip-14-1-0/big-ip-policy-enforcement-manager-implementations-14-1-0/creating-custom-classifications.html), I am just mentioning them, but I see fewer use cases there than with custom URL categories and custom IPS signatures.

I did write to cnfdocs@f5.com, as mentioned in the web documents: "To provide feedback and help improve this document, please email us at cnfdocs@f5.com." Hope they see it 🙂
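Just to make the suggestion concrete, here is the rough shape such a feed-driven custom URL category could take if it followed the IPI feed-list pattern. This is purely a sketch of the idea: the kind and every spec field below are invented for illustration and are not an actual CNF CRD (the apiVersion group is my assumption based on other CNF CRs; the real feed-list schema is in the linked docs).

```yaml
# Hypothetical CR, not a real CNF CRD: sketches how a custom URL category
# could be fed from an external list, mirroring the IPI feed-list pattern.
apiVersion: "k8s.f5net.com/v1"    # assumption, based on other CNF CRD groups
kind: F5BigCustomUrlcat           # invented kind, for illustration only
metadata:
  name: allowed-misclassified-urls
  namespace: cnf-namespace        # placeholder namespace
spec:
  category: "custom_allowed"      # invented field: category the listed URLs map to
  feed:
    url: "https://feeds.example.com/custom-urls.txt"  # invented field: external list location
    pollInterval: 3600            # invented field: refresh period in seconds
```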
Scaling and Traffic-Managed Model Context Protocol (MCP) with BIG-IP Next for K8s

Introduction

As AI models get more advanced, running them at scale, especially in cloud-native environments like Kubernetes, can be tricky. That's where the Model Context Protocol (MCP) comes in. MCP makes it easier to connect and interact with AI models, but managing all the traffic and scaling these services as demand grows is a whole different challenge. In this article and demo video, I will show how F5's BIG-IP Next for K8s (BNK), a powerful cloud-native traffic management platform from F5, can solve that, keep things running smoothly, and scale your MCP services as needed.

Model Context Protocol (MCP) in a nutshell

There are many articles on the internet explaining what MCP is; please refer to those for the details. In a nutshell, it is a standard framework, or specification, for securely connecting AI apps to your critical data, tools, and workflows. The specification allows:

- Tracking of context across multiple conversations
- Tool integration: models can call external tools
- Shared memory/state: remembering information

MCP "glues" models to tools through a universal interface, a "USB-C for AI".

What EXACTLY does MCP solve?

MCP addresses many challenges in the AI ecosystem. I believe it solves two key ones:

- The complexity of integrating an AI model (LLM) with external sources and tools, through standardization on a universal connector ("USB-C for AI"): everyone builds a "USB-C for AI" port so that everything can easily plug into everything else. Interoperability.
- Security with external integrations: a framework to establish secure connections and to manage permissions and authorization.

What is BIG-IP Next for K8s (BNK)?

BNK is F5's modernized version of the well-known BIG-IP platform, redesigned to work seamlessly in cloud-native environments like Kubernetes. It is a scalable networking and security solution for ingress and egress traffic control. It builds on decades of F5's leadership in application delivery and security and powers Kubernetes networking for today's complex workloads. BNK can be deployed on x86 architecture or on ARM architecture, such as the NVIDIA Data Processing Unit (DPU).

Let's see how F5's BNK scales and traffic-manages an AIOps ecosystem.

Demo

- Architecture
- Setup
- Video

Key Takeaways

- BIG-IP Next for K8s is the backbone of the MCP architecture
- Technology built on decades of market-leading application delivery controller technology
- Secure, deliver, and optimize your AI infrastructure
- Deep insight through observability and visibility of your MCP traffic
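As a closing sketch for anyone who wants to reproduce the backend side of such a setup: the MCP servers themselves are ordinary Kubernetes workloads that BNK fronts and scales traffic across. Everything below (names, image, port) is a placeholder of mine, not taken from the demo environment:

```yaml
# Minimal sketch: a hypothetical MCP server Deployment plus a Service that an
# ingress traffic manager such as BNK can front and load-balance across.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mcp-server              # placeholder name
spec:
  replicas: 3                   # scale out as MCP demand grows
  selector:
    matchLabels:
      app: mcp-server
  template:
    metadata:
      labels:
        app: mcp-server
    spec:
      containers:
        - name: mcp-server
          image: example/mcp-server:latest   # placeholder image
          ports:
            - containerPort: 8080            # assumed MCP listen port
---
apiVersion: v1
kind: Service
metadata:
  name: mcp-server
spec:
  selector:
    app: mcp-server
  ports:
    - port: 80
      targetPort: 8080
```

BNK would then advertise and load-balance the externally reachable address in front of this Service, so that scaling the Deployment's replicas stays transparent to MCP clients.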