cloud
CVE mitigation on F5 XC vs classic F5 WAF
Hi, there is a serious CVE out there: https://www.cve.org/CVERecord?id=CVE-2025-55182 and F5 reacted quickly: https://my.f5.com/manage/s/article/K000158058#BIG-IP. F5 itself is not affected, but F5 has created signatures addressing this issue. However, it seems they are NOT available in F5 XC. That leads me to wonder: what is the process, and what can we expect? We have deployed the signatures in some on-site environments, but what about services behind F5 XC? Thanks, Zdenek

Global Log Receiver - HTTP Receiver keeps sending the same logs
I have set up an HTTP receiver as follows:

    {
      "metadata": {
        "name": "syslog-ng",
        "namespace": "shared",
        "labels": {},
        "annotations": {},
        "disable": false
      },
      "spec": {
        "ns_list": {
          "namespaces": [ "my-company" ]
        },
        "http_receiver": {
          "uri": "http://xxx.xxx.xxx.xxx:8084",
          "auth_none": {},
          "compression": { "compression_none": {} },
          "batch": {
            "timeout_seconds_default": {},
            "max_events_disabled": {},
            "max_bytes_disabled": {}
          },
          "no_tls": {}
        },
        "security_events": {}
      }
    }

I receive logs, but they are repeated on every new POST. The Test Connection button fails with a 504 Gateway Timeout. On the receiving end I have syslog-ng with an HTTP receiver; not much is configured there. Any ideas?
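To narrow down whether the duplicates come from the sender or from the receiving side, a throwaway HTTP receiver can hash each event line and flag repeats. This is only a debugging sketch, not the syslog-ng configuration: the port and the one-JSON-object-per-line framing are assumptions, and the script simply acknowledges every POST with 200. A receiver that never acknowledges (which would also fit the 504 on Test Connection) may cause the sender to treat a batch as undelivered and resend it.

    # Stand-in HTTP receiver for debugging duplicate deliveries (sketch only).
    import hashlib
    from http.server import BaseHTTPRequestHandler, HTTPServer

    seen = set()  # hashes of event lines already received

    class DebugReceiver(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            body = self.rfile.read(length)
            for line in body.splitlines():
                digest = hashlib.sha256(line).hexdigest()
                status = "DUPLICATE" if digest in seen else "new"
                seen.add(digest)
                print(f"{status}: {line[:120]!r}")
            # Acknowledge quickly so the sender does not time out and retry.
            self.send_response(200)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8084), DebugReceiver).serve_forever()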
AFM Logging Proxy Protocol Header Sent by F5 XC

Hello, we are using the F5 Distributed Cloud (XC) DDoS service for our published services in proxy mode. All traffic coming to the F5 BIG-IP AFM is sourced from XC IP ranges, and XC inserts a PROXY protocol version 2 header. I need your help to understand how to extract the original client IP from the header and send it to an external syslog server, via an iRule or any other way. Thanks
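For reference, the original client address sits right after the 16-byte fixed part of the PROXY protocol v2 header (12-byte signature, version/command byte, family/protocol byte, and a 2-byte length). The Python below is only a sketch of that byte layout, not an iRule and not F5-provided code; the same offsets would apply if the payload were parsed on the BIG-IP (for example with binary scan in an iRule) before logging to the external syslog server.

    # Illustrative parser for the PROXY protocol v2 header (IPv4/TCP case only).
    import socket
    import struct

    PP2_SIGNATURE = b"\r\n\r\n\x00\r\nQUIT\n"  # fixed 12-byte magic

    def parse_proxy_v2(data: bytes):
        """Return (src_ip, src_port, dst_ip, dst_port) from a PROXY v2 header."""
        if not data.startswith(PP2_SIGNATURE):
            raise ValueError("not a PROXY protocol v2 header")
        ver_cmd, fam_proto, length = struct.unpack("!BBH", data[12:16])
        if ver_cmd >> 4 != 2:
            raise ValueError("unsupported PROXY protocol version")
        addr = data[16:16 + length]
        if fam_proto >> 4 == 1:  # AF_INET: 4B src, 4B dst, 2B src port, 2B dst port
            src_ip = socket.inet_ntoa(addr[0:4])
            dst_ip = socket.inet_ntoa(addr[4:8])
            src_port, dst_port = struct.unpack("!HH", addr[8:12])
            return src_ip, src_port, dst_ip, dst_port
        raise ValueError("only the IPv4 case is handled in this sketch")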
F5 HA deployment in Azure using Azure Load Balancer

I just created an HA (Active/Standby) pair for one of our customers, adding an F5 to their current standalone infrastructure in Azure. We are using a 3-NIC deployment model, with the external interface for the VIPs and the internal interface for our HA peering. We are also using secondary IP addresses on the external NIC, which are in turn used for the VIPs on the F5.

✔ 3-NIC BIG-IP deployment (Management, Internal, External)
✔ Secondary IPs on the external NIC
✔ Those secondary IPs are mapped to BIG-IP Virtual Servers (VIPs)
✔ Internal NIC is used only for HA sync (not for traffic)

For redundancy I suggested using CFE for failover, but the customer wants to use an Azure Load Balancer with the F5s as backend pool members. They do not want to use CFE. Is it possible to deploy an F5 HA pair in Azure using an Azure Load Balancer while the VIPs are using secondary IPs on the external interface? I'm afraid using an ALB would require changing the current VIP configurations on the F5 to support a wildcard. Any other HA deployment models within Azure, given the current infrastructure, would also be helpful. Thank you

Big-IP LTM integration with Big-IP DNS in Azure
We are deploying BIG-IPs to Azure, going with 3-NIC (mgmt/client/server) BIG-IP LTM/APM nodes. They will integrate with existing BIG-IP DNS nodes. Which NIC should be used, not only for the initial bigip_add (port 22), but also for iQuery on port 4353? What is best practice? I understand big3d will listen on self IPs and the management interface. Per https://clouddocs.f5.com/cloud/public/v1/azure/Azure_multiNIC.html, port 4353 communication happens on the internal network for config sync, etc. What about BIG-IP DNS integration and iQuery communication? Does anybody have experience with this configuration and/or best-practice recommendations?

Kerberos Authentication Failing for Exchange 2016 Behind F5 Cloud WAF
Hi Team, we're running Microsoft Exchange Server 2016 CU24 on Windows Server 2019 and have enabled Kerberos (Negotiate) authentication because NTLM is deprecated in F5 Cloud WAF.

Environment summary:
Exchange DAG setup: 4 servers in the primary site, 2 in the DR site
Active Directory: Windows Server 2019
F5 component: Cloud WAF (BIG-IP F5 Cloud Edition) handling inbound HTTPS traffic
Namespaces: mail.domain.lk, webmail.domain.lk, autodiscover.domain.lk
Authentication configuration: Negotiate (Kerberos) with NTLM, Basic, and OAuth as fallback
SPNs: correctly registered under the ASA (Alternate Service Account) computer account
Certificate: SAN includes mail, webmail, and autodiscover

Current status:
Internal domain-joined Outlook 2019 clients work without issue.
Outlook 2016, Office 2021, and Microsoft 365 desktop apps continue to prompt for passwords.
Internal OWA and external OWA through F5 Cloud WAF both work correctly.

Observations:
Autodiscover XML shows <AuthPackage>Negotiate</AuthPackage> for all URLs.
Kerberos authentication works internally, so the SPN and ASA setup are confirmed healthy.
Password prompts appear only when traffic passes through F5 Cloud WAF, which terminates TLS before reaching Exchange.

Suspected cause:
F5 Cloud WAF may not support Kerberos Constrained Delegation (KCD) in the current configuration.
TLS termination on the F5 breaks the Kerberos authentication chain.
NTLM/Basic fallback might not be fully passed through from the WAF to the backend.

We would appreciate clarification on the following:
Does F5 Cloud WAF support Kerberos Constrained Delegation (KCD) for backend Exchange 2016 authentication?
If not, can Kerberos pass-through or secure fallback methods (NTLM/Basic) be enabled?
What is the recommended configuration for supporting Outlook 2016 and Microsoft 365 clients when Exchange advertises Kerberos (Negotiate)?
Is there an F5 reference configuration or iRule template for this scenario (Exchange 2016 + Kerberos)?

Thank you for your guidance.

Azure F5 deployed using marketplace is unable to ping backend server
Hello all, I have deployed an F5 load balancer in Azure using the Azure Marketplace. I created a self IP on the internal VLAN with IP address 10.10.10.24/28; the backend server, an Ubuntu VM also hosted in Azure, has IP address 10.10.10.21/28. I am unable to ping the backend server from the F5 and vice versa.

Sanity checks:
The self IP is attached to an untagged interface.
Port lockdown is set to Allow All.
The MAC address associated with port 1.1 is correct and corresponds to the 10.10.10.24 subnet.
curl works locally on the backend server (10.10.10.21).
I am able to ping each self IP from its own device, i.e. from the F5 and from the backend server.
In parallel, I installed apache2 on the backend server and can curl port 80 on 10.10.10.21 locally, but the HTTP health monitor check from the F5 self IP fails.
There is no NSG associated with this subnet.

Any idea what could be wrong? Why are ICMP and the HTTP health check failing? I have been troubleshooting for the past 4 hours with no success.
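Since both ICMP and the HTTP monitor fail, a quick way to separate "no network path at all" from "monitor-specific problem" is a plain TCP connect test toward port 80 on the backend. This is just a generic reachability sketch (the address and port are taken from the post above); it can be run from any host sitting in the same subnet as the F5 self IP:

    # Plain TCP connect test, independent of ICMP and of the BIG-IP monitor.
    import socket

    def tcp_check(host: str, port: int, timeout: float = 3.0) -> bool:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError as exc:
            print(f"connect to {host}:{port} failed: {exc}")
            return False

    if __name__ == "__main__":
        print("TCP/80 reachable:", tcp_check("10.10.10.21", 80))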
F5 XC HTTP 404 route_not_found / rsp_code 404

I would like to add a few more points about the HTTP 404 error route_not_found / rsp_code 404 in an XC (RE + CE) deployment.

1. Even if XC has the correct host match value in the route, you might still observe a 404 response. In such cases, check the DNS configuration on the CEs. A possible reason is that the CEs are unable to resolve DNS for the host configured in the route.

2. Even if XC has the correct host match value, the path might not match. For example, if you have only the single route shown below and the request comes in as https://example.com/, you may see rsp_code 404, as the request does not match any route.

Example:
HTTP Method: ANY
Path Match: Prefix
Prefix: /hello
Headers: Host example.com
Origin pool: example_orgin_pool

https://my.f5.com/manage/s/article/K000147490
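A minimal way to see the prefix-match behaviour from point 2, using the hypothetical example.com / /hello values from the route above (not a real deployment; assumes the requests library is available):

    # Requests to paths outside the configured /hello prefix are expected to
    # return the load balancer's 404 (route_not_found); /hello paths should match.
    import requests

    for path in ("/", "/hello", "/hello/world"):
        resp = requests.get(f"https://example.com{path}", timeout=5)
        print(path, resp.status_code)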
F5 CNF/BNK issue with DNS Express tmm scaling and zone notifications

I ran into an interesting issue with DNS Express on Next for Kubernetes while playing in a test environment. DNS zone mirroring is done by the zxfrd pod, and for the optional NOTIFY messages that feed zone updates through the TMM to zxfrd you need to create a listener ("F5BigDnsApp"), as shown in https://clouddocs.f5.com/cnfs/robin/latest/cnf-dnsexpress.html#create-a-dns-zone-to-answer-dns-queries.

The issue appears when you have two or more TMM pods in the same namespace. The "F5BigDnsApp" CRD acts like a virtual server/listener, and on the internal VLANs the TMMs on two different Kubernetes/OpenShift nodes advertise the same IP address at layer 2, causing an ARP conflict. This is visible with "kubectl logs" ("oc logs" on OpenShift) on the TMM pods, which report the duplicate ARP. Interestingly, the same does not happen for the normal listener on the external VLAN (the one that accepts and answers client DNS queries), as I think ARP is suppressed by default for the external listener, which can run on two or more TMMs because ECMP BGP is used to distribute traffic to the TMMs by design.

I see four possible solutions:

1. Allow the ARP behaviour of the "F5BigDnsApp" CRD to be controlled for internal as well as external VLANs (with BGP ECMP then also used on the server side).
2. Allow the "F5BigDnsApp" listener to be deployed on just one TMM even when there are more.
3. Allow the listener to use an IP address that is not part of the internal subnet. Currently, as I see with "kubectl logs" on the ingress controller (f5ing-tmm-pod-manager), the configuration is not pushed to the TMM in that case; "configview" from the debug sidecar container on the TMM pods shows no listener at all. The manager logs suggest this is because the listener IP address is not within the self IP range of the internal VLAN. This may be a system limitation, and perhaps no one considered this use case, since on BIG-IP it is supported to have a VIP outside the self IP range, which for that reason is not advertised with ARP.
4. The only option that works today, as far as I can tell: run the TMMs in different namespaces on different Kubernetes nodes, with affinity rules that place each TMM on a different node even across namespaces by matching a configured label (see the example below). Perhaps this is the current working design, one zxfrd pod with one TMM pod per namespace, but then auto-scaling may not work, since auto-scale would create a new TMM pod in the same namespace when needed.

Example:

    affinity:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: tmm
            # Match Pods in any namespace that have this label
            namespaceSelector: {}   # empty selector = all namespaces
            topologyKey: "kubernetes.io/hostname"

It should also be considered whether the zxfrd pod can push the DNS zone into the RAM of more than one TMM pod; perhaps only one-to-one is currently supported. Maybe it was never tested what happens when you have a Security Context IP address on the internal network and multiple TMM pods. Interesting stuff that I just wanted to share from testing things out 😄