F5 CNF/BNK issue with DNS Express TMM scaling and zone notifications
I ran into an interesting issue with DNS Express on BIG-IP Next for Kubernetes (CNF/BNK) while playing in a test environment. DNS zone mirroring is done by the zxfrd pod, and you need to create an "F5BigDnsApp" listener, as shown in https://clouddocs.f5.com/cnfs/robin/latest/cnf-dnsexpress.html#create-a-dns-zone-to-answer-dns-queries, for the optional NOTIFY that is passed to the TMM and then on to the zxfrd pod.

The issue appears when you have two or more TMM pods in the same namespace. The "F5BigDnsApp" acts like a virtual server/listener, and on the internal VLAN there is an ARP conflict, because the two TMMs on two different Kubernetes/OpenShift nodes advertise the same IP address at layer 2. You can see this with "kubectl logs" ("oc logs" on OpenShift) on the TMM pods, which report a duplicate ARP detected. Interestingly, the same does not happen for the normal listener on the external VLAN (the one that receives and answers client DNS queries); I think ARP is disabled by default for the external listener, which can live on two or more TMMs because BGP ECMP is used by design to distribute the traffic across the TMMs.

I see four possible solutions:

1. Be able to control ARP on the "F5BigDnsApp" CRD for internal or external VLANs (BGP ECMP would then also be used on the server side).
2. Be able to pin the "F5BigDnsApp" to just one TMM even when there are more.
3. Allow the listener to use an IP address that is not part of the internal IP address range. When I try this, however, "kubectl logs" on the ingress controller (f5ing-tmm-pod-manager) shows the config is not pushed to the TMM, and "configview" from the debug sidecar container on the TMM pods shows no listener at all. The manager logs suggest this is because the listener IP address is not part of the self-IP range under the internal VLAN. This may be a system limitation, and perhaps nobody thought about this use case; on classic BIG-IP it is supported to have a VIP outside the self-IP range, which is not advertised with ARP for exactly this reason.
4. The solution that works at the moment: run the TMMs in different namespaces on different Kubernetes nodes, with anti-affinity rules that place each TMM on a different node even when the TMMs are in different namespaces, by matching a configured label (see the example below). Maybe this is the current intended design, one zxfrd pod with one TMM pod per namespace, but then auto-scaling may not work, since auto-scale would create the new TMM pod in the same namespace.

Example:

    affinity:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: tmm            # Match Pods in any namespace that have this label
          namespaceSelector: {}   # empty selector = all namespaces
          topologyKey: "kubernetes.io/hostname"

It should also be considered whether the zxfrd pod can push the DNS zone into the RAM of more than one TMM pod; maybe it can't, and only a one-to-one mapping is currently supported. Maybe it was simply never tested what happens when you have a security context IP address on the internal network and multiple TMM pods. Interesting stuff that I just wanted to share, as this was just testing things out 😄
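To show where that anti-affinity block would actually sit, here is a rough sketch of a pod template carrying it. The Deployment name, namespace, and image below are placeholders of mine, not taken from the CNF Helm chart (which is what really renders the TMM pods), so treat it purely as an illustration of the placement:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: f5-tmm                  # placeholder name; the real object comes from the CNF Helm chart
      namespace: cnf-namespace-a    # one TMM (plus its zxfrd) per namespace in this workaround
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: tmm
      template:
        metadata:
          labels:
            app: tmm                # the label the anti-affinity rule matches across namespaces
        spec:
          affinity:
            podAntiAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
              - labelSelector:
                  matchLabels:
                    app: tmm
                namespaceSelector: {}   # empty selector = all namespaces
                topologyKey: "kubernetes.io/hostname"
          containers:
          - name: tmm
            image: example.registry/f5-tmm:latest   # placeholder image

With this in place, the scheduler refuses to put two pods labeled app: tmm on the same node, regardless of namespace, which is what keeps the internal listener IPs from being ARPed by two nodes at once.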

How to rewrite a path to a backend service dropping the prefix and passing the remaining path?

Hello, I am not sure whether my posting is appropriate in this area, so please delete it if there is a violation of posting rules... This must be a common task, but I cannot figure out how to do the following fanout rewrite in our nginx ingress:

http://abcccc.com/httpbin/anything -> /anything (the httpbin backend service)

When I create the following ingress with a path of '/' and send the query, I receive a proper response.

    curl -I -k http://abczzz.com/anything

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: mikie-ingress
      namespace: mikie
    spec:
      ingressClassName: nginx
      rules:
      - host: abczzz.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: httpbin-service
                port:
                  number: 8999

What I really need is to be able to redirect to different services off of this single host, so I changed the ingress to the following, but the query always fails with a 404. Basically, I want the /httpbin to disappear and the remaining path to be passed on to the backend service, httpbin.

    curl -I -k http://abczzz.com/httpbin/anything

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: mikie-ingress
      namespace: mikie
      annotations:
        nginx.ingress.kubernetes.io/rewrite-target: /$2
    spec:
      ingressClassName: nginx
      rules:
      - host: abczzz.com
        http:
          paths:
          - path: /httpbin(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: httpbin-service
                port:
                  number: 8999

Thank you for your time and interest,
Mike
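P.S. To make the fanout goal concrete, the pattern I am aiming for would look roughly like the sketch below. It follows the ingress-nginx rewrite examples, which use pathType ImplementationSpecific for regex paths together with the use-regex annotation; the second service name and port are just placeholders, and this is an untested sketch rather than a known-good config:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: mikie-ingress
      namespace: mikie
      annotations:
        nginx.ingress.kubernetes.io/use-regex: "true"
        nginx.ingress.kubernetes.io/rewrite-target: /$2
    spec:
      ingressClassName: nginx
      rules:
      - host: abczzz.com
        http:
          paths:
          # /httpbin/anything -> /anything on the httpbin backend
          - path: /httpbin(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: httpbin-service
                port:
                  number: 8999
          # /other/anything -> /anything on a second backend (placeholder service)
          - path: /other(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: other-service     # placeholder, not a real service in my cluster
                port:
                  number: 8080          # placeholder port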

SNAT ingress public to private, SNAT egress private to public pool

Hello, I have a requirement to SNAT all traffic inbound to the VIP to a private IP (pool) on the same subnet as my internal hosts. That part is simple. However, the egress or return traffic outbound, from the pool member back to the client, must be SNAT'd once again (also a requirement), to a pool of public addresses. So, SNAT in, then SNAT out. It seems as though I would need to SNAT on HTTP_RESPONSE, back to the client. If I am correct, or if there is a better way, please advise.

BigIP Controller for Kubernetes not adding pool members

Good evening,

I'm trying to get the BigIP Controller up and running in my lab with CRDs, but I can't get it to work. I'll try to give the information needed for troubleshooting, but please bear with me and let me know if I missed something. The situation is this: the controller talks to the F5 and creates the virtual server and the pool successfully, but the pool is empty.

I used the latest Helm chart and am running the container with the following parameters (note that I did not use the nodeSelector option, although I tried that too):

    - --credentials-directory
    - /tmp/creds
    - --bigip-partition=rancher
    - --bigip-url=bigip-01.domain.se
    - --custom-resource-mode=true
    - --verify-interval=30
    - --insecure=true
    - --log-level=DEBUG
    - --pool-member-type=nodeport
    - --log-as3-response=true

VirtualServer manifest:

    apiVersion: "cis.f5.com/v1"
    kind: VirtualServer
    metadata:
      namespace: istio-system
      name: istio-vs
      labels:
        f5cr: "true"
    spec:
      virtualServerAddress: "192.168.1.225"
      virtualServerHTTPSPort: 443
      tlsProfileName: bigip-tlsprofile
      httpTraffic: none
      pools:
      - service: istio-ingressgateway
        servicePort: 443

TLSProfile:

    apiVersion: cis.f5.com/v1
    kind: TLSProfile
    metadata:
      name: bigip-tlsprofile
      namespace: istio-system
      labels:
        f5cr: "true"
    spec:
      tls:
        clientSSL: ""
        termination: passthrough
        reference: bigip

The istio-ingressgateway service:

    kubectl describe service -n istio-system istio-ingressgateway
    ... omitted some info ...
    Name:                     istio-ingressgateway
    Selector:                 app=istio-ingressgateway,istio=ingressgateway
    ... omitted some info ...
    Port:                     status-port  15021/TCP
    TargetPort:               15021/TCP
    NodePort:                 status-port  32395/TCP
    Endpoints:                10.42.2.9:15021
    Port:                     http2  80/TCP
    TargetPort:               8080/TCP
    NodePort:                 http2  31380/TCP
    Endpoints:                10.42.2.9:8080
    Port:                     https  443/TCP
    TargetPort:               8443/TCP
    NodePort:                 https  31390/TCP
    Endpoints:                10.42.2.9:8443
    Port:                     tcp  31400/TCP
    TargetPort:               31400/TCP
    NodePort:                 tcp  31400/TCP
    Endpoints:                10.42.2.9:31400
    Port:                     tls  15443/TCP
    TargetPort:               15443/TCP
    NodePort:                 tls  31443/TCP
    Endpoints:                10.42.2.9:15443
    Session Affinity:         None
    External Traffic Policy:  Cluster
    Events:                   <none>

The pod running the gateway:

    kubectl describe pod -n istio-system istio-ingressgateway-647f8dc56f-kqf7g
    Name:         istio-ingressgateway-647f8dc56f-kqf7g
    Namespace:    istio-system
    Priority:     0
    Node:         rancher-prod1/192.168.1.45
    Start Time:   Fri, 19 Mar 2021 21:20:23 +0100
    Labels:       app=istio-ingressgateway
                  chart=gateways
                  heritage=Tiller
                  install.operator.istio.io/owning-resource=unknown
                  istio=ingressgateway
                  istio.io/rev=default
                  operator.istio.io/component=IngressGateways
                  pod-template-hash=647f8dc56f
                  release=istio
                  service.istio.io/canonical-name=istio-ingressgateway
                  service.istio.io/canonical-revision=latest

I should also add that I'm using this ingress gateway to access applications via the exposed NodePort, so I know it works.

Controller log excerpt:

    2021/03/27 20:25:43 [DEBUG] [CORE] NodePoller (0xc0001d45a0) ready to poll, last wait: 30s
    2021/03/27 20:25:43 [DEBUG] [CORE] NodePoller (0xc0001d45a0) notifying listener: {l:0xc0000da300 s:0xc0000da360}
    2021/03/27 20:25:43 [DEBUG] [CORE] NodePoller (0xc0001d45a0) listener callback - num items: 3 err: <nil>
    2021/03/27 20:25:50 [DEBUG] Found endpoints for backend istio-system/istio-ingressgateway: []

Looking at the code for the controller, I read the return type declaration as saying the NodePoller returned 3 nodes and 0 errors:

    type pollData struct {
        nl  []v1.Node
        err error
    }

Controller version: f5networks/k8s-bigip-ctlr:2.3.0
F5 version: BIG-IP 16.0.1.1 Build 0.0.6 Point Release 1
AS3 version: 3.26.0

Any ideas?
Kind regards,
Patrik

F5 APM VPN TCP session Timeout
There are lots of documents and articles that talk about changing the timeouts for TCP profiles, but none of the options appear to apply to TCP sessions created inside an SSL VPN terminating on the APM. I have changed the base TCP protocol timeouts to 3600 seconds on the Access Profile, but the APM still issues an RST at 300 seconds for any idle TCP session created by a remote access user.

Access Profile:
Profile TCP:

The TCP profiles are applied to VIPs. There is a VIP associated with the Access Policy for the VPN, but the issue isn't the timeout of the VPN itself; it is the TCP sessions initiated by the user over the VPN, or initiated by the server over the VPN once established. I can't see any way to apply a TCP profile to these connections. Can the timeout be changed?