BigIP Controller for Kubernetes not adding pool members
Good evening. I'm trying to get the BIG-IP Controller up and running in my lab with CRDs, but I can't get it to work. I'll try to give the information needed for troubleshooting, but please bear with me and let me know if I missed something.

The situation is this: the controller talks to the F5 and creates the virtual server and the pool successfully, but the pool is empty. I used the latest Helm chart and I'm running the container with the following parameters (note that I did not use the nodeSelector option, although I tried that too):

--credentials-directory /tmp/creds
--bigip-partition=rancher
--bigip-url=bigip-01.domain.se
--custom-resource-mode=true
--verify-interval=30
--insecure=true
--log-level=DEBUG
--pool-member-type=nodeport
--log-as3-response=true

VirtualServer manifest:

apiVersion: "cis.f5.com/v1"
kind: VirtualServer
metadata:
  namespace: istio-system
  name: istio-vs
  labels:
    f5cr: "true"
spec:
  virtualServerAddress: "192.168.1.225"
  virtualServerHTTPSPort: 443
  tlsProfileName: bigip-tlsprofile
  httpTraffic: none
  pools:
    - service: istio-ingressgateway
      servicePort: 443

TLSProfile:

apiVersion: cis.f5.com/v1
kind: TLSProfile
metadata:
  name: bigip-tlsprofile
  namespace: istio-system
  labels:
    f5cr: "true"
spec:
  tls:
    clientSSL: ""
    termination: passthrough
    reference: bigip

The istio-ingressgateway service:

kubectl describe service -n istio-system istio-ingressgateway
... omitted some info ...
Name:                     istio-ingressgateway
Selector:                 app=istio-ingressgateway,istio=ingressgateway
... omitted some info ...
Port:                     status-port  15021/TCP
TargetPort:               15021/TCP
NodePort:                 status-port  32395/TCP
Endpoints:                10.42.2.9:15021
Port:                     http2  80/TCP
TargetPort:               8080/TCP
NodePort:                 http2  31380/TCP
Endpoints:                10.42.2.9:8080
Port:                     https  443/TCP
TargetPort:               8443/TCP
NodePort:                 https  31390/TCP
Endpoints:                10.42.2.9:8443
Port:                     tcp  31400/TCP
TargetPort:               31400/TCP
NodePort:                 tcp  31400/TCP
Endpoints:                10.42.2.9:31400
Port:                     tls  15443/TCP
TargetPort:               15443/TCP
NodePort:                 tls  31443/TCP
Endpoints:                10.42.2.9:15443
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

The pod running the gateway:

kubectl describe pod -n istio-system istio-ingressgateway-647f8dc56f-kqf7g
Name:         istio-ingressgateway-647f8dc56f-kqf7g
Namespace:    istio-system
Priority:     0
Node:         rancher-prod1/192.168.1.45
Start Time:   Fri, 19 Mar 2021 21:20:23 +0100
Labels:       app=istio-ingressgateway
              chart=gateways
              heritage=Tiller
              install.operator.istio.io/owning-resource=unknown
              istio=ingressgateway
              istio.io/rev=default
              operator.istio.io/component=IngressGateways
              pod-template-hash=647f8dc56f
              release=istio
              service.istio.io/canonical-name=istio-ingressgateway
              service.istio.io/canonical-revision=latest

I should also add that I'm using this ingress gateway to access applications via the exposed node port, so I know it works.

Controller log excerpt:

2021/03/27 20:25:43 [DEBUG] [CORE] NodePoller (0xc0001d45a0) ready to poll, last wait: 30s
2021/03/27 20:25:43 [DEBUG] [CORE] NodePoller (0xc0001d45a0) notifying listener: {l:0xc0000da300 s:0xc0000da360}
2021/03/27 20:25:43 [DEBUG] [CORE] NodePoller (0xc0001d45a0) listener callback - num items: 3 err: <nil>
2021/03/27 20:25:50 [DEBUG] Found endpoints for backend istio-system/istio-ingressgateway: []

Looking at the code for the controller, I interpret from the return type declaration that the NodePoller returned 3 nodes and 0 errors:

type pollData struct {
	nl  []v1.Node
	err error
}

Controller version: f5networks/k8s-bigip-ctlr:2.3.0
F5 version: BIG-IP 16.0.1.1 Build 0.0.6 Point Release 1
AS3 version: 3.26.0

Any ideas?

Kind regards,
Patrik
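One thing worth double-checking in a setup like this (a sketch of a check, not a confirmed fix): with --pool-member-type=nodeport, CIS builds the pool from the cluster nodes' addresses combined with the NodePort of the referenced Service, so that Service has to be of type NodePort (or LoadBalancer) and the pool's servicePort has to match the Service port. Assuming kubectl access, something along these lines confirms both; if everything lines up, the pool members on the BIG-IP would be expected to look like <node-IP>:31390 for each node:

# confirm the Service type (should be NodePort or LoadBalancer)
kubectl get svc -n istio-system istio-ingressgateway -o jsonpath='{.spec.type}{"\n"}'

# confirm which NodePort backs port 443 (31390 in the describe output above)
kubectl get svc -n istio-system istio-ingressgateway -o jsonpath='{.spec.ports[?(@.port==443)].nodePort}{"\n"}'

# list the node addresses the controller should be turning into pool members
kubectl get nodes -o wide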
Knowledge sharing: Containers, Kubernetes, Openshift, F5 Container Connector, NGINX Ingress
This is for anyone interested in the free training for "F5 Container Connector for Kubernetes" or "F5 OpenShift Container Integration" at "LearnF5". For NGINX installed in Kubernetes there is enough info, but for F5 Container Connector/Container Ingress Services there is not so much:

https://docs.nginx.com/nginx-ingress-controller/f5-ingresslink/
https://www.nginx.com/products/nginx-ingress-controller/
https://community.f5.com/t5/technical-articles/better-together-f5-container-ingress-services-and-nginx-plus/ta-p/280471

F5 DevCentral also has a YouTube channel with useful info:

https://www.youtube.com/c/devcentral

If you don't have good knowledge about containers and Kubernetes, first check the links below. For Docker containers, on YouTube you will find a lot of good training, for example:

you need to learn Kubernetes RIGHT NOW!! - YouTube
Docker Tutorial for Beginners [FULL COURSE in 3 Hours] - YouTube
Docker overview | Docker Documentation

The same is true for Kubernetes, and they have a free test lab on their site:

Learn Kubernetes Basics | Kubernetes
you need to learn Docker RIGHT NOW!! // Docker Containers 101 - YouTube

Red Hat has some free training, and IBM provides some free labs for Containers, Kubernetes, Openshift, etc.:

Training and Certification (redhat.com)
IBM CloudLabs: Free, Interactive Kubernetes Tutorials | IBM
Red Hat OpenShift Tutorials | IBM
F5 Kubernetes Container Integration

Two problems. First, finding docs to set up the f5 kube-proxy: the doc is missing from this link - http://clouddocs.f5.com/products/asp/v1.0/tbd - but I haven't gotten far enough to be able to test communication. Second, k8s-bigip-ctlr is not writing VIP or pool updates. I have k8s-bigip-ctlr and asp running.

$ kubectl get pods --namespace kube-system -o wide
NAME                                         READY   STATUS    RESTARTS   AGE   IP             NODE
f5-asp-1d61j                                 1/1     Running   0          57m   10.20.30.168   ranchernode2.lax.verifi.com
f5-asp-9wmbw                                 1/1     Running   0          57m   10.20.30.162   ranchernode1.lax.verifi.com
heapster-818085469-4bnsg                     1/1     Running   7          25d   10.42.228.59   ranchernode1.lax.verifi.com
k8s-bigip-ctlr-deployment-1527378375-d1p8v   1/1     Running   0          41m   10.42.68.136   ranchernode2.lax.verifi.com
kube-dns-1208858260-ppgc0                    4/4     Running   8          25d   10.42.26.16    ranchernode1.lax.verifi.com
kubernetes-dashboard-2492700511-r20rw        1/1     Running   6          25d   10.42.29.28    ranchernode1.lax.verifi.com
monitoring-grafana-832403127-cq197           1/1     Running   7          25d   10.42.240.16   ranchernode1.lax.verifi.com
monitoring-influxdb-2441835288-p0sg1         1/1     Running   5          25d   10.42.86.70    ranchernode1.lax.verifi.com
tiller-deploy-3991468440-1x80g               1/1     Running   6          25d   10.42.6.76     ranchernode1.lax.verifi.com

I have tried k8s-bigip-ctlr 1.0.0 (latest), which fails with different errors.

Creating a VIP with bigip-virtual-server_v0.1.0.json:

2017/06/27 22:50:13 [WARNING] Could not get config for ConfigMap: k8s.vs - minLength must be of an integer

Creating a pool with bigip-virtual-server_v0.1.0.json:

2017/06/27 22:46:45 [WARNING] Could not get config for ConfigMap: k8s.pool - format must be a valid format

So I tried 1.1.0-beta.1, and it does produce something in the logs that looks like it's working, but it doesn't write any changes to the F5. Using f5schemadb://bigip-virtual-server_v0.1.3.json with 1.1.0-beta.1 seems to get the farthest:

2017/06/27 22:58:19 [DEBUG] Delegating type *v1.ConfigMap to virtual server processors
2017/06/27 22:58:19 [DEBUG] Process ConfigMap watch - change type: Add name: hello-vs namespace: default
2017/06/27 22:58:19 [DEBUG] Add watch of namespace default and resource services, store exists:true
2017/06/27 22:58:19 [DEBUG] Looking for service "hello" in namespace "default" as specified by ConfigMap "hello-vs".
2017/06/27 22:58:19 [DEBUG] Requested service backend {ServiceName:hello ServicePort:80 Namespace:default} not of NodePort type
2017/06/27 22:58:19 [DEBUG] Updating ConfigMap {ServiceName:hello ServicePort:80 Namespace:default} annotation - status.virtual-server.f5.com/ip: 10.20.28.70
2017/06/27 22:58:19 [DEBUG] ConfigWriter (0xc42039b3b0) writing section name services
2017/06/27 22:58:19 [DEBUG] ConfigWriter (0xc42039b3b0) successfully wrote section (services)
2017/06/27 22:58:19 [INFO] Wrote 0 Virtual Server configs
2017/06/27 22:58:19 [DEBUG] Services: []
2017/06/27 22:58:19 [DEBUG] Delegating type *v1.ConfigMap to virtual server processors
2017/06/27 22:58:19 [DEBUG] Process ConfigMap watch - change type: Update name: hello-vs namespace: default
2017/06/27 22:58:19 [DEBUG] Add watch of namespace default and resource services, store exists:true
2017/06/27 22:58:19 [DEBUG] Looking for service "hello" in namespace "default" as specified by ConfigMap "hello-vs".
2017/06/27 22:58:19 [DEBUG] Requested service backend {ServiceName:hello ServicePort:80 Namespace:default} not of NodePort type
2017/06/27 22:58:19 [DEBUG] ConfigWriter (0xc42039b3b0) writing section name services
2017/06/27 22:58:19 [DEBUG] ConfigWriter (0xc42039b3b0) successfully wrote section (services)
2017/06/27 22:58:19 [INFO] Wrote 0 Virtual Server configs
2017/06/27 22:58:19 [DEBUG] Services: []

Config map:

kind: ConfigMap
apiVersion: v1
metadata:
  name: hello-vs
  namespace: default
  labels:
    f5type: virtual-server
data:
  schema: "f5schemadb://bigip-virtual-server_v0.1.3.json"
  data: |-
    {
      "virtualServer": {
        "frontend": {
          "balance": "round-robin",
          "mode": "http",
          "partition": "kubernetes",
          "virtualAddress": {
            "bindAddr": "10.20.28.70",
            "port": 443
          }
        },
        "backend": {
          "serviceName": "hello",
          "servicePort": 80
        }
      }
    }
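The repeated "Requested service backend ... not of NodePort type" debug line, together with "Wrote 0 Virtual Server configs", suggests the controller is running in its default nodeport mode while the hello Service is ClusterIP. Two hedged options: run the controller with --pool-member-type=cluster instead, or expose the Service as NodePort. A minimal sketch of the latter, with the selector and target port assumed (adjust them to the actual hello deployment):

apiVersion: v1
kind: Service
metadata:
  name: hello
  namespace: default
spec:
  type: NodePort        # required when k8s-bigip-ctlr runs with --pool-member-type=nodeport
  selector:
    app: hello          # assumed label; match your deployment
  ports:
    - port: 80
      targetPort: 80    # assumed container port
      # nodePort: 30080 # optional; Kubernetes picks one if omitted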
F5 load balancing Kubernetes masters

Hi, we are trying to set up an HA Kubernetes cluster. We have a lot of the work done on this, like the etcd cluster setup, etc. We are hoping to load balance the Kubernetes API servers with the F5 but haven't been able to get that to work. We have configured the F5 virtual server with an IP and port 6443 (the normal Kubernetes API server port) and set up the pool to point to one of the 3 nodes for the moment, to make sure we can get it working. We are using round robin and no persistence profile. So we have HTTPS calls to https://F5_VIP:6443/ that are just being load balanced to the nodes. Is there any documentation on doing this, or has this been done before? Any guidance would be appreciated.
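For reference, here is a rough sketch of what such a setup can look like expressed as an AS3 declaration (the same format used in the GSLB example further down this page); all addresses are placeholders, and it assumes AS3 is installed on the BIG-IP. The virtual simply passes the TCP/TLS traffic through to the kube-apiservers, so no client-ssl profile is attached. Also note that the kube-apiserver certificates need to include the VIP address or its DNS name in their SANs (for example via kubeadm's certSANs), otherwise clients will reject the certificate when connecting through the VIP.

{
  "class": "AS3",
  "declaration": {
    "class": "ADC",
    "schemaVersion": "3.26.0",
    "id": "k8s_apiserver_lb",
    "k8s": {
      "class": "Tenant",
      "apiserver": {
        "class": "Application",
        "template": "generic",
        "k8s_api_vs": {
          "class": "Service_TCP",
          "virtualAddresses": ["192.0.2.100"],
          "virtualPort": 6443,
          "pool": "k8s_api_pool"
        },
        "k8s_api_pool": {
          "class": "Pool",
          "monitors": ["tcp"],
          "members": [
            {
              "servicePort": 6443,
              "serverAddresses": ["192.0.2.11", "192.0.2.12", "192.0.2.13"]
            }
          ]
        }
      }
    }
  }
}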
k8s bigip-controller node removal

Hi! We're having issues with network traffic in our Tanzu Kubernetes cluster during upgrades. The upgrade process deletes nodes one by one and creates a new node with the upgraded Kubernetes version. We have tried modifying the node-poll-interval and verify-interval, but there seems to be an issue with this approach. Our traffic is directed to a random node and, from that node, forwarded to the correct node/pod inside the cluster. When a node is deleted, it takes up to 30 seconds for that node to be removed from the BIG-IP. This results in substantial packet loss during every upgrade of our cluster. Is there a way to remove/disable nodes based on labels/taints? The unschedulable taint would be perfect for this. Any other suggestions?

Best regards
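One avenue that might be worth testing (a sketch only - check the flag reference for your CIS version before relying on it) is the controller's --node-label-selector option, which restricts pool members to nodes carrying a given label. The label key below is just an example; the idea is that the upgrade automation, or a pre-drain hook, removes the label before the node is actually deleted, so CIS drops it from the pool ahead of time:

# k8s-bigip-ctlr deployment args (verify these flags exist in your CIS version's documentation)
- --pool-member-type=nodeport
- --node-label-selector=example.com/lb-member=true
- --node-poll-interval=5

# label the nodes that should receive traffic
kubectl label node <node-name> example.com/lb-member=true

# before a node is drained/deleted during the upgrade, remove the label so CIS pulls it from the pool
kubectl label node <node-name> example.com/lb-member-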
How does BIG-IP distribute IP addresses to individual pods?

Hello. I am considering adopting BIG-IP for our project, but I have second thoughts because I don't know how BIG-IP distributes IP addresses to pods. When I use BIG-IP in cluster mode on Kubernetes, are the IP addresses provided by BIG-IP external addresses or internal addresses? How does BIG-IP know the available IP address range?
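As a rough answer sketch (with assumptions flagged below): BIG-IP/CIS does not hand out IP addresses to pods at all - pod IPs come from the cluster's CNI. In cluster mode the controller reads the pod (endpoint) IPs from the Kubernetes API and adds them as pool members, which is why the BIG-IP needs a route into the pod network (for example via the documented VXLAN/flannel integration or static routes). The virtual server address is something you choose yourself from an external range you own (or have assigned by the separate F5 IPAM controller); CIS does not allocate it. In CRD mode that looks roughly like this, with the Service name and address being assumptions for illustration:

apiVersion: cis.f5.com/v1
kind: VirtualServer
metadata:
  name: my-app-vs
  namespace: default
  labels:
    f5cr: "true"
spec:
  virtualServerAddress: "10.1.10.80"   # chosen by you from your own routable range, not allocated by BIG-IP
  pools:
    - service: my-app                  # assumed Service name
      servicePort: 80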
An example of an AS3 Rest API call to create a GSLB configuration on BIG-IP.

Hi everyone,

Below you can find an example of an AS3 REST API call that creates a simple GSLB configuration on BIG-IP devices. The main purpose of this article is to share this configuration with others. Of course, on different sites (GitHub, etc.) you can find different bits of data, but I think this example will be useful because it contains all the necessary information about how to create different GSLB objects at the same time, such as: data centers (DCs), servers, virtual servers (VSs), wide IPs, pools and more.

{
  "class": "AS3",
  "declaration": {
    "class": "ADC",
    "schemaVersion": "3.21.0",
    "id": "GSLB_test",
    "Common": {
      "class": "Tenant",
      "Shared": {
        "class": "Application",
        "template": "shared",
        "DC1": {
          "class": "GSLB_Data_Center"
        },
        "DC2": {
          "class": "GSLB_Data_Center"
        },
        "device01": {
          "class": "GSLB_Server",
          "dataCenter": {
            "use": "DC1"
          },
          "virtualServers": [
            {
              "name": "/ocp/Shared/ingress_vs_1_443",
              "address": "A.B.C.D",
              "port": 443,
              "monitors": [
                {
                  "bigip": "/Common/custom_icmp_2"
                }
              ]
            }
          ],
          "devices": [
            {
              "address": "A.B.C.D"
            }
          ]
        },
        "device02": {
          "class": "GSLB_Server",
          "dataCenter": {
            "use": "DC2"
          },
          "virtualServers": [
            {
              "name": "/ocp2/Shared/ingress_vs_2_443",
              "address": "A.B.C.D",
              "port": 443,
              "monitors": [
                {
                  "bigip": "/Common/custom_icmp_2"
                }
              ]
            }
          ],
          "devices": [
            {
              "address": "A.B.C.D"
            }
          ]
        },
        "dns_listener": {
          "class": "Service_UDP",
          "virtualPort": 53,
          "virtualAddresses": [
            "A.B.C.D"
          ],
          "profileUDP": {
            "use": "custom_udp"
          },
          "profileDNS": {
            "use": "custom_dns"
          }
        },
        "custom_dns": {
          "class": "DNS_Profile",
          "remark": "DNS Profile test",
          "parentProfile": {
            "bigip": "/Common/dns"
          }
        },
        "custom_udp": {
          "class": "UDP_Profile",
          "datagramLoadBalancing": true
        },
        "testpage_local": {
          "class": "GSLB_Domain",
          "domainName": "testpage.local",
          "resourceRecordType": "A",
          "pools": [
            {
              "use": "testpage_pool"
            }
          ]
        },
        "testpage_pool": {
          "class": "GSLB_Pool",
          "resourceRecordType": "A",
          "members": [
            {
              "server": {
                "use": "/Common/Shared/device01"
              },
              "virtualServer": "/ocp/Shared/ingress_vs_1_443"
            },
            {
              "server": {
                "use": "/Common/Shared/device02"
              },
              "virtualServer": "/ocp2/Shared/ingress_vs_2_443"
            }
          ]
        }
      }
    }
  }
}

P.S. The AS3 schema guide was very helpful: https://clouddocs.f5.com/products/extensions/f5-appsvcs-extension/latest/refguide/schema-reference.html
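If it helps anyone trying this out, the declaration can be pushed with a plain REST call to the AS3 endpoint. The credentials and management address below are placeholders, and it assumes the AS3 extension (RPM) is already installed on the BIG-IP:

# save the declaration above as gslb-declaration.json, then deploy it
curl -sk -u admin:admin \
  -H "Content-Type: application/json" \
  -X POST https://<bigip-mgmt-address>/mgmt/shared/appsvcs/declare \
  -d @gslb-declaration.json

# retrieve the currently deployed declaration to verify
curl -sk -u admin:admin https://<bigip-mgmt-address>/mgmt/shared/appsvcs/declare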
Can I ssl passthrough with LTM connecting to kubernetes?

Hi, I'm working on applying LTM to a Kubernetes cluster, and I have a question. I want to set up the F5 BIG-IP controller in cluster mode, but if I do so, because there is no way to configure the virtual server's type, the virtual server will be of standard type. I know an L4 virtual server can pass SSL through, and I have done that before, but on a standard virtual server I have never been able to pass SSL through. Is there any way to do SSL passthrough with a Kubernetes cluster? Or can I control the virtual server type?
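For what it's worth, in CIS's custom-resource (CRD) mode - the mode used in the first post on this page - SSL passthrough on a standard virtual is expressed through a TLSProfile with termination: passthrough that the VirtualServer references, so no client-ssl profile is attached and TLS terminates on the pod. Whether this option is available depends on your CIS version and on running it with --custom-resource-mode=true rather than ConfigMaps. A sketch with assumed names and a placeholder address:

apiVersion: cis.f5.com/v1
kind: TLSProfile
metadata:
  name: passthrough-tls
  namespace: default
  labels:
    f5cr: "true"
spec:
  tls:
    termination: passthrough   # no decryption on the BIG-IP
    clientSSL: ""
    reference: bigip
---
apiVersion: cis.f5.com/v1
kind: VirtualServer
metadata:
  name: my-app-vs
  namespace: default
  labels:
    f5cr: "true"
spec:
  virtualServerAddress: "10.1.10.81"   # placeholder VIP
  virtualServerHTTPSPort: 443
  tlsProfileName: passthrough-tls
  httpTraffic: none
  pools:
    - service: my-app                  # assumed Service name
      servicePort: 443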
Error computing object status for pool

I tried AS3 with an existing pool, but an `Error computing object status for pool (/pass/to_mypool) ...` error occurred and I cannot apply the AS3 declaration. My pool was created by a ConfigMap, because I'm using BIG-IP with Kubernetes (in cluster mode). The pool seems to be fine (the monitors work), and statistics also work. So what does this error mean? How should I fix it?
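Not a definitive answer, but since AS3 only manages objects inside the tenants it has declared, a pool created separately (for example by a CIS ConfigMap) lives outside AS3's view, and it may be that referencing it from a declaration is what trips up the status computation. One low-risk check, with placeholder credentials and address, is to fetch what AS3 currently believes it owns and compare that against the partition where the ConfigMap-created pool actually sits:

# show the declaration AS3 currently manages (tenants, applications, pools)
curl -sk -u admin:admin "https://<bigip-mgmt-address>/mgmt/shared/appsvcs/declare?show=full"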