
BigIP Controller for Kubernetes not adding pool members

Good evening

 

I'm trying to get the BigIP Controller up and running in my lab with CRDs, but I can't get it to work. I'll try to give the information needed for troubleshooting, but please bear with me and let me know if I missed something.

 

The situation is like this:

The controller talks to the F5 and creates the Virtual Server and the pool successfully, but the pool is empty.

 

I used the latest Helm chart and am running the container with the following parameters (note that I did not use the nodeSelector option, although I tried that too):

        - --credentials-directory
        - /tmp/creds
        - --bigip-partition=rancher
        - --bigip-url=bigip-01.domain.se
        - --custom-resource-mode=true
        - --verify-interval=30
        - --insecure=true
        - --log-level=DEBUG
        - --pool-member-type=nodeport
        - --log-as3-response=true

 

Virtual Server Manifest:

apiVersion: "cis.f5.com/v1"
kind: VirtualServer
metadata:
  namespace: istio-system
  name: istio-vs
  labels:
    f5cr: "true"
spec:
  virtualServerAddress: "192.168.1.225"
  virtualServerHTTPSPort: 443
  tlsProfileName: bigip-tlsprofile
  httpTraffic: none
  pools:
  - service: istio-ingressgateway
    servicePort: 443

 

The TLSProfile:

apiVersion: cis.f5.com/v1
kind: TLSProfile
metadata:
  name: bigip-tlsprofile
  namespace: istio-system
  labels:
    f5cr: "true"
spec:
  tls:
    clientSSL: ""
    termination: passthrough
    reference: bigip

 

The istio-ingressgateway service:

kubectl describe service -n istio-system istio-ingressgateway
... omitted some info ...
Name:                     istio-ingressgateway
Selector:                 app=istio-ingressgateway,istio=ingressgateway
... omitted some info ...
Port:                     status-port  15021/TCP
TargetPort:               15021/TCP
NodePort:                 status-port  32395/TCP
Endpoints:                10.42.2.9:15021
Port:                     http2  80/TCP
TargetPort:               8080/TCP
NodePort:                 http2  31380/TCP
Endpoints:                10.42.2.9:8080
Port:                     https  443/TCP
TargetPort:               8443/TCP
NodePort:                 https  31390/TCP
Endpoints:                10.42.2.9:8443
Port:                     tcp  31400/TCP
TargetPort:               31400/TCP
NodePort:                 tcp  31400/TCP
Endpoints:                10.42.2.9:31400
Port:                     tls  15443/TCP
TargetPort:               15443/TCP
NodePort:                 tls  31443/TCP
Endpoints:                10.42.2.9:15443
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
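Since the controller runs with --pool-member-type=nodeport, it should create one pool member per Kubernetes node, using each node's IP and the service's NodePort (31390 for https here). A rough sketch of the expected members; 192.168.1.45 is the node from the pod description below, and the other two IPs are hypothetical placeholders:

```shell
# Expected pool members in nodeport mode: <node-InternalIP>:<NodePort>.
# Only 192.168.1.45 appears in the post; the other two IPs are made up.
# On a live cluster the node IPs could be listed with:
#   kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}'
nodeport=31390   # the https NodePort from the service output above
for ip in 192.168.1.45 192.168.1.46 192.168.1.47; do
  echo "${ip}:${nodeport}"
done
```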

 

The pod running the gateway:

kubectl describe pod -n istio-system istio-ingressgateway-647f8dc56f-kqf7g
Name:         istio-ingressgateway-647f8dc56f-kqf7g
Namespace:    istio-system
Priority:     0
Node:         rancher-prod1/192.168.1.45
Start Time:   Fri, 19 Mar 2021 21:20:23 +0100
Labels:       app=istio-ingressgateway
              chart=gateways
              heritage=Tiller
              install.operator.istio.io/owning-resource=unknown
              istio=ingressgateway
              istio.io/rev=default
              operator.istio.io/component=IngressGateways
              pod-template-hash=647f8dc56f
              release=istio
              service.istio.io/canonical-name=istio-ingressgateway
              service.istio.io/canonical-revision=latest

 

I should also add that I'm using this ingress gateway to access applications via the exposed NodePort, so I know it works.

 

Controller log:

2021/03/27 20:25:43 [DEBUG] [CORE] NodePoller (0xc0001d45a0) ready to poll, last wait: 30s
2021/03/27 20:25:43 [DEBUG] [CORE] NodePoller (0xc0001d45a0) notifying listener: {l:0xc0000da300 s:0xc0000da360}
2021/03/27 20:25:43 [DEBUG] [CORE] NodePoller (0xc0001d45a0) listener callback - num items: 3 err: <nil>
2021/03/27 20:25:50 [DEBUG] Found endpoints for backend istio-system/istio-ingressgateway: []
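That last log line is the key symptom: the controller sees an empty endpoint list for the backend. A small sketch for flagging such lines; the sample line is copied from the log above, and on a live cluster the input would come from the CIS pod's logs (deployment name is an assumption):

```shell
# Flag "Found endpoints" log lines whose endpoint list is empty ([]).
# Sample line copied from the post; live logs would come from something like:
#   kubectl logs deploy/k8s-bigip-ctlr -n kube-system
log='2021/03/27 20:25:50 [DEBUG] Found endpoints for backend istio-system/istio-ingressgateway: []'
printf '%s\n' "$log" | grep 'Found endpoints' | grep -q '\[\]$' \
  && echo "backend istio-system/istio-ingressgateway has no endpoints"
```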

 

Looking at the controller code, I interpret from the return type declaration that the NodePoller returned 3 nodes and no error:

type pollData struct {
	nl  []v1.Node
	err error
}

 

Controller version: f5networks/k8s-bigip-ctlr:2.3.0

F5 version: BIG-IP 16.0.1.1 Build 0.0.6 Point Release 1

AS3 version: 3.26.0

 

Any ideas?

 

Kind regards,

Patrik

1 ACCEPTED SOLUTION

Update: after deleting everything and re-deploying, the members were populated as expected. The question was edited above, but for the record, the config originally contained two conflicting --pool-member-type parameters:

 

- --pool-member-type=nodeport
- --log-as3-response=true
- --pool-member-type=cluster

I added the cluster option when troubleshooting something else earlier and forgot to remove it. Newbie mistake!
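For anyone hitting the same thing: duplicated flags like this are easy to spot mechanically. A small sketch that prints any flag passed more than once; the args list mirrors the broken config above, and the kubectl command in the comment assumes a deployment-based install:

```shell
# Print any --flag that appears more than once in the controller args.
# Args copied from the broken config above; on a live cluster they could be
# fetched with something like:
#   kubectl get deploy <cis-deployment> -o jsonpath='{.spec.template.spec.containers[0].args[*]}'
args='--pool-member-type=nodeport
--log-as3-response=true
--pool-member-type=cluster'
printf '%s\n' "$args" | sed 's/=.*//' | sort | uniq -d
```

Here the output is --pool-member-type, the flag that was set twice.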


3 REPLIES

Stan caught conflicting controller start args. I removed - --pool-member-type=cluster, which fixed the args, but the issue remained.

 

Thank you Stan!


I wrote a guide covering the whole process of running it:

https://loadbalancing.se/2021/03/28/installing-troubleshooting-and-running-bigip-ingress-controller/