kubernetes
19 Topics

BIG-IP for SIP resources running in Kubernetes
Hello,

We are trying to set up a Virtual Server on BIG-IP that would serve as a load balancer for SIP traffic to resources that are deployed in a Kubernetes cluster and exposed through NodePort. Our F5 is not part of the Kubernetes cluster; it is a standalone virtual machine that sends its traffic to the NodePort service of our SIP resources. We are facing a few issues and hope someone can help us understand them.

UDP not working

When we try to use UDP, the problem is that the F5 (10.224.64.223) sends SIP OPTIONS to the IP address/port that we defined as the access point for the SIP elements in Kubernetes (node IP and NodePort, 10.224.64.222, port 31131). But because of the Kubernetes deployment, responses are sent from a different IP address and port (10.224.64.220, port 30834), and this gets rejected by the F5.

10:17:23.695039 IP 10.224.64.223.51938 > 10.224.64.222.31131: UDP, length 575 out slot1/tmm1 lis=mon_mrf_sip_udp port=1.2 trunk=
10:17:23.700849 IP 10.224.64.220.30834 > 10.224.64.223.51938: UDP, length 520 in slot1/tmm0 lis= port=1.2 trunk=
10:17:23.700949 IP 10.224.64.223 > 10.224.64.220: ICMP 10.224.64.223 udp port 51938 unreachable, length 36 out slot1/tmm0 lis= port=1.2 trunk=

Even using macvlan on the Kubernetes pods does not help. With macvlan we manage to preserve the IP address (10.224.64.225), but the port still changes (5060 -> 25404), and the F5 rejects it.

10:42:07.370926 IP 10.224.64.223.54412 > 10.224.64.225.5060: SIP: OPTIONS sip:10.224.64.225:5060 SIP/2.0 out slot1/tmm0 lis= port=1.2 trunk=
10:42:07.378237 IP 10.224.64.225.25404 > 10.224.64.223.54412: UDP, length 425 in slot1/tmm0 lis= port=1.2 trunk=
10:42:07.378325 IP 10.224.64.223 > 10.224.64.225: ICMP 10.224.64.223 udp port 54412 unreachable, length 36 out slot1/tmm0 lis= port=1.2 trunk=

So I guess there is no way to get this working for UDP at all with resources deployed in a Kubernetes cluster? (Host networking is not an option.)

TCP (in Message Routing mode) not working

When we try to use TCP, we found that the "Standard (SIP - legacy profile)" mode behaves differently than the "Message Routing" one. When we use the "legacy" SIP monitor over TCP, it establishes a TCP connection with the destination server before sending the SIP OPTIONS message. This is fine for us. But when we try to use "Message Routing" (which, as I understand it, is generally advisable for SIP traffic) for TCP monitoring, the TCP connection is not established before the OPTIONS message is sent, and our SIP servers do not accept this.

So I have a few questions:

1. Is it even possible to use F5 BIG-IP LTM VE as a SIP load balancer for SIP resources running in a Kubernetes cluster (for both UDP and TCP), or is the ONLY option to use F5 BIG-IP Next Service Proxy for Kubernetes (SPK) for SIP traffic?
2. Is there a way to force an F5 monitor that uses Message Routing mode to open a TCP connection before sending SIP requests?
3. Given the UDP problem above (which probably is solvable only with SPK), is there some way for the F5 to do a UDP-to-TCP conversion of the SIP traffic?

Kind Regards,
Zvonimir

(37 Views, 0 likes, 0 Comments)
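For anyone hitting the same UDP behaviour: one Kubernetes-side knob worth ruling out is externalTrafficPolicy. Below is a minimal sketch of a NodePort Service for the SIP pods; the Service name, namespace and pod label are assumptions, not taken from the post. With externalTrafficPolicy: Local, the node only forwards to pods it hosts and the client source IP is preserved, so replies at least come back from the node the BIG-IP actually targeted. It does not by itself force the SIP stack to answer from source port 5060, though; that part is usually a setting in the SIP server (binding responses to the listening socket) rather than in Kubernetes.

# Sketch only - names and labels below are assumptions for illustration.
apiVersion: v1
kind: Service
metadata:
  name: sip-udp                    # hypothetical Service name
  namespace: sip                   # hypothetical namespace
spec:
  type: NodePort
  externalTrafficPolicy: Local     # replies leave from the node that actually hosts the pod
  selector:
    app: sip-server                # assumed pod label
  ports:
    - name: sip-udp
      protocol: UDP
      port: 5060
      targetPort: 5060
      nodePort: 31131              # the NodePort the BIG-IP virtual server points at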
F5 kubernetes f5 controller failing to compose 'poolMemberAddrs' and failing to generate F5 objects

Hi - I set this up: http://clouddocs.f5.com/products/connectors/k8s-bigip-ctlr/v1.0/ and I am getting errors after uploading my configmap and an applicable service:

2017/04/28 23:43:30 [INFO] File "/app/python/_f5.py", line 393, in _create_ltm_config_kubernetes
2017/04/28 23:43:30 [INFO] for node in backend['poolMemberAddrs']:
2017/04/28 23:43:30 [INFO] TypeError: 'NoneType' object is not iterable

The config file generated by /app/bin/k8s-bigip-ctlr does not populate "poolMemberAddrs", so the python F5 handler /app/python/bigipconfigdriver.py is crashing since it cannot figure out the nodeport targets:

/app cat /tmp/k8s-bigip-ctlr.config281602422/config.json
{"bigip":{"username":"xxxxxxxxx","password":"yyyyyyyy","url":";:["k8s"]},"global":{"log-level":"INFO","verify-interval":30},"services":[{"virtualServer":{"backend":{"serviceName":"av-service","servicePort":30000,"poolMemberPort":0,"poolMemberAddrs":null},"frontend":{"virtualServerName":"default_av-service","partition":"k8s","balance":"round-robin","mode":"http","virtualAddress":{"bindAddr":"1.2.3.4","port":80},"iappPoolMemberTable":{"name":"","columns":null}}}}]}
/app

I ran out of anything helpful with debug statements or documentation about the closed-source Go binary...

io:$ kubectl get services/av-service -o wide
NAME         CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE   SELECTOR
av-service   10.25.104.158                 80:30000/TCP   2h    app=av

io:$ kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}'
10.25.82.193 10.25.82.65 10.25.83.54

Is this the right place to ask about possible reasons the controller is crashing here?

(380 Views, 0 likes, 3 Comments)
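If it helps anyone who hits the same null poolMemberAddrs: in the kubectl output above the Service's port is 80 (with 30000 as its NodePort), while the generated config shows servicePort 30000. Per the controller documentation, virtualServer.backend.servicePort refers to the Kubernetes Service port; in nodeport mode the controller then resolves the NodePort and node addresses itself. Below is a sketch of what the ConfigMap might look like with that change. The ConfigMap name is made up, the schema line is the one the post already references, and this is a guess at the cause rather than a confirmed fix.

kind: ConfigMap
apiVersion: v1
metadata:
  name: av-service-vs              # hypothetical name
  namespace: default
  labels:
    f5type: virtual-server
data:
  schema: "f5schemadb://bigip-virtual-server_v0.1.0.json"
  data: |-
    {
      "virtualServer": {
        "frontend": {
          "partition": "k8s",
          "balance": "round-robin",
          "mode": "http",
          "virtualAddress": {
            "bindAddr": "1.2.3.4",
            "port": 80
          }
        },
        "backend": {
          "serviceName": "av-service",
          "servicePort": 80
        }
      }
    }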
An example of an AS3 Rest API call to create a GSLB configuration on BIG-IP.

Hi everyone,

Below you can find an example of an AS3 REST API call that creates a simple GSLB configuration on BIG-IP devices. The main purpose of this article is to share this configuration with others. Of course, on different sites (GitHub, etc.) you can find different bits of data, but I think this example will be useful because it contains all the necessary information about how to create different GSLB objects at the same time, such as Data Centers (DCs), Servers, Virtual Servers (VSs), Wide IPs, pools and more.

{
  "class": "AS3",
  "declaration": {
    "class": "ADC",
    "schemaVersion": "3.21.0",
    "id": "GSLB_test",
    "Common": {
      "class": "Tenant",
      "Shared": {
        "class": "Application",
        "template": "shared",
        "DC1": {
          "class": "GSLB_Data_Center"
        },
        "DC2": {
          "class": "GSLB_Data_Center"
        },
        "device01": {
          "class": "GSLB_Server",
          "dataCenter": {
            "use": "DC1"
          },
          "virtualServers": [
            {
              "name": "/ocp/Shared/ingress_vs_1_443",
              "address": "A.B.C.D",
              "port": 443,
              "monitors": [
                {
                  "bigip": "/Common/custom_icmp_2"
                }
              ]
            }
          ],
          "devices": [
            {
              "address": "A.B.C.D"
            }
          ]
        },
        "device02": {
          "class": "GSLB_Server",
          "dataCenter": {
            "use": "DC2"
          },
          "virtualServers": [
            {
              "name": "/ocp2/Shared/ingress_vs_2_443",
              "address": "A.B.C.D",
              "port": 443,
              "monitors": [
                {
                  "bigip": "/Common/custom_icmp_2"
                }
              ]
            }
          ],
          "devices": [
            {
              "address": "A.B.C.D"
            }
          ]
        },
        "dns_listener": {
          "class": "Service_UDP",
          "virtualPort": 53,
          "virtualAddresses": [
            "A.B.C.D"
          ],
          "profileUDP": {
            "use": "custom_udp"
          },
          "profileDNS": {
            "use": "custom_dns"
          }
        },
        "custom_dns": {
          "class": "DNS_Profile",
          "remark": "DNS Profile test",
          "parentProfile": {
            "bigip": "/Common/dns"
          }
        },
        "custom_udp": {
          "class": "UDP_Profile",
          "datagramLoadBalancing": true
        },
        "testpage_local": {
          "class": "GSLB_Domain",
          "domainName": "testpage.local",
          "resourceRecordType": "A",
          "pools": [
            {
              "use": "testpage_pool"
            }
          ]
        },
        "testpage_pool": {
          "class": "GSLB_Pool",
          "resourceRecordType": "A",
          "members": [
            {
              "server": {
                "use": "/Common/Shared/device01"
              },
              "virtualServer": "/ocp/Shared/ingress_vs_1_443"
            },
            {
              "server": {
                "use": "/Common/Shared/device02"
              },
              "virtualServer": "/ocp2/Shared/ingress_vs_2_443"
            }
          ]
        }
      }
    }
  }
}

P.S. The AS3 schema guide was very helpful: https://clouddocs.f5.com/products/extensions/f5-appsvcs-extension/latest/refguide/schema-reference.html

(710 Views, 1 like, 2 Comments)
F5 Kubernetes Container Integration

Two problems. The first is finding docs to set up the F5 kube-proxy: the doc is missing from this link - http://clouddocs.f5.com/products/connectors/k8s-bigip-ctlr/v1.0/tbd is a dead end at http://clouddocs.f5.com/products/asp/v1.0/tbd - but I haven't gotten far enough to be able to test communication. The second is that k8s-bigip-ctlr is not writing VIP or pool updates. I have k8s-bigip-ctlr and asp running.

$ kubectl get pods --namespace kube-system -o wide
NAME                                         READY   STATUS    RESTARTS   AGE   IP             NODE
f5-asp-1d61j                                 1/1     Running   0          57m   10.20.30.168   ranchernode2.lax.verifi.com
f5-asp-9wmbw                                 1/1     Running   0          57m   10.20.30.162   ranchernode1.lax.verifi.com
heapster-818085469-4bnsg                     1/1     Running   7          25d   10.42.228.59   ranchernode1.lax.verifi.com
k8s-bigip-ctlr-deployment-1527378375-d1p8v   1/1     Running   0          41m   10.42.68.136   ranchernode2.lax.verifi.com
kube-dns-1208858260-ppgc0                    4/4     Running   8          25d   10.42.26.16    ranchernode1.lax.verifi.com
kubernetes-dashboard-2492700511-r20rw        1/1     Running   6          25d   10.42.29.28    ranchernode1.lax.verifi.com
monitoring-grafana-832403127-cq197           1/1     Running   7          25d   10.42.240.16   ranchernode1.lax.verifi.com
monitoring-influxdb-2441835288-p0sg1         1/1     Running   5          25d   10.42.86.70    ranchernode1.lax.verifi.com
tiller-deploy-3991468440-1x80g               1/1     Running   6          25d   10.42.6.76     ranchernode1.lax.verifi.com

I have tried k8s-bigip-ctlr 1.0.0 (latest), which fails with different errors.

Creating a VIP with bigip-virtual-server_v0.1.0.json:
2017/06/27 22:50:13 [WARNING] Could not get config for ConfigMap: k8s.vs - minLength must be of an integer

Creating a pool with bigip-virtual-server_v0.1.0.json:
2017/06/27 22:46:45 [WARNING] Could not get config for ConfigMap: k8s.pool - format must be a valid format

So I tried 1.1.0-beta.1, and it does produce log output as if it were working, but it doesn't write any changes to the F5. Using f5schemadb://bigip-virtual-server_v0.1.3.json with 1.1.0-beta.1 seems to get the farthest:

2017/06/27 22:58:19 [DEBUG] Delegating type *v1.ConfigMap to virtual server processors
2017/06/27 22:58:19 [DEBUG] Process ConfigMap watch - change type: Add name: hello-vs namespace: default
2017/06/27 22:58:19 [DEBUG] Add watch of namespace default and resource services, store exists:true
2017/06/27 22:58:19 [DEBUG] Looking for service "hello" in namespace "default" as specified by ConfigMap "hello-vs".
2017/06/27 22:58:19 [DEBUG] Requested service backend {ServiceName:hello ServicePort:80 Namespace:default} not of NodePort type
2017/06/27 22:58:19 [DEBUG] Updating ConfigMap {ServiceName:hello ServicePort:80 Namespace:default} annotation - status.virtual-server.f5.com/ip: 10.20.28.70
2017/06/27 22:58:19 [DEBUG] ConfigWriter (0xc42039b3b0) writing section name services
2017/06/27 22:58:19 [DEBUG] ConfigWriter (0xc42039b3b0) successfully wrote section (services)
2017/06/27 22:58:19 [INFO] Wrote 0 Virtual Server configs
2017/06/27 22:58:19 [DEBUG] Services: []
2017/06/27 22:58:19 [DEBUG] Delegating type *v1.ConfigMap to virtual server processors
2017/06/27 22:58:19 [DEBUG] Process ConfigMap watch - change type: Update name: hello-vs namespace: default
2017/06/27 22:58:19 [DEBUG] Add watch of namespace default and resource services, store exists:true
2017/06/27 22:58:19 [DEBUG] Looking for service "hello" in namespace "default" as specified by ConfigMap "hello-vs".
2017/06/27 22:58:19 [DEBUG] Requested service backend {ServiceName:hello ServicePort:80 Namespace:default} not of NodePort type
2017/06/27 22:58:19 [DEBUG] ConfigWriter (0xc42039b3b0) writing section name services
2017/06/27 22:58:19 [DEBUG] ConfigWriter (0xc42039b3b0) successfully wrote section (services)
2017/06/27 22:58:19 [INFO] Wrote 0 Virtual Server configs
2017/06/27 22:58:19 [DEBUG] Services: []

Config Map:

kind: ConfigMap
apiVersion: v1
metadata:
  name: hello-vs
  namespace: default
  labels:
    f5type: virtual-server
data:
  schema: "f5schemadb://bigip-virtual-server_v0.1.3.json"
  data: |-
    {
      "virtualServer": {
        "frontend": {
          "balance": "round-robin",
          "mode": "http",
          "partition": "kubernetes",
          "virtualAddress": {
            "bindAddr": "10.20.28.70",
            "port": 443
          }
        },
        "backend": {
          "serviceName": "hello",
          "servicePort": 80
        }
      }
    }

(881 Views, 0 likes, 8 Comments)
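The debug line "Requested service backend ... not of NodePort type" suggests the controller finds the hello Service but skips it because it is not exposed as a NodePort, which would explain "Wrote 0 Virtual Server configs". A minimal sketch of a NodePort Service for it follows; the selector and targetPort are assumptions, since the original Service definition is not shown in the post.

apiVersion: v1
kind: Service
metadata:
  name: hello
  namespace: default
spec:
  type: NodePort        # the controller's nodeport mode only picks up NodePort Services
  selector:
    app: hello          # assumed pod label
  ports:
    - name: http
      protocol: TCP
      port: 80          # matches servicePort 80 in the hello-vs ConfigMap
      targetPort: 8080  # assumed container port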
How to define tcp_half_open monitor with k8s-bigip-ctlr

Hello, how can I define the backend health monitor as tcp_half_open through k8s-bigip-ctlr? Currently the backend configuration has a "healthMonitors" key, but it only accepts tcp, udp or http. I even tried to change the health monitor manually on the BIG-IP, but it is reverted by the controller pod. Any ideas?

https://clouddocs.f5.com/products/connectors/k8s-bigip-ctlr/v1.5/#backend

I tried with:

{
  "virtualServer": {
    "backend": {
      "servicePort": 8080,
      "serviceName": "my-service",
      "healthMonitors": [{"protocol": "tcp_half_open"}]
    },
    "frontend": {
      "virtualAddress": {
        "port": 9090,
        "bindAddr": "123.44.11.11"
      },
      "partition": "k8s",
      "balance": "least-connections-member",
      "mode": "tcp"
    }
  }
}

but I am getting the obvious error:

2019/10/31 19:54:57 [WARNING] Could not get config for ConfigMap: f5.vs - configMap is not valid, errors: ["virtualServer.backend.healthMonitors.0.protocol: virtualServer.backend.healthMonitors.0.protocol must be one of the following: \"http\", \"tcp\", \"udp\""]

(529 Views, 0 likes, 5 Comments)
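As the error shows, the f5schemadb ConfigMap schema only accepts http, tcp and udp, so a half-open monitor cannot be expressed there. If your CIS version supports AS3 ConfigMaps (label as3: "true"), one possible route is to declare the monitor in AS3, where the Monitor class does accept monitorType "tcp-half-open". The sketch below is illustrative only: the tenant, application, pool names and member address are made up, and how pool members get wired to your Service depends on the controller version, so treat it as a starting point rather than a drop-in config.

kind: ConfigMap
apiVersion: v1
metadata:
  name: my-service-as3             # hypothetical name
  namespace: default
  labels:
    f5type: virtual-server
    as3: "true"
data:
  template: |
    {
      "class": "AS3",
      "declaration": {
        "class": "ADC",
        "schemaVersion": "3.26.0",
        "id": "tcp-half-open-example",
        "k8s": {
          "class": "Tenant",
          "myApp": {
            "class": "Application",
            "template": "generic",
            "halfOpenMonitor": {
              "class": "Monitor",
              "monitorType": "tcp-half-open"
            },
            "myServicePool": {
              "class": "Pool",
              "monitors": [ { "use": "halfOpenMonitor" } ],
              "members": [
                {
                  "servicePort": 8080,
                  "serverAddresses": [ "192.0.2.30" ]
                }
              ]
            }
          }
        }
      }
    }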
BigIP Controller for Kubernetes not adding pool members

Good evening,

I'm trying to get the BIG-IP Controller up and running in my lab with CRDs, but I can't get it to work. I'll try to give the information needed for troubleshooting, but please bear with me and let me know if I missed something.

The situation is like this: the controller talks to the F5 and creates the Virtual Server and the pool successfully, but the pool is empty.

I used the latest helm chart and I'm running the container with the following parameters (note that I did not use the nodeSelector option, although I tried that too):

- --credentials-directory
- /tmp/creds
- --bigip-partition=rancher
- --bigip-url=bigip-01.domain.se
- --custom-resource-mode=true
- --verify-interval=30
- --insecure=true
- --log-level=DEBUG
- --pool-member-type=nodeport
- --log-as3-response=true

Virtual Server manifest:

apiVersion: "cis.f5.com/v1"
kind: VirtualServer
metadata:
  namespace: istio-system
  name: istio-vs
  labels:
    f5cr: "true"
spec:
  virtualServerAddress: "192.168.1.225"
  virtualServerHTTPSPort: 443
  tlsProfileName: bigip-tlsprofile
  httpTraffic: none
  pools:
    - service: istio-ingressgateway
      servicePort: 443

TLSProfile:

apiVersion: cis.f5.com/v1
kind: TLSProfile
metadata:
  name: bigip-tlsprofile
  namespace: istio-system
  labels:
    f5cr: "true"
spec:
  tls:
    clientSSL: ""
    termination: passthrough
    reference: bigip

The istio-ingressgateway service:

kubectl describe service -n istio-system istio-ingressgateway
... omitted some info ...
Name:                     istio-ingressgateway
Selector:                 app=istio-ingressgateway,istio=ingressgateway
... omitted some info ...
Port:                     status-port  15021/TCP
TargetPort:               15021/TCP
NodePort:                 status-port  32395/TCP
Endpoints:                10.42.2.9:15021
Port:                     http2  80/TCP
TargetPort:               8080/TCP
NodePort:                 http2  31380/TCP
Endpoints:                10.42.2.9:8080
Port:                     https  443/TCP
TargetPort:               8443/TCP
NodePort:                 https  31390/TCP
Endpoints:                10.42.2.9:8443
Port:                     tcp  31400/TCP
TargetPort:               31400/TCP
NodePort:                 tcp  31400/TCP
Endpoints:                10.42.2.9:31400
Port:                     tls  15443/TCP
TargetPort:               15443/TCP
NodePort:                 tls  31443/TCP
Endpoints:                10.42.2.9:15443
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

The pod running the gateway:

kubectl describe pod -n istio-system istio-ingressgateway-647f8dc56f-kqf7g
Name:         istio-ingressgateway-647f8dc56f-kqf7g
Namespace:    istio-system
Priority:     0
Node:         rancher-prod1/192.168.1.45
Start Time:   Fri, 19 Mar 2021 21:20:23 +0100
Labels:       app=istio-ingressgateway
              chart=gateways
              heritage=Tiller
              install.operator.istio.io/owning-resource=unknown
              istio=ingressgateway
              istio.io/rev=default
              operator.istio.io/component=IngressGateways
              pod-template-hash=647f8dc56f
              release=istio
              service.istio.io/canonical-name=istio-ingressgateway
              service.istio.io/canonical-revision=latest

I should also add that I'm using this ingress gateway to access applications via the exposed NodePort, so I know it works.

2021/03/27 20:25:43 [DEBUG] [CORE] NodePoller (0xc0001d45a0) ready to poll, last wait: 30s
2021/03/27 20:25:43 [DEBUG] [CORE] NodePoller (0xc0001d45a0) notifying listener: {l:0xc0000da300 s:0xc0000da360}
2021/03/27 20:25:43 [DEBUG] [CORE] NodePoller (0xc0001d45a0) listener callback - num items: 3 err: <nil>
2021/03/27 20:25:50 [DEBUG] Found endpoints for backend istio-system/istio-ingressgateway: []

Looking at the code for the controller, I interpret from the return type declaration that the NodePoller returned 3 nodes and 0 errors:

type pollData struct {
    nl  []v1.Node
    err error
}

Controller version: f5networks/k8s-bigip-ctlr:2.3.0
F5 version: BIG-IP 16.0.1.1 Build 0.0.6 Point Release 1
AS3 version: 3.26.0

Any ideas?
Kind regards,
Patrik

[Solved] (2.1K Views, 0 likes, 3 Comments)
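For others landing on this thread: with --pool-member-type=nodeport the controller builds pool members from NodeIP:NodePort, and my understanding is that the Service referenced in the VirtualServer CR then needs to be of type NodePort. Below is a trimmed sketch of what that could look like for the gateway Service, reduced to the https port only and reusing the selector and ports from the kubectl output above; it is a guess at the cause, not a confirmed fix.

apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-system
spec:
  type: NodePort                 # nodeport pool-member-type expects a NodePort Service
  selector:
    app: istio-ingressgateway
    istio: ingressgateway
  ports:
    - name: https
      protocol: TCP
      port: 443                  # must match servicePort: 443 in the VirtualServer pool
      targetPort: 8443
      nodePort: 31390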
F5 load balancing Kubernetes masters

Hi,

We are trying to set up an HA Kubernetes cluster. We have a lot of the work done on this, like the etcd cluster setup, etc. We are hoping to load balance the Kubernetes API servers with the F5, but we haven't been able to get that to work. We have configured the F5 virtual server with an IP and port 6443 (the normal Kubernetes API server port) and set up the pool to point to one of the 3 nodes for the moment, to make sure we can get it working. We are using round robin and no persistence profile. So we have HTTPS calls to https://F5_VIP:6443/ that are just being load balanced to the nodes.

Is there any documentation on doing this, or has this been done before? Any guidance would be appreciated.

(850 Views, 1 like, 1 Comment)
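Since the API server terminates its own TLS, the F5 side of this is usually just a plain TCP virtual server on 6443 with no client or server SSL profiles and a TCP (or HTTPS) monitor against the members. The part that often trips people up is on the Kubernetes side: the cluster has to be built (or updated) so that the control-plane endpoint is the F5 VIP, so the API server certificates include it. A sketch of the kubeadm side, assuming a kubeadm-based cluster; the VIP and version below are placeholders, not values from the post.

# kubeadm-config.yaml - passed to "kubeadm init --config kubeadm-config.yaml" on the first master
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.21.0              # placeholder version
controlPlaneEndpoint: "192.0.2.10:6443" # the F5 virtual server address (placeholder VIP)
apiServer:
  certSANs:
    - "192.0.2.10"                      # make sure the VIP ends up in the API server certificate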
k8s bigip-controller node removal

Hi!

We're having issues with network traffic in our Tanzu Kubernetes cluster during upgrades. The upgrade process deletes nodes one by one and creates a new node with the upgraded Kubernetes version. We have tried modifying node-poll-interval and verify-interval, but there seems to be an issue with this approach.

Our traffic is directed to a random node and, from that node, is forwarded to the correct node/pod inside the cluster. When a node is deleted, it takes up to 30 seconds for that node to be removed from BIG-IP. This results in substantial packet loss during every upgrade of our cluster.

Is there a way to remove/disable nodes based on labels/taints? The unschedulable taint would be perfect for this. Any other suggestions?

Best regards

(783 Views, 0 likes, 1 Comment)
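One option that might fit, based on labels rather than taints: the controller has a --node-label-selector argument, so only nodes carrying a given label are considered for pool membership. If the upgrade workflow (or a small pre-drain hook) removes that label from a node before it is drained, the node drops out of the BIG-IP pools on the next poll instead of waiting for the node object to disappear. A sketch of the relevant Deployment arguments follows; the label key/value and the shortened intervals are made-up examples, and the rest of the args would stay as in the existing deployment.

# excerpt from the k8s-bigip-ctlr Deployment spec
args:
  - --pool-member-type=nodeport
  - --node-poll-interval=5            # poll nodes more often than the 30s default
  - --verify-interval=5
  - --node-label-selector=lb=enabled  # only nodes labelled lb=enabled become pool members

Before a node is drained you would then run something like "kubectl label node <node> lb-" to pull it out of rotation ahead of time.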
Can I ssl passthrough with LTM connecting to kubernetes?

Hi,

I'm working on putting LTM in front of a Kubernetes cluster, and I have a question. I want to set up the F5 BIG-IP controller in cluster mode, but if I do so, the virtual server will be of the Standard type, because there is no way to configure the virtual server type. I know an L4 virtual server can do passthrough, and I have done that before. But on the Standard virtual server I have never been able to pass SSL through.

Is there any way to do SSL passthrough with a Kubernetes cluster? Or can I control the virtual server type?

(606 Views, 0 likes, 6 Comments)
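The f5schemadb ConfigMap format does not expose the virtual server type, but the frontend does accept "mode": "tcp". The resulting virtual is still a Standard virtual with a TCP profile rather than a Performance (Layer 4) one, but since no client-ssl profile is attached, TLS is not terminated and effectively passes through to the pods. A sketch follows; the ConfigMap name, bind address and backend service are made up, and the schema version is simply the one that appears elsewhere on this page.

kind: ConfigMap
apiVersion: v1
metadata:
  name: https-passthrough-vs          # hypothetical name
  namespace: default
  labels:
    f5type: virtual-server
data:
  schema: "f5schemadb://bigip-virtual-server_v0.1.3.json"
  data: |-
    {
      "virtualServer": {
        "frontend": {
          "partition": "kubernetes",
          "balance": "round-robin",
          "mode": "tcp",
          "virtualAddress": {
            "bindAddr": "192.0.2.20",
            "port": 443
          }
        },
        "backend": {
          "serviceName": "my-https-service",
          "servicePort": 443
        }
      }
    }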