F5 Kubernetes Container Integration
Two problems. First, I'm trying to find docs to set up f5-kube-proxy; the doc is missing from this link - http://clouddocs.f5.com/products/asp/v1.0/tbd - though I haven't gotten far enough to be able to test communication.
The second is that k8s-bigip-ctlr is not writing VIP or pool updates.
I have k8s-bigip-ctlr and ASP running:
$ kubectl get pods --namespace kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE
f5-asp-1d61j 1/1 Running 0 57m 10.20.30.168 ranchernode2.lax.verifi.com
f5-asp-9wmbw 1/1 Running 0 57m 10.20.30.162 ranchernode1.lax.verifi.com
heapster-818085469-4bnsg 1/1 Running 7 25d 10.42.228.59 ranchernode1.lax.verifi.com
k8s-bigip-ctlr-deployment-1527378375-d1p8v 1/1 Running 0 41m 10.42.68.136 ranchernode2.lax.verifi.com
kube-dns-1208858260-ppgc0 4/4 Running 8 25d 10.42.26.16 ranchernode1.lax.verifi.com
kubernetes-dashboard-2492700511-r20rw 1/1 Running 6 25d 10.42.29.28 ranchernode1.lax.verifi.com
monitoring-grafana-832403127-cq197 1/1 Running 7 25d 10.42.240.16 ranchernode1.lax.verifi.com
monitoring-influxdb-2441835288-p0sg1 1/1 Running 5 25d 10.42.86.70 ranchernode1.lax.verifi.com
tiller-deploy-3991468440-1x80g 1/1 Running 6 25d 10.42.6.76 ranchernode1.lax.verifi.com
I have tried k8s-bigip-ctlr 1.0.0 (the latest release), which fails with different errors.
Creating VIP with bigip-virtual-server_v0.1.0.json
2017/06/27 22:50:13 [WARNING] Could not get config for ConfigMap: k8s.vs - minLength must be of an integer
Creating Pool with bigip-virtual-server_v0.1.0.json
2017/06/27 22:46:45 [WARNING] Could not get config for ConfigMap: k8s.pool - format must be a valid format
So I tried 1.1.0-beta.1, and it produces log output that looks like it's working, but it doesn't write any changes to the F5 (using f5schemadb bigip-virtual-server_v0.1.3.json).
Using f5schemadb://bigip-virtual-server_v0.1.3.json with 1.1.0-beta.1 seems to get the farthest:
2017/06/27 22:58:19 [DEBUG] Delegating type *v1.ConfigMap to virtual server processors
2017/06/27 22:58:19 [DEBUG] Process ConfigMap watch - change type: Add name: hello-vs namespace: default
2017/06/27 22:58:19 [DEBUG] Add watch of namespace default and resource services, store exists:true
2017/06/27 22:58:19 [DEBUG] Looking for service "hello" in namespace "default" as specified by ConfigMap "hello-vs".
2017/06/27 22:58:19 [DEBUG] Requested service backend {ServiceName:hello ServicePort:80 Namespace:default} not of NodePort type
2017/06/27 22:58:19 [DEBUG] Updating ConfigMap {ServiceName:hello ServicePort:80 Namespace:default} annotation - status.virtual-server.f5.com/ip: 10.20.28.70
2017/06/27 22:58:19 [DEBUG] ConfigWriter (0xc42039b3b0) writing section name services
2017/06/27 22:58:19 [DEBUG] ConfigWriter (0xc42039b3b0) successfully wrote section (services)
2017/06/27 22:58:19 [INFO] Wrote 0 Virtual Server configs
2017/06/27 22:58:19 [DEBUG] Services: []
2017/06/27 22:58:19 [DEBUG] Delegating type *v1.ConfigMap to virtual server processors
2017/06/27 22:58:19 [DEBUG] Process ConfigMap watch - change type: Update name: hello-vs namespace: default
2017/06/27 22:58:19 [DEBUG] Add watch of namespace default and resource services, store exists:true
2017/06/27 22:58:19 [DEBUG] Looking for service "hello" in namespace "default" as specified by ConfigMap "hello-vs".
2017/06/27 22:58:19 [DEBUG] Requested service backend {ServiceName:hello ServicePort:80 Namespace:default} not of NodePort type
2017/06/27 22:58:19 [DEBUG] ConfigWriter (0xc42039b3b0) writing section name services
2017/06/27 22:58:19 [DEBUG] ConfigWriter (0xc42039b3b0) successfully wrote section (services)
2017/06/27 22:58:19 [INFO] Wrote 0 Virtual Server configs
2017/06/27 22:58:19 [DEBUG] Services: []
ConfigMap:
kind: ConfigMap
apiVersion: v1
metadata:
  name: hello-vs
  namespace: default
  labels:
    f5type: virtual-server
data:
  schema: "f5schemadb://bigip-virtual-server_v0.1.3.json"
  data: |-
    {
      "virtualServer": {
        "frontend": {
          "balance": "round-robin",
          "mode": "http",
          "partition": "kubernetes",
          "virtualAddress": {
            "bindAddr": "10.20.28.70",
            "port": 443
          }
        },
        "backend": {
          "serviceName": "hello",
          "servicePort": 80
        }
      }
    }
- kylec_251298 (Nimbostratus)
1.1.0-beta.1 is still the only container version that works, but the issue was that type: NodePort wasn't set for the hello service:
kind: Service
apiVersion: v1
metadata:
  name: hello
spec:
  selector:
    app: hello
    tier: backend
  type: NodePort
  ports:
    - protocol: TCP
      port: 80
      targetPort: http
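(As I understand it, NodePort exposes the service on the same static port on every node, which gives the BIG-IP a node address and port to use for its pool members.)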
- Rick_Salsa (Ret. Employee)
With k8s-bigip-ctlr:1.0.0, the schema is f5schemadb://bigip-virtual-server_v0.1.2.json. The v0.1.3 version is for v1.1.0-beta.
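For example, if you stay on 1.0.0, the only change in the ConfigMap above should be the schema line:

data:
  schema: "f5schemadb://bigip-virtual-server_v0.1.2.json"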
If you're using something like Weave or Flannel, you'll need to use a NodePort for your exposed services. If you use Calico with BGP, you can have the BIG-IP load-balance directly to the pod IPs (as your pool members). Future releases will offer better support for VXLAN use cases.
Finally, make sure you've deployed the f5-kube-proxy to replace k8s' kube-proxy implementation to make sure that ASP works as expected.
- Rick_Salsa (Ret. Employee)
You can find the docs for deploying f5-kube-proxy in the F5 Kubernetes Container Integration guide.
What environment are you running your k8s cluster on?
We'll make sure to get that other doc bug fixed.
- kylec_251298 (Nimbostratus)
Thank you for your response rsal, it was the type: NodePort that needed to be set; I added an answer to my question with that below. I haven't tried k8s-bigip-ctlr:1.0.0 with f5schemadb://bigip-virtual-server_v0.1.2.json since I added that, but it's working with the v0.1.3 schema and v1.1.0-beta.
However, it's not actually adding any nodes to the pool on the F5. It did create the VIP and pool. Is kube-proxy needed for this?
2017/06/29 20:35:02 [DEBUG] Delegating type *v1.Service to virtual server processors
2017/06/29 20:35:02 [DEBUG] Process Service watch - change type: Add name: hello namespace: default
2017/06/29 20:35:02 [DEBUG] Service backend matched {ServiceName:hello ServicePort:80 Namespace:default}: using node port 30775
2017/06/29 20:35:02 [DEBUG] ConfigWriter (0xc420339050) writing section name services
2017/06/29 20:35:02 [DEBUG] ConfigWriter (0xc420339050) successfully wrote section (services)
2017/06/29 20:35:02 [INFO] Wrote 1 Virtual Server configs
2017/06/29 20:35:02 [DEBUG] Services: [{"virtualServer":{"backend":{"serviceName":"hello","servicePort":80,"poolMemberAddrs":[]},"frontend":{"virtualServerName":"default_hello-vs","partition":"kubernetes","balance":"round-robin","mode":"http","virtualAddress":{"bindAddr":"10.20.28.70","port":443},"iappPoolMemberTable":{"name":"","columns":null}}}}]
Here is the service:
$ kubectl describe service hello
Name:             hello
Namespace:        default
Labels:
Selector:         app=hello,tier=backend
Type:             NodePort
IP:               10.43.203.210
Port:             80/TCP
NodePort:         30775/TCP
Endpoints:        10.42.189.143:80,10.42.20.122:80,10.42.236.34:80 + 4 more...
Session Affinity: None
No events.
Also thank you for sending the kube-proxy link, I didn't scroll down far enough on the page (the tip box made me think the rest of the page was empty) to find the kube-proxy example manifests.
I am running Kubernetes via Rancher. Their configuration has kube-proxy running as part of their stack; I was thinking I could update it and add the commands and extra volume, but I'm trying to make sense of the way they have it set up. I'm a bit lost here.
- rsal_79565 (Historic F5 Account)
Does Rancher deploy kube-proxy as a DaemonSet? Here are the important volume-mounts and host paths that our kube-proxy needs:
volumeMounts:
  - mountPath: /var/run/kubernetes/proxy-plugin
    name: proxy-plugin
  - mountPath: /run/kubeconfig
    name: kubeconfig
volumes:
  - hostPath:
      path: /etc/kubernetes/kubelet.conf
    name: kubeconfig
  - hostPath:
      path: /etc/kubernetes/proxy-plugin
    name: proxy-plugin
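In the context of a DaemonSet spec, it would sit roughly like this (a minimal skeleton only; the image and command come from the deployment manifests in the guide, so treat them as placeholders):

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: f5-kube-proxy
  namespace: kube-system
spec:
  template:
    metadata:
      labels:
        app: f5-kube-proxy
    spec:
      hostNetwork: true
      containers:
        - name: kube-proxy
          image: f5-kube-proxy   # placeholder - use the image from the guide's manifests
          volumeMounts:
            - mountPath: /var/run/kubernetes/proxy-plugin
              name: proxy-plugin
            - mountPath: /run/kubeconfig
              name: kubeconfig
      volumes:
        - hostPath:
            path: /etc/kubernetes/kubelet.conf
          name: kubeconfig
        - hostPath:
            path: /etc/kubernetes/proxy-plugin
          name: proxy-plugin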
You should be able to match wherever Rancher keeps the kubelet.conf file. You can use the same directory for the proxy-plugin volume, which we need to be able to write to.
- kylec_251298 (Nimbostratus)
Hmmm, yeah, the problem is the way Rancher deploys kube-proxy. It deploys all of Kubernetes as basically a docker-compose file and runs everything from containers.
This is how the Rancher kube-proxy is started.
proxy:
  privileged: true
  image: rancher/k8s:v1.5.4-rancher1-4
  network_mode: host
  command:
    - kube-proxy
    - --master=http://kubernetes.kubernetes.rancher.internal
    - --v=2
    - --healthz-bind-address=0.0.0.0
  labels:
    io.rancher.container.dns: 'true'
    io.rancher.scheduler.global: 'true'
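If I were to modify it, I imagine the change would look something like this (untested sketch; the f5-kube-proxy image name is a placeholder, not a real tag):

proxy:
  privileged: true
  image: f5-kube-proxy-image   # placeholder - would replace the rancher/k8s image
  network_mode: host
  command:
    - kube-proxy
    - --master=http://kubernetes.kubernetes.rancher.internal
    - --v=2
    - --healthz-bind-address=0.0.0.0
  volumes:
    # host path : container path, per the mounts rsal listed above
    - /etc/kubernetes/kubelet.conf:/run/kubeconfig
    - /etc/kubernetes/proxy-plugin:/var/run/kubernetes/proxy-plugin
  labels:
    io.rancher.container.dns: 'true'
    io.rancher.scheduler.global: 'true'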
So adding a volume to the host won't do any good, because /etc/kubernetes/kubelet.conf doesn't exist on the host; it lives in another container.
I might need to contact Rancher to see how this could be done.
- dpf5_342584 (Nimbostratus)
I tested with schema version 0.1.2:
schema: "f5schemadb://bigip-virtual-server_v0.1.2.json"
and controller version 1.3.0:
image: "f5networks/k8s-bigip-ctlr:1.3.0"
on Rancher v1.6.10 and Kubernetes 1.7.
- dpf5_342584 (Nimbostratus)
I'm having a new problem now; does anybody know what needs fixing? The controller keeps rewriting the virtual server every second.