calico
Calico, Kubernetes and BIG-IP
In order to follow along with this post you will need a few things. First, a working Kubernetes deployment; if you don't have one, following this link will get you up and running. The second thing you will need is a BIG-IP (don't have one? Click here). You will need the advanced routing modules to get BGP working. The last thing is a Container Connector; if you don't have one of those yet, you can pull it from Docker Hub.

Now, it's a lot of work to get all these pieces up and running. An easier solution is to automate it, as we do, using OpenStack and heat templates. This gets your services up and running quickly, and identical every time. Need to update? Change your heat template. Broke your stack? Use the heat template. And that's it.

Let's Get Started

Spin up a simple nginx service in your Kubernetes deployment so we can see all this magic happening.

# Run the Pods.
kubectl run nginx --replicas=2 --image=nginx

# Create the Service.
kubectl expose deployment nginx --port=80

# Run a Pod and try to access the nginx Service.
$ kubectl run access --rm -ti --image busybox /bin/sh
Waiting for pod policy-demo/access-472357175-y0m47 to be running, status is Pending, pod ready: false
If you don't see a command prompt, try pressing enter.
/ # wget -q nginx -O -

You should see a response from nginx. Great! Our Service is accessible. You can exit the Pod now.

Installing Calico

Let's install Calico next. Following along here, it's super easy to install, and we use the provided config map almost exactly; the only change we make is to the IP pool.

Calicoctl can be run two ways. The first is through a Docker container, which allows most commands to work. This can be useful for a quick check, but it becomes cumbersome if you are running a lot of commands, and it doesn't allow you to run commands that need access to the PID namespace or files on the host without adding volume mounts.

To install calicoctl on the node:

wget https://github.com/projectcalico/calico-containers/releases/download/v1.0.1/calicoctl
chmod +x calicoctl
sudo mv calicoctl /usr/bin

To run calicoctl through a Docker container:

docker run -i --rm --net=host calico/ctl:v1.0.1 version
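Since the only change we made to the provided config map was the IP pool, it's worth confirming what Calico actually picked up. Assuming the v1.0-era calicoctl resource names, a check along these lines should print the configured pool:

# Show the IP pool(s) Calico is using (resource name per the v1.0 CLI)
calicoctl get ipPool -o wide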
Now, this is where things get a bit more interesting. We need to set up the BGP peering in Calico so it can advertise our endpoints to the BIG-IP. Take note of the asNumber Calico is using. You can set your own or use the default, then run the create command.

calicoctl config get asnumber

cat << EOF | calicoctl create -f -
apiVersion: v1
kind: bgpPeer
metadata:
  peerIP: 172.16.1.6
  scope: global
spec:
  asNumber: 64511
EOF

I'm using the typical 3-interface setup: a management interface, an external (web-facing) interface, and an internal interface (172.16.1.6 in my example) that connects to the Kubernetes cluster. So replace 172.16.1.6 with whatever IP address you've assigned to the BIG-IP's internal interface.

Verify it was set up correctly:

sudo calicoctl node status

You should receive output similar to this:

Calico process is running.

IPv4 BGP status
+--------------+-------------------+-------+------------+-------------+
| PEER ADDRESS |     PEER TYPE     | STATE |   SINCE    |    INFO     |
+--------------+-------------------+-------+------------+-------------+
| 172.16.1.3   | node-to-node mesh | up    | 2017-01-30 | Established |
| 172.16.1.7   | node-to-node mesh | up    | 2017-01-30 | Established |
| 172.16.1.6   | global            | up    | 21:41:21   | Established |
+--------------+-------------------+-------+------------+-------------+

The "global" peer (the BIG-IP) will show "State: start" and "Info: Active" (or another status showing it isn't connected) until we get the BIG-IP configured to handle BGP. Once the BIG-IP has been configured, it should show "State: up" and "Info: Established", as you can see above.

If for some reason you need to remove the BIG-IP as a BGP peer, you can run:

sudo calicoctl delete bgppeer --scope=global 172.16.1.6

We aren't done in our Kubernetes deployment yet. It's time for the important piece: the Container Connector (known as the CC).

Container Connectors

To start, let's set up a secret to hold our BIG-IP information: username, password, and URL. If you need some help with secrets, check here. We set up our secret through a yaml similar to the one below (note that the values under data must be base64-encoded) and apply it by running kubectl create -f secret.yaml

apiVersion: v1
items:
- apiVersion: v1
  data:
    password: xxx
    url: xxx
    username: xxx
  kind: Secret
  metadata:
    name: bigip-credentials
    namespace: kube-system
  type: Opaque
kind: List
metadata: {}

Something important to note about the secret: it has to be in the same namespace you deploy the CC in. If it's not, the CC won't be able to update your BIG-IP.
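To confirm the secret landed in the namespace the CC expects, a quick check with plain kubectl should list it:

kubectl get secret bigip-credentials --namespace kube-system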
Let's take a look at the config we are going to use to deploy the CC:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: f5-k8s-controller
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      name: f5-k8s-controller
      labels:
        app: f5-k8s-controller
    spec:
      containers:
      - name: f5-k8s-controller
        # Specify the path to your image here
        image: "path/to/cc/image:latest"
        env:
        # Get sensitive values from the bigip-credentials secret
        - name: BIGIP_USERNAME
          valueFrom:
            secretKeyRef:
              name: bigip-credentials
              key: username
        - name: BIGIP_PASSWORD
          valueFrom:
            secretKeyRef:
              name: bigip-credentials
              key: password
        - name: BIGIP_URL
          valueFrom:
            secretKeyRef:
              name: bigip-credentials
              key: url
        command: ["/app/bin/f5-k8s-controller"]
        args:
        - "--bigip-url=$(BIGIP_URL)"
        - "--bigip-username=$(BIGIP_USERNAME)"
        - "--bigip-password=$(BIGIP_PASSWORD)"
        - "--namespace=default"
        - "--bigip-partition=k8s"
        - "--pool-member-type=cluster"

The important pieces of this config are the path to your CC image and the last three arguments we start the CC with. The namespace argument is the namespace you want the CC to watch for changes. Our nginx service is in default, so we watch default. The bigip-partition is where the CC will create your virtual servers. This partition has to already exist on your BIG-IP, and it can NOT be your "Common" partition (the CC will take over the partition and manage everything inside it). And the last one, pool-member-type: we are using cluster because Calico allows us to see all endpoints (the pods in the nginx service), which is the whole point of this post! You can also leave this argument off and it will default to a NodePort setup, but that won't let you take advantage of BIG-IP's advanced load balancing across all endpoints.

To deploy the CC we run kubectl create -f cc.yaml

And to verify everything looks good: kubectl get deployment f5-k8s-controller --namespace kube-system

We are almost done setting up the CC, but there is one last piece: telling it what needs to be configured on the BIG-IP. To do that we use a ConfigMap and run kubectl create -f vs-config.yaml

kind: ConfigMap
apiVersion: v1
metadata:
  name: example-vs
  namespace: default
  labels:
    f5type: virtual-server
data:
  schema: "f5schemadb://bigip-virtual-server_v0.1.1.json"
  data: |
    {
      "virtualServer": {
        "frontend": {
          "balance": "round-robin",
          "mode": "http",
          "partition": "k8s",
          "virtualAddress": {
            "bindAddr": "172.17.0.1",
            "port": 80
          }
        },
        "backend": {
          "serviceName": "nginx",
          "servicePort": 80
        }
      }
    }

Let's talk about what we are seeing. The virtualServer section is what gets applied to the BIG-IP, combined with information from Kubernetes about the service endpoints. So frontend holds the BIG-IP configuration options and backend holds the Kubernetes options. Of note, the bindAddr is where your traffic comes in from the outside world; in our demo, 172.17.0.1 is my internet-facing IP address.

Awesome. If all went well, you should now be able to access the BIG-IP GUI and see your virtual server being configured automatically.
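If the virtual server doesn't appear, the CC's logs are the first place to look. A check along these lines (the pod name will differ in your cluster) usually shows whether the CC could reach the BIG-IP and whether it picked up the ConfigMap:

kubectl get pods --namespace kube-system
kubectl logs <f5-k8s-controller-pod-name> --namespace kube-system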
If you want to see something really cool (and you do), run kubectl edit deployment nginx and, under spec, update the replicas count to 5, then go check the virtual server pool members on the BIG-IP. Cool, huh?

The BIG-IP can see our pool members, but it can't actually route traffic to them yet. We need to set up BGP peering on our BIG-IP so it has a map to get traffic to the pool members.

BIG-IP BGP Peering

From the BIG-IP GUI, do these steps to allow BGP peering to work:

Go to Network >> Self IPs >> selfip.internal (or whatever you called your internal network)
Add a custom TCP port of 179 - this allows BGP peering through
Go to Network >> Route Domains >> 0 (again, if you called it something different, use that)
Under Dynamic Routing Protocols move "BGP" to Enabled and push Update.

We are done with the GUI; let's SSH into our BIG-IP and keep moving. Explanation of these commands is beyond the scope of this post, so if you are confused or want more info, check the docs here and here.

imish
enable
configure terminal
router bgp 64511
neighbor group1 peer-group
neighbor group1 remote-as 64511
neighbor 172.0.0.0 peer-group group1

You will need to add a neighbor entry for every node in your Kubernetes deployment that you want to peer with. As an example, if you have 1 master and 2 workers, you need to add all 3.

Let's check some of our configuration outputs on the BIG-IP to verify we are seeing what is expected. From the BIG-IP, run ip route

There will be some output, but what we are looking for is something like 10.4.0.1/26 via 172.16.1.9 dev internal proto zebra, where 10.4.0.1/26 is a pod subnet and 172.16.1.9 is your node IP. This tells us that the BIG-IP has a route to our pods by going through our nodes!

If everything is looking good, it's time to test it out end to end. Curl your BIG-IP's external IP and you will get a response from your nginx pod. And that's it! You now have a working end-to-end setup using a BIG-IP for your load balancing, Calico advertising BGP routes to the endpoints, and Kubernetes taking care of your containers.

The Container Connector and other products for containerized environments, like the Application Service Proxy, are available for use today. For more information, check out our docs.
CIS and Kubernetes - Part 1: Install Kubernetes and Calico

Welcome to this series to see how to:

Install Kubernetes and Calico (Part 1)
Deploy F5 Container Ingress Services (F5 CIS) to tie application lifecycle to our application services (Part 2)

Here is the setup of our lab environment:

BIG-IP Version: 15.0.1
Kubernetes components: Ubuntu 18.04 LTS

We consider that your BIG-IPs are already set up and running:

Licensed and set up as a cluster
The networking setup is already done

Part 1: Install Kubernetes and Calico

Setup our systems before installing Kubernetes

Step1: Update our systems and install docker

To run containers in Pods, Kubernetes uses a container runtime. We will use docker and follow the recommendation provided here.

As root on ALL Kubernetes components (Master and Node):

# Install packages to allow apt to use a repository over HTTPS
apt-get -y update && apt-get install -y apt-transport-https ca-certificates curl software-properties-common

# Add Docker's official GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -

# Add Docker apt repository.
add-apt-repository \
  "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) \
  stable"

# Install Docker CE.
apt-get -y update && apt-get install -y docker-ce=18.06.2~ce~3-0~ubuntu

# Setup daemon.
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

mkdir -p /etc/systemd/system/docker.service.d

# Restart docker.
systemctl daemon-reload
systemctl restart docker

We may do a quick test to ensure docker runs as expected:

docker run hello-world

Step2: Setup Kubernetes tools (kubeadm, kubelet and kubectl)

To setup Kubernetes, we will leverage the following tools:

kubeadm: the command to bootstrap the cluster.
kubelet: the component that runs on all of the machines in your cluster and does things like starting pods and containers.
kubectl: the command line util to talk to your cluster.

As root on ALL Kubernetes components (Master and Node):

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF | tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get -y update

We can review which versions of Kubernetes are supported with F5 Container Ingress Services here. At the time of this article, the latest supported version is v1.13.4. We'll make sure to install this specific version in our next step:

apt-get install -qy kubelet=1.13.4-00 kubeadm=1.13.4-00 kubectl=1.13.4-00 kubernetes-cni=0.6.0-00
apt-mark hold kubelet kubeadm kubectl
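Before initializing the cluster, it's worth confirming that the pinned versions actually installed; these are standard commands, independent of this setup:

kubeadm version
kubectl version --client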
Install Kubernetes

Step1: Setup Kubernetes with kubeadm

We will follow the steps provided in the documentation here. As root on the MASTER node (make sure to update the api server address to reflect your master node IP):

kubeadm init --apiserver-advertise-address=10.1.20.20 --pod-network-cidr=192.168.0.0/16

Note: SAVE the kubeadm join command somewhere. It is needed to "assimilate" the node later. In my example, it looks like the following (YOURS WILL BE DIFFERENT):

kubeadm join 10.1.20.20:6443 --token rlbc20.va65z7eauz89mmuv --discovery-token-ca-cert-hash sha256:42eca5bf49c645ff143f972f6bc88a59468a30276f907bf40da3bcf5127c0375

Now you should NOT be root anymore. Go back to your non-root user; since I use Ubuntu, I'll use the default "ubuntu" user. Run the following commands:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Step2: Install the networking component of Kubernetes

The last step is to set up the networking for our k8s infrastructure. In our kubeadm init command, we used --pod-network-cidr=192.168.0.0/16 so that we can now set up the network leveraging Calico, as documented here:

kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml

You may monitor the deployment by running the command:

kubectl get pods --all-namespaces

After some time (<1 min), everything should have a "Running" status. Make sure that CoreDNS started properly as well. If everything is up and running, we have our master set up properly and can go to the node to set up k8s on it.

Step3: Add the Node to our Kubernetes Cluster

Now that the master is set up properly, we can assimilate the node. You need to retrieve the "kubeadm join ..." command that you received at the end of the "kubeadm init ..." command.

You must run the following command as ROOT on the Kubernetes NODE (remember that you got a different hash and token; the command below is an example):

kubeadm join 10.1.20.20:6443 --token rlbc20.va65z7eauz89mmuv --discovery-token-ca-cert-hash sha256:42eca5bf49c645ff143f972f6bc88a59468a30276f907bf40da3bcf5127c0375

We can check the status of our node by running the following command on our MASTER (ubuntu user):

kubectl get nodes

Both components should have a "Ready" status.

The last step is to set up Calico between our BIG-IPs and our Kubernetes cluster.

Setup Calico

We need to set up Calico on our BIG-IPs and k8s components. We will set up our environment with the following AS number: 64512

Step1: BIG-IPs Calico setup

F5 has documented this procedure here. We will use our self IPs on the internal network. Therefore we need to make sure of the following:

The self IP has a port lockdown setting of "Allow All", or add a custom TCP port to the self IP: TCP port 179

You also need to allow BGP on the default route domain 0 on your BIG-IPs. Connect to the BIG-IP GUI and go into Network > Route Domains. Click on Route Domain "0", allow BGP, and click on "Update".

Once this is done, connect via SSH and get into a bash shell on both BIG-IPs. Run the following commands:

#access the IMI Shell
imish
#Switch to enable mode
enable
#Enter configuration mode
config terminal
#Setup route bgp with AS Number 64512
router bgp 64512
#Create BGP Peer group
neighbor calico-k8s peer-group
#assign peer group as BGP neighbors
neighbor calico-k8s remote-as 64512
#we need to add all the peers: the other BIG-IP, our k8s components
neighbor 10.1.20.20 peer-group calico-k8s
neighbor 10.1.20.21 peer-group calico-k8s
#on BIG-IP1, run
neighbor 10.1.20.12 peer-group calico-k8s
#on BIG-IP2, run
neighbor 10.1.20.11 peer-group calico-k8s
#save configuration
write
#exit
end

You can review your setup with the command show ip bgp neighbors

Note: your other BIG-IP should be identified with a router ID and have a BGP state of "Active". The k8s nodes won't have a router ID yet, since BGP hasn't been set up on them.

Keep your BIG-IP SSH sessions open; we'll re-use the imish terminal once our k8s components have Calico set up.
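If you want a more compact view of peer state while you work through the rest of the setup, ZebOS-style BGP implementations also provide a summary command (I believe BIG-IP's dynamic routing supports this, but verify on your TMOS version); at this stage every peer will still show as down or Active:

show ip bgp summary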
Step2: Kubernetes Calico setup

On the MASTER node (not as root), we need to retrieve the calicoctl binary:

curl -O -L https://github.com/projectcalico/calicoctl/releases/download/v3.10.0/calicoctl
chmod +x calicoctl
sudo mv calicoctl /usr/local/bin

We need to set up calicoctl as explained here:

sudo mkdir /etc/calico

Create a file /etc/calico/calicoctl.cfg with your preferred editor (you'll need sudo privileges). This file should contain the following:

apiVersion: projectcalico.org/v3
kind: CalicoAPIConfig
metadata:
spec:
  datastoreType: "kubernetes"
  kubeconfig: "/home/ubuntu/config"

Note: you may have to change the path specified by the kubeconfig parameter based on the user you use to run kubectl commands.

To make sure that calicoctl is properly set up, run the command:

calicoctl get nodes

You should get a list of your Kubernetes nodes.

Now we can work on our Calico/BGP configuration as documented here. On the MASTER node:

cat << EOF | calicoctl create -f -
apiVersion: projectcalico.org/v3
kind: BGPConfiguration
metadata:
  name: default
spec:
  logSeverityScreen: Info
  nodeToNodeMeshEnabled: true
  asNumber: 64512
EOF

Note: because we set nodeToNodeMeshEnabled to true, the k8s nodes will receive the same config.

We may now set up our BIG-IP BGP peers. Replace the peerIP values with the IPs of your BIG-IPs:

cat << EOF | calicoctl create -f -
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: bgppeer-global-bigip1
spec:
  peerIP: 10.1.20.11
  asNumber: 64512
EOF

cat << EOF | calicoctl create -f -
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: bgppeer-global-bigip2
spec:
  peerIP: 10.1.20.12
  asNumber: 64512
EOF

Review your setup with the command:

calicoctl get bgpPeer

If you go back to your BIG-IP SSH connections, you can check that your Kubernetes nodes now have a router ID in your BGP configuration:

imish
show ip bgp neighbors

Summary

So far we have:

Set up Kubernetes
Set up Calico between our BIG-IPs and our Kubernetes cluster

In the next article, we will set up F5 Container Ingress Services (F5 CIS).
CIS and Kubernetes - Part 2: Install F5 Container Ingress Services

In our previous article, we set up Kubernetes and Calico with our BIG-IPs. Now we will set up F5 Container Ingress Services (F5 CIS) and deploy an ingress service.

Note: in this deployment, we will set up F5 CIS in the namespace kube-system.

BIG-IPs Setup

Setup our Kubernetes partition

Before deploying F5 Container Ingress Services, we need to set up our BIG-IPs:

Setup a partition that CIS will use
Install AS3

On EACH BIG-IP, do the following: in the GUI, go to System > Users > Partition List and click on the Create button. Create a partition called "kubernetes" and click Finished.

Install AS3

Next we need to download the AS3 extension and install it on each BIG-IP. Go to GitHub, select the release with the "latest" tag, and download the rpm. Once you have the rpm, go to iApps > Package Management LX and click the Import button (you need to run BIG-IP v12.1 or later) on EACH BIG-IP. Choose your rpm and click Upload.

Our BIG-IPs are now set up properly. Next, we will deploy F5 CIS.

CIS Deployment

Connect to your Kubernetes cluster. We will need to do the following:

Create a Kubernetes secret to store our BIG-IP credentials
Setup a service account for CIS and setup RBAC
Setup our CIS configuration. We need one CIS per BIG-IP (even when the BIG-IPs are set up as a cluster - we don't recommend automatic sync between BIG-IPs)
Deploy one CIS per BIG-IP

Store our BIG-IP credentials in a Kubernetes secret

To store your credentials (login/password) in a Kubernetes secret, you can run the following command (https://clouddocs.f5.com/containers/v2/kubernetes/kctlr-secrets.html#secret-bigip-login):

kubectl create secret generic bigip-login --namespace kube-system --from-literal=username=<your_login> --from-literal=password=<your_password>

In my setup, my credentials are login admin and password D3f4ult123, so we run:

ubuntu@ip-10-1-1-4:~$ kubectl create secret generic bigip-login --namespace kube-system --from-literal=username=admin --from-literal=password=D3f4ult123
secret/bigip-login created

Create a Service Account / RBAC for CIS

Now we need to create our service account (https://clouddocs.f5.com/containers/v2/kubernetes/kctlr-app-install.html#set-up-rbac-authentication). Run the following command:

ubuntu@ip-10-1-1-4:~$ kubectl create serviceaccount bigip-ctlr -n kube-system
serviceaccount/bigip-ctlr created
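As a quick sanity check before moving on, both objects should now exist in kube-system; standard kubectl queries will confirm:

kubectl get secret bigip-login -n kube-system
kubectl get serviceaccount bigip-ctlr -n kube-system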
Use your favorite editor to create this file, f5-k8s-sample-rbac.yaml (in our setup, replace <secret-containing-bigip-login> with bigip-login, the secret we just created):

# for use in k8s clusters only
# for OpenShift, use the OpenShift-specific examples
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: bigip-ctlr-clusterrole
rules:
- apiGroups: ["", "extensions"]
  resources: ["nodes", "services", "endpoints", "namespaces", "ingresses", "pods"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["", "extensions"]
  resources: ["configmaps", "events", "ingresses/status"]
  verbs: ["get", "list", "watch", "update", "create", "patch"]
- apiGroups: ["", "extensions"]
  resources: ["secrets"]
  resourceNames: ["<secret-containing-bigip-login>"]
  verbs: ["get", "list", "watch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: bigip-ctlr-clusterrole-binding
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: bigip-ctlr-clusterrole
subjects:
- apiGroup: ""
  kind: ServiceAccount
  name: bigip-ctlr
  namespace: kube-system

Deploy this config by running the following command:

ubuntu@ip-10-1-1-4:~$ kubectl apply -f f5-k8s-sample-rbac.yaml
clusterrole.rbac.authorization.k8s.io/bigip-ctlr-clusterrole created
clusterrolebinding.rbac.authorization.k8s.io/bigip-ctlr-clusterrole-binding created

Next we create our CIS deployment configurations. Use your favorite editor to create the following files:

setup_cis_bigip1.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-bigip1-ctlr-deployment
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: k8s-bigip1-ctlr
  # DO NOT INCREASE REPLICA COUNT
  replicas: 1
  template:
    metadata:
      labels:
        app: k8s-bigip1-ctlr
    spec:
      # Name of the Service Account bound to a Cluster Role with the required
      # permissions
      serviceAccountName: bigip-ctlr
      containers:
      - name: k8s-bigip-ctlr
        image: "f5networks/k8s-bigip-ctlr"
        env:
        - name: BIGIP_USERNAME
          valueFrom:
            secretKeyRef:
              # Replace with the name of the Secret containing your login
              # credentials
              name: bigip-login
              key: username
        - name: BIGIP_PASSWORD
          valueFrom:
            secretKeyRef:
              # Replace with the name of the Secret containing your login
              # credentials
              name: bigip-login
              key: password
        command: ["/app/bin/k8s-bigip-ctlr"]
        args: [
          # See the k8s-bigip-ctlr documentation for information about
          # all config options
          # https://clouddocs.f5.com/products/connectors/k8s-bigip-ctlr/latest
          "--bigip-username=$(BIGIP_USERNAME)",
          "--bigip-password=$(BIGIP_PASSWORD)",
          "--bigip-url=10.1.20.11",
          "--bigip-partition=kubernetes",
          "--insecure=true",
          "--pool-member-type=cluster",
          "--agent=as3"
        ]
      imagePullSecrets:
      # Secret that gives access to a private docker registry
      - name: f5-docker-images
      # Secret containing the BIG-IP system login credentials
      - name: bigip-login

setup_cis_bigip2.yaml is identical apart from three values: the Deployment name (k8s-bigip2-ctlr-deployment), the app label (k8s-bigip2-ctlr), and the target --bigip-url, which points at the second BIG-IP (10.1.20.12).

To deploy our CIS instances, run the following commands:

kubectl apply -f setup_cis_bigip1.yaml
kubectl apply -f setup_cis_bigip2.yaml

You can make sure that they got deployed successfully:

kubectl get pods -n kube-system
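kube-system is a busy namespace, so filtering on the app labels we set in the two Deployments narrows the output to just the CIS pods:

kubectl get pods -n kube-system -l app=k8s-bigip1-ctlr
kubectl get pods -n kube-system -l app=k8s-bigip2-ctlr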
You can review whether a controller launched successfully with kubectl logs <pod_name> -n kube-system:

ubuntu@ip-10-1-1-4:~$ kubectl logs k8s-bigip2-ctlr-deployment-6f674c8d58-bbzqv -n kube-system
2020/01/03 12:04:06 [INFO] Starting: Version: 1.12.0, BuildInfo: n2050-623590021
2020/01/03 12:04:06 [INFO] ConfigWriter started: 0xc0002bb950
2020/01/03 12:04:06 [INFO] Started config driver sub-process at pid: 14
2020/01/03 12:04:07 [INFO] NodePoller (0xc000b7c090) registering new listener: 0x11bfea0
2020/01/03 12:04:07 [INFO] NodePoller started: (0xc000b7c090)
2020/01/03 12:04:07 [INFO] Watching Ingress resources.
2020/01/03 12:04:07 [INFO] Watching ConfigMap resources.
2020/01/03 12:04:07 [INFO] Handling ConfigMap resource events.
2020/01/03 12:04:07 [INFO] Handling Ingress resource events.
2020/01/03 12:04:07 [INFO] Registered BigIP Metrics
2020/01/03 12:04:07 [INFO] [2020-01-03 12:04:07,663 __main__ INFO] entering inotify loop to watch /tmp/k8s-bigip-ctlr.config563475778/config.json
2020/01/03 12:04:08 [INFO] Successfully Sent the FDB Records

You may also check your BIG-IP configuration: if CIS was able to connect successfully, it will have created another partition called "kubernetes_AS3". This is explained here: https://clouddocs.f5.com/containers/v2/kubernetes/kctlr-use-as3-backend.html

CIS will create partition kubernetes_AS3 to store LTM objects such as pools and virtual servers. FDB and static ARP entries are stored in kubernetes. These partitions should not be managed manually.

Application Deployment

In this section, we will do the following:

Deploy an application with 2 replicas (i.e., 2 instances of our app)
Setup an ingress service to connect to our application

Deploy our application and service

Create the following deployment and service configuration files:

f5-hello-world-app-http-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: f5-hello-world
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: f5-hello-world
  template:
    metadata:
      labels:
        app: f5-hello-world
    spec:
      containers:
      - env:
        - name: service_name
          value: f5-hello-world
        image: f5devcentral/f5-hello-world:latest
        imagePullPolicy: Always
        name: f5-hello-world
        ports:
        - containerPort: 8080
          protocol: TCP

f5-hello-world-app-http-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: f5-hello-world
  namespace: default
  labels:
    app: f5-hello-world
spec:
  ports:
  - name: f5-hello-world
    port: 8080
    protocol: TCP
    targetPort: 8080
  type: NodePort
  selector:
    app: f5-hello-world

Apply your deployment and service:

ubuntu@ip-10-1-1-4:~$ kubectl apply -f f5-hello-world-app-http-deployment.yaml
deployment.apps/f5-hello-world created
ubuntu@ip-10-1-1-4:~$ kubectl apply -f f5-hello-world-app-http-service.yaml
service/f5-hello-world created

You can review your deployment and service with kubectl get:

ubuntu@ip-10-1-1-4:~$ kubectl get pods
NAME                              READY   STATUS    RESTARTS   AGE
f5-hello-world-847698f5c6-59mmn   1/1     Running   0          2m3s
f5-hello-world-847698f5c6-pcz9v   1/1     Running   0          2m3s

ubuntu@ip-10-1-1-4:~$ kubectl get svc
NAME             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
f5-hello-world   NodePort    10.97.83.208   <none>        8080:31380/TCP   2m
kubernetes       ClusterIP   10.96.0.1      <none>        443/TCP          63d
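Before wiring the app to the BIG-IP, you can sanity-check it straight through the NodePort shown above (31380 here; yours will differ) from any machine that can reach a node. Using the master node IP from our lab as an example:

curl http://10.1.20.20:31380/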
Define our ingress service

CIS ingress annotations can be reviewed here: https://clouddocs.f5.com/products/connectors/k8s-bigip-ctlr/v1.11/#ingress-resources

Create the following file, f5-as3-ingress.yaml:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: singleingress1
  namespace: default
  annotations:
    # See the k8s-bigip-ctlr documentation for information about
    # all Ingress Annotations
    # https://clouddocs.f5.com/products/connectors/k8s-bigip-ctlr/latest/#supported-ingress-annotations
    virtual-server.f5.com/ip: "10.1.10.80"
    virtual-server.f5.com/http-port: "443"
    virtual-server.f5.com/partition: "kubernetes_AS3"
    virtual-server.f5.com/health: '[{"path": "/", "send": "HTTP GET /", "interval": 5, "timeout": 10}]'
spec:
  backend:
    # The name of the Service you want to expose to external traffic
    serviceName: f5-hello-world
    servicePort: 8080

Apply it with kubectl apply -f f5-as3-ingress.yaml. You can then review the BIG-IP configuration by going into the kubernetes_AS3 partition.

Summary

In this article we saw how to deploy CIS with AS3 to handle ingress services. CIS has more capabilities than ingress (ConfigMaps, Routes, ...) that you may review here: https://clouddocs.f5.com/containers/v2/kubernetes/

Relevant links to this article:

https://www.f5.com/products/automation-and-orchestration/container-ingress-services
https://clouddocs.f5.com/containers/v2/kubernetes/kctlr-use-as3-backend.html
https://clouddocs.f5.com/containers/v2/kubernetes/kctlr-k8s-as3.html
https://clouddocs.f5.com/containers/v2/kubernetes/kctlr-app-install.html
Calico cannot advertise routes to BIG-IP through BGP

Hi. I set up BIG-IP as usual, but I got an unfamiliar error and noticed that Calico cannot advertise its routes to BIG-IP.

In the BIG-IP logs I found errors like `Open Cap: Ignoring Vendor specific capability, code 70 len 0` (the code is either 69 or 70). BGP itself seems to be established: `sudo calicoctl node status` on the Kubernetes master node returns `[BIG-IP Internal IP] | global | up | 09:32:57 | Established`, and `show ip bgp neighbors` on the BIG-IP terminal returns `BGP state = Established, up for ...`. But on the BIG-IP there are also `BGP connection is non shared network` messages.

I tried to ping the Kubernetes master node from the BIG-IP and got a correct response; pinging the BIG-IP from the master node also worked. So I have no idea why Calico cannot advertise routes.

I used this procedure (https://support.f5.com/csp/article/K14436300). Our BIG-IP version is 14.1.0, and the k8s-bigip-ctlr version is 1.10.0, with Kubernetes 1.13.1 and Calico v3.7.5.

Does anyone have an idea what I should try or check?