Technical Articles
F5 SMEs share good practice.
Chris_Zhang
F5 Employee

Introduction

The F5 Container Ingress Services (CIS) can be integrated with the NGINX Plus Ingress Controllers (NIC) within a Kubernetes (k8s) environment.

The benefit is getting the best of both worlds: the BIG-IP provides comprehensive L4-L7 security services, while NGINX Plus serves as the de facto standard ingress solution for microservices.

This architecture is depicted below.

[Architecture diagram]

The integration is made fluid via the CIS, a k8s pod that listens to events in the cluster and dynamically populates the BIG-IP pool pointing to the NICs as they scale.

A few components need to be stitched together to support this integration, each of which is discussed in detail in the following sections.

NGINX Plus Ingress Controller

Follow this guide ( https://docs.nginx.com/nginx-ingress-controller/installation/building-ingress-controller-image/ ) to build the NIC image.
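As a minimal sketch of that build, assuming your NGINX Plus certificate and key are in place per the guide and that abgmbh.azurecr.io (the registry used in the manifests below) is your private registry; the exact make targets may differ by release, so consult the linked guide:

# Clone the NGINX Ingress Controller sources and build/push the Plus image.
git clone https://github.com/nginxinc/kubernetes-ingress.git
cd kubernetes-ingress
# DockerfileForPlus expects your NGINX Plus repo cert/key in the repo root.
make DOCKERFILE=DockerfileForPlus PREFIX=abgmbh.azurecr.io/nginx-plus-ingress TAG=edge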

The NIC can be deployed using the manifests either as a DaemonSet or as a Deployment. See this guide ( https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/ ).

A sample manifest deploying the NIC as a Deployment (exposed later via a Service) is shown below:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
      #annotations:
      #  prometheus.io/scrape: "true"
      #  prometheus.io/port: "9113"
    spec:
      serviceAccountName: nginx-ingress
      imagePullSecrets:
      - name: abgmbh.azurecr.io
      containers:
      - image: abgmbh.azurecr.io/nginx-plus-ingress:edge
        name: nginx-plus-ingress
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        #- name: prometheus
        #  containerPort: 9113
        securityContext:
          allowPrivilegeEscalation: true
          runAsUser: 101 #nginx
          capabilities:
            drop:
            - ALL
            add:
            - NET_BIND_SERVICE
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        args:
        - -nginx-plus
        - -nginx-configmaps=$(POD_NAMESPACE)/nginx-config
        - -default-server-tls-secret=$(POD_NAMESPACE)/default-server-secret
        - -ingress-class=sock-shop
        #- -v=3 # Enables extensive logging. Useful for troubleshooting.
        #- -report-ingress-status
        #- -external-service=nginx-ingress
        #- -enable-leader-election
        #- -enable-prometheus-metrics

Notice the ‘-ingress-class=sock-shop’ argument: it means the NIC will only handle Ingress resources annotated with ‘sock-shop’. Without this argument, the NIC becomes the default for all Ingress resources created.

Below is the counterpart Ingress with the ‘sock-shop’ annotation.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: sock-shop-ingress
  annotations:
    kubernetes.io/ingress.class: "sock-shop"
spec:
  tls:
  - hosts:
    - socks.ab.gmbh
    secretName: wildcard.ab.gmbh
  rules:
  - host: socks.ab.gmbh
    http:
      paths:
      - path: /
        backend:
          serviceName: front-end
          servicePort: 80

This Ingress says: if the hostname is socks.ab.gmbh and the path is ‘/’, send traffic to a Service named ‘front-end’, which is part of the socks application itself.
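To verify the wiring, a quick sketch (the filenames here are hypothetical; adjust to wherever you saved the manifests above):

kubectl apply -f nginx-ingress-deployment.yaml
kubectl apply -f sock-shop-ingress.yaml
kubectl get pods -n nginx-ingress              # expect 3 NIC replicas
kubectl describe ingress sock-shop-ingress     # confirms host/path -> front-end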

The above concludes the Ingress configuration with the NIC.

[Screenshot]

F5 Container Ingress Services

The next step is to leverage the CIS to dynamically populate the BIG-IP pool with the NIC addresses.

Follow this guide ( https://clouddocs.f5.com/containers/v2/kubernetes/kctlr-app-install.html ) to deploy the CIS.
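As a sketch of the prerequisites from that guide, create the Secret and Service Account that the Deployment below references:

# Secret 'bigip-login' feeds the BIGIP_USERNAME/BIGIP_PASSWORD env vars below.
kubectl create secret generic bigip-login --namespace kube-system \
    --from-literal=username=admin --from-literal=password=<password>
# Service Account the CIS runs under (bind it to a Cluster Role per the guide).
kubectl create serviceaccount bigip-ctlr --namespace kube-system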

A sample Deployment file is shown below:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: k8s-bigip-ctlr-deployment
  namespace: kube-system
spec:
  # DO NOT INCREASE REPLICA COUNT
  replicas: 1
  template:
    metadata:
      name: k8s-bigip-ctlr
      labels:
        app: k8s-bigip-ctlr
    spec:
      # Name of the Service Account bound to a Cluster Role with the required
      # permissions
      serviceAccountName: bigip-ctlr
      containers:
        - name: k8s-bigip-ctlr
          image: "f5networks/k8s-bigip-ctlr"
          env:
            - name: BIGIP_USERNAME
              valueFrom:
                secretKeyRef:
                  # Replace with the name of the Secret containing your login
                  # credentials
                  name: bigip-login
                  key: username
            - name: BIGIP_PASSWORD
              valueFrom:
                secretKeyRef:
                  # Replace with the name of the Secret containing your login
                  # credentials
                  name: bigip-login
                  key: password
          command: ["/app/bin/k8s-bigip-ctlr"]
          args: [
            # See the k8s-bigip-ctlr documentation for information about
            # all config options
            # https://clouddocs.f5.com/products/connectors/k8s-bigip-ctlr/latest
            "--bigip-username=$(BIGIP_USERNAME)",
            "--bigip-password=$(BIGIP_PASSWORD)",
            "--bigip-url=https://x.x.x.x:8443",
            "--bigip-partition=k8s",
            "--pool-member-type=cluster",
            "--agent=as3",
            "--manage-ingress=false",
            "--insecure=true",
            "--as3-validation=true",
            "--node-poll-interval=30",
            "--verify-interval=30",
            "--log-level=INFO"
            ]
      imagePullSecrets:
        # Secret that gives access to a private docker registry
        - name: f5-docker-images
        # Secret containing the BIG-IP system login credentials
        - name: bigip-login

Notice the arguments shown below. They tell the CIS to consume an AS3 declaration to configure the BIG-IP. According to Product Management, CCCL (Common Controller Core Library), previously used to orchestrate the F5 BIG-IP, is being removed this sprint for the CIS 2.0 release.

'--manage-ingress=false' means the CIS does nothing with Ingress resources defined within k8s; as far as k8s is concerned, NGINX Plus is the Ingress Controller, not the CIS.

The CIS will create a partition named k8s_AS3 on the BIG-IP; this is used to hold the L4-L7 configuration derived from the AS3 declaration.

It is also best practice to manually create a partition named 'k8s' (in our example), where networking info will be stored (e.g., ARP, FDB); a quick sketch follows the argument listing below.

           "--bigip-url=https://x.x.x.x:8443",
           "--bigip-partition=k8s",
           "--pool-member-type=cluster",
           "--agent=as3",
           "--manage-ingress=false",
           "--insecure=true",
           "--as3-validation=true",

To apply AS3, the declaration is embedded within a ConfigMap, which the CIS watches (matching on the f5type and as3 labels) and pushes to the BIG-IP.

kind: ConfigMap
apiVersion: v1
metadata:
  name: as3-template
  namespace: kube-system
  labels:
    f5type: virtual-server
    as3: "true"
data:
  template: |
    {
      "class": "AS3",
      "action": "deploy",
      "persist": true,
      "declaration": {
        "class": "ADC",
        "id": "1847a369-5a25-4d1b-8cad-5740988d4423",
        "schemaVersion": "3.16.0",
        "Nginx_IC": {
          "class": "Tenant",
          "Nginx_IC_vs": {
            "class": "Application",
            "template": "https",
            "serviceMain": {
              "class": "Service_HTTPS",
              "virtualAddresses": [
                "10.1.0.14"
              ],
              "virtualPort": 443,
              "redirect80": false,
              "serverTLS": {
                "bigip": "/Common/clientssl"
              },
              "clientTLS": {
                "bigip": "/Common/serverssl"
              },
              "pool": "Nginx_IC_pool"
            },
            "Nginx_IC_pool": {
              "class": "Pool",
              "monitors": [
                "https"
              ],
              "members": [
                {
                  "servicePort": 443,
                  "shareNodes": true,
                  "serverAddresses": []
                }
              ]
            }
          }
        }
      }
    }

This declaration tells the BIG-IP to create a tenant called ‘Nginx_IC’, an application called ‘Nginx_IC_vs’ (whose virtual server object is ‘serviceMain’) and a pool named ‘Nginx_IC_pool’. The CIS will update serverAddresses with the NIC addresses dynamically.
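Assuming the ConfigMap above is saved as as3-template.yaml (a hypothetical filename), apply it and watch the CIS logs for the AS3 result:

kubectl apply -f as3-template.yaml
# Look for the AS3 POST and the BIG-IP's response in the CIS logs.
kubectl logs -n kube-system deploy/k8s-bigip-ctlr-deployment | grep -i as3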

Now, create a Service to expose the NICs.

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
  labels:
    cis.f5.com/as3-tenant: Nginx_IC
    cis.f5.com/as3-app: Nginx_IC_vs
    cis.f5.com/as3-pool: Nginx_IC_pool
spec:
  type: ClusterIP
  ports:
  - port: 443
    targetPort: 443
    protocol: TCP
    name: https
  selector:
    app: nginx-ingress

Notice the labels: they match the AS3 declaration, which allows the CIS to populate the NICs' addresses into the correct pool. Also notice the kind of the manifest, ‘Service’; this means only a Service is created, not an Ingress, as far as k8s is concerned.
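To sanity-check the selector, a quick sketch; the endpoint IPs listed here are what the CIS pushes into serverAddresses:

kubectl get endpoints nginx-ingress -n nginx-ingress   # NIC pod IPs
kubectl get pods -n nginx-ingress -o wide              # should list the same IPs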

On the BIG-IP, the following should be created.

[Screenshot: BIG-IP objects created by the CIS]

The end product is below.

[Screenshot: end result]

Please note that this article focuses solely on the control plane, that is, how to get the CIS to populate the BIG-IP with the NICs' addresses.

The specific mechanism for delivering packets from the BIG-IP to the NICs on the data plane is not discussed, as it is decoupled from the control plane. For data plane specifics, please take a look here ( https://clouddocs.f5.com/containers/v2/ ).

Hope this article helps to lift the veil on some integration mysteries.

Comments
kunalpuriii
Altocumulus

Hello Lief Zimmerman

 

Thanks for posting this document.

 

I am working on the same topology; however, I have not gotten it working as of now.

 

Would it be possible for you to help?

 

Thanks

Kunal

LiefZimmerman
Community Manager

I recommend posting your specific question in our questions section. That is the most likely way to get the most eyes on your problem.

You might consider adding the URL of this article as a reference in your question, and you may even @mention the author of this article () in your question. Often, someone in the community (either Chris or another) will step in and offer guidance. Failing that, please reach out to your account management team.

 

Hope that helps.

Chris_Zhang
F5 Employee

Hey Kunal,

 

Regarding user/pass, you need to create a Secret within k8s and reference that Secret as variables in the YAML file. The references are already in place, so please create a Secret per step 3 of this article ( https://clouddocs.f5.com/containers/v2/kubernetes/kctlr-app-install.html#kctlr-initial-setup-bigip ).

 

For "--bigip-url=<ip_address-or-hostname>", if your BIG-IP has a single interface, the management by default is on port 8443. Use the address that you use to administer the appliance.

 

You do not need to add anything to the ConfigMap as related to your question. If you follow the referenced article, all the prerequisites should be setup and ready to go.

 

--insecure=true means the CIS will not validate the certificate presented by the BIG-IP. All traffic is still SSL encrypted.

 

Install a recent version of f5-appsvcs (AS3) on the BIG-IP; otherwise it won't understand the AS3 embedded within the ConfigMap.
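As a quick check (a sketch; substitute your management address and credentials), query the AS3 info endpoint, which returns the installed version if f5-appsvcs is present:

curl -sk -u admin:<password> https://x.x.x.x:8443/mgmt/shared/appsvcs/info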

 

Once the CIS is able to communicate with the BIG-IP, the AS3 within the ConfigMap will set up everything in the BIG-IP. You do not have to configure anything manually inside the BIG-IP.

 

The integration is meant for the NGINX Plus Ingress Controller. The open source NGINX might work as well, but I have not tested it at all.

 

Thanks,

Chris

chongwp
Nimbostratus

My "k8s-bigip-ctlr" is showing this error in its log:

> 2020-03-20T11:40:39.934958194Z 2020/03/20 11:40:39 [ERROR] Error parsing ConfigMap kube-system_as3-template

Do I need a more recent version of f5-appsvcs on the BIG-IP?

 

Chris_Zhang
F5 Employee

This message is likely cosmetic, I have those messages as well. I will ask internally and see what is causing them.

kunalpuriii
Altocumulus

[Screenshot] I have the setup working now; the issue is with the AS3 template. I am only able to create a VIP with the name "serviceMain".

If I have multiple clusters connected to the F5, how will I be able to create multiple VIPs?

 

I have tried changing the name of the VIP but it's not working.

 

Can you suggest the correct AS3 config for a flexible VIP name?

 

Chris_Zhang
F5 Employee

With AS3, the name of the virtual server object is fixed to 'serviceMain', but you can put the name in 'Description'.
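For illustration only (the names here are made up), keep the fixed object name and carry a friendly name in 'label' or 'remark'; 'remark' surfaces as the Description on the BIG-IP:

"serviceMain": {
    "class": "Service_HTTPS",
    "label": "customer-a-vip",
    "remark": "Customer A ingress VIP",
    ...
}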

kunalpuriii
Altocumulus

Hello Chris, thanks for responding to the queries. In the above example, ClusterIP is used, which advertises the NGINX Ingress Controller IP to the F5. Can you please confirm what the data plane forwarding would be? Is it like below, or is it different?

F5 --> k8s node --> Service --> pod where NGINX is running

 

Thanks again for your help

Chris_Zhang
F5 Employee

When you use ClusterIP, the BIG-IP needs to be able to deliver traffic to that IP space. If you use Calico (BGP), that traffic is routed. If you use Flannel (VXLAN), that traffic is sent inside the VXLAN tunnel.

 

With BGP, the route table will have the next-hop set to the k8s nodes. Traffic is routed to k8s nodes and those nodes further route traffic to the NGINX IC pods.

 

With VXLAN, the BIG-IP establishes a tunnel with the k8s nodes at the other end. Traffic is sent inside the tunnel and arrives at the k8s nodes, where it is taken out of the tunnel and delivered to the NGINX IC pods.
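For the routed case, a sketch with hypothetical addresses: one static route per node's pod subnet, pointing at that node (this is what BGP automates for you):

# Pod subnet 10.1.2.0/24 lives on node 192.168.5.22 (example values).
tmsh create net route k8s-worker-2 network 10.1.2.0/24 gw 192.168.5.22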

kunalpuriii
Altocumulus

Hello  

 

I hope you are doing well.

 

Just wanted to check if this solution still works. I am trying to recreate the environment but it's not working; it's giving me the error below.

 

2020/06/08 20:47:18 [ERROR] [AS3] Response from BIG-IP: code: ERR_REQUEST_FAILED --- tenant:Nginx_IC --- message: declaration failed

2020/06/08 20:47:18 [ERROR] [AS3] Response from BIG-IP: code: 200 --- tenant:k8s-AS3_AS3 --- message: no change

 

I have tried this setup with CIS 2.0.0 and f5-appsvcs 3.20.0, and also with CIS 1.14.0 and f5-appsvcs 3.17.1. I am using the same working configuration from March, but I am getting the errors above. My configuration is below.

 

nginx SVC config

root@master-1:~# kubectl describe svc nginx-ingress2 -n nginx-ingress
Name:              nginx-ingress2
Namespace:         nginx-ingress
Labels:            cis.f5.com/as3-app=Nginx_vs
                   cis.f5.com/as3-pool=Nginx_IC_pool
                   cis.f5.com/as3-tenant=Nginx_IC
Annotations:       <none>
Selector:          app=nginx-ingress
Type:              ClusterIP
IP:                10.111.160.103
Port:              https 443/TCP
TargetPort:        443/TCP
Endpoints:         10.1.2.191:443
Session Affinity:  None
Events:            <none>

 

Configmap for CIS and F5 integration

root@master-1:~# kubectl describe configmap nginx-as3 -n kube-system
Name:         nginx-as3
Namespace:    kube-system
Labels:       as3=true
              f5type=virtual-server
Annotations:  <none>

Data
====
template:
----
{
  "class": "AS3",
  "action": "deploy",
  "persist": true,
  "declaration": {
    "class": "ADC",
    "schemaVersion": "3.13.0",
    "id": "1847a369-5a25-4d1b-8cad-5740988d4423",
    "label": "APP Template",
    "remark": "HTTP application",
    "Nginx_IC": {
      "class": "Tenant",
      "Nginx_IC_vs": {
        "class": "Application",
        "template": "generic",
        "app_80_vs": {
          "class": "Service_HTTP",
          "remark": "app",
          "virtualAddresses": [
            "10.165.36.141"
          ],
          "virtualPort": 80,
          "profileTCP": {
            "bigip": "/Common/f5-tcp-lan"
          },
          "pool": "Nginx_IC_pool"
        },
        "Nginx_IC_pool": {
          "class": "Pool",
          "members": [
            {
              "servicePort": 80,
              "shareNodes": true,
              "serverAddresses": []
            }
          ]
        }
      }
    }
  }
}
Events: <none>

 

CIS:

root@master-1:~# kubectl describe pod k8s-bigip-ctlr-deployment-6759c46587-tdk79 -n kube-system
Name:          k8s-bigip-ctlr-deployment-6759c46587-tdk79
Namespace:     kube-system
Priority:      0
Node:          worker-2/192.168.5.22
Start Time:    Mon, 08 Jun 2020 20:40:16 +0000
Labels:        app=k8s-bigip-ctlr
               pod-template-hash=6759c46587
Annotations:   <none>
Status:        Running
IP:            10.1.2.192
IPs:
  IP:          10.1.2.192
Controlled By: ReplicaSet/k8s-bigip-ctlr-deployment-6759c46587
Containers:
  k8s-bigip-ctlr:
    Container ID:  docker://4f4bfd89700af786bfa3920e5287160003a4500370c4e133c159cc33c62ed984
    Image:         f5networks/k8s-bigip-ctlr:1.14.0
    Image ID:      docker-pullable://f5networks/k8s-bigip-ctlr@sha256:25bdfc947ed4cdd172a68e37c51dbaa8ca87fcbc4d894622b42a260755a2bf68
    Port:          <none>
    Host Port:     <none>
    Command:
      /app/bin/k8s-bigip-ctlr
    Args:
      --bigip-username=$(BIGIP_USERNAME)
      --bigip-password=$(BIGIP_PASSWORD)
      --bigip-url=https://192.168.5.210
      --bigip-partition=k8s-AS3
      --pool-member-type=cluster
      --agent=as3
      --manage-ingress=false
      --insecure=true
      --as3-validation=true
      --node-poll-interval=30
      --verify-interval=30
      --log-level=INFO
    State:          Running
      Started:      Mon, 08 Jun 2020 20:40:20 +0000
    Ready:          True
    Restart Count:  0
    Environment:
      BIGIP_USERNAME:  <set to the key 'username' in secret 'bigip-login'>  Optional: false
      BIGIP_PASSWORD:  <set to the key 'password' in secret 'bigip-login'>  Optional: false
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from bigip-ctlr-token-r6rvn (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  bigip-ctlr-token-r6rvn:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  bigip-ctlr-token-r6rvn
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age        From               Message
  ----    ------     ----       ----               -------
  Normal  Scheduled  <unknown>  default-scheduler  Successfully assigned kube-system/k8s-bigip-ctlr-deployment-6759c46587-tdk79 to worker-2
  Normal  Pulling    17m        kubelet, worker-2  Pulling image "f5networks/k8s-bigip-ctlr:1.14.0"
  Normal  Pulled     17m        kubelet, worker-2  Successfully pulled image "f5networks/k8s-bigip-ctlr:1.14.0"
  Normal  Created    17m        kubelet, worker-2  Created container k8s-bigip-ctlr
  Normal  Started    17m        kubelet, worker-2  Started container k8s-bigip-ctlr

 

Any help is greatly appreciated.

 

Thanks

Kunal

Chris_Zhang
F5 Employee

Hey Kunal,

 

I just did a test with the latest CIS (e.g., 2.0) by changing the deployment file with the following, and everything seems to be working.

image: "f5networks/k8s-bigip-ctlr:latest"

Please take a look at this folder in my repository ( https://gitlab.com/abgmbh/kitchen_sink/-/tree/master/k8s%20and%20Nginx/Kubernetes_IC/F5_Container_In... ).

 

On a side note, can you please use code quote for code or logs in future replies? It might be much easier to read as the logs are quite long.

 

Thanks,

Chris

Ali_M
Nimbostratus

Hey  ,

Thanks for the article, it's helpful. I'm trying to run through this setup and unfortunately I'm running into problems. I want to preface this by saying I am using the open source version of NGINX. I know that this tutorial is based on nginx-plus, but I don't see any obvious reason why this wouldn't work with the open source version. Please correct me if I'm wrong.

So I am able to get to the point where the BIG-IP controller on my Kubernetes cluster creates a virtual server, a node pool and my nodes. The nodes on the F5 are created using the NGINX pods' cluster IPs. The nodes are shown as "down" -- as expected, since in your screenshot they're also down and the F5 doesn't know how to ping the pods.

When trying to hit my website via web browser, I get "ERR_CONNECTION_TIMED_OUT".

Here are the configuration args I am setting on my big-ip-controller:

- --credentials-directory=/var/run/secrets/credentials
- --bigip-url=xxxxx
- --pool-member-type=cluster
- --bigip-partition=$(BIGIP_PARTITION)
- --insecure=true
- --manage-ingress=false
- --as3-validation=true
- --manage-configmaps=true
- --log-as3-response=true

My ConfigMap for the AS3 declaration is the same as yours except that I changed the client certificate. I have tried the original ConfigMap as well; no luck there.

Here is what my NGINX Service looks like:

apiVersion: v1
kind: Service
metadata:
  name: ali-service
  namespace: xxx
  labels:
    cis.f5.com/as3-tenant: Nginx_IC
    cis.f5.com/as3-app: Nginx_IC_vs
    cis.f5.com/as3-pool: Nginx_IC_pool
spec:
  type: ClusterIP
  ports:
  - port: 443
    targetPort: 443
    protocol: TCP
    name: https
  selector:
    app: nginx-ingress
    component: controller
    release: nginx-ingress

And finally, I did try making my service a `NodePort` (instead of ClusterIP) to make sure that I can reach my service via NGINX. This works. I've reverted the change back to ClusterIP.

I'm not exactly sure what I'm doing wrong and I would appreciate some guidance on what to check.

My company does have a F5 support plan so if you think a ticket would be better, I can get one going. Alternatively, if you have a moment to respond here or on a phone call that would work for me as well.

Thanks Chris!

Edit: One thing I want to point out is that my nodes on the F5 are created in the "Common" partition, not in the "NGINX_IC" partition. It's hard to tell whether they should be created in the NGINX_IC partition. Either way, the node pool is referencing the nodes, but I don't know if this is contributing to my problem.

Edit 2: I think I understand where I'm going wrong, now that I think of it. My cluster CNI is flannel and I'm using host-gw. When running the big-ip controller in "cluster" mode, you're essentially integrating the F5 within the overlay network, which means I need to be running VXLAN. Can I confirm with you, Chris, that host-gw won't work? I think this is an obvious no because then the big-ip controller adds your actual Kubernetes VM nodes to the F5 node pool, not the pods of the nginx controller. I've tried setting the controller mode to "nodeport" which also doesn't seem to work. Thanks!

Chris_Zhang
F5 Employee

Hey Ali,

 

When you say host-gw, are you referring to adding static routes on the F5, with the pod IP space pointing to the Kubernetes nodes as gateways? That is routing similar to what BGP does, but I have not tested it myself. Are you able to reach the pods from within the F5? You might also need to add routes on the Kubernetes nodes for returning traffic.

 

From a routing perspective it sounds like a feasible solution, but I don't know if there are intricacies in what Calico as a CNI does with regard to routing that we don't know about.

 

Regarding the NGINX_IC partition, my understanding is that it's all done by AS3 via the ConfigMap. A partition is only an administrative function; creating nodes in Common should not affect data traffic.

 

Thanks,

Chris

Ali_M
Nimbostratus

Hey @Chris, thanks for the response!

 

host-gw is a mode in flannel. You're right -- I believe it uses the machine's routing table to move traffic in and out of the overlay network. Unfortunately I cannot reach pods from within the F5; that is a limitation, and it seems to be supported only with Flannel VXLAN (unless I do static routes as you mentioned, but that seems very complicated).

 

Thanks for clarifying the partition question. Do you know how to specify in the AS3 declaration that the nodes should be created in a particular partition? I'd like to do it for organizational purposes.

 

Thanks again!

 

 

Chris_Zhang
F5 Employee

Use Common and Shared, see below. The objects will be created in the /Common/Shared location.

"Common": {
            "class": "Tenant",
            "enable": true,
            "Shared": {
                "class": "Application",
                "template": "shared",
...

https://gitlab.com/abgmbh/kitchen_sink/-/blob/master/AS3/Common.json#L10

Ali_M
Nimbostratus

Thanks  . Three other questions for you:

 

  1. How can I do SSL offloading? One article I found online suggested I set my serviceMain `class` to `Service_TCP`. That didn't seem to work.
  2. My NGINX Ingress has `externalTrafficPolicy: "Local"` set so that the source IP is preserved, but I still don't see it working. How do I get the F5 to set `x-forwarded-for`?
  3. Do you have a handy reference guide where I can find out what other things I can do using AS3? It seems like the AS3 reference guide doesn't closely follow what we are doing here. In other words, it looks like AS3 for the Kubernetes setup is a bit different from your regular AS3 declarations. Correct me if I'm wrong.

 

Thanks!

 

Edit: Figured out how to get SSL passthrough working manually. Had to change my VS to Performance (L4). Time to figure out how to do it via AS3 🙂

Chris_Zhang
F5 Employee

SSL offload would need an SSL cert on the F5. - https://gitlab.com/abgmbh/kitchen_sink/-/blob/master/k8s%20and%20Nginx/Kubernetes_IC/F5_Container_In...

 

SSL passthrough is a straight up L4 VIP. - https://clouddocs.f5.com/products/extensions/f5-appsvcs-extension/latest/declarations/non-http-servi... (without the ICAP)

 

With XFF, you can either use an HTTP profile ( https://clouddocs.f5.com/products/extensions/f5-appsvcs-extension/latest/refguide/schema-reference.h... ) to insert the header, or use an iRule and have AS3 attach that iRule.
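As a sketch of the HTTP profile option (the object names here are illustrative), attach an HTTP_Profile with xForwardedFor enabled to the virtual:

"serviceMain": {
    "class": "Service_HTTPS",
    "profileHTTP": { "use": "xff_http" },
    ...
},
"xff_http": {
    "class": "HTTP_Profile",
    "xForwardedFor": true
}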

 

For use cases, please take a look at this ( https://clouddocs.f5.com/products/extensions/f5-appsvcs-extension/latest/declarations/ ).

 

The AS3 should be the same for the Kubernetes setup as well; the BIG-IP processes it regardless of where the declaration comes from.

 

To assist with writing AS3 declarations, you might want to use Visual Studio Code and take advantage of the syntax-checking feature. ( https://clouddocs.f5.com/products/extensions/f5-appsvcs-extension/latest/userguide/validate.html )

 

 

kunalpuriii
Altocumulus

Thanks for all your help so far.

I have a question regarding the AS3 extension. Currently we have tested this integration by installing the AS3 extension under iApps > Package Management LX on the F5 (is this a prerequisite or a definite requirement?).

 

We have 10+ opcos and vCMP is deployed for all of them. Do I need to install the AS3 extension on all the vCMP guests to have AS3 support? Is there any centralized design, maybe using BIG-IQ or another centralized server?

 

Thanks

Kunal Puri

Eric_Chen
F5 Employee

 AS3 does need to be installed on each device. The BIG-IP Controller for Kubernetes talks directly to the BIG-IP device.

kunalpuriii
Altocumulus

Thanks for your response. I actually tested it in the past and have tested it again: if we don't have f5-appsvcs installed on the F5, the F5 does not get the AS3 update, and thus new pool members are not updated.

kunalpuriii
Altocumulus

Also, is there any limitation on the number of F5s specified via --bigip-url=https://x.x.x.x:8443?

Can we update multiple F5s (20) from the same CIS by specifying different bigip-url values in the same CIS pod?

Chris_Zhang
F5 Employee

Just to clarify, are you looking at injecting the same configs (e.g., VIP, pool) into all 20 F5s?

kunalpuriii
Altocumulus

Yes, not 20, but multiple F5s, 8 HA pairs to be specific. Yes, the same configuration.

Chris_Zhang
F5 Employee

One CIS can only speak to one BIG-IP, but you can have multiple CIS instances, each talking to its associated BIG-IP.

Nikoolayy1

Why does the pool have 5 NGINX controllers? Shouldn't it be 3, given "replicas: 3"?

Chris_Zhang
F5 Employee

Good eye, Nikoolayy1! Yes, the picture should show 3.

Nikoolayy1

Thanks, I was just studying for my F5 cloud expert cert and was curious if I was missing something.
