openshift
6 Topics

Knowledge sharing: Containers, Kubernetes, OpenShift, F5 Container Connector, NGINX Ingress
For anyone interested in the free training "F5 Container Connector for Kubernetes" or "F5 OpenShift Container Integration" at LearnF5: there is plenty of information about installing NGINX in Kubernetes, but far less about the F5 Container Connector/Container Ingress Services:

https://docs.nginx.com/nginx-ingress-controller/f5-ingresslink/
https://www.nginx.com/products/nginx-ingress-controller/
https://community.f5.com/t5/technical-articles/better-together-f5-container-ingress-services-and-nginx-plus/ta-p/280471

F5 DevCentral also has a YouTube channel with useful info: https://www.youtube.com/c/devcentral

If you don't yet have good knowledge of containers and Kubernetes, first check the links below. For Docker containers you will find a lot of good training on YouTube, for example:

you need to learn Docker RIGHT NOW!! // Docker Containers 101 - YouTube
Docker Tutorial for Beginners [FULL COURSE in 3 Hours] - YouTube
Docker overview | Docker Documentation

The same is true for Kubernetes, which also has a free test lab on its site:

you need to learn Kubernetes RIGHT NOW!! - YouTube
Learn Kubernetes Basics | Kubernetes

Red Hat offers some free training, and IBM provides free labs for containers, Kubernetes, OpenShift, etc.:

Training and Certification (redhat.com)
IBM CloudLabs: Free, Interactive Kubernetes Tutorials | IBM
Red Hat OpenShift Tutorials | IBM
An example of an AS3 REST API call to create a GSLB configuration on BIG-IP

Hi everyone, below is an example of an AS3 REST API call that creates a simple GSLB configuration on BIG-IP devices. The main purpose of this article is to share this configuration with others. Of course, on different sites (GitHub, etc.) you can find various bits of data, but I think this example is useful because it contains everything needed to create the different GSLB objects in a single declaration: Data Centers (DCs), Servers, Virtual Servers (VSs), Wide IPs, pools and more.

```json
{
  "class": "AS3",
  "declaration": {
    "class": "ADC",
    "schemaVersion": "3.21.0",
    "id": "GSLB_test",
    "Common": {
      "class": "Tenant",
      "Shared": {
        "class": "Application",
        "template": "shared",
        "DC1": { "class": "GSLB_Data_Center" },
        "DC2": { "class": "GSLB_Data_Center" },
        "device01": {
          "class": "GSLB_Server",
          "dataCenter": { "use": "DC1" },
          "virtualServers": [
            {
              "name": "/ocp/Shared/ingress_vs_1_443",
              "address": "A.B.C.D",
              "port": 443,
              "monitors": [ { "bigip": "/Common/custom_icmp_2" } ]
            }
          ],
          "devices": [ { "address": "A.B.C.D" } ]
        },
        "device02": {
          "class": "GSLB_Server",
          "dataCenter": { "use": "DC2" },
          "virtualServers": [
            {
              "name": "/ocp2/Shared/ingress_vs_2_443",
              "address": "A.B.C.D",
              "port": 443,
              "monitors": [ { "bigip": "/Common/custom_icmp_2" } ]
            }
          ],
          "devices": [ { "address": "A.B.C.D" } ]
        },
        "dns_listener": {
          "class": "Service_UDP",
          "virtualPort": 53,
          "virtualAddresses": [ "A.B.C.D" ],
          "profileUDP": { "use": "custom_udp" },
          "profileDNS": { "use": "custom_dns" }
        },
        "custom_dns": {
          "class": "DNS_Profile",
          "remark": "DNS Profile test",
          "parentProfile": { "bigip": "/Common/dns" }
        },
        "custom_udp": {
          "class": "UDP_Profile",
          "datagramLoadBalancing": true
        },
        "testpage_local": {
          "class": "GSLB_Domain",
          "domainName": "testpage.local",
          "resourceRecordType": "A",
          "pools": [ { "use": "testpage_pool" } ]
        },
        "testpage_pool": {
          "class": "GSLB_Pool",
          "resourceRecordType": "A",
          "members": [
            {
              "server": { "use": "/Common/Shared/device01" },
              "virtualServer": "/ocp/Shared/ingress_vs_1_443"
            },
            {
              "server": { "use": "/Common/Shared/device02" },
              "virtualServer": "/ocp2/Shared/ingress_vs_2_443"
            }
          ]
        }
      }
    }
  }
}
```

P.S. The AS3 schema reference guide was very helpful: https://clouddocs.f5.com/products/extensions/f5-appsvcs-extension/latest/refguide/schema-reference.html
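A declaration like this is normally pushed to the AS3 `declare` endpoint on the BIG-IP. A minimal sketch, assuming AS3 is installed and the declaration is saved as `gslb.json` (the hostname and credentials are placeholders, not from this article):

```shell
# Sketch: POST the declaration to the AS3 endpoint (host/credentials are placeholders)
curl -sk -u admin:admin \
  -H "Content-Type: application/json" \
  -X POST "https://bigip.example.com/mgmt/shared/appsvcs/declare" \
  -d @gslb.json

# A GET against the same endpoint returns the currently deployed declaration
curl -sk -u admin:admin "https://bigip.example.com/mgmt/shared/appsvcs/declare"
```

Note that AS3 is declarative: re-POSTing the full declaration overwrites the tenant's previous state rather than patching it.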
openshift multi cluster CIS HA (Solved)

I am encountering a weird issue while configuring a highly available CIS 2.19 on OpenShift 4.16. The primary CIS hangs in a loop, printing:

[WARNING] AutoMonitor value is not defined or not supported. Defaulting to none

If I switch off the primary and start the secondary, the secondary works as it should and creates the objects on the F5 BIG-IP VE for the routes defined on the secondary cluster. Attached are the deployment and ConfigMap YAMLs. I could not find anything about AutoMonitor, so I have no idea what it is. If I configure the primary cluster as standalone, multi-cluster works fine.
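For comparison, in CIS multi-cluster HA mode the primary/secondary relationship is driven by the extended ConfigMap. A rough sketch of its expected shape, with every name, address, and secret below made up (check it against the attached YAMLs and the CIS multi-cluster documentation for your version):

```yaml
# Sketch of a CIS multi-cluster HA extended ConfigMap; all values are placeholders
apiVersion: v1
kind: ConfigMap
metadata:
  name: extended-spec-config
  namespace: kube-system
data:
  extendedSpec: |
    mode: default
    highAvailabilityCIS:
      primaryEndPoint: http://10.1.1.1:8001   # health endpoint of the primary CIS
      probeInterval: 30
      retryInterval: 3
      primaryCluster:
        clusterName: cluster1
        secret: default/kubeconfig1
      secondaryCluster:
        clusterName: cluster2
        secret: default/kubeconfig2
```

Since standalone mode works, it may be worth diffing the primary's deployment args and extendedSpec against the secondary's to see which field triggers the AutoMonitor warning.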
Health Monitor unable to connect to OpenShift Router

Hi, we have an F5 VS routing traffic to a service behind the OpenShift Router (we are not using F5 CIS). The OpenShift Route is configured as TLS passthrough, and I want to re-encrypt TLS at the F5. With a TLS passthrough configuration, the OpenShift Router determines the route from the hostname in the TLS Client Hello. So I have an OCP Route with hostname "my-tls-passthrough-service.com", and an F5 VS with hostname "my-f5vs.com" and a pool with a single member pointing to the OCP Router IP on port 443. I have configured client and server SSL profiles, and in the server SSL profile I have set the "Server Name" attribute to "my-tls-passthrough-service.com". Everything works as expected: requests reach the service through the F5. The problem is the health monitor. A generic HTTPS monitor doesn't help, as it checks the status of the OCP Router rather than the service behind it. But when I add a server SSL profile to the health monitor, the pool member is marked down and the local traffic log shows "Unable to connect". Can you please help? Without a health monitor the setup is useless.
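One thing worth verifying is whether the monitor's probe actually sends SNI and a matching Host header, since a passthrough router closes connections whose Client Hello hostname matches no route. A hedged tmsh sketch of the kind of monitor that is usually needed (object names here are invented, and ssl-profile support on HTTPS monitors depends on your TMOS version, so verify before relying on it):

```
# tmsh sketch; names are placeholders, ssl-profile on https monitors is version-dependent
create ltm profile server-ssl serverssl_sni \
    defaults-from serverssl \
    server-name my-tls-passthrough-service.com

create ltm monitor https mon_ocp_route \
    defaults-from https \
    ssl-profile serverssl_sni \
    send "GET / HTTP/1.1\r\nHost: my-tls-passthrough-service.com\r\nConnection: Close\r\n\r\n" \
    recv "HTTP/1."
```

A packet capture of the monitor traffic (comparing its Client Hello against one from the working data path) should confirm whether SNI is the missing piece.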
ha cis multi cluster Openshift route creation (Solved)

I would like to verify that, when creating a route in an OpenShift multi-cluster HA CIS environment, the endpoints of a service on the secondary cluster are added as pool members automatically. At first I had added the annotation below:

```yaml
virtual-server.f5.com/multiClusterServices: |
  [
    {
      "clusterName": "openshift-engineering-02",
      "service": "tea-svc",
      "namespace": "cafe",
      "servicePort": 8080,
      "weight": 100
    }
  ]
```

However, I saw that creating routes without this annotation still adds the pods of the service with the same name and in the same namespace on the secondary cluster. Is this annotation not required for an HA CIS multi-cluster application? Does HA CIS always add the pods on the secondary cluster as pool members if they belong to the same service and namespace as on the primary cluster? And does the same hold if the secondary CIS becomes the active CIS? What about services on other external clusters? Is the virtual-server.f5.com/multiClusterServices annotation only required when the service or namespace does not match the names in the route manifest?
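For context, this is roughly how such an annotation sits on a full Route manifest. A sketch with assumed names (the host and route name are invented; tea-svc/cafe mirror the snippet above):

```yaml
# Sketch: Route carrying the multiClusterServices annotation (host/route name assumed)
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: tea-route
  namespace: cafe
  annotations:
    virtual-server.f5.com/multiClusterServices: |
      [
        {
          "clusterName": "openshift-engineering-02",
          "service": "tea-svc",
          "namespace": "cafe",
          "servicePort": 8080,
          "weight": 100
        }
      ]
spec:
  host: tea.example.com
  to:
    kind: Service
    name: tea-svc
  port:
    targetPort: 8080
```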
After installing CIS in a test environment and preparing to install it in a new production environment, I wonder whether there will also be a Container Egress Service (CES). It is very easy to set a gateway for selected namespaces with AdminPolicyBasedExternalRoute in OpenShift; see F5 BIG-IP deployment with Red Hat OpenShift - keeping client IP addresses and egress flows | DevCentral. That solution does not scale well, however, when many namespace-to-egress-IP mappings are desired. A nice solution would be a CES that watches the creation and deletion of pods in selected namespaces. It could then maintain address lists with the pod IP addresses on the F5 LTM, and forwarding IP virtual servers would use these address lists to match pod IP addresses to an egress IP defined in a SNAT pool. The creation and deletion of the forwarding IP virtual servers and address lists could also be managed by the CES. A possible issue is that a container in a pod can start network connections before the forwarding IP virtual server accepts the new pod IP address, but this can easily be solved by adding an initContainer to the pod that tests network connectivity. This would be a good alternative to OpenShift egress IPs or Istio gateways. The reason for wanting this is to give applications on OpenShift their own egress IP address and stop using the node IP address for the pods' external network connections.
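For reference, the OVN-Kubernetes resource mentioned above looks roughly like this. A sketch with an assumed namespace label and next-hop address (both placeholders; the next hop would be a self-IP on the BIG-IP):

```yaml
# Sketch: route egress from labelled namespaces via an external gateway
# (the selector label and next-hop IP are placeholders)
apiVersion: k8s.ovn.org/v1
kind: AdminPolicyBasedExternalRoute
metadata:
  name: egress-via-bigip
spec:
  from:
    namespaceSelector:
      matchLabels:
        egress: bigip
  nextHops:
    static:
      - ip: "192.0.2.10"
```

This illustrates the scaling concern: one such policy plus matching BIG-IP objects is needed per namespace-to-egress-IP mapping, which is exactly the bookkeeping a CES could automate.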