OpenShift Service Mesh 2.x/3.x with F5 BIG-IP
Overview

OpenShift Service Mesh (OSSM) is Red Hat's packaged version of the Istio Service Mesh. Istio has the Ingress Gateway component to handle incoming traffic from outside of the cluster. Like other ingress controllers, it requires an external load balancer to get the traffic into the ingress PODs. This follows the canonical Kubernetes 2-tier arrangement for getting the traffic inside the cluster. This is depicted in the next figure:

This article covers how to configure OpenShift Service Mesh 2.x/3.x, expose it to the BIG-IP, and properly monitor its health, either using BIG-IP's Container Ingress Services (CIS) or without using it.

Exposing OSSM in BIG-IP - VIP configuration

It is a customer choice how to publish OSSM in the BIG-IP:

- A Layer 4 (L4) Virtual Server is simpler, and certificate management is done in OpenShift. The advantages of this mode are potentially higher performance and scalability, including connection mirroring, although mirroring is not usually used for HTTP traffic due to the typical retry mechanism of HTTP applications. Connection persistence is limited to the source IP. When using CIS, this is done with a TransportServer CR, which creates a fastL4-type virtual server in the BIG-IP.
- A Layer 7 (L7) Virtual Server requires additional configuration because TLS termination is required. In this mode, OpenShift can take advantage of BIG-IP's TLS off-loading capabilities and Hardware/Network/SaaS/Cloud HSM integrations, which store private keys securely, including FIPS-level support. Working at L7 also allows per-application traffic management, including header and payload rewrites, cookie persistence, etc. It also allows per-application multi-cluster. The above features are provided by the LTM (load balancing) module in BIG-IP. The possibilities are further expanded when using modules such as ASM (Advanced WAF) and Access (authentication). When using CIS, this is done with a VirtualServer CR, which creates a standard-type virtual server in the BIG-IP.

Exposing OSSM to BIG-IP - pool configuration

There are two options to expose Istio Ingress Gateways to BIG-IP:

- Using ClusterIP addresses. These are POD IPs, which are dynamic. This requires the use of CIS for discovering the IP addresses of the Ingress Gateway PODs.
- Using NodePort addresses. These are reachable from the outside network. When using these, it is not strictly necessary to use CIS, but it is recommended.

Exposing OpenShift Service Mesh using ClusterIP

This requires the use of CIS with the following parameters:

--orchestration-cni=ovn
--static-routing-mode=true

These make CIS create IP routes in the BIG-IP for reaching the POD IPs inside the OpenShift cluster. Please note that this only works if all the OpenShift nodes are directly connected in the same subnet as the BIG-IP. Additionally, the following parameter is required. It is the one that actually makes CIS populate the pool members with Cluster (POD) IPs:

--pool-member-type=cluster

It is not needed to change any configuration in OSSM because ClusterIP mode is the default mode in Istio Ingress Gateways.
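To put these parameters in context, the following is a minimal sketch of how the CIS Deployment arguments could look for this ClusterIP, OVN-Kubernetes static routing scenario. The namespace, partition name, credential handling, and image tag are placeholders and not taken from the article; adjust them to your environment and CIS version.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-bigip-ctlr
  namespace: kube-system              # placeholder namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: k8s-bigip-ctlr
  template:
    metadata:
      labels:
        app: k8s-bigip-ctlr
    spec:
      serviceAccountName: bigip-ctlr  # placeholder service account with the required RBAC
      containers:
      - name: k8s-bigip-ctlr
        image: f5networks/k8s-bigip-ctlr:latest   # pin a tested CIS version
        env:
        - name: BIGIP_USERNAME
          valueFrom:
            secretKeyRef:
              name: bigip-login
              key: username
        - name: BIGIP_PASSWORD
          valueFrom:
            secretKeyRef:
              name: bigip-login
              key: password
        args:
        - --bigip-url=https://10.0.0.10           # BIG-IP management address (placeholder)
        - --bigip-partition=openshift             # partition dedicated to this CIS instance
        - --bigip-username=$(BIGIP_USERNAME)
        - --bigip-password=$(BIGIP_PASSWORD)
        - --custom-resource-mode=true             # use VirtualServer/TransportServer CRs
        - --pool-member-type=cluster              # pool members are Cluster (POD) IPs
        - --orchestration-cni=ovn                 # OVNKubernetes CNI, as described above
        - --static-routing-mode=true              # CIS creates routes to the POD subnets
        - --insecure=true                         # lab only; use proper TLS verification in production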
Exposing OpenShift Service Mesh using NodePort

Using NodePort allows having known IP addresses for the Ingress Gateways, reachable from outside the cluster. Note that when using NodePort, only one Ingress Gateway replica will run per node.

The behavior of NodePort varies depending on the externalTrafficPolicy field:

- Using the Cluster value, any OpenShift node will accept traffic and will redirect the traffic to any node that has an Ingress Gateway POD, in a load balancing fashion. This is the easiest to set up, but because each request might go to a different node, health checking is not reliable (it is not known which POD goes down).
- Using the Local value, only the OpenShift nodes that have Ingress Gateway PODs will accept traffic. The traffic will be delivered to the local Ingress Gateway PODs, without further indirection. This is the recommended way when using NodePort because of its deterministic behaviour and therefore reliable health checking.

Next, it is described how to set up a NodePort using the Local externalTrafficPolicy. There are two options for configuring OSSM:

- Using the ServiceMeshControlPlane CR method: this is the default method in OSSM 2.x for backwards compatibility, but it doesn't allow fine-tuning the configuration of the proxy. See this OSSM 2.x link for further details. This method is deprecated and not available in OSSM 3.x.
- Using the Gateway injection method: this is the only method possible in OSSM 3.x and the current recommendation from Red Hat for OSSM 2.x. Using this method allows you to tune the proxy settings. In this article, it will be shown how this tuning is of special interest because, at present, the Ingress Gateway doesn't have good default values for allowing reliable health checking. These will be discussed in the Health Checking section.

When using the ServiceMeshControlPlane CR method, the above is configured as follows:

apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
[...]
spec:
  gateways:
    ingress:
      enabled: false
      runtime:
        deployment:
          replicas: 2
      service:
        externalTrafficPolicy: Local
        ports:
        - name: status-port
          nodePort: 30021
          port: 15021
          targetPort: 15021
        - name: http2
          nodePort: 30080
          port: 80
          targetPort: 8080
        - name: https
          nodePort: 30443
          port: 443
          targetPort: 8443
        type: NodePort

When using the Gateway injection method (recommended), the Service definition is manually created analogously to the ServiceMeshControlPlane CR:

apiVersion: v1
kind: Service
[...]
spec:
  externalTrafficPolicy: Local
  type: NodePort
  ports:
  - name: status-port
    nodePort: 30021
    port: 15021
    protocol: TCP
    targetPort: 15021
  - name: http2
    nodePort: 30080
    port: 80
    protocol: TCP
    targetPort: 8080
  - name: https
    nodePort: 30443
    port: 443
    protocol: TCP
    targetPort: 8443

The ports section is optional but recommended in order to have deterministic ports, and it is required when not using CIS (because static ports are needed). The nodePort values can be customised. When not using CIS, the pool members need to be configured manually in the BIG-IP.

It is typical in OpenShift to have the Ingress components (OpenShift Router or Istio) in dedicated infra nodes. See this Red Hat solution for details. When using the ServiceMeshControlPlane method, the configuration is as follows:

apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
[...]
spec:
  runtime:
    defaults:
      pod:
        nodeSelector:
          node-role.kubernetes.io/infra: ""
        tolerations:
        - effect: NoSchedule
          key: node-role.kubernetes.io/infra
          value: reserved
        - effect: NoExecute
          key: node-role.kubernetes.io/infra
          value: reserved

When using the Gateway injection method, the configuration is added to the Deployment file directly:

apiVersion: apps/v1
kind: Deployment
[...]
spec:
  template:
    metadata:
    spec:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/infra
        value: reserved
      - effect: NoExecute
        key: node-role.kubernetes.io/infra
        value: reserved

The configuration above is also a good practice when using CIS. Additionally, CIS by default adds all node IPs to the Service pool regardless of whether the externalTrafficPolicy is set to the Cluster or Local value. The health check will discard the nodes where there are no Ingress Gateways. The scope of the nodes discovered by CIS can be limited with the following parameter:

--node-label-selector

Health Checking and retries for the Ingress Gateway

Ingress Gateway Readiness

The Ingress Gateway has the following readinessProbe for Kubernetes' own health checking:

readinessProbe:
  failureThreshold: 30
  httpGet:
    path: /healthz/ready
    port: 15021
    scheme: HTTP
  initialDelaySeconds: 1
  periodSeconds: 2
  successThreshold: 1
  timeoutSeconds: 3

The failureThreshold value of 30 is considered way too large and only marks down the Ingress Gateway as not Ready after 90 seconds (tested to be failureThreshold * timeoutSeconds). In this article, it is recommended to mark down an Ingress Gateway no later than 16 seconds.

When using CIS, Kubernetes informs whenever a POD is not Ready and CIS automatically removes its associated pool member from the pool. In order to achieve the desired behaviour of marking down the Ingress Gateway before 16 seconds, it is required to change the default failureThreshold value in the Deployment file by adding the following snippet:

apiVersion: apps/v1
kind: Deployment
[...]
spec:
  template:
    metadata:
    spec:
      containers:
      - name: istio-proxy
        image: auto
        readinessProbe:
          failureThreshold: 5
          httpGet:
            path: /healthz/ready
            port: 15021
            scheme: HTTP
          initialDelaySeconds: 1
          periodSeconds: 2
          successThreshold: 1
          timeoutSeconds: 3

This keeps all other values equal and sets failureThreshold to 5, therefore marking down the Ingress Gateway after 15 seconds.

When not using CIS, an HTTP health check has to be configured manually in the BIG-IP. An example health check monitor is shown next:
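The following is a minimal tmsh sketch of such a monitor. The object name, Host header, and timer values are illustrative (chosen to be consistent with the 16-second recommendation above), and the alias destination assumes the status-port NodePort 30021 used in the Service definitions of this article:

tmsh create ltm monitor http ossm-ingress-ready {
    defaults-from http
    send "GET /healthz/ready HTTP/1.1\r\nHost: ingressgateway\r\nConnection: Close\r\n\r\n"
    recv "200"
    interval 5
    timeout 16
    destination *:30021
}

The monitor is then attached to the pool whose members are the infra node IPs with the application NodePorts (30080/30443), so that a node whose local Ingress Gateway is not Ready is taken out of service.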
Connection draining

When an Ingress Gateway POD is deleted (because of an upgrade, a scale-down event, etc.), it immediately returns HTTP 503 in the /healthz/ready endpoint and keeps serving connections until it is effectively deleted. This is called the drain period, and its default value (3 seconds) is extremely short for any external load balancer. This value has to be increased so the Ingress Gateway PODs being deleted continue serving connections until the Ingress Gateway POD is removed from the external load balancer (the BIG-IP) and the outstanding connections are finalised. This setting can only be tuned using the Gateway injection method, and it is applied by adding the following snippet in the Deployment file:

apiVersion: apps/v1
kind: Deployment
[...]
spec:
  template:
    metadata:
      annotations:
        proxy.istio.io/config: |
          terminationDrainDuration: 45s

In the example above, the default drain period of the OpenShift Router (45 seconds) has been used. The value can be customised, keeping in mind that:

- When using CIS, it should allow CIS to update the configuration in the BIG-IP and drain the connections.
- When not using CIS, it should allow the health check to detect the condition of the POD and drain the connections.

Additional recommendations

The next recommendations apply to any ingress controller or API manager and have been previously suggested when using the OpenShift Router.

Handle non-graceful errors with the pool's reselect tries

To deal better with non-graceful shutdowns or transient errors, this mechanism reselects a new Ingress Gateway POD when a request fails. The recommendation is to set the number of tries to the number of Ingress Gateway PODs minus 1. When using CIS, this can be set in the VirtualServer or TransportServer CRs with the reselectTries parameter.

Set an additional TCP monitor for the Ingress Gateway's application traffic sockets

This complementary TCP monitor (for both the HTTP and HTTPS listeners) validates that Ready instances can actually receive traffic on the application's traffic sockets. Although this case is handled by the reselect tries mechanism, this monitor provides visibility that such types of errors are happening.

Conclusion and closing remarks

We hope this article highlights the most important aspects of integrating OpenShift Service Mesh with BIG-IP. A key aspect for having a reliable Ingress Gateway integration is to modify OpenShift Service Mesh's terminationDrainDuration and readinessProbe.failureThreshold defaults. F5 has submitted RFE 04270713 to Red Hat to improve these. This article will be updated accordingly.

Whether the CIS integration is used or not, BIG-IP allows you to expose OpenShift Service Mesh reliably with extensive L4-L7 security and traffic management capabilities. It also allows fine-grained access control, scalable SNAT or keeping the original source IP, among others. Overall, BIG-IP is able to fulfill any requirement.

We look forward to hearing your experience and feedback on this article.
F5 BIG-IP deployment with OpenShift - platform and networking options

Introduction

This article is an architectural overview on how F5 BIG-IP can be used with Red Hat OpenShift. Several topics are covered, including:

- 1-tier or 2-tier arrangements, where the BIG-IP load balances workload PODs directly or load balances ingress controllers (such as NGINX+ or OpenShift's built-in router), respectively.
- Multi-cluster arrangements, where the BIG-IP can load balance, or do route sharding, across two or more clusters.
- Multi-tenancy and IP address management options.

While this article has a NetOps/infrastructure focus, the follow-up article BIG-IP deployment with OpenShift - application publishing focuses on DevOps/applications.

Overall architecture

When using BIG-IP with Red Hat OpenShift, the Container Ingress Services (CIS from now on) container is used to connect the BIG-IP APIs with the Kubernetes APIs. The source of truth is OpenShift. When a user configuration is applied or when a change occurs in the OpenShift cluster, CIS automatically updates the configuration in the BIG-IP. Under the hood, CIS updates the BIG-IP configuration using the AS3 declarative API. It is not necessary to know this API, as all the configuration can be applied using Kubernetes resource types.

IP Address Management (IPAM from now on) is important when it is desired that the DevOps teams operate independently from the infrastructure administrators. CIS supports IPAM by making use of the F5 IPAM Controller (FIC from now on), which is deployed as a container as well.

It can be seen how these components fit together in the next picture. CIS and FIC are PODs deployed in the OpenShift cluster and AS3 is deployed in the BIG-IP.

In the next sections, we cover the different deployment options and considerations to be taken into account. The full documentation can be found in F5 clouddocs. F5 BIG-IP container integrations are Open Source Software (OSS) and can be found in this GitHub repository, where you will find additional technical details and examples.

Networking - CNI options

Kubernetes' networking is provided by Container Networking Interface plugins (CNI from now on), and F5 BIG-IP supports all of OpenShift's native CNIs:

- OVNKubernetes - This is the preferred option. GA since OpenShift 4.6, it makes use of Geneve encapsulation, but BIG-IP interacts with this CNI in a routed mode in which the packets from/to the BIG-IP don't use encapsulation. Additionally, the PODs' cluster IPs are discovered dynamically by CIS when OpenShift nodes are added or removed. The latter makes this method also the easiest from a BIG-IP management point of view. Check the CIS configuration for OVNKubernetes for details.
- OpenShiftSDN - Supported since OpenShift 3.x, it is being phased out in favour of OVNKubernetes. It makes use of VXLAN encapsulation between the nodes and between the nodes and the BIG-IPs. This requires manual configuration of VXLAN tunnels in the BIG-IPs when OpenShift nodes are added or removed. Check the CIS configuration for OpenShiftSDN for details.

Feature-wise, these CNIs can be compared using the next table from the OpenShift documentation.

Besides the above features, performance should also be taken into consideration. The NICs used in the OpenShift cluster should do encapsulation off-loading to reduce the CPU load in the nodes. Increasing the MTU is recommended, especially for encapsulating CNIs; this is suggested in OpenShift's documentation as well, and needs to be set at installation time in the install-config.yaml file. See this OpenShift.com link for details.
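For reference, the CNI itself is selected at installation time in the networking stanza of install-config.yaml. The sketch below shows that stanza only; the CIDRs are the documented defaults and are placeholders here, and any MTU tuning should follow the OpenShift documentation linked above.

networking:
  networkType: OVNKubernetes        # or OpenShiftSDN on older clusters
  clusterNetwork:
  - cidr: 10.128.0.0/14             # POD network (default shown as a placeholder)
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16                   # Service network (default shown as a placeholder)
  machineNetwork:
  - cidr: 10.0.0.0/16               # node subnet (environment specific)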
Networking - the importance of supporting the clusters' CNI

There are basically three modes to interact with a Kubernetes workload from outside the cluster:

- Using the NodePort Service type. In this case, external hosts access the PODs using any of the cluster's node IPs. When a request reaches a node, Kubernetes' kube-proxy is responsible for forwarding the request to a POD in the local or a remote node; sending to a remote node adds noticeable overhead. In two-tier deployments, externalTrafficPolicy: Local could be used with appropriate monitoring to avoid this additional hop. NodePort is popular with other external load balancers because it is an easy method to access the PODs without having to support the CNI; as the name indicates, it uses the Kubernetes nodes' IP addresses. This has the drawback of an additional indirection. This drawback is especially relevant for 1-tier deployments because application PODs cannot be accessed directly, eliminating the advantages of this deployment type. On the other hand, BIG-IP supports OpenShift's CNIs, both OpenShiftSDN and OVNKubernetes.
- Using the LoadBalancer Service type. The packet path in this mode is equivalent to NodePort, in which the external load balancers need an intermediate kube-proxy hop before reaching the POD. An alternative to bypass kube-proxy is the use of hostNetwork access, but this is discouraged in general because of its security implications.
- Using the ClusterIP Service type. This is the preferred mode because a request is sent directly to the destination POD. This requires supporting OpenShift's CNIs, which is the case for BIG-IP. It is worth noting that BIG-IP also supports other CNIs such as Calico or Cilium. This arrangement can be seen next.

Please note in the above figure the traffic path from the BIG-IP, where the arrow reaches the inside of the CNI area. This is to indicate that it can address the ingress controllers or the workload PODs' IPs within the cluster network. Using the ClusterIP Service type is also more flexible because it allows CIS to use 1-tier and 2-tier arrangements simultaneously.

Networking - Load Balancer arrangement options

There are basically two arrangement options, 1-tier and 2-tier. In a nutshell:

- A 2-tier arrangement is the typical way in which Kubernetes clusters are deployed. In this arrangement, the BIG-IP has only the role of External Load Balancer (first tier only) and sends the client requests to the Ingress Controller instances (second tier). The Ingress Controllers ultimately forward the requests to the workload PODs.
- In a 1-tier arrangement, the BIG-IP sends the requests to the workload PODs directly. This is a much simplified arrangement, in which the BIG-IP performs the role of both External Load Balancer and Ingress Controller.

Next, we will see the advantages of each arrangement. Please note that when using ClusterIP, this selection can be done on a per-Service basis. From the BIG-IP's point of view, it is irrelevant what the endpoints are.

Load Balancer arrangement option - 2-tier arrangement

Unlike most External Load Balancers, the BIG-IP can expose services with either Layer 4 or Layer 7 functionalities. In Layer 7 mode, SSL/TLS off-loading, HSM, Advanced WAF, and other advanced services can be used.
A 2-tier arrangement provides greater scalability compared to 1-tier arrangements in terms of the number of L7 routes exposed or the number of Kubernetes PODs, because the control plane workload (the related Kubernetes events that are generated for these PODs and Routes) is split between BIG-IP/CIS and the in-cluster Ingress Controller. This arrangement also has strong isolation between the two tiers, which is ideal when each tier is managed by a different team (i.e.: platform and developer teams). A BIG-IP 2-tier arrangement is shown next:

Load Balancer arrangement option - 1-tier arrangement

In this arrangement, the BIG-IP typically operates in L7 mode and sends the traffic directly to the final workload POD. This is done by sending traffic to Services in ClusterIP mode. In this arrangement, persistence is handled easily and the workload PODs can be directly monitored by the BIG-IP, providing an accurate view of the application's health. A BIG-IP 1-tier arrangement is shown next:

This arrangement is simpler to troubleshoot, has less latency, and has potentially higher per-session performance. An isolation between platform and developer teams can be achieved with CIS and FIC, yet this isolation is not as strong as in 2-tier arrangements. This is described in BIG-IP deployment with OpenShift - application publishing options.

BIG-IP platform flexibility: deployment, scalability, and multi-tenancy options

Using BIG-IP, the deployment options are independent of the BIG-IP being an appliance, a scale-out chassis, or a Virtual Edition. The configuration is always the same down to the L2 (VLAN/tunnel) config level. Only the L1 (physical interface) configuration changes. This platform flexibility also opens the possibility of using different options for scalability, multi-tenancy, hardware accelerators, or Hardware Security Modules (HSMs). The latter are especially important to keep the SSL/TLS private keys in a FIPS-compliant manner. The HSMs can be onboard, on-prem Network HSMs, or cloud SaaS HSMs.

Multi-tenancy Options

In this section, multi-tenancy refers to the case in which different projects from one or more OpenShift clusters are serviced by a single BIG-IP. The different CIS deployment options are outlined next:

- A CIS instance can manage all namespaces on a given OpenShift cluster or a subset of these. Namespaces can be specified with a list or a label selector (i.e.: environment=test or environment=production).
- Multiple CIS instances, handling different namespaces, can share a single BIG-IP or use different BIG-IPs. Each CIS instance will own a dedicated partition in a BIG-IP. For example, it is feasible to set up an OpenShift cluster with development, pre-production, and production labeled namespaces and have these serviced by different CIS instances in the same or different BIG-IPs for each environment.
- Multiple CIS instances in a single BIG-IP can also handle different OpenShift clusters. This is thanks to the soft isolation provided by BIG-IP partitions. Network isolation between these partitions can be achieved with route domains.

Some of these deployment options are shown next:

IP address management (IPAM)

CIS has the capability of dynamically allocating IP addresses using the F5 IPAM Controller (FIC) companion. At the time of writing, it is possible to retrieve IP addresses from the following providers:

- Infoblox
- The F5 local DB provider, which makes use of a PVC for persistence.
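As an illustration of the F5 local DB provider, the following is a minimal sketch of the FIC container arguments. The exact flag set can vary between FIC releases, and the namespace, image tag, IP ranges, and database mount path shown here are assumptions, so check the F5 IPAM Controller documentation for your version.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: f5-ipam-controller
  namespace: kube-system              # placeholder namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: f5-ipam-controller
  template:
    metadata:
      labels:
        app: f5-ipam-controller
    spec:
      containers:
      - name: f5-ipam-controller
        image: f5networks/f5-ipam-controller:latest   # pin a tested version
        args:
        - --orchestration=kubernetes                  # or openshift, depending on the FIC release
        - --ipam-provider=f5-ip-provider              # local DB provider backed by a PVC
        - --ip-range='{"development":"10.1.10.20-10.1.10.30","production":"10.1.10.40-10.1.10.50"}'
          # the keys above are the ipamLabel values referenced by the CIS CRs
        volumeMounts:
        - name: ipam-db
          mountPath: /app/ipamdb                      # assumed mount path for the local DB
      volumes:
      - name: ipam-db
        persistentVolumeClaim:
          claimName: fic-ipam-db                      # placeholder PVC name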
For the DevOps team, it is transparent which provider is used; it is only required to specify an ipamLabel attribute in the exposed L7 or L4 service. The DevOps team also has the ability to indicate when it wants to share IP addresses between different L7 or L4 services by means of the hostGroup attribute. This is described in the follow-up article.

BIG-IP data plane scalability options

A single BIG-IP cluster can scale up horizontally with up to 8 BIG-IP instances and have the different projects distributed among these. This is referred to as Scale-N in the BIG-IP documentation. This mode is often not used because it requires additional orchestration or manual operation for optimal load distribution. In this mode, projects would have soft isolation between them by means of BIG-IP partitions.

When ultimate scalability or hard isolation is required, then TMOS vCMP technology or, in newer versions, F5OS tenant facilities can be used in larger appliances and scale-out chassis. These multi-tenant facilities allow running independent BIG-IP instances, isolated at the hardware level, even allowing the use of different versions of BIG-IP. The tenant BIG-IP instances can get allocated different amounts of hardware resources. In the next picture, the different tenants are shown in different colored bars using several blades (grey bars).

Using chassis-based platforms allows scaling data plane performance and increasing redundancy by adding blades to the system without the need of a reconfiguration on the CIS/OpenShift side of things.

BIG-IP control plane scalability options

When using very large OpenShift clusters with either a large number of exposed services or a large number of PODs, and there is a high number of changes, many events are triggered in the Kubernetes API. These events are processed by CIS and ultimately by the BIG-IP's control plane. In these cases, the following strategies can be used to improve BIG-IP's control plane scalability:

- Disaggregate the different projects into different BIG-IPs. These might be multiple BIG-IP VEs or instances in F5 vCMP or F5OS tenants when using hardware platforms.
- Use a 2-tier architecture, which reduces the number of Kubernetes objects and events that the BIG-IP is exposed to.

In the upcoming months, CIS will be available in BIG-IP Next. This is a re-architecture of BIG-IP that incorporates major scalability improvements in the control plane.

Multi-cluster OpenShift

Since CIS version 2.14, it is also possible for the BIG-IP to load balance between 2 or more clusters in Active-Active, Active-Standby, or Ratio modes. 1-tier or 2-tier arrangements are possible. The next figure shows a single BIG-IP exposing workloads from 2 OpenShift clusters. Please note that the OpenShift clusters don't need to run the same version, so this arrangement is also interesting for performing OpenShift upgrades.

When using CIS in multi-cluster mode, an additional CIS instance in a secondary cluster is needed for redundancy. If there are more than 2 OpenShift clusters, no additional CIS instances are needed. Therefore, a typical BIG-IP cluster of 2 units load balancing 2 or more OpenShift clusters will always require 4 CIS instances. For each BIG-IP, one of the CIS instances has the (P)rimary role and is in charge of making changes in the BIG-IP by default. The (S)econdary CIS will be on standby. Both CIS instances access all OpenShift clusters. A more comprehensive view of this can be seen in the next diagram, which considers having more than 2 OpenShift clusters.
OpenShift clusters that don't host a CIS instance are referred to as remotely managed.

Conclusion

F5 BIG-IP provides unmatched deployment options and features with OpenShift; these include:

- Support of OpenShift's CNIs, which allows sending the traffic directly instead of using hostNetwork (which implies a security risk) or the common NodePort, which incurs the additional kube-proxy indirection. Both 1-tier and 2-tier arrangements (or both types simultaneously) are possible.
- F5's Container Ingress Services provides the ability to handle multiple OpenShift clusters, exposing their services in a single VIP. This is a unique feature in the industry.
- To complete the circle, this integration also provides IP address management (IPAM), which gives great flexibility to DevOps teams.

All these are available regardless of whether the BIG-IP is a Virtual Edition, an appliance, or a chassis platform, allowing great scalability and multi-tenancy options.

The follow-up article BIG-IP deployment with OpenShift - application publishing focuses on DevOps and applications. It describes how CIS can also unleash all traffic management and security features in a Kubernetes-native way.

We are driven by your requirements. If you have any, please provide feedback through this post's comments section, your sales engineer, or via our GitHub repository.
F5 Distributed Cloud Kubernetes Integration: Securing Services with Direct Pod Connectivity

Introduction

As organizations embrace Kubernetes for container orchestration, they face critical challenges in exposing services securely to external consumers while maintaining granular control over traffic management and security policies. Traditional approaches using NodePort services or basic ingress controllers often fall short in providing the advanced application delivery and security features required for production workloads.

F5 Distributed Cloud (F5 XC) addresses these challenges by offering enterprise-grade application delivery and security services through its Customer Edge (CE) nodes. By establishing direct connectivity to Kubernetes pods, F5 XC can provide sophisticated load balancing, WAF protection, API security, and multi-cloud connectivity without the limitations of NodePort-based architectures.

This article demonstrates how to architect and implement F5 XC CE integration with Kubernetes clusters to expose and secure services effectively, covering both managed Kubernetes platforms (AWS EKS, Azure AKS, Google GKE) and self-managed clusters using K3S with the Cilium CNI.

Understanding F5 XC Kubernetes Service Discovery

F5 Distributed Cloud includes a native Kubernetes service discovery feature that communicates directly with Kubernetes API servers to retrieve information about services and their associated pods. This capability operates in two distinct modes:

Isolated Mode

In this mode, F5 XC CE nodes are isolated from the Kubernetes cluster pods and can only reach services exposed as NodePort services. While the discovery mechanism can retrieve all services, connectivity is limited to NodePort-exposed endpoints, with the inherent NodePort limitations:

- Port Range Restrictions: Limited to ports 30000-32767
- Security Concerns: Exposes services on all node IPs
- Performance Overhead: Additional network hops through kube-proxy
- Limited Load Balancing: Basic round-robin without advanced health checks

Non-Isolated Mode, Direct Pod Connectivity (and why it matters)

This is the focus of our implementation. In non-isolated mode, F5 XC CE nodes can reach Kubernetes pods directly using their pod IP addresses. This provides several advantages:

- Simplified Architecture: Eliminates NodePort complexity and port management limitations
- Enhanced Security: Apply WAF, DDoS protection, and API security directly at the pod level
- Advanced Load Balancing: Sophisticated algorithms, circuit breaking, and retry logic

Architectural Patterns for Pod IP Accessibility

To enable direct pod connectivity from external components like F5 XC CEs, the pod IP addresses must be routable outside the Kubernetes cluster. The implementation approach varies based on your infrastructure:

Cloud Provider Managed Kubernetes

Cloud providers typically handle pod IP routing through their native Container Network Interfaces (CNIs):

Figure 1: Cloud providers' K8S CNI routes POD IPs to the cloud provider private cloud routing table

- AWS EKS: Uses the Amazon VPC CNI, which assigns VPC IP addresses directly to pods
- Azure AKS: Traditional CNI mode allocates Azure VNET IPs to pods
- Google GKE: VPC-native clusters provide direct pod IP routing

In these environments, the cloud provider's CNI automatically updates routing tables to make pod IPs accessible within the VPC/VNET.

Self-Managed Kubernetes Clusters

For self-managed clusters, you need an advanced CNI that can expose the Kubernetes overlay network.
The most common solutions are:

- Cilium: Provides eBPF-based networking with BGP support
- Calico: Offers flexible networking policies with BGP peering capabilities, and eBPF support as well

These CNIs typically use BGP to advertise pod subnets to external routers, making them accessible from outside the cluster.

Figure 2: Self-managed K8S clusters use an advanced CNI with BGP to expose the overlay subnet

Cloud Provider Implementations

AWS EKS Architecture

Figure 3: AWS EKS with F5 XC CE integration using the VPC CNI

With AWS EKS, the VPC CNI plugin assigns real VPC IP addresses to pods, making them directly routable within the VPC without additional configuration.

Azure AKS Traditional CNI

Figure 4: Azure AKS with traditional CNI mode for direct pod connectivity

Azure's traditional CNI mode allocates IP addresses from the VNET subnet directly to pods, enabling native Azure networking features.

Google GKE VPC-Native

Figure 5: Google GKE VPC-native clusters with alias IP ranges for pods

GKE's VPC-native mode uses alias IP ranges to provide pods with routable IP addresses within the Google Cloud VPC.

Deeper dive into the implementation

Implementation Example 1: AWS EKS Integration

Let's walk through a complete implementation using AWS EKS as our Kubernetes platform.

Prerequisites and Architecture

Network Configuration:
- VPC CIDR: 10.154.0.0/16
- Three private subnets (one per availability zone)
- F5 XC CE deployed in Private Subnet 1
- EKS worker nodes distributed across all three subnets

Figure 6: Complete EKS implementation architecture with F5 XC CE integration

Kubernetes Configuration:
- EKS cluster with the AWS VPC CNI
- Sample application: microbot (simple HTTP service)
- Three replicas distributed across nodes

What is running inside the K8S cluster?

The PODs

We have three PODs in the default namespace.

Figure 7: The running PODs in the EKS cluster

One is running with POD IP 10.154.125.116, another one with POD IP 10.154.76.183, and one with POD IP 10.154.69.183. The microbot POD is a simple HTTP application that returns the full name of the POD and an image.

Figure 8: The microbot app

The services

Figure 9: The services running in the EKS cluster

Configure F5 XC Kubernetes Service Discovery

Create a K8S service discovery object.

Figure 10: Kubernetes service discovery configuration

In the "Access Credentials" section, activate the "Show Advanced Fields" slider. This is the key!

Figure 11: The "advanced fields" slider

Then provide the Kubeconfig file of the K8S cluster and select "Kubernetes POD reachable".

Figure 12: Kubernetes POD network reachability

Then the K8S cluster should be displayed in the "Service Discoveries".

Figure 13: The discovered PODs IPs

One can see that the services are discovered by the F5 XC node and, more interestingly, the POD IPs.

Are the pods reachable from the F5 XC CE?

Figure 14: Testing connectivity to pod 10.154.125.116
Figure 15: Testing connectivity to pod 10.154.76.183
Figure 16: Testing connectivity to pod 10.154.69.183

Yes, they are!

Create an Origin Pool with the K8S Service

Create an origin pool that references your Kubernetes service:

Figure 17: Creating an origin pool with the Kubernetes service type

Create an HTTPS Load-Balancer and test the service

Just create a regular F5 XC HTTPS Load-Balancer and use the origin pool created above.

Figure 18: Traffic load-balanced across the three PODs

The result shows traffic being load-balanced across all EKS pods.
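For reference, a minimal manifest for the microbot demo application used in this EKS example could look like the sketch below. The Deployment/Service names and the dontrebootme/microbot:v1 image are assumptions, not values taken from the walkthrough.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: microbot
spec:
  replicas: 3                       # three PODs spread across the worker nodes
  selector:
    matchLabels:
      app: microbot
  template:
    metadata:
      labels:
        app: microbot
    spec:
      containers:
      - name: microbot
        image: dontrebootme/microbot:v1   # assumed demo image
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: microbot
spec:
  type: ClusterIP                   # the Service type does not matter; XC discovers the POD IPs
  selector:
    app: microbot
  ports:
  - port: 80
    targetPort: 80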
Implementation Example 2: Self-Managed K3S with Cilium CNI

One infrastructure subnet (10.154.1.0/24), in which the following components are going to be deployed:

- F5 XC CE single node (10.154.1.100)
- Two Linux Ubuntu nodes (10.154.1.10 & 10.154.1.11)

On the Linux Ubuntu nodes, a Kubernetes cluster is going to be deployed using K3S (www.k3s.io) with the following specifications:

- PODs overlay subnet: 10.160.0.0/16
- Services overlay subnet: 10.161.0.0/16
- The default K3S CNI (flannel) will be disabled
- The K3S CNI will be replaced by the Cilium CNI to directly expose the PODs overlay subnet to the "external world"

Figure 19: Self-managed K3S cluster with Cilium CNI and BGP peering to the F5 XC CE

What is running inside the K8S cluster?

The PODs

We have two PODs in the default namespace.

Figure 20: The running PODs in the K8S cluster

One is running on node "k3s-1" with POD IP 10.160.0.203 and the other one on node "k3s-2" with POD IP 10.160.1.208. The microbot POD is a simple HTTP application that returns the full name of the POD and an image.

The services

Figure 21: The services running in the K8S cluster

Different Kubernetes services are created to expose the microbot PODs, one of type ClusterIP and the other one of type LoadBalancer. The type of service doesn't really matter for F5 XC because we are working in a fully routed mode between the CE and the K8S cluster. F5 XC only needs to "know" the POD IPs, which will be discovered through the services.

Configure F5 XC Kubernetes Service Discovery

The steps are identical to what we did for EKS. Once done, the services and POD IPs are discovered by F5 XC.

Figure 22: The discovered PODs IPs

Configure the BGP peering on the F5 XC CE

In this example topology, BGP peerings are established directly between the K8S nodes and the F5 XC CE. Other implementations are possible, for instance, with an intermediate router.

Figure 23: BGP peerings

Check if the peerings are established.

Figure 24: Verification of the BGP peerings

Are the pods reachable from the F5 XC CE?

Figure 25: PODs reachability test

They are!

Create an Origin Pool with the K8S Service

As we did for the EKS configuration, create an origin pool that references your Kubernetes service.

Create an HTTPS Load-Balancer and test the service

Just create a regular F5 XC HTTPS Load-Balancer and use the origin pool created above.

Figure 26: Traffic load-balanced across the two PODs

Scaling up?

Let's add another POD to the deployment to see how F5 XC handles the load balancing afterwards.

Figure 27: Scaling up the Microbot PODs

And it's working! Load is spread automatically as soon as new POD instances are available for the given service.

Figure 28: Traffic load-balanced across the three PODs
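The scale-up step above is a single command; a possible way to reproduce it (the Deployment name microbot is assumed) and to confirm the new POD IP is:

kubectl scale deployment microbot --replicas=3
kubectl get pods -o wide    # the new POD and its IP in 10.160.0.0/16 should be listed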
Appendix - K3S and Cilium deployment example

Step 1: Install K3S without Default CNI

On the master node:

curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE="644" \
  INSTALL_K3S_EXEC="--flannel-backend=none \
  --disable-network-policy \
  --disable=traefik \
  --disable servicelb \
  --cluster-cidr=10.160.0.0/16 \
  --service-cidr=10.161.0.0/16" sh -

# Export kubeconfig
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml

# Get token for worker nodes
sudo cat /var/lib/rancher/k3s/server/node-token

On worker nodes:

IP_MASTER=10.154.1.10
K3S_TOKEN=<token-from-master>
curl -sfL https://get.k3s.io | K3S_URL=https://${IP_MASTER}:6443 K3S_TOKEN=${K3S_TOKEN} sh -

Step 2: Install and Configure Cilium

On the K3S master node, please perform the following.

Install Helm and the Cilium CLI:

# Install Helm
sudo snap install helm --classic

# Download Cilium CLI
CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
CLI_ARCH=amd64
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin

Install Cilium with BGP support:

helm repo add cilium https://helm.cilium.io/
helm install cilium cilium/cilium --version 1.16.5 \
  --set=ipam.operator.clusterPoolIPv4PodCIDRList="10.160.0.0/16" \
  --set kubeProxyReplacement=true \
  --set k8sServiceHost=10.154.1.10 \
  --set k8sServicePort=6443 \
  --set bgpControlPlane.enabled=true \
  --namespace kube-system \
  --set bpf.hostLegacyRouting=false \
  --set bpf.masquerade=true

# Monitor installation
cilium status --wait

Step 3: Configure BGP Peering

Label nodes for BGP:

kubectl label nodes k3s-1 bgp=true
kubectl label nodes k3s-2 bgp=true

Create the BGP configuration:

# BGP Cluster Config
apiVersion: cilium.io/v2alpha1
kind: CiliumBGPClusterConfig
metadata:
  name: cilium-bgp
spec:
  nodeSelector:
    matchLabels:
      bgp: "true"
  bgpInstances:
  - name: "k3s-instance"
    localASN: 65001
    peers:
    - name: "f5xc-ce"
      peerASN: 65002
      peerAddress: 10.154.1.100
      peerConfigRef:
        name: "cilium-peer"
---
# BGP Peer Config
apiVersion: cilium.io/v2alpha1
kind: CiliumBGPPeerConfig
metadata:
  name: cilium-peer
spec:
  timers:
    holdTimeSeconds: 9
    keepAliveTimeSeconds: 3
  gracefulRestart:
    enabled: true
    restartTimeSeconds: 15
  families:
  - afi: ipv4
    safi: unicast
    advertisements:
      matchLabels:
        advertise: "bgp"
---
# BGP Advertisement
apiVersion: cilium.io/v2alpha1
kind: CiliumBGPAdvertisement
metadata:
  name: bgp-advertisements
  labels:
    advertise: bgp
spec:
  advertisements:
  - advertisementType: "PodCIDR"
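Once the three objects are applied, the BGP sessions towards the CE can also be checked from the Cilium side, for example with the Cilium CLI installed in Step 2 (exact output varies per environment and Cilium version):

cilium status
cilium bgp peers     # shows the per-node session state towards 10.154.1.100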
Double Trouble: Multiple Controllers Handling the Same Kubernetes LoadBalancer Service

Kubernetes doesn't prevent multiple controllers from handling the same Service. In fact, from Kubernetes' perspective, it's just a chunk of YAML describing an abstract networking resource. If two controllers are watching services and both think they should provision a VIP, they'll each do so.
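To make the point concrete, consider a minimal Service of this type (names and ports are illustrative): nothing in the manifest itself nominates a specific controller, so any controller watching LoadBalancer Services may decide the object is its responsibility.

apiVersion: v1
kind: Service
metadata:
  name: example-app
spec:
  type: LoadBalancer      # an abstract request for an externally reachable VIP
  selector:
    app: example-app
  ports:
  - port: 80
    targetPort: 8080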
F5 Distributed Cloud Telemetry (Metrics) - Prometheus

Scope

This article walks through the process of collecting metrics from F5 Distributed Cloud's (XC) Service Graph API and exposing them in a format that Prometheus can scrape. Prometheus then scrapes these metrics, which can be visualized in Grafana.

Introduction

Metrics are essential for gaining real-time insight into service performance and behaviour. F5 Distributed Cloud (XC) provides a Service Graph API that captures service-to-service communication data across your infrastructure. Prometheus, a leading open-source monitoring system, can scrape and store time-series metrics, and when paired with Grafana, offers powerful visualization capabilities. This article shows how to integrate a custom Python-based exporter that transforms Service Graph API data into Prometheus-compatible metrics. These metrics are then scraped by Prometheus and visualized in Grafana, all running in Docker for easy deployment.

Prerequisites

- Access to an F5 Distributed Cloud (XC) SaaS tenant
- A VM with Python 3 installed
- A running Prometheus instance (if not, check the "Configuring Prometheus" section below)
- A running Grafana instance (if not, check the "Configuring Grafana" section below)

Note: In this demo, a single AWS VM is used. It runs Python with the exporter (port 8888), plus Prometheus (host port 9090) and Grafana (port 3000) as Docker instances, all on the same VM.

Architecture Overview

F5 XC API → Python Exporter → Prometheus → Grafana

Building the Python Exporter

To collect metrics from the F5 Distributed Cloud (XC) Service Graph API and expose them in a format Prometheus understands, we created a lightweight Python exporter using Flask. This exporter acts as a transformation layer: it fetches service graph data, parses it, and exposes it through a /metrics endpoint that Prometheus can scrape.

Code Link -> exporter.py

Key Functions of the Exporter

- Uses the XC-Provided .p12 File for Authentication: To authenticate API requests to F5 Distributed Cloud (XC), the exporter uses a client certificate packaged in a .p12 file. This file must be manually downloaded from the F5 XC console (steps) and stored on the VM where the Python script runs. The script expects the full path to the .p12 file and its associated password to be specified in the configuration section.
- Fetches Service Graph Metrics: The script pulls service-level metrics such as request rates, error rates, throughput, and latency from the XC API. It supports both aggregated and individual load balancer views.
- Processes and Structures the Data: The exporter parses the raw API response to extract the latest metric values and converts them into the Prometheus exposition format. Each metric is labelled (e.g., by vhost and direction) for flexibility in Grafana queries.
- Exposes a /metrics Endpoint: A Flask web server runs on port 8888, serving the /metrics endpoint. Prometheus periodically scrapes this endpoint to ingest the latest metrics.
- Handles Multiple Metric Types: Traffic metrics and health scores are handled and formatted individually. Each metric includes a descriptive name, type declaration, and optional labels for fine-grained monitoring and visualization.

Running the Exporter

python3 exporter.py > python.log 2>&1 &

This command runs exporter.py using Python 3 in the background and redirects all standard output and error messages to python.log for easier debugging.

Configuring Prometheus

docker run -d --name=prometheus --network=host -v $(pwd)/prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus:latest

Prometheus runs as a Docker instance in host network mode (port 9090) with the configuration below (prometheus.yml), scraping the /metrics endpoint exposed by the Python Flask exporter on port 8888 every 60 seconds.
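A minimal prometheus.yml matching this description (60-second scrape interval, exporter reachable on localhost:8888; the job name is illustrative) could look like:

global:
  scrape_interval: 60s

scrape_configs:
  - job_name: 'f5xc-service-graph-exporter'   # illustrative job name
    metrics_path: /metrics
    static_configs:
      - targets: ['localhost:8888']           # the Flask exporter running on the same VM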
Configuring Prometheus docker run -d --name=prometheus --network=host -v $(pwd)/prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus:latest Prometheus is running as docker instance in host network (port 9090) mode with below configuration (prometheus.yml), scrapping /metrics endpoint exposed from python flask exporter on port 8888 every 60 seconds. Configuring Grafana docker run -d --name=grafana -p 3000:3000 grafana/grafana:latest Private IP of the Prometheus docker instance along with port (9090) is used as data source in Grafana configuration. Once Prometheus is configured under Grafana Data sources, follow below steps: Navigate to Explore menu Select “Prometheus” in data source picker Choose appropriate metric, in this case “f5xc_downstream_http_request_rate” Select desired time range and click “Run query” Observe metrics graph will be displayed Note : Some requests need to be generated for metrics to be visible in Grafana. A broader, high-level view of all metrics can be accessed by navigating to “Drilldown” and selecting “Metrics”, providing a comprehensive snapshot across services. Conclusion F5 Distributed Cloud’s (F5 XC) Service Graph API provides deep visibility into service-to-service communication, and when paired with Prometheus and Grafana, it enables powerful, real-time monitoring without vendor lock-in. This integration highlights F5 XC’s alignment with open-source ecosystems, allowing users to build flexible and scalable observability pipelines. The custom Python exporter bridges the gap between the XC API and Prometheus, offering a lightweight and adaptable solution for transforming and exposing metrics. With Grafana dashboards on top, teams can gain instant insight into service health and performance. This open approach empowers operations teams to respond faster, optimize more effectively, and evolve their observability practices with confidence and control.387Views3likes2CommentsDistributed Cloud for App Delivery & Security for Hybrid Environments
Distributed Cloud for App Delivery & Security for Hybrid Environments

As enterprises modernize and expand their digital services, they increasingly deploy multiple instances of the same applications across diverse infrastructure environments, such as VMware, OpenShift, and Nutanix, to support distributed teams, regional data sovereignty, redundancy, or environment-specific compliance needs. These application instances often integrate into service chains that span clouds and data centers, introducing both scale and operational complexity. F5 Distributed Cloud provides a unified solution for secure, consistent application delivery and security across hybrid and multi-cloud environments. It enables organizations to add workloads seamlessly, whether for scaling, redundancy, or localization, without sacrificing visibility, security, or performance.
Using Aliases to launch F5 AMI Images in AWS Marketplace

F5 lists 82 product offerings in the AWS Marketplace as Amazon Machine Images (AMI). Each version of each product in each AWS Region has a different AMI. That's around 22,000 images! Each AMI is identified by an AMI ID. You use the AMI ID to indicate which AMI you want to use when launching an F5 product. You can find AMI IDs using the AWS Web Console, but the AWS CLI is the best tool for the job.

Searching for AMIs using the AWS CLI

Here's how you find the AMI IDs for version 17.5.1.2 of BIG-IP Virtual Edition in the us-east-1 AWS region:

aws ec2 describe-images --owners aws-marketplace --filters 'Name=name,Values=F5 BIGIP-17.5.1.2*' --query "sort_by(Images,&Name)[:].{Description: Description, Id:ImageId}" --region us-east-1 --output table

----------------------------------------------------------------------------------------------------
|                                           DescribeImages                                          |
+------------------------------------------------------------------------+-------------------------+
| Description                                                            | Id                      |
+------------------------------------------------------------------------+-------------------------+
| F5 BIGIP-17.5.1.2-0.0.5 BYOL-All Modules 1Boot Loc-250916013758        | ami-0948eabdf29ef2a8f   |
| F5 BIGIP-17.5.1.2-0.0.5 BYOL-All Modules 2Boot Loc-250916015535        | ami-0cb3aaa67967ad029   |
| F5 BIGIP-17.5.1.2-0.0.5 BYOL-LTM 1Boot Loc-250916013616                | ami-05d70b82c9031ff39   |
| F5 BIGIP-17.5.1.2-0.0.5 BYOL-LTM 2Boot Loc-250916014744                | ami-0b6021cc939308f3e   |
| F5 BIGIP-17.5.1.2-0.0.5 BYOL-encrypted-threat-protection-250916015535  | ami-01f4fde300d3763be   |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-AWF Plus 16vCPU-250916015534              | ami-015474056159387ac   |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Adv WAF Plus 200Mbps-250916015522         | ami-06ce5b03dce2a059d   |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Adv WAF Plus 25Mbps-250916015520          | ami-0826808708df97480   |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Adv WAF Plus 3Gbps-250916015523           | ami-08c63c8f7ca71cf37   |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Best Plus 10Gbps-250916015532             | ami-0e806ef17838760e4   |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Best Plus 1Gbps-250916015530              | ami-05e31c2a0ac9ec050   |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Best Plus 200Mbps-250916015528            | ami-02dc0995af98d0710   |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Best Plus 25Mbps-250916015527             | ami-08b8f2daefde800e9   |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Best Plus 5Gbps-250916015531              | ami-0d16154bb1102f3e9   |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Better 10Gbps-250916015512                | ami-05c9527fff191feba   |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Better 1Gbps-250916015510                 | ami-05ce2932601070d5c   |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Better 200Mbps-250916015508               | ami-0f6044db3900ba46f   |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Better 25Mbps-250916014542                | ami-0de57aba160170358   |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Better 5Gbps-250916015511                 | ami-04271103ab2d1369d   |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Good 10Gbps-250916014739                  | ami-0d06d2a097d7bb47a   |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Good 1Gbps-250916014737                   | ami-01707e969ebcc6138   |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Good 200Mbps-250916014735                 | ami-06f9a44562d94f992   |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Good 25Mbps-250916013626                  | ami-0aa2bca574c66af13   |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Good 5Gbps-250916014738                   | ami-01951e02c52deef85   |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-PVE Adv WAF Plus 200Mbps-0916015525       | ami-03df50dfc04f19df5   |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-PVE Adv WAF Plus 25Mbps-50916015524       | ami-0777c069eaae20ea1   |
+------------------------------------------------------------------------+-------------------------+

This command shows all 17.5.1* releases of the "PayGo Good 1Gbps" flavor of BIG-IP
in the us-west-1 region, sorted by newest release first:

aws ec2 describe-images --owners aws-marketplace --filters 'Name=name,Values=F5 BIGIP-17.5.1*PAYG-Good 1Gbps*' --query "reverse(sort_by(Images,&CreationDate))[:].{Description: Name, Id:ImageId, date:CreationDate}" --region us-west-1 --output table

----------------------------------------------------------------------------------------------------------------------------------------------------
|                                                                   DescribeImages                                                                  |
+--------------------------------------------------------------------------------------------+------------------------+----------------------------+
| Description                                                                                 | Id                     | date                       |
+--------------------------------------------------------------------------------------------+------------------------+----------------------------+
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Good 1Gbps-250916014737-7fb2f9db-2a12-4915-9abb-045b6388cccd   | ami-0de8ca1229be5f7fe  | 2025-09-16T23:12:28.000Z   |
| F5 BIGIP-17.5.1-0.80.7 PAYG-Good 1Gbps-250811055424-7fb2f9db-2a12-4915-9abb-045b6388cccd    | ami-09afcec6f36494382  | 2025-08-15T19:03:23.000Z   |
| F5 BIGIP-17.5.1-0.0.7 PAYG-Good 1Gbps-250618090310-7fb2f9db-2a12-4915-9abb-045b6388cccd     | ami-03e389e112872fd53  | 2025-07-01T06:00:44.000Z   |
+--------------------------------------------------------------------------------------------+------------------------+----------------------------+

Notice that the same BIG-IP VE release has a different AMI ID in each AWS region. Attempting to launch a product in one region using an AMI ID from a different region will fail. This causes a problem when a shell script or automation tool is used to launch new EC2 instances, the AMI IDs have been hardcoded for one region, and you attempt to use the script in another. Wouldn't it be nice to have a single AMI identifier that works in all AWS regions?

Introducing AMI Aliases

The AMI alias is an ID similar to the AMI ID, but it's easier to use in automation. An AMI alias has the form /aws/service/marketplace/prod-<identifier>/<version>, for example, for "PayGo Good 1Gbps":

/aws/service/marketplace/prod-s6e6miuci4yts/17.5.1.2-0.0.5

You can use this AMI alias in any Region, and AWS automatically maps it to the correct Regional AMI ID.
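If you want to see which regional AMI ID an alias maps to, the alias can also be resolved directly through SSM, which is the same mechanism the resolve:ssm: prefix relies on; the region below is just an example:

aws ssm get-parameters \
  --names /aws/service/marketplace/prod-s6e6miuci4yts/17.5.1.2-0.0.5 \
  --query "Parameters[0].Value" \
  --output text \
  --region us-west-2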
BIG-IP AMI Alias Identifiers

F5 Advanced WAF with LTM, IPI, and Threat Campaigns (PAYG, 16vCPU): prod-qqgc2ltsirpio
F5 Advanced WAF with LTM, IPI, and Threat Campaigns (PAYG, 200Mbps): prod-yajbds56coa24
F5 Advanced WAF with LTM, IPI, and Threat Campaigns (PAYG, 25Mbps): prod-qiufc36l6sepa
F5 Advanced WAF with LTM, IPI, and Threat Campaigns (PAYG, 3Gbps): prod-fp5qrfirjnnty
F5 BIG-IP BEST with IPI and Threat Campaigns (PAYG, 10Gbps): prod-w2p3rtkjrjmw6
F5 BIG-IP BEST with IPI and Threat Campaigns (PAYG, 1Gbps): prod-g3tye45sqm5d4
F5 BIG-IP BEST with IPI and Threat Campaigns (PAYG, 200Mbps): prod-dnpovgowtyz3o
F5 BIG-IP BEST with IPI and Threat Campaigns (PAYG, 25Mbps): prod-wjoyowh6kba46
F5 BIG-IP BEST with IPI and Threat Campaigns (PAYG, 5Gbps): prod-hlx7g47cksafk
F5 BIG-IP VE - ALL (BYOL, 1 Boot Location): prod-zvs3u7ov36lig
F5 BIG-IP VE - ALL (BYOL, 2 Boot Locations): prod-ubfqxbuqpsiei
F5 BIG-IP VE - LTM/DNS (BYOL, 1 Boot Location): prod-uqhc6th7ni37m
F5 BIG-IP VE - LTM/DNS (BYOL, 2 Boot Locations): prod-o7jz5ohvldaxg
F5 BIG-IP Virtual Edition - BETTER (PAYG, 10Gbps): prod-emsxkvkzwvs3o
F5 BIG-IP Virtual Edition - BETTER (PAYG, 1Gbps): prod-4idzu4qtdmzjg
F5 BIG-IP Virtual Edition - BETTER (PAYG, 200Mbps): prod-firaggo6h7bt6
F5 BIG-IP Virtual Edition - BETTER (PAYG, 25Mbps): prod-wijbh7ib34hyy
F5 BIG-IP Virtual Edition - BETTER (PAYG, 5Gbps): prod-rfglxslpwq64g
F5 BIG-IP Virtual Edition - GOOD (PAYG, 10Gbps): prod-54qdbqglgkiue
F5 BIG-IP Virtual Edition - GOOD (PAYG, 1Gbps): prod-s6e6miuci4yts
F5 BIG-IP Virtual Edition - GOOD (PAYG, 200Mbps): prod-ynybgkyvilzrs
F5 BIG-IP Virtual Edition - GOOD (PAYG, 25Mbps): prod-6zmxdpj4u4l5g
F5 BIG-IP Virtual Edition - GOOD (PAYG, 5Gbps): prod-3ze6zaohqssua
F5 BIG-IQ Virtual Edition - (BYOL): prod-igv63dkxhub54
F5 Encrypted Threat Protection: prod-bbtl6iceizxoi
F5 Per-App-VE Advanced WAF with LTM, IPI, TC (PAYG, 200Mbps): prod-gkzfxpnvn53v2
F5 Per-App-VE Advanced WAF with LTM, IPI, TC (PAYG, 25Mbps): prod-qu34r4gipys4s

NGINX Plus Alias Identifiers

NGINX Plus Basic - Amazon Linux 2 (LTS) AMI: prod-jhxdrfyy2jtva
NGINX Plus Developer - Amazon Linux 2 (LTS): prod-kbeepohgkgkxi
NGINX Plus Developer - Amazon Linux 2 (LTS) ARM Graviton: prod-vulv7pmlqjweq
NGINX Plus Developer - Amazon Linux 2023: prod-2zvigd3ltowyy
NGINX Plus Developer - Amazon Linux 2023 ARM Graviton: prod-icspnobisidru
NGINX Plus Developer - RHEL 8: prod-tquzaepylai4i
NGINX Plus Developer - RHEL 9: prod-hwl4zfgzccjye
NGINX Plus Developer - Ubuntu 22.04: prod-23ixzkz3wt5oq
NGINX Plus Developer - Ubuntu 24.04: prod-tqr7jcokfd7cw
NGINX Plus FIPS Premium - RHEL 9: prod-v6fhyzzkby6c2
NGINX Plus Premium - Amazon Linux 2 (LTS) AMI: prod-4dput2e45kkfq
NGINX Plus Premium - Amazon Linux 2 (LTS) ARM Graviton: prod-56qba3nacijjk
NGINX Plus Premium - Amazon Linux 2023: prod-w6xf4fmhpc6ju
NGINX Plus Premium - Amazon Linux 2023 ARM Graviton: prod-e2iwqrpted4kk
NGINX Plus Premium - RHEL 8 AMI: prod-m2v4zstxasp6s
NGINX Plus Premium - RHEL 9: prod-rytmqzlxdneig
NGINX Plus Premium - Ubuntu 22.04: prod-dtm5ujpv7kkro
NGINX Plus Premium - Ubuntu 24.04: prod-opg2qh33mi4pk
NGINX Plus Standard - Amazon Linux 2 (LTS) AMI: prod-mdgdnfftmj7se
NGINX Plus Standard - Amazon Linux 2 (LTS) ARM Graviton: prod-2kagbnj7ij6zi
NGINX Plus Standard - Amazon Linux 2023: prod-i25cyug3btfvk
NGINX Plus Standard - Amazon Linux 2023 ARM Graviton: prod-6s5rvlqlgrt74
NGINX Plus Standard - RHEL 8: prod-ebhpntvlfwluc
NGINX Plus Standard - RHEL 9: prod-3e7rk2ombbpfa
NGINX Plus Standard - Ubuntu 22.04: prod-7rhflwjy5357e
NGINX Plus Standard - Ubuntu 24.04: prod-b4rly35ct3dlc
NGINX Plus with NGINX App Protect Developer - Amazon Linux 2: prod-pjmfzy5htmaks
NGINX Plus with NGINX App Protect Developer - Debian 11: prod-ixsytlu2eluqa
NGINX Plus with NGINX App Protect Developer - RHEL 8: prod-6v57ggy3dqb6c
NGINX Plus with NGINX App Protect Developer - Ubuntu 20.04: prod-4a4g7h7mpepas
NGINX Plus with NGINX App Protect DoS Developer - Amazon Linux 2023: prod-fmqayhbsryoz2
NGINX Plus with NGINX App Protect DoS Developer - Debian 11: prod-4e5fwakhrn36y
NGINX Plus with NGINX App Protect DoS Developer - RHEL 8: prod-ubid75ixhf34a
NGINX Plus with NGINX App Protect DoS Developer - RHEL 9: prod-gg7mi5njfuqcw
NGINX Plus with NGINX App Protect DoS Developer - Ubuntu 20.04: prod-qiwzff7orqrmy
NGINX Plus with NGINX App Protect DoS Developer - Ubuntu 22.04: prod-h564ffpizhvic
NGINX Plus with NGINX App Protect DoS Developer - Ubuntu 24.04: prod-wckvpxkzj7fvk
NGINX Plus with NGINX App Protect DoS Premium - Amazon Linux 2023: prod-lza5c4nhqafpk
NGINX Plus with NGINX App Protect DoS Premium - Debian 11: prod-ych3dq3r44gl2
NGINX Plus with NGINX App Protect DoS Premium - RHEL 8: prod-266ker45aot7g
NGINX Plus with NGINX App Protect DoS Premium - RHEL 9: prod-6qrqjtainjlaa
NGINX Plus with NGINX App Protect DoS Premium - Ubuntu 20.04: prod-hagmbnluc5zmw
NGINX Plus with NGINX App Protect DoS Premium - Ubuntu 22.04: prod-y5iwq6gk4x4yq
NGINX Plus with NGINX App Protect DoS Premium - Ubuntu 24.04: prod-k3cb7avaushvq
NGINX Plus with NGINX App Protect Premium - Amazon Linux 2: prod-tlghtvo66zs5u
NGINX Plus with NGINX App Protect Premium - Debian 11: prod-6kfdotc3mw67o
NGINX Plus with NGINX App Protect Premium - RHEL 8: prod-okwnxdlnkmqhu
NGINX Plus with NGINX App Protect Premium - Ubuntu 20.04: prod-5wn6ltuzpws4m
NGINX Plus with NGINX App Protect WAF + DoS Premium - Amazon Linux 2023: prod-mualblirvfcqi
NGINX Plus with NGINX App Protect WAF + DoS Premium - Debian 11: prod-k2rimvjqipvm2
NGINX Plus with NGINX App Protect WAF + DoS Premium - RHEL 8: prod-6nlubep3hg4go
NGINX Plus with NGINX App Protect WAF + DoS Premium - Ubuntu 18.04: prod-f2diywsozd22m
NGINX Plus with NGINX App Protect WAF + DoS Premium - Ubuntu 20.04: prod-ajcsh5wsfuen2
NGINX Plus with NGINX App Protect WAF + DoS Premium - Ubuntu 22.04: prod-6adjgf6yl7hek
NGINX Plus with NGINX App Protect WAF + DoS Premium - Ubuntu 24.04: prod-autki7guiiqio

Using AMI Aliases for BIG-IP

The following example shows using an AMI alias to launch a new "F5 BIG-IP Virtual Edition - GOOD (PAYG, 1Gbps)" instance, version 17.5.1.2-0.0.5, by using the AWS CLI.

aws ec2 run-instances --image-id resolve:ssm:/aws/service/marketplace/prod-s6e6miuci4yts/17.5.1.2-0.0.5 --instance-type m5.xlarge --key-name MyKeyPair

The next example shows a CloudFormation template that accepts the AMI alias as an input parameter to create an instance.

AWSTemplateFormatVersion: 2010-09-09
Parameters:
  AmiAlias:
    Description: AMI alias
    Type: 'String'
Resources:
  MyEC2Instance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Sub "resolve:ssm:${AmiAlias}"
      InstanceType: "g4dn.xlarge"
      Tags:
      - Key: "Created from"
        Value: !Ref AmiAlias

Using AMI Aliases for NGINX Plus

NGINX Plus images in the AWS Marketplace are not version specific, so just use "latest" as the version to launch. For example, this will launch NGINX Plus Premium on Ubuntu 24.04:

aws ec2 run-instances --image-id resolve:ssm:/aws/service/marketplace/prod-opg2qh33mi4pk/latest --instance-type c5.large --key-name MyKeyPair

Finding AMI Aliases in AWS Marketplace

AMI aliases are new to the AWS Marketplace, so not all products have them.
To locate the alias for an AMI you use often, you need to resort to the AWS Marketplace web console. Here are the step-by-step instructions provided by Amazon:

1. Navigate to AWS Marketplace
- Go to AWS Marketplace
- Sign in to your AWS account

2. Find and Subscribe to the Product
- Search for or browse to find your desired product
- Click on the product listing
- Click "Continue to Subscribe"
- Accept the terms and subscribe to the product

3. Configure the Product
- After subscribing, click "Continue to Configuration"
- Select your desired:
  - Delivery Method (if multiple options are available)
  - Software Version
  - Region

4. Locate the AMI Alias
- At the bottom of the configuration page, you'll see:
  - AMI ID: ami-1234567890EXAMPLE
  - AMI Alias: /aws/service/marketplace/prod-<identifier>/<version>

New Tools for Your AMI Hunt

In this article, we focused on using AMI aliases to select the right F5 product to launch in AWS EC2. But there's one more takeaway. Scroll back up to the top of this page and take a closer look at the "aws ec2 describe-images" commands. These commands use JMESPath to filter, sort, and format the output. Find out more about filtering the output of AWS CLI commands here.