Simplifying Application Health Monitoring with F5 BIG-IP
A simple agreement between BIG-IP administrators and application owners can foster smooth collaboration between teams. Application owners define their own simple or complex health checks and agree to expose a conventional /health endpoint. When the /health endpoint responds with an HTTP 200 status, BIG-IP assumes the application is healthy based on the application owners' own criteria.

The Challenge of Health Monitoring in Modern Environments

F5 BIG-IP administrators in Network Operations (NetOps) teams often work with application teams because the BIG-IP acts as a full proxy, providing services like:

- TLS termination
- Load balancing
- Health monitoring

Health checks are crucial for effective load balancing. The BIG-IP uses them to determine where to send traffic among back-end application servers. However, health monitoring frequently causes friction between teams.

Problems with the Traditional Approach

Traditionally, BIG-IP administrators create and maintain health monitors ranging from simple ICMP pings to complex monitors that:

- Simulate user transactions
- Verify HTTP response codes
- Validate payload contents
- Track application dependencies

This leads to several issues:

- Knowledge Gap: NetOps may not fully grasp each application's intricacies.
- Change Management Overhead: Application updates require retesting monitors, causing delays.
- Production Risk: Monitors can break after application changes, incorrectly marking services as up or down.
- Team Friction: Troubleshooting failed health checks involves tedious back-and-forth between teams.

A Cloud-Native Solution

The cloud-native and microservices communities have patterns that elegantly solve these problems. One widely used pattern is the health endpoint, which adapts well to BIG-IP environments.

The /health Endpoint Convention

Cloud-native applications commonly expose dedicated health endpoints like /health, /healthy, or /ready. These return standard status codes reflecting the application's state. The /health endpoint provides a clear contract between NetOps and application teams for BIG-IP integration.

Implementing the Contract

This approach establishes a simple agreement:

Application Team Responsibilities:
- Implement /health to return HTTP 200 when the application is ready for traffic
- Define "healthy" based on application needs (database connectivity, dependencies, etc.)
- Maintain the health check logic as the application changes

BIG-IP Team Responsibilities:
- Configure an HTTP monitor targeting the /health endpoint
- Treat 200 as "healthy" and anything else as "unhealthy"

Benefits of This Approach

- Aligned Expertise: Application teams define health based on their knowledge.
- Less Friction: BIG-IP configuration stays stable as applications evolve.
- Better Reliability: Health checks reflect true application health, including dependencies.
- Easier Troubleshooting: The /health endpoint can return detailed diagnostic info in its body, but the BIG-IP ignores this; it is used strictly for troubleshooting.
Implementation Examples

F5 BIG-IP Health Monitor Configuration

ltm monitor http /Common/app-health-monitor {
    defaults-from /Common/http
    destination *:*
    interval 5
    recv 200
    recv-disable none
    send "GET /health HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n"
    time-until-up 0
    timeout 16
}

Node.js Health Endpoint Implementation

const express = require('express');
const app = express();
const port = 3000;

app.get('/', (req, res) => {
  res.send('Application is running');
});

app.get('/health', async (req, res) => {
  try {
    const dbStatus = await checkDatabaseConnection();
    const serviceStatus = await checkDependentServices();
    if (dbStatus && serviceStatus) {
      return res.status(200).json({
        status: 'healthy',
        database: 'connected',
        services: 'available',
        timestamp: new Date().toISOString()
      });
    }
    res.status(503).json({
      status: 'unhealthy',
      database: dbStatus ? 'connected' : 'disconnected',
      services: serviceStatus ? 'available' : 'unavailable',
      timestamp: new Date().toISOString()
    });
  } catch (error) {
    res.status(500).json({
      status: 'error',
      message: error.message,
      timestamp: new Date().toISOString()
    });
  }
});

async function checkDatabaseConnection() {
  // Check the real database connection here
  return true;
}

async function checkDependentServices() {
  // Check required service connections here
  return true;
}

app.listen(port, () => {
  console.log(`Application listening at http://localhost:${port}`);
});

Adopting this health check pattern can greatly reduce friction between NetOps and application teams while improving reliability. The simple contract of HTTP 200 for healthy provides the needed integration while letting each team focus on their expertise. For apps that can't implement a custom /health endpoint, BIG-IP admins can still use traditional ICMP or TCP port monitoring. However, these basic checks can't accurately reflect an app's true health and complex dependencies.

This approach fosters collaboration and leverages the specialized knowledge of both network and application teams. The result is more reliable services and smoother operations.
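One practical verification step before rolling this out: confirm the endpoint answers the exact request the monitor sends, then check how BIG-IP marked the members. A minimal sketch from a shell; the member address 10.1.20.11:3000 and pool name app_pool are hypothetical stand-ins for your environment:

# send the same request the monitor will send and inspect the status line
curl -is "http://10.1.20.11:3000/health" -H "Host: example.com" -H "Connection: close" | head -1
# expect: HTTP/1.1 200 OK

# after attaching the monitor to the pool, confirm member state from the BIG-IP
tmsh show ltm pool app_pool members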
VIPTest: Rapid Application Testing for F5 Environments

VIPTest is a Python-based tool for efficiently testing multiple URLs in F5 environments, allowing quick assessment of application behavior before and after configuration changes. It supports concurrent processing, handles various URL formats, and provides detailed reports on HTTP responses, TLS versions, and connectivity status, making it useful for migrations and routine maintenance.
Passing Arguments to iCall Scripts

iCall is a control-plane automation tool for the BIG-IP platform. There are several articles on overview and implementation details, but lost among them is being clear about how to pass arguments to iCall scripts. A post in the technical forum on disabling multiple other interfaces if one should fail highlighted the fact that the configuration can get pretty bloated if one does not pass data to the script. Here are two ways to do it.

Setting variables in user_alert.conf events

The alert definition supports variable names and values. Here are a few examples:

alert local-http-10-2-80-1-80-DOWN "Pool /Common/my_pool member /Common/10.2.80.1:80 monitor status down" {
    exec command="tmsh generate sys icall event tcpdump context { { name ip value 10.2.80.1 } { name port value 80 } { name vlan value internal } { name count value 20 } }"
}
alert interface_1_1_down "Link: 1.1 is DOWN" {
    exec command="tmsh generate sys icall event interface_manager context { { name action value disabled } { name interface value 1.1 } }"
}

The key/value pair arguments are set in the context of the exec command like so:

{ { name k1 value v1 } { name k2 value v2 } { name k3 value v3 } }

Setting variables in iCall handlers

A second method of setting variables is to do so in the handler definition. Note, however, this is only supported on the periodic handler.

sys icall handler periodic myPeriodicTestHandlerWithArguments {
    arguments {
        { name k1 value v1 }
        { name k2 value v2 }
        { name k3 value v3 }
    }
    interval 30
    script myTestScriptWithArguments
}

Reading the variables in the iCall script

This is where the magic happens! In the iCall script, there's a little snippet to gain access to all that goodness you set in the alerts and/or handlers:

sys icall script myTestScriptWithArguments {
    app-service none
    definition {
        foreach var { k1 k2 k3 } {
            set $var $EVENT::context($var)
        }
        tmsh::log "k1: ${k1}"
        tmsh::log "k2: ${k2}"
        tmsh::log "k3: ${k3}"
    }
    description none
    events none
}

The foreach loop iterates through the names you established in the alert/handler (specified also in the script) and then sets each variable to the context you provided. In this dummy example, I'm just logging it. But let's look at a real example to close the loop.

Use Case: Disable other interfaces when one fails

In the forum thread, the ask was to validate the alerts, handlers, and scripts they had assembled to accomplish disabling multiple interfaces when one fails. That's totally possible without passing arguments, but think about how many objects you need to accomplish it. It's a lot! The only number of objects that doesn't change is how many alert definitions you need in user_alert.conf, but the size of each definition shrinks considerably. Let's start with the user_alert.conf file; I'm limiting it to two interfaces (one failure triggering the other to be disabled) for brevity.
alert interface_1_1_down "Link: 1.1 is DOWN" {
    exec command="tmsh generate sys icall event interface_manager context { { name action value disabled } { name interface value 1.1 } }"
}
alert interface_1_3_down "Link: 1.3 is DOWN" {
    exec command="tmsh generate sys icall event interface_manager context { { name action value disabled } { name interface value 1.3 } }"
}
alert interface_1_1_up "Link: 1.1 is UP" {
    exec command="tmsh generate sys icall event interface_manager context { { name action value enabled } { name interface value 1.1 } }"
}
alert interface_1_3_up "Link: 1.3 is UP" {
    exec command="tmsh generate sys icall event interface_manager context { { name action value enabled } { name interface value 1.3 } }"
}

Pretty simple here. Notice every alert uses the same exec command pattern; I only need to pass the desired action (enabled, disabled) and the failing interface. Now let's look at the handler. This is the easier piece of the puzzle. We only need one triggered handler to call the script.

sys icall handler triggered interface_manager {
    script interface_manager
    subscriptions {
        interface_manager {
            event-name interface_manager
        }
    }
}

Here in the handler, the event-name matches the event specified in the alert, and for consistency, I've named the script that as well. And now the script:

sys icall script interface_manager {
    app-service none
    definition {
        foreach var { action interface } {
            set $var $EVENT::context($var)
        }
        switch ${interface} {
            "1.1" { tmsh::modify /net interface 1.3 ${action} }
            "1.3" { tmsh::modify /net interface 1.1 ${action} }
        }
    }
    description none
    events none
}

You can see at the top of the script definition our little snippet to extract and set the variables for use; a switch statement then modifies the interfaces I want disabled or enabled based on the source interface failure. By passing the action along with the interface, I don't have to have two handlers and two scripts, one for each interface state. You could further optimize by passing, with each source interface failure, a list of the interfaces that should be disabled, then executing a tmsh::modify in a foreach loop; the script is easier to modify programmatically than the user_alert.conf file.

Testing the solution

My first attempt to test the script failed because I disabled the interface administratively rather than having it fail. I had to look up how to "fail" an interface in my BIG-IP VE running on VMware Fusion. It turns out I just have to deactivate the appropriate NIC in the VM settings. Here's a quick one-minute video showing the results of the alert, handler, and script above.
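If deactivating a virtual NIC isn't convenient, you can also exercise the handler and script directly by generating the event by hand with the same context the alert would pass. A sketch, reusing the interface numbers from the alerts above:

# simulate the "Link: 1.1 is DOWN" alert firing
tmsh generate sys icall event interface_manager context { { name action value disabled } { name interface value 1.1 } }

# confirm the paired interface (1.3) was disabled by the script
tmsh show net interface 1.3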
Lightboard Lessons: iCall
In this episode of Lightboard Lessons, I give an introduction to iCall, the built-in event-based BIG-IP control-plane scripting engine.

Resources

- iCall release article
- iCall Codeshare
- iCall Triggers Example with iStats to Invalidate Cache
- iCall Periodic Example Pool Check to Disable Interface
- Passing Arguments to iCall Scripts

iCall Triggers - Invalidating Cache from iRules
iCall is BIG-IP's all-new (as of BIG-IP version 11.4) event-based automation system for the control plane. Previously, I wrote up the iCall system overview, as well as an article on the use of a periodic handler for automating backups. This article will feature the use of the triggered iCall handler to allow a user to submit an HTTP request to invalidate the cache served up for an application managed by the Application Acceleration Manager.

Starting at the End

Before we get to the solution, I'd like to address the use case for invalidating cache. In many cases, the team responsible for an application's health is not the network services team, which is the typical point of access to the BIG-IP. For large organizations with process overhead in generating tickets, invalidating cache can take time. A lot of time. So the request has come in quite frequently: "How can I invalidate cache remotely?" Or even more often, "Can I invalidate cache from an iRule?" Others have approached this via script, and it has been absolutely possible previously with iRules, albeit through very ugly and very-not-recommended ways. In the end, you just need to issue one TMSH command to invalidate the cache for a particular application:

tmsh::modify wam application content-expiration-time now

So how do we get a signal from iRules to instruct BIG-IP to run a TMSH command? This is where iCall trigger handlers come in. Before we hop back to the beginning and discuss the iRule, the process looks like this:

Back to the Beginning

The iStats interface was introduced in BIG-IP version 11 as a way to make data accessible to both the control and data planes. I'll use this to pass the data to the control plane. In this case, the only data I need to pass is to set a key. To set an iStats key, you need to specify:

- Class
- Object
- Measure type (counter, gauge, or string)
- Measure name

I'm not measuring anything, so I'll use a string starting with "WA policy string" and followed by the name of the policy. You can be explicit or allow the users to pass it in a query parameter as I'm doing in this iRule below:

when HTTP_REQUEST {
    if { [HTTP::path] eq "/invalidate" } {
        set wa_policy [URI::query [HTTP::uri] policy]
        if { $wa_policy ne "" } {
            ISTATS::set "WA policy string $wa_policy" 1
            HTTP::respond 200 content "App $wa_policy cache invalidated."
        } else {
            HTTP::respond 200 content "Please specify a policy /invalidate?policy=policy_name"
        }
    }
}

Setting the key this way will allow you to create as many triggers as you have policies. I'll leave it as an exercise for the reader to make that step more dynamic.

Setting the Trigger

With iStats-based triggers, you need linkage to bind the iStats key to an event name, wacache in my case. You can also set thresholds and durations, but again, since I am not measuring anything, that isn't necessary.

sys icall istats-trigger wacache_trigger_istats {
    event-name wacache
    istats-key "WA policy string wa_policy_name"
}

Creating the Script

The script is very simple. Clear the cache with the TMSH command, then remove the iStats key.

sys icall script wacache_script {
    app-service none
    definition {
        tmsh::modify wam application dc.wa_hero content-expiration-time now
        exec istats remove "WA policy string wa_policy_name"
    }
    description none
    events none
}

Creating the Handler

The handler is the glue that binds the event I created in the iStats trigger. When the handler sees an event named wacache, it'll execute the wacache_script iCall script.
sys icall handler triggered wacache_trigger_handler {
    script wacache_script
    subscriptions {
        messages {
            event-name wacache
        }
    }
}

Notes on Testing

Add this command to your arsenal: tmsh generate sys icall event <event-name> context none, where event-name in my case is wacache. This allows you to troubleshoot the handler and script without worrying about the trigger. And this one: tmsh modify sys db log.evrouted.level value Debug. Just note that the default is Notice when you're all done troubleshooting.
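With the trigger, script, and handler in place, the end-to-end test is a plain HTTP request against a virtual server that has the iRule applied. A sketch, where 10.1.10.100 is a hypothetical virtual server address:

# invalidate the cache for the dc.wa_hero policy via the iRule
curl "http://10.1.10.100/invalidate?policy=dc.wa_hero"
# expected body: App dc.wa_hero cache invalidated.

# omitting the parameter returns the usage hint instead
curl "http://10.1.10.100/invalidate"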
BIG-IP Next for Kubernetes Nvidia DPU deployment walkthrough

Introduction

Modern AI factories—hyperscale environments powering everything from generative AI to autonomous systems—are pushing the limits of traditional infrastructure. As these facilities process exabytes of data and demand near-real-time communication between thousands of GPUs, legacy CPUs struggle to balance application logic with infrastructure tasks like networking, encryption, and storage management. Data Processing Units (DPUs) are purpose-built accelerators that offload these housekeeping tasks, freeing CPUs and GPUs to focus on what they do best.

DPUs are specialized system-on-chip (SoC) devices designed to handle data-centric operations such as network virtualization, storage processing, and security enforcement. By decoupling infrastructure management from computational workloads, DPUs reduce latency, lower operational costs, and enable AI factories to scale horizontally.

BIG-IP Next for Kubernetes and Nvidia DPU

Given F5's ability to deliver and secure every app, we needed it to be deployable at multiple levels, a crucial one being the edge and the DPU. Installing F5 BIG-IP Next for Kubernetes on an Nvidia DPU requires installing Nvidia's DOCA framework.

What's DOCA? NVIDIA DOCA is a software development kit for NVIDIA BlueField DPUs. BlueField provides data center infrastructure-on-a-chip, optimized for high-performance enterprise and cloud computing. DOCA is the key to unlocking the potential of the NVIDIA BlueField data processing unit (DPU) to offload, accelerate, and isolate data center workloads. With DOCA, developers can program the data center infrastructure of tomorrow by creating software-defined, cloud-native, GPU-accelerated services with zero-trust protection.

Now, let's explore the BIG-IP Next for Kubernetes components. The solution has two main parts: the Data Plane - Traffic Management Micro-kernel (TMM) and the Control Plane. The Control Plane watches over the Kubernetes cluster and updates the TMM's configuration. The Data Plane (TMM) manages network traffic both entering and leaving the Kubernetes cluster and proxies that traffic to applications running in the cluster. The TMM runs on the BlueField-3 Data Processing Unit (DPU) node, using the DPU resources to handle traffic and freeing up the host (CPU) for applications. The Control Plane can run on the CPU or on other nodes in the Kubernetes cluster, ensuring the DPU stays dedicated to processing traffic.

Use-case examples

Some great use cases were recently released by F5's team based on conversations and work from the field. Let's explore those items:

- Protecting MCP servers with F5 BIG-IP Next for Kubernetes deployed on NVIDIA BlueField-3 DPUs
- LLM routing with dynamic load balancing with F5 BIG-IP Next for Kubernetes deployed on NVIDIA BlueField-3 DPUs
- F5 optimizes GPUs for distributed AI inferencing with NVIDIA Dynamo and KV cache integration
Deployment walk-through

In our demo, we go through the configuration of BIG-IP Next for Kubernetes:

- Main BIG-IP Next for Kubernetes features
- L4 ingress flow
- HTTP/HTTPS ingress flow
- Egress flow
- BGP integration
- Logging and troubleshooting (Qkview, iHealth)

You can find a quick walk-through via BIG-IP Next for Kubernetes - walk-through

Related Content

- BIG-IP Next for Kubernetes - walk-through
- BIG-IP Next for Kubernetes
- BIG-IP Next for Kubernetes and Nvidia DPU-3 walkthrough
- BIG-IP Next for Kubernetes
- F5 BIG-IP Next for Kubernetes deployed on NVIDIA BlueField-3 DPUs
Modernizing F5 Platforms with Ansible
I've been meaning to publish this article for some time now. Over the past few months, I've been building Ansible automation that I believe will help customers modernize their F5 infrastructure. This is especially true for those looking to migrate from legacy BIG-IP hardware to next-generation platforms like VELOS and rSeries.

As I explored tools like F5 Journeys and traditional CLI-based migration methods, I noticed a significant amount of manual pre-work was still required. This includes:

- Ensuring the Master Key used to encrypt the UCS archive is preserved and securely handled
- Storing UCS, Master Key, and information assets on a backup host
- Pre-configuring all VLANs and properly tagging them on the VELOS partition before deploying a Tenant OS

To streamline this, I created an Ansible Playbook with supporting roles tailored for Red Hat Ansible Automation Platform. It's built to perform a lift-and-shift migration of an F5 BIG-IP configuration from one device to another—with optional OS upgrades included. In the demo video below, you'll see an automated migration of an F5 i10800 running 15.1.10 to a VELOS BX110 Tenant OS running 17.5.0—demonstrating a smooth, hands-free modernization process.

Currently Working

VELOS
- VELOS Controller/Partition running F5OS-C 1.8.1, which allows the Tenant Management IP to be in a different VLAN
- Migrates a standalone F5 BIG-IP i10800 to a VELOS BX110 Tenant OS
- VLAN'ed source tenant required (doesn't support non-VLAN tenants)

rSeries
- Shares the MGMT IP subnet with the Chassis Partition
- Migrates a standalone F5 BIG-IP i10800 to an R5000 Tenant OS
- VLAN'ed source tenant required (doesn't support non-VLAN tenants)

Handles:
- Configuration and crypto backup
- UCS creation, transfer, and validation
- F5OS system VLAN creation and association to the tenant (does not manage interface-to-VLAN mapping)
- F5OS tenant provisioning and deployment
- Inline OS upgrades during the migration

Roadmap / What's Next
- Expanding testing to include Viprion/iSeries (vCMP) tenants
- Supporting hardware-to-virtual platform migrations
- Adding functionality for HA (High Availability) environments

Watch the Demo Video

View the Source Code on GitHub
https://github.com/f5devcentral/f5-bd-ansible-platform-modernization

This project is built for the community—so feel free to take it, fork it, and expand it. Let's make F5 platform modernization as seamless and automated as possible.
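If you want to try it out, running the automation looks like any other playbook invocation. This is a sketch only; the inventory, playbook, and variable names below are hypothetical placeholders, so check the repository's README for the actual entry point and required variables:

# clone the project and run the migration (illustrative names, not the repo's real files)
git clone https://github.com/f5devcentral/f5-bd-ansible-platform-modernization.git
cd f5-bd-ansible-platform-modernization
ansible-playbook -i inventory.yml migrate.yml \
  -e "source_bigip=10.1.1.10 velos_partition=partition1 tenant_name=tenant1"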
Using Aliases to launch F5 AMI Images in AWS Marketplace
F5 lists 82 product offerings in the AWS Marketplace as Amazon Machine Images (AMI). Each version of each product in each AWS Region has a different AMI. That's around 22,000 images! Each AMI is identified by an AMI ID. You use the AMI ID to indicate which AMI you want to use when launching an F5 product. You can find AMI IDs using the AWS Web Console, but the AWS CLI is the best tool for the job.

Searching for AMIs using the AWS CLI

Here's how you find the AMI IDs for version 17.5.1.2 of BIG-IP Virtual Edition in the us-east-1 AWS region:

aws ec2 describe-images --owners aws-marketplace \
  --filters 'Name=name,Values=F5 BIGIP-17.5.1.2*' \
  --query "sort_by(Images,&Name)[:].{Description: Description, Id: ImageId}" \
  --region us-east-1 --output table

----------------------------------------------------------------------------------------------------
|                                          DescribeImages                                           |
+------------------------------------------------------------------------+-------------------------+
| Description                                                            | Id                      |
+------------------------------------------------------------------------+-------------------------+
| F5 BIGIP-17.5.1.2-0.0.5 BYOL-All Modules 1Boot Loc-250916013758        | ami-0948eabdf29ef2a8f   |
| F5 BIGIP-17.5.1.2-0.0.5 BYOL-All Modules 2Boot Loc-250916015535        | ami-0cb3aaa67967ad029   |
| F5 BIGIP-17.5.1.2-0.0.5 BYOL-LTM 1Boot Loc-250916013616                | ami-05d70b82c9031ff39   |
| F5 BIGIP-17.5.1.2-0.0.5 BYOL-LTM 2Boot Loc-250916014744                | ami-0b6021cc939308f3e   |
| F5 BIGIP-17.5.1.2-0.0.5 BYOL-encrypted-threat-protection-250916015535  | ami-01f4fde300d3763be   |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-AWF Plus 16vCPU-250916015534              | ami-015474056159387ac   |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Adv WAF Plus 200Mbps-250916015522         | ami-06ce5b03dce2a059d   |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Adv WAF Plus 25Mbps-250916015520          | ami-0826808708df97480   |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Adv WAF Plus 3Gbps-250916015523           | ami-08c63c8f7ca71cf37   |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Best Plus 10Gbps-250916015532             | ami-0e806ef17838760e4   |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Best Plus 1Gbps-250916015530              | ami-05e31c2a0ac9ec050   |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Best Plus 200Mbps-250916015528            | ami-02dc0995af98d0710   |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Best Plus 25Mbps-250916015527             | ami-08b8f2daefde800e9   |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Best Plus 5Gbps-250916015531              | ami-0d16154bb1102f3e9   |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Better 10Gbps-250916015512                | ami-05c9527fff191feba   |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Better 1Gbps-250916015510                 | ami-05ce2932601070d5c   |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Better 200Mbps-250916015508               | ami-0f6044db3900ba46f   |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Better 25Mbps-250916014542                | ami-0de57aba160170358   |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Better 5Gbps-250916015511                 | ami-04271103ab2d1369d   |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Good 10Gbps-250916014739                  | ami-0d06d2a097d7bb47a   |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Good 1Gbps-250916014737                   | ami-01707e969ebcc6138   |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Good 200Mbps-250916014735                 | ami-06f9a44562d94f992   |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Good 25Mbps-250916013626                  | ami-0aa2bca574c66af13   |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Good 5Gbps-250916014738                   | ami-01951e02c52deef85   |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-PVE Adv WAF Plus 200Mbps-0916015525       | ami-03df50dfc04f19df5   |
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-PVE Adv WAF Plus 25Mbps-50916015524       | ami-0777c069eaae20ea1   |
+------------------------------------------------------------------------+-------------------------+

This command shows all 17.5.1* releases of the "PayGo Good 1Gbps" flavor of BIG-IP in the us-west-1
region, sorted by newest release first:

aws ec2 describe-images --owners aws-marketplace \
  --filters 'Name=name,Values=F5 BIGIP-17.5.1*PAYG-Good 1Gbps*' \
  --query "reverse(sort_by(Images,&CreationDate))[:].{Description: Name, Id: ImageId, date: CreationDate}" \
  --region us-west-1 --output table

------------------------------------------------------------------------------------------------------------------------------------
|                                                           DescribeImages                                                           |
+--------------------------------------------------------------------------------------------+------------------------+------------+
| F5 BIGIP-17.5.1.2-0.0.5 PAYG-Good 1Gbps-250916014737-7fb2f9db-2a12-4915-9abb-045b6388cccd  | ami-0de8ca1229be5f7fe  | 2025-09-16T23:12:28.000Z |
| F5 BIGIP-17.5.1-0.80.7 PAYG-Good 1Gbps-250811055424-7fb2f9db-2a12-4915-9abb-045b6388cccd   | ami-09afcec6f36494382  | 2025-08-15T19:03:23.000Z |
| F5 BIGIP-17.5.1-0.0.7 PAYG-Good 1Gbps-250618090310-7fb2f9db-2a12-4915-9abb-045b6388cccd    | ami-03e389e112872fd53  | 2025-07-01T06:00:44.000Z |
+--------------------------------------------------------------------------------------------+------------------------+------------+

Notice that the same BIG-IP VE release has a different AMI ID in each AWS region. Attempting to launch a product in one region using an AMI ID from a different region will fail. This causes a problem when a shell script or automation tool is used to launch new EC2 instances: the AMI IDs have been hardcoded for one region, and you attempt to use the script in another. Wouldn't it be nice to have a single AMI identifier that works in all AWS regions?

Introducing AMI Aliases

The AMI Alias is an identifier similar to the AMI ID, but it's easier to use in automation. An AMI alias has the form /aws/service/marketplace/prod-<identifier>/<version>. For example, "PayGo Good 1Gbps" is:

/aws/service/marketplace/prod-s6e6miuci4yts/17.5.1.2-0.0.5

You can use this AMI Alias ID in any Region, and AWS automatically maps it to the correct Regional AMI ID.
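You can also resolve an alias yourself to see which regional AMI ID it maps to. Since the resolve:ssm: syntax shown later reads a public SSM parameter, querying that parameter directly should return the same ID; a sketch using the "PayGo Good 1Gbps" alias from the table below:

aws ssm get-parameters \
  --names "/aws/service/marketplace/prod-s6e6miuci4yts/17.5.1.2-0.0.5" \
  --query "Parameters[0].Value" --output text --region us-west-1
# returns the regional AMI ID, e.g. ami-0de8ca1229be5f7fe in us-west-1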
BIG-IP AMI Alias Identifiers

F5 Advanced WAF with LTM, IPI, and Threat Campaigns (PAYG, 16vCPU): prod-qqgc2ltsirpio
F5 Advanced WAF with LTM, IPI, and Threat Campaigns (PAYG, 200Mbps): prod-yajbds56coa24
F5 Advanced WAF with LTM, IPI, and Threat Campaigns (PAYG, 25Mbps): prod-qiufc36l6sepa
F5 Advanced WAF with LTM, IPI, and Threat Campaigns (PAYG, 3Gbps): prod-fp5qrfirjnnty
F5 BIG-IP BEST with IPI and Threat Campaigns (PAYG, 10Gbps): prod-w2p3rtkjrjmw6
F5 BIG-IP BEST with IPI and Threat Campaigns (PAYG, 1Gbps): prod-g3tye45sqm5d4
F5 BIG-IP BEST with IPI and Threat Campaigns (PAYG, 200Mbps): prod-dnpovgowtyz3o
F5 BIG-IP BEST with IPI and Threat Campaigns (PAYG, 25Mbps): prod-wjoyowh6kba46
F5 BIG-IP BEST with IPI and Threat Campaigns (PAYG, 5Gbps): prod-hlx7g47cksafk
F5 BIG-IP VE - ALL (BYOL, 1 Boot Location): prod-zvs3u7ov36lig
F5 BIG-IP VE - ALL (BYOL, 2 Boot Locations): prod-ubfqxbuqpsiei
F5 BIG-IP VE - LTM/DNS (BYOL, 1 Boot Location): prod-uqhc6th7ni37m
F5 BIG-IP VE - LTM/DNS (BYOL, 2 Boot Locations): prod-o7jz5ohvldaxg
F5 BIG-IP Virtual Edition - BETTER (PAYG, 10Gbps): prod-emsxkvkzwvs3o
F5 BIG-IP Virtual Edition - BETTER (PAYG, 1Gbps): prod-4idzu4qtdmzjg
F5 BIG-IP Virtual Edition - BETTER (PAYG, 200Mbps): prod-firaggo6h7bt6
F5 BIG-IP Virtual Edition - BETTER (PAYG, 25Mbps): prod-wijbh7ib34hyy
F5 BIG-IP Virtual Edition - BETTER (PAYG, 5Gbps): prod-rfglxslpwq64g
F5 BIG-IP Virtual Edition - GOOD (PAYG, 10Gbps): prod-54qdbqglgkiue
F5 BIG-IP Virtual Edition - GOOD (PAYG, 1Gbps): prod-s6e6miuci4yts
F5 BIG-IP Virtual Edition - GOOD (PAYG, 200Mbps): prod-ynybgkyvilzrs
F5 BIG-IP Virtual Edition - GOOD (PAYG, 25Mbps): prod-6zmxdpj4u4l5g
F5 BIG-IP Virtual Edition - GOOD (PAYG, 5Gbps): prod-3ze6zaohqssua
F5 BIG-IQ Virtual Edition - (BYOL): prod-igv63dkxhub54
F5 Encrypted Threat Protection: prod-bbtl6iceizxoi
F5 Per-App-VE Advanced WAF with LTM, IPI, TC (PAYG, 200Mbps): prod-gkzfxpnvn53v2
F5 Per-App-VE Advanced WAF with LTM, IPI, TC (PAYG, 25Mbps): prod-qu34r4gipys4s

NGINX Plus Alias Identifiers

NGINX Plus Basic - Amazon Linux 2 (LTS) AMI: prod-jhxdrfyy2jtva
NGINX Plus Developer - Amazon Linux 2 (LTS): prod-kbeepohgkgkxi
NGINX Plus Developer - Amazon Linux 2 (LTS) ARM Graviton: prod-vulv7pmlqjweq
NGINX Plus Developer - Amazon Linux 2023: prod-2zvigd3ltowyy
NGINX Plus Developer - Amazon Linux 2023 ARM Graviton: prod-icspnobisidru
NGINX Plus Developer - RHEL 8: prod-tquzaepylai4i
NGINX Plus Developer - RHEL 9: prod-hwl4zfgzccjye
NGINX Plus Developer - Ubuntu 22.04: prod-23ixzkz3wt5oq
NGINX Plus Developer - Ubuntu 24.04: prod-tqr7jcokfd7cw
NGINX Plus FIPS Premium - RHEL 9: prod-v6fhyzzkby6c2
NGINX Plus Premium - Amazon Linux 2 (LTS) AMI: prod-4dput2e45kkfq
NGINX Plus Premium - Amazon Linux 2 (LTS) ARM Graviton: prod-56qba3nacijjk
NGINX Plus Premium - Amazon Linux 2023: prod-w6xf4fmhpc6ju
NGINX Plus Premium - Amazon Linux 2023 ARM Graviton: prod-e2iwqrpted4kk
NGINX Plus Premium - RHEL 8 AMI: prod-m2v4zstxasp6s
NGINX Plus Premium - RHEL 9: prod-rytmqzlxdneig
NGINX Plus Premium - Ubuntu 22.04: prod-dtm5ujpv7kkro
NGINX Plus Premium - Ubuntu 24.04: prod-opg2qh33mi4pk
NGINX Plus Standard - Amazon Linux 2 (LTS) AMI: prod-mdgdnfftmj7se
NGINX Plus Standard - Amazon Linux 2 (LTS) ARM Graviton: prod-2kagbnj7ij6zi
NGINX Plus Standard - Amazon Linux 2023: prod-i25cyug3btfvk
NGINX Plus Standard - Amazon Linux 2023 ARM Graviton: prod-6s5rvlqlgrt74
NGINX Plus Standard - RHEL 8: prod-ebhpntvlfwluc
NGINX Plus Standard - RHEL 9: prod-3e7rk2ombbpfa
NGINX Plus Standard - Ubuntu 22.04: prod-7rhflwjy5357e
NGINX Plus Standard - Ubuntu 24.04: prod-b4rly35ct3dlc
NGINX Plus with NGINX App Protect Developer - Amazon Linux 2: prod-pjmfzy5htmaks
NGINX Plus with NGINX App Protect Developer - Debian 11: prod-ixsytlu2eluqa
NGINX Plus with NGINX App Protect Developer - RHEL 8: prod-6v57ggy3dqb6c
NGINX Plus with NGINX App Protect Developer - Ubuntu 20.04: prod-4a4g7h7mpepas
NGINX Plus with NGINX App Protect DoS Developer - Amazon Linux 2023: prod-fmqayhbsryoz2
NGINX Plus with NGINX App Protect DoS Developer - Debian 11: prod-4e5fwakhrn36y
NGINX Plus with NGINX App Protect DoS Developer - RHEL 8: prod-ubid75ixhf34a
NGINX Plus with NGINX App Protect DoS Developer - RHEL 9: prod-gg7mi5njfuqcw
NGINX Plus with NGINX App Protect DoS Developer - Ubuntu 20.04: prod-qiwzff7orqrmy
NGINX Plus with NGINX App Protect DoS Developer - Ubuntu 22.04: prod-h564ffpizhvic
NGINX Plus with NGINX App Protect DoS Developer - Ubuntu 24.04: prod-wckvpxkzj7fvk
NGINX Plus with NGINX App Protect DoS Premium - Amazon Linux 2023: prod-lza5c4nhqafpk
NGINX Plus with NGINX App Protect DoS Premium - Debian 11: prod-ych3dq3r44gl2
NGINX Plus with NGINX App Protect DoS Premium - RHEL 8: prod-266ker45aot7g
NGINX Plus with NGINX App Protect DoS Premium - RHEL 9: prod-6qrqjtainjlaa
NGINX Plus with NGINX App Protect DoS Premium - Ubuntu 20.04: prod-hagmbnluc5zmw
NGINX Plus with NGINX App Protect DoS Premium - Ubuntu 22.04: prod-y5iwq6gk4x4yq
NGINX Plus with NGINX App Protect DoS Premium - Ubuntu 24.04: prod-k3cb7avaushvq
NGINX Plus with NGINX App Protect Premium - Amazon Linux 2: prod-tlghtvo66zs5u
NGINX Plus with NGINX App Protect Premium - Debian 11: prod-6kfdotc3mw67o
NGINX Plus with NGINX App Protect Premium - RHEL 8: prod-okwnxdlnkmqhu
NGINX Plus with NGINX App Protect Premium - Ubuntu 20.04: prod-5wn6ltuzpws4m
NGINX Plus with NGINX App Protect WAF + DoS Premium - Amazon Linux 2023: prod-mualblirvfcqi
NGINX Plus with NGINX App Protect WAF + DoS Premium - Debian 11: prod-k2rimvjqipvm2
NGINX Plus with NGINX App Protect WAF + DoS Premium - RHEL 8: prod-6nlubep3hg4go
NGINX Plus with NGINX App Protect WAF + DoS Premium - Ubuntu 18.04: prod-f2diywsozd22m
NGINX Plus with NGINX App Protect WAF + DoS Premium - Ubuntu 20.04: prod-ajcsh5wsfuen2
NGINX Plus with NGINX App Protect WAF + DoS Premium - Ubuntu 22.04: prod-6adjgf6yl7hek
NGINX Plus with NGINX App Protect WAF + DoS Premium - Ubuntu 24.04: prod-autki7guiiqio

Using AMI Aliases for BIG-IP

The following example shows using an AMI alias to launch a new "F5 BIG-IP Virtual Edition - GOOD (PAYG, 1Gbps)" instance, version 17.5.1.2-0.0.5, by using the AWS CLI:

aws ec2 run-instances \
  --image-id resolve:ssm:/aws/service/marketplace/prod-s6e6miuci4yts/17.5.1.2-0.0.5 \
  --instance-type m5.xlarge --key-name MyKeyPair

The next example shows a CloudFormation template that accepts the AMI alias as an input parameter to create an instance:

AWSTemplateFormatVersion: 2010-09-09
Parameters:
  AmiAlias:
    Description: AMI alias
    Type: 'String'
Resources:
  MyEC2Instance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Sub "resolve:ssm:${AmiAlias}"
      InstanceType: "g4dn.xlarge"
      Tags:
        - Key: "Created from"
          Value: !Ref AmiAlias

Using AMI Aliases for NGINX Plus

NGINX Plus images in the AWS Marketplace are not version specific, so just use "latest" as the version to launch. For example, this will launch NGINX Plus Premium on Ubuntu 24.04:

aws ec2 run-instances \
  --image-id resolve:ssm:/aws/service/marketplace/prod-opg2qh33mi4pk/latest \
  --instance-type c5.large --key-name MyKeyPair

Finding AMI Aliases in AWS Marketplace

AMI aliases are new to the AWS Marketplace, so not all products have them.
To locate the alias for an AMI you use often, you need to resort to the AWS Marketplace web console. Here are the step-by-step instructions provided by Amazon:

1. Navigate to AWS Marketplace
- Go to AWS Marketplace
- Sign in to your AWS account

2. Find and Subscribe to the Product
- Search for or browse to find your desired product
- Click on the product listing
- Click "Continue to Subscribe"
- Accept the terms and subscribe to the product

3. Configure the Product
- After subscribing, click "Continue to Configuration"
- Select your desired Delivery Method (if multiple options are available), Software Version, and Region

4. Locate the AMI Alias
- At the bottom of the configuration page, you'll see:
  AMI ID: ami-1234567890EXAMPLE
  AMI Alias: /aws/service/marketplace/prod-<identifier>/<version>

New Tools for Your AMI Hunt

In this article, we focused on using AMI Aliases to select the right F5 product to launch in AWS EC2. But there's one more takeaway. Scroll back up to the top of this page and take a closer look at the "aws ec2 describe-images" commands. These commands use JMESPath to filter, sort, and format the output. Find out more about filtering the output of AWS CLI commands here.
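For example, trimming the query from the earlier describe-images commands to return only the single newest image makes the result drop-in ready for scripts:

# grab just the latest PAYG-Good 1Gbps AMI ID in us-west-1
aws ec2 describe-images --owners aws-marketplace \
  --filters 'Name=name,Values=F5 BIGIP-17.5.1*PAYG-Good 1Gbps*' \
  --query "reverse(sort_by(Images,&CreationDate))[0].ImageId" \
  --region us-west-1 --output text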