VIPTest: Rapid Application Testing for F5 Environments
VIPTest is a Python-based tool for efficiently testing multiple URLs in F5 environments, allowing quick assessment of application behavior before and after configuration changes. It supports concurrent processing, handles various URL formats, and provides detailed reports on HTTP responses, TLS versions, and connectivity status, making it useful for migrations and routine maintenance.
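To make that concrete, here is a minimal standard-library sketch of the kind of concurrent check a tool like VIPTest performs. This is not the VIPTest source; the URLs, worker count, and relaxed certificate handling are assumptions for a lab setting.

import concurrent.futures
import socket
import ssl
import urllib.request
from urllib.parse import urlparse

URLS = ["https://example.com", "https://www.f5.com"]  # placeholders; use your own VIPs

def check(url):
    parsed = urlparse(url)
    host, port = parsed.hostname, parsed.port or 443
    ctx = ssl.create_default_context()
    ctx.check_hostname = False   # lab-friendly; keep verification on in production
    ctx.verify_mode = ssl.CERT_NONE
    try:
        # Negotiate TLS once just to read the protocol version
        with socket.create_connection((host, port), timeout=5) as raw:
            with ctx.wrap_socket(raw, server_hostname=host) as tls:
                tls_version = tls.version()
        # Then issue the HTTP request for status and basic connectivity
        with urllib.request.urlopen(url, context=ctx, timeout=5) as resp:
            return url, resp.status, tls_version, "ok"
    except Exception as exc:
        return url, None, None, f"failed: {exc}"

with concurrent.futures.ThreadPoolExecutor(max_workers=10) as pool:
    for url, status, tls, state in pool.map(check, URLS):
        print(f"{url}  status={status}  tls={tls}  {state}")

Running a sweep like this before and after a configuration change, then diffing the two reports, is the general workflow the tool supports.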
BIG-IP Next Installation Guides
The article count on F5 BIG-IP Next installation guides here on DevCentral is growing! These resources will walk you through the initial steps of getting Central Manager and instances installed on the various platforms for labs and production. I'll keep this updated as new guides are released. The official documentation on clouddocs can be referenced as well.

VMware
Central Manager on Fusion
Central Manager on ESXi
Instances on Fusion
Instances on ESXi
Instances via the vSphere provider in Central Manager

KVM
Central Manager and instances on Nutanix Community Edition
Central Manager and instances on Proxmox Virtual Environment
Streamlining BIG-IP Next Deployments: Automate with CI/CD Pipelines Using Terraform Cloud and GitHub
Automation is key to maintaining efficiency and consistency in today's fast-paced IT environment. In this article, I will demonstrate how to automate the deployment of BIG-IP Next configurations using Terraform Cloud and GitHub. By integrating AS3 JSON and Terraform configuration code, you can ensure that any changes made in your GitHub repository automatically trigger Terraform Cloud to deploy the updated configurations to your BIG-IP Next instance via the BIG-IP Next Central Manager.

Key Players:
BIG-IP Next: Your powerful application delivery controller, offering advanced features for load balancing, security, and more.
BIG-IP Next Central Manager: The brain of your BIG-IP Next deployment, orchestrating and managing all your BIG-IP Next instances.
BIG-IP Next Terraform resources: A powerful interface allowing programmatic control over your BIG-IP configuration, simplifying automation.
Terraform Cloud: A robust platform for infrastructure-as-code, providing version control, collaboration, and powerful automation tools.
GitHub: A popular version control system for collaborative software development, where your Terraform configuration files will reside.
Terraform Agent: A local agent installed on a dedicated VM in your private data center that acts as a bridge between Terraform Cloud and your BIG-IP Next instances.

The Workflow:
Define your Infrastructure in GitHub: Using the Terraform resources documented at https://clouddocs.f5.com/products/orchestration/terraform/latest/BIG-IP-Next/big-ip-next-index.html#release-notes, you describe your desired BIG-IP Next configuration in code (e.g., creating virtual servers, pools, monitors, and other application services). Store your Terraform code in a GitHub repository.
Configure Terraform Cloud: Set up a workspace in Terraform Cloud and link it to your GitHub repository. Configure a VCS trigger to automatically initiate a Terraform plan and apply when changes are made to your code in GitHub.
Install and Configure Terraform Agent: Set up a VM in your private data center, run Ubuntu, and install the Terraform Agent. Configure the agent to connect to your Terraform Cloud workspace.
Automatic Configuration: When you push changes to your Terraform code in GitHub, Terraform Cloud detects the update, triggers a Terraform plan, and sends it to the Terraform Agent. The agent then communicates with your BIG-IP Next Central Manager to implement the necessary changes on your BIG-IP Next instances.

Benefits:
Simplified Management: No more manual configuration and tedious updates! Terraform Cloud automates deployment, reducing errors and ensuring consistency across your BIG-IP Next environment.
Increased Efficiency: Spend less time on repetitive tasks and focus on building and deploying applications faster.
Collaboration and Version Control: Work collaboratively with your team, track changes, and easily revert to previous configurations using GitHub's robust version control capabilities.
Scalability and Flexibility: Terraform Cloud seamlessly scales to manage large and complex environments, providing flexibility and adaptability for your growing needs.

Getting Started:
Set up GitHub Repository: Create a repository in GitHub and store your Terraform configuration files there. You can clone the GitHub repository from https://github.com/f5bdscs/example-AS3.git and begin working on it.
terraform {
  required_providers {
    bigipnext = {
      source  = "F5Networks/bigipnext"
      version = "1.2.0"
    }
  }
  cloud {
    organization = "39nX-example"
    workspaces {
      name = "39nX-example"
    }
  }
}

variable "host" {}
variable "username" {}
variable "password" {}

provider "bigipnext" {
  username = var.username
  password = var.password
  host     = var.host
}

resource "bigipnext_cm_as3_deploy" "test" {
  target_address = "10.1.1.10"
  as3_json       = file("as3.json")
}

Explanation:
Terraform Block: Defines the required provider bigipnext with source and version, and specifies the cloud organization and workspace name.
Variable Declarations: host, username, and password are declared as input variables.
Provider Configuration: Uses the input variables for username, password, and host.
Resource Definition: The bigipnext_cm_as3_deploy resource deploys the as3_json file to the target_address.

Make sure to create and populate the as3.json file with the necessary AS3 declarations. Also, ensure you provide values for host, username, and password when running the Terraform commands.

{
  "class": "ADC",
  "schemaVersion": "3.45.0",
  "id": "example-declaration-01",
  "label": "Sample 1",
  "remark": "Simple HTTP application with round robin pool",
  "next-cm-tenant01": {
    "class": "Tenant",
    "EXAMPLE_APP": {
      "class": "Application",
      "template": "http",
      "serviceMain": {
        "class": "Service_HTTP",
        "virtualAddresses": ["10.1.20.10"],
        "pool": "next-cm-pool01"
      },
      "next-cm-pool01": {
        "class": "Pool",
        "monitors": ["http"],
        "members": [
          {
            "servicePort": 8080,
            "serverAddresses": ["10.1.20.4"]
          }
        ]
      }
    }
  }
}

Configure Terraform Cloud: Create a workspace, link it to your GitHub repository, and set up a VCS trigger to activate plans and apply changes. Please follow the guide at https://developer.hashicorp.com/terraform/tutorials/cloud-get-started/cloud-vcs-change to integrate Terraform Cloud with your GitHub repository.
Install and Configure Terraform Agent: Set up a VM in your private data center, install the Terraform Agent, and configure it to connect to your Terraform Cloud workspace. Please follow the guide at https://developer.hashicorp.com/terraform/tutorials/cloud/cloud-agents to install the Terraform Cloud agent.
Deploy your configuration: Push your code to GitHub and watch as Terraform Cloud automatically updates your BIG-IP Next instances. You can watch the Demonstration Video here: https://youtu.be/0xEtj-jAepE
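A small optional addition of my own, not part of the original walkthrough: because a malformed as3.json only surfaces once Terraform Cloud runs the plan, a quick local sanity check before pushing can save a pipeline cycle. A minimal Python sketch, assuming the file sits in the repository root:

import json
import sys

# Hypothetical pre-commit check, not part of the article's workflow: confirm
# as3.json parses and looks like an AS3/ADC declaration before pushing.
try:
    with open("as3.json") as fh:
        decl = json.load(fh)
except (OSError, json.JSONDecodeError) as exc:
    sys.exit(f"as3.json problem: {exc}")

if decl.get("class") != "ADC":
    sys.exit("top-level class is not 'ADC'; is this really an AS3 declaration?")

print(f"as3.json parsed OK (schemaVersion {decl.get('schemaVersion', 'unknown')})")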
BIG-IP Next Central Manager API with Postman
In my last article I dove into the Central Manager AS3 endpoints with the cURL command. As I was preparing for this one, I thought it would work better as a live stream than a traditional article. Here's the stream you can watch in the replay, and the resources I mentioned on the stream are posted below.

Show description: I've been working with the BIG-IP Next API from the API reference and with curl on the command line, and I gotta tell you, as much as I don't love Postman, it's super handy when learning an API. In this episode of DevCentral Connects, I'll download the collection from the Next documentation, get the environment variables set up, and walk through some of the tasks available in the collection to start working with the BIG-IP Next API.

Resources
BIG-IP Next Articles on DevCentral
BIG-IP Next Academy group on DevCentral
Embracing AS3: Foundations
BIG-IP Next automation: AS3 basics
BIG-IP Next automation: Working with the AS3 endpoints
20.0 Postman collection
20.1 Postman collection
20.2 Postman collection
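If you would rather script the same flow than click through Postman, here is a rough Python sketch of the login-then-call pattern the collection uses. The Central Manager address and credentials are placeholders, and the endpoint paths and response field names are assumptions based on the 20.x API reference; confirm them against the Postman collection for your version.

import requests
import urllib3

urllib3.disable_warnings()  # lab only; Central Manager often has a self-signed cert

CM = "https://cm.example.com"                   # assumption: your Central Manager address
CREDS = {"username": "admin", "password": "your-password"}

s = requests.Session()
s.verify = False                                # lab only; validate certificates in production

# Authenticate and keep the bearer token for subsequent calls
login = s.post(f"{CM}/api/login", json=CREDS)
login.raise_for_status()
s.headers["Authorization"] = f"Bearer {login.json()['access_token']}"

# Example follow-up call: list AS3 application documents (path is an assumption)
docs = s.get(f"{CM}/api/v1/spaces/default/appsvcs/documents")
docs.raise_for_status()
print(docs.json())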
Getting Started with BIG-IP Next: Fundamentals
In the first article in this series, I introduced BIG-IP Next at the 50,000-foot (or meter for the saner parts of the world...) level. In this article, I will get closer to the brass tacks of tackling some technical tasks, but still hover over the trenches so I can lay a little more groundwork into the components of BIG-IP Next: Central Manager and instances.

Central Manager
The Central Manager is the brains of the operation, and aptly named since it is the centralized location where most management tasks regarding BIG-IP Next instances will coalesce. Gone are the days of logging into BIG-IP devices. It won't be supported! Also gone are the days of creating a node to create a pool and creating some profiles and iRules and snat pools and then slapping all that together on a virtual server. That's not to say that some shared objects won't exist--they will, or at least they can. In classic BIG-IP, the virtual server was the "top dog" from an object perspective unless you have already used iApps or AS3 declarations, in which case those options are similar to what we have with BIG-IP Next, where the application service wears the crown. Everything about that application service is defined within that context, including multiple virtual servers where necessary. That will be done in the GUI via application templates, or via the API with AS3 directly or via FAST templates. The included http application template in the Central Manager GUI allows for a lot of checkbox functionality, but accessing some of the functionality you may be used to will require additional or edited templates.

Beyond managing the instances and the application services, you'll also be able to manage your security policies, attack and bot signature security service updates, and monitor and report on deployed policies. And of course, you'll be able to manage users and perform maintenance on the Central Manager system itself. There is no license required for Central Manager; you can download it now and get started with your discovery as soon as you're ready! I have it installed on my iMac in VMware Fusion currently, and I'll be writing articles in the next couple of weeks on installation for Fusion and ESXi.

Instances
Whereas Central Manager is the brain of the BIG-IP Next operation, the instances are the brawn. They can take the form of a tenant on F5 VELOS or rSeries hardware, a KVM and/or VMware Virtual Edition for private clouds, and, coming soon, a Virtual Edition on select public clouds. (Note: Instances can also take the form of CNFs in headless Kubernetes deployments, but that won't be addressed in this series.) Onboarding instances is not as complex a process as setting up classic BIG-IP because day one operations are not intermingled with day two and beyond. You define the CPU, memory, disk, and network resources you need depending on what modules you're licensing for use and fire it up. Once that candle is lit, you run through a few onboarding steps with either a Postman collection or an onboarding script that walks through those steps for you. That's it for setup on the instances; the rest of the process is managed on Central Manager. Limited access will be available on instances for troubleshooting through a sidecar proxy, but even that is configured and managed through Central Manager. Instances are licensed. Make sure to check with your account team; you might already be entitled to BIG-IP Next licensing, but a conversion transaction will be necessary.
For lab discovery, you can generate a trial license on MyF5 to get started! I'll cover installation on KVM, Fusion, and ESXi in the next couple of weeks. Leon Seng has already written up installing a BIG-IP Next instance on Proxmox!

"Next" Up
Alrighty then! Enough talk, Jason, let's do something! I hear you, I hear you...starting next week, I'll be releasing incremental steps into the installation, onboarding, licensing, upgrading, backup/restore, etc., of both the Central Manager and the instances. Here's the general workflow I'll follow: ignore the platform; I'll step through all the supported versions I have access to and keep placeholders to circle back as more platforms are supported. I hope to see you all at AppWorld, but if not, don't be a stranger here on DevCentral, reach out any time!
Create F5 BIG-IP Next Instance on Proxmox Virtual Environment
If you are looking to deploy an F5 BIG-IP Next instance on Proxmox Virtual Environment (henceforth referred to as Proxmox for the sake of brevity), perhaps in your home lab, here's how:

First, download the BIG-IP Next Central Manager and BIG-IP Next QCOW files from MyF5 Downloads. Click on "Copy Download Link".

Copy the QCOW files to your Proxmox host. I am using the download links from above in the example below.

proxmox $ curl -O -L -J [link for Central Manager from F5 downloads]
proxmox $ curl -O -L -J [link for Next from F5 downloads]

On the Proxmox host, extract the contents of the QCOW files. You will need to rename the Central Manager file from .qcow to .qcow2.

proxmox $ cd ~/
proxmox $ mv BIG-IP-Next-CentralManager-20.2.1-0.3.25.qcow BIG-IP-Next-CentralManager-20.2.1-0.3.25.qcow2
proxmox $ tar -zxvf BIG-IP-Next-20.2.1-2.430.2+0.0.48.qcow2.tar.gz
BIG-IP-Next-20.2.1-2.430.2+0.0.48.qcow2
BIG-IP-Next-20.2.1-2.430.2+0.0.48.qcow2.sha512
BIG-IP-Next-20.2.1-2.430.2+0.0.48.qcow2.sha512.sig
BIG-IP-Next-20.2.1-2.430.2+0.0.48.qcow2.sha512sum.txt.asc
BIG-IP-Next-20.2.1-F5-ca-bundle.cert
BIG-IP-Next-20.2.1-F5-certificate.cert

Then, run the commands below to create virtual machines (VMs) from the extracted QCOW files. Replace the values to match your environment.

#
# Central Manager
#
# use either the DHCP or static IP example
#
# using DHCP (change values to match your environment)
proxmox $ qm create 105 --memory 16384 --sockets 1 --cores 8 --net0 virtio,bridge=vmbr0 --name my-central-manager --scsihw=virtio-scsi-single --ostype=l26 --cpu=x86-64-v2-AES --citype nocloud --ipconfig0 ip=dhcp --ciupgrade=0 --ide2=local-lvm:cloudinit

# static IP (change values to match your environment)
# proxmox $ qm create 105 --memory 16384 --sockets 1 --cores 8 --net0 virtio,bridge=vmbr0 --net1 virtio,bridge=vmbr1 --name my-central-manager --scsihw=virtio-scsi-single --ostype=l26 --cpu=x86-64-v2-AES --citype nocloud --ipconfig0 ip=192.168.1.5/24,gw=192.168.1.1 --nameserver 192.168.1.1 --ciupgrade=0 --ide2=local-lvm:cloudinit

# import disk
qm set 105 --virtio0 local-lvm:0,import-from=/root/BIG-IP-Next-CentralManager-20.2.1-0.3.25.qcow2 --boot order=virtio0

#
# Next instance
#
# Note that you need at least two interfaces, one for management and one for data-plane
#
# use either the DHCP or static IP example
#
# DHCP
proxmox $ qm create 107 --memory 16384 --sockets 1 --cores 8 --net0 virtio,bridge=vmbr0 --net1 virtio,bridge=vmbr1 --name my-next-instance --scsihw=virtio-scsi-single --ostype=l26 --cpu=x86-64-v2-AES --citype nocloud --ipconfig0 ip=dhcp --ciupgrade=0 --ciuser=admin --cipassword=admin --ide2=local-lvm:cloudinit

# static IP
# proxmox $ qm create 107 --memory 16384 --sockets 1 --cores 8 --net0 virtio,bridge=vmbr0 --net1 virtio,bridge=vmbr1 --name my-next-instance --scsihw=virtio-scsi-single --ostype=l26 --cpu=x86-64-v2-AES --citype nocloud --ipconfig0 ip=192.168.1.7/24,gw=192.168.1.1 --nameserver 192.168.1.1 --ciupgrade=0 --ciuser=admin --cipassword=admin --ide2=local-lvm:cloudinit

# import disk
proxmox $ qm set 107 --virtio0 local-lvm:0,import-from=/root/BIG-IP-Next-20.2.1-2.430.2+0.0.48.qcow2 --boot order=virtio0

You should now see a new VM created in the Proxmox GUI. Finally, start the VM. This will take a few minutes. The BIG-IP Next VM is now ready to be onboarded per instructions found here.
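If you prefer to script that final start step instead of clicking through the Proxmox GUI, a short sketch using the third-party proxmoxer library is below. The host name, node name (pve), credentials, and VM IDs (105 and 107) are assumptions matching the qm examples above; adjust them for your environment.

# Requires: pip install proxmoxer requests
from proxmoxer import ProxmoxAPI

# Connect to the Proxmox API (assumed host and credentials)
proxmox = ProxmoxAPI("proxmox.example.com", user="root@pam",
                     password="your-password", verify_ssl=False)

# Start the Central Manager (105) and Next instance (107) VMs created above
for vmid in (105, 107):
    proxmox.nodes("pve").qemu(vmid).status.start.post()
    print(f"start requested for VM {vmid}")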
Getting Started with BIG-IP Next: Configuring Instance High Availability
With BIG-IP classic, there are a lot of design choices to make and steps on both systems to arrive at an HA pair. With BIG-IP Next, this is simplified quite a bit. Once configured, the highly available pair is treated by Central Manager as a single entity. There might be alternative options in the future, but as of version 20.1, HA for instances is active/standby only. In this article, I'll walk you through the steps to configure HA for instances in the Central Manager GUI.

Background and Prep Work
I set up two HA systems in my preparation for this article. The first had dedicated interfaces for the management interface, the external and internal traffic interfaces, and the HA interface, so when configuring the virtual machine, I made sure each system had four NICs. For the second, I merged all the non-management interfaces onto a single NIC and used VLAN tagging, so those systems had two NICs. The IP addressing scheme in my lab is shown below. First the four-NIC system:

4-NIC System               next-4nic-a      next-4nic-b      floating
mgmt                       172.16.2.152/24  172.16.2.153/24  172.16.2.151/24
cntrlplane ha (vlan 245)   10.10.245.1/30   10.10.245.2/30   NA
dataplane ha (int 1.3)     10.0.5.1/30      10.0.5.2/30      NA
dataplane ext (int 1.1)    10.0.2.152/24    10.0.2.153/24    10.0.2.151/24
dataplane int (int 1.2)    10.0.3.152/24    10.0.3.153/24    10.0.3.151/24

And now the two-NIC system:

2-NIC System               next-2nic-a      next-2nic-b      floating
mgmt                       172.16.2.162/24  172.16.2.163/24  172.16.2.161/24
cntrlplane ha (vlan 245)   10.10.245.5/30   10.10.245.6/30   NA
dataplane ha (vlan 50)     10.0.5.5/30      10.0.5.6/30      NA
dataplane ext (vlan 30)    10.0.2.162/24    10.0.2.163/24    10.0.2.161/24
dataplane int (vlan 40)    10.0.3.162/24    10.0.3.163/24    10.0.3.161/24

Beyond the self IP addresses for your traffic interfaces, you'll need additional IP addresses for the floating address, the control-plane HA sub-interfaces (which are created for you), and the data-plane HA interfaces. Before proceeding, make sure you have a plan for network segmentation and addressing similar to the above, you've installed two like instances, and that one (and only one) of them is licensed.

Configuration
This walkthrough is for the 2-NIC system shown above, but the steps are mostly the same. First, log in to Central Manager and click on Manage Instances. Click on the standalone mode for the system you want to be active initially in your HA pair. For me, that's next-2nic-a. (You can also just click on the system name and then select HA in the menu, but this saves a click.)

In the pop-up dialog, select Enable HA. Read the notes below to make sure your systems are ready to be paired. On this screen, a list of available standalone systems will populate. Click the down arrow and select your second system, next-2nic-b in my case. Then click Next.

On this next prompt, you'll need to create two VLANs, one for the control plane and one for the data plane. The control plane mechanics are taken care of for you, and you don't need to plan connectivity other than to select an available VLAN that won't conflict with anything else in your system. For the data plane, you need to have a dedicated VLAN and/or interface set aside.

Click Create VLAN for the control plane, then name and tag your VLAN. In my case I used cp-ha as my VLAN name and tag 245. Click Done. Now click Create VLAN for the data plane. Because I'm tagging all networks on the 2-NIC system, my only interface is 1.1. So I named my data plane VLAN dp-ha, set the tag to 50, selected interface 1.1, and clicked Done.
Now that both HA VLANs have been created, click Next. On this screen, you'll name your HA pair system. This will need to be unique from other HA pairs, so plan accordingly. I named mine next-ha-1, but that's generic and unlikely to be helpful in your environment. Then set your HA management IP; this is how Central Manager will connect to the HA pair. You can enable auto-failback if desired, but I left that unchecked. For the HA Nodes Addresses, I referenced my addressing table posted at the top of this article and filled those in as appropriate. When you get those filled out, click Next.

Now you'll be presented with a list of your traffic VLANs. On my system I have v102-ext and v103-int for my external and internal networks. First, I clicked v102-ext. On this screen you'll need to add a couple of rows so you can populate the active node IP, the standby node IP, and the floating IP. The order doesn't matter, but I ordered them as shown, and again referenced my addressing table. Once populated, click Save. That will return you to this screen, where you'll notice that v102-ext now has a green checkbox where the yellow warning was.

Now click into your other traffic VLAN (v103-int in my case) if applicable to your environment, or skip this next step. This is a repeat of the external traffic network for the internal traffic network. I referenced my address table one more time and filled the details out as appropriate, then clicked Save. Make sure that you have green checkboxes on the traffic VLANs, then click Next.

Review the summary of the HA settings you've configured, and if everything looks right, click Deploy to HA. On the "are you sure?" dialog where you're prompted to confirm your deployment, click Yes, Deploy. You'll then see messaging at the top of the HA configuration page for the instance indicating that HA is being created. Also note that the Mode on this page during creation still indicates standalone. Once the deployment is complete, you'll see the mode has changed to HA and the details for your active and standby nodes are provided.

Also present here is the Enable automatic failover option, which is enabled by default. This is for software upgrades. If left enabled, the standby unit will be upgraded first, a failover will be executed, and then the remaining system will be upgraded. If in your HA configuration you specified auto-failback, then after the second system is upgraded there will be another failover executed to complete the process.

And finally, as seen in the list of instances, there are now three instead of four, with next-ha-1 taking the place of next-2nic-a and next-2nic-b from where we started. Huzzah! You now have a functioning BIG-IP Next HA pair. After we conclude the "Getting Started" series, we'll start to look at the benefits of automation around all the tasks we've covered so far, including HA. The click-ops capabilities are nice to have, but I think you'll find the ability to automate all this from a script or something like an Ansible playbook will really start to drive home the API-first aspects of Next.
Application observability (Open Telemetry Tracing)
Hello, do you, or your customers, need BIG-IP to deliver OTEL tracing? It won't (AFAIK) be implemented in BIG-IP classic, but I've opened an RFE to ask for implementation of OpenTelemetry (distributed) tracing on BIG-IP Next:

RFE: (Bug alias 1621853) [RFE] Implement OTEL traces

If you need it, don't hesitate to open a support case and link that RFE ID; that will give it more weight for prioritization.