terraform
Terraform LTM provider - ICMP disabled on resulting VIPs
Hello, I recently started using the Terraform provider to create my VIPs. It works great! It makes my life much easier and faster to create the non-prod environments and migrate those configs to prod. I've encountered one strange thing I'm struggling with, though: I'm unable to ping the LTM VIPs. The VIPs work perfectly other than the fact that we are unable to ICMP ping them. I hand-created a basic VIP in the same partition, on the same VLAN/network, and I can ping it, so it's not a routing or firewall problem. There's no module other than LTM running on this F5, so there are no firewall policies or anything like that in play, just a standard LTM VIP with HTTP and client-SSL profiles. Nothing I create with Terraform is pingable, though. There are no policies or iRules in use. On the virtual address list, ICMP Echo is set to Always, ARP is enabled, and state is enabled.

Has anyone else encountered this? I searched the forums and didn't find anything notable, and I haven't been able to find a solution yet. Even comparing the config files from the F5 hasn't produced anything notable. I'm sure it's something small that I'm missing. The LTM VIP configuration (sanitized) is inline below. Thanks!

ltm virtual /partition/app1PD-CLL-HTTPS {
    description "server1, Terraform - Servicing the CLL"
    destination /partition/10.1.212.244:443
    ip-protocol tcp
    mask 255.255.255.255
    persist {
        /partition/Cookie-app1CLL {
            default yes
        }
    }
    pool /partition/app1PD-CLL
    profiles {
        /partition/partition-HTTP-Weblogic-Proxy { }
        /partition/OC-255.255.255.255 { }
        /partition/server1 {
            context clientside
        }
        /Common/tcp { }
    }
    serverssl-use-sni disabled
    source 0.0.0.0/0
    source-address-translation {
        pool /partition/10.1.212.244
        type snat
    }
    translate-address enabled
    translate-port enabled
}
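One thing that may be worth checking in a case like this is whether the virtual address object behind the VIP is also managed explicitly by Terraform. The snippet below is a minimal, hedged sketch only: it assumes the provider's bigip_ltm_virtual_address resource and its icmp_echo argument behave as documented for your provider version, and it reuses the partition and address from the sanitized config above. Verify the accepted attribute values against the bigip provider documentation before relying on it.

# Hedged sketch: explicitly manage the virtual address that backs the VIP so
# ICMP echo stays enabled, rather than relying on the auto-created object.
resource "bigip_ltm_virtual_address" "app1pd_cll" {
  name      = "/partition/10.1.212.244" # virtual address from the sanitized config above
  icmp_echo = "enabled"                 # assumption: confirm accepted values in the provider docs
}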
How to deploy an F5XC SMSv2 site with the help of automation

To deploy an F5XC Customer Edge (CE) in SMSv2 mode with the help of automation, it is necessary to follow the three main steps below:

- Verify the prerequisites at the technical architecture level for the environment in which the CE will be deployed (public cloud or datacenter/private cloud)
- Create the necessary objects at the F5XC platform level
- Deploy the CE instance in the target environment

We will provide more details for all the steps as well as the simplest Terraform skeleton code to deploy an F5XC CE in the main cloud environments (AWS, GCP and Azure).

Step 1: verification of architecture prerequisites

To be deployed, a CE must have an interface (which is and will always be its default interface) that has Internet access. This access is necessary to perform the installation steps and provide the "control plane" part to the CE. This interface is referred to as "Site Local Outside", or SLO. The Internet access can be provided in several ways:

"Cloud provider" type site:
- Public IP address directly on the interface
- Private IP address on the interface and use of a NAT gateway as default route
- Private IP address on the interface and use of a security appliance (firewall type, for example) as default route
- Private IP address on the interface and use of an explicit proxy

Datacenter or "private cloud" type site:
- Private IP address on the interface and use of a security appliance (firewall type, for example) or router as default route
- Private IP address on the interface and use of an explicit proxy
- Public IP addresses on the interface and "direct" routing to the Internet

It is highly recommended (not to say required) to add at least a second interface during the site's first deployment, because on some infrastructures (for example, GCP) it is not possible to add network interfaces after the creation of the VM. Even on platforms where adding a network interface is possible, a reboot of the F5XC CE is needed. An F5XC SMSv2 CE can have up to eight interfaces overall. Additional interfaces are (most of the time) used as "Site Local Inside" (SLI) interfaces or "Segment interfaces" (that specific part will be covered in another article).

Basic CE flow matrix:

Interface | Direction and protocols | Use case / purpose
SLO | Egress: TCP 53 (DNS), TCP 443 (HTTPS), UDP 53 (DNS), UDP 123 (NTP), UDP 4500 (IPsec) | Registration, software download and upgrade, VPN tunnels towards the F5XC infrastructure for the control plane
SLO | Ingress: none required | RE/CE use case, CE to CE use case by using the F5 ADN
SLO | Ingress: UDP 4500 | Site Mesh Group for direct CE to CE secure connectivity over the SLO interface (no usage of the F5 ADN)
SLO | Ingress: TCP 80 (HTTP), TCP 443 (HTTPS) | HTTP/HTTPS load balancer on the CE for WAAP use cases
SLI | Egress | Depends on the use case / application; if the security constraints permit it, no restriction
SLI | Ingress | Depends on the use case / application; if the security constraints permit it, no restriction

For advanced details regarding the IPs and domains used for registration / software upgrade and tunnel establishment towards the F5XC infrastructure, please refer to: https://docs.cloud.f5.com/docs-v2/platform/reference/network-cloud-ref#new-secure-mesh-v2-sites
Step 2: creation of necessary objects at the F5XC platform level

This step will be performed by the Terraform script by:

- Creating an SMSv2 token
- Creating an F5XC site of SMSv2 type

API certificate and terraform variables

First, it is necessary to create an API certificate. Please follow the instructions in our official documentation here:
https://docs.cloud.f5.com/docs-v2/administration/how-tos/user-mgmt/Credentials#generate-api-certificate-for-my-credentials
or here:
https://docs.cloud.f5.com/docs-v2/administration/how-tos/user-mgmt/Credentials#generate-api-certificate-for-service-credentials
depending on the type of API certificate you want to create and use (user credential or service credential).

In the Terraform variables, these are the ones you need to modify. The "location of the api key" should be the full path where your API P12 file is stored.

variable "f5xc_api_p12_file" {
  type        = string
  description = "F5XC tenant api key"
  default     = "<location of the api key>"
}

If your F5XC console URL is https://mycompany.console.ves.volterra.io then the value for f5xc_api_url will be https://mycompany.console.ves.volterra.io/api.

variable "f5xc_api_url" {
  type    = string
  default = "https://<tenant name>.console.ves.volterra.io/api"
}

When using Terraform, you will also need to export the P12 certificate password as an environment variable.

export VES_P12_PASSWORD=<password of P12 cert>

Creation of the SMSv2 token

This is achieved with the following Terraform code and with the "type = 1" parameter.

#
# F5XC objects creation
#
resource "volterra_token" "smsv2-token" {
  depends_on = [volterra_securemesh_site_v2.site]
  name       = "${var.f5xc-ce-site-name}-token"
  namespace  = "system"
  type       = 1
  site_name  = volterra_securemesh_site_v2.site.name
}

Creation of the F5XC SMSv2 site

This is achieved with the following Terraform code (example for GCP). This is where you need to configure all the options you want to be applied at site creation.

resource "volterra_securemesh_site_v2" "site" {
  name                    = format("%s-%s", var.f5xc-ce-site-name, random_id.suffix.hex)
  namespace               = "system"
  block_all_services      = false
  logs_streaming_disabled = true
  enable_ha               = false

  labels = {
    "ves.io/provider" = "ves-io-GCP"
  }

  re_select {
    geo_proximity = true
  }

  gcp {
    not_managed {}
  }
}

For instance, if you want to use a corporate proxy and have the CE tunnels passing through the proxy, the following should be added:

custom_proxy {
  enable_re_tunnel = true
  proxy_ip_address = "10.154.32.254"
  proxy_port       = 8080
}

And if you want to force CE-to-RE connectivity over SSL, the following should be added:

tunnel_type = "SITE_TO_SITE_TUNNEL_SSL"
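For completeness, the variables above are typically consumed by the provider configuration. The block below is a minimal, hedged sketch and not part of the original skeleton; it assumes the volterraedge/volterra provider with its api_p12_file and url arguments, and the P12 password supplied through the VES_P12_PASSWORD environment variable as described above.

terraform {
  required_providers {
    volterra = {
      source = "volterraedge/volterra"
    }
  }
}

# Hedged sketch: provider configuration consuming the variables defined above.
# The P12 password is read from the VES_P12_PASSWORD environment variable.
provider "volterra" {
  api_p12_file = var.f5xc_api_p12_file
  url          = var.f5xc_api_url
}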
Step 3: creation of the CE instance in the target environment

This step will be performed by the Terraform script by:

- Generating a cloud-init file
- Creating the F5XC site instance in the environment based on the marketplace images or the available F5XC images

How to list the available F5XC images in Azure:

az vm image list --all --publisher f5-networks --offer f5xc_customer_edge --sku f5xccebyol --output table | sort -k4 -V

Then check in the output for the entry with the highest version.

Architecture  Offer               Publisher    Sku           Urn                                                    Version
------------  ------------------  -----------  ------------  -----------------------------------------------------  ---------
x64           f5xc_customer_edge  f5-networks  f5xccebyol    f5-networks:f5xc_customer_edge:f5xccebyol:9.2025.17     9.2025.17
x64           f5xc_customer_edge  f5-networks  f5xccebyol    f5-networks:f5xc_customer_edge:f5xccebyol:2024.40.1     2024.40.1
x64           f5xc_customer_edge  f5-networks  f5xccebyol    f5-networks:f5xc_customer_edge:f5xccebyol:2024.40.2     2024.40.2
x64           f5xc_customer_edge  f5-networks  f5xccebyol    f5-networks:f5xc_customer_edge:f5xccebyol:2024.44.1     2024.44.1
x64           f5xc_customer_edge  f5-networks  f5xccebyol_2  f5-networks:f5xc_customer_edge:f5xccebyol_2:2024.44.2   2024.44.2

We are going to re-use some of these parameters in the Terraform script, to instruct the Terraform code which image it should use.

source_image_reference {
  publisher = "f5-networks"
  offer     = "f5xc_customer_edge"
  sku       = "f5xccebyol"
  version   = "9.2025.17"
}

Also, for Azure, you need to accept the legal terms of the F5XC CE image. This needs to be performed only once by running the following commands.

Select the Azure subscription in which you are planning to deploy the F5XC CE:

az account set -s <subscription-id>

Accept the terms and conditions for the F5XC CE for this subscription:

az vm image terms accept --publisher f5-networks --offer f5xc_customer_edge --plan f5xccebyol

How to list the available F5XC images in GCP:

gcloud compute images list --project=f5-7626-networks-public --filter="name~'f5xc-ce'" --sort-by=~creationTimestamp --format="table(name,creationTimestamp)"

Then check in the output for the entry with the highest version.

NAME                        CREATION_TIMESTAMP
f5xc-ce-crt-20250701-0123   2025-07-09T02:15:08.352-07:00
f5xc-cecrt-20250701-0099-9  2025-07-02T01:32:40.154-07:00
f5xc-ce-202505151709081     2025-06-25T22:31:23.295-07:00

How to list the available F5XC images in AWS:

aws ec2 describe-images \
  --region eu-west-3 \
  --filters "Name=name,Values=*f5xc-ce*" \
  --query "reverse(sort_by(Images, &CreationDate))[*].{ImageId:ImageId,Name:Name,CreationDate:CreationDate}" \
  --output table

Then check in the output for the AMI with the latest creation date.

Also, for AWS, you need to accept the legal terms of the F5XC CE image. This needs to be performed only once. Go to this page in your AWS Console, then select "View purchase options" and then select "Subscribe".

Putting everything together: global overview

We are going to use Azure as the target environment to deploy the F5XC CE. The CE will be deployed with two NICs, the SLO being in a public subnet with a public IP attached to the NIC. We assume that all the prerequisites from Step 1 are met.

The Terraform skeleton for Azure is available here: https://github.com/veysph/Prod-TF/

It's not intended to be the perfect thing, just an example of the minimum basic things needed to deploy an F5XC SMSv2 CE with automation. Changes and enhancements based on the different needs you might have are more than welcome. It's really intended to be flexible and not too strict.

Structure of the terraform directory:

- provider.tf contains everything that is related to the needed providers
- variables.tf contains all the variables used in the terraform files
- f5xc_sites.tf contains everything that is related to the F5XC objects creation
- main.tf contains everything to start the F5XC CE in the target environment

Deployment

Make all the relevant changes in variables.tf. Don't forget to export your P12 password as an environment variable (see Step 2, API certificate and terraform variables)!
Then run:

terraform init
terraform plan
terraform apply

Should everything be correct at each step, you should get a CE object in the F5XC console, under Multi-Cloud Network Connect --> Manage --> Site Management --> Secure Mesh Sites v2.
F5 Per applications AS3 Declarations via Terraform

Good evening all, I would like to put together a proof of concept around using Terraform (the client's preferred automation platform) to populate and manage AS3 declarations. I am attempting to follow the following F5 docs page in my lab, and it is not working as I would have expected: https://clouddocs.f5.com/products/orchestration/terraform/latest/BIG-IP/per-app-as3.html#example2

I have two separate files, as is suggested in the article. One with two applications (app1-2.json) that acts as the baseline for the first push, then a second file (app3.json) with a third application that I would like to ADD to the existing AS3 declaration, leaving my F5 with 3 total applications.

I have one file main.tf that looks like the following:

resource "bigip_as3" "as3-example" {
  as3_json      = file("app1-2.json")
  tenant_filter = var.tenant
  tenant_name   = "Tenant"
}

I use that main.tf file to push the original app1-2 file to produce the initial declaration with two applications. Then, I edit that file to look like:

resource "bigip_as3" "as3-example" {
  # as3_json    = data.template_file.init.rendered
  as3_json      = file("app3.json")
  tenant_filter = var.tenant
  tenant_name   = "Tenant"
}

Since per-application declarations are enabled, I assumed editing this file and applying it would push the third application and leave the other two intact. That is not the case. When I push this edited main.tf file, it edits the existing declaration, deleting app1 and app2 and creating app3. Can anyone shed some light on how we are supposed to use Terraform in per-application deployments? I feel like I have to be missing something silly.
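Not an authoritative answer, but one pattern worth testing: both pushes above reuse the same bigip_as3 resource, so Terraform treats the second apply as an update of that single resource's desired state rather than an additional deployment. A hedged sketch of keeping each application file in its own resource (assuming per-application AS3 remains enabled on the BIG-IP) might look like the following; the resource names here are illustrative only.

# Hedged sketch: one bigip_as3 resource per application file, so adding app3
# does not replace the state that tracks app1 and app2. Names are illustrative.
resource "bigip_as3" "app1_2" {
  as3_json      = file("app1-2.json")
  tenant_filter = var.tenant
  tenant_name   = "Tenant"
}

resource "bigip_as3" "app3" {
  as3_json      = file("app3.json")
  tenant_filter = var.tenant
  tenant_name   = "Tenant"
}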
Customer-driven Site Deployment Using AWS and F5 Distributed Cloud Terraform Modules

Introduction and Problem Scope

F5 Distributed Cloud Mesh's Secure Networking provides connectivity and security services for your applications running on the Edge, Private Clouds, or Public Clouds. This simplifies the deployment and configuration of connectivity and security services for your Multi-Cloud and Edge Cloud deployment needs across heterogeneous environments.

F5 Distributed Cloud Services leverages the "Site" construct to deploy Secure Mesh or AppStack Site instances to manage workloads. A Site could be a customer location like AWS, Azure, GCP (Google Cloud Platform), a private cloud, or an edge site. To run F5 Distributed Cloud Services, the site needs to be deployed with one or more instances of F5 Distributed Cloud Node, a software appliance that is managed by F5 Distributed Cloud Console. This site is where customer applications and F5 Distributed Cloud services are running. To deploy a Node, different options are available:

Customer deployment topology description

We will explain the above steps in the context of a greenfield deployment, the Terraform scripts of which are available here. The corresponding logical topology view of this deployment is shown in Fig.2. This deployment scenario instantiates the following resources:

- Single-node CE cluster
- AWS SLO interface
- AWS VPC
- AWS SLO interface subnet
- AWS route tables
- AWS Internet Gateway
- Assignment of an AWS EIP to the SLO

The objective of this deployment is to create a Site with a single CE node in a new VPC for the provided AWS region and availability zone. The CE will be created as an AWS EC2 instance. An AWS subnet is created within the VPC. The CE Site Local Outside (SLO) interface will be attached to the VPC subnet and the created EC2 instance. SLO is a logical interface of a site (CE node) through which reachability is achieved to external destinations (e.g. the Internet or other services outside the public cloud site). To enable reachability to the Internet, the default route of the CE node will point to the AWS Internet gateway. Also, the SLO will be configured with an AWS external IP address (Elastic IP).

Fig.2. Customer Deployment Topology in AWS

List of terraform input parameters provided in the vars file

Parameters must be customized to adapt to the customer environment. The definitions of the parameters in "terraform.tfvars" are shown in the table below. An illustrative terraform.tfvars using these parameters is sketched after the table.

Parameters | Definitions
owner | Identifies the email of the IT manager used to authenticate to the AWS system
project_prefix | Prefix that will be used to identify the resource objects in AWS and XC
project_suffix | Suffix that will be used to identify the site's resources in AWS and XC
ssh_public_key_file | Local file system path to the ssh public key file
f5xc_tenant | Full F5XC tenant name
f5xc_api_url | F5XC API url
f5xc_cluster_name | Name of the cluster
f5xc_api_p12_file | Local file system path to api_cert_file (downloaded from XC Console)
aws_region | AWS region for the XC Site
aws_existing_vpc_id | Existing VPC ID (brownfield)
aws_vpc_cidr_block | CIDR block of the VPC
aws_availability_zone | AWS availability zone (a)
aws_vpc_slo_subnet_node0 | AWS subnet in the VPC for the SLO subnet
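As an illustration only, a terraform.tfvars for this module might look like the sketch below. Every value shown is a hypothetical placeholder and must be replaced with your own environment's details; check the module's variable definitions for the exact formats it expects.

# Hypothetical terraform.tfvars values for illustration only.
owner                    = "it-manager@example.com"                          # placeholder
project_prefix           = "acme"                                            # placeholder
project_suffix           = "lab01"                                           # placeholder
ssh_public_key_file      = "~/.ssh/id_rsa.pub"
f5xc_tenant              = "acme-corp"                                       # placeholder tenant
f5xc_api_url             = "https://acme-corp.console.ves.volterra.io/api"   # placeholder URL
f5xc_cluster_name        = "aws-ce-lab01"                                    # placeholder
f5xc_api_p12_file        = "./acme-corp.console.api-creds.p12"               # placeholder path
aws_region               = "us-west-2"
aws_existing_vpc_id      = ""                                                # assumption: empty for greenfield
aws_vpc_cidr_block       = "10.10.0.0/16"
aws_availability_zone    = "a"                                               # check the module's expected format
aws_vpc_slo_subnet_node0 = "10.10.1.0/24"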
Configuring other environment variables

Export the following environment variables in the working shell, setting them to the customer's deployment context.

Environment Variables | Definitions
AWS_ACCESS_KEY | AWS access key for authentication
AWS_SECRET_ACCESS_KEY | AWS secret key for authentication
VES_P12_PASSWORD | XC P12 password from the Console
TF_VAR_f5xc_api_p12_cert_password | Same as VES_P12_PASSWORD

Deploy Topology

Deploy the topology with:

terraform init
terraform plan
terraform apply -auto-approve

and monitor the status of the Sites on the F5 Distributed Cloud Services Console. The created site object will be available in the Secure Mesh Site section of the F5 Distributed Cloud Services Console.

Video-based description of the deployment scenario

This demonstration video shows the procedure for provisioning the deployment topology described above in three steps.

References

https://docs.cloud.f5.com/docs-v2/platform/services/mesh/secure-networking
https://docs.cloud.f5.com/docs-v2/platform/concepts/site
https://docs.cloud.f5.com/docs-v2/multi-cloud-network-connect/how-to/site-management
https://docs.cloud.f5.com/docs-v2/multi-cloud-network-connect/how-to/site-management/deploy-aws-site-terraform
https://docs.cloud.f5.com/docs-v2/multi-cloud-network-connect/troubleshooting/troubleshoot-manual-ce-deployment-registration-issues
F5 Distributed Cloud Site Lab on Proxmox VE with Terraform

Overview

F5 Distributed Cloud (XC) Sites can be deployed in public and private clouds like VMware and KVM using images available for download at https://docs.cloud.f5.com/docs/images. Proxmox Virtual Environment is a complete, open-source server management platform for enterprise virtualization based on KVM and Linux Containers. This article shows how a redundant Secure Mesh site protecting a multi-node App Stack site can be deployed using Terraform automation on Proxmox VE.

Logical Topology

  ----------------------------------- vmbr0 (with Internet Access)
    |      |      |
  +----+ +----+ +----+
  | m0 | | m1 | | m2 |                    3 node Secure Mesh Site
  +----+ +----+ +----+
    |      |      |
  ----------------------------------------------- vmbr1 (or vmbr0 vlan 100)
    |      |      |      |      |      |
  +----+ +----+ +----+ +----+ +----+ +----+
  | m0 | | m1 | | m2 | | w0 | | w1 | | w2 |       6 node App Stack Site
  +----+ +----+ +----+ +----+ +----+ +----+

The redundant 3 node Secure Mesh Site connects via its Site Local Outside (SLO) interfaces to the virtual network (vmbr0) for Internet access, and provides DHCP services and connectivity via its Site Local Inside (SLI) interfaces to the App Stack nodes connected to another virtual network (vmbr1 or vlan-tagged vmbr0). Each node of the App Stack site gets DHCP services and Internet connectivity via the Secure Mesh Site's SLI interfaces.

Requirements

- Proxmox VE server or cluster (this example leverages a 3 node cluster) with a total capacity of at least 24 CPU (sufficient for 9 nodes, 4 vCPU each) and 144 GB of RAM (for 9 nodes, 16 GB each)
- ISO file storage for cloud-init disks (local or cephfs)
- Block storage for VMs and the template (local-lvm or cephpool)
- F5 XC CE Template (same for Secure Mesh and App Stack sites, see below on how to create one)
- F5 Distributed Cloud access and API credentials
- Terraform CLI
- Terraform example configuration files from https://github.com/mwiget/f5xc-proxmox-site

The setup used to write this article consists of 3 Intel/ASUS i3-1315U machines with 64GB RAM and a 2nd disk each for Ceph storage. Each NUC has a single 2.5G Ethernet port; the ports are interconnected via a physical Ethernet switch and internally connected to Linux bridge vmbr0. The required 2nd virtual network for the Secure Mesh Site is created using the same Linux bridge vmbr0 but with a VLAN tag (e.g. 100). There are other options to create a second virtual network in Proxmox, e.g. via Software Defined Networking (SDN) or dedicated physical Ethernet ports.

Installation

Clone the example repo:

$ git clone https://github.com/mwiget/f5xc-proxmox-site
$ cd f5xc-proxmox-site

Create F5 XC CE Template

The repo contains helper scripts to download and create the template; they can be executed directly from the Proxmox server shell. Modify the environment variables in the scripts according to your setup, mainly the VM template id to one that isn't used yet and the block storage location available on your Proxmox cluster (e.g. local-lvm or cephpool):

$ cat download_ce_image.sh
#!/bin/bash
image=$(curl -s https://docs.cloud.f5.com/docs/images/node-cert-hw-kvm-images|grep qcow2\"| cut -d\" -f2)
if test -z $image; then
  echo "can't find qcow2 image from download url. Check https://docs.cloud.f5.com/docs/images/node-cert-hw-kvm-images"
  exit 1
fi
if ! test -f $image; then
  echo "downloading $image ..."
  wget $image
fi

The following script must be executed on the Proxmox server (don't forget to adjust qcow2, id and storage):

$ cat create_f5xc_ce_template.sh
#!/bin/bash
# adjust full path to downloaded qcow2 file, target template id and storage ...
qcow2=/root/rhel-9.2024.11-20240523024833.qcow2
id=9000
storage=cephpool

echo "resizing image to 50G ..."
qemu-img resize $qcow2 50G

echo "destroying existing VM $id (if present) ..."
qm destroy $id

echo "creating vm template $id from $qcow2 .."
qm create $id --memory 16384 --net0 virtio,bridge=vmbr0 --scsihw virtio-scsi-pci
qm set $id --name f5xc-ce-template
qm set $id --scsi0 $storage:0,import-from=$qcow2
qm set $id --boot order=scsi0
qm set $id --serial0 socket --vga serial0
qm template $id

Create terraform.tfvars

Copy the example terraform.tfvars.example to terraform.tfvars and adjust the variables based on your setup:

$ cp terraform.tfvars.example terraform.tfvars

project_prefix | Site and node names will use this prefix, e.g. your initials
ssh_public_key | Your ssh public key provided to each node
pm_api_url | Proxmox API URL
pm_api_token_id | Proxmox API Token Id, e.g. "root@pam!prox"
pm_api_token_secret | Proxmox API Token Secret
pm_target_nodes | List of Proxmox servers to use
iso_storage_pool | Proxmox storage pool for the cloud-init ISO disks
pm_storage_pool | Proxmox storage pool for the VM disks
pm_clone | Name of the created F5 XC CE Template
pm_pool | Resource pool to which the VM will be added (optional)
f5xc_api_url | https://<tenant>.console.ves.volterra.io/api
f5xc_api_token | F5 XC API Token
f5xc_tenant | F5 XC Tenant Id
f5xc_api_p12_file | Path to the encrypted F5 XC API P12 file. The password is expected to be provided via the environment variable VES_P12_PASSWORD

Set count=1 for module "firewall"

The examples defined in the various toplevel *.tf files use the Terraform count meta-argument to enable or disable the various site types to build. Setting `count=0` disables a module and `count=1` enables it. It is even possible to create multiple sites of the same type by setting count to a higher number; each site adds the count index as a suffix to the site name. To re-create the setup documented here, edit the file lab-firewall.tf and set `count=1` in the firewall and appstack modules, as sketched below.
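A hedged illustration of what that looks like; the module names follow the article, but the source paths and module arguments here are placeholders, so check lab-firewall.tf in the repository for the real values.

# Hedged sketch only: enabling the firewall and appstack examples via the
# count meta-argument. Source paths and variables are placeholders.
module "firewall" {
  source = "./modules/secure-mesh-site" # placeholder path
  count  = 1                            # 0 disables, 1 enables, >1 creates multiple sites
  # ... module-specific variables from lab-firewall.tf ...
}

module "appstack" {
  source = "./modules/appstack-site"    # placeholder path
  count  = 1
  # ... module-specific variables from lab-firewall.tf ...
}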
Deploy sites

Use the Terraform CLI to deploy:

$ terraform init
$ terraform plan
$ terraform apply

Terraform output will show periodic progress, including site status until ONLINE. Once deployed, you can check status via the F5 XC UI, including the DHCP leases assigned to the App Stack nodes.

A kubeconfig file has been automatically created; it can be sourced via env.sh and used to query the cluster:

$ source env.sh
$ kubectl get nodes -o wide
NAME                  STATUS   ROLES        AGE    VERSION       INTERNAL-IP       EXTERNAL-IP   OS-IMAGE                                      KERNEL-VERSION                 CONTAINER-RUNTIME
mw-fw-appstack-0-m0   Ready    ves-master   159m   v1.29.2-ves   192.168.100.114   <none>        Red Hat Enterprise Linux 9.2024.11.4 (Plow)   5.14.0-427.16.1.el9_4.x86_64   cri-o://1.26.5-5.ves1.el9
mw-fw-appstack-0-m1   Ready    ves-master   159m   v1.29.2-ves   192.168.100.96    <none>        Red Hat Enterprise Linux 9.2024.11.4 (Plow)   5.14.0-427.16.1.el9_4.x86_64   cri-o://1.26.5-5.ves1.el9
mw-fw-appstack-0-m2   Ready    ves-master   159m   v1.29.2-ves   192.168.100.49    <none>        Red Hat Enterprise Linux 9.2024.11.4 (Plow)   5.14.0-427.16.1.el9_4.x86_64   cri-o://1.26.5-5.ves1.el9
mw-fw-appstack-0-w0   Ready    <none>       120m   v1.29.2-ves   192.168.100.121   <none>        Red Hat Enterprise Linux 9.2024.11.4 (Plow)   5.14.0-427.16.1.el9_4.x86_64   cri-o://1.26.5-5.ves1.el9
mw-fw-appstack-0-w1   Ready    <none>       120m   v1.29.2-ves   192.168.100.165   <none>        Red Hat Enterprise Linux 9.2024.11.4 (Plow)   5.14.0-427.16.1.el9_4.x86_64   cri-o://1.26.5-5.ves1.el9
mw-fw-appstack-0-w2   Ready    <none>       120m   v1.29.2-ves   192.168.100.101   <none>        Red Hat Enterprise Linux 9.2024.11.4 (Plow)   5.14.0-427.16.1.el9_4.x86_64   cri-o://1.26.5-5.ves1.el9

Next steps

Now it's time to explore the Secure Mesh site and the App Stack cluster:

- Deploy a service on the new cluster, then create a Load Balancer and Origin Pool to expose the service.
- Need more or fewer worker nodes? Simply change the worker node count in the lab-firewall.tf file and re-apply via `terraform apply`.

Destroy deployment

Use terraform again to destroy the site objects in F5 XC and the virtual machines and disks on Proxmox with:

$ terraform destroy

Summary

This article documented how to deploy a dual-nic Secure Mesh site and a multi-node App Stack cluster via Terraform on Proxmox VE. There are additional examples in secure-mesh-single-nic.tf, secure-mesh-dual-nic.tf and appstack.tf. You can explore and modify the provided modules based on your particular needs.

Resources

https://github.com/mwiget/f5xc-proxmox-site
F5 Distributed Cloud Documentation
Terraform Proxmox Provider by Telmate
Proxmox Virtual Environment
Deploy WAF on any Edge with F5 Distributed Cloud (SaaS Console, Automation)

F5 XC WAAP/WAF presents a clear advantage over classical WAAP/WAFs in that it can be deployed in a variety of environments without loss of functionality. In this first article of a series, we present an overview of the main deployment options for XC WAAP, while follow-on articles will dive deeper into the details of the deployment procedures.
BIG-IP Orchestrate in private Data Center using Terraform Cloud Agent

What is Terraform Cloud?

Terraform Cloud offers organizations a unified workflow for provisioning their cloud, private data center, and SaaS infrastructure, ensuring continuous infrastructure management throughout its entire lifecycle.

What is F5 BIG-IP?

BIG-IP is a collection of hardware platforms and software solutions providing services focused on security, reliability, and performance. It helps deliver and load balance applications securely and at scale. BIG-IP can be deployed in private or public clouds.

BIG-IP in a private Data Center

When BIG-IP is in a private data center, it has a private IP, making it tricky to reach with tools like Terraform Cloud from the outside. However, if you're dealing with both private and public clouds using Terraform Cloud, you can use Terraform Cloud agents. These agents help control BIG-IP in private data centers, even when the IP isn't accessible externally.

BIG-IP supports Application Services 3 (AS3) and FAST templates, presenting a highly synergistic relationship with Terraform. This synergy is particularly pronounced due to the availability of the BIG-IP Terraform provider, coupled with dedicated resources designed specifically for the deployment of AS3 and FAST templates. AS3 and FAST templates serve as powerful tools for configuring and managing BIG-IP application services. AS3 simplifies the process of defining, managing, and deploying application-related configurations, providing a declarative model for specifying how applications should be set up on BIG-IP devices. FAST, on the other hand, extends automation capabilities by incorporating telemetry functionalities, enhancing monitoring and reporting capabilities. The integration with Terraform is pivotal in this context, as the BIG-IP Terraform provider facilitates the seamless incorporation of AS3 and FAST templates into infrastructure-as-code (IaC) workflows.

The example Terraform configuration is at https://github.com/scshitole/privateDC

How to orchestrate BIG-IP in a private Data Center?

In this illustrative scenario, the BIG-IP system is operational within a private data center. A virtual machine has been configured to host the Terraform Cloud agent, encapsulated within a container. The Terraform configuration pertinent to our deployment resides in the GitHub repository: https://github.com/scshitole/privateDC

Let us now turn our attention to the Terraform agent, an agile and lightweight component capable of executing within a container on a virtual machine. Its primary function is to establish and maintain a secure connection with Terraform Cloud, perpetually polling for instructions. Notably, this agent not only retrieves directives from Terraform Cloud but also acquires essential information regarding the Terraform workspace and the TF configuration. What distinguishes this agent is its seamless operation without necessitating alterations to existing firewall rules. Its communication with Terraform Cloud transpires over HTTPS and only demands appropriately configured DNS settings. Upon queuing a Terraform plan, the control plane initiates the dispatch of the configuration to the agent. Subsequently, the agent diligently retrieves the workspace and orchestrates the deployment of the configuration onto the BIG-IP infrastructure.
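As a hedged illustration of what the agent ends up applying (not the exact configuration from the linked repository), a minimal BIG-IP provider block plus an AS3 deployment might look like the sketch below; the private address, variable names and AS3 file name are placeholders.

# Hedged sketch: the agent runs this inside the private data center, so the
# bigip provider can reach the BIG-IP on its private management address.
provider "bigip" {
  address  = "10.0.10.5"          # placeholder private management IP
  username = var.bigip_username   # placeholder variable
  password = var.bigip_password   # placeholder variable
}

# Deploy an AS3 declaration; as3.json is a placeholder file name.
resource "bigip_as3" "app" {
  as3_json = file("${path.module}/as3.json")
}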
What advantages does this solution offer?

Automation Workflows with Terraform Cloud: By leveraging Terraform Cloud, we gain the capability to establish automated workflows for configuring BIG-IP. This not only streamlines the configuration process but also enhances efficiency through the power of automation. Additionally, Terraform Cloud enables the orchestration of BIG-IP Web Application Firewall (WAF) configurations in a Hybrid Cloud environment, providing a comprehensive solution for managing security across diverse infrastructures.

Enhanced Security with Maintained Private IPs and Credentials: The solution ensures a robust security posture by maintaining the confidentiality of the infrastructure's private IP addresses and credentials. This practice prevents security sprawl and unauthorized access attempts, fortifying the integrity of the entire system.

Seamless BIG-IP Configuration Migration: The flexibility of BIG-IP configuration migration is a notable advantage, allowing for a smooth transition between private and public cloud environments. This bidirectional migration capability ensures adaptability to evolving infrastructure needs, facilitating a seamless shift of BIG-IP configurations as organizational requirements dictate. Whether moving configurations from a private cloud to a public cloud or vice versa, this capability provides agility and scalability in infrastructure management.

How to set up the configuration on Terraform Cloud

- Once logged into Terraform Cloud, choose your organization from the available options.
- Go to the Projects & Workspaces section and opt for the Version Control Workflow.
- The BIG-IP Terraform configuration template resides in the GitHub repository; please choose the relevant repository.
- Choose the correct GitHub repository; it should be visible here.
- Now, provide a name for the workspace; feel free to select something relevant, but it must be unique.
- Enter the variables here, including details such as the BIG-IP's IP address, username, and password. Ensure you choose the HCL option, and, if needed, you can set the values as sensitive so they are not visible.
- Then, go to the established workspace and click on "New run".
- Go to the Agents section and select Create Agent Pool.
- Enter a fitting name for the Agent Pool, as illustrated. You can opt for a unique name matching the workspace for easy identification, although it's not mandatory.
- Provide a suitable description for the Agent Pool, explaining its specific purpose or activities. Then, proceed to click on "Generate Token"; you will require this token when running the agent.
- Copy the newly generated token and follow the outlined steps to configure your agents. Run the provided docker command, including the essential environment variables TFC_AGENT_TOKEN and TFC_AGENT_NAME. If desired, you can also run the container in the background using the appropriate docker command option.

Key Take Away

- Terraform Cloud streamlines infrastructure provisioning and management across various environments for consistent lifecycle control.
- Terraform Cloud agents enable effective orchestration of BIG-IP configurations in private data centers, addressing challenges associated with private IPs.
- The seamless integration of BIG-IP, AS3, FAST templates, and Terraform supports efficient infrastructure-as-code workflows, especially beneficial in multi-cloud setups.
- Terraform Cloud facilitates automated workflows, simplifies BIG-IP configurations, and supports orchestrating Web Application Firewall (WAF) setups in Hybrid Cloud environments.
- Emphasizing security, the solution maintains the confidentiality of private IPs and credentials, preventing security sprawl and unauthorized access.
- The solution offers flexibility by allowing seamless BIG-IP configuration migration between private and public cloud environments, ensuring adaptability and scalability.

For more details, please watch the accompanying video: https://youtu.be/RgCqnDxpf3E
F5 rules for AWS WAF Terraform

Dear all, good afternoon. I'm implementing the F5 OWASP Top 10 rules for AWS WAF: https://aws.amazon.com/marketplace/pp/prodview-ah3rqi2hcqzsi

However, I'm managing the infrastructure as code with Terraform. To carry out the implementation I need the correct rule group name and the correct vendor name, and I cannot find this information in the documentation. Can you help me?

Example:

{
  overrideAction = {
    type = var.NAME == "BLOCK" ? "NONE" : var.NAME
  }
  managedRuleGroupIdentifier = {
    "vendorName" : "NAME",
    "managedRuleGroupName" : "NAME"
  }
  ruleGroupType = "ManagedRuleGroup"
  excludeRules  = []
}
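For reference, when the standard aws_wafv2_web_acl resource is used instead of the structure above, a marketplace rule group is referenced by vendor name and rule group name roughly as sketched below. The two name strings here are placeholders only, not the real F5 values; those still need to be confirmed from the marketplace subscription details.

# Hedged sketch using the standard aws_wafv2_web_acl resource; the vendor and
# rule group names below are placeholders showing where the values belong.
resource "aws_wafv2_web_acl" "example" {
  name  = "example-acl"
  scope = "REGIONAL"

  default_action {
    allow {}
  }

  rule {
    name     = "f5-owasp-rules"
    priority = 1

    override_action {
      none {}
    }

    statement {
      managed_rule_group_statement {
        name        = "<managed-rule-group-name>" # placeholder
        vendor_name = "<vendor-name>"             # placeholder
      }
    }

    visibility_config {
      cloudwatch_metrics_enabled = true
      metric_name                = "f5-owasp-rules"
      sampled_requests_enabled   = true
    }
  }

  visibility_config {
    cloudwatch_metrics_enabled = true
    metric_name                = "example-acl"
    sampled_requests_enabled   = true
  }
}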
Using F5's Terraform modules in an air-gapped environment

Introduction

IT industry research, such as Accelerate, shows that improving a company's ability to deliver software is critical to its overall success. The following key practices and design principles are cornerstones of that improvement:

- Version control of code and configuration
- Automation of deployment
- Automation of testing and test data management
- "Shifting left" on security
- Loosely coupled architectures
- Pro-active notification

F5 has published Terraform modules on GitHub.com to help customers adopt deployment automation practices, focused on streamlining instantiation of BIG-IPs on AWS, Azure, and Google. Using these modules allows F5 customers to leverage F5's embedded knowledge and expertise.

But we have limited access to public resources

Not all customer Terraform automation hosts running the CLI or enterprise products are able to access public internet resources like GitHub.com and the Terraform Registry. The following steps describe how to create and maintain a private, air-gapped copy of F5's modules for these secured customer environments.

Creating your air-gapped copy of the modules you need

This example uses a personal GitHub account as an analog for air-gapped targets, so we can't use the fork feature of github.com to create the copy. For this approach, we're assuming a workstation that has access to both the source repository host and the target repository host. So, not truly fully air-gapped. We'll show a workflow using git bundle in the future.

Retrieve the remote URL for one of the modules at F5's devcentral GitHub account

Export the remote URL for the source repository:

export MODULEGITHUBURL="git@github.com:f5devcentral/terraform-aws-bigip-module.git"

Create a repository on the target air-gapped host

Follow the appropriate directions for the air-gapped hosted Git (BitBucket, GitLab, GitHub Enterprise, etc.), and retrieve the remote URL for this repository.

Export the remote URL for the air-gapped repository

Note: The air-gapped repository is still empty at this point.
Note: The example is using github.com; your real-world use will be using your internal git host.

export MODULEAIRGAPURL="git@github.com:myteamsaccount/localmodulerepo.git"

Clone the module source repository

This example uses F5's module for AWS:

git clone $MODULEGITHUBURL

Add the target repository as an additional remote

Again, we're using F5's AWS module as an example. We're using the remote URL exported as MODULEAIRGAPURL to create the additional git repository remote.

cd terraform-aws-bigip-module
git remote add airgap $MODULEAIRGAPURL

Pass the latest to the air-gapped repository

Note: In the example below we're pushing the main branch. In some older repositories, the primary repository branch may still be named master.
Note: Pushing the tags into the airgap repository is critical to version management of the modules.

# get the latest from the origin repository
git fetch origin
# push any changes to the airgap repository
git push airgap main
# push all repository tags to the airgap repository
git push --tags airgap

Using your air-gapped copy of the modules

Identify the module version to use

This lists all of the tags available in the repository:

git tag

e.g.

0.9.2
v0.9
v0.9.1
v0.9.3
v0.9.4
v0.9.5

Review new versions for environment acceptance

At this point, your organization should perform any acceptance testing of the new tags prior to using them in production environments.
Source reference in a Terraform module using git

Unlike using the Terraform Registry, when using git as your module source the version reference is included in the source URL. The source reference is the prefix git:: followed by the remote URL of the airgap repository, followed by ?ref=, finally followed by the tag identified in the previous step.

Note: We are referencing the airgap repository, NOT the origin repository.
Note: It is highly recommended to include the version reference in the URL. If the reference is not included in the URL, the latest commit to the default branch will be used at apply time. This means that the results of an apply will be non-deterministic, causing unexpected results and possibly service disruptions.

module "bigip" {
  source = "git::https://github.com/myteamsaccount/localmodulerepo.git?ref=v0.9.3"
  ...
}

Check out Terraform's documentation for more detailed configuration requirements.

Source reference in a Terraform module using a private Terraform registry

If you have an instance of Terraform Enterprise, it's possible to connect the private git repository created above to the private module registry (https://www.terraform.io/docs/enterprise/admin/module-sharing.html) available in Terraform Enterprise.

module "bigip" {
  source  = "privateregistry/modulereference"
  version = "v0.9.3"
  ...
}

Maintaining your air-gapped copy of the modules

On-going maintenance of the private repository

Once the repository is established, perform the following actions whenever you want to retrieve the latest versions of the F5 modules. If you have a registry enabled on Terraform Enterprise, it should update automatically when the private repository is updated.

# get the latest from the origin repository
git fetch origin
# push any changes to the airgap repository
git push airgap main
# push all repository tags to the airgap repository
git push --tags airgap

Review new versions for environment acceptance

When your private repository is updated, do not forget to perform any acceptance testing you need to validate compliance and compatibility with your environment's expectations.

Other references

Installing and running iControl extensions in isolated GCP VPCs
Deploy BIG-IP on GCP with GDM without Internet access