Code to create unreachable ELA license files from BIG-IQ
Problem this snippet solves:

*NOTE* If you are upgrading your BIG-IP, please refer to F5 solution: https://support.f5.com/csp/article/K13540950

BIG-IQ traditionally expects to be able to reach any BIG-IP devices it is going to license. This code helps create a license file from the ELA SKU offerings which can be applied to an unreachable BIG-IP. I've added some troubleshooting steps at the end of the article, covering Dossier errors seen on the BIG-IP, just in case!

How to use this snippet:

SSH into the BIG-IP device and run the following command to obtain the MAC address of the management interface:

```
[root@bigip1:Active:Standalone] config # tmsh show sys mac-address | grep -i interface
00:50:56:xx:xx:36    net interface  mgmt  mac-address
xx:xx:xx:xx:xx:xx    net interface  1.3   mac-address
xx:xx:xx:xx:xx:xx    net interface  1.1   mac-address
xx:xx:xx:xx:xx:xx    net interface  1.2   mac-address
```

In the example above, the MAC address we need is "00:50:56:xx:xx:36" (the mgmt interface entry).

Now SSH into the BIG-IQ:

1. Move into the /shared directory (cd /shared).
2. Copy over the Create-license.py Python script and run it by typing python Create-license.py.
3. The script runs and prompts you for the following information:

```
[root@Preece-bigiq-cm1:Active:Standalone] shared # python Create-license.py
Enter BIG-IQ user ID: admin
Enter BIG-IQ Password:
Enter Management IP address of BIG-IQ: 44.131.176.101
Enter Management IP address of BIG-IP to be licensed: 44.131.176.22
Enter Management MAC address of BIG-IP to be licensed: 00:50:56:xx:xx:36
Enter the name of the License Pool from which to take BIG-IP license: Load-18
Enter the license name to be assigned to the BIG-IP: F5-BIG-MSP-BT-1GIPIF-LIC-DEV
Enter hypervisor used, valid options are: aws, azure, gce, hyperv, kvm, vmware, xen: vmware
Optional: Enter chargeback tag if required: Department-A
Optional: Enter tenant name if required: Customer-B
```

Once the details have been filled in, the script authenticates to the BIG-IQ and generates the license (about 30 seconds). If everything went well, you will be presented with a success message. The license file is saved as <IP-address>_bigip.license in the same directory from which you ran the script.

1. Using SCP, copy the new license file from the BIG-IQ to your desktop.
2. Copy the license file into the /config directory of the BIG-IP device.
3. Rename the file: cp <ip-address>_bigip.license bigip.license
4. Reload the license by typing reloadlic.

Observe the BIG-IP device restart its services and show as Active. You can review the license in the GUI (System > License) and provision modules as needed.

Code:

```python
import getpass  # used to hide the user's password input
import json
import os
import requests
from time import sleep

"""
This script uses the BIG-IQ API to license an unreachable (dark site) BIG-IP.

The BIG-IQ licensing API needs certain details provided in order to license an
appliance. These details can either be provided in a file called lic-data.json
or, if that file does not exist, you will be prompted to enter them.

The minimum contents of lic-data.json should be:
{
"licensePoolName": " -- Enter License Pool Name here. License Pool name can be found in BIG-IQ GUI -- ",
"command": "assign",
"address": " -- Enter MGMT IP Address of BIG-IP here -- ",
"assignmentType": "UNREACHABLE",
"macAddress": " -- Enter MAC address of MGMT IP for the BIG-IP here -- ",
"hypervisor": " -- Enter hypervisor value here, options are: aws, azure, gce, hyperv, kvm, vmware, xen -- ",
"unitOfMeasure": "yearly",
"skuKeyword1": " -- Enter License Name here. License Name (or Offering name) can be found in the BIG-IQ GUI -- "
}

Additional optional key:value pairs can be added to the JSON file to affix
useful tags to the license. The JSON file with optional key:value pairs looks like:
{
"licensePoolName": " -- Enter License Pool Name here. License Pool name can be found in BIG-IQ GUI -- ",
"command": "assign",
"address": " -- Enter MGMT IP Address of BIG-IP here -- ",
"assignmentType": "UNREACHABLE",
"macAddress": " -- Enter MAC address of MGMT IP for the BIG-IP here -- ",
"hypervisor": " -- Enter hypervisor value here, options are: aws, azure, gce, hyperv, kvm, vmware, xen -- ",
"unitOfMeasure": "yearly",
"skuKeyword1": " -- Enter License Name here. License Name (or Offering name) can be found in the BIG-IQ GUI -- ",
"chargebackTag": "OPTIONAL: Remove this line if you are not going to use it",
"tenant": "OPTIONAL: Remove this line if you are not going to use it"
}

A completed minimal lic-data.json file will look like this:
{
"licensePoolName": "byol-pool-utility",
"command": "assign",
"address": "10.1.1.10",
"assignmentType": "UNREACHABLE",
"macAddress": "06:ce:c2:43:b3:05",
"hypervisor": "kvm",
"unitOfMeasure": "yearly",
"skuKeyword1": "F5-BIG-MSP-BT-P3-3GF-LIC-DEV"
}

lic-data.json must reside in the directory from which you execute this python script.
"""


def bigiqAuth(_bigiqAuthUrl, _bigiqCredentials):
    """
    This function authenticates with BIG-IQ and collects the authentication
    token provided. The token will be used for subsequent calls to BIG-IQ.
    """
    _errFlag = 0
    try:
        _bigiqAuthInfo = _bigiq_session.post(_bigiqAuthUrl, data=json.dumps(_bigiqCredentials), verify=False)
        print(_bigiqAuthUrl)
        _bigiqAuthInfo.raise_for_status()
        print("Response code: %s" % _bigiqAuthInfo.status_code)
    except requests.exceptions.HTTPError as err:
        print(err)
        _errFlag = 1
    if _errFlag == 0:
        _bigiqResponse = _bigiqAuthInfo.json()
        _bigiqToken = _bigiqResponse['token']
        for _token in _bigiqToken:
            if _token == 'token':
                _bigiqAuthToken = _bigiqToken[_token]
        _authHeaders = {"X-F5-Auth-Token": "{_authToken}".format(_authToken=_bigiqAuthToken)}
    else:
        _authHeaders = 0
    print("** Completed Authentication ***")
    return _authHeaders


def extractLicense(_rawLicenseJSON):
    """
    This function pulls the generated license from BIG-IQ.
    """
    for _license in _rawLicenseJSON:
        if _license == 'licenseText':
            _extractedLicense = _rawLicenseJSON[_license]
        if _license == 'status':
            if _rawLicenseJSON[_license] == "FINISHED":
                print("***** License has been assigned *****")
            else:
                _extractedLicense = "FAILED"
    return _extractedLicense


def licenseData():
    """
    This function reads the lic-data.json file. If it does not exist you will
    be prompted to enter the necessary values.
    """
    if os.path.exists('lic-data.json'):
        with open('./lic-data.json') as licfile:
            _licdata = json.load(licfile)
    else:
        _bigipAddress = raw_input("Enter Management IP address of BIG-IP to be licensed: ")
        _bigipMACaddress = raw_input("Enter Management MAC address of BIG-IP to be licensed: ")
        _licensePoolName = raw_input("Enter the name of the License Pool from which to take BIG-IP license: ")
        _licenseSKU = raw_input("Enter the license name to be assigned to the BIG-IP: ")
        _hypervisorType = raw_input("Enter hypervisor used, valid options are: aws, azure, gce, hyperv, kvm, vmware, xen: ")
        _chargebackTag = raw_input("Optional: Enter chargeback tag if required: ")
        _tenantTag = raw_input("Optional: Enter tenant name if required: ")
        _licdata = {
            "licensePoolName": "{_licensePool}".format(_licensePool=_licensePoolName),
            "command": "assign",
            "address": "{_bigipIP}".format(_bigipIP=_bigipAddress),
            "assignmentType": "UNREACHABLE",
            "macAddress": "{_bigipMAC}".format(_bigipMAC=_bigipMACaddress),
            "hypervisor": "{_hypervisor}".format(_hypervisor=_hypervisorType),
            "unitOfMeasure": "yearly",
            "skuKeyword1": "{_license}".format(_license=_licenseSKU),
            "chargebackTag": "{_chargeback}".format(_chargeback=_chargebackTag),
            "tenant": "{_tenant}".format(_tenant=_tenantTag)
        }
    return _licdata


def urlConstruction(_bigiqUrl, _bigiqIP):
    """
    This function rewrites the selfLink URL returned by BIG-IQ to reflect the
    BIG-IQ management IP address rather than localhost.
    """
    _urlDeConstruct = _bigiqUrl.split("/")
    _urlReConstruct = ""
    for _urlElement in _urlDeConstruct:
        if _urlElement == "https:":
            _urlReConstruct = _urlReConstruct + _urlElement + "//"
        elif _urlElement == "localhost":
            _urlReConstruct = _urlReConstruct + _bigiqIP
        elif _urlElement != "":
            _urlReConstruct = _urlReConstruct + "/" + _urlElement
    return _urlReConstruct


_userID = raw_input("Enter BIG-IQ user ID: ")
_password = getpass.getpass(prompt="Enter BIG-IQ Password: ")
_bigiqAddress = raw_input("Enter Management IP address of BIG-IQ: ")

_credPostBody = {
    "username": "{_uname}".format(_uname=_userID),
    "password": "{_pword}".format(_pword=_password),
    "loginProviderName": "RadiusServer"
}

_deviceToBeLicensed = licenseData()
_bigipAddress = _deviceToBeLicensed['address']
print("BIG-IP Address is: %s" % _bigipAddress)

_bigiq_session = requests.session()
_bigiq_auth_url = "https://{_bigiqIP}/mgmt/shared/authn/login".format(_bigiqIP=_bigiqAddress)

# Authenticates with BIG-IQ
_bigiqAuthHeader = bigiqAuth(_bigiq_auth_url, _credPostBody)

if _bigiqAuthHeader == 0:
    print("Unable to authenticate with BIG-IQ. Check BIG-IQ reachability and credentials")
else:
    _bigiq_url1 = "https://{_bigiqIP}/mgmt/cm/device/tasks/licensing/pool/member-management".format(_bigiqIP=_bigiqAddress)
    # --- This section requests the license from BIG-IQ,
    # posting the criteria as laid out in the _deviceToBeLicensed JSON blob
    _errFlag = 0
    try:
        _bigiqLicenseDevice = _bigiq_session.post(_bigiq_url1, headers=_bigiqAuthHeader, data=json.dumps(_deviceToBeLicensed), verify=False)
        _bigiqLicenseDevice.raise_for_status()
        print("Response code: %s" % _bigiqLicenseDevice.status_code)
    except requests.exceptions.HTTPError as err:
        print("Issue received, check request and/or check connectivity %s" % err)
        _errFlag = 1
    if _errFlag == 0:
        _bigiqResponse = _bigiqLicenseDevice.json()
        print(_bigiqResponse)
        print(_bigiqResponse['selfLink'])
        _bigiqLicenseStatus_url = urlConstruction(_bigiqResponse['selfLink'], _bigiqAddress)
        print(_bigiqLicenseStatus_url)
        print("--- Standby for 30 seconds whilst BIG-IQ generates license ---")
        sleep(30)
        _errFlag = 0
        try:
            _licenseStatus = _bigiq_session.get(_bigiqLicenseStatus_url, headers=_bigiqAuthHeader, verify=False)
            _licenseStatus.raise_for_status()
            print("Response code: %s" % _licenseStatus.status_code)
        except requests.exceptions.HTTPError as err:
            print("Issue received, check request and/or check connectivity %s" % err)
            _errFlag = 1
        if _errFlag == 0:
            print(_licenseStatus.content)
            _licenseStatusDetail = _licenseStatus.json()
            _licenseOutput = extractLicense(_licenseStatusDetail)
            if _licenseOutput == "FAILED":
                print("***** License Assignment Failed. Most likely a valid license already exists for the device; revoke it before applying a new license *****")
            else:
                _licenseFname = _bigipAddress + "_bigip.license"
                _licensefile = open(_licenseFname, "w")
                _licensefile.write("%s" % _licenseOutput)
                _licensefile.close()
                print(_licenseOutput)
                print("***** SUCCESS, the license is stored here %s *****" % _licenseFname)
```

Tested this on version: 13.x, 14.x, 15.x and 16.x

Troubleshooting

When you apply the license to the BIG-IP you may see an error similar to:

License is not operational (expired or digital signature does not match contents)

This could simply mean the license file was copied over badly. Run md5sum against the output license file on the BIG-IQ and compare it to the same file on the BIG-IP. Example:

md5sum 10.2.3.4_bigip.license

You can also review the /var/log/ltm file for "Dossier error" messages:

Dossier error: 1 (MAC address is mismatched)
Dossier error: 12 (Hypervisor is mismatched)

If this does not help, please open a support case and attach a recent qkview file.
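As a worked example of the copy-and-verify steps above, here is a minimal shell sequence. The host names and the license file name are illustrative assumptions, not from the original article; the commands themselves (scp, md5sum, cp, reloadlic) are the ones referenced in the steps:

```bash
# On your workstation: pull the generated license off the BIG-IQ
scp root@bigiq.example.com:/shared/44.131.176.22_bigip.license .
# Push it into the BIG-IP /config directory
scp 44.131.176.22_bigip.license root@bigip.example.com:/config/
# On both BIG-IQ and BIG-IP: the checksums must match, or the dossier check will fail
md5sum 44.131.176.22_bigip.license
# On the BIG-IP: rename into place and reload the license
cp /config/44.131.176.22_bigip.license /config/bigip.license
reloadlic
```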
Code comparison between deploying AS3 or FAST API Declarations with Ansible and Terraform

I found the different ways of deploying AS3 declarations with Ansible and Terraform interesting, so I will provide some examples and a comparison at the end of the article. My examples below are based on the links referenced in this article.

1. Ansible and AS3/FAST

With Ansible there are two ways to deploy AS3 declarations: with the default Ansible URI module, as shown in https://clouddocs.f5.com/training/fas-ansible-workshop-101/3.0-as3-intro.html (the example there uses "delegate_to: localhost", but "connection: local" is the same thing, and sometimes one or the other is used to work around bugs, from what I have seen), or using the newer F5 module "bigip_as3_deploy", as shown in https://clouddocs.f5.com/products/orchestration/ansible/devel/f5_bigip/playbook_tutorial.html, which is much simpler. The two Ansible modules do the same thing in most cases and both support Jinja2 templates, as shown below, but since the F5 module is just four lines of code I recommend going with it, and trying the URI module only if a bug or a limitation is found. Another advantage of "bigip_as3_deploy" is that when a change is pushed through AS3 (an extra node is added to the declaration, for example), this is visible in the playbook output. F5 also has an Ansible module called "bigip_fast_application" that uses the F5 FAST iApp templates, the replacement for the classic iApp templates, for even simpler deployment of AS3, as FAST is a front end for AS3. There is not a lot of documentation about it, but after playing around I managed to figure it out 😉 (see Example 3): https://clouddocs.f5.com/products/orchestration/ansible/devel/f5_bigip/modules_2_0/module_index.html

Common playbook header for the examples:

```yaml
---
- name: LINKLIGHT AS3
  hosts: bigip
  connection: local
  gather_facts: false
  collections:
    - f5networks.f5_bigip
  vars:
    pool_members: "{{ groups['webnodes_as3'] }}"
    ansible_host: xxxx
    ansible_user: xxxx
    ansible_httpapi_password: xxxx
    ansible_httpapi_port: 443
    ansible_network_os: f5networks.f5_bigip.bigip
    ansible_httpapi_use_ssl: yes
    ansible_httpapi_validate_certs: no
    private_ip: 20.20.20.22
```

Example 1:

```yaml
  tasks:
    - name: CREATE AS3 JSON BODY
      set_fact:
        as3_app_body: "{{ lookup('template', 'as3_template.j2', split_lines=False) }}"

    - name: Deploy or Update AS3
      bigip_as3_deploy:
        content: "{{ lookup('template', 'tenant_base.j2', split_lines=False) }}"
      tags: [ deploy ]
```

Example 2:

```yaml
  tasks:
    - name: CREATE AS3 JSON BODY
      set_fact:
        as3_app_body: "{{ lookup('template', 'as3_template.j2', split_lines=False) }}"

    - name: PUSH AS3
      uri:
        url: "https://{{ ansible_host }}/mgmt/shared/appsvcs/declare"
        method: POST
        body: "{{ lookup('template', 'tenant_base.j2', split_lines=False) }}"
        status_code: 200
        timeout: 300
        body_format: json
        force_basic_auth: yes
        user: "{{ ansible_user }}"
        password: "{{ ansible_httpapi_password }}"
        validate_certs: no
```

Example template as3_template.j2:

```
"web_app": {
    "class": "Application",
    "template": "http",
    "serviceMain": {
        "class": "Service_HTTP",
        "virtualAddresses": [ "{{ private_ip }}" ],
        "pool": "app_pool"
    },
    "app_pool": {
        "class": "Pool",
        "monitors": [ "http" ],
        "members": [
            {
                "servicePort": 443,
                "serverAddresses": [
                    {% set comma = joiner(",") %}
                    {% for mem in pool_members %}
                    {{ comma() }} "{{ hostvars[mem]['ansible_host'] }}"
                    {% endfor %}
                ]
            }
        ]
    }
}
```

Example template tenant_base.j2:

```
{
    "class": "AS3",
    "action": "deploy",
    "persist": true,
    "declaration": {
        "class": "ADC",
        "schemaVersion": "3.2.0",
        "id": "testid",
        "label": "test-label",
        "remark": "test-remark",
        "WorkshopExample": {
            "class": "Tenant",
            {{ as3_app_body }}
        }
    }
}
```

Example 3
and its template simple_http.json:

```yaml
    - name: Create FAST application
      bigip_fast_application:
        template: "examples/simple_http"
        content: "{{ lookup('template', 'simple_http.json') }}"
        state: "create"
```

```json
{
    "tenant_name": "Tenant9",
    "application_name": "Application1",
    "virtual_port": 443,
    "virtual_address": "192.168.1.2",
    "server_port": 80,
    "server_addresses": ["10.10.10.1"]
}
```

2. Terraform and AS3/FAST

With Terraform there are again two ways to deploy AS3 declarations. One is to use the AS3 resource "bigip_as3", as seen in https://clouddocs.f5.com/products/orchestration/terraform/latest/userguide/as3-integration.html, and the other is to use the "bigip_fast_xxx" resources, as seen in https://community.f5.com/t5/technical-articles/manage-f5-big-ip-fast-with-terraform-intro/tac-p/309615#M13882, which are based on the F5 FAST templates that in the background are again AS3. The "bigip_as3" resource is a nice option, similar to Ansible file lookups, but as we saw, Ansible can feed in not only static files but also Jinja2 templates that use Ansible variables to build a dynamic configuration. The Terraform workaround seems to be the "bigip_fast_xxx" resources, but these still limit us to four predefined FAST templates.

Example 1:

```hcl
variable "hostname" {
  type    = string
  default = "xxxx"
}

variable "username" {
  type    = string
  default = "xxxx"
}

variable "password" {
  type    = string
  default = "xxxx"
}

terraform {
  required_providers {
    bigip = {
      source = "F5Networks/bigip"
    }
  }
}

provider "bigip" {
  address  = var.hostname
  username = var.username
  password = var.password
}

resource "bigip_as3" "as3-example2" {
  as3_json = file("example1.json")
}

resource "bigip_fast_https_app" "this" {
  application = "myApp4"
  tenant      = "scenario4"

  virtual_server {
    ip   = "10.1.10.224"
    port = 443
  }

  tls_server_profile {
    tls_cert_name = "/Common/default.crt"
    tls_key_name  = "/Common/default.key"
  }

  pool_members {
    addresses = ["10.1.10.120", "10.1.10.121", "10.1.10.122"]
    port      = 80
  }

  snat_pool_address   = ["10.1.10.50", "10.1.10.51", "10.1.10.52"]
  load_balancing_mode = "least-connections-member"

  monitor {
    send_string = "GET / HTTP/1.1\\r\\nHost: example.com\\r\\nConnection: Close\\r\\n\\r\\n"
    response    = "200 OK"
  }
}
```

Example file example1.json:

```json
{
    "class": "AS3",
    "action": "deploy",
    "persist": true,
    "declaration": {
        "class": "ADC",
        "schemaVersion": "3.0.0",
        "id": "example-declaration-01",
        "label": "Sample 1",
        "remark": "Simple HTTP application with round robin pool",
        "Sample_01": {
            "class": "Tenant",
            "defaultRouteDomain": 0,
            "Application_1": {
                "class": "Application",
                "template": "http",
                "serviceMain": {
                    "class": "Service_HTTP",
                    "virtualAddresses": [ "10.0.2.10" ],
                    "pool": "web_pool"
                },
                "web_pool": {
                    "class": "Pool",
                    "monitors": [ "http" ],
                    "members": [
                        {
                            "servicePort": 80,
                            "serverAddresses": [ "192.0.1.100", "192.0.1.110" ]
                        }
                    ]
                }
            }
        }
    }
}
```

Edit: I have since noticed that F5 has made a new Terraform resource named "bigip_fast_application", the equivalent of the Ansible module with the same name. It can use any FAST template together with the JSON file from the Ansible example to create a FAST iApp service, but the Jinja2 options for inserting variables into templates are still not possible with Terraform, so it remains more complex to build dynamic files for Terraform to ingest.

Example 2:

```hcl
resource "bigip_fast_application" "foo-app" {
  template  = "examples/simple_http"
  fast_json = file("new_fast_app.json")
}
```
3. Ansible and Terraform comparison

Ansible and Terraform are both great tools, and it depends on what you are after, but Ansible definitely seems the more versatile one, thanks to its Jinja2 support for sharing variables with the template files, as in the example "hostvars[mem]['ansible_host']", where the pool members' IP addresses are taken from the Ansible inventory file. Terraform, with its FAST resources that utilize the FAST iApp templates, provides even more simplicity for configuring basic TCP/UDP/HTTP or HTTPS services, but the Ansible module bigip_fast_application does the same and is not limited to just four FAST templates (and as I mentioned, Terraform has since released bigip_fast_application, so it is no longer limited to just the four templates either). With its template support, Ansible can use a new AS3 RPM as soon as it is released, whereas with Terraform in some cases a new resource needs to be created by the F5 vendor/provider before the feature is available. There are constantly new AS3 definitions and RPM packages, and Ansible has the edge in that regard with its flexible Jinja2 templates that can ingest variables! Terraform seems to have a Jinja2 provider, so maybe it can auto-create files that Terraform can use, but it looks complex and I have not used it or found an article about how it works, so maybe I will do a post about it in the future 😎

I am not saying in any way that Terraform shouldn't be used: when deploying resources in AWS/Azure or another cloud provider, Terraform is king! It is better to have just one tool for your IaC (infrastructure as code) that deploys the cloud-native services, the F5 virtual machines and their configuration. If you use standard configurations that Terraform covers, by all means have one automation tool rather than two. Terraform is also better at discovering the state of resources that have already been deployed, as Ansible is weaker in that regard.

The deployment of F5 AWAF/ASM policies seems to be the next big thing, as it can now also be automated using the declarative model, so maybe I will do a comparison between Ansible and Terraform for declarative WAF deployments in the future. From what I have seen, the same principles apply: with Terraform it is easier, especially for cloud deployments, but Ansible has more flexibility. The cool thing is that the declarative WAF has a "url" parameter, so the security policy can be pulled from GitHub, for example, as a source of truth; Ansible and Terraform can then just be used to push the AS3 declaration to the F5 devices, which will pull the needed data from the URL.

```json
"Arcadia_WAF_API_policy": {
    "class": "WAF_Policy",
    "url": "http://10.1.20.4/root/as3-waf-api/-/raw/master/policy-api.json",
    "ignoreChanges": true
},
```

Until then, here are a nice video and an article about the subject:

https://www.youtube.com/watch?v=Ecua-WRGyJc
https://community.f5.com/t5/technical-articles/advanced-waf-v16-0-declarative-api/ta-p/289251

Edit: Terraform now has the option for F5 ASM/AWAF to inject learning suggestions that the policy builder has made into a Terraform-created policy, which is something Ansible still can't do. In the future Ansible may be able to do the same.
You can check: https://community.f5.com/t5/technical-articles/manage-f5-big-ip-advanced-waf-policies-with-terraform-intro/ta-p/300828

As a quick note, as of now GTM/DNS configurations can also be managed with AS3 😀 https://www.youtube.com/watch?v=OHSBpOtQg_0&t=411s

Please share your opinions, as I am glad to hear them and gather more useful info on this topic!
Create an Internet exposed HTTPS Load-Balancer on Volterra with Terraform (Origin handled by a Volterra node)

Problem this snippet solves:

How to create an Internet-exposed HTTPS Load-Balancer with VoltMesh where the Origin is reachable through a Volterra node. The Origin is HTTP-based but will be exposed on the Internet over HTTPS.

Two steps are needed:

1. Creation of the Origin (1-origin.tf file)
2. Creation of the Load-Balancer (2-https-lb.tf file)

How to use this snippet:

Pre-requirements:

Have a Volterra API Certificate. Please see this page for the API Certificate generation: https://volterra.io/docs/how-to/user-mgmt/credentials

Extract the certificate and the key from the .p12:

```
openssl pkcs12 -info -in certificate.p12 -out private_key.key -nodes -nocerts
openssl pkcs12 -info -in certificate.p12 -out certificate.cert -nokeys
```

Create a variables.tf Terraform variables file:

```hcl
variable "api_cert" {
  type    = string
  default = "/<full path to>/certificate.cert"
}

variable "api_key" {
  type    = string
  default = "/<full path to>/private_key.key"
}

variable "api_url" {
  type    = string
  default = "https://<tenant_name>.console.ves.volterra.io/api"
}
```

Create a main.tf Terraform file:

```hcl
terraform {
  required_version = ">= 0.12.9, != 0.13.0"
  required_providers {
    volterra = {
      source  = "volterraedge/volterra"
      version = ">=0.0.6"
    }
  }
}

provider "volterra" {
  api_cert = var.api_cert
  api_key  = var.api_key
  url      = var.api_url
}
```

Encode in base64 the public key of the TLS certificate you want to use in the HTTPS load-balancer. From a shell, run:

```
base64 publicpart_of_tls_certificate.pem
```

Get the Volterra vesctl tool: https://gitlab.com/volterra.io/vesctl/blob/main/README.md

Then in your home directory, create a .vesconfig file with the following lines:

```
server-urls: https://<tenant>.console.ves.volterra.io/api
key: /<full path to>/private_key.key
cert: /<full path to>/certificate.cert
```

Then in the folder where you have installed vesctl, run:

```
./vesctl.darwin-amd64 request secrets get-public-key > tenant-public-key
./vesctl.darwin-amd64 request secrets get-policy-document --namespace shared --name ves-io-allow-volterra > ves-io-allow-volterra-policy
./vesctl.darwin-amd64 request secrets encrypt --policy-document ves-io-allow-volterra-policy --public-key tenant-public-key privkey.pem > blindfolded-privkey
```

where privkey.pem is the private key of your TLS certificate. The Volterra-encrypted TLS key will be available in the blindfolded-privkey file.

In the directory where your terraform files are, run:

```
terraform init
```

Then:

```
terraform apply
```

Code:

```hcl
//==========================================================================
// Definition of the Origin, 1-origin.tf
resource "volterra_origin_pool" "sample-https-origin-pool" {
  name = "sample-https-origin-pool"
  // Name of the namespace where the origin pool must be deployed
  namespace = "mynamespace"

  origin_servers {
    private_ip {
      ip = "10.17.20.13"
      // From which interface of the node onsite the IP of the service is
      // reachable. Values are inside_network / outside_network or both.
      outside_network = true
      // Site definition
      site_locator {
        site {
          name      = "name-of-the-site"
          namespace = "system"
          tenant    = "name-of-the-tenant"
        }
      }
    }
    labels = {}
  }

  no_tls                 = true
  port                   = "80"
  endpoint_selection     = "LOCALPREFERED"
  loadbalancer_algorithm = "LB_OVERRIDE"
}
//==========================================================================

//==========================================================================
// Definition of the Load-Balancer, 2-https-lb.tf
resource "volterra_http_loadbalancer" "sample-https-lb" {
  depends_on = [volterra_origin_pool.sample-https-origin-pool]

  // Mandatory "Metadata"
  name = "sample-https-lb"
  // Name of the namespace where the load-balancer must be deployed
  namespace = "mynamespace"
  // End of mandatory "Metadata"

  // Mandatory "Basic configuration"
  domains = ["mydomain.internal"]
  https {
    add_hsts      = true
    http_redirect = true
    tls_parameters {
      no_mtls = true
      tls_config {
        default_security = true
      }
      tls_certificates {
        certificate_url = "string:///<base64-encoded public part of the TLS certificate>"
        private_key {
          blindfold_secret_info {
            location = "string:///<content of the blindfolded-privkey file>"
          }
          secret_encoding_type = "EncodingNone"
        }
      }
    }
  }

  default_route_pools {
    pool {
      name      = "sample-https-origin-pool"
      namespace = "mynamespace"
    }
    weight = 1
  }

  // Mandatory "VIP configuration"
  advertise_on_public_default_vip = true
  // End of mandatory "VIP configuration"

  // Mandatory "Security configuration"
  no_service_policies = true
  no_challenge        = true
  disable_rate_limit  = true
  disable_waf         = true
  // End of mandatory "Security configuration"

  // Mandatory "Load Balancing Control"
  source_ip_stickiness = true
  // End of mandatory "Load Balancing Control"
}
//==========================================================================
```
Bypass Azure Login Page by adding a login hint in the SAML Request

Problem this snippet solves:

Enhance the login experience between F5 (SAML SP) and Azure (SAML IdP) by injecting the "email address" as a login hint on behalf of the user. This enhances the user experience because it bypasses the Azure login page and avoids the user typing his login/email address twice.

Example of use:

Your application needs to be accessed by both "domain users" and "federated users". Your application is protected by F5 APM with a login page that asks for the user's email address. Based on the email address value you determine the domain:

- If the user is a "domain user", you authenticate him against the local directory (AD Auth, LDAP Auth or ...)
- If the user is a "federated user" (such as xxx@gmail.com), you send him to the Azure IdP, which manages all federated access.

This snippet is particularly interesting for the "federated user" scenario because:

- Without this code, a "federated user" needs to type his login twice: first on the F5 login page and a second time on the Azure login page.
- With this code, a "federated user" needs to type his login only on the F5 login page.

How to use this snippet:

1. Go to "Access > Federation > SAML Service Provider > External IDP Connectors" and edit the "External IdP Connectors" object that matches the Azure IdP app.
2. In the "Single Sign On Service Settings", add the string "?login_hint=" at the end of the "Single Sign On Service URL". The string "?login_hint=" is added here only so it can be uniquely identified and replaced later by the iRule.
3. Finally, apply the iRule below to the virtual server that has the Access Policy enabled, holds the SAML SP role, and is bound to the Azure IdP application. The iRule simply catches the "Single Sign On Service URL" and replaces "?login_hint=" with "?login_hint=xxxx@gmail.com".

Code:

```tcl
when CLIENT_ACCEPTED {
    ACCESS::restrict_irule_events disable
}

when HTTP_RESPONSE_RELEASE {
    if { [string tolower [HTTP::header value "Location"]] contains "/saml2/?login_hint=" } {
        set user_login [ACCESS::session data get "session.logon.last.mail"]
        #log local0. "Before adding the hint [HTTP::header value "Location"]"
        set locationWithoutHint "?login_hint="
        set locationWithHint "?login_hint=$user_login"
        HTTP::header replace Location [string map -nocase "${locationWithoutHint} ${locationWithHint}" [HTTP::header Location]]
        #log local0. "After adding the hint [HTTP::header value "Location"]"
    }
}
```
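To make the rewrite concrete, here is a rough before/after view of the redirect the client receives. This is an illustration only: the tenant ID and email are placeholders, and the exact query-string layout APM produces may differ in your environment:

```
# Location header sent by APM before the iRule rewrite:
Location: https://login.microsoftonline.com/<tenant-id>/saml2/?login_hint=&SAMLRequest=...
# After the iRule replaces "?login_hint=" using session.logon.last.mail:
Location: https://login.microsoftonline.com/<tenant-id>/saml2/?login_hint=user@gmail.com&SAMLRequest=...
```

Azure reads the login_hint parameter and pre-fills the username, which is what allows its login page to be skipped.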
Create an HTTPS Origin based on public FQDN for VoltMesh

Problem this snippet solves:

How to create an HTTPS Origin that can be used in a VoltMesh HTTP or HTTPS Load-Balancer. This Origin is based on a public FQDN name.

How to use this snippet:

Pre-requirements:

Have a Volterra API Certificate. Please see this page for the API Certificate generation: https://volterra.io/docs/how-to/user-mgmt/credentials

Extract the certificate and the key from the .p12:

```
openssl pkcs12 -info -in certificate.p12 -out private_key.key -nodes -nocerts
openssl pkcs12 -info -in certificate.p12 -out certificate.cert -nokeys
```

Create a variables.tf Terraform variables file:

```hcl
variable "api_cert" {
  type    = string
  default = "/<full path to>/certificate.cert"
}

variable "api_key" {
  type    = string
  default = "/<full path to>/private_key.key"
}

variable "api_url" {
  type    = string
  default = "https://<tenant_name>.console.ves.volterra.io/api"
}
```

Create a main.tf Terraform file:

```hcl
terraform {
  required_version = ">= 0.12.9, != 0.13.0"
  required_providers {
    volterra = {
      source  = "volterraedge/volterra"
      version = ">=0.0.6"
    }
  }
}

provider "volterra" {
  api_cert = var.api_cert
  api_key  = var.api_key
  url      = var.api_url
}
```

In the directory where your terraform files are, run terraform init, then terraform apply.

Code:

```hcl
resource "volterra_origin_pool" "origin-dns" {
  name      = "origin-dns"
  namespace = "mynamespace"

  origin_servers {
    public_name {
      dns_name = "myorigin.mydomain.com"
    }
    labels = {}
  }

  use_tls {
    use_host_header_as_sni = true
    tls_config {
      default_security = true
    }
    skip_server_verification = true
    no_mtls                  = true
  }

  no_tls                 = false
  port                   = "443"
  endpoint_selection     = "LOCALPREFERED"
  loadbalancer_algorithm = "LB_OVERRIDE"
}
```
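This snippet creates only the origin pool. To expose it, reference it from a load-balancer's default_route_pools; below is a minimal sketch modeled on the HTTPS load-balancer snippet earlier in this collection. The resource name, namespace, domain, and the choice of a plain-HTTP listener are illustrative assumptions:

```hcl
resource "volterra_http_loadbalancer" "origin-dns-lb" {
  depends_on = [volterra_origin_pool.origin-dns]
  name       = "origin-dns-lb"
  namespace  = "mynamespace"

  domains = ["myservice.mydomain.com"]
  // Plain HTTP listener for brevity; see the earlier snippet in this
  // collection for the https {} variant with certificates
  http {}

  default_route_pools {
    pool {
      name      = "origin-dns"
      namespace = "mynamespace"
    }
    weight = 1
  }

  advertise_on_public_default_vip = true

  no_service_policies  = true
  no_challenge         = true
  disable_rate_limit   = true
  disable_waf          = true
  source_ip_stickiness = true
}
```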
Create a VPC VoltMesh AWS site (two interfaces node)

Problem this snippet solves:

How to create a VoltMesh node inside an existing VPC. The VoltMesh node will be a two-interface node and so can be used as both an ingress and egress gateway for the VPC.

How to use this snippet:

Pre-requisites: get and create the following from the AWS console:

- Get the ID of the VPC in which you want to deploy the VoltMesh node.
- Get the ID of the "workload subnet" where the resources you want to expose with the VoltMesh node are sitting.
- Create and get the IDs of the following: one subnet (/28, for instance) that will be used as the "outside" subnet for the VoltMesh node, i.e. handling the Internet connectivity, and one subnet (/28, for instance) that will be used as the "inside" subnet for the VoltMesh node.

For more information regarding the AWS concepts, please refer to: https://www.volterra.io/docs/how-to/site-management/create-aws-site

Have entered your AWS account credentials in the Volterra console. Please refer to: https://www.volterra.io/docs/how-to/site-management/cloud-credentials

Have a Volterra API Certificate. Please see this page for the API Certificate generation: https://volterra.io/docs/how-to/user-mgmt/credentials

Extract the certificate and the key from the .p12:

```
openssl pkcs12 -info -in certificate.p12 -out private_key.key -nodes -nocerts
openssl pkcs12 -info -in certificate.p12 -out certificate.cert -nokeys
```

Create a variables.tf Terraform variables file:

```hcl
variable "api_cert" {
  type    = string
  default = "/<full path to>/certificate.cert"
}

variable "api_key" {
  type    = string
  default = "/<full path to>/private_key.key"
}

variable "api_url" {
  type    = string
  default = "https://<tenant_name>.console.ves.volterra.io/api"
}
```

Create a main.tf Terraform file:

```hcl
terraform {
  required_version = ">= 0.12.9, != 0.13.0"
  required_providers {
    volterra = {
      source  = "volterraedge/volterra"
      version = ">=0.0.6"
    }
  }
}

provider "volterra" {
  api_cert = var.api_cert
  api_key  = var.api_key
  url      = var.api_url
}
```

In the directory where your terraform files are, run terraform init, then terraform apply.

Code:

```hcl
resource "volterra_aws_vpc_site" "aws-vpc-example" {
  name      = "aws-vpc-example"
  namespace = "system"

  aws_region    = "<aws region>"
  assisted      = false
  instance_type = "t3.xlarge"

  // AWS credentials entered in the Volterra Console
  aws_cred {
    name      = "<credential name>"
    namespace = "system"
    tenant    = "<tenant name>"
  }

  vpc {
    vpc_id = "<vpc id>"
  }

  ingress_egress_gw {
    aws_certified_hw         = "aws-byol-multi-nic-voltmesh"
    no_forward_proxy         = true
    no_global_network        = true
    no_inside_static_routes  = true
    no_outside_static_routes = true
    no_network_policy        = true
  }

  // Availability zones and subnet options for the Volterra node
  az_nodes {
    // AWS AZ
    aws_az_name = "<az name>"
    // Site local outside subnet
    outside_subnet {
      existing_subnet_id = "<outside subnet id>"
    }
    // Site local inside subnet
    inside_subnet {
      existing_subnet_id = "<inside subnet id>"
    }
    // Workload subnet
    workload_subnet {
      existing_subnet_id = "<workload subnet id>"
    }
  }

  // Mandatory
  logs_streaming_disabled = true
  // Mandatory
  no_worker_nodes = true
}
```
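Once the site is deployed, resources in the workload subnet can be published by pointing an origin pool at the site, reusing the site_locator pattern from the first Volterra snippet in this collection. A minimal sketch; the IP, namespace, and tenant values are placeholder assumptions:

```hcl
resource "volterra_origin_pool" "vpc-workload-pool" {
  name      = "vpc-workload-pool"
  namespace = "mynamespace"

  origin_servers {
    private_ip {
      ip = "10.0.1.10"
      // Workloads sit behind the node's inside interface in this layout
      inside_network = true
      site_locator {
        site {
          name      = volterra_aws_vpc_site.aws-vpc-example.name
          namespace = "system"
          tenant    = "<tenant name>"
        }
      }
    }
    labels = {}
  }

  no_tls                 = true
  port                   = "80"
  endpoint_selection     = "LOCALPREFERED"
  loadbalancer_algorithm = "LB_OVERRIDE"
}
```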
AWS S3 Proxy: JavaScript iRuleLX

Problem this snippet solves:

Create a secure proxy to AWS S3 via iRule/iRuleLX.

Related Article: Creating a Secure AWS S3 Proxy with F5 iRulesLX

How to use this snippet:

1. Install the iRule via an iRulesLX workspace
2. Create an iRulesLX plugin
3. Create an AWS role or IAM credentials
4. Create an FQDN pool to AWS S3
5. Create a virtual server
6. Enable OneConnect and WebAcceleration profiles
7. Assign the iRule to the virtual server

Code:

```javascript
var f5 = require('f5-nodejs');
var ilx = new f5.ILXServer();
var url = require('url');
var URI = require('urijs');
var AWS = require('aws-sdk');

// optionally use config.json with stored credentials, or assign a Role when running in AWS
//AWS.config.loadFromPath('./config.json');
var s3 = new AWS.S3();

ilx.addMethod('aws_s3_rpc_add_creds', function(req, res) {
  var path = req.params()[0];
  var params = { Bucket: "secure-bucket", Key: path };
  var signed_url = s3.getSignedUrl('getObject', params);
  var parsedUrl = new URI(signed_url);
  var q = parsedUrl.search(true);
  var expires = parseInt(q['Expires']);
  var expire_after = Math.round(expires - (new Date() / 1000));
  res.reply([parsedUrl.query(), expires, expire_after]);
});

ilx.listen();
```

Tested this on version: 13.0
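The companion TCL iRule is not included in this snippet (it is covered in the related article). As a hedged sketch of how the data-plane side could call the Node.js method above: the plugin and extension names are assumptions that must match your iRulesLX workspace, and the related article may perform the rewrite differently:

```tcl
when HTTP_REQUEST {
    # Plugin/extension names are placeholders for your iRulesLX workspace
    set handle [ILX::init "s3_proxy_plugin" "s3_proxy_extension"]
    if {[catch {ILX::call $handle aws_s3_rpc_add_creds [HTTP::path]} result]} {
        log local0.error "ILX call failed: $result"
        HTTP::respond 502 content "S3 signing unavailable"
        return
    }
    # The Node.js method replies with a list: query string, absolute expiry,
    # and seconds until expiry
    set signed_query [lindex $result 0]
    # Append the pre-signed query string so S3 accepts the request, and set
    # the Host header to match the bucket endpoint the URL was signed for
    HTTP::uri "[HTTP::path]?${signed_query}"
    HTTP::header replace Host "secure-bucket.s3.amazonaws.com"
}
```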
Slack Mutual TLS Recipe: Adding X-Client-Certificate-SAN header from client certificate

Problem this snippet solves:

The following is based on Slack's documentation on how to authenticate requests from Slack via mutual TLS and pass the information along, via an X-Client-Certificate-SAN header, to a service that is not capable of mutual TLS.

Adapted from: https://api.slack.com/docs/verifying-requests-from-slack#mutual_tls
Based on question from: https://devcentral.f5.com/s/question/0D51T00006n6YltSAE/extract-san-from-client-ssl-certificate-insert-into-http-header

How to use this snippet:

Attach to a virtual server that has both an HTTP and a clientssl profile. The clientssl profile must be configured for "require" or "request" in order to process the client certificate, and must use a CA certificate that verifies the client certificate is trusted. The iRule will replace any headers that are sent by the client.

Code:

```tcl
when HTTP_REQUEST {
    if { [SSL::cert 0] ne "" } {
        # extract SAN
        set santemp [findstr [X509::extensions [SSL::cert 0]] "Subject Alternative Name" 32 ","]
        # remove DNS: prefix
        set san [findstr $santemp "DNS" 4]
        # insert X-Client-Certificate-SAN header
        HTTP::header replace X-Client-Certificate-SAN $san
    } else {
        HTTP::header remove X-Client-Certificate-SAN
    }
}
```

Tested this on version: 11.5
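As a hedged illustration of what the two findstr calls do (the exact extension text varies by TMOS version and certificate, and the SAN value shown is an example taken from Slack's mutual TLS documentation):

```tcl
# [X509::extensions [SSL::cert 0]] returns one long string containing, among
# other extensions, something like:
#   ... Subject Alternative Name: DNS:platform-tls-client.slack.com, ...
# findstr <extensions> "Subject Alternative Name" 32 ","
#   starts 32 characters past the start of the match and reads up to the next
#   comma, yielding roughly:  DNS:platform-tls-client.slack.com
# findstr <that> "DNS" 4
#   skips past the "DNS:" prefix, yielding:  platform-tls-client.slack.com
```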
GTM Monitors internal LTM, but I need Public IP as Answer

Problem this snippet solves:

A WideIP is linked to an LTM virtual server that uses internal IP addresses, but the DNS should reply with external IP addresses. Although this is possible via the GUI, it is quite tricky to get the monitors and the translation right. For your convenience, here is an iRule that does just that.

How to use this snippet:

Cut and paste this code into a new iRule under DNS > Delivery > iRules > iRule List, and then add it to the DNS listener. This iRule fixes two A records:

- a.a.a.a = internal IP address #1
- aaa.aaa.com. = A record #1
- b.b.b.b = external IP address #1
- c.c.c.c = internal IP address #2
- ccc.ccc.com. = A record #2
- d.d.d.d = external IP address #2

Code:

```tcl
when DNS_RESPONSE {
    set rrs [DNS::answer]
    foreach rr $rrs {
        if { [DNS::rdata $rr] eq "a.a.a.a" } {
            DNS::answer clear
            DNS::answer insert [DNS::rr "aaa.aaa.com. IN A b.b.b.b"]
        } elseif { [DNS::rdata $rr] eq "c.c.c.c" } {
            DNS::answer clear
            DNS::answer insert [DNS::rr "ccc.ccc.com. IN A d.d.d.d"]
        }
    }
}
```
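A quick way to verify the rewrite is to query the listener with dig before and after applying the iRule; a sketch using the placeholder names above:

```
# Without the iRule, the WideIP answers with the internal address:
$ dig @<listener-ip> aaa.aaa.com +short
a.a.a.a
# With the iRule on the listener, the answer is rewritten to the external address:
$ dig @<listener-ip> aaa.aaa.com +short
b.b.b.b
```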