BIG-IP Next Central Manager API with Postman
In my last article I dove into the Central Manager AS3 endpoints with the cURL command. As I was preparing for this one, I thought it would work better as a live stream than a traditional article. Here's the stream, which you can watch in the replay, and the resources I mentioned on the stream are posted below.

Show description: I've been working with the BIG-IP Next API from the API reference and with curl on the command line, and I gotta tell you, as much as I don't love Postman, it's super handy when learning an API. In this episode of DevCentral Connects, I'll download the collection from the Next documentation, get the environment variables set up, and walk through some of the tasks available in the collection to start working with the BIG-IP Next API.

Resources

- BIG-IP Next Articles on DevCentral
- BIG-IP Next Academy group on DevCentral
- Embracing AS3: Foundations
- BIG-IP Next automation: AS3 basics
- BIG-IP Next automation: Working with the AS3 endpoints
- 20.0 Postman collection
- 20.1 Postman collection
- 20.2 Postman collection
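If you'd rather script the token step than set it up by hand in the Postman environment, here is a minimal Python sketch, assuming the Central Manager login call used by the collection is POST /api/login returning an access_token field; verify both against the collection and environment you downloaded before relying on it.

```python
# Minimal sketch: fetch a Central Manager auth token for later API calls.
# Assumptions: the CM login endpoint is POST /api/login and the response carries
# "access_token" - check the Postman collection/environment for the exact names.
import os
import requests

cm = os.environ["CM_ADDRESS"]            # Central Manager address
user = os.environ.get("CM_USER", "admin")
password = os.environ["CM_PASSWORD"]     # export CM_PASSWORD before running

resp = requests.post(
    f"https://{cm}/api/login",
    json={"username": user, "password": password},
    verify=False,                        # lab-only: self-signed certificates
)
resp.raise_for_status()
token = resp.json()["access_token"]
print(token[:20] + "...")                # goes in an Authorization: Bearer header
```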
BIG-IP Next – Introduction to the Blueprints API

If you have ever attempted to automate the BIG-IP configuration, you are probably familiar with F5's AS3 extension. Although AS3 is supported in BIG-IP Next, there is another API that might be the better option if you haven't yet started your migration journey. This is called the Blueprints API. In this article, I want to show you how to use it to automate your applications with AS3.

Overview

When you use the BIG-IP Next GUI, you instantly see the benefits of having a centrally managed configuration across all your BIG-IP instances. The steps to create an application service in the GUI now have a siloed setup where you define 4 main sections separately:

- Application Properties
- Virtual Server Properties
- Pool Properties
- Deployment Properties

Each of these sections allows you to adjust areas of your application service while still having a way to manage configurations across multiple BIG-IP instances. In other words, you can define one pool under the pool properties, but still have different pool members under the deployment properties for each of your BIG-IP instances. This creates a centrally managed application service that does not require the exact same configuration in each environment.

When you perform these tasks in the GUI, BIG-IP Next is generating its own API calls internally. It takes each of the configuration items outlined in the 4 sections above and defines the application service as a Blueprint. This Blueprint is then used to modify anything about the configuration/deployment moving forward. If you aren't a fan of using a GUI and you are trying to automate, this same exact API is exposed to you as well. This means we get to use the same centrally managed configuration in our API calls. It also means that we can easily automate existing application services by simply using the API to manage them moving forward. So what does this Blueprint API look like?
Below is sample JSON used to create a Blueprint called "bobs-blueprint":

```json
{
  "name": "bobs-blueprint",
  "set_name": "Examples",
  "template_name": "http",
  "parameters": {
    "application_description": "",
    "application_name": "bobs-blueprint",
    "enable_Global_Resiliency": false,
    "pools": [
      {
        "loadBalancingMode": "round-robin",
        "monitorType": [ "http" ],
        "poolName": "juice",
        "servicePort": 8080
      }
    ],
    "virtuals": [
      {
        "FastL4_TOS": 0,
        "FastL4_idleTimeout": 600,
        "FastL4_looseClose": true,
        "FastL4_looseInitialization": true,
        "FastL4_pvaAcceleration": "full",
        "FastL4_pvaDynamicClientPackets": 1,
        "FastL4_pvaDynamicServerPackets": 0,
        "FastL4_resetOnTimeout": true,
        "FastL4_tcpCloseTimeout": 43200,
        "FastL4_tcpHandshakeTimeout": 43200,
        "InspectionServicesEnum": [],
        "TCP_idle_timeout": 60,
        "UDP_idle_timeout": 60,
        "ciphers": "DEFAULT",
        "ciphers_server": "DEFAULT",
        "enable_Access": false,
        "enable_FastL4": false,
        "enable_FastL4_DSR": false,
        "enable_HTTP2_Profile": false,
        "enable_HTTP_Profile": false,
        "enable_InspectionServices": false,
        "enable_SsloPolicy": false,
        "enable_TCP_Profile": false,
        "enable_TLS_Client": false,
        "enable_TLS_Server": false,
        "enable_UDP_Profile": false,
        "enable_WAF": false,
        "enable_iRules": false,
        "enable_mirroring": true,
        "enable_snat": true,
        "iRulesEnum": [],
        "multiCertificatesEnum": [],
        "perRequestAccessPolicyEnum": "",
        "pool": "juice",
        "snat_addresses": [],
        "snat_automap": true,
        "tls_c_1_1": false,
        "tls_c_1_2": true,
        "tls_c_1_3": false,
        "tls_s_1_2": true,
        "tls_s_1_3": false,
        "trustCACertificate": "",
        "virtualName": "bobs-vs",
        "virtualPort": 80
      }
    ]
  }
}
```

As you can see, the structure of this JSON is siloed in a very similar way to the GUI.

Note: For those readers who are wondering where the Deployment section is, that is handled in a separate API call after the blueprint has been created. I'll discuss that in more detail later.

In the sections below, I'll review a few of the API endpoints you can use, with steps on how to perform the following common tasks:

- Viewing an existing Blueprint
- Creating a new Blueprint
- Deploying a Blueprint

Viewing an Existing Blueprint

Before we start creating a new Blueprint from scratch, it is probably a good idea to explain how we can view a list of our current Blueprints. To do so, we simply make a GET request to the following API endpoint:

GET https://{{bigip_next_cm_mgmt_ip}}/api/v1/spaces/default/appsvcs/blueprints

This will return a list of every Blueprint created by the GUI and/or API.
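If you prefer to script this call rather than run it from Postman or cURL, a minimal Python sketch looks like the following. The bearer token, the environment variables, and the certificate handling are assumptions for the sketch, not part of the endpoint itself.

```python
# Minimal sketch: list existing Blueprints via the Central Manager API.
# Assumes CM_ADDRESS and CM_TOKEN are set; verify=False is for lab self-signed certs only.
import os
import requests

cm = os.environ["CM_ADDRESS"]
headers = {"Authorization": f"Bearer {os.environ['CM_TOKEN']}"}

resp = requests.get(
    f"https://{cm}/api/v1/spaces/default/appsvcs/blueprints",
    headers=headers,
    verify=False,
)
resp.raise_for_status()

# Each Blueprint is returned under _embedded.appsvcs, as the example output below shows.
for bp in resp.json().get("_embedded", {}).get("appsvcs", []):
    print(bp["name"], bp["id"])
```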
Below is an example output:

```json
{
  "_embedded": {
    "appsvcs": [
      {
        "_links": {
          "self": {
            "href": "/api/v1/spaces/default/appsvcs/blueprints//3f2ef264-cf09-45c8-a925-f2c8fccf09f6"
          }
        },
        "created": "2024-06-25T13:32:22.160399Z",
        "deployments": [
          {
            "id": "1e5f9c06-8800-4ab7-ad5e-648d55b83b68",
            "instance_id": "ce179e66-b075-4068-bc4e-8e212954da49",
            "target": { "address": "10.2.1.3" },
            "parameters": {
              "pools": [
                {
                  "isServicePool": false,
                  "poolMembers": [
                    { "address": "10.1.3.100", "name": "old" },
                    { "address": "10.2.3.100", "name": "new" }
                  ],
                  "poolName": "juice"
                }
              ],
              "virtuals": [
                {
                  "allow_networks": [],
                  "enable_allow_networks": false,
                  "virtualAddress": "10.2.2.11",
                  "virtualName": "juice-shop"
                }
              ]
            },
            "last_successful_deploy_time": "2024-06-25T17:46:00.193649Z",
            "modified": "2024-06-25T17:46:00.193649Z",
            "last_record": {
              "id": "cb6a06a1-c66d-41c3-a747-9a27b101a0f1",
              "task_id": "6a65642d-810b-4194-9693-91a15f6d1ef0",
              "created_application_path": "/applications/tenantSrLEVevFQnWwXT90590F3USQ/juice-shop",
              "start_time": "2024-06-25T17:45:55.392732Z",
              "end_time": "2024-06-25T17:46:00.193649Z",
              "status": "completed"
            }
          },
          {
            "id": "001e14e8-7900-482a-bdd4-aca35967a5cc",
            "instance_id": "0546acf5-3b88-422d-a948-28bbf0973212",
            "target": { "address": "10.2.1.4" },
            "parameters": {
              "pools": [
                {
                  "isServicePool": false,
                  "poolMembers": [
                    { "address": "10.1.3.100", "name": "old" },
                    { "address": "10.2.3.100", "name": "new" }
                  ],
                  "poolName": "juice"
                }
              ],
              "virtuals": [
                {
                  "allow_networks": [],
                  "enable_allow_networks": false,
                  "virtualAddress": "10.2.2.12",
                  "virtualName": "juice-shop"
                }
              ]
            },
            "last_successful_deploy_time": "2024-06-25T17:46:07.94836Z",
            "modified": "2024-06-25T17:46:07.94836Z",
            "last_record": {
              "id": "8a8a29be-e56b-4ef1-bf6a-92f7ccc9e9b7",
              "task_id": "0632cc18-9f0f-4a5e-875a-0d769b02e19b",
              "created_application_path": "/applications/tenantSrLEVevFQnWwXT90590F3USQ/juice-shop",
              "start_time": "2024-06-25T17:45:55.406377Z",
              "end_time": "2024-06-25T17:46:07.94836Z",
              "status": "completed"
            }
          }
        ],
        "deployments_count": { "total": 2, "completed": 2 },
        "description": "",
        "fqdn": "",
        "gslb_enabled": false,
        "id": "3f2ef264-cf09-45c8-a925-f2c8fccf09f6",
        "modified": "2024-06-25T17:32:09.174243Z",
        "name": "juice-shop",
        "set_name": "Examples",
        "successful_instances": 2,
        "template_name": "http",
        "tenant_name": "tenantSrLEVevFQnWwXT90590F3USQ",
        "type": "FAST"
      }
    ]
  },
  "_links": {
    "self": { "href": "/api/v1/spaces/default/appsvcs/blueprints/" }
  },
  "count": 1,
  "total": 1
}
```

In the example above, I only have one Blueprint in my BIG-IP Next CM instance. If we look deeper into the output, we can start to see some detail around the configuration parameters as well as the deployment parameters for each BIG-IP. There is also an ID field in the JSON that we can use to reference a specific Blueprint. In the example above, we have:

"id": "3f2ef264-cf09-45c8-a925-f2c8fccf09f6"

This is important because as we start to modify, deploy, or delete our existing blueprints, we will need this ID to be able to make changes. We can also use this ID to view more detail on a specific Blueprint rather than an entire list of all Blueprints. To do so, we simply make a request to the endpoint below:

GET https://{{bigip_next_cm_mgmt_ip}}/api/v1/spaces/default/appsvcs/blueprints/{{Blueprint_id}}

In our example, this would be:

GET https://{{bigip_next_cm_mgmt_ip}}/api/v1/spaces/default/appsvcs/blueprints/3f2ef264-cf09-45c8-a925-f2c8fccf09f6

The output from this API call provides robust detail on the Blueprint.
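For reference, the same pair of calls can be chained in a short script, pulling the ID out of the list response and then requesting that one Blueprint. This is a sketch with the same token and environment-variable assumptions as the earlier example.

```python
# Minimal sketch: take the first Blueprint ID from the list call, then fetch
# the full detail for that single Blueprint.
import json
import os
import requests

cm = os.environ["CM_ADDRESS"]
headers = {"Authorization": f"Bearer {os.environ['CM_TOKEN']}"}
base = f"https://{cm}/api/v1/spaces/default/appsvcs/blueprints"

listing = requests.get(base, headers=headers, verify=False).json()
bp_id = listing["_embedded"]["appsvcs"][0]["id"]   # e.g. 3f2ef264-cf09-45c8-a925-f2c8fccf09f6

detail = requests.get(f"{base}/{bp_id}", headers=headers, verify=False)
detail.raise_for_status()
print(json.dumps(detail.json(), indent=2))
```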
It is probably too much detail to paste in an article like this, but there are some examples here if interested: https://clouddocs.f5.com/products/bigip-next/mgmt-api/latest/ApiReferences/bigip_public_api_ref/r_openapi-next.html#tag/Application-Services/operation/GetApplicationByID

Viewing a Blueprint like this can provide us with the latest configuration of our application service so that we can ensure we are using the most up-to-date files. It can also provide us with some templates/example configurations that we can use to create new application services moving forward.

Creating a new Blueprint

Now that we have a pretty good understanding of the JSON structure and we know how to view some examples of Blueprints that have already been created, we can simply use them as a reference and create our own Blueprint from scratch. The basic format for creating a new Blueprint is below:

```json
{
  "name": <blueprint_name>,
  "set_name": <template_set>,
  "template_name": <template_name>,
  "parameters": {
    "application_description": <simple_description>,
    "application_name": <blueprint_name>,
    "enable_Global_Resiliency": false,
    "pools": [
      {
        <pool_configuration_parameters>
      }
    ],
    "virtuals": [
      {
        <virtual_server_configuration_parameters>
      }
    ]
  }
}
```

More detail on each of the variable values below:

- <blueprint_name> - This is the name you choose for your blueprint. I generally recommend using the same name in both the "name" field and the "application_name" field, which is why the JSON above has it in both places.
- <template_set> - This is the template set containing your FAST template. If you are using the default templates provided to you, this value would be "Examples".
- <template_name> - This is the specific FAST template you are going to use for the configuration. If you are using the default template provided to you, this value would be "http".
- <simple_description> - This can be any short description you would like to use for your application service.
- <pool_configuration_parameters> - This will be the list of parameters that you are going to define for your pool. You do not have to fill in every single value if the FAST template contains values for that field.
- <virtual_server_configuration_parameters> - This will be the list of parameters that you are going to define for your virtual server. You do not have to fill in every single value if the FAST template contains values for that field.

Important Note: When creating your JSON, you will be defining a FAST template to use along with your application service (just like you would in the GUI). This means that you do not have to fill out every value under your pool and virtual server configuration. It will take in the values provided from your FAST template.

With our guide above, we can now make a new Blueprint for the "bobs-blueprint" example I referenced earlier:

```json
{
  "name": "bobs-blueprint",
  "set_name": "Examples",
  "template_name": "http",
  "parameters": {
    "application_description": "This is a test of the blueprints api",
    "application_name": "bobs-blueprint",
    "pools": [
      {
        "loadBalancingMode": "round-robin",
        "monitorType": [ "http" ],
        "poolName": "juice_pool",
        "servicePort": 3000
      }
    ],
    "virtuals": [
      {
        "pool": "juice_pool",
        "virtualName": "blueprint_vs",
        "virtualPort": 80
      }
    ]
  }
}
```

As you can see, this is a much more condensed version of the original JSON I had for the example shown in the beginning of this article.
Again, this is because we are referencing the FAST template "Examples/http" and taking in those values to configure the rest of the application service. With our newly created JSON, the last step to creating this Blueprint is to send it in a POST request to the following API endpoint:

POST https://{{bigip_next_cm_mgmt_ip}}/api/v1/spaces/default/appsvcs/blueprints

After sending our POST, you'll notice we are given the "id" of the Blueprint in the response. As mentioned above, we can use this ID to modify, deploy, etc.

```json
{
  "_links": {
    "self": {
      "href": "/api/v1/spaces/default/appsvcs/blueprints/9c35a614-65ac-4d18-8082-589ea9bc78d9/deployments"
    }
  },
  "deployments": [
    {
      "deploymentID": "1a153f44-acaf-4487-81c7-61b8b5498627",
      "instanceID": "4ef739d1-9ef1-4eb7-a5bb-36c6d1334b16",
      "status": "pending",
      "taskID": "518534c8-9368-48dd-b399-94f55e72d5a7",
      "task_type": "CREATE"
    },
    {
      "deploymentID": "82ca8d6d-dbfd-43a8-a604-43f170d9f190",
      "instanceID": "0546acf5-3b88-422d-a948-28bbf0973212",
      "status": "pending",
      "taskID": "7ea24157-1412-4963-9560-56fb0ab78d8c",
      "task_type": "CREATE"
    }
  ],
  "id": "9c35a614-65ac-4d18-8082-589ea9bc78d9"
}
```

Now that the Blueprint has been created, we can go into our BIG-IP Next CM GUI and see our newly created application service. You'll notice that our newly created application service is in Draft mode. This is because we have not deployed the service yet. We'll discuss that in the next section.

Deploying a Blueprint

Once we have our Blueprint created, the final step is to configure the deployment. As discussed above, this is done through a separate API call. The format for a deployment is as follows:

```json
{
  "deployments": [
    {
      "parameters": {
        "pools": [
          {
            "poolName": <pool_name>,
            "poolMembers": [
              { "name": <node1_name>, "address": <node1_ip_address> },
              { "name": <node2_name>, "address": <node2_ip_address> }
            ]
          }
        ],
        "virtuals": [
          {
            "virtualName": <virtual_server_name>,
            "virtualAddress": <virtual_server_ip_address>
          }
        ]
      },
      "target": {
        "address": <bigip_instance_ip_address>
      },
      "allow_overwrite": true
    }
  ]
}
```

Keep in mind that some of these values are referencing names from your Blueprint configuration. These names need to be exactly the same. See below for more detail on each of these values:

- <pool_name> - This references the pool from your Blueprint. This value must match what was used for "poolName" in the pool configuration of the Blueprint.
- <node1_name> - This is any name you choose to describe your node in the pool.
- <node1_ip_address> - This is the specific IP address for your node.
- <node2_name> - If you are using more than one node, this would be the name you choose to represent your second node in the pool. This format can repeat for 3 nodes, 4 nodes, etc.
- <node2_ip_address> - If you are using more than one node, this would be the IP address of your second node. This format can repeat for 3 nodes, 4 nodes, etc.
- <virtual_server_name> - This references the virtual server from your Blueprint. This value must match what was used for "virtualName" in the virtuals configuration of the Blueprint.
- <virtual_server_ip_address> - This would be the IP address of the Virtual Server being deployed on the BIG-IP.
- <bigip_instance_ip_address> - This is the IP address of the BIG-IP instance we are deploying to.

Important Note: If you are deploying the same application to more than one BIG-IP instance, you can include multiple deployment blocks in your API call.
Using the format above, we can now create our deployment for "bobs-blueprint":

```json
{
  "deployments": [
    {
      "parameters": {
        "pools": [
          {
            "poolName": "juice_pool",
            "poolMembers": [
              { "name": "node1", "address": "10.1.3.100" }
            ]
          }
        ],
        "virtuals": [
          {
            "virtualName": "blueprint_vs",
            "virtualAddress": "10.2.2.15"
          }
        ]
      },
      "target": {
        "address": "10.2.1.3"
      },
      "allow_overwrite": true
    },
    {
      "parameters": {
        "pools": [
          {
            "poolName": "juice_pool",
            "poolMembers": [
              { "name": "node1", "address": "10.2.3.100" }
            ]
          }
        ],
        "virtuals": [
          {
            "virtualName": "blueprint_vs",
            "virtualAddress": "10.2.2.20"
          }
        ]
      },
      "target": {
        "address": "10.2.1.4"
      },
      "allow_overwrite": true
    }
  ]
}
```

In this example deployment, I am deploying to two separate BIG-IP instances (10.2.1.3 and 10.2.1.4). This is where we can really start to see the value of the Blueprints API. With this structure, I can use the same pool and virtual server setup from our Blueprint, while still using different Virtual Server and Node IP addresses for the deployment at each instance. All of this is done with one API call.

The final step is to send this in a POST request to our deployments endpoint. The endpoint is very similar to viewing a blueprint, as it uses our same Blueprint ID. See the format below:

POST https://{{bigip_next_cm_mgmt_ip}}/api/v1/spaces/default/appsvcs/blueprints/{{Blueprint_id}}/deployments

Using the ID from the response we received after creating our example blueprint, our endpoint would be:

POST https://{{bigip_next_cm_mgmt_ip}}/api/v1/spaces/default/appsvcs/blueprints/9c35a614-65ac-4d18-8082-589ea9bc78d9/deployments

After sending our POST, we can go back to the BIG-IP Next CM GUI and see our application service is no longer considered a Draft. If we click into the application service, we'll see our two deployments are up and healthy.

Conclusion

Hopefully after reading this article, you can see the value of using the Blueprints API for your automation. As an alternative to other automation methods, it can provide benefits such as:

- The same centrally managed format/structure as GUI-created applications
- Since the BIG-IP Next CM GUI is already creating these JSON files under the hood, we can easily automate existing applications by using the Blueprints API for them moving forward
- Deployments are handled separately from your application configuration
- You can deploy your application service to multiple BIG-IP instances at once

Combining the Blueprints API with FAST templates allows you to make application on-boarding much more streamlined. If you liked this article and are looking for more information on our Blueprints API, please visit the API documentation here: https://clouddocs.f5.com/products/bigip-next/mgmt-api/latest/ApiReferences/bigip_public_api_ref/r_openapi-next.html#tag/Application-Services/operation/GetAllApplications
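To tie the two calls together, here is a minimal Python sketch of the create-then-deploy flow described above. The blueprint and deployment payloads are the ones from this article; the bearer token, environment variables, and certificate handling are assumptions for the sketch, not part of the API.

```python
# Minimal sketch: create the "bobs-blueprint" Blueprint, then deploy it to one instance.
# Assumes CM_ADDRESS and CM_TOKEN environment variables; verify=False is lab-only.
import os
import requests

cm = os.environ["CM_ADDRESS"]
headers = {"Authorization": f"Bearer {os.environ['CM_TOKEN']}"}
base = f"https://{cm}/api/v1/spaces/default/appsvcs/blueprints"

blueprint = {
    "name": "bobs-blueprint",
    "set_name": "Examples",
    "template_name": "http",
    "parameters": {
        "application_description": "This is a test of the blueprints api",
        "application_name": "bobs-blueprint",
        "pools": [{"loadBalancingMode": "round-robin", "monitorType": ["http"],
                   "poolName": "juice_pool", "servicePort": 3000}],
        "virtuals": [{"pool": "juice_pool", "virtualName": "blueprint_vs", "virtualPort": 80}],
    },
}

# Create the Blueprint and capture its ID from the response.
create = requests.post(base, json=blueprint, headers=headers, verify=False)
create.raise_for_status()
bp_id = create.json()["id"]

deployment = {
    "deployments": [{
        "parameters": {
            "pools": [{"poolName": "juice_pool",
                       "poolMembers": [{"name": "node1", "address": "10.1.3.100"}]}],
            "virtuals": [{"virtualName": "blueprint_vs", "virtualAddress": "10.2.2.15"}],
        },
        "target": {"address": "10.2.1.3"},
        "allow_overwrite": True,
    }]
}

# Deploy the Blueprint to the target BIG-IP Next instance.
deploy = requests.post(f"{base}/{bp_id}/deployments", json=deployment,
                       headers=headers, verify=False)
deploy.raise_for_status()
print(deploy.json())
```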
How to secure egress with F5 Service Proxy for Kubernetes

Outline:

- Securing Egress Challenges
- How F5 can help
- Technical bit on how it works

Getting traffic into your clusters to your workloads is just a small part of the cluster admin's tasks, and there are many options available. Controlling the packets going out is harder and often ignored. This makes your clusters more vulnerable to security risks because they don't follow the same strict rules as your traditional networks. This article will dive deeper into how SPK can control traffic exiting your clusters, even when your application workload uses multus to attach additional external interfaces.

Secure Egress Challenges

By default, a pod deployed using the calico CNI will follow the default route to get out of the cluster. Traffic will look like it's coming from the worker host's external IP address on the management interface. While Kubernetes NetworkPolicies can be used for egress, it becomes painful to manage the lifecycle of hundreds or thousands of policies across all namespaces as the cluster grows. If you deploy a pod with multus interfaces, as commonly seen with telco applications, you add another way for that pod to bypass any NetworkPolicies applied within the cluster. What if there was a way to manage egress dynamically (as pods are spun up and down) and easily, so that the cluster admin could centrally configure and control traffic flowing out of the cluster?

How F5 can help

Service Proxy for Kubernetes (SPK) is a cloud-native application traffic management solution, designed for communication service provider (CoSP) 5G networks and other application workloads. With SPK and its Calico egress gateway feature, managing a pod's default calico network interface as well as any multus interfaces becomes easy and consistent with the CSRC daemonset. Kernel routes are automatically configured so that the pods' traffic will always be routed via the SPK pod, where you can apply consistent, namespace-aware network policies, source NAT translation, and other controls. If the "watched" application workload is deleted, the corresponding host rules also get removed.

Technical Overview

This section will provide an overview of how to configure the above scenario.

Host Prerequisites

On the host, two shims of type macvlan bridge are created on physical interfaces, one for the application pod's calico traffic and one for the macvlan traffic, which will forward packets on to SPK. These interfaces allow connectivity to the SPK's "internal" and "external2" interfaces, respectively.

```
ip link add spk-shim link ens224 type macvlan mode bridge
ip addr add 10.1.30.244/24 dev shim1
ip link set shim1 up

ip link add spk-shim2 link ens256 type macvlan mode bridge
ip addr add 10.1.10.244/24 dev shim2
ip link set shim2 up
```

Application Prerequisites and Configuration

In the SPK controller values.yaml file, configure your application workload namespaces in the watchNamespace block.

```yaml
watchNamespace:
  - "spk-apps"
  - "spk-apps-2"
```

Since we want SPK to do the source NAT for pod egress traffic, we create an IPPool with natOutgoing set to false. This IPPool will be used by the applications.

```yaml
apiVersion: crd.projectcalico.org/v1
kind: IPPool
metadata:
  name: app-ip-pool
spec:
  cidr: 10.124.0.0/16
  ipipMode: Always
  natOutgoing: false
```

Ensure that the application namespaces are annotated like below to use the IPPool.

```
kubectl annotate namespace spk-apps "cni.projectcalico.org/ipv4pools"=[\"app-ip-pool\"]
kubectl annotate namespace spk-apps-2 "cni.projectcalico.org/ipv4pools"=[\"app-ip-pool\"]
```

Deploy your application.
See below for an example deployment manifest for the application. Note that I'm attaching a secondary macvlan interface, which is in addition to the default calico interface. It will get an IP address automatically as configured in the corresponding NetworkAttachmentDefinition. Note the specific labels used by SPK, which allow you to enable traffic routing to SPK on a per-application basis. Additionally, the enableSecureSPK=true label will instruct SPK to create additional listeners that will pick up traffic coming from the pod's secondary macvlan interface. (Will show these listeners later.)

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  annotations:
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      annotations:
        k8s.v1.cni.cncf.io/networks: '[ { "name": "macvlan-conf-ens256-myapp1" } ]'
      labels:
        app: nginx
        enableSecureSPK: "true"
        enablePseudoCNI: "true"
        secureSPKPort: "8050"
        secureSPKCNFPodIfName: "net1"
        secondaryCNINodeIfName: "spk-shim2"
        primaryCNINodeIfName: "spk-shim"
        secureSPKNetAttachDefName: "macvlan-conf-ens256"
        secureSPKEgressVlanName: "external"
```

SPK Configuration

Deploy the custom resource that will configure a listener that does two things:

- listen for traffic coming from the internal vlan, or the calico interface of targeted application pods
- SNAT the traffic so that the source IP is an IP address of SPK

```yaml
apiVersion: "k8s.f5net.com/v1"
kind: F5SPKEgress
metadata:
  name: egress-crd
  namespace: ns-f5-spk
spec:
  #leave commented out for snat automap
  #egressSnatpool: "snatpool-1"
  dualStackEnabled: false
  maxTmmReplicas: 1
  vlans:
    vlanList: [internal]
    disableListedVlans: false
```

Next, we deploy the CSRC daemonset that dynamically creates the kernel rules and routes for us. Note that I am setting the daemonsetMode to "pseudoCNI", which means I want to route both primary (calico) and secondary interface traffic to SPK.

values-csrc.yaml:

```yaml
image:
  repository: gitlab.tky.lab:5050/registry/spk/200

# daemonset mode, regular, secureSPK, or pseudoCNI
#daemonsetMode: "regular"
daemonsetMode: "pseudoCNI"
ipFamily: "ipv4"

imageCredentials:
  name: f5-common-pull-creds

config:
  iptableid: 200
  interfacename: "spk-shim"
  #tmmpodinterfacename: "internal"

json:
  ipPoolCidrInfo:
    cidrList:
      - name: cluster-cidr0
        value: "172.21.107.192/26"
      - name: cluster-cidr1
        value: "172.21.66.0/26"
      - name: cluster-cidr2
        value: "10.124.18.192/26"
      - name: node-cidr0
        value: "10.1.11.0/24"
      - name: node-cidr1
        value: "10.1.10.0/24"
    ipPoolList:
      - name: default-ipv4-ippool
        value: "172.21.64.0/18"
      - name: spk-app1-pool
        value: "10.124.0.0/16"
```

Testing

You can then log onto the worker node that is hosting the applications and confirm the routes and rules are created. Essentially, the rules are making calico interfaces use a custom route table that ensures that the default route is via the SPK.

```
# ip rule
0:      from all lookup local
32254:  from all to 172.21.107.192/26 lookup main
32254:  from all to 172.21.66.0/26 lookup main
32254:  from all to 10.124.18.192/26 lookup main
32254:  from all to 172.28.15.0/24 lookup main
32254:  from all to 10.1.10.0/24 lookup main
32257:  from 10.124.18.207 lookup ns-f5-spkshim1ipv4257 <--match on app pod1 calico IP!!!
32257:  from 10.124.18.211 lookup ns-f5-spkshim1ipv4257 <--match on app pod2 calico IP!!!
32258:  from 10.1.10.171 lookup ns-f5-spkshim2ipv4258 <--match on app pod1 macvlan IP!!!
32258:  from 10.1.10.170 lookup ns-f5-spkshim2ipv4258 <--match on app pod2 macvlan IP!!!
32766:  from all lookup main
32767:  from all lookup default

# ip route show table ns-f5-spkshim1ipv4257
default via 10.1.30.242 dev shim1
10.1.30.242 via 10.1.30.242 dev shim1

# ip route show table ns-f5-spkshim2ipv4258
default via 10.1.10.160 dev shim2
10.1.10.160 via 10.1.10.160 dev shim2
```

If I then try to execute a curl command towards a server that exists in a network segment beyond SPK, the application pod will hit the CSRC-configured ip rule and then be forwarded to its new default gateway, which is SPK. Since SPK has Source NAT enabled, the "Client IP" from the server's perspective is the self-IP of SPK. This means you can apply firewall policies to application workloads in a deterministic way, as well as have visibility into what kind of traffic is coming out of your clusters.

```
k exec -it nginx-7d7699f86c-hsx48 -n my-app1 -- curl 10.1.70.30
================================================
================================================
Node Name: F5 Docker vLab
Short Name: server.tky.f5se.com

Server IP: 10.1.70.30
Server Port: 80

Client IP: 10.1.30.242
Client Port: 59248

Client Protocol: HTTP
Request Method: GET
Request URI: /

host_header: 10.1.70.30
user-agent: curl/7.88.1
```

A simple tcpdump command run in the debug container of SPK confirms that the pod's calico interface IP (10.124.18.192) is the source IP of the incoming traffic on SPK, and after Source NAT is applied using the self-IP of SPK (10.1.30.242), the packet is sent out towards the server.

```
/tcpdump -nni 0.0 tcp port 80
----snip----
12:34:51.964200 IP 10.124.18.192.48194 > 10.1.70.30.80: Flags [P.], seq 1:75, ack 1, win 225, options [nop,nop,TS val 4077628853 ecr 777672368], length 74: HTTP: GET / HTTP/1.1 in slot1/tmm0 lis=egress-ipv4 port=1.1 trunk=
----snip----
12:34:51.964233 IP 10.1.30.242.48194 > 10.1.70.30.80: Flags [P.], seq 1:75, ack 1, win 225, options [nop,nop,TS val 4077628853 ecr 777672368], length 74: HTTP: GET / HTTP/1.1 out slot1/tmm0 lis=egress-ipv4 port=1.1 trunk=
```

Let's take a look at egress application traffic that is using the secondary macvlan interface. In this case, I have not configured Source NAT, so SPK will forward the traffic out, retaining the original pod IP.

```
k exec -it nginx-7d7699f86c-g4hpv -n my-app1 -- curl 10.1.80.30
================================================
================================================
Node Name: F5 Docker vLab
Short Name: ue-client3

Server IP: 10.1.80.30
Server Port: 80

Client IP: 10.1.10.170
Client Port: 56436

Client Protocol: HTTP
Request Method: GET
Request URI: /

host_header: 10.1.80.30
user-agent: curl/7.88.1
```

Another tcpdump command run in the debug container of SPK shows that it receives the above GET request and sends it out without Source NAT in this case.
```
/tcpdump -nni 0.0 tcp port 80
----snip----
13:54:40.389281 IP 10.1.10.170.56436 > 10.1.80.30.80: Flags [P.], seq 1:75, ack 1, win 229, options [nop,nop,TS val 4087715696 ecr 61040149], length 74: HTTP: GET / HTTP/1.1 in slot1/tmm0 lis=secure-egress-ipv4-virtual-server port=1.2 trunk=
----snip----
13:54:40.389305 IP 10.1.10.170.56436 > 10.1.80.30.80: Flags [P.], seq 1:75, ack 1, win 229, options [nop,nop,TS val 4087715696 ecr 61040149], length 74: HTTP: GET / HTTP/1.1 out slot1/tmm0 lis=secure-egress-ipv4-virtual-server port=1.2 trunk=
```

You can use the familiar tmctl command inside the debug container of SPK to confirm the statistics for both listeners that process the pod's primary (egress-ipv4) and secondary (secure-egress-ipv4-virtual-server) interface egress traffic.

```
/tmctl -f /var/tmstat/blade/tmm0 virtual_server_stat -s name,clientside.bytes_in,clientside.bytes_out,no_staged_acl_match_accept -w 200
name                               clientside.bytes_in  clientside.bytes_out  no_staged_acl_match_accept
---------------------------------  -------------------  --------------------  --------------------------
secure-egress-ipv4-virtual-server                  394                   996                           1
egress-ipv4                                        394                  1011                           1
```

Now that you have egress traffic routed to the SPK data plane pods, you can use the below F5-published custom resource definitions (CRDs) to apply granular access control lists (ACLs) to meet your security requirements. The firewall configuration is defined as code (YAML manifests), so it natively integrates with K8s and is portable across clusters.

- F5BigContextGlobal: CRD to define the default global firewall behavior and reference the firewall policy.
- F5BigFwPolicy: CRD to define your firewall rules.

In summary, the above diagrams and configuration snippets show how SPK can capture all egress traffic in a dynamic way so that you don't have to sacrifice security and control in your ever-changing Kubernetes clusters.
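If you want to check the observed egress source address without eyeballing curl output, a small script along these lines can parse the demo server's response shown above. The server address, the expected SNAT source, and the "Client IP:" output format are taken from this article's lab example, so treat them as placeholders for your own environment.

```python
# Minimal sketch: call the demo echo server used above and confirm which source IP
# it saw. The values below come from this article's lab; adjust for your environment.
import re
import requests

SERVER = "http://10.1.70.30"          # server beyond SPK (article example)
EXPECTED_SOURCE = "10.1.30.242"       # SPK self-IP used for SNAT (article example)

body = requests.get(SERVER, timeout=5).text
match = re.search(r"Client IP:\s*(\S+)", body)
seen = match.group(1) if match else "unknown"

print(f"server saw client IP {seen}")
if seen == EXPECTED_SOURCE:
    print("egress is being SNATed by SPK as expected")
else:
    print("unexpected source IP - traffic may be bypassing SPK")
```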
VIPTest: Rapid Application Testing for F5 Environments
VIPTest is a Python-based tool for efficiently testing multiple URLs in F5 environments, allowing quick assessment of application behavior before and after configuration changes. It supports concurrent processing, handles various URL formats, and provides detailed reports on HTTP responses, TLS versions, and connectivity status, making it useful for migrations and routine maintenance.
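To illustrate the idea, here is a minimal sketch of the concurrent URL-checking pattern described above. This is not the VIPTest code itself, and the URL list and worker count are placeholders.

```python
# Conceptual sketch of concurrent URL checking, in the spirit of the reports VIPTest
# produces (HTTP status and basic connectivity). Not the actual VIPTest implementation.
from concurrent.futures import ThreadPoolExecutor
import requests

URLS = ["https://app1.example.com", "https://app2.example.com"]  # placeholders

def check(url: str) -> str:
    try:
        resp = requests.get(url, timeout=5, allow_redirects=True, verify=False)
        return f"{url} -> HTTP {resp.status_code}"
    except requests.RequestException as exc:
        return f"{url} -> connection failed ({exc.__class__.__name__})"

with ThreadPoolExecutor(max_workers=10) as pool:
    for line in pool.map(check, URLS):
        print(line)
```

Running a check like this before and after a configuration change, and diffing the output, is the workflow the tool is built around.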
Streamlining BIG-IP Next Deployments: Automate with CI/CD Pipelines Using Terraform Cloud and GitHub

Automation is key to maintaining efficiency and consistency in today's fast-paced IT environment. In this article, I will demonstrate how to automate the deployment of BIG-IP Next configurations using Terraform Cloud and GitHub. By integrating AS3 JSON and Terraform configuration code, you can ensure that any changes made in your GitHub repository automatically trigger Terraform Cloud to deploy the updated configurations to your BIG-IP Next instance via the BIG-IP Next Central Manager.

Key Players:

- BIG-IP Next: Your powerful application delivery controller, offering advanced features for load balancing, security, and more.
- BIG-IP Next Central Manager: The brain of your BIG-IP Next deployment, orchestrating and managing all your BIG-IP instances.
- BIG-IP Next Terraform resources: A powerful interface allowing programmatic control over your BIG-IP configuration, simplifying automation.
- Terraform Cloud: A robust platform for infrastructure-as-code, providing version control, collaboration, and powerful automation tools.
- GitHub: A popular version control system for collaborative software development, where your Terraform configuration files will reside.
- Terraform Agent: A local agent installed on a dedicated VM in your private data center that acts as a bridge between Terraform Cloud and your BIG-IP Next instances.

The Workflow:

1. Define your Infrastructure in GitHub: Using the Terraform resources documented at https://clouddocs.f5.com/products/orchestration/terraform/latest/BIG-IP-Next/big-ip-next-index.html#release-notes, you describe your desired BIG-IP Next configuration in code (e.g., creating virtual servers, pools, monitors, and other application services). Store your Terraform code in a GitHub repository.
2. Configure Terraform Cloud: Set up a workspace in Terraform Cloud and link it to your GitHub repository. Configure a VCS trigger to automatically initiate a Terraform plan and apply it when changes are made to your code in GitHub.
3. Install and Configure Terraform Agent: Set up a VM in your private data center, running Ubuntu, and install the Terraform Agent. Configure the agent to connect to your Terraform Cloud workspace.
4. Automatic Configuration: When you push changes to your Terraform code in GitHub, Terraform Cloud detects the update, triggers a Terraform plan, and sends it to the Terraform Agent. The agent then communicates with your BIG-IP Next Central Manager to implement the necessary changes to your BIG-IP Next instances.

Benefits:

- Simplified Management: No more manual configuration and tedious updates! Terraform Cloud automates deployment, reducing errors and ensuring consistency across your BIG-IP Next environment.
- Increased Efficiency: Spend less time on repetitive tasks and focus on building and deploying applications faster.
- Collaboration and Version Control: Work collaboratively with your team, track changes, and easily revert to previous configurations using GitHub's robust version control capabilities.
- Scalability and Flexibility: Terraform Cloud seamlessly scales to manage large and complex environments, providing flexibility and adaptability for your growing needs.

Getting Started:

1. Set up GitHub Repository: Create a repository in GitHub and store your Terraform configuration files there. You can clone the GitHub repository from https://github.com/f5bdscs/example-AS3.git and begin working on it.
```hcl
terraform {
  required_providers {
    bigipnext = {
      source  = "F5Networks/bigipnext"
      version = "1.2.0"
    }
  }
  cloud {
    organization = "39nX-example"
    workspaces {
      name = "39nX-example"
    }
  }
}

variable "host" {}
variable "username" {}
variable "password" {}

provider "bigipnext" {
  username = var.username
  password = var.password
  host     = var.host
}

resource "bigipnext_cm_as3_deploy" "test" {
  target_address = "10.1.1.10"
  as3_json       = file("as3.json")
}
```

Explanation:

- Terraform block: Defines the required provider bigipnext with source and version, and specifies the Terraform Cloud organization and workspace name.
- Variable declarations: host, username, and password are declared as input variables.
- Provider configuration: Uses the input variables for username, password, and host.
- Resource definition: The bigipnext_cm_as3_deploy resource with target_address and the as3_json file.

Make sure to create and populate the as3.json file with the necessary AS3 declarations. Also, ensure you provide values for host, username, and password when running the Terraform commands.

```json
{
  "class": "ADC",
  "schemaVersion": "3.45.0",
  "id": "example-declaration-01",
  "label": "Sample 1",
  "remark": "Simple HTTP application with round robin pool",
  "next-cm-tenant01": {
    "class": "Tenant",
    "EXAMPLE_APP": {
      "class": "Application",
      "template": "http",
      "serviceMain": {
        "class": "Service_HTTP",
        "virtualAddresses": [ "10.1.20.10" ],
        "pool": "next-cm-pool01"
      },
      "next-cm-pool01": {
        "class": "Pool",
        "monitors": [ "http" ],
        "members": [
          {
            "servicePort": 8080,
            "serverAddresses": [ "10.1.20.4" ]
          }
        ]
      }
    }
  }
}
```

2. Configure Terraform Cloud: Create a workspace, link it to your GitHub repository, and set up a VCS trigger to activate plans and apply changes. Please follow the guide at https://developer.hashicorp.com/terraform/tutorials/cloud-get-started/cloud-vcs-change to integrate Terraform Cloud with your GitHub repository.
3. Install and Configure Terraform Agent: Set up a VM in your private data center, install the Terraform Agent, and configure it to connect to your Terraform Cloud workspace. Please follow the guide at https://developer.hashicorp.com/terraform/tutorials/cloud/cloud-agents to install the Terraform Cloud agent.
4. Deploy your configuration: Push your code to GitHub and watch as Terraform Cloud automatically updates your BIG-IP Next instances.

You can watch the demonstration video here: https://youtu.be/0xEtj-jAepE
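One small addition that pays off in this workflow is a quick local sanity check of as3.json before you push, so a malformed declaration never reaches Terraform Cloud. Here is a minimal sketch; the keys it checks are simply the top-level ones used in this article's declaration, and it is not a full AS3 schema validation.

```python
# Minimal pre-commit sanity check for the as3.json used in this workflow.
# It only verifies the file parses as JSON and carries the top-level keys this
# article's declaration uses; it is not a substitute for AS3 schema validation.
import json
import sys

REQUIRED_KEYS = {"class", "schemaVersion", "id"}

def check(path: str = "as3.json") -> int:
    try:
        with open(path) as f:
            declaration = json.load(f)
    except (OSError, json.JSONDecodeError) as exc:
        print(f"{path}: not readable as JSON ({exc})")
        return 1

    missing = REQUIRED_KEYS - declaration.keys()
    if missing:
        print(f"{path}: missing keys: {', '.join(sorted(missing))}")
        return 1
    if declaration.get("class") != "ADC":
        print(f"{path}: expected top-level class 'ADC'")
        return 1

    print(f"{path}: looks sane, ok to commit")
    return 0

if __name__ == "__main__":
    sys.exit(check(sys.argv[1] if len(sys.argv) > 1 else "as3.json"))
```

Wiring this into a git pre-commit hook (or an early step in the Terraform Cloud run) keeps obviously broken declarations out of the pipeline.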
Using Tcl packages in iRules

In standard Tcl scripts, you have the ability to include packages, like for working with ip addresses. For example, if I want to normalize an IPv6 address, I can do so like this:

```
% package require ip
1.2
% ::ip::normalize ::1
0000:0000:0000:0000:0000:0000:0000:0001
% ::ip::normalize 2001:db8::
2001:0db8:0000:0000:0000:0000:0000:0000
% ::ip::normalize 2001:0db8:85a3::8a2e:0370:7334
2001:0db8:85a3:0000:0000:8a2e:0370:7334
```

Which is great, but the package keyword is disabled in iRules, so we don't have access to those tools and thus have to build them ourselves. Or do we?

Making Tcl source iRules compatible

I had an inquiry from my colleague Matt Stovall on the ability to manage IPv6 within iRules. The good news was that there is an argument on the IP::addr command to parse IPv6 addresses. The bad news is that the requirement is for the full IPv6 address, but the parsed address returned by IP::addr is the short version:

```
Rule /Common/t1 <RULE_INIT>: 2001:0db8:85a3:0000:0000:8a2e:0370:7334
Parsed w/ IP::addr: 2001:db8:85a3::8a2e:370:7334
```

While looking into the problem, I noticed a codeshare entry to compress/expand IPv6 addresses from MVP Kai_Wilke. That is an option for sure. But I also wanted to see what the Tcl source had to say about IP and whether it was possible to make it iRules compatible. Matt needed three functions out of this effort:

- Normalizing IPv6 addresses
- Compressing IPv6 addresses
- Getting the IPv6 network prefix given a netmask

In the Tcl ip package, these functions, respectively, are ::ip::normalize, ::ip::contract, and ::ip::prefix, and in reviewing the source, they are native Tcl. This is good!

```tcl
proc ::ip::normalize {ip {Ip4inIp6 0}} {
    foreach {ip mask} [SplitIp $ip] break
    set version [version $ip]
    set s [ToString [Normalize $ip $version] $Ip4inIp6]
    if {($version == 6 && $mask != 128) || ($version == 4 && $mask != 32)} {
        append s /$mask
    }
    return $s
}

proc ::ip::contract {ip} {
    foreach {ip mask} [SplitIp $ip] break
    set version [version $ip]
    set s [ToString [Normalize $ip $version]]
    if {$version == 6} {
        set r ""
        foreach o [split $s :] {
            append r [format %x: 0x$o]
        }
        set r [string trimright $r :]
        regsub {(?:^|:)0(?::0)+(?::|$)} $r {::} r
    } else {
        set r [string trimright $s .0]
    }
    return $r
}

proc ::ip::prefix {ip} {
    foreach {addr mask} [SplitIp $ip] break
    set version [version $addr]
    set addr [Normalize $addr $version]
    return [ToString [Mask$version $addr $mask]]
}
```

Within each command procedure, you'll notice there are many more procedures that are referenced, and thus many more that need to be made compliant. Matt made a mermaid dependency chart for each of the top-level functions he needed. Here's the one for normalize:

With that, it just came down to cleaning up the syntax. With iRules, the use of procedures requires the use of the call command.
Homing in on the normalize function, here's the before and after of that single procedure:

```tcl
### BEFORE ###
proc ::ip::normalize {ip {Ip4inIp6 0}} {
    foreach {ip mask} [SplitIp $ip] break
    set version [version $ip]
    set s [ToString [Normalize $ip $version] $Ip4inIp6]
    if {($version == 6 && $mask != 128) || ($version == 4 && $mask != 32)} {
        append s /$mask
    }
    return $s
}

### AFTER ###
proc normalize {ip {Ip4inIp6 0}} {
    foreach {ip mask} [call irule_library::SplitIp $ip] break
    set version [call irule_library::version $ip]
    set s [call irule_library::ToString [call irule_library::Normalize $ip $version] $Ip4inIp6]
    if {($version == 6 && $mask != 128) || ($version == 4 && $mask != 32)} {
        append s /$mask
    }
    return $s
}
```

No heavy lifting required for solid access to time-tested tools developed by the Tcl community! The choice here was to build an iRule called irule_library that will serve as a library of procedures for the necessary ip package functions. If you use partitions, you'll want to add those references to the iRule library objects (like /Common/irule_library::ToString). You can see the finished work for all the procs and the other dependency graphs in Matt's IPv6 iRule library github repo. What else would you like to convert from Tcl packages to an iRules-compliant procs library?