BIG-IP Next Central Manager API with Postman
In my last article I dove into the Central Manager AS3 endpoints with the cURL command. As I was preparing for this one, I thought it would work better as a live stream than a traditional article. Here's the stream you can watch in the replay, and the resources I mentioned on the stream are posted below.

Show description: I've been working with the BIG-IP Next API from the API reference and with curl on the command line, and I gotta tell you, as much as I don't love Postman, it's super handy when learning an API. In this episode of DevCentral Connects, I'll download the collection from the Next documentation, get the environment variables set up, and walk through some of the tasks available in the collection to start working with the BIG-IP Next API.

Resources

- BIG-IP Next Articles on DevCentral
- BIG-IP Next Academy group on DevCentral
- Embracing AS3: Foundations
- BIG-IP Next automation: AS3 basics
- BIG-IP Next automation: Working with the AS3 endpoints
- 20.0 Postman collection
- 20.1 Postman collection
- 20.2 Postman collection
BIG-IP Next – Introduction to the Blueprints API

If you have ever attempted to automate the BIG-IP configuration, you are probably familiar with F5’s AS3 extension. Although AS3 is supported in BIG-IP Next, there is another API that might be the better option if you haven’t yet started your migration journey. This is called the Blueprints API. In this article, I want to show you how to use it to automate your applications.

Overview

When you use the BIG-IP Next GUI, you instantly see the benefits of having a centrally managed configuration across all your BIG-IP instances. The steps to create an application service in the GUI now have a siloed setup where you define 4 main sections separately:

- Application Properties
- Virtual Server Properties
- Pool Properties
- Deployment Properties

Each one of these sections allows you to adjust areas of your application service while still having a way to manage configurations across multiple BIG-IP instances. In other words, you can define one pool under the pool properties, but still have different pool members under the deployment properties for each of your BIG-IP instances. This creates a centrally managed application service that does not require the exact same configuration in each environment.

When you perform these tasks in the GUI, BIG-IP Next is generating its own API calls internally. It takes each of your configuration items outlined in the 4 sections above and defines the application service as a Blueprint. This Blueprint is then used to modify anything about the configuration/deployment moving forward. If you aren’t a fan of using a GUI and you are trying to automate, this same exact API is exposed to you as well. This means we get to use the same centrally managed configuration in our API calls. It also means that we can easily automate existing application services by simply using the API to manage them moving forward.

So what does this Blueprint API look like?
Below is sample JSON used to create a Blueprint called “bobs-blueprint”:

```json
{
  "name": "bobs-blueprint",
  "set_name": "Examples",
  "template_name": "http",
  "parameters": {
    "application_description": "",
    "application_name": "bobs-blueprint",
    "enable_Global_Resiliency": false,
    "pools": [
      {
        "loadBalancingMode": "round-robin",
        "monitorType": [
          "http"
        ],
        "poolName": "juice",
        "servicePort": 8080
      }
    ],
    "virtuals": [
      {
        "FastL4_TOS": 0,
        "FastL4_idleTimeout": 600,
        "FastL4_looseClose": true,
        "FastL4_looseInitialization": true,
        "FastL4_pvaAcceleration": "full",
        "FastL4_pvaDynamicClientPackets": 1,
        "FastL4_pvaDynamicServerPackets": 0,
        "FastL4_resetOnTimeout": true,
        "FastL4_tcpCloseTimeout": 43200,
        "FastL4_tcpHandshakeTimeout": 43200,
        "InspectionServicesEnum": [],
        "TCP_idle_timeout": 60,
        "UDP_idle_timeout": 60,
        "ciphers": "DEFAULT",
        "ciphers_server": "DEFAULT",
        "enable_Access": false,
        "enable_FastL4": false,
        "enable_FastL4_DSR": false,
        "enable_HTTP2_Profile": false,
        "enable_HTTP_Profile": false,
        "enable_InspectionServices": false,
        "enable_SsloPolicy": false,
        "enable_TCP_Profile": false,
        "enable_TLS_Client": false,
        "enable_TLS_Server": false,
        "enable_UDP_Profile": false,
        "enable_WAF": false,
        "enable_iRules": false,
        "enable_mirroring": true,
        "enable_snat": true,
        "iRulesEnum": [],
        "multiCertificatesEnum": [],
        "perRequestAccessPolicyEnum": "",
        "pool": "juice",
        "snat_addresses": [],
        "snat_automap": true,
        "tls_c_1_1": false,
        "tls_c_1_2": true,
        "tls_c_1_3": false,
        "tls_s_1_2": true,
        "tls_s_1_3": false,
        "trustCACertificate": "",
        "virtualName": "bobs-vs",
        "virtualPort": 80
      }
    ]
  }
}
```

As you can see, the structure of this JSON is siloed in a very similar way to the GUI.

Note: For those readers who are wondering where the Deployment section is, that is handled in a separate API call after the Blueprint has been created. I’ll discuss that in more detail later.

In the sections below, I’ll review a few of the API endpoints you can use with some steps on how to perform the following common tasks:

- Viewing an existing Blueprint
- Creating a new Blueprint
- Deploying a Blueprint

Viewing an Existing Blueprint

Before we start creating a new Blueprint from scratch, it is probably a good idea to explain how we can view a list of our current Blueprints. To do so, we simply make a GET request to the following API endpoint:

GET https://{{bigip_next_cm_mgmt_ip}}/api/v1/spaces/default/appsvcs/blueprints

This will return a list of every Blueprint created by the GUI and/or API.
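If you'd rather try this from the command line than from Postman, here's a rough curl equivalent. The /api/login path and the access_token field are my assumption based on the Central Manager API reference, so adjust the path, credentials, and certificate handling for your environment (jq is assumed for JSON parsing):

```bash
# Hypothetical management address and credentials -- replace with your own.
CM=https://bigip-next-cm.example.com

# Authenticate and capture a bearer token (endpoint per the CM API reference).
TOKEN=$(curl -sk "$CM/api/login" \
  -H "Content-Type: application/json" \
  -d '{"username":"admin","password":"changeme"}' | jq -r .access_token)

# List every Blueprint on the Central Manager.
curl -sk "$CM/api/v1/spaces/default/appsvcs/blueprints" \
  -H "Authorization: Bearer $TOKEN" | jq .
```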
Below is an example output:

```json
{
  "_embedded": {
    "appsvcs": [
      {
        "_links": {
          "self": {
            "href": "/api/v1/spaces/default/appsvcs/blueprints//3f2ef264-cf09-45c8-a925-f2c8fccf09f6"
          }
        },
        "created": "2024-06-25T13:32:22.160399Z",
        "deployments": [
          {
            "id": "1e5f9c06-8800-4ab7-ad5e-648d55b83b68",
            "instance_id": "ce179e66-b075-4068-bc4e-8e212954da49",
            "target": {
              "address": "10.2.1.3"
            },
            "parameters": {
              "pools": [
                {
                  "isServicePool": false,
                  "poolMembers": [
                    {
                      "address": "10.1.3.100",
                      "name": "old"
                    },
                    {
                      "address": "10.2.3.100",
                      "name": "new"
                    }
                  ],
                  "poolName": "juice"
                }
              ],
              "virtuals": [
                {
                  "allow_networks": [],
                  "enable_allow_networks": false,
                  "virtualAddress": "10.2.2.11",
                  "virtualName": "juice-shop"
                }
              ]
            },
            "last_successful_deploy_time": "2024-06-25T17:46:00.193649Z",
            "modified": "2024-06-25T17:46:00.193649Z",
            "last_record": {
              "id": "cb6a06a1-c66d-41c3-a747-9a27b101a0f1",
              "task_id": "6a65642d-810b-4194-9693-91a15f6d1ef0",
              "created_application_path": "/applications/tenantSrLEVevFQnWwXT90590F3USQ/juice-shop",
              "start_time": "2024-06-25T17:45:55.392732Z",
              "end_time": "2024-06-25T17:46:00.193649Z",
              "status": "completed"
            }
          },
          {
            "id": "001e14e8-7900-482a-bdd4-aca35967a5cc",
            "instance_id": "0546acf5-3b88-422d-a948-28bbf0973212",
            "target": {
              "address": "10.2.1.4"
            },
            "parameters": {
              "pools": [
                {
                  "isServicePool": false,
                  "poolMembers": [
                    {
                      "address": "10.1.3.100",
                      "name": "old"
                    },
                    {
                      "address": "10.2.3.100",
                      "name": "new"
                    }
                  ],
                  "poolName": "juice"
                }
              ],
              "virtuals": [
                {
                  "allow_networks": [],
                  "enable_allow_networks": false,
                  "virtualAddress": "10.2.2.12",
                  "virtualName": "juice-shop"
                }
              ]
            },
            "last_successful_deploy_time": "2024-06-25T17:46:07.94836Z",
            "modified": "2024-06-25T17:46:07.94836Z",
            "last_record": {
              "id": "8a8a29be-e56b-4ef1-bf6a-92f7ccc9e9b7",
              "task_id": "0632cc18-9f0f-4a5e-875a-0d769b02e19b",
              "created_application_path": "/applications/tenantSrLEVevFQnWwXT90590F3USQ/juice-shop",
              "start_time": "2024-06-25T17:45:55.406377Z",
              "end_time": "2024-06-25T17:46:07.94836Z",
              "status": "completed"
            }
          }
        ],
        "deployments_count": {
          "total": 2,
          "completed": 2
        },
        "description": "",
        "fqdn": "",
        "gslb_enabled": false,
        "id": "3f2ef264-cf09-45c8-a925-f2c8fccf09f6",
        "modified": "2024-06-25T17:32:09.174243Z",
        "name": "juice-shop",
        "set_name": "Examples",
        "successful_instances": 2,
        "template_name": "http",
        "tenant_name": "tenantSrLEVevFQnWwXT90590F3USQ",
        "type": "FAST"
      }
    ]
  },
  "_links": {
    "self": {
      "href": "/api/v1/spaces/default/appsvcs/blueprints/"
    }
  },
  "count": 1,
  "total": 1
}
```

In the example above, I only have one Blueprint in my BIG-IP Next CM instance. If we look deeper into the output, we can start to see some detail around the configuration parameters as well as the deployment parameters for each BIG-IP. There is also an ID field in the JSON that we can use to reference a specific Blueprint. In the example above, we have:

"id": "3f2ef264-cf09-45c8-a925-f2c8fccf09f6"

This is important because as we start to modify, deploy, or delete our existing Blueprints, we will need this ID to be able to make changes. We can also use this ID to view more detail on a specific Blueprint rather than an entire list of all Blueprints. To do so, we simply make a request to the endpoint below:

GET https://{{bigip_next_cm_mgmt_ip}}/api/v1/spaces/default/appsvcs/blueprints/{{Blueprint_id}}

In our example, this would be:

GET https://{{bigip_next_cm_mgmt_ip}}/api/v1/spaces/default/appsvcs/blueprints/3f2ef264-cf09-45c8-a925-f2c8fccf09f6

The output from this API call provides robust detail on the Blueprint.
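Scripting that lookup is straightforward. Continuing the earlier curl sketch (same hypothetical $CM and $TOKEN variables), you can pull the ID out of the list response with jq and feed it into the by-ID call:

```bash
# Grab the ID of the first Blueprint in the list (the jq path matches the
# _embedded.appsvcs structure in the example output above).
BLUEPRINT_ID=$(curl -sk "$CM/api/v1/spaces/default/appsvcs/blueprints" \
  -H "Authorization: Bearer $TOKEN" | jq -r '._embedded.appsvcs[0].id')

# Fetch the full detail for just that Blueprint.
curl -sk "$CM/api/v1/spaces/default/appsvcs/blueprints/$BLUEPRINT_ID" \
  -H "Authorization: Bearer $TOKEN" | jq .
```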
That full output is probably too much detail to paste in an article like this, but there are some examples here if interested: https://clouddocs.f5.com/products/bigip-next/mgmt-api/latest/ApiReferences/bigip_public_api_ref/r_openapi-next.html#tag/Application-Services/operation/GetApplicationByID

Viewing a Blueprint like this can provide us with the latest configuration of our application service so that we can ensure we are using the most up-to-date files. It can also provide us with some templates/example configurations that we can use to create new application services moving forward.

Creating a New Blueprint

Now that we have a pretty good understanding of the JSON structure and we know how to view some examples of Blueprints that have already been created, we can simply use them as a reference and create our own Blueprint from scratch. The basic format for creating a new Blueprint is below:

```json
{
  "name": <blueprint_name>,
  "set_name": <template_set>,
  "template_name": <template_name>,
  "parameters": {
    "application_description": <simple_description>,
    "application_name": <blueprint_name>,
    "enable_Global_Resiliency": false,
    "pools": [
      {
        <pool_configuration_parameters>
      }
    ],
    "virtuals": [
      {
        <virtual_server_configuration_parameters>
      }
    ]
  }
}
```

More detail on each of the variable values below:

- <blueprint_name> - This is the name you choose for your Blueprint. I generally recommend using the same name in both the “name” field and the “application_name” field, which is why the JSON above shows it in both.
- <template_set> - This is the template set containing your FAST template. If you are using the default templates provided to you, this value would be “Examples”.
- <template_name> - This is the specific FAST template you are going to use for the configuration. If you are using the default template provided to you, this value would be “http”.
- <simple_description> - This can be any short description you would like to use for your application service.
- <pool_configuration_parameters> - This will be the list of parameters that you are going to define for your pool. You do not have to fill in every single value if the FAST template contains values for that field.
- <virtual_server_configuration_parameters> - This will be the list of parameters that you are going to define for your virtual server. You do not have to fill in every single value if the FAST template contains values for that field.

Important Note: When creating your JSON, you will be defining a FAST template to use along with your application service (just like you would in the GUI). This means that you do not have to fill out every value under your pool and virtual server configuration. It will take in the values provided from your FAST template.

With our guide above, we can now make a new Blueprint for the “bobs-blueprint” example I referenced earlier:

```json
{
  "name": "bobs-blueprint",
  "set_name": "Examples",
  "template_name": "http",
  "parameters": {
    "application_description": "This is a test of the blueprints api",
    "application_name": "bobs-blueprint",
    "pools": [
      {
        "loadBalancingMode": "round-robin",
        "monitorType": [
          "http"
        ],
        "poolName": "juice_pool",
        "servicePort": 3000
      }
    ],
    "virtuals": [
      {
        "pool": "juice_pool",
        "virtualName": "blueprint_vs",
        "virtualPort": 80
      }
    ]
  }
}
```

As you can see, this is a much more condensed version of the original JSON I had for the example shown in the beginning of this article.
Again, this is because we are referencing the FAST template “Examples/http” and taking in those values to configure the rest of the application service. With our newly created JSON, the last step to creating this Blueprint is to send it in a POST request to the following API endpoint:

POST https://{{bigip_next_cm_mgmt_ip}}/api/v1/spaces/default/appsvcs/blueprints

After sending our POST, you'll notice we are given the "id" of the Blueprint in the response. As mentioned above, we can use this ID to modify, deploy, etc.

```json
{
  "_links": {
    "self": {
      "href": "/api/v1/spaces/default/appsvcs/blueprints/9c35a614-65ac-4d18-8082-589ea9bc78d9/deployments"
    }
  },
  "deployments": [
    {
      "deploymentID": "1a153f44-acaf-4487-81c7-61b8b5498627",
      "instanceID": "4ef739d1-9ef1-4eb7-a5bb-36c6d1334b16",
      "status": "pending",
      "taskID": "518534c8-9368-48dd-b399-94f55e72d5a7",
      "task_type": "CREATE"
    },
    {
      "deploymentID": "82ca8d6d-dbfd-43a8-a604-43f170d9f190",
      "instanceID": "0546acf5-3b88-422d-a948-28bbf0973212",
      "status": "pending",
      "taskID": "7ea24157-1412-4963-9560-56fb0ab78d8c",
      "task_type": "CREATE"
    }
  ],
  "id": "9c35a614-65ac-4d18-8082-589ea9bc78d9"
}
```

Now that the Blueprint has been created, we can go into our BIG-IP Next CM GUI and see our newly created application service. You’ll notice that our newly created application service is in Draft mode. This is because we have not deployed the service yet. We’ll discuss that in the next section.

Deploying a Blueprint

Once we have our Blueprint created, the final step is to configure the deployment. As discussed above, this is done through a separate API call. The format for a deployment is as follows:

```json
{
  "deployments": [
    {
      "parameters": {
        "pools": [
          {
            "poolName": <pool_name>,
            "poolMembers": [
              {
                "name": <node1_name>,
                "address": <node1_ip_address>
              },
              {
                "name": <node2_name>,
                "address": <node2_ip_address>
              }
            ]
          }
        ],
        "virtuals": [
          {
            "virtualName": <virtual_server_name>,
            "virtualAddress": <virtual_server_ip_address>
          }
        ]
      },
      "target": {
        "address": <bigip_instance_ip_address>
      },
      "allow_overwrite": true
    }
  ]
}
```

Keep in mind that some of these values above are referencing names from your Blueprint configuration. These names need to be exactly the same. See below for more detail on each of these values:

- <pool_name> - This references the pool from your Blueprint. This value must match what was used for “poolName” in the pool configuration of the Blueprint.
- <node1_name> - This is any name you choose to describe your node in the pool.
- <node1_ip_address> - This is the specific IP address for your node.
- <node2_name> - If you are using more than one node, this would be the name you choose to represent your second node in the pool. This format can repeat for 3 nodes, 4 nodes, etc.
- <node2_ip_address> - If you are using more than one node, this would be the IP address of your second node. This format can repeat for 3 nodes, 4 nodes, etc.
- <virtual_server_name> - This references the virtual server from your Blueprint. This value must match what was used for “virtualName” in the virtuals configuration of the Blueprint.
- <virtual_server_ip_address> - This would be the IP address of the virtual server being deployed on the BIG-IP.
- <bigip_instance_ip_address> - This is the IP address of the BIG-IP instance we are deploying to.

Important Note: If you are deploying the same application to more than one BIG-IP instance, you can include multiple deployment blocks in your API call.
Using the format above, we can now create our deployment for “bobs-blueprint”:

```json
{
  "deployments": [
    {
      "parameters": {
        "pools": [
          {
            "poolName": "juice_pool",
            "poolMembers": [
              {
                "name": "node1",
                "address": "10.1.3.100"
              }
            ]
          }
        ],
        "virtuals": [
          {
            "virtualName": "blueprint_vs",
            "virtualAddress": "10.2.2.15"
          }
        ]
      },
      "target": {
        "address": "10.2.1.3"
      },
      "allow_overwrite": true
    },
    {
      "parameters": {
        "pools": [
          {
            "poolName": "juice_pool",
            "poolMembers": [
              {
                "name": "node1",
                "address": "10.2.3.100"
              }
            ]
          }
        ],
        "virtuals": [
          {
            "virtualName": "blueprint_vs",
            "virtualAddress": "10.2.2.20"
          }
        ]
      },
      "target": {
        "address": "10.2.1.4"
      },
      "allow_overwrite": true
    }
  ]
}
```

In this example deployment, I am deploying to two separate BIG-IP instances (10.2.1.3 and 10.2.1.4). This is where we can really start to see the value of the Blueprints API. With this structure, I can use the same pool and virtual server setup from our Blueprint, while still using different virtual server and node IP addresses for the deployment at each instance. All of this is done with one API call.

The final step is to send this in a POST request to our deployments endpoint. The endpoint is very similar to viewing a Blueprint, as it uses our same Blueprint ID. See the format below:

POST https://{{bigip_next_cm_mgmt_ip}}/api/v1/spaces/default/appsvcs/blueprints/{{Blueprint_id}}/deployments

Using the ID from the response we received after creating our example Blueprint, our endpoint would be:

POST https://{{bigip_next_cm_mgmt_ip}}/api/v1/spaces/default/appsvcs/blueprints/9c35a614-65ac-4d18-8082-589ea9bc78d9/deployments

After sending our POST, we can go back to the BIG-IP Next CM GUI and see our application service is no longer considered a Draft. If we click into the application service, we’ll see our two deployments are up and healthy.

Conclusion

Hopefully after reading this article, you can see the value of using the Blueprints API for your automation. Compared to other automation methods, it can provide benefits such as:

- The same centrally managed format/structure as GUI-created applications
- Since the BIG-IP Next CM GUI is already creating these JSON files under the hood, we can easily automate existing applications by using the Blueprints API for them moving forward
- Deployments are handled separately from your application configuration
- You can deploy your application service to multiple BIG-IP instances at once
- Combining the Blueprints API with FAST templates makes application onboarding much more streamlined

If you liked this article and are looking for more information on our Blueprints API, please visit the API documentation here: https://clouddocs.f5.com/products/bigip-next/mgmt-api/latest/ApiReferences/bigip_public_api_ref/r_openapi-next.html#tag/Application-Services/operation/GetAllApplications
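To tie the whole workflow together, here's a rough end-to-end sketch using the same hypothetical curl/jq setup from earlier, assuming blueprint.json and deployment.json contain the two JSON bodies shown above:

```bash
# Create the Blueprint and capture the ID from the response.
BLUEPRINT_ID=$(curl -sk "$CM/api/v1/spaces/default/appsvcs/blueprints" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d @blueprint.json | jq -r .id)

# Deploy it to the BIG-IP instances defined in deployment.json.
curl -sk "$CM/api/v1/spaces/default/appsvcs/blueprints/$BLUEPRINT_ID/deployments" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d @deployment.json | jq .
```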
How to Split DNS with Managed Namespace on F5 Distributed Cloud (XC) Part 2 – TCP & UDP

Re-Introduction

In Part 1, we covered the deployment of the DNS workloads to our Managed Namespace and creating an HTTPS Load Balancer and Origin Pool for DNS over HTTPS. If you missed Part 1, feel free to jump over and give it a read. In Part 2, we will cover creating a TCP and UDP Load Balancer and Origin Pools for standard TCP & UDP DNS.

TCP Origin Pool

First, we need to create an origin pool. On the left menu, under Manage, Load Balancers, click Origin Pools. Let’s give our origin pool a name, and add some Origin Servers, so under Origin Servers, click Add Item. In the Origin Server settings, we want to select K8s Service Name of Origin Server on given Sites as our type, and enter our service name, which will be the service name from Part 1 and our namespace, so “servicename.namespace”. For the Site, we select one of the sites we deployed the workload to, and under Select Network on the Site, we want to select vK8s Networks on the Site, then click Apply. Do this for each site we deployed to so we have several servers in our Origin Pool. In Part 1, our Services defined the targetPort as 5553, so we set Port to 5553 on the origin. This is all we need to configure for our TCP origin, so click Save and Exit.

TCP Load Balancer

Next, we are going to make a TCP Load Balancer, since it's fewer steps (and quicker) than a UDP Load Balancer (today). On the left menu under Manage, Load Balancers, select TCP Load Balancers. Let’s set a name for our TCP LB and set our listen port. 53 is a reserved port on Customer Edge Sites so we need to use something else, so let’s use 5553 again. Under origin pools we set the origin that we created previously, and then we get to the important piece, which is Where to Advertise. In Part 1 we advertised to the internet with some extra steps on how to advertise to an internal network; in this part we will advertise internally. Select Advertise Custom, then click edit configuration. Then under Custom Advertise VIP Configuration, click Add Item. We want to select the Site where we are going to advertise and the network interface we will advertise on. Click Apply, then Apply again. We don’t need to configure anything else, so click Save and Exit.

UDP Load Balancer

For UDP Load Balancers we need to jump to the Load Balancer section again, but instead of a load balancer, we are going to create a Virtual Host. These are not listed in the Distributed Applications tile, so from the top drop down “Select Service” choose the Load Balancers tile. In the left menu under Manage, we go to Virtual Hosts instead of Load Balancers. The first thing we will configure is an Advertise Policy, so let’s select that.

Advertise Policy

Let’s give the policy a name, select the location we want to advertise on the Site Local Inside Network, and set the port to 5553. Save and Exit.

Endpoints

Now back to Manage, Virtual Hosts, and Endpoints so we can add an endpoint. Name the endpoint and configure it with the values below:

- Endpoint Specifier: Service Selector Info
- Discovery: Kubernetes
- Service: Service Name
- Service Name: service-name.namespace
- Protocol: UDP
- Port: 5553
- Virtual Site or Site or Network: Site
- Reference: Site Name
- Network Type: Site Local Service Network

Save and Exit.

Cluster

The Cluster configuration will be simple. From Manage, Virtual Hosts, Clusters, add Cluster. We just need a name and to select the Origin Servers / Endpoints, choosing the endpoint we just created. Save and Exit.

Route

The Route configuration will be simple as well. From Manage, Virtual Hosts, Routes, add Route.
Name the route and under List of Routes click Configure, then Add Item. Leave most settings as they are, and under Actions, choose Destination List, then click Configure. Under Origin Pools and Weights, click Add Item. Under Cluster with Weight and Priority select the cluster we created previously, leave Weight as null for this configuration, then click Apply, Apply again, Apply again, Apply again, then Save and Exit.

Now we can finally create a Virtual Host.

Virtual Host

Under Manage, Virtual Host, select Virtual Host, then click Add Virtual Host. There are a ton of options here, but we only care about a couple:

- Give the Virtual Host a name
- Proxy Type: UDP Proxy
- Advertise Policy: the previously created policy

Moment of Truth, Again

Now that we have our services published, we can give them a test. Since they are currently on a non-standard port, and most systems don't let us specify a port in default configurations, we need to test with dig, nslookup, etc.

To test TCP with nslookup:

```
nslookup -port=5553 -vc google.com 192.168.125.229

Server:   192.168.125.229
Address:  192.168.125.229#5553

Non-authoritative answer:
Name:     google.com
Address:  142.251.40.174
```

To test UDP with nslookup:

```
nslookup -port=5553 google.com 192.168.125.229

Server:   192.168.125.229
Address:  192.168.125.229#5553

Non-authoritative answer:
Name:     google.com
Address:  142.251.40.174
```

IP Tables for Non-Standard DNS Ports

If we want to use the non-standard port TCP/UDP DNS on Linux or macOS, we can use iptables to forward all the traffic for us. There isn't a way to set this up in Windows today, but as in Part 1, Windows Server 2022 supports encrypted DNS over HTTPS, and it can be pushed as policy through Group Policy as well.

```
iptables -t nat -A PREROUTING -i eth0 -p udp --dport 53 -j DNAT --to-destination XXXXXXXXXX:5553
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 53 -j DNAT --to-destination XXXXXXXXXX:5553
```

"Nature is a mutable cloud, which is always and never the same." - Ralph Waldo Emerson

We might not wax that philosophically around here, but our heads are in the cloud nonetheless! Join the F5 Distributed Cloud user group today and learn more with your peers and other F5 experts.

Conclusion

I hope this helps with a common use-case we are hearing about every day, and shows how simple it is to deploy workloads into our Managed Namespaces.
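A quick addendum to the testing section above: the article mentions dig as an option but only shows nslookup. The equivalent dig commands would look like this (same example server IP and port):

```bash
# UDP test (dig uses UDP by default):
dig @192.168.125.229 -p 5553 google.com

# TCP test:
dig @192.168.125.229 -p 5553 google.com +tcp
```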
What is an iApp?

iApp is a seriously cool, game-changing technology that was released in F5’s v11. There are so many benefits to our customers with this tool that I am going to break it down over a series of posts. Today we will focus on what it is.

Hopefully you are already familiar with the power of F5’s iRules technology. If not, here is a quick background. F5 products support a scripting language based on TCL. This language allows an administrator to tell their BIG-IP to intercept, inspect, transform, direct and track inbound or outbound application traffic. An iRule is the bit of code that contains the set of instructions the system uses to process data flowing through it, either in the header or payload of a packet. This technology allows our customers to solve real-time application issues, security vulnerabilities, etc. that are unique to their environment or are time sensitive.

An iApp is like iRules, but for the management plane. Again, there is a scripting language that administrators can use to build instructions the system will follow. But instead of describing how to process traffic, in the case of iApp, it is used to describe the user interface and how the system will act on information gathered from the user. The bit of code that contains these instructions is referred to as an iApp or iApp template. A system administrator can use F5-provided iApp templates installed on their BIG-IP to configure a service for a new application. They will be presented with the text and input fields defined by the iApp author. Once complete, their answers are submitted, and the template implements the configuration. First an application service object (ASO) is created that ties together all the configuration objects which are created, like virtual servers and profiles. Each object created by the iApp is then marked with the ASO to identify its membership in the application for future management and reporting.

That about does it for what an iApp is. Next up: how they can work for you.
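To make that a little more concrete, here's a rough sketch of what deploying an application service from an iApp template looks like on the tmsh command line. The variable names below are illustrative assumptions, not from this article; run 'tmsh list sys application template' to see the templates and variables your system actually ships with:

```bash
# Deploy an application service from the built-in f5.http iApp template.
# Variable names vary by template version -- these are placeholders.
tmsh create sys application service my_http_app \
    template f5.http \
    variables add { pool__addr { value 10.0.0.100 } pool__port { value 80 } }

# The resulting ASO groups every object the template created:
tmsh list sys application service
```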
How to Prepare Your Network Infrastructure to Add HPC Clusters for AI to Your Data Center

HPC AI clusters are getting deployed as highly-engineered 'lego blocks' which are opaque to established data center operations and standards. By taking advantage of established Kubernetes-based networking solutions that provide high-speed intelligent networking, you can save yourself from expensive cost overruns, data center re-auditing, and delays. By using Kubernetes-based solutions which take advantage of the high-speed networking solutions already required by HPC AI deployments, you further optimize your investment in AI.
Announcing F5 NGINX Gateway Fabric 1.4.0 with IPv6 and TLS Passthrough

We announced the next release of F5 NGINX Gateway Fabric, version 1.4.0, which includes a lot of smaller but very necessary features. This allows us to dedicate more time to advancing our non-functional testing framework and ensuring we maintain top performance across releases. Nevertheless, we have some great highlights in this release:

- IPv6 support
- TLS passthrough (via TLSRoute)
- Server zone metrics
- Ability to add custom pod annotations
- Plenty of bug fixes!

During this release cycle, we discovered a bug around our custom policies that occurred when you had the same path for more than one Route: the policy would not be applied to either Route. For this release, we’ve decided to enforce a restriction so that policies cannot be applied when two or more Routes share the same path. However, we are pursuing a long-term solution to lift this restriction on this edge case, as we understand that use cases that route based on header, query parameter, or other request attributes on the same path do exist.

IPv6 Support

While most Kubernetes clusters are still utilizing IPv4, we recognized that anyone employing an IPv6 cluster would have no ability to deploy NGINX Gateway Fabric. Thus, we implemented a simple feature to enable dual IPv4/IPv6 networking for NGINX Gateway Fabric. This option is enabled by default, so you can simply install as normal on an IPv6 cluster.

TLS Passthrough

New with 1.4 is TLSRoute support. This Route type enables the TLS passthrough use case and is similar to setting up an HTTPRoute. It allows you to pass encrypted traffic through NGINX Gateway Fabric where it is terminated by your backend application, ensuring end-to-end encryption. Since most information passes straight through NGINX Gateway Fabric with this Route type, setup is easy. You can enable TLS passthrough for any application using our guide available here; a minimal sketch also appears at the end of this post.

Non-Functional Testing

This release marks the completion of automating the non-functional testing that we execute before each release. If you are unfamiliar with these tests, our team runs NGINX Gateway Fabric through a series of scenarios (non-functional tests) to check whether our performance is regressing or improving from previous releases. As an infrastructure product that you rely on, it is our top priority to ensure that stability and performance are not compromised as new features are released. The results of all non-functional testing are available in the GitHub repository for anyone to see, and should give you an idea of how well NGINX Gateway Fabric performs in general and across releases.

What’s Next

NGINX Gateway Fabric 1.5.0 will bring NGINX code snippets to the Gateway API with a first-class Upstream Settings policy to configure keepalive connections and NGINX zone size. If you are familiar with NGINX, or find that you need a feature NGINX provides that is not yet available via a Gateway API extension, you can put an NGINX code snippet within a SnippetFilter to apply NGINX configuration to a Route rule. You will even be able to use the feature to load other modules NGINX provides and leverage the vast wealth of NGINX functionality. We will still be providing many NGINX features via first-class policies and filters, such as the Upstream Settings policy, as they allow us to handle much of the complexity of translating to the Gateway API for you.
The Upstream Settings policy can set upstream management directives that cannot be applied effectively via snippets. We will continue to deliver these custom policies and filters across all of our releases, in addition to new Gateway API resources and NGINX Gateway Fabric-specific features. You can see a preview of the full snippet design here, though not all features may be implemented in one release cycle. For more information on our strategy toward first-class NGINX customization via Gateway API extensions, see our full enhancement proposal here.

Resources

For the complete changelog for NGINX Gateway Fabric 1.4.0, see the Release Notes.

To try NGINX Gateway Fabric for Kubernetes with NGINX Plus, start your free 30-day trial today or contact us to discuss your use cases.

If you would like to get involved, see what is coming next, or see the source code for NGINX Gateway Fabric, check out our repository on GitHub!

We have weekly community meetings on Tuesdays at 9:30AM Pacific/12:30PM Eastern/5:30PM GMT. The meeting link, updates, agenda, and notes are on the NGINX Gateway Fabric Meeting Calendar. Links are also always available from our GitHub readme.
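As promised in the TLS Passthrough section above, here is a minimal TLSRoute sketch. The Gateway name, listener section name, hostname, and backend Service are all illustrative assumptions; see the official NGINX Gateway Fabric guide for a complete walkthrough:

```bash
kubectl apply -f - <<'EOF'
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: TLSRoute
metadata:
  name: secure-app
spec:
  parentRefs:
  - name: gateway          # a Gateway with a TLS listener in Passthrough mode
    sectionName: tls
  hostnames:
  - "app.example.com"
  rules:
  - backendRefs:
    - name: secure-app     # the backend Service terminates TLS itself
      port: 8443
EOF
```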
Certified Kubernetes Administrator (CKA) for F5 guys and gals

Summary

The Certified Kubernetes Administrator (CKA) program is an industry certification that allows you to quickly establish your skills and credibility in today's job market. Kubernetes is exploding in popularity, and with the majority of new development happening in microservices-based technologies, it's important for existing network and security teams to learn it in order to stay relevant. We recently chatted about this on the DevCentral live show.

What is CKA?

The CKA is designed by the Cloud Native Computing Foundation (CNCF), which is a part of the non-profit Linux Foundation. As a result, the learning required for this certification is not skewed to any 3rd-party cloud, hardware, or software platform. This vendor-agnostic approach means you can easily access the technology you need to learn, build your own cluster, and pass the exam. This agnostic approach also means the exam, which you can take from home, does not require special knowledge that you normally get from working in a corporate environment. Personally I don't manage a production k8s environment, and I passed the exam - no vendor-specific knowledge required.

Why should I care about CKA?

- Are you feeling "left behind" or "legacy" as the industry converts to Kubernetes?
- Are new terms and concepts used by developers creating silos or gaps between traditional Network or Security teams and developers?
- Are you looking to integrate your corporate security into Kubernetes environments?
- Do you want to stay relevant in the job market?

If you answered yes to any of the above, the CKA is a great way to catch up and thrive in Kubernetes conversations at your workplace.

Sure, but how is this relevant to an F5 guy or gal?

Personally I'm often helping devs expose their k8s apps with a firewall policy or SSL termination. NGINX Kubernetes Ingress Controller (KIC) is extremely popular for developers to manage ingress within their Kubernetes cluster. F5 Container Ingress Services (CIS) is a popular solution for enterprises integrating k8s or OpenShift with their existing security measures. Both of these are areas where your networking and security skills are needed by your developers! As a network guy, the CKA rounded out my knowledge of Kubernetes - from networking to storage to security and troubleshooting. The figure below is a common solution that I talk customers through when they want to use KIC for managing traffic inside their cluster, but CIS to have inbound traffic traverse an enterprise firewall and integrate with Kubernetes.

How do I study for the CKA?

Personally I took a course at ACloudGuru, which was all the preparation I needed. Others I know took a course from Udemy and spoke highly of it, and others just read all they could from free community sources. Recently the exam fee (currently $375 USD) changed to include 2x practice exams from Killer.sh. This wasn't available when I signed up for the exam, but my colleagues spoke extremely highly of the preparation they received from these practice exams. I recommend finding a low-cost course to study if you can, and strongly recommend you build your own cluster from the ground up as preparation for your exam. Building your own cluster (deploying Linux hosts, installing a runtime like Docker, and then installing the k8s control plane and nodes) is a fantastic way to appreciate the architecture and purpose of Kubernetes; a rough sketch of those steps follows below. Finally, take the exam (register here).
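Here's that cluster build sketched out with kubeadm, one common DIY path. Ubuntu hosts are assumed, and the package installation steps for containerd and the kubeadm/kubelet/kubectl tools are documented on kubernetes.io and omitted here:

```bash
# On the control plane node: initialize the cluster.
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Give your user access to the cluster with kubectl.
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Install a CNI plugin (Flannel shown, matching the CIDR above).
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

# On each worker node: run the join command that 'kubeadm init' printed, e.g.
# sudo kubeadm join <control-plane-ip>:6443 --token <token> \
#   --discovery-token-ca-cert-hash sha256:<hash>
```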
I recommend paying for and scheduling your exam before you start studying - that way you'll have a deadline to motivate you!

I'm a CKA, now what?

Share your achievement on LinkedIn! Share your knowledge with colleagues. You'll be in high demand from employers, but more importantly, you'll be valuable in your workplace when you can help developers with things like networking, security, and integrations with your existing platforms. And then shoot me a note to let me know you've done it!
Architecture Options for Kubernetes Service Discovery in Distributed Cloud

The F5 Distributed Cloud (XC) Virtual Edition (VE) Customer Edge (CE) platform can be deployed within your data center or cloud environment. It can perform service discovery for services in your Kubernetes (K8s) clusters.

Why do service discovery?

Service discovery is important in systems that change and move around, like microservices architectures. It helps find and connect services automatically. Instead of hard-coding network locations, service discovery makes sure that services can easily find and communicate with each other, even when they scale or change locations. This improves scalability and resilience, and simplifies managing services in complex environments like Kubernetes or cloud infrastructures. By reducing manual intervention, service discovery enhances the overall efficiency and reliability of application deployments.

The F5 Distributed Cloud (XC) CE can use the native kube-apiserver, or Consul, to query for services as they come online, enabling admins to reference these discovered services. These services become XC origin pool definitions and can then be published locally through a proxy (HTTP load balancer) on the CE itself or via our Global Application Delivery Network (ADN) - (Regional Edge deployment). The F5 XC Load Balancer does more than just balance packets. It offers a set of SaaS security services that are easy to use. Customers can have a globally redundant layer of security while serving content from private K8s clusters.

This write-up covers two distinct service discovery architecture options available with XC:

- Secure K8s Gateway (VE CE)
- Kubernetes Sitetype Customer Edge (K8s sitetype CE)

Depending on your service discovery use-case, you may end up with one of these two options or both, as they are not mutually exclusive. The first option, using the CE as a Secure K8s Gateway, may be the easier option for folks not particularly versed in the nuances of Kubernetes.

Architecture 1: Virtual Edition Customer Edge (VE CE)

If a picture is worth a thousand words, then a working lab environment is worth a million. This repo walks through an entire Secure K8s GW setup and will leave you with a config that could easily be expanded upon. You can quickly build a PoC and start getting familiar with these modern app capabilities by using these tools. The readme includes details on how to use everything and what functions the various tools provide. It's all shell script and YAML, so it's very easy to read through these and understand what's going on.

https://github.com/dober-man/ve-ce-secure-k8s-gw

This repo is designed to automate the deployment and configuration of a secure Kubernetes Gateway in the F5 Distributed Cloud (F5 XC) environment. It provides scripts and YAML configurations to set up secure communication, networking policies, and infrastructure components required to establish a secure gateway in a Kubernetes cluster. The readme file also documents the pre-reqs, but you will at a minimum need an XC tenant, an XC Virtual Edition CE, and an Ubuntu 22.04 server. If you do not have an XC tenant or VE CE, reach out to your local F5 account team. Please use the issues feature of GitHub to report any discrepancies with the builds or documentation.

Architecture 2: Kubernetes Sitetype Customer Edge (K8s sitetype CE)

In this architecture, the entire CE runs as a service within the cluster. This model requires a bit more fundamental knowledge of Kubernetes and integrates more tightly with the cluster.
You can quickly build a PoC and start getting familiar with these modern app capabilities by using this repo:

https://github.com/dober-man/k8s-sitetype-ce

This repo is focused on automating the deployment of the k8s-sitetype CE in a Kubernetes cluster. It provides scripts to simplify the process of setting up a secure site gateway for handling network traffic between cloud environments, on-premises infrastructure, and edge locations. The readme file documents the pre-reqs, but you will at a minimum need an XC tenant and an Ubuntu 22.04 server. If you do not have an XC tenant, reach out to your local F5 account team. Please use the issues feature of GitHub to report any discrepancies with the builds or documentation.

Summary

F5 Distributed Cloud offers a number of Kubernetes integration options for service discovery, but it also offers several other capabilities, including Virtual K8s (Namespace as a Service) and Managed K8s, which will be covered in future articles. Please feel free to drop a like or leave a comment below.