BIG-IP Next
BIG-IP Next: iRules pool routing
If you use routing in iRules with the pool command in BIG-IP and you're starting to kick the tires on BIG-IP Next in your lab environments, please note the pool reference is not just the pool name. For example, in classic BIG-IP, if I had a pool named myPool, then the command in my iRule would just be pool myPool. In BIG-IP Next (as of this publish date; they are working on restoring the relative path) you will need these additional details:

- Tenant Name
- Application Name
- Pool Service Name

The format for the pool reference is then:

pool /app/myTenantName:ApplicationName/pool/PoolServiceName

Consider this partial AS3 declaration (stripped down to the necessary lines for brevity):

{
  "class": "ADC",
  "tenant2zBLVQCbR2unEw5ge6nVrQ": {
    "class": "Tenant",
    "testapp1": {
      "class": "Application",
      "pool_reference_test_7_testvip1": {
        "class": "iRule",
        "iRule": {
          "base64": ""
        }
      },
      "testpool1": { "class": "Pool" },
      "testpool2": { "class": "Pool" },
      "testpool2_service": {
        "class": "Service_Pool",
        "pool": "testpool2"
      },
      "testpool3": { "class": "Pool" },
      "testpool3_service": {
        "class": "Service_Pool",
        "pool": "testpool3"
      },
      "testvip1": {
        "class": "Service_HTTP",
        "iRules": [ "pool_reference_test_7_testvip1" ],
        "pool": "testpool1",
        "virtualAddresses": [ "10.0.2.51" ],
        "virtualPort": 80
      }
    }
  }
}

In this case, there is a default pool (testpool1) attached to the virtual server, but the ones that will require routing in the iRule, testpool2 and testpool3, are not attached. They are mapped in the Service_Pool classes, though, and that's what we need for the iRule. From this declaration, we need:

- Tenant Name: tenant2zBLVQCbR2unEw5ge6nVrQ
- Application Name: testapp1
- Service Pool Names: testpool2_service, testpool3_service

The original iRule then, as shown here:

when HTTP_REQUEST {
  if { [string tolower [HTTP::uri]] == "/tp2" } {
    pool testpool2
    HTTP::uri /
  } elseif { [string tolower [HTTP::uri]] == "/tp3" } {
    pool testpool3
    HTTP::uri /
  }
}

Becomes:

when HTTP_REQUEST {
  if { [string tolower [HTTP::uri]] == "/tp2" } {
    pool /app/tenant2zBLVQCbR2unEw5ge6nVrQ:testapp1/pool/testpool2_service
    HTTP::uri /
  } elseif { [string tolower [HTTP::uri]] == "/tp3" } {
    pool /app/tenant2zBLVQCbR2unEw5ge6nVrQ:testapp1/pool/testpool3_service
    HTTP::uri /
  }
}

When creating an application service in the Central Manager GUI, here's the workflow I used:

1. Create the application service without the iRule, but with whatever pools you're going to route to, so that the pools and pool services are defined.
2. Validate the app and view results. This is where you'll find your tenant and service pool names. The app's name should be obvious, as you set it!
3. Go ahead and deploy; there isn't a way here to save drafts currently.
4. Create or edit the iRule with the pool format above, filled in with your details.
5. Edit the deployment to reference your iRule (and the correct version), then redeploy.

This should get you where you need to be! Comment below or start a thread in the forums if you get stuck.
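One last sanity check you can run after the redeploy: hit the virtual server from a client that can reach it. This is a minimal sketch using the VIP address and URIs from the example declaration above; substitute your own values.

# /tp2 and /tp3 should be routed to testpool2 and testpool3 respectively;
# any other URI should land on the default pool (testpool1).
curl -i http://10.0.2.51/tp2
curl -i http://10.0.2.51/tp3
curl -i http://10.0.2.51/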
Introducing F5 BIG-IP Next CNF Solutions for Red Hat OpenShift

5G and Red Hat OpenShift

5G standards have embraced Cloud-Native Network Functions (CNFs) for implementing network services in software as containers. This is a big change from previous Virtual Network Functions (VNFs) or Physical Network Functions (PNFs). The main characteristics of Cloud-Native Functions are:

- Implementation as containerized microservices
- Small performance footprint, with the ability to scale horizontally
- Independence of guest operating system, since CNFs operate as containers
- Lifecycle manageable by Kubernetes

Overall, these provide a huge improvement in terms of flexibility, faster service delivery, resiliency, and crucially the use of Kubernetes as a unified orchestration layer. The latter is a drastic change from previous standards, where each vendor had its own orchestration. This unification around Kubernetes greatly simplifies network functions for operators, reducing the cost of deploying and maintaining networks. Additionally, embracing the container form factor allows Network Functions (NFs) to be deployed in new use cases like the far edge, thanks to the smaller footprint, while at the same time they can be deployed at large scale in a central data center because of the horizontal scalability. In this article we focus on Red Hat OpenShift, which is the market-leading and industry-reference implementation of Kubernetes for IT and Telco workloads.

Introduction to F5 BIG-IP Next CNF Solutions

F5 BIG-IP Next CNF Solutions is a suite of Kubernetes-native 5G Network Functions, implemented as microservices. It shares the same Cloud Native Engine (CNE) as F5 BIG-IP Next SPK, introduced last year. The functionalities implemented by the CNF Solutions deal mainly with user plane data. User plane data has the particularity that the final destination of the traffic is not the Kubernetes cluster but rather an external endpoint, typically the Internet. In other words, the traffic gets into the Kubernetes cluster and is forwarded out of the cluster again. This is done using dedicated interfaces that are not used for the regular ingress and egress paths of a Kubernetes cluster. In this case, the main purpose of using Kubernetes is to make use of its orchestration, flexibility, and scalability.

The main functionalities implemented at the initial GA release of the CNF Solutions are:

- F5 Next Edge Firewall CNF, an IPv4/IPv6 firewall with the main focus on protecting the 5G core networks from external threats, including DDoS flood protection and IPS DNS protocol inspection.
- F5 Next CGNAT CNF, which offers large-scale NAT with the following features: NAPT, Port Block Allocation, Static NAT, Address Pooling Paired, and Endpoint Independent mapping modes; inbound NAT and hairpinning; egress path filtering and address exclusions; ALG support for FTP/FTPS, TFTP, RTSP, and PPTP.
- F5 Next DNS CNF, which offers a transparent DNS resolver and caching services. Other remarkable features are zero rating and DNS64, which allows IPv6-only clients to connect to IPv4-only services via synthetic IPv6 addresses.
- F5 Next Policy Enforcer CNF, which provides traffic classification, steering and shaping, and TCP and video optimization. This product was launched as Early Access in February 2023 with basic functionalities. Static TCP optimization is now GA in the initial release.

Although the CGNAT (Carrier Grade NAT) and the Policy Enforcer functionalities are specific to user plane use cases, the Edge Firewall and DNS functionalities have additional uses in other places of the network.
F5 and OpenShift

BIG-IP Next CNF Solutions fully supports Red Hat OpenShift Container Platform, which allows deployment in edge or core locations with unified management across the multiple deployments. OpenShift operators greatly facilitate the setup and tuning of telco-grade applications. These are:

- Node Tuning Operator, used to set up Hugepages.
- CPU Manager and Topology Manager with NUMA awareness, which allow scheduling the data plane PODs within a NUMA domain that is aligned with the SR-IOV NICs they are attached to.

In an OpenShift platform all these are set up transparently to the applications, and BIG-IP Next CNF Solutions only requires being configured with an appropriate runtimeClass.

F5 BIG-IP Next CNF Solutions architecture

F5 BIG-IP Next CNF Solutions makes use of the widely trusted F5 BIG-IP Traffic Management Microkernel (TMM) data plane. This allows for a high-performance, dependable product from the start. The CNF functionalities come from a microservices re-architecture of the broadly used F5 BIG-IP VNFs. The diagram below illustrates how a microservices architecture is used. The data plane POD scales vertically from 1 to 16 cores and scales horizontally from 1 to 32 PODs, enabling it to handle millions of subscribers. NUMA nodes are supported.

The next diagram focuses on the data plane handling, which is the most relevant aspect for this CNF suite. Typically, each data plane POD has two IP addresses, one for each side of the N6 reference point. These could be named the radio and Internet sides, as shown in the diagram above. The left-side L3 hop must distribute the traffic among the left-side addresses of the CNF data plane. This left-side L3 hop can be a router with BGP ECMP (Equal Cost Multi Path), an SDN, or any other mechanism which is able to:

- Distribute the subscribers across the data plane PODs, shown in [1] of the figure above.
- Keep these subscribers in the same PODs when there is a change in the number of active data plane PODs (scale-in, scale-out, maintenance, etc.), as shown in [2] in the figure above. This minimizes service disruption.

On the right side of the CNFs, the path towards the Internet, it is typical to implement NAT functionality to transform the telco's private addresses to public addresses. This is done with the BIG-IP Next CG-NAT CNF. This NAT makes the return traffic symmetrical by reaching the same POD which processed the outbound traffic. This is thanks to each POD owning part of this NAT space, as shown in [3] of the above figure. Each POD's NAT address space can be advertised via BGP. When not using NAT on the right side of the CNFs, the network must be able to send the return traffic back to the same POD which is processing the same connection. The traffic must be kept symmetrical at all times; this is typically done with an SDN.

Using F5 BIG-IP Next CNF Solutions

As expected in a fully integrated Kubernetes solution, both installation and configuration are done using the Kubernetes APIs. The installation is performed using Helm charts, and the configuration using Custom Resource Definitions (CRDs). Unlike ConfigMaps, CRDs allow for schema validation of the configurations before they are applied. Details of the CRDs can be found in this clouddocs site. Next is an overview of the most relevant CRDs.

General network configuration

Deploying in Kubernetes automatically configures and assigns IP addresses to the CNF PODs. The data plane interfaces will require specific configuration.
The required steps are:

1. Create Kubernetes NetworkNodePolicies and NetworkAttachment definitions which expose SR-IOV VFs to the CNF data plane PODs (TMM). To make use of these SR-IOV VFs, they are referenced in the BIG-IP controller's Helm chart values file. This is described in the Networking Overview page.
2. Define the L2 and L3 configuration of the exposed SR-IOV interfaces using the F5BigNetVlan CRD.
3. If static routes need to be configured, these can be added using the F5BigNetStaticroute CRD.
4. If BGP configuration needs to be added, this is configured in the BIG-IP controller's Helm chart values file. This is described in the BGP Overview page. It is expected this will be configured using a CRD in the future.

Traffic management listener configuration

As with classic BIG-IP, once the CNFs are running and plumbed into the network, no traffic is processed by default. The traffic management functionalities implemented by BIG-IP Next CNF Solutions are the same as those of the analogous modules in the classic BIG-IP, and the CRDs in BIG-IP Next that configure these functionalities are conceptually similar too.

Analogous to Virtual Servers in classic BIG-IP, BIG-IP Next CNF Solutions have a set of CRDs that create listeners of traffic where traffic management policies are applied. This is mainly the F5BigContextSecure CRD, which allows specifying traffic selectors indicating VLANs, source and destination prefixes, and ports where we want the policies to be applied.

There are specific CRDs for listeners of Application Level Gateways (ALGs) and protocol-specific solutions. These required several steps in classic BIG-IP: first creating the Virtual Server, then creating the profile, and finally applying it to the Virtual Server. In BIG-IP Next this is done in a single CRD. At the time of this writing, these CRDs are:

F5BigZeroratingPolicy - Part of the Zero-Rating DNS solution; enables subscribers to bypass rate limits.
F5BigDnsApp - High-performance DNS resolution, caching, and DNS64 translations.
F5BigAlgFtp - File Transfer Protocol (FTP) application layer gateway services.
F5BigAlgTftp - Trivial File Transfer Protocol (TFTP) application layer gateway services.
F5BigAlgPptp - Point-to-Point Tunnelling Protocol (PPTP) application layer gateway services.
F5BigAlgRtsp - Real Time Streaming Protocol (RTSP) application layer gateway services.

Traffic management profiles and policies configuration

Depending on the type of listener created, different types of profiles and policies can be attached. In the case of F5BigContextSecure, the following CRDs can be attached to define how traffic is processed:

F5BigTcpSetting - TCP options to fine-tune how application traffic is managed.
F5BigUdpSetting - UDP options to fine-tune how application traffic is managed.
F5BigFastl4Setting - FastL4 options to fine-tune how application traffic is managed.

and the following policies for security and NAT:

F5BigDdosPolicy - Denial of Service (DoS/DDoS) event detection and mitigation.
F5BigFwPolicy - Granular stateful-flow filtering based on access control list (ACL) policies.
F5BigIpsPolicy - Intelligent packet inspection protects applications from malignant network traffic.
F5BigNatPolicy - Carrier-grade NAT (CG-NAT) using large-scale NAT (LSN) pools.

The ALG listeners require the use of F5BigNatPolicy and might make use of the F5BigFwPolicy CRD. These CRDs also have traffic selectors to allow further control over which traffic the policies should be applied to.
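Before authoring any of these custom resources, it can help to confirm which F5 CRDs are actually installed in your cluster and what fields their spec accepts. The commands below are standard kubectl; the grep pattern and the lowercase resource name are just examples and may need adjusting to your installation.

# List the F5 CRDs installed by the CNF Helm charts
kubectl get crds | grep -i f5big

# Inspect the schema of a CRD before writing a custom resource for it
kubectl explain f5bigfwpolicy.spec --recursive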
Firewall Contexts

Firewall policies are applied to the listener with the best match. In addition to the F5BigFwPolicy that might be attached, a global firewall policy (hence effective in all listeners) can be configured before the listener-specific firewall policy is evaluated. This is done with the F5BigContextGlobal CRD, which can have an F5BigFwPolicy attached. F5BigContextGlobal also contains the default action to apply to traffic not matching any firewall rule in any context (e.g. Global Context, Secure Context, or another listener). This default action can be set to accept, reject, or drop, along with whether to log this default action. In summary, within a listener match, the firewall contexts are processed in this order:

1. ContextGlobal
2. Matching ContextSecure or another listener context.
3. Default action as defined by ContextGlobal's default action.

Event Logging

Event logging at high speed is critical to provide visibility into what the CNFs are doing. For this, the following CRDs are implemented:

F5BigLogProfile - Specifies subscriber connection information sent to remote logging servers.
F5BigLogHslpub - Defines remote logging server endpoints for the F5BigLogProfile.

Demo

F5 BIG-IP Next CNF Solutions roadmap

What is being exposed here is just the beginning of a journey. Telcos have embraced Kubernetes as the compute and orchestration layer. Because of this, BIG-IP Next CNF Solutions will eventually replace the analogous classic BIG-IP VNFs. Expect in the upcoming months that BIG-IP Next CNF Solutions will match and eventually surpass the features currently offered by the analogous VNFs.

Conclusion

This article introduces a fully re-architected, scalable solution for Red Hat OpenShift, mainly focused on the telco user plane. This new microservices architecture offers flexibility, faster service delivery, resiliency, and crucially the use of Kubernetes. Kubernetes is becoming the unified orchestration layer for telcos, simplifying infrastructure lifecycle and reducing costs. OpenShift represents the best-in-class Kubernetes platform thanks to its enterprise readiness and telco-specific features. The architecture of this solution, alongside the use of OpenShift, also extends network services use cases to the edge by allowing the deployment of Network Functions in a smaller footprint.

Please check the official BIG-IP Next CNF Solutions documentation for more technical details and check www.f5.com for a high-level overview.
Create F5 BIG-IP Next Instance on Proxmox Virtual Environment

If you are looking to deploy a F5 BIG-IP Next instance on Proxmox Virtual Environment (henceforth referred to as Proxmox for the sake of brevity), perhaps in your home lab, here's how.

First, download the BIG-IP Next Central Manager and BIG-IP Next QCOW files from MyF5 Downloads. Click on "Copy Download Link".

Copy the QCOW files to your Proxmox host. I am using the download links from above in the example below.

proxmox $ curl -O -L -J [link for Central Manager from F5 downloads]
proxmox $ curl -O -L -J [link for Next from F5 downloads]

On the Proxmox host, extract the contents of the QCOW files. You will need to rename the Central Manager file from .qcow to .qcow2.

proxmox $ cd ~/
proxmox $ mv BIG-IP-Next-CentralManager-20.2.1-0.3.25.qcow BIG-IP-Next-CentralManager-20.2.1-0.3.25.qcow2
proxmox $ tar -zxvf BIG-IP-Next-20.2.1-2.430.2+0.0.48.qcow2.tar.gz
BIG-IP-Next-20.2.1-2.430.2+0.0.48.qcow2
BIG-IP-Next-20.2.1-2.430.2+0.0.48.qcow2.sha512
BIG-IP-Next-20.2.1-2.430.2+0.0.48.qcow2.sha512.sig
BIG-IP-Next-20.2.1-2.430.2+0.0.48.qcow2.sha512sum.txt.asc
BIG-IP-Next-20.2.1-F5-ca-bundle.cert
BIG-IP-Next-20.2.1-F5-certificate.cert

Then, run the commands below to create a virtual machine (VM) from the extracted QCOW files. Replace the values to match your environment.

#
# Central Manager
#
# use either the DHCP or static IP example
#
# using DHCP (change values to match your environment)
proxmox $ qm create 105 --memory 16384 --sockets 1 --cores 8 --net0 virtio,bridge=vmbr0 --name my-central-manager --scsihw=virtio-scsi-single --ostype=l26 --cpu=x86-64-v2-AES --citype nocloud --ipconfig0 ip=dhcp --ciupgrade=0 --ide2=local-lvm:cloudinit

# static IP (change values to match your environment)
# proxmox $ qm create 105 --memory 16384 --sockets 1 --cores 8 --net0 virtio,bridge=vmbr0 --net1 virtio,bridge=vmbr1 --name my-central-manager --scsihw=virtio-scsi-single --ostype=l26 --cpu=x86-64-v2-AES --citype nocloud --ipconfig0 ip=192.168.1.5/24,gw=192.168.1.1 --nameserver 192.168.1.1 --ciupgrade=0 --ide2=local-lvm:cloudinit

# import disk
proxmox $ qm set 105 --virtio0 local-lvm:0,import-from=/root/BIG-IP-Next-CentralManager-20.2.1-0.3.25.qcow2 --boot order=virtio0

#
# Next instance
#
# Note that you need at least two interfaces, one for management and one for data-plane
#
# use either the DHCP or static IP example
#
# DHCP
proxmox $ qm create 107 --memory 16384 --sockets 1 --cores 8 --net0 virtio,bridge=vmbr0 --net1 virtio,bridge=vmbr1 --name my-next-instance --scsihw=virtio-scsi-single --ostype=l26 --cpu=x86-64-v2-AES --citype nocloud --ipconfig0 ip=dhcp --ciupgrade=0 --ciuser=admin --cipassword=admin --ide2=local-lvm:cloudinit

# static IP
# proxmox $ qm create 107 --memory 16384 --sockets 1 --cores 8 --net0 virtio,bridge=vmbr0 --net1 virtio,bridge=vmbr1 --name my-next-instance --scsihw=virtio-scsi-single --ostype=l26 --cpu=x86-64-v2-AES --citype nocloud --ipconfig0 ip=192.168.1.7/24,gw=192.168.1.1 --nameserver 192.168.1.1 --ciupgrade=0 --ciuser=admin --cipassword=admin --ide2=local-lvm:cloudinit

# import disk
proxmox $ qm set 107 --virtio0 local-lvm:0,import-from=/root/BIG-IP-Next-20.2.1-2.430.2+0.0.48.qcow2 --boot order=virtio0

You should now see a new VM created in the Proxmox GUI. Finally, start the VM. This will take a few minutes. The BIG-IP Next VM is now ready to be onboarded per instructions found here.
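If you would rather start the VMs from the shell than from the GUI, the standard qm commands work too; the VM IDs below match the examples above.

# Start the Central Manager and BIG-IP Next VMs created above
proxmox $ qm start 105
proxmox $ qm start 107

# Confirm they are running
proxmox $ qm status 105
proxmox $ qm status 107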
BIG-IP Next Automation: AS3 Basics

I need a little Mr. Miyagi right now to grab my face and intently look me in the eye and give me a "Concentrate! Focus power!" For those of you youngins' who don't know who that is, he's the OG Karate Kid mentor. Anyway, I have a thousand things I want to say about AS3, but in this article I'll attempt to cut this down to a narrow BIG-IP Next-specific context to get you started. It helps that last December I did a five-part streaming series on AS3 in the BIG-IP classic context. If you haven't seen that, you have my blessing to stop right now, take some time to digest AS3 conceptually and practice against workloads and configurations in BIG-IP classic that you know and understand, before returning here to embrace all the newness of BIG-IP Next.

AS3 is FOUNDATIONAL in BIG-IP Next

In classic BIG-IP, you could edit the bigip.conf file directly, use tmsh commands, or iControl REST commands to imperatively create/modify/delete BIG-IP objects. With the exception of system configuration and shared configuration objects, this is not the case with BIG-IP Next. All application configuration is AS3 at its lowest state level. This doesn't mean you have to work primarily in AS3 configuration. If you utilize the migration utility in Central Manager, it will generate the AS3 necessary to get your apps up and running. Another option is to use the built-in http FAST template (we'll cover FAST in later articles) to build out an application from scratch in the GUI. But if you use features outside the purview of that template, or you need to edit your migration output, you'll need to work in the AS3 configuration declaration, even if just a little bit.

Apples to Apples

It's a fun card game, no? My family takes it to snarky absurd levels of sarcasm, to the point that when we play with "outsiders" we get lots of blank looks and stares as we're all rolling on the floor laughing. Oh well, to each his own. But we're here to talk about AS3, right? Well, in BIG-IP Next, there is a compatibility API for AS3, such that you can take a declaration from BIG-IP classic and, as long as the features within that declaration are supported, it should "just work" via the Central Manager API. That's pretty cool, right? Let's start with a basic application declaration from the recent video posted by Mark_Dittmer exploring the API differences between classic and Next.

{
  "class": "ADC",
  "schemaVersion": "3.0.0",
  "id": "generated-for-testing",
  "Tenant_1": {
    "class": "Tenant",
    "App_1": {
      "class": "Application",
      "Service_1": {
        "class": "Service_HTTP",
        "virtualAddresses": [ "10.0.0.1" ],
        "virtualPort": 80,
        "pool": "Pool_1"
      },
      "Pool_1": {
        "class": "Pool",
        "members": [
          {
            "servicePort": 80,
            "serverAddresses": [ "10.1.0.1", "10.1.0.2" ]
          }
        ]
      }
    }
  }
}

A simple VIP with a pool with two pool members. A toy config to be sure, but it is useful here to show the format (JSON) of an AS3 declaration and some of the schema as well. With the compatibility API, this same declaration can be posted to a classic BIG-IP like this:

POST https://<BIG-IP IP Address>/mgmt/shared/appsvcs/declare

Or a BIG-IP Next instance like this:

POST https://<Central Manager IP Address>/api/v1/spaces/default/appsvcs/declare?target_address=<BIG-IP Next instance IP Address>

For those already embracing AS3, this compatibility API in BIG-IP Next should make the transition easier.
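Here is a minimal sketch of those two calls with curl, with the declaration saved locally as declaration.json. The authentication shown is an assumption on my part: basic auth against the classic BIG-IP and a bearer token (in $TOKEN) against Central Manager; adjust to however you authenticate.

# Classic BIG-IP with the AS3 package installed (basic auth is an assumption)
curl -sk -u admin:admin -X POST -H "Content-Type: application/json" \
  -d @declaration.json \
  https://<BIG-IP IP Address>/mgmt/shared/appsvcs/declare

# BIG-IP Next via the Central Manager compatibility API ($TOKEN is assumed to
# hold a valid Central Manager access token)
curl -sk -X POST -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
  -d @declaration.json \
  "https://<Central Manager IP Address>/api/v1/spaces/default/appsvcs/declare?target_address=<BIG-IP Next instance IP Address>"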
AS3 Workflow in BIG-IP Next

With BIG-IP classic, you had to install the AS3 package (technically an iControl LX, or sometimes referenced as an iApps v2, package) onto each BIG-IP system you wanted to use the AS3 declarative configuration model on. Each BIG-IP was an island, and the configuration management of the overall system of BIG-IPs relied on an external system for source of truth. With BIG-IP Next, the Central Manager API has native AS3 support, so there are no packages to install to prepare the environment. Also, Central Manager is the centralized AS3 interface for all Next instances. This has several benefits:

- A singular and centralized source of truth for your configuration management
- No external package management requirements
- Tremendous improvement in API performance management, since most of the heavy lifting is offloaded to Central Manager and the control-plane functionality that remains on the instance is intentionally designed for API-first operations

The general application deployment workflow introduced exclusively for Next, which I'll reference as the documents API, is twofold:

Create an application service

First, you create the application service on Central Manager. You can use the same JSON declaration from the section above here; only the API endpoint is different:

POST https://<Central Manager IP Address>/api/v1/spaces/default/appsvcs/documents

A successful transaction will result in an application service document on Central Manager. A couple of notes on this at the time of writing:

- Documents created through the API are not validated against the journeys migration tool that is available for use in the Central Manager GUI.
- Documents are not schema validated at the attribute level of classes, so whereas a class used in classic might be supported in Next, some of its attributes might not be. This means that although document creation can appear successful, the deployment will fail if the declaration contains classes and/or class attributes that are supported in classic BIG-IP but not in Next.

Deploy the application service

Assuming, however, all your AS3 work is accurate to the Next-supported schema, you post the specified document by ID to the target BIG-IP Next instance, here as a JSON payload versus a query parameter on the compatibility API shown earlier.

POST https://<Central Manager IP Address>/api/v1/spaces/default/appsvcs/documents/<Document ID>/deployments

{
  "target": "<BIG-IP Next Instance IP Address>"
}

At this point, your service should be available to receive traffic on the instance it was deployed on.

Next Up...

Now that we have the theory in place, join me next time where we'll take a look at working with a couple application services through both approaches.

Resources

- CM App Services Management
- AS3 Schema
- AS3 User Guide (classic, but useful)
- AS3 Reference Guide (classic, but useful)
- AS3 Foundations (streaming series)
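And for reference, the two-step documents-API workflow above as a pair of curl calls. This is a sketch: the file name and the $TOKEN variable holding a Central Manager access token are my own assumptions, and the document ID comes from the response to the first call.

# Step 1: create the application service document on Central Manager
curl -sk -X POST -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
  -d @declaration.json \
  https://<Central Manager IP Address>/api/v1/spaces/default/appsvcs/documents

# Step 2: deploy the returned document ID to a specific BIG-IP Next instance
curl -sk -X POST -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
  -d '{"target": "<BIG-IP Next Instance IP Address>"}' \
  https://<Central Manager IP Address>/api/v1/spaces/default/appsvcs/documents/<Document ID>/deployments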
iRule LX replacement on bigip-Next

Hi all - we use ILX on our current BIG-IP platform and wanted to understand what the alternative for it will be on the NEXT platform. My use case is: when an HTTP request comes in, the F5 extracts some information out of the request along with the client IP. The ILX part makes a REST call to an external URL with that information. If it returns a 200 response, the HTTP request is allowed to move forward to the pool. If a 403 is returned from the external service, a reset is sent back. How can I do the same in BIG-IP Next, as we have Node.js code performing this task? Thanks
F5 Next - how to reference irule procedures

Hi, has anyone figured out how references work for iRule objects with the call command? And if it is possible, do I need to assign it to the VS? I managed to use the call command to a proc within the same iRule, but I haven't found a way to reference proclibrary (irule) from my_irule.

https://clouddocs.f5.com/bigip-next/20-2-0/irules/bigipn_object_naming_irule.html

This is my stack:

{
  "_embedded": {
    "stacks": [
      {
        "_links": {
          "self": "/applications/a95e7451-d077-4ec3-a9c1-d0f3bea7f615/stacks/e87175a1-34f3-43d2-b52a-7b7466ed8851"
        },
        "clientSide": {
          "l4ClientSide": "default:service_2:vs",
          "persistence": {
            "cookieMethod": {
              "method": "COOKIE_INSERT_METHOD"
            },
            "template": "COOKIE_TEMPLATE"
          }
        },
        "enabled": true,
        "id": "e87175a1-34f3-43d2-b52a-7b7466ed8851",
        "irules": [
          {
            "description": "default:service_2:proclibrary",
            "rule": "when RULE_INIT {\nlog local0. \"proclib started\"\n}\nproc responder {} {\n  HTTP::respond 200 content {hell from proc}\n}"
          },
          {
            "description": "default:service_2:my_irule",
            "rule": "proc test {} {\nHTTP::respond 200 content [virtual name]\n}\nwhen HTTP_REQUEST {\nset vs_name [virtual name]\nlog local0. \"hello there\"\ncall /app/default:service_2/proclibrary::responder\n\n}"
          }
        ],
        "name": "vs",
        "serverSide": {
          "l4ServerSide": "default:service_2:vs"
        },
        "stackType": "HttpAdvancedProxy"
      }
    ]
  },
  "_links": {
    "self": "/applications/a95e7451-d077-4ec3-a9c1-d0f3bea7f615/stacks?"
  },
  "count": 1,
  "total": 1
}
BIG-IP Next – Introduction to the Blueprints API

If you have ever attempted to automate the BIG-IP configuration, you are probably familiar with F5's AS3 extension. Although AS3 is supported in BIG-IP Next, there is another API that might be the better option if you haven't started your migration journey up until now. This is called the Blueprints API. In this article, I want to show you how to use it to automate your applications with AS3.

Overview

When you use the BIG-IP Next GUI, you instantly see the benefits of having a centrally managed configuration across all your BIG-IP instances. The steps to create an application service in the GUI now have a siloed setup where you define 4 main sections separately:

- Application Properties
- Virtual Server Properties
- Pool Properties
- Deployment Properties

Each one of these sections allows you to adjust areas of your application service while still having a way to manage configurations across multiple BIG-IP instances. In other words, you can define one pool under the pool properties, but still have different pool members under the deployment properties for each of your BIG-IP instances. This creates a centrally managed application service that does not require the exact same configuration in each environment.

When you perform these tasks in the GUI, BIG-IP Next is generating its own API calls internally. It takes each of your configuration items outlined in the 4 sections above and defines the application service as a Blueprint. This Blueprint is then used to modify anything about the configuration/deployment moving forward.

If you aren't a fan of using a GUI and you are trying to automate, this same exact API is exposed to you as well. This means we get to use the same centrally managed configuration in our API calls. It also means that we can easily automate existing application services by simply using the API to manage them moving forward. So what does this Blueprint API look like?
Below is sample JSON used to create a Blueprint called "bobs-blueprint":

{
  "name": "bobs-blueprint",
  "set_name": "Examples",
  "template_name": "http",
  "parameters": {
    "application_description": "",
    "application_name": "bobs-blueprint",
    "enable_Global_Resiliency": false,
    "pools": [
      {
        "loadBalancingMode": "round-robin",
        "monitorType": [
          "http"
        ],
        "poolName": "juice",
        "servicePort": 8080
      }
    ],
    "virtuals": [
      {
        "FastL4_TOS": 0,
        "FastL4_idleTimeout": 600,
        "FastL4_looseClose": true,
        "FastL4_looseInitialization": true,
        "FastL4_pvaAcceleration": "full",
        "FastL4_pvaDynamicClientPackets": 1,
        "FastL4_pvaDynamicServerPackets": 0,
        "FastL4_resetOnTimeout": true,
        "FastL4_tcpCloseTimeout": 43200,
        "FastL4_tcpHandshakeTimeout": 43200,
        "InspectionServicesEnum": [],
        "TCP_idle_timeout": 60,
        "UDP_idle_timeout": 60,
        "ciphers": "DEFAULT",
        "ciphers_server": "DEFAULT",
        "enable_Access": false,
        "enable_FastL4": false,
        "enable_FastL4_DSR": false,
        "enable_HTTP2_Profile": false,
        "enable_HTTP_Profile": false,
        "enable_InspectionServices": false,
        "enable_SsloPolicy": false,
        "enable_TCP_Profile": false,
        "enable_TLS_Client": false,
        "enable_TLS_Server": false,
        "enable_UDP_Profile": false,
        "enable_WAF": false,
        "enable_iRules": false,
        "enable_mirroring": true,
        "enable_snat": true,
        "iRulesEnum": [],
        "multiCertificatesEnum": [],
        "perRequestAccessPolicyEnum": "",
        "pool": "juice",
        "snat_addresses": [],
        "snat_automap": true,
        "tls_c_1_1": false,
        "tls_c_1_2": true,
        "tls_c_1_3": false,
        "tls_s_1_2": true,
        "tls_s_1_3": false,
        "trustCACertificate": "",
        "virtualName": "bobs-vs",
        "virtualPort": 80
      }
    ]
  }
}

As you can see, the structure of this JSON is siloed in a very similar way to the GUI.

Note: For those readers who are wondering where the Deployment section is, that is handled in a separate API call after the blueprint has been created. I'll discuss that in more detail later.

In the sections below, I'll review a few of the API endpoints you can use, with some steps on how to perform the following common tasks:

- Viewing an existing Blueprint
- Creating a new Blueprint
- Deploying a Blueprint

Viewing an Existing Blueprint

Before we start creating a new Blueprint from scratch, it is probably a good idea to explain how we can view a list of our current Blueprints. To do so, we simply make a GET request to the following API endpoint:

GET https://{{bigip_next_cm_mgmt_ip}}/api/v1/spaces/default/appsvcs/blueprints

This will return a list of every Blueprint created by the GUI and/or API.
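If you're following along with curl rather than Postman, the list call might look like the sketch below. The $TOKEN variable holding a Central Manager access token is an assumption; authenticate however you normally do.

# List existing Blueprints on Central Manager
curl -sk -H "Authorization: Bearer $TOKEN" \
  https://{{bigip_next_cm_mgmt_ip}}/api/v1/spaces/default/appsvcs/blueprints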
Below is an example output:

{
  "_embedded": {
    "appsvcs": [
      {
        "_links": {
          "self": {
            "href": "/api/v1/spaces/default/appsvcs/blueprints//3f2ef264-cf09-45c8-a925-f2c8fccf09f6"
          }
        },
        "created": "2024-06-25T13:32:22.160399Z",
        "deployments": [
          {
            "id": "1e5f9c06-8800-4ab7-ad5e-648d55b83b68",
            "instance_id": "ce179e66-b075-4068-bc4e-8e212954da49",
            "target": {
              "address": "10.2.1.3"
            },
            "parameters": {
              "pools": [
                {
                  "isServicePool": false,
                  "poolMembers": [
                    {
                      "address": "10.1.3.100",
                      "name": "old"
                    },
                    {
                      "address": "10.2.3.100",
                      "name": "new"
                    }
                  ],
                  "poolName": "juice"
                }
              ],
              "virtuals": [
                {
                  "allow_networks": [],
                  "enable_allow_networks": false,
                  "virtualAddress": "10.2.2.11",
                  "virtualName": "juice-shop"
                }
              ]
            },
            "last_successful_deploy_time": "2024-06-25T17:46:00.193649Z",
            "modified": "2024-06-25T17:46:00.193649Z",
            "last_record": {
              "id": "cb6a06a1-c66d-41c3-a747-9a27b101a0f1",
              "task_id": "6a65642d-810b-4194-9693-91a15f6d1ef0",
              "created_application_path": "/applications/tenantSrLEVevFQnWwXT90590F3USQ/juice-shop",
              "start_time": "2024-06-25T17:45:55.392732Z",
              "end_time": "2024-06-25T17:46:00.193649Z",
              "status": "completed"
            }
          },
          {
            "id": "001e14e8-7900-482a-bdd4-aca35967a5cc",
            "instance_id": "0546acf5-3b88-422d-a948-28bbf0973212",
            "target": {
              "address": "10.2.1.4"
            },
            "parameters": {
              "pools": [
                {
                  "isServicePool": false,
                  "poolMembers": [
                    {
                      "address": "10.1.3.100",
                      "name": "old"
                    },
                    {
                      "address": "10.2.3.100",
                      "name": "new"
                    }
                  ],
                  "poolName": "juice"
                }
              ],
              "virtuals": [
                {
                  "allow_networks": [],
                  "enable_allow_networks": false,
                  "virtualAddress": "10.2.2.12",
                  "virtualName": "juice-shop"
                }
              ]
            },
            "last_successful_deploy_time": "2024-06-25T17:46:07.94836Z",
            "modified": "2024-06-25T17:46:07.94836Z",
            "last_record": {
              "id": "8a8a29be-e56b-4ef1-bf6a-92f7ccc9e9b7",
              "task_id": "0632cc18-9f0f-4a5e-875a-0d769b02e19b",
              "created_application_path": "/applications/tenantSrLEVevFQnWwXT90590F3USQ/juice-shop",
              "start_time": "2024-06-25T17:45:55.406377Z",
              "end_time": "2024-06-25T17:46:07.94836Z",
              "status": "completed"
            }
          }
        ],
        "deployments_count": {
          "total": 2,
          "completed": 2
        },
        "description": "",
        "fqdn": "",
        "gslb_enabled": false,
        "id": "3f2ef264-cf09-45c8-a925-f2c8fccf09f6",
        "modified": "2024-06-25T17:32:09.174243Z",
        "name": "juice-shop",
        "set_name": "Examples",
        "successful_instances": 2,
        "template_name": "http",
        "tenant_name": "tenantSrLEVevFQnWwXT90590F3USQ",
        "type": "FAST"
      }
    ]
  },
  "_links": {
    "self": {
      "href": "/api/v1/spaces/default/appsvcs/blueprints/"
    }
  },
  "count": 1,
  "total": 1
}

In the example above, I only have one Blueprint in my BIG-IP Next CM instance. If we look deeper into the output, we can start to see some detail around the configuration parameters as well as the deployment parameters for each BIG-IP. There is also an ID field in the JSON that we can use to reference a specific Blueprint. In the example above, we have:

"id": "3f2ef264-cf09-45c8-a925-f2c8fccf09f6"

This is important because as we start to modify, deploy, or delete our existing blueprints, we will need this ID to be able to make changes. We can also use this ID to view more detail on a specific Blueprint rather than an entire list of all Blueprints. To do so, we simply make a request to the endpoint below:

GET https://{{bigip_next_cm_mgmt_ip}}/api/v1/spaces/default/appsvcs/blueprints/{{Blueprint_id}}

In our example, this would be:

GET https://{{bigip_next_cm_mgmt_ip}}/api/v1/spaces/default/appsvcs/blueprints/3f2ef264-cf09-45c8-a925-f2c8fccf09f6

The output from this API call provides robust detail on the Blueprint.
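In a script, you'd typically pull that ID out of the list response and feed it straight into the by-ID call. A minimal sketch with curl and jq; the $TOKEN variable and the jq filter (based on the output structure above) are my assumptions.

# Grab the first Blueprint's ID from the list and fetch its full detail
BP_ID=$(curl -sk -H "Authorization: Bearer $TOKEN" \
  https://{{bigip_next_cm_mgmt_ip}}/api/v1/spaces/default/appsvcs/blueprints \
  | jq -r '._embedded.appsvcs[0].id')

curl -sk -H "Authorization: Bearer $TOKEN" \
  https://{{bigip_next_cm_mgmt_ip}}/api/v1/spaces/default/appsvcs/blueprints/$BP_ID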
It is probably too much detail to paste in an article like this, but there are some examples here if interested: https://clouddocs.f5.com/products/bigip-next/mgmt-api/latest/ApiReferences/bigip_public_api_ref/r_openapi-next.html#tag/Application-Services/operation/GetApplicationByID

Viewing a Blueprint like this can provide us with the latest configuration of our application service so that we can ensure we are using the most up-to-date files. It also can provide us with some templates/example configurations that we can use to create new application services moving forward.

Creating a new Blueprint

Now that we have a pretty good understanding of the JSON structure and we know how to view some examples of Blueprints that have already been created, we can simply use them as a reference and create our own Blueprint from scratch. The basic format for creating a new Blueprint is below:

{
  "name": <blueprint_name>,
  "set_name": <template_set>,
  "template_name": <template_name>,
  "parameters": {
    "application_description": <simple_description>,
    "application_name": <blueprint_name>,
    "enable_Global_Resiliency": false,
    "pools": [
      {
        <pool_configuration_parameters>
      }
    ],
    "virtuals": [
      {
        <virtual_server_configuration_parameters>
      }
    ]
  }
}

More detail on each of the variable values below:

<blueprint_name> - This is the name you choose for your blueprint. I generally recommend this name be the same in both the "name" field and the "application_name" field, which is why in the JSON above you'll see this in both.
<template_set> - This is going to be the template set containing your FAST template. If you are using the default templates provided to you, this value would be "Examples".
<template_name> - This is the specific FAST template you are going to use for the configuration. If you are using the default template provided to you, this value would be "http".
<simple_description> - This can be any short description you would like to use for your application service.
<pool_configuration_parameters> - This will be the list of parameters that you are going to define for your pool. You do not have to fill in every single value if the FAST template contains values for that field.
<virtual_server_configuration_parameters> - This will be the list of parameters that you are going to define for your virtual server. You do not have to fill in every single value if the FAST template contains values for that field.

Important Note: When creating your JSON, you will be defining a FAST template to use along with your application service (just like you would in the GUI). This means that you do not have to fill out every value under your pool and virtual server configuration. It will take in the values provided from your FAST template.

With our guide above, we can now make a new Blueprint for the "bobs-blueprint" example I referenced earlier:

{
  "name": "bobs-blueprint",
  "set_name": "Examples",
  "template_name": "http",
  "parameters": {
    "application_description": "This is a test of the blueprints api",
    "application_name": "bobs-blueprint",
    "pools": [
      {
        "loadBalancingMode": "round-robin",
        "monitorType": [
          "http"
        ],
        "poolName": "juice_pool",
        "servicePort": 3000
      }
    ],
    "virtuals": [
      {
        "pool": "juice_pool",
        "virtualName": "blueprint_vs",
        "virtualPort": 80
      }
    ]
  }
}

As you can see, this is a much more condensed version of the original JSON I had for the example shown in the beginning of this article.
Again, this is because we are referencing the FAST template "Examples/http" and taking in those values to configure the rest of the application service. With our newly created JSON, the last step to creating this Blueprint is to send this in a POST request to the following API endpoint:

POST https://{{bigip_next_cm_mgmt_ip}}/api/v1/spaces/default/appsvcs/blueprints

After sending our POST, you'll notice we are given the "id" of the Blueprint in the response. As mentioned above, we can use this ID to modify, deploy, etc.

{
  "_links": {
    "self": {
      "href": "/api/v1/spaces/default/appsvcs/blueprints/9c35a614-65ac-4d18-8082-589ea9bc78d9/deployments"
    }
  },
  "deployments": [
    {
      "deploymentID": "1a153f44-acaf-4487-81c7-61b8b5498627",
      "instanceID": "4ef739d1-9ef1-4eb7-a5bb-36c6d1334b16",
      "status": "pending",
      "taskID": "518534c8-9368-48dd-b399-94f55e72d5a7",
      "task_type": "CREATE"
    },
    {
      "deploymentID": "82ca8d6d-dbfd-43a8-a604-43f170d9f190",
      "instanceID": "0546acf5-3b88-422d-a948-28bbf0973212",
      "status": "pending",
      "taskID": "7ea24157-1412-4963-9560-56fb0ab78d8c",
      "task_type": "CREATE"
    }
  ],
  "id": "9c35a614-65ac-4d18-8082-589ea9bc78d9"
}

Now that the Blueprint has been created, we can go into our BIG-IP Next CM GUI and see our newly created application service. You'll notice that our newly created application service is in Draft mode. This is because we have not deployed the service yet. We'll discuss that in the next section.

Deploying a Blueprint

Once we have our Blueprint created, the final step is to configure the deployment. As discussed above, this is done through a separate API call. The format for a deployment is as follows:

{
  "deployments": [
    {
      "parameters": {
        "pools": [
          {
            "poolName": <pool_name>,
            "poolMembers": [
              {
                "name": <node1_name>,
                "address": <node1_ip_address>
              },
              {
                "name": <node2_name>,
                "address": <node2_ip_address>
              }
            ]
          }
        ],
        "virtuals": [
          {
            "virtualName": <virtual_server_name>,
            "virtualAddress": <virtual_server_ip_address>
          }
        ]
      },
      "target": {
        "address": <bigip_instance_ip_address>
      },
      "allow_overwrite": true
    }
  ]
}

Keep in mind that some of these values above are referencing names from your Blueprint configuration. These names need to be exactly the same. See below for more detail on each of these values:

<pool_name> - This references the pool from your Blueprint. This value must match what was used for "poolName" in the pool configuration of the Blueprint.
<node1_name> - This is any name you choose to describe your node in the pool.
<node1_ip_address> - This is the specific IP address for your node.
<node2_name> - If you are using more than one node, this would be the name you choose to represent your second node in the pool. This format can repeat for 3 nodes, 4 nodes, etc.
<node2_ip_address> - If you are using more than one node, this would be the IP address of your second node. This format can repeat for 3 nodes, 4 nodes, etc.
<virtual_server_name> - This references the virtual server from your Blueprint. This value must match what was used for "virtualName" in the virtuals configuration of the Blueprint.
<virtual_server_ip_address> - This would be the IP address of the Virtual Server being deployed on the BIG-IP.
<bigip_instance_ip_address> - This is the IP address of the BIG-IP instance we are deploying to.

Important Note: If you are deploying the same application to more than one BIG-IP instance, you can include multiple deployment blocks in your API call.
Using the format above, we can now create our deployment for "bobs-blueprint":

{
  "deployments": [
    {
      "parameters": {
        "pools": [
          {
            "poolName": "juice_pool",
            "poolMembers": [
              {
                "name": "node1",
                "address": "10.1.3.100"
              }
            ]
          }
        ],
        "virtuals": [
          {
            "virtualName": "blueprint_vs",
            "virtualAddress": "10.2.2.15"
          }
        ]
      },
      "target": {
        "address": "10.2.1.3"
      },
      "allow_overwrite": true
    },
    {
      "parameters": {
        "pools": [
          {
            "poolName": "juice_pool",
            "poolMembers": [
              {
                "name": "node1",
                "address": "10.2.3.100"
              }
            ]
          }
        ],
        "virtuals": [
          {
            "virtualName": "blueprint_vs",
            "virtualAddress": "10.2.2.20"
          }
        ]
      },
      "target": {
        "address": "10.2.1.4"
      },
      "allow_overwrite": true
    }
  ]
}

In this example deployment, I am deploying to two separate BIG-IP instances (10.2.1.3 and 10.2.1.4). This is where we can really start to see the value of the Blueprints API. With this structure, I can use the same pool and virtual server setup from our Blueprint, while still using different Virtual Server and Node IP addresses for the deployment at each instance. All of this is done with one API call.

The final step is to send this in a POST request to our deployments endpoint. The endpoint is very similar to viewing a blueprint, as it uses our same Blueprint ID. See the format below:

POST https://{{bigip_next_cm_mgmt_ip}}/api/v1/spaces/default/appsvcs/blueprints/{{Blueprint_id}}/deployments

Using the ID from the response we received after creating our example blueprint, our endpoint would be:

POST https://{{bigip_next_cm_mgmt_ip}}/api/v1/spaces/default/appsvcs/blueprints/9c35a614-65ac-4d18-8082-589ea9bc78d9/deployments

After sending our POST, we can go back to the BIG-IP Next CM GUI and see our application service is no longer considered a Draft. If we click into the application service, we'll see our two deployments are up and healthy.

Conclusion

Hopefully after reading this article, you can see the value of using the Blueprints API for your automation. I think as an alternative to other automation methods, this can provide benefits such as:

- The same centrally managed format/structure as GUI-created applications
- Since the BIG-IP Next CM GUI is already creating these JSON files under the hood, we can easily automate existing applications by using the Blueprints API for them moving forward
- Deployments are handled separately from your application configuration
- You can deploy your application service to multiple BIG-IP instances at once
- Combining the Blueprints API with FAST templates allows you to make application onboarding much more streamlined

If you liked this article and are looking for more information on our Blueprints API, please visit the API documentation here: https://clouddocs.f5.com/products/bigip-next/mgmt-api/latest/ApiReferences/bigip_public_api_ref/r_openapi-next.html#tag/Application-Services/operation/GetAllApplications
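To wrap up, here is what the two calls from this walkthrough might look like with curl. This is a sketch: the file names (blueprint.json and deployment.json holding the JSON bodies shown above) and the $TOKEN variable with a Central Manager access token are my own assumptions.

# Step 1: create the Blueprint on Central Manager
curl -sk -X POST -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
  -d @blueprint.json \
  https://{{bigip_next_cm_mgmt_ip}}/api/v1/spaces/default/appsvcs/blueprints

# Step 2: deploy it, using the "id" returned in step 1
curl -sk -X POST -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
  -d @deployment.json \
  https://{{bigip_next_cm_mgmt_ip}}/api/v1/spaces/default/appsvcs/blueprints/<Blueprint_id>/deployments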
BIG-IP Next Central Manager API with Postman

In my last article I dove into the Central Manager AS3 endpoints with the cURL command. As I was preparing for this one, I thought it would work better as a live stream than a traditional article. Here's the stream you can watch in the replay, and the resources I mentioned on the stream are posted below.

Show description: I've been working with the BIG-IP Next API from the API reference and with curl on the command line, and I gotta tell you, as much as I don't love Postman, it's super handy when learning an API. In this episode of DevCentral Connects, I'll download the collection from the Next documentation, get the environment variables set up, and walk through some of the tasks available in the collection to start working with the BIG-IP Next API.

Resources

- BIG-IP Next Articles on DevCentral
- BIG-IP Next Academy group on DevCentral
- Embracing AS3: Foundations
- BIG-IP Next automation: AS3 basics
- BIG-IP Next automation: Working with the AS3 endpoints
- 20.0 Postman collection
- 20.1 Postman collection
- 20.2 Postman collection
How to secure egress with F5 Service Proxy for Kubernetes

Outline:

- Securing egress challenges
- How F5 can help
- Technical bit on how it works

Getting traffic into your clusters to your workloads is just a small part of the cluster admin's tasks, and there are many options available. Controlling the packets going out is harder and often ignored. This makes your clusters more vulnerable to security risks because they don't follow the same strict rules as your traditional networks. This article will dive deeper into how SPK can control traffic exiting your clusters, even when your application workload uses multus to attach additional external interfaces.

Secure Egress Challenges

By default, a pod deployed using the Calico CNI will follow the default route to get out of the cluster. Traffic will look like it's coming from the worker host's external IP address on the management interface. While Kubernetes NetworkPolicies can be used for egress, it becomes painful to manage the lifecycle of hundreds or thousands of policies across all namespaces as the cluster grows. If you deploy a pod with multus interfaces, as commonly seen with telco applications, you add another way for that pod to bypass any NetworkPolicies applied within the cluster. What if there was a way to manage egress dynamically (as pods are spun up and down) and easily, so that the cluster admin could centrally configure and control traffic flowing out of the cluster?

How F5 can help

Service Proxy for Kubernetes (SPK) is a cloud-native application traffic management solution, designed for communication service provider (CoSP) 5G networks and other application workloads. With SPK and its Calico egress gateway feature, managing a pod's default Calico network interface as well as any multus interfaces becomes easy and consistent with the CSRC daemonset. Kernel routes are automatically configured so that the pod's traffic will always be routed via the SPK pod, where you can apply consistent, namespace-aware network policies, source NAT translation, and other controls. If the "watched" application workload is deleted, the corresponding host rules also get removed.

Technical Overview

This section will provide an overview of how to configure the above scenario.

Host Prerequisites

On the host, two shims of type macvlan bridge are created on physical interfaces, one for the application pod's Calico traffic and one for the macvlan traffic, which will forward packets on to SPK. These interfaces allow connectivity to the SPK's "internal" and "external2" interfaces, respectively.

ip link add spk-shim link ens224 type macvlan mode bridge
ip addr add 10.1.30.244/24 dev shim1
ip link set shim1 up

ip link add spk-shim2 link ens256 type macvlan mode bridge
ip addr add 10.1.10.244/24 dev shim2
ip link set shim2 up

Application Prerequisites and Configuration

In the SPK controller values.yaml file, configure your application workload namespaces in the watchNamespace block.

watchNamespace:
  - "spk-apps"
  - "spk-apps-2"

Since we want SPK to do the source NAT for pod egress traffic, we create an IPPool with natOutgoing set to false. This IPPool will be used by the applications.

apiVersion: crd.projectcalico.org/v1
kind: IPPool
metadata:
  name: app-ip-pool
spec:
  cidr: 10.124.0.0/16
  ipipMode: Always
  natOutgoing: false

Ensure that the application namespaces are annotated like below to use the IPPool.

kubectl annotate namespace spk-apps "cni.projectcalico.org/ipv4pools"=[\"app-ip-pool\"]
kubectl annotate namespace spk-apps-2 "cni.projectcalico.org/ipv4pools"=[\"app-ip-pool\"]

Deploy your application.
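(Optional) Before deploying, a couple of quick checks confirm the namespace annotations and the IPPool are in place. These are standard kubectl and calicoctl commands; calicoctl being available on the cluster is an assumption.

# Confirm the IPPool annotation landed on the application namespaces
kubectl get namespace spk-apps -o jsonpath='{.metadata.annotations}'; echo
kubectl get namespace spk-apps-2 -o jsonpath='{.metadata.annotations}'; echo

# Confirm the IPPool exists and has natOutgoing disabled
calicoctl get ippool app-ip-pool -o yaml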
See below for an example deployment manifest for the application. Note that I'm attaching a secondary macvlan interface, which is in addition to the default Calico interface. It will get an IP address automatically, as configured in the corresponding NetworkAttachmentDefinition. Note the specific labels used by SPK, which allow you to enable traffic routing to SPK on a per-application basis. Additionally, the enableSecureSPK=true label will instruct SPK to create additional listeners that will pick up traffic coming from the pod's secondary macvlan interface. (These listeners are shown later.)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  annotations:
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      annotations:
        k8s.v1.cni.cncf.io/networks: '[ { "name": "macvlan-conf-ens256-myapp1" } ]'
      labels:
        app: nginx
        enableSecureSPK: "true"
        enablePseudoCNI: "true"
        secureSPKPort: "8050"
        secureSPKCNFPodIfName: "net1"
        secondaryCNINodeIfName: "spk-shim2"
        primaryCNINodeIfName: "spk-shim"
        secureSPKNetAttachDefName: "macvlan-conf-ens256"
        secureSPKEgressVlanName: "external"

SPK Configuration

Deploy the custom resource that will configure a listener that does two things:

- listen for traffic coming from the internal VLAN, or the Calico interface of targeted application pods
- SNAT the traffic so that the source IP is an IP address of SPK

apiVersion: "k8s.f5net.com/v1"
kind: F5SPKEgress
metadata:
  name: egress-crd
  namespace: ns-f5-spk
spec:
  #leave commented out for snat automap
  #egressSnatpool: "snatpool-1"
  dualStackEnabled: false
  maxTmmReplicas: 1
  vlans:
    vlanList: [internal]
    disableListedVlans: false

Next, we deploy the CSRC daemonset that dynamically creates the kernel rules and routes for us. Note that I am setting the daemonsetMode to "pseudoCNI", which means I want to route both primary (Calico) and secondary interface traffic to SPK. (Indentation of the values file below is reconstructed; confirm nesting against your chart's values reference.)

values-csrc.yaml

image:
  repository: gitlab.tky.lab:5050/registry/spk/200

# daemonset mode: regular, secureSPK, or pseudoCNI
#daemonsetMode: "regular"
daemonsetMode: "pseudoCNI"
ipFamily: "ipv4"

imageCredentials:
  name: f5-common-pull-creds

config:
  iptableid: 200
  interfacename: "spk-shim"
  #tmmpodinterfacename: "internal"

json:
  ipPoolCidrInfo:
    cidrList:
      - name: cluster-cidr0
        value: "172.21.107.192/26"
      - name: cluster-cidr1
        value: "172.21.66.0/26"
      - name: cluster-cidr2
        value: "10.124.18.192/26"
      - name: node-cidr0
        value: "10.1.11.0/24"
      - name: node-cidr1
        value: "10.1.10.0/24"
    ipPoolList:
      - name: default-ipv4-ippool
        value: "172.21.64.0/18"
      - name: spk-app1-pool
        value: "10.124.0.0/16"

Testing

You can then log onto the worker node that is hosting the applications and confirm the routes and rules are created. Essentially, the rules are making the Calico interfaces use a custom route table that ensures that the default route is via the SPK.

# ip rule
0:      from all lookup local
32254:  from all to 172.21.107.192/26 lookup main
32254:  from all to 172.21.66.0/26 lookup main
32254:  from all to 10.124.18.192/26 lookup main
32254:  from all to 172.28.15.0/24 lookup main
32254:  from all to 10.1.10.0/24 lookup main
32257:  from 10.124.18.207 lookup ns-f5-spkshim1ipv4257 <--match on app pod1 calico IP!!!
32257:  from 10.124.18.211 lookup ns-f5-spkshim1ipv4257 <--match on app pod2 calico IP!!!
32258:  from 10.1.10.171 lookup ns-f5-spkshim2ipv4258 <--match on app pod1 macvlan IP!!!
32258:  from 10.1.10.170 lookup ns-f5-spkshim2ipv4258 <--match on app pod2 macvlan IP!!!
32766:  from all lookup main
32767:  from all lookup default

# ip route show table ns-f5-spkshim1ipv4257
default via 10.1.30.242 dev shim1
10.1.30.242 via 10.1.30.242 dev shim1

# ip route show table ns-f5-spkshim2ipv4258
default via 10.1.10.160 dev shim2
10.1.10.160 via 10.1.10.160 dev shim2

If I then execute a curl command towards a server that exists in a network segment beyond SPK, the application pod will hit the CSRC-configured ip rule and then be forwarded to its new default gateway, which is SPK. Since SPK has source NAT enabled, the "Client IP" from the server's perspective is the self-IP of SPK. This means you can apply firewall policies to application workloads in a deterministic way, as well as have visibility into what kind of traffic is coming out of your clusters.

k exec -it nginx-7d7699f86c-hsx48 -n my-app1 -- curl 10.1.70.30
================================================
[ASCII-art demo app banner]
================================================
Node Name: F5 Docker vLab
Short Name: server.tky.f5se.com

Server IP: 10.1.70.30
Server Port: 80

Client IP: 10.1.30.242
Client Port: 59248

Client Protocol: HTTP
Request Method: GET
Request URI: /

host_header: 10.1.70.30
user-agent: curl/7.88.1

A simple tcpdump command run in the debug container of SPK confirms that the pod's Calico interface IP (10.124.18.192) is the source IP of the incoming traffic on SPK, and after source NAT is applied using the self-IP of SPK (10.1.30.242), the packet is sent out towards the server.
/tcpdump -nni 0.0 tcp port 80
----snip----
12:34:51.964200 IP 10.124.18.192.48194 > 10.1.70.30.80: Flags [P.], seq 1:75, ack 1, win 225, options [nop,nop,TS val 4077628853 ecr 777672368], length 74: HTTP: GET / HTTP/1.1 in slot1/tmm0 lis=egress-ipv4 port=1.1 trunk=
----snip----
12:34:51.964233 IP 10.1.30.242.48194 > 10.1.70.30.80: Flags [P.], seq 1:75, ack 1, win 225, options [nop,nop,TS val 4077628853 ecr 777672368], length 74: HTTP: GET / HTTP/1.1 out slot1/tmm0 lis=egress-ipv4 port=1.1 trunk=

Let's take a look at egress application traffic that is using the secondary macvlan interface. In this case, I have not configured source NAT, so SPK will forward the traffic out, retaining the original pod IP.

k exec -it nginx-7d7699f86c-g4hpv -n my-app1 -- curl 10.1.80.30
================================================
[ASCII-art demo app banner]
================================================
Node Name: F5 Docker vLab
Short Name: ue-client3

Server IP: 10.1.80.30
Server Port: 80

Client IP: 10.1.10.170
Client Port: 56436

Client Protocol: HTTP
Request Method: GET
Request URI: /

host_header: 10.1.80.30
user-agent: curl/7.88.1

Another tcpdump command run in the debug container of SPK shows that it receives the above GET request and sends it out without source NAT in this case.

/tcpdump -nni 0.0 tcp port 80
----snip----
13:54:40.389281 IP 10.1.10.170.56436 > 10.1.80.30.80: Flags [P.], seq 1:75, ack 1, win 229, options [nop,nop,TS val 4087715696 ecr 61040149], length 74: HTTP: GET / HTTP/1.1 in slot1/tmm0 lis=secure-egress-ipv4-virtual-server port=1.2 trunk=
----snip----
13:54:40.389305 IP 10.1.10.170.56436 > 10.1.80.30.80: Flags [P.], seq 1:75, ack 1, win 229, options [nop,nop,TS val 4087715696 ecr 61040149], length 74: HTTP: GET / HTTP/1.1 out slot1/tmm0 lis=secure-egress-ipv4-virtual-server port=1.2 trunk=

You can use the familiar tmctl command inside the debug container of SPK to confirm the statistics for both listeners that process the pod's primary (egress-ipv4) and secondary (secure-egress-ipv4-virtual-server) interface egress traffic.

/tmctl -f /var/tmstat/blade/tmm0 virtual_server_stat -s name,clientside.bytes_in,clientside.bytes_out,no_staged_acl_match_accept -w 200
name                               clientside.bytes_in clientside.bytes_out no_staged_acl_match_accept
---------------------------------- ------------------- -------------------- --------------------------
secure-egress-ipv4-virtual-server                  394                  996                          1
egress-ipv4                                        394                 1011                          1

Now that you have egress traffic routed to the SPK data plane pods, you can use the F5-published custom resource definitions (CRDs) below to apply granular access control lists (ACLs) to meet your security requirements. The firewall configuration is defined as code (YAML manifests), so it natively integrates with K8s and is portable across clusters.

- F5BigContextGlobal: CRD to define the default global firewall behavior and reference the firewall policy.
- F5BigFwPolicy: CRD to define your firewall rules.

In summary, the above diagrams and configuration snippets show how SPK can capture all egress traffic in a dynamic way so that you don't have to sacrifice security and control in your ever-changing Kubernetes clusters.
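A few quick checks I find useful when validating this setup. The pod labels, route table names, and namespaces below are taken from the examples in this article; adjust them to your environment.

# Confirm which application pods are labeled for SPK egress handling
kubectl get pods -A -l enableSecureSPK=true -o wide

# On the worker node hosting those pods, confirm CSRC created the per-pod rules
ip rule | grep spkshim
ip route show table ns-f5-spkshim1ipv4257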