iControl REST Cookbook - Virtual Server (ltm virtual)
This cookbook lists selected ready-to-use iControl REST curl commands for virtual-server related resources. Each recipe consists of the curl command, its tmsh equivalent, and sample output. In this cookbook, the following curl options are used.
-s: Suppresses the progress meter. Handy when you want to pipe the output.
-k: Allows "insecure" SSL connections.
-u: Specifies the user ID and password. To start with, you should use the "admin" account that you normally use to access the Configuration Utility. When you specify the password at the same time, concatenate with ":", e.g., admin:admin.
-X <method>: Specifies the HTTP method. When omitted, the default is GET. In the REST framework, POST means create (tmsh create), PUT means overwriting the existing resource with the data sent (tmsh modify), and PATCH is for merging the sent data into the existing resource (ditto).
-H <Header>: Specifies the request header. When you send (POST, PATCH, PUT) data, you need to tell the server that the data is in JSON format, i.e., -H "Content-Type: application/json".
-d 'data': The JSON data to send. Note that you need to quote the entire JSON blob, and each "name":"value" pair must be quoted. When you have nested quotes, make sure you escape (\) them.
Get information of the virtual <vs>
tmsh list ltm <vs>
curl -sku admin:admin https://<host>/mgmt/tm/ltm/virtual/<vs>
Sample Output
{ kind: 'tm:ltm:virtual:virtualstate', name: 'vs', fullPath: 'vs', generation: 1109, selfLink: 'https://localhost/mgmt/tm/ltm/virtual/vs?ver=12.1.0', addressStatus: 'yes', autoLasthop: 'default', cmpEnabled: 'yes', connectionLimit: 0, description: 'TestData', destination: '/Common/192.168.184.226:80', enabled: true, gtmScore: 0, ipProtocol: 'tcp', mask: '255.255.255.255', mirror: 'disabled', mobileAppTunnel: 'disabled', nat64: 'disabled', pool: '/Common/vs-pool', poolReference: { link: 'https://localhost/mgmt/tm/ltm/pool/~Common~vs-pool?ver=12.1.0' }, rateLimit: 'disabled', rateLimitDstMask: 0, rateLimitMode: 'object', rateLimitSrcMask: 0, serviceDownImmediateAction: 'none', source: '0.0.0.0/0', sourceAddressTranslation: { type: 'automap' }, sourcePort: 'preserve', synCookieStatus: 'not-activated', translateAddress: 'enabled', translatePort: 'enabled', vlansDisabled: true, vsIndex: 4, rules: [ '/Common/irule' ], rulesReference: [ { link: 'https://localhost/mgmt/tm/ltm/rule/~Common~iRuleTest?ver=12.1.0' } ], policiesReference: { link: 'https://localhost/mgmt/tm/ltm/virtual/~Common~vs/policies?ver=12.1.0', isSubcollection: true }, profilesReference: { link: 'https://localhost/mgmt/tm/ltm/virtual/~Common~vs/profiles?ver=12.1.0', isSubcollection: true } }
Get only a specific field of the virtual <vs>
The naming convention for the parameters is slightly different from the ones on tmsh, so look for the familiar names in the GET response above. The example below queries the Default Pool (pool).
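As an aside, because -s suppresses the progress meter, the full GET response can also be piped into a client-side JSON processor instead of asking the BIG-IP to filter it server-side. The following is a hedged sketch, assuming the jq utility is installed on the client machine; it extracts the default pool from the response shown above:
# Fetch the whole virtual server object, then pull out a single field on the client
curl -sku admin:admin https://<host>/mgmt/tm/ltm/virtual/<vs> | jq -r '.pool'
For the sample configuration above this prints /Common/vs-pool. The server-side equivalent, using the options query parameter, follows.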
tmsh list ltm <vs> pool curl -sku admin:admin https://<host>/mgmt/tm/ltm/virtual/<vs>?options=pool Sample Output { kind: 'tm:ltm:virtual:virtualstate', name: 'vs', fullPath: 'vs', generation: 1, selfLink: 'https://localhost/mgmt/tm/ltm/virtual/vs?options=pool&ver=12.1.1', pool: '/Common/vs-pool', poolReference: { link: 'https://localhost/mgmt/tm/ltm/pool/~Common~vs-pool?ver=12.1.1' } } Get all the information of the virtual <vs> Unlike the tmsh equivalent, iControl REST GET does not return the configuration information of the attached policies and profiles. To see them, use expandSubcollections tmsh list ltm <vs> curl -sku admin:admin https://<host>/mgmt/tm/ltm/virtual/<vs>?expandSubcollections=true Sample Output { "addressStatus": "yes", "autoLasthop": "default", "cmpEnabled": "yes", "connectionLimit": 0, "destination": "/Common/192.168.184.240:80", "enabled": true, "fullPath": "vs", "generation": 291, "gtmScore": 0, "ipProtocol": "tcp", "kind": "tm:ltm:virtual:virtualstate", "mask": "255.255.255.255", "mirror": "disabled", "mobileAppTunnel": "disabled", "name": "vs", "nat64": "disabled", "policiesReference": { "isSubcollection": true, "link": "https://localhost/mgmt/tm/ltm/virtual/~Common~vs/policies?ver=13.1.0" }, "pool": "/Common/CentOS-all80", "poolReference": { "link": "https://localhost/mgmt/tm/ltm/pool/~Common~CentOS-all80?ver=13.1.0" }, "profilesReference": { "isSubcollection": true, "items": [ { "context": "all", "fullPath": "/Common/http", "generation": 291, "kind": "tm:ltm:virtual:profiles:profilesstate", "name": "http", "nameReference": { "link": "https://localhost/mgmt/tm/ltm/profile/http/~Common~http?ver=13.1.0" }, "partition": "Common", "selfLink": "https://localhost/mgmt/tm/ltm/virtual/~Common~vs/profiles/~Common~http?ver=13.1.0" }, { "context": "all", "fullPath": "/Common/tcp", "generation": 287, "kind": "tm:ltm:virtual:profiles:profilesstate", "name": "tcp", "nameReference": { "link": "https://localhost/mgmt/tm/ltm/profile/tcp/~Common~tcp?ver=13.1.0" }, "partition": "Common", "selfLink": "https://localhost/mgmt/tm/ltm/virtual/~Common~vs/profiles/~Common~tcp?ver=13.1.0" } ], "link": "https://localhost/mgmt/tm/ltm/virtual/~Common~vs/profiles?ver=13.1.0" }, "rateLimit": "disabled", "rateLimitDstMask": 0, "rateLimitMode": "object", "rateLimitSrcMask": 0, "selfLink": "https://localhost/mgmt/tm/ltm/virtual/vs?expandSubcollections=true&ver=13.1.0", "serviceDownImmediateAction": "none", "source": "0.0.0.0/0", "sourceAddressTranslation": { "type": "automap" }, "sourcePort": "preserve", "synCookieStatus": "not-activated", "translateAddress": "enabled", "translatePort": "enabled", "vlansDisabled": true, "vsIndex": 2 } Get stats of the virtual <vs> tmsh show ltm <vs> curl -sku admin:admin https://<host>/mgmt/tm/ltm/virtual/<vs>/stats Sample Output { kind: 'tm:ltm:virtual:virtualstats', generation: 1109, selfLink: 'https://localhost/mgmt/tm/ltm/virtual/vs/stats?ver=12.1.0', entries: { 'https://localhost/mgmt/tm/ltm/virtual/vs/~Common~vs/stats': { nestedStats: { kind: 'tm:ltm:virtual:virtualstats', selfLink: 'https://localhost/mgmt/tm/ltm/virtual/vs/~Common~vs/stats?ver=12.1.0', entries: { 'clientside.bitsIn': { value: 12880 }, 'clientside.bitsOut': { value: 34592 }, 'clientside.curConns': { value: 0 }, 'clientside.evictedConns': { value: 0 }, 'clientside.maxConns': { value: 2 }, 'clientside.pktsIn': { value: 26 }, 'clientside.pktsOut': { value: 26 }, 'clientside.slowKilled': { value: 0 }, 'clientside.totConns': { value: 6 }, cmpEnableMode: { description: 'all-cpus' }, cmpEnabled: { 
description: 'enabled' }, csMaxConnDur: { value: 37 }, csMeanConnDur: { value: 29 }, csMinConnDur: { value: 17 }, destination: { description: '192.168.184.226:80' }, 'ephemeral.bitsIn': { value: 0 }, 'ephemeral.bitsOut': { value: 0 }, 'ephemeral.curConns': { value: 0 }, 'ephemeral.evictedConns': { value: 0 }, 'ephemeral.maxConns': { value: 0 }, 'ephemeral.pktsIn': { value: 0 }, 'ephemeral.pktsOut': { value: 0 }, 'ephemeral.slowKilled': { value: 0 }, 'ephemeral.totConns': { value: 0 }, fiveMinAvgUsageRatio: { value: 0 }, fiveSecAvgUsageRatio: { value: 0 }, tmName: { description: '/Common/vs' }, oneMinAvgUsageRatio: { value: 0 }, 'status.availabilityState': { description: 'available' }, 'status.enabledState': { description: 'enabled' }, 'status.statusReason': { description: 'The virtual server is available' }, syncookieStatus: { description: 'not-activated' }, 'syncookie.accepts': { value: 0 }, 'syncookie.hwAccepts': { value: 0 }, 'syncookie.hwSyncookies': { value: 0 }, 'syncookie.hwsyncookieInstance': { value: 0 }, 'syncookie.rejects': { value: 0 }, 'syncookie.swsyncookieInstance': { value: 0 }, 'syncookie.syncacheCurr': { value: 0 }, 'syncookie.syncacheOver': { value: 0 }, 'syncookie.syncookies': { value: 0 }, totRequests: { value: 4 } } } } } }
Change one of the configuration options of the virtual <vs>
The command below changes the Description field of the virtual ("description" in tmsh and iControl REST).
tmsh modify ltm virtual <vs> description "Hello World!"
curl -sku admin:admin https://<host>/mgmt/tm/ltm/virtual/<vs> \ -X PATCH -H "Content-Type: application/json" \ -d '{"description": "Hello World!"}'
Sample Output
{ kind: 'tm:ltm:virtual:virtualstate', name: 'vs', ... description: 'Hello World!', <==== Changed. ... }
Disable the virtual <vs>
The command syntax is the same as above: to change the state of a virtual from "enabled" to "disabled", send "disabled":true. For enabling the virtual, use "enabled":true. Note that the Boolean type true/false does not require quotation marks.
tmsh modify ltm virtual <vs> disabled
curl -sku admin:admin https://<host>/mgmt/tm/ltm/virtual/<vs> \ -X PATCH -H "Content-Type: application/json" \ -d '{"disabled": true}'
Sample Output
{ kind: 'tm:ltm:virtual:virtualstate', name: 'vs', fullPath: 'vs', ... disabled: true, <== Changed ... }
Add another iRule to <vs>
When the virtual has iRules already attached, you need to send the existing ones too along with the additional one. For example, to add /Common/testRule2 to a virtual that already has /Common/testRule1, specify both in an array (square brackets). Note that the /Common/testRule2 iRule object should already be created.
tmsh modify ltm virtual <vs> rules {testRule1 testRule2}
curl -sku admin:admin https://<host>/mgmt/tm/ltm/virtual/<vs> \ -X PATCH -H "Content-Type: application/json" \ -d '{"rules": ["/Common/testRule1", "/Common/testRule2"] }'
Sample Output
{ kind: 'tm:ltm:virtual:virtualstate', name: 'vs', fullPath: 'vs', ... rules: [ '/Common/testRule1', '/Common/testRule2' ], <== Changed rulesReference: [ { link: 'https://localhost/mgmt/tm/ltm/rule/~Common~testRule1?ver=12.1.1' }, { link: 'https://localhost/mgmt/tm/ltm/rule/~Common~testRule2?ver=12.1.1' } ], ... }
Create a new virtual <vs>
You can create a skeleton virtual by specifying only Destination Address and Mask. The remaining parameters such as profiles are set to default. You can later modify the parameters by PATCH-ing.
tmsh create ltm virtual <vs> destination <ip:port> mask <ip> curl -sku admin:admin -X POST -H "Content-Type: application/json" \ -d '{"name": "vs", "destination":"192.168.184.230:80", "mask":"255.255.255.255"}' \ https://<host>/mgmt/tm/ltm/virtual Sample Output { kind: 'tm:ltm:virtual:virtualstate', name: 'vs', partition: 'Common', fullPath: '/Common/vs', ... destination: '/Common/192.168.184.230:80', <== Created ... mask: '255.255.255.255', <== Created ... } Create a new virtual <vs> with a lot of parameters You can specify all the essential parameters upon creation. This example creates a new virtual with pool, default persistence profile, profiles, iRule, and source address translation. The call fails if any of the parameters conflicts. For example, you cannot specify "Cookie Persistence" without specifying appropriate profiles. If you do not specify any profile, it falls back to the default fastL4 , which is not compatible with Cookie Persistence. tmsh create ltm virtual <vs> destination <ip:port> mask <ip> pool <pool> persist replace-all-with { cookie } profiles add { tcp http clientssl } rules { <rule> } source-address-translation { type automap } curl -sku admin:admin https://<host>/mgmt/tm/ltm/virtual -H "Content-Type: application/json" -X POST -d '{"name": "vs", \ "destination": "10.10.10.10:10", \ "mask": "255.255.255.255", \ "pool": "CentOS-all80", \ "persist": [ {"name": "cookie"} ], \ "profilesReference": {"items": [ {"context": "all", "name": "http"}, {"context": "all", "name": "tcp"}, {"context": "clientside", "name": "clientssl"}] }, \ "rules": [ "ShowVersion" ], \ "sourceAddressTranslation": {"type": "automap"} }' Sample Output { "addressStatus": "yes", "autoLasthop": "default", "cmpEnabled": "yes", "connectionLimit": 0, "destination": "/Common/10.10.10.10:10", "enabled": true, "fullPath": "/Common/test", "generation": 592, "gtmScore": 0, "ipProtocol": "tcp", "kind": "tm:ltm:virtual:virtualstate", "mask": "255.255.255.255", "mirror": "disabled", "mobileAppTunnel": "disabled", "name": "vs", "nat64": "disabled", "partition": "Common", "persist": [ { "name": "cookie", "nameReference": { "link": "https://localhost/mgmt/tm/ltm/persistence/cookie/~Common~cookie?ver=13.1.0" }, "partition": "Common", "tmDefault": "yes" } ], "policiesReference": { "isSubcollection": true, "link": "https://localhost/mgmt/tm/ltm/virtual/~Common~test/policies?ver=13.1.0" }, "pool": "/Common/CentOS-all80", "poolReference": { "link": "https://localhost/mgmt/tm/ltm/pool/~Common~CentOS-all80?ver=13.1.0" }, "profilesReference": { "isSubcollection": true, "link": "https://localhost/mgmt/tm/ltm/virtual/~Common~test/profiles?ver=13.1.0" }, "rateLimit": "disabled", "rateLimitDstMask": 0, "rateLimitMode": "object", "rateLimitSrcMask": 0, "rules": [ "/Common/ShowVersion" ], "rulesReference": [ { "link": "https://localhost/mgmt/tm/ltm/rule/~Common~ShowVersion?ver=13.1.0" } ], "selfLink": "https://localhost/mgmt/tm/ltm/virtual/~Common~test?ver=13.1.0", "serviceDownImmediateAction": "none", "source": "0.0.0.0/0", "sourceAddressTranslation": { "type": "automap" }, "sourcePort": "preserve", "synCookieStatus": "not-activated", "translateAddress": "enabled", "translatePort": "enabled", "vlansDisabled": true, "vsIndex": 52 } Delete a virtual <vs> tmsh delete ltm virtual <vs> curl -sku admin:admin https://192.168.226.55/mgmt/tm/ltm/virtual/<vs> -X DELETE Sample Output No output (just 200 OK and no response body) References curl.1 the man page curl Releases and Downloads ... 
including the port for Windows
Jason Rahm's "Demystifying iControl REST" series (DevCentral) -- This is Part I of 7 at the time of this article.
iControl REST API reference (DevCentral)
iControl® REST API User Guide (DevCentral) -- Link is for 12.1. Search for the older versions.

Introducing F5 BIG-IP Next CNF Solutions for Red Hat OpenShift
5G and Red Hat OpenShift
5G standards have embraced Cloud-Native Network Functions (CNFs) for implementing network services in software as containers. This is a big change from previous Virtual Network Functions (VNFs) or Physical Network Functions (PNFs). The main characteristics of Cloud-Native Functions are:
Implementation as containerized microservices
Small performance footprint, with the ability to scale horizontally
Independence of guest operating system, since CNFs operate as containers
Lifecycle manageable by Kubernetes
Overall, these provide a huge improvement in terms of flexibility, faster service delivery, resiliency, and crucially the use of Kubernetes as a unified orchestration layer. The latter is a drastic change from previous standards where each vendor had its own orchestration. This unification around Kubernetes greatly simplifies network functions for operators, reducing the cost of deploying and maintaining networks. Additionally, embracing the container form factor allows Network Functions (NFs) to be deployed in new use cases like the far edge. This is thanks to the smaller footprint, while at the same time they can also be deployed at large scale in a central data center because of the horizontal scalability. In this article we focus on Red Hat OpenShift, which is the market leading and industry reference implementation of Kubernetes for IT and Telco workloads.
Introduction to F5 BIG-IP Next CNF Solutions
F5 BIG-IP Next CNF Solutions is a suite of Kubernetes native 5G Network Functions, implemented as microservices. It shares the same Cloud Native Engine (CNE) as F5 BIG-IP Next SPK introduced last year. The functionalities implemented by the CNF Solutions deal mainly with user plane data. User plane data has the particularity that the final destination of the traffic is not the Kubernetes cluster but rather an external end-point, typically the Internet. In other words, the traffic gets in the Kubernetes cluster and it is forwarded out of the cluster again. This is done using dedicated interfaces that are not used for the regular ingress and egress paths of the regular traffic of a Kubernetes cluster. In this case, the main purpose of using Kubernetes is to make use of its orchestration, flexibility, and scalability. The main functionalities implemented at the initial GA release of the CNF Solutions are:
F5 Next Edge Firewall CNF, an IPv4/IPv6 firewall with the main focus on protecting the 5G core networks from external threats, including DDoS flood protection and IPS DNS protocol inspection.
F5 Next CGNAT CNF, which offers large scale NAT with the following features: NAPT, Port Block Allocation, Static NAT, Address Pooling Paired, and Endpoint Independent mapping modes. Inbound NAT and Hairpinning. Egress path filtering and address exclusions. ALG support: FTP/FTPS, TFTP, RTSP and PPTP.
F5 Next DNS CNF, which offers a transparent DNS resolver and caching services. Other remarkable features are: Zero rating, and DNS64, which allows IPv6-only clients to connect to IPv4-only services via synthetic IPv6 addresses.
F5 Next Policy Enforcer CNF, which provides traffic classification, steering and shaping, and TCP and video optimization. This product was launched as Early Access in February 2023 with basic functionalities. Static TCP optimization is now GA in the initial release.
Although the CGNAT (Carrier Grade NAT) and the Policy Enforcer functionalities are specific to User Plane use cases, the Edge Firewall and DNS functionalities have additional uses in other places of the network.
F5 and OpenShift
BIG-IP Next CNF Solutions fully supports Red Hat OpenShift Container Platform, which allows deployment in edge or core locations with unified management across the multiple deployments. OpenShift operators greatly facilitate the setup and tuning of telco grade applications. These are:
Node Tuning Operator, used to set up Hugepages.
CPU Manager and Topology Manager with NUMA awareness, which allow scheduling the data plane PODs within a NUMA domain that is aligned with the SR-IOV NICs they are attached to.
In an OpenShift platform all these are set up transparently to the applications, and BIG-IP Next CNF Solutions only require being configured with an appropriate runtimeClass.
F5 BIG-IP Next CNF Solutions architecture
F5 BIG-IP Next CNF Solutions makes use of the widely trusted F5 BIG-IP Traffic Management Microkernel (TMM) data plane. This allows for a high performance, dependable product from the start. The CNF functionalities come from a microservices re-architecture of the broadly used F5 BIG-IP VNFs. The diagram below illustrates how a microservices architecture is used. The data plane POD scales vertically from 1 to 16 cores and scales horizontally from 1 to 32 PODs, enabling it to handle millions of subscribers. NUMA nodes are supported. The next diagram focuses on the data plane handling, which is the most relevant aspect for this CNF suite: Typically, each data plane POD has two IP addresses, one for each side of the N6 reference point. These could be named radio and Internet sides as shown in the diagram above. The left-side L3 hop must distribute the traffic amongst the left-side addresses of the CNF data plane. This left-side L3 hop can be a router with BGP ECMP (Equal Cost Multi Path), an SDN or any other mechanism which is able to:
Distribute the subscribers across the data plane PODs, shown in [1] of the figure above.
Keep these subscribers in the same PODs when there is a change in the number of active data plane PODs (scale-in, scale-out, maintenance, etc.) as shown in [2] in the figure above. This minimizes service disruption.
On the right side of the CNFs, the path towards the Internet, it is typical to implement NAT functionality to transform the telco's private addresses to public addresses. This is done with the BIG-IP Next CG-NAT CNF. This NAT makes the return traffic symmetrical by reaching the same POD which processed the outbound traffic. This is thanks to each POD owning part of this NAT space, as shown in [3] of the above figure. Each POD's NAT address space can be advertised via BGP. When not using NAT on the right side of the CNFs, it is required that the network is able to send the return traffic back to the same POD which is processing the same connection. The traffic must be kept symmetrical at all times; this is typically done with an SDN.
Using F5 BIG-IP Next CNF Solutions
As expected in a fully integrated Kubernetes solution, both the installation and configuration are done using the Kubernetes APIs. The installation is performed using helm charts, and the configuration using Custom Resource Definitions (CRDs). Unlike ConfigMaps, using CRDs allows for schema validation of the configurations before they are applied. Details of the CRDs can be found in this clouddocs site. Next is an overview of the most relevant CRDs.
General network configuration
Deploying in Kubernetes automatically configures and assigns IP addresses to the CNF PODs. The data plane interfaces will require specific configuration.
The required steps are:
Create Kubernetes NetworkNodePolicies and NetworkAttachment definitions, which allow exposing SR-IOV VFs to the CNF data plane PODs (TMM). To make use of these SR-IOV VFs, they are referenced in the BIG-IP controller's Helm chart values file. This is described in the Networking Overview page.
Define the L2 and L3 configuration of the exposed SR-IOV interfaces using the F5BigNetVlan CRD.
If static routes need to be configured, these can be added using the F5BigNetStaticroute CRD.
If BGP configuration needs to be added, this is configured in the BIG-IP controller's Helm chart values file. This is described in the BGP Overview page. It is expected this will be configured using a CRD in the future.
Traffic management listener configuration
As with classic BIG-IP, once the CNFs are running and plumbed in the network, no traffic is processed by default. The traffic management functionalities implemented by BIG-IP Next CNF Solutions are the same as those of the analogous modules in the classic BIG-IP, and the CRDs in BIG-IP Next to configure these functionalities are conceptually similar too. Analogous to Virtual Servers in classic BIG-IP, BIG-IP Next CNF Solutions have a set of CRDs that create listeners of traffic where traffic management policies are applied. This is mainly the F5BigContextSecure CRD, which allows specifying traffic selectors indicating VLANs, source and destination prefixes, and ports where we want the policies to be applied. There are specific CRDs for listeners of Application Level Gateways (ALGs) and protocol specific solutions. These required several steps in classic BIG-IP: first creating the Virtual Server, then creating the profile, and finally applying it to the Virtual Server. In BIG-IP Next this is done in a single CRD. At the time of this writing, these CRDs are:
F5BigZeroratingPolicy - Part of the Zero-Rating DNS solution; enabling subscribers to bypass rate limits.
F5BigDnsApp - High-performance DNS resolution, caching, and DNS64 translations.
F5BigAlgFtp - File Transfer Protocol (FTP) application layer gateway services.
F5BigAlgTftp - Trivial File Transfer Protocol (TFTP) application layer gateway services.
F5BigAlgPptp - Point-to-Point Tunnelling Protocol (PPTP) application layer gateway services.
F5BigAlgRtsp - Real Time Streaming Protocol (RTSP) application layer gateway services.
Traffic management profiles and policies configuration
Depending on the type of listener created, different types of profiles and policies can be attached. In the case of F5BigContextSecure, the following CRDs can be attached to define how traffic is processed:
F5BigTcpSetting - TCP options to fine-tune how application traffic is managed.
F5BigUdpSetting - UDP options to fine-tune how application traffic is managed.
F5BigFastl4Setting - FastL4 options to fine-tune how application traffic is managed.
and the following policies for security and NAT:
F5BigDdosPolicy - Denial of Service (DoS/DDoS) event detection and mitigation.
F5BigFwPolicy - Granular stateful-flow filtering based on access control list (ACL) policies.
F5BigIpsPolicy - Intelligent packet inspection protects applications from malignant network traffic.
F5BigNatPolicy - Carrier-grade NAT (CG-NAT) using large-scale NAT (LSN) pools.
The ALG listeners require the use of F5BigNatPolicy and might make use of the F5BigFwPolicy CRDs. These CRDs also have traffic selectors to allow further control over which traffic these policies should be applied to.
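Since each of the objects above is a Kubernetes Custom Resource, the standard oc workflow can be used to check what is available on a cluster and to apply a policy manifest. This is a hedged sketch; the exact CRD names registered by the CNF Helm charts, the manifest file name, and the namespace are assumptions for illustration only:
# List the F5 CNF CRDs registered in the cluster (the f5big name pattern is an assumption)
oc get crd | grep -i f5big
# Apply a policy manifest, for example a firewall policy, into the CNF namespace
oc apply -f f5-fw-policy.yaml -n <cnf-namespace>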
Firewall Contexts
Firewall policies are applied to the listener with the best match. In addition to the F5BigFwPolicy that might be attached, a global firewall policy (hence effective in all listeners) can be configured before the listener-specific firewall policy is evaluated. This is done with the F5BigContextGlobal CRD, which can have an F5BigFwPolicy attached. F5BigContextGlobal also contains the default action to apply on traffic not matching any firewall rule in any context (e.g. Global Context or Secure Context or another listener). This default action can be set to accept, reject or drop, along with whether to log this default action. In summary, within a listener match, the firewall contexts are processed in this order:
ContextGlobal
Matching ContextSecure or another listener context.
Default action as defined by ContextGlobal's default action.
Event Logging
Event logging at high speed is critical to provide visibility of what the CNFs are doing. For this, the following CRDs are implemented:
F5BigLogProfile - Specifies subscriber connection information sent to remote logging servers.
F5BigLogHslpub - Defines remote logging server endpoints for the F5BigLogProfile.
Demo
F5 BIG-IP Next CNF Solutions roadmap
What is being presented here is just the beginning of a journey. Telcos have embraced Kubernetes as their compute and orchestration layer. Because of this, BIG-IP Next CNF Solutions will eventually replace the analogous classic BIG-IP VNFs. Expect in the upcoming months that BIG-IP Next CNF Solutions will match and eventually surpass the features currently being offered by the analogous VNFs.
Conclusion
This article introduces a fully re-architected, scalable solution for Red Hat OpenShift mainly focused on the telco user plane. This new microservices architecture offers flexibility, faster service delivery, resiliency and crucially the use of Kubernetes. Kubernetes is becoming the unified orchestration layer for telcos, simplifying infrastructure lifecycle, and reducing costs. OpenShift represents the best-in-class Kubernetes platform thanks to its enterprise readiness and Telco specific features. The architecture of this solution alongside the use of OpenShift also extends network services use cases to the edge by allowing the deployment of Network Functions in a smaller footprint. Please check the official BIG-IP Next CNF Solutions documentation for more technical details and check www.f5.com for a high level overview.

Seamless Application Migration to OpenShift Virtualization with F5 Distributed Cloud
As organizations endeavor to modernize their infrastructure, migrating applications to advanced virtualization platforms like Red Hat OpenShift Virtualization becomes a strategic imperative. However, they often encounter challenges such as minimizing downtime, maintaining seamless connectivity, ensuring consistent security, and reducing operational complexity. Addressing these challenges is crucial for a successful migration. This article explores how F5 Distributed Cloud (F5 XC), in collaboration with Red Hat's Migration Toolkit for Virtualization (MTV), provides a robust solution to facilitate a smooth, secure, and efficient migration to OpenShift Virtualization.
The Joint Solution: F5 XC CE and Red Hat MTV
Building upon our previous work on deploying F5 Distributed Cloud Customer Edge (XC CE) in Red Hat OpenShift Virtualization, we delve into the next phase of our joint solution with Red Hat. By leveraging F5 XC CE in both VMware and OpenShift environments, alongside Red Hat's MTV, organizations can achieve a seamless migration of virtual machines (VMs) from VMware NSX to OpenShift Virtualization. This integration not only streamlines the migration process but also ensures continuous application performance and security throughout the transition.
Key Components:
Red Hat Migration Toolkit for Virtualization (MTV): Facilitates the migration of VMs from VMware NSX to OpenShift Virtualization; an add-on to OpenShift Container Platform.
F5 Distributed Cloud Customer Edge (XC CE) in VMware: Manages and secures application traffic within the existing VMware NSX environment.
F5 XC CE in OpenShift: Ensures consistent load balancing and security in the new OpenShift Virtualization environment.
Demonstration Architecture
To illustrate the effectiveness of this joint solution, let's delve into the Demo Architecture employed in our demo: The architecture leverages F5 XC CE in both environments to provide a unified and secure load balancing mechanism. Red Hat MTV acts as the migration engine, seamlessly transferring VMs while F5 XC CE manages traffic distribution to ensure zero downtime and maintain application availability and security.
Benefits of the Joint Solution
1. Seamless Migration: Minimal Downtime: The phased migration approach ensures that applications remain available to users throughout the process. IP Preservation: Maintaining the same IP addresses reduces the complexity of network reconfiguration and minimizes potential disruptions.
2. Enhanced Security: Consistent Policies: Security measures such as Web Application Firewalls (WAF), bot detection, and DoS protection are maintained across both environments. Centralized Management: F5 XC CE provides a unified interface for managing security policies, ensuring robust protection during and after migration.
3. Operational Efficiency: Unified Platform: Consolidating legacy and cloud-native workloads onto OpenShift Virtualization simplifies management and enhances operational workflows. Scalability: Leveraging Kubernetes and OpenShift's orchestration capabilities allows for greater scalability and flexibility in application deployment.
4. Improved User Experience: Continuous Availability: Users experience uninterrupted access to applications, unaware of the backend migration activities. Performance Optimization: Intelligent load balancing ensures optimal application performance by efficiently distributing traffic across environments.
Watch the Demo Video To see this joint solution in action, watch our detailed demo video on the F5 DevCentral YouTube channel. The video walks you through the migration process, showcasing how F5 XC CE and Red Hat MTV work together to facilitate a smooth and secure transition from VMware NSX to OpenShift Virtualization. Conclusion Migrating virtual machines (VMs) from VMware NSX to OpenShift Virtualization is a significant step towards modernizing your infrastructure. With the combined capabilities of F5 Distributed Cloud Customer Edge and Red Hat's Migration Toolkit for Virtualization, organizations can achieve this migration with confidence, ensuring minimal disruption, enhanced security, and improved operational efficiency. Related Articles: Deploying F5 Distributed Cloud Customer Edge in Red Hat OpenShift Virtualization BIG-IP VE in Red Hat OpenShift Virtualization VMware to Red Hat OpenShift Virtualization Migration OpenShift Virtualization

Automating F5 NGINX Instance Manager Deployments on VMWare
With F5 NGINX One, customers can leverage the F5 NGINX One SaaS console to manage inventory, stage/push configs to cluster groups, and take advantage of our FCPs (Flexible Consumption Plans). However, the NGINX One console may not be feasible to customers with isolated environments with no connectivity outside the organization. In these cases, customers can run self-managed builds with the same NGINX management capabilities inside their isolated environments. In this article, I step through how to automate F5 NGINX Instance Manager deployments with packer and terraform. Prerequisites I will need a few prerequisites before getting started with the tutorial: Installing vCenter on my ESXI host; I need to install vCenter to login and access my vSphere console. A client instance with packer and terraform installed to run my build. I use a virtual machine on my ESXI host. NGINX license keys; I will need to pull my NGINX license keys from MyF5 and store them in my client VM instance where I will run the build. Deploying NGINX Instance Manager Deploying F5 NGINX Instance Manager in your environment involves two steps: Running a packer build outputting a VM template to my datastore Applying the terraform build using the VM template from step 1 to deploy and install NGINX Instance Manager Running the Packer Build Before running the packer build, I will need to SSH into my VM installer and download packer compatible ISO tools and plugins. $ sudo apt-get install mkisofs && packer plugins install github.com/hashicorp/vsphere && packer plugins install github.com/hashicorp/ansible Second, pull the GitHub repository and set the parameters for my packer build in the packer hcl file (nms.packer.hcl). $ git pull https://github.com/nginxinc/nginx-management-suite-iac.git $ cd nginx-management-suite-iac/packer/nms/vsphere $ cp nms.pkrvars.hcl.example nms.pkrvars.hcl The table below lists the variables that need to be updated. nginx_repo_crt Path to license certificate required to install NGINX Instance Manager (/etc/ssl/nginx/nginx-repo.crt) nginx_repo_key Path to license key required to install NGINX Instance Manager (/etc/ssl/nginx/nginx-repo.key) iso_path Path of the ISO where the VM template will boot from. The ISO must be stored in my vSphere datastore cluster_name The vSphere cluster datacenter The vSphere datacenter datastore The vSphere datastore network The vSphere network where the packer build will run. I can use static IPs if DHCP is not available. Now I can run my packer build $ export VSPHERE_URL="my-vcenter-url" $ export VSPHERE_PASSWORD="my-password" $ export VSPHERE_USER="my-username" $ ./packer-build.sh -var-file="nms.pkrvars.hcl" **Note: If DHCP is not available in my vSphere network, I need to assign static ips before running the packer build script. Running the Packer Build with Static IPs To assign static IPs, I modified the cloud init template in my packer build script (packer-build.sh). Under the auto-install field, I can add my Ubuntu Netplan configuration and manually assign my ethernet IP address, name servers, and default gateway. #cloud-config autoinstall: version: 1 network: version: 2 ethernets: addresses: - 10.144.xx.xx/20 nameservers: addresses: - 172.27.x.x - 8.8.x.x search: [] routes: - to: default via: 10.144.xx.xx identity: hostname: localhost username: ubuntu password: ${saltedPassword} Running the Terraform Build As mentioned in the previous section, the packer build will output a VM template to my vSphere datastore. 
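Optionally, the presence of the template can be confirmed from the client VM before moving on. This is a hedged sketch using govc, the vSphere CLI from the govmomi project, which is not part of the toolchain used above and would need to be installed separately:
# Point govc at vCenter, reusing the credentials already exported for packer
export GOVC_URL="$VSPHERE_URL" GOVC_USERNAME="$VSPHERE_USER" GOVC_PASSWORD="$VSPHERE_PASSWORD" GOVC_INSECURE=1
# List the template folder created by the packer build (named nms-yyyy-mm-dd)
govc datastore.ls -ds <datastore> | grep nms-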
I should be able to see the file template in the nms-yyyy-mm-dd/nms-yyyy-mm-dd.vmtx directory of my datastore. Before running the terraform build, I set parameters in the terraform parameter file (terraform.tfvars). $ cp terraform.tfvars.example terraform.tfvars $ vi terraform.tfvars The table below lists the variables that need to be updated. cluster_name The vSphere cluster datacenter The vSphere datacenter datastore The vSphere datastore network The vSphere network to deploy and install NIM template_name The VM template generated by the packer build (nms-yyyy-mm-dd) ssh_pub_key The public SSH key (~/.ssh/id_rsa.pub) ssh_user The SSH user (ubuntu) Once parameters are set, I will need to set my env variables. $ export TF_VAR_vsphere_url="my-vcenter-url.com" $ export TF_VAR_vsphere_password="my-password" $ export TF_VAR_vsphere_user="my-username" #Set the admin password for NIM user $ export TF_VAR_admin_password="my-admin-password" And initialize/apply my terraform build. $ terraform init $ terraform apply **Note: If DHCP is not available in my vSphere network, I need to assign static IPs once again in my terraform script before running the build. Assigning Static IPs in Terraform Build (optional) To assign static IPs, I will need to modify the main terraform file (main.tf). I will add the following clone context inside my vsphere-virtual-machine vm resource and set the options to the appropriate IPs and netmask. clone { template_uuid = data.vsphere_virtual_machine.template.id customize { linux_options { host_name = "foo" domain = "example.com" } network_interface { ipv4_address = "10.144.xx.xxx" ipv4_netmask = 20 dns_server_list = ["172.27.x.x", "8.8.x.x"] } ipv4_gateway = "10.144.xx.xxx" } } Connecting to NGINX Instance Manager Once the terraform build is complete, I will see the NGINX Instance Manager VM running in the vSphere console. I can open a new tab in my browser and enter the IP address to connect and login with admin/$TF_VAR_admin_password creds. Conclusion Installing NGINX Instance Manager in your environment is now easier than ever. Following this tutorial, I can install NGINX Instance Manager in under 5 minutes and manage NGINX inventory inside my isolated environment.

Deploying F5 Distributed Cloud Customer Edge in Red Hat OpenShift Virtualization
Introduction Red Hat OpenShift Virtualization is a feature that brings virtual machine (VM) workloads into the Kubernetes platform, allowing them to run alongside containerized applications in a seamless, unified environment. Built on the open-source KubeVirt project, OpenShift Virtualization enables organizations to manage VMs using the same tools and workflows they use for containers. Why OpenShift Virtualization? Organizations today face critical needs such as: Rapid Migration: "I want to migrate ASAP" from traditional virtualization platforms to more modern solutions. Infrastructure Modernization: Transitioning legacy VM environments to leverage the benefits of hybrid and cloud-native architectures. Unified Management: Running VMs alongside containerized applications to simplify operations and enhance resource utilization. OpenShift Virtualization addresses these challenges by consolidating legacy and cloud-native workloads onto a single platform. This consolidation simplifies management, enhances operational efficiency, and facilitates infrastructure modernization without disrupting existing services. Integrating F5 Distributed Cloud Customer Edge (XC CE) into OpenShift Virtualization further enhances this environment by providing advanced networking and security capabilities. This combination offers several benefits: Multi-Tenancy: Deploy multiple CE VMs, each dedicated to a specific tenant, enabling isolation and customization for different teams or departments within a secure, multi-tenant environment. Load Balancing: Efficiently manage and distribute application traffic to optimize performance and resource utilization. Enhanced Security: Implement advanced threat protection at the edge to strengthen your security posture against emerging threats. Microservices Management: Seamlessly integrate and manage microservices, enhancing agility and scalability. This guide provides a step-by-step approach to deploying XC CE within OpenShift Virtualization, detailing the technical considerations and configurations required. Technical Overview Deploying XC CE within OpenShift Virtualization involves several key technical steps: Preparation Cluster Setup: Ensure an operational OpenShift cluster with OpenShift Virtualization installed. Access Rights: Confirm administrative permissions to configure compute and network settings. F5 XC Account: Obtain access to generate node tokens and download the XC CE images. Resource Optimization: Enable CPU Manager: Configure the CPU Manager to allocate CPU resources effectively. Configure Topology Manager: Set the policy to single-numa-node for optimal NUMA performance. Network Configuration: Open vSwitch (OVS) Bridges: Set up OVS bridges on worker nodes to handle networking for the virtual machines. NetworkAttachmentDefinitions (NADs): Use Multus CNI to define how virtual machines attach to multiple networks, supporting both external and internal connectivity. Image Preparation: Obtain XC CE Image: Download the XC CE image in qcow2 format suitable for KubeVirt. Generate Node Token: Create a one-time node token from the F5 Distributed Cloud Console for node registration. User Data Configuration: Prepare cloud-init user data with the node token and network settings to automate the VM initialization process. Deployment: Create DataVolumes: Import the XC CE image into the cluster using the Containerized Data Importer (CDI). Deploy VirtualMachine Resources: Apply manifests to deploy XC CE instances in OpenShift. 
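A minimal sketch of the deployment step is shown below. It assumes the DataVolume and VirtualMachine manifests have already been written to local files (the file names are hypothetical) and that the f5-ce namespace used elsewhere in this guide is the target; the manifest contents themselves are covered in the following sections:
# Import the XC CE qcow2 image through CDI, then create the virtual machine
oc apply -f xc-ce-datavolume.yaml -n f5-ce
oc apply -f xc-ce-virtualmachine.yaml -n f5-ce
# Watch the image import and the VM instance come up
oc get dv,vm,vmi -n f5-ce -w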
Network Configuration Setting up the network involves creating Open vSwitch (OVS) bridges and defining NetworkAttachmentDefinitions (NADs) to enable multiple network interfaces for the virtual machines. Open vSwitch (OVS) Bridges Create a NodeNetworkConfigurationPolicy to define OVS bridges on all worker nodes: apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: ovs-vms spec: nodeSelector: node-role.kubernetes.io/worker: '' desiredState: interfaces: - name: ovs-vms type: ovs-bridge state: up bridge: allow-extra-patch-ports: true options: stp: true port: - name: eno1 ovn: bridge-mappings: - localnet: ce2-slo bridge: ovs-vms state: present Replace eno1 with the appropriate physical network interface on your nodes. This policy sets up an OVS bridge named ovs-vms connected to the physical interface. NetworkAttachmentDefinitions (NADs) Define NADs using Multus CNI to attach networks to the virtual machines. External Network (ce2-slo): External Network (ce2-slo): Connects VMs to the physical network with a specific VLAN ID. This setup allows the VMs to communicate with external systems, services, or networks, which is essential for applications that require access to resources outside the cluster or need to expose services to external users. apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: ce2-slo namespace: f5-ce spec: config: | { "cniVersion": "0.4.0", "name": "ce2-slo", "type": "ovn-k8s-cni-overlay", "topology": "localnet", "netAttachDefName": "f5-ce/ce2-slo", "mtu": 1500, "vlanID": 3052, "ipam": {} } Internal Network (ce2-sli): Internal Network (ce2-sli): Provides an isolated Layer 2 network for internal communication. By setting the topology to "layer2", this network operates as an internal overlay network that is not directly connected to the physical network infrastructure. The mtu is set to 1400 bytes to accommodate any overhead introduced by encapsulation protocols used in the internal network overlay. apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: ce2-sli namespace: f5-ce spec: config: | { "cniVersion": "0.4.0", "name": "ce2-sli", "type": "ovn-k8s-cni-overlay", "topology": "layer2", "netAttachDefName": "f5-ce/ce2-sli", "mtu": 1400, "ipam": {} } VirtualMachine Configuration Configuring the virtual machine involves preparing the image, creating cloud-init user data, and defining the VirtualMachine resource. Image Preparation Obtain XC CE Image: Download the qcow2 image from the F5 Distributed Cloud Console. Generate Node Token: Acquire a one-time node token for node registration. Cloud-Init User Data Create a user-data configuration containing the node token and network settings: #cloud-config write_files: - path: /etc/vpm/user_data content: | token: <your-node-token> slo_ip: <IP>/<prefix> slo_gateway: <Gateway IP> slo_dns: <DNS IP> owner: root permissions: '0644' Replace placeholders with actual network configurations. This file automates the VM's initial setup and registration. VirtualMachine Resource Definition Define the VirtualMachine resource, specifying CPU, memory, disks, network interfaces, and cloud-init configurations. Resources: Allocate sufficient CPU and memory. Disks: Reference the DataVolume containing the XC CE image. Interfaces: Attach NADs for network connectivity. Cloud-Init: Embed the user data for automatic configuration. 
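Once the VirtualMachine resource is applied, first boot and registration can be followed from the cluster. The sketch below is hedged; the VM name ce2 is a hypothetical example, and virtctl is the client that ships with OpenShift Virtualization:
# Confirm the CE virtual machine instance is scheduled and running
oc get vmi -n f5-ce
# Attach to the serial console to watch cloud-init apply the user data and the node register with F5 XC
virtctl console ce2 -n f5-ce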
Conclusion Deploying F5 Distributed Cloud CE in OpenShift Virtualization enables organizations to leverage advanced networking and security features within their existing Kubernetes infrastructure. This integration facilitates a more secure, efficient, and scalable environment for modern applications. For detailed deployment instructions and configuration examples, please refer to the attached PDF guide. Related Articles: BIG-IP VE in Red Hat OpenShift Virtualization VMware to Red Hat OpenShift Virtualization Migration OpenShift Virtualization

BIG-IP Next Automation: AS3 Basics
I need a little Mr. Miyagi right now to grab my face and intently look me in the eye and give me a "Concentrate! Focus power!" For those of you youngins' who don't know who that is, he's the OG Karate Kid mentor. Anyway, I have a thousand things I want to say about AS3 but in this article, I'll attempt to cut this down to a narrow BIG-IP Next-specific context to get you started. It helps that last December I did a five-part streaming series on AS3 in the BIG-IP classic context. If you haven't seen that, you have my blessing to stop right now, take some time to digest AS3 conceptually and practice against workloads and configurations in BIG-IP classic that you know and understand, before returning here to embrace all the newness of BIG-IP Next. AS3 is FOUNDATIONAL in BIG-IP Next In classic BIG-IP, you could edit the bigip.conf file directly, use tmsh commands, or iControlREST commands to imperatively create/modify/delete BIG-IP objects. With the exception of system configuration and shared configuration objects, this is not the case with BIG-IP Next. All application configuration is AS3 at its lowest state level. This doesn't mean you have to work primarily in AS3 configuration. If you utilize the migration utility in Central Manager, it will generate the AS3 necessary to get your apps up and running. Another option is to use the built-in http FAST template (we'll cover FAST in later articles) to build out an application from scratch in the GUI. But if you use features outside the purview of that template, or you need to edit your migration output, you'll need to work in the AS3 configuration declaration, even if just a little bit. Apples to Apples It's a fun card game, no? My family takes it to snarky absurd levels of sarcasm, to the point that when we play with "outsiders" we get lots of blank looks and stares as we're all rolling on the floor laughing. Oh well, to each his own. But we're here to talk about AS3, right? Well, in BIG-IP Next, there is a compatibility API for AS3, such that you can take a declaration from BIG-IP classic and as long as the features within that declaration are supported, it should "just work" via the Central Manager API. That's pretty cool, right? Let's start with a basic application declaration from the recent video posted by Mark_Dittmerexploring the API differences between classic and Next. { "class": "ADC", "schemaVersion": "3.0.0", "id": "generated-for-testing", "Tenant_1": { "class": "Tenant", "App_1": { "class": "Application", "Service_1": { "class": "Service_HTTP", "virtualAddresses": [ "10.0.0.1" ], "virtualPort": 80, "pool": "Pool_1" }, "Pool_1": { "class": "Pool", "members": [ { "servicePort": 80, "serverAddresses": [ "10.1.0.1", "10.1.0.2" ] } ] } } } } A simple VIP with a pool with two pool members. A toy config to be sure, but it is useful here to show the format (JSON) of an AS3 declaration and some of the schema as well. With the compatibility API, this same declaration can be posted to a classic BIG-IP like this: POST https://<BIG-IP IP Address>/mgmt/shared/appsvcs/declare Or a BIG-IP Next instance like this: POST https://<Central Manager IP Address>/api/v1/spaces/default/appsvcs/declare?target_address=<BIG-IP Next instance IP Address> For those already embracing AS3, this compatibility API in BIG-IP Next should make the transition easier. 
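Putting the compatibility endpoint above to work, a minimal sketch of the call looks like the following. It assumes the declaration shown earlier has been saved locally as declaration.json and that an API token for Central Manager has already been obtained and exported as TOKEN (token-based authentication is an assumption here; how you authenticate is outside the scope of this sketch):
# Send the classic-style AS3 declaration through Central Manager to a Next instance
curl -sk -X POST \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d @declaration.json \
  "https://<Central Manager IP Address>/api/v1/spaces/default/appsvcs/declare?target_address=<BIG-IP Next instance IP Address>"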
AS3 Workflow in BIG-IP Next
With BIG-IP classic, you had to install the AS3 package (technically an iControl LX, or sometimes referenced as an iApps v2 package) onto each BIG-IP system you wanted to use the AS3 declarative configuration model on. Each BIG-IP was an island, and the configuration management of the overall system of BIG-IPs was reliant on an external system for a source of truth. With BIG-IP Next, the Central Manager API has native AS3 support so there are no packages to install to prepare the environment. Also, Central Manager is the centralized AS3 interface for all Next instances. This has several benefits:
A singular and centralized source of truth for your configuration management
No external package management requirements
Tremendous improvement in API performance management since most of the heavy lifting is offloaded from the instances and onto Central Manager and the control-plane functionality that remains on the instance is intentionally designed for API-first operations
The general application deployment workflow introduced exclusively for Next, which I'll reference as the documents API, is twofold:
Create an application service
First, you create the application service on Central Manager. You can use the same JSON declaration from the section above here, only the API endpoint is different:
POST https://<Central Manager IP Address>/api/v1/spaces/default/appsvcs/documents
A successful transaction will result in an application service document on Central Manager. A couple of notes on this at the time of writing:
Documents created through the API are not validated against the journeys migration tool that is available for use in the Central Manager GUI.
Documents are not schema validated at the attribute level of classes, so whereas a class used in classic might be supported in Next, some of the attributes might not be. This means that whereas the document creation process can appear successful, the deployment will fail if classes and/or class attributes supported in classic BIG-IP are present in the AS3 declarations when an attempt to apply to an instance occurs.
Deploy the application service
Assuming, however, all your AS3 work is accurate to the Next-supported schema, you post the specified document by ID to the target BIG-IP Next instance, here as a JSON payload versus a query parameter on the compatibility API shown earlier.
POST https://<Central Manager IP Address>/api/v1/spaces/default/appsvcs/documents/<Document ID>/deployments { "target": "<BIG-IP Next Instance IP Address>" }
At this point, your service should be available to receive traffic on the instance it was deployed on.
Next Up...
Now that we have the theory in place, join me next time where we'll take a look at working with a couple application services through both approaches.
Resources
CM App Services Management
AS3 Schema
AS3 User Guide (classic, but useful)
AS3 Reference Guide (classic, but useful)
AS3 Foundations (streaming series)

Enhance your GenAI chatbot with the power of Agentic RAG and F5 platform
Agentic RAG (Retrieval-Augmented Generation) enhances the capabilities of a GenAI chatbot by integrating dynamic knowledge retrieval into its conversational abilities, making it more context-aware and accurate. In this demo, I will demonstrate an autonomous decision-making GenAI chatbot utilizing Agentic RAG. I will explore what Agentic RAG is and why it's crucial in today's AI landscape. I will also discuss how organizations can leverage GPUaaS (GPU as a Service) or AI Factory providers to accelerate their AI strategy. The F5 platform provides robust security features that protect sensitive data while ensuring high availability and performance. It also optimizes the chatbot by streamlining traffic management and reducing latency, ensuring smooth interactions even during high demand. This integration ensures the GenAI chatbot is not only smart but also reliable and secure for enterprise use.

iRules 101 - #12 - The Session Command
One of the things that makes iRules so incredibly powerful is the fact that it is a true scripting language, or at least based on one. The fact that they give you the tools that TCL brings to the table - regular expressions, string functions, even things as simple as storing, manipulating and recalling variable data - sets iRules apart from the rest of the crowd. It also makes it possible to do some pretty impressive things with connection data and massaging/directing it the way you want it. Other articles in the series: Getting Started with iRules: Intro to Programming with Tcl | DevCentral Getting Started with iRules: Control Structures & Operators | DevCentral Getting Started with iRules: Variables | DevCentral Getting Started with iRules: Directing Traffic | DevCentral Getting Started with iRules: Events & Priorities | DevCentral Intermediate iRules: catch | DevCentral Intermediate iRules: Data-Groups | DevCentral Getting Started with iRules: Logging & Comments | DevCentral Advanced iRules: Regular Expressions | DevCentral Getting Started with iRules: Events & Priorities | DevCentral iRules 101 - #12 - The Session Command | DevCentral Intermediate iRules: Nested Conditionals | DevCentral Intermediate iRules: Handling Strings | DevCentral Intermediate iRules: Handling Lists | DevCentral Advanced iRules: Scan | DevCentral Advanced iRules: Binary Scan | DevCentral Sometimes, though, a simple variable won't do. You've likely heard of global variables in one of the earlier 101 series and read the warning there, and are looking for another option. So here you are, you have some data you need to store, which needs to persist across multiple connections. You need it to be efficient and fast, and you don't want to have to do a whole lot of complex management of a data structure. One of the many ways that you can store and access information in your iRule fits all of these things perfectly, little known as it may be. For this scenario I'd recommend the usage of the session command. There are three main permutations of the session command that you'll be using when storing and referencing data within the session table. These are: session add: Stores user's data under the specified key for the specified persistence mode session lookup: Returns user data previously stored using session add session delete: Removes user data previously stored using session add A simple example of adding some information to the session table would look like: when CLIENTSSL_CLIENTCERT { set ssl_cert [SSL::cert 0] session add ssl $ssl_cert 90 } By using the session add command, you can manually place a specific piece of data into the LTM's session table. You can then look it up later, by unique key, with the session lookup command and use the data in a different section of your iRule, or in another connection all together. This can be helpful in different situations where data needs to be passed between iRules or events that it might not normally be when using a simple variable. 
Such as mining SSL data from the connection events, as below:
when CLIENTSSL_CLIENTCERT { # Set results in the session so they are available to other events session add ssl [SSL::sessionid] [list [X509::issuer] [X509::subject] [X509::version]] 180 } when HTTP_REQUEST { # Retrieve certificate information from the session set sslList [session lookup ssl [SSL::sessionid]] set issuer [lindex $sslList 0] set subject [lindex $sslList 1] set version [lindex $sslList 2] }
Because the session table is optimized and designed to handle every connection that comes into the LTM, it's very efficient and can handle quite a large number of items. Also note that, as above, you can pass structured information such as TCL lists into the session table and they will remain intact. Keep in mind, though, that there is currently no way to count the number of entries in the table with a certain key, so you'll have to build all of your own processing logic for now, where necessary. It's also important to note that there is more than one session table. If you look at the above example, you'll see that before we listed any key or data to be stored, we used the command session add ssl. Note the "ssl" portion of this command. This is a reference to which session table the data will be stored in. For our purposes here there are effectively two session tables: ssl, and uie. Be sure you're accessing the same one in your session lookup section as you are in your session add section, or you'll never find the data you're after. This is pretty easy to keep straight, once you see it. It looks like: session add uie ... session lookup uie Or: session add ssl ... session lookup ssl You can find complete documentation on the session command here, as well as some great examples that depict some more advanced iRules making use of the session command to great success. Check out Codeshare for more examples.

Another FSE iRules Challenge, Even More Surprising Results
Another FSE iRules Challenge, Even More Surprising Results

I have an awesome job. I get to play with cool technology, with good people, at an awesome company, and actually don't get in trouble for doing so. I get to blend writing and talking and blathering on endlessly to anyone that will listen with completely geeking out and diving into the nuts and bolts of things to see what makes things work. This doesn't suck. One of the things that doesn't suck the most is getting to kick on the light bulb for people that haven't quite gotten their hands around our programmability technologies just yet. F5 is laden with opportunities to get your script on. From iRules to iControl to iCall and TMSH scripting, there is no shortage of opportunities to get down and dirty with some code. That being said, not everyone is up to speed on such things yet, and I take particular joy in being able to help them connect the wires, get the first flickers of that "Holy crap this stuff is cool!" halogen, and then go on their merry way. Lucky as I am, what with the awesome job and all, I get many opportunities for just such interactions. More seasoned readers may be familiar with the one on which today's post is focused, the FSE challenge. I haven't posted one of these in a little bit, so let's have a refresher from days of yore.

First off, what is an FSE? An FSE is a Field Sales Engineer. FSEs are the engineering lifeblood of the sales force here at F5. They're the ones out in the trenches dealing with customer requirements and issues, building real world solutions, and generally doing all the cool stuff that I get to talk about theoretically, but in the real world. I've got mad respect for those FSEs that take their jobs seriously and learn how to build full-fledged F5 solutions that leverage our crazy broad product set and, you guessed it, our out of the box tools like iControl and iRules. Those that choose to flex those muscles garner a special place in my encrypted little heart.

Next, what is with this challenge business? Every time we get a new batch of FSEs in at corporate for brainwashing, err, training, we put them through what we lovingly refer to as a boot camp. This is, as you might expect given the name, a rapid way of getting folks up to speed on not only F5 technology but all of the surrounding whats-its and know-how that is expected of someone out in the field slinging our tech. This invariably includes a delve into iRules. There is formal training, of course, but the challenge is a different beast altogether. I effectively pose as a customer in the field with a complex (at least complex by beginners' standards) problem that needs solving. I present it to the batch of keyboard jockeys, give them time to ask questions, take notes, etc., then cut them loose. In their "free time" (see: sleepless hours well into the night) they get to hash out the solution to the problem. They are expected to write, test, and lightly document an iRules solution to eradicate the posed problem point by point. Points are awarded for effectiveness, efficiency, and exportability, meaning ease of use and hand-off. I come back a week later, after poring over the proffered code snippets, and announce the winners (top 3) based on said criteria, who are then awarded fabulous cash and prizes! (Bold + italics means it must be true, right? Even when it's not. Since it's not. At all.)

Lastly, what was the challenge?
People are always curious to hear what the actual challenge was when looking at the submissions, so here you go:

Scenario: A client has an https-based application that is undergoing upgrades and large changes, and they need to create business logic in the network layer to allow for a smooth transition and consistent user experience.

Desired Solution:
1. Ensure that all requests from the client to the BIG-IP are SSL encrypted
2. Ensure all traffic to the back end is plain-text
3. For all canonical names of domain.com (i.e., bob.domain.com, app1.domain.com, etc.) remove the canonical name and prepend it to the URI (i.e., bob.domain.com/my/app becomes domain.com/bob/my/app). Standard canonicals are excluded from this re-writing (mail, smtp, www, ns, ns1)
4. These host/URI changes must happen transparently to the clients accessing the application
5. Anyone accessing the application from the internal network (10.1.*) with an appropriate auth cookie (Name: X-Int-Auth. Value=True) bypasses the above logic and accesses the old structure
6. Log any request (IP of client and URI requested) to a canonical name x.domain.com that is non-standard (mail, smtp, www, ns, ns1) so that data can be collected as to when users have fully transitioned

So now that the table is set, on with the feast! This time around I had another killer host of entries into the hopper. There were people of all experience levels, from "newbie, never coded, what does this double equals thing do?" to one heck of a ringer, who would make himself known eventually, even though he flew under the radar at first, sly dog that he is. Out of the raft of valiant attempts and solid efforts, my arduous duty was to narrow it down to the top three and announce them to the group, and later (now) the world. Such was my task, and such was performed. I bring you this quarter's FSE iRules Challenge winners:

3rd Place – Benn Alp

Complete with ASCII art, Benn put in a heroic effort on this submission. He ended up with a heck of a lot of code, and most of it was extremely valid, which just goes to show that while he didn't have the most efficient solution, he stuck with things until he got where he wanted. I encourage less meandering approaches to coding, but I was impressed by the thought that went into this one and the potential that is apparent in his thought process, logic and effort. Way to go Benn!

##############################################################################
# _ ____ _ ____ _ _ _ #
# (_) _ \ _ _| | ___ ___ / ___| |__ __ _| | | ___ _ __ __ _ ___ #
# | | |_) | | | | |/ _ \/ __| | | | '_ \ / _` | | |/ _ \ '_ \ / _` |/ _ \ #
# | | _ <| |_| | | __/\__ \ | |___| | | | (_| | | | __/ | | | (_| | __/ #
# |_|_| \_\\__,_|_|\___||___/ \____|_| |_|\__,_|_|_|\___|_| |_|\__, |\___| #
# |___/ #
##############################################################################
#
# Benn Alp b.alp@f5.com
#
when RULE_INIT {
    #
    # CONFIGURABLE ITEMS
    #----------------------------------------------------------------------------------------------------------
    # Debug Logging. (Note: Irrespective of how this is set, logs to satisfy requirement 6 -
    # "Log any request (IP of client and URI requested) to a canonical name x.domain.com that is non standard
    # (mail, smtp, www, ns, ns1) so that data can be collected as to when users have fully transitioned" -
    # will be sent regardless of this setting.)
    # 0 - Disabled
    # 1 - Enabled
    #----------------------------------------------------------------------------------------------------------
    set static::DebugLogging 1
    #----------------------------------------------------------------------------------------------------------
    # Behaviour when requirement 1 is violated for transformed apps (Ensure that all requests from the client to the BIG-IP are SSL encrypted)
    # Legacy apps continue to work as per normal.
    # 0 - Reject
    # 1 - Redirect to https
    #----------------------------------------------------------------------------------------------------------
    set static::SSLBehaviour 1
    #----------------------------------------------------------------------------------------------------------
}
when HTTP_REQUEST {
    set Rewrite 0
    if { $static::DebugLogging } {
        log local0. "Trigger HTTP_REQUEST"
    }
    # Requirement 5 - Anyone accessing the application from the internal network (10.1.x.x) with an appropriate auth cookie (Name: X-Int-Auth Value=True) bypasses the above logic and accesses the old structure; anybody using domain.com also bypasses/returns
    if { [IP::client_addr] starts_with "10.1." and [HTTP::header "X-Init-Auth"] equals "True" or [string tolower [HTTP::host]] equals "domain.com" } {
        if { $static::DebugLogging } {
            log local0. "Trigger return based on Requirement 5 or domain=domain.com"
        }
        return
    } else {
        if { [HTTP::host] contains ".domain.com" } {
            # Requirement 3.1 - Standard canonicals are excluded from this re-writing (mail, smtp, www, ns, ns1) and
            # Requirement 6 - Log any request (IP of client and URI requested) to a canonical name x.domain.com that is non standard (mail, smtp, www, ns, ns1) so that data can be collected as to when users have fully transitioned
            # Switch -exact was faster than data groups..
            switch -exact [string tolower [HTTP::host]] {
                "mail.domain.com" {
                    log local0. "Legacy Connection - USERIP [IP::client_addr] - mail.domain.com/[HTTP::uri]"
                }
                "smtp.domain.com" {
                    log local0. "Legacy Connection - USERIP [IP::client_addr] - smtp.domain.com/[HTTP::uri]"
                }
                "www.domain.com" {
                    log local0. "Legacy Connection - USERIP [IP::client_addr] - URI www.domain.com/[HTTP::uri]"
                }
                "ns.domain.com" {
                    log local0. "Legacy Connection - USERIP [IP::client_addr] - URI ns.domain.com/[HTTP::uri]"
                }
                "ns1.domain.com" {
                    log local0. "Legacy Connection - USERIP [IP::client_addr] - URI ns1.domain.com/[HTTP::uri]"
                }
...

And that's all I'm showing of Benn's solution. It goes on for a while, and was an awesome effort, but it's...rather long. ;)

2nd Place – Max Iftikhar

Max set a high bar indeed with his submission, which used the uber-efficient stream profile, a solid cut at response re-writing (one of the pitfalls of this particular challenge), and some handy-dandy string manipulation. This one was efficient, brief, and looked like it could have been the overall winner. All things being equal, in many other FSE classes this very well could have won, as it is a darn fine effort, and Max should hold his head high while coding. Unless of course he can't see the monitor, then hold it rather normally and just know you kicked some tail, Max.

when HTTP_REQUEST {
    if { [TCP::local_port] == 80 } {
        # redirect to https
        HTTP::redirect "https://[getfield [HTTP::host] ":" 1][HTTP::uri]"
    }
}

when HTTP_REQUEST {
    set rewrite 0
    set canonical [getfield [HTTP::host] "." 1]
    set host1 [HTTP::host]
    set host2 [getfield [HTTP::host] "$canonical" 1]
    set uri1 "[HTTP::uri]"
    set uri2 "/$canonical[HTTP::uri]"
    if { [IP::addr [IP::client_addr] equals 10.1.x.x/16] and [HTTP::cookie exists "X-Int-Auth"] } {
        pool http_pool
    } else {
        log local0. "Received request from [IP::client_addr] -> [HTTP::host][HTTP::uri]"
    }
    # Rewrite the Host header
    HTTP::header replace "Host" $::host2
    # Make uri path start with /canonical if it doesn't already
    if { not ([HTTP::uri] starts_with "/$canonical") } {
        HTTP::uri [string map -nocase {$uri1 $uri2} [HTTP::uri]]
        set urlRewrite 1
    }
}

when HTTP_RESPONSE {
    if {$rewrite} {
        # Check if response is a redirect
        if {[HTTP::is_redirect] and [HTTP::header Location] contains $find} {
            # Rewrite the redirect Location header value
            HTTP::header replace "Host" $::host1
            HTTP::header replace Location [string map -nocase "$url1 $url2" [HTTP::header Location]]
        }
        # Check if response payload type is text
        if {[HTTP::header value Content-Type] contains "text"} {
            # Set the replacement strings
            STREAM::expression "@$url1@$url2@"
            # Enable the stream filter for this response only
            STREAM::enable
        }
    }
}

Winner! – Joe Martin

Last but the exact inverse of least, our winner, in fact, was Joe Martin. Joe seemed like a normal, average, everyday FSE challenge entrant upon first blush. He didn't even bother to out himself at the onset as having written iRules before when I asked for experience levels. Clever ploy, Joe, very clever. As I was later to find out, Joe seemed to in fact be a cyborg-robot-iRules-ninja-hacker-dinosaur sent back from the future to bust the curve for all FSE iRules Challengees everywhere. Seriously, this guy knew what he was doing. This iRule is pretty darn close to the code I would churn out to solve this particular problem and, not to self-aggrandize, but that's not such a bad thing coming from the guy judging the challenge, amirite? Upon presenting the results and having shaken the hand of the Cylon (No, Windows, you may not autocorrect Cylon to colon. Go away, I'm making jokes here.) in charge of iRules affairs himself, I asked Joe how many iRules he'd written before, because it was obvious that he had done so. Much to his credit he admitted to having written hundreds, which makes a whole heck of a lot of sense, and makes me able to sleep just a bit better at night without keeping a light on to watch out for those cyborg iRules ninja invaders. Big congrats to Joe for a darn fine hunk of codey bits.

when HTTP_REQUEST {
    set request_rewrite 0
    #Check to see if this is an internal developer request (internal IP and X-Auth header)
    if { ([HTTP::header X-Int-Auth] equals "True") && ([IP::client_addr] equals "10.1.0.0/16") } {
        pool http_pool
    } else {
        set orig_host [string tolower [HTTP::host]]
        scan $orig_host %s.%s.%s host domain tld
        set new_host "$domain.$tld"
        #Make sure "host" portion of DNS name is not in exclusion data group "class_no-rewrite"
        if { ![class match $host equals class_no-rewrite] } {
            #Flag connection as "request rewritten", rewrite host header and URI, and log request info
            set request_rewrite 1
            HTTP::header replace Host $new_host
            HTTP::uri "/$host[HTTP::uri]"
"Request to $orig_host from [IP::client_addr] rewritten to [HTTP::uri]" pool http_pool } } } when HTTP_RESPONSE { #If the request was rewritten we need to rewrite Location headers and embedded URLs if {$request_rewrite} { if { [HTTP::is_redirect] } { HTTP::header replace Location [string map -nocase "$new_host $orig_host" [HTTP::header Location]] } else { STREAM::expression "@/$host/@/@ @http://$new_host@https://$orig_host@" STREAM::enable } } } All said and done it was another fine experience hosting the FSE iRules challenge. There was code, and fun, and fun code, and coding fun, and funny code, and…well you get the idea. I’m looking forward to the next crop and seeing what they’re capable of. I’ll be working on my cyborg detection methodologies in the meantime. Until then, remember kids: code hard. #Colin #iRules #iRulesChallenge #Cylons760Views0likes1Comment