Getting Started with BIG-IP Next: Licensing Instances in Central Manager
This article assumes that the license was not applied during the initial instance setup, and it covers only the GUI process. For the API process or for disconnected mode, please reference the licensing instructions on Clouddocs.

Download the JSON Web Token from MyF5

I don't have a paid license, so I'm going to use my trial license available at MyF5. Your mileage may vary here. Go to My Products & Plans > Trials, and then in the My Trials listing (assuming you've requested/received one) click BIG-IP Next. Click Downloads and Licenses (note, however, the helpful list of resources down in Guides and References). You can just copy your JSON Web Token, but I chose to download it. (A quick way to peek inside the token is sketched at the end of this walkthrough.)

Install the Token

Log in to Central Manager and click Manage Instances. Click on your new unlicensed instance. In the left-hand menu at the bottom, click License, then click Activate License. We already downloaded our token, so after reviewing the information, click Next. Note that I made sure my Central Manager has access to the licensing server, and the steps covered in this article assume the same. If you've managed classic BIG-IP licenses, copying and pasting dossiers to get licenses should be a well-understood process. On this screen, paste your token into the box, give it a name, and click Activate. After a brief interrogation of the licensing server, you should now have a healthy, licensed BIG-IP Next instance!
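As an aside, the token itself is just a standard JSON Web Token, so you can peek at its claims (entitlement details, expiry) before installing it. A minimal sketch in Python; the token string is a placeholder, and the exact claim names inside an F5 entitlement token may differ:

import base64
import json

def decode_segment(segment: str) -> dict:
    # base64url decode, restoring any stripped padding
    padded = segment + "=" * (-len(segment) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

# Paste the token you downloaded from MyF5 here (placeholder, not a real token)
token = "eyJhbGciOi..."

header_b64, payload_b64, _signature = token.split(".")
print("Header:", json.dumps(decode_segment(header_b64), indent=2))
print("Claims:", json.dumps(decode_segment(payload_b64), indent=2))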
Resources

How to: Manage BIG-IP Next instance licenses

The Business Partner Exchange - An F5 Distributed Cloud Services Demonstration

Large enterprises face challenges when deploying applications at scale, including managing application sprawl, segregating partner and customer traffic, and maintaining consistent security policies. To address these issues, comprehensive traffic management, policy enforcement, and resource allocation are essential for seamless and secure application deployment. The Business Partner Exchange demo illustrates how F5 Distributed Cloud Services, together with Equinix, effectively address these challenges.

Simplify Network Segmentation for Hybrid Cloud
Introduction

Enterprises have always had the need to maintain separate development and production environments. Operational efficiency, reduction of blast radius, security, and compliance are generally the common objectives behind separating these environments. By dividing networks into smaller, isolated segments, organizations can enhance security, optimize performance, and ensure regulatory compliance. This article demonstrates a practical strategy for implementing network segmentation in modern multicloud environments that also connect on-prem infrastructure. It uses F5 Distributed Cloud (F5 XC) services to connect and secure network segments in cloud environments like Amazon Web Services (AWS) and in on-prem datacenters.

Need for Segmentation

Network segmentation is critical for managing complex enterprise environments. Traditional methods like Virtual Routing and Forwarding (VRF) and Multiprotocol Label Switching (MPLS) have long been used to create isolated network segments in on-prem setups. F5 XC ensures segmentation in environments like AWS, and it can extend the same segmentation to on-prem environments. These techniques separate traffic, enhance security, and improve network management by preventing unauthorized access and minimizing the attack surface.

Scenario Overview

Our scenario depicts an enterprise with three different environments (prod, dev, and shared services) extended between on-prem and cloud. A third-party entity requires access to a subset of the enterprise's services. This article covers the following two network segmentation use cases:

Hybrid Cloud Transit
Extranet (servicing external 3rd-party partners/customers)

Hybrid Cloud Transit

Consider an enterprise with three distinct environments: Production (Prod), Development (Dev), and Shared Services. Each environment requires strict isolation to ensure security and performance. Using F5 XC Cloud Connect, we can assign each VPC a network segment, effectively isolating the VPCs. Segments in multiple locations (or VPCs) can traverse F5 XC to reach distant locations, whether in another cloud environment or on-prem. Network segments are isolated by default; for example, our Prod segment cannot access Shared. A segment connector is needed to allow traffic between Prod and Shared. The following diagram shows the VPC segments, ensuring complete "ships in the night" isolation between environments. In this setup, the Prod, Dev, and Shared Services environments operate independently and are completely isolated from one another at the control plane level. This ensures that any issues or attacks in one environment do not affect the others.

Customer Requirement: Shared Services Access

Many enterprises deploy common services across their organization to support internal workloads and applications. Some examples include DHCP, DNS, NTP, and NFS: services that need to be accessible to both Prod and Dev environments while keeping Prod and Dev separate from each other. A segment connector is a method to allow communication between two isolated segments by leaking the routes between the source and destination segments. It is important to note that a segment connector can be of type Direct or SNAT. Direct allows bidirectional communication between segments, whereas the SNAT option allows unidirectional communication from the source to the destination.

Extending Segmentation to On-Premises

Enterprises already use segmented networks within their on-premises infrastructure.
Extending this segmentation to AWS involves creating similar isolated segments in the cloud and establishing secure communication channels. F5 XC allows you to easily extend this segmentation from on-prem to the cloud regardless of the underlay technology. In this scenario, communication between the on-premises Prod segment and its cloud counterpart is seamless, and the same also applies to the Dev segment. Meanwhile, Dev and Prod stay separate, ensuring that existing security and isolation are preserved across the hybrid environment.

Extranet

In this scenario, an external entity (customer/partner) needs access to a few applications within our Prod segment. There are two different ways to enable this access: Network-centric and App-centric. Let's refer to the external entity as Company B. In order to connect Company B, we generally need appropriate cloud credentials, but Company B will not share their cloud credentials with us. To solve this problem, F5 XC recommends using the AWS STS AssumeRole functionality, whereby Company B creates an AWS IAM role that trusts F5 XC with the minimum privileges necessary to configure Transit Gateway (TGW) attachments and TGW route table entries to extend access to the F5 XC network or network segments.

Section 1 - Network-centric Extranet

Many times, partners and customers need to access a unique subset of your enterprise's applications. This can be achieved with F5 XC's dedicated network segments and segment connectors. With a segment connector for the external and prod network segments, we can give Company B access to the required HTTP service without granting broader access to other non-Prod segments.

Locking Down with Firewall Policies

We can implement a Zero Trust firewall policy to lock down access from the external segment. By refining these policies, we ensure that third-party consumers can only access the services they are authorized to use. Our firewall policy on the CE only allows access from the external segment to the intended application on TCP/80 in Prod.

[ec2-user@ip-10-150-10-146 ~]$ curl --head 10.1.10.100
HTTP/1.1 200 OK
Server: nginx/1.24.0 (Ubuntu)
Date: Thu, 30 May 2024 20:50:30 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Wed, 22 May 2024 21:35:11 GMT
Connection: keep-alive
ETag: "664e650f-267"
Accept-Ranges: bytes

[ec2-user@ip-10-150-10-146 ~]$ ping -O 10.1.10.100
PING 10.1.10.100 (10.1.10.100) 56(84) bytes of data.
no answer yet for icmp_seq=1
no answer yet for icmp_seq=2
no answer yet for icmp_seq=3
^C
--- 10.1.10.100 ping statistics ---
4 packets transmitted, 0 received, 100% packet loss, time 3153ms

After applying the new policies, we confirm that the third-party access is restricted to the intended services only, enhancing security and compliance. A quick way to extend this verification beyond ICMP is sketched below. This demonstrates how F5 Distributed Cloud services enable network segmentation across on-prem and cloud environments, with granular control over the security policies applied between the segments.
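The curl and ping output above shows TCP/80 succeeding while ICMP is dropped. A minimal port sweep in Python (a sketch assuming the same demo target, 10.1.10.100) can confirm that other TCP ports are blocked as well:

import socket

TARGET = "10.1.10.100"          # Prod web server from the demo
PORTS = [22, 80, 443, 8080]     # TCP/80 is the only port the policy permits

for port in PORTS:
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(3)
    try:
        sock.connect((TARGET, port))
        print(f"tcp/{port}: open")
    except (socket.timeout, ConnectionRefusedError, OSError) as exc:
        print(f"tcp/{port}: blocked ({exc.__class__.__name__})")
    finally:
        sock.close()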
Section 2 - App-centric Extranet

In the scenario above, Company B can directly access one or more services in Prod with a segment connector, and we've locked it down with a firewall policy. For the App-centric method, we'll only publish the intended services that live in Prod to the external segment. App-centric connectivity is made possible without a segment connector by using load balancers within App Connect that target the application within the Prod segment and advertise its VIP address to the external segment. The following illustration shows how to configure each component in the load balancer.

Visualization of Traffic Flows

The visualization flow analysis tool in the F5 XC Console shows traffic flows between the connected environments. By analyzing these flows, particularly between third-party consumers and the Prod environment, we can identify any unintended access or overreach. The following diagram is for a Network-centric connection flow. The diagram after it shows an App-centric connection flow using the load balancer.

Product Feature Demo

Conclusion

Effective network segmentation is a cornerstone of secure and efficient cloud environments. We've discussed how F5 XC enables hybrid cloud transit and extranet communication. Extranet can be done with either a Network-centric or an App-centric deployment. F5 XC is a platform that manages and orchestrates end-to-end segmentation and security in hybrid-cloud environments. Enterprises can achieve comprehensive segmentation, ensuring isolation, secure access, and compliance. The strategies and examples provided demonstrate how to implement and manage segmentation across hybrid environments, catering to diverse requirements and enhancing overall network security.

Additional Resources

More features and guidance are provided in the comprehensive guide below, which shows exactly how you can use the power and flexibility of F5 Distributed Cloud and Cloud Connect to deliver a Network-centric approach with a firewall and an App-centric approach with a load balancer. Create and manage segmented networks in your own cloud and on-prem environments, and achieve the following benefits:

Ability to isolate environments within AWS
Ability to extend segmentation to on-prem environments
Ability to connect external partners or customers to a specific segment
Use Enhanced Firewall Policies to limit access and reduce the blast radius
Enhance compliance and regulatory posture by isolating sensitive data and systems
Visualize and monitor the traffic flows and policies across segments and network domains

Workflow Guide - Secure Network Fabric (Multi-Cloud Networking)
YouTube: Using network segmentation for hybrid-cloud and extranet with F5 Distributed Cloud Services
DevCentral: Secure Multicloud Networking Article Series
GitHub: S-MCN Use-case Playbooks (Console, Automation) for F5 Distributed Cloud Customers
F5.com: Product Information
Product Documentation
Network Segmentation
Cloud Connect
Network Segment Connectors
App Security
App Networking
CE Site Management

Access Troubleshooting: BIG-IP APM OIDC integration
Introduction

Troubleshooting Access use cases can be challenging due to the interconnected components used to achieve them. Even a simple Active Directory authentication example can run into the challenges below:

DNS resolution of the configured Domain Controller (DC).
Reachability between F5 and the DC.
Communication ports used.
Domain account privileges.

Diagnosing non-working Active Directory (AD) authentication as a whole is a complex task, yet verifying each component individually is much easier and produces output that informs further troubleshooting actions.

Implementation and troubleshooting

We discussed the implementation of OpenID Connect in a previous article. Let's discuss here how we can troubleshoot issues in an OIDC implementation. Here's a summary of the main points we check for each role:

Role: OAuth Authorization Server
Troubleshooting main points: DNS resolution for the authentication destination. Routing setup to the authentication system. Authentication configurations and settings. Scope settings. Token signing and settings.

Role: OAuth Client
Troubleshooting main points: DNS resolution for the authorization server. Routing setup. Token settings. Authorization attributes and parameters.

Role: OAuth Resource Server
Troubleshooting main points: Token settings. Scope settings.

Looking at the main points, you can see the common areas we need to check while troubleshooting OAuth / OIDC solutions. Below is the troubleshooting approach we follow:

Check the logs. APM logging provides a comprehensive set of logs; the main logs to check are apm, ltm, and tmm.
DNS resolution, including DNS resolver settings.
Routing setup.
Authentication method settings.
OAuth settings and parameters.

Check the logs

The logs are your true friends when it comes to troubleshooting. We start by creating a debug logging profile under Overview > Event Logs > Settings, then select the target Access Policy to apply the debug profile.

Case 1: Connection reset after authentication

In this case, the connection sequence is as follows:

User accesses through F5, acting as Client + RS.
User is redirected to the OAuth provider for authentication.
User is redirected back to F5, but the connection resets at this point.

Troubleshooting steps: Check the logs by clicking the session ID from Access > Overview. From the logs below, we can see the logon was successful, but somehow the Authorization code wasn't detected. One main reason would be mismatched settings between the Authorization Server and Client configurations. In our setup, the provider flow type is Hybrid with format code-idtoken.

Local Time: 2024-06-11 06:47:48
Log Message: /Common/oidc_google_t1.app/oidc_google_t1:Common:204adb19: Session variable 'session.logon.last.result' set to '1'
Partition: Common

Local Time: 2024-06-11 06:47:49
Log Message: /Common/oidc_google_t1.app/oidc_google_t1:Common:204adb19: Authorization code not found.
Partition: Common

Check the configuration to validate the needed flow type: adjust the flow type at the provider settings to be Authorization Code instead of Hybrid. A quick way to sanity-check what the provider actually returned on the redirect is sketched below.
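As a quick aside, if you can capture the redirect URL the provider sends the browser back with (from the browser dev tools, for example), a few lines of Python show whether an authorization code was actually returned. The URL below is hypothetical; a Hybrid flow typically returns its response in the URL fragment, while an Authorization Code flow uses the query string:

from urllib.parse import urlparse, parse_qs

# Hypothetical redirect captured from the browser; a Hybrid flow returns
# parameters in the fragment, an Authorization Code flow in the query string.
redirect = "https://apm.example.com/oauth/client/redirect#id_token=eyJ...&state=abc"

parsed = urlparse(redirect)
query = parse_qs(parsed.query)
fragment = parse_qs(parsed.fragment)

print("code in query:   ", "code" in query)
print("code in fragment:", "code" in fragment)
print("id_token present:", "id_token" in query or "id_token" in fragment)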
Case 2: Expired JWT Keys

In this case, the connection sequence is as follows:

User accesses through F5, acting as Client + RS.
User is redirected to the OAuth provider for authentication.
User is redirected back to F5 with Access denied.

Troubleshooting steps: Check the logs by clicking the session ID from Access > Overview. From the log below, we can see that none of the configured JWK keys match the received JWT token. One main reason can be the need to rediscover the JWT keys.

Local Time: 2024-06-11 06:51:06
Log Message: /Common/oidc_google_t1.app/oidc_google_t1:Common:848f0568: Session variable 'session.oauth.client.last.errMsg' set to 'None of the configured JWK keys match the received JWT token, JWT Header: eyJhbGciOiJSUzI1NiIsImtpZCI6ImMzYWJlNDEzYjIyNjhhZTk3NjQ1OGM4MmMxNTE3OTU0N2U5NzUyN2UiLCJ0eXAiOiJKV1QifQ'
Partition: Common

The action to be taken is to rediscover the JWT keys if they are automatic, or add the new one manually:

Head to Access ›› Federation : OAuth Client / Resource Server : Provider.
Select the created provider.
Click Discover to fetch new keys from the provider.
Save and apply the new policy settings.

Case 3: OAuth Client DNS resolver failure

In this case, the connection sequence is as follows:

User accesses through F5, acting as Client + RS.
User is redirected to the OAuth provider for authentication.
User is redirected back to F5 with Access denied.

Troubleshooting steps: Check the logs by clicking the session ID from Access > Overview. Another reason for this behavior can be a DNS failure when reaching out to the OAuth provider to validate the JWT keys.

Local Time: 2024-06-12 19:36:12
Log Message: /Common/oidc_google_t1.app/oidc_google_t1:Common:fb5d96bc: Session variable 'session.oauth.client.last.errMsg' set to 'HTTP error 503, DNS lookup failed'
Partition: Common

Check the DNS resolver under Network ›› DNS Resolvers : DNS Resolver List and validate that the resolver configuration is correct. Also check the route to the DNS server under Network ›› Routes. Note that the DNS resolver uses TMM traffic routes, not the management plane system routing.

Case 4: Token Mismatch

In this case, the connection sequence is as follows:

User accesses through F5, acting as Client + RS.
User is redirected to the OAuth provider for authentication.
User is redirected back to F5 with Access denied.

Troubleshooting steps: Check the logs by clicking the session ID from Access > Overview. The logs show a Bearer token is received, yet no token is enabled at the client / resource server connections.

Local Time: 2024-06-21 07:25:12
Log Message: /Common/f5_local_client_rs.app/f5_local_client_rs:Common:c224c941: Session variable 'session.oauth.client./Common/f5_local_client_rs.app/f5_local_client_rs_oauthServer_f5_local_provider.token_type' set to 'Bearer'
Partition: Common

Local Time: 2024-06-21 07:25:12
Log Message: /Common/f5_local_client_rs.app/f5_local_client_rs:Common:c224c941: Session variable 'session.oauth.scope./Common/f5_local_client_rs.app/f5_local_client_rs_oauthServer_f5_local_provider.errMsg' set to 'Token is not active'
Partition: Common

We need to make sure the client and resource server have JWT tokens enabled instead of opaque tokens, and that the proper JWT token is selected.

Case 5: Audience mismatch

In this case, the connection sequence is as follows:

User accesses through F5, acting as Client + RS.
User is redirected to the OAuth provider for authentication.
User is redirected back to F5 with Access denied.

Troubleshooting steps: Check the logs by clicking the session ID from Access > Overview. The logs state an incorrect or unmatched audience.

Local Time: 2024-06-23 21:32:42
Log Message: /Common/f5_local_client_rs.app/f5_local_client_rs:Common:42ef6c51: Session variable 'session.oauth.scope.last.errMsg' set to 'Audience not found : Claim audience= f5local JWT_Config Audience='
Partition: Common

For JWT-related cases like these, manually comparing the captured token against the provider's published metadata can quickly pinpoint the mismatch, as sketched below.
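For illustration, a small script like the following decodes a captured token and pulls the provider's currently published signing key IDs for comparison. accounts.google.com is used because it is the provider in the examples above; the token string is a placeholder:

import base64
import json
import urllib.request

def b64url_json(segment: str) -> dict:
    # base64url decode with padding restored, then parse as JSON
    padded = segment + "=" * (-len(segment) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

token = "eyJhbGciOiJSUzI1NiIsImtpZCI6..."  # captured JWT, placeholder

header = b64url_json(token.split(".")[0])
claims = b64url_json(token.split(".")[1])
print("Token kid:", header.get("kid"))
print("Audience: ", claims.get("aud"), " Issuer:", claims.get("iss"))

# Fetch the provider metadata and its JWKS to see the currently valid kids
meta_url = "https://accounts.google.com/.well-known/openid-configuration"
meta = json.load(urllib.request.urlopen(meta_url))
jwks = json.load(urllib.request.urlopen(meta["jwks_uri"]))
print("Provider kids:", [key["kid"] for key in jwks["keys"]])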
Case 6: Scope mismatch

In this case, the connection sequence is as follows:

User accesses through F5, acting as Client + RS.
User receives an authorization error with the wrong scope.

Troubleshooting steps: Check the logs by clicking the session ID from Access > Overview. The scope name is mentioned in the logs; in this case I named it "wrongscope". You will see the scope includes the openid string, because we have OpenID enabled. Change the scope to the one configured at the provider side.

Local Time: 2024-06-24 06:20:28
Log Message: /Common/oidc_google_t1.app/oidc_google_t1:Common:edacbe31:/Common/oidc_google_t1.app/oidc_google_t1_act_oauth_client_0_ag: OAuth: Request parameter 'scope=openid wrongscope'
Partition: Common

Case 7: Incorrect JWT Signature

In this case, the connection sequence is as follows:

User accesses through F5, acting as Client + RS.
User is redirected to the OAuth provider for authentication.
User is redirected back to F5 with Access denied.

Troubleshooting steps: Check the logs by clicking the session ID from Access > Overview. As in Case 4, the logs show 'Token is not active':

Local Time: 2024-06-21 07:25:12
Log Message: /Common/f5_local_client_rs.app/f5_local_client_rs:Common:c224c941: Session variable 'session.oauth.scope./Common/f5_local_client_rs.app/f5_local_client_rs_oauthServer_f5_local_provider.errMsg' set to 'Token is not active'
Partition: Common

When trying to renew the JWT key, we see this error in the GUI:

An error occurred: Error in processing URL https://accounts.google.com/.well-known/openid-configuration. The message is - javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

At this step, we need to validate the CA bundle in use and decide whether we need to allow trust of expired or self-signed certificates.

General issues

In addition to the cases listed above, there are some general issues:

DNS failure at the client side, where the client cannot reach either the F5 virtual server or the OAuth provider to provide authentication information. In this case, please verify the DNS configuration and network setup on the client machine.
Validate that the HTTP / SSL / TCP profiles at the virtual server are correctly configured.

Related Content

DNS Resolver Overview
BIG-IP APM deployments using OAuth/OIDC with Microsoft Azure AD may fail to authenticate
OAuth and OpenID Connect - Made easy with Access Guided Configurations templates
Request and validate OAuth / OIDC tokens with APM
F5 APM OIDC with Azure Entra AD
Configuring an OAuth setup using one BIG-IP APM system as an OAuth authorization server and another as the OAuth client

F5 BIG-IP deployment with OpenShift - platform and networking options
Introduction

This article is an architectural overview of how F5 BIG-IP can be used with Red Hat OpenShift. Several topics are covered, including:

1-tier or 2-tier arrangements, where the BIG-IP load balances workload PODs directly or load balances ingress controllers (such as NGINX+ or OpenShift's built-in router), respectively.
Multi-cluster arrangements, where the BIG-IP can load balance, or do route sharding, across two or more clusters.
Multi-tenancy and IP address management options.

While this article has a NetOps/infrastructure focus, the follow-up article BIG-IP deployment with OpenShift - application publishing focuses on DevOps/applications.

Overall architecture

When using BIG-IP with Red Hat OpenShift, the Container Ingress Services (CIS from now on) container is used to connect the BIG-IP APIs with the Kubernetes APIs. The source of truth is OpenShift. When a user configuration is applied, or when a change occurs in the OpenShift cluster, CIS automatically updates the configuration in the BIG-IP. Under the hood, CIS updates the BIG-IP configuration using the AS3 declarative API. It is not necessary to know AS3 for this, as all the configuration can be applied using Kubernetes resource types.

IP Address Management (IPAM from now on) is important when it is desired that the DevOps teams operate independently from the infrastructure administrators. CIS supports IPAM by making use of the F5 IPAM Controller (FIC from now on), which is deployed as a container as well. The next picture shows how these components fit together. CIS and FIC are PODs deployed in the OpenShift cluster, and AS3 is deployed in the BIG-IP. In the next sections, we cover the different deployment options and the considerations to be taken into account. The full documentation can be found in F5 clouddocs. F5 BIG-IP container integrations are Open Source Software (OSS) and can be found in this github repository, where you will find additional technical details and examples.

Networking - CNI options

Kubernetes networking is provided by Container Networking Interface (CNI from now on) plugins, and F5 BIG-IP supports all OpenShift native CNIs:

OVNKubernetes - This is the preferred option. GA since OpenShift 4.6, it makes use of Geneve encapsulation, but BIG-IP interacts with this CNI in a routed mode in which the packets from/to the BIG-IP don't use encapsulation. Additionally, PODs' cluster IPs are discovered dynamically by CIS when OpenShift nodes are added or removed. This also makes this method the easiest from a BIG-IP management point of view. Check CIS configuration for OVNKubernetes for details.
OpenShiftSDN - Supported since OpenShift 3.x, it is being phased out in favour of OVNKubernetes. It makes use of VXLAN encapsulation between the nodes, and between the nodes and the BIG-IPs. This requires manual configuration of VXLAN tunnels in the BIG-IPs when OpenShift nodes are added or removed. Check CIS configuration for OpenShiftSDN for details.

Feature-wise, these CNIs can be compared using the next table from the OpenShift documentation. Besides the above features, performance should also be taken into consideration. The NICs used in the OpenShift cluster should do encapsulation off-loading to reduce the CPU load on the nodes. Increasing the MTU is recommended, especially for encapsulating CNIs; this is suggested in OpenShift's documentation as well, and needs to be set at installation time in the install-config.yaml file. See this OpenShift.com link for details.
Networking - the importance of supporting the cluster's CNI

There are basically two modes for interacting with a Kubernetes workload from outside the cluster:

Using the NodePort Service type. In this case, external hosts access the PODs using any of the cluster's node IPs. When a request reaches a node, Kubernetes' kube-proxy is responsible for forwarding the request to a POD on the local or a remote node. Sending to a remote node adds noticeable overhead. In two-tier deployments, externalTrafficPolicy: Local could be used with appropriate monitoring to avoid this additional hop. NodePort is popular with other external load balancers because it is an easy method to access the PODs without having to support the CNI, as the name indicates, by using the Kubernetes nodes' IP addresses. This has the drawback of an additional indirection, which is especially relevant for 1-tier deployments, because application PODs cannot be accessed directly, eliminating the advantages of this deployment type. On the other hand, BIG-IP supports OpenShift's CNIs, both OpenShiftSDN and OVNKubernetes.
Using the LoadBalancer Service type. The packet path in this mode is equivalent to NodePort, in which the external load balancers need an intermediate kube-proxy hop before reaching the POD. An alternative that bypasses kube-proxy is the use of hostNetwork access, but this is discouraged in general because of its security implications.
Using the ClusterIP Service type. This is the preferred mode, because a request is sent directly to the destination POD. This requires supporting OpenShift's CNIs, which is the case for BIG-IP. It is worth noting that BIG-IP also supports other CNIs, such as Calico or Cilium. This arrangement can be seen next.

Please note in the above figure the traffic path from the BIG-IP, where the arrow reaches the inside of the CNI area. This indicates that it can address the ingress controller or the workload PODs' IPs within the cluster network. Using the ClusterIP Service type is also more flexible, because it allows CIS to use 1-tier and 2-tier arrangements simultaneously. A minimal ClusterIP Service definition is sketched below.
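For reference, here is a minimal ClusterIP Service created with the official Kubernetes Python client. The names and namespace are illustrative; this is just a sketch of the Service type the BIG-IP would address directly through the CNI:

from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a POD

service = client.V1Service(
    metadata=client.V1ObjectMeta(name="demo-app", namespace="demo"),
    spec=client.V1ServiceSpec(
        type="ClusterIP",  # preferred mode: BIG-IP reaches POD IPs directly
        selector={"app": "demo-app"},
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)

client.CoreV1Api().create_namespaced_service(namespace="demo", body=service)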
Networking - Load Balancer arrangement options

There are basically two arrangement options, 1-tier and 2-tier. In a nutshell:

A 2-tier arrangement is the typical way in which Kubernetes clusters are deployed. In this arrangement, the BIG-IP has only the role of External Load Balancer (first tier only) and sends the client requests to the Ingress Controller instances (second tier). The Ingress Controllers ultimately forward the requests to the workload PODs.
In a 1-tier arrangement, the BIG-IP sends the requests to the workload PODs directly. This is a much simplified arrangement, in which the BIG-IP performs the roles of both External Load Balancer and Ingress Controller.

Next, we will see the advantages of each arrangement. Please note that when using ClusterIP, this selection can be made on a per-Service basis. From the BIG-IP point of view, it is irrelevant what the endpoints are.

Load Balancer arrangement option - 2-tier arrangement

Unlike most External Load Balancers, the BIG-IP can expose services with either Layer 4 or Layer 7 functionalities. In Layer 7 mode, SSL/TLS off-loading, HSM, Advanced WAF, and other advanced services can be used. A 2-tier arrangement provides greater scalability compared to 1-tier arrangements in terms of the number of L7 routes exposed or the number of Kubernetes PODs, because the control plane workload (the related Kubernetes events that are generated for these PODs and Routes) is split between BIG-IP/CIS and the in-cluster Ingress Controller. This arrangement also has strong isolation between the two tiers, which is ideal when each tier is managed by a different team (i.e., platform and developer teams). A BIG-IP 2-tier arrangement is shown next:

Load Balancer arrangement option - 1-tier arrangement

In this arrangement, the BIG-IP typically operates in L7 mode and sends the traffic directly to the final workload POD. This is done by sending traffic to Services in ClusterIP mode. In this arrangement, persistence is handled easily, and the workload PODs can be directly monitored by the BIG-IP, providing an accurate view of the application's health. A BIG-IP 1-tier arrangement is shown next:

This arrangement is simpler to troubleshoot, has less latency, and potentially offers higher per-session performance. Isolation between platform and developer teams can be achieved with CIS and FIC, yet this is not as strong as the isolation of 2-tier arrangements. This is described in BIG-IP deployment with OpenShift - application publishing options.

BIG-IP platform flexibility: deployment, scalability, and multi-tenancy options

Using BIG-IP, the deployment options are independent of the BIG-IP being an appliance, a scale-out chassis, or a Virtual Edition. The configuration is always the same down to the L2 (VLAN/tunnel) config level; only the L1 (physical interface) configuration changes. This platform flexibility also opens up the possibility of using different options for scalability, multi-tenancy, hardware accelerators, or Hardware Security Modules (HSMs). The latter are especially important for keeping the SSL/TLS private keys in a FIPS-compliant manner. The HSMs can be onboard, on-prem Network HSMs, or cloud SaaS HSMs.

Multi-tenancy Options

In this section, multi-tenancy refers to the case in which different projects from one or more OpenShift clusters are serviced by a single BIG-IP. The different CIS deployment options are outlined next:

A CIS instance can manage all namespaces of a given OpenShift cluster or a subset of them. Namespaces can be specified with a list or a label selector (e.g., environment=test or environment=production).
Multiple CIS instances, handling different namespaces, can share a single BIG-IP or use different BIG-IPs. Each CIS instance will own a dedicated partition in a BIG-IP. For example, it is feasible to set up an OpenShift cluster with development, pre-production, and production labeled namespaces, serviced by different CIS instances in the same or different BIG-IPs for each environment.
Multiple CIS instances in a single BIG-IP can also handle different OpenShift clusters, thanks to the soft isolation provided by BIG-IP partitions. Network isolation between these partitions can be achieved with route domains.

Some of these deployment options are shown next:

IP address management (IPAM)

CIS has the capability of dynamically allocating IP addresses using the F5 IPAM Controller (FIC) companion. At the time of writing, it is possible to retrieve IP addresses from the following providers:

Infoblox
F5 local DB provider, which makes use of a PVC for persistence.
For the DevOps team, it is transparent which provider is used; it is only required to specify an ipamLabel attribute in the exposed L7 or L4 service. The DevOps team also has the ability to indicate when it wants to share IP addresses between different L7 or L4 services by means of the HostGroup attribute. This is described in the follow-up article.

BIG-IP data plane scalability options

A single BIG-IP cluster can scale up horizontally with up to 8 BIG-IP instances and have the different projects distributed among them. This is referred to as Scale-N in the BIG-IP documentation. This mode is often not used, because it requires additional orchestration or manual operation for optimal load distribution. In this mode, projects would have soft isolation between them by means of BIG-IP partitions. When ultimate scalability or hard isolation is required, then TMOS vCMP technology or, in newer versions, F5OS tenant facilities can be used on larger appliances and scale-out chassis. These multi-tenant facilities allow running independent BIG-IP instances, isolated at the hardware level, even allowing different versions of BIG-IP. The tenant BIG-IP instances can be allocated different amounts of hardware resources. In the next picture, the different tenants are shown as different colored bars using several blades (grey bars). Using chassis-based platforms allows scaling data plane performance and increasing redundancy by adding blades to the system, without the need for a reconfiguration on the CIS/OpenShift side of things.

BIG-IP control plane scalability options

When using very large OpenShift clusters with either a large number of services exposed or a large number of PODs, a high rate of change will trigger many events in the Kubernetes API. These events are processed by CIS and ultimately by the BIG-IP's control plane. In these cases, the following strategies can be used to improve BIG-IP control plane scalability:

Disaggregate the different projects into different BIG-IPs. These might be multiple BIG-IP VEs, or instances in F5 vCMP or F5OS tenants when using hardware platforms.
Use a 2-tier architecture, which reduces the number of Kubernetes objects and events that the BIG-IP is exposed to.

In the upcoming months, CIS will be available in BIG-IP Next. This is a re-architecture of BIG-IP and incorporates major scalability improvements in the control plane.

Multi-cluster OpenShift

Since CIS version 2.14, it is also possible for a BIG-IP to load balance between 2 or more clusters in Active-Active, Active-Standby, or Ratio modes. 1-tier or 2-tier arrangements are possible. The next figure shows a single BIG-IP exposing workloads from 2 OpenShift clusters. Please note that the OpenShift clusters don't need to run the same version, so this arrangement is also interesting for performing OpenShift upgrades. When using CIS in multi-cluster mode, an additional CIS instance in a secondary cluster is needed for redundancy. If there are more than 2 OpenShift clusters, no additional CIS instances are needed. Therefore, a typical BIG-IP cluster of 2 units load balancing 2 or more OpenShift clusters will always require 4 CIS instances. For each BIG-IP, one of the CIS instances has the (P)rimary role and is in charge of making changes in the BIG-IP by default; the (S)econdary CIS will be on standby. Both CIS instances access all OpenShift clusters. A more comprehensive view of this can be seen in the next diagram, which considers having more than 2 OpenShift clusters.
OpenShift clusters that don't host a CIS instance are referred to as remotely managed.

Conclusion

F5 BIG-IP provides unmatched deployment options and features with OpenShift; these include:

Support for OpenShift's CNIs, which allows sending the traffic directly, instead of using hostNetwork (which implies a security risk) or the common NodePort, which incurs the additional kube-proxy indirection.
Both 1-tier and 2-tier arrangements (or both types simultaneously) are possible.
F5's Container Ingress Services provides the ability to handle multiple OpenShift clusters, exposing their services in a single VIP. This is a unique feature in the industry.
To complete the circle, this integration also provides IP address management (IPAM), which gives great flexibility to DevOps teams.

All of these are available regardless of whether the BIG-IP is a Virtual Edition, an appliance, or a chassis platform, allowing great scalability and multi-tenancy options. The follow-up article BIG-IP deployment with OpenShift - application publishing focuses on DevOps and applications. In it, it is described how CIS can also unleash all traffic management and security features in a Kubernetes-native way. We are driven by your requirements. If you have any, please provide feedback through this post's comments section, your sales engineer, or via our github repository.

Automate NetApp ONTAP Storage Management with Private and Secure API Governance
Using automation, frequently leveraging REST APIs, is a common approach for configuring and maintaining solutions like F5 BIG-IP appliances or NetApp ONTAP storage clusters. This article proposes using F5 Distributed Cloud HTTPS load balancers, coupled with the API Security module, to make remote ONTAP API access exclusively available to enterprise operations centers, not visible or reachable through the general Internet. Beyond access control, the API features of interest include automatic discovery of active API endpoints, WAF security layered upon the traffic, and the ability to impose a positive security model whereby only conforming API activity is allowed to reach ONTAP solutions. The NetApp deployments governed by storage administrators can be fully hybrid in nature, including remote on-premises ONTAP clusters with physical and virtual appliances, and public cloud-based offerings from hyperscalers like AWS, Azure, and Google.

The Distributed Cloud and ONTAP Testbed

The following diagram demonstrates the overall use case. F5 Distributed Cloud (XC) offers points of presence in approximately 30 worldwide metropolitan networks. The aggregate bandwidth of the interconnections totals more than 14 Tbps. By deploying Customer Edge (CE) nodes within enterprise locations which are also equipped with NetApp ONTAP storage, whether on-premises or in any of the major hyperscalers, XC allows for secure, private communications.

A representative testbed was created that utilized two ONTAP deployment types: an on-premises deployment based in a facility in Redmond, Washington, and a cloud-based Cloud Volumes ONTAP (CVO) instance in AWS East-2, located in Columbus, Ohio. The on-premises site made use of virtualized ONTAP on an ESXi 7.x hypervisor and the companion NetApp Deploy virtual machine, which, as the name implies, is used to instantiate an ONTAP cluster. Primary and secondary operations centers, from which ONTAP automation can control deployments with secure REST API calls, were set up in San Jose and Ottawa, Canada.

The ability to project access exclusively to the operations centers harnesses Distributed Cloud's HTTPS load balancer capabilities. Unlike traditional load balancers, which frequently project a "virtual server" to one side of a network appliance with private origin pool members on the other side, XC takes a distributed load balancer approach. The public side might be projected into the global DNS, and incoming transactions are attracted to the nearest global point of presence by full support of IP anycast. Thus, the public face of the load balancer is distributed to many physical locations; the choice of where is up to the enterprise. The origin pool may be one or many servers, in this case ONTAP clusters, at one-to-many locations. In this particular use case, the HTTP load balancer did not leverage the international points of presence, called Regional Edge (RE) nodes, but rather was implemented exclusively on the CE located at the operations center. The following diagram reflects the focused setup of secured API calls between an operations center and an ONTAP cluster.

Some key points to consider:

The names used to reach remote ONTAP services are completely private DNS domain names. They are exclusively projected out of the "inside" interface of the San Jose CE site for use by operations staff and hosts; the service names map to the local inside interface IP address of the CE node.
As such, the services are reachable from no other place, ever.
The services each map to different, color-coded origin pools: one for the Deploy service (green) and one for the ONTAP appliance itself (orange).
The San Jose-based load balancers will deliver operations traffic to the configured origin pool members in Redmond. The traffic will flow across the high-speed F5 fabric.

API Discovery with F5 Distributed Cloud

NetApp provides extensive documentation around supported API calls for ONTAP workflows, an example of which can be found here. A brief, high-level example is contacting the Deploy instance from the San Jose operations center to inquire about ONTAP clusters configured at the remote Redmond site:

C:\Users\steve>nslookup netapp04.busdevf5.io
Name: netapp04.busdevf5.io
Address: 10.150.98.3 <----- Inside interface of local San Jose CE node

C:\Users\steve>curl -k -X GET "https://netapp04.busdevf5.io/api/v3/clusters" -H "accept:application/hal+json" --user admin:De**********
{
"num_records": 1,
"records": [
{
"id": "de67e558-7c8c-11ee-836d-000c29fa32ee",
"name": "f5netappclusterE"
}
]
}

We see the cluster is named "f5netappclusterE"; the cluster id value can be used as a key to drill further down with both monitoring and configuration commands for this deployment. REST API commands directed against the ONTAP appliance itself, via the load balancer domain name netapp05.busdevf5.io, might be used, as simple examples, to inquire about the NFS protocol services configuration, including export policies:

curl -k -X GET "https://netapp05.busdevf5.io/api/protocols/nfs/services" --user admin:De****
curl -k -X GET "https://netapp05.busdevf5.io/api/protocols/nfs/export-policies" --user admin:De****

Volumes may be configured and monitored with commands such as the following, which call for a list of all volumes and then drill into one particular volume, "RAG_Secure_Documents", based upon the UUID value returned in the first command. Output is trimmed for brevity; potentially interesting fields are highlighted in yellow:

curl -k -X GET "https://netapp05.busdevf5.io/api/storage/volumes" -H "accept:application/hal+json" --user admin:De****
{
"records": [
{
"uuid": "0d9190b3-187d-11ef-ba6d-00a0b8d77b39",
"name": "RAG_Source_Documents_2024",
"href": "/api/storage/volumes/0d9190b3-187d-11ef-ba6d-00a0b8d77b39"
"uuid": "35002d11-187d-11ef-ba6d-00a0b8d77b39",
"name": "Vectors",
"href": "/api/storage/volumes/35002d11-187d-11ef-ba6d-00a0b8d77b39"
"uuid": "af65215a-e717-11ee-86e2-00a0b8d77b39",
"name": "RAG_Secure_Documents",
"href": "/api/storage/volumes/af65215a-e717-11ee-86e2-00a0b8d77b39"

curl -k -X GET "https://netapp05.busdevf5.io/api/storage/volumes/af65215a-e717-11ee-86e2-00a0b8d77b39" -H "accept:application/hal+json" --user admin:De*****
"create_time": "2024-03-21T00:12:20+00:00",
"language": "c.utf_8",
"name": "RAG_Secure_Documents",
"size": 1130258432,
"state": "online",
"style": "flexvol",
"aggregates":
"name": "f5netappclusterE_01_VM_DISK_1",
"uuid": "642cdad1-5405-4b9b-a889-08050bfa96d7"
"svm": {
"name": "svm0",
"uuid": "71e04cf8-7c90-11ee-bb71-00a0b8d77b39",
"_links": {
"self": {
"href": "/api/svm/svms/71e04cf8-7c90-11ee-bb71-00a0b8d77b39"
"space": {
"size": 1130258432,
"available": 1072046080,
"used": 1699840
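These inventory calls can also be scripted. Below is a rough Python equivalent of the volume queries above, using the same documented ONTAP endpoints; the hostname and redacted credentials are the demo's placeholders, and verify=False mirrors curl -k for lab use only:

import requests

BASE = "https://netapp05.busdevf5.io"   # ONTAP appliance behind the XC LB
AUTH = ("admin", "De**********")        # demo credentials, redacted

# List volumes, then drill into each one by UUID for capacity details
volumes = requests.get(f"{BASE}/api/storage/volumes", auth=AUTH, verify=False).json()

for record in volumes["records"]:
    detail = requests.get(
        f"{BASE}/api/storage/volumes/{record['uuid']}", auth=AUTH, verify=False
    ).json()
    space = detail.get("space", {})
    print(f"{detail['name']}: state={detail.get('state')}, "
          f"size={space.get('size')}, used={space.get('used')}")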
The extensive set of NetApp APIs is enriched by Distributed Cloud through a secure, true multi-site and multi-cloud approach to private connectivity. There is no need to engage in multiple per-cloud VPN or remote access solutions. The skill set needed to troubleshoot multiple cloud access approaches is replaced by a single, consistent connectivity approach: a turnkey platform for private reachability. The following section drills deeper into API-specific security features.

API Security for Remote NetApp Control Plane Tasks

Since F5 Distributed Cloud is an in-line solution, performing a proxy operation through the configured load balancers, it has the advantage of seeing every transaction, in both directions. As seen from the discovered traffic pane below, over the last six hours, 1,400 transactions have been proxied, covering five different API endpoints terminating on the Redmond ONTAP appliance. Interestingly, since the out-of-the-box WAF ruleset is set to maximum risk aversion, the fact that the user agent is "curl" prompts a high threat level. This can be ameliorated with a single click in the Security Analytics pane, where the matching WAF events can be added to a WAF exclusion rule, permanently or for a temporary duration of up to seven days.

Also, take note above of the yellow-highlighted ability to download the API specification. This will be extremely useful and will be covered shortly. By clicking on any one of the API endpoints, the operator can see a set of probability distribution function (PDF) curves showing long-term QoS performance, such as latency boundaries or the prevalence of errors. The following image provides an example of the types of metrics tracked automatically for a sample API endpoint.

As mentioned, due to the use of curl, a "report" event, as opposed to a "block" event, occurs with the stringent WAF ruleset chosen for the HTTPS load balancers. The following demonstrates the Security Analytics pane, where a security event is raised, including the rationale behind why the report is being generated. In this case, it is simply the presence of the curl user agent. With one click, an exception to the WAF rules is quickly created to silence the events.

Implement a Positive Security Model for ONTAP APIs

One of the more interesting use cases of Distributed Cloud is preserving uptime for ONTAP solutions by precluding accidental API commands which might impair service. For instance, API calls are often bundled together in scripts to allow automation to quickly set up, or perhaps take a full detailed configuration inventory of, appliances. Distributed Cloud may be allowed to run for a certain period of time, perhaps 48 hours, and the resultant discovered API traffic harnessed to create a known-good schema of the expected traffic. The saved file format is specified by the OpenAPI Specification (OAS) and is historically often called a Swagger file.

This auto-Swagger generation feature of XC allows an enterprise to immediately re-load the saved Swagger file as a definition of acceptable traffic. Future traffic violating the Swagger parameters, not just the endpoint but actual request and response parameter values too, can be flagged or even stopped in its tracks. Since this is a real-time inline solution, a monitored item might be a variable expected to be floating point but in actuality carrying a string or a JSON array. When violations are found, the operator may choose to allow a "fall through" approach whereby the traffic is flagged as "Shadow API" traffic or, for ironclad deployments, rather than fall through, all violations can simply be blocked, in keeping with the strictest interpretation of a positive security model.
After downloading the API specification file, as highlighted in an earlier screenshot from the API discovery pane, we can analyze the Swagger file using a JSON-capable viewer, such as the one here; a small script for listing what the spec contains is also sketched at the end of this section. The full specification discovered automatically by the Distributed Cloud solution can be reviewed, including expected fields and their corresponding data types, again in both the request and response directions. Interestingly, some API calls to /api/storage/volumes have inadvertently left a trailing slash. As a result, this is correctly recorded as a separate endpoint request.

At this point, the enterprise has a choice. If full blocking of non-matching API activity is warranted, the "API Validation" menu allows this option. However, in many cases, a more nuanced fall-through response is required. Take, for instance, a CICD pipeline where application updates are rolled out frequently, perhaps weekly, but the corresponding API documentation lags for a few days. If the Swagger file is updated only after that gap of days, the risk of applications simply breaking is quite real. To accommodate application changes, non-documented API endpoints are allowed but flagged; this is the "Shadow" API traffic. This will be brought to the attention of the operator, as seen in the following image, where traffic involving a NetApp Storage Virtual Machine (SVM), the fundamental unit of multi-tenancy, is proxied but the related API endpoints are outside the API definition being used by Distributed Cloud.

The following demonstrates the graph depiction of APIs, as opposed to the tabular format, and highlights the fact that this is shadow traffic. By clicking on the shadow API entry, an operator, while perhaps opening a ticket to investigate this activity, might choose to follow either of these paths after some consideration:

Rate limit users sending to this API endpoint, perhaps allowing 1 request in any given 10-second interval, so as not to break an application while otherwise limiting connectivity by sending HTTP 429 Too Many Requests responses for excessive traffic.
Immediately and permanently close access to the API endpoint by sending HTTP 403 Forbidden responses to any future clients.

This approach may allow a simple manner of controlling what ONTAP modifications can be made, or what configuration details can be retrieved, beyond RBAC on the appliance itself. Simply "learn" an API definition over time, and then implement blocking or throttling of traffic outside these boundaries going forward.
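As promised above, a few lines of Python give a quick inventory of what Distributed Cloud learned once the specification has been downloaded. A minimal sketch, assuming the export is standard OpenAPI JSON saved locally under a hypothetical filename:

import json

HTTP_METHODS = {"get", "put", "post", "delete", "patch", "head", "options"}

with open("ontap_discovered_oas.json") as spec_file:
    spec = json.load(spec_file)

# Walk the paths object and print every endpoint/method pair in the spec
for path, operations in spec.get("paths", {}).items():
    for method in operations:
        if method.lower() in HTTP_METHODS:
            print(f"{method.upper():7s} {path}")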
Summary

This article demonstrated tactical use cases for surrounding ONTAP API transactions, regardless of on-premises or public cloud-based form factors, with security by means of private communications and deep API-layer visibility and controls. Possibilities exist beyond this starting point. Consider layering in Distributed Cloud service policies, such as GEO-IP rulesets. If for regulatory reasons an enterprise chooses to limit the breadth of non-EU operations centers with respect to certain controls over EU-housed ONTAP clusters, while still allowing European operations unfettered control, GEO-IP may help. HTTPS distributed load balancers, with the ability to project service availability only where it should exist, and their intertwined coupling to remote hybrid origin pools, both on-premises and in cloud, were also discussed. Rich API security and control plane features, like rate limiting or imposing a learned OpenAPI Specification upon critical storage control traffic, make for an interesting approach to governing ONTAP appliances.

BIG-IP Next Automation: AS3 Basics
I need a little Mr. Miyagi right now to grab my face and intently look me in the eye and give me a "Concentrate! Focus power!" For those of you youngins' who don't know who that is, he's the OG Karate Kid mentor. Anyway, I have a thousand things I want to say about AS3, but in this article, I'll attempt to cut this down to a narrow BIG-IP Next-specific context to get you started. It helps that last December I did a five-part streaming series on AS3 in the BIG-IP classic context. If you haven't seen that, you have my blessing to stop right now, take some time to digest AS3 conceptually, and practice against workloads and configurations in BIG-IP classic that you know and understand, before returning here to embrace all the newness of BIG-IP Next.

AS3 is FOUNDATIONAL in BIG-IP Next

In classic BIG-IP, you could edit the bigip.conf file directly, use tmsh commands, or use iControl REST commands to imperatively create/modify/delete BIG-IP objects. With the exception of system configuration and shared configuration objects, this is not the case with BIG-IP Next. All application configuration is AS3 at its lowest state level. This doesn't mean you have to work primarily in AS3 configuration. If you utilize the migration utility in Central Manager, it will generate the AS3 necessary to get your apps up and running. Another option is to use the built-in http FAST template (we'll cover FAST in later articles) to build out an application from scratch in the GUI. But if you use features outside the purview of that template, or you need to edit your migration output, you'll need to work in the AS3 configuration declaration, even if just a little bit.

Apples to Apples

It's a fun card game, no? My family takes it to snarky absurd levels of sarcasm, to the point that when we play with "outsiders" we get lots of blank looks and stares as we're all rolling on the floor laughing. Oh well, to each his own. But we're here to talk about AS3, right? Well, in BIG-IP Next, there is a compatibility API for AS3, such that you can take a declaration from BIG-IP classic and, as long as the features within that declaration are supported, it should "just work" via the Central Manager API. That's pretty cool, right? Let's start with a basic application declaration from the recent video posted by Mark_Dittmer exploring the API differences between classic and Next.

{
    "class": "ADC",
    "schemaVersion": "3.0.0",
    "id": "generated-for-testing",
    "Tenant_1": {
        "class": "Tenant",
        "App_1": {
            "class": "Application",
            "Service_1": {
                "class": "Service_HTTP",
                "virtualAddresses": [ "10.0.0.1" ],
                "virtualPort": 80,
                "pool": "Pool_1"
            },
            "Pool_1": {
                "class": "Pool",
                "members": [
                    {
                        "servicePort": 80,
                        "serverAddresses": [ "10.1.0.1", "10.1.0.2" ]
                    }
                ]
            }
        }
    }
}

A simple VIP with a pool with two pool members. A toy config to be sure, but it is useful here to show the format (JSON) of an AS3 declaration and some of the schema as well. With the compatibility API, this same declaration can be posted to a classic BIG-IP like this:

POST https://<BIG-IP IP Address>/mgmt/shared/appsvcs/declare

Or to a BIG-IP Next instance like this:

POST https://<Central Manager IP Address>/api/v1/spaces/default/appsvcs/declare?target_address=<BIG-IP Next instance IP Address>

For those already embracing AS3, this compatibility API in BIG-IP Next should make the transition easier.
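As a rough illustration, the same declaration can be pushed from a script using the Central Manager compatibility endpoint shown above. This is a sketch, not a definitive implementation: the addresses are placeholders, and it assumes you have already obtained a Central Manager access token by whatever authentication method your deployment uses:

import requests

CM = "https://cm.example.com"       # Central Manager, placeholder
NEXT_INSTANCE = "10.10.10.10"       # BIG-IP Next instance, placeholder
TOKEN = "<access token>"            # obtained out of band

declaration = {
    "class": "ADC",
    "schemaVersion": "3.0.0",
    "id": "generated-for-testing",
    "Tenant_1": {
        "class": "Tenant",
        "App_1": {
            "class": "Application",
            "Service_1": {
                "class": "Service_HTTP",
                "virtualAddresses": ["10.0.0.1"],
                "virtualPort": 80,
                "pool": "Pool_1",
            },
            "Pool_1": {
                "class": "Pool",
                "members": [
                    {"servicePort": 80, "serverAddresses": ["10.1.0.1", "10.1.0.2"]}
                ],
            },
        },
    },
}

# POST to the compatibility endpoint, targeting the instance via query parameter
response = requests.post(
    f"{CM}/api/v1/spaces/default/appsvcs/declare",
    params={"target_address": NEXT_INSTANCE},
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=declaration,
    verify=False,  # lab only
)
print(response.status_code, response.text)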
AS3 Workflow in BIG-IP Next

With classic BIG-IP, you had to install the AS3 package (technically an iControl LX, or sometimes referenced as an iApps v2 package) onto each BIG-IP system on which you wanted to use the AS3 declarative configuration model. Each BIG-IP was an island, and the configuration management of the overall system of BIG-IPs was reliant on an external system for the source of truth. With BIG-IP Next, the Central Manager API has native AS3 support, so there are no packages to install to prepare the environment. Also, Central Manager is the centralized AS3 interface for all Next instances. This has several benefits:

A singular and centralized source of truth for your configuration management
No external package management requirements
Tremendous improvement in API performance management, since most of the heavy lifting is offloaded from the instances onto Central Manager, and the control-plane functionality that remains on the instance is intentionally designed for API-first operations

The general application deployment workflow introduced exclusively for Next, which I'll reference as the documents API, is twofold (both steps are sketched in Python after this section):

Create an application service

First, you create the application service on Central Manager. You can use the same JSON declaration from the section above here; only the API endpoint is different:

POST https://<Central Manager IP Address>/api/v1/spaces/default/appsvcs/documents

A successful transaction will result in an application service document on Central Manager. A couple of notes on this at the time of writing:

Documents created through the API are not validated against the journeys migration tool that is available for use in the Central Manager GUI.
Documents are not schema-validated at the attribute level of classes, so whereas a class used in classic might be supported in Next, some of the attributes might not be. This means that whereas the document creation process can appear successful, the deployment will fail if classes and/or class attributes supported in classic BIG-IP are present in the AS3 declaration when an attempt to apply it to an instance occurs.

Deploy the application service

Assuming, however, that all your AS3 work is accurate to the Next-supported schema, you post the specified document by ID to the target BIG-IP Next instance, here as a JSON payload versus a query parameter on the compatibility API shown earlier.

POST https://<Central Manager IP Address>/api/v1/spaces/default/appsvcs/documents/<Document ID>/deployments

{
    "target": "<BIG-IP Next Instance IP Address>"
}

At this point, your service should be available to receive traffic on the instance it was deployed on.
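Here is a sketch of the same two-step workflow in Python. The endpoints are the ones shown above; the declaration payload, addresses, and token are placeholders, and the field carrying the new document ID in the creation response is an assumption, so check your actual response body:

import requests

CM = "https://cm.example.com"                  # Central Manager, placeholder
NEXT_INSTANCE = "10.10.10.10"                  # BIG-IP Next instance, placeholder
HEADERS = {"Authorization": "Bearer <token>"}  # auth obtained out of band

declaration = {
    "class": "ADC",
    "schemaVersion": "3.0.0",
    "id": "generated-for-testing",
    # ... Tenant/Application classes as in the earlier declaration ...
}

# Step 1: create the application service document on Central Manager
doc = requests.post(
    f"{CM}/api/v1/spaces/default/appsvcs/documents",
    headers=HEADERS, json=declaration, verify=False,
)
doc.raise_for_status()
document_id = doc.json()["id"]  # assumption: response carries the new document ID

# Step 2: deploy the document to a target BIG-IP Next instance
deploy = requests.post(
    f"{CM}/api/v1/spaces/default/appsvcs/documents/{document_id}/deployments",
    headers=HEADERS, json={"target": NEXT_INSTANCE}, verify=False,
)
print(deploy.status_code, deploy.text)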
Next Up...

Now that we have the theory in place, join me next time where we'll take a look at working with a couple of application services through both approaches.

Resources

CM App Services Management
AS3 Schema
AS3 User Guide (classic, but useful)
AS3 Reference Guide (classic, but useful)
AS3 Foundations (streaming series)

F5 Distributed Cloud Customer Edge Migration Centos to RHEL

In this article, I will introduce a process to migrate a Customer Edge site from the end-of-life CentOS operating system to the RHEL operating system.

Introduction:

Back in December 2023, the F5 Distributed Cloud Customer Edge image moved to the Red Hat Enterprise Linux (RHEL) operating system. Prior to that, the Customer Edge ran on the CentOS 7.x operating system, which has been announced End of Life. In this article, I will provide a migration strategy from CentOS to RHEL for Customer Edge sites that are in a SaaS-Hybrid Edge deployment pattern (#2 in the slide below), where the VIP is on the Regional Edge and the tunnel termination and SNAT are on the Customer Edge. While we are using this deployment pattern as an example, the concepts for other patterns are the same, with a few caveats which I will include at the end of this article.

High-Level Concepts:

Before we discuss the migration phases, I want to introduce a few concepts that we will be utilizing. The first concept is what we call a Virtual Site. A Virtual Site provides us the ability to perform a given configuration on a set (or group) of sites. The second term is Origin Pool. An origin pool is a mechanism to configure a set of endpoints grouped together into a resource pool used in the load balancer configuration. The typical CE site deployment consists of an HA cluster that discovers endpoints via an origin pool scoped to the CE site. This discovery is typically via private DNS or RFC 1918 IP ranges, although other methods are available. When we introduce the virtual site construct, we will perform this discovery via a "Virtual Site" and not the original "CE Site". As depicted below on the right-hand side of the drawing, you will see the origin pool is now discovered from all 6 nodes in the virtual site and will route traffic to the endpoint per the LB algorithm. Also, the Virtual Site construct can be utilized for more advanced HA design scenarios and even for additional bandwidth between RE and CE, but this will be discussed in other articles.

Virtual Site Setup:

Prerequisites: a current CentOS Customer Edge site and a new RHEL OS Customer Edge site.

We first start to set up the virtual site construct by logging into our Distributed Cloud tenant. Once logged in:

- Navigate to "Shared Configuration"
- Under "Manage" choose "Virtual Site"
- Provide a Name, Description, Site Type (in this case CE), and a Site Expression

Once the Virtual Site label is created, we navigate to the existing CentOS CE cluster and add the Site Expression that we created in the previous step to the site's Labels section:

- Go to the Multi-Cloud Network Connect tile
- Go to "Manage" > "Site Management" and choose the Site, Cloud Deployment site, or Secure Mesh Site; this will depend on how and where the site was deployed
- Once you have the correct site, click on the three dots at the right and go to Manage Configuration, then Edit
- Add the Virtual Site label: type in the key from the "Site Selector Expression" (my example is "netta-az-vsite") and click Assign a Custom Key 'netta-az-vsite'
- Type in the value from the "Site Selector Expression" (my example is "true") and click Assign a Custom Key 'true'
- Proceed with adding this same label to all sites that will be in the virtual site

Virtual Site Origin Pool Configuration:

Now that we have our virtual site configured, we need to configure the origin pool and discover origins from the virtual site (a JSON sketch of the resulting object follows these steps):

- Go to Multi-Cloud Application Connect
- In the origin pool configuration, choose the discovery method: IP or DNS of the origin on given sites
- Under Site or Virtual Site, choose Virtual Site and pick your virtual site from the drop-down: choose the "Virtual Site" configured in the previous step; the rest of the config should be the same
- Validate the origin is successfully discovered from the newly created Virtual Site: go to HTTP LB Performance, click on Origin Servers, and you should see 2 origins, one from each site (CentOS and RHEL), in the virtual site
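For reference, an origin pool built this way looks roughly like the abridged sketch below when viewed as JSON. This is an assumption-laden illustration: the names, namespace, and DNS name are invented, and the exact spec fields can vary by API version, but the key point is the site_locator referencing a virtual_site rather than a single site.

{
  "metadata": {
    "name": "migration-origin-pool",
    "namespace": "my-namespace"
  },
  "spec": {
    "origin_servers": [
      {
        "private_name": {
          "dns_name": "app.internal.example.com",
          "site_locator": {
            "virtual_site": {
              "namespace": "shared",
              "name": "netta-az-vsite"
            }
          },
          "inside_network": {}
        }
      }
    ],
    "port": 443
  }
}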
Migration:

Now that we have the virtual site and the virtual site origin pool discovery method built, we can start the migration:

- Go to the HTTP LB and add the additional virtual site origin pool under the Origins section
- Leverage weights and priorities with the 2 origin pools to start the migration from the CentOS site to the virtual site origin pool. A typical starting point is that both origin pools have a Priority of 1 and weights that total 100, so the CentOS origin pool has a weight of 95 and the virtual site origin pool 5; decrement and increment both as you migrate (see the sketch just after this list)
- Once 100% of traffic is on the virtual site origin pool, remove the Virtual Site label from the CentOS site
- Remove the original CentOS site origin pool from the HTTP LB
- Delete the CentOS cluster
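As a rough illustration of the weighting step, the relevant slice of an HTTP load balancer spec might look like the snippet below mid-migration. The pool names and namespace are invented, and the exact field layout should be verified against your tenant's API schema before use.

"default_route_pools": [
  {
    "pool": { "namespace": "my-namespace", "name": "centos-origin-pool" },
    "weight": 95,
    "priority": 1
  },
  {
    "pool": { "namespace": "my-namespace", "name": "migration-origin-pool" },
    "weight": 5,
    "priority": 1
  }
]

Shifting traffic is then just a matter of walking the first weight down and the second up until the virtual site origin pool carries 100% of the traffic.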
Additional Info:

In the above example for the Customer Edge (CE) deployment, we were leveraging the REs to publish VIPs to the internet, and the CEs were used as tunnel termination points as well as for SNAT to origin members. If you move the VIP to the CE, there are a few caveats with the way to advertise that VIP to the network. For example, to leverage all nodes within the cluster, you will need to provide a VIP advertisement policy that consists of an out-of-band DNS LB option or a nested LB option. Also, as mentioned earlier in this article, there can be HA and bandwidth advantages to leveraging virtual sites, as depicted below in the last slide. For more info on the migration process or CE design options, reach out to your F5 sales specialist.

BIG-IP Next Automation: Working with the AS3 API endpoints

In my last article I covered the basics of AS3 as it relates to getting started with automation on BIG-IP Next. I also walked through an application migration in a previous article that addresses some of the issues you'll need to work through moving to Next, but while that workflow touched on AS3 briefly, all the work was accomplished in the Central Manager web UI. In this article, I'll walk you through creating two applications: one a simple DNS load balancing application, and the other a TLS-protected HTTP application with an associated iRule. For each application, I'll use the compatibility API and the documents API for working through the CRUD operations.

Creating the declarations

You can go about this a few different ways. You can start from the AS3 schema reference and climb up from scratch, you can spin up Visual Studio Code and work with the F5 Extension to interrogate your own BIG-IP configurations and use the AS3 Config Converter to automagically do the work for you, or you can just ask ChatGPT to generate the AS3 for you to get started like I did. And after that didn't work without a lot of tweaking...I went back to VSCode.

Example 1 - DNS application service declaration

Here's what I ended up with for the DNS application service:

{
  "$schema": "https://raw.githubusercontent.com/F5Networks/f5-appsvcs-extension/master/schema/latest/as3-schema.json",
  "class": "AS3",
  "declaration": {
    "class": "ADC",
    "schemaVersion": "3.37.0",
    "id": "urn:uuid:3a71dceb-f56c-4dc1-901a-2feae0244c46",
    "label": "Converted Declaration",
    "remark": "Generated by Automation Config Converter",
    "Common": {
      "class": "Tenant",
      "Shared": {
        "class": "Application",
        "template": "shared",
        "vip.ns-cluster-1": {
          "layer4": "udp",
          "pool": "pool.ns-cluster-1",
          "translateServerAddress": true,
          "translateServerPort": true,
          "class": "Service_UDP",
          "profileUDP": { "bigip": "/Common/udp" },
          "virtualAddresses": [ "10.100.100.100" ],
          "virtualPort": 53,
          "snat": "auto"
        },
        "pool.ns-cluster-1": {
          "members": [
            {
              "addressDiscovery": "static",
              "servicePort": 53,
              "serverAddresses": [ "10.10.100.101", "10.10.100.102", "10.10.100.103", "10.10.100.104" ],
              "shareNodes": true
            }
          ],
          "monitors": [ { "bigip": "/Common/udp" } ],
          "class": "Pool"
        }
      }
    }
  }
}

Note that in BIG-IP Next there isn't an equivalent for the AS3 class, so that wrapper around the ADC class declaration is unnecessary and will result in an error if posted. So the only changes required at this time are to remove the wrapper and to change Common/Shared to tenant1/dnsapp1, as shown below.

{
  "class": "ADC",
  "schemaVersion": "3.37.0",
  "id": "urn:uuid:3a71dceb-f56c-4dc1-901a-2feae0244c46",
  "label": "Converted Declaration",
  "remark": "Generated by Automation Config Converter",
  "tenant1": {
    "class": "Tenant",
    "dnsapp1": {
      "class": "Application",
      "template": "shared",
      "vip.ns-cluster-1": {
        "layer4": "udp",
        "pool": "pool.ns-cluster-1",
        "translateServerAddress": true,
        "translateServerPort": true,
        "class": "Service_UDP",
        "profileUDP": { "bigip": "/Common/udp" },
        "virtualAddresses": [ "10.100.100.100" ],
        "virtualPort": 53,
        "snat": "auto"
      },
      "pool.ns-cluster-1": {
        "members": [
          {
            "addressDiscovery": "static",
            "servicePort": 53,
            "serverAddresses": [ "10.10.100.101", "10.10.100.102", "10.10.100.103", "10.10.100.104" ],
            "shareNodes": true
          }
        ],
        "monitors": [ { "bigip": "/Common/udp" } ],
        "class": "Pool"
      }
    }
  }
}

But wait! There's more! Now that I'm channeling my inner Billy Mays, the declaration is not quite ready for Next.
After a quick test or five or six, there are some problems with my schema in the move to Next. Here are the necessary changes, followed by the final declaration I'll use with the API endpoints:

- Swapped out the UDP monitor for ICMP since there is not currently a UDP monitor available
- Removed the profileUDP, layer4, and translateServerPort attributes from the Service_UDP class

{
  "class": "ADC",
  "schemaVersion": "3.37.0",
  "id": "urn:uuid:3a71dceb-f56c-4dc1-901a-2feae0244c46",
  "label": "Converted Declaration",
  "remark": "Generated by Automation Config Converter",
  "tenant1": {
    "class": "Tenant",
    "dnsapp1": {
      "class": "Application",
      "template": "shared",
      "vip.ns-cluster-1": {
        "pool": "pool.ns-cluster-1",
        "translateServerAddress": true,
        "class": "Service_UDP",
        "virtualAddresses": [ "10.100.100.100" ],
        "virtualPort": 53,
        "snat": "auto"
      },
      "pool.ns-cluster-1": {
        "members": [
          {
            "addressDiscovery": "static",
            "servicePort": 53,
            "serverAddresses": [ "10.10.100.101", "10.10.100.102", "10.10.100.103", "10.10.100.104" ],
            "shareNodes": true
          }
        ],
        "monitors": [ "icmp" ],
        "class": "Pool"
      }
    }
  }
}

Example 2 - TLS-protected HTTP application service with iRule declaration

And here's the HTTP application service as converted in VSCode, but without the AS3 class wrapper:

{
  "class": "ADC",
  "schemaVersion": "3.37.0",
  "id": "urn:uuid:bd9c9728-8c20-4c4d-a625-68450e35e133",
  "label": "Converted Declaration",
  "remark": "Generated by Automation Config Converter",
  "Common": {
    "class": "Tenant",
    "Shared": {
      "class": "Application",
      "template": "shared",
      "vip.acme_labs": {
        "layer4": "tcp",
        "pool": "pool.acme_labs",
        "iRules": [
          { "use": "/Common/Shared/full_uri_decode" }
        ],
        "translateServerAddress": true,
        "translateServerPort": true,
        "class": "Service_HTTPS",
        "serverTLS": "/Common/Shared/cssl.acme_labs",
        "profileHTTP": { "bigip": "/Common/http" },
        "profileTCP": { "bigip": "/Common/tcp" },
        "redirect80": false,
        "virtualAddresses": [ "172.16.101.133" ],
        "virtualPort": 443,
        "snat": "auto"
      },
      "pool.acme_labs": {
        "loadBalancingMode": "least-connections-member",
        "members": [
          {
            "addressDiscovery": "static",
            "servicePort": 80,
            "serverAddresses": [ "172.16.102.5" ],
            "shareNodes": true
          }
        ],
        "monitors": [ { "bigip": "/Common/http" } ],
        "class": "Pool"
      },
      "www.acmelabs.com": {
        "class": "Certificate",
        "certificate": { "bigip": "/Common/www.acmelabs.com" },
        "privateKey": { "bigip": "/Common/www.acmelabs.com" }
      },
      "cssl.acme_labs": {
        "certificates": [
          { "certificate": "/Common/Shared/www.acmelabs.com" }
        ],
        "class": "TLS_Server",
        "tls1_0Enabled": true,
        "tls1_1Enabled": true,
        "tls1_2Enabled": true,
        "tls1_3Enabled": false,
        "singleUseDhEnabled": false,
        "insertEmptyFragmentsEnabled": true
      },
      "full_uri_decode": {
        "class": "iRule",
        "iRule": {
          "base64": "d2hlbiBIVFRQX1JFUVVFU1QgewogICMgZGVjb2RlIG9yaWdpbmFsIFVSSS4KICBzZXQgdG1wVXJpIFtIVFRQOjp1cmldCiAgc2V0IHVyaSBbVVJJOjpkZWNvZGUgJHRtcFVyaV0KICAjIHJlcGVhdCBkZWNvZGluZyB1bnRpbCB0aGUgZGVjb2RlZCB2ZXJzaW9uIGVxdWFscyB0aGUgcHJldmlvdXMgdmFsdWUuCiAgd2hpbGUgeyAkdXJpIG5lICR0bXBVcmkgfSB7CiAgICBzZXQgdG1wVXJpICR1cmkKICAgIHNldCB1cmkgW1VSSTo6ZGVjb2RlICR0bXBVcmldCiAgfQogIEhUVFA6OnVyaSAkdXJpCiAgbG9nIGxvY2FsMC4gIk9yaWdpbmFsIFVSSTogW0hUVFA6OnVyaV0iCiAgbG9nIGxvY2FsMC4gIkZ1bGx5IGRlY29kZWQgVVJJOiAkdXJpIgp9"
        }
      }
    }
  }
}

This is mostly OK, with the exception of the certificate handling in the Certificate and TLS_Server classes. If I were posting this back to my local BIG-IP in place of the imperative configuration, it'd be fine. But Central Manager has no context for where those certificates are, so I'll need to do a little work here to prep the declaration.
I need to drop the certificate and key into the Certificate class (your security-sense should be tingling, remember these are private keys so in your environment you'd be pulling these credentials in from a vault and NOT storing these in a file) and then updating the reference to the local object in the TLS_Server class. NOTE: It might be confusing for long-time BIG-IP users, but the TLS_Server class in AS3 is the equivalent of a client-ssl profile, and the TLS_CLIENT class in AS3 is the equivalent of a server-ssl profile. This change was made in AS3 to align more with industry-standard nomenclature. After these changes, and changes to Common/Shared, the updated declaration is shown below. { "class": "ADC", "schemaVersion": "3.37.0", "id": "urn:uuid:bd9c9728-8c20-4c4d-a625-68450e35e133", "label": "Converted Declaration", "remark": "Generated by Automation Config Converter", "tenant2": { "class": "Tenant", "httpsapp1": { "class": "Application", "template": "shared", "vip.acme_labs": { "layer4": "tcp", "pool": "pool.acme_labs", "iRules": [ { "use": "/Common/Shared/full_uri_decode" } ], "translateServerAddress": true, "translateServerPort": true, "class": "Service_HTTPS", "serverTLS": "/Common/Shared/cssl.acme_labs", "profileHTTP": { "bigip": "/Common/http" }, "profileTCP": { "bigip": "/Common/tcp" }, "redirect80": false, "virtualAddresses": [ "172.16.101.133" ], "virtualPort": 443, "snat": "auto" }, "pool.acme_labs": { "loadBalancingMode": "least-connections-member", "members": [ { "addressDiscovery": "static", "servicePort": 80, "serverAddresses": [ "172.16.102.5" ], "shareNodes": true } ], "monitors": [ { "bigip": "/Common/http" } ], "class": "Pool" }, "www.acmelabs.com": { "class": "Certificate", "certificate": "-----BEGIN CERTIFICATE-----\nMIIHQTCCBimgAwIBAgIQFxO0vIztEEcAAAAAUQF6LjANBgkqhkiG9w0BAQsFADCBujELMAkGA1UEBhMCVVMxFjAUBgNVBAoTDUVudHJ1c3QsIEluYy4xKDAmBgNVBAsTH1NlZSB3d3cuZW50cnVzdC5uZXQvbGVnYWwtdGVybXMxOTA3BgNVBAsTMChjKSAyMDEyIEVudHJ1c3QsIEluYy4gLSBmb3IgYXV0aG9yaXplZCB1c2Ugb25seTEuMCwGA1UEAxMlRW50cnVzdCBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eSAtIEwxSzAeFw0yMDAzMjYyMTExNTZaFw0yMjAzMTQyMTQxNTVaMGoxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpXYXNoaW5ndG9uMRAwDgYDVQQHEwdTZWF0dGxlMRowGAYDVQQKExFGNSBOZXR3b3JrcywgSW5jLjEYMBYGA1UEAwwPKi5lbWVhLmY1c2UuY29tMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAt6FDfpu8jBbE8dew0m5t2ax/p6LE0mI0BMJIZA1TxglDvQjgVethDPWp7rTr655ZNuYUZ4p/QV/Uummo0NxhE4VQyIK1tcKnGs/tX2BVmx/augrcqOpGwZKAeKsRDxB8UBS/BmovlQQgRqBym3lg7AewI20BwtSvrCSviGmByBPW7cjOFoe8n706XZvEDFiZgj/OuV2V1giCzqqKUJ5mLAqSh25465IVcTJQxKkok668rHOgpUO2GDav7cnrtLm71Oxv6m64gcQJ+e2xzaxa0/OfykuXn4W84RFKwm6im3lAbgNI+CwCjTNtXXs88TxMG49GuTol9ddeS+4aF9GvCQIDAQABo4IDkDCCA4wwKQYDVR0RBCIwIIIPKi5lbWVhLmY1c2UuY29tgg1lbWVhLmY1c2UuY29tMIIB9wYKKwYBBAHWeQIEAgSCAecEggHjAeEAdwBVgdTCFpA2AUrqC5tXPFPwwOQ4eHAlCBcvo6odBxPTDAAAAXEYy223AAAEAwBIMEYCIQDAvv+hvpE9l0BnPH3ouvKJOyTTrLNRK6qZiHrEm9G3iAIhAIlqyaByyF2OHUAqNnfk7DalviCjaHPzqEmYnsrMIXV9AHYAh3W/51l8+IxDmV+9827/Vo1HVjb/SrVgwbTq/16ggw8AAAFxGMttuQAABAMARzBFAiEAnH87ThX2oxA89e1wDaslF8zZrbu/OG8Jx3I7zqVAtkACIB90UYajoUjMoqTP36sb/tU6N776FNsflbScLedtiqPSAHcAVhQGmi/XwuzT9eG9RLI+x0Z2ubyZEVzA75SYVdaJ0N0AAAFxGMtt3wAABAMASDBGAiEAh6gVTPW97krycFbcH9OcLu/lTRSkfeCbMqUYBXlCtKICIQCmGMSIJNZYFIM3mTD0hb2VDGOMCjHkAE5hiJ5VuLEgswB1ALvZ37wfinG1k5Qjl6qSe0c4V5UKq1LoGpCWZDaOHtGFAAABcRjLbbQAAAQDAEYwRAIgdBV5qHR7nM97nmvdlSK3QLcsq+cr6qd+xns+9Wbv1pcCIBMdw4C5iEMKpwdyLRDR86jQC2v8op/klavXFfYGZ9QyMA4GA1UdDwEB/wQEAwIFoDAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwMwYDVR0fBCwwKjAooCagJIYiaHR0cDovL2NybC5lbnRydXN0Lm5ldC9sZXZlbDFrLmNybDBLBgNVHSAERDB
CMDYGCmCGSAGG+mwKAQUwKDAmBggrBgEFBQcCARYaaHR0cDovL3d3dy5lbnRydXN0Lm5ldC9ycGEwCAYGZ4EMAQICMGgGCCsGAQUFBwEBBFwwWjAjBggrBgEFBQcwAYYXaHR0cDovL29jc3AuZW50cnVzdC5uZXQwMwYIKwYBBQUHMAKGJ2h0dHA6Ly9haWEuZW50cnVzdC5uZXQvbDFrLWNoYWluMjU2LmNlcjAfBgNVHSMEGDAWgBSConB03bxTP8971PfNf6dgxgpMvzAdBgNVHQ4EFgQUaEk3Dl8YuTsNPJ0vhVIKTKZfnNIwCQYDVR0TBAIwADANBgkqhkiG9w0BAQsFAAOCAQEAD8DmFFgnU2veCzDyeoF12bbZfF9oA3nOTY7z2WjYy7/5hyKg6FXKwkXVji13g6RNFVQ03mqcXTN8/AhnHz7dnhWF39WhdH08suWLQrmIT2dPBKTF1aQcURIpOddemsZMx6NCFjgcAHLcK/nPDPsfMXq5tRXInjPyGd38TooIeAfGGPiTrgL3UU8ByQPxriOf4V5i66BOWH8wDViPBeXaDSdgcXhrDXAAt/nArVmI7orK+t/0iCzoeg9pGH39+/G1VansfbTcBbKnqVCxDplUiCXLlD17mN45n9estajf4tnpiXkqBIC14o742HAeqpV9T9wzUbJFo5BWMtpHtPZu2A==\n-----END CERTIFICATE-----", "privateKey": "-----BEGIN RSA PRIVATE KEY-----\nMIIEowIBAAKCAQEAt6FDfpu8jBbE8dew0m5t2ax/p6LE0mI0BMJIZA1TxglDvQjgVethDPWp7rTr655ZNuYUZ4p/QV/Uummo0NxhE4VQyIK1tcKnGs/tX2BVmx/augrcqOpGwZKAeKsRDxB8UBS/BmovlQQgRqBym3lg7AewI20BwtSvrCSviGmByBPW7cjOFoe8n706XZvEDFiZgj/OuV2V1giCzqqKUJ5mLAqSh25465IVcTJQxKkok668rHOgpUO2GDav7cnrtLm71Oxv6m64gcQJ+e2xzaxa0/OfykuXn4W84RFKwm6im3lAbgNI+CwCjTNtXXs88TxMG49GuTol9ddeS+4aF9GvCQIDAQABAoIBAGeJbdz9QppaXEFgNDryOM37DR8gD4nwBRSJ1vdS7GFE6AS19Id9aAM+oMoPCNaZOgRSRj77QDVEK1XQLXdWSwYOrTXhPUN2tXHQuy6DysDkfRdY+IHlVm/egsGG8t9jlDQy/mJHjPygjvJDlVtEXPm4e//9fni0IzkUlkR7+MkuMT3vvKGYnUNTlI1hJokcNJ75r91O82j+qQsmvJG3FOUn0DpnEBgIvEbFvD3wMHY1K/fTUsBVJMKkjjXmjykGB9y7V4oKHQLsxH+lrUneWdD/s23hoVgAV31YeXtf7mI/eWPJt6DiGwTfaNcNcptvwugsR/7jCWaKS9Hya/qbJuECgYEA6wNm2doCmwl6ksbKSik0VgVha7DN3VjoFTDFcYZNfjB/kr6/xSODbAxJMwQdevFj2eWQxeHZnJc96x2xWzCA3mp1BhNzcfT8XRZ4LMcLUpcl1VXUEVc562vL8AdVuSODDc/rBXP/aFSXGdE+ZSPhYBrNlnK1FY10aaGsrEaRxL8CgYEAyAcyKVKY+wpdweiVXsUHIHAwQUmi8pKd1j5KlCjJkn2Wtiqex0v1eDy2/iKrZDWRiRFE4WOIb7A9GYm7FfqDyn9WvVNI0bz8Ywi+bCTdawGZ8H328q3R4/xIPprGmKV6olQHHUGZUNkLTK+cDHK4w9JRSf9kB6PUgGnBTgZoFjcCgYB/E2bQ03ZnOLfjl8QYV7Fp9hzYa1DVqFZN5wJMQW+zlSvWQHhXc72Ddh06jbYXHWF9mAkxRs8xQgKEGJknEtIL8gp3D5tz+iFfgF/Y7oPr07jsYy15du3lo3MxxfWPV2ls1YlieHeZhWvy1NblP4KFQdj6yemqzsMsvvQsbzgw5wKBgQCASZ0yQ3c6Cnv3UWP7VAIuG8XXGZMYYFA6h9jtDPu6qDFwxATxbRYR916lvzaNHo4oiprSszNd7npBVsRWZEUCKolHA5NAcSStn34BfeNELdK9Gwy2uCRVRAhRnpKgdAEi+yFU8i2SXKGSnU5H7Yvyi4D3JITTIY+4jBseH53CIQKBgByjoPYp+eMXpUmg4W5M1irXGm8sjrRBKvnxu9L+etvajWIb+AUAtoNoQmcKpf8bBK84PdCwiDSQmRDbWieT9RsSqbyWOcQf2C2L0qujUb+bM+kSTYp4oAV/rukoZ46NHjYBE3NbI7HcspWbpu5zl0Ke9pLvDwFwrmRy5KM7EiSh\n-----END RSA PRIVATE KEY-----" }, "cssl.acme_labs": { "certificates": [ { "certificate": "www.acmelabs.com" } ], "class": "TLS_Server", "tls1_0Enabled": true, "tls1_1Enabled": true, "tls1_2Enabled": true, "tls1_3Enabled": false, "singleUseDhEnabled": false, "insertEmptyFragmentsEnabled": true }, "full_uri_decode": { "class": "iRule", "iRule": { "base64": "d2hlbiBIVFRQX1JFUVVFU1QgewogICMgZGVjb2RlIG9yaWdpbmFsIFVSSS4KICBzZXQgdG1wVXJpIFtIVFRQOjp1cmldCiAgc2V0IHVyaSBbVVJJOjpkZWNvZGUgJHRtcFVyaV0KICAjIHJlcGVhdCBkZWNvZGluZyB1bnRpbCB0aGUgZGVjb2RlZCB2ZXJzaW9uIGVxdWFscyB0aGUgcHJldmlvdXMgdmFsdWUuCiAgd2hpbGUgeyAkdXJpIG5lICR0bXBVcmkgfSB7CiAgICBzZXQgdG1wVXJpICR1cmkKICAgIHNldCB1cmkgW1VSSTo6ZGVjb2RlICR0bXBVcmldCiAgfQogIEhUVFA6OnVyaSAkdXJpCiAgbG9nIGxvY2FsMC4gIk9yaWdpbmFsIFVSSTogW0hUVFA6OnVyaV0iCiAgbG9nIGxvY2FsMC4gIkZ1bGx5IGRlY29kZWQgVVJJOiAkdXJpIgp9" } } } } } I didn't mention it above, but you'll notice that the iRule is base64 encoded. The conversion to AS3 in VSCode did that automatically. You can do the same for the certificate and privateKey attributes as well if you want, but that'll need the base64 attribute within the curly brackets like the iRule. 
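If you do want to base64-encode the certificate and key, here's a rough sketch of the mechanics. The file names are hypothetical, the -w0 flag is GNU coreutils (macOS base64 produces unwrapped output by default), and I'm assuming the Certificate class accepts the same base64 object form the iRule class does; verify against the Next AS3 schema before relying on it.

# encode the PEM files as single-line base64 (hypothetical file names)
base64 -w0 www.acmelabs.com.crt
base64 -w0 www.acmelabs.com.key

The Certificate class would then look something like this, with each command's output pasted in:

"www.acmelabs.com": {
  "class": "Certificate",
  "certificate": { "base64": "<output of the first command>" },
  "privateKey": { "base64": "<output of the second command>" }
}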
Billy Mays here again...buy 1, get another free!Like the DNS app, there are a few things native to classic in this declaration that aren't supported in Next, so we need to make a few more changes after a few tests: I removed profileHTTP and profileTCP attributes from the Service_HTTPS class. These are allowed, but since I am not setting anything non-default, I don't need them. As is, they were not acceptable referencing bigip classic profiles Removed layer4 and translateServerPort attributes from the Service_HTTPS class as they are not currently supported in Next Removed tls1_Enabled, singleUseDhEnabled, and insertEmptyFragmentsEnabled attributes from TLS_Server class as they are not currently supported in Next. Added the ciphers attribute with RSA value to the TLS_Server class. The instance would not accept the deployment without this, I got an expired or invalid certificate error without it. Changed the iRules refererence in the Service_HTTPS class from a classic BIG-IP object to a local declaration object. These final changes resulted in the following declaration I'll use with the API endpoints: { "class": "ADC", "schemaVersion": "3.37.0", "id": "urn:uuid:bd9c9728-8c20-4c4d-a625-68450e35e133", "label": "Converted Declaration", "remark": "Generated by Automation Config Converter", "tenant2": { "class": "Tenant", "httpsapp1": { "class": "Application", "template": "shared", "vip.acme_labs": { "pool": "pool.acme_labs", "iRules": [ "full_uri_decode" ], "translateServerAddress": true, "class": "Service_HTTPS", "serverTLS": "cssl.acme_labs", "redirect80": false, "virtualAddresses": [ "172.16.101.133" ], "virtualPort": 443, "snat": "auto" }, "pool.acme_labs": { "loadBalancingMode": "least-connections-member", "members": [ { "addressDiscovery": "static", "servicePort": 80, "serverAddresses": [ "172.16.102.5" ], "shareNodes": true } ], "monitors": [ "http" ], "class": "Pool" }, "www.acmelabs.com": { "class": "Certificate", "certificate": "-----BEGIN CERTIFICATE-----\nMIIC3DCCAcSgAwIBAgIGAZAW7PncMA0GCSqGSIb3DQEBDQUAMC8xCzAJBgNVBAYTAlVTMSAwHgYDVQQDExdteXNlbGZzaWduZWQudGVzdC5sb2NhbDAeFw0yNDA2MTQxMzI1NDdaFw0zNDA2MTIxMzI1NDdaMC8xCzAJBgNVBAYTAlVTMSAwHgYDVQQDExdteXNlbGZzaWduZWQudGVzdC5sb2NhbDCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMYpeRm4f1mPgW7STMM4gZXZ5p02nCWshNwVkaOLpRJAOdR2ZpuhLW4tWpAssvmTRlS0cFjZKA6ecVg4Q7+wvw7dIG8gVAviOqmHb6sDaomBTn3+ISFYW0Uxb1GNvZqlktJQI7hCsaS5Kf/f4pImVa8jQffWTdgLwxCm+0suaXy1XykVOCdOs1lsCOHjMoVREWxLIAtzMpqdO+8IRhSJgPJPf3GnY861T0LDjuT5rgwY1qK/H2NuEcPWOWVtqTN9aQAz9cKxDbJq48U8adzrl6G8uUYlEPEtneePErygy8wRk8KkVNkuDj5gQKxi3b3Q8/K7bPhh9aUnZRQWmhVTw2kCAwEAATANBgkqhkiG9w0BAQ0FAAOCAQEAOh3doWxnjb5j5XojnEtYUWJG6yw9a3xZhEiq7myWz7apmy5eAe0QAL9kFAuiBwgjqwzPCXzMDp21FdLC+o9Znx5A8kXE2W2G+h36kc21f3v0jumRdkU1zZ9py9iKHAOUSAYsALNWH4mosFFbodpqcFZL7Fqmh/AoIcqY3GqSWOZ6geYbMIOwTZFnsuE1LTjJrnypz1ZyglGoftzU9j501aq3eJ3YUyRIZ28/ARJxn4sUfdvjvs31EdFEOOC6hwN2U7JXdWWK/fATTenglSkUqChJRW6kRL7uFf6FCCZjXyGINJnOYVz+8gxDWA557+ogYfEquQVML5gvMK9Ff67W6A==\n-----END CERTIFICATE-----", "privateKey": "-----BEGIN RSA PRIVATE 
KEY-----\nMIIEpAIBAAKCAQEAxil5Gbh/WY+BbtJMwziBldnmnTacJayE3BWRo4ulEkA51HZmm6Etbi1akCyy+ZNGVLRwWNkoDp5xWDhDv7C/Dt0gbyBUC+I6qYdvqwNqiYFOff4hIVhbRTFvUY29mqWS0lAjuEKxpLkp/9/ikiZVryNB99ZN2AvDEKb7Sy5pfLVfKRU4J06zWWwI4eMyhVERbEsgC3Mymp077whGFImA8k9/cadjzrVPQsOO5PmuDBjWor8fY24Rw9Y5ZW2pM31pADP1wrENsmrjxTxp3OuXoby5RiUQ8S2d548SvKDLzBGTwqRU2S4OPmBArGLdvdDz8rts+GH1pSdlFBaaFVPDaQIDAQABAoIBAEUsIv7MfX/o7TifJnabGfkSOEM21ej8wOAGk3EwhO3LB6TXs9etuqsUH+HmCI/ATjOxTOpm22nG+y/dbCDU9MyeefzwnwYK8YlOIrfimGTpg1nNxQjby/hqWj5wqPf7xjWuDdn7RgGHNVcBcxirUwuw1g1KfJ/m8y+z6lKDIAWMuPegPFgQy0UoJmE5gjtdNYuRrPKESfjdgYhbmzl75k2zqm35Ngwgvp6YYq1jeGpDb4lDBDvn9KdpScC1y9w++7k4n1AyMZXsfgn3oSiFp9G6rZNraykOPYkQu309DVBqYtW0DHSU/xDYh1MTwJEwhcISYu12s2PIDGv/prgMRwUCgYEA7SdaqLT0B/btPkO84gnRx40rgsSM8gPewiVHerc95/tR6tCdMg1eNGJEK+biZMR/oxLQ3Ajr14BE3O8Dxhcqx/5vdo5qrX2oytDkl87oObK5rL0kdlmg/SQdnCsG/GkGtZlXLdMmjibSglGn23E69bsS0+IHspZnT2KHb1v1OZcCgYEA1ejfdHxmyOe+ke9QYn0umLLI/u6vDm6qkzEJrmzkpjrQrwftYRBeSr7CRJdRWtQ6dKA6kGZEfumFMg0ptFtwDGuLnzXek8UC3gKXjDnHyTugTXLprgB3A1AUYy0jvxmMTY8/AZLmDnqXma1WFnyxIUrTbzQq6uJPD4b33cWciv8CgYEAumnT1ocex1/uzqG6SEeFsYEjMZBEZjxqjlt1W13MeJxRoO1Ikz50zWJsycGcNa9L0SiKKluM3wGBn9T1N3GgfEJg5WU/L4517q7S8Q1/91KopsKqdakwZatM5yPfQutfjcGyCGBQjy6vDCcZdeIEgYICY7DpchTNslX1tbAoC5MCgYA9f9hOyz1Z4Zbeqik4R7lP2YcEFGdsBNExxFV+Onx6dkptKCBNWcFiR/necorHTGEKCs8LmPt0aXsL6tDks61BROI9geVeIrQyVBhyDmKsLmJmIfWhOyz8XNefs+ilFplJ6zc4Ip3V59USL82iZXMfmT20qRD1ut70Hd/BeQEKzQKBgQCoiTGlal7FaOHZmjvPOc6lzvOC2RIZL3yT5U1r9XsMFC2pPU/YinTc0cEpMmbeqLKuINjKOYyVp8HZEdpB6atU/WYDT2INe7VaphWpHkd5F56plzo0hlTDr1eFlHBsj23MVFR/UvpL0PeGzfnBd7ga2s0ymWDDnIhMJKzwu5GvDw==\n-----END RSA PRIVATE KEY-----" }, "cssl.acme_labs": { "certificates": [ { "certificate": "www.acmelabs.com" } ], "ciphers": "RSA", "class": "TLS_Server", "tls1_1Enabled": true, "tls1_2Enabled": true, "tls1_3Enabled": false }, "full_uri_decode": { "class": "iRule", "iRule": { "base64": "d2hlbiBIVFRQX1JFUVVFU1QgewogICMgZGVjb2RlIG9yaWdpbmFsIFVSSS4KICBzZXQgdG1wVXJpIFtIVFRQOjp1cmldCiAgc2V0IHVyaSBbVVJJOjpkZWNvZGUgJHRtcFVyaV0KICAjIHJlcGVhdCBkZWNvZGluZyB1bnRpbCB0aGUgZGVjb2RlZCB2ZXJzaW9uIGVxdWFscyB0aGUgcHJldmlvdXMgdmFsdWUuCiAgd2hpbGUgeyAkdXJpIG5lICR0bXBVcmkgfSB7CiAgICBzZXQgdG1wVXJpICR1cmkKICAgIHNldCB1cmkgW1VSSTo6ZGVjb2RlICR0bXBVcmldCiAgfQogIEhUVFA6OnVyaSAkdXJpCiAgbG9nIGxvY2FsMC4gIk9yaWdpbmFsIFVSSTogW0hUVFA6OnVyaV0iCiAgbG9nIGxvY2FsMC4gIkZ1bGx5IGRlY29kZWQgVVJJOiAkdXJpIgp9" } } } } } OK, we have our declarations handy, now we can move on to working this the API endpoints! CRUD operations We sure love our acronyms in tech, don't we? CRUD stands for create, read, update, and delete. These are the most common operations for interacting with an API. (If you've used the iControl REST interface before on classic BIG-IP, you know that we need to perform additional operations like running commands (load, save, run, etc), so that needed to be folded in somehow to the CRUD model. We'll address those use cases in future articles.) Before we can use the API endpoints, however, we need to be authenticated to the Central Manager. This requires a login request that returns a bearer token to be used in subsequent requests. I wrote a short bash script to get the token which I set to a local variable in my shell. 
First, the script: #!/bin/zsh token=$(curl -ks --location 'https://172.16.31.105/api/login' \ --header 'Content-Type: application/json' \ --data '{ "username": "admin", "password": "notsofastmyfriend" }' | jq -r '.access_token') echo $token Next, setting the token variable for use in future commands: jrahm@mymac as3testing % token=$(./gt.sh) jrahm@mymac as3testing % echo $token D1RrEpn1RCHpm5FrGCIiXrwu3coSO8vWGT8e8kHLd2QbeUUiGAgw6pFb1B2l2bHeG7KsrqiipfuNGbx/DaCyUDQ0niaDiQizHIj6w7xOIWLNd5e/Bz2emGskM959E7CnMRTV36qPpu0SLDJsdvThZf6wLvm9oe5cX25Uqzf2/6Y+eNxDLs2WjsA4IFFRO2QWkjrq807kxJIoIX8BvICSxyjlx7PEQkWBAdUV7z6zayX03FtA3lqR66dzzMtIr9L7na+T7/i5cqSETGYQYt1z4a996oA/jMcAEy5J6PsuinCdN3ZZNt5Bfi4ck/5/bA3RJEZR8niU5u77DGasckdcUlRjl0/8UOgmEq19BRopAGFCXvRyiX/g6CVR6NDNG5dlmVjVcJ2+IzYJ8utGfr7raKMIgDIEn/G1AVqy0kj+x2ANdHpo0PQG678JoXChHObiDwjcOMrUiW2cC/YMLp36lcBEgp0uySokSwwYBTJjLJezFE74I+x154yDIWYD0+I8xbIqAHA4a3IxMljR14wowIJp84SxfeuJcrcUAZESzw== Now that I don't have to worry about re-upping on my token while working with curl at the command line, let's work through each of these CRUD operations in order. Application service create operation The create operation is accomplished with an HTTP POST method. As we are creating an object, we need to send some data along with that. That data in our case is the AS3 declaration. I put each declaration in a file Compatibility API It's a single request to deploy the workload with the compatibility interface to the /api/v1/spaces/default/appsvcs/declare endpoint with a target_address of the instances as a query parameter. Interestingly, the successful declaration is returned to you in its entirety in the response. DNS App jrahm@mymac as3testing % curl -sk \ -H "Authorization: Bearer $token" \ -H "Content-Type: application/json" \ -d "@dns-app.json" \ --location 'https://172.16.2.105/api/v1/spaces/default/appsvcs/declare?target_address=172.16.2.161' | jq . { "declaration": { "class": "ADC", "id": "urn:uuid:3a71dceb-f56c-4dc1-901a-2feae0244c46", "label": "Converted Declaration", "remark": "Generated by Automation Config Converter", "schemaVersion": "3.37.0", "tenant1": { "class": "Tenant", "dnsapp1": { "class": "Application", "pool.ns-cluster-1": { "class": "Pool", "members": [ { "addressDiscovery": "static", "serverAddresses": [ "10.10.100.101", "10.10.100.102", "10.10.100.103", "10.10.100.104" ], "servicePort": 53, "shareNodes": true } ], "monitors": [ "icmp" ] }, "template": "shared", "vip.ns-cluster-1": { "class": "Service_UDP", "pool": "pool.ns-cluster-1", "snat": "auto", "translateServerAddress": true, "virtualAddresses": [ "10.100.100.100" ], "virtualPort": 53 } } } }, "results": [ { "code": 200, "host": "172.16.2.161", "message": "success", "runTime": 1948, "tenant": "tenant1" } ] } HTTPS App jrahm@mymac as3testing % curl -sk \ -H "Authorization: Bearer $token" \ -H "Content-Type: application/json" \ -d "@https-app.json" \ --location 'https://172.16.2.105/api/v1/spaces/default/appsvcs/declare?target_address=172.16.2.161' | jq . 
{ "declaration": { "class": "ADC", "id": "urn:uuid:bd9c9728-8c20-4c4d-a625-68450e35e133", "label": "Converted Declaration", "remark": "Generated by Automation Config Converter", "schemaVersion": "3.37.0", "tenant2": { "class": "Tenant", "httpsapp1": { "class": "Application", "cssl.acme_labs": { "certificates": [ { "certificate": "www.acmelabs.com" } ], "ciphers": "RSA", "class": "TLS_Server", "tls1_1Enabled": true, "tls1_2Enabled": true, "tls1_3Enabled": false }, "full_uri_decode": { "class": "iRule", "iRule": { "base64": "d2hlbiBIVFRQX1JFUVVFU1QgewogICMgZGVjb2RlIG9yaWdpbmFsIFVSSS4KICBzZXQgdG1wVXJpIFtIVFRQOjp1cmldCiAgc2V0IHVyaSBbVVJJOjpkZWNvZGUgJHRtcFVyaV0KICAjIHJlcGVhdCBkZWNvZGluZyB1bnRpbCB0aGUgZGVjb2RlZCB2ZXJzaW9uIGVxdWFscyB0aGUgcHJldmlvdXMgdmFsdWUuCiAgd2hpbGUgeyAkdXJpIG5lICR0bXBVcmkgfSB7CiAgICBzZXQgdG1wVXJpICR1cmkKICAgIHNldCB1cmkgW1VSSTo6ZGVjb2RlICR0bXBVcmldCiAgfQogIEhUVFA6OnVyaSAkdXJpCiAgbG9nIGxvY2FsMC4gIk9yaWdpbmFsIFVSSTogW0hUVFA6OnVyaV0iCiAgbG9nIGxvY2FsMC4gIkZ1bGx5IGRlY29kZWQgVVJJOiAkdXJpIgp9" } }, "pool.acme_labs": { "class": "Pool", "loadBalancingMode": "least-connections-member", "members": [ { "addressDiscovery": "static", "serverAddresses": [ "172.16.102.5" ], "servicePort": 80, "shareNodes": true } ], "monitors": [ "http" ] }, "template": "shared", "vip.acme_labs": { "class": "Service_HTTPS", "iRules": [ "full_uri_decode" ], "pool": "pool.acme_labs", "redirect80": false, "serverTLS": "cssl.acme_labs", "snat": "auto", "translateServerAddress": true, "virtualAddresses": [ "172.16.101.133" ], "virtualPort": 443 }, "www.acmelabs.com": { "certificate": "-----BEGIN CERTIFICATE-----\nMIIC3DCCAcSgAwIBAgIGAZAW7PncMA0GCSqGSIb3DQEBDQUAMC8xCzAJBgNVBAYTAlVTMSAwHgYDVQQDExdteXNlbGZzaWduZWQudGVzdC5sb2NhbDAeFw0yNDA2MTQxMzI1NDdaFw0zNDA2MTIxMzI1NDdaMC8xCzAJBgNVBAYTAlVTMSAwHgYDVQQDExdteXNlbGZzaWduZWQudGVzdC5sb2NhbDCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMYpeRm4f1mPgW7STMM4gZXZ5p02nCWshNwVkaOLpRJAOdR2ZpuhLW4tWpAssvmTRlS0cFjZKA6ecVg4Q7+wvw7dIG8gVAviOqmHb6sDaomBTn3+ISFYW0Uxb1GNvZqlktJQI7hCsaS5Kf/f4pImVa8jQffWTdgLwxCm+0suaXy1XykVOCdOs1lsCOHjMoVREWxLIAtzMpqdO+8IRhSJgPJPf3GnY861T0LDjuT5rgwY1qK/H2NuEcPWOWVtqTN9aQAz9cKxDbJq48U8adzrl6G8uUYlEPEtneePErygy8wRk8KkVNkuDj5gQKxi3b3Q8/K7bPhh9aUnZRQWmhVTw2kCAwEAATANBgkqhkiG9w0BAQ0FAAOCAQEAOh3doWxnjb5j5XojnEtYUWJG6yw9a3xZhEiq7myWz7apmy5eAe0QAL9kFAuiBwgjqwzPCXzMDp21FdLC+o9Znx5A8kXE2W2G+h36kc21f3v0jumRdkU1zZ9py9iKHAOUSAYsALNWH4mosFFbodpqcFZL7Fqmh/AoIcqY3GqSWOZ6geYbMIOwTZFnsuE1LTjJrnypz1ZyglGoftzU9j501aq3eJ3YUyRIZ28/ARJxn4sUfdvjvs31EdFEOOC6hwN2U7JXdWWK/fATTenglSkUqChJRW6kRL7uFf6FCCZjXyGINJnOYVz+8gxDWA557+ogYfEquQVML5gvMK9Ff67W6A==\n-----END CERTIFICATE-----", "class": "Certificate", "privateKey": "-----BEGIN RSA PRIVATE 
KEY-----\nMIIEpAIBAAKCAQEAxil5Gbh/WY+BbtJMwziBldnmnTacJayE3BWRo4ulEkA51HZmm6Etbi1akCyy+ZNGVLRwWNkoDp5xWDhDv7C/Dt0gbyBUC+I6qYdvqwNqiYFOff4hIVhbRTFvUY29mqWS0lAjuEKxpLkp/9/ikiZVryNB99ZN2AvDEKb7Sy5pfLVfKRU4J06zWWwI4eMyhVERbEsgC3Mymp077whGFImA8k9/cadjzrVPQsOO5PmuDBjWor8fY24Rw9Y5ZW2pM31pADP1wrENsmrjxTxp3OuXoby5RiUQ8S2d548SvKDLzBGTwqRU2S4OPmBArGLdvdDz8rts+GH1pSdlFBaaFVPDaQIDAQABAoIBAEUsIv7MfX/o7TifJnabGfkSOEM21ej8wOAGk3EwhO3LB6TXs9etuqsUH+HmCI/ATjOxTOpm22nG+y/dbCDU9MyeefzwnwYK8YlOIrfimGTpg1nNxQjby/hqWj5wqPf7xjWuDdn7RgGHNVcBcxirUwuw1g1KfJ/m8y+z6lKDIAWMuPegPFgQy0UoJmE5gjtdNYuRrPKESfjdgYhbmzl75k2zqm35Ngwgvp6YYq1jeGpDb4lDBDvn9KdpScC1y9w++7k4n1AyMZXsfgn3oSiFp9G6rZNraykOPYkQu309DVBqYtW0DHSU/xDYh1MTwJEwhcISYu12s2PIDGv/prgMRwUCgYEA7SdaqLT0B/btPkO84gnRx40rgsSM8gPewiVHerc95/tR6tCdMg1eNGJEK+biZMR/oxLQ3Ajr14BE3O8Dxhcqx/5vdo5qrX2oytDkl87oObK5rL0kdlmg/SQdnCsG/GkGtZlXLdMmjibSglGn23E69bsS0+IHspZnT2KHb1v1OZcCgYEA1ejfdHxmyOe+ke9QYn0umLLI/u6vDm6qkzEJrmzkpjrQrwftYRBeSr7CRJdRWtQ6dKA6kGZEfumFMg0ptFtwDGuLnzXek8UC3gKXjDnHyTugTXLprgB3A1AUYy0jvxmMTY8/AZLmDnqXma1WFnyxIUrTbzQq6uJPD4b33cWciv8CgYEAumnT1ocex1/uzqG6SEeFsYEjMZBEZjxqjlt1W13MeJxRoO1Ikz50zWJsycGcNa9L0SiKKluM3wGBn9T1N3GgfEJg5WU/L4517q7S8Q1/91KopsKqdakwZatM5yPfQutfjcGyCGBQjy6vDCcZdeIEgYICY7DpchTNslX1tbAoC5MCgYA9f9hOyz1Z4Zbeqik4R7lP2YcEFGdsBNExxFV+Onx6dkptKCBNWcFiR/necorHTGEKCs8LmPt0aXsL6tDks61BROI9geVeIrQyVBhyDmKsLmJmIfWhOyz8XNefs+ilFplJ6zc4Ip3V59USL82iZXMfmT20qRD1ut70Hd/BeQEKzQKBgQCoiTGlal7FaOHZmjvPOc6lzvOC2RIZL3yT5U1r9XsMFC2pPU/YinTc0cEpMmbeqLKuINjKOYyVp8HZEdpB6atU/WYDT2INe7VaphWpHkd5F56plzo0hlTDr1eFlHBsj23MVFR/UvpL0PeGzfnBd7ga2s0ymWDDnIhMJKzwu5GvDw==\n-----END RSA PRIVATE KEY-----" } } } }, "results": [ { "code": 200, "host": "172.16.2.161", "message": "success", "runTime": 1950, "tenant": "tenant2" } ] } Documents API With this approach, you send the document first with the /api/v1/spaces/default/appsvcs/documents endpoint and then deploy with the /api/v1/spaces/default/appsvcs/documents/<id>/deployments endpoint. The document and deployment each have their own object ID, and then the deployment also has a task ID that can be referenced in the logs. DNS App jrahm@mymac as3testing % curl -skX POST \ -H "Authorization: Bearer $token" \ -H "Content-Type: application/json" \ -d "@dns-app.json" \ https://172.16.2.105/api/v1/spaces/default/appsvcs/documents | jq . { "Message": "Application service created successfully", "_links": { "self": { "href": "/api/v1/spaces/default/appsvcs/documents/d5d0a360-75ec-434c-9802-62083a26c4d3" } }, "id": "d5d0a360-75ec-434c-9802-62083a26c4d3" } jrahm@mymac as3testing % curl -skX POST \ -H "Authorization: Bearer $token" \ -H "Content-Type: application/json" \ -d '{"target": "172.16.2.161"}' \ https://172.16.2.105/api/v1/spaces/default/appsvcs/documents/d5d0a360-75ec-434c-9802-62083a26c4d3/deployments | jq . { "Message": "Deployment task created successfully", "_links": { "self": { "href": "/api/v1/spaces/default/appsvcs/documents/d5d0a360-75ec-434c-9802-62083a26c4d3/deployments" } }, "id": "ed48899b-fcb0-4a60-b8f2-2c0e012aa28d", "task_id": "771beda9-5ca4-4049-bebc-97b9d52da524" } HTTPS App jrahm@mymac as3testing % curl -skX POST \ -H "Authorization: Bearer $token" \ -H "Content-Type: application/json" \ -d "@https-app.json" \ https://172.16.2.105/api/v1/spaces/default/appsvcs/documents | jq . 
{ "Message": "Application service created successfully", "_links": { "self": { "href": "/api/v1/spaces/default/appsvcs/documents/3102ce15-e3d4-498f-a466-60f4bf02c2ab" } }, "id": "3102ce15-e3d4-498f-a466-60f4bf02c2ab" } jrahm@mymac as3testing % curl -skX POST \ -H "Authorization: Bearer $token" \ -H "Content-Type: application/json" \ -d '{"target": "172.16.2.161"}' \ https://172.16.2.105/api/v1/spaces/default/appsvcs/documents/3102ce15-e3d4-498f-a466-60f4bf02c2ab/deployments | jq . { "Message": "Deployment task created successfully", "_links": { "self": { "href": "/api/v1/spaces/default/appsvcs/documents/3102ce15-e3d4-498f-a466-60f4bf02c2ab/deployments" } }, "id": "400e2b06-b451-4035-a26b-beaf90b283a5", "task_id": "f529800a-f515-4bec-9cfe-1f3214dec229" } Central Manager view of API-deployed apps This is the result in Central Manager after deploying the two applications via the two different methodologies. Notice the different naming scheme applied to each approach. Application service read operation The read operation is accomplished with an HTTP GET method. No payload is necessary on the request. Compatibility API Note here that both the DNS and HTTP apps will be returned, and for that matter, both could have been deployed together as well! Also note that this is for apps on the targeted instance only, however. The AS3 deployments follow the curl command options. jrahm@mymac as3testing % curl -sk \ -H "Authorization: Bearer $token" \ -H "Content-Type: application/json" \ "https://172.16.2.105/api/v1/spaces/default/appsvcs/declare?target_address=172.16.2.161" | jq . { "class": "ADC", "controls": null, "schemaVersion": "3.0.0", "target": { "address": "172.16.2.161" }, "tenant1": { "class": "Tenant", "dnsapp1": { "class": "Application", "pool.ns-cluster-1": { "class": "Pool", "members": [ { "addressDiscovery": "static", "serverAddresses": [ "10.10.100.101", "10.10.100.102", "10.10.100.103", "10.10.100.104" ], "servicePort": 53, "shareNodes": true } ], "monitors": [ "icmp" ] }, "template": "shared", "vip.ns-cluster-1": { "class": "Service_UDP", "pool": "pool.ns-cluster-1", "snat": "auto", "translateServerAddress": true, "virtualAddresses": [ "10.100.100.101" ], "virtualPort": 53 } } }, "tenant2": { "class": "Tenant", "httpsapp1": { "class": "Application", "cssl.acme_labs": { "certificates": [ { "certificate": "www.acmelabs.com" } ], "ciphers": "RSA", "class": "TLS_Server", "tls1_1Enabled": true, "tls1_2Enabled": true, "tls1_3Enabled": false }, "full_uri_decode": { "class": "iRule", "iRule": { "base64": "d2hlbiBIVFRQX1JFUVVFU1QgewogICMgZGVjb2RlIG9yaWdpbmFsIFVSSS4KICBzZXQgdG1wVXJpIFtIVFRQOjp1cmldCiAgc2V0IHVyaSBbVVJJOjpkZWNvZGUgJHRtcFVyaV0KICAjIHJlcGVhdCBkZWNvZGluZyB1bnRpbCB0aGUgZGVjb2RlZCB2ZXJzaW9uIGVxdWFscyB0aGUgcHJldmlvdXMgdmFsdWUuCiAgd2hpbGUgeyAkdXJpIG5lICR0bXBVcmkgfSB7CiAgICBzZXQgdG1wVXJpICR1cmkKICAgIHNldCB1cmkgW1VSSTo6ZGVjb2RlICR0bXBVcmldCiAgfQogIEhUVFA6OnVyaSAkdXJpCiAgbG9nIGxvY2FsMC4gIk9yaWdpbmFsIFVSSTogW0hUVFA6OnVyaV0iCiAgbG9nIGxvY2FsMC4gIkZ1bGx5IGRlY29kZWQgVVJJOiAkdXJpIgp9" } }, "pool.acme_labs": { "class": "Pool", "loadBalancingMode": "least-connections-member", "members": [ { "addressDiscovery": "static", "serverAddresses": [ "172.16.102.5" ], "servicePort": 80, "shareNodes": true } ], "monitors": [ "http" ] }, "template": "shared", "vip.acme_labs": { "class": "Service_HTTPS", "iRules": [ "full_uri_decode" ], "pool": "pool.acme_labs", "redirect80": false, "serverTLS": "cssl.acme_labs", "snat": "auto", "translateServerAddress": true, "virtualAddresses": [ "172.16.101.133" ], 
"virtualPort": 443 }, "www.acmelabs.com": { "certificate": "-----BEGIN CERTIFICATE-----\nMIIC3DCCAcSgAwIBAgIGAZAW7PncMA0GCSqGSIb3DQEBDQUAMC8xCzAJBgNVBAYTAlVTMSAwHgYDVQQDExdteXNlbGZzaWduZWQudGVzdC5sb2NhbDAeFw0yNDA2MTQxMzI1NDdaFw0zNDA2MTIxMzI1NDdaMC8xCzAJBgNVBAYTAlVTMSAwHgYDVQQDExdteXNlbGZzaWduZWQudGVzdC5sb2NhbDCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMYpeRm4f1mPgW7STMM4gZXZ5p02nCWshNwVkaOLpRJAOdR2ZpuhLW4tWpAssvmTRlS0cFjZKA6ecVg4Q7+wvw7dIG8gVAviOqmHb6sDaomBTn3+ISFYW0Uxb1GNvZqlktJQI7hCsaS5Kf/f4pImVa8jQffWTdgLwxCm+0suaXy1XykVOCdOs1lsCOHjMoVREWxLIAtzMpqdO+8IRhSJgPJPf3GnY861T0LDjuT5rgwY1qK/H2NuEcPWOWVtqTN9aQAz9cKxDbJq48U8adzrl6G8uUYlEPEtneePErygy8wRk8KkVNkuDj5gQKxi3b3Q8/K7bPhh9aUnZRQWmhVTw2kCAwEAATANBgkqhkiG9w0BAQ0FAAOCAQEAOh3doWxnjb5j5XojnEtYUWJG6yw9a3xZhEiq7myWz7apmy5eAe0QAL9kFAuiBwgjqwzPCXzMDp21FdLC+o9Znx5A8kXE2W2G+h36kc21f3v0jumRdkU1zZ9py9iKHAOUSAYsALNWH4mosFFbodpqcFZL7Fqmh/AoIcqY3GqSWOZ6geYbMIOwTZFnsuE1LTjJrnypz1ZyglGoftzU9j501aq3eJ3YUyRIZ28/ARJxn4sUfdvjvs31EdFEOOC6hwN2U7JXdWWK/fATTenglSkUqChJRW6kRL7uFf6FCCZjXyGINJnOYVz+8gxDWA557+ogYfEquQVML5gvMK9Ff67W6A==\n-----END CERTIFICATE-----", "class": "Certificate", "privateKey": "-----BEGIN RSA PRIVATE KEY-----\nMIIEpAIBAAKCAQEAxil5Gbh/WY+BbtJMwziBldnmnTacJayE3BWRo4ulEkA51HZmm6Etbi1akCyy+ZNGVLRwWNkoDp5xWDhDv7C/Dt0gbyBUC+I6qYdvqwNqiYFOff4hIVhbRTFvUY29mqWS0lAjuEKxpLkp/9/ikiZVryNB99ZN2AvDEKb7Sy5pfLVfKRU4J06zWWwI4eMyhVERbEsgC3Mymp077whGFImA8k9/cadjzrVPQsOO5PmuDBjWor8fY24Rw9Y5ZW2pM31pADP1wrENsmrjxTxp3OuXoby5RiUQ8S2d548SvKDLzBGTwqRU2S4OPmBArGLdvdDz8rts+GH1pSdlFBaaFVPDaQIDAQABAoIBAEUsIv7MfX/o7TifJnabGfkSOEM21ej8wOAGk3EwhO3LB6TXs9etuqsUH+HmCI/ATjOxTOpm22nG+y/dbCDU9MyeefzwnwYK8YlOIrfimGTpg1nNxQjby/hqWj5wqPf7xjWuDdn7RgGHNVcBcxirUwuw1g1KfJ/m8y+z6lKDIAWMuPegPFgQy0UoJmE5gjtdNYuRrPKESfjdgYhbmzl75k2zqm35Ngwgvp6YYq1jeGpDb4lDBDvn9KdpScC1y9w++7k4n1AyMZXsfgn3oSiFp9G6rZNraykOPYkQu309DVBqYtW0DHSU/xDYh1MTwJEwhcISYu12s2PIDGv/prgMRwUCgYEA7SdaqLT0B/btPkO84gnRx40rgsSM8gPewiVHerc95/tR6tCdMg1eNGJEK+biZMR/oxLQ3Ajr14BE3O8Dxhcqx/5vdo5qrX2oytDkl87oObK5rL0kdlmg/SQdnCsG/GkGtZlXLdMmjibSglGn23E69bsS0+IHspZnT2KHb1v1OZcCgYEA1ejfdHxmyOe+ke9QYn0umLLI/u6vDm6qkzEJrmzkpjrQrwftYRBeSr7CRJdRWtQ6dKA6kGZEfumFMg0ptFtwDGuLnzXek8UC3gKXjDnHyTugTXLprgB3A1AUYy0jvxmMTY8/AZLmDnqXma1WFnyxIUrTbzQq6uJPD4b33cWciv8CgYEAumnT1ocex1/uzqG6SEeFsYEjMZBEZjxqjlt1W13MeJxRoO1Ikz50zWJsycGcNa9L0SiKKluM3wGBn9T1N3GgfEJg5WU/L4517q7S8Q1/91KopsKqdakwZatM5yPfQutfjcGyCGBQjy6vDCcZdeIEgYICY7DpchTNslX1tbAoC5MCgYA9f9hOyz1Z4Zbeqik4R7lP2YcEFGdsBNExxFV+Onx6dkptKCBNWcFiR/necorHTGEKCs8LmPt0aXsL6tDks61BROI9geVeIrQyVBhyDmKsLmJmIfWhOyz8XNefs+ilFplJ6zc4Ip3V59USL82iZXMfmT20qRD1ut70Hd/BeQEKzQKBgQCoiTGlal7FaOHZmjvPOc6lzvOC2RIZL3yT5U1r9XsMFC2pPU/YinTc0cEpMmbeqLKuINjKOYyVp8HZEdpB6atU/WYDT2INe7VaphWpHkd5F56plzo0hlTDr1eFlHBsj23MVFR/UvpL0PeGzfnBd7ga2s0ymWDDnIhMJKzwu5GvDw==\n-----END RSA PRIVATE KEY-----" } } } } Documents API With this interface, Central Manager lists out all the documents, including the compatibility interface applications. 
jrahm@mymac as3testing % curl -sk \ -H "Authorization: Bearer $token" \ -H "Content-Type: application/json" \ https://172.16.2.105/api/v1/spaces/default/appsvcs/documents | jq ._embedded.appsvcs [ { "_links": { "self": { "href": "/api/v1/spaces/default/appsvcs/documents/3102ce15-e3d4-498f-a466-60f4bf02c2ab" } }, "created": "2024-06-17T17:38:08.186126Z", "deployments": [ { "id": "400e2b06-b451-4035-a26b-beaf90b283a5", "instance_id": "a4148c93-5306-4605-b8bb-92d6b1f78c26", "target": { "instance_ip": "172.16.2.161" }, "last_successful_deploy_time": "2024-06-17T17:38:42.404675Z", "modified": "2024-06-17T17:38:42.404675Z", "last_record": { "id": "64894415-38d0-49f9-989d-8f00c88196b3", "task_id": "f529800a-f515-4bec-9cfe-1f3214dec229", "start_time": "2024-06-17T17:38:41.103539Z", "status": "completed" } } ], "deployments_count": { "total": 1, "completed": 1 }, "id": "3102ce15-e3d4-498f-a466-60f4bf02c2ab", "name": "httpsapp1", "tenant_name": "tenant2", "type": "AS3" }, { "_links": { "self": { "href": "/api/v1/spaces/default/appsvcs/documents/7938a0a2-b5d4-4687-99f8-e73d9e6b3d51" } }, "created": "2024-06-17T17:52:41.397543Z", "deployments": [ { "id": "0c50d882-f8d1-4833-af31-2b71e465f2f5", "instance_id": "a4148c93-5306-4605-b8bb-92d6b1f78c26", "target": { "instance_ip": "172.16.2.161" }, "last_successful_deploy_time": "2024-06-17T17:54:51.531445Z", "modified": "2024-06-17T17:54:51.531445Z", "last_record": { "id": "a8f786a5-f1c6-4f99-83bb-59cc024e1c34", "task_id": "ee1a3afa-c9d4-4e29-9271-632bbb93b6e7", "start_time": "2024-06-17T17:54:50.167979Z", "status": "completed" } } ], "deployments_count": { "total": 1, "completed": 1 }, "id": "7938a0a2-b5d4-4687-99f8-e73d9e6b3d51", "modified": "2024-06-17T17:54:50.164813Z", "name": "tenant1.dnsapp1.NzKPI4xZ", "tenant_name": "default", "type": "AS3" }, { "_links": { "self": { "href": "/api/v1/spaces/default/appsvcs/documents/87ec6d3a-063d-4660-b32a-08cf183a21a8" } }, "created": "2024-06-17T17:50:02.621622Z", "deployments": [ { "id": "5da24b69-491e-45a1-b8eb-18395c4b2b12", "instance_id": "a4148c93-5306-4605-b8bb-92d6b1f78c26", "target": { "instance_ip": "172.16.2.161" }, "last_successful_deploy_time": "2024-06-17T17:50:03.929715Z", "modified": "2024-06-17T17:50:03.929715Z", "last_record": { "id": "1f3bc580-da07-4c26-b4d2-7e8bcb632869", "task_id": "dc8fbdc8-4dd0-4aeb-9e7d-cf3038d42c07", "start_time": "2024-06-17T17:50:02.640417Z", "status": "completed" } } ], "deployments_count": { "total": 1, "completed": 1 }, "id": "87ec6d3a-063d-4660-b32a-08cf183a21a8", "name": "tenant2.httpsapp1.NzKPI4xZ", "tenant_name": "default", "type": "AS3" }, { "_links": { "self": { "href": "/api/v1/spaces/default/appsvcs/documents/d5d0a360-75ec-434c-9802-62083a26c4d3" } }, "created": "2024-06-17T17:56:04.957896Z", "deployments": [ { "id": "ed48899b-fcb0-4a60-b8f2-2c0e012aa28d", "instance_id": "a4148c93-5306-4605-b8bb-92d6b1f78c26", "target": { "instance_ip": "172.16.2.161" }, "last_successful_deploy_time": "2024-06-17T17:56:34.410606Z", "modified": "2024-06-17T17:56:34.410606Z", "last_record": { "id": "7178d940-5ae7-4c18-bca6-6f7d14604d5e", "task_id": "771beda9-5ca4-4049-bebc-97b9d52da524", "start_time": "2024-06-17T17:56:33.123687Z", "status": "completed" } } ], "deployments_count": { "total": 1, "completed": 1 }, "id": "d5d0a360-75ec-434c-9802-62083a26c4d3", "name": "dnsapp1", "tenant_name": "tenant1", "type": "AS3" } ] Application service update operation For the update operation, this could be an HTTP PUT or PATCH method, depending on what the endpoints support. 
PUT is supposed to be a total replacement and PATCH a partial replacement, but I've found the implementations of many APIs don't follow this pattern. These methods require a payload with the request. From this section forward, we'll focus more on the mechanics of the API than on the specifics of the application services, so I might work with one app or the other unless both need attention.

Compatibility API

This is where I throw a curveball at you! Because the compatibility interface is intended to match BIG-IP classic AS3 behavior (so it is in fact, uh, compatible), the operation for an update is actually still a POST, as if you're creating the application service for the first time, so there's no need to do anything new here. Make the change to your declaration and POST as shown in the create section and you're good to go.

Documents API

To modify the AS3 application service, the API reference states that the PUT method should be used and the declaration should be complete. So I changed the virtual server IP address in the declaration and sent a PUT request to the appropriate document ID, and it was successfully deployed.

jrahm@mymac as3testing % curl -skX PUT \
  -H "Authorization: Bearer $token" \
  -H "Content-Type: application/json" \
  -d "@dns-app.json" \
  https://172.16.2.105/api/v1/spaces/default/appsvcs/documents/d5d0a360-75ec-434c-9802-62083a26c4d3 | jq .
{
  "_links": {
    "self": {
      "href": "/api/v1/spaces/default/appsvcs/documents/d5d0a360-75ec-434c-9802-62083a26c4d3"
    }
  },
  "deployments": [
    {
      "Message": "Update deployment task created",
      "id": "ed48899b-fcb0-4a60-b8f2-2c0e012aa28d",
      "task_id": "b07fa2de-7d73-4c7e-988a-1383cc45e441"
    }
  ],
  "id": "d5d0a360-75ec-434c-9802-62083a26c4d3",
  "message": "Application service updated successfully"
}

Application service delete operation

An HTTP DELETE method performs the delete operation. Typically you just need the object ID in the request URL to remove the desired object. This is the fun part, at least in the lab environment. BLOW STUFF UP! Just kidding, but not really. I, like the Joker before me, like to make things go bye bye. Maybe if the Joker could have been a force for good he'd be a great chaos engineer.

Compatibility API

This is where I put up the RED FLAG and caution you to know what you're doing here. If you send a DELETE to the compatibility interface with an empty payload you can blow away ALL the AS3 configuration on that instance. So don't do that... Instead, make sure you include the tenant name in the URI as shown below.

jrahm@mymac as3testing % curl -skX DELETE \
  -H "Authorization: Bearer $token" \
  -H "Content-Type: application/json" \
  --location 'https://172.16.2.105/api/v1/spaces/default/appsvcs/declare/tenant1?target_address=172.16.2.161'
{
  "declaration": {},
  "results": [
    {
      "code": 200,
      "host": "172.16.2.161",
      "message": "success",
      "runTime": 1331,
      "tenant": "tenant1"
    }
  ]
}

Documents API

You have two options here. You can delete the deployment only (you'll need to provide the document ID and the deployment ID) and then choose whether to leave the draft or delete it (I show the document delete as well):

jrahm@mymac as3testing % curl -skX DELETE \
  -H "Authorization: Bearer $token" \
  -H "Content-Type: application/json" \
  -d '{"target": "172.16.2.161"}' \
  https://172.16.2.105/api/v1/spaces/default/appsvcs/documents/d5d0a360-75ec-434c-9802-62083a26c4d3/deployments/ed48899b-fcb0-4a60-b8f2-2c0e012aa28d | jq .
{
  "Message": "Delete Deployment task created successfully",
  "_links": {
    "self": {
      "href": "/api/v1/spaces/default/appsvcs/documents/d5d0a360-75ec-434c-9802-62083a26c4d3/deployments/ed48899b-fcb0-4a60-b8f2-2c0e012aa28d"
    }
  },
  "id": "ed48899b-fcb0-4a60-b8f2-2c0e012aa28d",
  "task_id": "9c5a8fe0-d8b9-4b41-a47f-3283586c88f1"
}

jrahm@mymac as3testing % curl -skX DELETE \
  -H "Authorization: Bearer $token" \
  -H "Content-Type: application/json" \
  -d '{"target": "172.16.2.161"}' \
  https://172.16.2.105/api/v1/spaces/default/appsvcs/documents/d5d0a360-75ec-434c-9802-62083a26c4d3/ | jq .
{
  "_links": {
    "self": {
      "href": "/api/v1/spaces/default/appsvcs/documents/d5d0a360-75ec-434c-9802-62083a26c4d3/"
    }
  },
  "id": "d5d0a360-75ec-434c-9802-62083a26c4d3",
  "message": "The application has been deleted successfully"
}

Or you can delete the document outright in one step, which will clean up the deployment as well:

jrahm@mymac as3testing % curl -skX DELETE \
  -H "Authorization: Bearer $token" \
  -H "Content-Type: application/json" \
  -d '{"target": "172.16.2.161"}' \
  https://172.16.2.105/api/v1/spaces/default/appsvcs/documents/3102ce15-e3d4-498f-a466-60f4bf02c2ab/ | jq .
{
  "_links": {
    "self": {
      "href": "/api/v1/spaces/default/appsvcs/documents/3102ce15-e3d4-498f-a466-60f4bf02c2ab/"
    }
  },
  "deployments": [
    {
      "Message": "Delete Deployment task created successfully",
      "id": "/declare/3102ce15-e3d4-498f-a466-60f4bf02c2ab/deployments/400e2b06-b451-4035-a26b-beaf90b283a5",
      "task_id": "73001fa0-2690-4906-96d6-52c2bb162bb0"
    }
  ],
  "id": "3102ce15-e3d4-498f-a466-60f4bf02c2ab",
  "message": "The application delete has been submitted successfully"
}

One more AS3 schema insight

This article focused on the API endpoints, and to make things simpler I used a declaration that works with both approaches. That said, if you are starting out with BIG-IP Next, you don't need the ADC or Tenant classes in your declaration; you can instead use a named document and start at the Application class. Check out this diff in VSCode for the DNS app used in this article.
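Since that diff is an image that didn't make the jump into text, here's a rough sketch of what the slimmer, Application-first document for the DNS app might look like. Treat the exact shape as an assumption against the current Next AS3 schema; the idea is that the document's name, supplied when you create it on Central Manager, takes the place of the tenant and application naming in the ADC wrapper.

{
  "class": "Application",
  "template": "shared",
  "vip.ns-cluster-1": {
    "class": "Service_UDP",
    "pool": "pool.ns-cluster-1",
    "snat": "auto",
    "translateServerAddress": true,
    "virtualAddresses": [ "10.100.100.100" ],
    "virtualPort": 53
  },
  "pool.ns-cluster-1": {
    "class": "Pool",
    "members": [
      {
        "addressDiscovery": "static",
        "servicePort": 53,
        "serverAddresses": [ "10.10.100.101", "10.10.100.102", "10.10.100.103", "10.10.100.104" ],
        "shareNodes": true
      }
    ],
    "monitors": [ "icmp" ]
  }
}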
{ "Message": "Delete Deployment task created successfully", "_links": { "self": { "href": "/api/v1/spaces/default/appsvcs/documents/d5d0a360-75ec-434c-9802-62083a26c4d3/deployments/ed48899b-fcb0-4a60-b8f2-2c0e012aa28d" } }, "id": "ed48899b-fcb0-4a60-b8f2-2c0e012aa28d", "task_id": "9c5a8fe0-d8b9-4b41-a47f-3283586c88f1" } jrahm@mymac as3testing % curl -skX DELETE \ -H "Authorization: Bearer $token" \ -H "Content-Type: application/json" \ -d '{"target": "172.16.2.161"}' \ https://172.16.2.105/api/v1/spaces/default/appsvcs/documents/d5d0a360-75ec-434c-9802-62083a26c4d3/ | jq . { "_links": { "self": { "href": "/api/v1/spaces/default/appsvcs/documents/d5d0a360-75ec-434c-9802-62083a26c4d3/" } }, "id": "d5d0a360-75ec-434c-9802-62083a26c4d3", "message": "The application has been deleted successfully" } Or you can delete the document outright in one step which will clean up the deployment as well: jrahm@mymac as3testing % curl -skX DELETE \ -H "Authorization: Bearer $token" \ -H "Content-Type: application/json" \ -d '{"target": "172.16.2.161"}' \ https://172.16.2.105/api/v1/spaces/default/appsvcs/documents/3102ce15-e3d4-498f-a466-60f4bf02c2ab/ | jq . { "_links": { "self": { "href": "/api/v1/spaces/default/appsvcs/documents/3102ce15-e3d4-498f-a466-60f4bf02c2ab/" } }, "deployments": [ { "Message": "Delete Deployment task created successfully", "id": "/declare/3102ce15-e3d4-498f-a466-60f4bf02c2ab/deployments/400e2b06-b451-4035-a26b-beaf90b283a5", "task_id": "73001fa0-2690-4906-96d6-52c2bb162bb0" } ], "id": "3102ce15-e3d4-498f-a466-60f4bf02c2ab", "message": "The application delete has been submitted successfully" } One more AS3 schema insight This article focused on the API endpoints and to make things simpler I used a declaration that works with both approaches. That said, if you are starting out with BIG-IP Next, you don't need the ADC or Tenant classes in your declaration, you can instead use a named document and start at the application class. Check out this diff in VSCode for the DNS app used in this article. Next up... I've been configuration-focused in the first couple of articles in the automation series. In the next article, I'll walk through some of the BIG-IP Next Postman collection, looking at system as well as configuration things. The visual experience in Postman might be a little easier on the eyes for those getting started than a bunch of curl commands. Stay tuned! Resources BIG-IP Next AS3 Schema BIG-IP Next API Reference Manage Application Services on Central Manager with AS3349Views3likes1Comment