azure
Azure Virtual Machine Scale Set (VMSS) with F5 LTM
Hi. We have an F5 in Azure that has worked perfectly with pools containing only one pool member. We are now about to enable an Azure Virtual Machine Scale Set (auto scaling) for our IIS servers, and the VMs in the VMSS need to be added to and removed from the F5 LTM pool automatically. How can I solve this? In other words, how can I fully automate adding and removing pool members as the VMSS scales in and out? Is there a script that can be installed on the F5 to do this automatically? Thanks

Secure and Seamless Cloud Application Migration with F5 Distributed Cloud and Nutanix
Introduction

F5 Distributed Cloud (XC) offers SaaS-based security, networking, and application management services for multicloud environments, on-premises infrastructures, and edge locations. F5 Distributed Cloud Services Customer Edge (CE) enhances these capabilities by integrating into a customer's environment, enabling centralized management via the F5 Distributed Cloud Console while being fully operated by the customer. F5 Distributed Cloud Services Customer Edge (CE) can be deployed in public clouds, on-premises, or at the edge.

Nutanix is a leading provider of Hyperconverged Infrastructure (HCI), which integrates storage, compute, networking, and virtualization into a unified, scalable, and easily managed solution. Nutanix Cloud Clusters (NC2) extend on-premises data centers to public clouds, maintaining the simplicity of the Nutanix software stack with a unified management console. NC2 runs AOS and AHV on public cloud instances, offering the same CLI, user interface, and APIs as on-premises environments.

This article explores how F5 Distributed Cloud and Nutanix collaborate to deliver secure and seamless application services across various types of cloud application migrations. Whether migrating applications to the cloud, repatriating them from public clouds, or transitioning into a hybrid multicloud environment, F5 Distributed Cloud and Nutanix ensure optimal performance and security at all times.

Illustration

F5 Distributed Cloud App Connect securely connects distributed application services across hybrid and multicloud environments. It operates seamlessly with a platform of web application and API protection (WAAP) services, safeguarding apps and APIs against a wide range of threats through robust security policies, including an integrated WAF, DDoS protection, bot management, and other security tools. This enables the enforcement of consistent and comprehensive security policies across all applications without the need to configure individual custom policies for each app and environment. Additionally, it provides centralized observability, with clear insights into performance metrics, security posture, and operational status across all cloud platforms. In this section, we illustrate how to use F5 Distributed Cloud App Connect with Nutanix for different cloud application migration scenarios.

Cloud Migration

In our example, we have a VMware environment within a data center located in San Jose. Our goal is to migrate the on-premises application nutanix.f5-demo.com from the VMware environment to a multicloud environment by distributing the application workloads across Nutanix Cloud Clusters (NC2) on AWS and Nutanix Cloud Clusters (NC2) on Azure.

First, we deploy F5 Distributed Cloud Customer Edge (CE) and application workloads on Nutanix Cloud Clusters (NC2) on AWS as well as on Nutanix Cloud Clusters (NC2) on Azure. F5 Distributed Cloud App Connect addresses the issue of IP overlapping, enabling us to deploy application workloads using the same IP addresses as those in the VMware environment in the San Jose data center.

Next, we create origin pools on the F5 Distributed Cloud Console. In our example, we create two origin pools: nutanix-nc2-aws-pool for origin servers on NC2 on AWS and nutanix-nc2-azure-pool for origin servers on NC2 on Azure.
To ensure minimal disruption to application services, we update the HTTP Load Balancer for nutanix.f5-demo.com to include both new origin pools and assign them a higher weight than the existing pool vmware-sj-pool, so that the origin servers on Nutanix Cloud Clusters (NC2) on AWS and on Nutanix Cloud Clusters (NC2) on Azure receive more traffic than the origin servers in the VMware environment in the San Jose data center. Note that the web application firewall (WAF) nutanix-demo is enabled.

Finally, we remove vmware-sj-pool to complete the cloud migration.

Cloud Repatriation

In this example, xc.f5-demo.com is deployed in a multicloud environment across AWS and Azure. Our objective is to migrate the application back to the Nutanix environment in the San Jose data center from the public clouds.

To begin, we deploy F5 Distributed Cloud Customer Edge (CE) and application workloads in Nutanix AHV. We deploy the application workloads using the same IP addresses as those in the public clouds because IP overlapping is not a concern with F5 Distributed Cloud App Connect. On the F5 Distributed Cloud Console, we create an origin pool nutanix-sj-pool with the origin servers originating from the Nutanix environment in the San Jose data center.

We then update the HTTP Load Balancer for xc.f5-demo.com to include the new origin pool and assign it a higher weight than both existing pools: xc-aws-pool with origin servers on AWS and xc-azure-pool with origin servers on Azure. As a result, the origin servers in the Nutanix environment located in the San Jose data center receive more traffic than origin servers in the other pools. To ensure all applications receive the same level of security protection, the web application firewall (WAF) nutanix-demo is also applied here.

To complete the cloud repatriation, we remove xc-aws-pool and xc-azure-pool. The application service experiences minimal disruption during and after the migration.

Hybrid Multicloud

Our goal in this example is to bring xc-nutanix.f5-demo.com into a hybrid multicloud environment, as it is presently deployed solely in the San Jose data center.

We first deploy F5 Distributed Cloud Customer Edge (CE) and application workloads on Nutanix Cloud Clusters (NC2) on AWS as well as on Nutanix Cloud Clusters (NC2) on Azure. We create an origin pool with origin servers originating from each of the F5 Distributed Cloud Customer Edge (CE) sites on the F5 Distributed Cloud Console. Next, we update the HTTP Load Balancer for xc-nutanix.f5-demo.com so that it includes all origin pools: nutanix-sj-pool (Nutanix AHV in our San Jose data center), nutanix-nc2-aws-pool (NC2 on AWS), and nutanix-nc2-azure-pool (NC2 on Azure). Note that the web application firewall (WAF) nutanix-demo is applied here as well, so we can ensure a consistent level of security protection across all applications no matter where they are deployed. xc-nutanix.f5-demo.com is now in a hybrid multicloud environment.

F5 Distributed Cloud Console is the centralized console for configuration management and observability. It provides real-time metrics and analytics, which allows us to proactively monitor security events. Additionally, its integrated AI assistant delivers real-time insights and actionable recommendations for security events, enhancing our understanding of them and enabling more informed decision-making. This allows us to swiftly detect and respond to emerging threats, thereby sustaining a robust security posture.
Conclusion

Cloud application migration can be complex and challenging. F5 Distributed Cloud and Nutanix collaborate to offer a secure and streamlined solution that minimizes risk and disruption during and after the migration process, including for organizations migrating from VMware environments. This ensures a seamless cloud application transition while maintaining business continuity throughout the entire process and beyond.

Edge Client OAuth with Azure
Hello All, I tried the OAuth feature on Edge Client with Azure as the IdP. It works: I receive the access token and connect successfully. The problem is that the policy does not parse the JWT token and just stores it as a secure variable, so I have no information about the user. I can parse it with an iRule, but I expected it to be parsed automatically, like when you use an OAuth Client in the VPE. Am I missing something?

Active/Active load balancing examples with F5 BIG-IP and Azure load balancer
Background

A couple years ago I wrote an article about some practical considerations using Azure Load Balancer. Over time it's been used by customers, so I thought to add a further article that specifically discusses Active/Active load balancing options. I'll use Azure's standard load balancer as an example, but you can apply this to other cloud providers. In fact, the customer I helped most recently with this very question was running in Google Cloud. This article focuses on using standard TCP load balancers in the cloud.

Why Active/Active?

Most customers run 2x BIG-IP's in an Active/Standby cluster on-premises, and it's extremely common to do the same in public cloud. Since simplicity and supportability are key to successful migration projects, often it's best to stick with architectures you know and can support. However, if you are confident in your cloud engineering skills or if you want more than 2x BIG-IP's processing traffic, you may consider running them all Active. Of course, if your total throughput for N number of BIG-IP's exceeds the throughput that N-1 can support, the loss of a single VM will leave you with more traffic than the remaining device(s) can handle. I recommend choosing Active/Active only if you're confident in your purpose and skillset.

Let's define Active/Active

Sometimes this term is used with ambiguity. I'll cover three approaches using Azure load balancer, each slightly different:

- multiple standalone devices
- Sync-Only group using Traffic Group None
- Sync-Failover group using Traffic Group None

Each of these will use a standard TCP cloud load balancer. This article does not cover other ways to run multiple Active devices, which I've outlined at the end for completeness.

Multiple standalone appliances

This is a straightforward approach and an ideal target for cloud architectures. When multiple devices each receive and process traffic independently, the overhead work of disaggregating traffic to spread between the devices can be done by other solutions, like a cloud load balancer. (Other out-of-scope solutions could be ECMP, BGP, DNS load balancing, or gateway load balancers.) Scaling out horizontally can be a matter of simple automation, and there is no cluster configuration to maintain. The only limit to the number of BIG-IP's will be any limits of the cloud load balancer.

The main disadvantage to this approach is the fear of misconfiguration by human operators. Often a customer is not confident that they can configure two separate devices consistently over time. This is why automation for configuration management is ideal. In the real world, it's also a reason customers consider our next approach.

Clustering with a sync-only group

A Sync-Only device group allows us to sync some configuration data between devices, but not fail over configuration objects in floating traffic groups between devices, as we would in a Sync-Failover group. With this approach, we can sync traffic objects between devices, assign them to Traffic Group None, and both devices will be considered Active. Both devices will process traffic, but changes only need to be made to a single device in the group.
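For reference, a device group like this can be created from tmsh in a few commands. The following is a minimal sketch, assuming device trust has already been established between the two BIG-IPs; the hostnames and partition name are placeholders:

# Create a Sync-Only device group containing both BIG-IPs (device trust must already exist)
tmsh create cm device-group syncGroup type sync-only devices add { bigip-a.example.com bigip-b.example.com } auto-sync enabled
# Create the partition that will hold the synced traffic objects, and scope its
# folder to the Sync-Only group so its contents replicate to all members
tmsh create auth partition app1
tmsh modify sys folder /app1 device-group syncGroup
# Push the initial configuration to the group
tmsh run cm config-sync to-group syncGroup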
In the example pictured above:

- The 2x BIG-IP devices are in a Sync-Only group called syncGroup
- The /Common partition is not synced between devices
- The /app1 partition is synced between devices
- The /app1 partition has Traffic Group None selected
- The /app1 partition has the Sync-Only group syncGroup selected
- Both devices are Active and will process traffic received on Traffic Group None

The disadvantage to this approach is that you can create an invalid configuration by referring to objects that are not synced. For example, if Nodes are created in /Common, they will exist on the device on which they were created, but not on other devices. If a Pool in /app1 then references Nodes from /Common, the resulting configuration will be invalid for devices that do not have these Nodes configured.

Another consideration is that an operator must use and understand partitions. These are simple and should be embraced. However, not all customers understand the use of partitions and many prefer to use /Common only, if possible.

The big advantage here is that changes only need to be made on a single device, and they will be replicated to other devices (up to 32 devices in a Sync-Only group). The risk of inconsistent configuration due to human error is reduced. Each device has a small green "Active" icon in the top left hand of the console, reminding operators that each device is Active and will process incoming traffic on Traffic Group None.

Failover clustering using Traffic Group None

Our third approach is very similar to our second approach. However, instead of a Sync-Only group, we will use a Sync-Failover group. A Sync-Failover group will sync all traffic objects in the default /Common partition, allowing us to keep all traffic objects in the default partition and avoid the use of additional partitions. This creates a traditional Active/Standby pair for a failover traffic group, and a Standby device will not respond to data plane traffic. So how do we make this Active/Active?

When we create our VIPs in Traffic Group None, all devices will process traffic received on these Virtual Servers. One device will show "Active" and the other "Standby" in their console, but this is only the status for the floating traffic group. We don't need to use the floating traffic group, and by using Traffic Group None we have an Active/Active configuration in terms of traffic flow.

The advantage here is similar to the previous example: human operators only need to configure objects in a single device, and all changes are synced between device group members (up to 8 in a Sync-Failover group). Another advantage is that you can use the /Common partition, which was not possible with the previous example.

The main disadvantage here is that the console will show the word "Active" and "Standby" on devices, and this can confuse an operator that is familiar only with Active/Standby clusters using traffic groups for failover. While this third approach is a very legitimate approach and technically sound, it's worth considering if your daily operations and support teams have the knowledge to support this.

Other considerations

Source NAT (SNAT)

It is almost always a requirement that you SNAT traffic when using Active/Active architecture, and this especially applies to the public cloud, where our options for other networking tricks are limited. If you have a requirement to see true source IP and need to use multiple devices in Active/Active fashion, consider using Azure or AWS Gateway Load Balancer options.
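On the BIG-IP itself, enabling SNAT is typically a one-line change per virtual server. A minimal tmsh sketch, where the virtual server and SNAT pool names are placeholders:

# Use automap SNAT so return traffic comes back to the same BIG-IP
tmsh modify ltm virtual vs_app source-address-translation { type automap }
# Or, for higher concurrent connection counts, reference a dedicated SNAT pool
tmsh modify ltm virtual vs_app source-address-translation { type snat pool snat_pool_app }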
Alternative solutions like NGINX and F5 Distributed Cloud may also be worth considering in high-value, hard-requirement situations.

Alternatives to a cloud load balancer

This article is not referring to F5 with Azure Gateway Load Balancer, or to F5 with AWS Gateway Load Balancer. Those gateway load balancer solutions are another way for customers to run appliances as multiple standalone devices in the cloud. However, they typically require routing, not proxying the traffic (i.e., they don't allow destination NAT, which many customers intend with BIG-IP).

This article is also not referring to other ways you might achieve Active/Active architectures, such as DNS-based high availability, or using routing protocols like BGP or ECMP. Note that using multiple traffic groups to achieve Active/Active BIG-IP's - the traditional approach on-prem or in private cloud - is not practical in public cloud, as briefly outlined below.

Failover of traffic groups with Cloud Failover Extension (CFE)

One option for Active/Standby high availability of BIG-IP is to use the CFE, which can programmatically update IP addresses and routes in Azure at time of device failure. Since CFE does not support Active/Active scenarios, it is appropriate only for failover of a single traffic group (i.e., Active/Standby).

Conclusion

Thanks for reading! In general I see that Active/Standby solutions work for many customers, but if you are confident in your skills and have a need for Active/Active F5 BIG-IP devices in the cloud, please reach out if you'd like me to walk you through these options and explore any other possibilities.

Related articles

- Practical Considerations using F5 BIG-IP and Azure Load Balancer
- Deploying F5 BIG-IP with Azure Cross-Region Load Balancer

Running F5 with managed Azure RedHat OpenShift
Summary

In early 2020, Microsoft and RedHat announced a new release of Azure RedHat OpenShift. This article shows how to set up F5 to integrate with this offering. This is also an easy demo.

Background

OpenShift is now available as a managed service in Azure called ARO (as in, Azure RedHat OpenShift). Microsoft has published a tutorial to deploy a cluster into an existing virtual network, but this article shows a way to deploy an environment with F5 integrated in a single deployment. Use this for demo or learning purposes.

Deploying Azure RedHat OpenShift (ARO)

You can run OpenShift on your own servers on-premises or in the cloud. For example, these instructions were the way I first learned to deploy a cluster on AWS. Eric Ji from F5 recently published a guide that walks through these instructions and includes deployment of F5 Container Ingress Services. This method is supported and gives you a high level of control.

ARO is a deployment option where your servers are managed by Azure. Patching, upgrading, repair, and DR are all handled for you, along with joint support from Microsoft and RedHat. Microsoft has done a great job of documenting the process to deploy ARO in the tutorial already mentioned. If you were to follow their instructions, after about 35 minutes your deployment would produce something like this (image taken straight from OpenShift's announcement article):

Microsoft's instructions to create the demo above require that you have the User Access Administrator role, or that you pass in the credentials of a ServicePrincipal that has contributor rights over the Resource Group in which the existing VNET resides.

Deploying F5 + ARO

Another way to build out the same environment in Azure is this automated demo, which includes the deployment of F5 and also takes around 35 minutes to complete. Click here to deploy this demo: https://github.com/mikeoleary/azure-redhat-openshift-f5

This does not require a User Access Administrator, but does require that you have a ServicePrincipal with Contributor permissions on the subscription. A ServicePrincipal is a principal in Azure ActiveDirectory to which you can assign roles at a scope like Resource Group or Subscription. For this demo, I recommend creating a ServicePrincipal and then assigning it the role of Contributor over your Subscription, or the Resource Group in which you intend to deploy. If you follow this demo, you'll have an environment that looks more like this:

This demo adds the following resources to the environment. You could add these resources manually yourself, if you have an existing OpenShift environment.

- Adds 3x subnets for the F5 BIG-IP VM
- Deploys F5 VM's into those subnets using this ARM template
- Adds the BIG-IP into the OpenShift network following these instructions
- Installs CIS in OpenShift following these instructions
- Deploys an app into OpenShift; this includes a Route resource that is detected by CIS, and CIS then populates the app's pod IP addresses as pool members in BIG-IP
- Output values are added to the deployment, for users to verify successful completion

Post-deployment verification

This demo will deploy an app in OpenShift that is exposed by an OpenShift Route, and this requires that you manually change your DNS record on the Internet to point to the IP address value of the deployment output called publicExternalLoadBalancerAddress.
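For context, the Route that CIS detects in this demo looks roughly like the sketch below; the hostname, namespace, and service name are placeholders, and the host value is the DNS name you would point at publicExternalLoadBalancerAddress:

# Hypothetical example of a Route of the kind CIS watches for
oc apply -f - <<EOF
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: demo-app
  namespace: demo
spec:
  host: demo.example.com
  to:
    kind: Service
    name: demo-app
  port:
    targetPort: 80
EOF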
After you have made this DNS change (optionally, use a local hosts record), you should see your demo app available on the Internet, like this:

The outputs of this demo will also give you the public URLs of the BIG-IPs and your OpenShift cluster. You can log in to all of these to see the configuration at work.

Deleting your environment

Don't forget to delete your environment if you are just testing. I find the easiest way to do this is just to delete the Resource Group into which you deployed originally. You can delete individual resources via the Azure portal if you choose, but do remember that the Read-Only Resource Group that is created by ARO is deleted by deleting the OpenShift cluster resource, which is in the Resource Group into which you originally deployed.

Conclusion

To summarize, ARO allows us to deploy an OpenShift environment quickly. Integration with F5 is much like an on-prem installation of OpenShift. You integrate the BIG-IP with the OpenShift network, then deploy CIS so that it can configure the BIG-IP to expose your applications. Thanks for reading! Any questions, please leave a comment and I'll respond, thanks!

Create a BIG-IP HA Pair in Azure
Use an Azure ARM template to create a high availability (active-standby) pair of BIG-IP Virtual Edition instances in Microsoft Azure. When one BIG-IP VE goes standby, the other becomes active and the virtual server address is reassigned from one external NIC to another. Today, let's walk through how to create a high availability pair of BIG-IP VE instances in Microsoft Azure. When we're done, we'll have an active-standby pair of BIG-IP VEs.

To start, go to the F5 Networks Github repository. Click F5-azure-arm-templates. Then go to Supported>failover. You have several options at this point. You can choose which templates to use based on your needs: failing over via API calls, via upstream load balancers, and NIC counts. Read each readme to determine your desired deployment strategy. You can deploy into existing subnets with IP addresses already defined, but to see how it works, let's deploy a new stack. Click new stack and scroll down to the Deploy button.

If you have a trial or production license from F5, you can use the BYOL or BIG-IQ as license server options, but in this case we're going to choose the PAYG option. Click Deploy and the template opens in the Azure portal. Now we simply fill out the fields. We'll create a new Resource Group and set a password for the BIG-IP VEs.

When you get to the questions:

- The DNS label is used as part of the URL.
- Instance Name is just the name of the VM in Azure.
- Instance Type determines how much memory and CPU you'll have.
- Image Name determines how many BIG-IP modules you can run (and you can choose the latest BIG-IP version).
- Licensed Bandwidth determines the maximum throughput of the traffic going through BIG-IP.
- Select the Number of External IP addresses (we'll start with one but can add more later). For instance, if you plan on running more than one application behind the BIG-IP, then you'll need the appropriate external IP addresses.
- Vnet Address Prefix is for the address ranges of your subnets (we'll leave at default).
- The next 3 fields (Tenant ID, Client ID, Service Principal Secret) have to do with security. Rather than using your own credentials to modify resources in Azure, you can create an Active Directory application and assign permissions to it.
- The last two fields also go together. Managed Routes let you route traffic from other external networks through the BIG-IPs. The Route Table Tag means that anytime this tag is found in the route table, routes that have this destination are updated so that the next hop is the IP address of the active BIG-IP VE. This is useful if you want all outbound traffic to go through the BIG-IP or if you want to send traffic from a bunch of different Vnets through the BIG-IP.
- We'll leave the rest as default, but the Restricted Src Address is a good way to limit which IP addresses on my network are allowed to connect to the BIG-IP.

We'll agree to the terms and click Purchase. We're redirected to the Dashboard with the Deployment in Progress indicator. This takes about 15 minutes. Once finished we'll go check all the resources in the Resource Group.

Let's find out where the virtual server address is located since this is associated with one of the external NICs, which have 'ext' in the name. Click the one you want. Then click IP Configuration under Settings. When you look at the IP Configuration for these NICs, whenever the NIC has two IP addresses that's the NIC for the active BIG-IP. The Primary IP address is the BIG-IP Self IP and the Secondary IP is the virtual server address.
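The same check can also be done from the Azure CLI rather than the portal; a quick sketch, where the resource group and NIC names are placeholders:

# The external NIC that currently shows an extra (secondary) IP configuration
# holds the virtual server address and therefore belongs to the active BIG-IP.
az network nic ip-config list --resource-group myResourceGroup --nic-name bigip0-ext-nic --output table
az network nic ip-config list --resource-group myResourceGroup --nic-name bigip1-ext-nic --output table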
If we look at the other external NIC, we'll see that it only has one Self IP, which is the Primary, and it doesn't have the Secondary virtual server address. The virtual server address is assigned to the active BIG-IP. When we force the active BIG-IP to standby, the virtual server address is reassigned from one NIC to the other. To see this, we'll log into the BIG-IPs and on the active BIG-IP, we'll click Force to Standby, and the other BIG-IP becomes Active. When we go back to Azure, we can see that the virtual server IP is no longer associated with the external NIC. And if we wait a few minutes, we'll see that the address is now associated with the other NIC. So basically, BIG-IP HA in the Azure cloud works by reassigning the virtual server address from one BIG-IP to another. Thanks to our TechPubs group and check out the demo video.

ps

Distributed Cloud Support for NAS Migrations from On-Premises Approaches to Azure NetApp Files
F5 Distributed Cloud (XC) Secure Multicloud Networking (MCN) connects and secures distributed applications across offices, data centers, and various cloud platforms. Frequently the traffic is web-based, meaning it is often carried on ports like TCP port 443; however, other traffic types are also prevalent in an enterprise's traffic mix. Examples include SSH or relational database protocols. One major component of networked traffic is Network-Attached Storage (NAS), a class of protocols in the past frequently carried over LANs between employees in offices and co-located NAS appliances, perhaps in wiring closets or server rooms. An example of such an appliance would be the ONTAP family from NetApp, which can take on physical or virtual form factors.

NAS protocols are particularly useful as they integrate file stores into operating systems such as Microsoft Windows or Linux distributions as directories, mounted for easy access to files at any time, often permanently. This contrasts with SSH file transfers, which are often ephemeral actions and not so tightly integral to host operating system health. With the rise of remote work, NAS appliances often see increasing file reads and writes to these directories traversing wide-area links. In fact, one study analyzing fundamental traffic changes due to the Covid-19 pandemic saw a 22 percent increase in file transfer protocol (FTP) traffic in a single year, suggesting access to files has undergone significant foundational changes in recent years.

Distributed Cloud and the Movement towards Centralized Enterprise Storage

A traditional concern about serving NAS files to offices from a centralized point, such as a cloud-instantiated file repository, is latency and reliability. With F5's Distributed Cloud offering a 12 Tbps aggregate backbone and dedicated RE-to-RE links, the behavior of the network component is both highly durable and performant. Combined with the multiple 9's of guaranteed uptime of modern cloud services, the efficiencies of a centralized corporate file distribution point make the move towards cloud-served NAS solutions compelling. Replacing on-premises storage appliances with a secure, networked service eliminates the need to maintain costly spares, which are effectively a shadow NAS appliance infrastructure, along with their onerous RMA procedures. All of this enables accomplishing the goal of shrinking/greening office wiring closets.

To demonstrate this centralized model for a NAS architecture, a configuration was created whereby a west coast simulated office was connected by F5 Distributed Cloud to Azure NetApp Files (ANF) instantiated in the Azure East-2 region. ANF is Microsoft Azure's newest native file serving solution, managed by NetApp, with data throughputs that increase in lockstep with the amount of reserved storage pool capacity. Different quality of service (QoS) levels are selectable by the consumer. In the streamlined ANF configuration workflow, where various transaction latency thresholds may be requested, even the most demanding relational database operations are typically accommodated. Microsoft offers additional details on ANF here; however, this article should serve to sufficiently demonstrate the ANF and F5 Distributed Cloud Secure MCN solutions for most readers.
Distributed Cloud and Azure NetApp Files Deployment Example

NAS in the enterprise today largely involves use of either NFS or SMB protocols, both of which can be used within Windows and Linux environments and make remote directories appear and perform as if local to users. In our example, a western US point of presence was leveraged to serve as the simulated remote office, and standard Linux hosts served as the consumers of NetApp volumes. In the east, a corporate VNET was deployed in an Azure resource group (RG) in US-East-2, with one subnet delegated to provide Azure NetApp Files (ANF).

To securely connect the west coast office to the eastern Azure ANF service, F5 Distributed Cloud Secure MCN was utilized to create a Layer 3 multi-cloud network offering. This is achieved by easily dropping an F5 customer edge (CE) virtual appliance into both the office and the Azure VNET in the east. The CE is a 2-port security appliance. The inside interfaces on both CEs were attached to a global virtual network with exclusive layer-3 associations, allowing simple connectivity while fully preserving privacy. In keeping with the promise of SaaS, Distributed Cloud users require no routing protocol setup. The solution takes care of the control plane, including routing and encryption. This concept could be scaled to hundreds of offices, if equipped with CEs, and easily attached to the same global virtual network. CEs, at boot-up, automatically attach via IPSec (or SSL) tunnels to geographically close F5 backbone nodes, called regional edge (RE) sites. Like tunnel establishment, routing tables are updated under the hood to allow for a turn-key security relationship between Azure NetApp Files volumes and consuming offices. The setup is depicted as follows:

Setup Azure NetApp Files (ANF) Volumes in Minutes

To put the centralized approach to offering NAS volumes for remote offices or locations into practice, a series of quick steps are undertaken, which can all be done through the standard Microsoft Azure portal. The steps are listed below, with screenshots provided for key points in the brief process:

- If not starting from an existing Resource Group (RG), create a new RG and add an Azure VNET to it.
- Delegate one subnet in your VNET to support ANF. Under "Delegate Subnet to a Service" select from the pull-down list the entry "Microsoft.NetApp/volumes".
- Within the Resource Group, choose "Create" and make a NetApp account. This will appear in the Azure Marketplace listings as "Azure NetApp Files".
- In your NetApp account, under "Storage service" create a capacity pool. The pool should be sized appropriately; larger is typically better, since numerous volumes, supporting your choice of NFSv3/v4 and SMB protocols, will be created from this single, large disk pool.
- Create your first volume, selecting the size, NAS protocols to support, and QoS parameters that meet your business requirements.

As seen below, when adding a capacity pool simply follow the numerical sequence to add your pool, with a newly created sample 2 TiB pool highlighted; 1,024 TiB (1 PiB) are possible (click image to enlarge). Interestingly, the capacity pool shown is the "Standard" service level, as opposed to "Premium" and "Ultra". With a QoS type of Auto selected, Azure NetApp Files provides increasing throughput in terms of megabytes per second as the number of TiB in the pool increases. The throughput also increases with service levels; for Standard, as shown, 8 megabytes per second per TiB will be allocated.
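For readers who prefer the command line, the same account, capacity pool, and volume can be created with the Azure CLI. A rough sketch, assuming a VNET with a delegated subnet already exists; all resource names are placeholders, and parameter units may vary by CLI version (check az netappfiles --help):

# NetApp account
az netappfiles account create --resource-group myRG --account-name f5-demo-anf --location eastus2

# 2 TiB Standard capacity pool (size expressed in TiB here)
az netappfiles pool create --resource-group myRG --account-name f5-demo-anf --pool-name pool1 --service-level Standard --size 2 --location eastus2

# 100 GiB NFSv3 volume carved from the pool, attached to the delegated subnet
az netappfiles volume create --resource-group myRG --account-name f5-demo-anf --pool-name pool1 --name f5-distributed-cloud-vol-001 --service-level Standard --usage-threshold 100 --file-path f5-distributed-cloud-vol-001 --protocol-types NFSv3 --vnet corp-vnet --subnet anf-subnet --location eastus2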
Beyond throughput, ANF also provides the lowest latency averages for reads and writes in the Azure portfolio of storage offerings. As such, ANF is a very good fit for database deployments that must see constrained average latency for mission-critical transactions. Deeper discussion around ANF service levels may be explored through the Microsoft document here.

The next screenshot shows the simple click-through sequence for adding a volume to the capacity pool: simply click on Volumes and the "+Add volume" button. A resulting sample volume is displayed in the figure with key parameters highlighted. In the above volume ("f5-distributed-cloud-vol-001") the NAS protocol selected was NFSv3 and the size of the volume ("Quota") was set to 100 GiB.

Setup F5 Distributed Cloud Office-to-Azure Connectivity

To access the volume in a secured and highly responsive manner from corporate headquarters, remote offices, or existing data centers, three items from F5 Distributed Cloud are required:

- A customer edge (CE) node, normally with 2 ports, must be deployed in the Azure RG VNET. This establishes the Azure instance as a "site" within the Distributed Cloud dashboard. Hub and spoke architectures may also be used if required, where VNET peering can also allow the secure multi-cloud network (MCN) solution to operate seamlessly.
- A CE is deployed at a remote office or datacenter where file storage services are required by various lines of business. The CE is frequently deployed as a virtual appliance or installed on a bare metal server and typically has 2 ports.
- To instantiate a layer-3 MCN service, the inside ports of the two CEs are "joined" to a virtual global network created by the enterprise in the Distributed Cloud console, although REST API and Terraform are also deployment options.

By having each inside port of the Azure and office CE's joined to the same virtual network, the "inside" subnets can now communicate with each other securely, with traffic normally exchanged over encrypted high-speed IPSec tunnels into the F5 XC global fabric. The following screenshot demonstrates adding the Azure CE inside interface to a global virtual network, allowing MCN connectivity to remote office clients requiring access to volumes. Further restrictions to prevent unauthorized clients are found within the NAS protocols themselves, such as simple export policies in NFS and ACL rules in SMB/CIFS, which can be configured quickly within ANF.

Remote Office Access - Establish Read/Write File Access to Azure ANF over F5 Distributed Cloud

With both ANF configured and F5 Distributed Cloud now providing a layer-3 multicloud network (MCN) solution to patch enterprise offices to the centralized storage, some confirmation of the solution working as expected was desired. First off, a choice in protocols was made. When configuring ANF, the normal choices for access are NFSv3/v4 or SMB/CIFS, or both protocols concurrently. Historically, Microsoft hosts made use of SMB/CIFS and Linux/Unix hosts preferred NFS; however, today both protocols are used throughout enterprises, one example being long-time SAMBA server (SMB/CIFS) support in the world of Linux. Azure NetApp Files will provide all the necessary command samples to get hosts connected without difficulty.
For instance, to mount the volume to a folder off the Linux user home directory, such as the sample folder "f5-distributed-cloud-vol-001", per the ANF suggestion the following one command will connect the office Linux host to the central storage in Azure-East-2:

sudo mount -t nfs -o rw,hard,rsize=262144,wsize=262144,vers=3,tcp 10.0.9.4:/f5-distributed-cloud-vol-001 f5-distributed-cloud-vol-001

At this point the volume is available for day-to-day tasks, including read and write operations, as if the NAS solution were local to the office, often literally down the hallway.

Remote Office Access - Demonstration of Azure ANF over F5 Distributed Cloud in Action

To repeatedly exercise file writes from a west coast US office to an east coast ANF deployment in Azure-East-2 (Richmond, Virginia), a simple shell script was used to perpetually write a file to a volume, delete it, and repeat over time. The script wrote a file of 20,000 bytes to the ANF service, waited a few seconds, and then removed the file before beginning another cycle. At the lowest common denominator, packet analysis for the ensuing traffic from the western US office will indicate both network and application latency sample values. As depicted in the following Wireshark trace, the TCP response to a transmitted segment carrying an NFS command was observed to be just 74.5 milliseconds. This prompt round-trip latency for a cross-continent data plane suggests a performant Distributed Cloud MCN service level. This is easily seen as the offset from the reference timestamp (time equal to zero) of the NFS v3 Create Call. Click on image to expand.

The NAS response from ANF (packet 185) arrives less than 1 millisecond later, suggesting a very responsive, well-tuned NFS control plane offered by ANF. To measure the actual write time of a file from west coast to east coast, the following trace demonstrates the 20,000-byte file write exercise from the shell script. In this case, the TCP segments making up the file, specifically the large packet body lengths called out in the screenshot, are delivered efficiently without TCP retransmissions, TCP zero window events, or any other indicators of layer 3 and 4 health concerns. The entirety of the write is measured at the packet layer to take only 150.8 milliseconds.

Since packet-level analysis is not the most turnkey, easy method to monitor file read and write performance, a set of Linux and Windows utilities can also be leveraged. The Linux utility nfsiostat was concurrently used with the test file writes and produced similar, good latency measurements. Nfsiostat monitoring of the file write testing, from west coast to east coast, for the 20,000-byte file indicated an average write time to ANF of 151 milliseconds. The measurements presented here are simply observational, to present rapid, digestible techniques for readers interested in service assurance for running ANF over an XC L3 MCN offering. For more rigorous monitoring treatments, Microsoft provides guidance on performing one's own measurements of Azure NetApp Files here.

Summary

As enterprise-class customers continue to rapidly look towards cloud for compute performance, GPU access, and economies-of-scale savings for key workloads, the benefits of a centralized, scalable storage counterpart to this story exist. F5 Distributed Cloud offers the reach and performance levels to securely tie existing offices and data centers to cloud-native storage solutions.
One example of this approach to modernizing storage was covered in this article: the turn-key ability to begin transitioning from traditionally on-premises NAS appliances to cloud-native scalable volumes. The Azure NetApp Files approach to serving read/write volumes allows modern hosts, including Windows and Linux distributions, to utilize virtually unlimited folder sizes with service levels adjustable to business needs.

Deploying F5 WAF in front of Azure Web App Services
Does anyone know of a supported architecture for deploying an F5 WAF in Azure in front of Azure Web App Services to handle SSL and ASM services for traffic destined for an Azure Web App Service (an App Service, not just an app server running in Azure)?

F5 High Availability - Public Cloud Guidance
This article will provide information about BIG-IP and NGINX high availability (HA) topics that should be considered when leveraging the public cloud. There are differences between on-prem and public cloud, such as cloud provider L2 networking. These differences lead to challenges in how you address HA, failover time, peer setup, scaling options, and application state.

Topics Covered:

- Discuss and Define HA
- Importance of Application Behavior and Traffic Sizing
- HA Capabilities of BIG-IP and NGINX
- Various HA Deployment Options (Active/Active, Active/Standby, auto scale)
- Example Customer Scenario

What is High Availability?

High availability can mean many things to different people. Depending on the application and traffic requirements, HA requires dual data paths, redundant storage, redundant power, and compute. It means the ability to survive a failure, maintenance windows should be seamless to the user, and the user experience should never suffer...ever!

Reference: https://en.wikipedia.org/wiki/High_availability

So what should HA provide?

- Synchronization of configuration data to peers (ex. config objects)
- Synchronization of application session state (ex. persistence records)
- Enable traffic to fail over to a peer
- Locally, allow clusters of devices to act and appear as one unit
- Globally, disburse traffic via DNS and routing

Importance of Application Behavior and Traffic Sizing

Let's look at a common use case... "gaming app, lots of persistent connections, client needs to hit same backend throughout entire game session"

Session State

The requirement of session state is common across applications using methods like HTTP cookies, F5 iRule persistence, JSessionID, IP affinity, or hash. The session type used by the application can help you decide what migration path is right for you. Is this an app more fitting for a lift-n-shift approach...Rehost? Can the app be redesigned to take advantage of all native IaaS and PaaS technologies...Refactor?

Reference: 6 R's of a Cloud Migration

- Application session state allows users to have a consistent and reliable experience
- Auto scaling L7 proxies (BIG-IP or NGINX) keep track of session state
- BIG-IP can only mirror session state to the next device in the cluster
- NGINX can mirror state to all devices in the cluster (via zone sync)

Traffic Sizing

The cloud provider does a great job with things like scaling, but there are still cloud provider limits that affect sizing and machine instance types to keep in mind. BIG-IP and NGINX are considered network virtual appliances (NVA). They carry quota limits like other cloud objects.

- Google GCP VPC Resource Limits
- Azure VM Flow Limits
- AWS Instance Types

Unfortunately, not all limits are documented. Key metrics for L7 proxies are typically SSL stats, throughput, connection type, and connection count. Collecting these application and traffic metrics can help identify the correct instance type. We have a list of the F5 supported BIG-IP VE platforms on F5 CloudDocs.

F5 Products and HA Capabilities

BIG-IP HA Capabilities

BIG-IP supports the following HA cluster configurations:

- Active/Active - all devices processing traffic
- Active/Standby - one device processes traffic, others wait in standby
- Configuration sync to all devices in cluster
- L3/L4 connection sharing to next device in cluster (ex. avoids re-login)
- L5-L7 state sharing to next device in cluster (ex. IP persistence, SSL persistence, iRule UIE persistence)
Reference: BIG-IP High Availability Docs

NGINX HA Capabilities

NGINX supports the following HA cluster configurations:

- Active/Active - all devices processing traffic
- Active/Standby - one device processes traffic, others wait in standby
- Configuration sync to all devices in cluster
- Mirroring connections at L3/L4 not available
- Mirroring session state to ALL devices in cluster using the Zone Synchronization Module (NGINX Plus R15)

Reference: NGINX High Availability Docs

HA Methods for BIG-IP

In the following sections, I will illustrate 3 common deployment configurations for BIG-IP in public cloud.

- HA for BIG-IP Design #1 - Active/Standby via API
- HA for BIG-IP Design #2 - A/A or A/S via LB
- HA for BIG-IP Design #3 - Regional Failover (multi region)

HA for BIG-IP Design #1 - Active/Standby via API (multi AZ)

This failover method uses API calls to communicate with the cloud provider and move objects (IP address, routes, etc.) during failover events. The F5 Cloud Failover Extension (CFE) for BIG-IP is used to declaratively configure the HA settings.

- Cloud provider load balancer is NOT required
- Fail over time can be SLOW!
- Only one device actively used (other device sits idle)
- Failover uses API calls to move cloud objects, times vary (see CFE Performance and Sizing)

Key Findings:

- Google API failover times depend on number of forwarding rules
- Azure API slow to disassociate/associate IPs to NICs (remapping)
- Azure API fast when updating routes (UDR, user defined routes)
- AWS reliable with API regarding IP moves and routes

Recommendations:

- This design with multi AZ is more preferred than single AZ
- Recommend when "traditional" HA cluster required or Lift-n-Shift...Rehost
- For Azure (based on my testing)...
  - Recommend using Azure UDR versus IP failover when possible
  - Look at Failover via LB example instead for Azure
  - If API method required, look at DNS solutions to provide further redundancy

HA for BIG-IP Design #2 - A/A or A/S via LB (multi AZ)

- Cloud LB health checks the BIG-IP for up/down status
- Faster failover times (depends on cloud LB health timers)
- Cloud LB allows A/A or A/S

Key difference:

- Increased network/compute redundancy
- Cloud load balancer required

Recommendations:

- Use "failover via LB" if you require faster failover times
- For Google (based on my testing)...
  - Recommend against "via LB" for IPSEC traffic (Google LB not supported)
  - If load balancing IPSEC, then use "via API" or "via DNS" failover methods

HA for BIG-IP Design #3 - Regional Failover via DNS (multi AZ, multi region)

- BIG-IP VE active/active in multiple regions
- Traffic disbursed to VEs by DNS/GSLB
- DNS/GSLB intelligent health checks for the VEs

Key difference:

- Cloud LB is not required
- DNS logic required by clients
- Orchestration required to manage configs across each BIG-IP
- BIG-IP standalone devices (no DSC cluster limitations)

Recommendations:

- Good for apps that handle DNS resolution well upon failover events
- Recommend when cloud LB cannot handle a particular protocol
- Recommend when customer is already using DNS to direct traffic
- Recommend for applications that have been refactored to handle session state outside of BIG-IP
- Recommend for customers with in-house skillset to orchestrate (Ansible, Terraform, etc.)

HA Methods for NGINX

In the following sections, I will illustrate 2 common deployment configurations for NGINX in public cloud.
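Both NGINX designs below lean on the zone sync capability noted above for sharing session state across the cluster. A minimal NGINX Plus configuration sketch, where the peer hostnames, ports, and cookie name are placeholders:

# Cluster-wide state sharing (NGINX Plus R15+): each instance listens for zone
# sync traffic and connects to its peers.
stream {
    server {
        listen 9000;
        zone_sync;
        zone_sync_server nginx-a.example.internal:9000;
        zone_sync_server nginx-b.example.internal:9000;
    }
}

http {
    upstream backend {
        zone backend 64k;
        server 10.0.2.10:80;
        server 10.0.2.11:80;
        # Sticky-learn persistence records live in a shared memory zone that the
        # sync parameter replicates to every cluster member
        sticky learn create=$upstream_cookie_sessionid lookup=$cookie_sessionid zone=client_sessions:1m sync;
    }
}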
- HA for NGINX Design #1 - Active/Standby via API
- HA for NGINX Design #2 - Auto Scale Active/Active via LB

HA for NGINX Design #1 - Active/Standby via API (multi AZ)

- NGINX Plus required
- Cloud provider load balancer is NOT required
- Only one device actively used (other device sits idle)
- Only available in AWS currently

Recommendations:

- Recommend when "traditional" HA cluster required or Lift-n-Shift...Rehost

Reference: Active-Passive HA for NGINX Plus on AWS

HA for NGINX Design #2 - Auto Scale Active/Active via LB (multi AZ)

- NGINX Plus required
- Cloud LB health checks the NGINX
- Faster failover times

Key difference:

- Increased network/compute redundancy
- Cloud load balancer required

Recommendations:

- Recommended for apps fitting a migration type of Replatform or Refactor

Reference: Active-Active HA for NGINX Plus on AWS, Active-Active HA for NGINX Plus on Google

Pros & Cons: Public Cloud Scaling Options

Review this handy table to understand the high level pros and cons of each deployment method.

Example Customer Scenario #1

As a means to make this topic a little more real, here is a common customer scenario that shows you the decisions that go into moving an application to the public cloud. Sometimes it's as easy as a lift-n-shift; other times you might need to do a little more work. In general, public cloud is not on-prem and things might need some tweaking. Hopefully this example will give you some pointers and guidance on your next app migration to the cloud.

Current Setup:

- Gaming applications
- F5 hardware BIG-IP VIPRIONs on-prem
- Two data centers for HA redundancy
- iRule heavy configuration (TLS encryption/decryption, payload inspections)
- Session Persistence = iRule Universal Persistence (UIE), and other methods
- Biggest app: 15K SSL TPS, 15Gbps throughput, 2 million concurrent connections, 300K HTTP req/sec (L7 with TLS)

Requirements for Successful Cloud Migration:

- Support current traffic numbers
- Support future target traffic growth
- Must run in multiple geographic regions
- Maintain session state
- Must retain all iRules in use

Recommended Design for Cloud Phase #1:

- Migration Type: Hybrid model, on-prem + cloud, and some Rehost
- Platform: BIG-IP (retaining iRules means BIG-IP is required)
- Licensing: High Performance BIG-IP, which unlocks additional CPU cores past 8 (up to 24) for extra traffic and SSL processing
- Instance type: check F5 supported BIG-IP VE platforms for accelerated networking (10Gb+)
- HA method: Active/Standby and multi-region with DNS
  - iRule Universal persistence only mirrors to the next device, so keep cluster size to 2
  - Scale horizontally via additional HA clusters and DNS
  - Clients pinned to a region via DNS (on-prem or public cloud)
  - Inside a region, the local proxy cluster shares state

This example comes up in customer conversations often. Based on customer requirements, in-house skillset, current operational model, and time frames, there is one option that is better than the rest. A second design phase lends itself to more of a Replatform or Refactor migration type. In that case, more options can be leveraged to take advantage of cloud-native features. For example, changing the application persistence type from iRule UIE to cookie would allow BIG-IP to avoid keeping track of state. Why? With cookies, the client keeps track of that session state. The client receives a cookie, passes the cookie to the L7 proxy on successive requests, the proxy checks the cookie value, and sends the request to the backend pool member. The requirement for the L7 proxy to share session state is now removed.

Example Customer Scenario #2

Here is another customer scenario.
This time the application is a full suite of multimedia content. In contrast to the first scenario, this one will illustrate the benefits of rearchitecting various components, allowing greater flexibility when leveraging the cloud. You still must factor in in-house skill set, project time frames, and other important business (and application) requirements when deciding on the best migration type.

Current Setup:

- Multimedia (Gaming, Movie, TV, Music) platform
- BIG-IP VIPRIONs using vCMP on-prem
- Two data centers for HA redundancy
- iRule heavy (Security, Traffic Manipulation, Performance)
- Biggest App: oAuth + Cassandra for token storage (entitlements)

Requirements for Successful Cloud Migration:

- Support current traffic numbers
- Elastic auto scale for seasonal growth (ex. holidays)
- VPC peering with partners (must also bypass Web Application Firewall)
- Must support current or similar traffic manipulation in the data plane
- Compatibility with existing tooling used by the business

Recommended Design for Cloud Phase #1:

- Migration Type: Repurchase, migrating BIG-IP to NGINX Plus
- Platform: NGINX, with iRules converted to JS or LUA
- Licensing: NGINX Plus
- Modules: GeoIP, LUA, JavaScript
- HA method: N+1, autoscaling via Native LB, Active Health Checks

This is a great example of a Repurchase in which application characteristics can allow the various teams to explore alternative cloud migration approaches. This scenario describes a phase one migration of converting BIG-IP devices to NGINX Plus devices. This example assumes the BIG-IP configurations can be somewhat easily converted to NGINX Plus, and it also assumes there is available skillset and project time allocated to properly rearchitect the application where needed.

Summary

OK! Brains are expanding...hopefully? We learned about high availability and what that means for applications and user experience. We touched on the importance of application behavior and traffic sizing. Then we explored the various F5 products, how they handle HA, and HA designs. These recommendations are based on my own lab testing and interactions with customers. Every scenario will carry its own requirements, and all options should be carefully considered when leveraging the public cloud. Finally, we looked at customer scenarios and discussed requirements and design proposals. Fun!

Resources

Read the following articles for more guidance specific to the various cloud providers.

- Advanced Topologies and More on Highly Available Services
- Lightboard Lessons - BIG-IP Deployments in Azure
- Google and BIG-IP
- Failing Faster in the Cloud
- BIG-IP VE on Public Cloud
- High-Availability Load Balancing with NGINX Plus on Google Cloud Platform
- Using AWS Quick Starts to Deploy NGINX Plus
- NGINX on Azure

How I did it - "Integrating Fortanix SDKMS with the BIG-IP"
Let Me Ask You a Question

I need to implement a key management service (KMS) to manage my organization's TLS keys. The KMS/HSM needs to be cloud-agnostic, secure, scalable, and available to handle crypto operations offloaded from web applications deployed on a variety of platforms across the globe. What's more, there is a requirement that the organization maintains full control and ownership of the KMS and its operations, so using a SaaS offering is not an option. Here's the question: what can I use?

Why Fortanix?

The above question isn't an easy one to answer. When I hear phrases like "scalable and highly available" and "...across the globe", I immediately start looking at the public cloud. But I still need to be cloud agnostic and maintain full control, so cloud HSMs and SaaS offerings don't fit the bill. To address the above requirements, I turned to one of F5's partners and deployed the Fortanix Self-Defending Key Management Service (SDKMS). The Fortanix SDKMS system checks all the boxes, including:

- Cloud Agnostic - I am able to make use of KMS services across all public cloud platforms as well as on-premises.
- Secure/Scalable/Highly-Available - Deploys in the public cloud on hardware that utilizes Intel SGX technology and runs every operation in HSM-grade security.
- Full Control - Deploys in your infrastructure and provides a web-based UI for centralized management.

Solution Overview

In this article, we'll walk through configuring the BIG-IP to offload TLS crypto operations to a Fortanix SDKMS. The deployment process is quite similar to F5's integration with Equinix SmartKey (the Fortanix SDKMS SaaS offering). The following steps are based upon F5's guidance for implementing a network HSM. Okay, let's take a look at how I did it.

Prerequisites

- F5 BIG-IP LTM 14.1.0 or later - Virtual Edition (VE) utilized for this article. Both hardware and virtual edition platforms support network HSM integration. Additionally, you will need to provide a license covering the network HSM module.
- Fortanix SDKMS Cluster - We're going to go right to the source (Fortanix installation guidance). Setting up the SDKMS cluster is relatively straightforward and runs on Azure's new DC-series instances (currently in preview). Currently, the DC-series is available for public preview in the US East region only. The cool thing about the DC-series is that it makes use of Intel SGX technology; in other words, it allows Azure infrastructure customers to make use of SGX. Fortanix SDKMS makes use of the underlying SGX technology to provide secure, scalable data at rest.

Step 1 - Create SDKMS Application

Assuming the prerequisites have been met (i.e., I have a Fortanix SDKMS stood up), the first thing I need to do is create an application object in SDKMS. The application object can access certificates, keys, and secrets that will be used by my application (delivered via the BIG-IP).

A. Login to the Fortanix SDKMS UI. Select the 'Apps' icon from the left blade and then the '+' to open a new application form.
B. Provide the name of the application and select 'API Key' for the authentication method.
C. Create a group and assign the application to the group. The group represents a collection of security objects (applications, keys, certificates, etc.) that are available to members of the group.
D. Select 'Save' to create the application (see below left). With the application created, select 'COPY API KEY' (see above right) to capture the API key and store it for later use.
The key will be used by the BIG-IP as the password to authenticate calls to SDKMS.

Step 2 - Install Fortanix Plugin

Now that we have SDKMS prepared, I need to turn my attention to the BIG-IP. In this step, I will use my favorite ssh client to log into the BIG-IP as root. From there I will use the following commands to download and install the Fortanix plugin onto the BIG-IP. The plugin (RPM) is available for download from here.

cd /shared/
mkdir nethsm
cd nethsm
curl -O https://d2bzqwib4mjc49.cloudfront.net/3.11.1281/fortanix-pkcs11-4.25.2353-0.x86_64.rpm
rpm -ivh ./fortanix-pkcs11-4.25.2353-0.x86_64.rpm

Step 3 - Configure BIG-IP netHSM Integration

A. Add the Fortanix HSM library to the BIG-IP

tmsh create sys crypto fips external-hsm vendor auto pkcs11-lib-path /opt/fortanix/pkcs11/fortanix_pkcs11.so
Select 'Update' to save the modified virtual server. Well, that's how I did it. Now with the setup and configuration completed, my application, (https://app.aserracorp.com) is now secured with the BIG-IP offloading the crypto workload to Fortanix SDKMS. Additional Links Fortanix Self-Defending Key Management Service Setting Up the Network HSM BIG-IP Local Traffic Manager Azure Confidential Computing1.5KViews0likes0Comments