F5 Distributed Cloud Services: API Security and Control
The previous article gave great coverage of the API Discovery functionality on the F5 Distributed Cloud Services Web App and API Protection (XC WAAP) platform. This one gives an overview of what is available in terms of API control features. The XC WAAP platform provides a rich set of techniques for securely delivering applications and APIs. It goes beyond industry-standard features like WAF and DoS protection and introduces cutting-edge AI/ML technologies that significantly increase accuracy and decrease the number of false positives. The following paragraphs give an overview of the available technologies and provide links to get started.

Per API Request Analysis
The name is self-explanatory: this technique analyzes every single API call and extracts various metrics from it, such as request/response size, latency, and so on. Based on the observed data it builds a model that identifies a valid request structure for a particular API endpoint. Once the model is applied to the traffic, it automatically catches any request that deviates from the learned model and notifies a user. This approach is well suited to discovering per-request threats.
Getting started documentation

Time-Series Anomaly Detection
TS anomaly detection is somewhat similar to the previous mechanism, but instead of looking into each API request, it monitors the traffic stream as a whole. Therefore, the metrics of interest relate to the entire stream, such as request rate, errors, latency, and throughput. Statistical algorithms then detect the following types of anomalies:
Unexpected spikes/drops
Unexpected traffic patterns
Such detection instantly notifies an administrator about any event that potentially means a service disruption, whether it is a traffic drop caused by a misconfiguration or a traffic spike due to a DoS attack.
Getting started documentation

User Behavior Analysis
Behavior analysis uses AI and ML to profile valid user behavior. It detects clearly suspicious behavior based on WAF events, login failures, L7 policy violations, and other anomalous activity based on behavioral aspects of a user. Once enabled, an administrator can observe the whole list of suspicious users in the HTTP load balancer app firewall dashboard.
Getting started documentation

Web Application Firewall
As a platform with a mature security stack, the XC WAAP matches and exceeds all industry-standard features. In addition, it significantly simplifies WAAP configuration: it only asks for the application language, CMS, and web server type to set up the best level of protection for an application.
Getting started documentation

Service Policies
Another name for a service policy is an L7 policy. When applied to a load balancer, it allows you to granularly define what traffic is and is not allowed to reach a service in terms of L7 metrics such as hostname, method, path, or any other property available from a request. A service policy consists of rules that match a request by one or more metrics and then apply one of the following actions: allow, deny, or check next.
Getting started documentation

Rate Limiting
Rate limiting protects an application or an API from excessive usage by limiting the rate of requests a single user or client can execute in a period of time. There is a variety of ways to identify a user:
By IP address
By cookie
By HTTP header
By query string parameter
The feature can limit either the rate or the total number of requests that an entity is allowed to execute.
Getting started documentation
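For readers who prefer to manage these controls as code, the same capabilities surface in the F5 Distributed Cloud (Volterra) Terraform provider that appears later in this collection. Below is a rough sketch of attaching rate limiting to an HTTP load balancer. It is illustrative only: the resource exists in the provider, but the names and nested attribute names shown here are assumptions to verify against the provider documentation.

# Hedged sketch: per-client rate limiting on an HTTP load balancer.
# Names and nested attribute names are assumptions -- check the
# volterra provider docs before using.
resource "volterra_http_loadbalancer" "api_lb" {
  name      = "api-lb"        # hypothetical
  namespace = "my-namespace"  # hypothetical
  domains   = ["api.example.com"]

  rate_limit {
    rate_limiter {
      total_number = 100      # allow at most 100 requests...
      unit         = "MINUTE" # ...per minute per identified client
    }
    no_ip_allowed_list = true # no prefixes exempt from the limit
  }
}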
DoS Protection
Similar to rate limiting, DoS protection helps to keep a service up and running under an excessive amount of traffic. However, in contrast to rate limiting, XC WAAP's DoS protection operates at L3/4 and deals with much higher volumes of traffic.
Getting started documentation

Allow/Deny Lists
A self-explanatory feature as well: it allows you to set up lists of trusted and untrusted IP prefixes. Clients from trusted prefixes bypass all security checks, while clients from untrusted prefixes are simply blocked before any security check happens. Both lists support an expiration time setting, meaning that a rule continues to exist but is no longer applied to the traffic once its expiration time passes. That is handy for setting up temporary rules, e.g. for testing or security purposes.
Getting started documentation
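In the Terraform provider, these lists roughly correspond to trusted/blocked client blocks on the HTTP load balancer. The sketch below is a loose illustration; the attribute names are assumptions to check against the provider documentation.

# Hedged sketch: trusted and blocked client prefixes on an HTTP load balancer.
# Attribute names are assumptions -- verify against the volterra provider docs.
resource "volterra_http_loadbalancer" "app_lb" {
  name      = "app-lb"        # hypothetical
  namespace = "my-namespace"  # hypothetical
  domains   = ["app.example.com"]

  # Trusted prefixes: bypass all security checks.
  trusted_clients {
    ip_prefix = "198.51.100.0/24"
  }

  # Untrusted prefixes: blocked before any security check, with an expiry
  # after which the rule still exists but is no longer applied.
  blocked_clients {
    ip_prefix            = "203.0.113.0/24"
    expiration_timestamp = "2022-01-31T00:00:00Z"
  }
}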
Conclusion
All the security features above are available on top of the F5 Distributed Cloud Services HTTP load balancer. A combination of them provides reliable protection for an API and automatically keeps it up and running.

Using Terraform and F5® Distributed Cloud Mesh to establish secure connectivity between clouds
It is not uncommon for companies to have applications deployed independently in AWS, Azure, and GCP. When these applications are required to communicate with each other, these companies must deal with operational overhead and a new set of challenges such as skills gaps, patching security vulnerabilities, and outages, leading to a bad customer experience. Setting up individual centers of excellence for managing each cloud is not the answer, as it leads to siloed management and often proves costly. This is where F5 Distributed Cloud Mesh can help. Using F5® Distributed Cloud Mesh, you can establish secure connectivity with minimal changes to existing application deployments. You can do so without any outages or extended maintenance windows. In this blog we will go over a multi-cloud scenario in which we establish secure connectivity between applications running in AWS and Azure. To show this, we will follow these steps:
Deploy simple application web servers and VPCs/VNETs in AWS and Azure respectively using Terraform.
Create virtual F5® Distributed Cloud sites for AWS and Azure using the Terraform provider for the Distributed Cloud platform. These virtual sites provide an abstraction for AWS VPCs and Azure VNETs, which can then be managed and used in aggregate.
Use the Terraform provider to configure F5® Distributed Cloud Mesh ingress and egress gateways to provide connectivity to the Distributed Cloud backbone.
Configure services such as security policies, DNS, HTTP load balancer, and F5® Distributed Cloud WAAP, which are required to establish secure connectivity between applications.

Terraform provider for F5® Distributed Cloud
The F5® Distributed Cloud Terraform provider can be used to configure Distributed Cloud Mesh objects, and these objects represent the desired state of the system. The desired state could be configuring an HTTP/TCP load balancer, a vK8s cluster, a service mesh, enabling API security, etc. The Terraform F5® Distributed Cloud provider has more than 100 resources and data resources. Some of the resources we will be using in this example are for Distributed Cloud services like the HTTP load balancer, F5® Distributed Cloud WAAP, and F5® Distributed Cloud site creation in AWS and Azure. You can find a list of resources here.
Here are the steps to deploy a simple application using the F5® Distributed Cloud Terraform provider on AWS and Azure. I am using the repository below to create the configuration. You can also refer to the README on F5's DevCentral Git.
git clone https://github.com/f5devcentral/f5-digital-customer-engagement-center.git
cd f5-digital-customer-engagement-center/
git checkout mcn                                 # checkout the multi-cloud branch
cd solutions/volterra/multi-cloud-connectivity/  # change dir for the multi-cloud scripts
cp admin.auto.tfvars.example admin.auto.tfvars   # customize admin.auto.tfvars as per your needs
./setup.sh        # deploy the Distributed Cloud sites which identify services in AWS, Azure, etc.
./aws-setup.sh    # deploy the application and infrastructure in AWS
./azure-setup.sh  # deploy the application and infrastructure in Azure
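Before running these scripts, the Terraform provider needs credentials for your Distributed Cloud tenant. A minimal provider block looks roughly like the following; the tenant name and credential path are placeholders you would substitute with your own values, and the exact environment variable for the certificate password should be confirmed in the provider documentation.

# Hedged sketch: configuring the volterra Terraform provider.
# The p12 credential bundle is downloaded from the Distributed Cloud Console;
# its password is read from an environment variable (commonly VES_P12_PASSWORD,
# check the provider docs) rather than stored in code.
terraform {
  required_providers {
    volterra = {
      source  = "volterraedge/volterra"
      version = ">= 0.10.0" # version shown in this article's output
    }
  }
}

provider "volterra" {
  api_p12_file = "/path/to/tenant.console.api-creds.p12"        # hypothetical path
  url          = "https://mytenant.console.ves.volterra.io/api" # hypothetical tenant URL
}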
This will create the following objects on AWS and Azure:
3 VPC and VNET networks on each cloud respectively
3 F5® Distributed Cloud Mesh nodes on each cloud, seen as master-0
3 backend applications on each cloud, seen as scsmcn-workstation (here projectPrefix is scsmcn in the admin.auto.tfvars file)
1 jump box on each cloud for testing
6 HTTP load balancers, one for each node, which can be accessed through the F5® Distributed Cloud Console
6 F5® Distributed Cloud sites, which can be accessed via the F5® Distributed Cloud Console
F5® Distributed Cloud Mesh does all the stitching of the VPCs and VNETs for you; you don't need to create any transit gateway. It also stitches the VPCs and VNETs to the F5® Distributed Cloud Application Delivery Network. A client accessing a backend application will use the nearest F5® Distributed Cloud regional network HTTP load balancer, minimizing latency using Anycast.
Run the setup.sh script to deploy the F5® Distributed Cloud sites; this creates a virtual site that identifies the services deployed in AWS and Azure.
./setup.sh
Initializing the backend...
Initializing provider plugins...
- Reusing previous version of hashicorp/random from the dependency lock file
- Reusing previous version of volterraedge/volterra from the dependency lock file
- Using previously-installed hashicorp/random v3.1.0
- Using previously-installed volterraedge/volterra v0.10.0

Terraform has been successfully initialized!

random_id.buildSuffix: Creating...
random_id.buildSuffix: Creation complete after 0s [id=c9o]
volterra_virtual_site.site: Creating...
volterra_virtual_site.site: Creation complete after 2s [id=3bde7bd5-3e0a-4fd5-b280-7434ee234117]

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

Outputs:

buildSuffix = "73da"
volterraVirtualSite = "scsmcn-site-73da"
This created the random build suffix and the virtual site.
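The volterra_virtual_site.site resource created above amounts to only a few lines of configuration. A rough sketch follows; the selector expression and site type shown are assumptions for illustration, not the repository's exact values.

# Hedged sketch: a virtual site that groups customer-edge sites by label.
# The selector expression is hypothetical -- the real one lives in the repo.
resource "volterra_virtual_site" "site" {
  name      = "scsmcn-site" # setup.sh appends the random build suffix
  namespace = "shared"

  site_type = "CUSTOMER_EDGE"

  site_selector {
    expressions = ["site-group in (scsmcn)"] # hypothetical label match
  }
}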
module.vpc["bu1"].aws_vpc.this[0]: Creating... aws_route53_resolver_rule_association.bu["bu3"]: Creation complete after 1m18s [id=rslvr-rrassoc-d4051e3a5df442f29] Apply complete! Resources: 90 added, 0 changed, 0 destroyed. Outputs: bu1JumphostPublicIp = "54.213.205.230" vpcId = "{\"bu1\":\"vpc-051565f673ef5ec0d\",\"bu2\":\"vpc-0c4ad2be8f91990cf\",\"bu3\":\"vpc-0552e9a05bea8013e\"}" azure-setup.sh will execute terraform scripts to deploy webservers, vnet, http load balancer , origin servers andF5® Distributed Cloud azure site. ./azure-setup.sh Initializing modules... Initializing the backend... Initializing provider plugins... - Reusing previous version of volterraedge/volterra from the dependency lock file - Reusing previous version of hashicorp/random from the dependency lock file - Reusing previous version of hashicorp/azurerm from the dependency lock file - Using previously-installed volterraedge/volterra v0.10.0 - Using previously-installed hashicorp/random v3.1.0 - Using previously-installed hashicorp/azurerm v2.78.0 Terraform has been successfully initialized! ..... truncated output .... azurerm_private_dns_a_record.inside["bu11"]: Creation complete after 2s [id=/subscriptions/187fa2f3-5d57-4e6a-9b1b-f92ba7adbf42/resourceGroups/scsmcn-rg-bu11-73da/providers/Microsoft.Network/privateDnsZones/shared.acme.com/A/inside] Apply complete! Resources: 58 added, 2 changed, 12 destroyed. Outputs: azureJumphostPublicIps = [ "20.190.21.3", ] After running terraform script you can sign in into the F5® Distributed Cloud Console at https://www.volterra.io/products/voltconsole. Click on System on the left and then Site List to list the sites, you can also enter into search string to search a particular site, Below you can find list of virtual sites deployed for Azure and AWS, status of these sites can be seen using F5® Distributed Cloud Console. AWS Sites on F5® Distributed Cloud Console Now in order to see the connectivity of sites to the Regional Edges, click System --> Site Map --> Click on the appropriate site you want to focus and then Connectivity --> Click AWS, Below you can see the AWS virtual sites created on F5® Distributed Cloud Console, this provides visibility, throughput, reachability and health of the infrastructure provisioned on AWS. Provides system and application level metrics. Azure Sites on F5® Distributed Cloud Console Below you can see the Azure virtual sites created on F5® Distributed Cloud Console, this provides visibility, throughput, reachability and health of the infrastructure provisioned on Azure. Provides system and application level metrics. Analytics on F5® Distributed Cloud Console To check the status of the application, sign in into the F5® Distributed Cloud Console at https://www.volterra.io/products/voltconsole. Click on the application tab --> HTTP load balancer --> select appropriate load balancer --> click Request. Below you can see various matrices for applications deployed into the AWS and Azure cloud, you can see latency at different levels like client to lb, lb to server and server to application. Also it provides HTTP requests with Error codes on application access. API First F5® Distributed Cloud Console F5® Distributed Cloud Console helps many operational tasks like visibility into request types, JSON payload of the request indicating browser type, device type, tenant and also request came on which http load balancers and many more details. Benefits OpEx Reduction: Single simplified stack can be used to manage apps in different clouds. 
Benefits
OpEx Reduction: A single simplified stack can be used to manage apps in different clouds. For example, the burden of configuring security policies at different locations is avoided, and the transit cost associated with public cloud can be eliminated.
Reduce Operational Complexity: A network expert is not required, as the F5® Distributed Cloud Console provides a simplified way to configure and manage network and resources both at customer edge locations and in the public cloud. Your NetOps or DevOps person can easily deploy the infrastructure or applications without network expertise, and adoption of a new cloud provider is accelerated.
App User Experience: Customers don't have to learn different visibility tools; the F5® Distributed Cloud Console provides end-to-end visibility of applications, which results in a better user experience. The origin server or LB can be moved closer to the customer, which reduces latency for apps and APIs and results in a better experience.

Journey to the Multi-Cloud Challenges
Introduction
The proliferation of internet-based applications, digital transformations accelerated by the pandemic, an increase in multi-cloud adoption, and the rise of the distributed cloud paradigm all bring new business opportunities as well as new operational challenges. According to a Propeller Insights survey:
75% of all organizations are deploying apps in multiple clouds.
63% of those organizations are using three or more clouds.
56% are finding it difficult to manage workloads across different cloud providers, citing challenges with security, reliability, and connectivity.
Below I outline some of the common challenges F5 has seen and illustrate how F5 Distributed Cloud is able to address those challenges. For the purpose of the following examples I am using this demo architecture.

Challenge #1: IP Conflict and IP Exhaustion
As organizations accelerate their digital transformation, they begin to experience significant network growth and changes. As their adoption of multiple public clouds and edge providers expands, they begin to encounter challenges with IP overlap and IP exhaustion. Typically, these challenges seldom happen on the Internet, as IP addresses are centrally managed. However, this challenge is common for non-Internet traffic, because organizations use private/reserved IP ranges (RFC1918) within their networks and any organization is free to use any private ranges it wants. This presents an increasingly common problem as networks expand into public clouds, with the ease of infrastructure bootstrapping using automation, the needs of multi-cloud networking, and finally mergers and acquisitions. F5 Distributed Cloud can help organizations overcome IP conflict and IP exhaustion challenges by provisioning multiple apps with a single IP address.
How to Provision Multiple Apps with a Single IP Address (~8min)

Challenge #2: Easily Consumable Application Services via a Service Catalogue
A multi-cloud paradigm causes applications to be very distributed. We often see applications running in multiple on-prem data centers, at the edge, and on public cloud infrastructure. Making those applications easily available often involves many infrastructure and security control changes - not an easy task. This includes common tasks such as service advertisement, updates to network routing and switching, changing firewall rules, and provisioning DNS. In this demo, we demonstrate how to seamlessly provision and advertise services to and from public cloud providers and data centers. This capability enables an organization to seamlessly provision services and create consumable service catalogues.
How to Seamlessly Provision Services to/from the Cloud Edge (~4min)

Challenge #3: Operational (Day-2) Complexities
Often users have multiple discrete tools managing their infrastructure, and each tool provides its own dashboard for telemetry, visibility, and observability. Users need all these tools brought into a single consistent view so they can tell exactly what is happening in their environments. F5 Distributed Cloud Console provides a 'single pane of glass' for telemetry, visibility, and observability, providing operational efficiency for Day-2 operations designed to reduce total cost of ownership.
Get a Single Pane of Glass on Telemetry, Visibility and Observability (~7min)

Challenge #4: Cloud Vendor Lock-in Impedes Business Agility
Most organizations do not want their cloud workloads locked into a particular cloud provider. Cloud vendor lock-in can be a major barrier to the adoption of cloud computing, and CIOs show some concern with vendor lock-in per Flexera's 2020 CIO Priorities Report. To avoid cloud lock-in, create application resiliency, and get back some of the freedoms of cloud consumption - moving workloads from cloud to cloud - organizations need to be able to change cloud providers quickly and easily in the unlikely event that one cloud provider becomes unavailable.
Workload Portability - How to Seamlessly Move Workloads from Cloud to Cloud (~4min)

Challenge #5: Consistent Security Policies Across Clouds
How do you ensure that every security policy you require is applied and enforced consistently across the entire fleet of endpoints? According to the F5 2020 State of Application Services Report, 59% of respondents said that applying consistent security policies across all company applications was one of their biggest challenges in multi-cloud security. This demo shows how to apply consistent security policies (WAF) across a fleet of cloud workloads deployed at the edge. This helps reduce risk, increase compliance, and maintain effective governance.
How to Apply Consistent Security Policies Across Clouds (~5min)

Challenge #6: Complexities of Multi-Cloud Networking and Integration with AWS Transit Gateway - Management of Security Controls
A multi-cloud strategy introduces complexities around networking and security control between clouds and within clouds. Within one cloud (e.g., AWS), an organization may use the AWS Transit Gateway (TGW) to stitch together inter-VPC communication. Managing multiple VPCs attached to a TGW is, by itself, a challenge in managing security control between VPCs. In this demo, we show a simple way to leverage the F5 Distributed Cloud integration with AWS TGW to manage security policy across VPCs (also known as East-West traffic). This demo also demonstrates connecting an AWS VPC with other cloud providers such as Azure, GCP, or an on-prem cloud solution in order to unify the connectivity and reachability of your workloads.
Multi-Cloud Integration with AWS Transit Gateway (~19min)

API Discovery and Auto Generation of Swagger Schema
API Discovery
API discovery helps in learning, searching, and identifying loopholes when apps communicate via APIs, especially in a large microservices ecosystem. After discovery, the system analyzes the APIs and will then know how to secure them from malicious requests.

Customer Challenges
With large ecosystems, integrations, and distributed microservices, there is a huge increase in the number of API attacks. Most of these API attacks use authenticated users and are very hard to detect. Also, security controls for web apps are totally different from security controls for APIs. Customers' increased adoption of microservice architecture has increased the complexity of managing, debugging, and securing applications. With microservices, all the traffic between apps uses REST/gRPC APIs that are multiplexed on the same network port (e.g., HTTP 443), and as a result, network-level security and connectivity are not very useful anymore. With customers adopting a multi-cloud strategy, services might reside in different clouds/VPCs, so customers need a uniform way to apply app-level policies which will enforce zero trust API security across service-to-service communication. Also, the SecOps team has administrative overhead in manually tracking changes to APIs in a multi-cloud environment, and the Dev/DevOps team needs an easier tool to debug the apps.

Solution Description
Volterra's AI/ML learning engine uniquely discovers API endpoints used during service-to-service communication. This feature provides users debugging capabilities for distributed apps that are deployed on a single cloud or across multiple clouds. The API endpoints discovered can be used to create API-level security policy to enforce tighter service-to-service communication. Also, the swagger schema helps in analyzing and debugging the request logs. SecOps can download and use it for audit purposes, and it also serves as input to third-party tools.

Step by step process to enable API discovery and auto-generation of swagger schema
There are mainly three steps in enabling API discovery and downloading the swagger schema:
Step 1. Enable API discovery, which is done by creating an app-type config.
Step 2. Deploy an application on Kubernetes and create the HTTP load balancer.
Step 3. Observe the app through the service mesh graph and the HTTP load balancer dashboard, which includes downloading the swagger schema.
Let us see how to configure this on VoltConsole:

Step 1
Create an application namespace under the general namespaces tab: General > My Namespaces > Add namespace.

Step 2
Next, we need to create an app type, which is under the shared tab: Shared > AI&ML > Add app type. Here, while creating the new app type, we need to enable the API discovery feature and set the markup setting to learn from direct traffic. Save and exit.
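For teams driving this from code instead of the console, the app type in Step 2 has a counterpart in the Terraform provider used elsewhere in this collection. The sketch below is a loose illustration before moving on to Step 3: the resource name exists in the provider, but the feature identifiers shown are assumptions derived from the console labels and should be verified against the provider documentation.

# Hedged sketch: an app type with API discovery (business logic markup) enabled.
# Feature identifiers are assumptions based on the console labels.
resource "volterra_app_type" "default" {
  name      = "default" # the app-type name referenced by the LB label later
  namespace = "shared"

  features {
    type = "BUSINESS_LOGIC_MARKUP" # assumed identifier for API discovery/markup
  }

  business_logic_markup_setting {
    enable = true # learn API endpoints from direct traffic
  }
}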
Next, we give the LB aname andlabelthe HTTP LB asves-io/app_typeand value as default (which is the name of the app type). Provide the domain nameunder basic config. This guideprovides instructions on how to delegate your DNS domain to Volterra using theVoltConsole. Delegating the domain to Volterra enables Volterra to manage the domain and be the authoritative domain name server for the domain. Volterra checks the DNS domain configured in the virtual host and verifies that the tenant owns that domain. Step 5 Now select theoriginpool, andcreate a new poolwith theoriginserver type and name as below.Also, select the network on the site as vK8s network on site.Select the correct port number for the service and save & exit the configuration. Step 6 Now,let'snavigate to HTTP LB dashboard > security monitoring > API endpoints. Here you willsee all the API endpoints that are discovered on the HTTPLBand you can download the swagger right from the HTTP LB dashboard page. Step 7 Now ,letsnavigate to the service mesh graph where we can view the entire application micro-services and how they are connected, and how the east-west and north-south traffic is flowing. App Namespace -> Mesh -> Service Mesh-> API endpoints tab Step 8 Now,let'sclick on the API endpoints tab, and we will see all the API endpoints discovered for the default app type. Now you can click on the Schema and then go to the PII& learned APIdata, andclick on the download button to download the swagger schema for each specific API endpoint. If we want to download the swagger schema for the entire application, then we go back to the API endpoints tab and download swagger as seen below: Conclusion With the increased attack surfaces with distributed microservices and other large ecosystems, having a feature such as API discovery andbeable to auto-generate the swagger file really helps in mitigating the malicious API attacks. This feature provides users debugging capabilities for distributed apps that are deployed on a single cloud or across multiple clouds. In addition,API endpoints discovered could be used to create API level security policy to enforce tighter service-to-service communication. The request logs are continuously analyzed to learn the schema for every learned API endpoint. The generated swagger schema could be downloaded through UI or Volterra API.SecOps can download and use it for audit purposesandDev/DevOps team can use to see the delta variables/path between differentversion.It can be used as input to otherthird-partytoolsand is less administrative overhead of managing the documents.2.8KViews2likes4CommentsExperience F5 Distributed Cloud with Multi-Cloud Sites and Distributed Apps
Do your applications span multiple clouds? Do you want to simplify and secure connectivity between app services in different clouds or on-prem? Do you want to optimize or improve performance of apps serving a geographically dispersed user base? F5 Distributed Cloud Services is a no-brainer choice for any and all of the above use-cases. To experience how F5 Distributed Cloud addresses these and other scenarios, F5 has released a new interactive simulator: a guided hands-on experience with F5 Distributed Cloud Mesh, F5 Distributed Cloud Stack, and F5 Distributed Cloud Console... plus a few simulated command-line interfaces to test/validate your multi-cloud and distributed cloud connectivity. It takes just a few minutes to walk through any one of these guided simulations and experience what F5 Distributed Cloud can do for your own applications:

Multi-Cloud Networking: Cloud-to-Cloud via HTTP Load Balancer
One of the core F5 Distributed Cloud use-cases is multi-cloud networking. Use the simulated F5 Distributed Cloud Mesh and F5 Distributed Cloud Console to set up Layer 7 connectivity between an Amazon VPC site and an Azure VNET site, and use an HTTP Load Balancer to securely route network traffic.

Multi-Cloud Networking: Cloud-to-Cloud via Sites
As opposed to using a Load Balancer, F5 Distributed Cloud Mesh can also enable Layer 3 connectivity between clouds. Use a simulated F5 Distributed Cloud Console to set up an Amazon VPC site, an Azure VNET site, and a secure network between the two clouds with end-to-end monitoring.

Multi-Cloud Networking Brownfield: Cloud-to-Cloud via Sites
While the above use-cases focused on a greenfield deployment of new networks in each of the cloud service providers, this simulation leverages existing virtual networks to securely connect (Layer 3) AWS and Azure with F5 Distributed Cloud Mesh and F5 Distributed Cloud Console.

Cluster to Cluster AWS-Azure HTTP Load Balancer
F5 Distributed Cloud reduces IT complexity by simplifying operation and deployment of workloads to multiple Kubernetes clusters, especially across multiple clouds. Use the simulated F5 Distributed Cloud Console to add an AWS EKS and an Azure AKS cluster to the F5 Distributed Cloud Sites, use service discovery, and create an HTTP Load Balancer to expose an NGINX web server service running on EKS to a workload running on AKS.

Cluster to Cluster AWS-Azure TCP Load Balancer
Connect Kubernetes clusters and workloads by way of a TCP Load Balancer (Layer 4) via the simulated F5 Distributed Cloud Console experience. Add an AWS EKS and an Azure AKS cluster to the F5 Distributed Cloud Sites, use service discovery, and create a TCP Load Balancer to expose a MySQL database service running on EKS to a workload running on AKS.

Modern Apps in a Distributed Cloud with F5 Distributed Cloud (CLI with kubectl)
F5 Distributed Cloud's approach to modern, distributed apps enables a service, multiple services, or even an entire app to be distributed closer to end-users via the F5 Distributed Cloud global network, resulting in dramatically improved app performance. Deploy 3 app services to virtual Kubernetes (vK8s) instantiated in all the F5 Distributed Cloud regional edge sites globally by way of a simulated command-line interface.

Modern Apps in a Distributed Cloud with F5 Distributed Cloud (Deploy with F5 Distributed Cloud Console)
As in the previous use-case, improve time-to-market, performance, and global availability of your distributed apps by leveraging the F5 Distributed Cloud Global Network.
However, in this case use the simulated F5 Distributed Cloud Console to add a new front-end service workload for a sample application.

Don't miss this opportunity to go hands-on with the F5 Distributed Cloud use-cases: explore these interactive demos to try your hand at multi-cloud connectivity and deploying distributed workloads. Once you run through the simulator, sign up for F5 Distributed Cloud in any one of the tiers available to you and your organization!

Multi-Cloud Networking with F5 Distributed Cloud
One of F5 Distributed Cloud's core features is to provide a simple way to interconnect resources in multiple clouds, creating a seamless communication space for application services. In conjunction with a centralized control and management plane, it becomes a single pane of glass for configuration, security, and visibility, eliminating the excessive administration overhead of managing multiple clouds. This article gives an overview of the F5 Distributed Cloud components and features which contribute to the multi-cloud transit story. Even though those components are the foundation for many other F5 Distributed Cloud Services, the focus of this article is traffic transit.

There are many location options for deploying applications or their parts: public clouds, private clouds, CDNs, ADNs, and so on. All of them have their pros and cons, or even unique capabilities, that force owners to distribute application components evenly. Even though it sounds like the right decision, it brings significant overhead to developers and DevOps teams. The reason for this is total platform incompatibility. This not only requires a separate set of knowledge for each platform but also custom solutions to stitch them together to create a boundless and secure communication space. F5 Distributed Cloud solves this challenge by providing a unified platform that allows creating remote application deployment sites and securely connecting them by adding them to a virtual global network. The article below reviews the F5 Distributed Cloud components that help to interconnect clouds.

F5 Distributed Cloud Global Network
From a multi-cloud networking perspective, F5 Distributed Cloud is a fast and efficient backbone, the so-called F5 Distributed Cloud global network. It allows traffic to be pushed through at multi-terabit rates and connects remote sites using overlay networks. The backbone consists of two main components. The first is 21 points of presence (PoPs) located all over the world, allowing customer sites to connect to the closest possible PoP. The second is redundant private connections between PoPs, to tier-1 carriers, public cloud providers, and SaaS services, providing high-speed, reliable traffic transfer with much less latency than over the Internet. The picture below represents the logical topology of the F5 Distributed Cloud connecting multiple remote sites.

The F5 Distributed Cloud global network forms a physical network infrastructure that is securely shared by all tenants/customers. Users can create overlay networks on top of it to interconnect resources in remote sites or expose application services to the Internet.

F5 Distributed Cloud Site
As per the F5 Distributed Cloud documentation, "Site is a physical or cloud location where F5 Distributed Cloud Nodes are deployed". That is true; however, it doesn't give much clarity from a multi-cloud transit point of view. From that perspective, a site is a remote network that uses a cluster of F5 Distributed Cloud node(s) as a gateway to connect it back to the F5 Distributed Cloud global network. In simple words, if the F5 Distributed Cloud global network is a trunk, then sites are the leaves. F5 Distributed Cloud unifies and simplifies the approach to creating and interconnecting sites, whether they are located in a public cloud or in a private data center. The DevOps team doesn't need to manually manage resources in a remote location. Instead, F5 Distributed Cloud automatically creates a remote network and deploys F5 Distributed Cloud node(s) to attach it back to the global backbone.
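As an illustration of how little it takes to stand up such a site as code, the Terraform provider used earlier in this collection exposes site objects per cloud. The AWS VPC site sketch below is indicative only; the region, instance type, addressing, and nested block names are assumptions to check against the provider documentation.

# Hedged sketch: an AWS VPC site with a two-interface ingress/egress gateway
# node cluster. Values and nested attribute names are assumptions.
resource "volterra_aws_vpc_site" "example" {
  name       = "aws-example-site" # hypothetical
  namespace  = "system"
  aws_region = "us-west-2"

  instance_type = "t3.xlarge" # node size; check the supported list

  vpc {
    new_vpc {
      name_tag     = "example-vpc"
      primary_ipv4 = "10.10.0.0/16"
    }
  }

  ingress_egress_gw {
    aws_certified_hw = "aws-byol-multi-nic-voltmesh"
    az_nodes {
      aws_az_name = "us-west-2a"
      inside_subnet  { subnet_param { ipv4 = "10.10.1.0/24" } }
      outside_subnet { subnet_param { ipv4 = "10.10.2.0/24" } }
    }
  }
}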
Currently, F5 Distributed Cloud supports sites in the following site locations: AWS VPC AWS TGW GCP VPC Azure VNET Physical Data Center Edge location After a site is registered against F5 Distributed Cloud Console and nodes are deployed networking and application delivery configuration is unified regardless of site's type. F5 Distributed Cloud Node To better understand how inter-site connectivity is organized it is useful to take a closer look at the F5 Distributed Cloud nodes. F5 Distributed Cloud nodes are Linux-based software appliances that act as a K8S cluster and form a super-converged infrastructure to deliver networking and application delivery services. The picture below represents a logical diagram of services running on a F5 Distributed Cloud node. A cluster of nodes locates at the edge of a site and either run customer workloads or acts as a gateway to interconnect site-local resources to the F5 Distributed Cloud global network. Site Deployment Modes F5 Distributed Cloud sites can be deployed in two modes on-a-stick or default gateway. With on-a-stick mode, the ingress and egress traffic from the node is handled on a single network interface as shown in the diagram. In this mode, the network functionality will be either load balancer, a gateway for Kubernetes, API gateway, a generic proxy. Mostly this mode is used as a stub site to run application workloads in using F5 Distributed Cloud Mesh services such as virtual K8s. With default-gateway mode, the network functions like routing and firewall between inside and outside networks can be enabled in addition to the functionality available in on-a-stick mode. This mode is suitable not only for running workloads but also to act as a router and forward traffic to internal resources deployed behind the site. Inter-Site Networking Once the F5 Distributed Cloud nodes are deployed across the sites then inter-site connectivity can be configured via F5 Distributed Cloud Console. The picture below shows a logical diagram of all components participating in the connectivity setup. Connectivity configuration includes three steps. First of all, virtual networks have to be created with proper subnetting. Following network types are available: Per-Site - Even though this network has been configured once globally, it is instantiated as an independent network on each of the site. As a result, this network that is instantiated on one site cannot talk to the same network instantiated on another site. Global - This network is exactly the same across all sites on which it is instantiated. Any endpoint that is a member of this network can talk to other members irrespective of what site they belong. Site-Local : There can only be one network of this type on a particular site and is automatically configured by the system during bootstrap. This can also be considered as a site-local outside network and needs access to the public internet for registration during the site bring-up process. Site-Local-Inside: There can only be one network of this type on a particular site and is automatically configured during bootstrap for sites with two interfaces (default gateway). Public: This is a conceptual virtual network that represents the Public Internet. It is only present on the F5 Distributed Cloud Regional Edge sites and not on customer sites. Site-Local-Service: This is another conceptual virtual network and it represents the service network of Kubernetes within the site. This is the network from which cluster IP for k8s service configuration is allocated. 
Then those networks have to be attached to one of the F5 Distributed Cloud node network interfaces. There are two types of interfaces, "physical" and "logical", that can be configured:
For every networking device (e.g., eth0, eth1) on a node within a site, a "physical" interface can be configured.
"Logical" interfaces are child interfaces of a physical interface, for example, VLANs.
Lastly, a network connector defines the connectivity type between networks. There are three types of network connectors:
Direct - The inside network and outside network are connected directly. Endpoints in each virtual network can initiate connections to the others. If the inside or outside network is connected to other external networks, then they can be configured with dynamic routing to exchange routes (currently, only BGP is supported for dynamic routing).
SNAT - The inside network is connected to the outside network using SNAT. Endpoints in the inside network can initiate connections to the outside network; however, endpoints in the outside network cannot initiate connections to the inside network (see the sketch after this list).
Forward-Proxy - Along with SNAT, a forward proxy can be configured. This enables inspection of HTTP and TLS connections. Using a forward proxy, URL filtering and visibility of the different hosts accessed from the inside network can be configured.
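Tying the two objects together, a connector references the networks it joins. The sketch below shows roughly what a SNAT connector might look like as code; as with the other sketches, the nested block and attribute names are assumptions to check against the provider documentation.

# Hedged sketch: a SNAT network connector joining the site-local-inside
# network to the site-local outside network. Block names are assumptions.
resource "volterra_network_connector" "inside_to_outside" {
  name      = "inside-to-outside" # hypothetical
  namespace = "system"

  # SNAT connector: inside endpoints can reach out, outside cannot reach in.
  sli_to_slo_snat {
    snat_config {
      default_gw_snat = true # source-NAT behind the node's outside interface
    }
  }
}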
Mixing network types with network connectors gives a user a wide variety of options for letting resources located in one site talk to resources in another site, or even to the Internet, and vice versa. For an example, or to practice configuring inter-site connectivity, take a look at the step-by-step videos (link1, link2) or use a simulator (link1, link2).

Getting Hands on with F5 solutions for Multi-Cloud Networking and Distributed Workloads
With today's rapid adoption of multiple clouds by enterprises to solve challenges such as rapid delivery, high availability, flexibility, cost-effectiveness, and avoidance of vendor lock-in, new solutions are needed to provide connectivity across public clouds, private clouds, and on-premise data centers. Multi-Cloud Networking (MCN) solutions are now available to address these requirements; however, they are largely focused on Layer 3 of the OSI model, providing a logical, secure, software-defined network. F5 believes that it is necessary to deliver the capabilities of this MCN approach - and go further and provide a truly scalable application fabric that extends networking and security capabilities up to and including the application layer, Layer 7. This allows for multi-layer security, improved observability, analytics, multi-tenancy, and self-service not just for NetOps, but also for developers, DevOps, and SecOps teams. One of the challenges of MCN is the difficulty of building and maintaining a consistent and standard networking approach across different clouds. We see the opportunity to solve this by shifting towards a Distributed Cloud model, discussed by F5 CTO Geng Lin in a recent blog.

To help developers explore how to start experiencing and embracing this new Distributed Cloud model, F5 has created a series of Guided Walkthroughs leveraging Volterra's capabilities to demonstrate two important use cases:
1. Connecting Kubernetes (K8s) clusters across clouds
2. Supporting globally distributed application workloads
The scenario covered by the walkthroughs is an e-commerce sample application (provided) for which the developer is looking to improve the front-end application experience by implementing a globally distributed Find-a-Store service. The guide covers how to securely connect between Kubernetes deployments and also distribute app services globally to improve performance. Everything you need is provided - the application, automation scripts, etc. - just bring your cloud provider account. The walkthroughs for starting with AWS, Azure, or GCP are available at the links below:
First off, I'm very excited to be joining the DevCentral team. If you missed it last Thursday, I was introduced as a new host and will be working alongside Jason Rahm and John Wagnon. It's an honor and I look forward to serving the F5 Community. You'll get to know me further over time as I'll be rotating in as a co-host on our DevCentral Connects live stream every Thursday. Just as DevCentral changes, application architectures are changing, too, and the underlying infrastructure must keep up. Edge computing is the next infrastructure revolution that will allow for virtually unlimited architectures. We're introducing "At The Edge" as a new show that sets out to demystify edge computing for the F5 community as well as open up discussions for all of the supporting topics such as modern application architecture, automation, and how this all affects business outcomes. We start Tuesday, October 5th at 9:30am Pacific Time. My first guest will be Eric Chen , Solutions Architect with F5 and we'll be talking all about multi-cloud networking and the benefits it provides. Look for this new show, streaming the first Tuesday of every month and on YouTube, LinkedIn, Twitter and Facebook.287Views2likes1CommentWhat is the Edge?
Where oh where to begin? "The Edge" excitement today is reminiscent of "The Cloud" of many moons ago. Everyone, I mean EVERYONE, had a "to the cloud" product to advertise. C.S. Lewis (The Chronicles of Narnia) wrote an essay titled "The Death of Words" where he bemoaned the decay of words that transitioned from precise meanings to something far more vague. One example he used was gentleman, which had a clear objective meaning (a male above the station of yeoman whose family possessed a coat of arms) but had decayed (and is to this day) to a subjective state of referring to someone well-mannered. This is the case with industry shifts like cloud and edge, and it totally works to the advantage of marketing/advertising. The result, however, is usually confusion. In this article, I'll briefly break down the edge in layman's terms, then link out to the additional reading you should do to familiarize yourself with the edge, why it's hot, and how F5 can help with your plans.

What is edge computing?
The edge, plainly, is all about distribution: taking services once available only in private datacenters and public clouds and shifting them out closer to where the requests are, whether those requests are coming from humans or machines. This shift of services is comprehensive, so while technologies from the infancy of the edge like CDNs are still in play, the new frontier of compute, security, apps, storage, etc., enhances the user experience and broadens the scope of real-time possibilities. CDNs were all about distributing content. The modern edge is all about application and data distribution.

Where is the edge, though?
But, you say, how is that not the cloud? Good question. Edge computing builds on the technology developed in the cloud era, where de-centralized compute and storage architectures were honed. But the clouds are still regional datacenters. A good example to bring clarity might be an industrial farm. Historically, data from these locations would be sent to a centralized datacenter or cloud for processing, and depending on the workloads, tractors or combines might be idle (or worse: errant) while waiting for feedback. With edge computing, a local node (consider this an enterprise edge) would be gathering all that data, processing, analyzing, and responding in real-time to the equipment, and then sending up to the datacenter/cloud anything relevant for further processing or reporting. Another example would be self-driving car or gaming technology, where perhaps the heavy compute for these is at the telco edge instead of having to backhaul all of it to a centralized processing hub.

Where is the edge? Here, there, and everywhere. The edge, conceptually, can be at any point in between the user (be it human, animal, or machine) and the datacenter/cloud. Physically, though, understand that just like "serverless" applications still have to run on an actual server somewhere, edge technology isn't magic; it has to be hosted somewhere as well. The point is that the host knows no borders; it can be in a provider, a telco, an enterprise, or even in your own home (see Lori's "Find My Cat" use case).

The edge is coming for you
The stats I've seen from Gartner and others are pretty shocking. 76% already have plans to deploy at the edge, and 75% of data will be processed at the edge by 2025? I'm no math major, but that sounds like one plus two, carry the three, uh, tomorrow! Are you ready for this? The good news is we are here to help.
The best leaps forward in anything in our industry have always come from efforts bringing simplicity to the complexities. Abstraction is the key. Think of the progression of computer languages and how languages like C abstract the needs of Assembler, or how dynamically typed languages like Python even abstract away the need for types. Or how hypervisors abstract lower-level resources and allow you to carve out compute. Whether you're a NetOps persona thankful for tools that abstract BGP configurations from the differing syntax of various routers, or a developer thankful for libraries that abstract the nuances of different DNS providers so you can generate your SSL certificates with Let's Encrypt, all of that is abstraction. I like to know what's been abstracted. That's practical at times, but not often. Maybe in academia. Frankly, the cost associated with knowing "all the things" isn't one for which most orgs will pay. Volterra delivers that abstraction, to the compute stack and the infrastructure connective tissue, in spades, thus removing the tenuous manual stitching required to connect and secure your edge services.

General Edge Resources
Extending Adaptive Applications to the Edge
Edge 2.0 Manifesto: Redefining Edge Computing
Living on the Edge: How we got here
Increasing Diversity of Location and Users is Driving Business to the Edge
Application Edge Integration: A Study in Evolution
The role of cloud in edge-native applications
Edge Design & Data | The Edgevana Podcast (YouTube)

Volterra Specific Resources
Volterra and Power of the Distributed Cloud (YouTube)
Multi-Cloud Networking with Volterra (YouTube)
Network Edge App: Self-Service Demo (YouTube)
Volterra.io Videos