VoltMesh
API Discovery and Auto Generation of Swagger Schema
API Discovery
API discovery helps in learning, searching, and identifying loopholes when apps communicate via APIs, especially in a large microservices ecosystem. After discovery, the system analyzes the APIs and then knows how to secure them from malicious requests.

Customer Challenges
With large ecosystems, integrations, and distributed microservices, there is a huge increase in the number of API attacks. Most of these API attacks come from authenticated users and are very hard to detect. Also, security controls for web apps are quite different from security controls for APIs. Customers' increased adoption of microservice architectures has increased the complexity of managing, debugging, and securing applications. With microservices, all the traffic between apps uses REST/gRPC APIs that are multiplexed on the same network port (e.g., HTTP 443); as a result, network-level security and connectivity are not very useful anymore. With customers adopting a multi-cloud strategy, services might reside in different clouds/VPCs, so customers need a uniform way to apply app-level policies that enforce zero-trust API security across service-to-service communication. Also, the SecOps team carries the administrative overhead of manually tracking changes to APIs in a multi-cloud environment, and the Dev/DevOps teams need an easier tool to debug the apps.

Solution Description
Volterra's AI/ML learning engine uniquely discovers API endpoints used during service-to-service communication. This feature provides users debugging capabilities for distributed apps that are deployed on a single cloud or across multiple clouds. Discovered API endpoints can be used to create API-level security policy to enforce tighter service-to-service communication. The swagger schema also helps in analyzing and debugging the request logs; SecOps can download and use it for audit purposes, and it serves as an input to third-party tools.

Step-by-step process to enable API discovery and auto-generation of swagger schema
There are mainly three steps in enabling API discovery and downloading the swagger schema:
Step 1. Enable API discovery, which means creating an app-type config.
Step 2. Deploy an application on Kubernetes and create the HTTP load balancer.
Step 3. Observe the app through the service mesh graph and the HTTP load balancer dashboard, which includes downloading the swagger schema.

Let us see how to configure this on VoltConsole:

Step 1
Create an application namespace under the general namespace tab: General > My Namespaces > Add namespace.

Step 2
Next, we need to create an app type, which is under the shared tab: Shared > AI&ML > Add app type. While creating the new app type, we need to enable the API discovery feature and the markup setting to learn from direct traffic. Save and exit.
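The same app-type object can also be managed declaratively with the Volterra Terraform provider used later in this collection. The sketch below is a minimal, hedged example: the volterra_app_type resource exists in the provider, but the BUSINESS_LOGIC_MARKUP feature name and the business_logic_markup_setting block layout are assumptions from memory of the provider schema, so verify them against the provider documentation before use.

// Sketch only: feature and block names are assumptions, verify against the
// volterraedge/volterra provider docs.
resource "volterra_app_type" "default" {
  name      = "default"
  namespace = "shared"

  // Assumed feature name that turns on API discovery / markup
  features {
    type = "BUSINESS_LOGIC_MARKUP"
  }

  // Learn the API structure from direct traffic (the console's markup setting)
  business_logic_markup_setting {
    enable = true
  }
}

The HTTP load balancer created in Step 4 below is then tied to this app type through the ves-io/app_type label with the value default.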
Step 3
Next, let's go to the app tab to deploy the app: App > Applications > Virtual K8s > Add virtual K8s. While creating the new virtual K8s, give it a name and select ves-io-all-res as the virtual site. Save and exit, then wait until the cluster status is ready. Next, download the kubeconfig file for the virtual K8s cluster. Now use kubectl to deploy the application with the following command:

kubectl --kubeconfig $YOURKUBECONFIGFILE apply -f kubenetes-manifest.yml --namespace $YOURNAMESPACE

Step 4
Now we can go to VoltConsole and check the status of this app. We will wait for the pods to become ready, and once all the pods are ready, we can create the load balancer (LB) for the frontend service. Next, we give the LB a name and label the HTTP LB with ves-io/app_type and the value default (the name of the app type). Provide the domain name under the basic config. This guide provides instructions on how to delegate your DNS domain to Volterra using VoltConsole. Delegating the domain to Volterra enables Volterra to manage the domain and be the authoritative domain name server for it. Volterra checks the DNS domain configured in the virtual host and verifies that the tenant owns that domain.

Step 5
Now select the origin pool and create a new pool with the origin server type and name as below. Also, select the network on the site as the vK8s network on site. Select the correct port number for the service and save & exit the configuration.

Step 6
Now let's navigate to HTTP LB dashboard > Security Monitoring > API endpoints. Here you will see all the API endpoints that are discovered on the HTTP LB, and you can download the swagger right from the HTTP LB dashboard page.

Step 7
Now let's navigate to the service mesh graph, where we can view all of the application's microservices, how they are connected, and how the east-west and north-south traffic is flowing: App Namespace > Mesh > Service Mesh > API endpoints tab.

Step 8
Now let's click on the API endpoints tab, and we will see all the API endpoints discovered for the default app type. You can click on the schema, go to the PII & learned API data, and click the download button to download the swagger schema for each specific API endpoint. If we want to download the swagger schema for the entire application, we go back to the API endpoints tab and download the swagger as seen below.

Conclusion
With the increased attack surface of distributed microservices and other large ecosystems, a feature such as API discovery with auto-generation of the swagger file really helps in mitigating malicious API attacks. This feature provides users debugging capabilities for distributed apps that are deployed on a single cloud or across multiple clouds. In addition, discovered API endpoints can be used to create API-level security policy to enforce tighter service-to-service communication. The request logs are continuously analyzed to learn the schema for every learned API endpoint. The generated swagger schema can be downloaded through the UI or the Volterra API: SecOps can download and use it for audit purposes, and the Dev/DevOps teams can use it to see the delta in variables/paths between different versions. It can also be used as input to other third-party tools, reducing the administrative overhead of managing API documents.
F5 Distributed Cloud Services: API Security and Control
The previous article gave great coverage of the API Discovery functionality on the F5 Distributed Cloud Services Web App and API Protection (XC WAAP) platform. This one gives an overview of what is available in terms of API control features. The XC WAAP platform provides a rich set of techniques to securely deliver applications and APIs. It goes beyond industry-standard features like WAF and DoS and introduces cutting-edge AI/ML technologies that significantly increase accuracy and decrease the number of false positives. The following paragraphs give an overview of the available technologies and provide links to get started.

Per API Request Analysis
The name of per-request analysis is self-explanatory. The technique analyzes every single API call and extracts various metrics from it, such as request/response size, latency, etc. Based on observed data it builds a model that identifies a valid request structure for a particular API endpoint. Once the model is applied to the traffic, it automatically catches any request that deviates from the learned models and notifies a user. This approach is good for discovering any per-request-based threats.
Getting started documentation

Time-Series Anomaly Detection
TS anomaly detection is somewhat similar to the previous mechanism, but instead of looking into each API request, it monitors the traffic stream as a whole. Therefore, the metrics of interest relate to the entire stream, such as request rate, errors, latency, and throughput. Statistical algorithms then detect the following types of anomalies:
Unexpected spikes/drops
Unexpected traffic patterns
Such detection instantly notifies an administrator about any event that potentially means a service disruption, whether it is a traffic drop caused by a misconfiguration or a traffic spike due to a DoS attack.
Getting started documentation

User Behavior Analysis
Behavior analysis uses AI and ML to profile valid user behavior. It detects clearly suspicious behavior based on WAF events, login failures, L7 policy violations, and other anomalous activity based on user behavioral aspects. Once enabled, an administrator can observe the whole list of suspicious users in the HTTP load balancer app firewall dashboard.
Getting started documentation

Web Application Firewall
As a platform with a mature security stack, the XC WAAP matches and exceeds all industry-standard features. In addition to that, it significantly simplifies the WAAP configuration: it only asks for the application language, CMS, and web server type to set up the best level of protection for an application.
Getting started documentation

Service Policies
Another name for the service policy is the L7 policy. When applied to a load balancer, it allows a user to granularly define what traffic is allowed to a service and what is not in terms of L7 metrics such as hostname, method, path, or any other property available from a request. A service policy consists of rules that match a request by one or more metrics and then apply one of the following actions: allow, deny, or check next.
Getting started documentation

Rate Limiting
Rate limiting protects an application or an API from excessive usage by limiting the rate of requests from a single user or client executed in a period of time. There is a variety of ways to identify a user:
By IP address
By cookie
By HTTP header
By query string parameter
The feature can limit the rate or total amount of requests which an entity is allowed to execute.
Getting started documentation
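For teams managing XC WAAP through the Volterra Terraform provider used elsewhere in this collection, rate limiting can also be expressed declaratively. The later load-balancer snippets set disable_rate_limit = true; the sketch below shows the opposite choice. The exact rate_limit block schema is an assumption from the provider documentation, so verify it before relying on it.

// Sketch: inside a volterra_http_loadbalancer resource, in place of
// disable_rate_limit = true (field names assumed, check the provider docs):
rate_limit {
  rate_limiter {
    total_number = 100      // assumed: allowed requests per unit, per client
    unit         = "MINUTE" // assumed enum value
  }
  no_ip_allowed_list = true // no bypass list of trusted client IPs
  no_policies        = true // no additional rate-limiting policies
}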
DoS Protection
Similar to rate limiting, DoS protection helps to keep a service up and running under an excessive amount of traffic. However, as opposed to rate limiting, XC WAAP's DoS protection operates on L3/4 and deals with much higher volumes of traffic.
Getting started documentation

Allow/Deny Lists
A self-explanatory feature as well. It allows setting up lists of trusted and untrusted IP prefixes. Clients from trusted prefixes bypass all security checks. Clients from untrusted prefixes are simply blocked before any security check happens. Both lists support an expiration time setting, meaning that a rule continues to exist but isn't applied to the traffic after the expiration time comes. That is handy for setting up temporary rules, e.g., for testing or security purposes.
Getting started documentation

Conclusion
All of the security features above are available on top of the F5 Distributed Cloud Services HTTP load balancer. A combination of those provides reliable protection for an API and automatically keeps it up and running.
Using Terraform and F5® Distributed Cloud Mesh to establish secure connectivity between clouds
It is not uncommon for companies to have applications deployed independently in AWS, Azure, and GCP. When these applications are required to communicate with each other, these companies must deal with operational overhead and a new set of challenges, such as skills gaps, patching security vulnerabilities, and outages, leading to a bad customer experience. Setting up individual centers of excellence for managing each cloud is not the answer, as it leads to siloed management and often proves costly. This is where F5 Distributed Cloud Mesh can help. Using F5® Distributed Cloud Mesh, you can establish secure connectivity with minimal changes to existing application deployments. You can do so without any outages or extended maintenance windows. In this blog we will go over a multi-cloud scenario in which we will establish secure connectivity between applications running in AWS and Azure. To show this, we will follow these steps:
Deploy simple application web servers and VPCs/VNETs in AWS and Azure respectively using Terraform.
Create virtual F5® Distributed Cloud sites for AWS and Azure using the Terraform provider for the Distributed Cloud platform. These virtual sites provide an abstraction for AWS VPCs and Azure VNETs, which can then be managed and used in aggregate.
Use the Terraform provider to configure F5® Distributed Cloud Mesh ingress and egress gateways to provide connectivity to the Distributed Cloud backbone.
Configure services such as security policies, DNS, the HTTP load balancer, and F5® Distributed Cloud WAAP, which are required to establish secure connectivity between applications.

Terraform provider for F5® Distributed Cloud
The F5® Distributed Cloud Terraform provider can be used to configure Distributed Cloud Mesh objects, and these objects represent the desired state of the system. The desired state could be configuring an HTTP/TCP load balancer, a vK8s cluster, a service mesh, enabling API security, etc. The Terraform F5® Distributed Cloud provider has more than 100 resources and data resources. Some of the resources we will be using in this example are for Distributed Cloud services like the Cloud HTTP load balancer, F5® Distributed Cloud WAAP, and F5® Distributed Cloud site creation in AWS and Azure. You can find a list of resources here.

Here are the steps to deploy a simple application using the F5® Distributed Cloud Terraform provider on AWS & Azure. I am using the repository below to create the configuration. You can also refer to the README on F5's DevCentral Git.

git clone https://github.com/f5devcentral/f5-digital-customer-engagement-center.git
cd f5-digital-customer-engagement-center/
git checkout mcn                                    # checkout the multi-cloud branch
cd solutions/volterra/multi-cloud-connectivity/     # change dir for the multi-cloud scripts

Customize admin.auto.tfvars.example as per your needs (a sketch follows below), then:

cp admin.auto.tfvars.example admin.auto.tfvars
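The article only names projectPrefix explicitly; every other key below is a hypothetical stand-in for what admin.auto.tfvars.example asks for (tenant, cloud credential objects, and so on), so mirror the real example file rather than this sketch:

# admin.auto.tfvars (sketch)
projectPrefix = "scsmcn"   # prefix stamped on every created object, per the article
# The keys below are hypothetical; copy the real ones from admin.auto.tfvars.example
volterraTenant         = "name-of-the-tenant"
volterraCloudCredAWS   = "your-aws-credential-object"
volterraCloudCredAzure = "your-azure-credential-object"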
./setup.sh         # deploy the Volterra sites that identify services in AWS, Azure, etc.
./aws-setup.sh     # deploy the application and infrastructure in AWS
./azure-setup.sh   # deploy the application and infrastructure in Azure

This will create the following objects on AWS and Azure:
3 VPC and VNET networks on each cloud respectively
3 F5® Distributed Cloud Mesh nodes on each cloud, seen as master-0
3 backend applications on each cloud, seen as scsmcn-workstation (here projectPrefix is scsmcn in the admin.auto.tfvars file)
1 jump box on each cloud for testing
6 HTTP load balancers, one for each node, which can be accessed through the F5® Distributed Cloud Console
6 F5® Distributed Cloud sites, which can be accessed via the F5® Distributed Cloud Console

F5® Distributed Cloud Mesh does all the stitching of the VPCs and VNETs for you; you don't need to create any transit gateway. It also stitches the VPCs & VNETs to the F5® Distributed Cloud Application Delivery Network. A client accessing a backend application will use the nearest F5® Distributed Cloud regional network HTTP load balancer, minimizing latency using Anycast.

Run the setup.sh script to deploy the F5® Distributed Cloud sites; this will create a virtual site that identifies services deployed in AWS and Azure.

./setup.sh
Initializing the backend...
Initializing provider plugins...
- Reusing previous version of hashicorp/random from the dependency lock file
- Reusing previous version of volterraedge/volterra from the dependency lock file
- Using previously-installed hashicorp/random v3.1.0
- Using previously-installed volterraedge/volterra v0.10.0

Terraform has been successfully initialized!

random_id.buildSuffix: Creating...
random_id.buildSuffix: Creation complete after 0s [id=c9o]
volterra_virtual_site.site: Creating...
volterra_virtual_site.site: Creation complete after 2s [id=3bde7bd5-3e0a-4fd5-b280-7434ee234117]

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

Outputs:
buildSuffix = "73da"
volterraVirtualSite = "scsmcn-site-73da"

This created a random build suffix and the virtual site. Next, run aws-setup.sh to deploy the VPCs, web servers, jump host, HTTP load balancer, F5® Distributed Cloud AWS site, and origin servers:

./aws-setup.sh
Initializing modules...
Initializing the backend...
Initializing provider plugins...
- Reusing previous version of volterraedge/volterra from the dependency lock file
- Reusing previous version of hashicorp/aws from the dependency lock file
- Reusing previous version of hashicorp/random from the dependency lock file
- Reusing previous version of hashicorp/null from the dependency lock file
- Reusing previous version of hashicorp/template from the dependency lock file
- Using previously-installed hashicorp/null v3.1.0
- Using previously-installed hashicorp/template v2.2.0
- Using previously-installed volterraedge/volterra v0.10.0
- Using previously-installed hashicorp/aws v3.60.0
- Using previously-installed hashicorp/random v3.1.0

Terraform has been successfully initialized!

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create
 <= read (data resources)

Terraform will perform the following actions:

# data.aws_instances.volterra["bu1"] will be read during apply
# (config refers to values not yet known)
<= data "aws_instances" "volterra" {
  + id  = (known after apply)
  + ids = (known after apply)
..... truncated output ....

volterra_app_firewall.waf: Creating...
module.vpc["bu2"].aws_vpc.this[0]: Creating...
aws_key_pair.deployer: Creating...
module.vpc["bu3"].aws_vpc.this[0]: Creating...
module.vpc["bu1"].aws_vpc.this[0]: Creating...
aws_route53_resolver_rule_association.bu["bu3"]: Creation complete after 1m18s [id=rslvr-rrassoc-d4051e3a5df442f29]

Apply complete! Resources: 90 added, 0 changed, 0 destroyed.

Outputs:
bu1JumphostPublicIp = "54.213.205.230"
vpcId = "{\"bu1\":\"vpc-051565f673ef5ec0d\",\"bu2\":\"vpc-0c4ad2be8f91990cf\",\"bu3\":\"vpc-0552e9a05bea8013e\"}"

Finally, azure-setup.sh executes the Terraform scripts to deploy the web servers, VNETs, HTTP load balancer, origin servers, and the F5® Distributed Cloud Azure site:

./azure-setup.sh
Initializing modules...
Initializing the backend...
Initializing provider plugins...
- Reusing previous version of volterraedge/volterra from the dependency lock file
- Reusing previous version of hashicorp/random from the dependency lock file
- Reusing previous version of hashicorp/azurerm from the dependency lock file
- Using previously-installed volterraedge/volterra v0.10.0
- Using previously-installed hashicorp/random v3.1.0
- Using previously-installed hashicorp/azurerm v2.78.0

Terraform has been successfully initialized!
..... truncated output ....
azurerm_private_dns_a_record.inside["bu11"]: Creation complete after 2s [id=/subscriptions/187fa2f3-5d57-4e6a-9b1b-f92ba7adbf42/resourceGroups/scsmcn-rg-bu11-73da/providers/Microsoft.Network/privateDnsZones/shared.acme.com/A/inside]

Apply complete! Resources: 58 added, 2 changed, 12 destroyed.

Outputs:
azureJumphostPublicIps = [
  "20.190.21.3",
]

After running the Terraform scripts, you can sign in to the F5® Distributed Cloud Console at https://www.volterra.io/products/voltconsole. Click on System on the left and then Site List to list the sites; you can also enter a search string to find a particular site. Below you can find the list of virtual sites deployed for Azure and AWS; the status of these sites can be seen in the F5® Distributed Cloud Console.

AWS Sites on F5® Distributed Cloud Console
To see the connectivity of sites to the regional edges, click System --> Site Map --> click on the site you want to inspect, then Connectivity --> click AWS. Below you can see the AWS virtual sites created on the F5® Distributed Cloud Console. This provides visibility into throughput, reachability, and health of the infrastructure provisioned on AWS, with both system- and application-level metrics.

Azure Sites on F5® Distributed Cloud Console
Below you can see the Azure virtual sites created on the F5® Distributed Cloud Console. This provides visibility into throughput, reachability, and health of the infrastructure provisioned on Azure, with both system- and application-level metrics.

Analytics on F5® Distributed Cloud Console
To check the status of the application, sign in to the F5® Distributed Cloud Console at https://www.volterra.io/products/voltconsole. Click on the Applications tab --> HTTP Load Balancers --> select the appropriate load balancer --> click Requests. Below you can see various metrics for the applications deployed into the AWS and Azure clouds; you can see latency at different levels, such as client to LB, LB to server, and server to application. It also shows HTTP requests with error codes on application access.

API First F5® Distributed Cloud Console
The F5® Distributed Cloud Console helps with many operational tasks, such as visibility into request types and into the JSON payload of a request, indicating browser type, device type, tenant, which HTTP load balancer the request arrived on, and many more details.

Benefits
OpEx Reduction: A single simplified stack can be used to manage apps in different clouds.
For example, the burden of configuring security policies at different locations is avoided, and the transit costs associated with public clouds can be eliminated.
Reduce Operational Complexity: A network expert is not required, as the F5® Distributed Cloud Console provides a simplified way to configure and manage network resources both at customer edge locations and in the public cloud. Your NetOps or DevOps person can easily deploy the infrastructure or applications without network expertise, and adoption of a new cloud provider is accelerated.
App User Experience: Customers don't have to learn different visibility tools; the F5® Distributed Cloud Console provides end-to-end visibility of applications, which results in a better user experience. The origin server or LB can be moved closer to the customer, which reduces latency for apps and APIs and results in a better experience.
Experience F5 Distributed Cloud with Multi-Cloud Sites and Distributed Apps
Do your applications span multiple clouds? Do you want to simplify and secure connectivity between app services in different clouds or on-prem? Do you want to optimize or improve performance of apps serving a geographically dispersed user base? F5 Distributed Cloud Services is a no-brainer choice for any and all of the above use-cases. To experience how F5 Distributed Cloud addresses these and other scenarios, F5 has released a new interactive simulator: a guided hands-on experience with F5 Distributed Cloud Mesh, F5 Distributed Cloud Stack, and F5 Distributed Cloud Console, plus a few simulated command-line interfaces to test/validate your multi-cloud and distributed cloud connectivity. It takes just a few minutes to walk through any one of these guided simulations and experience what F5 Distributed Cloud can do for your own applications:

Multi-Cloud Networking: Cloud-to-Cloud via HTTP Load Balancer
One of the core F5 Distributed Cloud use-cases is multi-cloud networking. Use the simulated F5 Distributed Cloud Mesh and F5 Distributed Cloud Console to set up Layer 7 connectivity between an Amazon VPC site and an Azure VNET site, and use an HTTP Load Balancer to securely route network traffic.

Multi-Cloud Networking: Cloud-to-Cloud via Sites
As opposed to using a Load Balancer, F5 Distributed Cloud Mesh can also enable Layer 3 connectivity between clouds. Use a simulated F5 Distributed Cloud Console to set up an Amazon VPC site, an Azure VNET site, and a secure network between the two clouds with end-to-end monitoring.

Multi-Cloud Networking Brownfield: Cloud-to-Cloud via Sites
While the above use-cases focused on a greenfield deployment of new networks in each of the cloud service providers, this simulation leverages existing virtual networks to securely connect (Layer 3) AWS & Azure with F5 Distributed Cloud Mesh and F5 Distributed Cloud Console.

Cluster to Cluster AWS-Azure HTTP Load Balancer
F5 Distributed Cloud reduces IT complexity by simplifying operation and deployment of workloads to multiple Kubernetes clusters, especially across multiple clouds. Use the simulated F5 Distributed Cloud Console to add an AWS EKS and an Azure AKS cluster to the F5 Distributed Cloud Sites, use service discovery, and create an HTTP Load Balancer to expose an NGINX web server service running on EKS to a workload running on AKS.

Cluster to Cluster AWS-Azure TCP Load Balancer
Connect Kubernetes clusters and workloads by way of a TCP Load Balancer (Layer 4) via the simulated F5 Distributed Cloud Console experience. Add an AWS EKS and an Azure AKS cluster to the F5 Distributed Cloud Sites, use service discovery, and create a TCP Load Balancer to expose a MySQL database service running on EKS to a workload running on AKS.

Modern Apps in a Distributed Cloud with F5 Distributed Cloud (CLI with kubectl)
F5 Distributed Cloud's approach to modern, distributed apps is enabling a service, multiple services, or even an entire app to be distributed closer to end-users via the F5 Distributed Cloud global network, resulting in dramatically improved app performance. Deploy 3 app services to virtual Kubernetes (vK8s) instantiated in all of the F5 Distributed Cloud regional edge sites globally by way of a simulated command-line interface.

Modern Apps in a Distributed Cloud with F5 Distributed Cloud (Deploy with F5 Distributed Cloud Console)
As in the previous use-case, improve time-to-market, performance, and global availability of your distributed apps by leveraging the F5 Distributed Cloud Global Network.
However, in this case use the simulated F5 Distributed Cloud Console to add a new front-end service workload for a sample application. Don't miss this opportunity to go hands-on with the F5 Distributed Cloud use-cases: explore these interactive demos to try your hand at multi-cloud connectivity and deploying distributed workloads. Once you have run through the simulator, sign up for F5 Distributed Cloud in any one of the tiers available to you and your organization!
Multi-Cloud Networking with F5 Distributed Cloud
One of F5 Distributed Cloud's core features is to provide a simple way to interconnect resources in multiple clouds, creating a seamless communication space for application services. In conjunction with a centralized control and management plane, it becomes a single pane of glass for configuration, security, and visibility, eliminating the excessive administration overhead of managing multiple clouds. This article gives an overview of the F5 Distributed Cloud components and features which contribute to the multi-cloud transit story. Even though those components are the foundation for many other F5 Distributed Cloud Services, the focus of this article is traffic transit.

There are many location options for deploying applications or their parts: public clouds, private clouds, CDNs, ADNs, and so on. All of them have their pros and cons, or even unique capabilities that force owners to distribute application components evenly. Even though it sounds like the right decision, it brings significant overhead to developers and DevOps teams. The reason for this is total platform incompatibility. It not only requires a separate set of knowledge for each platform but also custom solutions to stitch the platforms together into a boundless and secure communication space. F5 Distributed Cloud solves this challenge by providing a unified platform that allows creating remote application deployment sites and securely connecting them by adding them to a virtual global network. The article below reviews the F5 Distributed Cloud components that help to interconnect clouds.

F5 Distributed Cloud Global Network
From a multi-cloud networking perspective, F5 Distributed Cloud is a fast and efficient backbone, the so-called F5 Distributed Cloud global network. It allows pushing traffic through at multi-terabit rates and connecting remote sites using overlay networks. The backbone consists of two main components. The first is 21 points of presence (PoPs) located all over the world, allowing customer sites to connect to the closest possible PoP. The second is redundant private connections between PoPs, to tier-1 carriers, public cloud providers, and SaaS services, providing high-speed, reliable traffic transfer with much less latency than over the Internet. The picture below represents the logical topology of the F5 Distributed Cloud connecting multiple remote sites. The F5 Distributed Cloud global network forms a physical network infrastructure that is securely shared by all tenants/customers. Users can create overlay networks on top of it to interconnect resources in remote sites or expose application services to the Internet.

F5 Distributed Cloud Site
As per the F5 Distributed Cloud documentation, a "Site is a physical or cloud location where F5 Distributed Cloud Nodes are deployed". That is true; however, it doesn't give much clarity from a multi-cloud transit point of view. From that perspective, a site is a remote network that uses a cluster of F5 Distributed Cloud node(s) as a gateway to connect it back to the F5 Distributed Cloud global network. In simple words, if the F5 Distributed Cloud global network is a trunk, then sites are the leaves. F5 Distributed Cloud unifies and simplifies the approach to creating and interconnecting sites, whether they are located in a public cloud or in a private data center. The DevOps team doesn't need to manually manage resources in a remote location. Instead, F5 Distributed Cloud automatically creates a remote network and deploys F5 Distributed Cloud node(s) to attach it back to the global backbone.
Currently, F5 Distributed Cloud supports sites in the following locations:
AWS VPC
AWS TGW
GCP VPC
Azure VNET
Physical data center
Edge location
After a site is registered against the F5 Distributed Cloud Console and nodes are deployed, networking and application delivery configuration is unified regardless of the site's type.

F5 Distributed Cloud Node
To better understand how inter-site connectivity is organized, it is useful to take a closer look at the F5 Distributed Cloud nodes. F5 Distributed Cloud nodes are Linux-based software appliances that act as a K8s cluster and form a super-converged infrastructure to deliver networking and application delivery services. The picture below represents a logical diagram of the services running on an F5 Distributed Cloud node. A cluster of nodes is located at the edge of a site and either runs customer workloads or acts as a gateway to interconnect site-local resources to the F5 Distributed Cloud global network.

Site Deployment Modes
F5 Distributed Cloud sites can be deployed in two modes: on-a-stick or default gateway. With on-a-stick mode, the ingress and egress traffic from the node is handled on a single network interface, as shown in the diagram. In this mode, the network functionality will be either a load balancer, a gateway for Kubernetes, an API gateway, or a generic proxy. Mostly this mode is used as a stub site to run application workloads using F5 Distributed Cloud Mesh services such as virtual K8s. With default-gateway mode, network functions like routing and firewalling between inside and outside networks can be enabled in addition to the functionality available in on-a-stick mode. This mode is suitable not only for running workloads but also for acting as a router and forwarding traffic to internal resources deployed behind the site.

Inter-Site Networking
Once the F5 Distributed Cloud nodes are deployed across the sites, inter-site connectivity can be configured via the F5 Distributed Cloud Console. The picture below shows a logical diagram of all components participating in the connectivity setup. Connectivity configuration includes three steps. First of all, virtual networks have to be created with proper subnetting. The following network types are available:
Per-Site: Even though this network is configured once globally, it is instantiated as an independent network on each site. As a result, this network instantiated on one site cannot talk to the same network instantiated on another site.
Global: This network is exactly the same across all sites on which it is instantiated. Any endpoint that is a member of this network can talk to other members irrespective of which site they belong to.
Site-Local: There can only be one network of this type on a particular site, and it is automatically configured by the system during bootstrap. This can also be considered the site-local outside network, and it needs access to the public Internet for registration during the site bring-up process.
Site-Local-Inside: There can only be one network of this type on a particular site, and it is automatically configured during bootstrap for sites with two interfaces (default gateway).
Public: This is a conceptual virtual network that represents the public Internet. It is only present on the F5 Distributed Cloud Regional Edge sites and not on customer sites.
Site-Local-Service: This is another conceptual virtual network, and it represents the service network of Kubernetes within the site. This is the network from which cluster IPs for K8s service configuration are allocated.
VER-Internal: This is another conceptual virtual network, and it represents where all control plane services for the site exist. This is only used if the control plane needs access to tenant services; for example, a site needs access to customer secrets that are stored in HashiCorp Vault running on another site.
Then those networks have to be attached to one of the F5 Distributed Cloud node's network interfaces. There are two types of interfaces, "physical" and "logical", that can be configured:
For every networking device (e.g., eth0, eth1) on a node within a site, a "physical" interface can be configured.
"Logical" interfaces are child interfaces of a physical interface, for example, VLANs.
Lastly, a network connector defines the connectivity type between networks. There are three types of network connectors:
Direct: inside-network and outside-network are connected directly. Endpoints in each virtual network can initiate connections to the others. If inside or outside networks are connected to other external networks, they can be configured with dynamic routing to exchange routes (currently, only BGP is supported for dynamic routing).
SNAT: inside-network is connected using SNAT to outside-network. Endpoints from the inside-network can initiate connections to the outside-network. However, endpoints from the outside-network cannot initiate connections to the inside-network.
Forward-Proxy: Along with SNAT, a forward proxy can be configured. This enables inspection of HTTP and TLS connections. Using a forward proxy, URL filtering and visibility of the different hosts accessed from the inside network can be configured.
Mixing network types with network connectors gives a user a wide variety of options to let resources located in one site talk to resources in another site, or even to the Internet, and vice versa. To get an example or to practice configuring inter-site connectivity, take a look at the step-by-step videos (link1, link2) or use a simulator (link1, link2); a declarative sketch follows below.
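These same objects can be created with the Volterra Terraform provider used in the previous article. The sketch below defines a global virtual network and a connector that attaches a site-local inside network to it. The two resource types exist in the provider, but the exact one-of block names (global_network, sli_to_global_dr) are assumptions to verify against the provider docs.

// A global virtual network, instantiated identically on every site that uses it
resource "volterra_virtual_network" "global-overlay" {
  name           = "global-overlay"
  namespace      = "system"
  global_network = true   // assumed flag marking the network type as Global
}

// Direct (routed) connector from a site-local inside network to the global network
resource "volterra_network_connector" "inside-to-global" {
  name      = "inside-to-global"
  namespace = "system"

  sli_to_global_dr {   // assumed one-of: site-local-inside to global, direct
    global_vn {
      name      = volterra_virtual_network.global-overlay.name
      namespace = "system"
    }
  }
}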
Journey to the Multi-Cloud Challenges
Introduction
The proliferation of internet-based applications, digital transformations accelerated by the pandemic, an increase in multi-cloud adoption, and the rise of the distributed cloud paradigm all bring new business opportunities as well as new operational challenges. According to a Propeller Insights survey, 75% of all organizations are deploying apps in multiple clouds, 63% of those organizations are using three or more clouds, and 56% are finding it difficult to manage workloads across different cloud providers, citing challenges with security, reliability, and connectivity. Below I outline some of the common challenges F5 has seen and illustrate how F5 Distributed Cloud is able to address those challenges. For the purposes of the following examples I am using this demo architecture.

Challenge #1: IP Conflict and IP Exhaustion
As organizations accelerate their digital transformation, they begin to experience significant network growth and changes. As their adoption of multiple public clouds and edge providers expands, they begin to encounter challenges with IP overlap and IP exhaustion. Typically, these challenges seldom happen on the Internet, as IP addresses are centrally managed. However, this challenge is common for non-Internet traffic, because organizations use private/reserved IP ranges (RFC 1918) within their networks, and any organization is free to use any private ranges it wants. This presents an increasingly common problem as networks expand into public clouds, with the ease of infrastructure bootstrapping using automation, the needs of multi-cloud networking, and finally mergers and acquisitions. F5 Distributed Cloud can help organizations overcome IP conflict and IP exhaustion challenges by provisioning multiple apps with a single IP address.
How to Provision Multiple Apps with a Single IP Address (~8min)

Challenge #2: Easily Consumable Application Services via a Service Catalogue
A multi-cloud paradigm causes applications to be very distributed. We often see applications running in multiple on-prem data centers, at the edge, and on public cloud infrastructure. Making those applications easily available often involves many infrastructure and security control changes, which is not an easy task. This includes common tasks such as service advertisement, updates to network routing and switching, changing firewall rules, and provisioning DNS. In this demo, we demonstrate how to seamlessly provision and advertise services to and from public cloud providers and data centers. This capability enables an organization to seamlessly provision services and create consumable service catalogues.
How to Seamlessly Provision Services to/from the Cloud Edge (~4min)

Challenge #3: Operational (Day-2) Complexities
Often users have multiple discrete tools managing their infrastructure, and each tool provides its own dashboard for telemetry, visibility, and observability. Users need all of these views consolidated into a single consistent one so they can tell exactly what is happening in their environments. The F5 Distributed Cloud Console provides a 'single pane of glass' for telemetry, visibility, and observability, delivering operational efficiency for Day-2 operations and reducing total cost of ownership.
Get a Single Pane of Glass on Telemetry, Visibility and Observability (~7min)

Challenge #4: Cloud Vendor Lock-in Impedes Business Agility
Most organizations do not want their cloud workload locked into a particular cloud provider. Cloud vendor lock-in can be a major barrier to the adoption of cloud computing, and CIOs show some concern with vendor lock-in per Flexera's 2020 CIO Priorities Report. To avoid cloud lock-in, create application resiliency, and get back some of the freedoms of cloud consumption (moving workloads from cloud to cloud), organizations need to be able to change cloud providers quickly and easily in the unlikely event that one cloud provider becomes unavailable.
Workload Portability - How to Seamlessly Move Workloads from Cloud to Cloud (~4min)

Challenge #5: Consistent Security Policies Across Clouds
How do you ensure that every security policy you require is applied and enforced consistently across an entire fleet of endpoints? According to the F5 2020 State of Application Services Report, 59% of respondents said that applying consistent security policies across all company applications was one of their biggest challenges in multi-cloud security. This demo shows how to apply consistent security policies (WAF) across a fleet of cloud workloads deployed at the edge. This helps reduce risk, increase compliance, and maintain effective governance.
How to Apply Consistent Security Policies Across Clouds (~5min)

Challenge #6: Complexities of Multi-Cloud Networking and Integration with AWS Transit Gateway
A multi-cloud strategy introduces complexities around networking and security control between clouds and within clouds. Within one cloud (e.g., AWS), an organization may use the AWS Transit Gateway (TGW) to stitch together inter-VPC communication. Managing multiple VPCs attached to a TGW is, by itself, a challenge in managing security control between VPCs. In this demo, we show a simple way to leverage the F5 Distributed Cloud integration with AWS TGW to manage security policy across VPCs (also known as east-west traffic). This demo also demonstrates connecting an AWS VPC with other cloud providers such as Azure, GCP, or an on-prem cloud solution in order to unify the connectivity and reachability of your workloads.
Multi-Cloud Integration with AWS Transit Gateway (~19min)
Getting Hands on with F5 solutions for Multi-Cloud Networking and Distributed Workloads
With today's rapid adoption of multiple clouds by enterprises to solve challenges such as rapid delivery, high availability, flexibility, cost-effectiveness, and avoidance of vendor lock-in, new solutions are needed to provide connectivity across public clouds, private clouds, and on-premise data centers. Multi-Cloud Networking (MCN) solutions are now available to address these requirements; however, they are largely focused on Layer 3 of the OSI model, providing a logical, secure, software-defined network. F5 believes that it is necessary to deliver the capabilities of this MCN approach and go further, providing a truly scalable application fabric that extends networking and security capabilities up to and including the application Layer 7. This allows for multi-layer security, improved observability, analytics, multi-tenancy, and self-service, not just for NetOps but also for developers, DevOps, and SecOps teams. One of the challenges of MCN is the difficulty of building and maintaining a consistent and standard networking approach across different clouds. We see the opportunity to solve this by shifting towards a Distributed Cloud model, discussed by F5 CTO Geng Lin in a recent blog. To help developers explore how to start experiencing and embracing this new Distributed Cloud model, F5 has created a series of guided walkthroughs leveraging Volterra's capabilities to demonstrate two important use cases:
1. Connecting Kubernetes (K8s) clusters across clouds
2. Supporting globally distributed application workloads
The scenario covered by the walkthroughs is an e-commerce sample application (provided) for which the developer is looking to improve the front-end application experience by implementing a globally distributed Find-a-Store service. The guide covers how to securely connect between Kubernetes deployments and also distribute app services globally to improve performance. Everything you need is provided (the application, automation scripts, etc.); just bring your cloud provider account. The walkthroughs for starting with AWS, Azure, or GCP are available at the links below:
·Amazon Elastic Kubernetes Service (EKS)
·Azure Kubernetes Service (AKS)
·Google Kubernetes Engine (GKE)
Create an internal HTTP Load-Balancer on Volterra with Terraform
Problem this snippet solves:
How to create an internal HTTP Load-Balancer with VoltMesh where the Origin is reachable through a Volterra node. Two steps are needed:
Creation of the Origin (1-origin.tf file)
Creation of the Load-Balancer (2-http-lb.tf file)

How to use this snippet:
Pre-requirements:
Have a Volterra API Certificate. Please see this page for the API Certificate generation: https://volterra.io/docs/how-to/user-mgmt/credentials
Extract the certificate and the key from the .p12:

openssl pkcs12 -info -in certificate.p12 -out private_key.key -nodes -nocerts
openssl pkcs12 -info -in certificate.p12 -out certificate.cert -nokeys

Create a variables.tf Terraform variables file:

variable "api_cert" {
  type    = string
  default = "/<full path to>/certificate.cert"
}

variable "api_key" {
  type    = string
  default = "/<full path to>/private_key.key"
}

variable "api_url" {
  type    = string
  default = "https://<tenant_name>.console.ves.volterra.io/api"
}

Create a main.tf Terraform file:

terraform {
  required_version = ">= 0.12.9, != 0.13.0"
  required_providers {
    volterra = {
      source  = "volterraedge/volterra"
      version = ">=0.0.6"
    }
  }
}

provider "volterra" {
  api_cert = var.api_cert
  api_key  = var.api_key
  url      = var.api_url
}

In the directory where your Terraform files are, run:

terraform init

Then:

terraform apply

Code:

//==========================================================================
// Definition of the Origin, 1-origin.tf
// Start of the TF file
resource "volterra_origin_pool" "sample-http-origin-pool" {
  name = "sample-http-origin-pool"
  // Name of the namespace where the origin pool must be deployed
  namespace = "mynamespace"

  origin_servers {
    private_ip {
      ip = "10.17.20.13"
      // From which interface of the on-site node the IP of the service is
      // reachable. Values are inside_network / outside_network or both.
      outside_network = true

      // Site definition
      site_locator {
        site {
          name      = "name-of-the-site"
          namespace = "system"
          tenant    = "name-of-the-tenant"
        }
      }
    }
    labels = {
    }
  }

  no_tls                 = true
  port                   = "80"
  endpoint_selection     = "LOCALPREFERED"
  loadbalancer_algorithm = "LB_OVERRIDE"
}
// End of the file
//==========================================================================

//==========================================================================
// Definition of the Load-Balancer, 2-http-lb.tf
// Start of the TF file
resource "volterra_http_loadbalancer" "sample-http-lb" {
  depends_on = [volterra_origin_pool.sample-http-origin-pool]

  // Mandatory "Metadata"
  name = "sample-http-lb"
  // Name of the namespace where the load balancer must be deployed
  namespace = "mynamespace"
  // End of mandatory "Metadata"

  // Mandatory "Basic configuration"
  domains = ["mydomain.internal"]
  http {
    dns_volterra_managed = false
  }
  // End of mandatory "Basic configuration"

  // Optional "Default Origin server"
  default_route_pools {
    pool {
      name      = "sample-http-origin-pool"
      namespace = "mynamespace"
    }
    weight = 1
  }
  // End of optional "Default Origin server"

  // Mandatory "VIP configuration"
  advertise_on_public_default_vip = true
  // End of mandatory "VIP configuration"

  // Mandatory "Security configuration"
  no_service_policies = true
  no_challenge        = true
  disable_rate_limit  = true
  disable_waf         = true
  // End of mandatory "Security configuration"

  // Mandatory "Load Balancing Control"
  source_ip_stickiness = true
  // End of mandatory "Load Balancing Control"
}
// End of the file
//==========================================================================
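As published, the snippet advertises the load balancer on the public default VIP. To keep it strictly internal, the provider also supports custom advertisement on a site's inside network. The advertise_custom block below is a sketch based on the provider's documented schema; the SITE_NETWORK_INSIDE enum value and the nesting are assumptions worth verifying before use.

// Sketch: inside volterra_http_loadbalancer, in place of
// advertise_on_public_default_vip = true:
advertise_custom {
  advertise_where {
    site {
      network = "SITE_NETWORK_INSIDE"   // assumed enum value
      site {
        name      = "name-of-the-site"
        namespace = "system"
        tenant    = "name-of-the-tenant"
      }
    }
  }
}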
Create an Internet exposed HTTPS Load-Balancer on Volterra with Terraform (Origin handled by a Volterra node)
Problem this snippet solves:
How to create an Internet-exposed HTTPS Load-Balancer with VoltMesh where the Origin is reachable through a Volterra node. The Origin is HTTP-based but will be exposed on the Internet over HTTPS. Two steps are needed:
Creation of the Origin (1-origin.tf file)
Creation of the Load-Balancer (2-https-lb.tf file)

How to use this snippet:
Pre-requirements:
Have a Volterra API Certificate. Please see this page for the API Certificate generation: https://volterra.io/docs/how-to/user-mgmt/credentials
Extract the certificate and the key from the .p12:

openssl pkcs12 -info -in certificate.p12 -out private_key.key -nodes -nocerts
openssl pkcs12 -info -in certificate.p12 -out certificate.cert -nokeys

Create a variables.tf Terraform variables file:

variable "api_cert" {
  type    = string
  default = "/<full path to>/certificate.cert"
}

variable "api_key" {
  type    = string
  default = "/<full path to>/private_key.key"
}

variable "api_url" {
  type    = string
  default = "https://<tenant_name>.console.ves.volterra.io/api"
}

Create a main.tf Terraform file:

terraform {
  required_version = ">= 0.12.9, != 0.13.0"
  required_providers {
    volterra = {
      source  = "volterraedge/volterra"
      version = ">=0.0.6"
    }
  }
}

provider "volterra" {
  api_cert = var.api_cert
  api_key  = var.api_key
  url      = var.api_url
}

Encode in base64 the public key of the TLS certificate you want to use in the HTTPS load-balancer. From a shell, run:

base64 publicpart_of_tls_certificate.pem

Get the Volterra vesctl tool: https://gitlab.com/volterra.io/vesctl/blob/main/README.md
Then in your home directory, create a .vesconfig file with the following lines:

server-urls: https://<tenant>.console.ves.volterra.io/api
key: /<full path to>/private_key.key
cert: /<full path to>/certificate.cert

Then in the folder where you have installed vesctl, run:

./vesctl.darwin-amd64 request secrets get-public-key > tenant-public-key
./vesctl.darwin-amd64 request secrets get-policy-document --namespace shared --name ves-io-allow-volterra > ves-io-allow-volterra-policy
./vesctl.darwin-amd64 request secrets encrypt --policy-document ves-io-allow-volterra-policy --public-key tenant-public-key privkey.pem > blindfolded-privkey

Where privkey.pem is the private key of your TLS certificate. The Volterra-encrypted TLS key will be available in the blindfolded-privkey file.

In the directory where your Terraform files are, run:

terraform init

Then:

terraform apply

Code:

//==========================================================================
// Definition of the Origin, 1-origin.tf
// Start of the TF file
resource "volterra_origin_pool" "sample-https-origin-pool" {
  name = "sample-https-origin-pool"
  // Name of the namespace where the origin pool must be deployed
  namespace = "mynamespace"

  origin_servers {
    private_ip {
      ip = "10.17.20.13"
      // From which interface of the on-site node the IP of the service is
      // reachable. Values are inside_network / outside_network or both.
      outside_network = true

      // Site definition
      site_locator {
        site {
          name      = "name-of-the-site"
          namespace = "system"
          tenant    = "name-of-the-tenant"
        }
      }
    }
    labels = {
    }
  }

  no_tls                 = true
  port                   = "80"
  endpoint_selection     = "LOCALPREFERED"
  loadbalancer_algorithm = "LB_OVERRIDE"
}
// End of the file
//==========================================================================

//==========================================================================
// Definition of the Load-Balancer, 2-https-lb.tf
// Start of the TF file
resource "volterra_http_loadbalancer" "sample-https-lb" {
  depends_on = [volterra_origin_pool.sample-https-origin-pool]

  // Mandatory "Metadata"
  name = "sample-https-lb"
  // Name of the namespace where the load balancer must be deployed
  namespace = "mynamespace"
  // End of mandatory "Metadata"

  // Mandatory "Basic configuration"
  domains = ["mydomain.internal"]
  https {
    add_hsts      = true
    http_redirect = true
    tls_parameters {
      no_mtls = true
      tls_config {
        default_security = true
      }
      tls_certificates {
        // Base64-encoded public certificate (see the base64 step above)
        certificate_url = "string:///<base64 encoded certificate>"
        // The private_key block below is reconstructed: the placeholders were
        // lost in publishing. The blindfolded key is the content of the
        // blindfolded-privkey file generated with vesctl above.
        private_key {
          blindfold_secret_info {
            location = "string:///<blindfolded private key>"
          }
          secret_encoding_type = "EncodingNone"
        }
      }
    }
  }
  // End of mandatory "Basic configuration"

  default_route_pools {
    pool {
      name      = "sample-https-origin-pool"
      namespace = "mynamespace"
    }
    weight = 1
  }

  // Mandatory "VIP configuration"
  advertise_on_public_default_vip = true
  // End of mandatory "VIP configuration"

  // Mandatory "Security configuration"
  no_service_policies = true
  no_challenge        = true
  disable_rate_limit  = true
  disable_waf         = true
  // End of mandatory "Security configuration"

  // Mandatory "Load Balancing Control"
  source_ip_stickiness = true
  // End of mandatory "Load Balancing Control"
}
// End of the file
//==========================================================================
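Rather than pasting the base64 certificate and blindfolded key inline, they can be passed in as Terraform variables. This is plain Terraform interpolation; the variable names here are hypothetical, not part of the original snippet.

variable "tls_cert_b64" {
  type        = string
  description = "Base64-encoded public certificate (output of the base64 step above)"
}

variable "blindfolded_key" {
  type        = string
  description = "Blindfolded private key (content of the blindfolded-privkey file)"
}

// Then reference them inside tls_certificates:
//   certificate_url = "string:///${var.tls_cert_b64}"
// and inside the private key block:
//   location = "string:///${var.blindfolded_key}"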
Create a VPC VoltMesh AWS site (two interfaces node)
Problem this snippet solves:
How to create a VoltMesh node inside an existing VPC. The VoltMesh node will be a two-interface node and so can be used as both an ingress and egress gateway for the VPC.

How to use this snippet:
Pre-requirements:
Get and create the following from the AWS console:
Get the ID of the VPC in which you want to deploy the VoltMesh node
Get the ID of the "workload subnet" where the resources you want to expose with the VoltMesh node in the VPC are sitting
Create and get the IDs of the following:
One subnet (/28 for instance) that will be used as the "outside" subnet for the VoltMesh node, i.e., handling the Internet connectivity
One subnet (/28 for instance) that will be used as the "inside" subnet for the VoltMesh node
For more information regarding our AWS concepts, please refer to: https://www.volterra.io/docs/how-to/site-management/create-aws-site
Have entered your AWS account credentials within the Volterra console. Please refer to: https://www.volterra.io/docs/how-to/site-management/cloud-credentials
Have a Volterra API Certificate. Please see this page for the API Certificate generation: https://volterra.io/docs/how-to/user-mgmt/credentials
Extract the certificate and the key from the .p12:

openssl pkcs12 -info -in certificate.p12 -out private_key.key -nodes -nocerts
openssl pkcs12 -info -in certificate.p12 -out certificate.cert -nokeys

Create a variables.tf Terraform variables file:

variable "api_cert" {
  type    = string
  default = "/<full path to>/certificate.cert"
}

variable "api_key" {
  type    = string
  default = "/<full path to>/private_key.key"
}

variable "api_url" {
  type    = string
  default = "https://<tenant_name>.console.ves.volterra.io/api"
}

Create a main.tf Terraform file:

terraform {
  required_version = ">= 0.12.9, != 0.13.0"
  required_providers {
    volterra = {
      source  = "volterraedge/volterra"
      version = ">=0.0.6"
    }
  }
}

provider "volterra" {
  api_cert = var.api_cert
  api_key  = var.api_key
  url      = var.api_url
}

In the directory where your Terraform files are, run:

terraform init

Then:

terraform apply

Code:

// Replace the <...> placeholders with the values gathered in the pre-requirements.
resource "volterra_aws_vpc_site" "aws-vpc-example" {
  name          = "aws-vpc-example"
  namespace     = "system"
  aws_region    = "<your AWS region>"
  assisted      = false
  instance_type = "t3.xlarge"

  // AWS credentials entered in the Volterra Console
  aws_cred {
    name      = "<your AWS credentials object>"
    namespace = "system"
    tenant    = "<your tenant>"
  }

  vpc {
    vpc_id = "<existing VPC ID>"
  }

  ingress_egress_gw {
    aws_certified_hw         = "aws-byol-multi-nic-voltmesh"
    no_forward_proxy         = true
    no_global_network        = true
    no_inside_static_routes  = true
    no_outside_static_routes = true
    no_network_policy        = true
  }

  // Availability zone and subnet options for the Volterra node
  az_nodes {
    // AWS AZ
    aws_az_name = "<AZ of your subnets>"
    // Site local outside subnet
    outside_subnet {
      existing_subnet_id = "<outside subnet ID>"
    }
    // Site local inside subnet
    inside_subnet {
      existing_subnet_id = "<inside subnet ID>"
    }
    // Workload subnet
    workload_subnet {
      existing_subnet_id = "<workload subnet ID>"
    }
  }

  // Mandatory
  logs_streaming_disabled = true
  // Mandatory
  no_worker_nodes = true
}
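The volterra_aws_vpc_site object only records the desired state. Official provider examples typically pair it with a volterra_tf_params_action resource to trigger the Terraform apply that actually instantiates the site in AWS; the sketch below assumes that resource behaves as in those examples, so check the provider docs before relying on it.

resource "volterra_tf_params_action" "apply-aws-vpc" {
  site_name       = volterra_aws_vpc_site.aws-vpc-example.name
  site_kind       = "aws_vpc_site"
  action          = "apply"
  wait_for_action = true   // block until the site deployment finishes
}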