Multicloud Networking
F5 Distributed Cloud - Customer Edge Site - Deployment & Routing Options
F5 Distributed Cloud Customer Edge (CE) software deployment models for scale and routing for enterprises deploying multi-cloud infrastructure. Today's service delivery environments span multiple clouds in a hybrid environment. How your multi-cloud solution attaches to your existing on-prem and cloud networks can be the difference between a successful overlay fabric and one that leaves you wanting more out of your solution. Learn your options with F5 Distributed Cloud Customer Edge software.
Kubernetes architecture options with F5 Distributed Cloud Services
Summary F5 Distributed Cloud Services (F5 XC) can integrate with your existing Kubernetes (K8s) clusters and/or host a K8s workload itself. Within these distinctions, we have multiple architecture options. This article explores four major architectures in ascending order of sophistication and advantages.
Architecture #1: External Load Balancer (Secure K8s Gateway)
Architecture #2: CE as a pod (K8s site)
Architecture #3: Managed Namespace (vK8s)
Architecture #4: Managed K8s (mK8s)
Kubernetes Architecture Options
As K8s adoption continues to grow, so do the options for how we run K8s and integrate with existing K8s platforms. F5 XC can both integrate with your existing K8s clusters and run a managed K8s platform itself. Multiple architectures exist within these offerings too, so I was thoroughly confused when I first heard about these possibilities. A colleague recently laid it out for me in a conversation: "Michael, listen up: XC can either integrate with your K8s platform, run inside your K8s platform, host virtual K8s (Namespace-aaS), or run a K8s platform in your environment." I replied, "That's great. Now I have a mental model for differentiating between architecture options." This article will overview these architectures and provide 101-level context: when, how, and why would you implement these options?
Side note 1: F5 XC concepts and terms
F5 XC is a global platform that can provide networking and app delivery services, as well as compute (K8s workloads). We call each of our global PoPs a Regional Edge (RE). REs are highly meshed to form the backbone of the global platform. They connect your sites, they can expose your services to the Internet, and they can run workloads. This platform is extensible into your data center by running one or more XC Nodes in your network, also called a Customer Edge (CE). A CE is a compute node in your network that registers to our global control plane and is then managed by the customer as SaaS. The registration of one or more CEs creates a customer site in F5 XC. A CE can run on a hypervisor (VMware/KVM/etc.), a hyperscaler (AWS, Azure, GCP, etc.), bare metal, or even as a K8s pod, and can be deployed in HA clusters. XC Mesh functionality provides connectivity between sites, security services, and observability. In addition, XC App Stack functionality optionally allows a large and arbitrary number of managed clusters to be logically grouped into a virtual site with a single K8s mgmt interface. So where Mesh services provide the networking, App Stack services provide the Kubernetes compute mgmt. Our first two architectures require Mesh services only, and our last two require App Stack.
Side note 2: Service-to-service communication
I'm often asked how to allow services between clusters to communicate with each other. This is possible and easy with XC. Each site can publish services to every other site, including K8s sites. This means that any K8s service can be reachable from other sites you choose. And this can be true in any of the architectures below, although more granular controls are possible with the more sophisticated architectures. I'll explore this common question more in a separate article.
Architecture 1: External Load Balancer (Secure K8s Gateway)
In a Secure Kubernetes Gateway architecture, you have integration with your existing K8s platform, using the XC node as the external load balancer for your K8s cluster. In this scenario, you create a ServiceAccount and kubeconfig file to configure XC.
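The exact permissions F5 XC needs for discovery are spelled out in the F5 documentation, so treat the following as a minimal illustrative sketch only; the ServiceAccount name and namespace are placeholders of my own, and the idea is simply a read-only account whose token you embed in the kubeconfig you hand to XC:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: xc-discovery            # placeholder name
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: xc-discovery-read
rules:
# read-only access to the objects service discovery typically needs
- apiGroups: [""]
  resources: ["services", "endpoints", "nodes", "pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: xc-discovery-read
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: xc-discovery-read
subjects:
- kind: ServiceAccount
  name: xc-discovery
  namespace: kube-system

A kubeconfig built from this ServiceAccount's token is what gets referenced in the XC service discovery configuration.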
The XC node then performs service discovery against your K8s API server. I've covered this process in a previous article, but the advantage is that you can integrate with existing K8s platforms. This allows exposing both NodePort and ClusterIP services via the XC node. XC is not hosting any workloads in this architecture, but it is exposing your services to your local network, or remote sites, or the Internet. In the diagram above, I show a web application being accessed from a remote site (and/or the Internet) where the origin pool is a NodePort service discovered in a K8s cluster.
Architecture 2: Run a site within a K8s cluster (K8s site type)
Creating a K8s site is easy - just deploy a single manifest found here. This file deploys multiple resources in your cluster, and together these resources work to provide the services of a CE and create a customer site. I've heard this referred to as "running a CE inside of K8s" or "running your CE as a pod". However, when I say "CE node" I'm usually referring to a discrete compute node like a VM or piece of hardware; this architecture is actually a group of pods and related resources that run within K8s to create an XC customer site. With XC running inside your existing cluster, you can expose services within the cluster by DNS name because the site will resolve these from within the cluster. Your service can then be exposed anywhere by the F5 XC platform. This is similar to Architecture 1 above, but with this model, your site is simply a group of pods within K8s. An advantage here is the ability to expose services of other types (e.g. ClusterIP). A site deployed into a K8s cluster will only support Mesh functionality and does not support AppStack functionality (i.e., you cannot run a cluster within your cluster). In this architecture, XC acts as a K8s ingress controller with built-in application security. It also enables Mesh features, such as publishing of other sites' services on this site, and publishing of this site's discovered services on other sites.
Architecture 3: vK8s (Namespace-as-a-Service)
If the services you use include AppStack capabilities, then architectures #3 and #4 are possible for you. In these scenarios, our XC node actually runs your K8s workloads. We are no longer integrating XC with your existing K8s platform. XC is the platform. A simple way to run K8s workloads is to use a virtual K8s (vK8s) architecture. This could be referred to as a "managed Namespace" because by creating a vK8s object in XC you get a single namespace in a virtual cluster. Your Namespace can be fully hosted (deployed to REs), run on your VMs (CEs), or both. Your kubeconfig file will allow access to your Namespace via the hosted API server. Via your regular kubectl CLI (or via the web console) you can create/delete/manage K8s resources (Deployments, Services, Secrets, ServiceAccounts, etc.) and view application resource metrics. This is great if you have workloads that you want to deploy to remote regions where you do not have infrastructure and would prefer to run in F5's REs, or if you have disparate clusters across multiple sites and you'd like to manage multiple K8s clusters via a single centralized, virtual cluster.
Best practice guard rails for vK8s
With a vK8s architecture, you don't have your own cluster, but rather a managed Namespace. So there are some restrictions (for example, you cannot run a container as root, bind to a privileged port, or bind to the host network). You cannot create CRDs, ClusterRoles, PodSecurityPolicies, or Namespaces, so K8s Operators are not supported. In short, you don't have a managed cluster, but a managed Namespace on a virtual cluster.
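Within those guard rails, day-to-day work looks like ordinary Kubernetes against the hosted API server. As a minimal sketch only (the app name and image are placeholders I picked; the image is an unprivileged NGINX build that listens on 8080), a Deployment that stays within the restrictions above might look like this, applied with the vK8s kubeconfig downloaded from the Console:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app                   # placeholder name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: demo-app
        image: nginxinc/nginx-unprivileged:stable   # runs as non-root
        ports:
        - containerPort: 8080                        # unprivileged port, per the guard rails
        securityContext:
          runAsNonRoot: true

kubectl --kubeconfig ./vk8s-kubeconfig.yaml apply -f demo-app.yaml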
Architecture 4: mK8s (Managed K8s)
In a managed K8s (mK8s, also known as physical K8s or pK8s) deployment, we have an enterprise-level K8s distribution that is run at your site. This means you can use XC to deploy/manage/upgrade the K8s infrastructure, but you manage the Kubernetes resources. The benefits include what is typical for 3rd-party K8s mgmt solutions, but also some key differentiators:
multi-cloud, with automation for Azure, AWS, and GCP environments
consumed by you as SaaS
enterprise-level traffic control
natively allows a large and arbitrary number of managed clusters to be logically managed with a single K8s mgmt interface
You can enable kubectl access against your local cluster and disable the hosted API server, so your kubeconfig file can point to a global URL or a local endpoint on-prem. Another benefit of mK8s is that you are running a full K8s cluster at your site, not just a Namespace in a virtual cluster. The restrictions that apply to vK8s (see above) do not apply to mK8s, so you could run privileged pods if required, use Operators that make use of ClusterRoles and CRDs, and perform other tasks that require cluster-wide access.
Traffic management controls with mK8s
Because your workloads run in a cluster managed by XC, we can apply more sophisticated and native policies to K8s traffic than in the non-managed clusters of the earlier architectures:
Service isolation can be enforced within the cluster, so that pods in a given namespace cannot communicate with services outside of that namespace, by default.
More service-to-service controls exist so that you can decide which services can reach other services with more granularity.
Egress control can be natively enforced for outbound traffic from the cluster, by namespace, labels, IP ranges, or other methods. E.g.: Svc A can reach myapi.example.com but no other Internet service.
WAF policies, bot defense, L3/4 policies, etc. (all of the policies that you have typically applied with network firewalls, WAFs, and so on) can be applied natively within the platform.
This architecture took me a long time to understand, and longer to fully appreciate. But once you have run your workloads natively on a managed K8s platform that is connected to a global backbone and capable of performing network and application delivery within the platform, the security and traffic mgmt benefits become very compelling.
Conclusion: As K8s continues to expand, management solutions for your clusters make it possible to secure your K8s services, whether they are managed by XC or exist in disparate clusters. With F5 XC as a global platform consumed as a service (not a discrete installation managed by you), the available architectures here are unique and therefore can accommodate the diverse (and changing!) ways we see K8s run today.
Related Articles
Securely connecting Kubernetes Microservices with F5 Distributed Cloud
Multi-cluster Multi-cloud Networking for K8s with F5 Distributed Cloud - Architecture Pattern
Multiple Kubernetes Clusters and Path-Based Routing with F5 Distributed Cloud
A complete Multi-Cloud Networking walkthrough with F5 Distributed Cloud
F5 Distributed Cloud – Multi-Cloud Networking
F5 Distributed Cloud (F5 XC) provides a Software-as-a-Service based platform to connect, deliver, secure, and operate your networks and applications across any environment. This walkthrough contains two sections. The first section uses F5 Distributed Cloud Network Connect to network across cloud locations and providers with simplified provisioning and end-to-end security. The second part uses F5 Distributed Cloud App Connect, and shows how to securely connect distributed workloads across cloud and edge locations with integrated app security.
Distributed Cloud Network Connect
Network Connect helps customers establish a multi-cloud networking fabric with end-to-end cloud orchestration, a gateway that implements L3-L7 functions to enforce network connectivity and security, and a unified policy with central visibility for collaboration across NetOps and SecOps.
1. Deploy F5 XC Customer Edge Site(s)
Step 1: Establish a multi-cloud networking fabric by deploying F5 XC Customer Edge (CE) sites (cloud, edge, on-prem).
➡️ See the following article and connected video to learn how to use the Distributed Cloud Console to deploy a CE in AWS and in Azure, and then how to route traffic between each of the sites.
Using F5 Distributed Cloud Network Connect to transit, route, & secure private cloud environments
➡️ F5 XC can orchestrate private connectivity, including AWS PrivateLink, Azure CloudLink, and many other private transport providers. The following article covers this capability in greater detail.
Using F5 Distributed Cloud private connectivity orchestration for secure multi-cloud infrastructure
Step 2: Customers onboard the required VPCs/VNets to the F5 XC CE sites to participate in the multi-cloud fabric. F5 XC then orchestrates cloud networking constructs to attract traffic from these VPCs (termed as spokes) and then enforce L3-L7 network services. Cloud orchestration includes things such as creating an AWS TGW, route table updates, setting up Azure VNet peering, configuring AWS Direct Connect or Azure ExpressRoute and related resources to establish private connectivity, and many more.
➡️ See the following series of articles to learn how to use the Infrastructure as Code utility Terraform to deploy and connect Distributed Cloud CEs in AWS, Azure, and Google Cloud:
Overview & AWS Deployment with F5 Distributed Cloud Multi-Cloud Networking
AWS to Azure via Layer 3 & Global Network with F5 Distributed Cloud Multi-Cloud Networking
Demo Guide: A step-by-step walkthrough using Terraform with Distributed Cloud Network Connect in AWS
MCN 1: Deploy an F5 XC CE Site
MCN 2: Cookie-cutter architecture - fully orchestrated: attach spoke VPCs/VNets seamlessly.
MCN 3: Sites deployed across the globe to establish a multi-cloud networking fabric.
2. Configure Network Segments in Distributed Cloud
Step 1: Configure Network Segments. These Network Segments will provide an end-to-end global isolated network.
MCN 4: Configure a global Network Segment
Step 2: Associate F5 XC CE Sites (incl. VLANs/interfaces for on-prem/edge sites) and onboarded VPCs/VNets with these network segments to create an isolated network within the multi-cloud networking fabric.
➡️ Steps 4, 6, and 10+ in the following article show how to connect the Distributed Cloud Global Network and use it to route traffic between different CE Sites.
Using F5 Distributed Cloud Network Connect to transit, route, & secure private cloud environments
3. Define Security Policies
Step 1: Define security policies such as forward proxy policies, network security policies, and traffic policers for your entire multi-cloud networking fabric with the power of labels to easily express the intent without complexities such as IP addresses.
MCN 5: Enhanced Firewall Policy with the power of labels
4. Integrate with 3rd Party NFV services such as Palo Alto Networks Firewall
Step 1: Seamlessly provision NFV services, such as BIG-IP AWAF or Palo Alto Networks Firewall, into any F5 XC CE site.
MCN 6: Orchestrate 3rd party firewalls like Palo Alto
Step 2: Use the power of labels to easily express the intent to steer traffic to these 3rd party NFV appliances.
MCN 7: Seamlessly steer traffic towards 3rd party NFV services such as a PAN firewall
➡️ Learn how to deploy a Palo Alto Firewall using Distributed Cloud and a Palo Alto Panorama server, and then redirect traffic to the firewall using Enhanced Firewall Policies.
Easily Deploy Your Palo Alto NGFW with F5 Distributed Cloud Services
5. Monitor & Troubleshoot your Network
NetOps and SecOps can collaborate using a single platform to monitor and troubleshoot networking issues across the multi-cloud fabric.
MCN 8: Powerful monitoring dashboards & troubleshooting tools for your entire secure multi-cloud network fabric.
Distributed Cloud App Connect
App Connect helps customers simply deliver applications across their multi-cloud networking fabric, including over the internet, without worrying about the underlying networking, via a distributed proxy architecture with full self-service capability and application isolation via namespaces.
1. Establish a Secure Multi-Cloud Network Fabric
Utilize Multi-Cloud Network Connect to deploy F5 XC CE sites in the environments that host your applications.
2. Discover Any App running Anywhere
Step 1: Simply discover all apps running across your environments by configuring service discoveries. Use DNS-based service discovery to discover legacy apps and K8s/Consul-based service discovery to discover modern apps.
MCN 9: Discover apps in any environment - sample showing apps discovered in a K8s cluster.
3. Deliver Any App Anywhere, incl. the Public Internet
Step 1: Configure a Load Balancer which will connect to apps (Origins) discovered in any environment and then deliver them (Advertise) to any environment.
MCN 10: Leverage distributed proxy architecture to connect an App running in Azure to AWS – without configuring ANY networking.
Step 2: Apps can be delivered (Advertised) directly to the internet using F5 XC's performant anycast global backbone, with DNS delegation and TLS cert management, by simply selecting VIP advertisement as 'Internet'.
MCN 11: Live traffic graph showing seamlessly connecting an App in Azure -> AWS and then delivering the App in AWS to the public internet.
➡️ Navigate each step of the process, from deploying CEs to using App Connect to connect app services locally and advertise the frontend to the Internet. The following collection of articles uses the Distributed Cloud Console to facilitate the deployment, and demonstrates how to automate the process using the Infrastructure as Code utility Terraform to orchestrate everything.
Use F5 Distributed Cloud to Connect Apps Running in Multiple Clusters and Sites
Azure & Layer 7 Networking with F5 Distributed Cloud Multi-Cloud Networking
Demo Guide: Using Terraform to connect backend-send services via Distributed Cloud App Connect in Azure
4. Secure your Apps
Step 1: Secure apps with industry-leading application security services such as WAF, bot defense, L7 DoS, API security, client-side defense, and many more with a single click.
MCN 12: One-click application security for all your applications – anywhere
➡️ The following demo guide shows how to deploy a web app globally and secure it.
Distributed Cloud WAAP + CDN Demo Guide
5. Monitor & Troubleshoot your Apps
SecOps, NetOps, and DevOps can collaborate using a single platform to monitor and troubleshoot application issues across the multi-cloud fabric.
MCN 13: Performance & Security dashboards for every application namespace - each namespace contains many load balancers.
MCN 14: Performance & Security dashboard for each Load Balancer
MCN 15: Various other security & performance tools to help maintain a healthy, secure, and performant multi-cloud application fabric.
Conclusion
Using the Network Connect and App Connect services in Distributed Cloud, it's easy to deploy, connect, and secure apps that run in multiple clouds. The F5 platform automatically handles the connectivity and routing, and allows customized access, enabling apps to be deployed globally or privately in just a few clicks.
Additional Resources
Distributed Cloud Network Connect
Distributed Cloud App Connect
Demo Guide: F5 XC MCN
F5 Distributed Cloud - Regional Decryption with Virtual Sites
In this article we discuss how the F5 Distributed Cloud can be configured to support regulatory demands for TLS termination of traffic to specific regions around the world. The article provides insight into the F5 Distributed Cloud global backbone and application delivery network (ADN). The article goes on to inspect how the F5 Distributed Cloud is able to achieve these custom topologies in a multi-tenant architecture while adhering to the "rules of the internet" for route summarization. Read on to learn about the flexibility of F5's SaaS platform providing application delivery and security solutions for your applications.
Multi-Cluster, Multi-Cloud Networking for K8S with F5 Distributed Cloud – Architecture Pattern
Applications are the center of attention for the majority of organizations and are an important business asset. Applications have evolved from simple to complex, with a multitude of integrated and distributed systems, and that complexity is a real threat to the business from both a security and an operations standpoint. Security, clouds, multi-cloud networking, and so on exist because of applications; those technologies would not exist without them. F5 Distributed Cloud is designed to address the business's most important asset: the application. It brings the F5 vision to reality for our customers - "Secure, Deliver and Optimize every app and API anywhere."
F5 Hybrid Security Architectures (Part 2 - F5's Distributed Cloud WAF and NGINX App Protect WAF)
Here in this example solution, we will be using Terraform to deploy an AWS Elastic Kubernetes Service cluster running the Arcadia Finance test web application, serviced by F5 NGINX Kubernetes Ingress Controller and protected by NGINX App Protect WAF. We will supplement this with F5 Distributed Cloud Web App and API Protection to provide complementary security at the edge. Everything will be tied together using GitHub Actions for CI/CD and Terraform Cloud to maintain state.
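The CI/CD wiring itself isn't shown in this summary, but the general pattern is a small GitHub Actions workflow that runs Terraform with state held in Terraform Cloud. The sketch below is illustrative only; the workflow name, secret name, and working directory are assumptions rather than part of the original solution:

# .github/workflows/deploy.yml (illustrative)
name: deploy
on:
  push:
    branches: [main]
jobs:
  terraform:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
        with:
          # Terraform Cloud API token, so init/apply use the remote workspace and its state
          cli_config_credentials_token: ${{ secrets.TF_API_TOKEN }}
      - name: Terraform init
        run: terraform init
        working-directory: ./terraform
      - name: Terraform apply
        run: terraform apply -auto-approve
        working-directory: ./terraform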
Use F5 Distributed Cloud to Connect Apps Running in Multiple Clusters and Sites
Introduction Modern apps are composed of many smaller components and can take advantage of today's agile computing landscape. One of the challenges IT Admins and Security Operations face is securely controlling access to all the components of distributed apps as the business grows, changes hands with mergers and acquisitions, or as contracts change. F5 Distributed Cloud (F5XC) makes it very easy to provide uniform access to distributed apps regardless of where the components live.
Solution Overview
Arcadia Finance is a distributed app with modules that run in multiple Kubernetes clusters and in multiple locations. To expedite development in a key part of the Arcadia Finance distributed app, the business has decided to outsource work on the Refer A Friend module. IT Ops must now relocate the Refer A Friend module to a separate location exclusive to the new contractor, where its team of developers has access to work on it. Because the app is modular, IT has shared a copy of the Refer A Friend container with the contractor, and now that it is up and running in the new site, traffic to the module needs to transition away from the one that had been developed in-house to the one now managed by the contractor.
Logical Topology
Distributed App Overview
The Refer A Friend endpoint is called by the Arcadia Finance frontend pod in Kubernetes (K8s) when a user of the service wants to invite a friend to join. The pod does this by making an HTTP request to the location "refer-a-friend.demo.internal/app3/". The endpoint "refer-a-friend.demo.internal" is registered to the K8s cluster with an F5XC HTTP Load Balancer policy, with its VIP advertised internally to specific sites, including the K8s cluster. F5XC uses the cluster's K8s API to register services and make them available anywhere within the customer tenant's configured global network. Three sites are used by the company that owns Arcadia Finance to deliver the distributed app. The core of the app lives in a K8s cluster in Azure, and the administration and monitoring of the app is in the customer's legacy site in AWS. To maintain security, the new contractor only has access to GCP, where they'll continue developing the Refer A Friend module. An F5XC global virtual network connects all three sites, and all three sites are in a site mesh group to streamline communication between the different app modules.
Steps to deploy
To reach the app externally, an HTTP Load Balancer policy is configured using an origin pool that connects to the K8s "frontend" service, and the origin pool uses a Kubernetes Site in F5XC to access the frontend service. A second HTTP Load Balancer policy is configured with its origin pool, a static IP that lives in Azure and is accessed via a registered Azure VNET Site. When the Refer A Friend module is needed, a pod in the K8s cluster connects to the Refer A Friend internal VIP advertised by the HTTP Load Balancer policy. This connection is then tunneled by F5XC to an endpoint where the module runs. With development of the Refer A Friend module turned over to the contractor, we only need to change the HTTP Load Balancer policy to use an origin pool located in the contractor's Cloud GCP VPC Site. The origin policy for the GCP-located module is nearly identical to the one used in Azure. When a user in the Arcadia app goes to refer a friend, the callout the app makes is now routed to the new location where it is managed and run by the new contractor.
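To see the same path the frontend pod takes, the callout can be reproduced by hand from inside the cluster; because the VIP is advertised internally, "refer-a-friend.demo.internal" resolves from within the cluster. A throwaway curl pod (the pod name and image below are arbitrary choices, not part of the original solution) is enough to verify which site answers before and after the origin pool change:

# Run from within the Arcadia Finance K8s cluster
kubectl run refer-test --rm -it --image=curlimages/curl --restart=Never -- \
  curl -sv http://refer-a-friend.demo.internal/app3/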
Demo Watch the following video for information about this solution and a walkthrough using the steps above in the F5 Distributed Cloud Console.
Conclusion Using F5 Distributed Cloud with modern-day distributed apps, it's almost too easy to route requests intended for a specific module to a new location, regardless of the provider and provider-specific requirements or the IP space the new module runs in. This is the true power of using F5 Distributed Cloud to glue together modern-day distributed apps.
What is Multi-Cloud Networking?
What is Multi-Cloud Networking?
Multi-cloud networking (MCN), as a technology, aims to provide easy network connectivity between cloud environments. For the purpose of our definition, we need to imagine our datacenter as a cloud. You can loosely define a cloud environment as 'anywhere you run workloads.' The concept is nebulous… literally. Clouds come in all shapes and sizes, from 'running on Pi' to AWS / GCP / Azure. MCN is to clouds as the Internet is to networks. AWS Direct Connect, Azure ExpressRoute, and GCP's Direct Link were early forms of MCN, aimed at joining portions of their own clouds together with customer datacenters. Insertion of transport virtual appliances in clouds has become another mechanism for MCN over time. Its strength is its flexibility and agility. One other notable MCN concept is the transport provider. Some circuit providers offer 'short-hop' transport to various cloud providers by routing. This option offers significant throughput versus the SDN router but lacks the agility. This is a popular option for hybrid cloud enterprises. With all of these options, you can make individual connections to each cloud, potentially in a hub-and-spoke fashion or full mesh.
Challenges With Multi-Cloud Networking
The top-most concern should be scalability, in every way. You need to be concerned about scale in routing, licensing, and metered cloud costs, not to mention the knowledge needed to understand all of the nuanced features of each cloud provider, and so many more things. All of this is operational overhead, which can be significant. Another serious challenge is IP addressing. The sheer volume of it is one thing. Anyone who works with modern applications today can tell you that it's hard to even find a workload sometimes, with how massive things can get. DNS is one possible option to assist, but you've got to account for all of the native cloud workloads, too, with their different DNS interfaces. Another common challenge is IP overlap. If you're curious what I mean, let's say your employer acquires a piece of software that lives in GCP, but you're already in AWS. You start going down the path of routing when you suddenly notice that both cloud environments are 10.1.x.x/16. This means localized routing all over the place, and we know how much router people love one-offs, am I right? The next challenge is one I've already hinted at: how many in-depth nerd knobs do you want to know, from how many security vendors? You've got to strategize to minimize this sort of potential sprawl and standardize on the vendors that can do the most for you.
Advantages of Multi-Cloud Networking
The greatest advantage is really multi-cloud transit. Understanding so many different and new technologies is a daunting task. With multi-cloud transit, data centers route through the same SDN routers as your cloud application flows, allowing you to see each cloud provider as a metered resource for app consumption. No need to worry about addressing, DNS, or routing for each environment. Another substantial benefit is the enablement of a shared security model. When you can route between these environments, you can also easily aggregate logs, integrate with SIEMs, and manage automated security policies with ease. Network fluidity is another substantial benefit. When your COO comes to you and says that you need to integrate a newly acquired network segment, you have no problems. One of the very cool benefits of SDN is the ability to route by software object.
When we think of routing in traditional networks, we want our packet to get to 10.10.10.4 by way of 192.168.3.1, but an SDN router sends our packet to 10.10.10.4 by way of f5xc_gcp_router4. This also means that your app developer can stamp out their app in AWS to send another packet to 10.10.10.4 by way of f5xc_aws_router16 or such. Overlap no longer matters when you route through an SDN core.
Conclusion
Giving your modern application networks the flexibility to grow on demand, to assimilate new application network segments in minutes instead of months... Ultimately, I really believe that MCN - when done right - like Chuck Mangione said (well, with a flugelhorn), 'Feels So Good.' The designs you can build with it are SO much more scalable and translate everything from physical data centers to clouds in a clean, easy-to-manage fashion.
Understanding Modern Application Architecture - Part 1
This is part 1 of a series. Here are the other parts:
Understanding Modern Application Architecture - Part 2
Understanding Modern Application Architecture - Part 3
Over the past decade, there has been a change taking place in how applications are built. As applications become more expansive in capabilities and more critical to how a business operates (or in many cases, the application is the business itself), a new style of architecture has allowed for increased scalability, portability, resiliency, and agility. To support the goals of a modern application, the surrounding infrastructure has had to evolve as well. Platforms like Kubernetes have played a big role in unlocking the potential of modern applications and are a new paradigm in themselves for how infrastructure is managed and served. To help our community transition the skillset they've built to deal with monolithic applications, we've put together a series of videos to drive home concepts around modern applications. This article highlights some of the details found within the video series. In these first three videos, we break down the definition of a Modern Application. One might think that by name only, a modern application is simply an application that is current. But we're actually speaking in comparison to a monolithic application. Monolithic applications are made up of a single piece, or just a few pieces. They are rigid in how they are deployed and fragile in their dependencies. Modern applications will instead incorporate microservices. Where a monolithic application might have all functions built into one broad encompassing service, microservices will break down the service into smaller functions that can be worked on separately. A modern application will also incorporate 4 main pillars. Scalability ensures that the application can handle the needs of a growing user base, both for surges as well as long-term growth. Portability ensures that the application can be moved from its underlying environment while still maintaining all of its functionality and management plane capabilities. Resiliency ensures that failures within the system go unnoticed or pose minimal disruption to users of the application. Agility ensures that the application can accommodate rapid changes, whether that be to code or to infrastructure. There are also 6 design principles of a modern application. Being agnostic will allow the application the freedom to run on any platform. Leveraging open source software where it makes sense can often allow you to move quickly with an application but later be able to adopt commercial versions of that software when full support is needed. Defining by code allows for more uniformity of configuration and a move away from rigid interfaces that require specialized knowledge. Automated CI/CD processes ensure the quick integration and deployment of code so that improvements are constantly happening while any failures are minimized and contained. Secure development ensures that application security is integrated into the development process and code is tested thoroughly before being deployed into production. Distributed storage and infrastructure ensures that applications are not bound by any physical limitations and components can be located where they make the most sense. These videos should help set the foundation for what a modern application is. The next videos in the series will start to define the fundamental technical components for the platforms that bring together a modern application.
Continued in Part 2
Deploy High-Availability and Latency-sensitive workloads with F5 Distributed Cloud
Introduction F5 Distributed Cloud Services delivers virtual Kubernetes (vK8s) capabilities to simplify deployment and management of distributed workloads across multiple clouds and regions. At the core of this solution is Distributed Cloud's multi-cloud networking service, enabling connectivity between locations. In Distributed Cloud, every location is identified as a site, and K8s clusters running in multiple sites can be managed by the platform. This greatly simplifies the deployment and networking of infrastructure and workloads. Centralized databases require significant compute and memory resources and need to be configured with High Availability (HA). Meanwhile, latency-sensitive workloads require placement as close to an end user's region as possible. Distributed Cloud handles each scenario with a consistent approach to the app and infrastructure configuration, using multi-cloud networking with advanced mesh, and with Layer 4 and/or Layer 7 load balancing. It also protects the full application ecosystem with consistent and robust security policies. While Regional Edge (RE) sites deliver many benefits of time-to-value and agility, there are many instances where customers may find it useful to deploy compute jobs in the location or region of their choice. This may be a cloud region, or a physical location in closer proximity to the other app services, or due to regulatory or other requirements such as lower latency. In addition, RE deployments have more constraints in terms of pre-configured options for memory and compute power; in cases where it is necessary to deploy a workload with demanding or specific requirements such as high memory or compute, a Customer Edge (CE) deployment may be a better fit. One of the most common scenarios for such a demanding workload is a database deployment in a High-Availability (HA) configuration. An example would be a PostgreSQL database deployed across several compute nodes running within a Kubernetes environment, which is a perfect fit for a CE deployment. We'll break down this specific example in the content that follows, with links to other resources useful in such an undertaking.
Deployment architecture
F5 Distributed Cloud Services provides a mechanism to easily deploy Kubernetes apps by using virtual Kubernetes (vK8s), which helps to distribute app services across a global network while making them available closer to users. You can easily combine RE and CE sites in one vK8s deployment to ease application management and securely communicate between regional deployments and backend applications. Configuration of our CE starts with the deployment of an F5 CE Site, which provides ways to easily connect and manage the multi-cloud infrastructure. The Distributed Cloud CE Site works with other CE and F5-provided RE Sites, which results in a robust distributed app infrastructure with full mesh connectivity, and ease of management as if it were a single K8s cluster. From an architecture standpoint, a centralized backend or database deployed in a CE Site provides an ideal platform that other sites can connect with. We can provision several nodes in a CE for a high-availability configuration of a PostgreSQL database cluster. The services within this cluster can then be exposed to other app services, such as deployments in RE sites, by way of a TCP load balancer.
Thus, the app services that consume database objects could reside close to the end user if they are deployed in F5 Distributed Cloud Regional Edge, resulting in the following optimized architecture:
Prepare environment for HA Load
F5 Distributed Cloud Services allows creating customer edge sites with worker nodes on a wide variety of cloud providers: AWS, Azure, GCP. The prerequisite is a Distributed Cloud CE Site or App Stack, and once deployed, you can expose the services created on these edge sites via a site mesh and any additional load balancers. A single App Stack edge site may support one or more virtual sites, which are similar to logical groupings of site resources. A single virtual site can be deployed across multiple CEs, thus creating a multi-cloud infrastructure. It's also possible to place several virtual sites into one CE, each with its own policy settings for more granular security and app service management. Several virtual sites may also share both the same and different CE sites as underlying resources. During the creation of sites and virtual sites, labels such as site name, site type, and others can be used to organize site resources. The diagram shows how vK8s clusters can be deployed across multiple CEs with virtual sites to control distributed cloud infrastructure. Note that this architecture shows four virtual clusters assigned to CE sites in different ways. In our example, we can start by creating an AWS VPC site with worker nodes, as described here. When the site is created, the label must be assigned. Use the ves.io/siteName label to name the site. Follow these instructions to configure the site. As soon as the edge site is created and the label is assigned, create a virtual site, as described here. The virtual site should be of the type CE, and the label must be ves.io/siteName with operation == and the name of the AWS VPC site. Note the virtual site name, as it will be required later. At this point, our edge site for the HA database deployment is ready. Now create the vK8s cluster. Select both virtual sites (one on CE and one on RE) by using the corresponding label. The all-res one will be used for the deployment of workloads on all REs. The environment for both RE and CE deployments is ready.
Deploy HA Postgres to CE
We will use Helm charts to deploy a PostgreSQL cluster configuration with the help of Bitnami, which provides ready-made Helm charts for HA databases (MongoDB, MariaDB, PostgreSQL, etc.) in the following repository: https://charts.bitnami.com/bitnami. In general, these Helm charts work very similarly, so the example used here can be applied to most other databases or services. An important key in the values for the database is clusterDomain. The value is constructed this way: {sitename}.{tenant_id}.tenant.local. Note that the site name here is the edge site name, not the virtual one. You can get this information from the site settings. Open the JSON settings of the site in the AWS VPC Site list. The tenant ID and site name will be shown as the tenant and name fields of the object.
vK8s supports only non-root containers, so these values must be specified:
containerSecurityContext:
  runAsNonRoot: true
To deploy the load to a predefined virtual site, specify:
commonAnnotations:
  ves.io/virtual-sites: "{namespace}/{virtual site name}"
When deployed, the HA database exposes its connection via a set of services. For PostgreSQL the service name might look like ha-postgres-postgresql-ha-postgresql, on port 5432.
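Putting those values together, a minimal values file and install command might look like the sketch below. The release name, namespace, kubeconfig path, and example site/tenant names are placeholders of my own, and where exactly the chart expects these keys can vary by chart version, so check the postgresql-ha chart's values before relying on this:

# values.yaml (illustrative)
clusterDomain: my-aws-site.my-tenant-id.tenant.local       # {sitename}.{tenant_id}.tenant.local
containerSecurityContext:
  runAsNonRoot: true                                        # vK8s only allows non-root containers
commonAnnotations:
  ves.io/virtual-sites: "ha-namespace/my-ce-virtual-site"   # {namespace}/{virtual site name}

helm repo add bitnami https://charts.bitnami.com/bitnami
helm install ha-postgres bitnami/postgresql-ha -f values.yaml \
  --namespace ha-namespace --kubeconfig ./vk8s-kubeconfig.yaml

The release name ha-postgres is what produces a service named ha-postgres-postgresql-ha-postgresql.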
To review the services list of the deployments, select the Services tab of the vK8s cluster. Even though the RE deployment and CE deployment are in one vK8s namespace, they are not accessible directly; services first need to be exposed as load balancers.
Expose CE service to RE deployment
To access the HA database deployed to the CE site, we will need to expose the database service via a TCP Load Balancer. The TCP Load Balancer is created by using an Origin Pool. To create the Origin Pool for a vK8s-deployed service, follow these instructions. As soon as the Origin Pool is ready, the TCP Load Balancer can be created, as described here. This load balancer needs to be accessible only from the RE network, or in other words, to be advertised there. Therefore, when creating the TCP Load Balancer, specify "Advertise Custom" for the "Where to Advertise the VIP" field. Click "Configure" and select "vK8s Service Network on RE" for the "Select Where to Advertise" field, as well as "Virtual Site Reference" and "ves-io-shared/ves-io-all-res" for the subsequent settings. Also, make sure to specify a domain name in the "Domain" field. This makes it possible to access the service via the TCP Load Balancer domain and port. If the domain is specified as re2ce.internal and the port is 5432, the connection to the DB can be made from the RE using these settings.
RE to CE Connectivity
At this point, the HA database workload is deployed to the CE environment. This workload implements a central data store, which takes advantage of the compute-intensive resources provided by the CE. While the CE is an ideal fit for compute-heavy operations, it is typically optimized for the single cloud region where the CE is deployed. This architecture can be complemented by a multi-region approach, where end users from regions other than the CE's reduce latency by adding regional edge services, moving some of the data and compute capability off of the CE and onto the RE closest to those users. Moving services with data access points to the edge raises questions of caching and update propagation. The ideal use cases for such services are not overly compute-heavy but rather time- and latency-sensitive workloads – those that require decision-making at the compute edge. These edge services still require secure connectivity back to the core, and in our case we can stand up a mock service in the Regional Edge to consume the data from the centralized Customer Edge and present it to end users. The NGINX reverse-proxy server is a handy solution to implement data access decisions on the edge. NGINX has several modules allowing access to backend systems via HTTP. PostgreSQL does not provide such an adapter natively, but NGINX has a module just for that: NGINX OpenResty can be compiled with the Postgres HTTP module, allowing GET/POST requests to access and modify data.
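Before wiring NGINX to the database, the path from an RE workload to the CE can be sanity-checked with a plain psql client against the TCP Load Balancer's advertised domain and port. The credentials below are the same sample values used in the NGINX configuration that follows:

# Run from a workload on the RE, where the re2ce.internal VIP is advertised
psql "host=re2ce.internal port=5432 dbname=haservicesdb user=haservices password=haservicespass" -c "SELECT 1;"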
To enable access to the Postgres database, the upstream block is used this way:
upstream database {
    postgres_server re2ce.internal dbname=haservicesdb user=haservices password=haservicespass;
}
As soon as the upstream is set up, the queries can be performed:
location /data {
    postgres_pass database;
    postgres_query "SELECT * FROM articles";
}
Unfortunately, postgres_query and postgres_pass do not support caching, so an additional proxy_pass needs to be configured:
location / {
    rds_json on;
    proxy_buffering on;
    proxy_cache srv;
    proxy_ignore_headers Cache-Control;
    proxy_cache_methods GET HEAD POST;
    proxy_cache_valid 200 302 30s;
    proxy_cache_valid 404 10s;
    proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504 http_429;
    add_header X-Cache-Status $upstream_cache_status;
    proxy_pass http://localhost:8080/data;
}
Note the additional rds_json directive above; it's used to convert the response from binary to JSON. Now that the data is cached on the Regional Edge, when the central server is unavailable or inaccessible, the cached response is returned. This is an ideal situation for how a distributed multi-region app may be designed and connected, where the deployment on the RE creates a service accessible to end users via a Load Balancer.
Enhanced Security Posture with Distributed Cloud
Of course, we're using NGINX with the PostgreSQL module for illustration purposes only; exposing databases this way in production is not secure. However, this gives us an opportunity to think through how publicly accessible service endpoints can potentially be open to attacks. A Web App Firewall (WAF) is provided as part of the Web App & API Protection (WAAP) set of services within F5 Distributed Cloud and can secure all of the services exposed in our architecture with a consistent set of protections and controls. For example, with just a few clicks, we can protect the Load Balancer that exposes an external web port to end users on the RE using WAF and bot protection services.
Monitoring & Visibility
All of the networking, performance, and security data and analytics are readily available to users within F5 Distributed Cloud dashboards. For our example, this includes a list of all connections from the RE to the CE via the TCP load balancer, detailed for each RE site. Another useful data point is a chart and detail of HTTP load balancer requests.
Conclusion
In summary, the success of a distributed cloud architecture is dependent on placing the right types of workloads on the right cloud infrastructure. F5 Distributed Cloud provides and securely connects various types of distributed app-ready infrastructure, such as the Customer Edge and Regional Edge used in our example. A compute-heavy centralized database workload that requires high availability can take advantage of vK8s for ease of deployment and configuration with Helm charts, scalability, and control. The CE workload can then be exposed via Load Balancers to other services deployed in other clouds or regions, such as the Regional Edge service we utilized here. All of the distributed cloud infrastructure, networking, security, and insights are available in one place with F5 Distributed Cloud services.
Additional Material
Now that you've seen how to build our solution, try it out for yourself!
Product Simulator: A guided simulation in a sandbox environment covering each step in this solution
Demo Guide: A comprehensive package, including a step-by-step guide and the images needed to walk through this solution every step of the way in your own environment. This includes the scripts needed to automate a deployment, including the images that support the sample application.
Links
GitHub: Demo Guide - HA DB with CE and RE
Simulator: High Availability Workloads
Product Page: Distributed Cloud Multi-Cloud Transit
Product Page: Distributed Cloud Web App & API Protection (WAAP)
Tech Doc: Deploying Distributed Cloud in AWS VPC's
Tech Doc: Virtual Kubernetes (vK8s) on Distributed Cloud