connectivity
Seamless App Connectivity with F5 and Nutanix Cloud Services
Modern enterprise networks face the challenge of connecting diverse cloud services and partner ecosystems, expanding routing domains and distributed application connections. As network complexity grows, the need to onboard applications quickly becomes paramount despite intricate networking operations. Automation tools like Ansible help manage network and security devices, but connecting distributed applications across newer platforms such as public clouds and Kubernetes remains difficult. This article explores F5 Distributed Cloud Services (XC), which enables intent-based application-to-application connectivity across varied infrastructures.

Expanding and increasingly sophisticated enterprise networks

Traditional data center networks typically rely on VLAN-isolated architectures with firewall ACLs for intra-network security. With digital transformation and cloud migrations, new systems must interoperate with existing networks while moving from VLANs to constructs such as the labels and namespaces used by Kubernetes and Red Hat OpenShift. Future trends point to increased edge computing adoption, further complicating network infrastructures. Network infrastructure is becoming more complex due to factors such as increased NAT settings, routing adjustments, added ACLs, and the need to remove duplicate IP addresses when integrating new and existing networks. This growing complexity drives up the costs of design, implementation, and testing, and amplifies the impact of network changes.

A typical network uses firewalls to control access between VLANs or isolated networks based on security policies. Access control lists (ACLs) are organized by source and destination applications or clients. The order of ACLs matters because shorter prefixes can override longer ones. In some cases, ACLs exceed 10,000 lines, making it difficult to identify which rules affect application connectivity. As a result, ACLs for discontinued applications often remain in the firewall configuration, posing potential security risks. Refactoring them, however, requires significant effort and testing.

F5 XC addresses the ACL problem by managing all application connectivity across on-premises data centers and clouds. Its load balancer connects applications through VLANs, VPCs, and other networks. Each load balancer has its own application control and associated ACLs, so changes to one load balancer don't affect others. When an application is discontinued, its load balancer is deleted, preventing leftover configurations and reducing security risks.

SNAT/DNAT is commonly used to resolve IP conflicts, but it reduces observability because every NAT table between the client and the application must be referenced. ACL implementations also vary across router/firewall vendors: some apply filters before NAT, others after. Logs may follow this behavior, making IP address normalization difficult. Pre-NAT IP addresses must be managed across the entire network, adding complexity and increasing costs.

F5 XC simplifies distributed load balancing by terminating client sessions and forwarding them to endpoints. Clients and endpoints only need IP reachability to the CE interface. The load balancer can use a VIP for client access, but global management isn't required because the client-side network defines it. This provides clear source/destination data, making application access easy to track without managing NAT IP addresses.
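Because every application gets its own load balancer object, the per-app lifecycle can be scripted instead of hand-editing shared ACLs. Below is a minimal sketch using the F5 XC REST API; the tenant URL, token, namespace, pool name, and request schema are placeholders and simplified assumptions based on the public (Volterra-derived) API, not a verified contract, so check the current API reference before use.

```python
import requests

# --- Placeholders (assumptions): adjust for your tenant and namespace ---
TENANT_API = "https://<tenant>.console.ves.volterra.io/api"
API_TOKEN = "<api-token>"
NAMESPACE = "app-ns"

HEADERS = {
    "Authorization": f"APIToken {API_TOKEN}",
    "Content-Type": "application/json",
}

# One load balancer per application: its routing rules and policies are
# scoped to this one object, so retiring the app later means deleting this
# object instead of leaving stale ACL lines behind in a shared firewall.
lb_payload = {
    "metadata": {"name": "ent-app1-lb", "namespace": NAMESPACE},
    "spec": {
        "domains": ["ent-app1.com"],
        # Hypothetical origin pool created beforehand; it points at the
        # application endpoint (e.g., 192.168.1.1 on the on-prem VLAN 100).
        "default_route_pools": [
            {"pool": {"name": "onprem-vlan100-pool", "namespace": NAMESPACE}}
        ],
    },
}

resp = requests.post(
    f"{TENANT_API}/config/namespaces/{NAMESPACE}/http_loadbalancers",
    headers=HEADERS,
    json=lb_payload,
    timeout=30,
)
resp.raise_for_status()
print("created:", resp.json().get("metadata", {}).get("name"))  # response shape assumed
```

The same object can of course be created from the XC console; the API path simply makes the per-application lifecycle scriptable and auditable.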
F5 Distributed Cloud MCN/Network as a Service

F5 XC offers an all-in-one software package for networking and security, managed through a single console with intent-based configuration. This eliminates the need for separate appliance setups. Furthermore, firewall, load balancer, and WAF changes, as well as function updates, are applied via SDN without causing downtime. The F5 XC Customer Edge (CE) works across physical appliances, VMs, containers, and clouds, creating overlay tunnels between CEs. Acting as an application gateway, it isolates routing domains and terminates L3 routing, while maintaining best practices for cloud, Kubernetes, and on-prem networks. This simplifies physical network configurations, speeding up provisioning and reducing costs.

How the CE works in an enterprise network

The design is straightforward: CEs are deployed in the cloud and in the user's data center. In this example, the data center CE connects four networks:
- Underlay network for CE-to-CE connectivity
- Cloud application network (VPC/VNET)
- Branch office network
- Data center application network

The data center CE connects the branch office and application networks to their respective VRFs, while the cloud CE links the cloud network to a VRF. The application runs on VLAN 100 (192.168.1.1) in the data center, while clients are in the branch office. The load balancer configuration is as follows:
- Load balancer: Branch office VRF
- Domain: ent-app1.com
- Endpoint (origin pool): Onprem-VLAN100 VRF, 192.168.1.1

Clients access "ent-app1.com," which resolves to a VIP or CE interface address. The CE identifies the application's location, and since the endpoint is in the same CE within the Onprem-VLAN100 VRF, it sets the source IP as the CE and forwards traffic to 192.168.1.1.

Next, the application is migrated to the cloud. The CE facilitates this transition from on-premises to the cloud simply by changing the endpoint. The load balancer configuration is as follows:
- Load balancer: Branch office VRF
- Domain: ent-app1.com
- Endpoint (origin pool): AWS-VPC VRF, 10.0.0.1

When traffic arrives at the CE from a client in the branch office, the data center CE forwards the traffic to the CE on AWS through an overlay tunnel. The only configuration change required is updating the endpoint via the console; there is no need to modify the firewall, switch, or router in the underlay network. A hedged API sketch of this endpoint swap follows the Kubernetes example below.

Kubernetes networking is unique because it abstracts IP addresses. Applications use FQDNs instead of IP addresses and are isolated by namespace or label. When accessing the external network, Kubernetes uses SNAT to map the source IP address to that of the Kubernetes node, masking the origin of the traffic. A CE can run as a pod within a Kubernetes namespace. Combining a CE on Kubernetes with a CE on-premises offers flexible external access to Kubernetes applications without complex configurations like Multus. F5 XC simplifies connectivity by providing a common configuration for linking Kubernetes to on-premises and cloud environments via a load balancer. The load balancer configuration is as follows:
- Load balancer: Kubernetes
- Domain: ent-app1.com
- Endpoint (origin pool): Onprem-VLAN100 VRF, 192.168.1.1 / AWS-VPC VRF, 10.0.0.1

Related blog: https://community.f5.com/kb/technicalarticles/multi-cluster-multi-cloud-networking-for-k8s-with-f5-distributed-cloud-%E2%80%93-archite/307125
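Consistent with the migration example above, the cutover reduces to editing one origin pool object, whether in the console or via the API. This sketch reuses the placeholder tenant and token from the previous example; the origin_servers schema shown is an assumption used to illustrate the shape of the change, not a verified contract.

```python
import requests

TENANT_API = "https://<tenant>.console.ves.volterra.io/api"  # placeholder
HEADERS = {"Authorization": "APIToken <api-token>"}          # placeholder
NAMESPACE = "app-ns"
POOL = "ent-app1-pool"  # hypothetical origin pool behind ent-app1.com

url = f"{TENANT_API}/config/namespaces/{NAMESPACE}/origin_pools/{POOL}"

# Fetch the current pool, swap the single origin server from the on-prem
# address (192.168.1.1, Onprem-VLAN100 VRF) to the AWS address (10.0.0.1,
# AWS-VPC VRF), and write it back. Nothing in the underlay network changes.
pool = requests.get(url, headers=HEADERS, timeout=30).json()
pool["spec"]["origin_servers"] = [  # schema assumed for illustration
    {"private_ip": {"ip": "10.0.0.1",
                    "site_locator": {"site": {"name": "aws-ce"}}}}
]
resp = requests.put(url, headers=HEADERS, json=pool, timeout=30)
resp.raise_for_status()
```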
Load balancer observability

The load balancer in F5 XC offers advanced observability features, including end-to-end network latency, L7 request logs, and API visualization. It automatically aggregates data from various logs and metrics (see below). You can also generate API graphs from the aggregated data, which lets users identify which APIs are in use and create policies to block unnecessary ones.

Demo movie

The video demonstration shows how F5 XC connects applications across different environments, including VM, Kubernetes, on-premises, and cloud. The CE delivers both L3 and L7 connectivity across these environments.

Conclusion

F5 Distributed Cloud Services (F5 XC) simplifies enterprise networking by integrating applications seamlessly across VMs, Kubernetes, on-premises, and cloud environments. By minimizing physical network modifications and leveraging best practices from diverse infrastructure systems, F5 XC enables efficient, scalable, and secure application connectivity.

Key Benefits:
- Simplified physical network design
- Network as a Service for seamless application communication and visibility
- Integration with new networking paradigms like Kubernetes and SASE

Looking Ahead

As new networking systems emerge, F5 XC remains adaptable, leveraging APIs for self-service network configurations and enabling future-ready enterprise networks.

Site-to-Site Connectivity in F5 Distributed Cloud Network Connect – Reference Architecture
Purpose

This guide describes the reference architecture for deploying F5 Distributed Cloud's (XC) Multicloud Network Connect service to interconnect workloads over private connectivity or the internet. It enumerates the options available to an F5 Distributed Cloud user for configuring site-to-site connectivity using F5 XC Customer Edges (CEs) and explains them in detail to help the user make informed decisions and choose the correct topology for their use case.

Audience

This guide is for technical readers, including network admins and architects who want to understand how the Multicloud Network Connect service works and which network topology they should use to interconnect their workloads across data centers, branches, and public clouds. It assumes the reader is familiar with networking concepts like routing, default gateway configurations, IPSec and SSL encryption, and private connectivity solutions provided by public clouds, such as AWS Direct Connect and Azure ExpressRoute.

Introduction

Ensuring workload reachability across data centers, branches, and/or public clouds can be challenging and operationally complex if done the traditional way. Network teams must design, configure, and maintain multiple networking and security appliances and need expertise across many vendor solutions providing functionality like NAT, SD-WAN, VPN, firewalls, access control lists (ACLs), etc. This gets even more nuanced when connecting two networks with overlapping IP address CIDRs, which is common in hybrid cloud deployments and during mergers and acquisitions. F5 XC Multicloud Network Connect provides a simple way to configure these interconnections and manage access and security policies across multiple heterogeneous environments from a single console. It abstracts the complexities by taking the user's intent and automating the underlying networking and security, while providing the flexibility to connect over a private network or the public internet.

Customer Edge as a Gateway

To provide site-to-site reachability and ensure enforcement of security policies, traffic must flow through the CE site. For this, the CE's Site Local Inside (SLI) IP address must be used as the default gateway or as the next hop to reach the networks on other sites (a minimal host-route sketch follows below).

Figure: Using CE as the gateway
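As a concrete, hedged illustration of using the SLI as the next hop, the sketch below adds a route on a Linux workload. The SLI address and remote CIDR are placeholders, and in production this would more typically be configured on the network's routers or via DHCP rather than per host.

```python
import subprocess

CE_SLI = "10.1.0.1"          # placeholder: the CE's Site Local Inside address
REMOTE_CIDR = "10.2.0.0/16"  # placeholder: segment network on another site

# Point traffic for the remote site's network at the CE, which acts as the
# gateway. Equivalent to: ip route add 10.2.0.0/16 via 10.1.0.1
# (requires root privileges on the workload).
subprocess.run(["ip", "route", "add", REMOTE_CIDR, "via", CE_SLI], check=True)
```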
Physical vs. Logical Connectivity

Two or more CE sites can be physically connected in multiple ways, but this does not automatically make networks on different sites L3 routable to each other. For that, the user must associate networks with segments. The physical connection dictates the path packets take from one site to the other, while the logical connection connects the VLANs (on-prem) and VPCs/VNETs (in the public cloud) using a network overlay and provides segmentation.

Note: Multicloud Network Connect provides Layer 3 connectivity between networks. Configuring L3 connectivity is not required for app-to-app connectivity across sites; that is done using the distributed load balancer feature under App Connect.

Physical Transit Options

Over F5 Global Network Backbone

A CE site is always connected to the two nearest Regional Edges (REs) for redundancy, using IPSec or SSL tunnels. The REs across different regions are connected via a global, private F5 backbone network that provides high-speed, private transit across the regions where Regional Edges are located. Users can use the CE-RE tunnels over the internet to securely connect to this backbone locally and leverage the private connectivity to connect across regions.

Figure: Default CE-CE connectivity over REs and the F5 network backbone

Pros:
- No need to manage underlay networking when using CE-RE tunnels over the internet.
- High-speed private transit between geographically distant regions, at no extra cost.
- End-to-end encryption of traffic between sites.
- Option to have end-to-end private connectivity.

Cons:
- Throughput is limited to the bandwidth of the two tunnels per site.

When to use:
- The easiest way to connect when private connectivity is not available between data centers or to cloud VPCs in hybrid cloud scenarios.
- IPSec/SSL tunnels over the internet are acceptable, but you do not want to manage multiple VPN tunnels or SD-WAN devices.
- Connecting geographically distant sites where you need better end-to-end latency and reliability than the public internet.

Direct Site-to-Site Over Internet

If security regulations prevent the use of F5's private backbone network, users can connect CE sites directly to each other using IPSec tunnels (SSL encryption is not supported in this case). This is done using the Site Mesh Group (SMG) feature.

Note: Even when CEs are part of a Site Mesh Group, they still connect to the REs using encrypted tunnels, as this is required for control plane connectivity. When sites are in an SMG, data path traffic flows through the CE-CE tunnels as the preferred path. If this link fails, traffic is routed to the REs and over the F5 network backbone to the other site as a backup.

The number of tunnels on each link between two CE sites depends on the number of control nodes they have. Two single-node sites are connected using only one tunnel. If either site has three control nodes, it forms three tunnels to the other site (see the sketch after this section).

Figure: Number of tunnels between sites in an SMG

Pros:
- Sites are directly connected, so the data path is not dependent on the RE.
- Easy connectivity over the internet.
- Traffic is always encrypted in transit.
- Eliminates the need to manually configure cross-site VPNs or SD-WAN.

Cons:
- Encryption and decryption require more CPU resources on CE nodes as performance requirements increase.

Note: L3 Mode Enhanced Performance can be enabled on the sites to get more performance from the available CPU and memory resources, but only when the site is used solely for L3 connectivity, as it reduces the resources available for L7 features.
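The tunnel-count rule above can be captured in a few lines. This is a minimal sketch that encodes only the two cases the article documents; any other control-node combination is deliberately rejected rather than guessed.

```python
def tunnels_between(nodes_a: int, nodes_b: int) -> int:
    """Tunnels on the link between two CE sites in a Site Mesh Group.

    Encodes only the documented cases: two single-node sites form one
    tunnel; if either site has three control nodes, three tunnels form.
    """
    if nodes_a == 1 and nodes_b == 1:
        return 1
    if 3 in (nodes_a, nodes_b):
        return 3
    raise ValueError("tunnel count undocumented for this combination")

assert tunnels_between(1, 1) == 1
assert tunnels_between(1, 3) == 3
assert tunnels_between(3, 3) == 3
```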
Direct Site-to-Site Over Customer's Network Backbone

CE-CE tunnels can also be configured over a private network if end-to-end private connectivity is required. Customers can leverage their existing private connectivity between data centers, provisioned via private NaaS providers like Equinix. The sites can either be connected directly using an SMG, where the connections are encrypted, or using the DC Cluster Group (DCG) feature, which connects the sites using IP-in-IP tunnels (no encryption). As with SMG, the number of tunnels on each link depends on the number of control nodes: one tunnel between single-node sites, three if either site has three control nodes. A DCG gives better performance, while an SMG is more secure. Unlike SMG, DCG does not fall back to routing traffic over the RE-CE tunnels if private connectivity fails.

Pros:
- Data path confined within the customer's private perimeter.
- Sites are directly connected, so the data path is not dependent on the RE.
- Option to choose encrypted or unencrypted transit.
- Simplifies the ACLs on the physical network and allows users to manage segmentation from the F5 XC console.

Cons:
- The customer needs to manage the private connectivity across data centers or from the data center to the public cloud.

Direct Site-to-Site Connectivity Topologies

For direct site-to-site connectivity, sites can be grouped into a Site Mesh Group (SMG) or a DC Cluster Group (DCG). These allow sites to connect in either Full Mesh or Hub-Spoke topologies, as described below (a sketch comparing the link counts of the two topologies follows this section).

Full Mesh Site Mesh Group

All sites that are part of a full-mesh SMG are connected to every other site using IPSec tunnels, forming a full-mesh topology.

Figure: Sites in a full mesh Site Mesh Group

When to use:
- In hybrid cloud use cases.
- When all sites have equal functionality (e.g., connecting workloads across data centers).
- When high fault tolerance is required for site-to-site connectivity (not dependent on any one site for transit).

Hub-Spoke Site Mesh Group

This mode groups sites into a hub SMG and a spoke SMG. The sites within the hub SMG are connected in a full mesh topology. The sites within the spoke SMG are connected only to the sites in the hub SMG, not to other sites in the spoke SMG.

Figure: Sites in a Hub-Spoke Site Mesh Group

Some characteristics of a Hub-Spoke SMG:
- The hub can have multiple sites for redundancy, but in most customer use cases it has one site.
- A hub site can be a spoke site for a different Hub-Spoke SMG.
- A CE site can be a spoke for multiple hubs.

When to use:
- For data center/cloud to edge/branch connectivity use cases.

Full Mesh DC Cluster Group

A DC Cluster Group only supports the full mesh topology. Every site in a DCG is connected to every other site using IP-in-IP tunnels. Traffic is not encrypted in transit, but DCG is only supported when sites can be connected over a private network.

Figure: Sites in a DC Cluster Group

When to use:
- Connecting VLANs in one data center to VLANs in other data centers or to public cloud VPCs/VNETs, when there is private connectivity between them.
- When security regulations allow unencrypted traffic over the private transit.
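To make the scaling difference between the topologies concrete, the sketch below counts the site-to-site links each topology creates (each link then carries one or three tunnels, per the rule earlier). The formulas are plain graph arithmetic, not an F5-published sizing rule.

```python
def full_mesh_links(sites: int) -> int:
    # Every site pairs with every other site: n * (n - 1) / 2 links.
    return sites * (sites - 1) // 2

def hub_spoke_links(hubs: int, spokes: int) -> int:
    # Hubs form a full mesh among themselves; each spoke links to every
    # hub but never to another spoke.
    return full_mesh_links(hubs) + hubs * spokes

# 10 sites fully meshed need 45 links; as 1 hub + 9 spokes, only 9.
print(full_mesh_links(10))    # 45
print(hub_spoke_links(1, 9))  # 9
```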
Offline Survivability

CE sites require control plane connectivity to the REs and the Global Controller (GC) to exchange routes, renew certificates, and decrypt blindfolded secrets. To enable business continuity during an upstream outage, the Offline Survivability feature can be enabled on all sites in a Full Mesh SMG or DCG. The feature is not supported for Hub-Spoke SMG. With this feature enabled, the sites can continue normal operations for 7 days without connecting to the REs and the GC.

With offline survivability enabled on a CE site, the local control plane becomes the certificate authority during a connectivity loss, and decrypted secrets and certificates are cached locally on the CE. For this reason the feature is not turned on by default, letting the user decide whether enabling it aligns with the company's security regulations.

Logical Connectivity

Once the physical transit is configured and the connection topology is chosen, workloads on networks across the sites can be connected using segments, or the applications on one site can be delivered to any other site by configuring a distributed load balancer.

Connect Networks - Segmentation

Users can create segments and add data center VLANs or public cloud VPCs/VNETs to them. All networks added to a segment become part of a common routing domain, and all workloads on these networks can reach each other using the CE sites as gateways. Users must ensure the networks added to a segment do not overlap.

Segments are isolated Layer 3 domains, so workloads on one segment cannot access workloads on other segments by default. However, users can configure Segment Connectors to allow traffic from one segment to another.

Figure: Segmentation

Connect Applications - Distributed Load Balancing

Instead of allowing the workloads to route directly to one another, the user can configure a distributed load balancer to publish a service from one site to other sites. This is done by adding the service endpoints to an origin pool of a load balancer object and advertising it using a custom VIP to one or more sites, which allows clients to connect to the service as a local resource. Using distributed load balancing, an LB admin can configure policies to expose only the required APIs of an application, and only to the required sites. This reduces the attack surface and increases app security.

Figure: Distributed Load Balancing

For example, in the figure above, the on-prem database is advertised to client apps on AWS and Azure, which can access the DB using their local VIP, while the on-prem application is advertised to the client on Azure only.

Decision Flow to Choose Physical Connectivity Options

Once you understand the various physical and logical connectivity options, the chart below can help you make an informed decision based on your connectivity requirements, the infrastructure/platform available, and your security restrictions (a sketch encoding this decision flow follows). Once the connectivity is decided, you can choose to connect the networks or only publish apps to the sites where they are required, based on the application requirements.
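As a companion to the decision chart, the helper below encodes this article's guidance as code. It is an interpretation of the sections above, not an official F5 decision tree, and the parameter names are ours.

```python
def pick_transit(private_connectivity: bool,
                 f5_backbone_allowed: bool,
                 encryption_required: bool) -> str:
    """Map this article's guidance to a physical transit option."""
    if private_connectivity:
        # Private links exist end to end: SMG keeps transit encrypted,
        # while DCG (IP-in-IP, unencrypted) trades that for performance.
        return ("Site Mesh Group over private network"
                if encryption_required else "DC Cluster Group")
    if f5_backbone_allowed:
        # Default posture: CE-RE tunnels into the F5 global backbone.
        return "CE-RE tunnels over the F5 global network backbone"
    # Regulations forbid the F5 backbone for data path: mesh CEs directly.
    return "Site Mesh Group over the internet"

print(pick_transit(private_connectivity=False,
                   f5_backbone_allowed=True,
                   encryption_required=True))
```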
Related Articles
- F5 XC Site
- Site Mesh Group
- DC Cluster Group
- Segmentation
- Distributed Load Balancer

Port oversubscription in BIG-IP models

Is there any kind of port oversubscription as there is with other manufacturers, such as Cisco? I can't find anything about it, nor how to connect the different physical ports to your business switches to avoid oversubscription and avoid saturating the backplane, ASIC, or anything that shares bandwidth, memory, or CPU. Is there any "connectivity best practice" by model? Or documentation about how the physical ports are distributed internally across ASICs? Thank you
monitor fails on adfs proxy

I have a customer that is currently upgrading their ADFS proxies. Their plan is for the servers not to be part of the domain, but the BIG-IP then fails to reach them (the health monitor fails, and ping also fails). If the upgraded server is rejoined to the domain, the health monitor works just fine. Is this by design, or are there ways to make the communication work even if the servers aren't part of the domain?
Lost in Translation...in Italy

I've been travelling recently, to places and fields that have limited to no mobile connectivity, and this can be a challenge when a challenge arises. Immediately following Mobile World Congress in Barcelona earlier this month, my family embarked on a multi-week European vacation. After spending a couple more days in Spain, we jumped on the rail to Paris for a couple days and then on to Rome for 10 days. The Eiffel Tower, along with the 'I see London, I see France, I see Daddy's....' request, was our daughter's, and Italy was something we've wanted to do for a while.

During the train ride - which was fantastic - we saw vineyards, castles, the Alps, old bunkers and tons of scenery you never get on an airplane. It's almost like eavesdropping on these remote lives as you pass by at 187 mph while they hang their clothes to dry or tend to their fields. Yes, mobile connectivity was very spotty, but it was not a big deal since we were enjoying the views and had no reason to 'connect.' I even turned the phone off at various times just for the peace.

In major cities like Paris and Rome, and if you have roaming of course, you're able to connect to one of the available 3G mobile networks within that country. While not LTE, you get decent connectivity and can accomplish many of the mobile tasks that have become commonplace - email, maps, navigation, browsing and so forth. Incidentally, if you want to learn how LTE roaming works, check out this video we did at MWC15. It is when you venture out, Griswold style, that you can get into trouble.

While in Rome, we visited many of the typical tourist destinations like the Coliseum, Pantheon, Vatican, Spanish Steps, Trevi Fountain and others. It seemed like our entire trip was going exactly as planned and we were having a wonderful time. That is, until the day we left Rome to return to Barcelona for our flight home. The real adventure was about to begin - like the last 20 minutes of a movie, when you think everything is wrapped up and that last big crisis hits.

We bought, what we thought, were rail tickets from Rome back to Barcelona. They were less expensive than our inbound rail, which for some reason didn't fire off the warning bells, but we thought that since it was direct, it should be fine. We get on the train and have a nice semi-private area to stretch out and relax on the trip back. As we start the journey, everything seems great - the scenery, the company - and we packed some good snacks for the ride. The conductor came through, verified our tickets and we felt like we could unwind.

After a little while, we can see the Mediterranean Sea, but it is on the wrong side of the train. A little concerned, I asked a uniformed staff member if this was the train to Barcelona and was assured that it was. OK, maybe we go south for a few stops but turn around and head north. Seemed reasonable. We arrive at the Pompei station and get to see Mt. Vesuvius, but at this point we start to get concerned. I find the rail staff a second time and again ask if this is the train to Barcelona, even adding that we're going south and wondering if it turns and goes north (up and around, etc.) at some point. Again I'm told that we are going to Barcelona.

More time passes, and as we get further south, connectivity gets spotty. As it goes in and out, I search, 'does the train from Rome to Barcelona go under the Mediterranean Sea?' There is the Chunnel under the English Channel, so maybe this does the same thing? Nope. Now I'm panicked. I find yet a third staff member and ask where we are going.
It is at that point I learn that we are not headed for Barcelona, Spain, but Barcellona (Pozzo di Gotto), Italy. We're supposed to be on our return flight home from Spain in less than 36 hours, and we're heading for Sicily. He says there is an airport in Catania and we might be able to get a flight to Barcelona. But with no connectivity, we can't see what is available, and we didn't want to risk arriving with no flights. I ask when the next train back to Rome is...at least get back there. We're told to get off at the next stop, San Giovanni, and we might be able to catch the overnight.

Frantically, we grab our stuff, jump off and look around. Pretty grim. After a couple of ups and downs of stairs with our bags, we finally make it to the ticket window. I explain that we want to go to Barcelona, and the agent tells us we just missed the train. I pull out a mobile translator and again attempt to communicate. I get frustrated, the agent gets frustrated, and we're stuck. I grab a piece of paper and write SPAIN on it, and his eyes finally light up, but there is no path from where we are to Barcelona. It's 18:00 and we have less than 24 hours to reach Spain.

While a crowd gathers behind us, we ask about a train to Milan. Luckily, there is one and it leaves in 30 minutes. We'll take it! It's an overnight, and we don't arrive in Milan until 11am the next day - down to 20 hours before our plane leaves for LAX. Hopefully we can get on a Milan-to-Barcelona flight, but without connectivity, there's no way of knowing. We get on the train and I start crying - not so much because we're lost in a foreign country, but from the relief that we're finally going in the right direction. Since we can't determine our next steps, the only thing to do is attempt to rest in this little 3-bunk room. The conductors on this route helped as much as they could and told us that this happens to people almost every other month. We're not alone, but we're nowhere near a solution.

We finally get to Milan and immediately jump in a cab for the 45-minute/90-euro ride to the airport. We also have some 3G connectivity and can at least see there are a few flights to Barcelona, but we also rely on the cab driver to point us in the right direction. In addition, the connectivity is so spotty on the way that trying to book a flight becomes impossible. At the airport we find the ticket window and buy some of the last remaining tickets for the last flight to Barcelona that day. After not bathing or sleeping for two days, I still felt relieved. We just might make it. As an eerie aside, just the day before, our flight path went over the same location as the crashed Germanwings plane. Our jaws dropped when we learned of the tragedy.

We land in Barcelona with 12 hours to spare. Get to our hotel, eat, shower and collapse for a few hours. Not taking any chances, we head out early, make our flight, and I am grateful to finally be home to tell this story. I learned a lot about geography, mobile connectivity, communication, security, and about myself. We've become so dependent on connectivity, and it seems that we've become one with our mobile devices, but when there is no signal and they can't help in a crisis, paper, pencils and people still matter. While harrowing, it was an amazing adventure and a fitting end to our wonderful trip.

ps