Extending F5 ADSP: Multi-Tailnet Egress

Tailscale tailnets make private networking simple, secure, and efficient. They’re quick to establish, easy to operate, and provide strong identity and network-level protection through zero-trust WireGuard mesh networking. However, while tailnets are secure, applications inside these environments still need enterprise-grade application security, especially when exposed beyond the mesh.

This is where F5 Distributed Cloud (XC) App Stack comes in. As F5 XC’s Kubernetes-native platform, App Stack integrates directly with Tailscale to extend F5 ADSP into tailnets. The result is that applications inside tailnets gain the same enterprise-grade security, performance, and operational consistency as in traditional environments, while also taking full advantage of Tailscale networking.

Our mission is to Deliver and Secure Every App. In this article, I’ll walk through five easy steps to extend F5 ADSP into Tailscale tailnets and make applications accessible through F5 XC with enterprise-grade reliability and security.

 

 

Scope and Focus

This guide is focused on extending F5 ADSP into Tailscale. It will not cover:

  • Load balancer configuration and traffic management policies
  • Web Application Firewall (WAF) setup and rule configuration
  • DDoS protection and bot management configuration
  • SSL/TLS certificate management and security policies
  • Virtual K8s/Virtual Sites configuration
  • Tailscale setup and configuration

These topics are well-documented in DevCentral and other community resources.

Instead, this article will show how to:

  • Deploy tailnet egress proxies using Terraform and App Stack
  • Configure origin pools that target services inside tailnets
  • Test and validate connectivity through F5 XC

 

Demo Environment

The objective is to connect the Regional Edge (RE) and Customer Edge (CE) nodes in vK8s to Tailscale and expose applications in tailnets as F5 XC origin pools.

For this demo, I have set up the following environment:

  • vK8s site: 2 Customer Edges (CEs) + 1 Regional Edge (RE)

    F5 XC vK8s dashboard showing RE and CE sites
  • Applications/nodes residing in two tailnets (tail40b056, tail88356)
  • Applications are containerized openspeedtest web services running in Docker

    Tailscale console showing the deployed application nodes in the tailnet

 

Step-by-Step Deployment

 

1. Clone the project repository

git clone https://github.com/f5devcentral/f5xc-tailnet-egress
cd f5xc-tailnet-egress

 

2. Configure deployment variables

cp terraform.tfvars.example terraform.tfvars

Update terraform.tfvars as needed. Example:

k8s_namespace      = "f-tabrani"
container_registry = "regd.izpzi.com"

tailnets = [
  {
    tailnet_name   = "tail40b056"
    tailnet_key    = "tskey-auth-kme2kAy8PX11CNTRL-xxxxx"
    use_k8s_secret = true
    services = [
      {
        endpoint         = "openspeedtest.tail40b056.ts.net"
        protocol         = "tcp"
        port             = 3000
        f5xc_origin_pool = "ost-ts-yunohave-com"
      }
    ]
  },
  {
    tailnet_name   = "tail88356"
    tailnet_key    = "tskey-auth-kgchrTFDx421CNTRL-xxxxx"
    use_k8s_secret = true
    services = [
      {
        endpoint         = "openspeedtest.tail88356.ts.net"
        protocol         = "tcp"
        port             = 3000
        f5xc_origin_pool = "ost-ts1-yunohave-com"
      }
    ]
  }
]

Configuration breakdown:

  • k8s_namespace: Namespace where Kubernetes resources (pods, services, secrets) will be deployed.
  • container_registry: Registry for container images (defaults to Docker Hub if unspecified).
  • tailnets: Config for each Tailscale network.
    • tailnet_name: Unique identifier from the Tailscale console.
    • tailnet_key: Authentication token, stored as a K8s secret if use_k8s_secret = true.
    • services: Applications within the tailnet to expose. Each service maps to an F5 XC origin pool.
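When use_k8s_secret = true, the auth key is stored as a Kubernetes Secret rather than kept in plain text in the manifests. As a rough illustration, the resulting Secret could look like the sketch below; the Secret name is assumed (the Terraform module creates its own), while TS_AUTHKEY is the environment variable Tailscale containers conventionally read the auth key from:

```yaml
# Illustrative only: the module generates its own Secret; names here are assumed.
apiVersion: v1
kind: Secret
metadata:
  name: tailscale-auth-tail40b056   # hypothetical name
  namespace: f-tabrani
type: Opaque
stringData:
  TS_AUTHKEY: tskey-auth-kme2kAy8PX11CNTRL-xxxxx
```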

 

3. Deploy with the script

./deploy.sh

The script will:

  • Validate prerequisites (Terraform, kubectl, curl)
  • Apply Terraform configuration
  • Deploy Kubernetes manifests
  • Create F5 XC origin pools for your services
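As a minimal sketch of the first step the script automates, the prerequisite check might look like the following; the function name is illustrative and the actual deploy.sh in the repository is authoritative:

```shell
# Hypothetical sketch of the prerequisite validation deploy.sh performs
# (the article lists Terraform, kubectl, and curl as prerequisites).
check_prereqs() {
  for tool in "$@"; do
    command -v "$tool" >/dev/null 2>&1 || { echo "missing: $tool" >&2; return 1; }
  done
  echo "prerequisites ok"
}

# The real run would be: check_prereqs terraform kubectl curl
# followed by applying the Terraform configuration and Kubernetes manifests.
check_prereqs sh    # demo with a tool guaranteed to exist
```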

 

4. Verify deployment

Check pod status:

kubectl get pods -l app=tailscale-egress -n <namespace>

View logs:

kubectl logs -l app=tailscale-egress -c tailscale-<tailnet-name> -n <namespace>

Port forward and test application ports if needed:

kubectl port-forward svc/tailscale-egress <local-port>:<service-port> -n <namespace>
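If you want to script the pod check, a small helper (hypothetical, not part of the repository) can assert that every proxy pod reports all of its containers ready in the READY column of `kubectl get pods --no-headers` output:

```shell
# Hypothetical helper: succeed only if every pod line shows READY "n/n".
all_ready() {
  awk '{ split($2, r, "/"); if (r[1] != r[2]) exit 1 }'
}

# In practice:
#   kubectl get pods -l app=tailscale-egress -n <namespace> --no-headers | all_ready
# Demo with canned output:
sample="tailscale-egress-0   2/2   Running   0   5m
tailscale-egress-1   2/2   Running   0   5m"
printf '%s\n' "$sample" | all_ready && echo "all pods ready"
```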

 

Tailscale console showing the deployed proxy pods across the tailnets

The origin pools created target RE services by default. To target CE services, adjust the Site/Vsite settings. Additional settings such as TLS and health checks may also be required if the endpoints use them.

F5 XC origin pool settings targeting the services on both the RE and CE.

 

5. Attach to Load Balancer

Finally, attach the origin pools to a load balancer in F5 XC. At this point, applications inside Tailscale tailnets are securely accessible via F5 XC.

 

 

How it works: Under the hood

 

StatefulSet/Container Design

  • Each proxy pod combines Envoy + one or more Tailscale containers.
  • Envoy listeners connect via Tailscale’s built-in HTTP proxy (using HTTP CONNECT).
  • Multiple Tailscale containers allow access across multiple tailnets.
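To illustrate the listener design, an Envoy TCP proxy can forward a listener to a tailnet endpoint by tunneling the stream through the Tailscale container's local HTTP proxy with HTTP CONNECT. The sketch below is simplified and assumes tailscaled runs with --outbound-http-proxy-listen=localhost:1055; the listener and cluster names are illustrative, and the Envoy config actually generated by the module is authoritative:

```yaml
# Simplified sketch; the module's generated Envoy config is authoritative.
static_resources:
  listeners:
  - name: ost_tail40b056
    address:
      socket_address: { address: 0.0.0.0, port_value: 3000 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.tcp_proxy
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
          stat_prefix: ost_tail40b056
          cluster: tailscale_http_proxy
          tunneling_config:              # wrap the TCP stream in HTTP CONNECT
            hostname: openspeedtest.tail40b056.ts.net:3000
  clusters:
  - name: tailscale_http_proxy
    connect_timeout: 5s
    type: STATIC
    typed_extension_protocol_options:
      envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
        "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
        explicit_http_config:
          http_protocol_options: {}      # CONNECT over HTTP/1.1
    load_assignment:
      cluster_name: tailscale_http_proxy
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: 127.0.0.1, port_value: 1055 }
```

Adding a second Tailscale sidecar (and a matching listener/cluster pair) is what lets one pod reach services in multiple tailnets.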

Scalability

Because the proxy pods run in Kubernetes, they can be scaled in number or across sites.

  • Scale pods horizontally by increasing replicas.
  • Extend coverage by deploying proxies across additional sites in your vK8s.

 

Conclusion

By integrating Tailscale with F5 ADSP, organizations can securely extend enterprise application delivery and security into zero-trust tailnet environments.

This integration bridges modern mesh networking with proven application delivery and protection. It ensures apps remain consistent, secure, and performant, no matter where they’re deployed.

 

 

Published Aug 20, 2025
Version 1.0

2 Comments

  • How does attaching origin pools to F5 XC load balancers impact performance and failover when accessing services inside tailnets?

    • fads (Employee)

      To help answer the question, think of this particular use-case as an enhanced Tailscale Funnel service.

      You can expose your Tailscale nodes or services to the internet, with the following additional benefits:

      • Choice of URL, service name, and ports (instead of being tied to defaults).
      • Load balancing across Tailscale nodes, across tailnets, or even non-Tailscale backends.
      • Application security controls (WAAP, DDoS, bot defense, API enforcement) built in.

       

      In terms of performance and resiliency, when you attach tailnet-reachable services as origin pools behind an F5 XC Load Balancer, you gain:

      • Global Anycast entry: traffic lands at the closest XC Regional Edge (RE).
      • Distributed Cloud backbone: once inside the XC fabric, traffic rides over F5’s private global backbone, a highly-optimized network interconnecting Regional Edges and Core Sites. This means predictable latency, stronger SLAs, and resilience even across geographies.
      • WAAP + policies: TLS termination, L7 inspection, and rate limiting before traffic hits your service.
      • Health-based failover: automatic removal of unhealthy endpoints and redirection to healthy pool members, even across different tailnets or regions, without client changes.

       

      Performance considerations:

      • For Tailscale nodes/services, end-to-end latency is mostly dictated by network distance and tailnet path quality.
      • XC adds a small per-request overhead (WAAP, TLS, L7 features) but often reduces overall latency thanks to:
        • RE/CE locality (nearest entry point)
        • Optimized routing across the XC backbone (avoiding unpredictable public internet paths)

       

      Failover behavior:

      • Deterministic and fast. If a node or path fails health checks, it’s removed immediately, and traffic is shifted to healthy nodes.
      • Because failover can leverage the global backbone, users are seamlessly redirected to healthy origins in other regions, without client-side DNS changes.