Egress control for Kubernetes using F5 Distributed Cloud Services


When using F5 Distributed Cloud Services (F5 XC) to manage your Kubernetes (K8s) workloads, egress firewalling based on K8s namespaces or labels is easy. While network firewalls have no visibility into which K8s workload initiated outbound traffic - and therefore cannot apply security policies based on workload - we can use a platform like F5 XC Managed Kubernetes (mK8s) to achieve this.


Applying security policies to outbound traffic is common practice. Security teams inspect Internet-bound traffic to detect and prevent command-and-control traffic, to allow select users to browse select portions of the Internet, or simply to gain visibility into outbound traffic. Often the allow/deny decision is based on a combination of user, source IP, and destination website. Here's an awesome walkthrough of outbound inspection.

Typical outbound inspection performed by a network-based device.

Network-based firewalls cannot do the same for K8s workloads because pods are ephemeral. They can be short-lived, their IP addresses are temporary and reused, and all pods on the same node making outbound connections will have the same source IP on the external network. In short, a network device cannot distinguish traffic from one pod versus another. 

Which microservice is making this outbound request? Should it be allowed?

Problem statement

In my cluster I have two apps, app1 and app2, in namespaces app1-ns and app2-ns.

  • For HTTP traffic, I want
    • app1 to reach out to a single wildcard domain but nothing else
    • app2 to reach one specific REST API endpoint but nothing else, not even other subdomains of the same domain
  • For non-HTTP traffic, I want
    • app1 to be able to reach a partner's public IP address on port 22
    • app2 to reach Google's DNS server on port 25
  • I want no other traffic (TCP or UDP, HTTP or non-HTTP) to egress from my pods.
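As a concrete sketch, the two workloads might be deployed like this (the name, image, and namespace layout here are hypothetical; the egress-ruleset label is the one the XC policies will match later in this article):

```yaml
# Hypothetical Deployment for app1 in namespace app1-ns.
# The egress-ruleset label is what the XC forward proxy and
# firewall policies will match on.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1
  namespace: app1-ns
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app1
  template:
    metadata:
      labels:
        app: app1
        egress-ruleset: app1   # app2 would carry egress-ruleset: app2
    spec:
      containers:
        - name: app1
          image: nginx:1.25    # placeholder image
```

app2 would look the same, deployed in app2-ns with the label egress-ruleset: app2.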

What about a Service Mesh?

A service mesh will control traffic within your K8s cluster, both East-West (between services) and North-South (traffic to/from the cluster). Indeed, egress control is a feature of some service meshes, and a service mesh is a good solution to this problem. 

Istio's egress control documentation is a great starting point for reading about a service mesh with egress control. Using an egress gateway, Istio's sidecars force traffic destined for particular destinations through a proxy, and this proxy can enforce Istio policies. This solves our problem, although I've heard customers voice reasonable concerns:

  • what about non-HTTP traffic?
  • what if the egress gateway is bypassed?
  • can our security team configure the mesh, or manage its configuration as code?
  • a mesh may add learning and administrative overhead
  • a mesh is often managed by a different team than traditional security

What about a NetworkPolicy?

A NetworkPolicy is a K8s resource that defines networking rules, including allow/deny rules by namespace, label, source/destination pod and, with some plugins, destination domain. However, NetworkPolicies must be enforced by the network plugin (CNI) in your distribution (e.g., Calico), so they're not an option for everyone. They're also probably not a scalable solution once you consider the same concerns raised about service meshes above, but they are possible. Read more about NetworkPolicies for egress control with Istio, and check out this article from Monzo to see a potential solution involving NetworkPolicies.
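For illustration, a vanilla NetworkPolicy covering just the non-HTTP part of our problem statement for app1 might look like the following (the partner IP is a placeholder; the domain-based HTTP rules would need a CNI-specific extension such as Calico's DNS policies):

```yaml
# Hypothetical NetworkPolicy for app1-ns: allow egress only to one
# partner IP on TCP/22 (plus DNS); all other egress is implicitly denied
# once an Egress policy selects the pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: app1-egress
  namespace: app1-ns
spec:
  podSelector: {}            # applies to all pods in app1-ns
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 203.0.113.10/32   # placeholder partner IP (TEST-NET-3)
      ports:
        - protocol: TCP
          port: 22
    - ports:                 # allow DNS so pods can still resolve names
        - protocol: UDP
          port: 53
```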

Other ideas

Read the article from Monzo linked above to see what others have done. You could watch (or serve) DNS requests from pods and then very quickly update outbound firewall rules to allow or disallow traffic to the IP address in the DNS response; NeuVector had a great article on this. You could also use a dedicated outbound proxy per domain name, as Monzo did, although this wouldn't scale to a large number of domains, so some kind of exceptions would be needed. Falco, a tool that can monitor outbound connections from pods using eBPF, is also worth a read. Generally speaking, these other ideas raise the same concerns for teams without mesh skills: K8s and mesh networking can be unfamiliar and difficult to operate.

Me and Kubernetes

Solving egress control with F5 XC Managed Kubernetes

Another way to control outbound traffic by K8s namespace or label is to use a K8s distribution that includes these features. In a technical sense, this works just like a mesh: by injecting a sidecar container for security controls into pods, the platform can control networking. However, the mesh is not managed separately in this case. The platform's security policies provide a GUI, easy reuse of policies, and generally an experience identical to the one used for traditional egress control with the platform.

Solving for our problem statement

If I am using Virtual K8s (vK8s) or Managed K8s (mK8s), my pods are running on the F5 XC platform. These containers may be on-prem or in F5's PoPs, but the XC platform is natively aware of each pod. Here's how to solve our problem with XC when you have a managed K8s cluster.

1. First, create a known key so we have a label in XC that matches a label we will apply to our K8s pods. I have created a known key egress-ruleset by following this how-to guide.

2. For HTTP and HTTPS traffic, create a forward proxy policy. Since we want rules to apply to pods based on their labels, choose "Custom Rule List" when creating rules.
  • Rule 1: set the source to anything with a known label of egress-ruleset=app1 and allow access to the TLS domain suffix from our problem statement.
  • Rule 2: same as Rule 1, but allow access to the HTTP path suffix.
  • Rules 3 and 4: the same, but with the source endpoint matching egress-ruleset=app2.
  • Rule 5: a final Deny All rule.

3. For non-HTTP(S) traffic, create multiple firewall policies for traffic ingressing, egressing, or originating from an F5 Gateway. I recommend multiple policies because each policy applies to a group of endpoints defined by IP or label. I've used three policies in my examples (one for the label egress-ruleset=app1, another for app2, and one for all endpoints). Use the policies to allow TCP traffic as desired.

4. Create and deploy a Managed K8s cluster and an App Stack site attached to this cluster. When creating the App Stack site, you can attach the policies you created in steps 2 and 3. You can layer multiple policies in order, for policies of both types (forward proxy and firewall).

5. Deploy your K8s workload and label your pods with egress-ruleset and a value of app1 or app2. Finally, validate that your policies are in effect by using kubectl exec against a pod running in your cluster.
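Validation might look like the following session (pod names and destinations are placeholders; traffic matching an allow rule succeeds, while everything else is blocked):

```shell
# Find a pod for app1 and open a shell in it
$ kubectl -n app1-ns get pods -l egress-ruleset=app1
$ kubectl -n app1-ns exec -it app1-7d4b9c6f5-x2x2p -- sh

# Allowed: HTTPS to the permitted wildcard domain (placeholder name)
$ curl -sv https://allowed.example.com/

# Denied: any other destination, caught by the Deny All rule
$ curl -sv https://www.example.org/

# Allowed: TCP/22 to the partner IP (placeholder)
$ nc -vz 203.0.113.10 22
```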

We have now demonstrated that outbound traffic from our pods is allowed only to destinations we have configured.

We can now control outbound traffic specific to the microservice that is the source of the traffic.

Application namespaces

Another way to solve this problem uses namespaces only, not labels. If you create your Application Namespace in the XC console (not a K8s Namespace) and deploy your workloads in the corresponding K8s namespace, you can use the built-in namespace label, which means you won't need to create your own label (step 1). However, you will need a 1:1 relationship between K8s namespaces and Application Namespaces in XC, and your granularity for endpoints is no longer at the level of pod labels but at the namespace level.
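Under this namespace-based approach, the only K8s-side requirement is that the namespace name matches the Application Namespace created in the XC console, e.g.:

```yaml
# Hypothetical: this K8s namespace must correspond 1:1 with an
# Application Namespace of the same name in the XC console.
apiVersion: v1
kind: Namespace
metadata:
  name: app1-ns
```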

Further Reading

Enterprise-level outbound firewalling from products like F5's SSLO will do more than simple egress control, such as selectively passing traffic to 3rd-party inspection devices. Egress control in XC does not integrate with other devices, but the security controls fit the nature of typical microservices. Still, we could layer simple outbound rules enforced in K8s with enterprise-wide inspection rules enforced by SSLO for further control of outbound traffic, including integration with 3rd-party devices.

While this example used mK8s, I'll make note of another helpful article that explains how labels can be used for controlling network traffic when using Virtual K8s (vK8s).


Egress control for Kubernetes workloads, with security policy based on namespaces or labels, can be enforced either by a service mesh that supports egress control or by a managed K8s solution like F5 XC that integrates network security policies natively into the K8s networking layer. Consider practical concerns, like management overhead and existing skill sets, and reach out if I or another F5'er can help explain more about egress control using F5 XC!

Finally, thank you to my colleague Steve Iannetta @netta2 who helped me prepare this. Please do reach out if you want to try this yourself or have more in-depth K8s traffic management questions.

Updated Jul 03, 2023
Version 3.0



  • Hi SamFok_hk 

    This article only applies to F5 XC and mK8s (Managed Kubernetes). If you bring your own cluster, like OpenShift, and deploy a CE into it, I think you'd have to make some networking changes within the cluster to be able to use egress control in this way. So the short answer, I think, is no, the same egress control does not apply if using a CE Mesh as a pod in OCP.