linjing
F5 Employee

In the previous two articles, we analyzed why Kubernetes (K8s) egress traffic needs to be securely controlled and examined the characteristics of six categories of solutions. It is clear that integrating external security facilities with K8s is an effective way to achieve fine-grained control of outbound traffic. The approach provides good support and coverage, both for the technical implementation of security controls and for enabling multiple teams to collaborate on defense in depth.

In this article, we will describe some details and usage scenarios of the F5 Container Egress Service (CES) solution.

CES can be understood as a solution consisting of a controller running inside K8s and F5 AFM (a virtual or hardware BIG-IP) running outside K8s:

  • CES Controller: a container running in K8s. This control-plane component converts the egress policies deployed in K8s into configuration on the external data-plane component.
  • F5 BIG-IP AFM: a data-plane component that runs outside of K8s. It accepts the configuration delivered by the CES controller and enforces the specific access control rules, such as access control lists, rate limiting, and traffic programmability.

  • CNI: the CNI is chosen by the user's own environment and is not part of the CES solution, but different CNIs affect which CES features are available. We have demonstrated full functionality with the kube-ovn CNI (a CNCF project); CES is the certified egress solution for kube-ovn.

F5 Container Egress Service (CES) is an innovation project of InnovateF5.

InnovateF5 is an innovation platform sponsored by the Office of the CTO. InnovateF5 is a resource center to bring new ideas to life. It provides hosting infrastructure, project tools, code repos, skill matchmaking, and mentorship for new and forming innovation teams. Inside InnovateF5, new ideas thrive!

Overall structure

One or more CES controllers can run in each K8s cluster, and each cluster can use shared or dedicated F5 AFM instances (virtual or hardware BIG-IP) deployed outside the cluster. Users deploy policies by creating CRD resources in the K8s cluster; the controller automatically discovers policy changes and pushes them to the F5 AFM instance. On the network side, pods send outbound traffic directly to F5 without SNAT; technically, the CNI, policy routing, overlay tunnels, and other methods can be used to steer egress traffic to F5. The CES controller is open sourced on the F5 DevCentral GitHub.
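To make the control-plane pattern concrete, the following is a minimal, purely illustrative sketch of what "watch policy resources, translate them, and push configuration to the external AFM" looks like. All class, function, and field names here are assumptions for illustration, not the actual CES controller API.

```python
# Hypothetical sketch of the CES control-plane pattern: translate each
# declared egress rule into a firewall entry and push it to the external
# data plane. Names and fields are illustrative assumptions only.

def translate_rule(rule: dict) -> dict:
    """Convert a simplified egress rule into an ACL entry for the firewall."""
    return {
        "name": rule["metadata"]["name"],
        "action": rule["spec"]["action"],              # e.g. "accept" or "drop"
        "destinations": rule["spec"]["destinations"],  # CIDRs or FQDNs
    }

class FakeAFM:
    """Stand-in for the external data plane; stores pushed ACL entries."""
    def __init__(self):
        self.acl = {}

    def apply(self, entry: dict) -> None:
        self.acl[entry["name"]] = entry

def reconcile(afm: FakeAFM, rules: list[dict]) -> None:
    """One reconcile pass: make the AFM config reflect the declared rules."""
    for rule in rules:
        afm.apply(translate_rule(rule))

rule = {
    "metadata": {"name": "allow-ntp"},
    "spec": {"action": "accept", "destinations": ["10.0.0.53/32"]},
}
afm = FakeAFM()
reconcile(afm, [rule])
print(afm.acl["allow-ntp"]["action"])  # accept
```

The key design point this mirrors is declarative reconciliation: the desired state lives in the cluster as resources, and the controller repeatedly makes the external firewall match it, so policy changes never require touching the AFM by hand.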

(Figure: high-level architecture)

 

Policy design

As analyzed several times in the previous articles, collaboration among the security, development, and platform teams is an important prerequisite for implementing a reliable egress security policy. The F5 CES solution therefore defines three K8s custom resources (CRDs), each oriented to a different team role. Each CRD and its corresponding role is described below:

  • ClusterEgressRule — Scope: cluster-wide. Roles: cluster administrator, security team. This policy applies at the overall cluster level and controls general, cluster-wide access, for example access control for basic shared services such as enterprise NTP and DNS. Its rules govern outbound access for all services in the cluster.
  • NamespaceEgressRule — Scope: namespace. Roles: project team, application operations and maintenance team. This policy takes effect on a single namespace or project and controls access to services outside the cluster by all services within that namespace. Policies in different namespaces or projects do not affect each other. *This function requires CNI support.
  • ServiceEgressRule — Scope: K8s Service. Roles: project team, application operations and maintenance, microservice owner. This policy controls access to external services by the endpoint pods associated with a specific K8s Service, and is valid only for that Service.
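The three CRD kinds above come from the solution itself; what their specs contain is not shown in this article, so the following manifests are hedged guesses, expressed as Python dicts, purely to illustrate how the same rule shape narrows in scope from cluster to namespace to Service. Every `spec` field and value is an assumption.

```python
# Hypothetical manifests for the three CES CRDs. The kinds are real;
# apiVersion is omitted and all spec fields/values are illustrative.

cluster_rule = {
    "kind": "ClusterEgressRule",        # cluster-wide (security team)
    "metadata": {"name": "allow-enterprise-ntp"},
    "spec": {"action": "accept", "destinations": ["ntp.corp.example:123"]},
}

namespace_rule = {
    "kind": "NamespaceEgressRule",      # per-namespace (project team)
    "metadata": {"name": "allow-payments-api", "namespace": "shop"},
    "spec": {"action": "accept", "destinations": ["api.pay.example:443"]},
}

service_rule = {
    "kind": "ServiceEgressRule",        # per-Service (microservice owner)
    "metadata": {"name": "frontend-egress", "namespace": "shop"},
    "spec": {
        "service": "frontend",          # the K8s Service this rule binds to
        "action": "accept",
        "destinations": ["cdn.example:443"],
    },
}

for r in (cluster_rule, namespace_rule, service_rule):
    print(r["kind"], "->", r["spec"]["destinations"])
```

Note how only the narrowest kind needs to name a specific Service, while the namespace-scoped kind is anchored by its `metadata.namespace` and the cluster-scoped kind carries no namespace at all.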

In practice, the team responsible for overall enterprise security sets baseline egress policies for each cluster, for example uniformly allowing all pods to access specific in-house services such as NTP. Each project team manages the common egress policy for its own namespaces, such as an external service that every application in a project needs to reach. Within an application, the team that owns a specific microservice sets the external access that microservice requires. Through such hierarchical settings, policies can be defined at every dimension, from the whole cluster, to the application, down to the individual microservice. If the enterprise security team discovers that a project or microservice team has opened an incorrect egress policy, it can issue a prohibitive policy at the global level that overrides it, providing temporary control during audits or security incidents.
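The override behavior described above can be sketched as a precedence rule: when rules at several scopes match the same destination, the broader scope wins, so a cluster-level "drop" from the security team beats a narrower "accept". This is a hedged illustration of the idea only; the actual evaluation order, field names, and default behavior are assumptions.

```python
# Illustrative precedence evaluation for the hierarchical override:
# a cluster-level prohibitive rule overrides narrower permissive rules.
# Precedence order, field names, and default-deny are assumptions.

PRECEDENCE = {"ClusterEgressRule": 0, "NamespaceEgressRule": 1, "ServiceEgressRule": 2}

def effective_action(rules: list[dict], destination: str) -> str:
    """Return the action of the highest-precedence rule matching destination."""
    matching = [r for r in rules if destination in r["spec"]["destinations"]]
    if not matching:
        return "drop"  # assume default-deny when nothing matches
    matching.sort(key=lambda r: PRECEDENCE[r["kind"]])
    return matching[0]["spec"]["action"]

rules = [
    # A microservice team opened access to a destination by mistake:
    {"kind": "ServiceEgressRule",
     "spec": {"action": "accept", "destinations": ["bad.example:443"]}},
    # The security team pushes an emergency cluster-wide block:
    {"kind": "ClusterEgressRule",
     "spec": {"action": "drop", "destinations": ["bad.example:443"]}},
]
print(effective_action(rules, "bad.example:443"))  # drop
```

Removing the cluster-level rule restores the service-level "accept", which matches the article's point: the global prohibitive policy is a temporary control that can be lifted once the incident or audit is resolved.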


Since all policies are implemented through CRDs, the security rule setting process can easily be embedded into a DevSecOps pipeline, putting the idea of shifting security left into practice.

 

Value and Capability

Challenges to Solve

  • Frequent changes to egress control policies caused by dynamic container IPs

  • Different roles need to set policies at different scopes, so policies must match roles across multiple dimensions

  • Dynamic bandwidth limits on outbound traffic

  • Deep protocol security inspection

  • Advanced traffic programmability based on access-control events

  • Outbound traffic visualization

  • Working with CNIs that allow the same CIDR in different namespaces, to achieve strong tenant traffic isolation

Capabilities Provided

  • Dynamic IP ACL control at cluster/namespace/pod granularity

  • FQDN ACL control at cluster/namespace/pod granularity

  • Time-based access control

  • Fine-grained outbound SNAT policy settings

  • Programmable triggers on traffic-match events

  • Redirection of matched traffic

  • Protocol security and compliance detection

  • IP address intelligence database

  • Traffic-match logging

  • Traffic-match visual reports

  • Protocol-detection visual reports

  • TCP/IP error reports

  • NAT control and logging

  • Data-flow visualization and tracking

  • Access-rule visual simulation

  • Transparent detection mode

  • High-speed log export

For more usage scenarios and configuration examples, please refer to the project Wiki.

Version history
Last update:
‎19-Jul-2022 10:21