F5 Cloud-Native Functions For Secure Ingress

Outline:

  • Securing Ingress to Your Clusters
  • How F5 can help
  • Technical Overview

This is an addition to a series of articles that introduce some features of the latest release of F5's BIG-IP Next Cloud-Native Functions.

Securing Ingress to Your Clusters

In addition to routing traffic through the TMM proxy pod to the Internet, you can now use the same pod to load balance directly to workloads running inside Kubernetes. This improves performance and reduces latency and resource consumption, which is especially important in use cases such as Multi-Access Edge Computing (MEC). It is part of an ongoing effort to merge existing features of BIG-IP Next Service Proxy for Kubernetes (SPK) into a single software bundle, making it even simpler for telco operators to consolidate 5G data plane functionality.

As Kubernetes projects move past the initial rollout of application workloads and confirm basic access from external clients, many users are now exploring how to secure the traffic flowing in and out of their worker nodes with a carrier-grade network firewall that addresses the following issues:

  • Lack of a central enforcement point for both ingress and egress network traffic
  • Lack of an easy way to apply access control lists consistently across deployments and clusters
  • Lack of visibility into North-South traffic

How F5 can help

As mentioned in a previous article, F5 CNFs can extend Kubernetes with more capabilities and even protect your cloud-native 5G infrastructure by enabling granular control and visibility of traffic to the network functions deployed within your clusters. 

Technical Overview

The two features you can use to achieve this are ContextSecure and Firewall Policy, which give you many parameters for protecting ingress traffic. Both are custom resource definitions (CRDs) deployed through the Kubernetes API, which means native integration with K8s and the ability to plug seamlessly into your automation or CI/CD toolset of choice.
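
Because both CRs are ordinary Kubernetes resources, they work with standard tooling. A minimal sketch of deploying and verifying them with kubectl (the file names are illustrative, and the namespace is assumed to be the one used later in this article):

# Apply the firewall policy and the listener like any other Kubernetes object
kubectl apply -f fwpolicy.yaml -n ns-1n1
kubectl apply -f contextsecure.yaml -n ns-1n1

# Verify they were admitted; "kubectl get -f" avoids guessing each resource's plural name
kubectl get -f fwpolicy.yaml -n ns-1n1
kubectl get -f contextsecure.yaml -n ns-1n1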

The policy below enables access control lists (ACLs) that define which users are allowed or denied access to resources behind the firewall. Statistics and logging are built in to provide telemetry data and visibility into potential bad actors, which you can then easily mitigate using native YAML manifests for policy configuration.

Firewall Policy CR

Below is an example Firewall policy that can be referenced and enabled in your ContextSecure listener custom resource. 

apiVersion: "k8s.f5net.com/v1"
kind: F5BigFwPolicy
metadata:
  name: "fwpolicy"
spec:
  rule:
    # Accept traffic from the UE subnet arriving on ue-vlan
    - name: accept-test
      ipProtocol: any
      action: "accept"
      source:
        addresses:
          - "10.1.20.1/24"
        vlans:
          - "ue-vlan"
      logging: true
    # Reject traffic from this host; the client receives an immediate reset
    - name: reject-test
      ipProtocol: any
      action: "reject"
      source:
        addresses:
          - "10.1.10.5"
      logging: true
    # Silently drop traffic from this host
    - name: drop-test
      ipProtocol: any
      action: "drop"
      source:
        addresses:
          - "10.1.10.6"
      logging: true
    # Default rule: drop all remaining traffic (IPv6 and IPv4)
    - name: drop-all
      action: "drop"
      logging: true
      ipProtocol: any
      source:
        addresses:
          - "::0/0"
          - "0.0.0.0/0"

When this policy is applied to a listener that is processing ingress traffic, you can match on any combination of single IP addresses, IP subnets, ports, and IP protocols, as sketched below.
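
A hypothetical rule combining several of these match criteria, following the same schema as the policy above (the rule name and values are illustrative, and the destination block is an assumption, so confirm the fields supported by the F5BigFwPolicy CRD schema in your release):

  rule:
    # Accept DNS from the UE subnet only
    - name: allow-dns-from-ue
      ipProtocol: udp
      action: "accept"
      source:
        addresses:
          - "10.1.20.0/24"
      destination:
        ports:
          - "53"
      logging: true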

ContextSecure CR

Below is an example ContextSecure listener that references the above policy.

apiVersion: "k8s.f5net.com/v1"
kind: F5BigContextSecure
metadata:
  name: sc1n1
  namespace: ns-1n1
service:
  # The Kubernetes Service fronting the internal application Pods; CNF builds a
  # round-robin load balancing pool from the Service's Endpoints
  name: "nginx-1n1-svc"
  # The Kubernetes Service object's port value
  port: 80
spec:
  destinationAddress: "100.100.200.155"
  destinationPort: 80
  profile: "fastL4"
  ipProtocol: "tcp"
  snat:
    type: automap
  logProfile: "logprofile"
  # References the F5BigFwPolicy defined above
  firewallEnforcedPolicy: "fwpolicy"
  vlans:
    vlanList:
    - ue-vlan
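
For reference, the Service the listener points at could look like this minimal sketch (the selector label is an assumption; any Service whose Endpoints resolve to the NGINX Pods behaves the same way):

apiVersion: v1
kind: Service
metadata:
  name: nginx-1n1-svc
  namespace: ns-1n1
spec:
  selector:
    app: nginx-1n1   # assumed Pod label
  ports:
    - port: 80       # matches service.port in the CR above
      targetPort: 80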

Here is the application workload running in Kubernetes in a namespace watched by F5 CNF.

# kubectl get po -owide -n ns-1n1
NAME                        READY   STATUS    RESTARTS   AGE   IP                                  NODE                                 NOMINATED NODE   READINESS GATES
nginx-1n1-57dcb5c8d-g74xn   1/1     Running   0          13d   fd74:ca9b:3a09:868c:172:18:0:4a2b   worker-116.f5tokyo.local             <none>           <none>

To test, we can use curl to send a simple request from a client that is in the "allowed list".

curl http://100.100.200.155
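
From a client covered by the accept rule, the request should come back with the default NGINX welcome page (abridged here):

<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...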

Depending on the client IP address, you'll see the corresponding accept, drop, or reject rule defined in the firewall policy above get hit and its counters incremented. These statistics can be pulled or scraped via API, so they are easily integrated into solutions like Prometheus.

These counters can also be viewed with the tmctl command, which is available in the debug container.
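
A sketch of pulling those counters (the deployment name is derived from the TMM Pod name in the logs below; the namespace placeholder and the statistics table name are assumptions, so check the debug sidecar documentation for your release):

# Open a shell in the debug sidecar of the TMM Pod
kubectl exec -it deploy/f5-tmm -c debug -n <cnf-namespace> -- bash

# Query the firewall rule statistics table (table name may vary by release)
tmctl -d blade fw_rule_stat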

In the logs, you can see the destination matches the internal IP address of the NGINX pod. Note that all of these logs can be sent to a central telemetry and logging stack, enabling a comprehensive view of your cluster.

Logs for accept:

Message from syslogd@f5-tmm-7c487f59c7-l58qh at Jan 29 10:23:43 ...
 tmm[41]Jan 29 2024 01:24:22,f5-tmm-7c487f59c7-l58qh,fd74:ca9b:3a09:868c:172:18:0:4a32,10.1.10.2,36812,100.100.200.155,80,fd74:ca9b:3a09:868c:172:18:0:4a32,fd74:ca9b:3a09:868c:172:18:0:4a2b,18027,8080,Jan 29 2024 01:24:22,,TCP,Accept
 
Message from syslogd@f5-tmm-7c487f59c7-l58qh at Jan 29 10:23:43 ...
 tmm[41]Jan 29 2024 01:24:22,f5-tmm-7c487f59c7-l58qh,fd74:ca9b:3a09:868c:172:18:0:4a32,10.1.10.2,36812,100.100.200.155,80,fd74:ca9b:3a09:868c:172:18:0:4a32,fd74:ca9b:3a09:868c:172:18:0:4a2b,18027,8080,Jan 29 2024 01:24:22,,TCP,Established
 
Message from syslogd@f5-tmm-7c487f59c7-l58qh at Jan 29 10:23:43 ...
 tmm[41]Jan 29 2024 01:24:22,f5-tmm-7c487f59c7-l58qh,fd74:ca9b:3a09:868c:172:18:0:4a32,10.1.10.2,36812,100.100.200.155,80,fd74:ca9b:3a09:868c:172:18:0:4a32,fd74:ca9b:3a09:868c:172:18:0:4a2b,18027,8080,Jan 29 2024 01:24:22,,TCP,Closed

Logs for reject:

# curl http://100.100.200.155
curl: (7) Failed connect to 100.100.200.155:80; Connection refused   # immediate RST
 
 
Message from syslogd@f5-tmm-7c487f59c7-l58qh at Jan 29 10:15:16 ...
 tmm[41]Jan 29 2024 01:15:56,f5-tmm-7c487f59c7-l58qh,fd74:ca9b:3a09:868c:172:18:0:4a32,10.1.10.2,36786,100.100.200.155,80,,,,,Jan 29 2024 01:15:56,,TCP,Reject

Logs for drop:

# curl http://100.100.200.155
<retries occur>
curl: (7) Failed connect to 100.100.200.155:80; Connection refused
 
Message from syslogd@f5-tmm-7c487f59c7-l58qh at Jan 29 09:13:18 ...
 tmm[41]Jan 29 2024 00:13:57,f5-tmm-7c487f59c7-l58qh,fd74:ca9b:3a09:868c:172:18:0:4a32,cccc::100,49058,64:ff9b::a01:461e,80,,,,,Jan 29 2024 00:13:57,,TCP,Drop

These are just a couple of simple examples showing how easy it is to add another layer of security to your Kubernetes infrastructure using the same engine that powers this protection across all F5 hardware and software.

Published Jan 30, 2024