MichaelOLeary
F5 Employee

Summary

How do you preserve the “real” client IP address for a Pod (container) running in a Kubernetes cluster? Recently Eric Chen and I were asked this question by a customer, and in this article we'll share how we solved the problem.

What's the Problem?

In Kubernetes, a “pod” does not sit on your external network; instead, it sits comfortably on its own internal network. The issue is that exposing the pod with NodePort will source NAT (SNAT) the traffic, so the pod sees an internal IP address rather than the client's. You could set “externalTrafficPolicy” to preserve the client IP address, but that has some other limitations (you have to steer traffic to the node where the pod is running).
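For reference, the externalTrafficPolicy setting mentioned above is set on the Service object. A minimal sketch (the Service name and ports here are placeholders, not from the customer's deployment):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mypod-svc              # placeholder Service name
spec:
  type: NodePort
  externalTrafficPolicy: Local # skip SNAT so the pod sees the client IP...
  selector:                    # ...but only nodes running the pod accept traffic
    app: mypod
  ports:
    - port: 80
      targetPort: 8080
```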


Possible Solutions

X-Forwarded-For

The obvious way to preserve the source IP address is to have an external load balancer insert an X-Forwarded-For header into each HTTP request, and have the pod read the client IP from that header instead of from the connection's source address. The load balancer would change a request from:

GET /stuff HTTP/1.1
host: mypod.example.com

to:

GET /stuff HTTP/1.1
host: mypod.example.com
X-Forwarded-For: 192.0.2.10

This works well for HTTP traffic, but does not help for other TCP-based protocols.
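As a sketch of the pod-side logic (a hypothetical Python helper, not part of the customer's deployment), the application simply prefers the header value over the socket's peer address:

```python
def client_ip(headers: dict, peer_addr: str) -> str:
    """Return the original client IP, trusting X-Forwarded-For when present.

    If proxies append to an existing header, the left-most entry is the
    original client; later entries are intermediate proxies.
    """
    xff = headers.get("X-Forwarded-For")
    if xff:
        return xff.split(",")[0].strip()
    # No header: fall back to the connection's source address (the SNAT IP).
    return peer_addr

# The request shown above, arriving from an internal SNAT address:
print(client_ip({"X-Forwarded-For": "192.0.2.10"}, "10.42.0.7"))  # 192.0.2.10
```

Note this only makes sense when the header is set by a trusted load balancer; a client could otherwise spoof it.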

Proxy Protocol

Proxy Protocol is a method of preserving the source IP address over a TCP connection. Instead of manipulating the traffic at Layer 7 (modifying the application payload), we manipulate the TCP connection itself, prepending a short header that carries the original source IP address information. At the TCP level this would change the request to:

PROXY TCP[IP::version] [IP::remote_addr] [IP::local_addr] [TCP::remote_port] [TCP::local_port]
GET /stuff HTTP/1.1
host: mypod.example.com

Note that the application payload itself is not modified; the proxy protocol header is simply prepended to the connection. Some implications of proxy protocol:

  • In the case of encrypted traffic, there is no need for the load balancer to terminate the SSL/TLS connection. 
  • The destination of the proxy protocol traffic must be able to strip the prefix before processing the application traffic (otherwise it would believe the connection was somehow corrupted). In other words, both endpoints of the connection need to support proxy protocol.
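To make the prefix concrete, here is a small Python sketch that strips a proxy protocol v1 header off the front of a TCP stream, as a receiving endpoint must (illustrative only; real receivers such as NGINX do this internally):

```python
def parse_proxy_v1(data: bytes):
    """Split a PROXY protocol v1 header from the start of a TCP stream.

    The v1 header is a single ASCII line terminated by CRLF, e.g.:
        b"PROXY TCP4 <src-ip> <dst-ip> <src-port> <dst-port>\r\n"
    Returns ((src_ip, src_port, dst_ip, dst_port), remaining_payload).
    """
    if not data.startswith(b"PROXY "):
        raise ValueError("connection did not begin with a PROXY header")
    header, _, payload = data.partition(b"\r\n")
    # e.g. ["PROXY", "TCP4", "192.0.2.10", "10.1.1.5", "49152", "80"]
    _, proto, src, dst, sport, dport = header.decode("ascii").split(" ")
    return (src, int(sport), dst, int(dport)), payload

info, payload = parse_proxy_v1(
    b"PROXY TCP4 192.0.2.10 10.1.1.5 49152 80\r\n"
    b"GET /stuff HTTP/1.1\r\nhost: mypod.example.com\r\n\r\n"
)
print(info)  # ('192.0.2.10', 49152, '10.1.1.5', 80)
```

The application payload that follows the CRLF is untouched, which is what lets the load balancer pass encrypted traffic through without terminating TLS.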


Customer's Problem Statement

The customer wanted to use both a BIG-IP at the edge of their Kubernetes cluster AND NGINX as the Ingress Controller within the cluster. The BIG-IP would provide L4 TCP load balancing to NGINX, which would act as the L7 HTTP/HTTPS Ingress Controller.

Combining the approaches outlined earlier, we arrived at a design that met these requirements.

  1. BIG-IP utilizes proxy protocol to preserve the source IP address over a TCP connection. BIG-IP can be the sender of proxy protocol with this iRule.
  2. NGINX is configured to receive proxy protocol connections from the BIG-IP and uses X-Forwarded-For headers to preserve the source IP address to the pod.
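On the NGINX side, accepting proxy protocol is a small configuration change. A sketch along the lines of the NGINX Ingress Controller's ConfigMap keys (the ConfigMap name, namespace, and BIG-IP subnet below are placeholders, not the customer's actual values):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config          # placeholder name/namespace
  namespace: nginx-ingress
data:
  proxy-protocol: "True"            # accept PROXY headers on inbound connections
  real-ip-header: "proxy_protocol"  # take the client IP from the PROXY header
  set-real-ip-from: "10.1.1.0/24"   # placeholder: subnet of the BIG-IP self IPs
```

Restricting set-real-ip-from to the BIG-IP's addresses ensures only the trusted load balancer can assert a client IP.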

To deploy the solution we leveraged Container Ingress Services to deploy the proxy protocol configuration and bring traffic to the NGINX Ingress Controller. NGINX Ingress Controller was configured to use proxy protocol for connections originating from the BIG-IP.

Conclusion

Using F5 BIG-IP and NGINX, we were able to meet the customer's requirements and give them better visibility into the “real” IP address of their clients.

Comments
MichaelOLeary
F5 Employee

Thanks for reading this post and please reach out with any questions!

Version history
Last update: 20-Aug-2020 09:27