Forum Discussion
F5 CIS -> NGINX Plus Ingress Controller Integration
Hi,
I'm using F5 BIG-IP and the NGINX Plus Ingress Controller (NPIC) integrated via IngressLink. While attempting to forward the client IP and port by enabling the Proxy Protocol, I ran into the following issue and would appreciate some assistance.
Configuration
BIG-IP: Proxy Protocol enabled via iRule
NPIC: Proxy Protocol enabled by adding proxy-protocol: "true" in the ConfigMap during deployment (see the sketch below)
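For reference, the relevant ConfigMap keys look roughly like this (a minimal sketch; the full nginx-config ConfigMap is pasted further down in the thread):

data:
  proxy-protocol: "true"
  real-ip-header: proxy_protocol
  set-real-ip-from: 0.0.0.0/0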
Issue
When the Proxy Protocol setting is added to the NPIC ConfigMap, the integration with BIG-IP breaks, and routing to pods through NPIC fails.
If this setting is removed, IngressLink functions normally: a Virtual Server is automatically created in the BIG-IP GUI, and responses through the NPIC path work correctly. However, in this case, direct requests to the BIG-IP Virtual Server IP fail.
In other words, while F5 CIS installation and IngressLink integration are partially functioning, access via the BIG-IP Virtual Server IP completely fails.
If anyone has experienced a similar issue or can offer insights into the cause and how to resolve it, your advice would be greatly appreciated.
Any debugging tips or relevant documentation would also be a great help.
Thank you.
2 Replies
Do you mean enabling the Proxy Protocol on NGINX? What about the BIG-IP in front of it, with an iRule, as described here?
https://my.f5.com/manage/s/article/K40512493
Then NGINX should just trust the IP address carried in the Proxy Protocol header.
https://docs.nginx.com/nginx/admin-guide/load-balancer/using-proxy-protocol/
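In plain nginx.conf terms, the trust settings from that guide look roughly like this (just a sketch; with the NGINX Ingress Controller these directives are generated from the proxy-protocol, set-real-ip-from and real-ip-header ConfigMap keys, and the address range below is only an example):

server {
    listen 443 ssl proxy_protocol;      # accept the PROXY protocol header on the listener
    set_real_ip_from 192.168.10.0/24;   # example range: trust the PROXY header only from the BIG-IP's addresses
    real_ip_header proxy_protocol;      # take the client address from the PROXY header
    # ...
}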
- Jungbin_Kim
Nimbostratus
Hello. Currently, the Proxy Protocol is enabled on the NGINX Plus Ingress Controller via a ConfigMap, and on the BIG-IP it is implemented through the iRule shown below.
When sending a request through the VIP (192.168.10.5), it returns an RST packet, and it appears that the BIG-IP side is terminating the connection. Thank you.
<BIG-IP iRule>
# PROXY Protocol Receiver iRule
# iRule used for F5 IngressLink
# Layer 4 iRule since BIG-IP is passthrough
when CLIENT_ACCEPTED {
    # Build the PROXY protocol v1 header from the client-side connection details
    set proxyheader "PROXY "
    if {[IP::version] eq 4} {
        append proxyheader "TCP4 "
    } else {
        append proxyheader "TCP6 "
    }
    append proxyheader "[IP::remote_addr] [IP::local_addr] [TCP::remote_port] [TCP::local_port]\r\n"
}
when SERVER_CONNECTED {
    # Send the PROXY header to the pool member (NGINX) as soon as the server-side connection is established
    TCP::respond $proxyheader
}

<NGINX Plus Ingress Controller ConfigMap>
k get cm nginx-config -o yaml

apiVersion: v1
data:
  proxy-protocol: "True"
  real-ip-header: proxy_protocol
  set-real-ip-from: 0.0.0.0/0
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"proxy-protocol":"True","real-ip-header":"proxy_protocol","set-real-ip-from":"0.0.0.0/0"},"kind":"ConfigMap","metadata":{"annotations":{},"name":"nginx-config","namespace":"nginx-ingress"}}
  creationTimestamp: "2025-06-12T08:25:14Z"
  name: nginx-config
  namespace: nginx-ingress
  resourceVersion: "99778"
  uid: f31d6563-dd1f-43ca-abfb-765096d8ea9f

<kubectl exec -it <nginx-plus-ingress-controller-pods> -- nginx -T>
...
server {
    listen 80 proxy_protocol;
    listen [::]:80 proxy_protocol;
    listen 443 ssl proxy_protocol;
    listen [::]:443 ssl proxy_protocol;
    ssl_certificate $secret_dir_path/nginx-ingress-cafe-secret;
    ssl_certificate_key $secret_dir_path/nginx-ingress-cafe-secret;
    set_real_ip_from 0.0.0.0/0;
    real_ip_header proxy_protocol;
    server_tokens "on";
    server_name cafe.example.com;
...

<Connected Test>
k get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
m1 Ready control-plane 6d17h v1.29.15 192.168.200.193 <none> Ubuntu 22.04.4 LTS 5.15.0-141-generic containerd://1.7.27
w1 Ready <none> 6d17h v1.29.15 192.168.40.150 <none> Ubuntu 22.04.4 LTS 5.15.0-141-generic containerd://1.7.27

k get pods
NAME READY STATUS RESTARTS AGE
coffee-6b8b6d6486-khw6l 1/1 Running 0 6d
coffee-6b8b6d6486-xlwhv 1/1 Running 0 6d
nginx-ingress-85954c6b6f-tjrk5 1/1 Running 0 5d19h
tea-9d8868bb4-g4dzk 1/1 Running 0 6d
tea-9d8868bb4-m2r9x 1/1 Running 0 6d
tea-9d8868bb4-ncrjt 1/1 Running 0 6d

k get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-658d97c59c-mlqzp 1/1 Running 0 6d17h
calico-node-7bp9f 1/1 Running 0 6d17h
calico-node-sqcn6 1/1 Running 0 6d17h
coredns-76f75df574-5xm8z 1/1 Running 0 6d17h
coredns-76f75df574-k687p 1/1 Running 0 6d17h
etcd-m1 1/1 Running 3 6d17h
k8s-bigip-ctlr-deployment-78f56d5dc9-77vn6 1/1 Running 0 2d17h
kube-apiserver-m1 1/1 Running 3 6d17h
kube-controller-manager-m1 1/1 Running 1 6d17h
kube-proxy-t6lg6 1/1 Running 0 6d17h
kube-proxy-xvjhd 1/1 Running 0 6d17h
kube-scheduler-m1 1/1 Running 3 6d17h

k get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
cafe-ingress nginx cafe.example.com 192.168.10.5 80, 443 6d

k get ingresslinks.cis.f5.com
NAME IPAMVSADDRESS AGE
nginx-ingress 192.168.10.5 2d17h

k get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
coffee-svc ClusterIP 10.102.72.120 <none> 80/TCP 6d
nginx-ingress NodePort 10.111.109.219 <none> 80:31757/TCP,443:31116/TCP,8081:32198/TCP 6d16h
tea-svc ClusterIP 10.111.194.148 <none> 80/TCP 6d

curl --resolve cafe.example.com:31116:192.168.10.5 https://cafe.example.com:31116/coffee --insecure -v
* Added cafe.example.com:31116:192.168.10.5 to DNS cache
* Hostname cafe.example.com was found in DNS cache
* Trying 192.168.10.5:31116...
* connect to 192.168.10.5 port 31116 failed: Connection refused
* Failed to connect to cafe.example.com port 31116 after 2 ms: Connection refused
* Closing connection 0
curl: (7) Failed to connect to cafe.example.com port 31116 after 2 ms: Connection refused

<TCPDump (nnnp option)>
tcpdump -nni ex_vlan:nnnp | grep 192.168.10.5

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ex_vlan:nnnp, link-type EN10MB (Ethernet), capture size 65535 bytes
08:58:45.539743 IP 192.168.201.133.52056 > 192.168.10.5.443: Flags [S], seq 2579005298, win 64240, options [mss 1460,sackOK,TS val 1186308277 ecr 0,nop,wscale 7], length 0 in slot1/tmm0 lis= port=1.1 trunk= flowtype=0 flowid=0 peerid=0 conflags=0 inslot=63 inport=23 haunit=0 priority=0 peerremote=00000000:00000000:00000000:00000000 peerlocal=00000000:00000000:00000000:00000000 remoteport=0 localport=0 proto=0 vlan=0
08:58:45.539790 IP 192.168.10.5.443 > 192.168.201.133.52056: Flags [S.], seq 402156018, ack 2579005299, win 23360, options [mss 1460,nop,wscale 9,sackOK,TS val 1980236067 ecr 1186308277], length 0 out slot1/tmm0 lis=/cis_partition/Shared/ingress_link_crd_192_168_10_5_443 port=1.1 trunk= flowtype=64 flowid=400001793700 peerid=0 conflags=4000024 inslot=63 inport=23 haunit=1 priority=3 peerremote=00000000:00000000:00000000:00000000 peerlocal=00000000:00000000:00000000:00000000 remoteport=0 localport=0 proto=0 vlan=0
08:58:45.541573 IP 192.168.201.133.52056 > 192.168.10.5.443: Flags [.], ack 1, win 502, options [nop,nop,TS val 1186308280 ecr 1980236067], length 0 in slot1/tmm0 lis=/cis_partition/Shared/ingress_link_crd_192_168_10_5_443 port=1.1 trunk= flowtype=64 flowid=400001793700 peerid=0 conflags=4000024 inslot=63 inport=23 haunit=1 priority=0 peerremote=00000000:00000000:00000000:00000000 peerlocal=00000000:00000000:00000000:00000000 remoteport=0 localport=0 proto=0 vlan=0
08:58:45.542273 IP 192.168.10.5.443 > 192.168.201.133.52056: Flags [R.], seq 1, ack 1, win 0, length 0 out slot1/tmm0 lis=/cis_partition/Shared/ingress_link_crd_192_168_10_5_443 port=1.1 trunk= flowtype=64 flowid=400001793700 peerid=0 conflags=4808024 inslot=63 inport=23 haunit=1 priority=3 rst_cause="[0x30884a4:5322] No route to host" peerremote=00000000:00000000:00000000:00000000 peerlocal=00000000:00000000:00000000:00000000 remoteport=0 localport=0 proto=0 vlan=0
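From what I can tell, the rst_cause in that last packet ("No route to host") normally indicates that the BIG-IP could not reach the server-side destination, which in this setup would be the NodePort endpoints on the worker nodes (192.168.200.193 / 192.168.40.150). A rough sketch of the checks I plan to run on the BIG-IP next (the virtual server name is taken from the tcpdump output above; the partition/folder path may differ in your environment):

# run from the BIG-IP bash shell
tmsh -c "cd /cis_partition/Shared; show ltm virtual ingress_link_crd_192_168_10_5_443"   # virtual server availability and stats
tmsh -c "cd /cis_partition/Shared; show ltm pool"                                        # pool member (NodePort endpoint) status
tmsh list net route                                                                      # routes toward the node IPs
tmsh show net arp                                                                        # ARP entries for the node IPs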