Forum Discussion
GRPC through F5 Virtual Server [RST_STREAM with error code: INTERNAL_ERROR]
Similar to what you saw, I am sending plaintext gRPC through an mTLS tunnel between the F5 BIG-IP and the Linkerd mesh proxy, and I see similar errors in the Linkerd logs. Other than using Calico or Cilium and advertising the service IP of the NGINX Ingress so that the F5 BIG-IP is not in the path, I see no way around it.
I see similar errors when I send traffic through the F5 to a plaintext gRPC pod, so decryption is not even needed to reproduce the issue: from what I see, the F5 just sends GOAWAY before the server responds.
Strangely, if I send traffic from the F5 to port 9001 (gRPC with TLS), I see no issues. So for me the problem is specific to plaintext gRPC, even when it runs inside a TLS tunnel (to an ingress) or an mTLS tunnel (to a Linkerd-enabled pod). Interesting stuff.
This tool can be used for testing plaintext or insecure (TLS) gRPC:

grpcurl -insecure X.X.X.X:X list
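For reference, grpcurl uses different flags for the two modes, which matters here since the grpcbin image below serves plaintext on 9000 and TLS on 9001. A minimal sketch, assuming the service is reachable at a node IP (the address and NodePort values are placeholders):

```shell
# Plaintext gRPC (h2c) on port 9000 -- no TLS handshake at all
grpcurl -plaintext <node-ip>:<nodeport-for-9000> list

# TLS gRPC on port 9001 -- TLS handshake, but skip certificate verification
grpcurl -insecure <node-ip>:<nodeport-for-9001> list
```

Using `-plaintext` against the TLS port (or vice versa) fails immediately, which makes it easy to confirm which mode each port is actually speaking before involving the F5.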
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grpcbin-plain
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grpcbin-plain
  template:
    metadata:
      labels:
        app: grpcbin-plain
    spec:
      nodeSelector:
        kubernetes.io/hostname: worker-1
      containers:
      - name: grpcbin
        image: moul/grpcbin
        ports:
        - containerPort: 9001
        - containerPort: 9000
---
apiVersion: v1
kind: Service
metadata:
  name: grpcbin-plain
  namespace: test
spec:
  type: NodePort
  selector:
    app: grpcbin-plain
  ports:
  - name: grpcs
    port: 9001
    targetPort: 9001
  - name: grpc
    port: 9000
    targetPort: 9000
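A quick way to deploy the manifest above and discover the NodePorts the F5 virtual server's pool members should target (the filename is an assumption; use whatever you saved the manifest as):

```shell
# Apply the Deployment and Service
kubectl apply -f grpcbin-plain.yaml

# Show the allocated NodePorts for ports 9000 (grpc) and 9001 (grpcs)
kubectl -n test get svc grpcbin-plain
```

The PORT(S) column (e.g. `9001:3xxxx/TCP,9000:3xxxx/TCP`) gives the node ports to point the BIG-IP at on worker-1.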
Linkerd logs:
[ 62.724921s] DEBUG ThreadId(01) inbound:accept{client.addr=192.168.1.5:32228 server.addr=10.10.226.120:9000}:server{port=9000}:http: linkerd_app_inbound::policy::http: Request authorized server.group=policy.linkerd.io server.kind=server server.name=my-server route.group= route.kind=default route.name=default authz.group=policy.linkerd.io authz.kind=serverauthorization authz.name=allow-f5-and-authenticated client.tls=Some(Established { client_id: Some(ClientId(Name("bigip.test.serviceaccount.identity.linkerd.cluster.local"))), negotiated_protocol: None }) client.ip=192.168.1.5
[ 62.724966s] DEBUG ThreadId(01) inbound:accept{client.addr=192.168.1.5:32228 server.addr=10.10.226.120:9000}:server{port=9000}:http:http: tower::buffer::worker: service.ready=true processing request
[ 62.724972s] DEBUG ThreadId(01) inbound:accept{client.addr=192.168.1.5:32228 server.addr=10.10.226.120:9000}:server{port=9000}:http:http:profile:h2: linkerd_proxy_http::client: method=POST uri=https://192.168.1.55:443/grpc.reflection.v1.ServerReflection/ServerReflectionInfo version=HTTP/2.0
[ 62.724976s] DEBUG ThreadId(01) inbound:accept{client.addr=192.168.1.5:32228 server.addr=10.10.226.120:9000}:server{port=9000}:http:http:profile:h2: linkerd_proxy_http::client: headers={"content-type": "application/grpc", "user-agent": "grpcurl/v1.9.3 grpc-go/1.61.0", "te": "trailers", "grpc-accept-encoding": "gzip", "l5d-client-id": "bigip.test.serviceaccount.identity.linkerd.cluster.local"}
[ 62.725014s] DEBUG ThreadId(01) inbound:accept{client.addr=192.168.1.5:32228 server.addr=10.10.226.120:9000}:server{port=9000}:http:http:profile:h2:Connection{peer=Client}: h2::codec::framed_write: send frame=Headers { stream_id: StreamId(5), flags: (0x4: END_HEADERS) }
[ 62.725026s] DEBUG ThreadId(01) inbound:accept{client.addr=192.168.1.5:32228 server.addr=10.10.226.120:9000}:server{port=9000}:http:http:profile:h2:Connection{peer=Client}: h2::codec::framed_write: send frame=Data { stream_id: StreamId(5) }
[ 62.725253s] DEBUG ThreadId(01) inbound:accept{client.addr=192.168.1.5:32228 server.addr=10.10.226.120:9000}:server{port=9000}:http:http:profile:h2:Connection{peer=Client}: h2::codec::framed_read: received frame=WindowUpdate { stream_id: StreamId(0), size_increment: 8 }
[ 62.725261s] DEBUG ThreadId(01) inbound:accept{client.addr=192.168.1.5:32228 server.addr=10.10.226.120:9000}:server{port=9000}:http:http:profile:h2:Connection{peer=Client}: h2::codec::framed_read: received frame=Ping { ack: false, payload: [2, 4, 16, 16, 9, 14, 7, 7] }
[ 62.725265s] DEBUG ThreadId(01) inbound:accept{client.addr=192.168.1.5:32228 server.addr=10.10.226.120:9000}:server{port=9000}:http:http:profile:h2:Connection{peer=Client}: h2::codec::framed_write: send frame=Ping { ack: true, payload: [2, 4, 16, 16, 9, 14, 7, 7] }
[ 62.725271s] DEBUG ThreadId(01) inbound:accept{client.addr=192.168.1.5:32228 server.addr=10.10.226.120:9000}:server{port=9000}:http:http:profile:h2:Connection{peer=Client}: h2::codec::framed_read: received frame=Headers { stream_id: StreamId(5), flags: (0x5: END_HEADERS | END_STREAM) }
[ 62.725275s] DEBUG ThreadId(01) inbound:accept{client.addr=192.168.1.5:32228 server.addr=10.10.226.120:9000}:server{port=9000}:http:http:profile:h2:Connection{peer=Client}: h2::codec::framed_read: received frame=Reset { stream_id: StreamId(5), error_code: NO_ERROR }
[ 62.725297s] DEBUG ThreadId(01) inbound:accept{client.addr=192.168.1.5:32228 server.addr=10.10.226.120:9000}:server{port=9000}:http:http:profile:h2: hyper::proto::h2::client: client request body error: error writing a body to connection: send stream capacity unexpectedly closed
[ 62.725327s] DEBUG ThreadId(01) inbound:accept{client.addr=192.168.1.5:32228 server.addr=10.10.226.120:9000}:server{port=9000}:http:Connection{peer=Server}: h2::codec::framed_write: send frame=Headers { stream_id: StreamId(5), flags: (0x5: END_HEADERS | END_STREAM) }
[ 62.725334s] DEBUG ThreadId(01) inbound:accept{client.addr=192.168.1.5:32228 server.addr=10.10.226.120:9000}:server{port=9000}:http:Connection{peer=Server}: h2::codec::framed_write: send frame=Reset { stream_id: StreamId(5), error_code: NO_ERROR }
[ 62.729421s] DEBUG ThreadId(01) inbound:accept{client.addr=192.168.1.5:32228 server.addr=10.10.226.120:9000}:server{port=9000}:http:Connection{peer=Server}: h2::codec::framed_read: received frame=GoAway { error_code: NO_ERROR, last_stream_id: StreamId(0) }
[ 62.729457s] DEBUG ThreadId(01) inbound:accept{client.addr=192.168.1.5:32228 server.addr=10.10.226.120:9000}:server{port=9000}:http:Connection{peer=Server}: h2::proto::connection: Connection::poll; IO error error=UnexpectedEof
[ 62.729467s] DEBUG ThreadId(01) inbound:accept{client.addr=192.168.1.5:32228 server.addr=10.10.226.120:9000}:server{port=9000}:http: linkerd_proxy_http::server: The client is shutting down the connection res=Err(hyper::Error(Io, Kind(UnexpectedEof)))
[ 62.729700s] DEBUG ThreadId(01) inbound:accept{client.addr=192.168.1.5:32228 server.addr=10.10.226.120:9000}: linkerd_app_core::serve: Connection closed reason=connection error: unexpected end of file reason.sources=[unexpected end of file] client.addr=192.168.1.5:32228 server.addr=10.10.226.120:9000
[ 152.726774s] DEBUG ThreadId(01) inbound:accept{client.addr=192.168.1.5:32228 server.addr=10.10.226.120:9000}:server{port=9000}:http:http:profile:h2:Connection{peer=Client}: h2::codec::framed_write: send frame=GoAway { error_code: NO_ERROR, last_stream_id: StreamId(0) }
[ 152.726795s] DEBUG ThreadId(01) inbound:accept{client.addr=192.168.1.5:32228 server.addr=10.10.226.120:9000}:server{port=9000}:http:http:profile:h2:Connection{peer=Client}: h2::proto::connection: Connection::poll; connection error error=GoAway(b"", NO_ERROR, Library)
Right at the end the F5 behaves like a direct connection would, but then sends GOAWAY.
JRahm, if you are still interested, I tagged you.