
Forum Discussion

Reqbaln
May 24, 2023

gRPC through F5 Virtual Server [RST_STREAM with error code: INTERNAL_ERROR]

Hello everyone. 

We have a gRPC service running in a K8s cluster, and it's reachable through the NGINX ingress from inside the cluster.

We need to access the gRPC service from outside the cluster through an F5 virtual server, which we've configured as described in this guide: https://my.f5.com/manage/s/article/K08041451

So the traffic route should be: External Client (gRPC) -> F5 Virtual Server (gRPC) -> NGINX ingress running in the K8s cluster (gRPC) -> gRPC Server. However, this route doesn't work through the VIP; we get this error:

Error message: Failed to list services: rpc error: code = Internal desc = stream terminated by RST_STREAM with error code: INTERNAL_ERROR

Please note that this traffic route is working as expected: Internal Client (gRPC) -> NGINX ingress running in the K8s cluster (gRPC) -> gRPC Server.
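To narrow down where the reset is injected, running the same listing call against each hop can help. A minimal sketch with grpcurl (the hostnames below are placeholders, not our real addresses):

```shell
# Hypothetical hostnames; substitute the real ingress and VIP addresses.
# 1) Direct to the NGINX ingress from inside the cluster (the known-good path):
grpcurl -insecure nginx-ingress.internal:443 list
# 2) Through the F5 virtual server (the failing path):
grpcurl -insecure grpc-vip.example.com:443 list
# If (1) lists services and (2) returns the RST_STREAM error, the reset is
# being introduced on the BIG-IP hop rather than by the ingress or the server.
```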

 

What could be the issue here? 

 

Thanks!

 

4 Replies

  • Hi Reqbaln,

      I don't have experience with this setup, but have you taken a tcpdump on the BIG-IP to capture the transaction? You can enable some flags in tcpdump to include the TLS keys in the capture, so it's easily decrypted in Wireshark. I wrote a Python script that takes the capture, downloads it, and decrypts it for you in these situations.

    On the solution info, do you have a confirmed working monitor with the nginx ingress as a pool member, and do you get successful feedback when running this from the BIG-IP command line?

    nghttp -ynv https://<server_IP_address>:<port number>
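    For reference, a hedged sketch of such a capture on the BIG-IP (the VIP and filename are placeholders, and the SSL provider option assumes a reasonably recent TMOS version):

```shell
# Assumption: the tcpdump.sslprovider db key and the --f5 ssl provider
# exist on your TMOS version. Enable embedding of TLS session secrets:
tmsh modify sys db tcpdump.sslprovider value enable
# Capture the VIP traffic on all VLANs, full packets, with secrets embedded,
# so Wireshark can decrypt the HTTP/2 frames:
tcpdump -ni 0.0:nnnp -s0 --f5 ssl -w /var/tmp/grpc_debug.pcap host <VIP_address> and port 443
```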
  • JRahm 

    Thank you for the response.

    Here's what I got when I ran nghttp:

    nghttp -ynv https://thanos-query-k8s-web-pp.com
    [ 0.012] Connected
    The negotiated protocol: h2
    [ 0.079] recv SETTINGS frame <length=18, flags=0x00, stream_id=0>
    (niv=3)
    [SETTINGS_MAX_CONCURRENT_STREAMS(0x03):10]
    [SETTINGS_INITIAL_WINDOW_SIZE(0x04):32768]
    [SETTINGS_MAX_HEADER_LIST_SIZE(0x06):32768]
    [ 0.079] send SETTINGS frame <length=12, flags=0x00, stream_id=0>
    (niv=2)
    [SETTINGS_MAX_CONCURRENT_STREAMS(0x03):100]
    [SETTINGS_INITIAL_WINDOW_SIZE(0x04):65535]
    [ 0.079] send SETTINGS frame <length=0, flags=0x01, stream_id=0>
    ; ACK
    (niv=0)
    [ 0.079] send PRIORITY frame <length=5, flags=0x00, stream_id=3>
    (dep_stream_id=0, weight=201, exclusive=0)
    [ 0.079] send PRIORITY frame <length=5, flags=0x00, stream_id=5>
    (dep_stream_id=0, weight=101, exclusive=0)
    [ 0.079] send PRIORITY frame <length=5, flags=0x00, stream_id=7>
    (dep_stream_id=0, weight=1, exclusive=0)
    [ 0.079] send PRIORITY frame <length=5, flags=0x00, stream_id=9>
    (dep_stream_id=7, weight=1, exclusive=0)
    [ 0.079] send PRIORITY frame <length=5, flags=0x00, stream_id=11>
    (dep_stream_id=3, weight=1, exclusive=0)
    [ 0.079] send HEADERS frame <length=52, flags=0x25, stream_id=13>
    ; END_STREAM | END_HEADERS | PRIORITY
    (padlen=0, dep_stream_id=11, weight=16, exclusive=0)
    ; Open new stream
    :method: GET
    :path: /
    :scheme: https
    :authority: thanos-query-k8s-web-pp.com
    accept: */*
    accept-encoding: gzip, deflate
    user-agent: nghttp2/1.40.0
    [ 0.090] recv SETTINGS frame <length=0, flags=0x01, stream_id=0>
    ; ACK
    (niv=0)
    [ 4.092] recv RST_STREAM frame <length=4, flags=0x00, stream_id=13>
    (error_code=INTERNAL_ERROR(0x02))
    [ 4.092] send GOAWAY frame <length=8, flags=0x00, stream_id=0>
    (last_stream_id=0, error_code=NO_ERROR(0x00), opaque_data(0)=[])
    Some requests were not processed. total=1, processed=0
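
    If I read this right, h2 is negotiated and SETTINGS are exchanged with the VIP, and the RST_STREAM only arrives about 4 seconds later, so the client-side HTTP/2 handshake itself looks fine. One more thing worth checking (a sketch; the pool-member hostname is a placeholder) is whether the pool member also negotiates h2, since the BIG-IP needs HTTP/2 on the server side too:

```shell
# Confirm the pool member (the NGINX ingress) offers h2 via ALPN;
# hostname and port are placeholders for the real pool member.
# If h2 is negotiated, the output includes a line such as "ALPN protocol: h2".
openssl s_client -alpn h2 -connect nginx-ingress.internal:443 </dev/null 2>/dev/null | grep -i "ALPN"
```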

     

    For the tcpdump, unfortunately I don't have access to the virtual server right now; I'll check it once I do.

    Thanks!

  • Reqbaln - If your post was solved it would be helpful to the community to select *Accept As Solution*.
    This helps future readers find answers more quickly and confirms the efforts of those who helped.
    Thanks for being part of our community.
    Lief

  • Similar to what you saw, I am sending plaintext gRPC through an mTLS tunnel between the F5 BIG-IP and a Linkerd mesh proxy, and I see similar errors in the Linkerd logs. Other than using Calico or Cilium and advertising the service IP for the NGINX ingress so that the F5 BIG-IP is not in the path, I see no other way around it.

     

    I see similar errors when I send traffic through the F5 to a plaintext gRPC pod, so from what I see decryption isn't even needed to spot the issue: the F5 just sends GOAWAY before the server responds.

     

    Strangely, if I send traffic from the F5 to port 9001 (gRPC with TLS), I see no issues. So for me the problem is with plaintext gRPC, even when it is inside a TLS tunnel (to an ingress) or an mTLS tunnel (to a Linkerd-enabled pod). Interesting stuff.

     

     

    This tool can be used for testing plaintext or insecure (TLS):

     

    GitHub - fullstorydev/grpcurl: Like cURL, but for gRPC: a command-line tool for interacting with gRPC servers

     

    grpcurl -insecure X.X.X.X:X list

     

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: grpcbin-plain
      namespace: test
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: grpcbin-plain
      template:
        metadata:
          labels:
            app: grpcbin-plain
        spec:
          nodeSelector:
            kubernetes.io/hostname: worker-1
          containers:
            - name: grpcbin
              image: moul/grpcbin
              ports:
                - containerPort: 9001
                - containerPort: 9000
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: grpcbin-plain
      namespace: test
    spec:
      type: NodePort
      selector:
        app: grpcbin-plain
      ports:
        - name: grpcs
          port: 9001
          targetPort: 9001
        - name: grpc
          port: 9000
          targetPort: 9000
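
    With the manifest above, the two ports can be exercised separately. A sketch (node hostname and NodePort values are placeholders, since the service does not pin nodePorts):

```shell
# Placeholders: replace worker-1 and the ports with the allocated NodePorts.
# Plaintext gRPC (service port 9000):
grpcurl -plaintext worker-1:<nodePort_9000> list
# gRPC over TLS with certificate verification disabled (service port 9001):
grpcurl -insecure worker-1:<nodePort_9001> list
```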

     

     

    Linkerd logs:

     

    [ 62.724921s] DEBUG ThreadId(01) inbound:accept{client.addr=192.168.1.5:32228 server.addr=10.10.226.120:9000}:server{port=9000}:http: linkerd_app_inbound::policy::http: Request authorized server.group=policy.linkerd.io server.kind=server server.name=my-server route.group= route.kind=default route.name=default authz.group=policy.linkerd.io authz.kind=serverauthorization authz.name=allow-f5-and-authenticated client.tls=Some(Established { client_id: Some(ClientId(Name("bigip.test.serviceaccount.identity.linkerd.cluster.local"))), negotiated_protocol: None }) client.ip=192.168.1.5
    
    [ 62.724966s] DEBUG ThreadId(01) inbound:accept{client.addr=192.168.1.5:32228 server.addr=10.10.226.120:9000}:server{port=9000}:http:http: tower::buffer::worker: service.ready=true processing request
    
    [ 62.724972s] DEBUG ThreadId(01) inbound:accept{client.addr=192.168.1.5:32228 server.addr=10.10.226.120:9000}:server{port=9000}:http:http:profile:h2: linkerd_proxy_http::client: method=POST uri=https://192.168.1.55:443/grpc.reflection.v1.ServerReflection/ServerReflectionInfo version=HTTP/2.0
    
    [ 62.724976s] DEBUG ThreadId(01) inbound:accept{client.addr=192.168.1.5:32228 server.addr=10.10.226.120:9000}:server{port=9000}:http:http:profile:h2: linkerd_proxy_http::client: headers={"content-type": "application/grpc", "user-agent": "grpcurl/v1.9.3 grpc-go/1.61.0", "te": "trailers", "grpc-accept-encoding": "gzip", "l5d-client-id": "bigip.test.serviceaccount.identity.linkerd.cluster.local"}
    
    [ 62.725014s] DEBUG ThreadId(01) inbound:accept{client.addr=192.168.1.5:32228 server.addr=10.10.226.120:9000}:server{port=9000}:http:http:profile:h2:Connection{peer=Client}: h2::codec::framed_write: send frame=Headers { stream_id: StreamId(5), flags: (0x4: END_HEADERS) }
    
    [ 62.725026s] DEBUG ThreadId(01) inbound:accept{client.addr=192.168.1.5:32228 server.addr=10.10.226.120:9000}:server{port=9000}:http:http:profile:h2:Connection{peer=Client}: h2::codec::framed_write: send frame=Data { stream_id: StreamId(5) }
    
    [ 62.725253s] DEBUG ThreadId(01) inbound:accept{client.addr=192.168.1.5:32228 server.addr=10.10.226.120:9000}:server{port=9000}:http:http:profile:h2:Connection{peer=Client}: h2::codec::framed_read: received frame=WindowUpdate { stream_id: StreamId(0), size_increment: 8 }
    
    [ 62.725261s] DEBUG ThreadId(01) inbound:accept{client.addr=192.168.1.5:32228 server.addr=10.10.226.120:9000}:server{port=9000}:http:http:profile:h2:Connection{peer=Client}: h2::codec::framed_read: received frame=Ping { ack: false, payload: [2, 4, 16, 16, 9, 14, 7, 7] }
    
    [ 62.725265s] DEBUG ThreadId(01) inbound:accept{client.addr=192.168.1.5:32228 server.addr=10.10.226.120:9000}:server{port=9000}:http:http:profile:h2:Connection{peer=Client}: h2::codec::framed_write: send frame=Ping { ack: true, payload: [2, 4, 16, 16, 9, 14, 7, 7] }
    
    [ 62.725271s] DEBUG ThreadId(01) inbound:accept{client.addr=192.168.1.5:32228 server.addr=10.10.226.120:9000}:server{port=9000}:http:http:profile:h2:Connection{peer=Client}: h2::codec::framed_read: received frame=Headers { stream_id: StreamId(5), flags: (0x5: END_HEADERS | END_STREAM) }
    
    [ 62.725275s] DEBUG ThreadId(01) inbound:accept{client.addr=192.168.1.5:32228 server.addr=10.10.226.120:9000}:server{port=9000}:http:http:profile:h2:Connection{peer=Client}: h2::codec::framed_read: received frame=Reset { stream_id: StreamId(5), error_code: NO_ERROR }
    
    [ 62.725297s] DEBUG ThreadId(01) inbound:accept{client.addr=192.168.1.5:32228 server.addr=10.10.226.120:9000}:server{port=9000}:http:http:profile:h2: hyper::proto::h2::client: client request body error: error writing a body to connection: send stream capacity unexpectedly closed
    
    [ 62.725327s] DEBUG ThreadId(01) inbound:accept{client.addr=192.168.1.5:32228 server.addr=10.10.226.120:9000}:server{port=9000}:http:Connection{peer=Server}: h2::codec::framed_write: send frame=Headers { stream_id: StreamId(5), flags: (0x5: END_HEADERS | END_STREAM) }
    
    [ 62.725334s] DEBUG ThreadId(01) inbound:accept{client.addr=192.168.1.5:32228 server.addr=10.10.226.120:9000}:server{port=9000}:http:Connection{peer=Server}: h2::codec::framed_write: send frame=Reset { stream_id: StreamId(5), error_code: NO_ERROR }
    
    [ 62.729421s] DEBUG ThreadId(01) inbound:accept{client.addr=192.168.1.5:32228 server.addr=10.10.226.120:9000}:server{port=9000}:http:Connection{peer=Server}: h2::codec::framed_read: received frame=GoAway { error_code: NO_ERROR, last_stream_id: StreamId(0) }
    
    [ 62.729457s] DEBUG ThreadId(01) inbound:accept{client.addr=192.168.1.5:32228 server.addr=10.10.226.120:9000}:server{port=9000}:http:Connection{peer=Server}: h2::proto::connection: Connection::poll; IO error error=UnexpectedEof
    
    [ 62.729467s] DEBUG ThreadId(01) inbound:accept{client.addr=192.168.1.5:32228 server.addr=10.10.226.120:9000}:server{port=9000}:http: linkerd_proxy_http::server: The client is shutting down the connection res=Err(hyper::Error(Io, Kind(UnexpectedEof)))
    
    [ 62.729700s] DEBUG ThreadId(01) inbound:accept{client.addr=192.168.1.5:32228 server.addr=10.10.226.120:9000}: linkerd_app_core::serve: Connection closed reason=connection error: unexpected end of file reason.sources=[unexpected end of file] client.addr=192.168.1.5:32228 server.addr=10.10.226.120:9000
    
    [ 152.726774s] DEBUG ThreadId(01) inbound:accept{client.addr=192.168.1.5:32228 server.addr=10.10.226.120:9000}:server{port=9000}:http:http:profile:h2:Connection{peer=Client}: h2::codec::framed_write: send frame=GoAway { error_code: NO_ERROR, last_stream_id: StreamId(0) }
    
    [ 152.726795s] DEBUG ThreadId(01) inbound:accept{client.addr=192.168.1.5:32228 server.addr=10.10.226.120:9000}:server{port=9000}:http:http:profile:h2:Connection{peer=Client}: h2::proto::connection: Connection::poll; connection error error=GoAway(b"", NO_ERROR, Library)

     

     

    Right at the end, the F5 behaves like the direct connection but then sends GOAWAY.

     

     

     

    JRahm, I tagged you in case you are still interested.