Forum Discussion

Nishal_Rai
Nov 30, 2023

What do "BigIP Internal Module", "N/A", and "Blocked by DoS Layer 7 Enforcer" refer to in HTTP Analytics?

Hello Everyone,

I stumbled upon the terms mentioned below when I exported the PDF from HTTP Analytics:

  • BigIP Internal Module
  • N/A
  • Blocked by DoS Layer 7 Enforcer.

Is anyone familiar with what these refer to in HTTP Analytics on the F5 BIG-IP system? If so, can you please share your insights or experiences?


Regarding the maximum server latency at 19X.XXX.XXX.XXX:443:
As per the F5 article,
Server latency is the time (in ms) from when the BIG-IP system sends the first request byte to the web application server until the BIG-IP system receives the first response byte.

So when the end user sends a request to the web application server (onboarded on the F5 BIG-IP), the request is received by the designated virtual IP hosted on the F5 BIG-IP, and the BIG-IP then forwards it to the real backend node. If so, does this server latency refer to the latency between the actual end user and the server, or just between the F5 BIG-IP and the actual backend node?

And does client network latency mean the client-side RTT mentioned in the F5 article?
If not, what does it refer to?


Thanks.

6 Replies

  • Hi Nishal,

    Please refer to the following article for an explanation of all 62 parameters:

     

    https://my.f5.com/manage/s/article/K34553249

    POOLIP (IP/Textual Dimension): The IP of the pool member that served the transaction. If a transaction was answered by the BIG-IP, or blocked by the BIG-IP before it reached the server, this field instead holds a textual value saying so and naming the module that made the reply. Examples of possible results:
    "BigIP Internal Module"
    "Response by RAM CACHE"
    "Blocked by DoS Layer 7 Enforcer"
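
A minimal sketch (my own illustration, not an F5 API) of how you could handle this dual-typed dimension when parsing exported analytics data: if the POOLIP value parses as an IP address, a real pool member served the transaction; otherwise the value is one of the textual statuses above.

```python
# Hedged sketch: distinguishing a real pool-member IP from the textual
# statuses the POOLIP dimension carries when the BIG-IP itself answered
# or blocked the transaction. The addresses are made-up examples.
import ipaddress

def classify_poolip(value: str) -> str:
    """Return 'pool-member' if value parses as an IP, else return the textual status."""
    try:
        ipaddress.ip_address(value)
        return "pool-member"
    except ValueError:
        # e.g. "BigIP Internal Module", "Blocked by DoS Layer 7 Enforcer"
        return value

assert classify_poolip("10.10.20.5") == "pool-member"
assert classify_poolip("BigIP Internal Module") == "BigIP Internal Module"
```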

     

    IsInternalActivity (Boolean Dimension): Values can be 0 or 1. Internal activity means a transaction that was generated by JavaScript injected by the BIG-IP. It is useful for differentiating between traffic that the client asked to perform and traffic generated by the BIG-IP's own tools.

     

     HTH

    🙏

    • Nishal_Rai

      Hello,

      Thank you for sharing the resource detailing the HTTP analytics profile for F5 BIG-IP.


      I'm still a bit unclear regarding the distinction between server latency and application response time metrics.

      From my understanding, in the case of the virtual server filter, the server latency metric measures the delay between the real end user and the F5-hosted virtual server IP address. Is this accurate?


      Regarding the BIG-IP pool member, I assume the server latency measures the delay between the F5 self IP address and the designated backend node (server). Is my understanding correct here?


      The confusion persists similarly with application response time. Could you clarify these distinctions?

      So, to measure the actual server latency and application response time, which metrics should I consider: the virtual server's or the pool member's?

      Additionally, I encountered "N/A" while applying the filter and exporting the PDF of the HTTP analytics. It appears to share most metrics with the data mentioned earlier. Could you shed some light on this?





  • Please find an explanation of all the metrics you are seeing in the BIG-IQ PDF reports to get a clearer understanding:

    https://techdocs.f5.com/en-us/bigiq-8-0-0/monitoring-managing-applications-using-big-iq/monitoring-application-services/identify-app-issues-require-mitigation-appadmin/http-metrics-index.html

     

     
    HTTP Metric
    HTTP metrics reflect the quantity, volume and speed of the HTTP traffic processed by your managed BIG-IP systems. Metric sets categorize the metric data according to an aspect of the traffic's progress throughout the transaction process. The table below defines the metric set and the kind of metric data collected.
     
    Transactions: Each initiated request between the client and the BIG-IP system, regardless of the outcome.
      • Avg/s: The average number of transactions per second processed by the BIG-IP system.
      • Total: The total number of transactions processed by the BIG-IP system.

    Request Volume: The volume (in bytes) of a request that is processed by the BIG-IP system.
      • Avg Size: The average number of bytes sent per transaction request.
      • Throughput: The average rate of bytes per second sent in transaction requests.
      • Volume: The total number of bytes sent in all transaction requests.

    Response Volume: The volume (in bytes) of a response that is processed by the BIG-IP system.
      • Avg Size: The average number of bytes sent per transaction response.
      • Throughput: The average rate of bytes per second sent in transaction responses.
      • Volume: The total number of bytes sent in all transaction responses.

    Server Latency: The time (in ms) from when the BIG-IP system sends the first request byte to the web application server until the BIG-IP system receives the first response byte.
      • Avg: The average server latency observed by the system.
      • Trans Count: The total number of transactions processed by the BIG-IP system.
      • Max: The highest server latency observed by the system.

    Page Load Time: The time (in ms) from when the client sends the first byte of a request until the last byte of the response is received by the client; that is, how long it takes from the time an end user requests a web page until the page from the application server finishes loading in the client-side browser.
      • Trans Count: The number of client responses from the system that include page load time information.
      • Max: The longest page load time observed by the system.
      • Avg: The average page load time observed by the system.

    Application Response Time: The time (in ms) from when the server receives the first request byte from the BIG-IP system until the server sends the first byte of the response.
      • Avg: The average application response time observed by the system.
      • Min: The shortest application response time observed by the system.
      • StdDev: The standard deviation (in ms) of all application response times observed by the system.
      • Trans Count: The number of application response times observed by the system.
      • Max: The longest application response time observed by the system.

    E2E Time: The time (in ms) from when the client sends the first packet of a request until the client receives the last packet of the response.
      • Max: The longest client end-to-end time observed by the system.
      • Min: The shortest client end-to-end time observed by the system.
      • StdDev: The standard deviation (in ms) for all observed client end-to-end times.
      • Trans Count: The number of client responses that include client end-to-end time information.
      • Avg: The average client end-to-end time for all observed transactions.

    Client Side RTT: Client-side round trip time (RTT) is the time (in ms) observed from when the first byte of a client request is received by the BIG-IP system until the first byte of a response is sent from the BIG-IP system to the client; in other words, client time to first byte, not including the request duration.
      • StdDev: The standard deviation (in ms) for all observed client-side RTTs.
      • Min: The shortest client-side RTT for all observed transactions.
      • Max: The longest client-side RTT for all observed transactions.
      • Avg: The average client-side RTT for all observed transactions.

    Server Side RTT: Server-side round trip time (RTT) is the sum of the times (in ms) observed from when the server receives the first request byte from the BIG-IP system and from when the BIG-IP receives the first byte of the response from the server; equivalently, the time from when the BIG-IP system sends the first request byte until it receives the first response byte, not including application response time.
      • Trans Count: The number of server responses to the system that include RTT information.
      • StdDev: The standard deviation (in ms) for all observed server-side RTTs.
      • Avg: The average server-side RTT for all observed transactions.
      • Max: The longest server-side RTT observed by the system.
      • Min: The shortest server-side RTT observed by the system.

    Request Duration: The time (in ms) it takes the BIG-IP system to send the first byte through the last byte of a request to the server.
      • Max: The longest request duration observed by the system.
      • Trans Count: The number of requests observed by the system.
      • StdDev: The standard deviation (in ms) of request duration for all observed requests.
      • Avg: The average request duration for all observed requests.
      • Min: The shortest request duration observed by the system.

    Responses Duration: The time (in ms) it takes the BIG-IP system to send the first byte through the last byte of a response to the client.
      • Trans Count: The number of responses observed by the system.
      • Avg: The average response duration for all observed responses.
      • Max: The longest response duration observed by the system.
      • Min: The shortest response duration observed by the system.
      • StdDev: The standard deviation (in ms) of response duration for all observed responses.
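
A minimal sketch (my own illustration with hypothetical timestamps, not F5 code) of how three of these metrics relate on a single transaction timeline: server latency, as measured by the BIG-IP, decomposes into server-side network RTT plus the application response time measured at the server.

```python
# Hypothetical timeline of one transaction, in milliseconds.
t_bigip_sends_first_req_byte   = 0.0
t_server_gets_first_req_byte   = 2.0   # after server-side network transit
t_server_sends_first_resp_byte = 42.0  # the application "thinks" for 40 ms
t_bigip_gets_first_resp_byte   = 44.0  # after return network transit

# Server latency: BIG-IP sends first request byte -> receives first response byte.
server_latency = t_bigip_gets_first_resp_byte - t_bigip_sends_first_req_byte    # 44.0

# Application response time: measured at the server itself.
app_response_time = t_server_sends_first_resp_byte - t_server_gets_first_req_byte  # 40.0

# Server-side RTT: the network-only portion of server latency.
server_side_rtt = server_latency - app_response_time                             # 4.0

assert server_latency == server_side_rtt + app_response_time
```

This is why server latency is always a BIG-IP-to-backend-node measurement: every timestamp involved sits on the server side of the proxy, and the client never appears in it.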
     
     

     
     
    HTH
     
    🙏
     
     
  • I appreciate your insight and efforts on this matter. However, I've already visited the mentioned documentation; I also referenced it in my first post.


    But, I find myself a bit puzzled about the server latency metrics, particularly those related to virtual servers and pool members as mentioned in my last post.




    I noticed there are metrics labeled as client network latency and server network latency within both the virtual server and pool member filter types. I'm struggling a bit to discern their exact meaning and how they differ.


    Could you offer some guidance or explanation to help clarify these metrics? Your input would be greatly appreciated!

  • Client side = traffic initiated by the user's browser toward the F5 VIP (also called a listener, and always on an external interface of the load balancer). This is the first set of three-way TCP handshake packets. The browser thinks the F5 is the real server and requests a response from it; the client cannot see that the F5 is just a proxy that is not a pool member and does not host the application.

    Server side = traffic initiated on behalf of the actual user from the F5 internal self IP, traversing the F5 internal interface to the backend pool members. This is the second set of three-way TCP handshake packets. The pool member thinks the F5 is the original requester; it never sees the user's browser, and it returns the response packet to the F5 internal interface rather than directly to the original requester. If it did try to respond directly to the user, the likely reason is that no SNAT is happening, which causes asymmetric routing; any stateful firewall in the traffic path will then drop the packets, because it never saw the initial packets of the connection in its connection table. The pool member simply cannot tell that the F5 is a proxy requesting packets on behalf of someone else.
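
The two-connection full-proxy behavior described above can be sketched as follows (my own illustration with made-up addresses, not F5 code): the client-side five-tuple never changes, while the server-side source address depends on whether SNAT is enabled.

```python
# Hedged sketch: the two TCP connections a full proxy maintains, and how
# SNAT changes the source address the pool member sees. All IPs are
# made-up examples.

def flows(snat_enabled: bool):
    client, vip = "10.1.1.50", "203.0.113.10"        # client-side connection
    self_ip, pool_member = "10.2.2.5", "10.2.2.100"  # server-side connection

    client_side = (client, vip)  # handshake #1: client <-> virtual server
    # Handshake #2: BIG-IP <-> pool member; the source depends on SNAT.
    src = self_ip if snat_enabled else client
    server_side = (src, pool_member)
    return client_side, server_side

# With SNAT, the pool member replies to the BIG-IP self IP (symmetric path).
assert flows(True)[1] == ("10.2.2.5", "10.2.2.100")
# Without SNAT, it sees the real client IP; unless the BIG-IP is its default
# gateway, the reply bypasses the proxy, causing asymmetric routing.
assert flows(False)[1] == ("10.1.1.50", "10.2.2.100")
```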

     

     

  • Hello,

    Sorry for the late response.


    So what is the difference between the virtual server and the pool member with respect to client- and server-side traffic? They do have approximately identical values on the metrics, so which one should I consider the actual metric value?



    As per my understanding, the actual end-user request is accepted by the designated virtual server of the F5 BIG-IP, so client-side traffic makes better sense for this.

    Once the request is validated, the virtual server directs it to its designated pool member, where the F5 BIG-IP initiates the request to the server on behalf of the client, so server-side traffic fits better there.


    Is this correct? And if so, what is the actual difference between client and server network latency between the virtual server and the pool member in the metric data above?