Forum Discussion

Andreas_Lamprec
Nimbostratus
Nov 27, 2012

Analytics module (Application Visibility and Reporting) historical data

Hello!

 

I've enabled the AVR module with "minimal" resource provisioning on an LTM VE, version 11.2.1.
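
In case it matters: the provisioning level can be changed either in the GUI or with a tmsh command along these lines (shown from memory, so treat it as an illustration rather than confirmed syntax):

    tmsh modify sys provision avr level minimal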

 

It seems there is very limited storage of historical data. For the last hour, the graphs show data in 5-minute intervals. When you go back further than one hour, there is only one sample per hour, and if you go back even further, LTM stores only one sample per day.

 

It would be very helpful to have finer-grained data for the last week or even the last month, but I could not find any setting at the profile level, or any system setting, that influences how the AVR module stores historical data.

 

Does anybody know whether this can be changed and, if so, where?

 

Greetings!

 

  • Hi again!

     

     

    I have now provisioned the AVR module as "nominal", but that did not change the behaviour.

     

    It has been running in that mode for about 12 hours now, so any change in behaviour should be visible by this point.

     

     

    It seems AVR retains data at the finer granularity only for the last two hours. If you extend the display beyond that, you get only one data point per hour.

     

     

    Here's an example where I exported the data as CSV:

    time,158.226.1.11:80,158.226.1.21:80,158.226.1.13:80,158.226.1.23:80,158.226.1.12:80,158.226.1.22:81,158.226.1.22:80,158.226.1.12:81,Total
    11/29/12 06:30,0,0,0,0,0,0,0,0,0
    11/29/12 06:35,0,0,0,0,0,0,0,0,0
    11/29/12 06:40,0,0,0,0,0,0,0,0,0
    11/29/12 06:45,0,0,0,0,0,0,0,0,0
    11/29/12 06:50,0,0,0,0,0,0,0,0,0
    11/29/12 06:55,0,0,0,0,0,0,0,0,0
    11/29/12 07:00,0,0,0,0,0,0,0,0,0
    11/29/12 07:05,0.19,0.19,0.02,0.02,0.01,0.00,0.00,0,0.42
    11/29/12 07:10,0.03,0.03,0.01,0.01,0.00,0,0.01,0.00,0.08
    11/29/12 07:15,0.01,0.01,0.02,0.02,0.01,0.00,0.00,0,0.08
    11/29/12 07:20,0.02,0.02,0.01,0.01,0.00,0,0.01,0.00,0.07
    11/29/12 07:25,0.06,0.06,0.01,0.01,0.01,0.00,0.00,0,0.15
    11/29/12 07:30,0.11,0.11,0.01,0.02,0.00,0,0.01,0.00,0.26
    11/29/12 07:35,0.04,0.04,0.01,0.01,0.01,0.00,0.00,0,0.12
    11/29/12 07:40,0.13,0.13,0.01,0.01,0.00,0,0.01,0.00,0.29
    11/29/12 07:45,0.04,0.04,0.01,0.01,0.01,0.00,0.00,0,0.11
    11/29/12 07:50,0.05,0.05,0.01,0.01,0.00,0,0.01,0.00,0.14
    11/29/12 07:55,0.15,0.15,0.01,0.01,0.01,0.00,0.00,0,0.33
    11/29/12 08:00,0.02,0.02,0.01,0.01,0.00,0,0.01,0.00,0.07
    11/29/12 08:05,0.07,0.08,0.01,0.01,0.01,0.00,0.00,0,0.18
    11/29/12 08:10,0.02,0.02,0.04,0.05,0.00,0,0.01,0.00,0.14
    11/29/12 08:15,0.03,0.03,0.01,0.01,0.01,0.00,0.00,0,0.09
    11/29/12 08:20,0.06,0.06,0.01,0.01,0.00,0,0.01,0.00,0.15

     

     

    Any other ideas?

     

     

    Greetings
  • I can't find any documentation on how historical data is stored and collected, or how it can be configured via the GUI. I suspect what you're seeing is the intended behaviour, but I'd suggest you talk to F5 support and confirm it. If you do, please post back.

    By the way, I found these BigDB entries, but I wouldn't mess with them without F5 support confirming it's safe:

    
    sys db avr.distributedreporting.maxtiersize {
        value "4"
    }
    sys db avr.stats.distributedfactor {
        value "2"
    }
    sys db avr.stats.internal.maxentitiespertable {
        value "20000"
    }
    sys db avr.trafficcapture.external.syslogseverity {
        value "info"
    }
    sys db avr.trafficcapture.internal.enabled {
        value "true"
    }
    sys db avr.trafficcapture.internal.maxentriesperfile {
        value "10"
    }
    sys db avr.trafficcapture.internal.maxtransactions {
        value "1000"
    }
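
    If support ever confirms that changing one of these is safe, a BigDB value would normally be adjusted with a standard tmsh modification, for example (purely as an illustration, not a recommendation, and untested with respect to the aggregation behaviour):

        tmsh modify sys db avr.stats.internal.maxentitiespertable value "20000"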
    
  • Hi!

     

     

    Thanks for the replies so far. I contacted F5 support, but they aren't very interested in my request.

     

    In tmsh, one can use a command like

    show /analytics report { view-by pool-member measures \
        { average-server-latency average-tps transactions average-page-load-time } range now--now-10m limit 20 }

    to display the analytics data.

     

    If there were a way to get this data via SNMP or iControl, I could use a tool like Cacti or even Excel to store the historical data for later use. But it seems AVR is a module that suffers from having neither an SNMP nor an iControl programmatic interface.

     

    So unless someone has a better idea, I will write a Perl script that runs via cron every 10 minutes directly on the F5 and saves the data to a CSV file (see the sketch below).
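
    Something like this is what I have in mind (the output path, file names and the exact formatting of the non-interactive tmsh output are assumptions I'd still need to verify):

        #!/usr/bin/perl
        # Hypothetical AVR collection script: run the analytics report shown above
        # and append its raw output, with a timestamp, to a file for later processing.
        use strict;
        use warnings;
        use POSIX qw(strftime);

        my $outfile = '/var/tmp/avr_poolmember_stats.csv';   # assumed output location
        my $cmd = 'tmsh -c "show /analytics report { view-by pool-member measures '
                . '{ average-server-latency average-tps transactions average-page-load-time } '
                . 'range now--now-10m limit 20 }"';

        my $output = `$cmd 2>&1`;                             # capture the report output
        die "tmsh exited with non-zero status\n" if $? != 0;

        my $stamp = strftime('%Y-%m-%d %H:%M:%S', localtime);
        open my $fh, '>>', $outfile or die "Cannot open $outfile: $!";
        print {$fh} "# sample taken $stamp\n$output\n";       # one block of output per run
        close $fh;

    The matching crontab entry on the F5 would then be something like (script path again an assumption):

        */10 * * * * /usr/bin/perl /var/tmp/avr_collect.pl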

     

    Greetings

     

    Andreas

     

     

  • Interesting. I thought Enterprise Manager could now collect and report on AVR statistics, and that it would use iControl to do so.
  • Hello!

     

     

    We opened a support case, and after a long time we got the answer that there is no way to change the granularity of the saved data.

     

     

    The full answer was:

     

     

    By design, external data is a requirement for finer granularity. Internal data collection is limited to allow for storage and performance limitations. For this reason, AVR aggregates data according to the following schedule:

     

     

    For the first 2 hours, data is kept aggregated every 5 minutes.
    After 8 hours, data is kept aggregated every 1 hour (60 minutes).
    After 2 days, data is kept aggregated every 4 hours (240 minutes).
    After 2 weeks, data is kept aggregated every 24 hours.
    After 2 months, data is kept aggregated every 1 week.
    After 1 year, data is kept aggregated every 4 weeks.

     

     

     

    Other internal limits are:

    Chart capacity is limited to 1 year.
    Captured transaction data is limited to 1000 records; although new records are displayed in the GUI, the old records must be cleared before new records are stored.
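
    Reading that schedule, the granularity to expect for a sample of a given age works out roughly as in the sketch below. This is only my own interpretation; the answer does not say which interval applies between 2 and 8 hours, so hourly is assumed for that range.

        #!/usr/bin/perl
        # Sketch: map the age of a sample (in hours) to the aggregation interval
        # from the support answer above. The 2-8 hour range is not spelled out
        # there, so hourly aggregation is assumed for it.
        use strict;
        use warnings;

        sub aggregation_interval_minutes {
            my ($age_hours) = @_;
            return 5           if $age_hours <= 2;           # first 2 hours: 5-minute samples
            return 60          if $age_hours <= 2 * 24;      # up to 2 days: hourly (assumed from 2 h on)
            return 240         if $age_hours <= 14 * 24;     # up to 2 weeks: every 4 hours
            return 24 * 60     if $age_hours <= 61 * 24;     # up to roughly 2 months: daily
            return 7 * 24 * 60 if $age_hours <= 365 * 24;    # up to 1 year: weekly
            return 28 * 24 * 60;                             # older than 1 year: every 4 weeks
        }

        printf "a sample from %4d hours ago -> one data point per %5d minutes\n",
            $_, aggregation_interval_minutes($_) for (1, 12, 72, 24 * 30);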

     

     

    Greetings!

     

    Andreas