BIG-IP Report

Problem this snippet solves:

Overview

This is a script that generates a report of the BIG-IP LTM configuration on all your load balancers, making it easy to find information and get a comprehensive overview of virtual servers and the pools connected to them.

This information is used to give NOC and developers insight into where things are located and to help plan patching and deployments. I also use it myself as a quick way to get information or to gather data as a foundation for RFCs, e.g. a list of all external virtual servers without compression profiles.

The script has been running on 13 pairs of load balancers, indexing over 1200 virtual servers, for several years now, and the report is widely used across the company as well as by many companies and governments around the world.

It's easy to set up and use, and it only requires auditor (read-only) permissions on your devices.
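
As a rough illustration of the kind of read-only data the report is built on, here is a minimal Python sketch (not part of the script itself) that uses the iControl REST API to list virtual servers without an HTTP compression profile; the host name and credentials are placeholders:

    # Minimal sketch, assuming an auditor (read-only) account.
    # Host and credentials below are placeholders.
    import requests

    BIGIP = "https://bigip.example.com"
    session = requests.Session()
    session.auth = ("auditor", "secret")
    session.verify = False  # lab only; validate certificates in production

    # All HTTP compression profiles defined on the device
    comp = session.get(f"{BIGIP}/mgmt/tm/ltm/profile/http-compression").json()
    compression = {p["fullPath"] for p in comp.get("items", [])}

    # Print every virtual server that references none of them
    vs_url = f"{BIGIP}/mgmt/tm/ltm/virtual?expandSubcollections=true"
    for vs in session.get(vs_url).json().get("items", []):
        profiles = vs.get("profilesReference", {}).get("items", [])
        if not any(p["fullPath"] in compression for p in profiles):
            print(vs["fullPath"])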

Demo/Preview

Interactive demo

http://loadbalancing.se/bigipreportdemo/

Screenshots

The main report:

The device overview:

Certificate details:

How to use this snippet:

Installation instructions

BigipReport REST

This is the only branch we've been updating since mid-2020, and it supports 12.x and upwards (maybe even 11.6).
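
The script talks to the devices over the iControl REST API. As a minimal illustration (not the script's own code), here is what token authentication against that API looks like in Python, with host and credentials as placeholders:

    # Sketch of iControl REST token authentication; host and credentials
    # are placeholders, and certificate validation is disabled for brevity.
    import requests

    BIGIP = "https://bigip.example.com"
    login = requests.post(
        f"{BIGIP}/mgmt/shared/authn/login",
        json={"username": "auditor", "password": "secret",
              "loginProviderName": "tmos"},
        verify=False,
    )
    token = login.json()["token"]["token"]

    # Subsequent read-only calls carry the token instead of basic auth
    r = requests.get(f"{BIGIP}/mgmt/tm/sys/version",
                     headers={"X-F5-Auth-Token": token}, verify=False)
    print(r.status_code)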

Updated Oct 16, 2024
Version 11.0
  • Upgraded Cypress to patch vulnerabilities and make the vulnerability scanners calm down. 🙂

    Working on a new version where there's a view for nodes too.

    Kind regards,
    Patrik

  • Hi Patrik

    Still have a small issue with the Docker version of BigIPReport. It used to take around 5-6 minutes per execution on a Windows VM server.

    Now that it's dockerized, it can take 30 to 106 minutes to execute. I noticed that it seems to block on these steps:

    2023-09-20 11:19:18 <device_A>:Caching nodes
    2023-09-20 11:19:18 <device_B>:Caching monitors
    2023-09-20 11:19:33 <device_A>:Caching monitors
    2023-09-20 11:21:35 <device_B>:Caching Pools
    Wait: 0, Run: 2, Done: 6, Fail: 0, Time: 296.7917296


    Wait: 0, Run: 2, Done: 6, Fail: 0, Time: 297.7953358
    Wait: 0, Run: 2, Done: 6, Fail: 0, Time: 300.7993795
    2023-09-20 11:21:54 <device_A>:Caching Pools
    Wait: 0, Run: 2, Done: 6, Fail: 0, Time: 301.8015409
    2023-09-20 11:24:02 <device_A>:Caching Policies
    2023-09-20 11:24:17 <device_A>:Caching datagroups
    Wait: 0, Run: 2, Done: 6, Fail: 0, Time: 463.0189142
    Wait: 0, Run: 2, Done: 6, Fail: 0, Time: 464.0201473


    Wait: 0, Run: 2, Done: 6, Fail: 0, Time: 465.021371
    2023-09-20 11:24:49 <device_A>:Caching iRules
    Wait: 0, Run: 2, Done: 6, Fail: 0, Time: 491.0570585

    2023-09-20 11:25:06 <device_A>:Caching profiles
    2023-09-20 11:25:14 <device_B>:Caching Policies
    2023-09-20 11:25:30 <device_B>:Caching datagroups
    2023-09-20 11:26:00 <device_B>:Caching iRules
    2023-09-20 11:26:16 <device_B>:Caching profiles
    Wait: 0, Run: 2, Done: 6, Fail: 0, Time: 844.5591946


    Wait: 0, Run: 2, Done: 6, Fail: 0, Time: 1010.7925092

    2023-09-20 11:39:16 <device_B>:Caching Virtual servers
    2023-09-20 11:39:33 <device_A>:Caching Virtual servers
    2023-09-20 11:40:18 <device_A>:Polling policy reference information for /Common//<Random_ASM_Policy>
    2023-09-20 11:40:34 <device_A>:Polling policy reference information for /Common//<Random_ASM_Policy>
    2023-09-20 11:40:50 <device_A>:Polling policy reference information for /Common//<Random_ASM_Policy>
    2023-09-20 11:41:06 <device_A>:Polling policy reference information for /Common//<Random_ASM_Policy>
    2023-09-20 11:41:10 <device_B>:Detecting orphaned pools
    2023-09-20 11:41:10 <device_B>:Stats: VS:311 P:307 R:44 POL:0 DG:5 C:10 M:199 ASM:0 T:1455.6709484
    2023-09-20 11:41:23 <device_A>:Polling policy reference information for /Common//<Random_ASM_Policy>
    2023-09-20 11:41:28 <device_A>:Polling policy reference information for /Common//<Random_ASM_Policy>
    2023-09-20 11:41:43 <device_A>:Polling policy reference information for /Common//<Random_ASM_Policy>
    2023-09-20 11:41:58 <device_A>:Polling policy reference information for /Common//<Random_ASM_Policy>
    2023-09-20 11:42:00 <device_A>:Polling policy reference information for /Common//<Random_ASM_Policy>
    2023-09-20 11:42:15 <device_A>:Polling policy reference information for /Common//<Random_ASM_Policy>
    2023-09-20 11:42:33 <device_A>:Polling policy reference information for /Common//<Random_ASM_Policy>
    2023-09-20 11:42:48 <device_A>:Polling policy reference information for /Common//<Random_ASM_Policy>
    2023-09-20 11:43:07 <device_A>:Polling policy reference information for /Common//<Random_ASM_Policy>
    2023-09-20 11:43:26 <device_A>:Polling policy reference information for /Common//<Random_ASM_Policy>
    2023-09-20 11:43:41 <device_A>:Polling policy reference information for /Common//<Random_ASM_Policy>
    2023-09-20 11:44:00 <device_A>:Polling policy reference information for /Common//<Random_ASM_Policy>
    2023-09-20 11:44:10 <device_A>:Polling policy reference information for /Common//<Random_ASM_Policy>
    2023-09-20 11:44:24 <device_A>:Polling policy reference information for /Common//<Random_ASM_Policy>
    2023-09-20 11:44:42 <device_A>:Polling policy reference information for /Common//<Random_ASM_Policy>
    2023-09-20 11:44:57 <device_A>:Polling policy reference information for /Common//<Random_ASM_Policy>
    2023-09-20 11:45:15 <device_A>:Polling policy reference information for /Common//<Random_ASM_Policy>
    2023-09-20 11:45:17 <device_A>:Polling policy reference information for /Common//<Random_ASM_Policy>
    2023-09-20 11:45:36 <device_A>:Polling policy reference information for /Common//<Random_ASM_Policy>
    2023-09-20 11:45:49 <device_A>:Polling policy reference information for /Common//<Random_ASM_Policy>
    2023-09-20 11:46:08 <device_A>:Polling policy reference information for /Common//<Random_ASM_Policy>
    2023-09-20 11:46:24 <device_A>:Polling policy reference information for /Common//<Random_ASM_Policy>
    2023-09-20 11:46:40 <device_A>:Polling policy reference information for /Common//<Random_ASM_Policy>
    2023-09-20 11:46:55 <device_A>:Polling policy reference information for /Common//<Random_ASM_Policy>
    2023-09-20 11:47:14 <device_A>:Polling policy reference information for /Common//<Random_ASM_Policy>
    2023-09-20 11:47:30 <device_A>:Polling policy reference information for /Common//<Random_ASM_Policy>
    2023-09-20 11:47:46 <device_A>:Polling policy reference information for /Common//<Random_ASM_Policy>
    2023-09-20 11:48:03 <device_A>:Polling policy reference information for /Common//<Random_ASM_Policy>
    2023-09-20 11:48:16 <device_A>:Polling policy reference information for /Common//<Random_ASM_Policy>
    2023-09-20 11:48:33 <device_A>:Polling policy reference information for /Common//<Random_ASM_Policy>
    2023-09-20 11:48:48 <device_A>:Polling policy reference information for /Common//<Random_ASM_Policy>
    2023-09-20 11:49:03 <device_A>:Polling policy reference information for /Common//<Random_ASM_Policy>
    Wait: 0, Run: 1, Done: 7, Fail: 0, Time: 1935.1637433

    2023-09-20 11:49:19 <device_A>:Polling policy reference information for /Common//<Random_ASM_Policy>
    2023-09-20 11:49:22 <device_A>:Polling policy reference information for /Common//<Random_ASM_Policy>
    2023-09-20 11:49:42 <device_A>:Polling policy reference information for /Common//<Random_ASM_Policy>
    2023-09-20 11:49:55 <device_A>:Polling policy reference information for /Common//<Random_ASM_Policy>
    2023-09-20 11:50:14 <device_A>:Polling policy reference information for /Common//<Random_ASM_Policy>
    2023-09-20 11:50:33 <device_A>:Polling policy reference information for /Common//<Random_ASM_Policy>
    2023-09-20 11:50:49 <device_A>:Polling policy reference information for /Common//<Random_ASM_Policy>
    2023-09-20 11:51:05 <device_A>:Polling policy reference information for /Common//<Random_ASM_Policy>
    2023-09-20 11:51:20 <device_A>:Polling policy reference information for /Common//<Random_ASM_Policy>
    2023-09-20 11:51:36 <device_A>:Polling policy reference information for /Common//<Random_ASM_Policy>
    2023-09-20 11:51:54 <device_A>:Detecting orphaned pools
    2023-09-20 11:51:54 <device_A>:Stats: VS:97 P:187 R:274 POL:16 DG:61 C:26 M:57 ASM:99 T:2099.2386718
    2023-09-20 11:51:55 Checking for missing data
    2023-09-20 11:51:55 <device_B> does not have any Policy data
    2023-09-20 11:51:55 No missing data was detected, sending alerts and compiling the report
    2023-09-20 11:51:55 No certificate alerting channel has been enabled
    2023-09-20 11:51:55 Support Checks has been disabled, skipping
    2023-09-20 11:51:55 No failed devices alerting channel has been enabled

    And it's like this for all devices of the same customer.

    On our shared infra, which has a lot more objects, it works much better:

    2023-09-20 11:19:09 <device_X>:Stats: VS:699 P:623 R:204 POL:213 DG:96 C:144 M:239 ASM:87 T:134.3199104

    I checked CPU and memory usage on the Docker host; neither seems particularly high.

    The network doesn't seem to be the problem either; it's the same path as in the previous setup.

    Any ideas?

  • Hi Mathieu

    Which version did you run on the Windows server? The old version was a bit faster due to how REST handles virtual server stats: the SOAP version could get them all in one go, whereas the REST API needs to poll each and every virtual server (see the sketch after this comment).

    Another thing you can do is check the MaxJobs setting and set it to the same value as the number of device groups, although from the looks of it you have an issue with one particular machine, so that probably won't help.

    Docker should not normally add that much overhead. If it did, I reckon nobody would use it. 🙂

    TimRiker usually has many good ideas. Any input?

    Kind regards,
    Patrik
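
    To make the difference concrete, here is a rough Python sketch (not the script's actual code; host and credentials are placeholders) of the per-virtual-server stats polling that REST forces, with a bounded worker pool in the spirit of the MaxJobs setting:

    # Rough illustration only. BigIPReport's MaxJobs parallelizes per device
    # group; this applies the same idea to the per-VS stats requests.
    import requests
    from concurrent.futures import ThreadPoolExecutor

    BIGIP = "https://bigip.example.com"   # placeholder
    AUTH = ("auditor", "secret")          # placeholder
    MAX_JOBS = 4

    vs_items = requests.get(
        f"{BIGIP}/mgmt/tm/ltm/virtual?$select=fullPath",
        auth=AUTH, verify=False,
    ).json()["items"]

    def fetch_stats(full_path):
        # "/Common/my_vs" becomes "~Common~my_vs" in the stats URL
        url = f"{BIGIP}/mgmt/tm/ltm/virtual/{full_path.replace('/', '~')}/stats"
        return full_path, requests.get(url, auth=AUTH, verify=False).json()

    # One request per virtual server -- the part SOAP could do in a single call
    with ThreadPoolExecutor(max_workers=MAX_JOBS) as pool:
        for path, _stats in pool.map(fetch_stats,
                                     (vs["fullPath"] for vs in vs_items)):
            print("polled", path)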

  • Are you running the same config outside of Docker? The box that runs our report had some network issues that caused the report to take forever; fixing the network sped it up again. The same XML file and version outside of Docker should run at about the same speed, from what I've tested (see the timing sketch below).
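
    A quick way to check whether the network path is at fault is to time a few identical calls from inside the container and from the old setup and compare (host and credentials are placeholders):

    # Time a handful of identical read-only REST calls; large or erratic
    # numbers from inside the container point at the network, not the script.
    import time
    import requests

    BIGIP = "https://bigip.example.com"  # placeholder

    for _ in range(5):
        start = time.monotonic()
        requests.get(f"{BIGIP}/mgmt/tm/ltm/virtual?$top=1",
                     auth=("auditor", "secret"), verify=False)
        print(f"{time.monotonic() - start:.2f}s")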

  • Hi Patrik, Tim

    I can confirm it's not related to Docker. The issue seems to be isolated to two specific devices for a single customer.

    Couldn't find a difference in the configuration yet...

    Br, 

  • v5.7.9 is out with security patches. As always, Docker images have been built, and for the brave souls running :latest, that tag has been updated too.

    There's no need to update the config file between this and the previous version.