
Forum Discussion

KrishnaS (Nimbostratus)
Apr 12, 2026

F5 BIG-IP DNS/Audit Logs — Structured Format for SIEM Ingestion

Hello Team,

We are working on adding ingestion support for F5 BIG-IP DNS and Audit logs into a SIEM, with the goal of normalising events to the OCSF standard. For other BIG-IP event types, we use Telemetry Streaming to forward logs in structured JSON format, which makes normalisation straightforward.

However, DNS and Audit logs appear to be emitted only in syslog text format, and we have not found a way to obtain them in structured JSON. Additionally, we were unable to locate any official schema documentation describing the available fields for these log types. This makes it challenging to reliably parse and map the events to a standard schema.
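To illustrate the difficulty: without a documented schema, every field has to be hand-extracted from free text. The sketch below is purely illustrative (the sample line and field names are mock-ups, not official BIG-IP output), but it shows the kind of ad-hoc parsing we are trying to avoid compared with the structured JSON Telemetry Streaming emits:

```shell
# Illustrative only: hand-rolled syslog-to-JSON split on a mocked-up line.
line='Apr 12 10:15:02 bigip1 notice mcpd[5212]: AUDIT - user admin - create { /Common/vs_test }'
host=$(echo "$line" | awk '{print $4}')
daemon=$(echo "$line" | sed -n 's/.* \([a-z]*\)\[[0-9]*\]:.*/\1/p')
msg=${line#*]: }
printf '{"host":"%s","daemon":"%s","message":"%s"}\n' "$host" "$daemon" "$msg"
```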

Could someone advise whether any schema documentation is available for DNS and Audit logs, or whether there is a supported way to forward these logs in JSON or another structured format?

Any guidance or documentation would be greatly appreciated.

Thanks,
Krishna

6 Replies

  • Hi KrishnaS,

     

    I would recommend you check out the BIG-IP iRules Assistant; I used it to generate a starter iRule that converts DNS requests to JSON format. You can expand it as needed and log to a pool if you want. Keep in mind it would need to be attached to every WIP that requires this logging. Also, test this in a non-production environment and make sure you are comfortable with its performance impact.

     

    # This iRule code has the following requirements:
    # - DNS Services addon license (called GTM before 12.0) and a DNS profile enabled where applicable (required by: "DNS::question", "DNS_REQUEST")
    
    when DNS_REQUEST priority 500 {
        # Declare variable to store client IP
        set client_ip [IP::client_addr]
        # Declare variable to store DNS question name, escaping backslashes
        # and double quotes so the value is safe to embed in the JSON string
        set qname [string map {\\ \\\\ \" \\\"} [DNS::question name]]
        # Declare variable to store DNS question type
        set qtype [DNS::question type]
        # Declare variable to store DNS question class
        set qclass [DNS::question class]
    
        # Assemble the extracted fields into a JSON-formatted string
        set json [format "{\"client_ip\":\"%s\",\"question_name\":\"%s\",\"question_type\":\"%s\",\"question_class\":\"%s\"}" \
            $client_ip \
            $qname \
            $qtype \
            $qclass]
    
        # Log the JSON string to syslog with local0.info facility and level
        log local0.info $json
    }
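If it helps downstream, here is a quick way to sanity-check that the logged lines are valid JSON before pointing a SIEM parser at them. The log-line prefix below is a mock-up of how tmm prefixes iRule log output, so treat the exact format as illustrative:

```shell
# Extract the JSON payload from a (synthetic) /var/log/ltm line produced by
# an iRule like the one above, and confirm it parses as JSON.
line='Apr 12 10:20:33 bigip1 info tmm[1234]: Rule /Common/dns_to_json <DNS_REQUEST>: {"client_ip":"10.0.0.5","question_name":"example.com","question_type":"A","question_class":"IN"}'
json=$(printf '%s' "$line" | grep -o '{.*}')
printf '%s' "$json" | python3 -c 'import json,sys; print(json.load(sys.stdin)["question_name"])'
```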

     

  • Hi KrishnaS,

     

    For audit logs you might need to consider a local shell script to convert and send them to the SIEM. This would have to be fully tested on a demo BIG-IP, including performance testing. The script will likely not survive reboots or upgrades, but it may be enough to get audit logs converted to JSON and sent out.

     

    #!/bin/bash
    #============================================================================
    # bigip_audit_to_json.sh
    # Tails /var/log/audit, parses each line to JSON, forwards to a collector.
    # Designed for BIG-IP 14.x-17.x audit log format (mcpd/httpd/tmsh events).
    #
    # Usage: ./bigip_audit_to_json.sh <DEST_IP> <DEST_PORT> [tcp|udp]
    # Example: ./bigip_audit_to_json.sh 10.1.1.50 5514 udp
    #
    # Run as: nohup ./bigip_audit_to_json.sh 10.1.1.50 5514 udp &
    # Stop:   kill $(cat /var/run/audit_to_json.pid)
    #============================================================================
    
    DEST_IP="${1:?Usage: $0 <DEST_IP> <DEST_PORT> [tcp|udp]}"
    DEST_PORT="${2:?Usage: $0 <DEST_IP> <DEST_PORT> [tcp|udp]}"
    PROTO="${3:-udp}"
    AUDIT_LOG="/var/log/audit"
    HOSTNAME=$(hostname -s)
    PID_FILE="/var/run/audit_to_json.pid"
    
    echo $$ > "$PID_FILE"
    
    # --- JSON-escape a string ------------------------------------------------
    json_escape() {
        printf '%s' "$1" | sed 's/\\/\\\\/g; s/"/\\"/g; s/\t/\\t/g'
    }
    
    # --- Send to collector ----------------------------------------------------
    send_json() {
        local json="$1"
        if [[ "$PROTO" == "tcp" ]]; then
            echo "$json" > /dev/tcp/"$DEST_IP"/"$DEST_PORT" 2>/dev/null
        else
            echo "$json" > /dev/udp/"$DEST_IP"/"$DEST_PORT" 2>/dev/null
        fi
    }
    
    # --- Parse and forward ----------------------------------------------------
    tail -F "$AUDIT_LOG" | while IFS= read -r line; do
    
        # Skip empty lines
        [[ -z "$line" ]] && continue
    
        # Extract timestamp (Mon DD HH:MM:SS or ISO format)
        ts=$(echo "$line" | grep -oP '^\w{3}\s+\d{1,2}\s+\d{2}:\d{2}:\d{2}' )
        if [[ -n "$ts" ]]; then
            # Convert syslog timestamp to ISO-8601
            year=$(date +%Y)
            iso_ts=$(date -d "$ts $year" '+%Y-%m-%dT%H:%M:%SZ' 2>/dev/null || echo "$ts")
        else
            iso_ts=$(date -u '+%Y-%m-%dT%H:%M:%SZ')
        fi
    
        # Extract daemon/process (e.g., mcpd, httpd, tmsh, sshd)
        daemon=$(echo "$line" | grep -oP '(?<=\s)\S+(?=\[\d+\]:)' | head -1)
        pid=$(echo "$line" | grep -oP '(?<=\[)\d+(?=\]:)' | head -1)
    
        # Extract the message body (everything after "PID]: ")
        msg=$(echo "$line" | sed -n 's/.*\[[0-9]*\]: *//p')
        [[ -z "$msg" ]] && msg="$line"
    
        # --- Parse audit-specific fields from message body ---
        user=$(echo "$msg" | grep -oP '(?<=user )\S+' | head -1)
        partition=$(echo "$msg" | grep -oP '(?<=\[)\S+(?=\])' | head -1)
        action=""
        object=""
        status=""
        trans_id=$(echo "$msg" | grep -oP '(?<=transaction #)\d+' | head -1)
        client_ip=$(echo "$msg" | grep -oP '(?<=client )\d+\.\d+\.\d+\.\d+' | head -1)
    
        # Detect action keywords
        for act in create modify delete run login logout; do
            if echo "$msg" | grep -qi "\b${act}\b"; then
                action="$act"
                break
            fi
        done
    
        # Extract object path (tmsh-style /Common/... or /Partition/...)
        object=$(echo "$msg" | grep -oP '/\S+/\S+' | head -1)
    
        # Detect success/failure
        if echo "$msg" | grep -qi "fail\|error\|denied\|unauthorized"; then
            status="failure"
        else
            status="success"
        fi
    
        # --- Build JSON -------------------------------------------------------
        safe_msg=$(json_escape "$msg")
    
        json=$(cat <<EOF
    {"timestamp":"${iso_ts}","hostname":"${HOSTNAME}","daemon":"${daemon:-unknown}","pid":"${pid:-0}","user":"${user:-unknown}","partition":"${partition:-}","action":"${action:-unknown}","object":"${object:-}","status":"${status}","transaction_id":"${trans_id:-}","client_ip":"${client_ip:-}","raw":"${safe_msg}","log_type":"bigip_audit","ocsf_class_uid":3002,"ocsf_class_name":"Account Change","ocsf_category_uid":3}
    EOF
    )
    
        # Strip newline from heredoc
        json=$(echo "$json" | tr -d '\n')
    
        send_json "$json"
    
    done
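Before deploying, you can exercise the script's extraction regexes offline against sample lines. The sample below is a mock-up of a tmsh audit entry (not an official schema), so verify against real lines from your own /var/log/audit:

```shell
# Offline check of the grep -P extractions used in the script above,
# against a synthetic audit line (format is illustrative only).
sample='Apr 12 11:02:10 bigip1 notice mcpd[5212]: AUDIT - client tmsh, user admin - transaction #178 - object /Common/pool_a - status [Command OK]'
user=$(echo "$sample" | grep -oP '(?<=user )\S+' | head -1)
trans=$(echo "$sample" | grep -oP '(?<=transaction #)\d+' | head -1)
object=$(echo "$sample" | grep -oP '/\S+/\S+' | head -1)
echo "user=$user transaction=$trans object=$object"
```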

     

  • Hello Jeff_Granieri,

    Thanks for the iRule suggestion.

    Following up on a post previously shared by my colleague Krishna — we are currently working together on this setup and facing an issue with logs not reaching AWS S3.

    Current Setup:

    • iRule attached to the Virtual Server, logging DNS_REQUEST in JSON format using log local0.info
    • HSL Log Destination configured with a pool pointing to 127.0.0.1:6514
    • Log Publisher -> DNS Logging Profile -> attached to the Virtual Server
    • Telemetry Streaming configured with a listener on port 6514
    • Consumer configured to AWS S3 (eu-north-1)
    • Port 6514 is confirmed open (verified via netstat)

    Expected Flow:
    F5 BIG-IP -> Telemetry Streaming -> AWS S3

    Issue:
    Despite the above configuration, we are not receiving any data in AWS S3.
    We would appreciate any guidance on what we might be missing or additional checks we should perform.


    My declaration:

    {
      "class": "Telemetry",
      "My_Listener": {
        "class": "Telemetry_Listener",
        "port": 6514
      },
      "My_Consumer": {
        "class": "Telemetry_Consumer",
        "type": "AWS_S3",
        "region": "eu-north-1",
        "bucket": "f5-dns-community-test",
        "username": "Access key",
        "passphrase": {
          "cipherText": "Secret Key"
        }
      }
    }
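One way to narrow this down (a sketch under assumptions, not an official procedure): push a hand-crafted event straight at the listener port. If a manually injected event shows up in S3, the problem is on the iRule/HSL side; if nothing arrives, look at the consumer/credentials side. Simulated locally below with a throwaway Python listener on an arbitrary port (12345); on the BIG-IP you would target 127.0.0.1:6514 instead:

```shell
# Throwaway listener standing in for the TS listener (demo only).
python3 - <<'EOF' &
import socket
s = socket.socket()
s.bind(("127.0.0.1", 12345))
s.listen(1)
conn, _ = s.accept()
open("/tmp/ts_probe.out", "wb").write(conn.recv(4096))
EOF
sleep 1
# Inject a test event the way you would at the real listener port.
echo '{"probe":"hello","source":"manual-test"}' > /dev/tcp/127.0.0.1/12345
sleep 1
cat /tmp/ts_probe.out
```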

     

    • Jeff_Granieri (Employee)

      Hi jainzeel13, it looks like your declaration is missing the system poller, which might be why TS is not sending anything...

      {
        "class": "Telemetry",
        "My_System": {
          "class": "Telemetry_System",
          "systemPoller": {
            "interval": 300
          }
        },
        "My_Listener": {
          "class": "Telemetry_Listener",
          "port": 6514
        },
        "My_Consumer": {
          "class": "Telemetry_Consumer",
          "type": "AWS_S3",
          "region": "eu-north-1",
          "bucket": "f5-dns-community-test",
          "username": "AKIA________________",
          "passphrase": {
            "cipherText": "actual_secret_key_here"
          }
        }
      }

      you can try to add debugging :  

      restcurl -X POST /mgmt/shared/telemetry/declare -d '{
        "class": "Telemetry",
        "controls": {
          "class": "Controls",
          "logLevel": "debug"
        }
      }'

      *** make sure you set the logLevel back from debug when you are done ***

      and then monitor /var/log/restnoded/restnoded.log for the TS polling and check for error response codes...
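      For example (the log entries below are synthetic, since restnoded.log only exists on the BIG-IP, and the exact message text varies by TS version):

```shell
# Grep pattern for spotting consumer failures in restnoded.log, shown
# against mocked-up entries written to a sample file.
cat > /tmp/restnoded.sample <<'EOF'
Mon, 12 Apr 2026 10:00:01 GMT - info: [telemetry] Consumer AWS_S3 loaded
Mon, 12 Apr 2026 10:05:01 GMT - severe: [telemetry.AWS_S3] upload error: 403 Forbidden
EOF
grep -iE 'severe|error' /tmp/restnoded.sample
```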

  • Hello Jeff_Granieri,

    Thank you for sharing the bash script.

    We've tested it and wanted to clarify: will this script convert all types of audit logs into JSON format? For example, if we have different categories such as authentication failures (where the message field indicates that authentication failed), authentication successes, and network-related audit logs, will the script handle and convert each of these log types correctly into JSON?

    • Jeff_Granieri (Employee)

      Hi jainzeel13​ ,

       

      It's tailing /var/log/audit. You should test each audit message type to confirm the script covers what you need, and feel free to make adjustments as needed.