
How I did it...again - "High-Performance S3 Load Balancing with F5 BIG-IP"

Introduction 

Welcome back to the "How I did it" series! In the previous installment, we explored high-performance S3 load balancing of Dell ObjectScale with F5 BIG-IP. This follow-up builds on that foundation with BIG-IP v21.x's S3-focused profiles and how to apply them in the wild. We'll also put the external monitor to work, validating health with real PUT/GET/DELETE checks so your S3-compatible backends aren't just "up," they're truly dependable.

New S3 Profiles for the BIG-IP...well, kind of

A big part of why F5 BIG-IP excels is because of its advanced traffic profiles, like TCP and SSL/TLS. These profiles let you fine-tune connection behavior—optimizing throughput, reducing latency, and managing congestion—while enforcing strong encryption and protocol settings for secure, efficient data flow.  

Available with version 21.x, the BIG-IP now includes new S3-specific profiles: s3-tcp and s3-default-clientssl. These profiles are based on existing default parent profiles (tcp and clientssl, respectively) that have been customized, or “tuned,” to optimize S3 traffic. Let’s take a closer look.

Anatomy of a TCP Profile

The BIG-IP includes a number of pre-defined TCP profiles that define how the system manages TCP traffic for virtual servers, controlling aspects like connection setup, data transfer, congestion control, and buffer tuning. These profiles allow administrators to optimize performance for different network conditions by adjusting parameters such as initial congestion window, retransmission timeout, and algorithms like Nagle’s or Delayed ACK.

The s3-tcp profile (see the tmsh sketch below) has been tweaked with respect to data transfer and congestion window sizes, as well as memory management, to optimize for typical S3 traffic patterns (i.e., high-throughput data transfers, varying request sizes, large payloads, etc.).
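To see exactly what was tuned, tmsh can list the profile's settings, and attaching it to a virtual server is a one-liner. A minimal sketch (the virtual server name vs_s3 is hypothetical):

# Show only the settings s3-tcp overrides relative to its tcp parent
tmsh list ltm profile tcp s3-tcp non-default-properties

# Attach the profile to an existing S3 virtual server
tmsh modify ltm virtual vs_s3 profiles add { s3-tcp }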

Tweaking the Client SSL Profile for S3

Client SSL profiles on BIG-IP define how the system terminates and manages SSL/TLS sessions from clients at the virtual server. They specify critical parameters such as certificates, private keys, cipher suites, and supported protocol versions, enabling secure decryption for advanced traffic handling like HTTP optimization, security policies, and iRules.

The s3-default-clientssl profile (see below) modifies the default client SSL profile to optimize SSL/TLS settings for high-throughput object storage traffic, ensuring better performance and compatibility with S3-specific requirements.
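As with the TCP profile, you can inspect the tuning and, since you'll want your own certificate, build a child profile on top of it. A sketch, assuming hypothetical certificate and key names (s3.crt / s3.key):

# Show only the settings changed from the clientssl parent
tmsh list ltm profile client-ssl s3-default-clientssl non-default-properties

# Create a child profile that inherits the S3 tuning but presents your own cert/key
tmsh create ltm profile client-ssl s3_clientssl_custom \
    defaults-from s3-default-clientssl \
    cert-key-chain add { s3_chain { cert s3.crt key s3.key } }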

Advanced S3-compatible health checking with EAV

Has anyone ever told you how cool BIG-IP Extended Application Verification (EAV) aka external monitors are? Okay, I suppose “coolness” is subjective, but EAVs are objectively cool.  Let me prove it to you.

Health monitoring of backend S3-compatible servers typically involves making an HTTP GET request to either the exposed S3 ingest/egress API endpoint or a liveness probe. Get a 200 back and all's good. Wouldn’t it be cool if you could verify a backend server's health by confirming it can actually perform the operations it’s meant to perform? Fortunately, we can do just that using an EAV monitor. Therefore, by the transitive property, EAVs are cool.   —mic drop
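For contrast, the basic liveness-style check described above looks something like this as a standard HTTP monitor (the probe path and hostname are hypothetical):

# Simple liveness check: send a GET, mark the member up on a 200 response
tmsh create ltm monitor http s3_basic_http \
    send "GET /probe HTTP/1.1\r\nHost: s3.example.local\r\nConnection: close\r\n\r\n" \
    recv "200 OK"

This proves the listener answers, but says nothing about whether the storage behind it can actually read and write objects.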

The bash script located at the bottom of the page performs health checks on S3-compatible storage by executing PUT, GET, and DELETE operations on a test object. The health check creates a temporary file with a timestamp, retrieves the file to verify read access, and removes the test file to clean up. If all three operations return the expected HTTP status codes, the node is marked up; otherwise, the node is marked down.

Installing and using the EAV health check

Import the monitor script
  1. Save the bash script (located at the bottom of this page) locally with a .sh extension and import the file onto the BIG-IP. Log in to the BIG-IP Configuration Utility and navigate to System > File Management > External Monitor Program File List > Import.
  2. Use the file selector to navigate to and select the newly created .sh file, provide a name for the file, and select 'Import'.
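Alternatively, the import can be done from tmsh after copying the script to the BIG-IP, e.g., with scp (the /var/tmp path is just a staging location):

# Import the staged script as an external monitor program file
tmsh create sys file external-monitor s3_health.sh source-path file:/var/tmp/s3_health.sh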
Create a new external monitor
  1. Navigate to Local Traffic > Monitors > Create
  2. Provide a name for the monitor. Select 'External' for the type, and select the previously uploaded file for the 'External Program'. The 'Interval' and 'Timeout' settings can be modified or left at their defaults. In addition to the backend host and port, the monitor must pass three (3) additional variables to the script (see the tmsh sketch after these steps):
    1. bucket - The name of an existing bucket where the monitor can place a small text file. During the health check, the monitor will create a file, retrieve it, and delete it.
    2. access_key - An S3-compatible access key with permissions to perform the above operations on the specified bucket.
    3. secret_key - The corresponding S3-compatible secret key.
  3. Select 'Finished' to create the monitor.
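For CLI fans, the monitor definition above collapses into a single tmsh command. A minimal sketch, reusing the placeholder values from the script header (adjust the run path to match your imported file name):

tmsh create ltm monitor external s3_external_monitor \
    run /Common/s3_health.sh \
    interval 10 timeout 31 \
    user-defined bucket your-bucket-name \
    user-defined access_key your-access-key \
    user-defined secret_key your-secret-key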
Associate the monitor with the pool
  1. Navigate to Local Traffic > Pools > Pool List and select the relevant backend S3 pool. Under 'Health Monitors', select the newly created monitor and move it from 'Available' to 'Active'. Select 'Update' to save the configuration.
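Or do the same from tmsh (the pool name s3_pool is hypothetical):

tmsh modify ltm pool s3_pool monitor s3_external_monitor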

Additional Links

How I did it - "High-Performance S3 Load Balancing of Dell ObjectScale with F5 BIG-IP"

F5 BIG-IP v21.0 brings enhanced AI data delivery and ingestion for S3 workflows

Overview of BIG-IP EAV external monitors

EAV Bash Script

#!/bin/bash

################################################################################
# S3 Health Check Monitor for F5 BIG-IP (External Monitor - EAV)
################################################################################
#
# Description:
#   This script performs health checks on S3-compatible storage by
#   executing PUT, GET, and DELETE operations on a test object. It uses AWS
#   Signature Version 4 for authentication and is designed to run as a BIG-IP
#   External Application Verification (EAV) monitor.
#
# Usage:
#   This script is intended to be configured as an external monitor in BIG-IP.
#   BIG-IP automatically provides the first two arguments:
#     $1 - Pool member IP address (may be IPv6-mapped format: ::ffff:x.x.x.x)
#     $2 - Pool member port number
#
#   Additional arguments must be configured in the monitor's "Variables" field:
#     bucket      - S3 bucket name
#     access_key  - Access key for authentication
#     secret_key  - Secret key for authentication
#
# BIG-IP Monitor Configuration:
#   Type: External
#   External Program: /path/to/this/script.sh
#   Variables:
#     bucket="your-bucket-name"
#     access_key="your-access-key"
#     secret_key="your-secret-key"
#
# Health Check Logic:
#   1. PUT - Creates a temporary health check file with timestamp
#   2. GET - Retrieves the file to verify read access
#   3. DELETE - Removes the test file to clean up
#   Success: All three operations return expected HTTP status codes
#   Failure: Any operation fails or times out
#
# Exit Behavior:
#   - Prints "UP" to stdout if all checks pass (BIG-IP marks pool member up)
#   - Silent exit if any check fails (BIG-IP marks pool member down)
#
# Requirements:
#   - openssl (for SHA256 hashing and HMAC signing)
#   - curl (for HTTP requests)
#   - xxd (for hex encoding)
#   - Standard bash utilities (date, cut, sed, awk)
#
# Notes:
#   - Handles IPv6-mapped IPv4 addresses from BIG-IP (::ffff:x.x.x.x)
#   - Uses AWS Signature Version 4 authentication
#   - Can log activity to syslog (local0.notice); logger calls are commented out
#   - Creates temporary files that are automatically cleaned up
#
# Author: [Gregory Coward/F5]
# Version: 1.0
# Last Modified: 12/2025
#
################################################################################

# ===== PARAMETER CONFIGURATION =====

# BIG-IP automatically provides these
HOST="$1"        # Pool member IP (may include ::ffff: prefix for IPv4)
PORT="$2"        # Pool member port
BUCKET="${bucket}"          # S3 bucket name
ACCESS_KEY="${access_key}"  # S3 access key
SECRET_KEY="${secret_key}"  # S3 secret key

OBJECT="${6:-healthcheck.txt}"  # Test object name (optional 6th positional arg; default: healthcheck.txt)

# Strip IPv6-mapped IPv4 prefix if present (::ffff:10.1.1.1 -> 10.1.1.1)
# BIG-IP may pass IPv4 addresses in IPv6-mapped format
if [[ "$HOST" =~ ^::ffff: ]]; then
    HOST="${HOST#::ffff:}"
fi

# ===== S3/AWS CONFIGURATION =====

ENDPOINT="http://$HOST:$PORT"   # S3 endpoint URL
SERVICE="s3"                    # AWS service identifier for signature
REGION=""                       # AWS region (leave empty for S3-compatible storage such as MinIO/Dell)

# ===== TEMPORARY FILE SETUP =====

# Create temporary file for health check upload
TMP_FILE=$(mktemp)
printf "Health check at %s\n" "$(date)" > "$TMP_FILE"

# Ensure temp file is deleted on script exit (success or failure)
trap 'rm -f "$TMP_FILE"' EXIT

# ===== CRYPTOGRAPHIC HELPER FUNCTIONS =====

# Calculate SHA256 hash and return as hex string
# Input: stdin
# Output: hex-encoded SHA256 hash
hex_of_sha256() {
    openssl dgst -sha256 -hex | sed 's/^.* //'
}

# Sign data using HMAC-SHA256 and return hex signature
# Args: $1=hex-encoded key, $2=data to sign
# Output: hex-encoded signature
sign_hmac_sha256_hex() {
    local key_hex="$1"
    local data="$2"
    printf "%s" "$data" | openssl dgst -sha256 -mac HMAC -macopt "hexkey:$key_hex" | awk '{print $2}'
}

# Sign data using HMAC-SHA256 and return binary as hex
# Args: $1=hex-encoded key, $2=data to sign
# Output: hex-encoded binary signature (for key derivation chain)
sign_hmac_sha256_binary() {
    local key_hex="$1"
    local data="$2"
    printf "%s" "$data" | openssl dgst -sha256 -mac HMAC -macopt "hexkey:$key_hex" -binary | xxd -p -c 256
}

# ===== AWS SIGNATURE VERSION 4 IMPLEMENTATION =====

# Generate AWS Signature Version 4 for S3 requests
# Args:
#   $1 - HTTP method (PUT, GET, DELETE, etc.)
#   $2 - URI path (e.g., /bucket/object)
#   $3 - Payload hash (SHA256 of request body, or empty hash for GET/DELETE)
#   $4 - Content-Type header value (empty string if not applicable)
# Output: pipe-delimited string "Authorization|Timestamp|Host"
aws_sig_v4() {
    local method="$1"
    local uri="$2"
    local payload_hash="$3"
    local content_type="$4"

    # Generate timestamp in AWS format (YYYYMMDDTHHMMSSZ)
    local timestamp=$(date -u +"%Y%m%dT%H%M%SZ" 2>/dev/null || gdate -u +"%Y%m%dT%H%M%SZ")
    local datestamp=$(date -u +"%Y%m%d")

    # Build host header (include port if non-standard)
    local host_header="$HOST"
    if [ "$PORT" != "80" ] && [ "$PORT" != "443" ]; then
        host_header="$HOST:$PORT"
    fi

    # Build canonical headers and signed headers list
    local canonical_headers=""
    local signed_headers=""

    # Include Content-Type if provided (for PUT requests)
    if [ -n "$content_type" ]; then
        canonical_headers="content-type:${content_type}"$'\n'
        signed_headers="content-type;"
    fi

    # Add required headers (must be in alphabetical order)
    canonical_headers="${canonical_headers}host:${host_header}"$'\n'
    canonical_headers="${canonical_headers}x-amz-content-sha256:${payload_hash}"$'\n'
    canonical_headers="${canonical_headers}x-amz-date:${timestamp}"

    signed_headers="${signed_headers}host;x-amz-content-sha256;x-amz-date"

    # Build canonical request (AWS Signature V4 format)
    # Format: METHOD\nURI\nQUERY_STRING\nHEADERS\n\nSIGNED_HEADERS\nPAYLOAD_HASH
    local canonical_request="${method}"$'\n'
    canonical_request+="${uri}"$'\n\n'  # Empty query string (double newline)
    canonical_request+="${canonical_headers}"$'\n\n'
    canonical_request+="${signed_headers}"$'\n'
    canonical_request+="${payload_hash}"

    # Hash the canonical request
    local canonical_hash
    canonical_hash=$(printf "%s" "$canonical_request" | hex_of_sha256)

    # Build string to sign
    local algorithm="AWS4-HMAC-SHA256"
    local credential_scope="$datestamp/$REGION/$SERVICE/aws4_request"
    local string_to_sign="${algorithm}"$'\n'
    string_to_sign+="${timestamp}"$'\n'
    string_to_sign+="${credential_scope}"$'\n'
    string_to_sign+="${canonical_hash}"

    # Derive signing key using HMAC-SHA256 key derivation chain
    # kSecret = HMAC("AWS4" + secret_key, datestamp)
    # kRegion = HMAC(kSecret, region)
    # kService = HMAC(kRegion, service)
    # kSigning = HMAC(kService, "aws4_request")
    local k_secret
    k_secret=$(printf "AWS4%s" "$SECRET_KEY" | xxd -p -c 256)
    local k_date
    k_date=$(sign_hmac_sha256_binary "$k_secret" "$datestamp")
    local k_region
    k_region=$(sign_hmac_sha256_binary "$k_date" "$REGION")
    local k_service
    k_service=$(sign_hmac_sha256_binary "$k_region" "$SERVICE")
    local k_signing
    k_signing=$(sign_hmac_sha256_binary "$k_service" "aws4_request")

    # Calculate final signature
    local signature
    signature=$(sign_hmac_sha256_hex "$k_signing" "$string_to_sign")

    # Return authorization header, timestamp, and host header (pipe-delimited)
    printf "%s|%s|%s" \
        "${algorithm} Credential=${ACCESS_KEY}/${credential_scope}, SignedHeaders=${signed_headers}, Signature=${signature}" \
        "$timestamp" \
        "$host_header"
}

# ===== HTTP REQUEST FUNCTION =====

# Execute HTTP request using curl with AWS Signature V4 authentication
# Args:
#   $1 - HTTP method (PUT, GET, DELETE)
#   $2 - Full URL
#   $3 - Authorization header value
#   $4 - Timestamp (x-amz-date header)
#   $5 - Host header value
#   $6 - Payload hash (x-amz-content-sha256 header)
#   $7 - Content-Type (optional, empty for GET/DELETE)
#   $8 - Data file path (optional, for PUT with body)
# Output: HTTP status code (e.g., 200, 404, 500)
do_request() {
    local method="$1"
    local url="$2"
    local auth="$3"
    local timestamp="$4"
    local host_header="$5"
    local payload_hash="$6"
    local content_type="$7"
    local data_file="$8"
    
    # Build curl command with required headers
    local cmd="curl -s -o /dev/null --connect-timeout 5 --write-out %{http_code} \"$url\""
    cmd="$cmd -X $method"
    cmd="$cmd -H \"Host: $host_header\""
    cmd="$cmd -H \"x-amz-date: $timestamp\""
    cmd="$cmd -H \"x-amz-content-sha256: $payload_hash\""

    # Add optional headers
    [ -n "$content_type" ] && cmd="$cmd -H \"Content-Type: $content_type\""
    cmd="$cmd -H \"Authorization: $auth\""
    [ -n "$data_file" ] && cmd="$cmd --data-binary @\"$data_file\""

    # Execute request and return HTTP status code
    eval "$cmd"
}

# ===== MAIN HEALTH CHECK LOGIC =====

# ===== STEP 1: PUT (Upload Test Object) =====

# Calculate SHA256 hash of the temp file content
UPLOAD_HASH=$(openssl dgst -sha256 -binary "$TMP_FILE" | xxd -p -c 256)
CONTENT_TYPE="application/octet-stream"

# Generate AWS Signature V4 for PUT request
SIGN_OUTPUT=$(aws_sig_v4 "PUT" "/$BUCKET/$OBJECT" "$UPLOAD_HASH" "$CONTENT_TYPE")
AUTH_PUT=$(cut -d'|' -f1 <<< "$SIGN_OUTPUT")
DATE_PUT=$(cut -d'|' -f2 <<< "$SIGN_OUTPUT")
HOST_PUT=$(cut -d'|' -f3 <<< "$SIGN_OUTPUT")

# Execute PUT request (expect 200 OK)
PUT_STATUS=$(do_request "PUT" "$ENDPOINT/$BUCKET/$OBJECT" "$AUTH_PUT" "$DATE_PUT" "$HOST_PUT" "$UPLOAD_HASH" "$CONTENT_TYPE" "$TMP_FILE")

# ===== STEP 2: GET (Download Test Object) =====

# SHA256 hash of empty body (for GET requests with no payload)
EMPTY_HASH="e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

# Generate AWS Signature V4 for GET request
SIGN_OUTPUT=$(aws_sig_v4 "GET" "/$BUCKET/$OBJECT" "$EMPTY_HASH" "")
AUTH_GET=$(cut -d'|' -f1 <<< "$SIGN_OUTPUT")
DATE_GET=$(cut -d'|' -f2 <<< "$SIGN_OUTPUT")
HOST_GET=$(cut -d'|' -f3 <<< "$SIGN_OUTPUT")

# Execute GET request (expect 200 OK)
GET_STATUS=$(do_request "GET" "$ENDPOINT/$BUCKET/$OBJECT" "$AUTH_GET" "$DATE_GET" "$HOST_GET" "$EMPTY_HASH" "" "")

# ===== STEP 3: DELETE (Remove Test Object) =====

# Generate AWS Signature V4 for DELETE request
SIGN_OUTPUT=$(aws_sig_v4 "DELETE" "/$BUCKET/$OBJECT" "$EMPTY_HASH" "")
AUTH_DEL=$(cut -d'|' -f1 <<< "$SIGN_OUTPUT")
DATE_DEL=$(cut -d'|' -f2 <<< "$SIGN_OUTPUT")
HOST_DEL=$(cut -d'|' -f3 <<< "$SIGN_OUTPUT")

# Execute DELETE request (expect 204 No Content)
DEL_STATUS=$(do_request "DELETE" "$ENDPOINT/$BUCKET/$OBJECT" "$AUTH_DEL" "$DATE_DEL" "$HOST_DEL" "$EMPTY_HASH" "" "")

# ===== LOG RESULTS =====

# Log all operation results for troubleshooting
#logger -p local0.notice "S3 Monitor: PUT=$PUT_STATUS GET=$GET_STATUS DEL=$DEL_STATUS"

# ===== EVALUATE HEALTH CHECK RESULT =====

# BIG-IP considers the pool member "UP" only if this script prints "UP" to stdout
# Check if all operations returned expected status codes:
#   PUT: 200 (OK)
#   GET: 200 (OK)
#   DELETE: 204 (No Content)
if [ "$PUT_STATUS" -eq 200 ] && [ "$GET_STATUS" -eq 200 ] && [ "$DEL_STATUS" -eq 204 ]; then
    #logger -p local0.notice "S3 Monitor: UP"
    echo "UP"   

fi

# If any check fails, script exits silently (no "UP" output)
# BIG-IP will mark the pool member as DOWN
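
Before wiring the script into a monitor, it's worth exercising it by hand. BIG-IP supplies the pool member IP and port as positional arguments and the monitor variables as environment variables, so a manual test from the BIG-IP shell looks like this (all values are placeholders):

chmod +x /var/tmp/s3_health.sh
bucket="your-bucket-name" access_key="your-access-key" secret_key="your-secret-key" \
    /var/tmp/s3_health.sh 10.1.1.10 9020
# Prints "UP" if the PUT/GET/DELETE sequence succeeds; prints nothing otherwise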

 

Published Dec 17, 2025
Version 1.0