WS-Exfil-Shield: Catching What WAFs Miss After the 101 Handshake
Problem

WAFs inspect the WebSocket upgrade and individual frames against signatures and content profiles, but they do not correlate behavior across the lifetime of an established WebSocket session. All major WAF vendors document the same gap: inspection stops at the HTTP upgrade handshake; post-upgrade WebSocket frames are not correlated across a session.

Three threat patterns exploit the post-upgrade behavioral blind spot:

**C2 Beacon Timing**: WebSocket C2 channels are documented in active campaigns — e.g. PhantomCaptcha (SentinelLabs, Oct 2025) used a multi-stage WebSocket RAT with wss:// C2 and Base64/JSON commands; LightSpy (Huntress, macOS variant) uses WebSockets for command delivery and control. The behavioral signal is timing — beaconing implants tend toward regular intervals, humans do not. WAFs and most network controls do not analyze inter-frame timing across a session.

**Credential Stuffing Over WebSocket**: 1000 credential pairs over one connection appear as one HTTP event to perimeter controls. Verizon DBIR 2025: compromised credentials were the initial access vector in 22% of breaches; the median daily share of credential stuffing in SSO authentication logs was 19%.

**Exfiltration Signals**: /export paths, Authorization headers, and oversized payloads are visible at the handshake; per-frame inspection (where enabled) sees content but not session-level patterns. BSI Lagebericht 2025 (reporting period July 2024 - June 2025): 72% of analyzed ransomware incidents included a data leak; double extortion (encryption + exfiltration) is the dominant attack model.

Solution

Single iRule. No backend changes. Two-stage behavioral detection: not "what does this frame contain?" but "what does this connection do over time — and does the payload confirm it?"
| Layer | Signal | Method | Action |
|---|---|---|---|
| L1 | Suspicious URI/header | regex + string | BWC throttle + HSL alert |
| L2 | C2 beacon timing (CoV) | online statistics | Sideband check → quarantine/pass + close |
| L3 | High frame rate | sliding window | IP block + TCP close |
| L4 | Quarantined reconnect | sideband verdict | Block/release/honeypot + AI analysis |

**L1 - Exfiltration signals at the handshake**: HTTP_REQUEST checks the upgrade URI against a regex for known exfiltration endpoints (/export, /download, /dump, /backup, /extract) and scans headers for Authorization, X-API-Key, X-Secret. On match: BWC policy attached server-to-client (1 Mbps throttle) + HSL alert. No block — /export might be legitimate. Throttling buys the SOC time to investigate without disrupting a potentially valid operation.

**L2 - CoV² online algorithm**: Welford-inspired, 5 table entries per connection regardless of session length. CoV² (no sqrt() in BIG-IP Tcl) < 0.0225 with >=5 samples = machine-like timing. On detection: the iRule sends timing metadata to the sideband service and waits up to 500ms for a verdict. FALSE_POSITIVE (allowlisted IP) → session continues untouched; CONFIRMED or timeout → quarantine table set + TCP close. The quarantined IP will be routed to the honeypot on its next connection attempt (L4).

**L3 - Frame rate sliding window**: WS_CLIENT_FRAME tracks frame count per connection within a sliding window (3 seconds by default, via cs_window). At 5 frames within one window: TCP close + IP written to blocklist with 1-hour TTL. On any subsequent reconnect attempt, HTTP_REQUEST rejects the connection immediately. The sliding window resets when it expires, allowing legitimate high-frequency bursts to pass without false positives.

**L4 - Two-stage verification with closed-loop AI verdict**: When a quarantined IP reconnects, HTTP_REQUEST issues a QUARANTINE_CHECK to the sideband service before routing. Three outcomes apply at handshake time: PENDING (analysis still in progress) or sideband timeout → connection is silently routed to the honeypot pool via `pool quarantine_pool`.
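To make the L2 math concrete, here is a small standalone Python model of the CoV² check. Python is used only because the running-sum logic is easy to verify off-box; on the BIG-IP the same arithmetic runs in Tcl with the sums held in `table` entries. The interval values are made-up illustrations, not test-corpus data:

```python
# Illustrative Python model of the L2 CoV² check (the production logic is
# the BIG-IP Tcl proc check_beacon_fingerprint with table-backed sums).

def cov_squared(intervals):
    """CoV² = variance / mean², computed from running sums as in the iRule."""
    n = len(intervals)
    sum_t = sum(intervals)
    sum_t2 = sum(i * i for i in intervals)
    mean = sum_t / n
    variance = max(sum_t2 / n - mean * mean, 0.0)  # clamp rounding error
    return variance / (mean * mean)

THRESHOLD_SQ = 0.15 ** 2  # 0.0225, matching static::beacon_cv_threshold

beacon = [5000, 5010, 4990, 5005, 4995]   # machine-like: near-constant 5 s
human = [1200, 8400, 300, 15000, 2600]    # bursty, human-like timing
jittered = [6500, 3500, 6250, 3600, 6450] # ~±30% jitter around 5 s

assert cov_squared(beacon) < THRESHOLD_SQ    # flagged as C2 beacon
assert cov_squared(human) >= THRESHOLD_SQ    # passes untouched
assert cov_squared(jittered) >= THRESHOLD_SQ # jitter defeats the timing signal
```

The third assertion shows why timing alone is insufficient: sufficiently jittered beacons push CoV above the threshold, which is exactly the gap the payload-level AI verdict in L4 covers.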
The attacker's implant keeps running, unaware it is isolated. After 5 frames are collected in the honeypot, the AI analyzer (Claude) classifies payload semantics independent of timing: agent identifiers, command structure, encoding patterns. Claude's verdict is pushed back to the sideband service and cached against the source IP, which closes the loop: CONFIRMED → next QUARANTINE_CHECK returns CONFIRMED, the iRule emits a C2_CONFIRMED HSL event (source=claude, confidence, family) and rejects the handshake; the IP is held in a permanent 24h block. In the test run, the third reconnect after CONFIRMED never completes the WS upgrade — the client sees `InvalidMessage: did not receive a valid HTTP response`. FALSE_POSITIVE → quarantine entry deleted, the IP is released and subsequent sessions continue normally.

This matters because C2 frameworks implement jitter — randomized beacon intervals designed to defeat timing-based detection. At >27% jitter (in our test corpus), CoV rises above threshold and L2 stops firing; the AI layer is jitter-immune because it inspects payload semantics, not cadence. Neither signal alone is sufficient.

Reference result on a `{"t":"ping","id":"c2agent01"}` corpus: verdict=C2, confidence=0.95, family "Generic C2 Heartbeat", with indicators including "structured JSON protocol with type field", "persistent agent identifier across all frames", "repetitive ping pattern (5/5 frames identical)", "no human interaction artifacts", and "deterministic payload, no entropy".

**Sideband**: iRule = sensor, endpoint = actor. Used at two points in the flow: L2 (timing verdict) and L4 (QUARANTINE_CHECK). Pluggable TCP port 9000 listener: SIEM (Splunk/QRadar), SOAR (auto-block via iControl REST), or AI analyzer (reference implementation included). catch{} ensures a non-responding endpoint never delays traffic.

Impact

- Defense in depth with Advanced WAF: WAF guards handshake, signatures, and frame content; WS-Exfil-Shield adds session-level behavioral detection.
- All thresholds in RULE_INIT — tuning without redeployment; sideband endpoint swappable.
- Graduated enforcement: throttle → quarantine → AI verify → block or release. Each layer independently tunable.
- Full audit trail: CONNECT, L1_SIGNAL, C2_BEACON, QUARANTINE, RATE_LIMIT, BLOCKED, C2_CONFIRMED (source=claude, confidence, family) + FALSE_POSITIVE from AI layer.

Demo

https://www.youtube.com/watch?v=-XRipP0p_oc

Code

# WS-Exfil-Shield iRule
# F5 AppWorld Berlin 2026 - iRules Contest
#
# Four-layer WebSocket security with graduated response:
#   Layer 1: Connection-level exfiltration signals (URL, headers) → BWC throttle
#   Layer 2: C2 beacon timing fingerprint (CoV-based behavioral analysis) → quarantine + TCP close
#   Layer 3: High-frequency frame rate detection (credential stuffing) → TCP close + IP block
#   Layer 4: Quarantined reconnect → sideband verdict check → block/release/honeypot
#
# Two-stage verification:
#   Stage 1 (L2): CoV² detects machine-like timing → sideband confirms → quarantine + TCP close
#   Stage 2 (L4): Reconnect → sideband QUARANTINE_CHECK returns Claude payload verdict:
#     CONFIRMED → permanent 24h block + C2_CONFIRMED HSL event + reject
#     FALSE_POSITIVE → quarantine released, session continues normally
#     PENDING/timeout → route to honeypot (Claude still analyzing)
#
# External dependencies (pre-configured on BIG-IP):
#   - BWC policy : ws_exfil_throttle (Network > Bandwidth Controllers, 1 Mbps)
#   - HSL pool   : siem_hsl_pool (LTM > Pools, UDP 514, points to SIEM/syslog receiver)
#
# Requirements: BIG-IP TMOS 21.x

when RULE_INIT {
    set static::beacon_min_samples 5
    set static::beacon_cv_threshold 0.15  ;# CoV < 0.15 = machine-like timing
    set static::cs_frame_limit 5          ;# max frames per cs_window milliseconds
    set static::cs_window 3000            ;# sliding window size in milliseconds
    set static::cs_block_ttl 3600         ;# IP blocklist TTL in seconds
    set static::bwc_policy "ws_exfil_throttle"
}

# ---------------------------------------------------------------------------
# PROCEDURES
# ---------------------------------------------------------------------------

proc check_beacon_fingerprint { conn_id } {
    set count [table lookup "bcn_count_${conn_id}"]
    if { $count eq "" || $count < $static::beacon_min_samples } { return 0 }
    set sum_t  [table lookup "bcn_sumt_${conn_id}"]
    set sum_t2 [table lookup "bcn_sumt2_${conn_id}"]
    set n $count
    set mean [expr { double($sum_t) / $n }]
    if { $mean <= 0 } { return 0 }
    set variance [expr { double($sum_t2) / $n - $mean * $mean }]
    if { $variance < 0 } { set variance 0 }
    # CoV² comparison avoids sqrt (not available in BIG-IP Tcl)
    set cov_sq [expr { $variance / ($mean * $mean) }]
    set cv_thresh_sq [expr { $static::beacon_cv_threshold * $static::beacon_cv_threshold }]
    if { $cov_sq < $cv_thresh_sq } { return 1 }
    return 0
}

proc hsl_send { event data } {
    HSL::send $static::hsl "\{\"event\":\"${event}\",\"ts\":[clock seconds],${data}\}\n"
}

# ---------------------------------------------------------------------------
# EVENTS
# ---------------------------------------------------------------------------

when HTTP_REQUEST {
    if { [string tolower [HTTP::header "Upgrade"]] eq "websocket" } {
        if { ![info exists static::hsl] } {
            set static::hsl [HSL::open -proto UDP -pool siem_hsl_pool]
        }
        set conn_id "[IP::client_addr]:[TCP::client_port]"
        set client_ip [IP::client_addr]
        set uri [HTTP::uri]

        # Blocklist check (Layer 3 + confirmed C2 carry-over)
        if { [table lookup "cs_blocked_${client_ip}"] ne "" } {
            log local0.warning "WS-Exfil-Shield: BLOCKED ip=$client_ip conn=$conn_id"
            call hsl_send "BLOCKED" "\"ip\":\"$client_ip\",\"conn\":\"$conn_id\""
            reject
            return
        }

        # Layer 4: Quarantined IP reconnect — check sideband for Claude verdict.
        #   CONFIRMED: Claude analyzed honeypot frames and confirmed C2 → permanent block.
        #   FALSE_POSITIVE: Claude found no C2 indicators → release quarantine, continue normally.
        #   PENDING/timeout: Claude still analyzing → keep routing to honeypot.
        if { [table lookup "quar_${client_ip}"] ne "" } {
            set quar_action "honeypot"
            catch {
                set sb [connect -timeout 100 -protocol TCP 10.10.2.1 9000]
                if { $sb ne "" } {
                    send -timeout 100 $sb "{\"conn_id\":\"$conn_id\",\"ip\":\"$client_ip\",\"threat\":\"QUARANTINE_CHECK\"}\n"
                    set qverdict [recv -timeout 500 $sb]
                    close $sb
                    if { [string match "*\"verdict\":\"CONFIRMED\"*" $qverdict] } {
                        set quar_action "block"
                    } elseif { [string match "*\"verdict\":\"FALSE_POSITIVE\"*" $qverdict] } {
                        set quar_action "release"
                    }
                }
            }
            if { $quar_action eq "block" } {
                table set "cs_blocked_${client_ip}" 1 86400 86400
                log local0.warning "WS-Exfil-Shield: C2_CONFIRMED ip=$client_ip conn=$conn_id"
                call hsl_send "C2_CONFIRMED" "\"ip\":\"$client_ip\",\"conn\":\"$conn_id\""
                reject
                return
            } elseif { $quar_action eq "release" } {
                table delete "quar_${client_ip}"
                log local0.info "WS-Exfil-Shield: FALSE_POSITIVE ip=$client_ip conn=$conn_id"
                call hsl_send "FALSE_POSITIVE" "\"ip\":\"$client_ip\",\"conn\":\"$conn_id\""
                # fall through to normal processing
            } else {
                log local0.info "WS-Exfil-Shield: QUARANTINE ip=$client_ip conn=$conn_id"
                call hsl_send "QUARANTINE" "\"ip\":\"$client_ip\",\"conn\":\"$conn_id\""
                pool quarantine_pool
                return
            }
        }

        # Layer 1: Exfiltration signals in WebSocket upgrade request
        set threat ""
        if { [regexp -nocase {/(export|download|dump|backup|extract)} $uri] } {
            set threat "EXFIL_ENDPOINT"
        }
        if { $threat eq "" } {
            foreach hdr { Authorization X-API-Key X-Secret } {
                if { [HTTP::header $hdr] ne "" } { set threat "SENSITIVE_HEADER"; break }
            }
        }
        if { $threat ne "" } {
            log local0.warning "WS-Exfil-Shield: L1_SIGNAL threat=$threat ip=$client_ip uri=$uri"
            call hsl_send "L1_SIGNAL" "\"ip\":\"$client_ip\",\"conn\":\"$conn_id\",\"threat\":\"$threat\",\"uri\":\"$uri\""
            # Throttle server→client bandwidth to slow active exfiltration
            BWC::policy attach $static::bwc_policy
        }

        table set "ws_start_${conn_id}" [clock clicks -milliseconds] indef 3600
        log local0.info "WS-Exfil-Shield: CONNECT ip=$client_ip conn=$conn_id uri=$uri"
        call hsl_send "CONNECT" "\"ip\":\"$client_ip\",\"conn\":\"$conn_id\",\"uri\":\"$uri\""
    }
}

when WS_CLIENT_FRAME {
    set conn_id "[IP::client_addr]:[TCP::client_port]"
    set client_ip [IP::client_addr]
    set now [clock clicks -milliseconds]

    # --- Layer 2: C2 Beacon Timing Fingerprint ---
    set last_ts [table lookup "bcn_last_${conn_id}"]
    if { $last_ts ne "" } {
        set interval [expr { $now - $last_ts }]
        set count  [table lookup "bcn_count_${conn_id}"]
        set sum_t  [table lookup "bcn_sumt_${conn_id}"]
        set sum_t2 [table lookup "bcn_sumt2_${conn_id}"]
        if { $count eq "" }  { set count 0 }
        if { $sum_t eq "" }  { set sum_t 0 }
        if { $sum_t2 eq "" } { set sum_t2 0 }
        if { $interval > 0 } {
            incr count
            set sum_t  [expr { $sum_t + $interval }]
            set sum_t2 [expr { $sum_t2 + $interval * $interval }]
            table set "bcn_count_${conn_id}" $count indef 3600
            table set "bcn_sumt_${conn_id}" $sum_t indef 3600
            table set "bcn_sumt2_${conn_id}" $sum_t2 indef 3600
        }
        if { [call check_beacon_fingerprint $conn_id] } {
            set mean_interval [expr { $sum_t / $count }]
            log local0.warning "WS-Exfil-Shield: C2_BEACON ip=$client_ip conn=$conn_id samples=$count mean_interval=${mean_interval}ms"
            call hsl_send "C2_BEACON" "\"ip\":\"$client_ip\",\"conn\":\"$conn_id\",\"samples\":$count,\"mean_interval_ms\":$mean_interval"
            # Stage 1 sideband: timing verdict determines quarantine vs pass
            #   CONFIRMED/timeout → quarantine (honeypot collects payload for Stage 2 Claude analysis)
            #   FALSE_POSITIVE → session continues untouched
            set action "quarantine"
            catch {
                set sb [connect -timeout 100 -protocol TCP 10.10.2.1 9000]
                if { $sb ne "" } {
                    send -timeout 100 $sb "{\"conn_id\":\"$conn_id\",\"ip\":\"$client_ip\",\"threat\":\"C2_BEACON\",\"mean_interval_ms\":$mean_interval}\n"
                    set verdict [recv -timeout 500 $sb]
                    close $sb
                    if { [string match "*\"verdict\":\"FALSE_POSITIVE\"*" $verdict] } {
                        set action "pass"
                    }
                }
            }
            if { $action eq "quarantine" } {
                table set "quar_${client_ip}" 1 indef 1800
                TCP::close
            }
            # action "pass": allowlisted IP — session continues untouched
            return
        }
    }
    table set "bcn_last_${conn_id}" $now indef 3600

    # --- Layer 3: High-frequency frame rate (credential stuffing indicator) ---
    set window_start [table lookup "cs_window_${conn_id}"]
    set frame_count  [table lookup "cs_frames_${conn_id}"]
    if { $window_start eq "" } {
        set window_start $now
        table set "cs_window_${conn_id}" $now indef 3600
    }
    if { $frame_count eq "" } { set frame_count 0 }
    set elapsed [expr { $now - $window_start }]
    if { $elapsed >= $static::cs_window } {
        table set "cs_window_${conn_id}" $now indef 3600
        table set "cs_frames_${conn_id}" 1 indef 3600
    } else {
        incr frame_count
        table set "cs_frames_${conn_id}" $frame_count indef 3600
        if { $frame_count >= $static::cs_frame_limit } {
            log local0.warning "WS-Exfil-Shield: RATE_LIMIT ip=$client_ip conn=$conn_id frames=${frame_count} in ${elapsed}ms"
            call hsl_send "RATE_LIMIT" "\"ip\":\"$client_ip\",\"conn\":\"$conn_id\",\"frames\":$frame_count,\"elapsed_ms\":$elapsed"
            table set "cs_blocked_${client_ip}" 1 $static::cs_block_ttl $static::cs_block_ttl
            TCP::close
            return
        }
    }
}

Layered Virtual Server iRule Solution for ICAP File Upload Scanning on BIG-IP
Problem

Our client runs a WebApp whose customers can upload documents, load-balanced by a BIG-IP cluster. The client wanted to scan uploaded files through an existing ICAP solution. After several tests with the standard ICAP and Request Adapt solution, we noticed the application workflow breaks when a virus is detected: the upload never completes. Troubleshooting with the client narrowed the root cause down to the ADAPT profile returning "respond". The BIG-IP sends this response to the customer endpoint while no response is sent to the backend server, which breaks the upload workflow.

Solution

With the root cause found, we implemented a layered approach with two virtual servers. The first virtual server acts as the outer layer. The initial request is processed by an LTM Policy which checks whether the request is a file upload; this sets a pointer for POST requests to the endpoint, which triggers the layered iRule processing. A GET request is bypassed directly to the Content VS behind the outer layer. If a POST is received, we save the HTTP request in "req_headers" and send the request to the Content VS. In the Content VS iRule, the request is again checked for POST or GET and the ADAPT profile is activated accordingly. If the ICAP result is "respond", a custom response is crafted as HTTP 406 with an X-Virus-Found header. The responses are sent back through the layered VS. If the status code equals 406 and the header X-Virus-Found is present, the iRule checks whether the request was already resent to the backend app. If it was not, HTTP::retry is used to resend the request to the backend, without the malicious content.
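The decision flow of the two iRules can be modeled off-box as plain functions. The sketch below is a hypothetical Python analogue for reasoning about the retry behavior only; the helper names (`outer_layer`, `content_vs`) and the "EICAR" stand-in for the ICAP verdict are invented for illustration, and the real logic uses HTTP::retry and the 406/X-Virus-Found marker response:

```python
# Hypothetical model of the layered-VS retry flow (names invented; the
# production logic is the pair of iRules shown in the Code section).

def outer_layer(request, send_to_content_vs):
    """One pass through the outer VS: forward, retry once without body on a virus verdict."""
    retries = 0
    saved_request = dict(request)           # analogue of: set req_headers [HTTP::request]
    response = send_to_content_vs(request)
    if response["status"] == 406 and "X-Virus-Found" in response["headers"] and retries == 0:
        saved_request["body"] = ""          # analogue of removing Content-Length → empty POST
        retries += 1
        response = send_to_content_vs(saved_request)  # analogue of: HTTP::retry $req_headers
    return response

def content_vs(request):
    """Content VS + ICAP: a 'respond' verdict becomes a crafted 406 marker response."""
    if "EICAR" in request.get("body", ""):  # stand-in for [ADAPT::result] contains "respond"
        return {"status": 406, "headers": {"X-Virus-Found": "Virus"}}
    return {"status": 200, "headers": {}}

clean = outer_layer({"method": "POST", "body": "invoice.pdf"}, content_vs)
infected = outer_layer({"method": "POST", "body": "EICAR test"}, content_vs)
assert clean["status"] == 200
assert infected["status"] == 200  # resent without the malicious body, so the app workflow completes
```

The key property the model captures: the malicious upload is never delivered, yet the backend still receives a (now empty) request, so the application's upload workflow does not break.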
Impact

As the client's web application was old and there was no cost-effective way to implement a workaround on the application side or to purchase a new ICAP solution, the iRules combined with the LTM Policy helped the client scan uploads for malicious content, keeping the app safe while using existing technologies.

Code

iRule VS Layered (Outer Virtual Server)

when RULE_INIT {
    # Set to "1" for debugging
    set static::debug 0
}

when CLIENT_ACCEPTED {
    # Initialize the retry variable. It is required to resend the request.
    set retries 0
    if { $static::debug } { log local0. "*** Retry set ***" }
}

when HTTP_REQUEST {
    if { $static::debug } { log local0. "*** Layered VS: Request received Number $retries ***" }
    # Entry condition from the LTM Policy and check of the file size
    if { ([info exists avscan]) and [HTTP::header Content-Length] < 10000000 } {
        if { $static::debug } { log local0. "*** Layered VS: Found POST ***" }
        if { $static::debug } { log local0. "*** Layered VS: Content-Length is [HTTP::header Content-Length] ***" }
        # Check whether this is a retried request
        if { $retries == 1 } {
            # If the request is retried, remove the Content-Length header
            # so that an empty POST is sent
            HTTP::header remove Content-Length
            if { $static::debug } { log local0. "*** Retried Request: Content-Length Header removed ***" }
        }
        # Store all request headers in a variable.
        # This variable is needed later for the retry.
        set req_headers [HTTP::request]
        if { $static::debug } { log local0. "*** Layered VS: Got Request $req_headers ***" }
        # Forward to the actual virtual server with the pool
        virtual icap_content_vs
    }
    # Handle all other requests, e.g. GET, by sending them directly to the VS with the pool
    else {
        if { $static::debug } { log local0. "*** Layered VS: Found GET ***" }
        virtual icap_content_vs
    }
}

when HTTP_RESPONSE {
    if { $static::debug } { log local0. "*** Layered VS: Response Received ***" }
    # Check whether the server response came from the iRule on the Content VS
    if { [HTTP::status] equals "406" and [HTTP::header exists "X-Virus-Found"] } {
        if { $static::debug } { log local0. "*** Layered VS: Found Virus ***" }
        if { $static::debug } { log local0. "*** Layered VS: Request Headers are $req_headers ***" }
        # If the retry counter is 0, resend the request, but without content
        if { $retries == 0 } {
            if { $static::debug } { log local0. "*** Retrying Request ***" }
            HTTP::retry $req_headers
            incr retries
            return
        }
    }
    # Reset state after response processing
    set retries 0
    unset req_headers
}

Content VS iRule

when RULE_INIT {
    # Set to "1" for debugging
    set static::debug 0
}

when HTTP_REQUEST {
    if { $static::debug } { log local0. "*** Content VS: Request Received ***" }
    # Entry condition from the LTM Policy and check of the file size
    if { ([info exists avscan]) and [HTTP::header exists Content-Length] and [HTTP::header Content-Length] < 10000000 } {
        if { $static::debug } { log local0. "*** File found ***" }
        if { $static::debug } { log local0. "*** Content-Length is [HTTP::header Content-Length] ***" }
        # Enable the ADAPT profile to access the internal virtual server
        ADAPT::enable enable
    } else {
        ADAPT::enable disable
    }
}

when ADAPT_REQUEST_RESULT {
    if { $static::debug } { log local0. "*** ADAPT Result is: [ADAPT::result] ***" }
    # Check the result returned by the ICAP server (respond case)
    if { [ADAPT::result] contains "respond" } {
        if { $static::debug } { log local0. "*** Modified ADAPT Result is: [ADAPT::result] ***" }
        # If the ICAP return value indicates that a virus was detected,
        # send a manual response and trigger the retry function
        # in the ir_AVScan_Layered iRule
        HTTP::respond 406 -version auto X-Virus-Found "Virus"
    }
}

Demo

Would have loved to create a demo. Unfortunately I have no access to the App.

Generic iRule based on datagroup parsing
This iRule grew out of a project migrating an Apache configuration to F5 BIG-IP. Various constraints led to this approach: the configuration elements from the Apache conf are stored in a datagroup, which the iRule then parses to dynamically derive the rules to apply to traffic. The rules range from simple to complex, but all are stored uniformly in the datagroup, which can be modified by people unfamiliar with F5 without impacting the rest of the configuration.

SUPER-WEBSOCKET-HANDSHAKE-LOGGER™® (SWHL) iRule
This contest submission covers the so-called SUPER-WEBSOCKET-HANDSHAKE-LOGGER™® (SWHL) iRule. The genius idea behind this iRule is to log and correlate every single WebSocket handshake via the WS_REQUEST and WS_RESPONSE events. The iRule uses well-selected iRule syntax and has been carefully tested on TMOS v16, v17 and v21 units.

How to use:

1. Save the iRule to your device.
2. Attach it to your virtual server.
3. Adjust the $static::super_websocket_handshake_logger(DEBUG_SOURCE) variable to match your client IP address or client subnet.
4. Perform a WebSocket request.
5. Open your bash session and type:

~# tail -f /var/log/ltm | grep "SUPER-WEBSOCKET-HANDSHAKE-LOGGER"

Enjoy the lovely iRule!

when RULE_INIT {
    # SUPER-WEBSOCKET-HANDSHAKE-LOGGER iRule by Kai Wilke
    set static::super_websocket_handshake_logger(DEBUG_SOURCE) "10.11.12.0/24" ;# CIDR-Notation
}

when WS_REQUEST {
    set swl_requestID ""
    if { [IP::addr [IP::client_addr] equals $static::super_websocket_handshake_logger(DEBUG_SOURCE)] == 0 } then {
        return
    }
    set swl_requestID "[clock clicks][TMM::cmp_unit]"
    log -noname local0.debug "SUPER-WEBSOCKET-HANDSHAKE-LOGGER | $swl_requestID | [IP::client_addr]:[TCP::client_port] -> [IP::local_addr]:[TCP::local_port] | WS-REQUEST | [set httpRequest "[HTTP::method] [HTTP::host][HTTP::uri]"]"
    foreach header [HTTP::header names] {
        log -noname local0.debug "SUPER-WEBSOCKET-HANDSHAKE-LOGGER | $swl_requestID | [IP::client_addr]:[TCP::client_port] -> [IP::local_addr]:[TCP::local_port] | WS-REQUEST-HEADER | $header: [HTTP::header value $header]"
    }
}

when WS_RESPONSE {
    if { $swl_requestID eq "" } then { return }
    log -noname local0.debug "SUPER-WEBSOCKET-HANDSHAKE-LOGGER | $swl_requestID | [IP::local_addr]:[TCP::local_port] -> [IP::client_addr]:[TCP::client_port] | WS-RESPONSE | $httpRequest"
    foreach header [HTTP::header names] {
        log -noname local0.debug "SUPER-WEBSOCKET-HANDSHAKE-LOGGER | $swl_requestID | [IP::local_addr]:[TCP::local_port] -> [IP::client_addr]:[TCP::client_port] | WS-RESPONSE-HEADER | $header: [HTTP::header value $header]"
    }
}

Cheers, Kai

WS-Shield: WebSocket Abuse Detection & Adaptive Enforcement Gateway
Problem

WebSocket traffic introduces a fundamentally different security model from traditional HTTP. After the initial upgrade request, communication becomes long-lived, bidirectional, and frame-based, with no ongoing request/response structure for conventional controls to inspect.

Existing WebSocket protections already provide important controls such as payload signature inspection, frame and message size limits, protocol compliance, origin enforcement, and structured content validation. These protections are valuable during the upgrade phase and for known attacks within frame content. The remaining challenge is per-client behavioral analysis across live frame streams. Once a session is established, the protocol itself offers no native mechanism to evaluate how a specific client behaves over time:

- How fast frames are being sent
- Whether payloads are repetitive and automation-like
- Whether oversized frames are being used for resource exhaustion
- Whether abusive users reconnect across clustered devices
- Whether cumulative risk should trigger proportional enforcement

Common session-layer abuse patterns include:

- High-rate message floods from a single client
- Low-and-slow bots staying below rate thresholds
- Oversized frames intended to exhaust backend resources
- Reconnect evasion across clustered load balancers
- Lack of adaptive per-client scoring during live sessions

This is where iRules are uniquely positioned. Running directly in the F5 TMM fast path, iRules can inspect every WebSocket frame in real time, maintain per-client state across the session lifetime, and enforce graduated responses, without application changes, external agents, or protocol redesign. WS-Shield extends policy enforcement from the upgrade handshake into the active WebSocket session itself.

Solution

WS-Shield is a five-layer behavioral enforcement engine implemented entirely in iRules.
It continuously evaluates client behavior across WebSocket frames, calculates a cumulative abuse score from multiple independent signals, and applies proportional responses based on threat level.

Layer 1 — Upgrade Gate (HTTP_REQUEST)

Five checks run before the 101 Switching Protocols response is sent:

- Source IP checked against ws_blocked_ips
- Origin validated against ws_allowed_origins
- Authentication token required: Sec-WebSocket-Protocol: Bearer.<jwt> or ?token= query parameter
- Token validated through a sideband HTTP call (200 / 401), with configurable fail-open if the auth service is unavailable
- Redis cluster pre-check: previously abusive clients can be blocked before handshake completion

Layer 2 — Rate Analysis (WS_CLIENT_DATA)

Per-client message volume is tracked in a sliding time window using session table state. The projected frame rate contributes to the abuse score. Detects:

- Floods
- Bursts
- Reconnect storms
- Sustained automation traffic

Layer 3 — Payload Size Analysis

Frame size is scored independently of rate. A single oversized frame can raise risk even if sent slowly. This detects low-frequency resource exhaustion attempts.

Layer 4 — Entropy / Repetition Analysis

A lightweight unique-byte approximation evaluates the first 512 bytes of each payload. Low-entropy traffic such as repetitive templates or bot-generated filler contributes to the abuse score. This detects slow bots that intentionally remain below rate thresholds. Tested result: a client sending repetitive 300-byte payloads every 0.5 seconds was disconnected at score 100 while still below all configured rate thresholds.

Layer 5 — Cumulative Score with Decay

Signals from rate, payload size, and entropy feed a weighted abuse score. Clean frames reduce the score gradually, allowing legitimate bursts to recover naturally while sustained abuse escalates.

Adaptive Behavioral Scoring: The "Leaky Bucket" Model

WS-Shield moves away from binary blocking (Allow vs. Deny) and adopts a fluid reputation system.
We treat the cumulative abuse score like a leaky bucket.

1. The Scoring Dynamics

- The Inflow (Risk Accumulation): Every frame is a potential "drop" of risk. If a client sends a 100KB frame, we add +50 to the bucket. If they send a low-entropy (repetitive) bot payload, we add +25.
- The Leak (Automatic Decay): Every time the client behaves — sending a "clean" frame that passes all checks — the score decays by 1.
- The Outcome: This distinguishes between a malicious actor (who fills the bucket faster than it can leak) and a power user (who might have a temporary burst that decays back to a "Green Zone" naturally).

2. Graduated Enforcement Tiers

The iRule maps the "water level" of the bucket to four distinct enforcement actions, ensuring we only use the "heavy hammer" when absolutely necessary.

| Score | Enforcement State | Action | Business Logic |
|---|---|---|---|
| 0-29 | TRUSTED | None | Normal operational flow. |
| 30-59 | SUSPICIOUS | Warn (HSL::send) | Log metadata to SIEM for behavioral profiling. |
| 60-79 | RESTRICTED | Throttle (BWC::policy attach) | Adaptive throttling: preserve the session but limit bandwidth to protect the backend. |
| 80-99 | SUPPRESSED | Drop (WS::frame drop) | Silent discard: the client thinks it is sending data, but it never reaches the server. |
| 100+ | TERMINATED | Disconnect (WS::disconnect) | Hard block: RFC 6455 close code 1008 issued and IP blacklisted in Redis. |

Cluster-Wide Threat Sharing

Threat state is stored in Redis using automatic expiry. On new connections, prior threat state can be consulted before application data is exchanged. Benefits:

- Reconnect deterrence
- Cross-node reputation sharing
- Immediate pre-enforcement
- Consistent cluster behavior

A client disconnected on one device cannot simply reconnect elsewhere and start clean.

Outbound DLP Controls

Server-to-client text frames can also be inspected. Example controls:

- Payment card (PAN) detection
- Sensitive data suppression
- Policy-based frame dropping

Binary and control frames pass normally.
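The scoring dynamics above can be sketched off-box as a small Python model. The +50/+25 inflows, the -1 decay, and the tier boundaries come from this write-up; the entropy cutoff of 1.0 and the helper names (`score_frame`, `tier`) are assumptions made for illustration, not the iRule's actual constants:

```python
# Illustrative model of WS-Shield's leaky-bucket scoring. Weights (+50
# oversize, +25 low-entropy, -1 decay) and tiers follow the write-up;
# the entropy cutoff (< 1.0) is an assumed value for this sketch.

def unique_byte_entropy(payload: bytes) -> float:
    """Unique-byte approximation on the first 512 bytes, scaled to 0-8
    (TMOS expr has no log(), so Shannon entropy is only approximated)."""
    window = payload[:512]
    if not window:
        return 0.0
    return 8.0 * len(set(window)) / 256

def score_frame(score: int, size: int, entropy: float) -> int:
    dirty = False
    if size > 100_000:                 # oversized frame
        score += 50
        dirty = True
    if entropy < 1.0:                  # repetitive, bot-like payload (assumed cutoff)
        score += 25
        dirty = True
    if not dirty:
        score = max(score - 1, 0)      # clean frame: the bucket leaks
    return score

def tier(score: int) -> str:
    if score >= 100: return "TERMINATED"   # WS::disconnect, close code 1008
    if score >= 80:  return "SUPPRESSED"   # WS::frame drop
    if score >= 60:  return "RESTRICTED"   # BWC throttle
    if score >= 30:  return "SUSPICIOUS"   # HSL warn
    return "TRUSTED"

# A slow bot repeating the same 300-byte template stays under every rate
# threshold but accumulates +25 per frame and reaches TERMINATED at frame 4.
score = 0
template = b"AB" * 150
for _ in range(4):
    score = score_frame(score, len(template), unique_byte_entropy(template))
assert score == 100 and tier(score) == "TERMINATED"
```

Note how this reproduces the Layer 4 test result quoted earlier: a repetitive 300-byte payload never trips a rate threshold, yet the low-entropy inflow alone fills the bucket to 100.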
Architecture Overview Impact Better Protection for Real-Time Apps Designed for: AI streaming interfaces Financial trading feeds Chat / collaboration systems Gaming backends IoT control channels These are environments where a single abusive client can impact many legitimate users. Reduced Backend Load Adaptive throttling and frame dropping suppress abusive traffic before it reaches origin servers. Faster Incident Response Structured JSON logs provide immediate visibility into: Who was abusive Why action was taken Which thresholds triggered enforcement No Application Changes Required Protection is implemented entirely in the traffic layer. No SDKs, agents, or backend modifications required. Reusable and Extensible New signals can be added easily: Geo scoring JWT claims logic URI-based weighting AI token controls Additional DLP patterns Operational Simplicity Runs on existing F5 infrastructure using native iRules capabilities. Minimal external dependencies: Redis lightweight auth service No new hardware or architecture redesign required. Code # ============================================================================= # WS-Shield: WebSocket Abuse Detection & Adaptive Enforcement Gateway # ============================================================================= # Author : Kostas Injeyan + vibe coding # Version : 5.0 (tested on TMOS 21.x) # TMOS : 21.x+ # Tags : appworld 2026, berlin, irules # # OVERVIEW # -------- # Existing WebSocket protections already provide important controls such as # payload signature inspection, frame/message size enforcement, protocol # validation, origin checks, structured content inspection, and configurable # timing thresholds. # # Additional volumetric protections can detect abnormal HTTP transaction # patterns and server stress during traditional request/response traffic # and during the initial WebSocket upgrade phase. 
#
# However, long-lived WebSocket sessions introduce a different traffic model:
# persistent bidirectional frame streams where abuse often appears as:
#   - Per-client message floods
#   - Oversized payload abuse
#   - Low-and-slow repetitive bot traffic
#   - Reconnect evasion across clustered devices
#
# These session behaviors benefit from adaptive controls such as:
#   - Per-client sliding-window behavioral scoring
#   - Multi-factor scoring (rate + size + entropy)
#   - Graduated enforcement (warn → throttle → drop → close)
#   - Dynamic bandwidth controls tied to abuse score
#   - Shared cluster threat intelligence
#
# WS-Shield extends enforcement beyond the handshake by applying real-time
# adaptive controls throughout the active WebSocket session.
#
# SECURITY MODEL
# --------------
# Layer 1  Upgrade Gate
#          Origin validation, token presence, auth sideband validation,
#          Redis reputation pre-check before HTTP 101 response
#
# Layer 2  Rate Analysis
#          Sliding-window per-client message rate detection
#
# Layer 3  Payload Size Analysis
#          Oversized frames scored independently of rate
#
# Layer 4  Entropy / Repetition Analysis
#          Detects slow bots sending repetitive low-variance payloads
#
# Layer 5  Cumulative Score with Decay
#          Rate, size, and entropy signals feed a weighted abuse score.
#          Clean frames gradually reduce score while sustained abuse escalates.
#
# ENFORCEMENT MODEL
# -----------------
# Warn → BWC Throttle → Silent Frame Drop → RFC 6455 Close (1008)
#
# All actions emit structured JSON logs to HSL / SIEM.
#
# WHAT IT DOES
# ------------
#  1. Validates Origin against ws_allowed_origins
#  2. Requires token (Bearer subprotocol or query parameter)
#  3. Validates token via auth sideband call (200 / 401 / fail-open)
#  4. Checks Redis reputation before handshake completion
#  5. Tracks per-client frame rate in sliding windows
#  6. Scores oversized payloads independently
#  7. Detects repetitive low-entropy bot traffic
#  8. Maintains cumulative abuse score with decay
#  9. Applies graduated enforcement tiers
# 10. Synchronizes threat state to Redis
# 11. Dynamically attaches BWC throttling
# 12. Inspects outbound frames for PAN / DLP patterns
# 13. Sends structured audit events to HSL
#
# BONUS ELEMENTS (contest rubric)
# --------------------------------
# [x] Procedures
#     ws_entropy       — unique-byte entropy approximation
#                        (TMOS expr has no log() — Shannon not directly
#                        computable; approximation preserves 0-8 scale
#                        and correctly identifies repetitive bot payloads)
#     ws_score         — centralised scoring weights, single edit to retune
#     ws_log           — structured JSON to HSL + local0 fallback
#     ws_redis_set     — SETEX via TCP sideband, auto-expiry, fail-safe
#     ws_redis_get     — GET via TCP sideband, graceful on outage
#     ws_auth_validate — token validation via HTTP GET sideband
#                        returns 1 (valid) / 0 (rejected) / -1 (unreachable)
#
# [x] Sideband (two independent uses)
#     Redis: SETEX/GET over bare connect/send/recv for cluster threat state.
#       Tested: seeding wsshield:<ip>=100:close causes HTTP_REQUEST to return
#       403 before handshake — cluster pre-block confirmed.
#     Auth service: HTTP GET /validate?token=<value> over TCP sideband.
#       Tested: invalid token → 401, unreachable → fail-open with log.
#
# [x] Bandwidth Controller
#     BWC::policy attach per abusive session at SCORE_THROTTLE.
#     Pre-attached at CLIENT_ACCEPTED for Redis-flagged clients.
#     Tested: Active Policies=1, Packets(dropped)=251, Bytes(dropped)=16.8K
#     at max-user-rate=100kbps under sustained flood.
#
# EVENT FLOW
# ----------
# Requires WebSocket profile attached clientside AND serverside on the VS.
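The unique-byte approximation behind ws_entropy can be compared against true Shannon entropy off-box. The following is a standalone Python sketch, not part of the iRule; the sample payloads are illustrative:

```python
import math

def ws_entropy_approx(payload: bytes) -> float:
    """Mirror of the iRule's ws_entropy: (unique bytes / sample length) * 8.0."""
    sample = payload[:512]
    if not sample:
        return 8.0  # empty payloads are not treated as suspicious
    return (len(set(sample)) / len(sample)) * 8.0

def shannon_entropy(payload: bytes) -> float:
    """True Shannon entropy in bits per byte, for comparison."""
    if not payload:
        return 0.0
    n = len(payload)
    counts = {}
    for b in payload:
        counts[b] = counts.get(b, 0) + 1
    return -sum(c / n * math.log2(c / n) for c in counts.values())

bot = b"A" * 300   # repetitive bot payload, as in the test evidence
txt = b'{"user":"alice","action":"msg","body":"hello world"}'
print(ws_entropy_approx(bot))  # far below ENTROPY_MIN = 1.5
print(ws_entropy_approx(txt))  # above the threshold, not flagged
```

Both metrics agree on the repetitive case (0 bits of variety), which is all the detection layer needs; the approximation only diverges for high-variety payloads, where nothing is flagged anyway.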
#
# CLIENT_ACCEPTED      — init table state; open HSL handle; pre-attach BWC
#                        if Redis shows this IP already above THROTTLE
# HTTP_REQUEST         — origin → token present → auth sideband → Redis block
# WS_CLIENT_FRAME      — pre-drop if score >= DROP; else WS::collect frame
# WS_CLIENT_DATA       — rate + size + entropy; score update; set disc_flag
# WS_CLIENT_FRAME_DONE — WS::disconnect if disc_flag=1 (only valid here)
# WS_SERVER_FRAME      — collect text frames (opcode 1) for DLP
# WS_SERVER_DATA       — PAN regex; drop matching frames
# CLIENT_CLOSED        — final score → Redis; explicit table cleanup
#
# DEPENDENCIES & SETUP
# --------------------
# All objects below must exist before attaching the iRule to a VS.
#
# 1. DATA GROUPS
#
#    tmsh create ltm data-group internal ws_allowed_origins type string records add {
#      "https://yourapp.com" { }
#    }
#    tmsh create ltm data-group internal ws_blocked_ips type string
#
#    Add IP to block list at any time:
#    tmsh modify ltm data-group internal ws_blocked_ips records add { "10.1.2.3" { } }
#
# 2. BANDWIDTH CONTROLLER POLICY
#
#    tmsh create net bwc policy ws_abuse_bwc { dynamic enabled max-user-rate 1mbps }
#
# 3. HSL LOG POOL (ws_log also writes to local0 as fallback)
#
#    tmsh create ltm pool ws_hsl_pool members add { 192.168.1.100:514 { } }
#
# 4. VIRTUAL SERVER PROFILES
#    WebSocket profile MUST be attached both clientside and serverside —
#    without both, WS_CLIENT_DATA will not fire.
#
#    If using a custom HTTP profile with response-headers-permitted, add:
#    Upgrade Connection Sec-WebSocket-Accept — otherwise 101 headers are
#    stripped and clients fail to complete the handshake.
#
# 5. REDIS (any RESP-compatible instance reachable from BIG-IP data plane)
#
#    docker run -d -p 6379:6379 redis:alpine
#    redis-cli -h <REDIS_HOST> -p 6379 ping      # expect: PONG
#
#    Test cluster pre-block:
#    redis-cli -h <REDIS_HOST> -p 6379 setex "wsshield:10.1.2.3" 3600 "100:close"
#
# 6. AUTH SERVICE (HTTP GET /validate?token=<value> → 200 or 401)
#
#    A mock Flask auth service is provided (auth_server.py).
#    Deploy with Docker Compose on any host reachable from BIG-IP:
#
#    docker run -d -p 8888:8888 -v /path/to/auth_server.py:/app/auth_server.py \
#      python:3.11-alpine sh -c "pip install flask -q && python3 /app/auth_server.py"
#
#    Test:
#    curl "http://<AUTH_HOST>:8888/validate?token=abc123"    # → 200
#    curl "http://<AUTH_HOST>:8888/validate?token=bad"       # → 401
#
# 7. ATTACH THE IRULE
#
#    tmsh modify ltm virtual <vs_name> rules add { websocket }
#    tmsh save sys config
# =============================================================================

when RULE_INIT {

    # --- Rate analysis (sliding window) ----------------------------------------
    set ::RATE_WINDOW   10   ;# seconds — window width
    set ::RATE_WARN     60   ;# projected msgs/window — score += 20
    set ::RATE_THROTTLE 120  ;# projected msgs/window — score += 40 + BWC
    set ::RATE_DROP     200  ;# projected msgs/window — score += 60

    # --- Payload size (per single frame) ---------------------------------------
    set ::PAYLOAD_WARN 8192   ;# bytes — score += 15
    set ::PAYLOAD_DROP 65536  ;# bytes — score += 50

    # --- Entropy (unique-byte ratio, 0-8 scale) --------------------------------
    # TMOS expr has no log() — approximated as (unique_bytes/len)*8.0
    # Repetitive bot payloads ("AAA...") → near 0; normal text → 3-5
    set ::ENTROPY_MIN 1.5 ;# below this — score += 25

    # --- Cumulative score thresholds ------------------------------------------
    set ::SCORE_WARN     30   ;# log only
    set ::SCORE_THROTTLE 60   ;# BWC attach + Redis write
    set ::SCORE_DROP     80   ;# silent frame drop
    set ::SCORE_CLOSE    100  ;# RFC 6455 close code 1008

    # --- Redis sideband -------------------------------------------------------
    # Bare connect/send/recv — correct TMOS sideband API (no SIDEBAND:: namespace)
    set ::REDIS_HOST "192.168.120.220" ;# change to your own
    set ::REDIS_PORT 6379
    set ::REDIS_PFX  "wsshield:"
    set ::REDIS_TTL  3600

    # --- Auth service sideband ------------------------------------------------
    # HTTP GET /validate?token=<value> → 200 (valid) or 401 (rejected)
    # Fail-open: unreachable auth service logs warning and allows the upgrade
    set ::AUTH_HOST "192.168.120.220" ;# change to your own
    set ::AUTH_PORT 8888
}

# -----------------------------------------------------------------------------
# PROC: ws_entropy
# Unique-byte entropy approximation over a 512-byte payload sample.
# TMOS expr does not support log() so Shannon entropy is not directly
# computable. Approximation: (distinct_byte_values / sample_length) * 8.0
# preserves the 0-8 bits/byte scale and correctly identifies low-variety
# content:
#   "AAAA..."   → unique=1,   score=0.016 (correctly flagged as bot)
#   Normal JSON → unique~60,  score~1-2   (near threshold — tested)
#   Random data → unique~200, score~3-5   (clean)
# Returns 8.0 for empty payloads (not suspicious).
# -----------------------------------------------------------------------------
proc ws_entropy { payload } {
    set sample [string range $payload 0 511]
    set len [string length $sample]
    if { $len == 0 } { return 8.0 }
    array set seen {}
    foreach byte [split $sample ""] { set seen($byte) 1 }
    return [expr { ([array size seen] / double($len)) * 8.0 }]
}

# -----------------------------------------------------------------------------
# PROC: ws_score
# Centralised scoring weights — all score deltas live here.
# No magic numbers in event handlers. Retune the entire model by editing
# this one proc without touching any event logic.
# -----------------------------------------------------------------------------
proc ws_score { event } {
    switch $event {
        "rate_warn"     { return 20 }
        "rate_throttle" { return 40 }
        "rate_drop"     { return 60 }
        "payload_warn"  { return 15 }
        "payload_hard"  { return 50 }
        "low_entropy"   { return 25 }
        default         { return 0 }
    }
}

# -----------------------------------------------------------------------------
# PROC: ws_log
# Structured JSON event to HSL pool + local0 fallback.
# HSL::send avoids TMM log rate limiting and integrates with any syslog SIEM.
# local0 fallback means events appear in /var/log/ltm even without a live
# HSL pool destination — useful during deployment and troubleshooting.
# Fields: ts (ISO-8601 UTC), src (client IP), event, score, detail.
# -----------------------------------------------------------------------------
proc ws_log { hsl src event score detail } {
    set ts [clock format [clock seconds] -format "%Y-%m-%dT%H:%M:%SZ" -gmt 1]
    set msg "\{\"ts\":\"${ts}\",\"src\":\"${src}\",\"event\":\"${event}\",\"score\":${score},\"detail\":\"${detail}\"\}"
    HSL::send $hsl $msg
    log local0. "wsshield: $msg"
}

# -----------------------------------------------------------------------------
# PROC: ws_redis_set
# SETEX via TCP sideband — bare connect/send/recv (correct TMOS API).
# SETEX ensures keys auto-expire; no external cleanup required.
# connect() wrapped in catch so Redis outage degrades gracefully without
# throwing a runtime error that would affect the connection.
# -----------------------------------------------------------------------------
proc ws_redis_set { key value ttl } {
    set dest "${::REDIS_HOST}:${::REDIS_PORT}"
    if { [catch { set conn [connect -timeout 1000 -idle 5 -status cs $dest] } err] } { return 0 }
    if { $conn eq "" } { return 0 }
    set cmd "*4\r\n\$5\r\nSETEX\r\n\$[string length $key]\r\n${key}\r\n\$[string length $ttl]\r\n${ttl}\r\n\$[string length $value]\r\n${value}\r\n"
    send $conn $cmd
    recv -timeout 2000 -status rs 128 $conn
    close $conn
    return 1
}

# -----------------------------------------------------------------------------
# PROC: ws_redis_get
# GET via TCP sideband. Returns value string on hit, "" on miss or error.
# Parses RESP bulk string reply: $<len>\r\n<data>\r\n
# Nil reply ($-1\r\n) falls through regexp and returns "".
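The RESP frames these Redis sideband procs exchange can be sanity-checked off-box before pointing the iRule at a live instance. A minimal Python sketch follows; the helper names are mine, not part of the iRule:

```python
import re

def resp_setex(key: str, ttl: int, value: str) -> bytes:
    """Build the same RESP frame ws_redis_set sends: a *4 array of bulk strings."""
    parts = ["SETEX", key, str(ttl), value]
    frame = "*%d\r\n" % len(parts)
    for p in parts:
        frame += "$%d\r\n%s\r\n" % (len(p), p)
    return frame.encode()

def parse_bulk_reply(resp: bytes) -> str:
    """Parse a RESP bulk-string GET reply; '' on nil ($-1) or malformed input,
    matching the iRule's regexp fallthrough."""
    m = re.match(rb"\$(\d+)\r\n(.*)\r\n", resp, re.S)
    return m.group(2).decode() if m else ""

print(resp_setex("wsshield:10.1.2.3", 3600, "100:close"))
print(parse_bulk_reply(b"$9\r\n100:close\r\n"))  # 100:close
print(parse_bulk_reply(b"$-1\r\n"))              # "" -> treated as a miss
```

Because `$-1` carries no digit group, the nil reply falls through the regex and the caller sees a miss rather than an error, which is exactly the fail-safe behaviour the iRule relies on.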
# -----------------------------------------------------------------------------
proc ws_redis_get { key } {
    set dest "${::REDIS_HOST}:${::REDIS_PORT}"
    if { [catch { set conn [connect -timeout 1000 -idle 5 -status cs $dest] } err] } { return "" }
    if { $conn eq "" } { return "" }
    set cmd "*2\r\n\$3\r\nGET\r\n\$[string length $key]\r\n${key}\r\n"
    send $conn $cmd
    set resp [recv -timeout 2000 -status rs 512 $conn]
    close $conn
    if { [regexp {\$(\d+)\r\n(.+)\r\n} $resp _ len val] } { return $val }
    return ""
}

# -----------------------------------------------------------------------------
# PROC: ws_auth_validate
# Validates the WebSocket auth token via HTTP GET sideband to the auth service.
# Uses HTTP/1.0 deliberately — connection closes after response, no chunked
# parsing needed, recv terminates cleanly.
# Returns:
#    1 — auth service reachable and returned 200 (token valid)
#    0 — auth service reachable and returned non-200 (token rejected)
#   -1 — auth service unreachable (caller should fail open and log warning)
# -----------------------------------------------------------------------------
proc ws_auth_validate { token } {
    set dest "${::AUTH_HOST}:${::AUTH_PORT}"
    if { [catch { set conn [connect -timeout 1000 -idle 5 -status cs $dest] } err] } { return -1 }
    if { $conn eq "" } { return -1 }
    set req "GET /validate?token=${token} HTTP/1.0\r\nHost: ${::AUTH_HOST}\r\nConnection: close\r\n\r\n"
    send $conn $req
    set resp [recv -timeout 3000 -status rs 512 $conn]
    close $conn
    if { [regexp {HTTP/1\.[01] (\d+)} $resp _ status] } {
        return [expr { $status == 200 ? 1 : 0 }]
    }
    return -1
}

# =============================================================================
# EVENT: CLIENT_ACCEPTED
# TCP connection established — before any HTTP is seen.
# Initialise per-connection state here so all subsequent events have a valid
# table key and HSL handle regardless of whether the connection upgrades.
# Table key is IP+port scoped to prevent state collision between simultaneous
# connections from the same client.
# Redis pre-check: if this IP was scored above THROTTLE in a previous session
# on any pool member, attach BWC immediately before the first byte of
# application data — a client cannot escape throttling by reconnecting.
# =============================================================================
when CLIENT_ACCEPTED {
    set client_ip [IP::client_addr]
    set tkey "wsshield_${client_ip}_[TCP::client_port]"

    # State vector: "score msg_count window_start disconnect_flag"
    # disconnect_flag is set by WS_CLIENT_DATA, consumed by WS_CLIENT_FRAME_DONE
    # because WS::disconnect is only valid in the FRAME_DONE context.
    table set "${tkey}_state" "0 0 [clock seconds] 0" indef $::REDIS_TTL

    set hsl [HSL::open -proto UDP -pool ws_hsl_pool]
    table set "${tkey}_hsl" $hsl indef $::REDIS_TTL

    # Cluster-wide BWC pre-enforcement
    set stored [call ws_redis_get "${::REDIS_PFX}${client_ip}"]
    if { $stored ne "" } {
        set cached_score [lindex [split $stored ":"] 0]
        if { $cached_score >= $::SCORE_THROTTLE } {
            BWC::policy attach ws_abuse_bwc "${client_ip}:[TCP::client_port]"
            table set "${tkey}_bwc" 1 indef $::REDIS_TTL
        }
    }
}

# =============================================================================
# EVENT: HTTP_REQUEST
# Gate-check the WebSocket upgrade before the 101 is sent.
# Non-upgrade requests return immediately — regular HTTP on same VS unaffected.
#
# Five sequential checks; first failure responds and returns:
#   1. Manual IP block list (data group)
#   2. Origin header vs ws_allowed_origins data group
#   3. Auth token present (Bearer subprotocol or ?token= query param)
#   4. Token validation via auth service sideband (fail-open if unreachable)
#   5. Redis cluster pre-block (score >= CLOSE → 403 before handshake)
# =============================================================================
when HTTP_REQUEST {
    if { not ([HTTP::header exists "Upgrade"] &&
              [string tolower [HTTP::header "Upgrade"]] eq "websocket") } {
        return
    }
    set client_ip [IP::client_addr]
    set tkey "wsshield_${client_ip}_[TCP::client_port]"
    set hsl [table lookup "${tkey}_hsl"]

    # 1. Manual block list
    if { [class match $client_ip equals ws_blocked_ips] } {
        call ws_log $hsl $client_ip "blocked_ip" 100 "ws_blocked_ips"
        HTTP::respond 403 content "Forbidden\n"
        return
    }

    # 2. Origin validation
    set origin [HTTP::header "Origin"]
    if { $origin eq "" || not [class match $origin equals ws_allowed_origins] } {
        call ws_log $hsl $client_ip "bad_origin" 100 $origin
        HTTP::respond 403 content "Forbidden: invalid origin\n"
        return
    }

    # 3. Token presence
    set token ""
    if { [HTTP::header exists "Sec-WebSocket-Protocol"] } {
        foreach proto [split [HTTP::header "Sec-WebSocket-Protocol"] ","] {
            set proto [string trim $proto]
            if { [string match "Bearer.*" $proto] } {
                set token [string range $proto 7 end]
                break
            }
        }
    }
    if { $token eq "" } { set token [URI::query [HTTP::uri] "token"] }
    if { $token eq "" } {
        call ws_log $hsl $client_ip "no_token" 50 "missing auth on upgrade"
        HTTP::respond 401 content "Unauthorized: missing token\n"
        return
    }

    # 4. Token validation via auth service sideband
    #    Returns: 1=valid, 0=rejected by auth service, -1=unreachable (fail open)
    set auth_result [call ws_auth_validate $token]
    if { $auth_result == 0 } {
        call ws_log $hsl $client_ip "invalid_token" 50 "auth service rejected token"
        HTTP::respond 401 content "Unauthorized: invalid token\n"
        return
    } elseif { $auth_result == -1 } {
        call ws_log $hsl $client_ip "auth_unavailable" 0 "auth service unreachable fail-open"
    }

    # 5. Redis cluster pre-block
    set stored [call ws_redis_get "${::REDIS_PFX}${client_ip}"]
    if { $stored ne "" } {
        set cached_score [lindex [split $stored ":"] 0]
        if { $cached_score >= $::SCORE_CLOSE } {
            call ws_log $hsl $client_ip "cluster_block" $cached_score "pre-blocked via Redis"
            HTTP::respond 403 content "Forbidden: threat score exceeded\n"
            return
        }
    }
}

# =============================================================================
# EVENT: WS_CLIENT_FRAME
# Entry point for each inbound frame — payload not yet buffered.
# High-score path: drop immediately with no buffering (minimal CPU cost for
# clients being actively suppressed — no point collecting a payload we will
# discard).
# Normal path: WS::collect frame buffers the payload and fires WS_CLIENT_DATA.
# =============================================================================
when WS_CLIENT_FRAME {
    set client_ip [IP::client_addr]
    set tkey "wsshield_${client_ip}_[TCP::client_port]"
    set state [table lookup "${tkey}_state"]
    if { $state eq "" } { return }
    if { [lindex $state 0] >= $::SCORE_DROP } {
        call ws_log [table lookup "${tkey}_hsl"] $client_ip \
            "frame_drop" [lindex $state 0] "pre-drop score=[lindex $state 0]"
        WS::frame drop
        return
    }
    WS::collect frame
}

# =============================================================================
# EVENT: WS_CLIENT_DATA
# Full frame payload buffered by WS::collect. Three-axis analysis runs here.
#
# WS::disconnect is NOT valid in this context (TMOS restriction) — when the
# score crosses CLOSE, disc_flag=1 is written to the state table and
# WS_CLIENT_FRAME_DONE executes the actual disconnect.
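The full-window projection and threshold ladder used in the WS_CLIENT_DATA rate path can be simulated off-box. This Python mirror (constants copied from RULE_INIT) shows why a short burst trips the top tier while steady traffic stays clean:

```python
# Constants copied from RULE_INIT
RATE_WINDOW, RATE_WARN, RATE_THROTTLE, RATE_DROP = 10, 60, 120, 200
WEIGHTS = {"rate_warn": 20, "rate_throttle": 40, "rate_drop": 60}

def projected_rate(msg_count: int, elapsed: int) -> int:
    """Project the in-window message count to a full-window equivalent rate."""
    if elapsed <= 0:
        return msg_count
    return int(msg_count / elapsed * RATE_WINDOW)

def rate_delta(rate: int) -> int:
    """Threshold ladder: highest crossed tier wins, mirroring ws_score weights."""
    if rate >= RATE_DROP:
        return WEIGHTS["rate_drop"]
    if rate >= RATE_THROTTLE:
        return WEIGHTS["rate_throttle"]
    if rate >= RATE_WARN:
        return WEIGHTS["rate_warn"]
    return 0

# 150 messages in the first 3 seconds projects to 500 msgs/window -> rate_drop
print(projected_rate(150, 3), rate_delta(projected_rate(150, 3)))  # 500 60
# 30 messages over 6 seconds projects to 50 msgs/window -> clean, score decays
print(projected_rate(30, 6), rate_delta(projected_rate(30, 6)))    # 50 0
```

Projection means the iRule does not have to wait a full 10-second window to react: three seconds of flooding already scores as if it had been sustained.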
# =============================================================================
when WS_CLIENT_DATA {
    set client_ip [IP::client_addr]
    set tkey "wsshield_${client_ip}_[TCP::client_port]"
    set hsl [table lookup "${tkey}_hsl"]
    set now [clock seconds]

    set state [table lookup "${tkey}_state"]
    if { $state eq "" } { set state "0 0 $now 0" }
    set score        [lindex $state 0]
    set msg_count    [lindex $state 1]
    set window_start [lindex $state 2]
    set disc_flag    [lindex $state 3]
    set delta 0

    # --- A. Rate analysis ------------------------------------------------------
    # Project message count to full-window equivalent rate.
    # Window resets when elapsed >= RATE_WINDOW; count starts at 1.
    set elapsed [expr { $now - $window_start }]
    if { $elapsed >= $::RATE_WINDOW } {
        set msg_count 1
        set window_start $now
    } else {
        incr msg_count
    }
    set rate [expr { $elapsed > 0 ? int($msg_count / double($elapsed) * $::RATE_WINDOW) : $msg_count }]
    if { $rate >= $::RATE_DROP } {
        set delta [expr { $delta + [call ws_score "rate_drop"] }]
    } elseif { $rate >= $::RATE_THROTTLE } {
        set delta [expr { $delta + [call ws_score "rate_throttle"] }]
    } elseif { $rate >= $::RATE_WARN } {
        set delta [expr { $delta + [call ws_score "rate_warn"] }]
    }

    # --- B. Payload size -------------------------------------------------------
    # Scored independently — a single oversized frame is an indicator of
    # resource exhaustion intent regardless of message rate.
    set payload [WS::payload]
    set plen [string length $payload]
    if { $plen >= $::PAYLOAD_DROP } {
        set delta [expr { $delta + [call ws_score "payload_hard"] }]
    } elseif { $plen >= $::PAYLOAD_WARN } {
        set delta [expr { $delta + [call ws_score "payload_warn"] }]
    }

    # --- C. Entropy ------------------------------------------------------------
    # Catches bots that evade rate limits by spacing messages out but still
    # generate highly uniform, low-variety content (tested: slow bot sending
    # 300-byte "AAA..." disconnected at score 100 with rate=40 — well below
    # every rate threshold, entropy alone drove the disconnect).
    if { $plen > 0 && [call ws_entropy $payload] < $::ENTROPY_MIN } {
        set delta [expr { $delta + [call ws_score "low_entropy"] }]
    }

    # --- D. Score update with decay -------------------------------------------
    # Clean frames (delta==0) decay score by 1, floored at 0.
    # Sustained legitimate traffic recovers from short bursts automatically.
    if { $delta == 0 } {
        set score [expr { $score > 0 ? $score - 1 : 0 }]
    } else {
        set score [expr { $score + $delta }]
    }

    # --- E. Graduated enforcement ---------------------------------------------
    if { $score >= $::SCORE_CLOSE } {
        call ws_log $hsl $client_ip "disconnect_flagged" $score \
            "score=${score} rate=${rate} plen=${plen}"
        call ws_redis_set "${::REDIS_PFX}${client_ip}" "${score}:close" $::REDIS_TTL
        set disc_flag 1
    } elseif { $score >= $::SCORE_THROTTLE } {
        call ws_log $hsl $client_ip "throttle" $score "rate=${rate}"
        # Guard with table lookup — attach BWC only once per connection
        if { [table lookup "${tkey}_bwc"] eq "" } {
            BWC::policy attach ws_abuse_bwc "${client_ip}:[TCP::client_port]"
            table set "${tkey}_bwc" 1 indef $::REDIS_TTL
        }
        # Write to Redis — other pool members pre-throttle on next connect
        call ws_redis_set "${::REDIS_PFX}${client_ip}" "${score}:throttle" $::REDIS_TTL
    } elseif { $score >= $::SCORE_WARN } {
        call ws_log $hsl $client_ip "warn" $score "rate=${rate} plen=${plen}"
    }

    table set "${tkey}_state" "$score $msg_count $window_start $disc_flag" indef $::REDIS_TTL
    WS::release
}

# =============================================================================
# EVENT: WS_CLIENT_FRAME_DONE
# Only valid context for WS::disconnect in the TMOS WebSocket API.
# Reads disc_flag written by WS_CLIENT_DATA and issues RFC 6455 close
# code 1008 (Policy Violation). The two-event handoff is a TMOS requirement —
# WS::disconnect cannot be called from within WS_CLIENT_DATA.
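That handoff travels through the four-field state string written by WS_CLIENT_DATA and read back in WS_CLIENT_FRAME_DONE. A Python round-trip sketch of the state vector (helper names are mine, not part of the iRule):

```python
def pack_state(score: int, msg_count: int, window_start: int, disc_flag: int) -> str:
    """Serialise the state vector the way the iRule's `table set` stores it."""
    return "%d %d %d %d" % (score, msg_count, window_start, disc_flag)

def unpack_state(state: str) -> tuple:
    """lindex-style access: space-separated fields at positions 0-3."""
    f = state.split(" ")
    return (int(f[0]), int(f[1]), int(f[2]), int(f[3]))

state = pack_state(100, 42, 1700000000, 1)  # score crossed SCORE_CLOSE
print(unpack_state(state)[3])  # 1 -> FRAME_DONE issues the 1008 close
```

Keeping the flag inside the same space-separated string as the counters means one table read per event, not four.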
# =============================================================================
when WS_CLIENT_FRAME_DONE {
    set client_ip [IP::client_addr]
    set tkey "wsshield_${client_ip}_[TCP::client_port]"
    set state [table lookup "${tkey}_state"]
    if { $state eq "" } { return }
    if { [lindex $state 3] == 1 } {
        call ws_log [table lookup "${tkey}_hsl"] $client_ip \
            "ws_disconnect" [lindex $state 0] "RFC 6455 code 1008 policy violation"
        WS::disconnect 1008 "Policy violation: abuse score exceeded"
    }
}

# =============================================================================
# EVENT: WS_SERVER_FRAME
# Collect text frames (opcode 1) from the server for DLP inspection.
# Binary frames (opcode 2) and control frames (ping/pong) pass through
# unmodified to avoid interfering with application framing and keepalives.
# =============================================================================
when WS_SERVER_FRAME {
    if { [WS::frame type] == 1 } { WS::collect frame }
}

# =============================================================================
# EVENT: WS_SERVER_DATA
# PAN heuristic on buffered server-to-client text frame.
# Four groups of four digits, optionally separated by spaces or hyphens.
# Extend with additional patterns (SSN, IBAN, API keys) for full DLP coverage.
# Non-matching frames released normally with WS::release.
# =============================================================================
when WS_SERVER_DATA {
    set payload [WS::payload]
    if { [regexp {\d{4}[ \-]?\d{4}[ \-]?\d{4}[ \-]?\d{4}} $payload] } {
        set client_ip [IP::client_addr]
        set tkey "wsshield_${client_ip}_[TCP::client_port]"
        call ws_log [table lookup "${tkey}_hsl"] $client_ip \
            "dlp_block" 0 "PAN pattern in server->client frame"
        WS::frame drop
        return
    }
    WS::release
}

# =============================================================================
# EVENT: CLIENT_CLOSED
# TCP close — clean or reset. Write final score to Redis for post-session
# audit trail.
# Explicit table delete keeps session table lean during
# high-churn periods rather than waiting for TTL expiry.
# =============================================================================
when CLIENT_CLOSED {
    set client_ip [IP::client_addr]
    set tkey "wsshield_${client_ip}_[TCP::client_port]"
    set state [table lookup "${tkey}_state"]
    set hsl [table lookup "${tkey}_hsl"]
    if { $state ne "" && $hsl ne "" } {
        set score [lindex $state 0]
        call ws_log $hsl $client_ip "session_closed" $score "final score=${score}"
        call ws_redis_set "${::REDIS_PFX}${client_ip}" "${score}:closed" $::REDIS_TTL
    }
    table delete "${tkey}_state"
    table delete "${tkey}_hsl"
    table delete "${tkey}_bwc"
}

Test Evidence

All enforcement tiers were validated live on TMOS 21.x against a jmalloc echo-server backend with Redis and a Flask auth service running on a Synology NAS.

| Test | Result |
| --- | --- |
| Bad origin | 403 before handshake |
| Invalid token | 401, auth service rejection confirmed |
| Auth service unreachable | Fail-open with auth_unavailable log |
| Redis cluster pre-block | 403 before handshake, cluster_block event |
| Rate flood (300 msg @ 50/sec) | warn → throttle → ws_disconnect 1008 |
| Entropy bot (AAA... @ 0.5s) | Disconnect at score 100, rate=40, entropy alone triggered |
| BWC throttle | Active Policies=1, 251 packets dropped, 16.8K bytes suppressed at 100kbps |
| DLP outbound block | PAN frame dropped before client delivery, dlp_block confirmed |

auth-docker-compose.yml

version: "3"
services:
  ws-auth:
    image: python:3.11-alpine
    container_name: ws-auth
    working_dir: /app
    volumes:
      - /volume1/docker/ws-auth/auth_server.py:/app/auth_server.py
    command: sh -c "pip install flask -q && python3 auth_server.py"
    ports:
      - "8888:8888"
    restart: unless-stopped

auth_server.py

"""
WS-Shield Mock Auth Service
---------------------------
Simple HTTP server that validates Bearer tokens for WS-Shield testing.
Valid tokens:     any token in the VALID_TOKENS set below
Invalid tokens:   anything else → 401
Unreachable test: stop this server and observe iRule fail-open behaviour

Run:
    pip install flask
    python3 auth_server.py

Endpoints:
    GET /validate?token=<value> → 200 OK or 401 Unauthorized
    GET /health                 → 200 OK (for monitoring)

Deploy on Synology as a Container Manager stack or run directly.
Update AUTH_HOST in the iRule RULE_INIT to point at this server.
"""
from flask import Flask, request, jsonify

app = Flask(__name__)

# Add your valid tokens here — in production replace with JWT verification,
# database lookup, or OAuth introspection call.
VALID_TOKENS = {
    "abc123",
    "prod-token-xyz",
    "test-token-001",
    "appworld-2026",
}

@app.route("/validate")
def validate():
    token = request.args.get("token", "")
    if not token:
        return jsonify({"error": "missing token"}), 401
    if token in VALID_TOKENS:
        return jsonify({"valid": True, "token": token}), 200
    return jsonify({"valid": False, "error": "invalid token"}), 401

@app.route("/health")
def health():
    return jsonify({"status": "ok"}), 200

if __name__ == "__main__":
    print("WS-Shield mock auth service running on 0.0.0.0:8888")
    app.run(host="0.0.0.0", port=8888, debug=False)
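The DLP outbound-block case from the Test Evidence table can be reproduced off-box; the iRule's Tcl regexp translates directly to Python's re module (the frame payloads here are illustrative):

```python
import re

# Same heuristic as WS_SERVER_DATA: four groups of four digits,
# optionally separated by a single space or hyphen.
PAN = re.compile(r"\d{4}[ -]?\d{4}[ -]?\d{4}[ -]?\d{4}")

frames = [
    '{"card":"4111 1111 1111 1111"}',  # dropped (dlp_block)
    '{"card":"4111-1111-1111-1111"}',  # dropped (dlp_block)
    '{"order_id":"12345678"}',         # released (only 8 digits)
]
for frame in frames:
    verdict = "drop" if PAN.search(frame) else "release"
    print(verdict, frame)
```

Note the heuristic also matches any unseparated 16-digit run, so timestamps or long order IDs can trip it; adding a Luhn check before dropping would tighten it considerably if false positives matter in your traffic.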