BIG-IP
Overview of MITRE ATT&CK Tactic - TA0011 Command and Control
Introduction
In modern cyber attacks, command and control is one of the main sets of techniques with which attackers gain control over systems within a victim's network. Once control is gained over a system, the attackers can steal sensitive data, move laterally, and blend into normal activity. Command and Control (MITRE ATT&CK Tactic TA0011) represents a critical stage of the adversary lifecycle, where adversaries focus on communicating with the systems under their control. There are multiple ways to achieve this, such as mimicking the expected traffic flow to avoid detection or mimicking the normal behavior of the compromised system. To defend against this tactic, it is important for defenders to understand how communication is established to any system in the network and the various levels of stealth attackers can achieve depending on the network structure. This article walks through the most common Command and Control techniques and how F5 solutions provide strong defense against them.

T1071 - Application Layer Protocol
To communicate with the systems under their control, adversaries blend in with existing application layer protocol traffic to avoid detection and network filtering. The results of their commands are embedded within the protocol traffic exchanged between the client and the server.

T1071.001 - Web Protocols
Adversaries mimic normal, expected HTTP/HTTPS traffic that carries web data to communicate with the systems under their control within a victim network.

T1071.002 - File Transfer Protocols
Protocols used to implement this technique include SMB, FTP, FTPS, and TFTP. The malicious data is concealed within the fields and headers of the packets produced by these protocols.

T1071.003 - Mail Protocols
Protocols carrying electronic mail, such as SMTP/S, POP3/S, and IMAP, are utilized by concealing the data within the email messages themselves.

T1071.004 - DNS
The DNS protocol serves an administrative function in computer networking, and DNS traffic may be allowed even before a host has authenticated to the network. Data is concealed in the fields and headers of these packets.

T1071.005 - Publish/Subscribe Protocols
Publish/subscribe designs distribute messages through a centralized broker using protocols such as MQTT, XMPP, AMQP, and STOMP, which adversaries can abuse to carry command-and-control traffic.

T1092 - Communication Through Removable Media
On disconnected networks, command and control between compromised hosts can be performed using removable media to carry commands from system to system. For successful execution, both systems need to be compromised, with the removable media replicated between them as part of lateral movement.

T1659 - Content Injection
Adversaries may also gain control over a victim's system by injecting malicious content into it, first gaining access to compromised data-transfer channels where traffic can be manipulated or content can be injected.

T1132 – Data Encoding
Another technique used to control a system is encoding the information with a standard data encoding scheme. Encoding includes the use of ASCII, Unicode, Base64, MIME, or other binary-to-text encoding systems.

T1132.001 - Standard Encoding
Data encoding schemes used for standard encoding include ASCII, Unicode, hexadecimal, Base64, and MIME. Data compression, such as gzip, is also an example of standard encoding.

T1132.002 - Non-Standard Encoding
Adversaries may also use non-standard encoding schemes, such as modified Base64, for data encoded in the message body of an HTTP request.
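To ground the data encoding discussion above (T1132), here is a minimal, hedged Python sketch of one heuristic defenders sometimes use: flagging request values that are long, decode cleanly as Base64, and contain gzip-compressed or mostly printable content. The regex length cutoff, the printable-ratio threshold, and the sample values are illustrative assumptions, not an F5 feature.

```python
import base64
import binascii
import gzip
import re

B64_RE = re.compile(r"^[A-Za-z0-9+/=]{32,}$")  # long, Base64-looking strings only

def looks_like_encoded_payload(value: str) -> bool:
    """Heuristic check for T1132-style standard encoding in a parameter value."""
    if not B64_RE.match(value):
        return False
    try:
        decoded = base64.b64decode(value, validate=True)
    except (binascii.Error, ValueError):
        return False
    # gzip magic bytes suggest a compressed payload hidden inside the field
    if decoded[:2] == b"\x1f\x8b":
        try:
            gzip.decompress(decoded)
            return True
        except OSError:
            return False
    # mostly-printable decoded text is another weak signal worth logging
    printable = sum(32 <= b < 127 for b in decoded)
    return printable / max(len(decoded), 1) > 0.9

# Example: values pulled from a (hypothetical) parsed HTTP request
for value in ["q=shoes", base64.b64encode(gzip.compress(b"whoami")).decode()]:
    print(value[:24], looks_like_encoded_payload(value))
```

In practice a check like this would only be one low-confidence signal, combined with destination reputation, volume baselines, and the other controls described later in this article.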
T1001 – Data Obfuscation
This technique hides command-and-control communication through obfuscation, making it even more difficult to discover or decipher. The focus is to make the communication less conspicuous by incorporating several methods, which make up the sub-techniques below.

T1001.001 - Junk Data
Adversaries may abuse protocols by adding random, meaningless junk data to them, which can defeat trivial methods for decoding or deciphering the traffic.

T1001.002 - Steganography
Steganographic sub-techniques are used to transfer hidden digital messages between systems inside carriers such as images or document files.

T1001.003 - Protocol or Service Impersonation
Adversaries can impersonate legitimate protocols or web services so that command-and-control traffic blends in with legitimate network traffic.

T1568 – Dynamic Resolution
To establish connections to command-and-control infrastructure dynamically and evade detection, adversaries use malware that shares a common algorithm with the infrastructure, dynamically adjusting parameters such as the domain name, IP address, or port number.

T1568.001 - Fast Flux DNS
Fast Flux DNS is used to hide a command-and-control channel behind an array of rapidly changing IP addresses linked to a single domain resolution.

T1568.002 - Domain Generation Algorithm
Rather than relying on a list of static IP addresses or domains, adversaries may utilize Domain Generation Algorithms (DGAs) to dynamically identify a destination domain for command-and-control traffic.

T1568.003 - DNS Calculations
Instead of using a predetermined port number or the actual IP address, adversaries perform calculations on the addresses returned in DNS results to dynamically determine which port and IP address to use.

T1573 – Encrypted Channel
Adversaries rely on an encrypted channel to conceal command-and-control traffic rather than depending on any inherent protections provided by the communication protocol.

T1573.001 - Symmetric Cryptography
Symmetric encryption algorithms, such as AES, DES, 3DES, Blowfish, and RC4, use the same key to encrypt plaintext and decrypt ciphertext.

T1573.002 - Asymmetric Cryptography
Asymmetric cryptography, or public key cryptography, uses a keypair per party: one public and one private. The sender encrypts the data with the receiver's public key, and the receiver decrypts the data with their private key.

T1008 – Fallback Channels
If the primary channel is compromised or inaccessible, adversaries use fallback communication channels to maintain reliable command and control.

T1665 – Hide Infrastructure
To hide command-and-control infrastructure and evade detection, adversaries identify and filter traffic from defensive tools, mask malicious domains to obscure the true destination, and otherwise hide malicious content to delay discovery and prolong the effectiveness of adversary infrastructure.

T1105 – Ingress Tool Transfer
Adversaries transfer tools or other files from an external, adversary-controlled source into the compromised environment through the command-and-control channel or other protocols such as FTP. Adversaries may also spread tools across the compromised environment as part of Lateral Movement.

T1104 – Multi-Stage Channels
To make detection more difficult, adversaries create multiple command-and-control stages for different functions and conditions.
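To make the Domain Generation Algorithm technique described above (T1568.002) more concrete, here is a minimal, hedged Python sketch of a common defensive heuristic: scoring domain labels by character entropy, since algorithmically generated names tend to look more random than human-chosen ones. The length cutoff, threshold, and sample domains are illustrative assumptions only, not real indicators.

```python
import math
from collections import Counter

def shannon_entropy(label: str) -> float:
    """Bits of entropy per character for a domain label."""
    counts = Counter(label)
    total = len(label)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_generated(domain: str, threshold: float = 3.5) -> bool:
    # Score only the leftmost label, e.g. "q7xk9vz2rplm4d" in "q7xk9vz2rplm4d.net".
    label = domain.split(".")[0].lower()
    return len(label) >= 8 and shannon_entropy(label) >= threshold

# Illustrative domains (not real indicators)
for d in ["example.com", "mail.corp.local", "q7xk9vz2rplm4d.net"]:
    print(d, looks_generated(d))
```

Entropy alone produces false positives (CDN hostnames, hashes in URLs), so a score like this is usually combined with DNS query volume, NXDOMAIN rates, and domain age before acting on it.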
T1095 – Non-Application Layer Protocol
To communicate between the host and the command-and-control server, adversaries use non-application layer protocols, such as ICMP (Internet Control Message Protocol), UDP (User Datagram Protocol), SOCKS, or SOL (Serial over LAN).

T1571 – Non-Standard Port
Adversaries communicate using port pairings that are not normally associated with the protocol, for example, HTTPS over port 8088 or port 587 instead of the traditional port 443.

T1572 – Protocol Tunneling
Another approach to avoid detection and network filtering is to explicitly encapsulate one protocol within another to enable routing of network packets that would otherwise not reach their intended destination, for example tunneling SMB or RDP.

T1090 – Proxy
A proxy acts as an intermediary between systems, directing network communications to a command-and-control server to avoid direct connections to the infrastructure, obscure the actual communication paths, and manage command-and-control traffic inside a compromised environment without raising suspicion. Examples of such tools include HTRAN, ZXProxy, and ZXPortMap.

T1090.001 - Internal Proxy
Internal proxies are primarily used to conceal the actual destination while reducing the need for multiple connections to external systems, for example by using peer-to-peer (P2P) networking protocols.

T1090.002 - External Proxy
External proxies are used to mask the true destination of the traffic with port redirectors. Purchased infrastructure, such as Virtual Private Servers, or compromised systems outside the victim's network are generally used for these purposes.

T1090.003 - Multi-Hop Proxy
Multiple proxies can also be chained together to disguise the source and path of traffic, making it more difficult for defenders to trace malicious activity and identify its origin.

T1090.004 - Domain Fronting
Adversaries can even misuse Content Delivery Network (CDN) routing schemes to obscure the actual destination of HTTPS traffic or traffic tunneled through HTTPS.

T1219 – Remote Access Tools
To access the target system remotely and establish interactive command and control within the network, remote access tools are used to bridge a session between two trusted hosts through a graphical interface, a CLI, or hardware-level access such as KVM (keyboard, video, mouse) over IP solutions.

T1219.001 - IDE Tunneling
IDE tunneling combines SSH, port forwarding, and file sharing, letting developers work as if they were local by encapsulating the entire session and tunneling protocols alongside SSH, which allows attackers to blend in with the actual development workflow.

T1219.002 - Remote Desktop Software
Adversaries may access target systems interactively through desktop support software, which provides a graphical interface to the remote adversary. VNC, TeamViewer, AnyDesk, and LogMeIn are commonly used legitimate support tools.

T1219.003 - Remote Access Hardware
Adversaries may access target systems through commonly used, legitimate remote access hardware, including IP-based keyboard, video, and mouse (KVM) devices such as TinyPilot and PiKVM.

T1205 – Traffic Signaling
Traffic signaling is used to hide open ports or other malicious functionality to prolong command and control over the compromised system.

T1205.001 - Port Knocking
To hide open ports used for persistence, adversaries may use port knocking: the port is enabled only after the adversary sends a series of connection attempts to a predefined sequence of closed ports.
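As a simple illustration of how defenders can look for the non-standard port pairings described under T1571, the following Python sketch flags flow records where a detected protocol runs on an unexpected port. The record format and the expected-port table are illustrative assumptions; a real deployment would feed this from a flow collector or traffic classifier.

```python
# A minimal sketch of flagging protocol/port mismatches (T1571), assuming
# flow records have already been parsed into dictionaries.
EXPECTED_PORTS = {"https": {443, 8443}, "ssh": {22}, "dns": {53}, "smtp": {25, 465, 587}}

def flag_nonstandard_port(flow: dict) -> bool:
    """Return True when the detected protocol is running on an unexpected port."""
    expected = EXPECTED_PORTS.get(flow["protocol"])
    return expected is not None and flow["dst_port"] not in expected

flows = [
    {"protocol": "https", "dst_port": 443},   # normal
    {"protocol": "https", "dst_port": 8088},  # HTTPS on a non-standard port
    {"protocol": "dns",   "dst_port": 8053},  # DNS on a non-standard port
]
for f in flows:
    if flag_nonstandard_port(f):
        print(f"suspicious: {f['protocol']} on port {f['dst_port']}")
```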
T1205.002 - Socket Filters
Socket filters allow or disallow certain types of data through a socket. If packets received by the network interface match the filtering criteria, the desired actions are triggered.

T1102 – Web Service
Adversaries use an existing, legitimate external web service to transfer data to and from the compromised system. Because web service providers commonly use SSL/TLS encryption, this gives adversaries an additional level of protection.

T1102.001 - Dead Drop Resolver
Adversaries post content, called a dead drop resolver, on web services with encoded domains. These resolvers redirect victims to the infected domains or IP addresses.

T1102.002 - Bidirectional Communication
Once a system is infected, it can send output back over the web service channel, giving the adversary two-way communication.

T1102.003 - One-Way Communication
In some cases, compromised systems may not return any output at all, where adversaries send only one-way instructions and do not expect any response.

How F5 Can Help
F5 security solutions provide multiple functionalities to secure and protect applications and APIs across platforms, including cloud, edge, on-premises, and hybrid environments. F5 supports the risk management solutions below to effectively mitigate and protect against command-and-control techniques:

- Web Application Firewall (WAF): Supported by all F5 deployment modes, the WAF is an adaptable, multi-layered security solution that defends web applications against a broad spectrum of threats, regardless of where they are deployed.
- API Security: F5 eases the security of APIs with F5 Web Application and API Protection (WAAP) solutions, which protect API endpoints and other API dependencies by restricting API usage to specified rules and schemas.
- Rate-Limiting & Bot Protection: Brute-force, credential stuffing, and session attacks can be mitigated with configurable thresholds and automated bot protection.

For more information, please contact your local F5 sales team.

Conclusion
Command and Control (C2) encompasses the methods adversaries employ to communicate with compromised systems within a target network. Adversaries disguise their C2 traffic as legitimate network activity to evade detection. To defend against command-and-control techniques, defenders should implement robust segmentation and egress filtering, use Web Application Firewalls (WAF) to limit communication channels, regularly monitor traffic for anomalous patterns, and leverage threat intelligence to identify C2 indicators. Additionally, employing endpoint detection and response (EDR) alongside API security solutions can help detect and block malicious C2 activity at the host level.

Reference links
MITRE ATT&CK: Command and Control, Tactic TA0011
MITRE ATT&CK: What It Is, How it Works, Who Uses It and Why | F5 Labs
Modernizing F5 BIG-IP Synchronized HA Pairs with Ansible Validated Content

I wanted to provide an update to a previous article I released a few months ago, where we developed Ansible Automation Platform code to help with migrating standalone legacy platforms (non-iSeries, iSeries, and Viprion instances) to our modern architectures (rSeries and VELOS) using F5OS tenant instances. I am happy to announce that the code for synchronized HA pairs has been completed, and we have uploaded it to Ansible Automation Hub as Validated Content.

What is Ansible Automation Hub Validated Content?
Ansible validated content collections contain pre-built YAML content (such as playbooks or roles) to address the most common automation use cases. You can use Ansible validated content out-of-the-box or as a learning opportunity to develop your skills. It's a trusted starting point to bootstrap your automation: use it, customize it, and learn from it. Due to the focus on customization and the intent for this content to be modified, it is not subject to the same support requirements as our certified collections. To this end, any issues with this content should be filed directly at the source repository for that collection.

Why Synchronized HA Pairs?
This is a very common use case for customers who want resiliency and redundancy, especially for their applications and services. The biggest issue with migrating an HA pair is that, because of the way the pair is set up, things like management IP addresses and master keys are essential to the transition process. Even mismatched versions cannot synchronize during the process of upgrading across major/minor releases.

What does the updated Validated Code do?
- Standalone Migrations – Because you can change the management IP on a standalone device, an outage will occur during the transition period. There are two options for playbooks:
  - A single playbook for the full migration
  - Two parts, where Part 1 performs backups and runs a bigstart stop on the unit, and Part 2 migrates the standalone device
- HA Pairs – Combined – This code is designed for a customer who just needs to transition both HA units and isn't concerned about an outage window. It will migrate both units to F5OS tenants at the same time. The playbooks for this code are broken apart into specific areas:
  - Part 1 – Back up the information
  - Part 2 – Ensure both units are offline and migrate both units at the same time
- HA Pairs – Sequential – This code is designed for customers who need to migrate one unit at a time and maintain availability of their applications. It migrates the standby unit first. When ready to transition the active unit, it places the active unit in standby and makes the already-migrated standby unit the active node, transferring services to it. Then the previously active unit (now standby) is migrated. There are playbooks that break apart specific areas of the transition:
  - Part 1 – Back up the information
  - Part 2 – Ensure the standby devices are offline (via management IP) and migrate the standby unit
  - Part 3 – Transition the standby to become the active unit and begin transitioning the new standby unit (previously the active unit), similarly to Part 2

This code has been tested and validated against many different platforms, and there are plans to continue testing for other use cases.
The transition can be like-for-like versioning (i.e., 16.1.x to 16.1.x) within the same family tree, or it can include an upgrade at the same time (i.e., 15.1.10 to 17.5.1.3 or even 21.0.0). These are Ansible playbooks with supporting roles tailored for Red Hat Ansible Automation Platform, built to perform a lift-and-shift migration of an F5 BIG-IP configuration from one device to another, with optional OS upgrades included.

What is the future of the code?
I plan on adding validation code to separate roles/playbooks so customers have points of reference for testing, i.e., ping tests and pool tests before and after the transition, QKView backups, and other information on the state of the unit prior to transition, so that once migrated it can be validated that everything is the way it was.

Notes about the Code
- The code is not designed to handle non-VLANed infrastructure (F5OS is designed to be multi-tenant and set up with VLANs to deal with multi-tenancy). If your BIG-IPs use untagged networks, they will need to be migrated to VLANs prior to using this code.
- Has not been validated/tested with FIPS-based environments.
- Has not been validated/tested with F5 DNS environments – coming soon.
- HA pairs must retain the management IP address from source to destination; the code will ensure that the source device is powered off prior to transitioning it.

Cool Additions
Override variables are allowed as extra_vars to create flexibility in your deployment (a minimal usage sketch appears after the variable lists below):
- override_cpu – Sets the CPUs of the tenant OS. If the memory override isn't set, memory is calculated with the same formula the F5OS GUI would use. DEFAULT is 4 CPUs.
- override_disk_size – Sets the disk space of the tenant OS. DEFAULT is 120 GB.
- override_memory – Sets the memory of the tenant OS. Be warned: if over-provisioned, the tenant may not start. DEFAULT is calculated from the CPU count formula used in the GUI.
- tenant_nodes – Sets the slot for the tenant OS if there are multiple slots associated with your F5OS partition. DEFAULT is an array object set to [1].
- cryptos – Sets crypto on the tenant OS to either enabled or disabled. DEFAULT is enabled.

Variables for deployments – the code is designed to utilize specific hostnames and group names to execute. These variables allow connectivity to the BIG-IP and F5OS tenants. When creating hosts in AAP, you will need to provide the following information:
- ansible_host – The IP address of the host device
- ansible_user – The username to log in to the device
- ansible_password – The password to log in to the device; if using a credential in AAP, you would associate that credential's information here as a reference

Standalone deployments (host_vars):
- f5_destination_partition – The F5OS partition information
- f5_destination_tenant – The F5OS tenant information
- f5_source – The source device

HA pair deployments (group_vars):
- ha_pair_destination_chassis – Contains a group of two hosts for the destination tenants to be deployed to (the two hosts can have the same information or be different)
- ha_pair_source – Contains a group of two hosts for the source BIG-IP devices in a synchronized HA pair
- ha_pair_source_dynamic – This group is created automatically throughout the code to program the new tenant OSes after deployment (DOES NOT NEED TO BE CREATED)
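For readers who want to drive the collection outside of the AAP web console, here is a hedged Python sketch using ansible-runner to launch a playbook with the override variables described above. The playbook filename, inventory path, and project directory are placeholders, not names from the collection; only the extra_vars names come from the article, and you would normally run this content as an AAP job template instead.

```python
# Hypothetical ansible-runner invocation; substitute the real playbook and paths
# from the validated content. Only the override variable names are from the article.
import ansible_runner

result = ansible_runner.run(
    private_data_dir="/opt/f5-migration",          # assumed project directory
    playbook="standalone_migration.yml",           # hypothetical playbook filename
    inventory="/opt/f5-migration/inventory.yml",   # assumed inventory location
    extravars={
        "override_cpu": 8,          # tenant vCPUs (default 4 per the article)
        "override_disk_size": 160,  # tenant disk in GB (default 120)
        "cryptos": "enabled",       # default per the article
        "tenant_nodes": [1],        # slot(s) for the tenant
    },
)
print("status:", result.status, "rc:", result.rc)
```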
Demos/Information
We have uploaded a new demo video below. You'll see a migration of a synchronized HA pair of BIG-IPs running as Viprion tenants on F5 B2250 blades running 15.1.10, transitioning to a pair of rSeries r5800 tenant OSs running 17.5.1.x, demonstrating a smooth modernization process.

Watch the synchronized HA migration Demo Video

If you want to check out the information and demo video on the standalone migrations, check out my other article at Modernizing F5 Platforms with Ansible | DevCentral.

You can access the validated content via Ansible Automation Hub (requires a Red Hat account with AAP):
https://console.redhat.com/ansible/automation-hub/repo/validated/f5networks/f5_platform_modernization/

Or you can access the code directly from our GitHub repository:
https://github.com/f5devcentral/f5-bd-ansible-platform-modernization

This project is built for the community, partners, and system integrators, so as I always say, feel free to take it, fork it, and expand it. Let's make F5 platform modernization as seamless and automated as possible!
Overview of MITRE ATT&CK Tactic - TA0002 Execution

Introduction
Execution refers to the methods adversaries use to run malicious code on a target system. This tactic includes a range of techniques designed to execute payloads after gaining access to the network. It is a key stage in the attack lifecycle, as it allows attackers to activate their malicious actions, such as deploying malware, running scripts, or exploiting system vulnerabilities. Successful execution can lead to deeper system control, enabling attackers to perform actions like data theft, system manipulation, or establishing persistence for future exploitation. Now, let's dive into the various techniques under the Execution tactic and explore how attackers use them.

1. T1651: Cloud Administration Command
Cloud management services can be exploited to execute commands within virtual machines. If an attacker gains administrative access to a cloud environment, they may misuse these services to run commands on the virtual machines. Furthermore, if an adversary compromises a service provider or a delegated administrator account, they could also exploit trusted relationships to execute commands on connected virtual machines.

2. T1059: Command and Scripting Interpreter
The misuse of command and script interpreters allows adversaries to execute commands, scripts, or binaries. These interfaces, such as Unix shells on macOS and Linux, Windows Command Shell, and PowerShell, are common across platforms and provide direct interaction with systems. Cross-platform interpreters like Python, as well as those tied to client applications (e.g., JavaScript, Visual Basic), can also be misused. Attackers may embed commands and scripts in initial access payloads or download them later via an established C2 (Command and Control) channel. Commands may also be executed via interactive shells or through remote services to enable remote execution.

(.001) PowerShell: As PowerShell is already part of Windows, attackers often exploit it to execute commands discreetly without triggering alarms. It's often used for things like finding information, moving across networks, and running malware directly in memory, which helps avoid detection because nothing is written to disk. Attackers can also execute PowerShell scripts without launching the powershell.exe program by leveraging .NET interfaces. Tools like Empire, PowerSploit, and PoshC2 make it even easier for attackers to use PowerShell for malicious purposes.
Example - Remote Command Execution

(.002) AppleScript: AppleScript is a macOS scripting language designed to control applications and system components through inter-application messages called AppleEvents. These AppleEvent messages can be sent by themselves or with AppleScript. They can find open windows, send keystrokes, and interact with almost any open application, either locally or remotely. AppleScript can be executed in various ways, including through the command-line interface (CLI) and built-in applications. However, it can also be abused to trigger actions that exploit both the system and the network.

(.003) Windows Command Shell: The Windows Command Prompt (CMD) is a lightweight, simple shell on Windows systems, allowing control over most system aspects with varying permission levels. However, it lacks the advanced capabilities of PowerShell. CMD can be used remotely via Remote Services. Attackers may use it to execute commands or payloads, often sending input and output through a command-and-control channel.
Example - Remote Command Execution
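One common defensive complement to the PowerShell abuse described above (.001) is inspecting process command lines for encoded commands, since -EncodedCommand hides the script body as Base64 over UTF-16LE text. The following Python sketch shows the idea; the keyword list and sample command line are illustrative assumptions, not signatures from any F5 or EDR product.

```python
import base64
import re

SUSPICIOUS = ("invoke-expression", "iex ", "downloadstring", "frombase64string")

def decode_encoded_command(cmdline):
    """Extract and decode a PowerShell -EncodedCommand value, if one is present."""
    match = re.search(r"-enc(?:odedcommand)?\s+([A-Za-z0-9+/=]+)", cmdline, re.IGNORECASE)
    if not match:
        return None
    try:
        # -EncodedCommand carries Base64 over UTF-16LE text
        return base64.b64decode(match.group(1)).decode("utf-16-le", errors="replace")
    except Exception:
        return None

# Illustrative command line only; not taken from any real incident
sample = "powershell.exe -nop -w hidden -enc " + base64.b64encode(
    "Invoke-Expression (Get-Content payload.ps1 -Raw)".encode("utf-16-le")).decode()

decoded = decode_encoded_command(sample)
if decoded and any(keyword in decoded.lower() for keyword in SUSPICIOUS):
    print("suspicious encoded PowerShell:", decoded)
```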
(.004) Unix Shell: Unix shells serve as the primary command-line interface on Unix-based systems. They provide control over nearly all system functions, with certain commands requiring elevated privileges. Unix shells can be used to run different commands or payloads. They can also run shell scripts to combine multiple commands as part of an attack.
Example - Remote Command Execution

(.005) Visual Basic: Visual Basic (VB) is a programming language developed by Microsoft, now considered a legacy technology. Visual Basic for Applications (VBA) and VBScript are derivatives of VB. Malicious actors may exploit VB payloads to execute harmful commands, with common attacks including automating actions via VBScript or embedding VBA content (like macros) in spear-phishing attachments.

(.006) Python: Attackers often use popular scripting languages, like Python, due to their interoperability, cross-platform support, and ease of use. Python can be run interactively from the command line or through scripts that can be distributed across systems. It can also be compiled into binary executables. With many built-in libraries for system interaction, such as file operations and device I/O, attackers can leverage Python to download and execute commands and scripts and perform various malicious actions.
Example - Code Injection

(.007) JavaScript: JavaScript (JS) is a platform-independent scripting language, commonly used in web pages and runtime environments. Microsoft's JScript and JavaScript for Automation (JXA) on macOS are based on JS. Adversaries exploit JS to execute malicious scripts, often through Drive-by Compromise or by downloading scripts as secondary payloads. Since JS is text-based, it is often obfuscated to evade detection.
Example - XSS

(.008) Network Device CLI: Network devices often provide a CLI or scripting interpreter accessible via direct console connection or remotely through telnet or SSH. These interfaces allow interaction with the device for various functions. Adversaries may exploit them to alter device behavior, manipulate traffic, load malicious software by modifying configurations, or disable security features and logging to avoid detection.

(.009) Cloud API: Cloud APIs offer programmatic access to nearly all aspects of a tenant, available through methods like CLIs, in-browser Cloud Shells, PowerShell modules (e.g., Azure for PowerShell), or SDKs for languages like Python. These APIs provide administrative access to major services. Malicious actors with valid credentials, often stolen, can exploit these APIs to perform malicious actions.

(.010) AutoHotKey & AutoIT: AutoIT and AutoHotkey (AHK) are scripting languages used to automate Windows tasks, such as clicking buttons, entering text, and managing programs. Attackers may exploit AHK (.ahk) and AutoIT (.au3) scripts to execute malicious code, like payloads or keyloggers. These scripts can also be embedded in phishing payloads or compiled into standalone executable files.

(.011) Lua: Lua is a cross-platform scripting and programming language, primarily designed for embedding in applications. It can be executed via the command line using the standalone Lua interpreter, through scripts (.lua), or within Lua-embedded programs. Adversaries may exploit Lua scripts for malicious purposes, such as abusing or replacing existing Lua interpreters to execute harmful commands at runtime. Malware examples developed using Lua include EvilBunny, Line Runner, PoetRAT, and Remsec.
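Since the JavaScript sub-technique above links to an XSS example, here is a minimal, hedged Python sketch of one server-side mitigation that complements a WAF: escaping user-controlled input before it is reflected into HTML, so injected script tags render as inert text. The function and page fragment are hypothetical illustrations, not part of any F5 product.

```python
import html

def render_search_results(user_query: str) -> str:
    """Escape user-controlled input before reflecting it into HTML output."""
    safe = html.escape(user_query, quote=True)
    return f"<p>Results for: {safe}</p>"

print(render_search_results('<script>alert("xss")</script>'))
# -> <p>Results for: &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;</p>
```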
(.012) Hypervisor CLI: Hypervisor CLIs offer extensive functionality for managing both the hypervisor and its hosted virtual machines. On ESXi systems, tools like "esxcli" and "vim-cmd" allow administrators to configure and perform various actions. Attackers may exploit these tools to enable actions like File and Directory Discovery or Data Encrypted for Impact. Malware such as Cheerscrypt and Royal ransomware has leveraged this technique.

3. T1609: Container Administration Command
Adversaries may exploit container administration services, like the Docker daemon, Kubernetes API server, or kubelet, to execute commands within containers. In Docker, attackers can specify an entry point to run a script or use docker exec to execute commands in a running container. In Kubernetes, with sufficient permissions, adversaries can gain remote execution by interacting with the API server or the kubelet, or by using commands like kubectl exec within the cluster.

4. T1610: Deploy Container
Containers can be exploited by attackers to run malicious code or bypass security measures, often through the use of harmful processes or weak settings, such as missing network rules or user restrictions. In Kubernetes environments, attackers may deploy containers with elevated privileges or vulnerabilities to access other containers or the host node. They may also use compromised or seemingly benign images that later download malicious payloads.

5. T1675: ESXi Administration Command
ESXi administration services can be exploited to execute commands on guest machines within an ESXi virtual environment. ESXi-hosted VMs can be remotely managed via persistent background services, such as the VMware Tools Daemon Service. Adversaries can perform malicious activities on VMs by executing commands through SDKs and APIs, enabling follow-on behaviors like File and Directory Discovery, Data from Local System, or OS Credential Dumping.

6. T1203: Exploitation for Client Execution
Adversaries may exploit software vulnerabilities in client applications to execute malicious code. These exploits can target browsers, office applications, or common third-party software. By exploiting specific vulnerabilities, attackers can achieve arbitrary code execution. The most valuable exploits in an offensive toolkit are often those that enable remote code execution, as they provide a pathway to gain access to the target system.
Example: Remote Code Execution

7. T1674: Input Injection
Input Injection involves adversaries simulating keystrokes on a victim's computer to carry out actions on their behalf. This can be achieved through several methods, such as emulating keystrokes to execute commands or scripts, or using malicious USB devices to inject keystrokes that trigger scripts or commands. For example, attackers have employed malicious USB devices to simulate keystrokes that launch PowerShell, enabling the download and execution of malware from attacker-controlled servers.
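Returning to the container techniques above (T1609 and T1610), one practical defensive check is auditing a cluster for privileged containers, a common precursor to host-node escape. The hedged sketch below assumes the official kubernetes Python client is installed and a kubeconfig is reachable; it is a monitoring idea, not an F5 feature or a complete policy.

```python
# A hedged sketch of auditing for privileged containers (one signal for T1610-style
# abuse), assuming the official `kubernetes` Python client and a working kubeconfig.
from kubernetes import client, config

def find_privileged_pods():
    config.load_kube_config()              # or config.load_incluster_config()
    v1 = client.CoreV1Api()
    findings = []
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        for c in pod.spec.containers:
            sc = c.security_context
            if sc is not None and sc.privileged:
                findings.append((pod.metadata.namespace, pod.metadata.name, c.name))
    return findings

for ns, pod, container in find_privileged_pods():
    print(f"privileged container: {ns}/{pod} ({container})")
```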
8. T1559: Inter-Process Communication
Inter-Process Communication (IPC) is commonly used by processes to share data, exchange messages, or synchronize execution. It also helps prevent issues like deadlocks. However, IPC mechanisms can be abused by adversaries to execute arbitrary code or commands. The implementation of IPC varies across operating systems. Additionally, command and scripting interpreters may leverage underlying IPC mechanisms, and adversaries might exploit remote services, such as the Distributed Component Object Model (DCOM), to enable remote IPC-based execution.

(.001) Component Object Model (Windows): Component Object Model (COM) is an inter-process communication (IPC) mechanism in the Windows API that allows interaction between software objects. A client object can invoke methods on server objects via COM interfaces. Languages like C, C++, Java, and Visual Basic can be used to exploit COM interfaces for arbitrary code execution. Certain COM objects also support functions such as creating scheduled tasks, enabling fileless execution, and facilitating privilege escalation or persistence.

(.002) Dynamic Data Exchange (Windows): Dynamic Data Exchange (DDE) is a client-server protocol used for one-time or continuous inter-process communication (IPC) between applications. Adversaries can exploit DDE in Microsoft Office documents, either directly or via embedded files, to execute commands without using macros. Similarly, DDE formulas in CSV files can trigger unintended operations. This technique may also be leveraged by adversaries on compromised systems where direct access to command or scripting interpreters is restricted.

(.003) XPC Services (macOS): macOS uses XPC services for inter-process communication, such as between the XPC Service daemon and privileged helper tools in third-party apps. Applications define the communication protocol used with these services. Adversaries can exploit XPC services to execute malicious code, especially if the app's XPC handler lacks proper client validation or input sanitization, potentially leading to privilege escalation.

9. T1106: Native API
Native APIs provide controlled access to low-level kernel services, including those related to hardware, memory management, and process control. These APIs are used by the operating system during system boot and for routine operations. However, adversaries may abuse native API functions to carry out malicious actions. By using assembly directly or indirectly to invoke system calls, attackers can bypass user-mode security measures such as API hooks. Also, attackers may try to change or stop defensive tools that track API use by removing functions or changing sensor behavior. Many well-known exploit tools and malware families, such as Cobalt Strike, Emotet, Lazarus Group, LockBit 3.0, and Stuxnet, have leveraged Native API techniques to bypass security mechanisms, evade detection, and execute low-level malicious operations.

10. T1053: Scheduled Task/Job
This technique involves adversaries abusing task scheduling features to execute malicious code at specific times or intervals. Task schedulers are available across major operating systems, including Windows, Linux, macOS, and containerized environments, and can also be used to schedule tasks on remote systems. Adversaries commonly use scheduled tasks for persistence, privilege escalation, and to run malicious payloads under the guise of trusted system processes.

(.002) At: The "at" utility is available on Windows, Linux, and macOS for scheduling tasks to run at specific times. Adversaries can exploit "at" to execute programs at system startup or on a set schedule, helping them maintain persistence. It can also be misused for remote execution during lateral movement or to run processes under the context of a specific user account. In Linux environments, attackers may use "at" to break out of restricted environments, aiding in privilege escalation.

(.003) Cron: The "cron" utility is a time-based job scheduler used in Unix-like operating systems. The "crontab" file contains scheduled tasks and the times at which they should run.
These files are stored in system-specific file paths. Adversaries can exploit "cron" in Linux or Unix environments to execute programs at startup or on a set schedule, maintaining persistence. In ESXi environments, "cron" jobs must be created directly through the "crontab" file.

(.005) Scheduled Task: Adversaries can misuse the Windows Task Scheduler to run programs at startup or on a schedule, ensuring persistence. It can also be exploited for remote execution during lateral movement or to run processes under specific accounts (e.g., SYSTEM). Similar to System Binary Proxy Execution, attackers may hide one-time executions under trusted system processes. They can also create "hidden" tasks that are not visible to defender tools or manual queries. Additionally, attackers may alter registry metadata to further conceal these tasks.

(.006) Systemd Timers: Systemd timers are files with a .timer extension used to control services in Linux, serving as an alternative to cron. They can be activated remotely via the systemctl command over SSH. Each .timer file requires a corresponding .service file. Adversaries can exploit systemd timers to run malicious code at startup or on a schedule for persistence. Timers placed in privileged paths can maintain root-level persistence, while user-level timers can provide user-level persistence.

(.007) Container Orchestration Job: Container orchestration jobs automate tasks at specific times, similar to cron jobs on Linux. These jobs can be configured to maintain a set number of containers, helping persist within a cluster. In Kubernetes, a CronJob schedules a Job that runs containers to perform tasks. Adversaries can exploit CronJobs to deploy Jobs that execute malicious code across multiple nodes in a cluster.

11. T1648: Serverless Execution
Cloud providers offer various serverless resources, such as compute functions, integration services, and web-based triggers, that adversaries can exploit to execute arbitrary commands, hijack resources, or deploy functions for further compromise. Cloud events can also trigger these serverless functions, potentially enabling persistent and stealthy execution over time. An example of this is Pacu, a well-known open-source AWS exploitation framework, which leverages serverless execution techniques.

12. T1229: Shared Modules
Shared modules are executable components loaded into processes to provide access to reusable code, such as custom functions or Native API calls. Adversaries can abuse this mechanism to execute arbitrary payloads by modularizing their malware into shared objects that perform various malicious functions. On Linux and macOS, the module loader can load shared objects from any local path. On Windows, the loader can load DLLs from both local paths and Universal Naming Convention (UNC) network paths.

13. T1072: Software Deployment Tools
Adversaries may exploit centralized management tools to execute commands and move laterally across enterprise networks. Access to endpoint or configuration management platforms can enable remote code execution, data collection, or destructive actions like wiping systems. SaaS-based configuration management tools can also extend this control to cloud-hosted instances and on-premises systems. Similarly, configuration tools used in network infrastructure devices may be abused in the same way. The level of access required for such activity depends on the system's configuration and security posture.
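One simple way defenders hunt for the cron and systemd timer persistence described above (T1053.003 and T1053.006) is scanning scheduler files for commands that reference world-writable paths or download-and-execute patterns. The Python sketch below illustrates the idea; the paths and regex are illustrative assumptions and would need tuning to avoid false positives on legitimate jobs.

```python
# A minimal, hedged sketch of hunting for suspicious scheduled-task persistence
# (cron jobs and systemd timers). Paths and patterns are illustrative only.
import glob
import re

SUSPICIOUS = re.compile(r"(/tmp/|/dev/shm/|curl\s+http|wget\s+http|base64\s+-d)", re.IGNORECASE)
TARGETS = ["/etc/crontab", "/etc/cron.d/*", "/var/spool/cron/*",
           "/etc/systemd/system/*.service", "/etc/systemd/system/*.timer"]

def scan_scheduled_tasks():
    hits = []
    for pattern in TARGETS:
        for path in glob.glob(pattern):
            try:
                with open(path, "r", errors="replace") as f:
                    for lineno, line in enumerate(f, 1):
                        if SUSPICIOUS.search(line):
                            hits.append((path, lineno, line.strip()))
            except OSError:
                continue  # unreadable entries are skipped, not fatal
    return hits

for path, lineno, line in scan_scheduled_tasks():
    print(f"{path}:{lineno}: {line}")
```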
14. T1569: System Services
System services and daemons can be abused to execute malicious commands or programs, whether locally or remotely. Creating or modifying services allows execution of payloads for persistence, particularly if they are set to run at startup, or for temporary, one-time actions.

(.001) Launchctl (macOS): launchctl interacts with launchd, the service management framework for macOS. It supports running subcommands via the command line, interactively, or from standard input. Adversaries can use launchctl to execute commands and programs as Launch Agents or Launch Daemons, either through scripts or manual commands.

(.002) Service Execution (Windows): The Windows Service Control Manager (services.exe) manages services and is accessible through both the GUI and system utilities. Tools like PsExec and sc.exe can be used for remote execution by specifying remote servers. Adversaries may exploit these tools to execute malicious content by starting new or modified services. This technique is often used for persistence or privilege escalation.

(.003) Systemctl (Linux): systemctl is the main interface for systemd, the Linux init system and service manager. It is typically used from a shell but can also be integrated into scripts or applications. Adversaries may exploit systemctl to execute commands or programs as systemd services.

15. T1204: User Execution
Users may be tricked into running malicious code by opening a harmful file or link, often through social engineering. While this usually happens right after initial access, it can occur at other stages of an attack. Adversaries might also deceive users into enabling remote access tools, running malicious scripts, or manually downloading and executing malware. Tech support scams often use phishing, vishing, and fake websites, with scammers spoofing numbers or setting up fake call centers to steal access or install malware.

(.001) Malicious Link: Users may be tricked into clicking on a link that triggers code execution. This could also involve exploiting a browser or application vulnerability (Exploitation for Client Execution). Additionally, links might lead users to download files that, when executed, deliver malware.

(.002) Malicious File: Users may be tricked into opening a file that leads to code execution. Adversaries often use techniques like masquerading and obfuscating files to make them appear legitimate, increasing the chances that users will open and execute the malicious file.

(.003) Malicious Image: Cloud images from platforms like AWS, GCP, and Azure, as well as popular container runtimes like Docker, can be backdoored. These compromised images may be uploaded to public repositories, and users might unknowingly download and deploy an instance or container, bypassing Initial Access defenses. Adversaries may also use misleading names to increase the chances of users mistakenly deploying the malicious image.

(.004) Malicious Copy and Paste: Users may be deceived into copying and pasting malicious code into a Command or Scripting Interpreter. Malicious websites might display fake error messages or CAPTCHA prompts, instructing users to open a terminal or the Windows Run dialog and run arbitrary, often obfuscated commands. Once executed, the adversary can gain access to the victim's machine. Phishing emails may also be used to trick users into performing this action.
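As a small illustration of countering the masquerading described under Malicious File (.002) above, the following hedged Python sketch flags attachment names that hide an executable behind a document-looking double extension. The extension lists are illustrative assumptions, not a complete mail-filtering policy.

```python
# Hedged sketch: flagging attachment names that masquerade as documents (T1204.002).
EXECUTABLE_EXTS = {".exe", ".scr", ".js", ".vbs", ".hta", ".ps1", ".bat", ".lnk"}
DOCUMENT_EXTS = {".pdf", ".docx", ".xlsx", ".txt", ".jpg"}

def is_suspicious_attachment(filename: str) -> bool:
    parts = filename.lower().rsplit(".", 2)
    if len(parts) == 3:
        # double extension such as "invoice.pdf.exe"
        if f".{parts[1]}" in DOCUMENT_EXTS and f".{parts[2]}" in EXECUTABLE_EXTS:
            return True
    return f".{parts[-1]}" in EXECUTABLE_EXTS if len(parts) > 1 else False

for name in ["report.docx", "invoice.pdf.exe", "notes.txt", "update.js"]:
    print(name, is_suspicious_attachment(name))
```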
16. T1047: Windows Management Instrumentation
WMI (Windows Management Instrumentation) is a tool designed for programmers, providing a standardized way to manage and access data on Windows systems. It serves as an administrative feature that allows interaction with system components. Adversaries can exploit WMI to interact with both local and remote systems, using it to perform actions such as gathering information for discovery or executing commands and payloads.

How F5 can help?
F5 security solutions like WAF (Web Application Firewall), API security, and DDoS mitigation protect applications and APIs across platforms, including cloud, edge, on-prem, and hybrid environments, thereby reducing security risks. F5 bot and risk management solutions can also stop bad bots and automation, making your modern applications safer. The example attacks mentioned under the techniques above can be effectively mitigated by F5 products like Distributed Cloud, BIG-IP, and NGINX. Here are a few links that explain the mitigation steps:
Mitigating Cross-Site Scripting (XSS) using F5 Advanced WAF
Mitigating Struts2 RCE using F5 BIG-IP
For more details on the other mitigation techniques of MITRE ATT&CK Execution Tactic TA0002, please reach out to your local F5 team.

Reference Links:
Execution, Tactic TA0002 - Enterprise | MITRE ATT&CK®
MITRE ATT&CK: What It Is, How it Works, Who Uses It and Why | F5 Labs
F5 BIG-IP and NetApp StorageGRID - Providing Fast and Scalable S3 API for AI apps

F5 BIG-IP, an industry-leading ADC solution, can provide load balancing services for HTTPS servers, with full security applied in-flight and performance levels to meet any enterprise's capacity targets. Specific to the S3 API, the object storage and retrieval protocol that rides upon HTTPS, an aligned partnering solution exists from NetApp, which allows a large-scale set of S3 API targets to ingest and provide objects. Automatic backend synchronization allows any node to be offered up as a target by a server load balancer like BIG-IP. This allows overall storage node utilization to be optimized across the node set and scaled performance to reach the highest S3 API bandwidth levels, all while offering high availability to S3 API consumers. If one node fails or is undergoing maintenance, the overall service continues.

S3-compatible storage is becoming popular for AI applications due to its superior performance over traditional protocols such as NFS or CIFS, as well as enabling repatriation of data from the cloud to on-prem. These are scenarios where the amount of data faced is large, which drives the requirement for new levels of scalability and performance; S3-compatible object storage such as NetApp StorageGRID is purpose-built to reach such levels.

Sample BIG-IP and StorageGRID Configuration
This document is based upon tests and measurements using the following lab configuration. All devices in the lab were virtual machine-based offerings. The S3 service to be projected to the outside world, depicted in the above diagram and delivered to the client via the external network, uses a BIG-IP virtual server (VS) tied to an origin pool of three large-capacity StorageGRID nodes. The BIG-IP maintains the integrity of the NetApp nodes with frequent HTTP-based health checks. Should an unhealthy node be detected, it will be dropped from the list of active pool members. When content is written via the S3 protocol to any node in the pool, the other members are synchronized to serve up the content should they be selected by BIG-IP for future read requests.

The key recommendations and observations in building the lab include:
- Set up a local certificate authority such that all nodes can be trusted by the BIG-IP. Typically, the local CA-signed certificate will incorporate every node's FQDN and IP address within the listed subject alternative names (SAN) to streamline the backend solution with one single certificate (a quick verification sketch follows this list).
- Different F5 profiles, such as FastL4 or FastHTTP, can be selected to reach the right tradeoff between the absolute capacity of stateful traffic load-balanced versus rich layer 7 functions like iRules or authentication.
- Modern techniques such as multi-part uploads or HTTP Ranges for downloads can take large objects and concurrently move smaller pieces across the load balancer, lowering total transaction times and spreading work over more CPU cores.

The S3 protocol, at its core, is a set of REST API calls. To facilitate testing, the widely used S3Browser (www.s3browser.com) was used to quickly and intuitively create S3 buckets on the NetApp offering and send/retrieve objects (files) through the BIG-IP load balancer.
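For the certificate recommendation above, here is a small, hedged Python sketch that connects to a node, verifies its chain against the local CA, and prints the SAN entries so you can confirm every node FQDN and IP is present. The hostnames, port, and CA file name are placeholders for a lab like this one, not product defaults.

```python
import socket
import ssl

def peer_cert_sans(host: str, port: int = 18082, cafile: str = "local-ca.pem"):
    """Return the subjectAltName entries presented by a StorageGRID node."""
    ctx = ssl.create_default_context(cafile=cafile)
    ctx.check_hostname = False            # we inspect the SAN list ourselves below
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()      # parsed because the chain was verified
    return cert.get("subjectAltName", ())

for node in ("storage-node-1.lab.local", "storage-node-2.lab.local"):
    print(node, peer_cert_sans(node))
```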
Setup the BIG-IP and StorageGRID Systems
The StorageGRID solution is an array of storage nodes, provisioned with the help of an administrative host, the "Grid Manager". For interactive users, no thick client is required, as on-board web services allow a streamlined experience entirely through an Internet browser. The following is an example of Grid Manager, taken from a Chrome browser; one can see that the three storage nodes have been successfully added.

The load balancer, in our case the BIG-IP, is set up with a virtual server to support HTTPS traffic and distributes that traffic, which is S3 object storage traffic, to the three StorageGRID nodes. The following screenshot demonstrates that the BIG-IP is set up in a standard HA (active-passive pair) configuration and that the three pool members are healthy (green, health checks are fine) and receiving/sending S3 traffic, as the byte counts seen in the image are non-zero. On the internal side of the BIG-IP, TCP port 18082 is being used for S3 traffic.

To test the solution, including features such as multi-part uploads and downloads, a popular S3 tool, S3Browser, was downloaded and used. The following shows the entirety of the S3Browser setup. Simply create an account (StorageGRID-Account-01 in our example) and point the REST API endpoint at the BIG-IP virtual server that is acting as the secure front door for our pool of NetApp nodes. The S3 Access Key ID and Secret values are generated at turn-up time of the NetApp appliances. All S3 traffic will, of course, be SSL/TLS encrypted. BIG-IP will intercept the SSL traffic (high-speed decrypt) and then re-encrypt when proxying the traffic to a selected origin pool member. Other valid load balancer setups exist; one might include an "offload" approach to SSL, whereby S3 nodes safely co-located in a data center may prefer to receive non-SSL HTTP S3 traffic. This may see an overall performance improvement in terms of peak bandwidth per storage node, but it comes at the tradeoff of security considerations.

Experimenting with S3 Protocol and Load Balancing
With all the elements in place to start understanding the behavior of S3 and spreading traffic across NetApp nodes, a quick test involved creating an S3 bucket and placing some objects in that new bucket. Buckets are logical collections of objects, conceptually not that different from folders or directories in file systems. In fact, an S3 bucket could even be mounted as a folder in an operating system such as Linux. In their simplest form, buckets most commonly serve as high-capacity, performant storage and retrieval targets for similarly themed structured or unstructured data.

In the first test, we created a new bucket ("audio-clip-bucket") and uploaded four sample files to the new bucket using S3Browser. We then zeroed the statistics for each pool member on the BIG-IP to see if even this small upload would spread S3 traffic across more than a single NetApp device. Immediately after the upload, the counters reflect that two StorageGRID nodes were selected to receive S3 transactions.

Richly detailed, per-transaction visibility can be obtained by leveraging the F5 SSL Orchestrator (SSLO) feature on the BIG-IP, whereby copies of the bi-directional S3 traffic decrypted within the load balancer can be sent to packet loggers, analytics tools, or even protocol analyzers like Wireshark. The BIG-IP also has an onboard analytics tool, Application Visibility and Reporting (AVR), which can provide details on the nuances of the S3 traffic being proxied. AVR demonstrates the following characteristics of the above traffic, a simple bucket creation and an upload of four objects. With AVR, one can see the URL values used by S3, which include the bucket name itself as well as transactions incorporating the object names as URLs. Also, the HTTP methods used included both GETs and PUTs. The use of HTTP PUT is expected when creating a new bucket.

S3 is not governed by a typical standards body document, such as an IETF Request for Comments (RFC), but rather has evolved out of AWS and their use of S3 since 2006. For details around S3 API characteristics and nomenclature, this site can be referenced. For example, the expected syntax for creating a bucket is provided, including the fact that it should be an HTTP PUT to the root (/) URL target, with the bucket configuration parameters, including the name, provided within the HTTP transaction body.
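For readers who prefer scripting the same experiment rather than clicking through S3Browser, here is a hedged boto3 sketch that creates the bucket through the BIG-IP virtual server and uploads a small object. The bucket name comes from the lab above; the endpoint URL, credentials, and CA bundle path are placeholders you would replace with your own values.

```python
# A hedged boto3 sketch of the bucket-creation test, scripted instead of via S3Browser.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3-vip.example.internal",  # BIG-IP virtual server, assumed FQDN
    aws_access_key_id="REPLACE_ACCESS_KEY",
    aws_secret_access_key="REPLACE_SECRET_KEY",
    verify="local-ca.pem",                            # CA that signed the VS certificate
)

s3.create_bucket(Bucket="audio-clip-bucket")          # an HTTP PUT, as described above
s3.put_object(Bucket="audio-clip-bucket", Key="clip-01.wav", Body=b"sample bytes")
for obj in s3.list_objects_v2(Bucket="audio-clip-bucket").get("Contents", []):
    print(obj["Key"], obj["Size"])
```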
Achieving High Performance S3 with BIG-IP and StorageGRID
A common concern with protocols such as HTTP is head-of-line blocking, where one large, lengthy transaction blocks subsequent desired, and now queued, transactions. This is one of the reasons for parallelism in HTTP 1.x, where loading 30 or more objects to paint a web page will often utilize two, four, or even more concurrent TCP sessions. Another performance issue when dealing with very large transactions is that, without parallelism, even the most performant networks will see an established TCP session reach a maximum congestion window (CWND) where no more segments may be put in flight until new TCP ACKs arrive back. Advanced TCP options like TCP exponential windowing or TCP SACK can help, but regardless of this, the achievable bandwidth of any one TCP session is bounded and may also frequently task only one core in multi-core CPUs.

With the BIG-IP serving as the intermediary, large S3 transactions may default to "multi-part" uploads and downloads. The larger objects become a series of smaller objects that can conveniently be load-balanced by BIG-IP across the entire cluster of NetApp nodes. As displayed in the following diagram, we are asking for multi-part uploads to kick in for objects larger than 5 megabytes. After uploading a 20-megabyte file (technically, 20,000,000 bytes), the BIG-IP shows the traffic distributed across multiple NetApp nodes to the tune of 160.9 million bits. The incoming bits, incoming from the perspective of the origin pool members, confirm the delivery of the object with a small amount of protocol overhead (bits divided by eight to reach bytes). The value of load balancing manageable chunks of very large objects will pay dividends over time, with faster overall transaction completion times due to the spreading of traffic across NetApp nodes, more TCP sessions reaching high congestion window values, and no single-core bottlenecks in multicore equipment.
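If you script uploads instead of using S3Browser, multipart behavior can be forced explicitly. The hedged boto3 sketch below mirrors the lab's 5 MB threshold so that a large object is split into parts the BIG-IP can balance across StorageGRID nodes; the endpoint, credentials, and file name are placeholders.

```python
# A hedged sketch of forcing multipart uploads from boto3, mirroring the 5 MB
# threshold used in the lab. Endpoint, credentials, and file names are placeholders.
import boto3
from boto3.s3.transfer import TransferConfig

config = TransferConfig(
    multipart_threshold=5 * 1024 * 1024,  # objects above ~5 MB use multipart upload
    multipart_chunksize=5 * 1024 * 1024,  # size of each part sent as its own request
    max_concurrency=8,                    # parallel part uploads over separate TCP sessions
)

s3 = boto3.client("s3", endpoint_url="https://s3-vip.example.internal",
                  aws_access_key_id="REPLACE", aws_secret_access_key="REPLACE",
                  verify="local-ca.pem")
s3.upload_file("large-training-set.bin", "audio-clip-bucket", "large-training-set.bin",
               Config=config)
```

Each part travels as its own request, so the load balancer can spread the pieces across pool members and more TCP sessions can ramp their congestion windows in parallel, which is exactly the effect described above.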
Tuning BIG-IP for High Performance S3 Service Delivery
The F5 BIG-IP offers a set of different profiles its Local Traffic Manager (LTM) module can run in accordance with; LTM is the heart of the server load balancing function. The most performant profile in terms of attainable traffic load is the "FastL4" profile. This, and other profiles such as "OneConnect" or "FastHTTP", can be tied to a virtual server, and details around each profile can be found within the BIG-IP GUI.

The FastL4 profile can increase virtual server performance and throughput for supported platforms by using the embedded Packet Velocity Acceleration (ePVA) chip to accelerate traffic. The ePVA chip is a hardware acceleration field programmable gate array (FPGA) that delivers high-performance L4 throughput by offloading traffic processing to the hardware acceleration chip. The BIG-IP makes flow acceleration decisions in software and then offloads eligible flows to the ePVA chip for that acceleration. For platforms that do not contain the ePVA chip, the system performs acceleration actions in software. Software-only solutions can increase performance in direct relationship to the hardware offered by the underlying host. As examples of BIG-IP Virtual Edition (VE) software running on mid-grade hardware platforms, results with Dell can be found here and similar experiences with HPE ProLiant platforms are here.

One thing to note about FastL4 as the profile to underpin a performance-mode BIG-IP virtual server is that it is layer 4 oriented. For certain features that involve layer 7 HTTP-related fields, such as using iRules to swap HTTP headers or perform HTTP authentication, a different profile might be more suitable. A bonus of FastL4 is a set of interesting performance features specific to it. In the BIG-IP version 17 release train, there is a feature to quickly tear down, with no delay, TCP sessions that are no longer required. Most TCP stacks implement TCP "2MSL" rules, where upon receiving and sending TCP FIN messages, the socket enters a lengthy TCP "TIME_WAIT" state, often minutes long. This stems back to the historically bad packet loss environments of the very early Internet. A concern was that, with high latency and packet loss, incoming packets might arrive at a target very late, and the TCP state machine would be confused if no record of the socket still existed. As such, the lengthy TIME_WAIT period was adopted even though it consumes on-board resources to maintain the state. With FastL4, a "fast" close with TCP reset option now exists, such that any incoming TCP FIN message observed by BIG-IP will result in TCP RESETs being sent to both endpoints, normally bypassing TIME_WAIT penalties.

OneConnect and FastHTTP Profiles
As mentioned, other traffic profiles on BIG-IP are directed towards layer 7 and HTTP features. One interesting profile is F5's "OneConnect". The OneConnect feature set works with HTTP Keep-Alives, which allows the BIG-IP system to minimize the number of server-side TCP connections by making existing connections available for reuse by other clients. This reduces, among other things, excessive TCP three-way handshakes (SYN, SYN-ACK, ACK) and mitigates the small TCP congestion windows that new TCP sessions start with and only increase with successful traffic delivery. Persistent server-side TCP connections ameliorate this. When a new connection is initiated to the virtual server, if an existing server-side flow to the pool member is idle, the BIG-IP system applies the OneConnect source mask to the IP address in the request to determine whether it is eligible to reuse the existing idle connection. If it is eligible, the BIG-IP system marks the connection as non-idle and sends a client request over it. If the request is not eligible for reuse, or an idle server-side flow is not found, the BIG-IP system creates a new server-side TCP connection and sends client requests over it.

The last profile considered is the "Fast HTTP" profile. The Fast HTTP profile is designed to speed up certain types of HTTP connections and again strives to reduce the number of connections opened to the back-end HTTP servers. This is accomplished by combining features from the TCP, HTTP, and OneConnect profiles into a single profile that is optimized for network performance.
The last profile considered is the "Fast HTTP" profile. The Fast HTTP profile is designed to speed up certain types of HTTP connections and again strives to reduce the number of connections opened to the back-end HTTP servers. This is accomplished by combining features from the TCP, HTTP, and OneConnect profiles into a single profile that is optimized for network performance. A resulting high-performance HTTP virtual server processes connections on a packet-by-packet basis and buffers only enough data to parse packet headers. Its TCP behavior operates as follows: the BIG-IP system establishes server-side flows by opening TCP connections to pool members. When a client makes a connection to the performance HTTP virtual server, if an existing server-side flow to the pool member is idle, the BIG-IP LTM system marks the connection as non-idle and sends a client request over the connection.

Summary

The NetApp StorageGRID multi-node, S3-compatible object storage solution pairs well with a high-performance server load balancer, making the F5 BIG-IP a good fit. The S3 protocol itself can be adjusted to improve transaction response times, such as through the use of multi-part uploads and downloads, which amplifies the default load balancing by spreading even more traffic chunks over many NetApp nodes. BIG-IP has numerous approaches to configuring virtual servers, from the highest-performance L4-focused profiles to similar offerings that retain L7 HTTP awareness. Lab testing was accomplished using the S3Browser utility, and results of traffic flows were confirmed with both the standard BIG-IP GUI and the additional AVR analytics module, which provides additional protocol insight.

Overview of MITRE ATT&CK Tactic: TA0009 - Collection
This article is a continuation of our MITRE ATT&CK series. In this article, we focus on the Collection tactic and the techniques adversaries use to gather, stage, and organize data from compromised systems before exfiltration. As attackers progress through an intrusion, Collection becomes critical for assembling sensitive files, credentials, screenshots, and other high-value information that will fuel data theft, espionage, or destructive operations.

Overview of MITRE ATT&CK Tactic: TA0040 - Impact
This article focuses on the Impact tactic and the techniques adversaries use to manipulate, disrupt, or damage systems and data as they reach the final stage of an attack. This is one of the most critical tactics, as it highlights the adverse effects attackers can cause, including exploitation, operational disruption, data destruction, or financial gain.

Overview of MITRE ATT&CK Tactic - TA0010 Exfiltration
Introduction

In current times of cyber vulnerabilities, data theft is the ultimate objective with which attackers monetize their presence within a victim network. Once valuable information is identified and collected, attackers can package sensitive data, bypass perimeter defenses, and finalize the breach. Exfiltration (MITRE ATT&CK Tactic TA0010) represents a critical stage of the adversary lifecycle, where adversaries focus on extracting data from the systems under their control. There are multiple ways to achieve this, either by using encryption and compression to avoid detection or by utilizing the command-and-control channel to blend in with normal network traffic. To avoid this data loss, it is important for defenders to understand how data is transferred from any system in the network and the various transmission limits imposed to maintain stealth. This article walks through the most common Exfiltration techniques and how F5 solutions provide strong defense against them.

T1020 - Automated Exfiltration
Adversaries may use automated processing to exfiltrate sensitive data gathered during collection.

T1020.001 - Traffic Duplication
Traffic mirroring, a native traffic-analysis feature on some devices, can be abused by adversaries to automate data exfiltration.

T1030 - Data Transfer Size Limits
Data is exfiltrated in limited-size packets instead of whole files to avoid triggering network data-transfer threshold alerts.

T1048 - Exfiltration Over Alternative Protocol
Data is stolen over a protocol or channel other than the command-and-control channel created by the adversary.

T1048.001 - Exfiltration Over Symmetric Encrypted Non-C2 Protocol
Symmetric encryption uses the same shared key/secret on both ends of the channel, which requires an exchange of the value used to encrypt and decrypt the data. Adversaries rely on symmetric cryptographic algorithms, such as RC4 or AES, baked into the protocols, resulting in multiple layers of encryption.

T1048.002 - Exfiltration Over Asymmetric Encrypted Non-C2 Protocol
Asymmetric encryption algorithms, or public-key cryptography, require a pair of cryptographic keys; data encrypted on one end of the channel can only be decrypted with the corresponding key on the other end.

T1048.003 - Exfiltration Over Unencrypted Non-C2 Protocol
Instead of encryption, adversaries may obfuscate the data within network protocols using custom or publicly available encoding or compression algorithms (Base64, hex encoding) and embed the data in the traffic.

T1041 - Exfiltration Over C2 Channel
Adversaries can also steal data over existing command-and-control channels, encoding the data into normal communications.

T1011 - Exfiltration Over Other Network Medium
Exfiltration can occur over a network medium different from the command-and-control channel; if the C2 channel is a wired Internet connection, for example, exfiltration may take place over a Wi-Fi connection, modem, cellular data connection, or Bluetooth.

T1011.001 - Exfiltration Over Bluetooth
Bluetooth can be used to exfiltrate data instead of the command-and-control channel, for instance when the command-and-control channel is a wired Internet connection.

T1052 - Exfiltration Over Physical Medium
Under certain circumstances, such as an air-gapped network compromise, exfiltration occurs through a physical medium. Adversaries can exfiltrate data using removable media such as external hard drives, USB drives, cellular phones, or MP3 players.
T1052.001 - Exfiltration Over USB
One such circumstance is where the adversary attempts to exfiltrate data over a USB-connected physical device, which can be used as the final exfiltration point or to hop between otherwise disconnected systems.

T1567 - Exfiltration Over Web Services
Adversaries may use a legitimate external web service to exfiltrate data instead of their command-and-control channel.

T1567.001 - Exfiltration to Code Repository
Data is exfiltrated to a code repository rather than over the adversary's command-and-control channel. These code repositories are accessible via an API over HTTPS.

T1567.002 - Exfiltration to Cloud Storage
Data is exfiltrated to cloud storage rather than over the primary command-and-control channel. These cloud storage services allow storage, editing, and retrieval of the exfiltrated data.

T1567.003 - Exfiltration to Text Storage Sites
Data is exfiltrated to a text storage site rather than over the primary command-and-control channel. These text storage sites, like pastebin[.]com, are used by developers to share code.

T1567.004 - Exfiltration Over Webhook
Adversaries may also exfiltrate data to a webhook endpoint, a simple mechanism that allows a server to push data over HTTP/S to a client. The creation of webhooks is supported by many public services, such as Discord and Slack, and webhooks can be used with other services, like GitHub, Jira, or Trello.

T1029 - Scheduled Transfer
Adversaries may schedule data exfiltration only at certain times of the day or at certain intervals, blending the traffic patterns with general activity.

T1537 - Transfer Data to Cloud Account
Exfiltration can also occur by sharing, syncing, or backing up data from a cloud environment to another cloud account under adversary control on the same service.

How F5 Can Help

F5 offers a comprehensive suite of security solutions designed to safeguard applications and APIs across diverse environments, including cloud, edge, on-premises, and hybrid platforms. These solutions enable robust risk management to effectively mitigate and protect against MITRE ATT&CK Exfiltration threats, delivering advanced functionality such as:

Web Application Firewall (WAF): Available across all F5 products, the WAF is a flexible, multi-layered security solution that protects web applications from a wide range of threats. It delivers consistent defense, whether applications are deployed on-premises, in the cloud, or in hybrid environments.

HTTPS Encryption: F5 provides robust HTTPS encryption to secure sensitive data in transit, ensuring protected communication between users and applications by preventing unauthorized access or data interception.

Protecting sensitive data with Data Guard: F5's WAF Data Guard feature prevents sensitive data leakage by detecting and blocking exposure of confidential information, such as credit card numbers and PII. It uses predefined patterns and customizable policies to identify transmissions of sensitive data in application responses or inputs. This proactive mechanism secures applications against data theft and ensures compliance with regulatory standards.
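To give a flavor of the kind of response-side pattern matching described above, the following is a deliberately simplified iRule sketch that masks card-number-like digit runs in small text responses. It is an illustration only, not the Data Guard implementation: Data Guard is configured within the WAF policy itself, is far more precise, and does not require an iRule. The size limit, pattern, and mask string below are arbitrary assumptions.

```tcl
when HTTP_RESPONSE {
    # Inspect only small text responses with a known length; collect the body for inspection.
    if { [HTTP::header exists "Content-Length"] &&
         [HTTP::header "Content-Length"] > 0 &&
         [HTTP::header "Content-Length"] <= 1048576 &&
         [HTTP::header "Content-Type"] starts_with "text" } {
        HTTP::collect [HTTP::header "Content-Length"]
    }
}

when HTTP_RESPONSE_DATA {
    set body [HTTP::payload]
    # Crude pattern: mask any 13-16 digit run that could be a payment card number.
    if { [regsub -all {\d{13,16}} $body {****MASKED****} body] > 0 } {
        HTTP::payload replace 0 [HTTP::payload length] $body
    }
    HTTP::release
}
```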
For more information, please contact your local F5 sales team.

Conclusion

Adversaries' exfiltration of data often aims to steal sensitive information by packaging it to evade detection, using methods such as compression or encryption. They may transfer the data through command-and-control channels or alternate paths while applying stealth techniques like transmission size limitations. To defend against these threats, F5 provides a layered approach with its advanced offerings. The Web Application Firewall (WAF) identifies and neutralizes malicious traffic aimed at exploiting application vulnerabilities. HTTPS encryption ensures secure data transmission, preventing unauthorized interception during the attack. Meanwhile, the Data Guard feature helps detect and block exposure of confidential information, such as credit card numbers and PII. Together, these F5 solutions effectively counteract data exfiltration attempts and safeguard critical assets.

Reference links
MITRE | ATT&CK Tactic 10 - Exfiltration
MITRE ATT&CK: What It Is, How It Works, Who Uses It and Why | F5 Labs
MITRE ATT&CK®

What's new in BIG-IP v21.0?
Introduction

In November of 2025, F5 released the latest version of BIG-IP software, v21.0. This release is packed with fixes and new features that enhance the F5 Application Delivery and Security Platform (ADSP), complementing the Delivery, Security, and Deployment aspects of the ADSP.

New SSL Orchestrator Features

SNI Preservation
SNI (Server Name Indication) preservation is now supported for Inbound Gateway mode. This preserves the client's original SNI information as traffic passes through the reverse proxy, allowing backend TLS servers to access and use this information. This enables accurate application routing and supports security workflows like threat detection and compliance enforcement. Previous software versions required custom iRules to enable this functionality. Note: SNI preservation is enabled by default. However, if you have existing Inbound Gateway topologies, you must redeploy them for the change to take effect.

iRule Control for Service Entry and Return
Previously, iRules were only available on the entry (ingress) side, limiting customization to traffic entering the inspection service. iRule control is now extended to the return-side traffic of inspection services, so you can apply iRules on both sides of an inspection service (L2, L3, HTTP). This enhancement provides full control over traffic entering and leaving the inspection service, enabling more flexible, powerful, and fine-grained traffic handling. The Services page now includes configuration for iRules on service entry and iRules on service return.

A typical use case for this feature is what we call header enrichment, where iRules add headers to the payload before sending it to the inspection service. The headers could contain the authenticated username or group membership of the person who initiated the connection. This information can be useful to inspection services for logging, policy enforcement, or both. The benefit of this feature is that the header can be removed from the payload on egress, preventing it from being leaked to origin servers.
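As an illustration of this use case, the pair of iRules below sketches what entry-side enrichment and return-side cleanup could look like. The header name, the session variable, and the assumption that an APM access profile has populated the username are all illustrative; they are not taken from the SSL Orchestrator documentation, and a production deployment should follow the product's own service iRule guidance.

```tcl
# Attached at service entry: add identity context before traffic reaches the inspection device.
when HTTP_REQUEST {
    # Assumes an APM access profile is in place so session variables are available.
    set enrich_user [ACCESS::session data get "session.logon.last.username"]
    if { $enrich_user ne "" } {
        HTTP::header insert "X-Authenticated-User" $enrich_user
    }
}
```

```tcl
# Attached at service return: strip the enrichment header so it is never sent to origin servers.
when HTTP_REQUEST {
    HTTP::header remove "X-Authenticated-User"
}
```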
New Access Policy Manager (APM) Features

Expanded Exclusion Support for Locked Client Mode
Previously, APM locked client mode allowed a maximum of 10 exclusions, preventing administrators from adding more than 10 destinations. This limitation has been removed, and the exclusion list can now contain more than 10 entries.

OAuth Authorization Server Max Claims Data Support
The maximum claim data size is 8 KB by default, but large claims can lead to excessive memory consumption. Memory should be allocated dynamically, as required, based on the claims configuration.

New Features in BIG-IP v21.0.0

Control Plane Performance and Scalability Improvements
The BIG-IP 21.0.0 release introduces significant improvements to the BIG-IP control plane, including better scalability and support for large-scale configurations (up to 1 million objects). This includes MCPD efficiency enhancements and eXtremeDB scale improvements.

AI Data Delivery
Optimize performance and simplify configuration with new S3 data storage integrations. Use cases include secure ingestion for fine-tuning and batch inference, high-throughput retrieval for RAG and embeddings generation, policy-driven model artifact distribution with observability, and controlled egress with consistent security and compliance. F5 BIG-IP optimizes and secures S3 data ingress and egress for AI workloads.

Model Context Protocol (MCP) Support for AI Traffic
Accelerate and scale AI workloads with support for MCP, which enables seamless communication between AI models, applications, and data sources. This enhances performance, secures connections, and streamlines deployment for AI workloads.

Migrating BIG-IP from Entrust to Alternative Certificate Authorities
Entrust is soon to be delisted as a certificate authority by many major browsers. Following a variety of compliance failures against industry standards in recent years, browsers like Google Chrome and Mozilla Firefox made their distrust of Entrust certificates public last year. As such, Entrust certificates issued on or after November 12, 2024, are deemed insecure by most browsers.

Conclusion

Upgrade your BIG-IP to version 21.0 today to take advantage of these fixes and new features that enhance the F5 Application Delivery and Security Platform (ADSP) and complement its Delivery, Security, and Deployment aspects.

Related Content
SSL Orchestrator Release Notes
BIG-IP Release Notes
BLOG F5 BIG-IP v21.0: Control plane, AI data delivery and security enhancements
Press Release F5 launches BIG-IP v21.0
Introduction to BIG-IP SSL Orchestrator

Modernizing F5 Platforms with Ansible
I’ve been meaning to publish this article for some time now. Over the past few months, I’ve been building Ansible automation that I believe will help customers modernize their F5 infrastructure. This is especially true for those looking to migrate from legacy BIG-IP hardware to next-generation platforms like VELOS and rSeries. As I explored tools like F5 Journeys and traditional CLI-based migration methods, I noticed a significant amount of manual pre-work was still required. This includes:
- Ensuring the Master Key used to encrypt the UCS archive is preserved and securely handled
- Storing the UCS, Master Key, and information assets on a backup host
- Pre-configuring all VLANs and properly tagging them on the VELOS partition before deploying a tenant OS

To streamline this, I created an Ansible playbook with supporting roles tailored for Red Hat Ansible Automation Platform. It’s built to perform a lift-and-shift migration of an F5 BIG-IP configuration from one device to another, with optional OS upgrades included. In the demo video below, you’ll see an automated migration of an F5 i10800 running 15.1.10 to a VELOS BX110 tenant OS running 17.5.0, demonstrating a smooth, hands-free modernization process.

Currently Working

VELOS
- VELOS Controller/Partition running F5OS-C 1.8.1, which allows the tenant management IP to be in a different VLAN
- Migrates a standalone F5 BIG-IP i10800 to a VELOS BX110 tenant OS
- VLAN-tagged source tenant required (non-VLAN tenants are not supported)

rSeries
- Shares the management IP subnet with the chassis partition
- Migrates a standalone F5 BIG-IP i10800 to an r5000 tenant OS
- VLAN-tagged source tenant required (non-VLAN tenants are not supported)

Handles:
- Configuration and crypto backup
- UCS creation, transfer, and validation
- F5OS system VLAN creation and association to the tenant (does not manage interface-to-VLAN mapping)
- F5OS tenant provisioning and deployment
- Inline OS upgrades during the migration

Roadmap / What's Next
- Expanding testing to include VIPRION/iSeries (vCMP) tenant testing
- Supporting hardware-to-virtual platform migrations
- Adding functionality for HA (High Availability) environments

Watch the Demo Video

View the Source Code on GitHub
https://github.com/f5devcentral/f5-bd-ansible-platform-modernization

This project is built for the community, so feel free to take it, fork it, and expand it. Let’s make F5 platform modernization as seamless and automated as possible.
BIG-IP for Scalable App Delivery & Security in Hybrid Environments
Scope

As enterprises deploy multiple instances of the same applications across diverse infrastructure platforms, such as VMware, OpenShift, Nutanix, and public cloud environments, and across geographically distributed locations to support redundancy and facilitate seamless migration, they face increasing challenges in ensuring consistent performance, centralized security, and operational visibility. The complexity of managing distributed application traffic, enforcing uniform security policies, and maintaining high availability across hybrid environments introduces significant operational overhead and risk, hindering agility and scalability. F5 BIG-IP Application Delivery and Security addresses this challenge by providing a unified, policy-driven approach to managing secure workloads across hybrid multi-cloud environments. It can be used to scale up application services on existing infrastructure or with new business models.

Introduction

This article highlights how F5 BIG-IP deploys identical application workloads across multiple environments, ensuring high availability, seamless traffic management, and consistent performance. By supporting smooth workload transitions and zero-downtime deployments, F5 helps organizations maintain reliable, secure, and scalable applications. From a business perspective, it enhances operational agility, supports growing traffic demands, reduces risk during updates, and ultimately delivers a reliable, secure, and high-performance application experience that meets customer expectations and drives growth. This use case covers a typical enterprise setup with the following environments:
- VMware (on-premises)
- Nutanix (on-premises)
- Google Cloud Platform (GCP)

Architecture

As illustrated in the diagram, when new application workloads are provisioned across environments such as AWS, GCP, VMware (on-premises), and Nutanix (on-premises and on VMware), BIG-IP ensures seamless integration with existing services.

Platform / Supported environments:
- VMware: On-Prem, GCP, Azure
- Nutanix: On-Prem, AWS, Azure

This article outlines the deployment on the VMware platform. For deployment on other platforms, such as Nutanix and GCP, refer to the detailed guide below.
F5 Scalable Enterprise Workload Deployments Complete Guide

Scalable Enterprise Workload Deployment Across Hybrid Environments

Enterprise applications are deployed smoothly across multiple environments to address diverse customer needs. With F5's advanced Application Delivery and Security features, organizations can ensure consistent performance, high availability, and robust protection across all deployment platforms. F5 provides a unified and secure application experience across cloud, on-premises, and virtualized environments.

Workload Distribution Across Environments

Workloads are distributed across the following environments:
- VMware: App A & App B → add App C
- OpenShift: App B → add App A & App C
- Nutanix: App B & App C → add App A

Applications being used:
- App A: Juice Shop (vulnerable web app for security testing)
- App B: DVWA (Damn Vulnerable Web Application)
- App C: Mutillidae

Initial Infrastructure: VMware: App A & B, Nutanix: App B & C, GCP: App B.

VMware: In the VMware on-premises environment, Applications A and B are deployed and connected to two separate load balancers. This forms the existing infrastructure. These applications are actively serving user traffic, with delivery and security managed by BIG-IP. The Web Application Firewall (WAF) is enabled, which blocks malicious requests. The corresponding logs can be found under BIG-IP > Security > Event Logs.

Note: This initial deployment infrastructure has also been implemented on Nutanix and GCP. For the full details, please consult the complete guide here.
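In this setup each application sits behind its own virtual server, but a single virtual server can also front several of these applications and steer requests to per-application pools with a small iRule. The sketch below is illustrative only; the hostnames and pool names are assumptions, not objects from the guide.

```tcl
when HTTP_REQUEST {
    # Route by Host header to the pool serving each demo application.
    switch -glob [string tolower [HTTP::host]] {
        "juiceshop.*"  { pool vmware_app_a_pool }
        "dvwa.*"       { pool vmware_app_b_pool }
        "mutillidae.*" { pool vmware_app_c_pool }
        default        { HTTP::respond 404 content "Unknown application" }
    }
}
```

The same WAF policy, or a per-application policy, can still be attached to the virtual server, so security enforcement remains consistent however traffic is steered.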
Adding Additional Workloads

To demonstrate BIG-IP's ability to support evolving enterprise demands, we will introduce new workloads across all environments. This validates its seamless integration, consistent security enforcement, and support for continuous delivery across hybrid infrastructures.

VMware: Let us add a third application, App C (Mutillidae), to the VMware on-premises environment. Access the application through the BIG-IP virtual server. Apply the WAF policy to the newly created virtual server, then verify it by simulating malicious attacks.

Nutanix: The use case described for VMware is equally applicable and supported when deploying BIG-IP on Nutanix bare metal as well as Nutanix on VMware. For demonstration purposes, the Nutanix Community Edition hypervisor is booted as a virtual machine within VMware. Inside this hypervisor, a new virtual machine is created and provisioned using the BIG-IP image downloaded from the F5 Downloads portal. Once the BIG-IP instance is online, an additional VM hosting the application workload is deployed. This application VM is then associated with a BIG-IP virtual server, ensuring that the application remains isolated and protected from direct external exposure.

GCP (Google Cloud Platform): The use case discussed above for VMware is also applicable and supported when deploying BIG-IP on public cloud platforms such as Azure, AWS, and GCP. For demonstration purposes, GCP is selected as the cloud environment for deploying BIG-IP. Within the same project where the BIG-IP instance is provisioned, an additional virtual machine hosting application workloads is deployed and associated with the BIG-IP virtual server. This setup ensures that the application workloads remain protected behind BIG-IP, preventing direct external exposure.

Key Resources

Please refer to the detailed guide below, which outlines the deployment of Nutanix on VMware and GCP, and demonstrates how BIG-IP delivers consistent security, traffic management, and application delivery across hybrid environments.
F5 Scalable Enterprise Workload Deployments Complete Guide

Conclusion

This demonstration clearly illustrates that BIG-IP's Application Delivery and Security capabilities offer a robust, scalable, and consistent solution across both multi-cloud and on-premises environments. By deploying BIG-IP across diverse platforms, organizations can achieve uniform application security while maintaining reliable connectivity, strong encryption, and comprehensive protection for both modern and legacy workloads. This unified approach allows businesses to seamlessly scale infrastructure and address evolving user demands without sacrificing performance, availability, or security. With BIG-IP, enterprises can confidently deliver applications with resilience and speed while maintaining centralized control and policy enforcement across heterogeneous environments. Ultimately, BIG-IP empowers organizations to simplify operations, standardize security, and accelerate digital transformation across any environment.

References
F5 Application Delivery and Security Platform
BIG-IP Data Sheet
F5 Hybrid Security Architectures: One WAF Engine, Total Flexibility
Distributed Cloud (XC) Github Repo
BIG-IP Github Repo