Overview of MITRE ATT&CK Tactic: TA0004 - Privilege Escalation
Introduction The Privilege Escalation tactic in the MITRE ATT&CK framework covers techniques that adversaries use to gain higher-level permissions on compromised systems or networks. After gaining initial access, attackers frequently need elevated rights to access sensitive resources, execute restricted operations, or maintain persistence. Techniques include exploiting OS vulnerabilities, misconfigurations, or weaknesses in security controls to move from user-level to admin or root privileges. This may involve abusing elevation control mechanisms (like sudo, setuid, or UAC), manipulating accounts or tokens, leveraging scheduled tasks, or exploiting valid credentials. Techniques and Sub-Techniques T1548 – Abuse Elevation Control Mechanisms This technique involves bypassing or abusing OS mechanisms that restrict elevated execution, such as sudo, UAC, or setuid binaries. Adversaries exploit misconfigurations or weak rules to run commands with higher privileges. This often requires no exploit code, just permission misuse. Once elevated, attackers gain access to restricted system operations. T1548.001 – Setuid and Setgid Attackers run programs with elevated permissions by abusing setuid/setgid bits on Unix systems. This allows execution as another user, often root, without needing that user's password. T1548.002 – Bypass User Account Control Adversaries exploit UAC weaknesses to elevate privileges without user approval. This grants admin-level execution while maintaining user-level stealth. T1548.003 – Sudo and Sudo Caching Misconfigured sudo rules or cached sudo credentials allow attackers to run privileged commands, escalating without full authentication or bypassing intended restrictions. T1548.004 – Elevated Execution with Prompt Malicious actors deceive users into granting elevated rights to a malicious process. This relies on social engineering rather than technical exploitation. Temporary Elevated Cloud Access Cloud platforms issue temporary privileges through roles or tokens. Misconfigured role assumptions or temporary credentials can be abused to obtain short-term high-level access. TCC Manipulation Attackers tamper with macOS’s privacy-control system (TCC) to wrongfully grant apps access to sensitive resources like the camera, microphone, or full disk, essentially bypassing user consent protections. T1134 - Access Token Manipulation Adversaries modify or steal Windows access tokens to make malicious processes run with the permissions of another user. By impersonating these tokens, attackers can bypass access controls, escalate privileges, and perform actions as though they are legitimate users or even SYSTEM. Token Impersonation/Theft Attackers duplicate and impersonate another user’s token, allowing their process to operate with the privileges of the legitimate user. This technique is frequently used to gain higher-level privileges on Windows machines. Create Process with Token Adversaries use a stolen or duplicated token to spawn a new process under the security context of a higher-privilege user, enabling the execution of actions with elevated permissions. Make and Impersonate Token Attackers generate new tokens using credentials they possess, then impersonate a target user's identity to gain unauthorized access and escalate their privileges. Parent PID Spoofing This technique manipulates the parent process ID (PPID) of a new process so it appears to have a trusted parent, helping adversaries evade defenses or gain higher privileges (a simple parent-process audit sketch follows below).
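As a defensive illustration of the token-manipulation and parent-PID ideas above, the following minimal Python sketch (my own example, not part of the original article, and assuming the third-party psutil package is installed) simply prints each process with its parent and owning user so unexpected parent/child pairs or privilege jumps stand out; real detection would rely on EDR telemetry rather than a one-off snapshot like this.

```python
# Hypothetical defensive sketch: list parent/child process pairs so unusual
# lineages related to token or PPID abuse can be spotted by a human reviewer.
# Assumes "psutil" is available (pip install psutil).
import psutil

def list_process_lineage():
    for proc in psutil.process_iter(attrs=["pid", "ppid", "name", "username"]):
        info = proc.info
        try:
            parent = psutil.Process(info["ppid"]).name() if info["ppid"] else "-"
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            parent = "<unknown>"  # parent exited or is protected; worth a closer look
        print(f"{info['pid']:>6}  {info['name']:<25} parent={parent:<25} user={info['username']}")

if __name__ == "__main__":
    list_process_lineage()
```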
SID-History Injection Here, adversaries inject SID-History attributes into access tokens or Active Directory to spoof the permissions, this technique enables attackers to sidestep traditional group membership rules, granting them privileges that would normally be restricted. T1098 - Account Manipulation It refers to actions taken by attackers to preserve their access using compromised accounts, such as modifying credentials, group memberships, or account settings. By changing permissions or adding credentials, adversaries can escalate privileges, maintain persistence, or create hidden backdoors for future access. Additional Cloud Credentials Adversaries add their own keys, passwords, or service principal credentials to victim cloud accounts, enabling escalation without detection. This allows them to use new credentials and bypass standard log or security controls in cloud environments. Additional Email Delegate Permissions Attackers may grant themselves high-level permissions on email accounts, allowing unauthorized access, control or forwarding of sensitive communications, which can give visibility into victim correspondence for further attacks. Additional Cloud Roles Adversaries assign new privileged roles to compromised accounts, expanding permissions and enabling wider access to cloud resources. SSH Authorized Keys Attackers append or modify their public keys to SSH authorized_keys files on target machines. This technique bypass password authentication and allows undetected logins to compromised systems. Device Registration Adversaries register malicious devices with victim accounts, often in MFA or management portals to maintain ongoing access. This can allow attackers to access resources as trusted endpoints. Additional Container Cluster Roles Attackers grant their accounts extra permissions or roles in container orchestration systems such as Kubernetes. These elevated roles allow broader control over cluster resources and enable cluster-wide compromise. Additional Local or Domain Groups Adversaries add their accounts to privileged local or domain groups, gaining higher-level access and capabilities. This manipulates group memberships for escalation, persistence, and dominance within target environments. T1547 – Boot or Logon Autostart Execution Attackers abuse programs that automatically run during boot or login. These locations can be modified to launch malicious code with elevated privileges. This provides persistence and often higher-level execution. It is commonly achieved by manipulating registry keys, services, or startup folders. Registry Run Keys / Startup Folder: Attackers add malicious programs to Windows Registry run keys or Startup folders to ensure automatic execution when a user logs in. This technique provides persistent and often stealthy privilege escalation on system reboot and login. Authentication Package: By installing a malicious authentication package (DLL), adversaries can intercept credentials or execute code with system-level privileges during the Windows authentication process, enabling privilege escalation and persistence. Time Providers: Attackers register malicious DLLs as Windows time providers DLLs responsible for time synchronization so that their code is loaded by system processes on boot or at scheduled intervals, allowing stealthy system-level access and persistence. 
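Because the Registry Run keys and Startup folders described above are among the most common autostart abuse points, a quick way to review them is to enumerate the values under the Run keys. The sketch below is an illustrative assumption on my part (it is not from the article); it uses only the Python standard library winreg module and therefore runs on Windows only.

```python
# Hypothetical audit sketch: dump Run-key autostart entries (T1547.001) so
# unfamiliar values can be reviewed. Windows-only, standard library.
import winreg

RUN_KEYS = [
    (winreg.HKEY_CURRENT_USER, r"Software\Microsoft\Windows\CurrentVersion\Run"),
    (winreg.HKEY_LOCAL_MACHINE, r"Software\Microsoft\Windows\CurrentVersion\Run"),
]

def dump_run_keys():
    for hive, path in RUN_KEYS:
        try:
            key = winreg.OpenKey(hive, path)
        except OSError:
            continue  # key missing or access denied
        with key:
            index = 0
            while True:
                try:
                    name, value, _ = winreg.EnumValue(key, index)
                except OSError:
                    break  # no more values under this key
                print(f"{path}\\{name} -> {value}")
                index += 1

if __name__ == "__main__":
    dump_run_keys()
```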
Winlogon Helper DLL: Adversaries plant a helper DLL in Winlogon’s registry settings so it loads with each user logon, running malicious code with high privileges and ensuring execution whenever the system starts or a user logs in. Security Support Provider: Inserting a rogue Security Support Provider (SSP) DLL allows attackers to monitor or manipulate authentication and system logins, potentially capturing credentials and persisting with SYSTEM privileges at the operating system level. Kernel Modules and Extensions: Attackers load malicious modules or kernel extensions to run arbitrary code in kernel space, giving them unrestricted control over the system, hiding their presence, or manipulating low-level OS behavior for privilege escalation. Re-opened Applications: On macOS, adversaries abuse property list files that track reopened applications after reboot, ensuring their chosen programs or payloads relaunch automatically and persistently escalate privileges upon user login. LSASS Driver: Modifying or adding an LSASS (Local Security Authority Subsystem Service) driver gives attackers persistent system-level code execution, potentially accessing or controlling authentication processes. Shortcut Modification: By altering shortcut files (LNKs), adversaries ensure that opening a benign application or file instead executes attacker-controlled code, effectively leveraging user actions for privilege escalation and persistence. Port Monitors: Attackers install or hijack port monitoring DLLs, which Windows loads to manage printers, so that their code runs with SYSTEM privileges when the service starts, enabling privilege escalation and persistence. Print Processors: Planting a malicious print processor DLL, the software Windows uses to handle print jobs causes Windows to execute attacker code as SYSTEM whenever print functions are called, creating a persistence and privilege escalation method. XDG Autostart Entries: On Linux desktop environments, adversaries use XDG-compliant autostart entries to launch malicious programs automatically at user login, gaining persistent execution and the ability to operate with user or escalated privileges. Active Setup: Attackers add or modify Active Setup registry keys to ensure their payloads execute with elevated privileges during user profile initialization, such as when a new user logs in. Login Items: On macOS, adversaries add login items that point to their malicious applications or scripts, guaranteeing code execution with the user’s privileges whenever a login event occurs. T1037 - Boot or Logon Initialization Scripts It refers to the use of scripts that are automatically executed during system startup or user logon to help adversaries maintain persistence on a machine. By modifying these scripts, attackers can ensure their malicious code runs every time the system boots. Logon Script (Windows): Scripts configured in Windows to run automatically during user or group logon can be exploited by adversaries to execute malicious code with the user’s privileges, enabling persistence or escalation. Login Hook: A login hook is an macOS mechanism that allows scripts or executables to run automatically upon a user’s login, which attackers may abuse to achieve persistence or elevate privileges. Network Logon Script: These are scripts assigned via Active Directory or Group Policy to execute during network logon, potentially allowing adversaries to introduce or persist malicious code in a domain environment. 
RC Scripts: On Unix-like systems, RC (run command) scripts control startup processes. Attackers who modify these can ensure their code runs with elevated privileges every time the system boots. Startup Items: Files or programs set to launch automatically during boot or user login can be manipulated by attackers, allowing persistent or privileged execution at startup. T1543 – Create or Modify System Process Attackers modify or create system services or daemons that run with high privileges. By altering service configurations, they ensure malicious code executes as SYSTEM/root. This provides long-term persistence and elevated access. Launch Agent: Attackers can create or modify launch agents on macOS to automatically execute malicious payloads whenever a user logs in, helping maintain persistence at the user level. Systemd Service: By altering systemd service files on Linux, adversaries can ensure their code runs as a background service during startup, maintaining continuous access to the system. Windows Service: Attackers abuse Windows service configurations to install or modify services that launch malicious programs on startup or at defined intervals, allowing persistent and privileged access. Launch Daemon: On macOS, launch daemons are set up to run background processes with elevated privileges before user login, often used by attackers to achieve system-wide persistence. Container Service: Adversaries may create or modify container or cluster management services (like Docker or Kubernetes agents) to repeatedly execute malicious code inside containers as part of persistence. T1484 - Domain or Tenant Policy Modification Adversaries changing configuration settings in a domain or tenant environment, such as Active Directory or cloud identity services, to bypass security controls and escalate privileges. This can include editing group policy objects, trust relationships, or federation settings, which may impact large numbers of users or systems across an organization. Attackers leverage this technique to gain persistent elevated access and make detection or remediation much more difficult. Group Policy Modification: Attackers may alter Group Policy Objects (GPOs) in Active Directory environments to subvert security settings and gain elevated privileges across the domain. By doing, these attackers can deploy malicious tasks, change user rights or disable security controls on many systems simultaneously. Trust Modification: Adversaries change domain or tenant trust relationships, such as adding, removing or altering trust properties between domains or tenants to expand their access and ensure continued control. This can let attackers move laterally, escalate privileges across multiple domains. T1611 – Escape to Host In virtualized environments, attackers attempt to escape a container or VM. If successful, they gain access to the underlying host system, which has higher privileges. This usually arises due to weaknesses in the hypervisor or insufficient separation between virtual environments. Hence, it gives complete control to the attacker over every workload operating on that host. T1546 – Event Triggered Execution Attackers use system events like service start, scheduled job, user login, etc. to trigger malicious code. These triggers often run with SYSTEM or administrative privileges. By hijacking legitimate event handlers, the attacker executes commands without raising suspicion. It also enables persistence tied to normal system operations. 
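Before moving on to the event-triggered techniques that follow, here is a small, hypothetical audit sketch (my own illustration, not from the article) that lists recently modified systemd unit files and init scripts, since the service and boot persistence techniques above (T1543, RC scripts) usually leave exactly this kind of trace. The paths and the seven-day window are assumptions and vary by distribution.

```python
# Hypothetical sketch: flag recently changed service/boot files (T1543 / RC scripts).
# Directory list and the time window are illustrative assumptions.
import os
import time

WATCH_DIRS = ["/etc/systemd/system", "/usr/lib/systemd/system", "/etc/init.d"]
MAX_AGE_DAYS = 7

def recently_modified(max_age_days=MAX_AGE_DAYS):
    cutoff = time.time() - max_age_days * 86400
    for directory in WATCH_DIRS:
        if not os.path.isdir(directory):
            continue
        for entry in os.scandir(directory):
            if entry.is_file(follow_symlinks=False) and entry.stat().st_mtime > cutoff:
                print(f"{entry.path} modified {time.ctime(entry.stat().st_mtime)}")

if __name__ == "__main__":
    recently_modified()
```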
Change Default File Association: Attackers alter file type associations so that opening a file triggers malicious code, helping them gain persistence or escalate privileges. Screensaver: Adversaries can replace system screensavers with malicious executables, causing code to run automatically when the screensaver activates. Windows Management Instrumentation Event Subscription: By setting up WMI event subscriptions, attackers ensure their code executes in response to specific system events, establishing stealthy persistence on Windows. Unix Shell Configuration Modification: Modifying shell configuration files like .bashrc or.profile allows adversaries to start malicious code whenever a user opens a terminal session. Trap: Attackers abuse shell trap commands to execute code in response to system signals (e.g., shutdown, logoff, or errors), enhancing persistence or privilege escalation. LC_LOAD_DYLIB Addition: By adding malicious the LC_LOAD_DYLIB header to macOS binaries, attackers can force the system to load rogue dynamic libraries during execution. Netsh Helper DLL: Attackers register malicious DLLs as Netsh helpers, ensuring their code loads whenever Netsh is used, aiding persistence or privilege escalation. Accessibility Features: Abusing Windows accessibility tools (like Sticky Keys) lets attackers invoke system shells or backdoors at the login screen, bypassing standard authentication. AppCert DLLs: Adversaries inject DLLs via AppCert DLL Registry keys, so their code runs in every process creation, creating broad persistence. AppInit DLLs: Attackers exploit AppInit DLL Registry values to ensure their DLLs are loaded into multiple processes, maintaining persistence. Application Shimming: By creating or modifying Windows application shims, adversaries force the system to redirect legitimate programs to launch malicious code. Image File Execution Options Injection: Modifying Image File Execution Options (IFEO) in Registry allows attackers to set debuggers that hijack normal application launches for persistence. PowerShell Profile: Malicious code in PowerShell profile scripts will auto-run whenever PowerShell starts, providing persistence and privilege escalation opportunities. Emond: Attackers place malicious rules in macOS’s Emond event monitor daemon, causing code to run in response to system events. Component Object Model Hijacking: By hijacking references to COM objects in Windows, adversaries ensure their code launches when certain applications or system routines are invoked. Installer Packages: Attackers may leverage installer scripts or packages to deploy persistent code during application installation or updates. Udev Rules: By modifying Linux’s udev rules, adversaries configure devices to trigger the execution of rogue code during events like hardware insertion. Python Startup Hooks: Attackers add code to Python startup scripts or modules, causing their payload to run automatically whenever Python interpreter is launched. T1068 – Exploitation for Privilege Escalation Attackers exploit software or OS vulnerabilities to gain elevated rights. This may target kernel flaws, driver bugs, or misconfigured services. By triggering the vulnerability, adversaries escalate from low-privilege to SYSTEM/root. This is one of the most direct and powerful escalation methods. T1574 – Hijack Execution Flow This technique alters how the system resolves and launches programs. Attackers place malicious files where high-privilege processes expect legitimate ones. 
When the privileged process starts, it inadvertently loads or executes the attacker code. This leverages DLL search order hijacking, path hijacking, and similar methods. DLL: Attackers exploit the way Windows applications load Dynamic Link Libraries (DLLs), tricking them into running malicious DLLs for code execution or privilege escalation. Dylib Hijacking: Adversaries target macOS by placing malicious dylib files in directories searched by applications, causing them to be loaded instead of legitimate libraries. Executable Installer File Permissions Weakness: Attackers leverage weak permissions on installer files to replace or modify executables, allowing unauthorized code execution with high privileges. Dynamic Linker Hijacking: This technique manipulates the loading process of shared libraries (DLLs or dylibs), often abusing environment variables (like PATH) or loader settings to ensure malicious libraries are loaded first. Path Interception by PATH Environment Variable: Adversaries modify the PATH environment variable, influencing where the system searches for executables and libraries, enabling malicious code to be loaded. Path Interception by Search Order Hijacking: Attackers exploit insecure search orders for files or DLLs, placing malicious files in locations that applications check before trusted locations. Path Interception by Unquoted Path: By taking advantage of unquoted paths in executable calls, adversaries' plant malicious files that are incorrectly loaded by the system, allowing code execution. Services File Permissions Weakness: Weak permissions on Windows service files enable attackers to replace service executables with malicious content, gaining persistent system access. Services Registry Permissions Weakness: Adversaries exploit weak registry settings of Windows services, altering keys to redirect service execution to their malicious code. COR_PROFILER: Attackers abuse the COR_PROFILER environment variable to hijack the way . NET applications load profiling DLLs, gaining code execution during app runtime. KernelCallbackTable: This involves altering callback tables in the Windows kernel to redirect the execution flow, enabling arbitrary code to run with elevated privileges. AppDomainManager: By subverting the AppDomainManager in .NET applications, adversaries gain control over the loading of assemblies, potentially executing malicious payloads during application startup. T1055 – Process Injection This involves injecting malicious code into legitimate processes. Injected processes often run with higher privileges than the attacker initially has. It enables evasion of security tools by blending into trusted processes. Successful injection allows execution under a more privileged security context. Dynamic-link Library Injection: Injects malicious DLLs into live processes to execute unauthorized code in the process memory, enabling attackers to evade defenses and elevate privileges. Portable Executable Injection: Loads or maps a malicious executable (EXE) into the address space of another process, running code under the guise of a legitimate application. Thread Execution Hijacking: Redirects the execution flow of an active thread in a process to run attacker-controlled code, often used for stealthy payload delivery. Asynchronous Procedure Call (APC): Delivers malicious code by queuing attacker-specified functions (APCs) to run in the context of another process or thread. 
Thread Local Storage (TLS): Uses TLS callbacks within a process to execute injected code when the process loads DLLs, often leveraging this for covert malware execution. Ptrace System Calls: Exploits ptrace debugging capabilities (on Unix/Linux) to inject and execute malicious code within the address space of a targeted process. Proc Memory: Modifies memory structures directly through the /proc filesystem (Linux/Unix) to inject or alter code in running processes for persistence or privilege escalation. Extra Window Memory Injection: Injects code into special memory regions (like window memory in Windows GUI processes) to achieve code execution in those processes. Process Hollowing: Creates a legitimate process, then swaps its memory with attacker code, making malware run under the mask of valid processes to evade detection. Process Doppelgänging: Leverages Windows Transactional NTFS (TxF) and process creation mechanisms to run malicious code in a way that appears legitimate and avoids conventional monitoring. VDSO Hijacking: Modifies the Virtual Dynamic Shared Object (VDSO) in Linux to execute injected code during system or process startup routines. ListPlanting: Manipulates application or window list memory, using this entrypoint for code injection into legitimate processes without overtly altering their main execution flow. T1053 – Scheduled Task/Job Attackers create or modify scheduled tasks to run malware with elevated privileges. These jobs often execute under SYSTEM, root, or service accounts. It provides both persistence and privilege escalation. The scheduled execution blends into normal automated system behavior. At: Attackers use the "at" scheduling utility on Windows or Unix-like systems to set up tasks that run at specific times, enabling persistence or timed execution of malicious programs. Cron: By adding entries to cron on Unix/Linux systems, adversaries can schedule their malicious code to execute automatically at regular intervals, maintaining access without user interaction. Scheduled Task: Threat actors abuse operating system scheduling features (like Windows Task Scheduler) to run unwanted commands or software on startup or according to a set schedule for persistence. Systemd Timers: In Linux environments, attackers configure systemd timers to trigger services or executables at designated times, ensuring regular execution of their payloads even after restarts. Container Orchestration Job: Adversaries leverage cluster scheduling platforms (such as Kubernetes Cron Jobs) to deploy containers that repeatedly execute malicious code across multiple nodes, providing scalable and automated persistence in cloud-native environments. T1078 – Valid Accounts Adversaries use stolen credentials to access legitimate user, admin, or service accounts for initial access, persistence, or privilege escalation, often bypassing security controls by blending in with normal activity. Default Accounts: These are pre-configured accounts built into operating systems or applications, such as guest or administrator; attackers exploit weak, unchanged, or known passwords on these accounts to gain unauthorized access. Domain Accounts: Managed by Active Directory, domain accounts allow users, administrators, or services to access resources across an organization’s network; adversaries leverage compromised domain credentials for lateral movement or privileged actions. 
Local Accounts: Accounts specific to a single machine or device, often with administrative privileges; attackers use compromised local credentials to escalate rights or maintain control over endpoints. Cloud Accounts: These are accounts for cloud platforms or services (like AWS, Azure, GCP); adversaries who obtain these credentials can gain significant control, escalate privileges, or persist in cloud environments. How F5 can help F5 security solutions, including BIG-IP, NGINX, and Distributed Cloud, provide robust defenses against privilege escalation risks by enforcing strict access controls, role-based permissions, and session validation. These protections mitigate risks from vulnerabilities and misconfigurations that adversaries exploit to elevate privileges. F5’s security capabilities also offer monitoring and threat detection mechanisms that help identify anomalous activities indicative of privilege escalation attempts. For more information, please contact your local F5 sales team. Conclusion Privilege escalation is a critical cyberattack tactic that allows attackers to move from limited access to elevated permissions, often as administrator or root on compromised systems. This expanded control lets attackers disable security measures, steal sensitive data, persist in the environment, and launch more damaging attacks. Preventing and detecting privilege escalation requires layered defenses, vigilant access management, and regular security monitoring to minimize risk and respond quickly to unauthorized privilege gains. Reference Links: MITRE ATT&CK® Privilege Escalation, Tactic TA0004 - Enterprise | MITRE ATT&CK® MITRE ATT&CK: What It Is, How it Works, Who Uses It and Why | F5 Labs
Illegal Metacharacter in Parameter Name in Json Data
Dears, can someone tell me what the issue is here? The BIG-IP is reporting the illegal metacharacter "#" in a parameter name, but the highlighted part of the violation doesn't contain the metacharacter "#" in the first place, and the parameter that the BIG-IP displayed in the highlighted part is actually not a parameter. I believe the issue is with the BIG-IP only. Any suggestions here, please? I think the issue is that the BIG-IP is not parsing the JSON payload properly.
Agentic AI with F5 BIG-IP v21 using Model Context Protocol and OpenShift
Introduction to Agentic AI Agentic AI extends Large Language Models (LLMs) with tools, allowing the LLM to interoperate with functionality external to the model. Examples are the ability to search for a flight or to push code into GitHub. Agentic AI operates proactively, minimising human intervention, making decisions, and adapting to perform complex tasks by using tools, data, and the Internet. This is done by essentially giving the LLM knowledge of the APIs of GitHub or the flight agency; the reasoning of the LLM then makes use of these APIs. The functionality external to the LLM can run on the local computer or on network MCP servers. This article focuses on network MCP servers, which fit into the F5 AI Reference Architecture at the insertion point indicated in green in the figure shown next: Introduction to Model Context Protocol Model Context Protocol (MCP) is a universal connector between LLMs and tools. Without MCP, the LLM would have to be programmed to support the different APIs of the different tools. This is not a scalable model because it requires a lot of effort to add all tools for a given LLM and for a tool to support several LLMs. Instead, when using MCP, the LLM (or AI application) and the tool only need to support MCP. Without further coding, the LLM is automatically able to use any tool that exposes its functionality through MCP. This is shown in the following figure: MCP example workflow The next diagram shows the basic MCP workflow using the LibreChat AI application as an example. The flow is as follows: The AI application queries agents (MCP servers) for the tools they provide. The agents return a list of the tools, with a description and the parameters required. When the AI application makes a request to the AI model, it includes in the request information about the tools available. When the AI model finds that it doesn't have built in what is required to fulfil the request, it makes use of the tools. The tools are accessed through the AI application. The AI model composes a result from its local knowledge and the results from the tools. Out of the workflow above, the most interesting part is step 1, which is used to retrieve the information the AI model requires to use the tools. Using the mcpLogger iRule provided later in this article, we can see the MCP messages exchanged. Step 1a:
{ "method": "tools/list", "jsonrpc": "2.0", "id": 2 }
Step 1b:
{ "jsonrpc": "2.0", "id": 2, "result": { "tools": [ { "name": "airport_search", "description": "Search for airport codes by name or city.\n\nArgs:\n query: The search term (city name, airport name, or partial code)\n\nReturns:\n List of matching airports with their codes", "inputSchema": { "properties": { "query": { "type": "string" } }, "required": [ "query" ], "type": "object" }, "outputSchema": { "properties": { "result": { "type": "string" } }, "required": [ "result" ], "type": "object", "x-fastmcp-wrap-result": 1 }, "_meta": { "_fastmcp": { "tags": [] } } } ] } }
Note from the above that the AI model only requires a description of the tool in human language and a formal declaration of the input and output parameters. That's all! The reasoning of the AI model is what makes good use of the API described through MCP. The AI model will even interpret the error messages. For example, if the AI model misinterprets the input parameters (typically because of a poor tool descriptor), it might correct itself if the error message is descriptive enough and call the tool again with the right parameters. Of course, the MCP protocol is more than this, but the above is necessary to understand the basics of how tools are used by an LLM and how the magic works.
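To make the exchange above concrete, here is a minimal, illustrative Python sketch (my own, not from the article) that issues the same JSON-RPC tools/list call to an MCP server over Streamable HTTP. The endpoint URL and headers are assumptions, and a real client would normally use an MCP SDK and complete the initialize handshake first.

```python
# Minimal illustrative MCP "tools/list" call over Streamable HTTP.
# The URL is a placeholder; this assumes the server replies with a plain JSON
# body rather than an SSE stream, and skips the MCP initialize handshake.
import requests

MCP_ENDPOINT = "http://mcp.example.internal/mcp"  # placeholder, not from the article

def list_tools():
    payload = {"jsonrpc": "2.0", "id": 2, "method": "tools/list"}
    headers = {
        "Content-Type": "application/json",
        "Accept": "application/json, text/event-stream",
    }
    resp = requests.post(MCP_ENDPOINT, json=payload, headers=headers, timeout=10)
    resp.raise_for_status()
    for tool in resp.json().get("result", {}).get("tools", []):
        # Print each advertised tool name and the start of its description.
        print(tool["name"], "-", tool.get("description", "")[:60])

if __name__ == "__main__":
    list_tools()
```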
F5 BIG-IP and MCP BIG-IP v21 introduces support for MCP, which is based on JSON-RPC. The MCP protocol has gone through several iterations. For IP-based communication, the JSON-RPC messages initially used the HTTP+SSE transport (now considered legacy), but this has been completely replaced by the Streamable HTTP transport. The latter still uses SSE when streaming multiple server messages. Regardless of the MCP version, on the F5 BIG-IP you only need to enable the JSON and SSE profiles in the Virtual Server handling MCP. This is shown next: By enabling these profiles we automatically get basic protocol validation and, more relevantly, the ability to handle MCP messages with JSON- and SSE-oriented events and functions. This allows parsing and manipulation of MCP messages as well as traffic management (load balancing, rate limiting, and so on). The parameters available for these profiles, which limit the size of the various parts of the messages, are shown next; the defaults are fine for most cases: Check the next links for information on the iRules events and commands available for the JSON and SSE protocols. MCP and persistence Session persistence is optional in MCP, but when the server returns an Mcp-Session-Id it is mandatory for the client. MCP servers require persistence when they keep context (state) for the MCP dialog. This means that the F5 BIG-IP must handle this Mcp-Session-Id as well, and it does so by using UIE (Universal) persistence keyed on this header (a small client-side sketch of the header handling appears at the end of this article). A sample iRule, mcpPersistence, is provided in the GitHub repository. Demo and GitHub repository The video below demonstrates three functionalities using the BIG-IP MCP support: Using MCP persistence. Getting visibility of MCP traffic by remotely logging the JSON-RPC payloads of the request and response messages using High Speed Logging. Controlling which tools are allowed or blocked, and logging the allow/block actions with High Speed Logging. These functionalities are implemented with iRules available in this GitHub repository and deployed in Red Hat OpenShift using the Container Ingress Services (CIS) controller, which automates the deployment of the configuration using Kubernetes resources. The overall setup is shown next: In the next embedded video we can see how this is deployed and used. Conclusion and next steps F5 BIG-IP v21 introduces support for the MCP protocol, and thanks to F5 CIS these setups can be automated in your OpenShift cluster using the Kubernetes API. The possibilities of Agentic AI are endless; thanks to MCP it is possible to extend LLMs to use any tool easily. The tools can be used to query or execute actions. I suggest taking a look at repositories of MCP servers to realize the endless possibilities of Agentic AI: https://mcpservers.org/ https://www.pulsemcp.com/servers https://mcpmarket.com/server https://mcp.so/ https://github.com/punkpeye/awesome-mcp-servers
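As referenced in the persistence section above, the sketch below (illustrative only; the URL and payloads are assumptions, not taken from the repository) shows the client-side behaviour that the BIG-IP persistence relies on: capturing the Mcp-Session-Id header from the first response and echoing it on subsequent requests so they can be persisted to the same backend.

```python
# Illustrative only: reuse the Mcp-Session-Id returned by an MCP server so that
# follow-up JSON-RPC calls can be persisted to the same backend.
import requests

MCP_ENDPOINT = "http://mcp.example.internal/mcp"  # placeholder
session_id = None

def rpc(method, params=None, req_id=1):
    global session_id
    headers = {"Content-Type": "application/json",
               "Accept": "application/json, text/event-stream"}
    if session_id:
        headers["Mcp-Session-Id"] = session_id  # echo the server-assigned session id
    body = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        body["params"] = params
    resp = requests.post(MCP_ENDPOINT, json=body, headers=headers, timeout=10)
    # Capture the session id the first time the server hands one out.
    session_id = resp.headers.get("Mcp-Session-Id", session_id)
    return resp

# Simplified initialize payload (fields are assumptions), then a follow-up call
# that carries the captured session id.
rpc("initialize", params={"clientInfo": {"name": "demo-client", "version": "0.1"}}, req_id=1)
rpc("tools/list", req_id=2)
```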
KASM Workspaces Integration with F5 BIG-IP Access Policy Manager (APM)
Introduction F5 BIG-IP Access Policy Manager (APM) is a key asset to securing containerized platforms like KASM Workspaces. In this article I’ll show you how to secure your Kasm Workspace using F5 BIG-IP APM. APM is a key component of the F5 Application Delivery and Security Platform (ADSP). APM covers both Application Delivery, Security and is a key component of Zero Trust. Kasm Workspaces Kasm Workspaces is a containerized streaming platform designed for secure, web-based access to desktops, applications, and web browsing. It leverages container technology to deliver virtualized environments directly to users' browsers, enhancing security, scalability, and performance. Commonly used for remote work, cybersecurity, and DevOps workflows, Kasm Workspaces provides a flexible and customizable solution for organizations needing secure and efficient access to virtual resources. As noted in the Kasm Documentation, the Kasm Workspaces Web App Role servers should not be exposed directly to the public. That’s where F5 BIG-IP APM can help. Demo Video Deployment Prerequisites F5 BIG-IP version 17.x Access version 10.x Kasm Workspaces version 1.17 installed and configured properly Configure using Automation Toolchain with AS3 and FAST Templates The F5 BIG-IP Automation Toolchain is a suite of tools designed to automate the deployment, configuration, and management of F5 BIG-IP devices. It enables efficient and consistent management using declarative APIs, templates, and integrations with popular automation frameworks. Application services (FAST) templates are predefined configurations that streamline the deployment and management of applications by providing consistent and repeatable setups. NOTE: The configuration using the Automation Toolchain is well-documented in this DevCentral article, which also includes demo videos: How I did it - “Delivering Kasm Workspaces three ways” Configure Manually Using a Virtual Server This article will focus on the manual configuration of the BIG-IP using a Virtual Server. Configuring it this way will give you a deeper understanding of how all the components work together to create a cohesive solution. Network Environment Linux “External” client IP: 10.1.10.4 BIG-IP “External” Self IP: 10.1.10.10 BIG-IP “Internal” Self IP: 10.1.20.10 Kasm Workspace IP: 10.1.20.23 BIG-IP Configuration Create HTTP Monitor: First, let’s create the HTTP Monitor for the Kasm Workspace server. From Local Traffic > Monitors > click the green plus sign to add a new one. Give it a name, “Kasm-Monitor” in this example Set the Type to HTTP Enter the following for the Send String: GET /api/__healthcheck\r\n Enter the following for the Receive String: OK It should look like this: Set Reverse to Yes and click Finished Create Pool: Next we’ll create the Pool From Local Traffic > Pools > Pool List > click the plus sign to add a new one Give it a name, “Kasm-Pool” in this example Select the Health Monitor you created previously and click the arrows to move it to Active Under Resources specify a Node Name, “Kasm-Server” in this example Specify the IP Address, “10.1.20.23” in this example Set the Service Port to 443, then click Add Click Finished Create Virtual Server: Next we’ll create the Virtual Server From Local Traffic > Virtual Servers > Virtual Server List > click the plus sign to add a new one Give it a Name, “vs_kasm” in this example. Keep the Type as Standard. 
Set the Destination to the IP Address you want the BIG-IP to listen on for connections to the Kasm server, “10.1.10.100” in this example. Set the Service Port to HTTPS, port 443. Click Finished at the bottom Click on the Virtual Server you just created Click Resources Set the Default Pool to kasm_pool, then click Update The Kasm Virtual Server Status should eventually change to Green when the Health Monitor is successful. NOTE: The Virtual Server configuration in this example has been simplified for demonstration purposes. Additional configuration options will be covered later in this article. Kasm Workspaces Configuration The Kasm Workspace will need a Zone configured with the default settings. Login as Admin and check this from Infrastructure > Zones. You will need at least one Workspace. In this example, I have a Workspace with Chrome, Firefox, Terminal and Ubuntu Jammy Click the WORKSPACES Tab at the top of the screen to see what the Workspace looks like Your view should look like this: Test Kasm Workspaces Login as a User NOTE: The IP Address used to connect to the Kasm Workspaces through the BIG-IP is the Virtual Server listening IP Address 10.1.10.100 When the Workspace loads, click Firefox Choose the option to Launch Session in a new Tab After a moment, Firefox will load Here you can see the F5.com website displayed NOTE: The browser pop-up blocker can prevent the Kasm Workspace applications from successfully launching. You can disable the pop-up blocker or create an exception for the BIG-IP Virtual IP (10.1.10.100). Enable SSL Decryption Enabling SSL Decryption allows you to fully inspect the requests and payloads passing through BIG-IP. From Local Traffic > Virtual Servers > click Virtual Server List Then click the name of your Virtual Server, “vs_kasm” in this example In the Configuration section, set the Protocol Profile (Client) to http Set the SSL Profile (Client) to clientssl Set the SSL Profile (Server) to serverssl NOTE: If you have created your own Client and Server SSL Profiles, you should add them here. The instructions above are for demonstration purposes only. Scroll to the bottom and click Update You’re done! Conclusion F5 BIG-IP Access Policy Manager (APM) is a key asset to securing containerized platforms like KASM Workspaces. In this article, you learned how to secure your Kasm Workspace using F5 BIG-IP APM. Related Content How I did it - “Delivering Kasm Workspaces three ways” Download Kasm Workspaces Kasm Documentation
Delivering Secure Application Services Anywhere with Nutanix Flow and F5 Distributed Cloud
Introduction F5 Application Delivery and Security Platform (ADSP) is the premier solution for converging high-performance delivery and security for every app and API across any environment. It provides a unified platform offering granular visibility, streamlined operations, and AI-driven insights — deployable anywhere and in any form factor. The F5 ADSP Partner Ecosystem brings together a broad range of partners to deliver customer value across the entire lifecycle. This includes cohesive solutions, cloud synergies, and access to expert services that help customers maximize outcomes while simplifying operations. In this article, we’ll explore the upcoming integration between Nutanix Flow and F5 Distributed Cloud, showcasing how F5 and Nutanix collaborate to deliver secure, resilient application services across hybrid and multi-cloud environments. Integration Overview At the heart of this integration is the capability to deploy a F5 Distributed Cloud Customer Edge (CE) inside a Nutanix Flow VPC, establish BGP peering with the Nutanix Flow BGP Gateway, and inject CE-advertised BGP routes into the VPC routing table. This architecture provides us complete control over application delivery and security within the VPC. We can selectively advertise HTTP load balancers (LBs) or VIPs to designated VPCs, ensuring secure and efficient connectivity. Additionally, the integration securely simplifies network segmentation across hybrid and multi-cloud environments. By leveraging F5 Distributed Cloud to segment and extend the network to remote locations, combined with Nutanix Flow Security for microsegmentation within VPCs, we deliver comprehensive end-to-end network security. This approach enforces a consistent security posture while simplifying segmentation across environments. In this article, we’ll focus on application delivery and security, and explore segmentation in the next article. Demo Walkthrough Let’s walk through a demo to see how this integration works. The goal of this demo is to enable secure application delivery for nutanix5.f5-demo.com within the Nutanix Flow Virtual Private Cloud (VPC) named dev3. Our demo environment, dev3, is a Nutanix Flow VPC with a F5 Distributed Cloud Customer Edge (CE) named jy-nutanix-overlay-dev3 deployed inside: *Note: CE is named jy-nutanix-overlay-dev3 in the F5 Distributed Cloud Console and xc-ce-dev3 in the Nutanix Prism Central. eBGP peering is ESTABLISHED between the CE and the Nutanix Flow BGP Gateway: On the F5 Distributed Cloud Console, we created an HTTP Load Balancer named jy-nutanix-internal-5 serving the FQDN nutanix5.f5-demo.com. This load balancer distributes workloads across hybrid multicloud environments and is protected by a WAF policy named nutanix-demo: We advertised this HTTP Load Balancer with a Virtual IP (VIP) 10.10.111.175 to the CE jy-nutanix-overlay-dev3 deployed inside Nutanix Flow VPC dev3: The CE then advertised the VIP route to its peer via BGP – the Nutanix Flow BGP Gateway: The Nutanix Flow BGP Gateway received the VIP route and installed it in the VPC routing table: Finally, the VMs in dev3 can securely access nutanix5.f5-demo.com while continuing to use the VPC logical router as their default gateway: F5 Distributed Cloud Console observability provides deep visibility into applications and security events. For example, it offers comprehensive dashboards and metrics to monitor the performance and health of applications served through HTTP load balancers. 
These include detailed insights into traffic patterns, latency, HTTP error rates, and the status of backend services: Furthermore, the built-in AI assistant provides real-time visibility and actionable guidance on security incidents, improving situational awareness and supporting informed decision-making. This capability enables rapid threat detection and response, helping maintain a strong and resilient security posture: Conclusion The integration demonstrates how F5 Distributed Cloud and Nutanix Flow collaborate to deliver secure, resilient application services across hybrid and multi-cloud environments. Together, F5 and Nutanix enable organizations to scale with confidence, optimize application performance, and maintain robust security—empowering businesses to achieve greater agility and resilience across any environment. This integration is coming soon in CY2026. If you’re interested in early access, please contact your F5 representative. Reference URLs https://www.f5.com/products/distributed-cloud-services https://www.nutanix.com/products/flow/networking
Introducing the New F5 Bot Defense Self-Service UI
For more information about Bot Defense Advanced features, see Bot Defense Overview Step 1: Sign Up for Bot Defense Advanced A customer's F5 account team will help their Bot Defense infrastructure and policies configured to protect their applications. NOTE: User permissions must include one or more of the following permissions. If a user does not have any of these roles, they should contact their Bot Defense administrator or TAM.: f5xc-bot-defense-admin role f5xc-bot-defense-user role f5xc-bot-defense-monitor role f5xc-bot-defense-report role Step 2: Decide What You Want to Protect Users should then decide which endpoints they want to protect with Bot Defense. For information about what to consider when configuring web and mobile endpoints, see the following information: Protect Web-Based Endpoints Protect Mobile Endpoints Step 3: Configure Your Bot Defense Infrastructure Important: If F5 Operations has already configured a user's Bot Defense infrastructure, they can skip this step. Users can now manage their configure their Bot Defense infrastructures from the F5 Distributed Cloud Console. A Bot Defense deployment can consist of multiple Test and Production infrastructures (subscription limits will determine how many can be added / managed). To configure a Bot Defense infrastructure, configure the following settings: Traffic type Infrastructure type Region Access control list For detailed instructions, see Configure the Bot Defense Infrastructure. Step 4: Configure Your Bot Policies Bot Defense Advanced provides three system policies that allow users to control system configuration settings: Bot Endpoint Policy Bot Allowlist Policy Bot Network Policy The F5 Operations team performs an analysis of a customer's endpoints and creates the initial version of each policy. Users can then deploy the policies in the Bot Defense Test infrastructure provided by F5. If preferred, users can also work with your F5 Operations Team to manage their policies. Important: F5 strongly recommends that users deploy and thoroughly test policy updates in the Test infrastructure provided by F5 before deploying to Production infrastructure. Step 5: Test Your Configuration Deploy your policies in the Test infrastructure provided to you by F5 to test your Bot Defense deployment and help ensure that Bot Defense policies are properly configured, that JavaScript tags are injected in your application pages correctly, or that you have correctly integrated the mobile SDK. Step 6: Deploy Policies in Your Production Environment Important: F5 strongly recommends that you deploy and thoroughly test policy updates in the Test infrastructure provided to you by F5 before you deploy in your Production infrastructure. After you verify in your Test infrastructure that Bot Defense is configured correctly and correctly identifies automated traffic, you can deploy your policies yourself or work with your F5 Operations team to deploy your policies in your Production infrastructure. Step 7: Enable Bot Defense on an HTTP Load Balancer To configure Bot Defense on an HTTP load balancer, users must complete the following tasks on each HTTP load balancer where they want to enable Bot Defense: Enable the Bot Defense workspace on one or more HTTP load balancers. Configure how Bot Defense will inject JavaScript tags in the HTTP pages of the application. If protecting mobile endpoints, enable and configure the F5 Distributed Cloud Mobile SDK. For detailed instructions, see Configure Bot Defense on an HTTP Load Balancer. 
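One lightweight way to sanity-check the JavaScript injection configured in Step 7 (and the testing done in Step 5) is to fetch a protected page and confirm the Bot Defense tag is present. The sketch below is illustrative only: the URL and the TAG_MARKER string are placeholders that would be replaced with your application page and the actual script path configured for your deployment.

```python
# Illustrative check that a Bot Defense JavaScript tag was injected into a page.
# Both values below are placeholders, not real F5 paths.
import requests

PAGE_URL = "https://app.example.com/login"          # placeholder application page
TAG_MARKER = "/placeholder-bot-defense-tag.js"      # placeholder injected-script path

def check_tag(url=PAGE_URL, marker=TAG_MARKER):
    html = requests.get(url, timeout=10).text
    if marker in html:
        print(f"Found '{marker}' in {url} - injection looks OK")
    else:
        print(f"'{marker}' not found in {url} - review the injection settings")

if __name__ == "__main__":
    check_tag()
```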
Step 8: Deploy Bot Detection Rules Important: Bot detection rule self-service management is a limited availability feature. Contact your F5 account team for information. F5 supplies customers with a set of initial bot detection rules. Most rules are turned off, with a subset of rules turned on by default. It is recommended for users to monitor their traffic for approximately two weeks to observe how rules that are turned on affect their traffic. After this time, users can now use the Distributed Cloud Console to turn rules on and off to make changes to how Bot Defense handles traffic. Important: F5 recommends that you deploy each rule in a Test infrastructure before you deploy in your production infrastructure. For information about bot detection rules, see Bot Detection Rules Overview. Bot Defense Advanced Self-Service Policy Management Demo: Related Resources: Deploy Bot Defense on any Edge with F5 Distributed Cloud (SaaS Console, Automation) Protecting Your Web Applications Against Critical OWASP Automated Threats Making Mobile SDK Integration Ridiculously Easy with F5 XC Mobile SDK Integrator JavaScript Supply Chains, Magecart, and F5 XC Client-Side Defense (Demo) Bots, Fraud, and the OWASP Automated Threats Project (Overview) Protecting Your Native Mobile Apps with F5 XC Mobile App Shield Enabling F5 Distributed Cloud Client-Side Defense in BIG-IP 17.1 Bot Defense for Mobile Apps in XC WAAP Part 1: The Bot Defense Mobile SDK F5 Distributed Cloud WAAP Distributed Cloud Services Overview Enable and Configure Bot Defense - F5 Distributed Cloud Service
F5 Distributed Cloud (XC) Custom Routes: Capabilities, Limitations, and Key Design Considerations
This article explores how Custom Routes work in F5 Distributed Cloud (XC), why they differ architecturally from standard Load Balancer routes, and what to watch out for in real-world deployments, covering backend abstraction, Endpoint/Cluster dependencies, and critical TLS trust and Root CA requirements.
Overview of MITRE ATT&CK Tactic - TA0011 Command and Control
Introduction In modern cyber attacks, command and control is one of the main sets of techniques with which attackers gain control over systems within a victim's network. Once control is gained over a system, the attackers can steal sensitive data, move laterally and blend into normal activity. Command and Control (MITRE ATT&CK Tactic TA0011) represents another critical stage of the adversary lifecycle, where the adversaries focus on communicating with the systems under their control. There are multiple ways to achieve this, either by mimicking the expected traffic flow to avoid detection or by mimicking the normal behavior of the compromised system. To defend against this tactic, it is important for defenders to understand how communication is established to any system in the network and the various levels of stealth possible depending on the network structure. This article walks through the most common Command and Control techniques, and how F5 solutions provide strong defense against them. T1071 - Application Layer Protocol To communicate with the systems under their control, adversaries blend in with existing application layer protocol traffic to avoid detection and network filtering. The results of these commands are embedded within the protocol traffic between the client and the server. T1071.001 - Web Protocols Adversaries mimic normal, expected HTTP/HTTPS traffic that carries web data to communicate with the systems under their control within a victim network. T1071.002 - File Transfer Protocols Protocols used to implement this technique include SMB, FTP, FTPS and TFTP. The malicious data is concealed within the fields and headers of the packets produced by these protocols. T1071.003 - Mail Protocols Protocols carrying electronic mail, such as SMTP/S, POP3/S, and IMAP, are utilized by concealing the data within the email messages themselves. T1071.004 - DNS The DNS protocol serves an administrative function in computer networking, and DNS traffic may be allowed even before network authentication. Data is concealed in the fields and headers of these packets. T1071.005 - Publish/Subscribe Protocols Message distribution is managed by a centralized broker, where the publish/subscribe design utilizes protocols such as MQTT, XMPP, AMQP and STOMP. T1092 - Communication Through Removable Media On disconnected networks, command and control between compromised hosts can be performed using removable media to carry commands from system to system. For successful execution, both systems need to be compromised, and the removable media is used to relay commands between them as part of lateral movement. T1659 - Content Injection Adversaries may also gain control over a victim's system by injecting malicious content into its traffic, first gaining access to compromised data-transfer channels where the traffic can be manipulated or content can be injected. T1132 – Data Encoding Another technique used to control the system is encoding the information using a standard data encoding scheme. Encoding includes the use of ASCII, Unicode, Base64, MIME or other binary-to-text encoding systems (a short encoding illustration follows this section). T1132.001 - Standard Encoding Data encoding schemes utilized for standard encoding include ASCII, Unicode, hexadecimal, Base64 and MIME. Data compression, such as gzip, is also an example of standard encoding. T1132.002 - Non-Standard Encoding Data encoded in the message body of an HTTP request, such as modified Base64, is utilized as a non-standard encoding scheme.
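As a trivial illustration of the Data Encoding idea above (my own example, not from the article), the snippet below shows how easily command output can be wrapped in Base64. The point for defenders is that encoded does not mean encrypted, so payload inspection and anomaly detection still apply.

```python
# Standard Data Encoding (T1132.001) illustration: Base64 is reversible, not secret.
import base64

command_output = "hostname=web01; user=svc-app; os=Ubuntu 22.04"
encoded = base64.b64encode(command_output.encode("utf-8")).decode("ascii")
decoded = base64.b64decode(encoded).decode("utf-8")

print("on the wire :", encoded)   # what a C2 channel might carry
print("recovered   :", decoded)   # what an inspection point can trivially recover
```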
T1001 – Data Obfuscation Obfuscation of command-and-control communication is hidden as part of this technique, making it even more difficult to discover or decipher. The focus is to make the communication less conspicuous and hidden, by incorporating several methods, which create below sub-techniques: T1001.001 - Junk Data Adversaries may abuse the protocols by adding random, meaningless junk data to the protocols, which can prevent trivial methods for decoding or deciphering the traffic. T1001.002 - Steganography Steganographic sub-techniques are used to transfer hidden digital data messages between systems, such as images or document files. T1001.003 - Protocol or Service Impersonation Adversaries can impersonate legitimate protocols or web services, to command-and-control traffic by blending in with legitimate network traffic. T1568 – Dynamic Resolution To establish connections dynamically to command-and-control the infrastructure and prevent any detections, adversaries use malware sharing a common algorithm with the infrastructure to dynamically adjust the parameters, such as a domain name, IP address, or port number. T1568.001 - Fast Flux DNS Fast Flux DNS is used to hide a command-and-control channel behind an array of rapidly changing IP addresses linked to a single domain resolution. T1568.002 - Domain Generation Algorithm Rather than relying on a list of static IP addresses or domains, adversaries may utilize Domain Generation Algorithms to dynamically identify a destination domain for command-and-control traffic. T1568.003 - DNS Calculations Instead of utilizing the predetermined port number or the actual IP address, to dynamically determine which port and IP address to use, adversaries calculate on addresses returned in DNS results. T1573 – Encrypted Channel Adversaries rely on an encrypted algorithm channel to conceal command-and-control traffic rather than depending on any inherent protections by the communication protocols. T1573.001 - Symmetric Cryptography Symmetric Encryption Algorithms, such as AES, DES, 3DES, Blowfish and RC4, use keys for plaintext encryption and ciphertext decryption. T1573.002 - Asymmetric Cryptography Asymmetric cryptography, or public key cryptography, uses a keypair per party: one public and one private. The sender encrypts the data with the receiver’s public key, and the receiver decrypts the data with their private key. T1008 – Fallback Channels If the primary channel is compromised or inaccessible, then in order to maintain reliable command and control, adversaries use fallback communication channels. T1665 – Hide Infrastructure To hide and evade detection of the command-and-control infrastructure, adversaries identify and filter traffic from defensive tools, masking malicious domains to abuse the true destination, and otherwise hiding malicious contents to delay discovery and prolong the effectiveness of adversary infrastructure. T1105 – Ingress Tool Transfer Tools or other files transfer from an external adversary-controlled source into the compromised environment through controlled channels or protocols such as FTP. Also, adversaries may spread tools across the compromised environment as part of Lateral Movement. T1104 –Multi-Stage Channels To make detection more difficult, adversaries create multiple stages for command-and-control for several functions and different conditions. 
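Relating to the Dynamic Resolution techniques above (T1568), one simple heuristic defenders often start with is the character entropy of queried domain names, since algorithmically generated domains tend to look more random than human-chosen ones. The sketch below is a rough illustration of that idea only; the threshold is an arbitrary assumption, and real detection combines many more signals.

```python
# Rough DGA heuristic: Shannon entropy of the leftmost domain label.
# The 3.5 threshold is an arbitrary illustrative value.
import math
from collections import Counter

def label_entropy(domain: str) -> float:
    label = domain.split(".")[0].lower()
    total = len(label)
    if total == 0:
        return 0.0
    counts = Counter(label)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

for d in ["mail.example.com", "x7kq9rv2tml0pa.example.com", "login.microsoft.com"]:
    e = label_entropy(d)
    print(f"{d:<35} entropy={e:.2f} {'<-- suspicious?' if e > 3.5 else ''}")
```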
T1095 – Non-Application Layer Protocol To communicate between the host and command-and-control server, adversaries use non-application layer protocols, such as ICMP (Internet Control Message Protocol), UDP (User Datagram Protocol), SOCKS (Secure Sockets), or SOL (Serial over LAN). T1571 – Non-Standard Port Adversaries communicate using port pairings that are not associated with the protocol, for, say, HTTPS over port 8088 or port 587 as opposed to the traditional port 443. T1572 – Protocol Tunneling Another approach to avoid detection/network filtering is to explicitly encapsulate a protocol within another protocol to enable routing of network packets which otherwise not reach their intended destination, such as SMB, RDP. T1090 – Proxy To direct network communications to a command-and-control server to avoid direct connections to the infrastructure and override the existing actual communication paths to avoid suspicion and manage command-and-control communications inside a compromised environment, proxy act as an intermediary between the systems, such as, HTRAN, ZXProxy and ZXPortMap. T1090.001 - Internal Proxy Internal proxies are primarily used to conceal the actual destination while reducing the need for multiple connections to external systems, such as peer-to-peer (p2p) networking protocols. T1090.002 - External Proxy External proxy is used to mask the true destination of the traffic with port redirectors. Purchased infrastructure such as Virtual Private Servers which are the compromised systems outside the victim's network, are generally used for these purposes. T1090.003 - Multi-Hop Proxy Multiple proxies can also be chained together to abuse the actual traffic directions, making it more difficult for defenders to trace malicious activity and identify its source. T1090.004 - Domain Fronting Adversaries can even misuse Content Delivery Networks (CDNs) routing schemes to infect the actual HTTPS traffic destination or traffic tunneled through HTTPS. T1219 – Remote Access Tools To access the target system remotely and establish an interactive command-and-control within the network, remote access tools are used to bridge a session between two trusted hosts through a graphical interface, a CLI, or a hardware-level access (KVM, Keyboard, Video, Mouse) over IP solutions. T1219.001 - IDE Tunneling IDE Tunneling combines SSH, port forwarding, file sharing and letting the developers gain access as if they are local, by encapsulating the entire session and tunneling protocols alongside SSH, allowing the attackers to blend in with the actual development workflow. T1219.002 - Remote Desktop Software Adversary may access the target systems interactively through desktop support software, which provides a graphical interface to the remote adversary, such as VNC, Team Viewer, AnyDesk, LogMein, are commonly used legitimate support software. T1219.003 - Remote Access Hardware To access the legitimate hardware through commonly used legitimate tools, including IP-based keyboard, video, or mouse (KVM) devices such as TinyPilot and PiKVM. T1205 – Traffic Signaling Traffic signaling is used to hide open ports or any other malicious functionality to prolong command-and-control over the compromised system. T1205.001 - Port Knocking To hide the open ports for persistence, port knocking is included, to enable the port, in which the adversary sends a series of attempted connections to a predefined sequence of closed ports. 
T1205.002 - Socket Filters
Socket filters allow or disallow certain types of data passing through a socket. When packets received by the network interface match the filtering criteria, the desired actions are triggered.
T1102 – Web Service
Adversaries use existing, legitimate external web services to transfer data to and from the compromised system. Because web service providers commonly use SSL/TLS encryption, adversaries also gain an additional layer of protection.
T1102.001 - Dead Drop Resolver
Adversaries post content, called a dead drop resolver, on web services with encoded domains. These resolvers redirect victims to the attacker-controlled domain or IP address.
T1102.002 - Bidirectional Communication
Once a system is infected, it can send command output back through the same web service channel, giving the adversary two-way communication.
T1102.003 - One-Way Communication
In some cases adversaries send instructions only one way and expect no response, so compromised systems return no output at all.
How F5 Can Help
F5 security solutions provide a range of capabilities to secure and protect applications and APIs across clouds, edge, on-premises, and hybrid platforms. The following F5 solutions help mitigate and protect against command-and-control techniques:
Web Application Firewall (WAF): Supported across all F5 deployment modes, the WAF is an adaptable, multi-layered security solution that defends web applications against a broad spectrum of threats, regardless of where they are deployed.
API Security: F5 Web Application and API Protection (WAAP) solutions simplify API security, protecting API endpoints and their dependencies by enforcing API definitions through specified rules and schemas.
Rate-Limiting & Bot Protection: Brute-force, credential-stuffing, and session attacks can be mitigated with configurable thresholds and automated bot protection.
For more information, please contact your local F5 sales team.
Conclusion
Command and Control (C2) encompasses the methods adversaries use to communicate with compromised systems inside a target network, typically disguising C2 traffic as legitimate activity to evade detection. To defend against these techniques, implement robust segmentation and egress filtering, use Web Application Firewalls (WAF) to limit available communication channels, monitor traffic regularly for anomalous patterns, and leverage threat intelligence to identify C2 indicators. Additionally, combining endpoint detection and response (EDR) with API security solutions can help detect and block malicious C2 activity at the host level.
Reference links
MITRE | ATT&CK Tactic 09 – Command and Control
MITRE ATT&CK: What It Is, How It Works, Who Uses It and Why | F5 Labs
MITRE ATT&CK®
Hands-On Quantum-Safe PKI: A Practical Post-Quantum Cryptography Implementation Guide
Is your Public Key Infrastructure quantum-ready? Remember waaay back when we built the PQC CNSA 2.0 Implementation guide in October 2025? So long ago! Due to popular request, we've expanded the lab to cover the more widely needed NIST FIPS 203/204/205 quantum standards. The GitHub lab guide below still walks you through building a quantum-resistant certificate authority using OpenSSL, but we've made some fun adjustments to reflect more real-world scenarios.
This guide currently covers:
- Building a quantum-safe certificate authority for FIPS 203/204/205 use cases
- Building a quantum-safe certificate authority for CNSA 2.0 use cases
- OpenSSL 3.5 parallel install for PQC-specific use cases
- OpenSSL 3.x + OQS library installation when you cannot update to 3.5.x
Why learn and implement post-quantum cryptography (PQC) now? While quantum computing is a fascinating area of science, all technological advancements can be misused. Nefarious people and nation-states are extracting encrypted data today to decrypt later, once quantum computers become available, a practice you'd better know by now as "harvest now, decrypt later." Close your post-quantum cryptography knowledge gap so you can get secured sooner and reduce impacts that might not surface until later. Ignorance is not bliss when it comes to cryptography and regulatory fines, so let's get started.
The GitHub lab provides step-by-step instructions to create (see the short sketch after this list):
- A quantum-resistant Root CA using ML-DSA-87 (FIPS and CNSA 2.0)
- Algorithm flexibility based on your compliance needs
- Quantum-safe server and client certificates
- OCSP and CRL revocation for quantum-resistant certificates
Access the Complete Lab Guide on GitHub →
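If you want a feel for the very first lab step before opening the repo, the sketch below generates an ML-DSA-87 root key and a self-signed root certificate by driving the OpenSSL CLI from Python. It assumes an OpenSSL 3.5+ binary with native ML-DSA support is on your PATH; algorithm names, provider flags, file names, and directory layout differ in the OQS-provider variant, so treat the GitHub lab as the authoritative set of commands.

```python
import subprocess

OPENSSL = "openssl"  # assumption: an OpenSSL 3.5+ binary with native ML-DSA support

def run(*args: str) -> None:
    """Run an OpenSSL command and fail loudly if it errors."""
    subprocess.run([OPENSSL, *args], check=True)

def build_root_ca(key_path: str = "root-ca.key", cert_path: str = "root-ca.crt") -> None:
    # 1. Generate an ML-DSA-87 private key (NIST security level 5).
    run("genpkey", "-algorithm", "ML-DSA-87", "-out", key_path)
    # 2. Self-sign a root certificate valid for roughly ten years.
    run("req", "-x509", "-new", "-key", key_path,
        "-subj", "/C=US/O=Example Lab/CN=Example Quantum-Safe Root CA",
        "-days", "3650", "-out", cert_path)
    # 3. Inspect the result; the signature algorithm should report ML-DSA-87.
    run("x509", "-in", cert_path, "-noout", "-text")

if __name__ == "__main__":
    build_root_ca()
```

In the full lab the root key stays offline and an ML-DSA-65 intermediate handles day-to-day issuance, mirroring the hierarchy described in the next section.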
At A Glance: OpenSSL Quantum-Resistant CA Learning Paths
This repository currently offers two learning tracks. Select the path that aligns with your organization's requirements:

| | FIPS 203/204/205 Path | CNSA 2.0 Path |
| --- | --- | --- |
| Target Audience | Commercial organizations, compliance needs | Government contractors, classified systems |
| Compliance Standard | NIST quantum-safe FIPS standards | NSA Commercial National Security Algorithm Suite 2.0 |
| Algorithm Flexibility | Full FIPS algorithm suites (ML-DSA-44/65/87, SLH-DSA) | Restricted to CNSA 2.0 approved (ML-DSA-65/87 only) |
| Use Case | General quantum-resistant infrastructure | National security systems, defense contracts |

What This Lab Guide Achieves
Complete PKI Hierarchy Implementation
The lab walks through building an internal PKI infrastructure from scratch, including:
- Root Certificate Authority: ML-DSA-87, providing the highest quantum-ready NIST security level (Level 5)
- Intermediate Certificate Authority: ML-DSA-65 for operational certificate issuance
- End-Entity Certificates: Server and user certificates with comprehensive Subject Alternative Names (SANs) for real-world applications
- Revocation Infrastructure: Both Certificate Revocation List (CRL) and Online Certificate Status Protocol (OCSP) implementation (a minimal chain-verification sketch appears further below)
- Security Best Practices: Restrictive Unix file permissions, secure key storage, and backup procedures throughout; these are preferred practices for lab and internal testing scenarios
Key Takeaways
After completing one or more of the labs, you will:
- Understand Quantum Threats: Grasp why current RSA/ECDSA cryptography is vulnerable and how quantum-resistant algorithms provide protection
- Master ML-DSA Cryptography: Gain hands-on experience with both ML-DSA-65 (Level 3 security) and ML-DSA-87 (Level 5 security) algorithms
- Configure Modern PKI Features: Implement SANs with DNS, IP, email, and URI entries, plus both CRL and OCSP revocation mechanisms
- Troubleshoot Effectively: Learn to diagnose and resolve common issues with quantum-resistant certificates
- Prepare for Migration: Understand the practical steps needed to transition existing PKI infrastructure to quantum-resistant algorithms
Who Should Read This Guide
- Enterprise Security Teams migrating to quantum-resistant algorithms
- Government Contractors requiring CNSA 2.0 compliance for classified systems
- Financial Institutions protecting long-term transaction records from quantum threats
- Healthcare Organizations securing patient data with regulatory requirements
- Cloud Service Providers implementing quantum-safe infrastructure for customers
- PKI Consultants preparing for post-quantum migration projects
- DevOps Engineers building quantum-ready CI/CD certificate pipelines
- Crossfit Trainers: Find something interesting for once to yell at random intervals to anyone within earshot
Access the Complete Lab Guide on GitHub →
About This Guide
We built the first guide for NSA Suite B in the distant past (2017) to learn ECC and modern cipher requirements. We built a more recent second guide for CNSA 2.0, but it's quite specific to US federal audiences. That led us to build a NIST FIPS PQC guide, which should apply to more practical use cases. In the spirit of Learn Python the Hard Way, it focuses on manual repetition, hands-on interaction, and real-world scenarios, providing the practical experience needed to implement quantum-resistant PKI in production environments. Because it lives on GitHub, other PKI fans can help where we may have missed something, or expand on it with additional modules or forks. Have at it!
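Once the hierarchy is built, a quick sanity check is to confirm the chain actually validates and that the certificates were signed with ML-DSA rather than silently falling back to RSA or ECDSA. The sketch below reuses the hypothetical file names from the earlier example; CRL and OCSP checks (for example, openssl verify -crl_check and the openssl ocsp subcommand) are walked through step by step in the lab itself.

```python
import subprocess

# Hypothetical file names matching the earlier sketch; the lab's actual paths differ.
ROOT = "root-ca.crt"
INTERMEDIATE = "intermediate-ca.crt"
SERVER = "server.crt"

def verify_chain() -> None:
    """Confirm the intermediate chains to the root, and the leaf chains through both."""
    subprocess.run(["openssl", "verify", "-CAfile", ROOT, INTERMEDIATE], check=True)
    subprocess.run(
        ["openssl", "verify", "-CAfile", ROOT, "-untrusted", INTERMEDIATE, SERVER],
        check=True,
    )

def show_signature_algorithms() -> None:
    """Print each certificate's signature algorithm; ML-DSA variants should appear."""
    for cert in (ROOT, INTERMEDIATE, SERVER):
        out = subprocess.run(
            ["openssl", "x509", "-in", cert, "-noout", "-text"],
            check=True, capture_output=True, text=True,
        ).stdout
        for line in out.splitlines():
            if "Signature Algorithm" in line:
                print(cert, "->", line.strip())
                break

if __name__ == "__main__":
    verify_chain()
    show_signature_algorithms()
```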
Frequently Asked Questions (FAQs)
Q: What is CNSA 2.0?
A: CNSA 2.0 (Commercial National Security Algorithm Suite 2.0) is the NSA's updated cryptographic standard requiring quantum-resistant algorithms.
Q: When do I need to implement quantum-resistant cryptography?
A: The NSA and NIST mandate CNSA 2.0 and FIPS 20X implementation by 2030. Organizations should begin now due to "harvest now, decrypt later" attacks, where adversaries collect encrypted data today for future quantum decryption.
Q: What is ML-DSA (Dilithium)?
A: ML-DSA (Module-Lattice-Based Digital Signature Algorithm), formerly known as Dilithium, is a NIST-standardized quantum-resistant digital signature algorithm specified in FIPS 204. It is available natively in OpenSSL 3.5 and in earlier OpenSSL 3.x releases through the OQS provider.
Q: What is ML-KEM (Kyber)?
A: Kyber is an IND-CCA2-secure key encapsulation mechanism (KEM), standardized as ML-KEM in FIPS 203, whose security is based on the hardness of solving the learning-with-errors (LWE) problem over module lattices. Kyber-512 aims at security roughly equivalent to AES-128, Kyber-768 at roughly AES-192, and Kyber-1024 at roughly AES-256. But quantumy (it's a word).
Q: Is this guide suitable for production use?
A: NOPE. While the guide teaches production-ready techniques and CNSA 2.0 compliance, always use Hardware Security Modules (HSMs) and air-gapped systems for production Root CAs (cold storage too). The lab is great for internal environments or test harnesses where you may need to test against new quantum-resistant signatures. ALWAYS rely on trusted public PKI infrastructure for production cryptography.
Reference Links
NIST Post-Quantum Cryptography Standards - Official NIST PQC project page with FIPS 204 (ML-DSA) specifications
NSA CNSA 2.0 Algorithm Requirements - NSA's official CNSA 2.0 announcement and requirements
Open Quantum Safe Project - Home of the OQS provider enabling quantum-resistant algorithms in OpenSSL
OQS Provider for OpenSSL 3 - GitHub repository for the OQS provider with installation instructions
RFC 5280: Internet X.509 PKI - Essential standard for X.509 certificate and CRL profiles
OpenSSL 3.0 Documentation - Comprehensive OpenSSL documentation for understanding commands and options
FIPS 204: ML-DSA Standard - The official Module-Lattice-Based Digital Signature Standard
Automating ACMEv2 Certificate Management on BIG-IP
While we often associate, and sometimes confuse, Let's Encrypt with ACMEv2, the former is ultimately a consumer of the latter. The "Automated Certificate Management Environment" (ACME) protocol describes a system for automating the renewal of PKI certificates. The ACME protocol can be used with public services like Let's Encrypt, but also with internal certificate management services. In this article we explore the more generic support of ACME (version 2) on the F5 BIG-IP.
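As a taste of what "automated" renewal means in practice, here is a minimal, generic sketch of the check-then-renew loop that ACME clients implement. It is not the BIG-IP integration this article covers; the certificate path is a placeholder and certbot is shown only as an example ACMEv2 client, since any public or internal ACME service follows the same pattern.

```python
import subprocess

CERT_PATH = "/etc/ssl/certs/app.example.com.pem"  # placeholder path
RENEW_WINDOW_SECONDS = 30 * 24 * 3600             # renew when < 30 days remain

def needs_renewal(cert_path: str) -> bool:
    """Return True if the certificate expires within the renewal window.

    `openssl x509 -checkend N` exits 0 if the certificate is still valid N
    seconds from now, and non-zero if it will have expired by then.
    """
    result = subprocess.run(
        ["openssl", "x509", "-checkend", str(RENEW_WINDOW_SECONDS),
         "-noout", "-in", cert_path],
        capture_output=True,
    )
    return result.returncode != 0

if __name__ == "__main__":
    if needs_renewal(CERT_PATH):
        # Hand off to an ACMEv2 client; certbot is shown purely as an example,
        # the article itself covers BIG-IP-oriented tooling instead.
        subprocess.run(["certbot", "renew", "--quiet"], check=True)
    else:
        print("Certificate still comfortably valid; nothing to do.")
```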