Overview of MITRE ATT&CK Tactic: TA0004 - Privilege Escalation
Introduction
The Privilege Escalation tactic in the MITRE ATT&CK framework covers techniques that adversaries use to gain higher-level permissions on compromised systems or networks. After gaining initial access, attackers frequently need elevated rights to access sensitive resources, execute restricted operations, or maintain persistence. Techniques include exploiting OS vulnerabilities, misconfigurations, or weaknesses in security controls to move from user-level to admin or root privileges. This may involve abusing elevation control mechanisms (like sudo, setuid, or UAC), manipulating accounts or tokens, leveraging scheduled tasks, or exploiting valid credentials.

Techniques and Sub-Techniques

T1548 – Abuse Elevation Control Mechanisms
This technique involves bypassing or abusing OS mechanisms that restrict elevated execution, such as sudo, UAC, or setuid binaries. Adversaries exploit misconfigurations or weak rules to run commands with higher privileges. This often requires no exploit code, just permission misuse. Once elevated, attackers gain access to restricted system operations.

T1548.001 – Setuid and Setgid: Attackers run programs with elevated permissions by abusing setuid/setgid bits on Unix systems. This allows execution as another user, often root, without needing that user's password (a minimal hunting sketch for this sub-technique appears a little further below).

T1548.002 – Bypass User Account Control: Adversaries exploit UAC weaknesses to elevate privileges without user approval. This grants admin-level execution while maintaining user-level stealth.

T1548.003 – Sudo and Sudo Caching: Misconfigured sudo rules or cached sudo credentials allow attackers to run privileged commands, escalating without full authentication or bypassing intended restrictions.

T1548.004 – Elevated Execution with Prompt: Malicious actors deceive users into granting elevated rights to a malicious process. This relies on social engineering rather than technical exploitation.

Temporary Elevated Cloud Access: Cloud platforms issue temporary privileges through roles or tokens. Misconfigured role assumptions or temporary credentials can be abused to obtain short-term high-level access.

TCC Manipulation: Attackers tamper with macOS's privacy-control system (TCC) to wrongfully grant apps access to sensitive resources like the camera, microphone, or full disk, effectively bypassing user consent protections.

T1134 - Access Token Manipulation
Adversaries modify or steal Windows access tokens to make malicious processes run with the permissions of another user. By impersonating these tokens, attackers can bypass access controls, escalate privileges, and perform actions as though they are legitimate users or even SYSTEM.

Token Impersonation/Theft: Attackers duplicate and impersonate another user's token, allowing their process to operate with the privileges of the legitimate user. This technique is frequently used to gain higher-level privileges on Windows machines.

Create Process with Token: Adversaries use a stolen or duplicated token to spawn a new process under the security context of a higher-privilege user, enabling the execution of actions with elevated permissions.

Make and Impersonate Token: Attackers generate new tokens using credentials they possess, then impersonate a target user's identity to gain unauthorized access and escalate their privileges.

Parent PID Spoofing: This technique manipulates the parent process ID (PPID) of a new process so it appears to have a trusted parent, helping adversaries evade defenses or gain higher privileges.
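To make the T1548.001 discussion above concrete, the sketch below shows one way a defender might inventory setuid/setgid binaries on a Unix host and flag any that fall outside a known-good allowlist. This is a minimal, illustrative sketch: the allowlist contents and search roots are assumptions you would replace with your own baseline, and it is not an official MITRE or F5 tool.

```python
#!/usr/bin/env python3
"""Minimal T1548.001 hunting aid: list setuid/setgid binaries not on an allowlist."""
import os
import stat

# Example allowlist: purely illustrative; build yours from a known-good baseline.
ALLOWLIST = {"/usr/bin/sudo", "/usr/bin/passwd", "/usr/bin/su"}
SEARCH_ROOTS = ["/usr", "/bin", "/sbin", "/opt"]

def find_setuid_setgid(roots):
    for root in roots:
        for dirpath, _, filenames in os.walk(root, followlinks=False):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    st = os.lstat(path)
                except OSError:
                    continue  # unreadable or vanished file
                if not stat.S_ISREG(st.st_mode):
                    continue
                if st.st_mode & (stat.S_ISUID | stat.S_ISGID):
                    yield path, st.st_mode

if __name__ == "__main__":
    for path, mode in find_setuid_setgid(SEARCH_ROOTS):
        marker = "OK    " if path in ALLOWLIST else "REVIEW"
        print(f"{marker}  {oct(mode & 0o7777)}  {path}")
```

Binaries flagged REVIEW are not necessarily malicious; they simply fall outside the assumed baseline and deserve a look.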
SID-History Injection: Adversaries inject SID-History attributes into access tokens or Active Directory objects to spoof permissions. This enables attackers to sidestep traditional group membership rules, granting them privileges that would normally be restricted.

T1098 - Account Manipulation
This refers to actions taken by attackers to preserve their access using compromised accounts, such as modifying credentials, group memberships, or account settings. By changing permissions or adding credentials, adversaries can escalate privileges, maintain persistence, or create hidden backdoors for future access.

Additional Cloud Credentials: Adversaries add their own keys, passwords, or service principal credentials to victim cloud accounts, enabling escalation without detection. This allows them to use the new credentials and bypass standard logging or security controls in cloud environments.

Additional Email Delegate Permissions: Attackers may grant themselves high-level permissions on email accounts, allowing unauthorized access, control, or forwarding of sensitive communications, which can give visibility into victim correspondence for further attacks.

Additional Cloud Roles: Adversaries assign new privileged roles to compromised accounts, expanding permissions and enabling wider access to cloud resources.

SSH Authorized Keys: Attackers append or modify their public keys in SSH authorized_keys files on target machines. This technique bypasses password authentication and allows undetected logins to compromised systems.

Device Registration: Adversaries register malicious devices with victim accounts, often in MFA or management portals, to maintain ongoing access. This can allow attackers to access resources as trusted endpoints.

Additional Container Cluster Roles: Attackers grant their accounts extra permissions or roles in container orchestration systems such as Kubernetes. These elevated roles allow broader control over cluster resources and enable cluster-wide compromise.

Additional Local or Domain Groups: Adversaries add their accounts to privileged local or domain groups, gaining higher-level access and capabilities. This manipulates group memberships for escalation, persistence, and dominance within target environments.

T1547 – Boot or Logon Autostart Execution
Attackers abuse programs that automatically run during boot or login. These locations can be modified to launch malicious code with elevated privileges, providing persistence and often higher-level execution. It is commonly achieved by manipulating registry keys, services, or startup folders.

Registry Run Keys / Startup Folder: Attackers add malicious programs to Windows Registry run keys or Startup folders to ensure automatic execution when a user logs in. This technique provides persistent and often stealthy privilege escalation on system reboot and login (a minimal enumeration sketch for these run keys follows the Time Providers entry below).

Authentication Package: By installing a malicious authentication package (DLL), adversaries can intercept credentials or execute code with system-level privileges during the Windows authentication process, enabling privilege escalation and persistence.

Time Providers: Attackers register malicious DLLs as Windows time providers (the DLLs responsible for time synchronization) so that their code is loaded by system processes at boot or at scheduled intervals, allowing stealthy system-level access and persistence.
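As a companion to the Registry Run Keys / Startup Folder entry above, the sketch below lists the values configured under the two most common Run keys so a defender can review them for unexpected entries. It is a hedged, illustrative example that assumes a Windows host with Python's standard winreg module available; it is not a complete autostart audit.

```python
"""Illustrative T1547.001 review aid: dump HKCU/HKLM Run key values."""
import winreg

RUN_KEYS = [
    (winreg.HKEY_CURRENT_USER, r"Software\Microsoft\Windows\CurrentVersion\Run"),
    (winreg.HKEY_LOCAL_MACHINE, r"Software\Microsoft\Windows\CurrentVersion\Run"),
]

def dump_run_keys():
    for hive, subkey in RUN_KEYS:
        hive_name = "HKCU" if hive == winreg.HKEY_CURRENT_USER else "HKLM"
        try:
            key = winreg.OpenKey(hive, subkey)
        except OSError:
            continue  # key not present or not readable
        with key:
            index = 0
            while True:
                try:
                    name, value, _type = winreg.EnumValue(key, index)
                except OSError:
                    break  # no more values under this key
                print(f"{hive_name}\\{subkey}  {name} = {value}")
                index += 1

if __name__ == "__main__":
    dump_run_keys()
```

Anything listed here that you cannot map to known, approved software is a candidate for deeper investigation.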
Winlogon Helper DLL: Adversaries plant a helper DLL in Winlogon’s registry settings so it loads with each user logon, running malicious code with high privileges and ensuring execution whenever the system starts or a user logs in. Security Support Provider: Inserting a rogue Security Support Provider (SSP) DLL allows attackers to monitor or manipulate authentication and system logins, potentially capturing credentials and persisting with SYSTEM privileges at the operating system level. Kernel Modules and Extensions: Attackers load malicious modules or kernel extensions to run arbitrary code in kernel space, giving them unrestricted control over the system, hiding their presence, or manipulating low-level OS behavior for privilege escalation. Re-opened Applications: On macOS, adversaries abuse property list files that track reopened applications after reboot, ensuring their chosen programs or payloads relaunch automatically and persistently escalate privileges upon user login. LSASS Driver: Modifying or adding an LSASS (Local Security Authority Subsystem Service) driver gives attackers persistent system-level code execution, potentially accessing or controlling authentication processes. Shortcut Modification: By altering shortcut files (LNKs), adversaries ensure that opening a benign application or file instead executes attacker-controlled code, effectively leveraging user actions for privilege escalation and persistence. Port Monitors: Attackers install or hijack port monitoring DLLs, which Windows loads to manage printers, so that their code runs with SYSTEM privileges when the service starts, enabling privilege escalation and persistence. Print Processors: Planting a malicious print processor DLL, the software Windows uses to handle print jobs causes Windows to execute attacker code as SYSTEM whenever print functions are called, creating a persistence and privilege escalation method. XDG Autostart Entries: On Linux desktop environments, adversaries use XDG-compliant autostart entries to launch malicious programs automatically at user login, gaining persistent execution and the ability to operate with user or escalated privileges. Active Setup: Attackers add or modify Active Setup registry keys to ensure their payloads execute with elevated privileges during user profile initialization, such as when a new user logs in. Login Items: On macOS, adversaries add login items that point to their malicious applications or scripts, guaranteeing code execution with the user’s privileges whenever a login event occurs. T1037 - Boot or Logon Initialization Scripts It refers to the use of scripts that are automatically executed during system startup or user logon to help adversaries maintain persistence on a machine. By modifying these scripts, attackers can ensure their malicious code runs every time the system boots. Logon Script (Windows): Scripts configured in Windows to run automatically during user or group logon can be exploited by adversaries to execute malicious code with the user’s privileges, enabling persistence or escalation. Login Hook: A login hook is an macOS mechanism that allows scripts or executables to run automatically upon a user’s login, which attackers may abuse to achieve persistence or elevate privileges. Network Logon Script: These are scripts assigned via Active Directory or Group Policy to execute during network logon, potentially allowing adversaries to introduce or persist malicious code in a domain environment. 
RC Scripts: On Unix-like systems, RC (run command) scripts control startup processes. Attackers who modify these can ensure their code runs with elevated privileges every time the system boots. Startup Items: Files or programs set to launch automatically during boot or user login can be manipulated by attackers, allowing persistent or privileged execution at startup. T1543 – Create or Modify System Process Attackers modify or create system services or daemons that run with high privileges. By altering service configurations, they ensure malicious code executes as SYSTEM/root. This provides long-term persistence and elevated access. Launch Agent: Attackers can create or modify launch agents on macOS to automatically execute malicious payloads whenever a user logs in, helping maintain persistence at the user level. Systemd Service: By altering systemd service files on Linux, adversaries can ensure their code runs as a background service during startup, maintaining continuous access to the system. Windows Service: Attackers abuse Windows service configurations to install or modify services that launch malicious programs on startup or at defined intervals, allowing persistent and privileged access. Launch Daemon: On macOS, launch daemons are set up to run background processes with elevated privileges before user login, often used by attackers to achieve system-wide persistence. Container Service: Adversaries may create or modify container or cluster management services (like Docker or Kubernetes agents) to repeatedly execute malicious code inside containers as part of persistence. T1484 - Domain or Tenant Policy Modification Adversaries changing configuration settings in a domain or tenant environment, such as Active Directory or cloud identity services, to bypass security controls and escalate privileges. This can include editing group policy objects, trust relationships, or federation settings, which may impact large numbers of users or systems across an organization. Attackers leverage this technique to gain persistent elevated access and make detection or remediation much more difficult. Group Policy Modification: Attackers may alter Group Policy Objects (GPOs) in Active Directory environments to subvert security settings and gain elevated privileges across the domain. By doing, these attackers can deploy malicious tasks, change user rights or disable security controls on many systems simultaneously. Trust Modification: Adversaries change domain or tenant trust relationships, such as adding, removing or altering trust properties between domains or tenants to expand their access and ensure continued control. This can let attackers move laterally, escalate privileges across multiple domains. T1611 – Escape to Host In virtualized environments, attackers attempt to escape a container or VM. If successful, they gain access to the underlying host system, which has higher privileges. This usually arises due to weaknesses in the hypervisor or insufficient separation between virtual environments. Hence, it gives complete control to the attacker over every workload operating on that host. T1546 – Event Triggered Execution Attackers use system events like service start, scheduled job, user login, etc. to trigger malicious code. These triggers often run with SYSTEM or administrative privileges. By hijacking legitimate event handlers, the attacker executes commands without raising suspicion. It also enables persistence tied to normal system operations. 
Change Default File Association: Attackers alter file type associations so that opening a file triggers malicious code, helping them gain persistence or escalate privileges.

Screensaver: Adversaries can replace system screensavers with malicious executables, causing code to run automatically when the screensaver activates.

Windows Management Instrumentation Event Subscription: By setting up WMI event subscriptions, attackers ensure their code executes in response to specific system events, establishing stealthy persistence on Windows.

Unix Shell Configuration Modification: Modifying shell configuration files like .bashrc or .profile allows adversaries to start malicious code whenever a user opens a terminal session.

Trap: Attackers abuse shell trap commands to execute code in response to system signals (e.g., shutdown, logoff, or errors), enhancing persistence or privilege escalation.

LC_LOAD_DYLIB Addition: By adding a malicious LC_LOAD_DYLIB header to macOS binaries, attackers can force the system to load rogue dynamic libraries during execution.

Netsh Helper DLL: Attackers register malicious DLLs as Netsh helpers, ensuring their code loads whenever Netsh is used, aiding persistence or privilege escalation.

Accessibility Features: Abusing Windows accessibility tools (like Sticky Keys) lets attackers invoke system shells or backdoors at the login screen, bypassing standard authentication.

AppCert DLLs: Adversaries inject DLLs via AppCert DLL Registry keys, so their code runs on every process creation, creating broad persistence.

AppInit DLLs: Attackers exploit AppInit DLL Registry values to ensure their DLLs are loaded into multiple processes, maintaining persistence.

Application Shimming: By creating or modifying Windows application shims, adversaries force the system to redirect legitimate programs to launch malicious code.

Image File Execution Options Injection: Modifying Image File Execution Options (IFEO) in the Registry allows attackers to set debuggers that hijack normal application launches for persistence.

PowerShell Profile: Malicious code in PowerShell profile scripts will auto-run whenever PowerShell starts, providing persistence and privilege escalation opportunities.

Emond: Attackers place malicious rules in macOS's Emond event monitor daemon, causing code to run in response to system events.

Component Object Model Hijacking: By hijacking references to COM objects in Windows, adversaries ensure their code launches when certain applications or system routines are invoked.

Installer Packages: Attackers may leverage installer scripts or packages to deploy persistent code during application installation or updates.

Udev Rules: By modifying Linux's udev rules, adversaries configure devices to trigger the execution of rogue code during events like hardware insertion.

Python Startup Hooks: Attackers add code to Python startup scripts or modules, causing their payload to run automatically whenever the Python interpreter is launched.

T1068 – Exploitation for Privilege Escalation
Attackers exploit software or OS vulnerabilities to gain elevated rights. This may target kernel flaws, driver bugs, or misconfigured services. By triggering the vulnerability, adversaries escalate from low-privilege to SYSTEM or root. This is one of the most direct and powerful escalation methods.

T1574 – Hijack Execution Flow
This technique alters how the system resolves and launches programs. Attackers place malicious files where high-privilege processes expect legitimate ones.
When the privileged process starts, it inadvertently loads or executes the attacker's code. This leverages DLL search order hijacking, path hijacking, and similar methods.

DLL: Attackers exploit the way Windows applications load Dynamic Link Libraries (DLLs), tricking them into running malicious DLLs for code execution or privilege escalation.

Dylib Hijacking: Adversaries target macOS by placing malicious dylib files in directories searched by applications, causing them to be loaded instead of legitimate libraries.

Executable Installer File Permissions Weakness: Attackers leverage weak permissions on installer files to replace or modify executables, allowing unauthorized code execution with high privileges.

Dynamic Linker Hijacking: This technique manipulates the loading process of shared libraries (DLLs or dylibs), often abusing environment variables (like PATH) or loader settings to ensure malicious libraries are loaded first.

Path Interception by PATH Environment Variable: Adversaries modify the PATH environment variable, influencing where the system searches for executables and libraries, enabling malicious code to be loaded.

Path Interception by Search Order Hijacking: Attackers exploit insecure search orders for files or DLLs, placing malicious files in locations that applications check before trusted locations.

Path Interception by Unquoted Path: By taking advantage of unquoted paths in executable calls, adversaries plant malicious files that are incorrectly loaded by the system, allowing code execution (a minimal read-only check for this misconfiguration is sketched a bit further below).

Services File Permissions Weakness: Weak permissions on Windows service files enable attackers to replace service executables with malicious content, gaining persistent system access.

Services Registry Permissions Weakness: Adversaries exploit weak registry settings of Windows services, altering keys to redirect service execution to their malicious code.

COR_PROFILER: Attackers abuse the COR_PROFILER environment variable to hijack the way .NET applications load profiling DLLs, gaining code execution during application runtime.

KernelCallbackTable: This involves altering callback tables in the Windows kernel to redirect execution flow, enabling arbitrary code to run with elevated privileges.

AppDomainManager: By subverting the AppDomainManager in .NET applications, adversaries gain control over the loading of assemblies, potentially executing malicious payloads during application startup.

T1055 – Process Injection
This involves injecting malicious code into legitimate processes. Injected processes often run with higher privileges than the attacker initially has. It enables evasion of security tools by blending into trusted processes. Successful injection allows execution under a more privileged security context.

Dynamic-link Library Injection: Injects malicious DLLs into live processes to execute unauthorized code in the process memory, enabling attackers to evade defenses and elevate privileges.

Portable Executable Injection: Loads or maps a malicious executable (EXE) into the address space of another process, running code under the guise of a legitimate application.

Thread Execution Hijacking: Redirects the execution flow of an active thread in a process to run attacker-controlled code, often used for stealthy payload delivery.

Asynchronous Procedure Call (APC): Delivers malicious code by queuing attacker-specified functions (APCs) to run in the context of another process or thread.
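Tying back to the Path Interception by Unquoted Path and Services Registry Permissions Weakness entries above, the sketch below walks the Windows services registry hive and flags ImagePath values that contain a space in the executable portion but no surrounding quotes, a common precondition for this escalation path. It is an illustrative, read-only check that assumes Python's standard winreg module on a Windows host; it is not a complete audit and will produce some noise.

```python
"""Illustrative check for unquoted Windows service paths (T1574 path interception)."""
import winreg

SERVICES_KEY = r"SYSTEM\CurrentControlSet\Services"

def unquoted_service_paths():
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, SERVICES_KEY) as services:
        subkey_count = winreg.QueryInfoKey(services)[0]
        for i in range(subkey_count):
            name = winreg.EnumKey(services, i)
            try:
                with winreg.OpenKey(services, name) as svc:
                    image_path, _ = winreg.QueryValueEx(svc, "ImagePath")
            except OSError:
                continue  # service has no ImagePath or is not readable
            path = str(image_path).strip()
            # Simple heuristic: a space before the ".exe" with no leading quote.
            exe_part = path.lower().split(".exe")[0]
            if path and not path.startswith('"') and " " in exe_part:
                yield name, path

if __name__ == "__main__":
    for name, path in unquoted_service_paths():
        print(f"REVIEW  {name}: {path}")
```

Flagged services should be reviewed for whether the intermediate directories are writable by unprivileged users, which is what actually makes the weakness exploitable.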
Thread Local Storage (TLS): Uses TLS callbacks within a process to execute injected code when the process loads DLLs, often leveraging this for covert malware execution. Ptrace System Calls: Exploits ptrace debugging capabilities (on Unix/Linux) to inject and execute malicious code within the address space of a targeted process. Proc Memory: Modifies memory structures directly through the /proc filesystem (Linux/Unix) to inject or alter code in running processes for persistence or privilege escalation. Extra Window Memory Injection: Injects code into special memory regions (like window memory in Windows GUI processes) to achieve code execution in those processes. Process Hollowing: Creates a legitimate process, then swaps its memory with attacker code, making malware run under the mask of valid processes to evade detection. Process Doppelgänging: Leverages Windows Transactional NTFS (TxF) and process creation mechanisms to run malicious code in a way that appears legitimate and avoids conventional monitoring. VDSO Hijacking: Modifies the Virtual Dynamic Shared Object (VDSO) in Linux to execute injected code during system or process startup routines. ListPlanting: Manipulates application or window list memory, using this entrypoint for code injection into legitimate processes without overtly altering their main execution flow. T1053 – Scheduled Task/Job Attackers create or modify scheduled tasks to run malware with elevated privileges. These jobs often execute under SYSTEM, root, or service accounts. It provides both persistence and privilege escalation. The scheduled execution blends into normal automated system behavior. At: Attackers use the "at" scheduling utility on Windows or Unix-like systems to set up tasks that run at specific times, enabling persistence or timed execution of malicious programs. Cron: By adding entries to cron on Unix/Linux systems, adversaries can schedule their malicious code to execute automatically at regular intervals, maintaining access without user interaction. Scheduled Task: Threat actors abuse operating system scheduling features (like Windows Task Scheduler) to run unwanted commands or software on startup or according to a set schedule for persistence. Systemd Timers: In Linux environments, attackers configure systemd timers to trigger services or executables at designated times, ensuring regular execution of their payloads even after restarts. Container Orchestration Job: Adversaries leverage cluster scheduling platforms (such as Kubernetes Cron Jobs) to deploy containers that repeatedly execute malicious code across multiple nodes, providing scalable and automated persistence in cloud-native environments. T1078 – Valid Accounts Adversaries use stolen credentials to access legitimate user, admin, or service accounts for initial access, persistence, or privilege escalation, often bypassing security controls by blending in with normal activity. Default Accounts: These are pre-configured accounts built into operating systems or applications, such as guest or administrator; attackers exploit weak, unchanged, or known passwords on these accounts to gain unauthorized access. Domain Accounts: Managed by Active Directory, domain accounts allow users, administrators, or services to access resources across an organization’s network; adversaries leverage compromised domain credentials for lateral movement or privileged actions. 
Local Accounts: Accounts specific to a single machine or device, often with administrative privileges; attackers use compromised local credentials to escalate rights or maintain control over endpoints.

Cloud Accounts: Accounts for cloud platforms or services (like AWS, Azure, or GCP); adversaries who obtain these credentials can gain significant control, escalate privileges, or persist in cloud environments.

How F5 Can Help
F5 security solutions, including BIG-IP, NGINX, and Distributed Cloud, provide robust defenses against privilege escalation risks by enforcing strict access controls, role-based permissions, and session validation. These protections mitigate risks from vulnerabilities and misconfigurations that adversaries exploit to elevate privileges. F5's security capabilities also offer monitoring and threat detection mechanisms that help identify anomalous activities indicative of privilege escalation attempts. For more information, please contact your local F5 sales team.

Conclusion
Privilege escalation is a critical cyberattack tactic that allows attackers to move from limited access to elevated permissions, often as administrator or root on compromised systems. This expanded control lets attackers disable security measures, steal sensitive data, persist in the environment, and launch more damaging attacks. Preventing and detecting privilege escalation requires layered defenses, vigilant access management, and regular security monitoring to minimize risk and respond quickly to unauthorized privilege gains.

Reference Links:
MITRE ATT&CK®
Privilege Escalation, Tactic TA0004 - Enterprise | MITRE ATT&CK®
MITRE ATT&CK: What It Is, How it Works, Who Uses It and Why | F5 Labs

Agentic AI with F5 BIG-IP v21 using Model Context Protocol and OpenShift
Introduction to Agentic AI
Agentic AI is the capability of extending Large Language Models (LLMs) by adding tools. This allows LLMs to interoperate with functionality external to the model, for example searching for a flight or pushing code to GitHub. Agentic AI operates proactively, minimising human intervention, making decisions, and adapting to perform complex tasks by using tools, data, and the Internet. This is done by giving the LLM knowledge of the APIs of GitHub or the flight agency; the reasoning of the LLM then makes use of these APIs. The functionality external to the LLM can run on the local computer or on network MCP servers. This article focuses on network MCP servers, which fit into the F5 AI Reference Architecture components at the insertion point indicated in green in the figure shown next:

Introduction to Model Context Protocol
Model Context Protocol (MCP) is a universal connector between LLMs and tools. Without MCP, the LLM must be programmed to support the different APIs of the different tools. This is not a scalable model because it requires a lot of effort to add all tools for a given LLM and for a tool to support several LLMs. Instead, when using MCP, the LLM (or AI application) and the tool only need to support MCP. Without further coding, the LLM is automatically able to use any tool that exposes its functionality through MCP. This is shown in the following figure:

MCP example workflow
The next diagram shows the basic MCP workflow, using the LibreChat AI application as an example. The flow is as follows:
The AI application queries agents (MCP servers) for the tools they provide.
The agents return a list of the tools, with a description and the parameters required.
When the AI application makes a request to the AI model, it includes information about the available tools in the request.
When the AI model finds that it doesn't have built in what is required to fulfil the request, it makes use of the tools. The tools are accessed through the AI application.
The AI model composes a result from its local knowledge and the results from the tools.

Of the workflow above, the most interesting part is step 1, which is used to retrieve the information the AI model needs in order to use the tools. Using the mcpLogger iRule provided later in this article, we can see the MCP messages exchanged.

Step 1a:
{ "method": "tools/list", "jsonrpc": "2.0", "id": 2 }

Step 1b:
{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    "tools": [
      {
        "name": "airport_search",
        "description": "Search for airport codes by name or city.\n\nArgs:\n query: The search term (city name, airport name, or partial code)\n\nReturns:\n List of matching airports with their codes",
        "inputSchema": {
          "properties": { "query": { "type": "string" } },
          "required": [ "query" ],
          "type": "object"
        },
        "outputSchema": {
          "properties": { "result": { "type": "string" } },
          "required": [ "result" ],
          "type": "object",
          "x-fastmcp-wrap-result": 1
        },
        "_meta": { "_fastmcp": { "tags": [] } }
      }
    ]
  }
}

Note from the above that the AI model only requires a description of the tool in human language and a formal declaration of the input and output parameters. That's all! The reasoning of the AI model is what makes good use of the API described through MCP. The AI model will even interpret the error messages.
For example, if the AI model misinterprets the input parameters (typically because of a poor tool descriptor), it might correct itself and call the tool again with the right parameters, provided the error message is descriptive enough. Of course, the MCP protocol is more than this, but the above is enough to understand the basis of how tools are used by an LLM and how the magic works.

F5 BIG-IP and MCP
BIG-IP v21 introduces support for MCP, which is based on JSON-RPC. The MCP protocol has had several iterations. For IP-based communication, the transport of the JSON-RPC messages initially used HTTP+SSE transport (now considered legacy), but this has been completely replaced by Streamable HTTP transport. The latter still uses SSE when streaming multiple server messages. Regardless of the MCP version, on the F5 BIG-IP you only need to enable the JSON and SSE profiles in the Virtual Server handling MCP. This is shown next:

By enabling these profiles we automatically get basic protocol validation and, more relevantly, we obtain the ability to handle MCP messages with JSON- and SSE-oriented events and functions. These allow parsing and manipulation of MCP messages as well as traffic management (load balancing, rate limiting, and so on). The parameters available for these profiles, which allow limiting the size of the various parts of the messages, are shown next. Defaults are fine for most cases:

Check the next links for information on the iRules events and commands available for the JSON and SSE protocols.

MCP and persistence
Session persistence is optional in MCP, but when the server indicates an Mcp-Session-Id it is mandatory for the client. MCP servers require persistence when they keep a context (state) for the MCP dialog. This means that the F5 BIG-IP must support handling this Mcp-Session-Id as well, and it does so by using UIE (Universal) persistence with this header. A sample iRule, mcpPersistence, is provided in the GitHub repository.

Demo and GitHub repository
The video below demonstrates three capabilities built on the BIG-IP MCP functionality:
Using MCP persistence.
Getting visibility of MCP traffic by remotely logging the JSON-RPC payloads of the request and response messages using High Speed Logging.
Controlling which tools are allowed or blocked, and logging the allow/block actions with High Speed Logging.
These functionalities are implemented with iRules available in this GitHub repository and deployed in Red Hat OpenShift using the Container Ingress Services (CIS) controller, which automates the deployment of the configuration using Kubernetes resources. The overall setup is shown next:

In the embedded video below you can see how this is deployed and used.

Conclusion and next steps
F5 BIG-IP v21 introduces support for the MCP protocol, and thanks to F5 CIS these setups can be automated in your OpenShift cluster using the Kubernetes API. The possibilities of Agentic AI are endless: thanks to MCP it is possible to extend LLMs to use any tool easily, whether to query data or execute actions. I suggest taking a look at these repositories of MCP servers to realize the endless possibilities of Agentic AI:
https://mcpservers.org/
https://www.pulsemcp.com/servers
https://mcpmarket.com/server
https://mcp.so/
https://github.com/punkpeye/awesome-mcp-servers
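To complement the Step 1 exchange shown earlier, here is a hedged sketch of how a client could issue the same tools/list JSON-RPC call over MCP's Streamable HTTP transport through a BIG-IP virtual server. The endpoint URL, the Accept header combination, and the presence of an Mcp-Session-Id response header are assumptions that depend on the specific MCP server and transport version; treat this as an illustration of the message shape, not a drop-in client.

```python
"""Illustrative MCP tools/list call over Streamable HTTP (assumed endpoint and headers)."""
import requests

MCP_URL = "https://mcp.example.com/mcp"  # hypothetical virtual-server address

def list_tools(session_id=None):
    headers = {
        "Content-Type": "application/json",
        # Streamable HTTP clients typically accept both JSON and SSE responses.
        "Accept": "application/json, text/event-stream",
    }
    if session_id:
        headers["Mcp-Session-Id"] = session_id  # reuse the server-assigned session
    payload = {"jsonrpc": "2.0", "id": 2, "method": "tools/list"}
    resp = requests.post(MCP_URL, json=payload, headers=headers, timeout=10)
    resp.raise_for_status()
    # The server may assign or echo a session id, which the BIG-IP can use for UIE persistence.
    new_session = resp.headers.get("Mcp-Session-Id", session_id)
    # For simplicity this assumes a plain JSON response; a server may instead reply with SSE.
    return resp.json(), new_session

if __name__ == "__main__":
    result, sid = list_tools()
    for tool in result.get("result", {}).get("tools", []):
        print(tool["name"], "-", tool.get("description", "")[:60])
```

Sending the Mcp-Session-Id on follow-up calls is what lets the BIG-IP persistence iRule keep all requests of one MCP dialog pinned to the same backend.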
KASM Workspaces Integration with F5 BIG-IP Access Policy Manager (APM)
Introduction F5 BIG-IP Access Policy Manager (APM) is a key asset to securing containerized platforms like KASM Workspaces. In this article I’ll show you how to secure your Kasm Workspace using F5 BIG-IP APM. APM is a key component of the F5 Application Delivery and Security Platform (ADSP). APM covers both Application Delivery, Security and is a key component of Zero Trust. Kasm Workspaces Kasm Workspaces is a containerized streaming platform designed for secure, web-based access to desktops, applications, and web browsing. It leverages container technology to deliver virtualized environments directly to users' browsers, enhancing security, scalability, and performance. Commonly used for remote work, cybersecurity, and DevOps workflows, Kasm Workspaces provides a flexible and customizable solution for organizations needing secure and efficient access to virtual resources. As noted in the Kasm Documentation, the Kasm Workspaces Web App Role servers should not be exposed directly to the public. That’s where F5 BIG-IP APM can help. Demo Video Deployment Prerequisites F5 BIG-IP version 17.x Access version 10.x Kasm Workspaces version 1.17 installed and configured properly Configure using Automation Toolchain with AS3 and FAST Templates The F5 BIG-IP Automation Toolchain is a suite of tools designed to automate the deployment, configuration, and management of F5 BIG-IP devices. It enables efficient and consistent management using declarative APIs, templates, and integrations with popular automation frameworks. Application services (FAST) templates are predefined configurations that streamline the deployment and management of applications by providing consistent and repeatable setups. NOTE: The configuration using the Automation Toolchain is well-documented in this DevCentral article, which also includes demo videos: How I did it - “Delivering Kasm Workspaces three ways” Configure Manually Using a Virtual Server This article will focus on the manual configuration of the BIG-IP using a Virtual Server. Configuring it this way will give you a deeper understanding of how all the components work together to create a cohesive solution. Network Environment Linux “External” client IP: 10.1.10.4 BIG-IP “External” Self IP: 10.1.10.10 BIG-IP “Internal” Self IP: 10.1.20.10 Kasm Workspace IP: 10.1.20.23 BIG-IP Configuration Create HTTP Monitor: First, let’s create the HTTP Monitor for the Kasm Workspace server. From Local Traffic > Monitors > click the green plus sign to add a new one. Give it a name, “Kasm-Monitor” in this example Set the Type to HTTP Enter the following for the Send String: GET /api/__healthcheck\r\n Enter the following for the Receive String: OK It should look like this: Set Reverse to Yes and click Finished Create Pool: Next we’ll create the Pool From Local Traffic > Pools > Pool List > click the plus sign to add a new one Give it a name, “Kasm-Pool” in this example Select the Health Monitor you created previously and click the arrows to move it to Active Under Resources specify a Node Name, “Kasm-Server” in this example Specify the IP Address, “10.1.20.23” in this example Set the Service Port to 443, then click Add Click Finished Create Virtual Server: Next we’ll create the Virtual Server From Local Traffic > Virtual Servers > Virtual Server List > click the plus sign to add a new one Give it a Name, “vs_kasm” in this example. Keep the Type as Standard. 
Set the Destination to the IP Address you want the BIG-IP to listen on for connections to the Kasm server, “10.1.10.100” in this example. Set the Service Port to HTTPS, port 443. Click Finished at the bottom Click on the Virtual Server you just created Click Resources Set the Default Pool to kasm_pool, then click Update The Kasm Virtual Server Status should eventually change to Green when the Health Monitor is successful. NOTE: The Virtual Server configuration in this example has been simplified for demonstration purposes. Additional configuration options will be covered later in this article. Kasm Workspaces Configuration The Kasm Workspace will need a Zone configured with the default settings. Login as Admin and check this from Infrastructure > Zones. You will need at least one Workspace. In this example, I have a Workspace with Chrome, Firefox, Terminal and Ubuntu Jammy Click the WORKSPACES Tab at the top of the screen to see what the Workspace looks like Your view should look like this: Test Kasm Workspaces Login as a User NOTE: The IP Address used to connect to the Kasm Workspaces through the BIG-IP is the Virtual Server listening IP Address 10.1.10.100 When the Workspace loads, click Firefox Choose the option to Launch Session in a new Tab After a moment, Firefox will load Here you can see the F5.com website displayed NOTE: The browser pop-up blocker can prevent the Kasm Workspace applications from successfully launching. You can disable the pop-up blocker or create an exception for the BIG-IP Virtual IP (10.1.10.100). Enable SSL Decryption Enabling SSL Decryption allows you to fully inspect the requests and payloads passing through BIG-IP. From Local Traffic > Virtual Servers > click Virtual Server List Then click the name of your Virtual Server, “vs_kasm” in this example In the Configuration section, set the Protocol Profile (Client) to http Set the SSL Profile (Client) to clientssl Set the SSL Profile (Server) to serverssl NOTE: If you have created your own Client and Server SSL Profiles, you should add them here. The instructions above are for demonstration purposes only. Scroll to the bottom and click Update You’re done! Conclusion F5 BIG-IP Access Policy Manager (APM) is a key asset to securing containerized platforms like KASM Workspaces. In this article, you learned how to secure your Kasm Workspace using F5 BIG-IP APM. Related Content How I did it - “Delivering Kasm Workspaces three ways” Download Kasm Workspaces Kasm Documentation
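Since the pool health monitor above keys on the Kasm /api/__healthcheck endpoint and an "OK" receive string, a quick way to sanity-check that endpoint before (or after) wiring up the monitor is to query it directly. The sketch below is illustrative only: the Kasm server address is the lab value from this article, and certificate verification is disabled purely because the lab uses a self-signed certificate.

```python
"""Quick sanity check of the Kasm health endpoint used by the BIG-IP monitor."""
import requests

KASM_HEALTH_URL = "https://10.1.20.23/api/__healthcheck"  # lab Kasm server from this article

def check_kasm_health():
    # verify=False only because the lab Kasm server presents a self-signed certificate.
    resp = requests.get(KASM_HEALTH_URL, timeout=5, verify=False)
    body = resp.text.strip()
    healthy = resp.status_code == 200 and "OK" in body
    print(f"status={resp.status_code} body={body!r} healthy={healthy}")
    return healthy

if __name__ == "__main__":
    check_kasm_health()
```

If this check fails, the BIG-IP monitor will also mark the pool member down, so it is a useful first troubleshooting step when the Virtual Server status stays red.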
Delivering Secure Application Services Anywhere with Nutanix Flow and F5 Distributed Cloud
Introduction F5 Application Delivery and Security Platform (ADSP) is the premier solution for converging high-performance delivery and security for every app and API across any environment. It provides a unified platform offering granular visibility, streamlined operations, and AI-driven insights — deployable anywhere and in any form factor. The F5 ADSP Partner Ecosystem brings together a broad range of partners to deliver customer value across the entire lifecycle. This includes cohesive solutions, cloud synergies, and access to expert services that help customers maximize outcomes while simplifying operations. In this article, we’ll explore the upcoming integration between Nutanix Flow and F5 Distributed Cloud, showcasing how F5 and Nutanix collaborate to deliver secure, resilient application services across hybrid and multi-cloud environments. Integration Overview At the heart of this integration is the capability to deploy a F5 Distributed Cloud Customer Edge (CE) inside a Nutanix Flow VPC, establish BGP peering with the Nutanix Flow BGP Gateway, and inject CE-advertised BGP routes into the VPC routing table. This architecture provides us complete control over application delivery and security within the VPC. We can selectively advertise HTTP load balancers (LBs) or VIPs to designated VPCs, ensuring secure and efficient connectivity. Additionally, the integration securely simplifies network segmentation across hybrid and multi-cloud environments. By leveraging F5 Distributed Cloud to segment and extend the network to remote locations, combined with Nutanix Flow Security for microsegmentation within VPCs, we deliver comprehensive end-to-end network security. This approach enforces a consistent security posture while simplifying segmentation across environments. In this article, we’ll focus on application delivery and security, and explore segmentation in the next article. Demo Walkthrough Let’s walk through a demo to see how this integration works. The goal of this demo is to enable secure application delivery for nutanix5.f5-demo.com within the Nutanix Flow Virtual Private Cloud (VPC) named dev3. Our demo environment, dev3, is a Nutanix Flow VPC with a F5 Distributed Cloud Customer Edge (CE) named jy-nutanix-overlay-dev3 deployed inside: *Note: CE is named jy-nutanix-overlay-dev3 in the F5 Distributed Cloud Console and xc-ce-dev3 in the Nutanix Prism Central. eBGP peering is ESTABLISHED between the CE and the Nutanix Flow BGP Gateway: On the F5 Distributed Cloud Console, we created an HTTP Load Balancer named jy-nutanix-internal-5 serving the FQDN nutanix5.f5-demo.com. This load balancer distributes workloads across hybrid multicloud environments and is protected by a WAF policy named nutanix-demo: We advertised this HTTP Load Balancer with a Virtual IP (VIP) 10.10.111.175 to the CE jy-nutanix-overlay-dev3 deployed inside Nutanix Flow VPC dev3: The CE then advertised the VIP route to its peer via BGP – the Nutanix Flow BGP Gateway: The Nutanix Flow BGP Gateway received the VIP route and installed it in the VPC routing table: Finally, the VMs in dev3 can securely access nutanix5.f5-demo.com while continuing to use the VPC logical router as their default gateway: F5 Distributed Cloud Console observability provides deep visibility into applications and security events. For example, it offers comprehensive dashboards and metrics to monitor the performance and health of applications served through HTTP load balancers. 
These include detailed insights into traffic patterns, latency, HTTP error rates, and the status of backend services: Furthermore, the built-in AI assistant provides real-time visibility and actionable guidance on security incidents, improving situational awareness and supporting informed decision-making. This capability enables rapid threat detection and response, helping maintain a strong and resilient security posture: Conclusion The integration demonstrates how F5 Distributed Cloud and Nutanix Flow collaborate to deliver secure, resilient application services across hybrid and multi-cloud environments. Together, F5 and Nutanix enable organizations to scale with confidence, optimize application performance, and maintain robust security—empowering businesses to achieve greater agility and resilience across any environment. This integration is coming soon in CY2026. If you’re interested in early access, please contact your F5 representative. Reference URLs https://www.f5.com/products/distributed-cloud-services https://www.nutanix.com/products/flow/networking
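As a simple way to reproduce the "VMs in dev3 can securely access nutanix5.f5-demo.com" step of the demo from inside the VPC, the sketch below resolves the FQDN and issues an HTTPS request, falling back to the advertised VIP with an explicit Host header if DNS is not set up. The FQDN and VIP are the demo values from this article; everything else (paths, certificate handling) is an illustrative assumption.

```python
"""Illustrative reachability check for the demo HTTP LB advertised into the dev3 VPC."""
import socket
import requests

FQDN = "nutanix5.f5-demo.com"   # demo FQDN from this article
VIP = "10.10.111.175"           # VIP advertised by the CE in this article

def check_app():
    try:
        addr = socket.gethostbyname(FQDN)
        url = f"https://{FQDN}/"
        headers = {}
    except socket.gaierror:
        # No DNS record in the lab: hit the VIP directly but keep the expected Host header.
        addr = VIP
        url = f"https://{VIP}/"
        headers = {"Host": FQDN}
    resp = requests.get(url, headers=headers, timeout=5, verify=False)  # lab certs may be self-signed
    print(f"resolved={addr} status={resp.status_code}")

if __name__ == "__main__":
    check_app()
```

A successful response confirms both the BGP-injected VIP route in the VPC routing table and the HTTP load balancer advertisement to the CE.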
Introducing the New F5 Bot Defense Self-Service UI
For more information about Bot Defense Advanced features, see Bot Defense Overview.

Step 1: Sign Up for Bot Defense Advanced
A customer's F5 account team will help get their Bot Defense infrastructure and policies configured to protect their applications.
NOTE: User permissions must include one or more of the following roles. If a user does not have any of these roles, they should contact their Bot Defense administrator or TAM:
f5xc-bot-defense-admin role
f5xc-bot-defense-user role
f5xc-bot-defense-monitor role
f5xc-bot-defense-report role

Step 2: Decide What You Want to Protect
Users should then decide which endpoints they want to protect with Bot Defense. For information about what to consider when configuring web and mobile endpoints, see the following:
Protect Web-Based Endpoints
Protect Mobile Endpoints

Step 3: Configure Your Bot Defense Infrastructure
Important: If F5 Operations has already configured a user's Bot Defense infrastructure, they can skip this step.
Users can now configure and manage their Bot Defense infrastructures from the F5 Distributed Cloud Console. A Bot Defense deployment can consist of multiple Test and Production infrastructures (subscription limits determine how many can be added and managed). To configure a Bot Defense infrastructure, configure the following settings:
Traffic type
Infrastructure type
Region
Access control list
For detailed instructions, see Configure the Bot Defense Infrastructure.

Step 4: Configure Your Bot Policies
Bot Defense Advanced provides three system policies that allow users to control system configuration settings:
Bot Endpoint Policy
Bot Allowlist Policy
Bot Network Policy
The F5 Operations team performs an analysis of a customer's endpoints and creates the initial version of each policy. Users can then deploy the policies in the Bot Defense Test infrastructure provided by F5. If preferred, users can also work with their F5 Operations team to manage their policies.
Important: F5 strongly recommends that users deploy and thoroughly test policy updates in the Test infrastructure provided by F5 before deploying to the Production infrastructure.

Step 5: Test Your Configuration
Deploy your policies in the Test infrastructure provided to you by F5 to test your Bot Defense deployment and help ensure that Bot Defense policies are properly configured, that JavaScript tags are injected in your application pages correctly, or that you have correctly integrated the mobile SDK.

Step 6: Deploy Policies in Your Production Environment
Important: F5 strongly recommends that you deploy and thoroughly test policy updates in the Test infrastructure provided to you by F5 before you deploy in your Production infrastructure.
After you verify in your Test infrastructure that Bot Defense is configured correctly and correctly identifies automated traffic, you can deploy your policies yourself or work with your F5 Operations team to deploy your policies in your Production infrastructure.

Step 7: Enable Bot Defense on an HTTP Load Balancer
To configure Bot Defense on an HTTP load balancer, users must complete the following tasks on each HTTP load balancer where they want to enable Bot Defense:
Enable the Bot Defense workspace on one or more HTTP load balancers.
Configure how Bot Defense will inject JavaScript tags in the HTTP pages of the application.
If protecting mobile endpoints, enable and configure the F5 Distributed Cloud Mobile SDK.
For detailed instructions, see Configure Bot Defense on an HTTP Load Balancer.
Step 8: Deploy Bot Detection Rules Important: Bot detection rule self-service management is a limited availability feature. Contact your F5 account team for information. F5 supplies customers with a set of initial bot detection rules. Most rules are turned off, with a subset of rules turned on by default. It is recommended for users to monitor their traffic for approximately two weeks to observe how rules that are turned on affect their traffic. After this time, users can now use the Distributed Cloud Console to turn rules on and off to make changes to how Bot Defense handles traffic. Important: F5 recommends that you deploy each rule in a Test infrastructure before you deploy in your production infrastructure. For information about bot detection rules, see Bot Detection Rules Overview. Bot Defense Advanced Self-Service Policy Management Demo: Related Resources: Deploy Bot Defense on any Edge with F5 Distributed Cloud (SaaS Console, Automation) Protecting Your Web Applications Against Critical OWASP Automated Threats Making Mobile SDK Integration Ridiculously Easy with F5 XC Mobile SDK Integrator JavaScript Supply Chains, Magecart, and F5 XC Client-Side Defense (Demo) Bots, Fraud, and the OWASP Automated Threats Project (Overview) Protecting Your Native Mobile Apps with F5 XC Mobile App Shield Enabling F5 Distributed Cloud Client-Side Defense in BIG-IP 17.1 Bot Defense for Mobile Apps in XC WAAP Part 1: The Bot Defense Mobile SDK F5 Distributed Cloud WAAP Distributed Cloud Services Overview Enable and Configure Bot Defense - F5 Distributed Cloud Service
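For Step 5-style testing of web endpoints, one lightweight check is to fetch a protected page and confirm that an injected Bot Defense script tag is present in the returned HTML. The sketch below is purely illustrative: the page URL and the marker strings are placeholders you would replace with your own application URL and whatever script path your deployment is configured to inject (as defined by your Bot Endpoint Policy); it is not an official F5 verification tool.

```python
"""Illustrative check that an injected Bot Defense script tag appears in a protected page."""
import requests

PAGE_URL = "https://www.example.com/login"  # placeholder: a page you expect to be protected
TAG_MARKERS = ["<script", "/your-injected-bot-defense-path.js"]  # placeholders for your deployment

def page_has_injected_tag():
    resp = requests.get(PAGE_URL, timeout=10)
    html = resp.text
    found = all(marker in html for marker in TAG_MARKERS)
    print(f"status={resp.status_code} injected_tag_found={found}")
    return found

if __name__ == "__main__":
    page_has_injected_tag()
```

Running this against a handful of representative pages in the Test infrastructure gives a quick signal that the JavaScript injection settings from Step 7 are taking effect.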
Overview of MITRE ATT&CK Tactic - TA0011 Command and Control
Introduction
In modern cyber attacks, command and control is one of the main sets of techniques with which attackers maintain control over systems inside a victim's network. Once control is gained over a system, attackers can steal sensitive data, move laterally, and blend into normal activity. Command and Control (MITRE ATT&CK Tactic TA0011) represents a critical stage of the adversary lifecycle, where adversaries focus on communicating with the systems under their control. There are multiple ways to achieve this, such as mimicking expected traffic flows to avoid detection or mimicking the normal behavior of the compromised system. To defend against these techniques, it is important for defenders to understand how communication is established to any system in the network and the various levels of stealth possible depending on the network structure. This article walks through the most common Command and Control techniques and how F5 solutions provide a strong defense against them.

T1071 - Application Layer Protocol
To communicate with the systems under their control, adversaries blend in with existing application layer protocol traffic to avoid detection and network filtering. Commands and their results are embedded within the protocol traffic between the client and the server.

T1071.001 - Web Protocols: Adversaries mimic normal, expected HTTP/HTTPS traffic that carries web data to communicate with the systems under their control within a victim network.

T1071.002 - File Transfer Protocols: Protocols used to implement this technique include SMB, FTP, FTPS, and TFTP. The malicious data is concealed within the fields and headers of the packets produced by these protocols.

T1071.003 - Mail Protocols: Protocols carrying electronic mail, such as SMTP/S, POP3/S, and IMAP, are utilized by concealing the data within the email messages themselves.

T1071.004 - DNS: The DNS protocol serves an administrative function in computer networking, and DNS traffic may be allowed even before a host authenticates to the network. Data is concealed in the fields and headers of DNS packets (a simple detection heuristic for tunneled or encoded data in DNS queries is sketched a little further below).

T1071.005 - Publish/Subscribe Protocols: Publish/subscribe designs distribute messages through a centralized broker, using protocols such as MQTT, XMPP, AMQP, and STOMP.

T1092 - Communication Through Removable Media
On disconnected networks, command and control between compromised hosts can be performed using removable media to carry commands from system to system. For successful execution, both systems need to be compromised, and the removable media is replicated through lateral movement.

T1659 - Content Injection
Adversaries may also gain control over a victim's system by injecting malicious content into it, typically by first gaining access to compromised data-transfer channels where traffic can be manipulated or content can be injected.

T1132 – Data Encoding
Another technique is to encode command-and-control information using a standard data encoding system. Encoding includes the use of ASCII, Unicode, Base64, MIME, or other binary-to-text encoding systems.

T1132.001 - Standard Encoding: Encoding schemes utilized for standard encoding include ASCII, Unicode, hexadecimal, Base64, and MIME. Data compression, such as gzip, is also an example of standard encoding.

T1132.002 - Non-Standard Encoding: Data encoded in the message body of an HTTP request, such as modified Base64, is used as a non-standard encoding scheme.
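As referenced in the T1071.004 entry above, one common defender heuristic for spotting encoded or tunneled data in DNS (which also relates to the T1132 encoding sub-techniques) is to look for unusually long, high-entropy query labels. The sketch below scores a list of query names; the thresholds and sample names are illustrative assumptions rather than tuned values, and real detection would combine this with volume and destination analysis.

```python
"""Illustrative DNS-tunneling heuristic: flag long, high-entropy query names."""
import math
from collections import Counter

MAX_NAME_LEN = 60   # assumed threshold; tune against your own traffic
MIN_ENTROPY = 3.5   # bits per character; assumed threshold

def shannon_entropy(text):
    counts = Counter(text)
    total = len(text)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_suspicious(qname):
    labels = qname.rstrip(".").split(".")
    longest = max(labels, key=len)
    return len(qname) > MAX_NAME_LEN or shannon_entropy(longest) > MIN_ENTROPY

if __name__ == "__main__":
    sample_queries = [  # illustrative examples only
        "www.example.com",
        "aGVsbG8gd29ybGQgdGhpcyBpcyBlbmNvZGVkIGRhdGE.c2.badexample.net",
    ]
    for q in sample_queries:
        print(f"{'SUSPECT' if looks_suspicious(q) else 'ok     '}  {q}")
```

A heuristic like this is noisy on its own (CDN and telemetry domains often look random), so it works best as one input among several.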
T1001 – Data Obfuscation
This technique hides command-and-control communication through obfuscation, making it even more difficult to discover or decipher. The focus is to make the communication less conspicuous, using several methods that map to the sub-techniques below:

T1001.001 - Junk Data: Adversaries may abuse protocols by adding random, meaningless junk data, which can defeat trivial methods for decoding or deciphering the traffic.

T1001.002 - Steganography: Steganographic sub-techniques are used to transfer hidden digital messages between systems inside carriers such as images or document files.

T1001.003 - Protocol or Service Impersonation: Adversaries can impersonate legitimate protocols or web services for command-and-control traffic, blending in with legitimate network traffic.

T1568 – Dynamic Resolution
To establish connections to command-and-control infrastructure dynamically and prevent detection, adversaries use malware that shares a common algorithm with the infrastructure to dynamically adjust parameters such as a domain name, IP address, or port number.

T1568.001 - Fast Flux DNS: Fast flux DNS is used to hide a command-and-control channel behind an array of rapidly changing IP addresses linked to a single domain resolution.

T1568.002 - Domain Generation Algorithm: Rather than relying on a list of static IP addresses or domains, adversaries may utilize Domain Generation Algorithms (DGAs) to dynamically identify a destination domain for command-and-control traffic (an illustrative sketch appears further below).

T1568.003 - DNS Calculations: Instead of using a predetermined port number or the actual IP address, adversaries perform calculations on the addresses returned in DNS results to dynamically determine which port and IP address to use.

T1573 – Encrypted Channel
Adversaries rely on an encrypted channel to conceal command-and-control traffic rather than depending on any inherent protections provided by the communication protocol.

T1573.001 - Symmetric Cryptography: Symmetric encryption algorithms, such as AES, DES, 3DES, Blowfish, and RC4, use shared keys for plaintext encryption and ciphertext decryption.

T1573.002 - Asymmetric Cryptography: Asymmetric cryptography, or public key cryptography, uses a keypair per party: one public and one private. The sender encrypts the data with the receiver's public key, and the receiver decrypts the data with their private key.

T1008 – Fallback Channels
If the primary channel is compromised or inaccessible, adversaries use fallback communication channels to maintain reliable command and control.

T1665 – Hide Infrastructure
To hide command-and-control infrastructure and evade detection, adversaries identify and filter traffic from defensive tools, mask malicious domains to obscure the true destination, and otherwise hide malicious content to delay discovery and prolong the effectiveness of adversary infrastructure.

T1105 – Ingress Tool Transfer
Tools or other files are transferred from an external, adversary-controlled source into the compromised environment through controlled channels or protocols such as FTP. Adversaries may also spread tools across the compromised environment as part of Lateral Movement.

T1104 – Multi-Stage Channels
To make detection more difficult, adversaries create multiple command-and-control stages for different functions and different conditions.
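To illustrate the T1568.002 entry above, the sketch below shows the general shape of a date-seeded domain generation algorithm of the kind defenders sometimes re-implement to pre-compute block lists. The hashing scheme, TLD, and domain count are invented for illustration and do not correspond to any real malware family.

```python
"""Illustrative (invented) date-seeded DGA, of the style defenders re-implement for blocklists."""
import hashlib
from datetime import date, timedelta

def generate_domains(day, count=5, tld=".info"):
    domains = []
    for i in range(count):
        seed = f"{day.isoformat()}-{i}".encode()
        digest = hashlib.sha256(seed).hexdigest()
        # Take 12 hex characters as the label; real DGAs vary length and alphabet.
        domains.append(digest[:12] + tld)
    return domains

if __name__ == "__main__":
    today = date.today()
    for offset in range(3):  # pre-compute a few days ahead, as a blocklist job might
        d = today + timedelta(days=offset)
        print(d.isoformat(), generate_domains(d))
```

Because both sides of a real DGA derive the same domains from the same seed, reverse-engineering the algorithm lets defenders sinkhole or block tomorrow's domains today.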
T1095 – Non-Application Layer Protocol
To communicate between the host and the command-and-control server, adversaries use non-application layer protocols such as ICMP (Internet Control Message Protocol), UDP (User Datagram Protocol), SOCKS (Socket Secure), or SOL (Serial over LAN).

T1571 – Non-Standard Port
Adversaries communicate using port pairings that are not normally associated with the protocol, for example HTTPS over port 8088 or port 587 as opposed to the traditional port 443.

T1572 – Protocol Tunneling
Another approach to avoid detection and network filtering is to explicitly encapsulate a protocol within another protocol, enabling routing of network packets that would otherwise not reach their intended destination, for protocols such as SMB or RDP.

T1090 – Proxy
A proxy acts as an intermediary between systems, directing network communications to a command-and-control server while avoiding direct connections to the infrastructure, hiding the actual communication paths to avoid suspicion, and managing command-and-control communications inside a compromised environment. Examples include HTRAN, ZXProxy, and ZXPortMap.

T1090.001 - Internal Proxy: Internal proxies are primarily used to conceal the actual destination while reducing the need for multiple connections to external systems, for example by using peer-to-peer (P2P) networking protocols.

T1090.002 - External Proxy: An external proxy is used to mask the true destination of the traffic with port redirectors. Purchased infrastructure such as Virtual Private Servers, or compromised systems outside the victim's network, is generally used for this purpose.

T1090.003 - Multi-Hop Proxy: Multiple proxies can also be chained together to obscure the actual traffic path, making it more difficult for defenders to trace malicious activity and identify its source.

T1090.004 - Domain Fronting: Adversaries can even misuse Content Delivery Network (CDN) routing schemes to obscure the actual destination of HTTPS traffic or traffic tunneled through HTTPS.

T1219 – Remote Access Tools
To access a target system remotely and establish interactive command and control within the network, remote access tools are used to bridge a session between two trusted hosts through a graphical interface, a CLI, or hardware-level access such as KVM (keyboard, video, mouse) over IP solutions.

T1219.001 - IDE Tunneling: IDE tunneling combines SSH, port forwarding, and file sharing, letting developers work as if they were local. By encapsulating the entire session and tunneling protocols alongside SSH, attackers can blend in with the actual development workflow.

T1219.002 - Remote Desktop Software: Adversaries may access target systems interactively through desktop support software, which provides a graphical interface to the remote adversary. VNC, TeamViewer, AnyDesk, and LogMeIn are commonly used legitimate support tools.

T1219.003 - Remote Access Hardware: Adversaries may access legitimate hardware through commonly used legitimate tools, including IP-based keyboard, video, or mouse (KVM) devices such as TinyPilot and PiKVM.

T1205 – Traffic Signaling
Traffic signaling is used to hide open ports or other malicious functionality to prolong command and control over the compromised system.

T1205.001 - Port Knocking: Port knocking hides open ports for persistence; to enable the port, the adversary sends a series of connection attempts to a predefined sequence of closed ports (a minimal client-side illustration follows).
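As a minimal illustration of the port-knocking behavior just described (useful, for example, when emulating the technique in a lab to validate detection rules), the sketch below sends TCP connection attempts to an assumed knock sequence. The target host and port sequence are placeholders; only run something like this against systems you own.

```python
"""Minimal port-knocking client illustration for lab/detection testing (placeholder values)."""
import socket
import time

TARGET = "192.0.2.10"                # placeholder lab host (TEST-NET address)
KNOCK_SEQUENCE = [7000, 8000, 9000]  # assumed knock sequence

def knock(host, ports, delay=0.3):
    for port in ports:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(0.5)
        try:
            # The connection is expected to fail; the attempt itself is the "knock".
            s.connect((host, port))
        except OSError:
            pass
        finally:
            s.close()
        time.sleep(delay)

if __name__ == "__main__":
    knock(TARGET, KNOCK_SEQUENCE)
    print("Knock sequence sent; the guarded service port may now be reachable.")
```

From a defender's perspective, the telltale signature is a short burst of failed connections to an ordered set of closed ports from a single source, immediately followed by a successful connection to another port.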
T1205.002 - Socket Filters
Socket filters allow or disallow certain types of data through a socket. If packets received by the network interface match the filtering criteria, the desired actions are triggered.
T1102 – Web Service
Adversaries use an existing, legitimate external web service to transfer data to and from the compromised system. Because web service providers commonly use SSL/TLS encryption, adversaries also gain an additional layer of protection.
T1102.001 - Dead Drop Resolver
Adversaries post content, called a dead drop resolver, on web services with encoded domains. These resolvers redirect compromised systems to the adversary-controlled domains or IP addresses.
T1102.002 - Bidirectional Communication
Once a system is infected, it can send output back over the same web service channel, enabling two-way communication.
T1102.003 - One-Way Communication
In some cases adversaries send instructions one way only and do not want any response, so compromised systems may not return any output at all.
How F5 Can Help
F5 security solutions provide a range of capabilities to secure and protect applications and APIs across platforms, including cloud, edge, on-premises, and hybrid environments. F5 supports the risk management solutions below to help mitigate and protect against command-and-control techniques:
Web Application Firewall (WAF): Supported by all F5 deployment modes, the WAF is an adaptable, multi-layered security solution that defends web applications against a broad spectrum of threats, regardless of where they are deployed.
API Security: F5 Web Application and API Protection (WAAP) solutions ease the task of securing APIs, protecting API endpoints and their dependencies by enforcing specified rules and schemas on API definitions.
Rate Limiting & Bot Protection: Brute-force, credential stuffing, and session attacks can be mitigated with configurable thresholds and automated bot protection.
For more information, please contact your local F5 sales team.
Conclusion
Command and Control (C2) encompasses the methods adversaries employ to communicate with compromised systems within a target network. Adversaries disguise their C2 traffic as legitimate network activity to evade detection. To defend against command-and-control techniques, defenders should implement robust segmentation and egress filtering, use Web Application Firewalls (WAF) to limit communication channels, regularly monitor traffic for anomalous patterns, and leverage threat intelligence to identify C2 indicators. Additionally, employing endpoint detection and response (EDR) alongside API security solutions can help detect and block malicious C2 activity at the host level.
Reference links
MITRE | ATT&CK Tactic 09 – Command and Control
MITRE ATT&CK: What It Is, How it Works, Who Uses It and Why | F5 Labs
MITRE ATT&CK®
Hands-On Quantum-Safe PKI: A Practical Post-Quantum Cryptography Implementation Guide
Is your Public Key Infrastructure quantum-ready? Remember waaay back when we built the PQC CNSA 2.0 Implementation guide in October 2025? So long ago! Due to popular request, we've expanded the lab to cover the more widely needed NIST FIPS 203/204/205 quantum standards. The below GitHub lab guide will still walk you through building a quantum resistant certificate authority using OpenSSL but we've made some fun adjustments to reflect more real world scenarios. This guide currently covers: Building quantum safe certificate authority for FIPS 203/204/205 use cases Building quantum safe certificate authority for CNSA 2.0 use cases OpenSSL 3.5 parallel install for PQC-specific use cases OpenSSL 3.x + OQS library installation when you cannot update to 3.5.x. Why learn and implement post-quantum cryptography (PQC) now? While quantum computing is a fascinating area of science, all technological advancements can be misused. Nefarious people and nation-states are extracting encrypted data to decrypt at a later date when quantum computers become available, a practice you better know by now called "harvest now, decrypt later." Close your post-quantum cryptographic knowledge gap so you can get secured sooner and reduce the impact(s) that might not surface until later. Ignorance is not bliss when it comes to cryptography and regulatory fines, so let's get started. The GitHub lab provides step-by-step instructions to create: Quantum-resistant Root CA using ML-DSA-87 (FIPS and CNSA 2.0) Algorithm flexibility based on your compliance needs Quantum-safe server and client certificates OCSP and CRL revocation for quantum-resistant certificates Access the Complete Lab Guide on GitHub → At A Glance: OpenSSL Quantum-Resistant CA Learning Paths This repository currently offers two learning tracks. 
Select the path that aligns with your organization's requirements:
FIPS 203/204/205 Path: Target audience - commercial organizations and general compliance needs; Compliance standard - NIST quantum-safe FIPS standards; Algorithm flexibility - full FIPS algorithm suites (ML-DSA-44/65/87, SLH-DSA); Use case - general quantum-resistant infrastructure.
CNSA 2.0 Path: Target audience - government contractors and classified systems; Compliance standard - NSA Commercial National Security Algorithm Suite 2.0; Algorithm flexibility - restricted to CNSA 2.0-approved algorithms (ML-DSA-65/87 only); Use case - national security systems and defense contracts.
What This Lab Guide Achieves
Complete PKI Hierarchy Implementation
The lab walks through building an internal PKI infrastructure from scratch, including:
Root Certificate Authority: Uses ML-DSA-87, providing the highest quantum-ready NIST security level
Intermediate Certificate Authority: An intermediate CA using ML-DSA-65 for operational certificate issuance
End-Entity Certificates: Server and user certificates with comprehensive Subject Alternative Names (SANs) for real-world applications
Revocation Infrastructure: Both Certificate Revocation List (CRL) and Online Certificate Status Protocol (OCSP) implementations
Security Best Practices: Restrictive Unix file permissions, secure key storage, and backup procedures throughout; preferred practices for lab and internal testing scenarios
Key Takeaways
After completing one or more of the labs, you will:
Understand Quantum Threats: Grasp why current RSA/ECDSA cryptography is vulnerable and how quantum-resistant algorithms provide protection
Master ML-DSA Cryptography: Gain hands-on experience with both ML-DSA-65 (Level 3 security) and ML-DSA-87 (Level 5 security) algorithms
Configure Modern PKI Features: Implement SANs with DNS, IP, email, and URI entries, plus both CRL and OCSP revocation mechanisms
Troubleshoot Effectively: Learn to diagnose and resolve common issues with quantum-resistant certificates
Prepare for Migration: Understand the practical steps needed to transition existing PKI infrastructure to quantum-resistant algorithms
Who Should Read This Guide
Enterprise Security Teams migrating to quantum-resistant algorithms
Government Contractors requiring CNSA 2.0 compliance for classified systems
Financial Institutions protecting long-term transaction records from quantum threats
Healthcare Organizations securing patient data with regulatory requirements
Cloud Service Providers implementing quantum-safe infrastructure for customers
PKI Consultants preparing for post-quantum migration projects
DevOps Engineers building quantum-ready CI/CD certificate pipelines
Crossfit Trainers finally finding something interesting to yell at random intervals to anyone within earshot
Access the Complete Lab Guide on GitHub →
About This Guide
We built the first guide for NSA Suite B in the distant past (2017) to learn ECC and modern cipher requirements. We built a more recent second guide for CNSA 2.0, but it is quite specific to US federal audiences. That led us to build a NIST FIPS PQC guide, which should apply to more practical use cases. In the spirit of Learn Python the Hard Way, it focuses on manual repetition, hands-on interaction, and real-world scenarios. It provides the practical experience needed to implement quantum-resistant PKI in production environments. By building it on GitHub, other PKI fans can help where we may have missed something, or simply expand on it with additional modules or forks. Have at it!
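For a taste of what the lab walks through, here is a minimal sketch of standing up an ML-DSA-87 root CA, driven from Python so the steps are easy to repeat. It assumes OpenSSL 3.5+ with built-in FIPS 204 support on the PATH; the subject names, file paths, and lifetimes are illustrative assumptions, and the flags may differ from the authoritative sequence in the GitHub guide.

```python
import subprocess
from pathlib import Path

# Minimal sketch: ML-DSA-87 root CA with OpenSSL 3.5+ (native FIPS 204 support).
# Paths and subject names are lab assumptions; see the GitHub lab guide for the
# full CA directory structure, extensions, and the CNSA 2.0 variant.
CA_DIR = Path("pqc-root-ca")
KEY = CA_DIR / "root-ca.key"
CERT = CA_DIR / "root-ca.crt"

def run(cmd: list[str]) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def build_root_ca() -> None:
    CA_DIR.mkdir(exist_ok=True)
    # 1. Generate an ML-DSA-87 private key (quantum-resistant signature algorithm).
    run(["openssl", "genpkey", "-algorithm", "ML-DSA-87", "-out", str(KEY)])
    # 2. Self-sign a root certificate; the 10-year lifetime is a lab assumption.
    run(["openssl", "req", "-new", "-x509",
         "-key", str(KEY),
         "-out", str(CERT),
         "-days", "3650",
         "-subj", "/C=US/O=Example Lab/CN=Example PQC Root CA"])
    # 3. Inspect the result; the signature algorithm should report ML-DSA-87.
    run(["openssl", "x509", "-in", str(CERT), "-noout", "-text"])

if __name__ == "__main__":
    build_root_ca()
```

On older OpenSSL 3.x builds without native ML-DSA, the same commands should work once the OQS provider is loaded, which is the path the second half of the lab covers.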
Frequently Asked Questions (FAQs)
Q: What is CNSA 2.0?
A: CNSA 2.0 (Commercial National Security Algorithm Suite 2.0) is the NSA's updated cryptographic standard requiring quantum-resistant algorithms.
Q: When do I need to implement quantum-resistant cryptography?
A: The NSA and NIST timelines call for CNSA 2.0 and FIPS 203/204/205 implementation by 2030. Organizations should begin now due to "harvest now, decrypt later" attacks, where adversaries collect encrypted data today for future quantum decryption.
Q: What is ML-DSA (Dilithium)?
A: ML-DSA (Module-Lattice Digital Signature Algorithm), formerly known as Dilithium, is a NIST-standardized quantum-resistant digital signature algorithm specified in FIPS 204. It is available natively in OpenSSL 3.5 and in earlier OpenSSL 3.x releases through the OQS provider.
Q: What is ML-KEM (Kyber)?
A: ML-KEM, formerly known as Kyber, is an IND-CCA2-secure key encapsulation mechanism (KEM) whose security is based on the hardness of solving the learning-with-errors (LWE) problem over module lattices. Kyber-512 (ML-KEM-512) aims at security roughly equivalent to AES-128, Kyber-768 (ML-KEM-768) at roughly AES-192, and Kyber-1024 (ML-KEM-1024) at roughly AES-256. But quantumy (it's a word).
Q: Is this guide suitable for production use?
A: NOPE. While the guide teaches production-ready techniques and CNSA 2.0 compliance, always use Hardware Security Modules (HSMs) and air-gapped systems for production Root CAs (cold storage too). The lab is great for internal environments or test harnesses where you may need to test against new quantum-resistant signatures and such. ALWAYS rely on trusted public PKI infrastructure for production cryptography.
Reference Links
NIST Post-Quantum Cryptography Standards - Official NIST PQC project page with FIPS 204 (ML-DSA) specifications
NSA CNSA 2.0 Algorithm Requirements - NSA's official CNSA 2.0 announcement and requirements
Open Quantum Safe Project - Home of the OQS provider enabling quantum-resistant algorithms in OpenSSL
OQS Provider for OpenSSL 3 - GitHub repository for the OQS provider with installation instructions
RFC 5280: Internet X.509 PKI - Essential standard for X.509 certificate and CRL profiles
OpenSSL 3.0 Documentation - Comprehensive OpenSSL documentation for understanding commands and options
FIPS 204: ML-DSA Standard - The official Module-Lattice-Based Digital Signature Standard
Automating ACMEv2 Certificate Management on BIG-IP
While we often associate and confuse Let's Encrypt with ACMEv2, the former is ultimately a consumer of the latter. The "Automated Certificate Management Environment" (ACME) protocol describes a system for automating the renewal of PKI certificates. The ACME protocol can be used with public services like Let's Encrypt, but also with internal certificate management services. In this article we explore the more generic support of ACME (version 2) on the F5 BIG-IP.
Overview of MITRE ATT&CK Tactic - TA0002 Execution
Introduction: Execution refers to the methods adversaries use to run malicious code on a target system. This tactic includes a range of techniques designed to execute payloads after gaining access to the network. It is a key stage in the attack lifecycle, as it allows attackers to activate their malicious actions, such as deploying malware, running scripts, or exploiting system vulnerabilities. Successful execution can lead to deeper system control, enabling attackers to perform actions like data theft, system manipulation, or establishing persistence for future exploitation. Now, let’s dive into the various techniques under the Execution tactic and explore how attackers use them. 1. T1651: Cloud Administration Command: Cloud management services can be exploited to execute commands within virtual machines. If an attacker gains administrative access to a cloud environment, they may misuse these services to run commands on the virtual machines. Furthermore, if an adversary compromises a service provider or a delegated administrator account, they could also exploit trusted relationships to execute commands on connected virtual machines. 2. T1059: Command and Scripting Interpreter The misuse of command and script interpreters allows adversaries to execute commands, scripts, or binaries. These interfaces, such as Unix shells on macOS and Linux, Windows Command Shell, and PowerShell are common across platforms and provide direct interaction with systems. Cross-platform interpreters like Python, as well as those tied to client applications (e.g., JavaScript, Visual Basic), can also be misused. Attackers may embed commands and scripts in initial access payloads or download them later via an established C2 (Command and Control) channel. Commands may also be executed via interactive shells or through remote services to enable remote execution. (.001) PowerShell: As PowerShell is already part of Windows, attackers often exploit it to execute commands discreetly without triggering alarms. It’s often used for things like finding information, moving across networks, and running malware directly in memory. This helps avoid detection because nothing is written to disk. Attackers can also execute PowerShell scripts without launching the powershell.exe program by leveraging.NET interfaces. Tools like Empire, PowerSploit, and PoshC2 make it even easier for attackers to use PowerShell for malicious purposes. Example - Remote Command Execution (.002) AppleScript: AppleScript is an macOS scripting language designed to control applications and system components through inter-application messages called AppleEvents. These AppleEvent messages can be sent by themselves or with AppleScript. They can find open windows, send keystrokes, and interact with almost any open application, either locally or remotely. AppleScript can be executed in various ways, including through the command-line interface (CLI) and built-in applications. However, it can also be abused to trigger actions that exploit both the system and the network. (.003) Windows Command Shell: The Windows Command Prompt (CMD) is a lightweight, simple shell on Windows systems, allowing control over most system aspects with varying permission levels. However, it lacks the advanced capabilities of PowerShell. CMD can be used from a distance using Remote Services. Attackers may use it to execute commands or payloads, often sending input and output through a command-and-control channel. 
Example - Remote Command Execution (.004) Unix Shell: Unix shells serve as the primary command-line interface on Unix-based systems. They provide control over nearly all system functions, with certain commands requiring elevated privileges. Unix shells can be used to run different commands or payloads. They can also run shell scripts to combine multiple commands as part of an attack. Example - Remote Command Execution (.005) Visual Basic: Visual Basic (VB) is a programming language developed by Microsoft, now considered a legacy technology. Visual Basic for Applications (VBA) and VBScript are derivatives of VB. Malicious actors may exploit VB payloads to execute harmful commands, with common attacks, including automating actions via VBScript or embedding VBA content (like macros) in spear-phishing attachments. (.006) Python: Attackers often use popular scripting languages, like Python, due to their interoperability, cross-platform support, and ease of use. Python can be run interactively from the command line or through scripts that can be distributed across systems. It can also be compiled into binary executables. With many built-in libraries for system interaction, such as file operations and device I/O, attackers can leverage Python to download and execute commands, scripts, and perform various malicious actions. Example - Code Injection (.007) JavaScript: JavaScript (JS) is a platform-independent scripting language, commonly used in web pages and runtime environments. Microsoft's JScript and JavaScript for Automation (JXA) on macOS are based on JS. Adversaries exploit JS to execute malicious scripts, often through Drive-by Compromise or by downloading scripts as secondary payloads. Since JS is text-based, it is often obfuscated to evade detection. Example - XSS (.008) Network Device CLI: Network devices often provide a CLI or scripting interpreter accessible via direct console connection or remotely through telnet or SSH. These interfaces allow interaction with the device for various functions. Adversaries may exploit them to alter device behavior, manipulate traffic, load malicious software by modifying configurations, or disable security features and logging to avoid detection. (.009) Cloud API: Cloud APIs offer programmatic access to nearly all aspects of a tenant, available through methods like CLIs, in-browser Cloud Shells, PowerShell modules (e.g., Azure for PowerShell), or SDKs for languages like Python. These APIs provide administrative access to major services. Malicious actors with valid credentials, often stolen, can exploit these APIs to perform malicious actions. (.010) AutoHotKey & AutoIT: AutoIT and AutoHotkey (AHK) are scripting languages used to automate Windows tasks, such as clicking buttons, entering text, and managing programs. Attackers may exploit AHK (.ahk) and AutoIT (.au3) scripts to execute malicious code, like payloads or keyloggers. These scripts can also be embedded in phishing payloads or compiled into standalone executable files (.011) Lua: Lua is a cross-platform scripting and programming language, primarily designed for embedding in applications. It can be executed via the command-line using the standalone Lua interpreter, through scripts (.lua), or within Lua-embedded programs. Adversaries may exploit Lua scripts for malicious purposes, such as abusing or replacing existing Lua interpreters to execute harmful commands at runtime. Malware examples developed using Lua include EvilBunny, Line Runner, PoetRAT, and Remsec. 
(.012) Hypervisor CLI: Hypervisor CLIs offer extensive functionality for managing both the hypervisor and its hosted virtual machines. On ESXi systems, tools like “esxcli” and “vim-cmd” allow administrators to configure and perform various actions. Attackers may exploit these tools to enable actions like File and Directory Discovery or Data Encrypted for Impact. Malware such as Cheerscrypt and Royal ransomware have leveraged this technique. 3. T1609: Container Administration Command Adversaries may exploit container administration services, like the Docker daemon, Kubernetes API server, or kubelet, to execute commands within containers. In Docker, attackers can specify an entry point to run a script or use docker exec to execute commands in a running container. In Kubernetes, with sufficient permissions, adversaries can gain remote execution by interacting with the API server, kubelet, or using commands like kubectl exec within the cluster. 4. T1610: Deploy Container Containers can be exploited by attackers to run malicious code or bypass security measures, often through the use of harmful processes or weak settings, such as missing network rules or user restrictions. In Kubernetes environments, attackers may deploy containers with elevated privileges or vulnerabilities to access other containers or the host node. They may also use compromised or seemingly benign images that later download malicious payloads. 5. T1675: ESXi Administration Command ESXi administration services can be exploited to execute commands on guest machines within an ESXi virtual environment. ESXi-hosted VMs can be remotely managed via persistent background services, such as the VMware Tools Daemon Service. Adversaries can perform malicious activities on VMs by executing commands through SDKs and APIs, enabling follow-on behaviors like File and Directory Discovery, Data from Local System, or OS Credential Dumping. 6. T1203: Exploitation for Client Execution Adversaries may exploit software vulnerabilities in client applications to execute malicious code. These exploits can target browsers, office applications, or common third-party software. By exploiting specific vulnerabilities, attackers can achieve arbitrary code execution. The most valuable exploits in an offensive toolkit are often those that enable remote code execution, as they provide a pathway to gain access to the target system. Example: Remote Code Execution 7. T1674: Input Injection Input Injection involves adversaries simulating keystrokes on a victim’s computer to carry out actions on their behalf. This can be achieved through several methods, such as emulating keystrokes to execute commands or scripts, or using malicious USB devices to inject keystrokes that trigger scripts or commands. For example, attackers have employed malicious USB devices to simulate keystrokes that launch PowerShell, enabling the download and execution of malware from attacker-controlled servers. 8. T1559: Inter-Process Communication Inter-Process Communication (IPC) is commonly used by processes to share data, exchange messages, or synchronize execution. It also helps prevent issues like deadlocks. However, IPC mechanisms can be abused by adversaries to execute arbitrary code or commands. The implementation of IPC varies across operating systems. Additionally, command and scripting interpreters may leverage underlying IPC mechanisms, and adversaries might exploit remote services—such as the Distributed Component Object Model (DCOM)—to enable remote IPC-based execution. 
(.001) Component Object Model (Windows): Component Object Model (COM) is an inter-process communication (IPC) mechanism in the Windows API that allows interaction between software objects. A client object can invoke methods on server objects via COM interfaces. Languages like C, C++, Java, and Visual Basic can be used to exploit COM interfaces for arbitrary code execution. Certain COM objects also support functions such as creating scheduled tasks, enabling fileless execution, and facilitating privilege escalation or persistence. (.002) Dynamic Data Exchange (Windows): Dynamic Data Exchange (DDE) is a client-server protocol used for one-time or continuous inter-process communication (IPC) between applications. Adversaries can exploit DDE in Microsoft Office documents—either directly or via embedded files—to execute commands without using macros. Similarly, DDE formulas in CSV files can trigger unintended operations. This technique may also be leveraged by adversaries on compromised systems where direct access to command or scripting interpreters is restricted. (.003) XPC Services(macOS): macOS uses XPC services for inter-process communication, such as between the XPC Service daemon and privileged helper tools in third-party apps. Applications define the communication protocol used with these services. Adversaries can exploit XPC services to execute malicious code, especially if the app’s XPC handler lacks proper client validation or input sanitization, potentially leading to privilege escalation. 9. T1106: Native API Native APIs provide controlled access to low-level kernel services, including those related to hardware, memory management, and process control. These APIs are used by the operating system during system boot and for routine operations. However, adversaries may abuse native API functions to carry out malicious actions. By using assembly directly or indirectly to invoke system calls, attackers can bypass user-mode security measures such as API hooks. Also, attackers may try to change or stop defensive tools that track API use by removing functions or changing sensor behavior. Many well-known exploit tools and malware families—such as Cobalt Strike, Emotet, Lazarus Group, LockBit 3.0, and Stuxnet—have leveraged Native API techniques to bypass security mechanisms, evade detection, and execute low-level malicious operations. 10. T1053: Scheduled Task/Job This technique involves adversaries abusing task scheduling features to execute malicious code at specific times or intervals. Task schedulers are available across major operating systems—including Windows, Linux, macOS, and containerized environments—and can also be used to schedule tasks on remote systems. Adversaries commonly use scheduled tasks for persistence, privilege escalation, and to run malicious payloads under the guise of trusted system processes. (.002) At: The “At” utility is available on Windows, Linux, and macOS for scheduling tasks to run at specific times. Adversaries can exploit “At” to execute programs at system startup or on a set schedule, helping them maintain persistence. It can also be misused for remote execution during lateral movement or to run processes under the context of a specific user account. In Linux environments, attackers may use “At “to break out of restricted environments, aiding in privilege escalation. (.003) Cron: The “cron” utility is a time-based job scheduler used in Unix-like operating systems. The “crontab” file contains scheduled tasks and the times at which they should run. 
These files are stored in system-specific file paths. Adversaries can exploit “cron” in Linux or Unix environments to execute programs at startup or on a set schedule, maintaining persistence. In ESXi environments, “cron” jobs must be created directly through the “crontab” file. (.005) Scheduled Task: Adversaries can misuse Windows Task Scheduler to run programs at startup or on a schedule, ensuring persistence. It can also be exploited for remote execution during lateral movement or to run processes under specific accounts (e.g., SYSTEM). Similar to System Binary Proxy Execution, attackers may hide one-time executions under trusted system processes. They can also create "hidden" tasks that are not visible to defender tools or manual queries. Additionally, attackers may alter registry metadata to further conceal these tasks. (.006) Systemd Timers: Systemd timers are files with a .timer extension used to control services in Linux, serving as an alternative to Cron. They can be activated remotely via the systemctl command over SSH. Each .timer file requires a corresponding .service file. Adversaries can exploit systemd timers to run malicious code at startup or on a schedule for persistence. Timers placed in privileged paths can maintain root-level persistence, while user-level timers can provide user-level persistence. (.007) Container Orchestration Job: Container orchestration jobs automate tasks at specific times, similar to cron jobs on Linux. These jobs can be configured to maintain a set number of containers, helping persist within a cluster. In Kubernetes, a CronJob schedules a Job that runs containers to perform tasks. Adversaries can exploit CronJobs to deploy Jobs that execute malicious code across multiple nodes in a cluster. 11. T1648: Serverless Execution Cloud providers offer various serverless resources such as compute functions, integration services, and web-based triggers that adversaries can exploit to execute arbitrary commands, hijack resources, or deploy functions for further compromise. Cloud events can also trigger these serverless functions, potentially enabling persistent and stealthy execution over time. An example of this is Pacu, a well-known open-source AWS exploitation framework, which leverages serverless execution techniques. 12. T1229: Shared Modules Shared modules are executable components loaded into processes to provide access to reusable code, such as custom functions or Native API calls. Adversaries can abuse this mechanism to execute arbitrary payloads by modularizing their malware into shared objects that perform various malicious functions. On Linux and macOS, the module loader can load shared objects from any local path. On Windows, the loader can load DLLs from both local paths and Universal Naming Convention (UNC) network paths. 13. T1072: Software Deployment Tools Adversaries may exploit centralized management tools to execute commands and move laterally across enterprise networks. Access to endpoint or configuration management platforms can enable remote code execution, data collection, or destructive actions like wiping systems. SaaS-based configuration management tools can also extend this control to cloud-hosted instances and on-premises systems. Similarly, configuration tools used in network infrastructure devices may be abused in the same way. The level of access required for such activity depends on the system’s configuration and security posture. 14. 
T1569: System Services System services and daemons can be abused to execute malicious commands or programs, whether locally or remotely. Creating or modifying services allows execution of payloads for persistence—particularly if set to run at startup—or for temporary, one-time actions. (.001) Launchctl (MacOS): launchctl interacts with launchd, the service management framework for macOS. It supports running subcommands via the command line, interactively, or from standard input. Adversaries can use launchctl to execute commands and programs as Launch Agents or Launch Daemons, either through scripts or manual commands. (.002) Service Execution (Windows): The Windows Service Control Manager (services.exe) manages services and is accessible through both the GUI and system utilities. Tools like PsExec and sc.exe can be used for remote execution by specifying remote servers. Adversaries may exploit these tools to execute malicious content by starting new or modified services. This technique is often used for persistence or privilege escalation. (.003) Systemctl (Linux): systemctl is the main interface for systemd, the Linux init system and service manager. It is typically used from a shell but can also be integrated into scripts or applications. Adversaries may exploit systemctl to execute commands or programs as systemd services. 15. T1204: User Execution Users may be tricked into running malicious code by opening a harmful file or link, often through social engineering. While this usually happens right after initial access, it can occur at other stages of an attack. Adversaries might also deceive users to enable remote access tools, run malicious scripts, or coercing users to manually download and execute malware. Tech support scams often use phishing, vishing, and fake websites, with scammers spoofing numbers or setting up fake call centers to steal access or install malware. (.001) Malicious Link: Users may be tricked into clicking on a link that triggers code execution. This could also involve exploiting a browser or application vulnerability (Exploitation for Client Execution). Additionally, links might lead users to download files that, when executed, deliver malware file. (.002) Malicious File: Users may be tricked into opening a file that leads to code execution. Adversaries often use techniques like masquerading and obfuscating files to make them appear legitimate, increasing the chances that users will open and execute the malicious file. (.003) Malicious Image: Cloud images from platforms like AWS, GCP, and Azure, as well as popular container runtimes like Docker, can be backdoored. These compromised images may be uploaded to public repositories and users might unknowingly download and deploy an instance or container, bypassing Initial Access defenses. Adversaries may also use misleading names to increase the chances of users mistakenly deploying the malicious image. (.004) Malicious Copy and Paste: Users may be deceived into copying and pasting malicious code into a Command or Scripting Interpreter. Malicious websites might display fake error messages or CAPTCHA prompts, instructing users to open a terminal or the Windows Run Dialog and run arbitrary, often obfuscated commands. Once executed, the adversary can gain access to the victim's machine. Phishing emails may also be used to trick users into performing this action. 16. 
T1047: Windows Management Instrumentation
WMI (Windows Management Instrumentation) is an administrative feature that provides programmers and administrators with a standardized way to manage and access data on Windows systems and to interact with system components. Adversaries can exploit WMI to interact with both local and remote systems, using it to perform actions such as gathering information for discovery or executing commands and payloads.
How F5 can help?
F5 security solutions such as WAF (Web Application Firewall), API security, and DDoS mitigation protect applications and APIs across platforms, including cloud, edge, on-premises, and hybrid environments, thereby reducing security risk. F5 bot and risk management solutions can also stop malicious bots and automation, making your modern applications safer. The example attacks mentioned under the techniques above can be effectively mitigated by F5 products like Distributed Cloud, BIG-IP, and NGINX. Here are a few links which explain the mitigation steps:
Mitigating Cross-Site Scripting (XSS) using F5 Advanced WAF
Mitigating Struts2 RCE using F5 BIG-IP
For more details on the other mitigation techniques of MITRE ATT&CK Execution Tactic TA0002, please reach out to your local F5 team.
Reference Links:
MITRE ATT&CK® Execution, Tactic TA0002 - Enterprise | MITRE ATT&CK®
MITRE ATT&CK: What It Is, How it Works, Who Uses It and Why | F5 Labs
Certificate Automation for BIG-IP using CyberArk Certificate Manager, Self-Hosted
The issue of reduced TLS certificate lifetimes is top of mind today. It touches on reducing the risks associated with the day-to-day human management of such critical components of secure enterprise communications. Allowing a TLS certificate to expire, often through simple operator error, can prevent the bulk of human or automated transactions from ever completing. In the context of e-commerce, as only one example, such an outage could be financially devastating. Questions abound: why are certificate lifetimes being lowered, how imminent is this change, and will it affect all certificates? The CA/Browser Forum is an industry association composed of interested parties, including many certificate authority (CA) operators. In a 29-0 vote in 2025, the forum agreed that public TLS certificates should move from the current de-facto 398-day lifetime through a phased reduction arriving at a 47-day limit by March 2029. An ancillary requirement, Domain Control Validation (DCV), which demonstrates that a domain is properly owned, will drop to ten days. Although the governance of certificate lifecycles overtly pertains to public certificates, the reality is that enterprise-managed, so-called private CAs likely need to fall in lock step with these requirements. Pervasive client-side software, such as Google Chrome, is used transparently by users with certificates that may be public or enterprise-issued, and having a single set of criteria for accepting or rejecting a certificate is reasonable.
Why Automated Certificate Management on BIG-IP, Now More than Ever?
A principal driver for shortening certificate (cert) lifetimes (the first phase will reduce public certs to 200-day durations this coming March 15, 2026) is simply to lessen the exposure window should the cert be compromised and misused by an adversary. Certificates, and their corresponding private keys, can be maintained manually. The BIG-IP TMUI interface has a click-ops path for tying certificates and keys to SSL profiles for virtual servers that project HTTPS web sites and services to consumers. However, this requires something valuable, head count, along with the diligence to ensure a certificate is refreshed, perhaps through an enterprise CA solution like Microsoft Certificate Authority. It is critical this is done, always and without fail, well in advance of expiry. An automated solution that can take a "set it and forget it" approach to both initial certificate deployment and the critical task of timely renewal is now more beneficial than ever.
Lab Testing to Validate BIG-IP with CyberArk Trusted Protection Platform (TPP)
A test bed was created that involved, at first, a BIG-IP in front of an HTTP/HTTPS server fleet, a Windows 2019 domain controller, and a Windows 10 client with which to test the BIG-IP virtual servers. Microsoft Certificate Authority was installed on the server to allow the issuance of enterprise certs for any of the HTTPS virtual servers created on the BIG-IP. Here is the lab layout, where virtual machines were leveraged to create the elements, including BIG-IP Virtual Edition (VE). The lab is straightforward: the Microsoft Certificate Authority component was installed on the Windows 2019 domain controller, along with Microsoft SQL Server 2019 and SQL Management Studio. In an enterprise production environment these components would likely never share the domain controller host platform, but they are fine for this lab setup.
Without an offering to shield the complexity and the various manual processes of key and cert management, an operator will need to be well-versed in an enterprise CA solution like Microsoft's. A typical launching sequence from Server Manager is shown below, with the sample lab CA and a representative list of issued certificates with various end dates. Unequipped with a solution like that from CyberArk, a typical workflow might be to install the web interface in addition to the Microsoft CA, and to generate web server certificates for each virtual server (also frequently called "each application") configured on the BIG-IP. A frequent approach is to create a unique web server template in Microsoft CA, with all certificates generated manually following a fixed, user-specified certificate lifetime. As seen below, we are not installing anything but the core server role of Certificate Authority; the web interface for requesting certificates is not required and is not installed as a role.
CyberArk Certificate Manager, Self-Hosted – Three High-Value Use Cases
The self-hosted certificate and key management solution from CyberArk is a mature, tested offering with a significant user base, and it may still be known by previous names such as Venafi TLS Protect or Venafi Trust Protection Platform (TPP). CyberArk acquired Venafi in 2024. The succinct proof-of-concept lab exercise pursued three objectives that represent expected use cases:
1. Discover all existing BIG-IP virtual server TLS certificates
2. Renew certificates and change self-signed instances to enterprise PKI-issued certificates
3. Create completely new certificates and private keys and assign them to new BIG-IP virtual servers
The following diagram reflects the addition of CyberArk Certificate Manager, or Venafi TPP if you have long-term experience with the solution, to the Windows Server 2019 instance.
Use Case One – Discover all BIG-IP Existing Certificates Already Deployed
In our lab solution, to reiterate the pivotal role of CyberArk Certificate Manager (Venafi TPP) in certificate issuance, we have created a "PolicyTree" policy called "TestingCertificates". This is where we will discover all of our BIG-IP virtual servers and their corresponding Client SSL and Server SSL profiles. A Client SSL profile, for example, dictates how TLS will behave when a client first attempts a secure connection, including the certificate, potentially a certificate chain if signing was performed with an intermediate CA, and protocol-specific features like support for TLS 1.3 and PQC (NIST FIPS 203). Here are the original contents of the TestingCertificates folder before running an updated discovery; notice how both F5 virtual servers (VS) are listed along with the certificates used by a given VS. This is an example of the traditional CyberArk GUI look and feel. A simple workflow exists within the CyberArk platform to visually set up a virtual server and certificate discovery job; it can be run manually once, when needed, or set to operate on a regular schedule. This screenshot shows the fields required for the discovery job, and also provides an example of the evolved, streamlined approach to the user interface, referred to as the newer "Aperture" style view. Besides the enormous time savings of the first-time discovery of BIG-IP virtual servers, and the certificates and keys they use in the form of SSL profiles, we can also look for new applications stood up on the BIG-IP through ongoing CyberArk discovery runs.
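The raw data such a discovery job collects can also be pictured at the API level. The hedged sketch below uses the BIG-IP iControl REST API to list Client SSL profiles and the expiration of installed certificates. The host, credentials, and partition layout are hypothetical, and field names can vary between TMOS versions, so treat this as an illustration of the discovery concept rather than the CyberArk integration itself.

```python
import requests
from requests.auth import HTTPBasicAuth

# Hypothetical lab values; verify endpoints and field names against your TMOS version.
BIGIP = "https://10.1.1.245"
AUTH = HTTPBasicAuth("admin", "admin-password")
VERIFY_TLS = False  # lab only; use proper CA validation in production

def get(path: str) -> dict:
    resp = requests.get(f"{BIGIP}/mgmt/tm{path}", auth=AUTH, verify=VERIFY_TLS)
    resp.raise_for_status()
    return resp.json()

def report() -> None:
    # Which certificate and key does each Client SSL profile reference?
    for profile in get("/ltm/profile/client-ssl").get("items", []):
        print(f"profile {profile['name']}: cert={profile.get('cert')} "
              f"key={profile.get('key')}")
    # When does each installed certificate expire?
    for cert in get("/sys/crypto/cert").get("items", []):
        expiry = cert.get("expirationString", "unknown")  # field name may vary
        print(f"cert {cert['name']}: expires {expiry}")

if __name__ == "__main__":
    report()
```

A scheduled discovery run effectively keeps this inventory current for every BIG-IP in scope, which is what makes expiry surprises so much less likely.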
In the above example, we see that a new web service implemented at the FQDN www.twotitans.com has just been discovered. Clicking the certificate, one thing to note is that it is self-signed. In real enterprise environments, there may be a need to re-issue such a certificate from the enterprise CA as part of a solid security posture. Another, even more impactful use case is when all enterprise certificates need to be switched from a legacy CA to a new CA quickly and painlessly. A single click on a discovered certificate presents some key information. On this one screen, an operator might note that this particular certificate warrants some improvements: only 2048 bits are used in the key; the key is not making use of advanced storage, such as a NetHSM; and the certificate itself has not been built to support revocation mechanisms such as Certificate Revocation Lists (CRLs) or the Online Certificate Status Protocol (OCSP).
Use Case Two - Renew Certificates and Change Self-signed Instance to Enterprise PKI-Issued Certificates
The automated approach of a solution like CyberArk's means that manual, interactive certificate renewal is unlikely to be prevalent. However, for the purpose of our demonstration, we can examine a current certificate, alive and active on a BIG-IP, supporting the application s3.example.com. This is the "before" situation (double-click image for higher resolution). The result of clicking the "Renew Now" button is a newly minted certificate carrying the policy-specified 12-month lifetime. As seen in the following diagram, the certificate and its corresponding private key are automatically installed in the Client SSL profile on the BIG-IP that houses the certificate. The s3.example.com application seamlessly continues to operate, albeit with a refreshed certificate. A tactical use of this automatic certificate renewal and touchless installation is to grab any virtual servers running with self-signed certificates and update those certificates so they are signed by the enterprise PKI CA or an intermediate CA. Another toolkit feature now available is the ability to switch the entire enterprise PKI from one CA to another, quickly. In our lab setup, we have a Microsoft CA configured; it is named "vlab-SERVERDC1-ca". The following certificate, ingested through discovery by CyberArk from the BIG-IP, is self-signed. Such certificates can be created directly within the BIG-IP TMUI GUI, although frequently they are quickly generated with the OpenSSL utility. Being self-signed, traffic to this virtual server will typically trigger browser security-risk pop-ups. Users may click through them in many cases, or the certificate may even be downloaded from the browser and installed in the client's certificate store to get around a perceived annoyance. This, however, can be troublesome in more locked-down enterprise environments, where an Active Directory group policy object (GPO) can be pushed to domain clients, precluding any self-signed certificate from being accepted with a few clicks around a pop-up. It is more secure and more robust to have authorized web services vetted and then incorporated into the enterprise PKI environment. This is the net result of using CyberArk Certificate Manager, coupled with something like the Microsoft enterprise CA, to re-issue the certificate (double-click).
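At the BIG-IP end, the renew-and-install step boils down to uploading a new certificate and key and re-pointing the Client SSL profile. For readers who want to relate that to the BIG-IP's own automation surface, below is a hedged, simplified sketch of the operation using iControl REST. The host, credentials, and object names are hypothetical, the endpoints reflect commonly documented iControl REST usage rather than a guaranteed interface for every TMOS version, and this is not presented as the mechanism CyberArk itself uses.

```python
import requests
from pathlib import Path
from requests.auth import HTTPBasicAuth

# Hedged sketch: push a renewed certificate/key to a BIG-IP and update a
# Client SSL profile via iControl REST. All names and addresses are lab
# assumptions; verify endpoint behavior against your TMOS version.
BIGIP = "https://10.1.1.245"
AUTH = HTTPBasicAuth("admin", "admin-password")
VERIFY_TLS = False  # lab only

def upload(local_path: str, remote_name: str) -> str:
    """Upload a file; BIG-IP stores it under /var/config/rest/downloads/."""
    data = Path(local_path).read_bytes()
    headers = {"Content-Type": "application/octet-stream",
               "Content-Range": f"0-{len(data) - 1}/{len(data)}"}
    url = f"{BIGIP}/mgmt/shared/file-transfer/uploads/{remote_name}"
    requests.post(url, data=data, headers=headers,
                  auth=AUTH, verify=VERIFY_TLS).raise_for_status()
    return f"/var/config/rest/downloads/{remote_name}"

def install(kind: str, name: str, source: str) -> None:
    """Install uploaded material as a managed cert or key object."""
    url = f"{BIGIP}/mgmt/tm/sys/crypto/{kind}"
    payload = {"command": "install", "name": name, "from-local-file": source}
    requests.post(url, json=payload, auth=AUTH, verify=VERIFY_TLS).raise_for_status()

def update_profile(profile: str, cert: str, key: str) -> None:
    """Point an existing Client SSL profile at the new cert and key."""
    url = f"{BIGIP}/mgmt/tm/ltm/profile/client-ssl/{profile}"
    requests.patch(url, json={"cert": cert, "key": key},
                   auth=AUTH, verify=VERIFY_TLS).raise_for_status()

if __name__ == "__main__":
    cert_src = upload("s3.example.com.crt", "s3.example.com.crt")
    key_src = upload("s3.example.com.key", "s3.example.com.key")
    install("cert", "s3.example.com.crt", cert_src)
    install("key", "s3.example.com.key", key_src)
    update_profile("s3_clientssl", "s3.example.com.crt", "s3.example.com.key")
```

Seeing the handful of moving parts involved makes it clear why a managed, policy-driven workflow is preferable to scripting and babysitting this per application.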
Use Case Three - Create Completely New Certificates and Private Keys and Assign to BIG-IP New Virtual Servers
Through the CyberArk GUI, the workflows to create new certificates are intuitive. Per the following image, right-click on a policy and follow the "+Add" menu. We will add a server certificate and store it in the BIG-IP certificate and key list for future use. A basic set of steps was followed:
Through the BIG-IP GUI, set up the application on the BIG-IP as per a normal configuration, including the origin pool, the Client SSL profile, and a virtual server on port 443 that ties these elements together.
On CyberArk, create the server certificate with details congruent with the virtual server, such as the common name, the subject alternative name list, and the desired key length.
On CyberArk, create a virtual server entry that binds the certificate just created to the values defined on the BIG-IP.
The last step will look like this. Once the certificate is selected for "Renewal", the necessary elements will automatically be downloaded to the BIG-IP. As seen, the Client SSL profile has now been updated with the new certificate and key signed by the enterprise CA.
Summary
This article demonstrated an approach to TLS certificate and key management for applications of all types that harnesses the F5 BIG-IP for both secure and scalable delivery. With the rise in the number of applications requiring TLS security, including advanced features enabled by BIG-IP like TLS 1.3 and PQC, coupled with the industry's movement toward very short certificate lifecycles, the automation discussed will become indispensable to many organizations. The article touched on the ability to discover existing applications, to switch out an entire enterprise PKI offering smoothly, and to agilely create new BIG-IP-centered applications.