security
Overview of MITRE ATT&CK Tactic: TA0008 - Lateral Movement
This article focuses on the Lateral Movement tactic and the techniques adversaries use to move across a network by remotely accessing and controlling additional systems. Understanding this tactic is crucial because it shows how a small initial compromise can rapidly escalate into a large-scale intrusion.

The End of ClientAuth EKU…Oh Mercy…What to do?
If you’ve spent any time recently monitoring the cryptography and/or public key infrastructure (PKI) spaces…beyond that ever-present “post-quantum” thing, you may have read that starting in May 2026, the Google Chrome Root Program Policy will require public certificate authorities (CAs) to stop issuing certificates that include the Client Authentication Extended Key Usage (ClientAuth EKU) extension. While removing the ClientAuth EKU from TLS server certificates correctly narrows the scope of those certificates, some internal machine-to-machine and API workloads that rely on client-certificate-authenticated TLS could fail when new or renewed certificates are received from a public CA. Read more here for details and options.
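One practical first step is to inventory which of your existing certificates actually carry the ClientAuth EKU before the policy change lands. The sketch below is a minimal illustration using Python's cryptography package; the file name server.pem is an assumption, and in practice you would loop this over your own certificate store.

```python
from cryptography import x509
from cryptography.x509.oid import ExtensionOID, ExtendedKeyUsageOID

# Load a certificate from disk (assumes a PEM-encoded file named server.pem).
with open("server.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

try:
    # Pull the Extended Key Usage extension, if present.
    eku = cert.extensions.get_extension_for_oid(ExtensionOID.EXTENDED_KEY_USAGE).value
    purposes = [oid.dotted_string for oid in eku]
    has_client_auth = ExtendedKeyUsageOID.CLIENT_AUTH in eku
except x509.ExtensionNotFound:
    purposes, has_client_auth = [], False

print("EKU OIDs:", purposes)
print("Carries ClientAuth EKU:", has_client_auth)
```

Certificates that report True here and that are consumed by internal mTLS or API workloads are the ones worth reviewing against the options discussed in the linked article.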
Overview of MITRE ATT&CK Tactic: TA0004 - Privilege Escalation

Introduction
The Privilege Escalation tactic in the MITRE ATT&CK framework covers techniques that adversaries use to gain higher-level permissions on compromised systems or networks. After gaining initial access, attackers frequently need elevated rights to access sensitive resources, execute restricted operations, or maintain persistence. Techniques include exploiting OS vulnerabilities, misconfigurations, or weaknesses in security controls to move from user-level to admin or root privileges. This may involve abusing elevation control mechanisms (such as sudo, setuid, or UAC), manipulating accounts or tokens, leveraging scheduled tasks, or exploiting valid credentials.

Techniques and Sub-Techniques

T1548 – Abuse Elevation Control Mechanisms
This technique involves bypassing or abusing OS mechanisms that restrict elevated execution, such as sudo, UAC, or setuid binaries. Adversaries exploit misconfigurations or weak rules to run commands with higher privileges. This often requires no exploit code, just permission misuse. Once elevated, attackers gain access to restricted system operations.
T1548.001 – Setuid and Setgid: Attackers run programs with elevated permissions by abusing setuid/setgid bits on Unix systems. This allows execution as another user, often root, without needing that user's password (see the detection sketch after this list).
T1548.002 – Bypass User Account Control: Adversaries exploit UAC weaknesses to elevate privileges without user approval. This grants admin-level execution while maintaining user-level stealth.
T1548.003 – Sudo and Sudo Caching: Misconfigured sudo rules or cached sudo credentials allow attackers to run privileged commands, escalating without full authentication or bypassing intended restrictions.
T1548.004 – Elevated Execution with Prompt: Malicious actors deceive users into granting elevated rights to a malicious process. This relies on social engineering rather than technical exploitation.
Temporary Elevated Cloud Access: Cloud platforms issue temporary privileges through roles or tokens. Misconfigured role assumptions or temporary credentials can be abused to obtain short-term high-level access.
TCC Manipulation: Attackers tamper with macOS's privacy-control system to wrongfully grant apps access to sensitive resources like the camera, microphone, or full disk, essentially bypassing user consent protections.
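To make the setuid/setgid abuse in T1548.001 concrete, defenders commonly inventory setuid/setgid binaries and alert on unexpected additions. The following is only a minimal sketch of such a scan; the choice of /usr as the starting directory, and the idea of diffing the output against a known-good baseline, are assumptions for illustration rather than part of the MITRE technique description.

```python
import os
import stat

def find_setuid_setgid(root="/usr"):
    """Walk a directory tree and report files with the setuid or setgid bit set."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                mode = os.lstat(path).st_mode
            except OSError:
                continue  # unreadable or vanished file
            if mode & (stat.S_ISUID | stat.S_ISGID):
                hits.append(path)
    return hits

if __name__ == "__main__":
    for path in find_setuid_setgid():
        print(path)
```

Comparing successive runs of a scan like this (or feeding it into your SIEM) highlights newly introduced privileged binaries that merit investigation.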
T1134 – Access Token Manipulation
Adversaries modify or steal Windows access tokens to make malicious processes run with the permissions of another user. By impersonating these tokens, attackers can bypass access controls, escalate privileges, and perform actions as though they were legitimate users or even SYSTEM.
Token Impersonation/Theft: Attackers duplicate and impersonate another user's token, allowing their process to operate with the privileges of the legitimate user. This technique is frequently used to gain higher-level privileges on Windows machines.
Create Process with Token: Adversaries use a stolen or duplicated token to spawn a new process under the security context of a higher-privilege user, enabling the execution of actions with elevated permissions.
Make and Impersonate Token: Attackers generate new tokens using credentials they possess, then impersonate the target user's identity to gain unauthorized access and escalate their privileges.
Parent PID Spoofing: This technique manipulates the parent process ID (PPID) of a new process so it appears to have a trusted parent, helping adversaries evade defenses or gain higher privileges.
SID-History Injection: Adversaries inject SID-History attributes into access tokens or Active Directory to spoof permissions. This enables attackers to sidestep traditional group membership rules, granting them privileges that would normally be restricted.

T1098 – Account Manipulation
This refers to actions attackers take to preserve their access using compromised accounts, such as modifying credentials, group memberships, or account settings. By changing permissions or adding credentials, adversaries can escalate privileges, maintain persistence, or create hidden backdoors for future access.
Additional Cloud Credentials: Adversaries add their own keys, passwords, or service principal credentials to victim cloud accounts, enabling escalation without detection. This allows them to use new credentials and bypass standard logging and security controls in cloud environments.
Additional Email Delegate Permissions: Attackers may grant themselves high-level permissions on email accounts, allowing unauthorized access, control, or forwarding of sensitive communications, which can give visibility into victim correspondence for further attacks.
Additional Cloud Roles: Adversaries assign new privileged roles to compromised accounts, expanding permissions and enabling wider access to cloud resources.
SSH Authorized Keys: Attackers append or modify their public keys in SSH authorized_keys files on target machines. This bypasses password authentication and allows undetected logins to compromised systems (see the audit sketch after this list).
Device Registration: Adversaries register malicious devices with victim accounts, often in MFA or management portals, to maintain ongoing access. This can allow attackers to access resources as trusted endpoints.
Additional Container Cluster Roles: Attackers grant their accounts extra permissions or roles in container orchestration systems such as Kubernetes. These elevated roles allow broader control over cluster resources and enable cluster-wide compromise.
Additional Local or Domain Groups: Adversaries add their accounts to privileged local or domain groups, gaining higher-level access and capabilities. Manipulating group memberships supports escalation, persistence, and dominance within target environments.
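Because the SSH Authorized Keys abuse described above simply appends a key to a user's authorized_keys file, a lightweight audit is to compare those files against an approved baseline and flag anything extra. The sketch below is illustrative only: the baseline file name and the /home directory layout are assumptions, and a real deployment would pull the approved keys from configuration management.

```python
import glob
import os

# Hypothetical baseline: one approved public key per line.
BASELINE_FILE = "approved_keys.txt"

def load_keys(path):
    try:
        with open(path) as f:
            return {line.strip() for line in f if line.strip() and not line.startswith("#")}
    except OSError:
        return set()

approved = load_keys(BASELINE_FILE)

# Check every user's authorized_keys file under /home, plus root's.
for keyfile in glob.glob("/home/*/.ssh/authorized_keys") + ["/root/.ssh/authorized_keys"]:
    for key in load_keys(keyfile) - approved:
        user = keyfile.split(os.sep)[2] if keyfile.startswith("/home/") else "root"
        print(f"Unapproved SSH key for {user}: {key[:60]}...")
```

Any key reported here that was not deliberately provisioned is a strong persistence indicator worth investigating.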
T1547 – Boot or Logon Autostart Execution
Attackers abuse programs that automatically run during boot or login. These locations can be modified to launch malicious code with elevated privileges, providing persistence and often higher-level execution. This is commonly achieved by manipulating registry keys, services, or startup folders.
Registry Run Keys / Startup Folder: Attackers add malicious programs to Windows Registry Run keys or Startup folders to ensure automatic execution when a user logs in. This provides persistent and often stealthy privilege escalation on reboot or login (see the sketch after this list).
Authentication Package: By installing a malicious authentication package (DLL), adversaries can intercept credentials or execute code with system-level privileges during the Windows authentication process, enabling privilege escalation and persistence.
Time Providers: Attackers register malicious DLLs as Windows time providers (the DLLs responsible for time synchronization) so that their code is loaded by system processes on boot or at scheduled intervals, allowing stealthy system-level access and persistence.
Winlogon Helper DLL: Adversaries plant a helper DLL in Winlogon's registry settings so it loads with each user logon, running malicious code with high privileges and ensuring execution whenever the system starts or a user logs in.
Security Support Provider: Inserting a rogue Security Support Provider (SSP) DLL allows attackers to monitor or manipulate authentication and system logins, potentially capturing credentials and persisting with SYSTEM privileges at the operating system level.
Kernel Modules and Extensions: Attackers load malicious kernel modules or extensions to run arbitrary code in kernel space, giving them unrestricted control over the system, hiding their presence, or manipulating low-level OS behavior for privilege escalation.
Re-opened Applications: On macOS, adversaries abuse the property list files that track reopened applications after reboot, ensuring their chosen programs or payloads relaunch automatically and persistently escalate privileges upon user login.
LSASS Driver: Modifying or adding an LSASS (Local Security Authority Subsystem Service) driver gives attackers persistent system-level code execution, potentially accessing or controlling authentication processes.
Shortcut Modification: By altering shortcut files (LNKs), adversaries ensure that opening a benign application or file instead executes attacker-controlled code, effectively leveraging user actions for privilege escalation and persistence.
Port Monitors: Attackers install or hijack port monitor DLLs, which Windows loads to manage printers, so that their code runs with SYSTEM privileges when the service starts, enabling privilege escalation and persistence.
Print Processors: Planting a malicious print processor DLL (the software Windows uses to handle print jobs) causes Windows to execute attacker code as SYSTEM whenever print functions are called, creating a persistence and privilege escalation method.
XDG Autostart Entries: On Linux desktop environments, adversaries use XDG-compliant autostart entries to launch malicious programs automatically at user login, gaining persistent execution and the ability to operate with user or escalated privileges.
Active Setup: Attackers add or modify Active Setup registry keys to ensure their payloads execute with elevated privileges during user profile initialization, such as when a new user logs in.
Login Items: On macOS, adversaries add login items that point to their malicious applications or scripts, guaranteeing code execution with the user's privileges whenever a login event occurs.
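As one concrete check against the Registry Run Keys / Startup Folder entry above, defenders frequently dump the Run keys and compare them against an expected list. A minimal, Windows-only sketch using the standard-library winreg module is shown below; it only covers the HKLM and HKCU Run keys and leaves the baseline comparison to you.

```python
import winreg  # Windows-only standard library module

RUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"

def dump_run_key(hive, hive_name):
    """Print every value under a Run key for the given registry hive."""
    try:
        with winreg.OpenKey(hive, RUN_KEY) as key:
            value_count = winreg.QueryInfoKey(key)[1]  # number of values under the key
            for i in range(value_count):
                name, value, _type = winreg.EnumValue(key, i)
                print(f"{hive_name}\\{RUN_KEY}: {name} -> {value}")
    except OSError:
        pass  # key missing or not readable

dump_run_key(winreg.HKEY_LOCAL_MACHINE, "HKLM")
dump_run_key(winreg.HKEY_CURRENT_USER, "HKCU")
```

Unexpected entries, especially ones pointing at user-writable paths, are a classic autostart persistence signal.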
T1037 – Boot or Logon Initialization Scripts
This refers to the use of scripts that are automatically executed during system startup or user logon to help adversaries maintain persistence on a machine. By modifying these scripts, attackers can ensure their malicious code runs every time the system boots.
Logon Script (Windows): Scripts configured in Windows to run automatically during user or group logon can be exploited by adversaries to execute malicious code with the user's privileges, enabling persistence or escalation.
Login Hook: A login hook is a macOS mechanism that allows scripts or executables to run automatically upon a user's login, which attackers may abuse to achieve persistence or elevate privileges.
Network Logon Script: These are scripts assigned via Active Directory or Group Policy to execute during network logon, potentially allowing adversaries to introduce or persist malicious code in a domain environment.
RC Scripts: On Unix-like systems, RC (run command) scripts control startup processes. Attackers who modify these can ensure their code runs with elevated privileges every time the system boots.
Startup Items: Files or programs set to launch automatically during boot or user login can be manipulated by attackers, allowing persistent or privileged execution at startup.

T1543 – Create or Modify System Process
Attackers create or modify system services or daemons that run with high privileges. By altering service configurations, they ensure malicious code executes as SYSTEM or root, providing long-term persistence and elevated access.
Launch Agent: Attackers can create or modify launch agents on macOS to automatically execute malicious payloads whenever a user logs in, helping maintain persistence at the user level.
Systemd Service: By altering systemd service files on Linux, adversaries can ensure their code runs as a background service during startup, maintaining continuous access to the system.
Windows Service: Attackers abuse Windows service configurations to install or modify services that launch malicious programs on startup or at defined intervals, allowing persistent and privileged access.
Launch Daemon: On macOS, launch daemons run background processes with elevated privileges before user login and are often used by attackers to achieve system-wide persistence.
Container Service: Adversaries may create or modify container or cluster management services (like Docker or Kubernetes agents) to repeatedly execute malicious code inside containers as part of persistence.

T1484 – Domain or Tenant Policy Modification
Adversaries change configuration settings in a domain or tenant environment, such as Active Directory or cloud identity services, to bypass security controls and escalate privileges. This can include editing Group Policy Objects, trust relationships, or federation settings, which may impact large numbers of users or systems across an organization. Attackers leverage this technique to gain persistent elevated access and make detection and remediation much more difficult.
Group Policy Modification: Attackers may alter Group Policy Objects (GPOs) in Active Directory environments to subvert security settings and gain elevated privileges across the domain. By doing this, attackers can deploy malicious tasks, change user rights, or disable security controls on many systems simultaneously.
Trust Modification: Adversaries change domain or tenant trust relationships, such as adding, removing, or altering trust properties between domains or tenants, to expand their access and ensure continued control. This can let attackers move laterally and escalate privileges across multiple domains.

T1611 – Escape to Host
In virtualized environments, attackers attempt to escape a container or VM. If successful, they gain access to the underlying host system, which has higher privileges. This usually arises from weaknesses in the hypervisor or insufficient separation between virtual environments, and it gives the attacker complete control over every workload operating on that host.

T1546 – Event Triggered Execution
Attackers use system events such as service starts, scheduled jobs, or user logins to trigger malicious code. These triggers often run with SYSTEM or administrative privileges. By hijacking legitimate event handlers, the attacker executes commands without raising suspicion. It also enables persistence tied to normal system operations.
Change Default File Association: Attackers alter file type associations so that opening a file triggers malicious code, helping them gain persistence or escalate privileges.
Screensaver: Adversaries can replace system screensavers with malicious executables, causing code to run automatically when the screensaver activates.
Windows Management Instrumentation Event Subscription: By setting up WMI event subscriptions, attackers ensure their code executes in response to specific system events, establishing stealthy persistence on Windows.
Unix Shell Configuration Modification: Modifying shell configuration files like .bashrc or .profile allows adversaries to start malicious code whenever a user opens a terminal session.
Trap: Attackers abuse shell trap commands to execute code in response to system signals (e.g., shutdown, logoff, or errors), enhancing persistence or privilege escalation.
LC_LOAD_DYLIB Addition: By adding a malicious LC_LOAD_DYLIB header to macOS binaries, attackers can force the system to load rogue dynamic libraries during execution.
Netsh Helper DLL: Attackers register malicious DLLs as Netsh helpers, ensuring their code loads whenever Netsh is used, aiding persistence or privilege escalation.
Accessibility Features: Abusing Windows accessibility tools (like Sticky Keys) lets attackers invoke system shells or backdoors at the login screen, bypassing standard authentication.
AppCert DLLs: Adversaries inject DLLs via AppCert DLL registry keys so their code runs on every process creation, creating broad persistence.
AppInit DLLs: Attackers exploit AppInit DLL registry values to ensure their DLLs are loaded into multiple processes, maintaining persistence.
Application Shimming: By creating or modifying Windows application shims, adversaries force the system to redirect legitimate programs to launch malicious code.
Image File Execution Options Injection: Modifying Image File Execution Options (IFEO) in the registry allows attackers to set debuggers that hijack normal application launches for persistence.
PowerShell Profile: Malicious code in PowerShell profile scripts auto-runs whenever PowerShell starts, providing persistence and privilege escalation opportunities.
Emond: Attackers place malicious rules in macOS's emond event monitor daemon, causing code to run in response to system events.
Component Object Model Hijacking: By hijacking references to COM objects in Windows, adversaries ensure their code launches when certain applications or system routines are invoked.
Installer Packages: Attackers may leverage installer scripts or packages to deploy persistent code during application installation or updates.
Udev Rules: By modifying Linux's udev rules, adversaries configure devices to trigger the execution of rogue code during events like hardware insertion.
Python Startup Hooks: Attackers add code to Python startup scripts or modules, causing their payload to run automatically whenever the Python interpreter is launched.

T1068 – Exploitation for Privilege Escalation
Attackers exploit software or OS vulnerabilities to gain elevated rights. This may target kernel flaws, driver bugs, or misconfigured services. By triggering the vulnerability, adversaries escalate from low-privilege to SYSTEM or root. This is one of the most direct and powerful escalation methods.
T1574 – Hijack Execution Flow
This technique alters how the system resolves and launches programs. Attackers place malicious files where high-privilege processes expect legitimate ones. When the privileged process starts, it inadvertently loads or executes the attacker code. This leverages DLL search order hijacking, path hijacking, and similar methods.
DLL: Attackers exploit the way Windows applications load Dynamic Link Libraries (DLLs), tricking them into running malicious DLLs for code execution or privilege escalation.
Dylib Hijacking: Adversaries target macOS by placing malicious dylib files in directories searched by applications, causing them to be loaded instead of legitimate libraries.
Executable Installer File Permissions Weakness: Attackers leverage weak permissions on installer files to replace or modify executables, allowing unauthorized code execution with high privileges.
Dynamic Linker Hijacking: This technique manipulates the loading process of shared libraries (DLLs or dylibs), often abusing environment variables (like PATH) or loader settings to ensure malicious libraries are loaded first.
Path Interception by PATH Environment Variable: Adversaries modify the PATH environment variable, influencing where the system searches for executables and libraries, enabling malicious code to be loaded (see the sketch after this list).
Path Interception by Search Order Hijacking: Attackers exploit insecure search orders for files or DLLs, placing malicious files in locations that applications check before trusted locations.
Path Interception by Unquoted Path: By taking advantage of unquoted paths in executable calls, adversaries plant malicious files that are incorrectly loaded by the system, allowing code execution.
Services File Permissions Weakness: Weak permissions on Windows service files enable attackers to replace service executables with malicious content, gaining persistent system access.
Services Registry Permissions Weakness: Adversaries exploit weak registry settings of Windows services, altering keys to redirect service execution to their malicious code.
COR_PROFILER: Attackers abuse the COR_PROFILER environment variable to hijack the way .NET applications load profiling DLLs, gaining code execution during application runtime.
KernelCallbackTable: This involves altering callback tables in the Windows kernel to redirect the execution flow, enabling arbitrary code to run with elevated privileges.
AppDomainManager: By subverting the AppDomainManager in .NET applications, adversaries gain control over the loading of assemblies, potentially executing malicious payloads during application startup.
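A quick, defensive illustration of the path-interception variants above is to check whether any directory on the current PATH is world-writable or missing, since that is exactly the foothold search-order and PATH hijacks rely on. This sketch is POSIX-oriented and only a starting point, not a complete audit.

```python
import os
import stat

for directory in os.environ.get("PATH", "").split(os.pathsep):
    if not directory:
        continue
    try:
        mode = os.stat(directory).st_mode
    except FileNotFoundError:
        # A dangling PATH entry is dangerous: an attacker who can create it controls lookups.
        print(f"MISSING  {directory}")
        continue
    if mode & stat.S_IWOTH:
        # World-writable directories on PATH let anyone plant or replace binaries.
        print(f"WRITABLE {directory}")
```

Findings from a check like this are usually remediated by tightening directory permissions or pruning the PATH rather than by detection alone.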
T1055 – Process Injection
This involves injecting malicious code into legitimate processes. Injected processes often run with higher privileges than the attacker initially has. It enables evasion of security tools by blending into trusted processes. Successful injection allows execution under a more privileged security context.
Dynamic-link Library Injection: Injects malicious DLLs into live processes to execute unauthorized code in process memory, enabling attackers to evade defenses and elevate privileges.
Portable Executable Injection: Loads or maps a malicious executable (EXE) into the address space of another process, running code under the guise of a legitimate application.
Thread Execution Hijacking: Redirects the execution flow of an active thread in a process to run attacker-controlled code, often used for stealthy payload delivery.
Asynchronous Procedure Call (APC): Delivers malicious code by queuing attacker-specified functions (APCs) to run in the context of another process or thread.
Thread Local Storage (TLS): Uses TLS callbacks within a process to execute injected code when the process loads DLLs, often leveraging this for covert malware execution.
Ptrace System Calls: Exploits ptrace debugging capabilities (on Unix/Linux) to inject and execute malicious code within the address space of a targeted process.
Proc Memory: Modifies memory structures directly through the /proc filesystem (Linux/Unix) to inject or alter code in running processes for persistence or privilege escalation.
Extra Window Memory Injection: Injects code into special memory regions (like window memory in Windows GUI processes) to achieve code execution in those processes.
Process Hollowing: Creates a legitimate process, then swaps its memory with attacker code, making malware run under the mask of valid processes to evade detection.
Process Doppelgänging: Leverages Windows Transactional NTFS (TxF) and process creation mechanisms to run malicious code in a way that appears legitimate and avoids conventional monitoring.
VDSO Hijacking: Modifies the Virtual Dynamic Shared Object (VDSO) in Linux to execute injected code during system or process startup routines.
ListPlanting: Manipulates application or window list memory, using this entry point for code injection into legitimate processes without overtly altering their main execution flow.

T1053 – Scheduled Task/Job
Attackers create or modify scheduled tasks to run malware with elevated privileges. These jobs often execute under SYSTEM, root, or service accounts, providing both persistence and privilege escalation. The scheduled execution blends into normal automated system behavior.
At: Attackers use the "at" scheduling utility on Windows or Unix-like systems to set up tasks that run at specific times, enabling persistence or timed execution of malicious programs.
Cron: By adding entries to cron on Unix/Linux systems, adversaries can schedule their malicious code to execute automatically at regular intervals, maintaining access without user interaction (see the hunting sketch after this list).
Scheduled Task: Threat actors abuse operating system scheduling features (like Windows Task Scheduler) to run unwanted commands or software on startup or according to a set schedule for persistence.
Systemd Timers: In Linux environments, attackers configure systemd timers to trigger services or executables at designated times, ensuring regular execution of their payloads even after restarts.
Container Orchestration Job: Adversaries leverage cluster scheduling platforms (such as Kubernetes CronJobs) to deploy containers that repeatedly execute malicious code across multiple nodes, providing scalable and automated persistence in cloud-native environments.
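For the Cron sub-technique in particular, a simple hunting step is to enumerate the common cron locations and review (or diff) the scheduled entries. The sketch below only prints what it finds; the exact set of paths worth checking varies by distribution and is an assumption here.

```python
import glob

CRON_PATHS = ["/etc/crontab"]
CRON_PATHS += glob.glob("/etc/cron.d/*")
CRON_PATHS += glob.glob("/var/spool/cron/crontabs/*")  # Debian/Ubuntu-style user crontabs
CRON_PATHS += glob.glob("/var/spool/cron/*")           # RHEL-style user crontabs

for path in CRON_PATHS:
    try:
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line and not line.startswith("#"):
                    print(f"{path}: {line}")
    except OSError:
        continue  # skip directories and unreadable files
```

Entries invoking scripts from /tmp, user home directories, or base64-looking one-liners are the usual suspects to triage first.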
T1078 – Valid Accounts
Adversaries use stolen credentials to access legitimate user, admin, or service accounts for initial access, persistence, or privilege escalation, often bypassing security controls by blending in with normal activity.
Default Accounts: Pre-configured accounts built into operating systems or applications, such as guest or administrator; attackers exploit weak, unchanged, or known passwords on these accounts to gain unauthorized access.
Domain Accounts: Managed by Active Directory, domain accounts allow users, administrators, or services to access resources across an organization's network; adversaries leverage compromised domain credentials for lateral movement or privileged actions.
Local Accounts: Accounts specific to a single machine or device, often with administrative privileges; attackers use compromised local credentials to escalate rights or maintain control over endpoints.
Cloud Accounts: Accounts for cloud platforms or services (like AWS, Azure, GCP); adversaries who obtain these credentials can gain significant control, escalate privileges, or persist in cloud environments.

How F5 can help
F5 security solutions, including BIG-IP, NGINX, and Distributed Cloud, provide robust defenses against privilege escalation risks by enforcing strict access controls, role-based permissions, and session validation. These protections mitigate risks from vulnerabilities and misconfigurations that adversaries exploit to elevate privileges. F5's security capabilities also offer monitoring and threat detection mechanisms that help identify anomalous activities indicative of privilege escalation attempts. For more information, please contact your local F5 sales team.

Conclusion
Privilege escalation is a critical cyberattack tactic that allows attackers to move from limited access to elevated permissions, often as administrator or root, on compromised systems. This expanded control lets attackers disable security measures, steal sensitive data, persist in the environment, and launch more damaging attacks. Preventing and detecting privilege escalation requires layered defenses, vigilant access management, and regular security monitoring to minimize risk and respond quickly to unauthorized privilege gains.

Reference Links:
MITRE ATT&CK®
Privilege Escalation, Tactic TA0004 - Enterprise | MITRE ATT&CK®
MITRE ATT&CK: What It Is, How it Works, Who Uses It and Why | F5 Labs

Illegal Metacharacter in Parameter Name in Json Data
Dears, can someone tell me what the issue is here? BIG-IP is reporting the illegal metacharacter "#" in a parameter name, but the highlighted part of the violation doesn't contain the metacharacter "#" in the first place, and the "parameter" that BIG-IP displayed in the highlighted part is actually not a parameter. I believe the issue is with BIG-IP only. Any suggestions here, please? I think the issue is that BIG-IP is not parsing the JSON payload properly.

Agentic AI with F5 BIG-IP v21 using Model Context Protocol and OpenShift
Introduction to Agentic AI
Agentic AI is the capability of extending Large Language Models (LLMs) by adding tools. This allows the LLM to interoperate with functionality external to the model, for example the capability to search for a flight or to push code to GitHub. Agentic AI operates proactively, minimising human intervention, making decisions, and adapting to perform complex tasks by using tools, data, and the Internet. This is done by giving the LLM knowledge of the APIs of GitHub or the flight agency; the reasoning of the LLM then makes use of these APIs. The functionality external to the LLM can run on the local computer or on network MCP servers. This article focuses on network MCP servers, which fit into the F5 AI Reference Architecture components at the insertion point indicated in green in the figure shown next:

Introduction to Model Context Protocol
Model Context Protocol (MCP) is a universal connector between LLMs and tools. Without MCP, the LLM must be programmed to support the different APIs of the different tools. This is not a scalable model because it requires a lot of effort to add all tools for a given LLM and for a tool to support several LLMs. Instead, when using MCP, the LLM (or AI application) and the tool only need to support MCP. Without further coding, the LLM is automatically able to use any tool that exposes its functionality through MCP. This is shown in the following figure:

MCP example workflow
The next diagram shows the basic MCP workflow, using the LibreChat AI application as an example. The flow is as follows:
1. The AI application queries agents (MCP servers) for the tools they provide.
2. The agents return a list of the tools, with a description and the parameters required.
3. When the AI application makes a request to the AI model, it includes information about the available tools in the request.
4. When the AI model finds that it does not have built in what is required to fulfil the request, it makes use of the tools.
5. The tools are accessed through the AI application.
6. The AI model composes a result from its local knowledge and the results from the tools.

Of the workflow above, the most interesting part is step 1, which retrieves the information the AI model needs in order to use the tools. Using the mcpLogger iRule provided later in this article, we can see the MCP messages exchanged.

Step 1a:

    {
      "method": "tools/list",
      "jsonrpc": "2.0",
      "id": 2
    }

Step 1b:

    {
      "jsonrpc": "2.0",
      "id": 2,
      "result": {
        "tools": [
          {
            "name": "airport_search",
            "description": "Search for airport codes by name or city.\n\nArgs:\n query: The search term (city name, airport name, or partial code)\n\nReturns:\n List of matching airports with their codes",
            "inputSchema": {
              "properties": { "query": { "type": "string" } },
              "required": [ "query" ],
              "type": "object"
            },
            "outputSchema": {
              "properties": { "result": { "type": "string" } },
              "required": [ "result" ],
              "type": "object",
              "x-fastmcp-wrap-result": 1
            },
            "_meta": { "_fastmcp": { "tags": [] } }
          }
        ]
      }
    }

Note from the above that the AI model only requires a description of the tool in human language and a formal declaration of the input and output parameters. That's all! The reasoning of the AI model is what makes good use of the API described through MCP. The AI model will even interpret error messages: for example, if it misinterprets the input parameters (typically because of a poorly written tool descriptor), the AI model may correct itself if the error message is descriptive enough and call the tool again with the right parameters.
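For readers who want to reproduce this exchange by hand, the tools/list request above can be sent directly to a Streamable HTTP MCP endpoint as a plain JSON-RPC POST. The sketch below is illustrative only: the URL is a placeholder, and the Accept and session headers reflect common Streamable HTTP client behaviour rather than anything specific to the BIG-IP setup in this article.

```python
import requests

MCP_URL = "https://mcp.example.com/mcp"  # placeholder endpoint, not from this article

payload = {"jsonrpc": "2.0", "id": 2, "method": "tools/list"}
headers = {
    # Streamable HTTP servers may answer with plain JSON or an SSE stream.
    "Accept": "application/json, text/event-stream",
    # If the server previously returned an Mcp-Session-Id, echo it back, e.g.:
    # "Mcp-Session-Id": session_id,
}

resp = requests.post(MCP_URL, json=payload, headers=headers, timeout=10)
print(resp.status_code)
print(resp.text)  # the tools/list result, either a JSON body or SSE-framed events
```

Watching this same request and response in the BIG-IP logs (via the mcpLogger iRule described below) is a convenient way to confirm that the JSON and SSE profiles are handling the traffic as expected.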
Of course, the MCP protocol is more than this, but the above is necessary to understand the basics of how tools are used by LLMs and how the magic works.

F5 BIG-IP and MCP
BIG-IP v21 introduces support for MCP, which is based on JSON-RPC. The MCP protocol has gone through several iterations. For IP-based communication, the transport of the JSON-RPC messages initially used the HTTP+SSE transport (now considered legacy), but this has been completely replaced by the Streamable HTTP transport. The latter still uses SSE when streaming multiple server messages. Regardless of the MCP version, on the F5 BIG-IP you only need to enable the JSON and SSE profiles in the Virtual Server handling MCP. This is shown next:
By enabling these profiles we automatically get basic protocol validation but, more relevantly, we gain the ability to handle MCP messages with JSON- and SSE-oriented events and functions. This allows parsing and manipulation of MCP messages as well as traffic management (load balancing, rate limiting, and so on). The parameters available for these profiles, which allow limiting the size of the various parts of the messages, are shown next; the defaults are fine for most cases:
Check the following links for information on the iRule events and commands available for the JSON and SSE protocols.

MCP and persistence
Session persistence is optional in MCP, but when the server indicates an Mcp-Session-Id it is mandatory for the client. MCP servers require persistence when they keep context (state) for the MCP dialog. This means the F5 BIG-IP must support handling this Mcp-Session-Id as well, and it does so by using Universal (UIE) persistence on this header. A sample iRule, mcpPersistence, is provided in the GitHub repository.

Demo and GitHub repository
The video below demonstrates three functionalities built on the BIG-IP MCP capabilities:
Using MCP persistence.
Getting visibility into MCP traffic by remotely logging the JSON-RPC payloads of request and response messages using High Speed Logging.
Controlling which tools are allowed or blocked, and logging the allow/block actions with High Speed Logging.
These functionalities are implemented with iRules available in this GitHub repository and deployed in Red Hat OpenShift using the Container Ingress Services (CIS) controller, which automates the deployment of the configuration using Kubernetes resources. The overall setup is shown next:
In the embedded video below we can see how this is deployed and used.

Conclusion and next steps
F5 BIG-IP v21 introduces support for the MCP protocol, and thanks to F5 CIS these setups can be automated in your OpenShift cluster using the Kubernetes API. The possibilities of Agentic AI are endless: thanks to MCP, it is possible to easily extend LLM models to use any tool, whether to query information or to execute actions. I suggest taking a look at the following repositories of MCP servers to appreciate the possibilities of Agentic AI:
https://mcpservers.org/
https://www.pulsemcp.com/servers
https://mcpmarket.com/server
https://mcp.so/
https://github.com/punkpeye/awesome-mcp-servers
KASM Workspaces Integration with F5 BIG-IP Access Policy Manager (APM)
Introduction
F5 BIG-IP Access Policy Manager (APM) is a key asset for securing containerized platforms like Kasm Workspaces. In this article I'll show you how to secure your Kasm Workspace using F5 BIG-IP APM. APM is a key component of the F5 Application Delivery and Security Platform (ADSP); it covers both application delivery and security and is a key component of Zero Trust.

Kasm Workspaces
Kasm Workspaces is a containerized streaming platform designed for secure, web-based access to desktops, applications, and web browsing. It leverages container technology to deliver virtualized environments directly to users' browsers, enhancing security, scalability, and performance. Commonly used for remote work, cybersecurity, and DevOps workflows, Kasm Workspaces provides a flexible and customizable solution for organizations needing secure and efficient access to virtual resources. As noted in the Kasm Documentation, the Kasm Workspaces Web App Role servers should not be exposed directly to the public. That's where F5 BIG-IP APM can help.

Demo Video

Deployment Prerequisites
F5 BIG-IP version 17.x
Access version 10.x
Kasm Workspaces version 1.17 installed and configured properly

Configure using Automation Toolchain with AS3 and FAST Templates
The F5 BIG-IP Automation Toolchain is a suite of tools designed to automate the deployment, configuration, and management of F5 BIG-IP devices. It enables efficient and consistent management using declarative APIs, templates, and integrations with popular automation frameworks. Application Services Templates (FAST) are predefined configurations that streamline the deployment and management of applications by providing consistent and repeatable setups.
NOTE: The configuration using the Automation Toolchain is well documented in this DevCentral article, which also includes demo videos: How I did it - "Delivering Kasm Workspaces three ways"

Configure Manually Using a Virtual Server
This article focuses on the manual configuration of the BIG-IP using a Virtual Server. Configuring it this way will give you a deeper understanding of how all the components work together to create a cohesive solution.

Network Environment
Linux "External" client IP: 10.1.10.4
BIG-IP "External" Self IP: 10.1.10.10
BIG-IP "Internal" Self IP: 10.1.20.10
Kasm Workspace IP: 10.1.20.23

BIG-IP Configuration

Create HTTP Monitor: First, let's create the HTTP Monitor for the Kasm Workspace server.
From Local Traffic > Monitors, click the green plus sign to add a new one.
Give it a name, "Kasm-Monitor" in this example.
Set the Type to HTTP.
Enter the following for the Send String: GET /api/__healthcheck\r\n
Enter the following for the Receive String: OK
It should look like this:
Set Reverse to Yes and click Finished.

Create Pool: Next we'll create the Pool.
From Local Traffic > Pools > Pool List, click the plus sign to add a new one.
Give it a name, "Kasm-Pool" in this example.
Select the Health Monitor you created previously and click the arrows to move it to Active.
Under Resources, specify a Node Name, "Kasm-Server" in this example.
Specify the IP Address, "10.1.20.23" in this example.
Set the Service Port to 443, then click Add.
Click Finished.
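Before relying on the monitor, it can be worth confirming from the BIG-IP's network that the Kasm health endpoint answers as expected. The quick sketch below reuses the Kasm server IP and the /api/__healthcheck path from this walkthrough; disabling certificate verification is an assumption for a lab with a self-signed certificate and should not be carried into production.

```python
import requests
import urllib3

urllib3.disable_warnings()  # lab only: the Kasm server may present a self-signed certificate

# Kasm Workspace server from the Network Environment section above.
resp = requests.get("https://10.1.20.23/api/__healthcheck", verify=False, timeout=5)

print("Status:", resp.status_code)
print("Body:", resp.text.strip())
print("Receive String 'OK' present:", "OK" in resp.text)
```

If this check fails, fix the Kasm service or routing before troubleshooting the BIG-IP monitor itself.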
Create Virtual Server: Next we'll create the Virtual Server.
From Local Traffic > Virtual Servers > Virtual Server List, click the plus sign to add a new one.
Give it a Name, "vs_kasm" in this example. Keep the Type as Standard.
Set the Destination to the IP Address you want the BIG-IP to listen on for connections to the Kasm server, "10.1.10.100" in this example.
Set the Service Port to HTTPS, port 443.
Click Finished at the bottom.
Click on the Virtual Server you just created.
Click Resources.
Set the Default Pool to kasm_pool, then click Update.
The Kasm Virtual Server Status should eventually change to Green when the Health Monitor is successful.
NOTE: The Virtual Server configuration in this example has been simplified for demonstration purposes. Additional configuration options will be covered later in this article.

Kasm Workspaces Configuration
The Kasm Workspace will need a Zone configured with the default settings. Log in as Admin and check this from Infrastructure > Zones.
You will need at least one Workspace. In this example, I have a Workspace with Chrome, Firefox, Terminal, and Ubuntu Jammy.
Click the WORKSPACES tab at the top of the screen to see what the Workspace looks like. Your view should look like this:

Test Kasm Workspaces
Log in as a User.
NOTE: The IP Address used to connect to Kasm Workspaces through the BIG-IP is the Virtual Server listening IP Address, 10.1.10.100.
When the Workspace loads, click Firefox.
Choose the option to Launch Session in a new Tab.
After a moment, Firefox will load. Here you can see the F5.com website displayed.
NOTE: The browser pop-up blocker can prevent the Kasm Workspace applications from successfully launching. You can disable the pop-up blocker or create an exception for the BIG-IP Virtual IP (10.1.10.100).

Enable SSL Decryption
Enabling SSL decryption allows you to fully inspect the requests and payloads passing through the BIG-IP.
From Local Traffic > Virtual Servers, click Virtual Server List.
Then click the name of your Virtual Server, "vs_kasm" in this example.
In the Configuration section, set the Protocol Profile (Client) to http.
Set the SSL Profile (Client) to clientssl.
Set the SSL Profile (Server) to serverssl.
NOTE: If you have created your own Client and Server SSL Profiles, you should add them here. The instructions above are for demonstration purposes only.
Scroll to the bottom and click Update. You're done!

Conclusion
F5 BIG-IP Access Policy Manager (APM) is a key asset for securing containerized platforms like Kasm Workspaces. In this article, you learned how to secure your Kasm Workspace using F5 BIG-IP APM.

Related Content
How I did it - "Delivering Kasm Workspaces three ways"
Download Kasm Workspaces
Kasm Documentation
Delivering Secure Application Services Anywhere with Nutanix Flow and F5 Distributed Cloud
Introduction
F5 Application Delivery and Security Platform (ADSP) is the premier solution for converging high-performance delivery and security for every app and API across any environment. It provides a unified platform offering granular visibility, streamlined operations, and AI-driven insights, deployable anywhere and in any form factor. The F5 ADSP Partner Ecosystem brings together a broad range of partners to deliver customer value across the entire lifecycle. This includes cohesive solutions, cloud synergies, and access to expert services that help customers maximize outcomes while simplifying operations. In this article, we'll explore the upcoming integration between Nutanix Flow and F5 Distributed Cloud, showcasing how F5 and Nutanix collaborate to deliver secure, resilient application services across hybrid and multi-cloud environments.

Integration Overview
At the heart of this integration is the capability to deploy an F5 Distributed Cloud Customer Edge (CE) inside a Nutanix Flow VPC, establish BGP peering with the Nutanix Flow BGP Gateway, and inject CE-advertised BGP routes into the VPC routing table. This architecture gives us complete control over application delivery and security within the VPC. We can selectively advertise HTTP load balancers (LBs) or VIPs to designated VPCs, ensuring secure and efficient connectivity. Additionally, the integration simplifies secure network segmentation across hybrid and multi-cloud environments. By leveraging F5 Distributed Cloud to segment and extend the network to remote locations, combined with Nutanix Flow Security for microsegmentation within VPCs, we deliver comprehensive end-to-end network security. This approach enforces a consistent security posture while simplifying segmentation across environments. In this article, we'll focus on application delivery and security, and explore segmentation in the next article.

Demo Walkthrough
Let's walk through a demo to see how this integration works. The goal of this demo is to enable secure application delivery for nutanix5.f5-demo.com within the Nutanix Flow Virtual Private Cloud (VPC) named dev3.
Our demo environment, dev3, is a Nutanix Flow VPC with an F5 Distributed Cloud Customer Edge (CE) named jy-nutanix-overlay-dev3 deployed inside.
*Note: The CE is named jy-nutanix-overlay-dev3 in the F5 Distributed Cloud Console and xc-ce-dev3 in Nutanix Prism Central.
eBGP peering is ESTABLISHED between the CE and the Nutanix Flow BGP Gateway.
On the F5 Distributed Cloud Console, we created an HTTP Load Balancer named jy-nutanix-internal-5 serving the FQDN nutanix5.f5-demo.com. This load balancer distributes workloads across hybrid multicloud environments and is protected by a WAF policy named nutanix-demo.
We advertised this HTTP Load Balancer with a Virtual IP (VIP) of 10.10.111.175 to the CE jy-nutanix-overlay-dev3 deployed inside Nutanix Flow VPC dev3.
The CE then advertised the VIP route to its BGP peer, the Nutanix Flow BGP Gateway.
The Nutanix Flow BGP Gateway received the VIP route and installed it in the VPC routing table.
Finally, the VMs in dev3 can securely access nutanix5.f5-demo.com while continuing to use the VPC logical router as their default gateway.
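A quick way to confirm the final step from inside the VPC is to have a VM in dev3 resolve the FQDN and request the application, verifying that traffic is answered via the advertised VIP route. The sketch below assumes the VM's DNS resolves nutanix5.f5-demo.com to the advertised VIP (10.10.111.175 in this demo) and that the load balancer presents a certificate the VM trusts.

```python
import socket
import requests

fqdn = "nutanix5.f5-demo.com"

# Should resolve to the VIP advertised into the VPC (10.10.111.175 in this demo).
print("Resolved:", socket.gethostbyname(fqdn))

# The request egresses via the VPC logical router and is answered by the CE-advertised VIP.
resp = requests.get(f"https://{fqdn}", timeout=5)
print("HTTP status:", resp.status_code)
```

A 200 response here, combined with the request appearing in the load balancer's dashboards described next, confirms the end-to-end path.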
F5 Distributed Cloud Console observability provides deep visibility into applications and security events. For example, it offers comprehensive dashboards and metrics to monitor the performance and health of applications served through HTTP load balancers. These include detailed insights into traffic patterns, latency, HTTP error rates, and the status of backend services.
Furthermore, the built-in AI assistant provides real-time visibility and actionable guidance on security incidents, improving situational awareness and supporting informed decision-making. This capability enables rapid threat detection and response, helping maintain a strong and resilient security posture.

Conclusion
This integration demonstrates how F5 Distributed Cloud and Nutanix Flow collaborate to deliver secure, resilient application services across hybrid and multi-cloud environments. Together, F5 and Nutanix enable organizations to scale with confidence, optimize application performance, and maintain robust security, empowering businesses to achieve greater agility and resilience across any environment. This integration is coming soon in CY2026. If you're interested in early access, please contact your F5 representative.

Reference URLs
https://www.f5.com/products/distributed-cloud-services
https://www.nutanix.com/products/flow/networking
Introducing the New F5 Bot Defense Self-Service UI
For more information about Bot Defense Advanced features, see Bot Defense Overview.

Step 1: Sign Up for Bot Defense Advanced
A customer's F5 account team will help get their Bot Defense infrastructure and policies configured to protect their applications.
NOTE: User permissions must include one or more of the following roles. If a user does not have any of these roles, they should contact their Bot Defense administrator or TAM:
f5xc-bot-defense-admin role
f5xc-bot-defense-user role
f5xc-bot-defense-monitor role
f5xc-bot-defense-report role

Step 2: Decide What You Want to Protect
Users should then decide which endpoints they want to protect with Bot Defense. For information about what to consider when configuring web and mobile endpoints, see the following:
Protect Web-Based Endpoints
Protect Mobile Endpoints

Step 3: Configure Your Bot Defense Infrastructure
Important: If F5 Operations has already configured a user's Bot Defense infrastructure, they can skip this step.
Users can now configure and manage their Bot Defense infrastructures from the F5 Distributed Cloud Console. A Bot Defense deployment can consist of multiple Test and Production infrastructures (subscription limits determine how many can be added and managed). To configure a Bot Defense infrastructure, configure the following settings:
Traffic type
Infrastructure type
Region
Access control list
For detailed instructions, see Configure the Bot Defense Infrastructure.

Step 4: Configure Your Bot Policies
Bot Defense Advanced provides three system policies that allow users to control system configuration settings:
Bot Endpoint Policy
Bot Allowlist Policy
Bot Network Policy
The F5 Operations team performs an analysis of a customer's endpoints and creates the initial version of each policy. Users can then deploy the policies in the Bot Defense Test infrastructure provided by F5. If preferred, users can also work with their F5 Operations team to manage their policies.
Important: F5 strongly recommends that users deploy and thoroughly test policy updates in the Test infrastructure provided by F5 before deploying to the Production infrastructure.

Step 5: Test Your Configuration
Deploy your policies in the Test infrastructure provided to you by F5 to test your Bot Defense deployment and help ensure that Bot Defense policies are properly configured, that JavaScript tags are injected in your application pages correctly, and that you have correctly integrated the mobile SDK.

Step 6: Deploy Policies in Your Production Environment
Important: F5 strongly recommends that you deploy and thoroughly test policy updates in the Test infrastructure provided to you by F5 before you deploy in your Production infrastructure.
After you verify in your Test infrastructure that Bot Defense is configured correctly and correctly identifies automated traffic, you can deploy your policies yourself or work with your F5 Operations team to deploy your policies in your Production infrastructure.

Step 7: Enable Bot Defense on an HTTP Load Balancer
To configure Bot Defense on an HTTP load balancer, users must complete the following tasks on each HTTP load balancer where they want to enable Bot Defense:
Enable the Bot Defense workspace on one or more HTTP load balancers.
Configure how Bot Defense will inject JavaScript tags in the HTTP pages of the application.
If protecting mobile endpoints, enable and configure the F5 Distributed Cloud Mobile SDK.
For detailed instructions, see Configure Bot Defense on an HTTP Load Balancer.
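When validating Steps 5 and 7, one simple smoke test is to fetch a protected page through the load balancer and confirm that a Bot Defense JavaScript tag is being injected. The snippet below is a generic illustration only: the URL is a placeholder, and the search pattern is a hypothetical stand-in for whatever script path your deployment actually injects (check your page source or your F5 Operations team for the real value); it is not an official F5 identifier.

```python
import re
import requests

PAGE_URL = "https://app.example.com/login"  # placeholder: a protected page behind the load balancer
# Hypothetical pattern: adjust to match the script tag your deployment injects.
TAG_PATTERN = r"<script[^>]+src=[\"'][^\"']*bot[^\"']*\.js"

html = requests.get(PAGE_URL, timeout=10).text
match = re.search(TAG_PATTERN, html, flags=re.IGNORECASE)

print("Bot Defense script tag found:", bool(match))
if match:
    print("Matched:", match.group(0))
```

Pairing this with the Test infrastructure's traffic dashboards confirms both that the tag is present and that telemetry is flowing before you promote the policies to Production.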
Step 8: Deploy Bot Detection Rules
Important: Bot detection rule self-service management is a limited availability feature. Contact your F5 account team for information.
F5 supplies customers with a set of initial bot detection rules. Most rules are turned off, with a subset of rules turned on by default. Users are recommended to monitor their traffic for approximately two weeks to observe how the rules that are turned on affect their traffic. After this time, users can use the Distributed Cloud Console to turn rules on and off to change how Bot Defense handles traffic.
Important: F5 recommends that you deploy each rule in a Test infrastructure before you deploy in your Production infrastructure.
For information about bot detection rules, see Bot Detection Rules Overview.

Bot Defense Advanced Self-Service Policy Management Demo:

Related Resources:
Deploy Bot Defense on any Edge with F5 Distributed Cloud (SaaS Console, Automation)
Protecting Your Web Applications Against Critical OWASP Automated Threats
Making Mobile SDK Integration Ridiculously Easy with F5 XC Mobile SDK Integrator
JavaScript Supply Chains, Magecart, and F5 XC Client-Side Defense (Demo)
Bots, Fraud, and the OWASP Automated Threats Project (Overview)
Protecting Your Native Mobile Apps with F5 XC Mobile App Shield
Enabling F5 Distributed Cloud Client-Side Defense in BIG-IP 17.1
Bot Defense for Mobile Apps in XC WAAP Part 1: The Bot Defense Mobile SDK
F5 Distributed Cloud WAAP
Distributed Cloud Services Overview
Enable and Configure Bot Defense - F5 Distributed Cloud Service
F5 Distributed Cloud (XC) Custom Routes: Capabilities, Limitations, and Key Design Considerations
This article explores how Custom Routes work in F5 Distributed Cloud (XC), why they differ architecturally from standard Load Balancer routes, and what to watch out for in real-world deployments, covering backend abstraction, Endpoint/Cluster dependencies, and critical TLS trust and Root CA requirements.