Overview of MITRE ATT&CK Tactic: TA0008 - Lateral Movement
This article focuses on the Lateral Movement tactic, and the techniques adversaries use to move across the network by remotely accessing and controlling additional systems. Understanding this tactic is crucial because it shows how a small initial compromise can rapidly escalate into a large-scale intrusion.

Agentic AI with F5 BIG-IP v21 using Model Context Protocol and OpenShift
Introduction to Agentic AI

Agentic AI extends Large Language Models (LLMs) by adding tools, allowing the LLM to interoperate with functionality external to the model. Examples include the capability to search for a flight or to push code to GitHub. Agentic AI operates proactively, minimising human intervention, making decisions, and adapting to perform complex tasks by using tools, data, and the Internet. In essence, the LLM is given knowledge of the APIs of GitHub or the flight agency, and the reasoning of the LLM then makes use of those APIs. The functionality external to the LLM can run on the local computer or on network MCP servers. This article focuses on network MCP servers, which fit into the F5 AI Reference Architecture components at the insertion point indicated in green in the figure shown next:

Introduction to Model Context Protocol

Model Context Protocol (MCP) is a universal connector between LLMs and tools. Without MCP, the LLM must be programmed to support the different APIs of the different tools. This model does not scale, because it requires a lot of effort to add every tool to a given LLM and for a tool to support several LLMs. Instead, when using MCP, the LLM (or AI application) and the tool only need to support MCP. Without further coding, the LLM can automatically use any tool that exposes its functionality through MCP. This is shown in the following figure:

MCP example workflow

The next diagram shows the basic MCP workflow, using the LibreChat AI application as an example. The flow is as follows:

1. The AI application queries the agents (MCP servers) for the tools they provide, and the agents return a list of the tools, each with a description and the parameters required.
2. When the AI application makes a request to the AI model, it includes information about the available tools in the request.
3. When the AI model finds it doesn't have built in what is required to fulfil the request, it makes use of the tools. The tools are accessed through the AI application.
4. The AI model composes a result from its local knowledge and the results from the tools.

The most interesting part of the workflow above is step 1, which retrieves the information the AI model needs in order to use the tools. Using the mcpLogger iRule provided later in this article, we can see the MCP messages exchanged.

Step 1a:

{
  "method": "tools/list",
  "jsonrpc": "2.0",
  "id": 2
}

Step 1b:

{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    "tools": [
      {
        "name": "airport_search",
        "description": "Search for airport codes by name or city.\n\nArgs:\n query: The search term (city name, airport name, or partial code)\n\nReturns:\n List of matching airports with their codes",
        "inputSchema": {
          "properties": { "query": { "type": "string" } },
          "required": [ "query" ],
          "type": "object"
        },
        "outputSchema": {
          "properties": { "result": { "type": "string" } },
          "required": [ "result" ],
          "type": "object",
          "x-fastmcp-wrap-result": 1
        },
        "_meta": { "_fastmcp": { "tags": [] } }
      }
    ]
  }
}

Note from the above that the AI model only requires a description of the tool in human language and a formal declaration of the input and output parameters. That's all! The reasoning of the AI model is what makes good use of the API described through MCP. The AI model will even interpret the error messages. For example, if the AI model misinterprets the input parameters (typically because of a poor description of the tool), it can correct itself if the error message is descriptive enough and call the tool again with the right parameters.
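The tool list above was produced by a server built with FastMCP (visible in the x-fastmcp metadata). As a rough illustration of the server side, here is a minimal sketch of how a tool like airport_search could be declared. The server name, airport data, and transport settings are assumptions for this example and are not the actual implementation behind the demo:

# Minimal FastMCP sketch (illustrative only, not the demo's actual server).
# The docstring and type hints of the decorated function are what FastMCP
# turns into the "description" and "inputSchema" returned by tools/list.
from fastmcp import FastMCP

mcp = FastMCP("flight-tools")  # server name is an assumption

# Tiny in-memory stand-in for a real airport database (assumption).
AIRPORTS = {
    "MAD": "Madrid-Barajas",
    "BCN": "Barcelona-El Prat",
    "SEA": "Seattle-Tacoma International",
}

@mcp.tool()
def airport_search(query: str) -> str:
    """Search for airport codes by name or city."""
    q = query.lower()
    matches = [f"{code}: {name}" for code, name in AIRPORTS.items()
               if q in name.lower() or q == code.lower()]
    return "\n".join(matches) if matches else "No matching airports found."

if __name__ == "__main__":
    # Expose the server over Streamable HTTP so it is reachable on the network,
    # e.g. behind a BIG-IP virtual server. Host/port are assumptions, and the
    # transport name and arguments may vary between FastMCP versions.
    mcp.run(transport="http", host="0.0.0.0", port=8000)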
Of course, the MCP protocol is more than this, but the above is enough to understand the basics of how tools are used by an LLM and how the magic works.

F5 BIG-IP and MCP

BIG-IP v21 introduces support for MCP, which is based on JSON-RPC. The MCP protocol has gone through several iterations. For IP-based communication, the JSON-RPC messages were initially carried over HTTP+SSE transport (now considered legacy), which has been completely replaced by Streamable HTTP transport. The latter still uses SSE when streaming multiple server messages. Regardless of the MCP version, handling MCP on the F5 BIG-IP only requires enabling the JSON and SSE profiles on the Virtual Server. This is shown next:

By enabling these profiles we automatically get basic protocol validation and, more relevantly, the ability to handle MCP messages with JSON- and SSE-oriented events and functions. This allows parsing and manipulation of MCP messages as well as traffic management (load balancing, rate limiting, and so on). The next screenshots show the parameters available for these profiles, which limit the size of the various parts of the messages. The defaults are fine for most cases:

Check the following links for information on the iRules events and commands available for the JSON and SSE protocols.

MCP and persistence

Session persistence is optional in MCP, but when the server returns an Mcp-Session-Id it becomes mandatory for the client. MCP servers require persistence when they keep context (state) for the MCP dialog. This means the F5 BIG-IP must handle this Mcp-Session-Id as well, and it does so by using UIE (Universal) persistence on this header. A sample iRule, mcpPersistence, is provided in the GitHub repository; a simplified sketch of the same idea is shown in the appendix at the end of this article.

Demo and GitHub repository

The video below demonstrates three functionalities built on the BIG-IP MCP support:

- Using MCP persistence.
- Gaining visibility of MCP traffic by remotely logging the JSON-RPC payloads of the request and response messages using High Speed Logging.
- Controlling which tools are allowed or blocked, and logging the allow/block actions with High Speed Logging.

These functionalities are implemented with iRules available in this GitHub repository and deployed in Red Hat OpenShift using the Container Ingress Services (CIS) controller, which automates the deployment of the configuration using Kubernetes resources. The overall setup is shown next:

The embedded video below shows how this is deployed and used.

Conclusion and next steps

F5 BIG-IP v21 introduces support for the MCP protocol, and thanks to F5 CIS these setups can be automated in your OpenShift cluster using the Kubernetes API. The possibilities of Agentic AI are endless: thanks to MCP, LLM models can easily be extended to use any tool, whether to query data or execute actions. I suggest taking a look at the repositories of MCP servers below to realize the endless possibilities of Agentic AI:

https://mcpservers.org/
https://www.pulsemcp.com/servers
https://mcpmarket.com/server
https://mcp.so/
https://github.com/punkpeye/awesome-mcp-servers
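Appendix: as mentioned in the "MCP and persistence" section above, the following is a minimal sketch of the Mcp-Session-Id persistence idea. It is not the mcpPersistence iRule from the repository, only an illustration of the concept; a Universal (UIE) persistence profile still needs to be attached to the virtual server, and timeouts and error handling are omitted.

when HTTP_REQUEST {
    # If the client already presents an MCP session id, stick the request
    # to the pool member that owns that session.
    if { [HTTP::header exists "Mcp-Session-Id"] } {
        persist uie [HTTP::header "Mcp-Session-Id"]
    }
}

when HTTP_RESPONSE {
    # Learn the session id assigned by the MCP server so that subsequent
    # requests carrying it are persisted to the same member.
    if { [HTTP::header exists "Mcp-Session-Id"] } {
        persist add uie [HTTP::header "Mcp-Session-Id"]
    }
}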
Delivering Secure Application Services Anywhere with Nutanix Flow and F5 Distributed Cloud
Introduction

F5 Application Delivery and Security Platform (ADSP) is the premier solution for converging high-performance delivery and security for every app and API across any environment. It provides a unified platform offering granular visibility, streamlined operations, and AI-driven insights, deployable anywhere and in any form factor. The F5 ADSP Partner Ecosystem brings together a broad range of partners to deliver customer value across the entire lifecycle. This includes cohesive solutions, cloud synergies, and access to expert services that help customers maximize outcomes while simplifying operations.

In this article, we'll explore the upcoming integration between Nutanix Flow and F5 Distributed Cloud, showcasing how F5 and Nutanix collaborate to deliver secure, resilient application services across hybrid and multi-cloud environments.

Integration Overview

At the heart of this integration is the capability to deploy an F5 Distributed Cloud Customer Edge (CE) inside a Nutanix Flow VPC, establish BGP peering with the Nutanix Flow BGP Gateway, and inject CE-advertised BGP routes into the VPC routing table. This architecture gives us complete control over application delivery and security within the VPC. We can selectively advertise HTTP load balancers (LBs) or VIPs to designated VPCs, ensuring secure and efficient connectivity.

Additionally, the integration simplifies secure network segmentation across hybrid and multi-cloud environments. By leveraging F5 Distributed Cloud to segment and extend the network to remote locations, combined with Nutanix Flow Security for microsegmentation within VPCs, we deliver comprehensive end-to-end network security. This approach enforces a consistent security posture while simplifying segmentation across environments. In this article, we'll focus on application delivery and security, and explore segmentation in the next article.

Demo Walkthrough

Let's walk through a demo to see how this integration works. The goal of this demo is to enable secure application delivery for nutanix5.f5-demo.com within the Nutanix Flow Virtual Private Cloud (VPC) named dev3.

Our demo environment, dev3, is a Nutanix Flow VPC with an F5 Distributed Cloud Customer Edge (CE) named jy-nutanix-overlay-dev3 deployed inside:

*Note: The CE is named jy-nutanix-overlay-dev3 in the F5 Distributed Cloud Console and xc-ce-dev3 in Nutanix Prism Central.

eBGP peering is ESTABLISHED between the CE and the Nutanix Flow BGP Gateway:

On the F5 Distributed Cloud Console, we created an HTTP Load Balancer named jy-nutanix-internal-5 serving the FQDN nutanix5.f5-demo.com. This load balancer distributes workloads across hybrid multicloud environments and is protected by a WAF policy named nutanix-demo:

We advertised this HTTP Load Balancer with a Virtual IP (VIP) 10.10.111.175 to the CE jy-nutanix-overlay-dev3 deployed inside Nutanix Flow VPC dev3:

The CE then advertised the VIP route via BGP to its peer, the Nutanix Flow BGP Gateway:

The Nutanix Flow BGP Gateway received the VIP route and installed it in the VPC routing table:

Finally, the VMs in dev3 can securely access nutanix5.f5-demo.com while continuing to use the VPC logical router as their default gateway:

F5 Distributed Cloud Console observability provides deep visibility into applications and security events. For example, it offers comprehensive dashboards and metrics to monitor the performance and health of applications served through HTTP load balancers.
These include detailed insights into traffic patterns, latency, HTTP error rates, and the status of backend services:

Furthermore, the built-in AI assistant provides real-time visibility and actionable guidance on security incidents, improving situational awareness and supporting informed decision-making. This capability enables rapid threat detection and response, helping maintain a strong and resilient security posture:

Conclusion

The integration demonstrates how F5 Distributed Cloud and Nutanix Flow collaborate to deliver secure, resilient application services across hybrid and multi-cloud environments. Together, F5 and Nutanix enable organizations to scale with confidence, optimize application performance, and maintain robust security, empowering businesses to achieve greater agility and resilience across any environment. This integration is coming soon in CY2026. If you're interested in early access, please contact your F5 representative.

Reference URLs

https://www.f5.com/products/distributed-cloud-services
https://www.nutanix.com/products/flow/networking
Overview of MITRE ATT&CK Tactic - TA0011 Command and Control
Introduction

In modern-day cyber intrusions, Command and Control is one of the main sets of techniques with which attackers gain control over systems within a victim's network. Once control over a system is gained, the attackers can steal sensitive data, move laterally, and blend into normal activity. Command and Control (MITRE ATT&CK Tactic TA0011) represents another critical stage of the adversary lifecycle, where the adversaries focus on communicating with the systems under their control. There are multiple ways to achieve this, either by mimicking the expected traffic flow to avoid detection or by mimicking normal behavior of the compromised system. To defend against this tactic, it is important for defenders to understand how communication is established to any system in the network and the various levels of stealth possible depending on the network structure. This article walks through the most common Command and Control techniques, and how F5 solutions provide strong defense against them.

T1071 - Application Layer Protocol
To communicate with the systems under their control, adversaries blend in with existing application layer protocol traffic to avoid detection and network filtering. The results of these commands are embedded within the protocol traffic between the client and the server.

T1071.001 - Web Protocols
Adversaries mimic normal, expected HTTP/HTTPS traffic that carries web data to communicate with the systems under their control within a victim network.

T1071.002 - File Transfer Protocols
Protocols used to implement this technique include SMB, FTP, FTPS, and TFTP. The malicious data is concealed within the fields and headers of the packets produced by these protocols.

T1071.003 - Mail Protocols
Protocols carrying electronic mail, such as SMTP/S, POP3/S, and IMAP, are utilized by concealing the data within the email messages themselves.

T1071.004 - DNS
The DNS protocol serves an administrative function in computer networking, and DNS traffic may be allowed even before network authentication. Data is concealed in the fields and headers of these packets.

T1071.005 - Publish/Subscribe Protocols
Publish/subscribe protocols such as MQTT, XMPP, AMQP, and STOMP distribute messages through a centralized broker, which adversaries can abuse to carry command-and-control traffic.

T1092 - Communication Through Removable Media
On disconnected networks, command and control between compromised hosts can be performed using removable media to carry commands from system to system. For successful execution, both systems need to be compromised, with the removable media spread through Lateral Movement.

T1659 - Content Injection
Adversaries may also gain control over a victim's system by injecting malicious content, after first gaining access to compromised data-transfer channels where traffic can be manipulated or content can be injected.

T1132 – Data Encoding
Another technique to control the system is encoding the information using a standard data encoding system. Encoding includes the use of ASCII, Unicode, Base64, MIME, or other binary-to-text encoding systems.

T1132.001 - Standard Encoding
Encoding schemes used for standard encoding include ASCII, Unicode, hexadecimal, Base64, and MIME. Data compression, such as gzip, is also an example of standard encoding.

T1132.002 - Non-Standard Encoding
Non-standard encoding schemes, such as modified Base64 in the message body of an HTTP request, are used to deviate from standard encoding implementations.
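As a simple illustration of T1132.001 above (not taken from any real malware), standard Base64 encoding turns arbitrary data into an unremarkable text blob that blends easily into HTTP bodies or other protocol fields. The sample payload below is made up for demonstration:

import base64

# Illustration of T1132.001 (Standard Encoding). The payload is a made-up
# example of the kind of host information C2 traffic might carry.
payload = b"hostname=WS-042;user=jdoe;domain=corp.local"

# Encoding: what an implant would do before transmission.
encoded = base64.b64encode(payload).decode("ascii")
print("encoded:", encoded)

# Decoding: what a defender would do when inspecting a suspicious blob.
decoded = base64.b64decode(encoded).decode("ascii")
print("decoded:", decoded)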
T1001 – Data Obfuscation
Command-and-control communication is obfuscated as part of this technique, making it even more difficult to discover or decipher. The focus is to make the communication less conspicuous by incorporating several methods, which make up the sub-techniques below:

T1001.001 - Junk Data
Adversaries may abuse protocols by adding random, meaningless junk data, which can defeat trivial methods for decoding or deciphering the traffic.

T1001.002 - Steganography
Steganographic sub-techniques are used to transfer digital data hidden inside carrier messages between systems, such as images or document files.

T1001.003 - Protocol or Service Impersonation
Adversaries can impersonate legitimate protocols or web services so that command-and-control traffic blends in with legitimate network traffic.

T1568 – Dynamic Resolution
To establish connections to the command-and-control infrastructure dynamically and prevent detection, adversaries use malware that shares a common algorithm with the infrastructure to dynamically adjust parameters such as the domain name, IP address, or port number.

T1568.001 - Fast Flux DNS
Fast Flux DNS is used to hide a command-and-control channel behind an array of rapidly changing IP addresses linked to a single domain resolution.

T1568.002 - Domain Generation Algorithm
Rather than relying on a list of static IP addresses or domains, adversaries may utilize Domain Generation Algorithms to dynamically identify a destination domain for command-and-control traffic.

T1568.003 - DNS Calculations
Instead of using a predetermined port number or the literal IP address returned, adversaries perform calculations on the addresses returned in DNS results to dynamically determine which port and IP address to use.

T1573 – Encrypted Channel
Adversaries rely on an encryption algorithm to conceal command-and-control traffic rather than depending on any inherent protections provided by the communication protocol.

T1573.001 - Symmetric Cryptography
Symmetric encryption algorithms, such as AES, DES, 3DES, Blowfish, and RC4, use a shared key for plaintext encryption and ciphertext decryption.

T1573.002 - Asymmetric Cryptography
Asymmetric cryptography, or public key cryptography, uses a keypair per party: one public and one private. The sender encrypts the data with the receiver's public key, and the receiver decrypts the data with their private key.

T1008 – Fallback Channels
If the primary channel is compromised or inaccessible, adversaries use fallback communication channels in order to maintain reliable command and control.

T1665 – Hide Infrastructure
To hide the command-and-control infrastructure and evade detection, adversaries identify and filter traffic from defensive tools, mask malicious domains to obscure the true destination, and otherwise hide malicious content to delay discovery and prolong the effectiveness of the adversary infrastructure.

T1105 – Ingress Tool Transfer
Tools or other files are transferred from an external, adversary-controlled source into the compromised environment through controlled channels or protocols such as FTP. Adversaries may also spread tools across the compromised environment as part of Lateral Movement.

T1104 – Multi-Stage Channels
To make detection more difficult, adversaries create multiple command-and-control stages used for different functions and under different conditions.
T1095 – Non-Application Layer Protocol
To communicate between the host and the command-and-control server, adversaries use non-application layer protocols, such as ICMP (Internet Control Message Protocol), UDP (User Datagram Protocol), SOCKS (Socket Secure), or SOL (Serial over LAN).

T1571 – Non-Standard Port
Adversaries communicate using port pairings that are not normally associated with the protocol, for example HTTPS over port 8088 or port 587 instead of the traditional port 443.

T1572 – Protocol Tunneling
Another approach to avoid detection and network filtering is to explicitly encapsulate a protocol, such as SMB or RDP, within another protocol to enable routing of network packets that would otherwise not reach their intended destination.

T1090 – Proxy
A proxy, such as HTRAN, ZXProxy, or ZXPortMap, acts as an intermediary between systems, directing network communications to a command-and-control server. This avoids direct connections to the adversary's infrastructure, obscures the actual communication paths to avoid suspicion, and helps manage command-and-control communications inside a compromised environment.

T1090.001 - Internal Proxy
Internal proxies are primarily used to conceal the actual destination while reducing the number of connections to external systems, for example by using peer-to-peer (P2P) networking protocols.

T1090.002 - External Proxy
External proxies are used to mask the true destination of the traffic with port redirectors. Purchased infrastructure such as Virtual Private Servers, or compromised systems outside the victim's network, is generally used for these purposes.

T1090.003 - Multi-Hop Proxy
Multiple proxies can also be chained together to obscure the actual direction of the traffic, making it more difficult for defenders to trace malicious activity and identify its source.

T1090.004 - Domain Fronting
Adversaries can even misuse the routing schemes of Content Delivery Networks (CDNs) to disguise the actual destination of HTTPS traffic or traffic tunneled through HTTPS.

T1219 – Remote Access Tools
To access the target system remotely and establish interactive command and control within the network, remote access tools are used to bridge a session between two trusted hosts through a graphical interface, a CLI, or hardware-level access such as KVM (Keyboard, Video, Mouse) over IP solutions.

T1219.001 - IDE Tunneling
IDE tunneling combines SSH, port forwarding, and file sharing, letting developers work as if they were local by encapsulating the entire session and tunneling protocols alongside SSH, which allows attackers to blend in with the actual development workflow.

T1219.002 - Remote Desktop Software
Adversaries may access target systems interactively through desktop support software, which provides a graphical interface to the remote adversary. VNC, TeamViewer, AnyDesk, and LogMeIn are commonly abused legitimate support tools.

T1219.003 - Remote Access Hardware
Adversaries may access systems through commonly used legitimate remote access hardware, including IP-based keyboard, video, and mouse (KVM) devices such as TinyPilot and PiKVM.

T1205 – Traffic Signaling
Traffic signaling is used to hide open ports or any other malicious functionality to prolong command and control over the compromised system.

T1205.001 - Port Knocking
To hide open ports for persistence, port knocking is used: to enable the port, the adversary sends a series of attempted connections to a predefined sequence of closed ports.
T1205.002 - Socket Filters
Socket filters allow or disallow certain types of data through the socket. If packets received by the network interface match the filtering criteria, the desired actions are triggered.

T1102 – Web Service
Adversaries use an existing, legitimate external web service to transfer data to and from the compromised system. Web service providers also commonly use SSL/TLS encryption, which gives adversaries an additional level of protection.

T1102.001 - Dead Drop Resolver
Adversaries post content, called a dead drop resolver, on web services with encoded domains. These resolvers redirect the victims to the infected domains or IP addresses.

T1102.002 - Bidirectional Communication
Once the system is infected, it can send output back over the web service channel.

T1102.003 - One-Way Communication
In some cases, compromised systems may not return any output at all; adversaries send only one-way instructions and do not expect any response.

How F5 Can Help

F5 security solutions provide multiple functionalities to secure and protect applications and APIs across various platforms, including cloud, edge, on-premises, and hybrid. F5 supports the risk management solutions mentioned below to effectively mitigate and protect against command-and-control techniques:

Web Application Firewall (WAF): Supported by all F5 deployment modes, the WAF is an adaptable, multi-layered security solution that defends web applications against a broad spectrum of threats, regardless of where they are deployed.

API Security: F5 eases the security of APIs with F5 Web Application and API Protection (WAAP) solutions, which protect API endpoints and other API dependencies by restricting API usage with specified rules and schemas.

Rate-Limiting & Bot Protection: Brute-force, credential stuffing, and session attacks can be mitigated with configurable thresholds and automated bot protection.

For more information, please contact your local F5 sales team.

Conclusion

Command and Control (C2) encompasses the methods adversaries employ to communicate with compromised systems within a target network. Adversaries disguise their C2 traffic as legitimate network activity to evade detection. To defend against command-and-control techniques, defenders should implement robust segmentation and egress filtering, use Web Application Firewalls (WAF) to limit communication channels, regularly monitor traffic for anomalous patterns, and leverage threat intelligence to identify C2 indicators. Additionally, employing endpoint detection and response (EDR) alongside API security solutions can help detect and block malicious C2 activity at the host level.

Reference links

MITRE | ATT&CK Tactic 09 – Command and Control
MITRE ATT&CK: What It Is, how it Works, Who Uses It and Why | F5 Labs
MITRE ATT&CK®

Getting Started with the Certified F5 NGINX Gateway Fabric Operator on Red Hat OpenShift
As enterprises modernize their Kubernetes strategies, the shift from standard Ingress Controllers to the Kubernetes Gateway API is redefining how we manage traffic. For years, the F5 NGINX Ingress Controller has been a foundational component in OpenShift environments. With the certification of F5 NGINX Gateway Fabric (NGF) 2.2 for Red Hat OpenShift, that legacy enters its next chapter.

This new certified operator brings the high-performance NGINX data plane into the standardized, role-oriented Gateway API model, with full integration into OpenShift Operator Lifecycle Manager (OLM). Whether you're a platform engineer managing cluster ingress or a developer routing traffic to microservices, NGF on OpenShift 4.19+ delivers a unified, secure, and fully supported traffic fabric. In this guide, we walk through installing the operator, configuring the NginxGatewayFabric resource, and addressing OpenShift-specific networking patterns such as NodePort + Route.

Why NGINX Gateway Fabric on OpenShift?

While Red Hat OpenShift 4.19+ includes native support for the Gateway API (v1.2.1), integrating NGF adds critical enterprise capabilities:

✔ Certified & OpenShift-Ready
The operator is fully validated by Red Hat, ensuring UBI-compliant images and compatibility with OpenShift's strict Security Context Constraints (SCCs).

✔ High Performance, Low Complexity
NGF delivers the core benefits long associated with NGINX: efficiency, simplicity, and predictable performance.

✔ Advanced Traffic Capabilities
Capabilities like Regular Expression path matching and support for ExternalName services allow for complex, hybrid-cloud traffic patterns.

✔ AI/ML Readiness
NGF 2.2 supports the Gateway API Inference Extension, enabling inference-aware routing for GenAI and LLM workloads on platforms like Red Hat OpenShift AI.

Prerequisites

Before we begin, ensure you have:

- Cluster Administrator access to an OpenShift cluster (version 4.19 or later is recommended for Gateway API GA support).
- Access to the OpenShift Console and the oc CLI.
- Ability to pull images from ghcr.io or your internal mirror.

Step 1: Installing the Operator from OperatorHub

We leverage the Operator Lifecycle Manager (OLM) for a "point-and-click" installation that handles lifecycle management and upgrades.

1. Log into the OpenShift Web Console as an administrator.
2. Navigate to Operators > OperatorHub.
3. Search for NGINX Gateway Fabric in the search box.
4. Select the NGINX Gateway Fabric Operator card and click Install.
5. Accept the default installation mode (All namespaces) or select a specific namespace (e.g. nginx-gateway), and click Install.
6. Wait until the status shows Succeeded.

Once installed, the operator will manage the NGF lifecycle automatically.

Step 2: Configuring the NginxGatewayFabric Resource

Unlike the Ingress Controller, which used NginxIngressController resources, NGF uses the NginxGatewayFabric Custom Resource (CR) to configure the control plane and data plane.

1. In the Console, go to Installed Operators > NGINX Gateway Fabric Operator.
2. Click the NginxGatewayFabric tab and select Create NginxGatewayFabric.
3. Select YAML view to configure the deployment specifics.

Step 3: Exposing the NGF Data Plane Service

NGF uses a Kubernetes Service to expose its data plane. Before the data plane launches, we must tell the Controller how to expose it.

Option A - LoadBalancer (ROSA, ARO, Managed OpenShift)

By default, the NGINX Gateway Fabric Operator configures the service type as LoadBalancer.
On public cloud managed OpenShift services (like ROSA on AWS or ARO on Azure), this native default works out of the box to provision a cloud load balancer. No additional steps are required.

Option B - NodePort with OpenShift Route (On-Prem/Hybrid)

However, for on-premise or bare-metal OpenShift clusters lacking a native LoadBalancer implementation, the common pattern is to use a NodePort service exposed via an OpenShift Route.

Update the NGF CR to use NodePort

In the Console, go to Installed Operators > NGINX Gateway Fabric Operator. Click the NginxGatewayFabric tab and select NginxGatewayFabric. Select YAML view to directly edit the configuration specifics. Change spec.nginx.service.type to NodePort:

apiVersion: gateway.nginx.org/v1alpha1
kind: NginxGatewayFabric
metadata:
  name: default
  namespace: nginx-gateway
spec:
  nginx:
    service:
      type: NodePort

Create the OpenShift Route

After applying the CR, create a Route to expose the NGINX Service.

oc create route edge ngf \
  --service=nginxgatewayfabric-sample-nginx-gateway-fabric \
  --port=http \
  -n nginx-gateway

Note: This creates an Edge TLS termination route. For passthrough TLS (allowing NGINX to handle certificates), use --passthrough and target the https port.

Step 4: Validating the Deployment

Verify that the operator has deployed the control plane pods successfully.

oc get pod -n nginx-gateway
NAME                                                              READY   STATUS    RESTARTS   AGE
nginx-gateway-fabric-controller-manager-dd6586597-bfdl5          1/1     Running   0          23m
nginxgatewayfabric-sample-nginx-gateway-fabric-564cc6df4d-hztm8  1/1     Running   0          18m

oc get gatewayclass
NAME    CONTROLLER                                   ACCEPTED   AGE
nginx   gateway.nginx.org/nginx-gateway-controller   True       4d1h

You should also see a GatewayClass named nginx. This indicates the controller is ready to manage Gateway resources.

Step 5: Functional Check with Gateway API

To test traffic, we will use the standard Gateway API resources (Gateway and HTTPRoute).

Deploy a Test Application (Cafe Service)

Ensure you have a backend service running. You can use a simple service for validation; a minimal example is sketched at the end of this step.

Create a Gateway

This resource opens the listener on the NGINX data plane.

apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: cafe
spec:
  gatewayClassName: nginx
  listeners:
  - name: http
    port: 80
    protocol: HTTP

Create an HTTPRoute

This binds the traffic to your backend service.

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: coffee
spec:
  parentRefs:
  - name: cafe
  hostnames:
  - "cafe.example.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: coffee
      port: 80

Test Connectivity

If you used Option B (Route), send a request to your OpenShift Route hostname. If you used Option A, send it to the LoadBalancer IP.
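As referenced above, here is a minimal sketch of a coffee backend the HTTPRoute can point to. The container image, port, and labels are assumptions for illustration only; any small HTTP service will do, as long as the Service is named coffee and exposes port 80 to match the backendRefs entry.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: coffee
spec:
  replicas: 1
  selector:
    matchLabels:
      app: coffee
  template:
    metadata:
      labels:
        app: coffee
    spec:
      containers:
      - name: coffee
        # Assumed demo image; it listens on 8080 and runs as non-root,
        # which keeps it compatible with OpenShift's restricted SCC.
        image: nginxdemos/nginx-hello:plain-text
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: coffee
spec:
  selector:
    app: coffee
  ports:
  - name: http
    port: 80          # matches the backendRefs port in the HTTPRoute above
    targetPort: 8080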
OpenShift 4.19 Compatibility

Meanwhile, it is vital to understand the "under the hood" constraints of OpenShift 4.19:

Gateway API Version Pinning: OpenShift 4.19 ships with Gateway API CRDs pinned to v1.2.1. While NGF 2.2 supports v1.3.0 features, it has been conformance-tested against v1.2.1 to ensure stability within OpenShift's version-locked environment.

oc get crd gateways.gateway.networking.k8s.io -o yaml | grep "gateway.networking.k8s.io/"
    gateway.networking.k8s.io/bundle-version: v1.2.1
    gateway.networking.k8s.io/channel: standard

However, looking ahead, future NGINX Gateway Fabric releases may rely on newer Gateway API specifications that are not natively supported by the pinned CRDs in OpenShift 4.19. If you anticipate running a newer NGF version that may not be compatible with the current OpenShift Gateway API version, please reach out to us to discuss your compatibility requirements.

Security Context Constraints (SCC): In previous manual deployments, you might have wrestled with NET_BIND_SERVICE capabilities or creating custom SCCs. The Certified Operator handles these permissions automatically, using UBI-based images that comply with Red Hat's security standards out of the box.

Next Steps: AI Inference

With NGF running, you are ready for advanced use cases:

AI Inference: Explore the Gateway API Inference Extension to route traffic to LLMs efficiently, optimizing GPU usage on Red Hat OpenShift AI.

The certified NGINX Gateway Fabric Operator removes much of the operational burden, letting you focus on what matters: delivering secure, high-performance applications and AI workloads.

References:

NGINX Gateway Fabric Operator on Red Hat Catalog
F5 NGINX Gateway Fabric Certified for Red Hat OpenShift
NGINX Gateway Fabric Installation Docs

Overview of MITRE ATT&CK Tactic : TA0009 - Collection
This article is a continuation of our MITRE ATT&CK series, focusing on the Collection tactic and the techniques adversaries use to gather, stage, and organize data from compromised systems before exfiltration. As attackers progress through an intrusion, Collection becomes critical for assembling sensitive files, credentials, screenshots, and other high‑value information that will fuel data theft, espionage, or destructive operations.

Overview of MITRE ATT&CK Tactic: TA0040 - Impact
This article focuses on the Impact tactic, and the techniques adversaries use to manipulate, disrupt, or damage systems and data as they reach the final stage of an attack. This is one of the critical tactics, as it highlights the adverse effects attackers can cause, including exploitation, operational disruption, data destruction, or financial gain.

Forwarding Logs to SIEM Tools via HTTP Proxy for F5 Distributed Cloud Global Log Receiver
Purpose

This guide provides a solution for forwarding logs to SIEM tools that support syslog but lack HTTP/HTTPS ingestion capabilities. It covers the deployment and tuning of an HTTP Proxy log receiver configured to work with F5 Distributed Cloud (XC) Global Log Receiver settings.

Audience: This guide is intended for technical professionals, including SecOps teams and Solution Architects, who are responsible for integrating SIEM tools with F5 XC Global Log Receiver. Readers should have a solid understanding of HTTP communication (methods, request body, reverse proxy), syslog, and data center network architecture. Familiarity with F5 XC concepts such as namespaces, log types, events, and XC-GLR is also required.

Introduction:

Problem Statement: SIEM tools often support syslog ingestion but lack HTTP/HTTPS log reception capabilities.

Objective: Explain how to deploy and configure an HTTP Proxy to forward logs to F5 Distributed Cloud Global Log Receiver.

Solution Overview:

Architecture Diagram and workflow:

Configuration Steps:

Configure Global Log Receiver in F5 Distributed Cloud Console

Navigate to: Home → Shared Configuration → Global Log Receiver. Create or edit the Global Log Receiver settings for the HTTP receiver. Ensure the Global Log Receiver batch size is aligned with the payload size F5 NGINX is tuned to accept.

Example configuration snap:

Set Up NGINX as an HTTP Log Receiver

Install NGINX on your designated server. Configure the log_format, and configure NGINX to accept HTTP POST requests only and forward access logs to syslog.

Example configuration snippet:

log_format custom_log_format_1 escape=json $request_body;  # Example: include request body only

server {
    listen 443 ssl;
    server_name <logreceiver_server_name>;

    ssl_certificate     /etc/ssl/<logreceiver_server_cert>;
    ssl_certificate_key /etc/ssl/<logreceiver_server_key>;

    # Other SSL/TLS configurations (e.g., protocols, ciphers)
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;

    client_body_in_single_buffer on;  # Recommended when using the $request_body variable, to save the number of copy operations involved
    client_body_in_file_only off;     # default
    client_max_body_size 32M;         # based on tuning
    gzip on;

    location /log_endpoint {
        # Allow only POST requests for sending log data
        limit_except POST {
            deny all;
        }

        # Configure access_log to write incoming data to a file
        # access_log /var/log/nginx/log_receiver.log custom_log_format_1;
        access_log syslog:server=127.0.0.1:514,facility=local7,tag=nginx,severity=info custom_log_format_1;

        proxy_pass http://localhost:8091/;  # This dummy internal server is required so the request_body variable gets collected.
    }
}

# dummy internal server to respond back 200 ok
server {
    listen 8091;
    server_name localhost;

    location / {
        return 200 "Log received successfully.";
    }
}

Set Up rsyslog server

Install/configure rsyslog on your designated server. Configure the 60-nginx.conf file in the /etc/rsyslog.d/ directory.

Sample 60-nginx.conf file:

#nginx.* @@127.0.0.1:514
:syslogtag, isequal, "[nginx]" /var/log/nginx-syslog/nginx-access-log.log

Once these pieces are in place, a quick end-to-end check is sketched at the end of this article.

References:

F5 Distributed Cloud Global Log Receiver supports many log receivers natively: F5 Distributed Cloud Technical Knowledge page on "Configure Global Log receiver"

Prerequisites:

An external log collection system reachable publicly. The following IP address ranges are required to be added to your firewall's allowlist:

193.16.236.64/29
185.160.8.152/29
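As mentioned in the configuration steps above, once the NGINX proxy and rsyslog are in place, the local half of the chain can be smoke-tested before pointing the Global Log Receiver at it. The hostname and payload below are placeholders, and this is only an informal check, not an official XC validation procedure:

# Post a small JSON sample to the proxy endpoint (placeholder hostname;
# -k skips certificate verification in case a self-signed cert is used).
curl -k -X POST "https://<logreceiver_server_name>/log_endpoint" \
  -H "Content-Type: application/json" \
  -d '{"msg":"glr-smoke-test"}'

# Then confirm the request body was forwarded through syslog into the file
# configured in 60-nginx.conf:
tail -n 5 /var/log/nginx-syslog/nginx-access-log.log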
Using AWS CloudHSM with F5 BIG-IP

With the release of TMOS version 17.5.1, BIG-IP now supports the latest AWS CloudHSM hardware security module (HSM) type, hsm2m.medium, and the latest AWS CloudHSM Client SDK, version 5. This article explains how to install and configure AWS CloudHSM Client SDK 5 on BIG-IP 17.5.1.

Overview of MITRE ATT&CK Tactic - TA0010 Exfiltration
Introduction

In today's threat landscape, data theft is the ultimate objective with which attackers monetize their presence within a victim network. Once valuable information is identified and collected, the attackers can package sensitive data, bypass perimeter defences, and finalize the breach. Exfiltration (MITRE ATT&CK Tactic TA0010) represents a critical stage of the adversary lifecycle, where the adversaries focus on extracting data from the systems under their control. There are multiple ways to achieve this, either by using encryption and compression to avoid detection or by utilizing the command-and-control channel to blend in with normal network traffic. To prevent this data loss, it is important for defenders to understand how data is transferred from any system in the network and the various transmission limits imposed to maintain stealth. This article walks through the most common Exfiltration techniques and how F5 solutions provide strong defense against them.

T1020 - Automated Exfiltration
After gathering sensitive data during Collection, adversaries may use automated processing to exfiltrate it.

T1020.001 – Traffic Duplication
Traffic mirroring is a native feature of some network devices, intended for traffic analysis, which adversaries can abuse to automate data exfiltration.

T1030 – Data Transfer Size Limits
Data is exfiltrated in limited-size packets instead of whole files to avoid triggering network data transfer threshold alerts.

T1048 – Exfiltration Over Alternative Protocol
Data is stolen over a protocol or channel different from the command-and-control channel created by the adversary.

T1048.001 – Exfiltration Over Symmetric Encrypted Non-C2 Protocol
Symmetric encryption uses the same shared keys/secrets on both ends of the channel, which requires an exchange of the value used to encrypt and decrypt the data. Symmetric cryptographic algorithms, like RC4 and AES, are often baked into the protocols themselves, resulting in multiple layers of encryption.

T1048.002 – Exfiltration Over Asymmetric Encrypted Non-C2 Protocol
Asymmetric encryption algorithms, or public-key cryptography, require a pair of cryptographic keys, where data encrypted with one key can only be decrypted with the corresponding key on the other end of the channel.

T1048.003 – Exfiltration Over Unencrypted Non-C2 Protocol
Instead of encrypting, adversaries may obfuscate the data within network protocols using custom or publicly available encoding/compression algorithms (Base64, hex encoding) and embed the data in the traffic.

T1041 – Exfiltration Over C2 Channel
Adversaries can also steal the data over command-and-control channels, encoding the data into the normal communications.

T1011 – Exfiltration Over Other Network Medium
Exfiltration can occur over a network medium different from the command-and-control channel; for example, if command and control uses a wired Internet connection, exfiltration may occur over a WiFi connection, modem, cellular data connection, or Bluetooth.

T1011.001 – Exfiltration Over Bluetooth
Bluetooth can be used to exfiltrate the data instead of the command-and-control channel in cases where the command-and-control channel is a wired Internet connection.

T1052 – Exfiltration Over Physical Medium
In certain circumstances, such as an air-gapped network compromise, exfiltration occurs through a physical medium such as a removable drive. Examples of such media include external hard drives, USB drives, cellular phones, or MP3 players.
T1052.001 – Exfiltration Over USB
One such circumstance is where the adversary attempts to exfiltrate data over a USB-connected physical device, which can be used as the final exfiltration point or to hop between otherwise disconnected systems.

T1567 – Exfiltration Over Web Services
Adversaries may use a legitimate external web service to exfiltrate the data instead of their command-and-control channel.

T1567.001 – Exfiltration to Code Repository
Data is exfiltrated to a code repository rather than over the adversary's command-and-control channel. These code repositories are accessible via an API over HTTPS.

T1567.002 – Exfiltration to Cloud Storage
Data is exfiltrated to cloud storage rather than over the primary command-and-control channel. These cloud storage services allow storage, editing, and retrieval of the exfiltrated data.

T1567.003 – Exfiltration to Text Storage Sites
Data is exfiltrated to a text storage site rather than over the primary command-and-control channel. These text storage sites, like pastebin[.]com, are used by developers to share code.

T1567.004 – Exfiltration Over Webhook
Adversaries also exfiltrate data to a webhook endpoint, a simple mechanism that allows a server to push data over HTTP/S to a client. The creation of webhooks is supported by many public services, such as Discord and Slack, and can be used by other services, like GitHub, Jira, or Trello.

T1029 – Scheduled Transfer
Adversaries may schedule data exfiltration only at certain times of the day or at certain intervals, blending the traffic patterns with general activity.

T1537 – Transfer Data to Cloud Account
Exfiltration can also occur by sharing, syncing, or backing up data from a cloud environment to another cloud account on the same service that is under adversary control.

How F5 Can Help

F5 offers a comprehensive suite of security solutions designed to safeguard applications and APIs across diverse environments, including cloud, edge, on-premises, and hybrid platforms. These solutions enable robust risk management to effectively mitigate and protect against MITRE ATT&CK Exfiltration threats, delivering advanced functionalities such as:

Web Application Firewall (WAF): Available across all F5 products, the WAF is a flexible, multi-layered security solution that protects web applications from a wide range of threats. It delivers consistent defense, whether applications are deployed on-premises, in the cloud, or in hybrid environments.

HTTPS Encryption: F5 provides robust HTTPS encryption to secure sensitive data in transit, ensuring protected communication between users and applications by preventing unauthorized access or data interception.

Protecting sensitive data with Data Guard: F5's WAF Data Guard feature prevents sensitive data leakage by detecting and blocking exposure of confidential information, such as credit card numbers and PII. It uses predefined patterns and customizable policies to identify transmissions of sensitive data in application responses or inputs. This proactive mechanism secures applications against data theft and ensures compliance with regulatory standards.

For more information, please contact your local F5 sales team.

Conclusion

Adversaries' exfiltration of data often aims to steal sensitive information by packaging it to evade detection, using methods such as compression or encryption.
They may transfer the data through command-and-control channels or alternate paths while applying stealth techniques like transmission size limitations. To defend against these threats, F5 provides a layered approach with its advanced offerings. The Web Application Firewall (WAF) identifies and neutralizes malicious traffic aimed at exploiting application vulnerabilities. HTTPS encryption ensures secure data transmission, preventing unauthorized interception during the attack. Meanwhile, a Data Guard policy helps detect and block exposure of confidential information, such as credit card numbers and PII. Together, these F5 solutions effectively counteract data exfiltration attempts and safeguard critical assets.

Reference links

MITRE | ATT&CK Tactic 10 – Exfiltration
MITRE ATT&CK: What It Is, how it Works, Who Uses It and Why | F5 Labs
MITRE ATT&CK®