Introducing F5 Distributed Cloud Web App Scanning
F5 Distributed Cloud Web App Scanning is a powerful and proactive solution for discovering security vulnerabilities in web applications and APIs across distributed and dynamic environments. With its automation, scalability, detailed reporting, and seamless integration into the broader F5 security ecosystem, it's a valuable tool for safeguarding modern applications from vulnerabilities and ensuring compliance with regulatory standards.

F5 BIG-IQ What's New in v8.4.0?
Introduction

Effective management—orchestration, visibility, and compliance—relies on consistent app services and security policies across on-premises and cloud deployments. Easily control all your BIG-IP devices and services with a single, unified management platform, F5® BIG-IQ®.

Demo Video

Upgrading to BIG-IQ Version 8.4

Supported upgrade paths: You can upgrade to BIG-IQ 8.4.0 from any BIG-IQ 8.x.0 release.

New Features in BIG-IQ Version 8.4.0

BIG-IQ Support for AWS IMDSv2

AWS introduced a token-based Instance Metadata Service API (IMDSv2) that enhances security by requiring authentication for metadata access. Previously, BIG-IQ used the older IMDSv1, which does not require authentication and remained the default for launching instances. Without IMDSv2 support, instances that require this version could not be licensed, relicensed, or used for metadata-based features. For BIG-IQ, this limitation affected SSH key authentication and license activation, as its API calls to EC2 instances such as m5.xlarge failed due to the missing authentication-token implementation.

This release adds IMDSv2 support, which allows BIG-IQ to work properly in AWS environments that require IMDSv2. Instances can now be licensed, metadata-based features are functional, and SSH key authentication works as expected, ensuring full compatibility with AWS security standards.

BIG-IQ Support for BIG-IP 17.5.0

BIG-IQ provides full support for BIG-IP 17.5.0, ensuring seamless discovery and compatibility across all modules. Users who upgrade to BIG-IP 17.5.0 retain the same functionality without disruption, maintaining consistency in their management operations.

Interoperability Support for BIG-IP Access 17.5.0

BIG-IQ supports the creation, import, modification, and deployment of BIG-IP Access 17.5.0 configurations. This update ensures full interoperability between BIG-IQ and BIG-IP 17.5.0 for managing access policies.
Support for AS3 Compatibility with BIG-IQ 8.4.0

With this release, the AS3 schema is fully compatible with BIG-IQ 8.4.0, enabling seamless deployment of applications using Application Templates through the BIG-IQ user interface.

Venafi 22.x, 23.x, and 24.x Support for BIG-IQ

BIG-IQ now integrates with Venafi versions 22.x, 23.x, and 24.x, enabling centralized certificate lifecycle management for BIG-IP devices. This update introduces support for AES256 encryption, enhancing security beyond the existing OpenSSL algorithm. By automating certificate management, this integration eliminates the manual, time-consuming process of maintaining certificates across BIG-IP devices.

Supported BIG-IP Services

BIG-IP 17.5.0 support: BIG-IQ now includes support for the following services running on BIG-IP version 17.5.0:

- Access Policy Manager (APM)
- Advanced Firewall Manager (AFM)
- Application Delivery Controller (ADC)
- Web Application Security (ASM / WAF)
- Fraud Protection Service (FPS)
- Statistics and Monitoring

Application Services Extension 3 (AS3) support: BIG-IQ supports Application Services Extension 3 (AS3) version 3.53.0 and later.

Declarative Onboarding (DO) support: BIG-IQ supports Declarative Onboarding (DO) version 1.29 and later. All objects up to 17.5.0 are supported.

BIG-IP SSL Orchestrator (SSLO) support: BIG-IQ now supports SSLO RPM version 12.0. You can now discover, import, configure, and deploy configurations for managed BIG-IP devices running this RPM version. To learn more about features supported in this SSLO RPM version, refer to the F5 SSL Orchestrator Release Notes version 17.5.0-12.0.
F5OS Platform Management

Support to display VELOS device information: You can now see details such as Model Type, Serial Number, Platform Version, and Blade Configuration for the VELOS platform.

Support to export F5OS inventory details: You can now export F5OS platform or device inventory information to a .CSV file, regardless of status or assignment.

Support to delete remote backups: You can now delete backup files stored on the F5OS rSeries or VELOS platforms. Deleting a local F5OS backup file in BIG-IQ also deletes the corresponding partition backup files.

Support for IPv6 addresses for F5OS VELOS partitions: This release now supports IPv6 addresses for F5OS VELOS partitions.

Export F5OS backups to an external server: You can now store a copy of an F5OS backup remotely on an SCP or SFTP server.

BIG-IQ License Management

License pool properties enhancements: The License Pool UI was enhanced to include the following:

- You can now select the number of registration keys displayed per page under the Registration Keys section.
- You can now view the Service Check Date, Max Allowed Throughput Rate, Max Allowed VE Cores, and Permitted SW Version of the registration keys.

All licenses usage report: You can now generate a CSV report that includes all licenses from the selected group.

F5 Advanced Web Application Firewall (On-Box) Service as an SSL Orchestrator Service

BIG-IP SSL Orchestrator (SSLO) support: BIG-IQ 8.4.0 supports configuring and deploying Advanced WAF profiles within the SSL Orchestrator interface for all topologies. This update makes it easier to set up and manage Advanced WAF profiles: you can configure them directly within SSL Orchestrator and validate the service as a service-chain object. For this setup, Application Security Manager (ASM) and Advanced Web Application Firewall (WAF) profiles must be set up, licensed, and provisioned on BIG-IQ.
Security Policy enhancements: SSL Orchestrator Security Policy now includes the following enhancements when creating a new rule:

- A new drop-down list contains the "is" and "is not" operators to match or negate your specified condition.
- A new condition, "IP Protocol," lets you match SSL traffic based on Internet protocols such as TCP and UDP.
- With the new "Bypass (Client Hello)" setting in SSL Proxy Action, you can bypass traffic on certain conditions without triggering the TLS handshake. However, SSL conditions such as "Server Certificate (Issuer DN, SANs, Subject DN)" and "Category Lookup (All)" do not support this setting.
- In a custom security policy, you can now redirect traffic to a remote URL for the specified conditions (matches).

BIG-IQ Centralized Management Compatibility Matrix

Refer to Knowledge Article K34133507.

BIG-IQ Virtual Edition Supported Platforms

BIG-IQ Virtual Edition Supported Platforms provides a matrix describing the compatibility between BIG-IQ VE versions and the supported hypervisors and platforms.

Conclusion

Managing hundreds or thousands of apps across a hybrid, multicloud environment is complex. Your apps must be always available and secure, no matter where they're deployed, creating a need for a new kind of Application Delivery Controller (ADC)—one that provides holistic, unified visibility and management of apps, services, and infrastructure everywhere. F5® BIG-IQ® Centralized Management reduces complexity and administrative burden by providing a single platform to create, configure, provision, deploy, upgrade, and manage F5® BIG-IP® security and application delivery services.

Related Content

- BIG-IQ 8.4.0 Product Documentation
- Boosting BIG-IP AFM Efficiency with BIG-IQ: Technical Use Cases and Integration Guide
- Blog: Five Key Benefits of Centralized Management

A Guide to F5 Volumetric (Routed) DDoS Protection in F5 Distributed Cloud
Introduction

F5 Volumetric (Routed) DDoS Protection is a service in F5 Distributed Cloud (F5 XC) available for standard deployment and emergency use. F5 has over 100 engineers on its incident response team and dedicated 24/7 SOC analysts in three security operations centers around the world. This means F5 can help with the quick detection, mitigation, and resolution of Layer 3–4 routed DDoS attacks.

F5 Volumetric DDoS Protection stands out for several key reasons, especially for enterprises needing fully managed, hybrid, multicloud-based DDoS mitigation with human-led and AI-assisted support. Here are some of the ways Volumetric DDoS Protection with F5 stands out:

Fully Managed 24/7 Security Operations Center (SOC)
- F5's SOC continuously monitors traffic for DDoS attacks in real time.
- Unlike purely automated solutions, human analysts intervene to fine-tune attack mitigation.
- The SOC provides expert-led response to mitigate complex or evolving threats.

Hybrid Deployment Flexibility
- Cloud-based, always-on, or on-demand models for different use cases.
- Integrates with on-prem F5 BIG-IP solutions for a hybrid defense strategy.
- Helps reduce false positives by fine-tuning security policies.

Advanced Attack Detection & AI-Driven Mitigation
- Uses behavioral analytics to differentiate between legitimate traffic and attacks.
- Mitigates volumetric, application-layer, and multi-vector attacks.
- AI-assisted rules dynamically adapt to new attack patterns.

Large-Scale Scrubbing Capacity
- Global scrubbing centers prevent volumetric DDoS attacks from overwhelming networks.
- Reduces the risk of downtime by filtering malicious traffic before it reaches critical infrastructure.
- F5 blocks volumetric DDoS attacks by denying offending /24 prefixes (via BGP) the ability to route to the Distributed Cloud scrubbing centers (reference: DevCentral).

API-Driven and Customizable Security Policies
- Offers API integration for automated DDoS mitigation and security orchestration.
- Supports custom policies to protect specific applications from targeted attacks.

Enterprise-Grade Support & Compliance
- Designed for large enterprises, financial institutions, and high-security industries.
- Meets compliance standards such as PCI DSS, GDPR, and SOC 2.
- Backed by F5's global threat intelligence network.

Logging & Observability

Recently introduced is the capability to observe security events using external handlers via the Global Log Receiver (GLR) service. Organizations can use AWS S3 buckets, HTTP(S) servers, Datadog, Splunk, AWS CloudWatch, Azure Event Hubs and Blob Storage, Google Cloud Platform (GCP), Kafka Receiver, NewRelic, IBM QRadar, and SumoLogic to store Distributed Cloud events, and can then use any of these platforms to watch DDoS and other security events. If you're curious how Distributed Cloud events look in ELK (Elasticsearch, Logstash, and Kibana), including how to set it up, see this related article on DevCentral.

To configure Distributed Cloud to send events from Global Log Receiver, log in to the Distributed Cloud console and navigate to Shared Configuration > Manage > Global Log Receiver. Add a new item, and ensure the following:

- Log Type: Security Events
- Log Message Selection: Select logs from all namespaces

For this example, I use Distributed Cloud App Connect to securely deliver events to an instance of the ELK Stack running on AWS. To deliver the events locally with internal networking between Distributed Cloud and the ELK Stack, I use a Customer Edge (CE) appliance, also in AWS. Having the CE deployed locally provides a secure endpoint with only local routing in the AWS VPC.

➡️ See the following documentation for how to deploy a CE in AWS.

Next is to use App Connect with an HTTP Load Balancer. In this case, the origin pool is my ELK Stack receiver, and I've configured ELK to receive events over HTTP.
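Before pointing the Global Log Receiver at the receiver, it can help to sanity-check the pipeline with a synthetic event. Below is a minimal Python sketch that builds such a request. The event shape is a rough assumption (real Distributed Cloud events carry many more fields), and the host and port are placeholders for a lab setup like the one described here.

```python
import json
import urllib.request

# Illustrative only: a synthetic event shaped loosely like a Distributed
# Cloud routed-DDoS security event. Real GLR payloads carry many more fields.
SAMPLE_EVENT = {
    "sec_event_type": "routed_ddos_sec_event",
    "msg": "mitigation created",
    "mitigation_ongoing": True,
}

def build_request(host: str = "localhost", port: int = 8080) -> urllib.request.Request:
    """Build a JSON POST aimed at an HTTP log receiver (host/port are lab assumptions)."""
    return urllib.request.Request(
        f"http://{host}:{port}/",
        data=json.dumps(SAMPLE_EVENT).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# To actually send it against a live receiver:
#   urllib.request.urlopen(build_request("your-elk-host"))
```

If the pipeline is wired up, a synthetic event posted this way should surface in Kibana shortly afterward, confirming the receiver works before real GLR traffic arrives.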
Because I've configured the HTTP Load Balancer to be publicly available on the Internet to accept traffic from the Global Log Receiver, a Service Policy has been configured to restrict access to specific IP ranges. Although not shown, only traffic from the F5 Global Log Receiver designated IP ranges is allowed to access this load balancer.

➡️ See the following Allowlist reference documentation to learn which IP addresses to allow.

To receive and process events in ELK, I've configured the following for logstash (in /etc/logstash/conf.d/50-f5xc-logs.conf):

input {
  http {
    port => 8080
  }
}
filter {
  json {
    source => "message"
  }
}
output {
  elasticsearch {
    hosts => ["localhost"]
    index => "f5xc-logs-%{+YYYY.MM.dd}"
  }
}

In the ELK console, new messages are visible under Analytics > Discover. With messages arriving from GLR, many of the fields become searchable in the "message_parsed" hierarchy. Volumetric (Routed) DDoS events appear in the field "sec_event_type" with the value "routed_ddos_sec_event". The alert and mitigation messages may be classified and searched as follows:

New ongoing alert
- msg = "alert created"
- no "alert_ended_at" field present

New and already-completed alert
- msg = "alert created"
- "alert_ended_at" field present

Completed ongoing alert
- msg = "alert completed"
- "alert_started_at" field present
- "alert_ended_at" field present

New ongoing mitigation
- msg = "mitigation created"
- mitigation_ongoing = true
- no "mitigation_stop_time" field present

New and already-completed mitigation
- msg = "mitigation created and completed"
- mitigation_ongoing = false
- "mitigation_stop_time" field present

Completed mitigation
- msg = "mitigation completed"
- mitigation_ongoing = false
- "mitigation_stop_time" field present

Putting it all together in ELK, it's easy to visualize each routed_ddos_sec_event with a filtered dashboard.
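When processing these events outside of Kibana, the field combinations above map naturally to a small classifier. This is a hypothetical helper, not part of any F5 tooling; the field names follow the message classifications listed above.

```python
def classify_ddos_event(event: dict) -> str:
    """Classify a parsed routed-DDoS GLR event using the field rules above."""
    msg = event.get("msg")
    if msg == "alert created":
        # Presence of alert_ended_at distinguishes an already-completed alert.
        return ("new and already-completed alert"
                if "alert_ended_at" in event else "new ongoing alert")
    if msg == "alert completed":
        return "completed alert"
    if msg == "mitigation created":
        return "new ongoing mitigation"
    if msg == "mitigation created and completed":
        return "new and already-completed mitigation"
    if msg == "mitigation completed":
        return "completed mitigation"
    return "unclassified"

# Example: an event with msg "alert created" and no alert_ended_at field
# classifies as a new ongoing alert.
```

A helper like this could feed an external alerting system, counting only events still classified as ongoing.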
Using the pie visual below allows security admins to see what types of attacks have happened and whether any are still occurring. The dashboard visual can be added to other existing security dashboards in Kibana to provide a complete and robust overview of your security posture.

Demo

The following video further illustrates the capabilities of Volumetric (Routed) DDoS Protection in Distributed Cloud. In it, I walk through the different ways protection can be activated and what some of the mitigation events and alerts look like in the console.

🎥 YouTube: https://youtu.be/jYiqog_tz2I

Conclusion

F5 Volumetric (Routed) DDoS Protection combines integrated services (core protection, auto-mitigation, security-analyst-initiated mitigations, and advanced deep packet inspection and filtering) to provide the best protection available for Layer 3 and Layer 4 routed networking. Adding routed DDoS protection to networks is a simple onboarding process, and F5 also provides emergency DDoS mitigation for customers who are actively under attack. DDoS attacks can be observed not only in the Distributed Cloud console but also directly in your monitoring platform of choice when using Global Log Receiver.

Additional Resources

- 🎥 YouTube: Tour of Routed (Layer3 & Layer4) DDoS Protection in F5 Distributed Cloud
- How I did it - "Remote Logging with the F5 XC Global Log Receiver and Elastic"
- Deploy Secure Mesh Site v2 in AWS (ClickOps)
- Firewall and Proxy Server Allowlist Reference
- How To: Configure Global Log Receiver

Introducing Secure MCN features on F5 Distributed Cloud
Introduction

F5 Distributed Cloud Services offers many secure multi-cloud networking features. In the video linked below, I demonstrate how to connect a Secure Mesh Customer Edge (CE) site running on VMware on common hardware. This on-prem CE is joined to a site mesh group with three other CEs, two of which run on the public cloud providers AWS and Azure.

Secure Mesh CE is a newly enhanced feature in Distributed Cloud that allows CEs not running in public cloud providers to run on hardware with unique and different configurations. Specifically, it's now possible to deploy site mesh transit networking to all CEs having one, two, or more NICs, with each CE having its own unique physical networking configuration. See my article on Secure Mesh Site Networking to learn how to set up and configure secure mesh sites.

In addition to secure mesh networking, on-prem CEs can be deployed without app management features, giving organizations the flexibility to conserve deployed resources. Organizations can now choose whether to deploy AppStack CEs, where the CE can manage and run K8s compute workloads deployed at the site, or use networking-focused CEs, freeing up resources that would otherwise be used managing the apps. Whether deploying an AppStack or Secure Mesh CE, both types support Distributed Cloud's comprehensive set of security features, including DDoS, WAF, API protection, bot, and risk management.
Secure MCN deployments include the following capabilities:

- Secure Multi-Cloud Network Fabric (secure connectivity)
- Discover any app running anywhere across your environments
- Cloud/On-Prem Customer Edge (CE)
- Private link connectivity orchestration with F5 XC as-a-service using any transport provider
  ➡️ Example: AWS PrivateLink, Azure CloudLink, Private transport (IP, MPLS, etc.)
- L3 Network Connect & L7 App Connect capabilities
- L3/L4 DDoS + enhanced intent-based firewall policies
- Security service insertion with support for BIG-IP and Palo Alto firewalls
- Application security services: WAF, API Protection, L7 DoS, Bot Defense, Client-side Defense, and more
- SaaS and automation for Security, Network, & Edge Compute
- Powerful monitoring dashboards & troubleshooting tools for the entire secure multi-cloud network fabric
- Gain visibility into how and which APIs are being consumed in workflows
  ➡️ Monitor and troubleshoot apps, including their APIs

In the following video, I introduce the components that make up a Secure MCN deployment, then walk through configuring the security features and show how to observe app performance and remediate security-related incidents.
0-3:32 - Overview of Secure MCN features
3:32-9:20 - Product Demo

Resources

Distributed Cloud App Delivery Fabric Workflow Guide (GitHub)

Secure MCN Article Series
- Secure MCN Intro: Introducing Secure MCN features on F5 Distributed Cloud
- Secure MCN Part 1: Using Distributed Application Security Policies in Secure Multicloud Networking Customer Edge Sites
- Secure MCN Part 2: The App Delivery Fabric with Secure Multicloud Networking
- Secure MCN Part 3: The Secure Network Fabric with Multicloud Network Segmentation & Private Provider Network Connectivity

Related Technical Articles
- 🔥 ➡️ Combining the key aspects of Secure MCN with GenAI apps: Protect multi-cloud and Edge Generative AI applications with F5 Distributed Cloud
- Scale Your DMZ with F5 Distributed Cloud Services
- Driving Down Cost & Complexity: App Migration in the Cloud
- How To Secure Multi-Cloud Networking with Routing & Web Application and API Protection
- Secure Mesh Site Networking (DevCentral)
- A Complete Multi-Cloud Networking Walkthrough (DevCentral)

Product Documentation
- How-To Create Secure Mesh Sites

Product Information
- Distributed Cloud Network Connect
- Distributed Cloud App Connect

From Terra Incognita to API Incognita: What Captain Cook Teaches Us About API Discovery
When I was young, my parents often took me on UK seaside holidays, including a trip to Whitby, which was known for its kippers (smoked herring), jet (a semi-precious stone), and a then-vibrant fishing industry. Whitby was also where Captain Cook, the famous British explorer, learned seamanship. During the 1760s, Cook surveyed the coasts of Newfoundland and Labrador, creating precise charts of harbors, anchorages, and dangerous waters such as Trinity Bay and the Grand Banks. His work, crucial for fishing and navigation, demonstrated exceptional skill and established his reputation, leading to his Pacific expeditions. Cook's charts, featuring detailed observations and navigational hazards, were so accurate they remained in use well into the 20th century.

This made me think about how similar cartography and API discovery are. API discovery is about mapping and finding unknown and undocumented APIs. When you know which APIs you're actually putting out there, you're much less likely to run into problems that could sink your application rollout like a ship that runs aground, or leave you dead in the water like a vessel that's lost both mast and rudder in stormy seas.

This inspired me to demonstrate F5's process for finding cloud SaaS APIs using a simple REST API, themed in honor of Captain Cook and his Newfoundland voyage. Roughly speaking, this is my architecture: a simple REST API running in AWS, which I call the "Cook API". An overview of the API interface is as follows.
"title": "Captain Cook's Newfoundland Mapping API", "description": "Imaginary Rest API for testing API Discovery", "available_endpoints": [ {"path": "/api/charts", "methods": ["GET", "POST"], "description": "Access and create mapping charts"}, {"path": "/api/charts/<chart_id>", "methods": ["GET"], "description": "Get details of a specific chart"}, {"path": "/api/hazards", "methods": ["GET"], "description": "Get current navigation hazards"}, {"path": "/api/journal", "methods": ["GET"], "description": "Access Captain Cook's expedition journal"}, {"path": "/api/vessels", "methods": ["GET"], "description": "Get information about expedition vessels"}, {"path": "/api/vessels/<vessel_id>", "methods": ["GET"], "description": "Get status of a specific vessel"}, {"path": "/api/resources", "methods": ["GET", "POST"], "description": "Manage expedition resources"} ], I set up a container running on a server in AWS that hosts my REST API. To test my API, I create a partial swagger file that represents only a subset of the APIs that the container is advertising. openapi: 3.0.0 info: title: Captain Cook's Newfoundland Mapping API (Simplified) description: > A simplified version of the API. This documentation shows only the Charts and Vessels endpoints. 
version: 1.0.0 contact: name: API Support email: support@example.com license: name: MIT url: https://opensource.org/licenses/MIT servers: - url: http://localhost:1234 description: Development server - url: http://your-instance-ip description: Instance tags: - name: Charts description: Mapping charts created by Captain Cook - name: Vessels description: Information about expedition vessels paths: /: get: summary: API welcome page and endpoint documentation description: Provides an overview of the available endpoints and API capabilities responses: '200': description: Successful operation content: application/json: schema: type: object properties: title: type: string example: "Captain Cook's Newfoundland Mapping API" description: type: string available_endpoints: type: array items: type: object properties: path: type: string methods: type: array items: type: string description: type: string /api/charts: get: summary: Get all mapping charts description: Retrieves a list of all mapping charts created during the Newfoundland expedition tags: - Charts responses: '200': description: Successful operation content: application/json: schema: type: object properties: status: type: string example: "success" data: type: object additionalProperties: $ref: '#/components/schemas/Chart' post: summary: Create a new chart description: Adds a new mapping chart to the collection tags: - Charts requestBody: required: true content: application/json: schema: $ref: '#/components/schemas/ChartInput' responses: '201': description: Chart created successfully content: application/json: schema: type: object properties: status: type: string example: "success" message: type: string example: "Chart added" id: type: string example: "6" '400': description: Invalid input content: application/json: schema: $ref: '#/components/schemas/Error' /api/charts/{chartId}: get: summary: Get a specific chart description: Retrieves a specific mapping chart by its ID tags: - Charts parameters: - name: chartId in: 
path required: true description: ID of the chart to retrieve schema: type: string responses: '200': description: Successful operation content: application/json: schema: type: object properties: status: type: string example: "success" data: $ref: '#/components/schemas/Chart' '404': description: Chart not found content: application/json: schema: $ref: '#/components/schemas/Error' /api/vessels: get: summary: Get all expedition vessels description: Retrieves information about all vessels involved in the expedition tags: - Vessels responses: '200': description: Successful operation content: application/json: schema: type: object properties: status: type: string example: "success" data: type: array items: $ref: '#/components/schemas/VesselBasic' /api/vessels/{vesselId}: get: summary: Get a specific vessel description: Retrieves detailed information about a specific vessel by its ID tags: - Vessels parameters: - name: vesselId in: path required: true description: ID of the vessel to retrieve schema: type: string responses: '200': description: Successful operation content: application/json: schema: type: object properties: status: type: string example: "success" data: $ref: '#/components/schemas/VesselDetailed' '404': description: Vessel not found content: application/json: schema: $ref: '#/components/schemas/Error' components: schemas: Chart: type: object properties: name: type: string example: "Trinity Bay" completed: type: boolean example: true date: type: string format: date example: "1763-06-14" landmarks: type: integer example: 12 risk_level: type: string enum: [Low, Medium, High] example: "Medium" ChartInput: type: object required: - name properties: name: type: string example: "St. 
Mary Bay" completed: type: boolean example: false date: type: string format: date example: "1767-05-20" landmarks: type: integer example: 7 risk_level: type: string enum: [Low, Medium, High, Unknown] example: "Medium" VesselBasic: type: object properties: id: type: string example: "HMS_Grenville" type: type: string example: "Survey Sloop" crew: type: integer example: 18 status: type: string enum: [Active, In-port, Damaged, Repairs] example: "Active" VesselDetailed: allOf: - $ref: '#/components/schemas/VesselBasic' - type: object properties: current_position: type: string example: "LAT: 48.2342°N, LONG: -53.4891°W" heading: type: string example: "Northeast" weather_conditions: type: string enum: [Favorable, Challenging, Dangerous] example: "Favorable" Error: type: object properties: status: type: string example: "error" message: type: string example: "Resource not found" The process to set up API discovery in F5 Distributed Cloud is very simple. I create an origin pool that points to my upstream REST API. I then create a load balancer in distributed cloud and o Associate the origin pool with the load balancer o Enable API definition and import my partial swagger file as my API inventory. Some Screenshots below. o Enable API Discovery Select Enable from Redirect Traffic Run a shell script that tests my API. Take a break or do something else for the API Discovery capabilities to populate the dashboard. The process of API Discovery to show up in the XC Security Dashboard can take several hours. Results Well, as predicted, API discovery has found that my Swagger file is only representing a subset of my APIs. API discovery has found an additional 4 APIs that were not included in the swagger file. Distributed cloud describes these as “Shadow” APIs, or APIs that you may not have known about. API Discovery has also discovered that sensitive data is being returned by couple of the APIs What Now? 
If this were a real-world situation, you would review what was found, paying special attention to APIs that may be returning sensitive data. Each of these "shadow" APIs could pose a security risk, so you should review every one of them. The good thing is that we are now using Distributed Cloud, and we can use it to protect our APIs. It is very easy to allow only those APIs that your project team is actually using. For the APIs that you are exposing through the platform, you can:

- Implement JWT authentication if none exists and authentication is required.
- Configure rate limiting.
- Add a WAF policy.
- Implement a bot protection policy.
- Continually log and monitor your API traffic.

You should also update your API inventory to include the entirety of the APIs that the application provides, and you should only expose the APIs that are being used. All of these things are simple to set up in F5 Distributed Cloud.

Conclusion

You need effective API management and discovery. Detecting "shadow" APIs is crucial to preventing sensitive data exposure. Much like Captain Cook charting unknown territories, the process of uncovering APIs previously hidden in the system demands precision and vigilance. Cook's expeditions needed detailed maps and careful navigation to avoid hidden dangers; modern API management needs tools that can accurately map and monitor every endpoint. By embracing this meticulous approach, we can not only safeguard sensitive data but also steer our digital operations toward a more secure and efficient future.

To quote a pirate who happens to be an API security expert and likes a haiku:

Know yer API seas,
Map each endpoint 'fore ye sail—
Blind waters sink ships.

Post-Quantum Cryptography: Building Resilience Against Tomorrow's Threats
Modern cryptographic systems such as RSA, ECC (Elliptic Curve Cryptography), and DH (Diffie-Hellman) rely heavily on the mathematical difficulty of certain problems, like factoring large integers or computing discrete logarithms. However, with the rise of quantum computing, algorithms like Shor's and Grover's threaten to break these systems, rendering them insecure. Quantum computers are not yet at the scale required to break these encryption methods in practice, but their rapid development has pushed the cryptographic community to act now. This is where Post-Quantum Cryptography (PQC) comes in: a new wave of algorithms designed to remain secure against both classical and quantum attacks.

Why PQC Matters

Quantum computers exploit quantum-mechanical principles like superposition and entanglement to perform calculations that would take classical computers millennia. This threatens:

- Public-key cryptography: Algorithms like RSA rely on factoring large primes or solving discrete logarithms, problems quantum computers could crack using Shor's algorithm.
- Long-term data security: Attackers may already be harvesting encrypted data to decrypt later ("harvest now, decrypt later") once quantum computers mature.

Figure 1: Cryptography evolution

How PQC Works

The National Institute of Standards and Technology (NIST) has led a multi-year standardization effort. Here are the main algorithm families and notable examples.

Lattice-Based Cryptography

Lattice problems are believed to be hard for quantum computers, and most of the leading candidates come from this category:

- CRYSTALS-Kyber (Key Encapsulation Mechanism)
- CRYSTALS-Dilithium (Digital Signatures)

These schemes use complex geometric structures (lattices) where finding the shortest vector is computationally hard, even for quantum computers. For example, ML-KEM (formerly Kyber) establishes encryption keys using lattices but requires more data transfer (2,272 bytes vs. 64 bytes for elliptic curves).

The figure below illustrates how lattice-based cryptography works: imagine solving a maze with two maps, one public (twisted paths) and one private (shortest route). Only the private map holder can navigate efficiently.

Code-Based Cryptography

Based on the difficulty of decoding random linear codes, these schemes rely on error-correcting codes.

- Classic McEliece: Resistant to quantum attacks for decades.
- Pros: Very well studied and conservative.
- Cons: Very large public key sizes.

The Classic McEliece scheme hides messages by adding intentional errors that only the recipient can fix. How it works:

- Key generation: Create a parity-check matrix (public key) and a secret decoder (private key).
- Encryption: Encode a message with random errors.
- Decryption: Use the private key to correct the errors and recover the message.

Figure 3: Code-Based Cryptography Illustration

Multivariate Cryptography

These schemes are based on solving systems of multivariate quadratic equations over finite fields, a problem believed to be quantum-resistant.

Hash-Based Cryptography

These schemes use hash functions to construct secure digital signatures.

- SPHINCS+: Stateless and hash-based, good for long-term digital signature security.

Challenges and Adoption

- Integration: PQC must work within existing TLS, VPN, and hardware stacks.
- Key sizes: PQC algorithms often require larger keys. For example, Classic McEliece public keys can exceed 1 MB.
- Hybrid schemes: Combining classical and post-quantum methods enables gradual adoption.
- Performance: Lattice-based methods are fast but increase bandwidth usage.
- Standardization: NIST has finalized three PQC standards (e.g., ML-KEM) and is testing others. Organizations must start migrating now, as transitions can take decades.
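The hybrid-scheme idea noted above (combining classical and post-quantum methods) is usually realized by feeding both shared secrets into a key-derivation function, so the session key stays safe as long as either input remains unbroken. The sketch below uses an HKDF built from Python's standard library; the salt, label, and output length are illustrative choices, not what a real TLS stack's key schedule would use.

```python
import hashlib
import hmac

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    """HKDF-Extract (RFC 5869): condense input keying material into a PRK."""
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    """HKDF-Expand (RFC 5869): stretch the PRK to the requested length."""
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

def combine_secrets(classical: bytes, post_quantum: bytes) -> bytes:
    # Concatenating both secrets before extraction means the derived key
    # remains strong as long as EITHER input secret is unbroken.
    # The salt and info labels here are illustrative assumptions.
    prk = hkdf_extract(b"hybrid-kex-demo-salt", classical + post_quantum)
    return hkdf_expand(prk, b"session key", 32)
```

In a real hybrid handshake, the classical secret would come from an X25519 exchange and the post-quantum secret from a Kyber/ML-KEM encapsulation; the design goal is simply that breaking one component does not break the derived key.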
Adopting PQC with BIG-IP

As of version 17.5, BIG-IP supports the widely implemented X25519Kyber768Draft00 cipher group for client-side TLS negotiations (BIG-IP acting as a TLS server). Other cipher groups and capabilities will become available in subsequent releases.

Cipher Walkthrough

Let's take the cipher supported in v17.5.0, the hybrid X25519_Kyber768, as an example and walk through it.

- X25519: a classical elliptic-curve Diffie-Hellman (ECDH) algorithm
- Kyber768: a post-quantum Key Encapsulation Mechanism (KEM)

The goal is to securely establish a shared secret key between the two parties using both classical and quantum-resistant cryptography.

Key Exchange

X25519 exchange: Alice and Bob exchange X25519 public keys. Each computes a shared secret using their own private key plus the other's public key.

Kyber768 exchange: Alice uses Bob's Kyber768 public key to encapsulate a secret, producing a ciphertext and a shared secret. Bob uses his Kyber768 private key to decapsulate the ciphertext and recover the same shared secret.

Both parties now have a classical shared secret and a post-quantum shared secret. They combine them using a KDF (Key Derivation Function).

Why the hybrid approach is being followed:

- If quantum computers are not practical yet, X25519 provides strong classical security.
- If a quantum computer arrives, Kyber768 keeps communications secure.
- It helps organizations migrate gradually from classical to post-quantum systems.

Implementation Guide

F5 has published the article Enabling Post-Quantum Cryptography in F5 BIG-IP TMOS, which walks through implementing PQC on BIG-IP v17.5.

Create a new Cipher Rule

To create a new Cipher Rule, log in to the BIG-IP Configuration Utility and go to Local Traffic > Ciphers > Rules.
Select Create.
In the Name box, provide a name for the Cipher Rule.
For Cipher Suites, select any of the suites from the provided Cipher Suites list. Use ALL or DEFAULT to list all of the available suites.
For DH Groups, enter X25519KYBER768 to restrict the rule to only this PQC group.
For Signature Algorithms, select an algorithm. For example: DEFAULT.
Select Finished.

Create a new Cipher Group

In the BIG-IP Configuration Utility, go to Local Traffic > Ciphers > Groups.
Select Create.
In the Name box, provide a name for the Cipher Group.
In Group Details, add the newly created Cipher Rule to the "Allow the following" box or to "Restrict the Allowed list to the following". All of the other details, including DH Group, Signature Algorithms, and Cipher Suites, will be reflected in the Group Audit as per the selected rule.
Select Finished.

Configure a Client SSL Profile

In the BIG-IP Configuration Utility, go to Local Traffic > Profiles > SSL > Client.
Create a new client SSL profile or edit an existing one.
For Ciphers, select the Cipher Group radio button and select the created group to enable post-quantum cryptography for this client SSL profile.

NGINX Support for PQC

We are pleased to announce support for Post-Quantum Cryptography (PQC) starting with NGINX Plus R33. NGINX provides PQC support using the Open Quantum Safe provider library for OpenSSL 3.x (oqs-provider). This library is available from the Open Quantum Safe (OQS) project. The oqs-provider library adds support for all post-quantum algorithms supported by the OQS project to network protocols like TLS in OpenSSL 3-reliant applications. All ciphers/algorithms provided by oqs-provider are supported by NGINX.

To configure NGINX with PQC support using oqs-provider, follow these steps:

Install the necessary dependencies:

sudo apt update
sudo apt install -y build-essential git cmake ninja-build libssl-dev pkg-config

Download and install liboqs:

git clone --branch main https://github.com/open-quantum-safe/liboqs.git
cd liboqs
mkdir build && cd build
cmake -GNinja -DCMAKE_INSTALL_PREFIX=/usr/local -DOQS_DIST_BUILD=ON ..
ninja
sudo ninja install

Download and install oqs-provider:

git clone --branch main https://github.com/open-quantum-safe/oqs-provider.git
cd oqs-provider
mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/usr/local -DOPENSSL_ROOT_DIR=/usr/local/ssl ..
make -j$(nproc)
sudo make install

Download and install OpenSSL with oqs-provider support:

git clone https://github.com/openssl/openssl.git
cd openssl
./Configure --prefix=/usr/local/ssl --openssldir=/usr/local/ssl linux-x86_64
make -j$(nproc)
sudo make install_sw

Configure OpenSSL for oqs-provider in /usr/local/ssl/openssl.cnf:

openssl_conf = openssl_init

[openssl_init]
providers = provider_sect

[provider_sect]
default = default_sect
oqsprovider = oqsprovider_sect

[default_sect]
activate = 1

[oqsprovider_sect]
activate = 1

Generate post-quantum certificates:

export OPENSSL_CONF=/usr/local/ssl/openssl.cnf

# Generate CA key and certificate
/usr/local/ssl/bin/openssl req -x509 -new -newkey dilithium3 -keyout ca.key -out ca.crt -nodes -subj "/CN=Post-Quantum CA" -days 365

# Generate server key and certificate signing request (CSR)
/usr/local/ssl/bin/openssl req -new -newkey dilithium3 -keyout server.key -out server.csr -nodes -subj "/CN=your.domain.com"

# Sign the server certificate with the CA
/usr/local/ssl/bin/openssl x509 -req -in server.csr -out server.crt -CA ca.crt -CAkey ca.key -CAcreateserial -days 365

Download and install NGINX Plus.

Configure NGINX to use the post-quantum certificates:

server {
    listen 0.0.0.0:443 ssl;
    ssl_certificate /path/to/server.crt;
    ssl_certificate_key /path/to/server.key;
    ssl_protocols TLSv1.3;
    ssl_ecdh_curve kyber768;

    location / {
        return 200 "$ssl_curve $ssl_curves";
    }
}

Conclusion

By adopting PQC, we can future-proof encryption against quantum threats while balancing security and practicality. While technical hurdles remain, collaborative efforts between researchers, engineers, and policymakers are accelerating the transition.
Related Content

- New Features in BIG-IP Version 17.5.0
- K000149577: Enabling Post-Quantum Cryptography in F5 BIG-IP TMOS
- F5 NGINX Plus R33 Release Now Available | DevCentral

Overview of MITRE ATT&CK Execution Tactic (TA0002)
Introduction to Execution Tactic (TA0002)

Execution refers to the methods adversaries use to run malicious code on a target system. This tactic includes a range of techniques designed to execute payloads after gaining access to the network. It is a key stage in the attack lifecycle, as it allows attackers to activate their malicious actions, such as deploying malware, running scripts, or exploiting system vulnerabilities. Successful execution can lead to deeper system control, enabling attackers to perform actions like data theft, system manipulation, or establishing persistence for future exploitation.

Now, let's dive into the various techniques under the Execution tactic and explore how attackers use them.

1. T1651: Cloud Administration Command

Cloud management services can be exploited to execute commands within virtual machines. If an attacker gains administrative access to a cloud environment, they may misuse these services to run commands on the virtual machines. Furthermore, if an adversary compromises a service provider or a delegated administrator account, they could also exploit trusted relationships to execute commands on connected virtual machines.

2. T1059: Command and Scripting Interpreter

The misuse of command and script interpreters allows adversaries to execute commands, scripts, or binaries. These interfaces, such as Unix shells on macOS and Linux, Windows Command Shell, and PowerShell, are common across platforms and provide direct interaction with systems. Cross-platform interpreters like Python, as well as those tied to client applications (e.g., JavaScript, Visual Basic), can also be misused. Attackers may embed commands and scripts in initial access payloads or download them later via an established C2 (Command and Control) channel. Commands may also be executed via interactive shells or through remote services to enable remote execution.
(.001) PowerShell: As PowerShell is already part of Windows, attackers often exploit it to execute commands discreetly without triggering alarms. It is often used for tasks like finding information, moving across networks, and running malware directly in memory, which helps avoid detection because nothing is written to disk. Attackers can also execute PowerShell scripts without launching the powershell.exe program by leveraging .NET interfaces. Tools like Empire, PowerSploit, and PoshC2 make it even easier for attackers to use PowerShell for malicious purposes.
Example - Remote Command Execution

(.002) AppleScript: AppleScript is a macOS scripting language designed to control applications and system components through inter-application messages called AppleEvents. These AppleEvent messages can be sent by themselves or with AppleScript. They can find open windows, send keystrokes, and interact with almost any open application, either locally or remotely. AppleScript can be executed in various ways, including through the command-line interface (CLI) and built-in applications. However, it can also be abused to trigger actions that exploit both the system and the network.

(.003) Windows Command Shell: The Windows Command Prompt (CMD) is a lightweight, simple shell on Windows systems, allowing control over most system aspects with varying permission levels, though it lacks the advanced capabilities of PowerShell. CMD can also be used remotely via Remote Services. Attackers may use it to execute commands or payloads, often sending input and output through a command-and-control channel.
Example - Remote Command Execution

(.004) Unix Shell: Unix shells serve as the primary command-line interface on Unix-based systems. They provide control over nearly all system functions, with certain commands requiring elevated privileges. Unix shells can be used to run different commands or payloads. They can also run shell scripts to combine multiple commands as part of an attack.
Example - Remote Command Execution

(.005) Visual Basic: Visual Basic (VB) is a programming language developed by Microsoft, now considered a legacy technology. Visual Basic for Applications (VBA) and VBScript are derivatives of VB. Malicious actors may exploit VB payloads to execute harmful commands; common attacks include automating actions via VBScript or embedding VBA content (like macros) in spear-phishing attachments.

(.006) Python: Attackers often use popular scripting languages like Python due to their interoperability, cross-platform support, and ease of use. Python can be run interactively from the command line or through scripts that can be distributed across systems, and it can also be compiled into binary executables. With many built-in libraries for system interaction, such as file operations and device I/O, attackers can leverage Python to download and execute commands and scripts and to perform various malicious actions.
Example - Code Injection

(.007) JavaScript: JavaScript (JS) is a platform-independent scripting language, commonly used in web pages and runtime environments. Microsoft's JScript and JavaScript for Automation (JXA) on macOS are based on JS. Adversaries exploit JS to execute malicious scripts, often through Drive-by Compromise or by downloading scripts as secondary payloads. Since JS is text-based, it is often obfuscated to evade detection.
Example - XSS

(.008) Network Device CLI: Network devices often provide a CLI or scripting interpreter accessible via direct console connection or remotely through telnet or SSH. These interfaces allow interaction with the device for various functions. Adversaries may exploit them to alter device behavior, manipulate traffic, load malicious software by modifying configurations, or disable security features and logging to avoid detection.
(.009) Cloud API: Cloud APIs offer programmatic access to nearly all aspects of a tenant, available through methods like CLIs, in-browser Cloud Shells, PowerShell modules (e.g., Azure for PowerShell), or SDKs for languages like Python. These APIs provide administrative access to major services. Malicious actors with valid credentials, often stolen, can exploit these APIs to perform malicious actions.

(.010) AutoHotKey & AutoIT: AutoIT and AutoHotkey (AHK) are scripting languages used to automate Windows tasks, such as clicking buttons, entering text, and managing programs. Attackers may exploit AHK (.ahk) and AutoIT (.au3) scripts to execute malicious code, like payloads or keyloggers. These scripts can also be embedded in phishing payloads or compiled into standalone executable files.

(.011) Lua: Lua is a cross-platform scripting and programming language, primarily designed for embedding in applications. It can be executed via the command line using the standalone Lua interpreter, through scripts (.lua), or within Lua-embedded programs. Adversaries may exploit Lua scripts for malicious purposes, such as abusing or replacing existing Lua interpreters to execute harmful commands at runtime. Malware examples developed using Lua include EvilBunny, Line Runner, PoetRAT, and Remsec.

(.012) Hypervisor CLI: Hypervisor CLIs offer extensive functionality for managing both the hypervisor and its hosted virtual machines. On ESXi systems, tools like "esxcli" and "vim-cmd" allow administrators to configure and perform various actions. Attackers may exploit these tools to enable actions like File and Directory Discovery or Data Encrypted for Impact. Malware such as Cheerscrypt and Royal ransomware have leveraged this technique.

3. T1609: Container Administration Command

Adversaries may exploit container administration services, like the Docker daemon, the Kubernetes API server, or the kubelet, to execute commands within containers.
In Docker, attackers can specify an entry point to run a script or use docker exec to execute commands in a running container. In Kubernetes, with sufficient permissions, adversaries can gain remote execution by interacting with the API server or the kubelet, or by using commands like kubectl exec within the cluster.

4. T1610: Deploy Container

Containers can be exploited by attackers to run malicious code or bypass security measures, often through the use of harmful processes or weak settings, such as missing network rules or user restrictions. In Kubernetes environments, attackers may deploy containers with elevated privileges or vulnerabilities to access other containers or the host node. They may also use compromised or seemingly benign images that later download malicious payloads.

5. T1675: ESXi Administration Command

ESXi administration services can be exploited to execute commands on guest machines within an ESXi virtual environment. ESXi-hosted VMs can be remotely managed via persistent background services, such as the VMware Tools Daemon Service. Adversaries can perform malicious activities on VMs by executing commands through SDKs and APIs, enabling follow-on behaviors like File and Directory Discovery, Data from Local System, or OS Credential Dumping.

6. T1203: Exploitation for Client Execution

Adversaries may exploit software vulnerabilities in client applications to execute malicious code. These exploits can target browsers, office applications, or common third-party software. By exploiting specific vulnerabilities, attackers can achieve arbitrary code execution. The most valuable exploits in an offensive toolkit are often those that enable remote code execution, as they provide a pathway to gain access to the target system.
Example: Remote Code Execution

7. T1674: Input Injection

Input Injection involves adversaries simulating keystrokes on a victim's computer to carry out actions on their behalf.
This can be achieved through several methods, such as emulating keystrokes to execute commands or scripts, or using malicious USB devices to inject keystrokes that trigger scripts or commands. For example, attackers have employed malicious USB devices to simulate keystrokes that launch PowerShell, enabling the download and execution of malware from attacker-controlled servers.

8. T1559: Inter-Process Communication

Inter-Process Communication (IPC) is commonly used by processes to share data, exchange messages, or synchronize execution, and it also helps prevent issues like deadlocks. However, IPC mechanisms can be abused by adversaries to execute arbitrary code or commands. The implementation of IPC varies across operating systems. Additionally, command and scripting interpreters may leverage underlying IPC mechanisms, and adversaries might exploit remote services, such as the Distributed Component Object Model (DCOM), to enable remote IPC-based execution.

(.001) Component Object Model (Windows): Component Object Model (COM) is an inter-process communication (IPC) mechanism in the Windows API that allows interaction between software objects. A client object can invoke methods on server objects via COM interfaces. Languages like C, C++, Java, and Visual Basic can be used to exploit COM interfaces for arbitrary code execution. Certain COM objects also support functions such as creating scheduled tasks, enabling fileless execution, and facilitating privilege escalation or persistence.

(.002) Dynamic Data Exchange (Windows): Dynamic Data Exchange (DDE) is a client-server protocol used for one-time or continuous inter-process communication (IPC) between applications. Adversaries can exploit DDE in Microsoft Office documents, either directly or via embedded files, to execute commands without using macros. Similarly, DDE formulas in CSV files can trigger unintended operations.
This technique may also be leveraged by adversaries on compromised systems where direct access to command or scripting interpreters is restricted.

(.003) XPC Services (macOS): macOS uses XPC services for inter-process communication, such as between the XPC Service daemon and privileged helper tools in third-party apps. Applications define the communication protocol used with these services. Adversaries can exploit XPC services to execute malicious code, especially if the app's XPC handler lacks proper client validation or input sanitization, potentially leading to privilege escalation.

9. T1106: Native API

Native APIs provide controlled access to low-level kernel services, including those related to hardware, memory management, and process control. These APIs are used by the operating system during system boot and for routine operations. However, adversaries may abuse native API functions to carry out malicious actions. By using assembly directly or indirectly to invoke system calls, attackers can bypass user-mode security measures such as API hooks. Attackers may also try to alter or disable defensive tools that track API use by removing functions or changing sensor behavior. Many well-known exploit tools and malware families, such as Cobalt Strike, Emotet, Lazarus Group, LockBit 3.0, and Stuxnet, have leveraged Native API techniques to bypass security mechanisms, evade detection, and execute low-level malicious operations.

10. T1053: Scheduled Task/Job

This technique involves adversaries abusing task scheduling features to execute malicious code at specific times or intervals. Task schedulers are available across major operating systems, including Windows, Linux, macOS, and containerized environments, and can also be used to schedule tasks on remote systems. Adversaries commonly use scheduled tasks for persistence, privilege escalation, and to run malicious payloads under the guise of trusted system processes.
(.002) At: The "at" utility is available on Windows, Linux, and macOS for scheduling tasks to run at specific times. Adversaries can exploit "at" to execute programs at system startup or on a set schedule, helping them maintain persistence. It can also be misused for remote execution during lateral movement or to run processes under the context of a specific user account. In Linux environments, attackers may use "at" to break out of restricted environments, aiding in privilege escalation.

(.003) Cron: The cron utility is a time-based job scheduler used in Unix-like operating systems. The crontab file contains scheduled tasks and the times at which they should run; these files are stored in system-specific file paths. Adversaries can exploit cron in Linux or Unix environments to execute programs at startup or on a set schedule, maintaining persistence. In ESXi environments, cron jobs must be created directly through the crontab file.

(.005) Scheduled Task: Adversaries can misuse the Windows Task Scheduler to run programs at startup or on a schedule, ensuring persistence. It can also be exploited for remote execution during lateral movement or to run processes under specific accounts (e.g., SYSTEM). Similar to System Binary Proxy Execution, attackers may hide one-time executions under trusted system processes. They can also create "hidden" tasks that are not visible to defender tools or manual queries, and may alter registry metadata to further conceal these tasks.

(.006) Systemd Timers: Systemd timers are files with a .timer extension used to control services in Linux, serving as an alternative to cron. They can be activated remotely via the systemctl command over SSH. Each .timer file requires a corresponding .service file. Adversaries can exploit systemd timers to run malicious code at startup or on a schedule for persistence.
Timers placed in privileged paths can maintain root-level persistence, while user-level timers can provide user-level persistence.

(.007) Container Orchestration Job: Container orchestration jobs automate tasks at specific times, similar to cron jobs on Linux. These jobs can be configured to maintain a set number of containers, helping adversaries persist within a cluster. In Kubernetes, a CronJob schedules a Job that runs containers to perform tasks. Adversaries can exploit CronJobs to deploy Jobs that execute malicious code across multiple nodes in a cluster.

11. T1648: Serverless Execution

Cloud providers offer various serverless resources, such as compute functions, integration services, and web-based triggers, that adversaries can exploit to execute arbitrary commands, hijack resources, or deploy functions for further compromise. Cloud events can also trigger these serverless functions, potentially enabling persistent and stealthy execution over time. An example of this is Pacu, a well-known open-source AWS exploitation framework, which leverages serverless execution techniques.

12. T1229: Shared Modules

Shared modules are executable components loaded into processes to provide access to reusable code, such as custom functions or Native API calls. Adversaries can abuse this mechanism to execute arbitrary payloads by modularizing their malware into shared objects that perform various malicious functions. On Linux and macOS, the module loader can load shared objects from any local path. On Windows, the loader can load DLLs from both local paths and Universal Naming Convention (UNC) network paths.

13. T1072: Software Deployment Tools

Adversaries may exploit centralized management tools to execute commands and move laterally across enterprise networks. Access to endpoint or configuration management platforms can enable remote code execution, data collection, or destructive actions like wiping systems.
SaaS-based configuration management tools can also extend this control to cloud-hosted instances and on-premises systems. Similarly, configuration tools used in network infrastructure devices may be abused in the same way. The level of access required for such activity depends on the system's configuration and security posture.

14. T1569: System Services

System services and daemons can be abused to execute malicious commands or programs, whether locally or remotely. Creating or modifying services allows execution of payloads for persistence, particularly if set to run at startup, or for temporary, one-time actions.

(.001) Launchctl (macOS): launchctl interacts with launchd, the service management framework for macOS. It supports running subcommands via the command line, interactively, or from standard input. Adversaries can use launchctl to execute commands and programs as Launch Agents or Launch Daemons, either through scripts or manual commands.

(.002) Service Execution (Windows): The Windows Service Control Manager (services.exe) manages services and is accessible through both the GUI and system utilities. Tools like PsExec and sc.exe can be used for remote execution by specifying remote servers. Adversaries may exploit these tools to execute malicious content by starting new or modified services. This technique is often used for persistence or privilege escalation.

(.003) Systemctl (Linux): systemctl is the main interface for systemd, the Linux init system and service manager. It is typically used from a shell but can also be integrated into scripts or applications. Adversaries may exploit systemctl to execute commands or programs as systemd services.

15. T1204: User Execution

Users may be tricked into running malicious code by opening a harmful file or link, often through social engineering. While this usually happens right after initial access, it can occur at other stages of an attack.
Adversaries might also deceive users into enabling remote access tools, running malicious scripts, or manually downloading and executing malware. Tech support scams often use phishing, vishing, and fake websites, with scammers spoofing numbers or setting up fake call centers to steal access or install malware.

(.001) Malicious Link: Users may be tricked into clicking a link that triggers code execution. This could also involve exploiting a browser or application vulnerability (Exploitation for Client Execution). Additionally, links might lead users to download files that, when executed, deliver malware.

(.002) Malicious File: Users may be tricked into opening a file that leads to code execution. Adversaries often use techniques like masquerading and obfuscating files to make them appear legitimate, increasing the chances that users will open and execute the malicious file.

(.003) Malicious Image: Cloud images from platforms like AWS, GCP, and Azure, as well as images for popular container runtimes like Docker, can be backdoored. These compromised images may be uploaded to public repositories, and users might unknowingly download and deploy an instance or container, bypassing Initial Access defenses. Adversaries may also use misleading names to increase the chances of users mistakenly deploying the malicious image.

(.004) Malicious Copy and Paste: Users may be deceived into copying and pasting malicious code into a command or scripting interpreter. Malicious websites might display fake error messages or CAPTCHA prompts, instructing users to open a terminal or the Windows Run dialog and execute arbitrary, often obfuscated commands. Once executed, the adversary can gain access to the victim's machine. Phishing emails may also be used to trick users into performing this action.

16. T1047: Windows Management Instrumentation

WMI (Windows Management Instrumentation) is a tool designed for programmers, providing a standardized way to manage and access data on Windows systems.
It serves as an administrative feature that allows interaction with system components. Adversaries can exploit WMI to interact with both local and remote systems, using it to perform actions such as gathering information for discovery or executing commands and payloads.

How F5 can help?

F5 security solutions such as WAF (Web Application Firewall), API security, and DDoS mitigation protect applications and APIs across platforms, including cloud, edge, on-premises, and hybrid environments, thereby reducing security risk. F5 bot and risk management solutions can also stop bad bots and automation, making your modern applications safer. The example attacks mentioned under the techniques above can be effectively mitigated by F5 products such as Distributed Cloud, BIG-IP, and NGINX. Here are a few links that explain the mitigation steps:

- Mitigating Cross-Site Scripting (XSS) using F5 Advanced WAF
- Mitigating Struts2 RCE using F5 BIG-IP

For more details on the other mitigation techniques of MITRE ATT&CK Execution Tactic TA0002, please reach out to your local F5 team.

Reference Links:

- MITRE ATT&CK® Execution, Tactic TA0002 - Enterprise | MITRE ATT&CK®
- MITRE ATT&CK: What It Is, How it Works, Who Uses It and Why | F5 Labs