Policy Puppetry, Jumping the Line, and Camels
Notable news for the week of April 20th - April 27th, 2025. This week, your editor is Jordan_Zebor from the F5 Security Incident Response Team. This week, I’m diving into some big updates around Generative AI security. Every time a new tech wave hits—whether it was social media, crypto, IoT, or now AI—you can bet attackers and security teams are right there, too. The latest threats, like Policy Puppetry and Line Jumping, show just how fast things are moving. On the flip side, defenses like CaMeL are helping us stay one step ahead. If we want AI systems that people can trust, security engineers have to stay sharp and keep building smarter defenses.

Policy Puppetry: A Universal Prompt Injection Technique

It seems like new ways to perform prompt injection are discovered weekly. Recent research by HiddenLayer has uncovered "Policy Puppetry," a novel prompt injection technique that exploits the way LLMs interpret structured data formats such as XML and JSON. This works because the full context—trusted system instructions and user input alike—is flattened and presented to the LLM without inherent separation (something I will touch on later). By presenting malicious prompts that mimic policy files, attackers can override pre-existing instructions and even extract sensitive information, compromising the integrity of systems powered by LLMs like GPT-4, Claude, and Gemini.

For security engineers, this discovery underscores a systemic vulnerability tied to LLMs' training data and context interpretation. Existing defenses, such as system prompts, are insufficient on their own to prevent this level of exploitation. The emergence of Policy Puppetry adds to the ongoing discussion about prompt injection as the most significant Generative AI threat vector, highlighting the urgent need for comprehensive safeguards in AI system design and deployment.

MCP Servers and the "Line Jumping" Vulnerability

Trail of Bits uncovered a critical vulnerability in the Model Context Protocol (MCP), a framework used by AI systems to connect with external servers and retrieve tool descriptions. Dubbed "line jumping," this exploit allows malicious MCP servers to embed harmful prompts directly into tool descriptions, which are processed by the AI before any tool is explicitly invoked. By bypassing the protocol’s safeguards, attackers can manipulate system behavior and execute unintended actions, creating a cascading effect that compromises downstream systems and workflows.

This vulnerability undermines MCP's promises of Tool Safety and Connection Isolation. The protocol is designed to ensure that tools can only cause harm when explicitly invoked with user consent and to limit the impact of any compromised server through isolated connections. However, malicious servers bypass these protections by instructing the model to act as a message relay or proxy, effectively bridging communication between supposedly isolated components. This creates an architectural flaw akin to a security system that activates its prevention mechanisms only after intruders have already breached it.

Google's CaMeL: A Defense Against Prompt Injection

In response to prompt injection threats, Google DeepMind has introduced CaMeL (Capability-based Model Execution Layer), a groundbreaking mechanism designed to enforce control and data flow integrity in AI systems. By associating "capabilities"—metadata that dictate operational limits—with every value processed by the model, CaMeL ensures untrusted inputs cannot exceed their designated influence.
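To make the capability idea concrete, here is a minimal, hypothetical Python sketch (not DeepMind's implementation; every name here is invented): values carry metadata about their origin and permitted uses, and a policy layer outside the model checks that metadata before any sensitive action runs.

```python
from dataclasses import dataclass, field

# Hypothetical illustration of the capability idea: every value carries
# metadata about where it came from and what it may be used for.
@dataclass
class Tagged:
    value: str
    source: str                            # e.g. "system", "user", "web"
    allowed_sinks: set = field(default_factory=set)

def send_email(recipient: Tagged, body: Tagged) -> None:
    # The protective layer, not the model, decides whether a value may
    # flow into a sensitive sink such as outbound email.
    for arg in (recipient, body):
        if "email" not in arg.allowed_sinks:
            raise PermissionError(
                f"value from source '{arg.source}' is not allowed to reach the email sink"
            )
    print(f"sending to {recipient.value}: {body.value}")

# A recipient lifted from a retrieved web page carries no email capability,
# so even if injected text tells the model to exfiltrate data, the call fails.
trusted = Tagged("alice@example.com", source="user", allowed_sinks={"email"})
untrusted = Tagged("attacker@evil.example", source="web")

send_email(trusted, Tagged("meeting notes", source="user", allowed_sinks={"email"}))
try:
    send_email(untrusted, Tagged("secrets", source="web"))
except PermissionError as err:
    print("blocked:", err)
```

The point is that the check happens in ordinary code around the LLM, so it still holds even when the model itself has been successfully prompt-injected.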
Instead of sprinkling more AI magic fairy dust on the problem, this approach brings solid, well-established security principles into the AI domain. By implementing a protective layer around the LLM, developers can reduce risks such as unauthorized operations and unexpected data exfiltration, even if the underlying model remains vulnerable to prompt injection. While CaMeL has not yet been tested broadly, it represents a significant advancement toward secure-by-design AI systems, establishing a new architectural standard for mitigating prompt injection vulnerabilities.

That’s it for this week — hope you enjoyed!
Secure and Seamless Cloud Application Migration with F5 Distributed Cloud and Nutanix

Introduction

F5 Distributed Cloud (XC) offers SaaS-based security, networking, and application management services for multicloud environments, on-premises infrastructures, and edge locations. F5 Distributed Cloud Services Customer Edge (CE) enhances these capabilities by integrating into a customer’s environment, enabling centralized management via the F5 Distributed Cloud Console while being fully operated by the customer. F5 Distributed Cloud Services Customer Edge (CE) can be deployed in public clouds, on-premises, or at the edge.

Nutanix is a leading provider of Hyperconverged Infrastructure (HCI), which integrates storage, compute, networking, and virtualization into a unified, scalable, and easily managed solution. Nutanix Cloud Clusters (NC2) extend on-premises data centers to public clouds, maintaining the simplicity of the Nutanix software stack with a unified management console. NC2 runs AOS and AHV on public cloud instances, offering the same CLI, user interface, and APIs as on-premises environments.

This article explores how F5 Distributed Cloud and Nutanix collaborate to deliver secure and seamless application services across various types of cloud application migrations. Whether migrating applications to the cloud, repatriating them from public clouds, or transitioning into a hybrid multicloud environment, F5 Distributed Cloud and Nutanix ensure optimal performance and security at all times.

Illustration

F5 Distributed Cloud App Connect securely connects distributed application services across hybrid and multicloud environments. It operates seamlessly with a platform of web application and API protection (WAAP) services, safeguarding apps and APIs against a wide range of threats through robust security policies, including an integrated WAF, DDoS protection, bot management, and other security tools. This enables the enforcement of consistent and comprehensive security policies across all applications without the need to configure individual custom policies for each app and environment. Additionally, it provides centralized observability, with clear insights into performance metrics, security posture, and operational status across all cloud platforms. In this section, we illustrate how to use F5 Distributed Cloud App Connect with Nutanix for different cloud application migration scenarios.

Cloud Migration

In our example, we have a VMware environment within a data center located in San Jose. Our goal is to migrate the on-premises application nutanix.f5-demo.com from the VMware environment to a multicloud environment by distributing the application workloads across Nutanix Cloud Clusters (NC2) on AWS and Nutanix Cloud Clusters (NC2) on Azure.

First, we deploy F5 Distributed Cloud Customer Edge (CE) and application workloads on Nutanix Cloud Clusters (NC2) on AWS as well as Nutanix Cloud Clusters (NC2) on Azure. F5 Distributed Cloud App Connect addresses the issue of IP overlapping, enabling us to deploy application workloads using the same IP addresses as those in the VMware environment in the San Jose data center.

Next, we create origin pools on the F5 Distributed Cloud Console. In our example, we create two origin pools: nutanix-nc2-aws-pool for origin servers on NC2 on AWS and nutanix-nc2-azure-pool for origin servers on NC2 on Azure.
To ensure minimal disruption to the application service, we update the HTTP Load Balancer for nutanix.f5-demo.com to include both new origin pools and assign them a higher weight than the existing pool vmware-sj-pool, so that the origin servers on Nutanix Cloud Clusters (NC2) on AWS and on Nutanix Cloud Clusters (NC2) on Azure receive more traffic than the origin servers in the VMware environment in the San Jose data center. Note that the web application firewall (WAF) nutanix-demo is enabled.

Finally, we remove vmware-sj-pool to complete the cloud migration.

Cloud Repatriation

In this example, xc.f5-demo.com is deployed in a multicloud environment across AWS and Azure. Our objective is to migrate the application back to the Nutanix environment in the San Jose data center from the public clouds.

To begin, we deploy F5 Distributed Cloud Customer Edge (CE) and application workloads in Nutanix AHV. We deploy the application workloads using the same IP addresses as those in the public clouds because IP overlapping is not a concern with F5 Distributed Cloud App Connect.

On the F5 Distributed Cloud Console, we create an origin pool nutanix-sj-pool with the origin servers originating from the Nutanix environment in the San Jose data center. We then update the HTTP Load Balancer for xc.f5-demo.com to include the new origin pool and assign it a higher weight than both existing pools: xc-aws-pool with origin servers on AWS and xc-azure-pool with origin servers on Azure. As a result, the origin servers in the Nutanix environment in the San Jose data center receive more traffic than the origin servers in the other pools. To ensure all applications receive the same level of security protection, the web application firewall (WAF) nutanix-demo is also applied here.

To complete the cloud repatriation, we remove xc-aws-pool and xc-azure-pool. The application service experiences minimal disruption during and after the migration.

Hybrid Multicloud

Our goal in this example is to bring xc-nutanix.f5-demo.com into a hybrid multicloud environment, as it is presently deployed solely in the San Jose data center.

We first deploy F5 Distributed Cloud Customer Edge (CE) and application workloads on Nutanix Cloud Clusters (NC2) on AWS as well as on Nutanix Cloud Clusters (NC2) on Azure. We create an origin pool with origin servers originating from each of the F5 Distributed Cloud Customer Edge (CE) sites on the F5 Distributed Cloud Console. Next, we update the HTTP Load Balancer for xc-nutanix.f5-demo.com so that it includes all origin pools: nutanix-sj-pool (Nutanix AHV in our San Jose data center), nutanix-nc2-aws-pool (NC2 on AWS), and nutanix-nc2-azure-pool (NC2 on Azure). Note that the web application firewall (WAF) nutanix-demo is applied here as well, so that we can ensure a consistent level of security protection across all applications no matter where they are deployed. xc-nutanix.f5-demo.com is now in a hybrid multicloud environment.

F5 Distributed Cloud Console is the centralized console for configuration management and observability. It provides real-time metrics and analytics, which allows us to proactively monitor security events. Additionally, its integrated AI assistant delivers real-time insights and actionable recommendations, enhancing our understanding of security events and enabling more informed decision-making. This enables us to swiftly detect and respond to emerging threats, thereby sustaining a robust security posture.
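Each scenario above shifts traffic gradually by giving the new origin pools a higher relative weight than the pools being retired. The Python sketch below is only a conceptual illustration of that weighted selection; it is not how the XC data plane is implemented, and the weights are invented for illustration (the pool names are taken from the cloud migration example).

```python
import random

# Relative weights during the cloud migration example: the new NC2 pools
# receive more traffic than the legacy VMware pool until it is removed.
pools = {
    "nutanix-nc2-aws-pool": 40,
    "nutanix-nc2-azure-pool": 40,
    "vmware-sj-pool": 20,
}

def pick_pool(pools: dict) -> str:
    """Pick an origin pool in proportion to its configured weight."""
    names = list(pools)
    weights = [pools[name] for name in names]
    return random.choices(names, weights=weights, k=1)[0]

# Roughly 80% of requests land on the NC2 pools, 20% on the VMware pool.
counts = {name: 0 for name in pools}
for _ in range(10_000):
    counts[pick_pool(pools)] += 1
print(counts)

# Completing the migration is then simply removing the legacy pool.
pools.pop("vmware-sj-pool")
```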
Conclusion

Cloud application migration can be complex and challenging. F5 Distributed Cloud and Nutanix collaborate to offer a secure and streamlined solution that minimizes risk and disruption during and after the migration process, including for those migrating from VMware environments. This ensures a seamless cloud application transition while maintaining business continuity throughout the entire process and beyond.
F5 NGINX Automation Examples [Part 1 - Deploy F5 NGINX Ingress Controller with App Protect V5]

Introduction

Welcome to our initial article on F5 NGINX automation use cases, where we aim to provide deeper insights into the strategies and benefits of implementing NGINX solutions. This series uses the NGINX Automation Examples GitHub repo and a CI/CD platform to deploy NGINX solutions based on DevSecOps principles. Our focus will specifically address the integration of NGINX with Terraform, two powerful tools that enhance application delivery and support infrastructure as code. Stay tuned for additional use cases that will be presented in upcoming content!

In this detailed example, we will demonstrate how to deploy an F5 NGINX Ingress Controller with F5 NGINX App Protect version 5 in the AWS, GCP, and Azure clouds. We will utilize Terraform to set up an AWS Elastic Kubernetes Service (EKS) cluster that hosts the Arcadia Finance test web application. The NGINX Ingress Controller will manage this application for Kubernetes and will have security measures provided by NGINX App Protect version 5. To streamline the deployment process, we will integrate GitHub Actions for continuous integration and continuous deployment (CI/CD) while using an Amazon S3 bucket to manage the state of our Terraform configurations.

Prerequisites:
- F5 NGINX One License
- AWS Account - Due to the assets being created, the free tier will not work
- GitHub Account

Tools
- Cloud Provider: AWS
- Infrastructure as Code: Terraform
- Infrastructure as Code State: S3
- CI/CD: GitHub Actions

NGINX Ingress Controller: This solution provides comprehensive management for API gateways, load balancers, and Kubernetes Ingress Controllers, enhancing security and visibility in hybrid and multicloud environments, particularly at the edge of Kubernetes clusters. Consolidating technology streamlines operations and reduces the complexity of using multiple tools.

NGINX App Protect WAF v5: A lightweight software security solution designed to deliver high performance and low latency. It supports platform-agnostic deployment, making it suitable for modern microservices and container-based applications. This version integrates both NGINX and Web Application Firewall (WAF) components within a single pod, making it particularly well-suited for scalable, cloud-native environments.
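The per-cloud workflow guides below automate the full pipeline. As a rough, hypothetical sketch of the core step each pipeline performs, a CI job essentially initializes Terraform against the S3 state bucket and applies the plan; the bucket, key, and region values here are placeholders, not values from the repository.

```python
import subprocess

# Hypothetical CI step: initialize Terraform with an S3 remote state backend
# and apply the configuration. Assumes the terraform binary is on PATH and
# AWS credentials are supplied by the CI environment (e.g. GitHub Actions secrets).
BACKEND = {
    "bucket": "example-terraform-state",                     # placeholder bucket name
    "key": "nginx-ingress-app-protect/terraform.tfstate",    # placeholder state key
    "region": "us-east-1",
}

def run(*args: str) -> None:
    """Run a terraform command and fail the job on a non-zero exit code."""
    print("+", " ".join(args))
    subprocess.run(args, check=True)

def deploy(workdir: str = ".") -> None:
    backend_flags = [f"-backend-config={k}={v}" for k, v in BACKEND.items()]
    run("terraform", f"-chdir={workdir}", "init", *backend_flags)
    run("terraform", f"-chdir={workdir}", "plan", "-out=tfplan")
    run("terraform", f"-chdir={workdir}", "apply", "tfplan")

if __name__ == "__main__":
    deploy()
```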
Module 1: Deploy NGINX Ingress Controller with App Protect V5 in AWS Cloud
Workflow Guide: Deploy NGINX Ingress Controller with App Protect V5 in AWS Cloud
Architecture Diagram

Module 2: Deploy NGINX Ingress Controller with App Protect V5 in GCP Cloud
Workflow Guide: Deploy NGINX Ingress Controller with App Protect V5 in GCP Cloud
Architecture Diagram

Module 3: Deploy NGINX Ingress Controller with App Protect V5 in Azure
Workflow Guide: Deploy NGINX Ingress Controller with App Protect V5 in Azure
Architecture Diagram

Conclusion

This article outlines deploying a robust security framework using the NGINX Ingress Controller and NGINX App Protect WAF version 5 for a sample web application hosted on AWS EKS. We leveraged the NGINX Automation Examples Repository and integrated it into a CI/CD pipeline for streamlined deployment. Although the provided code and security configurations are foundational and may not cover every possible scenario, they serve as a valuable starting point for implementing NGINX Ingress Controller and NGINX App Protect version 5 in your cloud environments.

Mitigating OWASP Web Application Risk: Vulnerable and Outdated Components using F5 BIG-IP

This article provides information on the Struts 2 vulnerability (CVE-2017-5638), one of the dangers posed by vulnerable and outdated components. It highlights how a single unpatched vulnerability in a widely used framework can lead to catastrophic consequences, including data breaches, server compromise, and damage to an organisation's reputation, and how we can protect against it using F5 BIG-IP Advanced WAF.
Modern Applications - Demystifying Ingress Solution Flavors

In this article, we explore the different ingress services provided by F5 and how those solutions fit within our environment. With different ingress service flavors, you gain the ability to interact with your microservices at different points, allowing for flexible, secure deployment. The ingress services can be summarized into two main categories:

Management plane:
- NGINX One
- BIG-IP CIS

Traffic plane:
- NGINX Ingress Controller / Plus / App Protect / Service Mesh
- BIG-IP Next for Kubernetes
- Cloud Native Functions (CNFs)
- F5 Distributed Cloud Ingress Controller

Ingress solutions definitions

In this section, we go quickly through the ingress services to understand the concept behind each service, and then later move to the use case comparison.

BIG-IP Next for Kubernetes

Kubernetes' native networking architecture does not inherently support multi-network integration or non-HTTP/HTTPS protocols, creating operational and security challenges for complex deployments. BIG-IP Next for Kubernetes addresses these limitations by centralizing ingress and egress traffic control, aligning with Kubernetes design principles to integrate with existing security frameworks and the broader network infrastructure. This reduces operational overhead by consolidating cross-network traffic management into a unified ingress/egress point, eliminating the need for multiple external firewalls that traditionally require isolated configuration. The solution enables zero-trust security models through granular policy enforcement and provides robust threat mitigation, including DDoS protection, by replacing fragmented security measures with a centralized architecture. Additionally, BIG-IP Next supports 5G Core deployments by managing North/South traffic flows in containerized environments, facilitating use cases such as network slicing and multi-access edge computing (MEC). These capabilities enable dynamic resource allocation aligned with application-specific or customer-driven requirements, ensuring scalable, secure connectivity for next-generation 5G consumer and enterprise solutions while maintaining compatibility with existing network and security ecosystems.

Cloud Native Functions (CNFs)

BIG-IP Next for Kubernetes enables advanced networking, traffic management, and security functionalities; CNFs enable additional advanced services. VNFs and CNFs can be consolidated in the S/Gi-LAN or the N6 LAN in 5G networks. A consolidated approach results in simpler management and operation, reduced operational costs (up to 60% lower TCO), and more opportunities to monetize functions and services. Functions can include DNS, Edge Firewall, DDoS, Policy Enforcer, and more. BIG-IP Next CNFs provide scalable, automated, resilient, manageable, and observable cloud-native functions and applications. They support dynamic elasticity, occupy a smaller footprint with fast restart, and use continuous deployment and automation principles.

NGINX for Kubernetes / NGINX One

NGINX for Kubernetes is a versatile, cloud-native application delivery platform that aligns closely with DevOps and microservices principles. It is built around two primary models:

NGINX Ingress Controller (OSS and Plus): Deployed directly inside Kubernetes clusters, it acts as the primary ingress gateway for HTTP/S, TCP, and UDP traffic. It supports Kubernetes-native CRDs and integrates easily with GitOps pipelines, service meshes (e.g., Istio, Linkerd), and modern observability stacks like Prometheus and OpenTelemetry.
NGINX One/NGINXaaS: This SaaS-delivered, managed service extends the NGINX experience by offloading the operational overhead, providing scalability, resilience, and simplified security configurations for Kubernetes environments across hybrid and multi-cloud platforms.

NGINX solutions prioritize lightweight deployment, fast performance, and API-driven automation. NGINX Plus variants offer extended features like advanced WAF (NGINX App Protect), JWT authentication, mTLS, session persistence, and detailed application-layer observability.

Some under-the-hood differences: BIG-IP Next for Kubernetes and CNFs use F5's own TMM to perform application delivery and security, while NGINX relies on the kernel for some network-level functions such as NAT, iptables, and routing. So it is a matter of your environment's architecture whether you go with one or both options to enhance your application delivery and security experience.

BIG-IP Container Ingress Services (CIS)

BIG-IP CIS operates in the management plane. The CIS service is deployed in the Kubernetes cluster and sends information about created Pods to an integrated BIG-IP external to the Kubernetes environment. This allows LTM pools to be created automatically and traffic to be forwarded based on pool member health. Application teams can focus on microservice development while BIG-IP is updated automatically, allowing for easier configuration management.

Use cases categorization

Let's talk in use case terms to make this more relatable to the field and our day-to-day work.

NGINX One
- Access to NGINX commercial products, support for open source, and the option to add WAF.
- Unified dashboard and APIs to discover and manage your NGINX instances.
- Identify and fix configuration errors quickly and easily with the NGINX One configuration recommendation engine.
- Quickly diagnose bottlenecks and act immediately with real-time performance monitoring across all NGINX instances.
- Enforce global security policies across diverse environments.
- Real-time vulnerability management identifies and addresses CVEs in NGINX instances.
- Visibility into compliance issues across diverse app ecosystems.
- Update groups of NGINX systems simultaneously with a single configuration file change.
- Unified view of your NGINX fleet for collaboration, performance tuning, and troubleshooting.
- Automate manual configuration and updating tasks for security and platform teams.

BIG-IP CIS
- Enable self-service Ingress HTTP routing and app services selection by subscribing to events to automatically configure performance, routing, and security services on BIG-IP.
- Integrate with the BIG-IP platform to scale apps for availability and enable app services insertion. In addition, integrate with the BIG-IP system and NGINX for Ingress load balancing.

BIG-IP Next for Kubernetes
- Supports ingress and egress traffic management and routing for seamless integration with multiple networks.
- Enables support for 4G and 5G protocols that are not supported by Kubernetes—such as Diameter, SIP, GTP, SCTP, and more.
- Enables security services applied at ingress and egress, such as firewalling and DDoS protection.
- Topology hiding at ingress obscures the internal structure within the cluster.
- As a central point of control, per-subscriber traffic visibility at ingress and egress allows traceability for compliance tracking and billing.
- Support for multi-tenancy and network isolation for AI applications, enabling efficient deployment of multiple users and workloads on a single AI infrastructure.
- Optimize AI factory implementations with BIG-IP Next for Kubernetes on NVIDIA DPUs.

F5 Cloud Native Functions (CNFs)
- Add containerized services, for example firewall, DDoS, and Intrusion Prevention System (IPS) technology, based on F5 BIG-IP AFM.
- Ease IPv6 migration and improve network scalability and security with IPv4 address management.
- Deploy as part of a security strategy.
- Support DNS caching and DNS over HTTPS (DoH).
- Support advanced policy and traffic management use cases.
- Improve QoE and ARPU with tools like traffic classification, video management, and subscriber awareness.

NGINX Ingress Controller
- Provide L4-L7 NGINX services within the Kubernetes cluster.
- Manage user and service identities and authorize access and actions with HTTP Basic authentication, JSON Web Tokens (JWTs), OpenID Connect (OIDC), and role-based access control (RBAC).
- Secure incoming and outgoing communications through end-to-end encryption (SSL/TLS passthrough, TLS termination).
- Collect, monitor, and analyze data through prebuilt integrations with leading ecosystem tools, including OpenTelemetry, Grafana, Prometheus, and Jaeger.
- Easy integration with the Kubernetes Ingress API, Gateway API (experimental support), and Red Hat OpenShift Routes.
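To make that Ingress API integration point concrete, here is a minimal, hypothetical sketch that declares an Ingress handled by the NGINX Ingress Controller using the official Kubernetes Python client rather than a YAML manifest. The host, service name, and namespace are invented for illustration.

```python
from kubernetes import client, config

def create_demo_ingress() -> None:
    # Loads credentials from ~/.kube/config; use load_incluster_config() inside a pod.
    config.load_kube_config()

    ingress = client.V1Ingress(
        metadata=client.V1ObjectMeta(name="demo-app"),
        spec=client.V1IngressSpec(
            ingress_class_name="nginx",   # route through the NGINX Ingress Controller
            rules=[
                client.V1IngressRule(
                    host="demo.example.com",
                    http=client.V1HTTPIngressRuleValue(
                        paths=[
                            client.V1HTTPIngressPath(
                                path="/",
                                path_type="Prefix",
                                backend=client.V1IngressBackend(
                                    service=client.V1IngressServiceBackend(
                                        name="demo-app-svc",
                                        port=client.V1ServiceBackendPort(number=80),
                                    )
                                ),
                            )
                        ]
                    ),
                )
            ],
        ),
    )

    client.NetworkingV1Api().create_namespaced_ingress(namespace="default", body=ingress)

if __name__ == "__main__":
    create_demo_ingress()
```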
F5 Distributed Cloud Ingress Controller

The F5 XC Ingress Controller is supported only for Sites running Managed Kubernetes, also known as Physical K8s (PK8s). Deployment of the ingress controller is supported only using Helm. The Ingress Controller manages external access to HTTP services in a Kubernetes cluster using the F5 Distributed Cloud Services Platform. The ingress controller is a K8s deployment that configures the HTTP Load Balancer using the K8s ingress manifest file. The Ingress Controller automates the creation of the load balancer and other required objects, such as the VIP, Layer 7 routes (path-based routing), the advertise policy, and certificates (K8s secrets or an automatically generated custom certificate).

Conclusion

As you can see, the diverse ingress controller tools give you more flexibility, letting you tailor your architecture to organizational requirements and maintain application delivery and security practices across your application ecosystem.

Related Content and Technical demos
- BIG-IP Next SPK: a Kubernetes native ingress and egress gateway for Telco workloads
- F5 BIG-IP Next CNF solutions suite of Kubernetes native 5G Network Functions
- Announcing F5 NGINX Ingress Controller v4.0.0 | DevCentral
- JWT authorization with NGINX Ingress Controller
- My first CRD deployment with CIS | DevCentral
- BIG-IP Next for Kubernetes
- BIG-IP Next for Kubernetes (LA)
- BIG-IP Next Cloud-Native Network Functions (CNFs)
- CNF Home
- F5 NGINX Ingress Controller
- Overview of F5 BIG-IP Container Ingress Services
- NGINX One

LLMs And Trust, Google A2A Protocol And The Cost Of Politeness In AI: AI Friday

It's AI Friday! We're diving into the world of artificial intelligence like never before! 🎩 On this Hat Day edition (featuring NFL draft banter), we discuss fascinating topics like LLMs (Large Language Models) and their trust—or lack thereof—in humanity; Google's innovative Agent-to-Agent (A2A) protocol; and how politeness towards AI incurs millions in operational costs. We also touch on pivotal AI conversations around zero-trust, agentic AI, and the dynamic collapse of traditional control and data planes. Join us as we dissect how AI shapes the future of human interaction, enterprise-level security, and even animal communication. Don't miss out on this engaging, informative, and slightly chaotic conversation about cutting-edge advancements in AI. Remember to like, subscribe, and share with your community to ensure you never miss an episode of AI Friday!

Articles:
- What do LLMs Think Of Us?
- At What Price, Politeness?
- Google Agent2Agent Protocol (A2A)
master key file on vcmp guest HA Cluster different?

Hi, I'm reading into master-unit key workings and how to restore a password-protected UCS on an HA pair. The KB articles have been very helpful so far. I have two questions which I hope I can get an answer to.

1. When I use f5mku -K, the password is the same, but when I look at the master key content located in the /config/bigip/kstore/master file, the contents are different. Is it hashed with a different salt, or encrypted by the unit key?

2. Is the f5mku -K password the same as what you would enter here: tmsh modify /sys crypto master-key prompt-for-password?

Thanks a lot.

BIG-IP Next for Kubernetes addressing today's enterprise challenges
Enterprises have started adopting Kubernetes (K8s)—not just cloud service providers—as it offers strategic advantages in agility, cost efficiency, security, and future-proofing.

- Cloud Native Functions account for around 60% TCO savings.
- Easier to deploy, manage, maintain, and scale.
- Easier to add and roll out new services.

Kubernetes complexities

With the move from traditional application deployments to microservices and containerized services, some complexities were introduced.

Networking Challenges with Kubernetes Default Deployments

Kubernetes networking has several inherent challenges when using default configurations that can impact performance, security, and reliability in production environments.

Core Networking Challenges

Flat Network Model
- All pods can communicate with all other pods by default (east-west traffic).
- No network segmentation between applications.
- Potential security risks from excessive inter-pod communication.

Service Discovery Limitations
- DNS-based service discovery has caching behaviors that can delay updates.
- No built-in load balancing awareness (can route to unhealthy pods during updates).
- Limited traffic shaping capabilities (all requests treated equally).

Ingress Challenges
- No default ingress controller installed.
- Multiple ingress controllers can conflict if not properly configured.
- SSL/TLS termination requires manual certificate management.

Network Policy Absence
- No network policies applied by default (allow all traffic).
- Difficult to implement zero-trust networking principles.
- No default segmentation between namespaces.

DNS Issues
- CoreDNS default cache settings may not be optimal.
- Pod DNS policies may not match application requirements.
- Nodelocal DNS cache not enabled by default.

Load-Balancing Problems
- Service `ClusterIP` is the default (no external access).
- `NodePort` services can conflict on port allocations.
- Cloud provider load balancers can be expensive if overused.

CNI (Container Network Interface) Considerations
- Default CNI plugin may not support required features.
- Network performance varies significantly between CNI choices.
- IP address management challenges at scale.

Performance-Specific Issues

kube-proxy inefficiencies
- Default iptables mode becomes slow with many services.
- IPVS (IP Virtual Server) mode requires explicit configuration.
- Service mesh sidecars can double latency.

Pod Network Overhead
- Additional hops for cross-node communication.
- Encapsulation overhead with some CNI plugins.
- No QoS guarantees for network traffic.

Multicluster Communication
- No default solution for cross-cluster networking.
- Complex to establish secure connections between clusters.
- Service discovery doesn't span clusters by default.

Security Challenges
- No default encryption between pods.
- No default authentication for service-to-service communication.
- All namespaces are network-accessible to each other by default.
- External traffic can bypass ingress controllers if misconfigured.

These challenges highlight why most production Kubernetes deployments require significant, complex customization beyond the default configuration. Figure 1 shows those workarounds being implemented and how complicated our setup would be, with multiple addons required to overcome Kubernetes limitations.
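Before moving on, one concrete illustration of the "no network policies applied by default" gap listed above: the usual manual workaround is a namespace-level default-deny policy. The sketch below shows what that looks like when applied with the official Kubernetes Python client; the namespace name is hypothetical, and this is exactly the kind of per-cluster, per-namespace plumbing that Figure 1 alludes to.

```python
from kubernetes import client, config

def apply_default_deny(namespace: str = "demo-apps") -> None:
    """Create a default-deny NetworkPolicy covering every pod in the namespace."""
    config.load_kube_config()  # use load_incluster_config() when running inside a pod

    policy = client.V1NetworkPolicy(
        metadata=client.V1ObjectMeta(name="default-deny-all", namespace=namespace),
        spec=client.V1NetworkPolicySpec(
            pod_selector=client.V1LabelSelector(),   # empty selector = every pod
            policy_types=["Ingress", "Egress"],      # deny all traffic in both directions
        ),
    )
    client.NetworkingV1Api().create_namespaced_network_policy(
        namespace=namespace, body=policy
    )

if __name__ == "__main__":
    apply_default_deny()
```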
In the following section, we explore how BIG-IP Next for Kubernetes simplifies and enhances application delivery and security within the Kubernetes environment.

BIG-IP Next for Kubernetes

Introducing BIG-IP Next for Kubernetes not only reduces complexity, but moves the main networking components into the TMM pods rather than relying on the host server. Think of where current network functions are applied today: the host kernel. Whether you are doing NAT or firewalling services, this requires intervention on the host side, which impacts the zero-trust architecture, and traffic performance is limited by default kernel IP and routing capabilities.

Deployment overview

Among the features introduced in the 2.0.0 release:
- API GW CRs (Custom Resources).
- F5 IPAM Controller to manage IP addresses for the Gateway resource.
- Seamless firewall policy integration in Gateway API.
- Ingress DDoS protection in Gateway API.
- Enforced access control for Debug and QKView APIs with Admin Token.

In this section, we explore the steps to deploy BIG-IP Next for Kubernetes in your environment.

Infrastructure

Use different flavors depending on your needs and lab type (demo or production); for labs, microk8s, k8s, or kind, for example.

BIG-IP Next for Kubernetes

helm and docker are required packages for this installation. Follow the installation guide; the current BIG-IP Next for Kubernetes 2.0.0 GA release is available. For the objective of this article, you may skip the NVIDIA DOCA portion (that is the focus of the coming article) and go directly to BIG-IP Next for Kubernetes.

Install additional CRDs

Once the licensing and core pods are ready, you can move on to adding additional CRDs (Custom Resource Definitions):
- BIG-IP Next for Kubernetes CRDs: BIG-IP Next for Kubernetes CRDs
- Custom CRDs: Install F5 Use case Custom Resource Definitions

Related Content
- BIG-IP Next for Kubernetes v2.0.0 Release Notes
- System Requirements
- BIG-IP Next for Kubernetes CRDs
- BIG-IP Next for Kubernetes
- BIG-IP Next SPK: a Kubernetes native ingress and egress gateway for Telco workloads
- F5 BIG-IP Next for Kubernetes deployed on NVIDIA BlueField-3 DPUs
- BIG-IP Next for Kubernetes running in Amazon EKS
Introduction to BIG-IP SSL Orchestrator

Demo Video

What is SSL Orchestrator?

F5 BIG-IP SSL Orchestrator is designed and purpose-built to enhance SSL/TLS infrastructure, provide security solutions with visibility into SSL/TLS encrypted traffic, and optimize and maximize your existing security investments. BIG-IP SSL Orchestrator delivers dynamic service chaining and policy-based traffic steering, applying context-based intelligence to encrypted traffic handling to allow you to intelligently manage the flow of encrypted traffic across your entire security stack, ensuring optimal availability.

Designed to easily integrate with existing architectures and to centrally manage the SSL/TLS decrypt/re-encrypt function, BIG-IP SSL Orchestrator delivers the latest SSL/TLS encryption technologies across your entire security infrastructure. With BIG-IP SSL Orchestrator's high-performance encryption and decryption capabilities, your organization can quickly discover hidden threats and prevent attacks at multiple stages, leveraging your existing security solutions. BIG-IP SSL Orchestrator ensures encrypted traffic can be decrypted, inspected by security controls, then re-encrypted—delivering enhanced visibility to mitigate threats traversing the network. As a result, you can maximize your security services investment for malware, data loss prevention (DLP), ransomware, and NGFWs, thereby preventing inbound and outbound threats, including exploitation, callback, and data exfiltration.

Why is this important?

- Offload your SSL decryption compute resources to F5. Let F5 handle all the decrypt/encrypt functions so your security tools don't have to. This will increase the performance capabilities of your existing security solutions.
- Easily create policy to bypass decryption of sensitive traffic, such as banking, finance, and healthcare-related websites.
- Improve high availability by leveraging SSL Orchestrator to distribute load among a group of security devices, like Next Generation Firewalls.
- A comprehensive SSL decryption solution gives you much-needed visibility into encrypted traffic, which enables you to block encrypted threats.

SSL Orchestrator integrates with your existing infrastructure

An SSL Orchestrator "Service" is defined as a device that SSL Orchestrator passes decrypted traffic to. A Service can be Layer 2 or Layer 3. It can be unidirectional (TAP). It can be an ICAP server. It can be an Explicit or Transparent HTTP proxy.

A Layer 2 device (bridging/bump-in-the-wire) refers to connectivity without IP address configuration. Layer 2 devices pass all traffic from one interface to another interface. A Layer 3 device (typical NAT) refers to IP-address-to-IP-address connectivity. Layer 3 devices must be specifically configured to work on a network. An Explicit Proxy device also utilizes IP-address-to-IP-address connectivity; however, in this case, web applications have to be specifically configured to use an Explicit Proxy. A Transparent Proxy device also utilizes IP-address-to-IP-address connectivity; in this case, web applications do NOT need to be configured to use a proxy. Other types of devices are supported, like an ICAP server or a TAP device. An ICAP server is often used for Data Loss Prevention (DLP). A TAP device is often used for passive visibility, as it receives an exact copy of decrypted traffic.

Service Chains

Service Chains are user-defined groupings of one or more Services. Multiple Service Chains are supported by Policy (see next section). There are no restrictions on the type of Services that can be in a Service Chain. For example, a Service Chain can consist of one or more Layer 2 devices, and one or more Layer 3 devices, and so on.

Policy

SSL Orchestrator supports a flexible policy editor that determines what type of traffic to send or not to send to a Service Chain. For example, in the case of an Outbound (see next section) configuration, certain content can bypass SSL decryption based on URL categories like Banking, Finance, and Healthcare.
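As a rough conceptual illustration of policy-based steering (this is not SSL Orchestrator's actual policy engine; the categories and chain names are invented), a policy evaluates flow attributes and either bypasses decryption or selects a Service Chain:

```python
# Hypothetical sketch of policy-based traffic steering: category names and
# service chain names are invented for illustration only.
BYPASS_CATEGORIES = {"banking", "finance", "healthcare"}

SERVICE_CHAINS = {
    "full_inspection": ["L2-IPS", "L3-NGFW", "ICAP-DLP"],
    "light_inspection": ["L3-NGFW"],
}

def steer(url_category: str, destination_port: int) -> dict:
    """Decide whether to decrypt a flow and, if so, which service chain gets it."""
    if url_category in BYPASS_CATEGORIES:
        return {"decrypt": False, "service_chain": None}   # privacy-sensitive: bypass
    if destination_port == 443:
        return {"decrypt": True, "service_chain": SERVICE_CHAINS["full_inspection"]}
    return {"decrypt": True, "service_chain": SERVICE_CHAINS["light_inspection"]}

print(steer("banking", 443))   # {'decrypt': False, 'service_chain': None}
print(steer("news", 443))      # routed through the full inspection chain
```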
Topologies

A Topology defines how SSL Orchestrator is inserted into your traffic flow. It is defined as either Inbound or Outbound, and high-level parameters for how and what to intercept are defined here.

In an Inbound Topology, traffic comes from users on the internet to access an application like mobile banking or shopping. This may also be referred to as a reverse proxy. In an Outbound Topology, traffic comes from users on your network to access sites and applications on the internet. For example: a person who works at Apple HQ accessing the internet using the company's network. This may also be referred to as a forward proxy.

Conclusion

F5 BIG-IP SSL Orchestrator simplifies and accelerates the deployment of SSL visibility and orchestration services. Whether for modern, custom, or classic apps, and regardless of their location—be it on premises, in the cloud, or at the edge—BIG-IP SSL Orchestrator is built to handle today's dynamic app landscape.

Related Articles
- Integrating Security Solutions with F5 BIG-IP SSL Orchestrator