AI
59 TopicsF5 AI Gateway to Strengthen LLM Security and Performance in Red Hat OpenShift AI
In my previous article, we explored how F5 Distributed Cloud (XC) API Security enhances the perimeter of AI model serving in Red Hat OpenShift AI on ROSA by protecting against threats such as DDoS attacks, schema misuse, and malicious bots. As organizations move from piloting to scaling GenAI applications, a new layer of complexity arises. Unlike traditional APIs, LLMs process free-form, unstructured inputs and return non-deterministic responses—introducing entirely new attack surfaces. Conventional web or API firewalls fall short in detecting prompt injection, data leakage, or misuse embedded within model interactions. Enter F5 AI Gateway—a solution designed to provide real-time, LLM-specific security and optimization within the OpenShift AI environment. Understanding the AI Gateway Recent industry leaders have said that an AI Gateway layer is coming into use. This layer is between clients and LLM endpoints. It will handle dynamic prompt/response patterns, policy enforcement, and auditability. Inspired by these patterns, F5 AI Gateway brings enterprise-grade capabilities such as: Inspecting and Filtering Traffic: Analyzes both client requests and LLM responses to detect and mitigate threats such as prompt injection and sensitive data exposure. Implementing Traffic Steering Policies: Directs requests to appropriate LLM backends based on content, optimizing performance and resource utilization. Providing Comprehensive Logging: Maintains detailed records of all interactions for audit and compliance purposes. Generating Observability Data: Utilizes OpenTelemetry to offer insights into system performance and security events. These capabilities ensure that AI applications are not only secure but also performant and compliant with organizational policies. Integrated Architecture for Enhanced Security The combined deployment of F5 Distributed Cloud API Security and F5 AI Gateway within Red Hat OpenShift AI creates a layered defense strategy: F5 Distributed Cloud API Security: Acts as the first line of defense, safeguarding exposed model APIs from external threats. F5 AI Gateway: Operates within the OpenShift AI cluster, providing real-time inspection and policy enforcement tailored to LLM traffic. This layered design ensures multi-dimensional defense, aligning with enterprise needs for zero-trust, data governance, and operational resilience. Key Benefits of F5 AI Gateway Enhanced Security: Mitigates risks outlined in the OWASP Top 10 for LLM Applications - such as prompt injection (LLM01) - by detecting malicious prompts, enforcing system prompt guardrails, and identifying repetition-based exploits, delivering contextual, Layer 8 protection. Performance Optimization: Boosts efficiency through intelligent, context-aware routing and endpoint abstraction, simplifying integration across multiple LLMs. Scalability and Flexibility: Supports deployment across various environments, including public cloud, private cloud, and on-premises data centers. Comprehensive Observability: Provides detailed metrics and logs through OpenTelemetry, facilitating monitoring and compliance. Conclusion The rise of LLM applications requires a new architectural mindset. F5 AI Gateway complements existing security layers by focusing on content-level inspection, traffic governance, and compliance-grade visibility. It is specifically tailored for AI inference traffic. When used with Red Hat OpenShift AI, this solution provides not just security, but also trust and control. This helps organizations grow GenAI workloads in a responsible way. For a practical demonstration of this integration, please refer to the embedded demo video below. If you’re planning to attend this year’s Red Hat Summit, please attend an F5 session and visit us in Booth #648. Related Articles: Securing model serving in Red Hat OpenShift AI (on ROSA) with F5 Distributed Cloud API Security209Views0likes0CommentsAI, Red Teaming, and Post-Quantum Cryptography: Key Insights from RSA 2025
Join Aubrey and Byron at RSA Conference 2025 as they dive into transformative topics like artificial intelligence, red-teaming strategies, and post-quantum cryptography. From exploring groundbreaking OWASP sessions to analyzing emerging AI threats, this episode highlights key insights that shape the future of cybersecurity. Discover the challenges in red team AI testing, the implications of APIs in multi-cloud environments, and how quantum-resistant cryptography is rising to meet AI-driven threats. Don't miss this exciting recap of RSA 2025!90Views0likes0CommentsLLMs And Trust, Google A2A Protocol And The Cost Of Politeness In AI: AI Friday
It's AI Friday! We're diving into the world of artificial intelligence like never before! 🎩 On this Hat Day edition (featuring NFL draft banter), we discuss fascinating topics like LLMs (Large Language Models) and their trust—or lack thereof—in humanity; Google’s innovative Agent-to-Agent (A2A) protocol, and how politeness towards AI incurs millions in operational costs. We also touch on pivotal AI conversations around zero-trust, agentic AI, and the dynamic collapse of traditional control and data planes. Join us as we dissect how AI shapes the future of human interaction, enterprise-level security, and even animal communication. Don’t miss out on this engaging, informative, and slightly chaotic conversation about cutting-edge advancements in AI. Remember to like, subscribe, and share with your community to ensure you never miss an episode of AI Friday! Articles: What do LLMs Think Of Us? At What Price, Politeness? Google Agent2Agent Protocol (A2A)43Views0likes0Comments2025 Top AI Use Cases, AI For Nuclear Safety & CaMeL Prompt Injection Fixes
It's AI Friday! This week, we unpack The latest AI news and trends, including the top AI use cases for 2025 Intriguing new developments from OpenAI AI in nuclear safety with PG&E (what could possibly go wrong?) Novel defenses against prompt injection attacks with CaMeL LLM-powered conversations with dolphins Join Aubrey, Joel, Ken, and Byron as they blend in-depth insights with good-natured humor. Like and subscribe as we explore the future of AI together! Related Content: How are people using AI in 2025? OpenAI Models o3 and o4 think in images and concepts. OpenAI's 'Break Glass In Case of SkyNet' paper AI For Nuclear Safety - PG&E Can a CaMeL fix prompt injection?? Google's DolphinGemma LLM.. Yep. It's for talking to dolphins.58Views0likes0Commentsf5 AI Gateway pii-redactor not working
I am testing Ai Gateway by looking at NGINX Modern Apps Docs. I have verified that OWASP LLM 01, 07 are working, but 02 Sensitive Information Configuration does not seem to be working. The demo video also contains Sensitive Information related content. how config Sensitive Information masking for ai gateway? https://clouddocs.f5.com/training/community/nginx/html/class15/module6/module6.html The processor's log looks like this: {"time":"2025-04-11T00:55:04.71766415Z","level":"ERROR","msg":"applying config to component failed, rolling back","error":"failed to check processors: failed to fetch parameters for processor pii-redactor: unable to fetch parameters from url: http://aigw-processors-f5.devopschan.svc.cluster.local/api/v1/signature/f5/pii-redactor, got status: 404"} 2025/04/11 00:55:04 WARN will retry config apply in 5s (1 of 3) {"time":"2025-04-11T00:55:05.368088471Z","level":"INFO","msg":"successfully reported usage data"} {"time":"2025-04-11T00:55:09.767886333Z","level":"ERROR","msg":"applying config to component failed, rolling back","error":"failed to check processors: failed to fetch parameters for processor pii-redactor: unable to fetch parameters from url: http://aigw-processors-f5.devopschan.svc.cluster.local/api/v1/signature/f5/pii-redactor, got status: 404"} 2025/04/11 00:55:09 WARN will retry config apply in 5s (2 of 3) {"time":"2025-04-11T00:55:14.817815787Z","level":"ERROR","msg":"applying config to component failed, rolling back","error":"failed to check processors: failed to fetch parameters for processor pii-redactor: unable to fetch parameters from url: http://aigw-processors-f5.devopschan.svc.cluster.local/api/v1/signature/f5/pii-redactor, got status: 404"} configuration file : ... responseStages: - name: protect steps: - name: pii-redactor ... - name: pii-redactor type: external config: endpoint: http://aigw-processors-f5.devopschan.svc.cluster.local namespace: f5 version: 1 params: threshold: 0.2 # Default 0.2 allow_rewrite: true # Default false denyset: ["EMAIL","PHONE_NUMBER","STREETADDRESS","ZIPCODE"] ... thank you.47Views0likes1CommentAI Friday: Are Chatbots Passing the Turing Test? Explore the Ethics! - Ep.14
Join the DevCentral crew as we dive into groundbreaking AI research, explore how OpenAI's GPT models mimic humans, and discuss critical advancements in AI security and ethics. From Turing tests to deepfake controversies, this episode of AI Friday is packed with thought-provoking insights and practical takeaways for the AI-powered future. Secure your AI journey. Stay informed. Stay connected. It’s time to redefine intelligence—welcome to AI Friday! Associated articles: AI PASSES TURING TEST!!! Tracing LLM Reasoning AI Image Ethics More On Model Context Protocol And the episode...32Views1like0CommentsAI Friday LIVE w/ Steve Wilson - Vibe Coding, Agentic AI Security And More
Welcome to AI Friday! In this episode, we dive into the latest developments in Generative AI Security, discussing the implications and challenges of this emerging technology. Join Aubrey from DevCentral and the OWASP GenAI Security Project, along with an expert panel including Byron, Ken, Lori, and special guest Steve Wilson, as they explore the complexities of AI in the news and the evolving landscape of AI security. We also take a closer look at the fascinating topic of vibe coding, its impact on software development, and the transformative potential of AI-assisted coding practices. Whether you're a developer, security professional, or an AI enthusiast, this episode is packed with insights and expert opinions that you won't want to miss. Don't forget to like, subscribe, and join the conversation! Topics: Agentic Risk vs. Reward OWASP GenAI Security Project DeepSeek and Inference Performance Vibe Coding Roundtable Annnnd we may have shared some related, mostly wholesome memes.47Views0likes0Comments