AI
52 TopicsIntroducing AI Assistant for F5 Distributed Cloud, F5 NGINX One and BIG-IP
This article is an introduction to AI Assistant and shows how it improves SecOps and NetOps speed across all F5 platforms (Distributed Cloud, NGINX One and BIG-IP) , by solving the complexities around configuration, analytics, log interpretation and scripting.134Views2likes0CommentsScaling and Traffic-Managed Model Context Protocol (MCP) with BIG-IP Next for K8s
Introduction As AI models get more advanced, running them at scale—especially in cloud-native environments like Kubernetes—can be tricky. That’s where the Model Context Protocol (MCP) comes in. MCP makes it easier to connect and interact with AI models, but managing all the traffic and scaling these services as demand grows is a whole different challenge. In this article and demo video, I will show how F5's BIG-IP Next for K8s (BNK), a powerful cloud native traffic management platform from F5 can solve that and keep things running smoothly and scale your MCP services as needed. Model Context Protocol (MCP) in a nutshell. There were many articles explaining what is MCP on the internet. Please refer to those in details. In a nutshell, it is a standard framework or specification to securely connect AI apps to your critical data, tools, and workflow. The specification allow Tracking of context across multiple conversation Tool integration — model call external tools Share memory/state — remember information. MCP’s "glue" model to tools through a universal interface "USB-C for AI" What EXACTLY does MCP solve? MCP addresses many challenges in the AI ecosystem. I believe two key challenges it solve Complexities of integrating AI Model (LLM) with external sources and tools By standardization with a universal connector ("USB-C for AI") Everyone build "USB-C for AI" port so that it can be easily plug in each other Interoperability. Security with external integration Framework to establish secure connection Managing permission and authorization. What is BIG-IP’s Next for K8s (BNK)? BNK is F5 modernized version of the well-known Big-IP platform, redesigned to work seamlessly in cloud-native environments like Kubernetes. It is a scalable networking and security solution for ingress and egress traffic control. It builds on decades of F5's leadership in application delivery and security. It powers Kubernetes networking for today's complex workloads. BNK can be deployed on X86 architecture or ARM architecture - Nvidia Data Processing Unit (DPU) Lets see how F5's BNK scale and traffic managed an AIOps ecosystem. DEMO Architecture Setup Video Key Takeaways BIGIP Next for K8s, the backbone of the MCP architecture Technology built on decades of market-leading application delivery controller technology Secure, Deliver, and Optimize your AI infrastructure Provides deep insight through observability and visibility of your MCP traffic.48Views0likes0CommentsF5 AI Gateway to Strengthen LLM Security and Performance in Red Hat OpenShift AI
In my previous article, we explored how F5 Distributed Cloud (XC) API Security enhances the perimeter of AI model serving in Red Hat OpenShift AI on ROSA by protecting against threats such as DDoS attacks, schema misuse, and malicious bots. As organizations move from piloting to scaling GenAI applications, a new layer of complexity arises. Unlike traditional APIs, LLMs process free-form, unstructured inputs and return non-deterministic responses—introducing entirely new attack surfaces. Conventional web or API firewalls fall short in detecting prompt injection, data leakage, or misuse embedded within model interactions. Enter F5 AI Gateway—a solution designed to provide real-time, LLM-specific security and optimization within the OpenShift AI environment. Understanding the AI Gateway Recent industry leaders have said that an AI Gateway layer is coming into use. This layer is between clients and LLM endpoints. It will handle dynamic prompt/response patterns, policy enforcement, and auditability. Inspired by these patterns, F5 AI Gateway brings enterprise-grade capabilities such as: Inspecting and Filtering Traffic: Analyzes both client requests and LLM responses to detect and mitigate threats such as prompt injection and sensitive data exposure. Implementing Traffic Steering Policies: Directs requests to appropriate LLM backends based on content, optimizing performance and resource utilization. Providing Comprehensive Logging: Maintains detailed records of all interactions for audit and compliance purposes. Generating Observability Data: Utilizes OpenTelemetry to offer insights into system performance and security events. These capabilities ensure that AI applications are not only secure but also performant and compliant with organizational policies. Integrated Architecture for Enhanced Security The combined deployment of F5 Distributed Cloud API Security and F5 AI Gateway within Red Hat OpenShift AI creates a layered defense strategy: F5 Distributed Cloud API Security: Acts as the first line of defense, safeguarding exposed model APIs from external threats. F5 AI Gateway: Operates within the OpenShift AI cluster, providing real-time inspection and policy enforcement tailored to LLM traffic. This layered design ensures multi-dimensional defense, aligning with enterprise needs for zero-trust, data governance, and operational resilience. Key Benefits of F5 AI Gateway Enhanced Security: Mitigates risks outlined in the OWASP Top 10 for LLM Applications - such as prompt injection (LLM01) - by detecting malicious prompts, enforcing system prompt guardrails, and identifying repetition-based exploits, delivering contextual, Layer 8 protection. Performance Optimization: Boosts efficiency through intelligent, context-aware routing and endpoint abstraction, simplifying integration across multiple LLMs. Scalability and Flexibility: Supports deployment across various environments, including public cloud, private cloud, and on-premises data centers. Comprehensive Observability: Provides detailed metrics and logs through OpenTelemetry, facilitating monitoring and compliance. Conclusion The rise of LLM applications requires a new architectural mindset. F5 AI Gateway complements existing security layers by focusing on content-level inspection, traffic governance, and compliance-grade visibility. It is specifically tailored for AI inference traffic. When used with Red Hat OpenShift AI, this solution provides not just security, but also trust and control. This helps organizations grow GenAI workloads in a responsible way. For a practical demonstration of this integration, please refer to the embedded demo video below. If you’re planning to attend this year’s Red Hat Summit, please attend an F5 session and visit us in Booth #648. Related Articles: Securing model serving in Red Hat OpenShift AI (on ROSA) with F5 Distributed Cloud API Security274Views0likes0CommentsAI, Red Teaming, and Post-Quantum Cryptography: Key Insights from RSA 2025
Join Aubrey and Byron at RSA Conference 2025 as they dive into transformative topics like artificial intelligence, red-teaming strategies, and post-quantum cryptography. From exploring groundbreaking OWASP sessions to analyzing emerging AI threats, this episode highlights key insights that shape the future of cybersecurity. Discover the challenges in red team AI testing, the implications of APIs in multi-cloud environments, and how quantum-resistant cryptography is rising to meet AI-driven threats. Don't miss this exciting recap of RSA 2025!93Views0likes0CommentsLLMs And Trust, Google A2A Protocol And The Cost Of Politeness In AI: AI Friday
It's AI Friday! We're diving into the world of artificial intelligence like never before! 🎩 On this Hat Day edition (featuring NFL draft banter), we discuss fascinating topics like LLMs (Large Language Models) and their trust—or lack thereof—in humanity; Google’s innovative Agent-to-Agent (A2A) protocol, and how politeness towards AI incurs millions in operational costs. We also touch on pivotal AI conversations around zero-trust, agentic AI, and the dynamic collapse of traditional control and data planes. Join us as we dissect how AI shapes the future of human interaction, enterprise-level security, and even animal communication. Don’t miss out on this engaging, informative, and slightly chaotic conversation about cutting-edge advancements in AI. Remember to like, subscribe, and share with your community to ensure you never miss an episode of AI Friday! Articles: What do LLMs Think Of Us? At What Price, Politeness? Google Agent2Agent Protocol (A2A)44Views0likes0Comments2025 Top AI Use Cases, AI For Nuclear Safety & CaMeL Prompt Injection Fixes
It's AI Friday! This week, we unpack The latest AI news and trends, including the top AI use cases for 2025 Intriguing new developments from OpenAI AI in nuclear safety with PG&E (what could possibly go wrong?) Novel defenses against prompt injection attacks with CaMeL LLM-powered conversations with dolphins Join Aubrey, Joel, Ken, and Byron as they blend in-depth insights with good-natured humor. Like and subscribe as we explore the future of AI together! Related Content: How are people using AI in 2025? OpenAI Models o3 and o4 think in images and concepts. OpenAI's 'Break Glass In Case of SkyNet' paper AI For Nuclear Safety - PG&E Can a CaMeL fix prompt injection?? Google's DolphinGemma LLM.. Yep. It's for talking to dolphins.58Views0likes0CommentsAI Friday: Are Chatbots Passing the Turing Test? Explore the Ethics! - Ep.14
Join the DevCentral crew as we dive into groundbreaking AI research, explore how OpenAI's GPT models mimic humans, and discuss critical advancements in AI security and ethics. From Turing tests to deepfake controversies, this episode of AI Friday is packed with thought-provoking insights and practical takeaways for the AI-powered future. Secure your AI journey. Stay informed. Stay connected. It’s time to redefine intelligence—welcome to AI Friday! Associated articles: AI PASSES TURING TEST!!! Tracing LLM Reasoning AI Image Ethics More On Model Context Protocol And the episode...32Views1like0CommentsAI Friday LIVE w/ Steve Wilson - Vibe Coding, Agentic AI Security And More
Welcome to AI Friday! In this episode, we dive into the latest developments in Generative AI Security, discussing the implications and challenges of this emerging technology. Join Aubrey from DevCentral and the OWASP GenAI Security Project, along with an expert panel including Byron, Ken, Lori, and special guest Steve Wilson, as they explore the complexities of AI in the news and the evolving landscape of AI security. We also take a closer look at the fascinating topic of vibe coding, its impact on software development, and the transformative potential of AI-assisted coding practices. Whether you're a developer, security professional, or an AI enthusiast, this episode is packed with insights and expert opinions that you won't want to miss. Don't forget to like, subscribe, and join the conversation! Topics: Agentic Risk vs. Reward OWASP GenAI Security Project DeepSeek and Inference Performance Vibe Coding Roundtable Annnnd we may have shared some related, mostly wholesome memes.48Views0likes0Comments