AI
74 Topics

Just Announced! Attend a lab and receive a Raspberry Pi
Have a Slice of AI from a Raspberry Pi

Services such as ChatGPT have made accessing generative AI as simple as visiting a web page. Whether at work or at home, there are advantages to channeling your user base (or your family, in the at-home case) through a central point where you can apply safeguards to their usage.

In this lab, you will learn how to:
- Deliver centralized AI access through something as basic as a Raspberry Pi
- Apply basic methods for safeguarding AI
- Understand how users might circumvent basic safeguards
- Deploy additional services from F5 to enforce broader enterprise policies

Register Here

This lab takes place in an F5 virtual lab environment. Participants who complete the lab will receive a Raspberry Pi* to build the solution in their own environment.

*Limited stock. The Raspberry Pi is exclusive to this lab. To qualify, complete the lab and join a follow-up call with F5.

Hey DeepSeek, can you write iRules?
Back in time...

Two years ago I asked ChatGPT whether it could write iRules. My conclusion after giving several tasks to ChatGPT was that it can help with simple tasks, but it cannot write intermediate or complex iRules.

A new AI enters the competition

Two weeks ago DeepSeek entered the scene, and I thought it would be a good idea to ask it about its ability to write iRules. Spoiler alert: it cannot.

New AI, same challenges

I asked DeepSeek the same questions I asked ChatGPT two years ago:

1. Write me an iRule that redirects HTTP to HTTPS.
2. Can you write an iRule that rewrites the Host header in the HTTP request and response?
3. Can you write an iRule that will make a load-balancing decision based on the HTTP Host header?
4. Can you write an iRule that will make a load-balancing decision based on the HTTP URI?
5. Write me an iRule that shows different ASM blocking pages based on the Host header. The response should include the support ID.

I stopped asking DeepSeek after the 5th question; DeepSeek is clueless about iRules. The answer I got from DeepSeek to tasks 1, 2, 4 and 5 was always the same:

    when HTTP_REQUEST {
        # Check if the request is coming to port 80 (HTTP)
        if { [TCP::local_port] equals 80 } {
            # Construct the HTTPS URL
            set host [HTTP::host]
            set uri [HTTP::uri]
            set redirect_url "https://${host}${uri}"
            # Perform the redirect
            HTTP::redirect $redirect_url
        }
    }

While this is a solution to task 1, it is plain wrong for tasks 2, 3, 4 and 5. And even for the first challenge it is not a good answer - checking [TCP::local_port] is unnecessary when the iRule is attached to a virtual server that only listens on port 80. Actually, it hurts me reading this iRule...

The answer to task 2, for example, was just wrong... For task 3, DeepSeek's answer was:

ChatGPT in 2025

For completeness, I gave the same tasks from 2023 to ChatGPT again. Briefly said: ChatGPT was OK at solving tasks 1-4 in 2023 and still is. It improved its solution for task 5, the ASM iRule challenge. In 2023 I had two more tasks related to rewriting and redirecting; ChatGPT still failed to provide a solid solution for those two tasks.
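For reference, here is roughly what reasonable answers to tasks 1 and 2 could look like. These are sketches, not production-tested iRules - the hostnames are placeholders, and a real deployment would need to match its own naming and redirect requirements.

```tcl
# Task 1 - HTTP to HTTPS redirect. Attach this to the port-80 virtual
# server; no port check is needed there.
when HTTP_REQUEST {
    HTTP::respond 301 Location "https://[HTTP::host][HTTP::uri]"
}
```

```tcl
# Task 2 - rewrite the Host header on the request, and fix up Location
# headers on redirects coming back from the server. The two hostnames
# below are placeholders for illustration.
when HTTP_REQUEST {
    HTTP::header replace Host "internal.example.com"
}
when HTTP_RESPONSE {
    if { [HTTP::header exists Location] } {
        HTTP::header replace Location \
            [string map {internal.example.com www.example.com} [HTTP::header Location]]
    }
}
```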
Conclusion

DeepSeek cannot write iRules, and ChatGPT still isn't good at it. Write your own iRules, or ask the friendly people here on DevCentral to help you.

Securing Generative AI: Defending the Future of Innovation and Creativity
Protect your organization's generative AI investments by mitigating security risks effectively. This comprehensive guide examines the assets of AI systems, analyzes potential threats, and offers actionable recommendations to strengthen security and maintain the integrity of your AI-powered applications.

F5 Distributed Cloud WAF AI/ML Model to Suppress False Positives
Introduction

The Web Application Firewall (WAF) has evolved to protect web applications from attack. A signature-based WAF responds to threats through application-specific detection rules that block malicious traffic. These managed rules work extremely well for established attack vectors, as they have been extensively tested to minimize both false negatives and false positives.

Most web application development concentrates on delivering services seamlessly rather than on integrating security against every recent attack. Some applications have logic or operations that look suspicious and may trigger a WAF rule, but that is simply how those applications are built to behave. Under these circumstances the WAF treats requests to these areas as attacks, even though they are not, and the corresponding attack signature fires - this is called a false positive. Though the requests are legitimate, the WAF blocks them, and manually tuning the signature rule set requires significant human effort. AI/ML helps solve this problem so that real user requests are not blocked by the WAF.

This article covers the configuration of a WAF with automatic attack signature tuning to suppress false positives using an AI/ML model.

A More Intelligent Solution

The F5 Distributed Cloud (F5 XC) AI/ML model is a self-learning probabilistic machine learning model that suppresses false positives triggered by the signature engine. It acts as an additional layer of intelligence that identifies false positives and automatically suppresses them without human intervention. The model estimates the probability that a triggered signature is evidence of an actual attack rather than an error or a change in how users interact with the application.
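The general idea of such a probabilistic layer can be illustrated with a toy sketch. To be clear, this is not F5 XC's actual model (which is proprietary); the feature names, likelihood ratios, and base rate below are all made up for illustration only.

```python
# Toy illustration of a probabilistic layer on top of a signature engine:
# given features of the request that triggered a signature, estimate
# P(attack) and suppress the event when that probability is low.

# Hypothetical per-feature likelihood ratios, as if learned from labeled
# traffic: how much more often each feature appears in attacks vs. benign.
LIKELIHOOD_RATIOS = {
    "param_is_free_text": 0.2,        # free-text fields often trip SQLi/XSS rules
    "repeated_from_many_users": 0.1,  # many distinct users -> likely app behavior
    "known_scanner_agent": 50.0,
    "encoded_payload": 8.0,
}

PRIOR_ATTACK = 0.01  # assumed base rate of attacks in the traffic

def attack_probability(features):
    """Naive-Bayes style combination of likelihood ratios into P(attack)."""
    odds = PRIOR_ATTACK / (1 - PRIOR_ATTACK)
    for f in features:
        odds *= LIKELIHOOD_RATIOS.get(f, 1.0)
    return odds / (1 + odds)

def should_suppress(features, threshold=0.5):
    """Suppress the signature match when the request is probably benign."""
    return attack_probability(features) < threshold

# A comment field tripping a signature, seen from many distinct users:
print(should_suppress(["param_is_free_text", "repeated_from_many_users"]))  # True
# A known scanner sending an encoded payload:
print(should_suppress(["known_scanner_agent", "encoded_payload"]))          # False
```

The real model learns these statistics continuously from live traffic rather than from a hand-written table, but the shape of the decision - score the match, suppress below a threshold - is the same.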
The model is trained on vast amounts of benign and attack traffic from real customer logs. It does not rely on human involvement to understand operational patterns and user interactions with the web application, which saves a great deal of human effort.

Step-by-step procedure to enable attack signature tuning to suppress false positives

The steps to enable attack signatures and their accuracy are:
1. Create a firewall with automatic attack signature tuning enabled
2. Assign the firewall to a load balancer

Step 1: Create an App Firewall
- Navigate to F5 XC Console Home > Load Balancers > Security > App Firewall and click Add App Firewall.
- Enter a valid name for the firewall and navigate to Detection Settings.
- Within Detection Settings, set Security Policy to "Custom" and set Automatic Attack Signatures Tuning to "Enable" as shown below.
- Select Signature Selection by Accuracy as "High and Medium" from the dropdown.
- Scroll down to the bottom and click the "Save and Exit" button.

Step 2: Assign the Firewall to the Load Balancer
- From the F5 XC Console homepage, navigate to Load Balancers > Manage > Load Balancers > HTTP load balancer.
- Select the load balancer to which the firewall created above should be assigned.
- Click the menu in the Actions column of the load balancer and click Manage Configurations as shown below to display the load balancer configs.
- Once the load balancer configurations are displayed, click the Edit Configuration button at the top right of the page.
- Navigate to the Security Configuration settings and choose Enable in the Web Application Firewall (WAF) dropdown.
- Assign the firewall created in step 1 by selecting its name from the dropdown as shown below.
- Scroll down to the bottom and click the "Save and Exit" button. The firewall is now assigned to the load balancer.
Step 3: Verify the auto-suppressed signatures for false positives
- From the F5 XC Console homepage, navigate to Web App and API Protection > Apps & APIs > Security and select the load balancer.
- Select Security Events and click Add filter.
- Enter the keyword Signatures.states and select Auto Suppressed.
- The displayed logs show the signatures that were auto-suppressed by the AI/ML model.

Conclusion

With this additional layer of intelligence on top of the signature engine, F5 XC's AI/ML model automatically suppresses false positives without human intervention. Customers can be less concerned about application activity that looks suspicious but is in fact legitimate behaviour, because legitimate requests are no longer blocked. Decisions are based on enormous amounts of real data fed to the system to understand application and user behaviour, which makes the model more intelligent.

Secure, Deliver and Optimize Your Modern Generative AI Apps with F5
In this demo, Foo-Bang Chan explores how F5's solutions can help you implement, secure, and optimize your chatbots and other AI applications, ensuring they perform at their best while protecting sensitive data. One of the AI frameworks shown is Enterprise Retrieval-Augmented Generation (RAG). The demo leverages F5 Distributed Cloud (XC) AppStack, Distributed Cloud WAAP, NGINX Plus as API Gateway, API Discovery, API Protection, LangChain, vector databases, and Flowise AI.

Using ChatGPT for security and introduction of AI security
TL;DR

There are many security services that use ChatGPT. Methods of attacking AI include, for example, injecting noise into inputs, poisoning training data, and reverse engineering.

Introduction

When you hear "AI and security", two things come to mind. First, using AI for cyber security. Second, attacks against AI. In this article, I am going to discuss both topics.
- Using AI for security: introducing some security applications that use ChatGPT.
- Attacks against AI: what they are.

Using AI (ChatGPT) for security purposes

Since the announcement of GPT-3 in September 2020 and the release of many image-generating AIs in 2022, using AI has become commonplace. In particular, after the release of ChatGPT in November 2022, it immediately became popular because of its ability to generate quite natural-sounding sentences. ChatGPT can also generate code from natural language, and can explain the meaning of code, memory dumps, or logs in a way that is easy for humans to understand.

Finding an unusual pattern in a large amount of data is what AI is good at. Hence, there are services that use AI for incident response:
- Microsoft Security Copilot: a security incident response adviser.

This research uses ChatGPT to detect phishing sites and achieved 98.3% accuracy:
- Detecting Phishing Sites Using ChatGPT

Of course, ChatGPT can also be used for penetration testing:
- PentestGPT

However, not everyone is willing to share sensitive information with Microsoft or other vendors. It is possible to run a ChatGPT-like LLM offline on your own PC with an open-source LLM application such as gpt4all, which needs a GPU and a large amount of memory (128 GB+) to work:
- gpt4all

ChatGPT will continue to be used for both offensive and defensive security.

Attacks against AI

Before we discuss attacks against AI, let's briefly review how AI works. Research on AI has a long history. Generally, however, people use AI in the form of machine learning models or deep learning algorithms, some of which use neural networks.
In this article, we discuss Deep Neural Networks (DNNs).

DNN

A DNN works as follows. It consists of many nodes; a set of nodes forms a layer, and the layers are connected to each other (please see the pic below). Data from the input layer propagates through multiple hidden layers and finally reaches the output layer, which performs classification or regression. For example, you can feed many pictures of animals to a DNN to let it learn, and then have it identify (categorize) which animal appears in a picture.

What kinds of attacks are possible against AI?

The threat in cyber security is compromising a system's CIA (Confidentiality, Integrity, Availability). Attacks on AI force wrong decisions (losing integrity), make the AI unavailable (losing availability), or steal the decision model (losing confidentiality). Among these, the best-known attack methodology is to add noise to the input and force a wrong decision - this is called an Adversarial Example attack.

Adversarial Example attack

The Adversarial Example was illustrated in this paper from 2014:
- Explaining and Harnessing Adversarial Examples

The panda in the picture on the left side is the original data fed to the DNN - normally, the DNN will obviously categorize this picture as a panda. However, if the attacker adds noise (middle picture), the DNN misjudges it as a gibbon. In other words, the attack on the AI makes it produce a wrong decision without a human noticing anything.

The example above is an attack on an image classifier. Another example is ShapeShifter, which attacks an object detector. By making stop signs undetectable, it could cause an AI-driven self-driving car to have an accident without being noticed by humans.
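The panda example above can be reproduced in spirit with a tiny fast-gradient-sign sketch. Instead of a real DNN, this uses a single logistic "neuron", and all the weights and inputs are made-up numbers for illustration; the point is only the mechanism - nudge each input feature by a small step in the direction that increases the model's loss.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# A fixed, already-trained toy classifier: predicts class 1 when w.x + b > 0.
w = [2.0, -1.0, 0.5]
b = 0.1

def predict(x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if sigmoid(z) > 0.5 else 0

def fgsm(x, y_true, eps):
    """Fast-gradient-sign perturbation for logistic regression.
    d(loss)/dx_i = (sigmoid(w.x + b) - y_true) * w_i, so the attack adds
    eps * sign(gradient) to each input feature."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    err = sigmoid(z) - y_true
    return [xi + eps * (1 if err * wi > 0 else -1) for xi, wi in zip(x, w)]

x = [0.4, 0.2, 0.3]                 # correctly classified as class 1
print(predict(x))                    # 1
x_adv = fgsm(x, y_true=1, eps=0.5)
print(predict(x_adv))                # 0: the targeted nudge flips the decision
```

Against a real image classifier the same recipe operates on thousands of pixels at once, which is why a perturbation that flips the decision can stay imperceptibly small per pixel.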
- ShapeShifter: Robust Physical Adversarial Attack on Faster R-CNN Object Detector

Usually, a stop sign image is captured through an optical sensor in a self-driving car, and the object detector recognises it as a stop sign and follows the sign's instruction to stop. This attack causes the car to fail to recognise the stop sign.

You might think that the DNN model in a self-driving car is secret, so an attacker cannot obtain the information needed to attack that specific model. However, the paper below discusses how an adversarial example designed for one model can transfer to other models as well (transferability):
- Transferability Ranking of Adversarial Examples

That means even if an attacker is unable to examine the target DNN model, they can still experiment with and attack it using other DNN models.

Data poisoning attack

In an adversarial example attack, the data itself is not changed; instead, noise is added to the input. There is also an attack that poisons the training data. Data poisoning means gaining access to the training data used to train the DNN model and feeding in incorrect data, either to make the model produce results that benefit the attacker or to reduce the accuracy of its learning. Inserting a backdoor is also possible:
- Transferable Clean-Label Poisoning Attacks on Deep Neural Nets

Reverse engineering attack

In cryptography, one known class of vulnerability lets an attacker learn the encryption model by analyzing input/output strings, which are easy to obtain. Similarly, for AI models, there is a possibility of reverse engineering or copying a DNN model by analysing its inputs (training data) and outputs (decision results). These papers discuss this:
- Reverse-Engineering Deep Neural Networks Using Floating-Point Timing Side-Channels
- Copycat CNN

Finally, there's one last thing I'd like to say: this article was not generated by ChatGPT.

Getting Started With n8n For AI Automation
First, what is n8n?

If you're not familiar with n8n yet, it's a workflow automation utility that lets you connect services quite easily using nodes. It's been the subject of quite a bit of Artificial Intelligence hype because it helps you construct AI agents. I'm going to be diving more into n8n, what it can do with AI, and how to use our AI Gateway to defend against some difficult AI threats today. My hope is that you can use this in your own labs to prove out some of these things in your environment. Here's an example of how someone could use Ollama to control multiple Twitter accounts, for instance:

How do you install it?

Well... it's all Node.js, so the best way to install it in any environment is to ensure you have Node version 22 (on a Mac, brew install node@22) installed on your machine, as well as nvm (again, for Mac, brew install nvm), and then run npm install -g n8n. Done! Really... that simple.

How much does it cost?

While there is support and expanded functionality for paid subscribers, there is also a community edition, which I have used here, and it's free.

How to license:

F5 AI Gateway - Secure, Deliver and Optimize GenAI Apps
AI has revolutionized industries by automating tasks, enabling data-driven decisions, and enhancing efficiency and innovation. While it offers businesses a competitive edge by streamlining operations and improving customer experiences, it also introduces risks such as security vulnerabilities, data breaches, and cost challenges. Businesses must adopt robust cybersecurity measures and carefully manage AI investments to balance benefits with risks. F5 provides comprehensive controls to protect AI and IT infrastructures, ensuring sustainable growth in an AI-driven world. Welcome to F5 AI Gateway - a runtime security and traffic governance solution.