AI
54 Topics

Securing Generative AI: Defending the Future of Innovation and Creativity
Protect your organization's generative AI investments by mitigating security risks effectively. This comprehensive guide examines the assets of AI systems, analyzes potential threats, and offers actionable recommendations to strengthen security and maintain the integrity of your AI-powered applications.

F5 Distributed Cloud WAF AI/ML Model to Suppress False Positives
Introduction

A Web Application Firewall (WAF) has evolved to protect web applications from attack. A signature-based WAF responds to threats through application-specific detection rules that block malicious traffic. These managed rules work extremely well against established attack vectors, as they have been extensively tested to minimize both false negatives and false positives.

Most web application development is focused on delivering services seamlessly rather than on integrating security controls against every recent attack. Some applications have logic or operations that look suspicious and can trigger a WAF rule, even though that is simply how the application is designed to behave for its purpose. Under these circumstances the WAF treats legitimate requests as attacks and raises the corresponding attack signature; this is called a false positive. Although the requests are legitimate, the WAF blocks them, and manually updating the signature rule set to compensate requires significant human effort. AI/ML helps solve this problem so that real user requests are not blocked by the WAF. This article describes how to configure a WAF with automatic attack signature tuning that suppresses false positives using an AI/ML model.

A More Intelligent Solution

The F5 Distributed Cloud (F5 XC) AI/ML model is a self-learning probabilistic machine learning model that suppresses false positives triggered by the signature engine. It acts as an additional layer of intelligence that identifies false positives and automatically suppresses them without human intervention. The model estimates the probability that a triggered signature is evidence of an attack rather than an error or a change in how users interact with the application. It is trained on a vast amount of benign and attack traffic from real customer logs, so it does not rely on human involvement to understand operational patterns and user interactions with the web application, saving a lot of manual effort.

Step-by-step procedure to enable attack signature tuning to suppress false positives

These are the steps to enable attack signatures and their accuracy:
- Create a firewall with Automatic Attack Signatures Tuning enabled.
- Assign the firewall to a load balancer.

Step 1: Create an App Firewall
- Navigate to F5 XC Console Home > Load Balancers > Security > App Firewall and click Add App Firewall.
- Enter a valid name for the firewall and navigate to Detection Settings.
- Within Detection Settings, set Security Policy to "Custom" and set Automatic Attack Signatures Tuning to "Enable" as shown below.
- Select Signature Selection by Accuracy as "High and Medium" from the dropdown.
- Scroll down to the bottom and click the "Save and Exit" button.

Step 2: Assign the Firewall to the Load Balancer
- From the F5 XC Console homepage, navigate to Load Balancers > Manage > Load Balancers > HTTP Load Balancer.
- Select the load balancer to which the firewall created above should be assigned.
- Click the menu in the Actions column of the load balancer and click Manage Configurations as shown below to display the load balancer configuration.
- Once the load balancer configuration is displayed, click the Edit Configuration button at the top right of the page.
- Navigate to the Security Configuration settings and choose Enable in the Web Application Firewall (WAF) dropdown.
- Assign the firewall created in Step 1 to the load balancer by selecting its name from the Enable dropdown as shown below.
- Scroll down to the bottom and click the "Save and Exit" button. The firewall is now assigned to the load balancer.

Step 3: Verify the auto-suppressed signatures for false positives
- From the F5 XC Console homepage, navigate to Web App and API Protection > Apps & APIs > Security and select the load balancer.
- Select Security Events and click Add filter.
- Enter the keyword Signatures.states and select Auto Suppressed.
- The displayed logs show the signatures that were auto-suppressed by the AI/ML model.

Conclusion

With this additional layer of intelligence on top of the signature engine, the F5 XC AI/ML model can automatically suppress false positives without human intervention. Customers can worry less about application behavior that looks suspicious but is in fact legitimate, because those legitimate requests are no longer blocked. Decisions are based on an enormous amount of real data fed to the system to understand application and user behavior, which is what makes the model intelligent.
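The article does not describe the internals of the F5 XC model, so as a purely conceptual toy (not F5's implementation), the sketch below shows one signal a probabilistic suppressor could use: if many distinct, otherwise-benign clients trip the same signature on the same endpoint, that trigger is more likely normal application behavior than an attack. The class name, thresholds, and signature ID are all hypothetical.

```python
# Conceptual toy only -- NOT F5's AI/ML model. Idea: a signature that fires for a large
# share of distinct clients on one endpoint is probably reacting to normal app behavior.
from collections import defaultdict

class ToySignatureTuner:
    def __init__(self, min_clients=50, fp_rate_threshold=0.30):   # hypothetical thresholds
        self.min_clients = min_clients
        self.fp_rate_threshold = fp_rate_threshold
        self.sig_clients = defaultdict(set)    # (endpoint, signature_id) -> clients that tripped it
        self.all_clients = defaultdict(set)    # endpoint -> every client seen

    def observe(self, endpoint, client_ip, triggered_signature_ids):
        self.all_clients[endpoint].add(client_ip)
        for sig in triggered_signature_ids:
            self.sig_clients[(endpoint, sig)].add(client_ip)

    def should_suppress(self, endpoint, signature_id):
        total = len(self.all_clients[endpoint])
        if total < self.min_clients:
            return False                        # not enough evidence yet
        rate = len(self.sig_clients[(endpoint, signature_id)]) / total
        return rate >= self.fp_rate_threshold   # "everyone trips it" -> likely a false positive


# Example with a hypothetical signature ID: three out of three clients trip it on /search.
tuner = ToySignatureTuner(min_clients=3)
for ip in ("10.0.0.1", "10.0.0.2", "10.0.0.3"):
    tuner.observe("/search", ip, ["200001475"])
print(tuner.should_suppress("/search", "200001475"))   # True -> suppress on this endpoint
```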
Secure, Deliver and Optimize Your Modern Generative AI Apps with F5

In this demo, Foo-Bang Chan explores how F5's solutions can help you implement, secure, and optimize your chatbots and other AI applications. This will ensure they perform at their best while protecting sensitive data. One of the AI frameworks shown is Enterprise Retrieval-Augmented Generation (RAG). The demo leverages F5 Distributed Cloud (XC) AppStack, Distributed Cloud WAAP, NGINX Plus as API Gateway, API Discovery, API Protection, LangChain, vector databases, and Flowise AI.
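For readers new to the RAG pattern mentioned above, here is a minimal, framework-agnostic sketch of its core loop (embed documents, retrieve the most similar ones, and include them in the prompt). It is only an illustration of the concept, not the demo's LangChain/Flowise configuration; embed() and generate() are placeholder functions for whatever embedding model and LLM you use.

```python
# Minimal RAG loop sketch -- illustrative only, not the demo's actual stack.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder embedding: a deterministic pseudo-random vector per text.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(384)

def generate(prompt: str) -> str:
    # Placeholder for an LLM call (a local model or a hosted API).
    return f"[LLM answer based on a prompt of {len(prompt)} characters]"

def rag_answer(question: str, documents: list, top_k: int = 2) -> str:
    doc_vectors = np.stack([embed(d) for d in documents])          # a "vector database" in miniature
    q = embed(question)
    scores = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    best = np.argsort(scores)[::-1][:top_k]                        # indices of the most similar documents
    context = "\n".join(documents[i] for i in best)
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return generate(prompt)

print(rag_answer("What does a WAF do?", ["A WAF blocks malicious web traffic.",
                                         "RAG retrieves documents to ground an LLM's answer."]))
```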
Using ChatGPT for security and introduction of AI security

TL;DR

There are many security services that use ChatGPT. Methods of attacking AI include, for example, adding noise to inputs, poisoning training data, and reverse engineering.

Introduction

When you hear "AI and security", two things come to mind: first, using AI for cyber security; second, attacks against AI. In this article I am going to discuss both topics.
- Using AI for security: introducing some security applications that use ChatGPT.
- Attacks against AI: what they are.

Using AI (ChatGPT) for security purposes

Since the announcement of GPT-3 in September 2020 and the release of many image-generating AIs in 2022, using AI has become commonplace. In particular, after its release in November 2022, ChatGPT immediately became popular because of its ability to generate quite natural sentences. ChatGPT is also used to generate code from natural-language descriptions, and it can explain the meaning of code, memory dumps, or logs in a way that is easy for humans to understand.

Finding unusual patterns in a large amount of data is what AI is good at, so there are services that use AI for incident response.
- Microsoft Security Copilot: a security incident response adviser.

This research uses ChatGPT to detect phishing sites and reports 98.3% accuracy.
- Detecting Phishing Sites Using ChatGPT

Of course, ChatGPT can also be used for penetration testing.
- PentestGPT

However, not everyone is willing to share sensitive information with Microsoft or other vendors. It is possible to run a ChatGPT-like LLM offline on your own PC using an open-source LLM application such as gpt4all. gpt4all needs a GPU and a large amount of memory (128G+) to work.
- gpt4all

ChatGPT will continue to be used for both offensive and defensive security.

Attack against AI

Before we discuss attacks against AI, let's briefly review how AI works. Research on AI has a long history, but today AI generally means machine learning models or deep learning algorithms, some of which use neural networks. In this article, we discuss the Deep Neural Network (DNN).

DNN

A DNN works as follows. It consists of many nodes; a set of nodes forms a layer, and the layers are connected to each other (please see the picture below). Data from the input layer propagates through multiple hidden layers and finally reaches the output layer, which performs classification or regression. For example, you can feed a DNN many pictures of animals to train it, and then have it identify (categorize) which animal appears in a picture.

What kind of attacks are possible against AI?

Threats in cyber security compromise a system's CIA (Confidentiality, Integrity, Availability). Attacks on AI force wrong decisions (loss of integrity), make the AI unavailable (loss of availability), or steal the decision model (loss of confidentiality). Among these, the best-known attack methodology is to add noise to the input and force a wrong decision - this is called an Adversarial Example attack.

Adversarial Example attack

The adversarial example is illustrated in this paper from 2014:
- Explaining and Harnessing Adversarial Examples

The panda in the picture on the left is the original data fed to the DNN - normally, the DNN will obviously categorize this picture as a panda. However, if the attacker adds a noise (middle picture), the DNN misjudges it as a gibbon. In other words, the attack on the AI makes the AI reach a wrong decision without being noticed by humans.
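To make the "add a noise" step concrete, below is a minimal sketch of the Fast Gradient Sign Method (FGSM) described in the paper above, written with PyTorch. It is only an illustration: the classifier, input batch, true labels, and the epsilon value are assumed placeholders you would supply yourself.

```python
# Minimal FGSM sketch (after Goodfellow et al., 2014) -- illustrative only.
# `model` is an assumed trained classifier, `image` a normalized input batch in [0, 1],
# `label` the true class indices, and `epsilon` the strength of the barely visible noise.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.007):
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)         # loss with respect to the correct label
    loss.backward()                                      # gradient of the loss w.r.t. the input pixels
    noise = epsilon * image.grad.sign()                  # small perturbation in the worst direction
    return torch.clamp(image + noise, 0, 1).detach()     # still looks like a panda to a human
```

Fed back into the same classifier, the returned image typically yields a confident but wrong prediction - the panda-to-gibbon effect described above.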
The panda example above is an attack on an image classifier. Another example is ShapeShifter, which attacks an object detector. It can make an AI-equipped self-driving car cause an accident, without humans noticing, by making stop signs undetectable.
- ShapeShifter: Robust Physical Adversarial Attack on Faster R-CNN Object Detector

Normally, a stop sign image is captured through an optical sensor in a self-driving car, and the object detector recognises it as a stop sign so the car follows the instruction on the sign and stops. This attack causes the car to fail to recognise the stop sign.

You might think that because the DNN model in a self-driving car is kept confidential, an attacker cannot obtain the information needed to attack that specific model. However, the paper below shows that an adversarial example designed for one model can transfer to other models as well (transferability).
- Transferability Ranking of Adversarial Examples

That means that even if an attacker is unable to examine the target DNN model, they can still experiment with, and attack through, other DNN models.

Data poisoning attack

In an adversarial example attack, the model and its training data are left untouched; noise is only added to the input. There are also attacks that poison the training data itself. Data poisoning means gaining access to the training data used to train the DNN model and inserting incorrect data so that the model produces results profitable to the attacker, or so that the accuracy of learning is reduced. Planting a backdoor is also possible.
- Transferable Clean-Label Poisoning Attacks on Deep Neural Nets

Reverse engineering attack

In cryptography, there are vulnerabilities where an attacker can learn the encryption model by analyzing input/output strings that are easy to obtain. Similarly, for AI models there is a possibility of reverse engineering a DNN model, or copying it, by analysing its inputs (training data) and outputs (decision results). These papers discuss this:
- Reverse-Engineering Deep Neural Networks Using Floating-Point Timing Side-Channels
- Copycat CNN
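As a rough illustration of the model-copying idea (in the spirit of Copycat CNN, though greatly simplified and not the paper's actual method), the sketch below trains a surrogate network purely on the target model's observed decisions. The 10-class target model and the probe inputs are assumptions.

```python
# Toy model-extraction sketch -- illustrative only, not the Copycat CNN implementation.
# The attacker never sees the training data, only the target's predictions on probe inputs.
import torch
import torch.nn as nn
import torch.nn.functional as F

def extract_surrogate(target_model, probe_inputs, num_classes=10, epochs=20):
    surrogate = nn.Sequential(nn.Flatten(), nn.Linear(probe_inputs[0].numel(), num_classes))
    optimizer = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
    with torch.no_grad():
        stolen_labels = target_model(probe_inputs).argmax(dim=1)      # observed decisions only
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = F.cross_entropy(surrogate(probe_inputs), stolen_labels)
        loss.backward()
        optimizer.step()
    return surrogate   # approximates the target's behavior without ever touching its weights or data
```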
Finally, there's one last thing I'd like to say: this article was not generated by ChatGPT.

Getting Started With n8n For AI Automation

First, what is n8n? If you're not familiar with n8n yet, it's a workflow automation utility that lets you use nodes to connect services quite easily. It's been the subject of quite a bit of artificial intelligence hype because it helps you construct AI agents. I'm going to be diving more into n8n, what it can do with AI, and how to use our AI Gateway to defend against some difficult AI threats today. My hope is that you can use this in your own labs to prove out some of these things in your environment. Here's an example of how someone could use Ollama to control multiple Twitter accounts, for instance:

How do you install it? Well... it's all Node, so the best way to install it in any environment is to ensure you have Node version 22 installed on your machine (on macOS, via Homebrew: brew install node@22), as well as nvm (again, for macOS: brew install nvm), and then run npm install -g n8n. Done! Really... that simple.

How much does it cost? While there is support and expanded functionality for paid subscribers, there is also a community edition, which is what I have used here, and it's free.

How to license:
F5 AI Gateway - Secure, Deliver and Optimize GenAI Apps

AI has revolutionized industries by automating tasks, enabling data-driven decisions, and enhancing efficiency and innovation. While it offers businesses a competitive edge by streamlining operations and improving customer experiences, it also introduces risks such as security vulnerabilities, data breaches, and cost challenges. Businesses must adopt robust cybersecurity measures and carefully manage AI investments to balance benefits with risks. F5 provides comprehensive controls to protect AI and IT infrastructures, ensuring sustainable growth in an AI-driven world. Welcome to F5 AI Gateway - a runtime security and traffic governance solution.

SSL Orchestrator Advanced Use Cases: Detecting Generative AI
Introduction

Quick, take a look at the following list and answer this question: "What do these movies have in common?"
- 2001: A Space Odyssey
- Westworld
- Tron
- WarGames
- Electric Dreams
- The Terminator
- The Matrix
- Eagle Eye
- Ex Machina
- Avengers: Age of Ultron
- M3GAN

If you answered, "They're all about artificial intelligence", yes, but... If you answered, "They're all about artificial intelligence that went terribly, sometimes horribly wrong", you'd be absolutely correct. The simple fact is...artificial intelligence (AI) can be scary. Proponents for, and opponents against will disagree on many aspects, but they can all at least acknowledge there's a handful of ways to do AI correctly...and a million ways to do it badly. Not to be an alarmist, but while SkyNet was fictional, semi-autonomous guns on robot dogs is not...

But then why am I talking about this on a technical forum you may ask? Well, when most of the above films were made, AI was largely still science fiction. That's clearly not the case anymore, and tools like ChatGPT are just the tip of the coming AI frontier. To be fair, I don't make the claim that all AI is bad, and many have indeed lauded ChatGPT and other generative AI tools as the next great evolution in technology. But it's also fair to say that generative AI tools, like ChatGPT, have a very real potential to cause harm. At the very least, these tools can be convincing, even when they're wrong. And worse, they could lead to sensitive information disclosures. One only has to do a cursory search to find a few examples of questionable behavior:
- Lawyers File Motion Written by AI, Face Sanctions and Possible Disbarment
- Higher Ed Beware: 10 Dangers of ChatGPT Schools Need to Know
- ChatGPT and AI in the Workplace: Should Employers Be Concerned?
- OpenAI's New Chatbot Will Tell You How to Shoplift and Make Explosives
- Giant Bank JP Morgan Bans ChatGPT Use Among Employees
- Samsung Bans ChatGPT Among Employees After Sensitive Code Leak

But again...what does this have to do with a technical forum? And more important, what does this have to do with you? Simply stated, if you are in an organization where generative AI tools could be abused, understanding, and optionally controlling how and when these tools are accessed, could help to prevent the next big exploit or disclosure. If you search beyond the above links, you'll find an abundance of information on both the benefits, and security concerns of AI technologies. And ultimately you'll still be left to decide if these AI tools are safe for your organization. It may simply be worthwhile to understand WHAT tools are being used. And in some cases, it may be important to disable access to these. Given the general depth and diversity of AI functions within arms-reach today, and growing, it'd be irresponsible to claim "complete awareness". The bulk of these functions are delivered over standard HTTPS, so the best course of action will be to categorize on known assets, and adjust as new ones come along. As of the publishing of this article, the industry has yet to define a standard set of categories for AI, and specifically, generative AI. So in this article, we're going to build one and attach that to F5 BIG-IP SSL Orchestrator to enable proactive detection and optional control of Internet-based AI tool access in your organization. Let's get started!

BIG-IP SSL Orchestrator Use Case: Detecting Generative AI

The real beauty of this solution is that it can be implemented faster than it probably took to read the above introduction.
Essentially, you're going to create a custom URL category on F5 BIG-IP, populate that with known generative AI URLs, and employ that custom category in a BIG-IP SSL Orchestrator security policy rule. Within that policy rule, you can elect to dynamically decrypt and send the traffic to the set of inspection products in your security enclave.

Step 1: Create the custom URL category and populate with known AI URLs
Access the BIG-IP command shell and run the following command. This will initiate a script that creates and populates the URL category:

curl -s https://raw.githubusercontent.com/f5devcentral/sslo-script-tools/main/sslo-generative-ai-categories/sslo-create-ai-category.sh |bash

Step 2: Create a BIG-IP SSL Orchestrator policy rule to use this data
The above script creates/re-populates a custom URL category named SSLO_GENERATIVE_AI_CHAT, and in that category is a set of known generative AI URLs. To use it, navigate to the BIG-IP SSL Orchestrator UI and edit a Security Policy. Click Add to create a new rule, use the "Category Lookup (All)" policy condition, then add the above URL category. Set the Action to "Allow", the SSL Proxy Action to "Intercept", and the Service Chain to whatever service chain you've already created.

With Summary Logging enabled in the BIG-IP SSL Orchestrator topology configuration, you'll also get Syslog reporting for each AI resource match - who made the request, to what, and when.

The URL category is employed here to identify known AI tools. In this instance, BIG-IP SSL Orchestrator is used to make that assessment and act on it (i.e. allow, TLS intercept, service chain, log). Should you want even more granular control over the conditions and actions applied to the decrypted AI tool traffic, you can also deploy an F5 Secure Web Gateway Services policy inside the SSL Orchestrator service chain. With SWG, you can expand beyond simple detection and blocking, and build more complex rules to decide who can access, when, and how.

It should be said that beyond logging, allowing, or denying access to generative AI tools, SSL Orchestrator also provides decryption and the opportunity to dynamically steer the decrypted AI traffic to any set of security products best suited to protect against any potential malware.

Summary

As previously alluded, this is not an exhaustive list of AI tool URLs. Not even close. But it contains the most common you'll see in the wild. The above script populates the category with an initial list of URLs that you are free to update as you become aware of new ones. And of course we invite you to recommend additional AI tools to add to this list.

References:
- https://github.com/f5devcentral/sslo-script-tools/tree/main/sslo-generative-ai-categories

How to Prepare Your Network Infrastructure to Add HPC Clusters for AI to Your Data Center
HPC AI clusters are getting deployed as highly engineered 'lego blocks' which are opaque to established data center operations and standards. By taking advantage of established Kubernetes-based networking solutions that provide high-speed intelligent networking, you can save yourself from expensive cost overruns, data center re-auditing, and delays. By using Kubernetes-based solutions which take advantage of the high-speed networking already required by HPC AI deployments, you further optimize your investment in AI.