SSL Orchestrator Advanced Use Cases: Detecting Generative AI
Introduction
Quick, take a look at the following list and answer this question: "What do these movies have in common?"
- 2001: A Space Odyssey
- Westworld
- Tron
- WarGames
- Electric Dreams
- The Terminator
- The Matrix
- Eagle Eye
- Ex Machina
- Avengers: Age of Ultron
- M3GAN
If you answered, "They're all about artificial intelligence", yes, but...
If you answered, "They're all about artificial intelligence that went terribly, sometimes horribly wrong", you'd be absolutely correct. The simple fact is...artificial intelligence (AI) can be scary. Proponents for, and opponents against will disagree on many aspects, but they can all at least acknowledge there's a handful of ways to do AI correctly...and a million ways to do it badly. Not to be an alarmist, but while SkyNet was fictional, semi-autonomous guns on robot dogs is not...
But why am I talking about this on a technical forum, you may ask? Well, when most of the above films were made, AI was largely still science fiction. That's clearly not the case anymore, and tools like ChatGPT are just the leading edge of the coming AI frontier. To be fair, I don't claim that all AI is bad, and many have indeed lauded ChatGPT and other generative AI tools as the next great evolution in technology. But it's also fair to say that generative AI tools like ChatGPT have a very real potential to cause harm. At the very least, these tools can be convincing even when they're wrong. And worse, they could lead to sensitive information disclosures. One only has to do a cursory search to find a few examples of questionable behavior:
- Lawyers File Motion Written by AI, Face Sanctions and Possible Disbarment
- Higher Ed Beware: 10 Dangers of ChatGPT Schools Need to Know
- ChatGPT and AI in the Workplace: Should Employers Be Concerned?
- OpenAI's New Chatbot Will Tell You How to Shoplift and Make Explosives
- Giant Bank JP Morgan Bans ChatGPT Use Among Employees
- Samsung Bans ChatGPT Among Employees After Sensitive Code Leak
But again...what does this have to do with a technical forum? And more importantly, what does this have to do with you? Simply stated, if you are in an organization where generative AI tools could be abused, understanding, and optionally controlling, how and when these tools are accessed could help prevent the next big exploit or disclosure. If you search beyond the above links, you'll find an abundance of information on both the benefits and security concerns of AI technologies. And ultimately, you'll still be left to decide whether these AI tools are safe for your organization. It may simply be worthwhile to understand WHAT tools are being used, and in some cases, it may be important to disable access to them.
Given the depth and diversity of AI functions within arm's reach today, and growing, it would be irresponsible to claim "complete awareness". The bulk of these functions are delivered over standard HTTPS, so the best course of action is to categorize known assets and adjust as new ones come along. As of this article's publishing, the industry has yet to define a standard set of categories for AI, and specifically generative AI. So in this article, we're going to build one and attach it to F5 BIG-IP SSL Orchestrator to enable proactive detection and optional control of Internet-based AI tool access in your organization. Let's get started!
BIG-IP SSL Orchestrator Use Case: Detecting Generative AI
The real beauty of this solution is that it can be implemented faster than it probably took to read the above introduction. Essentially, you're going to create a custom URL category on F5 BIG-IP, populate that with known generative AI URLs, and employ that custom category in a BIG-IP SSL Orchestrator security policy rule. Within that policy rule, you can elect to dynamically decrypt and send the traffic to the set of inspection products in your security enclave.
- Step 1: Create the custom URL category and populate it with known AI URLs - Access the BIG-IP command shell and run the following command. This initiates a script that creates and populates the URL category:
curl -s https://raw.githubusercontent.com/f5devcentral/sslo-script-tools/main/sslo-generative-ai-categories/sslo-create-ai-category.sh |bash
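If you'd like to see exactly what was built, or prefer to understand the mechanics before piping a remote script into bash, the category is an ordinary BIG-IP url-db object that can be inspected and managed from tmsh. The sketch below is illustrative only: the URL entries shown are examples rather than the script's actual list, and it assumes the SSLO_GENERATIVE_AI_CHAT category name the script uses.

# Inspect the category and its URL entries after the script runs
tmsh list sys url-db url-category SSLO_GENERATIVE_AI_CHAT

# Illustrative equivalent of what the script automates (example entries only)
tmsh create sys url-db url-category SSLO_GENERATIVE_AI_CHAT urls add { "https://chat.openai.com/*" { type glob-match } "https://bard.google.com/*" { type glob-match } }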
- Step 2: Create a BIG-IP SSL Orchestrator policy rule to use this data - The above script creates (or re-populates) a custom URL category named SSLO_GENERATIVE_AI_CHAT, containing a set of known generative AI URLs. To use it, navigate to the BIG-IP SSL Orchestrator UI and edit a Security Policy. Click Add to create a new rule, use the "Category Lookup (All)" policy condition, then add the above URL category. Set the Action to "Allow", the SSL Proxy Action to "Intercept", and the Service Chain to whatever service chain you've already created.
With Summary Logging enabled in the BIG-IP SSL Orchestrator topology configuration, you'll also get Syslog reporting for each AI resource match: who made the request, to what, and when.
The URL category is employed here to identify known AI tools. In this instance, BIG-IP SSL Orchestrator is used to make that assessment and act on it (i.e. allow, TLS intercept, service chain, log). Should you want even more granular control over conditions and actions of the decrypted AI tool traffic, you can also deploy an F5 Secure Web Gateway Services policy inside the SSL Orchestrator service chain. With SWG, you can expand beyond simple detection and blocking, and build more complex rules to decide who can access, when, and how.
It should be said that beyond logging, allowing, or denying access to generative AI tools, SSL Orchestrator also provides decryption and the opportunity to dynamically steer the decrypted AI traffic to whatever set of security products is best suited to protect against potential malware.
Summary
As alluded to earlier, this is not an exhaustive list of AI tool URLs. Not even close. But it contains the most common ones you'll see in the wild. The above script populates the category with an initial list of URLs that you are free to update as you become aware of new ones. And of course, we invite you to recommend additional AI tools to add to this list.
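For reference, adding a newly discovered tool by hand is a one-line tmsh operation. A minimal sketch, with a placeholder URL standing in for whatever new tool you encounter:

# Add a new (placeholder) generative AI URL to the existing custom category
tmsh modify sys url-db url-category SSLO_GENERATIVE_AI_CHAT urls add { "https://new-ai-tool.example.com/*" { type glob-match } }
# Persist the change across restarts
tmsh save sys config

Keep in mind that re-running the Step 1 script re-populates the category, so track any manual additions you make.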
References: https://github.com/f5devcentral/sslo-script-tools/tree/main/sslo-generative-ai-categories
Comment from sonunaeem (Altostratus):
Catching Generative AI with SSL Orchestrator Advanced Use Case
As the pace of technology development increases with each passing year, cybersecurity in conjunction with generative AI has become a growing topic worldwide. The advanced use case reviewed here, applying SSL Orchestrator to the detection of generative AI, represents a key strategic advance for cybersecurity frameworks.
SSL Orchestrator manages and controls SSL-encrypted traffic at scale, which matters now that more than 80% of all internet traffic moves over encrypted channels. This is necessary given the risk of misuse of generative AI models, which can be used to generate convincing phishing emails, create deepfakes, or even mimic genuine network traffic to evade detection.
On that last point, SSL Orchestrator is particularly important for decrypting and inspecting SSL/TLS traffic without hampering speed or productivity. This set of advanced use cases shows how organizations can use the tool to monitor for generative AI activity. For example, combined with machine learning, it can recognize patterns of behavior that deviate from the expected norm rather than simply flagging activities like unusual data exfiltration or AI-generated content propagation.
Integrated threat intelligence feeds help the system adjust detection parameters on the fly. Herein lies the value of this proactive approach: it keeps organizations a few steps ahead of the next wave of generative AI threats.
SSL Orchestrator can also help correlate traditional threat detection tools with AI-generated tactics, enabling cybersecurity teams to gain a much better understanding of their security posture. These advanced analytics can churn through high volumes of encrypted traffic, looking for malicious content and triggering additional incident response actions.
That said, this approach is not without its challenges. Administering SSL inspection is difficult, especially amid the flood of guidelines around both privacy and protection. This creates a delicate balance for organizations: they must not only perform SSL inspection on encrypted traffic effectively to maintain security, but, more than ever, do so under an equally detailed and implementable policy.