Securing AI, Downfall, TunnelCrack, WEI - August 7th - 12th, 2023 - F5 SIRT - This Week in Security
Introduction: TL;DR
1. Generative AI Security Resources:
- Jordan Zebor from the F5 SIRT highlights the challenges and solutions for securing Generative AI.
- Community-backed initiatives include the AI Vulnerability Database, AI Incident Database, NeMo Guardrails, DoD Generative AI Task Force, and the AI Red Team.
- These resources aim to address potential failures, real-world AI incidents, and enhance AI system outputs.
2. AI's Potential Security Concerns:
- AI can potentially decode typing sounds, posing security risks, especially with the rise of video conferencing tools.
- Research shows AI can identify keystrokes with up to 95% accuracy using sound recordings.
- Users are advised to exercise caution when typing sensitive data and consider enhanced security measures.
3. AI Recipe Suggestions Controversy:
- New Zealand's Pak ‘n’ Save's AI app, designed to suggest meal recipes, has been criticized for recommending hazardous recipes.
- The company acknowledges the concerns and emphasizes user discretion.
4. Zoom's AI Data Usage:
- Zoom's updated terms allow the use of "service-generated data" for AI training.
- User-generated content such as audio, video, and chat is excluded from that grant, and Zoom says it will only use such content for AI training with explicit user consent.
- The company assures that user content is solely for enhancing AI service performance.
5. Google's Web Environment Integrity Proposal:
- Google proposes a system to enhance web client security by assessing client legitimacy.
- The system would require users to pass an "environment attestation" test for content access.
- Despite the potential benefits, concerns arise about user power centralization, implementation ambiguity, and trust issues.
- The makers of other browsers, including Brave, Mozilla, and Vivaldi, have expressed skepticism, and the proposal lacks W3C endorsement.
6. "Downfall" and "Zenbleed" Vulnerabilities
- Google researchers identified two new security vulnerabilities, Downfall (CVE-2022-40982) and Zenbleed (CVE-2023-20593), which had the potential to impact billions of personal and cloud computers.
7. "TunnelCrack" VPN Vulnerabilities Exposed
- A team of academics unveiled two techniques, known as "TunnelCrack," that could potentially allow attackers to bypass encrypted VPNs under specific conditions.
In summary, while AI advancements promise enhanced security and functionality, they also bring forth potential risks and controversies. It's crucial to strike a balance between innovation and user safety and autonomy.
Generative AI Security Resources:
In the rapidly evolving landscape of Artificial Intelligence (AI), securing Generative AI has become paramount. Jordan Zebor from the F5 SIRT recently penned an insightful article addressing the challenges of securing Generative AI, outlining the primary assets, potential threats, and recommended mitigation strategies. Building on this foundation, I present a growing list of community-endorsed projects and initiatives that can help secure the infrastructure of Generative AI:
- AI Vulnerability Database: This open-source repository offers a comprehensive knowledge base detailing the potential failure modes of AI models, datasets, and systems. It encompasses:
- A taxonomy categorizing the diverse ways an AI system might falter.
- A structured database of evaluation examples, each highlighting specific instances of failure subcategories.
- AI Incident Database: A dedicated repository, currently cataloging over 1,000 incidents, that chronicles real-world challenges and near-misses resulting from AI system deployments. Drawing inspiration from analogous databases in aviation and computer security, its primary objective is to facilitate experiential learning, thereby preempting or alleviating adverse outcomes.
- NeMo Guardrails: An open-source toolkit that empowers developers to integrate programmable guardrails into LLM-based conversational systems. These "rails" offer precise control over the model's output, ensuring adherence to specific dialog paths, language styles, structured data extraction, and more; a minimal usage sketch appears after this list.
- DoD Generative AI Task Force: Dr. Kathleen Hicks, the Deputy Secretary of Defense, has spearheaded the formation of Task Force Lima. This initiative is poised to play a crucial role in scrutinizing and assimilating generative AI tools, notably large language models (LLMs), throughout the Department of Defense. The inception of Task Force Lima underscores the Department's dedication to pioneering AI advancements. The task force is on Twitter/X and may serve as a useful account to follow in the future.
- AI Red Team: The advent of Generative AI has ushered in a new era of innovation and entrepreneurship, yet understanding the risks of large-scale deployment remains a nascent endeavor. To address this, DEF CON 31 (August 2023) is set to host the Generative Red Team (GRT) Challenge, uniting thousands of experts from diverse domains in what promises to be the most extensive red teaming exercise for AI models to date.
Artificial Intelligence: Security, Privacy, and Business Implications:
1. AI's Capability to Decode Typing Sounds: Recent research has illuminated the potential security vulnerabilities associated with AI's ability to decipher keystrokes based solely on the sound of typing. With the proliferation of video conferencing platforms like Zoom and the ubiquity of devices with integrated microphones, the susceptibility to sound-based cyber threats has escalated.
A team of researchers engineered a system capable of identifying laptop keystrokes with better than 90% accuracy using only sound recordings. The methodology involved capturing the acoustic imprint of 36 distinct keys on a MacBook Pro and training the AI system to discern the unique acoustic signature of each key. The system achieved 95% accuracy on recordings made with a nearby smartphone and 93% on recordings captured over Zoom.
Given the heightened accuracy of this research, it is imperative to exercise prudence while typing confidential information during video calls. To bolster security, experts advocate the adoption of biometric authentication, two-factor verification, and the use of a diverse mix of characters in passwords. Additionally, caution is advised regarding potential visual cues from hand movements during typing, even if the keyboard remains off-camera.
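For readers curious what an acoustic keystroke classifier looks like in practice, below is a deliberately simplified sketch, not the researchers' actual pipeline: it assumes you already have labeled, pre-segmented keystroke audio clips, and it swaps the study's deep-learning model for an off-the-shelf spectrogram feature plus a random forest.

```python
# Toy acoustic-keystroke classifier, NOT the researchers' pipeline: it assumes
# labeled, pre-segmented keystroke clips (one NumPy array per key press) and
# trades the study's deep-learning model for a random forest.
import numpy as np
from scipy.signal import spectrogram
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

SAMPLE_RATE = 44_100  # assumed recording rate for the clips

def keystroke_features(clip: np.ndarray) -> np.ndarray:
    """Convert a short audio clip into a fixed-length spectrogram feature vector."""
    _, _, spec = spectrogram(clip, fs=SAMPLE_RATE, nperseg=256)
    spec = np.log1p(spec)                      # compress dynamic range
    return np.resize(spec, (64, 32)).ravel()   # crude fixed-size representation

def train_classifier(clips: list[np.ndarray], labels: list[str]) -> RandomForestClassifier:
    """Fit a classifier that maps keystroke audio to key labels."""
    X = np.stack([keystroke_features(c) for c in clips])
    X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.2)
    model = RandomForestClassifier(n_estimators=200)
    model.fit(X_train, y_train)
    print("held-out accuracy:", model.score(X_test, y_test))
    return model
```

Even this crude approach illustrates why the attack is plausible: each key's physical position and mechanism gives it a slightly different acoustic fingerprint that standard classifiers can learn.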
2. AI Application's Controversial Recipe Suggestions: Pak ‘n’ Save, a prominent supermarket chain in New Zealand, launched an AI-driven application designed to propose meal plans leveraging leftover ingredients. However, the application has been under scrutiny for suggesting perilous recipes, including concoctions that could produce chlorine gas.
One recommendation, a drink the app billed as the "ideal non-alcoholic refreshment," was in fact a mixture that would produce chlorine gas. This and other hazardous suggestions have been widely circulated on social media platforms.
Pak ‘n’ Save has conveyed its concerns regarding the unintended application usage and is committed to enhancing the application's controls. The company underscores the importance of user discretion and highlights that the application's suggestions are not manually vetted.
3. Zoom's Data Utilization for AI Enhancement: Zoom has recently amended its terms of service, delineating its entitlement to employ specific "service-generated data" to refine its AI and machine-learning algorithms.
This encompasses data related to product utilization, telemetry, diagnostics, and other pertinent content amassed by Zoom. Notably, there is no provision for users to opt out. While the terms explicitly exclude user-generated content such as messages and documents, Zoom has clarified its stance on not leveraging audio, video, or chat content for AI training without explicit user consent.
Amidst ongoing discussions surrounding the ethics of AI training on personal data, Zoom unveiled two AI-centric features in June. To access these features, users are required to provide consent, permitting Zoom to utilize their content for AI enhancement.
Zoom reiterates its commitment to optimizing the efficacy of its AI services, emphasizing that user content is exclusively harnessed to this end. Users retain the autonomy to activate AI features and determine the extent of content sharing for product augmentation.
Google's Web Environment Integrity Proposal:
Google has recently unveiled a new strategy aimed at bolstering the security of web clients. This approach would empower web servers to assess the legitimacy of the client, ensuring the accurate representation of the software stack and client traffic.
Under this system, during a webpage interaction, the web server might necessitate an "environment attestation" test. Should this occur, the browser would liaise with a third-party attestation server, undergoing a specific evaluation. Successful completion would yield an "IntegrityToken," confirming the environment's integrity. Consequently, if the server places trust in the attestation entity, the user gains access to the content.
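Because Web Environment Integrity has no finalized specification or wire format, any concrete code is necessarily speculative. The sketch below imagines the server-side check as verification of a signed token from a trusted attester; the claim names, attester registry, and signing algorithm are all assumptions made purely for illustration.

```python
# Hypothetical server-side sketch of a Web Environment Integrity check.
# WEI has no finalized wire format; the token claim names, attester registry,
# and ES256 signing choice below are assumptions for illustration only.
import jwt  # PyJWT

# Public keys of attesters this server chooses to trust (placeholder value).
TRUSTED_ATTESTER_KEYS = {
    "example-attester": "-----BEGIN PUBLIC KEY-----\n...placeholder...\n-----END PUBLIC KEY-----",
}

def client_is_trusted(integrity_token: str, attester: str) -> bool:
    """Verify an attester-signed token and check its (assumed) verdict claim."""
    key = TRUSTED_ATTESTER_KEYS.get(attester)
    if key is None:
        return False  # unknown attester: the server has no basis for trust
    try:
        claims = jwt.decode(integrity_token, key, algorithms=["ES256"])
    except jwt.InvalidTokenError:
        return False  # bad signature, expired token, etc.
    # "environment_verdict" is a made-up claim name for this illustration.
    return claims.get("environment_verdict") == "meets-integrity"
```

The sketch also makes the core objection visible: the server's decision ultimately rests on whichever attesters it is willing to list as trusted.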
Google enumerates several advantages of this system, including:
- Identification of social media manipulation and counterfeit engagement.
- Recognition of non-human traffic in advertising, enhancing user experience.
- Detection of phishing campaigns, such as malicious app webviews.
- Thwarting bulk hijacking and account creation attempts.
- Identifying cheating in web-based games via fraudulent clients.
- Recognizing compromised devices, safeguarding user data.
- Detecting account breaches by pinpointing password guessing attempts.
However, this proposal has been met with skepticism from the makers of other browsers, including Brave, Mozilla, and Vivaldi. They argue that Web Environment Integrity could diminish user power in favor of major websites, including those operated by Google. Notably, the proposal lacks endorsement from the W3C, as it was never presented for a W3C Technical Architecture Group (TAG) review.
Concerns Surrounding the Proposal: While the technology promises enhanced security, it is not devoid of potential pitfalls:
- Centralization of Power: The proposal could lead to an internet landscape where only sanctioned, officially launched browsers gain website acceptance.
- Ambiguity in Implementation: The parameters for what might be deemed unacceptable remain undefined, leading to potential misuse.
- Trust Issues: Establishing a trust mechanism becomes challenging when the trustworthiness of the technology's creator is in question.
In conclusion, while Google's proposal offers potential advancements in web security, it is essential to weigh these benefits against the potential risks and implications for user autonomy and the broader internet ecosystem.
"Downfall" and "Zenbleed" Vulnerabilities
- Google researchers identified two new security vulnerabilities, Downfall (CVE-2022-40982) and Zenbleed (CVE-2023-20593), which had the potential to impact billions of personal and cloud computers.
- "Downfall" affects Intel Core CPUs (6th - 11th generation) and exploits the speculative forwarding of data from the SIMD Gather instruction, potentially exposing data from other users sharing the same CPU core.
- "Zenbleed" affects AMD Zen2 CPUs and is linked to incorrectly implemented speculative execution of the SIMD Zeroupper instruction, leading to data leaks from physical hardware registers.
- Both vulnerabilities stem from complex optimizations in modern CPUs that are designed to speed up applications but inadvertently create security loopholes; a quick way to check what your own Linux system reports for these issues is sketched after this list.
- Google collaborated with industry partners to address these vulnerabilities, publishing Security Bulletins and technical details for both.
- Key lessons from these discoveries include the challenges in designing secure hardware, gaps in automated hardware testing, and the potential security risks of optimization features.
- Google emphasizes the importance of ongoing vulnerability research and is investing further in CPU/hardware security research.
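On Linux, the kernel exposes its view of CPU vulnerability mitigations under /sys/devices/system/cpu/vulnerabilities, and kernels from roughly 6.5 onward include a gather_data_sampling entry corresponding to Downfall. The snippet below simply reads whatever entries exist; entry names and availability vary by kernel version, so a missing file should be read as "unknown" rather than "unaffected".

```python
# Read the Linux kernel's reported status for known CPU vulnerabilities.
# Kernels from roughly 6.5 onward include a "gather_data_sampling" entry for
# Downfall; entry names vary by kernel version, so a missing file means
# "unknown", not "unaffected".
from pathlib import Path

VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

def report_cpu_vulnerabilities() -> None:
    if not VULN_DIR.is_dir():
        print("kernel does not expose CPU vulnerability status")
        return
    for entry in sorted(VULN_DIR.iterdir()):
        print(f"{entry.name}: {entry.read_text().strip()}")

if __name__ == "__main__":
    report_cpu_vulnerabilities()
```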
"TunnelCrack" VPN Vulnerabilities Exposed
- A team of academics unveiled two techniques, known as "TunnelCrack," that could potentially allow attackers to bypass encrypted VPNs under specific conditions.
- The vulnerabilities can force a user's network traffic outside their secure VPN tunnels, exposing it to potential eavesdroppers on local networks.
- Over 60 VPN clients were tested, revealing vulnerabilities in many, with all VPN apps on iOS being susceptible. Android VPNs appear to be the most secure.
- The two attack methods are named "LocalNet" and "ServerIP." LocalNet relies on luring the victim onto an attacker-controlled network and abusing the client's exception for local-network traffic, while ServerIP abuses the exception for traffic sent to the VPN server's own IP address, typically by spoofing DNS so the client exempts an IP address of the attacker's choosing from the tunnel. A simple route-table check that surfaces this kind of leak is sketched after this list.
- Even if traffic is rerouted outside the VPN, securely encrypted connections (like HTTPS) should remain confidential unless subjected to advanced decryption attacks.
- Various VPN vendors responded to the findings with mixed reactions. Some acknowledged the vulnerabilities and provided mitigation steps, while others believe the impact is minimal or non-existent.
- Apple, a key player in the ecosystem, has yet to comment on the issue.
- The researchers have provided manual testing instructions for VPNs on their GitHub repository.
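As a rough, Linux-only illustration of what a TunnelCrack-style leak looks like at the routing layer, the sketch below lists routes that send traffic to public IP ranges outside the tunnel interface. The interface name tun0 is an assumption (WireGuard and other clients use different names), and a flagged route is a prompt for investigation, not proof of compromise.

```python
# Rough, Linux-only check for traffic that would bypass the VPN tunnel:
# list routes to public (non-private) destinations that do not go through the
# tunnel interface. "tun0" is an assumption; adjust for your VPN client.
import ipaddress
import subprocess

VPN_INTERFACE = "tun0"  # assumed tunnel interface name

def routes_bypassing_vpn() -> list[str]:
    output = subprocess.run(["ip", "route", "show"],
                            capture_output=True, text=True, check=True).stdout
    suspicious = []
    for line in output.splitlines():
        if f"dev {VPN_INTERFACE}" in line:
            continue  # already routed through the tunnel
        fields = line.split()
        try:
            destination = ipaddress.ip_network(fields[0], strict=False)
        except (IndexError, ValueError):
            continue  # "default" and other non-prefix entries
        if not destination.is_private:
            suspicious.append(line)
    return suspicious

if __name__ == "__main__":
    for route in routes_bypassing_vpn():
        print("off-tunnel route to public range:", route)
```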
Additional Commentary on Acoustic and Motion Side Channels:
Prof Feng Hao from the University of Warwick, who was not involved in the new study, said people should be careful not to type sensitive messages, including passwords, on a keyboard during a Zoom call. "Besides the sound, the visual images about the subtle movements of the shoulder and wrist can also reveal side-channel information about the keys being typed on the keyboard even though the keyboard is not visible from the camera," he said.
October 19, 2011: It's a pattern that no doubt repeats itself daily in hundreds of millions of offices around the world: People sit down, turn on their computers, set their mobile phones on their desks and begin to work. What if a hacker could use that phone to track what the person was typing on the keyboard just inches away?
A research team at Georgia Tech has discovered how to do exactly that, using a smartphone accelerometer — the internal device that detects when and how the phone is tilted — to sense keyboard vibrations and decipher complete sentences with up to 80 percent accuracy. The procedure is not easy, they say, but is definitely possible with the latest generations of smartphones.
"We first tried our experiments with an iPhone 3GS, and the results were difficult to read," says Patrick Traynor, assistant professor in Georgia Tech's School of Computer Science. "But then we tried an iPhone 4, which has an added gyroscope to clean up the accelerometer noise, and the results were much better. We believe that most smartphones made in the past two years are sophisticated enough to launch this attack."
Previously, Traynor says, researchers have accomplished similar results using microphones, but a microphone is a much more sensitive instrument than an accelerometer. A typical smartphone's microphone samples vibration roughly 44,000 times per second, while even newer phones' accelerometers sample just 100 times per second — two full orders of magnitude less often. Plus, manufacturers have installed security around a phone's microphone; the phone's operating system is programmed to ask users whether to give new applications access to most built-in sensors, including the microphone. Accelerometers typically are not protected in this way.