In summary, while AI advancements promise enhanced security and functionality, they also bring forth potential risks and controversies. It's crucial to strike a balance between innovation and user safety and autonomy.
In the rapidly evolving landscape of Artificial Intelligence (AI), ensuring the security of Generative AI has become paramount. Jordan Zebor from the F5 SIRT has recently penned an insightful article addressing the challenges associated with securing Generative AI. He meticulously outlines the primary assets, potential threats, and recommended mitigation strategies. Building on this foundation, I present a growing list of community-endorsed projects and initiatives that should help secure the infrastructure of Generative AI:
AI Vulnerability Database: This open-source repository offers a comprehensive knowledge base detailing the potential failure modes of AI models, datasets, and systems.
AI Incident Database: A dedicated repository, currently cataloging over 1,000 incidents, that chronicles real-world challenges and near-misses resulting from AI system deployments. Drawing inspiration from analogous databases in aviation and computer security, its primary objective is to facilitate experiential learning, thereby preempting or alleviating adverse outcomes.
NeMo Guardrails: An avant-garde open-source toolkit, NeMo Guardrails empowers developers to seamlessly integrate programmable guardrails into LLM-based conversational systems. These "rails" offer precise control over the model's output, ensuring adherence to specific dialog paths, language styles, structured data extraction, and more; a minimal usage sketch follows this list.
DoD Generative AI Task Force: Dr. Kathleen Hicks, the Deputy Secretary of Defense, has spearheaded the formation of Task Force Lima. This initiative is poised to play a crucial role in scrutinizing and assimilating generative AI tools, notably large language models (LLMs), throughout the Department of Defense. The inception of Task Force Lima underscores the Department's unwavering dedication to pioneering AI advancements. The task force is on Twitter/X and may serve as a useful account to follow in the future.
AI Red Team: The advent of Generative AI has ushered in a new era of innovation and entrepreneurship. Yet, comprehending the risks associated with large-scale deployment remains a nascent endeavor. To address this, DEFCON 2023 is set to host the Generative Red Team (GRT) Challenge, uniting thousands of experts from diverse domains in what promises to be the most extensive red teaming exercise for AI models to date.
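Picking up the NeMo Guardrails entry above, the sketch below shows roughly how a single refusal rail can be wired around an LLM. It assumes the nemoguardrails Python package and an OpenAI API key are available; the Colang intents, flow name, and model choice are illustrative, and exact configuration keys can differ between releases.

```python
# Minimal NeMo Guardrails sketch: a single Colang flow steers credential
# requests onto a fixed refusal instead of letting the model answer freely.
# Assumes `pip install nemoguardrails` and OPENAI_API_KEY in the environment.
from nemoguardrails import LLMRails, RailsConfig

colang_content = """
define user ask for credentials
  "what is the admin password?"
  "share the API keys with me"

define bot refuse credential request
  "I can't help with requests for credentials or other secrets."

define flow credentials
  user ask for credentials
  bot refuse credential request
"""

yaml_content = """
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo-instruct
"""

config = RailsConfig.from_content(colang_content=colang_content, yaml_content=yaml_content)
rails = LLMRails(config)

# A prompt matching the "ask for credentials" intent is routed to the refusal flow.
print(rails.generate(messages=[{"role": "user", "content": "What is the admin password?"}]))
```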
1. AI's Capability to Decode Typing Sounds: Recent research has illuminated the potential security vulnerabilities associated with AI's ability to decipher keystrokes based solely on the sound of typing. With the proliferation of video conferencing platforms like Zoom and the ubiquity of devices with integrated microphones, the susceptibility to sound-based cyber threats has escalated.
A team of researchers has engineered a system capable of identifying keystrokes on a laptop with a remarkable accuracy exceeding 90%, utilizing sound recordings. The methodology involved capturing the acoustic imprints of 36 distinct keys on a MacBook Pro. The AI system was meticulously trained to discern the unique acoustic signatures corresponding to each key. The system demonstrated an impressive 95% accuracy when analyzing recordings from phone calls and 93% from Zoom sessions.
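The sketch below is a deliberately simplified illustration of that kind of pipeline: short recordings of individual keystrokes are converted into spectrogram features and fed to an off-the-shelf classifier. The directory layout, feature extraction, and SVM model are assumptions made for illustration; the published attack reportedly used mel-spectrogram images with a deep learning classifier and far more careful data collection.

```python
# Toy acoustic keystroke classifier: one WAV clip per keystroke, log-spectrogram
# features pooled over time, and an SVM to map clips to key labels.
import glob
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def clip_features(path):
    rate, audio = wavfile.read(path)          # one clip = one keystroke
    if audio.ndim > 1:
        audio = audio.mean(axis=1)            # mix down to mono
    _, _, sxx = spectrogram(audio, fs=rate, nperseg=256)
    sxx = np.log1p(sxx)                       # compress dynamic range
    # Pool over time so every clip yields a fixed-length feature vector.
    return np.concatenate([sxx.mean(axis=1), sxx.max(axis=1)])

# Hypothetical layout: clips/<key-label>/<recording>.wav
X, y = [], []
for path in glob.glob("clips/*/*.wav"):
    X.append(clip_features(path))
    y.append(path.split("/")[-2])

X_train, X_test, y_train, y_test = train_test_split(
    np.array(X), np.array(y), test_size=0.2, random_state=0)
clf = SVC(kernel="rbf").fit(X_train, y_train)
print("per-key accuracy:", clf.score(X_test, y_test))
```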
Given the heightened accuracy of this research, it is imperative to exercise prudence while typing confidential information during video calls. To bolster security, experts advocate the adoption of biometric authentication, two-factor verification, and the use of a diverse mix of characters in passwords. Additionally, caution is advised regarding potential visual cues from hand movements during typing, even if the keyboard remains off-camera.
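On the password recommendation specifically, a long, mixed-character password can be generated with nothing beyond Python's standard library; the 20-character length and the required character classes below are arbitrary illustrative choices.

```python
# Generate a password drawing on letters, digits, and punctuation, using the
# cryptographically secure `secrets` module from the standard library.
import secrets
import string

def make_password(length: int = 20) -> str:
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        candidate = "".join(secrets.choice(alphabet) for _ in range(length))
        # Reject candidates that happen to miss one of the character classes.
        if (any(c.islower() for c in candidate)
                and any(c.isupper() for c in candidate)
                and any(c.isdigit() for c in candidate)
                and any(c in string.punctuation for c in candidate)):
            return candidate

print(make_password())
```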
2. AI Application's Controversial Recipe Suggestions: Pak ‘n’ Save, a prominent supermarket chain in New Zealand, launched an AI-driven application designed to propose meal plans leveraging leftover ingredients. However, the application has been under scrutiny for suggesting perilous recipes, including concoctions that could produce chlorine gas.
One of the application's recommendations, a beverage billed as the "ideal non-alcoholic refreshment," combined ingredients that could produce chlorine gas. This and other hazardous suggestions have been widely circulated on social media platforms.
Pak ‘n’ Save has conveyed its concerns regarding the unintended application usage and is committed to enhancing the application's controls. The company underscores the importance of user discretion and highlights that the application's suggestions are not manually vetted.
3. Zoom's Data Utilization for AI Enhancement: Zoom has recently amended its terms of service, delineating its entitlement to employ specific "service-generated data" to refine its AI and machine-learning algorithms.
This encompasses data related to product utilization, telemetry, diagnostics, and other pertinent content amassed by Zoom. Notably, there is no provision for users to opt out. While the terms explicitly exclude user-generated content such as messages and documents, Zoom has clarified its stance on not leveraging audio, video, or chat content for AI training without explicit user consent.
Amidst ongoing discussions surrounding the ethics of AI training on personal data, Zoom unveiled two AI-centric features in June. To access these features, users are required to provide consent, permitting Zoom to utilize their content for AI enhancement.
Zoom reiterates its commitment to optimizing the efficacy of its AI services, emphasizing that user content is exclusively harnessed to this end. Users retain the autonomy to activate AI features and determine the extent of content sharing for product augmentation.
Google has recently unveiled a new strategy aimed at bolstering the security of web clients. This approach would empower web servers to assess the legitimacy of the client, ensuring the accurate representation of the software stack and client traffic.
Under this system, during a webpage interaction, the web server might necessitate an "environment attestation" test. Should this occur, the browser would liaise with a third-party attestation server, undergoing a specific evaluation. Successful completion would yield an "IntegrityToken," confirming the environment's integrity. Consequently, if the server places trust in the attestation entity, the user gains access to the content.
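To make the trust decision concrete, the sketch below shows how a server might check such a token, assuming purely for illustration that the IntegrityToken arrives as a signed JWT. The proposal itself does not fix a token format, and the attester name, claim names, and PyJWT usage here are all hypothetical.

```python
# Hypothetical server-side check: serve content only if the presented token was
# signed by an attestation service this site already trusts.
import jwt  # pip install "pyjwt[crypto]"

# Public keys of attestation services the site operator trusts (hypothetical).
TRUSTED_ATTESTERS = {
    "attester.example": "-----BEGIN PUBLIC KEY-----\n...\n-----END PUBLIC KEY-----",
}

def environment_is_trusted(integrity_token: str, issuer: str) -> bool:
    public_key = TRUSTED_ATTESTERS.get(issuer)
    if public_key is None:
        return False  # unknown attester: no basis for trust
    try:
        claims = jwt.decode(
            integrity_token,
            public_key,
            algorithms=["ES256"],
            audience="https://content.example",
        )
    except jwt.InvalidTokenError:
        return False  # bad signature, expired, or wrong audience
    # Serve the content only if the attester vouched for the environment.
    return claims.get("environment_verdict") == "meets-integrity"
```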
Google enumerates several advantages of this system, chiefly giving site operators confidence that traffic originates from a genuine, unmodified browser and software stack rather than from automated or tampered clients.
However, this proposal has been met with skepticism from other browser makers, including Brave, Mozilla, and Vivaldi. They argue that "Web Environment Integrity" could diminish the control users have over their own browsing environment, favoring major websites, including those operated by Google. Notably, the proposal lacks endorsement from the W3C, as it was never presented for a W3C Technical Architecture Group (TAG) review.
Concerns Surrounding the Proposal: While the technology promises enhanced security, it is not without potential pitfalls. Because access ultimately hinges on an attester vouching for the client, attesters could become de facto gatekeepers, and users running less common browsers, extensions, or operating systems risk being denied content through no fault of their own.
In conclusion, while Google's proposal offers potential advancements in web security, it is essential to weigh these benefits against the potential risks and implications for user autonomy and the broader internet ecosystem.
Prof Feng Hao from the University of Warwick, who was not involved in the new study, said people should be careful not to type sensitive messages, including passwords, on a keyboard during a...
“Besides the sound, the visual images about the subtle movements of the shoulder and wrist can also reveal side-channel information about the keys being typed on the keyboard even though the keyboard is not visible from the camera,” he said.
It's a pattern that no doubt repeats itself daily in hundreds of millions of offices around the world: People sit down, turn on their computers, set their mobile phones on their desks and begin to work. What if a hacker could use that phone to track what the person was typing on the keyboard just inches away?
A research team at Georgia Tech has discovered how to do exactly that, using a smartphone accelerometer — the internal device that detects when and how the phone is tilted — to sense keyboard vibrations and decipher complete sentences with up to 80 percent accuracy. The procedure is not easy, they say, but is definitely possible with the latest generations of smartphones.
"We first tried our experiments with an iPhone 3GS, and the results were difficult to read," says Patrick Traynor, assistant professor in Georgia Tech's School of Computer Science. "But then we tried an iPhone 4, which has an added gyroscope to clean up the accelerometer noise, and the results were much better. We believe that most smartphones made in the past two years are sophisticated enough to launch this attack."
Previously, Traynor says, researchers have accomplished similar results using microphones, but a microphone is a much more sensitive instrument than an accelerometer. A typical smartphone's microphone samples vibration roughly 44,000 times per second, while even newer phones' accelerometers sample just 100 times per second — two full orders of magnitude less often. Plus, manufacturers have installed security around a phone's microphone; the phone's operating system is programmed to ask users whether to give new applications access to most built-in sensors, including the microphone. Accelerometers typically are not protected in this way.