Vanna AI flaw enables RCE via prompt injection

A high-severity security flaw in the Vanna.AI library exposes databases to remote code execution (RCE) attacks via prompt injection techniques. The vulnerability, tracked as CVE-2024-5565 (CVSS score: 8.1), resides in the library’s “ask” function and can be exploited to execute arbitrary commands. Vanna is a Python-based tool that lets users interact with their SQL databases by asking natural-language questions, which are translated into SQL queries using a large language model (LLM).
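At a high level, the workflow described above follows a simple text-to-SQL pattern. The sketch below is illustrative only; the function name, the llm callable, and the connection object are placeholders rather than Vanna’s actual API:

```python
import sqlite3

def ask(question: str, llm, conn: sqlite3.Connection) -> list:
    """Minimal sketch of the text-to-SQL pattern described above:
    the natural-language question is handed to an LLM, the model's reply
    is treated as SQL, and that SQL runs against the connected database.
    (Illustrative placeholder, not Vanna's implementation.)"""
    prompt = (
        "Translate the following question into a SQL query "
        "for the connected database.\n"
        f"Question: {question}\nSQL:"
    )
    sql = llm(prompt)                     # model output is trusted as-is
    return conn.execute(sql).fetchall()   # ...and executed directly
```

The key point is the trust boundary: whatever the model returns is executed without further checks, which is what makes injected instructions dangerous.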

The rising use of generative artificial intelligence (AI) models has brought new risks of exploitation by malicious actors, who can inject harmful prompts to bypass built-in safety mechanisms. “A prominent class of these attacks involves ‘jailbreaks,’ where an attacker uses a multi-step strategy to cause a model to ignore its guardrails,” said Mark Russinovich, chief technology officer of Microsoft Azure. In a novel jailbreak attack known as Skeleton Key, once the system rules are altered, the model can respond to prohibited queries without filtering harmful content.

The latest findings demonstrate how prompt injections can lead to severe impacts, particularly when tied to command execution.

Prompt injection vulnerability in Vanna AI

CVE-2024-5565 stems from Vanna’s core functionality: textual prompts are turned into SQL queries, which are then executed and their results visualized using the Plotly graphing library.
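The risky step boils down to a pattern like the following sketch (a simplification under assumed names, not Vanna’s source code): the LLM is also asked to write Plotly plotting code for the query result, and that generated Python is executed dynamically.

```python
import pandas as pd
import plotly.express as px  # made available to the generated code

def visualize(question: str, df: pd.DataFrame, llm) -> None:
    """Simplified sketch of the risky step: the LLM writes Plotly code for
    the query result, and its reply is exec()'d verbatim. Anything the model
    emits -- including Python smuggled in through a malicious prompt --
    runs with the application's privileges (hence the RCE)."""
    code = llm(
        "Write Python Plotly code that charts the dataframe `df` "
        f"to answer: {question}"
    )
    exec(code, {"df": df, "px": px})  # dynamic execution of untrusted output
```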

This creates a security hole: an attacker can embed a command in a prompt and have it executed on the underlying system. “The Vanna library’s ‘ask’ function, especially with visualizations enabled by default, can be manipulated to run arbitrary Python code instead of the intended visualization code,” researchers noted. Following responsible disclosure, Vanna issued guidance advising users to run the Plotly integration only in a sandboxed environment to mitigate the risk of RCE attacks.
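In that spirit, one way to reduce the blast radius (an assumed hardening sketch, not Vanna’s official mitigation) is to never exec() LLM-generated code inside the application process, for example by running it in a separate, isolated interpreter with a timeout:

```python
import subprocess
import sys
import tempfile

def run_generated_code_isolated(code: str, timeout: int = 5) -> str:
    """Run LLM-generated plotting code in a separate Python process started
    in isolated mode (-I) with a timeout. This alone is not a full sandbox;
    a hardened deployment would add a container or VM with no network access
    and a read-only filesystem."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        script_path = f.name
    result = subprocess.run(
        [sys.executable, "-I", script_path],
        capture_output=True,
        text=True,
        timeout=timeout,
    )
    return result.stdout
```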


“This discovery underscores the drastic implications of using generative AI and LLMs without proper security governance,” said Shachar Menashe, senior director of security research at JFrog. “The dangers of prompt injection are not yet widely known but are easy to execute. Companies should employ robust mechanisms beyond pre-prompting defenses when interfacing LLMs with critical resources.”
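One concrete example of a mechanism beyond system-prompt instructions is validating the model’s output before anything is executed. The check below is a deliberately simple, assumed illustration; a real gate would use a proper SQL parser and a read-only database role:

```python
def is_read_only_select(sql: str) -> bool:
    """Toy allow-list check on LLM-generated SQL: accept only a single
    statement that starts with SELECT and contains no statement separator.
    Illustrative only; not a complete defense against injection."""
    stripped = sql.strip().rstrip(";")
    if ";" in stripped:                      # reject chained statements
        return False
    return stripped.lower().startswith("select")
```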

For further inquiries, please contact our cybersecurity news desk.
