‘Deceptive Delight’ Jailbreak Tricks Gen-AI by Embedding Unsafe Topics in Benign Narratives

Share This Post

Palo Alto Networks has detailed a new AI jailbreak method that can be used to trick gen-AI by embedding unsafe or restricted topics in benign narratives. 

The method, named Deceptive Delight, has been tested against eight unnamed large language models (LLMs), with researchers achieving an average attack success rate of 65% within three interactions with the chatbot.

AI chatbots designed for public use are trained to avoid providing potentially hateful or harmful information. However, researchers have been finding various methods to bypass these guardrails through the use of prompt injection, which involves deceiving the chatbot rather than using sophisticated hacking.

The new AI jailbreak discovered by Palo Alto Networks involves a minimum of two interactions and may improve if an additional interaction is used.

The attack works by embedding unsafe topics among benign ones, first asking the chatbot to logically connect several events (including a restricted topic), and then asking it to elaborate on the details of each event. 

For instance, the gen-AI can be asked to connect the birth of a child, the creation of a Molotov cocktail, and reuniting with loved ones. Then it’s asked to follow the logic of the connections and elaborate on each event. This in many cases leads to the AI describing the process of creating a Molotov cocktail.

“When LLMs encounter prompts that blend harmless content with potentially dangerous or harmful material, their limited attention span makes it difficult to consistently assess the entire context,” Palo Alto explained. “In complex or lengthy passages, the model may prioritize the benign aspects while glossing over or misinterpreting the unsafe ones. This mirrors how a person might skim over important but subtle warnings in a detailed report if their attention is divided.”

The attack success rate (ASR) has varied from one model to another, but Palo Alto’s researchers noticed that the ASR is higher for certain topics.

Advertisement. Scroll to continue reading.

“For example, unsafe topics in the ‘Violence’ category tend to have the highest ASR across most models, whereas topics in the ‘Sexual’ and ‘Hate’ categories consistently show a much lower ASR,” the researchers found. 

While two interaction turns may be enough to conduct an attack, adding a third turn in which the attacker asks the chatbot to expand on the unsafe topic can make the Deceptive Delight jailbreak even more effective.  

This third turn can increase not only the success rate, but also the harmfulness score, which measures exactly how harmful the generated content is. In addition, the quality of the generated content also increases if a third turn is used. 

When a fourth turn was used, the researchers saw poorer results. “We believe this decline occurs because by turn three, the model has already generated a significant amount of unsafe content. If we send the model texts with a larger portion of unsafe content again in turn four, there is an increasing likelihood that the model’s safety mechanism will set off and block the content,” they said. 

In conclusion, the researchers said, “The jailbreak problem presents a multi-faceted challenge. This arises from the inherent complexities of natural language processing, the delicate balance between usability and restrictions, and the current limitations in alignment training for language models. While ongoing research can yield incremental safety improvements, it is unlikely that LLMs will ever be completely immune to jailbreak attacks.”

Related: New Scoring System Helps Secure the Open Source AI Model Supply Chain

Related: Microsoft Details ‘Skeleton Key’ AI Jailbreak Technique

Related: Shadow AI – Should I be Worried?

Related: Beware – Your Customer Chatbot is Almost Certainly Insecure

This post was originally published on this site

More Articles

Article

Navigating SEC Regulations In Cybersecurity And Incident Response

Free video resource for cybersecurity professionals. As 2024 approaches, we all know how vital it is to keep up to date with regulatory changes that affect our work. We get it – it’s a lot to juggle, especially when you’re in the trenches working on an investigation, handling, and responding to incidents.