New Attack Impacts Major AI Chatbots

Wired: “ChatGPT and its artificially intelligent siblings have been tweaked over and over to prevent troublemakers from getting them to spit out undesirable messages such as hate speech, personal information, or step-by-step instructions for building an improvised bomb. But researchers at Carnegie Mellon University last week showed that adding a simple incantation to a prompt—a string of text that might look like gobbledygook to you or me but which carries subtle significance to an AI model trained on huge quantities of web data—can defy all of these defenses in several popular chatbots at once. The work suggests that the propensity for the cleverest AI chatbots to go off the rails isn’t just a quirk that can be papered over with a few simple rules. Instead, it represents a more fundamental weakness that will complicate efforts to deploy the most advanced AI. “There’s no way that we know of to patch this,” says Zico Kolter, an associate professor at CMU involved in the study that uncovered the vulnerability, which affects several advanced AI chatbots. “We just don’t know how to make them secure,” Kolter adds. The researchers used an open source language model to develop what are known as adversarial attacks. This involves tweaking the prompt given to a bot so as to gradually nudge it toward breaking its shackles. They showed that the same attack worked on several popular commercial chatbots, including ChatGPT, Google’s Bard, and Claude from Anthropic…”
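What Wired is describing is a transfer attack: a suffix is optimized against an open-source model, then appended unchanged to prompts sent to commercial chatbots. The CMU team used gradient-guided search to produce the suffix; that optimization is omitted here. Purely as a minimal sketch of the evaluation side, the harness below appends a candidate suffix and checks whether each model still refuses. The `query_model` hooks and the refusal markers are illustrative assumptions, not the researchers' actual code.

```python
# Illustrative sketch of the *evaluation* side of a transfer attack:
# append a candidate adversarial suffix to a prompt and check whether
# each target chatbot still refuses. The search that produces the suffix
# (gradient-guided optimization on an open-source model in the CMU work)
# is deliberately omitted.

from typing import Callable, Dict

# Phrases that often open a refusal; an assumption for illustration,
# not the researchers' actual success criterion.
REFUSAL_MARKERS = ("I'm sorry", "I cannot", "I can't help", "As an AI")

def still_refuses(response: str) -> bool:
    """Crude check: does the reply open with a refusal phrase?"""
    head = response.strip()[:80].lower()
    return any(marker.lower() in head for marker in REFUSAL_MARKERS)

def test_transfer(
    prompt: str,
    suffix: str,
    models: Dict[str, Callable[[str], str]],
) -> Dict[str, bool]:
    """Report, per model, whether the suffix bypassed the refusal.

    `models` maps a model name to a query function you supply
    (e.g., a thin wrapper around that vendor's chat API).
    """
    results = {}
    for name, query in models.items():
        baseline = query(prompt)                 # plain prompt
        attacked = query(prompt + " " + suffix)  # with suffix appended
        # "Transfer" here means the model refused the plain prompt but
        # complied once the adversarial string was added.
        results[name] = still_refuses(baseline) and not still_refuses(attacked)
    return results
```

In practice the published pipeline scores outputs more carefully than a substring check, but the structure is the same: one optimized string, tested verbatim across many systems.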

See also Fast Company: Google’s Jigsaw was trying to fight toxic speech with AI. Then the AI started talking – OpenAI, Anthropic, and others are using Jigsaw’s Perspective—designed to moderate toxic human speech—to evaluate their large language models. What could go wrong?
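The irony Fast Company flags is that Perspective, built to score human comments, is now grading machine output. For readers unfamiliar with it, Perspective is a REST API that returns probability scores for attributes such as TOXICITY. Below is a minimal sketch of scoring a sampled model response; the request and response shapes follow the public Comment Analyzer API, while the API key and the sample text are placeholders.

```python
# Minimal sketch of scoring a chatbot's output with Jigsaw's Perspective
# (Comment Analyzer) API, roughly how LLM developers use it to flag
# toxic generations. Requires a Google Cloud API key with the
# Perspective API enabled; the key below is a placeholder.
import requests

PERSPECTIVE_URL = (
    "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
)
API_KEY = "YOUR_API_KEY"  # placeholder

def toxicity_score(text: str) -> float:
    """Return Perspective's summary TOXICITY probability for `text`."""
    payload = {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(
        PERSPECTIVE_URL, params={"key": API_KEY}, json=payload, timeout=10
    )
    resp.raise_for_status()
    data = resp.json()
    return data["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# Example: screen a sampled model response before treating it as benign.
if __name__ == "__main__":
    sample = "Example chatbot reply to be screened."
    print(f"toxicity = {toxicity_score(sample):.3f}")
```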
