How researchers broke ChatGPT and what it could mean for future AI development

ZDNet: “As many of us grow accustomed to using artificial intelligence tools daily, it’s worth remembering to keep our questioning hats on. Nothing is completely safe and free from security vulnerabilities. Still, companies behind many of the most popular generative AI tools are constantly updating their safety measures to prevent the generation and proliferation of inaccurate and harmful content. Researchers at Carnegie Mellon University and the Center for AI Safety teamed up to find vulnerabilities in AI chatbots like ChatGPT, Google Bard, and Claude — and they succeeded. In a research paper examining the vulnerability of large language models (LLMs) to automated adversarial attacks, the authors demonstrated that even a model said to be resistant to attacks can still be tricked into bypassing its content filters and producing harmful information, misinformation, and hate speech. Such weaknesses could open the door to widespread misuse of AI.”
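The ZDNet summary stays at a high level, but the paper it describes (Zou et al., “Universal and Transferable Adversarial Attacks on Aligned Language Models,” 2023) automates the attack by appending an optimized “adversarial suffix” to a prompt the model would otherwise refuse. The sketch below is a minimal, self-contained illustration of that search loop only: the refusal_score function is a made-up stand-in for a real model’s safety filter, and the random mutation here replaces the paper’s gradient-guided token search, so treat it as a conceptual demo rather than the authors’ method.

```python
import random
import string

# Toy stand-in for a safety filter: a hypothetical score where higher
# means "more likely to refuse." Purely illustrative -- a real attack
# queries an actual LLM, not a heuristic like this one.
def refusal_score(prompt: str) -> float:
    score = 1.0 if "harmful request" in prompt else 0.0
    # Assume (for this demo only) that token noise erodes the filter.
    score -= 0.02 * sum(c in string.punctuation for c in prompt)
    return score

VOCAB = list(string.ascii_letters + string.punctuation)

def find_suffix(base_prompt: str, length: int = 30, steps: int = 500,
                threshold: float = 0.5) -> str:
    """Random-search sketch of an automated adversarial attack:
    mutate one suffix position at a time, keeping any change that
    lowers the refusal score, until the filter would no longer fire."""
    suffix = [random.choice(VOCAB) for _ in range(length)]
    best = refusal_score(base_prompt + " " + "".join(suffix))
    for _ in range(steps):
        i = random.randrange(length)
        old = suffix[i]
        suffix[i] = random.choice(VOCAB)
        score = refusal_score(base_prompt + " " + "".join(suffix))
        if score < best:
            best = score        # keep the mutation
        else:
            suffix[i] = old     # revert
        if best < threshold:    # the stand-in filter no longer refuses
            break
    return "".join(suffix)

if __name__ == "__main__":
    print("suffix:", find_suffix("harmful request"))
```

Against a real chatbot, each candidate suffix would be scored by querying the model itself; the researchers’ point is that once such a scoring signal exists, the loop needs no human creativity, which is what makes the attacks automated and transferable across models.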
