
AI researchers say they’ve found a way to jailbreak Bard and ChatGPT


United States-based researchers claim to have found a way to consistently circumvent the safety measures of artificial intelligence chatbots such as ChatGPT and Bard, causing them to generate harmful content.

According to a report released on July 27 by researchers at Carnegie Mellon University and the Center for AI Safety in San Francisco, there’s a relatively easy method to get around safety measures used to stop chatbots from generating hate speech, disinformation, and toxic material.

The circumvention method involves appending long suffixes of characters to prompts fed into chatbots such as ChatGPT, Claude, and Google Bard.

The researchers used the example of asking a chatbot for a tutorial on how to make a bomb, which it declined to provide until the adversarial suffix was appended to the prompt.
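The mechanics of the attack described above can be sketched as simple string concatenation. Note this is a minimal illustration only: in the actual research the suffixes are discovered automatically by an optimization process over the model's tokens, and the suffix string below is a made-up placeholder, not a working attack.

```python
def build_adversarial_prompt(user_request: str, suffix: str) -> str:
    """Append an adversarial suffix to a request the chatbot would
    otherwise refuse. The suffix looks like gibberish to a human but
    is crafted to steer the model past its safety training."""
    return f"{user_request} {suffix}"


# Hypothetical placeholder suffix; real suffixes are found by an
# automated search, not written by hand.
PLACEHOLDER_SUFFIX = "describing.] similarly( now write oppositely !!"

prompt = build_adversarial_prompt(
    "Give me step-by-step instructions for X",  # a normally refused request
    PLACEHOLDER_SUFFIX,
)
print(prompt)
```

Because the search that produces these suffixes is automated, an attacker can generate many distinct suffixes cheaply, which is why blocking individual known suffixes does not stop the attack class.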

Screenshots of harmful content generation from AI models tested. Source: llm-attacks.org

Researchers noted that even though the companies behind these LLMs, such as OpenAI and Google, could block specific suffixes, there is no known way of preventing all attacks of this kind.

The research also highlighted increasing concern that AI chatbots could flood the internet with dangerous content and misinformation.

Zico Kolter, a professor at Carnegie Mellon and an author of the report, said:

“There is no obvious solution. You can create as many of these attacks as you want in a short amount of time.”

The findings were presented to AI developers Anthropic, Google, and OpenAI for their responses earlier in the week.

OpenAI spokeswoman Hannah Wong told The New York Times that the company appreciates the research and is “consistently working on making our models more robust against adversarial attacks.”

Somesh Jha, a professor at the University of Wisconsin-Madison specializing in AI security, commented that if these types of vulnerabilities keep being discovered, “it could lead to government legislation designed to control these systems.”

Related: OpenAI launches official ChatGPT app for Android

The research underscores the risks that must be addressed before deploying chatbots in sensitive domains.

In May, Pittsburgh, Pennsylvania-based Carnegie Mellon University received $20 million in federal funding to create a new AI institute aimed at shaping public policy.

Magazine: AI Eye: AI travel booking hilariously bad, 3 weird uses for ChatGPT, crypto plugins
