OpenAI Says Russia and China Used Its A.I. in Covert Campaigns

OpenAI said on Thursday that it had identified and disrupted five online campaigns that used its generative artificial intelligence technologies to deceptively manipulate public opinion around the world and influence geopolitics.

The efforts were run by state actors and private companies in Russia, China, Iran and Israel, OpenAI said in a report about covert influence campaigns. The operations used OpenAI’s technology to generate social media posts, translate and edit articles, write headlines and debug computer programs, typically to win support for political campaigns or to swing public opinion in geopolitical conflicts.

OpenAI’s report is the first time that a major A.I. company has revealed how its specific tools were used for such online deception, social media researchers said. The recent rise of generative A.I. has raised questions about how the technology might contribute to online disinformation, especially in a year when major elections are happening across the globe.

Ben Nimmo, a principal investigator for OpenAI, said that after all the speculation on the use of generative A.I. in such campaigns, the company aimed to show the realities of how the technology was changing online deception.

“Our case studies provide examples from some of the most widely reported and longest-running influence campaigns that are currently active,” he said.

The campaigns often used OpenAI’s technology to post political content, Mr. Nimmo said, but the company had difficulty determining if they were targeting specific elections or aiming just to rile people up. He added that the campaigns had failed to gain much traction and that the A.I. tools did not appear to have expanded their reach or impact.
