OpenAI said it removed accounts linked to known propaganda operations in Russia, China and Iran; to an Israeli political campaign firm; and to a previously unknown Russian group its researchers dubbed “Bad Grammar,” which built software that used OpenAI's technology to help write posts, translate them into different languages and automatically publish them on social media.
None of these groups have been able to gain much traction, and their associated social media accounts have only small followings. “They only have a handful of followers,” said Ben Nimmo, lead researcher on OpenAI's information research team. Still, the OpenAI report shows that propagandists who have been operating on social media for years are using AI techniques to bolster their campaigns.
“We're seeing them generate much larger volumes of text with fewer errors than traditional operations,” Nimmo, who previously worked in Meta's influence-tracking operations, said at a press conference. Nimmo said it's possible that other groups are still using OpenAI's tools without the company's knowledge.
“Now is not the time for complacency. History has shown us that influence operations that may have produced no results over the years can suddenly erupt when no one is pursuing them,” he said.
Governments, political parties and activist groups have long tried to use social media to influence politics. Concerns about Russian influence in the 2016 presidential election led social media platforms to pay close attention to how their sites are used to sway voters. Companies typically prohibit governments and political groups from concealing coordinated efforts to influence users, and political ads must disclose who funded them.
Disinformation researchers have expressed concern that as AI tools that can generate realistic text, images and even video become more publicly available, it will become harder to spot and combat misinformation and covert online influence operations. With hundreds of millions of people voting in elections around the world this year, generative AI deepfakes have already proliferated.
OpenAI, Google and other AI companies are working on technology to identify deepfakes created with their own tools, but such technology is still unproven, and some AI experts believe deepfake detectors may never be fully effective.
Earlier this year, a group linked to the Chinese Communist Party posted AI-generated audio that purported to show one candidate in Taiwan's election endorsing another, even though the politician in question, Foxconn founder Terry Gou, had not endorsed any other politician.
In January, New Hampshire primary voters received robocalls that claimed to be from President Biden but were quickly identified as AI-generated. Last week, a Democratic operative who claimed to have commissioned the calls was indicted on charges of voter suppression and candidate impersonation.
OpenAI's report details how five groups used its technology in influence operations. According to the company, Spamouflage, a known China-based group, used OpenAI's technology to research social media activity and to post in Chinese, Korean, Japanese and English. An Iranian group known as the International Union of Virtual Media also used OpenAI's technology to create articles and publish them on its website.
A previously unknown group, Bad Grammar, used OpenAI's technology to build a program that automatically posts to the messaging app Telegram. Bad Grammar then used OpenAI's technology to generate posts and comments in both Russian and English arguing that the United States should not support Ukraine.
The report also found that Stoic, an Israeli political campaign firm, used OpenAI's technology to create pro-Israel posts about the Gaza war aimed at audiences in Canada, the United States and Israel. On Wednesday, Facebook owner Meta also publicized Stoic's activities, saying it had removed 510 Facebook accounts and 32 Instagram accounts used by the group. The company told reporters that some of the accounts had been hacked and others belonged to fictitious people.
The accounts, which posed as pro-Israel American college students and as African Americans, frequently commented on the pages of celebrities and media organizations, voicing support for the Israeli military and warning Canadians that “radical Islam” threatened the country's liberal values, Meta said.
Some of the comments, which were generated with AI, struck real Facebook users as odd and out of context. Meta said the tactic didn't work; the network attracted just 2,600 legitimate followers.
Meta took action after the Atlantic Council's Digital Forensic Research Lab discovered the network on X.
Over the past year, disinformation researchers have suggested that AI chatbots could be used to have long, detailed conversations with specific people online to steer them in certain directions. AI tools could also ingest reams of data about individuals and tailor messages to them directly.
Nimmo said OpenAI hasn't yet discovered either of these advanced uses for AI. “It's more of an evolution than a revolution,” he said. “But that doesn't mean we won't see it in the future.”
Joseph Meng contributed to this report.