
OpenAI and Meta reveal that their AI tools have been used for political disinformation

Both OpenAI and Meta this week revealed details of ongoing nefarious campaigns by entities linked to China, Israel, Russia and Iran, which they determined were using their services to spread disinformation and disrupt politics in the U.S. and other countries.

In its latest quarterly threat report, released on Wednesday, Meta highlighted that AI-generated content used in such campaigns is still easy to detect.

“To date, we have not seen novel GenAI-powered tactics that impede our ability to disrupt the adversarial networks behind them,” the social media giant said.

While AI-generated photos are in wide use, Meta added that political deepfakes – which many experts consider a major global threat – are not common. “We have not currently observed threat actors using photorealistic AI-generated media depictions of politicians as a broader trend,” the report noted.

For its part, OpenAI said it built security into its AI models, worked with partners to share threat intelligence and used its own AI technology to detect and disrupt malicious activity.

“Our models are designed to impose friction on threat actors,” the company said yesterday. “We built them with defense in mind.”

Noting that its content safeguards proved effective, with its models refusing to generate some of the requested content, OpenAI said it had blocked accounts associated with the identified campaigns and shared relevant details with industry partners and law enforcement to facilitate further investigation.

OpenAI described covert influence operations as “deceptive attempts to manipulate public opinion or influence political outcomes without revealing the true identity or intentions of the entities behind them.” The latest disclosures are described as part of OpenAI’s transparency efforts.

The company used information gathered from these campaigns to dig deeper, assess the impact of disinformation operations and rank their techniques to improve future countermeasures. On a scale of 1 to 6 – the highest score representing campaigns that reached authentic audiences across multiple platforms – OpenAI found that none of the identified actors scored higher than 2.

According to OpenAI, at least five distinct campaigns used its models to generate text that was then distributed on social media platforms such as Telegram, Twitter, Instagram, and Facebook, as well as on online forums and other websites. Meta, meanwhile, reported finding AI-generated content among the groups it flagged for “coordinated inauthentic behavior.”

Here are some of the specific campaigns detailed by the two companies.

Russian threat

One Russian campaign, dubbed “Bad Grammar,” used OpenAI’s systems to generate comments in multiple languages that were published on Telegram, targeting audiences in Russia, Ukraine, the United States, Moldova and the Baltic countries. The comments covered topics such as Russia’s invasion of Ukraine, politics and current events.

“The network mainly commented on posts from a small number of Telegram channels,” OpenAI said. “The most frequently mentioned was the pro-Russia channel @Slavyangrad, followed by the English-language @police_frequency and @SGTNewsNetwork.”

Another ongoing Russian operation, called “Doppelganger,” used ChatGPT to generate website articles, social media posts, and comments that overwhelmingly portrayed Russia in a positive light while denigrating Ukraine, the United States, and NATO. The content was intended to drive engagement on platforms like 9GAG.

Doppelganger also tried to use OpenAI’s tools to create AI-generated images with captions critical of Western governments, but the company said its systems rejected requests that appeared to be disinformation or propaganda.

Meta also mentioned this group in its adversarial threat report, focusing on its attempts to infiltrate Meta’s social media platforms across a variety of themes. The challenge, Meta noted, is that the group frequently changes tactics and evolves over time.

Disinformation from Israel

A private Israeli company called STOIC launched an operation, dubbed “Zero Zeno” by OpenAI, that used the company’s AI models to generate comments. Zero Zeno incorporated those comments into a broader disinformation effort targeting Europe and North America.

“Zero Zeno published short texts on specific topics, especially the conflict in Gaza, on Instagram and X. These texts were generated using our models,” OpenAI revealed. “A further set of accounts on these platforms would then reply with comments that were also generated by this operation.”

“Open-source research conducted in February found that this network was critical of the UN’s Palestine aid agency,” the report noted, linking to a more comprehensive report.

Zero Zeno also used OpenAI’s technology to create fake biographies and drive fake engagement. OpenAI further revealed that the Israeli company used its tools to target “the Histadrut trade unions organization in Israel and the Indian elections.”

This group was also flagged by Meta.

“The network’s accounts posed as residents of the countries they targeted, including Jewish students, African Americans, and ‘concerned’ citizens,” Meta said. “They wrote mainly in English about the war between Israel and Hamas, including calls for the release of hostages; praise for Israel’s military actions; and criticism of campus antisemitism, the United Nations Relief and Works Agency (UNRWA), and Muslims, claiming that ‘radical Islam’ poses a threat to liberal values in Canada.”

Meta said it had banned the group and issued a cease and desist letter to STOIC.

China’s “Spamouflage” efforts

China’s “Spamouflage” campaign used OpenAI’s language models for tasks such as debugging code and generating comments in various languages, spreading its narratives under the guise of creating productivity software.

“Spamouflage published short comments on X criticizing Chinese dissident Cai Xia, in the form of an initial post and a series of replies,” OpenAI said. “Each comment in the ‘conversation’ was artificially generated using our models, possibly creating the false impression that real people were engaging with the content of the operation.”

However, in the case of the anti-Ukraine campaigns, comments and posts generated via OpenAI’s tools and published on 9GAG appear to have been met with overwhelmingly negative reactions from users, who condemned the activity as fake and inauthentic.

Meta has detected another AI disinformation campaign with links to China. “They wrote mostly in English and Hindi about news and current events, including images likely manipulated with photo-editing tools or generated by artificial intelligence,” the company said.

For example, the network’s accounts posted negative comments about the Indian government and touched on related topics such as the Sikh community, the Khalistan movement, and the murder of Hardeep Singh Nijjar.

Iranian operation

A long-running Iranian operation known as the International Union of Virtual Media (IUVM) was found to have abused OpenAI’s text-generation capabilities to create multilingual posts and images supporting pro-Iran, anti-U.S., and anti-Israel viewpoints and narratives.

“This campaign targeted a global audience and focused on generating content in English and French – it used our models to generate and proofread articles, headlines and website tags,” OpenAI said. The content was then published and promoted on pro-Iran websites and social media as part of a broader disinformation campaign.

Neither Meta nor OpenAI responded to Decrypt’s request for comment.

Edited by Ryan Ozawa and Andrew Hayward