Why China is so bad at disinformation
The headlines sounded dire: “China Will Use AI to Disrupt Elections in the US, South Korea and India, Microsoft Warns.” Another claimed, “China Is Using AI to Sow Disinformation and Stoke Discord Across Asia and the US.”
They were based on a report published earlier this month by Microsoft’s Threat Analysis Center, which outlined how a Chinese disinformation campaign is now using artificial intelligence to inflame divisions and disrupt elections in the US and around the world. The campaign, which has already targeted Taiwan’s elections, uses AI-generated audio and memes designed to grab user attention and boost engagement.
But what these headlines and Microsoft itself failed to adequately convey is that the Chinese-government-linked disinformation campaign, known as Spamouflage Dragon or Dragonbridge, has so far been virtually ineffective.
“I would describe China's disinformation campaigns as Russia 2014. As in, they're 10 years behind,” says Clint Watts, the general manager of Microsoft’s Threat Analysis Center. “They're trying lots of different things but their sophistication is still very weak.”
Over the past 24 months, the campaign has switched from pushing predominantly pro-China content to more aggressively targeting US politics. While these efforts have been large-scale and spread across dozens of platforms, they have largely failed to have any real-world impact. Still, experts warn that it could take just a single post amplified by an influential account to change all of that.
“Spamouflage is like throwing spaghetti at the wall, and they are throwing a lot of spaghetti,” says Jack Stubbs, chief intelligence officer at Graphika, a social media analysis company that was among the first to identify the Spamouflage campaign. “The volume and scale of this thing is huge. They're putting out multiple videos and cartoons every day, amplified across different platforms at a global scale. The vast majority of it, for the time being, appears to be something that doesn't stick, but that doesn't mean it won't stick in the future.”
Since at least 2017, Spamouflage has ceaselessly spewed out content designed to disrupt major global events, weighing in on topics as diverse as the Hong Kong pro-democracy protests, the US presidential elections, and the Israel-Gaza war. Part of a wider multibillion-dollar influence effort by the Chinese government, the campaign has used millions of accounts across dozens of internet platforms, ranging from X and YouTube to fringe platforms like Gab, where it has tried to push pro-China content. It has also been among the first to adopt cutting-edge techniques such as AI-generated profile pictures.
Even with all of this investment, experts say the campaign has largely failed, due to factors including confused cultural context, China’s online isolation from the outside world behind the Great Firewall, a lack of joined-up thinking between state media and the disinformation campaign, and the use of tactics designed for China’s own heavily controlled online environment.
“That's been the story of Spamouflage since 2017: They're massive, they're everywhere, and nobody looks at them except for researchers,” says Elise Thomas, a senior open source analyst at the Institute for Strategic Dialogue who has tracked the Spamouflage campaign for years.
“Most tweets receive either no engagement and very low numbers of views, or are only engaged with by other accounts which appear to be a part of the Spamouflage network,” Thomas wrote in a February report for the Institute for Strategic Dialogue on the failed campaign.
Over the past five years, the researchers tracking the campaign have watched it shift tactics, adding video, automated voiceovers, and, most recently, AI-generated profile images and content designed to inflame existing divisions.
The adoption of AI technologies is also not necessarily an indicator that the campaign is becoming more sophisticated—just more efficient.
“The primary affordance of these Gen AI products is about efficiency and scaling,” says Stubbs. “It allows more of the same thing with fewer resources. It's cheaper and quicker, but we don't see it as a mark of sophistication. These products are actually incredibly easy to access. Anyone can do so with $5 on a credit card.”
The campaign has also taken place on virtually every social media platform, including Facebook, Reddit, TikTok, and YouTube. Over the years, major platforms have purged their systems of hundreds of thousands of accounts linked to the campaign, including last year when Meta took down what it called “the largest known cross-platform covert influence operation in the world.”
The US government has also sought to curb the effort. A year ago, the Department of Justice charged 34 officers of the Chinese Ministry of Public Security’s “912 Special Project Working Group” over their involvement in an influence campaign. While the DOJ did not explicitly link the charges to Spamouflage, a source with knowledge of the matter told WIRED that the campaign was “100 percent” Chinese state-sponsored. The source spoke on the condition of anonymity because they were not authorized to discuss the information publicly.
“A commercial actor would not be doing this,” says Thomas, who also believes the campaign is run by the Chinese government. “They are more innovative. They would have changed tactics, whereas it's not unusual for a government communications campaign to persist for a really long time despite being useless.”
For the past seven years, however, the content pushed by Spamouflage has lacked the nuance and audience-specific tailoring that successful nation-state disinformation campaigns from countries like Russia, Iran, and Turkey have employed.
“They get the cultural context confused, which is why you'll see them make mistakes,” says Watts. “They're in the audience talking about things that don't make sense and the audience knows that, so they don't engage with the content. They leave Chinese characters sometimes in their posts.”
Part of this stems from Chinese citizens being virtually cut off from the outside world by the Great Firewall, which allows the Chinese government to strictly control what its citizens see and share on the internet. This, experts say, makes it incredibly difficult for those running an influence operation to grasp how to successfully manipulate audiences outside of China.
“They're having to adapt strategies that they might have used in closed and tightly controlled platforms like WeChat and Weibo, to operating on the open internet,” says Thomas. “So you can flood WeChat and Weibo with content if you want to if you are the Chinese government, whereas you can't really flood the open internet. It's kind of like trying to flood the sea.”
Stubbs agrees. “Their domestic information environment is not one that is real or authentic,” he says. “They are now being tasked with achieving influence and affecting operational strategic impact in a free and authentic information environment, which is just fundamentally a different place.”
Russian influence campaigns have also tended to coordinate across multiple layers of government spokespeople, state-run media, influencers, and bot accounts on social media, all pushing the same message at the same time, something the Spamouflage operators don’t do. This was seen recently when the Russian disinformation apparatus was activated to sow division in the US around the Texas border crisis, boosting the extremist-led border convoy and calls for “civil war” simultaneously across state media, influencer Telegram channels, and social media bots.
“I think the biggest problem is [the Chinese campaign] doesn’t synchronize their efforts,” Watts says. “They’re just very linear on whatever their task is, whether it’s overt media or some sort of covert media. They’re doing it and they’re doing it at scale, but it’s not synchronized around their objectives because it’s a very top-down effort.”
Some of the content produced by the campaign appeared to have a high number of likes and replies, but closer inspection revealed that those engagements came from other accounts in the Spamouflage network. “It was a network that was very insular, it was only engaging with itself,” says Thomas.
Watts does not believe China’s disinformation campaigns will have a material impact on the US election, but he adds that the situation “can change nearly instantaneously. If the right account stumbles onto [a post by a Chinese bot account] and gives it a voice, suddenly their volume will grow.”
This, Thomas says, has already happened.
One such post came from a since-suspended X account Thomas had been tracking, whose bio referenced “MAGA 2024.” It shared a video from the Russian state-run channel RT alleging that President Joe Biden and the CIA had sent a neo-Nazi to fight in Ukraine, a claim that has been debunked by the investigative group Bellingcat. Like most Spamouflage posts, the video initially received little attention, but when it was shared by school shooting conspiracist Alex Jones, who has more than 2.2 million followers on the platform, it quickly racked up hundreds of thousands of views.
“What is different about these MAGAflage accounts is that real people are looking at them, including Alex Jones. It’s the most bizarre tweet I’ve ever seen,” Thomas says.
Thomas says the account Jones shared is different from typical Spamouflage accounts: rather than spewing out automated content, it sought to engage organically with other users in a way that made it appear to be a real person, reminiscent of what Russian accounts did in the lead-up to the 2016 election.
So far, Thomas has found just four of these accounts, which she has dubbed “MAGAflage,” but she worries there may be many more operating under the radar that would be incredibly difficult to find without access to X’s backend.
“My concern is that they will start doing this, or potentially are already doing this, at a really significant scale,” Thomas says. “And if that is happening, then I think it will be very difficult to detect, particularly for external researchers. If they start doing it with new accounts that don't have those interesting connections to the Spamouflage network, and if you then hypothetically lay on top of that, if they start using large language models to generate text with AI, I think we're in a lot of trouble.”
Stubbs says that Graphika has been tracking Spamouflage accounts attempting to impersonate US voters since before the 2022 midterms and has yet to see them achieve real success. And while he believes reporting on these efforts is important, he’s concerned that these high-profile campaigns could obscure smaller ones.
"We are going to see increasing amounts of public discussion and reporting on campaigns like Spamouflage and Doppelganger from Russia, precisely because we already know about them,” says Stubbs. “Both those campaigns are examples of activity that is incredibly high scale, but also very easy to detect. [But] I am more concerned and more worried about the things we don't know."