When Russia meddled in the 2016 U.S. presidential election and fueled outrage by spreading divisive and inflammatory posts online, the posts were cheeky, full of misspellings and strange syntax. They were designed to grab attention by any means necessary.
A Russian Facebook post read: “Hillary is the devil.”
Eight years later, foreign interference in American elections has become much more sophisticated and much harder to track.
U.S. intelligence and defense officials, tech companies and academic researchers say disinformation from abroad, particularly from Russia, China and Iran, is increasing as those countries test, refine and deploy ever subtler tactics. It has grown into a consistent and pernicious threat: in a presidential election that public opinion polls generally show to be a close race, swaying even a small number of Americans could have a huge impact.
According to U.S. intelligence assessments, Russia aims to bolster the candidacy of former President Donald J. Trump, while Iran favors his opponent, Vice President Kamala Harris. China does not appear to prefer either outcome.
The broad objectives of these efforts have not changed: to sow discord and chaos in hopes of undermining the credibility of American democracy in the eyes of the world. But the campaigns have evolved, adapting to changes in the media landscape and to the proliferation of new tools that make it easier to fool credulous audiences.
Here’s how foreign disinformation has evolved.
Disinformation is basically everywhere these days.
In 2016, Russia was the primary source of disinformation related to the U.S. election, and its posts were primarily circulated on Facebook.
Iran and China are now making similar efforts to influence American politics, and the campaigns have spread across dozens of platforms, from small forums where Americans chat about the local weather to messaging groups united by shared interests. Although there is debate over whether the two countries have directly cooperated on strategy, they have taken cues from each other.
Telegram is home to a swarm of Russian accounts that spread divisive and sometimes vitriolic videos, memes and articles about the presidential election. Hundreds of Chinese accounts posed as students this summer to stir up tensions on American campuses over the war in Gaza. Both countries also maintain accounts on Gab, a less well-known social media platform favored by the far right, where they have worked to promote conspiracy theories.
Russian operatives also sought to drum up support for Mr. Trump on Reddit and on forums favored by the far right, targeting voters in six battleground states along with groups identified as potential Trump supporters, including Hispanic Americans and video gamers, the Justice Department announced in September.
One campaign associated with China’s state influence operation, known as Spamouflage, used accounts under the name Harlan to create the impression that its conservative-leaning content came from an American, operating across four platforms: YouTube, X, Instagram and TikTok.
The content is much more targeted.
New disinformation being spread by foreign countries targets not only battleground states, but also specific districts within them and specific ethnic and religious groups within those districts. The more targeted disinformation is, the more likely it is to stick, according to researchers and academics who have studied new influence campaigns.
“Disinformation is more effective when it preys on the interests and opinions of a specific audience and is tailored to that audience,” said Melanie Smith, research director at the Institute for Strategic Dialogue, a London-based research institute. “Last election, we were trying to figure out what the big false narrative was going to be. This time around, subtly polarizing messages are going to drive the tension.”
Iran, in particular, has poured resources into covert disinformation operations designed to draw in niche groups. One website, titled “Not Our War,” aimed at American veterans, was littered with articles espousing virulently anti-American views and conspiracy theories about the lack of support for active-duty soldiers.
Other sites included Afro Majority, which created content aimed at Black Americans, and Savannah Time, which sought to sway conservative voters in the battleground state of Georgia. In Michigan, another battleground state, Iran launched an online outlet called Westland Sun aimed at Arab Americans in the Detroit suburbs.
“Iran’s targeting of Arabs and Muslims in Michigan shows that it has a nuanced understanding of the American political landscape and is tailoring its appeals to key demographics in an effort to influence the election,” said Max Lesser, a senior analyst at the Foundation for Defense of Democracies.
China and Russia are following a similar pattern. On X this year, Chinese state media spread false reports about the Supreme Court in Spanish, which Spanish-speaking users then spread further on Facebook and YouTube, according to Logically, an organization that monitors online misinformation.
Experts on Chinese disinformation said fake social media accounts associated with the Chinese government have become more convincing and appealing, now including first-person references to being Americans or military veterans. In recent weeks, fraudulent accounts associated with Spamouflage have targeted House and Senate Republicans seeking re-election in Alabama, Tennessee and Texas, according to a report from Microsoft’s Threat Analysis Center.
Artificial intelligence is driving this evolution.
Recent advances in artificial intelligence have increased disinformation capabilities beyond what was possible in past elections, allowing state agencies to craft and distribute campaigns with greater sophistication and efficiency.
OpenAI, whose ChatGPT tool popularized the technology, reported this month that it had disrupted more than 20 foreign operations that used its products between June and September. They included efforts by Russia, China and Iran to create and populate websites, spread propaganda and disinformation on social media, and even analyze and respond to specific posts. (The New York Times sued OpenAI and Microsoft last year, accusing them of copyright infringement involving news content; both companies have denied the claims.)
“AI capabilities are being used to exacerbate the threats we expected and the threats we are seeing,” Jen Easterly, director of the Cybersecurity and Infrastructure Security Agency, said in an interview. “They are essentially lowering the bar for foreign actors to conduct more sophisticated influence campaigns.”
The utility of commercially available AI tools can be seen in the work of John Mark Dougan, a former Florida deputy sheriff now living in Russia after fleeing criminal charges in the United States.
Working from an apartment in Moscow, he has created a number of websites masquerading as American news outlets and used them to publish disinformation. The work, which eight years ago would have required an army of bots, he performs virtually alone. NewsGuard said Dougan’s sites have spread several derogatory claims about Ms. Harris and her running mate, Gov. Tim Walz of Minnesota.
China is also deploying an increasingly sophisticated toolkit in election campaigns around the world, including AI-manipulated audio files, defamatory memes and phony voter polls. This year, a deepfake video of a Republican politician from Virginia went viral on TikTok, falsely suggesting that he was soliciting votes for a critic of the Chinese government who was running for (and later won) Taiwan’s presidency; captions in Chinese were added to drive home the point.
Disinformation has become very difficult to identify.
All three countries are also getting better at covering their tracks.
Last month, Russia was found to have covertly funded a group of conservative American commentators employed through Tenet Media, a digital platform founded in Tennessee in 2023, concealing its attempt to influence Americans.
The company served as a seemingly legitimate facade for publishing numerous videos containing sharp political commentary as well as conspiracy theories about election fraud, COVID-19, immigration and the war between Russia and Ukraine. The influencers who were paid to appear on Tenet said they had no idea the money came from Russia.
Following Russia’s playbook, Chinese operatives have cultivated a network of foreign influencers to help spread their narratives, a group described as “foreign mouths,” “foreign pens” and “foreign brains,” according to a report last fall from the Australian Strategic Policy Institute.
The new tactics are making it more difficult for government agencies and tech companies to spot and eliminate influence campaigns, said Graham Brookie, senior director of the Atlantic Council’s Digital Forensic Research Lab. At the same time, he said, their success is emboldening other hostile nations.
“The more malicious foreign influence activity there is, the more surface area there is and the more permission there is for other bad actors to jump into that space,” he said. “If they’re all doing it, the cost of exposure isn’t that high.”
Tech companies aren’t doing much to stop disinformation.
Foreign disinformation has exploded as tech giants have all but abandoned efforts to combat it. Major companies like Meta, Google, OpenAI and Microsoft have scaled back their efforts to label and remove misinformation since the last presidential election. Some have no such teams at all.
Security officials and tech executives said the lack of a cohesive policy among tech companies makes it difficult to mount a united front against foreign disinformation.
“These alternative platforms do not have the same level of content moderation or robust trust and safety practices that could potentially mitigate these campaigns,” said Lesser of the Foundation for Defense of Democracies.
He added that even larger platforms such as X, Facebook and Instagram were locked in a perpetual game of whack-a-mole, as foreign state operatives quickly rebuilt influence campaigns that had been taken down. Alethea, a company that tracks online threats, recently discovered that an Iranian disinformation campaign using an account named after the colorful hoopoe bird had resurfaced on X despite being banned twice before.