
Elections Under Siege: The Threat of AI-Driven Misinformation

Dan Nicholson

As the world gears up for the 2024 elections, a new specter haunts the political landscape: AI-generated disinformation. The proliferation of deepfakes and other AI-generated content threatens to blur the lines between fact and fiction, raising concerns about the integrity of democratic processes. With generative AI tools now widely accessible, distinguishing reality from fabrication is increasingly difficult, amplifying the risk that misinformation will sway public opinion and election outcomes. Here’s what to expect from AI’s influence on election communication in the coming months.

The Rise of AI-Generated Deepfakes

The emergence of generative AI technologies has ushered in a new era of disinformation warfare. Customized chatbots, realistic video and audio generation in the form of deepfakes, and targeted election disinformation campaigns by foreign adversaries are poised to dominate the digital landscape in 2024. These technologies jeopardize political stability and cast a shadow of doubt and distrust over societal discourse.

This marks a quantum leap from a few years ago, when creating phony photos, videos, or audio clips required teams of people with time, technical skill, and money. Now, using free and low-cost generative AI services from companies like Google and OpenAI, anyone can create high-quality deepfakes with just a simple text prompt, notes Henry Ajder, a leading expert in generative AI based in Cambridge, England. “As the technology improves, definitive answers about a lot of the fake content are going to be hard to come by,” Ajder says.

Political actors and malicious entities alike are already leveraging these tools to sway public opinion and undermine the integrity of electoral processes. In Argentina, two presidential candidates created AI-generated images and videos of their opponents to attack them. In Slovakia, deepfakes of a liberal pro-European party leader threatening to raise the price of beer and joking about child pornography spread like wildfire during the country’s elections. In Bangladesh, a conservative Muslim-majority nation, a deepfake video showed an opposition lawmaker wearing a bikini. Meanwhile, in the U.S., Republican presidential nominee Donald Trump has cheered on a group that uses AI to generate memes with racist and sexist tropes.

Audio-only deepfakes are especially hard to verify because, unlike photos and videos, they lack the visual cues that often give away manipulated content. Robocalls impersonating U.S. President Joe Biden urged voters in New Hampshire to abstain from voting in January’s primary election. The calls were later traced to a political consultant who said he was trying to publicize the dangers of AI deepfakes. While it’s hard to say how much these examples have influenced election outcomes, their proliferation is a worrying trend.

“You don’t need to look far to see some people… being clearly confused as to whether something is real or not,” says Ajder. The question is no longer whether AI deepfakes could affect elections, but how influential they will be.

Addressing the Threat of AI Misinformation 

Governments and organizations worldwide are racing to implement safeguards against the proliferation of AI-generated disinformation, recognizing its detrimental implications for democratic processes. However, efforts to counter AI-generated disinformation remain nascent among governments and tech companies alike.

Social media platforms are notoriously slow to take down misinformation, though major tech companies have signed an accord to prevent AI from being used to disrupt democratic elections worldwide. Some platforms are adopting watermarking tools, such as Google DeepMind’s SynthID, to authenticate AI-generated images, but these measures remain largely voluntary.

On the governing side, the U.S. Federal Communications Commission (FCC) has outlawed robocalls that use AI-generated voices, including those aimed at discouraging voters. The European Union (EU) already requires social media platforms to cut the risk of spreading disinformation or “election manipulation.” Starting next year, it will also mandate special labeling of AI deepfakes, though that comes too late for the EU’s parliamentary elections in June. Still, much of the world lags far behind, and experts warn that without such interventions, less developed countries face greater threats to their electoral systems.

Some experts worry that efforts to rein in AI deepfakes could have unintended consequences, too. Well-meaning governments or companies might trample on the sometimes “very thin” line between political commentary and an “illegitimate attempt to smear a candidate,” said Tim Harper, a senior policy analyst at the Center for Democracy and Technology in Washington. As regulatory efforts take effect, accurate information will likely be caught up in their wake.

Conclusion

As the 2024 elections draw near, the ominous shadow of AI-generated disinformation looms large over democratic processes worldwide. The proliferation of deepfakes and other AI-driven manipulations threatens to erode trust in the integrity of electoral systems, blurring the lines between reality and fabrication. Despite efforts by governments and organizations to implement safeguards, the response remains fragmented and largely inadequate. As we navigate this complex landscape, vigilance and innovation will be paramount in safeguarding the foundations of democracy against AI-driven misinformation, warns Lisa Reppell, a researcher at the International Foundation for Electoral Systems in Arlington, Virginia. “A world in which everything is suspect—and so everyone gets to choose what they believe,” she says, “is also a world that’s really challenging for a flourishing democracy.”

Sources

AP

MIT Technology Review

Dan Nicholson is the author of “Rigging the Game: How to Achieve Financial Certainty, Navigate Risk and Make Money on Your Own Terms,” a USA Today and Wall Street Journal best-seller. In addition to founding the award-winning accounting and financial consulting firm Nth Degree CPAs, Dan has created and run multiple small businesses, including Certainty U and the Certified Certainty Advisor program.
