Wednesday, November 20, 2024

The threat to our elections from a growing dominance of artificial intelligence

By Lori Lee
NDG Contributing Writer

As artificial intelligence (AI) technology has evolved, it has become so easy to use that what formerly required a studio and production team now costs very little and can be achieved in a few simple clicks. Improved access to AI, and its ability to reach a tremendous number of people, make it a powerful tool that, in the wrong hands, could overwhelm and confuse the public leading up to the Fall elections.

“We will see AI coming at the public at an increasingly fast pace in this election,” said Jinxia Niu, program manager for Chinese for Affirmative Action, a nonprofit civil rights organization. With the vast number of tutorials circulating that demonstrate how to generate fake videos, AI has made it all too easy to saturate our information channels with false narratives.

“The truth is under assault every single day,” said Jonathan Mehta Stein, executive director of California Common Cause. The statewide organization works to protect democracy in California as technology reshapes elections.


Ethnic communities have been overwhelmed with AI campaigns, creating a huge need for fact-checking organizations to combat the flood of disinformation targeting communities of color. (DWG Studio)

Just this week, the Department of Justice disrupted a Russian campaign that used fake social media profiles to promote pro-Russia propaganda. The campaign, operated by a single individual, generated thousands of fake posts, noted Stein, successfully targeting different races in regions across the country.

Now, it is a relatively simple matter for practically any individual to amplify their voice with new, easy-to-use tools that are widely accessible. “This means any conspiracy theorist, anyone running for political office, or any foreign state can easily undermine our elections,” said Stein.

Artificial intelligence is an amazing tool that can be used to generate sometimes artistic and sometimes bizarre creations, he said. The technology ranges from simple algorithms used by companies like Netflix to make film recommendations to more complex systems that can predict crime or help sustainable energy systems run smoothly. AI can also save manpower, which could help local government serve the public more efficiently, he added.

Even so, AI can have devastating effects. Take the last Presidential primary, when a spoof robocall using an AI-generated clone of Joe Biden’s voice directed New Hampshire Democrats not to vote. And a May 2023 deep fake image even set off a brief dip in the stock market by momentarily convincing investors that the Pentagon was under attack.

Dangers may be most evident at the local level. While fakes targeting higher offices will most likely be exposed quickly, deep fakes targeting local officials or state representatives will take longer to surface, said Stein, making their political impact greater in state and local communities.

As the election draws near, new fake media websites are emerging, like the Miami Chronicle, a fake local news site created by Russian intelligence to carry propaganda. Stein recommends people be on the lookout for these sites, as well as fake county election sites attempting to confuse or influence voters.

Targeted for centuries, voters of color and immigrants face particular threats as political players attempt to make it harder for these groups to vote. Stein said AI technology brings new and crafty ways to achieve such goals.

Examples include multiple deep fakes that have been circulating to create a false narrative that the former president has more support in the Black community than he actually does.

As Niu explains, ethnic communities have been overwhelmed with such campaigns, creating a huge need for fact-checking organizations to combat the flood of disinformation targeting communities of color.

AI technology continues to advance, making images and audio more realistic, adds Stein, and unless people know what to look for, it will be difficult to tell whether the images are real. Upon close examination, people in AI-generated images may look idealized, almost cartoonish, with every hair in place. Perusing images on a small phone screen or quickly scrolling through Twitter certainly makes spotting such fakes very difficult.

Such misinformation campaigns are not unique to the U.S., said Stein. In India, deep fakes have emerged front and center as voters are bombarded with millions of fake videos surrounding the elections. Candidates have begun to embrace the fakes as a means of keeping up with the race, including candidates who prefer not to use them but feel they have no choice.

India serves as an example of what could happen if the public is not educated on AI technology, said Stein.

Political information found on social media should be approached with skepticism, Stein warns, and images that seem too good to be true should be scrutinized, especially before reposting. When people see an image of a political leader engaged in an activity that could damage them, Stein urges, they should step out of the social media environment and check the story against an objective or trusted media source.

A common goal of players who seek to mislead about politics is to saturate society with so much misinformation that people don’t know what to believe, added Brandon Silverman, policy expert on internet transparency and former CEO of CrowdTangle, a social analytics tool acquired by Facebook.

And the vast majority of such misinformation falls into a gray area. Silverman refers to the use of information that is not literally untrue and doesn’t break any rules, but that is simply misleading. It’s the difference between saying “the moon is made of cheese” and “some people are saying the moon is made of cheese,” he explained. A tremendous amount of misinformation is spread this way, he said.

As the issue peaks, social media platforms are walking away from the responsibility of addressing the problem, Stein added. YouTube, Meta and X have all stopped labeling and removing posts that repeat Trump’s claims about the 2020 election. Twitter has stopped using a tool that identifies disinformation on its platform, while Facebook has made some fact-checking features optional. And all of the platforms have laid off key members of their trust and safety and civic integrity teams.

If social media platforms aren’t going to take responsibility for the problem, Stein asked, whose job is it to protect our communities?

The current saturation of AI in our political discourse requires policy changes, said Stein. The State of California has legislation in the works that would offer some protections. Yet building a system to track political disinformation is more difficult than just flagging certain words, he said. Deciphering intended meaning in political discourse is complicated, requiring a great deal of work.

It seems that for those who rely strictly on social media for their news, AI has the power to create a false reality that is too easily believed. It is a problem that has perhaps grown too big for policy solutions alone, and it may fall to trusted leaders and the media to help make communities aware of it.
