
In these elections, intelligence may be artificial

On Monday, the secretaries of state of Minnesota, Michigan, New Mexico, Pennsylvania, and Washington had a short and sweet message for Elon Musk, the billionaire purveyor of disinformation who has turned X, formerly Twitter, into his personal megaphone: rein in the Grok AI chatbot, which had been spreading false information about ballot deadlines in nine states after Vice President Kamala Harris became the Democratic presidential nominee.

A July post from Grok, which appeared just hours after Joe Biden dropped out of the presidential race, falsely claimed that the deadline for presidential candidates to appear on the 2024 general-election ballot had already passed in those nine states. In fact, every one of them had ample time to update its ballots to include Harris and her running mate, Gov. Tim Walz of Minnesota. Although the inaccurate information was visible only to subscribers at the Premium level and above, it took X, a site with about 250 million daily active users, a week to fix the problem.

As if the Grok episode weren’t bad enough, Musk, the owner of X and a supporter of former President Donald Trump, clearly enjoys sharing deepfake videos made with generative AI to discredit Harris, further tarnishing X’s deeply compromised reputation. With Harris at the top of the Democratic ticket, personal attacks on the first black woman to lead a major-party presidential ticket and the spread of malicious information aimed at the voters most likely to support her are sure to intensify.


Secretaries of state aren’t the only officials fed up with these dangerous antics, and X isn’t the only Musk venture embroiled in election-related deception. America PAC, a group backed by Musk, has drawn scrutiny from both Michigan Secretary of State Jocelyn Benson and the North Carolina State Board of Elections for allegedly harvesting data from users who believed the group’s website would help them register to vote. Visitors from key swing states like Michigan who handed over their data were never redirected to a site that would actually register them. Michigan, notably, has a law that criminalizes the intentional dissemination of AI-generated deepfakes.

Grok isn’t the first or only AI chatbot to hallucinate, but the episode illustrates how AI could inject even more chaos into the campaign season as Election Day and the post-election processes approach. To head off future stumbles, the secretaries of state urged that Grok direct users to CanIVote.org, a nonpartisan resource run by the National Association of Secretaries of State. That is the approach already taken for ChatGPT by OpenAI, a company Musk co-founded and then left after internal strife.

In an alternate universe, X might have taken that advice. But here, in U.S. District Court in Northern California, also on Monday, Musk renewed his vendetta against his former OpenAI colleagues by filing a new lawsuit. It revives a suit he withdrew in June, which accused the company, founded as a nonprofit, of selling its more advanced AI capabilities to private companies.

Without apparent irony, Musk’s complaint argues that the technology was meant to be “openly released to the public for the good of all, not for private profit,” and warns that the dangers of AI include “accelerating the spread of disinformation,” which is exactly the problem the secretaries of state asked him to address. Suffice it to say that the odds of X following OpenAI’s example are close to zero.


That leaves voters at the mercy of chatbots that are ready, willing, and able to entertain them with sarcasm rather than supply the facts they need.

The first clue that something is wrong with Grok is that it is advertised as a “humorous” AI search assistant. Available to users who pay for Premium X subscriptions, the assistant carries a disclaimer urging people to verify the information it provides. That’s because, as X warns, Grok is an “early version” search assistant that “may possibly provide incorrect information, incorrectly summarize, or omit some context.”

These admissions alone should send a user to the nearest public library to consult a human librarian, or to skip Grok entirely. But only the more curious will ever see those caveats; clicking on Grok from the X homepage simply invites users to subscribe, before telling them anything about the shortcomings of the product they are being asked to buy.

Gowri Ramachandran, director of elections and security at the Brennan Center’s Elections and Government Program, has been monitoring the abuse and misuse of generative AI in other democracies holding elections this year. She explains that “a lot of the really high-profile chatbots that exist have, fortunately, taken steps to ensure that when people type in searches or prompts for election information, they don’t hallucinate the answer, and instead they send the user to the appropriate secretary of state or elections website or portal that helps them find the right place to get the information they need, which is a huge improvement.”

Ramachandran also notes that X has moved to address Grok’s flaws since the letter appeared. “While it may have taken longer than would have been ideal, from what I understand, the Grok chatbot is also making some improvements.” She adds: “Of course, acting in a socially responsible manner is welcome, encouraged, but it doesn’t eliminate the need for enforceable rules.”

AI regulation continues to lag. Concerns raised by the Harris deepfake prompted Senate Majority Leader Chuck Schumer (D-NY) to say he’d like to see two bills introduced by Sen. Amy Klobuchar (D-MN) pass soon. One would require political ads to disclose when they were created using AI; the other would prohibit “the distribution of materially deceptive audio or visual media generated by AI about federal candidates” for the purpose of “raising funds” or “influenc[ing] elections.”

Can Schumer break through Republicans’ knee-jerk obstruction, even as they claim to worry about disinformation, with another election-year government crisis looming? Passage would be historic. More likely, Congress will fail to act. One federal tool remains, however: last month, the Federal Communications Commission proposed rules that would require broadcasters, cable companies, and others to disclose the use of AI content in political ads.

Meanwhile, voters must contend with disinformation and deepfakes on their own.

Federal government agencies like the Cybersecurity and Infrastructure Security Agency (CISA) can advise voters to seek voting information from credible government sources, but there is little to stop bad actors from impersonating relatively unknown election officials, or from distributing materials online or by mail that look authentic but are fake.

“A lot of people might not be familiar with their secretary of state’s website,” says Danielle Davis, director of technology policy at the Joint Center for Political and Economic Studies, a Washington think tank that studies the socioeconomic status and civic engagement of African Americans. “What if someone else posts something that looks exactly the same but isn’t true?”

Few voters realize that a single lie can dramatically change a presidential election. Communities of color are particularly, though not exclusively, vulnerable to these attacks. The distribution of inaccurate times, dates, and locations for in-person voting has long been a feature of voter suppression in black neighborhoods. Ramachandran advises voters to get their information from .gov sites that are created, vetted, and monitored by federal, state, and local governments. Those sites are “very difficult to fake,” she says, because .gov domains are reserved for government agencies at all levels.

In 2016, the Russian government used disinformation tactics, such as photos of African Americans emblazoned with slogans, to persuade African Americans not to vote for Hillary Clinton. On Facebook, Davis says, Russian agents targeted black audiences with ads that either ignored the election, discouraged black Americans from voting, or advocated for independent candidates. Some Russian-backed Instagram accounts, Davis notes, essentially impersonated black Americans: they “claimed to be black, talking about black history, black nationalism, but were actually Russians who weren’t even in the United States.”

“I definitely see there’s not enough urgency in addressing the issues that come from these platforms,” Davis says. She warns that black communities need to be cautious and careful about sharing information on social media sites like Instagram that seems “interesting” but may be false, especially if a voter can’t find any details about that “interesting” post from a reputable news outlet.