To cut through online noise and potentially false information surrounding the U.S. elections and ensure that voters go to the polls armed with verified facts, the John S. and James L. Knight Foundation Thursday announced it will put nearly $7 million toward fighting misinformation in states crucial to determining the results in November.
Funding for the Associated Press, in the amount of $1.5 million, will provide training to small newsrooms on how to report election-related polling and how to identify and explain to readers instances of misinformation spreading online. Another $2.75 million will be dedicated to the Knight Election Hub, which provides resources like polling and data, as well as training for newsrooms. Knight will split the remainder of the commitment among nonprofits that support journalism and will direct grants to one news outlet in each of the election swing states of Arizona, Georgia, Michigan, Nevada, Pennsylvania, and Wisconsin.
The Knight Foundation grants are part of an escalating effort by progressive foundations to fund both newsrooms and social media researchers in thwarting what are perceived as election-related falsehoods. This “pro-democracy” grant-making effort started during the 2016 presidential election cycle, when there was evidence of election interference from Russia. It reached an apex in 2019-20, when President Donald Trump was accused of blocking the peaceful transition of power. While many praise pro-democracy philanthropy, the obstacles are numerous, including the accusation from conservatives that the grant-making effort is inherently partisan.
Maribel Pérez Wadsworth, CEO of the Knight Foundation, a leading funder of journalism and media innovation, insists that her philanthropy is not taking sides in the presidential election. The Knight Foundation’s grant focuses on newsrooms in swing states, she said, due to recent U.S. intelligence reports that Russia continues to flood social media sites with false information, especially related to those states, in an attempt to secure the election of Donald Trump.
“It is important that we help bolster the access to critical and accurate nonpartisan information about the big choices in the election,” said Wadsworth. “The country would be taking a vastly different direction depending on the outcome.”
Wadsworth said a neutral player like the Associated Press, which she said sets the “gold standard” for journalistic integrity, is key in restoring confidence in U.S. elections and strengthening the role of newsrooms across the country.
Making sure the American public has access to accurate information about candidates and policy issues is important not only because it leads to better-informed voting decisions, said Julie Pace, senior vice president and executive editor of the Associated Press. Dependable information can also increase people’s trust in the integrity of the election system, resulting in higher voter turnout and a stronger democracy.
Said Pace: “We’re not the liberal media, and we don’t have a partisan agenda.”
But others believe “misinformation” is a catchall term for information that critics simply disagree with.
Philanthropic support of efforts to quash misinformation is not an attempt to sort out the truth online, said William Schambra, senior fellow at the Hudson Institute, who commented before the Knight announcement. Rather, Schambra sees the fight against misinformation as a response to the growing number of alternative media sites on the right, which progressives view with suspicion.
“Misinformation is a term of art invented by the left to describe something that is inherently bad or wrong,” he wrote in an email. “Hence the projects funded to ‘overcome’ it are really just efforts to undo the ill effects of conservative points of view.”
It isn’t clear how much foundations have spent in recent years to stop misinformation, but the commitments are starting to add up. Knight’s latest grants complement more than $100 million it has already committed to researchers studying how bad actors use the internet to interfere with elections. Other grant makers, including the Hewlett, MacArthur, Open Society, and Packard foundations, have placed a growing focus on the issue.
But neutralizing misinformation faces a tangle of challenges. The attempts by foreign actors, such as Russia, may be the most vexing. In addition, the growth of artificial intelligence that can produce convincing but false videos and images has made it easier to hoodwink the American public. And many, like Schambra, view efforts to fight misinformation as an attempt to silence the speech of political adversaries. This concern has led Republican House members to try to curtail misinformation research. They are getting help from social media platforms themselves, which once were relatively open to providing data to tech researchers but have taken a more guarded position in recent years.
For her part, Wadsworth believes that helping news organizations base their election coverage on verified factual reporting can play a major role in ensuring the integrity of the coming election and strengthening journalism over the long haul, as newsrooms tinker with new ways to stay financially viable.
“Philanthropy plays a really important role in the near term because it can help to provide an incredibly important runway while the media ecosystem continues to evolve,” she said.
Philanthropy’s Role, Politicians’ Response
Leigh Chapman has seen firsthand the role misinformation has played in elections. In the weeks before the 2022 midterm elections, Chapman, Pennsylvania’s then acting secretary of state, shuttled between meetings with some of the world’s biggest social media platforms — Facebook, Google, Snapchat, and TikTok. Her goal was to persuade them to take down misleading or untrue claims about how mail-in ballots were being counted. Sometimes the companies responded, and sometimes they didn’t, she said.
Now, less than four months before a pivotal general election, Chapman is still trying to stem the flow of misinformation. At the Open Society Foundations, where she oversees grants designed to strengthen U.S. democracy, she manages nearly $4 million in funding to combat misinformation by supporting organizations that work to beat back lies and expose subterfuge spread online that could affect the vote. She said her job may have gotten tougher since she left public service.
That’s because over the past four years, many voters have hardened in their political views and are less inclined to believe messages from the news media, government, or nonprofits intended to correct false information, experts say. And internet companies have not advanced their efforts to quash disinformation on their platforms. Some, like X, formerly Twitter, have seemingly walked away from the whole idea of monitoring online propaganda. Others, like Meta, have cut down on content monitoring staff or closed their platforms to researchers wanting to study how misinformation spreads.
Philanthropic support for researchers to track and respond to disinformation on social media sites was also dealt a blow by Republican lawmakers. The GOP-led House Judiciary Committee launched an investigation into whether the fight against misinformation was instead an attempt to censor conservative voices — and last year began issuing subpoenas to university and nonprofit-funded researchers in a move some nonprofit and academic leaders say has chilled further investigation.
Pressure from lawmakers may have also caused some foundations to balk at supporting such efforts.
One major research effort, the Stanford Internet Observatory, or SIO, in June suspended a project that delved into election-related misinformation, particularly on social media.
The project, which has received more than $5 million in funding since 2019 from Craig Newmark Philanthropies, the William and Flora Hewlett Foundation, and others, cited a barren fundraising environment. A statement on the Stanford Internet Observatory website reads, “Stanford has not shut down or dismantled SIO as a result of outside pressure. SIO does, however, face funding challenges as its founding grants will soon be exhausted.”
Meanwhile, technology keeps marching on. The use of artificial intelligence to create false but deeply convincing videos and images related to politics could torpedo efforts to tamp down online fakes as the November elections draw near. The technology can sow confusion by, for instance, manipulating videos of candidates to misrepresent their biographies and voting histories.
“Because this is the first major election with generative A.I., it really stands to be a pivotal year for A.I. experimentation in the elections,” Chapman said. “It’s going to be very hard for voters to determine what is fact and what is fiction.”
Billions in Democracy Funding
According to a December survey by the Democracy Fund, a philanthropy created by eBay founder Pierre Omidyar, grant makers plowed nearly $7 billion into democracy issues, including get-out-the-vote efforts and journalism, in 2021 and 2022 combined. It is not clear how much of that total went to fighting misinformation.
Philanthropy has supported a variety of approaches to fight misinformation in the past two years. In November, Open Society joined with nine other foundations, including Ford, Hewlett, MacArthur, and Packard, to commit $200 million toward ensuring that artificial intelligence serves the public interest, including efforts to regulate its use in political social media posts. Separately, Open Society this year made nearly $4 million in grants to organizations responding to online misinformation.
In addition to its latest announced commitment to support news outlets, the Knight Foundation expanded a $50 million commitment from 2019 to support researchers studying the spread of misinformation; as of June, it had made about $107 million in grants, including $5 million to create the Center for an Informed Public at the University of Washington. The center, which was a research partner of the Stanford Internet Observatory misinformation project, has conducted rapid research to monitor and push social media platforms to take down unsubstantiated political claims.
To inoculate voters from the spread of false information, Open Society’s Chapman is making grants to nonprofits to provide “pre-buttal” responses that anticipate online lies and to create spaces on the internet where people can be confident that the voting and candidate information they are getting is legitimate.
Said Chapman: “The best way to counter disinformation is through providing access to accurate election information through trusted sources.”
Philanthropy, Misinformation, and the Problem of Trust
Luis Lozada, CEO of Democracy Works, a nonprofit that helps voters find accurate information online and received a $500,000 grant this year from Open Society, is trying to do just that. He said the biggest bulwark against misinformation is to flood the internet with the facts.
The challenge, he said, is getting the information in the right places. Most people, particularly young voters, don’t access their state elections office for voter registration information or to get up to speed on mail-in ballot deadlines or polling places.
Instead, young voters are more inclined to use a search engine or an A.I. tool. To help circulate accurate information about voting dates and registration information, Lozada has contracted with search engines and social media platforms that pay Democracy Works for ballot information that the nonprofit has collected and rendered into a readable format for the internet.
“We get top billing in the Google search box,” he said.
Getting the right information to Google is important, Lozada said, because organizations and political parties take information from a Google search and plug it into their own websites. In addition to Google, Lozada has contracts with Facebook and TikTok to steer users looking for voting information to verified information from Democracy Works.
Democracy Works benefits financially from those arrangements, but the bulk of the group’s funding — grants from foundations — has decreased since the 2020 election cycle. Back then, Democracy Works had a $15 million annual budget and a staff of 70. Last year, its budget dropped by one third, and the nonprofit’s head count is now about 50 people. However, a gift last year from MacKenzie Scott — Lozada won’t say how much — will help bring the group’s budget closer to its peak.
Silencing Researchers and Funders
In May, a message describing research on the number of noncitizens who fraudulently voted ricocheted across the internet. Users of social media platforms such as TikTok, Facebook, Truth Social, and YouTube amplified the message. On X, it was retweeted more than 26 million times in a single day.
The burst of activity helped resurface a rumor, fueled by presidential candidate Donald Trump, that up to 27 percent of noncitizens, including documented and undocumented immigrants, participate in elections. The research used to promote the claim was peer reviewed but then widely denounced by hundreds of academics.
Kate Starbird and other researchers at the University of Washington Center for an Informed Public set out to analyze how the rumor spread across social media. One clue — a comment about the message from X’s CEO, Elon Musk, was widely retweeted. But without cooperation from social media companies, Starbird and her team can only guess how the message was reinforced, she says.
Starbird, who has teamed up with researchers at the Stanford Internet Observatory in the past, would like to publish a guidebook on how to combat misinformation. But for most of last year, her work was stymied by both GOP members of the House Judiciary Committee and by social media companies that have blocked researchers from using quantitative analyses of posts, a key step to mapping the spread of misinformation on their platforms. Her project and the Stanford Internet Observatory were also targets of a lawsuit by former Trump adviser Stephen Miller’s nonprofit, America First Legal.
According to Miller’s group, misinformation researchers screened 859 million tweets during the 2020 election cycle and called for the censorship of 20 million messages.
“They created a regime of surveillance, censorship, and control fit for communist China,” Miller said in a statement when the lawsuit was filed in May 2023. “Under the Orwellian guise of policing ‘mis’ and ‘disinformation,’ the organizations and entities we are suing today are responsible for radically eroding the rights and liberties upon which the survival of free society depends.”
Conservative policy experts argue that the term “misinformation” isn’t sufficiently defined. As a result, social media platforms, often under pressure from nonprofit or government leaders, can dictate what is tossed from their sites and what is amplified.
In such a set-up, says Daniel Cochrane, senior research associate at the Heritage Foundation, people with less power than a tech executive or federal official are unable to bring their views to the table in public debate without being blacklisted.
“Misinformation is an inherently vague and amorphous term that has no fixed, concrete, or objective definition,” he says. “Because of that, it is prone to abuse in a way that runs counter to the idea of free speech and democratic discourse at its core.”
Social media platforms, including X, have published policies against posting misinformation. X, for instance, says that it removes content if it is “likely to cause serious harm, including widespread civil unrest or mass violence.” But the guidelines also attempt to protect free speech, giving users wide latitude, leaving the door open for memes, satire, commentary, and animations as long as they don’t contribute to “significant confusion about the authenticity of the media.”
After Elon Musk bought Twitter in 2022 for $44 billion, the company dismantled its staff working on trust and safety issues. Moreover, by blocking external researchers from tracking misinformation, the internet platforms have prevented understanding of how their technology is changing the world, says Alex Abdo, litigation director at the Knight First Amendment Institute at Columbia University, which was established in 2016 with a $30 million grant from Knight and $30 million from Columbia. With billions of posts added each day, traffic is at such a scale that it is difficult to identify the sources of misinformation and stop its spread.
Rather than open up their troves of data, internet companies have preferred to send researchers cease-and-desist letters, Abdo says, to suppress criticism of how they moderate content. For instance, Facebook called on New York University researchers to stop digging into misinformation on the site related to the January 6 attacks on the U.S. Capitol and eventually shut down their access to the site, reported the Associated Press.
Abdo and others at the Knight Institute advise a legal defense fund housed at the Miami Foundation, which has provided consultation to dozens of researchers who are facing lawsuits or who feel intimidated by the threat of one.
Abdo fears that the researchers’ work, which has been “hijacked” by internet companies and Congress, could result in a tainted ballot.
Says Abdo: “We’re headed into the 2024 election with a blindfold on.”
Reporting for this article was underwritten by a Lilly Endowment grant to enhance public understanding of philanthropy. The Associated Press is a publishing partner of the Chronicle of Philanthropy through this grant. The Ford, Hewlett, MacArthur, and Open Society foundations are financial supporters of the Chronicle. The Chronicle is solely responsible for its content. See more about the Chronicle, our grants, how our foundation-supported journalism works, and our gift-acceptance policy.