Earlier this year, researchers at the Brookings Institution sent tens of thousands of emails to legislators. Some were generated by ChatGPT, while others were written by actual people.
Policymakers couldn’t tell the difference.
It’s clear that artificial intelligence will fundamentally change how businesses, governments, and nonprofits function. But this example points to a particularly alarming scenario: humans competing with bots to affect government decisions, thereby watering down what it means to be a citizen. It’s not hard to imagine a future where astroturfing campaigns use ChatGPT to flood elected officials with fabricated public comments, influencing policy decisions, legislation, and more.
While people will use A.I.-enhanced tools to solve problems and work more efficiently, the same technology may be catastrophic for democracy and people’s agency in society.
To counterbalance the influence of A.I., grant makers need to fund strategies that bring people from different backgrounds together to shape decisions that affect their lives. Philanthropy should make human agency — including over A.I. — a more explicit goal, asking whether initiatives it supports increase the power of citizens to make their voices heard.
Combating Misinformation
Generative A.I. bots with access to massive data sets and the ability to approximate human communication already enable politicians and companies to manipulate public sentiment and promote polarization, further driving authoritarianism in the United States and around the world. Conspiracy theories will become increasingly easy to spread through A.I.-enhanced versions of Facebook, X, and other social-media platforms.
Authoritarian governments have effectively used troll farms to spread misinformation and sow division. A.I. only enhances such efforts, extending them to monitoring and criminalizing political behavior. In countries with functioning democracies, generative A.I. is likely to have subtler but still-corrosive effects on civic participation.
Already, tech companies are selling state and local governments A.I.-enhanced tools to automate virtually everything, including decisions about child welfare, bail and parole, education, and policy priorities, despite evidence that these systems reflect and magnify racial bias. When algorithms that incorporate machine learning make decisions previously handled by public officials, citizens are less able to understand, let alone influence, government policies.
For example, the United States Agency for International Development claims on its website that ChatGPT and other A.I. tools can help establish partnerships, understand community needs, and advance the agency’s efforts to allow local communities to make decisions. Notably, USAID isn’t saying A.I. will free up staff’s time so they can spend it talking with local grassroots organizations. Rather, it claims machine learning itself can build trust and accomplish goals long associated with face-to-face interaction. Overwhelmingly, though, people want to talk to humans, not bots.
Eventually, avoiding A.I. will prove impossible. Despite public skepticism, pressure will build on governments, companies, large NGOs, and philanthropies to adopt the technology. Only those with the most resources will be able to afford specially designed A.I. systems, exacerbating racial, class, and North-South power imbalances. Better-resourced organizations, especially those that make claims and design programs based on technical expertise, will gain yet another advantage over grassroots groups in fundraising and access to decision makers.
Prioritize In-Person Organizing
Given how fast the A.I. race is moving, grassroots groups, activists, and social-change donors need to consider how to respond and adapt quickly. That should start with investments in face-to-face community organizing.
Organized people are the best antidote to A.I. We need to know ourselves better than the bots do and maintain relationships with each other to keep the upper hand. In-person organizing that connects individuals from different backgrounds can help immunize people against A.I.-fueled polarization. It’s also the hardest form of civic participation to fabricate.
I’ve witnessed that firsthand as a community organizer. In deeply red Indiana, I saw how thousands of conversations among bus riders, employers, and people of faith revived a twice-defeated plan to build a regional mass-transit system that now helps low-income workers commute to better-paying jobs. This is just one example of how grassroots organizing can turn disparate groups of individuals with little voice into forceful advocates — something that will become increasingly critical in an A.I.-dominated world.
Nonprofits need to make strategic choices about how they deploy A.I. tools, ensuring that their use encourages more, not less, human interaction. Sure, let ChatGPT write that first draft of a grant report, but don’t use A.I. to automate decisions about how to allocate client services — an impersonal approach that can lead to mistakes and perpetuate bias.
When awarding grants, philanthropy should prioritize programs that foster human engagement. That includes encouraging organizations with different constituencies, skills, and strategies to collaborate rather than compete with one another. To accomplish this, foundations should put a premium on regular personal interaction, including investing in site visits and study trips that bring people together and increase understanding of local dynamics.
People vs. Big Tech
Humans need to be better organized to maintain control of A.I. systems — to understand how they make decisions and to determine how the technology should be used. Social-justice grant makers and organizers can help by involving more people in the tug of war with big tech over A.I. regulation.
They can support city and state efforts to regulate machine learning, including restricting local government purchases of A.I. systems to automate bail, policing, or other decisions that are best left to people. Funding local campaigns will allow more people to gain experience in effective strategies for reining in A.I. and develop models for national legislation.
Some cities and states are beginning to take action. Under New York City’s Automated Employment Decision Tool law, employers using A.I. to automate hiring or promotion decisions must now notify applicants, disclose what data their algorithms use, and report annually on racial and gender disparities perpetuated by the tool. State legislatures have also passed laws that prevent the use of facial recognition to make arrests, create task forces and offices to study A.I., and allow consumers to opt out of having their personal data used by A.I.
Still, the United States remains far behind Europe in regulating A.I. and protecting privacy, a gap philanthropy can help rectify.
The need for change is urgent, particularly as the 2024 election ramps up. This summer, Florida Governor Ron DeSantis’s campaign circulated a video containing fake images, apparently generated by A.I., of Donald Trump hugging Anthony Fauci. Combating such threats is difficult. But the nonprofit world can play an indispensable role in making society resilient to A.I.’s harms and demonstrating that human agency is essential for personal well-being and democratic governance.