Technology doesn’t solve social problems, yet the notion that it does is thriving as never before. Fueled by Big Tech’s economic and cultural power in the United States, technology experts, government leaders, and philanthropists eagerly propose digital innovations as the solution to structural, systemic, and political problems. But here’s the dilemma: Technology consistently mirrors and magnifies the good, the bad, and the ugly in society.
Technology can provide a bandage for social problems, but no app, database, or algorithm will bring about the structural change desperately needed to address inequality, political polarization, and racism. Nor will it provide clear answers to the fundamental debates on rights and values facing our country.
And yet, since the early 2000s, philanthropic institutions have led the way in driving “techno-solutionism” as a strategy to address social ills by funding apps, digital platforms, and algorithmic decision-making systems used in employment screening, criminal-justice sentencing, and much more. At the same time, philanthropy has not adequately supported the larger structural changes needed for technology to have a meaningful impact.
Philanthropic institutions have a challenging mandate: spend money to fix society’s most intractable problems while also supporting innovative, outside-the-box solutions. Additionally, they can’t waste money or take on too much risk. Given such a checklist, it’s no surprise that philanthropy increasingly turns to technological solutions to address the thorniest societal issues.
Funding technology produces a concrete outcome that pleases boards. It carries the heady whiff of innovation, while positioning the tech-generated results as a source of definitive and objective truth: We have the data to prove it! But relying on data sets and modeling to address urgent social problems ignores the following two fundamental realities.
Technology is not inherently democratic. It does not “level the playing field,” “give everyone a voice,” or create the conditions for objective reality. The technology we have embraced and the values we have allowed to inform technology design broadly — scalability, efficiency, market potential — work in opposition to values such as equity, agency, and protection of vulnerable populations.
Technology doesn’t exist in a vacuum. All technologies are deployed within a social system. How real people engage with technology is infinitely varied, often surprising, sometimes malicious, sometimes hilarious. And yet many digital systems are designed without their social impact in mind from the outset, disproportionately harming already vulnerable populations.
Consider the widely employed risk-assessment algorithms used in courtrooms to determine bail for defendants and touted as more objective than human judges. These systems were shown to rely on racially biased data, landing people in jail with no legal recourse to challenge the tool that sent them there. Similarly, body cameras deployed in police departments nationwide promised increased police accountability but were backed by little evidence. Instead, they have allowed further surveillance of already heavily surveilled populations — primarily people who are Black or brown — with little effect on policing quality.
A techno-solutionism mentality was also evident in Mark Zuckerberg’s promise to Congress that artificial intelligence would solve Facebook’s content-moderation problem even as abuse, harassment, and disinformation continue to run rampant and battles rage over so-called anti-conservative bias on social media.
What’s more, consider the millions of aid dollars poured into “AI for Good” efforts intended to reduce poverty and hunger, and improve health and education in the world’s poorest countries. These were all deployed without first addressing the fundamental structural issues of inequality and globalized capitalism, resource extraction, and withering corruption.
Philanthropists Can Say No
In some cases, the answer may be for philanthropists to simply say “no” to supporting technology developments that haven’t undergone rigorous and community-based assessments demonstrating their ability to produce just outcomes. In the tech design world, we’re starting to see a growing movement known as design refusal.
“To refuse is to say no — to turn down requests and opportunities to build technologies that are likely to produce harm,” argues Chelsea Barabas of MIT Media Lab. “But refusal is more than just an exit strategy. It’s an opportunity to reimagine the default categories, assumptions, and problem formulations that so often circumscribe the work of data science.”
What would it look like for philanthropic institutions to adopt a similar stance when it comes to funding technology? What would it look like for grant makers to both fund and seek guidance from the organizations and experts deeply engaged in studying and fighting for algorithmic justice? And what would it look like for donors to work together to document decisions of refusal in funding technology and share their reasoning and reflections?
A declaration from philanthropy to shine a light on these issues and tie them directly to funding decisions would fundamentally change the terms of debate.
Support Community Organizers
To be sure, technology can contribute to advancing a just and equitable democracy. But technological solutions are not the starting point for democracy. They are, if anything, a byproduct of robust community action and rigorous interdisciplinary research. Over the next 10 years, philanthropy should not be distracted by building new technologies. After all, the concentration of capital and power in Big Tech and in government ensures that funding for these technologies will come from elsewhere.
Rather, philanthropy should act as a bulwark and counterweight against technological solutionism and focus on redefining what “technology in the public interest” means. This can be done by supporting community organizers, advocates, and researchers in the growing field of algorithmic and data justice and by focusing resources on advancing just, equitable, and enforceable governance of technology.
The deafening regulatory silence in the United States has allowed the Silicon Valley mantra of “move fast and break things” to drive our societal understanding of how technology should be designed and deployed. Philanthropy is in a unique position to reject such thinking and, instead, support the multiyear, costly, and radically necessary work of ensuring new technology is governed by policies, norms, laws, and practices designed with and by the communities they aim to serve.
A new presidential administration offers the philanthropic world a golden opportunity to reshape how we think about the intersection of technology and democracy. The drive both inside and outside government to build “public interest technology” is about to return to the forefront after four years in exile. Now is the time for philanthropic organizations to use their power, influence, and financial resources to insist that technology live up to its potential to advance justice and equity in our society, as so many imagined it would.
This piece was adapted from an essay originally published by the Kettering and Knight Foundations in their new publication, “Democracy and Civic Life: What Is the Long Game for Philanthropy?”