Foundations Seek to Advance A.I. for Good — and Also Protect the World From Its Threats
While technology experts sound the alarm on the pace of artificial-intelligence development, philanthropists — including long-established foundations and tech billionaires — have been responding with an uptick in grants.
Much of the philanthropy is focused on what is known as technology for good or “ethical A.I.,” which explores how to solve or mitigate the harmful effects of artificial-intelligence systems. Some scientists believe A.I. can be used to predict climate disasters and discover new drugs to save lives. Others warn that large language models could soon upend white-collar professions, fuel misinformation, and threaten national security.
What philanthropy can do to influence the trajectory of A.I. is starting to emerge. Billionaires who earned their fortunes in technology are more likely to support projects and institutions that emphasize the positive outcomes of A.I., while foundations not endowed with tech money have tended to focus more on A.I.’s dangers.
For example, former Google CEO Eric Schmidt and wife, Wendy Schmidt, have committed hundreds of millions of dollars to artificial-intelligence grant-making programs housed at Schmidt Futures to “accelerate the next global scientific revolution.” In addition to committing $125 million to advance research into A.I., last year the philanthropic venture announced a $148 million program to help postdoctoral fellows apply A.I. to science, technology, engineering, and mathematics.
Also in the A.I. enthusiast camp is the Patrick McGovern Foundation, named for the late billionaire who founded the International Data Group. It is one of the few philanthropies that have made artificial intelligence and data science an explicit grant-making priority. In 2021, the foundation committed $40 million to help nonprofits use artificial intelligence and data to advance “their work to protect the planet, foster economic prosperity, ensure healthy communities,” according to a news release from the foundation. McGovern also has an internal team of A.I. experts who help nonprofits use the technology to improve their programs.
“I am an incredible optimist about how these tools are going to improve our capacity to deliver on human welfare,” says Vilas Dhar, president of the Patrick J. McGovern Foundation. “What I think philanthropy needs to do, and civil society writ large, is to make sure we realize that promise and opportunity — to make sure these technologies don’t merely become one more profit-making sector of our economy but rather are invested in furthering human equity.”
Salesforce is also interested in helping nonprofits use A.I. The software company announced last month that it will award $2 million to education, work-force, and climate organizations “to advance the equitable and ethical use of trusted A.I.”
Billionaire entrepreneur and LinkedIn co-founder Reid Hoffman is another big donor who believes A.I. can improve humanity and has funded research centers at Stanford University and the University of Toronto to achieve that goal. He is betting A.I. can positively transform areas like health care (“giving everyone a medical assistant”) and education (“giving everyone a tutor”), he told the New York Times in May.
The enthusiasm for A.I. solutions among tech billionaires is not uniform, however. EBay founder Pierre Omidyar has taken a mixed approach through his Omidyar Network, which is making grants to nonprofits using the technology for scientific innovation as well as those trying to protect data privacy and advocate for regulation.
“One of the things that we’re trying really hard to think about is how do you have good A.I. regulation that is both sensitive to the type of innovation that needs to happen in this space but also sensitive to the public accountability systems,” says Anamitra Deb, managing director at the Omidyar Network.
The A.I. Skeptics
Grant makers with a more skeptical view of A.I. are not a uniform group either, though they tend to be foundations unaffiliated with the tech industry.
The Ford, MacArthur, and Rockefeller foundations number among several grant makers funding nonprofits examining the harmful effects of A.I.
For example, computer scientists Timnit Gebru and Joy Buolamwini, whose pivotal research on racial and gender bias in facial-recognition tools persuaded Amazon, IBM, and other companies to pull back on the technology in 2020, have received sizable grants from them and other big, established foundations.
Gebru launched the Distributed Artificial Intelligence Research Institute in 2021 to research A.I.’s harmful effects on marginalized groups “free from Big Tech’s pervasive influence.” The institute raised $3.7 million in initial funding from the MacArthur Foundation, Ford Foundation, Kapor Center, Open Society Foundations, and the Rockefeller Foundation. (The Ford, MacArthur, and Open Society foundations are financial supporters of the Chronicle.)
Buolamwini is continuing research on and advocacy against artificial-intelligence and facial-recognition technology through her Algorithmic Justice League, which has received at least $1.9 million in support from the Ford, MacArthur, and Rockefeller foundations as well as from the Alfred P. Sloan and Mozilla foundations.
“These are all people and organizations that I think have really had a profound impact on the A.I. field itself but also really caught the attention of policymakers as well,” says Eric Sears, who oversees MacArthur’s grants related to artificial intelligence.
The Ford Foundation also launched a Disability x Tech Fund through Borealis Philanthropy, which is supporting efforts to fight bias against people with disabilities in algorithms and artificial intelligence.
There are also A.I. skeptics among the tech elite awarding grants. Tesla CEO Elon Musk has warned A.I. could result in “civilizational destruction.” In 2015, he gave $10 million to the Future of Life Institute, a nonprofit that aims to prevent “existential risk” from A.I., and spearheaded a recent letter calling for a pause on A.I. development. Open Philanthropy, a foundation started by Facebook co-founder Dustin Moskovitz and his wife, Cari Tuna, has provided majority support to the Center for AI Safety, which also recently warned about the “risk of extinction” associated with A.I.
A significant portion of foundation giving on A.I. is also directed at universities studying ethical questions. The Ethics and Governance of AI Initiative, a joint project of the MIT Media Lab and Harvard’s Berkman Klein Center, received $26 million from 2017 to 2022 from Luminate (the Omidyar Group), Reid Hoffman, Knight Foundation, and the William and Flora Hewlett Foundation. (Hewlett is a financial supporter of the Chronicle.)
The goal, according to a May 2022 report, was “to ensure that technologies of automation and machine learning are researched, developed, and deployed in a way which vindicates social values of fairness, human autonomy, and justice.”
One university funding effort comes from the Kavli Foundation, which in 2021 committed $1.5 million each over five years to two new centers focused on scientific ethics — with artificial intelligence as one priority area — at the University of California at Berkeley and the University of Cambridge. The Knight Foundation announced in May it will spend $30 million to create a new ethical technology institute at Georgetown University to inform policymakers.
Although hundreds of millions of philanthropic dollars have been committed to ethical A.I. efforts, influencing tech companies and governments remains a massive challenge.
“Philanthropy is just a drop in the bucket compared to the Goliath-sized tech platforms, the Goliath-sized A.I. companies, the Goliath-sized regulators and policymakers that can actually take a crack at this,” says Deb of the Omidyar Network.
Even with those obstacles, foundation leaders, researchers, and advocates largely agree that philanthropy can — and should — shape the future of A.I.
“The industry is so dominant in shaping not only the scope of development of A.I. systems in the academic space, they’re shaping the field of research,” says Sarah Myers West, managing director of the AI Now Institute. “And as policymakers are looking to really hold these companies accountable, it’s key to have funders step in and provide support to the organizations on the front lines to ensure that the broader public interest is accounted for.”
Reporting for this article was underwritten by a Lilly Endowment grant to enhance public understanding of philanthropy. The Chronicle is solely responsible for the content. See more about the Chronicle, the grant, how our foundation-supported journalism works, and our gift-acceptance policy.