“How do we know you won’t pull an OpenAI?”

It’s the question Stella Biderman has gotten used to answering when she seeks funding from major foundations for EleutherAI, her two-year-old nonprofit A.I. lab that has developed open-source artificial intelligence models.

The irony isn’t lost on her. Not long ago, she declined a deal dangled by one of Silicon Valley’s most prominent venture capitalists who, with the snap of his fingers, promised to raise $100 million for the fledgling nonprofit lab — over 30 times EleutherAI’s current annual budget — if only the lab’s leaders would agree to drop its 501(c)(3) status.

In today’s A.I. gold rush, where tech giants spend billions on increasingly powerful models and top researchers command seven-figure salaries, to be a nonprofit A.I. lab is to be caught in a Catch-22: defend your mission to increasingly wary philanthropic funders or give in to temptation and become a for-profit company.

Philanthropy once played an outsize role in building major A.I. research centers and nurturing influential theorists — by donating hundreds of millions of dollars, largely to university labs — yet today those dollars are dwarfed by the billions flowing from corporations and venture capitalists. For tech nonprofits and their philanthropic backers, this has meant embracing a new role: pioneering the research and safeguards the corporate world won’t touch.

“If making a lot of money was my goal, that would be easy,” said Biderman, whose employees have seen their pay packages triple or quadruple after being poached by companies like OpenAI, Anthropic, and Google.

But EleutherAI doesn’t want to join the race to build ever-larger models. Instead, backed by grants from Open Philanthropy, Omidyar Network, and A.I. companies Hugging Face and StabilityAI, the group has carved out a different niche: researching how A.I. systems make decisions, maintaining widely used training datasets, and shaping global policy around A.I. safety and transparency.

Several A.I. ethics experts, including Biderman, say their policy work has taken on new urgency given what they expect will be a hands-off approach to A.I. regulation under the Trump administration.

“We need there to be people who are experts in the development of this technology, who do research on it and promote access to it but who don’t have a vested financial interest in its commercial success,” Biderman said.

The Academic Wave

Not so long ago, mentioning “artificial intelligence” made venture capitalists’ eyes glaze over. After decades of boom-and-bust cycles in which the field seemed perpetually on the verge of a breakthrough, many investors were reluctant to bet too big or too fast on a technology that still lacked commercial direction.

Indeed, a decade ago, A.I. research was still relatively cheap, nerdy, and niche — and dominated by academia, which produced the bulk of new A.I. models and prototypes for what would become today’s in-demand tools.

“There were very highly celebrated papers that were literally models trained on somebody’s laptop at a conference,” recalled Biderman. “They went to a conference, they met someone, they had a really interesting conversation, they wrote some code, they plugged their laptop into the wall, and they left it running.”

During such “dark periods” of investment in A.I., “the field sort of motors along with the typical sources of funding for research in academia — plus some companies doing a lot of good internal work,” said Jerry Kaplan, a former researcher at Stanford University who has been studying the A.I. field since the 1970s. That is until “some kind of advancement occurs, or some new technique that allows us to solve a different class of problems. Then all of a sudden the investment community comes running, and that distorts the field, usually for a few years.”

The industry’s last big hype cycle occurred in the 1980s with advancements in rules-based computing systems, a kind of A.I. that applies hand-coded rules to mimic human decision-making, but interest and investment collapsed by the early 1990s. After breakthroughs in neural networks and deep learning in the early 2010s, A.I. began to regain momentum before exploding into another major boom in recent years with the rise of generative A.I.

Though the major tech companies retained their own specialized A.I. teams, in those quieter days philanthropy and government funding were steady sources of support, playing an outsize role in advancing research through academic grants and nonprofit labs.

“Ten years ago, the clock speed of A.I. was extraordinarily different. I won’t call it a backwater, but it was kind of sleepy,” said Mark Greaves, executive director of Schmidt Sciences’ A.I. and Advanced Computing Institute, who has worked in government and nonprofit A.I. initiatives for decades. “We didn’t have OpenAI. We had an active and prestigious group at Microsoft, but it wasn’t what it is today, so there was also room for philanthropy” to move the field forward.

Toward the end of the 2010s, tech-inclined philanthropists were pouring hundreds of millions into artificial intelligence research and development.

By 2017, the Ethics and Governance of A.I. Initiative, a collaboration between MIT Media Lab and the Harvard Berkman Klein Center for Internet and Society, had raised $26 million from the Omidyar Network, Knight Foundation, William and Flora Hewlett Foundation, and LinkedIn cofounder Reid Hoffman.

In 2015, Open Philanthropy, a foundation funded largely by Facebook co-founder Dustin Moskovitz, began granting tens of millions of dollars for research on the technology’s risks and potential, and that same year, Elon Musk donated $10 million to the Future of Life Institute, a think tank focused on the risks of A.I.

Blackstone CEO Stephen Schwarzman made waves in 2018 when he donated $350 million to MIT to establish a college of computing. Soon after, Microsoft co-founder Paul Allen endowed the Allen Institute for Artificial Intelligence — a nonprofit research institute he founded in 2014 — with $125 million and later donated $40 million to establish a computing school at the University of Washington. In 2017, longtime Nvidia executive Dwight Diercks gave $34 million to the Milwaukee School of Engineering.

The Rise of OpenAI

Yet philanthropy’s most consequential contribution came in 2015 with the creation of OpenAI as a nonprofit, albeit one dominated by figures from the venture capital world. Founded by some of the tech world’s biggest and deepest-pocketed stars, OpenAI received about $137 million in donations in its first four years, roughly a third of which came from Musk.

The group’s nonprofit status — and a charter that promised to avoid a long-term “competitive race without time for adequate safety precautions” — had inherent appeal to some of the field’s premier researchers, many of whom feared existential risks from unchecked A.I. growth. So much so that when co-founder Gregory Brockman approached 10 of them with an offer to work for OpenAI at just a fraction of the salary they could demand at Google or Microsoft, nine said yes.

In other words, OpenAI’s nonprofit mission was an asset in the battle for rarefied talent — a pattern repeated by other nonprofit A.I. labs that have produced cutting-edge research at a fraction of corporate costs.

“The engineers we have here, especially the more advanced A.I. folks, could go to Amazon or Microsoft or Google and almost certainly make more money than they make here,” said Ted Schmitt, senior director of conservation at the Allen Institute for A.I., or Ai2.

“They’re not in it just for the paycheck or the career ladder. They’re trying to do something they care about,” he said. “That only goes so far, but it does help with the cost.”

Those costs have ballooned to breathtaking heights in recent years, leading to dramatic shifts in the direction of A.I.-related nonprofits and philanthropy. OpenAI spent upward of $100 million to train its latest model — up from the estimated $4.6 million it cost to train GPT-3 in 2020 — and the training costs of Google’s Gemini have been estimated at a staggering $191 million. It’s no wonder that today, the vast majority of new A.I. models come from companies rather than academia or nonprofits like EleutherAI and Ai2.

Rising costs — and the aim to release a highly competitive product — led to OpenAI’s controversial decision in 2019 to restructure into a “capped-profit” company governed by a nonprofit board. The move, CEO Sam Altman argued, was necessary to raise the funds needed for advanced A.I. research. Shortly after the restructuring, OpenAI released GPT-3 — the large language model that would form the foundations of ChatGPT — without publishing its source code, a first for an organization that had once prided itself on its transparency.

The decision sent shock waves through some A.I. researchers. On a Discord channel populated by independent A.I. researchers and hobbyists, Biderman of EleutherAI was livid.

“We were concerned about the fact that the leading research institute didn’t seem to value giving people access to it,” she said. “If you are concerned about the capabilities, the limitations, the biases, and the potential societal impacts” of A.I., then “I don’t want the people who are developing the model to control who can do what studies.”

In response, she and other members of the Discord channel came together to build their own open-source alternative to GPT-3, and the collective formally became the nonprofit EleutherAI in early 2023.

“Our goal was to promote the ability of researchers to use this technology and to study this technology,” she said.

With an annual budget of under $3 million, EleutherAI doesn’t focus on building the world’s largest open-source A.I. models anymore “because there are other people doing it” with far higher budgets, Biderman said.

Even DeepSeek, the Chinese startup that has rattled the industry with its impressive open-source chatbot, claimed to have spent only about $6 million on training, she noted, a workable expense for some nonprofits but double EleutherAI’s entire budget.

That a high-quality A.I. model could be built for millions of dollars rather than billions has sparked debate over whether major tech companies have vastly overstated the costs of A.I. development, an accusation Kaplan recognizes well from previous hype cycles.

“It’s an amazing technology,” he said, but “the amount of money that is being invested and the valuations and the pay that people are getting — I’ve been through this many times — it is out of proportion to the returns that are going to occur.”

For nonprofit labs like EleutherAI, the exact price tag may be beside the point.

Instead of building new models, the group — with its hundreds of volunteer researchers — is focusing on more targeted research projects and maintaining widely used training libraries that even government agencies rely on.

According to Biderman, with so much work to be done, the group has no interest in giving in to the temptation of venture capital anytime soon.

“I don’t think we’re at a crossroads,” she said. “I think we’ve been going the wrong way from a crossroads for several years. My primary activity in the past three years has been trying to push against that tide.”

Noncommercial Directions

As OpenAI has become more commercialized and closed-source, Ai2 — founded a year before OpenAI by the late Paul Allen with a similar mission and comparable funding — remains a nonprofit. It also has become increasingly focused on noncommercial research, including a free chatbot trained on millions of scientific papers and a set of A.I. tools designed to monitor climate change.

“There are some things that we don’t need to do anymore, that the commercial space is just going to do, and they’ve got the money and they’ve got the business models to do it,” Schmitt of Ai2 said. “Where are the gaps that a commercial space that’s moving very, very fast won’t fill? Those tend to be policy issues, ethics or safety, and making this technology available to Global South countries that are deeply underresourced.”

It’s a pattern repeated across the nonprofit and philanthropic A.I. worlds. Rather than attempting to compete with corporate giants to build ever-larger models, organizations are carving out niches where they can still have impact. Ai2 initially concentrated on advancing fundamental research, and while the group still releases open-source alternatives to commercial chatbots, in recent years it has turned to targeted tools like Skylight, which uses A.I. to detect illegal fishing, and EarthRanger, a wildlife tracking system that in November helped conservationists identify the first documented case of avian flu in a wild cougar.

“A.I. is expensive, the compute is expensive, the engineers are expensive,” Schmitt said. “It’s about lowering the barrier to the promise of A.I., because if we don’t lower that barrier, then a whole lot of people are going to be left out.”

Philanthropic funders have also refocused their giving to reflect these rapid changes in the field. At Schmidt Sciences, part of the philanthropy venture founded by ex-Google CEO Eric Schmidt and his wife, Wendy, that means funding projects that benefit the public but lack immediate commercial value — like a recent partnership with the Sorbonne and the Louvre that deploys A.I. techniques to study historical paintings.

“One of our jobs is to try and figure out how to use these tools, which are getting so incredibly powerful, to really push forward science, discovery, humanities, and scholarship,” Greaves said.

While corporate investment in A.I. continues to surge, with big tech companies planning to spend tens of billions of dollars on the technology, many argue that philanthropy still has an important role to play. With the new Trump administration’s rollback of fledgling federal A.I. regulations — alongside a broad pledge to “enhance America’s A.I. dominance” — the stakes for philanthropically funded independent watchdog efforts may be higher than ever.

“You can’t play on the capital expenditure turf, but there are a lot of different ways that philanthropy can make an important difference,” said Mike Kubzansky, CEO of Omidyar Network, eBay founder Pierre Omidyar’s philanthropic venture. Kubzansky points to the need for investment in “seat belts and catalytic converters” for the A.I. revolution — including tools to detect deepfakes and systems for auditing A.I. models.

In recent years, Omidyar Network has taken a wide range of approaches to influencing the field’s direction, including a decision last year to take an ownership stake in Anthropic, a leading A.I. company founded by former OpenAI employees, with the explicit goal of maintaining ethical commitments. In the past year, the network has also supported antitrust lawsuits against the tech giant Google and played a role in petitioning California’s attorney general to examine OpenAI’s conversion from nonprofit to for-profit status.

“The good news is you can actually make a difference as a philanthropy if you are smart about how you use your funds. You’re not going to be building data centers like Stargate, but that’s never been the role for philanthropy anyway,” Kubzansky said. If you find ways to fill crucial gaps and provide the necessary counterweights to commercial interests, “you don’t need as much money as the big dogs to be effective in this space.”

Still, only a tiny sliver of philanthropy today goes to such investments in tech governance, even as many major foundations encourage their grantees to experiment with, if not outright embrace, new corporate A.I. tools. It’s a dichotomy that Lucy Bernholz, director of Stanford’s Digital Civil Society Lab, warns could be a major mistake.

“For the most part, philanthropy is doing the wrong thing right now,” said Bernholz, who argues that foundations should prioritize safety testing rather than rushing to fund A.I. adoption. “The pressure they are putting on organizations to prioritize efficiency over mission is dangerous — please stop.”

Reporting for this article was underwritten by a Lilly Endowment grant to enhance public understanding of philanthropy. The Chronicle is solely responsible for the content. See more about the Chronicle, the grant, how our foundation-supported journalism works, and our gift-acceptance policy.