Few philanthropic leaders have had a front-row seat to the A.I. revolution quite like Vilas Dhar, president of the McGovern Foundation, one of the first grant makers exclusively dedicated to data science.
With an endowment of $1.2 billion from the late Patrick J. McGovern — who founded Computerworld, Macworld, and PCWorld — the young foundation has doled out $411 million in grants for projects related to data science since 2017, including over $66 million earmarked for A.I. in the last year alone.
Dhar is a computer scientist and lawyer by training who co-founded a public-interest law firm with his brother in Boston, where he also led a startup nonprofit incubator called the Next Mile Project, before joining the McGovern Foundation as its inaugural president. He is also an adviser on artificial intelligence to the United Nations, the OECD, the World Economic Forum, and other global governing bodies. Yet it’s not just tech expertise that’s needed at the table as the future is coded, he says: “These conversations are very rarely actually about technology.”
The Chronicle spoke with Dhar about why and how philanthropy can fuel nonprofit collaboration, access, and resilience to the latest technology, why disinformation is not what he’s afraid of in the 2024 elections, and how A.I. could soon change just about everything.
“This isn’t a conversation about A.I.,” he says. “It’s actually a conversation about how philanthropy changes itself to understand how A.I. is going to affect all of what philanthropy does.”
Last year, A.I. exploded into the popular imagination. How has that shifted priorities for the foundation and philanthropy at large?
The rise of ChatGPT was incredible because it created this widespread curiosity and willingness to learn more about what A.I. empowers us to do.
Many, many foundations have reached out to us in the last 12 months. I’ve been talking to boards and philanthropic organizations, and our team works directly with program staff at a whole bunch of foundations to help empower their nonprofit grantees to use these tools.
We also have a set of data scientists and A.I. engineers who build our own in-house A.I. products. It’s kind of like a tech company wholly owned by a philanthropy.
Our first product was for newsrooms and allows editors to repurpose existing investigative content using generative tools. Our second product is going to help philanthropies use A.I. to conduct better financial diligence on their grantee partners.
We build this stuff under an open license, we build it all open source, we give it away for free. The idea is to build A.I. products that actually make philanthropy better.
We’ve only been around about five years, so it’s been an incredible privilege to build to fit the need, rather than having to shift inertia from a more traditional institution.
What advice do you have for those in philanthropy looking to get started?
If foundations stepped forward and said, “How do we use A.I. to advance our work around literacy, malnutrition, poverty, and digital equality?” — that could be such a powerful conversation.
But it takes work to get there and it takes deep engagement.
This isn’t the time to jump on some sort of A.I. bandwagon. This is a time to really look closely at how we change our institutions to prepare for the changes that A.I. is creating in the world.
Philanthropies operate at scale, so we have the capacity to be translators of what’s happening in the A.I. world for our grant partners.
While an individual nonprofit might not be able to go off and hire an entire data-science team, what if foundations built data-science teams and shared those resources with all of their partners as part of their grant-making strategy?
The World Resources Institute has done just this with its massively successful Data Lab. We’ve seen climate organizations across the field now go to that lab as their primary resource.
How do you use A.I. in your everyday life — and the foundation in its everyday work?
For years now, well before ChatGPT, we’ve had staff exercises where our team learns about a new A.I. tool. Recently, we looked at automated video generation as a way of talking about some of the work our partners do.
I am incredibly excited about the way these tools change our capacity to express our internal creativity. I think that’s super fun.
At an organizational level, I’m really interested in how we can use A.I. to do knowledge management and to track the complexity of our strategic grant-making processes.
And it’s safe to say A.I. hasn’t replaced your employees yet?
No, not at all. I spend a lot of time talking about displacement.
But in philanthropy, where one of our great virtues is empathy and all of our activities need to be filtered through a lens of human understanding, I don’t see A.I. as a massive threat.
You’ve written about a growing divide between those with access to A.I. and those without. What role can philanthropy play in bridging that divide?
When I think about the A.I. divide, I think about economic opportunity, fundamental rights of access, and the question of who gets to participate in building this future.
The answer isn’t that we need to train everybody to be an A.I. engineer. It’s that we need to adapt to the transformation that’s happening so that the decisions being made about our shared future are made in a way that promotes what we all care about.
For that to happen, nonprofits and philanthropy have to be at the table. They can’t just be responding to bad decisions. They have to be the ones who are actually influencing and architecting the right decisions.
The incredible power that nonprofits have is that they deeply understand the challenges of the communities they serve.
The McGovern Foundation awarded over $66 million in A.I.-related grants last year alone. How are you putting these ideas into practice?
Before we think about what A.I. will do, we need to think about who is going to create it. We’re funding groups that are building new skill sets at massive scale for underrepresented populations.
There is an organization called Technovation that’s empowering 25 million girls and young women around the world with digital literacy through partnerships with tech companies, governments, and schools.
It’s not just training — it’s mentorship. They build these skills in a way that says, “Hey, I don’t need you to go off and become an engineer — I just want you to have the basic literacy to be able to navigate whatever career you want to.”
That’s stage one. It’s who builds, who creates, who advises. Who’s part of making the decisions. That is how we empower a generational shift.
There are also a lot of very practical and pragmatic applications.
I was in India last week for the Global Partnership on Artificial Intelligence, and I met with a project we’ve supported that deals with elephants who go off into the forest, eat rotting fruit, get drunk, and then rampage through farmers’ rice fields.
Everything that sober elephants would be deterred by doesn’t work for drunk elephants.
So this group of students put together a solution: a set of super low-cost trail cameras deployed around at-risk rice fields. They use machine learning to identify an approaching elephant and its behavior, and then they have an A.I. that comes up with new ways to deter it using the equipment that’s there.
The whole system costs a couple hundred bucks, but it protects hundreds of farmers.
What most excites you about A.I. in the year ahead?
We are just at the start of the journey, but we’re already seeing a transformation in the delivery of services. Organizations that deliver life-saving aid are going to become better at what they do very fast.
We have a project at the United Nations in partnership with McKinsey and Google called Disha. It uses telecom data from 80 companies to see how people move and use their phones after a disaster.
If we know that a disaster is coming, we can equip every humanitarian-aid organization in that crisis zone to be as effective as possible. Within 12 months, that could save tens of thousands of lives.
I also think we are going to make some real strides on A.I. governance and we’ll continue to bring in the civil-society perspectives on things like participation and equity.
It’s an election year and the prospect of disinformation at scale is on a lot of people’s minds. What unnerves you?
Everybody’s worried about disinformation and A.I., but I’m not sure that’s actually the threat.
Here’s the thing: if A.I. enables you to target an individual — to understand everything about how they make decisions — and then you tailor an automated message to manipulate the way they’re going to act, I don’t have to tell a lie anywhere in that process. I actually don’t have to use disinformation at all.
The way this works is to give you factual information, but to target you in a way that’s manipulative. We can see the evolution of Cambridge Analytica plus generative A.I. causing a real problem over the next 12 months.
[Editor’s Note: The data firm Cambridge Analytica came under fire for using personal Facebook data and psychological manipulation — but not necessarily disinformation or outright falsehoods — to micro-target voters with personalized ads in 2016. Disinformation refers to intentionally deceptive materials, like false news articles or A.I.-generated photos of rival politicians.]
That terrifies me.
What kind of checks and balances do we need to create an A.I. future that is safe, equitable, and a force for social good?
There’s a lot of folks who want to scare us with the short-term idea that A.I. won’t be fair or transparent.
That’s really important, because it forces us to say, “How do we design these technologies for the public good? How do we make sure there are diverse and inclusive processes?”
There’s also a very small set of people that want to scare us about the long term: the existential risk crowd. I think they are often very self-interested. They’re just incredibly well-resourced, this Open Philanthropy crowd, and they have taken over the public conversation.
The challenge with the public narrative right now is a missing middle.
We could use public trust and public funds to build these tools in a way that gives us an alternative narrative: one that empowers human dignity, agency, and justice, and invests in building products that make people’s lives better.
The challenge is finding the capital and the expertise to do that. The answer should be philanthropy, but we haven’t done the work to get ready yet.
People talk a lot about what it would look like if we built the internet for purpose instead of for profit.
That’s the opportunity: if philanthropies really became the ones that could understand how to build these tools at scale, think about the incredible leverage we’d have to solve major problems.
This interview has been edited for brevity and clarity.