How is Emerson Collective leveraging cutting-edge A.I. for itself and its grantees? Try asking Ralph, a set of experimental A.I. chatbots whose expertise spans the group’s internal policies and data, from company benefits to detailed, easily accessible information about past grantees.
Better yet, ask Raffi Krikorian, who helped build Ralph as part of a broader technological transformation at Emerson Collective, the philanthropy venture founded by Laurene Powell Jobs. A veteran tech executive, Krikorian once helped develop self-driving cars at Uber, ran a global engineering team at the social platform then known as Twitter, and helped modernize the Democratic National Committee as its first-ever head of tech. He joined Emerson Collective as chief technology officer in 2019.
Since then, he has grown the group’s three-person tech team into the organization’s largest department and a veritable IT powerhouse, employing over 40 engineers, data scientists, and technicians. Emerson Collective now has more engineers than grants analysts or project managers.
That’s no coincidence, says Krikorian, who lures top tech talent into the social sector by showing them that “there are real problems that can be solved here — and engineers can make an impact.”
As A.I. and other technologies permeate everyday life, Krikorian hopes the Emerson Collective can establish itself as a major technological resource for its nonprofit partners in the years to come. A small nonprofit organization might not be able to build its own chatbot just yet, but the IT team at Emerson Collective can.
Earlier this year, the Emerson Collective launched a chatbot designed to help immigration groups support asylum claims, which several of the group’s partners have already begun deploying in the field. According to Krikorian, that’s just the beginning.
The Chronicle spoke with Krikorian about the Emerson Collective’s “data-curious” approach to technology, why he’s optimistic about A.I.'s potential to transform the philanthropic world, and what’s needed to ensure a safe and positive A.I. future for everyone.
What does A.I. look like at the Emerson Collective — and how are you using it to advance your work?
We’ve been doing a lot of work both on internal systems — creating data lakes and dashboards — and also getting into how to use A.I. as an entrance to our software systems. Right now, it takes one of my tech team members to sit down with another team member and help them walk through questions like “Who have we funded in the climate space in Arizona?”
We’ve been experimenting internally with a set of tools called Ralph. You can just ask Ralph these types of questions, and Ralph will figure it out for you. I don’t believe we should actually be data-driven, but this will keep us on our quest to be data-curious. It should be easy to ask questions about what Emerson Collective knows; you shouldn’t need to schedule a Zoom meeting to find out.
So, Ralph is a set of experiments we’re running right now. It’s really fun. We’ve been doing everything with our internal data, like what grantees we’ve funded, but also things like: “What’s our replacement policy on AirPods if I’ve lost them?” So, we’re also trying to tackle the HR side too.
We have a lot of analysts at Emerson Collective — we’re not looking to replace them — we’re just leveling up the kinds of questions you could ask.
How difficult was it to build out this suite of tools — this internal chatbot — and how long did it take?
What’s difficult for us is not the technology; it’s the use case. It’s talking with team members about how they want their information presented and what kinds of questions they want to ask. Have you nailed the right interface? Have you designed it effectively?
That part took five years. We’ve been doing it ever since I joined. So, when the new generative technologies popped up, it was actually quite easy to do our first stab at it. We spent five years building our data systems. We have tons of paperwork on every single one of our grantees, and they’re all appropriately indexed. So, just unleashing it was pretty easy for us.
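In broad strokes, the pattern Krikorian describes, indexed internal documents with a generative layer on top for answering questions, resembles retrieval-augmented generation. The short Python sketch below illustrates that general idea only; the sample documents, the word-overlap scoring, and the stubbed answer step are hypothetical stand-ins, not Emerson Collective’s actual system.

```python
# A minimal sketch of the pattern described above: index internal documents,
# retrieve the most relevant ones for a question, and assemble context that a
# language model would use to compose an answer. Everything here is a
# hypothetical illustration, not Ralph's real implementation.
from collections import Counter

# Hypothetical indexed paperwork: one entry per internal document.
DOCUMENTS = {
    "grant-2021-017": "Climate resilience grant to a water conservation group in Arizona.",
    "grant-2022-042": "Immigration legal services grant supporting asylum claims.",
    "policy-hr-007": "Lost AirPods are replaced once per year after a manager approves the request.",
}

def score(question: str, text: str) -> int:
    """Count overlapping words as a crude stand-in for a real search index."""
    q_words = Counter(question.lower().split())
    return sum(q_words[w] for w in set(text.lower().split()) if w in q_words)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the ids of the k documents most relevant to the question."""
    ranked = sorted(DOCUMENTS, key=lambda d: score(question, DOCUMENTS[d]), reverse=True)
    return ranked[:k]

def answer(question: str) -> str:
    """Stitch retrieved context into a prompt; a real system would call an LLM here."""
    context = "\n".join(DOCUMENTS[d] for d in retrieve(question))
    return f"Question: {question}\nContext used:\n{context}"

if __name__ == "__main__":
    print(answer("Who have we funded in the climate space in Arizona?"))
```

In a production system the word-overlap scoring would typically be replaced by a proper search or embedding index, but the shape of the pipeline, index first, retrieve, then generate, stays the same.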
Grantees might worry that you might soon use A.I. to decide who gets funded. Is that in the cards?
Our philosophy has been that we just give everyone easy superpowers. How do we make them more efficient and effective? How do we give them the tools to answer questions?
Can I say for sure whether some of them might run a grant proposal through a large language model and ask it to summarize it? I can’t tell you that for sure, but our culture is more about getting questions answered really fast, so we can be as educated as possible.
You’ve talked about how you’re using A.I. internally — what are you doing to help grantees?
A lot of it has been training. My perspective is not that everyone needs an A.I. strategy, but that everyone should purposefully choose whether to have an A.I. strategy.
We did a webinar almost a year ago that was the most highly attended in Emerson’s history because most of our grantees and investments either were questioning A.I. or just didn’t know how to make sense of it. So we talked through some really simple use cases, which literally everyone should do as a gateway to thinking about what A.I. could do for them.
We’ve also done some collaborations between nonprofits and big tech companies, so the nonprofits can learn from the creators of the technology, but also so the companies can learn more about the mentality of a nonprofit and what they’re concerned about.
Lastly, we’ve been co-building. You probably saw our post about the immigration chatbot we developed with some immigration legal services groups. We did that because we could. It was interesting and it has a large impact, but our goal was also to show good design practices — start with a limited audience, make sure you have a human in the loop, ensure you have good security measures.
We’re trying to practice what we preach and demonstrate what’s possible. It’s slowly starting to get used by the field right now in limited cases, and we’re learning from it. We’ve found a few trusted partners who understand the limitations and who are giving us feedback.
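One of the design practices Krikorian names, keeping a human in the loop, can be sketched concretely: every model-drafted answer sits in a review queue until a person approves or rejects it. The Python sketch below is a hypothetical illustration of that gate, assuming a simple queue-and-reviewer workflow; it is not the design of the immigration chatbot itself.

```python
# A minimal human-in-the-loop sketch: drafted answers are never released
# directly; a human reviewer must approve or reject each one first.
# The queue, reviewer step, and release list are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Draft:
    question: str
    drafted_answer: str
    approved: bool = False
    reviewer_notes: str = ""

@dataclass
class ReviewQueue:
    pending: list[Draft] = field(default_factory=list)
    released: list[Draft] = field(default_factory=list)

    def submit(self, question: str, drafted_answer: str) -> Draft:
        """Hold every model-drafted answer for human review."""
        draft = Draft(question, drafted_answer)
        self.pending.append(draft)
        return draft

    def review(self, draft: Draft, approve: bool, notes: str = "") -> None:
        """Only a human decision moves a draft out of the queue."""
        draft.approved = approve
        draft.reviewer_notes = notes
        self.pending.remove(draft)
        if approve:
            self.released.append(draft)

queue = ReviewQueue()
d = queue.submit("What documents support this claim?", "Draft answer from the model...")
queue.review(d, approve=True, notes="Checked against the current filing checklist.")
print(len(queue.released))  # 1
```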
What are some A.I. best practices you recommend for nonprofits?
A lot of it starts with educating yourself. Carefully read every privacy policy you’re agreeing to. We open-sourced our own A.I. ethics policy, and we’ve just given it to [partners] as a Google doc.
Number two is make sure you’re using the right technology to solve the problem you’re trying to solve. I’ve talked to a lot of funders who want to fund something with A.I., but it’s easy right now to be a hammer looking for a nail. What’s the thing you’re trying to solve, and what’s the best way to solve it? Nine times out of ten, it’s probably not generative A.I. It might be data science or some other form of A.I.
Number three is to start a conversation with your employees and your board about your A.I. stance. A lot of people have been asking me how to respond when their board puts pressure on them to do something with A.I.
I always say, empower your employees to start small. The most interesting things will come after you’ve done the boring stuff first.
Start with those mundane tasks, like drafting job descriptions, to try it out and learn for yourself what it is. These tools are less about someone like me telling you how to use them and more about you figuring out how it fits into your own organization.
The last thing is to remain human-centered and remember that every data point is a person. I’ve talked to community health workers in Africa about this: Care should come from a human, but maybe technology can help make that care better.
You’ve testified before Congress about some of the dangers of A.I. What should nonprofits be wary of?
These systems are incredibly data-hungry. They eat through it like they’re the Cookie Monster of data.
We’ve already seen them eroding copyright (and now making deals with publishers), and personal data is the next frontier. We need better legislation to keep that from continuing the erosion of our private space.
One of the reasons we say to read privacy policies so carefully is because we deal with lists of people who might be in compromised situations, like our immigration or reproductive-justice grantees. None of that data should go anywhere, so the best thing you can do is read the privacy policy and understand what you’re really getting into.
We need to be very cognizant that privacy is being eroded left and right, and it’s our job to push back against that, especially when we’re using these tools. We are the stewards of the people we serve, and it’s our job to maintain their privacy and not let it be eroded.
Why should nonprofits experiment with A.I. despite the risks? As A.I. advances, what excites you about its potential?
The way we interact with computers day-to-day will fundamentally change in five years. We’ll have completely new ways of accessing knowledge and information. Not necessarily chatbots, but a whole paradigm shift.
Just think of some of the amazing things people are up to right now. I’ve talked to groups doing wildfire prediction and prevention using generative A.I., and others using it for drug discovery, doing protein folding in ways that nobody’s ever seen before. That’s now possible.
I used to work in politics. It’s going to change canvassing and how you talk to voters, not because of a whole bunch of chatbots, but because the level of personalization and specificity in the messaging of get-out-the-vote campaigns in the next presidential cycle will look really wild compared to what we’re seeing today.
We really believe that one of the things we can do beyond just dollars is capacity-building for our grantees. Technology is one of our pillars here at Emerson. So my job is to look around the corner and, if there’s something like this coming, to start talking to our grantees about it to make sure they’re always at the forefront.
Note: This article has been edited for clarity and brevity.