Many nonprofits are already employing artificial intelligence in their everyday operations, from streamlining rote tasks to using data to make smarter decisions, sometimes without even realizing A.I. is at work. But many have questions about what the technology is, how to put it to work, and how to avoid potential pitfalls. Three experts joined the Chronicle in an online forum to discuss key steps and considerations when adopting A.I. at your organization.
“One of the things that’s really exciting when we think about A.I. and the nonprofit sector is that it’s not just about increasing efficiencies,” says Sarah Di Troia, senior strategic advisor of product innovation at Project Evident, a consultancy that helps organizations use data to measure and improve their impact. “You can really begin thinking about different types of machine-learning applications or generative A.I. as a way to move towards mission attainment.”
For example, Crisis Text Line, a nonprofit that offers crisis counseling by text message, is experimenting with a chatbot to help train its volunteer counselors, Di Troia says. The bot draws on data from fictional cases to interact with volunteers, letting them practice responding to a wide range of scenarios in a way that feels authentic. These live simulations don’t require supervisor oversight, she adds, so they have the potential to deepen volunteers’ training and refresh their skills while freeing up supervisors to focus on real cases.
Di Troia was joined on the panel by Afua Bruce, principal at ANB Advisory Group LLC, a consultancy that helps organizations use data and technology, and Carrie Fassett, director of partnerships and impact at the Greater DC Diaper Bank. The session, “Understanding the Basics and Benefits of AI,” was hosted by Sara Herschander, breaking news reporter at the Chronicle.
Read on for highlights, or watch the video to get all the insights.
First, Build a Solid Foundation
“Before you get really excited about launching into A.I., you have to attend to the foundation on which it’s going to sit,” Di Troia says. To help groups get started, Project Evident is developing a tool that assesses readiness to employ A.I. The tool, in beta testing at the time of the forum, asks where groups stand in key areas, such as a practice of designing for justice and equity, access to relevant knowledge and skills, and a culture of using data for innovation.
Ideally, your technology, program, and evaluation teams should review these questions together, Di Troia says. Based on your answers, the tool will provide a customized report with suggestions for how to prioritize.
Another critical step when developing an A.I. tool is to define clear guiding principles for how you’ll use it, says Fassett of the DC Diaper Bank, which created an A.I. model to help guide its strategy for diaper distribution. The nonprofit’s principles include never letting A.I. stop a family from receiving services, not collecting data the group doesn’t need, and ensuring equitable practices for gathering and sharing data. Keep revisiting those principles as you develop and use your tool, Fassett says. “It’s not, like, a one-and-done. You don’t build your model and then stop.”
It’s also important to find a partner with the expertise and resources to help you develop your model, she adds, and be sure to factor the costs of maintaining it over time into your future budget.
Use A.I. in Ways That Advance Your Mission
“We have to recognize that, in 2023, caring for our communities, caring for our clients is caring for their digital health and their digital data,” Bruce says. So when designing or adopting a new technology tool, you should use your organization’s mission, values, and priorities, including its stance on data privacy and trust, as a compass. That means thinking about how you will collect and share data, how you will compensate people for their time and data, and how you will involve your community in decisions about your A.I. systems.
You should also consider questions like whether an A.I. tool will have the ability to add or remove access to your services or programs, Bruce says. Tools that are designed to include and give access can be helpful, she adds, while those that automatically exclude people from services without human input may be riskier. In general, it’s best to “keep a human in the loop” as you develop or use an A.I. tool, she says, including in final decision making about service provision, such as determining who will get diapers or assistance from a crisis hotline.
If your nonprofit plans to use existing A.I. tools rather than create its own model, you’ll need to decide how to evaluate which tools you will use and which you won’t, Bruce says, and that decision also comes back to your organization’s values. Some groups experiment with every model that comes out and assess the results as they go, she says. Others opt out of using certain systems that aren’t aligned with their values. For instance, some organizations have a policy against using generative A.I. tools that create visual content because they don’t feel comfortable with how artists’ data is used without compensation, Bruce says.
Ensure Ethical and Responsible Use
Many employees are probably already using A.I. on their computers in “unofficial” ways, Di Troia says, so it’s essential to set policies for individual use. This includes defining what intellectual property means for your nonprofit and how you will allow staffers to use that information when interacting with A.I.
Understand that any information you feed to a publicly available generative A.I. tool can become part of the data set it uses to learn and respond to others beyond your organization, Bruce says. That means you should think carefully about what information, such as donor data, employees are allowed to upload to these tools.
When a technology vendor adds or makes an A.I. feature available, ask whether it tested the algorithm for racial bias, Di Troia says, and if so, ask to see the testing. You should also ask to see the data the company used to train the algorithm, she adds, so you can judge whether it, too, might carry bias. “Please use your economic power in asking for things,” Di Troia says. “By the way, if you ask for things, you will help shape what the entire ecosystem of A.I. looks like … We cannot wait for it to get perfect so that then we can join — it’s not going to get perfect without us.”