None of the pleas for help that pour into the Crisis Text Line are easy. Young people wrestle with eating disorders and question their sexuality. Students struggle to deal with bullies in school and online. Desperate parents seek help for their children’s substance abuse.
In the most severe cases, people text the hotline because they’re contemplating suicide. It’s imperative that those messages are flagged for immediate attention. Early on, the organization’s volunteer counselors made that call on their own. They read messages waiting in the hotline’s queue and responded to the person who seemed in the greatest distress.
As the five-year-old group gained experience, however, it turned to data to triage incoming messages. Using past text exchanges with people who were suicidal, it trained an algorithm to identify the highest-risk texters, label them “code orange,” and move them to the top of the list. Now, instead of poring over messages, counselors click a button marked “Help another texter.” The algorithm eventually proved so effective that it identified 86 percent of people who were suicidal within their first couple of messages.
The result: People at greatest risk got help faster, and overall wait times dropped by roughly 40 percent.
Today Crisis Text Line’s algorithms monitor the conversations. An exchange might drop down from code orange if the counselor de-escalates the situation or might move up to code orange as a texter opens up about more severe problems. The group is working on a system that will color code conversations by level of severity.
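Crisis Text Line has not published its model, but the workflow it describes (train on past exchanges, score each texter, surface the most severe conversations first, and keep re-scoring as the exchange unfolds) can be illustrated with a minimal sketch. Everything below, from the text features and logistic-regression classifier to the “code orange” threshold and the toy messages, is an assumption made for illustration, not the organization’s actual system.

```python
# A hypothetical sketch of risk-based triage in the spirit of the
# system described above: train on past exchanges labeled for suicide
# risk, score new texters, serve the highest-risk first, and re-score
# as the conversation continues. Features, model, threshold, and data
# are illustrative assumptions, not Crisis Text Line's actual system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: first messages from past exchanges, labeled 1 when
# counselors judged the texter to be at risk of suicide, 0 otherwise.
past_messages = [
    "i don't want to be here anymore",
    "my parents keep fighting and i can't focus on school",
    "i have pills and i'm thinking about taking all of them",
    "someone keeps posting mean things about me online",
]
past_labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(past_messages, past_labels)

CODE_ORANGE = 0.5  # assumed threshold for flagging a conversation

def triage(queue):
    """Score waiting texters and return them most severe first."""
    scored = sorted(((model.predict_proba([msg])[0][1], msg) for msg in queue),
                    reverse=True)
    return [("code orange" if p >= CODE_ORANGE else "standard", p, msg)
            for p, msg in scored]

def rescore(texter_messages):
    """Re-score an ongoing exchange from the texter's messages so far."""
    return model.predict_proba([" ".join(texter_messages)])[0][1]

# "Help another texter" simply hands the counselor the top of this list.
for flag, score, msg in triage(["i feel like ending it tonight",
                                "my roommate ate my leftovers again"]):
    print(f"{flag:11s} {score:.2f}  {msg}")
```

The re-scoring step is what the article calls real-time analysis: each new message updates the score, so a conversation can move into or out of “code orange” while it is still under way.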
The Crisis Text Line employees who supervise counselors often oversee 20 or more conversations at once. The ongoing risk assessment helps them determine which exchanges warrant the most attention.
“What we’re talking about is real-time analysis,” says Bob Filbin, chief data scientist at Crisis Text Line. “The true end game of artificial intelligence is thinking like a human would think at the speed or better than a human would think.”
Crisis Text Line’s innovation illustrates the potential for artificial intelligence, or A.I., to transform the nonprofit world. But talk of A.I. makes Filbin uncomfortable. He’d rather call it machine learning or data science. The term “artificial intelligence” can be polarizing and comes with baggage about moral values, he says.
“People either really like it or really dislike it,” Filbin says. “Personally, I think it distracts from the work.”
As Filbin suggests, the nonprofit field is equal parts thrilled and unnerved by the promise of artificial intelligence for social good. Leaders marvel at the opportunity to scour huge amounts of data for connections that would otherwise go unnoticed. But the specter of unseen algorithms deciding who gets services and the fear of bias-tainted data make the technological future seem more menacing than transformational. And nonprofits are terrified about what clients and the public would think about computers taking over tasks traditionally done by people.
Sorting Photos with A.I.
Motion-sensitive cameras snap hundreds of thousands of photographs that scientists at the Lincoln Park Zoo study to learn about the coyotes, raccoons, and other critters that prowl greater Chicago. An employee or volunteer reviews each image and culls any in which incidental motion triggers the camera — a task so painstaking and slow that the zoo has a two-year backlog. The researchers, however, are working with a data-science company to train an algorithm to label the animal images — and to select the roughly 70 percent of photos that include no wildlife at all and the 10 percent that show humans.
Photographs courtesy of Lincoln Park Zoo
Ambitious Projects
The number of charities that deploy artificial intelligence is still minuscule, but interest is growing. Sometimes the technology drives ambitious projects. The Anti-Defamation League, for example, uses artificial intelligence to better understand online hate speech, information that shapes its prescriptions for lawmakers and tech companies.
The research would be nearly impossible to do any other way, says Brittan Heller, director of technology and society at the nonprofit. “What A.I. does is identify patterns at scale.”
Perhaps the most common nonprofit use is to automate repetitive tasks. To glimpse the hidden lives of urban wildlife, researchers at the Lincoln Park Zoo study hundreds of thousands of photographs taken by motion-sensitive cameras installed throughout Chicago and its suburbs. The cameras take 400,000 to 500,000 pictures annually, with a staff member or volunteer reviewing each one. It’s an enormous task, and there’s a two-year backlog.
Uptake.org, the philanthropic arm of the artificial-intelligence company Uptake, is teaming up with the Urban Wildlife Institute, a research center at the zoo, to train an algorithm to electronically label the photos. Ultimately, the organizations hope the technology will differentiate between, say, a coyote and a raccoon. First, however, the algorithm has to learn to weed out the thousands of images in which something other than wildlife — like grass blowing in the wind — triggered the camera. Some 70 percent of photos show no living creatures. Another 10 percent show people.
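Neither the zoo nor Uptake.org has described the model itself, but the screening step outlined above, separating empty frames and people from wildlife, is a standard transfer-learning problem. The sketch below, which retrains the final layer of a pretrained ResNet on an assumed folder of labeled camera-trap photos, shows one way such a filter could be built; it is not the project’s actual code.

```python
# A hypothetical sketch of the first screening step described above:
# classify each camera-trap photo as "empty", "human", or "wildlife" so
# that empty frames (roughly 70 percent of the total) never reach a
# human reviewer. The architecture, folder layout, and training setup
# are illustrative assumptions, not the zoo's or Uptake.org's system.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets
from torchvision.models import resnet18, ResNet18_Weights

CLASSES = ["empty", "human", "wildlife"]  # assumed labels

# Assumed directory layout: photos/train/<class name>/*.jpg, built from
# the images that staff and volunteers have already reviewed by hand.
weights = ResNet18_Weights.DEFAULT
train_data = datasets.ImageFolder("photos/train", transform=weights.transforms())
loader = DataLoader(train_data, batch_size=32, shuffle=True)

# Start from an ImageNet-pretrained backbone and retrain only the head.
model = resnet18(weights=weights)
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, len(CLASSES))

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):                      # short demo run
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()

# At review time, only photos predicted "wildlife" go to researchers.
```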
Just eliminating those photos could enhance the institute’s ability to influence policy and planning decisions, says Seth Magle, the institute’s director. He often fields questions like, Does your data show there are fewer squirrels this year?
“Right now what we have to tell them is, ‘Well, we’re so backlogged I don’t know. I’m still working on data from three years ago,’ ” Magle says.
Benefit and Risk
Even the biggest backers of artificial intelligence argue that its benefits come with risks.
“Like any tool, if it’s misused, it’s going to cause harm,” says Andrew Means, director of Uptake.org. “The new thing is that the scale of harm can be quite large if it goes unchecked. But that also means that the scale of benefit can be larger than ever before as well.”
In addition to the work with the Urban Wildlife Institute, Means and his Uptake.org team are developing A.I. tools to help low-income students decide where to go to college. The team is also building a system to improve collaboration among anti-trafficking groups. For the past five years, Means has been organizing conferences about nonprofit data use, including the Good Tech Fest in May.
Still, he worries about data science done badly. Analysts don’t always understand the data they work with, know what they can build with it, or grasp its limits.
The buzz about data science is attracting new people to the field, not all of whom are skilled, Means says. “They’re misinterpreting data all over the place.”
Critics also worry that combining flawed data and advanced analytics will replicate and magnify discrimination.
Take criminal-justice data. Researchers and others are concerned that policing practices and prosecution are racially biased. Yet numbers generated from law-enforcement agencies and criminal-justice proceedings are plugged into algorithms designed to predict, for instance, which criminals are likely to land in jail again. It’s fair to ask: Is the algorithm’s end product accurate, or does it reflect bias?
“Garbage in, garbage out,” says Roy Austin, who co-authored a report on big data and civil rights when he was director of the Office of Urban Affairs, Justice, and Opportunity in the Obama White House.
Officials at the Laura and John Arnold Foundation think a lot about such issues. They acknowledge shortcomings in criminal-justice data but still think the information is critical. The foundation is taking on the cash-bail system, which often results in indigent defendants languishing in jail before they are tried. Detention can mean the loss of a job, housing, or even child custody. It also creates an incentive for people to accept a plea deal whether or not they committed the crime.
One way the foundation is trying to reduce unnecessary pretrial detention: Public Safety Assessment, which judges can use to help decide whether to release a defendant or set bail. Created with outside experts, the tool relies on an algorithm with nine factors to gauge the risk of a defendant committing a new crime if released before trial or failing to appear for future hearings.
The foundation lists on its website those nine factors and how they’re weighted. Perhaps most important, the tool bases its risk scores on convictions, not arrests. “Arrests don’t always lead to convictions, and arrests are sometimes made on a flimsy legal basis,” says Jeremy Travis, executive vice president of criminal justice at the foundation.
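The actual nine factors and their weights are published on the foundation’s website and are not reproduced here, so the sketch below uses made-up factors and weights purely to show the shape of such a tool: a transparent, weighted checklist computed from court records and mapped onto a simple scale a judge can read alongside the rest of the case.

```python
# A purely illustrative weighted-checklist risk score. The real Public
# Safety Assessment publishes its own nine factors and weights; the
# factor names, weights, and scaling below are hypothetical stand-ins
# used only to show the structure of this kind of tool.
HYPOTHETICAL_FACTORS = {
    # factor name: (weight, description)
    "prior_conviction":        (2, "any prior conviction on record"),
    "prior_failure_to_appear": (3, "missed a past court date"),
    "pending_case":            (1, "another case pending at arrest"),
    # ...six more factors would follow in a nine-factor instrument
}

def raw_score(defendant: dict) -> int:
    """Sum the weights of the factors present in a defendant's record."""
    return sum(weight
               for name, (weight, _) in HYPOTHETICAL_FACTORS.items()
               if defendant.get(name, False))

def scaled_score(defendant: dict, scale_max: int = 6) -> int:
    """Map the raw sum onto a small 1..scale_max scale for the judge."""
    max_raw = sum(w for w, _ in HYPOTHETICAL_FACTORS.values())
    return 1 + round((scale_max - 1) * raw_score(defendant) / max_raw)

# Example record built from convictions, not arrests that led nowhere.
record = {"prior_conviction": True, "pending_case": True}
print(scaled_score(record))   # a judge sees the score, not a decision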
The assessment is currently in use in 40 jurisdictions, including statewide in Arizona, Kentucky, and New Jersey. More than 500 jurisdictions have expressed interest in it. The foundation is looking for a national organization to help it make the assessment available broadly.
Travis says he understands critics’ concerns that such assessments will perpetuate past injustices, but he says the alternative is the inconsistency that comes with each judge.
“It doesn’t make the decisions for them,” he says. “But if it guides the decisions and there’s less subjectivity in those decisions, that’s progress.”
The Omidyar Network is thinking about how data is used by the biggest supporter of nonprofits — governments. It has long supported efforts to encourage policy makers to be transparent and release more data to the public. Over time, the network’s leaders became concerned about how that data was being used.
“When the allocation of public resources is being decided by an automated decision-making system, it raises massive questions of fairness, of bias, of inclusion, exclusion, and equity,” says Martin Tisné, an investment partner at Omidyar.
Three years ago, the network began making grants related to digital rights. It supports organizations that conduct research on artificial intelligence and recommends how to regulate data use. Last year Omidyar committed $10 million to help create the Ethics and Governance of Artificial Intelligence Fund, which supports research on A.I. in the public interest. LinkedIn founder Reid Hoffman, the Knight Foundation, and others kicked in $17 million more.
The race is on to determine how data use and artificial intelligence will be regulated, Tisné says. “I fundamentally believe that the public should have a voice in those matters.”
Human vs. Machine
The opportunity to automate decision making is alluring. But some leaders worry about removing the human element.
Overwhelmed by people in need, social-service organizations could be tempted to turn tough decisions over to an algorithm, says Allison Fine, an expert on nonprofit leadership and strategy. She argues the field needs to think carefully about what tasks are appropriate for computers.
Feeding Children Everywhere
Feeding Children Everywhere’s Fed 40 mobile app lets people apply easily for temporary food assistance.
“It’s really easy to begin to have the machines do the hard work of choosing who gets services and who doesn’t,” Fine says. “You’re going to lose that empathetic touch if we head toward too much automation.”
Feeding Children Everywhere decided the benefits of A.I. outweigh the costs. The Florida nonprofit’s Fed 40 program lets people apply for temporary food assistance, often during a crisis, through a mobile app. Those who qualify are mailed the ingredients for 40 free meals. Roughly half of applicants qualify automatically, but when even one criterion is missed, an employee reviews the application.
As the program grew, the nonprofit realized it would need to hire more case workers to keep up with applications. But when employees listed the reasons a denied applicant might be approved after further consideration, it became clear that artificial intelligence could identify the patterns. Some applicants, for example, had made too much money to qualify but recently lost a job or were the victim of a natural disaster.
The organization used previously approved applications to train an algorithm that now reviews requests that don’t meet basic requirements. As a check, case workers now review applications denied by the algorithm. Eventually, however, technology will make the final call. What clinched the decision? By not hiring more case workers, the charity will provide 200,000 more meals next year.
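The charity has not described its model, but the two-stage review outlined above, hard eligibility rules first and then a classifier trained on applications that case workers previously approved, can be sketched roughly as follows. The field names, income threshold, and model choice are all assumptions made for illustration.

```python
# A hypothetical sketch of the two-stage review the article describes:
# hard eligibility rules approve the clear cases, and a classifier
# trained on previously approved applications scores the near-misses
# that used to require a case worker. Field names, the income rule,
# and the model are illustrative assumptions, not the charity's system.
from sklearn.ensemble import RandomForestClassifier

INCOME_LIMIT = 30_000  # made-up threshold for the rule-based stage

def meets_basic_rules(app: dict) -> bool:
    """Stage 1: automatic approval when every criterion is satisfied."""
    return app["household_income"] <= INCOME_LIMIT and app["household_size"] >= 1

# Stage 2: learn from past human decisions on applications that missed
# a criterion. Features might include income overage, a recent job
# loss, or a declared natural disaster (all assumed fields).
def features(app: dict) -> list:
    return [
        app["household_income"] - INCOME_LIMIT,   # how far over the limit
        int(app.get("recent_job_loss", False)),
        int(app.get("natural_disaster", False)),
        app["household_size"],
    ]

past_apps = [
    {"household_income": 34_000, "household_size": 4, "recent_job_loss": True},
    {"household_income": 55_000, "household_size": 2},
    {"household_income": 31_000, "household_size": 5, "natural_disaster": True},
    {"household_income": 48_000, "household_size": 1},
]
past_decisions = [1, 0, 1, 0]   # 1 = a case worker approved it anyway

reviewer = RandomForestClassifier(n_estimators=50, random_state=0)
reviewer.fit([features(a) for a in past_apps], past_decisions)

def decide(app: dict) -> str:
    if meets_basic_rules(app):
        return "approved"
    # Denials are still audited by case workers, as the article notes.
    return "approved" if reviewer.predict([features(app)])[0] == 1 else "denied (audit)"

print(decide({"household_income": 33_000, "household_size": 3, "recent_job_loss": True}))
```

Keeping the rule stage explicit and routing the model’s denials to a human audit mirrors the safeguards the charity describes while the system earns trust.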
Sensitive to the concern that technology lacks heart, Feeding Children Everywhere named its A.I. effort Project Teresa in honor of Mother Teresa. “Anybody who’s on the team working on this understands that a compassionate approach is our number-one goal,” says Dave Green, the nonprofit’s CEO.
Feeding Children Everywhere
Using its Fed 40 app and artificial intelligence, Feeding Children Everywhere can sort efficiently through applications for food aid. Thanks to the technology, the organization’s volunteers and staff will prepare and provide 200,000 more meals next year.
Green says that his family was denied food assistance when he was a child, an experience that has shaped his leadership of the group. He says auditing technology-produced denials in the beginning will ensure that no one truly in need is turned away and that the artificial intelligence will continue to improve over time.
“It’s like if you hired an employee to look at those applications; they’d get better and better and better over time, knowing what they can approve or deny,” he says. “The difference is that with an employee, they get promoted, they move to a new department, or they move on to a different job.”
The Empathy Factor
Technology has always been critical to Beyond 12’s work to help first-generation and low-income college students complete their degrees. The charity pairs students with “virtual coaches,” recent graduates who help them navigate college life by phone, email, and even Snapchat.
The charity now is using technology to provide automatic services, which in turn will mean the group can work with more students. Step one is an app that sends students notifications about things like the deadline for picking courses. The group also plans to add a chatbot to answer students’ most frequent questions: Can I drop a class after the add-drop period? Where can I find my PIN number for the financial-aid website?
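Beyond 12 has not said how the chatbot will work. A minimal sketch of the idea might simply match an incoming question against a bank of frequent questions and hand anything unfamiliar back to a human coach; the questions, canned answers, and similarity threshold below are assumptions made for illustration.

```python
# A hypothetical sketch of the kind of FAQ chatbot described above:
# match an incoming student question against a small bank of known
# questions and return the canned answer for the closest match. The
# questions, answers, and similarity threshold are assumptions for
# illustration, not Beyond 12's planned system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

FAQ = [
    ("Can I drop a class after the add-drop period?",
     "Usually only with approval; check your registrar's late-drop policy."),
    ("Where can I find my PIN for the financial-aid website?",
     "Look for the PIN in your award letter, or reset it through the aid office."),
    ("How do I choose classes for next semester?",
     "Start with your degree plan and ask your coach to review your picks."),
]

vectorizer = TfidfVectorizer()
faq_matrix = vectorizer.fit_transform(q for q, _ in FAQ)

def answer(question: str, min_similarity: float = 0.3):
    """Return the best canned answer, or None so a human coach steps in."""
    scores = cosine_similarity(vectorizer.transform([question]), faq_matrix)[0]
    best = scores.argmax()
    return FAQ[best][1] if scores[best] >= min_similarity else None

print(answer("where do i find my financial aid pin?"))
```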
Coaches are on hand if students run into problems, and they call every two weeks and text or email between conversations. Over time, Beyond 12 wants to use artificial intelligence to analyze data and predict when a student is headed for trouble. With that, coaches could spend more time working out small issues before they snowball and focus a little less on students who are thriving.
“This came from our coaches,” says Alexandra Bernadotte, founder and CEO of Beyond 12. Coaches complain that because basic logistics conversations take so much time, they can’t do enough to inspire students and encourage big-picture thinking.
Artificial intelligence will never replace coaches, Bernadotte says. The technology’s greatest promise is to manage rote tasks so employees have more time for the things that people still do best. “Humans can’t scale, but machines can’t empathize.”