Experts in artificial intelligence debate what they call P(doom) — the probability that A.I. will grow to such heights of power that it wipes out humanity. For some, a digital apocalypse is a matter of when, not if.
A small band of nonprofit advocates, meanwhile, is rallying against the immediate dangers of machine learning, algorithms, and other A.I. technologies. “The issue is not that they’re omnipotent,” Amba Kak, executive director of the AI Now Institute, told the Atlantic recently. “It is that they’re janky now. They’re being gamed. They’re being misused. They’re inaccurate. They’re spreading disinformation.”
Technology-focused groups have raised alarms for more than a decade about the perils of digital advances and the concentration of power in Big Tech. But as A.I. concerns grow, advocacy and research groups of all stripes are linking arms with them.
Groups that promote affordable housing worry about bias in algorithms that determine rent, screen tenants, and make loan decisions. Racial-justice advocates point to evidence that facial-recognition software discriminates against people of color. Social-service organizations see danger in A.I.-driven distribution of benefits. Human-rights activists warn of deepfakes that could lead to the imprisonment of innocents. Indeed, few parts of the nonprofit world seem immune from A.I.’s impact.
To gauge just how groups are responding, the Chronicle asked a dozen experts and advocates to identify top nonprofit leaders in the growing field often called “A.I. safety.” The list of 33 below, while hardly comprehensive, speaks to the scope of what’s happening and the diversity of the players. It features Google exiles and nonprofit veterans. Academics and community organizers. Tech experts and social-justice champions.
Notably, the list favors women and includes many people of color, including several Black women. Such diversity was missing in the movement’s early days, when white men dominated, just as they do in the tech industry itself. Advocates say people of color are particularly attuned to A.I. bias because they are outsiders to Silicon Valley yet often directly affected by the discrimination embedded in its products.
“There’s a story to be told about the number of Black women who have really pioneered this space,” says Eric Sears, who runs the Technology and the Public Interest program at the John D. and Catherine T. MacArthur Foundation.
Leaders of Tech-Focused Groups
Catherine Bracy, co-founder and CEO, TechEquity Collaborative. The collaborative educates and mobilizes tech workers to address ways that their employers and products — including A.I.-powered software — drive inequality. Previously, Bracy worked on Barack Obama’s 2012 campaign, building a corps of volunteer technologists, and led community organizing at Code for America.
Joy Buolamwini, founder, Algorithmic Justice League. Buolamwini is a computer scientist and activist known as a “poet of code” and hailed by Fortune magazine as “the conscience of the A.I. revolution.” She documents A.I. biases through research and illustrates them through art. As an MIT doctoral student, she began documenting the failures of facial-recognition systems to identify dark-skinned female faces — work that culminated in the groundbreaking 2018 “Gender Shades” research, which she led, and a spoken-word poetry video that’s exhibited in museums.
Alexandra Reeve Givens, CEO, Center for Democracy & Technology. CDT — a long-standing advocacy and research group established in 1994 at the dawn of the internet — is a regular in congressional hearings, White House huddles, and op-eds in outlets like the New York Times. Under Givens — daughter of the late actor Christopher Reeve, who was paralyzed after an accident — the organization has highlighted A.I.’s discrimination against people with disabilities.
Janet Haven, executive director, Data & Society. Haven started her career at tech startups in Europe and spent a decade at the Open Society Foundations as the field of data and technology governance took form. Data & Society brings together scholars and experts from a range of fields to study tech topics including A.I. and automation. It recently created the Algorithmic Impact Methods Lab to develop ways to measure automated decision-making’s impact on individuals and society.
Amba Kak and Sarah Myers West, AI Now Institute. The six-year-old organization is a leading player in the push for rigorous “algorithm accountability” policies that would require companies to assess the risk of their algorithms and address any negative impact. Kak, a lawyer, and West, a scholar and AI Now’s managing director, both did stints at the Federal Trade Commission. West is writing a book, Tracing Code, about the origins of data capitalism and commercial surveillance.
Yeshimabeit “Yeshi” Milner, founder and CEO, Data for Black Lives. A former Echoing Green and Ashoka fellow, Milner is a longtime organizer who started Data for Black Lives to connect tech experts — data scientists, software engineers, mathematicians — with leaders and activists in Black communities. She calls for abolishing “Big Data,” which she describes as “a new form of social and political control.”
Emily Tucker, executive director, Georgetown University’s Center on Privacy and Technology. Before joining the center, Tucker — a lawyer who has a master’s degree in theological studies and expertise in immigration issues — worked for a decade helping grassroots groups organize and litigate against surveillance of poor communities and communities of color. The center last year published the widely cited report “American Dragnet,” arguing that the U.S. Immigration and Customs Enforcement agency “now operates as a domestic surveillance agency.”
Harlan Yu, executive director, Upturn. Upturn examines how technology reinforces inequality. Recently, it sought a federal investigation of how Meta unfairly steers Facebook job ads away from users based on factors like gender and age. Yu, who has a Princeton Ph.D. in computer science, is an expert on the impact of A.I.-driven body cameras and other emerging technologies used in policing.
Other Nonprofit Leaders
Olga Akselrod and ReNika Moore, American Civil Liberties Union. Moore leads the organization’s racial-justice program, where Akselrod is senior staff attorney. The two have taken up tech issues and argue that A.I. upends the balance of power between the people and a host of government actors — including police, immigration officials, and health-care providers. Local ACLU chapters are also active; in Massachusetts, the group’s decade-old Technology for Liberty program has fought to ban local face-recognition surveillance systems.
Lydia X.Z. Brown, director of public policy at the National Disability Institute. Brown, a lawyer who identifies as a queer disabled person, has documented algorithmic harms to people with disabilities in public-benefits decisions, hiring, and surveillance. Brown recently helped decide the inaugural grantees of the Disability x Tech Fund, which addresses disability bias in technology.
Henry Claypool, tech policy consultant, American Association of People With Disabilities. Claypool, who has lived with a disability for decades after suffering a spinal-cord injury, is an expert on how technology — from self-driving cars to self-proctored student exams — can expand or limit the lives of people with disabilities. A top official on disability issues in the Obama administration, he helped launch the Disability x Tech Fund.
Sam Gregory, executive director, Witness. A longtime human-rights advocate, Gregory is a leading authority on deepfakes and other forms of A.I.-generated misinformation and disinformation. He focuses on preparing countries and communities for how doctored videos and manipulated media could be used to justify coups, jail innocents, and spark conflict.
Damon Hewitt, CEO, Lawyers’ Committee for Civil Rights Under Law. The organization, with the Leadership Conference on Civil and Human Rights and others, brings together groups as different as Color of Change and Free Press Action to push for federal policy to address A.I. discrimination in housing, employment, financial services, and more.
Maya Wiley and Corrine Yu, Leadership Conference on Civil and Human Rights. Wiley, who ran for New York City mayor in 2021, is CEO; Yu leads its efforts on digital rights and privacy. The group organizes a big-tent coalition of organizations and advocates to pressure Congress, the White House, and federal agencies to ensure that laws and enforcement keep pace with A.I.’s rapid growth and the threats to civil rights. More than 60 organizations — from the Hip Hop Caucus to the National Center for Learning Disabilities — signed on to a recent call for action.
Hannah Sassaman, executive director, People’s Tech Project. The slogan for the Philadelphia organization: “Arming movements for liberation with the tools to fight the tech that oppresses us.” Sassaman, a seasoned community organizer, helped spin the effort off from the Movement Alliance Project after several tech-related campaigns, including one that fought the use of algorithms in the city’s bail and parole decisions.
Scholars and Writers
Julia Angwin and Nabiha Syed. Independent investigative journalist Angwin was part of the ProPublica team that wrote “Machine Bias” in 2016 about racially discriminatory software in criminal sentencing — one of the first analyses to make clear the potential harms of A.I. for the public. In 2018, she founded The Markup, a nonprofit news outlet now led by Syed and committed to challenging technology to serve the public good. Its motto: “Big Tech is watching you. We’re watching Big Tech.”
Meredith Broussard, data journalist and research director for New York University’s Alliance for Public Interest Technology. In her new book, More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech, Broussard points to ways that A.I. harms show up in everyday life, from soap dispensers to breast-cancer screening. Credited with coining the term “technochauvinism,” she writes frequently in mainstream news outlets, including the Atlantic, the New Yorker, and Wired.
Arvind Narayanan and Sayash Kapoor. Narayanan and Kapoor write the Substack newsletter AI Snake Oil, where they aim to pierce A.I. hype. Narayanan is a Princeton computer science professor; Kapoor, a former Facebook software engineer, is a doctoral student at the university.
Safiya Umoja Noble, founder and faculty director, UCLA Center on Race and Digital Justice. Noble wrote the 2018 book Algorithms of Oppression: How Search Engines Reinforce Racism and won a MacArthur “genius” grant in 2021. That same year, she founded the nonprofit Equity Engine to deepen investment in companies, education, and networks led by women of color.
Former Government Officials
Sorelle Friedler, Alondra Nelson, and Suresh Venkatasubramanian. The three worked in the Biden White House and are seen as the muscle and brains behind its “Blueprint for an AI Bill of Rights,” released last fall as a set of principles for protecting civil rights and personal freedom. Nelson, now at the Center for American Progress and the Institute for Advanced Study, was the first Black woman to lead the White House’s Office of Science and Technology Policy in its 45-year history. Venkatasubramanian, a Brown professor, and Friedler, a Haverford College scholar, both of whom study algorithmic fairness, took leaves from their academic jobs for the White House posts.
Marietje Schaake, international policy director at Stanford’s Cyber Policy Center. A former member of the European Parliament from the Netherlands, Schaake is a leading voice in debates on technology regulation who provides a perspective from the European Union, which is moving more quickly than the United States to address potential A.I. risks.
Latanya Sweeney, founder, Harvard’s Public Interest Tech Lab. Sweeney is a pioneer in research that demonstrated racial discrimination in algorithms. A former chief technologist for the Federal Trade Commission and the first Black woman to receive a Ph.D. in computer science from MIT, she runs a lab to identify tech harms and use technology to solve social and political problems.