33 Leaders Standing Up to Big Tech in the Age of A.I.

By  Drew Lindsay
July 31, 2023
Christopher Pledger, eyevine, Redux
Computer scientist Joy Buolamwini, who has led ground-breaking research illustrating gender and racial bias in artificial intelligence, founded the nonprofit Algorithmic Justice League.

Experts in artificial intelligence debate what they call P(doom) — the probability that A.I. will grow to such heights of power that it wipes out humanity. For some, a digital apocalypse is a matter of when, not if.

A small band of nonprofit advocates, meanwhile, is rallying against the immediate dangers of machine learning, algorithms, and other A.I. technologies. “The issue is not that they’re omnipotent,” Amba Kak, executive director of the A.I. Now Institute, told the Atlantic recently. “It is that they’re janky now. They’re being gamed. They’re being misused. They’re inaccurate. They’re spreading disinformation.”


Technology-focused groups have raised alarms about the peril of digital advances and concentration of power with Big Tech for more than a decade. But as A.I. concerns grow, advocacy and research groups of all stripes are linking arms with them.

Groups that promote affordable housing worry about bias in algorithms that determine rent, screen tenants, and make loan decisions. Racial-justice advocates point to evidence that facial-recognition software discriminates against people of color. Social-service organizations see danger in A.I.-driven distribution of benefits. Human-rights activists warn of deepfakes that could lead to the imprisonment of innocents. Indeed, few parts of the nonprofit world seem immune from A.I.'s impact.

To gauge just how groups are responding, the Chronicle asked a dozen experts and advocates to identify top nonprofit leaders in the growing field often called “A.I. safety.” The list of 33 below, while hardly comprehensive, speaks to the scope of what’s happening and the diversity of the players. It features Google exiles and nonprofit veterans. Academics and community organizers. Tech experts and social-justice champions.


Notably, the list favors women and includes many people of color, among them several Black women. Such diversity was missing in the movement's early days, when white men dominated, just as they do in the tech industry itself. Advocates say people of color are particularly attuned to A.I. bias because they are outsiders to Silicon Valley yet often directly affected by discrimination embedded in its products.

“There’s a story to be told about the number of Black women who have really pioneered this space,” says Eric Sears, who runs the Technology and the Public Interest program at the John D. and Catherine T. MacArthur Foundation.

Leaders of Tech-Focused Groups

Catherine Bracy, co-founder and CEO, TechEquity Collaborative. The collaborative educates and mobilizes tech workers to address ways that their employers and products — including A.I.-powered software — drive inequality. Previously, Bracy worked on Barack Obama’s 2012 campaign, building a corps of volunteer technologists, and led community organizing at Code for America.


Joy Buolamwini, founder, Algorithmic Justice League. Buolamwini is a computer scientist and activist known as a “poet of code” and hailed by Fortune magazine as “the conscience of the A.I. revolution.” She documents A.I. biases through research and illustrates them through art. As an MIT doctoral student, she began documenting the failures of facial-recognition systems to identify dark-skinned female faces — work that culminated in the groundbreaking 2018 “Gender Shades” study, which she led, and a spoken-poem video exhibited in museums.

Jeff Chiu, AP
Timnit Gebru, once a co-leader of Google’s ethical A.I. team, now leads a philanthropy-backed effort to build a field of A.I. experts outside the tech industry.

Timnit Gebru, founder, Distributed Artificial Intelligence Research Institute. An Ethiopian-born engineer, Gebru famously lost her job in 2020 as co-leader of Google’s ethical A.I. team after pointing to biases and inequalities embedded in the company’s work. She launched the institute in 2021 to build a field of A.I. experts outside the tech industry. “A.I. needs to be brought back down to earth,” Gebru said.

Alexandra Reeve Givens, CEO, Center for Democracy & Technology. CDT — a long-standing advocacy and research group established in 1994 at the dawn of the internet — is a regular in congressional hearings, White House huddles, and op-eds in outlets like the New York Times. Under Givens — daughter of the late actor Christopher Reeve, who was paralyzed after an accident — the organization has highlighted A.I.’s discrimination against people with disabilities.

Janet Haven, executive director, Data & Society. Haven started her career at tech startups in Europe and spent a decade at the Open Society Foundations as the field of data and technology governance took form. Data & Society brings together scholars and experts from a range of fields to study tech topics including A.I. and automation. It recently created the Algorithmic Impact Methods Lab to develop ways to measure automated decision-making’s impact on individuals and society.

Amba Kak and Sarah Myers West, AI Now Institute. The six-year-old organization is a leading player in the push for rigorous “algorithm accountability” policies that would require companies to assess the risk of their algorithms and address any negative impact. Kak, a lawyer, and West, a scholar and AI Now’s managing director, both did stints at the Federal Trade Commission. West is writing a book, Tracing Code, about the origins of data capitalism and commercial surveillance.


Yeshimabeit “Yeshi” Milner, founder and CEO, Data for Black Lives. A former Echoing Green and Ashoka fellow, Milner is a longtime organizer who started Data for Black Lives to connect tech experts — data scientists, software engineers, mathematicians — with leaders and activists in Black communities. She calls for abolishing “Big Data,” which she describes as “a new form of social and political control.”

Emily Tucker, executive director, Georgetown University’s Center on Privacy and Technology. Before joining the center, Tucker — a lawyer who has a master’s degree in theological studies and expertise in immigration issues — worked for a decade helping grassroots groups organize and litigate against surveillance of poor communities and communities of color. The center last year published the widely cited report “American Dragnet,” arguing that the U.S. Immigration and Customs Enforcement agency “now operates as a domestic surveillance agency.”


Harlan Yu, executive director, Upturn. Upturn examines how technology reinforces inequality. Recently, it sought a federal investigation of how Meta unfairly steers Facebook job ads away from users based on factors like gender and age. Yu, who has a Princeton Ph.D. in computer science, is an expert on the impact of A.I.-driven body cameras and other emerging technologies used in policing.

Other Nonprofit Leaders

Olga Akselrod and ReNika Moore, American Civil Liberties Union. Moore leads the organization’s racial-justice program, where Akselrod is senior staff attorney. The two have taken up tech issues and argue that A.I. upends the balance of power between the people and a host of government actors — including police, immigration officials, and health-care providers. Local ACLU chapters are also active; in Massachusetts, the group’s decade-old Technology for Liberty program has fought to ban local face-recognition surveillance systems.

Amanda Ballantyne, AFL-CIO. Ballantyne is a lawyer, organizer, and director of the union’s Technology Institute, which launched in 2021 and bills itself as a tech think tank for the labor movement. Last year, she was named one of the 27 members of the National Artificial Intelligence Advisory Committee.


Lydia X.Z. Brown, director of public policy at the National Disability Institute. Brown, a lawyer who defines herself as a queer disabled person, has documented algorithmic harm in public-benefits decisions, hiring, and surveillance for people with disabilities. She recently helped decide the inaugural grantees of the Disability x Tech Fund, which addresses disability bias in technology.

Henry Claypool, tech policy consultant, American Association of People With Disabilities. Claypool, who has lived with a disability for decades after suffering a spinal-cord injury, is an expert on how technology — from self-driving cars to self-proctored student exams — can expand or limit the lives of people with disabilities. A top official on disability issues in the Obama administration, he helped launch the Disability x Tech Fund.

Sam Gregory, executive director, Witness. A longtime human-rights advocate, Gregory is a leading authority on deepfakes and other forms of A.I.-generated misinformation and disinformation. He focuses on preparing countries and communities for how doctored videos and manipulated media could be used to justify coups, jail innocents, and spark conflict.

Damon Hewitt, CEO, Lawyers’ Committee for Civil Rights Under Law. The organization, with the Leadership Conference on Civil and Human Rights and others, brings together groups as different as Color of Change and Free Press Action to push for federal policy to address A.I. discrimination in housing, employment, financial services, and more.


Maya Wiley and Corrine Yu, Leadership Conference on Civil and Human Rights. Wiley, who ran for New York City mayor in 2021, is CEO; Yu leads its efforts on digital rights and privacy. The group organizes a big-tent coalition of organizations and advocates to pressure Congress, the White House, and federal agencies to ensure that laws and enforcement keep pace with A.I.’s rapid growth and the threats to civil rights. More than 60 organizations — from the Hip Hop Caucus to the National Center for Learning Disabilities — signed on to a recent call for action.

Local Grassroots Leaders

Tawana Petty, director of policy and advocacy, Algorithmic Justice League. Although now at a national nonprofit, Petty — a poet, author, and organizer — remains a leading digital activist in Detroit and a sharp critic of the city police’s Project Green Light mass-surveillance system. One recent victory: U.S. Department of Justice evaluators found the system had “no effect” on crime.

Hannah Sassaman, executive director, People’s Tech Project. The slogan for the Philadelphia organization: “Arming movements for liberation with the tools to fight the tech that oppresses us.” Sassaman, a seasoned community organizer, helped spin the effort off from the Movement Alliance Project after several tech-related campaigns, including one that fought the use of algorithms in the city’s bail and parole decisions.

Scholars and Writers

Julia Angwin and Nabiha Syed. Independent investigative journalist Angwin was part of the ProPublica team that wrote “Machine Bias” in 2016 about racially discriminatory software in criminal sentencing — one of the first analyses to make clear the potential harms of A.I. for the public. In 2018, she founded The Markup, a nonprofit news outlet now led by Syed and committed to challenging technology to serve the public good. Its motto: “Big Tech is watching you. We’re watching Big Tech.”


Meredith Broussard, data journalist and research director for New York University’s Alliance for Public Interest Technology. In her new book, More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech, Broussard points to ways that A.I. harms are evident in everyday life, from soap dispensers to breast-cancer screening. Credited with coining the term “technochauvinism,” she writes frequently in mainstream news outlets, including the Atlantic, New Yorker, and Wired.

Eóin Noonan, Sportsfile for Web Summit, Getty Images
Investigative journalist Julia Angwin was among the first reporters to document bias and discrimination in algorithms.

Karen Levy, information science professor, Cornell University. A lawyer and sociologist, Levy studies how businesses collect data on workers. Her 2022 book Data Driven: Truckers, Technology, and the New Workplace Surveillance, about digital surveillance of truck drivers, is praised as a deep yet readable dive into the pernicious effects of technology in one industry.

Arvind Narayanan and Sayash Kapoor. Narayanan and Kapoor write the Substack newsletter AI Snake Oil, where they aim to pierce A.I. hype. Narayanan is a Princeton computer science professor; Kapoor, a former Facebook software engineer, is a doctoral student at the university.

Safiya Umoja Noble, founder and faculty director, UCLA Center on Race and Digital Justice. Noble wrote the 2018 book Algorithms of Oppression: How Search Engines Reinforce Racism and won a MacArthur “genius” grant in 2021. That same year, she founded the nonprofit Equity Engine to deepen investment in companies, education, and networks led by women of color.

Former Government Officials

Sorelle Friedler, Alondra Nelson, and Suresh Venkatasubramanian. The three worked in the Biden White House and are seen as the muscle and brains behind its “Blueprint for an AI Bill of Rights,” released last fall as a set of principles for protecting civil rights and personal freedom. Nelson, now at the Center for American Progress and the Institute for Advanced Study, was the first Black woman to lead the White House’s Office of Science and Technology Policy in its 45-year history. Venkatasubramanian, a Brown professor, and Friedler, a Haverford College scholar, took leaves from their jobs (both study algorithmic fairness) for the White House posts.


Marietje Schaake, international policy director at Stanford’s Cyber Policy Center. A former member of the European Parliament from the Netherlands, Schaake is a leading voice in debates on technology regulation who provides a perspective from the European Union, which is moving more quickly than the United States to address potential A.I. risks.

Latanya Sweeney, founder, Harvard’s Public Interest Tech Lab. Sweeney is a pioneer in research that demonstrated racial discrimination in algorithms. A former chief technologist for the Federal Trade Commission and the first Black woman to receive a Ph.D. in computer science from MIT, she runs a lab to identify tech harms and use technology to solve social and political problems.

We welcome your thoughts and questions about this article. Please email the editors or submit a letter for publication.
Drew Lindsay
Drew is a longtime magazine writer and editor who joined the Chronicle of Philanthropy in 2014.