Interview

How A.I. Could Hurt Your Cause: A Veteran Tech Watchdog Explains

The Center for Democracy & Technology’s Alexandra Reeve Givens on why tech policy is now social-justice policy.

By Lynn Schnaiberg
October 16, 2023
Concept image of a digitized face overlaid with a biometric facial recognition pattern. (Getty Images)

Founded in 1994 at the dawn of the internet, the Center for Democracy & Technology has long battled many threats that technology poses to American life: Government and business abuses of surveillance technology. The privacy violations of Big Tech’s warehousing and commercialization of personal data. Restrictions on free speech.

Today, the group is one of the leaders in a growing nonprofit response to the dangers of artificial intelligence, which may prove thornier. There has always been a risk that technology discriminates, says its president and CEO, Alexandra Reeve Givens. “But now, with A.I., we have sophisticated systems that can embed that discrimination into an entire industry. And it happens within a black box, so people don’t even know that discrimination is taking place. The harms I worry about are evolutionary, not revolutionary. What’s different about A.I. is that it has made the threats proliferate faster.”

Consider the array of potential risks:

  • Surveillance tools leading to over-policing of communities of color and algorithms that can unfairly blacklist tenants from immigrant, Indigenous, and Black communities.
  • Automated hiring practices that can discriminate against women, people of color, people with disabilities, and older Americans.
  • Algorithms tapping location and other data to expose sensitive information on women’s reproductive health choices as states restrict and criminalize abortion after the Supreme Court overturned Roe v. Wade in Dobbs v. Jackson Women’s Health Organization.
  • Financial services using bias-replicating machine learning that can hinder people of color’s access to credit and mortgages.

Tech policy, Reeve Givens contends, is social-justice policy. Few in philanthropy who are concerned with closing societal divides can afford to look away. Government intervention is needed to protect the public from A.I. harms, she says, but foundations, grant makers, and nonprofits must mobilize, too.

A former litigator and an adjunct professor at the law schools of Columbia and Georgetown universities, Reeve Givens took the helm at CDT in 2020 after wrangling issues like net neutrality, online speech, and the First Amendment as chief counsel on the Senate Judiciary Committee. Previously, she was founding executive director of Georgetown University’s Institute for Technology Law & Policy, where she helped lead a drive among colleges and universities to educate “public interest technologists” — graduates who would push technology toward the public good from a range of career fields.

Reeve Givens’s personal life gave her early insight into inequality, most notably as experienced by people with disabilities. Her late father, actor Christopher Reeve of Superman fame, was paralyzed in an accident in 1995 and became a staunch disability advocate. Reeve Givens has expanded CDT’s disability work and espouses a big-tent vision on tech.

Following are highlights from a Chronicle conversation with Reeve Givens about how nonprofits and philanthropy should be thinking about — and responding to — A.I. It has been edited for length and clarity.

Should foundations and grant makers respond differently to the A.I. threat than earlier threats from tech?

This moment is a wake-up call. Tech policy is central to the core issues impacting society today. I don’t think the reaction should be different. But maybe A.I. gets their attention and invites them in. If you think of yourself as a funder who focuses on economic mobility or on women’s rights, know that tech policy is deeply impacting the communities and issues you care about, and you should be funding at that intersection. After the Dobbs decision, we have had to mobilize a huge effort around online privacy and access to information about reproductive care. It is hugely important that foundations don’t just leave A.I. work to the tech specialists.

Are you seeing philanthropy funding at that intersection in a big way? Are foundations and grant makers supporting A.I.-related work across a wide range of nonprofits and issue areas?

There is a very specific universe of funders that pay attention to tech policy issues because they realize the impact on society. But there is a much broader universe of funders who should realize how much tech and A.I. are impacting the values they care about. A big education effort is needed on how all these values that motivate funders have a significant tech component that needs to be supported — beyond the handful of funders that already know to focus on tech policy or digital rights. There is still a real need to get involved to shape A.I.

Alexandra Reeve Givens, president and CEO of the Center for Democracy & Technology. (Center for Democracy & Technology)

Can you name some groups that recognize the importance of A.I. in their work? Are there organizations that might surprise us?

I don’t think every food bank needs to worry about A.I. But groups representing impacted communities definitely do. The American Association of People with Disabilities and AARP are examples of organizations that five years ago probably never thought about tech policy, but suddenly have had to pivot to be deep in the weeds on how A.I. is impacting the community they represent. The AFL-CIO now has a tech institute looking at how A.I. and tech affect workers’ rights. Civil-rights groups like the NAACP and the Leadership Conference on Civil and Human Rights have dedicated tech people.

What do you say to nonprofits that may want to engage on A.I. but don’t have the bandwidth or expertise — or simply don’t know how to engage?

I don’t want to put the burden on organizations to get really smart on A.I. It is incumbent on organizations like CDT to serve as an ally and resource. Tech policy should not feel like some foreign space that you need a computer-science degree to participate in.

The most important thing is groups feeling that they should have a seat at the table and have something to contribute to this conversation — bringing their subject-matter expertise, their community, and their lived experience to these issues. I want to make sure that people are not self-selecting out.

Are there sectors or voices that you think have been missing or under-represented at the A.I. table?

There has been really good work over the past decade to integrate technology and racial justice work. But the conversation around disability, LGBTQ rights, and age discrimination often gets left behind or kind of treated as a second-order priority. And that was happening in the tech-policy space. To me, it was a real call to action to make sure that disability issues and the voices of disability leaders were included.

What is amazing is it hasn’t been that many years, and there has been a ton of progress. The White House has published its A.I. Bill of Rights, and disability features throughout. And I don’t think that conversation would have been the same just a couple of years ago. Meredith Broussard, one of the leading tech and social-justice authors, just published a book on bias in tech, and “ability bias” is in the title’s subhead. I was like: “Disability made the front cover!” Of course, there’s more work to be done.

Earlier, you noted that the harms from A.I. aren’t new. But are the solutions funders could pursue new?

There is a lot the tech accountability movement can learn from predecessors in public health, public interest law, and environmental justice. With environmental impact assessments, there is a very real conversation right now about how well those work and what we can copy and paste for algorithmic impact assessments. So, before you deploy a new tool or strategy, you should be obligated to do a broader assessment of the diffuse harm it is going to have. And you should be charged with making sure that harm isn’t realized.

In the 1970s, there was a big push for public interest lawyers. There is a very conscious movement now to seed a public interest technologist career path. So, if you’re graduating with a computer science degree, you should have had foundational training in ethics and responsible product development, and you should be thinking about service in government or the nonprofit sector.

Some funders are focused on advancing A.I. for good versus reducing A.I. harms. What opportunities do you see in the A.I.-for-good space from your perch at CDT?

In my universe, it comes up in the delivery of government services: the potential for A.I. or data-driven programs to help tailor programs to individual needs or to share information across agencies — for example, to make benefits allocations more responsive.

A.I. could be helpful in access to justice, making it easier for people to file in small claims court or file a domestic violence complaint because the A.I. chatbot has the template.

Or in expungement, for people who had a criminal charge but then were released, served their time, or had the charges dropped, so it doesn’t come up when someone looks at their record. Expungement is a very paperwork-intensive effort that can be automated.

CDT reports getting nearly half of its funding from foundations. But the second-biggest source, about a third, comes from corporations that include Amazon, Google, Apple, Meta, and Microsoft. How does CDT stay independent? And your husband works at NVIDIA, a tech company whose products help power A.I. (Reeve Givens says the company has not been a donor.)

Civil-society organizations have a really important role in the ecosystem, setting the norms that then inform what companies do. We play a very strong role advocating directly to the companies to strengthen their responsible technology practices. We think we’re providing a service to the ecosystem. And we receive unrestricted donations from companies that think that is a value.

So to me, it feels right that the companies are putting out some money for that work. Companies know very well that we are going to disagree with them probably more times than we are going to agree with them. Apple has been a donor for years. CDT happily led a coalition of over 90 civil society organizations telling Apple we thought they had done the wrong thing when they made changes to their privacy policies.

We have a broad range of funders, so we’re never wed to one particular company. And we just have very firm internal policies to make sure that the staff have full freedom of operation in the positions they take on any given policy issue. My husband’s job focuses on business operations, not on policy. We have two young kids, so policy discussions aren’t the norm at home.

A version of this article appeared in the November 7, 2023, issue.
Lynn Schnaiberg
Lynn Schnaiberg is an award-winning, Seattle-based journalist and author who has written for national publications including Anthropocene, Outside, Education Week, and Business 2.0.