AI Attitudes Study Shows Stark Divide Among Marginalized Groups, University of Michigan Reports

A University of Michigan study reveals significant AI skepticism among nonbinary, transgender, and disabled individuals, highlighting concerns on social bias in AI applications and urging policymakers to include marginalized voices to ensure equitable technology development.



Gender and Disability Drive Deep AI Distrust, U-M Research Finds

A University of Michigan research team has released a new study on AI attitudes, finding that marginalized populations—specifically nonbinary, transgender, and disabled individuals—hold significantly more negative views of artificial intelligence than other demographic groups. The study, presented at the 2025 ACM Conference on Fairness, Accountability, and Transparency (FAccT), surveyed 742 U.S. residents and is among the first to quantify AI skepticism across identity lines at a national level.

“AI may be everywhere, but it’s not for everyone—at least not yet,” said Oliver Haimson, assistant professor at the U-M School of Information and the study’s lead author. “If we continue to ignore the perspectives of marginalized people, we risk building an AI-powered future that deepens inequities rather than reducing them” (University of Michigan News, 2025).


Facial Recognition, Healthcare, and Policing Fuel Mistrust


Respondents expressed concern over real-world applications of AI that reinforce social bias. Facial recognition software, for example, was cited as frequently misgendering nonbinary and trans individuals, particularly in surveillance contexts. In healthcare, AI systems often fail to consider the specific needs of those with neurodivergence or mental health conditions, contributing to poorer outcomes and a lack of trust.

These findings align with prior studies showing similar algorithmic failures, including Joy Buolamwini and Timnit Gebru’s 2018 research demonstrating racial and gender bias in commercial AI systems used for facial recognition.


Black Americans Report Surprisingly Positive AI Outlook

Contrary to the study’s second hypothesis, Black participants reported more positive attitudes toward AI than white participants. This outcome may be partially explained by the theory of “Black optimism,” a sociological perspective suggesting that marginalized communities may view technology as a tool for empowerment despite systemic challenges (Sexton, 2011).

Still, the authors warn that higher optimism does not negate the disproportionate harms these groups may face when interacting with AI-powered systems, especially in law enforcement or hiring.


AI Perceived Differently Based on Gender, Race, and Disability

Among the groups surveyed:

  • Nonbinary participants scored the lowest on positive AI attitudes (M = 3.84 on a 7-point scale).
  • Transgender individuals followed closely behind (M = 4.12).
  • Women rated AI less positively than men (M = 4.96 vs. 5.32).
  • Disabled respondents, especially those with neurodivergence or mental health conditions, expressed significantly lower trust in AI than non-disabled counterparts.

These disparities remained statistically significant even after adjusting for income, education, and exposure to technology.


Study Urges Policymakers to Focus on Equity in AI Regulation

The authors argue that policymakers and technology developers must center the voices of those with the most to lose. Without meaningful inclusion of marginalized communities, AI deployment may reproduce or worsen existing social inequalities.

While no federal legislation currently addresses these concerns, states like California and Colorado are leading with new laws mandating transparency and bias audits for AI systems. The authors suggest Michigan legislators may follow suit, especially given the state’s robust academic and technology research sectors.


Michigan-Based Research at the Forefront of AI Ethics

The study was authored by a team from the University of Michigan, including Samuel Reiji Mayworm, Alexis Shore Ingber, and Nazanin Andalibi, and was supported in part by an NSF grant. The research highlights Ann Arbor’s growing role in ethical AI and social impact studies—a field gaining urgency as AI tools are embedded across healthcare, education, and government systems.





Michael Hardy

Michael is the owner of Thumbwind Publications LLC, which started in 2009 as a fun-loving site covering Michigan's Upper Thumb. Since then, he has expanded his sites and range of content and established a loyal base of 60,000 visitors per month.
