AI moderation tools offer new hope in fighting online ableism

By Staff Report

Social media users with disabilities are calling for more personalized and transparent moderation tools to combat the growing tide of online harassment, according to new Cornell research presented this week.

Led by Shiri Azenkot, an associate professor at Cornell Tech and the Jacobs Technion-Cornell Institute, the study found that disabled users overwhelmingly preferred AI moderation systems that explained the nature of the hate they encountered—whether it promoted eugenicist ideas or equated disability with weakness—over those that simply hid offensive content.


“Our work showed that indicating the type of content … supported transparency and trust, and increased user agency,” Azenkot said.

The study, titled “Ignorance is not Bliss: Designing Personalized Moderation to Address Ableist Hate on Social Media,” was presented April 28 at the Association for Computing Machinery’s CHI ’25 conference in Yokohama, Japan.

Through interviews and focus groups, researchers tested various AI-powered designs that categorized ableist speech by its specific themes. Users strongly favored these approaches over sensitivity sliders, which filter content based on how intensely hateful it appears. Subtle forms of ableism, participants said, often caused deeper harm than overt slurs.
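To make the contrast concrete, here is a minimal sketch of the two moderation styles participants compared: a sensitivity slider that simply hides posts above an intensity threshold, versus a design that labels the theme of the ableist content and tells the user why a post was filtered. The theme names, scores, and function names below are illustrative assumptions, not the researchers’ actual system.

```python
# Illustrative sketch only -- the theme taxonomy and both helpers are
# hypothetical, not the moderation designs tested in the study.
from __future__ import annotations
from dataclasses import dataclass

# Example taxonomy a theme classifier might output (assumed for illustration).
ABLEIST_THEMES = [
    "eugenicist framing",
    "disability equated with weakness",
    "mocking or infantilizing language",
]

@dataclass
class ModerationResult:
    hide: bool
    themes: list[str]          # empty when no ableist content is flagged
    explanation: str | None    # shown to the user when available

def slider_moderation(intensity: float, threshold: float) -> ModerationResult:
    """Sensitivity-slider style: hide anything scored above the user's
    threshold, with no explanation of why the post was hidden."""
    return ModerationResult(hide=intensity > threshold, themes=[], explanation=None)

def theme_based_moderation(detected: list[str], blocked: set[str]) -> ModerationResult:
    """Theme-labeling style: report which kind of ableist hate was found
    and filter only the themes the user chose to block."""
    matched = [t for t in detected if t in blocked]
    explanation = f"Hidden: contains {', '.join(matched)}" if matched else None
    return ModerationResult(hide=bool(matched), themes=matched, explanation=explanation)
```

In this sketch the slider can only hide or show a post, while the theme-based version can surface a label such as “Hidden: contains eugenicist framing,” the kind of transparency participants said they preferred.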

“This distinction is crucial,” said Sharon Heung, the study’s lead author and a Ph.D. student in information science. “Sometimes subtler or more insidious forms of ableism can cause deeper, more lasting harm.”

Participants also voiced a broader distrust in current AI moderation. Many shared frustrations with tools that mistakenly flagged neutral content simply for mentioning disability, pointing to the need for more context-aware systems.

“More work needs to be done with the disability community to ensure the accuracy of LLMs and to ensure that these tools are usable in practice,” Heung said.

In their findings, the researchers urged platforms to develop customizable AI filters that align with users’ own definitions of harm. Suggested improvements include adding content warnings for ableist language, giving users control over filtering errors, and allowing exemptions for trusted accounts.
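A rough sketch of what that per-user customization could look like follows; the settings object, its field names, and the decision logic are assumed purely for illustration and are not drawn from the paper or any platform’s API.

```python
# Hypothetical per-user settings illustrating the suggested improvements:
# content warnings, user correction of filtering errors, and exemptions
# for trusted accounts. Names are assumptions, not a real platform API.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class PersonalModerationSettings:
    blocked_themes: set[str] = field(default_factory=set)     # the user's own definition of harm
    warn_instead_of_hide: bool = True                          # show a content warning rather than removing the post
    trusted_accounts: set[str] = field(default_factory=set)   # authors never filtered
    corrected_posts: set[str] = field(default_factory=set)    # post IDs the user marked as wrongly filtered

def apply_settings(post_id: str, author: str, detected_themes: set[str],
                   settings: PersonalModerationSettings) -> str:
    """Return 'show', 'warn', or 'hide' for a post, honoring the user's choices."""
    if author in settings.trusted_accounts or post_id in settings.corrected_posts:
        return "show"                      # trusted-account exemption or previously corrected error
    matched = detected_themes & settings.blocked_themes
    if not matched:
        return "show"
    return "warn" if settings.warn_instead_of_hide else "hide"
```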

“If the goal is to reduce the emotional or psychological harm caused by encountering hate, then the design of these tools must reflect what users themselves find harmful or distressing,” Heung said.

The study was co-authored by Heung, Aditya Vashistha of the Bowers College of Computing and Information Science, and Lucy Jiang, a University of Washington Ph.D. student and former Cornell master’s student.

“Social media platforms can adopt this approach to moderation for all kinds of hateful content, not just ableism,” Azenkot said.


