Artificial intelligence is already being used in home health care—yet most home care workers have no idea it’s happening.
A new study from Cornell Tech reveals that personal care aides, home health aides, and certified nursing assistants are often unaware AI systems are involved in decisions about their schedules and workloads. The findings raise concerns about transparency, fairness, and bias in the growing use of algorithmic tools in health care settings.
The research will be presented at the Association for Computing Machinery’s Conference on Human Factors in Computing Systems (CHI ’25), running April 26–May 1 in Yokohama, Japan.
Workers unaware of AI’s role in their jobs
Researchers interviewed 22 frontline workers, agency staff, and labor advocates. While agency personnel acknowledged AI’s role in improving efficiency, most direct care workers didn’t realize these systems were already shaping their daily work.
For example, many agencies now use algorithmic shift-matching systems that assign home care workers to patients based on availability, qualifications, and patient needs. These systems are a form of AI, but workers rarely recognized them as such.
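The study does not describe any particular vendor's algorithm, but a minimal rule-based sketch illustrates what such a matcher might do: filter workers by availability and required qualifications, then rank the eligible ones by how well their skills cover a patient's needs. The names below (Worker, Shift, match_score, assign) are hypothetical and chosen only for illustration.

```python
# Illustrative sketch of a rule-based shift matcher (hypothetical; not the
# systems examined in the study). It ranks available, qualified workers by
# overlap with a shift's patient-need skills.
from dataclasses import dataclass, field

@dataclass
class Worker:
    name: str
    available_days: set[str]            # e.g. {"mon", "wed", "fri"}
    qualifications: set[str]            # e.g. {"CNA", "wound_care"}

@dataclass
class Shift:
    day: str
    required_qualifications: set[str]   # all must be held by the worker
    preferred_skills: set[str] = field(default_factory=set)

def match_score(worker: Worker, shift: Shift) -> float:
    """Score a worker for a shift; return -1 if the worker is ineligible."""
    if shift.day not in worker.available_days:
        return -1.0
    if not shift.required_qualifications <= worker.qualifications:
        return -1.0
    # Eligible: rank by overlap with preferred (patient-need) skills.
    return 1.0 + len(shift.preferred_skills & worker.qualifications)

def assign(workers: list[Worker], shift: Shift) -> Worker | None:
    """Pick the highest-scoring eligible worker, or None if nobody qualifies."""
    eligible = [(match_score(w, shift), w) for w in workers]
    eligible = [(s, w) for s, w in eligible if s >= 0]
    return max(eligible, key=lambda sw: sw[0])[1] if eligible else None

if __name__ == "__main__":
    workers = [
        Worker("A", {"mon", "tue"}, {"CNA"}),
        Worker("B", {"mon"}, {"CNA", "wound_care"}),
    ]
    shift = Shift(day="mon", required_qualifications={"CNA"},
                  preferred_skills={"wound_care"})
    print(assign(workers, shift).name)  # -> "B"
```

Even a simple ranking like this can quietly encode bias, since whatever signals feed the score determine who gets the most, or the most desirable, shifts.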
“We found a significant knowledge gap,” said lead researcher Ian René Solano-Kamaiko, a doctoral student at Cornell Tech. “Most frontline workers were unaware AI was being used at all.”
AI tools risk replicating bias
This lack of awareness has serious implications. The study highlights the risk that AI tools may replicate existing workplace biases. Algorithmic decision-making systems, including shift matchers, can unintentionally disadvantage women, people of color, immigrants, and other marginalized groups—demographics that dominate the home care workforce.
Even agency staff, who had more awareness of AI’s role, often trusted the tools without understanding how they worked or assessing whether they operated fairly.
“Most assumed the systems were trustworthy simply because they worked efficiently,” said Solano-Kamaiko.
A call for inclusive AI governance
The researchers call for transparent and equitable governance of AI systems in health care. They argue that home care workers and patients must be included in decision-making about how these systems are designed and used.
Nicola Dell, associate professor at Cornell Tech and co-author of the study, emphasized the importance of “participatory governance structures” that give frontline workers a voice.
The study also recommends new approaches to AI education that focus less on technical skills and more on helping workers understand how these systems affect their jobs.
“Instead of asking workers to learn how AI works, we need to help them understand how it applies in their context,” said Solano-Kamaiko.
Study details
The research was funded by the Innovation Resource Center for Human Resources. It marks an early step in efforts to develop safer, fairer AI systems for the home care industry.
For more, follow updates from the CHI ’25 conference.