Document Type

Article

Publication Date

11-7-2025

Comments

This article is the author’s final published version, published by JMIR Publications Inc., Volume 4, 2025, article number e69006.

The published version is available at https://doi.org/10.2196/69006. Copyright © Yining Hua, Winna Xia, David Bates, George Luke Hartstein, Hyungjin Tom Kim, Michael Li, Benjamin W Nelson, Charles Stromeyer IV, Darlene King, Jina Suh, Li Zhou, John Torous.

Abstract

BACKGROUND: Health care chatbots are rapidly proliferating, while generative artificial intelligence (AI) outpaces existing evaluation standards.

OBJECTIVE: We aimed to develop a structured, stakeholder-informed framework to standardize evaluation of health care chatbots.

METHODS: PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses)-guided searches across multiple databases identified 266 records; 152 were screened, 21 full texts were assessed, and 11 frameworks were included. We extracted 356 questions (refined to 271 through deduplication and relevance review), mapped items to Coalition for Health AI constructs, and organized them with iterative input from clinicians, patients, developers, epidemiologists, and policymakers.

RESULTS: We developed the Health Care AI Chatbot Evaluation Framework (HAICEF), a hierarchical framework with 3 priority domains (safety, privacy, and fairness; trustworthiness and usefulness; and design and operational effectiveness) and 18 second-level and 60 third-level constructs covering 271 questions. The framework emphasizes data provenance and harm control; Health Insurance Portability and Accountability Act/General Data Protection Regulation-aligned privacy and security; bias management; and reliability, transparency, and workflow integration. Question distribution across domains is as follows: design and operational effectiveness, 40%; trustworthiness and usefulness, 39%; and safety, privacy, and fairness, 21%. The framework accommodates both patient-facing and back-office use cases.

CONCLUSIONS: HAICEF provides an adaptable scaffold for standardized evaluation and responsible implementation of health care chatbots. Planned next steps include prospective validation across settings and a Delphi consensus to extend accountability and accessibility assurances.

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.

PubMed ID

41202290

Language

English
