Document Type
Article
Publication Date
May 9, 2024
Abstract
This study evaluates the clinical accuracy of OpenAI's ChatGPT in pediatric dermatology by comparing its responses on multiple-choice and case-based questions with those of pediatric dermatologists. ChatGPT versions 3.5 and 4.0 were tested on questions from the American Board of Dermatology and the "Photoquiz" section of Pediatric Dermatology. Results show that human pediatric dermatology clinicians generally outperformed both ChatGPT iterations, though ChatGPT-4.0 demonstrated comparable performance in some areas. The study highlights the potential of AI tools to aid clinicians with medical knowledge and decision-making, while also emphasizing the need for continued advancement and clinician oversight in the use of such technologies.
Recommended Citation
Huang, Charles Y.; Zhang, Esther; Caussade, Marie-Chantal; Brown, Trinity; Stockton Hogrogian, Griffin; and Yan, Albert C., "Pediatric Dermatologists Versus AI Bots: Evaluating the Medical Knowledge and Diagnostic Capabilities of ChatGPT" (2024). Student Papers, Posters & Projects. Paper 166.
https://jdc.jefferson.edu/student_papers/166
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
Language
English
Comments
This article is the author’s final published version in Pediatric Dermatology, Volume 41, Issue 5, September/October 2024, Pages 831-834.
The published version is available at https://doi.org/10.1111/pde.15649. Copyright © 2024 The Authors. Pediatric Dermatology published by Wiley Periodicals LLC.
Publication made possible in part by support through a transformative agreement between Thomas Jefferson University and the publisher.