Document Type

Article

Publication Date

5-9-2024

Comments

This article is the author’s final published version in Pediatric Dermatology, Volume 41, Issue 5, September/October 2024, Pages 831-834.

The published version is available at https://doi.org/10.1111/pde.15649. Copyright © 2024 The Authors. Pediatric Dermatology published by Wiley Periodicals LLC.

Publication made possible in part by support through a transformative agreement between Thomas Jefferson University and the publisher.

Abstract

This study evaluates the clinical accuracy of OpenAI's ChatGPT in pediatric dermatology by comparing its responses on multiple-choice and case-based questions with those of pediatric dermatologists. ChatGPT versions 3.5 and 4.0 were tested against questions from the American Board of Dermatology and the "Photoquiz" section of Pediatric Dermatology. Results show that human pediatric dermatology clinicians generally outperformed both ChatGPT iterations, though ChatGPT-4.0 demonstrated comparable performance in some areas. The study highlights the potential of AI tools to aid clinicians with medical knowledge and decision-making, while also emphasizing the need for continual advancements and clinician oversight in using such technologies.

Creative Commons License

This work is licensed under a Creative Commons Attribution-Noncommercial 4.0 License

Language

English
