Document Type
Article
Publication Date
11-20-2023
Abstract
UNLABELLED: The rapid progress in artificial intelligence, machine learning, and natural language processing has led to increasingly sophisticated large language models (LLMs) for use in healthcare. This study assesses the performance of two LLMs, GPT-3.5 and GPT-4, in passing the MIR medical examination for access to medical specialist training in Spain. Our objectives included gauging the models' overall performance, analyzing discrepancies across medical specialties, distinguishing between theoretical and practical questions, estimating error proportions, and assessing the hypothetical severity of such errors had they been committed by a physician.
MATERIAL AND METHODS: We studied the 2022 Spanish MIR examination results after excluding questions that required image evaluation or had acknowledged errors. The remaining 182 questions were presented to GPT-4 and GPT-3.5 in Spanish and English. Logistic regression models analyzed the relationships between question length, sequence, and performance. We also analyzed the 23 questions with images using GPT-4's new image analysis capability.
RESULTS: GPT-4 outperformed GPT-3.5, scoring 86.81% in Spanish (p < 0.001). Performance was slightly better on the English translations. GPT-4 answered 26.1% of the image-based questions correctly in English; results were worse in Spanish (13.0%), although the difference was not statistically significant (p = 0.250). Among medical specialties, GPT-4 achieved a 100% correct response rate in several areas, while Pharmacology, Critical Care, and Infectious Diseases showed lower performance. The error analysis revealed an overall error rate of 13.2%, but the gravest categories, such as "error requiring intervention to sustain life" and "error resulting in death", had a 0% rate.
CONCLUSIONS: GPT-4 performs robustly on the Spanish MIR examination, with varying ability to discriminate knowledge across specialties. While the model's high success rate is commendable, understanding error severity is critical, especially when considering AI's potential role in real-world medical practice and its implications for patient safety.
Recommended Citation
Guillen-Grima, Francisco; Guillen-Aguinaga, Sara; Guillen-Aguinaga, Laura; Alas-Brun, Rosa; Onambele, Luc; Ortega, Wilfrido; Montejo, Rocio; Aguinaga-Ontoso, Enrique; Barach, Paul; and Aguinaga-Ontoso, Ines, "Evaluating the Efficacy of ChatGPT in Navigating the Spanish Medical Residency Entrance Examination (MIR): Promising Horizons for AI in Clinical Medicine." (2023). Department of Medicine Faculty Papers. Paper 433.
https://jdc.jefferson.edu/medfp/433
Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License.
Table S2 Answer to MIR examination questions.pdf (98 kB)
Table S3 Questions failed by GPT4 indicating the wrong answer and the classification of the Error.pdf (257 kB)
Table S4 Images and English translation of the questions with image.pdf (1856 kB)
PubMed ID
37987431
Language
English
Included in
Artificial Intelligence and Robotics Commons, Medical Education Commons, Medical Specialties Commons, Patient Safety Commons
Comments
This article is the author's final published version in Clinics and Practice, Volume 13, Issue 6, December 2023, Pages 1460-1487.
The published version is available at https://doi.org/10.3390/clinpract13060130.
Copyright © 2023 by the authors