Document Type
Article
Publication Date
3-27-2025
Abstract
Background/Objectives: The utility of artificial intelligence (AI) in medical education has recently garnered significant interest, with several studies exploring its applications across various educational domains; however, its role in orthopedic education, particularly in shoulder and elbow surgery, remains scarcely studied. This study aims to evaluate the performance of multiple AI models in answering shoulder- and elbow-related questions from the AAOS ResStudy question bank.
Methods: A total of 50 shoulder- and elbow-related questions from the AAOS ResStudy question bank were selected for the study. Questions were categorized according to anatomical location, topic, concept, and difficulty. Each question, along with its multiple-choice answer options, was provided to each chatbot. Each chatbot's responses were recorded and analyzed to identify significant differences in performance across the various categories.
Results: The overall average performance of all chatbots was 60.4%. There were significant differences in the performances of different chatbots (p = 0.034): GPT-4o performed best, answering 74% of the questions correctly. AAOS members outperformed all chatbots, with an average accuracy of 79.4%. There were no significant differences in performance between shoulder and elbow questions (p = 0.931). Topic-wise, chatbots performed worse on questions relating to "Adhesive Capsulitis" than on those relating to "Instability" (p = 0.013), "Nerve Injuries" (p = 0.002), and "Arthroplasty" (p = 0.028). Concept-wise, the best performance was seen in "Diagnosis" (71.4%), but there were no significant differences in scores across concepts. Difficulty analysis revealed that chatbots performed significantly better on easy questions (68.5%) than on moderate (45.4%; p = 0.04) and hard questions (40.0%; p = 0.012).
Conclusions: AI chatbots show promise as supplementary tools in medical education and clinical decision-making, but their limitations necessitate cautious and complementary use alongside expert human judgment.
Recommended Citation
Fares, Mohamad Y.; Parmar, Tarishi; Boufadel, Peter; Daher, Mohammad; Berg, Jonathan; Witt, Austin; Hill, Brian W.; Horneff, John G.; Khan, Adam Z.; and Abboud, Joseph A., "An Assessment of the Performance of Different Chatbots on Shoulder and Elbow Questions" (2025). Rothman Institute Faculty Papers. Paper 287.
https://jdc.jefferson.edu/rothman_institute/287
Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License.
Language
English
Comments
This article is the author’s final published version in the Journal of Clinical Medicine, Volume 14, Issue 7, April 2025, Article number 2289.
The published version is available at https://doi.org/10.3390/jcm14072289. Copyright © 2025 by the authors. Licensee MDPI, Basel, Switzerland.