Document Type
Article
Publication Date
3-1-2025
Abstract
PURPOSE: The advent of large language models (LLMs) like ChatGPT has introduced notable advancements in various surgical disciplines. These developments have led to an increased interest in the use of LLMs for Current Procedural Terminology (CPT) coding in surgery. With CPT coding being a complex and time-consuming process, often exacerbated by the scarcity of professional coders, there is a pressing need for innovative solutions to enhance coding efficiency and accuracy.
METHODS: This observational study evaluated the effectiveness of five publicly available large language models (Perplexity.AI, Bard, Bing AI, ChatGPT 3.5, and ChatGPT 4.0) in accurately identifying CPT codes for hand surgery procedures. A consistent query format was employed to test each model, ensuring the inclusion of detailed procedure components where necessary. The responses were classified as correct, partially correct, or incorrect based on their alignment with established CPT coding for the specified procedures.
RESULTS: In the evaluation of artificial intelligence (AI) model performance on simple procedures, Perplexity.AI achieved the highest number of correct outcomes (15), followed by Bard and Bing AI (14 each). ChatGPT 4.0 and ChatGPT 3.5 yielded 8 and 7 correct outcomes, respectively. For complex procedures, Perplexity.AI and Bard each had three correct outcomes, whereas the ChatGPT models had none. Bing AI had the highest number of partially correct outcomes (5). There were significant associations between AI model and performance outcome for both simple and complex procedures.
CONCLUSIONS: This study highlights the feasibility and potential benefits of integrating LLMs into the CPT coding process for hand surgery. The findings advocate for further refinement and training of AI models to improve their accuracy and practicality, suggesting a future where AI-assisted coding could become a standard component of surgical workflows, aligning with the ongoing digital transformation in health care.
TYPE OF STUDY/LEVEL OF EVIDENCE: Observational, IIIb.
Recommended Citation
Isch, Emily; Lee, Jamie; Self, D. Mitchell; Sambangi, Abhijeet; Habarth-Morales, Theodore E.; Vaile, John R.; and Caterson, E. J., "Artificial Intelligence in Surgical Coding: Evaluating Large Language Models for Current Procedural Terminology Accuracy in Hand Surgery" (2025). Department of Surgery Faculty Papers. Paper 284.
https://jdc.jefferson.edu/surgeryfp/284
Creative Commons License
This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 4.0 License.
PubMed ID
40182863
Language
English
Included in
Artificial Intelligence and Robotics Commons, Health Services Administration Commons, Library and Information Science Commons, Surgery Commons
Comments
This article is the author's final published version in Journal of Hand Surgery Global Online, Volume 7, Issue 2, March 2025, Pages 181-185.
The published version is available at https://doi.org/10.1016/j.jhsg.2024.11.013.
Copyright © 2024 THE AUTHORS