Introduction to AI in Medical Coding
A recent study conducted by researchers at the Mount Sinai Health System has revealed a promising advancement in the use of artificial intelligence (AI) for medical coding. By incorporating a simple retrieval step, AI systems can significantly improve their accuracy in assigning diagnostic codes, potentially surpassing human performance. This breakthrough, published in NEJM AI, could streamline administrative processes in healthcare, reduce billing errors, and enhance the quality of patient records.
The Challenge of Accurate Medical Coding
In the United States, physicians spend considerable time each week assigning International Classification of Diseases (ICD) codes, the alphanumeric strings used to describe medical conditions ranging from minor injuries to severe illnesses. Despite the capabilities of large language models like ChatGPT, these AI systems often struggle to assign these codes accurately. The study aimed to address this challenge by implementing a “lookup-before-coding” method.
The “Lookup-Before-Coding” Method
The approach prompts the AI to first describe a diagnosis in plain language; the AI then selects the most appropriate code from a list of real-world examples. This extra step improves the AI’s accuracy, reduces errors, and yields performance comparable to or better than that of human coders.
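The study does not publish its prompts or code, but the two-step flow it describes can be sketched roughly as below. Here `call_llm` is a placeholder for whatever model is being queried, and the prompt wording, function names, and candidate format are illustrative assumptions rather than the study’s actual implementation.

```python
# Minimal sketch of the "lookup-before-coding" flow described above.
# `call_llm` is a stand-in for any chat-completion call; prompts and
# structure are assumptions, not the study's published method.

def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM call (e.g., an API request)."""
    raise NotImplementedError("Wire this to your model of choice.")

def lookup_before_coding(physician_note: str, retrieve_candidates) -> str:
    # Step 1: ask the model for a plain-language diagnostic description.
    description = call_llm(
        "Summarize the primary diagnosis in this note in one plain-language "
        f"sentence, without any codes:\n\n{physician_note}"
    )

    # Step 2: retrieve real-world ICD descriptions similar to that summary.
    candidates = retrieve_candidates(description)  # list of (code, description)

    # Step 3: ask the model to choose the best match from the retrieved list.
    menu = "\n".join(f"{code}: {desc}" for code, desc in candidates)
    choice = call_llm(
        f"Diagnosis: {description}\n\nCandidate ICD codes:\n{menu}\n\n"
        "Reply with the single best-matching code."
    )
    return choice.strip()
```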
Study Methodology and Findings
The research team analyzed 500 emergency department patient visits at Mount Sinai Health System hospitals. For each case, physician notes were input into nine different AI models, including smaller open-source systems. Each model first generated an ICD diagnostic description. A retrieval step then matched that description against a database of over one million hospital records to find 10 similar ICD descriptions, taking into account how frequently each diagnosis occurred. In the final step, the model used this shortlist to select the most accurate ICD description and code.
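The paper’s retrieval system is not released, but the idea of matching a generated description against a large bank of historical ICD descriptions and weighting candidates by how common each diagnosis is can be approximated as in the sketch below, which is one possible implementation of the `retrieve_candidates` step from the earlier sketch. The token-overlap scoring and the 0.1 frequency weight are arbitrary illustrative choices; a production system would more likely use embedding similarity.

```python
from collections import Counter

# Hypothetical record bank of (icd_code, icd_description) pairs drawn from
# historical visits. In the study this came from over a million hospital
# records; here it is just a stand-in structure.
RecordBank = list[tuple[str, str]]

def retrieve_candidates(description: str, bank: RecordBank, k: int = 10):
    """Return the k ICD (code, description) pairs most similar to `description`,
    favoring diagnoses that occur more often in the bank."""
    query_tokens = set(description.lower().split())
    code_counts = Counter(code for code, _ in bank)  # frequency of each diagnosis
    total = sum(code_counts.values())

    scored = {}
    for code, desc in bank:
        overlap = len(query_tokens & set(desc.lower().split()))
        similarity = overlap / (len(query_tokens) or 1)
        frequency = code_counts[code] / total
        # Blend text similarity with how common the diagnosis is; the 0.1
        # weight is an arbitrary illustrative choice, not the study's.
        score = similarity + 0.1 * frequency
        if code not in scored or score > scored[code][0]:
            scored[code] = (score, desc)

    top = sorted(scored.items(), key=lambda item: item[1][0], reverse=True)[:k]
    return [(code, desc) for code, (_, desc) in top]
```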
Performance Evaluation
The coding results were independently evaluated by emergency physicians and two separate AI systems, without knowledge of whether the codes were generated by AI or human clinicians. Across the board, models utilizing the retrieval step outperformed those that did not, and in many cases, even surpassed physician-assigned codes. Notably, even smaller open-source models demonstrated improved performance when allowed to “look up” examples.
Implications for Healthcare
“This is about smarter support, not automation for automation’s sake,” stated co-corresponding senior author Girish N. Nadkarni, MD, MPH. The retrieval-enhanced method is designed to support human oversight rather than replace it. Although not yet approved for billing, the method shows potential for clinical use, such as suggesting codes in electronic records or flagging errors before billing.
Future Integration and Expansion
The researchers are currently integrating this method into Mount Sinai’s electronic health records system for pilot testing. They aim to expand its application to other clinical settings and include secondary and procedural codes in future iterations. The ultimate goal is to relieve physicians of administrative burdens, allowing them more time for direct patient care.
Conclusion
The integration of AI in medical coding, particularly with the retrieval step, holds the potential to transform patient care by improving efficiency and accuracy. As David L. Reich, MD, Chief Clinical Officer of the Mount Sinai Health System, noted, “Using AI in this way improves our ability to provide attentive and compassionate care by spending more time with patients.” This advancement strengthens the foundation of healthcare systems, benefiting clinicians, patients, and health systems of all sizes.
🔗 **Source:** https://medicalxpress.com/news/2025-09-adding-lookup-ai-assigning-medical.html