AI cuts discharge note writing time by half: study

The LLM-based tool, developed by researchers from Yonsei University College of Medicine, automatically generates legally required discharge notes in the emergency department.
By Adam Ang

A large language model-based tool developed in South Korea has shown potential to halve the time emergency physicians spend writing discharge notes while improving documentation quality. 

The researchers, from Yonsei University College of Medicine, built the tool to automatically generate legally required discharge notes in the emergency department (ED).

HOW IT WORKS

The tool, called Your Knowledgeable Navigator of Treatment (Y‑KNOT), is an ED discharge note generation assistant built on Llama3-8B, Meta's compact, transformer-based text generation model.

The model was pretrained on a mix of general and medical-specific data and then fine-tuned on 592 emergency department cases collected between September 2022 and August 2023. Eligible records covered adult patients, as well as paediatric patients with nondisease conditions (such as trauma, poisoning, or burns), whose full ED and discharge documentation had been completed within 48 hours.

The discharge note generation tool is designed to handle two typical ED clinical workflows. For patients managed solely by emergency physicians, the system takes the initial ED record plus the prescription list as inputs. For patients requiring specialty consultations, it ingests the initial ED record along with the consultation request form.
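The input routing described above can be sketched as a simple selection rule. This is a minimal illustration, not the authors' implementation; all field names here are hypothetical.

```python
def build_generation_inputs(case: dict) -> list[str]:
    """Select source documents for discharge note generation.

    Pathway 1: patient managed solely by emergency physicians ->
               initial ED record + prescription list.
    Pathway 2: patient received a specialty consultation ->
               initial ED record + consultation request form.
    """
    if case.get("consultation_request"):
        return [case["ed_initial_record"], case["consultation_request"]]
    return [case["ed_initial_record"], case["prescription_list"]]


# A consultation case routes the consult request, not the prescription list
consult_case = {
    "ed_initial_record": "ED initial record text...",
    "consultation_request": "Consult request: orthopaedics...",
    "prescription_list": "ibuprofen 400 mg",
}
print(build_generation_inputs(consult_case))
```

The point of the branch is that the two pathways feed the model different context documents, so the generated note reflects who actually directed the patient's care.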

After a note is initially generated, a rule-based post-processing step inserts standardised patient education statements and streamlines prescription terminology, ensuring that the result aligns with legal and institutional requirements for discharge documentation. 
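A rule-based post-processing pass of this kind might look like the sketch below. The terminology map and education statement are illustrative placeholders; the real rules are institution-specific and not published in the article.

```python
# Illustrative lookup table for streamlining prescription terminology
TERM_MAP = {
    "apap": "acetaminophen",
    "po": "by mouth",
}

# Illustrative standardised patient education statement
EDUCATION_STATEMENT = "Return to the ED immediately if symptoms worsen."


def post_process(draft: str) -> str:
    """Apply deterministic rules after LLM generation:
    1) normalise prescription terminology via a lookup table;
    2) append a standardised patient education statement.
    """
    words = [TERM_MAP.get(w.lower(), w) for w in draft.split()]
    note = " ".join(words)
    return f"{note}\n\n{EDUCATION_STATEMENT}"


print(post_process("Take APAP PO twice daily"))
```

Keeping this step rule-based rather than model-based is what lets the system guarantee that every note ends with the legally required boilerplate, regardless of what the LLM produced.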

FINDINGS

In a randomised sequential evaluation of 50 test ED cases, six emergency physicians manually wrote 300 discharge notes and edited 300 LLM-generated notes in a virtual EHR interface. Three independent attending emergency physicians blindly assessed all notes.

Based on findings published in JAMA Network Open, LLM-assisted notes scored significantly higher than manually written notes across four metrics: completeness (4.23 vs. 4.03), correctness (4.38 vs. 4.20), conciseness (4.23 vs. 4.11), and clinical utility (4.17 vs. 3.85). 

Researchers noted that the median time to complete a discharge note dropped from 69.5 seconds with manual writing to 32.0 seconds with LLM assistance.

When compared with unedited LLM drafts, the edited LLM-generated notes were more concise and maintained equivalent clinical utility, though completeness and correctness scores were slightly lower.

WHY IT MATTERS

ED discharge notes, which capture treatment details, summarise test findings, and support transitions back to primary or community care, are critical for patient care continuity. Yet, in the fast-paced ED environment, documentation is often delayed, incomplete, or missing, contributing to prescription errors, missed follow-ups, and readmissions.  

Recent attempts to automate this work have mainly relied on proprietary LLMs, according to the Yonsei researchers, raising concerns about data privacy and cross-border transfers and demanding substantial computing resources for hospital-wide deployment.

By contrast, Y-KNOT runs as a lightweight, open-source model deployed entirely on hospital infrastructure, an approach the team said is designed to keep sensitive patient data from leaking while remaining practical to operate.

The findings suggest that the on-site AI assistant can cut discharge note writing time to less than half that of manual documentation, while producing notes that are more complete, correct, concise, and clinically useful than those written without AI assistance.

These potential benefits, the Yonsei researchers said, could ease the documentation burden in busy EDs and provide evidence for integrating locally hosted AI assistants into routine clinical workflows.

Still, the study authors emphasised that clinician review remains essential to catch rare confabulations. Further work is also needed to test the system in live settings and to evaluate patient comprehension and longer-term outcomes.

THE LARGER CONTEXT

Last year, the South Korean government announced investments in new technology projects to support decision-making in overwhelmed EDs across the country. 

A five-year, $17 million project involving five major hospitals, including Gangnam Severance Hospital (part of the Yonsei University Health System), is developing a clinical decision support system to predict patient deterioration. Two other projects developing AI-based emergency systems were announced under the Korean Advanced Research Projects Agency for Health (ARPA-H) initiative last year.