Large language models simplify radiology report impressions

Large language models (LLMs) can significantly simplify radiology report impressions, enhancing readability for patients, according to research published March 26 in Radiology.

In a study involving 750 radiology reports, a team of researchers from Yale University tested four different LLMs, including ChatGPT, Bard, and Bing, and found that all four were able to significantly simplify report impressions. Model performance differed, however, based on the wording of the prompt used.

“Our study highlights how radiology reports, which are complex medical documents that implement language and style above the college graduate reading level, can be simplified by LLMs,” wrote the team led by Rushabh Doshi. “Patients may use publicly available LLMs at home to simplify their reports, or medical practices could adapt automatic simplification into their workflow.”

Because the complex medical terminology in radiology reports can confuse patients or induce anxiety, the researchers sought to assess whether LLMs could make these reports more readable. They gathered 150 CT, 150 MRI, 150 ultrasound, and 150 diagnostic mammography reports from the Medical Information Mart for Intensive Care (MIMIC-IV) database.

Next, they queried ChatGPT-3.5, ChatGPT-4, Bing (Microsoft), and Bard (Google), now known as Gemini, using three different prompts:

“Simplify this radiology report.”

“I am a patient. Simplify this radiology report.”

“Simplify this radiology report at the 7...
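The study's readability claim, that report impressions sit above a college graduate reading level, is the kind of thing typically measured with a grade-level formula such as Flesch-Kincaid. As a minimal sketch (not the authors' actual methodology, and using invented example report text), one could score an impression before and after simplification like this:

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count contiguous vowel groups; minimum one syllable.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    # Flesch-Kincaid grade level:
    # 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)

# Hypothetical impression text, not drawn from the MIMIC-IV reports in the study.
original = ("There is a heterogeneously enhancing lesion in the hepatic parenchyma, "
            "concerning for malignancy. Recommend multiphasic contrast-enhanced MRI "
            "for further characterization.")
simplified = ("The scan shows a spot on the liver that could be cancer. "
              "Doctors suggest another scan to learn more.")

print(f"original grade level:   {flesch_kincaid_grade(original):.1f}")
print(f"simplified grade level: {flesch_kincaid_grade(simplified):.1f}")
```

The heuristic syllable counter is rough, so the scores are approximate; the point is only that radiology-style prose scores well above the simplified rewrite.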
Source: AuntMinnie.com Headlines - Category: Radiology - Tags: Artificial Intelligence - Source Type: news