ChatGPT tested on nuclear medicine cases

ChatGPT shows potential for diagnosing nuclear medicine cases, yet it needs further development before it can be implemented in practice, according to a study presented November 28 at RSNA in Chicago.

Gillean Cortes, DO, a resident at the University of California, Irvine, presented a study that put ChatGPT-3.5 and ChatGPT-4 to the test on nuclear medicine differential diagnosis cases transcribed from two textbooks. The two chatbot versions achieved accuracies of 60% and 70%, respectively, but were prone to “hallucinations,” Cortes noted.

“While ChatGPT has shown some potential in generating accurate diagnoses, this technology requires further development before it can be implemented into clinical and educational practice,” Cortes said.

In the study, Cortes and colleagues culled a sample of 50 cases specific to nuclear medicine imaging from the textbooks “Top 3 Differentials in Radiology” and “Top 3 Differentials in Nuclear Medicine.” The researchers converted the cases into standardized prompts that contained purely descriptive language and queried ChatGPT-3.5 and ChatGPT-4 for the most likely diagnosis, the top three differential diagnoses, and corresponding explanations and references from the medical literature. The large language model’s output diagnoses were analyzed for accuracy based on comparisons with the original literature, while reliability was assessed through manual verification of the generated explanations and citations. ChatGPT-3.5 generated the top diagno...
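The workflow described above, wrapping each descriptive case vignette in a standardized query and then scoring the model's top diagnosis against the textbook answer, can be sketched roughly as follows. This is an illustrative sketch only: the study's actual prompt wording and scoring criteria were not published, and the function names here are hypothetical.

```python
# Hypothetical sketch of the study's prompting-and-scoring protocol.
# The exact prompt template and matching rules used by Cortes et al.
# are assumptions, not the published method.

def build_prompt(case_description: str) -> str:
    """Wrap a purely descriptive case vignette in a standardized query."""
    return (
        "Based on the following nuclear medicine imaging findings, provide "
        "the most likely diagnosis, the top three differential diagnoses, "
        "and supporting explanations with references from the medical "
        "literature.\n\n"
        f"Findings: {case_description}"
    )


def top_diagnosis_accuracy(predicted: list[str], reference: list[str]) -> float:
    """Fraction of cases where the model's top diagnosis matches the
    textbook answer (here, a naive case-insensitive string match)."""
    matches = sum(
        p.strip().lower() == r.strip().lower()
        for p, r in zip(predicted, reference)
    )
    return matches / len(reference)
```

In practice the comparison against the original literature was done by the researchers manually; an exact-string match like the one above is only a stand-in for that judgment. Over 50 cases, 30 and 35 correct top diagnoses would yield the reported 60% and 70% figures.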
Source: AuntMinnie.com Headlines. Category: Radiology. Subspecialties: Nuclear Radiology. Source Type: news