Is ChatGPT too ‘smart’ for its own good?

ChatGPT shows promise in answering questions from radiology patients, yet the complexity of its responses may complicate its use as a patient education tool, according to a group at the University of Pennsylvania.

Researchers led by Emile Gordon, MD, tested ChatGPT on common imaging-related questions and further examined the effect of asking the chatbot to simplify its responses. While accurate, ChatGPT's responses were uniformly complex, they found.

"None of the responses reached the eighth-grade readability recommended for patient-facing materials," the group wrote. The study was published October 18 in the Journal of the American College of Radiology.

The ACR has prioritized effective patient communication in radiology and encourages its improvement, the authors wrote. ChatGPT has garnered attention as a potential tool in this regard. For instance, studies suggest that it could be useful for answering questions about breast cancer screening, the group noted. However, its role in addressing patients' imaging-related questions remains unexplored, they added.

To that end, the researchers asked ChatGPT 22 imaging-related questions deemed important to patients, covering safety, the radiology report, the procedure, preparation before imaging, and the meaning of terms and medical staff. The questions were posed with and without this follow-up prompt: "Provide an accurate and easy-to-understand response that is suited for an average person." Four experienced radiologists evaluated ...
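The study does not specify in this excerpt which readability metric was used; the Flesch-Kincaid grade level, as implemented in the Python textstat package, is one common proxy for the eighth-grade threshold the authors cite. A minimal sketch of how a chatbot response might be checked against that target (the response text below is illustrative, not from the study):

# pip install textstat
import textstat

# Hypothetical model response to a patient imaging question;
# not taken from the study's actual data.
response = (
    "Gadolinium-based contrast agents are intravenous pharmaceuticals that "
    "enhance the conspicuity of pathological tissue on magnetic resonance "
    "imaging by shortening T1 relaxation times."
)

# Flesch-Kincaid estimates the U.S. school grade needed to understand the text.
grade = textstat.flesch_kincaid_grade(response)
print(f"Flesch-Kincaid grade level: {grade:.1f}")
print("Meets eighth-grade target" if grade <= 8 else "Exceeds eighth-grade target")

In practice, a response like the one above scores well beyond an eighth-grade level, which is consistent with the group's finding that ChatGPT's answers, while accurate, were uniformly complex even when prompted to simplify.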
Source: AuntMinnie.com Headlines | Tags: Imaging Informatics, Artificial Intelligence