ChatGPT-4 not reliable in cancer patient messaging

ChatGPT-4 is not a reliable source for answering patients' questions about cancer, according to a study published April 24 in The Lancet Digital Health. Researchers led by Danielle Bitterman, MD, of Mass General Brigham in Boston, MA, found that GPT-4 generated messages to patients that were acceptable without any additional editing by radiation oncologists only 58% of the time, and that 7% of the responses would have been unsafe, in the radiation oncologists' judgment, if left unedited.

"Taking the collective evidence as a whole, I would still consider generative AI for patient messaging at its current stage to be experimental," Bitterman told AuntMinnie.com. "It is not clear yet whether these models are effective at addressing clinician burnout, and more work is needed to establish their safety when used with a human in the loop."

Medical specialties, including radiology and radiation oncology, continue to explore the potential of large language models such as ChatGPT. Proponents say that ChatGPT and similar models could help ease administrative and documentation responsibilities, which could in turn mitigate physician burnout. The researchers noted that electronic health record (EHR) vendors have adopted generative AI algorithms to help clinicians draft messages to patients, but they also pointed out that the efficiency, safety, and clinical impact of such use are not well established.

Bitterman and colleagues used GPT-4 to generate 100 scenar...
Source: AuntMinnie.com Headlines (Radiology). Tags: Radiation Oncology/Therapy, Advanced Visualization. Source Type: news