Can we trust ChatGPT to get the basics right?

by MATTHEW HOLT

Eric Topol has a piece in his excellent newsletter Ground Truths today about AI in medicine. He refers to the paper he and colleagues wrote in Nature about Generalist Medical Artificial Intelligence (GMAI, the medical version of generalist AI). It covers the latest in LLMs (large language models). These differ from previous AI, which was essentially focused on one problem; in medicine that mostly meant radiology. Now you can feed in different types of information and get lots of different answers.

Eric and colleagues concluded their paper with this statement: “Ultimately, GMAI promises unprecedented possibilities for healthcare, supporting clinicians amid a range of essential tasks, overcoming communication barriers, making high-quality care more widely accessible, and reducing the administrative burden on clinicians to allow them to spend more time with patients.” But he does note that “there are striking liabilities and challenges that have to be dealt with. The ‘hallucinations’ (aka fabrications or BS) are a major issue, along with bias, misinformation, lack of validation in prospective clinical trials, privacy and security, and deep concerns about regulatory issues.”

What he’s saying is that there are unexplained errors in LLMs, and therefore we need a human in the loop to make sure the AI isn’t getting stuff wrong. I myself had a striking example of this on a topic that was purely a simple calculation about a wel...