Can Hallucinated AI Help with Patient Engagement?

The following is a guest article by Michelle X. Zhou, Ph.D., Co-Founder and CEO at Juji.

ChatGPT and similar AI applications have given humankind a new tool. While this new tool is powerful, it is not always reliable. Hence the term "AI hallucinations" was coined to describe such unreliable AI behavior. Here is an example. I asked ChatGPT, "Who founded Juji?", the AI startup I co-founded. It hallucinated in its reply, getting several facts wrong, including my education: I received my Ph.D. from Columbia University, not Carnegie Mellon University. Moreover, Juji was co-founded by Dr. Huahai Yang and me, not by me alone.

As the medical community considers the role of generative AI, this raises the question: can a hallucinated AI still help with high-stakes applications such as patient engagement? The short answer is yes, if hallucinated AI is used appropriately.

Customize an AI Chatbot with Accurate Information

One cause of AI hallucinations is that applications like ChatGPT are trained on public data, which may lack accurate information. For example, if ChatGPT had access to Juji's proprietary data on its founding history and founders, it probably would not have made the mistake shown above. Similarly, reliance on trustworthy healthcare information is crucial for AI chatbots designed to engage patients. For example, an AI chatbot sitting on Mayo Clinic's website should be trained with validated, up-to-date healthcare information ...
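To make the idea of customizing a chatbot with accurate information more concrete, here is a minimal sketch, not Juji's actual implementation, of grounding answers in a small curated knowledge base rather than in whatever the base model memorized from public data. The knowledge entries, the simple keyword-overlap retriever, and the build_grounded_prompt helper are illustrative assumptions; a real deployment would use a vetted clinical knowledge base and a production retrieval system.

```python
# A minimal sketch (assumed, not Juji's actual implementation) of grounding a
# chatbot's answers in curated, validated facts instead of the model's public
# training data. All names and helpers here are illustrative.

# Curated facts the chatbot is allowed to answer from (drawn from the article).
KNOWLEDGE_BASE = [
    "Juji was co-founded by Dr. Michelle X. Zhou and Dr. Huahai Yang.",
    "Dr. Michelle X. Zhou received her Ph.D. from Columbia University.",
]

def retrieve(question: str, k: int = 1) -> list[str]:
    """Return the k knowledge-base entries sharing the most words with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda entry: len(q_words & set(entry.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(question: str) -> str:
    """Compose a prompt that tells the model to answer only from the retrieved facts."""
    context = "\n".join(retrieve(question))
    return (
        "Answer the question using ONLY the facts below. "
        "If the facts do not contain the answer, say you do not know.\n\n"
        f"Facts:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

if __name__ == "__main__":
    # The grounded prompt would then be sent to a language model of your choice;
    # the model call itself is omitted here.
    print(build_grounded_prompt("Who founded Juji?"))
```

The design point is simply that the model is constrained to answer from validated content, and to admit ignorance otherwise, which is the behavior a patient-facing chatbot needs.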