Why And How To Regulate ChatGPT-Like Large Language Models In Healthcare?

Large Language Models (LLMs), such as ChatGPT or Bard, hold immense promise but also pose significant challenges for healthcare and medicine. To unlock their enormous benefits, we must ensure their safe application in an environment where lives are at stake. In other words, our task is to establish a robust, ethical framework for these generative AI models – without drawing the boundaries so tightly that they stifle innovation. Our latest paper, “The Imperative for Regulatory Oversight of Large Language Models (or Generative AI) in Healthcare”, published with Dr Eric Topol in Nature’s npj Digital Medicine, analyses the challenges and possibilities of regulating LLMs to make them accessible for healthcare use.

We need brand new kinds of regulations

Regulators must cover myriad scenarios and perspectives, since LLMs differ significantly from the AI algorithms and deep learning methods already covered by existing regulations. A range of distinct characteristics sets them apart, including their:

- scale and complexity – LLMs utilize billions of parameters, resulting in unprecedented complexity. Tokenization, their basic “processing” method, is currently not covered by healthcare regulators
- broad applicability – LLMs have unprecedented versatility compared to specialized deep learning models. As they can be used in sectors ranging from finance to healthcare, one-size-fits-all regulations will not suffice
- real-time a...