Artificial Intelligence and Clinical Decision Making: The New Nature of Medical Uncertainty

Estimates in a 1989 study indicated that physicians in the United States were unable to reach a diagnosis that accounted for their patients' symptoms in up to 90% of outpatient encounters. Many proponents of artificial intelligence (AI) see the current process of moving from clinical data gathering to medical diagnosis as limited by human analytic capability and expect AI to be a valuable tool for refining this process. The use of AI fundamentally calls into question the extent to which uncertainty in medical decision making is tolerated. Some perceive uncertainty as fundamentally undesirable and therefore hold that optimal decision making should minimize it. However, uncertainty cannot be reduced to zero; relative uncertainty can instead serve as a metric for weighing the likelihood that various diagnoses are correct and that treatments are appropriate. Here, the authors argue, using the experiences of 2 AI systems, IBM Watson on Jeopardy! and Watson for Oncology, as examples, that medical decision making based on relative uncertainty provides a better lens for understanding the application of AI to medicine than one aimed at minimizing uncertainty. This approach to uncertainty has significant implications for how health care leaders weigh the benefits and trade-offs of AI-assisted and AI-driven decision tools and, ultimately, how they integrate AI into medical practice.
Source: Academic Medicine