Trust in AI: why we should be designing for APPROPRIATE reliance

Abstract: Use of artificial intelligence in healthcare, such as machine learning-based predictive algorithms, holds promise for advancing outcomes, but few systems are used in routine clinical practice. Trust has been cited as an important challenge to meaningful use of artificial intelligence in clinical practice. Artificial intelligence systems often involve automating cognitively challenging tasks. Therefore, previous literature on trust in automation may hold important lessons for artificial intelligence applications in healthcare. In this perspective, we argue that informatics should take lessons from literature on trust in automation such that the goal should be to foster appropriate trust in artificial intelligence based on the purpose of the tool, its process for making recommendations, and its performance in the given context. We adapt a conceptual model to support this argument and present recommendations for future work.
Source: Journal of the American Medical Informatics Association - Category: Information Technology - Source Type: research