The role of trust in the use of artificial intelligence for chemical risk assessment

Regul Toxicol Pharmacol. 2024 Feb 23:105589. doi: 10.1016/j.yrtph.2024.105589. Online ahead of print.

ABSTRACT

Risk assessment of chemicals is a time-consuming process and must be optimized so that all chemicals are evaluated and regulated in a timely manner. This transition could be stimulated by valuable applications of in silico Artificial Intelligence (AI)/Machine Learning (ML) models. However, the implementation of AI/ML models in risk assessment is lagging behind. Most AI/ML models are considered 'black boxes' that lack mechanistic explainability, leaving risk assessors with insufficient trust in their predictions. Here, we explore 'trust' as an essential factor towards regulatory acceptance of AI/ML models. We provide an overview of the elements of trust, including technical and beyond-technical aspects, and highlight the elements that risk assessors consider most important for building trust. The results yield recommendations for risk assessors and computational modelers for the future development of AI/ML models:

1) Keep models simple and interpretable;
2) Offer transparency in the data and data curation;
3) Clearly define and communicate the scope/intended purpose;
4) Define adoption criteria;
5) Make models accessible and user-friendly;
6) Demonstrate the added value in practical settings;
7) Engage in interdisciplinary settings.

These recommendations should ideally be acknowledged in future developments to stimulate trust and acceptance of AI/ML models for ...