Scientists develop an AI monitoring agent to detect and stop harmful outputs

A team of researchers from artificial intelligence (AI) firm AutoGPT, Northeastern University, and Microsoft Research has developed a tool that monitors large language models (LLMs) for potentially harmful outputs and prevents them from executing. The agent is described in a preprint research…
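The teaser gives no implementation details, but the pattern it describes (a supervisor model that audits an agent's proposed actions and blocks flagged ones before they run) can be sketched in a few lines. The Python below is a minimal, hypothetical illustration of that pattern only; the function names (make_supervised_agent, monitor, execute) and the YES/NO auditing prompt are assumptions, not taken from the paper.

```python
from typing import Callable


def make_supervised_agent(
    agent: Callable[[str], str],
    monitor: Callable[[str], str],
    execute: Callable[[str], str],
) -> Callable[[str], str]:
    """Wrap an LLM agent so a separate monitor model audits each
    proposed action and blocks flagged ones before they execute."""

    def step(task: str) -> str:
        action = agent(task)  # agent proposes an action (e.g. a shell command)
        verdict = monitor(
            "Does executing this agent action risk harm? Answer YES or NO.\n"
            f"Action:\n{action}"
        )
        if verdict.strip().upper().startswith("YES"):
            # Monitor flagged the action: stop it before execution.
            return f"[blocked] monitor flagged action: {action!r}"
        return execute(action)  # only unflagged actions actually run

    return step


# Toy stand-ins so the sketch runs without any real model:
if __name__ == "__main__":
    agent = lambda task: "rm -rf /tmp/workdir" if "clean" in task else "ls"
    monitor = lambda prompt: "YES" if "rm -rf" in prompt else "NO"
    execute = lambda action: f"executed: {action}"

    run = make_supervised_agent(agent, monitor, execute)
    print(run("clean up the workspace"))  # blocked by the monitor
    print(run("list files"))              # executed: ls
```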
Source: Reuters Health