From task structures to world models: what do LLMs know?

Trends Cogn Sci. 2024 Mar 4:S1364-6613(24)00035-4. doi: 10.1016/j.tics.2024.02.008. Online ahead of print.

ABSTRACT

In what sense does a large language model (LLM) have knowledge? We answer by granting LLMs 'instrumental knowledge': knowledge gained by using next-word generation as an instrument. We then ask how instrumental knowledge is related to the ordinary, 'worldly knowledge' exhibited by humans, and explore this question in terms of the degree to which instrumental knowledge can be said to incorporate the structured world models of cognitive science. We discuss ways LLMs could recover degrees of worldly knowledge and suggest that such recovery will be governed by an implicit, resource-rational tradeoff between world models and tasks. Our answer to this question extends beyond the capabilities of a particular AI system and challenges assumptions about the nature of knowledge and intelligence.

PMID: 38443199 | DOI: 10.1016/j.tics.2024.02.008