From lazy to rich to exclusive task representations in neural networks and neural codes

Curr Opin Neurobiol. 2023 Sep 25;83:102780. doi: 10.1016/j.conb.2023.102780. Online ahead of print.

ABSTRACT

Neural circuits, both in the brain and in "artificial" neural network models, learn to solve a remarkable variety of tasks, and there is a great current opportunity to use neural networks as models of brain function. Key to this endeavor is the ability to characterize the representations formed by both artificial and biological brains. Here, we investigate this potential through the lens of recently developed theory that characterizes neural networks as "lazy" or "rich" depending on the approach they use to solve tasks: lazy networks solve tasks by making small changes in connectivity, while rich networks solve tasks by significantly modifying weights throughout the network (including "hidden layers"). We further elucidate rich networks through the lens of compression and "neural collapse", ideas that have recently been of significant interest to neuroscience and machine learning. We then show how these ideas apply to a domain of increasing importance to both fields: extracting latent structures through self-supervised learning.

PMID: 37757585 | DOI: 10.1016/j.conb.2023.102780
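The lazy/rich distinction summarized above can be made concrete with the output-scaling construction commonly used in the lazy-training literature: the network's initialization-centered output is multiplied by a factor alpha and the loss is rescaled by 1/alpha^2, so a large alpha lets the network fit the data while its hidden-layer weights barely move, whereas alpha = 1 forces substantial hidden-weight (feature) changes. The PyTorch sketch below is only an illustration of that idea, not code from the reviewed paper; the toy data, architecture, and hyperparameters are arbitrary assumptions.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Toy regression problem: 200 inputs in 10 dimensions with a smooth nonlinear target.
    X = torch.randn(200, 10)
    y = torch.sin(X.sum(dim=1, keepdim=True))

    def relative_hidden_change(alpha, width=256, steps=2000, lr=0.1):
        """Train a one-hidden-layer network whose output is scaled by `alpha`
        (with the loss rescaled by 1/alpha**2) and return the relative
        Frobenius-norm change of its hidden-layer weight matrix."""
        net = nn.Sequential(nn.Linear(10, width), nn.Tanh(), nn.Linear(width, 1))
        with torch.no_grad():
            f0 = net(X).clone()                  # initial output, subtracted so the scaled model starts at zero
        w0 = net[0].weight.detach().clone()      # hidden-layer weights at initialization
        opt = torch.optim.SGD(net.parameters(), lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            pred = alpha * (net(X) - f0)                # output-scaled, initialization-centered prediction
            loss = ((pred - y) ** 2).mean() / alpha**2  # rescaled squared loss
            loss.backward()
            opt.step()
        return ((net[0].weight.detach() - w0).norm() / w0.norm()).item()

    # Large alpha -> "lazy": the task is fit with tiny changes to hidden connectivity.
    # alpha = 1   -> "rich": the hidden weights (the learned features) move substantially.
    print("rich (alpha=1):   relative hidden-weight change =", relative_hidden_change(alpha=1.0))
    print("lazy (alpha=100): relative hidden-weight change =", relative_hidden_change(alpha=100.0))

With these illustrative settings, the alpha = 100 run should report a much smaller relative change than the alpha = 1 run, which is the sense in which lazy solutions leave hidden-layer connectivity nearly untouched while rich solutions reshape it.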