Statistical learning of syllable sequences as trajectories through a perceptual similarity space

Cognition. 2024 Mar;244:105689. doi: 10.1016/j.cognition.2023.105689. Epub 2024 Jan 13.

ABSTRACT

Learning from sequential statistics is a general capacity common across many cognitive domains and species. One form of statistical learning (SL) - learning to segment "words" from continuous streams of speech syllables in which the only segmentation cue is ostensibly the transitional (or conditional) probability from one syllable to the next - has been studied in great detail. Typically, this phenomenon is modeled as the calculation of probabilities over discrete, featureless units. Here we present an alternative model, in which sequences are learned as trajectories through a similarity space. A simple recurrent network coding syllables with representations that capture the similarity relations among them correctly simulated the result of a classic SL study, as did a similar model that encoded syllables as three-dimensional points in a continuous similarity space. We then used the simulations to identify a sequence of "words" that produces the reverse of the typical SL effect, i.e., part-words are predicted to be more familiar than Words. Results from two experiments with human participants are consistent with simulation results. Additional analyses identified features that drive differences in what is learned from a set of artificial languages that have the same transitional probabilities among syllables.

PMID: 38219453 | DOI: 10.1016/j.cognition.2023.105689
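The segmentation cue the abstract refers to is the forward transitional probability, TP(B | A) = count(AB) / count(A), which is high within words and dips at word boundaries. A minimal Python sketch of that computation, using hypothetical syllables in the style of classic SL stimuli (not the languages from this paper):

```python
from collections import Counter
import random

# Hypothetical 4-word artificial language, in the style of classic SL
# experiments; the syllables are illustrative, not this paper's stimuli.
words = ["tupiro", "golabu", "bidaku", "padoti"]

random.seed(0)
stream, prev = [], None
for _ in range(300):
    w = random.choice([x for x in words if x != prev])  # no immediate repeats
    prev = w
    stream.extend(w[i:i + 2] for i in range(0, 6, 2))   # split into syllables

# Forward transitional probability: TP(B | A) = count(A B) / count(A)
pair_counts = Counter(zip(stream, stream[1:]))
syl_counts = Counter(stream[:-1])

def tp(a, b):
    return pair_counts[(a, b)] / syl_counts[a] if syl_counts[a] else 0.0

print(tp("tu", "pi"))  # within-word transition -> 1.0
print(tp("ro", "go"))  # across-boundary transition -> roughly 1/3
```

A segmentation heuristic then posits word boundaries at local TP minima; a part-word straddles such a dip, which is why part-words are normally judged less familiar than words.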
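The paper's alternative model replaces discrete, featureless units with points in a perceptual similarity space fed to a simple recurrent network. The sketch below is an illustrative Elman-style network trained with one-step truncated backpropagation, with random 3-D coordinates standing in for real similarity structure; it is an assumption-laden stand-in, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_syl, dim_in, dim_h, lr = 12, 3, 16, 0.1

# Assumption: each syllable is a point in a 3-D similarity space.
# Random coordinates stand in for measured perceptual similarity.
coords = rng.normal(size=(n_syl, dim_in))

Wx = rng.normal(scale=0.1, size=(dim_h, dim_in))
Wh = rng.normal(scale=0.1, size=(dim_h, dim_h))
Wo = rng.normal(scale=0.1, size=(n_syl, dim_h))

def train_step(h_prev, x_id, y_id):
    """Predict the next syllable; update weights by one-step truncated BPTT."""
    x = coords[x_id]
    h = np.tanh(Wx @ x + Wh @ h_prev)      # Elman-style recurrent state
    logits = Wo @ h
    p = np.exp(logits - logits.max())
    p /= p.sum()                           # softmax over syllable identities
    loss = -np.log(p[y_id])                # surprisal = prediction error
    dlogits = p.copy()
    dlogits[y_id] -= 1.0                   # gradient of cross-entropy
    dpre = (Wo.T @ dlogits) * (1.0 - h ** 2)
    Wo -= lr * np.outer(dlogits, h)        # recurrence through h_prev
    Wx -= lr * np.outer(dpre, x)           # is truncated for brevity
    Wh -= lr * np.outer(dpre, h_prev)
    return h, loss

# Placeholder syllable-ID stream; in practice, feed the structured stream
# from the TP sketch above via a syllable-to-integer mapping, and compare
# surprisal at within-word positions vs. word boundaries.
ids = rng.integers(0, n_syl, size=2000)
h = np.zeros(dim_h)
for t in range(len(ids) - 1):
    h, loss = train_step(h, ids[t], ids[t + 1])
```

On structured input, the per-position surprisal profile yields the same boundary signal as TP dips, but because the inputs carry similarity structure, what the network learns can differ across languages with identical transitional probabilities, which is the contrast the abstract highlights.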