Dissociating language and thought in large language models

Title: Dissociating language and thought in large language models
Publication Type: Journal Article
Year of Publication: 2024
Authors: Mahowald, K, Ivanova, AA, Blank, IA, Kanwisher, N, Tenenbaum, JB, Fedorenko, E
Journal: Trends in Cognitive Sciences
Volume: 28
Issue: 6
Pagination: 517–540
Date Published: 03/2024
ISSN: 1364-6613
Abstract:

Large language models (LLMs) have come closest among all models to date to mastering human language, yet opinions about their linguistic and cognitive capabilities remain split. Here, we evaluate LLMs using a distinction between formal linguistic competence (knowledge of linguistic rules and patterns) and functional linguistic competence (understanding and using language in the world). We ground this distinction in human neuroscience, which has shown that formal and functional competence rely on different neural mechanisms. Although LLMs are surprisingly good at formal competence, their performance on functional competence tasks remains spotty and often requires specialized fine-tuning and/or coupling with external modules. We posit that models that use language in human-like ways would need to master both of these competence types, which, in turn, could require the emergence of separate mechanisms specialized for formal versus functional linguistic competence.

URL: https://linkinghub.elsevier.com/retrieve/pii/S1364661324000275
DOI: 10.1016/j.tics.2024.01.011
Short Title: Trends in Cognitive Sciences

CBMM Relationship: CBMM Related