Like a storm front moving across a well-kept garden, meaning in the brain rarely arrives all at once. It gathers, shifts, prunes, and settles before you quite notice what has happened. Neuroscientists have been trying to catch those patterns for decades, politely insisting that words are not just sounds with good publicists. Now James Fodor and Shinsuke Suzuki review a fast-growing corner of that effort, where brain imaging meets computational semantics - or, if you prefer, where philosophers of meaning are forced to share a table with people who turn words into vectors and then ask the cortex what it thinks [1].
The Brain, Caught Reading Its Own Mail
The basic trick is less mystical than it sounds. Computational semantics represents words or sentences as points in a mathematical space. "Cat" ends up closer to "dog" than to "tax audit," which is comforting for everyone except the tax auditor. These vector-based models grew out of the old linguistic hunch that a word is known by the company it keeps. Words that show up in similar contexts often carry similar meaning. Crude at first glance, yes. Also weirdly powerful.
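To make the trick concrete, here is a minimal sketch of the distributional idea. The vectors and the context dimensions are invented for illustration (real models learn thousands of dimensions from huge corpora), but the geometry works the same way: words with similar co-occurrence profiles end up pointing in similar directions, and cosine similarity measures how similar.

```python
import math

# Toy 4-dimensional "context count" vectors (numbers invented for
# illustration): each dimension counts co-occurrence with a context
# word, e.g. ["pet", "purr", "money", "paperwork"].
vectors = {
    "cat":       [9.0, 8.0, 0.0, 0.0],
    "dog":       [9.0, 1.0, 0.0, 0.0],
    "tax audit": [0.0, 0.0, 7.0, 9.0],
}

def cosine(u, v):
    """Cosine similarity: ~1.0 means same direction, 0.0 means unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

cat_dog = cosine(vectors["cat"], vectors["dog"])
cat_audit = cosine(vectors["cat"], vectors["tax audit"])
print(f"cat~dog: {cat_dog:.2f}   cat~'tax audit': {cat_audit:.2f}")
```

With these made-up counts, "cat" and "dog" come out far more similar than "cat" and "tax audit", which is the entire point of the representation: similarity of meaning falls out of similarity of use.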
Fodor and Suzuki review 57 fMRI studies that used these models to probe how the brain handles meaning [1]. The broad finding is that the models often line up with neural activity rather well. If you feed a language model a story and feed a person the same story, some of the model’s internal geometry can help predict what parts of the person’s language network will light up. The brain is not literally running ChatGPT in a little waistcoat, but the overlap is too large to dismiss as a parlor trick.
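The typical analysis behind that overlap is an encoding model: fit a regularized linear map from the language model's embeddings to measured brain responses, then test whether it predicts responses to held-out stimuli. The sketch below simulates one "voxel" whose response is a noisy linear function of toy embeddings, so all the data and dimensions are invented; it only illustrates the fit-then-predict logic, not any particular study's pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (all numbers invented): 200 stimuli, each with a
# 10-dimensional embedding, and one simulated voxel whose response is a
# noisy linear readout of those embeddings.
n_stim, n_dims = 200, 10
embeddings = rng.normal(size=(n_stim, n_dims))   # model's stimulus vectors
true_weights = rng.normal(size=n_dims)           # unknown "neural tuning"
voxel = embeddings @ true_weights + rng.normal(scale=0.5, size=n_stim)

# Split into train/test, as encoding studies do, so the model is scored
# on stimuli it never saw during fitting.
train, test = slice(0, 150), slice(150, 200)

# Ridge regression in closed form: w = (X^T X + lambda I)^-1 X^T y
lam = 1.0
X, y = embeddings[train], voxel[train]
w = np.linalg.solve(X.T @ X + lam * np.eye(n_dims), X.T @ y)

# Score: correlation between predicted and actual held-out responses.
pred = embeddings[test] @ w
r = np.corrcoef(pred, voxel[test])[0, 1]
print(f"held-out prediction correlation: r = {r:.2f}")
```

In real studies this is run per voxel (or per sensor) across the whole brain, and the map of where prediction succeeds is itself the finding: regions whose activity the embedding geometry can anticipate.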
Tiny Gossip Networks, Larger Problems
That said, the review is not a victory lap. It is more like a stern gardener pointing out that half the flowerbeds have mislabeled stakes. Fodor and Suzuki argue that the field has methodological inconsistencies all over the place - different stimuli, different preprocessing pipelines, different ways of choosing embeddings, different evaluation metrics, and not always enough standardization to make one study sit neatly beside the next [1]. The result is a literature that can look more coherent from fifty feet away than it does up close.
This matters because semantics is already slippery. The brain does not store meaning in one tidy filing cabinet labeled "noun thoughts." Semantic cognition seems to rely on distributed systems that combine stored knowledge with context-sensitive control [2]. A word like "bank" can mean either a pleasant riverside spot or a place that sends you emails beginning with "Dear Valued Customer," and your brain sorts that out almost instantly. That disambiguation is a live negotiation, not a lookup.
Recent work has pushed this model-brain comparison beyond single words. Goldstein and colleagues found that human brains and autoregressive deep language models share several broad computational habits during natural speech, including context-sensitive prediction and surprise when the next word arrives [3]. Caucheteux and colleagues argued that brain language processing looks hierarchical across timescales, with broader contextual prediction engaging higher-level regions [4]. Tuckute and colleagues went a step further and used a GPT-based encoding model to identify sentences that would drive or suppress activity in the human language network - which sounds a little like teaching an orchestra by mailing strongly worded haikus directly to the violins [5].
Why Anyone Outside a Scanner Should Care
If this line of research keeps holding up, it could sharpen theories of how meaning is represented, not just where language "lives." It could improve brain-computer interfaces for people who cannot easily speak. It could also make AI models more scientifically useful as test benches for language hypotheses, rather than merely expensive autocomplete machines with excellent self-esteem. The interesting question is not whether these models are "like" the brain in some grand metaphysical way, but which pieces of language processing they capture well, and which pieces they miss entirely [1,3-5].
And they do miss things. Human meaning is grounded in perception, memory, action, social context, and the inconvenient fact that we inhabit bodies. A language model can tell you that cinnamon is warm and autumnal; your brain also knows what it smells like, what it tastes like, and which relative overdid it at Thanksgiving in 2009. Recent reviews of semantic cognition still emphasize that meaning in the brain is built from distributed representational systems plus control systems that shape knowledge for the moment at hand [2,6]. Elegant vectors help. They are not the whole orchard.
So the upshot of this review is pleasantly sober. Computational semantics is giving neuroscience better tools to study meaning, especially under natural language conditions that older experiments often struggled to capture. But the field needs cleaner methods, stronger comparisons, and fewer opportunities for everyone to announce they have found "the meaning area" after one especially lucky scan. The brain remains a notoriously fussy landscape. Still, these vector models are finally giving us better boots.
References
1. Fodor J, Suzuki S. Using computational semantics to study meaning in the brain. Neurosci Biobehav Rev. 2025;174:106514. DOI: https://doi.org/10.1016/j.neubiorev.2025.106514
2. Jackson RL, Rogers TT, Lambon Ralph MA. Reverse-engineering the cortical architecture for controlled semantic cognition. Nat Hum Behav. 2021;5:774-786. DOI: https://doi.org/10.1038/s41562-020-01034-z
3. Goldstein A, Zada Z, Buchnik E, et al. Shared computational principles for language processing in humans and deep language models. Nat Neurosci. 2022;25(3):369-380. DOI: https://doi.org/10.1038/s41593-022-01026-4. PMCID: https://pmc.ncbi.nlm.nih.gov/articles/PMC8904253/
4. Caucheteux C, Gramfort A, King JR. Evidence of a predictive coding hierarchy in the human brain listening to speech. Nat Hum Behav. 2023;7(3):430-441. DOI: https://doi.org/10.1038/s41562-022-01516-2. PMCID: https://pmc.ncbi.nlm.nih.gov/articles/PMC10038805/
5. Tuckute G, Sathe A, Srikant S, et al. Driving and suppressing the human language network using large language models. Nat Hum Behav. 2024;8:544-561. DOI: https://doi.org/10.1038/s41562-023-01783-7
6. Diveica V. How the brain constructs and uses meaning. Nat Rev Psychol. 2025;4:682. DOI: https://doi.org/10.1038/s44159-025-00494-2