April 17, 2026

Your Brain Might Not Be Finishing Your Sentences After All

The shortest version of this story: a new study suggests that what neuroscientists thought was the brain predicting upcoming words might just be a statistical quirk baked into language itself. The interesting version takes a bit longer.

The Hype Was Real (Maybe Too Real)

Over the past few years, a genuinely exciting idea took hold in neuroscience: your brain is basically autocomplete on steroids. The theory goes that as you listen to someone talk, your neural machinery is constantly running ahead, generating predictions about the next word before it even hits your ears. Think of it as your brain playing a perpetual game of Jeopardy!, buzzing in before the question is finished.

This wasn't just armchair speculation. Researchers used large language models - the same kind of AI that powers chatbots - to create mathematical representations of words (called embeddings) and then checked whether these could predict brain activity before a word was actually spoken. And the results looked amazing. Two findings were heralded as smoking-gun evidence: brain signals could be predicted before word onset, and this effect got stronger for more predictable words (Goldstein et al., 2022; Heilbron et al., 2022).
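For readers who like to see the moving parts, here is a minimal sketch of what such a pre-onset encoding analysis looks like. Everything below is simulated placeholder data; the variable names, array shapes, and the ridge-regression choice are illustrative assumptions on my part, not details from the original studies.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_words, emb_dim = 2000, 300

# Placeholder stand-ins for the real inputs: one language-model embedding per
# word, and the neural signal averaged over a window *before* each word's onset.
embeddings = rng.standard_normal((n_words, emb_dim))
pre_onset_brain = rng.standard_normal(n_words)

# Fit a linear encoding model on a training split, then score it by correlating
# predicted with observed pre-onset activity on held-out words.
X_tr, X_te, y_tr, y_te = train_test_split(
    embeddings, pre_onset_brain, test_size=0.2, random_state=0)
encoder = Ridge(alpha=1.0).fit(X_tr, y_tr)
r, _ = pearsonr(encoder.predict(X_te), y_te)
print(f"pre-onset encoding score: r = {r:.3f}")  # ~0 here, since this toy data is pure noise
```

In the published studies, the response would be real recordings (ECoG or MEG activity before each word's onset, across many electrodes and time lags) and the embeddings would come from an actual language model, but the logic of the fit-and-correlate procedure is the same.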

Neuroscience Twitter (yes, that's a thing) was buzzing. The brain really does work like GPT!

Plot Twist: What If It's Just Language Being Language?

Enter Inés Schönmann and colleagues from the Donders Institute and University of Amsterdam, who asked an inconvenient but brilliant question: what if the encoding models - the regressions that map word embeddings onto brain activity - are just exploiting the fact that words in natural language aren't random? (Schönmann et al., 2026).

Here's the thing about language that makes this tricky. Words don't appear in isolation. "Peanut" and "butter" hang out together. "The cat sat on the..." - you already know it's "mat." Adjacent words share massive amounts of statistical information. So when a model trained on word embeddings seems to "predict" brain activity before a word arrives, it might not be detecting the brain's crystal ball. It might just be picking up on the fact that word N+1 is statistically similar to word N.
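To make that similarity concrete, here is a pure-simulation toy (nothing below comes from real text or from the paper): if each "word" vector carries a trace of the previous word's features, adjacent embeddings come out far more similar than randomly paired ones.

```python
import numpy as np

rng = np.random.default_rng(1)
n_words, emb_dim = 5000, 300

# Toy "embeddings": each word's vector mixes its own features with a trace of
# the previous word's features (the 0.7 weight is an arbitrary choice).
own = rng.standard_normal((n_words, emb_dim))
embeddings = own + 0.7 * np.roll(own, 1, axis=0)

def mean_cosine(a, b):
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return float(np.mean(np.sum(a * b, axis=1)))

adjacent = mean_cosine(embeddings[:-1], embeddings[1:])
shuffled = mean_cosine(embeddings[:-1], embeddings[1:][rng.permutation(n_words - 1)])
print(f"adjacent pairs: {adjacent:.2f}   shuffled pairs: {shuffled:.2f}")
```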

To test this, the researchers did something clever. They ran the exact same encoding analysis on "passive control systems" - things that represent the stimulus but definitely cannot predict the future. Specifically, they tested word embeddings themselves (just the mathematical word representations, no brain involved) and raw speech acoustics (the physical sound waves).
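Here is a rough sketch of that control logic on the same kind of simulated data - an assumption-laden toy of mine, not the paper's actual pipeline. The "response" is a signal that merely tracks the current word's features (a stand-in for the embeddings-as-response or acoustics controls), so it cannot possibly know what comes next; yet the same regression finds "pre-onset encoding".

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n_words, emb_dim = 2000, 300

# Same toy embeddings as before: adjacent words share features.
own = rng.standard_normal((n_words, emb_dim))
embeddings = own + 0.7 * np.roll(own, 1, axis=0)

# "Passive control" response: a signal that simply tracks the current word's
# features (think acoustics, or the embedding itself). It has no access to
# upcoming words whatsoever.
control_response = own @ rng.standard_normal(emb_dim)

# Run the very same "pre-onset" regression: does the NEXT word's embedding
# predict the control signal recorded at the CURRENT word?
X = embeddings[1:]            # upcoming word's embedding
y = control_response[:-1]     # control signal before that word arrives

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
r, _ = pearsonr(Ridge(alpha=1.0).fit(X_tr, y_tr).predict(X_te), y_te)
print(f"'pre-onset' encoding in a non-predictive control: r = {r:.3f}")  # clearly > 0
```

The positive score appears purely because adjacent words share features: the regression happily reads tomorrow's word out of today's statistics.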

Spoiler: The Control Systems Passed the Test Too

Both hallmarks of "neural prediction" showed up in systems that literally cannot predict anything. The word embeddings and acoustic signals produced the same patterns that were supposed to be unique signatures of the brain actively forecasting upcoming words.

It's like claiming your dog can predict earthquakes because it barks before they happen, then discovering that your dog also barks before the mailman, thunderstorms, and its own reflection. The barking might be real, but the prediction part? Not so much.

Even more damning: the researchers tested proposed statistical corrections designed to remove these stimulus dependencies. The effects persisted anyway. The fixes didn't fix it.
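For intuition about why such corrections can fall short - again a toy construction of mine, not the paper's analysis - suppose the nuisance regressor, say the current word's embedding, is an imperfect proxy for whatever stimulus features actually drive the response. Regressing it out then leaves shared information behind, and the upcoming word's embedding still "explains" the residue.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n_words, emb_dim = 2000, 300

own = rng.standard_normal((n_words, emb_dim))
embeddings = own + 0.7 * np.roll(own, 1, axis=0)
control_response = own @ rng.standard_normal(emb_dim)  # tracks the current word only

X_next = embeddings[1:]       # upcoming word's embedding
X_curr = embeddings[:-1]      # current word's embedding, used as the "correction"
y = control_response[:-1]

# Correction: regress the response on the current word's embedding and keep the
# residuals. Because the embedding is an imperfect proxy for the features that
# actually drive the response, shared information survives the residualization.
y_resid = y - Ridge(alpha=1.0).fit(X_curr, y).predict(X_curr)

X_tr, X_te, y_tr, y_te = train_test_split(X_next, y_resid, test_size=0.2, random_state=0)
r, _ = pearsonr(Ridge(alpha=1.0).fit(X_tr, y_tr).predict(X_te), y_te)
print(f"'pre-onset' effect after the correction: r = {r:.3f}")  # still > 0 in this toy
```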

Why This Matters (And Why It's Actually Good News)

This isn't a "the brain doesn't predict" paper. Decades of behavioral and neural evidence strongly suggest that predictive processing is a core feature of how brains work (Schrimpf et al., 2021). What this paper does say is that one particular method for measuring prediction might be fooling us: a pre-onset encoding effect can reflect the statistical structure of the stimulus itself rather than the brain actually forecasting the next word.

That's a really important distinction. Predictive coding - the idea that your brain maintains an internal model of the world and constantly updates it based on prediction errors - remains one of the most influential frameworks in neuroscience. But the tools we use to study it need to be airtight, especially when we're dealing with something as statistically slippery as natural language.

The Bigger Picture

This study is a healthy reality check for a field that's been racing to draw parallels between AI language models and brains. Those parallels might exist! But proving them requires ruling out the boring explanations first (Caucheteux & King, 2022). The fact that GPT-2 activations correlate with brain signals is cool, but if those same correlations appear in a system with zero predictive ability, we need better methods before claiming the brain runs on autocomplete.

The authors aren't throwing cold water on the field - they're handing it a better thermometer.

References

  1. Schönmann, I., Szewczyk, J., de Lange, F. P., & Heilbron, M. (2026). Stimulus dependencies - rather than next-word prediction - can explain pre-onset brain encoding in naturalistic listening designs. eLife, 12, e106543. https://doi.org/10.7554/eLife.106543

  2. Goldstein, A., Zada, Z., Buchnik, E., et al. (2022). Shared computational principles for language processing in humans and deep language models. Nature Neuroscience, 25(3), 369-380. https://doi.org/10.1038/s41593-022-01026-4

  3. Heilbron, M., Armeni, K., Schoffelen, J.-M., Hagoort, P., & de Lange, F. P. (2022). A hierarchy of linguistic predictions during natural language comprehension. PNAS, 119(32), e2201968119. https://doi.org/10.1073/pnas.2201968119

  4. Schrimpf, M., Blank, I. A., Tuckute, G., et al. (2021). The neural architecture of language: Integrative modeling converges on predictive processing. PNAS, 118(45), e2105646118. https://doi.org/10.1073/pnas.2105646118

  5. Caucheteux, C., & King, J.-R. (2022). Brains and algorithms partially converge in natural language processing. Communications Biology, 5, 134. https://doi.org/10.1038/s42003-022-03036-1
