Brains did not evolve to win elegance contests. They evolved to keep an animal alive long enough to eat, dodge, mate, remember who bit it last week, and maybe pull off a decent two-minute drill when the world got weird. That rewarded systems that could predict what comes next, fast. Now neuroscience is asking a very 2025 question: if giant AI models get really good at predicting brain activity and behavior, can they help us understand how the brain runs its plays - or are they just fancy box score merchants? That is the question Thomas Serre and Ellie Pavlick tackle in a recent Neuron essay.
Prediction Is Nice. Explanation Wins Championships
Serre and Pavlick’s core point is simple: neuroscience should not settle for models that merely predict neural activity. The real prize is explanation - linking what a model computes to actual mechanisms of perception, cognition, and behavior (Serre & Pavlick, 2025).
That sounds obvious, but science loves a scoreboard. If a model predicts what neurons will do, or what choice a person will make, everybody claps. Fair enough. Prediction matters. But a prediction-only model can still be a black box. It might nail the outcome while telling you almost nothing about the underlying play design.
The New Recruits Look Legit
One 2025 Nature paper trained a foundation model on large-scale mouse visual cortex data and showed that it could predict neural responses to new kinds of stimuli and even connect those predictions to anatomy better than narrower models (Wang et al., 2025). That is impressive. It is also the kind of result that makes neuroscientists start talking like a franchise just found its quarterback.
Another 2025 Nature paper introduced Centaur, a foundation model trained on more than 10 million human choices from 160 psychology experiments. It generalized across new tasks and participants, and its internal representations became more aligned with human neural activity after training (Binz et al., 2025). In plain English: the model did not just memorize one drill. It started reading a broader chunk of the field.
That is why people are excited. Foundation models can absorb messy, huge, unlabeled datasets and then adapt across tasks. In neuroscience, the hope is that this could unify scattered findings from brain imaging, behavior, language, and physiology.
Why Neuroscience Keeps Dropping Easy Passes
The problem is data. Brain data are expensive, noisy, limited, and fragmented. You cannot scrape a trillion clean neural examples off the open internet. A mouse does not upload its cortex to GitHub. Different labs use different tasks, scanners, species, and recording setups. It is less one league and more twelve pickup games happening in different parking lots.
That is exactly why this paper matters. Serre and Pavlick argue that if foundation models are going to help brain science, the field needs common benchmarks, broader datasets, and stronger tests of generalization. A 2024 Nature Neuroscience perspective makes a similar point: cognitive neuroscience should focus more on task demands and whether findings generalize across settings, rather than treating each lab task like sacred scripture (Nau et al., 2024).
There is also a deeper issue. Computational neuroscience has always tried to connect math to mechanism, not just fit curves beautifully and walk away (Computational neuroscience, Wikipedia). So if a giant model predicts your fMRI data but nobody can say what it learned in biologically meaningful terms, you may have built a spectacular intern - useful, fast, and hard to trust alone.
What This Could Change in the Real World
If these models keep improving and, big if, hold up under replication, they could speed up several parts of neuroscience and neurology. They could help researchers identify which experiments to run next, improve brain-computer interfaces by learning better representations from EEG or neural recordings, and make it easier to detect subtle disease patterns across imaging, language, and behavior. They might even help build more unified theories of cognition, which is academic code for "maybe the field can stop arguing over tiny subdrills and finally diagram the whole offense."
But nobody serious should oversell this. Even recent expert commentary has emphasized the same caution: these systems may be powerful tools for finding patterns while still falling short of genuine understanding or reliable clinical use without heavy validation (IBM Think, 2025). Translation: exciting assistant coach, not head coach of medicine.
The Real Test
The most interesting part of this paper is that it refuses the easy hype cycle. It does not ask, "Can AI predict the brain?" It asks the tougher question: "When does prediction become understanding?" That is the right standard. A model that can call the next play is useful. A model that explains why the play works, when it fails, and how to design a better one is science. Neuroscience has enough highlight reels already. What it needs now is a playbook.
References
Serre T, Pavlick E. From prediction to understanding: Will AI foundation models transform brain science? Neuron. 2025. DOI: https://doi.org/10.1016/j.neuron.2025.09.039
Wang EY, Fahey PG, Ding Z, et al. Foundation model of neural activity predicts response to new stimulus types and anatomy. Nature. 2025;640(8058):470-477. DOI: https://doi.org/10.1038/s41586-025-08829-y
Binz M, Akata E, Bethge M, et al. A foundation model to predict and capture human cognition. Nature. 2025;644:1002-1009. DOI: https://doi.org/10.1038/s41586-025-09215-4
Wang R, Chen ZS. Large-scale foundation models and generative AI for BigData neuroscience. Neuroscience Research. 2025;215:3-14. DOI: https://doi.org/10.1016/j.neures.2024.06.003
Nau M, Schmid AC, Kaplan SM, Baker CI, Kravitz DJ. Centering cognitive neuroscience on task demands and generalization. Nature Neuroscience. 2024;27(9):1656-1667. DOI: https://doi.org/10.1038/s41593-024-01711-6
McClelland JL. Capturing advanced human cognitive abilities with deep neural networks. Trends in Cognitive Sciences. 2022;26(12):1047-1050. DOI: https://doi.org/10.1016/j.tics.2022.09.018