Adaptive intelligence sounds a bit like making soup from a fridge full of leftovers. You start with a rough plan, the onions go in, something smells promising, then reality barges in - the broth is thin, the heat is wrong, the cat is judging you - and somehow dinner still happens. That, in miniature, is what animals do all day. They keep adjusting while the world is still moving. And according to a recent Nature Neuroscience Perspective by Mackenzie Mathis, that is exactly the trick modern AI still struggles to pull off [1].
Why today’s AI still face-plants in the wild
A lot of AI is excellent at the exam and weirdly fragile in real life. Train it on a fixed dataset, test it on similar data, everyone claps. Then the lighting changes, the task shifts, and the system suddenly has the street smarts of a decorative gourd.
Mathis argues that the next step is not just "better AI" in the usual sense, but adaptive intelligence - systems that learn online, generalize beyond their training diet, and keep functioning when the world stops cooperating [1]. That idea leans heavily on neuroscience, because animal brains do this constantly. They do not wait for a training pipeline and a benchmark suite. They update on the fly, which is rude but impressive.
One big theme here is the brain’s use of internal models. In plain English: your nervous system keeps a running guess about what is out there, compares that guess to incoming evidence, and updates when reality starts acting suspicious. Predictive coding theories frame this as a loop of prediction and correction, with "prediction error" acting like the universe tapping you on the shoulder and saying, "Nope, try again" [2].
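For the programmatically inclined, the prediction-correction loop can be caricatured in a few lines. This is a toy sketch of the general idea, not a model from any of the cited papers: an agent tracks a signal by nudging its internal estimate in proportion to the prediction error, and when the world suddenly shifts, the same rule pulls the estimate toward the new reality.

```python
# Toy sketch of a prediction-correction loop (illustrative only).
# The agent's internal estimate is its "running guess"; the prediction
# error is the universe tapping it on the shoulder.

def predictive_update(estimate, observation, learning_rate=0.3):
    """Move the estimate toward the observation by a fraction of the
    prediction error (observation - estimate)."""
    prediction_error = observation - estimate
    return estimate + learning_rate * prediction_error

estimate = 0.0
# A stable world that abruptly changes at step 10.
observations = [1.0] * 10 + [5.0] * 10
for obs in observations:
    estimate = predictive_update(estimate, obs)
# After the shift, the estimate has mostly caught up with the new value.
```

The point is not the arithmetic but the architecture: nothing here waits for a retraining pipeline. Every observation is a chance to be corrected.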
Tiny gossip networks, giant consequences
This matters because adaptation is not one magic feature. You need memory. You need flexible representations. You need a way to notice when the world has changed instead of pretending everything is fine, which is a strategy mainly favored by doomed startups.
A 2024 Nature Machine Intelligence Perspective makes the AI version of this argument: systems in the real world need "open-world learning," meaning they must detect unexpected changes and adapt rather than just fail more confidently [3]. Brains also seem to rely on replay - reactivating past experiences during rest or sleep-like states - to stabilize learning and improve generalization. In a 2022 Cerebral Cortex paper, Barry and Love showed that replay-like mechanisms in neural networks can help prepare a system for future experiences, not just archive the past like a very anxious librarian [4].
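The machine-learning analogue of replay is easy to sketch. What follows is a generic experience-replay buffer of the kind used in reinforcement learning, offered as an illustration of the general principle rather than the specific mechanism in Barry and Love's model: old experiences get mixed back into training so that new data does not simply overwrite what came before.

```python
# Toy experience-replay buffer (a generic illustration of the idea,
# not the Barry & Love architecture). Replaying random past
# experiences alongside new ones helps stabilize learning.
import random

class ReplayBuffer:
    def __init__(self, capacity=1000):
        self.capacity = capacity
        self.memory = []

    def store(self, experience):
        # When full, forget the oldest experience first.
        if len(self.memory) >= self.capacity:
            self.memory.pop(0)
        self.memory.append(experience)

    def sample(self, batch_size):
        # Draw a random mix of past experiences for the next update.
        k = min(batch_size, len(self.memory))
        return random.sample(self.memory, k)

buffer = ReplayBuffer(capacity=5)
for step in range(8):
    buffer.store(("observation", step))

batch = buffer.sample(3)  # replayed experiences, in random order
```

The anxious-librarian joke undersells it: the sampling step is what turns an archive into a rehearsal schedule.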
What the paper is really betting on
Mathis is basically saying: stop treating adaptation like an optional software update. Build it into the machine’s bones [1].
That means AI agents may need better ways to build world models, monitor their own uncertainty, preserve useful knowledge while still learning, and update only the parts of the system that actually need updating. Her discussion connects nicely with a Nature Communications study showing that empirically estimated neural network models from human fMRI data can reveal transformations linked to adaptive behavior [5]. Translation: the brain is not just storing facts. It is continuously reformatting information so action can happen.
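One way to picture "update only what needs updating" is an uncertainty-gated learning rate. The sketch below is a hypothetical illustration of that idea, not a method from any of the cited papers: the agent tracks how surprising recent observations are, barely updates while the world behaves, and learns faster when surprise spikes.

```python
# Toy sketch of uncertainty-gated updating (illustrative assumption,
# not a published algorithm). Low surprise -> preserve knowledge;
# a spike in surprise -> adapt quickly.

def gated_update(estimate, observation, surprise,
                 base_rate=0.05, max_rate=0.5):
    error = observation - estimate
    # Smoothed running measure of surprise (recent absolute error).
    surprise = 0.9 * surprise + 0.1 * abs(error)
    # Learning rate grows with surprise, capped at max_rate.
    rate = min(max_rate, base_rate + 0.2 * surprise)
    return estimate + rate * error, surprise

estimate, surprise = 0.0, 0.0
# Twenty steps of a stable world, then a sudden regime change.
for obs in [0.0] * 20 + [10.0] * 20:
    estimate, surprise = gated_update(estimate, obs, surprise)
# The estimate has tracked the change without any manual retraining.
```

The design choice worth noticing: the gate protects existing knowledge during quiet stretches, which is exactly the preserve-while-learning tension the paper highlights.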
If this line of research holds up, the payoff is not abstract. More adaptive AI could help robots cope with messy homes, hospitals, and sidewalks instead of only succeeding in clean demo videos. It could make assistive devices more responsive to changing bodies and environments. It could improve scientific tools that model movement, perception, and decision-making. It might even make AI less wasteful. The brain runs on about 20 watts while our biggest models behave like they are trying to heat a small nation. The comparison is not exact, but it is embarrassing enough to be motivating.
The abyss looking back with a calibration curve
The funny part is that the brain, this damp three-pound cathedral of electrified pudding, may still be the best proof that adaptive intelligence is possible. Not neat. Not elegant. But possible.
That does not mean neuroscientists are handing AI engineers a secret treasure map. It means they are offering something messier and more useful: examples of systems that learn under uncertainty, recover after surprises, and keep steering through noise. The ocean does not stop changing so the fish can finish a gradient update. The fish adapts or becomes lunch.
AI wants that kind of toughness. Neuroscience may not provide a recipe card, but it does offer a view from the deep water - where intelligence is less about memorizing the menu and more about surviving when the kitchen catches fire.
References
- Mathis MW. Leveraging insights from neuroscience to build adaptive artificial intelligence. Nature Neuroscience. 2025. DOI: 10.1038/s41593-025-02169-w
- Friston K, Moran RJ, Nagai Y, Taniguchi T, Gomi H, Tenenbaum JB. World model learning and inference. Neural Networks. 2021;144:573-590. DOI: 10.1016/j.neunet.2021.09.011
- Kejriwal M, Kildebeck E, Steininger R, et al. Challenges, evaluation and opportunities for open-world learning. Nature Machine Intelligence. 2024;6:580-588. DOI: 10.1038/s42256-024-00852-4
- Barry DN, Love BC. A neural network account of memory replay and knowledge consolidation. Cerebral Cortex. 2022;33(1):83-95. DOI: 10.1093/cercor/bhac054
- Ito T, Yang GR, Laurent P, Schultz DH, Cole MW. Constructing neural network models from brain data reveals representational transformations linked to adaptive behavior. Nature Communications. 2022;13:673. DOI: 10.1038/s41467-022-28323-7