Tucked deep inside your medial temporal lobe, just behind the amygdala and roughly level with your ears, sits the hippocampus - a curled, seahorse-shaped structure that has inspired probably more PhD dissertations than any other chunk of neural tissue. It's the brain's rapid-fire note-taker, jotting down the specifics of what happened, where, and when. Meanwhile, sprawled across the vast real estate of your cerebral cortex, a much slower system quietly extracts the general gist of things - learning that dogs are furry, fire is hot, and you probably shouldn't reply-all to company-wide emails. These two systems, episodic and semantic memory respectively, were first formally distinguished by Endel Tulving back in 1972, and neuroscientists have been arguing about how they talk to each other ever since.
Two AIs Walk Into a Brain
A new computational model called GENESIS (Generative Episodic-Semantic Integration System) from D'Alessandro and colleagues finally gives us a unified framework for how these two memory systems interact - and the answer involves an architecture that AI engineers will find eerily familiar (D'Alessandro et al., 2026).
Here's the setup: GENESIS models your cortex as one variational autoencoder (the "Cortical-VAE") that learns to compress the world into useful categories, and your hippocampus as a second one (the "Hippocampal-VAE") that stores compressed indices of specific episodes. The kicker? The whole thing runs on a retrieval-augmented generation architecture. Yes, RAG - the same design pattern powering your favorite chatbot's ability to look stuff up before answering - might be what your brain has been doing all along.
The cortex handles encoding AND decoding. When you experience something, the cortex compresses it. When you remember something, the cortex reconstructs it. The hippocampus just stores the search keys - like a librarian who remembers which shelf a book is on but needs the cortex to actually read it back to you.
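To make that division of labor concrete, here's a deliberately tiny sketch in Python. The pooling "cortex" and list-of-keys "hippocampus" are toy stand-ins of my own invention, not the paper's actual VAEs - but the retrieve-then-regenerate loop is the RAG-style pattern being described:

```python
import numpy as np

rng = np.random.default_rng(0)
D, K = 32, 8  # observation dim, compressed key dim

# Toy stand-in for the Cortical-VAE: a fixed, deterministic
# compressor (mean-pool 32 -> 8) and reconstructor (repeat 8 -> 32).
def cortical_encode(x):
    return x.reshape(K, D // K).mean(axis=1)

def cortical_decode(z):
    return np.repeat(z, D // K)

# "Hippocampal" store: only the compressed keys, like a librarian's
# shelf index - actually reading the book back is the cortex's job.
episodic_keys = []

def store_episode(x):
    episodic_keys.append(cortical_encode(x))

def recall(cue):
    """RAG-style recall: match the cue's code against stored keys,
    then let the cortex regenerate the experience from the winner."""
    q = cortical_encode(cue)
    dists = [np.linalg.norm(q - k) for k in episodic_keys]
    best = int(np.argmin(dists))
    return best, cortical_decode(episodic_keys[best])

episodes = [rng.standard_normal(D) for _ in range(5)]
for x in episodes:
    store_episode(x)

# Recall from a noisy partial cue of episode 2: the hippocampus finds
# the right key, and the cortex fills in the rest.
cue = episodes[2] + 0.05 * rng.standard_normal(D)
best, reconstruction = recall(cue)
```

Note what the toy gets right about the architecture: the stored keys alone are useless without the cortical decoder, and the reconstruction is a lossy regeneration, not a verbatim copy of the original experience.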
Why Your Memories Are (Productively) Wrong
One of the most satisfying things GENESIS explains is why your memories are systematically distorted - and not in a random, oops-the-hard-drive-corrupted way, but in a predictable, mathematically describable way.
The model uses rate-distortion theory (borrowed from information theory) to formalize what happens when your brain's storage capacity runs low. As the hippocampal bottleneck tightens, individual memories get squeezed toward their category prototypes. Saw a slightly unusual shade of red? Your brain remembers "standard red." Met someone with an unusual name at a party? Your brain helpfully overwrites it with something more statistically probable. (This is why you keep calling that guy "Mike.")
In their experiments using colored MNIST digits, the researchers showed that as encoding capacity drops, reconstructions literally converge toward category averages. At maximum compression, every "7" looks like the same Platonic ideal of a 7. Your brain isn't failing at memory - it's doing lossy compression, and the artifacts are features, not bugs (Spens & Burgess, 2024).
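The core rate-distortion intuition fits in a few lines. This is a linear-Gaussian caricature of my own, not the paper's model: under a prior centred on the category prototype, a capacity-limited code shrinks each item toward that prototype, and the blending weight here is an illustrative stand-in for the bit budget:

```python
import numpy as np

# "Standard red" prototype vs. the slightly unusual red you actually saw,
# in a toy RGB space (values are illustrative).
prototype = np.array([1.0, 0.0, 0.0])
item      = np.array([0.8, 0.1, 0.15])

def remember(x, prototype, capacity):
    """Reconstruction under a capacity-limited code: a convex blend of
    the item and its category prototype. capacity=1 is verbatim recall,
    capacity=0 is pure prototype - the 'Platonic 7' regime."""
    return capacity * x + (1 - capacity) * prototype

# As capacity drops, the memory slides toward the category average.
reconstructions = {c: remember(item, prototype, c) for c in (1.0, 0.5, 0.0)}
```

At full capacity you get the item back; at zero capacity you get the prototype; in between, a systematic, predictable distortion - exactly the "features, not bugs" artifacts described above.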
The Remix Artist in Your Head
Perhaps the wildest result: GENESIS shows how episodic replay can recombine elements from completely different memories to imagine novel scenarios. Saw a red 9 yesterday and a yellow 7 last week? Your brain can remix those into a yellow 9 it's never actually encountered.
This is a formal account of what psychologists call "constructive episodic simulation" - the ability to pre-experience future events by recombining stored episodes. Planning a vacation? Your brain is running GENESIS-style recombination on fragments from previous trips and that one beach photo your friend posted (Fayyaz et al., 2022).
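If episodic codes factor into parts - as in the colored-MNIST setup, where color and digit identity are separable - recombination is just splicing. The factored-dict representation below is an illustrative assumption, not GENESIS's latent space:

```python
# Two stored episodes with factored content.
episode_a = {"color": "red",    "digit": 9}  # seen yesterday
episode_b = {"color": "yellow", "digit": 7}  # seen last week

def recombine(source_of_color, source_of_digit):
    """Constructive simulation in miniature: take the color factor from
    one memory and the identity factor from another."""
    return {"color": source_of_color["color"],
            "digit": source_of_digit["digit"]}

# A yellow 9 - a coherent experience the system never actually had.
imagined = recombine(episode_b, episode_a)
```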
The model also nails serial recall effects - you remember the beginning and end of a list better than the middle. By baking temporal embeddings into hippocampal keys, GENESIS reproduces primacy and recency effects, and shows how delayed recall selectively kills the recency advantage. Anyone who's crammed for an exam and forgotten the middle chapters will find this painfully relatable.
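A minimal temporal-context sketch shows why delay selectively kills recency. The exponential forms and constants here are my own illustrative choices, not GENESIS's temporal embeddings: primacy comes from reduced interference at the start of the list, recency from how close a late item is in time to the recall cue:

```python
import math

def recall_strength(pos, list_len, delay):
    """Toy serial-position model: primacy decays with list position,
    recency decays with elapsed time between encoding and recall."""
    primacy = math.exp(-0.5 * pos)                            # early items stand out
    recency = math.exp(-0.5 * (list_len - 1 - pos + delay))   # fades as the cue recedes
    return primacy + recency

n = 10
immediate = [recall_strength(i, n, delay=0) for i in range(n)]   # U-shaped curve
delayed   = [recall_strength(i, n, delay=20) for i in range(n)]  # recency bump gone
```

With immediate recall both ends of the list beat the middle; after a delay the recency term has decayed to nearly nothing, leaving only the primacy advantage - the crammed-exam pattern in two exponentials.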
Beyond the Buddy System
What makes GENESIS genuinely novel is that it breaks from the classic Complementary Learning Systems (CLS) framework that has dominated the field since McClelland, McNaughton, and O'Reilly's landmark 1995 paper (updated by Kumaran et al., 2016). CLS basically said: hippocampus learns fast, cortex learns slow, and they teach each other during sleep. Neat and tidy.
GENESIS says: actually, it's messier than that. The cortex doesn't just passively receive replayed memories - it actively shapes what gets encoded and how it's reconstructed. Semantic knowledge warps episodic recall in real time, not just during overnight consolidation. The two systems aren't just complementary; they're codependent in ways that produce both the power and the systematic weirdness of human memory.
So What?
If GENESIS holds up (and its ability to reproduce a genuinely impressive range of behavioral findings suggests it might), it reframes memory not as a filing cabinet or a camera but as a pair of collaborating generative models running on a budget. Your brain doesn't store experiences - it stores just enough to regenerate a plausible version later, filling in the gaps with learned priors. It's less "total recall" and more "educated guess with receipts."
For AI, the implications are equally interesting: maybe the next breakthrough in language model memory isn't more parameters or bigger context windows, but a GENESIS-style architecture where fast episodic indexing works hand-in-hand with slow semantic compression. Your hippocampus figured this out a few hundred million years ago. Silicon is just catching up.
References
- D'Alessandro, M., D'Amato, L., Elkano, M., Uriz, M., & Pezzulo, G. (2026). GENESIS: A Generative model of Episodic-Semantic Interaction. Neuroscience and Biobehavioral Reviews, 106627. DOI: 10.1016/j.neubiorev.2026.106627
- Spens, E., & Burgess, N. (2024). A generative model of memory construction and consolidation. Nature Human Behaviour, 8, 526-543. DOI: 10.1038/s41562-023-01799-z
- Fayyaz, Z., Altamimi, A., Zoellner, C., Klein, N., Wolf, O. T., Cheng, S., & Wiskott, L. (2022). A model of semantic completion in generative episodic memory. Neural Computation, 34(9), 1841-1870. DOI: 10.1162/neco_a_01520
- Kumaran, D., Hassabis, D., & McClelland, J. L. (2016). What learning systems do intelligent agents need? Complementary learning systems theory updated. Trends in Cognitive Sciences, 20(7), 512-534. DOI: 10.1016/j.tics.2016.05.004
- McClelland, J. L., McNaughton, B. L., & O'Reilly, R. C. (1995). Why there are complementary learning systems in the hippocampus and neocortex: Insights from the successes and failures of connectionist models of learning and memory. Psychological Review, 102(3), 419-457. DOI: 10.1037/0033-295X.102.3.419