The clues were all there, scattered across thousands of flickering neurons like evidence at a crime scene. For years, neuroscientists had been circling a tantalizing suspect: the idea that you could reverse-engineer what a brain is seeing just by reading its neural activity. The early leads were promising but blurry - literally. Static images, low resolution, more Rorschach test than recreation. But a team at University College London just cracked the case wide open, and the big reveal? They reconstructed actual movies from mouse brain activity. Ten-second clips, 30 frames per second, recognizable enough to make you do a double-take.
Reading the Brain's Screening Room
So how do you pull a movie out of a mouse's head? You start with two-photon calcium imaging - a technique that lets researchers watch individual neurons light up in real time as calcium floods in during firing. Think of it as eavesdropping on hundreds of tiny conversations happening simultaneously in the visual cortex.
Joel Bauer and colleagues at UCL's Sainsbury Wellcome Centre showed mice natural video clips while recording from neurons in their primary visual cortex (V1). Then came the clever part. They built a dynamic neural encoding model - basically an AI that learns to predict how each neuron will respond to any given video frame, accounting for the mouse's movements and even its pupil size. Once that model is trained, you flip it backwards. You start with random noise, ask "what video would make these neurons fire the way they actually did?", and iteratively refine your guess through backpropagation until you land on something that matches (Bauer et al., 2026).
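To make the "flip it backwards" step concrete, here is a toy sketch of optimisation-through-an-encoding-model. Everything below is a deliberately simplified stand-in for the paper's method: the "encoding model" is just a random linear filter bank rather than a trained deep network, there are no behavioural covariates like running or pupil size, and the "frame" is a static vector rather than a video. The point is the shape of the algorithm: hold the model fixed, start from noise, and descend the gradient of the prediction error with respect to the stimulus.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "encoding model": each of 300 hypothetical neurons responds as a
# fixed linear filter applied to a 10x10 "frame" (100 pixels).
n_neurons, n_pixels = 300, 100
W = rng.normal(size=(n_neurons, n_pixels))

# The "stimulus" the animal actually saw, and the responses it evoked.
x_true = rng.normal(size=n_pixels)
r_obs = W @ x_true

# Inversion: start from random noise and refine the guess by gradient
# descent on the prediction error -- the same optimise-through-the-model
# idea as the paper, minus the deep network and behavioural inputs.
x = rng.normal(size=n_pixels)
lr = 1e-3
for _ in range(500):
    grad = W.T @ (W @ x - r_obs)   # gradient of 0.5 * ||W x - r_obs||^2
    x -= lr * grad

corr = np.corrcoef(x, x_true)[0, 1]
print(f"pixel correlation after optimisation: {corr:.3f}")
```

In this linear toy the recovery is nearly exact because we recorded more "neurons" than pixels; with a deep nonlinear encoder and noisy real recordings, the same loop lands on an approximate, characteristically warped reconstruction instead.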
The result? A pixel-level correlation of 0.57 between the original and reconstructed videos. That might not sound jaw-dropping until you learn the previous best for static images from mouse V1 was 0.24. They more than doubled the accuracy and added the dimension of time.
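The 0.57 figure is an ordinary Pearson correlation computed over every pixel of every frame. A minimal sketch, with synthetic data in place of real reconstructions (the video shape, noise level, and function name are all illustrative assumptions):

```python
import numpy as np

def pixelwise_correlation(original, reconstructed):
    """Pearson correlation between two videos, computed over all
    pixels of all frames (arrays of shape frames x height x width)."""
    a = np.asarray(original, dtype=float).ravel()
    b = np.asarray(reconstructed, dtype=float).ravel()
    return np.corrcoef(a, b)[0, 1]

rng = np.random.default_rng(1)
video = rng.random((300, 36, 64))   # 10 s at 30 fps, toy resolution
# Fake "reconstruction": the original plus noise, with the noise level
# chosen so the score lands near the paper's headline figure.
recon = video + 0.42 * rng.normal(size=video.shape)
c = pixelwise_correlation(video, recon)
print(f"correlation: {c:.2f}")
```

A single number like this compresses a lot, which is why the qualitative side-by-side comparisons matter too, but it makes the jump from 0.24 to 0.57 directly comparable across studies.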
Why Mice, Though?
Fair question. Most of the splashy brain-reading headlines have come from human fMRI studies. The MindEye system, for instance, achieved remarkably faithful image reconstructions from human brain scans using contrastive learning and diffusion models (Scotti et al., 2024). And way back in 2011, Nishimoto and Gallant at UC Berkeley first showed you could reconstruct fuzzy movie clips from human fMRI data (Nishimoto et al., 2011). More recently, Takagi and Nishimoto used Stable Diffusion to generate strikingly detailed image reconstructions from fMRI signals (Takagi & Nishimoto, 2023).
But here's the thing about fMRI: it's measuring blood flow changes across thousands of neurons at once. It's like trying to figure out what song a stadium is singing by measuring how much the building vibrates. Single-cell recordings in mice give you the actual sheet music - which specific neurons fire, when, and how intensely. That precision is what let Bauer's team achieve reconstructions this clean.
The Brain Is Not a Camera (And That's the Point)
Here's where it gets properly interesting. The reconstructed videos aren't perfect copies. They're subtly warped - slightly distorted versions of what the mouse actually saw. And as lead author Joel Bauer put it: "The deviation between reality and representations in the brain is not necessarily an error but a feature, reflecting how our minds interpret and augment sensory information."
Your brain doesn't passively record the world like a security camera. It's actively editing the footage in real time, emphasizing what matters, smoothing over what doesn't, filling in gaps with best guesses. This reconstruction technique gives researchers a way to see those edits happening - to literally watch the brain's director's cut play out alongside the original.
More Neurons, Better Movies
One of the study's most practical findings: reconstruction quality scales with the number of neurons recorded. More eavesdropping, clearer picture. They also found that model ensembling - combining predictions from multiple trained models - significantly boosted quality. It's the neural decoding equivalent of asking five friends what happened at the party instead of just one.
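The intuition behind ensembling is that independently trained models make partly independent mistakes, so averaging their outputs cancels some of the error. A toy illustration, not the paper's pipeline: five hypothetical "decoders" each recover a signal with independent noise, and the average beats any of them.

```python
import numpy as np

rng = np.random.default_rng(2)

# Ground-truth "frame" and five hypothetical trained decoders, each of
# which recovers it with independent errors (a stand-in for the paper's
# separately trained reconstruction models).
truth = rng.random(1000)
predictions = [truth + 0.5 * rng.normal(size=truth.size) for _ in range(5)]

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

single = np.mean([corr(p, truth) for p in predictions])
ensemble = corr(np.mean(predictions, axis=0), truth)
print(f"average single-model correlation: {single:.2f}")
print(f"ensemble correlation:             {ensemble:.2f}")
```

Averaging five independent noise sources cuts the noise variance fivefold, which is the "five friends at the party" effect in one line of arithmetic. Real models share training data, so their errors correlate and the gain is smaller, but the direction holds.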
This matters because calcium imaging technology keeps getting better. The MICrONS project has already recorded from around 75,000 neurons in mouse visual cortex. As datasets grow, so does the potential resolution of these reconstructions.
What Comes Next
The team plans to push toward higher resolution reconstructions covering more of the visual field. But the really exciting application isn't just making prettier videos. It's using reconstruction as a tool - a way to investigate how visual processing changes across brain regions, during learning, in disease models, or between species. If you can see what the brain thinks it's seeing, you can start asking why it sees things differently than reality.
We don't have a perfect representation of the world in our heads. We never did. But now, for the first time, we can watch the imperfect version play back - and that might teach us more about vision than the perfect version ever could.
References
- Bauer, J., Margrie, T. W., & Clopath, C. (2026). Movie reconstruction from mouse visual cortex activity. eLife, 14, RP105081. DOI: 10.7554/eLife.105081 | PMCID: PMC12975128
- Nishimoto, S., Vu, A. T., Naselaris, T., Benjamini, Y., Yu, B., & Gallant, J. L. (2011). Reconstructing visual experiences from brain activity evoked by natural movies. Current Biology, 21(19), 1641-1646. DOI: 10.1016/j.cub.2011.08.031 | PMCID: PMC3326357
- Takagi, Y., & Nishimoto, S. (2023). High-resolution image reconstruction with latent diffusion models from human brain activity. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 14453-14463. DOI: 10.1109/CVPR52729.2023.01389
- Scotti, P. S., Banerjee, A., Gober, J., Chiu, S., Pfeiffer, T., Naselaris, T., Law, K., & Abraham, N. (2024). Reconstructing the Mind's Eye: fMRI-to-Image with Contrastive Learning and Diffusion Priors. Advances in Neural Information Processing Systems, 36. DOI: 10.48550/arXiv.2305.18274