April 27, 2026

BRAINS, BUT MAKE THEM USEFUL: What Happens When AI Studies the Visual Cortex Instead of Just Vibes

Before this kind of work, a lot of “brain-inspired AI” talk was basically decorative trim - slap the word neural on a model, nod respectfully at biology, and go home. After this paper, the relationship gets less like a marketing brochure and more like actual plumbing: researchers are trying to connect what a human brain really does during vision to how a machine learns to see. Messy? Yes. Promising? Also yes. Slightly alarming that your fMRI scan might become a study guide for a transformer? Absolutely.


The basic stunt

A new paper introduces BRACTIVE - short for Brain Activation Network - a transformer-based framework that learns links between visual features and fMRI signals from the human brain (Nguyen et al., 2025). The goal is not just to admire brain scans like abstract art. It’s to line up what a person sees with the patterns of activation in their visual cortex, then use that alignment to identify regions of interest, or ROIs.

That sounds dry, but stick with me. ROIs are brain areas that reliably care about certain kinds of stuff. Faces. Bodies. Places. Probably not your unread emails, though the amygdala may have opinions.

What makes BRACTIVE different is scale and flexibility. Older methods often handled one subject at a time, like a lab tech sorting socks one lonely pair at a time. BRACTIVE can extend ROI identification across multiple subjects and multiple brain regions, while still preserving person-specific differences. That matters because brains are not identical factory parts. They are more like apartments renovated by different landlords over several decades.
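If the metaphors are wearing thin, here is the gist as code. The sketch below is not BRACTIVE’s actual architecture - it’s a toy guess at the general recipe, where a learned per-subject embedding stands in for “preserving person-specific differences,” and every name and dimension is invented for illustration.

```python
# A minimal sketch of the shared-space idea, NOT the paper's architecture:
# map image features and fMRI voxel responses into one space, with a
# learned per-subject token so a single model can serve many brains.
import torch
import torch.nn as nn

class ToyBrainAligner(nn.Module):
    def __init__(self, img_dim=768, voxel_dim=4096, shared_dim=256, n_subjects=8):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, shared_dim)     # vision features -> shared space
        self.fmri_proj = nn.Linear(voxel_dim, shared_dim)  # voxel responses -> shared space
        # One learned embedding per subject preserves person-specific quirks.
        self.subject_embed = nn.Embedding(n_subjects, shared_dim)

    def forward(self, img_feats, fmri, subject_ids):
        z_img = nn.functional.normalize(self.img_proj(img_feats), dim=-1)
        z_brain = nn.functional.normalize(
            self.fmri_proj(fmri) + self.subject_embed(subject_ids), dim=-1
        )
        # Cosine similarity: high when the image and the scan belong together.
        return (z_img * z_brain).sum(-1)

# Toy batch: 4 image-scan pairs from 4 different subjects.
model = ToyBrainAligner()
sim = model(torch.randn(4, 768), torch.randn(4, 4096), torch.tensor([0, 1, 2, 3]))
print(sim.shape)  # torch.Size([4])
```

Train something like this with a matching objective and the shared space becomes the place where “what you saw” and “what your cortex did” can actually be compared.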

Why visual neuroscience keeps bullying AI in the best way

Computer vision has gotten absurdly good at some things. It can label images, generate fake grandmothers, and identify a raccoon wearing sunglasses with unsettling confidence. But human vision still beats current AI where it counts - robustness, efficiency, generalization, and learning from far less data.

That’s why researchers keep peeking over the fence at the visual cortex.

The visual system has specialized areas, layered processing, and a remarkable ability to extract meaning from noisy input. Neuroscience has mapped some of this organization for decades, including category-selective regions like the fusiform face area for faces and the extrastriate body area for bodies. Reviews over the last few years keep reinforcing the idea that comparing deep networks with brain data can help both fields - better models for AI, better hypotheses for neuroscience (Kriegeskorte & Douglas, 2018; Cichy & Kaiser, 2019).

BRACTIVE leans into that idea. Instead of treating brain activity as a weird side quest, it uses it as guidance for machine learning. Plot twist: the squishy original hardware may still have a few notes for Silicon Valley.

So what did they actually find?

The authors report that BRACTIVE can identify person-specific visual ROIs, including regions selective for faces and bodies, in ways that line up with established neuroscience findings (Nguyen et al., 2025). That’s the first win: the framework seems to recover meaningful brain organization rather than hallucinating nonsense from noisy fMRI data.
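For a feel of what “identifying a face-selective ROI” even means, here is a back-of-the-envelope version on synthetic data: compare each voxel’s response to face images against everything else and keep the voxels that visibly care. The threshold is crude, the data is fake, and this is emphatically not the paper’s method - real pipelines worry about things like multiple-comparison correction.

```python
# Toy ROI identification: flag voxels that respond more to faces than
# to other images. Everything here is synthetic and illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
responses = rng.normal(size=(200, 1000))   # 200 images x 1000 voxels
is_face = np.repeat([True, False], 100)    # first 100 images show faces
responses[is_face, :50] += 1.0             # plant 50 "face voxels" for the demo

# Classic selectivity contrast: faces vs everything else, voxel by voxel.
t_vals, p_vals = stats.ttest_ind(responses[is_face], responses[~is_face], axis=0)
face_roi = t_vals > 3.0                    # crude threshold, demo purposes only
print(f"{face_roi.sum()} voxels flagged as face-selective")
```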

The second win is where things get spicy. They also found that using human visual brain activity to guide deep neural networks improved performance across several benchmarks.

That does not mean we can pour a graduate student into a GPU and call it innovation. It means biologically informed supervision may help AI learn more effectively. The brain, annoyingly, might still be the best tutorial.
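What might “biologically informed supervision” look like in code? One plausible reading - an assumption on my part, not the authors’ published recipe - is a plain auxiliary loss: the network classifies images as usual, but it also has to predict the fMRI responses those same images evoked in a human. The architecture and the 0.1 weighting below are made-up knobs.

```python
# Sketch of brain-guided training as an auxiliary loss: classify images
# AND predict the measured fMRI responses to those images.
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU())
classifier = nn.Linear(256, 10)
brain_head = nn.Linear(256, 4096)  # predict voxel responses from features

images = torch.randn(8, 3, 32, 32)
labels = torch.randint(0, 10, (8,))
fmri = torch.randn(8, 4096)        # measured responses to the same images

feats = backbone(images)
task_loss = nn.functional.cross_entropy(classifier(feats), labels)
brain_loss = nn.functional.mse_loss(brain_head(feats), fmri)
loss = task_loss + 0.1 * brain_loss  # the 0.1 is an invented trade-off knob
loss.backward()
```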

This fits with a broader trend. Recent work comparing artificial neural networks with human neural representations suggests that models perform better when their internal features align more closely with biological visual processing (Schrimpf et al., 2020; Bonner et al., 2021). The dream is not to build a perfect digital cortex. The dream is to steal enough design principles from evolution to stop reinventing bad wheels.
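“Align more closely” has a standard operationalization called representational similarity analysis: check whether the model and the brain agree on which stimuli look alike. Below is a miniature, synthetic-data version of that comparison - the generic technique in the spirit of benchmarks like Brain-Score, not their exact pipeline.

```python
# Representational similarity analysis in miniature: compare how a model
# and a brain "group" the same stimuli. Synthetic data throughout.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
model_feats = rng.normal(size=(50, 256))   # 50 stimuli x model features
brain_resps = rng.normal(size=(50, 4096))  # same 50 stimuli x voxels

# Each system gets a dissimilarity profile over all stimulus pairs...
model_rdm = pdist(model_feats, metric="correlation")
brain_rdm = pdist(brain_resps, metric="correlation")

# ...and "alignment" is how well those profiles rank-correlate.
rho, _ = spearmanr(model_rdm, brain_rdm)
print(f"model-brain representational alignment: {rho:.3f}")
```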

Why you should care, even if you don’t own an fMRI scanner

If this line of work holds up, the payoff could hit in two directions at once.

On the neuroscience side, tools like BRACTIVE could help researchers map functional brain areas more efficiently across people. That matters for understanding how perception varies from person to person and for studying disorders that affect visual processing.

On the AI side, brain-guided training could produce systems that are more data-efficient and more reliable in the wild. Less “I mistake a turtle for a rifle because one pixel sneezed.” More actual resilience.

There’s also a nice philosophical jab here. For years, AI has borrowed the vocabulary of neuroscience while mostly doing its own thing. Now we may be entering a phase where brain data becomes less of a mascot and more of an engineering constraint. The old joke was that artificial neural networks are about as much like real brains as paper airplanes are like hawks. Fair enough. But if the hawk starts offering flight lessons, you’d be dumb not to listen.

The catch, because there is always a catch

fMRI is noisy, expensive, and indirect. It measures blood-oxygen changes, not neurons yelling in real time. Sample sizes in this kind of work can also be limited, and cross-subject alignment is hard because everybody’s cortex folds like a fitted sheet designed by a sadist.

Also, better benchmark performance does not automatically mean better intelligence. AI researchers love benchmarks the way raccoons love shiny garbage. Useful, yes. The whole story, no.

Still, BRACTIVE lands in a lively moment for neuroAI - a field trying to get brains and machines to stop talking past each other and maybe trade blueprints for once.

References

Nguyen X-B, Jang H, Li X, Khan SU, Sinha P, Luu K. BRACTIVE: A Brain Activation Approach to Human Visual Brain Learning. IEEE Trans Pattern Anal Mach Intell. 2025. doi: 10.1109/TPAMI.2025.3612582

Kriegeskorte N, Douglas PK. Cognitive computational neuroscience. Nat Rev Neurosci. 2018;19(11):693-706. doi: 10.1038/s41583-018-0110-3

Cichy RM, Kaiser D. Deep neural networks as scientific models. Trends Cogn Sci. 2019;23(4):305-317. doi: 10.1016/j.tics.2019.01.009

Schrimpf M, Kubilius J, Lee MJ, et al. Brain-Score: Which artificial neural network for object recognition is most brain-like? Neuron. 2020;107(3):495-510.e10. doi: 10.1016/j.neuron.2020.07.012

Bonner MF, Chung S, Saxe A. Bridging the gap between AI and neuroscience. Trends Cogn Sci. 2021;25(10):928-941. doi: 10.1016/j.tics.2021.07.005

Disclaimer: The image accompanying this article is for illustrative purposes only and does not depict actual experimental results, data, or biological mechanisms.