April 20, 2026

Your Brain Runs a Single Currency, and Scientists Just Found the Exchange Rate

In a universe of roughly two trillion galaxies, inside a skull of roughly 1,400 grams, your brain is solving a problem that would bankrupt most trading floors: converting radically different inputs into a single, spendable currency. You see a dog. You read the word "dog." You imagine a dog while staring at the ceiling at 2 a.m. Somehow, your cortex settles on the same internal price for all three. A new study just caught the brain's exchange desk in action, and the receipts are wild.

The Arbitrage Problem Nobody Talks About

Here's the thing about sensory information: it arrives in completely different denominations. Light hits your retina as photons. Text arrives as abstract squiggles your visual cortex learned to decode sometime around first grade. Mental imagery? That's the brain trading with itself, no external input required. Classical neuroscience treated these as separate markets, each with its own processing pipeline, its own dedicated neural real estate, its own decoder ring.

Mitja Nikolaus and colleagues at the University of Toulouse decided to test a contrarian hypothesis: what if a single decoder, blind to input modality, could read brain activity just as well as specialized ones? Their paper, published in eLife, introduces "modality-agnostic decoders" trained on a new dataset called SemReps-8K, roughly 8,500 fMRI trials per subject across six participants (Nikolaus et al., 2025). Subjects watched images, read captions describing those images, and imagined visual scenes, all while a scanner tracked blood flow across their cortex.
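The paper's actual decoder architecture isn't described here, but the core idea can be sketched with a toy linear model: one ridge regression mapping voxel patterns, pooled across modalities, into a shared semantic embedding space. Everything below is illustrative and hypothetical (the dimensions, noise levels, and the `simulate_modality` helper are invented stand-ins, not the study's data or model).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for fMRI trials: 300 trials x 500 voxels per modality,
# each paired with a 64-dim semantic embedding of the stimulus concept.
# (Dimensions are illustrative; SemReps-8K is far larger.)
n_trials, n_voxels, n_dims = 300, 500, 64

def simulate_modality(W_true, noise=1.0):
    """Simulate brain responses as a linear read-out of semantics plus noise."""
    Z = rng.standard_normal((n_trials, n_dims))        # semantic embeddings
    X = Z @ W_true + noise * rng.standard_normal((n_trials, n_voxels))
    return X, Z

W_true = rng.standard_normal((n_dims, n_voxels))       # shared semantic axes
X_img, Z_img = simulate_modality(W_true)               # "image" trials
X_txt, Z_txt = simulate_modality(W_true)               # "caption" trials

# Modality-agnostic decoder: one ridge regression trained on pooled trials,
# blind to which modality each trial came from.
X = np.vstack([X_img, X_txt])
Z = np.vstack([Z_img, Z_txt])
lam = 10.0
B = np.linalg.solve(X.T @ X + lam * np.eye(n_voxels), X.T @ Z)  # ridge weights

# Decode held-out noisier trials (a crude proxy for imagery) from the
# same underlying semantic code.
X_test, Z_test = simulate_modality(W_true, noise=1.5)
Z_hat = X_test @ B
corr = np.corrcoef(Z_hat.ravel(), Z_test.ravel())[0, 1]
print(f"decoding correlation: {corr:.2f}")
```

The point of the sketch is only the pooling step: the decoder never sees a modality label, yet recovers the shared semantic code anyway.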

One Decoder to Rule Them All

The expected outcome? A generalist decoder should underperform the specialists. That's basic portfolio theory: diversification costs you peak returns. But the brain, as usual, didn't read the textbook.

The modality-agnostic decoder matched modality-specific decoders when decoding images. For captions and mental imagery, it actually outperformed them. Read that again. The generalist beat the specialists at their own game, at least for language and imagination. It's as if a currency trader who accepts dollars, euros, and bitcoin simultaneously turned out to be better at pricing euros than the euro-only desk.

Why? Because training across modalities forces the decoder to latch onto higher-level, abstract representations, the brain's equivalent of purchasing-power parity. Strip away the surface noise of pixels versus letters, and what remains is pure semantic content: the concept itself, uncontaminated by its delivery method.
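Why pooling helps can be demonstrated with a toy version of the same setup: if two modalities share one underlying semantic map, a decoder trained on both can borrow statistical strength from the data-rich, cleaner modality to decode the noisy, data-poor one. The numbers and helper functions below are invented for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

n_voxels, n_dims = 300, 32
W = rng.standard_normal((n_dims, n_voxels))            # shared semantic map

def trials(n, noise):
    """Simulate n trials: semantics Z projected through W, plus noise."""
    Z = rng.standard_normal((n, n_dims))
    return Z @ W + noise * rng.standard_normal((n, n_voxels)), Z

def ridge(X, Z, lam=10.0):
    """Closed-form ridge regression from voxels to embeddings."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Z)

def score(B, X, Z):
    """Correlation between decoded and true embeddings."""
    return np.corrcoef((X @ B).ravel(), Z.ravel())[0, 1]

# Captions: few, noisy trials. Images: many, cleaner trials, same semantics.
X_cap, Z_cap = trials(60, noise=2.0)
X_img, Z_img = trials(400, noise=0.5)
X_test, Z_test = trials(100, noise=2.0)                # held-out captions

B_spec = ridge(X_cap, Z_cap)                           # caption-only specialist
B_gen = ridge(np.vstack([X_cap, X_img]),               # pooled generalist
              np.vstack([Z_cap, Z_img]))

print(f"specialist: {score(B_spec, X_test, Z_test):.2f}")
print(f"generalist: {score(B_gen, X_test, Z_test):.2f}")
```

In this toy, the generalist wins on held-out caption trials for the reason the paragraph gives: the shared map W is the only structure common to both training sets, so pooling forces the decoder to estimate it rather than memorize modality-specific noise.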

The Brain's Real Estate Map Gets Redrawn

Using a searchlight analysis (think of it as scanning a small flashlight across the cortex, testing each neighborhood for modality-invariant information), the team found something that should make neural cartographers nervous. Modality-invariant representations aren't confined to some tidy executive suite in the prefrontal cortex. They're everywhere. Large swaths of the brain contain information that doesn't care whether you saw something, read about it, or just thought about it.
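The searchlight idea itself is simple enough to sketch: slide a small window across the voxels and ask, neighborhood by neighborhood, how well a classifier can decode the stimulus from just that patch. Here's a minimal 1-D toy (a real searchlight works in 3-D volumes; the band of signal voxels, the window radius, and the nearest-centroid classifier are all illustrative choices, not the study's pipeline).

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D "cortex": 200 voxels, with condition-coding signal only in a
# band (voxels 80-120). Two conditions (e.g. "dog" vs "cat").
n_voxels, n_trials = 200, 80
labels = rng.integers(0, 2, n_trials)
signal = np.zeros((n_trials, n_voxels))
signal[:, 80:120] = (labels[:, None] * 2 - 1) * 0.8   # condition-coded band
data = signal + rng.standard_normal((n_trials, n_voxels))

def searchlight_accuracy(data, labels, radius=5):
    """Leave-one-out nearest-centroid decoding in a sliding voxel window."""
    n_trials, n_voxels = data.shape
    acc = np.zeros(n_voxels)
    for v in range(n_voxels):
        lo, hi = max(0, v - radius), min(n_voxels, v + radius + 1)
        patch = data[:, lo:hi]
        correct = 0
        for i in range(n_trials):
            mask = np.arange(n_trials) != i             # hold out trial i
            c0 = patch[mask & (labels == 0)].mean(axis=0)
            c1 = patch[mask & (labels == 1)].mean(axis=0)
            pred = int(np.linalg.norm(patch[i] - c1) < np.linalg.norm(patch[i] - c0))
            correct += pred == labels[i]
        acc[v] = correct / n_trials
    return acc

acc = searchlight_accuracy(data, labels)
print("peak accuracy inside band:", acc[80:120].max())
print("mean accuracy outside band:", np.concatenate([acc[:70], acc[130:]]).mean())
```

The output map is the point: accuracy spikes only where the window overlaps the informative band, which is exactly how the study's searchlight reveals which cortical neighborhoods carry modality-invariant information.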

Low-level visual areas (the occipital cortex) still specialize in raw image data. Language regions still prefer text. No surprise there. But higher-level temporal visual areas? They traded happily in both currencies. And the regions richest in modality-invariant representations turned out to be especially good at decoding mental imagery, the brain's most private and least understood market.

Why This Matters Beyond the Scanner

This isn't just an academic flex. The race to decode human thought from brain signals is accelerating fast. Tang and colleagues showed in 2023 that continuous language can be reconstructed from fMRI recordings, translating brain activity into intelligible sentences (Tang et al., 2023). Brain-computer interfaces are now restoring speech to people with paralysis. But most of these systems are modality-specific: they decode what you see or what you hear, not what you think regardless of how the thought got there.

A modality-agnostic approach changes the cost-benefit calculus entirely. Instead of building separate decoders for every input channel, you build one that taps into the brain's own unified currency. Fewer training requirements. Broader applicability. And critically, better performance on the hardest problem of all: decoding imagination, the one modality where there's no external signal to cheat from.

Previous work established that regions like the temporoparietal cortex host modality-invariant representations for sight and sound (Man et al., 2012). What Nikolaus and colleagues add is scale and precision: a dataset large enough to train serious decoders, evidence that the invariance is cortex-wide, and proof that exploiting it actually improves decoding performance.

The Bottom Line

Your brain has been running a unified exchange for every sensory input you've ever processed. It just took neuroscience this long to notice the trading floor was open. The next time you picture a sunset without looking at one, remember: somewhere in your cortex, regions are carrying much of the same semantic information they would if the sunset were real. At the level these decoders read, the brain's internal market barely distinguishes the real thing from the imagined one. Apparently, the price is the same.

References

  1. Nikolaus, M., Mozafari, M., Berry, I., Asher, N., Reddy, L., & VanRullen, R. (2025). Modality-agnostic decoding of vision and language from fMRI. eLife, 14, e107933. https://doi.org/10.7554/eLife.107933

  2. Tang, J., LeBel, A., Jain, S., & Huth, A. G. (2023). Semantic reconstruction of continuous language from non-invasive brain recordings. Nature Neuroscience, 26(5), 858-866. https://doi.org/10.1038/s41593-023-01304-9

  3. Man, K., Kaplan, J. T., Damasio, A., & Meyer, K. (2012). Sight and sound converge to form modality-invariant representations in temporoparietal cortex. Journal of Neuroscience, 32(47), 16629-16636. https://doi.org/10.1523/JNEUROSCI.2342-12.2012

  4. Du, B., Cheng, X., Duan, Y., & Ning, H. (2022). fMRI brain decoding and its applications in brain-computer interface: A survey. Brain Sciences, 12(2), 228. https://doi.org/10.3390/brainsci12020228