January 03, 2026

Reading Arm Movements from Brain Signals Across the Whole Brain (Not Just Motor Cortex)

Reaching for your coffee cup feels simple, but neurologically speaking, it's ridiculously complicated. A study in Cell Reports shows that movement information isn't just hanging out in the motor cortex like the textbooks suggest. It's distributed across large swaths of the brain, and researchers can now decode continuous arm movements from all those regions simultaneously.

Motor Cortex Isn't Running the Show Alone

Brain-computer interface research has traditionally focused on motor cortex, the brain region everyone points to when they talk about movement control. Motor cortex sends the signals that make muscles contract. It's the "command center." Makes sense to focus there, right?

But here's the thing: movement-related signals have been popping up in all sorts of brain regions that aren't motor cortex. The researchers wanted to know just how widespread this movement information is. How much can you learn about someone's arm position and velocity from brain areas that supposedly aren't "about" movement? And can you decode this information continuously, tracking the arm moment by moment?

Recording from All Over

The researchers got access to something pretty special: stereotactic EEG recordings from 18 patients who had electrodes implanted for epilepsy monitoring. These electrodes weren't placed based on the research question; they were there to find seizure sources. But that meant the coverage spanned all kinds of brain regions: motor areas, premotor regions, parietal cortex, and plenty of areas nobody usually thinks about when they think about reaching for things.

While the electrodes were in, the patients performed a 3D reaching task. Reach here, reach there, move your arm around in three-dimensional space. The researchers recorded what the brain was doing during these movements.

Decoding 12 Different Measures of How the Arm Moves

The decoder didn't just pull out "the arm moved." It extracted continuous information about 12 different kinematic variables: hand position, velocity, and acceleration, each in three dimensions, plus additional measures of how the arm was moving. That's a lot of detail.

And here's the key finding: movement information wasn't confined to motor cortex. Multiple brain regions contributed to decoding accuracy. Areas that traditional neuroscience would say are "about" other things still carried useful information about arm movement.

The decoding was also continuous: it tracked the arm's trajectory throughout the movement rather than just flagging discrete events like "movement started" or "movement ended." This matters for practical applications where you want to know exactly where someone is trying to move their arm, not just whether they're moving at all.
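To make the idea concrete, here is a minimal synthetic sketch of this kind of continuous kinematic decoding. This is not the study's actual pipeline; the ridge-regression decoder, the electrode count, the nine simulated kinematic variables, and the noise levels are all illustrative assumptions. The point is the shape of the problem: map many-channel neural features to several continuous movement variables at once, with a time-based train/test split because the signal is a continuous stream.

```python
# Illustrative sketch only: linear ridge decoding of synthetic kinematics
# from synthetic electrode features. Not the study's method or data.
import numpy as np

rng = np.random.default_rng(0)

n_samples, n_electrodes, n_kinematics = 2000, 60, 9  # 3D pos + vel + acc (assumed)

# Synthetic "true" kinematics, and neural features that linearly encode them.
kinematics = rng.standard_normal((n_samples, n_kinematics))
mixing = rng.standard_normal((n_kinematics, n_electrodes))
features = kinematics @ mixing + 0.5 * rng.standard_normal((n_samples, n_electrodes))

# Split in time, not at random: continuous decoding means predicting
# later moments from a model fit on earlier ones.
split = 1500
X_train, X_test = features[:split], features[split:]
y_train, y_test = kinematics[:split], kinematics[split:]

# Ridge regression in closed form: W = (X'X + lam*I)^-1 X'Y
lam = 1.0
W = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n_electrodes),
                    X_train.T @ y_train)
y_pred = X_test @ W

# Per-variable correlation between decoded and true kinematics.
r = [np.corrcoef(y_test[:, k], y_pred[:, k])[0, 1] for k in range(n_kinematics)]
print(np.round(r, 2))
```

Real pipelines would of course extract spectral features from the voltage traces first and use far more careful validation; this only shows why "decode 12 continuous variables" reduces to a multivariate regression problem.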

Why This Matters for Brain-Computer Interfaces

Brain-computer interfaces are trying to restore movement to paralyzed patients. The dream is that someone who can't move their arm could control a robotic prosthetic just by thinking about moving. Most BCIs have focused on reading signals from motor cortex because that's the obvious place to look.

But what if motor cortex is damaged? Stroke, injury, disease. The exact region you're trying to read from might be compromised. These findings suggest that signals from other brain areas could potentially compensate. The movement information is out there in parietal cortex, premotor regions, and elsewhere. You might not need motor cortex specifically.

This also helps explain something puzzling about brain damage. People with localized damage to motor cortex often recover movement ability to some degree. How? If movement control were entirely localized to one region, damage there should be catastrophic. But the distributed representation of movement information means other areas might pick up the slack.

The Brain Doesn't Believe in Specialization

This finding fits a broader pattern in neuroscience. The old model of the brain had neatly labeled boxes: this region does vision, this one does language, this one does movement. Clean and simple.

The reality is messier. Most cognitive functions are distributed across networks, not localized to single regions. Movement isn't just a motor cortex thing. It's a whole-brain thing, with different regions contributing different aspects of the computation.

From an engineering perspective, this is actually good news. If you're trying to build a brain-reading device, having multiple redundant sources of information is helpful. You can combine signals from wherever you can get them. You're not locked into one specific spot that might be damaged or inaccessible.
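The redundancy argument can be sketched in a few lines of simulation. Everything here is made up for illustration: the two "regions," their channel counts, and their noise levels are assumptions, not figures from the study. The sketch just shows that when two noisy regions each carry the same movement signal, pooling their channels tends to decode at least as well as either region alone.

```python
# Illustrative sketch only: decoding a 1D velocity trace from two simulated
# "regions", separately and combined. Region names and noise levels are
# arbitrary assumptions, not values from the study.
import numpy as np

rng = np.random.default_rng(1)
n = 3000
velocity = np.cumsum(rng.standard_normal(n)) * 0.1  # smooth-ish synthetic trace

def simulate_region(signal, n_chan, noise):
    """Channels that each carry a noisy scaled copy of the signal."""
    w = rng.standard_normal(n_chan)
    return np.outer(signal, w) + noise * rng.standard_normal((len(signal), n_chan))

regions = {
    "motor": simulate_region(velocity, 20, 3.0),
    "parietal": simulate_region(velocity, 20, 3.0),
}
regions["combined"] = np.hstack([regions["motor"], regions["parietal"]])

def decode_corr(X, y, split=2000, lam=10.0):
    """Fit ridge regression on early samples, score correlation on late ones."""
    Xtr, Xte, ytr, yte = X[:split], X[split:], y[:split], y[split:]
    W = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(X.shape[1]), Xtr.T @ ytr)
    return np.corrcoef(yte, Xte @ W)[0, 1]

scores = {name: decode_corr(X, velocity) for name, X in regions.items()}
print(scores)  # combined typically matches or beats either region alone
```

The design point is simple: a decoder that can draw on whichever channels are available degrades gracefully when one source is lost, which is exactly the property you want if motor cortex might be damaged.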

From a neuroscience perspective, it raises interesting questions about why the brain is organized this way. Is the distributed representation an evolutionary adaptation for robustness? Is it just how the computations work out most efficiently? The answer probably depends on who you ask, but the finding is solid: movement information is everywhere, and we can read it out across the whole brain if we know how to look.


Reference: Bhattacharyya S, et al. (2025). Decoding continuous goal-directed movement from human brain-wide intracranial recordings. Cell Reports. doi: 10.1016/j.celrep.2025.116328 | PMID: 40975869
