April 29, 2026

When The Average Brain Lies

A school psychologist once described a 10-year-old patient to me this way: on Monday he could stop himself mid-blurt, wait his turn, and look like a small ambassador of self-control; by Thursday, under the same rules, he was all impulse and apology. If you averaged those days together, you would get a tidy child who existed mostly on paper. Neuroscience, it seems, has been fond of that sort of paper child for rather a long time.

That is why this new Nature Communications paper by Percy Mistry and colleagues lands with the faintly exasperated air of a gardener pointing out that plants are living things, not green spreadsheets. Using brain imaging and behavior from more than 4,000 young participants, the authors asked a deceptively simple question: if you look at brain-behavior patterns across a big crowd, do you learn how a single person's mind works from moment to moment? Answer: often, no. Sometimes you learn the opposite.[1]

The Average Is A Sneaky Little Goblin

The paper centers on two ideas with names only a statistician could love before coffee: nonergodicity and Simpson's paradox.

Nonergodicity means that patterns across people are not the same as patterns within one person over time. In plain English, the crowd is not a person in a trench coat. A brain pattern that seems linked to better control when you compare Alice to Bob may flip direction when you compare Alice's good trials to Alice's bad ones.

Simpson's paradox is the nastier trick. You combine data across groups, and the overall pattern can reverse the pattern seen inside each group. Statistics occasionally behaves like a magician who has mistaken misdirection for ethics.

Mistry and colleagues found exactly that kind of reversal in cognitive control, the mental ability that helps you stop, wait, switch gears, and avoid doing the first foolish thing that pops into your head. They used a stop-signal task, basically a laboratory version of "hit the brakes." Across subjects, some brain-behavior links looked one way. Within subjects, trial by trial, those same links often went the other way.[1]
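The reversal described above is easy to produce with toy numbers. The sketch below is a hypothetical illustration, not data from the paper: within each simulated "person," more of some signal x goes with worse performance y, yet because people with higher average x also have higher average y, the pooled across-person correlation flips positive.

```python
# Toy demonstration of Simpson's paradox / nonergodicity.
# All values are invented for illustration only.

def pearson_r(xs, ys):
    """Plain Pearson correlation, no external libraries."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

people = []
for offset in (0.0, 5.0, 10.0):  # each simulated person has a different baseline
    # Within one person: as x rises, y falls (perfect negative relation).
    xs = [offset + t for t in range(5)]
    ys = [offset * 2 - t for t in range(5)]
    people.append((xs, ys))

# Within-person correlations: each is exactly -1.
within = [pearson_r(xs, ys) for xs, ys in people]

# Pool everyone together and the sign reverses, because the
# between-person baseline differences dominate the pooled variance.
pooled_x = [x for xs, _ in people for x in xs]
pooled_y = [y for _, ys in people for y in ys]
between = pearson_r(pooled_x, pooled_y)

print("within-person r:", [round(r, 2) for r in within])   # all -1.0
print("pooled r:", round(between, 2))                      # strongly positive
```

The group-level analyst who only sees the pooled correlation would conclude the opposite of what is true for every single individual in the sample, which is precisely the trap the paper documents at scale.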

Your Brain Keeps Two Toolboxes

The authors did not stop at saying, "Averages can mislead." They also used a Bayesian model to separate reactive control from proactive control.

Reactive control is what happens when the brain slams on the brakes after trouble appears. Proactive control is more like easing off the gas because trouble probably is coming. One is the cat leaping off the counter when it hears footsteps. The other is the cat deciding, for once, not to get on the counter.

The paper found that reactive and proactive control showed dissociated neural representations within individuals, meaning the brain does not seem to run them as one mushy all-purpose "self-control blob." It uses different circuitry for different styles of control.[1] That fits broader work arguing that cognitive control is not one tidy faculty sitting in the prefrontal cortex wearing a manager's lanyard, but a distributed set of processes with different roles and different failure modes.[2,5]

The authors also found that children who regulated control adaptively versus maladaptively showed different, and sometimes opposite, brain-behavior relationships.[1] Which is exactly what clinicians, teachers, and parents have been trying to tell us between sighs: two kids can produce the same outward mess for very different inner reasons.

Why This Matters Outside Statistics Class

If these findings hold up, they carry a sobering message for neuroscience and psychiatry. A great deal of brain research still leans on between-subject averages. That has value. But if group-level patterns can reverse within a person, then some theories about "the brain basis" of attention, inhibition, or symptoms may be potted plants labeled as oak trees.

This is also why the paper connects so naturally to the current push toward precision neuroscience and precision psychiatry. Recent reviews have argued that group findings often fail to generalize cleanly to the individual, and that future clinical tools will need denser, more person-specific measurement rather than prettier averages.[2,6] Longitudinal work on "dynamic computational phenotyping" makes a similar point from behavior alone: what looks like noise may actually be structure unfolding over time.[4]

That matters in the real world. If you want to predict when a child will struggle, tailor a treatment, or decide whether a brain-based marker means anything for one patient, the average child is not enough. Medicine learned this lesson long ago. Your doctor does not tell you your blood pressure is fine because the waiting room's average looks respectable.

The Thorny Bit

None of this makes research easier. Within-person neuroscience is expensive, data-hungry, and methodologically fussy. Brains are not roses that bloom on schedule for our convenience. You need repeated measurements, better task reliability, and models that can tell signal from nuisance.[3,4]

So the old dream of one grand average brain explaining everybody may need a little pruning. Frankly, it has needed pruning for years.

This paper's real charm is that it does not merely complain about the problem. It shows, at scale, that the problem is already sitting inside standard cognitive neuroscience analyses, quietly rearranging the furniture. The average was never useless. It was just overpromoted.

References

  1. Mistry PK, Branigan NK, Gao Z, Cai W, Menon V. Nonergodicity and Simpson's paradox in neurocognitive dynamics of cognitive control. Nature Communications. 2026;17:3494. DOI: https://doi.org/10.1038/s41467-026-71404-0
  2. Mattoni M, Fisher AJ, Gates KM, Chein J, Olino TM. Group-to-individual generalizability and individual-level inferences in cognitive neuroscience. Neuroscience and Biobehavioral Reviews. 2025;169:106024. DOI: https://doi.org/10.1016/j.neubiorev.2025.106024 PMCID: https://pmc.ncbi.nlm.nih.gov/articles/PMC11835466/
  3. Zorowitz S, Niv Y. Improving the reliability of cognitive task measures: A narrative review. Biological Psychiatry: Cognitive Neuroscience and Neuroimaging. 2023;8(8):789-797. DOI: https://doi.org/10.1016/j.bpsc.2023.02.004 PMCID: https://pmc.ncbi.nlm.nih.gov/articles/PMC10440239/
  4. Schurr R, Reznik D, Hillman H, Bhui R, Gershman SJ. Dynamic computational phenotyping of human cognition. Nature Human Behaviour. 2024;8:917-931. DOI: https://doi.org/10.1038/s41562-024-01814-x PMCID: https://pmc.ncbi.nlm.nih.gov/articles/PMC11132988/
  5. Friedman NP, Robbins TW. The role of prefrontal cortex in cognitive control and executive function. Neuropsychopharmacology. 2022;47(1):72-89. DOI: https://doi.org/10.1038/s41386-021-01132-0 PMCID: https://pmc.ncbi.nlm.nih.gov/articles/PMC8617292/
  6. Williams LM, Gabrieli SW. Neuroimaging for precision medicine in psychiatry. Neuropsychopharmacology. 2024;50(1):246-257. DOI: https://doi.org/10.1038/s41386-024-01917-z PMCID: https://pmc.ncbi.nlm.nih.gov/articles/PMC11525658/

Disclaimer: The image accompanying this article is for illustrative purposes only and does not depict actual experimental results, data, or biological mechanisms.