"We expected to see one pattern or the other," you can almost hear Hannah McDermott saying as she stared at the EEG data. "What we got was both - just at completely different times." That's the kind of result that makes a neuroscientist's week and breaks everyone else's intuition about how brains work.
Here's the deal: your brain is a prediction machine. Every waking moment, it's running a nonstop forecasting operation, guessing what's coming next based on patterns it's already learned. See a flash of lightning? Your brain has already queued up the thunder before your ears catch it. This isn't laziness - it's efficiency. Why waste precious neural bandwidth processing stuff you already saw coming?
The Great Neuroscience Bar Fight
For years, neuroscientists have been arguing about how the brain handles expected stimuli, and it's gotten surprisingly heated. One camp says the brain "sharpens" its response - basically turning up the signal-to-noise ratio on the thing it predicted, like noise-canceling headphones for everything except the expected input. The other camp says the brain "dampens" its response - dialing down the volume on things it already predicted, like your phone muting a notification you've seen twelve times.
Both sides had solid evidence. Both sides had strong opinions. It was the neuroscience equivalent of the "is a hot dog a sandwich" debate, except with fMRI machines and grant funding on the line.
Plot Twist: Everyone Was Right (Sort Of)
Enter McDermott, de Martino, Schwiedrzik, and Auksztulewicz with an elegant study published in eLife (DOI: 10.7554/eLife.103689). They sat 31 participants down, showed them pairs of scene images in quick succession, and used multivariate EEG decoding to track exactly how the brain represented those images - millisecond by millisecond.
The trick? One image category predicted another. See a kitchen scene, and a beach scene is likely next. Your brain picks up on this statistical regularity surprisingly fast.
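To make that regularity concrete, here is a minimal sketch (with made-up category pairs, not the study's actual stimuli) of the conditional probabilities that statistical learning tracks:

```python
from collections import Counter, defaultdict

# Hypothetical trial sequence: each tuple is (leading image, following image).
pairs = [("kitchen", "beach"), ("kitchen", "beach"), ("kitchen", "forest"),
         ("office", "street"), ("office", "street"), ("office", "beach")]

# Count how often each category follows each leading category.
counts = defaultdict(Counter)
for first, second in pairs:
    counts[first][second] += 1

# Conditional probability P(second | first) - the regularity the brain learns.
p = {a: {b: n / sum(c.values()) for b, n in c.items()} for a, c in counts.items()}
print(p["kitchen"]["beach"])  # → 0.6666666666666666
```

After enough pairs like these, "kitchen" makes "beach" the best bet, which is exactly the kind of statistic an expectation can be built on.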
Here's where it gets wild. Within a single trial, the brain does both things - but on a schedule. In the first ~120-180 milliseconds after an expected image appears, the brain sharpens its representation: the neural signal actually gets cleaner and more decodable. Then, around 280-300 milliseconds after stimulus onset, it switches to dampening mode, pulling back on the response like it's saying, "Got it, move along."
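The time-resolved decoding idea can be illustrated with a toy sketch - simulated data and a simple nearest-centroid classifier, not the authors' actual pipeline. A classifier is trained at each timepoint on the sensor pattern, and the accuracy trace shows when category information is decodable:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "EEG" data: trials x channels x timepoints, two scene categories.
# Purely simulated; a category-specific pattern is injected in a mid-latency window.
n_trials, n_channels, n_times = 200, 32, 100
labels = rng.integers(0, 2, n_trials)              # 0 = kitchen, 1 = beach
X = rng.normal(0, 1, (n_trials, n_channels, n_times))
pattern = rng.normal(0, 1, n_channels)
X[labels == 1, :, 30:60] += pattern[None, :, None]

def decode_over_time(X, y, n_train=150):
    """Nearest-centroid decoding accuracy at each timepoint."""
    acc = np.empty(X.shape[2])
    train, test = slice(0, n_train), slice(n_train, None)
    for t in range(X.shape[2]):
        Xt = X[:, :, t]
        c0 = Xt[train][y[train] == 0].mean(axis=0)
        c1 = Xt[train][y[train] == 1].mean(axis=0)
        # Classify each held-out trial by the closer class centroid
        d0 = np.linalg.norm(Xt[test] - c0, axis=1)
        d1 = np.linalg.norm(Xt[test] - c1, axis=1)
        acc[t] = ((d1 < d0).astype(int) == y[test]).mean()
    return acc

acc = decode_over_time(X, labels)
print(acc[:30].mean(), acc[30:60].mean())  # near chance, then well above chance
```

Comparing such accuracy traces for expected versus unexpected images, timepoint by timepoint, is what lets a study distinguish an early sharpening window from a later dampening one.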
This lines up beautifully with the Opposing Process Theory, which proposed that sharpening and dampening aren't rivals - they're coworkers on different shifts (Walsh et al., 2020; Kok et al., 2012).
But Wait, There's a Second Plot Twist
The researchers didn't stop there. When they zoomed out and looked at how these effects changed across trials as participants learned the image pairings, the pattern flipped. Early in the experiment, dampening dominated. Later, as the brain nailed down the statistical patterns, sharpening took over.
Think of it like learning to drive. At first, everything is overwhelming - your brain suppresses broadly because it hasn't figured out what matters yet. But once you've internalized the patterns, your neural representations sharpen up, homing in on what's relevant with surgical precision.
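The across-trial logic - split trials by learning stage and compare the expectation effect - can be sketched with simulated numbers (the trend below is invented for illustration, with dampening as a negative effect early and sharpening as a positive one late):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy per-trial expectation effect (expected minus unexpected representation
# strength), in trial order. Simulated so dampening dominates early and
# sharpening dominates late, mirroring the across-trial flip.
n_trials = 400
trend = np.linspace(-0.5, 0.5, n_trials)        # dampening -> sharpening
effect = trend + rng.normal(0, 0.3, n_trials)   # add trial-to-trial noise

early, late = effect[: n_trials // 2], effect[n_trials // 2:]
print(early.mean() < 0 < late.mean())  # → True
```

Averaging the effect within early and late trial bins is a standard way to test whether a neural signature changes sign as learning progresses.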
The team argues this reflects hierarchical learning: higher brain areas quickly catch on and start dampening, while lower sensory areas take longer to build up sharp, refined predictions (de Lange et al., 2018; Richter & de Lange, 2019).
Why Should You Care?
Beyond settling a long-running debate, this work reshapes how we think about learning itself. Your brain isn't just passively soaking up information - it's actively recalibrating its prediction machinery at multiple levels and timescales simultaneously. It's a bit like an orchestra tuning up: each section comes in at different times, but the result is a coordinated performance.
This has real implications for understanding conditions where prediction goes haywire - think anxiety (overactive prediction engines), autism (different prediction calibration), or schizophrenia (prediction errors that won't quit). If sharpening and dampening unfold on different timescales, therapies might need to target specific windows of processing rather than applying blanket fixes.
The brain, it turns out, doesn't pick sides in the sharpening-versus-dampening debate. It just runs both programs on different clocks. Classic multitasker.
References:
- McDermott, H. H., de Martino, F., Schwiedrzik, C. M., & Auksztulewicz, R. (2026). Dissociable dynamic effects of expectation during statistical learning. eLife, 14, e103689. DOI: 10.7554/eLife.103689
- Kok, P., Jehee, J. F. M., & de Lange, F. P. (2012). Less is more: Expectation sharpens representations in the primary visual cortex. Neuron, 75(2), 265-270. DOI: 10.1016/j.neuron.2012.04.034
- Richter, D., & de Lange, F. P. (2019). Expectation suppression dampens sensory representations of predicted stimuli. Journal of Neuroscience, 38(50), 10592-10599. DOI: 10.1523/JNEUROSCI.1082-18.2018
- de Lange, F. P., Heilbron, M., & Kok, P. (2018). How do expectations shape perception? Trends in Cognitive Sciences, 22(9), 764-779. DOI: 10.1016/j.tics.2018.06.002
- Walsh, K. S., McGovern, D. P., Clark, A., & O'Connell, R. G. (2020). Evaluating the neurophysiological evidence for predictive processing as a model of perception. Annals of the New York Academy of Sciences, 1464(1), 242-268. DOI: 10.1111/nyas.14321