Here's one of the hardest questions in science: when you look at brain activity, how do you know if there's someone home? Not just neurons firing, but actual experience happening? A review in Neuroscience & Biobehavioral Reviews proposes something audacious: a framework for setting consciousness thresholds using MRI-derived measures.
Yes, they're trying to put a number on consciousness. No, it's not as crazy as it sounds.
The Problem With "Correlates"
Neuroscience has spent decades identifying what we call neural correlates of consciousness: brain patterns that seem to accompany conscious experience. When you're aware of something, certain areas light up. When you're not, they don't.
But here's the frustrating limitation: correlates are descriptive, not predictive. They tell you "when people report being conscious, their brains look like this." They don't tell you "brains that look like this are definitely conscious."
This matters a lot in practical situations. You have a patient in a coma who can't report anything. Their brain is doing something. Is there experience happening? The correlates approach just shrugs. We need something more.
From Description to Prediction: The Threshold Idea
The framework proposed in this review takes a different approach. Instead of just describing what conscious brains look like, it aims to derive computational measures that could serve as thresholds.
Below the threshold: neural activity is happening, but it doesn't support consciousness. Above the threshold: someone's home.
This is a fundamentally different claim. It's saying not just "conscious brains tend to have these properties" but "if a brain has these properties above a certain level, that's sufficient for consciousness."
It's moving from correlation to something closer to a diagnostic criterion.
What Kind of Measures Are We Talking About?
The framework focuses on MRI-derived neurophysiological measures: quantities you can compute from brain imaging data. These might include:
- Complexity measures that capture how differentiated and integrated brain activity is
- Connectivity patterns that show how information flows between regions
- Dynamic signatures that capture how the brain's state evolves over time
The idea is that consciousness requires a certain type of information processing, one that's complex enough to support rich experience but integrated enough to feel unified. Measure the relevant properties, set a threshold, and you have a diagnostic tool.
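To make "complexity measure" concrete, here's a toy sketch. This is not the paper's method: it uses a simplified Lempel-Ziv-style phrase count (real studies typically use the LZ76 variant, often on EEG or perturbation responses rather than raw MRI data), and the function names and the 0.5 cutoff are invented purely for illustration.

```python
import numpy as np

def lz_phrase_count(bits: str) -> int:
    """Count distinct phrases in a simple LZ78-style sequential parse."""
    seen, phrase, count = set(), "", 0
    for ch in bits:
        phrase += ch
        if phrase not in seen:  # new phrase: record it and start fresh
            seen.add(phrase)
            count += 1
            phrase = ""
    return count

def normalized_lz_complexity(signal: np.ndarray) -> float:
    """Binarize the signal around its median, then normalize the phrase
    count so that random noise scores near 1 and regular signals lower."""
    med = np.median(signal)
    bits = "".join("1" if x > med else "0" for x in signal)
    n = len(bits)
    return lz_phrase_count(bits) * np.log2(n) / n

rng = np.random.default_rng(0)
noisy = rng.standard_normal(2000)                   # highly differentiated
regular = np.sin(np.linspace(0, 20 * np.pi, 2000))  # highly repetitive

THRESHOLD = 0.5  # arbitrary illustrative cutoff, not a validated value
print("noise score:", round(normalized_lz_complexity(noisy), 2),
      "above threshold?", normalized_lz_complexity(noisy) > THRESHOLD)
print("sine score: ", round(normalized_lz_complexity(regular), 2),
      "above threshold?", normalized_lz_complexity(regular) > THRESHOLD)
```

Note that pure random noise maxes out a measure like this, and noise is presumably not conscious. That's exactly why serious frameworks pair differentiation with integration rather than relying on complexity alone.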
Of course, actually implementing this is harder than describing it. What exactly should the threshold be? How do you validate it? We'll get to those challenges.
Why This Matters for Real Patients
The clinical applications are obvious and important. Disorders of consciousness are devastating conditions where patients may or may not be aware, and we can't tell from the outside.
Patients in vegetative states sometimes show brain responses suggesting they're processing information, maybe even following commands, despite appearing completely unresponsive. Some of them are actually conscious but unable to communicate. Others aren't. How do we tell the difference?
Similarly, anesthesia isn't as well understood as we'd like. Most of the time, anesthetized patients aren't conscious. But sometimes they are, and they can't tell anyone because they're paralyzed. An objective measure of consciousness during surgery would be incredibly valuable.
Right now, clinicians make educated guesses based on behavior, EEG patterns, and clinical experience. A validated threshold-based measure could transform these judgment calls into something more like diagnostic tests.
The Wild Implications: Conscious Robots?
The framework also has implications that venture into science fiction territory. If we have genuine thresholds for consciousness based on neurophysiological properties, we could theoretically apply them to non-biological systems.
Does this robot have the right information processing signatures above the consciousness threshold? If the measures are about computation and information processing, there's no principled reason they couldn't apply to artificial systems.
This doesn't mean we'll be diagnosing conscious AIs anytime soon. But it does mean the framework forces us to think about what we actually believe consciousness requires. Is it the specific biological substrate? Or is it the type of information processing, regardless of what's doing the processing?
The framework leans toward the latter, which has profound implications for how we think about minds and machines.
The Circularity Problem (And Why It's Not Fatal)
Let's be honest about the challenges. Deriving consciousness thresholds has a nasty circularity problem.
How do you validate your threshold? You compare your measure against cases where you know whether consciousness is present or absent. But how do you know consciousness is present? Either the person tells you (which requires them to already be conscious and communicative) or you assume based on other criteria (which begs the question).
You can't step outside consciousness to get an objective view. The only consciousness you can directly verify is your own.
The authors acknowledge this. Their approach is to anchor the framework in cases where conscious state is relatively clear (healthy wakeful people versus deeply anesthetized people) and then extend to ambiguous cases.
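That anchoring strategy can be sketched in a few lines, assuming (hypothetically) that each person has already been summarized as a single scalar score: calibrate a cutoff on the unambiguous groups, then apply it to an ambiguous case. Every name and number below is invented for illustration.

```python
import numpy as np

def calibrate_threshold(awake_scores, anesthetized_scores):
    """Pick the cutoff that best separates the two anchor groups,
    by maximizing balanced accuracy over candidate cutoffs."""
    candidates = np.sort(np.concatenate([awake_scores, anesthetized_scores]))
    best_t, best_acc = None, -1.0
    for t in candidates:
        sens = np.mean(awake_scores >= t)        # awake correctly above cutoff
        spec = np.mean(anesthetized_scores < t)  # anesthetized correctly below
        acc = (sens + spec) / 2
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

# synthetic, clearly separated anchor groups (purely illustrative numbers)
rng = np.random.default_rng(1)
awake = rng.normal(0.8, 0.05, 50)
anesthetized = rng.normal(0.3, 0.05, 50)
t, acc = calibrate_threshold(awake, anesthetized)

# an ambiguous new patient is then scored against the calibrated cutoff
patient_score = 0.55
print("cutoff:", round(t, 2), "| anchor-group balanced accuracy:", acc)
print("patient above cutoff?", patient_score >= t)
```

The hard part, of course, is everything this sketch assumes away: whether the score generalizes from the clear cases to the ambiguous ones is precisely the open question.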
It's not perfect, but it's a strategy. And honestly, all consciousness research faces some version of this problem. At least this framework makes the assumptions explicit.
The Roadmap Value
Even if the specific measures and thresholds proposed here don't work out exactly right, the framework provides something valuable: a roadmap for what progress would look like.
It defines what questions need to be answered. It specifies what kind of validation would be convincing. It connects theoretical ideas about consciousness to practical measurement challenges.
In a field where people can argue endlessly about philosophy without making empirical progress, that's worth something.
The Bottom Line
Consciousness remains mysterious, but that doesn't mean we can't make progress on measuring it. This framework proposes moving from describing neural correlates to deriving actual thresholds, from "brains of conscious people look like this" to "brain activity above this threshold supports consciousness."
The applications are real and urgent: better diagnosis for patients who can't communicate, better monitoring during anesthesia, and eventually maybe even answers to questions about artificial consciousness.
Will it work? The honest answer is we don't know yet. But the framework tells us what success would look like and how to pursue it. In consciousness research, that counts as a win.
Reference: Bhattacharyya S, et al. (2025). Beyond the brain: a computational MRI-derived neurophysiological framework for consciousness. Neuroscience & Biobehavioral Reviews. doi: 10.1016/j.neubiorev.2025.106110 | PMID: 41110526