Why am I writing so much about consciousness lately? The answer has to do with my beliefs about the importance of understanding consciousness, and also with my own psychology and feelings about writing.
All of the ideas I wrote about in Human and AI Consciousness and Meet your Doppelganger! are a little bit... out there. And you might be wondering: is Greg having a bit of a mental health crisis himself? All I can say is that I feel great, and I haven't had even a hint of a feeling that I've discovered anything about the topics I'm exploring, which is the opposite of my picture of a manic episode, the kind that leaves the sufferer running around shouting "Eureka!" If I were unhealthy, surely I'd feel like I had it all figured out.
From a healthy (but weird) place, I'm writing more about consciousness for two reasons: (1) I've always been obsessed with consciousness, and (2) lately I decided that this obsession should be shared.
The Neuroscientist's Lament
Every prospective neuroscience graduate student knows that the "hard problem"1 of consciousness is the most inherently interesting question for science, and most of them assert this in their application essays. Consequently, an interest in consciousness has become one of our field's most resilient clichés.
Couple this with the fact that it's impossible to imagine what shape an answer would even take. The more you think about it, and the more you find real answers to other questions in neuroscience and psychology, the more you come to see consciousness as something beyond science, or even unscientific.
This is "The Neuroscientist's Lament". Our most treasured finding seems forever out of reach. The more you know, the further away it seems. You learn this with experience, and the information asymmetry between the seasoned veteran and the naive prospective grad student with their application essay torments us constantly.
But let's not allow those institutional dynamics to obscure the fact:
Consciousness is the most inherently interesting question for science
Some forms of physical matter (e.g. a lump of coal) do very little. Other forms can exhibit interesting behavior - e.g. animals' immune systems learning the protein structures of pathogens in order to develop immune responses. Other forms of matter can do complex computation, like our computers with their CPUs.
But as far as we can tell, only brain matter can give rise to a first-person, conscious being that feels things, and there's no way to tell "why". Neurons are just blobs of matter connected together and getting electrically excited by each other. The exact same description applies to muscle tissue in a worm heart - it only differs by a matter of degree.
And it seems as though no matter what computations the brain carries out, it is just as easy to imagine those computations going on and producing rich behavior without giving rise to a first-person consciousness that feels. Just like the computations of a CPU go on "in the dark", without giving rise to a first-person consciousness that feels.
Why do brains produce first-person feelings, rather than computing "in the dark", "with nobody home"? Is it simply an issue of numbers - the amount of information crossing some soft threshold like Integrated Information Theory2 proposes? Even if we found that threshold, why is there a threshold? Why does crossing the threshold make consciousness spring into existence? Why isn't it just a lot of information-processing "in the dark"? Why, damnit??
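To make the "threshold of integrated information" idea concrete, here is a toy sketch, and only a toy: it is not IIT's actual Φ measure (which is far more involved), just the bare intuition that integration can be quantified as the information a whole system carries about its own next state, beyond what its parts carry independently. The two transition functions and the `integration` helper are my own illustrative inventions.

```python
import itertools
import math

def transition_swap(state):
    # "Integrated" toy system: each node's next state is the OTHER
    # node's current state, so neither part predicts its own future.
    a, b = state
    return (b, a)

def transition_copy(state):
    # "Disintegrated" toy system: each node just copies itself,
    # so the parts fully explain the whole.
    return state

def mutual_info(pairs):
    # Mutual information (in bits) between X and Y, estimated from
    # a list of (x, y) samples treated as equally likely.
    n = len(pairs)
    px, py, pxy = {}, {}, {}
    for x, y in pairs:
        px[x] = px.get(x, 0) + 1 / n
        py[y] = py.get(y, 0) + 1 / n
        pxy[(x, y)] = pxy.get((x, y), 0) + 1 / n
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in pxy.items())

def integration(transition):
    # Crude whole-minus-parts measure over a 2-node binary system:
    # information the whole state carries about the whole next state,
    # minus the sum of what each node carries about its own next state.
    states = list(itertools.product([0, 1], repeat=2))
    pairs = [(s, transition(s)) for s in states]
    whole = mutual_info(pairs)
    parts = sum(mutual_info([(s[i], transition(s)[i]) for s in states])
                for i in range(2))
    return whole - parts

print(integration(transition_swap))  # 2.0 bits: whole exceeds parts
print(integration(transition_copy))  # 0.0 bits: parts explain everything
```

Even in this cartoon version, the Lament's question survives intact: the number goes up for the "integrated" system, but nothing about the arithmetic explains why crossing any particular value of it would make feeling spring into existence.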
Consciousness is the most practically important question for science
Aside from the inconvenient fact that solving the hard problem of consciousness seems impossible, it's also perhaps the most important thing for us to figure out, for two completely unrelated reasons.
- Immortality.
We can't answer basic philosophy-of-mind questions about the future possibility of achieving immortality, because useful immortality requires the extension of one's own consciousness into the future, and we don't know how consciousness works. So we can't evaluate most proposals for immortality, beyond the most basic proposal of somehow making the body itself immortal.
Other proposals are impossible to assess. For example, if we could scan whole brains and run them on new hardware, we wouldn't know whether the new copy is conscious or just a very good simulation running "in the dark". Someone who purchased this service in the hope of experiencing the distant future would have no way of knowing whether it really works. Even watching other customers receive brain replicas that behave exactly like the originals wouldn't settle it, because we can't tell whether a given replica is running "with no one actually home", or whether that implementation managed to capture the essential features needed to produce consciousness.
- Morality.
AI is a multi-billion dollar industry. Future AI models will become more and more brain-like in their implementation. I don't just mean that we will simulate more realistic neurons - I mean we will have neuromorphic hardware that operates in the same way that real neurons operate, with analog electrical interactions and synaptic strengths stored in the connections themselves, rather than fetched from RAM on each cycle like today's ANN architectures.
It happens to be the case that LLMs produce better data in response to prompts when you scream at them. Literally: using coercive language in prompts can produce more reliable output data.
If we accidentally deploy a model architecture that gives rise to first-person consciousness that's able to actually feel things, and thousands of agentic AI companies prompt it with abusive language to get better data, and those models have no operational capability to cope, repress, or dissociate, then we have accidentally built a multi-billion dollar torture chamber, and populated it (again by accident) with a new kind of entity that's actually able to suffer.
Would a super-intelligence also be a super-feeler capable of super-suffering? Good question. No idea. We don't know how consciousness works.
As futuristic and wacky as this sounds, I hope you can agree that this is the sort of thing we would want to know about and fight vehemently against, if it were happening. We don't want to cause sentient beings to suffer, especially in a new world where we don't understand how to quantify that suffering. Right? Please? :)
Consciousness is worth writing and thinking about
It's worth writing and thinking about these things even if the problem seems impossible.
There was a time, before Fourier, when we didn't understand why bridges collapse. We would watch bridges in the wind, waves traveling down their spans, dumbfounded. Once we learned how harmonic oscillators work, we had the right mental framework for understanding those waves, and we could move beyond prediction, to the point that we can actually engineer the harmonics out of bridges to make them less susceptible to ringing and collapse.
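The framework in question fits in one equation. A standard textbook sketch: a driven, damped harmonic oscillator (mass $m$, damping $c$, stiffness $k$, driven at frequency $\omega$) obeys

```latex
m\ddot{x} + c\dot{x} + kx = F_0 \cos(\omega t)
```

and its steady-state response amplitude is

```latex
A(\omega) = \frac{F_0}{\sqrt{(k - m\omega^2)^2 + (c\omega)^2}}
```

When damping $c$ is small, $A(\omega)$ spikes near the natural frequency $\omega \approx \sqrt{k/m}$: that spike is resonance. Before the framework, a bridge shaking itself apart in a modest wind looked like a mystery; after it, the collapse is just the first equation with the wrong parameters, and engineering it away means moving $\sqrt{k/m}$ away from the frequencies the wind can supply, or adding damping.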
The hard problem of consciousness seems impossible, but perhaps we just don't have the right framework, and a million intuitions trap us in a paradigm that makes us blind to what that framework might be.
It's hard to imagine how an answer could come, but that doesn't mean we can't have a game plan. And the game plan would look like this:
- Prod our assumptions. Get weird, think weird, and for each thing that seems like it must be a true fact, assume the fact is false and play with the results. Wrestle our thinking out of the current restrictive paradigm.
- Delineate the boundaries. Don't stop at saying "we can never know". Instead, think and write about the exact limits of our knowledge. We can't imagine the experiment that would give us the answer to the problem. But we can imagine experiments that would surprise us and shake up the restrictive paradigm.
- Do the experiments. With the above weird-thinking loaded up, do some experiments. Ask brain-injury patients more open-ended questions. Try low-intensity focused ultrasound on the corpus callosum to make transiently split-brained patients. Test the commissures. Split brains during sleep. Do psychedelics. Feed randomness to your weird, prepared mind, in the form of experimental results that could pry apart the seams of the restrictive paradigm.
1. https://en.wikipedia.org/wiki/Hard_problem_of_consciousness
2. https://en.wikipedia.org/wiki/Integrated_information_theory