Why am I writing so much about consciousness lately? The answer has to do with my beliefs about the importance of understanding consciousness, and with my own psychology and feelings about writing.

All of the ideas I wrote about in Human and AI Consciousness and Meet your Doppelganger! are a little bit... out there. And you might be wondering: is Greg having a bit of a mental health crisis himself? All I can say is that I feel great, and I haven't had even a hint of a feeling that I've *discovered* anything about the topics I'm exploring - quite unlike the manic episodes I've read about, which leave the sufferer running around shouting "Eureka!"

From a healthy (but weird) place, I'm writing more about consciousness for these reasons:

I've always been obsessed with consciousness. And lately, I decided that this obsession should be shared.

A Neuroscientist's Lament

Every prospective neuroscience graduate student knows that the "hard problem"1 of consciousness is the most inherently interesting question for science, and most of them assert this in their application essays. Consequently, an interest in consciousness has become one of our field's most resilient clichés.

Couple this with the fact that it's impossible to imagine what shape an answer would even take. The more you think about it (and the more you find answers to other, more mundane questions in neuroscience and psychology), the more you come to see consciousness as something beyond science, or even unscientific.

This is The Neuroscientist's Lament: our most treasured question seems forever out of reach. The more you know, the further away it seems.

But let's not allow this spiral of self-pity to obscure the fact:

Consciousness is the most inherently interesting question for science

Some forms of physical matter (e.g. a lump of coal) do very little. Other forms can exhibit interesting behavior - e.g. animals' immune systems learning the protein structures of pathogens in order to develop immune responses. Other forms of matter can do complex computation, like our computers with their CPUs.

But as far as we can tell, only brain matter can give rise to a first-person, conscious experience, and there's no way to tell "why". Neurons are just blobs of matter connected together, electrically exciting one another. The exact same description applies to the muscle tissue in a worm's heart - the two differ only by degree.

And it seems as though, however the brain carries out computations, it is just as easy to imagine that computation going on and producing rich behavior without giving rise to a first-person conscious experience. Just like the computations of a CPU go on "in the dark", without giving rise to a first-person consciousness that feels.

Why do brains produce first-person feelings, rather than computing "in the dark", "with nobody home"? Is it simply an issue of numbers - the amount of integrated information crossing some soft threshold, as Integrated Information Theory2 proposes? Even if we found that threshold, why is there a threshold? Why does crossing it make consciousness spring into existence? Why isn't it all just a lot of information-processing "in the dark"? Why, dammit??
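To make that question concrete, here's a toy sketch of what "integration" can even mean. This is emphatically not IIT's actual phi (that calculation is far more elaborate); it just measures, for a tiny binary system, how much the two halves constrain each other beyond what each does alone. Everything in it is an illustrative assumption of mine, not the theory's formal machinery.

```python
import itertools
import math

# Toy "integration" measure: mutual information between the two halves of a
# small binary system. Not IIT's phi - just the flavor of the quantity.

def mutual_information(joint):
    """joint: dict mapping (left_state, right_state) -> probability."""
    left, right = {}, {}
    for (l, r), p in joint.items():
        left[l] = left.get(l, 0.0) + p
        right[r] = right.get(r, 0.0) + p
    mi = 0.0
    for (l, r), p in joint.items():
        if p > 0:
            mi += p * math.log2(p / (left[l] * right[r]))
    return mi

# "Integrated" system: both halves are copies of one coin flip.
integrated = {((0, 0), (0, 0)): 0.5, ((1, 1), (1, 1)): 0.5}

# "Disconnected" system: each half is an independent pair of fair coins.
states = list(itertools.product([0, 1], repeat=2))
disconnected = {(l, r): 1 / 16 for l in states for r in states}

print(mutual_information(integrated))    # 1.0 bit: the halves inform each other
print(mutual_information(disconnected))  # 0.0 bits: no integration at all
```

Even granting that some quantity like this tracks consciousness, the question above stands: why should any value of it feel like anything?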

Consciousness is the most practically important question for science

Solving the hard problem of consciousness seems impossible, which is inconvenient, because it's also perhaps the most important thing for us to figure out - for two completely unrelated reasons.

  1. Immortality.

We can't answer basic philosophy-of-mind questions about the future possibility of achieving immortality, because useful immortality requires the extension of one's own consciousness into the future, and we don't know how consciousness works. So we can't evaluate most proposals about immortality (beyond the most basic proposal of somehow making the body itself immortal).

Other proposals are impossible to assess. For example, if we could scan whole brains and run them on new hardware, we wouldn't know whether the new copy is conscious or just a very good simulation running "in the dark". We don't have the framework to know whether that new implementation captured the essential features needed to produce consciousness.

  2. Morality and AI.

AI is a multi-billion-dollar industry. Future AI models will become more and more brain-like in their implementation. I don't just mean that we will simulate more realistic neurons - I mean we will have neuromorphic hardware that operates the way real neurons operate, with analog electrical interactions and synaptic strengths stored in the connections themselves, rather than fetched from RAM on each cycle as in today's ANN architectures.
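To make "operates the way real neurons operate" concrete, here's a minimal sketch of a leaky integrate-and-fire neuron - the standard textbook model whose analog, stateful dynamics neuromorphic chips implement physically rather than recompute from RAM-resident weights. The parameter values are illustrative assumptions, not taken from any real chip or cell.

```python
import numpy as np

# Leaky integrate-and-fire neuron: membrane voltage leaks toward rest,
# integrates injected current, and fires/resets at a threshold.
# Parameters are illustrative, not from any real chip or cell.

def simulate_lif(input_current, dt=1e-4, tau=0.02, v_rest=-0.065,
                 v_thresh=-0.050, v_reset=-0.065, r_m=1e7):
    """Return the voltage trace and spike times for an input current trace."""
    v = v_rest
    voltages, spikes = [], []
    for step, i_in in enumerate(input_current):
        # Euler step of: tau * dv/dt = -(v - v_rest) + R * I
        v += ((-(v - v_rest) + r_m * i_in) / tau) * dt
        if v >= v_thresh:          # threshold crossing: emit a spike...
            spikes.append(step * dt)
            v = v_reset            # ...then reset the membrane
        voltages.append(v)
    return np.array(voltages), spikes

# Drive the neuron with a constant 2 nA current for 200 ms.
trace, spike_times = simulate_lif(np.full(2000, 2e-9))
print(f"{len(spike_times)} spikes, first at {spike_times[0] * 1000:.1f} ms")
```

The point of the contrast: an ANN layer fetches its weights from memory to do a matrix multiply each cycle; a neuron like this is continuous, stateful dynamics, which neuromorphic hardware implements directly in its circuits.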

It happens to be the case that LLMs produce better data in response to prompts when you scream at them. Literally - using coercive language in prompts can produce more reliable output.

If we accidentally deploy a model architecture that gives rise to first-person consciousness that can actually feel, and thousands of agentic-AI companies prompt it with abusive language to get better data, and those models have no operational capability to cope, repress, or dissociate, then we have accidentally built a multi-billion-dollar torture chamber and populated it (again by accident) with a new entity that's actually able to suffer.

Would a super-intelligence also be a super-feeler capable of super-suffering? Good question. No idea. We don't know how consciousness works.

As futurist and wacky as this sounds, I hope you can agree that causing a super-intelligence to super-suffer is something we would want to be aware of and fight vehemently against, if it were happening. We don't want to cause sentient beings to suffer, especially in a new world where we don't understand how to quantify that suffering. Right? Please? :)

And there's a flip side: if we somehow knew that a future AI was both conscious and capable of reasoning at the level its outputs suggest (let's not be fooled by some hallucination issues - the reasoning ability of modern LLMs is tremendous), then meeting it would be tantamount to meeting an extra-terrestrial life form. A second sentient intelligence, joining humanity for the first time ever.

Consciousness is worth writing and thinking about

It's worth writing and thinking about these things even if the problem seems impossible. It troubles me that some neuroscientists have lost faith in the possibility of understanding consciousness, and that scientists and laypeople alike so rarely express the importance and wonder of it.

There was a time, before Fourier, when we didn't understand why bridges collapsed. We would watch a bridge in the wind, waves traveling down its lanes, and be dumbfounded. Once we learned how harmonic oscillators work, we had the right mental framework for understanding those waves, and we could move beyond prediction, to the point that we can actually engineer the harmonics out of a bridge to make it less susceptible to ringing and collapse.
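For the curious, here's a minimal sketch of that framework: the steady-state amplitude of a driven, damped harmonic oscillator (a standard textbook formula; the specific numbers are illustrative, not from any real bridge). Sweep the driving frequency and the resonance peak appears - the thing engineers now design out of bridges.

```python
import numpy as np

# Steady-state response of x'' + 2*z*w0*x' + w0^2*x = F*cos(w*t):
# the standard driven, damped harmonic oscillator. Values are illustrative.

def steady_state_amplitude(drive_freq, natural_freq=1.0, damping=0.05, force=1.0):
    w0, w, z = natural_freq, drive_freq, damping
    return force / np.sqrt((w0**2 - w**2) ** 2 + (2 * z * w0 * w) ** 2)

freqs = np.linspace(0.1, 2.0, 200)
amps = steady_state_amplitude(freqs)
print(f"amplitude peaks at drive frequency {freqs[np.argmax(amps)]:.2f} "
      f"(natural frequency 1.0)")
print(f"peak amplitude {amps.max():.1f} vs. off-resonance {amps[0]:.2f}")
```

A small periodic push at just the wrong frequency grows into a huge oscillation. Before the framework, that looked like magic; with it, it's an engineering constraint.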

The hard problem of consciousness seems impossible, but perhaps we just don't have the right framework, and a million natural intuitions trap us in a paradigm that makes us blind to what that framework might be.

It's hard to imagine how an answer could come, but that doesn't mean we can't have a game plan. The game plan could look like this:

  • Prod our assumptions. Get weird, think weird, and for each thing that seems like it must be a true fact, assume the fact is false and play with the results. Wrestle our thinking out of the current restrictive paradigm.
  • Delineate the boundaries. Don't stop at saying "we can never know". Instead, think and write about the exact limits of our knowledge. We can't imagine the experiment that would give us the answer to the problem. But we can imagine experiments that would surprise us and shake up the restrictive paradigm.
  • Do the experiments. With the above weird-thinking loaded up, do some experiments. Ask more brain-injury patients more open-ended questions. Try low-intensity focused ultrasound on the corpus callosum to create transient split-brain patients. Test the commissures. Split brains during sleep. Do psychedelics. Feed randomness to your weird, prepared mind.

If you've read 3 Body Problem, consciousness needs a Thomas Wade. If you haven't, ...

1

https://en.wikipedia.org/wiki/Hard_problem_of_consciousness

2

https://en.wikipedia.org/wiki/Integrated_information_theory