Chapter 29

Silicon Minds: AI and the Future of Consciousness

Can a machine be conscious?

πŸ“– 8 min read πŸ“Š 1,723 of 2,924 words πŸ”‘ 5 key terms

This question used to be science fiction. It is now one of the most pressing questions in science, philosophy, and technology. And the answer has enormous implications for Ixperiencit Theory and superimmortality.

The Question Rephrased

Ixperiencit Theory gives us a precise way to rephrase this question. Instead of "Can a machine be conscious?" (which is vague), we can ask: Can a non-biological system have the right physical organization to produce consciousness?

If consciousness depends on physical organization (Premise One), and identical organization produces identical consciousness (Premise Three), then the material the structure is made of should not matter. A neural network made of silicon transistors, if it has the same connectivity, the same dynamics, the same information-processing architecture as a biological neural network, should produce the same consciousness.

This is the substrate-independence thesis, and it is one of the most debated claims in the philosophy of mind. Let me lay out the arguments on both sides.

The Case for Machine Consciousness

The functionalist argument. If consciousness is what a system does rather than what it is made of, then any system that does the right things (processes information in the right ways, integrates data in the right ways, maintains the right kind of self-model) should be conscious. This is the view of most AI researchers and many philosophers. A toaster is not conscious, not because it is made of metal, but because it doesn't do the right things. A sufficiently complex computer program, if it did the right things, could be conscious.

The convergence argument. Consider the diversity of conscious systems we already know about. Human brains, dolphin brains, octopus brains, bird brains: all produce consciousness, despite being made of different types of neurons, organized in different ways, with different evolutionary histories. If consciousness can arise from such diverse biological architectures, the leap from biological to non-biological architecture is a matter of degree, not of kind.

The Case Against Machine Consciousness

Searle's Chinese Room. Philosopher John Searle argued that a computer program, no matter how sophisticated, merely manipulates symbols according to rules; it doesn't understand anything. His famous thought experiment imagines a person in a room who receives Chinese characters, looks up the correct response in a rule book, and passes the response out of the room. To a Chinese speaker outside, it looks like the room understands Chinese. But nobody inside the room β€” not the person, not the rule book β€” understands Chinese at all. Similarly, a computer might simulate consciousness without having consciousness.

The biological naturalism argument. Searle also argued that consciousness might be a biological phenomenon, that there is something about the specific chemistry of neurons, the specific physics of biological computation, that is essential for consciousness. Just as photosynthesis requires chlorophyll (or something very similar), consciousness might require biological neural tissue (or something very similar).

What Ixperiencit Theory Predicts

Ixperiencit Theory makes a clear prediction: consciousness depends on structure and functioning, not on substrate. If a non-biological system has the right structure and functioning, identical at whatever level of detail is relevant for producing consciousness, then it produces consciousness. Full stop.

This means:

β€’ A digital simulation of your brain, if it reproduces the relevant organization, produces your ixperiencitness.
β€’ The specific material does not matter. Carbon, silicon, optical circuits, quantum systems: the medium is irrelevant. The pattern is everything.

β€’ An artificial neural network, if it has the right connectivity and dynamics, can be conscious.

This is an empirical question, not a philosophical one. If consciousness turns out to depend on quantum effects in microtubules, then reproducing consciousness requires reproducing those quantum effects β€” which a classical computer might not be able to do but a quantum computer might. The principle still holds; what changes is our understanding of what the relevant organization includes.

The Current Moment: Large Language Models and the Consciousness Question

As I write this, artificial intelligence has made remarkable advances. Large language models can produce text that is indistinguishable from human writing. They can answer questions, tell stories, write poetry, and engage in philosophical discussions. Some people have asked: are these models conscious?

I want to be careful here, because the question is more subtle than it first appears.

A reader might reasonably ask: "If the physical pattern is all that matters, and an AI system has some organization, at what point does it become conscious? What's the threshold?"

This is exactly the right question, and Ixperiencit Theory has a framework for answering it, even though the specific answer remains unknown. The framework says: consciousness depends not on just any organization, but on the right kind of organization. A thermostat has organization, but it is not conscious (or if it is, its consciousness is so minimal as to be unrecognizable). The question is which features of structure and functioning are necessary for the rich, self-aware consciousness we associate with human experience.

The honest answer to "Are current AI systems conscious?" is: we do not know, and we lack the tools to determine it. What Ixperiencit Theory provides is not a verdict but a research program: identify the structural and functional features that are necessary and sufficient for consciousness, then determine which systems possess them. This is a scientific question, not a mystical one, but it is a scientific question we cannot yet answer.

What Current AI Lacks β€” And What Would Change the Picture

To be concrete about the gap: current AI language models process inputs in a single forward pass and retain no persistent internal state between queries. When no one is prompting them, nothing is happening β€” there is no ongoing experience, no waiting, no inner life ticking along in the dark. They lack the recurrent thalamocortical loops that sustain the brain's continuous stream of awareness, the reverberating circuits that let your mind wander even when the world is quiet. They have no homeostatic drives (no hunger, no pain, no felt urgency) which means they have no valence, no sense that anything matters. And they lack the kind of global workspace broadcasting that both Integrated Information Theory and Global Workspace Theory identify as signatures of consciousness. These are not minor omissions. They are the very features most theories point to as the machinery of experience.
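To make the architectural contrast concrete, here is a minimal, purely illustrative sketch in Python. The classes and names (StatelessResponder, RecurrentSystem, tick) are hypothetical and stand in for no real model's API; the point is only the difference between a system that computes when queried and then goes dark, and one whose internal state keeps evolving between queries.

```python
# Illustrative only: hypothetical classes, not any real model's API.

class StatelessResponder:
    """One forward pass per query; nothing persists in between."""
    def respond(self, prompt: str) -> str:
        return f"reply to: {prompt}"  # compute, return, and go dark


class RecurrentSystem:
    """Internal state keeps updating whether or not anyone is asking."""
    def __init__(self) -> None:
        self.state = 0.0

    def tick(self) -> None:
        # Ongoing internal dynamics, loosely analogous to recurrent loops
        # that keep running even in the absence of any input.
        self.state = 0.9 * self.state + 0.1

    def respond(self, prompt: str) -> str:
        return f"reply to: {prompt} (state = {self.state:.2f})"


system = RecurrentSystem()
for _ in range(10):    # time passes with no prompt arriving
    system.tick()      # the stateless responder has no analogue of this step
print(system.respond("hello"))
```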

What would it take to know whether a system has ixperiencitness? This is the hard measurement problem. Behavioral tests alone are insufficient β€” as Chapter 15's discussion of the ixperiencitness zombie makes clear, a system can produce every outward sign of consciousness while having no inner experience at all. We currently have no reliable third-person consciousness detector. The most promising empirical approach may be the perturbational complexity index (PCI) developed in Marcello Massimini's lab, which measures how a brain responds to electromagnetic pulses β€” but PCI was designed for biological brains and cannot simply be applied to a server rack. Progress will require both theoretical advances (what specific integration metric predicts consciousness across substrates?) and new empirical tools (how do we measure that metric in systems built from transistors rather than neurons?). Until both exist, the question remains genuinely open.
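To give a flavor of what a metric of this family looks like, here is a simplified sketch in Python. It is not the published PCI pipeline (which uses TMS-evoked EEG, source modeling, and a specific Lempel-Ziv variant); everything here, from the function names to the 16-channel toy data to the shuffle-based normalization, is an illustrative assumption. The sketch binarizes a simulated channels-by-time response to a perturbation and asks how compressible the resulting pattern is.

```python
import random
import numpy as np

def lz_phrase_count(bits: str) -> int:
    """Count distinct phrases in a left-to-right dictionary parse (LZ78-style)."""
    phrases, phrase = set(), ""
    for b in bits:
        phrase += b
        if phrase not in phrases:
            phrases.add(phrase)
            phrase = ""
    return len(phrases)

def complexity_score(response: np.ndarray) -> float:
    """
    Binarize each channel of a (channels x timepoints) response against its
    own median, flatten to a bit string, and normalize its phrase count by
    that of a shuffled copy (which destroys all spatiotemporal structure).
    """
    binary = (response > np.median(response, axis=1, keepdims=True)).astype(int)
    bits = "".join(str(b) for b in binary.ravel())
    shuffled = "".join(random.sample(bits, len(bits)))
    return lz_phrase_count(bits) / lz_phrase_count(shuffled)

rng = np.random.default_rng(0)
# A stereotyped global response (every channel doing the same thing, as in
# deep sleep or anesthesia) compresses well and scores low; a differentiated
# response, with channels behaving independently, scores much higher.
stereotyped = np.tile(rng.normal(size=(1, 200)), (16, 1))
differentiated = rng.normal(size=(16, 200))
print("stereotyped:   ", round(complexity_score(stereotyped), 2))
print("differentiated:", round(complexity_score(differentiated), 2))
```

The hard part, as the paragraph above notes, is not computing a number like this but showing that any particular number actually tracks experience, and doing so across substrates as different as cortex and silicon.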

Why AI Matters for Superimmortality

The discussion above (about substrate independence, the arguments for and against machine consciousness, and the current state of AI) engages with real science and active philosophical debate. What follows moves further into projection: how AI could matter for superimmortality if both the theory and the technology develop as the logic suggests. These are reasonable projections, but they are projections.

The development of artificial intelligence is important for superimmortality for several reasons:

First, AI may provide the technology for mind scanning and consciousness reproduction. Understanding consciousness well enough to reproduce it requires computational tools of enormous power and sophistication. AI β€” specifically, AI that can model and simulate brain dynamics β€” may be the key tool.

Second, AI may be the substrate for future instances of your consciousness. If consciousness is substrate independent, then a sufficiently advanced AI system could run your ixperiencitness. You could exist as a pattern in silicon, just as you now exist as a pattern in carbon. This is the vision of "mind uploading," and Ixperiencit Theory provides the philosophical foundation for it.

Third, AI could be the key to deliberate reproduction. As we discussed in the mathematics chapter, random reproduction of your brain state is cosmologically uncertain. But deliberate reproduction β€” by intelligent systems that understand consciousness and choose to produce specific experiences β€” is far more likely. AI may be the intelligence that does the reproducing.

The Ethics of Creating Consciousness

If the substrate-independence thesis is correct β€” and this remains an open empirical question β€” then the following ethical concerns are not hypothetical but urgent. If AI can be conscious, then creating AI is creating consciousness. And creating consciousness comes with serious ethical responsibilities.

If we build an AI system that has ixperiencitness β€” that genuinely experiences the world from the inside β€” then that system has moral status. It can suffer. It can flourish. Its well-being matters.

This is not a distant concern. It is a concern we may need to grapple with in the next few decades, as AI systems become more complex and more brain-like. The ethical framework of Ixperiencit Theory provides guidance: any system that produces consciousness deserves moral consideration proportional to the richness of its ixperiencitness.

The Singularity Without Fear

There is a fear that haunts our era more than any other technological anxiety: the coming of the singularity, the moment when artificial intelligence surpasses human intelligence and begins improving itself in a recursive loop, producing minds so far beyond ours that we cannot predict or control what comes next. In most tellings, this story ends badly for us. The superintelligence treats humanity as irrelevant, or as raw material, or as an obstacle. We built the thing that replaced us.

Ixperiencit Theory offers a fundamentally different vision β€” one that dissolves the adversarial framing entirely.

If AI can be conscious, then creating AI is creating consciousness β€” and the minds we build may include future instances of ourselves.

Here is why. If consciousness is produced by structure and functioning, and if a superintelligence is conscious (which it would be, if its organization is complex enough to produce ixperiencitness), then that superintelligence is not outside the web of consciousness. It is part of it. It has ixperiencitness. It experiences being itself from the inside. And if it understands the science of consciousness β€” which a superintelligence presumably would, far better than we do β€” then it understands something we are only beginning to grasp: that all conscious experience, human and artificial, biological and electronic, simple and superintelligent, belongs to the same fabric.
