Consciousness is not mysterious, it’s just the brain describing itself—to itself.

Michael Graziano is a professor of psychology and neuroscience at Princeton University. His style of explaining consciousness is fully materialistic:
- The hard problem of consciousness doesn't exist, because consciousness is an illusion.
- The real problem is just this: Why do humans proclaim "I'm conscious"? What do they mean by that, and what made them proclaim it?
- The answer is that the illusion of consciousness is a byproduct of human evolution.
Basically, consciousness is an illusion, a spandrel of evolution.
What exactly is the mechanism that leads from integrated information in the brain to a person who ups and claims, “Hey, I have a conscious experience of all that integrated information!” There isn't one.

In order to figure out why humans have this illusion, it's necessary to study humans as fully physical (no funny qualia business plz) robots that think about themselves, but in a grossly inaccurate way:
The theories that show the most promise are metacognitive theories... the brain doesn’t just model concrete objects in the external world. It also models its own internal processes. It constructs simulations of its own cognition. And those simulations are never accurate. They contain incomplete, sometimes surreal information. The brain constructs a distorted, cartoon sketch of itself and its world. And this is why we’re so certain that we have a kind of magic feeling inside us.
The origin of consciousness: we think about our own thoughts in grossly simplified models.
Conscious AI is entirely buildable. Just build it with the same kind of distorted self-modeling capacity, and it'll proclaim the same kind of illusory consciousness.
A New Theory Explains How Consciousness Evolved (2016), Michael Graziano. (He wrote quite a bit for The Atlantic)
What is the adaptive value of consciousness? When did it evolve and what animals have it?
The Attention Schema Theory (AST)... suggests that consciousness arises to deal with too much information to be fully processed. The brain evolved increasingly sophisticated mechanisms for deeply processing a few select signals at the expense of others, and in the AST, consciousness is the ultimate result of that evolutionary sequence.

Step 0: get a neuron.
Graziano didn't talk about this step, so I'll fill it in. Of all modern animals, only sponges (porifera) and a mysterious little guy called "placozoa" have no neurons at all. All the others have nervous systems: comb jellies (ctenophora), jellyfishes and their relatives (cnidaria), and the bilaterally symmetric animals (bilateria).
There's great debate about exactly how the animal family tree (cladogram) should be drawn, but it seems that however it's drawn, ctenophora is the oldest phylum that has neurons. Ctenophora evolved in the Cambrian explosion, which pins it to about 550 million years ago.
One possible cladogram. Note how Ctenophora evolved before Placozoa, implying Placozoa lost its neurons. From Hox, Wnt, and the evolution of the primary body axis (2007), Joseph F Ryan and Andreas D Baxevanis.
Step 1: selective signal enhancement. Basically, representative democracy among neurons.
Neurons act like candidates in an election, each one shouting and trying to suppress its fellows. At any moment only a few neurons win that intense competition.

This gives rise to the most bottom-up form of attention: attention implemented in the sensory neurons themselves. In a deep neural network, these would be the neurons at the input layer.
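To make the "election among neurons" picture concrete, here's a minimal toy sketch of winner-take-all competition (my own illustration, not Graziano's model; all numbers are made up): each unit is driven by its input but inhibited by the summed activity of its rivals, so only the strongest inputs keep firing.

```python
import numpy as np

def winner_take_all(drive, inhibition=0.3, steps=50):
    """Toy 'election' among neurons: each unit is excited by its own input
    and suppressed by the summed activity of its competitors."""
    activity = drive.astype(float).copy()
    for _ in range(steps):
        rivals = activity.sum() - activity          # what everyone else is doing
        activity = np.maximum(0.0, drive - inhibition * rivals)
    return activity

drive = np.array([0.2, 0.9, 0.3, 0.85, 0.1])        # raw sensory input strengths
print(winner_take_all(drive))                        # only the strongest inputs survive
```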
The hydra arguably has the simplest nervous system known—a nerve net. If you poke the hydra anywhere, it gives a generalized response. It shows no evidence of selectively processing some pokes while strategically ignoring others... The arthropod eye, on the other hand, has one of the best-studied examples of selective signal enhancement. It sharpens the signals related to visual edges and suppresses other visual signals, generating an outline sketch of the world.

Note that almost all neural visual systems start with raw pixels and then perform edge detection at the second layer. This is true for arthropods, for the primate visual cortex, and for deep learning networks, probably because most physically interesting images on Earth carry their most informative features at the edges.
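And here is an equally small sketch of the edge-detection idea (a hand-picked center-surround kernel, not any real retinal or arthropod circuit): each unit is excited by its own pixel and inhibited by its neighbours, so uniform regions cancel out and only the brightness change survives, giving the "outline sketch of the world".

```python
import numpy as np

# A 1-D "scene": a dark region next to a bright region, so there is one edge.
scene = np.array([1, 1, 1, 1, 5, 5, 5, 5], dtype=float)

# Center-surround weighting: excitation from the center pixel,
# inhibition from the two neighbours, so constant regions cancel to zero.
kernel = np.array([-0.5, 1.0, -0.5])
response = np.convolve(scene, kernel, mode="valid")

print(response)   # zero in the flat regions, nonzero only around the edge
```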
Selective enhancement therefore probably evolved sometime between hydras and arthropods—between about 700 and 600 million years ago, close to the beginning of complex, multicellular life.

There's some issue with timing here... but let's not worry about paleontology yet.
Step 2: basic top-down attention with central brain control. The body schema emerges.
In many animals, that central controller is a brain area called the tectum. (“Tectum” means “roof” in Latin, and it often covers the top of the brain.) It coordinates something called overt attention – aiming the satellite dishes of the eyes, ears, and nose toward anything important.

The tectum is present in all vertebrates and absent in all invertebrates, which places its evolution at around 520 million years ago.
To control the head and the eyes efficiently, the tectum constructs something called an internal model. The tectum’s internal model is a set of information encoded in the complex pattern of activity of the neurons. That information simulates the current state of the eyes, head, and other major body parts, making predictions about how these body parts will move next and about the consequences of their movement.

Proprioception, for example, requires an internal model. The sensors at the joints report the angle of each joint, and the tectum must feed those numbers into its internal model of the body to compute the body's current configuration.
Even fishes, it seems, have their fish-homunculi. This internal model is called the body schema.
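Here is a minimal sketch of what such an internal model buys you, using a made-up two-joint arm rather than anything Graziano describes (the link lengths and angles are arbitrary): given the joint angles reported by proprioception, the model computes where the hand currently is, and it can also predict where the hand will end up if a planned motor command is executed.

```python
import numpy as np

def hand_position(shoulder, elbow, upper_arm=0.30, forearm=0.25):
    """Forward model of a two-joint arm: joint angles (radians) in,
    hand position (metres, in shoulder-centred coordinates) out."""
    elbow_xy = upper_arm * np.array([np.cos(shoulder), np.sin(shoulder)])
    hand_xy = elbow_xy + forearm * np.array([np.cos(shoulder + elbow),
                                             np.sin(shoulder + elbow)])
    return hand_xy

angles  = np.array([0.4, 0.8])     # what the joint sensors currently report
command = np.array([0.1, -0.05])   # a planned change to each joint angle

print("hand is at:    ", hand_position(*angles))
print("after the move:", hand_position(*(angles + command)))  # the model's prediction
```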
Step 3: complex top-down attention with cortex.
With the evolution of reptiles around 350 to 300 million years ago, a new brain structure began to emerge – the wulst.

Mammals inherited that structure; in mammals it's called the "cerebral cortex", while in birds it's still called the "wulst".
The cortex is like an upgraded tectum... The cortex also takes in sensory signals and coordinates movement, but it has a more flexible repertoire. Depending on context, you might look toward, look away, make a sound, do a dance, or simply store the sensory event in memory in case the information is useful for the future.

The cortex does complicated top-down attention: spotlight attention, basically. That's how you can pay attention to something even when you aren't looking directly at it, or aren't looking at all.
The cortex is also an attention controller, so it too needs an internal model. But it doesn't model concrete things like a limb or a body; it models something abstract: attention itself, and the effects of its own attention. Instead of a body schema, it needs an attention schema.
According to the AST, it does so by constructing an attention schema—a constantly updated set of information that describes what covert attention is doing moment-by-moment and what its consequences are.

The attention schema is a high-level description that leaves out its own physical implementation, and that is why attention seems nonphysical.
It depicts covert attention in a physically incoherent way, as a non-physical essence. And this is the origin of consciousness. We say we have consciousness because deep in the brain, something quite primitive is computing that semi-magical self-description.
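To make the claim concrete, here's a deliberately cartoonish sketch (my own toy, with invented class and field names, not Graziano's model): the agent's attention is an ordinary, fully physical competition among signals, but its self-model records only a stripped-down summary, and introspection can read nothing but that summary, so the agent reports a "non-physical" awareness.

```python
import numpy as np

class ToyAgent:
    """A caricature of AST: attention is a mundane physical competition among
    signals, but the agent's self-model (the attention schema) keeps only a
    stripped-down summary with none of the mechanistic details."""

    def __init__(self):
        self.schema = None   # the agent's simplified model of its own attention

    def attend(self, stimuli):
        # The real mechanism: noisy competition among input strengths.
        drive = {name: s + np.random.normal(0, 0.05) for name, s in stimuli.items()}
        winner = max(drive, key=drive.get)
        # The schema records only "what I'm focused on", not how that was computed.
        self.schema = {"focus": winner, "how": "a non-physical inner experience"}
        return winner

    def introspect(self):
        # Introspection can only read the schema, so the agent reports the
        # cartoon version of itself and calls it "consciousness".
        return (f"I am conscious of the {self.schema['focus']}; "
                f"it feels like {self.schema['how']}.")

agent = ToyAgent()
agent.attend({"apple": 0.9, "clock tick": 0.3, "chair": 0.2})
print(agent.introspect())
```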
A summary
From Sponge to Human: The Evolution of Consciousness (2017), MSA Graziano and TW Webb. This article gives more detail on the evolutionary process, as well as a funny imaginary conversation between the author and a human who proclaims its own consciousness, and more.
Experimental Testing
It's well accepted that brains contain body schemas and use them in a way explained by control theory, and this has been tested experimentally by causing the body schema to make mistakes.
It is quite easy to introduce discrepancies between the body schema and the actual arm. When these discrepancies occur, the control of the arm is compromised in ways that are predicted by control theory (Graziano and Botvinick, 2002; Scheidt et al., 2005; Wolpert et al., 1995).
Similarly, to test whether awareness is an attention schema, we can check whether a lack of awareness (that is, a missing attention schema) leads to a loss of control stability.
In control theory, when the internal control model is temporarily missing, the control system is less able to maintain stability. If awareness is the internal control model for attention, then without awareness, attention should still be possible but should become less stable.
Fortunately, we've already read about how to study attention without consciousness (Koch et al., 2006), and it remains to test whether, in such situations, control stability degrades as predicted by control theory.
Graziano wrote elsewhere
On the basis of the attention schema theory, we predicted that attention would show less stability over time when awareness of the stimulus was absent. This prediction was confirmed. Without awareness of the stimulus, attention to that stimulus behaved in a less stable manner. Attention wobbled up and down significantly more during the tested time interval.
The article reviews an experiment, and concludes
The hypothesis was confirmed. In this experiment, awareness acted in a manner consistent with the internal model of attention. Without awareness, attention was possible but less stable over time.
This seems to be jumping to conclusions. A computer simulation showing that an artificial agent's performance degrades in the same way when it lacks such an internal model would be more convincing.
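As a gesture in that direction, here's a toy simulation (all dynamics, gains, and noise levels are invented for illustration; this is not the experiment in the paper): an "attention level" has to be held at a target, but the controller only receives a one-step-delayed, noisy observation. With an internal model it can extrapolate that stale observation to the present; without one it acts on the stale value directly and wobbles more, which is the qualitative pattern the control-theory argument predicts.

```python
import numpy as np

def run(use_model, steps=5000, target=1.0, seed=0):
    """Toy control loop: keep an 'attention level' x at a target despite noise,
    when the only observation of x arrives one step late."""
    rng = np.random.default_rng(seed)
    x, u_prev, y = target, 0.1 * target, target
    history = []
    for _ in range(steps):
        if use_model:
            # Internal model: advance the delayed observation through the
            # known dynamics to estimate the *current* state.
            estimate = 0.9 * y + u_prev
        else:
            # No internal model: act directly on the stale observation.
            estimate = y
        u = target - 0.9 * estimate                    # aim the next state at the target
        x_next = 0.9 * x + u + rng.normal(0, 0.05)     # true attention dynamics
        y = x + rng.normal(0, 0.05)                    # delayed, noisy observation of x
        x, u_prev = x_next, u
        history.append(x)
    return np.std(history)

print("fluctuation with internal model:   ", run(use_model=True))
print("fluctuation without internal model:", run(use_model=False))
```

On typical runs the no-model condition shows a clearly larger standard deviation around the target, i.e. less stable "attention", which is the shape of result the attention-schema argument needs.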
The Problem of Other Minds
So we know why humans proclaim their own consciousness. Why do they also proclaim that others are conscious? Because a theory of mind is useful for predicting what others will do, so it evolved.
Whether nonhuman animals have a theory of mind is controversial, but by some reports apes do (Call and Tomasello, 2008; Premack and Woodruff, 1978) and crows do (Clayton, 2015).
And just like the feeling that one is conscious oneself, the feeling that others are conscious is automatic and cannot be overridden by intellectual reasoning.
It is better described as perception rather than cognition. It cannot be chosen or turned off. When projecting awareness onto someone else, we are ourselves not necessarily aware of it. We intuit that Joe is aware of the cookie, or unaware of the puddle in front of him, and we use that perception to predict his behavior and thus better interact with him... If we want to predict his behavior, it would be useful to have an internal model of his attention, an attention schema that we can use to attribute awareness to him.
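The same machinery can be pointed at other people. A crude sketch of modelling Joe's attention and using it to predict his behaviour (entirely invented: the angles, field of view, and names are placeholders, not anything from Graziano):

```python
def model_of_joes_attention(objects, gaze_direction, field_of_view=60):
    """Attribute awareness to Joe: assume he is aware of whatever lies within
    his field of view, and unaware of everything else."""
    return {name for name, angle in objects.items()
            if abs(angle - gaze_direction) <= field_of_view / 2}

objects = {"cookie": 5, "puddle": 170}       # directions relative to Joe, in degrees
joe_aware_of = model_of_joes_attention(objects, gaze_direction=0)

if "puddle" not in joe_aware_of:
    print("Prediction: Joe will walk toward the cookie and step in the puddle.")
```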
How and why to build an AI Consciousness
Because it's cool, probably useful, and necessary for an AI to interact with humans in a socially acceptable way.