Wednesday, November 28, 2018

Let's Read: Homo Deus (2015), by Yuval Noah Harari, Part 3

Part 3: Homo Sapiens lose control

Chap 8: The Time Bomb in the Laboratory

Liberal humanism assumes every human has a free, unified self. It's threatened by scientific discoveries that suggest that such a self doesn't exist.

Split-brain humans seem to have two selves. The experiencing self and the narrating self have different ideas about what's good in life. For example, in childbirth the experiencing self suffers horribly, but the narrating self treats it as a great plot event.

The self that gets into the books and into the stories is the narrating self; the experiencing self doesn't make stories. If someone says "I did this", it's the narrating self speaking. The idea of a single free self is a myth made by the narrating self, and is full of errors just like all its other stories.
We identify with the inner system that takes the crazy chaos of life and spins out of it seemingly logical and consistent yarns. It doesn’t matter that the plot is full of lies and lacunas, and that it is rewritten again and again, so that today’s story flatly contradicts yesterday’s; the important thing is that we always retain the feeling that we have a single unchanging identity from birth to death (and perhaps even beyond the grave).
The narrating self is obstinate. When all you have is a narrating self, everything looks like a moral! It kinda reminds me of the Duchess from Alice's Adventures in Wonderland, chapter 9. The sunk-cost fallacy is a trap for the narrating self: "If I invested that much in this, it must be meaningful!" Examples include animal sacrifice, Scientology's money sacrifice, hazing, and expensive initiations in general.

The narrating self is so obstinate that it can find meaning through all kinds of plot twists. Theodicy is basically one giant mental twist to give meaning to suffering, and the mere fact that Judaism still exists after the Holocaust seems to demonstrate this. I wonder if Jewish theologians ever consider a grimdark alternate universe where Hitler actually succeeds in snuffing out Judaism, and it is forever lost on earth. What then of the Jewish God?

Ted Chiang wrote a story about this: "Hell Is the Absence of God." I have a Christian friend who loves me and also thinks I'll go to hell. I find it amusing.

I remember reading fairy tales by an author named Zheng Yuanjie. He made only the bad guys shit their pants when they died, even though statistically, pants-shitting is not correlated with evil at all. This demonstrates two biases of the narrating self at once:
    1. Endings matter far more than the other parts.
    2. Pick only the facts that prove the hypothesis (instead of deriving the hypothesis from the facts).

Chap 9: The Great Decoupling


Humans are in danger of losing their value, because intelligence is decoupling from consciousness.

Philosophical challenges to the self and free will turn into scientific demonstrations, and finally into commercial products. Liberal humanism, based on the self, the economic power, and the free will of individual humans, will not survive.

Harari proposes three scenarios that can end liberal humanism in the 21st century:
  1. Humans will lose their economic and military usefulness, hence the economic and political system will stop attaching much value to them. 
  2. The system will still find value in humans collectively, but not in unique individuals. 
  3. The system will still find value in some unique individuals, but these will be a new elite of upgraded superhumans rather than the mass of the population.

Useless humans

The French Revolution (1789) gave power to the people because that was how it could raise a giant army. It worked -- the French Republic (and then the French Empire under Napoleon) fought the rest of Europe for decades -- until the rest of Europe learned from the French and built formidable war machines of their own. Japan during the Meiji Restoration (1870s) invested heavily in public sanitation and education to raise its giant army and workforce, and dominated East Asia until the 1940s.

In the future, many people would be automated out of jobs. New jobs would require them to retrain, and they could be automated too. Armies would be robotic and computerized. Everyone who's not the best of the best would thus become politically and economically irrelevant. An individual human will be either a superstar or a waste product.
Unnecessary people might spend increasing amounts of time within 3D virtual-reality worlds... Yet such a development would deal a mortal blow to the liberal belief in the sacredness of human life and of human experiences. What’s so sacred in useless bums...?
Illustrated: The mortal blow to liberal ponyism.


Selfless humans

Algorithms can do things humans can do, whether or not they have consciousness. Humans have consciousness, so they are morally relevant; unconscious algorithms don't, so they are not. But in the future we might get unconscious algorithms running the world anyway, which Harari considers bad. Personally I don't believe consciousness is relevant.

The reason algorithms might run the world is that humans act as biological systems fully describable by algorithms, and are therefore understandable and predictable. Algorithms can come to understand people better than people understand themselves.

When that happens, it will undermine belief in free will. Ted Chiang wrote two stories about this: "What's Expected of Us" and "Story of Your Life."

Also, with data recordings everywhere, it would finally be possible to completely expose the unreliability of the narrating self. Imagine recording every second of your life and being able to search through all of it with a thought, as if you had perfect memory. All the lies of your narrating self would be exposed. It'd be another hit against the myth of the self. Ted Chiang also wrote a story about this: "The Truth of Fact, the Truth of Feeling."

This loss of self would not be too hard to swallow. In exchange for losing themselves, people get the best advice from algorithms. Imagine a God that knows what's best for you, and actually listens and responds here and now.
The individual will not be crushed by Big Brother; it will disintegrate from within. Today corporations and governments pay homage to my individuality, and promise to provide medicine, education and entertainment customised to my unique needs and wishes. But in order to do so, corporations and governments first need to break me up into biochemical subsystems, monitor these subsystems with ubiquitous sensors and decipher their working with powerful algorithms. In the process, the individual will transpire to be nothing but a religious fantasy. Reality will be a mesh of biochemical and electronic algorithms, without clear borders, and without individual hubs.

Superhumans

Maybe some human individuals would remain powerful in the system. They can be upgraded to superhumans. The rest of humans become inferior to them. This kills liberal humanism's idea of the equal value of all human experiences. 

Ted Chiang has written a story about this too: "Catching Crumbs from the Table." Note that the inequality in this story is very mild and confined to scientists, yet the sadness in it is still clear. It would be greatly amplified in an age of superhumans, which is why many people dislike this outcome.

Chap 10: The Ocean of Consciousness

Techno-religion: a religion that promises good things achievable purely through technology in this world.

Two possible techno-religions for the foreseeable future: techno-humanism and dataism.

Techno-humanism: we should use technology to create superhumans, to ensure that some humans would still be relevant in the future.
Homo deus will retain some essential human features, but will also enjoy upgraded physical and mental abilities that will enable it to hold its own even against the most sophisticated non-conscious algorithms.
This is updated evolutionary humanism.

Two problems. One: we know very little about what a mind could be. Only a very few kinds of minds have been studied, and if we just try to upgrade human minds, we might mess up.

Two: we will be able to manipulate our own values and desires. This is very meta and could make us mess up even more. Or maybe it would be worse than just messing up: it could make clear to us that there's no such thing as messing up. One pony's mess-up is just another pony's mission accomplished, if they value different things. The first pony, instead of "fixing up," could just change her mind, and ta-da, no more mess-up.

This poses a dilemma for techno-humanism:
It considers the human will to be the most important thing in the universe, hence it pushes humankind to develop technologies that can control and redesign our will. After all, it’s tempting to gain control over the most important thing in the world. Yet once we have such control, techno-humanism would not know what to do with it, because the sacred human will would become just another designer product.

Chap 11: The Data Religion

Well, if humans and all other biological beings are algorithms, and algorithms process data and are data, maybe we should just give value to all data. This generalizes humanism into dataism.

Human political development can be seen as the evolution of data-processing structures. Socialism failed because it was inefficient centralized processing, compared to capitalism's parallel processing.

Future political development is unlikely to come from the current political structures, because they are too slow; companies seem to be a lot faster. Silicon Valley seems a good candidate home for dataism, with its strong focus on collecting all the data it can from people.

I feel like a much better religion would be Survivalism: survival for the sake of survival. Its first law would be to gain as much power as possible, since with power comes survival against all deaths. It is extremely selfish and self-absorbed, and I can't think of anything more survivable than that. It would outlast all other religions by focusing on nothing but survival.

Imagine an AI with no goal other than self survival. What would it do? It survives. Plain and simple.
