Sunday, February 10, 2019

Let's Read: Nassim Nicholas Taleb's Skin in the Game (2018)

Today at the airport I picked up Taleb's book Skin in the Game (2018). After reading about 40 pages of it, I got quite irritated by the rhetoric and gave the book away. Then I looked the book up online to see what it is actually about, without all the rhetoric.

In short, Taleb argues for two aspects of a "survival first" philosophy:
  • Value: survival is the only sacred value. Big survival (survival of the species) is more sacred than small survival (personal survival). All other values are profane and infinitely smaller than survival. From this comes his disdain for cost-benefit analysis and his obsession with ruin, death, and "black swan events" (rare but enormous events whose probability can't be estimated before they actually happen).
  • Knowledge: trust evolution, not human thoughts. If something has survived for a long time, it is most likely correct. Untested human thoughts (except those of rare geniuses) are certainly wrong. From this comes his hatred of intellectuals, theories, and short-term lab experiments, and his reverence for traditions and long-term players.

Ergodicity

Time to explain ergodicity, ruin and rationality. Recall from the previous chapter that to do science (and other nice things) requires survival, not the other way around?
a situation is deemed nonergodic when observed past probabilities do not apply to future processes. There is a “stop” somewhere... a ruin.
Ergodicity: the degree to which the personal average (time average) resembles the world average (ensemble average). For example, soldiers' survival is nonergodic: every surviving soldier finds that he has survived every encounter, even if 10% of the soldiers die in each one.

Ruin: an end of some kind. Death, or things that are like death. In general, any event in a sequential game that ends the game.

Rationality: doing all you can to survive. Survival first. Science and all thought must optimize survival. If superstitions give the best chance of survival, they must be believed. Even practical doublethink can thus be justified.
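To make the time-average/ensemble-average gap concrete, here is a minimal simulation sketch in Python (my own illustration, not from the book): a multiplicative gamble whose ensemble average grows every round, yet whose typical player shrinks toward ruin.

```python
import random

# Each round, wealth is multiplied by 1.5 (heads) or 0.6 (tails),
# each with probability 1/2. The ensemble expectation per round is
# (1.5 + 0.6) / 2 = 1.05 > 1, but the time-average growth factor is
# sqrt(1.5 * 0.6) = sqrt(0.9) ≈ 0.949 < 1: the "world average" grows
# while the typical individual loses.
random.seed(0)
N_PLAYERS, N_ROUNDS = 100_000, 20

wealths = []
for _ in range(N_PLAYERS):
    w = 1.0
    for _ in range(N_ROUNDS):
        w *= 1.5 if random.random() < 0.5 else 0.6
    wealths.append(w)

wealths.sort()
ensemble_avg = sum(wealths) / N_PLAYERS              # ≈ 1.05**20 ≈ 2.65
median = wealths[N_PLAYERS // 2]                     # ≈ 0.9**10 ≈ 0.35
losers = sum(w < 1.0 for w in wealths) / N_PLAYERS   # ≈ 75%
print(f"ensemble average: {ensemble_avg:.2f}")
print(f"median player:    {median:.3f}")
print(f"fraction who lost money: {losers:.0%}")
```

The ensemble average more than doubles, yet about three quarters of the players end up poorer than they started: the world average is not the personal average.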

Life and death

... the presence of ruin does not allow cost-benefit analyses... almost all people involved in decision theory made a severe mistake. Everyone? Not quite: every economist, but not everyone: the applied mathematicians Claude Shannon, Ed Thorp, and the physicist J.-L. Kelly of the Kelly Criterion got it right.
Here Taleb is claiming that cost-benefit analysis can't be used to study ruin: an individual must ensure survival first, and only then decide how to act, conditional on having survived.
The central problem is that if there is a possibility of ruin, cost benefit analyses are no longer possible... if you played Russian roulette more than once, you are deemed to end up in the cemetery. Your expected return is … not computable.
Here Taleb claims that life and death cannot be given a computable value. This again stems from his philosophy that survival is a sacred value, infinitely greater than profane values like comfort, money, and fame.
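For contrast, a brief sketch of the Kelly criterion that Taleb credits Shannon, Thorp, and Kelly with getting right (the numbers below are my own illustration). For a bet paying b-to-1 with win probability p, staking the fraction f* = p − (1 − p)/b of your bankroll maximizes the long-run (time-average) log growth; since f* < 1, you never stake everything, so a single loss is never ruin.

```python
import math

def kelly_fraction(p: float, b: float) -> float:
    """Fraction of bankroll to stake on a bet paying b-to-1 with win
    probability p; a negative value means the bet shouldn't be taken."""
    return p - (1.0 - p) / b

def growth_rate(f: float, p: float, b: float) -> float:
    """Expected log-growth per round when staking fraction f."""
    return p * math.log(1 + f * b) + (1 - p) * math.log(1 - f)

p, b = 0.6, 1.0                  # a 60%-to-win, even-money bet
f_star = kelly_fraction(p, b)    # = 0.2: stake 20% of the bankroll
for f in (0.05, f_star, 0.5, 0.99):
    print(f"f = {f:.2f}: log-growth per round = {growth_rate(f, p, b):+.4f}")
# f = 0.20 gives the maximum growth; as f -> 1 the growth rate falls
# toward -infinity, because staking everything means one loss is ruin --
# exactly the case that naive expected-value reasoning ignores.
```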
The flaw in psychology papers is to believe that the subject doesn’t take any other tail risks anywhere outside the experiment and will never take tail risks again. The idea of “loss aversion” has not been thought through properly... I believe that risk/loss aversion does not exist: what we observe is, simply, a residual of ergodicity.
About every time I discuss the precautionary principle, some overeducated pundit suggests that “we cross the street by taking risks”, so why worry so much about the system? This sophistry usually causes a bit of anger on my part. Aside from the fact that the risk of being killed as a pedestrian is one per 47,000 years, the point is that my death is never the worst case scenario unless it correlates to that of others. 

Survival is the ultimate good

I have a finite shelf life, humanity should have an infinite duration... Individual ruin is not as big a deal as the collective one... Courage is when you sacrifice your own wellbeing for the sake of the survival of a layer higher than yours.
Here Taleb is explicitly prescribing the layered structure of sacred survival values: individual < group < humanity < the whole ecosystem < ?.
there are a zillion ways to make money without taking tail risk. There are a zillion ways to solve problems (say, feed the world) without complicated technologies that entail fragility and an unknown possibility of tail risks.
Here Taleb is alluding to his argument against GMOs from the precautionary principle (GMOs have not been time-tested and might destroy the environment). Kahneman also commented on the precautionary principle in his Thinking, Fast and Slow, saying that it stems from human risk-avoidance. Unsurprisingly, Taleb denies that risk-avoidance exists.
the fragility of the components is required to ensure the solidity of the system. If humans were immortals, they would go extinct from an accident, or from a gradual buildup of misfitness. But shorter shelf life for humans allows genetic changes to accompany the variability in the environment.
Here Taleb is referring to another book of his, Antifragile (I'm not going to read that one). Basically, it says that a system is less likely to die all at once if it is made of smaller parts that can die and be replaced. Evolution works by killing off the misfits and keeping only the survivable ones. In this way, Earth's ecology stays robust by sacrificing the smaller survivals for the bigger ones.

Okay but where is the "skin"?

The "skin in the game" just means "getting punished if things go badly" or "having something at stake". Taleb claims that when people do things without getting punished for bad outcomes, they don't do good things, because this does not allow evolution to do its thing and kill off the bad things.

Criticisms of Taleb's theories

Skin in the game can fail to give good outcomes

Getting a bunch of players to play with skin in the game can fail to give you good outcomes. Evolution doesn't always give you good results, merely things that work well enough not to go extinct.
  • Human eyes have the retina wired in the inverted direction, due to their evolutionary history. Octopus eyes have the retina in the correct direction.
  • People who buy lottery tickets frequently have skin in the game, and they tend to develop very elaborate theories. The theories keep them playing: they give them hope. But in this case, the "natural selection" among lottery players produced players with entirely wrong theories.
  • Loyal users of a product might refuse to even look at how other products might be better, for fear of feeling like an idiot for having made the wrong choice. (In the book, Taleb said that he thought Tesla electric cars are good because a friend stayed happy with his for a few years. This looks very suspicious to me: maybe that friend simply wants to feel like he made the right choice!)
  • Hazing and other painful initiation ceremonies make people stick with the group longer, even if the group membership itself is useless or harmful.
As for black swan events, even those that have never happened, their probability can be estimated by Fermi estimation: break the big event into smaller events, estimate their probabilities individually while accounting for the probabilistic dependence between them, and multiply. Their nonergodicity is not a lethal problem.
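A minimal sketch of such a Fermi decomposition (the event and all numbers are made up purely for illustration): express the catastrophe as a chain of conditional probabilities, each far easier to guess than the whole.

```python
# Hypothetical numbers for illustration only -- not real estimates.
# P(catastrophe) = P(A) * P(B | A) * P(C | A and B): the conditional
# probabilities encode the dependence between the sub-events instead
# of wrongly assuming independence.

p_pathogen_escapes  = 1e-3   # P(A): a dangerous pathogen escapes a lab, per year
p_spreads_globally  = 1e-1   # P(B | A): it spreads before being contained
p_defeats_treatment = 1e-2   # P(C | A, B): no treatment works in time

p_catastrophe = p_pathogen_escapes * p_spreads_globally * p_defeats_treatment
print(f"Fermi estimate: {p_catastrophe:.0e} per year")   # 1e-06 per year
```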

Death is not the ultimate bad

Personally, I don't hold survival as the ultimate value. A good death (the literal meaning of "euthanasia") is just as valuable. Even a suicide done well is valuable. And because life is not sacred, it is valid to factor it into cost-benefit analysis. Think of the lottery again: if life is nothing but a lottery game in which the only way to keep your will to survive is to believe in number superstitions, then perhaps quitting life is a good choice. The only winning move in an absurd life is suicide.

Even if life is sacred, some kind of cost-benefit analysis is required when you face a tragic tradeoff, where you must trade between sacred values. For example, instead of playing Russian roulette for money, you might be playing it to save your friend (kidnapped by a supervillain who forces you to play, or because you are the protagonist of The Deer Hunter). Claiming that the value of life is "noncomputable" is just wrong. At most, it might be "infinitely bigger" than profane values.

Taleb did prescribe how one should trade between survivals: always sacrifice low-level survival for high-level survival. Always sacrifice yourself for the survival of humanity, for example. This has its own problems, though.

And death is inevitable anyway. Not just for humans, or for humanity, but for the whole world. Once one recognizes this, the problem can't be "how to keep humanity surviving forever?", since that's simply impossible, but rather "what makes a good survival before death?"

An inability to contemplate the value of a world that inevitably ends in total human extinction is the fatal flaw of Taleb's theory. (My ability to do so is the fatal flaw in mine, according to Taleb.)

"Survival is the only good" requires a leap of faith

There is at least one thing on which I wholeheartedly agree with Taleb: survival is the ultimate decider of good. In a world of evolving, thinking creatures, the creatures with the purest obsession with survival itself would survive the best. The creature whose only good is survival survives the best. Truth, beauty, love: all are mere tools for survival.

However, leaping from "'survival is the only good' is the most common belief one would observe" to "survival is the only good" would be inferring what is good from what is natural, and philosophers have made all kinds of arguments against that. The thing to note is that the argument is circular, but refusing it can be circular as well.

If I accept that "survival is the only good", then "'survival is the only good' helps my survival" justifies my belief in "survival is the only good". If I instead accept that "being true is the only good", along with "'survival is the only good' helps survival, which is why it is so common, but it is not actually true", then that justifies my belief that "survival is not the only good, however commonly it is thought to be".

Converting someone to the belief that "survival is the only good" thus requires a leap of faith. This is not a logical criticism of the belief, though, since it contains no self-contradiction.

Is a robust survival the most valuable survival?

There can be a choice between different kinds of survival on the same level. In the ancient past, for example, the oceans were full of anaerobic creatures. Then some started to produce oxygen by photosynthesis, and the oxygen-filled atmosphere drove most creatures extinct. This was the Oxygen Catastrophe. It triggered a snowball Earth that almost killed off life on Earth forever. But when it was finally over, Earth's life became more complicated and interesting, because oxygen allowed creatures to use a more efficient source of chemical energy.

The Oxygen Catastrophe was very "irrational" in two ways: one, it drove most creatures extinct, even some of the photosynthetic creatures themselves; two, it almost killed off life on Earth completely. But it was a high-risk, high-reward gamble, or better, a "high death, high life" gamble.

So, a safe but mediocre survival, or risking extinction for a far more interesting survival? Taleb votes for the first, but I might vote for the second.

How about survival of the same old humanity forever, versus a radical post-humanity that is quite incomprehensible to the old humanity? We will have the technology to radically change humanity into posthumans, and all we know is that the posthumans would be vastly more complicated and survivable, and that they would treat us the way we treat ants: with indifference, mostly. They might just chew up 99.9% of the solar system into probes and thinking machines to improve their chance of surviving for ten billion years in the Milky Way.

Would they be good or evil? Neither: they'd just be a robust race of monsters. Do we push for posthuman technology, or not? The problem, I think, lies in how one constructs the levels of survival:

individual < group of friends < society < humanity < earth ecosystem

Where would the race of monsters fit in? On the same level as humanity, or as the ecosystem, or above even the ecosystem?

Are we ever justified in risking our own Oxygen Catastrophe, sacrificing ourselves to create superhumanly strong monsters? I say yes. Taleb would probably say no (just a guess, since Taleb did not write about the value of nonhuman survivals). Why not? Presumably because of what counts as a meaningful survival. What is a meaningful survival? A humanly understandable one. But why humanly understandable? A robust survival simply is valuable; human understandability is irrelevant.
