Wednesday, February 20, 2019

The ghost of morality in science

Simply put, morality is a ghost that doesn't like to be scienced, and science has been trying its best to make morality go away.

The basic point of the scientific viewpoint on morality is that morality does not do anything. Morality by definition resists scientific explanation: if there is an explanation for why we act morally, it becomes imaginable that we could act differently, or not morally at all, and either way "morality" can be seen as a ghost that does no work.

Think of the marital morality of the praying mantis. Is it moral, or not, for the female mantis to eat the male after mating? It is in their nature, and for a deep reason: praying mantises evolved to have this strange and cruel (as viewed by a human) morality. If they had evolved some other way, they would have a different morality.

Tuesday, February 12, 2019

$\omega$-inconsistent life

I was sad and wanted to die, but didn't want to die yet because I would like to think of a good reason to.

I said, "I know that there's a proof of life's meaninglessness, but I'd like to be justified in my belief. So I would kill myself on the next day iff I find a proof of life's meaninglessness."
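One way to read the title's pun: let $P(n)$ say "on day $n$ I find a proof of life's meaninglessness." Believing the proof exists while failing, day after day, to find it mirrors an $\omega$-inconsistent theory, which proves an existential claim while refuting every particular instance:

$$T \vdash \exists n\, P(n), \qquad \text{yet} \qquad T \vdash \neg P(0),\ \neg P(1),\ \neg P(2),\ \dots$$

So the resolution is "justified" in the abstract but never triggered on any particular day.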

Sunday, February 10, 2019

Let's Read: Nassim Nicholas Taleb's Skin in the Game (2018)

Today at the airport I picked up Taleb's book Skin in the Game (2018). After reading about 40 pages of it, I got quite irritated by the rhetoric and gave the book away. Then I looked up a webpage to see what the book is actually about, minus the rhetoric.

In short, Taleb argues for two aspects of a "survival first" philosophy:
  • Value: survival is the only sacred value. Big survival (survival of the species) is more sacred than small survival (personal survival). All else is profane and infinitely smaller than survival. From this comes his disdain for cost-benefit analysis and his obsession with ruin, death, and "black swan events" (rare but consequential events whose probability cannot be estimated before they happen).
  • Knowledge: trust evolution, not human thoughts. If something has survived for a long time, it is most likely correct. Untested human thoughts (except those of rare geniuses) are almost certainly wrong. From this comes his hatred of intellectuals, theories, and short-term lab experiments, and his reverence for traditions and long-term players.

Thursday, February 7, 2019

Mechanical Terra-reforming, human-tech coevolution

Terraforming, terra-reforming

In the 2030s, humans arrived on Mars. Humans were born on land, and they had lungs that could not extract the gas that humans need to continue their chemical life-fire. Not a problem. They brought with them cylinders of compressed life-gas, and life-gas-producing creatures. They turned Mars green and covered its surface with life-gas.

In the 19th century, metallic horses were born on Earth, and they had circular feet that could not trot over most kinds of land. No mountains, no grasslands, no marsh. Only hard, flat surfaces sloping no more than 10 degrees. Not a problem. Their human companion species set out and terra-reformed the world for them. Now there are about 1e9 of these metallic horses running wild on the hard flat surfaces of Earth.

Tuesday, February 5, 2019

A theory of free will and psychopathy

Free will

Define: an effectively deterministic system, or an apparent robot, is a system whose behavior can be predicted easily from its initial state and immediate surroundings.

Define: an effectively teleological system, or an apparent agent, is a system whose behavior cannot be predicted as above, but whose future state can be predicted somewhat.

Basically, an apparent robot is somebody that you can read like a book, and an apparent agent is somebody that you can't, but you can still guess what "goals" (likely future states) they have.

A successful creature needs to figure out what other creatures are going to do. But it's too hard to model them as apparent robots, simply because of how complicated creatures are. It's easier to model them as apparent agents.

Apparent agents are apparently free: they aren't apparently deterministic.

Apparent agents are willful: they do actions.

Thus, apparent agents apparently have free will. To say someone "has free will" means that someone is a creature that does things in a way you can't predict.
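The two definitions can be illustrated with a toy simulation (a sketch only; all names and numbers here are hypothetical, not from the text): the robot's behavior is predictable step by step, while the agent's individual moves are not, yet its likely future state (its "goal") is.

```python
import random

def robot_step(x):
    # An apparent robot: behavior follows directly from the current state.
    return x + 1

def agent_step(x, goal=10):
    # An apparent agent: individual moves look random,
    # but they drift toward a predictable goal state.
    if x >= goal:
        return goal
    return min(goal, x + random.choice([-1, 1, 2]))

x = 0
for _ in range(5):
    x = robot_step(x)
print(x)  # always 5: readable like a book

y = 0
for _ in range(100):
    y = agent_step(y)
print(y)  # the path is unpredictable, but it almost surely ends at the goal, 10
```

The point of the sketch: you cannot compress the agent's trajectory into a simple rule, but you can still name its likely end state, which is exactly what attributing a "goal" does.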

Monday, February 4, 2019

An Off-by-one Error in the Tet Offensive (1968)

North Vietnam used UTC+07:00 since 1968-01-01.
South Vietnam used UTC+08:00 since 1960-01-01.

The relevant new moon was at 1968-01-29 16:31 UTC:
in North Vietnam time, 1968-01-29 23:31;
in South Vietnam time, 1968-01-30 00:31.

In Chinese and Vietnamese calendars, months begin on the day of the new moon. Years begin on the second or third new moon after the winter solstice.

Tet was on 1968-01-29 in North Vietnam.
Tet was on 1968-01-30 in South Vietnam.
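The conversion above can be checked directly with Python's datetime module (the new-moon instant is the one given in the text):

```python
from datetime import datetime, timedelta, timezone

# New moon instant, from the text above
new_moon = datetime(1968, 1, 29, 16, 31, tzinfo=timezone.utc)

hanoi = timezone(timedelta(hours=7))   # North Vietnam, UTC+07:00 since 1968
saigon = timezone(timedelta(hours=8))  # South Vietnam, UTC+08:00 since 1960

# The lunar month (and Tet) begins on the local calendar day of the new moon
print(new_moon.astimezone(hanoi).date())   # 1968-01-29
print(new_moon.astimezone(saigon).date())  # 1968-01-30
```

The one-hour offset difference happens to straddle midnight, which is the whole off-by-one error.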

Saturday, February 2, 2019

Fermi estimate of time it takes to solve chess

The Landauer limit is about 3e-23 J/bit at the current cosmic background temperature:
$$E = kT\ln 2 \approx 3\times 10^{-23}\,\mathrm{J/bit}, \qquad T = 3\,\mathrm{K}, \qquad k \approx 1.4\times 10^{-23}\,\mathrm{J/K}$$

A GPU searching game nodes does about 2e10 nodes/s while switching about 3e12 bits/s, which gives roughly 100 bits/node.

The number of chess nodes needed to evaluate is about 1e43.

So we get an upper bound of 1e45 bits needed to solve chess, assuming we can't prune any nodes.

That requires 3e22 J at the Landauer limit.
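The whole estimate fits on one screen (the node count and GPU figures are the rough values from the text; the exact Boltzmann constant nudges the answer to ~4e22 J, the same order as above):

```python
import math

k = 1.380649e-23                 # Boltzmann constant, J/K
T = 3.0                          # cosmic background temperature, K
landauer = k * T * math.log(2)   # minimum energy to erase one bit, ~2.9e-23 J

bits_per_node = 3e12 / 2e10      # GPU bit-switches per node searched, ~150
nodes = 1e43                     # rough number of chess positions to evaluate

total_bits = nodes * bits_per_node
energy = total_bits * landauer   # ~4e22 J
print(f"{energy:.0e} J")
```

For scale, that is within a few orders of magnitude of humanity's total annual energy consumption, which is the point of the Fermi estimate.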

Let's Read: Neuropath (Bakker, 2009)

Neuropath (Bakker, 2009) is a dramatic demonstration of the eliminative-materialist worldview of its author, R. Scott Bakker. It's very b...