Thursday, December 10, 2020

How true belief and useful belief can be contradictory

Introduction

Beliefs! There are so many ways to describe belief!

  • True/False
  • Well-founded/Poorly-founded
  • Useful/Useless
  • Good/Evil
When I feel fluffy and innocent (when I'm holding a friend close to my chest), these criteria all line up: true beliefs are good, good beliefs are useful, useful beliefs are well-founded, and well-founded beliefs are true. And the same goes for their opposites.

Basically, there is a great harmony in the world of beliefs.

When I'm feeling a bit more critical but still hopeful (when I'm doing science), I recognize that they don't always match, but that they will converge as time goes on. As the years go by, what remains well-founded is true. True beliefs might not be convenient right now, but they become useful eventually. True beliefs might seem evil now, but a deeper understanding of morality will show they are actually good.

Basically, he who laughs last laughs best, in the agora of beliefs.

But when I feel edgy and sad (most of the time), these criteria all fall into disharmony: important true beliefs can remain unjustified, because the evidence is lost forever. Some beliefs are simply unthinkable, no matter what evidence we get. Useful beliefs can be false. True beliefs can be evil. Good beliefs can be useless.

Evolution of perceptions and beliefs: true vs useful

This section argues that due to evolution, what we believe can be useful but not true. Whether this happens often, or rarely, is still controversial.

Can we see reality as it is?

Can we believe what we see?
  • Of course! Seeing is believing!
  • Not so fast, sometimes we see things that aren't real, like in a magic trick.
  • In magic tricks, you are seeing real things, but just interpreting them wrong.
  • How about visual illusions?
  • Uh... those things are rare. Most of the time we see reality as it is.
  • Is it...?
Consider the talk "Your brain hallucinates your conscious reality" by Anil Seth (2017):

The point is simple enough: we don't perceive reality as it is. Rather, we come with expectations and biases that shape what we perceive. Instead of seeing reality as it is, this is how we actually see:
  1. Light falls on the retina
  2. Light is transduced into electrical signals.
  3. The electrical signals are processed in the primary visual cortex (V1) and other cortical areas.
  4. After extensive processing, the signals are used to construct a model of the outside world. This model is what we consciously experience.
In other words, we are always dreaming. It's just that when we are awake, our dreams are yoked to reality. This yoke can be loosened under psychedelics, psychosis, daydreaming, meditation, or other methods. The point, however, is that we, in a very concrete, scientific sense, perceive only a mental model of reality.
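
To make the "expectations shape what we perceive" point concrete, here is a minimal sketch of perception as precision-weighted inference, in Python. The Gaussian assumptions and every number are mine, invented for illustration; nothing here comes from Seth's talk.

    # Toy sketch: the percept is a compromise between the sensory evidence and
    # the prior expectation, weighted by how much the brain trusts each.
    def perceive(prior_mean, prior_var, sensory_mean, sensory_var):
        w_prior = 1.0 / prior_var    # precision of the expectation
        w_sense = 1.0 / sensory_var  # precision of the sensory signal
        return (w_prior * prior_mean + w_sense * sensory_mean) / (w_prior + w_sense)

    # The eye reports a sun hue of 30 (orange-ish), but the brain strongly
    # expects hue 0 ("red"). The conscious percept gets dragged toward red.
    print(perceive(prior_mean=0.0, prior_var=1.0, sensory_mean=30.0, sensory_var=4.0))  # 6.0

The stronger the expectation (the smaller prior_var), the more the percept is a dream constrained by reality rather than reality itself.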

And this mental model can be wrong. It is clearly wrong in psychotic episodes, where people perceive talking ghosts and holes in walls. But the scary thing is that even in normal episodes, this mental model can still be wrong, in a stubborn, persistent way. 

Artists know this well: if you don't concentrate, you will draw what you think things look like, which is very different from what things actually look like. If you paint the sun as red because "it should be red", it will look very wrong. That's not your fault: you, as a conscious being, actually see the sun as "red", because the sun in your mental model has a little tag that says "red":

real life: a slightly orange sun -> mental model: red sun

However, when you actually draw your mental model, you get something disappointing:

real life: a red circle on the paper -> mental model: a bad drawing of a sun

The training of an artist is to "change the mental model", to enrich the mental model with new annotations:

real life: a slightly orange sun -> mental model: a slightly orange circle surrounded by roughly horizontal dabs of creamy-white soft brushes...

... unless you are Picasso.

Why can't we see reality as it is?

Short answer: because it's not useful.

Reality is big, really big. A grain of sand can take on amazing complexity if you look at it closely. It is simply too big to hold in our heads. If you could see reality as it is, your brain would explode from the sheer complexity of it all. Consider the unfortunate Funes the Memorious, who could see things in the full and total detail supported by his eyes, and never forget:
We, in a glance, perceive three wine glasses on the table; Funes saw all the shoots, clusters, and grapes of the vine. He remembered the shapes of the clouds in the south at dawn on the 30th of April of 1882, and he could compare them in his recollection with the marbled grain in the design of a leather-bound book which he had seen only once, and with the lines in the spray which an oar raised in the Rio Negro on the eve of the battle of the Quebracho. These recollections were not simple; each visual image was linked to muscular sensations, thermal sensations, etc.
He was, let us not forget, almost incapable of general, platonic ideas. It was not only difficult for him to understand that the generic term dog embraced so many unlike specimens of differing sizes and different forms; he was disturbed by the fact that a dog at three-fourteen (seen in profile) should have the same name as the dog at three-fifteen (seen from the front). His own face in the mirror, his own hands, surprised him on every occasion.
It was very difficult for him to sleep. To sleep is to be abstracted from the world; Funes, on his back in his cot, in the shadows, imagined every crevice and every molding of the various houses which surrounded him. (I repeat, the least important of his recollections was more minutely precise and more lively than our perception of a physical pleasure or a physical torment.)

Trying to see all of reality, even if it were possible, would be extremely maladaptive. You would be paralyzed by the richness. Only by cutting out almost everything can you make perception useful. Don't see moving blobs, see humans. Don't hear sounds, hear words. Etc.

Perception is thus a quick fix that's kinda real, but not quite real. It works okay, until it doesn't.

Dragonflies, for instance, have aquatic larvae and must find water to lay their eggs. Dragonfly vision has a simple trick to find water: find horizontally polarized light reflections (Horvath et al 1998, 2007). Water strongly reflects horizontally polarized light, so this trick often guides successful oviposition. Unfortunately for the dragonfly, oil slicks and shiny tombstones also reflect such light, sometimes more strongly than water, and dragonflies are fooled into laying eggs where they cannot survive.

Or consider the pretty jewel beetle:

Male jewel beetles fly about looking for the glossy, dimpled, and brown wing-casings of females. When males of H. sapiens began tossing out empty beer bottles that were glossy, dimpled, and just the right shade of brown, the male beetles swarmed the bottles and ignored the females, nearly causing the extinction of the species (Gwynne and Rentz 1983). The beetles’ perceptions relied not on veridical information but rather on heuristics that worked in the niche where they evolved.

Since such mental models have persisted for so long and are so widespread, they probably aren't flaws that evolution has yet to weed out. Rather, evolution has made us this way: we evolved to see false but useful things.

This is part of what is called "evolutionary epistemology" (a form of the "naturalized epistemology" discussed below).

A good review paper that goes into detail about how evolution can make perception consistently unreal is The Interface Theory of Perception (2015). The paper argues against the common view among biologists, which is that evolution makes humans see reality as it is: humans are not like dragonflies or jewel beetles; their mental models of the world might be incomplete, but they are not actively misleading, and with enough observation a human can construct a very accurate model of reality.
The evolutionary theorist Trivers (2011), for example, states the common view plainly: “…Our sense organs have evolved to give us a marvelously detailed and accurate view of the outside world—we see the world in color and 3-D, in motion, texture, nonrandomness, embedded patterns, and a great variety of other features. Likewise for hearing and smell. Together our sensory systems are organized to give us a detailed and accurate view of reality, exactly as we would expect if truth about the outside world helps us to navigate it more effectively.”

This is basically because most of these biologists are optimists: they think that because humans live in so many varied environments, the best strategy for survival is to perceive reality accurately, instead of relying on fast but false perceptions.

The optimistic hypothesis: Seeing true = Seeing usefully

But this might be too optimistic. Hoffman et al. suggest that we should think of human perception as a useful distortion, a cartoon version, a "graphical user interface" of reality:

Just as the color and shape of an icon for a text file do not entail that the text file itself has a color or shape, so also our perceptions of space-time and objects do not entail that objective reality has the structure of space-time and objects. An interface serves to guide useful actions, not to resemble truth. Indeed, an interface hides the truth; for someone editing a paper or photo, seeing transistors and firmware is an irrelevant hindrance. For the perceptions of H. sapiens, space-time is the desktop and physical objects are the icons. Our perceptions of space-time and objects have been shaped by natural selection to hide the truth and guide adaptive behaviors. Perception is an adaptive interface.
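
A toy simulation gives the flavor of the argument. This is my own minimal sketch, loosely inspired by the evolutionary games in the interface-theory literature; the payoff function and both agents are invented for illustration, not taken from the paper.

    import random

    # Fitness is not monotonic in the true resource quantity: too little or
    # too much is bad, an intermediate amount is best. (Invented payoff.)
    def payoff(quantity):
        return max(0.0, 100.0 - (quantity - 50.0) ** 2 / 10.0)

    # "Truth" agent: its percept tracks the true quantity, and it picks the
    # territory with more resources.
    def truth_agent(options):
        return max(options)

    # "Interface" agent: its percept tracks expected payoff (a fitness-tuned
    # icon), and it picks the territory whose icon looks best.
    def interface_agent(options):
        return max(options, key=payoff)

    def mean_payoff(agent, trials=100_000):
        total = 0.0
        for _ in range(trials):
            options = [random.uniform(0, 100), random.uniform(0, 100)]
            total += payoff(agent(options))
        return total / trials

    print("truth agent:    ", round(mean_payoff(truth_agent), 1))
    print("interface agent:", round(mean_payoff(interface_agent), 1))

The interface agent wins whenever the better-looking icon is not the bigger pile, which is exactly the wedge between tracking truth and tracking fitness.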

Naturalism: doing philosophy like a scientist

Traditionally, philosophy studies truths in the ideal: truths that do not depend on the accidents of history, biology, and physics. Recently, however, philosophy has come under serious pressure from natural science, in particular from evolution and biology. On this view, epistemology is not some abstract, eternal method for finding truth; it should be studied as how humans (and other animals) actually reason, shaped by their biology and evolution. Similar projects include "evolutionary ethics", "neurophilosophy", etc.

The heat is on for "naturalizing epistemology", a project described in Epistemology Naturalized (1969) by the great philosopher Quine:
Epistemology, or something like it, simply falls into place as a chapter of psychology and hence of natural science. It studies a natural phenomenon, viz., a physical human subject. This human subject is accorded a certain experimentally controlled input—certain patterns of irradiation in assorted frequencies, for instance—and in the fullness of time the subject delivers as output a description of the three-dimensional external world and its history. The relation between the meager input and the torrential output is a relation that we are prompted to study for somewhat the same reasons that always prompted epistemology: namely, in order to see how evidence relates to theory, and in what ways one’s theory of nature transcends any available evidence….But a conspicuous difference between old epistemology and the epistemological enterprise in this new psychological setting is that we can now make free use of empirical psychology.
Since we are products of evolution, our beliefs, just like our perceptions, are shaped for usefulness rather than truth. The most we can expect is:
The optimistic hypothesis: Believing true = Believing usefully

But can we expect that much? Unfortunately, I don't know enough to say, and scientists are debating this issue. 

However, might I suggest a bit of humility and cosmic horror? We are bugs living in ignorance of almost everything in this vast cosmos, and even our basic intuitions about time, space, and objects are wrong (relativity, quantum mechanics). What can we expect, if not more unsettlings of our basic intuitions? And what of the vast blackness that will forever remain unknown unknowns?

Immoral truths, noble lies, and virtuous ignorance

Evidentialism vs pragmatism vs the others

That's enough descriptive epistemology ("how people believe"). Time for normative epistemology ("how people should believe")! Here there are two big camps: 
  • evidentialism: one should believe based solely on the evidence.
  • pragmatism: one should believe based solely on usefulness. Evidence-supported beliefs are often useful beliefs, but it can be good to disregard hurtful evidence, or to rely on faith instead of evidence.
And a lot of small camps, like the antirealists ("The rules are not set in stone. Let's make up some rules about how to believe."), the nihilists ("nothing goes"), the anarchists ("anything goes"), the virtue theorists ("just focus on cultivating your virtues, and you will smoothly glide into good beliefs")...

The side of evidentialism is well represented by the famous mathematician W. K. Clifford's The Ethics of Belief (1877):
A shipowner was about to send to sea an emigrant-ship. He knew that she was old, and not overwell built at the first; that she had seen many seas and climes, and often had needed repairs. Doubts had been suggested to him that possibly she was not seaworthy. These doubts preyed upon his mind, and made him unhappy; he thought that perhaps he ought to have her thoroughly overhauled and refitted, even though this should put him to great expense. Before the ship sailed, however, he succeeded in overcoming these melancholy reflections... and he got his insurance-money when she went down in mid-ocean and told no tales. 
What shall we say of him? Surely this, that he was verily guilty of the death of those men. It is admitted that he did sincerely believe in the soundness of his ship; but the sincerity of his conviction can in no wise help him, because he had no right to believe on such evidence as was before him. He had acquired his belief not by honestly earning it in patient investigation, but by stifling his doubts...
Let us alter the case a little, and suppose that the ship was not unsound after all; that she made her voyage safely, and many others after it. Will that diminish the guilt of her owner? Not one jot...
It is wrong always, everywhere, and for anyone to believe anything on insufficient evidence.
On the other side, there is William James in The Will to Believe (1896), which argues that sometimes it is justified to take a leap of faith, not because it's supported by evidence, but because it is useful. He wrote it particularly to support religious faith.
As an example, James argues that it can be rational to have unsupported faith in one's own ability to accomplish tasks that require confidence. Importantly, James points out that this is the case even for pursuing scientific inquiry. James then argues that like belief in one's own ability to accomplish a difficult task, religious faith can also be rational even if one at the time lacks evidence for the truth of one's religious belief.

It's hard to be rational and kind, especially when it comes to race

Fact: In America, the murder rate by black people is higher than that by white people. 

Another fact: the previous one is an extremely inconvenient fact, because what is one supposed to do with it?
  • Ignore it? Not only is it difficult (even if you try to ignore it, it registers subconsciously), it is also harmful to your own safety. On top of that, being "colorblind" has its own problems, since it makes you fail to account for black people's particular hardships.
  • Account for it? That easily perpetuates racism against black people. It takes a lot of mental energy to find some creative way to both account for it and not be racist.
In the words of On the epistemic costs of implicit bias (2011):
If you live in a society structured by racial categories that you disavow, either you must pay the epistemic cost of failing to encode certain sorts of base-rate or background information about cultural categories, or you must expend epistemic energy regulating the inevitable associations to which that information – encoded in ways to guarantee availability – gives rise.
This issue becomes a lot starker if you are a judge presiding over murder cases. You can ignore the race of the defendant, but that makes your judgments less accurate (since you are discarding relevant information). You can account for it to improve your accuracy, but that would result in more convictions of black people and keep racism going.
As long as differential crime rates exist across groups in society, minimizing the overall crime rate will result in far more convictions of innocent members of the minority group even if racism is not at work. It follows mathematically (within a wide range of plausible assumptions) that by requiring less evidence to convict members of a smaller but higher crime group, one will simultaneously lower the overall crime rate and increase the overall probability of convicting an innocent person. This troubling state of affairs must be considered in light of the opposite option: to rectify racial inequality in the probability of erroneous convictions, society must tolerate a higher crime rate, whose victims will predominantly come from the minority group. Indeed, Farmer & Terrell (2001) have estimated that approximately 1,900 more murders per year will occur if racial inequality is removed from the erroneous conviction rates. (Arkes and Tetlock 2004, 272)
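
To see the arithmetic behind "it follows mathematically", here is a toy Bayes'-rule calculation. All the numbers are invented and the groups are abstract; this is my own illustration, not the paper's.

    # Same evidence, different base rates -> different rational posteriors.
    # likelihood_ratio = 10 means the evidence is 10x more likely if the
    # defendant is guilty than if innocent. (Invented numbers.)
    def posterior_guilt(base_rate, likelihood_ratio=10.0):
        return (likelihood_ratio * base_rate) / (likelihood_ratio * base_rate + (1 - base_rate))

    print(posterior_guilt(base_rate=0.01))  # group A: ~0.09
    print(posterior_guilt(base_rate=0.03))  # group B: ~0.24

With a single conviction threshold applied to the posterior, the two groups end up with different conviction rates and different error rates; equalizing the error rates requires different thresholds. That is the dilemma the quote describes.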

In short, as long as there's a differential crime rate between racial groups, a perfectly rational decision maker will manifest different behaviors, explicit and implicit, towards members of different races. This is a profound cost: living in a society structured by race appears to make it impossible to be both rational and equitable.

In more dramatic words, it wasn't me who was wrong - it was the world!

The problems are far more general than racism, of course. The paper argues that you are stuck with the same dilemma whenever 

there are systematic discrepancies between the way things are and the way you wish things to be.

In particular, the same dilemma applies to sexism, and the problem is way more severe in that case, since human males and females differ biologically quite a lot -- they literally have different anatomy and are subjected to different evolutionary pressures, so there should be some actual biological differences between human males and females. Sure, culture matters a great deal, but when different societies persistently evolve the same strategies and cultural norms, one suspects there is something biological at work as well.

This is a profound cost: living in a species structured by sex appears to make it impossible to be both rational and equitable.

Pascal's Wager

Pascal's Wager is the classical argument for belief in God because it is useful, rather than because of any good reason to think it true. I have always thought it quite stupid, like this:
  • You'll go to hell unless you believe the moon is made of cheese.
  • Okay, I believe the moon is made of cheese.
  • You are lying. You aren't really believing it.
  • Well, how could I? I can't possibly believe with no evidence, no matter how useful it is.
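
For reference, the wager itself is just an expected-value table with an infinite payoff doing all the work. A bare-bones sketch, with the payoffs being the usual caricature rather than anything from Pascal:

    import math

    # (action, state) -> payoff. Invented caricature values.
    payoff = {
        ("believe", "god_exists"): math.inf,   # heaven
        ("believe", "no_god"):     -1.0,       # small cost of piety
        ("doubt",   "god_exists"): -math.inf,  # hell
        ("doubt",   "no_god"):      0.0,
    }

    def expected_value(action, p_god):
        return p_god * payoff[(action, "god_exists")] + (1 - p_god) * payoff[(action, "no_god")]

    # Any nonzero credence in God makes "believe" dominate on expected value...
    print(expected_value("believe", p_god=0.001))  # inf
    print(expected_value("doubt",   p_god=0.001))  # -inf

...which is exactly why the argument trades on usefulness rather than truth, and exactly where it stumbles: as the exchange above points out, belief is not something you can simply choose in order to maximize expected value.
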
The situation is similar to the paradox in the lovely novelette Hell Is the Absence of God (2001), which follows Neil Fisk, a widower whose wife, Sarah, is killed as collateral damage of an angel's visitation. Sarah's soul was seen ascending to Heaven, leading the non-devout Neil to desperately seek the love and devotion needed to please God and enter Heaven to reunite with Sarah.

Neil wanted to love God, but could not manage it: God demands unconditional love, not a love of convenience. Belief is similar: you can want to believe that God exists, but if your only reason to believe is convenience, you can't manage it.

When it's someone else's belief, though, things get more interesting. Suppose you are a parent, already fully convinced that God is real and would send nonbelievers to hell. You also know that there are too many reasons to doubt God and not enough reasons to believe. You fear for your child, who is showing signs of doubt. Well, what is the moral thing for you to do?

Clearly, you should distort evidence and reasons, and indoctrinate your child with faith rather than critical thinking: emphasize every piece of positive evidence and dismiss every piece of negative evidence.

Pascal's Wager, but for humanism

Suppose you don't have faith in God. Pascal's wager has certainly lost its power over the years. But how about faith in humans? Indeed, I noticed that Pascal's wager has been applied to argue for faith in humanity:
  • Suppose you have faith in humanity: you think that there is a universal standard of goodness, and everyone can be good according to that standard, if only they try, and we help them see the light.
    • If you are right: Awesome!
    • If you are wrong: Well, you tried, and you've probably still managed to help some people. Who cares if there are some truly psychopathic people who are too broken to be good?
  • Suppose not.
    • If you are right: What do you gain, anyway? A cynical wisdom?
    • If you are wrong: You are a stupid cynic and part of the reason why the world sucks.
From Faith in humanity (2013):
Many of the people we regard as moral exemplars have profound faith in people’s decency: When segregationists bombed a black church in Birmingham, Alabama, killing four little girls, Martin Luther King, Jr. insisted that “somehow we must believe that the most misguided among them can learn to respect the dignity and worth of all human personality”. Returning to his work in psychotherapy after spending two and a half years in Nazi concentration camps, Viktor Frankl adopted as a guiding principle the view that “if we treat people as if they were what they ought to be, we help them become what they are capable of becoming”. During his campaign to secure civil rights for Indians living in South Africa, and later to secure independence for India, Gandhi urged his followers to treat as “an article of faith” the view that there is “no one so fallen” that he cannot be “converted by love”.
That these and other moral exemplars have such faith is no accident. As I will argue, having a certain form of faith in people’s decency, which I call faith in humanity, is a centrally important moral virtue... because having faith in people’s decency tends to prompt them to act rightly, helps one avoid treating them unjustly, and constitutes a morally important form of support for them.

Modesty

The virtues of ignorance (1989) argues that modesty is a virtue that is defined by genuine ignorance: you are modest if and only if you underestimate yourself. It's not enough to say you are not that great, you have to be great and be unaware of it.
the modest person underestimates his self-worth. If he speaks, then he understates the truth, but he does so unknowingly. This entails that the modest person is ignorant, to a certain degree, with regard to his own self-worth. He underrates himself, and therefore only takes a portion of the credit due him... 

This implies the somewhat paradoxical situation where a truly modest person should not think of themselves as being modest, since they must sincerely think "I'm not that great." If they think "I'm not that great, but I'm being modest," they can conclude "So I'm actually great," and suddenly become immodest.

The folly of love

Yet ah! why should they know their fate?
Since sorrow never comes too late,
         And happiness too swiftly flies.
Thought would destroy their paradise.
No more; where ignorance is bliss,
       'Tis folly to be wise.
Ode on a Distant Prospect of Eton College, Thomas Gray
Marriage:
  • Fact: In America, about half of marriages end in divorce.
  • Also fact: Very few people start a marriage, expecting it to end in a divorce.
  • Yet another fact: if you bring up the first fact at an American wedding, you are likely to be carefully incinerated.
Romance:
  • Fact: most people in a two-person romantic relationship regard the other person as "one and only", the most suitable mate in the world to spend a life with.
  • Some inconvenient singer named Tim Minchin:
Your love is one in a million
You couldn't buy it at any price
But of the 9.999 hundred thousand other loves
Statistically, some of them would be equally nice
Or maybe not as nice but, say, smarter than you
Or dumber but better at sport or tracing
  • The biologist: it's useful to think of your mate as better than anyone else, because it makes you more loyal and strengthens the bond. It's better to stick closely to someone who is good enough than to spend too much time looking around for better options. If you love your mate unconditionally, you will stick with them and thus provide a stable family for your children to grow up in.
  • The biologist, after dark: buuut, if a really nice one comes along... then it's time to snap out of the haze of love and do better!
  • Wise old man: in any case, you should assume the best in your lover, since your lover is a friend, and you should assume the best in your friends.
Children:
  • Fact: most people think their children are really good, especially if it is an only child.
  • Also fact: some children really aren't that good. Indeed, there's a saying about things so bad that "only a mother could love" them.
  • A third fact: if a mother does not greatly overrate their only child, other people might regard that mother as mildly insane.
  • Biologist: this one is pretty obvious, right?

Loyalty

As much as I value friendship as the best kind of love, I recognize that 
  • friendship is between people unrelated by reproduction;
  • I'm weird;
  • most people feel romance, family, and stuff, differently from deep friendship;
  • humans are special: most animals don't do friendship, though there are exceptions:
  1. Close, enduring relationships (or friendships) occur throughout the animal kingdom, particularly among long-lived mammals such as primates, dolphins, and elephants. 
  2. These bonds are adaptive for the individuals involved. Among males, they increase the individuals’ reproductive success; among females, they reduce stress, increase infant survival, and increase longevity. 
  3. We can therefore see the evolutionary origins of human friendships in the social bonds formed among nonhuman primates.
So I'll put friendship in a separate section.

As a friendship fanatic, I know very well how friends look at each other and see the best in each other, trust each other, and stand by each other even when the evidence recommends against it. It's a fact that people trust friends more. But should they?

Friendship and Belief (2004) doesn't mess around:
I intend to argue that good friendship sometimes requires epistemic irresponsibility. To put it another way, it is not always possible to be both a good friend and a diligent believer.

Epistemic partiality in friendship (2006) concurs:

I shall argue here that friendship involves not just affective or motivational partiality, but epistemic partiality. Friendship places demands not just on our feelings or our motivations, but on our beliefs and our methods of forming beliefs. I shall also argue, however, that this epistemic partiality is contrary to the standards of epistemic responsibility and justification held up by mainstream epistemological theories.

When does hope become denial?

The literature is full of exhortations to never lose hope, to keep fighting, to keep one's head above water, to believe in a better tomorrow, to believe the darkness will end. But on the flip side, as the darkness grows greater, hope in a better future comes closer and closer to denial. The problem is particularly stark in cases of terminal illness like cancer:
  • Choose aggressive therapy, experimental drugs, and other heroic efforts, because there is a small chance to destroy cancer and live another 20 years?
  • Or admit defeat, and enjoy the short rest of your life on painkillers and beauty?
There can also be practical reasons to have beliefs that are incorrect but still sensitive to the evidence. The fact that believing that you will recover will increase the chances that you will recover is, plausibly enough, a practical reason to believe that you will recover. But if the expected likelihood of recovery is sufficiently remote, then the fact that believing you will recover will increase your chances is, at best, a rather weak practical reason to believe you will recover. Plausibly, you ought to embrace your fate and cherish your remaining time with your loved ones instead. This is a lively issue among hospice workers concerned about when one ought to encourage faith in recovery, and when acceptance of death. It is presumed in these debates that the brute facts about your chances do not settle these questions.

In Ibsen's play The Wild Duck, the doctor Relling claims that most people need a "life-lie" ("livsløgnen"), some kind of grand delusion, in order to keep their wits, their happiness, their will to live.

If you take the life lie from an average man, you take away his happiness as well. 


Conclusion: descriptive and normative

