Tuesday, October 30, 2018

Let's read: Norvig's AI, chap 24, 25, 26, 27

Continuing from last post.

Chapter 24: Perception

This chapter is about feelings, or rather, how to sense the world through sensors. It's too engineering-heavy for me, so I skipped it.

Chapter 25: Robotics

This chapter is about robotics and actuators, or how to actually move around in the world. This is the most traditional, engineering-heavy part of the textbook, and quite outside my territory, so I'll skim it as much as I skimmed the technical details of mechanical engineering during my study of Newtonian physics.

My favorite part from the chapter is a game:

Exercise 25.11: pretend to be a robot! A game for people from kindergarten to PhD! 
I can think of a few variations on this game. First, to keep communication strictly minimal, players should only communicate through text messages and webcams. Second, instead of one person playing the brain, why not use two, one for each hemisphere? The left brain can only see through the right eye and control the right hand, and vice versa. That makes six players. 
And we can even imagine a game of epiphenomenon.

Wait a minute, this sounds just like specialization...



Chapter 26: Philosophical Problems

This is really better explained in the Stanford Encyclopedia of Philosophy.

Weak AI: AI that does what humans can do.
Strong AI: AI that is a person, as conscious as a human.

Few still object to weak AI; engineers accept it without question. Many still object to strong AI; engineers don't bother with it.
... the attitude of AI researchers is that philosophizing is sometimes fun, but the upward march of AI engineering cannot be stopped, will not fail, and will eventually render such philosophizing otiose.
The question of whether a computer can think is no more interesting than the question of whether a submarine can swim. -- Edsger W. Dijkstra 

Against Weak AI

Aka, "machines can never do X" argument. This is so old that even Turing talked about it:
[X could be] Be kind, resourceful, beautiful, friendly, have initiative, have a sense of humor, tell right from wrong, make mistakes, fall in love, enjoy strawberries and cream, make someone fall in love with it, learn from experience, use words properly, be the subject of its own thought, have as much diversity of behavior as man, do something really new.
The best argument against it is:
[image: animated Pinkie Pie "Just do it" (the Shia LaBeouf meme)]

The newest case of Just doing it is the Todai robot, which I liked.

Layponies fall particularly easily for the "machines cannot love" argument, because they think about love too much. If machines can hunger, eat, and poop, why can they not love? [not a serious argument, merely rhetorical]

In Mind over Machine (1986), Dreyfus and Dreyfus proposed a detailed model of how an AI could work, then pointed out that the model couldn't be built because of various problems. But those problems have turned out to be largely technical and have mostly been solved. Also, if embodied cognition is true, then separating mind and body is a flawed idea to begin with: mind, body, and environment really come together, not separately.

Against Strong AI 

The Chinese Room argument, from Searle's Minds, Brains, and Programs (1980), is always amusing to me because my Chinese is perfect. That aside, it is the most common argument against strong AI. I think the room itself certainly understands Chinese... though what I should make of the rulebook, I don't quite know.

What is the difference between a mathematical definition of a Turing machine, and the Turing machine "in motion"? There must be a difference, but how to pin it down? To know the difference is crucial for solving the problem of consciousness.
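To make the distinction concrete, here is a minimal sketch in Python, my own toy example rather than anything from the book: the same machine exists once as an inert transition table (the definition), and once "in motion", when a stepping loop applies that table to a tape.

    # A Turing machine "as a definition": an inert transition table.
    # Keys are (state, symbol); values are (new_state, symbol_to_write, head_move).
    # This toy machine flips every bit and halts at the first blank ("_").
    FLIP_BITS = {
        ("scan", "0"): ("scan", "1", +1),
        ("scan", "1"): ("scan", "0", +1),
        ("scan", "_"): ("halt", "_", 0),
    }

    def run(machine, tape, state="scan", head=0, max_steps=1000):
        """The same machine "in motion": repeatedly apply the table to a tape."""
        tape = list(tape)
        for _ in range(max_steps):
            if state == "halt":
                break
            symbol = tape[head] if head < len(tape) else "_"
            state, write, move = machine[(state, symbol)]
            if head < len(tape):
                tape[head] = write
            else:
                tape.append(write)
            head += move
        return "".join(tape), state

    print(run(FLIP_BITS, "0110_"))   # -> ('1001_', 'halt')

The table alone computes nothing; only the loop that keeps applying it does. Whatever the difference between the two is, it lives in that loop.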

The Gödel incompleteness argument, from Lucas's Minds, Machines and Gödel (1961) and later strongly supported by Roger Penrose (I like his book The Emperor's New Mind, if only for the marvelous pictures), is the other famous argument. I'm a lot more interested in this one because it's got Gödel in it, and that loopiness must be essential for understanding consciousness.

The sonnet argument, well, is poetic, and also a common one in pop culture: a computer might write a sonnet, but it won't feel it. The common objection is that, well, you don't know whether I feel either, and yet you assume I do, so if a machine behaves like a human, you should also assume the machine feels, to be consistent.

After urea was synthesized in 1828, the separation between natural and synthetic chemicals fell apart. Maybe in the future, after AI comes to be, the separation between natural and artificial intelligence will fall apart too.


The qualia argument is the philosophically more formal way to state the sonnet argument: a machine could do everything a human does, yet not feel anything.

Functionalism says that a mind is defined by what a mind does, the same way that a function is just a set of input-output pairs and how the function is implemented doesn't matter. According to functionalism, the whole problem of qualia doesn't matter, because qualia are irrelevant to whether a mind is a mind. I won't say more about it, but suffice to say I think qualia are undefined and (philosophical) zombies are people too.

Ethics of AI

The question is not, can we? Because we really canter can can-can.


The problem is, should we? Kaczynski would certainly say, hell no.

There are some basic problems:

  • People may lose jobs.
  • People may have too much/too little leisure time.
  • People may feel upset because humans aren't unique anymore.
  • AI might do evil.
  • The human legal system depends on assigning blame, but it can be hard to decide exactly which part of an AI system to blame.
  • AI might kill humans.

One day we will do a "let's read" series on Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence, but not today.


Let's see the problems:

Joblessness: Discussed everywhere already. I think it will happen, and the solution is some kind of socialism, universal basic income, a drastic decrease in the human population, colonization of space without population increase, or something along those lines.

Leisure time: In 1930, Keynes predicted that, thanks to advances in technology, people would work 15 hours a week. People used to work 80 hours a week, yet modern humans still work 40-50. Technology seems to just shift the baseline: once it becomes possible for workers to work more efficiently, the workplace accepts more workload, pushing the workers right back to working hard. Telecommuting and the like make it possible to work at home, which lets companies make their workers work even more...

In The Age of Em [cool book, might do a let's read later], Robin Hanson reasoned from the basic economic rule of supply and demand that if some humans are uploaded and become robot workers, they will rapidly proliferate to the point of earning only subsistence wages. This would be no surprise at all, Hanson said, since throughout human history people have almost always earned subsistence wages. It's the recent age that's weird, where people can earn far more than subsistence.

In fact, the present is so far from equilibrium that it probably cannot last, and it's only a matter of time before we swing right back to subsistence-level living.
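A toy version of that supply-and-demand argument, sketched in Python with my own made-up numbers (this is not Hanson's actual model): whenever the market wage exceeds the cost of running an em, more copies get made, and the wage slides down toward that running cost.

    # Toy sketch of the em-wage argument: if workers can be copied whenever hiring
    # one more copy is profitable, the wage is driven down to the cost of running
    # a copy. The demand curve and all numbers are made up for illustration.
    RUNNING_COST = 1.0          # cost of keeping one em running ("subsistence")

    def market_wage(num_workers):
        """Downward-sloping labor demand: more workers means a lower wage."""
        return 100.0 / (1.0 + 0.01 * num_workers)

    workers = 100.0
    for _ in range(200):
        if market_wage(workers) > RUNNING_COST:   # copying is still profitable,
            workers *= 1.05                       # so the em population grows
    print(f"wage {market_wage(workers):.2f} vs running cost {RUNNING_COST:.2f}")
    # the wage ends up at roughly the running cost, i.e. subsistence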

Humans get another blow to their ego: Not a problem. Pop more Prozac.
Just kidding. They'll manage without the Prozac. They survived Copernicus and Darwin; they'll survive this.

Evil AI: a very practical and immediate problem, and largely what the study of "AI policy" tries to prevent. Total surveillance [as a Chinese citizen, I have a love-hate relationship with that, but mostly hate], robots making wars less scary and thus more likely, etc...

Accountability: There are two issues at stake here that I see:

1. Designing a legal system so that AI researchers and companies won't get too reckless. They should not be allowed to design risky AI, claim all the rewards when it works, and disclaim the damage when it fails. That would produce bad, buggy AI, and it would make me sad. 

A similar problem existed in the financial system: financial companies designed their products recklessly, claimed all the rewards, and got bailed out when the market crashed. That arrangement is unstable and inefficient, and the 2008 market crash was proof of it.

Bluntly, just fining the company would work as a start, though a more refined legal system would surely be better. I leave this problem to the lawyers.

2. Designing social rituals so that victims will feel better. Humans are hardwired to see agency in everything: they curse their computers and hate plenty of clearly inanimate objects. If some AI does something bad, people will want to inflict pain on it somehow, not because that is needed to make the market efficient, but because they want to get even.

The solution is to design appropriate social rituals to appease victims of AI technology. The rituals don't actually have to accomplish anything; as long as they make the victims feel better and the spectators feel that the justice of the world has been restored, they will do. Even a scapegoat ritual, if it works, will do.

One is reminded of the endless arguments about reproductive technologies. Whether embryocide or feticide is murder [a matter of convention]. Whether IVF technology would produce people with profound social problems [no]. Whether genetically enhancing humans should be allowed [probably]. Human legal systems are still struggling to catch up with the rapid developments in technology. Laws are based on a model of reality, and they simply fail when technology breaks that model.

Humans may go extinct
Within thirty years [2023], we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended. -- Vernor Vinge 
We'll read a lot more about that later, such as Omohundro's The Basic AI Drives (2008).

Exercises

Exercise 26.9 is analyzed in Superintelligence, which raises the interesting point that, if we get AGI right, our troubles with bio-, nano-, and nuclear technologies will be over, since an AGI will be vastly more capable of protecting us from the threats of those other technologies.

Chapter 27: Conclusions

It starts with a basic overview of the whole book, then asks whether the whole idea of "rationality" is the right foundation. There are a few alternative notions of rationality to build an AI on, and there's no telling which one would actually work best, or whether any of them would.
  • Perfect rationality: Always do the action that maximizes expected utility. Takes too much time. Still, it is considered the ideal of intelligence, both in the book and in many other rational theories of intelligence.
  • Calculative rationality: Eventually return the action that would have been the right one when deliberation started. A poet would call it l'esprit de l'escalier. It's roughly where current AI is stuck in its quest for perfect rationality.
  • Bounded rationality: Instead of trying to be perfect, try to be good enough. What "enough" means, though, is still uncertain (see the small sketch after this list).
    ...the capacity of the human mind for formulating and solving complex problems is very small compared with the size of the problems whose solution is required for objectively rational behavior in the real world — or even for a reasonable approximation to such objective rationality. 
    ---- Models of Man (1957), Herbert Alexander Simon 
  • Bounded optimality: given the agent's hardware limitations, use the program that, among all programs implementable on that hardware, would perform the best. Unfortunately, it's suspected that such a program would be hopelessly complicated (just think of how hopelessly complicated anything optimized to the max is: the Story of Mel, Adrian Thompson's evolved hardware, F1 cars, human brains...)
    If you know your enemies and know yourself, you will not be imperiled in a hundred battles; if you do not know your enemies but do know yourself, you will win one and lose one; if you do not know your enemies nor yourself, you will be imperiled in every single battle. 
    ---- The Art of War (6th century BC), Sun Tzu
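Here is the sketch promised in the bounded rationality item: a toy contrast, my own example in Python and not from the book, between a perfectly rational agent that evaluates every action however long that takes, and a bounded "anytime" agent that returns the best action found once its deliberation budget runs out. The utility function, the 0.01-second evaluation cost, and the budget are all made up.

    # Toy contrast between perfect and bounded rationality (my own example).
    # Evaluating an action's expected utility is assumed to be expensive; the
    # bounded agent stops when its deliberation budget runs out and returns the
    # best action found so far ("good enough").
    import random
    import time

    def expected_utility(action):
        """Stand-in for an expensive evaluation (e.g. deep lookahead)."""
        time.sleep(0.01)                 # pretend this takes real effort
        return -(action - 42) ** 2       # the best action happens to be 42

    ACTIONS = list(range(100))

    def perfectly_rational(actions):
        """Evaluate everything, however long it takes."""
        return max(actions, key=expected_utility)

    def bounded_rational(actions, budget_seconds=0.1):
        """Anytime agent: evaluate actions in random order until time is up."""
        deadline = time.monotonic() + budget_seconds
        best, best_utility = None, float("-inf")
        for a in random.sample(actions, len(actions)):
            if time.monotonic() > deadline:
                break
            u = expected_utility(a)
            if u > best_utility:
                best, best_utility = a, u
        return best

    print(perfectly_rational(ACTIONS))   # 42, after evaluating all 100 actions
    print(bounded_rational(ACTIONS))     # often near 42, found much sooner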
Finally, the authors end the book on a hopeful note, saying that AI will do more good than harm for humans.

