Saturday, December 6, 2014

Hawking is afraid of AI without having a clue about what AI is; don't worry Steve

The eminent British physicist Stephen Hawking warns that the development of intelligent machines could pose a major threat to humanity.

"The development of full artificial intelligence (AI) could spell the end of the human race," Hawking told the BBC.

Wow! Really? So, a well-known scientist can say anything he wants about anything, without having any actual information about what he is talking about, and get worldwide recognition for his views. We live in an amazing time.

Just to set the record straight, let’s talk about AI: the reality version, not the fantasy one.

Yes, we all know the fantasy one: 2001, Star Wars, Her. We have been watching intelligent machines in the movies for decades.

Apparently, Hawking is using a voice system. That’s nice. Maybe he should find out how it works. The new system learns how Hawking thinks and suggests words he might want to use next, according to the BBC. So that makes it very smart, does it? That is statistics. We can easily count what you have been saying and guess what you will say next. It is not that complicated to do, and it is not AI.
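
To make the point concrete, here is a minimal sketch of that kind of word prediction, done with nothing fancier than counting. This is my own toy illustration in Python, not the actual code behind Hawking’s system: it records which words have followed which in the text it has seen so far, and suggests the most frequent ones.

    from collections import Counter, defaultdict

    def train_bigram_model(text):
        """Count, for each word, which words have followed it so far."""
        following = defaultdict(Counter)
        words = text.lower().split()
        for current, nxt in zip(words, words[1:]):
            following[current][nxt] += 1
        return following

    def suggest_next(model, word, k=3):
        """Suggest the k words most often seen after `word`."""
        return [w for w, _ in model[word.lower()].most_common(k)]

    # Toy usage: "learn" from text someone has typed before,
    # then guess what tends to come after "the".
    history = "the boundary conditions of the universe determine the state of the universe"
    model = train_bigram_model(history)
    print(suggest_next(model, "the"))  # ['universe', 'boundary', 'state']

That is guessing, not thinking, and it works only because people repeat themselves.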

What is AI? AI is the modeling of mind such that you have created a new mind. At least that is what it is to people who don’t work in the field. To people who do work in the field, the issue is not what word comes next as much as it is how to have an idea about something, or how to have an original thought, or how to have an interaction with someone in which they would think you are very clever and not a machine.

Your average five-year-old is smarter than any computer today and is smarter than any computer is likely to be any time real soon. Why? Because a five-year-old can do the following:

  1. figure out what annoys his little sister and do it when his mother is not watching
  2. invent a new game
  3. utter a sentence that he has never uttered before
  4. understand what his parents are telling him
  5. decide not to do it because he has something he would rather do
  6. be left alone in the kitchen and make an attempt to cook something, possibly burning down the house, but in any case leaving a giant mess
  7. listen to someone say something, draw a conclusion from it, and ask an interesting question about it
  8. find his way to school without help if allowed to do so
  9. throw a ball
  10. get better at throwing a ball by practice
  11. eat certain foods and hate them, and love others
  12. cry when he is feeling anxious
  13. be thrilled with a new toy
  14. throw a temper tantrum
  15. make his mother think he is the best thing in the whole world

Why am I listing such mundane things as hallmarks of intelligence? Because in order to build an intelligent machine, that machine would have to grow up. It would have to learn about the world by living in it, failing a lot, and being helped by its parents. It would have to have goals and tastes and make an effort to satisfy those goals every day. It would not be planted with goals. I didn’t grow up wanting to work in AI, for example. That interest developed while I was in college as a result of a wide variety of experiences and interactions with others.

If we want to build an intelligence that acquires knowledge and motivation naturally, we would have to know how to build the equivalent of an infant and teach it to interact with the world. Would that infant have arms and legs and be trying to learn how to walk and get stuff it liked and be angry and hot and hopeful? If not, it wouldn’t be much like a human.

But maybe Hawking doesn’t mean AI that is human-like. Maybe he just means a computer program that is really good at prediction by statistics. That is not AI in my view, but it is something. Is it something to fear? Only if you are worried about a machine that predicts certain things in the world better than you can. That could happen.

To build the AI that I have always had in mind requires more money than Mark Zuckerberg is willing to invest, and it requires a purpose. Before someone builds a general purpose AI they would have to try building a special purpose one, maybe one that is smart enough to kill Bin Laden. Interestingly, while the Defense Department has invested plenty of money in AI, it still sent humans to do that job. The Defense Department would undoubtedly have preferred to send an AI robot to do the job, but they are nowhere close to having one.

Could they have one? Yes, someday. But it would be talking to you, or predicting what works, not what Hawking wanted to say next. It would be about navigation and inference and figuring things out just in time and so on. It would need to know how to talk and comprehend the world (to think, really).

Special purpose AI machines, ones that do things like clean our house, will be around long before any AI Hawking fears. As much as we all would like one, I don’t see any AI cooks and maids around.

The AI problem is very, very hard. It requires that people who work in AI understand the nature of knowledge; how conversations work; how to have an original thought; how to predict the actions of others; how to understand why people do what they do; and a few thousand things like that. In case no one has noticed, scientists aren’t very good at telling you how all that stuff works in people. And until they can, there will be no machines that can do any of it.