Why Bill Gates (and You) Should Not Be Afraid of Artificial Intelligence

Last week, during a question-and-answer session on Reddit, Bill Gates received an interesting question from one of the users: "How much of an existential threat to humanity is machine superintelligence, and do you think full encryption of all Internet activity could protect us from that threat (that is, the less the machine knows, the better)?"

Unfortunately, Gates did not answer the second part of the question, but he wrote the following:

"I am in the camp of people who worry about superintelligence. At first, machines will do a lot of work for us and will not be especially intelligent. That should be positive if we manage it well. A few decades later, though, that intelligence will become strong enough to be a concern. I agree with Elon Musk and others on this and don't understand why some people are not concerned."

Fortunately for robophobes, a dream team of anti-heroes is nearly assembled. Elon Musk and Bill Gates bring money and influence; the legendary cosmologist Stephen Hawking brings gravitas. All they are missing is the muscle to do the dirty work against the machines in this preventive war.

In fact, what these budding Avengers really need is something far less glamorous: any sign that artificial superintelligence will actually be a serious threat, or at least more serious research.

Meanwhile, amid the growing hysteria around an artificial intelligence that does not exist, there is considerable bewilderment. Experts see no hint of a gigantic tyrannical artificial intelligence in modern code. Perhaps Gates, Hawking, and Musk, all of them terribly smart, know something the rest of us don't. But you could also listen to the leading researchers of that very artificial intelligence.

When people discuss neural networks, cognitive computing, and the whole notion of artificial intelligence surpassing humans, they are also talking about machine learning and deep learning. If anyone knows whether it is time to panic, it is these people.

Artificial superintelligence will not come out of nowhere. Dileep George, co-founder of the artificial intelligence startup Vicarious, puts the risks of superintelligence this way:

"History remembers a lot of panic around a variety of technologies, from the steam engine to nuclear power and biotechnology. You sell more newspapers and movie tickets by playing on hysteria, and now, in my opinion, the same thing is happening: people are fanning fears around artificial intelligence. As scientists, we have a duty to inform the public about the difference between Hollywood and reality. The artificial intelligence community is not yet even on the verge of creating anything worth worrying about."

This view carries weight at Vicarious not only because of the company's current work on artificial intelligence, but because Elon Musk has singled the startup out as part of his look behind the curtain. Musk told CNBC that he invested in Vicarious "not for the return on investment, but to be able to keep an eye on artificial intelligence." He has also written that "the leading AI companies are taking serious steps to ensure safety. They recognize the danger, and believe they will be able to shape and control digital superintelligence and prevent its escape into the Internet. We'll see..."

Asked how much time and what resources Vicarious devotes to defense against artificial intelligence, Scott Phoenix, another co-founder of Vicarious, replied:

"Artificial intelligence does not emerge by chance or suddenly. We are at the very beginning of research into how to build a basic intelligent system, and ahead lies a long process of iteration, of teaching the system, and of exploring both ways of building it and ways of making it safe."

Vicarious is not working on superintelligence. It is not even on the radar. So which companies did Musk mean, the ones seriously worried, in his words, about keeping a bad superintelligence from "escaping into the Internet"? How, in his words, do you build a prison for a demon when there are no demons yet?

Now, on the question of how actively companies keep artificial intelligence under lock and key, here is Yann LeCun, director of AI research at Facebook:

"Some people wonder how to prevent a hypothetical superintelligent, benevolent artificial intelligence from deciding to 'reprogram' itself and guarantee the destruction of all mankind. Most of these people are not AI researchers; most of them are not even programmers."

LeCun also points to one of the sources of confusion: the difference between intelligence and autonomy.

"Suppose you have what looks at first glance like a superintelligent system with no autonomy. For example, a chess program that can beat almost any human. Does it have superintelligence? Very soon, aircraft autopilots and self-driving cars will be safer than people. Does that mean they will be smarter than people? In a very narrow sense these systems will be 'more intelligent' than humans, but their expertise will lie within a very narrow field, and the systems will have very little autonomy. They will not be able to go beyond what is asked of them."

This is an important distinction, and it neatly describes the panic to which Musk, Hawking, and now Gates have succumbed. AI can be smart without being sentient and creative. More importantly, AI is not set on some digital equivalent of the biological evolutionary pathway, reproducing and competing, each mutation better than the last, until it reaches runaway superintelligence. No, that is not how AI works.

We may one day be amazed by machine intelligence on the level of a mouse, but it will not appear by itself. A search engine produces results faster than any human, but it will never grow a code of ethics and start hiding pornography from you according to your tastes. Only a science fiction writer, or someone else far removed from computer technology, could imagine human-level artificial intelligence suddenly and inexplicably growing out of nowhere. And if you believe in one miracle, why not add another, say, a self-assembling superintelligence?

LeCun defends Musk, saying that his remarks comparing AI to nuclear weapons were "exaggerated, but also misunderstood."

"Elon is very concerned about existential threats to humanity (which is why he is building rockets to send humans to colonize other planets). Even if the risk of an artificial intelligence uprising is insignificant and illusory, we should think about it, design precautions, and follow well-defined rules. Just as bioethical restrictions were established in the 1970s and 80s, before genetic engineering was used everywhere, we need to create constraints in the ethics of artificial intelligence. But, as Yoshua rightly said, we have plenty of time. Our systems surpass humans in very narrow areas, but human-level general AI is not coming soon, perhaps not for decades, to say nothing of an autonomous general AI that might never be built."

The scientist who had something to say on the subject was just mentioned in the previous paragraph. Yoshua Bengio heads the machine learning laboratory at the University of Montreal and is one of the pioneers of the subfield of deep learning (along with LeCun). Here is his opinion on whether AI research is fraught with danger:

"There is no truth in this view when it is considered in the context of current artificial intelligence research. Most people do not realize how primitive the systems we build are, and, unfortunately, many journalists (and some scientists) spread fears about AI that do not correspond to reality. We would be thrilled if we could create a machine with the intelligence of a mouse in the near future, but we are far from that, very far. Yes, these algorithms already have useful applications, and there will be more. Moreover, I really do think people will one day create machines that are as smart as humans in many respects. However, that is in the distant future, so the current debate is a waste of time. When the question becomes pressing, we will of course bring together scientists, philosophers, and lawyers to determine the best ways to prevent consequences that for now belong to science fiction."

Asked whether superintelligence is inevitable, Bengio said:

"It is quite difficult to answer this question without abandoning the rigor of scientific thinking. I see many good mathematical and computational reasons why AI research could one day hit a plateau (due to the exponential growth of complexity), the same plateau that human intelligence hit at some point, which in principle also explains why dolphins and elephants, despite their large brains, are not superintelligent. We simply do not know enough to speculate any further on this question. If the plateau hypothesis is confirmed, one day we will have computers that are as smart as people but with much faster access to knowledge. But by then people will also have that access to information (in principle we already do, only slowly, through search engines). Of course, this has nothing to do with superintelligence. I think of AI as an aid to the mind, just as the industrial revolution of the last two centuries was an aid to the hands."

Regarding Musk's comment that AI is "potentially more dangerous than nuclear weapons" and needs to be handled carefully, Bengio appealed to the present rather than knocking on the door of an unknown future:

"For now this question simply does not make sense. In the future things may change, but for now we would just be reading tea leaves. When we get there, of course, we will think about it seriously."

The common theme in these discussions is time. When you talk with AI researchers — again, specifically AI researchers, the people who work with these systems — they are not particularly worried about superintelligence sneaking up on them (run, you fools!). Unlike the scary stories Musk has been feeding us, AI researchers are not even bothering to discuss superintelligence; it does not exist even in their plans. So is Bill Gates right?

Source: hi-news.ru