On the first Sunday of 2015, Elon Musk took the stage at a closed conference in Puerto Rico to discuss an intelligence explosion. This unsettling theoretical term refers to an uncontrolled, runaway leap in the cognitive ability of artificial intelligence. Musk and the renowned physicist Stephen Hawking believe that one day AI could spell doom for the human race.
The very appearance of figures like Elon Musk at a conference that has long been the preserve of academics is significant in itself. And the conference, with the optimistic title "The Future of AI: Opportunities and Challenges," was an unprecedented gathering of great minds: Oxford AI ethicist Nick Bostrom, Skype co-founder Jaan Tallinn, and Google AI expert Shane Legg all attended.
Musk and Hawking worry that AI could bring on an apocalypse, but there are more immediate threats. Over the past five years, advances in artificial intelligence, particularly in deep neural networks, have brought AI into our everyday lives. Google, Facebook, Microsoft, and Baidu, to name just a few companies, are hiring AI specialists at unprecedented speed and pouring hundreds of millions of dollars into the race for better algorithms and smarter computers.
AI problems that seemed intractable just a few years ago are being solved. Deep learning has made high-quality speech recognition possible. Google is building self-driving cars and computer systems that can learn to identify cats in videos. Robotic dogs now walk much like their living counterparts.
"Starts to work the computer vision, speech recognition. There is a certain acceleration in the development of artificial intelligence systems, says Bart Selman, a Cornell University Professor of ethics and AI, the former at the event with Musk. — All this forces us to deal directly with the ethics of AI".
Given this rapid progress, Musk and other scientists are urging those who create these products to consider the ethical implications carefully. At the Puerto Rico conference, the delegates signed an open letter calling for thorough study of the consequences of AI development so that potential mistakes can be avoided. Musk signed it too. "Leading AI researchers are saying that AI safety is very important. I agree with them."
Demis Hassabis, founder of DeepMind, the artificial intelligence company acquired by Google last year, also signed the letter. The letter's history, in fact, goes back to 2011, when Jaan Tallinn met Hassabis after the latter gave a presentation at an AI conference. Hassabis had recently founded the hot startup DeepMind, and Tallinn was intrigued. Since founding Skype, he had become an evangelist for safe AI and was looking for allies. The two got to talking about AI, and soon Tallinn invested in DeepMind; last year, Google paid $400 million for the 50-person company. With one stroke, Google acquired the largest available pool of deep learning talent and expertise. Google does not disclose its ambitions for DeepMind, but it is known that DeepMind is pursuing research that could allow robots or self-driving cars to better navigate their surroundings.
All of this worries Tallinn. In his presentation at the Puerto Rico conference, Tallinn recalled how, over lunch, Hassabis once showed him a machine learning system that could play Breakout, a classic arcade game of the 1980s. The machine had not merely mastered the game; it played with a ruthless efficiency that shocked Tallinn. "While the technologist in me marveled at the achievement, another part of me was thinking about how ruthless an artificial intelligence with such incredible capabilities could be," says Tallinn.
Ethical guidelines of this kind have historical precedent: molecular biologists, for example, meeting at the 1975 Asilomar Conference on Recombinant DNA, agreed on safety standards designed to prevent artificially created genetically modified organisms from threatening the public. The Asilomar conference, however, produced more concrete results than the conversations in Puerto Rico have so far.
At the Puerto Rico conference, the delegates signed a letter outlining research priorities for AI: its economic and legal consequences, as well as the security of AI systems. Yesterday, Elon Musk pledged $10 million to fund these studies (see below). These are important first steps toward keeping robots from wrecking the economy. Some companies have gone even further. Last year, the Canadian robotics manufacturer Clearpath Robotics promised not to build autonomous robots for military use. "To the people who are against killer robots: we support you," wrote Clearpath Robotics CTO Ryan Gariepy on the company's website.
Promising not to build a Terminator is only one step. AI companies like Google need to think about the safety and legal liability of their self-driving cars, about robots that put people out of work, and about the unintended consequences of algorithms that may treat people unfairly. For example, is it ethical for Amazon to sell products at one price to some people and at another price to others? What guarantees that a trading algorithm won't crash commodity markets? And what happens to taxi and bus drivers in the era of driverless transport?
Itamar Arel is the founder of Binatix, a deep learning company active in the stock market. He was not at the Puerto Rico conference, but he signed the letter soon after reading it. In his view, the coming revolution in smart algorithms and cheap, intelligent robots demands study and preparation. "The time has come to allocate additional resources to understanding the social impact of AI systems that will displace blue-collar jobs," he says. "To me it is obvious that they will develop faster than society can adapt. And that's the problem."
The media may be tempted to make mountains out of molehills with headlines about AI enslaving the world, but the reality is far more prosaic: the real effects of AI will be handled by ethics boards and by consultants on AI safety and security.
$10 million against dangerous artificial intelligence
Musk worries that artificial intelligence researchers could get things wrong, absolutely wrong.
It may seem surprising that the architect of the conceptual high-speed Hyperloop transportation system and the CEO of SpaceX and Tesla thinks this way. But Musk takes the issue so seriously that, after the Puerto Rico conference, he donated $10 million to the Future of Life Institute (FLI) to develop a global research program aimed at keeping AI "beneficial to humanity." In other words, Musk wants AI to grow up exclusively useful, not dangerous, to humans.
The program, backed by that $10 million, will sponsor research worldwide in this direction. On Monday, FLI will open a portal where scientists can apply for grants under the program.
For a long time, artificial intelligence remained the province of Hollywood and science fiction, or a topic for abstract philosophy. But as giants like Google build AI into the very core of their current and future technology, and a wave of small startups builds businesses on the crest of this science, the development of powerful AI can no longer be denied. The debate about its ethics, however, will continue for a long time to come.