AI should be able to empathize, not hurt people
The idea that a computer could have consciousness disturbs people more and more. It seems unnatural that a humanoid machine could feel pain, understand the depth of sorrow, or experience a soaring feeling of hope. Yet some experts argue that this is exactly the kind of artificial intelligence we need if we want to nip in the bud the threat to human existence that this technology may well become in the future.
Speaking recently at Cambridge, Murray Shanahan, professor of cognitive robotics at Imperial College London, said that in order to neutralize the threat, "human-level AI", or artificial general intelligence (AGI), should be "human-like".
Shanahan suggested that if the forces driving us to develop human-level AI are impossible to stop, there are two options: either a potentially dangerous AGI will be built that carries out a ruthless optimization process without any hint of morality, or an AGI will be built that is modeled on the psychological, and perhaps neurological, makeup of a human being.
"Right now I would vote for the second option, in the hope that it will lead to a form of harmonious co-existence (humanity)," said Shanahan.The end of human guncelleme with Stephen Hawking and Elon Musk, Shanahan consulted the Center for the study of global risks (CSER) in writing an open letter that urged AI researchers to pay attention to pitfalls in the development of artificial intelligence.
According to Musk, these pitfalls could be more dangerous than nuclear weapons, while Hawking believes outright that they could bring about the end of the human era.
"The primitive forms of artificial intelligence we already have, and they have proven themselves very well, — said Hawking in December 2014. But I think the development of full artificial intelligence could spell the end for the human race".Still from the film Ex machina, which will be released in spring 2015
It is not entirely clear how far we are from developing AGI; predictions range from 15 to 100 years from now. Shanahan believes that by 2100 its arrival will be "more probable, though not certain".
Like it or not, the danger lies entirely in what motivations will drive the development of AGI.
Money could be a "source of risk"

There are fears that the social, economic and political forces currently driving us toward human-level AI will lead us to the first of Shanahan's two options.
"The capitalist forces trigger a ruthless process of maximization. And this is when the temptation to develop risky things," said Shanahan, citing the example of a company or the government, which could use the AIS to undermine the markets, rigged elections or the creation of new automated and potentially uncontrollable military technologies."The military industry will do this as much as others, so this process is very difficult to stop."Despite these dangers, Shanahan believes it would be premature to ban research AI in any form, because currently there is no reason to believe that we can actually reach this point. Instead, it makes sense to direct research in the right direction.
Imitating the mind to create homeostasis

An AGI focused solely on optimization is not necessarily malicious toward people. However, the instrumental goals it might set itself, such as self-preservation or acquiring resources, could present a significant risk.
As artificial intelligence theorist Eliezer Yudkowsky noted in 2008, "The AI does not hate you, nor does it love you, but you are made of atoms which it can use for something else."
By building some form of homeostasis into AGI, Shanahan believes, the potential of AI could be realized without destroying civilization as we know it. For an AGI, this would mean being able to understand the world as people do, including the ability to get to know others, form relationships, communicate and empathize.
One way to create human-like machines is to simulate the structure of the human brain, since we "know what the human brain can achieve". But scientists are still a couple of steps away from a complete map of the brain, to say nothing of replicating one.
The Human Connectome Project is currently working on reverse-engineering the brain and is due to wrap up in the third quarter of 2015, although analysis of the collected data will continue long after that.
"Our project will have far-reaching consequences for the development of artificial intelligence, it is part of many efforts that seek to understand how the brain is organized and how the different regions work together in different situations and performing different tasks," said Jennifer Elam from HCP."To date, there have been a lot of communication between the brain mapping and artificial intelligence, mainly because these two fields have approached the understanding of the brain from different angles and at different levels. Because the HCP data analysis continues, it is likely that some modelers of the brain including its findings, to the extent possible in their computational structures and algorithms".Even if this project will be useful in assisting researchers in AI, remains to be seen what it is capable of other initiatives like the Human Brain Projects. This provides an important basis for the development of humanoid machines.
For now, Shanahan says, we should at least be aware of the dangers posed by the development of AI, without paying attention to Hollywood films or media horror stories, which only muddy the issue further.
"We have to think about these risks and devote some resources to address these issues, says Shanahan. — I hope that will last us decades to overcome these barriers".published
Source: hi-news.ru