Scenarios that the development of artificial intelligence could lead to
Neuroscientist and philosopher Sam Harris describes scenarios that, however frightening they seem, could actually happen to us as a species. It is genuinely hard to imagine how artificial intelligence (AI) might destroy us, yet there are people who try to pull the future into the present and map out the possible paths humanity's development could take. And these are the things we should think about before we think about AI itself.
What would an artificial intelligence choose at the moment it faces a choice?
One of the main problems facing people as AI develops is how to get computational systems to make decisions that are not only rational but also emotionally informed. Left to pure logic, an AI could easily decide to kill all humans in order to save other species. It would be as if a thug attacked you with a knife, you drew your gun, and at that moment your faithful dog, instead of protecting its owner, attacked you simply because you were now the one holding the more dangerous weapon...
What if the same thing awaits behind both doors? Behind door number one...
Given how valuable intelligence is, what happens once we hand the work over to AI? What could follow? A full-scale nuclear war? A global pandemic? Justin Bieber becoming President of the United States? The point is that AI could simply destroy civilization as we know it. Many people today talk about how much, and how quickly, society is degrading. So what happens if, having thought it over, the AI decides it is easier to destroy humanity than to continue a technological development that only makes things worse?
What's behind door number two?
The only alternative is that, year after year, we keep improving our intelligent machines. At some point we will build machines that are smarter than we are, and once we have machines smarter than ourselves, they will begin to improve themselves. At that point we risk setting off an "intelligence explosion," and that process could destroy us all. The worry is that if we build machines far more competent than we are, even the slightest divergence between their goals and our own could lead to our destruction as a species.
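To make the idea of an "intelligence explosion" a little more concrete, here is a deliberately crude toy model in Python (my own illustration, not anything from Harris's argument): assume each generation of machine designs a successor 10 percent more capable than itself, and let the gains compound.

# Toy model of recursive self-improvement: capability compounds each generation.
# Both the starting point and the 10% improvement rate are arbitrary assumptions.
capability = 1.0                  # 1.0 = human level, in arbitrary units
improvement_per_generation = 0.10

for generation in range(1, 101):
    capability *= 1 + improvement_per_generation
    if generation % 25 == 0:
        print(f"generation {generation:3d}: {capability:10.1f}x human level")

# generation  25:       10.8x human level
# generation  50:      117.4x human level
# generation  75:     1271.9x human level
# generation 100:    13780.6x human level

The numbers themselves mean nothing; the point is only that small, repeated self-improvements grow explosively rather than linearly.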
Insects of the 21st century
Just think about how we treat ants. We don't hate them. We don't go out of our way to harm them. We walk past these little hard workers without paying them any attention, but whenever their presence seriously interferes with one of our goals, say the construction of a building, we destroy them without the slightest regret. The point is that one day we may build machines which, conscious or not, could start treating us the way we treat ants.
Deep thinking, deep impact
Intelligence is the result of information processing in physical systems. We have already built narrow intelligence into our machines, and many of them perform specific tasks at or even above human level. Intelligence is the source of everything we value, and the weapon we use to protect everything we value. It is our most valuable resource. We have problems we must solve. We want to end AIDS, diabetes and cancer. We want to build an economic system that truly works. We want to improve the planet's climate. And what will we do with an AI for which there is simply no such thing as deep thinking?
Where are we?
Humanity has not yet reached the peak of intelligence. This is very important to understand: it is what makes our situation so precarious, and it is why our intuitions about the risks are so unreliable. The potential of intelligence extends far beyond anything we can currently imagine. So what if we build machines that are smarter than we are? They will be far better at calculating the future, at understanding where they stand, and at choosing the direction to move in next.
Speed of intelligence
Imagine that humanity has built a superintelligent AI that is no smarter than an average group of scientists. The only difference is that its electronic circuits run roughly a million times faster than our biochemical ones. As a result, the machine "thinks" about a million times faster than the minds that built it. Leave such an AI running for a week and you will find it has performed around 20,000 years of intellectual work in human-equivalent terms. What do we do with that processing speed? Slow the AI down to human level?
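As a quick back-of-envelope check of that figure (a minimal sketch in Python; the one-week run and the million-fold speedup are simply the assumptions stated above):

# Rough check of the "20,000 years of work in one week" claim.
speedup = 1_000_000          # assumed factor: electronic vs. biochemical circuits
run_time_weeks = 1           # wall-clock time the AI is left running
weeks_per_year = 52.18       # average number of weeks in a calendar year

human_equivalent_years = run_time_weeks * speedup / weeks_per_year
print(f"{human_equivalent_years:,.0f} human-equivalent years")   # prints 19,164, i.e. roughly 20,000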
The best-case scenario
In fact, you and I should be most interested in how events would unfold under ideal conditions. Imagine we have thought everything through and built a superintelligent AI with no safety problems at all. It is as if we had created an oracle that behaves exactly as intended. What do we get? In effect, we will have created the perfect labor-saving device, so we are talking about the end of the drudgery of human existence. Great! You could also say that the creation of such machines would mean the end of all intellectual work. That may sound good, but ask yourself what would actually happen. Even now, with AI only modestly developed, we are seeing growth in income inequality and unemployment on a scale we have never seen before. We are not willing to share our wealth with others, and most likely the world would end up ruled by a handful of trillionaires, while the rest of humanity would depend entirely on their will, or rather on the will of the AI subordinate to them, an AI that would act without sparing anyone.
The next arms race
Imagine that every country in the world has an AI calculating the ideal scenario for victory in a global war. It does not matter whether that war is fought on the ground, in the air or in cyberspace: the winning scenario takes all. In effect, AI would produce a winning strategy for each superpower, and that would ultimately lead to the end of humanity, because it would become a war of artificial intelligences against humanity.
"Don't worry, and don't spam your pretty little head about these stupid questions..."
In effect, AI would build a society of carefree narcissists. Don't worry, be happy: that, in veiled form, is what the AI that makes our lives easier is already telling us today. We can already feel how AI, even in its embryonic form, is making us dumber, and in 50 or 100 years we are promised an AI that surpasses human intelligence. If intelligence is a matter of information processing, and we keep improving our machines, don't you think we are already on the way to creating a superintelligence? We can already see that AI is not safe; think of the navigation AI that sends drivers into the oncoming lane or straight into a lake. And I am saying that we may have only about 50 years before we come face to face with humanity's main adversary: superintelligence.
Technology directly into the brain
The machines of the future cannot help but share our values, because they will literally be an extension of ourselves. They will be wired into our brains, and we will in effect become their limbic system. Think about it: you would be putting something into your brain that can read and process it according to an unknown algorithm. Sometimes you do not even trust your own brain, never mind a chip. Yet with the arrival of superintelligent AI, embedding chips in the brain is presented as the safest and most sensible way forward. But the fact that we cannot solve the problem of data security even today makes you wonder whether something here is badly wrong.
We need to consider these scenarios and act
Humanity has no ready solution to the problem of AI; all we can do is propose and recommend. People should build AI from the very start on the principle of "do no harm to mankind." When we talk about a superintelligent AI able to modify itself, it seems that before we proceed to create it we, as a species, need to resolve all of our internal conflicts, otherwise we will be doomed. We must recognize that we are in the process of becoming a kind of god. Imagine living under the gaze of a god you can physically feel, and going to confession not to a church but to your mobile device.
Source: muz4in.net/news/kak_vygody_kotorye_my_poluchim_ot_sozdanija_iskusstvennogo_intellekta_mogut_v_konechnom_schete_unichtozhit_nas/2016-10-11-42148