5 reasons not to be afraid of artificial intelligence
With the latest news from the field of artificial intelligence, a familiar anxiety is clearly growing: what if we are on the verge of the end of the world? Movies like “The Terminator” and “The Matrix” planted these fears long ago, and today a number of people seriously believe that a scenario in which computers develop superhuman intelligence and destroy the human race is possible.
Among them are well-known futurists: Ray Kurzweil, Robin Hanson and Nick Bostrom. For the most part, these futurists overestimate both the likelihood that computers will ever think like intelligent beings and the danger such machines would pose to the human race. The development of intelligent machines is likely to be a slow and gradual process, and computers with superhuman intelligence, if they ever appear, will need us as much as we need them. Here's why.
1. Real intelligence requires practical experience
Bostrom, Kurzweil and other theorists of superhuman intelligence place boundless faith in raw computing power, as if enough of it could solve any intellectual problem. In many cases, however, the real bottleneck is not a lack of intellectual horsepower.
To understand why, imagine a person who speaks Russian brilliantly but has never spoken Chinese. Lock him in a room with huge stacks of Chinese books and make him study. No matter how smart he is and how long he studies, he will never learn enough to pass for a native Chinese speaker.
This is because an integral part of learning a language is interacting with other native speakers. In conversation with them you pick up local slang, detect subtle shades of meaning and learn which topics people actually talk about. In principle, all of this could be written down in textbooks; in practice it is not, because the nuances of a language vary from place to place and change over time.
A machine trying to develop human-level intelligence faces far more serious problems of the same kind. A computer program will never grow up in a human family, fall in love, or feel cold, hunger or fatigue. In short, it will lack the shared context that allows people to communicate with each other naturally.
The same applies to most other problems intelligent machines might tackle, from drilling oil wells to preparing tax returns. Most of the information needed to solve hard problems is not written down anywhere, so no amount of pure theoretical reasoning will produce the right answers. The only way to become an expert is to try things and observe the results.
This process is extremely difficult to automate. Running experiments and examining their outcomes is costly in scale, time and resources. Scenarios in which computers rapidly overtake people in knowledge and capability are therefore implausible: intelligent computers would have to build up knowledge the same slow way humans do, through trial and error.
2. Machines are extremely dependent on people
In the “Terminator” movies, Skynet becomes self-aware and turns military hardware against humanity.
This scenario grossly underestimates how dependent machines are on humans. The modern economy is built on millions of machines that perform a variety of specialized functions. While more and more of these machines are becoming automated, all of them depend to some extent on people who supply them with energy and raw materials, repair them, build replacements, and so on.
You could imagine humanity building a huge number of robots to take over these functions. But we are nowhere close to building such general-purpose robots.
Building such robots may be impossible at all because of an infinite-regress problem: robots capable of building, repairing and maintaining all the machines in the world would themselves be incredibly complex, and still more robots would be needed to maintain them. Evolution solved this problem by starting with the cell, a relatively simple, self-replicating building block of all life. Today's robots have nothing like it (despite the dreams of some futurists) and are unlikely to get it anytime soon.
This means that, barring a major breakthrough in robotics or nanotechnology, machines will continue to depend on humans for maintenance, repair and other services. A smart computer that decided to exterminate the human race would be committing suicide.
3. The human brain is extremely difficult to imitate
Bostrom argues that, if all else fails, scientists will be able to produce at least human-level intelligence by emulating the human brain. But this is much harder than it looks at first glance.
Digital computers can mimic the behavior of other digital computers because computers function in a precisely defined, deterministic way. To simulate a computer, you just execute the sequence of instructions that the computer would follow.
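To make this concrete, here is a minimal sketch in Python of a made-up three-instruction machine (purely hypothetical, not any real CPU): because every instruction has an exactly defined effect, replaying the same program always yields the same result, which is all that simulating digital hardware requires.

```python
# Toy deterministic machine: each instruction has one precisely defined
# effect, so simulating it is just replaying the instruction sequence.

def run(program):
    regs = {}  # register name -> integer value
    pc = 0     # program counter
    while pc < len(program):
        op, *args = program[pc]
        if op == "set":      # set a register to a constant
            regs[args[0]] = args[1]
        elif op == "add":    # add a constant to a register
            regs[args[0]] += args[1]
        elif op == "jnz":    # jump to an instruction index if register != 0
            if regs[args[0]] != 0:
                pc = args[1]
                continue
        pc += 1
    return regs

# Same program in, same state out, every single time.
prog = [("set", "a", 3), ("add", "a", 4), ("set", "b", 0)]
print(run(prog))  # {'a': 7, 'b': 0}
```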
The human brain is very different. Neurons are complex analog systems whose behavior cannot be modeled the way the behavior of digital chips can. Even a slight inaccuracy in simulating individual neurons could badly distort the behavior of the brain as a whole.
A good analogy here is weather simulation. Physicists understand the behavior of individual air molecules extremely well, so you might think we could build a model of the Earth's atmosphere that predicts the weather far into the future. But so far, weather simulation has remained a computationally intractable problem: small errors early in the simulation snowball into large errors later on. Despite the enormous growth in computing power over the past few decades, we have made only modest progress in predicting future weather patterns.
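That snowball effect is easy to see in miniature. Here is a small Python sketch using the logistic map, a standard toy chaotic system (an illustration of error growth, not an actual weather model): two starting states that differ by one part in a million become completely uncorrelated after a few dozen steps.

```python
# Logistic map at r = 4.0: a textbook chaotic system in which tiny
# differences in initial conditions grow exponentially.

def step(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.400000, 0.400001  # initial states differing by one part in a million
for i in range(1, 41):
    a, b = step(a), step(b)
    if i % 10 == 0:
        print(f"step {i:2d}: |a - b| = {abs(a - b):.6f}")

# By roughly step 30 the gap is of order 1: the two trajectories have
# lost all memory of how close together they started.
```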
Simulating a brain well enough to produce a mind may be even harder than simulating the weather, and there is no reason to believe scientists will manage it in the foreseeable future.
4. For power, relationships may be more important than intelligence
Bostrom suggests that intelligent machines could become "extremely powerful to shape the future according to their preferences." But if we look at how human society actually works, it becomes clear that intelligence alone is not enough to gain that kind of power.
If it were, society would be run by scientists, philosophers and chess geniuses. Instead, it is run by people like Vladimir Putin, Barack Obama, Martin Luther King, Stalin, Reagan and Hitler. These people gained power not because they were extraordinarily intelligent, but because they had charisma and connections and knew how to combine carrots and sticks to get others to do their will.
Yes, brilliant scientists were instrumental in creating powerful technologies like the atomic bomb, and a smart computer could presumably do the same. But developing new technologies and putting them into practice requires enormous amounts of money and labor, which only governments and large corporations can supply. The scientists who developed the atomic bomb needed Franklin Roosevelt to finance their work.
The same goes for thinking computers. Any comprehensive plan to take over the world would require the cooperation of thousands of people, and there is no reason to believe a computer would be any better at organizing it than a brilliant human. On the contrary, given how much old friendships, alliances and charisma matter, a disembodied computer program with no friends would be at a huge disadvantage.
The same goes for the singularity, Ray Kurzweil's idea that computers will one day become so intelligent that humans will no longer be able to understand what they are doing. The most powerful ideas are not the ones only their inventor understands; they are the ones that are widely understood and accepted, multiplying their impact on the world. That holds for machine ideas as much as for human ones. To change the world, a superintelligent computer would have to win people over to its plans.
5. The more intelligence in the world, the less it is valued
One might expect computers to use their superior intelligence to become fabulously rich and then use that vast wealth to bribe people. But this ignores a basic economic principle: as a resource becomes more abundant, its value declines.
Sixty years ago, a computer with a tiny fraction of the power of a modern smartphone cost millions of dollars. Today's computers are vastly more powerful than earlier generations, yet the price of computing power has fallen even faster than its capabilities have grown.
So the first superintelligent computer might make a lot of money, but its advantage would be fleeting. As chips keep getting cheaper and more powerful, people will build more superintelligent computers, and its once-unique capabilities will become commonplace.
In a world of abundant intelligence, the most valuable resources will be those that are limited – land, energy, minerals. Because these resources are controlled by humans, we will have at least as much leverage over intelligent computers as they do over us.
Source: hi-news.ru