Is it possible to create a powerful artificial intelligence without copying the human brain?
A prerequisite for the technological singularity is the creation of a "strong" artificial intelligence (artificial superintelligence, ASI) capable of independently modifying itself. It is important to understand: must such an AI work like a human mind, or at least run on a platform constructed similarly to the brain?
The brain of an animal (including a human) and a computer work differently. The brain is a three-dimensional network "tuned" for parallel processing of huge amounts of data, while current computers process information serially, although millions of times faster than brains do. Microprocessors can perform calculations at speeds and efficiencies far exceeding the capabilities of the human brain, but they use a completely different approach to information processing. Traditional processors do not cope well with the parallel processing of large data volumes that is needed for solving complex multifactor problems or, for example, for pattern recognition.
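To make the contrast concrete, here is a minimal Python sketch (not from the original article; the array sizes are arbitrary) comparing element-by-element processing with a data-parallel formulation of the same computation:

import time
import numpy as np

# Two large arrays to "process" -- roughly the kind of bulk numeric work
# that pattern recognition involves.
a = np.random.rand(2_000_000)
b = np.random.rand(2_000_000)

# Serial style: one element at a time, as in a classic sequential loop.
start = time.perf_counter()
total = 0.0
for x, y in zip(a, b):
    total += x * y
serial_time = time.perf_counter() - start

# Data-parallel style: the whole operation expressed at once, letting the
# library exploit vectorized / parallel hardware under the hood.
start = time.perf_counter()
total_parallel = float(np.dot(a, b))
parallel_time = time.perf_counter() - start

print(f"serial loop:      {serial_time:.3f} s")
print(f"vectorized (dot): {parallel_time:.3f} s")

The two results are numerically the same; the difference is only in how the work is organized, which is exactly the gap between a serial processor and a massively parallel one.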
Neuromorphic chips are currently being developed that process information in parallel, similarly to the brains of animals, using, in particular, neural networks. Neuromorphic computers will likely use optical technology, which would make it possible to perform trillions of concurrent computations and thereby model the entire human brain more or less accurately.
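As a rough illustration of the parallelism involved (a sketch on conventional hardware, not an actual neuromorphic design; the layer sizes 784 and 1000 are arbitrary), a whole layer of artificial neurons can be evaluated at once as a single matrix operation:

import numpy as np

# One layer of 1000 artificial "neurons", each wired to 784 inputs
# (e.g. the pixels of a 28x28 image). All neurons are evaluated together
# in one matrix operation -- a crude analogue of the massive parallelism
# neuromorphic chips aim to provide natively.
rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.1, size=(784, 1000))
bias = np.zeros(1000)

def layer(x):
    # Weighted sum for every neuron at once, then a simple nonlinearity.
    return np.maximum(0.0, x @ weights + bias)

image = rng.random(784)        # stand-in for sensory input
activations = layer(image)     # 1000 neuron outputs computed in one step
print(activations.shape)       # (1000,)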
The Blue Brain Project and the Human Brain Project, funded by the European Union, the Government of Switzerland, and IBM, aim to build a complete computer model of the functioning human brain using biologically realistic simulations of neurons. The Human Brain Project aims to achieve functional modeling of the human brain by 2016.
Neuromorphic chips, in turn, allow a computer to process data from its "senses", to detect and predict patterns, and to learn from its own experience. This is a huge step forward in the field of artificial intelligence, bringing us notably closer to the creation of a full-fledged strong AI, i.e. a computer that could successfully solve any problem that people could theoretically solve.
Imagine such an AI inside a humanoid robot that looks and behaves like a human, but learns much faster and can perform almost any task better than Homo sapiens. These robots could be self-aware and/or have feelings, depending on how we decide to program them. A working robot is one thing, but what about "social" robots living with us, caring for children, the sick, and the elderly? Of course, it would be great if they could fully communicate with us; but should they have consciousness and emotions like ours? A bit like in Spike Jonze's film Her.
In the not too distant future, perhaps even in less than twenty years, such robots could take over virtually any human job, creating a society of abundance in which people can spend their time however they want. In this reality, advanced robots will drive the economy. Food, energy, and most consumer goods will be free or very cheap, and people will receive a fixed monthly benefit from the state.
It all sounds very nice. But what about AI that significantly surpasses the human mind? Artificial superintelligence (ASI), or strong artificial intelligence (SAI), with the ability to learn and improve itself, potentially becoming millions or billions of times smarter than the smartest of people? The creation of such beings could theoretically lead to a technological singularity.
Futurologist Ray Kurzweil believes that the singularity will occur around 2045. Among Kurzweil's critics is Microsoft co-founder Paul Allen, who believes the singularity is still a long way off. Allen argues that to build such a computer, one must first thoroughly understand the working principles of the human brain, and that this research, for whatever reason, has not accelerated as dramatically as digital technology did in the 1970s-90s, or medicine a little earlier. On the contrary, studies of the brain require ever more effort and bring less tangible results; he calls this the problem of the "complexity brake".
Without getting into the dispute between Paul Allen and Ray Kurzweil (see his response to Allen's criticism), I would like to discuss whether a complete understanding and simulation of the human brain is absolutely necessary for creating strong AI.
It is natural for us to consider ourselves a kind of pinnacle of evolution, including intellectual evolution, simply because that is how things turned out in the biological world on Earth. But this does not mean that our brain is perfect, or that other forms of higher intelligence cannot work differently.
Conversely, if aliens with intelligence superior to ours exist, it is almost unbelievable that their minds would function the same way as ours. The process of evolution is random and depends on innumerable factors; even if life were recreated on a planet identical to Earth, it would not develop the same way, and after N billion years we would observe very different species. What if the Permian mass extinction, or some other global extinction, had never happened? We would not exist. But that does not mean other animals would not have evolved a developed intellect in our place (and it is likely that their intelligence would be more developed, given a head start of millions of years). Perhaps it would have been some intelligent octopus with a completely different brain structure.
Human emotions and limitations push us toward the idea that everything good and intelligent must be arranged the same way we are. This error of thinking led to the development of religions with anthropomorphic gods. Primitive or simplified religions, such as animism or Buddhism, often have non-humanoid deities or no gods at all. More self-centered religions, poly- or monotheistic, as a rule, portray a god or gods as superhumans. We should not make the same mistake when creating artificial superintelligence. A superhuman intelligence need not be an "enhanced" copy of the human one, and the computer does not have to resemble our biological brain.
The human brain is a brilliant result of four billion years of evolution. Or, more precisely, a tiny twig on the great tree of evolution. Birds have much smaller brains than mammals and are often considered very stupid animals. But crows, for example, have psychological skills roughly at the level of a preschool child. They exhibit conscious, proactive, goal-directed behavior, develop problem-solving skills, and can even use tools. And all this with a brain the size of a bean. In 2004, a study in the department of animal behaviour and experimental psychology at Cambridge University showed that crows are almost as intelligent as apes.
Of course, there is no need to reproduce the human brain in detail for consciousness and initiative to appear. Intelligence depends not only on brain size, the number of neurons, or the complexity of the cortex, but also, for example, on the ratio of brain size to body mass. That is why a cow, whose brain is similar in size to a chimpanzee's, is more stupid than a mouse or a crow.
But what about computers? Computers are, in essence, only "brains"; they do not have bodies. Moreover, as computers have become faster and more efficient, their size has usually decreased, not increased. This is another reason why we should not compare biological brains and computers directly.
As Kurzweil explains in his response to Allen, knowledge of how the human brain works can only suggest solutions to certain specific problems in AI development, and most of these problems are gradually being solved without the help of neuroscientists anyway. We already know that the "specialization" of the brain occurs mainly through learning and the processing of one's own experience, not through "programming". Modern AI systems can already learn from their experience; for example, IBM Watson assembled most of its "knowledge" by reading books on its own.
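As a toy illustration of learning from examples rather than explicit programming (this is in no way Watson's actual pipeline; the texts and labels are invented for the example), a small text classifier can be trained with the scikit-learn library:

# Toy sketch of "learning by reading": the model gets no explicit rules,
# only example texts and their labels.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = [
    "the patient shows symptoms of fever and cough",
    "antibiotics were prescribed to treat the infection",
    "the stock market rallied after the earnings report",
    "investors sold shares amid fears of inflation",
]
labels = ["medicine", "medicine", "finance", "finance"]

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(texts, labels)   # all "experience" comes from the examples above

# A new sentence it has never seen; expected to be classified as medicine.
print(model.predict(["the patient was prescribed antibiotics for the infection"]))

Nothing in the code tells the system what "medicine" or "finance" means; the distinction is extracted entirely from the examples it has read.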
So there is no reason to be sure that it is impossible to create an artificial superintelligence without first fully understanding our own brain. A computer chip is certainly arranged differently from a biochemical neural network, and a machine will never feel emotions the way we do (although it may experience different emotions, beyond human understanding). Yet despite these differences, computers can already acquire knowledge on their own, and most likely they will keep getting better at it, even if they do not learn the same way humans do. And if they are given the opportunity to improve themselves, machines may well launch a non-biological evolution of their own, leading to superhuman intelligence and, eventually, to the singularity.
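The core mechanism of such non-biological evolution can be sketched in a few lines: candidates vary, the fittest survive, and quality improves without anyone redesigning them by hand. The following toy loop evolves hypothetical numeric "genomes" toward an invented TARGET; everything here is purely illustrative:

import random

TARGET = [3.0, -1.5, 7.2, 0.4]   # hypothetical goal, chosen only for the example

def fitness(candidate):
    # Higher is better: negative squared distance to the target.
    return -sum((c - t) ** 2 for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.2):
    # Random variation, the raw material of selection.
    return [c + random.gauss(0, rate) for c in candidate]

population = [[random.uniform(-10, 10) for _ in TARGET] for _ in range(50)]
for generation in range(1000):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                       # selection
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(40)]     # reproduction with variation

best = max(population, key=fitness)
print([round(x, 2) for x in best])   # should land close to TARGET

A real self-improving AI would of course operate on far richer structures than a list of numbers, but the loop of variation and selection is the same idea scaled down.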
Source: habrahabr.ru/post/223987/