Elon Musk backs his opinion on the dangers of AI with $10M
As experts on artificial general intelligence (AGI) joke: "If a person has nothing to say on the substance of the AGI problem, he talks about the problem of its (un)safety." This problem is much clearer and closer to a broad audience than subtle technical issues, so an opinion on it may be voiced by anyone from a famous physicist to an unknown Habr user. Recently Elon Musk's remarks about the dangers of AI also made the rounds, and a spokesman said Musk would soon publish a more detailed view on artificial intelligence. The answer turned out to be not just words, but words backed by $10 million.
The distribution of this money will be handled by the Future of Life Institute (whose panel of experts includes Stuart Russell, a renowned researcher and author of one of the most popular textbooks on AI), which will open a portal for grant applications on January 22. The institute's site also hosts an open letter on making research into the reliability and usefulness of (not yet existing) artificial intelligence a priority. The letter was signed by many well-known professionals working in academic and commercial organizations. It turns out that the problem of AI safety already worries people who do have something to say about AGI itself?
Besides, people like Elon Musk do not simply scatter money around, even if the sum is not that large. Still, this proposal looks more like PR and a passing fad. It is doubtful that the problem of AI reliability and safety can be worked on in the abstract, without one's own advanced AGI prototype. Judging by the AGI conferences, the people who might be interested in this grant have no prototype that it would already be time to fear. And if such a prototype does exist at some corporation, that corporation is unlikely to be interested in this grant. So what, then, is its point? Does it make sense to seriously tackle the problems of safe/friendly AGI now?
- Yes, it is high time to work on this in full swing, before it is too late
- Yes, if someone pays money for it
- Yes, but only alongside developing one's own AGI
- Possible, but not necessary
- No, for now this is speculation and a waste of time
- No, a developed intellect will inevitably be friendly
- AI is impossible in general, so the question is meaningless
- Other

521 people voted; 123 abstained. Source: geektimes.ru/post/244366/