Researchers argue that the development of artificial intelligence leads to a "probable catastrophe" for mankind

Researchers argue that the development of artificial intelligence leads to a "probable catastrophe"

Are artificial intelligences leading us to death? "Probably," the researchers who have studied this question believe, although such claims of doom and gloom regularly appear on social media, the arguments put forward by scientists are very interesting.

Scientists from Google and Oxford University co-organized an AI magazine. In their Twitter, they briefly summarized their conclusion: in their view, IA may represent "a threat to humanity".

In fact, they even claim that "an existential disaster is not only possible, but also likely." The reason for this is that they have considered a very specific way of working with IE. Today what is known as "artificial intelligence" essentially covers the method of "machine learning." In this case, "artificial intelligence" consists of a system that feeds a lot of data for learning and extracting logical links for a given purpose.

As scientists have explained, learning for artificial intelligence takes the form of a reward that confirms whether the result is appropriate or not. It is this seemingly simple mechanism that can become a serious problem, they consider. ", they explain.

To help us understand this idea, they give an example of a magic box, assuming that this magic box is able to determine when a series of actions has led to something positive or negative for the world. To pass on information to the AI, it translates success or failure to reach the goal into 0 or 1.

Scientists point out that there's something different about how I.I. gets this information. For example, we'll take two I.I.s. It's clear that the reward given by the model is the number displayed by the magic box. The other side, on the other hand, could quite well understand that the reward is the "number that takes its camera off." There's nothing that could contradict that information at first sight. However, this interpretation is very different from the first one. Indeed, in the second case, I.E. could quite well decide simply to remove the paper on which we would scratch "1" in order to get a better reward, and optimize it, so it directly interferes with the reward process and interrupts the process imposed by its creators.

"", scientists say, "In addition, the game is influenced by a variety of prejudices that researchers believe make this type of interpretation likely." One reason is that such rewards are simply easier to obtain and may therefore appear to be more optimal.

They wondered, however, whether artificial intelligence could really interfere with the reward process? They concluded that as long as it interacts with the world, which is necessary for it to be of any benefit, yes. And even with a limited scope, suppose that the actions of the AI are confined to putting the text on the screen that an operator can read. The AI agent can deceive the operator and gain access to the direct levers through which his actions can have a wider effect.

In the case of our magic box, the consequences may seem harmless, but they may be "catastrophe" depending on the scope of application and the way the AI carries out its actions. "," scientists describe.

"In their view, such a system of remuneration could lead to a collision with a person. ", they add.