One of the leading experts in AI is clear about what will happen if we create a super-intelligent AI: "It will kill us all"

Last week the Future of Life Institute (FLI) published an open letter with a forceful petition: pause the development and training of new artificial intelligences for six months. The letter was signed by personalities such as Elon Musk, and it expressed great concern about this frantic pace of development, pursued without thought for the consequences. Now an expert in this field goes further and argues that pausing development is not enough.

We must stop all AI development. That is what Eliezer Yudkowsky, who leads the Machine Intelligence Research Institute, thinks. He is an expert who has been researching the development of Artificial General Intelligence (and its dangers) since 2001 and is considered one of the founders of this research field. He has just published an op-ed in Time in which he argues that a six-month moratorium would be better than nothing, but that the FLI letter "does little" to address the threat facing us.

Yudkowsky explains that we are in unexplored territory whose limits are not known:

We cannot calculate in advance what will happen and when, and it currently seems imaginable that a research laboratory could cross critical lines without noticing.

"The era of the IA has begun": Bill Gates believes that we are facing our second great technological revolution

A super-intelligent AI will kill us all. Although there have been analyses of the risks (including his own), for this expert the conclusion of his reflections on the development of AI is clear and forceful:

The most likely result of building a superhumanly intelligent AI, under anything remotely similar to the current circumstances, is that literally everyone on Earth will die. Not as in "maybe there is some remote possibility", but as in "that is obviously what will happen".

We are not prepared to survive a super AI. Yudkowsky believes that we cannot survive something like this without precision and preparation. Without them, he explains, an AI will emerge that "does not do what we want and does not care about us or about sentient life in general". It would be necessary to imbue the AI with that kind of care and concern, but currently "we don't know how to do it". Against an AI of this type, in the current situation, the fight would be futile. It would be "like the 11th century trying to fight the 21st century".

GPT-5, AGI and self-awareness. Yudkowsky also notes that "we have no idea how to determine whether AI systems are self-aware", because we don't know how they think and develop their responses. In fact, he states that if the jump between GPT-4 and a hypothetical GPT-5 is of the same magnitude as the one from GPT-3 to GPT-4, "we will no longer be able to justifiably say that it is probably not self-aware". There is already talk of GPT-5 as an artificial general intelligence indistinguishable from human intelligence.

AI development must be halted entirely. For this expert, the only solution is to stop the training of future AIs. "It must be indefinite and worldwide. There can be no exceptions, including for governments and armies". He would shut down all the large GPU clusters on which AI training runs, would put a cap on the computing power that could be used for AI systems, and would even monitor sales of graphics cards. He even spoke of "being willing to run the risk of a nuclear exchange", something that provoked great controversy and that he later clarified in a long message on Twitter.

It will take us decades to be prepared. Yudkowsky recalls that it took more than 60 years from the beginnings of this discipline to get to where we are, and that it could take at least another 30 years to be ready for a safe development of AI. His conclusion is just as blunt:

We are not prepared. We are not on track to be significantly more prepared in the foreseeable future. If we go ahead with this, everyone will die, including children who did not choose this and did not do anything wrong. Shut it down [AI development].

OpenAI's CEO disagrees. Yudkowsky's message is not new: he has been warning of the danger of developing this type of AI since the early 2000s. Sam Altman, CEO of OpenAI (the company responsible for the development of ChatGPT and GPT-4), does not agree with these statements, and he knows Yudkowsky well. In a recent selfie, for example, he posed with him and with Elon Musk's partner, the singer Grimes.

Step by step? As Bloomberg pointed out even before the Time op-ed was published, Altman had spoken on Twitter about this future development of an artificial general intelligence:

There will be scary moments as we move towards AGI-level systems, and significant disruptions, but the upsides can be so amazing that it is well worth overcoming the great challenges to get there.

Altman himself acknowledged that concern about the development of artificial intelligence in an interview with ABC News. "We have to be careful. I think people should be happy that we are a little bit scared of this". Even so, he clarified, his systems "wait for someone to give them an input. This is a tool that is very much in human control".

Remembering Oppenheimer. In an interview with The New York Times, the journalist recalled how, at a meeting in 2019, Altman quoted Robert Oppenheimer, one of the people most responsible for the creation of the atomic bomb. Back then he said that "technology happens because it is possible". For him, he explained, artificial intelligence was as inevitable now as the Manhattan Project was then.

And OpenAI promises safe and responsible development. That is the message in one of the sections of its website, where the company explains that "AI systems are becoming part of everyday life. The key is to ensure that these machines are aligned with human intentions and values". That is the message, but the truth is that although the company originally shared information about its progress openly, it has become much more cautious about what it reveals about its projects.

But they don't know how to do it (and they keep secrets). OpenAI's own leaders explain on their blog that they plan an "iterative and empirical" development, going step by step, but they acknowledge that this "alignment" of an artificial general intelligence is difficult to achieve, something that Yudkowsky himself pointed out in 2018.

Image: Precessor | TechCrunch

In Xataka | Copilot, ChatGPT and GPT-4 have changed the programming world forever. This is what programmers think

