“We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” That is the core message of the open letter signed by, among others, Elon Musk, Steve Wozniak (co-founder of Apple), Jaan Tallinn (co-founder of Skype) and Max Tegmark (MIT).
The letter, signed by more than 1,000 figures from the technology world, puts the rapid advances of artificial intelligence systems on the table, specifically those whose capabilities would exceed GPT-4's.
Lack of planning. The letter argues, first of all, that the development of AI systems could represent a profound change in history, provided it is managed and planned properly. The signatories claim that not even the creators of these systems are currently able to understand or reliably predict their behavior.
“Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and falsehoods? Should we automate away all the jobs? Should we develop non-human minds that might eventually outnumber, outsmart, obsolete and replace us?”
Eyes on GPT-4. The letter does not call for a complete pause in AI development, only in systems more capable than GPT-4. The signatories ask for “a step back” in the race toward ever more unpredictable models with ever more emergent capabilities. The key point is that current development should instead focus on “making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy and loyal.”
In other words, pausing the development of ever greater capabilities in order to work in depth on the control and reliability of these systems.
Protocols that offer guarantees. Beyond working on the reliability and safety of these artificial intelligence systems, the letter puts on the table the need to develop new shared safety protocols. “These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.” It thus calls for the creation of, and adherence to, standards that guarantee these systems behave as intended.
Greater regulation. The letter also raises the need to create new regulatory authorities in the field of AI. According to the petition, these should oversee and track “highly capable AI systems,” as well as promote ways to distinguish real content from artificially generated content. It also mentions legal liability for harms caused by AI, tracking of possible data leaks from these models, and public funding for safety research in this field.
“Institutions need to be well-resourced to cope with the dramatic economic and political disruptions (especially to democracy) that AI will cause.”
Perfect timing. The letter could hardly have arrived at a more fitting moment, as this seems to be the week AI models are being called into question. On March 27, Europol warned that criminal networks could use this type of tool to their advantage. It highlighted the disinformation risks these tools can pose, such as generating texts used for phishing or other malicious purposes.
Image | Steve Jurvetson
More information | futureoflife
In Xataka | The mega-guide to 71 artificial intelligence tools: tell me what you need and I’ll tell you which AI is best