Microsoft has fired its artificial intelligence ethics team. It’s a bad sign

As reported by Platformer, Microsoft has decided to dismiss the “ethics and society” team within its artificial intelligence division. The move is part of the recently announced round of massive layoffs affecting 10,000 employees across the company, but in this case it carries a special significance.

A small but important team. After a reorganization last October, the team had been reduced to just seven people; in 2020 it numbered around 30. Despite the cuts, they were in charge of something important. As one member of the division put it, “our job was to create rules in areas where none existed”.

Speed comes first. During that reorganization a few months ago, John Montgomery, corporate vice president of the AI division, explained to employees that the objective was to move fast. In a recording of the meeting, he told them that “the pressure from Kevin [Scott, CTO] and Satya [Nadella, CEO] is very high to take the most recent OpenAI models and the ones that come after them and get them into customers’ hands at very high speed”.


Responsible innovation. Ethics is key to innovation in a discipline as promising yet complex as artificial intelligence. One employee tried to raise objections, stressing that his team was “deeply concerned” about how AI could affect society “and the negative impacts that we’ve had. And they’re significant”. Montgomery replied that he couldn’t do anything about it because “the pressures remain the same”.

An example. Last year, this ethics team published a memo laying out the risks posed by Bing Image Creator, which uses DALL-E to generate images from text “prompts”. They correctly predicted that the service threatened artists’ income by allowing anyone to copy their styles. That also posed a threat to Microsoft, they explained. Platforms like Stable Diffusion have been sued by Getty Images, for example, which shows the kind of problem Redmond actually faces with this type of initiative.

The threat is there. The ethical development of technologies like artificial intelligence is crucial now that the field is enjoying massive popularity, and many have warned of the dangers of not understanding how engines like ChatGPT really work (“how they think”) when they generate text. Noam Chomsky, one of the most important contemporary thinkers, wrote a column a few days ago about the “false promise” of ChatGPT and criticized its success, while the writer Gary Marcus has discussed how these tools could end up producing endless amounts of disinformation.

Microsoft is in a hurry. The Redmond company, with little to lose in the search business, has been especially hasty in launching Bing with ChatGPT, although it has ended up limiting how the service operates to avoid problems. Google is being much more cautious, according to the company, to avoid reputational damage. There are other reasons, of course.

Not all is lost. Despite these dismissals, Microsoft maintains its so-called “Office of Responsible AI”, which is dedicated precisely to creating rules and guidelines to govern the artificial intelligence initiatives the company undertakes. The company’s executives emphasized in a statement that they want to “develop AI products and experiences in a safe and responsible way”.

Image: Tony Webster

In Xataka | A practical guide to writing the best ‘prompts’ in Midjourney and creating amazing images
