Don't put too much faith in what ChatGPT or Bard tell you. These conversational artificial intelligence engines have a fundamental problem: they invent things and make mistakes, some of which are very costly for companies like Google. But there is another, darker problem lurking in them.
Polluting. The race to build search engines based on artificial intelligence demands enormous computing power, which in turn drives up the amount of energy these companies consume. That translates into significant carbon emissions, aggravating a problem that was already noticeable and that could now get worse.
Training costs a lot. These engines feed on huge amounts of data on which they are trained so that they can offer increasingly accurate and better answers to our questions. Carlos Gómez-Rodríguez, a computer scientist at the University of La Coruña, explained in Wired that "training these models requires an immense amount of computing power. Right now, only the big tech companies can train them".
Estimates. OpenAI has not disclosed the computational or energy costs of these products, but studies published by Google researchers estimate that training GPT-3 (on which ChatGPT is partially based) consumed 1,287 MWh and generated emissions of more than 550 tons of CO2.
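Those two figures imply a rough carbon intensity for the electricity used in training. A minimal back-of-the-envelope sketch (the 1,287 MWh and 550 t values are the estimates cited above, not official OpenAI numbers):

```python
# Figures cited by the Google researchers' study (estimates, not official data).
training_energy_mwh = 1_287   # estimated energy to train GPT-3
emissions_tons_co2 = 550      # estimated CO2 emitted ("more than 550 tons")

# Implied carbon intensity of the electricity mix used for training.
intensity_t_per_mwh = emissions_tons_co2 / training_energy_mwh
grams_per_kwh = intensity_t_per_mwh * 1_000  # 1 t/MWh == 1,000 g/kWh

print(f"≈ {grams_per_kwh:.0f} g CO2 per kWh")  # ≈ 427 g CO2 per kWh
```

That ballpark (around 400 g CO2/kWh) is in the range of a fossil-heavy grid, which is why the choice of data-center location and energy source matters so much for these trainings.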
Other independent studies, such as one from the University of Massachusetts Amherst, found that a single training run generates as many emissions as five cars over their entire life cycle. If a model is trained repeatedly, the emissions grow enormously. The Economist (paywall) warned a few years ago that the cost of training machines "is becoming a problem".
Is that much? The figure is not particularly exceptional in itself (it is equivalent to the emissions of 550 flights between New York and San Francisco), but to it we would have to add the emissions from "running the engine to serve the requests of millions of users", as Gómez-Rodríguez explained.
And ChatGPT is not even updated. The OpenAI engine has the limitation of being trained on data that only reaches the end of 2021. In an engine like the one integrated into Bing, which is updated practically every day, that training is theoretically constant, and that implies an energy use that is high and, above all, continuous. According to the International Energy Agency, data centers are currently responsible for 1% of greenhouse gas emissions, and this massive use of resources could increase that share.
Environmental objectives. This could complicate things for Microsoft and Google, whose goal is to achieve a negative emissions balance by 2050. Google wants to reach zero emissions by 2030, but these processes can jeopardize those objectives. Renewable energy helps, and there is also talk of redesigning neural networks to make them more efficient and reduce the so-called "inference time", that is, the amount of computing power needed for an algorithm to work on new data.
Image: Marek Piwnicki