No matter what you ask an AI, it will always generate an answer, whether or not that answer is accurate. To produce it, the system relies on tokens: words or fragments of words that are converted into numerical data so the AI model can process them.
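The idea of mapping text fragments to numbers can be illustrated with a minimal sketch. Real LLM tokenizers (such as byte-pair encoding) are far more sophisticated; the tiny vocabulary and fallback rule below are purely hypothetical and exist only to show the core concept.

```python
def tokenize(text, vocab):
    """Split text into known fragments and return their numeric IDs."""
    ids = []
    for word in text.lower().split():
        if word in vocab:
            ids.append(vocab[word])
        else:
            # Unknown words fall back to single-character fragments.
            for ch in word:
                ids.append(vocab.get(ch, vocab["<unk>"]))
    return ids

# Hypothetical vocabulary for demonstration only.
vocab = {"<unk>": 0, "how": 1, "warm": 2, "is": 3, "it": 4,
         "t": 5, "o": 6, "d": 7, "a": 8, "y": 9}

print(tokenize("How warm is it today", vocab))
# "today" is not in the vocabulary, so it is broken into character fragments.
```

These numeric IDs, not the raw text, are what the model actually computes over, and every token processed adds to the computational (and therefore energy) cost of a response.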

That process, along with the broader computing involved, results in carbon dioxide (CO₂) emissions. Yet most people are unaware that using AI tools carries a significant carbon footprint. To better understand the impact, researchers in Germany analyzed and compared the emissions of several pre-trained large language models (LLMs) using a consistent set of questions.

“The environmental impact of questioning trained LLMs is strongly determined by their reasoning approach, with explicit reasoning processes significantly driving up energy consumption and carbon emissions,” said Maximilian Dauner, a researcher at Hochschule München University of Applied Sciences and first author of the Frontiers in Communication study. “We found that reasoning-enabled models produced up to 50 times more CO₂ emissions than concise response models.”
