Artificial intelligence is not just changing software. It is also driving a sharp rise in electricity use. In the United States alone, AI systems and data centers consumed about 415 terawatt-hours of electricity in 2024, according to the International Energy Agency. That amounts to nearly 10% of the nation’s total electricity generation, and the figure is expected to double by 2030.

That trend is raising a difficult question for the future of AI: Can these systems become more capable without becoming dramatically more expensive to power?

Researchers at the Tufts University School of Engineering believe the answer may be yes. They have built a proof of concept for an AI approach that could use as little as one-hundredth of the energy of today’s standard systems while also producing more accurate results on certain tasks. In a field that often rewards ever larger models and ever larger computing infrastructure, that kind of improvement could be significant.

The work was developed in the laboratory of Matthias Scheutz, Karol Family Applied Technology Professor. It centers on neuro-symbolic AI, which combines standard neural networks with symbolic reasoning, similar to how people break problems into steps and categories.
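The general pattern can be illustrated with a short sketch. This is not the Tufts system, and the weights, symbols, and rules below are invented for illustration only: a small learned model maps raw inputs to discrete symbols, and explicit hand-written rules then reason over those symbols.

```python
# A minimal, illustrative sketch of the neuro-symbolic pattern described
# above; it is NOT the Tufts system, and every name and number here is an
# assumption made for the example.
import math

# "Neural" perception: a tiny logistic unit that maps a raw 2-D input to a
# discrete symbol ("inside" or "outside"), standing in for a trained network.
WEIGHTS = (1.5, -2.0)
BIAS = 0.1

def perceive(x: float, y: float) -> str:
    """Map raw features to a symbol via a logistic score."""
    score = 1.0 / (1.0 + math.exp(-(WEIGHTS[0] * x + WEIGHTS[1] * y + BIAS)))
    return "inside" if score > 0.5 else "outside"

# Symbolic reasoning: explicit rules over symbols, applied step by step,
# much as a person would break a problem into categories.
RULES = {
    ("inside", "inside"): "both_inside",
    ("outside", "outside"): "both_outside",
}

def reason(a: str, b: str) -> str:
    """Combine two perceived symbols with a lookup-table rule."""
    return RULES.get((a, b), "mixed")

if __name__ == "__main__":
    # Perception turns raw numbers into symbols; the rule layer reasons over them.
    print(reason(perceive(0.8, 0.1), perceive(-0.6, 0.9)))
```

Because the reasoning step works over a handful of symbols rather than a large network's full activations, a hybrid of this kind can, in principle, do far less numerical work per decision, which is the intuition behind the energy savings described above.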
