AI has an Energy Consumption Problem

Popular AI technologies like ChatGPT and Google’s Bard use staggering amounts of energy. Exactly how much is hard to tell, because the companies that deploy them won’t say. According to one recent study, however, they may soon consume as much electricity as an entire country.

In responding to users’ queries, these technologies use as much as ten times more energy than a standard Google search. It is estimated that if Google were to incorporate AI into its search technology, its energy consumption would increase by 30 percent, to an annual total of 29.3 TWh (terawatt-hours) of electricity. That’s more energy than Ireland uses in a year.
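To put those figures in perspective, here is a quick back-of-envelope check using only the numbers above. It is a sanity check on the cited estimate, not additional data from the study:

```python
# Back-of-envelope check of the estimate above: if AI-assisted search
# raises annual consumption by 30 percent to 29.3 TWh, we can derive
# the implied pre-AI baseline and the added load.
ai_total_twh = 29.3              # projected annual consumption with AI search
relative_increase = 0.30         # projected 30 percent increase

baseline_twh = ai_total_twh / (1 + relative_increase)
added_twh = ai_total_twh - baseline_twh

print(f"Implied baseline: {baseline_twh:.1f} TWh/year")    # ~22.5 TWh
print(f"Added by AI search: {added_twh:.1f} TWh/year")     # ~6.8 TWh
```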

Microsoft looks to be responding to this new problem by going nuclear. A recent job posting shows the company seeking a “Principal Program Manager in Nuclear Technology” to examine the potential of Small Modular Reactors (SMRs) as a solution for its energy demands. Compact reactors of this kind have long powered aircraft carriers and submarines.

The MIT Lincoln Laboratory Supercomputing Center is currently working on ways to tackle AI’s thirst for power. Capping the power draw of the GPUs used to train AI models is the most straightforward approach, and it is surprisingly effective, reducing consumption by 12-15 percent while increasing computing time by only about 3 percent. Capping power also allows the GPUs to run cooler, prolonging their lifespan.
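Power capping isn’t exotic; on NVIDIA hardware it is a one-line driver setting. The sketch below wraps the real nvidia-smi --power-limit flag; the 250 W cap is an illustrative value, not a figure from the MIT work, and the command requires administrator privileges:

```python
import subprocess

def cap_gpu_power(gpu_index: int, watts: int) -> None:
    """Set a power limit (in watts) on one NVIDIA GPU via nvidia-smi.

    Requires administrator privileges and a driver that supports
    power-management limits; the cap holds until reset or reboot.
    """
    subprocess.run(
        ["nvidia-smi", "-i", str(gpu_index), "--power-limit", str(watts)],
        check=True,
    )

# Illustrative value only: cap GPU 0 at 250 W before launching a training job.
cap_gpu_power(gpu_index=0, watts=250)
```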

MIT has also developed a model that can estimate how well a given AI training configuration is learning. Using this model, researchers can halt the power-intensive training of underperforming models early, resulting in an 80 percent energy savings.
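The predictor itself isn’t spelled out here, but the energy-saving control loop around any such predictor is simple. In the hedged sketch below, predict_final_accuracy is a hypothetical stand-in for MIT’s model, and the threshold and epoch counts are placeholders:

```python
def predict_final_accuracy(history: list[float]) -> float:
    """Hypothetical stand-in for a learned performance predictor.

    A real predictor would extrapolate the learning curve; this
    placeholder simply returns the latest validation accuracy.
    """
    return history[-1]

def train_with_early_halt(configs, train_one_epoch, threshold=0.70,
                          check_after=5, max_epochs=50):
    """Halt power-hungry training runs whose projected accuracy is poor."""
    survivors = []
    for cfg in configs:
        history = []
        for epoch in range(max_epochs):
            history.append(train_one_epoch(cfg))  # returns validation accuracy
            if epoch + 1 == check_after and predict_final_accuracy(history) < threshold:
                break  # stop early and save the remaining epochs' energy
        else:
            survivors.append(cfg)  # ran to completion
    return survivors
```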

Of course, we still have to find ways to improve the efficiency of models as they run. One way to accomplish this is simply to allocate each computing task to the hardware that will complete it most efficiently, as researchers at Northwestern University and MIT have demonstrated.
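As a rough illustration of that idea (not the researchers’ actual system), the dispatcher below routes each task to whichever device has the lowest estimated energy cost. The joules-per-task figures are invented placeholders; in practice they would come from profiling real hardware:

```python
# Energy-aware task dispatch: the cost table below uses made-up
# placeholder values, not measurements from the research cited above.
ENERGY_COST_JOULES = {
    ("matmul", "gpu"): 0.8,
    ("matmul", "cpu"): 5.0,
    ("preprocess", "gpu"): 3.0,
    ("preprocess", "cpu"): 1.2,
}

def cheapest_device(task_type: str, devices=("cpu", "gpu")) -> str:
    """Pick the device with the lowest estimated energy for this task."""
    return min(devices, key=lambda d: ENERGY_COST_JOULES[(task_type, d)])

for task in ("matmul", "preprocess", "matmul"):
    print(task, "->", cheapest_device(task))
# matmul -> gpu, preprocess -> cpu, matmul -> gpu
```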

In the long run, we may need to rely on more efficient AI technologies. There are more than a few promising candidates on the horizon:

1. Neuromorphic Computing: Neuromorphic computing mimics the structure and function of the human brain. Instead of using energy-intensive deep learning models, these systems use spiking neural networks, which are inherently more energy-efficient.

2. Bio-inspired AI: Bio-inspired AI draws inspiration from natural processes. For instance, genetic algorithms and evolutionary computing can solve certain problems more efficiently, consuming less energy than traditional optimization algorithms (a minimal sketch appears after this list).

3. Reservoir Computing: Reservoir computing is a machine learning paradigm inspired by the brain’s dynamic signal processing. It has shown promise in tasks like speech recognition and time-series forecasting with a lower energy footprint.

4. AI-ML Hybrid Models: Combining traditional machine learning techniques with deep learning models can reduce energy consumption while maintaining high accuracy. Such hybrid models leverage the strengths of both approaches.

5. Explainable AI: Exploring more interpretable and explainable AI models can have the indirect benefit of reducing energy consumption by simplifying model architectures and making them easier to optimize.
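To ground item 2, here is a minimal genetic-algorithm sketch on a toy problem, with all parameters chosen purely for illustration. A small population is evolved by selection, crossover, and mutation, so only a handful of candidates are evaluated per generation rather than the whole search space:

```python
import random

TARGET = [1] * 20                        # toy objective: an all-ones bit string
POP_SIZE, GENERATIONS, MUTATION_RATE = 30, 100, 0.02

def fitness(genome):
    """Count bits matching the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    """Flip each bit with a small probability."""
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    """Splice two parents at a random cut point."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break                            # solved: stop evaluating early
    parents = population[: POP_SIZE // 2]          # keep the fittest half
    children = [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(POP_SIZE - len(parents))
    ]
    population = parents + children

best = max(population, key=fitness)
print(f"Best after {generation + 1} generations: {fitness(best)}/{len(TARGET)}")
```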