
New computer chip design could slash AI energy demands

As artificial intelligence platforms like OpenAI’s ChatGPT and Microsoft’s Copilot go mainstream, the electricity needed to run them is soaring. In response, researchers are racing to build hardware that guzzles less energy.

One such effort is underway at the University of Texas at Dallas. Working with Texas Instruments and Arizona-based Everspin Technologies, scientists there have built a small neuromorphic computer system – a brain-inspired design – that uses tiny magnetic “sandwiches” inside its chips to mimic the behavior of neurons. In lab tests, an AI program running on this hardware recognized tiny black-and-white images while using less circuitry and energy than today’s AI systems.

The findings from this prototype, published in the journal Communications Engineering, are “a tour de force” not just in how AI models run but in how they learn – the energy-hungry process that teaches these algorithms to make good predictions, said Mark Stiles, a computer scientist at the National Institute of Standards and Technology in Washington, D.C., who was not involved in the study.

Like a synapse

Imitating human brains in computing isn’t a new idea. In the late 1950s, the U.S. Office of Naval Research unveiled the “perceptron,” a 5-ton, room-sized machine that, after about 50 trials, taught itself to identify punch cards marked on either the left or right. The computer relied on a single-layer neural network, an algorithm that learns through trial and error to tell which of two categories an input belongs to.
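The perceptron’s trial-and-error learning rule is simple enough to sketch in a few lines of Python. The toy data below – points “marked” to the left or right of zero – is purely illustrative, not the punch-card dataset the original machine used:

```python
# Minimal single-layer perceptron, echoing the trial-and-error learning
# described above. The data here is an illustrative stand-in, not the
# original punch-card inputs.
def train_perceptron(samples, epochs=20, lr=0.1):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, label in samples:              # label: 1 = right, 0 = left
            pred = 1 if w * x + b > 0 else 0  # current guess
            err = label - pred                # correction from the error
            w += lr * err * x
            b += lr * err
    return w, b

samples = [(-2.0, 0), (-1.0, 0), (1.5, 1), (3.0, 1)]
w, b = train_perceptron(samples)
print(all((1 if w * x + b > 0 else 0) == y for x, y in samples))
```

After a handful of passes over the data, the learned weight and bias separate the two categories – the same one-layer scheme, scaled down from a 5-ton machine to a dozen lines.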

Decades later, the perceptron’s design would inspire deep learning, a kind of AI that finds patterns in data by running it through many layers of artificial neurons, also known as nodes. (Each neural layer receives data, processes it and sends it on to the next layer.) Deep learning revolutionized AI and is now commonplace, curating social media feeds, powering image recognition and more.
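The layer-by-layer flow described above can be sketched in miniature. This is a bare illustration of the idea – each layer takes its input, transforms it, and passes it on – with placeholder weights rather than a trained model:

```python
import math

def layer(inputs, weights, biases):
    # One dense layer: each node computes a weighted sum of its inputs,
    # then applies a nonlinearity before passing the result onward.
    return [
        math.tanh(sum(w * x for w, x in zip(ws, inputs)) + b)
        for ws, b in zip(weights, biases)
    ]

x = [0.5, -1.0]                                                      # input data
h = layer(x, weights=[[0.2, -0.4], [0.7, 0.1]], biases=[0.0, 0.1])   # hidden layer
y = layer(h, weights=[[0.3, -0.6]], biases=[0.05])                   # output layer
print(y)
```

Real deep learning models stack many such layers with millions or billions of weights learned from data, which is where the energy bill comes from.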

But that kind of intelligence, along with other generative AI models, comes with a hefty energy bill. Training OpenAI’s GPT-3, for example, consumed about as much electricity as powering an average U.S. household for 120 years. One estimate says the daily queries ChatGPT fields from millions of users take enough energy to charge thousands of electric vehicles each day – over a year, roughly as much as 29,000 U.S. homes consume.

Some efforts to slash energy costs focus on using renewable sources, or on slimming down the AI models themselves. But engineers and computer scientists are also looking to neuromorphic computing – first conceived of in the late 1980s – as another way to offset AI’s energy boom.

At UT Dallas, Joseph Friedman, an associate professor of electrical and computer engineering, is making neuromorphic computer chips that process information like human neurons and store it locally, like synapses. Synapses pass a signal to the next neuron and convey how strong that signal is. In the human brain, studies suggest that at least some types of memory are stored in synapses.

Strength of connection

One of the biggest hurdles for the researchers is copying how synapses store the strength of a connection, Friedman said. Signals between neurons in the brain aren’t simply binary – on or off, like in a conventional computer. Synapses can be stronger or weaker, their strength tuned by different chemicals like a volume dial on a boombox.

Storing that kind of analog data in computer hardware is messy and error-prone, Friedman said. His team instead uses magnetic tunnel junctions, or tiny magnetic sandwiches made of two magnetic layers separated by a thin barrier. Electrons can travel through that barrier easily when the magnets line up and less so when they point in opposite directions, making each junction act like an on-and-off switch. The overall signal between artificial neurons can be strengthened or weakened by flipping on more or fewer of these switches.
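One way to picture the scheme: each binary junction contributes a fixed increment, so a synapse’s effective strength is simply how many of its junctions are switched “on.” The sketch below is a toy model of that idea – the class name, the count of eight junctions per synapse, and the linear weighting are all illustrative assumptions, not details from the paper:

```python
# Toy model of a synaptic weight built from binary magnetic tunnel
# junctions: each junction is either "on" (magnetic layers aligned,
# electrons pass easily) or "off" (layers opposed), and the effective
# weight is the fraction of junctions that are on.
class MTJSynapse:
    def __init__(self, n_junctions=8):
        self.junctions = [False] * n_junctions  # all off: weakest connection

    def strengthen(self):
        # Flip the next "off" junction on, if any remain.
        for i, on in enumerate(self.junctions):
            if not on:
                self.junctions[i] = True
                break

    def weaken(self):
        # Flip the last "on" junction off, if any remain.
        for i in reversed(range(len(self.junctions))):
            if self.junctions[i]:
                self.junctions[i] = False
                break

    @property
    def weight(self):
        # Signal strength is proportional to the number of "on" switches.
        return sum(self.junctions) / len(self.junctions)

s = MTJSynapse()
s.strengthen(); s.strengthen(); s.strengthen()
print(s.weight)  # 3 of 8 junctions on -> 0.375
```

Because each junction only ever sits in one of two magnetic states, the stored value can’t drift the way a truly analog signal can – which is the appeal over messier analog storage.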

Friedman and his colleagues wired together eight of these magnetic sandwiches into a prototype computer system running an AI image-recognition model. The black-and-white images they asked the AI to distinguish were simple and small – just four pixels large, or roughly the size of a speck on a TV screen.

That task might not seem like much, but when the team pitted the setup against a conventional AI system, it learned the patterns and made predictions with less total energy – in part because it could store its memory within the neuromorphic chips.

The need to scale

While the prototype is small, Friedman said that once the neuromorphic system is built at a large scale, “we’re shooting for on the order of 100 to 1,000 times more energy efficiency” compared with electronic circuits called graphics processing units, such as those produced by California-based tech giant Nvidia.

Stiles at the National Institute of Standards and Technology, who is also studying neuromorphic chips that use magnetic tunnel junctions, said it will still be an uphill battle to incorporate neuromorphic hardware into computing platforms that run AI systems. Trillions of integrated chips, he said, would be needed in one of the large language models. “So the scaling up is still huge at this point, and that’s going to require stepping stones,” such as the projects from his and Friedman’s labs. Those that are intriguing enough, Stiles added, will convince companies or other investors to provide the financial resources to solve the biggest problems in computing.

In Texas, where data centers are popping up to feed the AI boom and driving concerns about wintertime power shortfalls, that pressure on the grid could push demand for neuromorphic computing. Friedman said another push could come from the AI boom itself, because neuromorphic hardware can help models learn more efficiently by storing information locally.

Today, for example, an electric car like a Tesla typically sends data from its sensors to the cloud, where distant data centers process it and send results back to the car’s AI system.

“But you could have a Tesla that learns on its own, without being connected to anything, and that could enable safer cars,” Friedman said. “The idea is that you could democratize the learning, and everyone could have their own AI model that they don’t have to share.”

With private U.S. investment in AI soaring past $100 billion, AI systems showing up everywhere from medical devices to self-driving cars, and weekly OpenAI users hitting about 700 million in August, the pressure to make AI less power-hungry isn’t going away. Neuromorphic computing may be in its early days, but the hope it offers for smarter, more efficient machines is likely to stick around.
