Networks programmed directly into computer chip hardware can identify images faster, and use much less energy, than the traditional neural networks that underpin most modern AI systems. That’s according to work presented at a leading machine learning conference in Vancouver last week.
Neural networks, from GPT-4 to Stable Diffusion, are built by wiring together perceptrons, which are highly simplified simulations of the neurons in our brains. In very large numbers, perceptrons are powerful, but they also consume enormous amounts of energy—so much that Microsoft has penned a deal that will reopen Three Mile Island to power its AI ambitions.
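Concretely, a perceptron is little more than a weighted sum followed by a threshold. A minimal sketch (the weights and the AND-gate example below are illustrative, not drawn from any real model):

```python
def perceptron(inputs, weights, bias):
    """Fire (return 1) if the weighted sum of inputs, plus a bias,
    is positive; otherwise stay silent (return 0)."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0

# Hand-picked weights that make one perceptron act as a logical AND gate:
def and_gate(a, b):
    return perceptron([a, b], [1.0, 1.0], bias=-1.5)
```

A model like GPT-4 chains billions of such units together, and every one of those multiplications and additions is simulated in software on a GPU rather than carried out by dedicated circuitry.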
Part of the trouble is that perceptrons are just software abstractions—running a perceptron network on a GPU requires translating the network into the language of hardware, which takes time and energy. Building the network directly from hardware components does away with much of that cost. One day, such hardware networks could even be built directly into the chips used in smartphones and other devices, dramatically reducing the need to send data to and from servers.