Speech recognition without heavy software or energy-hungry processors: researchers at the University of Twente, together with IBM Research Europe and Toyota Motor Europe, present a completely new approach. Their chips allow the material itself to "listen." The publication by Prof. Wilfred van der Wiel and colleagues appears today in Nature.
Until now, speech recognition has relied on cloud servers and complex software. The Twente researchers show that it can be done differently. They combined a Reconfigurable Nonlinear Processing Unit (RNPU), developed at the University of Twente, with a new IBM chip. Together, these devices process sound as smoothly and dynamically as the human ear and brain. In tests, this approach proved at least as accurate as the best software models, and sometimes better.
The potential impact is considerable: hearing aids that use almost no energy, voice assistants that no longer send data to the cloud, or cars with direct speech control. "This is a new way of thinking about intelligence in hardware," says Prof. Van der Wiel. "We show that the material itself can be trained to listen."