In 1943, a pair of neuroscientists were trying to describe how the human nervous system works when they accidentally laid the foundation for artificial intelligence. In their mathematical framework for how systems of cells can encode and process information, Warren McCulloch and Walter Pitts argued that each brain cell, or neuron, could be thought of as a logic device: It either turns on or it doesn’t. A network of such “all-or-none” neurons, they wrote, can perform simple calculations through true or false statements.
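To make the “all-or-none” idea concrete, here is a minimal sketch in Python of such a binary threshold unit. This is an illustration, not McCulloch and Pitts’ own formulation: the function name and threshold values are chosen for the example. The neuron outputs 1 (“fires”) only when enough of its inputs are active, and by varying the threshold the same unit computes different logic operations.

```python
# A minimal sketch (illustrative, not McCulloch and Pitts' original notation)
# of an "all-or-none" neuron: inputs and output are binary, and the cell
# fires only if the number of active inputs reaches a fixed threshold.

def mcculloch_pitts_neuron(inputs, threshold):
    """Return 1 if the count of active (1) inputs meets the threshold, else 0."""
    return 1 if sum(inputs) >= threshold else 0

# With a threshold equal to the number of inputs, the neuron computes AND;
# with a threshold of 1, it computes OR.
print(mcculloch_pitts_neuron([1, 1], threshold=2))  # AND(1, 1) -> 1
print(mcculloch_pitts_neuron([1, 0], threshold=2))  # AND(1, 0) -> 0
print(mcculloch_pitts_neuron([1, 0], threshold=1))  # OR(1, 0)  -> 1
```

Chaining such true-or-false units together is what lets a network of them carry out the simple calculations the paper described.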
“They were actually, in a sense, describing the very first artificial neural network,” said Tomaso Poggio of the Massachusetts Institute of Technology, who is one of the founders of computational neuroscience.
McCulloch and Pitts’ framework laid the groundwork for many of the neural networks that underlie the most powerful AI systems. These algorithms, built to recognize patterns in data, have become so competent at complex tasks that their products can seem eerily human. ChatGPT’s text is so conversational and personal that some people are falling in love. Image generators can create pictures so realistic that it can be hard to tell when they’re fake. And deep learning algorithms are solving scientific problems that have stumped humans for decades. These systems’ abilities are part of the reason the vocabulary of AI borrows so heavily from human thought, with terms such as intelligence, learning and hallucination.
But there is a problem: The initial McCulloch and Pitts framework is “complete rubbish,” said the science historian Matthew Cobb of the University of Manchester, who wrote the book The Idea of the Brain: The Past and Future of Neuroscience. “Nervous systems aren’t wired up like that at all.”