In 1972, at the age of ten, I spent a week somewhere near Windsor – it’s hazy now – learning how to program a computer. This involved writing out instructions by hand and sending the pages to unseen technicians who converted them into stacks of cards punched with holes. The cards were fed overnight into a device that we were only once taken to see. It filled a room; magnetic tape spooled behind glass panels in big, grey, wardrobe-sized boxes. The next morning, we’d receive a printout of the results and the day would be spent finding the programming faults that had derailed our calculations of pi to the nth decimal place.

There was awed talk of computer experts who worked at an even lower level of abstraction, compiling programs (no one called it coding then) in the opaque, hieroglyphic notation of “machine code”. Those were the days when you had to work close to the guts of the machine: you thought in terms of central processing units, circuit diagrams, binary logic. If you wanted to play games, you had to write them yourself – by the 1980s, on a BBC Micro or Sinclair ZX Spectrum with less graphical sophistication than an ATM.

I was reminded of those clunky, makeshift early days of public access to computers when, in September, I saw one of IBM’s quantum computers at the company’s research labs in Rüschlikon, a suburb of Zurich. On a hill overlooking Lake Zurich, in the early autumn sunshine, the labs have a laid-back air that is more Californian than Swiss. In the past several decades, they have been the incubator of Nobel Prize-winning scientific innovations. Things grow here that affect the world.
This computer has the improvised appearance of a work in progress. It’s a sturdy metal cylinder the size and shape of a domestic water-heater immersion tank, suspended on a frame of aluminium beams reaching to the ceiling and brought to life by a dense tangle of wires that lead to a bank of off-the-shelf microwave oscillators. The “brain” – the component in which binary ones and zeros of data are crunched from input to output – sits deep inside this leviathan, on a microchip the size of a baby’s fingernail.

The last time I visited IBM’s Zurich centre, in 2012, its head of science and technology, Walter Riess, talked about the company’s plans for an imminent “post-silicon” era, after the silicon-chip technology of today’s computers had reached the physical limits of its ability to offer more computing power. Back then, quantum computing seemed like a far-off and speculative option for meeting that challenge.

Now it’s real. “This is what computing felt like in the 1950s,” Riess told me in September as he introduced me to the new device. It has become routine to juxtapose images of these room-filling quantum machines with the first prototype digital computers, such as the valve-driven ENIAC (or “Electronic Numerical Integrator and Computer”) at the University of Pennsylvania, used for ballistics calculations by the US military. If this is where quantum computing is now, such pictures imply, just try to imagine what’s coming.

Quantum computing certainly sounds like the future. It’s the technology of choice for sci-fi film-makers who want their artificial intelligence networks to have unlimited potential. But what is it really about, and what might it do?