Quantum computers - why would you want one?

This topic is sponsored by the Sir Mark Oliphant International Frontiers of Science and Technology Conference Series.

Do we really need even faster computers?

The computers we have already go at blinding speed and can do pretty much whatever we want. Yet we have always found ways to use each step-up in computing power as it has arrived: we demand faster downloads, more whiz-bang graphics and ever fancier software.

Supercomputers, which usually take up a whole room, are already in demand for things like forecasting weather and climate, designing aircraft and computer chips. Yet something called a quantum computer is emerging which could give everyone that sort of grunt on their desktop.

Cracking the code

One much-discussed need for a really slick computer is cryptography, or code-cracking. This is not just spy stuff. We move money about on the Internet all the time, and we are told the transactions are protected by uncrackable security codes. Many of these encryption schemes rely on very large numbers - numbers with 400 digits, say. To crack the code, such a number has to be broken back down into the smaller prime numbers that produce it when multiplied together.
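
A few lines of Python make the asymmetry plain. Multiplying two primes together is instant, but recovering them from their product by trial division means testing one candidate after another (the primes below are small ones, chosen purely for illustration):

    # Multiplying two primes is easy...
    p, q = 104729, 1299709            # two illustrative primes
    n = p * q                         # instant, even for huge numbers

    # ...but recovering p and q from n means hunting for a divisor.
    def factor(n):
        d = 2
        while d * d <= n:
            if n % d == 0:
                return d, n // d      # found the two prime factors
            d += 1
        return n, 1                   # n was itself prime

    print(factor(n))                  # (104729, 1299709), after ~100,000 trials

With a 400-digit number, the list of candidates to test is longer than any ordinary computer could ever work through.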

With today's computers this takes just about forever, so we can rely on such encryptions to keep our money safe. But it seems quantum computers could crack them within a few hours or even a few minutes. What then? Even tougher codes that will need even faster computers? Will future quantum computers spend a lot of time chasing their own tails?

Moore's Law hits the wall

The most famous, or perhaps the most notorious, statement in computer science is Moore's law, named after Gordon Moore, a pioneer of the chip industry and a co-founder of Intel. It is not really a 'law'; it is rather an observation of what has happened, and what we might expect to continue to happen, at least in the immediate future.

This 'law' says that the number of components which computer-chip makers can squeeze onto a chip for data storage or processing doubles every 18 months or so, as design and manufacturing methods improve. Certainly this has been going on for more than three decades. Where a few thousand transistors or capacitors or resistors would fit in 1975 - on a piece of silicon the size of your fingernail - we can now place hundreds of millions. Vastly more information can be stored, and calculations are now done billions of times a second, mightily increasing the power of computers.

For this to happen, the chip makers must make those components ever smaller. Every 18 months or so the area each component occupies is roughly halved, which is another way of expressing Moore's law.
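
The arithmetic of that doubling is easy to check for yourself. The little Python sketch below starts from a rounded, illustrative 1975 figure of a few thousand components and doubles it every 18 months:

    # Moore's law as plain arithmetic: one doubling every 18 months.
    year, count = 1975, 5000          # illustrative starting figures
    while year <= 2007:
        print(year, f"{count:,}")
        count *= 4                    # two doublings per three-year step
        year += 3

Run it and the count passes a hundred million before 2000 and the billions soon after. Compounding, not any single leap, is what gives Moore's law its force.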

Into the nanoworld and beyond

The smallest dimensions of a chip, such as the width of the connecting wires, were ten microns (10 millionths of a metre) or more 30 years ago.

Nowadays 100 nanometres (100 billionths of a metre) or less is the typical size. These electronic fragments have become smaller than viruses and a thousandth the width of a human hair. And they continue to dwindle in size, as their masters push for ever better performance to satisfy customers.

But this cannot go on for ever. If we keep driving in that direction, sooner or later we will run into trouble. Some nasty 'quantum uncertainties' will show up and the chips will not behave as they should. By about 2020, if Moore's law kept holding, the circuit elements would be as small as atoms. Long before then, the dabs of electric charge that store information and drive processing will start to leak away.

And it is probably impossible to manufacture circuit elements so small anyway. Even the current generations of computer chips are straining the ingenuity of the chip engineers, and the costs of building manufacturing plants have become astronomical.

The good news is that there is a way out of this dead end. We can drop straight into the nanoworld, rather than creeping down towards it step by step (Box 1: Into the nanoworld). In that way we can take advantage of the peculiarities of quantum physics, rather than having to work our way around them. That path leads to the quantum computer.

Bits and qubits

To see how it might happen, we should compare the vision of a quantum computer with the computer that sits on your desk. In that machine, and in the biggest supercomputer in the world, information is stored very simply as strings of numbers. In fact, there are only two sorts of numbers, 0s and 1s. These are known as 'bits', short for 'binary digits'.

Clever coding now lets us reduce all sorts of information, such as ordinary numbers, words, sounds, pictures and movies to such strings of numbers, and process that information by adding, subtracting and comparing the number chains. Each bit can be stored, permanently or temporarily, in a tiny box, as a dab of electric charge in a capacitor or a tiny fragment of magnetism on a circle of magnetic film. Something in the box means a 1; an empty box represents a 0.
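
Python can show this reduction at work. Every character already has an agreed code number, and that number is itself just a short string of bits:

    # Reducing a word to the 0s and 1s a computer actually stores.
    for ch in "nano":
        code = ord(ch)                # the agreed number for this character
        bits = format(code, "08b")    # that number written as eight bits
        print(ch, code, bits)
    # n 110 01101110
    # a  97 01100001   ...and so on

Sounds, pictures and movies go the same way, only with far longer strings.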

A quantum computer does much the same thing, but it uses nano-sized particles, such as atoms, as the storage boxes. These are called quantum bits, or qubits. For example, an atom spinning one way could represent a 1, and spinning the other way a 0.

Quantum weirdness

The difference between our everyday computer and a quantum computer is that the nanoworld lets a qubit be both a 0 and a 1 at the same time. This peculiar behaviour is called quantum superposition. There is a certain probability that the qubit holds a 1, and another probability that it holds a 0. You have to interrogate the qubit to find out, but doing so disturbs it and stops it from taking any further part in the computation. That disturbance is another unexpected rule that quantum physics imposes.
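
There is nothing quantum inside an ordinary PC, but the bookkeeping for one qubit can be mimicked with two ordinary numbers. In the sketch below the two 'amplitudes' are made-up values for illustration; squaring each gives the probability of reading a 0 or a 1, and the act of measuring forces a single definite answer:

    import random

    # A qubit in superposition: one amplitude for 0, another for 1.
    amp0, amp1 = 0.6, 0.8             # illustrative: 0.6**2 + 0.8**2 = 1
    prob0 = amp0 ** 2                 # 36% chance of reading a 0
    prob1 = amp1 ** 2                 # 64% chance of reading a 1

    def measure():
        # Interrogating the qubit forces one outcome...
        outcome = 0 if random.random() < prob0 else 1
        # ...and the superposition is gone for good.
        return outcome

    print(measure())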

Now we get the real payout from all this odd behaviour. As a consequence of superposition, you need a huge amount of information to describe the state of even a small number of qubits. That information doubles for each qubit you add to the assembly. Just 50 qubits would demand about a million billion numbers - 2 multiplied by itself 50 times - to describe their collective contents. Put another way, a collection of 50 qubits could store a vast amount of information, far more than any everyday computer memory can hold.
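
The doubling is simple to verify: each extra qubit doubles the count of numbers needed. Assuming, purely for the sake of an estimate, that each number takes 16 bytes to store:

    # How many numbers describe n qubits, and roughly how much memory.
    for n in (10, 30, 50):
        numbers = 2 ** n
        print(n, "qubits:", f"{numbers:,} numbers,", f"{numbers * 16:,} bytes")
    # 50 qubits: 1,125,899,906,842,624 numbers - about 18 million gigabytes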

It does not stop there. In an everyday computer, the program has to operate on its stored information in sequence, one bit at a time. Even so, that can allow billions of calculations every second. A quantum computer can process all the information in all the qubits simultaneously - geeks call this parallel processing. Imagine having millions of desktop computers running side by side rather than just one, all working on the same problem. Yet a quantum computer will need only one processor.
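
A classical machine can only fake this, and the faking shows why it loses. The sketch below (ordinary Python, not a quantum program) spreads three qubits over all eight of their possible states, then applies one operation; a quantum computer would do this in a single step, while the simulation must grind through every entry:

    # Classical simulation of 3 qubits: 2**3 = 8 amplitudes, all live at once.
    n = 3
    amps = [1 / (2 ** n) ** 0.5] * (2 ** n)   # equal superposition of 8 states

    # One operation acts on every possible input: flip the sign wherever
    # the state holds an odd number of 1s. The simulation needs 2**n steps.
    amps = [-a if bin(i).count("1") % 2 else a for i, a in enumerate(amps)]

    for i, a in enumerate(amps):
        print(format(i, "03b"), round(a, 3))

Add a qubit and the simulation's work doubles; the quantum computer's single step stays a single step.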

The consequence of all this is devastating processing speed compared with the 'classic' computers of today: the hare against the tortoise, many times over. A really tough problem like the big-number factorisation highlighted above, which would take a supercomputer years or decades to crack, could be crunched - at least in theory - by a quantum computer in very little time at all.

The quantum computer era is more than a glow on the horizon, but the dawn is still some distance off (Box 2: What are we up to here in Australia?). But quantum computers will start to affect our lives one day, perhaps a decade from now (Box 3: So when will we have one?). There is no fundamental reason why we should not have them, though we will need to think about their uses before that day arrives.

Posted August 2007.