Welcome to the homepage of
Asaf Pe'er

## How do quantum computers take advantage of quantum mechanics?

Before we look at quantum computers, recall how regular computers work: by manipulating strings of 0s and 1s. Whatever task you want them to perform, whether it's calculating a sum or booking a holiday, the underlying process is always the same: an instance of the task is translated into a string of 0s and 1s (the input), which is then processed by an algorithm. A new string of 0s and 1s pops out at the end (the output), which encodes the result. However clever an algorithm might appear, all it ever does is manipulate strings of bits - where each bit is either a 0 or a 1. At the machine level, this either/or dichotomy is represented by electrical circuits which can be either closed, in which case a current flows, or open, in which case no current flows.

One thing never changes across all computers and all processors: every bit of information is either "0" or "1". There are no alternatives. Well, not in classical computers. The quantum world, though, is different. Quantum computing is based on the fact that, in the microscopic world, things are not as clear-cut as in our everyday macroscopic experience. The first crucial difference between the quantum and the "big" (macroscopic) world is that tiny particles, such as electrons or photons, can simultaneously take on states that we would normally deem mutually exclusive. They can be in several places at once, for example, and a photon can simultaneously exhibit two kinds of polarisation. Thus, if we label one polarisation state "0" and the other "1", a photon can be in both states simultaneously! This is in sharp contrast to the classical case, in which a bit is either in state 0 or in state 1, but never both.

But this is not the end of the story. The second point on which quantum mechanics differs dramatically from the macroscopic world (and a very confusing one indeed) is that when we try to measure which state the particle, or photon, is really in, we force it to choose one of the states! This is known as the "collapse of the wave function". So what does it mean that the photon is in "both" states (a superposition) before the measurement? It means that we cannot know for sure which state it will be found in when we measure it. The most we can say is that there is some probability p that it will be in state 0 (and therefore probability 1-p that it will be in state 1), but no more than that, prior to actually conducting the measurement.
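The statistics of such measurements can be sketched with a toy classical simulation. This only mimics the outcome probabilities, not the underlying quantum dynamics; the probability value, trial count, and function name are arbitrary illustrative choices:

```python
import random

def measure_many(p0, trials=100_000, seed=0):
    """Simulate repeated measurements of identically prepared qubits,
    each of which collapses to state 0 with probability p0 and to
    state 1 with probability 1 - p0. Returns the observed frequency
    of outcome 0."""
    rng = random.Random(seed)
    zeros = sum(1 for _ in range(trials) if rng.random() < p0)
    return zeros / trials

# The observed frequency of outcome 0 approaches p0 = 0.25, but any
# single measurement yields only a 0 or a 1 - never "both".
freq = measure_many(0.25)
```

No single run reveals p directly; only the statistics over many repeated preparations and measurements do, which is exactly the limitation described above.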

Quantum computers use this fact to redefine the basic unit of information. Instead of working with discrete bits of 0 and 1, they work with "qubits", which can take on the value 0, or 1, or both simultaneously. This means that every operation performed on a qubit acts simultaneously on both its 0 and 1 components.
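In the standard formalism, a qubit's state is a pair of complex amplitudes whose squared magnitudes give the measurement probabilities. A minimal sketch (the specific amplitude values are an illustrative choice):

```python
import math

# A qubit is described by two amplitudes (a, b) with |a|^2 + |b|^2 = 1.
# |a|^2 is the probability of measuring 0, |b|^2 that of measuring 1.
a, b = 1 / math.sqrt(2), 1 / math.sqrt(2)   # equal superposition of 0 and 1
p0, p1 = abs(a) ** 2, abs(b) ** 2           # both equal 0.5
```

An operation on the qubit transforms the pair (a, b) as a whole, which is the sense in which it acts on the 0 and 1 components at once.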

This by itself does not provide much advantage over regular computers. However, here comes a third phenomenon that is unique to the quantum world: the different particles (that make up the different qubits) are in general not independent of each other, but are instead entangled - even if they are separated in space. This means that if you measure one particle in an entangled system, the result automatically tells you what the outcome of measuring the other qubits will be. Consequently, in a quantum computer one cannot simply treat each qubit individually, but must also consider all the correlations between the different qubits. For n qubits, there are 2^n such correlations.

Consider, for example, 3 qubits, each in a superposition of the 0 and 1 states. There are a total of 2^3 = 8 possible states (000, 001, 010, 011, 100, 101, 110, 111). Due to the entanglement, if we now perform any operation (= computation), it will affect all qubits simultaneously - namely, all 8 possible states! This seems to save a huge amount of computational time.
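This can be made concrete with a small state-vector simulation: the 3-qubit state is a vector of 2^3 = 8 amplitudes, and even a gate applied to a single qubit updates the whole vector. A hand-rolled sketch; the `apply_h` helper and the use of Hadamard gates to build the superposition are illustrative choices, not something from the text above:

```python
import math

def apply_h(state, qubit):
    """Apply a Hadamard gate to one qubit of an n-qubit state vector.
    The gate mixes every pair of amplitudes whose indices differ in
    that qubit's bit, so it touches all 2**n entries at once."""
    mask = 1 << qubit
    out = list(state)
    s = 1 / math.sqrt(2)
    for i in range(len(state)):
        if i & mask == 0:
            a, b = state[i], state[i | mask]
            out[i] = s * (a + b)          # amplitude of the ...0... index
            out[i | mask] = s * (a - b)   # amplitude of the ...1... index
    return out

n = 3
state = [0.0] * (2 ** n)
state[0] = 1.0                 # start in the definite state 000
for q in range(n):
    state = apply_h(state, q)  # one gate per qubit
# Now all 8 amplitudes equal 1/sqrt(8): an equal superposition of 000..111.
```

Note that the classical simulation pays for this explicitly: it stores and updates all 2^n amplitudes, which is exactly the bookkeeping a quantum computer gets "for free" from physics.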

Unfortunately, life is not so easy. While the laws of quantum mechanics indeed enable us to make a calculation simultaneously on all qubits, the same laws prevent us from simply reading off the result... The result of the computation is another quantum state, and when we try to measure it, we can only do so probabilistically. For example, suppose we have a computer with 20 qubits. A register of 20 qubits is equivalent to 2^20 ≈ 1 million values, and each operation we make is carried out simultaneously on all of these values. Suppose we now run a calculation that finds the correct answer to a problem and marks it with a 1. After the algorithm runs, there are about 1 million values, one of which is 1, but we don't know which one. If we measure, the chance that we get the correct answer is 1 in a million... So if we want to be certain of the right answer, we would need to repeat the calculation about 1 million times. We don't seem to have gained much.
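The cost of this naive read-out can be sketched by treating each measurement as a uniform sample over the N basis states: the marked value is hit with probability 1/N, so on average about N repetitions are needed. A toy sketch, using a small 10-qubit register so it runs quickly; the function name and random seed are arbitrary:

```python
import random

def measurements_until_found(n_states, marked, rng):
    """Sample uniformly from n_states outcomes - like measuring the
    uniform superposition - until the marked state is observed;
    return how many measurements that took."""
    tries = 0
    while True:
        tries += 1
        if rng.randrange(n_states) == marked:
            return tries

rng = random.Random(0)
N = 2 ** 10                    # 10 qubits -> 1024 states, kept small
runs = [measurements_until_found(N, 0, rng) for _ in range(2000)]
avg = sum(runs) / len(runs)    # averages out close to N
```

The number of tries follows a geometric distribution with mean N, so for 20 qubits the expected ~10^6 repetitions cancel the apparent parallel speed-up - unless the algorithm is designed to concentrate probability on the answer.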

Indeed, a major challenge of quantum computing today, and one of the key reasons why quantum computers are still not in use despite the fact that the idea is not new (it dates back to the early 1980s), is the need to find algorithms that can actually exploit these advantages.

Since the 1990s, several such algorithms have been developed. Examples are Shor's algorithm, Grover's algorithm and the Deutsch-Jozsa algorithm (you can read about one of them here: https://plus.maths.org/content/really-how-do-quantum-computers-work), which have been proven to enable calculations on time scales much faster than can be achieved with regular computers - though, again, only for a particular set of problems. However, there remain very large technical obstacles, mainly in producing reliable qubits and in proper ways of reading from and writing to them, which have so far prevented quantum computers from entering wide use.