Finally, we have the tools to understand quantum error correcting codes. We know what types of errors happen, and how to measure the parity of groups of qubits without destroying the superposition and entanglement, at least for certain carefully constructed quantum states. So how do we design a complete code?
The triple redundancy code
In the prior article, we saw the triple redundancy code that encodes \(|0\rangle\) into the three-qubit state \(|000\rangle\) and \(|1\rangle\) into \(|111\rangle\). Is this a complete code?
Let’s examine its error protection properties: if a single bit is flipped, we have a state such as \(|101\rangle\), and the parity extraction will find parity 1 on the first pair of qubits and parity 1 on the last pair. Since a good state has parity 0 on both pairs, we know that each pair contains a bit flip error, and because the middle qubit is the one shared by both pairs, it must be the qubit in error. We can correct it simply by flipping the middle qubit again.
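This lookup from parities to the faulty qubit can be sketched classically. The snippet below (a minimal sketch; the function names `syndrome` and `correct` are just illustrative) treats the codeword as three classical bits and decodes the two pair parities:

```python
# Syndrome decoding for the three-qubit bit-flip code (classical sketch).
# Parities are taken on the pairs (0,1) and (1,2); a valid codeword
# (000 or 111) gives parity 0 on both pairs.

def syndrome(bits):
    """Return the two pair parities of a 3-bit word."""
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def correct(bits):
    """Flip the single bit implicated by the syndrome, if any."""
    which = {(1, 0): 0,   # only pair (0,1) odd  -> first bit flipped
             (1, 1): 1,   # both pairs odd       -> middle bit flipped
             (0, 1): 2}   # only pair (1,2) odd  -> last bit flipped
    s = syndrome(bits)
    b = list(bits)
    if s in which:
        b[which[s]] ^= 1
    return tuple(b)

print(correct((1, 0, 1)))  # (1, 1, 1): the middle bit is restored
```

Note that syndrome \((1,1)\) uniquely identifies the middle qubit precisely because it sits in both measured pairs.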
That takes care of what we call \(X\) errors. However, \(Z\) errors, or phase flips, are problematic. A \(|000\rangle\) state would be unchanged, but a \(|111\rangle\) state becomes \(-|111\rangle\), which appears innocuous but in practice destroys our ability to correctly create the interference on which quantum algorithms depend.
The parity extraction operations will see a \(-|111\rangle\) state exactly like a \(|111\rangle\) state. (Or, for that matter, a \(-|101\rangle\) state exactly like a \(|101\rangle\) state.) Phase flip errors, therefore, will go undetected.
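We can see this blindness directly with state vectors. In the minimal numpy sketch below, the flipped sign changes the amplitudes but not the measurement probabilities, which depend only on \(|\text{amplitude}|^2\):

```python
import numpy as np

# |111> lives at index 7 of the 8-dimensional three-qubit state space.
ket111 = np.zeros(8)
ket111[7] = 1.0

# A phase flip on any one qubit of |111> produces -|111>.
flipped = -ket111

# Measurement statistics in the computational (Z) basis depend only on
# the squared magnitudes of the amplitudes, so the two states are
# indistinguishable to our parity checks.
probs_good = np.abs(ket111) ** 2
probs_bad = np.abs(flipped) ** 2
print(np.allclose(probs_good, probs_bad))  # True
```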
The phase flip error takes a \(|+\rangle\) state to a \(|-\rangle\) state, and vice versa. Normal parity is calculated along the \(Z\) axis, but we can take the parity along the \(X\) axis in a similar fashion: turn a \(|+\rangle\) state into a \(|+++\rangle\) state and a \(|-\rangle\) state into a \(|---\rangle\) state, then apply similar parity extraction methods.
Peter Shor recognized that we can use triple redundancy on each of the two axes \(X\) and \(Z\) in order to detect both phase flips and bit flips. First, encode \(|0\rangle\rightarrow |000\rangle\) and \(|1\rangle\rightarrow |111\rangle\) using two CNOT gates. Then take each of the three qubits, apply a Hadamard gate (which swaps the \(X\) and \(Z\) axes), and expand each of the three qubits into a set of three, \(|+\rangle\rightarrow |+++\rangle\) and \(|-\rangle\rightarrow |---\rangle\). Now we have nine physical qubits that are encoding one logical qubit, and the Hamming distance between the code words is 3, so we describe the code as \([[9,1,3]]\).
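Following this construction through, each of the three blocks ends up in a GHZ-like state \((|000\rangle\pm|111\rangle)/\sqrt{2}\), and the logical code words are tensor products of three such blocks. As a sketch (assuming only standard numpy; the names `ghz_plus`, `logical0`, etc. are illustrative), we can build the two 512-dimensional code words directly and confirm they are normalized and orthogonal:

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

def kron_all(states):
    """Tensor product of a list of state vectors."""
    v = states[0]
    for s in states[1:]:
        v = np.kron(v, s)
    return v

# Each three-qubit block of the Shor code is (|000> +/- |111>)/sqrt(2).
ghz_plus = (kron_all([ket0] * 3) + kron_all([ket1] * 3)) / np.sqrt(2)
ghz_minus = (kron_all([ket0] * 3) - kron_all([ket1] * 3)) / np.sqrt(2)

# The logical code words are three such blocks side by side:
# 9 qubits, a 2^9 = 512-dimensional state space.
logical0 = kron_all([ghz_plus] * 3)
logical1 = kron_all([ghz_minus] * 3)

print(np.isclose(logical0 @ logical0, 1.0))  # True: normalized
print(np.isclose(logical0 @ logical1, 0.0))  # True: orthogonal code words
```

The orthogonality comes from each block: \(\langle 000|000\rangle - \langle 111|111\rangle = 0\), so flipping the sign in even one block makes the overlap vanish.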
Since the code distance is 3, this code can correct a single bit flip, a single phase flip, or one of each. With a \(9:1\) consumption of space, it’s not a terribly efficient code, but it’s one of the simplest to understand. Beyond this already significant cost, for reasons that we won’t go into in detail, extracting the parity involves operations that can result in the propagation of errors from qubit to qubit. To keep this from happening, we use still more qubits in specially prepared states, so the total overhead is \(18:1\).
The two-layer, triple redundancy Shor code gives you the flavor of how quantum error correction works. Many different kinds of codes, most building on known classical codes in novel ways, have been developed over the last two decades, by researchers including Charles Bennett, Andrew Steane, Dave Bacon, and a long list of others. John Preskill, Todd Brun, Barbara Terhal, and others are advancing the theory of what it means for a system to be fault tolerant. There are books and conferences on the topic for specialists. We have no need to go beyond the basic ideas in this course, but one particular class of error correcting codes has generated a lot of excitement over the last fifteen years, and is worth mentioning.
Surface codes, or topological codes, build on ideas by Alexei Kitaev and his collaborators. They recognized that if you laid qubits out on the surface of a torus (a donut shape), you could store one logical qubit in the entire set. To maintain the state and check the parity bits to see if the state has developed an error, each physical qubit only has to interact with four neighbors. The code turns out to have a very high threshold, or level of errors that it can tolerate.
Of course, it’s not practical to create a large donut and put many qubits on its surface for every logical qubit we would like to have; algorithms such as Shor’s factoring algorithm take thousands of logical qubits, and we have no way of connecting those donuts to each other.
Robert Raussendorf and his collaborators figured out how to create mathematically similar arrangements with qubits laid out on a flat surface, and how to put many logical qubits into one large surface of stationary qubits, or by entangling photons into a 3-D lattice. Austin Fowler and his collaborators (including us) have extended the engineering of the 2-D code to make it practical. In the next step, when Simon Devitt and I discuss the history of error correcting codes, you will see an animation of this code.
Because the surface codes tolerate high error rates and can be laid out in this nearest neighbor fashion, many researchers (including us) believe that they are the most attractive for real systems. Over the last decade, Kae Nemoto and members of her group, especially Simon Devitt, have been among the fiercest advocates for the code and have worked on designs using single photons and NV diamond centers. Designs have also been proposed for ion traps, and IBM and Google are pursuing the code for superconducting systems.
The optical quantum computers discussed by Professor Furusawa are advancing toward using the 3-D version of the surface code, and the superconducting devices discussed by Professor Nakamura are aiming toward the 2-D surface code, which is ideally suited to qubits that can only interact with their nearest neighbors in such a layout.
© Keio University