
Diffusion and Performance

Our second major building block in Grover’s algorithm is the diffusion operator, which implements the “inversion about the mean” operation. The algorithm gives us a polynomial speedup on a broad array of problems.

Inverting About the Mean

After the marking operation discussed in the last Step, we invert the amplitude of every state about the mean (average) of the set. On the left of this figure, we show two isolated examples. Accept for the moment that the average over all of the elements of the superposition lies at the dashed line. In diffusion, the blue (upper) vector moves toward the center of the dial when it inverts about the mean – that is, its amplitude shrinks. The orange (lower) vector has the same amplitude (length) as the blue one, but because its phase points it downward, it sits below and farther from the average, so when it flips about the mean, it winds up moving away from the center of the dial – that is, its amplitude grows. (Its phase also flips, so that it points up.)
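Inversion about the mean is just the arithmetic rule a → 2μ − a applied to every amplitude, where μ is the average amplitude. A minimal numeric sketch, using illustrative (not normalized) amplitudes with one phase-flipped entry:

```python
# Inversion about the mean: each amplitude a becomes 2*mu - a.
# The amplitudes below are illustrative, not a normalized state.
amps = [0.4, 0.4, 0.4, 0.4, -0.4]   # four "blue" states, one phase-flipped "orange"
mu = sum(amps) / len(amps)          # mean amplitude (0.24 here)
inverted = [2 * mu - a for a in amps]
print(inverted)                     # blue amplitudes shrink (to ~0.08),
                                    # the flipped one grows (to ~0.88) and points up
```

Note that the positive amplitudes end up closer to zero while the phase-flipped one grows and becomes positive, exactly the behavior shown in the figure.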

The diffusion operator is effective because of the phases of the states. In the figure below, the first step is the mark operation. The solution to our problem is the state with the phase opposite to the others (orange in the figure). The possibilities that aren't marked (the blue bars) all point up. As shown in the third column of dials, the average of the bars therefore lies close to the tips of the blue bars.

For our four-possibility chess example, let's see what happens when we run the mark-diffuse sequence once. The amplitude of each losing move sequence (the blue bars in the figure below) becomes zero, and the amplitude of the winning move sequence (the orange bar) becomes one.

After these steps, we measure the two qubits and learn which one of the move sequences had been marked, with probability one. In this case, the measurement result is 10, which corresponds to the third move sequence. Recall that our marking function identifies solutions to the problem, but it does not know a priori which value or values of the input will have their phase flipped; all of them are processed at the same time, in superposition.
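The whole mark-diffuse round for the four-possibility case can be sketched in a few lines. This assumes, as in the figure, that the third move sequence (index 2) is the winning one:

```python
# One mark-diffuse round for the four-possibility chess example.
# Assumes the winning move sequence is at index 2, as in the figure.
N = 4
amps = [1 / N**0.5] * N            # uniform superposition: every amplitude is 1/2
amps[2] = -amps[2]                 # mark: flip the phase of the solution
mu = sum(amps) / N                 # mean amplitude: (0.5 + 0.5 - 0.5 + 0.5)/4 = 0.25
amps = [2 * mu - a for a in amps]  # diffuse: invert every amplitude about the mean
print(amps)                        # -> [0.0, 0.0, 1.0, 0.0]
```

After one round the solution carries all of the amplitude, so measuring the two qubits yields the marked index with probability one.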

All of this happens because of the constructive interference that causes the amplitude of our answer to grow and the destructive interference that causes the incorrect possibilities to shrink.



In our four-possibility chess example, as it happens, only one iteration of our mark-diffuse sequence is needed to raise the amplitude of the answer to one. For larger problems, the average is very close to the unmarked states. Therefore, the first few iterations don’t have much of an effect, but eventually the growth of the orange vector starts to pick up speed until it accumulates half or more of the total wave function, at which point it’s time to stop and measure it. (The exact number of iterations that it takes depends on the number of marked states versus the number of unmarked states, but a variant of Grover’s algorithm allows you to estimate that ratio, as well.)
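The slow-start-then-accelerate behavior described above is easy to see numerically. The sketch below repeats the mark-diffuse round for an illustrative larger case, N = 64 with a single marked state (placed at index 0 for convenience), and tracks the probability of measuring the answer after each round:

```python
# Track the marked state's probability over repeated mark-diffuse rounds.
# Illustrative case: N = 64 possibilities, one marked state at index 0.
N = 64
amps = [1 / N**0.5] * N
for it in range(1, 11):
    amps[0] = -amps[0]                # mark: flip the phase of the solution
    mu = sum(amps) / N
    amps = [2 * mu - a for a in amps] # diffuse: invert about the mean
    print(it, round(amps[0] ** 2, 3)) # probability of measuring the answer
```

The probability climbs slowly at first, peaks near iteration six (roughly (π/4)√64 ≈ 6), and then starts to fall again, which is why stopping at the right moment matters.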

As a general case using a classical computer, with N possibilities and only one answer, the worst case is that we have to check all N possibilities, and the average number we will check is approximately N/2. Using a quantum computer with n qubits (giving N = 2^n possibilities) running Grover's search algorithm, we can find the value we are looking for with only about √N executions of the mark-diffuse routine.
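A quick comparison of these counts, using the common estimate of about (π/4)√N mark-diffuse rounds for one marked item (the specific n values below are just illustrative):

```python
# Compare average classical checks (N/2) against the approximate number of
# Grover mark-diffuse rounds, ~(pi/4)*sqrt(N), for a single answer.
import math

for n in (10, 20, 30):
    N = 2 ** n
    classical_avg = N // 2
    grover_rounds = math.ceil(math.pi / 4 * math.sqrt(N))
    print(f"n={n}: N={N}, classical ~{classical_avg}, Grover ~{grover_rounds}")
```

For n = 20, this works out to roughly half a million classical checks versus around eight hundred quantum rounds.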


Grover's algorithm has a known application to the traveling salesman problem (TSP), which asks us to find the shortest possible route that visits every city in a list exactly once and returns to the starting city. We preset the state of our quantum register to a superposition of all of the possible permutations of routes. We can also apply this search procedure to find the best move in chess, shogi, or go: preset our state to a superposition of all legal sequences of moves, and flip the phase of any position in which we win. The computational complexity of Grover's algorithm grows more slowly than that of brute-force classical algorithms that simply try every possibility.

However, whether this results in a useful speedup depends on the problem type and size. Classical algorithms for some problems, such as examining chess positions, require exponentially more computation for each additional step you try to look ahead, so the corresponding execution of Grover's algorithm will still require an exponential amount of computation. This fact suggests that quantum computers cannot solve all problems with a usefully dramatic speedup.

Moreover, both chess and TSP have known heuristics that provide pretty good, if not perfect, solutions. Better candidates for Grover’s algorithm are those with no known structure, where our best order for testing possibilities classically would be no better than random.

It is hard to construct efficient algorithms for problems without structure, such as searching through a randomly shuffled deck of cards. Considering only the asymptotic computational complexity, Grover's search algorithm achieves a square-root speedup compared to classical algorithms. Because this applies to any brute-force search, the guaranteed worst-case improvement for an algorithm using a quantum computer is a square-root speedup. Not bad, but not the exponential speedup we might naively expect.

This article is from the free online course:

Understanding Quantum Computers

Keio University