Finally, I want to talk a little bit about the performance of Shor's algorithm. I have a plot here showing the running time of the algorithm as a function of the size of the number we want to factor. The X axis is the length of the number we want to factor, in bits: 100 bits, 1,000 bits, 10,000, 100,000; it's a logarithmic scale. The vertical axis is the amount of time it takes us to factor, and that's also a logarithmic scale: here in the middle we have one year, down at the bottom we have one second, and up at the top is about the age of the universe.
Right now there is one line on this plot, this black line: that's the number field sieve we've been talking about, the classical algorithm that does the factoring, and you can see that it rises very steeply. There is one blue point on it; that's the current record for how large a number has been factored using the classical algorithm. It's 768 bits, and you can see that from there, getting to maybe 1,000 bits might be okay, but going from 1,000 bits to 2,000 bits means going from maybe a year's worth of execution time to something approaching the lifetime of the universe. It's a big deal.
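As a rough sketch of why that curve rises so steeply (my own illustration, not from the lecture's plot): the number field sieve's heuristic running time grows subexponentially in the bit length, and even with all constant factors ignored, the relative jump in work from 1,000 to 2,000 bits dwarfs the jump from 768 to 1,000 bits.

```python
import math

def nfs_cost(bits):
    """Heuristic number field sieve cost for an n-bit number N ~ 2^bits:
    exp((64/9)^(1/3) * (ln N)^(1/3) * (ln ln N)^(2/3)).
    Constant factors are ignored; only the scaling is meaningful."""
    ln_n = bits * math.log(2)
    return math.exp((64 / 9) ** (1 / 3)
                    * ln_n ** (1 / 3)
                    * math.log(ln_n) ** (2 / 3))

# Relative work: each doubling of the key size multiplies the cost
# far more than the previous doubling did.
print(nfs_cost(1024) / nfs_cost(768))    # hundreds to thousands of times more work
print(nfs_cost(2048) / nfs_cost(1024))   # roughly a billion times more work again
```

The absolute numbers mean nothing here; it is the ratio between successive key sizes that makes the curve on the plot look like a wall.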
Well, we talked earlier about Moore's law and the advance of classical technology as well, so would that give us a big advantage? Let's say we used a thousand times as much computing power; how would that look? Well, it would look something like this: the curve is the same shape and it doesn't move very far to the right. It's still a big problem; even the exponential gain of Moore's law doesn't get us a big improvement in the overall performance of the classical factoring algorithm. So, what about Shor's algorithm? Let's put Shor's algorithm on here and compare it. Shor's algorithm, instead, would look something like this: on this log-log plot it's a straight line, which means its running time is polynomial.
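To see what that straight line on a log-log plot means, here is a minimal sketch (my own, with constants omitted) using the commonly quoted O(n^3) gate count for Shor's algorithm, which is dominated by the modular exponentiation:

```python
def shor_gates(bits):
    """Rough gate-count scaling for Shor's algorithm on an n-bit number,
    dominated by modular exponentiation: O(n^3), constants omitted."""
    return bits ** 3

# Doubling the key size only multiplies the work by 2^3 = 8 --
# a fixed ratio per doubling, i.e. a straight line on a log-log plot,
# in contrast to the classical sieve's ever-steepening curve.
print(shor_gates(2048) / shor_gates(1024))  # -> 8.0
```

The fixed ratio per doubling is exactly what "polynomial" buys you: bigger keys cost more, but predictably and manageably more.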
Now, you'll notice, though, that I took away the time on the vertical axis. How long does it actually take? You may have heard that a quantum computer can factor a number in seconds; that's not actually true. The performance depends on several factors. The first, of course, is the clock speed of the quantum computer itself: how fast can we execute each individual operation? So let's put in several curves here. Let's say you can execute only one operation per second. Up here, factoring a large number can still take thousands of years; a thousand times faster than that, it still might take a year.
At a million times faster than that, one million gates per second, now we are down into the seconds, and that's some really good performance; gigahertz would be faster still. So the clock speed has a big impact on the performance. What about the architecture? The architecture of the computer also has a big impact. You can see here three different lines, and those represent differences in the internal structure of the quantum computer itself, as well as how well we match the modular exponentiation algorithm to the machine.
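The clock-speed argument can be sketched in a few lines (again my own illustration, with constants omitted, so the absolute times are only order-of-magnitude suggestive): the wall-clock time is simply the total gate count divided by how many gates per second the machine executes.

```python
def factoring_time_seconds(bits, gates_per_second):
    """Wall-clock estimate: total gates (~n^3, constants and
    architecture-dependent factors omitted) divided by clock speed."""
    return bits ** 3 / gates_per_second

YEAR = 365 * 24 * 3600
for hz, label in [(1, "1 Hz"), (1e3, "1 kHz"), (1e6, "1 MHz"), (1e9, "1 GHz")]:
    t = factoring_time_seconds(2048, hz)
    # Each factor of 1000 in clock speed moves the whole curve
    # down by three decades on the log-scale time axis.
    print(f"{label:>6}: {t:.2e} seconds ({t / YEAR:.2e} years)")
```

The point of the sketch is only the shape of the trade-off: the curve for a faster clock is the same line shifted straight down, which is why clock speed alone can move you from geological timescales to something practical.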
And so you can see we go from a line like this, to a line like that, to a line like that. Ultimately we have three big factors that impact the performance: the clock speed of the system itself, our ability to build a good algorithm for the modular exponentiation part, and the architecture of the quantum computer itself. So when somebody tells you that a quantum computer can factor a number in seconds, ask them those three questions: how fast is your computer, what's its architecture, and what algorithm are you using to actually execute the arithmetic?