This content is taken from Keio University's online course, Understanding Quantum Computers. Join the course to learn more.

Not Supercomputing, Not Big Data

Quantum computers aren’t suitable for working on problems that involve enormous amounts of data, like climate simulations or what is called “big data”. Let us take a brief look at why this is so. The contrast should help illuminate the kinds of problems for which quantum computers are most likely to be useful.

Data-Intensive Supercomputing

A “supercomputer” is a computer system built for computations that an ordinary computer like your laptop could never handle. Supercomputers generally have thousands of times as much memory and disk space as a laptop or desktop computer. These days, they also typically have thousands or tens of thousands of separate CPUs (central processing units) – the “brains” of the computer.

Of course, how big and how fast a computer has to be in order to count as a “super” computer varies over time, as the technology gets better. One way to think of it is that the several thousand fastest computer systems in the world are supercomputers, and everything else isn’t. Supercomputers also typically require lots of power – hundreds of kilowatts or even megawatts – and cost millions or tens of millions of dollars.

Supercomputers do many important computations: they simulate weather, climate, and earthquakes, and help interpret seismic data for oil and gas exploration. All of these are very data-intensive applications; sensors produce many terabytes of data that must be input into the computer.

Supercomputers help design airplanes and spacecraft by simulating fluid flow (air moving around wings, for example). In this case, it might start with very little input data (just the shape of the wing and a few facts about the air), but create enormous amounts of data during the computation which must be stored and possibly compared to experimental data.

In all of those cases, lots of data must be fed into the computer or stored as it comes out. It must be transferred from disk, or from another computer via a network.

What is Big Data?

One of the most popular topics in computing these days is “big data”. Big data is, as you might expect, large quantities of data. Usually, it refers to data about people: what they have bought, where they have been, what websites they have visited. It is processed in search of patterns that can help find products you are interested in, such as music you might like. Like real-world human activity, it’s often data that’s not well organized, so finding patterns involves a lot of looking.

Volatile and Non-volatile Storage

Computers have two types of storage for data: volatile and non-volatile. Volatile memory loses its data when you turn the power off. Your computer’s main memory is volatile; it uses a technology known as RAM (random-access memory). Non-volatile data stays intact even when you turn the power off; flash memory (as in USB thumb drives) and hard disks are non-volatile. Roughly speaking, volatile memory holds the data that your computer is working on now, and non-volatile storage holds previously-created files that you are keeping for future use.

Quantum computers, at least for the moment, have only volatile memory; indeed, as we’ll see when we talk about quantum computer technology and architecture, quantum memories are exceedingly fragile. There is no “quantum hard disk”, at least not yet. Moreover, quantum data is, in a manner of speaking, disposable; in most cases, the state of the data is altered so that the data is effectively consumed during the processing, so it’s not really possible to store and reuse quantum data the same way that we do classical data.
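The idea that quantum data is consumed during processing can be illustrated with a toy classical simulation. This is a sketch for intuition only: the two-amplitude model and the `measure` function are my own illustrative constructions, not real quantum hardware or any particular quantum library.

```python
import random

# Toy model of a single qubit as a pair of amplitudes (alpha, beta)
# for the basis states |0> and |1>. Measuring yields 0 or 1 and
# collapses the state: the original superposition is destroyed.
def measure(state):
    alpha, beta = state
    p0 = abs(alpha) ** 2              # probability of reading 0
    outcome = 0 if random.random() < p0 else 1
    collapsed = (1.0, 0.0) if outcome == 0 else (0.0, 1.0)
    return outcome, collapsed

qubit = (2 ** -0.5, 2 ** -0.5)        # equal superposition of |0> and |1>
outcome, qubit = measure(qubit)
# After one read, 'qubit' holds only |0> or |1>; the superposition
# cannot be recovered, so the quantum data was consumed by the readout.
```

This destructive readout is why quantum data cannot simply be stored, copied, and reused the way a classical file can.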

Lots of Classical Data, but Little Quantum Data

Your laptop might have several gigabytes (\(2^{30}\) or about \(10^{9}\) bytes, depending on exactly how you count) of memory and a terabyte (\(10^{12}\) or about \(2^{40}\) bytes) or more of disk. (Memory is usually counted in powers of two, and disk in powers of ten; the two differ by several percent.) A supercomputer might have many terabytes of memory and many petabytes (\(10^{15}\) bytes) of disk.
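The difference between the binary and decimal ways of counting can be checked with a few lines of arithmetic (a sketch; the variable names are mine):

```python
# Binary (power-of-two) vs decimal (power-of-ten) storage units.
gigabyte_decimal = 10 ** 9    # 1 GB, as disk capacity is usually counted
gigabyte_binary = 2 ** 30     # 1 GiB, as memory is usually counted
terabyte_decimal = 10 ** 12
terabyte_binary = 2 ** 40

# The binary units are several percent larger than the decimal ones.
print(gigabyte_binary / gigabyte_decimal)   # about 1.074, i.e. ~7% larger
print(terabyte_binary / terabyte_decimal)   # about 1.100, i.e. ~10% larger
```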

As of this writing, quantum computers are still small – from a few quantum bits up to a few tens – and it will be many years before they have terabyte-sized quantum memories. Thus, in the short run, it is imperative to focus on quantum algorithms that process only a few qubits at a time.

Inputting and Outputting Data

Supercomputers also have to move that data in and out – what we refer to as input/output. They can do this at tremendous rates; well-balanced systems are designed to read entire datasets as fast as they can process them.
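A back-of-envelope calculation gives a feel for what “tremendous rates” means here. The dataset size and I/O rate below are illustrative assumptions of my own, not measurements of any particular system:

```python
# Back-of-envelope: time to load a dataset at a given aggregate I/O rate.
petabyte = 10 ** 15           # dataset size in bytes (assumed)
io_rate = 100 * 10 ** 9       # 100 GB/s aggregate I/O (assumed figure
                              # for a large, well-balanced supercomputer)

seconds = petabyte / io_rate
print(seconds)                # 10000.0 seconds, a bit under 3 hours
```

Even at such rates, loading a petabyte takes hours, which is why balanced I/O matters as much as raw compute speed.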

One of the weaknesses of many types of quantum computer technology is that it is very slow to input data and read it out when you’re done, compared to classical computers. For some technologies, this is inherent in the physics of the devices; for others, it’s a matter of time and engineering before we can begin to move data at high rates.

This is an important caveat to our recent discussion of machine learning algorithms, but we will see one way of working around this in certain circumstances, using a hybrid quantum-classical technology.

More Attractive Supercomputing Applications

By now, you may have realized that quantum computers aren’t a good fit for problems involving lots of data, which is a large fraction of the use of classical supercomputers. So are there supercomputing applications where quantum computers will be directly competitive?

In fact, most of the applications we have already covered for quantum computers – factoring, quantum chemistry, machine learning – are common applications of classical supercomputers. However, they are all difficult problems for classical computers. Thus, we can say that quantum computers will cover some of the same ground that classical supercomputers do, but will be more complementary than competitive.
