
This content is taken from Keio University's online course, Understanding Quantum Computers.

Not Supercomputing, Not Big Data

Quantum computers aren’t suitable for working on problems that involve enormous amounts of data, like climate simulations or what is called “big data”. Let us take a brief look at why this is so. The contrast should help illuminate problems for which they are most likely to be useful.

Data-Intensive Supercomputing

A “supercomputer” is a computer system that performs computations far beyond the reach of an ordinary computer like your laptop. Supercomputers generally have thousands of times as much memory and disk space as a laptop or desktop computer. These days, they also typically have thousands or tens of thousands of separate CPUs, or central processing units – the “brains” of the computer.

Of course, how big and how fast a computer has to be in order to count as a “super” computer varies over time, as the technology gets better. One way to think of it is that the several thousand fastest computer systems in the world are supercomputers, and everything else isn’t. Supercomputers also typically require lots of power – hundreds of kilowatts or even megawatts – and cost millions or tens of millions of dollars.

Supercomputers do many important computations: they simulate weather, climate, and earthquakes, and help interpret seismic data for oil and gas exploration. All of these are very data-intensive applications; sensors produce many terabytes of data that must be input into the computer.

Supercomputers help design airplanes and spacecraft by simulating fluid flow (air moving around wings, for example). In this case, the simulation might start with very little input data (just the shape of the wing and a few facts about the air), but create enormous amounts of data during the computation, which must be stored and possibly compared to experimental data.

In all of those cases, lots of data must be fed into the computer or stored as it comes out. It must be transferred from disk, or from another computer via a network.

What is Big Data?

One of the most popular topics in computing these days is “big data”. Big data is, as you might expect, large quantities of data. Usually, it refers to data about people: what they have bought, where they have been, what websites they have visited. It is processed in search of patterns that can help you find products you are interested in, such as music you might like. Like real-world human activity, it is often data that is not well organized, so finding patterns involves a lot of searching.

Volatile and Non-volatile Storage

Computers have two types of storage for data: volatile and non-volatile. Volatile memory loses the data when you turn the power off. Your computer’s main memory is volatile, a type of technology known as RAM. Non-volatile data stays intact even when you turn the power off; flash memory (as in USB thumb drives) or hard disks are non-volatile. Roughly speaking, the volatile memory holds data that your computer is working on now, and non-volatile storage holds previously-created files that you are keeping for future use.
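The distinction can be illustrated with a minimal Python sketch (the filename here is arbitrary, purely for illustration): data held in a variable lives in volatile RAM and disappears when the program exits, while data written to a file on disk or flash survives a restart.

```python
# Volatile storage: this list lives in the process's RAM and is lost
# when the program exits or the machine loses power.
working_data = [1, 2, 3]

# Non-volatile storage: data written to disk survives a restart.
with open("saved_data.txt", "w") as f:
    f.write(",".join(str(x) for x in working_data))

# A later run (or a different program) can read the file back.
with open("saved_data.txt") as f:
    restored = [int(x) for x in f.read().split(",")]

print(restored)  # [1, 2, 3]
```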

Quantum computers, at least for the moment, have only volatile memory; indeed, as we’ll see when we talk about quantum computer technology and architecture, quantum memories are exceedingly fragile. There is no “quantum hard disk”, at least not yet. Moreover, quantum data is, in a manner of speaking, disposable; in most cases, the state of the data is altered so that the data is effectively consumed during the processing, so it’s not really possible to store and reuse quantum data the same way that we do classical data.
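The “consumed during processing” point can be sketched with a toy NumPy simulation of a single qubit. This is only an illustration of the idea, not how a real quantum device is programmed: measuring the state collapses it to a definite outcome, and the original amplitudes cannot be recovered afterwards.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# A toy single-qubit state: amplitudes for |0> and |1>.
# Here, an equal superposition (|0> + |1>) / sqrt(2).
state = np.array([1.0, 1.0]) / np.sqrt(2)

def measure(state):
    """Measurement collapses the state: afterwards it records only the
    outcome, and the original amplitudes are gone for good."""
    p0 = abs(state[0]) ** 2              # probability of outcome 0
    outcome = 0 if rng.random() < p0 else 1
    collapsed = np.zeros(2)
    collapsed[outcome] = 1.0             # state is now purely |0> or |1>
    return outcome, collapsed

outcome, state = measure(state)
# The superposition has been consumed; re-measuring the collapsed state
# tells us nothing more about the amplitudes we started with.
```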

Lots of Classical Data, but Little Quantum Data

Your laptop might have several gigabytes (\(2^{30}\) or about \(10^{9}\) bytes, depending on exactly how you count) of memory and a terabyte (\(10^{12}\) or about \(2^{40}\)) or more of disk. (Memory is usually counted in powers of two, and disk in powers of ten; they are several percent different.) A supercomputer might have many terabytes of memory and many petabytes (\(10^{15}\) bytes) of disk.
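The “several percent” gap between the two counting conventions is easy to check with a couple of lines of arithmetic:

```python
# Memory sizes are conventionally counted in powers of two,
# disk sizes in powers of ten.
gibibyte = 2**30   # a "gigabyte" of memory
gigabyte = 10**9   # a "gigabyte" of disk
print(gibibyte / gigabyte)   # ~1.074, about 7% larger

tebibyte = 2**40   # a "terabyte" of memory
terabyte = 10**12  # a "terabyte" of disk
print(tebibyte / terabyte)   # ~1.100, about 10% larger
```

Note that the gap widens with each step up in scale, since each power-of-two unit is another factor of 1.024 larger than its power-of-ten counterpart.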

As of this writing, quantum computers are still small – ranging from a few quantum bits up to a few tens; it will be many years before they have terabyte-sized quantum memories. Thus, in the short run, it is imperative to focus on quantum algorithms that process only a few qubits at a time.

Inputting and Outputting Data

Supercomputers also have to move that data in and out – what we refer to as input/output. They can do this at tremendous rates; well-balanced systems are designed to read entire datasets as fast as they can process them.
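To get a feel for the scale involved, here is a back-of-envelope calculation. The 1 TB/s sustained bandwidth figure is an illustrative assumption, not the specification of any particular machine:

```python
# Rough time to read a large dataset at a sustained I/O rate.
dataset_bytes = 10 * 10**15   # 10 petabytes
rate_bytes_per_s = 10**12     # assumed: 1 TB/s sustained read bandwidth

seconds = dataset_bytes / rate_bytes_per_s
print(f"{seconds:.0f} s, about {seconds / 3600:.1f} hours")
```

Even at that assumed rate, a 10-petabyte dataset takes hours just to read; a machine whose input channel is orders of magnitude slower is at a severe disadvantage on such problems.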

One of the weaknesses of many types of quantum computer technology is that it is very slow to input data and read it out when you’re done, compared to classical computers. For some technologies, this is inherent in the physics of the devices; for others, it’s a matter of time and engineering before we can begin to move data at high rates.

This is an important caveat to our recent discussion of machine learning algorithms, but we will see one way of working around this in certain circumstances, using a hybrid quantum-classical technology.

More Attractive Supercomputing Applications

By now, you may have realized that quantum computers aren’t a good fit for problems involving lots of data, which accounts for a large fraction of classical supercomputer use. So are there supercomputing applications where quantum computers will be directly competitive?

In fact, most of the applications we have already covered for quantum computers – factoring, quantum chemistry, machine learning – are common applications of classical supercomputers. However, they are all difficult problems for classical computers. Thus, we can say that quantum computers will cover some of the same ground that classical supercomputers do, but will be more complementary than competitive.




