In 1964, Control Data Corporation built the world's first supercomputer, the CDC 6600. It ran at 10 MHz and could perform up to three million floating point operations per second.
Last year, Oak Ridge National Laboratory unveiled "Summit," a supercomputer that can perform two hundred quadrillion floating point operations per second, making it roughly 70 billion times as fast as the old CDC 6600. By that measure, Summit is more of a leap from the CDC 6600 than the CDC 6600 is from an abacus.
However, it is interesting to look at just how Summit got to be so powerful. Yes, its processors are more powerful than those of the old CDC 6600; time marches on and technology improves. But there is more to it than that. Summit contains roughly 37,000 separate processors. About 28,000 of these are graphics processors made by NVIDIA (the same company that makes the graphics processors and adapters in many home and office computers, including my own), and about 9,000 are conventional processors made by IBM. Summit gets its power from massively parallel processing: dividing a task among many processors, each dealing only with a small portion of the problem and with how that portion connects to the neighboring portions that other processors handle.
That is what makes Summit more powerful than any single-processor system can ever hope to be.
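To make the divide-and-combine idea concrete, here is a minimal sketch (my own illustration, with nothing to do with Summit's actual software) that splits one big sum across a pool of workers, each handling only its own chunk:

```python
# Illustrative sketch only: divide one big task into chunks, let each
# worker handle its own chunk, then combine the partial results.
# (A real supercomputer spreads the chunks across thousands of separate
# processors; a thread pool just demonstrates the same structure.)
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # Each worker sees only its own small piece of the problem.
    return sum(chunk)

def parallel_sum(numbers, workers=4):
    size = max(1, len(numbers) // workers)
    chunks = [numbers[i:i + size] for i in range(0, len(numbers), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

print(parallel_sum(list(range(1_000_000))))  # same answer as the serial sum
```

No worker ever needs the whole list; each produces a partial answer, and only the small partial results are combined at the end.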
This is much like how an economy works in a free market, with prices allowed to move by supply and demand. In a centrally planned command economy of whatever flavor (socialist, communist, fascist, Nazi, whatever), a single decision maker, or a small number of them, the "processors," must make all the myriad decisions involved in allocating scarce resources that have alternative uses: labor, raw materials, land, time.
In the free market, however, each individual need only worry about the specific resources under their own direct control: their own time and labor, the raw materials they hold, their particular skills, and so on. They have a small part of the total problem and can work on maximizing their own part without knowing, or needing to know, the overall problem. Information about other parts of the problem is carried through prices. If desire for a certain product increases, people bid up the price, which signals others to provide more of it. If desire drops, people aren't willing to pay as much, and either the price falls or the amount sold at the current price falls. If something becomes more difficult to produce or obtain, the price rises, and people either buy less, divert resources from elsewhere to produce or obtain more, or some combination of the two.
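That price-signal mechanism can be sketched as a toy simulation (the curves and numbers here are entirely my own hypothetical example): quantity demanded falls as the price rises, quantity supplied rises with the price, and excess demand bids the price up until the two balance.

```python
# Toy price-as-signal sketch with made-up supply and demand curves.
# No planner computes the equilibrium; the price simply moves in
# response to shortages and surpluses until they disappear.

def demanded(price):
    return max(0.0, 100.0 - 2.0 * price)   # buyers want less at higher prices

def supplied(price):
    return 3.0 * price                      # producers offer more at higher prices

def adjust_price(price=10.0, step=0.05, rounds=200):
    for _ in range(rounds):
        shortage = demanded(price) - supplied(price)
        price += step * shortage            # excess demand bids the price up
    return price

# Analytically, 100 - 2p = 3p gives an equilibrium price of 20.
print(round(adjust_price(), 2))  # prints 20.0
```

Each buyer and seller here responds only to the current price, yet the system settles at the price where supply meets demand, the point a central planner would have to compute for every good at once.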
It's a massively parallel processing operation, only instead of a few thousand or even a few tens of thousands of processors, it's millions or even billions of individuals, each bringing to bear far more knowledge, far more understanding, far more "human computing power" than any small group of "central planners," no matter how smart or how well educated, can ever hope to possess.
There is a small handful of things, as I have discussed elsewhere, that this massively parallel processing cannot handle well: problems that cannot effectively be divided (such as national defense) and problems where the division is unclear (external costs and benefits). But for the vast majority it works, and works well, to provide the most value as determined by society as a whole.
It's why the free market will always be more efficient than a centrally planned economy.