Radoslav “Rado” Danilak has more than 25 years of industry experience and holds over 100 patents in the design of state-of-the-art processing systems.
Depending on whom you consult here at Forbes, Moore’s Law is either over (a polite way of saying “dead”), no longer holding up, or alive and well (if you believe the company whose co-founder originated the concept).
The principle, which assured us that processor performance would double every two years or so, drove the development of semiconductors for decades. But now, innovation at the U.S. companies that once dominated the chip market has slowed, investment capital has shrunk and manufacturing costs have risen. Frankly, that is a frightening state of affairs for any industry or company that depends on computing. Nvidia CEO Jen-Hsun Huang recently declared that Moore’s Law is dead, and a 2016 article from MIT Technology Review reached the same conclusion. Now what?
The impact of this plateau in performance will certainly be felt in areas like mobile devices, IoT, machine learning and artificial intelligence. More troubling is the impact on hyperscale data centers with massive aggregated compute power that manage cloud-based consumer and commercial products and services.
Most troubling of all is the impact on advanced research done via supercomputing (i.e., high-performance, high-density data processing) that drives progress in genomics, energy, national security, weather forecasting and modeling, and big data analytics.
The forward march of progress and the health of the economy rely on continued improvement in semiconductors. I won’t bore you with why nanometer-scale microelectronics are reaching their technological limits (let’s just say, “because physics”). It has simply become too hard to make those already tiny circuits any smaller. As transistors shrank, they became faster, but the wires connecting them became thinner and slower. The solution is to attack the wire slowdown from first principles: introduce a wire-centric computational mechanism in a way that does not require existing applications to be rewritten.
The problem of device physics is, however, a solvable one, and that’s the silver lining to the death of Moore’s Law. The industry’s performance plateau creates a market space and opportunity for new ways of thinking, new designs and new inventions. We need radical, not incremental, change. We cannot afford to be bound by the dogma that has dictated traditional Silicon Valley product development (this dogma, by the way, is why the alternatives we’ve been promised are far behind schedule).
Nearly 20 years ago, the auto industry pulled the plug on the first commercially available electric car, and today, well-heeled, green-leaning drivers are standing in line to buy electric cars from Tesla. It’s more than a business fable — it illustrates how a disruptive invention (i.e., a new way of thinking) can challenge a massive industry.
When it comes to using specialized processor systems such as TPUs and GPUs to save energy, we face headwinds from the growing non-recurring engineering cost of designing chips, which must be amortized across ever-higher sales volumes to remain economically viable. A TPU or GPU can run only a small percentage of applications (algorithms) more efficiently than a CPU. In any computer architecture, it takes far more energy to fetch and schedule an instruction than to execute it. A GPU is more efficient than a general-purpose CPU only when it is mindlessly executing the same instruction on a larger set of data. For example, a CPU fetches an instruction and then executes that instruction (operation) on a single piece of data; a GPU fetches an instruction and then mindlessly executes that same instruction on 32 pieces of data.
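To make that contrast concrete, here is a minimal sketch of the same multiply written both ways: as a scalar CPU loop and as a CUDA kernel whose fetched instructions are each issued to a warp of 32 threads. The function names, the scaling factor and the problem size are illustrative assumptions, not anything from a particular product; the point is simply that the kernel amortizes each instruction fetch and schedule across 32 data elements.

#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

// CPU-style execution: every element costs one instruction fetch,
// one schedule and one execute.
void scale_cpu(const float* in, float* out, float factor, int n) {
    for (int i = 0; i < n; ++i)
        out[i] = in[i] * factor;
}

// GPU (SIMT) execution: each fetched instruction is issued to a warp of
// 32 threads, so the fetch/schedule overhead is shared by 32 data items.
__global__ void scale_gpu(const float* in, float* out, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = in[i] * factor;
}

int main() {
    const int n = 1 << 20;  // 1M elements; illustrative size only
    std::vector<float> h_in(n, 2.0f), h_ref(n), h_out(n);
    scale_cpu(h_in.data(), h_ref.data(), 3.0f, n);

    float *d_in = nullptr, *d_out = nullptr;
    cudaMalloc((void**)&d_in, n * sizeof(float));
    cudaMalloc((void**)&d_out, n * sizeof(float));
    cudaMemcpy(d_in, h_in.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    // 256 threads per block = 8 warps of 32 threads each.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    scale_gpu<<<blocks, threads>>>(d_in, d_out, 3.0f, n);
    cudaMemcpy(h_out.data(), d_out, n * sizeof(float), cudaMemcpyDeviceToHost);

    printf("CPU result: %f, GPU result: %f\n", h_ref[0], h_out[0]);

    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}

That amortization is exactly why the advantage evaporates for workloads that cannot be expressed as the same operation repeated over many data elements, which is why a GPU or TPU accelerates only a narrow slice of applications.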