How computing will change amid challenges to Moore’s Law

We are in the midst of a true inflection point in computing, and the very way we interface with technology daily is changing.

The rapid inclusion of embedded sensors and internet connectivity is turning most of the appliances we use into “smart devices” that can respond to our voice commands, while generating masses of data that are, in turn, analyzed in edge-of-network hub computers or the cloud.

Virtual and augmented reality are just starting to ramp in adoption, and these technologies require significant compute and graphics processing to deliver a more lifelike experience. This is coupled with the phenomenal advancement of machine learning applications that can be trained to sift through masses of data and deliver timely, context-aware information, or take over mundane tasks. These new applications are challenging the industry to deliver much more computation capability at a more affordable price.

Supplying this demand for more computation is especially challenging — the Moore’s Law pace of semiconductor advancements has slowed down.

Moore’s Law is defined as the trend for the doubling of the number of transistors on a chip about every two years through ever smaller circuitry, producing greater performance and energy efficiency. It used to be that each generation of semiconductor technology could be relied upon to enable each next generation of a computer chip to be cheaper and faster.
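The doubling described above compounds quickly. A minimal sketch of the arithmetic (the starting transistor count is a hypothetical round number, not a claim about any particular chip):

```python
def moores_law_projection(initial_transistors: int, years: int,
                          doubling_period: float = 2.0) -> int:
    """Project a transistor count under Moore's Law: a doubling
    roughly every `doubling_period` years."""
    return int(initial_transistors * 2 ** (years / doubling_period))

# A hypothetical chip starting at 1 billion transistors, projected
# 10 years out: five doublings, so 32x the starting count.
print(moores_law_projection(1_000_000_000, 10))  # → 32000000000
```

Ten years at the historical pace is a 32x increase, which is why even a modest slowdown in the doubling period compounds into a large shortfall over a decade.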

The laws of physics can’t be fooled, and we have reached a level where the miniaturization of transistors is bumping against physical limits. New semiconductor technology nodes will still bring significant miniaturization and lower power over the next decade, but the cost is increasing and the historical improvements in speed are no longer achieved.


So we face a dichotomy as the historical improvements of Moore’s Law slow, while new compute-intensive applications require exponentially more capability. This is driven by insatiable consumer demand for more data and more data processing, more real-time information and faster services. Self-driving cars, drones and robotics all require massively more real-time information processing, inference and interpretation.

For fail-safe operation or quick responsiveness, the computation can’t all be done in the cloud. We are seeing the need for computation to move out to the edge of networks, closer to the user. Emerging smart applications and AR/VR interfaces require high-performance compute capability to be local — in your car, in your home or business, in a cell tower — yet still connected to the cloud.

The pervasive inclusion of sensors in millions of “Internet of Things” devices, along with the digitization of most aspects of our work and personal lives, has created a data explosion. This massive data trove is driving a need for real-time processing and analytics. We want to use this data in new ways, with hybrid VR/AR environments that overlay information and mix rendered images with the immediate space we are in. Such demands fundamentally change how we interface with technology and require many more teraflops of computing performance.

This power enables virtual and augmented reality to render lifelike images and augmented overlays of contextually relevant information or graphics on top of a real-world view.

The Royal College of Medicine has already recorded surgeries in VR, and you can quite easily imagine an AR overlay providing real-time information to help a surgeon perform with more accuracy. These are truly disruptive applications.

This disruption will affect industry after industry, but only if computing capability stays on pace. So how do we keep it going amid the slowing of Moore’s Law? How do we provide more compute?

It turns out that there are many more levers that engineers can manipulate to drive future performance gains. This is what I call Moore’s Law Plus. It will require engineers to be more creative and cross-disciplinary and leverage cross-industry collaboration. Moore’s Law Plus opens the door for innovation across four main elements:

  • Integration of smaller semiconductor devices with new cost-efficient packaging and interconnect technology. This will enable flexibility to put chip technology together in novel ways.
  • Leveraging a heterogeneous mix of compute processors (CPU and GPU) along with specialized accelerators, and feeding these engines with data from advanced memories.
  • Open-source software programs and development frameworks that are easy to program and take advantage of heterogeneous compute.
  • Software application ecosystems that make it easy to program apps using the advanced computation of machine learning, data analytics and the rendering for VR/AR.
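The second element above, heterogeneous compute, amounts to routing each workload to the engine best suited to it rather than running everything on the CPU. A toy sketch of that idea (all names and the routing policy are hypothetical, purely for illustration):

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    kind: str  # "serial", "parallel", or "fixed_function"

def dispatch(work: Workload) -> str:
    """Pick an engine for a workload (hypothetical policy)."""
    if work.kind == "parallel":        # e.g. ML training, rendering
        return "GPU"
    if work.kind == "fixed_function":  # e.g. video decode
        return "accelerator"
    return "CPU"                       # branchy, latency-sensitive code

for w in [Workload("ui-thread", "serial"),
          Workload("training", "parallel"),
          Workload("decode", "fixed_function")]:
    print(w.name, "->", dispatch(w))
```

Real heterogeneous systems make this decision with far more sophistication, and frameworks like those the HSA Foundation promotes aim to let the engines share memory so data does not have to be copied between them.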

In a Moore’s Law Plus era, universities and industry will apply these levers to drive performance forward. On the foundry front, extreme ultraviolet lithography will be an effective technique for manufacturing at smaller process nodes, leading to new and smaller transistors. These will be wired together with new metal constructs to lower resistance. There will be further manufacturing advances on semiconductors.

Future applications will require more memory, whether on a PC or mobile device or server. For servers, certain workloads, especially with machine learning, virtualization applications, and database processing, have an insatiable demand for more memory. Yet, our year-over-year density increases for memory have been declining. Once again, innovation is leading to new gains as can be seen with new non-volatile memory, and stacked memory.

There are also advances in packaging, with less expensive techniques to connect multiple dies on an organic package. There will be more 3D die stacking of the CPU, graphics, stacked memory and other chip elements, connected without an underlying wafer beneath them. A little further out is the ability to bring optical interconnect natively into the die. These advances matter for performance, but also for system-design flexibility: for new approaches to computing that can efficiently connect deep, dense, non-volatile, persistent memory, so content is not lost when you turn off the power.

In my view, the effort to stay on a Moore’s Law Plus growth rate of 18-24 months is all for naught if you cannot make these new approaches easy to program. The ecosystem is there for CPUs, but if you want to leverage GPUs and other accelerators, you need an open approach. Some companies take a proprietary approach, and it works, but it is costly.

AMD co-founded the Heterogeneous Systems Architecture (HSA) Foundation to allow these different technologies — CPU, GPU and fixed-function accelerators, including FPGAs — to work together, share memory and be optimized from a systems standpoint.

Continuing to advance in a Moore’s Law Plus world requires co-engineering and cooperation across the semiconductor industry: different manufacturers working with academia, and open standards to create an easy-to-program environment. That is why I’m confident companies will be able to keep adding transistors while managing the cost curve.

Put all of this together and it sustains the acceleration of computing. With Moore’s Law Plus, there is no drop-off from the Moore’s Law pace, and that will continue to fuel the disruption.