Is Moore’s Law Dead? Dying? Or Still Alive?

Looking into the technical details behind Moore’s Law.

Robert Kwiatkowski
Mar 5, 2024 · 6 min read

Today’s world runs largely on advancements in technology and science; it is almost impossible to imagine life without modern electronics. Electronics are nearly everywhere now, but that was not always the case. The first Integrated Circuits (ICs) were developed in 1959, and they enabled new kinds of electronic devices, more powerful and much smaller than their predecessors. For a long time this development trend (increasing power and shrinking size) followed the so-called Moore’s Law.

However, in 2022 the CEO of NVIDIA, Jensen Huang, said that Moore’s Law is no longer valid. Not long after, Intel’s CEO, Pat Gelsinger, responded that it still is. So, what is the truth? Why did NVIDIA’s CEO say that? And was it a single voice, or do more people share his view?

Background

In the 1960s and early 1970s there were major milestones in the development and evolution of electronics, most notably the invention of the CMOS (Complementary Metal-Oxide-Semiconductor) Integrated Circuit and the development of the first microprocessors. Since then, the number of transistors in manufactured ICs has kept increasing, and the first development trends could be identified. And here we come to the titular “law”.

Moore’s Law is named after Gordon E. Moore, the co-founder of Intel. It is an empirical observation, not a law per se. In a 1965 article for Electronics magazine, Moore observed that the density of transistors in an Integrated Circuit would double yearly; he later revised that to a doubling every two years. In practice this means transistors keep getting smaller every year so that more of them fit in the same area, i.e. on a chip. For many years this solved the problem of insufficient computational power while also accelerating the miniaturization of electronic devices. And it applied not only to CPUs but also to GPUs, memory chips and even digital camera sensors. One can start to wonder: are there any limits to that scaling down? In this article we will delve into the technical aspects that are the biggest obstacles to this trend continuing.
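To get a feel for what “doubling every two years” means in practice, here is a rough back-of-the-envelope sketch in Python. The 1971 starting point of roughly 2,300 transistors (the Intel 4004) is used purely for illustration, and the printed numbers are projections of the doubling rule itself, not real chip data.

```python
# Illustrative projection of Moore's Law: transistor counts doubling every two years.
# Starting point: Intel 4004 (1971), roughly 2,300 transistors. Projection only, not real data.

START_YEAR = 1971
START_TRANSISTORS = 2_300
DOUBLING_PERIOD_YEARS = 2

for year in range(START_YEAR, 2031, 10):
    doublings = (year - START_YEAR) / DOUBLING_PERIOD_YEARS
    projected = START_TRANSISTORS * 2 ** doublings
    print(f"{year}: ~{projected:,.0f} transistors per chip")
```

Running this lands in the tens of billions of transistors by the 2020s, which is roughly where the largest real chips sit today, showing how well the rule has tracked reality so far.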

Data and Trends

First, let’s look at the maximum number of transistors on a single chip over the years.

Chart: transistor counts over time. Max Roser, Hannah Ritchie, CC BY 4.0, via Wikimedia Commons

One important note when interpreting this graph in the context of Moore’s Law: we are interested in the maximum number of transistors in a single chip for a given year. As we can see, the growth followed Moore’s Law for many years, and so far the trend holds. So why did NVIDIA’s CEO declare the end of this trend? Are we reaching some limits? What are they? Let’s look into the details.

Dennard Scaling and Power Wall

And indeed, there are some limitations, coming mostly from transistor design and the laws of physics. One of them is the limit on physically downsizing the transistor. The principle of scaling CPUs down to obtain both higher transistor density and higher speed was described in 1974 by Robert H. Dennard and is known as Dennard Scaling. Simply speaking, it says that the power density stays constant even as the transistors themselves scale down, which is achieved by reducing the supply voltage. And the limitations come directly from that voltage.

Here, we must first understand what defines the active power of a transistor. It depends on three parameters:

  • Capacitance
  • Frequency
  • Operating Voltage

And the formula is:

P = C × V² × f

where C is the capacitance, V the operating voltage, and f the switching frequency.

We see here that the power increases with the square of the voltage. When we scale a transistor down, its capacitance goes down with its size. Since we want to keep the electric field unchanged, we can reduce the voltage at the same rate as the dimensions and simultaneously increase the frequency. That’s good, right? Yes and no. There are consequences, and there is a practical limit to how far we can downscale. Most of the limits come from the reduction of the operating voltage, so we have to dig a bit deeper into this topic.
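To make the scaling argument concrete, here is a minimal sketch of ideal Dennard Scaling in Python. The device values are made-up, order-of-magnitude numbers used only to show the mechanism: when dimensions, capacitance, and voltage all shrink by a factor k while frequency rises by 1/k, the power per unit area stays constant.

```python
# Minimal sketch of ideal Dennard Scaling (illustrative numbers, not real device data).
# Dynamic power per transistor: P = C * V^2 * f
# Ideal scaling by a factor k < 1: C -> k*C, V -> k*V, f -> f/k, area -> k^2 * area,
# so the power density P / area stays constant.

def dynamic_power(capacitance, voltage, frequency):
    """Dynamic (switching) power of a transistor: P = C * V^2 * f."""
    return capacitance * voltage ** 2 * frequency

# Baseline transistor (assumed, order-of-magnitude values)
C, V, f, area = 1e-15, 1.2, 2e9, 1e-14   # farads, volts, hertz, m^2

for k in (1.0, 0.7, 0.5):                # 1.0 = baseline, then two shrink steps
    C_s, V_s, f_s, area_s = k * C, k * V, f / k, k**2 * area
    p = dynamic_power(C_s, V_s, f_s)
    print(f"scale {k}: power/transistor = {p:.2e} W, power density = {p / area_s:.2e} W/m^2")
```

The power per transistor drops with k², but so does its area, so the printed power density stays the same at every step. This only works as long as the voltage can actually keep shrinking, which is exactly where the trouble starts.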

Every digital system operates in two states: high (1) and low (0). Behind that, however, is an analog signal, and as designers we must decide which level still counts as low and at which level the digital state switches to high. In the case of voltage, we have to set thresholds in volts for the high and low states. Current CPUs operate at about 1–1.5 V. Here we have two things to consider, which also define the limit of scaling down:

  • What is the minimum voltage required to trigger (turn on) the transistor? It is commonly about 0.6–0.7 V.
  • What is the expected voltage noise in the system to distinguish the real signal from the background noise? Let’s assume 0.2 V.

You must add these two together to get the minimum operating voltage: in the example above, 0.7 + 0.2 = 0.9 V, plus some safety margin so the transistor reliably turns on whenever we expect the high state.
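As a tiny illustration of this arithmetic, the sketch below adds the threshold and noise figures from the example; the 10% safety margin is an assumed, purely illustrative value.

```python
# Rough estimate of the minimum usable supply voltage.
# Threshold and noise values are taken from the example above;
# the 10% safety margin is an assumption for illustration.
threshold_voltage = 0.7   # V, needed to reliably turn the transistor on
noise_budget      = 0.2   # V, assumed noise level we must stay above
safety_margin     = 0.10  # fraction, hypothetical extra headroom

v_min = (threshold_voltage + noise_budget) * (1 + safety_margin)
print(f"Minimum operating voltage: about {v_min:.2f} V")  # ~0.99 V
```

With a floor around 1 V, there is simply not much room left to keep reducing the supply voltage, which is why the ideal scaling described above eventually breaks down.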

At the same time, once the voltage can no longer be reduced, the power per transistor stays roughly constant, yet we pack more and more transistors into the same chip. Often, we also want to achieve two additional goals: high performance or small size (e.g. for mobile applications). This became an important challenge in the early 2000s; before that, Dennard Scaling was working fine.

This leads to another challenge: heat generation, measured in W/cm². It is an obstacle to building cost-effective high-performance microchips and is commonly known as the “power wall”. That’s why all modern CPUs require intensive cooling, usually air cooling, though some use liquid cooling to reach higher frequencies (at the same voltage). In practice, this effect means CPU frequencies no longer rise as quickly as they used to, which is why most people can still run 5–8 year old CPUs without any issues for most common tasks.
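To see why this is called a wall, here is a quick back-of-the-envelope estimate in Python; the 125 W power budget and 2 cm² die area are assumed, round numbers for a modern desktop CPU, not figures for any specific chip.

```python
# Back-of-the-envelope power density for a desktop CPU (assumed, illustrative numbers).
tdp_watts    = 125.0   # typical desktop CPU power budget, assumed for illustration
die_area_cm2 = 2.0     # roughly a 200 mm^2 die, assumed for illustration

power_density = tdp_watts / die_area_cm2
print(f"Power density: about {power_density:.0f} W/cm^2")  # ~62 W/cm^2
# This is far above the power density of a typical kitchen hot plate,
# which is why high-end CPUs need aggressive air or liquid cooling.
```

All of that heat has to leave the chip through an area of a few square centimeters, and cooling that cost-effectively is exactly what caps frequencies in practice.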

Transistor Design

There is also a limit related to the size of the transistor itself, and it comes from its design. As the transistor gets smaller, its layers get thinner, so the insulation becomes less effective, leading to so-called power leakage (also known as sub-threshold leakage). This essentially means that current leaks through even when the transistor should be off, and this leakage is a big issue when scaling transistors down.

Depiction of Power Leakage (Sub-threshold Leakage); Image by Author
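A common textbook approximation says that this off-state (sub-threshold) current grows roughly exponentially as the threshold voltage is lowered. The sketch below uses that approximation with illustrative constants (the reference current I0 and slope factor n are assumed values) to show why lowering the threshold, which would allow lower supply voltages, quickly makes leakage explode.

```python
# Simplified model of sub-threshold leakage (standard textbook approximation,
# with assumed, illustrative constants):
#   I_off ~= I0 * exp(-V_th / (n * V_T))
import math

I0  = 1e-6    # A, hypothetical reference current
n   = 1.5     # sub-threshold slope factor, typical range ~1.2-1.6
V_T = 0.026   # V, thermal voltage at room temperature

for v_th in (0.7, 0.5, 0.3):
    i_off = I0 * math.exp(-v_th / (n * V_T))
    print(f"V_th = {v_th:.1f} V -> off-state leakage ~ {i_off:.2e} A per transistor")
```

Each 0.2 V reduction of the threshold raises the per-transistor leakage by several orders of magnitude, and with billions of transistors on a chip that wasted power adds up fast.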

Conclusions

These three factors have practically capped single-core CPU frequency at about 4–5 GHz for air-cooled processors.

To sum up, there are limits to transistor density, and therefore to Moore’s Law, because manufacturers must:

  • Meet a minimum threshold to activate a transistor.
  • Meet robustness criteria against noise.
  • Provide enough heat dissipation to prevent the chip from physically melting.
  • Prevent power losses from leakage caused by the ever-shrinking physical size of components.

That’s why more and more voices say that Moore’s Law no longer applies, and Jensen Huang is one of the people stating it publicly. However, until we hit the real, hard limits described above, manufacturers will squeeze as much as possible out of the current technology. So maybe Moore’s Law is still alive, but in the not-so-distant future it may indeed be dead. Unless technology takes another breathtaking step.

