What Elements Limit the Number of Cores in a Processor

Since the mid-2000s, PCs have moved from single-core to multi-core CPUs. What started as 2-core configurations has grown to 16 cores in a standard PC, with configurations of up to 64 cores in professional workstations. But is there a limit to the number of cores? If so, what is that limit and what factors set it?

Our processors carry more and more cores inside them; however, a point will come where the number of cores stops scaling. But what are the factors that will put a brake on the number of cores in a CPU?

Why do we use multiple cores in one CPU?

Until the mid-2000s, PC processors were single-core; since then, CPUs have become multi-core.

To understand this, we have to take into account Dennard scaling, whose formula is the following:

Consumption = Q * f * C * V²

Where:

  • Q is the number of active transistors.
  • f is the clock frequency.
  • C is the capacitance, that is, the transistors’ ability to hold a charge.
  • V is the voltage.

The magic of this formula is that when the size of the transistors is reduced, not only does the density per area increase, but V and C shrink as well. This phenomenon was named Dennard scaling after Robert H. Dennard, an IBM scientist who, among other things, invented DRAM memory.
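
As a rough illustration of the formula, here is a minimal Python sketch; the values are purely illustrative and not taken from any real chip:

    # Minimal sketch of the dynamic power formula above.
    # All values are illustrative, not real chip parameters.
    def dynamic_power(q, f, c, v):
        """Consumption = Q * f * C * V^2"""
        return q * f * c * v ** 2

    # 100 million active transistors at 3 GHz, 0.1 fF effective
    # capacitance each, 1.0 V supply -> about 30 W.
    print(dynamic_power(1e8, 3e9, 1e-16, 1.0))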

Dennard scaling uses the value S as the ratio between the new node and the old one. If, for example, the old node was 10 nm and the new one is 7 nm, then the value of S is 0.7.
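
A simplified worked example with S = 0.7 shows why this scaling was so convenient (this is the textbook reading of Dennard’s rules, not figures from any specific node):

  • C scales with S: capacitance drops to 0.7×.
  • V scales with S: voltage drops to 0.7×, so V² drops to about 0.49×.
  • f scales with 1/S: clock frequency rises to about 1.43×.
  • Power per transistor therefore scales with S × S² × (1/S) = S² ≈ 0.49×.
  • Density grows by 1/S² ≈ 2×, so power per unit area stays roughly constant.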

But Dennard scaling hit its limit when the 65 nm node was reached, ushering in the post-Dennard era. The main reason was a part of the formula that until that moment had been ignored because its effects were negligible: current leakage.

This changed the formula to the following:

Consumption = Q * f * C * V² + (V * Current Leakage)

The consequence was that Q, f and C kept scaling, but the voltage did not: it became almost constant. Since clock speed scales with voltage, architects had to start looking for other ways to get faster and more powerful processors.
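
Extending the earlier sketch with the leakage term makes the problem visible; again, the figures are illustrative only:

    # Post-Dennard power: the static leakage term no longer shrinks
    # with the node, and V stays almost constant.
    def total_power(q, f, c, v, i_leak):
        """Consumption = Q * f * C * V^2 + V * I_leak"""
        return q * f * c * v ** 2 + v * i_leak

    # Doubling the active transistors with V stuck at 1.0 V roughly
    # doubles the dynamic part instead of staying flat.
    print(total_power(1e8, 3e9, 1e-16, 1.0, 5.0))  # ~35 W
    print(total_power(2e8, 3e9, 1e-16, 1.0, 5.0))  # ~65 W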

As a result, the era of single-core CPUs came to an end, and the big PC CPU manufacturers, Intel and AMD, began to bet on multi-core processors.

What is the limit on the number of cores?

There are several factors that limit the number of cores a processor can be built with. The first and most obvious is density, in terms of the number of transistors per area, which grows with each new node and therefore allows manufacturers to ship processors with more and more cores.

But density is not the whole story, and we are going to explain the challenges architects face when designing new processors.

Internal communication between the different cores

A very important element is communication between the different cores, and between the cores and memory, whose complexity grows rapidly with the number of elements in the system: in a full point-to-point topology, the number of links grows quadratically.

This has forced engineers to work on increasingly complex communication structures, all so that the different cores of a processor can send and receive data in step with one another.
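
A toy calculation makes the problem concrete. Assuming a naive full point-to-point topology where every core talks directly to every other core, the number of links is n(n-1)/2:

    # Links needed for a naive all-to-all (point-to-point) topology.
    def point_to_point_links(n):
        return n * (n - 1) // 2

    for n in (2, 4, 8, 16, 64):
        print(n, point_to_point_links(n))
    # 2 -> 1, 4 -> 6, 8 -> 28, 16 -> 120, 64 -> 2016

This is why real designs move to rings, meshes and other shared structures instead of wiring every core directly to every other one.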

The dark silicon problem

Dark silicon is the part of the circuitry within an integrated circuit that cannot be powered at its normal operating voltage (the one specified for the chip) within a given power consumption limit.

The key is that the amount of dark silicon increases from one node to the next by a factor of 1/S², so even though we can physically fit twice as many transistors, the fraction of them we can actually use decreases from one generation to another.
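
One common back-of-the-envelope reading, assuming the voltage no longer scales: with S = 0.7, transistor count per area roughly doubles (1/S² ≈ 2), but power per transistor only falls by about S ≈ 0.7, because only the capacitance keeps shrinking. Powering everything at once would then draw about 2 × 0.7 = 1.4 times the previous power, so within the same budget only around 70% of the transistors can be active at any moment, and that dark fraction compounds with every new generation.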

Cost per area is increasing, not decreasing

Down to 28 nm, the cost per mm² of area decreased, but beyond 28 nm it has increased. This means that if the price of processors is to stay the same, their area has to shrink; if instead manufacturers want to maintain the evolutionary rhythm, processors will become increasingly expensive.

This may seem unimportant for the number of cores, but it is not: architects work within a transistor budget whose limiting factor is cost, which in turn is tied to the cost per wafer.
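
A back-of-the-envelope sketch of what this does to the transistor budget; the 1.5× cost figure is purely hypothetical:

    # Cost per transistor across a hypothetical node shrink.
    density_gain = 2.0       # transistors per mm^2, new node vs old
    cost_per_mm2_gain = 1.5  # hypothetical increase in cost per mm^2
    print(cost_per_mm2_gain / density_gain)  # 0.75: only a 25% drop
    # Transistors get cheaper far more slowly than density suggests,
    # so doubling the cores no longer comes close to "free".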

Amdahl’s Law: software doesn’t scale with the number of cores

Amdahl’s Law was formulated by Gene Amdahl, a computer scientist famous, among other things, for being the architect of the IBM System/360. Although Amdahl’s Law is not a physical phenomenon, it reminds us that not all of a program’s workload can be parallelized.

The consequence is that while certain parts of the work speed up with the number of cores in the system, other parts cannot scale in parallel: they run serially and depend on the performance of each individual core. So workloads that cannot be run in parallel do not get faster as the number of cores rises.
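
A minimal Python sketch of Amdahl’s Law, speedup = 1 / ((1 − p) + p/n), makes the ceiling obvious; here we assume a program whose work is 90% parallelizable:

    # Amdahl's Law: speedup = 1 / ((1 - p) + p / n)
    def amdahl_speedup(p, n):
        """p: parallel fraction of the work, n: number of cores."""
        return 1.0 / ((1.0 - p) + p / n)

    for n in (2, 4, 8, 16, 64):
        print(n, round(amdahl_speedup(0.9, n), 2))
    # 2 -> 1.82, 4 -> 3.08, 8 -> 4.71, 16 -> 6.4, 64 -> 8.77

No matter how many cores are added, the speedup can never exceed 1 / (1 − p), in this example 10×.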

This is the reason why architects, instead of simply trying to cram more and more cores into processors, design new, increasingly efficient architectures, the main objective being to reduce the time a single core takes to carry out instructions.

On the other hand, it must be taken into account that software is designed according to the hardware on the market: most current software is optimized for the average number of cores people have in their PCs. And although there is a degree of scalability, it is never linear but rather logarithmic, so there comes a point where certain applications, regardless of how many cores a processor has or even how fast it is, will barely scale any further, or will scale so slowly that the extra cores are hardly noticeable.