The Multi-Core Revolution in CPUs
Why CPUs now use multi-core designs
Around 2007, CPU clock-frequency growth stalled or even reversed, while the number of logical cores per chip rose sharply.
Terminology
In this article, clock speed, clock frequency, and frequency all refer to the same thing: the CPU's operating frequency.
Early on, clock frequency benefited from shrinking transistors: smaller transistors have lower capacitance, switch faster, and so can run at higher frequencies.
But clock frequency eventually ran into physical limits. Smaller transistors running at higher frequencies produced more heat, creating cooling problems that hurt chip stability and lifespan. Another important factor was that power consumption grows superlinearly with frequency, roughly with its cube, because supply voltage must rise along with frequency; this made further clock-speed increases impractical.
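The power relationship above can be sketched with the standard CMOS dynamic-power formula, P = C * V^2 * f. The capacitance, voltage, and frequency values below are purely illustrative, not measurements of any real chip:

```python
def dynamic_power(capacitance_f, voltage_v, freq_hz):
    """Dynamic switching power in watts: P = C * V^2 * f."""
    return capacitance_f * voltage_v ** 2 * freq_hz

# Supply voltage must rise roughly in step with frequency, so power
# grows roughly with the cube of frequency (illustrative numbers).
base = dynamic_power(1e-9, 1.0, 2e9)  # 2 GHz at 1.0 V
fast = dynamic_power(1e-9, 1.5, 3e9)  # 3 GHz at 1.5 V

print(f"1.5x the frequency costs {fast / base:.2f}x the power")
```

Because voltage enters squared, even a modest frequency bump multiplies power well beyond the frequency gain itself, which is exactly the wall the industry hit.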
Transistor sizes were also approaching physical limits of their own, further restricting frequency increases.
So the industry turned to multi-core designs. Multiple cores not only eased the heat problem but also improved throughput for naturally parallel workloads such as big-data processing, virtualization, and neural-network batch training. Sequential tasks with data dependencies, however, cannot be parallelized and still favor a single fast core.
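The limit that sequential work puts on multi-core speedup is captured by Amdahl's Law: if a fraction p of a task can run in parallel, the serial remainder caps the speedup no matter how many cores are added. A minimal sketch:

```python
def amdahl_speedup(p, n_cores):
    """Speedup on n_cores when fraction p of the work parallelizes."""
    return 1.0 / ((1.0 - p) + p / n_cores)

# Even when 90% of the work is parallel, the 10% serial part
# caps the speedup below 10x, no matter how many cores we add.
for cores in (1, 4, 16, 64):
    print(f"{cores:3d} cores -> {amdahl_speedup(0.9, cores):.2f}x")
```

This is why dependency-dominated tasks benefit more from one fast core than from many slower ones.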
Now, besides multi-core, there are various heterogeneous computing options like GPUs and ASICs.
Moore's Law
Moore's Law states that the number of transistors doubles every 18-24 months, leading to performance improvements and cost reductions.
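The doubling claim is easy to put in numbers; a small sketch of the growth factor (purely illustrative, no real chip data):

```python
def transistor_growth(months, doubling_period=24):
    """Growth factor in transistor count after `months` months,
    doubling every `doubling_period` months."""
    return 2 ** (months / doubling_period)

# At a 24-month doubling period, one decade gives 2**5 = 32x growth;
# an 18-month cadence compounds considerably faster.
print(transistor_growth(120))       # one decade, 24-month doubling
print(transistor_growth(120, 18))   # one decade, 18-month doubling
```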
Originally, the law described transistor scaling and transistor count on a single CPU. Later, as transistor sizes approached their limits and heat and energy became binding constraints, the industry shifted to multi-core CPUs, which let processors keep increasing overall computing power within a fixed power budget.
Broadly interpreted, then, Moore's Law still holds; strictly interpreted, multi-core scaling falls outside the law's original scope of transistor count on a single chip.