What if one of the most repeated claims about the evolution of hardware were continually misinterpreted? That is the case with Moore’s Law, which is cited constantly, but rarely correctly.
What is Moore’s Law?
In 1965, Gordon Moore, who would later co-found Intel, predicted that the number of transistors per unit area would double every year, a prediction he revised in 1975 to a doubling every two years. Moore’s statement was not originally published as a scientific paper, but in an article in the magazine Electronics on April 19, 1965, where he said that by 1975 a single integrated circuit would hold 65,000 components.
In Moore’s own words:
The complexity for minimum component costs has increased at a rate of roughly a factor of two per year. Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least ten years. That means by 1975, the number of components (transistors) per integrated circuit for minimum cost will be 65,000.
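Moore’s 65,000 figure follows directly from compounding an annual doubling over the decade from 1965 to 1975. A minimal sketch of that arithmetic, assuming a starting point of roughly 64 components per chip in 1965 (an illustrative approximation, not a figure stated in the article):

```python
# Compound an annual doubling of components per chip from 1965 to 1975.
components_1965 = 64        # assumed starting point, approximate
doublings = 1975 - 1965     # ten doublings at one per year

components_1975 = components_1965 * 2 ** doublings
print(components_1975)      # 65536, i.e. roughly the 65,000 Moore predicted
```

Ten doublings multiply the count by 1,024, which is why even a modest starting figure lands in the tens of thousands.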
Moore’s Law therefore speaks of transistor density per unit area, not of performance, as is often claimed. Much of the performance gain actually came from Dennard scaling, which broke down from the 65 nm node onward. To be clear: Moore’s Law was never about performance, but about transistor density at a given cost.
The way the size reduction is achieved is simple: with each new node, the linear dimensions of the transistors shrink by a factor of about 0.7. Since processors are two-dimensional arrays of interconnected transistors, this translates into an area factor of 0.7 × 0.7 ≈ 0.49, that is, roughly half. However, Moore’s Law has been misinterpreted by dropping an essential part of its statement.
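The shrink arithmetic can be sketched in a few lines; the 0.7 linear factor per node is the figure from the text:

```python
# Each new node shrinks linear transistor dimensions by ~0.7.
linear_shrink = 0.7

# Chips are two-dimensional, so area scales with the square of the linear factor.
area_shrink = linear_shrink ** 2   # ~0.49: each transistor occupies about half the area
density_gain = 1 / area_shrink     # ~2.04: about twice as many transistors fit per area

print(round(area_shrink, 2), round(density_gain, 2))  # 0.49 2.04
```

This is why a 0.7 linear shrink is the magic number: squaring it is what delivers the doubling of density that Moore’s Law describes.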
Why has Moore’s Law been misinterpreted?
In the 1960s there were no CPUs as we know them now; instead, a CPU was built from several separate chips that would over time end up integrated into a single piece of silicon. The steady increase in transistors per unit area is what gradually allowed the different elements of the processor to be integrated.
Gordon Moore spoke of components “at minimum cost,” and it is at this point that the original statement and the development of new technologies have diverged, since in recent years we have seen the cost per unit area increase with each node. Moore’s Law also has a counterpart, or complementary law, known as Moore’s second law or Rock’s Law.
What does Rock’s Law say? That the cost of a chip fabrication plant doubles every four years. This conflicts with Moore’s Law as everyone understands it, because the minimum-cost part of the statement is always left out. That omission is what causes Moore’s Law to be misinterpreted: the popular version of Moore’s observation simply ignores manufacturing costs.
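Rock’s Law compounds just as relentlessly as Moore’s. A minimal sketch, assuming a hypothetical fab costing $1 billion in year zero (the starting cost is illustrative, not a figure from the text):

```python
# Rock's Law: the cost of a chip fabrication plant doubles every four years.
initial_fab_cost = 1.0  # assumed starting cost in billions of dollars (illustrative)

for years in (0, 4, 8, 12, 16, 20):
    cost = initial_fab_cost * 2 ** (years / 4)
    print(f"after {years:2d} years: ${cost:.0f}B")
```

After twenty years (five doublings), the hypothetical fab costs 32 times as much, which is why the “minimum cost” clause of Moore’s statement matters so much.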
The end of Rock’s Law
The problem is that while Moore’s prediction held for a good part of the time, Rock’s Law did not advance at the same pace, so the cost per unit area began to grow much faster than expected. This change started in the mid-1990s with very slight effects, but in recent nodes it has led to the cost of processors increasing more than expected.
That is, for the same chip from one generation to the next, processors are increasingly expensive. If we look only at transistor density, Moore’s Law is still valid, but no longer at minimum cost. So Moore’s Law has been misinterpreted, and its end has come not from density, but from costs.