Moore's Law is dead, long live Moore's Law!
Moore's Law turns 50 next week, which makes this a good moment to revisit Gordon Moore's classic prediction, its rise to near-sacred status over the past half-century, and what Moore's Law can actually tell us about the future of computing. Our colleagues at ExtremeTech turned to Dr. Christopher Mack, a semiconductor lithography expert, for answers to these questions. It might seem odd to discuss the future of Moore's Law with a scientist who jokingly pronounced it dead a year ago, but one of the hallmarks of this "law" is that its meaning has shifted several times over the past half-century.
In a recent article, Dr. Mack argues that what we call "Moore's Law" is actually three different laws. In the first era, which he calls Moore's Law 1.0, the emphasis was on packing more components onto a single chip. A simple example is the evolution of the microprocessor itself. In the early 1980s, the vast majority of processors could only perform integer arithmetic on-chip. If you wanted to do floating-point calculations (i.e., calculations involving a decimal point), you had to buy a separate coprocessor that plugged into its own socket on the motherboard.
Some of you may also remember that in the early days of CPU caches, the cache sat on the motherboard and was not integrated into the CPU. The term Front-Side Bus (the bus running from the northbridge controller to main memory and various peripherals) originally stood in contrast to the Back-Side Bus, which connected the CPU to its cache. Integrating these components onto the chip didn't always cut costs; sometimes the final product was more expensive, but it significantly improved performance.
Moore's Law 2.0 took hold in the mid-1990s. Moore's Law has always had a quiet partner: Dennard scaling. Dennard's observation states that as transistors shrink, their power density remains constant; that is, smaller transistors require less voltage and less current. If Moore's Law said we could pack more transistors into a given area, Dennard scaling promised that those transistors would also run cooler and consume less power. It was the breakdown of Dennard scaling that drove Intel, AMD and other major manufacturers to abandon frequency scaling in 2005 in favor of adding more CPU cores, rather than continuing to chase single-threaded performance.
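The arithmetic behind Dennard scaling is easy to check. A minimal sketch, using the textbook idealization (shrink every linear dimension by a factor S, and capacitance and voltage scale by S while frequency scales by 1/S) rather than any real process data:

```python
# Idealized Dennard scaling: shrink linear dimensions by factor S.
# Capacitance ~ S, voltage ~ S, frequency ~ 1/S, area ~ S^2.
# Dynamic power per transistor is ~ C * V^2 * f.

def dennard_scale(S, C=1.0, V=1.0, f=1.0, area=1.0):
    """Return (power, area, power_density) after one scaling step."""
    C2, V2, f2, area2 = C * S, V * S, f / S, area * S * S
    power = C2 * V2 ** 2 * f2
    return power, area2, power / area2

# One full node shrink (S ~ 0.7 roughly halves the area):
power, area, density = dennard_scale(0.7)
print(f"power: {power:.3f}x, area: {area:.3f}x, density: {density:.3f}x")
# -> power: 0.490x, area: 0.490x, density: 1.000x
```

Power and area both fall by the same factor, so power density stays at 1.0x: the "quiet partner" that let chips get denser without getting hotter, until voltage stopped scaling.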
From 2005 to 2014, Moore's Law kept working, but the emphasis shifted to reducing the cost of each additional transistor. These transistors might not be faster than their predecessors, but they were often more energy efficient and cheaper to manufacture. As Mack notes, most of these improvements were driven by advances in lithography tools. The cost per transistor fell, while the cost per square millimeter fell more slowly or stayed flat.
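Those two trends are consistent with each other: if density keeps doubling while the cost of finished silicon per square millimeter stays flat, the cost per transistor still halves at every node. A toy illustration with assumed round numbers, not real foundry figures:

```python
# Toy model (assumed numbers, not real foundry data): flat cost per mm^2
# of finished silicon, transistor density doubling at each node.

cost_per_mm2 = 1.0   # arbitrary units, held flat across nodes
density = 1.0        # transistors per mm^2, arbitrary units

for node in range(4):
    print(f"node {node}: cost/transistor = {cost_per_mm2 / density:.4f}")
    density *= 2     # density doubles per node
```

Cost per transistor falls from 1.0 to 0.125 over three shrinks even though the per-area cost never moves, which is exactly the dynamic the article describes.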
Moore's Law 3.0 is far more diverse and involves integrating functions and capabilities that historically were not considered part of the processor at all. Intel's on-chip voltage regulator, and the broader integration of power-delivery circuitry to improve idle and load characteristics, could be seen as applications of Moore's Law 3.0, along with some of NVIDIA's deep-learning features or its push to move camera-processing technology onto a single chip.
Dr. Mack points to ideas like nanorelays: tiny mechanical switches that toggle more slowly than digital logic but require no energy to hold their state. Whether such technologies will make it into future chip designs is unknown, and it is unclear which lines of research will pan out. A company might spend millions trying to improve digital logic design, or to adapt semiconductor principles to other kinds of chip designs, only to find that the final product is little better than the previous one.
Changing the Nature of Moore's Law
Gordon Moore
There is an argument against this loose usage that goes something like this: Moore's Law, divorced from Gordon Moore's actual words, is not Moore's Law at all. Redefining Moore's Law turns it from a sound scientific postulate into a feel-good marketing term. And that criticism has merit. Clock speeds, transistor densities and benchmark results can all be distorted, and Moore's Law, in whatever form, is no exception.
One can argue, however, that all these extra layers were grafted onto the law long ago. Gordon Moore's original work was not published in a popular magazine for a general audience; it was a technical paper meant to project the long-term trend of an observed phenomenon. Modern processor manufacturers remain focused on improving density and driving down transistor costs as far as possible. But the notion of Moore's Law quickly shifted from a simple statement of an observed trend to a guiding principle governing virtually every aspect of computing.
Even this overarching trend began to change in 2005, without any help from marketing. First, Intel and AMD focused on adding more cores, which required additional support from software vendors. More recently, both companies have concentrated on improving energy efficiency and cutting idle power consumption to better fit the power budgets of mobile devices. Intel and AMD have done an incredible job of reducing platform-level idle power, but the power consumption of a fully loaded CPU has fallen much more slowly, and maximum CPU temperatures have risen dramatically. Today a fully loaded CPU runs at 80-95 degrees Celsius, where ten years ago it ran at 60-70 degrees. CPU manufacturers deserve credit for the fact that chips generally operate normally at such temperatures, but these changes happened because Dennard scaling, which underpinned Moore's Law 2.0, no longer held.
Even a non-engineer can see that each change in the definition of Moore's Law has been accompanied by a profound shift in the nature of leading-edge computing. Moore's Law 1.0 gave us the mainframe and the minicomputer. Moore's Law 2.0 emphasized transistor performance and cost scaling, carrying computing from the minicomputer era into desktops and laptops. Moore's Law 3.0, with its focus on platform-level cost and overall system integration, gave us the smartphone, the tablet and the nascent wearables market.
Twenty years ago, the advances of Moore's Law meant faster transistors and higher clock speeds. Today they deliver better battery life, quicker transitions between sleep and active states, lower power consumption along the way, sharper screens, thinner form factors and, yes, better overall performance in some respects, though not as quickly as we would like. Moore's Law remains a key concept because it has come to mean far more than transistor performance or gate-level electrical characteristics.
Fifty years on, Moore's Law has become cultural shorthand for innovation itself. When Intel or NVIDIA or Samsung invoke Moore's Law in this sense, they are referring to the continuous application of decades of knowledge and ingenuity across hundreds of products. It is a way of acknowledging the remarkable collaboration that starts in the fab and ends in the living room, squeezing a little more out of every component in line with what users want. Is that marketing? You decide.
Based on ExtremeTech materials
Source: hi-news.ru