John Hennessy (Alphabet Chairman) – The End of the Road for General Purpose Processors & the Future of Computing (Jul 2018)


Chapters

00:00:08 The End of Moore's Law and Dennard Scaling: A New Era in
00:13:57 Limits of Semiconductor Scaling and Changing Computer Architectures
00:25:50 Speculation and Its Consequences in Modern Processors
00:30:40 Amdahl's Law and the Limits of Multi-Core Scaling
00:35:23 The End of Dennard Scaling: Dark Silicon and Packaging Challenges
00:37:40 Constrained Computing: Challenges and Potential Solutions
00:41:29 Domain-Specific Architectures for Enhanced Performance
00:50:11 Architecture Innovations for Future Computing

Abstract

Navigating the Post-Moore Era: Rethinking Electronic Systems and Computer Architecture

In an era marked by the waning influence of Moore’s Law and Dennard scaling, the inaugural lecture of John Hennessy at the Distinguished Seminar on Electronic Systems Technology has brought into sharp focus the urgent need for a paradigm shift in computer architecture and electronic system design. Hennessy’s insights, reflecting on John Linville’s pioneering legacy at Stanford and the evolving landscape of computing, highlight the critical challenges and opportunities ahead. From the constraints of instruction-level parallelism and Amdahl’s Law to the emerging potential of Domain-Specific Architectures (DSAs) and dark silicon issues, the lecture encapsulates a fundamental transformation in how we conceive, design, and utilize electronic systems in a rapidly changing technological landscape.



Introduction of the Distinguished Seminar on Electronic Systems Technology:

The seminar, organized to honor the legacy of John Linville, emphasized the importance of energy-efficient systems and acknowledged Linville’s lasting impact on Stanford and engineering. These remarks provided historical context and highlighted current challenges faced by the field, setting the stage for Hennessy’s lecture.

John Linville’s Legacy at Stanford:

John Linville, a visionary recruited by Fred Terman in 1954, was instrumental in developing Stanford’s program on transistor applications. His establishment of three new laboratories and the Center for Integrated Systems (CIS) underscored the interdisciplinary nature of modern electronic systems, blending electrical engineering and computer science.

John Hennessy’s Inaugural Lecture:

Hennessy’s lecture served as a crucial turning point, drawing attention to the massive changes in computing driven by technology and architecture. He nostalgically referred to the ‘golden age of computing,’ while soberly acknowledging the slowdown of Moore’s Law and Dennard scaling, signaling a crisis in power consumption and efficiency.

The End of Moore’s Law and Dennard Scaling:

The slowing pace of Moore’s Law and the invalidation of Dennard scaling present formidable challenges in designing efficient electronic systems. This transition demands a fresh approach to computing, focusing on efficiency, architecture, and interdisciplinary collaboration.

The Changing Landscape of Computer Architecture:

The field of computer architecture is undergoing a profound transformation. Architectural limits, a shifting application landscape, and the priority of energy efficiency are reshaping how we approach modern architectures, especially in the context of mobile devices and cloud-based data centers.

The Slowdown of Moore’s Law and Its Implications:

DRAM challenges and the impact on transistor counts and Dennard scaling have brought energy efficiency to the forefront. This slowdown necessitates a fundamental reevaluation of processor designs.

Instruction-Level Parallelism and Its Limits:

Techniques for exploiting instruction-level parallelism (ILP), such as pipelining and multiple issue, now deliver diminishing returns in energy efficiency, signaling the need for architectural approaches that go beyond these traditional techniques.

Branch Prediction and Instruction-Level Parallelism Challenges:

Branch prediction, a key feature of modern processors, illustrates the inherent difficulties and inefficiencies in maximizing ILP. The increasing complexity and overhead of managing incorrect predictions demonstrate the practical limits of this approach.
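To make the mechanism concrete (this is a generic illustration, not an example from the lecture), the classic 2-bit saturating-counter predictor can be sketched in a few lines of Python; real predictors are far more elaborate, which is precisely the complexity and overhead the lecture points to:

```python
class TwoBitPredictor:
    """Classic 2-bit saturating counter: states 0-1 predict
    not-taken, states 2-3 predict taken. Two consecutive
    mispredictions are required to flip the prediction."""
    def __init__(self):
        self.state = 2  # start in "weakly taken"

    def predict(self):
        return self.state >= 2  # True = predict taken

    def update(self, taken):
        if taken:
            self.state = min(3, self.state + 1)
        else:
            self.state = max(0, self.state - 1)

# A loop branch (taken many times, then not taken once at exit)
# is mispredicted only at the exit, not on every iteration.
predictor = TwoBitPredictor()
outcomes = [True] * 9 + [False]   # 9 loop iterations, then exit
mispredicts = 0
for actual in outcomes:
    if predictor.predict() != actual:
        mispredicts += 1
    predictor.update(actual)
print(mispredicts)  # 1 misprediction out of 10 branches
```

Every misprediction forces the processor to discard speculatively executed work, which is where the energy cost of aggressive ILP comes from.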

Shift to Multi-Core Era and Amdahl’s Law:

The transition to multi-core processors, driven by the limits of ILP and the challenges of branch prediction, underscores the pivotal role of Amdahl’s Law. This law highlights the diminishing returns of parallel processing, especially when a fraction of the program remains sequential.

Dark Silicon and Power Consumption:

The concept of ‘dark silicon,’ arising from the end of Dennard scaling, illustrates the challenges in multicore scaling and power management. The limitations of packaging technology and the consequent power and efficiency limits have significant implications for the design and utilization of multicore systems.

Amdahl’s Law and Its Impact on Multi-Core Processors:

Amdahl’s Law states that the speedup of a program on a parallel computer is limited by the portion of the program that must execute sequentially. For multi-core processors, this means overall speedup is capped by the fraction of code confined to a single core. Overcoming Amdahl’s Law in a general-purpose computing environment is challenging: attempts to remove one bottleneck often expose new instances of the law, and the coordination and synchronization of parallel processes themselves become Amdahl’s Law bottlenecks.
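The law can be made concrete with a short calculation (the workload numbers below are illustrative, not from the lecture):

```python
def amdahl_speedup(parallel_fraction, n_cores):
    """Speedup = 1 / ((1 - p) + p / n), where p is the fraction
    of execution time that can be parallelized across n cores."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_cores)

# Even with 90% of a program parallelizable, 64 cores yield well
# under a 10x speedup; as n grows, speedup approaches 1/(1-p) = 10.
for n in (2, 8, 64, 1024):
    print(n, round(amdahl_speedup(0.9, n), 2))
```

The asymptote 1/(1-p) is the crux: adding cores beyond a point buys almost nothing unless the sequential fraction itself shrinks.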

The End of Dennard Scaling and the Rise of Dark Silicon:

The end of Dennard scaling marks the end of multicore scaling as traditionally practiced, leading to the phenomenon of “dark silicon,” where cores are turned off to save energy and reduce heat generation. A core that has been powered down takes significant time to power back up, so the decision of when to turn cores off must be made carefully. Power consumption and heat dissipation are the major constraints on multicore scaling: even with aggressive assumptions about packaging improvements, only a fraction of the total cores on a future chip could be active at any given time.
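The dark-silicon arithmetic can be sketched with the standard dynamic-power relation P ≈ C·V²·f; all numbers below are hypothetical, chosen only to show why a fixed package budget leaves cores dark once voltage no longer scales down with feature size:

```python
def core_power(capacitance, voltage, frequency):
    """Dynamic power of one core, roughly C * V^2 * f."""
    return capacitance * voltage * voltage * frequency

# Assumed figures: a process shrink has packed 100 cores onto the
# chip, but with Dennard scaling over, V and f stay put, so power
# per core does not fall with transistor size.
budget_w = 100.0                           # fixed package power budget
p_core = core_power(1e-9, 1.0, 2e9)        # ~2 W per core (illustrative)
cores_on_chip = 100

active = min(cores_on_chip, int(budget_w / p_core))
dark_fraction = 1 - active / cores_on_chip
print(active, dark_fraction)  # 50 active, half the chip dark
```

Under Dennard scaling, V (and effectively C) fell with each generation, so power density stayed constant and all the extra transistors could be used; once that stopped, the budget, not the transistor count, sets how much of the chip can run.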

Challenges in Multicore Scaling and Potential Solutions:

Power consumption and heat dissipation limit the number of active cores in a processor; liquid cooling is one potential way to remove heat more effectively and allow more cores to stay active. The combination of power limitations and Amdahl’s Law results in a grim outlook for multicore scaling. Alternative approaches, such as returning to efficiency in software by rewriting code in a more efficient language like C, are potential routes for addressing these challenges.
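The software-efficiency point can be demonstrated even without leaving Python (a minimal sketch; absolute timings vary by machine): the same reduction written as an interpreted loop versus a built-in routine that is implemented in C inside the interpreter.

```python
import time

data = list(range(1_000_000))

# Interpreted loop: every iteration pays bytecode-dispatch overhead.
t0 = time.perf_counter()
total_loop = 0
for x in data:
    total_loop += x
t_loop = time.perf_counter() - t0

# CPython's built-in sum() runs the same reduction in compiled C.
t0 = time.perf_counter()
total_builtin = sum(data)
t_builtin = time.perf_counter() - t0

assert total_loop == total_builtin
print(f"loop: {t_loop:.4f}s  builtin: {t_builtin:.4f}s")
```

The gap between the two timings is a small taste of the headroom Hennessy alludes to: recovering efficiency lost to high-level language overhead, before any hardware specialization at all.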

Domain-Specific Architectures (DSAs) and Their Advantages:

Domain-Specific Architectures (DSAs) offer a solution to the limitations of general-purpose architectures by optimizing hardware and software for specific domains. Advantages of DSAs include improved performance and efficiency due to better parallelism, memory bandwidth utilization, and elimination of unnecessary accuracy. They can be tailored to specific domains, enabling customization for a family of related tasks. DSAs are programmable, unlike ASICs, allowing for flexibility and adaptability to changing requirements.

Challenges of Domain-Specific Architectures:

Challenges in developing DSAs include maintaining a niche advantage over general-purpose architectures, developing domain-specific programming models that enable software to align with the hardware’s capabilities, and creating a diverse range of architectures, potentially leading to increased complexity and fragmentation in the computing landscape.

Key Principles for Effective Domain-Specific Architectures:

Effective DSAs employ SIMD (single instruction, multiple data) parallelism for increased efficiency and simplicity, and rely on software analysis to determine parallelism. They optimize memory usage through user-controlled memories, eliminating caches when appropriate, and they reduce unnecessary precision by using smaller data units and relaxed accuracy requirements.
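The “reduce unnecessary precision” principle can be illustrated with a toy linear-quantization scheme, the kind of trick machine-learning DSAs rely on to trade 32-bit floats for 8-bit integers (a minimal sketch with an assumed shared scale factor, not a production quantizer):

```python
def quantize(values, scale):
    """Map floats to the int8 range [-128, 127] via a shared scale."""
    return [max(-128, min(127, round(v / scale))) for v in values]

def dequantize(q_values, scale):
    """Recover approximate floats from the 8-bit codes."""
    return [q * scale for q in q_values]

weights = [0.51, -1.20, 0.07, 0.99]   # illustrative values
scale = 0.01                           # chosen for this toy range
q = quantize(weights, scale)           # 4 bytes instead of 16
restored = dequantize(q, scale)
print(q, restored)
```

The small rounding error is the “relaxed accuracy” being traded away; in exchange, four times as many operands fit in the same memory bandwidth and SIMD datapath width.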

Importance of a Domain-Specific Programming Model:

A domain-specific programming model is crucial for matching software requirements with hardware capabilities and achieving performance gains. Historical examples, such as the ILLIAC-IV, highlight the need for a close relationship between software and hardware design.

Implications for Future Architectures:

Future architectures will require architects to think differently about performance optimization, considering the algorithms and structures of specific domains. The focus should shift from low-level software interfaces to understanding and leveraging the structure of programs. This approach could lead to a proliferation of specialized architectures, posing challenges for system design and integration.

Specialized Architectures for Different Applications:

Specialized architectures optimized for specific applications, such as machine learning and deep neural networks (including those used for autonomous driving), are becoming increasingly important. Examples include giant machines in the cloud for general-purpose deep neural network tasks, phones with processors designed for speech recognition, and virtual reality headsets with processors optimized for virtual and augmented reality applications.

Collaboration Between Algorithm Designers and Hardware/Software Experts:

To effectively utilize specialized architectures, collaboration between algorithm designers, application experts, software developers, and hardware engineers is essential.

Design Cost Considerations:

Designing multiple specialized architectures can be costly. Efforts should be made to reduce the design costs of these architectures.

Rethinking Hardware-Software Interfaces:

Rethinking the interfaces between hardware and software can help bridge the gap between specialized architectures and traditional silicon-based computing.

Continued Innovation in Silicon-Based Computing:

Ongoing innovation in silicon-based computing is necessary to maintain the benefits of Moore’s Law and ensure a smooth transition to specialized architectures.



Hennessy’s lecture at the Distinguished Seminar on Electronic Systems Technology, reflecting on John Linville’s legacy and the current challenges in electronic systems and computer architecture, marks a critical juncture. The industry stands at a crossroads, with the slowdown of Moore’s Law and the rise of DSAs and dark silicon shaping the future of computing. As we navigate this post-Moore era, a rethinking of hardware-software paradigms, a focus on energy efficiency, and interdisciplinary collaboration emerge as key pathways forward.


Notes by: Alkaid