Computer Architecture
This book covers the fundamentals of computer system design, instruction set architecture, pipelining and instruction-level parallelism, hierarchical memory systems and storage devices, interconnection networks, multiprocessor systems, cloud computing, and mobile client technology.
Much of the improvement in computer performance over the last 40 years has been provided by computer architecture advancements that have leveraged Moore's Law and Dennard scaling to build larger and more parallel systems. Moore's Law is the observation that the maximum number of transistors in an integrated circuit doubles approximately every two years. Dennard scaling refers to the reduction of MOS supply voltage in concert with the scaling of feature sizes, so that as transistors get smaller, their power density stays roughly constant. With the end of Dennard scaling a decade ago, and the recent slowdown of Moore's Law due to a combination of physical limitations and economic factors, the sixth edition of the preeminent textbook for our field couldn't be more timely. Here are some reasons.
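Before turning to those reasons, a quick illustration of the arithmetic behind Moore's Law may help. The following minimal Python sketch is my own illustration, not material from the book; the 1971 Intel 4004 baseline of roughly 2300 transistors and the exact two-year doubling period are assumptions chosen for the example.

def moores_law_transistors(year, base_year=1971, base_count=2300, doubling_years=2.0):
    """Projected count N(t) = N0 * 2**((t - t0) / doubling_period)."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

# Twenty doublings between 1971 and 2011 predict counts in the billions,
# which is roughly where high-end processors of that era actually landed.
for year in (1971, 1991, 2011):
    print(f"{year}: ~{moores_law_transistors(year):,.0f} transistors")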
First, because domain-specific architectures can provide performance and power benefits equivalent to three or more historical generations of Moore's Law and Dennard scaling, they can now provide better implementations than may ever be possible with future scaling of general-purpose architectures. And with the diverse application space of computers today, there are many potential areas for architectural innovation with domain-specific architectures. Second, high-quality implementations of open-source architectures now have a much longer lifetime due to the slowdown in Moore's Law. This gives them more opportunities for continued optimization and refinement, and hence makes them more attractive. Third, with the slowing of Moore's Law, different technology components have been scaling heterogeneously. Furthermore, new technologies such as 2.5D stacking, new nonvolatile memories, and optical interconnects have been developed to provide more than Moore's Law can supply alone. To use these new technologies and nonhomogeneous scaling effectively, fundamental design decisions need to be reexamined from first principles. Hence it is important for students, professors, and practitioners in the industry to be skilled in a wide range of both old and new architectural techniques. All told, I believe this is the most exciting time in computer architecture since the industrial exploitation of instruction-level parallelism in microprocessors 25 years ago.
The largest change in this edition is the addition of a new chapter on domain-specific architectures. It's long been known that customized domain-specific architectures can have higher performance and lower power, and require less silicon area, than general-purpose processor implementations. However, when general-purpose processors were increasing in single-threaded performance by 40% per year (see Fig. 1.11), the extra time to market required to develop a custom architecture vs. using a leading-edge standard microprocessor could cause the custom architecture to lose much of its advantage. In contrast, today single-core performance is improving very slowly, meaning that the benefits of custom architectures will not be made obsolete by general-purpose processors for a very long time, if ever. Chapter 7 covers several domain-specific architectures. Deep neural networks have very high computation requirements but lower data precision requirements; this combination can benefit significantly from custom architectures. Two example architectures and implementations for deep neural networks are presented: one optimized for inference and a second optimized for training. Image processing is another example domain; it also has high computation demands and benefits from lower-precision data types. Furthermore, since it is often found in mobile devices, the power savings from custom architectures are also very valuable. Finally, by nature of their reprogrammability, FPGA-based accelerators can be used to implement a variety of different domain-specific architectures on a single device. They also can benefit more irregular applications that are frequently updated, like accelerating internet search.
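To make the low-precision point concrete, here is a minimal sketch of symmetric 8-bit quantization, my own illustration and not an excerpt from Chapter 7: float weights and activations are mapped to int8, the dot product is accumulated in 32-bit integer arithmetic (the kind of work an accelerator's dense integer multiply-accumulate units perform), and a single rescale at the end recovers an approximate floating-point result.

import numpy as np

def quantize_int8(x):
    # Symmetric per-tensor quantization: map values into [-127, 127]
    # with one shared scale factor.
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.normal(size=256).astype(np.float32)   # weights
a = rng.normal(size=256).astype(np.float32)   # activations

qw, sw = quantize_int8(w)
qa, sa = quantize_int8(a)

# Integer multiply-accumulate (widened to int32 so the sum cannot
# overflow), then a single rescale back to floating point.
approx = np.dot(qw.astype(np.int32), qa.astype(np.int32)) * sw * sa
exact = np.dot(w, a)
print(f"fp32: {exact:+.4f}   int8 approximation: {approx:+.4f}")

The approximation error is usually small for inference workloads, which is why inference-oriented designs can trade floating-point units for far denser, lower-energy integer ones.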
Although important concepts of architecture are timeless, this edition has been thoroughly updated with the latest technology developments, costs, examples, and references. Keeping pace with recent developments in open-source architecture, the instruction set architecture used in the book has been updated to use the RISC-V ISA.
On a personal note, after enjoying the privilege of working with John as a graduate student, I am now enjoying the privilege of working with Dave at Google. What an amazing duo!
John L. Hennessy and Patterson jointly received the 2017 Turing Award in recognition of their pioneering contributions to computer architecture. Hennessy is chairman of Alphabet, Google's parent company, and previously served as the tenth president of Stanford University. He is a Fellow of the IEEE and ACM and a member of the National Academy of Engineering, the National Academy of Sciences, the American Philosophical Society, and the American Academy of Arts and Sciences. He began the MIPS project in 1981 and later founded MIPS Computer Systems, which developed one of the first commercial RISC microprocessors. He also led the DASH project, which designed a prototype scalable cache-coherent multiprocessor.
David A. Patterson shared the 2017 Turing Award with Hennessy. Patterson is a Distinguished Engineer at Google and was previously a professor at the University of California, Berkeley. A past president of the ACM, he is a Fellow of the ACM and IEEE, a member of the American Academy of Arts and Sciences, a Fellow of the Computer History Museum, and has been elected to the National Academy of Engineering, the National Academy of Sciences, and the Silicon Valley Engineering Hall of Fame. He led the design and implementation of RISC I and was a leader of the RAID project.
Chapter 1 Fundamentals of Quantitative Design and Analysis
1.1 Introduction 2
1.2 Classes of Computers 6
1.3 Defining Computer Architecture 11
1.4 Trends in Technology 18
1.5 Trends in Power and Energy in Integrated Circuits 23
1.6 Trends in Cost 29
1.7 Dependability 36
1.8 Measuring, Reporting, and Summarizing Performance 39
1.9 Quantitative Principles of Computer Design 48
1.10 Putting It All Together: Performance, Price, and Power 55
1.11 Fallacies and Pitfalls 58
1.12 Concluding Remarks 64
1.13 Historical Perspectives and References 67
Case Studies and Exercises by Diana Franklin 67
Chapter 2 Memory Hierarchy Design
2.1 Introduction 78
2.2 Memory Technology and Optimizations 84
2.3 Ten Advanced Optimizations of Cache Performance 94
2.4 Virtual Memory and Virtual Machines 118
2.5 Cross-Cutting Issues: The Design of Memory Hierarchies 126
2.6 Putting It All Together: Memory Hierarchies in the ARM Cortex-A53 and Intel Core i7 6700 129
2.7 Fallacies and Pitfalls 142
2.8 Concluding Remarks: Looking Ahead 146
2.9 Historical Perspectives and References 148
Case Studies and Exercises by Norman P. Jouppi, Rajeev Balasubramonian, Naveen Muralimanohar, and Sheng Li
Chapter 3 Instruction-Level Parallelism and Its Exploitation
3.1 Instruction-Level Parallelism: Concepts and Challenges 168
3.2 Basic Compiler Techniques for Exposing ILP 176
3.3 Reducing Branch Costs With Advanced Branch Prediction 182
3.4 Overcoming Data Hazards With Dynamic Scheduling 191
3.5 Dynamic Scheduling: Examples and the Algorithm 201
3.6 Hardware-Based Speculation 208
3.7 Exploiting ILP Using Multiple Issue and Static Scheduling 218
3.8 Exploiting ILP Using Dynamic Scheduling, Multiple Issue, and Speculation 222
3.9 Advanced Techniques for Instruction Delivery and Speculation 228
3.10 Cross-Cutting Issues 240
3.11 Multithreading: Exploiting Thread-Level Parallelism to Improve Uniprocessor Throughput 242
3.12 Putting It All Together: The Intel Core i7 6700 and ARM Cortex-A53 247
3.13 Fallacies and Pitfalls 258
3.14 Concluding Remarks: What’s Ahead? 264
3.15 Historical Perspective and References 266
Case Studies and Exercises by Jason D. Bakos and Robert P. Colwell 266
Chapter 4 Data-Level Parallelism in Vector, SIMD, and GPU Architectures
4.1 Introduction 282
4.2 Vector Architecture 283
4.3 SIMD Instruction Set Extensions for Multimedia 304
4.4 Graphics Processing Units 310
4.5 Detecting and Enhancing Loop-Level Parallelism 336
4.6 Cross-Cutting Issues 345
4.7 Putting It All Together: Embedded Versus Server GPUs and Tesla Versus Core i7 346
4.8 Fallacies and Pitfalls 353
4.9 Concluding Remarks 357
4.10 Historical Perspective and References 357
Case Study and Exercises by Jason D. Bakos 357
Chapter 5 Thread-Level Parallelism
5.1 Introduction 368
5.2 Centralized Shared-Memory Architectures 377
5.3 Performance of Symmetric Shared-Memory Multiprocessors 393
5.4 Distributed Shared-Memory and Directory-Based Coherence 404
5.5 Synchronization: The Basics 412
5.6 Models of Memory Consistency: An Introduction 417
5.7 Cross-Cutting Issues 422
5.8 Putting It All Together: Multicore Processors and Their Performance 426
5.9 Fallacies and Pitfalls 438
5.10 The Future of Multicore Scaling 442
5.11 Concluding Remarks 444
5.12 Historical Perspectives and References 445
Case Studies and Exercises by Amr Zaky and David A. Wood 446
Chapter 6 Warehouse-Scale Computers to Exploit Request-Level and Data-Level Parallelism
6.1 Introduction 466
6.2 Programming Models and Workloads for Warehouse-Scale Computers 471
6.3 Computer Architecture of Warehouse-Scale Computers 477
6.4 The Efficiency and Cost of Warehouse-Scale Computers 482
6.5 Cloud Computing: The Return of Utility Computing 490
6.6 Cross-Cutting Issues 501
6.7 Putting It All Together: A Google Warehouse-Scale Computer 503
6.8 Fallacies and Pitfalls 514
6.9 Concluding Remarks 518
6.10 Historical Perspectives and References 519
Case Studies and Exercises by Parthasarathy Ranganathan