計(jì)算機(jī)組成與設(shè)計(jì)(硬件/軟件接口 MIPS版 英文版 第5版 亞洲版)
定 價(jià):139 元
叢書(shū)名:經(jīng)典原版書(shū)庫(kù)
- 作者:[美] David A.Patterson,John L.Hennessy
- 出版時(shí)間:2014/2/1
- ISBN:9787111453161
- 出 版 社:機(jī)械工業(yè)出版社
- 中圖法分類:TP303
- 頁(yè)碼:704
- 紙張:膠版紙
- 版次:5
- 開(kāi)本:16K
This best-selling classic text on computer organization and design by Patterson and Hennessy has been thoroughly revised to address the revolutionary changes taking place in computer architecture in the post-PC era (the shift from uniprocessors to multicore microprocessors, and from serial to parallel execution), with new emphasis on the emerging fields of mobile computing and cloud computing. To explore and highlight this profound shift, this fifth edition updates much of its content, focusing on tablet computers, cloud infrastructure, and the ARM (mobile computing devices) and x86 (cloud computing) architectures.
Because a correct understanding of modern hardware is essential for achieving good performance and energy efficiency, this edition adds a new running example, "Going Faster," used throughout the book to demonstrate highly effective optimization techniques. It also adds a new discussion of the "Eight Great Ideas" of computer architecture.
As in previous editions, the book uses the MIPS processor to present the fundamentals of computer hardware technology, assembly language, computer arithmetic, pipelining, the memory hierarchy, and I/O.
This edition includes new examples, exercises, and material highlighting the emergence of mobile computing and the cloud. It covers the revolutionary shift from serial to parallel computing, devoting a full chapter to parallel processors and including sections in every chapter that highlight parallel hardware and software topics. The Intel Core i7, ARM Cortex-A8, and NVIDIA Fermi GPU serve as real-world examples throughout. A new "Going Faster" example shows how a correct understanding of hardware can inspire software optimizations that improve performance by a factor of 200. The book also introduces and emphasizes the "Eight Great Ideas" of computer architecture: Performance via Parallelism; Performance via Pipelining; Performance via Prediction; Design for Moore's Law; Hierarchy of Memories; Abstraction to Simplify Design; Make the Common Case Fast; and Dependability via Redundancy. The exercises have been fully updated and improved.
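To give a flavor of the kind of cache-aware optimization the "Going Faster" sections build toward (Section 5.14, "Cache Blocking and Matrix Multiply"), here is a minimal sketch in C. It is not the book's own code; the matrix size `N`, block size `BLOCK`, and the function name `dgemm_blocked` are illustrative assumptions.

```c
/* Illustrative sketch, not the book's code: a cache-blocked matrix multiply
 * in the spirit of the "Going Faster" sections. N and BLOCK are assumed
 * values; BLOCK would be tuned so three BLOCK x BLOCK tiles fit in cache. */
#include <stddef.h>

#define N     512   /* assumed matrix dimension (multiple of BLOCK) */
#define BLOCK 32    /* assumed tile size */

/* C += A * B for N x N row-major matrices. Working on BLOCK x BLOCK tiles
 * keeps the data being reused resident in cache between accesses. */
void dgemm_blocked(const double *A, const double *B, double *C)
{
    for (size_t ii = 0; ii < N; ii += BLOCK)
        for (size_t jj = 0; jj < N; jj += BLOCK)
            for (size_t kk = 0; kk < N; kk += BLOCK)
                for (size_t i = ii; i < ii + BLOCK; ++i)
                    for (size_t j = jj; j < jj + BLOCK; ++j) {
                        double sum = C[i * N + j];
                        for (size_t k = kk; k < kk + BLOCK; ++k)
                            sum += A[i * N + k] * B[k * N + j];
                        C[i * N + j] = sum;
                    }
}
```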
John L. Hennessy is President of Stanford University, a Fellow of the IEEE and ACM, and a member of the National Academy of Engineering and the American Academy of Arts and Sciences. For his outstanding contributions to RISC technology, Professor Hennessy received the 2001 Eckert-Mauchly Award; he is also the recipient of the 2001 Seymour Cray Computer Engineering Award, and he shared the 2000 John von Neumann Medal with David A. Patterson.

David A. Patterson is a Professor of Computer Science at the University of California, Berkeley, a member of the National Academy of Engineering and the National Academy of Sciences, and a Fellow of the IEEE and ACM. For his teaching he has received the Distinguished Teaching Award of the University of California, the ACM Karlstrom Award, and the IEEE Mulligan Education Medal and Undergraduate Teaching Award. For his contributions to RISC he received the IEEE Technical Achievement Award and the ACM Eckert-Mauchly Award, and his work on RAID earned him the IEEE Johnson Information Storage Award. He also shared the IEEE John von Neumann Medal and the NEC C&C Prize with John L. Hennessy. Patterson is a Fellow of the American Academy of Arts and Sciences and the Computer History Museum, and was elected to the Silicon Valley Engineering Hall of Fame. He has served on the President's Information Technology Advisory Committee, and has been Chair of the Computer Science Division of the EECS Department at UC Berkeley, Chair of the Computing Research Association (CRA), and President of the ACM. This record of service earned him Distinguished Service Awards from the ACM and CRA.
Preface v
About the Author xiii
CHAPTERS
1 Computer Abstractions and Technology 2
1.1 Introduction 3
1.2 Eight Great Ideas in Computer Architecture 11
1.3 Below Your Program 13
1.4 Under the Covers 16
1.5 Technologies for Building Processors and Memory 24
1.6 Performance 28
1.7 The Power Wall 40
1.8 The Sea Change: The Switch from Uniprocessors to Multiprocessors 43
1.9 Real Stuff: Benchmarking the Intel Core i7 46
1.10 Fallacies and Pitfalls 49
1.11 Concluding Remarks 52
1.12 Historical Perspective and Further Reading 54
1.13 Exercises 54
2 Instructions: Language of the Computer 60
2.1 Introduction 62
2.2 Operations of the Computer Hardware 63
2.3 Operands of the Computer Hardware 66
2.4 Signed and Unsigned Numbers 73
2.5 Representing Instructions in the Computer 80
2.6 Logical Operations 87
2.7 Instructions for Making Decisions 90
2.8 Supporting Procedures in Computer Hardware 96
2.9 MIPS Addressing for 32-Bit Immediates and Addresses 106
2.10 Parallelism and Instructions: Synchronization 116
2.11 Translating and Starting a Program 118
2.12 A C Sort Example to Put It All Together 126
2.13 Advanced Material: Compiling C 134
2.14 Real Stuff: ARMv7 (32-bit) Instructions 134
2.15 Real Stuff: x86 Instructions 138
2.16 Real Stuff: ARMv8 (64-bit) Instructions 147
2.17 Fallacies and Pitfalls 148
2.18 Concluding Remarks 150
2.19 Historical Perspective and Further Reading 152
2.20 Exercises 153
3 Arithmetic for Computers 164
3.1 Introduction 166
3.2 Addition and Subtraction 166
3.3 Multiplication 171
3.4 Division 177
3.5 Floating Point 184
3.6 Parallelism and Computer Arithmetic: Subword Parallelism 210
3.7 Real Stuff: Streaming SIMD Extensions and Advanced Vector Extensions in x86 212
3.8 Going Faster: Subword Parallelism and Matrix Multiply 213
3.9 Fallacies and Pitfalls 217
3.10 Concluding Remarks 220
3.11 Historical Perspective and Further Reading 224
3.12 Exercises 225
4 The Processor 230
4.1 Introduction 232
4.2 Logic Design Conventions 236
4.3 Building a Datapath 239
4.4 A Simple Implementation Scheme 247
4.5 An Overview of Pipelining 260
4.6 Pipelined Datapath and Control 274
4.7 Data Hazards: Forwarding versus Stalling 291
4.8 Control Hazards 304
4.9 Exceptions 313
4.10 Parallelism via Instructions 320
4.11 Real Stuff: The ARM Cortex-A8 and Intel Core i7 Pipelines 332
4.12 Going Faster: Instruction-Level Parallelism and Matrix Multiply 339
4.13 Advanced Topic: An Introduction to Digital Design Using a Hardware Design Language to Describe and Model a Pipeline and More Pipelining Illustrations 342
4.14 Fallacies and Pitfalls 343
4.15 Concluding Remarks 344
4.16 Historical Perspective and Further Reading 345
4.17 Exercises 345
5 Large and Fast: Exploiting Memory Hierarchy 360
5.1 Introduction 362
5.2 Memory Technologies 366
5.3 The Basics of Caches 371
5.4 Measuring and Improving Cache Performance 386
5.5 Dependable Memory Hierarchy 406
5.6 Virtual Machines 412
5.7 Virtual Memory 415
5.8 A Common Framework for Memory Hierarchy 442
5.9 Using a Finite-State Machine to Control a Simple Cache 449
5.10 Parallelism and Memory Hierarchies: Cache Coherence 454
5.11 Parallelism and Memory Hierarchy: Redundant Arrays of Inexpensive Disks 458
5.12 Advanced Material: Implementing Cache Controllers 458
5.13 Real Stuff: The ARM Cortex-A8 and Intel Core i7 Memory Hierarchies 459
5.14 Going Faster: Cache Blocking and Matrix Multiply 463
5.15 Fallacies and Pitfalls 466
5.16 Concluding Remarks 470
5.17 Historical Perspective and Further Reading 471
5.18 Exercises 471
6 Parallel Processors from Client to Cloud 488
6.1 Introduction 490
6.2 The Difficulty of Creating Parallel Processing Programs 492
6.3 SISD, MIMD, SIMD, SPMD, and Vector 497
6.4 Hardware Multithreading 504
6.5 Multicore and Other Shared Memory Multiprocessors 507
6.6 Introduction to Graphics Processing Units 512
6.7 Clusters, Warehouse Scale Computers, and Other Message-Passing Multiprocessors 519
6.8 Introduction to Multiprocessor Network Topologies 524
6.9 Communicating to the Outside World: Cluster Networking 527
6.10 Multiprocessor Benchmarks and Performance Models 528
6.11 Real Stuff: Benchmarking Intel Core i7 versus NVIDIA Tesla GPU 538
6.12 Going Faster: Multiple Processors and Matrix Multiply 543
6.13 Fallacies and Pitfalls 546
6.14 Concluding Remarks 548
6.15 Historical Perspective and Further Reading 551
6.16 Exercises 551
APPENDICES
A Assemblers, Linkers, and the SPIM Simulator A-2
A.1 Introduction A-3
A.2 Assemblers A-10
A.3 Linkers A-18
A.4 Loading A-19
A.5 Memory Usage A-20
A.6 Procedure Call Convention A-22
A.7 Exceptions and Interrupts A-33
A.8 Input and Output A-38
A.9 SPIM A-40
A.10 MIPS R2000 Assembly Language A-45
A.11 Concluding Remarks A-81
A.12 Exercises A-82
B TH-2 High Performance Computing System B-2
B.1 Introduction B-3
B.2 Compute Node B-3
B.3 The Frontend Processors B-5
B.4 The Interconnect B-6
B.5 The Software Stack B-7
B.6 LINPACK Benchmark Run (HPL) B-7
B.7 Concluding Remarks B-8
F Networks-on-Chip F-2
F.1 Introduction F-3
F.2 Communication Centric Design F-3
F.3 The Design Space Exploration of NoCs F-5
F.4 Router Micro-architecture F-8
F.5 Performance Metric F-9
F.6 Concluding Remarks F-9
Index I-1