Compilers: Principles, Techniques, and Tools is the irreplaceable classic of the compiler field, known to computer professionals everywhere as the "Dragon Book". Since its previous edition was published in 1986, it has been used as the textbook for undergraduate and graduate compiler courses at leading universities and research institutions around the world, including Columbia University, Stanford University, Harvard University, Princeton University, and Bell Labs. The book has also had a major influence on computer science education in China.
The second edition thoroughly revises every chapter to reflect the impact on compiler technology of developments in software engineering, programming languages, and computer architecture in the more than twenty years since the previous edition appeared.
The book gives a comprehensive introduction to compiler design and emphasizes the broad application of compiler technology in software design and development. Every chapter contains numerous exercises and extensive references. It is suitable as a textbook for undergraduate and graduate courses on compiler principles and techniques, and it also serves as a reference for practicing computer professionals.
In the time since the 1986 edition of this book, the world of compiler design has changed significantly. Programming languages have evolved to present new compilation problems. Computer architectures offer a variety of resources of which the compiler designer must take advantage. Perhaps most interestingly, the venerable technology of code optimization has found use outside compilers. It is now used in tools that find bugs in software, and most importantly, find security holes in existing code. And much of the "front-end" technology (grammars, regular expressions, parsers, and syntax-directed translators) is still in wide use.
Thus, our philosophy from previous versions of the book has not changed. We recognize that few readers will build, or even maintain, a compiler for a major programming language. Yet the models, theory, and algorithms associated with a compiler can be applied to a wide range of problems in software design and software development. We therefore emphasize problems that are most commonly encountered in designing a language processor, regardless of the source language or target machine.

Use of the Book

It takes at least two quarters or even two semesters to cover all or most of the material in this book. It is common to cover the first half in an undergraduate course and the second half of the book, stressing code optimization, in a second course at the graduate or mezzanine level. Here is an outline of the chapters:

Chapter 1 contains motivational material and also presents some background issues in computer architecture and programming-language principles.

Chapter 2 develops a miniature compiler and introduces many of the important concepts, which are then developed in later chapters. The compiler itself appears in the appendix.

Chapter 3 covers lexical analysis, regular expressions, finite-state machines, and scanner-generator tools. This material is fundamental to text-processing of all sorts.
Alfred V. Aho is a professor at Columbia University. He is a member of the U.S. National Academy of Engineering, a fellow of the ACM and the IEEE, and a recipient of the IEEE John von Neumann Medal. He is the author of several books on algorithms, data structures, compilers, database systems, and the foundations of computer science.
Monica S. Lam is a professor in the Computer Science Department at Stanford University. She has served as chief scientist of Tensilica and was the founding CEO of Moka5. She led the SUIF project, which produced one of the most popular research compilers.
Ravi Sethi is president of Avaya Labs. He previously served as a senior vice president at Bell Labs and as CTO for communications software at Lucent Technologies. He has taught at Pennsylvania State University, Arizona State University, and Princeton University, and he is an ACM fellow.
Jeffrey D. Ullman is a professor in the Computer Science Department at Stanford University and CEO of Gradiance. His research interests include database theory, database integration, data mining, and education via the information infrastructure. He is a member of the National Academy of Engineering and an IEEE fellow, and he has received the ACM Karlstrom Outstanding Educator Award and the Knuth Prize.
1 introduction
1.1 language processors
1.2 the structure of a compiler
1.3 the evolution of programming languages
1.4 the science of building a compiler
1.5 applications of compiler technology
1.6 programming language basics
1.7 summary of chapter 1
1.8 references for chapter 1
2 a simple syntax-directed translator
2.1 introduction
2.2 syntax definition
2.3 syntax-directed translation
2.4 parsing
2.5 a translator for simple expressions
2.6 lexical analysis
2.7 symbol tables
2.8 intermediate code generation
2.9 summary of chapter 2
3 lexical analysis
3.1 the role of the lexical analyzer
3.2 input buffering
3.3 specification of tokens
3.4 recognition of tokens
3.5 the lexical-analyzer generator lex
3.6 finite automata
3.7 from regular expressions to automata
3.8 design of a lexical-analyzer generator
3.9 optimization of dfa-based pattern matchers
3.10 summary of chapter 3
3.11 references for chapter 3
4 syntax analysis
4.1 introduction
4.2 context-free grammars
4.3 writing a grammar
4.4 top-down parsing
4.5 bottom-up parsing
4.6 introduction to lr parsing: simple lr
4.7 more powerful lr parsers
4.8 using ambiguous grammars
4.9 parser generators
4.10 summary of chapter 4
4.11 references for chapter 4
5 syntax-directed translation
5.1 syntax-directed definitions
5.2 evaluation orders for sdd's
5.3 applications of syntax-directed translation
5.4 syntax-directed translation schemes
5.5 implementing l-attributed sdd's
5.6 summary of chapter 5
5.7 references for chapter 5
6 intermediate-code generation
6.1 variants of syntax trees
6.2 three-address code
6.3 types and declarations
6.4 translation of expressions
6.5 type checking
6.6 control flow
6.7 backpatching
6.8 switch-statements
6.9 intermediate code for procedures
6.10 summary of chapter 6
6.11 references for chapter 6
7 run-time environments
7.1 storage organization
7.2 stack allocation of space
7.3 access to nonlocal data on the stack
7.4 heap management
7.5 introduction to garbage collection
7.6 introduction to trace-based collection
7.7 short-pause garbage collection
7.8 advanced topics in garbage collection
7.9 summary of chapter 7
7.10 references for chapter 7
8 code generation
8.1 issues in the design of a code generator
8.2 the target language
8.3 addresses in the target code
8.4 basic blocks and flow graphs
8.5 optimization of basic blocks
8.6 a simple code generator
8.7 peephole optimization
8.8 register allocation and assignment
8.9 instruction selection by tree rewriting
8.10 optimal code generation for expressions
8.11 dynamic programming code-generation
8.12 summary of chapter 8
8.13 references for chapter 8
9 machine-independent optimizations
9.1 the principal sources of optimization
9.2 introduction to data-flow analysis
9.3 foundations of data-flow analysis
9.4 constant propagation
9.5 partial-redundancy elimination
9.6 loops in flow graphs
9.7 region-based analysis
9.8 symbolic analysis
9.9 summary of chapter 9
9.10 references for chapter 9
10 instruction-level parallelism
10.1 processor architectures
10.2 code-scheduling constraints
10.3 basic-block scheduling
10.4 global code scheduling
10.5 software pipelining
10.6 summary of chapter 10
10.7 references for chapter 10
11 optimizing for parallelism and locality
11.1 basic concepts
11.2 matrix multiply: an in-depth example
11.3 iteration spaces
11.4 affine array indexes
11.5 data reuse
11.6 array data-dependence analysis
11.7 finding synchronization-free parallelism
11.8 synchronization between parallel loops
11.9 pipelining
11.10 locality optimizations
11.11 other uses of affine transforms
11.12 summary of chapter 11
11.13 references for chapter 11
12 interprocedural analysis
12.1 basic concepts
12.2 why interprocedural analysis?
12.3 a logical representation of data flow
12.4 a simple pointer-analysis algorithm
12.5 context-insensitive interprocedural analysis
12.6 context-sensitive pointer analysis
12.7 datalog implementation by bdd's
12.8 summary of chapter 12
12.9 references for chapter 12
a a complete front end
a.1 the source language
a.2 main
a.3 lexical analyzer
a.4 symbol tables and types
a.5 intermediate code for expressions
a.6 jumping code for boolean expressions
a.7 intermediate code for statements
a.8 parser
a.9 creating the front end
b finding linearly independent solutions
index
Query languages, especially SQL (Structured Query Language), are used to search databases. Database queries consist of predicates containing relational and boolean operators. They can be interpreted or compiled into commands to search a database for records satisfying that predicate, as the sketch below illustrates.

Compiled Simulation

Simulation is a general technique used in many scientific and engineering disciplines to understand a phenomenon or to validate a design. Inputs to a simulator usually include the description of the design and specific input parameters for that particular simulation run. Simulations can be very expensive. We typically need to simulate many possible design alternatives on many different input sets, and each experiment may take days to complete on a high-performance machine. Instead of writing a simulator that interprets the design, it is faster to compile the design to produce machine code that simulates that particular design natively. Compiled simulation can run orders of magnitude faster than an interpreter-based approach. Compiled simulation is used in many state-of-the-art tools that simulate designs written in Verilog or VHDL.

1.5.5 Software Productivity Tools

Programs are arguably the most complicated engineering artifacts ever produced; they consist of many, many details, every one of which must be correct before the program will work completely. As a result, errors are rampant in programs; errors may crash a system, produce wrong results, render a system vulnerable to security attacks, or even lead to catastrophic failures in critical systems. Testing is the primary technique for locating errors in programs.
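The interpreted-versus-compiled distinction drawn above for database queries and simulations can be made concrete with a small sketch. Everything in it is an illustrative assumption rather than anything from the book: the Employee record, the one-operator predicate "tree", and the use of a Java closure to stand in for generated machine code or a query plan.

import java.util.List;
import java.util.function.Predicate;

// Hypothetical record type used only for this illustration.
record Employee(String name, int age) {}

// A one-node predicate "tree": field > constant (e.g. WHERE age > 30).
record GreaterThan(String field, int constant) {

    // Interpreted evaluation: the predicate tree is re-examined for every record.
    boolean interpret(Employee e) {
        int value = field.equals("age") ? e.age() : 0;   // toy field lookup
        return value > constant;
    }

    // "Compiled" evaluation: translate the tree once into executable code
    // (a Java closure standing in for generated machine code).
    Predicate<Employee> compile() {
        if (field.equals("age")) {
            int c = constant;                 // the constant is baked into the code
            return e -> e.age() > c;
        }
        return e -> false;
    }
}

public class QueryDemo {
    public static void main(String[] args) {
        var table = List.of(new Employee("Ann", 41), new Employee("Bob", 25));
        var query = new GreaterThan("age", 30);

        // Interpreted: walk the predicate tree once per record.
        table.stream().filter(query::interpret).forEach(System.out::println);

        // Compiled: build the filter once, then reuse it for every record.
        Predicate<Employee> filter = query.compile();
        table.stream().filter(filter).forEach(System.out::println);
    }
}

The compiled form pays the translation cost once, after which each record is tested by a direct field comparison rather than by re-walking the predicate, which is the same reason compiled simulation outruns an interpreter-based simulator.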
An interesting and promising complementary approach is to use data-flow analysis to locate errors statically (that is, before the program is run). Data-flow analysis can find errors along all the possible execution paths, and not just those exercised by the input data sets, as in the case of program testing. Many of the data-flow-analysis techniques, originally developed for compiler optimizations, can be used to create tools that assist programmers in their software engineering tasks.
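As a rough illustration of this idea (a simplified sketch, not an algorithm reproduced from the book), the code below runs a forward "may be null" data-flow analysis over a hand-built control-flow graph for the fragment p = null; if (c) p = new Object(); p.use(). A test suite that only exercises the branch where c is true never sees a failure, but the analysis propagates facts along both paths and flags the dereference that can be reached while p is still null. The Node encoding and the tiny statement kinds are assumptions made for the example.

import java.util.*;

public class NullnessDemo {

    // One statement per CFG node, in a deliberately tiny language.
    enum Kind { ASSIGN_NULL, ASSIGN_NEW, DEREF, NOP }
    record Node(int id, Kind kind, String varName, List<Integer> succs) {}

    public static void main(String[] args) {
        // CFG for:  p = null; if (c) p = new Object(); p.use();
        List<Node> cfg = List.of(
            new Node(0, Kind.ASSIGN_NULL, "p", List.of(1)),     // p = null
            new Node(1, Kind.NOP,        null, List.of(2, 3)),  // if (c)
            new Node(2, Kind.ASSIGN_NEW, "p", List.of(4)),      //   p = new Object()
            new Node(3, Kind.NOP,        null, List.of(4)),     // else: nothing
            new Node(4, Kind.DEREF,      "p", List.of())        // p.use()
        );

        // IN[n] = set of variables that may be null on entry to node n.
        List<Set<String>> in = new ArrayList<>();
        for (int i = 0; i < cfg.size(); i++) in.add(new HashSet<>());

        // Forward may-analysis: apply transfer functions until a fixpoint.
        boolean changed = true;
        while (changed) {
            changed = false;
            for (Node n : cfg) {
                Set<String> out = new HashSet<>(in.get(n.id()));
                switch (n.kind()) {
                    case ASSIGN_NULL -> out.add(n.varName());     // x may now be null
                    case ASSIGN_NEW  -> out.remove(n.varName());  // x is definitely non-null
                    default -> {}                                 // DEREF, NOP: no change
                }
                for (int s : n.succs())
                    if (in.get(s).addAll(out)) changed = true;    // meet = set union
            }
        }

        // Report dereferences reachable on some path with the variable still null.
        for (Node n : cfg)
            if (n.kind() == Kind.DEREF && in.get(n.id()).contains(n.varName()))
                System.out.println("warning: " + n.varName()
                        + " may be null at node " + n.id());
    }
}

Because the meet operation is set union, a variable is reported as possibly null whenever any path into the dereference can deliver a null value, regardless of which paths a test run happens to take.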
The problem of finding all program errors is undecidable. A data-flow analysis may be designed to warn the programmers of all possible statements with a particular category of errors. But if most of these warnings are false alarms, users will not use the tool. Thus, practical error detectors are often neither sound nor complete. That is, they may not find all the errors in the program, and not all errors reported are guaranteed to be real errors. Nonetheless, various static analyses have been developed and shown to be effective in finding errors, such as dereferencing null or freed pointers, in real programs. The fact that error detectors may be unsound makes them significantly different from compiler optimizations. Optimizers must be conservative and cannot alter the semantics of the program under any circumstances.
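For instance, a path-insensitive checker along the lines of the sketch above has no notion of correlated branches, so it raises a false alarm on a made-up fragment like the following: the dereference is guarded by the same condition under which the assignment happened, so it can never actually see null at run time, yet the analysis merges both paths at the join point and warns anyway.

// Hypothetical code that a path-insensitive null checker flags spuriously.
class VerboseReport {
    static void report(boolean verbose) {
        StringBuilder log = null;
        if (verbose) {
            log = new StringBuilder("details");  // only path on which log becomes non-null
        }
        // ... unrelated work ...
        if (verbose) {
            System.out.println(log.length());    // safe at run time: verbose was true above,
        }                                        // but the merged data-flow facts still
    }                                            // say log may be null here
}

An optimizer could not act on such an uncertain fact, but an error detector may choose to report it and accept the occasional false alarm.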
……