For mathematics in our country to develop further, mathematicians themselves must remain indifferent to fame and fortune and work still harder. At the same time, we must also create a more favorable external environment for the development of mathematics, above all by strengthening support for and investment in the field, so that mathematicians enjoy better working and living conditions; this includes improving and strengthening mathematical publishing.
By reprinting a batch of good new books and making them available at lower prices, Science Press allows mathematicians throughout the country, and in particular those working in remote areas, to obtain these books readily. This is undoubtedly of great benefit in advancing mathematical research and teaching in our country.
This time Science Press has purchased the rights and reprinted, in a single batch, 23 mathematics books published by Springer. This is a good thing, and one well worth continuing. Roughly classified, the 23 titles comprise 5 in pure mathematics, 6 in applied mathematics, and 12 in computational mathematics, several of them interdisciplinary in nature. All are very recent: the great majority, 16 in all, were published after 2000, and the rest after 1990. They allow readers to become acquainted quickly with the frontier of a given area of mathematics. For example, the three pure-mathematics volumes, on number theory, algebra, and topology, are all installments of the Encyclopaedia of Mathematical Sciences written by leading mathematicians in those fields, and they are very helpful to researchers who want to grasp both the frontier and the overall picture of those areas. In keeping with the character of the disciplines, the pure-mathematics titles are mainly "classics", while the applied and computational titles are mainly at the "frontier". Most of the authors are internationally renowned mathematicians; for example, the author of Topology, Novikov, is a member of the Russian Academy of Sciences and a recipient of the Fields Medal and the Wolf Prize in Mathematics. The works of such eminent mathematicians will undoubtedly provide excellent guidance for researchers in our country.
作者現(xiàn)任美國西北大學教授,多種國際**雜志的主編、副主編。作者根據(jù)在教學、研究和咨詢中的經(jīng)驗,寫了這本適合學生和實際工作者的書。
Preface
1 Introduction
Mathematical Formulation
Example: A Transportation Problem
Continuous versus Discrete Optimization
Constrained and Unconstrained Optimization
Global and Local Optimization
Stochastic and Deterministic Optimization
Optimization Algorithms
Convexity
Notes and References
2 Fundamentals of Unconstrained Optimization
2.1 What Is a Solution?
Recognizing a Local Minimum
Nonsmooth Problems
2.2 Overview of Algorithms
Two Strategies: Line Search and Trust Region
Search Directions for Line Search Methods
Models for Trust-Region Methods
Scaling
Rates of Convergence
R-Rates of Convergence
Notes and References
Exercises
3 Line Search Methods
3.1 Step Length
The Wolfe Conditions
The Goldstein Conditions
Sufficient Decrease and Backtracking
3.2 Convergence of Line Search Methods
3.3 Rate of Convergence
Convergence Rate of Steepest Descent
Quasi-Newton Methods
Newton's Method
Coordinate Descent Methods
3.4 Step-Length Selection Algorithms
Interpolation
The Initial Step Length
A Line Search Algorithm for the Wolfe Conditions
Notes and References
Exercises
4 Trust-Region Methods
Outline of the Algorithm
4.1 The Cauchy Point and Related Algorithms
The Cauchy Point
Improving on the Cauchy Point
The Dogleg Method
Two-Dimensional Subspace Minimization
Steihaug's Approach
4.2 Using Nearly Exact Solutions to the Subproblem
Characterizing Exact Solutions
Calculating Nearly Exact Solutions
The Hard Case
Proof of Theorem 4.3
4.3 Global Convergence
Reduction Obtained by the Cauchy Point
Convergence to Stationary Points
Convergence of Algorithms Based on Nearly Exact Solutions
4.4 Other Enhancements
Scaling
Non-Euclidean Trust Regions
Notes and References
Exercises
5 Conjugate Gradient Methods
5.1 The Linear Conjugate Gradient Method
Conjugate Direction Methods
Basic Properties of the Conjugate Gradient Method
A Practical Form of the Conjugate Gradient Method
Rate of Convergence
Preconditioning
Practical Preconditioners
5.2 Nonlinear Conjugate Gradient Methods
The Fletcher-Reeves Method
The Polak-Ribière Method
Quadratic Termination and Restarts
Numerical Performance
Behavior of the Fletcher-Reeves Method
Global Convergence
Notes and References
Exercises
6 Practical Newton Methods
6.1 Inexact Newton Steps
6.2 Line Search Newton Methods
Line Search Newton-CG Method
Modified Newton's Method
6.3 Hessian Modifications
Eigenvalue Modification
Adding a Multiple of the Identity
Modified Cholesky Factorization
Gershgorin Modification
Modified Symmetric Indefinite Factorization
6.4 Trust-Region Newton Methods
Newton-Dogleg and Subspace-Minimization Methods
Accurate Solution of the Trust-Region Problem
Trust-Region Newton-CG Method
Preconditioning the Newton-CG Method
Local Convergence of Trust-Region Newton Methods
Notes and References
Exercises
7 Calculating Derivatives
7.1 Finite-Difference Derivative Approximations
Approximating the Gradient
Approximating a Sparse Jacobian
Approximating the Hessian
Approximating a Sparse Hessian
7.2 Automatic Differentiation
An Example
The Forward Mode
The Reverse Mode
Vector Functions and Partial Separability
Calculating Jacobians of Vector Functions
Calculating Hessians: Forward Mode
Calculating Hessians: Reverse Mode
Current Limitations
Notes and References
Exercises
8 Quasi-Newton Methods
8.1 The BFGS Method
Properties of the BFGS Method
Implementation
8.2 The SR1 Method
Properties of SR1 Updating
8.3 The Broyden Class
Properties of the Broyden Class
8.4 Convergence Analysis
Global Convergence of the BFGS Method
Superlinear Convergence of BFGS
Convergence Analysis of the SR1 Method
Notes and References
Exercises
9 Large-Scale Quasi-Newton and Partially Separable Optimization
9.1 Limited-Memory BFGS
Relationship with Conjugate Gradient Methods
9.2 General Limited-Memory Updating
Compact Representation of BFGS Updating
SR1 Matrices
Unrolling the Update
9.3 Sparse Quasi-Newton Updates
9.4 Partially Separable Functions
A Simple Example
Internal Variables
9.5 Invariant Subspaces and Partial Separability
Sparsity vs. Partial Separability
Group Partial Separability
9.6 Algorithms for Partially Separable Functions
Exploiting Partial Separability in Newton's Method
Quasi-Newton Methods for Partially Separable Functions
Notes and References
Exercises
……
10 Nonlinear Least-Squares Problems
11 Nonlinear Equations
12 Theory of Constrained Optimization
13 Linear Programming: The Simplex Method
14 Linear Programming: Interior-Point Methods
15 Fundamentals of Algorithms for Nonlinear Constrained Optimization
16 Quadratic Programming
17 Penalty, Barrier, and Augmented Lagrangian Methods
18 Sequential Quadratic Programming
A Background Material
References
Index