Pattern analysis is the process of finding general relations in a set of data. It has gradually become central to many disciplines, from bioinformatics to document retrieval. The kernel methods described in this book provide a powerful and unified framework for all of these disciplines, motivating algorithms that can act on general types of data (such as strings, vectors and text) and look for general types of relations (such as rankings, classifications, regressions and clusterings). The book supplies a large number of algorithms, kernel functions and concrete solutions that can be chosen for a wide range of practical problems. It describes many kinds of kernels, from basic examples to advanced recursive kernels, from kernels derived from generative models such as HMMs to string-matching kernels based on dynamic programming, as well as special kernels designed for handling text documents. The book is suitable for students, teachers and researchers working in artificial intelligence, pattern recognition, machine learning, neural networks and their applications, and can also serve as a reference for researchers in related fields.
The study of patterns in data is as old as science. Consider, for example, the astronomical breakthroughs of Johannes Kepler formulated in his three famous laws of planetary motion. They can be viewed as relations that he detected in a large set of observational data compiled by Tycho Brahe.
Equally, the wish to automate the search for patterns is at least as old as computing. The problem has been attacked using methods of statistics, machine learning, data mining and many other branches of science and engineering.
Pattern analysis deals with the problem of (automatically) detecting and characterising relations in data. Most statistical and machine learning methods of pattern analysis assume that the data is in vectorial form and that the relations can be expressed as classification rules, regression functions or cluster structures; these approaches often go under the general heading of 'statistical pattern recognition'. 'Syntactical' or 'structural pattern recognition' represents an alternative approach that aims to detect rules among, for example, strings, often in the form of grammars or equivalent abstractions.
The evolution of automated algorithms for pattern analysis has undergone three revolutions. In the 1960s efficient algorithms for detecting linear relations within sets of vectors were introduced. Their computational and statistical behaviour was also analysed. The Perceptron algorithm, introduced in 1957, is one example. The question of how to detect nonlinear relations was posed as a major research goal at that time. Despite this, developing algorithms with the same level of efficiency and statistical guarantees has proven an elusive target.
In the mid 1980s the field of pattern analysis underwent a 'nonlinear revolution' with the almost simultaneous introduction of backpropagation multi-layer neural networks and efficient decision tree learning algorithms. These approaches for the first time made it possible to detect nonlinear patterns, albeit with heuristic algorithms and incomplete statistical analysis. The impact of the nonlinear revolution cannot be overemphasised: entire fields such as data mining and bioinformatics were enabled by it. These nonlinear algorithms, however, were based on gradient descent or greedy heuristics and so suffered from local minima. Since their statistical behaviour was not well understood, they also frequently suffered from overfitting.
A third stage in the evolution of pattern analysis algorithms took place in the mid-1990s with the emergence of a new approach to pattern analysis known as kernel-based learning methods, which finally enabled researchers to analyse nonlinear relations with the efficiency that had previously been reserved for linear algorithms. Furthermore, advances in their statistical analysis made it possible to do so in high-dimensional feature spaces while avoiding the dangers of overfitting. From all points of view, computational, statistical and conceptual, the nonlinear pattern analysis algorithms developed in this third generation are as efficient and as well founded as linear ones. The problems of local minima and overfitting that were typical of neural networks and decision trees have been overcome. At the same time, these methods have proven very effective on non-vectorial data, in this way creating a connection with other branches of pattern analysis.
Kernel-based learning first appeared in the form of support vector machines, a classification algorithm that overcame the computational and statistical difficulties alluded to above. Soon, however, kernel-based algorithms able to solve tasks other than classification were developed, making it increasingly clear that the approach represented a revolution in pattern analysis. Here was a whole new set of tools and techniques motivated by rigorous theoretical analyses and built with guarantees of computational efficiency.
Furthermore, the approach is able to bridge the gaps that existed between the different subdisciplines of pattern recognition. It provides a unified framework to reason about and operate on data of all types, be they vectorial, strings, or more complex objects, while enabling the analysis of a wide variety of patterns, including correlations, rankings, clusterings, etc.
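To make the idea of applying a linear algorithm to nonlinear patterns concrete, the following is a purely illustrative sketch (not taken from the book): a dual-form perceptron in which the data appear only through kernel evaluations, so that the classical linear update rule learns a nonlinear boundary once a Gaussian kernel is substituted for the ordinary inner product. The function names and the toy XOR-style data are assumptions made for illustration.

```python
import numpy as np

def gaussian_kernel(x, z, sigma=1.0):
    # Gaussian (RBF) kernel: an inner product in an implicit feature space.
    return np.exp(-np.sum((x - z) ** 2) / (2 * sigma ** 2))

def kernel_perceptron(X, y, kernel, epochs=20):
    # Dual-form perceptron: the data enter only through the kernel matrix K,
    # so the linear update rule can capture nonlinear structure in input space.
    n = len(X)
    K = np.array([[kernel(xi, xj) for xj in X] for xi in X])
    alpha = np.zeros(n)
    for _ in range(epochs):
        for i in range(n):
            # Update the dual coefficient whenever point i is not correctly classified.
            if y[i] * np.dot(alpha * y, K[:, i]) <= 0:
                alpha[i] += 1
    return alpha

# Toy XOR-like data: no linear separator exists in the original input space.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([-1, 1, 1, -1])
alpha = kernel_perceptron(X, y, gaussian_kernel)
preds = [np.sign(np.dot(alpha * y, [gaussian_kernel(xi, x) for xi in X])) for x in X]
print(preds)  # recovers the labels -1, 1, 1, -1
```

The point of the sketch is that swapping the kernel function changes the pattern class that is detectable, while the learning algorithm itself stays linear in the (implicit) feature space.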
This book presents an overview of this new approach. We have attempted to condense into its chapters an intense decade of research generated by a new and thriving research community. Together its researchers have created a class of methods for pattern analysis, which has become an important part of the practitioner's toolkit.
John Shawe-Taylor is currently UNESCO Chair Professor in Artificial Intelligence at University College London, where he serves as Head of the Department of Computer Science and Director of the Centre for Computational Statistics and Machine Learning. He has also coordinated several European collaborative research projects in machine learning, such as NeuroCOLT ("Neural Computational Learning") and PASCAL ("Pattern Analysis, Statistical Modelling and Computational Learning").
Nello Cristianini is currently Professor of Artificial Intelligence in the Department of Computer Science at the University of Bristol. He has received a Royal Society Wolfson Research Merit Award and a European Research Council Advanced Grant. In 2014 he was included by Thomson Reuters in its list of influential scientists for the decade 2002-2012, and in 2016 he was named by AMiner among the 100 most influential researchers in machine learning.
Preface
Part I. Basic Concepts
1. Pattern analysis
2. Kernel methods: an overview
3. Properties of kernels
4. Detecting stable patterns
Part II. Pattern Analysis Algorithms
5. Elementary algorithms in feature space
6. Pattern analysis using eigen-decompositions
7. Pattern analysis using convex optimisation
8. Ranking, clustering and data visualisation
Part III. Constructing Kernels
9. Basic kernels and kernel types
10. Kernels for text
11. Kernels for structured data: strings, trees, etc.
12. Kernels from generative models
Appendix A. Proofs omitted from the main text
Appendix B. Notational conventions
Appendix C. List of pattern analysis methods
Appendix D. List of kernels
References