中国学生英语作文自动评分模型的构建 (Building an Automated Scoring Model for Chinese EFL Learners' English Essays)
- Price: CNY 69.90
- Author: Liang Maocheng (梁茂成)
- Publication date: 2011-01-01
- ISBN: 9787513504997
- Publisher: Foreign Language Teaching and Research Press (外语教学与研究出版社)
- Chinese Library Classification: H315
- Pages: 291
- Paper: offset paper
- Edition: 1st
- Format: 16开 (16mo)
Automated essay scoring systems have been researched and developed abroad, and some have been in operational use for years, yet they remain rare in China. Written by Liang Maocheng, this book aims to construct a statistical model for the automated scoring of Chinese students' English essays and to examine how well a range of textual features in learner essays predicts essay scores. Model construction and validation show that the reliability of machine scoring matches or even exceeds that of human scoring, and that the model accounts for the quality of Chinese students' English writing about as well as foreign systems for automated native-language essay scoring account for the quality of native-language writing.
The book comprises eight chapters devoted to the study and development of automated essay scoring. It can serve as a textbook in colleges and universities and as a reference for practitioners in related fields.
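The description above outlines the general workflow behind such a model: extract textual features from each essay, regress human scores on those features, and check how closely the resulting machine scores agree with human ratings. The sketch below illustrates only that generic workflow; the feature names, toy data and ordinary least-squares fit are illustrative assumptions, not the predictors, corpus or model reported in the book.

```python
# Minimal sketch of regression-based essay scoring.
# All feature names and numbers below are invented for illustration;
# they are NOT the predictors or data used in the book.
import numpy as np

# One row per essay: [essay length in words, type/token ratio, number of connectives]
X = np.array([
    [180, 0.42,  9],
    [240, 0.55, 14],
    [150, 0.38,  7],
    [300, 0.60, 18],
    [210, 0.47, 11],
    [260, 0.52, 15],
], dtype=float)
y = np.array([9.0, 12.0, 7.5, 14.0, 11.0, 12.5])  # human scores (toy values)

# Ordinary least squares: prepend an intercept column and solve for the weights.
A = np.hstack([np.ones((X.shape[0], 1)), X])
weights, *_ = np.linalg.lstsq(A, y, rcond=None)

# Machine scores and a simple consistency estimate (Pearson r with human scores).
machine_scores = A @ weights
r = np.corrcoef(machine_scores, y)[0, 1]

print("weights:", np.round(weights, 3))
print("machine scores:", np.round(machine_scores, 2))
print("correlation with human scores:", round(r, 3))
```

In the book itself, the predictors are organized into language, content and organization modules (Chapters 4 to 6), and the resulting model is evaluated through cross-validation and double cross-validation (Chapter 7).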
List of Abbreviations
List of Tables
List of Figures
Part One Introduction
Introducing the Study
0.1 Introductory remarks
0.2 Need for this study
0.2.1 Theoretical considerations
0.2.2 Practical considerations
0.3 Description of the study
0.4 Organization of the study
0.5 Summary
Part Two Literature Review
Chapter 1 A Review of Existing Computer-Assisted Essay Scoring Systems
1.1 Introduction
1.2 Key concepts
1.2.1 Computer-assisted essay scoring
1.2.2 EFL writing assessment
1.3 Existing computer-assisted essay scoring systems
1.3.1 Project Essay Grade (PEG): A form-focused system
1.3.2 Intelligent Essay Assessor (IEA): A content-focused system
1.3.3 E-rater: A hybrid system with a modular structure
1.3.4 An appraisal of the three existing systems
1.4 Lessons from existing essay scoring systems
1.5 Summary
Chapter 2 Studies on Measures of Writing Quality
2.1 Introduction
2.2 Measures of writing quality in the literature
2.2.1 Measures of the quality of language
2.2.2 Measures of the quality of content and organization
2.3 An overview of the measures in the literature
2.4 A conceptual model for the computer-assisted scoring of EFL essays
2.5 Proposed measures of EFL writing quality
2.5.1 Proposed measures of the quality of language in EFL writing
2.5.2 Proposed measures of the quality of content in EFL writing
2.5.3 Proposed measures of the quality of organization in EFL writing
2.6 Summary
Part Three Methodology
Chapter 3 Research Questions and Data Preparation
3.1 Introduction
3.2 Research questions
3.3 The corpus
3.4 The rating scheme
3.4.1 Selecting a rating scale
3.4.2 The revised rating scale
3.4.3 The evaluation of content
3.4.4 The weighting scheme
3.5 Rating
3.5.1 Rater selection
3.5.2 Rater training
3.5.3 The rating sessions
3.6 Score reliability
3.7 Summary
Chapter 4 Text Analysis and Statistical Analysis
4.1 Introduction
4.2 Tools
4.3 Essay feature extraction
4.3.1 Language features
4.3.2 Content features
4.3.3 Organizational features
4.4 Data analysis
4.4.1 Correlation analysis
4.4.2 Multiple regression analysis
4.4.3 Stages of data analysis
4.5 Summary
Part Four Results and Discussion
Chapter 5 Identifying Predictors of EFL Writing Quality
5.1 Introduction
5.2 Linguistic features and writing quality
5.2.1 Fluency and writing quality
5.2.2 Complexity of language and writing quality
5.2.3 Measures of linguistic idiomaticity and appropriateness
5.3 Results of content analysis
5.3.1 Results of Latent Semantic Analysis
5.3.2 Procedural vocabulary and essay score
5.4 Essay organization and writing quality
5.4.1 Paragraphing and writing quality
5.4.2 Discourse conjuncts and writing quality
5.4.3 Demonstratives, pronouns, connectives and writing quality
5.5 Power of the predictors proposed in this study
5.6 Summary
Chapter 6 A Statistical Model for Computer-Assisted Essay Scoring
6.1 Introduction
6.2 Diagnosing the preliminary model
6.3 The refined model
6.4 Predictors and aspects of writing quality measured
6.4.1 Predictors in the language module
6.4.2 Predictors in the content module
6.4.3 Predictors in the organization module
6.4.4 Interdependence of the modules
6.5 Implementing the model
6.6 Summary
Chapter 7 Validating the Model
7.1 Introduction
7.2 Cross-validating the model
7.3 Reliability of computer scores in cross-validation
7.3.1 Aspects of reliability
7.3.2 Consistency estimates
7.3.3 Consensus estimates
7.4 Double cross-validation
7.4.1 Constructing the model
7.4.2 Model statistics and estimated equation
7.5 Reliability of computer scores in double cross-validation
7.6 Comparison with existing essay scoring systems
7.6.1 Comparison with PEG
7.6.2 Comparison with IEA
7.6.3 Comparison with E-rater
7.7 Summary
Part Five Conclusion
Chapter 8 Conclusion
8.1 Major findings
8.1.1 A model for the computer-assisted scoring of EFL essays
8.1.2 Predictors of EFL writing quality
8.2 Limitations of the study
8.3 Future work
References
Appendices
Appendix Ⅰ PEG's proxes and their beta values (Page 1968)
Appendix Ⅱ Page's (1995) model and variables
Appendix Ⅲ Argument weight
Appendix Ⅳ Examples of good openings and endings
Appendix Ⅴ Scoring table (Organization & Content)
Appendix Ⅵ Scoring table (Language)
Appendix Ⅶ List of stopwords
Appendix Ⅷ Lemma list (excerpt)
Appendix Ⅸ List of content words
Appendix Ⅹ Sample essays
Appendix Ⅺ POS-tagged samples