
A Contrastive Study Between Intelligent Automated Essay Scoring and Teacher Scoring in the English Writing of University Students Based on Iwrite (Thesis Proposal)

 2021-03-10 23:55:45  

1. Research Purpose and Significance (Literature Review)

English writing assessment by teachers has long been considered a time-consuming and expensive activity, and its objectivity cannot be guaranteed during the grading process. By resorting to automated essay scoring tools, however, consistency in the assessment of essays can be achieved. Various automated assessment tools exist in the current environment, namely the Project Essay Grade (PEG) system developed by Page et al., the Intelligent Essay Assessor (IEA) system developed in the late 1990s, the e-rater system, and so on. From automated grading tools for objective tests such as true-false and multiple-choice items to essay grading tools, the benefits seem conspicuous, yet the authentic effectiveness remains unclear, especially when compared with teacher assessment. Thus this thesis aims to study the accuracy and precision of automated essay scoring tools by comparing the teacher feedback and the machine feedback on forty essays written by Chinese students.

As for studies abroad, while some take articles written by native speakers as the objects of research on computer-based writing assessment, a certain share of the literature focuses on articles written by learners who study English as a second language (Li & Liu, 2017). Moreover, some studies of various automated scoring tools, such as IEA and PEG, have reported relatively high correlations between automated assessment tools and human raters (Coniam, 2009; Ranalli, Link & Chukharev-Hudilainen, 2016). Coniam (2009) concluded that while computer rating programs have their detractors in terms of transparency, they produce results which compare favourably with those of human raters. Apart from those supporting studies, a major criticism is that the computer rating process is essentially a "black box", since the rating criteria are not explicit (Weigle, 2002). Beyond studies of the degree to which the accuracy of automated writing scoring tools can match that of humans, other studies incorporate essay writing systems, namely SCIgen, Ghostwrite and Gatherer, and investigate questions such as whether automated essay writing systems can generate intelligent and coherent essays that fool university markers into assigning good grades to them, embodying the disparity between technology and research (Williams & Nash, 2009).


2. Basic Research Content and Plan

Iwrite is a newly founded yet quickly growing English writing assessment website established by the Foreign Language Teaching and Research Press (FLTRP), with its own features in English essay scoring based on the system it has created. Considering the currently abundant resources and studies on the Pigai website, pertinent studies of Iwrite are rather scarce. Thus this thesis takes forty anonymous essays assessed on the website as its objects. Errors are classified based on linguistic features. The thesis aims to study the accuracy and precision of the current automated writing assessment tool, Iwrite, which can be achieved via analysis of and comparison between the teacher feedback and the automated scoring feedback on Iwrite for the forty essays, in accordance with the linguistic categorization.

First, the forty essays will be analyzed by teachers in accordance with the categorization above, covering syntactic errors, lexical errors, collocation errors and technical errors. Each broad category has its own subdivisions, such as the number of words; average sentence length; number of verbs; content features such as specific words and phrases; and other characteristics, including the order in which concepts appear and the occurrence of certain noun-verb pairs. A record of the comparison between each subdivision of the same essay will be made, on the basis of which statistics will be collected concerning the inconsistency between the teacher and machine feedback, as well as the errors the machine has made. Related literature will be consulted throughout, and ultimately the thesis will be written.
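The per-category comparison described above can be sketched in code. The following is a minimal illustration, not part of the proposal itself: the category names and function names are hypothetical, and it assumes each rater's annotations for one essay are simply a list of category labels, one per flagged error.

```python
from collections import Counter

# Hypothetical error categories, following the proposal's categorization.
CATEGORIES = ["syntactic", "lexical", "collocation", "technical"]

def category_discrepancy(teacher_errors, machine_errors):
    """For one essay, return (machine count - teacher count) per category,
    i.e. how many more errors the automated scorer flagged than the teacher."""
    t = Counter(teacher_errors)
    m = Counter(machine_errors)
    return {c: m[c] - t[c] for c in CATEGORIES}

def overall_agreement(essays):
    """Fraction of (essay, category) cells in which the teacher and the
    automated scorer flagged exactly the same number of errors."""
    agree = total = 0
    for teacher_errors, machine_errors in essays:
        diff = category_discrepancy(teacher_errors, machine_errors)
        for c in CATEGORIES:
            total += 1
            agree += diff[c] == 0
    return agree / total if total else 0.0
```

A record like `category_discrepancy(["lexical", "syntactic"], ["lexical"])` would show the machine missing one syntactic error, and aggregating such records over all forty essays yields a simple agreement rate per the design above.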


3. Research Plan and Schedule

Before 20th January: settlement of the title

Before 20th March: submission of the outline

Before 25th April: submission of the first draft


4. References (12 or more)

[1] Li, X. & Liu, J. Automatic Essay Scoring Based on Coh-Metrix Feature Selection for Chinese English Learners. In: Wu, T.T., Gennari, R., Huang, Y.M., Xie, 2017.

[2] Ranalli, J., Link, S. & Chukharev-Hudilainen, E. Automated Writing Evaluation for Formative Assessment of Second Language Writing: Investigating the Accuracy and Usefulness of Feedback as Part of Argument-Based Validation. Educational Psychology, 2016.

[3] Ha, M. & Nehm, R.H. The Impact of Misspelled Words on Automated Computer Scoring: A Case Study of Scientific Explanations. Journal of Science Education and Technology, 2016.

