A Non-native Rater's Perception of the IELTS Analytical Rating Scale


Large-scale, standardized testing organizations have long implemented language assessments designed to measure the English language proficiency of students aiming to study in higher education. These high-stakes tests play a vital role when decisions are made on individual performance, and their outcomes are treated as a diagnosis of the test takers' ability. Among these, the International English Language Testing System (IELTS) writing score is considered by most universities a benchmark of learners' prospective success in higher education. This has increased concern about the reliability of non-native-speaker (NNS) raters and the consistency of their scoring in countries where these tests are adopted. Although these NNS raters are not qualified as IELTS examiners, they teach IELTS preparation language classes in those countries. As a washback effect of the assessment, the curriculum of such courses is shaped by how these raters perceive the assessment criteria. Factors underlying the variability in NNS ratings include rater characteristics such as experience, background knowledge and cultural background. Recent studies have claimed that raters' ego, style and memory capacity can account for raters' diverse actions during the rating process (Lumley, 2002; Wiseman, 2012). Barkaoui (2010), however, suggested that rating is a decision-making process involving interaction between the rater and the rating scales. The variety of scales used across different tests, and raters' individual differences in interpreting those scales, may therefore shape rater behaviour and interfere with how they arrive at a score. This paper aims to review studies of...

... middle of paper ...

...SL essay evaluation: The influence of sentence-level and rhetorical features. Journal of Second Language Writing, 3-17.
McNamara, T., & Roever, C. (2006). Validity and the social dimension of language testing. Language Learning, 56, 9-42.
Upshur, J. A., & Turner, C. E. (1999). Systematic effects in the rating of second-language speaking ability: Test method and learner discourse. Language Testing, 16(1), 82-111.
Weigle, S. C. (2002). Assessing writing. Cambridge: Cambridge University Press.
Winke, P., Gass, S., & Myford, C. (2011). The relationship between raters' prior language study and the evaluation of foreign language speech samples. TOEFL iBT® Research Report. Princeton, NJ: Educational Testing Service.
Wiseman, C. S. (2012). Rater effects: Ego engagement in rater decision-making. Assessing Writing, 17(3), 150-173.
