Please use this identifier to cite or link to this item: http://hdl.handle.net/10125/56004

Using corpus linguistics to examine the extrapolation inference in the validity argument for a high-stakes speaking assessment

File: LaFlair_Staples(2017)_post-print-ss.pdf (Postprint, Adobe PDF, 468.86 kB)

Item Summary

Title: Using corpus linguistics to examine the extrapolation inference in the validity argument for a high-stakes speaking assessment
Authors: LaFlair, Geoffrey T.
Staples, Shelley
Keywords: corpus linguistics
domain analysis
multi-dimensional analysis
performance assessment
register analysis
validity argument
Issue Date: 19 Sep 2017
Publisher: Sage Journals
Citation: LaFlair, G. T., & Staples, S. (2017). Using corpus linguistics to examine the extrapolation inference in the validity argument for a high-stakes speaking assessment. Language Testing, 34(4), 451-475.
Abstract: Investigations of the validity of a number of high-stakes language assessments are conducted using an argument-based approach, which requires evidence for inferences that are critical to score interpretation (Chapelle, Enright, & Jamieson, 2008b; Kane, 2013). The current study investigates the extrapolation inference for a high-stakes test of spoken English, the Michigan English Language Assessment Battery (MELAB) speaking task. This inference requires evidence that supports the inferential step from observations of what test takers can do on an assessment to what they can do in the target domain (Chapelle et al., 2008b; Kane, 2013). Typically, the extrapolation inference has been supported by evidence from a criterion measure of language ability. This study proposes an additional empirical method, namely corpus-based register analysis (Biber & Conrad, 2009), which provides a quantitative framework for examining the linguistic relationship between performance assessments and the domains to which their scores are extrapolated. This approach extends Bachman and Palmer's (2010) focus on target language use (TLU) domain analysis in their study of assessment use arguments by providing a quantitative approach for the study of language. We first explain the connections between corpus-based register analysis and TLU analysis. Second, an investigation of the MELAB speaking task compares the language of test-taker responses to the language of academic, professional, and conversational spoken registers, or TLU domains. Additionally, the language features at different performance levels within the MELAB speaking task are investigated to determine the relationship between test takers' scores and their language use in the task. Following previous studies using corpus-based register analysis, we conduct a multi-dimensional (MD) analysis for our investigation. The comparison of the language features from the MELAB with the language of TLU domains revealed that support for the extrapolation inference varies across dimensions of language use.
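For readers unfamiliar with multi-dimensional (MD) analysis, the sketch below illustrates, in very reduced form, how dimension scores derived from co-occurring linguistic features might be compared across a test register and its TLU domains. It is a hypothetical illustration under stated assumptions, not the authors' actual procedure: the input file, feature set, register labels, and number of dimensions are invented for the example.

    # A minimal, illustrative sketch of a Biber-style multi-dimensional (MD)
    # comparison, assuming per-text rates of linguistic features have already
    # been extracted and each text is labeled with its register (e.g., MELAB
    # response, academic, professional, conversation). The file name, feature
    # columns, and number of dimensions are assumptions for illustration only.
    import pandas as pd
    from sklearn.decomposition import FactorAnalysis
    from sklearn.preprocessing import StandardScaler

    texts = pd.read_csv("feature_rates.csv")          # hypothetical input file
    feature_cols = [c for c in texts.columns if c != "register"]

    # Standardize feature rates so each feature contributes on the same scale.
    z = StandardScaler().fit_transform(texts[feature_cols])

    # Extract co-occurrence dimensions via factor analysis (MD analysis
    # conventionally rotates factors; defaults are kept here for brevity).
    fa = FactorAnalysis(n_components=4, random_state=0)
    dim_scores = fa.fit_transform(z)

    # Compare mean dimension scores of test responses with each TLU register.
    scores = pd.DataFrame(dim_scores, columns=[f"Dim{i+1}" for i in range(4)])
    scores["register"] = texts["register"].values
    print(scores.groupby("register").mean().round(2))

In such a comparison, registers whose mean scores sit close to the test responses on a given dimension would offer support for extrapolation along that dimension, while large gaps would qualify it, which is the kind of dimension-by-dimension pattern the abstract describes.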
Sponsor: CaMLA Spaan Research Grant Program
Pages/Duration: 43
URI/DOI: http://hdl.handle.net/10125/56004
DOI: http://journals.sagepub.com/doi/full/10.1177/0265532217713951
Rights: This manuscript version is made available under the CC-BY-NC-ND 4.0 license http://creativecommons.org/licenses/by-nc-nd/4.0/
Appears in Collections: SLS Faculty & Researcher Works

