

Interlocutor Performance Variability in Language Proficiency Testing: The Case of the Greek State Certificate Examinations

Xenia Delieza
PhD Thesis
Faculty of English Language and Literature
School of Philosophy
National and Kapodistrian University of Athens

The assessment of oral production constitutes a real challenge in the field of Language Testing: assessment is to some degree subjective, and the resulting scores are not always reliable because so many variables affect candidates' oral performance. My present research investigates one of these variables in depth, namely the examiner himself or herself.
The context of my investigation is the Greek state exams for foreign language proficiency, the English exams in particular, and specifically the component which aims at assessing oral production and mediation. These exams, known as KPG exams (the initials standing for Kratiko Pistopiitiko Glossomathias, meaning State Certificate of Language Proficiency), are based on the scales set by the Council of Europe as described in the Common European Framework of Reference for Languages (henceforth CEFR).
The purpose of my research is to critically describe the discourse practices of examiners during the oral KPG tests and the way these practices interfere with the candidates' output on the one hand and with the rating of their communicative performance on the other. In other words, my research focuses on the role of the oral examiners as interlocutors and as raters. The way this role is enacted is a major variable (or "facet", as it is often termed in the literature) which can interact with other variables to affect candidate output and examiner rating.
This area of study is inextricably linked with the demand for more thorough examiner training and monitoring of examiner practices, with a view to increasing the possibilities for (a) valid interlocutor performance and (b) inter- and intra-rater reliability. Through the collection, analysis and interpretation of data gathered before, during and after the English KPG exams, my ultimate aim is to create a reliable tool on the basis of which the examiner-as-interlocutor discourse can be observed and evaluated. This tool can then be used to examine systematically the degree to which examiners comply with the standardisation norms of the specific examination: that is, whether they follow the specific instructions for conducting the test, whether they adhere to the interlocutor frames prescribed by the test designers, how they use the evaluation criteria, and which variables affect the final rating of candidates' performance. Answers to such questions may yield information leading to a revision of the individual processes within the examination, with a view to improving it in terms of validity and reliability.

The study of examiner practices as a variable affecting outcomes in different examinations is an area of widespread interest in the international testing arena, where the demand for coherence and transparency in language certification has been repeatedly accentuated, especially since the introduction of the CEFR. In addition, the oral test of the KPG exams in English, being part of a new language testing battery, provides unexplored territory awaiting research, the results of which could contribute to the understanding of the nature of foreign language performance, give insight into aspects of variation which can detract from reliability and validity, and bring to light possible ways of coping with such variation.

