Judging the efficacy of artificial exemplar material

By Mithra Vijayathasan, Anne Pinot de Moira, Neil Stringer

Abstract

Before any examiner can mark national examinations in England, they must be trained to use the mark scheme in a process called standardisation. The selection of exemplar material from live scripts for use in standardisation is time-consuming and often fails to unearth a full range of candidate responses. This study investigates the possibility of improving the process by generating artificial exemplars in advance of the time-critical period. As a first step, it investigates whether there is any detectable difference in the quality of an exemplar depending on how it is created. Four conditions are compared: artificial exemplars written by the Principal Examiner before the examination is sat; artificial exemplars written by the Principal Examiner after the examination is sat; standardisation exemplars selected in the traditional manner; and exemplars randomly selected from live scripts.

The study concludes that there is no apparent difference in the perceived quality of response across conditions. However, exemplars randomly selected from the live scripts may be harder to judge. It is recommended that future research focus on whether artificial exemplars can give rise to comparable, or higher, levels of marking reliability.

How to cite

Vijayathasan, M., Pinot de Moira, A., & Stringer, N. (2016). Judging the efficacy of artificial exemplar material. Manchester: AQA Centre for Education Research and Practice.
