University of Southampton Institutional Repository

JISC Report on E-Assessment Quality (REAQ) in UK Higher Education


Gilbert, Lester, Gale, Veronica, Wills, Gary and Warburton, Bill (2009) JISC Report on E-Assessment Quality (REAQ) in UK Higher Education

Record type: Monograph (Project Report)

Abstract

Commissioned by the Joint Information Systems Committee (JISC) in 2008, the ‘Report on Summative e-Assessment Quality (REAQ)’ project surveyed the quality assurance (QA) activities commonly undertaken in summative e-assessment by UK Higher Education (HE) practitioners and others. The project focused on what denotes high quality in summative e-assessment for the interviewees and on the steps they take to meet their own standards. An expert panel guided the project.

What denotes high quality summative e-assessment

Expert opinion focused, in this order of priority, on:

• Psychometrics (reliability, validity);
• Pedagogy (mapping to intended learning outcomes); and
• Practical issues (security, accessibility).

What ‘high quality’ meant to our interviewees depended on the role they played in the process of creating and using e-assessments. They listed the following matters, in descending order of how often they were raised:

• Using the medium to give an extra dimension to assessment, including creating e-assessments that are authentic to the skills being tested;
• Issues around delivery, including security, infrastructure reliability, and accessibility;
• Fairness and ease of use;
• Supporting academic, managerial, and organisational goals;
• Addressing the intended learning outcomes; and
• Validity and reliability, mainly in their ‘non-psychometric’ senses; interviewees in the role of learning technologist (or similar roles designed to aid academics in the use of e-assessment) used these terms in their psychometric senses.

Interviewees focused on the e-assessment issues that were foremost in their minds. As processes to deliver e-assessment are rarely embedded in institutions at present, interviewees described spending time and effort on practical issues to ensure that e-assessments would work effectively. Many of the quality characteristics the interviewees identified as important in summative e-assessment are measured by psychometrics. Although some academics already use these measures, the report suggests that more could benefit from psychometric evaluation.

Steps needed to produce high quality e-assessment

Expert opinion focused on:

• Establishing sets of steps to follow for both content and quality management;
• Identifying, using, and developing relevant standards for both content and quality management;
• Identifying metrics for both content and process; and
• Capability maturity modelling as an encapsulation of these three essential elements of a quality management process.

Interviewee comments fell under a variety of rules of thumb or suggestions for useful steps, such as: noting that the effort needed to write e-assessments, their marking schemes, and their feedback is front-loaded; starting with easier questions and making later questions more difficult; checking assessments with subject matter experts and high performers; identifying ‘weak’ questions and improving or eliminating them (a sketch of this kind of screening follows below); reviewing question content to ensure syllabus coverage; getting help for academics, who usually have very limited knowledge of psychometrics; attending to security; and using accessibility guidelines. In summary:

• Heuristic steps for both content and quality management, and
• Accessibility standards.

Many interviewees assumed that e-assessments were:

• Valid if they were created by the academics responsible for the course, and
• Subject to the same quality assurance processes as traditional assessments as well as those required specifically for e-assessment.

The report questions these assumptions.
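The report itself contains no code. Purely as an illustration of the classical item screening mentioned above, the following minimal Python sketch computes a facility (difficulty) index and a corrected item-total discrimination for each item of a response matrix; the data, thresholds, and variable names are invented for the example, not taken from the report.

    import numpy as np

    # Hypothetical scored responses: rows are candidates, columns are items
    # (1 = correct, 0 = incorrect). Invented data for illustration only.
    responses = np.array([
        [1, 1, 0, 1],
        [1, 0, 0, 1],
        [1, 1, 1, 1],
        [0, 0, 0, 1],
        [1, 1, 0, 0],
        [1, 0, 1, 1],
    ])

    totals = responses.sum(axis=1)

    for item in range(responses.shape[1]):
        scores = responses[:, item]
        # Facility index: the proportion of candidates answering correctly.
        facility = scores.mean()
        # Corrected item-total correlation: how well the item separates
        # stronger from weaker candidates (the item is excluded from the total).
        discrimination = np.corrcoef(scores, totals - scores)[0, 1]
        # Common rules of thumb: review items that nearly everyone gets right
        # or wrong, or that correlate poorly with overall performance.
        flag = " <- review" if facility < 0.2 or facility > 0.9 or discrimination < 0.2 else ""
        print(f"Item {item + 1}: facility={facility:.2f}, "
              f"discrimination={discrimination:.2f}{flag}")

In practice such statistics are computed over far larger candidate cohorts; with a handful of responses, as here, they are too noisy to act on.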
Recommendations

The report makes a number of recommendations to support academics in creating high quality summative e-assessments, including:

• A toolkit for the end-to-end process of creating e-assessments;
• A practical guide to the steps involved in creating and maintaining an e-assessment system;
• Guidelines for the quality assurance of e-assessments;
• Psychometric measures for assessing the quality of item banks rather than individual questions, so that banked items can be assessed, tracked, and reported on throughout their lifecycle of use;
• Development and extension of existing psychometric theory to cover multi-staged and optional stepped constructed-response questions;
• Workshops and support materials to disseminate good practice in the use of psychometrics for selected-response items and for questions employing constructed responses;
• Workshops and support materials, possibly subject-based, to disseminate good practice in question creation and in meeting educational needs beyond simple selected-response items;
• Accessibility and user-interface guidelines for deploying e-assessment, in particular addressing the use of browsers;
• Guidelines for the use and role of MathML for expression recognition in e-assessments;
• A repository of exemplars of good practice for both selected-response and constructed-response questions;
• JISC and other community calls for, and sponsorship of, e-assessment bids should consider where and how bidders should incorporate appropriate psychometric measures in their proposals; and
• Commercial vendors should improve the accessibility of their psychometric reports to all stakeholders, possibly simplifying them to encourage take-up of their contents.
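The ‘reliability’ that heads the expert panel’s priorities is, in classical test theory, commonly estimated with Cronbach’s alpha. Again as an illustrative sketch rather than anything prescribed by the report, and reusing the invented response matrix from the previous example:

    import numpy as np

    def cronbach_alpha(scores: np.ndarray) -> float:
        """Internal-consistency reliability for a candidates-by-items matrix."""
        k = scores.shape[1]
        item_vars = scores.var(axis=0, ddof=1)      # variance of each item
        total_var = scores.sum(axis=1).var(ddof=1)  # variance of total scores
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    responses = np.array([
        [1, 1, 0, 1],
        [1, 0, 0, 1],
        [1, 1, 1, 1],
        [0, 0, 0, 1],
        [1, 1, 0, 0],
        [1, 0, 1, 1],
    ])
    print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")

A common rule of thumb treats an alpha of roughly 0.7 or above as acceptable, though the appropriate threshold depends on the stakes of the assessment.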

Text
REAQ_Final_Report_v1-4.pdf - Version of Record
Available under License Other.
Download (2MB)

More information

Published date: May 2009
Keywords: e-assessment
Organisations: Electronic & Software Systems

Identifiers

Local EPrints ID: 267697
URI: http://eprints.soton.ac.uk/id/eprint/267697
PURE UUID: 8b8e9f59-70cc-4bd9-953f-588f93b60702
ORCID for Gary Wills: orcid.org/0000-0001-5771-4088

Catalogue record

Date deposited: 24 Jul 2009 07:12
Last modified: 15 Mar 2024 02:51

Contributors

Author: Lester Gilbert
Author: Veronica Gale
Author: Gary Wills
Author: Bill Warburton


