University of Southampton Institutional Repository

Beyond reliability and validity: The role of metacognition in psychological testing


Higham, P.A. and Arnold, M.M. (2007) Beyond reliability and validity: The role of metacognition in psychological testing. In: DeGregorio, R.A. (ed.) New Developments in Psychological Testing. Hauppauge, USA: Nova Science Publishers, pp. 139-162.

Record type: Book Section

Abstract

Much research on psychological testing in educational contexts has focused on issues of reliability and validity. An excellent example is the debate over whether or not to penalize errors on multiple-choice tests, so-called formula scoring. The case in support of formula scoring has typically focused on the idea that it may improve the reliability and validity of the test by removing error variance from the observed score. However, because students taking formula-scored tests are given the opportunity to "pass" on questions for which the answer is not known, opponents of formula scoring have argued that it contaminates the test score by introducing strategic factors. For example, conservative or risk-averse students may penalize themselves by answering too few questions on the test. A key factor that has been missing from this debate is the role of metacognitive monitoring, that is, the extent to which students can assess the accuracy of their own answers. Students with good metacognitive monitoring are at an advantage relative to students with poor monitoring because they know better which answers to offer (correct ones) and which to omit (incorrect ones). This parameter contaminates the corrected test score and varies between individuals just as aptitude or knowledge does, yet commonly used methods of scoring have no way of estimating its influence. In this chapter, we outline a signal-detection model that allows this metacognitive parameter to be estimated separately from other test parameters, review research on its influence, and make recommendations to test designers as to how they can obtain purer measures of the different aspects of test performance.
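To make the two ideas in the abstract concrete, the sketch below shows (a) the conventional formula score, which sets the expected payoff of blind guessing to zero (with k options per item, score = right − wrong/(k − 1)), and (b) a simple type-2 signal-detection index of monitoring, treating "offer vs. omit" decisions about one's own correct and incorrect answers as a detection problem. This is a minimal illustration under an equal-variance Gaussian assumption; the function names are ours and it is not the model developed in the chapter.

```python
from statistics import NormalDist


def formula_score(right: int, wrong: int, k: int) -> float:
    """Conventional formula score for a k-alternative multiple-choice test.

    Omitted items contribute nothing; pure guessing nets zero on average.
    """
    return right - wrong / (k - 1)


def type2_dprime(offered_correct: int, omitted_correct: int,
                 offered_wrong: int, omitted_wrong: int) -> float:
    """Type-2 d': how well a student discriminates their own correct
    answers (to be offered) from incorrect ones (to be omitted).

    Hit rate = proportion of correct candidate answers the student offered;
    false-alarm rate = proportion of incorrect candidate answers offered.
    d' = z(hit rate) - z(false-alarm rate), equal-variance Gaussian model.
    """
    hit_rate = offered_correct / (offered_correct + omitted_correct)
    fa_rate = offered_wrong / (offered_wrong + omitted_wrong)
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return z(hit_rate) - z(fa_rate)
```

For example, a student with 30 right and 10 wrong on a 5-alternative test earns a formula score of 27.5, and a student who offers 8 of 10 correct answers but only 3 of 10 incorrect ones shows positive type-2 d', i.e., useful metacognitive monitoring. Two students with identical knowledge but different d' values would earn different formula scores, which is the contamination the chapter's model is designed to separate out.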

This record has no associated files available for download.

More information

Published date: 1 January 2007

Identifiers

Local EPrints ID: 45104
URI: http://eprints.soton.ac.uk/id/eprint/45104
PURE UUID: ef2d3901-0a0c-43b3-9bd1-fff1fc27bade
ORCID for P.A. Higham: orcid.org/0000-0001-6087-7224

Catalogue record

Date deposited: 26 Mar 2007
Last modified: 10 Apr 2024 01:38


Contributors

Author: P.A. Higham
Author: M.M. Arnold
Editor: R.A. DeGregorio


Contact ePrints Soton: eprints@soton.ac.uk

ePrints Soton supports OAI 2.0 with a base URL of http://eprints.soton.ac.uk/cgi/oai2

This repository has been built using EPrints software, developed at the University of Southampton, but available to everyone to use.
