Individualised rating-scale procedure: a means of reducing response style contamination in survey data?
Chami-Castaldi, Elisa, Reynolds, Nina and Wallace, James (2008) Individualised rating-scale procedure: a means of reducing response style contamination in survey data? Electronic Journal of Business Research Methods, 6 (1), 9-20.
Abstract
Response style bias has been shown to seriously contaminate the substantive results drawn from survey data, particularly in surveys conducted with cross-cultural samples. As a consequence, there have been calls to identify the response formats that suffer least from response style bias. Previous studies show that respondents’ personal characteristics, such as age, education level and culture, are associated with response style manifestation.
Differences in the way respondents interpret and utilise researcher-defined fixed rating-scales (e.g. Likert formats) pose a problem for survey researchers. The techniques currently used to remove response bias from survey data are inadequate: they cannot accurately determine the level of contamination present and frequently blur true score variance. Inappropriate rating-scales can affect the level of response style bias manifested, insofar as they may not represent respondents’ cognitions. Rating-scales that are too long present respondents with response categories that are not ‘meaningful’ to them, whereas rating-scales that are too short force respondents to compress their cognitive rating-scales into the number of response categories provided, which can produce extreme response style (ERS) contamination. Researchers therefore cannot guard against two respondents who share the same cognitive position on a continuum reporting their stance using different numbers on the rating-scale provided. This is especially problematic where a standard fixed rating-scale is used in cross-cultural surveys.
This paper details the development of the Individualised Rating-Scale Procedure (IRSP), a means of eliciting a respondent’s ‘ideal’ rating-scale length for use as the measurement instrument in a survey, thereby ‘designing out’ response bias. Whilst the fundamental ideas behind self-anchoring rating-scales have been posited in the literature, the IRSP was developed through a series of qualitative interviews with participants. Finally, we discuss how the IRSP’s reliability and validity can be quantitatively assessed and compared against typical fixed, researcher-defined rating-scales, such as the Likert format.
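The following is a minimal illustrative sketch (not from the paper) of the compression mechanism described above: two respondents who share the same latent positions, one using the full range of a fixed scale and one avoiding its endpoints, report different category numbers, and shorter scales inflate a crude endpoint-based ERS index. The scale lengths, the `usable_fraction` parameter and the ERS measure are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def map_to_scale(latent, n_points, usable_fraction=1.0):
    """Map latent attitudes in [0, 1] onto a fixed n-point rating scale.

    usable_fraction < 1.0 mimics a respondent who avoids the scale
    endpoints; 1.0 mimics one who uses the full range. Both values are
    illustrative assumptions, not parameters from the paper.
    """
    lo = (1.0 - usable_fraction) / 2.0
    shrunk = lo + latent * usable_fraction            # compress toward the middle
    return np.clip(np.round(shrunk * (n_points - 1)) + 1, 1, n_points)

def ers_index(responses, n_points):
    """Share of responses in the two endpoint categories (a crude ERS measure)."""
    return np.mean((responses == 1) | (responses == n_points))

latent = rng.uniform(size=10_000)                     # shared 'true' positions

for n_points in (5, 7, 11):
    full_range = map_to_scale(latent, n_points, usable_fraction=1.0)
    endpoint_avoider = map_to_scale(latent, n_points, usable_fraction=0.6)
    print(f"{n_points}-point scale: "
          f"ERS full-range user = {ers_index(full_range, n_points):.2f}, "
          f"endpoint-avoider = {ers_index(endpoint_avoider, n_points):.2f}")
```

In this toy setup the full-range user's endpoint share rises as the scale gets shorter, while the endpoint-avoider never uses the extremes at all, so identical latent positions yield systematically different observed scores depending on how each respondent maps their cognitive scale onto the fixed format.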
More information
Published date: 2008
Keywords:
scale length, response styles, response bias, survey research, cross-cultural surveys, individualised rating-scale procedure
Identifiers
Local EPrints ID: 179961
URI: http://eprints.soton.ac.uk/id/eprint/179961
ISSN: 1477-7029
PURE UUID: f33e5606-b75b-4f51-8bea-2fc9d214e3d4
Catalogue record
Date deposited: 05 Apr 2011 07:43
Last modified: 08 Jan 2022 05:34
Contributors
Author:
Elisa Chami-Castaldi
Author:
Nina Reynolds
Author:
James Wallace