University of Southampton Institutional Repository

Method variation in calculating perceived change

Simintiras, Antonias and Reynolds, Nina L. (2010) Method variation in calculating perceived change. Academy of Marketing Science Annual Conference - Achieving Balance: Research, Practice and Career, Portland, United States. 25 - 28 May 2010.

Record type: Conference or Workshop Item (Paper)

Abstract

Prompted by literature findings suggesting that the error attributed to measures used in generating retrospective change reports is excessive, this paper explores error caused by the methods that individuals use for calculating change retrospectively. According to Dowling (2001), simple, common-sense ideas of how to measure change in marketing settings can often result in ambiguous and possibly incorrect conclusions being drawn. Assessing change via longitudinal studies appears to allow a straightforward comparison to take place. However, it is difficult to establish whether the observed change is alpha, beta or gamma type change (Golembiewski et al., 1976). That is, change in scores may be due to change in attribute level (alpha), recalibration of the measurement (beta) and/or change in construct meaning (gamma). Cross-sectionally, one type of retrospective design uses ‘retrospective pretests’ administered at the end of the intervention (treatment), at the same time and often on the same form as posttest ratings (Hill and Betz, 2005). Another design asks respondents to report ‘perceived change’ retrospectively, which may (or may not) be the result of an intervention (Lam and Bengo, 2003).

Despite the potential problems with the longitudinal measurement of change, the overwhelming conclusion is that, in contrast to ‘pretest-posttest’ designs (Byrne and Crombie, 2003), the “use of retrospective accounts in management research needs to be seriously questioned” (Golden, 1992: 857). Nevertheless, the debate continues: proponents of longitudinal designs argue that the validity concerns inherent in retrospective data result in unacceptable levels of measurement bias, while proponents of retrospective designs argue that response shift bias (i.e., gamma change) poses a greater problem than self-report bias.

Given the limited research into the intricacies of measuring change retrospectively, the popularity of retrospective reports, and findings suggesting that error attributable to informant fallibility is not excessive whereas error attributable to the measures used in generating retrospective reports is (Miller et al., 1997), this paper investigates method variation in calculating change without taking into consideration the accuracy of recall. To examine the impact of method variation in calculating perceived change, two studies were conducted.

The first study identifies the different calculation methods used by respondents. It identified four methods for calculating change – initial-base (time 1 as the base for calculating change in all subsequent time periods), re-base (time n−1 as the base for calculating change in time n), cumulative (change in all previous time periods included in the change calculation for the current time period) and adjustment (change in the previous time period used as the base for change in the current time period). The re-base and initial-base methods were used most frequently in study 1. The purpose of the second study (of 274 respondents) was to a) examine respondents’ choice of method from those identified in the first study, b) assess contextual influences, and c) establish the extent to which methods were considered correct. Re-base and initial-base were the most frequently used methods for both paper-based calculations (78%) and mental arithmetic (75%), and 63% of respondents used the same method for both. Respondents did not generally consider all methods to be correct.
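To make the difference between the two most frequently used methods concrete, the sketch below works through a hypothetical three-period series using the standard percentage-change formulas that the definitions above appear to describe; the series, function names and figures are illustrative assumptions, not data from the paper.

```python
# Illustrative sketch (not from the paper): initial-base vs. re-base
# percentage change computed on the same hypothetical series.

def initial_base_changes(series):
    """Percentage change of each period relative to the first observation."""
    base = series[0]
    return [100 * (x - base) / base for x in series[1:]]

def re_base_changes(series):
    """Percentage change of each period relative to the previous observation."""
    return [100 * (curr - prev) / prev
            for prev, curr in zip(series, series[1:])]

values = [100, 120, 150]              # hypothetical observations at times 1, 2, 3

print(initial_base_changes(values))   # [20.0, 50.0]
print(re_base_changes(values))        # [20.0, 25.0]
```

For the second period the two methods report 50% and 25% respectively, so responses computed from different bases are not directly comparable even when the underlying recall is accurate.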

These findings indicate that in reporting percentage change there are validity threats stemming from the calculation methods that are used; method variation is a source of bias when calculating change retrospectively. Recalibrating scores could remove this source of bias, but only when the calculation method used is known. This paper provides evidence that part of the complexity of measuring change retrospectively rests with computational diversity. However, many other aspects of change measurement still require investigation.
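The recalibration point can be illustrated under the same assumptions: if it is known that a respondent reported re-base (period-on-period) changes, those figures can be chained into an initial-base figure. This is standard percentage arithmetic offered as a sketch, not a procedure described in the paper.

```python
# Illustrative sketch: chaining re-base (period-on-period) percentage
# changes into a single initial-base change relative to period 1.

def rebase_to_initial_base(re_base_pcts):
    """Convert a sequence of period-on-period % changes into a % change vs. period 1."""
    factor = 1.0
    for pct in re_base_pcts:
        factor *= 1 + pct / 100   # accumulate growth factors
    return 100 * (factor - 1)

print(rebase_to_initial_base([20.0, 25.0]))   # 50.0, matching the initial-base figure above
```

Without knowing which method a respondent used, however, no such conversion is possible, which is the source of bias the paper identifies.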

This record has no associated files available for download.

More information

Published date: 27 May 2010
Additional Information: William R Darden award for best marketing research paper
Venue - Dates: Academy of Marketing Science Annual Conference - Achieving Balance: Research, Practice and Career, Portland, United States, 2010-05-25 - 2010-05-28

Identifiers

Local EPrints ID: 179965
URI: http://eprints.soton.ac.uk/id/eprint/179965
PURE UUID: 2384766f-8938-4d03-a57b-3519ab812ade

Catalogue record

Date deposited: 05 Apr 2011 10:36
Last modified: 10 Dec 2021 19:01

Contributors

Author: Antonias Simintiras
Author: Nina L. Reynolds
