University of Southampton Institutional Repository

Assessing quality in systematic reviews of the effectiveness of health promotion: areas of consensus and dissension

Shepherd, Jonathan Paul
Weare, Katherine

Shepherd, Jonathan Paul (2009) Assessing quality in systematic reviews of the effectiveness of health promotion: areas of consensus and dissension. University of Southampton, School of Education, Doctoral Thesis, 300pp.

Record type: Thesis (Doctoral)

Abstract

Systematic reviews have played an increasingly important role in health promotion in recent
years. Yet there are debates about how they should be conducted, particularly about how the
quality of evidence should be assessed. The aim of this research was to assess current
approaches to, and general views on, the use of quality assessment in systematic reviews of
effectiveness in health promotion, and to identify areas of consensus and dissension around the
choice of techniques, methods and criteria employed.

There were two stages of data collection. The first was a structured mapping of a random
sample of 30 systematic reviews of the effectiveness of health promotion to identify and explain
trends and themes in methods and approaches to quality assessment. During the second stage
semi-structured interviews were conducted with a purposive sample of 17 systematic reviewers
who had conducted at least one review of a health promotion topic, to investigate some of these
trends and approaches in greater detail.

The mapping found that the majority of systematic reviews had assessed the quality of the
included studies, to varying degrees. However, procedures were not always explicitly reported
or consistent. There was some degree of consensus over criteria, with experimental evaluation
methods commonly favoured. Most frequently used quality assessment criteria included
participant attrition, the validity and reliability of data collection and analysis methods, and
adequacy of sample sizes. External validity was commonly assessed, primarily in terms of
generalisability and replicability, but less so in terms of intervention quality.

The interviews revealed some of the barriers to effective systematic reviewing, including: lack
of time and resources, complexity of some health promotion interventions, inclusion of
observational evaluation designs, and poor reporting of primary studies. Systematic reviewing
was commonly done in small teams, mostly comprising academics, sometimes with
practitioners. Interviewees learned systematic review skills through a combination of training,
support from colleagues and mentors, the literature, and a strong emphasis on hands-on practical
learning. Subjective judgement was often required, contrary to the popular belief that systematic
reviews are wholly objective.

The overall conclusions of this study are that systematic reviewing in health promotion is often
challenging due to the complexity of interventions and evaluation designs. This places additional
demands on reviewers in terms of knowledge and skills required, often exacerbated by finite
time scales and limited funding. Initiatives are in place to foster shared ways of working,
although the extent to which complete consensus is achievable in a multidisciplinary area such
as health promotion is questionable.

Text: Microsoft_Word_-_J_Shepherd_PhD_Thesis.pdf (3MB)

More information

Published date: June 2009
Organisations: University of Southampton

Identifiers

Local EPrints ID: 67475
URI: http://eprints.soton.ac.uk/id/eprint/67475
PURE UUID: ded73e92-92b1-4bb1-9771-d6d4fa88f00f
ORCID (Jonathan Paul Shepherd): orcid.org/0000-0003-1682-4330

Catalogue record

Date deposited: 27 Aug 2009
Last modified: 14 Mar 2024 02:38


Contributors

Thesis advisor: Katherine Weare
