University of Southampton Institutional Repository

Crowdsourcing hypothesis tests: making transparent how design choices shape research results


Landy, Justin, Jia, Miaolei (Liam), Ding, Isabel and Dawson, Ian, The Crowdsourcing Hypothesis Tests Collaboration (2019) Crowdsourcing hypothesis tests: making transparent how design choices shape research results. Psychological Bulletin. (In Press)

Record type: Article

Abstract

To what extent are research results influenced by subjective decisions that scientists make as they design studies? Fifteen research teams independently designed studies to answer five original research questions related to moral judgments, negotiations, and implicit cognition. Participants from two separate large samples (total N > 15,000) were then randomly assigned to complete one version of each study. Effect sizes varied dramatically across different sets of materials designed to test the same hypothesis: materials from different teams rendered statistically significant effects in opposite directions for four out of five hypotheses, with the narrowest range in estimates being d = -0.37 to +0.26. Meta-analysis and a Bayesian perspective on the results revealed overall support for two hypotheses, and a lack of support for three hypotheses. Overall, practically none of the variability in effect sizes was attributable to the skill of the research team in designing materials, while considerable variability was attributable to the hypothesis being tested. In a forecasting survey, predictions of other scientists were significantly correlated with study results, both across and within hypotheses. Crowdsourced testing of research hypotheses helps reveal the true consistency of empirical support for a scientific claim.

Text: Crowdsourcing Hypothesis Tests - Pre-Publication Version - Accepted Manuscript (download, 1MB)

More information

Accepted/In Press date: 29 October 2019
Keywords: Crowdsourcing, scientific transparency, stimulus sampling, forecasting, conceptual replications, research robustness

Identifiers

Local EPrints ID: 435290
URI: https://eprints.soton.ac.uk/id/eprint/435290
ISSN: 0033-2909
PURE UUID: c0125cf3-e66d-4c09-bf61-1c8fe4ba6984
ORCID for Ian Dawson: orcid.org/0000-0003-0555-9682

Catalogue record

Date deposited: 30 Oct 2019 17:30
Last modified: 06 Nov 2019 01:33


Contributors

Author: Justin Landy
Author: Miaolei (Liam) Jia
Author: Isabel Ding
Author: Ian Dawson (ORCID: orcid.org/0000-0003-0555-9682)


