Insights into the accuracy of social scientists' forecasts of societal change
Sung, Ming-Chien
Tai, Chung-Ching
9 February 2023
Abstract
How well can social scientists predict societal change, and what processes underlie their predictions? To answer these questions, we ran two forecasting tournaments testing accuracy of predictions of societal change in domains commonly studied in the social sciences: ideological preferences, political polarization, life satisfaction, sentiment on social media, and gender-career and racial bias. Following provision of historical trend data on the domain, social scientists submitted pre-registered monthly forecasts for a year (Tournament 1; N=86 teams/359 forecasts), with an opportunity to update forecasts based on new data six months later (Tournament 2; N=120 teams/546 forecasts). Benchmarking forecasting accuracy revealed that social scientists’ forecasts were on average no more accurate than simple statistical models (historical means, random walk, or linear regressions) or the aggregate forecasts of a sample from the general public (N=802). However, scientists were more accurate if they had scientific expertise in a prediction domain, were interdisciplinary, used simpler models, and based predictions on prior data.
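As context for the benchmarking described in the abstract, the sketch below illustrates how naive statistical benchmarks of the kind mentioned (historical mean, random walk, linear trend) and a mean-absolute-scaled-error style accuracy score are commonly computed for a monthly series. This is an illustrative Python reconstruction on synthetic data, not the authors' code; the function names, parameters, and the held-out split are assumptions made for the example.

import numpy as np

def naive_benchmarks(history, horizon=12):
    """Illustrative naive forecasts for a monthly time series.

    history : 1-D sequence of past monthly values (oldest first).
    horizon : number of future months to forecast.
    Returns a dict mapping benchmark name -> array of forecasts.
    """
    history = np.asarray(history, dtype=float)
    months = np.arange(len(history))

    # Historical mean: forecast every future month as the past average.
    mean_fc = np.full(horizon, history.mean())

    # Random walk (naive): repeat the last observed value.
    rw_fc = np.full(horizon, history[-1])

    # Linear regression: fit value ~ month and extrapolate the trend.
    slope, intercept = np.polyfit(months, history, deg=1)
    future = np.arange(len(history), len(history) + horizon)
    lin_fc = intercept + slope * future

    return {"historical_mean": mean_fc,
            "random_walk": rw_fc,
            "linear_regression": lin_fc}

def mase(actual, forecast, history):
    """Mean absolute scaled error: forecast error scaled by the
    average one-step change in the historical data."""
    actual, forecast, history = map(np.asarray, (actual, forecast, history))
    scale = np.mean(np.abs(np.diff(history)))
    return np.mean(np.abs(actual - forecast)) / scale

if __name__ == "__main__":
    # Synthetic monthly series: 39 months of "history", 12 months held out.
    rng = np.random.default_rng(1)
    series = 50 + 0.2 * np.arange(51) + rng.normal(scale=2.0, size=51)
    history, held_out = series[:39], series[39:]
    for name, fc in naive_benchmarks(history, horizon=12).items():
        print(f"{name:18s} MASE = {mase(held_out, fc, history):.2f}")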
Text: Grossmann et al. preprint - Author's Original
Text: 2022 forecasting_collaborative_Grossmann_NHB - Accepted Manuscript
More information
Submitted date: 26 June 2022
Accepted/In Press date: 19 December 2022
e-pub ahead of print date: 9 February 2023
Published date: 9 February 2023
Keywords:
forecasting, metascience, prejudice, political polarization, well-being, expert judgment
Identifiers
Local EPrints ID: 473140
URI: http://eprints.soton.ac.uk/id/eprint/473140
ISSN: 2397-3374
PURE UUID: 5bb8266a-e0d5-407d-8532-c31ed28bf29e
Catalogue record
Date deposited: 10 Jan 2023 18:35
Last modified: 13 Apr 2024 04:01
Contributors
Corporate Author: Igor Grossmann