Resolving content moderation dilemmas between free speech and harmful misinformation
Kozyreva, Anastasia, Herzog, Stefan M, Lewandowsky, Stephan, Hertwig, Ralph, Lorenz-Spreen, Philipp, Leiser, Mark and Reifler, Jason
(2023)
Resolving content moderation dilemmas between free speech and harmful misinformation.
Proceedings of the National Academy of Sciences of the United States of America, 120 (7), e2210666120.
(doi:10.1073/pnas.2210666120).
Abstract
In online content moderation, two key values may come into conflict: protecting freedom of expression and preventing harm. Robust rules based in part on how citizens think about these moral dilemmas are necessary to deal with this conflict in a principled way, yet little is known about people's judgments and preferences around content moderation. We examined such moral dilemmas in a conjoint survey experiment where US respondents (N = 2,564) indicated whether they would remove problematic social media posts on election denial, antivaccination, Holocaust denial, and climate change denial and whether they would take punitive action against the accounts. Respondents were shown key information about the user and their post as well as the consequences of the misinformation. The majority preferred quashing harmful misinformation over protecting free speech. Respondents were more reluctant to suspend accounts than to remove posts and more likely to do either if the harmful consequences of the misinformation were severe or if sharing it was a repeated offense. Features related to the account itself (the person behind the account, their partisanship, and number of followers) had little to no effect on respondents' decisions. Content moderation of harmful misinformation was a partisan issue: Across all four scenarios, Republicans were consistently less willing than Democrats or independents to remove posts or penalize the accounts that posted them. Our results can inform the design of transparent rules for content moderation of harmful misinformation.
Text: kozyreva-et-al-2023-resolving-content-moderation-dilemmas-between-free-speech-and-harmful-misinformation - Version of Record
More information
Accepted/In Press date: 9 November 2022
Published date: 7 February 2023
Keywords:
Humans, Speech, Communication, Morals, Emotions, Politics, Social Media
Identifiers
Local EPrints ID: 495122
URI: http://eprints.soton.ac.uk/id/eprint/495122
ISSN: 0027-8424
PURE UUID: 37892217-2052-47cf-99a5-c3f9ac7ec2d9
Catalogue record
Date deposited: 29 Oct 2024 17:48
Last modified: 05 Nov 2024 03:09
Contributors
Author:
Anastasia Kozyreva
Author:
Stefan M Herzog
Author:
Stephan Lewandowsky
Author:
Ralph Hertwig
Author:
Philipp Lorenz-Spreen
Author:
Mark Leiser
Author:
Jason Reifler