University of Southampton Institutional Repository

Judgments of research co-created by generative AI: experimental evidence
arXiv
Niszczota, Paweł
245606ca-f904-42f1-b6df-404d86f0f52a
Conway, Paul
765aaaf9-173f-44cf-be9a-c8ffbb51e286

Record type: UNSPECIFIED

Abstract

The introduction of ChatGPT has fuelled a public debate on the use of generative AI (large language models; LLMs), including its use by researchers. In the current work, we test whether delegating parts of the research process to LLMs leads people to distrust and devalue researchers and scientific output. Participants (N=402) considered a researcher who delegates elements of the research process to a PhD student or an LLM, and rated (1) moral acceptability, (2) trust in the scientist to oversee future projects, and (3) the accuracy and quality of the output. People judged delegating to an LLM as less acceptable than delegating to a human (d = -0.78). Delegation to an LLM also decreased trust to oversee future research projects (d = -0.80), and people thought the results would be less accurate and of lower quality (d = -0.85). We discuss how this devaluation might translate into underreporting of generative AI use.
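
For readers scanning the effect sizes above: the d values are Cohen's d, the standardized mean difference between the two delegation conditions. As a point of reference, the standard textbook definition (not quoted from this record) is:

d = \frac{\bar{x}_{\text{LLM}} - \bar{x}_{\text{human}}}{s_{\text{pooled}}},
\qquad
s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}}

By Cohen's conventional benchmarks (0.2 small, 0.5 medium, 0.8 large), all three reported differences are large effects.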

Text: 2305.11873v1 (Author's Original), 2MB

More information

Submitted date: 3 May 2023
Additional Information: 10 pages, 2 tables, 1 figure
Keywords: cs.HC, cs.AI, cs.CL, cs.CY, econ.GN, q-fin.EC, K.4.2; I.2.7

Identifiers

Local EPrints ID: 479451
URI: http://eprints.soton.ac.uk/id/eprint/479451
PURE UUID: 017d3ce2-0556-40ce-a1fd-ad244ae3d6fc
ORCID for Paul Conway: orcid.org/0000-0003-4649-6008

Catalogue record

Date deposited: 24 Jul 2023 16:59
Last modified: 17 Mar 2024 04:17


Contributors

Author: Paweł Niszczota
Author: Paul Conway

Download statistics

Download counts from ePrints Soton over the past year. Other digital versions may also be available to download, e.g. from the publisher's website.


Contact ePrints Soton: eprints@soton.ac.uk

ePrints Soton supports OAI-PMH 2.0 with a base URL of http://eprints.soton.ac.uk/cgi/oai2
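
As a minimal illustration of that interface (using the standard OAI-PMH verbs; the record identifier below follows the usual EPrints convention of oai:<hostname>:<eprint id> and is an assumption, not quoted from this page):

http://eprints.soton.ac.uk/cgi/oai2?verb=Identify
http://eprints.soton.ac.uk/cgi/oai2?verb=GetRecord&identifier=oai:eprints.soton.ac.uk:479451&metadataPrefix=oai_dc

The first request returns the repository's self-description; the second asks for this record (EPrints ID 479451) in the mandatory Dublin Core (oai_dc) metadata format.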

This repository runs on EPrints software, developed at the University of Southampton and freely available for anyone to use.
