University of Southampton Institutional Repository

Judgements of research co-created by Generative AI: experimental evidence

Niszczota, Paweł and Conway, Paul (2023) Judgements of research co-created by Generative AI: experimental evidence. Economics and Business Review, 9 (2), 101-114. (doi:10.18559/ebr.2023.2.744).

Record type: Article

Abstract

The introduction of ChatGPT has fuelled a public debate on the appropriateness of using Generative AI (large language models; LLMs) in work, including a debate on how they might be used (and abused) by researchers. In the current work, we test whether delegating parts of the research process to LLMs leads people to distrust researchers and to devalue their scientific work. Participants (N = 402) considered a researcher who delegates elements of the research process to a PhD student or an LLM and rated three aspects of such delegation. First, they rated whether it is morally appropriate to do so. Second, they judged whether, after the decision to delegate, they would trust the scientist who delegated to oversee future projects. Third, they rated the expected accuracy and quality of the output from the delegated research process. Our results show that people judged delegating to an LLM as less morally acceptable than delegating to a human (d = -0.78). Delegation to an LLM also decreased trust in the scientist to oversee future research projects (d = -0.80), and people expected the results to be less accurate and of lower quality (d = -0.85). We discuss how this devaluation might translate into the underreporting of Generative AI use.
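
For context, the d values reported above are Cohen's d effect sizes. A standard two-sample formulation is given below in LaTeX; the pooled-standard-deviation variant is an assumption for illustration, as the record does not state which variant the paper used:

d = \frac{\bar{x}_{\mathrm{LLM}} - \bar{x}_{\mathrm{human}}}{s_{\mathrm{pooled}}}, \qquad s_{\mathrm{pooled}} = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}

Under this reading, the negative values indicate lower ratings in the LLM condition than in the human condition, and magnitudes around 0.8 are conventionally interpreted as large effects.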

Text: 05-Niszczota - Version of Record (1MB), available under a Creative Commons Attribution licence.

More information

e-pub ahead of print date: 1 April 2023
Published date: 1 April 2023
Keywords: ChatGPT, experiment, Generative AI, GPT, large language models, metascience, trust in science

Identifiers

Local EPrints ID: 482522
URI: http://eprints.soton.ac.uk/id/eprint/482522
ISSN: 2392-1641
PURE UUID: 4e042852-58dc-4152-8571-1197e66da545
ORCID for Paul Conway: orcid.org/0000-0003-4649-6008

Catalogue record

Date deposited: 10 Oct 2023 16:42
Last modified: 18 Mar 2024 04:09

Contributors

Author: Paweł Niszczota
Author: Paul Conway


