When generative artificial intelligence meets academic integrity: educational opportunities & challenges in a digital age
Our education systems have been acutely shaped by the rapid digitalisation of services (Selwyn, 2016). Recent interventions in the form of artificial intelligence, particularly the rise of large language models (LLMs) (for example, ChatGPT), have effectively perturbed the teaching and learning industry across educational levels and institutions globally. Despite the plethora of views across sectors, there are relatively few empirically grounded scholarly discussions on the issue. Redressing this gap, this project explores the opportunities and challenges of using generative artificial intelligence (GenAI) vis-à-vis academic integrity in higher education (HE) settings.
Fieldwork for this project was carried out between September and December 2023. It involved semi-structured one-to-one formal interviews (n=10) and two focus groups (n=5) with educators who, at the time of the fieldwork, also served as academic integrity officers (AIOs) across faculties at a Russell Group university in England.
Research findings suggest that GenAI, as a shadow education (or e-tutoring) tool, can be beneficial in expanding future graduates' access to multiple knowledge bases and digital skills. At the same time, its use has the potential to disrupt current quality assurance practices and undermine university principles and values of cultivating critical and creative thinking and learning skills – notably through the homogenisation of learning experiences, often based on erroneous (and un-equalising) assumptions. Furthermore, staff views on GenAI vary by discipline owing to, for example, their teaching and learning practices, the perceived relationship between HE and the relevant industry, and assessment modes and designs. Key recommendations include a clear university-level policy on GenAI use and a revitalisation of academic integrity education and guidelines in partnership with staff and students across and within disciplines.
Gupta, Achala (2024) When generative artificial intelligence meets academic integrity: educational opportunities & challenges in a digital age (Education in a digital age: BERA Small Grants Fund research reports). London: British Educational Research Association, 14pp.
Text: Gupta_Small-grants-reports-2024_final-text (Version of Record)
More information
Published date: 6 August 2024
Identifiers
Local EPrints ID: 493221
URI: http://eprints.soton.ac.uk/id/eprint/493221
PURE UUID: 01a3a19d-3f6a-4a9e-9641-db23a344ab23
Catalogue record
Date deposited: 28 Aug 2024 16:49
Last modified: 29 Aug 2024 02:00