Could artificial intelligence write mental health nursing care plans?
Woodnutt, Samuel, Allen, Chris, Snowden, Jasmine, Flynn, Matt, Hall, Simon, Libberton, Paula and Purvis, Francesca (2024) Could artificial intelligence write mental health nursing care plans? Journal of Psychiatric and Mental Health Nursing, 31 (1), 79-86. (doi:10.1111/jpm.12965).
Abstract
Background: Artificial intelligence (AI) is increasingly being used and discussed in care contexts. ChatGPT has gained significant attention in the popular and scientific literature, although how ChatGPT can be used in care delivery is not yet known.
Aims: To use artificial intelligence (ChatGPT) to create a mental health nursing care plan and evaluate the quality of the output against the authors’ clinical experience and existing guidance.
Materials & methods: Basic text commands were input into ChatGPT about a fictitious person called ‘Emily’ who presents with self-injurious behaviour. The output from ChatGPT was then evaluated against the authors’ clinical experience and current (national) care guidance.
Results: ChatGPT was able to provide a care plan that incorporated some principles of dialectical behaviour therapy, but the output had significant errors and limitations; there is therefore a reasonable likelihood of harm if it were used in this way.
Discussion: AI use is increasing in direct-care contexts through chatbots and other means. However, AI can inhibit clinician-to-care-recipient engagement, ‘recycle’ existing stigma, and introduce error, which may diminish the ability of care to uphold personhood and therefore lead to significant avoidable harm.
Conclusion: Use of AI in this context should be avoided until policy and guidance can safeguard the wellbeing of care recipients and the sophistication of AI output has increased. Given ChatGPT’s ability to produce superficially reasonable outputs, there is a risk that errors will go unnoticed, increasing the likelihood of patient harm. Further research evaluating AI output is needed to consider how AI may be used safely in care delivery.
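The methods describe entering basic text commands into ChatGPT about a fictitious person. As a minimal illustrative sketch only, the snippet below shows how a comparable prompt might be issued programmatically via the OpenAI Python client; the study itself used ChatGPT directly, and the model name and prompt wording here are assumptions rather than details taken from the paper.

```python
# Illustrative sketch: issuing a care-plan prompt comparable to the study's
# text commands via the OpenAI Python client. Model and wording are assumed,
# not drawn from the paper.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical prompt describing the fictitious person from the study.
prompt = (
    "Write a mental health nursing care plan for 'Emily', a fictitious person "
    "who presents with self-injurious behaviour."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model; the paper does not specify one
    messages=[{"role": "user", "content": prompt}],
)

# The generated care plan would then be appraised against clinical experience
# and national guidance, as the authors describe.
print(response.choices[0].message.content)
```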
More information
Accepted/In Press date: 23 July 2023
e-pub ahead of print date: 4 August 2023
Published date: February 2024
Keywords:
art of nursing, nursing role, quality of care, self-harm, therapeutic relationships
Identifiers
Local EPrints ID: 480625
URI: http://eprints.soton.ac.uk/id/eprint/480625
ISSN: 1351-0126
PURE UUID: ce0bb154-c8ca-415f-9bf4-108fc5251b06
Catalogue record
Date deposited: 08 Aug 2023 16:31
Last modified: 10 Apr 2024 02:11