University of Southampton Institutional Repository

Trust, accountability, and autonomy in knowledge graph-based AI for self-determination


Ibáñez, Luis-Daniel, Domingue, John, Kirrane, Sabrina, Seneviratne, Oshani, Third, Aisling and Vidal, Maria-Esther (2023) Trust, accountability, and autonomy in knowledge graph-based AI for self-determination. Transactions on Graph Data and Knowledge, 1 (1), 9:1–9:32, [9]. (doi:10.4230/TGDK.1.1.9).

Record type: Article

Abstract

Knowledge Graphs (KGs) have emerged as fundamental platforms for powering intelligent decision-making and a wide range of Artificial Intelligence (AI) services across major corporations such as Google, Walmart, and Airbnb. KGs complement Machine Learning (ML) algorithms by providing data context and semantics, thereby enabling further inference and question-answering capabilities. The integration of KGs with neural learning (e.g., Large Language Models (LLMs)) is currently a topic of active research, commonly known as neuro-symbolic AI. Despite the numerous benefits of KG-based AI, its growing ubiquity within online services may result in the loss of self-determination for citizens, a fundamental societal issue. The more we rely on these technologies, which are often centralised, the less citizens will be able to determine their own destinies. To counter this threat, AI regulation, such as the European Union (EU) AI Act, is being proposed in certain regions. Such regulation sets out what technologists need to do, leading to questions concerning: How can the output of AI systems be trusted? What is needed to ensure that the data fuelling these artefacts, and their inner workings, are transparent? How can AI be made accountable for its decision-making? This paper conceptualises the foundational topics and research pillars needed to support KG-based AI for self-determination. Drawing upon this conceptual framework, challenges and opportunities for citizen self-determination are illustrated and analysed in a real-world scenario. As a result, we propose a research agenda aimed at accomplishing the recommended objectives.

Text: 2310.19503 - Accepted Manuscript. Available under License Creative Commons Attribution. Download (1MB).
Text: TGDK.1.1.9 - Version of Record. Available under License Creative Commons Attribution. Download (1MB).

More information

Accepted/In Press date: 17 November 2023
Published date: 19 December 2023

Identifiers

Local EPrints ID: 485783
URI: http://eprints.soton.ac.uk/id/eprint/485783
PURE UUID: 5897d750-14a1-4665-b266-480ed8c3fe33
ORCID for Luis-Daniel Ibáñez: orcid.org/0000-0001-6993-0001

Catalogue record

Date deposited: 19 Dec 2023 17:33
Last modified: 11 Jul 2024 01:53


Contributors

Author: Luis-Daniel Ibáñez
Author: John Domingue
Author: Sabrina Kirrane
Author: Oshani Seneviratne
Author: Aisling Third
Author: Maria-Esther Vidal



