The SPATIAL architecture: design and development experiences from gauging and monitoring the AI inference capabilities of modern applications
Ottun, Abdul-Rasheed, Marasinghe, Rasinthe, Elemosho, Toluwani, et al. (2024) The SPATIAL architecture: design and development experiences from gauging and monitoring the AI inference capabilities of modern applications. In 2024 IEEE 44th International Conference on Distributed Computing Systems (ICDCS). IEEE, pp. 947-959. (doi:10.1109/ICDCS60910.2024.00092).
Record type: Conference or Workshop Item (Paper)
Abstract
Despite its enormous economic and societal impact, a lack of human-perceived control and safety is redefining the design and development of emerging AI-based technologies. New regulatory requirements mandate increased human control and oversight of AI, transforming the development practices and responsibilities of individuals interacting with AI. In this paper, we present the SPATIAL architecture, a system that augments modern applications with capabilities to gauge and monitor the trustworthiness of their AI inference capabilities. To design SPATIAL, we first explore the evolution of modern system architectures and how AI components and pipelines are integrated into them. With this information, we then develop a proof-of-concept architecture that analyzes AI models in a human-in-the-loop manner. SPATIAL provides an AI dashboard that allows individuals interacting with applications to obtain quantifiable insights into the AI decision process. Human operators then use this information to understand issues that influence the performance of AI models and to adjust for or counter them. Through rigorous benchmarks and experiments in real-world industrial applications, we demonstrate that SPATIAL can easily augment modern applications with metrics to gauge and monitor trustworthiness; however, this in turn increases the complexity of developing and maintaining systems that implement AI. Our work highlights lessons learned and experiences from augmenting modern applications with mechanisms that support regulatory compliance of AI. In addition, we present a roadmap of ongoing challenges that require attention to achieve robust trustworthiness analysis of AI and greater engagement of human oversight.
More information
Published date: 22 August 2024
Venue - Dates: 44th IEEE International Conference on Distributed Computing Systems (ICDCS 2024), Jersey City, United States, 2024-07-23 - 2024-07-26
Identifiers
Local EPrints ID: 494991
URI: http://eprints.soton.ac.uk/id/eprint/494991
PURE UUID: f74c88c8-bfde-4da6-a0f1-8ec1ef4ae071
Catalogue record
Date deposited: 25 Oct 2024 16:33
Last modified: 25 Oct 2024 16:33
Contributors
Author: Abdul-Rasheed Ottun
Author: Rasinthe Marasinghe
Author: Toluwani Elemosho
Author: Mohamed Ragab
Corporate Author: et al.