University of Southampton Institutional Repository

ACT-FLEX: a symbolic and generative AI integration architecture for generalisable and explainable robotic disassembly tasks


Ajraou, Houdeyfa
1b083859-79b0-4629-bc2c-11aed5ef2103
Ward, Rob
8998877f-7556-4a07-a4bb-a447d976f578
Farbiz, Farzam
94929dfc-bbba-4191-9a38-13ba86e6643f
Graf, Erich
1a5123e2-8f05-4084-a6e6-837dcfc66209
Miaolong, Yuan
34c484c2-dfb5-4e5a-b0a6-bade039b8595
Shijun, Yan
efc1ffc9-b6de-4939-a085-16e9b7096024
Oyekan, John
6f644c7c-eeb0-4abc-ade0-53a126fe769a

Ajraou, Houdeyfa, Ward, Rob, Farbiz, Farzam, Graf, Erich, Miaolong, Yuan, Shijun, Yan and Oyekan, John (2026) ACT-FLEX: a symbolic and generative AI integration architecture for generalisable and explainable robotic disassembly tasks. Robotics and Computer-Integrated Manufacturing, 101, [103277]. (doi:10.1016/j.rcim.2026.103277).

Record type: Article

Abstract

The increasing demand for personalised products poses new challenges for sustainable manufacturing, particularly for the large-scale disassembly required for circular economy initiatives. Traditional robotic systems lack the flexibility and generalisability needed to adapt across diverse product types and robot morphologies. In this work, we present ACT-FLEX, a cognitive architecture inspired by psychological models of human cognition. We used ACT-FLEX to show that pre-trained Large Language Models (LLMs) and Vision-Language Models (VLMs) can enable flexible, explainable and generalisable robotic disassembly across robot morphologies and tasks. To this end, our architecture combines symbolic reasoning, multimodal perception and morphology-aware action generation to support decision-making in dynamic environments. We validate ACT-FLEX across simulated and physical experiments involving multiple disassembly scenarios, product configurations and various robotic platforms, including the UR10, KUKA iiwa, Franka Panda and uArm Swift Pro. Results show transferability across robot platforms and robustness to product variations, with success rates of up to 80% in physical trials. This work demonstrates a step towards realising transparent, retaskable and sustainable robotic systems for Industry 5.0.

Text
ACT_FLEX_Resubmission - Accepted Manuscript
Restricted to Repository staff only until 16 March 2027.

More information

Accepted/In Press date: 23 February 2026
e-pub ahead of print date: 16 March 2026
Published date: 1 October 2026
Additional Information: Publisher Copyright: © 2026 Elsevier Ltd. All rights are reserved, including those for text and data mining, AI training, and similar technologies.
Keywords: ACT-R, Cognitive architecture, Generative pre-trained transformers, Manufacturing, Robotics

Identifiers

Local EPrints ID: 510808
URI: http://eprints.soton.ac.uk/id/eprint/510808
ISSN: 0736-5845
PURE UUID: fc4dc998-b0ef-4a21-9e56-76b9caea2d52
ORCID for Erich Graf: orcid.org/0000-0002-3162-4233

Catalogue record

Date deposited: 22 Apr 2026 16:42
Last modified: 23 Apr 2026 01:43


Contributors

Author: Houdeyfa Ajraou
Author: Rob Ward
Author: Farzam Farbiz
Author: Erich Graf
Author: Yuan Miaolong
Author: Yan Shijun
Author: John Oyekan

