University of Southampton Institutional Repository

Chat with the environment: interactive multimodal perception using large language models


Zhao, Xufeng, Li, Mengdi, Weber, Cornelius, Hafez, Muhammad Burhan and Wermter, Stefan (2023) Chat with the environment: interactive multimodal perception using large language models. In 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE. 7 pp. (doi:10.1109/IROS55552.2023.10342363).

Record type: Conference or Workshop Item (Paper)

Abstract

Programming robot behavior in a complex world faces challenges on multiple levels, from dexterous low-level skills to high-level planning and reasoning. Recent pre-trained Large Language Models (LLMs) have shown remarkable reasoning ability in few-shot robotic planning. However, it remains challenging to ground LLMs in multimodal sensory input and continuous action output, while enabling a robot to interact with its environment and acquire novel information as its policies unfold. We develop a robot interaction scenario with a partially observable state, which requires the robot to decide on a range of epistemic actions in order to sample sensory information across multiple modalities before it can execute the task correctly. We therefore propose an interactive perception framework with an LLM as its backbone, exploiting its ability to instruct epistemic actions, to reason over the resulting multimodal sensations (vision, sound, haptics, proprioception), and to plan an entire task execution based on the interactively acquired information. Our study demonstrates that LLMs can provide high-level planning and reasoning skills and control interactive robot behavior in a multimodal environment, while multimodal modules, given context about the environmental state, help ground the LLM and extend its processing ability. The project website can be found at https://matcha-model.github.io/.
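To make the loop described in the abstract concrete, the following is a minimal, self-contained Python sketch of an LLM-driven interactive perception cycle: the LLM backbone alternates between proposing epistemic actions and reasoning over the textual feedback returned by multimodal perception modules, until it commits to a task plan. This is an illustration under stated assumptions, not the authors' Matcha implementation; the names query_llm, PERCEPTION_MODULES, and interactive_perception are hypothetical, and the LLM call is replaced by a fixed stand-in policy for demonstration.

    # Hypothetical sketch of an LLM-driven interactive perception loop.
    # Names are illustrative, not the authors' Matcha API.

    def query_llm(transcript: str) -> str:
        """Stand-in for a few-shot-prompted LLM call (e.g. a chat API).
        Here it follows a fixed policy purely for demonstration."""
        if "sound:" not in transcript:
            return "knock"                  # gather auditory evidence first
        if "weight:" not in transcript:
            return "weigh"                  # then proprioceptive evidence
        return "done: pick up the metal block"  # commit to a task plan

    # Multimodal modules ground raw sensations into text the LLM can read.
    PERCEPTION_MODULES = {
        "knock": lambda: "sound: a crisp metallic ring",
        "weigh": lambda: "weight: 312 g, heavier than plastic",
    }

    def interactive_perception(instruction: str, max_steps: int = 5) -> str:
        """Alternate epistemic actions and LLM reasoning until a plan emerges."""
        transcript = f"instruction: {instruction}"
        for _ in range(max_steps):
            action = query_llm(transcript)
            if action.startswith("done:"):
                return action                        # final task plan
            feedback = PERCEPTION_MODULES[action]()  # epistemic action -> sensation
            transcript += f"\n{action} -> {feedback}"
        return "failed: ran out of steps"

    print(interactive_perception("find the metal block"))

The key design choice reflected here, as in the abstract, is that every sensation is verbalized into text so that a language-only model can reason over heterogeneous modalities and decide which epistemic action to take next.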

Text: IROS paper 2023 Zhao Li Weber Hafez Wermter - Accepted Manuscript
Restricted to Repository staff only until 13 December 2025.

More information

Published date: 13 December 2023
Venue - Dates: IEEE/RSJ International Conference on Intelligent Robots and Systems, Detroit, United States, 2023-10-01 - 2023-10-05

Identifiers

Local EPrints ID: 496190
URI: http://eprints.soton.ac.uk/id/eprint/496190
PURE UUID: 22b2022f-7041-44a1-9523-54ce736a3843
ORCID for Muhammad Burhan Hafez: orcid.org/0000-0003-1670-8962

Catalogue record

Date deposited: 06 Dec 2024 17:35
Last modified: 07 Dec 2024 03:13

Contributors

Author: Xufeng Zhao
Author: Mengdi Li
Author: Cornelius Weber
Author: Muhammad Burhan Hafez
Author: Stefan Wermter

