University of Southampton Institutional Repository

Detect closer surfaces that can be seen: new modeling and evaluation in cross-domain 3D object detection



Zhang, Ruixiao, Wu, Yihong, Lee, Juheon, Cai, Xiaohao and Prugel-Bennett, Adam (2024) Detect closer surfaces that can be seen: new modeling and evaluation in cross-domain 3D object detection. Endriss, Ulle, Melo, Francisco S., Bach, Kerstin, Bugarin-Diz, Alberto, Alonso-Moral, Jose M., Barro, Senen and Heintz, Fredrik (eds.) In ECAI 2024 - 27th European Conference on Artificial Intelligence, Including 13th Conference on Prestigious Applications of Intelligent Systems, PAIS 2024, Proceedings. vol. 392, IOS Press. pp. 65-72. (doi:10.3233/FAIA240472).

Record type: Conference or Workshop Item (Paper)

Abstract

Domain adaptation technologies have not yet reached an ideal level of performance in 3D object detection for autonomous driving, mainly because of significant cross-domain differences in vehicle sizes and in the environments vehicles operate in. These factors together hinder the effective transfer and application of knowledge learned from specific datasets. Since existing evaluation metrics were originally designed for single-domain evaluation, computing the 2D or 3D overlap between predicted and ground-truth bounding boxes, they often suffer from overfitting caused by the size differences among datasets. This raises a fundamental question about evaluating the cross-domain performance of 3D object detection models: do we really need models to maintain excellent performance on their original 3D bounding boxes after being applied across domains? From a practical application perspective, a main concern is preventing collisions between vehicles and other obstacles, especially in cross-domain scenarios where correctly predicting vehicle sizes is much more difficult. In other words, as long as a model can accurately identify the surfaces closest to the ego vehicle, it can effectively avoid obstacles. In this paper, we propose two metrics that measure a 3D object detection model's ability to detect the surfaces closer to the sensor on the ego vehicle, enabling a more comprehensive and reasonable evaluation of cross-domain performance. Furthermore, we propose a refinement head, named EdgeHead, which guides models to focus more on the learnable closer surfaces and greatly improves the cross-domain performance of existing models not only under our new metrics, but also under the original BEV/3D metrics. Our code is available at https://github.com/Galaxy-ZRX/EdgeHead.
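The paper's exact metric definitions are given in the full text; as a rough, hypothetical sketch of the underlying idea, one can compare how far the nearest surface of a predicted BEV box is from the sensor against the same quantity for the ground-truth box. Unlike BEV/3D IoU, such a measure does not penalize an error on the far side of the box. All function names and box conventions below are illustrative assumptions, not the authors' implementation:

```python
import math

def closest_surface_distance(cx, cy, length, width):
    """Distance from the sensor (at the origin) to the nearest point
    on an axis-aligned BEV bounding box (illustrative only)."""
    # Clamping the origin into the box extents yields the closest
    # point on (or inside) the rectangle to the origin.
    px = min(max(0.0, cx - length / 2), cx + length / 2)
    py = min(max(0.0, cy - width / 2), cy + width / 2)
    return math.hypot(px, py)

def closer_surface_error(pred_box, gt_box):
    """Gap between predicted and ground-truth closest-surface distances:
    a size-insensitive proxy for collision-relevant localization error."""
    return abs(closest_surface_distance(*pred_box)
               - closest_surface_distance(*gt_box))

# Ground truth: a 4 m x 2 m car centered 10 m ahead (near edge at 8 m).
gt = (10.0, 0.0, 4.0, 2.0)
# Prediction over-estimates the length by 1 m, but its near edge is
# still at 8 m -- the closer-surface error is zero, while BEV IoU
# would penalize the size mismatch.
pred = (10.5, 0.0, 5.0, 2.0)
print(closer_surface_error(pred, gt))  # 0.0
```

This illustrates why a closer-surface measure can be more forgiving of the cross-domain size gap: a model that localizes the collision-relevant surface correctly scores well even when the inferred box dimensions reflect the source domain's vehicle sizes.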

Text: FAIA-392-FAIA240472 - Version of Record

More information

Published date: 16 October 2024
Venue - Dates: 27th European Conference on Artificial Intelligence, ECAI 2024, Santiago de Compostela, Spain, 2024-10-19 - 2024-10-24

Identifiers

Local EPrints ID: 502154
URI: http://eprints.soton.ac.uk/id/eprint/502154
ISSN: 0922-6389
PURE UUID: 6a28e33f-0bef-4e8d-ba71-e52d5ac431b7
ORCID for Yihong Wu: orcid.org/0000-0003-3340-2535
ORCID for Xiaohao Cai: orcid.org/0000-0003-0924-2834

Catalogue record

Date deposited: 17 Jun 2025 16:50
Last modified: 22 Aug 2025 02:29


Contributors

Author: Ruixiao Zhang
Author: Yihong Wu
Author: Juheon Lee
Author: Xiaohao Cai
Author: Adam Prugel-Bennett
Editor: Ulle Endriss
Editor: Francisco S. Melo
Editor: Kerstin Bach
Editor: Alberto Bugarin-Diz
Editor: Jose M. Alonso-Moral
Editor: Senen Barro
Editor: Fredrik Heintz


