
Record No.

UNINA9910742498103321

Author

Zhang, Xinyu

Titolo

Multi-sensor Fusion for Autonomous Driving [electronic resource] / by Xinyu Zhang, Jun Li, Zhiwei Li, Huaping Liu, Mo Zhou, Li Wang, Zhenhong Zou

Publication/distribution

Singapore : Springer Nature Singapore : Imprint: Springer, 2023

ISBN

981-9932-80-7

Edition

[1st ed. 2023.]

Physical description

1 online resource (237 pages)

Other authors (Persons)

Li, Jun

Li, Zhiwei

Liu, Huaping

Zhou, Mo

Wang, Li

Zou, Zhenhong

Discipline

629.046

Subjects

Robotics

Computer vision

Data mining

Computer Vision

Data Mining and Knowledge Discovery

Language of publication

English

Format

Electronic resource

Bibliographic level

Monograph

Contents note

Part I: Basic -- Chapter 1. Introduction -- Chapter 2. Overview of Data Fusion in Autonomous Driving Perception -- Part II: Method -- Chapter 3. Multi-sensor Calibration -- Chapter 4. Multi-sensor Object Detection -- Chapter 5. Multi-sensor Scene Segmentation -- Chapter 6. Multi-sensor Fusion Localization -- Part III: Advance -- Chapter 7. OpenMPD: An Open Multimodal Perception Dataset -- Chapter 8. Vehicle-Road Multi-view Interactive Data Fusion -- Chapter 9. Information Quality in Data Fusion -- Chapter 10. Conclusions.

Summary/abstract

Although sensor fusion is an essential prerequisite for autonomous driving, it entails a number of challenges and potential risks. For example, the commonly used deep fusion networks lack interpretability and robustness. To address these fundamental issues, this book introduces the mechanism of deep fusion models from the perspective of uncertainty and models the initial risks in order to create a robust fusion architecture. The book reviews the multi-sensor data fusion methods applied in autonomous driving; its main body is divided into three parts: Basic, Method, and Advance. Starting from the mechanism of data fusion, it comprehensively reviews the development of automatic perception technology and data fusion technology and gives an overview of the various perception tasks based on multimodal data fusion. It then proposes a series of innovative algorithms for autonomous driving perception tasks that effectively improve accuracy and robustness, and it offers ideas for resolving the challenges in multi-sensor fusion methods. Furthermore, to move from technical research toward intelligent connected collaboration applications, it explores topics such as practical fusion datasets, vehicle-road collaboration, and fusion mechanisms. In contrast to the existing literature on data fusion and autonomous driving, this book focuses on deep fusion methods for perception-related tasks, emphasizes the theoretical explanation of fusion methods, and fully considers the relevant scenarios in engineering practice. Helping readers acquire an in-depth understanding of fusion methods and theories in autonomous driving, it can serve as a textbook for graduate students and scholars in related fields or as a reference for engineers who wish to apply deep fusion methods.