Camera-view supervision for bird's-eye-view semantic segmentation

Front Big Data. 2024 Nov 15;7:1431346. doi: 10.3389/fdata.2024.1431346. eCollection 2024.

Abstract

Bird's-eye-view Semantic Segmentation (BEVSS) is a powerful and crucial component of planning and control systems in many autonomous vehicles. Current methods rely on end-to-end learning to train models, which leaves the camera-to-BEV projection only indirectly supervised and therefore inaccurate. We propose a novel method of supervising feature extraction with camera-view depth and segmentation information, which improves the quality of both feature extraction and projection in the BEVSS pipeline. Our model, evaluated on the nuScenes dataset, shows a 3.8% improvement in Intersection-over-Union (IoU) for vehicle segmentation and a 30-fold reduction in depth error compared to baselines, while maintaining a competitive inference speed of 32 FPS. This method offers more accurate and reliable BEVSS for real-time autonomous driving systems. The code and implementation details can be found at https://github.com/bluffish/sucam.
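The auxiliary supervision described in the abstract can be sketched as a joint objective: the usual BEV segmentation loss augmented with camera-view segmentation and depth losses on the intermediate features. The NumPy sketch below is illustrative only; the weights `w_cam` and `w_depth`, the toy tensor shapes, and the treatment of depth as discretized bins are assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_entropy(logits, labels):
    """Mean pixel-wise cross-entropy; logits (N, C), labels (N,)."""
    p = softmax(logits)
    return float(-np.mean(np.log(p[np.arange(labels.size), labels] + 1e-12)))

# Toy predictions: BEV segmentation, camera-view segmentation,
# and per-pixel depth treated as classification over discrete bins.
n_pix, n_cls, n_bins = 6, 4, 8
bev_logits = rng.normal(size=(n_pix, n_cls))
cam_logits = rng.normal(size=(n_pix, n_cls))
depth_logits = rng.normal(size=(n_pix, n_bins))

bev_labels = rng.integers(0, n_cls, n_pix)
cam_labels = rng.integers(0, n_cls, n_pix)
depth_labels = rng.integers(0, n_bins, n_pix)

# Hypothetical loss weights; the abstract does not specify the weighting.
w_cam, w_depth = 1.0, 1.0
total_loss = (cross_entropy(bev_logits, bev_labels)
              + w_cam * cross_entropy(cam_logits, cam_labels)
              + w_depth * cross_entropy(depth_logits, depth_labels))
print(f"joint loss: {total_loss:.4f}")
```

The point of the extra terms is that gradients from the camera-view labels reach the image-feature extractor directly, rather than only through the (otherwise indirectly supervised) camera-to-BEV projection.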

Keywords: autonomous driving (AD); bird's-eye-view; nuScenes dataset; perception; segmentation; supervision.

Grants and funding

The author(s) declare that no financial support was received for the research, authorship, and/or publication of this article.