ECCV workshop on
3D Perception for
Autonomous Driving

A workshop at the European Conference on Computer Vision (ECCV 2022), October 23-27, Tel Aviv.

This workshop will discuss the recent advances in 3D perception for autonomous driving and will include a challenge with a prize.

Autonomous driving relies heavily on computer vision to guarantee safe driving. It involves solving many important tasks, such as object detection, scene segmentation, motion prediction, and ego-motion estimation, all of which are essential for safe planning. While many academic works have focused on using 2D images for perception, it is widely agreed that adding other modalities, such as 3D LiDAR data, can improve scene understanding and safety.

Using 3D information for autonomous driving comes with unique challenges. A LiDAR reacts differently than a camera to varying weather conditions, and its data is harder to annotate. Moreover, the data is not laid out on a regular grid, as is the case with 2D images, so a dedicated effort is required for perception software to process 3D data. The workshop will discuss the challenges and advantages of performing 3D perception for autonomous driving, as well as recent trends in the field, through a set of lectures by leading experts.

The workshop includes a sim2real challenge with the state-of-the-art InnovizTwo LiDAR and NVIDIA Drive-Sim LiDAR simulator. 

Tentative Schedule

Time Title
9:00-9:10 Introduction
9:10-9:50 Luca Carlone
9:50-10:30 Marco Pavone
10:30-11:00 Coffee break
11:00-11:40 Cyrill Stachniss
11:40-12:00 Challenge awards ceremony
12:00-13:30 Lunch
13:30-14:10 Raquel Urtasun
14:10-14:30 Challenge talk 1
14:30-16:10 Deva Ramanan
16:10-16:40 Coffee break
16:40-17:20 Kris Kitani
17:20-17:40 Challenge talk 2
17:40-18:20 Drago Anguelov

Workshop Format

The workshop will consist of a series of invited talks on recent developments in 3D perception for autonomous driving, along with a challenge on 3D perception for autonomous driving. The objective of the challenge is to handle, in real time, unknown objects that were not present when the perception model was trained. The challenge will use more realistic performance measures than the predominant metric, intersection-over-union (IoU), which does not truly represent the impact of a detection on driving safety. It will be conducted in a setting closer to the real deployment of vision-based solutions for autonomous driving: the quality of a solution will be measured as a function of its computational complexity. This is important because real-time perception is rarely discussed in current benchmarks and published solutions.
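To make the critique of IoU concrete, here is a minimal sketch of the metric for axis-aligned 2D boxes (a simplification; 3D detection benchmarks use oriented 3D boxes, and the box coordinates below are purely illustrative). Two detections with the same IoU against the ground truth can carry very different safety implications, for example depending on whether the localization error is toward or away from the ego vehicle:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Hypothetical example: both detections score IoU = 0.6 against the
# ground truth, yet one misjudges the near side of the object (toward
# the ego vehicle) and the other the far side.
gt = (0.0, 0.0, 4.0, 2.0)
shifted_toward = (-1.0, 0.0, 3.0, 2.0)  # error on the near side
shifted_away = (1.0, 0.0, 5.0, 2.0)     # error on the far side
print(iou(gt, shifted_toward), iou(gt, shifted_away))  # → 0.6 0.6
```

IoU treats both errors identically, which is the gap a safety-aware metric would aim to close.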

Two challenge submissions will be selected for presentation at the workshop as contributed talks describing their solutions, and the presenters will receive a prize. Proceedings will not be published as part of the workshop; however, the workshop organizers may prepare an extended report summarizing the challenge and its results.


Duration: one day