HiMo: High-Speed Objects Motion Compensation in Point Cloud
- Qingwen Zhang1,2
- Ajinkya Khoche1,2
- Yi Yang1,2
- Li Ling1
- Sina Sharif Mansouri2
- Olov Andersson1
- Patric Jensfelt1
1KTH Royal Institute of Technology 2Scania CV AB
Abstract
LiDAR point clouds often contain motion-induced distortions, degrading the accuracy of object appearances in the captured data. In this paper, we first characterize the underlying causes of point cloud distortion and show that it is present in public datasets. We find that distortion is more pronounced in high-speed environments such as highways, as well as in multi-LiDAR configurations, a common setup for heavy vehicles. Previous work has dealt with distortion caused by ego-motion but fails to account for the motion of other objects. We therefore introduce a novel undistortion pipeline, HiMo, that leverages scene flow estimation for object motion compensation, correcting the depiction of dynamic objects. We further propose an extension of a state-of-the-art self-supervised scene flow method. Due to the lack of well-established motion distortion metrics in the literature, we also propose two evaluation metrics: compensation accuracy at the point level and shape similarity at the object level. To demonstrate the efficacy of our method, we conduct extensive experiments on the Argoverse 2 dataset and a new real-world dataset. Our new dataset is collected from heavy vehicles equipped with multiple LiDARs and driven on highways, as opposed to the mostly urban settings of existing datasets. The source code, including all methods and the evaluation data, is provided.
Figure 1: Our heavy vehicles are equipped with multiple LiDARs to avoid self-occlusion. (a) shows an example placement with 6 LiDARs. The point colors in (b-c) correspond to the LiDAR from which the points were captured. (b) illustrates the distortion of static structure due to a fast-moving ego vehicle; Raw shows the raw data, and w. egc shows the ego-motion compensation result. (c) demonstrates distortion caused by the motion of other objects, which depends on the velocity of said objects. In such cases, ego-motion compensation alone is insufficient. In comparison, our HiMo pipeline fully undistorts the point clouds, resulting in an accurate representation of the objects.
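The compensation idea described above can be sketched in a few lines: after ego-motion compensation, each point's residual displacement is derived from an estimated per-point scene flow vector and the point's capture time, and the point is shifted to a common reference time. This is a minimal illustrative sketch, not the authors' implementation; the function name, the per-point timestamp array, and the assumption that the flow vector spans one full sweep interval `sweep_dt` are ours.

```python
import numpy as np

def compensate_object_motion(points, t_points, flow, t_ref, sweep_dt):
    """Shift each point to a common reference time t_ref.

    points:   (N, 3) point positions, already ego-motion compensated
    t_points: (N,)   per-point capture timestamps (seconds)
    flow:     (N, 3) estimated per-point displacement over one sweep
    t_ref:    float  reference time all points are warped to
    sweep_dt: float  duration of one LiDAR sweep (seconds)
    """
    velocity = flow / sweep_dt                       # per-point velocity estimate
    # Each point moves with its estimated velocity for (t_ref - t_point).
    return points + velocity * (t_ref - t_points)[:, None]
```

Static points (zero flow) are left untouched, while points on fast-moving objects are shifted in proportion to both their estimated velocity and how far their capture time lies from the reference time.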
Results
Here we provide the supplementary video and full qualitative results for the main paper. Please refer to the main paper for a more detailed analysis.
Compensation results on different datasets using our HiMo pipeline
In this section, we present three interactive comparisons between data with ego-motion compensation only and our HiMo pipeline using SeFlow++. The results include our Scania highway data, the Argoverse 2 dataset, and the Zenseact Open Dataset (ZOD), covering three different sensor configurations and sensor types.
Our Scania Dataset (Left w. ego-motion comp. | Right w. HiMo), 8x Ous-32 LiDAR
Data scene id: batch_184_20211217173504, timestamp: 075
Argoverse 2 Dataset (Left w. ego-motion comp. | Right w. HiMo), 2x VLP-32 LiDAR
Data scene id: 76916359-96f4-3274-81fe-bb145d497c11, timestamp: 315968469059874000
ZOD Dataset (Left w. ego-motion comp. | Right w. HiMo), 1x VLP-128 LiDAR
Data scene id: 000018, timestamp: 16547760125175530
Comparison of different scene flow methods inside the HiMo pipeline
We show the undistortion results of different scene flow methods inside our HiMo pipeline, as a supplement to the main paper for clarity.
Data scene id: batch_062_20211022162703, timestamp: 078
The raw data with ego-motion compensation only still shows a truck exhibiting significant distortion.
Below are the full qualitative results for the different flow methods inside the HiMo pipeline.
Data scene id: batch_184_20211217173504, timestamp: 075
The raw data with ego-motion compensation only still shows significant point distortion on two fast-moving cars in the scene.
Below are the full qualitative results for the different flow methods inside the HiMo pipeline.