Robust Object-aware Sample Consensus with Application to LiDAR Odometry
Hui Cheng, Yongheng Hu, Chongyu Chen*, and Liang Lin
ICASSP 2018

Abstract


Random sample consensus (RANSAC) is a popular paradigm for parameter estimation with outlier detection, and it plays an essential role in 3D robot vision, especially LiDAR odometry. The success of RANSAC depends strongly on the probability of selecting a subset of pure inliers, which limits both the robustness and the speed of parameter estimation. Although significant efforts have been made to improve RANSAC in various scenarios, this strong dependency on inlier selection remains a problem. In this paper, we address this dependency in the context of LiDAR odometry with robust object-aware sample consensus (ROSAC). In the proposed ROSAC, the sampling strategy is adjusted to preserve object shapes, and a new consensus method is developed based on robust low-dimensional subspace analysis. Extensive experiments demonstrate that the proposed paradigm works well for LiDAR odometry, estimating 3D pose with higher accuracy than RANSAC. Even in cases where RANSAC fails, ROSAC still achieves up to 67% improvement in accuracy over the baseline LiDAR odometry. Since a partially parallel implementation of ROSAC already yields a significant speedup, we believe the paradigm can be extended to other parameter estimation problems with both higher accuracy and efficiency.

An intuitive explanation

As shown in the figure above, the parameters to be estimated usually lie on a low-rank manifold while the data are sparsely corrupted. In this case, "low-rank + sparse" decomposition (LRSD) is a better structural assumption than "low-rank + Gaussian" (SVD).

Our proposed ROSAC selects a set of samples on such a manifold and resorts to low-dimensional subspace analysis to estimate high-accuracy parameters.
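
To make the "low-rank + sparse" idea concrete, below is a minimal, self-contained sketch (not the authors' implementation) that decomposes a sparsely corrupted matrix via robust PCA solved with the inexact augmented Lagrangian method, and compares it with a plain rank-truncated SVD. All matrix sizes, corruption levels, and parameter choices are illustrative assumptions.

```python
# Illustrative "low-rank + sparse" decomposition (robust PCA via inexact ALM)
# versus a plain truncated SVD ("low-rank + Gaussian" assumption).
import numpy as np

def svt(X, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def shrink(X, tau):
    """Soft thresholding: proximal operator of the l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca_ialm(D, lam=None, tol=1e-7, max_iter=500):
    """Decompose D into low-rank L plus sparse S (principal component pursuit)."""
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    norm_D = np.linalg.norm(D, "fro")
    spectral = np.linalg.norm(D, 2)
    Y = D / max(spectral, np.abs(D).max() / lam)   # dual variable initialization
    mu, mu_bar, rho = 1.25 / spectral, 1e7 * 1.25 / spectral, 1.5
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(max_iter):
        L = svt(D - S + Y / mu, 1.0 / mu)          # update low-rank part
        S = shrink(D - L + Y / mu, lam / mu)       # update sparse part
        residual = D - L - S
        Y = Y + mu * residual
        mu = min(rho * mu, mu_bar)
        if np.linalg.norm(residual, "fro") / norm_D < tol:
            break
    return L, S

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic data: rank-2 structure corrupted by 5% large sparse outliers.
    ground_truth = rng.standard_normal((100, 2)) @ rng.standard_normal((2, 50))
    corruption = (rng.random((100, 50)) < 0.05) * rng.standard_normal((100, 50)) * 10
    D = ground_truth + corruption
    L, S = rpca_ialm(D)
    # Plain SVD truncated to rank 2 for comparison.
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    svd_rank2 = U[:, :2] @ np.diag(s[:2]) @ Vt[:2, :]
    denom = np.linalg.norm(ground_truth)
    print("RPCA recovery error:", np.linalg.norm(L - ground_truth) / denom)
    print("SVD  recovery error:", np.linalg.norm(svd_rank2 - ground_truth) / denom)
```

The sparse term S absorbs the gross corruptions, so the recovered low-rank part stays close to the underlying structure, whereas the truncated SVD spreads the outlier energy over all entries.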

 

 

Experiments


  • Object-aware sampling vs. other sampling methods

  • Application to LiDAR odometry

ROSAC achieves improvements of up to 67.00% in estimation accuracy (e and e_θ are the translation and rotation errors; see the sketch at the end of this section) compared to the baseline iterative closest point (ICP) algorithm.

  • Execution time compared to the original ICP
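
The translation and rotation errors reported above can be computed from the relative transform between the estimated and ground-truth poses. Below is a minimal sketch of such metrics under assumed conventions (4x4 homogeneous transforms, rotation error measured as the axis-angle magnitude of the relative rotation); it is not the paper's evaluation code.

```python
# Hypothetical pose-error metrics for LiDAR odometry evaluation.
import numpy as np

def pose_errors(T_est, T_gt):
    """Return (translation error, rotation error in degrees) between two 4x4 poses."""
    # Relative transform: identity when the estimate is perfect.
    T_rel = np.linalg.inv(T_gt) @ T_est
    e_trans = np.linalg.norm(T_rel[:3, 3])
    # Rotation angle of the relative rotation matrix (axis-angle magnitude).
    cos_angle = np.clip((np.trace(T_rel[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    e_rot = np.degrees(np.arccos(cos_angle))
    return e_trans, e_rot

if __name__ == "__main__":
    T_gt = np.eye(4)
    T_est = np.eye(4)
    T_est[:3, 3] = [0.05, 0.0, -0.02]        # small translation offset (illustrative)
    theta = np.radians(1.0)                  # 1 degree yaw error (illustrative)
    T_est[:3, :3] = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                              [np.sin(theta),  np.cos(theta), 0.0],
                              [0.0,            0.0,           1.0]])
    print(pose_errors(T_est, T_gt))
```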