Enhancing Autonomous Navigation by Imaging Hidden Objects using Single-Photon LiDAR

Aaron Young* (MIT), Nevindu M. Batagoda* (UW-Madison), Harry Zhang (UW-Madison), Akshat Dave (MIT), Adithya Pediredla (Dartmouth), Dan Negrut (UW-Madison), Ramesh Raskar (MIT)

Under Review

Technical Video

Abstract


Robust autonomous navigation in environments with limited visibility remains a critical challenge in robotics. We present a novel approach that leverages Non-Line-of-Sight (NLOS) sensing using single-photon LiDAR to improve visibility and enhance autonomous navigation. Our method enables mobile robots to "see around corners" by utilizing multi-bounce light information, effectively expanding their perceptual range without additional infrastructure. We propose a three-module pipeline: (1) Sensing, which captures multi-bounce histograms using SPAD-based LiDAR; (2) Perception, which estimates occupancy maps of hidden regions from these histograms using a convolutional neural network; and (3) Control, which allows a robot to follow safe paths based on the estimated occupancy. We evaluate our approach through simulations and real-world experiments on a mobile robot navigating an L-shaped corridor with hidden obstacles. Our work represents the first experimental demonstration of NLOS imaging for autonomous navigation, paving the way for safer and more efficient robotic systems operating in complex environments. We also contribute a novel dynamics-integrated transient rendering framework for simulating NLOS scenarios, facilitating future research in this domain.

Paper


Enhancing Autonomous Navigation by Imaging Hidden Objects using Single-Photon LiDAR

Aaron Young*, Nevindu M. Batagoda*, Harry Zhang, Akshat Dave, Adithya Pediredla, Dan Negrut, Ramesh Raskar

Paper preprint (PDF, 2.3 MB)
arXiv version
BibTeX

Method


Our pipeline for NLOS-aided Autonomous Navigation. (a) Sensing: We capture multi-bounce histograms using single-photon LiDAR to gather information about hidden regions. (b) Perception: The captured histograms are processed with a data-driven approach to estimate the occupancy map of the occluded region. (c) Control: A safe path is then planned on the estimated occupancy map to navigate around hidden obstacles.


Robust autonomous navigation in environments with limited visibility is crucial for enhancing the safety and efficiency of mobile robots. As these robots are increasingly deployed in various applications—from industrial settings to urban environments—they must be capable of detecting and avoiding hidden obstacles to prevent accidents. Traditional perception systems often struggle with blind spots, which can lead to collisions and operational inefficiencies. Therefore, improving a robot's ability to "see around corners" and navigate effectively in such challenging environments is paramount.

To address these challenges, we propose a novel pipeline that leverages Non-Line-of-Sight (NLOS) sensing using single-photon LiDAR. Our approach consists of three key modules:

  1. Sensing: This module captures multi-bounce histograms from SPAD-based LiDAR, gathering information about hidden regions.
  2. Perception: The captured histograms are processed to estimate occupancy maps of the occluded areas, providing the robot with a clearer understanding of its surroundings.
  3. Control: Based on the estimated occupancy maps, the control module plans safe paths, allowing the robot to navigate effectively around obstacles.
By integrating these three components, our pipeline enhances the robot's perceptual capabilities, enabling it to navigate complex environments safely and efficiently.
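To make the three-module structure concrete, here is a minimal, self-contained sketch of the perception and control stages. The paper's perception module is a convolutional neural network trained on multi-bounce histograms; here `estimate_occupancy` is a hypothetical stand-in that simply thresholds simulated photon counts, and `plan_path` is a generic breadth-first search over the resulting occupancy grid. Both function names, the grid size, and the threshold are illustrative assumptions, not the authors' implementation.

```python
from collections import deque

def estimate_occupancy(histogram, threshold=5):
    # Stand-in for the paper's CNN perception module: mark a cell
    # of the hidden region as occupied when its simulated photon
    # count exceeds a threshold. (Illustrative only.)
    return [[1 if count > threshold else 0 for count in row]
            for row in histogram]

def plan_path(grid, start, goal):
    """Breadth-first search over free cells of the occupancy grid."""
    rows, cols = len(grid), len(grid[0])
    queue, parent = deque([start]), {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:  # reconstruct path by walking parents back
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in parent):
                parent[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None  # hidden obstacles block every route

# Toy "histogram" for a 5x5 hidden region: strong multi-bounce
# returns along one column indicate an obstacle around the corner.
histogram = [[0] * 5 for _ in range(5)]
for r in range(1, 4):
    histogram[r][2] = 9

grid = estimate_occupancy(histogram)
path = plan_path(grid, (0, 0), (4, 4))  # detours around the obstacle
```

In the full system, the thresholding step would be replaced by the learned network and the grid search by the robot's actual planner; the point of the sketch is only the data flow from histograms to occupancy to a collision-free path.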

Simulated Results


Our simulations validate the effectiveness of the NLOS approach in various scenarios, specifically focusing on a mobile robot navigating through an L-shaped corridor. The results indicate a marked improvement in collision avoidance when utilizing multi-bounce light data compared to traditional line-of-sight (LOS) methods. For instance, across speeds of 8 to 11 m/s, NLOS perception reduced collision rates by 14% at 9–10 m/s and by 9% at 10–11 m/s. These findings underscore the crucial role of detecting hidden objects in maintaining safe navigation, particularly at higher speeds where traditional systems struggle.

Simulated navigation results showcasing the performance of our NLOS pipeline.

Real Results


In real-world experiments, our NLOS-enabled robot demonstrated a substantial improvement in navigation efficiency. By leveraging multi-bounce LiDAR data, the robot successfully avoided obstacles in blind corners, reaching its goal along a trajectory 33% shorter than the LOS-only baseline, which took nearly twice as long to navigate. This improvement showcases the practical benefits of integrating NLOS perception into robotic systems, highlighting its potential for enhancing safety and efficiency in complex environments.

Real-world navigation results demonstrating the effectiveness of NLOS perception.

Citation


@misc{young2024robospad,
    author = {Young, Aaron and Batagoda, Nevindu M. and Zhang, Harry and Dave, Akshat and 
        Pediredla, Adithya and Negrut, Dan and Raskar, Ramesh},
    title = {Enhancing Autonomous Navigation by Imaging Hidden Objects using Single-Photon LiDAR},
    year = {2024},
    note = {arXiv preprint}
}

Acknowledgements


The website template was adapted from Tzofi Klinghoffer. This work was supported in part through NSF project CMMI2153855. AY was supported by the NSF GRFP (No. 2022339767).

* Equal contribution.