3D Perception with Low-cost 2D LIDAR and Edge Computing for Enhanced Obstacle Detection
Victor Kathan Sarker, Li Qingqing and Tomi Westerlund
Autonomous small robots are receiving considerable attention for industrial and domestic use. They are equipped with anywhere from a few to numerous sensors to help them understand the operating environment, make appropriate next-step decisions, prioritize actions, and operate with little to no human intervention. Light detection and ranging (LIDAR) sensors are widely used for detecting objects around an autonomous robot so that it can navigate while effectively avoiding obstacles. However, most low-cost LIDARs on the market measure the distance of surrounding objects with a rotating laser beam and an accompanying sensor, yielding only a two-dimensional (2D) map. This is insufficient for extracting the contextual data needed for autonomous robot movement. In this paper, we propose a novel approach for enhancing the detection capabilities of a typical 2D LIDAR to yield perception in three dimensions (3D). This facilitates more effective object detection while keeping the cost of small robots at an affordable level. The proposed approach requires little customization to achieve noticeably enhanced mapping. Our experiments show that the proposed method provides better mapping and detection capabilities using a commonly available low-cost 2D LIDAR. Furthermore, the integration of Edge computing makes data processing efficient and noticeably extends a robot's battery life, resulting in a longer run-time.
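The abstract does not detail the mechanism used to lift a 2D scan into 3D, but a common approach is to tilt or rotate the sensor and fuse each planar scan with the known tilt angle. The sketch below illustrates that general idea only; the function name, parameters, and the choice of tilting about the sensor's x-axis are illustrative assumptions, not the paper's method.

```python
import math

def scan_to_3d(ranges, angle_min, angle_increment, tilt):
    """Convert one 2D LIDAR scan (a list of polar range readings) taken
    at a known tilt angle (radians) into 3D Cartesian points.

    All names and parameters here are illustrative assumptions, not
    taken from the paper. The scan plane is assumed to be tilted about
    the sensor's x-axis by `tilt`.
    """
    points = []
    for i, r in enumerate(ranges):
        if r is None or r <= 0.0:
            continue  # skip invalid or out-of-range returns
        theta = angle_min + i * angle_increment  # beam angle in the scan plane
        # Point in the (untilted) 2D scan plane
        x = r * math.cos(theta)
        y = r * math.sin(theta)
        # Rotate the scan plane about the x-axis by the tilt angle
        points.append((x, y * math.cos(tilt), y * math.sin(tilt)))
    return points
```

Accumulating the points returned for a sweep of tilt angles produces a 3D point cloud from a sensor that natively reports only a single plane.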