Mapping and accurate localization via sensor fusion with deep learning

We are working towards accurate 3D mapping and localization by fusing data from 3D lidars, RGB-D cameras, inertial sensors, GNSS receivers, and wheel odometry.

Autonomous robots and small vehicles operating in dense urban areas need accurate positioning algorithms to locate themselves in complex, dynamic environments. We have been developing algorithms that increase self-localization accuracy based on local sensor data and a given map of the environment. Because the map may be corrupted in some areas, a preprocessing step is needed. The images below correspond to the algorithm we developed for the JDD Digital Globalization Challenge 2019, on “Multi-sensor fusion localization for autonomous vehicles”.
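As a toy illustration of the fusion idea, the sketch below runs a 1D Kalman filter that dead-reckons with wheel-odometry velocity and corrects the position with intermittent GNSS fixes. This is a minimal, assumed example, not our actual multi-sensor pipeline; the function name and the noise variances `q` and `r` are hypothetical choices for the sketch.

```python
def kalman_1d(odom_vel, gnss_fixes, dt=1.0, q=0.05, r=4.0):
    """Fuse wheel-odometry velocity with noisy GNSS position fixes.

    odom_vel   -- per-step velocity from wheel odometry (m/s)
    gnss_fixes -- per-step GNSS position measurements (m); None = no fix
    q, r       -- assumed process / measurement noise variances
    Returns the list of position estimates, one per step.
    """
    x, p = 0.0, 1.0  # state estimate (position, m) and its variance
    estimates = []
    for v, z in zip(odom_vel, gnss_fixes):
        # Predict: dead-reckon with odometry and inflate the uncertainty.
        x += v * dt
        p += q
        # Update: correct towards the GNSS fix when one is available.
        if z is not None:
            k = p / (p + r)      # Kalman gain
            x += k * (z - x)
            p *= (1.0 - k)
        estimates.append(x)
    return estimates
```

For example, with a wheel odometer that under-reads (0.8 m/s for a robot actually moving at 1 m/s), the GNSS corrections pull the estimate back towards the true trajectory, while the odometry keeps it smooth between fixes. The same predict/update structure extends to the full multi-sensor case with a state vector and covariance matrices.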

Publications

[1] Li Qingqing, Jorge Peña Queralta, Tuan Nguyen Gia, Tomi Westerlund, "Offloading Monocular Visual Odometry with Edge Computing: Optimizing Image Quality in Multi-Robot Systems", The 5th International Conference on Systems, Control and Communications, ACM, 2019.

[2] Li Qingqing, Jorge Peña Queralta, Tuan Nguyen Gia, Zhuo Zou, Tomi Westerlund, "Multi Sensor Fusion for Navigation and Mapping in Autonomous Vehicles: Accurate Localization in Urban Environments", Unmanned Systems, 2020. The 9th IEEE International Conference on Cybernetics and Intelligent Systems (CIS) and the 9th IEEE International Conference on Robotics, Automation and Mechatronics (RAM). Nominated for Best Paper Award.

[3] Li Qingqing, Jorge Peña Queralta, Tuan Nguyen Gia, Zhuo Zou, Hannu Tenhunen and Tomi Westerlund, "Detecting Water Reflection Symmetries in Point Clouds for Camera Position Calibration in Unmanned Surface Vehicles", The 19th International Symposium on Communications and Information Technologies (ISCIT), IEEE, 2019.