Offloading Monocular Visual Odometry with Edge Computing: Optimizing Image Quality in Multi-Robot Systems

Li Qingqing, Jorge Peña Queralta, Tuan Nguyen Gia, Tomi Westerlund

Fleets of autonomous mobile robots are becoming ubiquitous in industrial environments such as logistics warehouses. This ubiquity has driven the Internet of Things field towards more distributed network architectures, which have crystallized in the rising edge and fog computing paradigms. In this paper, we propose combining an edge computing approach with computational offloading for mobile robot navigation. As smaller and relatively simpler robots become more capable, their penetration into different domains rises. These large multi-robot systems are often characterized by constrained computational and sensing resources. An efficient computational offloading scheme has the potential to bring multiple operational enhancements. However, with visual-inertial odometry being the most cost-effective autonomous navigation method, streaming high-quality images can increase latency, with a consequent negative impact on operational performance. In this paper, we analyze the impact that image quality and compression have on state-of-the-art visual-inertial odometry. Our results indicate that image size and network bandwidth can be reduced by over an order of magnitude without compromising the accuracy of the odometry methods, even in challenging environments. This opens the door to further optimization by dynamically assessing the trade-off between image quality, network load, latency, visual-inertial odometry performance, and localization accuracy.