INTERSECT - Multi-Source Sensor Fusion for Object Recognition

INTERSECT (Inter-vehicle sensor fusion for enhanced object classification and situational awareness in cooperative mapping of an a priori unknown environment) is a project that extends sensor fusion for object recognition and classification to multiple mobile sources with potentially heterogeneous computing capabilities. This enables more robust situational awareness in areas where different autonomous vehicles operate at the same time.
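One building block of such multi-source fusion is registering each agent's local detections into a shared world frame and merging observations of the same object. The sketch below illustrates this with a simple greedy distance-based merge; all function names, poses, and thresholds are illustrative assumptions, not part of the project's actual design.

```python
import numpy as np

def to_world(points_local, R, t):
    """Transform Nx3 points from an agent's local frame into the shared
    world frame using the agent's rotation R and translation t."""
    return points_local @ R.T + t

def merge_detections(detections_world, radius=0.5):
    """Greedily merge world-frame detections that fall within `radius`
    metres of each other, averaging the merged positions.
    (Illustrative placeholder for a proper data-association step.)"""
    merged = []  # list of (position sum, count) pairs
    for p in detections_world:
        for i, (s, n) in enumerate(merged):
            if np.linalg.norm(p - s / n) < radius:
                merged[i] = (s + p, n + 1)
                break
        else:
            merged.append((p, 1))
    return np.array([s / n for s, n in merged])

# Two agents observe the same pedestrian from different poses.
R_a, t_a = np.eye(3), np.array([0.0, 0.0, 0.0])
R_b, t_b = np.eye(3), np.array([10.0, 0.0, 0.0])  # agent B sits 10 m away
det_a = np.array([[5.0, 2.0, 0.0]])    # detection in agent A's frame
det_b = np.array([[-4.9, 2.1, 0.0]])   # same object in agent B's frame

world = np.vstack([to_world(det_a, R_a, t_a), to_world(det_b, R_b, t_b)])
fused = merge_detections(world)  # the two observations collapse into one
```

In a real deployment the poses would come from a cooperative localisation or SLAM back-end, and the merge step would weight observations by each agent's sensing uncertainty.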

The core idea of this project is to develop new methods and algorithms that provide more robust situational awareness of an autonomous agent's environment through efficient and effective object detection, classification, and movement prediction, achieved in collaboration with other agents operating in the same environment. Multi-source and multi-temporal data fusion have only recently begun to develop as research topics. INTERSECT aims to leverage the three-dimensional information available from the different points of view provided by spatially separated autonomous agents. In particular, research will be directed towards object recognition and classification based on three-dimensional object models that include parameters such as spatial shape, surface texture, or temperature. 3D convolutional neural networks are key to advanced classification of objects based on 3D models and attributes such as colour, texture, or temperature.
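A common way to feed such attributed 3D models to a 3D convolutional network is to rasterise the point cloud into a multi-channel voxel tensor, with one channel per attribute (occupancy, temperature, etc.). The sketch below shows this preprocessing step in plain NumPy; the grid size, extent, and channel layout are illustrative assumptions, not the project's actual pipeline.

```python
import numpy as np

def voxelize(points, temps, grid=(8, 8, 8), extent=2.0):
    """Rasterise a point cloud into a two-channel voxel tensor:
    channel 0 = occupancy, channel 1 = mean temperature per voxel.
    `extent` is the half-width of the cube centred on the origin.
    The (channels, D, H, W) layout is what a 3D CNN would consume."""
    vol = np.zeros((2,) + grid)
    counts = np.zeros(grid)
    # Map each point's coordinates to integer voxel indices.
    idx = ((points + extent) / (2 * extent) * np.array(grid)).astype(int)
    idx = np.clip(idx, 0, np.array(grid) - 1)
    for (i, j, k), temp in zip(idx, temps):
        vol[0, i, j, k] = 1.0      # occupancy
        vol[1, i, j, k] += temp    # accumulate temperature readings
        counts[i, j, k] += 1
    occupied = counts > 0
    vol[1][occupied] /= counts[occupied]  # average temperature per voxel
    return vol

# Toy object: a few surface points with per-point temperature readings.
pts = np.array([[0.0, 0.0, 0.0], [0.5, 0.5, 0.5], [0.51, 0.5, 0.5]])
temperature = np.array([20.0, 36.0, 38.0])
tensor = voxelize(pts, temperature)  # shape (2, 8, 8, 8)
```

Additional attributes such as colour or texture descriptors would simply become further channels of the same tensor, which the 3D CNN can then convolve over jointly with the shape information.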
