Abstract
Visual-inertial odometry (VIO), the fusion of visual and inertial sensor data, has been shown to be effective for navigation in GNSS-denied environments. Recently, end-to-end trained deep learning VIO models based on dense optical flow have achieved superior performance in outdoor navigation. In this paper, we introduce a novel visual-inertial sensor fusion approach based on the Vision Transformer architecture with a cross-attention mechanism, specifically designed to better integrate potentially poor-quality optical flow features with inertial data. Although optical flow based VIO models have achieved superior performance in outdoor vehicle navigation, both in accuracy and ease of calibration, we show that their suitability for indoor pedestrian navigation still falls far short of existing feature-matching-based methods. We compare the performance of traditional VIO models against deep learning based VIO models on the KITTI benchmark dataset and on our custom pedestrian navigation dataset. We show that end-to-end trained VIO models using optical flow are significantly outperformed by simpler visual odometry models that rely on feature matching. Our findings indicate that, owing to its robustness against occlusion and camera shake, feature matching is better suited for indoor pedestrian navigation, whereas dense optical flow remains viable for vehicular data. The most feasible way forward is therefore to integrate our novel model with feature-based visual data encoding.