Visual SLAM optimization algorithm based on dynamic object detection

  • Abstract: In real-world environments, the localization accuracy of a simultaneous localization and mapping (SLAM) system mounted on a mobile robot is often degraded by dynamic objects, and in severe cases camera pose estimation fails. To address this, an RDFP-SLAM algorithm is proposed that combines a YOLO (you only look once) dynamic object detection network with the LK optical flow method. In the visual odometry thread, the object detection network YOLOv5 detects dynamic targets in the images acquired by the camera; the LK optical flow method then identifies the truly dynamic feature points inside the candidate dynamic detection boxes and removes them, so that only the remaining static feature points participate in pose estimation and mapping. Finally, experiments were carried out on the public TUM and KITTI datasets and in a real dynamic environment. The results show that, across multiple visual sensors and different indoor and outdoor environments, RDFP-SLAM still reduces time consumption substantially compared with algorithms of the same type and effectively improves the accuracy of feature extraction in dynamic environments, so that the robustness, real-time performance, and localization results of the system are all improved.
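
The filtering step described above (YOLOv5 detection boxes combined with LK optical flow to separate truly dynamic feature points from static ones) can be sketched roughly as follows. This is a minimal illustration, not the authors' RDFP-SLAM implementation: the `filter_dynamic_points` function, the box format, the use of the median background flow as an ego-motion proxy, and the `deviation_threshold` parameter are all assumptions made for this example.

```python
# Minimal sketch of the dynamic-feature filtering idea described in the abstract.
# NOTE: this is NOT the authors' RDFP-SLAM code; the box format, the deviation
# threshold, and all helper names are illustrative assumptions.
import cv2
import numpy as np

def filter_dynamic_points(prev_gray, curr_gray, keypoints, detection_boxes,
                          deviation_threshold=2.0):
    """Return the keypoints judged static, for use in pose estimation.

    prev_gray, curr_gray : consecutive grayscale frames (uint8 arrays)
    keypoints            : (N, 2) float32 array of points detected in prev_gray
    detection_boxes      : list of (x1, y1, x2, y2) boxes from a YOLOv5-style
                           detector for potentially dynamic classes (e.g. person)
    """
    pts_prev = keypoints.reshape(-1, 1, 2).astype(np.float32)

    # Track every keypoint into the current frame with pyramidal LK optical flow.
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                   pts_prev, None)
    status = status.reshape(-1).astype(bool)
    flow = (pts_curr - pts_prev).reshape(-1, 2)   # per-point flow vectors

    def in_any_box(pt):
        x, y = pt
        return any(x1 <= x <= x2 and y1 <= y <= y2
                   for (x1, y1, x2, y2) in detection_boxes)

    inside = np.array([in_any_box(p) for p in keypoints], dtype=bool)

    # Use the median flow of successfully tracked points OUTSIDE the detection
    # boxes as a rough proxy for camera ego-motion; points inside a box whose
    # flow deviates strongly from it are treated as truly dynamic and discarded.
    background = status & ~inside
    ego_flow = np.median(flow[background], axis=0) if background.any() else np.zeros(2)
    deviation = np.linalg.norm(flow - ego_flow, axis=1)

    dynamic = inside & (deviation > deviation_threshold)
    keep = status & ~dynamic
    return keypoints[keep]
```

In a complete pipeline, the surviving static keypoints would then be passed on to pose estimation and mapping; the criterion RDFP-SLAM actually uses to decide which points inside a detection box are dynamic may differ from the simple deviation test shown here.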
