Robust Visual Tracking via Perceptive Deep Neural Network  (Cited by: 8)


Authors: 侯志强 (Hou Zhiqiang)[1], 戴铂 (Dai Bo), 胡丹 (Hu Dan)[1], 余旺盛 (Yu Wangsheng)[1], 陈晨 (Chen Chen)[1], 范舜奕 (Fan Shunyi)

Affiliation: [1] Information and Navigation College, Air Force Engineering University, Xi'an 710077, China

Source: Journal of Electronics & Information Technology (《电子与信息学报》), 2016, No. 7, pp. 1616-1623 (8 pages)

Funding: National Natural Science Foundation of China (61175029, 61473309); Natural Science Foundation of Shaanxi Province (2015JM6269, 2016JM6050)

Abstract: In a visual tracking system, efficient feature representation is the key to tracking robustness, and multi-cue fusion is an effective means of handling complex tracking scenarios. This paper first proposes a perceptive deep neural network composed of multiple parallel networks that are triggered adaptively; it then establishes a fragment-based target model that fuses multiple cues on top of deep learning. Fragmenting the target reduces the dimension of the network input severalfold, which greatly lowers the computational cost of network training. During tracking, the model dynamically adjusts the weight of each fragment according to its confidence, improving adaptability to complex situations such as target pose changes, illumination changes, and occlusion. Qualitative and quantitative analysis of experiments on a large set of challenging benchmark sequences shows that the proposed algorithm is highly robust and tracks targets stably.
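The confidence-weighted fragment fusion described in the abstract can be sketched as follows. The paper gives no formulas at this level of detail, so the softmax weighting, the fragment layout, and the function names (`fragment_weights`, `fuse_fragment_estimates`) are illustrative assumptions, not the authors' implementation.

```python
import math

def fragment_weights(confidences, temperature=1.0):
    """Normalize per-fragment confidence scores into weights via a
    softmax (an illustrative choice; the paper only states that
    weights are adjusted dynamically according to confidence)."""
    exps = [math.exp(c / temperature) for c in confidences]
    total = sum(exps)
    return [e / total for e in exps]

def fuse_fragment_estimates(positions, confidences):
    """Fuse per-fragment target-position estimates into a single
    estimate, weighting each fragment by its normalized confidence so
    that occluded or otherwise unreliable fragments contribute less."""
    w = fragment_weights(confidences)
    x = sum(wi * px for wi, (px, _) in zip(w, positions))
    y = sum(wi * py for wi, (_, py) in zip(w, positions))
    return (x, y)

# Example: three fragments vote on the target center; the third is
# likely occluded, so its confidence (and hence its weight) is low.
positions = [(100.0, 50.0), (102.0, 52.0), (140.0, 90.0)]
confidences = [0.9, 0.8, 0.1]
print(fuse_fragment_estimates(positions, confidences))
```

With this weighting, the outlier fragment pulls the fused estimate only slightly away from the two agreeing fragments, which mirrors the robustness-to-occlusion behavior the abstract claims.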

Keywords: visual tracking; feature representation; deep learning; perceptive deep neural network

CLC Number: TP391.4 (Automation and Computer Technology — Computer Application Technology)

 
