Authors: 张乐 ZHANG Le; 韩华 HAN Hua[1]; 王春媛 WANG Chunyuan; 马才良 MA Cailiang; 王婉君 WANG Wanjun; 汤辰玉 TANG Chenyu
Affiliation: [1] School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai 201620, China
Source: Intelligent Computer and Applications (《智能计算机与应用》), 2022, No. 12, pp. 159-163, 168 (6 pages)
Funds: National Natural Science Foundation of China (61305014); Natural Science Foundation of Shanghai (22ZR1426200).
Abstract: To improve the tracking performance of model-prediction-based object tracking algorithms in complex scenes, a feature-aggregation-based method is proposed to obtain a more discriminative and robust feature map. The feature map is then fed into the model predictor for online prediction of the target, enabling real-time, robust tracking in a variety of complex scenes. Specifically, the feature extraction network is improved, and a multi-layer feature aggregation operation is performed on its last layers. Experiments show that the proposed algorithm outperforms the baseline by 4.88% on the EAO (Expected Average Overlap) metric of the VOT2018 dataset, and by 4.5% and 4.4% on the success rate and precision rate metrics of the UAV123 dataset, respectively.
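The abstract outlines a pipeline of an improved feature extraction backbone, multi-layer feature aggregation, and an online model predictor. The sketch below is only a minimal PyTorch illustration of how two backbone stages might be aggregated into a single feature map before being passed to a predictor; the layer choices, channel widths, and fusion-by-addition are assumptions made for illustration and are not taken from the paper.

```python
# Illustrative sketch only: a minimal multi-layer feature aggregation head in PyTorch.
# The stage names, channel sizes, and fusion scheme are hypothetical placeholders,
# not the authors' implementation.
import torch
import torch.nn as nn


class FeatureAggregation(nn.Module):
    """Fuses two backbone stages into one more discriminative feature map."""

    def __init__(self, c3_channels=512, c4_channels=1024, out_channels=256):
        super().__init__()
        # 1x1 convolutions project both stages to a common channel width.
        self.proj3 = nn.Conv2d(c3_channels, out_channels, kernel_size=1)
        self.proj4 = nn.Conv2d(c4_channels, out_channels, kernel_size=1)
        # 3x3 convolution smooths the aggregated map.
        self.fuse = nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1)

    def forward(self, feat3, feat4):
        # Upsample the deeper (coarser) map to the spatial size of the shallower one,
        # then aggregate by element-wise addition.
        feat4_up = nn.functional.interpolate(
            self.proj4(feat4), size=feat3.shape[-2:],
            mode="bilinear", align_corners=False,
        )
        return self.fuse(self.proj3(feat3) + feat4_up)


if __name__ == "__main__":
    agg = FeatureAggregation()
    # Dummy backbone outputs: stage-3 at 1/16 resolution, stage-4 at 1/32 resolution.
    feat3 = torch.randn(1, 512, 20, 20)
    feat4 = torch.randn(1, 1024, 10, 10)
    fused = agg(feat3, feat4)
    print(fused.shape)  # torch.Size([1, 256, 20, 20]) -- this map would go to the model predictor
```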
CLC Number: TP391 [Automation and Computer Technology / Computer Application Technology]