Authors: JIANG Shi-Hao (姜世豪); ZHU Ming (朱明) [1] (School of Information Science and Technology, University of Science and Technology of China, Hefei 230026, China)
Affiliation: [1] School of Information Science and Technology, University of Science and Technology of China, Hefei 230026, China
Source: Computer Systems & Applications (《计算机系统应用》), 2024, No. 6, pp. 192-200 (9 pages)
Funding: Science and Technology Innovation Special Zone Program (20-163-14-LZ-001-004-01).
Abstract: Visual navigation uses visual information from the environment as the basis for navigation, and one of its key tasks is object detection. Traditional object detection methods require a large number of annotations and focus only on the image itself, failing to fully exploit the data similarity inherent in visual navigation tasks. To address these problems, this paper proposes a self-supervised training task based on historical image information. The method aggregates images captured at the same location at multiple moments, distinguishes foreground from background by information entropy, and feeds the augmented images into the simple Siamese (SimSiam) self-supervised paradigm for training. In addition, the multi-layer perceptron (MLP) networks in the projection and prediction layers of SimSiam are replaced with a convolutional attention module and a convolution module, respectively, and the loss function is changed to a loss computed among multi-dimensional vectors, so that multi-dimensional features can be extracted from the images. Finally, the self-supervised pre-trained model is used to train models for downstream tasks. Experiments show that on the processed nuScenes dataset, the proposed method effectively improves the accuracy of downstream classification and detection tasks, reaching a Top-5 accuracy of 66.95% on the downstream classification task and a mean average precision (mAP) of 40.02% on the detection task.
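The abstract outlines the core architectural change: SimSiam's MLP projection and prediction heads are replaced by convolutional (attention) modules, and the cosine-similarity loss is computed over multi-dimensional feature vectors rather than a single pooled vector. The PyTorch sketch below illustrates that idea only and is not the authors' code; the module names (ConvAttnProjector, ConvPredictor), channel sizes, the squeeze-and-excitation-style attention gate, and the per-spatial-location loss are assumptions made for illustration, since the abstract does not specify these details.

# Minimal sketch (assumptions noted above) of a SimSiam-style model whose heads
# are convolutional and whose loss is a negative cosine similarity averaged over
# the spatial positions of the feature map ("loss among multi-dimensional vectors").
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision


class ConvAttnProjector(nn.Module):
    """Convolutional projection head with a lightweight channel-attention gate (assumed design)."""
    def __init__(self, in_ch=2048, out_ch=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 1), nn.BatchNorm2d(out_ch),
        )
        # Squeeze-and-excitation style channel attention (an assumption).
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(out_ch, out_ch // 4, 1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch // 4, out_ch, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.conv(x)
        return z * self.attn(z)


class ConvPredictor(nn.Module):
    """Convolutional prediction head replacing the SimSiam MLP predictor."""
    def __init__(self, ch=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ch, ch // 2, 1), nn.BatchNorm2d(ch // 2), nn.ReLU(inplace=True),
            nn.Conv2d(ch // 2, ch, 1),
        )

    def forward(self, x):
        return self.net(x)


def spatial_neg_cosine(p, z):
    """Negative cosine similarity computed per spatial location, then averaged."""
    z = z.detach()                      # stop-gradient, as in SimSiam
    p = F.normalize(p, dim=1)           # normalize along the channel dimension
    z = F.normalize(z, dim=1)
    return -(p * z).sum(dim=1).mean()


class ConvSimSiam(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = torchvision.models.resnet50(weights=None)
        # Drop avgpool and fc so the encoder keeps a spatial feature map.
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])
        self.projector = ConvAttnProjector()
        self.predictor = ConvPredictor()

    def forward(self, x1, x2):
        z1, z2 = self.projector(self.encoder(x1)), self.projector(self.encoder(x2))
        p1, p2 = self.predictor(z1), self.predictor(z2)
        # Symmetrized SimSiam loss over the two augmented views.
        return 0.5 * (spatial_neg_cosine(p1, z2) + spatial_neg_cosine(p2, z1))


if __name__ == "__main__":
    model = ConvSimSiam()
    v1, v2 = torch.randn(2, 3, 224, 224), torch.randn(2, 3, 224, 224)
    loss = model(v1, v2)
    loss.backward()
    print(float(loss))

The design choice mirrored here is that the encoder's spatial output is retained (avgpool and fc are removed), so the similarity loss is averaged over spatial positions instead of being collapsed to one global vector; how the paper's actual convolutional attention module and multi-dimensional loss are defined should be taken from the full text.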