Authors: 李琳辉 [1,2]; 张鑫亮; 付一帆; 连静 [1,2]; 马家旭 (Li Linhui, Zhang Xinliang, Fu Yifan, Lian Jing, Ma Jiaxu)
Affiliations: [1] School of Automotive Engineering, Dalian University of Technology, Dalian 116024; [2] State Key Laboratory of Structural Analysis for Industrial Equipment, Dalian University of Technology, Dalian 116024
Source: Automotive Engineering (《汽车工程》), 2023, No. 12, pp. 2280-2290 (11 pages)
Funding: National Natural Science Foundation of China (61976039, 52172382); Dalian Science and Technology Innovation Fund (2021JJ12GX015); Fundamental Research Funds for the Central Universities (DUT22JC09)
Abstract: To address the difficulty of fast and accurate visual object detection in complex autonomous-driving scenes, a TC-YOLOv7 detection algorithm based on attention mechanisms is proposed and applied to visible-light, infrared, and post-fusion settings. First, the YOLOv7 baseline detection model is improved with CBAM and Transformer attention modules, and its visible-light and infrared detection performance is verified on multi-scene datasets. Second, three non-maximum-suppression post-fusion methods (SS-PostFusion, DS-PostFusion, and DD-PostFusion) are constructed and their detection performance verified. Finally, the combination of TC-YOLOv7 and DD-PostFusion is compared against single-sensor detection results. The results show that in daytime, night, haze, rain, and snow scenes, for both visible light and infrared, TC-YOLOv7 improves mAP@.5 by more than 3% over the baseline YOLOv7. On the comprehensive-scene test set, the TC-YOLOv7 post-fusion method improves detection accuracy by 4.5% over visible-light detection, by 11.1% over infrared detection, and by 0.6% over the YOLOv7 post-fusion method, while running at 39 fps, meeting the real-time requirements of autonomous-driving scenarios.
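The paper does not specify here how its SS/DS/DD-PostFusion variants differ, but the shared idea (pooling detections from the visible-light and infrared detectors and resolving cross-sensor duplicates with non-maximum suppression) can be sketched minimally. The function and parameter names below (`post_fusion_nms`, `iou_thr`) are illustrative assumptions, not the authors' implementation:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2, score)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def post_fusion_nms(dets_rgb, dets_ir, iou_thr=0.5):
    """Merge per-sensor detections, then greedy score-ordered NMS:
    a box is kept only if it overlaps no already-kept box above iou_thr."""
    dets = sorted(dets_rgb + dets_ir, key=lambda d: d[4], reverse=True)
    kept = []
    for d in dets:
        if all(iou(d, k) < iou_thr for k in kept):
            kept.append(d)
    return kept
```

In a class-aware variant, NMS would be applied per object class; here classes are omitted for brevity.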
Keywords: deep learning; sensor fusion; YOLO; attention mechanism; non-maximum suppression
CLC number: TP391.41 [Automation and Computer Technology / Computer Application Technology]