Dual-modal pedestrian detection method based on multi-scale feature fusion


Authors: Yang Huanyu; Gao Xiao; Yang Lijun; Wang Jun[1]; Bo Yuming[1] (School of Automation, Nanjing University of Science and Technology, Nanjing 210094, China; Marine Design and Research Institute of China, Shanghai 200011, China)

Affiliations: [1] School of Automation, Nanjing University of Science and Technology, Nanjing 210094, Jiangsu, China; [2] Marine Design and Research Institute of China (708 Research Institute, China State Shipbuilding Corporation), Shanghai 200011, China

Source: Journal of Nanjing University of Science and Technology, 2024, No. 5, pp. 650-660, F0002 (12 pages)

Abstract: Advanced pedestrian detection technology is crucial in urban safety, autonomous transportation, and intelligent surveillance. Current methods based on visible-light imaging face limitations in low-light or adverse weather conditions, leading to reduced detection accuracy. To address these challenges, this paper introduces a novel dual-modal pedestrian detection method built on an enhanced YOLOv7 model supplemented by modal alignment (MA) and differential modal fusion (DMF) modules. These modules effectively harness dual-modal data, combining visible-light and infrared imaging to improve detection performance under various environmental conditions. Experimental results indicate that the proposed method significantly improves pedestrian detection accuracy across a range of scenarios, offering a promising solution for detection tasks in complex environments.
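The abstract does not give the equations of the DMF module; the following is a minimal NumPy sketch of one plausible differential-fusion step, in which a gate derived from the element-wise difference between the two modalities' feature maps decides how much each modality contributes. The function name `dmf_fuse`, the sigmoid gating form, and all shapes are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dmf_fuse(vis_feat, ir_feat):
    """Hypothetical differential modal fusion: weight each modality's
    features by a gate computed from their element-wise difference."""
    diff = vis_feat - ir_feat        # where the modalities disagree
    gate = sigmoid(diff)             # per-element weight in (0, 1)
    # Convex combination: large positive difference favors visible,
    # large negative difference favors infrared.
    return gate * vis_feat + (1.0 - gate) * ir_feat

# Toy 2x2 single-channel "feature maps"
vis = np.array([[1.0, 0.0], [0.5, 2.0]])
ir = np.array([[0.0, 1.0], [0.5, 0.0]])
fused = dmf_fuse(vis, ir)
print(fused.shape)  # (2, 2)
```

Because the gate lies in (0, 1), the fused value at each position is a convex combination of the two inputs, so the output stays within the range spanned by the two modalities; a real DMF block would presumably learn this weighting with convolutions rather than use a fixed sigmoid of the raw difference.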

Keywords: pedestrian detection; deep learning; multi-modal fusion; YOLOv7

Classification: TP29 (Automation and Computer Technology: Detection Technology and Automation Devices)

 
