Authors: Zhang Jiyou; Li Jun; Guo Feifei; Li Qiming (School of Mechanical and Electrical Engineering, Fujian Agriculture and Forestry University, Fuzhou 350002, China; Fujian College, University of Chinese Academy of Sciences, Quanzhou 362200, China; Quanzhou Institute of Equipment Manufacturing, Haixi Institutes, CAS, Quanzhou 362200, China; Quanzhou Vocational and Technical University, Quanzhou 362000, China)
Affiliations: [1] School of Mechanical and Electrical Engineering, Fujian Agriculture and Forestry University, Fuzhou 350002; [2] Fujian College, University of Chinese Academy of Sciences, Quanzhou 362200; [3] Quanzhou Institute of Equipment Manufacturing, Haixi Institutes, Chinese Academy of Sciences, Quanzhou 362200; [4] Quanzhou Vocational and Technical University, Quanzhou 362000
Source: Journal of Electronic Measurement and Instrumentation, 2025, No. 2, pp. 123-135 (13 pages)
Funding: National Natural Science Foundation of China (62102394); Fujian Provincial Science and Technology Program (2023N3010)
Abstract: To address the limited practicality of existing motion segmentation methods in traffic scenarios and the difficulty of balancing performance against validation time, a graph-structured motion segmentation network for geometric information learning (GS-Net) is proposed. GS-Net consists of a point embedding module, a local context fusion module, a global bilateral regularization module, and a classification module. The point embedding module maps the raw key feature point data from a low-dimensional space, in which the data are hard to separate linearly, to a high-dimensional space in which they are linearly separable, which helps the network learn the relationships between moving objects in the image. The local context fusion module uses a dual-branch graph structure to extract local information in the feature space and the geometric space separately, then fuses the two types of information into a stronger local feature representation. The global bilateral regularization module enhances this local representation with point-wise and channel-wise global perception. Finally, the classification module maps the enhanced local feature representation back to a low-dimensional classification space for segmentation. On the KT3DMoSeg dataset, GS-Net achieves mean and median misclassification rates of 2.47% and 0.49%, which are 8.15% and 7.95% lower than those of SubspaceNet, and 7.2% and 0.57% lower than those of SUBSET, respectively. Meanwhile, GS-Net's inference speed is two orders of magnitude faster than both SubspaceNet and SUBSET. On the FBMS dataset, GS-Net achieves a recall of 82.53% and an F-measure of 81.93%, improvements of 13.33% and 5.36% over SubspaceNet, and 9.66% and 3.71% over SUBSET, respectively. The experimental results demonstrate that GS-Net can quickly and accurately segment moving objects in real traffic scenes.
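The four-stage pipeline described in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the layer sizes, the sigmoid gating used for the bilateral regularization, and the max-pooling graph aggregation are all illustrative assumptions standing in for the paper's learned modules.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, w, b):
    # one linear + ReLU layer (stand-in for a learned embedding MLP)
    return np.maximum(x @ w + b, 0.0)

def knn_indices(x, k):
    # pairwise squared distances; keep the k nearest neighbors, excluding self
    d = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
    return np.argsort(d, axis=1)[:, 1:k + 1]

def graph_aggregate(feats, idx):
    # max-pool each point's neighborhood features over its kNN graph
    return feats[idx].max(axis=1)

N, k = 64, 8
pts = rng.normal(size=(N, 2))              # 2-D keypoint coordinates (geometric space)

# 1) point embedding: lift low-dimensional points into a higher-dimensional space
W1, b1 = rng.normal(size=(2, 32)), np.zeros(32)
emb = mlp(pts, W1, b1)                     # (N, 32)

# 2) local context fusion: dual-branch graphs built in geometric and feature space
geo_local = graph_aggregate(emb, knn_indices(pts, k))   # geometric-space branch
fea_local = graph_aggregate(emb, knn_indices(emb, k))   # feature-space branch
local = np.concatenate([geo_local, fea_local], axis=1)  # fused representation (N, 64)

# 3) global bilateral regularization: point-wise and channel-wise global gating
point_gate = 1.0 / (1.0 + np.exp(-local.mean(axis=1, keepdims=True)))  # (N, 1)
chan_gate = 1.0 / (1.0 + np.exp(-local.mean(axis=0, keepdims=True)))   # (1, 64)
enhanced = local * point_gate * chan_gate

# 4) classification: map back to a low-dimensional label space per keypoint
W2, b2 = rng.normal(size=(64, 3)), np.zeros(3)
labels = (enhanced @ W2 + b2).argmax(axis=1)            # one motion label per point
```

Each keypoint ends up with a motion-group label; in the paper these labels separate independently moving objects from the background in a traffic scene.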
Keywords: motion segmentation; keypoint extraction; graph structure; feature fusion; deep learning; autonomous driving
Classification codes: TP183 [Automation and Computer Technology / Control Theory and Control Engineering]; TN911.73 [Automation and Computer Technology / Control Science and Engineering]