Authors: HUANG Xinyi[1]; ZHANG Yong[2]
Affiliations: [1] Communication Signal Design and Research Institute, China Railway Eryuan Engineering Group Co., Ltd., Chengdu 610031, China; [2] Department of Electronic and Communication Engineering, Beijing Jiaotong University, Beijing 100091, China
Source: Railway Computer Application, 2023, No. 7, pp. 1-6 (6 pages)
Fund: National Natural Science Foundation of China project (F030205).
Abstract: Traditional track segmentation methods cannot meet the real-time and accuracy requirements for track region perception during train operation. This paper studies a track region segmentation method based on RGBD fusion images and an improved U-net. RGB images are fused with depth images to obtain RGBD fusion images, which are fed into the improved U-net to build a track region segmentation model. Experiments show that, compared with a U-net model that takes only RGB images as input, the track region segmentation model improves the F1 score by about 0.28, the Mean Intersection over Union (MIoU) by about 0.1, and the Pixel Accuracy (PA) by 0.0026, demonstrating higher accuracy in track region segmentation and a significant improvement in network performance.
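The abstract describes fusing an RGB image with its depth map into an RGBD input and feeding it to an improved U-net for pixel-wise track segmentation. The sketch below is a minimal illustration of that idea, not the authors' implementation: depth normalization, layer widths, and the two-level encoder/decoder are assumptions; the paper's "improved" U-net modifications are not specified in the abstract and are not reproduced here.

```python
# Minimal sketch (assumed details): concatenate RGB (3 channels) and depth (1 channel)
# into a 4-channel RGBD tensor and run it through a small U-net whose first
# convolution accepts 4 input channels. Layer sizes are illustrative only.
import torch
import torch.nn as nn

def fuse_rgbd(rgb: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
    """Fuse RGB (N,3,H,W) and depth (N,1,H,W) along the channel axis -> (N,4,H,W)."""
    # Normalize depth to [0, 1] per image before fusion (assumed preprocessing step).
    d_min = depth.amin(dim=(2, 3), keepdim=True)
    d_max = depth.amax(dim=(2, 3), keepdim=True)
    depth = (depth - d_min) / (d_max - d_min + 1e-6)
    return torch.cat([rgb, depth], dim=1)

class DoubleConv(nn.Module):
    """Two 3x3 conv + BN + ReLU layers, the basic U-net building block."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.block(x)

class MiniUNet(nn.Module):
    """Two-level U-net with a skip connection; in_ch=4 so it accepts RGBD input."""
    def __init__(self, in_ch: int = 4, n_classes: int = 2):
        super().__init__()
        self.enc1 = DoubleConv(in_ch, 64)
        self.enc2 = DoubleConv(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
        self.dec1 = DoubleConv(128, 64)
        self.head = nn.Conv2d(64, n_classes, kernel_size=1)
    def forward(self, x):
        e1 = self.enc1(x)                                   # full-resolution features
        e2 = self.enc2(self.pool(e1))                       # downsampled features
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1)) # upsample + skip connection
        return self.head(d1)                                # per-pixel logits: track vs. background

if __name__ == "__main__":
    rgb = torch.rand(1, 3, 256, 256)
    depth = torch.rand(1, 1, 256, 256)
    logits = MiniUNet()(fuse_rgbd(rgb, depth))
    print(logits.shape)  # torch.Size([1, 2, 256, 256])
```

The reported metrics (F1, MIoU, PA) are standard pixel-level segmentation measures computed from the per-pixel predictions such a model produces; the exact evaluation protocol is described in the paper itself.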
Keywords: U-net; track segmentation; RGBD fusion image; depth map; convolutional neural network (CNN)
Classification codes: U285.49 [Traffic and Transportation Engineering—Traffic Information Engineering and Control]; U213.2 [Traffic and Transportation Engineering—Road and Railway Engineering]; TP39 [Automation and Computer Technology—Computer Application Technology]