Research on Urban Street View Semantic Segmentation Method Based on Transformer Architecture

Authors: XIONG Wei (熊炜) [1,2]; ZHAO Di (赵迪); SUN Peng (孙鹏); LIU Yue (刘粤)

Affiliations: [1] School of Electrical and Electronic Engineering, Hubei University of Technology, Wuhan, Hubei 430068, China; [2] Department of Computer Science and Engineering, University of South Carolina, Columbia, SC 29201, USA

Source: Journal of Optoelectronics·Laser (《光电子·激光》), 2024, No. 12, pp. 1240-1249 (10 pages)

Funding: National Natural Science Foundation of China (62202148); Natural Science Foundation of Hubei Province (2019CFB530); Major Special Project of the Hubei Provincial Department of Science and Technology (2019ZYYD020); Research Project of the Xiangyang Industrial Research Institute, Hubei University of Technology (XYYJ2022C05); China Scholarship Council (201808420418).

Abstract: Some Transformer-based networks do not make full use of multi-scale features and contextual information when segmenting urban street view images, which leads to defects such as holes inside large objects and imprecise segmentation of small-object edges. To address this problem, this paper proposes Trans-AsfNet, a Transformer-based segmentation method centered on multi-scale feature extraction and context aggregation. The method adopts the Swin Transformer as a new feature extraction backbone to strengthen long-range dependencies. An adaptive subspace feature fusion (ASFF) module is proposed to enhance the network's ability to extract multi-scale features, and an efficient global context aggregation (EGCA) module is designed to improve its ability to aggregate contextual information. Rich multi-scale information is used for feature decoding and information compensation, and contextual information at different scales is then aggregated to reinforce the semantic understanding of targets, thereby eliminating holes in large objects and improving the edge segmentation accuracy of small objects. Trans-AsfNet is validated on the CamVid urban street view dataset. Experimental results show that the network largely eliminates segmentation holes, improves the segmentation of small-object edges, and achieves an MIoU of 69.5% on the CamVid test set.
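The abstract describes the overall pipeline (a Swin Transformer backbone, ASFF cross-scale fusion, and EGCA context aggregation) but not the internal design of the modules. The PyTorch sketch below is therefore only a hypothetical illustration of such a pipeline, not the authors' implementation: StubBackbone stands in for the real Swin Transformer, and the ASFF and EGCA bodies are plausible approximations introduced purely for illustration.

# Hypothetical sketch (assumptions: StubBackbone replaces the real Swin Transformer;
# the ASFF/EGCA internals are guesses, since the abstract does not specify them).
import torch
import torch.nn as nn
import torch.nn.functional as F

class StubBackbone(nn.Module):
    # Stand-in for the Swin Transformer backbone: emits 4 feature maps at
    # strides 4/8/16/32, mimicking the multi-scale outputs the method relies on.
    def __init__(self, dims=(96, 192, 384, 768)):
        super().__init__()
        chans = (3,) + dims
        self.stages = nn.ModuleList([
            nn.Conv2d(chans[i], chans[i + 1], 3, stride=4 if i == 0 else 2, padding=1)
            for i in range(4)
        ])

    def forward(self, x):
        feats = []
        for stage in self.stages:
            x = stage(x)
            feats.append(x)
        return feats  # strides 4, 8, 16, 32

class ASFF(nn.Module):
    # Hypothetical "adaptive subspace feature fusion": project two scales to a
    # shared width and blend them with a learned per-channel gate.
    def __init__(self, c_low, c_high, c_out):
        super().__init__()
        self.proj_low = nn.Conv2d(c_low, c_out, 1)
        self.proj_high = nn.Conv2d(c_high, c_out, 1)
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(2 * c_out, c_out, 1), nn.Sigmoid())

    def forward(self, low, high):
        high = F.interpolate(self.proj_high(high), size=low.shape[2:],
                             mode="bilinear", align_corners=False)
        low = self.proj_low(low)
        g = self.gate(torch.cat([low, high], dim=1))
        return g * low + (1 - g) * high

class EGCA(nn.Module):
    # Hypothetical "efficient global context aggregation": squeeze the feature map
    # to a global descriptor and re-inject it as a residual bias.
    def __init__(self, c):
        super().__init__()
        self.ctx = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Conv2d(c, c, 1),
                                 nn.ReLU(), nn.Conv2d(c, c, 1))

    def forward(self, x):
        return x + self.ctx(x)

class TransAsfNetSketch(nn.Module):
    # 11 classes is the commonly used CamVid label set (assumption, not stated above).
    def __init__(self, num_classes=11, width=128):
        super().__init__()
        self.backbone = StubBackbone()
        dims = (96, 192, 384, 768)
        self.fuse3 = ASFF(dims[2], dims[3], width)
        self.fuse2 = ASFF(dims[1], width, width)
        self.fuse1 = ASFF(dims[0], width, width)
        self.egca = EGCA(width)
        self.head = nn.Conv2d(width, num_classes, 1)

    def forward(self, x):
        c1, c2, c3, c4 = self.backbone(x)
        y = self.fuse3(c3, c4)   # decode coarse-to-fine, fusing adjacent scales
        y = self.fuse2(c2, y)
        y = self.fuse1(c1, y)
        y = self.egca(y)         # aggregate global context before the classifier
        return F.interpolate(self.head(y), size=x.shape[2:],
                             mode="bilinear", align_corners=False)

if __name__ == "__main__":
    logits = TransAsfNetSketch()(torch.randn(1, 3, 360, 480))  # a common CamVid size
    print(logits.shape)  # torch.Size([1, 11, 360, 480])

For reference, the MIoU reported above is the mean over classes of IoU_c = TP_c / (TP_c + FP_c + FN_c), computed between predicted and ground-truth label maps on the CamVid test set.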

Keywords: Transformer; urban street view; context information; semantic segmentation; feature fusion

Classification: TP391 [Automation and Computer Technology / Computer Application Technology]

 
