Integrated Bidirectional Attention Salient Region Detection Based on Full Convolution and Encoder-Decoder
(融合双向注意的全卷积编解码显著区域检测)

Authors: Liu Liying; Tian Mei [1]; Huang Yaping [1]; Zou Qi [1] (School of Computer and Information Technology, Beijing Jiaotong University, Beijing 100044)

Affiliation: [1] School of Computer and Information Technology, Beijing Jiaotong University

Source: Journal of Computer-Aided Design & Computer Graphics (《计算机辅助设计与图形学学报》), 2019, No. 7, pp. 1139-1147 (9 pages)

Funding: National Natural Science Foundation of China (61473031, 61472029); General Program of the Scientific Research Plan of the Beijing Municipal Education Commission (SM20191001107, PXM2019_014213_000007)

Abstract: To locate salient regions accurately against complex backgrounds and to alleviate the sparseness of saliency maps, we propose a salient region detection model based on a fully convolutional encoder-decoder network that fuses bottom-up and top-down attention information. First, we build a fully convolutional network on top of VGG16 and add a symmetric decoding path. Then, during decoding, high-level features are concatenated top-down with low-level, high-resolution features, and saliency maps are produced at multiple resolutions. Finally, least-squares estimation is used to find the optimal weights for combining these maps into the final saliency map. Comparisons with popular models on five public datasets show that the proposed model outperforms the other state-of-the-art methods in salient region detection.
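
The sketch below illustrates one way the final fusion step described in the abstract could be realized: the multi-resolution saliency maps are treated as columns of a design matrix, and the combination weights minimizing the squared error against ground-truth masks are obtained with an ordinary least-squares solver. This is a minimal sketch under our own assumptions (maps already resized to a common resolution, weights fitted on a training split), not the authors' implementation; the function names `fit_fusion_weights` and `fuse_saliency_maps` are hypothetical.

```python
# Minimal sketch (not the authors' released code) of the weighted fusion step:
# saliency maps predicted at K decoder resolutions are combined with weights
# obtained by least-squares estimation. Assumptions not stated in the abstract:
# all maps are first resized to a common resolution, and the weights are fitted
# against ground-truth masks from a training split.
import numpy as np

def fit_fusion_weights(pred_maps, gt_masks):
    """Estimate K fusion weights w that minimize ||A w - y||^2.

    pred_maps : list of K arrays, each (N, H, W) -- per-resolution saliency
                maps for N training images, already resized to (H, W).
    gt_masks  : array (N, H, W) -- binary ground-truth saliency masks.
    """
    # Each prediction becomes one column of the design matrix A.
    A = np.stack([m.reshape(-1) for m in pred_maps], axis=1)   # (N*H*W, K)
    y = gt_masks.reshape(-1).astype(np.float64)                # (N*H*W,)
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return w

def fuse_saliency_maps(pred_maps, w):
    """Weighted combination of the K maps with the fitted weights."""
    fused = sum(wk * m for wk, m in zip(w, pred_maps))
    return np.clip(fused, 0.0, 1.0)   # keep values in a valid saliency range
```

In this reading, the weights would be estimated once on training data and then applied to the per-resolution maps produced for each test image.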

Keywords: salient region detection; fully convolutional network; encoding; decoding; least-squares estimation

CLC number: TP391.41 [Automation and Computer Technology - Computer Application Technology]

 
