Authors: LIU Jun-Chen; ZHANG Wen-Bo [1]; YANG Da-Wei [1] (School of Information Science and Engineering, Shenyang Ligong University, Shenyang 110159, China)
Affiliation: [1] School of Information Science and Engineering, Shenyang Ligong University, Shenyang 110159, China
Source: Computer Systems & Applications (《计算机系统应用》), 2025, No. 3, pp. 143-151 (9 pages)
Funding: Natural Science Foundation of Liaoning Province, General Program (2022-MS-276).
Abstract: The Transformer approach, relying on the self-attention mechanism, has shown remarkable performance in image super-resolution reconstruction; however, self-attention also incurs a very high computational cost. To address this issue, a lightweight image super-resolution reconstruction model based on a hybrid generalized Transformer is proposed. The model is built on the SwinIR network architecture. First, a rectangular window self-attention (RWSA) mechanism is adopted, which uses horizontal and vertical rectangular windows in different heads instead of the traditional square window pattern, integrating features across different windows. Second, a recursive generalized self-attention (RGSA) mechanism is introduced to recursively aggregate the input features into representative feature maps, after which cross-attention extracts global information; RWSA and RGSA are combined alternately to use global context information more effectively. Finally, to activate more pixels for better restoration, a channel attention mechanism and a self-attention mechanism extract features from the input image in parallel. Test results on five benchmark datasets show that the model achieves better reconstruction performance while keeping its parameters lightweight.
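The record does not include the authors' implementation, so the following is a minimal PyTorch (>= 2.0) sketch of the rectangular-window idea described above: half of the attention heads work inside horizontal rectangles and the other half inside vertical ones, rather than a single square window. All names (RectWindowSelfAttention, window_partition, window_reverse) and the default rectangle size are illustrative assumptions, not the paper's code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def window_partition(x, wh, ww):
    """(B, H, W, C) -> (B*num_windows, wh*ww, C); assumes H % wh == 0 and W % ww == 0."""
    B, H, W, C = x.shape
    x = x.view(B, H // wh, wh, W // ww, ww, C)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, wh * ww, C)

def window_reverse(win, wh, ww, B, H, W):
    """Inverse of window_partition."""
    x = win.reshape(B, H // wh, W // ww, wh, ww, -1)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(B, H, W, -1)

class RectWindowSelfAttention(nn.Module):
    """Illustrative RWSA: channels are split in half; one half attends inside
    horizontal rectangles (rh x rw), the other inside vertical ones (rw x rh),
    so features are integrated across both window orientations."""
    def __init__(self, dim, num_heads=4, rect=(4, 16)):
        super().__init__()
        assert dim % 2 == 0 and num_heads % 2 == 0
        self.rect = rect
        self.heads = num_heads // 2                # heads per branch
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)

    def _attend(self, q, k, v, wh, ww, B, H, W):
        # Windowed multi-head attention for one branch; q, k, v are (B, H, W, C/2).
        def split_heads(t):
            t = window_partition(t, wh, ww)        # (B*nW, N, C/2)
            n, N, c = t.shape
            return t.view(n, N, self.heads, c // self.heads).transpose(1, 2)
        out = F.scaled_dot_product_attention(split_heads(q), split_heads(k), split_heads(v))
        n, _, N, d = out.shape
        out = out.transpose(1, 2).reshape(n, N, self.heads * d)
        return window_reverse(out, wh, ww, B, H, W)

    def forward(self, x):                          # x: (B, H, W, C)
        B, H, W, C = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        (q1, q2), (k1, k2), (v1, v2) = (t.chunk(2, dim=-1) for t in (q, k, v))
        rh, rw = self.rect
        out_h = self._attend(q1, k1, v1, rh, rw, B, H, W)   # horizontal rectangles
        out_v = self._attend(q2, k2, v2, rw, rh, B, H, W)   # vertical rectangles
        return self.proj(torch.cat([out_h, out_v], dim=-1))

# H and W must be divisible by both rectangle sides; 48 works for 4 and 16.
x = torch.randn(1, 48, 48, 60)
print(RectWindowSelfAttention(dim=60)(x).shape)    # torch.Size([1, 48, 48, 60])
```

Each branch only pays for attention within its small rectangles, yet the two orientations together cover long horizontal and vertical strips of the image.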
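Likewise, a hedged sketch of the recursive-generalization step: keys and values are recursively aggregated into a small representative map (here by repeatedly applying one strided depthwise convolution, an assumed aggregator), and the full-resolution queries then cross-attend to it. Attention cost drops from O((HW)^2) to O(HW * N_rep).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RecursiveGeneralizedSelfAttention(nn.Module):
    """Illustrative RGSA: the feature map is recursively aggregated into a
    small representative map (one strided depthwise conv applied `levels`
    times), and full-resolution queries cross-attend to it for global context."""
    def __init__(self, dim, num_heads=4, levels=2):
        super().__init__()
        self.heads, self.levels = num_heads, levels
        self.q = nn.Linear(dim, dim)
        self.kv = nn.Linear(dim, dim * 2)
        self.aggregate = nn.Conv2d(dim, dim, 3, stride=2, padding=1, groups=dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                            # x: (B, H, W, C)
        B, H, W, C = x.shape
        h, d = self.heads, C // self.heads
        q = self.q(x).reshape(B, H * W, h, d).transpose(1, 2)   # (B, h, HW, d)
        rep = x.permute(0, 3, 1, 2)                  # (B, C, H, W)
        for _ in range(self.levels):                 # recursive aggregation
            rep = self.aggregate(rep)                # spatial size halves each step
        rep = rep.flatten(2).transpose(1, 2)         # (B, N_rep, C), N_rep << HW
        k, v = self.kv(rep).chunk(2, dim=-1)
        k = k.reshape(B, -1, h, d).transpose(1, 2)
        v = v.reshape(B, -1, h, d).transpose(1, 2)
        out = F.scaled_dot_product_attention(q, k, v)  # cross-attention to the map
        out = out.transpose(1, 2).reshape(B, H, W, C)
        return self.proj(out)

x = torch.randn(1, 48, 48, 60)
print(RecursiveGeneralizedSelfAttention(dim=60)(x).shape)  # torch.Size([1, 48, 48, 60])
```

In the model described by the abstract, blocks of this kind alternate with RWSA blocks, so local rectangular mixing and global representative-map attention complement each other.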
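Finally, the parallel channel-attention/self-attention design can be sketched as a residual block whose two branches act on the same normalized input and are summed. It reuses the RectWindowSelfAttention sketch above; the small weight alpha on the convolutional branch is an assumed stabilizer, not a value from the paper.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention: global average pooling
    followed by a bottleneck MLP produces per-channel gates."""
    def __init__(self, dim, reduction=16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(dim, dim // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(dim // reduction, dim, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):                            # x: (B, C, H, W)
        return x * self.gate(x)

class ParallelAttentionBlock(nn.Module):
    """Self-attention and a channel-attention conv branch run in parallel on
    the same normalized input and are summed, activating more pixels."""
    def __init__(self, dim, num_heads=4, rect=(4, 16), alpha=0.01):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = RectWindowSelfAttention(dim, num_heads, rect)  # sketch above
        self.cab = nn.Sequential(
            nn.Conv2d(dim, dim, 3, padding=1), nn.GELU(),
            nn.Conv2d(dim, dim, 3, padding=1), ChannelAttention(dim),
        )
        self.alpha = alpha                           # assumed small weight on the conv branch

    def forward(self, x):                            # x: (B, H, W, C)
        y = self.norm(x)
        sa = self.attn(y)                                          # self-attention branch
        ca = self.cab(y.permute(0, 3, 1, 2)).permute(0, 2, 3, 1)   # channel-attention branch
        return x + sa + self.alpha * ca              # parallel fusion + residual

x = torch.randn(1, 48, 48, 64)
print(ParallelAttentionBlock(dim=64)(x).shape)       # torch.Size([1, 48, 48, 64])
```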
Keywords: super-resolution reconstruction; lightweight; channel attention; rectangular window self-attention; recursive generalized self-attention
Classification: TP3 [Automation and Computer Technology - Computer Science and Technology]