Authors: WU Haifeng, LAN Qiang (School of Computer and Information, Anqing Normal University, Anqing 246133, Anhui, China)
Source: Journal of Anqing Normal University (Natural Science Edition), 2024, No. 3, pp. 78-83 (6 pages)
Funding: Anhui Provincial Natural Science Foundation (2108085MF216)
Abstract: In the Deep Fusion Generative Adversarial Network (DF-GAN), the multiple fusion modules are independent of one another, which limits the network's fusion depth and makes it difficult to obtain an optimal fusion result. To address this, a text-to-image synthesis algorithm based on a Deep Propagated Fusion Generative Adversarial Network (DPF-GAN) is proposed. The algorithm connects adjacent affine and fusion modules through concatenation, so that information from earlier fusion steps propagates to subsequent fusion modules, enabling a deeper integration of text and image features. Experiments on the CUB-200-2011 and COCO datasets show that DPF-GAN generates higher-quality images than DF-GAN; in particular, the FID score on CUB-200-2011 is reduced by 11.34%. Compared with the Recurrent Affine Transformation GAN (RAT-GAN), DPF-GAN has lower space complexity and faster inference.
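The propagation mechanism described in the abstract can be illustrated with a minimal, framework-free sketch. This is not the authors' implementation: the function names (`affine`, `fuse`, `deep_propagated_fusion`) and the toy scale/shift computation are assumptions chosen only to show how each block's fused output is concatenated with the text embedding to condition the next affine module.

```python
# Hypothetical sketch of "deep propagated fusion" (all names are assumptions).
# In DF-GAN-style fusion, an affine module predicts a scale and shift from the
# text embedding, and a fusion module applies them to the image features.
# The DPF-GAN idea sketched here: concatenate the previous block's fused
# features with the text embedding, so earlier fusion information propagates
# into later fusion modules instead of each block conditioning independently.

def affine(conditioning):
    """Toy affine module: derive a (scale, shift) pair from a conditioning vector."""
    mean = sum(conditioning) / len(conditioning)
    return 1.0 + 0.1 * mean, 0.01 * sum(conditioning)

def fuse(image_feats, scale, shift):
    """Toy fusion module: element-wise affine transform of the image features."""
    return [scale * x + shift for x in image_feats]

def deep_propagated_fusion(image_feats, text_emb, n_blocks=3):
    """Chain n_blocks fusion blocks; each block is conditioned on the text
    embedding concatenated with the previous block's fused output."""
    feats = list(image_feats)
    for _ in range(n_blocks):
        conditioning = list(text_emb) + feats  # concatenation carries fusion state forward
        scale, shift = affine(conditioning)
        feats = fuse(feats, scale, shift)
    return feats
```

In a real generator these toy lists would be feature maps and the affine parameters would come from learned layers, but the control flow, each block's output feeding the next block's conditioning, is the point being illustrated.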
Keywords: text-to-image synthesis; generative adversarial network; affine transformation; deep propagated fusion; single-stage backbone
Classification: TP391.41 [Automation and Computer Technology / Computer Application Technology]