Authors: ZHANG Shuai [1]; YANG Xue-xia [2]
Affiliations: [1] Teaching Research Center, Taiyuan Radio and Television University, Taiyuan, Shanxi 030024, China; [2] College of Applied Sciences, Taiyuan University of Science and Technology, Taiyuan, Shanxi 030024, China
Source: Software Guide (《软件导刊》), 2020, No. 8, pp. 216-220 (5 pages)
Funding: National Natural Science Foundation of China (11602157); Shanxi Radio and Television University Research Planning Fund (SXDDKT201903).
Abstract: In traditional text-to-image adversarial models, the large number of parameters in the deconvolution network is prone to overfitting, which degrades the quality of the generated images, while linear decomposition alone cannot solve the problem of the single, insufficiently varied input in such models. This paper proposes an algorithm that adds manifold interpolation on top of linear decomposition and improves the traditional DCGAN model to enhance the robustness of the generated images. Simulation results show that the FID score of generated flower images is reduced by 4.73% and that of generated bird images by 4.11%; the human evaluation scores of images generated on the Oxford-102 and CUB datasets are reduced by 75.64% and 58.95% respectively, and the Inception Scores are improved by 14.88% and 14.39% respectively. This indicates that the images generated by the new model better match human perception and have richer features. (An illustrative sketch of the interpolation step appears below the record fields.)
Classification Code: TP317.4 [Automation and Computer Technology - Computer Software and Theory]
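The abstract above describes adding manifold interpolation on top of linear decomposition so that the text conditioning fed to the improved DCGAN generator is not limited to a single input per caption. The following minimal Python sketch is not the authors' implementation: it assumes a spherical-interpolation form of manifold interpolation between two caption embeddings, and the embedding dimension, step count, and the concatenation of the conditioning vector with the noise vector are all illustrative assumptions.

```python
import numpy as np

def manifold_interpolate(e1, e2, num_steps=5):
    """Spherical (slerp) interpolation between two text-embedding vectors.

    Hypothetical helper: the paper's exact manifold-interpolation scheme is
    not reproduced here; this only illustrates generating intermediate
    conditioning vectors between two caption embeddings so the generator
    sees a denser input distribution than a single embedding per caption.
    """
    a = e1 / np.linalg.norm(e1)
    b = e2 / np.linalg.norm(e2)
    omega = np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))
    ts = np.linspace(0.0, 1.0, num_steps)
    if np.isclose(omega, 0.0):
        # Nearly parallel embeddings: fall back to plain linear interpolation.
        return [(1.0 - t) * a + t * b for t in ts]
    return [(np.sin((1.0 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)
            for t in ts]

# Usage sketch: random stand-ins for two caption embeddings (dimension 128 assumed).
emb_a, emb_b = np.random.randn(128), np.random.randn(128)
for cond in manifold_interpolate(emb_a, emb_b, num_steps=4):
    z = np.random.randn(100)                  # DCGAN-style noise vector
    gen_input = np.concatenate([z, cond])     # text conditioning concatenated with noise
    # gen_input would then be fed to the (improved) DCGAN generator.
```

Spherical interpolation is used in the sketch because it keeps the intermediate vectors on (approximately) the same hypersphere as the endpoint embeddings; whether the paper uses this or another interpolation on the embedding manifold is not stated in the record.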