Authors: ZHOU Xiao-shi (周小诗); ZHANG Zi-wei (张梓葳); WEN Juan (文娟) (College of Information and Electrical Engineering, China Agricultural University, Beijing 100083, China)
Affiliation: [1] College of Information and Electrical Engineering, China Agricultural University, Beijing 100083, China
Source: Computer Science (《计算机科学》), 2021, Issue S02, pp. 557-564, 584 (9 pages)
Funding: National Natural Science Foundation of China (61802410); Chinese Universities Scientific Fund (2019TC047).
Abstract: Generation-based natural language steganography embeds secret information during text generation under the guidance of a secret bitstream. Current generation-based steganographic methods rely on a recurrent neural network (RNN) or long short-term memory (LSTM) network to generate the stego text. These methods can only produce short stego text, because semantic quality degrades as sentence length grows, and there is hardly any semantic connection between sentences. To address this issue, this paper proposes a neural machine translation steganography algorithm, Seq2Seq-Stega, that can generate long text while maintaining semantic relationships between words and sentences. An encoder-decoder model based on the sequence-to-sequence (Seq2Seq) architecture serves as the translation model; the source sentence provides extra information that ensures semantic relevance among the target stego sentences. In addition, based on the word probability distribution computed by the model at each time step, a word selection strategy is designed to form the candidate pool, and an attention hyperparameter is introduced to balance the contributions of the source and target sentences. Experiments report the hiding capacity and the quality of the generated text under different word selection thresholds and attention parameters. Comparative experiments with three other generation-based models show that Seq2Seq-Stega maintains long-distance semantic connections and better resists steganalysis.