Enhancing low-resource cross-lingual summarization from noisy data with fine-grained reinforcement learning (Cited by: 1)


Authors: Yuxin HUANG, Huailing GU, Zhengtao YU, Yumeng GAO, Tong PAN, Jialong XU

Affiliations: [1] Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650504, China; [2] Yunnan Key Laboratory of Artificial Intelligence, Kunming University of Science and Technology, Kunming 650504, China

Source: Frontiers of Information Technology & Electronic Engineering, 2024, No. 1, pp. 121-134 (14 pages)

Funding: Project supported by the National Natural Science Foundation of China (Nos. U21B2027, 62266027, 61972186, and 62241604); the Yunnan Provincial Major Science and Technology Special Plan Projects, China (Nos. 202302AD080003, 202103AA080015, and 202202AD080003); the General Projects of Basic Research in Yunnan Province, China (Nos. 202301AT070471 and 202301AT070393); and the Kunming University of Science and Technology "Double First-Class" Joint Project, China (No. 202201BE070001-021).

Abstract: Cross-lingual summarization (CLS) is the task of generating a summary in a target language from a document in a source language. Recently, end-to-end CLS models have achieved impressive results using large-scale, high-quality datasets typically constructed by translating monolingual summary corpora into CLS corpora. However, due to the limited performance of low-resource language translation models, translation noise can seriously degrade the performance of these models. In this paper, we propose a fine-grained reinforcement learning approach to address low-resource CLS based on noisy data. We introduce the source language summary as a gold signal to alleviate the impact of the translated noisy target summary. Specifically, we design a reinforcement reward by calculating the word correlation and word missing degree between the source language summary and the generated target language summary, and combine it with cross-entropy loss to optimize the CLS model. To validate the performance of our proposed model, we construct Chinese-Vietnamese and Vietnamese-Chinese CLS datasets. Experimental results show that our proposed model outperforms the baselines in terms of both the ROUGE score and BERTScore.
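The abstract's reward design can be sketched in Python. This is a minimal illustration, not the paper's exact formulation: the definitions of `word_correlation`, `word_missing_degree`, and the mixing weights `alpha` and `lam` below are assumptions chosen for clarity (the paper computes these signals between the source-language summary and the generated target-language summary, which in practice requires cross-lingual word alignment rather than the direct token overlap used here).

```python
def word_correlation(gold_tokens, gen_tokens):
    """Fraction of generated words that also appear in the gold summary.
    (Illustrative overlap measure, not the paper's exact definition.)"""
    if not gen_tokens:
        return 0.0
    gold = set(gold_tokens)
    return sum(1 for w in gen_tokens if w in gold) / len(gen_tokens)

def word_missing_degree(gold_tokens, gen_tokens):
    """Fraction of gold-summary words absent from the generated summary."""
    if not gold_tokens:
        return 0.0
    gen = set(gen_tokens)
    return sum(1 for w in gold_tokens if w not in gen) / len(gold_tokens)

def reinforcement_reward(gold_tokens, gen_tokens, alpha=0.5):
    """Reward that rises with word correlation and falls with missing degree.
    `alpha` is a hypothetical trade-off weight."""
    return (alpha * word_correlation(gold_tokens, gen_tokens)
            - (1 - alpha) * word_missing_degree(gold_tokens, gen_tokens))

def mixed_loss(ce_loss, reward, lam=0.9):
    """Interpolate cross-entropy loss with the negated RL reward,
    mirroring the abstract's 'combine it with cross-entropy loss'."""
    return lam * ce_loss + (1 - lam) * (-reward)
```

For example, with gold tokens `["a", "b", "c"]` and generated tokens `["a", "b", "d"]`, the correlation is 2/3, the missing degree is 1/3, and the reward at `alpha=0.5` is 1/6; a higher reward lowers the mixed loss, steering the model away from noise in the translated target summary.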

Keywords: Cross-lingual summarization; Low-resource language; Noisy data; Fine-grained reinforcement learning; Word correlation; Word missing degree

Classification: TP391 [Automation and Computer Technology: Computer Application Technology]

 
