Paradigm Shift in Natural Language Processing (Cited by: 11)


Authors: Tian-Xiang Sun, Xiang-Yang Liu, Xi-Peng Qiu, Xuan-Jing Huang

Affiliations: [1] School of Computer Science, Fudan University, Shanghai 200438, China; [2] Shanghai Key Laboratory of Intelligent Information Processing, Fudan University, Shanghai 200438, China

Source: Machine Intelligence Research, 2022, Issue 3, pp. 169-183 (15 pages)

Funding: Supported by the National Natural Science Foundation of China (No. 62022027).

Abstract: In the era of deep learning, modeling for most natural language processing (NLP) tasks has converged into several mainstream paradigms. For example, we usually adopt the sequence labeling paradigm to solve a range of tasks such as part-of-speech (POS) tagging, named entity recognition (NER), and chunking, and adopt the classification paradigm to solve tasks like sentiment analysis. With the rapid progress of pre-trained language models, recent years have witnessed a rising trend of paradigm shift, that is, solving one NLP task in a new paradigm by reformulating the task. The paradigm shift has achieved great success on many tasks and is becoming a promising way to improve model performance. Moreover, some of these paradigms have shown great potential to unify a large number of NLP tasks, making it possible to build a single model to handle diverse tasks. In this paper, we review this phenomenon of paradigm shift in recent years, highlighting several paradigms that have the potential to solve different NLP tasks.
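The idea of reformulating a task into a different paradigm can be made concrete with a small sketch. The Python snippet below is illustrative only and not code from the paper; the function names, span format, and target-string format are all assumptions. It expresses the same NER example first in the classic sequence labeling paradigm (one BIO tag per token) and then reformulated as a sequence-to-sequence target, mirroring the kind of shift the survey describes.

```python
# Illustrative sketch (not the authors' code): one NER example, two paradigms.

sentence = ["Fudan", "University", "is", "in", "Shanghai"]
entities = [("ORG", 0, 2), ("LOC", 4, 5)]  # (type, start, end) spans


def as_sequence_labeling(tokens, spans):
    """Classic paradigm: the model predicts one BIO tag per input token."""
    tags = ["O"] * len(tokens)
    for label, start, end in spans:
        tags[start] = f"B-{label}"
        for i in range(start + 1, end):
            tags[i] = f"I-{label}"
    return list(zip(tokens, tags))


def as_seq2seq(tokens, spans):
    """Shifted paradigm: the model generates a target string naming each entity.
    The target format here is an arbitrary assumption for illustration."""
    parts = [f"{' '.join(tokens[s:e])} is {label}" for label, s, e in spans]
    return " ".join(tokens), "; ".join(parts)


print(as_sequence_labeling(sentence, entities))
# [('Fudan', 'B-ORG'), ('University', 'I-ORG'), ('is', 'O'),
#  ('in', 'O'), ('Shanghai', 'B-LOC')]
print(as_seq2seq(sentence, entities))
# ('Fudan University is in Shanghai', 'Fudan University is ORG; Shanghai is LOC')
```

In the seq2seq form the model interface is simply text in, text out, so the same model could in principle also host classification or sentiment tasks, which is why such paradigms are candidates for unifying many NLP tasks, as the abstract notes.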

Keywords: natural language processing, pre-trained language models, deep learning, sequence-to-sequence, paradigm shift

Classification: TP391.1 [Automation and Computer Technology / Computer Application Technology]

 
