The Model of Familiarity Reduction in Human Language Acquisition: From the Viewpoint of ChatGPT's Model of Verbal Reduction  (Cited by: 7)


Authors: Chen Baoya [1]; Chen Yue [2] (Center for Chinese Linguistics, Department of Chinese Language and Literature, Peking University, Beijing 100871, China; Department of Mathematics, the University of Georgia, USA)

Affiliations: [1] Center for Chinese Linguistics, Peking University, Beijing 100871, China; [2] Department of Mathematics, University of Georgia, USA

Source: Journal of Peking University (Philosophy and Social Sciences), 2024, No. 2, pp. 167-174 (8 pages)

Funding: Major Project of the National Social Science Fund of China, "Research on the Integration and Co-evolution of Chinese Ethnic Music Culture and Language Data" (22&ZD218).

Abstract: Despite the significant progress made by ChatGPT, a large language model in artificial intelligence, the philosophical debate between Turing and Searle continues. Nevertheless, ChatGPT can generate novel sentences that conform to grammar, which means it must have reduced (extracted) language units (tokens) and rules, thereby solving the long-standing problem of natural language understanding in artificial intelligence. This is an important turning point. ChatGPT's learning model relies on powerful computation and the massive storage capacity of computers, two abilities that can jointly be called strong storage-and-computation capacity. By contrast, the human brain has only weak storage-and-computation capacity. Precisely because of this limitation, human language learning cannot fully follow ChatGPT's language learning model. The human brain reduces a limited set of units and rules through experience-based activities of familiarity (acquaintance), and thereby generates novel sentences. ChatGPT currently adopts a verbal, text-based learning model rather than an experience-based familiarity learning model; future large language models may be extended with a familiarity learning model, truly simulating the familiarity-reduction model of human language acquisition. At that point it might be said that machines truly understand natural language, and the philosophical dispute between Turing and Searle might be resolved.
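The abstract's central claim is that generating novel grammatical sentences requires having reduced a finite inventory of units (tokens) and combination rules from input. A minimal sketch of that idea, entirely hypothetical and far simpler than any model discussed in the paper, is a toy bigram learner: it extracts tokens and adjacency rules from a tiny corpus, then walks those rules to produce sentences it never saw verbatim.

```python
import random
from collections import defaultdict

# Toy illustration only (not the paper's model): reduce "units" and
# "rules" from a tiny corpus, then generate novel sentences from them.
corpus = [
    "the cat sees the dog",
    "the dog sees the bird",
    "a bird sees a cat",
]

# Units: the token inventory reduced from the corpus.
tokens = sorted({w for line in corpus for w in line.split()})

# Rules: bigram transitions, with sentence start/end markers.
rules = defaultdict(set)
for line in corpus:
    words = ["<s>"] + line.split() + ["</s>"]
    for a, b in zip(words, words[1:]):
        rules[a].add(b)

def generate(rng=random.Random(0), max_len=10):
    """Walk the learned transitions to produce a sentence."""
    out, cur = [], "<s>"
    while cur != "</s>" and len(out) < max_len:
        cur = rng.choice(sorted(rules[cur]))
        if cur != "</s>":
            out.append(cur)
    return " ".join(out)

print(generate())  # e.g. a sentence licensed by the rules but absent from the corpus
```

Even this toy learner can emit strings like "a cat sees the bird" that never occurred in its input, which is the sense in which reduced units and rules, rather than rote storage, underwrite novel production.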

Keywords: artificial intelligence; Turing test; Chinese Room; natural language understanding; thinking

CLC number: H08 [Linguistics]
