Author: 袁毓林 YUAN Yulin
Affiliations: [1] Department of Chinese Language and Literature, Faculty of Arts and Humanities, University of Macau, Macao 519000, China; [2] Department of Chinese Language and Literature / Center for Chinese Linguistics, Peking University, Beijing 100871, China
Source: Language Teaching and Linguistic Studies (《语言教学与研究》), 2025, No. 1, pp. 35-49 (15 pages)
Funding: Supported by the University of Macau Chair Professor Research and Development Fund (CPG2024-00005-FAH) and the Start-up Research Grant (SRG2022-00011-FAH).
Abstract: This paper tests ChatGPT's performance in understanding the semantic meanings of three types of complex sentences: referentially ambiguous sentences, garden-path sentences, and recursive sentences (especially center-embedding recursive sentences), in order to answer the following questions: Can modern large language models like ChatGPT truly understand the meaning of natural languages? Are they merely stochastic parrots that disregard meaning? The paper finds that ChatGPT excels in understanding referentially ambiguous sentences and ordinary recursive structures, but performs only moderately on garden-path sentences; and, like humans, it struggles to understand center-embedding recursive structures. Based on these tests, the paper argues that while large language models like ChatGPT can comprehend the linguistic meaning of sentences, they may not fully grasp the subtle communicative and embodied meanings behind them. Finally, the paper suggests that linguistics can make a unique contribution to the development of a new science of intelligence that understands the multifaceted dimensions of meaning.