Affiliations: [1] School of Development and Sustainability, Asian Institute of Technology, Khlong Luang 12120, Thailand; [2] Research Center for Digital Humanities, Renmin University of China, Beijing 100872, China
Source: Journal of Taiyuan University of Technology (Social Science Edition), 2025, No. 1, pp. 110-120, 136 (12 pages)
Funding: National Social Science Fund of China, Special Project on Ideological and Political Courses in Higher Education, "Research on Process Optimization for Integrating Rule-of-Law Education into University Ideological and Political Courses" (21VSZ031); Henan Province Humanities and Social Sciences Funding Project, "Research on the Generative Logic, Scientific Connotation, and Promotion Paths of the Fighting Spirit of Chinese Communists in the New Era" (2024-ZZJH-229); Henan Provincial Department of Education Special Research Project on Private Education, "Strategic Empirical Research on Improving the Quality of Faculty Teams in Henan Private Universities" (2022-MBJYZXKT-027); Henan Federation of Social Sciences Research Project, "Research on the Ethical Risks of Emerging ChatGPT Technologies and Their Governance Paths" (SKL-2023-252); Zhengzhou Technology and Business University Research Innovation Project, "Research on the Current Situation and Strategies of Red Culture Communication Based on ChatGPT-type Generative Artificial Intelligence" (2023-KYZD-01); Zhengzhou Technology and Business University Key Project of Education and Teaching Reform Research, "Research on Strategies for Cultivating the Innovation Ability of Private University Students from the Perspective of Craftsman Spirit with Chinese Characteristics" (GSJG2023003).
Abstract: At present, humans increasingly rely on robots and other forms of artificial intelligence to form their own beliefs, and the question of testimonial versus instrumental sources of knowledge has gradually drawn attention. Analyzing robot testimony as a knowledge source from the perspective of reliabilism not only helps to clarify the epistemological problem of classifying basic sources of knowledge, but also extends reliabilist theory's understanding of robot knowledge and further advances the practical turn in contemporary epistemology. Through a discussion of the definition and demarcation problem of reliabilism, and a review of the literature on three levels — the classification of knowledge obtained from what robots "say", the justifiability of robot testimony, and the instrumentality of robot knowledge — it can be shown that treating robot testimony as a source of knowledge often carries significant social and legal consequences. In view of this, the study argues that at least some robots can testify, and examines three proposed solutions to the demarcation problem of reliabilism: the cognitivist solution, the responsibilist solution, and the natural trust solution. None of these, however, ultimately succeeds in resolving the demarcation problem. In the meantime, drawing on empirical case studies of robot deception, a social trust solution to the demarcation problem of reliabilism is proposed: although some robots are designed to be deceptive, human interlocutors should regard them and their testimony as one justified source of knowledge. Accordingly, developing reliable human-robot relationships should rest, epistemologically, on a horizon of belief that reflects the authentic relationship between contemporary technology and human beings, and should adopt, methodologically, a value-pluralist approach to robot knowledge. This is of great significance for dynamic and sustainable human-robot interaction now and in the future.