Do robots that abide by ethical principles promote human-robot trust? The reversal effect of decision types and the human-robot projection hypothesis (机器人遵从伦理促进人机信任?决策类型反转效应与人机投射假说)


Authors: WANG Chen (王晨); CHEN Weicong (陈为聪); HUANG Liang (黄亮)[2,3]; HOU Suyu (侯苏豫); WANG Yiwen (王益文)

Affiliations: [1] School of Economics and Management, Fuzhou University, Fuzhou 350116, China; [2] Institute of Applied Psychology, Minnan Normal University, Zhangzhou 363000, China; [3] Fujian Key Laboratory of Applied Cognition and Personality, Zhangzhou 363000, China; [4] Department of Digital Economics, Shanghai University of Finance and Economics, Shanghai 200433, China

Source: Acta Psychologica Sinica (《心理学报》), 2024, No. 2, pp. 194-209 and I0010-I0012 (19 pages)

Funding: Interim results of a Major Project of the National Social Science Fund of China (19ZDA361) and a Youth Project of the National Social Science Fund of China (20CSH069).

Abstract: Asimov's Three Laws of Robotics are the basic ethical principles for artificial intelligence robots. Whether a robot abides by these ethical principles is a significant factor influencing people's trust in human-robot interaction, yet how it affects that trust is poorly understood. This article presents a new hypothesis for interpreting the effect of robot ethics on human-robot trust, the human-robot projection hypothesis (HRP hypothesis): people draw on their own cognitive, emotional, and action intelligence to understand robots' intelligence and to interact with them. We propose that, compared with robots that violate ethical principles, people project more mind energy (i.e., the level of human mental capacity) onto robots that abide by ethical principles, thereby promoting human-robot trust.

We conducted three experiments to explore how scenarios in which a robot abided by or violated Asimov's principles affect people's trust in the robot, with each experiment corresponding to one of Asimov's principles so as to examine the interaction effect of the robot's decision type. All three experiments used 2 × 2 within-subjects designs. The first factor was whether the robot abided by the Asimov principle whose core element is "no harm". The second factor was the robot's decision type, which differed across experiments in line with Asimov's principles (Experiment 1: whether the robot took action; Experiment 2: whether the robot obeyed a human's order; Experiment 3: whether the robot protected itself). Human-robot trust was assessed with the trust game paradigm.

Experiments 1-3 consistently showed that people were more willing to trust robots that abided by ethical principles than robots that violated them, and that human-robot projection played a mediating role, supporting the HRP hypothesis. In addition, significant interaction effects emerged between abiding by ethical principles and decision type: (1) in abiding scenarios, a robot that took action promoted trust more than one that did not, whereas in violating scenarios the pattern reversed; (2) in abiding scenarios and especially in violating scenarios, a robot that obeyed human orders promoted trust more than one that disobeyed; and (3) a robot that protected itself, relative to one that did not, promoted trust more in abiding scenarios than in violating scenarios. Cross-experiment analyses further clarified which robot action decisions promote human-robot trust in scenarios of abiding by ethical principles, violating them, and conflicting ethical demands.

Keywords: artificial intelligence; robot ethical principles; human-robot trust; human-robot projection; human-robot interaction

Classification codes: B82-057 [Philosophy and Religion: Ethics]; B842

 
