The Ethics of Automatic Systems from the Perspective of Human-Machine Collaboration (Cited by: 3)


Author: XU Xianjun (徐献军)

Affiliation: [1] School of Humanities, Tongji University

Source: Philosophical Analysis (《哲学分析》), 2023, No. 3, pp. 144-155, 199 (13 pages)

Abstract: With the extensive application of automatic systems such as civilian driverless cars and military unmanned aerial vehicles, corresponding ethical problems have emerged: who should be held responsible when an automatic system harms human beings? Because a new kind of ethical agent has appeared, namely the autonomous agent of an automatic system, traditional ethics cannot be applied directly to determine the attribution of responsibility. Most discussions of the ethics of automatic systems share a premise: that the agent is independent of humans, and that overcoming the responsibility gap requires more mature technology. This amounts to reducing an ethical problem to a technical one. If, instead, we do not assume that the agent can be independent of humans, the ethical norms the agent should observe can be discussed from the perspective of human-machine collaboration. Accordingly, future research on automatic systems should follow the ethics of human-machine collaboration: avoid developing agents that are completely independent of humans, keep humans supervising automatic systems at all times, and have humans bear the ethical responsibility of collaboration.

Keywords: automatic systems; responsibility gap; human-machine collaboration; autonomous agent

Classification: B82 [Philosophy and Religion - Ethics]

 
