How Is Trust in Artificial Intelligence Possible? (信任人工智能何以可能?)

Cited by: 6


Authors: BAO Ao-ri-ge-le (包傲日格乐); ZENG Yi (曾毅) — Department of Philosophy, School of Humanities, University of Chinese Academy of Sciences / Institute of Philosophy, Chinese Academy of Sciences, Beijing 100190; Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China

Affiliations: [1] Department of Philosophy, School of Humanities, University of Chinese Academy of Sciences / Institute of Philosophy, Chinese Academy of Sciences, Beijing 100190; [2] Institute of Automation, Chinese Academy of Sciences, Beijing 100190

Source: Studies in Dialectics of Nature (《自然辩证法研究》), 2023, No. 2, pp. 67-73

Abstract: Trust is the process by which a vulnerable trustor, facing risk, reaches shared expectations with a trustee. Being trustworthy implies that AI must be capable of being trusted. However, all existing accounts of trust in AI face difficulties. Because current AI is merely a tool, the appearance-based ethical trust path invites an evasion of responsibility; the rational, emotional, and normative trust paths cannot sustain the expectations placed on AI as a trustee; and the responsibility-network path attempts to sidestep the problem of trust in AI through collective responsibility. Trust in AI must instead be constructed in a non-instrumental, non-dependent sense of rational social norms — satisfying the requirements of intention, competence, goodwill, psychological positivity, acceptance of anticipated expectations, and vulnerability to risk — proceeding from a normative understanding of the context of trust and from the unsupervised character of the trust process.

Keywords: AI ethics; trust; ethics of appearance; vulnerability risk

Classification: N031 [General Natural Science — Philosophy of Science and Technology]

 
