Research on Artificial Intelligence Giant Risk: Formation Mechanism, Path and Future Governance    Cited by: 6

Authors: 王彦雨 (WANG Yan-yu)[1]; 雍熙 (YONG Xi)[2]; 高芳 (GAO Fang)[3]

Affiliations: [1] Institute for the History of Natural Science, Chinese Academy of Sciences, Beijing 100190, China; [2] Information Center, Ministry of Water Resources of the People's Republic of China, Beijing 100190, China; [3] Institute of Scientific and Technical Information of China, Ministry of Science and Technology, Beijing 100038, China

Source: Studies in Dialectics of Nature (《自然辩证法研究》), 2023, No. 1, pp. 104-110 (7 pages)

Abstract: Building on Western theories of social risk, this paper proposes the concept of "artificial intelligence giant risk" (AI giant risk). It argues that, beyond attention to AI ethics and sporadic AI risks, sufficient attention should be paid to the wide-ranging, high-intensity, large-scale social risks that may arise as AI is applied and disseminated, that is, the "giant risks" triggered in the process of AI application and diffusion. The main types include machine self-involved AI giant risk, social-system-intrusive AI giant risk, and asymmetric-destructive AI giant risk. The formation of an AI giant risk is typically the result of resonant coupling among multiple closely related factors, such as technical capability, technological robustness, the orientation of technology application, social cohesion, the intensity of political/social confrontation, and social-psychological factors. To prevent the wide-ranging harm that AI may cause and to curb the formation of AI giant risks, we should actively develop a "machine risk studies" that analyzes, at the theoretical level, the dynamic mechanism by which AI giant risks form, and, in future social-governance practice, adhere to the principles of preventive governance, technology-based management (using technical development to counter technological risk), strong restraint, backup, and consultation.

Keywords: giant risk; artificial intelligence; asymmetric destructive risk; governance

CLC Number: N031 [General Theory of Natural Sciences / Philosophy of Science and Technology]

 
