人工智能偏见与冲突治理的内在主义进路及其知识表示  

Innate approaches to bias and conflict management in AI development and their knowledge representation


Authors: XU Jin [1]; WANG Jue [2,3,4]

Affiliations: [1] School of Physics, Southeast University, Nanjing, Jiangsu 210096; [2] School of Humanities, Southeast University, Nanjing, Jiangsu 210096; [3] Moral Development Think Tank, Southeast University, Nanjing, Jiangsu 210096; [4] AI Ethics Lab, Southeast University, Nanjing, Jiangsu 210096

Source: Journal of Southeast University (Philosophy and Social Science), 2024, No. 6, pp. 43-50, 147-148 (10 pages)

Funding: Jiangsu Provincial Social Science Fund project "Research on the Influence of the 'Information Cocoon' Effect on Unethical Behavior of College Students and Countermeasures" (22MLB010); Jiangsu Provincial Applied Social Science Research Excellence Project "Research on Bias Assessment and Governance Methods for AI Algorithms" (24SLB-01); Jiangsu Moral Development Think Tank project "Research on Knowledge Representation of AI Ethics in the Chinese Cultural Context" (20230017); a phased result of the National Social Science Fund of China Major Project "Research on Building a Database of Chinese Ethics and Morality over the Forty Years of Reform and Opening-up" (18ZDA022); supported by the Fundamental Research Funds for the Central Universities.

Abstract: The development of AI technology worldwide exhibits a pattern of oligopolistic dominance and hierarchical differentiation. Differences in the geographical, cultural and educational backgrounds of developers, as well as in the social, cultural and political attributes of training data, are exacerbating knowledge and ethical monopolies and amplifying cultural biases and value conflicts. Governing AI bias and conflict therefore requires a "big science" research approach that creates synchronous linkages between ethics and technology so as to ensure their coordinated development. The innate approach to ethical governance has both philosophical and technical foundations. Philosophically, taking ethical principles as the logical starting point of AI rather than as post hoc evaluation criteria, and thereby creating AI with moral agency, is an effective strategy for preventing and resolving bias and conflict. Technically, the pretraining-finetuning paradigm and its finetuning datasets form the basis of the innate approach. Using ontological representation as the foundation for structured ethical knowledge and semantic reasoning, and designing a re-finetuning technical route with ethical datasets to optimize large models, provides a methodology and roadmap for building moral agency into AI. An ontological knowledge representation of Chinese ethics is used as an example to demonstrate how an innate approach to governing bias and conflict in AI is possible.
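
The methodological core described in the abstract (ontology-based representation of ethical knowledge, semantic reasoning over it, and re-finetuning large models on ethical datasets) can be illustrated with a small sketch. The snippet below is a hypothetical illustration only, not the authors' actual ontology, concepts, or dataset: it encodes a few Confucian ethical concepts as subject-predicate-object triples, performs a toy "is_a" inference, and converts prescriptive triples into instruction-response pairs of the kind a re-finetuning dataset might contain.

```python
# Hypothetical sketch: ontology-style ethical knowledge -> finetuning data.
# All concept names, relations, and texts below are illustrative assumptions,
# not the paper's actual ontology or dataset.

import json

# Ontology-style representation: each entry is a (subject, predicate, object) triple.
ETHICS_TRIPLES = [
    ("Ren (benevolence)",  "is_a",       "ConfucianVirtue"),
    ("Yi (righteousness)", "is_a",       "ConfucianVirtue"),
    ("ConfucianVirtue",    "is_a",       "EthicalNorm"),
    ("Ren (benevolence)",  "prescribes", "treat others with care and empathy"),
    ("Yi (righteousness)", "prescribes", "act rightly even at a cost to self-interest"),
]


def infer_is_a(triples):
    """Toy semantic reasoning: transitive closure over the `is_a` relation."""
    facts = {(s, o) for s, p, o in triples if p == "is_a"}
    changed = True
    while changed:
        changed = False
        for a, b in list(facts):
            for c, d in list(facts):
                if b == c and (a, d) not in facts:
                    facts.add((a, d))
                    changed = True
    return facts


def to_finetuning_examples(triples):
    """Turn prescriptive triples into instruction-response pairs (JSONL rows)."""
    rows = []
    for s, p, o in triples:
        if p == "prescribes":
            rows.append({
                "instruction": f"According to the ethical norm {s}, how should one act?",
                "response": f"One should {o}.",
            })
    return rows


if __name__ == "__main__":
    # Inferred: Ren and Yi are EthicalNorms, not only ConfucianVirtues.
    print(infer_is_a(ETHICS_TRIPLES))
    for row in to_finetuning_examples(ETHICS_TRIPLES):
        print(json.dumps(row, ensure_ascii=False))
```

A production pipeline would more likely express the ontology in a standard language such as OWL (e.g., via libraries like rdflib or owlready2) with a much richer rule set; the triple-plus-closure structure above is only meant to show how declarative ethical knowledge can be made machine-readable and then surfaced as training data for re-finetuning.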

Keywords: artificial intelligence; ethics; governance; innate approach; knowledge representation; bias and conflict

CLC numbers: B82-057 (Philosophy and Religion: Ethics); TP18 (Automation and Computer Technology: Control Theory and Control Engineering)

 
