Authors: CHENG Haidong (程海东); HU Xiaocong (胡孝聪); CHEN Fan (陈凡)
Affiliation: [1] Department of Philosophy, School of Marxism, Northeastern University
Source: Philosophical Analysis (《哲学分析》), 2024, No. 1, pp. 174-185, 199 (13 pages)
Funding: 2022 National Social Science Fund of China General Project, "Research on the Approaches and Mechanisms of Moral Practice of Artificial Agents" (Grant No. 22BZX026)
Abstract: The purpose of moral modeling for artificial agents is to enable them to interact better with humans and other artificial agents in moral practices. Currently, there are four main modeling strategies: implicit, top-down, bottom-up, and hybrid. However, these strategies face technical challenges, such as problems of design paradigm, moral translation, and the algorithmic black box, as well as the challenge of ethical alignment at the social level. This is because the current strategies assume that artificial agents play a linear role in moral practices and can independently realize some external moral norm, thus isolating artificial agents from practice. By adopting a distributed moral mechanism for the moral modeling of artificial agents and placing them within a multi-agent system, open and inclusive moral norms and distributed moral responsibility can be formed in interactive moral practices between humans and artificial agents. This mechanism can not only resolve the difficulties of the existing strategies but also promote a morally symbiotic relationship between artificial agents and humans.