Explainability and Perceived Fairness in AI Algorithmic Decision-Making Systems


Author: ZHAO Wei (School of Marxism, Central China Normal University, Wuhan 430079, China)

Affiliation: [1] School of Marxism, Central China Normal University, Wuhan 430079

Source: Studies in Philosophy of Science and Technology (《科学技术哲学研究》), 2024, No. 6, pp. 115-121 (7 pages)

Funding: Major Project of the National Social Science Fund of China, "Research on Marx's Thought of Community and Its Contemporary Value" (23&ZD200); Central China Normal University Fundamental Research Funds for the Central Universities, Major and Special Research Project (XJ2023001501).

Abstract: Recently, the academic community, especially in the field of human-computer interaction (HCI), has focused increasingly on the perceived fairness of artificial intelligence, that is, how designers, end users, decision subjects, and various other stakeholders perceive fairness. Although computational definitions of fairness and transparency are popular research topics today, there is a growing understanding of the need to view algorithmic systems from a broader perspective, one that also takes in their social impact (e.g., adherence to social norms, moral judgments, and user perceptions). This paper aims to understand how explanations provided by algorithmic systems affect non-experts' comprehension and perceptions of fairness. Our goal is to investigate whether and how explanations influence users' perception of the fairness of system outcomes. Explanations can be used to improve the transparency, credibility, and perceived fairness of algorithmic systems, and the best explanations are personalized ones, created on the basis of system characteristics as well as users' demographic and personality characteristics.

Keywords: algorithmic decision-making systems; computational fairness; perceived fairness; explainability

CLC number: G301 [Culture & Science]
