Authors: PENG Lihui (彭丽徽)[1]; ZHANG Qiong (张琼); LI Tianyi (李天一) (School of Public Administration, Xiangtan University, Xiangtan 411105; Network Security Detachment, Shenyang Railway Public Security Office, Shenyang 110167)
Affiliations: [1] School of Public Administration, Xiangtan University, Xiangtan 411105; [2] Network Security Detachment, Shenyang Railway Public Security Office, Shenyang 110167
Source: Journal of Library and Information Science in Agriculture (《农业图书情报学报》), 2024, No. 5, pp. 23-31 (9 pages)
Funding: Key project of the Young and Middle-aged Talent Pool of the Hunan Library Society, "Research on Older Adults' Health Information Avoidance Behavior and Intervention Mechanisms in the Era of Digital Intelligence" (XHZD1023).
Abstract: [Purpose/Significance] This study examines the application of artificial intelligence (AI) in government data governance and the algorithmic discrimination it can bring, and proposes corresponding countermeasures to protect citizens' legitimate rights and interests and to safeguard government credibility. [Method/Process] Using the literature induction method, the specific applications of AI algorithms in government data governance are analyzed and the causes of algorithmic discrimination are identified, including one-sided data, designers' preconceptions, and social bias; the potential risks of algorithmic discrimination are then explored and corresponding prevention and control measures are given. [Results/Conclusions] The study shows that embedding AI in government data governance improves efficiency but also introduces the risk of algorithmic discrimination. Accordingly, this study proposes prevention and control measures such as clarifying algorithmic fairness, formulating industry norms, improving accountability mechanisms, and optimizing the data environment, so that AI effectively benefits the public in government data governance.

[Purpose/Significance] The purpose of this study is to provide an in-depth analysis of the widespread application of artificial intelligence (AI) technology in the field of government data governance and its far-reaching implications, with a particular focus on the core issue of algorithmic discrimination. With the rapid development of AI technology, it has demonstrated great potential in government decision support, public service optimization, and policy impact prediction, but it has also sparked extensive debate on issues such as algorithmic bias, privacy invasion, and fairness. Through systematic analysis, this study aims to reveal the potential risks of AI algorithms in government data governance, especially the causes and manifestations of algorithmic discrimination, and then proposes effective solutions to protect citizens' legitimate rights and interests from being violated and to maintain government credibility and social justice.

[Method/Process] This study adopts the literature induction method to extensively collect domestic and international material on the application of AI in government data governance, including academic papers, policy documents, and case studies. Through systematic review and in-depth analysis, we clarified the specific application scenarios of AI algorithms in government data governance and their mechanisms of action. On this basis, the study further identified the key factors that lead to algorithmic discrimination, including but not limited to the one-sidedness of data collection and processing, the subjective bias of algorithm designers, and the influence of inherent social biases on the algorithms. It then explored the potential risks of algorithmic discrimination, including exacerbating social inequality, restricting civil rights, and undermining government credibility, and provided an in-depth analysis through a combination of theoretical modeling and case studies.

[Results/Conclusions] The results of the study show that while the embedding of AI technology in government data governance has significantly improved governance efficiency, it has also introduced the risk of algorithmic discrimination. Accordingly, the study proposes prevention and control measures such as clarifying algorithmic fairness, formulating industry norms, improving accountability mechanisms, and optimizing the data environment, so as to ensure that AI effectively benefits the public in government data governance.
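To make the "clarifying algorithmic fairness" measure in the conclusions more concrete, the following is a minimal illustrative sketch, not taken from the paper, of a group-fairness audit over automated decision records. It computes per-group selection rates, the demographic parity difference, and the disparate-impact ratio; the field names `group` and `decision` and the 0.8 threshold (the common "four-fifths rule") are assumptions for illustration only.

```python
# Minimal sketch (illustrative, not from the paper): auditing a binary
# automated decision for demographic parity across a protected attribute.
from collections import defaultdict

def selection_rates(records, group_key="group", decision_key="decision"):
    """Return the share of positive decisions per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        positives[r[group_key]] += int(r[decision_key])
    return {g: positives[g] / totals[g] for g in totals}

def fairness_report(records, threshold=0.8):
    """Demographic parity difference and disparate-impact ratio.

    `threshold=0.8` follows the common "four-fifths rule"; it is an
    illustrative default, not a value taken from the study.
    """
    rates = selection_rates(records)
    hi, lo = max(rates.values()), min(rates.values())
    ratio = lo / hi if hi else 1.0
    return {
        "selection_rates": rates,
        "parity_difference": hi - lo,   # 0 means equal selection rates
        "disparate_impact": ratio,
        "passes_four_fifths": ratio >= threshold,
    }

if __name__ == "__main__":
    # Hypothetical audit log of automated eligibility decisions.
    log = [
        {"group": "A", "decision": 1}, {"group": "A", "decision": 1},
        {"group": "A", "decision": 0}, {"group": "B", "decision": 1},
        {"group": "B", "decision": 0}, {"group": "B", "decision": 0},
    ]
    print(fairness_report(log))
```

On this hypothetical log, group A is selected at a rate of 0.67 and group B at 0.33, giving a disparate-impact ratio of 0.5, which would fail the four-fifths check and flag the decision process for review.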