Authors: HE Qun (何群); KE Yingjie (柯英杰) (Law School, Fuzhou University, Fuzhou 350000, China)
Source: Journal of Hefei University of Technology (Social Sciences) (《合肥工业大学学报(社会科学版)》), 2024, No. 4, pp. 8-17, 50 (11 pages)
Fund: Late-stage Funding Project for Philosophy and Social Sciences Research of the Ministry of Education (21JHQ076).
Abstract: Strong artificial intelligence (AI) driven by generative AI may possess the capabilities to recognize and control its conduct, to make autonomous decisions, and to take independent actions. The development of algorithms, training models, and data parameters makes it possible for strong AI to comprehend legal behavior and its implications, thereby enabling it to respond appropriately. The limited duty of care regarding AI crime risks owed by manufacturers and relevant stakeholders such as users falls short of meeting the potential governance needs for criminal acts involving strong AI. The paper proposes amending the anthropocentric paradigm and constructing a legal personhood for strong AI based on the necessity of risk prevention and control. This involves a comprehensive evaluation of strong AI's capacities to learn, comprehend, and respond to legal behaviors, to recognize and control its own actions, and to bear criminal responsibility, so that its criminal responsibility capacity can be determined according to the actual circumstances. This process advances the recognition of AI as a statutory subject of criminal responsibility, aiming to achieve the parallel development of criminal law governance and technological advancement without contradiction.