Code of Conduct  


Authors: Huo Siyi, Cao Ran

Affiliation: [1] Not specified

Source: China Weekly, 2025, No. 5, pp. 28-31 (4 pages), China Newsweek (English Edition)

Abstract: Effective AI governance is crucial for balancing innovation and risk, says policy advisor Xue Lan of Tsinghua University. Since early 2025, the emergence of AI has reshaped daily life. Yet while generative AI tools can serve as powerful personal consultants, they often produce misleading or entirely false information that appears highly convincing. This phenomenon, called AI hallucination, occurs when large language models (LLMs) perceive patterns or objects that do not actually exist, producing outputs that are nonsensical or altogether inaccurate.

Keywords: AI hallucination; AI governance; risk; artificial intelligence; policy advisor; large language models (LLMs); innovation

Classification: TP3 [Automation and Computer Technology / Computer Science and Technology]

 
