Authors: Qun LI, Haixin SUN, Fu XIAO, Yiming WANG, Xinping GAO, Bir BHANU
Affiliations: [1] School of Computer Science, Nanjing University of Posts and Telecommunications, Nanjing 210023, China; [2] Purple Mountain Laboratories, Nanjing 211111, China; [3] Department of Electrical and Computer Engineering, University of California at Riverside, Riverside 92521, USA
Source: Science China (Information Sciences), 2025, Issue 1, pp. 389-390 (2 pages)
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 62276143, 62302233); the Natural Science Foundation of Jiangsu Province (Grant No. BK20231287); and the National Science Fund for Distinguished Young Scholars of China (Grant No. 62125203).
Abstract: Large language models (LLMs) have recently shown remarkable performance on a variety of natural language processing (NLP) tasks. To further explore LLMs' reasoning abilities on complex problems, recent research [1-3] has investigated chain-of-thought (CoT) reasoning in complex multimodal scenarios, such as science question answering (ScienceQA) tasks [4], by fine-tuning multimodal models on human-annotated CoT rationales. However, collected CoT rationales often miss the necessary reasoning steps and specific expertise.
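The abstract refers to fine-tuning multimodal models on human-annotated CoT rationales for ScienceQA. The sketch below only illustrates that general setup, not the authors' pipeline: the `ScienceQAExample` fields, the prompt template, and the `build_cot_target` helper are hypothetical, and image features are omitted. Each training pair asks the model to emit the reasoning chain before the final choice.

```python
# Illustrative sketch (assumed names and format, not the paper's code):
# turn a ScienceQA-style example with a human-annotated CoT rationale into a
# "rationale then answer" text pair for seq2seq fine-tuning.

from dataclasses import dataclass
from typing import List


@dataclass
class ScienceQAExample:
    question: str
    options: List[str]      # multiple-choice candidates
    context: str            # textual context (visual features would be added separately)
    rationale: str          # human-annotated chain-of-thought rationale
    answer_index: int       # index of the correct option


def build_cot_target(ex: ScienceQAExample) -> dict:
    """Build an (input, target) pair where the target states the reasoning
    chain first and the final answer last (CoT-style supervision)."""
    option_str = " ".join(f"({chr(65 + i)}) {o}" for i, o in enumerate(ex.options))
    source = (
        f"Question: {ex.question}\n"
        f"Context: {ex.context}\n"
        f"Options: {option_str}\n"
        f"Answer with reasoning:"
    )
    target = (
        f"Rationale: {ex.rationale}\n"
        f"Answer: ({chr(65 + ex.answer_index)}) {ex.options[ex.answer_index]}"
    )
    return {"input": source, "target": target}


if __name__ == "__main__":
    ex = ScienceQAExample(
        question="Which property of the object is measured in grams?",
        options=["mass", "temperature", "length"],
        context="The object is weighed on a balance scale.",
        rationale="Grams are a unit of mass, so a reading in grams measures mass.",
        answer_index=0,
    )
    print(build_cot_target(ex)["target"])
```

As the abstract notes, the quality of such supervision hinges on the annotated rationales themselves: if they skip reasoning steps or lack domain expertise, the fine-tuned model inherits those gaps.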