How far are we to GPT-4V? Closing the gap to commercial multimodal models with open-source suites (Cited by: 1)


Authors: Zhe CHEN, Weiyun WANG, Hao TIAN, Shenglong YE, Zhangwei GAO, Erfei CUI, Wenwen TONG, Kongzhi HU, Jiapeng LUO, Zheng MA, Ji MA, Jiaqi WANG, Xiaoyi DONG, Hang YAN, Hewei GUO, Conghui HE, Botian SHI, Zhenjiang JIN, Chao XU, Bin WANG, Xingjian WEI, Wei LI, Wenjian ZHANG, Bo ZHANG, Pinlong CAI, Licheng WEN, Xiangchao YAN, Min DOU, Lewei LU, Xizhou ZHU, Tong LU, Dahua LIN, Yu QIAO, Jifeng DAI, Wenhai WANG

Affiliations: [1] State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China; [2] Shanghai AI Laboratory, Shanghai 200232, China; [3] SenseTime Research, Shanghai 200233, China; [4] Department of Electronic Engineering, Tsinghua University, Beijing 100084, China; [5] School of Computer Science, Fudan University, Shanghai 200433, China; [6] Department of Information Engineering, The Chinese University of Hong Kong, Hong Kong 999077, China

Source: Science China (Information Sciences), 2024, Issue 12, pp. 1-18 (18 pages)

Funding: Supported by the National Key R&D Program of China (Grant Nos. 2022ZD0160102, 2022ZD0161300); the National Natural Science Foundation of China (Grant Nos. 62372223, U24A20330, 62376134); the China Mobile Zijin Innovation Institute (Grant No. NR2310J7M); and the Youth Ph.D. Student Research Project under the National Natural Science Foundation (Grant No. 623B2050).

Abstract: In this paper, we introduce InternVL 1.5, an open-source multimodal large language model (MLLM) that bridges the capability gap between open-source and proprietary commercial models in multimodal understanding. We introduce three simple improvements. (1) Strong vision encoder: we explore a continuous learning strategy for the large-scale vision foundation model InternViT-6B, boosting its visual understanding capabilities and allowing it to be transferred and reused across different LLMs. (2) Dynamic high-resolution: we divide images into 1 to 40 tiles of 448×448 pixels according to the aspect ratio and resolution of the input images, supporting input at up to 4K resolution. (3) High-quality bilingual dataset: we carefully collect a high-quality bilingual dataset that covers common scenes and document images, annotated with English and Chinese question-answer pairs, significantly enhancing performance in optical character recognition (OCR) and Chinese-related tasks. We evaluate InternVL 1.5 through a series of benchmarks and comparative studies. Compared to both open-source and proprietary commercial models, InternVL 1.5 shows competitive performance, achieving state-of-the-art results in 8 of 18 multimodal benchmarks. Code and models are available at https://github.com/OpenGVLab/InternVL.
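The dynamic high-resolution scheme in point (2) can be sketched as follows. This is an illustrative reimplementation based only on the abstract's description (448×448 tiles, a budget of 1 to 40 tiles, grid chosen by aspect-ratio match), not the released InternVL code; the tie-breaking rule (prefer fewer tiles to avoid needless upscaling) is an assumption.

```python
# Sketch of dynamic high-resolution tiling: choose a (cols, rows) grid of
# 448x448 tiles, 1 <= cols*rows <= 40, whose aspect ratio best matches the
# input image; the image is then resized to cols*448 x rows*448 and split.

def best_tile_grid(width, height, tile=448, max_tiles=40):
    """Return (cols, rows) whose ratio cols/rows best matches width/height."""
    target = width / height
    candidates = [(c, r)
                  for c in range(1, max_tiles + 1)
                  for r in range(1, max_tiles + 1)
                  if c * r <= max_tiles]
    # Closest aspect ratio wins; ties go to the smaller grid (assumption:
    # avoids upscaling small images into many near-empty tiles).
    return min(candidates,
               key=lambda cr: (abs(cr[0] / cr[1] - target), cr[0] * cr[1]))

cols, rows = best_tile_grid(1920, 1080)  # 16:9 input
print(cols, rows)                        # grid for a 1080p image
print(cols * 448, rows * 448)            # resolution after resizing
```

For a 1920×1080 input the best grid under the 40-tile budget is 7×4 (ratio 1.75 vs. the image's 1.78), i.e. 28 tiles covering a 3136×1792 canvas; a square input collapses to a single tile.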

Keywords: multimodal model; open-source; vision encoder; dynamic resolution; bilingual dataset

Classification: TP3 [Automation and Computer Technology / Computer Science and Technology]
