Multimodal Pretraining from Monolingual to Multilingual (Cited by: 1)


Authors: Liang Zhang, Ludan Ruan, Anwen Hu, Qin Jin

Affiliation: [1] School of Information, Renmin University of China, Beijing 100872, China

Source: Machine Intelligence Research, 2023, Issue 2, pp. 220-232 (13 pages)

Funding: Supported by the National Natural Science Foundation of China (No. 62072462), the National Key R&D Program of China (No. 2020AAA0108600), and the Large-scale Pretraining Program 468 of the Beijing Academy of Artificial Intelligence (BAAI).

Abstract: Multimodal pretraining has achieved convincing results on various downstream tasks in recent years. However, since the majority of existing works build models on English data, their applications are limited by language. In this work, we address this issue by developing models with both multimodal and multilingual capabilities. We explore two types of methods for extending multimodal pretraining models from monolingual to multilingual. Specifically, we propose a pretraining-based model named multilingual multimodal pretraining (MLMM), and two generalization-based models named multilingual CLIP (M-CLIP) and multilingual acquisition (MLA). In addition, we further extend the generalization-based models to incorporate the audio modality and develop multilingual CLIP for vision, language, and audio (CLIP4VLA). Our models achieve state-of-the-art performance on multilingual vision-text retrieval, visual question answering, and image captioning benchmarks. Based on the experimental results, we discuss the pros and cons of the two types of models and their potential practical applications.
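The abstract contrasts pretraining-based and generalization-based routes to multilinguality. The paper itself does not detail the training recipe here, but a common generalization-based idea (which models such as M-CLIP build on) is teacher-student alignment: keep the original monolingual encoder frozen and train a new multilingual encoder to reproduce its embeddings on translation pairs. The sketch below is purely illustrative, reduced to linear maps and synthetic features; none of the shapes, names, or hyperparameters come from the paper.

```python
import numpy as np

# Illustrative teacher-student alignment for making a dual-encoder multilingual.
# The frozen "teacher" stands in for the original English text projection; the
# "student" stands in for a multilingual projection trained to match it.

rng = np.random.default_rng(0)
dim_in, dim_out, n_pairs = 16, 8, 256

W_teacher = rng.normal(size=(dim_in, dim_out))   # frozen monolingual projection
W_student = np.zeros((dim_in, dim_out))          # trainable multilingual projection

# Paired sentence features: in a real setup the student sees the translated
# sentence; in this toy example both sides share the same feature vector.
X = rng.normal(size=(n_pairs, dim_in))
targets = X @ W_teacher                          # teacher embeddings to imitate

lr = 0.1
for _ in range(500):
    preds = X @ W_student
    grad = X.T @ (preds - targets) / n_pairs     # gradient of 0.5 * MSE
    W_student -= lr * grad

mse = float(np.mean((X @ W_student - targets) ** 2))
print(f"alignment MSE after training: {mse:.2e}")
```

After training, the student's embeddings land in the teacher's embedding space, so the frozen image (or audio) encoder can be reused unchanged for retrieval in the new languages; this is what makes the generalization-based route cheap relative to full multimodal pretraining from scratch.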

Keywords: multilingual pretraining, multimodal pretraining, cross-lingual transfer, multilingual generation, cross-modal retrieval

Classification: TP18 [Automation and Computer Technology: Control Theory and Control Engineering]

 
