Editorial for Special Issue on Multi-modal Representation Learning  


Authors: Deng-Ping Fan, Nick Barnes, Ming-Ming Cheng, Luc Van Gool

Affiliations: [1] Nankai University, China and ETH Zürich, Switzerland [2] Australian National University, Australia [3] Nankai University, China [4] ETH Zürich, Switzerland

Source: Machine Intelligence Research, 2024, Issue 4, pp. 615-616 (2 pages)

Abstract: The past decade has witnessed the impressive and steady development of single-modal AI technologies in several fields, thanks to the emergence of deep learning. Less studied, however, is multi-modal AI, commonly considered the next generation of AI, which exploits the complementary context concealed in inputs of different modalities to improve performance. Humans naturally learn to form a global concept from multiple modalities (i.e., sight, hearing, touch, smell, and taste), even when some are incomplete or missing. Thus, in addition to the two popular modalities (vision and language), other types of data such as depth, infrared information, and events are also important for multi-modal learning in real-world scenes.

Keywords: MODAL, utilize, INCOMPLETE

Classification: TP181 [Automation and Computer Technology - Control Theory and Control Engineering]

 
