MKGViLT: visual-and-language transformer based on medical knowledge graph embedding


Authors: CUI Wencheng, SHI Wentao, SHAO Hong

Affiliation: [1] School of Information Science and Engineering, Shenyang University of Technology, Shenyang 110870, P.R. China

Source: High Technology Letters, 2025, Issue 1, pp. 73-85 (13 pages)

Funding: Supported by the National Natural Science Foundation of China (No. 62001313); the Liaoning Professional Talent Project (No. XLYC2203046); the Shenyang Municipal Medical Engineering Cross Research Foundation of China (No. 22-321-32-09).

Abstract: Medical visual question answering (MedVQA) aims to enhance diagnostic confidence and deepen patients' understanding of their health conditions. While the Transformer architecture is widely used in multimodal fields, its application in MedVQA requires further enhancement. A critical limitation of contemporary MedVQA systems lies in their inability to integrate lifelong knowledge with specific patient data to generate human-like responses, and existing Transformer-based MedVQA models need stronger capabilities for interpreting answers through the application of medical image knowledge. The medical knowledge graph visual-and-language transformer (MKGViLT), designed around joint medical knowledge graphs (KGs), addresses this challenge. MKGViLT incorporates an enhanced Transformer structure to effectively extract features and combine modalities for MedVQA tasks, and by grounding its answers in richer background knowledge it improves performance. The efficacy of MKGViLT is evaluated on the SLAKE and P-VQA datasets; experimental results show that MKGViLT surpasses state-of-the-art methods on the SLAKE dataset.
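The abstract describes fusing knowledge-graph embeddings with visual and textual features inside a Transformer. A minimal sketch of that general idea (not the authors' code; all names, dimensions, and the single untrained attention layer are illustrative assumptions) is to concatenate image-patch features, question-token features, and KG entity embeddings into one sequence, mix them with self-attention, and classify an answer:

```python
# Toy sketch of KG + vision + language fusion for VQA-style answering.
# Weights are random (untrained); this only illustrates the data flow.
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(seq, d_model):
    # Single-head scaled dot-product attention with random projections.
    Wq = rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
    Wk = rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
    Wv = rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
    q, k, v = seq @ Wq, seq @ Wk, seq @ Wv
    attn = softmax(q @ k.T / np.sqrt(d_model), axis=-1)
    return attn @ v

def fuse_kg_vqa(img_feats, txt_feats, kg_feats, n_answers=10):
    # Concatenate the three modalities into one token sequence.
    seq = np.concatenate([img_feats, txt_feats, kg_feats], axis=0)
    d = seq.shape[1]
    attended = self_attention(seq, d)
    pooled = attended.mean(axis=0)                       # mean pooling
    W_out = rng.standard_normal((d, n_answers)) / np.sqrt(d)
    return softmax(pooled @ W_out)                       # answer distribution

# Toy inputs: 4 image patches, 6 question tokens, 3 KG entities, d_model = 8.
probs = fuse_kg_vqa(rng.standard_normal((4, 8)),
                    rng.standard_normal((6, 8)),
                    rng.standard_normal((3, 8)))
```

The output is a probability distribution over a fixed answer vocabulary; a trained model would learn the projection matrices instead of sampling them randomly.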

Keywords: knowledge graph (KG); medical visual question answering (MedVQA); vision-and-language transformer

Classification: TP3 [Automation and Computer Technology: Computer Science and Technology]

 
