Authors: ZHAO Zhihui[1]; ZHOU Yi[2]; LI Weihong[3]; TANG Zhaohui[3]; GUO Qiang; CHEN Rigao[2]
Affiliations: [1] Intelligent Medical College, Chengdu University of Traditional Chinese Medicine, Chengdu 610075, China; [2] Hospital of Chengdu University of Traditional Chinese Medicine, Chengdu 610072, China; [3] Basic Medical College, Chengdu University of Traditional Chinese Medicine, Chengdu 610075, China; [4] Chengdu Integrated Traditional Chinese and Western Medicine Hospital, Chengdu 610095, China
Source: Modernization of Traditional Chinese Medicine and Materia Medica-World Science and Technology, 2024, Issue 4, pp. 908-918 (11 pages)
Funding: Key R&D Program of the Ministry of Science and Technology of China (2017YFC1703304), "Intelligent tongue-image diagnosis and clinical evaluation for major diseases", PI: LI Weihong; China Postdoctoral Science Foundation (2022MD723720), "Research on 'visual pulse diagnosis' based on machine vision and signal processing", PI: ZHAO Zhihui; Chengdu University of Traditional Chinese Medicine Postdoctoral Special Fund (BSH2023026), "Construction of an intelligent pulse-pattern diagnosis model based on deep-learning image processing", PI: ZHAO Zhihui
Abstract: Objective: To meet the needs of the "Internet plus" era of intelligent medicine, tongue-imaging-device data and structured inquiry data were incorporated, and deep learning and multimodal fusion methods were used to construct a TCM syndrome-element (Zhengsu) differentiation model for type 2 diabetes, providing experimental support and a scientific basis for intelligent TCM syndrome differentiation. Methods: A total of 2585 patients with type 2 diabetes were enrolled, and three experts independently labeled the syndrome elements. A symptom-based differentiation model (S-Model) and a tongue-image differentiation model (T-Model) were built using a deep fully connected neural network and the U2-Net and ResNet34 networks, respectively; multimodal fusion techniques were then used to construct a fusion model (TS-Model) taking both as joint inputs. The prediction performance of the models was compared by F1 score, precision, and recall. Results: Across the fourteen syndrome-element classes, F1 scores ranged from 0.000% to 86.726% for the T-Model and from 0.000% to 97.826% for the S-Model, while the TS-Model ranged from 55.556% to 99.065%. Compared with the T-Model and S-Model, the TS-Model's F1 scores were consistently higher and more stable. Conclusion: The syndrome-element differentiation model built with deep-learning multimodal fusion performs well. Multimodal fusion is suitable for optimizing TCM syndrome-element differentiation models and provides methodological support for developing a highly intelligent differentiation model with fully objective four-diagnostics information.
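The TS-Model described above combines a tongue-image branch and a symptom branch into one classifier, and the models are compared by per-class F1. The sketch below illustrates that late-fusion data flow and the F1 metric in plain Python. All dimensions, weights, and inputs are hypothetical placeholders (the paper's actual feature sizes and trained networks are not given here); it shows only the concatenate-then-classify pattern, not the authors' implementation.

```python
import math
import random

random.seed(0)

# Hypothetical sizes (illustrative, not from the paper): a 512-d tongue-image
# embedding (e.g. from a ResNet34 backbone), a 64-d structured symptom vector,
# and 14 syndrome elements treated as independent binary labels.
N_IMG, N_SYM, N_CLASSES = 512, 64, 14

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def fuse_and_classify(img_feat, sym_feat, weights, bias):
    """Late fusion: concatenate the two modality embeddings and apply one
    fully connected layer with a per-class sigmoid (multi-label output)."""
    fused = img_feat + sym_feat  # list concatenation -> 576-d joint vector
    return [
        sigmoid(sum(w * x for w, x in zip(row, fused)) + b)
        for row, b in zip(weights, bias)
    ]

def f1_score(y_true, y_pred):
    """Per-class F1 from binary labels, the metric used to compare models."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Placeholder inputs and untrained random weights, purely to show data flow.
img_feat = [random.gauss(0, 1) for _ in range(N_IMG)]
sym_feat = [random.gauss(0, 1) for _ in range(N_SYM)]
weights = [[random.gauss(0, 0.01) for _ in range(N_IMG + N_SYM)]
           for _ in range(N_CLASSES)]
bias = [0.0] * N_CLASSES

probs = fuse_and_classify(img_feat, sym_feat, weights, bias)
preds = [1 if p >= 0.5 else 0 for p in probs]
print(len(probs))  # 14 per-class probabilities, one per syndrome element
```

In a trained system the two feature vectors would come from the T-Model and S-Model branches and the fusion layer would be learned jointly; here everything is random, so only the shapes and the metric are meaningful.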