Affiliations: [1] Departments of Electrical Engineering, Convergence IT Engineering, Mechanical Engineering, Medical Science and Engineering, Graduate School of Artificial Intelligence, and Medical Device Innovation Center, Pohang University of Science and Technology (POSTECH), Pohang, Republic of Korea; [2] Opticho Inc., Pohang, Republic of Korea; [3] Department of Health Sciences and Technology, Gachon Advanced Institute for Health Sciences and Technology (GAIHST), Gachon University, Incheon, Republic of Korea; [4] Cancer Research Institute, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea; [5] Department of Hospital Pathology, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
Source: Light: Science & Applications, 2024, No. 10, pp. 2353-2366 (14 pages)
Funding: This work was supported by the following sources: Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2020R1A6A1A03047902); NRF grant funded by the Ministry of Science and ICT (MSIT) (2023R1A2C3004880, 2021M3C1C3097624); Korea Medical Device Development Fund grant funded by the Korea government (MSIT, the Ministry of Trade, Industry and Energy, the Ministry of Health & Welfare, and the Ministry of Food and Drug Safety) (Project Numbers: 1711195277, RS-2020-KD000008, 1711196475, RS-2023-00243633); Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. RS-2019-II191906, Artificial Intelligence Graduate School Program (POSTECH)); and the BK21 FOUR program.
Abstract: In pathological diagnostics, histological images highlight the oncological features of excised specimens, but they require laborious and costly staining procedures. Despite recent innovations in label-free microscopy that simplify complex staining procedures, technical limitations and inadequate histological visualization remain problems in clinical settings. Here, we demonstrate an interconnected deep learning (DL)-based framework for automated virtual staining, segmentation, and classification in label-free photoacoustic histology (PAH) of human specimens. The framework comprises three components: (1) an explainable contrastive unpaired translation (E-CUT) method for virtual H&E (VHE) staining, (2) a U-Net architecture for feature segmentation, and (3) a DL-based stepwise feature fusion method (StepFF) for classification. The framework demonstrates promising performance at each step of its application to human liver cancers. In virtual staining, E-CUT preserves the morphological aspects of the cell nucleus and cytoplasm, making VHE images highly similar to real H&E images. In segmentation, various features (e.g., cell area, number of cells, and distance between cell nuclei) are successfully segmented in VHE images. Finally, using deep feature vectors from PAH, VHE, and segmented images, StepFF achieves a classification accuracy of 98.00%, compared to 94.80% for conventional PAH classification. In particular, StepFF's classification reached a sensitivity of 100% in an evaluation by three pathologists, demonstrating its applicability in real clinical settings. This series of DL methods for label-free PAH has great potential as a practical clinical strategy for digital pathology.
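To make the stepwise feature fusion idea concrete, the sketch below shows one plausible way to combine deep feature vectors from PAH, VHE, and segmentation-derived features into a single classifier. This is a minimal illustration only: the class name `StepwiseFusionClassifier`, the feature dimensions, and the two-stage fusion layout are assumptions for demonstration and are not taken from the paper's actual StepFF implementation.

```python
# Hypothetical sketch of stepwise feature fusion for classification, assuming
# deep feature vectors have already been extracted from PAH images, virtual
# H&E (VHE) images, and segmentation-derived statistics (e.g., cell area,
# cell count, inter-nuclear distance). Dimensions and layer sizes are illustrative.
import torch
import torch.nn as nn

class StepwiseFusionClassifier(nn.Module):
    def __init__(self, dim_pah=512, dim_vhe=512, dim_seg=64, n_classes=2):
        super().__init__()
        # Step 1: fuse PAH and VHE deep features into an intermediate representation.
        self.fuse_pah_vhe = nn.Sequential(
            nn.Linear(dim_pah + dim_vhe, 256), nn.ReLU(), nn.Dropout(0.3)
        )
        # Step 2: fuse that representation with segmentation-derived features.
        self.fuse_seg = nn.Sequential(
            nn.Linear(256 + dim_seg, 64), nn.ReLU(), nn.Dropout(0.3)
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, f_pah, f_vhe, f_seg):
        x = self.fuse_pah_vhe(torch.cat([f_pah, f_vhe], dim=1))
        x = self.fuse_seg(torch.cat([x, f_seg], dim=1))
        return self.head(x)

# Usage with random placeholder features for a batch of 8 image patches.
model = StepwiseFusionClassifier()
logits = model(torch.randn(8, 512), torch.randn(8, 512), torch.randn(8, 64))
print(logits.shape)  # torch.Size([8, 2])
```

The stepwise (rather than single-shot) concatenation lets the low-dimensional segmentation features act as a late refinement on top of the fused image representations, which is one common design choice for combining heterogeneous feature sources.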