Affiliations: [1] Faculty of Informatics, University of Fukuchiyama, Kyoto, Japan [2] Breast Center, Dokkyo Medical University Hospital, Tochigi, Japan [3] School of Health Sciences, Fukushima Medical University, Fukushima, Japan [4] School of Radiological Technology, Gunma Prefectural College of Health Sciences, Gunma, Japan
Source: Open Journal of Medical Imaging, 2023, No. 3, pp. 63-83 (21 pages). Medical imaging journal (English).
Abstract: In a convolutional neural network (CNN) classification model for diagnosing medical images, transparency and interpretability of the model’s behavior are crucial in addition to high classification accuracy, and it is important to demonstrate them explicitly. In this study, we constructed an interpretable CNN-based model for breast density classification using spectral information from mammograms. We evaluated whether the model’s prediction scores provided reliable probability values using a reliability diagram, and we visualized the basis for the final prediction. In constructing the classification model, we modified ResNet50 and introduced algorithms for extracting and inputting image spectra, visualizing network behavior, and quantifying prediction ambiguity. The experimental results show that our proposed model achieved not only high classification accuracy but also higher reliability and interpretability than conventional CNN models that use pixel information from images. Furthermore, the proposed model can detect misclassified data and indicate an explicit basis for each prediction. These results demonstrate the effectiveness and usefulness of the proposed model from the perspective of credibility and transparency.
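The abstract refers to quantifying prediction ambiguity and checking whether prediction scores are reliable probabilities via a reliability diagram. The following is a minimal Python sketch of those two ideas, assuming Shannon entropy over the softmax output as the ambiguity measure and equal-width confidence binning for the diagram; the helper names prediction_entropy and reliability_bins are illustrative, and the exact formulation used in the paper may differ.

import numpy as np

def prediction_entropy(probs):
    # Shannon entropy (in nats) of a softmax output vector;
    # higher values indicate a more ambiguous prediction.
    probs = np.clip(probs, 1e-12, 1.0)
    return float(-np.sum(probs * np.log(probs)))

def reliability_bins(confidences, correct, n_bins=10):
    # Group predictions into equal-width confidence bins and return
    # (mean confidence, observed accuracy) for each non-empty bin,
    # i.e. the points plotted in a reliability diagram.
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    rows = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            rows.append((float(confidences[mask].mean()),
                         float(correct[mask].mean())))
    return rows

# Example: one softmax output from a four-class density classifier
probs = np.array([0.70, 0.20, 0.05, 0.05])
print(prediction_entropy(probs))  # about 0.87 nats

A well-calibrated model produces bins in which observed accuracy tracks mean confidence; the gap between the two is what a reliability diagram visualizes.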
Keywords: Explainable AI, t-SNE, Entropy, Wavelet Transform, Mammogram
Classification code: TP3 [Automation and Computer Technology / Computer Science and Technology]