1.
Seoni S, Shahini A, Meiburger KM, Marzola F, Rotunno G, Acharya UR, Molinari F, Salvi M. All you need is data preparation: A systematic review of image harmonization techniques in multi-center/device studies for medical support systems. Comput Methods Programs Biomed 2024; 250:108200. [PMID: 38677080] [DOI: 10.1016/j.cmpb.2024.108200] [Received: 01/27/2024] [Revised: 04/20/2024] [Accepted: 04/22/2024]
Abstract
BACKGROUND AND OBJECTIVES: Artificial intelligence (AI) models trained on multi-centric and multi-device studies can provide more robust insights and research findings than single-center studies. However, variability in acquisition protocols and equipment can introduce inconsistencies that hamper the effective pooling of multi-source datasets. This systematic review evaluates strategies for image harmonization, which standardizes image appearance to enable reliable AI analysis of multi-source medical imaging. METHODS: A literature search following PRISMA guidelines was conducted to identify papers published between 2013 and 2023 that analyzed multi-centric and multi-device medical imaging studies using image harmonization approaches. RESULTS: Common image harmonization techniques included grayscale normalization (improving classification accuracy by up to 24.42%), resampling (increasing the percentage of robust radiomics features from 59.5% to 89.25%), and color normalization (raising AUC by up to 0.25 on external test sets). Mathematical and statistical methods initially dominated, but machine and deep learning approaches have been adopted increasingly in recent years. Color imaging modalities such as digital pathology and dermatology remain prominent application areas, though harmonization efforts have expanded to diverse fields including radiology, nuclear medicine, and ultrasound imaging. Across all modalities covered by this review, image harmonization improved AI performance, with gains of up to 24.42% in classification accuracy and 47% in segmentation Dice scores. CONCLUSIONS: Continued progress in image harmonization is a promising strategy for advancing healthcare by enabling large-scale, reliable AI analysis of integrated multi-source datasets. Standardizing imaging data across clinical settings can help realize personalized, evidence-based care supported by data-driven technologies while mitigating biases associated with specific populations or acquisition protocols.
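As a toy illustration of one harmonization technique surveyed in this review, grayscale (min-max) normalization rescales intensities so that images acquired with different scanner settings share a common range. This is a minimal sketch under assumed data, not any specific method benchmarked by the authors:

```python
import numpy as np

def normalize_grayscale(image: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Rescale pixel intensities to [0, 1] so that images from different
    scanners or acquisition protocols share a common grayscale range."""
    lo, hi = image.min(), image.max()
    return (image - lo) / (hi - lo + eps)

# Toy images of the same structure acquired with different window settings
scanner_a = np.array([[10.0, 50.0], [90.0, 130.0]])       # dim acquisition
scanner_b = np.array([[100.0, 500.0], [900.0, 1300.0]])   # bright acquisition

harmonized_a = normalize_grayscale(scanner_a)
harmonized_b = normalize_grayscale(scanner_b)
# After harmonization, both images span [0, 1] with the same relative structure
```

Real pipelines in the reviewed studies are more elaborate (histogram matching, stain normalization, resampling), but the goal is the same: remove device-specific intensity variation before pooling data.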
Affiliation(s)
- Silvia Seoni
- Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- Alen Shahini
- Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- Kristen M Meiburger
- Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- Francesco Marzola
- Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- Giulia Rotunno
- Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- U Rajendra Acharya
- School of Mathematics, Physics and Computing, University of Southern Queensland, Springfield, Australia; Centre for Health Research, University of Southern Queensland, Australia
- Filippo Molinari
- Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- Massimo Salvi
- Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
2.
Cho K, Kim KD, Jeong J, Nam Y, Kim J, Choi C, Lee S, Hong GS, Seo JB, Kim N. Approximating Intermediate Feature Maps of Self-Supervised Convolution Neural Network to Learn Hard Positive Representations in Chest Radiography. J Imaging Inform Med 2024:10.1007/s10278-024-01032-x. [PMID: 38381382] [DOI: 10.1007/s10278-024-01032-x] [Received: 07/31/2023] [Revised: 01/22/2024] [Accepted: 01/24/2024]
Abstract
Recent advances in contrastive learning have significantly improved the performance of deep learning models. In contrastive learning of medical images, handling positive representations can be difficult: strong augmentation techniques can disrupt training because augmented positive pairs may differ only subtly from other standardized CXRs, so additional effort is required. In this study, we propose an intermediate feature approximation (IFA) loss, which improves the performance of contrastive convolutional neural networks by focusing more on positive representations of CXRs without additional augmentations. The IFA loss encourages the feature maps of a query image and its positive pair to resemble each other by maximizing the cosine similarity between the intermediate feature outputs of the original data and the positive pairs. We therefore combined the InfoNCE loss, a commonly used loss that addresses negative representations, with the IFA loss, which addresses positive representations, to improve the contrastive network. We evaluated the network on various downstream tasks, including classification, object detection, and a generative adversarial network (GAN) inversion task. The downstream results demonstrate that the IFA loss improves performance by effectively overcoming data imbalance and data scarcity; furthermore, it can serve as a perceptual-loss encoder for GAN inversion. In addition, we have made our model publicly available to facilitate access and encourage further research and collaboration in the field.
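The core of the IFA idea, maximizing cosine similarity between intermediate feature maps of a query and its positive pair, can be sketched roughly as follows. The function names and toy feature maps are hypothetical illustrations, not the paper's code:

```python
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> float:
    """Cosine similarity between two (flattened) feature maps."""
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def ifa_loss(query_feats, positive_feats):
    """Sketch of an intermediate-feature-approximation loss: average
    (1 - cosine similarity) over the intermediate feature maps of a
    query image and its positive pair. Minimizing this pulls the
    representations of the pair together at each network stage."""
    sims = [cosine_sim(q, p) for q, p in zip(query_feats, positive_feats)]
    return 1.0 - float(np.mean(sims))

# Toy intermediate feature maps from two network stages (channels, H, W)
query = [np.ones((2, 4, 4)), np.ones((2, 2, 2))]
positive = [np.ones((2, 4, 4)), np.ones((2, 2, 2))]
loss = ifa_loss(query, positive)  # identical features -> near-zero loss
```

In training, a term like this would be added to the InfoNCE objective so that negatives are pushed away while positives are explicitly pulled together at intermediate layers.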
Affiliation(s)
- Kyungjin Cho
- Department of Bioengineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, 88 Olympic-Ro 43-Gil, Songpa-Gu, Seoul, 05505, South Korea
- Ki Duk Kim
- Department of Convergence Medicine, University of Ulsan College of Medicine, Asan Medical Center, 88 Olympic-Ro 43-Gil, Songpa-Gu, Seoul, 05505, South Korea
- Jiheon Jeong
- Department of Bioengineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, 88 Olympic-Ro 43-Gil, Songpa-Gu, Seoul, 05505, South Korea
- Yujin Nam
- Department of Bioengineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, 88 Olympic-Ro 43-Gil, Songpa-Gu, Seoul, 05505, South Korea
- Jeeyoung Kim
- Department of Bioengineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, 88 Olympic-Ro 43-Gil, Songpa-Gu, Seoul, 05505, South Korea
- Changyong Choi
- Department of Bioengineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, 88 Olympic-Ro 43-Gil, Songpa-Gu, Seoul, 05505, South Korea
- Soyoung Lee
- Department of Bioengineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, 88 Olympic-Ro 43-Gil, Songpa-Gu, Seoul, 05505, South Korea
- Gil-Sun Hong
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Joon Beom Seo
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Namkug Kim
- Department of Convergence Medicine, University of Ulsan College of Medicine, Asan Medical Center, 88 Olympic-Ro 43-Gil, Songpa-Gu, Seoul, 05505, South Korea
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
3.
Khosravi B, Mickley JP, Rouzrokh P, Taunton MJ, Larson AN, Erickson BJ, Wyles CC. Anonymizing Radiographs Using an Object Detection Deep Learning Algorithm. Radiol Artif Intell 2023; 5:e230085. [PMID: 38074777] [PMCID: PMC10698585] [DOI: 10.1148/ryai.230085] [Received: 03/19/2023] [Revised: 08/11/2023] [Accepted: 08/25/2023]
Abstract
Radiographic markers contain protected health information that must be removed before public release. This work presents a deep learning algorithm that localizes radiographic markers and selectively removes them to enable de-identified data sharing. The authors annotated 2000 hip and pelvic radiographs to train an object detection computer vision model. Data were split into training, validation, and test sets at the patient level. Extracted markers were then characterized using an image processing algorithm, and potentially useful markers (eg, "L" and "R") without identifying information were retained. The model achieved an area under the precision-recall curve of 0.96 on the internal test set. The de-identification accuracy was 100% (400 of 400), with a de-identification false-positive rate of 1% (8 of 632) and a retention accuracy of 93% (359 of 386) for laterality markers. The algorithm was further validated on an external dataset of chest radiographs, achieving a de-identification accuracy of 96% (221 of 231). After fine-tuning the model on 20 images from the external dataset to investigate the potential for improvement, a 99.6% (230 of 231, P = .04) de-identification accuracy and a decreased false-positive rate of 5% (26 of 512) were achieved. These results demonstrate the effectiveness of a two-pass approach in image de-identification. Keywords: Conventional Radiography, Skeletal-Axial, Thorax, Experimental Investigations, Supervised Learning, Transfer Learning, Convolutional Neural Network (CNN). Supplemental material is available for this article. © RSNA, 2023. See also the commentary by Chang and Li in this issue.
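The retention step of a two-pass approach like this one, where detected markers are first localized and then only non-identifying laterality markers are kept, might be sketched as below. The function name and detection format are hypothetical illustrations, not the authors' implementation:

```python
# Second pass of a two-pass de-identification pipeline: after an object
# detector localizes burned-in markers, keep only non-identifying
# laterality markers ("L"/"R") and flag everything else for removal.
SAFE_MARKERS = {"L", "R"}

def filter_markers(detections):
    """detections: list of (text, bounding_box) pairs from the detector.
    Returns (kept, removed): laterality markers to retain, and
    potentially identifying markers to blank out of the image."""
    kept = [d for d in detections if d[0].upper() in SAFE_MARKERS]
    removed = [d for d in detections if d[0].upper() not in SAFE_MARKERS]
    return kept, removed

# Toy detector output: one laterality marker, one identifying marker
dets = [("L", (10, 10, 30, 30)), ("DOE, JOHN", (200, 5, 400, 40))]
kept, removed = filter_markers(dets)
```

In practice the removed boxes would then be in-painted or masked in the pixel data before release.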
Affiliation(s)
- Pouria Rouzrokh
- Michael J. Taunton
- A. Noelle Larson
- Bradley J. Erickson
- Cody C. Wyles
- From the Orthopedic Surgery Artificial Intelligence Laboratory, Department of Orthopedic Surgery (B.K., J.P.M., P.R., M.J.T., A.N.L., C.C.W.), Radiology Informatics Laboratory, Department of Radiology (B.K., P.R., B.J.E.), Department of Orthopedic Surgery (M.J.T., A.N.L., C.C.W.), and Department of Clinical Anatomy (C.C.W.), Mayo Clinic, 200 1st St SW, Rochester, MN 55905
4.
Hong GS, Jang M, Kyung S, Cho K, Jeong J, Lee GY, Shin K, Kim KD, Ryu SM, Seo JB, Lee SM, Kim N. Overcoming the Challenges in the Development and Implementation of Artificial Intelligence in Radiology: A Comprehensive Review of Solutions Beyond Supervised Learning. Korean J Radiol 2023; 24:1061-1080. [PMID: 37724586] [PMCID: PMC10613849] [DOI: 10.3348/kjr.2023.0393] [Received: 04/27/2023] [Revised: 07/01/2023] [Accepted: 07/30/2023]
Abstract
Artificial intelligence (AI) in radiology is a rapidly developing field, with several prospective clinical studies demonstrating its benefits in clinical practice. In 2022, the Korean Society of Radiology held a forum to discuss the challenges and drawbacks of AI development and implementation. Various barriers hinder the successful application and widespread adoption of AI in radiology, such as limited annotated data, data privacy and security, data heterogeneity, imbalanced data, model interpretability, overfitting, and integration with clinical workflows. In this review, several possible solutions to these challenges are presented and discussed, including training with longitudinal and multimodal datasets, dense training with multitask learning and multimodal learning, self-supervised contrastive learning, various image modifications and syntheses using generative models, explainable AI, causal learning, federated learning with large data models, and digital twins.
Affiliation(s)
- Gil-Sun Hong
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Miso Jang
- Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Sunggu Kyung
- Department of Biomedical Engineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Kyungjin Cho
- Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Department of Biomedical Engineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Jiheon Jeong
- Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Grace Yoojin Lee
- Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Keewon Shin
- Laboratory for Biosignal Analysis and Perioperative Outcome Research, Biomedical Engineering Center, Asan Institute of Lifesciences, Asan Medical Center, Seoul, Republic of Korea
- Ki Duk Kim
- Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Seung Min Ryu
- Department of Orthopedic Surgery, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Joon Beom Seo
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Sang Min Lee
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Namkug Kim
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
5.
Kim KD, Kyung S, Jang M, Ji S, Lee DH, Yoon HM, Kim N. Enhancement of Non-Linear Deep Learning Model by Adjusting Confounding Variables for Bone Age Estimation in Pediatric Hand X-rays. J Digit Imaging 2023; 36:2003-2014. [PMID: 37268839] [PMCID: PMC10501988] [DOI: 10.1007/s10278-023-00849-2] [Received: 01/06/2023] [Revised: 05/08/2023] [Accepted: 05/10/2023]
Abstract
In medicine, confounding variables are routinely adjusted for in generalized linear models; however, they have not yet been exploited in non-linear deep learning models. Sex plays an important role in bone age estimation, and non-linear deep learning models have reported performance comparable to that of human experts. We therefore investigate the use of confounding variables in a non-linear deep learning model for bone age estimation in pediatric hand X-rays. The RSNA Pediatric Bone Age Challenge (2017) dataset is used to train the deep learning models; the RSNA test dataset is used for internal validation, and 227 pediatric hand X-ray images with bone age, chronological age, and sex information from Asan Medical Center (AMC) are used for external validation. A U-Net-based autoencoder, a U-Net multi-task learning (MTL) model, and an auxiliary-accelerated MTL (AA-MTL) model are chosen. Bone age estimations adjusted by input, adjusted by output prediction, and without adjustment of the confounding variables are compared. Additionally, ablation studies of model size, auxiliary task hierarchy, and multiple tasks are conducted. Correlation and Bland-Altman plots between ground-truth and model-predicted bone ages are evaluated. Averaged saliency maps based on image registration are superimposed on representative images according to puberty stage. In the RSNA test dataset, adjusting by input shows the best performance regardless of model size, with mean absolute errors (MAEs) of 5.740, 5.478, and 5.434 months for the U-Net backbone, U-Net MTL, and AA-MTL models, respectively. In the AMC dataset, however, the AA-MTL model that adjusts the confounding variable by prediction shows the best performance, with an MAE of 8.190 months, whereas the other models perform best when adjusting the confounding variables by input. Ablation studies of task hierarchy reveal no significant differences in the RSNA dataset; however, predicting the confounding variable in the second encoder layer and estimating bone age in the bottleneck layer shows the best performance in the AMC dataset. Ablation studies of multiple tasks reveal that leveraging confounding variables plays an important role regardless of the number of tasks. To estimate bone age in pediatric X-rays, the clinical setting and the balance between model size, task hierarchy, and confounding-adjustment method play important roles in performance and generalizability; therefore, proper methods of adjusting for confounding variables are required to train improved deep learning-based models.
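The adjust-by-input strategy, feeding the confounder to the network alongside the image, can be sketched as follows. The channel-concatenation scheme here is an assumed illustration of the general idea, not the paper's exact architecture:

```python
import numpy as np

def adjust_by_input(image: np.ndarray, sex: int) -> np.ndarray:
    """Sketch of an adjust-by-input scheme: append the confounding
    variable (sex, 0 = female, 1 = male) as a constant extra channel
    so a convolutional model can condition its bone-age estimate on it.
    image is assumed to have shape (channels, height, width)."""
    sex_channel = np.full_like(image[:1], float(sex))
    return np.concatenate([image, sex_channel], axis=0)

x = np.zeros((1, 8, 8))            # toy single-channel hand radiograph
x_adj = adjust_by_input(x, sex=1)  # shape (2, 8, 8); channel 1 is all 1.0
```

The adjust-by-prediction alternative discussed in the abstract would instead add sex as an auxiliary output head rather than an input channel.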
Affiliation(s)
- Ki Duk Kim
- Department of Convergence Medicine, University of Ulsan College of Medicine, Asan Medical Center, 88 Olympic-Ro 43-Gil, Songpa-Gu, Seoul, 05505, Republic of Korea
- Sunggu Kyung
- Department of Biomedical Engineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, College of Medicine, University of Ulsan, Seoul, 05505, Republic of Korea
- Miso Jang
- Department of Convergence Medicine, University of Ulsan College of Medicine, Asan Medical Center, 88 Olympic-Ro 43-Gil, Songpa-Gu, Seoul, 05505, Republic of Korea
- Sunghwan Ji
- Department of Internal Medicine, University of Ulsan College of Medicine, Asan Medical Center, Seoul, 05505, Republic of Korea
- Department of Translational Medicine, Asan Medical Center, Asan Medical Institute of Convergence Science and Technology, University of Ulsan College of Medicine, Seoul, 05505, Republic of Korea
- Dong Hee Lee
- College of Medicine, The Catholic University of Korea, Seoul, 06591, Republic of Korea
- Hee Mang Yoon
- Department of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, 05505, Republic of Korea
- Namkug Kim
- Department of Convergence Medicine, University of Ulsan College of Medicine, Asan Medical Center, 88 Olympic-Ro 43-Gil, Songpa-Gu, Seoul, 05505, Republic of Korea
- Department of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, 05505, Republic of Korea
6.
Cho K, Kim KD, Nam Y, Jeong J, Kim J, Choi C, Lee S, Lee JS, Woo S, Hong GS, Seo JB, Kim N. CheSS: Chest X-Ray Pre-trained Model via Self-supervised Contrastive Learning. J Digit Imaging 2023; 36:902-910. [PMID: 36702988] [PMCID: PMC10287612] [DOI: 10.1007/s10278-023-00782-4] [Received: 11/14/2022] [Revised: 01/12/2023] [Accepted: 01/16/2023]
Abstract
Training deep learning models on medical images depends heavily on experts' expensive and laborious manual labels. In addition, these images, labels, and even the models themselves are not widely publicly accessible and suffer from various kinds of bias and imbalance. In this paper, a chest X-ray pre-trained model via self-supervised contrastive learning (CheSS) is proposed to learn rich representations of chest radiographs (CXRs). Our contribution is a publicly accessible pretrained model trained on a 4.8-M CXR dataset using self-supervised contrastive learning, validated on various downstream tasks including 6-class disease classification on an internal dataset, disease classification on CheXpert, bone suppression, and nodule generation. Compared to a model trained from scratch, we achieved a 28.5% increase in accuracy on the 6-class classification test dataset. On the CheXpert dataset, we achieved a 1.3% increase in mean area under the receiver operating characteristic curve on the full dataset and an 11.4% increase using only 1% of the data in a stress-test setting. On bone suppression with perceptual loss, compared to an ImageNet-pretrained model, the peak signal-to-noise ratio improved from 34.99 to 37.77, the structural similarity index measure from 0.976 to 0.977, and the root-mean-square error from 4.410 to 3.301. Finally, on nodule generation, the Fréchet inception distance improved from 24.06 to 17.07. Our study demonstrates the transferability of CheSS weights, which can help researchers overcome data imbalance, data shortage, and inaccessibility of medical image datasets. The CheSS weights are available at https://github.com/mi2rl/CheSS .
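The contrastive objective behind this kind of pretraining is typically InfoNCE: the query embedding is pulled toward its positive pair and pushed away from negatives. A toy NumPy sketch of the loss (an illustration of the general objective, not the CheSS training code):

```python
import numpy as np

def info_nce(query, positive, negatives, temperature=0.1):
    """InfoNCE loss for one query: cross-entropy over similarities,
    with the positive pair as the correct class (index 0)."""
    def unit(v):
        return v / (np.linalg.norm(v) + 1e-8)  # L2-normalize embeddings

    q = unit(query)
    logits = np.array([q @ unit(positive)] + [q @ unit(n) for n in negatives])
    logits /= temperature
    return float(np.log(np.exp(logits).sum()) - logits[0])

q = np.array([1.0, 0.0])
# Low loss when the positive aligns with the query...
loss_easy = info_nce(q, positive=np.array([1.0, 0.0]),
                     negatives=[np.array([0.0, 1.0])])
# ...high loss when a negative aligns with the query instead
loss_hard = info_nce(q, positive=np.array([0.0, 1.0]),
                     negatives=[np.array([1.0, 0.0])])
```

Minimizing a loss like this over millions of CXR pairs is what yields transferable weights for the downstream tasks listed above.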
Affiliation(s)
- Kyungjin Cho
- Department of Biomedical Engineering, Asan Medical Center, College of Medicine, Asan Medical Institute of Convergence Science and Technology, University of Ulsan, Seoul, Republic of Korea
- Department of Convergence Medicine, Asan Medical Center, Asan Medical Institute of Convergence Science and Technology, University of Ulsan College of Medicine, 5F, 26, Olympic-Ro 43-Gil, Songpa-Gu, Seoul, 05505, Republic of Korea
- Ki Duk Kim
- Department of Convergence Medicine, Asan Medical Center, Asan Medical Institute of Convergence Science and Technology, University of Ulsan College of Medicine, 5F, 26, Olympic-Ro 43-Gil, Songpa-Gu, Seoul, 05505, Republic of Korea
- Yujin Nam
- Department of Biomedical Engineering, Asan Medical Center, College of Medicine, Asan Medical Institute of Convergence Science and Technology, University of Ulsan, Seoul, Republic of Korea
- Department of Convergence Medicine, Asan Medical Center, Asan Medical Institute of Convergence Science and Technology, University of Ulsan College of Medicine, 5F, 26, Olympic-Ro 43-Gil, Songpa-Gu, Seoul, 05505, Republic of Korea
- Jiheon Jeong
- Department of Biomedical Engineering, Asan Medical Center, College of Medicine, Asan Medical Institute of Convergence Science and Technology, University of Ulsan, Seoul, Republic of Korea
- Department of Convergence Medicine, Asan Medical Center, Asan Medical Institute of Convergence Science and Technology, University of Ulsan College of Medicine, 5F, 26, Olympic-Ro 43-Gil, Songpa-Gu, Seoul, 05505, Republic of Korea
- Jeeyoung Kim
- Department of Biomedical Engineering, Asan Medical Center, College of Medicine, Asan Medical Institute of Convergence Science and Technology, University of Ulsan, Seoul, Republic of Korea
- Department of Convergence Medicine, Asan Medical Center, Asan Medical Institute of Convergence Science and Technology, University of Ulsan College of Medicine, 5F, 26, Olympic-Ro 43-Gil, Songpa-Gu, Seoul, 05505, Republic of Korea
- Changyong Choi
- Department of Biomedical Engineering, Asan Medical Center, College of Medicine, Asan Medical Institute of Convergence Science and Technology, University of Ulsan, Seoul, Republic of Korea
- Department of Convergence Medicine, Asan Medical Center, Asan Medical Institute of Convergence Science and Technology, University of Ulsan College of Medicine, 5F, 26, Olympic-Ro 43-Gil, Songpa-Gu, Seoul, 05505, Republic of Korea
- Soyoung Lee
- Department of Biomedical Engineering, Asan Medical Center, College of Medicine, Asan Medical Institute of Convergence Science and Technology, University of Ulsan, Seoul, Republic of Korea
- Department of Convergence Medicine, Asan Medical Center, Asan Medical Institute of Convergence Science and Technology, University of Ulsan College of Medicine, 5F, 26, Olympic-Ro 43-Gil, Songpa-Gu, Seoul, 05505, Republic of Korea
- Jun Soo Lee
- Department of Industrial Engineering, Seoul National University, Seoul, Republic of Korea
- Seoyeon Woo
- Department of Biomedical Engineering, University of Waterloo, Waterloo, ON, Canada
- Gil-Sun Hong
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Joon Beom Seo
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Namkug Kim
- Department of Convergence Medicine, Asan Medical Center, Asan Medical Institute of Convergence Science and Technology, University of Ulsan College of Medicine, 5F, 26, Olympic-Ro 43-Gil, Songpa-Gu, Seoul, 05505, Republic of Korea
- Department of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea