1
Hernández-Vázquez N, Santos-Arce SR, Hernández-Gordillo D, Salido-Ruiz RA, Torres-Ramos S, Román-Godínez I. Fibrous Tissue Semantic Segmentation in CT Images of Diffuse Interstitial Lung Disease. Journal of Imaging Informatics in Medicine 2025. [PMID: 39904943; DOI: 10.1007/s10278-025-01420-x]
Abstract
Assessing the progression and diagnosis of interstitial lung disease from radiological findings on computed tomography (CT) images requires significant time and effort from expert physicians, and accurate results are critical for treatment decisions. Automatic semantic segmentation of radiological findings has recently been developed using convolutional neural networks (CNNs). However, few works report individual performance scores per radiological finding, which are needed to measure fibrosis segmentation accurately; moreover, the poor annotation quality of available databases may mislead researchers' observations. This study presents a CNN methodology employing three architectures (U-Net, LinkNet, and FPN) with transfer learning and data augmentation to improve semantic segmentation of fibrosis-related radiological findings (FRF). In addition, given the poor quality of the manual CT tagging in available datasets, we use two alternative evaluation strategies: first, evaluating only the fibrosis region of interest; second, having an expert pulmonologist re-tag and validate the test set. Using DICOM images from the Interstitial Lung Diseases Database, the implemented approach achieves a Jaccard index of 0.7355 (standard deviation 0.0699) and a Dice similarity coefficient of 0.8459 (standard deviation 0.0470), comparable to state-of-the-art performance in FRF semantic segmentation. A pulmonologist also visually evaluated the images automatically tagged by our proposal and confirmed that the method successfully identifies FRF areas, demonstrating its effectiveness. The pulmonologist's review also revealed discrepancies in the dataset tags, indicating deficiencies in the FRF annotations.
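For reference, the Jaccard index and Dice similarity coefficient quoted here are standard overlap metrics between a predicted mask and a reference mask. A minimal sketch for binary masks (illustrative only, not the authors' evaluation code):

```python
import numpy as np

def jaccard_and_dice(pred, target):
    """Jaccard index (IoU) and Dice similarity coefficient for binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    total = pred.sum() + target.sum()
    jaccard = inter / union if union else 1.0
    dice = 2 * inter / total if total else 1.0
    return jaccard, dice

# Toy 4x4 masks: 2 overlapping pixels, 3 pixels in each mask
pred = np.array([[1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
target = np.array([[1, 1, 0, 0], [0, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 0]])
j, d = jaccard_and_dice(pred, target)
# inter = 2, union = 4 -> jaccard = 0.5; dice = 2*2/(3+3) ≈ 0.667
```

Note the two metrics are monotonically related (D = 2J / (1 + J)), which is why papers often report both.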
Affiliation(s)
- Natanael Hernández-Vázquez, Stewart R Santos-Arce, Ricardo A Salido-Ruiz, Sulema Torres-Ramos, Israel Román-Godínez: División de Tecnologías para la Integración Ciber-Humana, CUCEI-Universidad de Guadalajara, Blvd. Marcelino García Barragán 1421, Olímpica, 44430, Guadalajara, Jalisco, Mexico
- Daniel Hernández-Gordillo: UMAE, Hospital de Especialidades, CMNO, Av. Belisario Domínguez 1000 Col. Independencia, Guadalajara, 44340, Jalisco, Mexico
2
Asaf MZ, Rasul H, Akram MU, Hina T, Rashid T, Shaukat A. A Modified Deep Semantic Segmentation Model for Analysis of Whole Slide Skin Images. Sci Rep 2024;14:23489. [PMID: 39379448; PMCID: PMC11461484; DOI: 10.1038/s41598-024-71080-4]
Abstract
Automated segmentation of biomedical images is recognized as an important step in computer-aided diagnosis systems for detecting abnormalities. Despite its importance, segmentation remains an open challenge due to variations in color, texture, shape, and boundaries. Semantic segmentation often requires deeper neural networks to achieve higher accuracy, making the segmentation model more complex and slower. Because large numbers of biomedical images must be processed, more efficient and cheaper image-processing techniques for accurate segmentation are needed. In this article, we present a modified deep semantic segmentation model that uses an EfficientNet-B3 backbone with UNet for reliable segmentation. We trained our model on a non-melanoma skin cancer histopathology segmentation dataset to segment images into 12 classes. Our method outperforms the existing literature, increasing average class accuracy from 79% to 83% and overall accuracy from 85% to 94%.
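The two figures of merit quoted here measure different things: overall accuracy pools all pixels, while average class accuracy macro-averages the per-class accuracies, so rare classes weigh equally. A small sketch of the distinction (hypothetical labels, not the paper's data):

```python
import numpy as np

def overall_and_mean_class_accuracy(y_true, y_pred, n_classes):
    """Overall accuracy vs. mean per-class accuracy (macro average)."""
    overall = np.mean(y_true == y_pred)
    per_class = []
    for c in range(n_classes):
        mask = y_true == c
        if mask.any():  # skip classes absent from the ground truth
            per_class.append(np.mean(y_pred[mask] == c))
    return float(overall), float(np.mean(per_class))

# Class 0 dominates; mistakes on classes 1 and 2 barely move overall accuracy
y_true = np.array([0, 0, 0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 0, 0, 0, 1, 0, 2, 1])
overall, mean_cls = overall_and_mean_class_accuracy(y_true, y_pred, 3)
# overall = 6/8 = 0.75; per-class = [1.0, 0.5, 0.5] -> mean ≈ 0.667
```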
Affiliation(s)
- Muhammad Zeeshan Asaf, Hamid Rasul, Muhammad Usman Akram, Tazeen Hina, Tayyab Rashid, Arslan Shaukat: Department of Computer and Software Engineering, National University of Sciences and Technology, Islamabad, 44000, Pakistan
3
Sun H, Liu M, Liu A, Deng M, Yang X, Kang H, Zhao L, Ren Y, Xie B, Zhang R, Dai H. Developing the Lung Graph-Based Machine Learning Model for Identification of Fibrotic Interstitial Lung Diseases. Journal of Imaging Informatics in Medicine 2024;37:268-279. [PMID: 38343257; PMCID: PMC10976920; DOI: 10.1007/s10278-023-00909-7]
Abstract
Accurate detection of fibrotic interstitial lung disease (f-ILD) enables early intervention. Our aim was to develop a lung graph-based machine learning model to identify f-ILD. A total of 417 HRCT scans from 279 patients with confirmed ILD (156 f-ILD and 123 non-f-ILD) were included in this study. A lung graph-based machine learning model based on HRCT was developed to aid clinicians in diagnosing f-ILD. In this approach, local radiomics features were extracted from an automatically generated geometric atlas of the lung and used to build a series of specific lung graph models. Encoding these lung graphs yielded a lung descriptor characterizing the global distribution of radiomics features, which was used to diagnose f-ILD. The Weighted Ensemble model showed the best predictive performance in cross-validation. Its classification accuracy was significantly higher than that of three radiologists at both the CT-sequence level and the patient level. At the patient level, the diagnostic accuracy of the model versus radiologists A, B, and C was 0.986 (95% CI 0.959 to 1.000), 0.918 (95% CI 0.849 to 0.973), 0.822 (95% CI 0.726 to 0.904), and 0.904 (95% CI 0.836 to 0.973), respectively. The difference in AUC values between the model and the three physicians was statistically significant (p < 0.05). The lung graph-based machine learning model could identify f-ILD with diagnostic performance exceeding that of radiologists, and could help clinicians assess ILD objectively.
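The lung-graph construction can be caricatured as attaching regional radiomics features to atlas regions (graph nodes), propagating them along region adjacency, and pooling the result into one global descriptor for the classifier. A toy sketch under those assumptions (the region count, chain adjacency, and mean-pooling below are illustrative inventions, not the paper's pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
n_regions, n_feats = 6, 4                    # hypothetical atlas regions / features
X = rng.normal(size=(n_regions, n_feats))    # per-region radiomics vectors (nodes)

# Symmetric adjacency over neighboring atlas regions (a simple chain, for illustration)
A = np.zeros((n_regions, n_regions))
for i in range(n_regions - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0

# One round of neighborhood averaging (graph message passing), then mean-pool
deg = A.sum(axis=1, keepdims=True)
H = (A @ X) / np.maximum(deg, 1)             # each node averages its neighbors
descriptor = np.concatenate([X.mean(axis=0), H.mean(axis=0)])  # global descriptor
```

The descriptor is what a downstream classifier (here, their Weighted Ensemble) would consume instead of raw per-region features.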
Affiliation(s)
- Haishuang Sun, Xiaoyan Yang, Yanhong Ren, Bingbing Xie, Huaping Dai: National Center for Respiratory Medicine, State Key Laboratory of Respiratory Health and Multimorbidity; National Clinical Research Center for Respiratory Diseases; Institute of Respiratory Medicine, Chinese Academy of Medical Sciences; Department of Pulmonary and Critical Care Medicine, China-Japan Friendship Hospital, Beijing, 100029, China
- Haishuang Sun: also Department of Medical Oncology, State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Collaborative Innovation Center for Cancer Medicine, Sun Yat-Sen University Cancer Center, Guangzhou, Guangdong Province, 510060, China
- Min Liu, Anqi Liu, Mei Deng: Department of Radiology, China-Japan Friendship Hospital, Beijing, 100029, China; Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100730, China
- Han Kang: Institute of Advanced Research, Infervision Medical Technology Co., Ltd., Beijing, 100025, China
- Ling Zhao: Department of Clinical Pathology, China-Japan Friendship Hospital, Beijing, 100029, China
- Huaping Dai: also Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100730, China
4
Wang Y, Yu X, Yang Y, Zhang X, Zhang Y, Zhang L, Feng R, Xue J. A multi-branched semantic segmentation network based on twisted information sharing pattern for medical images. Computer Methods and Programs in Biomedicine 2024;243:107914. [PMID: 37992569; DOI: 10.1016/j.cmpb.2023.107914]
Abstract
BACKGROUND Semantic segmentation plays an indispensable role in clinical diagnosis support, intelligent surgical assistance, personalized treatment planning, and drug development, making it a core area of research in smart healthcare. The main challenge in medical image semantic segmentation, however, is an accuracy bottleneck, caused primarily by low interactivity of feature information and a lack of deep exploration of local features during feature fusion. METHODS To address this issue, we propose a Twisted Information-sharing Pattern for a Multi-branched Network (TP-MNet). This architecture transfers features mutually among neighboring branches at the next level, breaking the barrier of semantic isolation and achieving semantic fusion; a secondary feature-mining step during the transfer further improves detection accuracy. Building upon the Twisted Pattern transmission in the encoding and decoding stages, enhanced and refined feature-fusion modules were developed to capture key lesion features by acquiring contextual semantic information in a broader context. RESULTS We extensively and objectively validated TP-MNet on 5 medical datasets and compared it with 21 other semantic segmentation models using 7 metrics. Metric analysis, image comparisons, process examination, and ablation tests convincingly demonstrated the superiority of TP-MNet. We also investigated its limitations, clarifying the practical utility of the Twisted Information-sharing Pattern. CONCLUSIONS TP-MNet's Twisted Information-sharing Pattern substantially improves semantic fusion and directly enhances segmentation performance on medical images. This semantic broadcasting mode not only underscores the importance of semantic fusion but also highlights a pivotal direction for the advancement of multi-branched architectures.
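As a rough illustration of cross-branch sharing, the sketch below mixes each branch's features with those of its neighboring branches before the next stage. The mixing weights and ring topology are invented for illustration and are not claimed to match TP-MNet:

```python
import numpy as np

def cross_branch_share(branches):
    """Each branch blends its features with its two neighbors' features,
    a caricature of passing features among neighboring branches."""
    n = len(branches)
    out = []
    for i, x in enumerate(branches):
        left = branches[(i - 1) % n]
        right = branches[(i + 1) % n]
        out.append(0.5 * x + 0.25 * left + 0.25 * right)  # assumed weights
    return out

branches = [np.full(3, v, dtype=float) for v in (0.0, 1.0, 2.0)]
mixed = cross_branch_share(branches)
# branch 0 now carries information from branches 1 and 2
```

The convex weights keep the total feature mass unchanged; only the distribution across branches shifts, which is the intuition behind "semantic fusion" between branches.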
Affiliation(s)
- Yuefei Wang, Xiang Zhang, Yutong Zhang, Li Zhang: College of Computer Science, Chengdu University, 2025 Chengluo Rd., Chengdu, Sichuan 610106, China
- Xi Yu, Ronghui Feng, Jiajing Xue: Stirling College, Chengdu University, 2025 Chengluo Rd., Chengdu, Sichuan 610106, China
- Yixi Yang: Institute of Cancer Biology and Drug Discovery, Chengdu University, 2025 Chengluo Rd., Chengdu, Sichuan 610106, China
5
Lai Y, Liu X, Hou F, Han Z, E L, Su N, Du D, Wang Z, Zheng W, Wu Y. Severity-stratification of interstitial lung disease by deep learning enabled assessment and quantification of lesion indicators from HRCT images. Journal of X-Ray Science and Technology 2024;32:323-338. [PMID: 38306087; DOI: 10.3233/xst-230218]
Abstract
BACKGROUND Interstitial lung disease (ILD) comprises a group of chronic, heterogeneous diseases, and current clinical assessment of ILD severity and progression relies mainly on radiologists' visual screening, whose accuracy is limited by high inter- and intra-observer variability. OBJECTIVE To address these problems, we propose a deep learning-driven framework that assesses and quantifies lesion indicators and predicts ILD severity. METHODS We first present a convolutional neural network that segments and quantifies five lesion types (HC, RO, GGO, CONS, and EMPH) from HRCT of ILD patients; we then perform quantitative analysis to select ILD-related features based on the segmented lesions and clinical data. Finally, a multivariate nomogram-based prediction model combining multiple typical lesions is established to predict ILD severity. RESULTS Three lesion types (HC, RO, and GGO) accurately predicted ILD staging, either independently or combined with other HRCT features. Based on the HRCT, the multivariate model achieved AUC values ranging from 0.701 (RO) to 0.755 (HC) in stage I and from 0.733 (RO) to 0.803 (HC) in stage II. Additionally, our ILD scoring model achieved an average accuracy of 0.812 (0.736-0.888) in predicting ILD severity via cross-validation. CONCLUSIONS Our proposed method provides effective segmentation of ILD lesions through a comprehensive deep learning approach and shows potential to improve diagnostic accuracy for clinicians.
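The AUC values reported here can be computed directly from prediction scores via the rank-sum (Mann-Whitney) identity, without tracing the ROC curve explicitly. A compact sketch (toy labels and scores, not the paper's data):

```python
import numpy as np

def auc_score(y_true, scores):
    """AUC = fraction of (positive, negative) pairs ranked correctly; ties count half."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores, dtype=float)
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    gt = (pos[:, None] > neg[None, :]).sum()
    eq = (pos[:, None] == neg[None, :]).sum()
    return (gt + 0.5 * eq) / (len(pos) * len(neg))

y = [0, 0, 1, 1]
s = [0.1, 0.4, 0.35, 0.8]
# pairs: (0.35>0.1 ✓, 0.35>0.4 ✗, 0.8>0.1 ✓, 0.8>0.4 ✓) -> 3/4 = 0.75
```

This pairwise formulation is O(n²) and suits small validation sets; rank-based implementations scale better.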
Affiliation(s)
- Yexin Lai, Xueyu Liu, Fan Hou, Zhiyong Han, Dianrong Du, Zhichong Wang, Wen Zheng, Yongfei Wu: College of Data Science, Taiyuan University of Technology, Taiyuan, Shanxi, China
- Linning E: Department of Radiology, People's Hospital of Longhua, Shenzhen, China
- Ningling Su: Department of Radiology, Shanxi Bethune Hospital, Taiyuan, Shanxi, China
6
Liu N, Fenster A, Tessier D, Chun J, Gou S, Chong J. Self-supervised enhanced thyroid nodule detection in ultrasound examination video sequences with multi-perspective evaluation. Phys Med Biol 2023;68:235007. [PMID: 37918343; DOI: 10.1088/1361-6560/ad092a]
Abstract
Objective. Ultrasound is the most commonly used examination for detecting and identifying thyroid nodules. Since manual detection is time-consuming and subjective, attempts to introduce machine learning into this process are ongoing. However, the performance of these methods is limited by the low signal-to-noise ratio and tissue contrast of ultrasound images. To address these challenges, we extend thyroid nodule detection from image-based to video-based, using the temporal context information in ultrasound videos. Approach. We propose a video-based deep learning model with adjacent frame perception (AFP) for accurate, real-time thyroid nodule detection. Compared to image-based methods, AFP can aggregate semantically similar contextual features in the video. Furthermore, considering the cost of medical image annotation for video-based models, a patch-scale self-supervised model (PASS) is proposed. PASS is trained on unlabeled datasets to improve the performance of the AFP model without additional labelling costs. Main results. The PASS model was trained on 92 videos containing 23,773 frames, of which 60 annotated videos containing 16,694 frames were used to train and evaluate the AFP model. Evaluation was performed from the video, frame, nodule, and localization perspectives. For the localization perspective, we used average precision with the intersection-over-union threshold set to 50% (AP@50), i.e., the area under the smoothed precision-recall curve. AFP improved AP@50 from 0.256 to 0.390, and the PASS-enhanced AFP further improved it to 0.425. AFP and PASS also improved performance in the evaluations from the other perspectives based on the localization results. Significance. Our video-based model can mitigate the effects of low signal-to-noise ratio and tissue contrast in ultrasound images and enables accurate, real-time detection of thyroid nodules. The multi-perspective evaluation of the ablation experiments demonstrates the effectiveness of the proposed AFP and PASS models.
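AP@50 counts a detection as correct only when its intersection-over-union (IoU) with the ground-truth box reaches 50%. A minimal sketch of the underlying box-IoU test (toy boxes, not the paper's data):

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

pred = (0, 0, 10, 10)
gt = (5, 0, 15, 10)
iou = box_iou(pred, gt)    # inter = 50, union = 150 -> 1/3
hit_at_50 = iou >= 0.5     # would NOT count as a detection under AP@50
```

AP@50 then sweeps a confidence threshold over all detections, counting hits and misses with this rule, and integrates the resulting precision-recall curve.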
Affiliation(s)
- Ningtao Liu: Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, School of Artificial Intelligence, Xidian University, Xi'an, 710126, People's Republic of China; Robarts Research Institute, Western University, London, ON, N6A 5B7, Canada
- Aaron Fenster: Robarts Research Institute, Western University, London, ON, N6A 5B7, Canada; Department of Medical Imaging, Western University, London, ON, N6A 5A5, Canada; Department of Medical Biophysics, Western University, London, ON, N6A 5C1, Canada
- David Tessier: Robarts Research Institute, Western University, London, ON, N6A 5B7, Canada
- Jin Chun: Schulich School of Medicine, Western University, London, ON, N6A 5C1, Canada
- Shuiping Gou: Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, School of Artificial Intelligence, Xidian University, Xi'an, 710126, People's Republic of China
- Jaron Chong: Department of Medical Imaging, Western University, London, ON, N6A 5A5, Canada
7
Jiang X, Su N, Quan S, E L, Li R. Computed Tomography Radiomics-based Prediction Model for Gender-Age-Physiology Staging of Connective Tissue Disease-associated Interstitial Lung Disease. Acad Radiol 2023;30:2598-2605. [PMID: 36868880; DOI: 10.1016/j.acra.2023.01.038]
Abstract
PURPOSE To analyze the feasibility of predicting gender-age-physiology (GAP) staging in patients with connective tissue disease-associated interstitial lung disease (CTD-ILD) using radiomics based on chest computed tomography (CT). MATERIALS AND METHODS Chest CT images of 184 patients with CTD-ILD were retrospectively analyzed. GAP staging was performed on the basis of gender, age, and pulmonary function test results; GAP stages I, II, and III comprised 137, 36, and 11 cases, respectively. The GAP II and III cases were combined into one group, and the two groups of patients were randomly divided into training and testing sets at a 7:3 ratio. Radiomics features were extracted using AK software, and multivariate logistic regression analysis was conducted to establish a radiomics model. A nomogram model was built from the Rad-score and clinical factors (age and gender). RESULTS Four significant radiomics features were selected to construct the radiomics model, which showed excellent ability to differentiate GAP I from GAP II and III in both the training set (area under the curve [AUC] = 0.803, 95% confidence interval [CI]: 0.724-0.874) and the testing set (AUC = 0.801, 95% CI: 0.663-0.912). The nomogram model combining clinical factors and radiomics features achieved higher accuracy in both training (88.4% vs. 82.1%) and testing (83.3% vs. 79.2%). CONCLUSION Disease severity in patients with CTD-ILD can be evaluated with a radiomics method based on CT images, and the nomogram model demonstrates better performance for predicting GAP staging.
Affiliation(s)
- Xiaopeng Jiang, Ningling Su, Rui Li: Shanxi Bethune Hospital, Shanxi Academy of Medical Sciences, Tongji Shanxi Hospital, Third Hospital of Shanxi Medical University, Taiyuan, 030032, China; Tongji Hospital, Tongji Medical College, Huazhong University, China
- Shuai Quan: GE HealthCare China (Shanghai), Shanghai, 210000, China
- Linning E: Affiliated Longhua People's Hospital, Southern Medical University (Longhua People's Hospital), Shenzhen, 518110, China
8
Tyagi S, Kushnure DT, Talbar SN. An amalgamation of vision transformer with convolutional neural network for automatic lung tumor segmentation. Comput Med Imaging Graph 2023;108:102258. [PMID: 37315396; DOI: 10.1016/j.compmedimag.2023.102258]
Abstract
Lung cancer has the highest mortality rate among cancers. Its diagnosis and treatment analysis depend on accurate segmentation of the tumor, which becomes tedious when done manually, as radiologists are overburdened with numerous medical imaging tests due to the increase in cancer patients and the COVID pandemic. Automatic segmentation techniques therefore play an essential role in assisting medical experts. Segmentation approaches based on convolutional neural networks have provided state-of-the-art performance, but they cannot capture long-range relations due to the region-based convolution operator. Vision transformers can resolve this issue by capturing global multi-contextual features. To exploit this advantage, we propose an approach for lung tumor segmentation using an amalgamation of a vision transformer and a convolutional neural network. We design the network as an encoder-decoder structure, with convolution blocks in the initial layers of the encoder to capture features carrying essential information, and corresponding blocks in the final layers of the decoder. The deeper layers utilize transformer blocks with a self-attention mechanism to capture more detailed global feature maps. We use a recently proposed unified loss function that combines cross-entropy and Dice-based losses for network optimization. We trained our network on the publicly available NSCLC-Radiomics dataset and tested its generalizability on a dataset collected from a local hospital. We achieved average Dice coefficients of 0.7468 and 0.6847 and Hausdorff distances of 15.336 and 17.435 on the public and local test data, respectively.
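The unified loss mentioned here combines a cross-entropy term with a soft Dice term. A minimal sketch for a binary mask with per-pixel probabilities (a generic formulation; the exact loss the authors cite may weight or smooth the terms differently):

```python
import numpy as np

def combined_loss(prob, target, eps=1e-7):
    """Unified loss = binary cross-entropy + (1 - soft Dice)."""
    prob = np.clip(prob, eps, 1 - eps)
    ce = -np.mean(target * np.log(prob) + (1 - target) * np.log(1 - prob))
    inter = np.sum(prob * target)
    dice = (2 * inter + eps) / (np.sum(prob) + np.sum(target) + eps)
    return ce + (1 - dice)

prob = np.array([0.9, 0.8, 0.2, 0.1])
target = np.array([1.0, 1.0, 0.0, 0.0])
loss = combined_loss(prob, target)  # small, since predictions match the mask well
```

Cross-entropy supervises every pixel independently, while the Dice term directly optimizes region overlap, which stabilizes training on class-imbalanced masks such as small tumors.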
Affiliation(s)
- Shweta Tyagi, Devidas T Kushnure, Sanjay N Talbar: Centre of Excellence in Signal and Image Processing, Department of Electronics and Telecommunication Engineering, Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, India
9
Chen Y, Wang T, Tang H, Zhao L, Zhang X, Tan T, Gao Q, Du M, Tong T. CoTrFuse: a novel framework by fusing CNN and transformer for medical image segmentation. Phys Med Biol 2023;68:175027. [PMID: 37605997; DOI: 10.1088/1361-6560/acede8]
Abstract
Medical image segmentation is a crucial and intricate process in medical image processing and analysis. With advances in artificial intelligence, deep learning techniques have been widely used in recent years for medical image segmentation, notably the U-Net framework based on U-shaped convolutional neural networks (CNNs) and its variants. However, these methods have limited ability to capture global and long-range semantic information, because the convolution operation restricts the receptive field. Transformers are attention-based models with excellent global modeling capabilities, but their ability to acquire local information is limited. To address this, we propose a network that combines the strengths of both CNN and Transformer, called CoTrFuse. CoTrFuse uses EfficientNet and Swin Transformer as dual encoders, and a Swin Transformer and CNN fusion module fuses the features of both branches before the skip-connection structure. We evaluated the proposed network on two datasets: the ISIC-2017 challenge dataset and the COVID-QU-Ex dataset. Our experimental results demonstrate that CoTrFuse outperforms several state-of-the-art segmentation methods, indicating its superiority in medical image segmentation. The code is available at https://github.com/BinYCn/CoTrFuse.
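The global modeling capability attributed to transformers comes from self-attention, where every position attends to every other position in one step. A minimal numpy sketch of scaled dot-product self-attention (random toy weights, not the CoTrFuse implementation):

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of tokens.
    Every token's output is a weighted mix of ALL tokens' values,
    which is what gives transformers a global receptive field."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over all positions
    return weights @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                 # 5 tokens (e.g. image patches)
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
```

By contrast, a 3x3 convolution mixes only a local neighborhood per layer; stacking many layers is needed to connect distant positions, which is the receptive-field restriction the abstract refers to.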
Collapse
Affiliation(s)
- Yuanbin Chen
- College of Physics and Information Engineering, Fuzhou University, Fuzhou 350116, People's Republic of China
- Fujian Key Lab of Medical Instrumentation & Pharmaceutical Technology, Fuzhou University, Fuzhou 350116, People's Republic of China
- Tao Wang
- College of Physics and Information Engineering, Fuzhou University, Fuzhou 350116, People's Republic of China
- Fujian Key Lab of Medical Instrumentation & Pharmaceutical Technology, Fuzhou University, Fuzhou 350116, People's Republic of China
- Hui Tang
- College of Physics and Information Engineering, Fuzhou University, Fuzhou 350116, People's Republic of China
- Fujian Key Lab of Medical Instrumentation & Pharmaceutical Technology, Fuzhou University, Fuzhou 350116, People's Republic of China
- Longxuan Zhao
- College of Physics and Information Engineering, Fuzhou University, Fuzhou 350116, People's Republic of China
- Fujian Key Lab of Medical Instrumentation & Pharmaceutical Technology, Fuzhou University, Fuzhou 350116, People's Republic of China
- Xinlin Zhang
- College of Physics and Information Engineering, Fuzhou University, Fuzhou 350116, People's Republic of China
- Fujian Key Lab of Medical Instrumentation & Pharmaceutical Technology, Fuzhou University, Fuzhou 350116, People's Republic of China
- Tao Tan
- Faculty of Applied Science, Macao Polytechnic University, Macao 999078, People's Republic of China
- Qinquan Gao
- College of Physics and Information Engineering, Fuzhou University, Fuzhou 350116, People's Republic of China
- Fujian Key Lab of Medical Instrumentation & Pharmaceutical Technology, Fuzhou University, Fuzhou 350116, People's Republic of China
- Min Du
- College of Physics and Information Engineering, Fuzhou University, Fuzhou 350116, People's Republic of China
- Fujian Key Lab of Medical Instrumentation & Pharmaceutical Technology, Fuzhou University, Fuzhou 350116, People's Republic of China
- Tong Tong
- College of Physics and Information Engineering, Fuzhou University, Fuzhou 350116, People's Republic of China
- Fujian Key Lab of Medical Instrumentation & Pharmaceutical Technology, Fuzhou University, Fuzhou 350116, People's Republic of China
10
Cai GW, Liu YB, Feng QJ, Liang RH, Zeng QS, Deng Y, Yang W. Semi-Supervised Segmentation of Interstitial Lung Disease Patterns from CT Images via Self-Training with Selective Re-Training. Bioengineering (Basel) 2023; 10:830. [PMID: 37508857] [PMCID: PMC10375953] [DOI: 10.3390/bioengineering10070830]
Abstract
Accurate segmentation of interstitial lung disease (ILD) patterns from computed tomography (CT) images is an essential prerequisite to treatment and follow-up. However, it is highly time-consuming for radiologists to segment ILD patterns pixel by pixel from CT scans with hundreds of slices. Consequently, it is hard to obtain large amounts of well-annotated data, which poses a huge challenge for data-driven deep learning-based methods. To alleviate this problem, we propose an end-to-end semi-supervised learning framework for the segmentation of ILD patterns (ESSegILD) from CT images via self-training with selective re-training. The proposed ESSegILD model is trained using a large CT dataset with slice-wise sparse annotations, i.e., only a few slices in each CT volume are labeled with ILD patterns. Specifically, we adopt a popular semi-supervised framework, Mean-Teacher, which consists of a teacher model and a student model and uses consistency regularization to encourage consistent outputs from the two models under different perturbations. Furthermore, we introduce a recent self-training technique with a selective re-training strategy that selects reliable pseudo-labels generated by the teacher model; these are used to expand the training samples and promote the student model during iterative training. By leveraging consistency regularization and self-training with selective re-training, ESSegILD can effectively utilize unlabeled data from a partially annotated dataset to progressively improve segmentation performance. Experiments were conducted on a dataset of 67 pneumonia patients with incomplete annotations, containing over 11,000 CT images with eight different ILD patterns; the results indicate that the proposed method is superior to state-of-the-art methods.
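Editorial note: the Mean-Teacher scheme described in this abstract reduces to two small operations, an exponential-moving-average (EMA) weight update for the teacher and a consistency loss between the two models' outputs. A minimal numpy sketch, not the authors' implementation (function names and the MSE choice of consistency loss are illustrative assumptions):

```python
import numpy as np

def ema_update(teacher_w, student_w, alpha=0.99):
    """Mean-Teacher EMA step: teacher weights slowly track the student's."""
    return {k: alpha * teacher_w[k] + (1 - alpha) * student_w[k] for k in teacher_w}

def consistency_loss(student_probs, teacher_probs):
    """Consistency regularization: penalize disagreement between the two
    models' predictions under different input perturbations (MSE here)."""
    return float(np.mean((student_probs - teacher_probs) ** 2))
```

In self-training with selective re-training, only teacher predictions whose confidence passes a threshold would be kept as pseudo-labels for the student's next iteration.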
Affiliation(s)
- Guang-Wei Cai
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Yun-Bi Liu
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Qian-Jin Feng
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Rui-Hong Liang
- Department of Medical Imaging Center, Nanfang Hospital of Southern Medical University, Guangzhou 510515, China
- Qing-Si Zeng
- Department of Radiology, The First Affiliated Hospital of Guangzhou Medical University, Guangzhou 510120, China
- Yu Deng
- Department of Radiology, The First Affiliated Hospital of Guangzhou Medical University, Guangzhou 510120, China
- Wei Yang
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
11
Huang Y, Jiao J, Yu J, Zheng Y, Wang Y. RsALUNet: A reinforcement supervision U-Net-based framework for multi-ROI segmentation of medical images. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104743]
12
Wu M, Cui G, Lv S, Chen L, Tian Z, Yang M, Bai W. Deep convolutional neural networks for multiple histologic types of ovarian tumors classification in ultrasound images. Front Oncol 2023; 13:1154200. [PMID: 37427129] [PMCID: PMC10326903] [DOI: 10.3389/fonc.2023.1154200]
Abstract
Objective: This study aimed to evaluate and validate the performance of deep convolutional neural networks in discriminating different histologic types of ovarian tumor in ultrasound (US) images. Material and methods: Our retrospective study used 1142 US images from 328 patients collected from January 2019 to June 2021. Two tasks were proposed based on US images. Task 1 was to classify benign tumors and high-grade serous carcinoma in original ovarian tumor US images, in which benign ovarian tumors were divided into six classes: mature cystic teratoma, endometriotic cyst, serous cystadenoma, granulosa-theca cell tumor, mucinous cystadenoma, and simple cyst. The US images in task 2 were segmented. Deep convolutional neural networks (DCNN) were applied to classify the different types of ovarian tumors in detail. We used transfer learning on six pre-trained DCNNs: VGG16, GoogleNet, ResNet34, ResNext50, DenseNet121, and DenseNet201. Several metrics were adopted to assess model performance: accuracy, sensitivity, specificity, F1-score, and the area under the receiver operating characteristic curve (AUC). Results: The DCNN performed better on the segmented US images than on the original US images. The best predictive performance came from the ResNext50 model, which had an overall accuracy of 0.952 in directly classifying the seven histologic types of ovarian tumors. It achieved a sensitivity of 90% and a specificity of 99.2% for high-grade serous carcinoma, and a sensitivity of over 90% and a specificity of over 95% in most benign pathological categories. Conclusion: DCNNs are a promising technique for classifying different histologic types of ovarian tumors in US images and can provide valuable computer-aided information.
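Editorial note: the per-class sensitivity and specificity figures quoted above follow directly from confusion-matrix counts. A small reference sketch of those definitions (illustrative helper, not the study's evaluation code):

```python
def binary_metrics(tp, fp, tn, fn):
    """Sensitivity (recall), specificity, and accuracy from
    confusion-matrix counts for one class vs. the rest."""
    sensitivity = tp / (tp + fn)          # fraction of positives caught
    specificity = tn / (tn + fp)          # fraction of negatives rejected
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy
```

For a seven-class problem, each class is scored one-vs-rest this way and AUC is computed from the class's predicted probabilities.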
Affiliation(s)
- Meijing Wu
- The Department of Gynecology and Obstetrics, Beijing Shijitan Hospital, Capital Medical University, Beijing, China
- Guangxia Cui
- The Department of Gynecology and Obstetrics, Beijing Shijitan Hospital, Capital Medical University, Beijing, China
- Shuchang Lv
- The Department of Electronics and Information Engineering, Beihang University, Beijing, China
- Lijiang Chen
- The Department of Electronics and Information Engineering, Beihang University, Beijing, China
- Zongmei Tian
- The Department of Gynecology and Obstetrics, Beijing Shijitan Hospital, Capital Medical University, Beijing, China
- Min Yang
- The Department of Gynecology and Obstetrics, Beijing Shijitan Hospital, Capital Medical University, Beijing, China
- Wenpei Bai
- The Department of Gynecology and Obstetrics, Beijing Shijitan Hospital, Capital Medical University, Beijing, China
13
Haubold J, Zeng K, Farhand S, Stalke S, Steinberg H, Bos D, Meetschen M, Kureishi A, Zensen S, Goeser T, Maier S, Forsting M, Nensa F. AI co-pilot: content-based image retrieval for the reading of rare diseases in chest CT. Sci Rep 2023; 13:4336. [PMID: 36928759] [PMCID: PMC10020154] [DOI: 10.1038/s41598-023-29949-3]
Abstract
The aim of the study was to evaluate the impact of the newly developed Similar patient search (SPS) Web Service, which supports reading complex lung diseases in computed tomography (CT), on the diagnostic accuracy of residents. SPS is an image-based search engine for pre-diagnosed cases along with related clinical reference content (https://eref.thieme.de). The reference database was constructed using 13,658 annotated regions of interest (ROIs) from 621 patients, comprising 69 lung diseases. For validation, 50 CT scans were evaluated by five radiology residents without SPS, and three months later with SPS. The residents could give a maximum of three diagnoses per case. A maximum of 3 points was achieved if the correct diagnosis without any additional diagnoses was provided. The residents achieved an average score of 17.6 ± 5.0 points without SPS. By using SPS, the residents increased their score by 81.8% to 32.0 ± 9.5 points. The improvement of the score per case was highly significant (p = 0.0001). The residents required an average of 205.9 ± 350.6 s per case (21.9% increase) when SPS was used. However, in the second half of the cases, after the residents became more familiar with SPS, this increase dropped to 7%. Residents' average score in reading complex chest CT scans improved by 81.8% when the AI-driven SPS with integrated clinical reference content was used. The increase in time per case due to the use of the SPS was minimal.
Affiliation(s)
- Johannes Haubold
- Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
- Institute of Artificial Intelligence in Medicine, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
- Ke Zeng
- Siemens Medical Solutions Inc., Malvern, PA, USA
- Hannah Steinberg
- Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
- Denise Bos
- Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
- Mathias Meetschen
- Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
- Anisa Kureishi
- Institute of Artificial Intelligence in Medicine, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
- Sebastian Zensen
- Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
- Tim Goeser
- Department of Radiology and Neuroradiology, Kliniken Maria Hilf, Viersener Str. 450, 41063, Mönchengladbach, NRW, Germany
- Sandra Maier
- Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
- Michael Forsting
- Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
- Felix Nensa
- Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
- Institute of Artificial Intelligence in Medicine, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
14
Zhang G, Luo L, Zhang L, Liu Z. Research Progress of Respiratory Disease and Idiopathic Pulmonary Fibrosis Based on Artificial Intelligence. Diagnostics (Basel) 2023; 13:357. [PMID: 36766460] [PMCID: PMC9914063] [DOI: 10.3390/diagnostics13030357]
Abstract
Machine learning (ML) comprises algorithms that learn patterns from previously observed data through classification, prediction, and optimization in order to accomplish specific tasks. In recent years, ML has developed rapidly in medicine, including lung imaging analysis, intensive care monitoring, mechanical ventilation and prediction of the need for intubation, pulmonary function evaluation and prediction, and biological signal monitoring in obstructive sleep apnea. ML performs well and is a tool of great potential, especially in the imaging diagnosis of interstitial lung disease. Idiopathic pulmonary fibrosis (IPF) is a major problem in the treatment of respiratory diseases: abnormal proliferation of fibroblasts leads to lung tissue destruction. Diagnosis depends mainly on early imaging detection, and early treatment can effectively prolong patients' lives. If computers can assist in assessing imaging findings related to fibrosis, timely diagnosis of such diseases will be of great value to both doctors and patients. We also previously proposed a machine learning model that provides good clinical guidance for early imaging-based prediction of idiopathic pulmonary fibrosis. At present, AI and machine learning have great potential to transform many aspects of respiratory medicine and are a focus and hotspot of research. AI needs to become an invisible, seamless, and impartial auxiliary tool that helps patients and doctors make better decisions in an efficient, effective, and acceptable way. The purpose of this paper is to review current applications of machine learning across respiratory diseases, in the hope of providing help and guidance for clinicians applying these algorithm models.
Affiliation(s)
- Gerui Zhang
- Department of Critical Care Unit, The First Affiliated Hospital of Dalian Medical University, 222, Zhongshan Road, Dalian 116011, China
- Lin Luo
- Department of Critical Care Unit, The Second Hospital of Dalian Medical University, 467 Zhongshan Road, Shahekou District, Dalian 116023, China
- Limin Zhang
- Department of Respiratory, The First Affiliated Hospital of Dalian Medical University, 222, Zhongshan Road, Dalian 116011, China
- Zhuo Liu
- Department of Respiratory, The First Affiliated Hospital of Dalian Medical University, 222, Zhongshan Road, Dalian 116011, China
15
Yuan Y, Li C, Xu L, Zhu S, Hua Y, Zhang J. CSM-Net: Automatic joint segmentation of intima-media complex and lumen in carotid artery ultrasound images. Comput Biol Med 2022; 150:106119. [PMID: 37859275] [DOI: 10.1016/j.compbiomed.2022.106119]
Abstract
The intima-media thickness (IMT) is an effective biomarker for atherosclerosis, which is commonly measured by ultrasound. However, intima-media complex (IMC) segmentation for the IMT is challenging due to confused IMC boundaries and various noises. In this paper, we propose a flexible method, CSM-Net, for the joint segmentation of IMC and lumen in carotid ultrasound images. Firstly, cascaded dilated convolutions combined with a squeeze-excitation module are introduced to exploit more contextual features on the highest-level layer of the encoder. Furthermore, a triple spatial attention module is utilized to emphasize serviceable features on each decoder layer. Besides, a multi-scale weighted hybrid loss function is employed to resolve class-imbalance issues. The experiments are conducted on a private dataset of 100 images for IMC and lumen segmentation, as well as on two public datasets of 1600 images for IMC segmentation. For the private dataset, our method obtains IMC Dice, Lumen Dice, Precision, Recall, and F1 scores of 0.814 ± 0.061, 0.941 ± 0.024, 0.911 ± 0.044, 0.916 ± 0.039, and 0.913 ± 0.027, respectively. For the public datasets, we obtain IMC Dice, Precision, Recall, and F1 scores of 0.885 ± 0.067, 0.885 ± 0.070, 0.894 ± 0.089, and 0.885 ± 0.067, respectively. The results demonstrate that the proposed method outperforms several cutting-edge methods, and the ablation experiments show the validity of each module. The proposed method may be useful for IMC segmentation of carotid ultrasound images in the clinic. Our code is publicly available at https://github.com/yuanyc798/US-IMC-code.
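Editorial note: the four segmentation metrics reported above are all derived from pixel-level overlap between a predicted mask and the ground truth; note also that for binary masks Dice and F1 coincide, which is why the paper's public-dataset Dice and F1 values match. A minimal numpy sketch (illustrative, not the authors' evaluation code):

```python
import numpy as np

def seg_metrics(pred, gt):
    """Dice, precision, recall, and F1 for a pair of binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()      # overlapping foreground pixels
    dice = 2 * tp / (pred.sum() + gt.sum())
    precision = tp / pred.sum()              # predicted foreground that is correct
    recall = tp / gt.sum()                   # true foreground that is recovered
    f1 = 2 * precision * recall / (precision + recall)
    return dice, precision, recall, f1
```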
Affiliation(s)
- Yanchao Yuan
- School of Biological Science and Medical Engineering, Beihang University, Beijing, China; Hefei Innovation Research Institute, Beihang University, Hefei, China; Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing, China
- Cancheng Li
- School of Biological Science and Medical Engineering, Beihang University, Beijing, China; Hefei Innovation Research Institute, Beihang University, Hefei, China; Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing, China
- Lu Xu
- School of Biological Science and Medical Engineering, Beihang University, Beijing, China; Hefei Innovation Research Institute, Beihang University, Hefei, China; Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing, China
- Shangming Zhu
- School of Biological Science and Medical Engineering, Beihang University, Beijing, China; Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing, China
- Yang Hua
- Department of Vascular Ultrasonography, XuanWu Hospital, Capital Medical University, Beijing, China; Beijing Diagnostic Center of Vascular Ultrasound, Beijing, China; Center of Vascular Ultrasonography, Beijing Institute of Brain Disorders, Collaborative Innovation Center for Brain Disorders, Capital Medical University, Beijing, China
- Jicong Zhang
- School of Biological Science and Medical Engineering, Beihang University, Beijing, China; Hefei Innovation Research Institute, Beihang University, Hefei, China; Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing, China
16
Zhang Z, Liu N, Guo Z, Jiao L, Fenster A, Jin W, Zhang Y, Chen J, Yan C, Gou S. Ageing and degeneration analysis using ageing-related dynamic attention on lateral cephalometric radiographs. NPJ Digit Med 2022; 5:151. [PMID: 36168038] [PMCID: PMC9515216] [DOI: 10.1038/s41746-022-00681-y]
Abstract
With the ageing of the world's population, studies of the ageing and degeneration of physiological characteristics in human skin, bones, and muscles have become important topics. Research on the ageing of bones, especially the skull, has received much attention in recent years. In this study, a novel deep learning method representing ageing-related dynamic attention (ARDA) is proposed. The proposed method can quantitatively display the ageing salience of the bones and their change patterns with age on lateral cephalometric radiograph (LCR) images containing the craniofacial region and cervical spine. An age-estimation deep learning model based on 14,142 LCR images from individuals aged 4 to 40 years is trained to extract ageing-related features, and based on these features ageing salience maps are generated by the Grad-CAM method. All ageing salience maps for the same age are merged into an ARDA map corresponding to that age. Ageing salience maps show that ARDA is mainly concentrated in three regions of LCR images: the teeth, the craniofacial region, and the cervical spine. Furthermore, the dynamic distribution of ARDA at different ages and across instances in LCR images is quantitatively analyzed. Experimental results on 3014 cases show that ARDA can accurately reflect development and degeneration patterns in LCR images.
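Editorial note: the salience maps here come from Grad-CAM, which weights each convolutional feature map by the mean gradient of the target score with respect to it, sums, and applies a ReLU; per-age maps are then merged into an ARDA map. A minimal numpy sketch of those two steps (illustrative assumptions: gradients and activations already extracted; merging shown as a simple mean):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM: per-channel importance = mean gradient; weighted sum of
    activations, ReLU, then normalize. Inputs are (C, H, W) arrays."""
    weights = gradients.mean(axis=(1, 2))             # one weight per channel
    cam = np.tensordot(weights, activations, axes=1)  # weighted channel sum -> (H, W)
    cam = np.maximum(cam, 0)                          # keep positive evidence only
    return cam / cam.max() if cam.max() > 0 else cam

def ageing_salience_map(cams):
    """Merge all salience maps for one age into a single ARDA-style map."""
    return np.mean(cams, axis=0)
```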
Affiliation(s)
- Zhiyong Zhang
- Key Laboratory of Shaanxi Province for Craniofacial Precision Medicine Research, College of Stomatology, Xi'an Jiaotong University, Xi'an, 710004, Shaanxi, China
- College of Forensic Medicine, Xi'an Jiaotong University Health Science Center, Xi'an, 710061, Shaanxi, China
- Department of Orthodontics, the Affiliated Stomatological Hospital of Xi'an Jiaotong University, Xi'an, 710004, Shaanxi, China
- Ningtao Liu
- Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, School of Artificial Intelligence, Xidian University, Xi'an, 710071, Shaanxi, China
- Robarts Research Institute, Western University, London, N6A 3K7, ON, Canada
- Zhang Guo
- Academy of Advanced Interdisciplinary Research, Xidian University, Xi'an, 710071, Shaanxi, China
- Licheng Jiao
- Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, School of Artificial Intelligence, Xidian University, Xi'an, 710071, Shaanxi, China
- Aaron Fenster
- Robarts Research Institute, Western University, London, N6A 3K7, ON, Canada
- Wenfan Jin
- Department of Radiology, the Affiliated Stomatological Hospital of Xi'an Jiaotong University, Xi'an, 710004, Shaanxi, China
- Yuxiang Zhang
- College of Forensic Medicine, Xi'an Jiaotong University Health Science Center, Xi'an, 710061, Shaanxi, China
- Jie Chen
- College of Forensic Medicine, Xi'an Jiaotong University Health Science Center, Xi'an, 710061, Shaanxi, China
- Chunxia Yan
- College of Forensic Medicine, Xi'an Jiaotong University Health Science Center, Xi'an, 710061, Shaanxi, China
- Shuiping Gou
- Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, School of Artificial Intelligence, Xidian University, Xi'an, 710071, Shaanxi, China
17
Najeeb S, Bhuiyan MIH. Spatial feature fusion in 3D convolutional autoencoders for lung tumor segmentation from 3D CT images. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103996]
18
Helen Sulochana C, Praylin Selva Blessy SA. Interstitial lung disease detection using template matching combined sparse coding and blended multi class support vector machine. Proc Inst Mech Eng H 2022; 236:1492-1501. [DOI: 10.1177/09544119221113722]
Abstract
Interstitial lung disease (ILD), a collection of disorders, is considered among the deadliest lung conditions and substantially increases mortality. In this paper, an automated scheme for detection and classification of ILD patterns is presented, which eliminates the low inter-class and high intra-class feature variation in patterns caused by translation and illumination effects. A novel and efficient feature extraction method named Template-Matching Combined Sparse Coding (TMCSC) is proposed, which extracts features invariant to translation and illumination effects from defined regions of interest (ROIs) within the lung parenchyma. Each translated image patch is compared with all possible templates of the image using a template matching process. The corresponding sparse matrix for the set of translated image patches and their nearest templates is obtained by minimizing the objective function of the similarity matrix of the translated image patch and the template. A novel Blended Multi-Class Support Vector Machine (B-MCSVM) is designed to tackle high intra-class feature variation, providing improved classification accuracy. ROIs of five lung tissue patterns (healthy, emphysema, ground glass, micronodule, and fibrosis), selected from an internal multimedia database containing high-resolution computed tomography (HRCT) image series, are identified and utilized in this work. The performance of the proposed scheme outperforms most state-of-the-art multi-class classification algorithms.
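Editorial note: the template-matching step described above pairs each translated patch with its nearest template. A common, illumination-robust way to score such pairs is zero-mean normalized cross-correlation; the sketch below illustrates that matching step only (the function and scoring choice are illustrative assumptions, not the paper's exact TMCSC objective):

```python
import numpy as np

def best_template(patch, templates):
    """Return the index of the template most similar to the patch, scored by
    zero-mean normalized cross-correlation (invariant to brightness shifts)."""
    def ncc(a, b):
        a = a - a.mean()                   # remove illumination offset
        b = b - b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        return (a * b).sum() / denom if denom > 0 else 0.0
    scores = [ncc(patch, t) for t in templates]
    return int(np.argmax(scores)), scores
```

A patch and a brightness-shifted copy of it score 1.0, which is the invariance the abstract relies on.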
Affiliation(s)
- C Helen Sulochana
- St. Xaviers Catholic College of Engineering, Chunkankadai, Tamil Nadu, India
19
Li Y, Zhang Y, Cui W, Lei B, Kuang X, Zhang T. Dual Encoder-Based Dynamic-Channel Graph Convolutional Network With Edge Enhancement for Retinal Vessel Segmentation. IEEE Trans Med Imaging 2022; 41:1975-1989. [PMID: 35167444] [DOI: 10.1109/tmi.2022.3151666]
Abstract
Retinal vessel segmentation with deep learning technology is a crucial auxiliary method for clinicians to diagnose fundus diseases. However, deep learning approaches inevitably lose edge information, which contains spatial features of vessels, when performing down-sampling, leading to limited segmentation performance on fine blood vessels. Furthermore, existing methods ignore the dynamic topological correlations among feature maps in the deep learning framework, resulting in inefficient capture of channel characteristics. To address these limitations, we propose a novel dual encoder-based dynamic-channel graph convolutional network with edge enhancement (DE-DCGCN-EE) for retinal vessel segmentation. Specifically, we first design an edge detection-based dual encoder to preserve vessel edges during down-sampling. Secondly, we investigate a dynamic-channel graph convolutional network that maps image channels to a topological space and synthesizes the features of each channel on the topological map, which addresses the insufficient utilization of channel information. Finally, we study an edge enhancement block that fuses the edge and spatial features in the dual encoder, which is beneficial for improving the accuracy of fine blood vessel segmentation. Competitive experimental results on five retinal image datasets validate the efficacy of the proposed DE-DCGCN-EE, which achieves more remarkable segmentation results than other state-of-the-art methods, indicating its potential for clinical application.
20
Pasupathy V, Khilar R. Advancements in deep structured learning based medical image interpretation. Journal of Information & Optimization Sciences 2022. [DOI: 10.1080/02522667.2022.2094550]
Affiliation(s)
- Vijayalakshmi Pasupathy
- Department of Computer Science and Engineering, Panimalar Engineering College, Chennai, Tamil Nadu, India
- Rashmita Khilar
- Department of Information Technology, Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Chennai, Tamil Nadu, India
21
Shanker RRBJ, Zhang MH, Ginat DT. Semantic Segmentation of Extraocular Muscles on Computed Tomography Images Using Convolutional Neural Networks. Diagnostics (Basel) 2022; 12:1553. [PMID: 35885459] [PMCID: PMC9325103] [DOI: 10.3390/diagnostics12071553]
Abstract
Computed tomography (CT) imaging of the orbit with measurement of extraocular muscle size can be useful for diagnosing and monitoring conditions that affect extraocular muscles. However, manual measurement of extraocular muscle size can be time-consuming and tedious. The purpose of this study is to evaluate the effectiveness of deep learning algorithms in segmenting extraocular muscles and measuring muscle sizes from CT images. Consecutive CT scans of orbits from 210 patients between 1 January 2010 and 31 December 2019 were used. Extraocular muscles were manually annotated in the studies, which were then used to train the deep learning algorithms. The proposed U-net algorithm can segment extraocular muscles on coronal slices of 32 test samples with an average Dice score of 0.92. The thickness and area measurements from predicted segmentations had a mean absolute error (MAE) of 0.35 mm and 3.87 mm2, respectively, with corresponding mean absolute percentage errors (MAPE) of 7% and 9%. On qualitative analysis of the 32 test samples, 30 predicted segmentations from the U-net algorithm were accepted while 2 were rejected. Based on the quantitative and qualitative evaluation, this study demonstrates that CNN-based deep learning algorithms are effective at segmenting extraocular muscles and measuring muscle sizes.
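Editorial note: the MAE and MAPE figures quoted for the thickness and area measurements are straightforward to reproduce given paired predicted and reference measurements. A small sketch of both definitions (illustrative helper, not the study's code):

```python
import numpy as np

def mae_mape(pred, true):
    """Mean absolute error (same units as the measurement) and
    mean absolute percentage error (in %)."""
    pred, true = np.asarray(pred, float), np.asarray(true, float)
    mae = np.abs(pred - true).mean()
    mape = 100.0 * np.abs((pred - true) / true).mean()
    return mae, mape
```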
Affiliation(s)
- Michael H. Zhang
- Department of Radiology, University of Chicago, Chicago, IL 60615, USA; (R.R.B.J.S.); (M.H.Z.)
- Daniel T. Ginat
- Department of Radiology, Section of Neuroradiology, University of Chicago, Chicago, IL 60615, USA
22
Furukawa T, Oyama S, Yokota H, Kondoh Y, Kataoka K, Johkoh T, Fukuoka J, Hashimoto N, Sakamoto K, Shiratori Y, Hasegawa Y. A comprehensible machine learning tool to differentially diagnose idiopathic pulmonary fibrosis from other chronic interstitial lung diseases. Respirology 2022; 27:739-746. [PMID: 35697345] [DOI: 10.1111/resp.14310]
Abstract
BACKGROUND AND OBJECTIVE Idiopathic pulmonary fibrosis (IPF) has poor prognosis, and the multidisciplinary diagnostic agreement is low. Moreover, surgical lung biopsies pose comorbidity risks. Therefore, using data from non-invasive tests usually employed to assess interstitial lung diseases (ILDs), we aimed to develop an automated algorithm combining deep learning and machine learning that would be capable of detecting and differentiating IPF from other ILDs. METHODS We retrospectively analysed consecutive patients presenting with ILD between April 2007 and July 2017. Deep learning was used for semantic image segmentation of HRCT based on the corresponding labelled images. A diagnostic algorithm was then trained using the semantic results and non-invasive findings. Diagnostic accuracy was assessed using five-fold cross-validation. RESULTS In total, 646,800 HRCT images and the corresponding labelled images were acquired from 1068 patients with ILD, of whom 42.7% had IPF. The average segmentation accuracy was 96.1%. The machine learning algorithm had an average diagnostic accuracy of 83.6%, with high sensitivity, specificity and kappa coefficient values (80.7%, 85.8% and 0.665, respectively). Using Cox hazard analysis, IPF diagnosed using this algorithm was a significant prognostic factor (hazard ratio, 2.593; 95% CI, 2.069-3.250; p < 0.001). Diagnostic accuracy was good even in patients with usual interstitial pneumonia patterns on HRCT and those with surgical lung biopsies. CONCLUSION Using data from non-invasive examinations, the combined deep learning and machine learning algorithm accurately, easily and quickly diagnosed IPF in a population with various ILDs.
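Editorial note: the diagnostic accuracy above was estimated with five-fold cross-validation: the cohort is split into five folds, and each fold in turn is held out for testing while the model trains on the rest. A generic numpy sketch of that procedure (helper names and the pluggable `fit`/`predict` interface are illustrative, not the authors' pipeline):

```python
import numpy as np

def kfold_accuracy(X, y, fit, predict, k=5, seed=0):
    """Five-fold cross-validated accuracy: train on k-1 folds,
    score on the held-out fold, average over the k rotations."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))          # shuffle before splitting
    folds = np.array_split(idx, k)
    accs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        model = fit(X[train], y[train])
        accs.append(np.mean(predict(model, X[test]) == y[test]))
    return float(np.mean(accs))
```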
Affiliation(s)
- Taiki Furukawa
- Department of Respiratory Medicine, Nagoya University Graduate School of Medicine, Nagoya, Japan; Image Processing Research Team, RIKEN Center for Advanced Photonics, Wako, Japan; Medical IT Center, Nagoya University Hospital, Nagoya, Japan
- Shintaro Oyama
- Image Processing Research Team, RIKEN Center for Advanced Photonics, Wako, Japan; Medical IT Center, Nagoya University Hospital, Nagoya, Japan
- Hideo Yokota
- Image Processing Research Team, RIKEN Center for Advanced Photonics, Wako, Japan; Advanced Data Science Project, Information R&D and Strategy Headquarters, RIKEN, Wako, Japan
- Yasuhiro Kondoh
- Department of Respiratory Medicine and Allergy, Tosei General Hospital, Seto, Japan
- Kensuke Kataoka
- Department of Respiratory Medicine and Allergy, Tosei General Hospital, Seto, Japan
- Takeshi Johkoh
- Department of Radiology, Kansai Rosai Hospital, Amagasaki, Japan
- Junya Fukuoka
- Department of Pathology, Graduate School of Biomedical Sciences, Nagasaki University, Nagasaki, Japan
- Naozumi Hashimoto
- Department of Respiratory Medicine, Nagoya University Graduate School of Medicine, Nagoya, Japan
- Koji Sakamoto
- Department of Respiratory Medicine, Nagoya University Graduate School of Medicine, Nagoya, Japan
- Yoshinori Hasegawa
- Nagoya Medical Center, National Hospitalization Organization, Nagoya, Japan
23
Song L, Liu X, Chen S, Liu S, Liu X, Muhammad K, Bhattacharyya S. A deep fuzzy model for diagnosis of COVID-19 from CT images. Appl Soft Comput 2022; 122:108883. [PMID: 35474916 PMCID: PMC9027534 DOI: 10.1016/j.asoc.2022.108883] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/19/2021] [Revised: 03/31/2022] [Accepted: 04/13/2022] [Indexed: 01/26/2023]
Abstract
From early 2020, a novel coronavirus pneumonia (COVID-19) spread worldwide at an extremely fast pace and, due to the magnitude of its harm, became a major global public health event. Faced with a dramatic increase in the number of patients with COVID-19, the need for quick diagnosis of suspected cases has become particularly critical. Therefore, this paper constructs a fuzzy classifier that detects infected subjects by analyzing the CT images of suspected patients. First, a deep learning algorithm is used to extract the low-level features of CT images in the COVID-CT dataset. Subsequently, we analyze the extracted feature information with an attribute reduction algorithm to obtain highly discriminative features. Then, some key features are selected as input to train the fuzzy diagnosis model. Finally, several images in the dataset are used as the test set to evaluate the trained fuzzy classifier. The obtained accuracy rate is 94.2%, and the F1-score is 93.8%. Experimental results show that, compared with the deep learning diagnosis methods widely used in medical image analysis, the proposed fuzzy model improves the accuracy and efficiency of diagnosis, which consequently helps to curb the spread of COVID-19.
Affiliation(s)
- Liping Song
- Hunan Provincial Key Laboratory of Intelligent Computing and Language Information Processing, College of Information Science and Engineering, China
- Hunan Xiangjiang Artificial Intelligence Academy; Hunan Normal University, Changsha, 410000, China
- Xinyu Liu
- Hunan Provincial Key Laboratory of Intelligent Computing and Language Information Processing, College of Information Science and Engineering, China
- Hunan Xiangjiang Artificial Intelligence Academy; Hunan Normal University, Changsha, 410000, China
- Shuqi Chen
- Hunan Provincial Key Laboratory of Intelligent Computing and Language Information Processing, College of Information Science and Engineering, China
- Hunan Xiangjiang Artificial Intelligence Academy; Hunan Normal University, Changsha, 410000, China
- Shuai Liu
- Hunan Provincial Key Laboratory of Intelligent Computing and Language Information Processing, College of Information Science and Engineering, China
- Hunan Xiangjiang Artificial Intelligence Academy; Hunan Normal University, Changsha, 410000, China
- Xiangbin Liu
- Hunan Provincial Key Laboratory of Intelligent Computing and Language Information Processing, College of Information Science and Engineering, China
- Hunan Xiangjiang Artificial Intelligence Academy; Hunan Normal University, Changsha, 410000, China
- Khan Muhammad
- Visual Analytics for Knowledge Laboratory (VIS2KNOW Lab), Department of Applied Artificial Intelligence, School of Convergence, College of Computing and Informatics, Sungkyunkwan University, Seoul 03063, Republic of Korea
24
Astley JR, Wild JM, Tahir BA. Deep learning in structural and functional lung image analysis. Br J Radiol 2022; 95:20201107. [PMID: 33877878 PMCID: PMC9153705 DOI: 10.1259/bjr.20201107] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/16/2022] Open
Abstract
The recent resurgence of deep learning (DL) has dramatically influenced the medical imaging field. Medical image analysis applications have been at the forefront of DL research efforts applied to multiple diseases and organs, including those of the lungs. The aims of this review are twofold: (i) to briefly overview DL theory as it relates to lung image analysis; (ii) to systematically review the DL research literature relating to the lung image analysis applications of segmentation, reconstruction, registration and synthesis. The review was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. 479 studies were initially identified from the literature search with 82 studies meeting the eligibility criteria. Segmentation was the most common lung image analysis DL application (65.9% of papers reviewed). DL has shown impressive results when applied to segmentation of the whole lung and other pulmonary structures. DL has also shown great potential for applications in image registration, reconstruction and synthesis. However, the majority of published studies have been limited to structural lung imaging with only 12.9% of reviewed studies employing functional lung imaging modalities, thus highlighting significant opportunities for further research in this field. Although the field of DL in lung image analysis is rapidly expanding, concerns over inconsistent validation and evaluation strategies, intersite generalisability, transparency of methodological detail and interpretability need to be addressed before widespread adoption in clinical lung imaging workflow.
Affiliation(s)
- Jim M Wild
- Department of Oncology and Metabolism, The University of Sheffield, Sheffield, United Kingdom
25
Farheen F, Shamil MS, Ibtehaz N, Rahman MS. Revisiting segmentation of lung tumors from CT images. Comput Biol Med 2022; 144:105385. [PMID: 35299044 DOI: 10.1016/j.compbiomed.2022.105385] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2021] [Revised: 03/02/2022] [Accepted: 03/03/2022] [Indexed: 12/24/2022]
Abstract
Lung cancer is a leading cause of death throughout the world. Because the prompt diagnosis of tumors allows oncologists to discern their nature, type, and mode of treatment, tumor detection and segmentation from CT scan images is a crucial field of study. This paper investigates lung tumor segmentation via a two-dimensional Discrete Wavelet Transform (DWT) on the LOTUS dataset (31,247 training, and 4458 testing samples) and a Deeply Supervised MultiResUNet model. Coupling the DWT, which is used to achieve a more meticulous textural analysis while integrating information from neighboring CT slices, with the deep supervision of the model architecture results in an improved dice coefficient of 0.8472. A key characteristic of our approach is its avoidance of 3D kernels (despite being used for a 3D segmentation task), thereby making it quite lightweight.
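The Dice coefficient reported in this entry, and the Jaccard index used by other studies in this list, are overlap measures related by DSC = 2J/(1 + J). A minimal illustrative sketch for binary masks follows (this is not the authors' code; the epsilon term is a common numerical-stability convention):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def jaccard_index(pred, target, eps=1e-7):
    """Jaccard index (intersection over union) between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (intersection + eps) / (union + eps)
```

Because Dice weights the intersection twice, it is always at least as large as the Jaccard index for the same pair of masks, which is worth remembering when comparing scores across the papers collected here.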
Affiliation(s)
- Farhanaz Farheen
- Department of CSE, BUET, ECE Building, West Palashi, Dhaka, 1230, Bangladesh; Department of CSE, United International University, Dhaka, Bangladesh
- Md Salman Shamil
- Department of CSE, BUET, ECE Building, West Palashi, Dhaka, 1230, Bangladesh; Department of CSE, United International University, Dhaka, Bangladesh
- Nabil Ibtehaz
- Department of Computer Science, Purdue University, West Lafayette, IN, United States
- M Sohel Rahman
- Department of CSE, BUET, ECE Building, West Palashi, Dhaka, 1230, Bangladesh
26
Semantic segmentation of COVID-19 lesions with a multiscale dilated convolutional network. Sci Rep 2022; 12:1847. [PMID: 35115573 PMCID: PMC8814191 DOI: 10.1038/s41598-022-05527-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2021] [Accepted: 01/12/2022] [Indexed: 11/09/2022] Open
Abstract
Automatic segmentation of infected lesions from computed tomography (CT) of COVID-19 patients is crucial for accurate diagnosis and follow-up assessment. The remaining challenges are the obvious scale difference between different types of COVID-19 lesions and the similarity between the lesions and normal tissues. This work aims to correctly segment lesions of different scales and lesion boundaries by utilizing multiscale and multilevel features. A novel multiscale dilated convolutional network (MSDC-Net) is proposed to address the scale difference of lesions and the low contrast between lesions and normal tissues in CT images. In our MSDC-Net, we propose a multiscale feature capture block (MSFCB) to effectively capture multiscale features for better segmentation of lesions at different scales. Furthermore, a multilevel feature aggregate (MLFA) module is proposed to reduce the information loss in the downsampling process. Experiments on the publicly available COVID-19 CT Segmentation dataset demonstrate that the proposed MSDC-Net is superior to other existing methods in segmenting lesion boundaries and large, medium, and small lesions, achieving the best Dice similarity coefficient, sensitivity and mean intersection-over-union (mIoU) scores of 82.4%, 81.1% and 78.2%, respectively. Compared with other methods, the proposed model has an average improvement of 10.6% and 11.8% on Dice and mIoU. Compared with existing methods, our network achieves more accurate segmentation of lesions at various scales and of lesion boundaries, which will facilitate further clinical analysis. In the future, we plan to integrate automatic detection and segmentation of COVID-19 and to develop an automatic COVID-19 diagnosis system.
27
Soffer S, Morgenthau AS, Shimon O, Barash Y, Konen E, Glicksberg BS, Klang E. Artificial Intelligence for Interstitial Lung Disease Analysis on Chest Computed Tomography: A Systematic Review. Acad Radiol 2022; 29 Suppl 2:S226-S235. [PMID: 34219012 DOI: 10.1016/j.acra.2021.05.014] [Citation(s) in RCA: 26] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2021] [Revised: 05/10/2021] [Accepted: 05/11/2021] [Indexed: 12/22/2022]
Abstract
RATIONALE AND OBJECTIVES High-resolution computed tomography (HRCT) is paramount in the assessment of interstitial lung disease (ILD). Yet, HRCT interpretation of ILDs may be hampered by inter- and intra-observer variability. Recently, artificial intelligence (AI) has revolutionized medical image analysis. This technology has the potential to advance patient care in ILD. We aimed to systematically evaluate the application of AI for the analysis of ILD in HRCT. MATERIALS AND METHODS We searched MEDLINE/PubMed databases for original publications of deep learning for ILD analysis on chest CT. The search included studies published up to March 1, 2021. The risk of bias evaluation included tailored Quality Assessment of Diagnostic Accuracy Studies and the modified Joanna Briggs Institute Critical Appraisal checklist. RESULTS Data were extracted from 19 retrospective studies. Deep learning techniques included detection, segmentation, and classification of ILD on HRCT. Most studies focused on the classification of ILD into different morphological patterns. Accuracies of 78%-91% were achieved. Two studies demonstrated near-expert performance for the diagnosis of idiopathic pulmonary fibrosis (IPF). The Quality Assessment of Diagnostic Accuracy Studies tool identified a high risk of bias in 15/19 (78.9%) of the studies. CONCLUSION AI has the potential to contribute to the radiologic diagnosis and classification of ILD. However, the accuracy performance is still not satisfactory, and research is limited by a small number of retrospective studies. Hence, the existing published data may not be sufficiently reliable. Only well-designed prospective controlled studies can accurately assess the value of existing AI tools for ILD evaluation.
28
Wang Q, Xue W, Zhang X, Jin F, Hahn J. S2FLNet: Hepatic steatosis detection network with body shape. Comput Biol Med 2022; 140:105088. [PMID: 34864582 PMCID: PMC9149137 DOI: 10.1016/j.compbiomed.2021.105088] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2021] [Revised: 11/25/2021] [Accepted: 11/26/2021] [Indexed: 12/18/2022]
Abstract
Fat accumulation in the liver cells can increase the risk of cardiac complications and cardiovascular disease mortality. Therefore, a way to quickly and accurately detect hepatic steatosis is critically important. However, current methods, e.g., liver biopsy, magnetic resonance imaging, and computerized tomography scan, are subject to high cost and/or medical complications. In this paper, we propose a deep neural network to estimate the degree of hepatic steatosis (low, mid, high) using only body shapes. The proposed network adopts dilated residual network blocks to extract refined features of input body shape maps by expanding the receptive field. Furthermore, to classify the degree of steatosis more accurately, we create a hybrid of the center loss and cross entropy loss to compact intra-class variations and separate inter-class differences. We performed extensive tests on the public medical dataset with various network parameters. Our experimental results show that the proposed network achieves a total accuracy of over 82% and offers an accurate and accessible assessment for hepatic steatosis.
Affiliation(s)
- Qiyue Wang
- Department of Computer Science, The George Washington University, USA
- Wu Xue
- Department of Statistics, The George Washington University, USA
- Xiaoke Zhang
- Department of Statistics, The George Washington University, USA
- Fang Jin
- Department of Statistics, The George Washington University, USA
- James Hahn
- Department of Computer Science, The George Washington University, USA
29
Suzuki Y, Kido S, Mabu S, Yanagawa M, Tomiyama N, Sato Y. Segmentation of Diffuse Lung Abnormality Patterns on Computed Tomography Images using Partially Supervised Learning. ADVANCED BIOMEDICAL ENGINEERING 2022. [DOI: 10.14326/abe.11.25] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022] Open
Affiliation(s)
- Yuki Suzuki
- Department of Artificial Intelligence Diagnostic Radiology, Osaka University Graduate School of Medicine
- Shoji Kido
- Department of Artificial Intelligence Diagnostic Radiology, Osaka University Graduate School of Medicine
- Shingo Mabu
- Graduate School of Sciences and Technology for Innovation, Yamaguchi University
- Masahiro Yanagawa
- Department of Diagnostic and Interventional Radiology, Osaka University Graduate School of Medicine
- Noriyuki Tomiyama
- Department of Diagnostic and Interventional Radiology, Osaka University Graduate School of Medicine
- Yoshinobu Sato
- Division of Information Science, Graduate School of Science and Technology, Nara Institute of Science and Technology
30
Byun S, Jung J, Hong H, Kim BS. Lung tumor segmentation using dual-coupling net with shape prior based on lung and mediastinal window images from chest CT images. JOURNAL OF X-RAY SCIENCE AND TECHNOLOGY 2022; 30:1067-1083. [PMID: 35988260 DOI: 10.3233/xst-221191] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
BACKGROUND Volumetric lung tumor segmentation is difficult due to the diversity of the sizes, locations and shapes of lung tumors, as well as the similarity in intensity with surrounding tissue structures. OBJECTIVE We propose a dual-coupling net for accurate lung tumor segmentation in chest CT images regardless of the sizes, locations and shapes of lung tumors. METHODS To extract shape information from lung tumors and use it as a shape prior, three-planar images including the axial, coronal, and sagittal planes are trained on 2D-Nets. Two types of window images, lung and mediastinal window images, are trained on 2D-Nets to distinguish lung tumors from the thoracic region and to better separate the boundaries of lung tumors from adjacent tissue structures. To prevent false-positive outliers extending into adjacent structures and to consider the spatial information of lung tumors, pairs of tumor volume-of-interest (VOI) and tumor shape prior are trained on a 3D-Net. RESULTS In the first experiment, the dual-coupling net had the highest Dice Similarity Coefficient (DSC) of 75.7%, considering the shape prior as well as mediastinal window images to prevent leakage into adjacent structures while maintaining the shape of the lung tumor, with DSCs 18.23, 3.7, 1.1, and 1.77 percentage points higher than the 2D-Net, 2.5D-Net, 3D-Net, and single-coupling net results, respectively. In the second experiment, with annotations from two clinicians, the dual-coupling net achieved DSCs of 67.73% and 65.07% against each annotation. In the third experiment, the dual-coupling net achieved a DSC of 70.97%. CONCLUSIONS The dual-coupling net enables accurate segmentation by distinguishing lung tumors from surrounding tissue structures and thus yields the highest DSC value.
Affiliation(s)
- Sohyun Byun
- Department of Software Convergence, Seoul Women's University, Seoul, Republic of Korea
- Julip Jung
- Department of Software Convergence, Seoul Women's University, Seoul, Republic of Korea
- Helen Hong
- Department of Software Convergence, Seoul Women's University, Seoul, Republic of Korea
31
Kumar A, Dhara AK, Thakur SB, Sadhu A, Nandi D. Special Convolutional Neural Network for Identification and Positioning of Interstitial Lung Disease Patterns in Computed Tomography Images. PATTERN RECOGNITION AND IMAGE ANALYSIS 2021. [PMCID: PMC8711684 DOI: 10.1134/s1054661821040027] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
Abstract
In this paper, automated detection of interstitial lung disease patterns in high-resolution computed tomography images is achieved by developing a faster region-based convolutional network (Faster R-CNN) detector with GoogLeNet as a backbone. GoogLeNet is simplified by removing a few inception modules and used as the backbone of the detector network. The proposed framework is developed to detect several interstitial lung disease patterns without performing lung field segmentation. The proposed method is able to detect the five most prevalent interstitial lung disease patterns: fibrosis, emphysema, consolidation, micronodules and ground-glass opacity, as well as normal tissue. Five-fold cross-validation has been used to avoid bias and reduce over-fitting. The proposed framework's performance is measured in terms of F-score on the publicly available MedGIFT database, where it outperforms state-of-the-art techniques. Detection is performed at the slice level and could be used for screening and differential diagnosis of interstitial lung disease patterns using high-resolution computed tomography images.
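Five-fold cross-validation, used by this study and several others in this list, partitions the data into five folds and rotates the held-out fold so every sample is validated exactly once. A minimal, framework-free sketch (illustrative only, not any study's pipeline):

```python
import random

def k_fold_indices(n_samples, k=5, seed=0):
    """Yield (train_idx, val_idx) index pairs for k-fold cross-validation."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)       # deterministic shuffle for reproducibility
    folds = [idx[i::k] for i in range(k)]  # k roughly equal, disjoint folds
    for i in range(k):
        val = folds[i]
        train = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        yield train, val
```

In practice, medical-imaging studies typically split at the patient level rather than the slice level, so that slices from one patient never appear in both the training and validation folds.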
Affiliation(s)
- Abhishek Kumar
- School of Computer and Information Sciences, University of Hyderabad, 500046 Hyderabad, India
- Ashis Kumar Dhara
- Electrical Engineering, National Institute of Technology, 713209 Durgapur, India
- Sumitra Basu Thakur
- Department of Chest and Respiratory Care Medicine, Medical College, 700073 Kolkata, India
- Anup Sadhu
- EKO Diagnostic, Medical College, 700073 Kolkata, India
- Debashis Nandi
- Computer Science and Engineering, National Institute of Technology, 713209 Durgapur, India
32
Wang Z, Hounye AH, Zhang J, Hou M, Qi M. Deep learning for abdominal adipose tissue segmentation with few labelled samples. Int J Comput Assist Radiol Surg 2021; 17:579-587. [PMID: 34845590 DOI: 10.1007/s11548-021-02533-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/14/2021] [Accepted: 11/04/2021] [Indexed: 11/30/2022]
Abstract
PURPOSE Fully automated abdominal adipose tissue segmentation from computed tomography (CT) scans plays an important role in biomedical diagnosis and prognosis. However, to identify and segment subcutaneous adipose tissue (SAT) and visceral adipose tissue (VAT) in the abdominal region, the traditional routine used in clinical practice is unattractive, expensive, time-consuming and prone to false segmentation. To address this challenge, this paper introduces an effective global-anatomy-level convolutional neural network (ConvNet) for automated segmentation of abdominal adipose tissue from CT scans, termed EFNet, which accommodates multistage semantic segmentation and the highly similar intensity characteristics of the two classes (VAT and SAT) in the abdominal region. METHODS EFNet consists of three pathways: (1) a max-unpooling operator, used to reduce computational consumption; (2) concatenation, applied to recover the shape of the segmentation results; and (3) anatomy pyramid pooling, adopted to obtain fine-grained features. The usable anatomical information is encoded in the output of EFNet and allows control of the density of the fine-grained features. RESULTS We formulated the learning process of EFNet in an end-to-end manner, where the representation features can be jointly learned through a mixed feature fusion layer. We extensively evaluated our model on different datasets and compared it to existing deep learning networks. Our proposed model, EFNet, outperformed other state-of-the-art models on segmentation and demonstrated strong performance for abdominal adipose tissue segmentation. CONCLUSION EFNet is extremely fast, with remarkable performance for fully automated segmentation of VAT and SAT in abdominal adipose tissue from CT scans, and demonstrates a strong ability for automated detection and segmentation of abdominal adipose tissue in clinical practice.
Affiliation(s)
- Zheng Wang
- School of Mathematics and Statistics, Central South University, Changsha, 410083, China
- Science and Engineering School, Hunan First Normal University, Changsha, 410205, China
- Jianglin Zhang
- Department of Dermatology, The Second Clinical Medical College, Shenzhen People's Hospital, Jinan University; The First Affiliated Hospital, Southern University of Science and Technology, Shenzhen, 518020, Guangdong, China
- Muzhou Hou
- School of Mathematics and Statistics, Central South University, Changsha, 410083, China
- Min Qi
- Department of Plastic Surgery, Xiangya Hospital, Central South University, Changsha, 410008, China
33
Baressi Šegota S, Lorencin I, Smolić K, Anđelić N, Markić D, Mrzljak V, Štifanić D, Musulin J, Španjol J, Car Z. Semantic Segmentation of Urinary Bladder Cancer Masses from CT Images: A Transfer Learning Approach. BIOLOGY 2021; 10:1134. [PMID: 34827126 PMCID: PMC8614660 DOI: 10.3390/biology10111134] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/05/2021] [Revised: 11/01/2021] [Accepted: 11/01/2021] [Indexed: 01/11/2023]
Abstract
Urinary bladder cancer is one of the most common cancers of the urinary tract. This cancer is characterized by its high metastatic potential and recurrence rate. Due to the high metastatic potential and recurrence rate, correct and timely diagnosis is crucial for successful treatment and care. With the aim of increasing diagnostic accuracy, artificial intelligence algorithms are being introduced into clinical decision making and diagnostics. One of the standard procedures for bladder cancer diagnosis is computed tomography (CT) scanning. In this research, a transfer learning approach to the semantic segmentation of urinary bladder cancer masses from CT images is presented. The initial data set is divided into three sub-sets according to image plane: frontal (4413 images), axial (4993 images), and sagittal (996 images). First, AlexNet is utilized for the design of a plane recognition system, and it achieved high classification and generalization performance, with a mean micro-averaged AUC of 0.9999 and σ(AUC) of 0.0006. Furthermore, by applying the transfer learning approach, significant improvements in both semantic segmentation and generalization performance were achieved. For the frontal plane, the highest performance was achieved with the pre-trained ResNet101 architecture as a backbone for U-net, with a mean DSC up to 0.9587 and σ(DSC) of 0.0059. When U-net was used for the semantic segmentation of urinary bladder cancer masses from images in the axial plane, the best results were achieved with pre-trained ResNet50 as a backbone, with a mean DSC up to 0.9372 and σ(DSC) of 0.0147. Finally, for images in the sagittal plane, the highest results were achieved with VGG-16 as a backbone: mean DSC values up to 0.9660 with a σ(DSC) of 0.0486. From the listed results, the proposed semantic segmentation system performs well from both the semantic segmentation and generalization standpoints, indicating the possibility of its utilization in clinical practice.
Affiliation(s)
- Sandi Baressi Šegota
- Faculty of Engineering, University of Rijeka, Vukovarska 58, 51000 Rijeka, Croatia
- Ivan Lorencin
- Faculty of Engineering, University of Rijeka, Vukovarska 58, 51000 Rijeka, Croatia
- Klara Smolić
- Clinical Hospital Center Rijeka, Krešimirova 42, 51000 Rijeka, Croatia
- Nikola Anđelić
- Faculty of Engineering, University of Rijeka, Vukovarska 58, 51000 Rijeka, Croatia
- Dean Markić
- Clinical Hospital Center Rijeka, Krešimirova 42, 51000 Rijeka, Croatia; Faculty of Medicine, Branchetta 20/1, University of Rijeka, 51000 Rijeka, Croatia
- Vedran Mrzljak
- Faculty of Engineering, University of Rijeka, Vukovarska 58, 51000 Rijeka, Croatia
- Daniel Štifanić
- Faculty of Engineering, University of Rijeka, Vukovarska 58, 51000 Rijeka, Croatia
- Jelena Musulin
- Faculty of Engineering, University of Rijeka, Vukovarska 58, 51000 Rijeka, Croatia
- Josip Španjol
- Clinical Hospital Center Rijeka, Krešimirova 42, 51000 Rijeka, Croatia; Faculty of Medicine, Branchetta 20/1, University of Rijeka, 51000 Rijeka, Croatia
- Zlatan Car
- Faculty of Engineering, University of Rijeka, Vukovarska 58, 51000 Rijeka, Croatia
34
Zhu F, Zhang B. Analysis of the Clinical Characteristics of Tuberculosis Patients based on Multi-Constrained Computed Tomography (CT) Image Segmentation Algorithm. Pak J Med Sci 2021; 37:1705-1709. [PMID: 34712310 PMCID: PMC8520368 DOI: 10.12669/pjms.37.6-wit.4795] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2021] [Revised: 06/02/2021] [Accepted: 06/15/2021] [Indexed: 11/23/2022] Open
Abstract
Objective: We used a U-shaped convolutional neural network (U-Net) multi-constraint image segmentation method to compare the diagnosis and computed tomography (CT) imaging characteristics of patients with tuberculosis alone and tuberculosis with lung cancer. Methods: We selected 160 patients with tuberculosis from the severity scoring (SVR) task provided by ImageCLEF Tuberculosis 2019. According to the diagnosed disease, they were divided into a tuberculosis-with-lung-cancer group and an others group; all patients underwent chest CT scanning, and the clinical manifestations, CT characteristics, and initial suspected diagnosis and missed diagnosis for different tumor diameters were observed and compared between the two groups. The research continued for a year, relying mainly on a GPU-equipped computer to carry out the image analysis. Results: More patients in the tuberculosis-with-lung-cancer group had hemoptysis and hoarseness than in the others group (P<0.05); the other symptoms were not significantly different (P>0.05). The tuberculosis-with-lung-cancer group had fewer signs of calcification, streak shadow, speckle shadow, and cavitation than the others group; however, more patients in the tuberculosis-with-lung-cancer group showed mass shadow, lobular sign, spine sign, burr sign and vacuole sign. Conclusion: Hemoptysis and hoarseness in pulmonary tuberculosis patients should prompt consideration of disease progression and the possibility of lung cancer lesions. CT imaging of pulmonary tuberculosis patients with lung cancer usually shows mass shadows, lobular signs, spine signs, burr signs, and vacuole signs, which can serve as a basis for diagnosis. Simultaneously, the U-Net-based segmentation method can effectively segment the lung parenchymal region, and the algorithm outperforms traditional algorithms.
Affiliation(s)
- Feng Zhu
- Attending Doctor, Department of Respiratory and Critical Care Medicine, The Second Clinical Medical College, Yangtze University, Jingzhou Central Hospital, Jingzhou 434000, China
- Bo Zhang
- Attending Doctor, Radiological Department, The Second Clinical Medical College, Yangtze University, Jingzhou Central Hospital, Jingzhou 434000, China
35
Ryu S, Kim JH, Yu H, Jung HD, Chang SW, Park JJ, Hong S, Cho HJ, Choi YJ, Choi J, Lee JS. Diagnosis of obstructive sleep apnea with prediction of flow characteristics according to airway morphology automatically extracted from medical images: Computational fluid dynamics and artificial intelligence approach. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 208:106243. [PMID: 34218170 DOI: 10.1016/j.cmpb.2021.106243] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/16/2021] [Accepted: 06/15/2021] [Indexed: 06/13/2023]
Abstract
BACKGROUND Obstructive sleep apnea syndrome (OSAS) is being observed in an increasing number of cases. It can be diagnosed using several methods, such as polysomnography. OBJECTIVES To overcome the time and cost challenges faced by conventional diagnostic methods, this paper proposes computational fluid dynamics (CFD) and machine-learning approaches derived from the upper-airway morphology, with automatic segmentation using deep learning. METHOD We adopted a 3D U-Net deep-learning model to perform medical image segmentation. By concatenating layers, the 3D U-Net prevents the feature-extraction loss that may otherwise occur, and it extracts the anteroposterior coordination and width of the airway morphology. To create training data on the flow characteristics of the upper airway, we analyzed how the flow characteristics change with the upper-airway morphology using CFD. A multivariate Gaussian process regression (MVGPR) model was trained on the flow-characteristic values. The trained MVGPR enables prompt prediction of the aerodynamic features of the upper airway without simulation. Unlike conventional regression methods, MVGPR can be trained while accounting for the correlation between the flow characteristics. As a diagnostic step, a support vector machine (SVM) with the predicted aerodynamic and biometric features was used to classify patients as healthy or suffering from moderate OSAS. The SVM is beneficial because it learns well even from a small dataset and can use the various flow characteristics as diagnostic factors while enhancing features via the kernel function. Because the patient dataset is small, Monte Carlo cross-validation was used to validate the trained model, and oversampling was applied to overcome the imbalanced-data problem. RESULT The segmented upper-airway results of the high-resolution and low-resolution models present overall average Dice coefficients of 0.76±0.041 and 0.74±0.052, respectively. Furthermore, the classification accuracy, sensitivity, specificity, and F1-score of the diagnosis algorithm were 81.5%, 89.3%, 86.2%, and 87.6%, respectively. CONCLUSION The convenience and accuracy of sleep apnea diagnosis are improved using deep learning and machine learning. Further, the proposed method can aid clinicians in making appropriate decisions when evaluating possible OSAS.
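The Dice coefficients reported above follow the standard overlap definition, 2|A∩B| / (|A|+|B|). A minimal illustrative sketch (the function name and toy masks are hypothetical, not from the paper):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Toy 2D example: two overlapping square masks standing in for airway segmentations.
a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True   # 16 pixels
b = np.zeros((8, 8), dtype=bool); b[3:7, 3:7] = True   # 16 pixels, 9 overlapping
print(round(dice_coefficient(a, b), 4))  # 2*9/(16+16) = 0.5625
```

The same masks yield a Jaccard index of |A∩B|/|A∪B| = 9/23; Dice and Jaccard are monotonically related, which is why papers often report either.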
Affiliation(s)
- Susie Ryu, School of Mechanical Engineering, College of Engineering, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul 120-749, South Korea
- Jun Hong Kim, School of Mechanical Engineering, College of Engineering, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul 120-749, South Korea
- Heejin Yu, School of Mechanical Engineering, College of Engineering, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul 120-749, South Korea
- Hwi-Dong Jung, Department of Oral and Maxillofacial Surgery, Oral Science Research Center, Yonsei University College of Dentistry, Seoul, South Korea
- Suk Won Chang, Department of Otorhinolaryngology, Yonsei University College of Medicine, Seoul, South Korea
- Jeong Jin Park, Department of Otorhinolaryngology, Yonsei University College of Medicine, Seoul, South Korea
- Soonhyuk Hong, Department of Otorhinolaryngology, Yonsei University College of Medicine, Seoul, South Korea
- Hyung-Ju Cho, Department of Otorhinolaryngology, Yonsei University College of Medicine, Seoul, South Korea
- Yoon Jeong Choi, School of Mechanical Engineering, College of Engineering, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul 120-749, South Korea; Department of Orthodontics, The Institute of Craniofacial Deformity, Yonsei University College of Dentistry, Seoul, South Korea
- Jongeun Choi, School of Mechanical Engineering, College of Engineering, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul 120-749, South Korea
- Joon Sang Lee, School of Mechanical Engineering, College of Engineering, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul 120-749, South Korea; Department of Orthodontics, The Institute of Craniofacial Deformity, Yonsei University College of Dentistry, Seoul, South Korea
|
36
|
Das P, Pal C, Acharyya A, Chakrabarti A, Basu S. Deep neural network for automated simultaneous intervertebral disc (IVDs) identification and segmentation of multi-modal MR images. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 205:106074. [PMID: 33906011 DOI: 10.1016/j.cmpb.2021.106074] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/17/2020] [Accepted: 03/22/2021] [Indexed: 06/12/2023]
Abstract
BACKGROUND AND OBJECTIVE Lower back pain has become a major health risk. Classical approaches follow a non-invasive imaging technique for the assessment of spinal intervertebral disc (IVD) abnormalities, where identification and segmentation of discs are done separately, making the process time-consuming. This necessitates a robust method for automated, simultaneous IVD identification and segmentation in multi-modality MRI images. METHODS We introduce a novel deep neural network architecture coined 'RIMNet', a Region-to-Image Matching Network model capable of automated, simultaneous IVD identification and segmentation of MRI images. The multi-modal input data are fed to the network with a dropout strategy that randomly disables modalities in mini-batches. Performance was determined on the testing dataset by computing the IVD identification accuracy, Dice coefficient, MDOC, Average Symmetric Surface Distance (ASD), Jaccard coefficient, Hausdorff distance, and F1 score. RESULTS The proposed model attained 94% identification accuracy, a Dice coefficient of 91.7±1% in segmentation, and an MDOC of 90.2±1%. Our model also achieved 0.87±0.02 for the Jaccard coefficient, 0.54±0.04 for ASD, and a Hausdorff distance of 0.62±0.02 mm. The results were validated and compared with other methodologies on the dataset of the MICCAI IVD 2018 challenge. CONCLUSIONS Our proposed deep-learning methodology is capable of simultaneous identification and segmentation of IVDs in MRI images of the human spine with high accuracy.
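The modality-dropout strategy described above (randomly disabling whole MR modalities per mini-batch sample) can be sketched as follows; the array shapes, keep probability, and function name are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def drop_modalities(batch, keep_prob=0.75, rng=None):
    """Zero out whole modality channels per sample.

    batch: (N, M, H, W) mini-batch with M modality channels.
    Each sample keeps at least one modality so its input is never all-zero.
    """
    rng = rng or np.random.default_rng()
    n, m = batch.shape[:2]
    keep = rng.random((n, m)) < keep_prob    # True = keep that modality
    for i in range(n):                       # guarantee >= 1 modality survives
        if not keep[i].any():
            keep[i, rng.integers(m)] = True
    return batch * keep[:, :, None, None]    # broadcast mask over H, W

batch = np.ones((4, 3, 16, 16))              # 4 samples, 3 MR modalities
out = drop_modalities(batch, keep_prob=0.5, rng=np.random.default_rng(0))
print(out.shape)                             # (4, 3, 16, 16), some channels zeroed
```

Training with such dropout forces the network not to over-rely on any single modality, which is the stated motivation for the strategy.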
Affiliation(s)
- Pabitra Das, A.K. Choudhury School of Information Technology, University of Calcutta, Kolkata 700106, India
- Chandrajit Pal, Advanced Embedded System and IC Design Laboratory, Department of Electrical Engineering, Indian Institute of Technology Hyderabad, India
- Amit Acharyya, Advanced Embedded System and IC Design Laboratory, Department of Electrical Engineering, Indian Institute of Technology Hyderabad, India
- Amlan Chakrabarti, A.K. Choudhury School of Information Technology, University of Calcutta, Kolkata 700106, India
- Saumyajit Basu, Kothari Medical Centre, 8/3, Alipore Rd, Alipore, Kolkata 700027, India
|
37
|
Jiang S, Li H, Jin Z. A Visually Interpretable Deep Learning Framework for Histopathological Image-Based Skin Cancer Diagnosis. IEEE J Biomed Health Inform 2021; 25:1483-1494. [PMID: 33449890 DOI: 10.1109/jbhi.2021.3052044] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Owing to the high incidence rate and severe impact of skin cancer, precise diagnosis of malignant skin tumors is a significant goal, especially because treatment is normally effective if the tumor is detected early. Limited published histopathological image sets and the lack of an intuitive correspondence between the features of lesion areas and a given type of skin cancer pose a challenge to establishing high-quality, interpretable computer-aided diagnostic (CAD) systems. To solve this problem, a light-weight attention-mechanism-based deep learning framework, DRANet, is proposed to differentiate 11 types of skin diseases based on a real histopathological image set collected by us during the last 10 years. The CAD system outputs not only the name of a disease but also a visualized diagnostic report showing possible areas related to it. The experimental results demonstrate that DRANet performs significantly better than baseline models (i.e., InceptionV3, ResNet50, VGG16, and VGG19) of comparable size, achieving competitive accuracy with fewer model parameters. Visualized results produced by the hidden layers of DRANet highlight part of the class-specific regions of diagnostic points and are valuable for decision making in the diagnosis of skin diseases.
|
38
|
Fu Y, Lei Y, Wang T, Curran WJ, Liu T, Yang X. A review of deep learning based methods for medical image multi-organ segmentation. Phys Med 2021; 85:107-122. [PMID: 33992856 PMCID: PMC8217246 DOI: 10.1016/j.ejmp.2021.05.003] [Citation(s) in RCA: 89] [Impact Index Per Article: 22.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/27/2020] [Revised: 03/12/2021] [Accepted: 05/03/2021] [Indexed: 12/12/2022] Open
Abstract
Deep learning has revolutionized image processing and achieved state-of-the-art performance in many medical image segmentation tasks. Many deep learning-based methods have been published to segment different parts of the body for different medical applications, so it is necessary to summarize the current state of development of deep learning in medical image segmentation. In this paper, we aim to provide a comprehensive review with a focus on multi-organ image segmentation, which is crucial for radiotherapy, where the tumor and organs-at-risk need to be contoured for treatment planning. We grouped the surveyed methods into two broad categories: 'pixel-wise classification' and 'end-to-end segmentation'. Each category was divided into subgroups according to network design. For each type, we listed the surveyed works, highlighted important contributions, and identified specific challenges. Following the detailed review, we discussed the achievements, shortcomings, and future potential of each category. To enable direct comparison, we listed the performance of the surveyed works that used thoracic and head-and-neck benchmark datasets.
Affiliation(s)
- Yabo Fu, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Yang Lei, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tonghe Wang, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Walter J Curran, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tian Liu, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Xiaofeng Yang, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
|
39
|
Mohammed KK, Hassanien AE, Afify HM. A 3D image segmentation for lung cancer using V.Net architecture based deep convolutional networks. J Med Eng Technol 2021; 45:337-343. [PMID: 33843414 DOI: 10.1080/03091902.2021.1905895] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/11/2023]
Abstract
Lung segmentation of chest CT scans is used to identify lung cancer, and this step is also critical in other diagnostic pathways. Therefore, powerful algorithms to accomplish this segmentation task accurately are highly needed in the medical imaging domain, where tumours must be segmented together with the lung parenchyma, and the lung parenchyma must be detached from tumour regions that are often confused with lung tissue. Recently, lung semantic segmentation based on fully convolutional networks (FCNs), which allocate each pixel in the image to a predefined class, has proven more suitable. In this paper, CT cancer scans from the Task06_Lung database were applied to an FCN inspired by the V-Net architecture for efficiently selecting a region of interest (ROI) using 3D segmentation. The database is divided into 64 training images and 32 testing images. The proposed system comprises three steps: data preprocessing, data augmentation, and a neural network based on the V-Net model. It was evaluated with the Dice similarity coefficient (DSC), the overlap ratio between the segmented image and the ground-truth image. The proposed system outperformed previous schemes for 3D lung segmentation, with an average DSC of 80% for the ROI and 98% for the surrounding lung tissues. Moreover, it demonstrated that 3D views of lung tumours in CT images support precise tumour estimation and robust lung segmentation.
Affiliation(s)
- Kamel K Mohammed, Center for Virus Research and Studies, Al-Azhar University, Cairo, Egypt; Scientific Research Group in Egypt (SRGE), Cairo, Egypt
- Aboul Ella Hassanien, Scientific Research Group in Egypt (SRGE), Cairo, Egypt; Faculty of Computers and Information, Cairo University, Giza, Egypt
- Heba M Afify, Scientific Research Group in Egypt (SRGE), Cairo, Egypt; Systems and Biomedical Engineering Department, Higher Institute of Engineering in El-Shorouk City, Cairo, Egypt
|
40
|
Sahu P, Zhao Y, Bhatia P, Bogoni L, Jerebko A, Qin H. Structure Correction for Robust Volume Segmentation in Presence of Tumors. IEEE J Biomed Health Inform 2021; 25:1151-1162. [PMID: 32750948 DOI: 10.1109/jbhi.2020.3004296] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
CNN-based lung segmentation models, in the absence of a diverse training dataset, fail to segment lung volumes in the presence of severe pathologies such as large masses, scars, and tumors. To rectify this problem, we propose a multi-stage algorithm for lung volume segmentation from CT scans. The algorithm uses a 3D CNN in the first stage to obtain a coarse segmentation of the left and right lungs. In the second stage, shape correction is performed on the segmentation mask using a 3D structure-correction CNN. A novel data augmentation strategy is adopted to train the 3D CNN, which helps incorporate a global shape prior. Finally, the shape-corrected segmentation mask is up-sampled and refined using a parallel flood-fill operation. The proposed multi-stage algorithm is robust in the presence of large nodules/tumors and does not require labeled segmentation masks of the entire pathological lung volume for training. Through extensive experiments on publicly available datasets such as NSCLC, LUNA, and LOLA11, we demonstrate that the proposed approach improves the recall of large juxtapleural tumor voxels by at least 15% over state-of-the-art models without sacrificing segmentation accuracy for normal lungs. The proposed method also meets the requirements of CAD software by performing segmentation within 5 seconds, which is significantly faster than present methods.
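The final refinement step above is a flood-fill over the up-sampled mask. A minimal serial (not parallel, unlike the paper's) sketch of keeping only the connected component seeded inside the lung, with hypothetical names and a 2D toy mask:

```python
from collections import deque

def flood_fill_component(mask, seed):
    """Return the 4-connected component of a binary 2D mask containing `seed`."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    q = deque([seed])
    while q:
        r, c = q.popleft()
        if 0 <= r < h and 0 <= c < w and mask[r][c] and not out[r][c]:
            out[r][c] = 1  # mark as kept, then expand to 4-neighbours
            q.extend([(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)])
    return out

# Two separate blobs; only the one containing the seed survives.
mask = [[1, 1, 0, 0],
        [1, 0, 0, 1],
        [0, 0, 1, 1]]
kept = flood_fill_component(mask, (0, 0))
print(kept)  # [[1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0]]
```

In a real pipeline the seed would lie inside the coarse lung segmentation, so spurious disconnected responses (e.g., airway or gut regions) are discarded.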
|
41
|
Sun S, Ren H, Dan T, Wei W. 3D segmentation of lungs with juxta-pleural tumor using the improved active shape model approach. Technol Health Care 2021; 29:385-398. [PMID: 33682776 PMCID: PMC8150541 DOI: 10.3233/thc-218037] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
BACKGROUND AND OBJECTIVE: At present, there are many methods for pathological lung segmentation, but two problems remain unresolved. (1) The search step in the traditional active shape model (ASM) is a least-squares optimization, which is sensitive to outlier marker points and drives the profile toward the transition area between normal lung tissue and tumor rather than the true lung contour. (2) If noisy images exist in the training dataset, a correct shape model cannot be constructed. METHODS: To solve the first problem, we propose a new ASM algorithm: outlier marker points are first detected by a distance measure, and different search functions are then applied to the abnormal and normal marker points. To solve the second problem, because robust principal component analysis (RPCA), based on low-rank theory, can remove noise, the proposed method combines ASM with RPCA instead of PCA. Low-rank decomposition of the training marker-point matrix and of the PCA covariance matrix is performed before segmentation with ASM. RESULTS: Using the proposed method to segment 122 lung images with juxta-pleural tumors from the EMPIRE10 database, the overlap rate with the gold standard was 94.5%, whereas the accuracy of ASM based on PCA was only 69.5%. CONCLUSIONS: The results show that when noisy samples are contained in the training set, good segmentation results for lungs with juxta-pleural tumors can be obtained by ASM based on RPCA.
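The outlier-detection idea in problem (1) above flags marker points that deviate unusually far from the mean shape. A simplified numpy sketch, under the illustrative assumption that a landmark is an outlier when its distance to the mean-shape position exceeds the average deviation by k standard deviations (the rule, threshold, and names are hypothetical, not the paper's exact distance method):

```python
import numpy as np

def flag_outlier_markers(points, mean_shape, k=1.5):
    """points, mean_shape: (N, 2) landmark coordinates.

    Returns a boolean array marking landmarks whose deviation from the
    mean shape exceeds the mean deviation by k standard deviations.
    """
    d = np.linalg.norm(points - mean_shape, axis=1)  # per-landmark deviation
    return d > d.mean() + k * d.std()

# Three landmarks sit near the mean shape; the fourth is pulled far off
# (as a tumor boundary would pull an ASM profile).
mean_shape = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])
points = mean_shape + np.array([[0.1, 0.0], [0.0, 0.1], [-0.1, 0.0], [0.0, 5.0]])
print(flag_outlier_markers(points, mean_shape))  # [False False False  True]
```

Flagged landmarks would then be updated with a different search function than the well-behaved ones, as the abstract describes.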
Affiliation(s)
- Shenshen Sun, College of Information and Engineering, Shenyang University, Shenyang, Liaoning, China
- Huizhi Ren, College of Mechanical Engineering, Shenyang University of Technology, Shenyang, Liaoning, China
- Tian Dan, College of Information and Engineering, Shenyang University, Shenyang, Liaoning, China
- Wu Wei, College of Information and Engineering, Shenyang University, Shenyang, Liaoning, China
|
42
|
Suri JS, Agarwal S, Gupta SK, Puvvula A, Biswas M, Saba L, Bit A, Tandel GS, Agarwal M, Patrick A, Faa G, Singh IM, Oberleitner R, Turk M, Chadha PS, Johri AM, Miguel Sanches J, Khanna NN, Viskovic K, Mavrogeni S, Laird JR, Pareek G, Miner M, Sobel DW, Balestrieri A, Sfikakis PP, Tsoulfas G, Protogerou A, Misra DP, Agarwal V, Kitas GD, Ahluwalia P, Teji J, Al-Maini M, Dhanjil SK, Sockalingam M, Saxena A, Nicolaides A, Sharma A, Rathore V, Ajuluchukwu JNA, Fatemi M, Alizad A, Viswanathan V, Krishnan PK, Naidu S. A narrative review on characterization of acute respiratory distress syndrome in COVID-19-infected lungs using artificial intelligence. Comput Biol Med 2021; 130:104210. [PMID: 33550068 PMCID: PMC7813499 DOI: 10.1016/j.compbiomed.2021.104210] [Citation(s) in RCA: 42] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2020] [Revised: 01/03/2021] [Accepted: 01/03/2021] [Indexed: 02/06/2023]
Abstract
COVID-19 has infected 77.4 million people worldwide and has caused 1.7 million fatalities as of December 21, 2020. The primary cause of death due to COVID-19 is Acute Respiratory Distress Syndrome (ARDS). According to the World Health Organization (WHO), people who are at least 60 years old or have comorbidities are at the highest risk from SARS-CoV-2. Medical imaging provides a non-invasive, touch-free, and relatively safer alternative tool for diagnosis during the current ongoing pandemic. Artificial intelligence (AI) scientists are developing several intelligent computer-aided diagnosis (CAD) tools in multiple imaging modalities, i.e., lung computed tomography (CT), chest X-rays, and lung ultrasound. These AI tools assist pulmonary and critical-care clinicians through (a) faster detection of the presence of a virus, (b) classification of pneumonia types, and (c) measurement of the severity of viral damage in COVID-19-infected patients. Thus, it is of the utmost importance to fully understand the requirements for fast, successful, and timely lung-scan analysis. This narrative review first presents the pathological layout of the lungs in the COVID-19 scenario, and then explains the comorbid statistical distributions in the ARDS framework. The novelty of this review is its classification of AI models by school of thought (SoT), based on the segregation of techniques and their characteristics. The study also discusses the identification of AI models and their extension from non-ARDS lungs (pre-COVID-19) to ARDS lungs (post-COVID-19). Furthermore, it presents AI workflow considerations for medical imaging modalities in the COVID-19 framework. Finally, clinical AI design considerations are discussed. We conclude that the design of current AI models can be improved by considering comorbidity as an independent factor. Furthermore, ARDS post-processing clinical systems must include (i) clinical validation and verification of AI models, (ii) reliability and stability criteria, (iii) easy adaptability, and (iv) generalization assessments of AI systems for use in pulmonary, critical-care, and radiological settings.
Affiliation(s)
- Jasjit S Suri, Stroke Diagnostic and Monitoring Division, AtheroPoint™, Roseville, CA, USA
- Sushant Agarwal, Advanced Knowledge Engineering Centre, GBTI, Roseville, CA, USA; Department of Computer Science Engineering, PSIT, Kanpur, India
- Suneet K Gupta, Department of Computer Science Engineering, Bennett University, India
- Anudeep Puvvula, Stroke Diagnostic and Monitoring Division, AtheroPoint™, Roseville, CA, USA; Annu's Hospitals for Skin and Diabetes, Nellore, AP, India
- Mainak Biswas, Department of Computer Science Engineering, JIS University, Kolkata, India
- Luca Saba, Department of Radiology, Azienda Ospedaliero Universitaria, Cagliari, Italy
- Arindam Bit, Department of Biomedical Engineering, NIT, Raipur, India
- Gopal S Tandel, Department of Computer Science Engineering, VNIT, Nagpur, India
- Mohit Agarwal, Department of Computer Science Engineering, Bennett University, India
- Gavino Faa, Department of Pathology, AOU of Cagliari, Italy
- Inder M Singh, Stroke Diagnostic and Monitoring Division, AtheroPoint™, Roseville, CA, USA
- Monika Turk, The Hanse-Wissenschaftskolleg Institute for Advanced Study, Delmenhorst, Germany
- Paramjit S Chadha, Stroke Diagnostic and Monitoring Division, AtheroPoint™, Roseville, CA, USA
- Amer M Johri, Department of Medicine, Division of Cardiology, Queen's University, Kingston, Ontario, Canada
- J Miguel Sanches, Institute of Systems and Robotics, Instituto Superior Tecnico, Lisboa, Portugal
- Narendra N Khanna, Department of Cardiology, Indraprastha APOLLO Hospitals, New Delhi, India
- Sophie Mavrogeni, Cardiology Clinic, Onassis Cardiac Surgery Center, Athens, Greece
- John R Laird, Heart and Vascular Institute, Adventist Health St. Helena, St Helena, CA, USA
- Gyan Pareek, Minimally Invasive Urology Institute, Brown University, Providence, RI, USA
- Martin Miner, Men's Health Center, Miriam Hospital Providence, Rhode Island, USA
- David W Sobel, Minimally Invasive Urology Institute, Brown University, Providence, RI, USA
- Petros P Sfikakis, Rheumatology Unit, National Kapodistrian University of Athens, Greece
- George Tsoulfas, Aristoteleion University of Thessaloniki, Thessaloniki, Greece
- Vikas Agarwal, Academic Affairs, Dudley Group NHS Foundation Trust, Dudley, UK
- George D Kitas, Academic Affairs, Dudley Group NHS Foundation Trust, Dudley, UK; Arthritis Research UK Epidemiology Unit, Manchester University, Manchester, UK
- Puneet Ahluwalia, Max Institute of Cancer Care, Max Superspeciality Hospital, New Delhi, India
- Jagjit Teji, Ann and Robert H. Lurie Children's Hospital of Chicago, Chicago, USA
- Mustafa Al-Maini, Allergy, Clinical Immunology and Rheumatology Institute, Toronto, Canada
- Ajit Saxena, Department of Cardiology, Indraprastha APOLLO Hospitals, New Delhi, India
- Andrew Nicolaides, Vascular Screening and Diagnostic Centre and University of Nicosia Medical School, Cyprus
- Aditya Sharma, Division of Cardiovascular Medicine, University of Virginia, Charlottesville, VA, USA
- Vijay Rathore, Stroke Diagnostic and Monitoring Division, AtheroPoint™, Roseville, CA, USA
- Mostafa Fatemi, Dept. of Physiology & Biomedical Engg., Mayo Clinic College of Medicine and Science, MN, USA
- Azra Alizad, Dept. of Radiology, Mayo Clinic College of Medicine and Science, MN, USA
- Vijay Viswanathan, MV Hospital for Diabetes and Professor M Viswanathan Diabetes Research Centre, Chennai, India
- P K Krishnan, Neurology Department, Fortis Hospital, Bangalore, India
- Subbaram Naidu, Electrical Engineering Department, University of Minnesota, Duluth, MN, USA
|
43
|
Chen C, Xiao R, Zhang T, Lu Y, Guo X, Wang J, Chen H, Wang Z. Pathological lung segmentation in chest CT images based on improved random walker. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 200:105864. [PMID: 33280937 DOI: 10.1016/j.cmpb.2020.105864] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/02/2019] [Accepted: 11/16/2020] [Indexed: 06/12/2023]
Abstract
BACKGROUND AND OBJECTIVE Pathological lung segmentation as a pretreatment step in the diagnosis of lung diseases has been widely explored. Because of the complexity of pathological lung structures and gray-level blurring of the borders, accurate lung segmentation in clinical 3D computed tomography images is a challenging task. This work proposes a fast and accurate pathological lung segmentation method with the following contributions: First, the edge weights introduce spatial information and clustering information, so that walkers can use more image information during walking. Second, a Gaussian distribution over the seed-point set is established to better distinguish fake seed points from real ones. Finally, a pre-parameter is calculated from the original seed points, and the final results are fitted with the new seed points. METHODS This study proposes a segmentation method based on an improved random walker algorithm. First, gray values are used as the sample distribution, and a Gaussian mixture model is used to obtain the clustering probability of the image; the spatial distance and clustering result are then added as new weights, and the new edge weights are used to construct the random-walker map. Second, a large number of marked points are automatically selected, and intermediate results obtained from the newly constructed map are retained as pre-parameters. When new seed points are introduced, the walker probabilities are quickly calculated from the new parameters and pre-parameters, yielding the final segmentation result. RESULTS The proposed method was tested on 65 sets of CT cases. Quantitative comparison with different methods confirms its high accuracy on our dataset (98.55%) and the LOLA11 dataset (97.41%), and its average segmentation time (10.5 s) is much faster than the random walker (1,332.5 s). CONCLUSIONS The experimental comparison shows that the proposed method can accurately and quickly obtain pathological lung segmentation results and therefore has potential clinical applications.
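The augmented edge weight described above combines the usual random-walker intensity term with spatial distance and GMM clustering information. A hedged sketch (the Gaussian functional form and the coefficients beta/gamma/delta are illustrative assumptions, not the paper's exact formula):

```python
import numpy as np

def edge_weight(g_i, g_j, p_i, p_j, dist, beta=1.0, gamma=1.0, delta=0.5):
    """Weight of graph edge (i, j) in a random-walker map.

    g_i, g_j : gray values of the two voxels
    p_i, p_j : GMM cluster-membership probabilities of the two voxels
    dist     : spatial distance between the voxels
    A larger weight means the walker crosses the edge more easily.
    """
    intensity_term = np.exp(-beta * (g_i - g_j) ** 2)   # classic random-walker term
    cluster_term = np.exp(-gamma * (p_i - p_j) ** 2)    # added clustering information
    spatial_term = np.exp(-delta * dist ** 2)           # added spatial information
    return intensity_term * cluster_term * spatial_term

# Similar voxels in the same cluster vs. voxels across a lung/tumor boundary.
w_similar = edge_weight(0.50, 0.52, 0.90, 0.88, 1.0)
w_boundary = edge_weight(0.50, 0.95, 0.90, 0.10, 1.0)
print(w_similar > w_boundary)  # True: the walker prefers staying inside one region
```

The added terms make the walker penalize crossings where either the GMM membership or the spatial layout disagrees, not just where the gray values differ.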
Affiliation(s)
- Cheng Chen, School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, 100083, China
- Ruoxiu Xiao, School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, 100083, China; Institute of Artificial Intelligence, University of Science and Technology Beijing, Beijing 100083, China
- Tao Zhang, Department of Thoracic Surgery, Chinese PLA General Hospital, Beijing, 100853, China
- Yuanyuan Lu, Department of Ultrasound, Chinese PLA General Hospital, Beijing, 100853, China
- Xiaoyu Guo, School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, 100083, China
- Jiayu Wang, School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, 100083, China
- Hongyu Chen, School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, 100083, China
- Zhiliang Wang, School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, 100083, China
|
44
|
Abstract
As an emerging biomedical image-processing technology, medical image segmentation has made great contributions to sustainable medical care and has become an important research direction in computer vision. With the rapid development of deep learning, medical image processing based on deep convolutional neural networks has become a research hotspot. This paper focuses on medical image segmentation based on deep learning. First, the basic ideas and characteristics of deep-learning-based medical image segmentation are introduced; by explaining its research status and summarizing the three main methods of medical image segmentation and their limitations, future development directions are outlined. Based on a discussion of different pathological tissues and organs, their specificities and classic segmentation algorithms are summarized. Despite the great achievements of recent years, deep-learning-based medical image segmentation still faces difficulties: segmentation accuracy is not high, the number of medical images in datasets is small, and resolution is low, so inaccurate segmentation results cannot meet actual clinical requirements. Aiming at these problems, a comprehensive review of current deep-learning-based medical image segmentation methods is provided to help researchers solve existing problems.
|
45
|
AI-driven quantification, staging and outcome prediction of COVID-19 pneumonia. Med Image Anal 2020; 67:101860. [PMID: 33171345 PMCID: PMC7558247 DOI: 10.1016/j.media.2020.101860] [Citation(s) in RCA: 89] [Impact Index Per Article: 17.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/08/2020] [Revised: 08/24/2020] [Accepted: 09/29/2020] [Indexed: 12/11/2022]
Abstract
Coronavirus disease 2019 (COVID-19) emerged in 2019 and disseminated around the world rapidly. Computed tomography (CT) imaging has proven to be an important tool for screening, disease quantification, and staging. The latter is of extreme importance for organizational anticipation (availability of intensive-care-unit beds, patient management planning) as well as for accelerating drug development through rapid, reproducible, and quantified assessment of treatment response. Although there are currently no specific guidelines for patient staging, CT is used together with some clinical and biological biomarkers. In this study, we collected a multi-center cohort and investigated the use of medical imaging and artificial intelligence for disease quantification, staging, and outcome prediction. Our approach relies on automatic deep-learning-based disease quantification using an ensemble of architectures, and on a data-driven consensus for staging and outcome prediction that fuses imaging biomarkers with clinical and biological attributes. Highly promising results on multiple external/independent evaluation cohorts, as well as comparisons with expert human readers, demonstrate the potential of our approach.
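The quantification step above fuses the outputs of an ensemble of architectures. A minimal majority-vote consensus over binary segmentation masks illustrates the idea; note this simple vote is a stand-in for illustration only, since the paper's consensus is data-driven and more elaborate:

```python
import numpy as np

def majority_vote(masks):
    """Consensus of K binary masks, shape (K, H, W):
    keep voxels predicted positive by more than K/2 models."""
    masks = np.asarray(masks)
    return (masks.sum(axis=0) > masks.shape[0] / 2).astype(np.uint8)

# Three hypothetical model outputs for the same slice.
m1 = np.array([[1, 1, 0], [0, 1, 0]])
m2 = np.array([[1, 0, 0], [0, 1, 1]])
m3 = np.array([[1, 1, 0], [0, 0, 1]])
consensus = majority_vote([m1, m2, m3])
print(consensus.tolist())  # [[1, 1, 0], [0, 1, 1]]
```

Voting suppresses voxels that only a single model marks as diseased, which stabilizes the lesion-volume estimates used for staging.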
|
46
|
Pöhler GH, Klimeš F, Behrendt L, Voskrebenzev A, Gonzalez CC, Wacker F, Hohlfeld JM, Vogel‐Claussen J. Repeatability of Phase‐Resolved Functional Lung (PREFUL)‐MRI Ventilation and Perfusion Parameters in Healthy Subjects and COPD Patients. J Magn Reson Imaging 2020; 53:915-927. [DOI: 10.1002/jmri.27385] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/26/2020] [Revised: 09/14/2020] [Accepted: 09/17/2020] [Indexed: 12/21/2022] Open
Affiliation(s)
- Gesa H. Pöhler, Filip Klimeš, Lea Behrendt, Andreas Voskrebenzev, Cristian Crisosto Gonzalez, Frank Wacker, Jens Vogel‐Claussen: Institute for Diagnostic and Interventional Radiology, Hannover Medical School, Hannover, Germany; Biomedical Research in Endstage and Obstructive Lung Disease (BREATH), Member of the German Center for Lung Research (DZL), Hannover, Germany
- Jens M. Hohlfeld: Biomedical Research in Endstage and Obstructive Lung Disease (BREATH), Member of the German Center for Lung Research (DZL), Hannover, Germany; Department of Respiratory Medicine, Hannover Medical School, Hannover, Germany; Fraunhofer Institute of Toxicology and Experimental Medicine, Hannover, Germany
|
47
|
Liu C, Zhao R, Xie W, Pang M. Pathological lung segmentation based on random forest combined with deep model and multi-scale superpixels. Neural Process Lett 2020; 52:1631-1649. [PMID: 32837245 PMCID: PMC7413019 DOI: 10.1007/s11063-020-10330-8] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Abstract
Accurate segmentation of lungs in pathological thoracic computed tomography (CT) scans plays an important role in pulmonary disease diagnosis. However, it remains a challenging task due to the variability of pathological lung appearances and shapes. In this paper, we propose a novel segmentation algorithm based on random forests (RF), a deep convolutional network, and multi-scale superpixels for accurately segmenting pathological lungs from thoracic CT images. A pathological thoracic CT image is first segmented into multi-scale superpixels, and the deep, texture, and intensity features extracted from the superpixels are taken as inputs of a group of RF classifiers. By fusing the classification results of the RFs with a fractional-order gray correlation approach, we obtain an initial segmentation of the pathological lungs. We finally apply a divide-and-conquer refinement strategy that combines contour correction of the left lungs with region repairing of the right lungs. Our algorithm is tested on a group of thoracic CT images affected by interstitial lung diseases. Experiments show that it achieves a high segmentation accuracy, with an average DSC of 96.45% and PPV of 95.07%. Compared with several existing lung segmentation methods, our algorithm exhibits robust performance on pathological lung segmentation and can be employed reliably for lung field segmentation of pathological thoracic CT images, helping radiologists detect the presence of pulmonary diseases and quantify their shape and size in regular clinical practice.
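The DSC and PPV reported above are simple overlap statistics between a predicted mask and the ground truth. A minimal sketch with toy binary masks (illustrative only, not the paper's data):

```python
import numpy as np

def dsc_ppv(pred, gt):
    """Dice similarity coefficient and positive predictive value
    for a predicted binary mask against a ground-truth mask."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    tp = np.logical_and(pred, gt).sum()          # true-positive pixels
    dsc = 2 * tp / (pred.sum() + gt.sum())       # overlap, 0..1
    ppv = tp / pred.sum()                        # precision of the prediction
    return dsc, ppv

pred = np.array([[1, 1, 0], [0, 1, 0]])
gt   = np.array([[1, 1, 0], [0, 0, 1]])
dsc, ppv = dsc_ppv(pred, gt)
# tp = 2, pred.sum() = 3, gt.sum() = 3  ->  dsc = 4/6, ppv = 2/3
```

DSC rewards overlap symmetrically, while PPV penalizes only false positives, which is why papers often report both.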
Affiliation(s)
- Caixia Liu, Ruibin Zhao, Wangli Xie, Mingyong Pang: Institute of EduInfo Science and Engineering, Nanjing Normal University, Nanjing, China
|
48
|
Wang Y, Wang N, Xu M, Yu J, Qin C, Luo X, Yang X, Wang T, Li A, Ni D. Deeply-Supervised Networks With Threshold Loss for Cancer Detection in Automated Breast Ultrasound. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:866-876. [PMID: 31442972 DOI: 10.1109/tmi.2019.2936500] [Citation(s) in RCA: 56] [Impact Index Per Article: 11.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/27/2023]
Abstract
Automated breast ultrasound (ABUS) is an innovative and promising screening method for breast examination. Compared with conventional 2D B-mode ultrasound, ABUS provides operator-independent image acquisition as well as 3D views of the whole breast. Nonetheless, reviewing ABUS images is particularly time-intensive, and lesions may be overlooked. In this study, we propose a novel 3D convolutional network for automated cancer detection in ABUS, aiming to accelerate review while achieving high detection sensitivity with low false positives (FPs). Specifically, we propose a densely deep supervision mechanism that greatly increases detection sensitivity by effectively exploiting multi-layer features, and a threshold loss that provides voxel-level adaptive thresholds for discerning cancer from non-cancer, attaining high sensitivity with low FPs. The efficacy of our network is verified on a collected dataset of 219 patients with 614 ABUS volumes, including 745 cancer regions, and 144 healthy women with a total of 900 volumes without abnormal findings. Extensive experiments demonstrate that our method attains a sensitivity of 95% with 0.84 FPs per volume. The proposed network provides an effective cancer detection scheme for breast examination using ABUS, sustaining high sensitivity with low false positives. The code is publicly available at https://github.com/nawang0226/abus_code.
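The operating point reported above combines region-level sensitivity with false positives normalized per volume. A minimal sketch; the counts below are hypothetical values chosen only to reproduce the reported rates, not the paper's raw tallies:

```python
def detection_summary(tp, fn, fp, n_volumes):
    """Sensitivity over annotated regions and false positives per volume."""
    sensitivity = tp / (tp + fn)      # fraction of cancer regions detected
    fp_per_volume = fp / n_volumes    # spurious detections, normalized
    return sensitivity, fp_per_volume

# 745 cancer regions and 614 + 900 = 1514 volumes appear in the abstract;
# the tp/fp split here is invented for illustration.
sens, fppv = detection_summary(tp=708, fn=37, fp=1272, n_volumes=1514)
# sens ~ 0.95, fppv ~ 0.84
```

Reporting FPs per volume rather than raw FP counts makes the trade-off comparable across datasets of different sizes.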
|
49
|
Abstract
OBJECTIVES: The objective of this study is to assess the performance of a computer-aided diagnosis (CAD) system (INTACT system) for the automatic classification of high-resolution computed tomography images into 4 radiological diagnostic categories and to compare this with the performance of radiologists on the same task.
MATERIALS AND METHODS: For the comparison, a total of 105 cases of pulmonary fibrosis were studied (54 cases of nonspecific interstitial pneumonia and 51 cases of usual interstitial pneumonia). All diagnoses were interstitial lung disease board consensus diagnoses (radiologically or histologically proven cases) and were retrospectively selected from our database. Two subspecialized chest radiologists made a consensual ground truth radiological diagnosis, according to the Fleischner Society recommendations. A comparison analysis was performed between the INTACT system and 2 other radiologists with different years of experience (readers 1 and 2). The INTACT system consists of a sequential pipeline in which first the anatomical structures of the lung are segmented, then the various types of pathological lung tissue are identified and characterized, and this information is then fed to a random forest classifier able to recommend a radiological diagnosis.
RESULTS: Reader 1, reader 2, and INTACT achieved similar accuracy for classifying pulmonary fibrosis into the original 4 categories: 0.6, 0.54, and 0.56, respectively, with P > 0.45. The INTACT system achieved an F-score (harmonic mean of precision and recall) of 0.56, whereas the 2 readers, on average, achieved 0.57 (P = 0.991). For the pooled classification (2 groups, with and without the need for biopsy), reader 1, reader 2, and CAD had similar accuracies of 0.81, 0.70, and 0.81, respectively. The F-score was again similar for the CAD system and the radiologists: the CAD system and the average reader reached F-scores of 0.80 and 0.79 (P = 0.898).
CONCLUSIONS: We found that a computer-aided diagnosis algorithm based on machine learning was able to classify idiopathic pulmonary fibrosis with similar accuracy to a human reader.
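The F-score used to compare the CAD system with the readers is simply the harmonic mean of precision and recall; a one-line sketch with illustrative inputs:

```python
def f_score(precision, recall):
    """F1 score: the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# When precision and recall are equal, their harmonic mean equals both,
# e.g. precision = recall = 0.56 gives an F-score of about 0.56.
f1 = f_score(0.56, 0.56)
```

Because the harmonic mean is dominated by the smaller of the two inputs, a classifier cannot inflate its F-score by trading recall away for precision (or vice versa).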
|
50
|
Ebner L, Christodoulidis S, Stathopoulou T, Geiser T, Stalder O, Limacher A, Heverhagen JT, Mougiakakou SG, Christe A. Meta-analysis of the radiological and clinical features of Usual Interstitial Pneumonia (UIP) and Nonspecific Interstitial Pneumonia (NSIP). PLoS One 2020; 15:e0226084. [PMID: 31929532 PMCID: PMC6957301 DOI: 10.1371/journal.pone.0226084] [Citation(s) in RCA: 26] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2019] [Accepted: 11/18/2019] [Indexed: 02/02/2023] Open
Abstract
PURPOSE: To conduct a meta-analysis to determine specific computed tomography (CT) patterns and clinical features that discriminate between nonspecific interstitial pneumonia (NSIP) and usual interstitial pneumonia (UIP).
MATERIALS AND METHODS: The PubMed/Medline and Embase databases were searched for studies describing the radiological patterns of UIP and NSIP in chest CT images. Only studies involving histologically confirmed diagnoses and a consensus diagnosis by an interstitial lung disease (ILD) board were included in this analysis. The radiological patterns and patient demographics were extracted from suitable articles. We used random-effects meta-analysis by DerSimonian & Laird and calculated pooled odds ratios for binary data and pooled mean differences for continuous data.
RESULTS: Of the 794 search results, 33 articles describing 2,318 patients met the inclusion criteria. Twelve of these studies included both NSIP (338 patients) and UIP (447 patients). NSIP patients were significantly younger (NSIP: median age 54.8 years, UIP: 59.7 years; mean difference (MD) -4.4; p = 0.001; 95% CI: -6.97 to -1.77), less often male (NSIP: median 52.8%, UIP: 73.6%; pooled odds ratio (OR) 0.32; p<0.001; 95% CI: 0.17 to 0.60), and less often smokers (NSIP: median 55.1%, UIP: 73.9%; OR 0.42; p = 0.005; 95% CI: 0.23 to 0.77) than patients with UIP. The CT findings from patients with NSIP revealed significantly lower levels of the honeycombing pattern (NSIP: median 28.9%, UIP: 73.4%; OR 0.07; p<0.001; 95% CI: 0.02 to 0.30) with less peripheral predominance (NSIP: median 41.8%, UIP: 83.3%; OR 0.21; p<0.001; 95% CI: 0.11 to 0.38) and more subpleural sparing (NSIP: median 40.7%, UIP: 4.3%; OR 16.3; p = 0.005; 95% CI: 2.28 to 117).
CONCLUSION: Honeycombing with a peripheral predominance was significantly associated with a diagnosis of UIP. The NSIP pattern showed more subpleural sparing. The UIP pattern was predominantly observed in elderly males with a history of smoking, whereas NSIP occurred in a younger patient population.
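The DerSimonian & Laird random-effects pooling named in the methods can be sketched as follows; the per-study log odds ratios and variances below are invented for illustration and are not taken from the meta-analysis:

```python
import math

def dersimonian_laird(log_or, var):
    """Pool per-study log odds ratios (log_or) with within-study
    variances (var) using the DerSimonian-Laird random-effects model."""
    k = len(log_or)
    w = [1.0 / v for v in var]                       # fixed-effect weights
    sw = sum(w)
    ybar = sum(wi * yi for wi, yi in zip(w, log_or)) / sw
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, log_or))  # Cochran's Q
    # Between-study variance estimate, truncated at zero
    tau2 = max(0.0, (q - (k - 1)) / (sw - sum(wi * wi for wi in w) / sw))
    wstar = [1.0 / (v + tau2) for v in var]          # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(wstar, log_or)) / sum(wstar)
    return math.exp(pooled)                          # back to the OR scale

# Three hypothetical studies whose ORs favor NSIP for a given finding
pooled_or = dersimonian_laird(
    [math.log(0.30), math.log(0.50), math.log(0.40)],
    [0.10, 0.20, 0.15])
# pooled_or lies between the smallest and largest study ORs (0.30 and 0.50)
```

Pooling on the log scale and back-transforming keeps the estimate symmetric for ratios above and below 1, which is why odds ratios are combined this way.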
Affiliation(s)
- Lukas Ebner: Department of Diagnostic, Interventional and Pediatric Radiology, Inselspital, Bern University Hospital, University of Bern, Switzerland
- Thomai Stathopoulou: ARTORG Center for Biomedical Engineering Research, University of Bern, Switzerland
- Thomas Geiser: Department for Pulmonary Medicine, Inselspital, Bern University Hospital, University of Bern, Switzerland
- Odile Stalder: CTU Bern and Institute of Social and Preventive Medicine (ISPM), University of Bern, Switzerland
- Andreas Limacher: CTU Bern and Institute of Social and Preventive Medicine (ISPM), University of Bern, Switzerland
- Johannes T. Heverhagen: Department of Diagnostic, Interventional and Pediatric Radiology, Inselspital, Bern University Hospital, University of Bern, Switzerland
- Stavroula G. Mougiakakou: Department of Diagnostic, Interventional and Pediatric Radiology, Inselspital, Bern University Hospital, University of Bern, Switzerland; ARTORG Center for Biomedical Engineering Research, University of Bern, Switzerland
- Andreas Christe: Department of Diagnostic, Interventional and Pediatric Radiology, Inselspital, Bern University Hospital, University of Bern, Switzerland
|