1
Ou J, Jiang L, Bai T, Zhan P, Liu R, Xiao H. ResTransUnet: An effective network combined with Transformer and U-Net for liver segmentation in CT scans. Comput Biol Med 2024; 177:108625. [PMID: 38823365] [DOI: 10.1016/j.compbiomed.2024.108625]
Abstract
Liver segmentation is a fundamental prerequisite for the diagnosis and surgical planning of hepatocellular carcinoma. Traditionally, the liver contour is drawn manually by radiologists using a slice-by-slice method. However, this process is time-consuming and error-prone, depending on the radiologist's experience. In this paper, we propose a new end-to-end automatic liver segmentation framework, named ResTransUNet, which exploits the Transformer's ability to capture global context for remote interactions and spatial relationships, as well as the excellent performance of the original U-Net architecture. The main contribution of this paper lies in proposing a novel fusion network that combines U-Net and Transformer architectures. In the encoding structure, a dual-path approach is utilized, where features are extracted separately using both convolutional neural networks (CNNs) and Transformer networks. Additionally, an effective feature enhancement unit is designed to transfer the global features extracted by the Transformer network to the CNN for feature enhancement. This model aims to address the drawbacks of traditional U-Net-based methods, such as feature loss during encoding and poor capture of global features. Moreover, it avoids the disadvantages of pure Transformer models, which suffer from large parameter sizes and high computational complexity. The experimental results on the LiTS2017 dataset demonstrate remarkable performance for our proposed model, with Dice coefficient, volumetric overlap error (VOE), and relative volume difference (RVD) values for liver segmentation reaching 0.9535, 0.0804, and -0.0007, respectively. Furthermore, to validate the model's generalization capability, we conducted tests on the 3Dircadb, Chaos, and Sliver07 datasets. The experimental results demonstrate that the proposed method outperforms other closely related models with higher liver segmentation accuracy.
In addition, significant improvements can be achieved by applying our method when handling liver segmentation with small and discontinuous liver regions, as well as blurred liver boundaries. The code is available at the website: https://github.com/Jouiry/ResTransUNet.
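The three figures of merit quoted in this abstract (Dice, VOE, RVD) follow standard definitions; the sketch below (plain NumPy, not the authors' code) shows how they are typically computed from a predicted and a ground-truth binary mask:

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Compute Dice, VOE, and RVD for two binary masks (standard definitions)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dice = 2.0 * inter / (pred.sum() + gt.sum())  # overlap; 1.0 is a perfect match
    voe = 1.0 - inter / union                     # volumetric overlap error
    rvd = (pred.sum() - gt.sum()) / gt.sum()      # signed relative volume difference
    return dice, voe, rvd

# Toy example: the prediction misses one voxel of a 4-voxel ground truth
gt = np.array([1, 1, 1, 1, 0, 0])
pred = np.array([1, 1, 1, 0, 0, 0])
print(segmentation_metrics(pred, gt))
```

A negative RVD, as reported above, indicates a slight under-segmentation relative to the ground truth.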
Affiliation(s)
- Jiajie Ou
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing, China
- Linfeng Jiang
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing, China; School of Computing and College of Design and Engineering, National University of Singapore, Singapore.
- Ting Bai
- School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden
- Peidong Zhan
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing, China
- Ruihua Liu
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing, China
- Hanguang Xiao
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing, China
2
Yu C, Pei H. Dynamic Weighting Translation Transfer Learning for Imbalanced Medical Image Classification. Entropy (Basel) 2024; 26:400. [PMID: 38785649] [PMCID: PMC11119260] [DOI: 10.3390/e26050400]
Abstract
Medical image diagnosis using deep learning has shown significant promise in clinical medicine. However, it often encounters two major difficulties in real-world applications: (1) domain shift, which invalidates the trained model on new datasets, and (2) class imbalance problems leading to model biases towards majority classes. To address these challenges, this paper proposes a transfer learning solution, named Dynamic Weighting Translation Transfer Learning (DTTL), for imbalanced medical image classification. The approach is grounded in information and entropy theory and comprises three modules: Cross-domain Discriminability Adaptation (CDA), Dynamic Domain Translation (DDT), and Balanced Target Learning (BTL). CDA connects discriminative feature learning between source and target domains using a synthetic discriminability loss and a domain-invariant feature learning loss. The DDT unit develops a dynamic translation process for imbalanced classes between two domains, utilizing a confidence-based selection approach to select the most useful synthesized images to create a pseudo-labeled balanced target domain. Finally, the BTL unit performs supervised learning on the reassembled target set to obtain the final diagnostic model. This paper delves into maximizing the entropy of class distributions, while simultaneously minimizing the cross-entropy between the source and target domains to reduce domain discrepancies. By incorporating entropy concepts into our framework, our method not only significantly enhances medical image classification in practical settings but also innovates the application of entropy and information theory within deep learning and medical image processing realms. Extensive experiments demonstrate that DTTL achieves the best performance compared to existing state-of-the-art methods for imbalanced medical image classification tasks.
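For readers unfamiliar with the two information-theoretic quantities the DTTL objective balances, here is a minimal, generic sketch of Shannon entropy and cross-entropy for discrete class distributions (an illustration only, not the paper's implementation):

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy H(p) = -sum_i p_i log p_i of a discrete distribution."""
    p = np.asarray(p, dtype=float)
    return float(-(p * np.log(p + eps)).sum())

def cross_entropy(p, q, eps=1e-12):
    """Cross-entropy H(p, q) = -sum_i p_i log q_i between two distributions."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(-(p * np.log(q + eps)).sum())

# A balanced class distribution maximizes entropy; a skewed one does not.
balanced = [0.5, 0.5]
skewed = [0.9, 0.1]
print(entropy(balanced), entropy(skewed))  # the balanced distribution is higher
```

Maximizing the entropy of the target class distribution therefore pushes toward balance, while minimizing cross-entropy between domains pulls the distributions together.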
Affiliation(s)
- Chenglin Yu
- School of Electronic & Information Engineering and Communication Engineering, Guangzhou City University of Technology, Guangzhou 510800, China
- Key Laboratory of Autonomous Systems and Networked Control, Ministry of Education, Unmanned Aerial Vehicle Systems Engineering Technology Research Center of Guangdong, South China University of Technology, Guangzhou 510640, China
- Hailong Pei
- Key Laboratory of Autonomous Systems and Networked Control, Ministry of Education, Unmanned Aerial Vehicle Systems Engineering Technology Research Center of Guangdong, School of Automation Science and Engineering, South China University of Technology, Guangzhou 510640, China
3
Dong J, Cheng G, Zhang Y, Peng C, Song Y, Tong R, Lin L, Chen YW. Tailored multi-organ segmentation with model adaptation and ensemble. Comput Biol Med 2023; 166:107467. [PMID: 37725849] [DOI: 10.1016/j.compbiomed.2023.107467]
Abstract
Multi-organ segmentation, which identifies and separates different organs in medical images, is a fundamental task in medical image analysis. Recently, the immense success of deep learning motivated its wide adoption in multi-organ segmentation tasks. However, due to expensive labor costs and expertise, the availability of multi-organ annotations is usually limited and hence poses a challenge in obtaining sufficient training data for deep learning-based methods. In this paper, we aim to address this issue by combining off-the-shelf single-organ segmentation models to develop a multi-organ segmentation model on the target dataset, which helps get rid of the dependence on annotated data for multi-organ segmentation. To this end, we propose a novel dual-stage method that consists of a Model Adaptation stage and a Model Ensemble stage. The first stage enhances the generalization of each off-the-shelf segmentation model on the target domain, while the second stage distills and integrates knowledge from multiple adapted single-organ segmentation models. Extensive experiments on four abdomen datasets demonstrate that our proposed method can effectively leverage off-the-shelf single-organ segmentation models to obtain a tailored model for multi-organ segmentation with high accuracy.
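One simple way to combine off-the-shelf single-organ models into a multi-organ label map, shown here purely as an illustrative baseline (the paper's actual method adds a model adaptation stage and knowledge distillation on top of this idea), is an argmax over the per-organ foreground probability maps:

```python
import numpy as np

def merge_single_organ_probs(prob_maps, threshold=0.5):
    """Combine per-organ foreground probability maps (dict: name -> array of the
    same spatial shape) into one multi-organ label map.
    Label 0 is background; label i + 1 is the i-th organ in sorted name order.
    This is a generic merging rule, not the paper's distillation scheme."""
    names = sorted(prob_maps)
    stack = np.stack([prob_maps[n] for n in names])  # shape: (organs, *spatial)
    best = stack.argmax(axis=0)                      # most confident organ per voxel
    fg = stack.max(axis=0) >= threshold              # is any organ confident enough?
    return np.where(fg, best + 1, 0), names

# Toy 2x2 "volume": liver wins the first voxel, spleen the second, rest background
liver = np.array([[0.9, 0.2], [0.1, 0.1]])
spleen = np.array([[0.1, 0.8], [0.2, 0.1]])
labels, names = merge_single_organ_probs({"liver": liver, "spleen": spleen})
print(names, labels)
```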
Affiliation(s)
- Jiahua Dong
- College of Computer Science and Technology, Zhejiang University, Hangzhou, 310027, China
- Guohua Cheng
- College of Computer Science and Technology, Zhejiang University, Hangzhou, 310027, China
- Yue Zhang
- Center for Medical Imaging, Robotics, Analytic Computing & Learning (MIRACLE), Suzhou Institute for Advanced Research, University of Science and Technology of China, Suzhou, 215163, China; School of Biomedical Engineering, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui, 230026, China.
- Chengtao Peng
- Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei, 230026, China
- Yu Song
- Graduate School of Information Science and Engineering, Ritsumeikan University, Shiga, 525-8577, Japan
- Ruofeng Tong
- College of Computer Science and Technology, Zhejiang University, Hangzhou, 310027, China
- Lanfen Lin
- College of Computer Science and Technology, Zhejiang University, Hangzhou, 310027, China
- Yen-Wei Chen
- Graduate School of Information Science and Engineering, Ritsumeikan University, Shiga, 525-8577, Japan
4
Isaksson LJ, Summers P, Mastroleo F, Marvaso G, Corrao G, Vincini MG, Zaffaroni M, Ceci F, Petralia G, Orecchia R, Jereczek-Fossa BA. Automatic Segmentation with Deep Learning in Radiotherapy. Cancers (Basel) 2023; 15:4389. [PMID: 37686665] [PMCID: PMC10486603] [DOI: 10.3390/cancers15174389]
Abstract
This review provides a formal overview of current automatic segmentation studies that use deep learning in radiotherapy. It covers 807 published papers across multiple cancer sites, image types (CT/MRI/PET), and segmentation methods. We collected key statistics about the papers to uncover commonalities, trends, and methods, and to identify areas where more research might be needed. Moreover, we analyzed the corpus by posing explicit questions aimed at providing high-quality and actionable insights, including: "What should researchers think about when starting a segmentation study?", "How can research practices in medical image segmentation be improved?", "What is missing from the current corpus?", and more. This allowed us to provide practical guidelines on how to conduct a good segmentation study in today's competitive environment, which will be useful for future research within the field regardless of the specific radiotherapeutic subfield. To aid in our analysis, we used the large language model ChatGPT to condense information.
Affiliation(s)
- Lars Johannes Isaksson
- Division of Radiation Oncology, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy
- Department of Oncology and Hemato-Oncology, University of Milan, 20141 Milan, Italy
- Paul Summers
- Division of Radiology, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy
- Federico Mastroleo
- Division of Radiation Oncology, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy
- Department of Translational Medicine, University of Piemonte Orientale (UPO), 20188 Novara, Italy
- Giulia Marvaso
- Division of Radiation Oncology, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy
- Giulia Corrao
- Division of Radiation Oncology, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy
- Maria Giulia Vincini
- Division of Radiation Oncology, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy
- Mattia Zaffaroni
- Division of Radiation Oncology, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy
- Francesco Ceci
- Department of Oncology and Hemato-Oncology, University of Milan, 20141 Milan, Italy
- Division of Nuclear Medicine, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy
- Giuseppe Petralia
- Department of Oncology and Hemato-Oncology, University of Milan, 20141 Milan, Italy
- Precision Imaging and Research Unit, Department of Medical Imaging and Radiation Sciences, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy
- Roberto Orecchia
- Scientific Directorate, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy
- Barbara Alicja Jereczek-Fossa
- Division of Radiation Oncology, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy
- Department of Oncology and Hemato-Oncology, University of Milan, 20141 Milan, Italy
5
Azuri I, Wattad A, Peri-Hanania K, Kashti T, Rosen R, Caspi Y, Istaiti M, Wattad M, Applbaum Y, Zimran A, Revel-Vilk S, Eldar YC. A Deep-Learning Approach to Spleen Volume Estimation in Patients with Gaucher Disease. J Clin Med 2023; 12:5361. [PMID: 37629403] [PMCID: PMC10455264] [DOI: 10.3390/jcm12165361]
Abstract
The enlargement of the liver and spleen (hepatosplenomegaly) is a common manifestation of Gaucher disease (GD). An accurate estimation of the liver and spleen volumes in patients with GD, using imaging tools such as magnetic resonance imaging (MRI), is crucial for the baseline assessment and monitoring of the response to treatment. A commonly used method in clinical practice to estimate the spleen volume is the employment of a formula that uses the measurements of the craniocaudal length, diameter, and thickness of the spleen in MRI. However, the inaccuracy of this formula is significant, which, in turn, emphasizes the need for a more precise and reliable alternative. To this end, we employed deep-learning techniques, to achieve a more accurate spleen segmentation and, subsequently, calculate the resulting spleen volume with higher accuracy on a testing set cohort of 20 patients with GD. Our results indicate that the mean error obtained using the deep-learning approach to spleen volume estimation is 3.6 ± 2.7%, which is significantly lower than the common formula approach, which resulted in a mean error of 13.9 ± 9.6%. These findings suggest that the integration of deep-learning methods into the clinical routine practice for spleen volume calculation could lead to improved diagnostic and monitoring outcomes.
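The two volume-estimation routes compared in this study can be sketched as follows. The voxel-counting rule is standard; the 0.524 factor in the linear-measurement formula is the common prolate-ellipsoid approximation and is an assumption here, as it may differ from the exact clinical formula the authors used:

```python
import numpy as np

def volume_from_mask(mask, voxel_spacing_mm):
    """Spleen volume (mL) from a segmentation: count voxels, multiply by voxel volume."""
    voxel_mm3 = float(np.prod(voxel_spacing_mm))
    return mask.astype(bool).sum() * voxel_mm3 / 1000.0  # mm^3 -> mL

def volume_from_ellipsoid(length_cm, width_cm, thickness_cm):
    """Linear-measurement estimate with the prolate-ellipsoid factor 0.524
    (assumed; the clinically used coefficient may differ)."""
    return 0.524 * length_cm * width_cm * thickness_cm

# With 1 mm isotropic voxels, a 100,000-voxel mask corresponds to 100 mL
mask = np.ones((100, 100, 10), dtype=np.uint8)
print(volume_from_mask(mask, (1.0, 1.0, 1.0)))  # 100.0
```

The segmentation-based route is what the deep-learning pipeline enables: once the mask is accurate, the volume follows directly from voxel counting.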
Affiliation(s)
- Ido Azuri
- Bioinformatics Unit, Department of Life Sciences Core Facilities, Weizmann Institute of Science, Rehovot 7610001, Israel
- Ameer Wattad
- Department of Radiology, Shaare Zedek Medical Center, Jerusalem 9103102, Israel
- Keren Peri-Hanania
- Department of Computer Science and Applied Mathematics, Weizmann Institute of Science, Rehovot 7610001, Israel
- Tamar Kashti
- Department of Computer Science and Applied Mathematics, Weizmann Institute of Science, Rehovot 7610001, Israel
- Ronnie Rosen
- Department of Computer Science and Applied Mathematics, Weizmann Institute of Science, Rehovot 7610001, Israel
- Yaron Caspi
- Department of Computer Science and Applied Mathematics, Weizmann Institute of Science, Rehovot 7610001, Israel
- Majdolen Istaiti
- Gaucher Unit, Shaare Zedek Medical Center, Jerusalem 9103102, Israel
- Makram Wattad
- Department of Radiology, Shaare Zedek Medical Center, Jerusalem 9103102, Israel
- Yaakov Applbaum
- Department of Radiology, Shaare Zedek Medical Center, Jerusalem 9103102, Israel
- Faculty of Medicine, Hebrew University, Jerusalem 9112102, Israel
- Ari Zimran
- Gaucher Unit, Shaare Zedek Medical Center, Jerusalem 9103102, Israel
- Faculty of Medicine, Hebrew University, Jerusalem 9112102, Israel
- Shoshana Revel-Vilk
- Gaucher Unit, Shaare Zedek Medical Center, Jerusalem 9103102, Israel
- Faculty of Medicine, Hebrew University, Jerusalem 9112102, Israel
- Yonina C. Eldar
- Department of Computer Science and Applied Mathematics, Weizmann Institute of Science, Rehovot 7610001, Israel
6
Shimron E, Perlman O. AI in MRI: Computational Frameworks for a Faster, Optimized, and Automated Imaging Workflow. Bioengineering (Basel) 2023; 10:492. [PMID: 37106679] [PMCID: PMC10135995] [DOI: 10.3390/bioengineering10040492]
Abstract
Over the last decade, artificial intelligence (AI) has made an enormous impact on a wide range of fields, including science, engineering, informatics, finance, and transportation [...].
Affiliation(s)
- Efrat Shimron
- Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, CA 94720, USA
- Or Perlman
- Department of Biomedical Engineering, Tel Aviv University, Tel Aviv 6997801, Israel
- Sagol School of Neuroscience, Tel Aviv University, Tel Aviv 6997801, Israel
7
Wali A, Ahmad M, Naseer A, Tamoor M, Gilani S. StynMedGAN: Medical images augmentation using a new GAN model for improved diagnosis of diseases. J Intell Fuzzy Syst 2023. [DOI: 10.3233/jifs-223996]
Abstract
Deep networks require a considerable amount of training data, otherwise they generalize poorly. Data augmentation techniques help the network generalize better by providing more variety in the training data. Standard data augmentation techniques, such as flipping and scaling, produce new data that is a modified version of the original data. Generative adversarial networks (GANs) have been designed to generate new data that can be exploited. In this paper, we propose a new GAN model, named StynMedGAN, for synthetically generating medical images to improve the performance of classification models. StynMedGAN builds upon the state-of-the-art styleGANv2, which has produced remarkable results generating all kinds of natural images. We introduce a regularization term that is a normalized loss factor in the existing discriminator loss of styleGANv2. It is used to force the generator to produce normalized images and penalize it if it fails. Because medical imaging modalities, such as X-rays, CT scans, and MRIs, are different in nature, we show that the proposed GAN extends the capacity of styleGANv2 to handle medical images in a better way. This new GAN model (StynMedGAN) is applied to three types of medical imaging: X-rays, CT scans, and MRI, to produce more data for the classification tasks. To validate the effectiveness of the proposed model for classification, three classifiers (CNN, DenseNet121, and VGG-16) are used. Results show that classifiers trained with StynMedGAN-augmented data outperform other methods that only used the original data. The proposed model achieved 100%, 99.6%, and 100% for chest X-ray, chest CT scans, and brain MRI, respectively. The results are promising and favor a potentially important resource that can be used by practitioners and radiologists to diagnose different diseases.
Affiliation(s)
- Aamir Wali
- Department of Computer Science, National University of Computer and Emerging Science, Faisal Town, Lahore, Pakistan
- Muzammil Ahmad
- Department of Computer Science, National University of Computer and Emerging Science, Faisal Town, Lahore, Pakistan
- Asma Naseer
- Department of Computer Science, National University of Computer and Emerging Science, Faisal Town, Lahore, Pakistan
- Maria Tamoor
- Department of Computer Science, Forman Christian College University, Zahoor Ilahi Road, Lahore, Pakistan
- S.A.M. Gilani
- Department of Computer Science, National University of Computer and Emerging Science, Faisal Town, Lahore, Pakistan
8
Conticchio M, Maggialetti N, Rescigno M, Brunese MC, Vaschetti R, Inchingolo R, Calbi R, Ferraro V, Tedeschi M, Fantozzi MR, Avella P, Calabrese A, Memeo R, Scardapane A. Hepatocellular Carcinoma with Bile Duct Tumor Thrombus: A Case Report and Literature Review of 890 Patients Affected by Uncommon Primary Liver Tumor Presentation. J Clin Med 2023; 12:423. [PMID: 36675352] [PMCID: PMC9861411] [DOI: 10.3390/jcm12020423]
Abstract
Bile duct tumor thrombus (BDTT) is an uncommon finding in hepatocellular carcinoma (HCC), potentially mimicking cholangiocarcinoma (CCA). Recent studies have suggested that HCC with BDTT could represent a prognostic factor. We report the case of a 47-year-old male patient admitted to the University Hospital of Bari with abdominal pain. Blood tests revealed the presence of an untreated hepatitis B virus infection (HBV), with normal liver function and without jaundice. Abdominal ultrasonography revealed a cirrhotic liver with a segmental dilatation of the third bile duct segment, confirmed by a CT scan and liver MRI, which also identified a heterologous mass. No other focal hepatic lesions were identified. A percutaneous ultrasound-guided needle biopsy was then performed, detecting a moderately differentiated HCC. Finally, the patient underwent a third hepatic segmentectomy, and the histopathological analysis confirmed the endobiliary localization of HCC. Subsequently, the patient experienced a nodular recurrence in the fourth hepatic segment, which was treated with ultrasound-guided percutaneous radiofrequency ablation (RFA). This case shows that HCC with BDTT can mimic different types of tumors. It also indicates the value of an early multidisciplinary patient assessment to obtain an accurate diagnosis of HCC with BDTT, which may have prognostic value that has not been recognized until now.
Affiliation(s)
- Maria Conticchio
- Unit of Hepatobiliary Surgery, Miulli Hospital, 70124 Acquaviva Delle Fonti, Italy
- Nicola Maggialetti
- Interdisciplinary Department of Medicine, Section of Radiology and Radiation Oncology, University of Bari “Aldo Moro”, 70124 Bari, Italy
- Marco Rescigno
- Interdisciplinary Department of Medicine, Section of Radiology and Radiation Oncology, University of Bari “Aldo Moro”, 70124 Bari, Italy
- Maria Chiara Brunese
- Interdisciplinary Department of Medicine, Section of Radiology and Radiation Oncology, University of Bari “Aldo Moro”, 70124 Bari, Italy
- Roberto Vaschetti
- Interdisciplinary Department of Medicine, Section of Radiology and Radiation Oncology, University of Bari “Aldo Moro”, 70124 Bari, Italy
- Roberto Calbi
- Radiology Unit, Miulli Hospital, 70124 Acquaviva Delle Fonti, Italy
- Valentina Ferraro
- Unit of Hepatobiliary Surgery, Miulli Hospital, 70124 Acquaviva Delle Fonti, Italy
- Michele Tedeschi
- Unit of Hepatobiliary Surgery, Miulli Hospital, 70124 Acquaviva Delle Fonti, Italy
- Pasquale Avella
- Department of Clinical Medicine and Surgery, “Federico II” University of Naples, 80131 Naples, Italy
- Riccardo Memeo
- Unit of Hepatobiliary Surgery, Miulli Hospital, 70124 Acquaviva Delle Fonti, Italy
- Arnaldo Scardapane
- Interdisciplinary Department of Medicine, Section of Radiology and Radiation Oncology, University of Bari “Aldo Moro”, 70124 Bari, Italy
9
An active contour model reinforced by convolutional neural network and texture description. Neurocomputing 2023. [DOI: 10.1016/j.neucom.2023.01.047]
10
Two-Stage Deep Learning Model for Automated Segmentation and Classification of Splenomegaly. Cancers (Basel) 2022; 14:5476. [PMID: 36428569] [PMCID: PMC9688308] [DOI: 10.3390/cancers14225476]
Abstract
Splenomegaly is a common cross-sectional imaging finding with a variety of differential diagnoses. This study aimed to evaluate whether a deep learning model could automatically segment the spleen and identify the cause of splenomegaly in patients with cirrhotic portal hypertension versus patients with lymphoma disease. This retrospective study included 149 patients with splenomegaly on computed tomography (CT) images (77 patients with cirrhotic portal hypertension, 72 patients with lymphoma) who underwent a CT scan between October 2020 and July 2021. The dataset was divided into a training (n = 99), a validation (n = 25) and a test cohort (n = 25). In the first stage, the spleen was automatically segmented using a modified U-Net architecture. In the second stage, the CT images were classified into two groups using a 3D DenseNet to discriminate between the causes of splenomegaly, first using the whole abdominal CT, and second using only the spleen segmentation mask. The classification performances were evaluated using the area under the receiver operating characteristic curve (AUC), accuracy (ACC), sensitivity (SEN), and specificity (SPE). Occlusion sensitivity maps were applied to the whole abdominal CT images, to illustrate which regions were important for the prediction. When trained on the whole abdominal CT volume, the DenseNet was able to differentiate between the lymphoma and liver cirrhosis in the test cohort with an AUC of 0.88 and an ACC of 0.88. When the model was trained on the spleen segmentation mask, the performance decreased (AUC = 0.81, ACC = 0.76). Our model was able to accurately segment splenomegaly and recognize the underlying cause. Training on whole abdomen scans outperformed training using the segmentation mask. Nonetheless, considering the performance, a broader and more general application to differentiate other causes for splenomegaly is also conceivable.
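The occlusion sensitivity maps mentioned above, used to visualize which image regions drive the classifier, can be sketched generically as follows (a toy `predict_fn` stands in for the 3D DenseNet; this is not the authors' code):

```python
import numpy as np

def occlusion_sensitivity(image, predict_fn, patch=4, fill=0.0):
    """Occlusion sensitivity map: slide a patch over the image, occlude it,
    and record the drop in the model's score. Large drops mark regions the
    model relies on. predict_fn is any callable image -> scalar score."""
    base = predict_fn(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = fill  # mask out this region
            heat[i // patch, j // patch] = base - predict_fn(occluded)
    return heat

# Toy "model" whose score is simply the mean of the top-left quadrant,
# so only occluding that quadrant changes the score
score = lambda img: img[:4, :4].mean()
img = np.ones((8, 8))
print(occlusion_sensitivity(img, score, patch=4))
```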
11
NDG-CAM: Nuclei Detection in Histopathology Images with Semantic Segmentation Networks and Grad-CAM. Bioengineering (Basel) 2022; 9:475. [PMID: 36135021] [PMCID: PMC9495364] [DOI: 10.3390/bioengineering9090475]
Abstract
Nuclei identification is a fundamental task in many areas of biomedical image analysis related to computational pathology applications. Nowadays, deep learning is the primary approach by which to segment the nuclei, but accuracy is closely linked to the amount of histological ground truth data for training. In addition, it is known that most of the hematoxylin and eosin (H&E)-stained microscopy nuclei images contain complex and irregular visual characteristics. Moreover, conventional semantic segmentation architectures grounded on convolutional neural networks (CNNs) are unable to recognize distinct overlapping and clustered nuclei. To overcome these problems, we present an innovative method based on gradient-weighted class activation mapping (Grad-CAM) saliency maps for image segmentation. The proposed solution is comprised of two steps. The first is the semantic segmentation obtained by the use of a CNN; then, the detection step is based on the calculation of local maxima of the Grad-CAM analysis evaluated on the nucleus class, allowing us to determine the positions of the nuclei centroids. This approach, which we denote as NDG-CAM, has performance in line with state-of-the-art methods, especially in isolating the different nuclei instances, and can be generalized for different organs and tissues. Experimental results demonstrated a precision of 0.833, recall of 0.815 and a Dice coefficient of 0.824 on the publicly available validation set. When used in combined mode with instance segmentation architectures such as Mask R-CNN, the method manages to surpass state-of-the-art approaches, with precision of 0.838, recall of 0.934 and a Dice coefficient of 0.884. Furthermore, performance on the external, locally collected validation set, with a Dice coefficient of 0.914 for the combined model, shows the generalization capability of the implemented pipeline, which has the ability to detect nuclei not only related to tumor or normal epithelium but also to other cytotypes.
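The detection step described above reduces to finding local maxima of the Grad-CAM saliency map; a generic sketch with `scipy.ndimage` (not the authors' exact pipeline, and the window size and threshold here are assumptions):

```python
import numpy as np
from scipy.ndimage import maximum_filter

def local_maxima_centroids(saliency, size=3, min_value=0.5):
    """Detect nuclei-centroid candidates as local maxima of a saliency map.
    A pixel is kept if it equals the maximum of its size x size neighborhood
    and exceeds an absolute threshold; returns (row, col) coordinates."""
    peaks = (saliency == maximum_filter(saliency, size=size)) & (saliency >= min_value)
    return np.argwhere(peaks)

# Toy saliency map with two isolated peaks
sal = np.zeros((7, 7))
sal[2, 2] = 0.9
sal[5, 5] = 0.7
print(local_maxima_centroids(sal))  # detections at (2, 2) and (5, 5)
```

The threshold suppresses flat background regions, which would otherwise trivially equal their neighborhood maximum.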
12
A Fusion Biopsy Framework for Prostate Cancer Based on Deformable Superellipses and nnU-Net. Bioengineering (Basel) 2022; 9:343. [PMID: 35892756] [PMCID: PMC9394419] [DOI: 10.3390/bioengineering9080343]
Abstract
In prostate cancer, fusion biopsy, which couples magnetic resonance imaging (MRI) with transrectal ultrasound (TRUS), poses the basis for targeted biopsy by allowing the comparison of information coming from both imaging modalities at the same time. Compared with the standard clinical procedure, it provides a less invasive option for the patients and increases the likelihood of sampling cancerous tissue regions for the subsequent pathology analyses. As a prerequisite to image fusion, segmentation must be achieved from both MRI and TRUS domains. The automatic contour delineation of the prostate gland from TRUS images is a challenging task due to several factors including unclear boundaries, speckle noise, and the variety of prostate anatomical shapes. Automatic methodologies, such as those based on deep learning, require a huge quantity of training data to achieve satisfactory results. In this paper, the authors propose a novel optimization formulation to find the best superellipse, a deformable model that can accurately represent the prostate shape. The advantage of the proposed approach is that it does not require extensive annotations, and can be used independently of the specific transducer employed during prostate biopsies. Moreover, in order to show the clinical applicability of the method, this study also presents a module for the automatic segmentation of the prostate gland from MRI, exploiting the nnU-Net framework. Lastly, segmented contours from both imaging domains are fused with a customized registration algorithm in order to create a tool that can help the physician to perform a targeted prostate biopsy by interacting with the graphical user interface.
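The deformable model at the core of this method is the superellipse, |x/a|^n + |y/b|^n = 1. A small sketch of its standard parametrization (illustrative only; the paper fits a deformable variant through its own optimization formulation):

```python
import numpy as np

def superellipse(a, b, n, num=200):
    """Sample points on the superellipse |x/a|^n + |y/b|^n = 1 (centered,
    axis-aligned). n = 2 gives an ordinary ellipse; larger n gives squarer,
    more prostate-like cross sections."""
    t = np.linspace(0, 2 * np.pi, num, endpoint=False)
    # signed-power parametrization: |x/a|^n = |cos t|^2, |y/b|^n = |sin t|^2
    x = a * np.sign(np.cos(t)) * np.abs(np.cos(t)) ** (2.0 / n)
    y = b * np.sign(np.sin(t)) * np.abs(np.sin(t)) ** (2.0 / n)
    return x, y

x, y = superellipse(a=3.0, b=2.0, n=2.5)
# every sampled point satisfies the implicit equation (values ~1 everywhere)
print(np.abs(x / 3.0) ** 2.5 + np.abs(y / 2.0) ** 2.5)
```

Because the whole contour is controlled by a handful of parameters (a, b, n, plus pose), fitting it to a TRUS image needs far fewer annotations than training a deep segmentation network.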
13
Abstract
Liver segmentation is a crucial step in surgical planning from computed tomography scans. The possibility to obtain a precise delineation of the liver boundaries with automatic techniques can help the radiologists, reducing the annotation time and providing more objective and repeatable results. Subsequent phases typically involve liver vessels' segmentation and liver segments' classification. It is especially important to recognize different segments, since each has its own vascularization, and so hepatic segmentectomies can be performed during surgery, avoiding the unnecessary removal of healthy liver parenchyma. In this work, we focused on the liver segments' classification task. We exploited a 2.5D Convolutional Neural Network (CNN), namely V-Net, trained with the multi-class focal Dice loss. The idea of focal loss was originally introduced for the cross-entropy loss function, aiming at focusing on "hard" samples and avoiding the gradient being overwhelmed by a large number of false negatives. In this paper, we introduce two novel focal Dice formulations, one based on the concept of the individual voxel's probability and another related to the Dice formulation for sets. By applying the multi-class focal Dice loss to the aforementioned task, we were able to obtain respectable results, with an average Dice coefficient among classes of 82.91%. Moreover, the knowledge of anatomic segments' configurations allowed the application of a set of rules during the post-processing phase, slightly improving the final segmentation results and obtaining an average Dice coefficient of 83.38%. The average accuracy was close to 99%. The best model turned out to be the one with the focal Dice formulation based on sets. We conducted the Wilcoxon signed-rank test to check whether these results were statistically significant, confirming their relevance.
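Focal Dice losses exist in several variants; the sketch below shows one common per-class form in which poorly segmented ("hard") classes dominate the loss. This is a generic illustration with an assumed exponent, not necessarily either of the two formulations introduced in the paper:

```python
import numpy as np

def soft_dice(prob, gt, eps=1e-7):
    """Soft Dice for one class from predicted probabilities and a binary mask."""
    inter = (prob * gt).sum()
    return (2.0 * inter + eps) / (prob.sum() + gt.sum() + eps)

def focal_dice_loss(probs, gts, gamma=2.0):
    """Average of (1 - Dice_c)^gamma over classes: well-segmented classes
    contribute little, so classes with low Dice dominate the gradient."""
    return float(np.mean([(1.0 - soft_dice(p, g)) ** gamma
                          for p, g in zip(probs, gts)]))

# Two classes over the same 4-voxel region: one easy, one hard
gt = np.array([1.0, 1.0, 0.0, 0.0])
easy = np.array([0.95, 0.9, 0.05, 0.1])   # near-perfect prediction
hard = np.array([0.4, 0.3, 0.6, 0.5])     # poor prediction
print(focal_dice_loss([easy, hard], [gt, gt]))
```

With gamma = 2, the easy class's Dice complement (~0.075) is squashed to ~0.006, while the hard class keeps a large contribution, which is the focalizing effect the abstract describes.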