1
Mei G, Yu J. Research on CT image segmentation and classification of liver tumors based on attention mechanism and improved U-Net model. Technol Health Care 2025:9287329251329294. [PMID: 40302502] [DOI: 10.1177/09287329251329294]
Abstract
Background: Liver cancer remains one of the most common causes of cancer death globally. Accurate segmentation of liver tumors from CT images is critical for diagnosis, treatment planning, and follow-up. Conventional segmentation techniques frequently struggle with the complexity of medical images, motivating the use of sophisticated artificial intelligence (AI) methods to improve accuracy and efficiency. Objective: The main objective of this study is to develop and evaluate an improved U-Net model (AM-UNet) that incorporates an attention mechanism to enhance the segmentation and classification accuracy of liver tumors in CT images, aiming to surpass previous techniques in accuracy, precision, recall, and F1-score. Methods: The dataset includes 194 liver tumor CT scans, obtained from 131 individuals for training and 70 for testing. The open-source 3DIRCAD-B dataset, which is incorporated into LiTS, contains images of both normal and pathological conditions. Preprocessing with Median Filtering (MF) and Histogram Equalization (HE) was applied to reduce noise and improve contrast. The AM-UNet model was then used to segment the tumors before classifying them as malignant or benign. Performance was assessed using accuracy, precision, recall, F1-score, and the receiver operating characteristic (ROC) curve. Results: The proposed AM-UNet model produced strong results, with a recall of 95%, accuracy of 92%, precision of 94%, and an F1-score of 93%. These metrics show that the model outperforms conventional techniques in correctly segmenting and classifying liver tumors in CT images. Conclusion: The AM-UNet model improves the segmentation and classification of liver tumors, delivering substantially better performance metrics than traditional methods. Its adoption could transform liver cancer diagnosis by assisting physicians in accurate tumor identification and treatment planning, leading to improved patient outcomes.
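To make the attention mechanism referenced in this abstract concrete, below is a minimal sketch of an additive attention gate of the kind commonly used in attention U-Nets, written in PyTorch. The module name, channel sizes, and wiring are illustrative assumptions, not the paper's actual AM-UNet definition.

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Additive attention gate in the style of attention U-Nets:
    skip-connection features x are reweighted by a gating signal g
    from the coarser decoder level, so the decoder can focus on
    tumor regions. All sizes here are illustrative."""

    def __init__(self, g_ch: int, x_ch: int, inter_ch: int):
        super().__init__()
        self.w_g = nn.Conv2d(g_ch, inter_ch, kernel_size=1)
        self.w_x = nn.Conv2d(x_ch, inter_ch, kernel_size=1)
        self.psi = nn.Sequential(nn.Conv2d(inter_ch, 1, kernel_size=1), nn.Sigmoid())
        self.relu = nn.ReLU(inplace=True)

    def forward(self, g: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # g and x are assumed to share spatial size (upsample g beforehand).
        attn = self.psi(self.relu(self.w_g(g) + self.w_x(x)))  # (N, 1, H, W) in [0, 1]
        return x * attn  # suppress irrelevant skip features

gate = AttentionGate(g_ch=128, x_ch=64, inter_ch=32)
out = gate(torch.randn(1, 128, 64, 64), torch.randn(1, 64, 64, 64))
print(out.shape)  # torch.Size([1, 64, 64, 64])
```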
Affiliation(s)
- Guang Mei
- Gongqing College of Nanchang University, Jiujiang, China
- Jinhua Yu
- Gongqing College of Nanchang University, Jiujiang, China
2
Jaitner N, Ludwig J, Meyer T, Boehm O, Anders M, Huang B, Jordan J, Schaeffter T, Sack I, Reiter R. Automated liver and spleen segmentation for MR elastography maps using U-Nets. Sci Rep 2025; 15:10762. [PMID: 40155744] [PMCID: PMC11953449] [DOI: 10.1038/s41598-025-95157-w]
Abstract
To compare pretrained and trained U-Nets for liver and spleen segmentation in multifrequency magnetic resonance elastography (MRE) magnitude images for automated quantification of shear wave speed (SWS). Seventy-two healthy participants (34 ± 11 years; BMI, 23 ± 2 kg/m²; 51 men) underwent multifrequency MRE at 1.5 T or 3 T. Volumes of interest (VOIs) of liver and spleen were generated from MRE magnitude images with mixed T2-T2* image contrast and then transferred to SWS maps. Pretrained and trained 2D and 3D U-Nets were compared with ground truth values obtained by manual segmentation using correlation analysis, intraclass correlation coefficients (ICCs), and Dice scores. For both VOI and SWS values, pairwise comparison revealed no statistically significant difference between ground truth and pretrained and trained U-Nets (all p ≥ 0.95). There was a strong positive correlation for SWS between ground truth and U-Nets, with R = 0.99 for liver and R = 0.81-0.84 for spleen. The ICC was 0.99 for liver and 0.90-0.92 for spleen, indicating excellent agreement for liver and good agreement for spleen for all U-Nets investigated. Dice scores showed excellent segmentation performance for all networks, with the 2D U-Net achieving slightly higher values for the liver (0.95) and spleen (0.90), though the differences between the three tested U-Nets were minimal. The excellent performance we found for automated liver and spleen segmentation when applying 2D and 3D U-Nets to MRE magnitude images suggests that fully automated quantification of MRE parameters within anatomical regions is feasible by leveraging the previously unexploited anatomical information conveyed in MRE magnitude images.
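As a point of reference for the Dice scores quoted above, the metric reduces to a few lines over binary masks. This is a generic sketch with toy masks, not the study's evaluation code.

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient between two binary masks (e.g., liver VOIs)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

# Toy example: two overlapping square "organ" masks on a 10x10 grid.
a = np.zeros((10, 10), dtype=bool); a[2:8, 2:8] = True
b = np.zeros((10, 10), dtype=bool); b[3:9, 3:9] = True
print(round(dice_score(a, b), 3))  # 0.694
```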
Affiliation(s)
- Noah Jaitner
- Department of Radiology, Charité-Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Hindenburgdamm 30, 12203, Berlin, Germany
- Jakob Ludwig
- Department of Radiology, Charité-Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Hindenburgdamm 30, 12203, Berlin, Germany
- Tom Meyer
- Department of Radiology, Charité-Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Hindenburgdamm 30, 12203, Berlin, Germany
- Oliver Boehm
- Department of Radiology, Charité-Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Hindenburgdamm 30, 12203, Berlin, Germany
- Matthias Anders
- Department of Radiology, Charité-Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Hindenburgdamm 30, 12203, Berlin, Germany
- Biru Huang
- Department of Radiology, Charité-Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Hindenburgdamm 30, 12203, Berlin, Germany
- Jakob Jordan
- Department of Radiology, Charité-Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Hindenburgdamm 30, 12203, Berlin, Germany
- Tobias Schaeffter
- Division of Medical Physics and Metrological Information Technology, Physikalisch-Technische Bundesanstalt, Abbestr. 2-12, 10587, Berlin, Germany
- Department of Medical Engineering, Technical University Berlin, Straße des 17. Juni 135, 10623, Berlin, Germany
- Ingolf Sack
- Department of Radiology, Charité-Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Hindenburgdamm 30, 12203, Berlin, Germany
- Rolf Reiter
- Department of Radiology, Charité-Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Hindenburgdamm 30, 12203, Berlin, Germany
- Berlin Institute of Health at Charité-Universitätsmedizin Berlin, BIH Biomedical Innovation Academy, BIH Charité Digital Clinician Scientist Program, Charitéplatz 1, 10117, Berlin, Germany
3
Asciak L, Kyeremeh J, Luo X, Kazakidi A, Connolly P, Picard F, O'Neill K, Tsaftaris SA, Stewart GD, Shu W. Digital twin assisted surgery, concept, opportunities, and challenges. NPJ Digit Med 2025; 8:32. [PMID: 39815013] [PMCID: PMC11736137] [DOI: 10.1038/s41746-024-01413-0]
Abstract
Computer-assisted surgery is becoming essential in modern medicine to accurately plan, guide, and perform surgeries. Similarly, Digital Twin technology is expected to be instrumental in the future of surgery, owing to its capacity to virtually replicate patient-specific interventions whilst providing real-time updates to clinicians. This perspective introduces the term Digital Twin-Assisted Surgery and discusses its potential to improve surgical precision and outcome, along with key challenges for successful clinical translation.
Affiliation(s)
- Lisa Asciak
- Department of Biomedical Engineering, Wolfson Centre, University of Strathclyde, Glasgow, UK
- Justicia Kyeremeh
- Department of Surgery, University of Cambridge, Cambridge Biomedical Campus, Cambridge, UK
- CRUK Cambridge Centre, Cambridge Biomedical Campus, Cambridge, UK
- Xichun Luo
- Centre for Precision Manufacturing, DMEM, University of Strathclyde, Glasgow, UK
- Asimina Kazakidi
- Department of Biomedical Engineering, Wolfson Centre, University of Strathclyde, Glasgow, UK
- Patricia Connolly
- Department of Biomedical Engineering, Wolfson Centre, University of Strathclyde, Glasgow, UK
- Frederic Picard
- Department of Biomedical Engineering, Wolfson Centre, University of Strathclyde, Glasgow, UK
- NHS Golden Jubilee University National Hospital, Clydebank, Glasgow, UK
- Kevin O'Neill
- Department of Neurosurgery, Division of Surgery and Cancer, Imperial College Healthcare NHS Trust, London, UK
- Sotirios A Tsaftaris
- Imaging, Data and Communications, The University of Edinburgh, EH9 3FG, Edinburgh, UK
- Grant D Stewart
- Department of Surgery, University of Cambridge, Cambridge Biomedical Campus, Cambridge, UK
- CRUK Cambridge Centre, Cambridge Biomedical Campus, Cambridge, UK
- Wenmiao Shu
- Department of Biomedical Engineering, Wolfson Centre, University of Strathclyde, Glasgow, UK
4
Karimi A, Seraj J, Mirzadeh Sarcheshmeh F, Fazli K, Seraj A, Eslami P, Khanmohamadi M, Sajjadian Moosavi H, Ghattan Kashani H, Sajjadian Moosavi A, Shariat Panahi M. Improving spleen segmentation in ultrasound images using a hybrid deep learning framework. Sci Rep 2025; 15:1670. [PMID: 39799236] [PMCID: PMC11724980] [DOI: 10.1038/s41598-025-85632-9]
Abstract
This paper introduces a novel method for spleen segmentation in ultrasound images, using a two-phase training approach. In the first phase, the SegFormerB0 network is trained to provide an initial segmentation. In the second phase, the network is further refined using the Pix2Pix structure, which enhances attention to details and corrects any erroneous or additional segments in the output. This hybrid method effectively combines the strengths of both SegFormer and Pix2Pix to produce highly accurate segmentation results. We have assembled the Spleenex dataset, consisting of 450 ultrasound images of the spleen, which is the first dataset of its kind in this field. Our method has been validated on this dataset, and the experimental results show that it outperforms existing state-of-the-art models. Specifically, our approach achieved a mean Intersection over Union (mIoU) of 94.17% and a mean Dice (mDice) score of 96.82%, surpassing models such as Splenomegaly Segmentation Network (SSNet), U-Net, and Variational autoencoder based methods. The proposed method also achieved a Mean Percentage Length Error (MPLE) of 3.64%, further demonstrating its accuracy. Furthermore, the proposed method has demonstrated strong performance even in the presence of noise in ultrasound images, highlighting its practical applicability in clinical environments.
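The two-phase idea, a coarse segmentation later corrected by a conditional network, can be sketched as follows. This is a toy refiner conditioned on the image and the phase-one mask; the actual study uses SegFormer-B0 and the full Pix2Pix adversarial training, so every layer size here is an assumption.

```python
import torch
import torch.nn as nn

class MaskRefiner(nn.Module):
    """Toy second-phase refiner: takes the ultrasound frame plus the
    coarse phase-one probability map and predicts a corrected mask."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1), nn.Sigmoid(),
        )

    def forward(self, image: torch.Tensor, coarse: torch.Tensor) -> torch.Tensor:
        # Condition on image and coarse mask jointly, channel-wise.
        return self.net(torch.cat([image, coarse], dim=1))

refiner = MaskRefiner()
img = torch.randn(1, 1, 128, 128)    # grayscale ultrasound frame
coarse = torch.rand(1, 1, 128, 128)  # phase-one probability map
print(refiner(img, coarse).shape)    # torch.Size([1, 1, 128, 128])
```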
Affiliation(s)
- Ali Karimi
- School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran, Iran
- Javad Seraj
- School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran, Iran
- Kasra Fazli
- School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran, Iran
- Amirali Seraj
- Faculty of Electrical Engineering, Shahid Beheshti University, Tehran, Iran
- Parisa Eslami
- Department of Information Systems, University of Maryland, Baltimore County, Baltimore, USA
- Masoud Shariat Panahi
- School of Mechanical Engineering, College of Engineering, University of Tehran, Tehran, Iran
5
Kaur J, Kaur P. A systematic literature analysis of multi-organ cancer diagnosis using deep learning techniques. Comput Biol Med 2024; 179:108910. [PMID: 39032244] [DOI: 10.1016/j.compbiomed.2024.108910]
Abstract
Cancer has become one of the deadliest diseases identified among individuals worldwide. The mortality rate has been increasing rapidly every year, which has driven progress in the various diagnostic technologies developed to handle this illness. Manual segmentation and classification over a large set of data modalities can be a challenging task. Therefore, a crucial requirement is to develop computer-assisted diagnostic systems intended for initial cancer identification. This article offers a systematic review of deep learning approaches using various image modalities to detect multi-organ cancers from 2012 to 2023. It emphasizes the detection of the five most predominant tumors, i.e., breast, brain, lung, skin, and liver. An extensive review has been carried out by collecting research and conference articles and book chapters from reputed international databases, i.e., Springer Link, IEEE Xplore, Science Direct, PubMed, and Wiley, that fulfill the criteria for quality evaluation. This systematic review summarizes the convolutional neural network model architectures and datasets used for identifying and classifying the diverse categories of cancer. The study provides an inclusive view of ensemble deep learning models that have achieved better evaluation results for classifying different images into cancer or healthy cases. This paper will give research scientists within the domain of medical imaging a broad understanding of which deep learning technique performs best on which type of dataset, how features are extracted, the different challenges encountered, and the anticipated solutions to these complex problems. Lastly, some remaining challenges and issues affecting health emergencies are discussed.
Affiliation(s)
- Jaspreet Kaur
- Department of Computer Engineering & Technology, Guru Nanak Dev University, Amritsar, Punjab, India
- Prabhpreet Kaur
- Department of Computer Engineering & Technology, Guru Nanak Dev University, Amritsar, Punjab, India
6
Ou J, Jiang L, Bai T, Zhan P, Liu R, Xiao H. ResTransUnet: An effective network combined with Transformer and U-Net for liver segmentation in CT scans. Comput Biol Med 2024; 177:108625. [PMID: 38823365] [DOI: 10.1016/j.compbiomed.2024.108625]
Abstract
Liver segmentation is a fundamental prerequisite for the diagnosis and surgical planning of hepatocellular carcinoma. Traditionally, the liver contour is drawn manually by radiologists using a slice-by-slice method. However, this process is time-consuming and error-prone, depending on the radiologist's experience. In this paper, we propose a new end-to-end automatic liver segmentation framework, named ResTransUNet, which exploits the Transformer's ability to capture global context for remote interactions and spatial relationships, as well as the excellent performance of the original U-Net architecture. The main contribution of this paper lies in proposing a novel fusion network that combines U-Net and Transformer architectures. In the encoding structure, a dual-path approach is utilized, where features are extracted separately using both convolutional neural networks (CNNs) and Transformer networks. Additionally, an effective feature enhancement unit is designed to transfer the global features extracted by the Transformer network to the CNN for feature enhancement. This model aims to address the drawbacks of traditional U-Net-based methods, such as feature loss during encoding and poor capture of global features. Moreover, it avoids the disadvantages of pure Transformer models, which suffer from large parameter sizes and high computational complexity. The experimental results on the LiTS2017 dataset demonstrate remarkable performance for our proposed model, with Dice coefficients, volumetric overlap error (VOE), and relative volume difference (RVD) values for liver segmentation reaching 0.9535, 0.0804, and -0.0007, respectively. Furthermore, to further validate the model's generalization capability, we conducted tests on the 3Dircadb, Chaos, and Sliver07 datasets. The experimental results demonstrate that the proposed method outperforms other closely related models with higher liver segmentation accuracy. In addition, significant improvements can be achieved by applying our method when handling liver segmentation with small and discontinuous liver regions, as well as blurred liver boundaries. The code is available at the website: https://github.com/Jouiry/ResTransUNet.
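One simple way to realize the described feature enhancement unit, transferring global Transformer features into the CNN path, is a channel-wise gate driven by globally pooled Transformer features. This is a hedged sketch of the dual-path idea; the actual ResTransUNet unit may be wired differently.

```python
import torch
import torch.nn as nn

class FeatureEnhancement(nn.Module):
    """Global features from the Transformer branch modulate the CNN
    branch through a learned channel-wise gate (illustrative design)."""

    def __init__(self, channels: int):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.gate = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())

    def forward(self, cnn_feat: torch.Tensor, trans_feat: torch.Tensor) -> torch.Tensor:
        g = self.gate(self.pool(trans_feat))  # (N, C, 1, 1) global gate
        return cnn_feat + cnn_feat * g        # residual, gated enhancement

fe = FeatureEnhancement(64)
print(fe(torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32)).shape)
```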
Affiliation(s)
- Jiajie Ou
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing, China
- Linfeng Jiang
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing, China; School of Computing and College of Design and Engineering, National University of Singapore, Singapore
- Ting Bai
- School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden
- Peidong Zhan
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing, China
- Ruihua Liu
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing, China
- Hanguang Xiao
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing, China
7
Yu C, Pei H. Dynamic Weighting Translation Transfer Learning for Imbalanced Medical Image Classification. Entropy (Basel) 2024; 26:400. [PMID: 38785649] [PMCID: PMC11119260] [DOI: 10.3390/e26050400]
Abstract
Medical image diagnosis using deep learning has shown significant promise in clinical medicine. However, it often encounters two major difficulties in real-world applications: (1) domain shift, which invalidates the trained model on new datasets, and (2) class imbalance problems leading to model biases towards majority classes. To address these challenges, this paper proposes a transfer learning solution, named Dynamic Weighting Translation Transfer Learning (DTTL), for imbalanced medical image classification. The approach is grounded in information and entropy theory and comprises three modules: Cross-domain Discriminability Adaptation (CDA), Dynamic Domain Translation (DDT), and Balanced Target Learning (BTL). CDA connects discriminative feature learning between source and target domains using a synthetic discriminability loss and a domain-invariant feature learning loss. The DDT unit develops a dynamic translation process for imbalanced classes between two domains, utilizing a confidence-based selection approach to select the most useful synthesized images to create a pseudo-labeled balanced target domain. Finally, the BTL unit performs supervised learning on the reassembled target set to obtain the final diagnostic model. This paper delves into maximizing the entropy of class distributions, while simultaneously minimizing the cross-entropy between the source and target domains to reduce domain discrepancies. By incorporating entropy concepts into our framework, our method not only significantly enhances medical image classification in practical settings but also innovates the application of entropy and information theory within deep learning and medical image processing realms. Extensive experiments demonstrate that DTTL achieves the best performance compared to existing state-of-the-art methods for imbalanced medical image classification tasks.
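The confidence-based selection in the DDT unit and the entropy objective can be illustrated in a few lines. The threshold value and the exact entropy definition are assumptions for illustration; the paper's formulation may differ.

```python
import numpy as np

def select_confident(probs: np.ndarray, threshold: float = 0.9):
    """Keep only samples whose top predicted probability exceeds a
    threshold, returning their indices and pseudo-labels."""
    keep = probs.max(axis=1) >= threshold
    return np.where(keep)[0], probs.argmax(axis=1)[keep]

def label_entropy(labels: np.ndarray, n_classes: int) -> float:
    """Entropy of the pseudo-label distribution; a balanced target set
    maximizes this quantity."""
    p = np.bincount(labels, minlength=n_classes) / len(labels)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

probs = np.array([[0.95, 0.05], [0.60, 0.40], [0.08, 0.92]])
idx, pseudo = select_confident(probs)
print(idx, pseudo, round(label_entropy(pseudo, 2), 3))  # [0 2] [0 1] 0.693
```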
Affiliation(s)
- Chenglin Yu
- School of Electronic & Information Engineering and Communication Engineering, Guangzhou City University of Technology, Guangzhou 510800, China
- Key Laboratory of Autonomous Systems and Networked Control, Ministry of Education, Unmanned Aerial Vehicle Systems Engineering Technology Research Center of Guangdong, South China University of Technology, Guangzhou 510640, China
- Hailong Pei
- Key Laboratory of Autonomous Systems and Networked Control, Ministry of Education, Unmanned Aerial Vehicle Systems Engineering Technology Research Center of Guangdong, School of Automation Science and Engineering, South China University of Technology, Guangzhou 510640, China
8
Dong J, Cheng G, Zhang Y, Peng C, Song Y, Tong R, Lin L, Chen YW. Tailored multi-organ segmentation with model adaptation and ensemble. Comput Biol Med 2023; 166:107467. [PMID: 37725849] [DOI: 10.1016/j.compbiomed.2023.107467]
Abstract
Multi-organ segmentation, which identifies and separates different organs in medical images, is a fundamental task in medical image analysis. Recently, the immense success of deep learning motivated its wide adoption in multi-organ segmentation tasks. However, due to expensive labor costs and required expertise, the availability of multi-organ annotations is usually limited, which poses a challenge to obtaining sufficient training data for deep learning-based methods. In this paper, we aim to address this issue by combining off-the-shelf single-organ segmentation models to develop a multi-organ segmentation model on the target dataset, which removes the dependence on annotated multi-organ data. To this end, we propose a novel dual-stage method that consists of a Model Adaptation stage and a Model Ensemble stage. The first stage enhances the generalization of each off-the-shelf segmentation model on the target domain, while the second stage distills and integrates knowledge from multiple adapted single-organ segmentation models. Extensive experiments on four abdomen datasets demonstrate that our proposed method can effectively leverage off-the-shelf single-organ segmentation models to obtain a tailored model for multi-organ segmentation with high accuracy.
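The naive starting point for the Model Ensemble stage, fusing per-organ probability maps from single-organ models into one multi-organ label map, can be sketched as below. The paper additionally adapts each model and distills the ensemble into a single network; this shows only the basic fusion step, with a crude background estimate as an assumption.

```python
import numpy as np

def fuse_single_organ_models(prob_maps: list) -> np.ndarray:
    """Each off-the-shelf model yields a foreground probability map for
    its own organ; stacking them with a background channel and taking an
    argmax gives a multi-organ label map (0 = background, k = organ k)."""
    fg = np.stack(prob_maps, axis=0)             # (K, H, W)
    bg = 1.0 - fg.max(axis=0, keepdims=True)     # crude background channel
    return np.concatenate([bg, fg], axis=0).argmax(axis=0)

liver = np.random.rand(64, 64)   # stand-ins for single-organ model outputs
spleen = np.random.rand(64, 64)
labels = fuse_single_organ_models([liver, spleen])
print(labels.shape, labels.max())  # (64, 64) and a value in {0, 1, 2}
```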
Affiliation(s)
- Jiahua Dong
- College of Computer Science and Technology, Zhejiang University, Hangzhou, 310027, China
- Guohua Cheng
- College of Computer Science and Technology, Zhejiang University, Hangzhou, 310027, China
- Yue Zhang
- Center for Medical Imaging, Robotics, Analytic Computing & Learning (MIRACLE), Suzhou Institute for Advanced Research, University of Science and Technology of China, Suzhou, 215163, China; School of Biomedical Engineering, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui, 230026, China
- Chengtao Peng
- Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei, 230026, China
- Yu Song
- Graduate School of Information Science and Engineering, Ritsumeikan University, Shiga, 525-8577, Japan
- Ruofeng Tong
- College of Computer Science and Technology, Zhejiang University, Hangzhou, 310027, China
- Lanfen Lin
- College of Computer Science and Technology, Zhejiang University, Hangzhou, 310027, China
- Yen-Wei Chen
- Graduate School of Information Science and Engineering, Ritsumeikan University, Shiga, 525-8577, Japan
9
Isaksson LJ, Summers P, Mastroleo F, Marvaso G, Corrao G, Vincini MG, Zaffaroni M, Ceci F, Petralia G, Orecchia R, Jereczek-Fossa BA. Automatic Segmentation with Deep Learning in Radiotherapy. Cancers (Basel) 2023; 15:4389. [PMID: 37686665] [PMCID: PMC10486603] [DOI: 10.3390/cancers15174389]
Abstract
This review provides a formal overview of current automatic segmentation studies that use deep learning in radiotherapy. It covers 807 published papers and includes multiple cancer sites, image types (CT/MRI/PET), and segmentation methods. We collect key statistics about the papers to uncover commonalities, trends, and methods, and identify areas where more research might be needed. Moreover, we analyzed the corpus by posing explicit questions aimed at providing high-quality and actionable insights, including: "What should researchers think about when starting a segmentation study?", "How can research practices in medical image segmentation be improved?", "What is missing from the current corpus?", and more. This allowed us to provide practical guidelines on how to conduct a good segmentation study in today's competitive environment that will be useful for future research within the field, regardless of the specific radiotherapeutic subfield. To aid in our analysis, we used the large language model ChatGPT to condense information.
Affiliation(s)
- Lars Johannes Isaksson
- Division of Radiation Oncology, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy
- Department of Oncology and Hemato-Oncology, University of Milan, 20141 Milan, Italy
- Paul Summers
- Division of Radiology, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy
- Federico Mastroleo
- Division of Radiation Oncology, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy
- Department of Translational Medicine, University of Piemonte Orientale (UPO), 20188 Novara, Italy
- Giulia Marvaso
- Division of Radiation Oncology, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy
- Giulia Corrao
- Division of Radiation Oncology, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy
- Maria Giulia Vincini
- Division of Radiation Oncology, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy
- Mattia Zaffaroni
- Division of Radiation Oncology, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy
- Francesco Ceci
- Department of Oncology and Hemato-Oncology, University of Milan, 20141 Milan, Italy
- Division of Nuclear Medicine, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy
- Giuseppe Petralia
- Department of Oncology and Hemato-Oncology, University of Milan, 20141 Milan, Italy
- Precision Imaging and Research Unit, Department of Medical Imaging and Radiation Sciences, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy
- Roberto Orecchia
- Scientific Directorate, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy
- Barbara Alicja Jereczek-Fossa
- Division of Radiation Oncology, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy
- Department of Oncology and Hemato-Oncology, University of Milan, 20141 Milan, Italy
10
Azuri I, Wattad A, Peri-Hanania K, Kashti T, Rosen R, Caspi Y, Istaiti M, Wattad M, Applbaum Y, Zimran A, Revel-Vilk S, Eldar YC. A Deep-Learning Approach to Spleen Volume Estimation in Patients with Gaucher Disease. J Clin Med 2023; 12:5361. [PMID: 37629403] [PMCID: PMC10455264] [DOI: 10.3390/jcm12165361]
Abstract
The enlargement of the liver and spleen (hepatosplenomegaly) is a common manifestation of Gaucher disease (GD). An accurate estimation of the liver and spleen volumes in patients with GD, using imaging tools such as magnetic resonance imaging (MRI), is crucial for the baseline assessment and for monitoring the response to treatment. A commonly used method in clinical practice to estimate the spleen volume is a formula based on measurements of the craniocaudal length, diameter, and thickness of the spleen in MRI. However, the inaccuracy of this formula is significant, which emphasizes the need for a more precise and reliable alternative. To this end, we employed deep-learning techniques to achieve a more accurate spleen segmentation and, subsequently, calculated the resulting spleen volume with higher accuracy on a test cohort of 20 patients with GD. Our results indicate that the mean error obtained using the deep-learning approach to spleen volume estimation is 3.6 ± 2.7%, which is significantly lower than that of the common formula approach, which resulted in a mean error of 13.9 ± 9.6%. These findings suggest that integrating deep-learning methods into routine clinical practice for spleen volume calculation could lead to improved diagnostic and monitoring outcomes.
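Once a segmentation mask is available, the volume estimate itself is elementary: count the voxels inside the mask and multiply by the voxel volume. A minimal sketch (the study's pipeline details are not reproduced here):

```python
import numpy as np

def volume_ml_from_mask(mask: np.ndarray, spacing_mm: tuple) -> float:
    """Organ volume from a binary segmentation mask: voxel count times
    voxel volume. spacing_mm is (dz, dy, dx) in mm; 1 mL = 1000 mm^3."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.astype(bool).sum() * voxel_mm3 / 1000.0

mask = np.zeros((40, 40, 40), dtype=bool)
mask[10:30, 10:30, 10:30] = True                   # toy 20^3-voxel "spleen"
print(volume_ml_from_mask(mask, (2.0, 1.0, 1.0)))  # 16.0 (mL)
```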
Affiliation(s)
- Ido Azuri
- Bioinformatics Unit, Department of Life Sciences Core Facilities, Weizmann Institute of Science, Rehovot 7610001, Israel
- Ameer Wattad
- Department of Radiology, Shaare Zedek Medical Center, Jerusalem 9103102, Israel
- Keren Peri-Hanania
- Department of Computer Science and Applied Mathematics, Weizmann Institute of Science, Rehovot 7610001, Israel
- Tamar Kashti
- Department of Computer Science and Applied Mathematics, Weizmann Institute of Science, Rehovot 7610001, Israel
- Ronnie Rosen
- Department of Computer Science and Applied Mathematics, Weizmann Institute of Science, Rehovot 7610001, Israel
- Yaron Caspi
- Department of Computer Science and Applied Mathematics, Weizmann Institute of Science, Rehovot 7610001, Israel
- Majdolen Istaiti
- Gaucher Unit, Shaare Zedek Medical Center, Jerusalem 9103102, Israel
- Makram Wattad
- Department of Radiology, Shaare Zedek Medical Center, Jerusalem 9103102, Israel
- Yaakov Applbaum
- Department of Radiology, Shaare Zedek Medical Center, Jerusalem 9103102, Israel
- Faculty of Medicine, Hebrew University, Jerusalem 9112102, Israel
- Ari Zimran
- Gaucher Unit, Shaare Zedek Medical Center, Jerusalem 9103102, Israel
- Faculty of Medicine, Hebrew University, Jerusalem 9112102, Israel
- Shoshana Revel-Vilk
- Gaucher Unit, Shaare Zedek Medical Center, Jerusalem 9103102, Israel
- Faculty of Medicine, Hebrew University, Jerusalem 9112102, Israel
- Yonina C. Eldar
- Department of Computer Science and Applied Mathematics, Weizmann Institute of Science, Rehovot 7610001, Israel
11
Shimron E, Perlman O. AI in MRI: Computational Frameworks for a Faster, Optimized, and Automated Imaging Workflow. Bioengineering (Basel) 2023; 10:492. [PMID: 37106679] [PMCID: PMC10135995] [DOI: 10.3390/bioengineering10040492]
Abstract
Over the last decade, artificial intelligence (AI) has made an enormous impact on a wide range of fields, including science, engineering, informatics, finance, and transportation [...].
Affiliation(s)
- Efrat Shimron
- Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, CA 94720, USA
- Or Perlman
- Department of Biomedical Engineering, Tel Aviv University, Tel Aviv 6997801, Israel
- Sagol School of Neuroscience, Tel Aviv University, Tel Aviv 6997801, Israel
12
Wali A, Ahmad M, Naseer A, Tamoor M, Gilani S. StynMedGAN: Medical images augmentation using a new GAN model for improved diagnosis of diseases. Journal of Intelligent & Fuzzy Systems 2023. [DOI: 10.3233/jifs-223996]
Abstract
Deep networks require a considerable amount of training data; otherwise, they generalize poorly. Data augmentation techniques help the network generalize better by providing more variety in the training data. Standard data augmentation techniques, such as flipping and scaling, produce new data that is a modified version of the original data. Generative adversarial networks (GANs) have been designed to generate new data that can be exploited. In this paper, we propose a new GAN model, named StynMedGAN, for synthetically generating medical images to improve the performance of classification models. StynMedGAN builds upon the state-of-the-art StyleGANv2, which has produced remarkable results generating all kinds of natural images. We introduce a regularization term, a normalized loss factor, into the existing discriminator loss of StyleGANv2. It is used to force the generator to produce normalized images and penalize it if it fails. Medical imaging modalities, such as X-rays, CT scans, and MRIs, are different in nature; we show that the proposed GAN extends the capacity of StyleGANv2 to handle medical images in a better way. This new GAN model (StynMedGAN) is applied to three types of medical imaging, X-rays, CT scans, and MRI, to produce more data for the classification tasks. To validate the effectiveness of the proposed model for classification, three classifiers (CNN, DenseNet121, and VGG-16) are used. Results show that classifiers trained with StynMedGAN-augmented data outperform other methods that only used the original data. The proposed model achieved 100%, 99.6%, and 100% accuracy for chest X-ray, chest CT scan, and brain MRI, respectively. The results are promising and favor a potentially important resource that can be used by practitioners and radiologists to diagnose different diseases.
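The regularization idea, penalizing the generator when its outputs are not properly normalized, might be realized along the following lines. This is a loose sketch under stated assumptions (non-saturating GAN loss, outputs expected in [-1, 1], weight lam); the paper's exact normalized loss factor is not reproduced here.

```python
import torch
import torch.nn.functional as F

def d_loss_with_norm_reg(d_real: torch.Tensor, d_fake: torch.Tensor,
                         fake_images: torch.Tensor, lam: float = 0.1) -> torch.Tensor:
    """Non-saturating discriminator loss plus a penalty on generated
    intensities that fall outside the normalized range [-1, 1]."""
    adv = F.softplus(-d_real).mean() + F.softplus(d_fake).mean()
    overflow = (fake_images.abs() - 1.0).clamp(min=0.0)  # out-of-range mass
    return adv + lam * overflow.mean()

loss = d_loss_with_norm_reg(torch.randn(8), torch.randn(8),
                            torch.randn(8, 1, 64, 64) * 1.5)
print(loss.item())
```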
Affiliation(s)
- Aamir Wali
- Department of Computer Science, National University of Computer and Emerging Sciences, Faisal Town, Lahore, Pakistan
- Muzammil Ahmad
- Department of Computer Science, National University of Computer and Emerging Sciences, Faisal Town, Lahore, Pakistan
- Asma Naseer
- Department of Computer Science, National University of Computer and Emerging Sciences, Faisal Town, Lahore, Pakistan
- Maria Tamoor
- Department of Computer Science, Forman Christian College University, Zahoor Ilahi Road, Lahore, Pakistan
- S.A.M. Gilani
- Department of Computer Science, National University of Computer and Emerging Sciences, Faisal Town, Lahore, Pakistan
13
Conticchio M, Maggialetti N, Rescigno M, Brunese MC, Vaschetti R, Inchingolo R, Calbi R, Ferraro V, Tedeschi M, Fantozzi MR, Avella P, Calabrese A, Memeo R, Scardapane A. Hepatocellular Carcinoma with Bile Duct Tumor Thrombus: A Case Report and Literature Review of 890 Patients Affected by Uncommon Primary Liver Tumor Presentation. J Clin Med 2023; 12:423. [PMID: 36675352] [PMCID: PMC9861411] [DOI: 10.3390/jcm12020423]
Abstract
Bile duct tumor thrombus (BDTT) is an uncommon finding in hepatocellular carcinoma (HCC), potentially mimicking cholangiocarcinoma (CCA). Recent studies have suggested that HCC with BDTT could represent a prognostic factor. We report the case of a 47-year-old male patient admitted to the University Hospital of Bari with abdominal pain. Blood tests revealed the presence of an untreated hepatitis B virus infection (HBV), with normal liver function and without jaundice. Abdominal ultrasonography revealed a cirrhotic liver with a segmental dilatation of the third bile duct segment, confirmed by a CT scan and liver MRI, which also identified a heterologous mass. No other focal hepatic lesions were identified. A percutaneous ultrasound-guided needle biopsy was then performed, detecting a moderately differentiated HCC. Finally, the patient underwent a third hepatic segmentectomy, and the histopathological analysis confirmed the endobiliary localization of HCC. Subsequently, the patient experienced a nodular recurrence in the fourth hepatic segment, which was treated with ultrasound-guided percutaneous radiofrequency ablation (RFA). This case shows that HCC with BDTT can mimic different types of tumors. It also indicates the value of an early multidisciplinary patient assessment to obtain an accurate diagnosis of HCC with BDTT, which may have prognostic value that has not been recognized until now.
Affiliation(s)
- Maria Conticchio
- Unit of Hepatobiliary Surgery, Miulli Hospital, 70124 Acquaviva Delle Fonti, Italy
- Nicola Maggialetti
- Interdisciplinary Department of Medicine, Section of Radiology and Radiation Oncology, University of Bari “Aldo Moro”, 70124 Bari, Italy
- Marco Rescigno
- Interdisciplinary Department of Medicine, Section of Radiology and Radiation Oncology, University of Bari “Aldo Moro”, 70124 Bari, Italy
- Maria Chiara Brunese
- Interdisciplinary Department of Medicine, Section of Radiology and Radiation Oncology, University of Bari “Aldo Moro”, 70124 Bari, Italy
- Roberto Vaschetti
- Interdisciplinary Department of Medicine, Section of Radiology and Radiation Oncology, University of Bari “Aldo Moro”, 70124 Bari, Italy
- Roberto Calbi
- Radiology Unit, Miulli Hospital, 70124 Acquaviva Delle Fonti, Italy
- Valentina Ferraro
- Unit of Hepatobiliary Surgery, Miulli Hospital, 70124 Acquaviva Delle Fonti, Italy
- Michele Tedeschi
- Unit of Hepatobiliary Surgery, Miulli Hospital, 70124 Acquaviva Delle Fonti, Italy
- Pasquale Avella
- Department of Clinical Medicine and Surgery, “Federico II” University of Naples, 80131 Naples, Italy
- Riccardo Memeo
- Unit of Hepatobiliary Surgery, Miulli Hospital, 70124 Acquaviva Delle Fonti, Italy
- Arnaldo Scardapane
- Interdisciplinary Department of Medicine, Section of Radiology and Radiation Oncology, University of Bari “Aldo Moro”, 70124 Bari, Italy
14
An active contour model reinforced by convolutional neural network and texture description. Neurocomputing 2023. [DOI: 10.1016/j.neucom.2023.01.047]
15
Two-Stage Deep Learning Model for Automated Segmentation and Classification of Splenomegaly. Cancers (Basel) 2022; 14:5476. [PMID: 36428569] [PMCID: PMC9688308] [DOI: 10.3390/cancers14225476]
Abstract
Splenomegaly is a common cross-sectional imaging finding with a variety of differential diagnoses. This study aimed to evaluate whether a deep learning model could automatically segment the spleen and identify the cause of splenomegaly in patients with cirrhotic portal hypertension versus patients with lymphoma disease. This retrospective study included 149 patients with splenomegaly on computed tomography (CT) images (77 patients with cirrhotic portal hypertension, 72 patients with lymphoma) who underwent a CT scan between October 2020 and July 2021. The dataset was divided into a training (n = 99), a validation (n = 25) and a test cohort (n = 25). In the first stage, the spleen was automatically segmented using a modified U-Net architecture. In the second stage, the CT images were classified into two groups using a 3D DenseNet to discriminate between the causes of splenomegaly, first using the whole abdominal CT, and second using only the spleen segmentation mask. The classification performances were evaluated using the area under the receiver operating characteristic curve (AUC), accuracy (ACC), sensitivity (SEN), and specificity (SPE). Occlusion sensitivity maps were applied to the whole abdominal CT images, to illustrate which regions were important for the prediction. When trained on the whole abdominal CT volume, the DenseNet was able to differentiate between the lymphoma and liver cirrhosis in the test cohort with an AUC of 0.88 and an ACC of 0.88. When the model was trained on the spleen segmentation mask, the performance decreased (AUC = 0.81, ACC = 0.76). Our model was able to accurately segment splenomegaly and recognize the underlying cause. Training on whole abdomen scans outperformed training using the segmentation mask. Nonetheless, considering the performance, a broader and more general application to differentiate other causes for splenomegaly is also conceivable.
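The "spleen segmentation mask" input variant from the second stage can be sketched as a simple masking step: voxels outside the predicted spleen are blanked so the classifier sees only the organ. The fill value (air in Hounsfield units) is an assumption.

```python
import numpy as np

def apply_spleen_mask(ct: np.ndarray, mask: np.ndarray,
                      fill_hu: float = -1024.0) -> np.ndarray:
    """Keep voxels inside the spleen mask; blank everything else."""
    out = np.full_like(ct, fill_hu)
    inside = mask.astype(bool)
    out[inside] = ct[inside]
    return out

ct = np.random.randint(-1024, 400, size=(16, 64, 64)).astype(np.float32)
mask = np.zeros(ct.shape, dtype=bool)
mask[4:12, 20:44, 20:44] = True           # toy spleen region
print(apply_spleen_mask(ct, mask).shape)  # (16, 64, 64)
```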
16
NDG-CAM: Nuclei Detection in Histopathology Images with Semantic Segmentation Networks and Grad-CAM. Bioengineering (Basel) 2022; 9:475. [PMID: 36135021] [PMCID: PMC9495364] [DOI: 10.3390/bioengineering9090475]
Abstract
Nuclei identification is a fundamental task in many areas of biomedical image analysis related to computational pathology applications. Nowadays, deep learning is the primary approach by which to segment the nuclei, but accuracy is closely linked to the amount of histological ground truth data for training. In addition, it is known that most of the hematoxylin and eosin (H&E)-stained microscopy nuclei images contain complex and irregular visual characteristics. Moreover, conventional semantic segmentation architectures grounded on convolutional neural networks (CNNs) are unable to recognize distinct overlapping and clustered nuclei. To overcome these problems, we present an innovative method based on gradient-weighted class activation mapping (Grad-CAM) saliency maps for image segmentation. The proposed solution is comprised of two steps. The first is the semantic segmentation obtained by the use of a CNN; then, the detection step is based on the calculation of local maxima of the Grad-CAM analysis evaluated on the nucleus class, allowing us to determine the positions of the nuclei centroids. This approach, which we denote as NDG-CAM, has performance in line with state-of-the-art methods, especially in isolating the different nuclei instances, and can be generalized for different organs and tissues. Experimental results demonstrated a precision of 0.833, recall of 0.815 and a Dice coefficient of 0.824 on the publicly available validation set. When used in combined mode with instance segmentation architectures such as Mask R-CNN, the method manages to surpass state-of-the-art approaches, with precision of 0.838, recall of 0.934 and a Dice coefficient of 0.884. Furthermore, performance on the external, locally collected validation set, with a Dice coefficient of 0.914 for the combined model, shows the generalization capability of the implemented pipeline, which has the ability to detect nuclei not only related to tumor or normal epithelium but also to other cytotypes.
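The detection step, reading nuclei centroids off the local maxima of the Grad-CAM map, can be approximated with a standard peak finder. Here skimage's peak_local_max stands in for the paper's maxima computation, and min_distance/threshold are illustrative hyperparameters.

```python
import numpy as np
from skimage.feature import peak_local_max

def centroids_from_cam(cam: np.ndarray, min_distance: int = 5,
                       threshold: float = 0.3) -> np.ndarray:
    """Nuclei centroids as local maxima of a (normalized) saliency map."""
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return peak_local_max(cam, min_distance=min_distance,
                          threshold_abs=threshold)  # (K, 2) row/col coords

# Toy saliency map with two Gaussian blobs standing in for nuclei.
yy, xx = np.mgrid[0:64, 0:64]
cam = np.exp(-((yy - 20) ** 2 + (xx - 20) ** 2) / 40.0) \
    + np.exp(-((yy - 45) ** 2 + (xx - 50) ** 2) / 40.0)
print(centroids_from_cam(cam))  # approximately [[20 20], [45 50]]
```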
17
A Fusion Biopsy Framework for Prostate Cancer Based on Deformable Superellipses and nnU-Net. Bioengineering (Basel) 2022; 9:343. [PMID: 35892756] [PMCID: PMC9394419] [DOI: 10.3390/bioengineering9080343]
Abstract
In prostate cancer, fusion biopsy, which couples magnetic resonance imaging (MRI) with transrectal ultrasound (TRUS), forms the basis for targeted biopsy by allowing the comparison of information coming from both imaging modalities at the same time. Compared with the standard clinical procedure, it provides a less invasive option for the patients and increases the likelihood of sampling cancerous tissue regions for the subsequent pathology analyses. As a prerequisite to image fusion, segmentation must be achieved from both MRI and TRUS domains. The automatic contour delineation of the prostate gland from TRUS images is a challenging task due to several factors including unclear boundaries, speckle noise, and the variety of prostate anatomical shapes. Automatic methodologies, such as those based on deep learning, require a huge quantity of training data to achieve satisfactory results. In this paper, the authors propose a novel optimization formulation to find the best superellipse, a deformable model that can accurately represent the prostate shape. The advantage of the proposed approach is that it does not require extensive annotations, and can be used independently of the specific transducer employed during prostate biopsies. Moreover, in order to show the clinical applicability of the method, this study also presents a module for the automatic segmentation of the prostate gland from MRI, exploiting the nnU-Net framework. Lastly, segmented contours from both imaging domains are fused with a customized registration algorithm in order to create a tool that can help the physician to perform a targeted prostate biopsy by interacting with the graphical user interface.
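For reference, a superellipse is the curve |x/a|^n + |y/b|^n = 1; its flexibility in interpolating between ellipses and rounded rectangles is what makes it a plausible prostate-contour model. The sketch below samples boundary points from the standard parametric form; the axis values are arbitrary.

```python
import numpy as np

def superellipse(a: float, b: float, n: float, num: int = 200) -> np.ndarray:
    """Boundary points of |x/a|^n + |y/b|^n = 1 via signed powers of
    cos/sin: x = a*sgn(cos t)|cos t|^(2/n), y = b*sgn(sin t)|sin t|^(2/n)."""
    t = np.linspace(0.0, 2.0 * np.pi, num)
    x = a * np.sign(np.cos(t)) * np.abs(np.cos(t)) ** (2.0 / n)
    y = b * np.sign(np.sin(t)) * np.abs(np.sin(t)) ** (2.0 / n)
    return np.stack([x, y], axis=1)

pts = superellipse(a=30.0, b=20.0, n=2.5)             # illustrative semi-axes in mm
print(pts.shape, round(np.abs(pts[:, 0]).max(), 1))   # (200, 2) 30.0
```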
18
Abstract
Liver segmentation is a crucial step in surgical planning from computed tomography scans. The possibility of obtaining a precise delineation of the liver boundaries with automatic techniques can help radiologists, reducing the annotation time and providing more objective and repeatable results. Subsequent phases typically involve liver vessel segmentation and liver segment classification. It is especially important to recognize different segments, since each has its own vascularization, so hepatic segmentectomies can be performed during surgery, avoiding the unnecessary removal of healthy liver parenchyma. In this work, we focused on the liver segment classification task. We exploited a 2.5D convolutional neural network (CNN), namely V-Net, trained with a multi-class focal Dice loss. The idea of focal loss was originally developed for the cross-entropy loss function, aiming at focusing on “hard” samples and avoiding the gradient being overwhelmed by a large number of false negatives. In this paper, we introduce two novel focal Dice formulations, one based on the concept of an individual voxel’s probability and another related to the Dice formulation for sets. By applying the multi-class focal Dice loss to the aforementioned task, we obtained respectable results, with an average Dice coefficient among classes of 82.91%. Moreover, the knowledge of anatomic segment configurations allowed the application of a set of rules during the post-processing phase, slightly improving the final segmentation results and yielding an average Dice coefficient of 83.38%. The average accuracy was close to 99%. The best model turned out to be the one with the focal Dice formulation based on sets. We conducted the Wilcoxon signed-rank test to check whether these results were statistically significant, confirming their relevance.
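To fix ideas, one common way to make a Dice loss "focal" is to raise the per-class Dice deficit to a power so that poorly segmented classes dominate the gradient. The sketch below implements that generic variant; the two novel formulations proposed in the work itself may differ from it.

```python
import torch
import torch.nn.functional as F

def focal_dice_loss(probs: torch.Tensor, target: torch.Tensor,
                    gamma: float = 2.0, eps: float = 1e-6) -> torch.Tensor:
    """Generic focal Dice: mean over classes of (1 - Dice_c)^(1/gamma).
    probs:  (N, C, ...) softmax probabilities
    target: (N, C, ...) one-hot ground truth"""
    dims = (0, *range(2, probs.ndim))          # sum over batch and space
    inter = (probs * target).sum(dims)
    denom = probs.sum(dims) + target.sum(dims)
    dice = (2 * inter + eps) / (denom + eps)   # per-class Dice in [0, 1]
    return ((1.0 - dice) ** (1.0 / gamma)).mean()

p = torch.softmax(torch.randn(2, 9, 16, 16, 16), dim=1)  # 8 segments + background
t = F.one_hot(torch.randint(0, 9, (2, 16, 16, 16)), 9).permute(0, 4, 1, 2, 3).float()
print(focal_dice_loss(p, t).item())
```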