1. Sobotka D, Herold A, Perkonigg M, Beer L, Bastati N, Sablatnig A, Ba-Ssalamah A, Langs G. Improving Vessel Segmentation with Multi-Task Learning and Auxiliary Data Available Only During Model Training. Comput Med Imaging Graph 2024; 114:102369. PMID: 38518411. DOI: 10.1016/j.compmedimag.2024.102369.
Abstract
Liver vessel segmentation in magnetic resonance imaging data is important for the computational analysis of vascular remodeling, associated with a wide spectrum of diffuse liver diseases. Existing approaches rely on contrast-enhanced imaging data, but the necessary dedicated imaging sequences are not uniformly acquired. Images without contrast enhancement are acquired more frequently, but vessel segmentation in them is challenging and requires large-scale annotated data. We propose a multi-task learning framework to segment vessels in liver MRI without contrast. It exploits auxiliary contrast-enhanced MRI data available only during training to reduce the need for annotated training examples. Our approach draws on paired native and contrast-enhanced data, with and without vessel annotations, for model training. Results show that auxiliary data improve the accuracy of vessel segmentation even when they are not available during inference. The advantage is most pronounced when only a few annotations are available for training, since the feature representation benefits from the shared task structure. A validation of this approach on augmenting a model for brain tumor segmentation confirms its benefits across different domains. An auxiliary informative imaging modality can thus augment expert annotations even if it is only available during training.
Affiliations
- Daniel Sobotka: Computational Imaging Research Lab, Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, Vienna, Austria
- Alexander Herold: Division of General and Paediatric Radiology, Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, Vienna, Austria
- Matthias Perkonigg: Computational Imaging Research Lab, Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, Vienna, Austria; Department of Medical Statistics, Informatics and Health Economics, Medical University of Innsbruck, Innsbruck, Austria
- Lucian Beer: Division of General and Paediatric Radiology, Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, Vienna, Austria
- Nina Bastati: Division of General and Paediatric Radiology, Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, Vienna, Austria
- Alina Sablatnig: Computational Imaging Research Lab, Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, Vienna, Austria
- Ahmed Ba-Ssalamah: Division of General and Paediatric Radiology, Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, Vienna, Austria
- Georg Langs: Computational Imaging Research Lab, Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, Vienna, Austria
2. Li S, Li XG, Zhou F, Zhang Y, Bie Z, Cheng L, Peng J, Li B. Automated segmentation of liver and hepatic vessels on portal venous phase computed tomography images using a deep learning algorithm. J Appl Clin Med Phys 2024:e14397. PMID: 38773719. DOI: 10.1002/acm2.14397.
Abstract
BACKGROUND: CT image segmentation of the liver and hepatic vessels can facilitate liver surgical planning. However, the time-consuming process and inter-observer variability of manual segmentation have limited its wider application in clinical practice.
PURPOSE: Our study aimed to propose an automated deep learning (DL) segmentation algorithm for the liver and hepatic vessels on portal venous phase CT images.
METHODS: This retrospective study developed a coarse-to-fine DL-based algorithm that was trained, validated, and tested on a private dataset of 413, 52, and 50 portal venous phase CT images, respectively. Additionally, the performance of the DL algorithm was extensively evaluated and compared with manual segmentation on an independent clinical dataset of preoperative contrast-enhanced CT images from 44 patients with hepatic focal lesions. Segmentation accuracy was quantified using the Dice Similarity Coefficient (DSC) and complementary metrics: Normalized Surface Dice (NSD) and the 95th-percentile Hausdorff distance (HD95) for liver segmentation, and Recall and Precision for hepatic vessel segmentation. Processing times for DL and manual segmentation were also compared.
RESULTS: Our DL algorithm achieved accurate liver segmentation with a DSC of 0.98, NSD of 0.92, and HD95 of 1.52 mm. DL segmentation of the hepatic veins, portal veins, and inferior vena cava attained DSCs of 0.86, 0.89, and 0.94, respectively. On the independent dataset of 44 clinical cases, the DL algorithm segmented both the liver and hepatic vessels significantly more accurately than the manual approach (all p < 0.001) and significantly reduced clinical postprocessing time (p < 0.001).
CONCLUSIONS: The proposed DL algorithm enables accurate and rapid segmentation of the liver and hepatic vessels on portal venous phase contrast-enhanced CT images.
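The headline metrics above, DSC and HD95, can be sketched with plain numpy; the toy 2D masks below stand in for the study's 3D CT labels, and for brevity HD95 is computed here over all foreground voxels rather than extracted surface voxels.

```python
import numpy as np

def dice(pred, gt):
    """DSC = 2|A intersect B| / (|A| + |B|) on boolean masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

def hd95(pred, gt):
    """95th percentile of symmetric nearest-point distances (voxel units)."""
    a = np.argwhere(pred).astype(float)
    b = np.argwhere(gt).astype(float)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    sym = np.hstack([d.min(axis=1), d.min(axis=0)])  # both directions
    return float(np.percentile(sym, 95))

gt = np.zeros((32, 32), dtype=bool); gt[8:24, 8:24] = True
pred = np.zeros_like(gt); pred[12:28, 8:24] = True   # shifted down 4 rows
print(dice(pred, gt))   # 0.75: overlap of 192 voxels out of 256 + 256
print(hd95(pred, gt))   # 4.0: worst 5% of mismatches are 4 voxels apart
```

Production pipelines typically compute HD95 on extracted boundary voxels and in physical (mm) units via the voxel spacing; the all-voxel version above keeps the sketch short.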
Affiliations
- Shengwei Li: Minimally Invasive Tumor Therapy Center, Beijing Hospital, Peking Union Medical College, Beijing, China
- Xiao-Guang Li: Minimally Invasive Tumor Therapy Center, Beijing Hospital, Peking Union Medical College, Beijing, China
- Fanyu Zhou: Minimally Invasive Tumor Therapy Center, Beijing Hospital, Peking Union Medical College, Beijing, China
- Yumeng Zhang: Minimally Invasive Tumor Therapy Center, Beijing Hospital, Peking Union Medical College, Beijing, China
- Zhixin Bie: Minimally Invasive Tumor Therapy Center, Beijing Hospital, Peking Union Medical College, Beijing, China
- Lin Cheng: Minimally Invasive Tumor Therapy Center, Beijing Hospital, Peking Union Medical College, Beijing, China
- Jinzhao Peng: Minimally Invasive Tumor Therapy Center, Beijing Hospital, Peking Union Medical College, Beijing, China
- Bin Li: Minimally Invasive Tumor Therapy Center, Beijing Hospital, Peking Union Medical College, Beijing, China
3. Xu J, Jiang W, Wu J, Zhang W, Zhu Z, Xin J, Zheng N, Wang B. Hepatic and portal vein segmentation with dual-stream deep neural network. Med Phys 2024. PMID: 38648676. DOI: 10.1002/mp.17090.
Abstract
BACKGROUND: Liver lesions mainly occur inside the liver parenchyma, where they are difficult to locate and have complicated relationships with essential vessels, so preoperative planning is crucial for their resection. Accurate segmentation of the hepatic veins and portal veins (PVs) on computed tomography (CT) images is of great importance for such planning. However, manually labeling vessel masks is laborious and time-consuming, and labels from different clinicians are prone to inconsistency. Developing an automatic segmentation algorithm for hepatic veins and PVs on CT images has therefore attracted the attention of researchers. Unfortunately, existing deep learning-based automatic segmentation methods are prone to misclassifying peripheral vessels.
PURPOSE: This study aims to provide a fully automatic and robust semantic segmentation algorithm for hepatic veins and PVs to guide subsequent preoperative planning. In addition, to address the lack of a public dataset for hepatic and PV segmentation, we revise the annotations of the Medical Segmentation Decathlon (MSD) hepatic vessel segmentation dataset and add masks for the hepatic veins (HVs) and PVs.
METHODS: We propose the Dual-stream Hepatic Portal Vein segmentation Network, whose dual-stream encoder combines convolutional and Transformer blocks to extract local features and long-range spatial information, capturing the anatomy of the hepatic and portal veins and avoiding misclassification of adjacent peripheral vessels. In addition, a multi-scale feature fusion block based on dilated convolution extracts multi-scale local features over expanded receptive fields, and a multi-level fusing attention module is introduced for efficient context extraction. A paired t-test is conducted to assess the significance of the Dice differences between the proposed and comparison methods.
RESULTS: Two datasets are constructed from the original MSD dataset. For each dataset, 50 cases are randomly selected for model evaluation under 5-fold cross-validation. The results show that our method outperforms state-of-the-art convolutional neural network-based and Transformer-based methods. Specifically, on the first dataset, our model reaches an overall Dice of 0.815, precision of 0.830, and sensitivity of 0.807; the Dice scores for the HVs and PVs are 0.835 and 0.796, also exceeding the comparison methods. Almost all p-values of the paired t-tests between the proposed and comparison approaches are smaller than 0.05. On the second dataset, the proposed algorithm achieves 0.749, 0.762, 0.726, 0.835, and 0.796 for overall Dice, precision, sensitivity, HV Dice, and PV Dice, the first four of which exceed the comparison methods.
CONCLUSIONS: The proposed method effectively resolves the misclassification of interlaced peripheral veins in the HV and PV segmentation task and outperforms the comparison methods on the relabeled dataset.
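The paired t-test used above compares per-case Dice scores of two models evaluated on the same cases. A minimal scipy sketch follows; the scores are made-up illustrations, not the paper's results.

```python
from scipy import stats

# Per-case Dice scores for two models on the SAME ten cases (toy values).
ours = [0.83, 0.81, 0.85, 0.80, 0.82, 0.84, 0.79, 0.83, 0.82, 0.81]
base = [0.78, 0.77, 0.80, 0.76, 0.79, 0.78, 0.75, 0.79, 0.77, 0.76]

# Paired test: operates on the per-case differences, which removes
# case-to-case difficulty variation from the comparison.
t_stat, p_value = stats.ttest_rel(ours, base)
print(f"t = {t_stat:.2f}, p = {p_value:.6f}")
```

A p-value below 0.05, as reported for almost all comparisons in the abstract, indicates the Dice difference is unlikely to arise by chance under the paired-samples null hypothesis.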
Affiliations
- Jichen Xu: National Key Laboratory of Human-Machine Hybrid Augmented Intelligence, National Engineering Research Center for Visual Information and Applications, and Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an, China
- Wei Jiang: Research Center of Artificial Intelligence of Shangluo, Shangluo University, Shangluo, China
- Jiayi Wu: National Key Laboratory of Human-Machine Hybrid Augmented Intelligence, National Engineering Research Center for Visual Information and Applications, and Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an, China
- Wei Zhang: Beijing Jingzhen Medical Technology Ltd., Beijing, China; Xi'an Zhizhenzhineng Technology Ltd., Xi'an, China; School of Telecommunications Engineering, Xidian University, Xi'an, China
- Zhenyu Zhu: Hepatobiliary Surgery Center, The Fifth Medical Center of PLA General Hospital, Beijing, China
- Jingmin Xin: National Key Laboratory of Human-Machine Hybrid Augmented Intelligence, National Engineering Research Center for Visual Information and Applications, and Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an, China
- Nanning Zheng: National Key Laboratory of Human-Machine Hybrid Augmented Intelligence, National Engineering Research Center for Visual Information and Applications, and Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an, China
- Bo Wang: Beijing Jingzhen Medical Technology Ltd., Beijing, China; Xi'an Zhizhenzhineng Technology Ltd., Xi'an, China; Institute of Medical Equipment Science and Engineering, Huazhong University of Science and Technology, Wuhan, China
4. Yang X, He D, Li Y, Li C, Wang X, Zhu X, Sun H, Xu Y. Deep learning-based vessel extraction in 3D confocal microscope images of cleared human glioma tissues. Biomed Opt Express 2024; 15:2498-2516. PMID: 38633068. PMCID: PMC11019690. DOI: 10.1364/boe.516541.
Abstract
Comprehensive visualization and accurate extraction of tumor vasculature are essential to study the nature of glioma. Tissue clearing technology now enables 3D visualization of human glioma vasculature at micron resolution, but current vessel extraction schemes cope poorly with complex tumor vessels that are highly disrupted and irregular under realistic conditions. Here, we developed FineVess, a deep learning-based framework that automatically extracts glioma vessels from confocal microscope images of cleared human tumor tissues. In the framework, a customized deep learning network, named 3D ResCBAM nnU-Net, segments the vessels, and a novel pipeline of preprocessing and post-processing refines the segmentation results automatically. Applied to a practical dataset, FineVess extracted variable and incomplete vessels with high accuracy in challenging 3D images, outperforming traditional and state-of-the-art schemes. For the extracted vessels, we calculated vascular morphology features, including fractal dimension and vascular wall integrity, across tumor grades, and verified vascular heterogeneity through quantitative analysis.
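One of the morphology features mentioned above, fractal dimension, is commonly estimated by box counting: cover the mask with boxes of decreasing size k and fit the slope of log N(k) against log(1/k). The sketch below uses a filled square as a toy stand-in for a vessel mask; its box-counting dimension is 2 by construction. This is a generic estimator, not the paper's specific implementation.

```python
import numpy as np

def box_count(mask: np.ndarray, k: int) -> int:
    """Number of k-by-k boxes that contain any foreground pixel."""
    s = mask.shape[0]
    blocks = mask[:s - s % k, :s - s % k].reshape(s // k, k, s // k, k)
    return int(blocks.any(axis=(1, 3)).sum())

def fractal_dimension(mask: np.ndarray, sizes=(2, 4, 8, 16)) -> float:
    counts = [box_count(mask, k) for k in sizes]
    # D is the slope of log N(k) versus log(1/k)
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return float(slope)

mask = np.ones((64, 64), dtype=bool)      # toy "vessel" mask
print(round(fractal_dimension(mask), 2))  # filled plane region -> 2.0
```

A sparse branching vessel tree would yield a non-integer dimension between 1 and 2 (between 1 and 3 in the 3D case), which is what makes the feature useful for grading vascular complexity.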
Affiliations
- Xiaodu Yang: School of Biomedical Engineering, Southern Medical University, Guangzhou, China; Guangdong Provincial Key Laboratory of Medical Imaging Processing, Southern Medical University, Guangzhou, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, China
- Dian He: Clinical Biobank Center, Microbiome Medicine Center, Department of Laboratory Medicine, Zhujiang Hospital, Southern Medical University, Guangzhou, China; Neurosurgery Center, The National Key Clinical Specialty, The Engineering Technology Research Center of Education Ministry of China on Diagnosis and Treatment of Cerebrovascular Disease, Guangdong Provincial Key Laboratory on Brain Function Repair and Regeneration, The Neurosurgery Institute of Guangdong Province Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Yu Li: School of Biomedical Engineering, Southern Medical University, Guangzhou, China; Guangdong Provincial Key Laboratory of Medical Imaging Processing, Southern Medical University, Guangzhou, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, China
- Chenyang Li: Clinical Biobank Center, Microbiome Medicine Center, Department of Laboratory Medicine, Zhujiang Hospital, Southern Medical University, Guangzhou, China; Neurosurgery Center, The National Key Clinical Specialty, The Engineering Technology Research Center of Education Ministry of China on Diagnosis and Treatment of Cerebrovascular Disease, Guangdong Provincial Key Laboratory on Brain Function Repair and Regeneration, The Neurosurgery Institute of Guangdong Province Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Xinyue Wang: Clinical Biobank Center, Microbiome Medicine Center, Department of Laboratory Medicine, Zhujiang Hospital, Southern Medical University, Guangzhou, China; Neurosurgery Center, The National Key Clinical Specialty, The Engineering Technology Research Center of Education Ministry of China on Diagnosis and Treatment of Cerebrovascular Disease, Guangdong Provincial Key Laboratory on Brain Function Repair and Regeneration, The Neurosurgery Institute of Guangdong Province Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Xingzheng Zhu: Institute of Applied Artificial Intelligence of the Guangdong-Hong Kong-Macao Greater Bay Area, Shenzhen Polytechnic University, Shenzhen, China
- Haitao Sun: Clinical Biobank Center, Microbiome Medicine Center, Department of Laboratory Medicine, Zhujiang Hospital, Southern Medical University, Guangzhou, China; Neurosurgery Center, The National Key Clinical Specialty, The Engineering Technology Research Center of Education Ministry of China on Diagnosis and Treatment of Cerebrovascular Disease, Guangdong Provincial Key Laboratory on Brain Function Repair and Regeneration, The Neurosurgery Institute of Guangdong Province Zhujiang Hospital, Southern Medical University, Guangzhou, China; Key Laboratory of Mental Health of the Ministry of Education, Guangdong-Hong Kong-Macao Greater Bay Area Center for Brain Science and Brain-Inspired Intelligence, Southern Medical University, Guangzhou, China
- Yingying Xu: School of Biomedical Engineering, Southern Medical University, Guangzhou, China; Guangdong Provincial Key Laboratory of Medical Imaging Processing, Southern Medical University, Guangzhou, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, China
5. Zhou Y, Zheng Y, Tian Y, Bai Y, Cai N, Wang P. SCAN: sequence-based context-aware association network for hepatic vessel segmentation. Med Biol Eng Comput 2024; 62:817-827. PMID: 38032458. DOI: 10.1007/s11517-023-02975-z.
Abstract
Accurate segmentation of hepatic vessels is important for surgeons designing preoperative plans for liver surgery. In this paper, a sequence-based context-aware association network (SCAN) is designed for hepatic vessel segmentation, in which three schemes are incorporated to simultaneously extract the 2D features of hepatic vessels and capture the correlations between adjacent CT slices. Two of these schemes, a slice-level attention module and a graph association module, bridge feature gaps between the encoder and the decoder in the low- and high-dimensional spaces. A region-edge constrained loss, integrating cross-entropy loss, Dice loss, and an edge-constrained loss, is designed to optimize the proposed SCAN effectively. Experimental results indicate that SCAN is superior to several existing deep learning frameworks, achieving a DSC of 0.845, precision of 0.856, sensitivity of 0.866, and F1-score of 0.861.
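A minimal numpy sketch of a region-edge constrained loss of the kind described above, combining cross-entropy, soft-Dice, and an edge term. The equal 1:1:1 weighting and the gradient-based edge map below are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def bce(p, g, eps=1e-7):
    """Pixel-wise binary cross-entropy (the region term)."""
    p = np.clip(p, eps, 1 - eps)
    return float(-(g * np.log(p) + (1 - g) * np.log(1 - p)).mean())

def soft_dice(p, g, eps=1e-7):
    """1 - soft Dice, differentiable overlap penalty."""
    return float(1.0 - (2 * (p * g).sum() + eps) / (p.sum() + g.sum() + eps))

def edge_term(p, g):
    """Squared error restricted to the label's boundary band."""
    gy, gx = np.gradient(g.astype(float))
    e = np.hypot(gx, gy) > 0               # crude boundary indicator
    return float(((p - g) ** 2)[e].mean()) if e.any() else 0.0

def region_edge_loss(p, g, w=(1.0, 1.0, 1.0)):
    return w[0] * bce(p, g) + w[1] * soft_dice(p, g) + w[2] * edge_term(p, g)

g = np.zeros((16, 16)); g[4:12, 4:12] = 1.0   # toy vessel label
good = np.clip(g, 0.05, 0.95)                  # near-perfect soft prediction
bad = 1.0 - good                               # inverted prediction
print(region_edge_loss(good, g) < region_edge_loss(bad, g))  # True
```

The edge term concentrates the penalty on boundary pixels, where thin vessels are most easily lost; in practice each term would be weighted and computed on network logits inside an autograd framework.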
Affiliations
- Yinghong Zhou: School of Information Engineering, Guangdong University of Technology, Guangzhou, China
- Yu Zheng: School of Information Engineering, Guangdong University of Technology, Guangzhou, China
- Yinfeng Tian: School of Information Engineering, Guangdong University of Technology, Guangzhou, China
- Youfang Bai: School of Information Engineering, Guangdong University of Technology, Guangzhou, China
- Nian Cai: School of Information Engineering, Guangdong University of Technology, Guangzhou, China
- Ping Wang: Department of Hepatobiliary Surgery, the First Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
6. Yu Y, Gao G, Gao X, Zhang Z, He Y, Shi L, Kang Z. A study on the radiomic correlation between CBCT and pCT scans based on modified 3D-RUnet image segmentation. Front Oncol 2024; 14:1301710. PMID: 38463234. PMCID: PMC10921553. DOI: 10.3389/fonc.2024.1301710.
Abstract
Purpose: Building on evidence of a potential correlation between cone-beam CT (CBCT) measurements of tumor size and shape and the stage of locally advanced rectal cancer, this study quantitatively assesses the correlation between positioning CT (pCT) and CBCT in the radiomics features of these cancers and examines their potential for substitution.
Methods: 103 patients diagnosed with locally advanced rectal cancer and undergoing neoadjuvant chemoradiotherapy were enrolled. Their CBCT and pCT images were split into a training set and a validation set at a 7:3 ratio. An improved conventional 3D-RUNet (CLA-UNet) deep learning model was trained on the training set and then applied to the validation set, with DSC, HD95, and ASSD calculated for quantitative evaluation. Radiomics features were then extracted from 30 patients of the test set.
Results: The modified model achieves an average DSC of 0.792 for pCT and 0.672 for CBCT scans. Of the 1037 features extracted from each patient's CBCT and pCT images, 73 had R values greater than 0.9, including three features related to the staging and prognosis of rectal cancer.
Conclusion: We propose an automatic, fast, and consistent method for rectal cancer GTV segmentation on pCT and CBCT scans. The radiomics results indicate that CBCT images have significant research value in the field of radiomics.
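The feature-screening step above (Pearson R between each feature's CBCT and pCT values across patients, keeping features with R > 0.9) can be sketched as follows. The random toy data and sizes are stand-ins for the study's 1037 extracted features.

```python
import numpy as np

rng = np.random.default_rng(0)
n_patients, n_features = 30, 50          # toy sizes, not the study's
pct = rng.normal(size=(n_patients, n_features))
# CBCT measurements that mostly agree with pCT, plus small noise
cbct = 0.97 * pct + 0.05 * rng.normal(size=pct.shape)

# Pearson R per feature, computed across patients
r = np.array([np.corrcoef(pct[:, j], cbct[:, j])[0, 1]
              for j in range(n_features)])
stable = np.flatnonzero(r > 0.9)          # CBCT-substitutable features
print(f"{stable.size} of {n_features} features have R > 0.9")
```

Features passing the threshold are the ones for which CBCT could plausibly substitute for pCT; in the study, 73 of 1037 features cleared it.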
Affiliations
- Yanjuan Yu: College of Electronic Engineering, Zhangzhou Institute of Technology, Zhangzhou, Fujian, China
- Guanglu Gao: Department of Radiation Oncology, the First Affiliated Hospital of Xiamen University, Xiamen, Fujian, China
- Xiang Gao: Department of Radiation Oncology, the First Affiliated Hospital of Xiamen University, Xiamen, Fujian, China
- Zongkai Zhang: Department of Radiation Oncology, the First Affiliated Hospital of Xiamen University, Xiamen, Fujian, China
- Yipeng He: Department of Radiation Oncology, the First Affiliated Hospital of Xiamen University, Xiamen, Fujian, China
- Liwan Shi: Department of Radiation Oncology, the First Affiliated Hospital of Xiamen University, Xiamen, Fujian, China
- Zheng Kang: Department of Radiation Oncology, the First Affiliated Hospital of Xiamen University, Xiamen, Fujian, China
7. Kock F, Thielke F, Abolmaali N, Meine H, Schenk A. Suitability of DNN-based vessel segmentation for SIRT planning. Int J Comput Assist Radiol Surg 2024; 19:233-240. PMID: 37535263. PMCID: PMC10838818. DOI: 10.1007/s11548-023-03005-x.
Abstract
PURPOSE: Segmentation of the hepatic arteries (HA) is essential for state-of-the-art pre-interventional planning of selective internal radiation therapy (SIRT), a treatment option for malignant tumors in the liver. In SIRT, a catheter is placed through the aorta into the tumor-feeding hepatic arteries, injecting small beads filled with radiation-emitting material for local radioembolization. In this study, we evaluate the suitability of deep neural network (DNN)-based vessel segmentation for SIRT planning.
METHODS: We applied our DNN-based HA segmentation to 36 contrast-enhanced computed tomography (CT) scans from the arterial contrast agent phase and rated its segmentation quality as well as the overall image quality. Additionally, we applied a traditional machine learning algorithm for HA segmentation as a comparison to our deep learning (DL) approach. Moreover, experts rated whether the produced HA segmentations can be used for SIRT planning.
RESULTS: The DL approach outperformed the traditional machine learning algorithm. The DL segmentation can be used for SIRT planning in [Formula: see text] of the cases, while the reference segmentations, which were manually created by experienced radiographers, are sufficient in [Formula: see text]. Seven DL cases cannot be used for SIRT planning although the corresponding reference segmentations are sufficient; conversely, two DL segmentations are usable for SIRT where the reference segmentations for the same cases were rated insufficient.
CONCLUSIONS: HA segmentation is a difficult and time-consuming task. DL-based methods have the potential to support and accelerate the pre-interventional planning of SIRT therapy.
Affiliations
- Farina Kock: Fraunhofer Institute for Digital Medicine MEVIS, Max-von-Laue-Str. 2, Bremen, 28359, Germany
- Felix Thielke: Fraunhofer Institute for Digital Medicine MEVIS, Max-von-Laue-Str. 2, Bremen, 28359, Germany
- Nasreddin Abolmaali: Diagnostic and Interventional Radiology and Nuclear Medicine, St. Josef-Hospital, University Hospitals of the Ruhr University of Bochum, Gudrunstr. 56, Bochum, 44791, Germany
- Hans Meine: Fraunhofer Institute for Digital Medicine MEVIS, Max-von-Laue-Str. 2, Bremen, 28359, Germany
- Andrea Schenk: Fraunhofer Institute for Digital Medicine MEVIS, Max-von-Laue-Str. 2, Bremen, 28359, Germany
8. Cafarchio A, Iasiello M, Vanoli GP, Andreozzi A. Microwave ablation modeling with AMICA antenna: Validation by means a numerical analysis. Comput Biol Med 2023; 167:107669. PMID: 37948968. DOI: 10.1016/j.compbiomed.2023.107669.
Abstract
BACKGROUND AND OBJECTIVES: Percutaneous microwave thermal ablation is based on electromagnetic waves that generate dielectric heating, and it is widely recognized as one of the most commonly used techniques for tumor treatment. The aim of this work is to validate a predictive model capable of providing physicians with guidelines to be used during thermal ablation procedures while avoiding collateral damage.
METHODS: A commercial finite element software, COMSOL Multiphysics, is employed to implement a tuning-parameter approach. Governing equations are written in a variable-porosity framework, and Local Thermal Non-Equilibrium (LTNE) equations are employed. The simulation results are compared with available ex-vivo and in-vivo data with the help of regression analysis. For the in-vivo simulations, the blood velocity magnitude and direction are varied between 0.0007 and 0.0009 m/s and 90-270°, respectively; these serve as tuning parameters, later optimized with respect to the differences from experimental outcomes, covering all possible directions of blood flow relative to the antenna, whose insertion angle is not recorded in the dataset.
RESULTS: The model is validated using reference data provided by the manufacturer (AMICA), obtained from ex-vivo bovine liver. The model accurately predicts the size and shape of the ablated area, with an overestimation of less than 10%. Additionally, predictions are compared to an in-vivo dataset: the ablated volume is accurately predicted with a mean underestimation of 6%, and the sphericity index is 0.75 and 0.62 for the predictions and the in-vivo data, respectively.
CONCLUSION: This study developed a predictive model for microwave ablation of liver tumors that performed well in predicting ablation dimensions and sphericity index for ex-vivo bovine liver and, with the tuning technique, for in-vivo human liver data. Additional development and validation are needed to enhance the accuracy and reliability of in-vivo application.
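The sphericity index above compares the ablation zone's shape to a perfect sphere. The abstract does not give its formula; the Wadell definition below, Psi = pi^(1/3) * (6V)^(2/3) / A, which equals 1 for a sphere, is one common choice, used here purely as an assumed illustration.

```python
import math

def sphericity(volume: float, surface_area: float) -> float:
    """Wadell sphericity: 1 for a sphere, < 1 for any other shape."""
    return math.pi ** (1 / 3) * (6 * volume) ** (2 / 3) / surface_area

# Perfect sphere, radius 1: Psi = 1 by construction.
V_sph = 4 / 3 * math.pi
A_sph = 4 * math.pi

# 2:1 prolate spheroid (a = 1, c = 2), closed-form surface area.
a, c = 1.0, 2.0
e = math.sqrt(1 - a ** 2 / c ** 2)
A_ell = 2 * math.pi * a ** 2 * (1 + c / (a * e) * math.asin(e))
V_ell = 4 / 3 * math.pi * a ** 2 * c

print(sphericity(V_sph, A_sph))   # 1.0
print(sphericity(V_ell, A_ell))   # elongated shape gives a value < 1
```

On this definition, the gap between the predicted index (0.75) and the in-vivo index (0.62) reported above reflects ablation zones that are more elongated in vivo than the model predicts.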
Affiliations
- A Cafarchio: Dipartimento di Medicina e Scienze della Salute DIMES, Università degli Studi del Molise, Campobasso, Italy
- M Iasiello: Dipartimento di Ingegneria Industriale DII, Università degli Studi di Napoli "Federico II", Napoli, Italy
- G P Vanoli: Dipartimento di Medicina e Scienze della Salute DIMES, Università degli Studi del Molise, Campobasso, Italy
- A Andreozzi: Dipartimento di Ingegneria Industriale DII, Università degli Studi di Napoli "Federico II", Napoli, Italy
9. Radiya K, Joakimsen HL, Mikalsen KØ, Aahlin EK, Lindsetmo RO, Mortensen KE. Performance and clinical applicability of machine learning in liver computed tomography imaging: a systematic review. Eur Radiol 2023; 33:6689-6717. PMID: 37171491. PMCID: PMC10511359. DOI: 10.1007/s00330-023-09609-w.
Abstract
OBJECTIVES: Machine learning (ML) for medical imaging is emerging for several organs and image modalities. Our objectives were to provide clinicians with an overview of this field by answering the following questions: (1) How is ML applied in liver computed tomography (CT) imaging? (2) How well do ML systems perform in liver CT imaging? (3) What are the clinical applications of ML in liver CT imaging?
METHODS: A systematic review was carried out according to the guidelines of the PRISMA-P statement. The search string focused on studies relating to artificial intelligence, the liver, and computed tomography.
RESULTS: One hundred ninety-one studies were included in the review. In the majority of studies, ML was applied to CT liver image analysis without clinician intervention, while newer studies combined ML methods with clinical input. Several models were documented to perform very accurately, though on reliable but small datasets. Most models identified were deep learning-based, mainly using convolutional neural networks. Our review identified many potential clinical applications of ML in CT liver imaging, including segmentation and classification of the liver and its lesions, segmentation of vascular structures inside the liver, fibrosis and cirrhosis staging, metastasis prediction, and evaluation of chemotherapy.
CONCLUSION: Several studies attempted to provide transparent model results. Prospective clinical validation studies are urgently needed to make these models convenient for clinical application; computer scientists and engineers should cooperate with health professionals to ensure this.
KEY POINTS:
• ML shows great potential for CT liver image tasks such as pixel-wise segmentation and classification of the liver and liver lesions, fibrosis staging, metastasis prediction, and retrieval of relevant liver lesions from similar cases of other patients.
• Although result reporting is not standardized, many studies have attempted to provide transparent results for interpreting ML performance in the literature.
• Prospective studies for clinical validation of ML methods are urgently needed, preferably carried out in cooperation between clinicians and computer scientists.
Collapse
Affiliation(s)
- Keyur Radiya
- Department of Gastroenterological Surgery at University Hospital of North Norway (UNN), Tromso, Norway.
- Department of Clinical Medicine, UiT The Arctic University of Norway, Tromso, Norway.
- Henrik Lykke Joakimsen
- Institute of Clinical Medicine, UiT The Arctic University of Norway, Tromso, Norway
- Centre for Clinical Artificial Intelligence (SPKI), University Hospital of North Norway, Tromso, Norway
- Karl Øyvind Mikalsen
- Department of Clinical Medicine, UiT The Arctic University of Norway, Tromso, Norway
- Centre for Clinical Artificial Intelligence (SPKI), University Hospital of North Norway, Tromso, Norway
- UiT Machine Learning Group, Department of Physics and Technology, UiT the Arctic University of Norway, Tromso, Norway
- Eirik Kjus Aahlin
- Department of Gastroenterological Surgery at University Hospital of North Norway (UNN), Tromso, Norway
- Rolv-Ole Lindsetmo
- Department of Clinical Medicine, UiT The Arctic University of Norway, Tromso, Norway
- Head Clinic of Surgery, Oncology and Women Health, University Hospital of North Norway, Tromso, Norway
- Kim Erlend Mortensen
- Department of Gastroenterological Surgery at University Hospital of North Norway (UNN), Tromso, Norway
- Department of Clinical Medicine, UiT The Arctic University of Norway, Tromso, Norway
10
Wei X, Li H, Zhu T, Li W, Li Y, Sui R. Deep Learning with Automatic Data Augmentation for Segmenting Schisis Cavities in the Optical Coherence Tomography Images of X-Linked Juvenile Retinoschisis Patients. Diagnostics (Basel) 2023; 13:3035. [PMID: 37835778 PMCID: PMC10572414 DOI: 10.3390/diagnostics13193035] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/12/2023] [Revised: 09/09/2023] [Accepted: 09/15/2023] [Indexed: 10/15/2023] Open
Abstract
X-linked juvenile retinoschisis (XLRS) is an inherited disorder characterized by retinal schisis cavities, which can be observed in optical coherence tomography (OCT) images. Monitoring disease progression necessitates accurate segmentation and quantification of these cavities, yet current manual methods are time-consuming and yield subjective interpretations, highlighting the need for automated and precise solutions. We employed five state-of-the-art deep learning models - U-Net, U-Net++, Attention U-Net, Residual U-Net, and TransUNet - for the task, leveraging a dataset of 1500 OCT images from 30 patients. To enhance the models' performance, we utilized data augmentation strategies optimized via deep reinforcement learning. The deep learning models achieved human-equivalent accuracy in the segmentation of schisis cavities, with U-Net++ surpassing the others by attaining an accuracy of 0.9927 and a Dice coefficient of 0.8568. By utilizing reinforcement-learning-based automatic data augmentation, deep learning segmentation models provide a robust and precise method for the automated segmentation of schisis cavities in OCT images. These findings are a promising step toward enhancing clinical evaluation and treatment planning for XLRS.
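The Dice coefficient reported above is the standard overlap metric for comparing a predicted segmentation with its ground truth. As a minimal illustration (not the authors' code), it can be computed from two binary masks with NumPy:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy example: two overlapping 4x4 masks.
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True   # 4 pixels
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True   # 6 pixels, 4 shared
print(round(dice_coefficient(a, b), 3))  # 2*4/(4+6) = 0.8
```

A Dice of 0.8568, as reported for U-Net++, thus means roughly 86% volumetric overlap between predicted and annotated cavities.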
Affiliation(s)
- Ruifang Sui
- Department of Ophthalmology, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Peking Union Medical College, No. 1, Shuai Fu Yuan, Beijing 100730, China; (X.W.); (H.L.); (T.Z.); (W.L.); (Y.L.)
11
Mortazi A, Cicek V, Keles E, Bagci U. Selecting the best optimizers for deep learning-based medical image segmentation. FRONTIERS IN RADIOLOGY 2023; 3:1175473. [PMID: 37810757 PMCID: PMC10551178 DOI: 10.3389/fradi.2023.1175473] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 02/27/2023] [Accepted: 09/01/2023] [Indexed: 10/10/2023]
Abstract
Purpose The goal of this work is to explore the best optimizers for deep learning in the context of medical image segmentation and to provide guidance on how to design segmentation networks with effective optimization strategies. Approach Most successful deep learning networks are trained using two types of stochastic gradient descent (SGD) algorithms: adaptive learning and accelerated schemes. Adaptive learning helps with fast convergence by starting with a larger learning rate (LR) and gradually decreasing it. Within the accelerated schemes category, momentum optimizers are particularly effective at quickly optimizing neural networks. By revealing the potential interplay between these two types of algorithms [LR and momentum optimizers, or momentum rate (MR) in short], in this article, we explore the two variants of SGD algorithms in a single setting. We suggest using cyclic learning as the base optimizer and integrating optimal values of learning rate and momentum rate. The new optimization function proposed in this work is based on the Nesterov accelerated gradient optimizer, which is more computationally efficient and has better generalization capabilities than other adaptive optimizers. Results We investigated the relationship between LR and MR on an important problem: medical image segmentation of cardiac structures from MRI and CT scans. We conducted experiments using the cardiac imaging dataset from the ACDC challenge of MICCAI 2017, and four different architectures were shown to be successful for cardiac image segmentation problems. Our comprehensive evaluations demonstrated that the proposed optimizer achieved better results (over a 2% improvement in the Dice metric) than other optimizers in the deep learning literature, with similar or lower computational cost, in both single- and multi-object segmentation settings.
Conclusions We hypothesized that combining accelerated and adaptive optimization methods can have a drastic effect on medical image segmentation performance. To this end, we proposed a new cyclic optimization method (Cyclic Learning/Momentum Rate) to address the efficiency and accuracy problems in deep learning-based medical image segmentation. The proposed strategy yielded better generalization than adaptive optimizers.
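The interplay the authors describe, cycling the learning rate while counter-cycling the momentum rate, can be sketched as a simple triangular schedule. The ranges below are illustrative placeholders, not the paper's tuned values:

```python
def cyclic_schedule(step: int, cycle_len: int,
                    lr_lo: float = 1e-4, lr_hi: float = 1e-2,
                    mr_lo: float = 0.85, mr_hi: float = 0.95):
    """Triangular cyclic learning rate with an inversely cycled momentum rate.

    All bounds are illustrative; optimal LR/MR ranges are task-specific.
    """
    # Position within the current cycle, mapped to a triangle wave in [0, 1].
    phase = (step % cycle_len) / cycle_len
    tri = 1.0 - abs(2.0 * phase - 1.0)      # 0 -> 1 -> 0 over one cycle
    lr = lr_lo + (lr_hi - lr_lo) * tri      # LR peaks mid-cycle
    mr = mr_hi - (mr_hi - mr_lo) * tri      # momentum dips when LR peaks
    return lr, mr
```

In a framework such as PyTorch, the returned pair would typically be written into the optimizer's parameter groups each step, with SGD configured as a Nesterov momentum optimizer (`nesterov=True`).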
Affiliation(s)
- Aliasghar Mortazi
- Department of Computer Vision and Image Analytic, Volastra Therapeutics, New York, NY, United States
- Vedat Cicek
- Department of Cardiology, Health Sciences University, Istanbul, Turkey
- Elif Keles
- Machine & Hybrid Intelligence Lab, Department of Radiology, Northwestern University, Chicago, IL, United States
- Ulas Bagci
- Machine & Hybrid Intelligence Lab, Department of Radiology, Northwestern University, Chicago, IL, United States
12
Wu M, Qian Y, Liao X, Wang Q, Heng PA. Hepatic vessel segmentation based on 3D swin-transformer with inductive biased multi-head self-attention. BMC Med Imaging 2023; 23:91. [PMID: 37422639 PMCID: PMC10329304 DOI: 10.1186/s12880-023-01045-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/27/2022] [Accepted: 06/05/2023] [Indexed: 07/10/2023] Open
Abstract
PURPOSE Segmentation of liver vessels from CT images is indispensable for surgical planning and has attracted broad interest in the medical image analysis community. Due to their complex structure and low-contrast background, automatic liver vessel segmentation remains particularly challenging. Most related studies adopt FCN, U-Net, and V-Net variants as a backbone. However, these methods mainly focus on capturing multi-scale local features, which may produce misclassified voxels due to the convolutional operator's limited local receptive field. METHODS We propose a robust end-to-end vessel segmentation network called Inductive BIased Multi-Head Attention Vessel Net (IBIMHAV-Net), which expands the Swin transformer to 3D and employs an effective combination of convolution and self-attention. In practice, we introduce voxel-wise rather than patch-wise embedding to locate precise liver vessel voxels, and adopt multi-scale convolutional operators to gain local spatial information. In addition, we propose an inductive-biased multi-head self-attention that learns inductively biased relative positional embeddings from initialized absolute position embeddings, yielding more reliable query and key matrices. RESULTS We conducted experiments on the 3DIRCADb dataset. The average Dice and sensitivity over the four tested cases were 74.8% and 77.5%, exceeding the results of existing deep learning methods and an improved graph-cuts method. The Branches Detected (BD) and Tree-length Detected (TD) indexes also demonstrated better global and local feature capture than other methods. CONCLUSION The proposed IBIMHAV-Net provides automatic, accurate 3D liver vessel segmentation with an interleaved architecture that better utilizes both global and local spatial features in CT volumes. It can be further extended to other clinical data.
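The relative positional bias at the core of this design can be illustrated in a 1D, single-head simplification: a learned table indexed by the offset i - j is added to the content scores before the softmax. This sketch assumes nothing from the paper beyond that general idea (it is not the 3D layer itself):

```python
import numpy as np

def attention_with_relative_bias(q, k, v, rel_bias):
    """Scaled dot-product attention plus a learned relative positional bias.

    q, k, v: (L, d) arrays; rel_bias: (2L-1,) table indexed by the offset i - j.
    A 1D, single-head simplification for illustration only.
    """
    L, d = q.shape
    scores = q @ k.T / np.sqrt(d)                         # (L, L) content scores
    idx = np.arange(L)
    bias = rel_bias[idx[:, None] - idx[None, :] + L - 1]  # B[i, j] = table[i - j]
    scores = scores + bias
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ v

rng = np.random.default_rng(0)
L, d = 5, 8
q, k, v = (rng.standard_normal((L, d)) for _ in range(3))
out = attention_with_relative_bias(q, k, v, rng.standard_normal(2 * L - 1))
print(out.shape)  # (5, 8)
```

Because the bias depends only on the offset, nearby voxels can be given a systematically different weight than distant ones, which is the inductive bias the paper exploits.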
Affiliation(s)
- Mian Wu
- Guangdong Provincial Key Laboratory of Computer Vision and Virtual Reality Technology, Shenzhen Institute of Advanced Technology, Chinese Academy of Science, Shenzhen, China
- Yinling Qian
- Guangdong Provincial Key Laboratory of Computer Vision and Virtual Reality Technology, Shenzhen Institute of Advanced Technology, Chinese Academy of Science, Shenzhen, China
- Xiangyun Liao
- Guangdong Provincial Key Laboratory of Computer Vision and Virtual Reality Technology, Shenzhen Institute of Advanced Technology, Chinese Academy of Science, Shenzhen, China.
- Qiong Wang
- Guangdong Provincial Key Laboratory of Computer Vision and Virtual Reality Technology, Shenzhen Institute of Advanced Technology, Chinese Academy of Science, Shenzhen, China
- Pheng-Ann Heng
- Guangdong Provincial Key Laboratory of Computer Vision and Virtual Reality Technology, Shenzhen Institute of Advanced Technology, Chinese Academy of Science, Shenzhen, China
- The Chinese University of Hong Kong, Hong Kong SAR, China
13
Lakshmipriya B, Pottakkat B, Ramkumar G. Deep learning techniques in liver tumour diagnosis using CT and MR imaging - A systematic review. Artif Intell Med 2023; 141:102557. [PMID: 37295904 DOI: 10.1016/j.artmed.2023.102557] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/04/2022] [Revised: 04/15/2023] [Accepted: 04/18/2023] [Indexed: 06/12/2023]
Abstract
Deep learning has become a thriving force in the computer aided diagnosis of liver cancer, as it solves extremely complicated challenges with high accuracy over time and facilitates medical experts in their diagnostic and treatment procedures. This paper presents a comprehensive systematic review on deep learning techniques applied for various applications pertaining to liver images, challenges faced by the clinicians in liver tumour diagnosis and how deep learning bridges the gap between clinical practice and technological solutions with an in-depth summary of 113 articles. Since, deep learning is an emerging revolutionary technology, recent state-of-the-art research implemented on liver images are reviewed with more focus on classification, segmentation and clinical applications in the management of liver diseases. Additionally, similar review articles in literature are reviewed and compared. The review is concluded by presenting the contemporary trends and unaddressed research issues in the field of liver tumour diagnosis, offering directions for future research in this field.
Affiliation(s)
- B Lakshmipriya
- Department of Surgical Gastroenterology, Jawaharlal Institute of Postgraduate Medical Education and Research, Puducherry, India
- Biju Pottakkat
- Department of Surgical Gastroenterology, Jawaharlal Institute of Postgraduate Medical Education and Research, Puducherry, India.
- G Ramkumar
- Department of Radio Diagnosis, Jawaharlal Institute of Postgraduate Medical Education and Research, Puducherry, India
14
Zhao H, Chen J, Yun Z, Feng Q, Zhong L, Yang W. Whole mandibular canal segmentation using transformed dental CBCT volume in Frenet frame. Heliyon 2023; 9:e17651. [PMID: 37449128 PMCID: PMC10336514 DOI: 10.1016/j.heliyon.2023.e17651] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2023] [Revised: 05/29/2023] [Accepted: 06/24/2023] [Indexed: 07/18/2023] Open
Abstract
Accurate segmentation of the mandibular canal is essential in dental implant and maxillofacial surgery, as it can help prevent nerve or vascular damage inside the mandibular canal. Achieving this is challenging because of the low contrast in CBCT scans and the small scale of mandibular canal areas. Several innovative methods have been proposed for mandibular canal segmentation with promising performance. However, most of these methods segment the mandibular canal from sliding patches, which may adversely affect the morphological integrity of the tubular structure. In this study, we propose whole mandibular canal segmentation using a transformed dental CBCT volume in the Frenet frame. Considering the connectivity of the mandibular canal, we transform the CBCT volume to obtain a sub-volume containing the whole mandibular canal based on the Frenet frame, ensuring complete 3D structural information. Moreover, to further improve segmentation performance, we use clDice to guarantee the integrity of the mandibular canal structure. Experimental results on our CBCT dataset show that integrating the proposed transformed volume in the Frenet frame into other state-of-the-art methods achieves a 0.5% to 12.1% improvement in Dice performance. Our proposed method achieves impressive results, with a Dice value of 0.865 (±0.035) and a clDice value of 0.971 (±0.020), suggesting that it can segment the mandibular canal with superior performance.
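The clDice metric used above rewards topological integrity by scoring the overlap between each mask and the other's centerline skeleton. A minimal sketch, with the skeletons assumed precomputed (e.g. via `skimage.morphology.skeletonize`):

```python
import numpy as np

def cl_dice(pred, target, skel_pred, skel_target, eps=1e-7):
    """Centerline Dice (clDice) from binary masks and their skeletons."""
    pred, target = pred.astype(bool), target.astype(bool)
    skel_pred, skel_target = skel_pred.astype(bool), skel_target.astype(bool)
    # Topology precision: fraction of the predicted skeleton lying inside the target mask.
    tprec = (skel_pred & target).sum() / (skel_pred.sum() + eps)
    # Topology sensitivity: fraction of the target skeleton lying inside the prediction.
    tsens = (skel_target & pred).sum() / (skel_target.sum() + eps)
    return 2 * tprec * tsens / (tprec + tsens + eps)
```

Unlike voxel-wise Dice, a break in the canal's centerline sharply reduces clDice even if the overall volume overlap stays high, which is why it suits tubular structures.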
Affiliation(s)
- Huanmiao Zhao
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou, 510515, China
- Junhua Chen
- Stomatology Hospital of Guangzhou Medical University, Guangzhou, 510140, China
- Zhaoqiang Yun
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou, 510515, China
- Qianjin Feng
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou, 510515, China
- Liming Zhong
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou, 510515, China
- Wei Yang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou, 510515, China
15
Chen C, Zhou K, Wang Z, Zhang Q, Xiao R. All answers are in the images: A review of deep learning for cerebrovascular segmentation. Comput Med Imaging Graph 2023; 107:102229. [PMID: 37043879 DOI: 10.1016/j.compmedimag.2023.102229] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/24/2022] [Revised: 03/03/2023] [Accepted: 04/03/2023] [Indexed: 04/14/2023]
Abstract
Cerebrovascular imaging is a common examination, and accurate cerebrovascular segmentation has become an important auxiliary method for the diagnosis and treatment of cerebrovascular diseases, receiving extensive attention from researchers. Deep learning is a heuristic method that encourages researchers to derive answers from the images through data-driven learning. With the continuous development of datasets and deep learning theory, it has achieved important successes in cerebrovascular segmentation. A detailed survey is an important reference for researchers. To comprehensively analyze the newest work on cerebrovascular segmentation, we have organized and discussed research centered on deep learning. This survey comprehensively reviews deep learning for cerebrovascular segmentation since 2015; it mainly covers sliding-window-based models, U-Net-based models, other CNN-based models, small-sample-based models, semi-supervised or unsupervised models, fusion-based models, Transformer-based models, and graphics-based models. We summarize the structures, improvements, and important parameters of these models, and analyze development trends and quantitative assessment. Finally, we discuss the challenges and opportunities of possible research directions, hoping that our survey can provide researchers with a convenient reference.
Affiliation(s)
- Cheng Chen
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, China
- Kangneng Zhou
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, China
- Zhiliang Wang
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, China
- Qian Zhang
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing 100070, China; China National Clinical Research Center for Neurological Diseases, Beijing 100070, China
- Ruoxiu Xiao
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, China; Shunde Innovation School, University of Science and Technology Beijing, Foshan 100024, China.
16
Li J, Chen J, Tang Y, Wang C, Landman BA, Zhou SK. Transforming medical imaging with Transformers? A comparative review of key properties, current progresses, and future perspectives. Med Image Anal 2023; 85:102762. [PMID: 36738650 PMCID: PMC10010286 DOI: 10.1016/j.media.2023.102762] [Citation(s) in RCA: 22] [Impact Index Per Article: 22.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2022] [Revised: 01/18/2023] [Accepted: 01/27/2023] [Indexed: 02/01/2023]
Abstract
Transformer, one of the latest technological advances of deep learning, has gained prevalence in natural language processing and computer vision. Since medical imaging bears some resemblance to computer vision, it is natural to inquire about the status quo of Transformers in medical imaging and ask: can Transformer models transform medical imaging? In this paper, we attempt to answer this question. After a brief introduction to the fundamentals of Transformers, especially in comparison with convolutional neural networks (CNNs), and a highlight of the key defining properties that characterize them, we offer a comprehensive review of state-of-the-art Transformer-based approaches for medical imaging, covering current research progress in medical image segmentation, recognition, detection, registration, reconstruction, enhancement, and beyond. What distinguishes our review is its organization based on the Transformer's key defining properties, mostly derived from comparing the Transformer and CNN, and on the type of architecture, which specifies the manner in which the Transformer and CNN are combined, helping readers best understand the rationale behind the reviewed approaches. We conclude with discussions of future perspectives.
Affiliation(s)
- Jun Li
- Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS, Beijing 100190, China
- Junyu Chen
- Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins Medical Institutes, Baltimore, MD, USA
- Yucheng Tang
- Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN, USA
- Ce Wang
- Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS, Beijing 100190, China
- Bennett A Landman
- Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN, USA
- S Kevin Zhou
- Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS, Beijing 100190, China; School of Biomedical Engineering & Suzhou Institute for Advanced Research, Center for Medical Imaging, Robotics, and Analytic Computing & Learning (MIRACLE), University of Science and Technology of China, Suzhou 215123, China.
17
A novel multi-attention, multi-scale 3D deep network for coronary artery segmentation. Med Image Anal 2023; 85:102745. [PMID: 36630869 DOI: 10.1016/j.media.2023.102745] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2022] [Revised: 12/13/2022] [Accepted: 01/05/2023] [Indexed: 01/11/2023]
Abstract
Automatic segmentation of coronary arteries provides vital assistance for accurate and efficient diagnosis and evaluation of coronary artery disease (CAD). However, the task of coronary artery segmentation (CAS) remains highly challenging due to the large-scale variations exhibited by coronary arteries, their complicated anatomical structures and morphologies, and the low contrast between vessels and their background. To comprehensively tackle these challenges, we propose a novel multi-attention, multi-scale 3D deep network for CAS, which we call CAS-Net. Specifically, we first propose an attention-guided feature fusion (AGFF) module to efficiently fuse adjacent hierarchical features in the encoding and decoding stages and thereby capture latent semantic information more effectively. Then, we propose a scale-aware feature enhancement (SAFE) module that dynamically adjusts the receptive fields to extract more expressive features, enhancing the feature representation capability of the network. Furthermore, we employ a multi-scale feature aggregation (MSFA) module to learn a more distinctive semantic representation for refining the vessel maps. In addition, considering that the limited amount of training data annotated to a high-quality gold standard is also a significant factor restricting the development of CAS, we construct a new dataset containing 119 cases of coronary computed tomographic angiography (CCTA) volumes with annotated coronary arteries. Extensive experiments on our self-collected dataset and three publicly available datasets demonstrate that the proposed method has good segmentation performance and generalization ability, outperforming multiple state-of-the-art algorithms on various metrics. Compared with U-Net3D, the proposed method significantly improves the Dice similarity coefficient (DSC) by at least 4% on each dataset, owing to the synergistic effect of the three core modules, AGFF, SAFE, and MSFA.
Our implementation is released at https://github.com/Cassie-CV/CAS-Net.
18
Alirr OI, Rahni AAA. Hepatic vessels segmentation using deep learning and preprocessing enhancement. J Appl Clin Med Phys 2023; 24:e13966. [PMID: 36933239 PMCID: PMC10161019 DOI: 10.1002/acm2.13966] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2022] [Revised: 02/09/2023] [Accepted: 03/03/2023] [Indexed: 03/19/2023] Open
Abstract
PURPOSE Hepatic vessel segmentation is a crucial step in the diagnostic workup of patients with hepatic diseases. Segmenting the liver vessels helps to study the liver's internal segmental anatomy, which aids preoperative planning of surgical treatment. METHODS Recently, convolutional neural networks (CNNs) have proved efficient for medical image segmentation. This paper proposes an automatic deep learning-based system for segmenting liver hepatic vessels in computed tomography (CT) datasets from different sources. The proposed work combines several steps: it starts with preprocessing to improve the appearance of the vessels within the liver region of interest in the CT scans, using coherence-enhancing diffusion (CED) filtering and vesselness filtering to improve vessel contrast and intensity homogeneity. The proposed U-Net-based network architecture is implemented with a modified residual block that includes a concatenation skip connection. The effect of the enhancement filtering step was studied, as was the effect of data mismatch between training and validation. RESULTS The proposed method was evaluated on several CT datasets, using the Dice similarity coefficient (DSC); the average DSC was 79%. CONCLUSIONS The proposed approach segments the liver vasculature from the liver envelope accurately, making it a potential tool for clinical preoperative planning.
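The vesselness filtering step can be illustrated with a single-scale, 2D Frangi-style filter. This is a bare NumPy sketch of the idea only: real pipelines smooth the image first, combine multiple scales, work in 3D, and (as in this paper) additionally apply CED filtering; the beta and c constants are illustrative defaults.

```python
import numpy as np

def vesselness2d(img: np.ndarray, beta: float = 0.5, c: float = 0.5) -> np.ndarray:
    """Single-scale Frangi-style vesselness for bright 2D ridges (illustrative sketch)."""
    # Hessian via repeated finite differences.
    gy, gx = np.gradient(img)
    hyy, hyx = np.gradient(gy)
    hxy, hxx = np.gradient(gx)
    # Eigenvalues of the symmetric 2x2 Hessian at every pixel.
    tmp = np.sqrt(((hxx - hyy) / 2) ** 2 + hxy ** 2)
    mu = (hxx + hyy) / 2
    l1, l2 = mu + tmp, mu - tmp
    # Order so that |lam1| <= |lam2|.
    swap = np.abs(l1) > np.abs(l2)
    lam1 = np.where(swap, l2, l1)
    lam2 = np.where(swap, l1, l2)
    rb2 = (lam1 / (lam2 + 1e-12)) ** 2          # blob-vs-line ratio (squared)
    s2 = lam1 ** 2 + lam2 ** 2                  # second-order "structureness"
    v = np.exp(-rb2 / (2 * beta ** 2)) * (1 - np.exp(-s2 / (2 * c ** 2)))
    return np.where(lam2 < 0, v, 0.0)           # bright ridges have lam2 < 0

# Synthetic bright tube on a dark background.
img = np.zeros((64, 64)); img[30:33, :] = 1.0
v = vesselness2d(img)
```

The filter responds strongly along the tube (one large negative eigenvalue, one near zero) and stays zero in flat regions, which is exactly the contrast boost the preprocessing step aims for.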
Affiliation(s)
- Omar Ibrahim Alirr
- College of Engineering and Technology, American University of the Middle East, Egaila, Kuwait
- Ashrani Aizzuddin Abd Rahni
- Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan, Bangi, Selangor, Malaysia
19
A skeleton context-aware 3D fully convolutional network for abdominal artery segmentation. Int J Comput Assist Radiol Surg 2023; 18:461-472. [PMID: 36273078 DOI: 10.1007/s11548-022-02767-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/15/2022] [Accepted: 09/26/2022] [Indexed: 11/05/2022]
Abstract
PURPOSE This paper proposes a deep learning-based method for abdominal artery segmentation. Blood vessel structure information is essential for diagnosis and treatment, and accurate blood vessel segmentation is critical for preoperative planning. Although deep learning-based methods perform well on large organs, segmenting small organs such as blood vessels is challenging due to their complicated branching structures and positions. We propose a 3D deep learning network designed from a skeleton context-aware perspective to improve segmentation accuracy. In addition, we propose a novel 3D patch generation method that strengthens the structural diversity of the training data set. METHOD The proposed method segments abdominal arteries from an abdominal computed tomography (CT) volume using a 3D fully convolutional network (FCN). We add two auxiliary tasks to the network to extract the skeleton context of the abdominal arteries. In addition, our skeleton-based patch generation (SBPG) method further enables the FCN to segment small arteries: SBPG generates a 3D patch from a CT volume by leveraging artery skeleton information. These methods improve the segmentation accuracy for small arteries. RESULTS We used 20 abdominal CT volumes to evaluate the proposed method. The experimental results showed that our method outperformed previous segmentation accuracies: the average precision, recall, and F-measure were 95.5%, 91.0%, and 93.2%, respectively. Compared to a baseline method, our method improved the average recall by 1.5% and the average F-measure by 0.7%. CONCLUSIONS We present a skeleton context-aware 3D FCN that segments abdominal arteries from an abdominal CT volume, together with a 3D patch generation method. Our fully automated method segmented most abdominal artery regions and produced competitive segmentation performance compared to previous methods.
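The precision, recall, and F-measure quoted above are the standard voxel-wise counts over binary masks; a minimal sketch (not the authors' evaluation code):

```python
import numpy as np

def segmentation_scores(pred, target, eps=1e-7):
    """Voxel-wise precision, recall, and F-measure for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    tp = (pred & target).sum()                  # true positive voxels
    precision = tp / (pred.sum() + eps)         # fraction of predictions that are correct
    recall = tp / (target.sum() + eps)          # fraction of the target that is found
    f_measure = 2 * precision * recall / (precision + recall + eps)
    return precision, recall, f_measure
```

The F-measure is the harmonic mean of precision and recall, so a high score such as 93.2% requires both over-segmentation and missed branches to be small at once.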
20
Wu R, Xin Y, Qian J, Dong Y. A multi-scale interactive U-Net for pulmonary vessel segmentation method based on transfer learning. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104407] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
21
Wang C, Huang Y, Liu C, Liu F, Hu X, Kuang X, An W, Liu C, Liu Y, Liu S, He R, Wang H, Qi X. Diagnosis of Clinically Significant Portal Hypertension Using CT- and MRI-based Vascular Model. Radiology 2023; 307:e221648. [PMID: 36719293 DOI: 10.1148/radiol.221648] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/01/2023]
Abstract
Background Currently, the hepatic venous pressure gradient (HVPG) remains the reference standard for diagnosis of clinically significant portal hypertension (CSPH) but is limited by its invasiveness and availability. Purpose To investigate a vascular geometric model for noninvasive diagnosis of CSPH (HVPG ≥10 mm Hg) in patients with liver cirrhosis for both contrast-enhanced CT and MRI. Materials and Methods In this retrospective study, consecutive patients with liver cirrhosis who underwent HVPG measurement from August 2016 to April 2019 were included. Patients without hepatic diseases were also included and marked as non-CSPH to balance the CSPH to non-CSPH ratio at 1:1. A variety of vascular parameters were extracted from the portal vein, hepatic vein, aorta, and inferior vena cava and then entered into a vascular geometric model for identification of CSPH. Diagnostic performance was assessed with the area under the receiver operating characteristic curve (AUC). Results The model was developed and tested with retrospective data from 250 patients with liver cirrhosis and 273 patients without clinical evidence of hepatic disease at contrast-enhanced CT examination, including 213 patients with CSPH (mean age, 49 years ± 12 [SD]; 138 women) and 310 patients without CSPH (mean age, 50 years ± 9; 177 women). For external validation, an MRI data set with 224 patients with cirrhosis (mean age, 49 years ± 10; 158 women) and a CT data set with 106 patients with cirrhosis (mean age, 53 years ± 12; 71 women) were analyzed. Significant reductions in mean whole-vessel volumes were observed in the portal vein (ranging from 36.9 cm3 ± 16.0 to 29.6 cm3 ± 11.1; P < .05) and hepatic vein (ranging from 35.3 cm3 ± 21.5 to 22.4 cm3 ± 15.7; P < .05) when CSPH occurred. Similarly, the mean whole-vessel lengths were shorter in patients with CSPH (portal vein: 1.7 m ± 1.2 vs 3.0 m ± 2.4, P < .05; hepatic vein: 0.9 m ± 1.5 vs 1.8 m ± 1.5, P < .05) than in those without CSPH.
The proposed vascular model performed well in the internal test set (mean AUC, 0.90 ± 0.02) and external test sets (mean AUCs, 0.84 ± 0.12 and 0.87 ± 0.11). Conclusion A contrast-enhanced CT- and MRI-based vascular model was proposed with good diagnostic consistency for hepatic venous pressure gradient measurement. ClinicalTrials.gov registration nos. NCT03138915 and NCT03766880 © RSNA, 2023 Supplemental material is available for this article. See also the editorial by Roldán-Alzate and Reeder in this issue.
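The AUC values above summarize how well the model's scores rank CSPH patients above non-CSPH patients; the AUC equals the probability that a random positive outscores a random negative, computable via the Mann-Whitney U statistic. A minimal sketch with illustrative data, not the study's model outputs:

```python
import numpy as np

def roc_auc(scores: np.ndarray, labels: np.ndarray) -> float:
    """Area under the ROC curve via the Mann-Whitney U statistic.

    labels: 1 for positive (e.g. CSPH), 0 for negative; higher score = positive.
    """
    ranks = np.empty(len(scores))
    order = np.argsort(scores)
    ranks[order] = np.arange(1, len(scores) + 1)
    for s in np.unique(scores):                 # average ranks over tied scores
        tie = scores == s
        ranks[tie] = ranks[tie].mean()
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

scores = np.array([0.9, 0.4, 0.8, 0.35, 0.7])
labels = np.array([1, 1, 0, 0, 0])
print(round(roc_auc(scores, labels), 3))  # 4 of 6 positive/negative pairs ranked correctly -> 0.667
```

An AUC of 0.90, as in the internal test set, thus means a randomly chosen CSPH patient outscores a randomly chosen non-CSPH patient 90% of the time.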
Affiliation(s)
- Chengyan Wang
- From the Human Phenome Institute (C.W., X.H., X.K., H.W.) and Institute of Science and Technology for Brain-inspired Intelligence (H.W.), Fudan University, Shanghai, China; Center of Portal Hypertension, Department of Radiology, Zhongda Hospital, Medical School, Southeast University, Nanjing, China (Y.H., Chuan Liu, X.Q.); Department of Radiology, Fifth Medical Center of Chinese People's Liberation Army General Hospital, Beijing, China (Changchun Liu, W.A.); Department of Interventional Therapy, Beijing Shijitan Hospital, Beijing, China (F.L.); Department of Gastroenterology and Hepatology, Beijing Youan Hospital, Capital Medical University, Beijing, China (Y.L.); and Institute of Portal Hypertension, The First Hospital of Lanzhou University, Lanzhou, China (S.L., R.H.)
- Yifei Huang
- Changchun Liu
- Fuquan Liu
- Xumei Hu
- Xutong Kuang
- Weimin An
- Chuan Liu
- Yanna Liu
- Shanghao Liu
- Ruiling He
- He Wang
- Xiaolong Qi
22
Tong G, Jiang H, Yao YD. SDA-UNet: a hepatic vein segmentation network based on the spatial distribution and density awareness of blood vessels. Phys Med Biol 2023; 68. [PMID: 36623320 DOI: 10.1088/1361-6560/acb199] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2022] [Accepted: 01/09/2023] [Indexed: 01/11/2023]
Abstract
Objective. Hepatic vein segmentation is a fundamental task for liver diagnosis and surgical navigation planning. Unlike other organs, the liver is the only organ with two sets of venous systems. Meanwhile, the distribution of segmentation targets in the hepatic vein scene is extremely unbalanced: the hepatic veins occupy only a small area in abdominal CT slices, and the morphology of the hepatic veins differs from person to person, which also makes segmentation difficult. The purpose of this study is to develop an automated hepatic vein segmentation model that guides clinical diagnosis. Approach. We introduce the 3D spatial distribution and density awareness (SDA) of hepatic veins and propose an automatic segmentation network based on 3D U-Net that includes a multi-axial squeeze and excitation module (MASE) and a distribution correction module (DCM). The MASE restricts activation to regions containing hepatic veins, while the DCM improves awareness of their sparse spatial distribution. To obtain global axial information and spatial information at the same time, we study the effect of different training strategies on hepatic vein segmentation. Our method was evaluated on a public dataset and a private dataset. The Dice coefficient reaches 71.37% and 69.58%, respectively, an improvement of 3.60% and 3.30% over the other state-of-the-art models. Furthermore, metrics based on distance and volume also show the superiority of our method. Significance. The proposed method greatly reduces false-positive areas and improves hepatic vein segmentation performance in CT images. It will assist doctors in making accurate diagnoses and in surgical navigation planning.
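The Dice coefficient this abstract reports is the standard overlap measure for segmentation masks: twice the intersection divided by the sum of the two mask sizes. A minimal sketch on hypothetical flattened binary masks (toy values, not the paper's data):

```python
def dice(pred, truth):
    """Dice = 2|A∩B| / (|A| + |B|) over flattened binary masks;
    defined as 1.0 when both masks are empty."""
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0

# Toy flattened masks: 1 = vessel voxel, 0 = background
pred  = [1, 1, 1, 0, 0, 1, 0, 0]
truth = [1, 1, 0, 0, 1, 1, 0, 0]
print(dice(pred, truth))  # → 0.75
```

Because the hepatic veins occupy so few voxels, Dice is preferred here over plain accuracy, which a model could maximize by predicting background everywhere.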
Affiliation(s)
- Guoyu Tong
- Software College, Northeastern University, Shenyang 110819, People's Republic of China
- Huiyan Jiang
- Software College, Northeastern University, Shenyang 110819, People's Republic of China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang 110819, People's Republic of China
- Yu-Dong Yao
- Department of Electrical and Computer Engineering, Stevens Institute of Technology, Hoboken, NJ 07030, United States of America
23
Huang X, Liu Y, Li Y, Qi K, Gao A, Zheng B, Liang D, Long X. Deep Learning-Based Multiclass Brain Tissue Segmentation in Fetal MRIs. SENSORS (BASEL, SWITZERLAND) 2023; 23:655. [PMID: 36679449 PMCID: PMC9862805 DOI: 10.3390/s23020655] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/16/2022] [Revised: 12/31/2022] [Accepted: 01/03/2023] [Indexed: 06/17/2023]
Abstract
Fetal brain tissue segmentation is essential for quantifying the presence of congenital disorders in the developing fetus. Manual segmentation of fetal brain tissue is cumbersome and time-consuming, so an automatic segmentation method can greatly simplify the process. In addition, the fetal brain undergoes a variety of changes throughout pregnancy, such as increased brain volume, neuronal migration, and synaptogenesis. As a result, the contrast between tissues, especially between gray matter and white matter, changes constantly throughout pregnancy, increasing the complexity and difficulty of segmentation. To reduce the burden of manual refinement of segmentation, we propose a new deep learning-based segmentation method. Our approach utilizes a novel attentional structural block, the contextual transformer block (CoT-Block), applied in the backbone network model of the encoder-decoder to guide the learning of dynamic attention matrices and enhance image feature extraction. Additionally, in the last layer of the decoder, we introduce a hybrid dilated convolution module, which expands the receptive field while retaining detailed spatial information, effectively extracting global contextual information in fetal brain MRI. We quantitatively evaluated our method using several performance measures: Dice, precision, sensitivity, and specificity. In 80 fetal brain MRI scans with gestational ages ranging from 20 to 35 weeks, we obtained an average Dice similarity coefficient (DSC) of 83.79%, an average Volume Similarity (VS) of 84.84%, and an average Hausdorff95 Distance (HD95) of 35.66 mm. We also compared several advanced deep learning segmentation models under equivalent conditions, and the results showed that our method was superior to the others and exhibited excellent segmentation performance.
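The HD95 metric in this abstract is the 95th-percentile Hausdorff distance: instead of taking the single worst boundary-to-boundary distance, it takes the 95th percentile of the directed distances in both directions, which makes it robust to a handful of outlier points. One common convention is sketched below on hypothetical toy boundary points (conventions for the percentile cutoff vary between toolkits):

```python
import math

def hd95(a, b):
    """95th-percentile symmetric Hausdorff distance between 2D point sets:
    pool nearest-neighbor distances in both directions, take the 95th
    percentile (nearest-rank convention)."""
    def dists(src, dst):
        return sorted(min(((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
                          for (x2, y2) in dst) for (x1, y1) in src)
    d = dists(a, b) + dists(b, a)
    d.sort()
    idx = max(0, math.ceil(0.95 * len(d)) - 1)
    return d[idx]

# Toy contours: prediction hugs the truth except for one stray point
truth = [(0, 0), (1, 0), (2, 0), (3, 0)]
pred  = [(0, 1), (1, 1), (2, 1), (3, 5)]
print(hd95(pred, truth))  # → 5.0 (with so few points the 95th percentile is the maximum)
```

On real segmentation boundaries with thousands of surface points, the 5% trim discards isolated spikes, which is why HD95 is reported instead of the plain Hausdorff distance.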
Affiliation(s)
- Xiaona Huang
- Research Center for Medical AI, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Department of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Yang Liu
- Shenzhen Maternity and Child Healthcare Hospital, Shenzhen 518027, China
- Yuhan Li
- Research Center for Medical AI, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Department of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Keying Qi
- Research Center for Medical AI, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Department of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Ang Gao
- Research Center for Medical AI, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Bowen Zheng
- Research Center for Medical AI, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Dong Liang
- Research Center for Medical AI, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Xiaojing Long
- Research Center for Medical AI, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
24
Lee SH, Lee J, Oh KS, Yoon JP, Seo A, Jeong Y, Chung SW. Automated 3-dimensional MRI segmentation for the posterosuperior rotator cuff tear lesion using deep learning algorithm. PLoS One 2023; 18:e0284111. [PMID: 37200275 DOI: 10.1371/journal.pone.0284111] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/16/2022] [Accepted: 03/23/2023] [Indexed: 05/20/2023] Open
Abstract
INTRODUCTION Rotator cuff tear (RCT) is a challenging and common musculoskeletal disease. Magnetic resonance imaging (MRI) is a commonly used diagnostic modality for RCT, but interpretation of the results is tedious and has some reliability issues. In this study, we aimed to evaluate the accuracy and efficacy of 3-dimensional (3D) MRI segmentation for RCT using a deep learning algorithm. METHODS A 3D U-Net convolutional neural network (CNN) was developed to detect, segment, and visualize RCT lesions in 3D, using MRI data from 303 patients with RCTs. The RCT lesions were labeled by two shoulder specialists across the entire MR image using in-house developed software. The MRI-based 3D U-Net CNN was trained after augmentation of the training dataset and tested using randomly selected test data (training:validation:test ratio of 6:2:2). The segmented RCT lesion was visualized in a three-dimensional reconstructed image, and the performance of the 3D U-Net CNN was evaluated using the Dice coefficient, sensitivity, specificity, precision, F1-score, and Youden index. RESULTS The deep learning algorithm using a 3D U-Net CNN successfully detected, segmented, and visualized the area of the RCT in 3D. The model reached a Dice coefficient of 94.3%, sensitivity of 97.1%, specificity of 95.0%, precision of 84.9%, F1-score of 90.5%, and Youden index of 91.8%. CONCLUSION The proposed model for 3D segmentation of RCT lesions using MRI data showed overall high accuracy and successful 3D visualization. Further studies are necessary to determine the feasibility of its clinical application and whether its use could improve care and outcomes.
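The Youden index reported here summarizes sensitivity and specificity in a single number: J = sensitivity + specificity − 1, ranging from 0 (no better than chance) to 1 (perfect). A minimal sketch from confusion-matrix counts; the counts are hypothetical, chosen only to mirror the scale of the reported values:

```python
def youden_index(tp, fn, tn, fp):
    """Youden's J statistic = sensitivity + specificity - 1."""
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    return sensitivity + specificity - 1.0

# Hypothetical lesion-detection confusion counts
print(round(youden_index(tp=97, fn=3, tn=95, fp=5), 2))  # → 0.92
```

Unlike accuracy, J is unaffected by class imbalance, which matters when torn and intact regions occupy very different volumes.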
Affiliation(s)
- Su Hyun Lee
- Department of Orthopaedic Surgery, Seoul Red Cross Hospital, Seoul, Korea
- JiHwan Lee
- Department of Orthopedic Surgery, Myongji Hospital, Goyang-si, Korea
- Kyung-Soo Oh
- Department of Orthopaedic Surgery, Konkuk University School of Medicine, Seoul, Korea
- Jong Pil Yoon
- Department of Orthopaedic Surgery, Kyungpook National University College of Medicine, Daegu, Korea
- Anna Seo
- SEEANN Solution, Yeonsu-gu, Incheon, Korea
- Seok Won Chung
- Department of Orthopaedic Surgery, Konkuk University School of Medicine, Seoul, Korea
25
Segmentation of Breast Tubules in H&E Images Based on a DKS-DoubleU-Net Model. BIOMED RESEARCH INTERNATIONAL 2022; 2022:2961610. [PMID: 36246965 PMCID: PMC9553497 DOI: 10.1155/2022/2961610] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/05/2022] [Accepted: 09/10/2022] [Indexed: 11/21/2022]
Abstract
The formation of breast tubules plays an important role in the pathological grading of breast cancer. Breast tubules, surrounded by large numbers of epithelial cells, are located in the subcutaneous tissue of the chest. Their shapes vary, including tubular, round, and oval, which makes breast tubule segmentation a difficult task. Deep learning technology, capable of learning complex data structures via efficient representation, could help pathologists accurately detect breast tubules in hematoxylin and eosin (H&E) stained images. In this paper, we propose a deep learning model named DKS-DoubleU-Net to accurately segment breast tubules with complex appearances in H&E images. The proposed DKS-DoubleU-Net uses a DenseNet module as the encoder of the second subnetwork of DoubleU-Net, which utilizes dense features between layers and strengthens the propagation of features extracted in all previous layers, in order to better discover the intrinsic characteristics of breast tubules with complex structures and diverse shapes. Moreover, a feature fusing module called the Kernel Selecting Module (KSM) is inserted before each output layer of the two U-Net branches of the DoubleU-Net, implementing multiscale feature fusion via self-adaptive kernel selection for accurate segmentation of breast tubules of different sizes. Experiments on the public BRACS dataset and a private clinical dataset show that our model achieves better segmentation performance than the state-of-the-art models U-Net, DoubleU-Net, ResUnet++, HRNet, and DeepLabV3+. Specifically, on the public BRACS dataset, our method produced an F1-score of 92.98%, outperforming U-Net, DoubleU-Net, and HRNet by 4.24%, 0.37%, and 1.68%, respectively, and DeepLabV3+ and ResUnet++ by 7.83% and 23.84%, respectively. On the private clinical dataset, the proposed model achieved an F1-score of 73.13%, an improvement of 10.31%, 1.89%, 4.88%, 15.47%, and 31.1% over U-Net, DoubleU-Net, HRNet, DeepLabV3+, and ResUnet++, respectively. Superior performance is also observed when comparing the proposed DKS-DoubleU-Net with the others using the Dice and mIoU metrics.
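The F1-score used throughout this comparison is the harmonic mean of precision and recall, which for binary masks reduces to 2·TP / (2·TP + FP + FN). A minimal sketch with hypothetical pixel counts (not the paper's data):

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall,
    equivalently 2*tp / (2*tp + fp + fn) for binary masks."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical pixel counts for one tubule-segmentation run
print(round(f1_score(tp=80, fp=10, fn=20), 3))  # → 0.842
```

For binary segmentation masks this quantity coincides with the Dice coefficient, which is why papers often report the two interchangeably.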
26
Guo N, Tian J, Wang L, Sun K, Mi L, Ming H, Zhe Z, Sun F. Discussion on the possibility of multi-layer intelligent technologies to achieve the best recover of musculoskeletal injuries: Smart materials, variable structures, and intelligent therapeutic planning. Front Bioeng Biotechnol 2022; 10:1016598. [PMID: 36246357 PMCID: PMC9561816 DOI: 10.3389/fbioe.2022.1016598] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2022] [Accepted: 09/14/2022] [Indexed: 11/16/2022] Open
Abstract
Although intelligent technologies have facilitated the development of precision orthopaedics, simple internal fixation, ligament reconstruction, or arthroplasty can only relieve patients' pain in the short term. To achieve the best recovery from musculoskeletal injuries, three bottlenecks must be broken through: scientific path planning, bioactive implants, and the building of personalized surgical channels. As scientific surgical paths can be planned and built through AI technology, 4D printing technology enables the manufacture of more bioactive implants, and variable structures can establish personalized channels precisely, it is possible to achieve satisfactory and effective musculoskeletal injury recovery with the progress of multi-layer intelligent technologies (MLIT).
Affiliation(s)
- Na Guo
- Department of Computer Science and Technology, Tsinghua University, Beijing, China
- Institute of Precision Medicine, Tsinghua University, Beijing, China
- Jiawen Tian
- Department of Computer Science and Technology, Tsinghua University, Beijing, China
- Institute of Precision Medicine, Tsinghua University, Beijing, China
- Litao Wang
- College of Engineering, China Agricultural University, Beijing, China
- Kai Sun
- Department of Biomedical Engineering, Tsinghua University, Beijing, China
- Lixin Mi
- Musculoskeletal Department, Beijing Rehabilitation Hospital, Beijing, China
- Hao Ming
- Orthopaedics, Chinese PLA General Hospital, Beijing, China
- Zhao Zhe
- Department of Biomedical Engineering, Tsinghua University, Beijing, China
- Fuchun Sun
- Department of Computer Science and Technology, Tsinghua University, Beijing, China
- Institute of Precision Medicine, Tsinghua University, Beijing, China
- *Correspondence: Fuchun Sun
27
Hao W, Zhang J, Su J, Song Y, Liu Z, Liu Y, Qiu C, Han K. HPM-Net: Hierarchical progressive multiscale network for liver vessel segmentation in CT images. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 224:107003. [PMID: 35868034 DOI: 10.1016/j.cmpb.2022.107003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/30/2022] [Revised: 06/29/2022] [Accepted: 07/03/2022] [Indexed: 06/15/2023]
Abstract
BACKGROUND AND OBJECTIVE The segmentation and visualization of liver vessels in 3D CT images are essential for computer-aided diagnosis and preoperative planning of liver diseases. Due to the irregular structure of liver vessels and image noise, accurate extraction of liver vessels is difficult. In particular, accurate segmentation of small vessels remains a challenge, as repeated down-sampling usually results in a loss of information. METHODS In this paper, we propose a hierarchical progressive multiscale learning network (HPM-Net) framework for liver vessel segmentation. First, the hierarchical progressive multiscale learning network combines internal and external progressive learning methods to learn semantic information about liver vessels at different scales by acquiring receptive fields of different sizes. Second, to better capture vessel features, we propose a dual-branch progressive 3D U-Net, which uses a dual-branch progressive (DBP) down-sampling strategy to reduce the loss of detailed information during network down-sampling. Finally, a deep supervision mechanism is introduced into the framework and backbone network to speed up convergence and achieve better training of the network. RESULTS We conducted experiments on the public 3Dircadb dataset for liver vessel segmentation. The average Dice coefficient and sensitivity of the proposed method reached 75.18% and 78.84%, respectively, both higher than those of the original network. CONCLUSION Experimental results show that the proposed hierarchical progressive multiscale network can accurately segment the labeled liver vessels from CT images.
Affiliation(s)
- Wen Hao
- School of Computer Science and Communication Engineering, Jiangsu University, Zhenjiang 212013, China
- Jing Zhang
- School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing 210094, China
- Jun Su
- School of Computer Science and Communication Engineering, Jiangsu University, Zhenjiang 212013, China
- Yuqing Song
- School of Computer Science and Communication Engineering, Jiangsu University, Zhenjiang 212013, China
- Zhe Liu
- School of Computer Science and Communication Engineering, Jiangsu University, Zhenjiang 212013, China
- Yi Liu
- School of Computer Science and Communication Engineering, Jiangsu University, Zhenjiang 212013, China
- Chengjian Qiu
- School of Computer Science and Communication Engineering, Jiangsu University, Zhenjiang 212013, China
- Kai Han
- School of Computer Science and Communication Engineering, Jiangsu University, Zhenjiang 212013, China
28
Song J, Joung S, Ghim YC, Hahn SH, Jang J, Lee J. Development of machine learning model for automatic ELM-burst detection without hyperparameter adjustment in KSTAR tokamak. NUCLEAR ENGINEERING AND TECHNOLOGY 2022. [DOI: 10.1016/j.net.2022.08.026] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/14/2022]
29
Survarachakan S, Prasad PJR, Naseem R, Pérez de Frutos J, Kumar RP, Langø T, Alaya Cheikh F, Elle OJ, Lindseth F. Deep learning for image-based liver analysis — A comprehensive review focusing on malignant lesions. Artif Intell Med 2022; 130:102331. [DOI: 10.1016/j.artmed.2022.102331] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2021] [Revised: 05/23/2022] [Accepted: 05/30/2022] [Indexed: 11/26/2022]
30
Enhanced Automatic Identification of Urban Community Green Space Based on Semantic Segmentation. LAND 2022. [DOI: 10.3390/land11060905] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/07/2022]
Abstract
At the neighborhood scale, recognizing urban community green space (UCGS) is important for residential living condition assessment and urban planning. However, current studies exhibit two key issues. First, existing studies have focused on large geographic scales, mixing urban and rural areas and neglecting the accuracy of green space contours at fine geographic scales. Second, green spaces covered by shadows often suffer misclassification. To address these issues, we created a neighborhood-scale UCGS dataset and proposed a segmentation decoder for the HRNet backbone with two auxiliary decoders. Our proposed model adds two additional branches to the low-resolution representations to improve their discriminative ability, thus enhancing the overall performance when the high- and low-resolution representations are fused. To evaluate the model, we tested it on a dataset of satellite images of Shanghai, China. The model outperformed the other nine models in UCGS extraction, with a precision of 83.01, recall of 85.69, IoU of 72.91, F1-score of 84.33, and OA of 89.31. Our model also improved the integrity of the identification of shaded green spaces over HRNetV2. The proposed method could offer a useful tool for efficient UCGS detection and mapping in urban planning.
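The IoU (intersection over union) metric reported in this abstract measures the overlap between predicted and reference regions as the size of their intersection divided by the size of their union. A minimal sketch on hypothetical flattened binary masks (toy values, not the study's data):

```python
def iou(pred, truth):
    """Intersection over Union for flattened binary masks;
    defined as 1.0 when both masks are empty."""
    inter = sum(p and t for p, t in zip(pred, truth))
    union = sum(p or t for p, t in zip(pred, truth))
    return inter / union if union else 1.0

# Toy masks: 1 = green-space pixel, 0 = other land cover
pred  = [1, 1, 0, 0, 1, 0]
truth = [1, 0, 0, 1, 1, 0]
print(round(iou(pred, truth), 3))  # → 0.5
```

IoU relates to the F1/Dice score reported alongside it as IoU = F1 / (2 − F1), so the two always rank models identically but IoU penalizes errors more heavily.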
31
Altini N, Prencipe B, Cascarano GD, Brunetti A, Brunetti G, Triggiani V, Carnimeo L, Marino F, Guerriero A, Villani L, Scardapane A, Bevilacqua V. Liver, kidney and spleen segmentation from CT scans and MRI with deep learning: A survey. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2021.08.157] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/18/2022]
32
Techniques and Algorithms for Hepatic Vessel Skeletonization in Medical Images: A Survey. ENTROPY 2022; 24:e24040465. [PMID: 35455128 PMCID: PMC9031516 DOI: 10.3390/e24040465] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/28/2022] [Revised: 03/21/2022] [Accepted: 03/23/2022] [Indexed: 02/01/2023]
Abstract
Hepatic vessel skeletonization serves as an important means of hepatic vascular analysis and vessel segmentation. This paper presents a survey of techniques and algorithms for hepatic vessel skeletonization in medical images. We summarize the latest developments and classical approaches in this field, classifying the methods into five categories according to their methodological characteristics; an overview and brief assessment of each category is provided in the corresponding section. We also provide a comprehensive summary of the cited publications, image modalities, and datasets from various aspects, aiming to reveal the pros and cons of each method, summarize its achievements, and discuss the challenges and future trends.
33
Hazarika RA, Maji AK, Syiem R, Sur SN, Kandar D. Hippocampus Segmentation Using U-Net Convolutional Network from Brain Magnetic Resonance Imaging (MRI). J Digit Imaging 2022; 35:893-909. [PMID: 35304675 PMCID: PMC9485390 DOI: 10.1007/s10278-022-00613-y]
Abstract
The hippocampus is a part of the limbic system in the human brain that plays an important role in forming memories and supporting intellectual abilities. In most neurological disorders related to dementia, such as Alzheimer's disease, the hippocampus is one of the earliest affected regions. Because there are no effective dementia drugs, an ambient assisted living approach may help to prevent or slow the progression of dementia. By segmenting and analyzing the size and shape of the hippocampus, it may be possible to classify the early stages of dementia. Because of its complex structure, traditional image segmentation techniques cannot segment the hippocampus accurately. Machine learning (ML) is a well-known tool in medical image processing that can deliver accurate predictions by learning from its previous results, and Convolutional Neural Networks (CNNs) are among the most popular ML algorithms. In this work, a U-Net convolutional network is used for hippocampus segmentation from 2D brain images. The original U-Net architecture, which performs a sequence of convolutions with a filter size of [Formula: see text], segments the hippocampus with an average performance rate of 93.6%, outperforming all other discussed state-of-the-art methods. We tweaked the architecture further to extract more relevant features by replacing all [Formula: see text] kernels with three alternative kernels of sizes [Formula: see text], [Formula: see text], and [Formula: see text]. The modified architecture achieved an average performance rate of 96.5%, convincingly outperforming the original U-Net model.
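The replaced kernel sizes are elided in the abstract above ("[Formula: see text]"), so the following back-of-the-envelope sketch uses hypothetical sizes (1, 3, and 5) purely to illustrate how swapping a single square kernel for several parallel kernels changes the parameter count:

```python
def conv_params(k, c_in, c_out, bias=True):
    """Parameters of one 2D convolution with a k x k kernel."""
    return k * k * c_in * c_out + (c_out if bias else 0)

def parallel_conv_params(kernel_sizes, c_in, c_out, bias=True):
    """Parameters when one convolution is replaced by parallel
    convolutions whose outputs are concatenated; the output
    channels are split evenly across the branches."""
    split = c_out // len(kernel_sizes)
    return sum(conv_params(k, c_in, split, bias) for k in kernel_sizes)

# Hypothetical example: one 3x3 conv (64 -> 64 channels) versus
# parallel 1x1 / 3x3 / 5x5 branches producing 63 channels total.
single = conv_params(3, 64, 64)
multi = parallel_conv_params([1, 3, 5], 64, 64)
```

The larger branches dominate the budget, which is why multi-kernel blocks usually assign fewer channels to the wide kernels in practice.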
Affiliation(s)
- Ruhul Amin Hazarika
- Department of Information Technology, North Eastern Hill University, Shillong, Meghalaya, 793022, India
- Arnab Kumar Maji
- Department of Information Technology, North Eastern Hill University, Shillong, Meghalaya, 793022, India
- Raplang Syiem
- Department of Information Technology, North Eastern Hill University, Shillong, Meghalaya, 793022, India
- Samarendra Nath Sur
- Department of Electronics and Communication Engineering, Sikkim Manipal Institute of Technology, Sikkim Manipal University, Majitar, Sikkim, 737136, India
- Debdatta Kandar
- Department of Information Technology, North Eastern Hill University, Shillong, Meghalaya, 793022, India
34
Yin J, Zhou Z, Xu S, Yang R, Liu K. A 3D Grouped Convolutional Network Fused with Conditional Random Field and Its Application in Image Multi-target Fine Segmentation. Int J Comput Intell Syst 2022. [DOI: 10.1007/s44196-022-00065-w]
Abstract
Aiming at the use of adjacent-slice correlation information in multi-target segmentation of 3D image slices, and at the optimization of segmentation results, a 3D grouped fully convolutional network fused with conditional random fields (3D-GFCN) is proposed. The model takes a fully convolutional network (FCN) as the segmentation infrastructure and a fully connected conditional random field (FCCRF) as the post-processing tool. It expands 2D convolutions into 3D operations and uses a shortcut-connection structure to fuse features of different levels and scales, realizing fine segmentation of 3D image slices. 3D-GFCN uses 3D convolution kernels to correlate the information of adjacent slices, uses the context correlation and probability exploration mechanism of the FCCRF to optimize segmentation results, and uses grouped convolutions to reduce the number of model parameters. The Dice loss, which can ignore the influence of background pixels, is used as the training objective to reduce the impact of the imbalance between background and target pixels. The model can automatically attend to target structures of different shapes and sizes in the image and highlight the salient features useful for specific tasks. This design mitigates shortcomings of existing image segmentation algorithms, such as weak morphological features of the target, weak correlation of spatial information, and discontinuous segmentation results, improving both multi-target segmentation accuracy and learning efficiency. Abnormal abdominal tissue detection and multi-target segmentation on 3D computed tomography (CT) images were used as verification experiments. On a small-scale, unbalanced dataset, the average Dice coefficient is 88.8%, the class pixel accuracy is 95.3%, and the Intersection over Union is 87.8%. Compared with other methods, the performance evaluation indices and segmentation accuracy are significantly improved, showing that the proposed method is well suited to typical multi-target image segmentation problems such as boundary overlap, offset deformation, and low contrast.
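The Dice loss mentioned above, which discounts the dominant background class, can be sketched in a few lines of NumPy (a minimal illustration, not the authors' implementation):

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss: true-negative (background) pixels add nothing
    to the numerator or denominator, so the loss is far less sensitive
    to a large background class than pixel-wise cross-entropy."""
    pred = pred.astype(np.float64).ravel()
    target = target.astype(np.float64).ravel()
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

p = np.array([[0.9, 0.1], [0.8, 0.0]])   # predicted probabilities
t = np.array([[1, 0], [1, 0]])           # ground-truth mask
loss = dice_loss(p, t)                   # close prediction -> small loss
```

A perfect prediction drives the loss to zero regardless of how many background pixels the volume contains.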
35
Li X, Bala R, Monga V. Robust Deep 3D Blood Vessel Segmentation Using Structural Priors. IEEE Trans Image Process 2022; 31:1271-1284. [PMID: 34990361 DOI: 10.1109/tip.2021.3139241]
Abstract
Deep learning has enabled significant improvements in the accuracy of 3D blood vessel segmentation. Open challenges remain in scenarios where labeled 3D segmentation maps for training are severely limited, as is often the case in practice, and in ensuring robustness to noise. Inspired by the observation that 3D vessel structures project onto 2D image slices with informative and unique edge profiles, we propose a novel deep 3D vessel segmentation network guided by edge profiles. Our network architecture comprises a shared encoder and two decoders that jointly learn segmentation maps and edge profiles. 3D context is mined in both the segmentation and edge prediction branches by employing bidirectional convolutional long short-term memory (BCLSTM) modules. 3D features from the two branches are concatenated to facilitate learning of the segmentation map. As a key contribution, we introduce new regularization terms that: a) capture the local homogeneity of 3D blood vessel volumes in the presence of biomarkers; and b) ensure robustness to domain-specific noise by suppressing false positive responses. Experiments on benchmark datasets with ground-truth labels reveal that the proposed approach outperforms state-of-the-art techniques on standard measures such as Dice overlap and mean Intersection-over-Union. The performance gains of our method are even more pronounced when training data is limited. Furthermore, the computational cost of our network inference is among the lowest compared with the state of the art.
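Edge-profile supervision of the kind described is commonly derived from the segmentation ground truth itself; a minimal 2D sketch of such a boundary-map target (an assumption for illustration, not the paper's exact procedure) is:

```python
import numpy as np

def edge_map(mask):
    """Boundary map obtained by removing the 4-neighbourhood
    interior of a binary mask: a pixel is interior only if it and
    all four axis-aligned neighbours are foreground."""
    m = mask.astype(bool)
    padded = np.pad(m, 1, mode="constant")
    interior = (padded[1:-1, 1:-1] & padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:])
    return (m & ~interior).astype(np.uint8)

mask = np.zeros((5, 5), dtype=np.uint8)
mask[1:4, 1:4] = 1          # a 3x3 foreground square
edges = edge_map(mask)      # hollow outline: 8 border pixels remain
```

The resulting map can serve as the training target for an auxiliary edge decoder alongside the main segmentation decoder.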
36
Luximon DC, Abdulkadir Y, Chow PE, Morris ED, Lamb JM. Machine-assisted interpolation algorithm for semi-automated segmentation of highly deformable organs. Med Phys 2022; 49:41-51. [PMID: 34783027 PMCID: PMC8758550 DOI: 10.1002/mp.15351]
Abstract
PURPOSE Accurate and robust auto-segmentation of highly deformable organs (HDOs), for example the stomach or bowel, remains an outstanding problem due to these organs' frequent and large anatomical variations. Yet time-consuming manual segmentation of these organs presents a particular challenge to time-limited modern radiotherapy techniques such as on-line adaptive radiotherapy and high-dose-rate brachytherapy. We propose a machine-assisted interpolation (MAI) that uses prior information in the form of sparse manual delineations to facilitate rapid, accurate segmentation of the stomach from low-field magnetic resonance images (MRI) and the bowel from computed tomography (CT) images. METHODS Stomach MR images from 116 patients undergoing 0.35T MRI-guided abdominal radiotherapy and bowel CT images from 120 patients undergoing high-dose-rate pelvic brachytherapy were collected. For each patient volume, the manual delineation of the HDO was extracted from every 8th slice. These manually drawn contours were first interpolated to obtain an initial estimate of the HDO contour. A two-channel 64 × 64 pixel patch-based convolutional neural network (CNN) was trained to localize the organ's boundary on each slice within a five-pixel-wide road, using the image and the interpolated contour estimate. This boundary prediction was then input, together with the image, to an organ-closing CNN that output the final segmentation. A Dense-UNet architecture was used for both networks. The MAI algorithm was trained separately for stomach and bowel segmentation. Performance was compared against linear interpolation (LI) alone and against fully automated segmentation (FAS) using a Dense-UNet trained on the same datasets. The Dice Similarity Coefficient (DSC) and mean surface distance (MSD) metrics were used to compare predictions from the three methods. Statistical significance was tested using Student's t-test.
RESULTS For stomach segmentation, the mean DSC from MAI (0.91 ± 0.02) was 5.0% and 10.0% higher than LI and FAS, respectively. The average MSD from MAI (0.77 ± 0.25 mm) was 0.54 and 3.19 mm lower than the two other methods. Only 7% of MAI stomach predictions resulted in a DSC < 0.8, compared to 30% and 28% for LI and FAS, respectively. For bowel segmentation, the mean DSC of MAI (0.90 ± 0.04) was 6% and 18% higher, and the average MSD of MAI (0.93 ± 0.48 mm) was 0.42 and 4.9 mm lower, than LI and FAS. Sixteen percent of the contours predicted by MAI resulted in a DSC < 0.8, compared to 46% and 60% for FAS and LI, respectively. All comparisons between MAI and the baseline methods were statistically significant (p-value < 0.001). CONCLUSIONS The proposed MAI algorithm significantly outperformed LI in accuracy and robustness for both stomach segmentation from low-field MRI and bowel segmentation from CT. At this time, FAS methods for HDOs still require significant manual editing; we therefore believe the MAI algorithm has the potential to expedite HDO delineation within the radiation therapy workflow.
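The two reported metrics can be sketched in NumPy as follows; note that the distance helper below brute-forces nearest neighbours over full foreground pixel sets rather than extracted surfaces, a simplification for illustration only:

```python
import numpy as np

def dsc(a, b):
    """Dice Similarity Coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def mean_distance(a, b):
    """Symmetric mean nearest-neighbour distance between the two
    foreground pixel sets (a simplification of mean surface distance
    that skips explicit boundary extraction; fine for small masks)."""
    pa = np.argwhere(a.astype(bool)).astype(float)
    pb = np.argwhere(b.astype(bool)).astype(float)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

a = np.zeros((8, 8)); a[2:5, 2:5] = 1     # a 3x3 square
b = np.zeros((8, 8)); b[3:6, 2:5] = 1     # same square, one row down
```

Identical masks give a distance of zero and a DSC of one; the one-row shift above reduces the DSC to 2/3.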
Affiliation(s)
- Dishane C Luximon
- Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, California, USA
- Yasin Abdulkadir
- Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, California, USA
- Phillip E Chow
- Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, California, USA
- Eric D Morris
- Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, California, USA
- James M Lamb
- Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, California, USA
37
Robust deep 3-D architectures based on vascular patterns for liver vessel segmentation. Informatics in Medicine Unlocked 2022. [DOI: 10.1016/j.imu.2022.101111]
38
Kazami Y, Kaneko J, Keshwani D, Takahashi R, Kawaguchi Y, Ichida A, Ishizawa T, Akamatsu N, Arita J, Hasegawa K. Artificial intelligence enhances the accuracy of portal and hepatic vein extraction in computed tomography for virtual hepatectomy. J Hepatobiliary Pancreat Sci 2021; 29:359-368. [PMID: 34779139 DOI: 10.1002/jhbp.1080]
Abstract
BACKGROUND/PURPOSE Current conventional algorithms used for 3-dimensional simulation in virtual hepatectomy still have difficulty distinguishing the portal vein (PV) from the hepatic vein (HV). The accuracy of these algorithms was compared with that of a new deep-learning based algorithm (DLA) using artificial intelligence. METHODS A total of 110 living liver donor candidates up to 2017 and 46 donor candidates up to 2019 were allocated to the training and validation groups for the DLA, respectively. All PV and HV branches were labeled based on Couinaud's segment classification and the Brisbane 2000 Terminology by hepato-biliary surgeons. Misclassified and missing branches were compared between a conventional tracking-based algorithm (TA) and the DLA in the validation group. RESULTS The sensitivity, specificity, and Dice coefficient for the PV were 0.58, 0.98, and 0.69 using the TA, and 0.84, 0.97, and 0.90 using the DLA (P < .001, excluding specificity); for the HV, they were 0.81, 0.87, and 0.83 using the TA, and 0.93, 0.94, and 0.94 using the DLA (P < .001 to P = .001). The DLA exhibited greater accuracy than the TA. CONCLUSION Compared with the TA, artificial intelligence enhanced the accuracy of PV and HV extraction in computed tomography.
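The three reported statistics reduce to simple confusion-matrix arithmetic on binary masks; a minimal NumPy sketch (not the study's pipeline) is:

```python
import numpy as np

def seg_metrics(pred, gt):
    """Sensitivity, specificity and Dice coefficient for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt)       # true positives
    tn = np.sum(~pred & ~gt)     # true negatives
    fp = np.sum(pred & ~gt)      # false positives
    fn = np.sum(~pred & gt)      # false negatives
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    dice = 2.0 * tp / (2.0 * tp + fp + fn)
    return sens, spec, dice

gt = np.array([1, 1, 1, 0, 0, 0, 0, 0])   # toy ground truth
pr = np.array([1, 1, 0, 1, 0, 0, 0, 0])   # toy prediction
sens, spec, dice = seg_metrics(pr, gt)
```

With highly imbalanced vessel volumes, specificity is usually near one regardless of quality, which is why Dice is the more discriminating of the three.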
Affiliation(s)
- Yusuke Kazami
- Hepato-Biliary-Pancreatic Surgery Division, Department of Surgery, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Junichi Kaneko
- Hepato-Biliary-Pancreatic Surgery Division, Department of Surgery, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Deepak Keshwani
- Imaging Technology Center, Fujifilm Corporation, Tokyo, Japan
- Ryugen Takahashi
- Hepato-Biliary-Pancreatic Surgery Division, Department of Surgery, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Yoshikuni Kawaguchi
- Hepato-Biliary-Pancreatic Surgery Division, Department of Surgery, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Akihiko Ichida
- Hepato-Biliary-Pancreatic Surgery Division, Department of Surgery, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Takeaki Ishizawa
- Hepato-Biliary-Pancreatic Surgery Division, Department of Surgery, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Nobuhisa Akamatsu
- Hepato-Biliary-Pancreatic Surgery Division, Department of Surgery, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Junichi Arita
- Hepato-Biliary-Pancreatic Surgery Division, Department of Surgery, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Kiyoshi Hasegawa
- Hepato-Biliary-Pancreatic Surgery Division, Department of Surgery, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
39
Wong J, Sigurdson S, Reformat M, Lou E. Centroid-based Distance Loss Function for Lamina Segmentation in 3D Ultrasound Spine Volumes. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:1723-1726. [PMID: 34891619 DOI: 10.1109/embc46164.2021.9631034]
Abstract
Ultrasound imaging of the spine to diagnose the severity of scoliosis is a recent development in the field, offering 3D information without the complicated reconstruction procedure required by radiography. Determining the severity of scoliosis on ultrasound volumes requires labelling vertebral features called laminae. To increase accuracy and reduce the time spent on this task, this paper reports a novel custom centroid-based distance loss function for lamina segmentation in 3D ultrasound volumes using convolutional neural networks (CNNs). The custom loss was compared with two standard loss functions by fitting a CNN with each. The custom-loss network performed best at minimizing the distances between centroids in the ground truth and centroids in the predicted segmentation. On average, it improved the total distance between predicted and true centroids by 33 voxels (22%) compared with the second-best network, which used the Dice loss. In general, the custom loss also allowed the network to detect, on average, two more laminae in the lumbar region of the spine that the other networks tended to miss.
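A centroid-distance penalty of the kind described can be sketched in NumPy as follows (a hypothetical illustration; the authors' exact loss formulation may differ):

```python
import numpy as np

def centroid(mask):
    """Mass-weighted centroid of a (soft or binary) segmentation map."""
    m = mask.astype(np.float64)
    grids = np.indices(m.shape)
    return np.array([(g * m).sum() / m.sum() for g in grids])

def centroid_distance(pred, target):
    """Euclidean distance between predicted and ground-truth
    centroids; in the spirit of the paper, such a term would be
    combined with a standard overlap loss during training."""
    return float(np.linalg.norm(centroid(pred) - centroid(target)))

pred = np.zeros((10, 10)); pred[2, 3] = 1   # toy predicted lamina
gt = np.zeros((10, 10)); gt[5, 7] = 1       # toy true lamina
d = centroid_distance(pred, gt)             # sqrt(3**2 + 4**2) = 5.0
```

Unlike pure overlap losses, this term still produces a useful gradient signal when prediction and ground truth do not overlap at all.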
40
DV-Net: Accurate liver vessel segmentation via dense connection model with D-BCE loss function. Knowl Based Syst 2021. [DOI: 10.1016/j.knosys.2021.107471]
41
Zhao Z, Ma Z, Liu Y, Zeng Z, Chow PK. Multi-Slice Dense-Sparse Learning for Efficient Liver and Tumor Segmentation. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:3582-3585. [PMID: 34892013 DOI: 10.1109/embc46164.2021.9629698]
Abstract
Accurate automatic liver and tumor segmentation plays a vital role in treatment planning and disease monitoring. Recently, deep convolutional neural networks (DCNNs) have achieved tremendous success in 2D and 3D medical image segmentation. However, 2D DCNNs cannot fully leverage inter-slice information, while 3D DCNNs are computationally expensive and memory intensive. To address these issues, we first propose a novel dense-sparse training flow from a data perspective, in which densely adjacent slices and sparsely adjacent slices are extracted as inputs to regularize the DCNN, thereby improving model performance. Moreover, we design a 2.5D light-weight nnU-Net from a network perspective, in which depthwise separable convolutions are adopted to improve efficiency. Extensive experiments on the LiTS dataset demonstrate the superiority of the proposed method. Clinical relevance: the proposed method can effectively segment livers and tumors from CT scans with low complexity and can be easily implemented in clinical practice.
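The efficiency gain from depthwise separable convolutions comes from factoring one dense convolution into a per-channel spatial step plus a 1 × 1 channel-mixing step; the parameter arithmetic can be sketched as:

```python
def standard_conv_params(k, c_in, c_out):
    """Dense k x k convolution: every output channel sees every input."""
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    """Depthwise k x k (one spatial filter per input channel) followed
    by a 1 x 1 pointwise convolution that mixes channels."""
    return k * k * c_in + c_in * c_out

dense = standard_conv_params(3, 64, 128)   # 73,728 parameters
sep = separable_conv_params(3, 64, 128)    # 8,768 parameters (~8.4x fewer)
```

The same factorization reduces multiply-accumulate operations by roughly the same ratio, which is what makes a 2.5D light-weight variant practical.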
42
Li C, Ma W, Sun L, Ding X, Huang Y, Wang G, Yu Y. Hierarchical deep network with uncertainty-aware semi-supervised learning for vessel segmentation. Neural Comput Appl 2021. [DOI: 10.1007/s00521-021-06578-3]
43
Li R, Huang YJ, Chen H, Liu X, Yu Y, Qian D, Wang L. 3D Graph-Connectivity Constrained Network for Hepatic Vessel Segmentation. IEEE J Biomed Health Inform 2021; 26:1251-1262. [PMID: 34613925 DOI: 10.1109/jbhi.2021.3118104]
Abstract
Segmentation of hepatic vessels from 3D CT images is necessary for accurate diagnosis and preoperative planning for liver cancer. However, due to the low contrast and high noise of CT images, automatic hepatic vessel segmentation is a challenging task. Hepatic vessels are connected branches containing thick and thin blood vessels, exhibiting an important structural characteristic, or prior: the connectivity of blood vessels. However, this prior is rarely applied in existing methods. In this paper, we segment hepatic vessels from 3D CT images by utilizing the connectivity prior. To this end, a graph neural network (GNN) used to describe the connectivity prior of hepatic vessels is integrated into a general convolutional neural network (CNN). Specifically, a graph attention network (GAT) is first used to model the graphical connectivity information of hepatic vessels, which can be trained with the vascular connectivity graph constructed directly from the ground truths. Second, the GAT is integrated with a lightweight 3D U-Net by an efficient mechanism called the plug-in mode, in which the GAT is incorporated into the U-Net as a multi-task branch and is only used to supervise the training procedure of the U-Net with the connectivity prior. The GAT is not used in the inference stage, and thus does not increase the hardware and time costs of inference compared with the U-Net alone. Therefore, hepatic vessel segmentation can be improved efficiently. Extensive experiments on two public datasets show that the proposed method is superior to related works in both accuracy and connectivity of hepatic vessel segmentation.
44
Evaluation of Deep Learning Segmentation Models for Detection of Pine Wilt Disease in Unmanned Aerial Vehicle Images. Remote Sensing 2021. [DOI: 10.3390/rs13183594]
Abstract
Pine wilt disease (PWD) is a serious threat to pine forests. Combining unmanned aerial vehicle (UAV) images and deep learning (DL) techniques to identify infected pines is the most efficient way to determine the potential spread of PWD over a large area. In particular, image segmentation with DL obtains the detailed shape and size of infected pines, allowing the degree of damage to be assessed. However, the performance of such segmentation models has not been thoroughly studied. We used a fixed-wing UAV to collect images from a pine forest in Laoshan, Qingdao, China, and conducted a ground survey to collect samples of infected pines and construct prior knowledge for interpreting the images. Training and test sets were then annotated on selected images, yielding 2352 samples of infected pines annotated over different backgrounds. Finally, high-performance DL models (e.g., fully convolutional networks for semantic segmentation, DeepLabv3+, and PSPNet) were trained and evaluated. The results demonstrated that focal loss provided higher accuracy and finer boundaries than Dice loss, with the average intersection over union (IoU) across all models increasing from 0.656 to 0.701. Among the evaluated models, DeepLabv3+ achieved the highest IoU (0.720) and F1 score (0.832). Its atrous spatial pyramid pooling module encodes multiscale context information, and its encoder-decoder architecture recovers location/spatial information, making it the best architecture for segmenting trees infected by PWD. Furthermore, segmentation accuracy did not improve as the depth of the backbone network increased; neither ResNet34 nor ResNet50 was the appropriate backbone for most segmentation models.
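The focal loss that outperformed Dice loss here down-weights well-classified pixels via a modulating factor; a minimal binary NumPy sketch (not the study's implementation) is:

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, eps=1e-12):
    """Binary focal loss; gamma = 0 recovers plain cross-entropy,
    while gamma > 0 multiplies each pixel's loss by (1 - p_t)^gamma,
    shrinking the contribution of already well-classified pixels."""
    p = np.clip(p, eps, 1.0 - eps)
    pt = np.where(y == 1, p, 1.0 - p)   # probability of the true class
    return float(np.mean(-((1.0 - pt) ** gamma) * np.log(pt)))

y = np.array([1, 1, 0, 0])
p = np.array([0.95, 0.60, 0.05, 0.40])
ce = focal_loss(p, y, gamma=0.0)        # plain cross-entropy
fl = focal_loss(p, y, gamma=2.0)        # strictly smaller here
```

Because easy background pixels dominate aerial imagery, this re-weighting steers the gradient toward the hard boundary pixels of infected crowns.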
45
Watanabe S, Sakaguchi K, Murata D, Ishii K. Deep learning-based Hounsfield unit value measurement method for bolus tracking images in cerebral computed tomography angiography. Comput Biol Med 2021; 137:104824. [PMID: 34488029 DOI: 10.1016/j.compbiomed.2021.104824]
Abstract
BACKGROUND Patient movement during bolus tracking (BT) impairs the accuracy of Hounsfield unit (HU) measurements. This study assesses the accuracy of measuring HU values in the internal carotid artery (ICA) using an original deep learning (DL)-based method as compared with the conventional region of interest (ROI) setting method. METHOD A total of 722 BT images of 127 patients who underwent cerebral computed tomography angiography were selected retrospectively and divided into training, validation, and test sets. To segment the ICA with the proposed method, DL was performed using a convolutional neural network. The HU values in the ICA were obtained using the DL-based method and the ROI setting method; the ROI setting was performed with and without correcting for patient body movement (corrected ROI and settled ROI). We compared the proposed DL-based method with the settled ROI in terms of HU value differences from the corrected ROI, for patients with and without involuntary movement during BT image acquisition. RESULTS Differences in HU values from the corrected ROI for the settled ROI and the proposed method were 23.8 ± 12.7 HU and 9.0 ± 6.4 HU in patients with body movement, and 1.1 ± 1.6 HU and 3.9 ± 4.7 HU in patients without body movement, respectively. Both comparisons showed significant differences (P < 0.01). CONCLUSION The DL-based method can improve the accuracy of HU value measurements for the ICA in BT images with patient involuntary movement.
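The quantity compared across methods is simply the mean HU value under a segmentation mask; as a toy illustration (array values and mask below are hypothetical):

```python
import numpy as np

def mean_hu(image, mask):
    """Mean Hounsfield unit value over the pixels of a segmentation
    mask, the quantity compared between ROI settings and a
    DL-predicted vessel mask."""
    return float(image[mask.astype(bool)].mean())

hu = np.array([[100.0, 102.0],
               [50.0, 0.0]])      # toy HU values
ica = np.array([[1, 1],
                [0, 0]])          # hypothetical ICA mask
m = mean_hu(hu, ica)              # (100 + 102) / 2 = 101.0
```

A mask that tracks the vessel under patient motion keeps this average stable, whereas a fixed ROI drifts onto adjacent tissue and biases the measurement.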
Affiliation(s)
- Shota Watanabe
- Division of Positron Emission Tomography, Institute of Advanced Clinical Medicine, Kindai University Hospital, 377-2 Ohno-Higashi, Osakasayama, Osaka, 589-8511, Japan; Radiology Center, Kindai University Hospital, 377-2 Ohno-Higashi, Osakasayama, Osaka, 589-8511, Japan
- Kenta Sakaguchi
- Radiology Center, Kindai University Hospital, 377-2 Ohno-Higashi, Osakasayama, Osaka, 589-8511, Japan
- Daisuke Murata
- Radiology Center, Kindai University Hospital, 377-2 Ohno-Higashi, Osakasayama, Osaka, 589-8511, Japan
- Kazunari Ishii
- Department of Radiology, Kindai University Faculty of Medicine, 377-2 Ohno-Higashi, Osakasayama, Osaka, 589-8511, Japan
46
Chen C, Zhou K, Zha M, Qu X, Guo X, Chen H, Wang Z, Xiao R. An Effective Deep Neural Network for Lung Lesions Segmentation From COVID-19 CT Images. IEEE Trans Ind Inform 2021; 17:6528-6538. [PMID: 37981911 PMCID: PMC8545014 DOI: 10.1109/tii.2021.3059023]
Abstract
Automatic segmentation of lung lesions from COVID-19 computed tomography (CT) images can help to establish a quantitative model for diagnosis and treatment. This article therefore provides a new segmentation method to meet the needs of CT image processing during the COVID-19 epidemic. The main steps are as follows. First, the proposed region-of-interest extraction implements a patch mechanism to suit the 3D network and remove irrelevant background. Second, a 3D network is established to extract spatial features, in which a 3D attention model prompts the network to enhance the target area. Then, to improve convergence, a combination loss function is introduced to guide gradient optimization and the training direction. Finally, data augmentation and a conditional random field are applied for data resampling and binary segmentation. The method was assessed in comparative experiments and reached the highest performance among the compared approaches; it therefore has potential clinical applications.
Affiliation(s)
- Cheng Chen
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, China
- Kangneng Zhou
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, China
- Muxi Zha
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, China
- Xiangyan Qu
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, China
- Xiaoyu Guo
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, China
- Hongyu Chen
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, China
- Zhiliang Wang
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, China
- Ruoxiu Xiao
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, China
- Institute of Artificial Intelligence, University of Science and Technology Beijing, Beijing 100083, China
47
Xin M, Wen J, Wang Y, Yu W, Fang B, Hu J, Xu Y, Linghu C. Blood Vessel Segmentation Based on the 3D Residual U-Net. Int J Pattern Recogn 2021. [DOI: 10.1142/s021800142157007x]
Abstract
In this paper, we propose blood vessel segmentation based on a 3D residual U-Net. First, we integrate the residual block structure into the 3D U-Net; by exploring the influence of adding residual blocks at different positions, we establish a novel and effective 3D residual U-Net. In addition, to address the challenges of pixel imbalance, vessel boundary segmentation, and small vessel segmentation, we develop a new weighted Dice loss function that performs better than the weighted cross-entropy loss. When training the model, we adopt a coarse-to-fine two-stage method; in the fine stage, a local 3D sliding-window segmentation method is added. In the testing phase, we use a 3D fixed-point method. Furthermore, we employ a 3D morphological closing operation to smooth vessel surfaces and volume analysis to remove noise blocks. To verify the accuracy and stability of our method, we compare it with FCN, 3D DenseNet, and 3D U-Net. The experimental results indicate that our method is more accurate and more stable than the other studied methods: the average Dice coefficients for hepatic veins and portal veins reach 71.7% and 76.5% in the coarse stage and 72.5% and 77.2% in the fine stage, respectively. To verify the robustness of the model, we conducted the same comparative experiment on brain vessel datasets, where the average Dice coefficient reached 87.2%.
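A weighted Dice loss of the kind described can be sketched in NumPy as follows (the class layout and weights are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def weighted_dice_loss(pred, target, weights, eps=1e-6):
    """Weighted soft Dice over classes; pred and target have shape
    (num_classes, ...). Larger weights emphasise hard, rare classes
    such as thin vessels and vessel boundaries."""
    total = 0.0
    for c, w in enumerate(weights):
        p = pred[c].astype(np.float64).ravel()
        t = target[c].astype(np.float64).ravel()
        inter = (p * t).sum()
        total += w * (1.0 - (2.0 * inter + eps) / (p.sum() + t.sum() + eps))
    return total / sum(weights)

# One-hot target over 4 pixels, 2 classes; a perfect prediction
# drives the loss to (approximately) zero regardless of weights.
t = np.array([[1, 1, 0, 0],
              [0, 0, 1, 1]], dtype=float)
loss = weighted_dice_loss(t, t, weights=[1.0, 2.0])
```

Unlike weighted cross-entropy, each class term here is normalized by its own foreground mass, so a tiny vessel class cannot be drowned out by the liver background.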
Affiliation(s)
- Mulin Xin
- College of Computer Science, Chongqing University, Chongqing 401331, P. R. China
- Jing Wen
- College of Computer Science, Chongqing University, Chongqing 401331, P. R. China
- Yi Wang
- College of Computer Science, Chongqing University, Chongqing 401331, P. R. China
- Wei Yu
- College of Computer Science, Chongqing University, Chongqing 401331, P. R. China
- Bin Fang
- College of Computer Science, Chongqing University, Chongqing 401331, P. R. China
- Jun Hu
- Southwest Hospital, Army Military Medical University, Chongqing 401331, P. R. China
- Yongmei Xu
- Southwest Hospital, Army Military Medical University, Chongqing 401331, P. R. China
- Chunhong Linghu
- Southwest Hospital, Army Military Medical University, Chongqing 401331, P. R. China
|
48
|
Su YH, Jiang W, Chitrakar D, Huang K, Peng H, Hannaford B. Local Style Preservation in Improved GAN-Driven Synthetic Image Generation for Endoscopic Tool Segmentation. SENSORS (BASEL, SWITZERLAND) 2021; 21:5163. [PMID: 34372398 PMCID: PMC8346972 DOI: 10.3390/s21155163] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/01/2021] [Revised: 07/25/2021] [Accepted: 07/27/2021] [Indexed: 12/19/2022]
Abstract
Accurate semantic image segmentation from medical imaging can enable intelligent vision-based assistance in robot-assisted minimally invasive surgery. The human body and surgical procedures are highly dynamic. While machine vision presents a promising approach, sufficiently large training image sets for robust performance are either costly or unavailable. This work examines three novel generative adversarial network (GAN) methods for providing usable synthetic tool images using only surgical background images and a few real tool images. The best of these three approaches generates realistic tool textures while preserving local background content by incorporating both a style preservation and a content loss component into the proposed multi-level loss function. The approach is quantitatively evaluated, and results suggest that the synthetically generated training tool images enhance UNet tool segmentation performance. More specifically, on a random set of 100 cadaver and live endoscopic images from the University of Washington Sinus Dataset, the UNet trained with synthetically generated images using the presented method achieved 35.7% and 30.6% improvements over using purely real images in mean Dice coefficient and Intersection over Union scores, respectively. These results are promising for using more widely available, routine screening endoscopy to preoperatively generate synthetic training tool images for intraoperative UNet tool segmentation.
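The mean Dice coefficient and Intersection over Union reported above are standard overlap metrics for binary segmentation masks; a minimal sketch of both, assuming boolean or {0,1} mask arrays:

```python
import numpy as np

def dice_and_iou(pred, target):
    """Dice coefficient and IoU for a pair of binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    total = pred.sum() + target.sum()
    # convention: two empty masks count as a perfect match
    dice = 2.0 * inter / total if total else 1.0
    iou = inter / union if union else 1.0
    return dice, iou
```

The two metrics are related monotonically (Dice = 2·IoU / (1 + IoU)), which is why papers often report both from the same predictions.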
Collapse
Affiliation(s)
- Yun-Hsuan Su
- Department of Computer Science, Mount Holyoke College, 50 College Street, South Hadley, MA 01075, USA;
- Wenfan Jiang
- Department of Computer Science, Mount Holyoke College, 50 College Street, South Hadley, MA 01075, USA
- Digesh Chitrakar
- Department of Engineering, Trinity College, 300 Summit St., Hartford, CT 06106, USA
- Kevin Huang
- Department of Engineering, Trinity College, 300 Summit St., Hartford, CT 06106, USA
- Haonan Peng
- Department of Electrical and Computer Engineering, University of Washington, 185 Stevens Way, Paul Allen Center, Seattle, WA 98105, USA
- Blake Hannaford
- Department of Electrical and Computer Engineering, University of Washington, 185 Stevens Way, Paul Allen Center, Seattle, WA 98105, USA
Collapse
|
49
|
Yan Q, Wang B, Zhang W, Luo C, Xu W, Xu Z, Zhang Y, Shi Q, Zhang L, You Z. Attention-Guided Deep Neural Network With Multi-Scale Feature Fusion for Liver Vessel Segmentation. IEEE J Biomed Health Inform 2021; 25:2629-2642. [PMID: 33264097 DOI: 10.1109/jbhi.2020.3042069] [Citation(s) in RCA: 34] [Impact Index Per Article: 11.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Liver vessel segmentation is fast becoming a key instrument in the diagnosis and surgical planning of liver diseases. In clinical practice, liver vessels are normally manually annotated by clinicians on each slice of CT images, which is extremely laborious. Several deep learning methods exist for liver vessel segmentation; however, improving segmentation performance remains a major challenge due to the large variations and complex structure of liver vessels. Previous methods mainly use the existing UNet architecture, but not all features of the encoder are useful for segmentation, and some even cause interference. To overcome this problem, we propose a novel deep neural network for liver vessel segmentation, called LVSNet, which employs special designs to obtain the accurate structure of the liver vessel. Specifically, we design an Attention-Guided Concatenation (AGC) module to adaptively select useful context features from low-level features, guided by high-level features. The proposed AGC module focuses on capturing rich complementary information to obtain more details. In addition, we introduce an innovative multi-scale fusion block that constructs hierarchical residual-like connections within a single residual block, which is of great importance for effectively linking local blood vessel fragments together. Furthermore, we construct a new dataset containing 40 thin-slice cases (slice thickness 0.625 mm) consisting of CT volumes and annotated vessels. To evaluate the effectiveness of the method on minor vessels, we also propose an automatic stratification method to split major and minor liver vessels. Extensive experimental results demonstrate that the proposed LVSNet outperforms previous methods on liver vessel segmentation datasets. Additionally, we conduct a series of ablation studies that comprehensively support the superiority of the underlying concepts.
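The abstract does not detail the AGC module's internals. As a rough illustration only, the general pattern of gating low-level features with an attention map derived from high-level features before concatenation can be sketched as follows; the projection `w_gate`, the single-channel gate, and all shapes are assumptions, not the paper's design.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_guided_concat(low, high, w_gate):
    """Gate low-level features with an attention map computed from
    high-level features, then concatenate along the channel axis.

    low, high -- (C_low, H, W) and (C_high, H, W) feature maps
                 (high assumed already upsampled to low's resolution)
    w_gate    -- (1, C_high) projection producing a one-channel gate,
                 analogous to a 1x1 convolution
    """
    gate = sigmoid(np.tensordot(w_gate, high, axes=([1], [0])))  # (1, H, W)
    gated_low = low * gate  # suppress uninformative low-level responses
    return np.concatenate([gated_low, high], axis=0)
```

The output keeps both streams, but the low-level half has been re-weighted spatially, which is the mechanism the abstract describes as "adaptively selecting useful context features".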
Collapse
|
50
|
MSDS-UNet: A multi-scale deeply supervised 3D U-Net for automatic segmentation of lung tumor in CT. Comput Med Imaging Graph 2021; 92:101957. [PMID: 34325225 DOI: 10.1016/j.compmedimag.2021.101957] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2020] [Revised: 03/05/2021] [Accepted: 07/08/2021] [Indexed: 11/20/2022]
Abstract
Lung cancer is one of the most common and deadly malignant cancers. Accurate lung tumor segmentation from CT is therefore very important for correct diagnosis and treatment planning. Automated lung tumor segmentation is challenging due to the high variance in appearance and shape of the target tumors. To overcome this challenge, we present an effective 3D U-Net equipped with a ResNet architecture and a two-pathway deep supervision mechanism to increase the network's capacity for learning richer representations of lung tumors from global and local perspectives. We conduct extensive experiments on two real medical datasets: a lung CT dataset from Liaoning Cancer Hospital in China with 220 cases and the public TCIA dataset with 422 cases. Our experiments demonstrate that our model achieves an average Dice score of 0.675, sensitivity of 0.731, and F1-score of 0.682 on the Liaoning Cancer Hospital dataset, and an average Dice score of 0.691, sensitivity of 0.746, and F1-score of 0.724 on the TCIA dataset. The results demonstrate that the proposed 3D MSDS-UNet outperforms state-of-the-art segmentation models at all tumor scales, especially for small tumors. Moreover, we evaluated the proposed MSDS-UNet on another challenging volumetric medical image segmentation task, COVID-19 lung infection segmentation, where it shows consistent improvement in segmentation performance.
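Deep supervision of the kind named in this abstract attaches auxiliary losses to intermediate decoder scales and sums them with decaying weights. A minimal sketch of that training-loss pattern, assuming binary cross-entropy, nearest-neighbour target downsampling, and illustrative weights (none of these specifics come from the paper):

```python
import numpy as np

def downsample2x(mask):
    """Nearest-neighbour 2x downsampling of a 2D mask."""
    return mask[::2, ::2]

def bce(pred, target, eps=1e-7):
    """Mean binary cross-entropy between probabilities and a {0,1} mask."""
    p = np.clip(pred, eps, 1 - eps)
    return float(np.mean(-(target * np.log(p) + (1 - target) * np.log(1 - p))))

def deep_supervision_loss(preds, target, weights=(1.0, 0.5, 0.25)):
    """Weighted sum of per-scale losses. preds[0] is full resolution;
    each subsequent prediction is half the previous resolution, so the
    target is downsampled to match before each term is computed."""
    total = 0.0
    t = target
    for pred, w in zip(preds, weights):
        total += w * bce(pred, t)
        t = downsample2x(t)
    return total
```

At inference time only the full-resolution head is kept; the auxiliary terms exist solely to inject gradient signal deeper into the decoder during training.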
Collapse
|