51. Nikan S, Van Osch K, Bartling M, Allen DG, Rohani SA, Connors B, Agrawal SK, Ladak HM. PWD-3DNet: A Deep Learning-Based Fully-Automated Segmentation of Multiple Structures on Temporal Bone CT Scans. IEEE Trans Image Process 2020; 30:739-753. PMID: 33226942. DOI: 10.1109/tip.2020.3038363.
Abstract
The temporal bone is a part of the lateral skull surface that contains the organs responsible for hearing and balance. Mastering surgery of the temporal bone is challenging because of its complex and microscopic three-dimensional anatomy. Segmentation of intra-temporal anatomy based on computed tomography (CT) images is necessary for applications such as surgical training and rehearsal, amongst others. However, temporal bone segmentation is challenging due to the similar intensities and complicated anatomical relationships among critical structures, undetectable small structures on standard clinical CT, and the amount of time required for manual segmentation. This paper describes a single multi-class deep learning-based pipeline as the first fully automated algorithm for segmenting multiple temporal bone structures from CT volumes, including the sigmoid sinus, facial nerve, inner ear, malleus, incus, stapes, internal carotid artery and internal auditory canal. The proposed fully convolutional network, PWD-3DNet, is a patch-wise densely connected (PWD) three-dimensional (3D) network. The accuracy and speed of the proposed algorithm were shown to surpass current manual and semi-automated segmentation techniques. The experimental results yielded high Dice similarity scores and low Hausdorff distances for all temporal bone structures, with averages of 86% and 0.755 millimeter (mm), respectively. We showed that overlapping the inference sub-volumes improves segmentation performance. Moreover, we proposed augmentation layers using samples with various transformations and image artefacts to increase the robustness of PWD-3DNet against image acquisition protocols, such as smoothing caused by soft-tissue scanner settings and the larger voxel sizes used for radiation reduction.
The proposed algorithm was tested on low-resolution CTs acquired by another center with different scanner parameters than those used to develop the algorithm, and shows potential for application beyond the particular training data used in the study.
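The overlapping sub-volume inference described above can be sketched generically: accumulate per-voxel predictions from overlapping 3D patches and average where patches overlap. The patch size, stride and `predict` callable below are illustrative placeholders, not the actual PWD-3DNet configuration.

```python
import numpy as np

def sliding_window_inference(volume, predict, patch=(32, 32, 32), stride=(16, 16, 16)):
    """Average overlapping patch predictions over a 3D volume.

    `predict` maps a patch to per-voxel scores of the same shape; in practice
    it would wrap a trained network such as PWD-3DNet. Borders are only fully
    covered when (size - patch) is divisible by stride."""
    probs = np.zeros(volume.shape, dtype=np.float64)
    counts = np.zeros(volume.shape, dtype=np.float64)
    for z in range(0, volume.shape[0] - patch[0] + 1, stride[0]):
        for y in range(0, volume.shape[1] - patch[1] + 1, stride[1]):
            for x in range(0, volume.shape[2] - patch[2] + 1, stride[2]):
                sl = (slice(z, z + patch[0]),
                      slice(y, y + patch[1]),
                      slice(x, x + patch[2]))
                probs[sl] += predict(volume[sl])
                counts[sl] += 1.0
    # Average overlapping contributions; uncovered voxels stay zero.
    return probs / np.maximum(counts, 1.0)
```

Averaging the overlapping predictions is what smooths the patch-boundary artefacts that a non-overlapping tiling would leave.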
52. Rister B, Yi D, Shivakumar K, Nobashi T, Rubin DL. CT-ORG, a new dataset for multiple organ segmentation in computed tomography. Sci Data 2020; 7:381. PMID: 33177518. PMCID: PMC7658204. DOI: 10.1038/s41597-020-00715-8.
Abstract
Despite the relative ease of locating organs in the human body, automated organ segmentation has been hindered by the scarcity of labeled training data. Due to the tedium of labeling organ boundaries, most datasets are limited to either a small number of cases or a single organ. Furthermore, many are restricted to specific imaging conditions unrepresentative of clinical practice. To address this need, we developed a diverse dataset of 140 CT scans containing six organ classes: liver, lungs, bladder, kidney, bones and brain. For the lungs and bones, we expedited annotation using unsupervised morphological segmentation algorithms, which were accelerated by 3D Fourier transforms. Demonstrating the utility of the data, we trained a deep neural network which requires only 4.3 s to simultaneously segment all the organs in a case. We also show how to efficiently augment the data to improve model generalization, providing a GPU library for doing so. We hope this dataset and code, available through TCIA, will be useful for training and evaluating organ segmentation models.
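The expedited, unsupervised annotation of high-contrast structures can be illustrated with a much-simplified sketch: threshold air-like voxels, discard the air component touching the volume border, and keep the largest interior component. This is a generic stand-in, not the authors' FFT-accelerated morphological pipeline, and the HU threshold is an assumption.

```python
import numpy as np
from scipy import ndimage

def rough_lung_mask(ct_hu, air_threshold=-320):
    """Very rough lung mask from a CT volume in Hounsfield units:
    threshold air-like voxels, discard components touching the border
    (outside-body air), keep the largest remaining component."""
    air = ct_hu < air_threshold
    labels, n = ndimage.label(air)
    if n == 0:
        return np.zeros(air.shape, dtype=bool)
    # Labels present on any face of the volume are outside-body air.
    border_labels = (set(np.unique(labels[0])) | set(np.unique(labels[-1]))
                     | set(np.unique(labels[:, 0])) | set(np.unique(labels[:, -1]))
                     | set(np.unique(labels[:, :, 0])) | set(np.unique(labels[:, :, -1])))
    sizes = ndimage.sum(air, labels, index=range(1, n + 1))
    best, best_size = 0, 0
    for lab in range(1, n + 1):
        if lab in border_labels:
            continue
        if sizes[lab - 1] > best_size:
            best, best_size = lab, sizes[lab - 1]
    if best == 0:
        return np.zeros(air.shape, dtype=bool)
    return labels == best
```

A mask like this still needs human review, which is why the paper pairs such algorithms with manual correction rather than replacing annotation outright.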
Affiliation(s)
- Blaine Rister
- Department of Electrical Engineering, Stanford University, 350 Jane Stanford Way, Stanford, CA, 94305, USA
- Darvin Yi
- Department of Biomedical Data Science, Stanford University, 1265 Welch Road, Stanford, CA, 94305, USA
- Kaushik Shivakumar
- Department of Biomedical Data Science, Stanford University, 1265 Welch Road, Stanford, CA, 94305, USA
- Tomomi Nobashi
- Department of Radiology, Stanford University, 300 Pasteur Drive, Stanford, CA, 94305, USA
- Daniel L Rubin
- Department of Biomedical Data Science, Stanford University, 1265 Welch Road, Stanford, CA, 94305, USA
- Department of Radiology, Stanford University, 300 Pasteur Drive, Stanford, CA, 94305, USA
53. Generation of a local lung respiratory motion model using a weighted sparse algorithm and motion prior-based registration. Comput Biol Med 2020; 123:103913. PMID: 32768049. DOI: 10.1016/j.compbiomed.2020.103913.
Abstract
Respiration-induced tumor location uncertainty is a challenge in lung percutaneous interventions, especially for the respiratory motion estimation of the tumor and surrounding vessel structures. In this work, a local motion modeling method is proposed based on whole-chest computed tomography (CT) and CT-fluoroscopy (CTF) scans. A weighted sparse statistical modeling (WSSM) method that can accurately capture location errors for each landmark point is proposed for lung motion prediction. By varying the sparse weight coefficients of the WSSM method, newly input motion information is approximately represented by a sparse linear combination of the respiratory motion repository and serves as prior knowledge for the subsequent registration process. We have also proposed an adaptive motion prior-based registration method to improve the motion prediction accuracy of the motion model in the region of interest (ROI). This registration method adopts a B-spline scheme to interactively weight the relative influence of the prior knowledge, model surface and image intensity information by locally controlling the deformation in the CTF image region. The proposed method has been evaluated on 15 image pairs between the end-expiratory (EE) and end-inspiratory (EI) phases and 31 four-dimensional CT (4DCT) datasets. The results reveal that the proposed WSSM method achieved better motion prediction performance than other existing lung statistical motion modeling methods, and that the motion prior-based registration method can generate more accurate local motion information in the ROI.
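The core sparse-representation step (approximating a new motion sample as a sparse linear combination of columns of a motion repository) can be sketched with plain l1-regularized least squares solved by ISTA. The per-landmark weighting that makes the paper's method "weighted" is omitted here, so this is only the unweighted skeleton.

```python
import numpy as np

def sparse_coefficients(A, b, lam=0.1, iters=500):
    """ISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1.

    A: (d, k) repository whose columns are motion samples;
    b: (d,) new motion observation; returns sparse weights x."""
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - b)                      # gradient step
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x
```

The sparse weights select a few repository motions whose combination `A @ x` then acts as the prior for the subsequent registration stage.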
54. Ju Z, Wu Q, Yang W, Gu S, Guo W, Wang J, Ge R, Quan H, Liu J, Qu B. Automatic segmentation of pelvic organs-at-risk using a fusion network model based on limited training samples. Acta Oncol 2020; 59:933-939. PMID: 32568616. DOI: 10.1080/0284186x.2020.1775290.
Abstract
Background: Efficient and accurate methods are needed to automatically segment organs-at-risk (OAR) to accelerate the radiotherapy workflow and decrease treatment wait times. We developed and evaluated a fused model, Dense V-Network, for its ability to accurately segment pelvic OAR. Material and methods: We combined two network models, Dense Net and V-Net, to establish the Dense V-Network algorithm. For the training model, we adopted 100 kV computed tomography (CT) images of patients with cervical cancer: 80 were randomly selected as training sets, used to adjust the parameters of the automatic segmentation model, and the remaining 20 served as test sets to evaluate the performance of the convolutional neural network model. Three representative parameters were used to evaluate the segmentation results quantitatively. Results: Clinical results revealed that Dice similarity coefficient values of the bladder, small intestine, rectum, femoral head and spinal cord were all above 0.87, and Jaccard distance was within 2.3 mm. Except for the small intestine, the Hausdorff distance of the other organs was less than 9.0 mm. Comparison of our approach with the Atlas and other studies demonstrated that the Dense V-Network had more accurate and efficient performance and faster speed. Conclusions: The Dense V-Network algorithm can automatically segment pelvic OARs accurately and efficiently, shortening patients' waiting time and accelerating the radiotherapy workflow.
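The three evaluation parameters used above can be computed directly from the masks. Note that Dice and Jaccard are unitless overlap measures, while the Hausdorff distance is in voxels until multiplied by the voxel spacing to obtain millimetres. A minimal sketch:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a, b):
    """Dice similarity coefficient between two boolean masks (0..1)."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def jaccard(a, b):
    """Jaccard index (intersection over union) between two boolean masks."""
    inter = np.logical_and(a, b).sum()
    return inter / np.logical_or(a, b).sum()

def hausdorff(pts_a, pts_b):
    """Symmetric Hausdorff distance between two (N, dim) point sets,
    e.g. np.argwhere(mask); scale by voxel spacing for mm."""
    return max(directed_hausdorff(pts_a, pts_b)[0],
               directed_hausdorff(pts_b, pts_a)[0])
```

These are the standard definitions; individual papers may additionally report the 95th-percentile Hausdorff distance to reduce outlier sensitivity.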
Affiliation(s)
- Zhongjian Ju
- Department of Radiation Oncology, The First Medical Center of People’s Liberation Army General Hospital, Beijing, China
- Qingnan Wu
- Department of Radiation Therapy, Peking University International Hospital, Beijing, China
- Wei Yang
- Department of Radiation Oncology, The First Medical Center of People’s Liberation Army General Hospital, Beijing, China
- Shanshan Gu
- Department of Radiation Oncology, The First Medical Center of People’s Liberation Army General Hospital, Beijing, China
- Wen Guo
- School of Physics Science and Technology, Wuhan University, Wuhan, China
- Jinyuan Wang
- Department of Radiation Oncology, The First Medical Center of People’s Liberation Army General Hospital, Beijing, China
- Ruigang Ge
- Department of Radiation Oncology, The First Medical Center of People’s Liberation Army General Hospital, Beijing, China
- Hong Quan
- School of Physics Science and Technology, Wuhan University, Wuhan, China
- Jie Liu
- Beijing Eastraycloud Technology Inc, Beijing, China
- Baolin Qu
- Department of Radiation Oncology, The First Medical Center of People’s Liberation Army General Hospital, Beijing, China
55. Highly Accurate and Memory Efficient Unsupervised Learning-Based Discrete CT Registration Using 2.5D Displacement Search. Medical Image Computing and Computer Assisted Intervention – MICCAI 2020, 2020. DOI: 10.1007/978-3-030-59716-0_19.
56. Jmaiel M, Mokhtari M, Abdulrazak B, Aloulou H, Kallel S. Comparative Study of Relevant Methods for MRI/X Brain Image Registration. Lecture Notes in Computer Science 2020. PMCID: PMC7313302. DOI: 10.1007/978-3-030-51517-1_30.
Abstract
Several methods of brain image registration have been proposed to meet clinicians' requirements. In this paper, we assess the performance of a hybrid method for brain image registration against the most widely used standard registration tools. Most traditional registration tools use different methods for mono- and multi-modal registration, whereas the hybrid registration method provides both mono- and multi-modal brain registration of PET, MRI and CT images. To determine the most appropriate registration method, we used two challenging brain image datasets as well as two evaluation metrics. Results show that the hybrid method outperforms all other standard registration tools and achieves promising accuracy for MRI/X brain image registration.
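Mono- and multi-modal registration differ mainly in the similarity measure: intensity difference works within one modality, while multi-modal alignment (MRI/CT, MRI/PET) typically relies on mutual information, sketched below with a simple joint-histogram estimate. The bin count is an arbitrary choice, not taken from the paper.

```python
import numpy as np

def mutual_information(img1, img2, bins=32):
    """Histogram-based mutual information between two images of equal size.

    High MI means the intensities of one image are predictable from the
    other, which holds at good alignment even across modalities."""
    hist, _, _ = np.histogram2d(img1.ravel(), img2.ravel(), bins=bins)
    pxy = hist / hist.sum()                       # joint distribution
    px = pxy.sum(axis=1, keepdims=True)           # marginal of img1
    py = pxy.sum(axis=0, keepdims=True)           # marginal of img2
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

A registration loop would maximize this value over transform parameters; production tools use smoothed (Parzen-window) estimates rather than raw histograms.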
57. Comparison of Multi-atlas Segmentation and U-Net Approaches for Automated 3D Liver Delineation in MRI. Communications in Computer and Information Science 2020. DOI: 10.1007/978-3-030-39343-4_41.
58. Zhuang X, Li L, Payer C, Štern D, Urschler M, Heinrich MP, Oster J, Wang C, Smedby Ö, Bian C, Yang X, Heng PA, Mortazi A, Bagci U, Yang G, Sun C, Galisot G, Ramel JY, Brouard T, Tong Q, Si W, Liao X, Zeng G, Shi Z, Zheng G, Wang C, MacGillivray T, Newby D, Rhode K, Ourselin S, Mohiaddin R, Keegan J, Firmin D, Yang G. Evaluation of algorithms for Multi-Modality Whole Heart Segmentation: An open-access grand challenge. Med Image Anal 2019; 58:101537. PMID: 31446280. PMCID: PMC6839613. DOI: 10.1016/j.media.2019.101537.
Abstract
Knowledge of whole heart anatomy is a prerequisite for many clinical applications. Whole heart segmentation (WHS), which delineates substructures of the heart, can be very valuable for modeling and analysis of the anatomy and functions of the heart. However, automating this segmentation can be challenging due to the large variation of the heart shape, and different image qualities of the clinical data. To achieve this goal, an initial set of training data is generally needed for constructing priors or for training. Furthermore, it is difficult to perform comparisons between different methods, largely due to differences in the datasets and evaluation metrics used. This manuscript presents the methodologies and evaluation results for the WHS algorithms selected from the submissions to the Multi-Modality Whole Heart Segmentation (MM-WHS) challenge, in conjunction with MICCAI 2017. The challenge provided 120 three-dimensional cardiac images covering the whole heart, including 60 CT and 60 MRI volumes, all acquired in clinical environments with manual delineation. Ten algorithms for CT data and eleven algorithms for MRI data, submitted from twelve groups, have been evaluated. The results showed that the performance of CT WHS was generally better than that of MRI WHS. The segmentation of the substructures for different categories of patients could present different levels of challenge due to the difference in imaging and variations of heart shapes. The deep learning (DL)-based methods demonstrated great potential, though several of them reported poor results in the blinded evaluation. Their performance could vary greatly across different network structures and training strategies. The conventional algorithms, mainly based on multi-atlas segmentation, demonstrated good performance, though the accuracy and computational efficiency could be limited. 
The challenge, including provision of the annotated training data and the blinded evaluation for submitted algorithms on the test data, continues as an ongoing benchmarking resource via its homepage (www.sdspeople.fudan.edu.cn/zhuangxiahai/0/mmwhs/).
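Challenge-style scoring of a whole-heart label map reduces to a Dice score per substructure label. A minimal sketch (the label values are illustrative, not the MM-WHS encoding):

```python
import numpy as np

def per_structure_dice(pred, gt, labels):
    """Dice coefficient per labelled substructure of a multi-class
    segmentation; returns NaN for labels absent from both volumes."""
    scores = {}
    for lab in labels:
        p, g = pred == lab, gt == lab
        denom = p.sum() + g.sum()
        scores[lab] = 2.0 * np.logical_and(p, g).sum() / denom if denom else np.nan
    return scores
```

Averaging such per-structure scores over the blinded test set is how submissions to a challenge like MM-WHS are typically ranked, alongside surface-distance metrics.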
Affiliation(s)
- Xiahai Zhuang
- School of Data Science, Fudan University, Shanghai, 200433, China; Fudan-Xinzailing Joint Research Center for Big Data, Fudan University, Shanghai, 200433, China
- Lei Li
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Christian Payer
- Institute of Computer Graphics and Vision, Graz University of Technology, Graz, 8010, Austria
- Darko Štern
- Ludwig Boltzmann Institute for Clinical Forensic Imaging, Graz, 8010, Austria
- Martin Urschler
- Ludwig Boltzmann Institute for Clinical Forensic Imaging, Graz, 8010, Austria
- Mattias P Heinrich
- Institute of Medical Informatics, University of Lübeck, Lübeck, 23562, Germany
- Julien Oster
- Inserm, Université de Lorraine, IADI, U1254, Nancy, France
- Chunliang Wang
- Department of Biomedical Engineering and Health Systems, KTH Royal Institute of Technology, Stockholm SE-14152, Sweden
- Örjan Smedby
- Department of Biomedical Engineering and Health Systems, KTH Royal Institute of Technology, Stockholm SE-14152, Sweden
- Cheng Bian
- School of Biomedical Engineering, Health Science Centre, Shenzhen University, Shenzhen, 518060, China
- Xin Yang
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Pheng-Ann Heng
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Aliasghar Mortazi
- Center for Research in Computer Vision (CRCV), University of Central Florida, Orlando, 32816, USA
- Ulas Bagci
- Center for Research in Computer Vision (CRCV), University of Central Florida, Orlando, 32816, USA
- Guanyu Yang
- School of Computer Science and Engineering, Southeast University, Nanjing, 210096, China
- Chenchen Sun
- School of Computer Science and Engineering, Southeast University, Nanjing, 210096, China
- Gaetan Galisot
- LIFAT (EA6300), Université de Tours, 64 avenue Jean Portalis, Tours, 37200, France
- Jean-Yves Ramel
- LIFAT (EA6300), Université de Tours, 64 avenue Jean Portalis, Tours, 37200, France
- Thierry Brouard
- LIFAT (EA6300), Université de Tours, 64 avenue Jean Portalis, Tours, 37200, France
- Qianqian Tong
- School of Computer Science, Wuhan University, Wuhan, 430072, China
- Weixin Si
- Guangdong Provincial Key Laboratory of Computer Vision and Virtual Reality Technology, SIAT, Shenzhen, China
- Xiangyun Liao
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Guodong Zeng
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China; Institute for Surgical Technology & Biomechanics, University of Bern, Bern, 3014, Switzerland
- Zenglin Shi
- Institute for Surgical Technology & Biomechanics, University of Bern, Bern, 3014, Switzerland
- Guoyan Zheng
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China; Institute for Surgical Technology & Biomechanics, University of Bern, Bern, 3014, Switzerland
- Chengjia Wang
- BHF Centre for Cardiovascular Science, University of Edinburgh, Edinburgh, UK; Edinburgh Imaging Facility QMRI, University of Edinburgh, Edinburgh, UK
- Tom MacGillivray
- Edinburgh Imaging Facility QMRI, University of Edinburgh, Edinburgh, UK
- David Newby
- BHF Centre for Cardiovascular Science, University of Edinburgh, Edinburgh, UK; Edinburgh Imaging Facility QMRI, University of Edinburgh, Edinburgh, UK
- Kawal Rhode
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Sebastien Ourselin
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Raad Mohiaddin
- Cardiovascular Research Centre, Royal Brompton Hospital, London, SW3 6NP, UK; National Heart and Lung Institute, Imperial College London, London, SW7 2AZ, UK
- Jennifer Keegan
- Cardiovascular Research Centre, Royal Brompton Hospital, London, SW3 6NP, UK; National Heart and Lung Institute, Imperial College London, London, SW7 2AZ, UK
- David Firmin
- Cardiovascular Research Centre, Royal Brompton Hospital, London, SW3 6NP, UK; National Heart and Lung Institute, Imperial College London, London, SW7 2AZ, UK
- Guang Yang
- Cardiovascular Research Centre, Royal Brompton Hospital, London, SW3 6NP, UK; National Heart and Lung Institute, Imperial College London, London, SW7 2AZ, UK
59. Evaluation of deformable image registration algorithm for determination of accumulated dose for brachytherapy of cervical cancer patients. J Contemp Brachytherapy 2019; 11:469-478. PMID: 31749857. PMCID: PMC6854864. DOI: 10.5114/jcb.2019.88762.
Abstract
Purpose: This study was designed to assess the dose accumulation (DA) of bladder and rectum between brachytherapy fractions using hybrid-based deformable image registration (DIR) and compare it with the simple summation (SS) approach of GEC-ESTRO in cervical cancer patients. Material and methods: Patients (n = 137) with cervical cancer treated with 3D conformal radiotherapy and three fractions of high-dose-rate brachytherapy were selected. CT images were acquired to delineate organs at risk and targets according to GEC-ESTRO recommendations. In order to determine the DA for the bladder and rectum, hybrid-based DIR was done for three different fractions of brachytherapy and the results were compared with the standard GEC-ESTRO method. We also performed a phantom study to calculate the uncertainty of the hybrid-based DIR algorithm for contour matching and dose mapping. Results: The mean ± standard deviation (SD) of the Dice similarity coefficient (DICE), Jaccard, Hausdorff distance (HD) and mean distance to agreement (MDA) in the DIR process were 0.94 ±0.02, 0.89 ±0.03, 8.44 ±3.56 and 0.72 ±0.22 for bladder and 0.89 ±0.05, 0.80 ±0.07, 15.46 ±10.14 and 1.19 ±0.59 for rectum, respectively. The median (Q1, Q3; maximum) GyEQD2 differences of total D2cc between DIR-based and SS methods for the bladder and rectum were reduced by –1.53 (–0.86, –2.98; –9.17) and –1.38 (–0.80, –2.14; –7.11), respectively. The mean ± SD of DICE, Jaccard, HD, and MDA for contour matching were 0.98 ±0.008, 0.97 ±0.01, 2.00 ±0.70 and 0.20 ±0.04, respectively, for large deformation. Maximum uncertainty of dose mapping was about 3.58%. Conclusions: The hybrid-based DIR algorithm demonstrated low registration uncertainty for both contour matching and dose mapping. The DA difference between DIR-based and SS approaches was statistically significant for both bladder and rectum, and hybrid-based DIR showed potential to assess DA between brachytherapy fractions.
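The D2cc and EQD2 quantities compared above can be computed directly from a dose array. A sketch under the linear-quadratic model; the alpha/beta value is a conventional assumption for late-responding organs at risk, not taken from the paper:

```python
import numpy as np

def d2cc(dose_gy, voxel_volume_cc):
    """Minimum dose received by the hottest 2 cm^3 of an organ (D2cc):
    sort voxel doses and read off the dose at the 2 cc mark."""
    n = max(int(round(2.0 / voxel_volume_cc)), 1)  # voxels making up 2 cc
    return np.sort(dose_gy.ravel())[-n]

def eqd2(dose_per_fraction_gy, n_fractions, alpha_beta=3.0):
    """Equieffective dose in 2 Gy fractions (linear-quadratic model);
    alpha/beta = 3 Gy is a typical assumption for late-responding tissue."""
    d, n = dose_per_fraction_gy, n_fractions
    return n * d * (d + alpha_beta) / (2.0 + alpha_beta)
```

The SS approach adds per-fraction D2cc values directly, while DIR-based accumulation maps each fraction's dose onto one anatomy before extracting D2cc, which is why the two totals differ.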
60. Agier R, Valette S, Kéchichian R, Fanton L, Prost R. Hubless keypoint-based 3D deformable groupwise registration. Med Image Anal 2019; 59:101564. PMID: 31590032. DOI: 10.1016/j.media.2019.101564.
Abstract
We present a novel algorithm for Fast Registration Of image Groups (FROG), applied to large 3D image groups. Our approach extracts 3D SURF keypoints from images, computes matched pairs of keypoints and registers the group by minimizing pair distances in a hubless way, i.e., without computing any central mean image. Using keypoints significantly reduces the problem complexity compared to voxel-based approaches, and enables us to provide an in-core global optimization, similar to the Bundle Adjustment for 3D reconstruction. Since we aim to register images of different patients, the matching step yields many outliers. We therefore propose a new EM-weighting algorithm which efficiently discards outliers. Global optimization is carried out with a fast gradient descent algorithm. This allows our approach to robustly register large datasets. The result is a set of diffeomorphic half transforms which link the volumes together and can be subsequently exploited for computational anatomy and landmark detection. We show experimental results on whole-body CT scans, with groups of up to 103 volumes. On a benchmark based on anatomical landmarks, our algorithm compares favorably with the star-groupwise voxel-based ANTs and NiftyReg approaches while being much faster. We also discuss the limitations of our approach for lower resolution images such as brain MRI.
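The EM-weighting idea, treating match residuals as a mixture of Gaussian inliers and uniformly distributed outliers and iterating between the inlier scale and per-match inlier probabilities, can be sketched as follows. The outlier density is an assumed constant; the paper's actual formulation may differ in detail.

```python
import numpy as np

def em_match_weights(residuals, iters=10, outlier_density=1e-3):
    """Iteratively downweight keypoint pairs with large residuals.

    E-step: posterior probability that each match is an inlier under a
    zero-mean Gaussian vs. a flat outlier model. M-step: re-estimate the
    Gaussian scale from the weighted residuals."""
    r = np.asarray(residuals, dtype=float)
    w = np.ones_like(r)
    for _ in range(iters):
        sigma = np.sqrt(np.sum(w * r ** 2) / np.sum(w)) + 1e-12
        inlier = np.exp(-0.5 * (r / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)
        w = inlier / (inlier + outlier_density)
    return w
```

In a registration loop these weights multiply each pair-distance term, so gross mismatches stop influencing the gradient after a few EM iterations.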
Affiliation(s)
- R Agier
- Université de Lyon, INSA Lyon, Université Claude Bernard Lyon 1, UJM-Saint Etienne, CNRS, Inserm, CREATIS UMR 5220, U1206, Lyon F-69621, France
- S Valette
- Université de Lyon, INSA Lyon, Université Claude Bernard Lyon 1, UJM-Saint Etienne, CNRS, Inserm, CREATIS UMR 5220, U1206, Lyon F-69621, France
- R Kéchichian
- Université de Lyon, INSA Lyon, Université Claude Bernard Lyon 1, UJM-Saint Etienne, CNRS, Inserm, CREATIS UMR 5220, U1206, Lyon F-69621, France
- L Fanton
- Université de Lyon, INSA Lyon, Université Claude Bernard Lyon 1, UJM-Saint Etienne, CNRS, Inserm, CREATIS UMR 5220, U1206, Lyon F-69621, France; Hospices Civils de Lyon, GHC, Hôpital Edouard-Herriot, Service de médecine légale, Lyon 69003, France
- R Prost
- Université de Lyon, INSA Lyon, Université Claude Bernard Lyon 1, UJM-Saint Etienne, CNRS, Inserm, CREATIS UMR 5220, U1206, Lyon F-69621, France
61. Memory-efficient 2.5D convolutional transformer networks for multi-modal deformable registration with weak label supervision applied to whole-heart CT and MRI scans. Int J Comput Assist Radiol Surg 2019; 14:1901-1912. DOI: 10.1007/s11548-019-02068-z.
62. Müller S, Farag I, Weickert J, Braun Y, Lollert A, Dobberstein J, Hötker A, Graf N. Benchmarking Wilms' tumor in multisequence MRI data: why does current clinical practice fail? Which popular segmentation algorithms perform well? J Med Imaging (Bellingham) 2019; 6:034001. PMID: 31338388. DOI: 10.1117/1.jmi.6.3.034001.
Abstract
Wilms' tumor is one of the most frequent malignant solid tumors in childhood. Accurate segmentation of tumor tissue is a key step during therapy and treatment planning. Since it is difficult to obtain a comprehensive set of tumor data of children, there is no benchmark so far allowing evaluation of the quality of human or computer-based segmentations. The contributions in our paper are threefold: (i) we present the first heterogeneous Wilms' tumor benchmark data set. It contains multisequence MRI data sets before and after chemotherapy, along with ground truth annotation, approximated based on the consensus of five human experts. (ii) We analyze human expert annotations and interrater variability, finding that the current clinical practice of determining tumor volume is inaccurate and that manual annotations after chemotherapy may differ substantially. (iii) We evaluate six computer-based segmentation methods, ranging from classical approaches to recent deep-learning techniques. We show that the best ones offer a quality comparable to human expert annotations.
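A consensus of several expert annotations, as used for the ground truth here, is often approximated by a voxel-wise majority vote; the paper's consensus procedure may be more elaborate (e.g. STAPLE-style weighting), so treat this as a minimal sketch.

```python
import numpy as np

def consensus(annotations):
    """Voxel-wise strict-majority vote over expert binary annotations.

    annotations: iterable of equally shaped binary masks; a voxel is in
    the consensus if more than half of the raters marked it."""
    stack = np.stack([np.asarray(a, bool) for a in annotations])
    return stack.sum(axis=0) * 2 > stack.shape[0]
```

Comparing each rater's mask against such a consensus (e.g. with Dice) is one common way to quantify the interrater variability the paper analyzes.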
Affiliation(s)
- Sabine Müller
- Saarland University, Medical Center, Department of Pediatric Oncology and Hematology, Homburg, Germany; Saarland University, Faculty of Mathematics and Computer Science, Mathematical Image Analysis Group, Saarbrücken, Germany
- Iva Farag
- Saarland University, Medical Center, Department of Pediatric Oncology and Hematology, Homburg, Germany
- Joachim Weickert
- Saarland University, Faculty of Mathematics and Computer Science, Mathematical Image Analysis Group, Saarbrücken, Germany
- Yvonne Braun
- Saarland University, Medical Center, Department of Pediatric Oncology and Hematology, Homburg, Germany
- André Lollert
- Johannes Gutenberg University, Medical Center, Department of Diagnostic and Interventional Radiology, Mainz, Germany
- Jonas Dobberstein
- Saarland University, Medical Center, Department of Pediatric Oncology and Hematology, Homburg, Germany
- Andreas Hötker
- University Hospital Zürich, Department of Diagnostic Radiology, Zürich, Switzerland
- Norbert Graf
- Saarland University, Medical Center, Department of Pediatric Oncology and Hematology, Homburg, Germany
63. Rafiei S, Karimi N, Mirmahboub B, Najarian K, Felfeliyan B, Samavi S, Reza Soroushmehr SM. Liver Segmentation in Abdominal CT Images Using Probabilistic Atlas and Adaptive 3D Region Growing. Annu Int Conf IEEE Eng Med Biol Soc 2019; 2019:6310-6313. PMID: 31947285. DOI: 10.1109/embc.2019.8857835.
Abstract
Automatic liver segmentation plays a vital role in computer-aided diagnosis and treatment. Manual segmentation of organs is a tedious and challenging task and is prone to human error. In this paper, we propose innovative pre-processing and adaptive 3D region growing methods with subject-specific conditions. To obtain strong edges and high contrast, we propose an effective contrast enhancement algorithm; we then use the atlas intensity distribution of the most probable voxels in probability maps, along with location, to design the conditions for our 3D region growing method. We also incorporate the organ boundary to restrict the region growing. We compare our method with the label fusion of 13 organs using the state-of-the-art Deeds registration method and achieve a Dice score of 92.56%.
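A basic 3D region-growing loop helps make the pipeline concrete. The adaptive, atlas-derived acceptance conditions of the paper are replaced here by a fixed intensity window, so this is only a structural sketch.

```python
import numpy as np
from collections import deque

def region_grow_3d(volume, seed, low, high):
    """6-connected region growing from a seed voxel, accepting neighbours
    whose intensity lies in [low, high]. In the paper's method, this fixed
    window would be replaced by subject-specific, atlas-driven conditions
    plus a boundary constraint."""
    mask = np.zeros(volume.shape, dtype=bool)
    if not (low <= volume[seed] <= high):
        return mask
    queue = deque([seed])
    mask[seed] = True
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2] and not mask[nz, ny, nx]
                    and low <= volume[nz, ny, nx] <= high):
                mask[nz, ny, nx] = True
                queue.append((nz, ny, nx))
    return mask
```

Making the window adaptive (e.g. re-estimated from the statistics of already-accepted voxels and the probabilistic atlas) is what turns this naive loop into the paper's subject-specific variant.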
64. Heinrich MP, Oktay O, Bouteldja N. OBELISK-Net: Fewer layers to solve 3D multi-organ segmentation with sparse deformable convolutions. Med Image Anal 2019; 54:1-9. DOI: 10.1016/j.media.2019.02.006.
65. Huo Y, Liu J, Xu Z, Harrigan RL, Assad A, Abramson RG, Landman BA. Robust Multicontrast MRI Spleen Segmentation for Splenomegaly Using Multi-Atlas Segmentation. IEEE Trans Biomed Eng 2019; 65:336-343. PMID: 29364118. DOI: 10.1109/tbme.2017.2764752.
Abstract
Objective: Magnetic resonance imaging (MRI) is an essential imaging modality in noninvasive splenomegaly diagnosis. However, it is challenging to achieve spleen volume measurement from three-dimensional MRI given the diverse structural variations of human abdomens as well as the wide variety of clinical MRI acquisition schemes. Multi-atlas segmentation (MAS) approaches have been widely used and validated to handle heterogeneous anatomical scenarios. In this paper, we propose to use MAS for clinical MRI spleen segmentation for splenomegaly. Methods: First, an automated segmentation method using the selective and iterative method for performance level estimation (SIMPLE) atlas selection is used to address the concerns of inhomogeneity for clinical splenomegaly MRI. Then, to further control outliers, semiautomated craniocaudal spleen length-based SIMPLE atlas selection (L-SIMPLE) is proposed to integrate a spatial prior in a Bayesian fashion and guide iterative atlas selection. Last, a graph cuts refinement is employed to achieve the final segmentation from the probability maps from MAS. Results: A clinical cohort of 55 MRI volumes (28 T1-weighted and 27 T2-weighted) was used to evaluate both automated and semiautomated methods. Conclusion: The results demonstrated that both methods achieved median Dice, and outliers were alleviated by the L-SIMPLE (≈1 min of manual effort per scan), which achieved 0.97 Pearson correlation of volume measurements with the manual segmentation. Significance: In this paper, spleen segmentation on MRI for splenomegaly using MAS has been performed.
|
66
|
Heinrich MP. Closing the Gap Between Deep and Conventional Image Registration Using Probabilistic Dense Displacement Networks. LECTURE NOTES IN COMPUTER SCIENCE 2019. [DOI: 10.1007/978-3-030-32226-7_6] [Citation(s) in RCA: 31] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/23/2022]
|
67
|
Fuhrmann I, Probst U, Wiggermann P, Beyer L. Navigation Systems for Treatment Planning and Execution of Percutaneous Irreversible Electroporation. Technol Cancer Res Treat 2018; 17:1533033818791792. [PMID: 30071779 PMCID: PMC6077881 DOI: 10.1177/1533033818791792] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/18/2022] Open
Abstract
The application of navigation systems has the potential to improve percutaneous interventions: the accuracy of ablation probe placement can be increased and radiation doses reduced. Two types of systems can be distinguished, tracking systems and robotic systems. This review gives an overview of navigation devices for clinical application and summarizes initial findings on the use of navigation in percutaneous interventions with irreversible electroporation. Because of the large number of navigation systems, the review focuses on commercially available ones.
Affiliation(s)
- Irene Fuhrmann, Department of Radiology, University Hospital Regensburg, Regensburg, Germany
- Ute Probst, Department of Radiology, University Hospital Regensburg, Regensburg, Germany
- Philipp Wiggermann, Department of Radiology, University Hospital Regensburg, Regensburg, Germany
- Lukas Beyer, Department of Radiology, University Hospital Regensburg, Regensburg, Germany
|
68
|
Kechichian R, Valette S, Desvignes M. Automatic Multiorgan Segmentation via Multiscale Registration and Graph Cut. IEEE TRANSACTIONS ON MEDICAL IMAGING 2018; 37:2739-2749. [PMID: 29994393 DOI: 10.1109/tmi.2018.2851780] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
We propose an automatic multiorgan segmentation method for 3-D radiological images of different anatomical contents and modalities. The approach is based on a simultaneous multilabel graph cut optimization of location, appearance, and spatial configuration criteria of target structures. Organ location is defined by target-specific probabilistic atlases (PA) constructed from a training dataset using a fast (2+1)D SURF-based multiscale registration method involving a simple four-parameter transformation. PAs are also used to derive target-specific organ appearance models represented as intensity histograms. The spatial configuration prior is derived from shortest-path constraints defined on the adjacency graph of structures. Thorough evaluations on Visceral project benchmarks and training dataset, as well as comparisons with the state-of-the-art confirm that our approach is comparable to and often outperforms similar approaches in multiorgan segmentation, thus proving that the combination of multiple suboptimal but complementary information sources can yield very good performance.
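Intensity-histogram appearance models like those described above are typically turned into a per-voxel data cost that a multilabel graph cut then minimizes together with location and spatial-configuration terms. A minimal sketch of such a data term (bin count, smoothing, and names are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def histogram_appearance_cost(intensities, organ_samples, bins=32, vrange=(0, 255)):
    """Negative log-likelihood of each voxel intensity under an organ's
    smoothed, normalized intensity histogram (a graph-cut data term)."""
    hist, edges = np.histogram(organ_samples, bins=bins, range=vrange)
    p = (hist + 1e-6) / (hist.sum() + 1e-6 * bins)        # smoothed probabilities
    idx = np.clip(np.digitize(intensities, edges) - 1, 0, bins - 1)
    return -np.log(p[idx])

# Toy example: an organ whose training intensities cluster around 100.
train = np.random.default_rng(0).normal(100, 5, 1000)
voxels = np.array([98., 101., 200.])
cost = histogram_appearance_cost(voxels, train)
print(cost)  # in-distribution voxels get low cost, the outlier a high one
```

In the full method one such cost map per organ label feeds the multilabel optimization alongside the probabilistic-atlas location term.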
|
69
|
Li D, Zhong W, Deh KM, Nguyen TD, Prince MR, Wang Y, Spincemaille P. Discontinuity Preserving Liver MR Registration with 3D Active Contour Motion Segmentation. IEEE Trans Biomed Eng 2018; 66:10.1109/TBME.2018.2880733. [PMID: 30418878 PMCID: PMC6565504 DOI: 10.1109/tbme.2018.2880733] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/20/2023]
Abstract
OBJECTIVE The sliding motion of the liver during respiration violates the homogeneous motion smoothness assumption in conventional non-rigid image registration and commonly results in compromised registration accuracy. This paper presents a novel approach, registration with 3D active contour motion segmentation (RAMS), to improve registration accuracy with discontinuity-aware motion regularization. METHODS A Markov random field-based discrete optimization with dense displacement sampling and self-similarity context metric is used for registration, while a graph cuts-based 3D active contour approach is applied to segment the sliding interface. In the first registration pass, a mask-free L1 regularization on an image-derived minimum spanning tree is performed to allow motion discontinuity. Based on the motion field estimates, a coarse segmentation finds the motion boundaries. Next, based on MR signal intensity, a fine segmentation aligns the motion boundaries with anatomical boundaries. In the second registration pass, smoothness constraints across the segmented sliding interface are removed by masked regularization on a minimum spanning forest and masked interpolation of the motion field. RESULTS For in vivo breath-hold abdominal MRI data, the motion masks calculated by RAMS are highly consistent with manual segmentations in terms of Dice similarity and bidirectional local distance measure. These automatically obtained masks are shown to substantially improve registration accuracy for both the proposed discrete registration as well as conventional continuous non-rigid algorithms. CONCLUSION/SIGNIFICANCE The presented results demonstrated the feasibility of automated segmentation of the respiratory sliding motion interface in liver MR images and the effectiveness of using the derived motion masks to preserve motion discontinuity.
Affiliation(s)
- Dongxiao Li, College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou 310027, China
- Wenxiong Zhong, College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou 310027, China
- Kofi M. Deh, Department of Radiology, Weill Cornell Medical College, New York, NY 10021, USA
- Thanh D. Nguyen, Department of Radiology, Weill Cornell Medical College, New York, NY 10021, USA
- Martin R. Prince, Department of Radiology, Weill Cornell Medical College, New York, NY 10021, USA
- Yi Wang, Department of Radiology, Weill Cornell Medical College, New York, NY 10021, USA; Department of Biomedical Engineering, Cornell University, Ithaca, NY 14853, USA
- Pascal Spincemaille, Department of Radiology, Weill Cornell Medical College, New York, NY 10021, USA
|
70
|
Gibson E, Giganti F, Hu Y, Bonmati E, Bandula S, Gurusamy K, Davidson B, Pereira SP, Clarkson MJ, Barratt DC. Automatic Multi-Organ Segmentation on Abdominal CT With Dense V-Networks. IEEE TRANSACTIONS ON MEDICAL IMAGING 2018; 37:1822-1834. [PMID: 29994628 PMCID: PMC6076994 DOI: 10.1109/tmi.2018.2806309] [Citation(s) in RCA: 315] [Impact Index Per Article: 45.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/19/2023]
Abstract
Automatic segmentation of abdominal anatomy on computed tomography (CT) images can support diagnosis, treatment planning, and treatment delivery workflows. Segmentation methods using statistical models and multi-atlas label fusion (MALF) require inter-subject image registrations, which are challenging for abdominal images, but alternative methods without registration have not yet achieved higher accuracy for most abdominal organs. We present a registration-free deep-learning-based segmentation algorithm for eight organs that are relevant for navigation in endoscopic pancreatic and biliary procedures, including the pancreas, the gastrointestinal tract (esophagus, stomach, and duodenum) and surrounding organs (liver, spleen, left kidney, and gallbladder). We directly compared the segmentation accuracy of the proposed method to the existing deep learning and MALF methods in a cross-validation on a multi-centre data set with 90 subjects. The proposed method yielded significantly higher Dice scores for all organs and lower mean absolute distances for most organs, including Dice scores of 0.78 versus 0.71, 0.74, and 0.74 for the pancreas, 0.90 versus 0.85, 0.87, and 0.83 for the stomach, and 0.76 versus 0.68, 0.69, and 0.66 for the esophagus. We conclude that the deep-learning-based segmentation represents a registration-free method for multi-organ abdominal CT segmentation whose accuracy can surpass current methods, potentially supporting image-guided navigation in gastrointestinal endoscopy procedures.
|
71
|
Nazib A, Galloway J, Fookes C, Perrin D. Performance of Registration Tools on High-Resolution 3D Brain Images. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2018; 2018:566-569. [PMID: 30440460 DOI: 10.1109/embc.2018.8512403] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Recent progress in tissue clearing allows the imaging of entire organs at single-cell resolution. A necessary step in analysing these images is registration across samples. Existing methods of registration were developed for lower resolution image modalities (e.g., MRI) and it is unclear whether their performance and accuracy is satisfactory at this larger scale (several gigabytes for a whole mouse brain). In this study, we evaluated five freely available image registration tools. We used several performance metrics to assess accuracy, and completion time as a measure of efficiency. The results of this evaluation suggest that ANTS provides the best registration accuracy, while Elastix has the highest computational efficiency among the methods with an acceptable accuracy. The results also highlight the need to develop new registration methods optimised for these high-resolution 3D images.
|
72
|
Heinrich MP, Blendowski M, Oktay O. TernaryNet: faster deep model inference without GPUs for medical 3D segmentation using sparse and binary convolutions. Int J Comput Assist Radiol Surg 2018; 13:1311-1320. [PMID: 29850978 DOI: 10.1007/s11548-018-1797-4] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2018] [Accepted: 05/21/2018] [Indexed: 10/16/2022]
Abstract
PURPOSE Deep convolutional neural networks (DCNN) are currently ubiquitous in medical imaging. While their versatility and high-quality results for common image analysis tasks including segmentation, localisation and prediction is astonishing, the large representational power comes at the cost of highly demanding computational effort. This limits their practical applications for image-guided interventions and diagnostic (point-of-care) support using mobile devices without graphics processing units (GPU). METHODS We propose a new scheme that approximates both trainable weights and neural activations in deep networks by ternary values and tackles the open question of backpropagation when dealing with non-differentiable functions. Our solution enables the removal of the expensive floating-point matrix multiplications throughout any convolutional neural network and replaces them by energy- and time-preserving binary operators and population counts. RESULTS We evaluate our approach for the segmentation of the pancreas in CT. Here, our ternary approximation within a fully convolutional network leads to more than 90% memory reductions and high accuracy (without any post-processing) with a Dice overlap of 71.0% that comes close to the one obtained when using networks with high-precision weights and activations. We further provide a concept for sub-second inference without GPUs and demonstrate significant improvements in comparison with binary quantisation and without our proposed ternary hyperbolic tangent continuation. CONCLUSIONS We present a key enabling technique for highly efficient DCNN inference without GPUs that will help to bring the advances of deep learning to practical clinical applications. It has also great promise for improving accuracies in large-scale medical data retrieval.
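The core idea of ternary quantization, replacing floating-point weights with values in {-1, 0, +1} plus a scale factor, can be sketched as follows. The threshold and rescaling scheme here are illustrative assumptions, not the paper's exact formulation (which also quantizes activations and handles backpropagation through the non-differentiable quantizer):

```python
import numpy as np

def ternarize(w, t=0.05):
    """Map float weights to {-1, 0, +1} with a symmetric threshold t,
    then rescale by the mean magnitude of the surviving weights."""
    q = np.where(w > t, 1.0, np.where(w < -t, -1.0, 0.0))
    alpha = np.abs(w[q != 0]).mean() if np.any(q != 0) else 0.0
    return q, alpha

rng = np.random.default_rng(0)
w = rng.normal(0, 0.1, size=(3, 3))   # toy weight matrix
q, alpha = ternarize(w)
x = rng.normal(size=3)
print(w @ x)            # full-precision result
print(alpha * (q @ x))  # ternary approximation: only adds/subtracts remain
```

In a real deployment the ternary product needs no floating-point multiplications and, as the abstract notes, can be implemented with binary operators and population counts, which is where the CPU-only speedup comes from.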
Affiliation(s)
- Mattias P Heinrich, Institute of Medical Informatics, University of Lübeck, Ratzeburger Allee 160, 23562, Lübeck, Germany
- Max Blendowski, Institute of Medical Informatics, University of Lübeck, Ratzeburger Allee 160, 23562, Lübeck, Germany
- Ozan Oktay, Biomedical Image Analysis Group, Department of Computing, Imperial College London, London, SW7 2AZ, UK
|
73
|
|
74
|
Nandish S, Prabhu G, Rajagopal KV. Multiresolution image registration for multimodal brain images and fusion for better neurosurgical planning. Biomed J 2017; 40:329-338. [PMID: 29433836 PMCID: PMC6138619 DOI: 10.1016/j.bj.2017.09.002] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2017] [Revised: 08/22/2017] [Accepted: 09/14/2017] [Indexed: 11/22/2022] Open
Abstract
Background Different imaging modalities in medicine give complementary information, and a single modality is often clinically insufficient. There is a need for a computer-based system that permits rapid acquisition of digital medical images and performs multi-modality registration, segmentation and three-dimensional planning of minimally invasive neurosurgical procedures. This article therefore presents multimodal brain image registration and fusion for better neurosurgical planning. Methods Brain data were acquired with magnetic resonance imaging (MRI) and computed tomography (CT). The CT and MRI images were pre-processed and then registered, with CT as the fixed image and MRI as the moving image, using BSpline deformable registration and multiresolution registration on the image sequences. The end result is the fusion of the CT and registered MRI sequences. Results BSpline deformable registration of individual slices gave promising results, but applied to full sequences it introduced noise into the resultant image because of the multimodal, multiresolution inputs. Multiresolution registration of the CT and MRI sequences then gave promising results. Conclusion The fused images were validated by radiologists, and mutual information was used to validate the registration results. CT and MRI sequences with more slices gave better results. Cases with deformation during misregistration recorded low mutual information of about 0.3, which is not acceptable, whereas registrations with mutual information of 0.6 and above gave promising results.
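Mutual information, used above to validate the CT–MRI registration, can be estimated from the joint intensity histogram of the two aligned images. A minimal sketch with toy images (bin count and data are illustrative assumptions):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information (in nats) from the joint histogram of two
    spatially aligned images: sum p(x,y) * log(p(x,y) / (p(x)p(y)))."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
ct = rng.integers(0, 255, size=(64, 64)).astype(float)
mri_aligned = ct * 0.5 + rng.normal(0, 1, ct.shape)           # related image
mri_shuffled = rng.permutation(mri_aligned.ravel()).reshape(ct.shape)

print(mutual_information(ct, mri_aligned))    # higher: intensities correspond
print(mutual_information(ct, mri_shuffled))   # lower: correspondence destroyed
```

Note that absolute MI values depend on the bin count and any normalization, so thresholds such as the 0.3 and 0.6 quoted above are not directly comparable across implementations.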
Affiliation(s)
- Gopalakrishna Prabhu, Department of Biomedical Engineering, Manipal Institute of Technology, Manipal, India
|
75
|
Multi-organ Segmentation Using Vantage Point Forests and Binary Context Features. MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION – MICCAI 2016 2016. [DOI: 10.1007/978-3-319-46723-8_69] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
|