1
Zhang Z, Keles E, Durak G, Taktak Y, Susladkar O, Gorade V, Jha D, Ormeci AC, Medetalibeyoglu A, Yao L, Wang B, Isler IS, Peng L, Pan H, Vendrami CL, Bourhani A, Velichko Y, Gong B, Spampinato C, Pyrros A, Tiwari P, Klatte DCF, Engels M, Hoogenboom S, Bolan CW, Agarunov E, Harfouch N, Huang C, Bruno MJ, Schoots I, Keswani RN, Miller FH, Gonda T, Yazici C, Tirkes T, Turkbey B, Wallace MB, Bagci U. Large-scale multi-center CT and MRI segmentation of pancreas with deep learning. Med Image Anal 2025; 99:103382. PMID: 39541706; PMCID: PMC11698238; DOI: 10.1016/j.media.2024.103382. Received: 05/21/2024; Revised: 10/24/2024; Accepted: 10/27/2024.
Abstract
Automated volumetric segmentation of the pancreas on cross-sectional imaging is needed for the diagnosis and follow-up of pancreatic diseases. While CT-based pancreatic segmentation is more established, MRI-based segmentation methods are understudied, largely owing to a lack of publicly available datasets, benchmarking efforts, and domain-specific deep learning methods. In this retrospective study, we collected a large dataset (767 scans from 499 participants) of T1-weighted (T1W) and T2-weighted (T2W) abdominal MRI series from five centers between March 2004 and November 2022, plus CT scans of 1,350 patients from publicly available sources for benchmarking. We introduce a new pancreas segmentation method, PanSegNet, which combines the strengths of nnUNet and a Transformer network with a new linear attention module enabling volumetric computation. We tested PanSegNet's accuracy in cross-modality (2,117 scans in total) and cross-center settings using the Dice coefficient and 95th-percentile Hausdorff distance (HD95), Cohen's kappa for intra- and inter-rater agreement, and paired t-tests for volume and Dice comparisons. PanSegNet achieved case-level Dice coefficients of 88.3% (±7.2%) on CT, 85.0% (±7.9%) on T1W MRI, and 86.3% (±6.4%) on T2W MRI. Pancreas volume prediction correlated highly with ground truth, with R² of 0.91, 0.84, and 0.85 for CT, T1W, and T2W, respectively. We found moderate inter-observer agreement (kappa of 0.624 and 0.638 for T1W and T2W MRI, respectively) and high intra-observer agreement. All MRI data are made available at https://osf.io/kysnj/. Our source code is available at https://github.com/NUBagciLab/PaNSegNet.
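The two evaluation metrics reported above (Dice and HD95) can be reproduced in a few lines of NumPy. This is an illustrative sketch, not the authors' evaluation code; the brute-force HD95 below is only practical for small masks.

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

def hd95(pred, gt):
    """95th-percentile symmetric Hausdorff distance over foreground voxels
    (brute force: fine for toy masks, too slow for full volumes)."""
    a = np.argwhere(pred.astype(bool)).astype(float)
    b = np.argwhere(gt.astype(bool)).astype(float)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return max(np.percentile(d.min(axis=1), 95), np.percentile(d.min(axis=0), 95))

# toy example: two overlapping 4x4 squares on a 10x10 grid
p = np.zeros((10, 10)); p[2:6, 2:6] = 1
g = np.zeros((10, 10)); g[3:7, 3:7] = 1
dsc = dice(p, g)  # 2*9 / (16+16) = 0.5625
```

Production pipelines typically compute these with dedicated libraries that handle anisotropic voxel spacing; the sketch assumes unit spacing.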
Affiliation(s)
- Zheyuan Zhang
- Machine & Hybrid Intelligence Lab, Department of Radiology, Northwestern University, Chicago, USA
- Elif Keles
- Machine & Hybrid Intelligence Lab, Department of Radiology, Northwestern University, Chicago, USA
- Gorkem Durak
- Machine & Hybrid Intelligence Lab, Department of Radiology, Northwestern University, Chicago, USA
- Yavuz Taktak
- Department of Internal Medicine, Istanbul University Faculty of Medicine, Istanbul, Turkey
- Onkar Susladkar
- Machine & Hybrid Intelligence Lab, Department of Radiology, Northwestern University, Chicago, USA
- Vandan Gorade
- Machine & Hybrid Intelligence Lab, Department of Radiology, Northwestern University, Chicago, USA
- Debesh Jha
- Machine & Hybrid Intelligence Lab, Department of Radiology, Northwestern University, Chicago, USA
- Asli C Ormeci
- Department of Internal Medicine, Istanbul University Faculty of Medicine, Istanbul, Turkey
- Alpay Medetalibeyoglu
- Machine & Hybrid Intelligence Lab, Department of Radiology, Northwestern University, Chicago, USA; Department of Internal Medicine, Istanbul University Faculty of Medicine, Istanbul, Turkey
- Lanhong Yao
- Machine & Hybrid Intelligence Lab, Department of Radiology, Northwestern University, Chicago, USA
- Bin Wang
- Machine & Hybrid Intelligence Lab, Department of Radiology, Northwestern University, Chicago, USA
- Ilkin Sevgi Isler
- Machine & Hybrid Intelligence Lab, Department of Radiology, Northwestern University, Chicago, USA; Department of Computer Science, University of Central Florida, FL, USA
- Linkai Peng
- Machine & Hybrid Intelligence Lab, Department of Radiology, Northwestern University, Chicago, USA
- Hongyi Pan
- Machine & Hybrid Intelligence Lab, Department of Radiology, Northwestern University, Chicago, USA
- Camila Lopes Vendrami
- Machine & Hybrid Intelligence Lab, Department of Radiology, Northwestern University, Chicago, USA
- Amir Bourhani
- Machine & Hybrid Intelligence Lab, Department of Radiology, Northwestern University, Chicago, USA
- Yury Velichko
- Machine & Hybrid Intelligence Lab, Department of Radiology, Northwestern University, Chicago, USA
- Ayis Pyrros
- Department of Radiology, Duly Health and Care and Department of Biomedical and Health Information Sciences, University of Illinois Chicago, Chicago, IL, USA
- Pallavi Tiwari
- Department of Biomedical Engineering, University of Wisconsin-Madison, WI, USA
- Derk C F Klatte
- Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology and Metabolism, Amsterdam UMC, University of Amsterdam, Netherlands; Department of Radiology, Mayo Clinic, Jacksonville, FL, USA
- Megan Engels
- Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology and Metabolism, Amsterdam UMC, University of Amsterdam, Netherlands; Department of Radiology, Mayo Clinic, Jacksonville, FL, USA
- Sanne Hoogenboom
- Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology and Metabolism, Amsterdam UMC, University of Amsterdam, Netherlands; Department of Radiology, Mayo Clinic, Jacksonville, FL, USA
- Emil Agarunov
- Division of Gastroenterology and Hepatology, New York University, NY, USA
- Nassier Harfouch
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
- Chenchan Huang
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
- Marco J Bruno
- Department of Gastroenterology and Hepatology, Erasmus Medical Center, Rotterdam, Netherlands
- Ivo Schoots
- Department of Radiology and Nuclear Medicine, Erasmus University Medical Center, Rotterdam, Netherlands
- Rajesh N Keswani
- Department of Gastroenterology and Hepatology, Northwestern University, IL, USA
- Frank H Miller
- Machine & Hybrid Intelligence Lab, Department of Radiology, Northwestern University, Chicago, USA
- Tamas Gonda
- Division of Gastroenterology and Hepatology, New York University, NY, USA
- Cemal Yazici
- Division of Gastroenterology and Hepatology, University of Illinois at Chicago, Chicago, IL, USA
- Temel Tirkes
- Department of Radiology and Imaging Sciences, Indiana University School of Medicine, Indianapolis, IN, USA
- Baris Turkbey
- Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Michael B Wallace
- Division of Gastroenterology and Hepatology, Mayo Clinic in Florida, Jacksonville, USA
- Ulas Bagci
- Machine & Hybrid Intelligence Lab, Department of Radiology, Northwestern University, Chicago, USA
2
Li J, Jiang P, An Q, Wang GG, Kong HF. Medical image identification methods: A review. Comput Biol Med 2024; 169:107777. PMID: 38104516; DOI: 10.1016/j.compbiomed.2023.107777. Received: 08/24/2023; Revised: 10/30/2023; Accepted: 11/28/2023.
Abstract
The identification of medical images is an essential task in computer-aided diagnosis, medical image retrieval, and mining. Medical image data mainly include electronic health record data and gene information data. Although intelligent imaging offers a better scheme for medical image analysis than traditional methods that rely on handcrafted features, it remains challenging due to the diversity of imaging modalities and clinical pathologies. This paper analyzes and summarizes the concepts behind the relevant methods, such as machine learning, deep learning, convolutional neural networks, transfer learning, and other image processing technologies for medical images. We reviewed recent studies to provide a comprehensive overview of how these methods are applied in various medical image analysis tasks, such as object detection, image classification, image registration, and segmentation. In particular, we emphasize the latest progress and contributions of different methods, summarized by application scenario (classification, segmentation, detection, and image registration) and by application area (pulmonary, brain, digital pathology, skin, renal, breast, neuromyelitis, vertebrae, musculoskeletal, etc.). Open challenges and directions for future research are then discussed critically; in particular, strong algorithms from computer vision, natural language processing, and autonomous driving are expected to be applied to medical image recognition in the future.
Affiliation(s)
- Juan Li
- School of Information Engineering, Wuhan Business University, Wuhan, 430056, China; School of Artificial Intelligence, Wuchang University of Technology, Wuhan, 430223, China; Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, 130012, China
- Pan Jiang
- School of Information Engineering, Wuhan Business University, Wuhan, 430056, China
- Qing An
- School of Artificial Intelligence, Wuchang University of Technology, Wuhan, 430223, China
- Gai-Ge Wang
- School of Computer Science and Technology, Ocean University of China, Qingdao, 266100, China
- Hua-Feng Kong
- School of Information Engineering, Wuhan Business University, Wuhan, 430056, China
3
Terzi R. An Ensemble of Deep Learning Object Detection Models for Anatomical and Pathological Regions in Brain MRI. Diagnostics (Basel) 2023; 13:1494. PMID: 37189595; DOI: 10.3390/diagnostics13081494. Received: 01/23/2023; Revised: 04/13/2023; Accepted: 04/17/2023.
Abstract
This paper proposes ensemble strategies for deep learning object detection models, formed both by combining variants of a single model and by combining different models, to enhance anatomical and pathological object detection performance in brain MRI. Using the novel Gazi Brains 2020 dataset, five anatomical parts and one pathological part observable in brain MRI were identified: the region of interest, eye, optic nerves, lateral ventricles, third ventricle, and whole tumor. First, nine state-of-the-art object detection models were comprehensively benchmarked to determine their ability to detect the anatomical and pathological parts. Then, four ensemble strategies over the nine detectors were applied to boost detection performance using a bounding-box fusion technique. Ensembling the variants of an individual model increased anatomical and pathological object detection performance by up to 10% in mean average precision (mAP), and by up to 18% in class-based average precision (AP) for the anatomical parts. Similarly, the ensemble of the best different models outperformed the best individual model by 3.3% mAP. Furthermore, the approach achieved up to 7% better FAUC (the area under the TPR vs. FPPI curve) on the Gazi Brains 2020 dataset and 2% better FAUC on the BraTS 2020 dataset. The proposed ensemble strategies proved especially efficient at finding anatomical parts with small object counts, such as the optic nerve and third ventricle, and at producing higher TPR values at low FPPI values, compared to the best individual methods.
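The bounding-box fusion idea above can be illustrated with a toy implementation. This is a generic greedy, score-weighted fusion sketch under our own assumptions (the `fuse_boxes` helper and the 0.55 IoU threshold are illustrative), not the exact fusion technique used in the paper:

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def fuse_boxes(boxes, scores, thr=0.55):
    """Cluster overlapping detections (from several detectors) and
    replace each cluster with a confidence-weighted average box."""
    order = np.argsort(scores)[::-1]          # visit high-confidence boxes first
    clusters = []
    for i in order:
        for c in clusters:
            if iou(boxes[i], c["rep"]) > thr:
                c["members"].append(i)
                break
        else:
            clusters.append({"rep": boxes[i], "members": [i]})
    fused = []
    for c in clusters:
        w = np.array([scores[i] for i in c["members"]])
        bs = np.array([boxes[i] for i in c["members"]])
        fused.append((w @ bs) / w.sum())      # score-weighted coordinate average
    return fused

# two detectors agree on one lesion; a third box is a separate finding
boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], float)
scores = np.array([0.9, 0.8, 0.7])
fused = fuse_boxes(boxes, scores)
```

Full weighted-boxes-fusion implementations also rescale cluster scores by how many detectors agreed; the sketch keeps only the core clustering-and-averaging idea.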
Affiliation(s)
- Ramazan Terzi
- Department of Big Data and Artificial Intelligence, Digital Transformation Office of the Presidency of Republic of Türkiye, Ankara 06100, Turkey
- Department of Computer Engineering, Amasya University, Amasya 05100, Turkey
4
Torosdagli N, Anwar S, Verma P, Liberton DK, Lee JS, Han WW, Bagci U. Relational reasoning network for anatomical landmarking. J Med Imaging (Bellingham) 2023; 10:024002. PMID: 36891503; PMCID: PMC9986769; DOI: 10.1117/1.jmi.10.2.024002. Received: 08/24/2022; Accepted: 02/13/2023.
Abstract
Purpose: We perform anatomical landmarking for craniomaxillofacial (CMF) bones without explicitly segmenting them. Toward this, we propose a simple yet efficient deep network architecture, called relational reasoning network (RRN), to accurately learn the local and global relations among the landmarks in CMF bones, specifically the mandible, maxilla, and nasal bones. Approach: The proposed RRN works in an end-to-end manner, utilizing learned relations of the landmarks based on dense-block units. Given a few landmarks as input, RRN treats the landmarking process as a data imputation problem in which the predicted landmarks are considered missing. Results: We applied RRN to cone-beam computed tomography scans obtained from 250 patients. With a fourfold cross-validation technique, we obtained an average root mean squared error of < 2 mm per landmark. RRN revealed unique relationships among the landmarks that help infer the informativeness of individual landmark points, and it identifies missing landmark locations accurately even when severe pathology or deformation is present in the bones. Conclusions: Accurately identifying anatomical landmarks is a crucial step in deformation analysis and surgical planning for CMF surgeries. Achieving this goal without explicit bone segmentation addresses a major limitation of segmentation-based approaches, in which segmentation failure (as is often the case in bones with severe pathology or deformation) can easily lead to incorrect landmarking. To the best of our knowledge, this is the first algorithm of its kind to find anatomical relations of objects using deep learning.
Affiliation(s)
- Syed Anwar
- University of Central Florida, Orlando, Florida, United States
- Children's National Hospital, Sheikh Zayed Institute, Washington, District of Columbia, United States
- George Washington University, Washington, District of Columbia, United States
- Payal Verma
- National Institute of Dental and Craniofacial Research (NIDCR), National Institutes of Health (NIH), Craniofacial Anomalies and Regeneration Section, Bethesda, Maryland, United States
- Denise K. Liberton
- National Institute of Dental and Craniofacial Research (NIDCR), National Institutes of Health (NIH), Craniofacial Anomalies and Regeneration Section, Bethesda, Maryland, United States
- Janice S. Lee
- National Institute of Dental and Craniofacial Research (NIDCR), National Institutes of Health (NIH), Craniofacial Anomalies and Regeneration Section, Bethesda, Maryland, United States
- Wade W. Han
- Boston Children's Hospital, Harvard Medical School, Department of Otolaryngology - Head and Neck Surgery, Boston, Massachusetts, United States
- Ther-AI, LLC, Kissimmee, Florida, United States
- Ulas Bagci
- University of Central Florida, Orlando, Florida, United States
- Ther-AI, LLC, Kissimmee, Florida, United States
- Northwestern University, Departments of Radiology, BME, and ECE, Machine and Hybrid Intelligence Lab, Chicago, Illinois, United States
5
deSouza NM, van der Lugt A, Deroose CM, Alberich-Bayarri A, Bidaut L, Fournier L, Costaridou L, Oprea-Lager DE, Kotter E, Smits M, Mayerhoefer ME, Boellaard R, Caroli A, de Geus-Oei LF, Kunz WG, Oei EH, Lecouvet F, Franca M, Loewe C, Lopci E, Caramella C, Persson A, Golay X, Dewey M, O'Connor JPB, deGraaf P, Gatidis S, Zahlmann G. Standardised lesion segmentation for imaging biomarker quantitation: a consensus recommendation from ESR and EORTC. Insights Imaging 2022; 13:159. PMID: 36194301; PMCID: PMC9532485; DOI: 10.1186/s13244-022-01287-4. Received: 05/31/2022; Accepted: 08/01/2022.
Abstract
BACKGROUND: Lesion/tissue segmentation on digital medical images enables biomarker extraction, image-guided therapy delivery, treatment response measurement, and training/validation for developing artificial intelligence algorithms and workflows. To ensure data reproducibility, criteria for standardised segmentation are critical but currently unavailable. METHODS: A modified Delphi process was undertaken, initiated by the European Imaging Biomarker Alliance (EIBALL) of the European Society of Radiology (ESR) and the European Organisation for Research and Treatment of Cancer (EORTC) Imaging Group. Three multidisciplinary task forces addressed modality and image acquisition, segmentation methodology itself, and standards and logistics. Devised survey questions were fed via a facilitator to expert participants. The 58 respondents to Round 1 were invited to participate in Rounds 2-4, with each subsequent round informed by the responses of the previous one. RESULTS/CONCLUSIONS: Items with ≥ 75% consensus are considered recommendations. These include system performance certification; thresholds for image signal-to-noise, contrast-to-noise, and tumour-to-background ratios; spatial resolution; and artefact levels. Direct, iterative, and machine or deep learning reconstruction methods and the use of a mixture of CE-marked and verified research tools were agreed, and the use of specified reference standards and validation processes was considered essential. Operator training and refresher training were considered mandatory for clinical trials and clinical research. Items with 60-74% agreement require reporting (site-specific accreditation for clinical research, minimal pixel number within the segmented lesion, use of post-reconstruction algorithms, operator refresher training for clinical practice). Items with < 60% agreement are outside the current recommendations for segmentation (frequency of system performance tests, use of only CE-marked tools, board certification of operators, frequency of operator refresher training). Recommendations by anatomical area are also specified.
Affiliation(s)
- Nandita M deSouza
- Division of Radiotherapy and Imaging, The Institute of Cancer Research and Royal Marsden NHS Foundation Trust, London, UK
- Aad van der Lugt
- Department of Radiology and Nuclear Medicine, Erasmus MC, University Medical Center, Rotterdam, The Netherlands
- Christophe M Deroose
- Nuclear Medicine, University Hospitals Leuven, Leuven, Belgium; Nuclear Medicine and Molecular Imaging, Department of Imaging and Pathology, KU Leuven, Leuven, Belgium
- Luc Bidaut
- College of Science, University of Lincoln, Lincoln, LN6 7TS, UK
- Laure Fournier
- INSERM, Radiology Department, AP-HP, Hopital Europeen Georges Pompidou, Université de Paris, PARCC, 75015, Paris, France
- Lena Costaridou
- School of Medicine, University of Patras, University Campus, Rio, 26 500, Patras, Greece
- Daniela E Oprea-Lager
- Department of Radiology and Nuclear Medicine, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Elmar Kotter
- Department of Radiology, University Medical Center Freiburg, Freiburg, Germany
- Marion Smits
- Department of Radiology and Nuclear Medicine, Erasmus MC, University Medical Center, Rotterdam, The Netherlands
- Marius E Mayerhoefer
- Department of Biomedical Imaging and Image-Guided Therapy, Medical University of Vienna, Vienna, Austria; Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Ronald Boellaard
- Department of Radiology and Nuclear Medicine, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Anna Caroli
- Department of Biomedical Engineering, Istituto di Ricerche Farmacologiche Mario Negri IRCCS, Bergamo, Italy
- Lioe-Fee de Geus-Oei
- Department of Radiology, Leiden University Medical Center, Leiden, The Netherlands; Biomedical Photonic Imaging Group, University of Twente, Enschede, The Netherlands
- Wolfgang G Kunz
- Department of Radiology, University Hospital, LMU Munich, Munich, Germany
- Edwin H Oei
- Department of Radiology and Nuclear Medicine, Erasmus MC, University Medical Center, Rotterdam, The Netherlands
- Frederic Lecouvet
- Department of Radiology, Institut de Recherche Expérimentale et Clinique (IREC), Cliniques Universitaires Saint Luc, Université Catholique de Louvain (UCLouvain), 10 Avenue Hippocrate, 1200, Brussels, Belgium
- Manuela Franca
- Department of Radiology, Centro Hospitalar Universitário do Porto, Instituto de Ciências Biomédicas de Abel Salazar, University of Porto, Porto, Portugal
- Christian Loewe
- Division of Cardiovascular and Interventional Radiology, Department for Bioimaging and Image-Guided Therapy, Medical University of Vienna, Vienna, Austria
- Egesta Lopci
- Nuclear Medicine, IRCCS - Humanitas Research Hospital, via Manzoni 56, Rozzano, MI, Italy
- Caroline Caramella
- Radiology Department, Hôpital Marie Lannelongue, Institut d'Oncologie Thoracique, Université Paris-Saclay, Le Plessis-Robinson, France
- Anders Persson
- Department of Radiology and Department of Health, Medicine and Caring Sciences, Center for Medical Image Science and Visualization (CMIV), Linköping University, Linköping, Sweden
- Xavier Golay
- Queen Square Institute of Neurology, University College London, London, UK
- Marc Dewey
- Department of Radiology, Charité Universitätsmedizin Berlin, Berlin, Germany
- James P B O'Connor
- Division of Radiotherapy and Imaging, The Institute of Cancer Research and Royal Marsden NHS Foundation Trust, London, UK
- Pim deGraaf
- Department of Radiology and Nuclear Medicine, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Sergios Gatidis
- Department of Radiology, University of Tübingen, Tübingen, Germany
- Gudrun Zahlmann
- Radiological Society of North America (RSNA), Oak Brook, IL, USA
6
Musculoskeletal MR Image Segmentation with Artificial Intelligence. Advances in Clinical Radiology 2022; 4:179-188. PMID: 36815063; PMCID: PMC9943059; DOI: 10.1016/j.yacr.2022.04.010.
7
Zhang G, Yang Z, Huo B, Chai S, Jiang S. Multiorgan segmentation from partially labeled datasets with conditional nnU-Net. Comput Biol Med 2021; 136:104658. PMID: 34311262; DOI: 10.1016/j.compbiomed.2021.104658. Received: 05/15/2021; Revised: 07/14/2021; Accepted: 07/15/2021.
Abstract
Accurate and robust multiorgan abdominal CT segmentation plays a significant role in numerous clinical applications, such as therapy planning and treatment delivery. Almost all existing segmentation networks rely on fully annotated data with strong supervision; however, producing fully annotated multiorgan CT data is both laborious and time-consuming, whereas massive partially labeled datasets are usually easily accessible. In this paper, we propose a conditional nnU-Net trained on the union of partially labeled datasets for multiorgan segmentation. The deep model employs the state-of-the-art nnU-Net as the backbone and introduces a conditioning strategy that feeds auxiliary information into the decoder architecture as an additional input layer. The model leverages this prior conditional information to identify the organ class at the pixel level and encourages recovery of the organs' spatial information. Furthermore, we adopt a deep supervision mechanism to refine the outputs at different scales and optimize training with a combination of Dice loss and focal loss. The proposed method is evaluated on seven publicly available datasets of the liver, pancreas, spleen, and kidney, achieving promising segmentation performance. The conditional nnU-Net breaks down the barriers between non-overlapping labeled datasets and further alleviates the problem of data hunger in multiorgan segmentation.
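The Dice-plus-focal training objective named in the abstract can be sketched as follows. This is a minimal NumPy version operating on flat probability/label arrays, not the paper's actual nnU-Net implementation, and the equal weighting `w=0.5` is an assumption:

```python
import numpy as np

def dice_loss(prob, target, eps=1e-6):
    """Soft Dice loss: 1 - Dice overlap between probabilities and labels."""
    inter = (prob * target).sum()
    return float(1.0 - (2.0 * inter + eps) / (prob.sum() + target.sum() + eps))

def focal_loss(prob, target, gamma=2.0, eps=1e-6):
    """Binary focal loss: cross-entropy down-weighted for easy examples."""
    p = np.clip(prob, eps, 1 - eps)
    pt = np.where(target == 1, p, 1 - p)   # probability assigned to the true class
    return float(np.mean(-((1 - pt) ** gamma) * np.log(pt)))

def combined_loss(prob, target, w=0.5):
    """Weighted sum of Dice and focal terms, as used for training here."""
    return w * dice_loss(prob, target) + (1 - w) * focal_loss(prob, target)
```

Dice handles class imbalance at the region level while the focal term concentrates gradient on hard pixels, which is why the two are commonly combined for organs of very different sizes.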
Affiliation(s)
- Guobin Zhang
- School of Mechanical Engineering, Tianjin University, Tianjin, 300350, China
- Zhiyong Yang
- School of Mechanical Engineering, Tianjin University, Tianjin, 300350, China
- Bin Huo
- Department of Oncology, Tianjin Medical University Second Hospital, Tianjin, 300211, China
- Shude Chai
- Department of Oncology, Tianjin Medical University Second Hospital, Tianjin, 300211, China
- Shan Jiang
- School of Mechanical Engineering, Tianjin University, Tianjin, 300350, China
8
LaLonde R, Xu Z, Irmakci I, Jain S, Bagci U. Capsules for biomedical image segmentation. Med Image Anal 2021; 68:101889. PMID: 33246227; PMCID: PMC7944580; DOI: 10.1016/j.media.2020.101889. Received: 03/05/2020; Revised: 08/25/2020; Accepted: 10/23/2020.
Abstract
Our work expands the use of capsule networks to the task of object segmentation for the first time in the literature. This is made possible by the introduction of locally constrained routing and transformation matrix sharing, which reduce the parameter/memory burden and allow segmentation of objects at large resolutions. To compensate for the loss of global information caused by constraining the routing, we propose the concept of "deconvolutional" capsules to create a deep encoder-decoder style network, called SegCaps. We extend masked reconstruction regularization to the segmentation task and perform thorough ablation experiments on each component of our method. The proposed convolutional-deconvolutional capsule network, SegCaps, shows state-of-the-art results while using a fraction of the parameters of popular segmentation networks. To validate the method, we perform experiments segmenting pathological lungs from clinical and pre-clinical thoracic computed tomography (CT) scans and segmenting muscle and adipose (fat) tissue from magnetic resonance imaging (MRI) scans of human subjects' thighs. Notably, our lung experiments represent the largest-scale study of pathological lung segmentation in the literature, spanning five extremely challenging datasets, containing both clinical and pre-clinical subjects, and nearly 2,000 CT scans. Our segmentation platform outperforms other methods across all datasets while utilizing less than 5% of the parameters of the popular U-Net for biomedical image segmentation. Further, we demonstrate capsules' ability to generalize to unseen rotations/reflections on natural images.
Affiliation(s)
- Rodney LaLonde
- Center for Research in Computer Vision (CRCV), University of Central Florida (UCF), Orlando, FL
- Sanjay Jain
- Johns Hopkins University, Baltimore, MD, USA
- Ulas Bagci
- Center for Research in Computer Vision (CRCV), University of Central Florida (UCF), Orlando, FL
9
Peña-Solórzano CA, Albrecht DW, Bassed RB, Burke MD, Dimmock MR. Findings from machine learning in clinical medical imaging applications - Lessons for translation to the forensic setting. Forensic Sci Int 2020; 316:110538. PMID: 33120319; PMCID: PMC7568766; DOI: 10.1016/j.forsciint.2020.110538. Received: 10/15/2019; Revised: 04/28/2020; Accepted: 10/04/2020.
Abstract
Machine learning (ML) techniques are increasingly being used in clinical medical imaging to automate distinct processing tasks. In post-mortem forensic radiology, the use of these algorithms presents significant challenges due to variability in organ position, structural changes from decomposition, inconsistent body placement in the scanner, and the presence of foreign bodies. Existing ML approaches in clinical imaging can likely be transferred to the forensic setting, with careful consideration of the increased variability and the temporal factors that affect the data used to train these algorithms. Additional steps are required to deal with these issues, either by incorporating the possible variability into the training data through data augmentation or by using atlases as a pre-processing step to account for death-related factors. A key application of ML would then be to highlight anatomical and gross pathological features of interest, or to present information that helps optimally determine the cause of death. In this review, we highlight the results and limitations of ML applications in clinical medical imaging and determine the key implications for their application in the forensic setting.
Affiliation(s)
- Carlos A Peña-Solórzano
- Department of Medical Imaging and Radiation Sciences, Monash University, Wellington Rd, Clayton, Melbourne, VIC 3800, Australia
- David W Albrecht
- Clayton School of Information Technology, Monash University, Wellington Rd, Clayton, Melbourne, VIC 3800, Australia
- Richard B Bassed
- Victorian Institute of Forensic Medicine, 57-83 Kavanagh St., Southbank, Melbourne, VIC 3006, Australia; Department of Forensic Medicine, Monash University, Wellington Rd, Clayton, Melbourne, VIC 3800, Australia
- Michael D Burke
- Victorian Institute of Forensic Medicine, 57-83 Kavanagh St., Southbank, Melbourne, VIC 3006, Australia; Department of Forensic Medicine, Monash University, Wellington Rd, Clayton, Melbourne, VIC 3800, Australia
- Matthew R Dimmock
- Department of Medical Imaging and Radiation Sciences, Monash University, Wellington Rd, Clayton, Melbourne, VIC 3800, Australia
10
Abstract
Lung nodule segmentation is an essential step in any CAD system for lung cancer detection and diagnosis. Traditional approaches to image segmentation are mainly morphology based or intensity based, whereas motion-based segmentation techniques use temporal information along with morphology and intensity to segment regions of interest in videos. A CT scan consists of a sequence of 2-D DICOM image slices, much as a video consists of a sequence of image frames ordered on a timeline. In this work, the Farneback, Horn-Schunck, and Lucas-Kanade optical flow methods are used to process the DICOM slices. The novelty of this work lies in applying optical flow methods, generally used in motion-based segmentation tasks, to the segmentation of nodules from CT images. Since thin-sliced CT scans, the imaging modality considered here, closely approximate motion videos, they are the primary motivation for using optical flow for lung nodule segmentation. This paper also provides a detailed comparative analysis and validates the effectiveness of optical flow methods for segmentation. Finally, we propose ways to further improve the efficiency of optical flow segmentation on CT scans.
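Of the three optical flow methods named, Lucas-Kanade is the simplest to sketch. Below is a minimal single-window NumPy version for intuition only; real pipelines would use library implementations (e.g., OpenCV) with image pyramids and per-pixel windows:

```python
import numpy as np

def lucas_kanade_window(f0, f1):
    """Estimate one (dx, dy) displacement between two image patches by
    solving the least-squares optical-flow (brightness constancy) system."""
    Iy, Ix = np.gradient(f0)          # spatial gradients (rows = y, cols = x)
    It = f1 - f0                      # temporal gradient between "slices"
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    (dx, dy), *_ = np.linalg.lstsq(A, b, rcond=None)
    return dx, dy

# synthetic example: a Gaussian blob shifted by 0.3 px in x between slices
y, x = np.mgrid[0:32, 0:32]
blob = lambda cx: np.exp(-((x - cx) ** 2 + (y - 16) ** 2) / 20.0)
dx, dy = lucas_kanade_window(blob(15.0), blob(15.3))
```

The single linear system is only valid for sub-pixel motion on smooth intensities, which is why slice-to-slice CT (small anatomical change between thin slices) is a plausible fit for the technique.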
|
11
|
Cerrolaza JJ, Picazo ML, Humbert L, Sato Y, Rueckert D, Ballester MÁG, Linguraru MG. Computational anatomy for multi-organ analysis in medical imaging: A review. Med Image Anal 2019; 56:44-67. [DOI: 10.1016/j.media.2019.04.002] [Citation(s) in RCA: 28] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2018] [Revised: 02/05/2019] [Accepted: 04/13/2019] [Indexed: 12/19/2022]
|
12
|
Ha S, Choi H, Paeng JC, Cheon GJ. Radiomics in Oncological PET/CT: a Methodological Overview. Nucl Med Mol Imaging 2019; 53:14-29. [PMID: 30828395 DOI: 10.1007/s13139-019-00571-4] [Citation(s) in RCA: 74] [Impact Index Per Article: 12.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2018] [Revised: 11/27/2018] [Accepted: 01/02/2019] [Indexed: 02/07/2023] Open
Abstract
Radiomics is a medical image analysis approach based on computer vision. Metabolic radiomics in particular analyzes the spatial distribution patterns of molecular metabolism on PET images. Measuring intratumoral heterogeneity from images is one of the main targets of radiomics research, which aims to build image-based models for better patient management. The workflow of radiomics using texture analysis follows these steps: 1) imaging (image acquisition and reconstruction); 2) preprocessing (segmentation and quantization); 3) quantification (texture matrix design and texture feature extraction); and 4) analysis (statistics and/or machine learning). The parameters or conditions chosen at each of these steps affect the results. In statistical testing or modeling, problems such as multiple comparisons, dependence on other variables, and the high dimensionality of small-sample-size data should be considered. Standardization of methodology and harmonization of image quality are among the most important challenges for radiomics. Despite these open methodological issues, radiomics is expected to be clinically useful in personalized medicine for oncology.
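Step 3 of the workflow above (texture matrix design and feature extraction) can be sketched with a gray-level co-occurrence matrix (GLCM). The NumPy example below quantizes an image, builds a GLCM for a single "one pixel to the right" offset, and derives two Haralick-style features; the quantization depth and offset are arbitrary choices for illustration, not values prescribed by the paper:

```python
import numpy as np

def glcm_features(img, levels=8):
    """Quantize an image, build a GLCM for the right-neighbour offset,
    and return two Haralick-style texture features."""
    # 1) Quantization: equal-width bins over the image's own range.
    lo, hi = float(img.min()), float(img.max())
    q = np.floor((img - lo) / (hi - lo + 1e-12) * levels).astype(int)
    q = np.clip(q, 0, levels - 1)
    # 2) Co-occurrence counts for pixel pairs (p, right neighbour of p).
    src, dst = q[:, :-1].ravel(), q[:, 1:].ravel()
    P = np.zeros((levels, levels))
    np.add.at(P, (src, dst), 1.0)
    P /= P.sum()                                  # joint probabilities
    # 3) Feature extraction from the normalized matrix.
    i, j = np.indices(P.shape)
    contrast = np.sum(P * (i - j) ** 2)           # local intensity variation
    homogeneity = np.sum(P / (1.0 + np.abs(i - j)))
    return contrast, homogeneity

# A smooth gradient vs. the same intensities spatially shuffled.
rng = np.random.default_rng(0)
smooth = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
vals = smooth.ravel().copy()
rng.shuffle(vals)
shuffled = vals.reshape(64, 64)
c_s, h_s = glcm_features(smooth)
c_r, h_r = glcm_features(shuffled)
print(c_s < c_r, h_s > h_r)  # True True
```

Both images share the same intensity histogram, yet their GLCM features differ sharply, which is exactly why texture features capture spatial heterogeneity that first-order statistics miss.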
Affiliation(s)
- Seunggyun Ha
- 1Radiation Medicine Research Institute, Seoul National University College of Medicine, Seoul, South Korea
- 2Department of Nuclear Medicine, Seoul National University Hospital, Seoul, South Korea
| | - Hongyoon Choi
- 2Department of Nuclear Medicine, Seoul National University Hospital, Seoul, South Korea
| | - Jin Chul Paeng
- 2Department of Nuclear Medicine, Seoul National University Hospital, Seoul, South Korea
| | - Gi Jeong Cheon
- 1Radiation Medicine Research Institute, Seoul National University College of Medicine, Seoul, South Korea
- 2Department of Nuclear Medicine, Seoul National University Hospital, Seoul, South Korea
- 3Cancer Research Institute, Seoul National University College of Medicine, Seoul, South Korea
|
13
|
Kechichian R, Valette S, Desvignes M. Automatic Multiorgan Segmentation via Multiscale Registration and Graph Cut. IEEE TRANSACTIONS ON MEDICAL IMAGING 2018; 37:2739-2749. [PMID: 29994393 DOI: 10.1109/tmi.2018.2851780] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
We propose an automatic multiorgan segmentation method for 3-D radiological images of different anatomical contents and modalities. The approach is based on a simultaneous multilabel graph cut optimization of location, appearance, and spatial configuration criteria of target structures. Organ location is defined by target-specific probabilistic atlases (PA) constructed from a training dataset using a fast (2+1)D SURF-based multiscale registration method involving a simple four-parameter transformation. PAs are also used to derive target-specific organ appearance models represented as intensity histograms. The spatial configuration prior is derived from shortest-path constraints defined on the adjacency graph of structures. Thorough evaluations on Visceral project benchmarks and training dataset, as well as comparisons with the state-of-the-art confirm that our approach is comparable to and often outperforms similar approaches in multiorgan segmentation, thus proving that the combination of multiple suboptimal but complementary information sources can yield very good performance.
|
14
|
Wang H, Zhang N, Huo L, Zhang B. Dual-modality multi-atlas segmentation of torso organs from [18F]FDG-PET/CT images. Int J Comput Assist Radiol Surg 2018; 14:473-482. [PMID: 30390179 DOI: 10.1007/s11548-018-1879-3] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2018] [Accepted: 10/23/2018] [Indexed: 11/28/2022]
Abstract
PURPOSE Automated segmentation of torso organs from positron emission tomography/computed tomography (PET/CT) images is a prerequisite step for nuclear medicine image analysis. However, accurate organ segmentation from clinical PET/CT is challenging due to the poor soft tissue contrast in the low-dose CT image and the low spatial resolution of the PET image. To overcome these challenges, we developed a multi-atlas segmentation (MAS) framework for torso organ segmentation from 2-deoxy-2-[18F]fluoro-D-glucose PET/CT images. METHOD Our key idea is to use PET information to compensate for the imperfect CT contrast and use surface-based atlas fusion to overcome the low PET resolution. First, all the organs are segmented from CT using a conventional MAS method, and then the abdomen region of the PET image is automatically cropped. Focusing on the cropped PET image, a refined MAS segmentation of the abdominal organs is performed, using a surface-based atlas fusion approach to reach subvoxel accuracy. RESULTS This method was validated on 69 PET/CT images. The Dice coefficients of the target organs were between 0.80 and 0.96, and the average surface distances (ASD) were between 1.58 and 2.44 mm. Compared to the CT-based segmentation, the PET-based segmentation gained a Dice increase of 0.06 and an ASD decrease of 0.38 mm. The surface-based atlas fusion led to a significant accuracy improvement for the liver and kidneys and saved ~10 min of computation time compared to volumetric atlas fusion. CONCLUSIONS The presented method achieves better segmentation accuracy than the conventional MAS method within a computation time acceptable for clinical applications.
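The atlas-fusion step at the core of any MAS framework can be illustrated with its simplest volumetric variant, a per-voxel majority vote over co-registered atlas label maps (the paper itself uses a more refined surface-based fusion; this sketch is only the common baseline):

```python
import numpy as np

def majority_vote_fusion(label_maps):
    """Fuse co-registered atlas label maps by per-voxel majority vote.

    label_maps: sequence of same-shaped integer label arrays, one per
    atlas. Returns the most frequent label at every voxel (ties resolve
    to the lowest label value).
    """
    stack = np.asarray(label_maps)
    labels = np.unique(stack)
    # For each candidate label, count how many atlases vote for it.
    votes = np.stack([(stack == lab).sum(axis=0) for lab in labels])
    return labels[np.argmax(votes, axis=0)]

# Three toy 1-D "atlas segmentations" of the same scan
# (0 = background, 1 = organ A, 2 = organ B).
a1 = np.array([0, 1, 1, 2, 2])
a2 = np.array([0, 1, 2, 2, 2])
a3 = np.array([0, 0, 1, 2, 0])
fused = majority_vote_fusion([a1, a2, a3])
print(fused.tolist())  # [0, 1, 1, 2, 2]
```

In a real pipeline the atlases would first be deformably registered to the target scan; the vote then averages out each individual atlas's registration errors.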
Affiliation(s)
- Hongkai Wang
- Department of Biomedical Engineering, Dalian University of Technology, Dalian, Liaoning, China
| | - Nan Zhang
- Department of Biomedical Engineering, Dalian University of Technology, Dalian, Liaoning, China
| | - Li Huo
- Department of Nuclear Medicine, Peking Union Medical College Hospital, Beijing, China
| | - Bin Zhang
- Department of Biomedical Engineering, Dalian University of Technology, Dalian, Liaoning, China.
|
15
|
Irmakci I, Hussein S, Savran A, Kalyani RR, Reiter D, Chia CW, Fishbein KW, Spencer RG, Ferrucci L, Bagci U. A Novel Extension to Fuzzy Connectivity for Body Composition Analysis: Applications in Thigh, Brain, and Whole Body Tissue Segmentation. IEEE Trans Biomed Eng 2018; 66:1069-1081. [PMID: 30176577 DOI: 10.1109/tbme.2018.2866764] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/15/2023]
Abstract
Magnetic resonance imaging (MRI) is the non-invasive modality of choice for body tissue composition analysis due to its excellent soft-tissue contrast and lack of ionizing radiation. However, quantification of body composition requires an accurate segmentation of fat, muscle, and other tissues from MR images, which remains a challenging goal due to the intensity overlap between them. In this study, we propose a fully automated, data-driven image segmentation platform that addresses multiple difficulties in segmenting MR images such as varying inhomogeneity, non-standardness, and noise, while producing a high-quality definition of different tissues. In contrast to most approaches in the literature, we perform segmentation operation by combining three different MRI contrasts and a novel segmentation tool, which takes into account variability in the data. The proposed system, based on a novel affinity definition within the fuzzy connectivity image segmentation family, prevents the need for user intervention and reparametrization of the segmentation algorithms. In order to make the whole system fully automated, we adapt an affinity propagation clustering algorithm to roughly identify tissue regions and image background. We perform a thorough evaluation of the proposed algorithm's individual steps as well as comparison with several approaches from the literature for the main application of muscle/fat separation. Furthermore, whole-body tissue composition and brain tissue delineation were conducted to show the generalization ability of the proposed system. This new automated platform outperforms other state-of-the-art segmentation approaches both in accuracy and efficiency.
|
16
|
Xu Z, Gao M, Papadakis GZ, Luna B, Jain S, Mollura DJ, Bagci U. Joint solution for PET image segmentation, denoising, and partial volume correction. Med Image Anal 2018; 46:229-243. [PMID: 29627687 PMCID: PMC6080255 DOI: 10.1016/j.media.2018.03.007] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2017] [Revised: 03/15/2018] [Accepted: 03/17/2018] [Indexed: 10/17/2022]
Abstract
Segmentation, denoising, and partial volume correction (PVC) are three major processes in the quantification of uptake regions in post-reconstruction PET images. These problems are conventionally addressed by independent steps. In this study, we hypothesize that these three processes are dependent; therefore, jointly solving them can provide optimal support for quantification of the PET images. To achieve this, we utilize interactions among these processes when designing solutions for each challenge. We also demonstrate that segmentation can help in denoising and PVC by locally constraining the smoothness and correction criteria. For denoising, we adapt generalized Anscombe transformation to Gaussianize the multiplicative noise followed by a new adaptive smoothing algorithm called regional mean denoising. For PVC, we propose a volume consistency-based iterative voxel-based correction algorithm in which denoised and delineated PET images guide the correction process during each iteration precisely. For PET image segmentation, we use affinity propagation (AP)-based iterative clustering method that helps the integration of PVC and denoising algorithms into the delineation process. Qualitative and quantitative results, obtained from phantoms, clinical, and pre-clinical data, show that the proposed framework provides an improved and joint solution for segmentation, denoising, and partial volume correction.
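The variance-stabilization idea behind the denoising step above can be illustrated with the plain Anscombe transform for Poisson noise (the paper uses the generalized Anscombe transform to handle mixed Poisson-Gaussian noise; this simpler variant conveys the same principle):

```python
import numpy as np

def anscombe(x):
    """Anscombe transform: maps Poisson counts to data with
    approximately unit-variance Gaussian noise, so that an ordinary
    Gaussian denoiser can then be applied."""
    return 2.0 * np.sqrt(np.asarray(x, dtype=float) + 3.0 / 8.0)

def inverse_anscombe(y):
    """Simple algebraic inverse (the exact unbiased inverse adds small
    correction terms; this version suffices for a sketch)."""
    return (y / 2.0) ** 2 - 3.0 / 8.0

rng = np.random.default_rng(1)
lam = 50.0
counts = rng.poisson(lam, size=200_000)
stabilized = anscombe(counts)
# Raw Poisson variance grows with the mean (here = lam); after the
# transform the noise variance is ~1, independent of intensity.
print(round(counts.var() / lam, 2), round(stabilized.var(), 2))
```

Stabilizing the noise first is what lets a single smoothing strength work across both hot and cold uptake regions of a PET image.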
Affiliation(s)
- Ziyue Xu
- Center for Infectious Disease Imaging (CIDI), Radiology and Imaging Science Department, National Institutes of Health (NIH), Bethesda, MD 20892, USA
| | - Mingchen Gao
- Center for Infectious Disease Imaging (CIDI), Radiology and Imaging Science Department, National Institutes of Health (NIH), Bethesda, MD 20892, USA
| | - Georgios Z Papadakis
- Center for Infectious Disease Imaging (CIDI), Radiology and Imaging Science Department, National Institutes of Health (NIH), Bethesda, MD 20892, USA
| | - Brian Luna
- University of California at Irvine, Irvine, CA, USA
| | - Sanjay Jain
- Johns Hopkins University School of Medicine, Baltimore, MD, USA
| | - Daniel J Mollura
- Center for Infectious Disease Imaging (CIDI), Radiology and Imaging Science Department, National Institutes of Health (NIH), Bethesda, MD 20892, USA
| | - Ulas Bagci
- University of Central Florida, Orlando, FL, USA.
|
17
|
Wieclawek W. 3D marker-controlled watershed for kidney segmentation in clinical CT exams. Biomed Eng Online 2018; 17:26. [PMID: 29482560 PMCID: PMC5828230 DOI: 10.1186/s12938-018-0456-x] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2017] [Accepted: 02/14/2018] [Indexed: 11/22/2022] Open
Abstract
Background Image segmentation is an essential and non-trivial task in computer vision and medical image analysis. Computed tomography (CT) is one of the most accessible medical examination techniques for visualizing the interior of a patient's body. Among different computer-aided diagnostic systems, applications dedicated to kidney segmentation represent a relatively small group, and literature solutions are typically verified on relatively small databases. The goal of this research is to develop a novel algorithm for fully automated kidney segmentation, designed for large-database analysis covering both physiological and pathological cases. Methods This study presents a 3D marker-controlled watershed transform developed and employed for fully automated CT kidney segmentation. The original and most complex step in the current proposition is the automatic generation of 3D marker images. The final kidney segmentation step is an analysis of the labelled image obtained from the marker-controlled watershed transform, consisting of morphological operations and shape analysis. The implementation was conducted in MATLAB (Version 2017a), using, among others, the Image Processing Toolbox. 170 clinical abdominal CT studies were subjected to the analysis. The dataset includes normal as well as various pathological cases (agenesis, renal cysts, tumors, renal cell carcinoma, kidney cirrhosis, partial or radical nephrectomy, hematoma and nephrolithiasis). Manual and semi-automated delineations were used as a gold standard. Results Among 67 delineated medical cases, 62 cases are ‘Very good’, whereas only 5 are ‘Good’ according to Cohen’s kappa interpretation. The segmentation results show that mean values of Sensitivity, Specificity, Dice, Jaccard, Cohen’s Kappa and Accuracy are 90.29, 99.96, 91.68, 85.04, 91.62 and 99.89% respectively. All 170 medical cases (with and without outlines) were classified by three independent medical experts as ‘Very good’ in 143–148 cases, as ‘Good’ in 15–21 cases and as ‘Moderate’ in 6–8 cases. Conclusions An automatic kidney segmentation approach for CT studies that competes with commonly known solutions was developed. The algorithm gives promising results, which were confirmed by a validation procedure on a relatively large database, including 170 CTs with both physiological and pathological cases.
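The voxel-overlap metrics reported above (Sensitivity, Specificity, Dice, Jaccard) all reduce to confusion-matrix counts between a predicted mask and a ground-truth mask. A minimal NumPy sketch on a toy 2-D mask pair:

```python
import numpy as np

def overlap_metrics(pred, gt):
    """Standard overlap metrics between binary segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt)      # voxels correctly labelled organ
    fp = np.sum(pred & ~gt)     # false organ voxels
    fn = np.sum(~pred & gt)     # missed organ voxels
    tn = np.sum(~pred & ~gt)    # correctly labelled background
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "dice": 2 * tp / (2 * tp + fp + fn),
        "jaccard": tp / (tp + fp + fn),
    }

# Ground truth: a 10x10 square (100 voxels); prediction misses one row.
gt = np.zeros((20, 20), bool);   gt[5:15, 5:15] = True
pred = np.zeros((20, 20), bool); pred[6:15, 5:15] = True
m = overlap_metrics(pred, gt)
# tp=90, fp=0, fn=10 -> Dice = 180/190 ~ 0.947, Jaccard = 90/100 = 0.9
print(round(m["dice"], 3), m["jaccard"])
```

Note how Dice rewards the same overlap more generously than Jaccard; the two are monotonically related, so they rank segmentations identically.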
Affiliation(s)
- Wojciech Wieclawek
- Department of Informatics and Medical Equipment, Faculty of Biomedical Engineering, Silesian University of Technology, Roosevelta 40, 41-800, Zabrze, Poland.
|
18
|
Yeghiazaryan V, Voiculescu I. Family of boundary overlap metrics for the evaluation of medical image segmentation. J Med Imaging (Bellingham) 2018; 5:015006. [PMID: 29487883 DOI: 10.1117/1.jmi.5.1.015006] [Citation(s) in RCA: 75] [Impact Index Per Article: 10.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2017] [Accepted: 01/11/2018] [Indexed: 11/14/2022] Open
Abstract
All medical image segmentation algorithms need to be validated and compared, yet no evaluation framework is widely accepted within the imaging community. None of the evaluation metrics that are popular in the literature are consistent in the way they rank segmentation results: they tend to be sensitive to one or another type of segmentation error (size, location, and shape) but no single metric covers all error types. We introduce a family of metrics, with hybrid characteristics. These metrics quantify the similarity or difference of segmented regions by considering their average overlap in fixed-size neighborhoods of points on the boundaries of those regions. Our metrics are more sensitive to combinations of segmentation error types than other metrics in the existing literature. We compare the metric performance on collections of segmentation results sourced from carefully compiled two-dimensional synthetic data and three-dimensional medical images. We show that our metrics: (1) penalize errors successfully, especially those around region boundaries; (2) give a low similarity score when existing metrics disagree, thus avoiding overly inflated scores; and (3) score segmentation results over a wider range of values. We analyze a representative metric from this family and the effect of its free parameter on error sensitivity and running time.
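Boundary-oriented evaluation of the kind discussed above can be illustrated with the 95th-percentile Hausdorff distance (HD95), a widely used boundary metric (not the authors' proposed metric family; shown here for comparison). A brute-force NumPy sketch for small 2-D masks:

```python
import numpy as np

def boundary_points(mask):
    """Coordinates of mask pixels with at least one background
    4-neighbour, i.e. the region boundary."""
    m = mask.astype(bool)
    pad = np.pad(m, 1)
    interior = (pad[:-2, 1:-1] & pad[2:, 1:-1] &
                pad[1:-1, :-2] & pad[1:-1, 2:])
    return np.argwhere(m & ~interior)

def hd95(a, b):
    """Symmetric 95th-percentile Hausdorff distance between two masks
    (brute-force pairwise distances; fine for small regions)."""
    pa, pb = boundary_points(a), boundary_points(b)
    d = np.sqrt(((pa[:, None, :] - pb[None, :, :]) ** 2).sum(-1))
    return max(np.percentile(d.min(axis=1), 95),   # a -> b distances
               np.percentile(d.min(axis=0), 95))   # b -> a distances

a = np.zeros((30, 30)); a[5:15, 5:15] = 1
b = np.zeros((30, 30)); b[5:15, 7:17] = 1   # same square, shifted 2 right
print(hd95(a, b))  # 2.0
```

Taking the 95th percentile instead of the maximum is what makes HD95 robust to a single stray boundary voxel, which is exactly the kind of outlier sensitivity that motivates hybrid boundary-overlap metrics.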
Affiliation(s)
- Varduhi Yeghiazaryan
- University of Oxford, Spatial Reasoning Group, Department of Computer Science, Oxford, United Kingdom
| | - Irina Voiculescu
- University of Oxford, Spatial Reasoning Group, Department of Computer Science, Oxford, United Kingdom
|
19
|
Abstract
Medical image segmentation is a fundamental and challenging problem in analyzing medical images. Among existing medical image segmentation methods, graph-based approaches are relatively new and have shown favorable properties in clinical applications. In a graph-based method, pixels or regions of the original image are interpreted as nodes in a graph. By using a Markov random field to model the contextual information of the image, the medical image segmentation problem can be transformed into a graph-based energy minimization problem, which can be solved by a minimum s-t cut/maximum flow algorithm. This review is devoted to cut-based medical segmentation methods, including graph cuts and graph search for region and surface segmentation. Different varieties of cut-based methods, including graph-cuts-based methods, model-integrated graph cuts methods, graph-search-based methods, and graph search/graph cuts based methods, are systematically reviewed. Graph cuts and graph search combined with deep learning techniques are also discussed.
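The minimum s-t cut/maximum flow step at the heart of graph-cut segmentation can be sketched with the Edmonds-Karp algorithm on a tiny 1-D "image" of three pixels. The terminal ("unary") and neighbour ("pairwise") capacities below are made-up numbers for illustration only:

```python
from collections import deque

def max_flow(capacity, s, t):
    """Edmonds-Karp maximum flow; its value equals the minimum s-t cut.

    capacity: dict {u: {v: cap}} describing a directed graph.
    """
    flow = 0
    # Residual capacities, with zero-capacity reverse edges added.
    res = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u, nbrs in capacity.items():
        for v in nbrs:
            res.setdefault(v, {}).setdefault(u, 0)
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow          # no augmenting path left: done
        # Recover the path, push the bottleneck amount of flow.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= bottleneck
            res[v][u] += bottleneck
        flow += bottleneck

# Tiny 1-D image p0-p1-p2. Source S encodes the foreground model, sink T
# the background model; pixel-pixel edges encode the smoothness term.
cap = {
    "S": {"p0": 9, "p1": 5, "p2": 1},
    "p0": {"p1": 2, "T": 1},
    "p1": {"p0": 2, "p2": 2, "T": 4},
    "p2": {"p1": 2, "T": 8},
    "T": {},
}
print(max_flow(cap, "S", "T"))  # 8
```

With these capacities the minimum cut keeps p0 and p1 on the source (foreground) side and p2 on the sink (background) side, at total cost 8, which is exactly the label assignment an MRF energy minimization would return.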
|
20
|
Xiang D, Bagci U, Jin C, Shi F, Zhu W, Yao J, Sonka M, Chen X. CorteXpert: A model-based method for automatic renal cortex segmentation. Med Image Anal 2017; 42:257-273. [DOI: 10.1016/j.media.2017.06.010] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2016] [Revised: 05/17/2017] [Accepted: 06/22/2017] [Indexed: 10/19/2022]
|
21
|
|
22
|
3D Kidney Segmentation from Abdominal Images Using Spatial-Appearance Models. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2017; 2017:9818506. [PMID: 28280519 PMCID: PMC5322574 DOI: 10.1155/2017/9818506] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/19/2016] [Revised: 11/29/2016] [Accepted: 12/22/2016] [Indexed: 11/18/2022]
Abstract
Kidney segmentation is an essential step in developing any noninvasive computer-assisted diagnostic system for renal function assessment. This paper introduces an automated framework for 3D kidney segmentation from dynamic computed tomography (CT) images that integrates discriminative features from the current and prior CT appearances into a random forest classification approach. To account for CT images' inhomogeneities, we employ discriminative features that are extracted from a higher-order spatial model and an adaptive shape model in addition to the first-order CT appearance. To model the interactions between CT data voxels, we employed a higher-order spatial model, which adds the triple and quad clique families to the traditional pairwise clique family. The kidney shape prior model is built using a set of training CT data and is updated during segmentation using not only region labels but also voxels' appearances in neighboring spatial voxel locations. Our framework's performance has been evaluated on in vivo dynamic CT data collected from 20 subjects, comprising multiple 3D scans acquired before and after contrast medium administration. Quantitative evaluation between manually and automatically segmented kidney contours using Dice similarity, percentage volume differences, and 95th-percentile bidirectional Hausdorff distances confirms the high accuracy of our approach.
|
23
|
Göçeri E. Fully automated liver segmentation using Sobolev gradient-based level set evolution. INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING 2016; 32:e02765. [PMID: 26728097 DOI: 10.1002/cnm.2765] [Citation(s) in RCA: 29] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/10/2015] [Revised: 09/23/2015] [Accepted: 12/25/2015] [Indexed: 06/05/2023]
Abstract
Quantitative analysis and precise measurements of the liver are of vital importance for the pre-evaluation of surgical operations and require high accuracy in liver segmentation from all slices in a data set. However, automated liver segmentation from medical image data sets is more challenging than segmentation of any other organ due to various reasons such as vascular structures in the liver, high variability of liver shapes, similar intensity values, and unclear edges between the liver and its adjacent organs. In this study, a variational level set-based segmentation approach is proposed that is efficient in terms of processing time and accuracy. The efficiency of this method is achieved by (1) automated initialization of a large initial contour, (2) use of an adaptive signed pressure force function, and (3) evolution of the level set with the Sobolev gradient. Experimental results show that the proposed fully automated segmentation technique avoids local minima and stops evolution of the active contour at the desired liver boundaries with high speed and accuracy.
Affiliation(s)
- Evgin Göçeri
- Department of Computer Engineering, Akdeniz University, 07058, Antalya, Turkey.
|
24
|
Lobachev O, Ulrich C, Steiniger BS, Wilhelmi V, Stachniss V, Guthe M. Feature-based multi-resolution registration of immunostained serial sections. Med Image Anal 2016; 35:288-302. [PMID: 27494805 DOI: 10.1016/j.media.2016.07.010] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2015] [Revised: 07/03/2016] [Accepted: 07/21/2016] [Indexed: 10/21/2022]
Abstract
The form and exact function of the blood vessel network in some human organs, like spleen and bone marrow, are still open research questions in medicine. In this paper, we propose a method to register the immunohistological stainings of serial sections of spleen and bone marrow specimens to enable the visualization and visual inspection of blood vessels. As these vary greatly in caliber, from mesoscopic (millimeter-range) to microscopic (a few micrometers, comparable to a single erythrocyte), we need to utilize a multi-resolution approach. Our method is fully automatic; it is based on feature detection and sparse matching. We utilize a rigid alignment and then a non-rigid deformation, iteratively dealing with increasingly smaller features. Our tool pipeline can already deal with series of complete scans at extremely high resolution, up to 620 megapixels. The improvement presented increases the range of represented details down to the smallest capillaries. This paper provides details on the multi-resolution non-rigid registration approach we use. Our application is novel in the way the alignment and subsequent deformations are computed (using features, i.e. "sparse"). The deformations are based on all images in the stack ("global"). We also present volume renderings and a 3D reconstruction of the vascular network in human spleen and bone marrow on a level not possible before. Our registration makes tracking of even the smallest blood vessels easy, granting experts a better understanding. A quantitative evaluation of our method and related state-of-the-art approaches with seven different quality measures shows the efficiency of our method. We also provide z-profiles and enlarged volume renderings from three different registrations for visual inspection.
Affiliation(s)
- Oleg Lobachev
- Visual Computing of University Bayreuth, 95440 Bayreuth, Germany.
| | - Christine Ulrich
- Psychology of Philipps-University Marburg, 35037 Marburg, Germany
| | - Birte S Steiniger
- Institute of Anatomy and Cell Biology of Philipps-University Marburg 35037 Marburg, Germany
| | - Verena Wilhelmi
- Institute of Anatomy and Cell Biology of Philipps-University Marburg 35037 Marburg, Germany
| | - Vitus Stachniss
- Restorative Dentistry and Endodontics of Philipps-University Marburg, 35037 Marburg, Germany
| | - Michael Guthe
- Visual Computing of University Bayreuth, 95440 Bayreuth, Germany
|
25
|
Mansoor A, Bagci U, Foster B, Xu Z, Papadakis GZ, Folio LR, Udupa JK, Mollura DJ. Segmentation and Image Analysis of Abnormal Lungs at CT: Current Approaches, Challenges, and Future Trends. Radiographics 2016; 35:1056-76. [PMID: 26172351 DOI: 10.1148/rg.2015140232] [Citation(s) in RCA: 105] [Impact Index Per Article: 11.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
The computer-based process of identifying the boundaries of lung from surrounding thoracic tissue on computed tomographic (CT) images, which is called segmentation, is a vital first step in radiologic pulmonary image analysis. Many algorithms and software platforms provide image segmentation routines for quantification of lung abnormalities; however, nearly all of the current image segmentation approaches apply well only if the lungs exhibit minimal or no pathologic conditions. When moderate to high amounts of disease or abnormalities with a challenging shape or appearance exist in the lungs, computer-aided detection systems may be highly likely to fail to depict those abnormal regions because of inaccurate segmentation methods. In particular, abnormalities such as pleural effusions, consolidations, and masses often cause inaccurate lung segmentation, which greatly limits the use of image processing methods in clinical and research contexts. In this review, a critical summary of the current methods for lung segmentation on CT images is provided, with special emphasis on the accuracy and performance of the methods in cases with abnormalities and cases with exemplary pathologic findings. The currently available segmentation methods can be divided into five major classes: (a) thresholding-based, (b) region-based, (c) shape-based, (d) neighboring anatomy-guided, and (e) machine learning-based methods. The feasibility of each class and its shortcomings are explained and illustrated with the most common lung abnormalities observed on CT images. In an overview, practical applications and evolving technologies combining the presented approaches for the practicing radiologist are detailed.
Affiliation(s)
- Awais Mansoor
- From the Center for Infectious Disease Imaging, Department of Radiology and Imaging Sciences, National Institutes of Health, Bethesda, Md
| | - Ulas Bagci
- From the Center for Infectious Disease Imaging, Department of Radiology and Imaging Sciences, National Institutes of Health, Bethesda, Md
| | - Brent Foster
- From the Center for Infectious Disease Imaging, Department of Radiology and Imaging Sciences, National Institutes of Health, Bethesda, Md
| | - Ziyue Xu
- From the Center for Infectious Disease Imaging, Department of Radiology and Imaging Sciences, National Institutes of Health, Bethesda, Md
| | - Georgios Z Papadakis
- From the Center for Infectious Disease Imaging, Department of Radiology and Imaging Sciences, National Institutes of Health, Bethesda, Md
| | - Les R Folio
- From the Center for Infectious Disease Imaging, Department of Radiology and Imaging Sciences, National Institutes of Health, Bethesda, Md
| | - Jayaram K Udupa
- From the Center for Infectious Disease Imaging, Department of Radiology and Imaging Sciences, National Institutes of Health, Bethesda, Md
| | - Daniel J Mollura
- From the Center for Infectious Disease Imaging, Department of Radiology and Imaging Sciences, National Institutes of Health, Bethesda, Md
|
26
|
Dolz J, Massoptier L, Vermandel M. Segmentation algorithms of subcortical brain structures on MRI for radiotherapy and radiosurgery: A survey. Ing Rech Biomed 2015. [DOI: 10.1016/j.irbm.2015.06.001] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/22/2023]
|
27
|
Cai Y, Osman S, Sharma M, Landis M, Li S. Multi-Modality Vertebra Recognition in Arbitrary Views Using 3D Deformable Hierarchical Model. IEEE TRANSACTIONS ON MEDICAL IMAGING 2015; 34:1676-1693. [PMID: 25594966 DOI: 10.1109/tmi.2015.2392054] [Citation(s) in RCA: 40] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
Computer-aided diagnosis of spine problems relies on the automatic identification of spine structures in images. The task of automatic vertebra recognition is to identify the global spine and local vertebra structural information such as spine shape, vertebra location and pose. Vertebra recognition is challenging due to the large appearance variations in different image modalities/views and the high geometric distortions in spine shape. Existing vertebra recognition methods are usually simplified to vertebra detection, which mainly focuses on identifying vertebra locations and labels but cannot support further quantitative spine assessment. In this paper, we propose a vertebra recognition method using a 3D deformable hierarchical model (DHM) to achieve cross-modality local vertebra location+pose identification with accurate vertebra labeling, and global 3D spine shape recovery. We recast vertebra recognition as deformable model matching, fitting the input spine images with the 3D DHM via deformations. The 3D model-matching mechanism provides a more comprehensive vertebra location+pose+label simultaneous identification than traditional vertebra location+label detection, and also provides an articulated 3D mesh model for the input spine section. Moreover, the DHM can conduct versatile recognition on volume and multi-slice data, even on a single slice. Experiments show our method can successfully extract vertebra locations, labels, and poses from multi-slice T1/T2 MR and volume CT, and can reconstruct the 3D spine model on different image views such as lumbar, cervical, and even the whole spine. The resulting vertebra information and the recovered shape can be used for quantitative diagnosis of spine problems and can be easily digitalized and integrated in modern medical PACS systems.
|
28
|
Okada T, Linguraru MG, Hori M, Summers RM, Tomiyama N, Sato Y. Abdominal multi-organ segmentation from CT images using conditional shape-location and unsupervised intensity priors. Med Image Anal 2015; 26:1-18. [PMID: 26277022 DOI: 10.1016/j.media.2015.06.009] [Citation(s) in RCA: 81] [Impact Index Per Article: 8.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2014] [Revised: 06/21/2015] [Accepted: 06/22/2015] [Indexed: 11/26/2022]
Abstract
This paper addresses the automated segmentation of multiple organs in upper abdominal computed tomography (CT) data. The aim of our study is to develop methods to effectively construct conditional priors and use their predictive power for more accurate segmentation, as well as easy adaptation to the various imaging conditions observed in clinical practice. We propose a general framework for multi-organ segmentation which effectively incorporates interrelations among multiple organs and easily adapts to various imaging conditions without the need for supervised intensity information. The features of the framework are as follows: (1) A method for modeling conditional shape and location (shape-location) priors, which we call prediction-based priors, is developed to derive accurate priors specific to each subject, which enables the estimation of intensity priors without the need for supervised intensity information. (2) An organ correlation graph is introduced, which defines how the conditional priors are constructed and how the segmentation processes of multiple organs are executed. In our framework, predictor organs, which are segmented sufficiently accurately by conventional single-organ segmentation methods, are pre-segmented, and the remaining organs are hierarchically segmented using conditional shape-location priors. The proposed framework was evaluated through the segmentation of eight abdominal organs (liver, spleen, left and right kidneys, pancreas, gallbladder, aorta, and inferior vena cava) from 134 CT datasets from 86 patients obtained under six imaging conditions at two hospitals. The experimental results show the effectiveness of the proposed prediction-based priors and the applicability to various imaging conditions without the need for supervised intensity information. Average Dice coefficients for the liver, spleen, and kidneys were more than 92%, and were around 73% and 67% for the pancreas and gallbladder, respectively.
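The Dice coefficients reported here (and throughout this list) have a direct computational form, 2|A∩B| / (|A| + |B|); a minimal NumPy sketch with toy masks (illustrative, not the paper's code):

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice similarity of two binary masks: 2|A∩B| / (|A| + |B|)."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: count as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# toy 2-D masks standing in for organ segmentations
pred = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
```

Here `dice_coefficient(pred, truth)` is 2·2 / (3+3) ≈ 0.667; the per-organ averages quoted in the abstract are this quantity averaged over cases.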
Affiliation(s)
- Toshiyuki Okada: Department of Surgery, Faculty of Medicine, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki 305-8575, Japan
- Marius George Linguraru: Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Medical Center, Washington, DC 20010, USA; Departments of Radiology and Pediatrics, School of Medicine and Health Sciences, George Washington University, Washington, DC 20037, USA
- Masatoshi Hori: Department of Radiology, Graduate School of Medicine, Osaka University, 2-2 Yamadaoka, Suita, Osaka 565-0871, Japan
- Ronald M Summers: National Institutes of Health, Clinical Center, Radiology and Imaging Sciences, 10 Center Drive, Bethesda, MD 20892, USA
- Noriyuki Tomiyama: Department of Radiology, Graduate School of Medicine, Osaka University, 2-2 Yamadaoka, Suita, Osaka 565-0871, Japan
- Yoshinobu Sato: Graduate School of Information Science, Nara Institute of Science and Technology, 8916-5 Takayama-cho, Ikoma, Nara 630-0192, Japan
29
Discriminative dictionary learning for abdominal multi-organ segmentation. Med Image Anal 2015; 23:92-104. [DOI: 10.1016/j.media.2015.04.015]
30
Xu Z, Burke RP, Lee CP, Baucom RB, Poulose BK, Abramson RG, Landman BA. Efficient multi-atlas abdominal segmentation on clinically acquired CT with SIMPLE context learning. Med Image Anal 2015; 24:18-27. [PMID: 26046403] [DOI: 10.1016/j.media.2015.05.009]
Abstract
Abdominal segmentation on clinically acquired computed tomography (CT) has been a challenging problem given the inter-subject variance of human abdomens and the complex 3-D relationships among organs. Multi-atlas segmentation (MAS) provides a potentially robust solution by leveraging label atlases via image registration and statistical fusion. We posit that the efficiency of atlas selection requires further exploration in the context of substantial registration errors. The selective and iterative method for performance level estimation (SIMPLE) is a MAS technique integrating atlas selection and label fusion that has proven effective for prostate radiotherapy planning. Herein, we revisit atlas selection and fusion techniques for segmenting 12 abdominal structures using clinically acquired CT. Using a re-derived SIMPLE algorithm, we show that performance on multi-organ classification can be improved by accounting for exogenous information through Bayesian priors (so-called context learning). These innovations are integrated with the joint label fusion (JLF) approach to reduce the impact of correlated errors among the selected atlases for each organ, and a graph cut technique is used to regularize the combined segmentation. In a study of 100 subjects, the proposed method outperformed other comparable MAS approaches, including majority vote, SIMPLE, JLF, and the Wolz locally weighted vote technique. The proposed technique provides consistent improvement over state-of-the-art approaches (median improvement of 7.0% and 16.2% in DSC over JLF and Wolz, respectively) and moves toward efficient segmentation of large-scale clinically acquired CT data for biomarker screening, surgical navigation, and data mining.
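Majority vote, the simplest fusion baseline compared against here, assigns each voxel the label most atlases agree on; a minimal sketch over hypothetical propagated label maps (not the paper's implementation):

```python
import numpy as np

def majority_vote(label_maps):
    """Fuse registered atlas label maps by per-voxel majority vote."""
    stacked = np.stack(label_maps)             # (n_atlases, *image_shape)
    n_labels = int(stacked.max()) + 1
    # per-label vote counts at every voxel, then the winning label
    votes = np.stack([(stacked == k).sum(axis=0) for k in range(n_labels)])
    return votes.argmax(axis=0)

# three hypothetical atlases propagated onto a 3-voxel image
atlases = [np.array([0, 1, 1]), np.array([0, 1, 2]), np.array([1, 1, 2])]
fused = majority_vote(atlases)
```

SIMPLE, JLF, and context learning all refine this baseline by weighting or discarding atlases rather than counting each vote equally; ties here fall to the lowest label via `argmax`.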
Affiliation(s)
- Zhoubing Xu: Electrical Engineering, Vanderbilt University, Nashville, TN 37235, USA
- Ryan P Burke: Biomedical Engineering, Vanderbilt University, Nashville, TN 37235, USA
- Richard G Abramson: Radiology and Radiological Science, Vanderbilt University, Nashville, TN 37235, USA
- Bennett A Landman: Electrical Engineering; Biomedical Engineering; General Surgery; Radiology and Radiological Science, Vanderbilt University, Nashville, TN 37235, USA
31
Mansoor A, Bagci U, Xu Z, Foster B, Olivier KN, Elinoff JM, Suffredini AF, Udupa JK, Mollura DJ. A generic approach to pathological lung segmentation. IEEE Trans Med Imaging 2014; 33:2293-310. [PMID: 25020069] [PMCID: PMC5542015] [DOI: 10.1109/tmi.2014.2337057]
Abstract
In this study, we propose a novel pathological lung segmentation method that takes into account neighboring anatomy prior constraints and a novel pathology recognition system. Our proposed framework has two stages. During stage one, we adapt the fuzzy connectedness (FC) image segmentation algorithm to perform initial lung parenchyma extraction. In parallel, we estimate the lung volume using rib-cage information without explicitly delineating the lungs. This rudimentary but intelligent lung volume estimation allows comparison of the rib-cage-based and FC-based lung volume measurements. A significant volume difference indicates the presence of pathology, which invokes the second stage of the framework for refinement of the segmented lungs. In stage two, texture-based features are utilized to detect abnormal imaging patterns (consolidations, ground glass, interstitial thickening, tree-in-bud, honeycombing, nodules, and micro-nodules) that might have been missed during the first stage of the algorithm. This refinement stage is completed by a novel neighboring-anatomy-guided segmentation approach to include abnormalities with weak textures, as well as pleural regions. We evaluated the accuracy and efficiency of the proposed method on more than 400 CT scans exhibiting a wide spectrum of abnormalities. To the best of our knowledge, this is the first study to evaluate all abnormal imaging patterns in a single segmentation framework. The quantitative results show that our pathological lung segmentation method improves on current standards because of its high sensitivity and specificity, and may have considerable potential to enhance the performance of routine clinical tasks.
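The stage-one trigger described above compares the FC-segmented volume against the rib-cage-based estimate; a schematic sketch of that decision (the function name and the 10% tolerance are illustrative assumptions, not values from the paper):

```python
def pathology_suspected(fc_volume_ml, ribcage_volume_ml, rel_tolerance=0.10):
    """Flag a scan for stage-two refinement when the fuzzy-connectedness
    lung volume falls well short of the rib-cage-based estimate."""
    shortfall = (ribcage_volume_ml - fc_volume_ml) / ribcage_volume_ml
    return shortfall > rel_tolerance

# e.g. FC recovers 3.1 L of lung but the rib cage implies ~4.5 L
refine = pathology_suspected(3100.0, 4500.0)
```

A large shortfall suggests dense pathology that intensity-based FC missed, which is exactly what the texture-based second stage then tries to recover.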
32
Göçeri E, Gürcan MN, Dicle O. Fully automated liver segmentation from SPIR image series. Comput Biol Med 2014; 53:265-78. [PMID: 25192606] [DOI: 10.1016/j.compbiomed.2014.08.009]
Abstract
Accurate liver segmentation is an important component of surgery planning for liver transplantation, which gives patients with liver disease a chance of survival. Spectral pre-saturation inversion recovery (SPIR) image sequences are useful for liver vessel segmentation because vascular structures in the liver are clearly visible in these sequences. Although level-set-based segmentation techniques are frequently used in liver segmentation due to their flexibility to adapt to different problems by incorporating prior knowledge, the need to initialize the contours on each slice is a common drawback of such techniques. In this paper, we present a fully automated variational level set approach for liver segmentation from SPIR image sequences. Our approach is designed to be efficient while achieving high accuracy. The efficiency is achieved by (1) automatically defining an initial contour for each slice, and (2) automatically computing the weight of each term in the applied energy functional at each iteration during evolution. Automated detection and exclusion of spurious structures (e.g. cysts and other bright white regions on the skin) in the pre-processing stage increases accuracy and robustness. We also present a novel approach to reduce computational cost by employing binary regularization of the level set function. A signed pressure force function controls the evolution of the active contour. The method was applied to ten data sets. In each image, the performance of the algorithm was measured using the receiver operating characteristic method in terms of accuracy, sensitivity, and specificity. The accuracy of the proposed method was 96%. Quantitative analyses of the results indicate that the proposed method can segment liver images accurately, efficiently, and consistently.
Affiliation(s)
- Evgin Göçeri: Department of Computer Engineering, Pamukkale University, Denizli, Turkey
- Metin N Gürcan: Department of Biomedical Informatics, The Ohio State University, Columbus, OH, USA
- Oğuz Dicle: Department of Radiology, Faculty of Medicine, Dokuz Eylul University, Narlıdere, Izmir, Turkey
33
Huang M, Yang W, Wu Y, Jiang J, Gao Y, Chen Y, Feng Q, Chen W, Lu Z. Content-based image retrieval using spatial layout information in brain tumor T1-weighted contrast-enhanced MR images. PLoS One 2014; 9:e102754. [PMID: 25028970] [PMCID: PMC4100908] [DOI: 10.1371/journal.pone.0102754]
Abstract
This study aims to develop a content-based image retrieval (CBIR) system for the retrieval of T1-weighted contrast-enhanced MR (CE-MR) images of brain tumors. When a tumor region is fed to the CBIR system as a query, the system attempts to retrieve tumors of the same pathological category. The bag-of-visual-words (BoVW) model with partition learning is incorporated into the system to extract informative features for representing the image contents. Furthermore, a distance metric learning algorithm called Rank Error-based Metric Learning (REML) is proposed to reduce the semantic gap between low-level visual features and high-level semantic concepts. The effectiveness of the proposed method is evaluated on a brain T1-weighted CE-MR dataset with three types of brain tumors (i.e., meningioma, glioma, and pituitary tumor). Using the BoVW model with partition learning, the mean average precision (mAP) of retrieval improves by more than 4.6% with the learned distance metrics compared with the spatial pyramid BoVW method. The distance metric learned by REML significantly outperforms three other existing distance metric learning methods in terms of mAP. The mAP of the CBIR system is as high as 91.8% using the proposed method, and the precision can reach 93.1% when the top 10 images are returned by the system. These preliminary results demonstrate that the proposed method is effective and feasible for the retrieval of brain tumors in T1-weighted CE-MR images.
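The mAP and precision-at-10 figures quoted above follow the standard retrieval definitions; a minimal sketch over a hypothetical ranked result list (illustrative, not the paper's evaluation code):

```python
def precision_at_k(relevant, k):
    """Fraction of the top-k retrieved items that are relevant."""
    top = relevant[:k]
    return sum(top) / len(top)

def average_precision(relevant):
    """Mean of precision@i over the ranks i where a relevant hit occurs."""
    hits, total = 0, 0.0
    for i, rel in enumerate(relevant, start=1):
        if rel:
            hits += 1
            total += hits / i
    return total / hits if hits else 0.0

# hypothetical relevance of a ranked result list (True = same tumor type)
ranked = [True, True, False, True]
```

mAP is then `average_precision` averaged over all queries; the 93.1% figure corresponds to `precision_at_k(..., 10)` averaged the same way.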
Affiliation(s)
- Meiyan Huang, Wei Yang, Yao Wu, Jun Jiang, Yang Gao, Qianjin Feng, Wufan Chen, Zhentai Lu: School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, China
- Yang Chen: Laboratory of Image Science and Technology, Southeast University, the Key Laboratory of Computer Network and Information Integration (Southeast University), Ministry of Education, Nanjing, China
34
Foster B, Bagci U, Mansoor A, Xu Z, Mollura DJ. A review on segmentation of positron emission tomography images. Comput Biol Med 2014; 50:76-96. [PMID: 24845019] [DOI: 10.1016/j.compbiomed.2014.04.014]
Abstract
Positron Emission Tomography (PET), a non-invasive functional imaging method at the molecular level, images the distribution of biologically targeted radiotracers with high sensitivity. PET imaging provides detailed quantitative information about many diseases and is often used to evaluate inflammation, infection, and cancer by detecting emitted photons from a radiotracer localized to abnormal cells. In order to differentiate abnormal tissue from surrounding areas in PET images, image segmentation methods play a vital role; therefore, accurate image segmentation is often necessary for proper disease detection, diagnosis, treatment planning, and follow-ups. In this review paper, we present state-of-the-art PET image segmentation methods, as well as the recent advances in image segmentation techniques. In order to make this manuscript self-contained, we also briefly explain the fundamentals of PET imaging, the challenges of diagnostic PET image analysis, and the effects of these challenges on the segmentation results.
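Among the families of methods surveyed in this review, fixed thresholding (segmenting everything at or above a set fraction of the peak uptake) is the simplest baseline; a minimal sketch, with the 40% fraction as an illustrative common choice rather than a recommendation from the paper:

```python
import numpy as np

def fixed_threshold_segment(pet, fraction=0.40):
    """Binary mask of voxels at or above `fraction` of the maximum uptake."""
    pet = np.asarray(pet, dtype=float)
    return pet >= fraction * pet.max()

# toy 1-D uptake profile; threshold is about 0.40 * 5.0 = 2.0
uptake = [0.1, 0.5, 2.5, 4.9, 5.0]
mask = fixed_threshold_segment(uptake)
```

Adaptive and iterative thresholding, region growing, and stochastic methods reviewed in the paper all aim to replace this hand-picked fraction with a data-driven choice.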
Affiliation(s)
- Brent Foster, Ulas Bagci, Awais Mansoor, Ziyue Xu, Daniel J Mollura: Center for Infectious Disease Imaging, Department of Radiology and Imaging Sciences, National Institutes of Health (NIH), Bethesda, MD 20892, United States
35
Onal S, Lai-Yuen S, Bao P, Weitzenfeld A, Hart S. Fully automated localization of multiple pelvic bone structures on MRI. Annu Int Conf IEEE Eng Med Biol Soc 2014; 2014:3353-3356. [PMID: 25570709] [DOI: 10.1109/embc.2014.6944341]
Abstract
In this paper, we present a fully automated localization method for multiple pelvic bone structures on magnetic resonance images (MRI). Pelvic bone structures are currently identified manually on MRI to establish reference points for measurement and evaluation of pelvic organ prolapse (POP). Given that this is a time-consuming and subjective procedure, there is a need to localize pelvic bone structures without any user interaction. However, bone structures are not easily differentiable from soft tissue on MRI, as their pixel intensities tend to be very similar. In this research, we present a model that automatically identifies the bounding boxes of the bone structures on MRI using support vector machine (SVM)-based classification and a non-linear regression model that captures global and local information. Based on the relative locations of pelvic bones and organs, and on local information such as texture features, the model identifies the locations of the pelvic bone structures by establishing the association between them. Results show that the proposed method is able to locate the bone structures of interest accurately. The pubic bone, sacral promontory, and coccyx were correctly detected (DSI > 0.75) in 92%, 90%, and 88% of the testing images, respectively. This research aims to enable accurate, consistent, and fully automated identification of pelvic bone structures on MRI to facilitate and improve the diagnosis of female pelvic organ prolapse.
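The DSI > 0.75 detection criterion above compares predicted and reference bounding boxes; a minimal sketch for axis-aligned 2-D boxes (the (x1, y1, x2, y2) convention and the toy boxes are assumptions for illustration):

```python
def box_dsi(a, b):
    """Dice index of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))   # overlap width
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))   # overlap height
    inter = ix * iy

    def area(r):
        return (r[2] - r[0]) * (r[3] - r[1])

    return 2.0 * inter / (area(a) + area(b))

# hypothetical predicted vs. reference pubic-bone boxes
pred_box, ref_box = (0, 0, 4, 4), (1, 1, 5, 5)
detected = box_dsi(pred_box, ref_box) > 0.75
```

With these toy boxes the overlap is 9 of 16+16 units, giving DSI 0.5625, so this prediction would count as a miss under the 0.75 criterion.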
36
Bagci U, Udupa JK, Mendhiratta N, Foster B, Xu Z, Yao J, Chen X, Mollura DJ. Joint segmentation of anatomical and functional images: applications in quantification of lesions from PET, PET-CT, MRI-PET, and MRI-PET-CT images. Med Image Anal 2013; 17:929-45. [PMID: 23837967] [PMCID: PMC3795997] [DOI: 10.1016/j.media.2013.05.004]
Abstract
We present a novel method for the joint segmentation of anatomical and functional images. Our proposed methodology unifies the domains of anatomical and functional images, represents them in a product lattice, and performs simultaneous delineation of regions based on random walk image segmentation. Furthermore, we propose a simple yet effective object/background seed localization method to make the segmentation process fully automatic. Our study uses PET, PET-CT, MRI-PET, and fused MRI-PET-CT scans (77 studies in all) from 56 patients who had various lesions in different body regions. We validated the effectiveness of the proposed method on different PET phantoms as well as on clinical images with respect to ground truth segmentations provided by clinicians. Experimental results indicate that the presented method is superior to the threshold and Bayesian methods commonly used in PET image segmentation, is more accurate and robust than other PET-CT segmentation methods recently published in the literature, and is general in the sense that it simultaneously segments multiple scans in real time with the high accuracy needed for routine clinical use.
Affiliation(s)
- Ulas Bagci: Center for Infectious Diseases Imaging, National Institutes of Health, Bethesda, MD, United States; Department of Radiology and Imaging Sciences, National Institutes of Health, Bethesda, MD, United States
37
Foster B, Bagci U, Dey B, Luna B, Bishai W, Jain S, Mollura DJ. Segmentation of PET images for computer-aided functional quantification of tuberculosis in small animal models. IEEE Trans Biomed Eng 2013; 61:711-24. [PMID: 24235292] [DOI: 10.1109/tbme.2013.2288258]
Abstract
Pulmonary infections often cause spatially diffuse and multi-focal radiotracer uptake in positron emission tomography (PET) images, which makes accurate quantification of the disease extent challenging. Image segmentation plays a vital role in quantifying uptake due to the distributed nature of immuno-pathology and associated metabolic activities in pulmonary infection, specifically tuberculosis (TB). For this task, thresholding-based segmentation methods may be better suited than other methods; however, the performance of thresholding-based methods depends on the selection of thresholding parameters, which are often suboptimal. Several optimal thresholding techniques have been proposed in the literature, but there is currently no consensus on how to determine the optimal threshold for precise identification of spatially diffuse and multi-focal radiotracer uptake. In this study, we propose a method to select optimal thresholding levels by utilizing a novel intensity affinity metric within the affinity propagation clustering framework. We tested the proposed method on 70 longitudinal PET images of rabbits infected with TB. The overall Dice similarity coefficient between the segmentation from the proposed method and two expert segmentations was found to be 91.25 ±8.01%, with a sensitivity of 88.80 ±12.59% and a specificity of 96.01 ±9.20%. High accuracy and heightened efficiency of our proposed method, compared to other PET image segmentation methods, were reported with various quantification metrics.
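The threshold-selection idea can be sketched with scikit-learn's stock affinity propagation applied to raw intensities, taking each cluster's lower edge as a candidate threshold; this is a stand-in only (the paper's novel intensity affinity metric is not reproduced, and all names and values are illustrative):

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

def uptake_thresholds(intensities):
    """Cluster 1-D uptake values with affinity propagation; use each
    cluster's lower edge as a candidate segmentation threshold."""
    X = np.asarray(intensities, dtype=float).reshape(-1, 1)
    labels = AffinityPropagation(random_state=0).fit_predict(X)
    return sorted(float(X[labels == k].min()) for k in np.unique(labels))

# two well-separated uptake populations (background vs. lesion)
vals = [0.10, 0.20, 0.15, 3.0, 3.1, 2.9]
thresholds = uptake_thresholds(vals)
```

A key appeal of affinity propagation here is that the number of clusters, and hence of thresholding levels, is not fixed in advance.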
38
Wolz R, Chu C, Misawa K, Fujiwara M, Mori K, Rueckert D. Automated abdominal multi-organ segmentation with subject-specific atlas generation. IEEE Trans Med Imaging 2013; 32:1723-1730. [PMID: 23744670] [DOI: 10.1109/tmi.2013.2265805]
Abstract
A robust automated segmentation of abdominal organs can be crucial for computer-aided diagnosis and laparoscopic surgery assistance. Many existing methods are specialized to the segmentation of individual organs and struggle to deal with the variability of the shape and position of abdominal organs. We present a general, fully automated method for multi-organ segmentation of abdominal computed tomography (CT) scans. The method is based on a hierarchical atlas registration and weighting scheme that generates target-specific priors from an atlas database by combining aspects of multi-atlas registration and patch-based segmentation, two widely used methods in brain segmentation. The final segmentation is obtained by applying an automatically learned intensity model in a graph-cuts optimization step, incorporating high-level spatial knowledge. The proposed approach can deal with high inter-subject variation while being flexible enough to be applied to different organs. We evaluated the segmentation on a database of 150 manually segmented CT images. The achieved results compare well to state-of-the-art methods, which are usually tailored to more specific questions, with Dice overlap values of 94%, 93%, 70%, and 92% for liver, kidneys, pancreas, and spleen, respectively.
Affiliation(s)
- Robin Wolz: Department of Computing, Imperial College London, London, UK
39
Bagci U, Foster B, Miller-Jaster K, Luna B, Dey B, Bishai WR, Jonsson CB, Jain S, Mollura DJ. A computational pipeline for quantification of pulmonary infections in small animal models using serial PET-CT imaging. EJNMMI Res 2013; 3:55. [PMID: 23879987] [PMCID: PMC3734217] [DOI: 10.1186/2191-219x-3-55]
Abstract
Background Infectious diseases are the second leading cause of death worldwide. In order to better understand and treat them, an accurate evaluation using multi-modal imaging techniques for anatomical and functional characterizations is needed. For non-invasive imaging techniques such as computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET), there have been many engineering improvements that have significantly enhanced the resolution and contrast of the images, but there are still insufficient computational algorithms available for researchers to use when accurately quantifying imaging data from anatomical structures and functional biological processes. Since the development of such tools may potentially translate basic research into the clinic, this study focuses on the development of a quantitative and qualitative image analysis platform that provides a computational radiology perspective for pulmonary infections in small animal models. Specifically, we designed (a) a fast and robust automated and semi-automated image analysis platform and a quantification tool that can facilitate accurate diagnostic measurements of pulmonary lesions as well as volumetric measurements of anatomical structures, and incorporated (b) an image registration pipeline to our proposed framework for volumetric comparison of serial scans. This is an important investigational tool for small animal infectious disease models that can help advance researchers’ understanding of infectious diseases. Methods We tested the utility of our proposed methodology by using sequentially acquired CT and PET images of rabbit, ferret, and mouse models with respiratory infections of Mycobacterium tuberculosis (TB), H1N1 flu virus, and an aerosolized respiratory pathogen (necrotic TB) for a total of 92, 44, and 24 scans for the respective studies with half of the scans from CT and the other half from PET. 
Institutional Administrative Panel on Laboratory Animal Care approvals were obtained prior to conducting this research. First, the proposed computational framework registered PET and CT images to provide spatial correspondences between images. Second, the lungs from the CT scans were segmented using an interactive region growing (IRG) segmentation algorithm with mathematical morphology operations to avoid false positive (FP) uptake in PET images. Finally, we segmented significant radiotracer uptake from the PET images in lung regions determined from CT and computed metabolic volumes of the significant uptake. All segmentation processes were compared with expert radiologists' delineations (ground truths). Metabolic and gross volumes of lesions were automatically computed with the segmentation processes using PET and CT images, and percentage changes in those volumes over time were calculated. Standardized uptake value (SUV) analysis from PET images was conducted as a complementary quantitative metric for disease severity assessment. Thus, severity and extent of pulmonary lesions were examined through both PET and CT images using the aforementioned quantification metrics outputted from the proposed framework. Results Each animal study was evaluated within the same subject class, and all steps of the proposed methodology were evaluated separately. We quantified the accuracy of the proposed algorithm with respect to state-of-the-art segmentation algorithms. For evaluation of the segmentation results, the Dice similarity coefficient (DSC) as an overlap measure and the Hausdorff distance as a shape dissimilarity measure were used. Significant correlations regarding the estimated lesion volumes were obtained in both CT and PET images with respect to the ground truths (R2=0.8922, p<0.01 and R2=0.8664, p<0.01, respectively).
The segmentation accuracy (DSC (%)) was 93.4±4.5% for normal lung CT scans and 86.0±7.1% for pathological lung CT scans. Experiments showed excellent agreement (all above 85%) with expert evaluations for both structural and functional imaging modalities. Apart from the quantitative analysis of each animal, we also qualitatively showed how metabolic volumes changed over time by examining serial PET/CT scans. Evaluation of the registration processes was based on anatomical landmark points precisely defined by expert clinicians. Average errors of 2.66, 3.93, and 2.52 mm were found in the rabbit, ferret, and mouse data (all within the resolution limits), respectively. Quantitative results obtained from the proposed methodology were visually related to the progress and severity of the pulmonary infections, as verified by the participating radiologists. Moreover, we demonstrated that lesions due to the infections were metabolically active and multi-focal in nature, and we observed similar patterns in the CT images as well. Consolidation and ground glass opacity were the main abnormal imaging patterns and consistently appeared in all CT images. We also found that the gross and metabolic lesion volume percentages follow the same trend as the SUV-based evaluation in the longitudinal analysis. Conclusions We explored the feasibility of using PET and CT imaging modalities in three distinct small animal models for two diverse pulmonary infections. We concluded from the clinical findings, derived from the proposed computational pipeline, that PET-CT imaging is an invaluable hybrid modality for tracking pulmonary infections longitudinally in small animals and has great potential to become routinely used in clinics. Our proposed methodology showed that automated computer-aided lesion detection and quantification of pulmonary infections in small animal models are efficient and accurate compared to the clinical standard of manual and semi-automated approaches.
Automated analysis of images in pre-clinical applications can increase the efficiency and quality of pre-clinical findings that ultimately inform downstream experimental design in human clinical studies; this innovation will allow researchers and clinicians to more effectively allocate study resources with respect to research demands without compromising accuracy.
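The SUV analysis used above typically relies on body-weight normalization; a minimal sketch of that standard formula (units and example values are illustrative, not figures from the study):

```python
def suv_bw(activity_bq_per_ml, injected_dose_bq, body_weight_g):
    """Body-weight SUV: tissue activity concentration divided by the
    injected dose per gram of body weight (assumes 1 ml tissue ~ 1 g)."""
    return activity_bq_per_ml / (injected_dose_bq / body_weight_g)

# illustrative numbers: 5 kBq/ml lesion uptake, 200 MBq dose, 2.5 kg rabbit
suv = suv_bw(5_000.0, 200_000_000.0, 2_500.0)
```

Because SUV is a dimensionless ratio, it complements the gross and metabolic volume percentages as a severity metric that is comparable across time points and animals.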
Affiliation(s)
- Ulas Bagci: Center for Infectious Disease Imaging, National Institutes of Health, Bethesda, MD 20892, USA
40
Abstract
BACKGROUND Germinal Centers (GC) are short-lived micro-anatomical structures, within lymphoid organs, where affinity maturation is initiated. Theoretical modeling of the dynamics of the GC reaction including follicular CD4+ T helper and the recently described follicular regulatory CD4+ T cell populations, predicts that the intensity and life span of such reactions is driven by both types of T cells, yet controlled primarily by follicular regulatory CD4+ T cells. In order to calibrate GC models, it is necessary to properly analyze the kinetics of GC sizes. Presently, the estimation of spleen GC volumes relies upon confocal microscopy images from 20-30 slices spanning a depth of ~ 20 - 50 μm, whose GC areas are analyzed, slice-by-slice, for subsequent 3D reconstruction and quantification. The quantity of data to be analyzed from such images taken for kinetics experiments is usually prohibitively large to extract semi-manually with existing software. As a result, the entire procedure is highly time-consuming, and inaccurate, thereby motivating the need for a new software tool that can automatically identify and calculate the 3D spot volumes from GC multidimensional images. RESULTS We have developed pyBioImage, an open source cross platform image analysis software application, written in python with C extensions that is specifically tailored to the needs of immunologic research involving 4D imaging of GCs. The software provides 1) support for importing many multi-image formats, 2) basic image processing and analysis, and 3) the ExtractGC module, that allows for automatic analysis and visualization of extracted GC volumes from multidimensional confocal microscopy images. We present concrete examples of different microscopy image data sets of GC that have been used in experimental and theoretical studies of mouse model GC dynamics. CONCLUSIONS The pyBioImage software framework seeks to be a general purpose image application for immunological research based on 4D imaging. 
The ExtractGC module uses a novel clustering algorithm for automatically extracting quantitative spatial information of a large number of GCs from a collection of confocal microscopy images. In addition, the software provides 3D visualization of the GCs reconstructed from the image stacks. The application is available for public use at http://sourceforge.net/projects/pybioimage/.
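The slice-stack reconstruction the abstract describes amounts to clustering above-threshold voxels of a 3D stack into connected components and measuring each component's volume. A minimal sketch of that idea in Python (this is not the pyBioImage/ExtractGC implementation; the function name, threshold convention, and 6-connectivity choice are assumptions for illustration):

```python
from collections import deque

import numpy as np

def extract_spot_volumes(stack, threshold, voxel_volume=1.0):
    """Cluster above-threshold voxels of a 3D image stack into
    6-connected components and return the component volumes,
    largest first.

    stack        : 3D numpy array (z, y, x) of intensities
    threshold    : voxels with intensity > threshold count as signal
    voxel_volume : physical volume of one voxel (e.g. in um^3)
    """
    mask = stack > threshold
    visited = np.zeros(mask.shape, dtype=bool)
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    volumes = []
    for seed in zip(*np.nonzero(mask)):
        if visited[seed]:
            continue
        # breadth-first flood fill over the 6-neighbourhood
        queue, size = deque([seed]), 0
        visited[seed] = True
        while queue:
            z, y, x = queue.popleft()
            size += 1
            for dz, dy, dx in offsets:
                n = (z + dz, y + dy, x + dx)
                if (0 <= n[0] < mask.shape[0]
                        and 0 <= n[1] < mask.shape[1]
                        and 0 <= n[2] < mask.shape[2]
                        and mask[n] and not visited[n]):
                    visited[n] = True
                    queue.append(n)
        volumes.append(size * voxel_volume)
    return sorted(volumes, reverse=True)
```

Scaling each component's voxel count by the physical voxel volume is what turns per-slice areas into the 3D GC volumes used in the kinetics analysis.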
Affiliation(s)
- David N Olivieri
- School of Computer Engineering, University of Vigo, Ourense, Spain.
41
Lecron F, Benjelloun M, Mahmoudi S. Cervical spine mobility analysis on radiographs: a fully automatic approach. Comput Med Imaging Graph 2012; 36:634-42. [PMID: 22981777 DOI: 10.1016/j.compmedimag.2012.08.004] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2012] [Revised: 07/19/2012] [Accepted: 08/23/2012] [Indexed: 10/27/2022]
Abstract
Conventional X-ray radiography remains the most common method for analyzing spinal mobility in two dimensions. The objective of this paper is therefore to develop a framework for fully automatic cervical spine mobility analysis on X-ray images. To this aim, we propose an approach based on three main steps: fully automatic vertebra detection, vertebra segmentation, and angular measurement. The accuracy of the method was assessed on a total of 245 vertebrae. For vertebra detection, we propose an adapted version of two descriptors, namely the Scale-Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF), coupled with a multi-class Support Vector Machine (SVM) classifier. Vertebrae are successfully detected in 89.8% of cases, and SURF is shown to slightly outperform SIFT. The Active Shape Model approach was used as the segmentation procedure. We observed that a statistical shape model specific to the vertebral level improves the results. Angular errors of cervical spine mobility are presented, and we show that these errors remain within the inter-operator variability of the reference method.
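The final step, angular measurement, reduces to comparing the orientations of segmented vertebrae. A minimal sketch of that geometry (not the paper's implementation; the landmark convention and function names are assumptions), where each vertebra is summarized by a posterior and an anterior landmark point on its contour:

```python
import math

def vertebra_angle(p_post, p_ant):
    """Orientation in degrees of the line through a vertebra's
    posterior and anterior landmark points, measured against the
    horizontal image axis. Points are (x, y) image coordinates."""
    dx = p_ant[0] - p_post[0]
    dy = p_ant[1] - p_post[1]
    return math.degrees(math.atan2(dy, dx))

def mobility_angle(upper, lower):
    """Relative angle between two adjacent vertebrae, each given as
    a (posterior, anterior) landmark pair. Comparing this value
    across flexion/extension radiographs yields a mobility angle."""
    return vertebra_angle(*upper) - vertebra_angle(*lower)
```

The angular error the paper reports would then be the difference between such automatically computed angles and angles measured by an operator with the reference method.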
Affiliation(s)
- Fabian Lecron
- University of Mons, Place du Parc, 20, 7000 Mons, Belgium.
42
Bagci U, Yao J, Wu A, Caban J, Palmore TN, Suffredini AF, Aras O, Mollura DJ. Automatic detection and quantification of tree-in-bud (TIB) opacities from CT scans. IEEE Trans Biomed Eng 2012; 59:1620-32. [PMID: 22434795 PMCID: PMC3511590 DOI: 10.1109/tbme.2012.2190984] [Citation(s) in RCA: 25] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
This study presents a novel computer-assisted detection (CAD) system for automatically detecting and precisely quantifying abnormal nodular branching opacities in chest computed tomography (CT), termed tree-in-bud (TIB) opacities in the radiology literature. The CAD system is based on 1) fast localization of candidate imaging patterns using local scale information of the images, and 2) a Möbius-invariant feature extraction method based on learned local shape and texture properties of TIB patterns. For fast localization of candidate imaging patterns, we use ball-scale filtering and, based on observation of the pattern of interest, a suitable scale selection that retains only small-size patterns. Once candidate abnormality patterns are identified, we extract the proposed shape features from regions occupied by at least one candidate pattern. A comparative evaluation of the proposed method against commonly used CAD methods is presented on a dataset of 60 chest CTs (39 laboratory-confirmed viral bronchiolitis (human parainfluenza) CTs and 21 normal chest CTs). Quantitative results are presented as the area under the receiver operating characteristic curve and as a computer score (volume affected by TIB) provided as an output of the CAD system. In addition, a visual grading scheme was applied to the patient data by three well-trained radiologists. Interobserver and observer-computer agreements were obtained with the relevant statistical methods over different lung zones. Experimental results demonstrate that the proposed CAD system can achieve high detection rates with an overall accuracy of 90.96%. Moreover, correlations of observer-observer (R²=0.8848) and observer-CAD (R²=0.824) agreement validate the feasibility of using the proposed CAD system to detect and quantify TIB patterns.
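The "computer score (volume affected by TIB)" that the system reports can be understood as the fraction of lung volume covered by detected TIB patterns. A minimal sketch of that score under stated assumptions (not the authors' code; the function name and the percentage convention are illustrative), given binary masks on the same voxel grid:

```python
import numpy as np

def tib_score(tib_mask, lung_mask):
    """Percentage of lung volume occupied by detected TIB patterns.

    tib_mask  : 3D boolean array, True where TIB patterns were detected
    lung_mask : 3D boolean array, True inside the segmented lung

    A score of this kind can be computed per lung zone by passing
    zone-restricted masks.
    """
    lung_voxels = np.count_nonzero(lung_mask)
    if lung_voxels == 0:
        return 0.0  # no lung in this zone; nothing to score
    affected = np.count_nonzero(np.logical_and(tib_mask, lung_mask))
    return 100.0 * affected / lung_voxels
```

Correlating such a score against radiologists' visual grades, zone by zone, is the kind of observer-CAD agreement analysis the abstract reports.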
Affiliation(s)
- Ulas Bagci
- Center for Infectious Disease Imaging, Department of Radiology and Imaging Sciences, National Institutes of Health, Bethesda, MD 20892, USA.