1. Selvam M, Chandrasekharan A, Sadanandan A, Anand VK, Ramesh S, Murali A, Krishnamurthi G. Radiomics analysis for distinctive identification of COVID-19 pulmonary nodules from other benign and malignant counterparts. Sci Rep 2024; 14:7079. [PMID: 38528100] [DOI: 10.1038/s41598-024-57899-x] [Received: 12/15/2023] [Accepted: 03/22/2024]
Abstract
This observational study investigated the potential of radiomics as a non-invasive adjunct to CT in distinguishing COVID-19 lung nodules from other benign and malignant lung nodules. Lesion segmentation, feature extraction, and machine learning algorithms, including decision tree, support vector machine, random forest, feed-forward neural network, and discriminant analysis, were employed in the radiomics workflow. Key features such as Idmn, skewness, and long-run low grey level emphasis were identified as crucial in differentiation. The model demonstrated an accuracy of 83% in distinguishing COVID-19 from other benign nodules and 88% from malignant nodules. This study concludes that radiomics, through machine learning, serves as a valuable tool for non-invasive discrimination between COVID-19 and other benign and malignant lung nodules. The findings suggest the potential complementary role of radiomics in patients with COVID-19 pneumonia exhibiting lung nodules and suspicion of concurrent lung pathologies. The clinical relevance lies in the utilization of radiomics analysis for feature extraction and classification, contributing to the enhanced differentiation of lung nodules, particularly in the context of COVID-19.
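The classifiers above operate on scalar radiomic features extracted from the segmented nodule. As a toy illustration (not the authors' pipeline; the ROI here is synthetic), the first-order skewness feature the study found discriminative can be computed directly from ROI intensities:

```python
import numpy as np

def roi_skewness(roi: np.ndarray) -> float:
    """First-order radiomic skewness of the intensities in a region of
    interest: E[(x - mu)^3] / sigma^3 (Fisher-Pearson, biased form)."""
    x = np.asarray(roi, dtype=float).ravel()
    mu, sigma = x.mean(), x.std()
    if sigma == 0:
        return 0.0
    return float(((x - mu) ** 3).mean() / sigma ** 3)

rng = np.random.default_rng(0)
# A right-skewed intensity histogram (e.g., a few bright voxels) gives
# positive skewness; a symmetric one gives skewness near zero.
right_skewed = rng.exponential(scale=100.0, size=10_000)
symmetric = rng.normal(loc=100.0, scale=10.0, size=10_000)
print(roi_skewness(right_skewed) > 1.0, abs(roi_skewness(symmetric)) < 0.2)
```

Features like Idmn and long-run low grey level emphasis come from texture matrices (GLCM/GLRLM) rather than the intensity histogram, but feed the same classifiers as scalar inputs.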
Affiliation(s)
- Minmini Selvam: Department of Radiology and Imaging Sciences, Sri Ramachandra Institute of Higher Education and Research, Porur, Chennai, 600 116, India
- Anupama Chandrasekharan: Department of Radiology and Imaging Sciences, Sri Ramachandra Institute of Higher Education and Research, Porur, Chennai, 600 116, India
- Abjasree Sadanandan: Department of Engineering Design, Indian Institute of Technology-Madras, Chennai, 600 036, India
- Vikas K Anand: Department of Engineering Design, Indian Institute of Technology-Madras, Chennai, 600 036, India
- Sidharth Ramesh: Department of Engineering Design, Indian Institute of Technology-Madras, Chennai, 600 036, India
- Arunan Murali: Department of Radiology and Imaging Sciences, Sri Ramachandra Institute of Higher Education and Research, Porur, Chennai, 600 116, India
- Ganapathy Krishnamurthi: Department of Engineering Design, Indian Institute of Technology-Madras, Chennai, 600 036, India
2. Dwivedi V, Srinivasan B, Krishnamurthi G. Physics informed contour selection for rapid image segmentation. Sci Rep 2024; 14:6996. [PMID: 38523137] [PMCID: PMC10961308] [DOI: 10.1038/s41598-024-57281-x] [Received: 11/27/2023] [Accepted: 03/15/2024]
Abstract
Effective training of deep image segmentation models is challenging due to the need for abundant, high-quality annotations. To facilitate image annotation, we introduce Physics Informed Contour Selection (PICS), an interpretable, physics-informed algorithm for rapid image segmentation without relying on labeled data. PICS draws inspiration from physics-informed neural networks (PINNs) and the snake active contour model. It is fast and computationally lightweight because it employs cubic splines instead of a deep neural network as a basis function. Its training parameters are physically interpretable because they directly represent the control knots of the segmentation curve. Traditional snakes minimize edge-based loss functionals by deriving the Euler-Lagrange equation and solving it numerically; PICS instead minimizes the loss functional directly, bypassing the Euler-Lagrange equation. It is the first snake variant to minimize a region-based loss function instead of the traditional edge-based ones. PICS uniquely models the three-dimensional (3D) segmentation process with an unsteady partial differential equation (PDE), which allows accelerated segmentation via transfer learning. To demonstrate its effectiveness, we apply PICS to 3D segmentation of the left ventricle on a publicly available cardiac dataset. We also demonstrate PICS's capacity to encode prior shape information as a loss term by proposing a new convexity-preserving loss term for the left ventricle. Overall, PICS presents several novelties in network architecture, transfer learning, and physics-inspired losses for image segmentation, showing promising outcomes and potential for further refinement.
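PICS replaces the classical edge-based snake energy with a region-based loss. A minimal sketch of that idea (assuming a Chan-Vese-style fitting term and, for simplicity, a circular contour in place of the cubic-spline control knots PICS actually optimizes):

```python
import numpy as np

def chan_vese_loss(image: np.ndarray, inside: np.ndarray) -> float:
    """Region-based (Chan-Vese-style) fitting term: summed squared
    deviation from the mean intensity inside and outside the contour."""
    loss = 0.0
    for region in (image[inside], image[~inside]):
        if region.size:
            loss += float(((region - region.mean()) ** 2).sum())
    return loss

# Synthetic image: a bright disk of radius 20 on a noisy dark background.
yy, xx = np.mgrid[:128, :128]
rng = np.random.default_rng(1)
truth = (yy - 64) ** 2 + (xx - 64) ** 2 <= 20 ** 2
image = truth.astype(float) + 0.05 * rng.normal(size=(128, 128))

def circle(radius: int) -> np.ndarray:
    # Stand-in contour parameterized by a single "knot" (the radius).
    return (yy - 64) ** 2 + (xx - 64) ** 2 <= radius ** 2

# The loss is minimized when the contour matches the true boundary,
# which is what gradient descent on the contour parameters exploits.
losses = {r: chan_vese_loss(image, circle(r)) for r in (10, 20, 40)}
print(min(losses, key=losses.get))  # 20
```

In PICS the inside/outside mask would be induced by a closed cubic spline and the knots updated by gradient descent on this loss; the sketch only shows why the region term alone can localize the boundary.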
Affiliation(s)
- Vikas Dwivedi: Atmospheric Science Research Center, State University of New York, Albany, NY, 12222, USA
- Balaji Srinivasan: Department of Mechanical Engineering, Indian Institute of Technology, Madras, Chennai, 600036, India; Wadhwani School of Data Science and AI, Indian Institute of Technology, Madras, Chennai, 600036, India
- Ganapathy Krishnamurthi: Department of Engineering Design, Indian Institute of Technology, Madras, Chennai, 600036, India; Wadhwani School of Data Science and AI, Indian Institute of Technology, Madras, Chennai, 600036, India
3. Selvam M, Chandrasekharan A, Sadanandan A, Anand VK, Murali A, Krishnamurthi G. Radiomics as a non-invasive adjunct to Chest CT in distinguishing benign and malignant lung nodules. Sci Rep 2023; 13:19062. [PMID: 37925565] [PMCID: PMC10625576] [DOI: 10.1038/s41598-023-46391-7] [Received: 09/05/2023] [Accepted: 10/31/2023]
Abstract
In an observational study conducted from 2016 to 2021, we assessed the utility of radiomics in differentiating between benign and malignant lung nodules detected on computed tomography (CT) scans. Patients in whom a final diagnosis regarding the lung nodules was available according to histopathology and/or 2017 Fleischner Society guidelines were included. The radiomics workflow included lesion segmentation, region of interest (ROI) definition, pre-processing, and feature extraction. Employing random forest feature selection, we identified ten important radiomic features for distinguishing between benign and malignant nodules. Among the classifiers tested, the Decision Tree model demonstrated superior performance, achieving 79% accuracy, 75% sensitivity, 85% specificity, 82% precision, and 90% F1 score. The implementation of the XGBoost algorithm further enhanced these results, yielding 89% accuracy, 89% sensitivity, 89% precision, and an F1 score of 89%, alongside a specificity of 85%. Our findings highlight tumor texture as the primary predictor of malignancy, emphasizing the importance of texture-based features in computational oncology. Thus, our study establishes radiomics as a powerful, non-invasive adjunct to CT scans in the differentiation of lung nodules, with significant implications for clinical decision-making, especially for indeterminate nodules, and the enhancement of diagnostic and predictive accuracy in this clinical context.
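The reported figures are standard binary-classification metrics derived from a confusion matrix; a small sketch with hypothetical counts (not the study's data) shows how they relate:

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity (recall), specificity, precision, and F1
    from binary confusion-matrix counts (malignant = positive class)."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)       # recall of the malignant class
    specificity = tn / (tn + fp)       # recall of the benign class
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, specificity, precision, f1

# Hypothetical counts for a 100-nodule test set:
print([round(m, 2) for m in classification_metrics(tp=40, fp=5, tn=45, fn=10)])
# [0.85, 0.8, 0.9, 0.89, 0.84]
```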
Affiliation(s)
- Minmini Selvam: Department of Radiology and Imaging Sciences, Sri Ramachandra Institute of Higher Education and Research, Porur, Chennai, 600 116, India
- Anupama Chandrasekharan: Department of Radiology and Imaging Sciences, Sri Ramachandra Institute of Higher Education and Research, Porur, Chennai, 600 116, India
- Abjasree Sadanandan: Department of Engineering Design, Indian Institute of Technology-Madras, Chennai, 600 036, India
- Vikas Kumar Anand: Department of Engineering Design, Indian Institute of Technology-Madras, Chennai, 600 036, India
- Arunan Murali: Department of Radiology and Imaging Sciences, Sri Ramachandra Institute of Higher Education and Research, Porur, Chennai, 600 116, India
- Ganapathy Krishnamurthi: Department of Engineering Design, Indian Institute of Technology-Madras, Chennai, 600 036, India
4. Bilic P, Christ P, Li HB, Vorontsov E, Ben-Cohen A, Kaissis G, Szeskin A, Jacobs C, Mamani GEH, Chartrand G, Lohöfer F, Holch JW, Sommer W, Hofmann F, Hostettler A, Lev-Cohain N, Drozdzal M, Amitai MM, Vivanti R, Sosna J, Ezhov I, Sekuboyina A, Navarro F, Kofler F, Paetzold JC, Shit S, Hu X, Lipková J, Rempfler M, Piraud M, Kirschke J, Wiestler B, Zhang Z, Hülsemeyer C, Beetz M, Ettlinger F, Antonelli M, Bae W, Bellver M, Bi L, Chen H, Chlebus G, Dam EB, Dou Q, Fu CW, Georgescu B, Giró-I-Nieto X, Gruen F, Han X, Heng PA, Hesser J, Moltz JH, Igel C, Isensee F, Jäger P, Jia F, Kaluva KC, Khened M, Kim I, Kim JH, Kim S, Kohl S, Konopczynski T, Kori A, Krishnamurthi G, Li F, Li H, Li J, Li X, Lowengrub J, Ma J, Maier-Hein K, Maninis KK, Meine H, Merhof D, Pai A, Perslev M, Petersen J, Pont-Tuset J, Qi J, Qi X, Rippel O, Roth K, Sarasua I, Schenk A, Shen Z, Torres J, Wachinger C, Wang C, Weninger L, Wu J, Xu D, Yang X, Yu SCH, Yuan Y, Yue M, Zhang L, Cardoso J, Bakas S, Braren R, Heinemann V, Pal C, Tang A, Kadoury S, Soler L, van Ginneken B, Greenspan H, Joskowicz L, Menze B. The Liver Tumor Segmentation Benchmark (LiTS). Med Image Anal 2023; 84:102680. [PMID: 36481607] [PMCID: PMC10631490] [DOI: 10.1016/j.media.2022.102680] [Received: 09/19/2021] [Revised: 09/27/2022] [Accepted: 10/29/2022]
Abstract
In this work, we report the set-up and results of the Liver Tumor Segmentation Benchmark (LiTS), which was organized in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI) 2017 and the International Conferences on Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2017 and 2018. The image dataset, created in collaboration with seven hospitals and research institutions, is diverse and contains primary and secondary tumors with varied sizes, appearances, and lesion-to-background contrast levels (hyper-/hypo-dense). Seventy-five submitted liver and liver-tumor segmentation algorithms were trained on a set of 131 computed tomography (CT) volumes and tested on 70 unseen test images acquired from different patients. We found that no single algorithm performed best for both liver and liver tumors across the three events. The best liver segmentation algorithm achieved a Dice score of 0.963, whereas for tumor segmentation the best algorithms achieved Dice scores of 0.674 (ISBI 2017), 0.702 (MICCAI 2017), and 0.739 (MICCAI 2018). Retrospectively, we performed additional analysis of liver tumor detection and found that not all top-performing segmentation algorithms worked well for tumor detection. The best liver tumor detection method achieved a lesion-wise recall of 0.458 (ISBI 2017), 0.515 (MICCAI 2017), and 0.554 (MICCAI 2018), indicating the need for further research. LiTS remains an active benchmark and resource for research, e.g., contributing the liver-related segmentation tasks to http://medicaldecathlon.com/. In addition, both the data and online evaluation are accessible via https://competitions.codalab.org/competitions/17094.
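The Dice score used to rank submissions measures the volumetric overlap between a predicted and a reference mask; a minimal sketch on toy 2D masks:

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|); defined as 1.0 when both masks are empty."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0
    return float(2.0 * np.logical_and(pred, truth).sum() / denom)

truth = np.zeros((8, 8), dtype=bool); truth[2:6, 2:6] = True  # 16 px
pred = np.zeros((8, 8), dtype=bool);  pred[3:7, 3:7] = True   # 16 px, 9 overlap
print(round(dice_score(pred, truth), 3))  # 0.562
```

Lesion-wise recall, by contrast, counts how many individual reference lesions are hit by any prediction, which is why a strong Dice score does not guarantee strong detection.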
Affiliation(s)
- Patrick Bilic: Department of Informatics, Technical University of Munich, Germany
- Patrick Christ: Department of Informatics, Technical University of Munich, Germany
- Hongwei Bran Li: Department of Informatics, Technical University of Munich, Germany; Department of Quantitative Biomedicine, University of Zurich, Switzerland
- Avi Ben-Cohen: Department of Biomedical Engineering, Tel-Aviv University, Israel
- Georgios Kaissis: Institute for AI in Medicine, Technical University of Munich, Germany; Institute for diagnostic and interventional radiology, Klinikum rechts der Isar, Technical University of Munich, Germany; Department of Computing, Imperial College London, London, United Kingdom
- Adi Szeskin: School of Computer Science and Engineering, the Hebrew University of Jerusalem, Israel
- Colin Jacobs: Department of Medical Imaging, Radboud University Medical Center, Nijmegen, The Netherlands
- Gabriel Chartrand: The University of Montréal Hospital Research Centre (CRCHUM), Montréal, Québec, Canada
- Fabian Lohöfer: Institute for diagnostic and interventional radiology, Klinikum rechts der Isar, Technical University of Munich, Germany
- Julian Walter Holch: Department of Medicine III, University Hospital, LMU Munich, Munich, Germany; Comprehensive Cancer Center Munich, Munich, Germany; Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Wieland Sommer: Department of Radiology, University Hospital, LMU Munich, Germany
- Felix Hofmann: Department of General, Visceral and Transplantation Surgery, University Hospital, LMU Munich, Germany; Department of Radiology, University Hospital, LMU Munich, Germany
- Alexandre Hostettler: Department of Surgical Data Science, Institut de Recherche contre les Cancers de l'Appareil Digestif (IRCAD), France
- Naama Lev-Cohain: Department of Radiology, Hadassah University Medical Center, Jerusalem, Israel
- Jacob Sosna: Department of Radiology, Hadassah University Medical Center, Jerusalem, Israel
- Ivan Ezhov: Department of Informatics, Technical University of Munich, Germany
- Anjany Sekuboyina: Department of Informatics, Technical University of Munich, Germany; Department of Quantitative Biomedicine, University of Zurich, Switzerland
- Fernando Navarro: Department of Informatics, Technical University of Munich, Germany; Department of Radiation Oncology and Radiotherapy, Klinikum rechts der Isar, Technical University of Munich, Germany; TranslaTUM - Central Institute for Translational Cancer Research, Technical University of Munich, Germany
- Florian Kofler: Department of Informatics, Technical University of Munich, Germany; Institute for diagnostic and interventional neuroradiology, Klinikum rechts der Isar, Technical University of Munich, Germany; Helmholtz AI, Helmholtz Zentrum München, Neuherberg, Germany; TranslaTUM - Central Institute for Translational Cancer Research, Technical University of Munich, Germany
- Johannes C Paetzold: Department of Computing, Imperial College London, London, United Kingdom; Institute for Tissue Engineering and Regenerative Medicine, Helmholtz Zentrum München, Neuherberg, Germany
- Suprosanna Shit: Department of Informatics, Technical University of Munich, Germany
- Xiaobin Hu: Department of Informatics, Technical University of Munich, Germany
- Jana Lipková: Brigham and Women's Hospital, Harvard Medical School, USA
- Markus Rempfler: Department of Informatics, Technical University of Munich, Germany
- Marie Piraud: Department of Informatics, Technical University of Munich, Germany; Helmholtz AI, Helmholtz Zentrum München, Neuherberg, Germany
- Jan Kirschke: Institute for diagnostic and interventional neuroradiology, Klinikum rechts der Isar, Technical University of Munich, Germany
- Benedikt Wiestler: Institute for diagnostic and interventional neuroradiology, Klinikum rechts der Isar, Technical University of Munich, Germany
- Zhiheng Zhang: Department of Hepatobiliary Surgery, the Affiliated Drum Tower Hospital of Nanjing University Medical School, China
- Marcel Beetz: Department of Informatics, Technical University of Munich, Germany
- Michela Antonelli: School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- Lei Bi: School of Computer Science, the University of Sydney, Australia
- Hao Chen: Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, China
- Grzegorz Chlebus: Fraunhofer MEVIS, Bremen, Germany; Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
- Erik B Dam: Department of Computer Science, University of Copenhagen, Denmark
- Qi Dou: Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Chi-Wing Fu: Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Xavier Giró-I-Nieto: Signal Theory and Communications Department, Universitat Politecnica de Catalunya, Catalonia, Spain
- Felix Gruen: Institute of Control Engineering, Technische Universität Braunschweig, Germany
- Xu Han: Department of Computer Science, UNC Chapel Hill, USA
- Pheng-Ann Heng: Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Jürgen Hesser: Mannheim Institute for Intelligent Systems in Medicine, Department of Medicine Mannheim, Heidelberg University, Germany; Interdisciplinary Center for Scientific Computing (IWR), Heidelberg University, Germany; Central Institute for Computer Engineering (ZITI), Heidelberg University, Germany
- Christian Igel: Department of Computer Science, University of Copenhagen, Denmark
- Fabian Isensee: Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany; Helmholtz Imaging, Germany
- Paul Jäger: Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany; Helmholtz Imaging, Germany
- Fucang Jia: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, China
- Krishna Chaitanya Kaluva: Medical Imaging and Reconstruction Lab, Department of Engineering Design, Indian Institute of Technology Madras, India
- Mahendra Khened: Medical Imaging and Reconstruction Lab, Department of Engineering Design, Indian Institute of Technology Madras, India
- Jae-Hun Kim: Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine, South Korea
- Simon Kohl: Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Tomasz Konopczynski: Interdisciplinary Center for Scientific Computing (IWR), Heidelberg University, Germany
- Avinash Kori: Medical Imaging and Reconstruction Lab, Department of Engineering Design, Indian Institute of Technology Madras, India
- Ganapathy Krishnamurthi: Medical Imaging and Reconstruction Lab, Department of Engineering Design, Indian Institute of Technology Madras, India
- Fan Li: Sensetime, Shanghai, China
- Hongchao Li: Department of Computer Science, Guangdong University of Foreign Studies, China
- Junbo Li: Philips Research China, Philips China Innovation Campus, Shanghai, China
- Xiaomeng Li: Department of Electrical and Electronic Engineering, The University of Hong Kong, China
- John Lowengrub: Departments of Mathematics, Biomedical Engineering, University of California, Irvine, USA; Center for Complex Biological Systems, University of California, Irvine, USA; Chao Family Comprehensive Cancer Center, University of California, Irvine, USA
- Jun Ma: Department of Mathematics, Nanjing University of Science and Technology, China
- Klaus Maier-Hein: Pattern Analysis and Learning Group, Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany; Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany; Helmholtz Imaging, Germany
- Hans Meine: Fraunhofer MEVIS, Bremen, Germany; Medical Image Computing Group, FB3, University of Bremen, Germany
- Dorit Merhof: Institute of Imaging & Computer Vision, RWTH Aachen University, Germany
- Akshay Pai: Department of Computer Science, University of Copenhagen, Denmark
- Mathias Perslev: Department of Computer Science, University of Copenhagen, Denmark
- Jens Petersen: Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Jordi Pont-Tuset: Eidgenössische Technische Hochschule Zurich (ETHZ), Zurich, Switzerland
- Jin Qi: School of Information and Communication Engineering, University of Electronic Science and Technology of China, China
- Xiaojuan Qi: Department of Electrical and Electronic Engineering, The University of Hong Kong, China
- Oliver Rippel: Institute of Imaging & Computer Vision, RWTH Aachen University, Germany
- Ignacio Sarasua: Institute for diagnostic and interventional radiology, Klinikum rechts der Isar, Technical University of Munich, Germany; Department of Child and Adolescent Psychiatry, Ludwig-Maximilians-Universität, Munich, Germany
- Andrea Schenk: Fraunhofer MEVIS, Bremen, Germany; Institute for Diagnostic and Interventional Radiology, Hannover Medical School, Hannover, Germany
- Zengming Shen: Beckman Institute, University of Illinois at Urbana-Champaign, USA; Siemens Healthineers, USA
- Jordi Torres: Barcelona Supercomputing Center, Barcelona, Spain; Universitat Politecnica de Catalunya, Catalonia, Spain
- Christian Wachinger: Department of Informatics, Technical University of Munich, Germany; Institute for diagnostic and interventional radiology, Klinikum rechts der Isar, Technical University of Munich, Germany; Department of Child and Adolescent Psychiatry, Ludwig-Maximilians-Universität, Munich, Germany
- Chunliang Wang: Department of Biomedical Engineering and Health Systems, KTH Royal Institute of Technology, Sweden
- Leon Weninger: Institute of Imaging & Computer Vision, RWTH Aachen University, Germany
- Jianrong Wu: Tencent Healthcare (Shenzhen) Co., Ltd, China
- Xiaoping Yang: Department of Mathematics, Nanjing University, China
- Simon Chun-Ho Yu: Department of Imaging and Interventional Radiology, Chinese University of Hong Kong, Hong Kong, China
- Yading Yuan: Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, NY, USA
- Miao Yue: CGG Services (Singapore) Pte. Ltd., Singapore
- Liping Zhang: Department of Imaging and Interventional Radiology, Chinese University of Hong Kong, Hong Kong, China
- Jorge Cardoso: School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- Spyridon Bakas: Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, PA, USA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, USA; Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, PA, USA
- Rickmer Braren: German Cancer Consortium (DKTK), Germany; Institute for diagnostic and interventional radiology, Klinikum rechts der Isar, Technical University of Munich, Germany; Comprehensive Cancer Center Munich, Munich, Germany
- Volker Heinemann: Department of Hematology/Oncology & Comprehensive Cancer Center Munich, LMU Klinikum Munich, Germany
- An Tang: Department of Radiology, Radiation Oncology and Nuclear Medicine, University of Montréal, Canada
- Luc Soler: Department of Surgical Data Science, Institut de Recherche contre les Cancers de l'Appareil Digestif (IRCAD), France
- Bram van Ginneken: Department of Medical Imaging, Radboud University Medical Center, Nijmegen, The Netherlands
- Hayit Greenspan: Department of Biomedical Engineering, Tel-Aviv University, Israel
- Leo Joskowicz: School of Computer Science and Engineering, the Hebrew University of Jerusalem, Israel
- Bjoern Menze: Department of Informatics, Technical University of Munich, Germany; Department of Quantitative Biomedicine, University of Zurich, Switzerland
5. Da Q, Huang X, Li Z, Zuo Y, Zhang C, Liu J, Chen W, Li J, Xu D, Hu Z, Yi H, Guo Y, Wang Z, Chen L, Zhang L, He X, Zhang X, Mei K, Zhu C, Lu W, Shen L, Shi J, Li J, S S, Krishnamurthi G, Yang J, Lin T, Song Q, Liu X, Graham S, Bashir RMS, Yang C, Qin S, Tian X, Yin B, Zhao J, Metaxas DN, Li H, Wang C, Zhang S. DigestPath: A benchmark dataset with challenge review for the pathological detection and segmentation of digestive-system. Med Image Anal 2022; 80:102485. [DOI: 10.1016/j.media.2022.102485] [Received: 10/02/2021] [Revised: 04/08/2022] [Accepted: 05/20/2022]
6. Balasubramanian SL, Krishnamurthi G. X-ray scintillator lens-coupled with CMOS camera for pre-clinical cardiac vascular imaging—A feasibility study. PLoS One 2022; 17:e0262913. [PMID: 35148354] [PMCID: PMC8836319] [DOI: 10.1371/journal.pone.0262913] [Received: 06/07/2021] [Accepted: 01/07/2022]
Abstract
We present the design and characterization of an X-ray imaging system consisting of an off-the-shelf CMOS sensor optically coupled to a CsI scintillator. The camera can perform both high-resolution and functional cardiac imaging. High-resolution 3D imaging requires microfocus X-ray tubes and expensive detectors, while pre-clinical functional cardiac imaging requires high-flux pulsed (clinical) X-ray tubes and high-end cameras. Our work describes an X-ray camera, an "optically coupled X-ray (OCX) detector," used for both of these applications with no change in specifications. We constructed the imaging detector with two different CMOS optical cameras: (1) a monochrome CMOS sensor coupled with an f/1.4 lens and (2) an RGB CMOS sensor coupled with an f/0.95 prime lens. The imaging system consisted of our X-ray camera, a micro-focus X-ray source (50 kVp and 1 mA), and a rotary stage controlled from a personal computer (PC) through a LabVIEW interface. The detective quantum efficiency (DQE) of the monochrome imaging system, estimated using a cascaded linear model with the system modulation transfer function (MTF) and the noise power spectrum (NPS) as inputs, was 17% at 10 lp/mm. Because of the RGB camera's low quantum efficiency (QE), the RGB-based OCX detector's DQE was 19% at 5 lp/mm. The contrast-to-noise ratio (CNR) at different frame rates was studied using capillary tubes filled with various dilutions of iodinated contrast agents. In-vivo cardiac angiography demonstrated that blood vessels on the order of 100 microns or larger were visible at 40 frames per second despite the low X-ray flux. For high-resolution 3D imaging, the system was characterized by imaging a cylindrical micro-CT contrast phantom and comparing the result against images from a commercial scanner.
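The CNR measurements described above follow the usual ROI-based definition; a toy sketch with synthetic ROIs (simplified definition and hypothetical intensity values, not the paper's measurement protocol):

```python
import numpy as np

def cnr(signal_roi: np.ndarray, background_roi: np.ndarray) -> float:
    """Contrast-to-noise ratio in its simplest ROI form:
    (mean signal - mean background) / background standard deviation."""
    return float((signal_roi.mean() - background_roi.mean())
                 / background_roi.std())

rng = np.random.default_rng(0)
background = rng.normal(100.0, 5.0, size=(64, 64))  # noisy background ROI
vessel = rng.normal(150.0, 5.0, size=(16, 16))      # contrast-filled capillary ROI
print(cnr(vessel, background) > 5)                  # ~10-sigma contrast step
```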
Affiliation(s)
- Ganapathy Krishnamurthi: Department of Engineering Design, Indian Institute of Technology-Madras, Chennai, Tamil Nadu, India
7. Khened M, Kori A, Rajkumar H, Krishnamurthi G, Srinivasan B. A generalized deep learning framework for whole-slide image segmentation and analysis. Sci Rep 2021; 11:11579. [PMID: 34078928] [PMCID: PMC8172839] [DOI: 10.1038/s41598-021-90444-8] [Received: 01/04/2021] [Accepted: 05/04/2021]
Abstract
Histopathology tissue analysis is considered the gold standard in cancer diagnosis and prognosis. Whole-slide imaging (WSI), i.e., the scanning and digitization of entire histology slides, is now being adopted in pathology labs across the world. Trained histopathologists can provide an accurate diagnosis of biopsy specimens based on WSI data. Given the dimensionality of WSIs and the increasing number of potential cancer cases, analyzing these images is a time-consuming process. Automated segmentation of tumorous tissue helps improve the precision, speed, and reproducibility of research. In the recent past, deep learning-based techniques have provided state-of-the-art results in a wide variety of image analysis tasks, including the analysis of digitized slides. However, deep learning-based solutions pose many technical challenges, including the large size of WSI data, heterogeneity across images, and complexity of features. In this study, we propose a generalized deep learning-based framework for histopathology tissue analysis to address these challenges. Our framework is, in essence, a sequence of individual techniques in the preprocessing-training-inference pipeline which, in conjunction, improve the efficiency and generalizability of the analysis. The combination of techniques we introduce includes an ensemble segmentation model, division of the WSI into smaller overlapping patches while addressing class imbalances, efficient inference techniques, and a patch-based uncertainty estimation framework. Our ensemble consists of DenseNet-121, Inception-ResNet-V2, and DeepLabV3Plus, where all networks were trained end to end for every task. We demonstrate the efficacy and improved generalizability of our framework by evaluating it on a variety of histopathology tasks, including breast cancer metastases (CAMELYON), colon cancer (DigestPath), and liver cancer (PAIP). Our proposed framework achieves state-of-the-art performance across all these tasks and is currently ranked within the top 5 for the challenges based on these datasets. The entire framework, along with the trained models and related documentation, is made freely available on GitHub and PyPI. Our framework is expected to aid histopathologists in accurate and efficient initial diagnosis. Moreover, the estimated uncertainty maps will help clinicians make informed decisions and plan further treatment or analysis.
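The overlapping-patch strategy described above can be sketched as tiling, per-patch prediction, and averaging of the overlaps; a minimal illustration with a placeholder predictor (a trained segmentation network would take its place):

```python
import numpy as np

def predict_wsi(image: np.ndarray, patch: int = 64, stride: int = 32,
                predict=lambda p: p.mean() * np.ones_like(p)) -> np.ndarray:
    """Tile a 2D image into overlapping patches, run `predict` on each
    patch, and average the overlapping predictions into a full-size map."""
    h, w = image.shape
    out = np.zeros((h, w))
    counts = np.zeros((h, w))
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            tile = image[y:y + patch, x:x + patch]
            out[y:y + patch, x:x + patch] += predict(tile)
            counts[y:y + patch, x:x + patch] += 1
    return out / np.maximum(counts, 1)  # average where patches overlap

# With 128 = 64 + 2*32, the tiles cover the image completely.
result = predict_wsi(np.ones((128, 128)))
print(result.shape)  # (128, 128)
```

Averaging overlaps suppresses patch-border artifacts; disagreement between overlapping (or ensemble) predictions is also the raw material for patch-based uncertainty maps.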
Affiliation(s)
- Mahendra Khened: Department of Engineering Design, Indian Institute of Technology Madras, Chennai, 600036, India
- Avinash Kori: Department of Engineering Design, Indian Institute of Technology Madras, Chennai, 600036, India
- Haran Rajkumar: Department of Engineering Design, Indian Institute of Technology Madras, Chennai, 600036, India
- Ganapathy Krishnamurthi: Department of Engineering Design, Indian Institute of Technology Madras, Chennai, 600036, India
- Balaji Srinivasan: Department of Mechanical Engineering, Indian Institute of Technology Madras, Chennai, 600036, India
8. Natekar P, Kori A, Krishnamurthi G. Corrigendum: Demystifying Brain Tumor Segmentation Networks: Interpretability and Uncertainty Analysis. Front Comput Neurosci 2021; 15:651959. [PMID: 33584235] [PMCID: PMC7879394] [DOI: 10.3389/fncom.2021.651959] [Received: 01/11/2021] [Accepted: 01/12/2021]
Affiliation(s)
- Parth Natekar
- Department of Engineering Design, Indian Institute of Technology Madras, Chennai, India
- Avinash Kori
- Department of Engineering Design, Indian Institute of Technology Madras, Chennai, India
|
9
|
Kim YJ, Jang H, Lee K, Park S, Min SG, Hong C, Park JH, Lee K, Kim J, Hong W, Jung H, Liu Y, Rajkumar H, Khened M, Krishnamurthi G, Yang S, Wang X, Han CH, Kwak JT, Ma J, Tang Z, Marami B, Zeineh J, Zhao Z, Heng PA, Schmitz R, Madesta F, Rösch T, Werner R, Tian J, Puybareau E, Bovio M, Zhang X, Zhu Y, Chun SY, Jeong WK, Park P, Choi J. PAIP 2019: Liver cancer segmentation challenge. Med Image Anal 2020; 67:101854. [PMID: 33091742 DOI: 10.1016/j.media.2020.101854] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/31/2019] [Revised: 07/06/2020] [Accepted: 09/03/2020] [Indexed: 01/22/2023]
Abstract
Pathology Artificial Intelligence Platform (PAIP) is a free research platform in support of pathological artificial intelligence (AI). The main goal of the platform is to construct a high-quality pathology learning data set that will allow greater accessibility. The PAIP Liver Cancer Segmentation Challenge, organized in conjunction with the Medical Image Computing and Computer Assisted Intervention Society (MICCAI 2019), is the first image analysis challenge to apply PAIP datasets. The goal of the challenge was to evaluate new and existing algorithms for automated detection of liver cancer in whole-slide images (WSIs). Additionally, this year's PAIP attempted to address potential future problems of AI applicability in clinical settings. In the challenge, participants were asked to use analytical data and statistical metrics to evaluate the performance of automated algorithms in two different tasks: Task 1 involved liver cancer segmentation, and Task 2 involved viable tumor burden estimation. Performance was strongly correlated across the two tasks: teams that performed well on Task 1 also performed well on Task 2. After evaluation, we summarized the top 11 teams' algorithms. We then discussed the pathological implications of the images that were easily predicted for cancer segmentation and those that were challenging for viable tumor burden estimation. Of the 231 participants registered for the PAIP challenge datasets, a total of 64 submissions were received from 28 participating teams. The submitted algorithms segmented liver cancer in WSIs with scores of up to 0.78. The PAIP challenge was created to address the lack of digital pathology research on liver cancer. It remains unclear how the AI algorithms created during the challenge will affect clinical diagnoses.
However, the dataset and evaluation metrics provided have the potential to aid the development and benchmarking of cancer diagnosis and segmentation.
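Viable tumor burden (Task 2) is, at its core, a ratio of mask areas. A minimal sketch, assuming simple nested lists of 0/1 pixel labels rather than real WSI masks (the helper name is hypothetical, not from the challenge's evaluation code):

```python
def viable_tumor_burden(viable_mask, whole_mask):
    """Fraction of the whole-tumor area occupied by viable tumor tissue.

    Both masks are nested lists of 0/1 labels over the same pixel grid;
    area is simply the count of positive pixels.
    """
    viable_area = sum(sum(row) for row in viable_mask)
    whole_area = sum(sum(row) for row in whole_mask)
    return viable_area / whole_area if whole_area else 0.0
```

In practice the masks come from the two segmentation outputs (viable tumor and whole tumor), so Task 2 accuracy depends directly on Task 1 quality, which is consistent with the correlation the organizers report.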
Affiliation(s)
- Yoo Jung Kim
- Department of Biomedical Engineering, Seoul National University Hospital, Seoul, South Korea
- Hyungjoon Jang
- School of Electrical and Computer Engineering, Ulsan National Institute of Science and Technology, Ulsan, South Korea
- Kyoungbun Lee
- Department of Pathology, Seoul National University Hospital, Seoul, South Korea
- Seongkeun Park
- Department of Biomedical Engineering, Seoul National University Hospital, Seoul, South Korea
- Sung-Gyu Min
- Department of Pathology, Seoul National University Hospital, Seoul, South Korea
- Choyeon Hong
- Department of Pathology, Seoul National University Hospital, Seoul, South Korea
- Jeong Hwan Park
- Department of Pathology, Seoul Metropolitan Government-Seoul National University Boramae Medical Center, Seoul, South Korea
- Kanggeun Lee
- School of Electrical and Computer Engineering, Ulsan National Institute of Science and Technology, Ulsan, South Korea
- Jisoo Kim
- School of Electrical and Computer Engineering, Ulsan National Institute of Science and Technology, Ulsan, South Korea
- Wonjae Hong
- School of Electrical and Computer Engineering, Ulsan National Institute of Science and Technology, Ulsan, South Korea
- Hyun Jung
- Frederick National Laboratory for Cancer Research, Frederick, Maryland, United States
- Yanling Liu
- Frederick National Laboratory for Cancer Research, Frederick, Maryland, United States
- Haran Rajkumar
- Department of Engineering Design, Indian Institute of Technology Madras, Chennai, Tamil Nadu, India
- Mahendra Khened
- Department of Engineering Design, Indian Institute of Technology Madras, Chennai, Tamil Nadu, India
- Ganapathy Krishnamurthi
- Department of Engineering Design, Indian Institute of Technology Madras, Chennai, Tamil Nadu, India
- Sen Yang
- Sichuan University and Tencent AI Lab, Chengdu, Sichuan, China
- Xiyue Wang
- College of Computer Science, Sichuan University, Chengdu, Sichuan, China
- Chang Hee Han
- Department of Computer Science and Engineering, Sejong University, Seoul, South Korea
- Jin Tae Kwak
- Department of Computer Science and Engineering, Sejong University, Seoul, South Korea
- Bahram Marami
- The Center for Computational and Systems Pathology, Icahn School of Medicine at Mount Sinai, New York, NY, United States
- Jack Zeineh
- The Center for Computational and Systems Pathology, Icahn School of Medicine at Mount Sinai, New York, NY, United States
- Zixu Zhao
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong
- Pheng-Ann Heng
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong
- Rüdiger Schmitz
- Department for Interdisciplinary Endoscopy, University Medical Center Hamburg-Eppendorf, Hamburg, Germany; DAISYlabs, Forschungszentrum Medizintechnik Hamburg, Hamburg, Germany
- Frederic Madesta
- Institute of Computational Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg, Germany; DAISYlabs, Forschungszentrum Medizintechnik Hamburg, Hamburg, Germany
- Thomas Rösch
- Department for Interdisciplinary Endoscopy, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Rene Werner
- Institute of Computational Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg, Germany; DAISYlabs, Forschungszentrum Medizintechnik Hamburg, Hamburg, Germany
- Jie Tian
- Shanghai Jiao Tong University, Shanghai, China
- Yifeng Zhu
- University of Maine, Orono, ME, United States
- Se Young Chun
- School of Electrical and Computer Engineering, Ulsan National Institute of Science and Technology, Ulsan, South Korea
- Won-Ki Jeong
- Department of Computer Science and Engineering, College of Informatics, Korea University, Seoul, 02841, Korea
- Jinwook Choi
- Department of Biomedical Engineering, Seoul National University Hospital, Seoul, South Korea
|
10
|
Kurc T, Bakas S, Ren X, Bagari A, Momeni A, Huang Y, Zhang L, Kumar A, Thibault M, Qi Q, Wang Q, Kori A, Gevaert O, Zhang Y, Shen D, Khened M, Ding X, Krishnamurthi G, Kalpathy-Cramer J, Davis J, Zhao T, Gupta R, Saltz J, Farahani K. Segmentation and Classification in Digital Pathology for Glioma Research: Challenges and Deep Learning Approaches. Front Neurosci 2020; 14:27. [PMID: 32153349 PMCID: PMC7046596 DOI: 10.3389/fnins.2020.00027] [Citation(s) in RCA: 29] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2019] [Accepted: 01/10/2020] [Indexed: 12/12/2022] Open
Abstract
Biomedical imaging is an important source of information in cancer research. Characterizations of cancer morphology at onset, progression, and in response to treatment provide complementary information to that gleaned from genomics and clinical data. Accurate extraction and classification of both visual and latent image features is an increasingly complex challenge due to the growing complexity and resolution of biomedical image data. In this paper, we present four deep learning-based image analysis methods from the Computational Precision Medicine (CPM) satellite event of the 21st International Medical Image Computing and Computer Assisted Intervention (MICCAI 2018) conference. One method is a segmentation method designed to segment nuclei in whole slide tissue images (WSIs) of adult diffuse glioma cases. It achieved a Dice similarity coefficient of 0.868 with the CPM challenge datasets. Three methods are classification methods developed to categorize adult diffuse glioma cases into oligodendroglioma and astrocytoma classes using radiographic and histologic image data. These methods achieved accuracy values of 0.75, 0.80, and 0.90, measured as the ratio of the number of correct classifications to the total number of cases, with the challenge datasets. The evaluations of the four methods indicate that (1) carefully constructed deep learning algorithms are able to produce high accuracy in the analysis of biomedical image data and (2) combining radiographic with histologic image information improves classification performance.
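The Dice similarity coefficient quoted above compares a predicted mask against a reference mask; a minimal reference implementation over flat 0/1 sequences (an illustrative sketch, not the challenge's evaluation code) looks like:

```python
def dice_coefficient(pred, truth):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|) for binary masks.

    pred and truth are equal-length flat sequences of 0/1 labels.
    """
    intersection = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * intersection / total if total else 1.0  # both empty: perfect match
```

A score of 0.868 therefore means the predicted nuclei masks overlap the reference masks on roughly 87% of their combined area.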
Affiliation(s)
- Tahsin Kurc
- Department of Biomedical Informatics, Stony Brook University, Stony Brook, NY, United States
- Spyridon Bakas
- Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, PA, United States
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
- Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
- Xuhua Ren
- Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Aditya Bagari
- Department of Engineering Design, Indian Institute of Technology Madras, Chennai, India
- Alexandre Momeni
- Department of Medicine and Biomedical Data Science, Stanford University, Stanford, CA, United States
- Yue Huang
- School of Informatics, Xiamen University, Xiamen, China
- Lichi Zhang
- Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Ashish Kumar
- Department of Engineering Design, Indian Institute of Technology Madras, Chennai, India
- Marc Thibault
- Department of Medicine and Biomedical Data Science, Stanford University, Stanford, CA, United States
- Qi Qi
- School of Informatics, Xiamen University, Xiamen, China
- Qian Wang
- Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Avinash Kori
- Department of Engineering Design, Indian Institute of Technology Madras, Chennai, India
- Olivier Gevaert
- Department of Medicine and Biomedical Data Science, Stanford University, Stanford, CA, United States
- Yunlong Zhang
- School of Informatics, Xiamen University, Xiamen, China
- Dinggang Shen
- Department of Radiology and BRIC, The University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- Mahendra Khened
- Department of Engineering Design, Indian Institute of Technology Madras, Chennai, India
- Xinghao Ding
- School of Informatics, Xiamen University, Xiamen, China
- Jayashree Kalpathy-Cramer
- Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States
- James Davis
- Department of Pathology, Stony Brook University, Stony Brook, NY, United States
- Tianhao Zhao
- Department of Pathology, Stony Brook University, Stony Brook, NY, United States
- Rajarsi Gupta
- Department of Biomedical Informatics, Stony Brook University, Stony Brook, NY, United States
- Department of Pathology, Stony Brook University, Stony Brook, NY, United States
- Joel Saltz
- Department of Biomedical Informatics, Stony Brook University, Stony Brook, NY, United States
- Keyvan Farahani
- Cancer Imaging Program, National Cancer Institute, National Institutes of Health, Bethesda, MD, United States
|
11
|
Natekar P, Kori A, Krishnamurthi G. Demystifying Brain Tumor Segmentation Networks: Interpretability and Uncertainty Analysis. Front Comput Neurosci 2020; 14:6. [PMID: 32116620 PMCID: PMC7025464 DOI: 10.3389/fncom.2020.00006] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2019] [Accepted: 01/17/2020] [Indexed: 11/13/2022] Open
Abstract
The accurate automatic segmentation of gliomas and their intra-tumoral structures is important not only for treatment planning but also for follow-up evaluations. Several methods based on 2D and 3D Deep Neural Networks (DNN) have been developed to segment brain tumors and to classify different categories of tumors from different MRI modalities. However, these networks are often black-box models and do not provide any evidence regarding the process they take to perform this task. Increasing the transparency and interpretability of such deep learning techniques is necessary for their complete integration into medical practice. In this paper, we explore various techniques to explain the functional organization of brain tumor segmentation models and to extract visualizations of internal concepts to understand how these networks achieve highly accurate tumor segmentations. We use the BraTS 2018 dataset to train three different networks with standard architectures and outline similarities and differences in the process that these networks take to segment brain tumors. We show that brain tumor segmentation networks learn certain human-understandable disentangled concepts on a filter level. We also show that they take a top-down or hierarchical approach to localizing the different parts of the tumor. We then extract visualizations of some internal feature maps and also provide a measure of uncertainty with regard to the outputs of the models to give additional qualitative evidence about the predictions of these networks. We believe that the emergence of such human-understandable organization and concepts might aid in the acceptance and integration of such methods in medical diagnosis.
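The uncertainty measure itself is not spelled out in this summary; one common recipe for segmentation uncertainty, per-pixel predictive entropy over repeated stochastic forward passes, can be sketched as follows (a generic illustration, not necessarily the authors' exact method):

```python
import math

def predictive_entropy(prob_samples):
    """Per-pixel binary predictive entropy from repeated stochastic passes.

    prob_samples: list of equal-length sequences of foreground probabilities,
    one sequence per forward pass (e.g., with dropout left active at test time).
    """
    n = len(prob_samples)
    entropies = []
    for pixel_probs in zip(*prob_samples):
        p = sum(pixel_probs) / n  # mean foreground probability across passes
        h = 0.0
        for q in (p, 1.0 - p):
            if q > 0.0:
                h -= q * math.log(q)  # -sum q log q, with 0 log 0 := 0
        entropies.append(h)
    return entropies
```

Entropy is zero where all passes agree confidently and maximal (ln 2) where the mean probability sits at 0.5, which is exactly the kind of per-pixel map shown alongside segmentations as qualitative evidence.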
Affiliation(s)
- Parth Natekar
- Department of Engineering Design, Indian Institute of Technology Madras, Chennai, India
- Avinash Kori
- Department of Engineering Design, Indian Institute of Technology Madras, Chennai, India
|
12
|
Khened M, Kollerathu VA, Krishnamurthi G. Fully convolutional multi-scale residual DenseNets for cardiac segmentation and automated cardiac diagnosis using ensemble of classifiers. Med Image Anal 2019; 51:21-45. [DOI: 10.1016/j.media.2018.10.004] [Citation(s) in RCA: 108] [Impact Index Per Article: 21.6] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2018] [Revised: 10/11/2018] [Accepted: 10/18/2018] [Indexed: 10/28/2022]
|
13
|
Bernard O, Lalande A, Zotti C, Cervenansky F, Yang X, Heng PA, Cetin I, Lekadir K, Camara O, Gonzalez Ballester MA, Sanroma G, Napel S, Petersen S, Tziritas G, Grinias E, Khened M, Kollerathu VA, Krishnamurthi G, Rohe MM, Pennec X, Sermesant M, Isensee F, Jager P, Maier-Hein KH, Full PM, Wolf I, Engelhardt S, Baumgartner CF, Koch LM, Wolterink JM, Isgum I, Jang Y, Hong Y, Patravali J, Jain S, Humbert O, Jodoin PM. Deep Learning Techniques for Automatic MRI Cardiac Multi-Structures Segmentation and Diagnosis: Is the Problem Solved? IEEE Trans Med Imaging 2018; 37:2514-2525. [PMID: 29994302 DOI: 10.1109/tmi.2018.2837502] [Citation(s) in RCA: 464] [Impact Index Per Article: 77.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/02/2023]
Abstract
Delineation of the left ventricular cavity, myocardium, and right ventricle from cardiac magnetic resonance images (multi-slice 2-D cine MRI) is a common clinical task to establish diagnosis. The automation of the corresponding tasks has thus been the subject of intense research over the past decades. In this paper, we introduce the "Automatic Cardiac Diagnosis Challenge" dataset (ACDC), the largest publicly available and fully annotated dataset for the purpose of cardiac MRI (CMR) assessment. The dataset contains data from 150 multi-equipment CMRI recordings with reference measurements and classification from two medical experts. The overarching objective of this paper is to measure how far state-of-the-art deep learning methods can go at assessing CMRI, i.e., segmenting the myocardium and the two ventricles as well as classifying pathologies. In the wake of the 2017 MICCAI-ACDC challenge, we report results from deep learning methods provided by nine research groups for the segmentation task and four groups for the classification task. Results show that the best methods faithfully reproduce the expert analysis, leading to a mean correlation score of 0.97 for the automatic extraction of clinical indices and an accuracy of 0.96 for automatic diagnosis. These results clearly open the door to highly accurate and fully automatic analysis of CMRI. We also identify scenarios for which deep learning methods are still failing. Both the dataset and detailed results are publicly available online, while the platform will remain open for new submissions.
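Among the clinical indices extracted automatically in ACDC are ventricular volumes and the ejection fraction derived from them; the underlying arithmetic is simply the following (function names and the numbers in the example are illustrative, not from the challenge code):

```python
def volume_ml(voxel_count, voxel_volume_mm3):
    """Cavity volume in millilitres from a segmentation's voxel count."""
    return voxel_count * voxel_volume_mm3 / 1000.0  # 1 mL = 1000 mm^3

def ejection_fraction(edv_ml, esv_ml):
    """Ejection fraction (%) from end-diastolic and end-systolic volumes."""
    return 100.0 * (edv_ml - esv_ml) / edv_ml
```

For instance, a left ventricle segmented as 120,000 voxels of 1 mm³ at end-diastole (120 mL) and 48 mL at end-systole gives an ejection fraction of 60%; the 0.97 correlation score reflects agreement between such automatically derived indices and the expert references.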
|
14
|
Khened M, Alex V, Krishnamurthi G. Densely Connected Fully Convolutional Network for Short-Axis Cardiac Cine MR Image Segmentation and Heart Diagnosis Using Random Forest. Lecture Notes in Computer Science 2018. [DOI: 10.1007/978-3-319-75541-0_15] [Citation(s) in RCA: 29] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
|
15
|
Shaikh M, Anand G, Acharya G, Amrutkar A, Alex V, Krishnamurthi G. Brain Tumor Segmentation Using Dense Fully Convolutional Neural Network. Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries 2018. [DOI: 10.1007/978-3-319-75238-9_27] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/15/2023]
|
16
|
Alex V, Vaidhya K, Thirunavukkarasu S, Kesavadas C, Krishnamurthi G. Semisupervised learning using denoising autoencoders for brain lesion detection and segmentation. J Med Imaging (Bellingham) 2017; 4:041311. [PMID: 29285516 DOI: 10.1117/1.jmi.4.4.041311] [Citation(s) in RCA: 35] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2017] [Accepted: 11/16/2017] [Indexed: 12/13/2022] Open
Abstract
The work explores the use of denoising autoencoders (DAEs) for brain lesion detection, segmentation, and false-positive reduction. Stacked denoising autoencoders (SDAEs) were pretrained using a large number of unlabeled patient volumes and fine-tuned with patches drawn from a limited number of patients ([Formula: see text], 40, 65). The results show negligible loss in performance even when SDAE was fine-tuned using 20 labeled patients. Low grade glioma (LGG) segmentation was achieved using a transfer learning approach in which a network pretrained with high grade glioma data was fine-tuned using LGG image patches. The networks were also shown to generalize well and provide good segmentation on unseen BraTS 2013 and BraTS 2015 test data. The manuscript also includes the use of a single layer DAE, referred to as novelty detector (ND). ND was trained to accurately reconstruct nonlesion patches. The reconstruction error maps of test data were used to localize lesions. The error maps were shown to assign unique error distributions to various constituents of the glioma, enabling localization. The ND learns the nonlesion brain accurately as it was also shown to provide good segmentation performance on ischemic brain lesions in images from a different database.
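The novelty-detector idea reduces to thresholding reconstruction error: a model trained only on non-lesion patches reconstructs lesion patches poorly. A toy sketch, where the `reconstruct` callable stands in for the trained DAE (everything here is illustrative, not the paper's implementation):

```python
def flag_lesion_patches(patches, reconstruct, threshold):
    """Flag patches whose mean squared reconstruction error exceeds threshold.

    patches: list of flat intensity sequences; reconstruct: callable mapping a
    patch to its reconstruction (e.g., a DAE trained on non-lesion tissue).
    """
    flags = []
    for patch in patches:
        recon = reconstruct(patch)
        mse = sum((a - b) ** 2 for a, b in zip(patch, recon)) / len(patch)
        flags.append(mse > threshold)  # high error suggests unseen (lesion) tissue
    return flags
```

With a stand-in model that reconstructs everything as dark background, only the bright "lesion" patch is flagged; in the paper the full error map, not a single threshold, is what localizes and distinguishes the glioma constituents.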
Affiliation(s)
- Varghese Alex
- Indian Institute of Technology Madras, Department of Engineering Design, Chennai, India
- Kiran Vaidhya
- Indian Institute of Technology Madras, Department of Engineering Design, Chennai, India
- Chandrasekharan Kesavadas
- Sree Chitra Tirunal Institute for Medical Sciences and Technology, Department of Radiology, Trivandrum, India
|
17
|
Jacob A, Krishnamurthi G, Mathur M. Estimation of myocardial deformation using correlation image velocimetry. BMC Med Imaging 2017; 17:25. [PMID: 28381245 PMCID: PMC5382518 DOI: 10.1186/s12880-017-0195-7] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2016] [Accepted: 03/02/2017] [Indexed: 11/24/2022] Open
Abstract
Background: Tagged Magnetic Resonance (tMR) imaging is a powerful technique for determining cardiovascular abnormalities. One of the reasons tMR is not used in routine clinical practice is the lack of easy-to-use tools for image analysis and strain mapping. In this paper, we introduce a novel interdisciplinary method based on correlation image velocimetry (CIV) to estimate cardiac deformation and strain maps from tMR images. Methods: CIV, a cross-correlation based pattern matching algorithm, analyses a pair of images to obtain the displacement field at sub-pixel accuracy with any desired spatial resolution. This first-time application of CIV to tMR image analysis is implemented using an existing open-source Matlab-based software called UVMAT. The method, which requires two main input parameters, namely correlation box size (CB) and search box size (SB), is first validated using a synthetic grid image with grid sizes representative of typical tMR images. Phantom and patient images obtained from a medical imaging grand challenge dataset (http://stacom.cardiacatlas.org/motion-tracking-challenge/) were then analysed to obtain cardiac displacement fields and strain maps. The results were then compared with estimates from the Harmonic Phase analysis (HARP) technique. Results: For a known displacement field imposed on both the synthetic grid image and the phantom image, CIV is accurate for 3-pixel and larger displacements on a 512 × 512 image with (CB, SB) = (25, 55) pixels. Further validation of our method is achieved by showing that our estimated landmark positions on patient images fall within the inter-observer variability in the ground truth. The effectiveness of our approach on patient images is then established by calculating dense displacement fields throughout a cardiac cycle, which were found to be physiologically consistent.
Circumferential strains were estimated at the apical, mid and basal slices of the heart, and were shown to compare favorably with those of HARP over the entire cardiac cycle, except in a few (∼4) of the segments in the 17-segment AHA model. The radial strains, however, are underestimated by our method in most segments when compared with HARP. Conclusions: In summary, we have demonstrated the capability of CIV to accurately and efficiently quantify cardiac deformation from tMR images. Furthermore, physiologically consistent displacement fields and circumferential strain curves in most regions of the heart indicate that our approach, upon automating some pre-processing steps and testing in clinical trials, can potentially be implemented in a clinical setting. Electronic supplementary material: The online version of this article (doi:10.1186/s12880-017-0195-7) contains supplementary material, which is available to authorized users.
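The core of any correlation-based velocimetry step is locating the shift that maximizes the correlation between a correlation box and a larger search box; a 1-D integer-shift sketch follows (illustrative only: CIV/UVMAT operates on 2-D boxes at sub-pixel accuracy):

```python
def best_shift(template, signal):
    """Integer shift of `template` inside `signal` that maximizes the
    (unnormalized) cross-correlation; ties go to the smallest shift."""
    n = len(template)
    best_s, best_score = 0, float("-inf")
    for s in range(len(signal) - n + 1):
        score = sum(a * b for a, b in zip(template, signal[s:s + n]))
        if score > best_score:
            best_s, best_score = s, score
    return best_s
```

Applied to a pair of consecutive tMR frames, the per-box shift is the local displacement; sub-pixel accuracy is then typically obtained by interpolating the correlation peak, a refinement omitted here.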
Affiliation(s)
- Athira Jacob
- Department of Engineering Design, Indian Institute of Technology Madras, Chennai, 600036, India; Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, 21218, USA
- Ganapathy Krishnamurthi
- Department of Engineering Design, Indian Institute of Technology Madras, Chennai, 600036, India
- Manikandan Mathur
- Department of Aerospace Engineering, Indian Institute of Technology Madras, Chennai, 600036, India
|
18
|
Carass A, Roy S, Jog A, Cuzzocreo JL, Magrath E, Gherman A, Button J, Nguyen J, Prados F, Sudre CH, Jorge Cardoso M, Cawley N, Ciccarelli O, Wheeler-Kingshott CAM, Ourselin S, Catanese L, Deshpande H, Maurel P, Commowick O, Barillot C, Tomas-Fernandez X, Warfield SK, Vaidya S, Chunduru A, Muthuganapathy R, Krishnamurthi G, Jesson A, Arbel T, Maier O, Handels H, Iheme LO, Unay D, Jain S, Sima DM, Smeets D, Ghafoorian M, Platel B, Birenbaum A, Greenspan H, Bazin PL, Calabresi PA, Crainiceanu CM, Ellingsen LM, Reich DS, Prince JL, Pham DL. Longitudinal multiple sclerosis lesion segmentation: Resource and challenge. Neuroimage 2017; 148:77-102. [PMID: 28087490 PMCID: PMC5344762 DOI: 10.1016/j.neuroimage.2016.12.064] [Citation(s) in RCA: 125] [Impact Index Per Article: 17.9] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/19/2016] [Revised: 11/15/2016] [Accepted: 12/19/2016] [Indexed: 01/12/2023] Open
Abstract
In conjunction with the ISBI 2015 conference, we organized a longitudinal lesion segmentation challenge providing training and test data to registered participants. The training data consisted of five subjects with a mean of 4.4 time-points, and test data of fourteen subjects with a mean of 4.4 time-points. All 82 data sets had the white matter lesions associated with multiple sclerosis delineated by two human expert raters. Eleven teams submitted results using state-of-the-art lesion segmentation algorithms to the challenge, with ten teams presenting their results at the conference. We present a quantitative evaluation comparing the consistency of the two raters as well as exploring the performance of the eleven submitted results in addition to three other lesion segmentation algorithms. The challenge presented three unique opportunities: (1) the sharing of a rich data set; (2) collaboration and comparison of the various avenues of research being pursued in the community; and (3) a review and refinement of the evaluation metrics currently in use. We report on the performance of the challenge participants, as well as the construction and evaluation of a consensus delineation. The image data and manual delineations will continue to be available for download through an evaluation website as a resource for future researchers in the area. This data resource provides a platform to compare existing methods in a fair and consistent manner to each other and multiple manual raters.
Collapse
Affiliation(s)
- Aaron Carass
- Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD 21218, USA; Department of Computer Science, The Johns Hopkins University, Baltimore, MD 21218, USA.
| | - Snehashis Roy
- CNRM, The Henry M. Jackson Foundation for the Advancement of Military Medicine, Bethesda, MD 20892, USA
| | - Amod Jog
- Department of Computer Science, The Johns Hopkins University, Baltimore, MD 21218, USA
| | - Jennifer L Cuzzocreo
- Department of Radiology, The Johns Hopkins School of Medicine, Baltimore, MD 21287, USA
| | - Elizabeth Magrath
- CNRM, The Henry M. Jackson Foundation for the Advancement of Military Medicine, Bethesda, MD 20892, USA
| | - Adrian Gherman
- Department of Biostatistics, The Johns Hopkins University, Baltimore, MD 21205, USA
| | - Julia Button
- Department of Radiology, The Johns Hopkins School of Medicine, Baltimore, MD 21287, USA
| | - James Nguyen
- Department of Radiology, The Johns Hopkins School of Medicine, Baltimore, MD 21287, USA
| | - Ferran Prados
- Translational Imaging Group, CMIC, UCL, NW1 2HE London, UK; NMR Research Unit, UCL Institute of Neurology, WC1N 3BG London, UK
| | - Carole H Sudre
- Translational Imaging Group, CMIC, UCL, NW1 2HE London, UK
| | - Manuel Jorge Cardoso
- Translational Imaging Group, CMIC, UCL, NW1 2HE London, UK; Dementia Research Centre, UCL Institute of Neurology, WC1N 3BG London, UK
| | - Niamh Cawley
- NMR Research Unit, UCL Institute of Neurology, WC1N 3BG London, UK
| | - Olga Ciccarelli
- NMR Research Unit, UCL Institute of Neurology, WC1N 3BG London, UK
| | | | - Sébastien Ourselin
- Translational Imaging Group, CMIC, UCL, NW1 2HE London, UK; Dementia Research Centre, UCL Institute of Neurology, WC1N 3BG London, UK
| | - Laurence Catanese
- VisAGeS: INSERM U746, CNRS UMR6074, INRIA, University of Rennes I, France
| | | | - Pierre Maurel
- VisAGeS: INSERM U746, CNRS UMR6074, INRIA, University of Rennes I, France
| | - Olivier Commowick
- VisAGeS: INSERM U746, CNRS UMR6074, INRIA, University of Rennes I, France
- Christian Barillot
- VisAGeS: INSERM U746, CNRS UMR6074, INRIA, University of Rennes I, France
- Xavier Tomas-Fernandez
- Computational Radiology Laboratory, Boston Children's Hospital, Boston, MA 02115, USA; Harvard Medical School, Boston, MA 02115, USA
- Simon K Warfield
- Computational Radiology Laboratory, Boston Children's Hospital, Boston, MA 02115, USA; Harvard Medical School, Boston, MA 02115, USA
- Suthirth Vaidya
- Biomedical Imaging Lab, Department of Engineering Design, Indian Institute of Technology, Chennai 600036, India
- Abhijith Chunduru
- Biomedical Imaging Lab, Department of Engineering Design, Indian Institute of Technology, Chennai 600036, India
- Ramanathan Muthuganapathy
- Biomedical Imaging Lab, Department of Engineering Design, Indian Institute of Technology, Chennai 600036, India
- Ganapathy Krishnamurthi
- Biomedical Imaging Lab, Department of Engineering Design, Indian Institute of Technology, Chennai 600036, India
- Andrew Jesson
- Centre For Intelligent Machines, McGill University, Montréal, QC H3A 0E9, Canada
- Tal Arbel
- Centre For Intelligent Machines, McGill University, Montréal, QC H3A 0E9, Canada
- Oskar Maier
- Institute of Medical Informatics, University of Lübeck, 23538 Lübeck, Germany
- Heinz Handels
- Institute of Medical Informatics, University of Lübeck, 23538 Lübeck, Germany
- Leonardo O Iheme
- Bahçeşehir University, Faculty of Engineering and Natural Sciences, 34349 Beşiktaş, Turkey
- Devrim Unay
- Bahçeşehir University, Faculty of Engineering and Natural Sciences, 34349 Beşiktaş, Turkey
- Mohsen Ghafoorian
- Institute for Computing and Information Sciences, Radboud University, 6525 HP Nijmegen, Netherlands
- Bram Platel
- Diagnostic Image Analysis Group, Radboud University Medical Center, 6525 GA Nijmegen, Netherlands
- Ariel Birenbaum
- Department of Electrical Engineering, Tel-Aviv University, Tel-Aviv 69978, Israel
- Hayit Greenspan
- Department of Biomedical Engineering, Tel-Aviv University, Tel-Aviv 69978, Israel
- Pierre-Louis Bazin
- Department of Neurophysics, Max Planck Institute, 04103 Leipzig, Germany
- Peter A Calabresi
- Department of Radiology, The Johns Hopkins School of Medicine, Baltimore, MD 21287, USA
- Lotta M Ellingsen
- Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD 21218, USA; Department of Electrical and Computer Engineering, University of Iceland, 107 Reykjavík, Iceland
- Daniel S Reich
- Department of Radiology, The Johns Hopkins School of Medicine, Baltimore, MD 21287, USA; Translational Neuroradiology Unit, National Institute of Neurological Disorders and Stroke, Bethesda, MD 20892, USA
- Jerry L Prince
- Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD 21218, USA; Department of Computer Science, The Johns Hopkins University, Baltimore, MD 21218, USA
- Dzung L Pham
- CNRM, The Henry M. Jackson Foundation for the Advancement of Military Medicine, Bethesda, MD 20892, USA
19
Narasimhan AK, Lakshmi BS, Santra TS, Rao MSR, Krishnamurthi G. Oxygenated graphene quantum dots (GQDs) synthesized using laser ablation for long-term real-time tracking and imaging. RSC Adv 2017. [DOI: 10.1039/c7ra10702a] [Citation(s) in RCA: 29] [Impact Index Per Article: 4.1] [Indexed: 12/23/2022]
Abstract
Synthesis of graphene quantum dots for single live cell imaging and in vivo fluorescence imaging.
Affiliation(s)
- Ashwin Kumar Narasimhan
- Department of Engineering Design, IIT Madras, Chennai, India-600036; Nanofunctional Materials Technology Centre (NFMTC)
- M. S. Ramachandra Rao
- Nanofunctional Materials Technology Centre (NFMTC), Material Science Research Centre, Department of Physics, IIT Madras, Chennai
20
Pareek G, Acharya UR, Sree SV, Swapna G, Yantri R, Martis RJ, Saba L, Krishnamurthi G, Mallarini G, El-Baz A, Al Ekish S, Beland M, Suri JS. Prostate tissue characterization/classification in 144 patient population using wavelet and higher order spectra features from transrectal ultrasound images. Technol Cancer Res Treat 2013; 12:545-57. [PMID: 23745787 DOI: 10.7785/tcrt.2012.500346] [Citation(s) in RCA: 41] [Impact Index Per Article: 3.7] [Indexed: 11/06/2022]
Abstract
In this work, we propose an on-line computer-aided diagnostic system called "UroImage" that classifies a Transrectal Ultrasound (TRUS) image as cancerous or non-cancerous using non-linear Higher Order Spectra (HOS) features and Discrete Wavelet Transform (DWT) coefficients. At run time, UroImage extracts five significant features (one DWT-based and four HOS-based) from the test image and transforms them with classifier parameters learned from the training dataset to determine the class. We trained and tested six classifiers on a dataset of 144 TRUS images split into training and testing sets, adopting three-fold and ten-fold cross-validation protocols; the ground truth used for training was obtained from biopsy results. With 10-fold cross-validation, the Support Vector Machine and Fuzzy Sugeno classifiers achieved the best classification accuracy of 97.9%, with equally high sensitivity, specificity, and positive predictive value. Our automated system, which achieved more than 95% on all performance measures, can serve as an adjunct tool providing an initial diagnosis for the identification of patients with prostate cancer. The technique is, however, constrained by the limitations of 2D ultrasound-guided biopsy, and we intend to improve it by using 3D TRUS images in the future.
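The three-fold and ten-fold cross-validation protocols mentioned above can be sketched in a few lines. This is a generic illustration, not the authors' pipeline: the toy scoring function below stands in for the actual DWT/HOS feature extraction and classifier training, and the fold construction is one common variant among several.

```python
import random

def k_fold_splits(n_samples, k, seed=0):
    """Partition sample indices into k disjoint folds for cross-validation."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)          # fixed seed for reproducibility
    return [idx[i::k] for i in range(k)]

def cross_validate(n_samples, k, train_and_score):
    """Run k-fold CV: each fold serves exactly once as the held-out test set."""
    folds = k_fold_splits(n_samples, k)
    scores = []
    for i, test_idx in enumerate(folds):
        train_idx = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        scores.append(train_and_score(train_idx, test_idx))
    return sum(scores) / k                    # mean accuracy over folds

# Toy "classifier": scores the fraction of even-numbered samples in each fold.
acc = cross_validate(144, 10, lambda tr, te: sum(j % 2 == 0 for j in te) / len(te))
```

With 144 images and k = 10, each fold holds 14 or 15 images out, mirroring the paper's evaluation protocol.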
Affiliation(s)
- Gyan Pareek
- Section of Minimally Invasive Urologic Surgery, The Warren Alpert Medical School of Brown University, Providence, RI 02905.
21
Acharya UR, Faust O, Sree SV, Alvin APC, Krishnamurthi G, Seabra JCR, Sanches J, Suri JS. Understanding symptomatology of atherosclerotic plaque by image-based tissue characterization. Comput Methods Programs Biomed 2013; 110:66-75. [PMID: 23122720 DOI: 10.1016/j.cmpb.2012.09.008] [Citation(s) in RCA: 52] [Impact Index Per Article: 4.7] [Received: 12/07/2011] [Revised: 09/07/2012] [Accepted: 09/24/2012] [Indexed: 06/01/2023]
Abstract
Characterization of carotid atherosclerosis and classification into either symptomatic or asymptomatic is crucial for diagnosis and treatment planning in a range of cardiovascular diseases. This paper presents a computer-aided diagnosis (CAD) system (Atheromatic) that analyzes ultrasound images and classifies them into symptomatic and asymptomatic. The classification result is based on a combination of discrete wavelet transform, higher order spectra (HOS), and textural features. In this study, we compare support vector machine (SVM) classifiers with different kernels. The classifier with a radial basis function (RBF) kernel achieved an average accuracy of 91.7%, a sensitivity of 97%, and a specificity of 80%. Thus, the selected features and classifier combination can efficiently categorize plaques into symptomatic and asymptomatic classes. Moreover, a novel symptomatic asymptomatic carotid index (SACI), an integrated index based on the significant features, is proposed in this work. Each analyzed ultrasound image yields one SACI number: a high SACI value indicates symptomatic plaque, and a low value indicates asymptomatic plaque. We hope the SACI can support vascular surgeons during routine screening for asymptomatic plaques.
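The integrated-index idea behind the SACI can be shown schematically: several significant features are combined into one number and compared against a decision threshold. The weights, threshold, and feature values below are invented for illustration only; the paper's actual feature combination is not reproduced here.

```python
def integrated_index(features, weights, threshold=0.5):
    """Hypothetical SACI-style index: a weighted sum of normalized features.

    Returns (index_value, label). The weights and threshold are illustrative
    placeholders, not values from the published study.
    """
    assert len(features) == len(weights)
    score = sum(w * f for w, f in zip(weights, features))
    label = "symptomatic" if score > threshold else "asymptomatic"
    return score, label

# Features assumed pre-normalized to [0, 1]; weights are made up.
score, label = integrated_index([0.9, 0.7, 0.8], [0.5, 0.3, 0.2])
```

The appeal of such an index is operational: a clinician tracks a single number per image rather than a vector of texture, DWT, and HOS features.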
Affiliation(s)
- U Rajendra Acharya
- Department of Electrical and Computer Engineering, Ngee Ann Polytechnic, Singapore 599489, Singapore
22
Acharya UR, Faust O, Sree SV, Alvin APC, Krishnamurthi G, Seabra JCR, Sanches J, Suri JS. Atheromatic™: symptomatic vs. asymptomatic classification of carotid ultrasound plaque using a combination of HOS, DWT & texture. Annu Int Conf IEEE Eng Med Biol Soc 2012; 2011:4489-92. [PMID: 22255336 DOI: 10.1109/iembs.2011.6091113] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.3] [Indexed: 12/11/2022]
Abstract
Quantitative characterization of carotid atherosclerosis and classification into either symptomatic or asymptomatic is crucial for diagnosis and treatment planning in a range of cardiovascular diseases. This paper presents a computer-aided diagnosis (CAD) system (Atheromatic™, patented technology from Biomedical Technologies, Inc., CA, USA) that analyzes ultrasound images and classifies them into symptomatic and asymptomatic. The classification result is based on a combination of discrete wavelet transform, higher order spectra, and textural features. In this study, we compare support vector machine (SVM) classifiers with different kernels. The classifier with a radial basis function (RBF) kernel achieved an accuracy of 91.7%, a sensitivity of 97%, and a specificity of 80%. Encouraged by this result, we believe these features can be used to identify the plaque tissue type. We therefore propose an integrated index, a single number called the symptomatic asymptomatic carotid index (SACI), to discriminate symptomatic from asymptomatic carotid ultrasound images. We hope the SACI can be used as an adjunct tool by vascular surgeons for daily screening.
Affiliation(s)
- U Rajendra Acharya
- Department of Electrical and Computer Engineering, Ngee Ann Polytechnic, Singapore 599489.
23
Acharya UR, Sree SV, Ribeiro R, Krishnamurthi G, Marinho RT, Sanches J, Suri JS. Data mining framework for fatty liver disease classification in ultrasound: A hybrid feature extraction paradigm. Med Phys 2012; 39:4255-4264. [DOI: 10.1118/1.4725759] [Citation(s) in RCA: 73] [Impact Index Per Article: 6.1] [Indexed: 08/30/2023]
24
Abstract
We compared image restoration methods [Richardson-Lucy (RL), Wiener, and Next-image] with measured "scatter" point-spread functions, for removing subsurface fluorescence from section-and-image cryo-image volumes. All methods removed haze, delineated single cells from clusters, and improved visualization, but RL best represented structures. Contrast-to-noise and contrast-to-background improvements from RL and Wiener were comparable and 35% better than Next-image. Concerning detection of labeled cells, ROC analyses showed RL ≈ Wiener > Next-image >> no processing. Next-image was faster than the other methods and less prone to image processing artifacts. RL is recommended for the best restoration of the shape and size of fluorescent structures.
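Richardson-Lucy restoration, recommended above, iteratively multiplies the current estimate by a correction term formed from the ratio of the observed image to the re-blurred estimate. A minimal 1-D sketch with a toy symmetric PSF (not the measured scatter PSFs used in the study, and ignoring the noise handling a real pipeline needs):

```python
def convolve(x, h):
    """'Same'-size convolution of signal x with a centred odd-length kernel h."""
    r = len(h) // 2
    out = []
    for i in range(len(x)):
        s = 0.0
        for k, hk in enumerate(h):
            j = i + k - r
            if 0 <= j < len(x):
                s += hk * x[j]
        out.append(s)
    return out

def richardson_lucy(y, h, n_iter=100, eps=1e-12):
    """RL deconvolution of 1-D data y with a symmetric, normalized PSF h."""
    x = [1.0] * len(y)                # flat non-negative initial estimate
    h_flip = h[::-1]                  # mirrored PSF (equals h when symmetric)
    for _ in range(n_iter):
        blurred = convolve(x, h)      # re-blur the current estimate
        ratio = [yi / (bi + eps) for yi, bi in zip(y, blurred)]
        correction = convolve(ratio, h_flip)
        x = [xi * ci for xi, ci in zip(x, correction)]   # multiplicative update
    return x

psf = [0.25, 0.5, 0.25]               # toy symmetric blur kernel (sums to 1)
truth = [0.0, 0.0, 0.0, 10.0, 0.0, 0.0, 0.0]   # a single bright "cell"
observed = convolve(truth, psf)       # haze spreads the point source
restored = richardson_lucy(observed, psf)
```

The multiplicative update keeps the estimate non-negative, which is one reason RL preserves the shape and size of compact fluorescent structures better than linear filters.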
Affiliation(s)
- Ganapathy Krishnamurthi
- 10900 Euclid Avenue, Wickenden Bldg, School of Biomedical Engineering, Cleveland, OH 44106, USA
- Charlie Y. Wang
- 10900 Euclid Avenue, Wickenden Bldg, School of Biomedical Engineering, Cleveland, OH 44106, USA; Department of Radiology, Case Western Reserve University and Case Medical Center, Cleveland, OH 44106, USA
- Grant Steyer
- 10900 Euclid Avenue, Wickenden Bldg, School of Biomedical Engineering, Cleveland, OH 44106, USA
- David L. Wilson
- 10900 Euclid Avenue, Wickenden Bldg, School of Biomedical Engineering, Cleveland, OH 44106, USA; Department of Radiology, Case Western Reserve University and Case Medical Center, Cleveland, OH 44106, USA
25
Krishnamurthi G, Stantz KM, Steinmetz R, Gattone VH, Cao M, Hutchins GD, Liang Y. Functional imaging in small animals using X-ray computed tomography--study of physiologic measurement reproducibility. IEEE Trans Med Imaging 2005; 24:832-43. [PMID: 16011312 DOI: 10.1109/tmi.2005.851385] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.4] [Indexed: 05/03/2023]
Abstract
X-ray computed tomography (CT) has traditionally been used for morphologic analysis and, more recently, for physiology imaging. This paper seeks to demonstrate functional CT as an effective tool for monitoring changes in tissue physiology associated with disease processes and with cellular- and molecular-level therapeutic processes. We investigated the effect of noise and sampling time on the uncertainty of tissue physiologic parameters. A whole-body compartmental model of the mouse was formulated to simulate tissue time-density curves and study the deviation of tissue physiologic parameters from their true values. These results were then used to determine appropriate scanning protocols for the experimental studies. Dynamic contrast-enhanced CT (DCE-CT) was performed in mice following injection of a hydrophilic iodinated contrast agent (CA) at three different injection rates, namely 0.5 ml/min, 1.0 ml/min, and 2.0 ml/min. These experiments probed the Nyquist sampling limit for reproducibility of tissue physiologic parameters. Separate experiments were performed with three mice at four different X-ray tube currents corresponding to different image noise values. A two-compartment model (2CM) was formulated to describe the contrast kinetics in the kidney cortex, and three variants were implemented, namely the 4-parameter (4P), 5-parameter (5P), and 6-parameter (6P) models. The tissue kinetics were fitted to the models using the Levenberg-Marquardt algorithm, implemented in the IDL (RSI Inc.) programming language, to minimize the weighted sum of squares. The relevant tissue physiologic parameters extracted from the models are the renal blood flow (RBF), glomerular filtration rate (GFR), fractional plasma volume, fractional tubular volumes, and urine formation rates. The experimental results indicate that the deviation of the tissue physiologic parameters is within the limits required for tracking disease physiology in vivo, and thus small-animal functional X-ray CT should be able to determine changes in tissue physiology in vivo.
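The compartmental-modelling approach can be illustrated by forward-simulating a generic two-compartment tissue enhancement curve with simple Euler integration. The rate constants and the gamma-variate arterial input function below are made up for illustration; this is not the paper's 4P/5P/6P kidney-cortex model, which additionally separates plasma, tubular, and urine-formation terms.

```python
import math

def two_compartment_curve(k_trans, k_ep, t_end=60.0, dt=0.05):
    """Forward-simulate a generic two-compartment tissue curve by Euler
    integration of dC_t/dt = k_trans * C_a(t) - k_ep * C_t(t).

    k_trans, k_ep, and the arterial input C_a are illustrative only.
    """
    def c_a(t):
        # Made-up gamma-variate bolus standing in for a measured input function.
        return (t ** 2) * math.exp(-t / 4.0)

    c_t, curve, t = 0.0, [], 0.0
    while t < t_end:
        c_t += dt * (k_trans * c_a(t) - k_ep * c_t)   # uptake minus washout
        curve.append(c_t)
        t += dt
    return curve

# Hypothetical rate constants; the curve rises with the bolus, then washes out.
curve = two_compartment_curve(k_trans=0.25, k_ep=0.1)
```

Fitting such a forward model to measured time-density curves (e.g. with a Levenberg-Marquardt least-squares routine, as the paper does in IDL) is what turns DCE-CT data into physiologic parameters.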
26
Singh HD, Krishnamurthi G. Mean expiratory flow volume curve. Indian J Physiol Pharmacol 1981; 25:85-88. [PMID: 7275272] [Indexed: 05/21/2023]