1
Mahajan A, Gurukrishna B, Wadhwa S, Agarwal U, Baid U, Talbar S, Janu AK, Patil V, Noronha V, Mummudi N, Tibdewal A, Agarwal JP, Yadav S, Kumar Kaushal R, Puranik A, Purandare N, Prabhash K. Deep learning based automated epidermal growth factor receptor and anaplastic lymphoma kinase status prediction of brain metastasis in non-small cell lung cancer. Explor Target Antitumor Ther 2023; 4:657-668. [PMID: 37745691 PMCID: PMC10511818 DOI: 10.37349/etat.2023.00158] [Received: 12/19/2022] [Accepted: 04/13/2023]
Abstract
Aim: The aim of this study was to investigate the feasibility of developing a deep learning (DL) algorithm for classifying brain metastases from non-small cell lung cancer (NSCLC) into epidermal growth factor receptor (EGFR) mutation and anaplastic lymphoma kinase (ALK) rearrangement groups, and to compare its accuracy with classification based on semantic imaging features.
Methods: A dataset of 117 patients from 2014 to 2018 was analysed, of which 33 patients were EGFR positive, 43 were ALK positive, and 41 were negative for either alteration. The EfficientNet convolutional neural network (CNN) architecture was used to study classification accuracy on the T1-weighted (T1W), T2-weighted (T2W), T1W post-contrast (T1post), and fluid-attenuated inversion recovery (FLAIR) magnetic resonance imaging (MRI) sequences. The dataset was divided into 80% training and 20% testing. Associations between mutation status and semantic features (specifically sex, smoking history, EGFR mutation and ALK rearrangement status, extracranial metastasis, performance status, and imaging variables of brain metastasis) were analysed using descriptive analysis [chi-square test (χ2)] and univariate and multivariate logistic regression, assuming a 95% confidence interval (CI).
Results: In this study of 117 patients, semantic analysis showed that 79.2% of ALK-positive patients were non-smokers, compared with the double-negative group (P = 0.03). There was a 10-fold increase in ALK positivity relative to EGFR positivity among patients with ring-enhancing lesions (P = 0.015), and a 6.4-fold increase in ALK positivity relative to the double-negative group among patients with meningeal involvement (P = 0.004). Using the EfficientNet CNN DL model, the study achieved 76% accuracy in classifying ALK rearrangement and EGFR mutation without manual segmentation of metastatic lesions. Analysis of the manually segmented dataset improved the model's accuracy to 89%.
Conclusions: Both the semantic features and the DL model showed comparable accuracy in classifying EGFR mutation and ALK rearrangement. Both methods can be used clinically to predict mutation status while biopsy or genetic testing is undertaken.
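The semantic analysis above rests on chi-square tests of association between mutation status and categorical features. As a rough illustration only (the 2x2 counts below are invented placeholders, not the study's data), the Pearson statistic for a 2x2 table can be computed directly from the margins:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction) for the
    2x2 contingency table [[a, b], [c, d]]:
    chi2 = n * (ad - bc)^2 / ((a+b)(c+d)(a+c)(b+d))."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# Hypothetical counts: rows = ALK-positive / double-negative patients,
# columns = non-smoker / smoker. NOT the study's actual data.
chi2 = chi_square_2x2(34, 9, 20, 21)
print(round(chi2, 2))  # exceeds 3.84, the 5% critical value at 1 dof
```

A statistic above 3.84 (the chi-square critical value for one degree of freedom) corresponds to P < 0.05, the significance level used in the abstract.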
Affiliation(s)
- Abhishek Mahajan
- Clatterbridge Centre for Oncology NHS Foundation Trust, L7 8YA Liverpool, UK
- Department of Radiodiagnosis, Tata Memorial Hospital, Parel, Mumbai 400012, Maharashtra, India
- Gurukrishna B
- Department of Radiodiagnosis, Tata Memorial Hospital, Parel, Mumbai 400012, Maharashtra, India
- Shweta Wadhwa
- Department of Radiodiagnosis, Tata Memorial Hospital, Parel, Mumbai 400012, Maharashtra, India
- Ujjwal Agarwal
- Department of Radiodiagnosis, Tata Memorial Hospital, Parel, Mumbai 400012, Maharashtra, India
- Ujjwal Baid
- Department of Electronics and Telecommunication Engineering, SGGS Institute of Engineering and Technology, Nanded 431606, Maharashtra, India
- Sanjay Talbar
- Department of Electronics and Telecommunication Engineering, SGGS Institute of Engineering and Technology, Nanded 431606, Maharashtra, India
- Amit Kumar Janu
- Department of Radiodiagnosis, Tata Memorial Hospital, Parel, Mumbai 400012, Maharashtra, India
- Vijay Patil
- Department of Medical Oncology, Tata Memorial Hospital, Parel, Mumbai 400012, Maharashtra, India
- Vanita Noronha
- Department of Medical Oncology, Tata Memorial Hospital, Parel, Mumbai 400012, Maharashtra, India
- Naveen Mummudi
- Department of Radiation Oncology, Tata Memorial Hospital, Parel, Mumbai 400012, Maharashtra, India
- Anil Tibdewal
- Department of Radiation Oncology, Tata Memorial Hospital, Parel, Mumbai 400012, Maharashtra, India
- JP Agarwal
- Department of Radiation Oncology, Tata Memorial Hospital, Parel, Mumbai 400012, Maharashtra, India
- Subash Yadav
- Department of Pathology, Tata Memorial Hospital, Parel, Mumbai 400012, Maharashtra, India
- Rajiv Kumar Kaushal
- Department of Pathology, Tata Memorial Hospital, Parel, Mumbai 400012, Maharashtra, India
- Ameya Puranik
- Department of Nuclear Medicine, Tata Memorial Hospital, Parel, Mumbai 400012, Maharashtra, India
- Nilendu Purandare
- Department of Nuclear Medicine, Tata Memorial Hospital, Parel, Mumbai 400012, Maharashtra, India
- Kumar Prabhash
- Department of Medical Oncology, Tata Memorial Hospital, Parel, Mumbai 400012, Maharashtra, India
2
Innani S, Dutande P, Baid U, Pokuri V, Bakas S, Talbar S, Baheti B, Guntuku SC. Generative adversarial networks based skin lesion segmentation. Sci Rep 2023; 13:13467. [PMID: 37596306 PMCID: PMC10439152 DOI: 10.1038/s41598-023-39648-8] [Received: 04/23/2023] [Accepted: 07/28/2023]
Abstract
Skin cancer is a serious condition that requires accurate diagnosis and treatment. One way to assist clinicians in this task is with computer-aided diagnosis tools that automatically segment skin lesions from dermoscopic images. We propose a novel adversarial learning-based framework called Efficient-GAN (EGAN) that uses an unsupervised generative network to generate accurate lesion masks. It consists of a generator module with a top-down squeeze-excitation-based compound scaled path and an asymmetric lateral connection-based bottom-up path, and a discriminator module that distinguishes between original and synthetic masks. A morphology-based smoothing loss is also implemented to encourage the network to create smooth semantic boundaries of lesions. The framework is evaluated on the International Skin Imaging Collaboration Lesion Dataset. It outperforms the current state-of-the-art skin lesion segmentation approaches with a Dice coefficient, Jaccard similarity, and accuracy of 90.1%, 83.6%, and 94.5%, respectively. We also design a lightweight segmentation framework called Mobile-GAN (MGAN) that achieves performance comparable to EGAN with an order of magnitude fewer training parameters, resulting in faster inference times for settings with low compute resources.
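The Dice coefficient and Jaccard similarity reported above are the standard overlap metrics for binary segmentation masks. A minimal sketch of both (illustration only, not the paper's evaluation code):

```python
import numpy as np

def dice(pred, target):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|)."""
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum())

def jaccard(pred, target):
    """Jaccard similarity (intersection over union): |A ∩ B| / |A ∪ B|."""
    inter = np.logical_and(pred, target).sum()
    return inter / np.logical_or(pred, target).sum()

pred   = np.array([1, 1, 1, 0, 0], dtype=bool)  # toy predicted mask
target = np.array([1, 1, 0, 0, 1], dtype=bool)  # toy ground-truth mask
print(dice(pred, target), jaccard(pred, target))  # 0.666..., 0.5
```

For a single mask pair the two metrics are linked by the identity J = D / (2 - D), which is why papers often report both.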
Affiliation(s)
- Shubham Innani
- Center of Excellence in Signal and Image Processing, Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, Maharashtra, India.
- Prasad Dutande
- Center of Excellence in Signal and Image Processing, Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, Maharashtra, India
- Ujjwal Baid
- Center of Excellence in Signal and Image Processing, Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, Maharashtra, India
- Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Venu Pokuri
- Department of Computer and Information Science, University of Pennsylvania, Philadelphia, USA
- Spyridon Bakas
- Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Sanjay Talbar
- Center of Excellence in Signal and Image Processing, Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, Maharashtra, India
- Bhakti Baheti
- Center of Excellence in Signal and Image Processing, Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, Maharashtra, India
- Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Sharath Chandra Guntuku
- Department of Computer and Information Science, University of Pennsylvania, Philadelphia, USA
3
Mehta R, Filos A, Baid U, Sako C, McKinley R, Rebsamen M, Dätwyler K, Meier R, Radojewski P, Murugesan GK, Nalawade S, Ganesh C, Wagner B, Yu FF, Fei B, Madhuranthakam AJ, Maldjian JA, Daza L, Gómez C, Arbeláez P, Dai C, Wang S, Reynaud H, Mo Y, Angelini E, Guo Y, Bai W, Banerjee S, Pei L, AK M, Rosas-González S, Zemmoura I, Tauber C, Vu MH, Nyholm T, Löfstedt T, Ballestar LM, Vilaplana V, McHugh H, Maso Talou G, Wang A, Patel J, Chang K, Hoebel K, Gidwani M, Arun N, Gupta S, Aggarwal M, Singh P, Gerstner ER, Kalpathy-Cramer J, Boutry N, Huard A, Vidyaratne L, Rahman MM, Iftekharuddin KM, Chazalon J, Puybareau E, Tochon G, Ma J, Cabezas M, Llado X, Oliver A, Valencia L, Valverde S, Amian M, Soltaninejad M, Myronenko A, Hatamizadeh A, Feng X, Dou Q, Tustison N, Meyer C, Shah NA, Talbar S, Weber MA, Mahajan A, Jakab A, Wiest R, Fathallah-Shaykh HM, Nazeri A, Milchenko M, Marcus D, Kotrotsou A, Colen R, Freymann J, Kirby J, Davatzikos C, Menze B, Bakas S, Gal Y, Arbel T. QU-BraTS: MICCAI BraTS 2020 Challenge on Quantifying Uncertainty in Brain Tumor Segmentation - Analysis of Ranking Scores and Benchmarking Results. J Mach Learn Biomed Imaging 2022; 2022:https://www.melba-journal.org/papers/2022:026.html. [PMID: 36998700 PMCID: PMC10060060]
Abstract
Deep learning (DL) models have provided state-of-the-art performance in various medical imaging benchmarking challenges, including the Brain Tumor Segmentation (BraTS) challenges. However, the task of focal pathology multi-compartment segmentation (e.g., tumor and lesion sub-regions) is particularly challenging, and potential errors hinder translating DL models into clinical workflows. Quantifying the reliability of DL model predictions in the form of uncertainties could enable clinical review of the most uncertain regions, thereby building trust and paving the way toward clinical translation. Several uncertainty estimation methods have recently been introduced for DL medical image segmentation tasks. Developing scores to evaluate and compare the performance of uncertainty measures will assist the end-user in making more informed decisions. In this study, we explore and evaluate a score developed during the BraTS 2019 and BraTS 2020 task on uncertainty quantification (QU-BraTS) and designed to assess and rank uncertainty estimates for brain tumor multi-compartment segmentation. This score (1) rewards uncertainty estimates that produce high confidence in correct assertions and those that assign low confidence levels at incorrect assertions, and (2) penalizes uncertainty measures that lead to a higher percentage of under-confident correct assertions. We further benchmark the segmentation uncertainties generated by 14 independent participating teams of QU-BraTS 2020, all of which also participated in the main BraTS segmentation task. Overall, our findings confirm the importance and complementary value that uncertainty estimates provide to segmentation algorithms, highlighting the need for uncertainty quantification in medical image analyses. Finally, in favor of transparency and reproducibility, our evaluation code is made publicly available at https://github.com/RagMeh11/QU-BraTS.
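The score's first criterion (rewarding high confidence on correct voxels and low confidence on errors) is commonly probed by filtering out voxels above an uncertainty threshold and re-computing Dice on what remains. A simplified sketch of that filtering idea, not the official QU-BraTS evaluation code (which is at the repository linked above):

```python
import numpy as np

def filtered_dice(pred, target, uncertainty, tau):
    """Dice computed only over voxels whose uncertainty is <= tau.
    A well-calibrated model concentrates high uncertainty on its errors,
    so Dice should rise (or at least hold) as tau decreases."""
    keep = uncertainty <= tau
    p, t = pred[keep], target[keep]
    denom = p.sum() + t.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(p, t).sum() / denom

pred        = np.array([1, 1, 1, 0, 0], dtype=bool)   # toy prediction
target      = np.array([1, 1, 0, 0, 1], dtype=bool)   # toy ground truth
uncertainty = np.array([5, 10, 90, 5, 95])            # high where the model errs
print(filtered_dice(pred, target, uncertainty, 100))  # all voxels kept: 0.666...
print(filtered_dice(pred, target, uncertainty, 50))   # errors filtered out: 1.0
```

Sweeping tau and also tracking how many *correct* voxels get filtered away captures the score's second criterion, the penalty for under-confident correct assertions.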
Affiliation(s)
- Raghav Mehta
- Centre for Intelligent Machines (CIM), McGill University, Montreal, QC, Canada
- Angelos Filos
- Oxford Applied and Theoretical Machine Learning (OATML) Group, University of Oxford, Oxford, England
- Ujjwal Baid
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Department of Radiology, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, PA, USA
- Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Chiharu Sako
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Department of Radiology, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, PA, USA
- Richard McKinley
- Support Center for Advanced Neuroimaging (SCAN), University Institute of Diagnostic and Interventional Neuroradiology, University of Bern, Inselspital, Bern University Hospital, Bern, Switzerland
- Michael Rebsamen
- Support Center for Advanced Neuroimaging (SCAN), University Institute of Diagnostic and Interventional Neuroradiology, University of Bern, Inselspital, Bern University Hospital, Bern, Switzerland
- Katrin Dätwyler
- Support Center for Advanced Neuroimaging (SCAN), University Institute of Diagnostic and Interventional Neuroradiology, University of Bern, Inselspital, Bern University Hospital, Bern, Switzerland
- Human Performance Lab, Schulthess Clinic, Zurich, Switzerland
- Piotr Radojewski
- Support Center for Advanced Neuroimaging (SCAN), University Institute of Diagnostic and Interventional Neuroradiology, University of Bern, Inselspital, Bern University Hospital, Bern, Switzerland
- Sahil Nalawade
- Department of Radiology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Chandan Ganesh
- Department of Radiology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Ben Wagner
- Department of Radiology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Fang F. Yu
- Department of Radiology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Baowei Fei
- Department of Bioengineering, University of Texas at Dallas, Texas, USA
- Ananth J. Madhuranthakam
- Department of Radiology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Advanced Imaging Research Center, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Joseph A. Maldjian
- Department of Radiology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Advanced Imaging Research Center, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Laura Daza
- Universidad de los Andes, Bogotá, Colombia
- Chengliang Dai
- Data Science Institute, Imperial College London, London, UK
- Shuo Wang
- Data Science Institute, Imperial College London, London, UK
- Yuanhan Mo
- Data Science Institute, Imperial College London, London, UK
- Elsa Angelini
- NIHR Imperial BRC, ITMAT Data Science Group, Imperial College London, London, UK
- Yike Guo
- Data Science Institute, Imperial College London, London, UK
- Wenjia Bai
- Data Science Institute, Imperial College London, London, UK
- Department of Brain Sciences, Imperial College London, London, UK
- Subhashis Banerjee
- Machine Intelligence Unit, Indian Statistical Institute, Kolkata, India
- Department of CSE, University of Calcutta, Kolkata, India
- Division of Visual Information and Interaction (Vi2), Department of Information Technology, Uppsala University, Uppsala, Sweden
- Linmin Pei
- Department of Diagnostic Radiology, The University of Pittsburgh Medical Center, Pittsburgh, PA, USA
- Murat AK
- Department of Diagnostic Radiology, The University of Pittsburgh Medical Center, Pittsburgh, PA, USA
- Ilyess Zemmoura
- UMR U1253 iBrain, Université de Tours, Inserm, Tours, France
- Neurosurgery department, CHRU de Tours, Tours, France
- Clovis Tauber
- UMR U1253 iBrain, Université de Tours, Inserm, Tours, France
- Minh H. Vu
- Department of Radiation Sciences, Umeå University, Umeå, Sweden
- Tufve Nyholm
- Department of Radiation Sciences, Umeå University, Umeå, Sweden
- Tommy Löfstedt
- Department of Computing Science, Umeå University, Umeå, Sweden
- Laura Mora Ballestar
- Signal Theory and Communications Department, Universitat Politècnica de Catalunya, BarcelonaTech, Barcelona, Spain
- Veronica Vilaplana
- Signal Theory and Communications Department, Universitat Politècnica de Catalunya, BarcelonaTech, Barcelona, Spain
- Hugh McHugh
- Faculty of Medical and Health Sciences, University of Auckland, Auckland, New Zealand
- Radiology Department, Auckland City Hospital, Auckland, New Zealand
- Alan Wang
- Faculty of Medical and Health Sciences, University of Auckland, Auckland, New Zealand
- Auckland Bioengineering Institute, University of Auckland, New Zealand
- Jay Patel
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
- Massachusetts Institute of Technology, Cambridge, MA, USA
- Ken Chang
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
- Massachusetts Institute of Technology, Cambridge, MA, USA
- Katharina Hoebel
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
- Massachusetts Institute of Technology, Cambridge, MA, USA
- Mishka Gidwani
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
- Nishanth Arun
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
- Sharut Gupta
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
- Mehak Aggarwal
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
- Praveer Singh
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
- Elizabeth R. Gerstner
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
- Jayashree Kalpathy-Cramer
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
- Nicolas Boutry
- EPITA Research and Development Laboratory (LRDE), France
- Alexis Huard
- EPITA Research and Development Laboratory (LRDE), France
- Lasitha Vidyaratne
- Vision Lab, Electrical and Computer Engineering, Old Dominion University, Norfolk, VA 23529, USA
- Md Monibor Rahman
- Vision Lab, Electrical and Computer Engineering, Old Dominion University, Norfolk, VA 23529, USA
- Khan M. Iftekharuddin
- Vision Lab, Electrical and Computer Engineering, Old Dominion University, Norfolk, VA 23529, USA
- Joseph Chazalon
- EPITA Research and Development Laboratory (LRDE), Le Kremlin-Biĉetre, France
- Elodie Puybareau
- EPITA Research and Development Laboratory (LRDE), Le Kremlin-Biĉetre, France
- Guillaume Tochon
- EPITA Research and Development Laboratory (LRDE), Le Kremlin-Biĉetre, France
- Jun Ma
- School of Science, Nanjing University of Science and Technology
- Mariano Cabezas
- Research Institute of Computer Vision and Robotics, University of Girona, Spain
- Xavier Llado
- Research Institute of Computer Vision and Robotics, University of Girona, Spain
- Arnau Oliver
- Research Institute of Computer Vision and Robotics, University of Girona, Spain
- Liliana Valencia
- Research Institute of Computer Vision and Robotics, University of Girona, Spain
- Sergi Valverde
- Research Institute of Computer Vision and Robotics, University of Girona, Spain
- Mehdi Amian
- Department of Electrical and Computer Engineering, University of Tehran, Iran
- Xue Feng
- Biomedical Engineering, University of Virginia, Charlottesville, USA
- Quan Dou
- Biomedical Engineering, University of Virginia, Charlottesville, USA
- Nicholas Tustison
- Radiology and Medical Imaging, University of Virginia, Charlottesville, USA
- Craig Meyer
- Biomedical Engineering, University of Virginia, Charlottesville, USA
- Radiology and Medical Imaging, University of Virginia, Charlottesville, USA
- Nisarg A. Shah
- Department of Electrical Engineering, Indian Institute of Technology - Jodhpur, Jodhpur, India
- Sanjay Talbar
- SGGS Institute of Engineering and Technology, Nanded, India
- Marc-André Weber
- Institute of Diagnostic and Interventional Radiology, Pediatric Radiology and Neuroradiology, University Medical Center Rostock, Rostock, Germany
- Abhishek Mahajan
- Tata Memorial Centre, Homi Bhabha National Institute, Mumbai, India
- Andras Jakab
- Center for MR-Research, University Children’s Hospital Zurich, Zurich, Switzerland
- Roland Wiest
- Support Center for Advanced Neuroimaging (SCAN), University Institute of Diagnostic and Interventional Neuroradiology, University of Bern, Inselspital, Bern University Hospital, Bern, Switzerland
- Institute for Surgical Technology and Biomechanics, University of Bern, Bern, Switzerland
- Arash Nazeri
- Department of Radiology, Washington University, St. Louis, MO, USA
- Mikhail Milchenko
- Department of Radiology, Washington University, St. Louis, MO, USA
- Neuroimaging Informatics and Analysis Center, Washington University, St. Louis, MO, USA
- Daniel Marcus
- Department of Radiology, Washington University, St. Louis, MO, USA
- Neuroimaging Informatics and Analysis Center, Washington University, St. Louis, MO, USA
- Aikaterini Kotrotsou
- Department of Diagnostic Radiology, University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Rivka Colen
- Department of Diagnostic Radiology, University of Texas MD Anderson Cancer Center, Houston, TX, USA
- John Freymann
- Leidos Biomedical Research, Inc, Frederick National Laboratory for Cancer Research, Frederick, MD, USA
- Cancer Imaging Program, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Justin Kirby
- Leidos Biomedical Research, Inc, Frederick National Laboratory for Cancer Research, Frederick, MD, USA
- Cancer Imaging Program, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Christos Davatzikos
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Department of Radiology, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, PA, USA
- Bjoern Menze
- Department of Quantitative Biomedicine, University of Zurich, Zurich, Switzerland
- Department of Informatics, Technical University of Munich, Munich, Germany
- Spyridon Bakas
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Department of Radiology, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, PA, USA
- Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Yarin Gal
- Oxford Applied and Theoretical Machine Learning (OATML) Group, University of Oxford, Oxford, England
- Tal Arbel
- Centre for Intelligent Machines (CIM), McGill University, Montreal, QC, Canada
- MILA - Quebec Artificial Intelligence Institute, Montreal, QC, Canada
4
Dutande P, Baid U, Talbar S. Deep Residual Separable Convolutional Neural Network for lung tumor segmentation. Comput Biol Med 2022; 141:105161. [PMID: 34999468 DOI: 10.1016/j.compbiomed.2021.105161] [Received: 04/15/2021] [Revised: 12/19/2021] [Accepted: 12/19/2021]
Abstract
Lung cancer is one of the deadliest types of cancer. Computed Tomography (CT) is a widely used technique to detect tumors inside the lungs, and delineation of such tumors is essential for analysis and treatment. With advances in hardware, Machine Learning and Deep Learning methods are outperforming traditional methods in medical imaging. To delineate lung tumors, we propose a deep learning-based methodology that includes a maximum intensity projection-based pre-processing method, two novel deep learning networks, and an ensemble strategy. The two proposed networks, Deep Residual Separable Convolutional Neural Networks 1 and 2 (DRS-CNN1 and DRS-CNN2), achieved better performance than the state-of-the-art U-net and other segmentation networks. For a fair comparison, we evaluated all networks on the Medical Segmentation Decathlon (MSD) and StructSeg 2019 datasets. DRS-CNN2 achieved a mean Dice Similarity Coefficient (DSC) of 0.649, a mean 95th-percentile Hausdorff Distance (HD95) of 18.26, a mean Sensitivity of 0.737, and a mean Precision of 0.765 on independent test sets.
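HD95, the 95th-percentile Hausdorff distance reported above, is a standard boundary-distance metric: it measures how far the two segmentation boundaries stray from each other while discarding the worst 5% of outliers. A brute-force sketch on point sets (illustration only; real pipelines first extract the surface voxels of each mask):

```python
import numpy as np

def hd95(a, b):
    """95th-percentile symmetric Hausdorff distance between point sets
    a (n x d) and b (m x d). Brute-force O(n*m) pairwise distances."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    a_to_b = d.min(axis=1)  # nearest-neighbour distance for each point of a
    b_to_a = d.min(axis=0)  # and for each point of b
    return max(np.percentile(a_to_b, 95), np.percentile(b_to_a, 95))

a = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])  # toy contour
b = a + np.array([0.0, 1.0])                        # same contour shifted by 1
print(hd95(a, b))  # 1.0
```

Unlike Dice, which measures volume overlap, HD95 is sensitive to boundary errors, which is why the two are reported together.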
Affiliation(s)
- Prasad Dutande
- Center of Excellence in Signal and Image Processing, SGGS Institute of Engineering and Technology, Nanded, India.
- Ujjwal Baid
- Center of Excellence in Signal and Image Processing, SGGS Institute of Engineering and Technology, Nanded, India
- Sanjay Talbar
- Center of Excellence in Signal and Image Processing, SGGS Institute of Engineering and Technology, Nanded, India
5
Verma R, Kumar N, Patil A, Kurian NC, Rane S, Graham S, Vu QD, Zwager M, Raza SEA, Rajpoot N, Wu X, Chen H, Huang Y, Wang L, Jung H, Brown GT, Liu Y, Liu S, Jahromi SAF, Khani AA, Montahaei E, Baghshah MS, Behroozi H, Semkin P, Rassadin A, Dutande P, Lodaya R, Baid U, Baheti B, Talbar S, Mahbod A, Ecker R, Ellinger I, Luo Z, Dong B, Xu Z, Yao Y, Lv S, Feng M, Xu K, Zunair H, Hamza AB, Smiley S, Yin TK, Fang QR, Srivastava S, Mahapatra D, Trnavska L, Zhang H, Narayanan PL, Law J, Yuan Y, Tejomay A, Mitkari A, Koka D, Ramachandra V, Kini L, Sethi A. MoNuSAC2020: A Multi-Organ Nuclei Segmentation and Classification Challenge. IEEE Trans Med Imaging 2021; 40:3413-3423. [PMID: 34086562 DOI: 10.1109/tmi.2021.3085712]
Abstract
Detecting various types of cells in and around the tumor matrix holds special significance in characterizing the tumor micro-environment for cancer prognostication and research. Automating the tasks of detecting, segmenting, and classifying nuclei can free up the pathologists' time for higher-value tasks and reduce errors due to fatigue and subjectivity. To encourage the computer vision research community to develop and test algorithms for these tasks, we prepared a large and diverse dataset of nucleus boundary annotations and class labels. The dataset has over 46,000 nuclei from 37 hospitals, 71 patients, four organs, and four nucleus types. We also organized a challenge around this dataset as a satellite event at the International Symposium on Biomedical Imaging (ISBI) in April 2020. The challenge saw wide participation from across the world, and the top methods were able to match inter-human concordance for the challenge metric. In this paper, we summarize the dataset and the key findings of the challenge, including the commonalities and differences between the methods developed by various participants. We have released the MoNuSAC2020 dataset to the public.
6

7
Abstract
Accurate and automatic lung nodule segmentation is of prime importance for lung cancer analysis and is a fundamental step in computer-aided diagnosis (CAD) systems. However, the variety of nodule types and their visual similarity to the surrounding chest region make it challenging to develop a lung nodule segmentation algorithm. In this paper, we propose a Deep Deconvolutional Residual Network (DDRN) based approach for lung nodule segmentation from CT images. Our approach is based on two key insights. First, the proposed deep deconvolutional residual network is trained end to end and captures the diverse variety of nodules from 2D sets of CT images. Second, a summation-based long skip connection from the convolutional to the deconvolutional part of the network preserves the spatial information lost during the pooling operation and captures full-resolution features. The proposed method is evaluated on the publicly available Lung Image Database Consortium and Image Database Resource Initiative (LIDC/IDRI) dataset. Results indicate that our proposed method can successfully segment nodules, achieving an average Dice score of 94.97% and a Jaccard index of 88.68%.
Affiliation(s)
- Ganesh Singadkar
- Department of Electronics & Telecommunication Engineering, Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, Maharashtra, India.
- Abhishek Mahajan
- Department of Radio-diagnosis, Tata Memorial Hospital, Mumbai, India
- Meenakshi Thakur
- Department of Radio-diagnosis, Tata Memorial Hospital, Mumbai, India
- Sanjay Talbar
- Department of Electronics & Telecommunication Engineering, Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, Maharashtra, India
8
Pawar M, Talbar S. Local entropy maximization based image fusion for contrast enhancement of mammogram. Journal of King Saud University - Computer and Information Sciences 2021. [DOI: 10.1016/j.jksuci.2018.02.008]
9
Thakur S, Doshi J, Pati S, Rathore S, Sako C, Bilello M, Ha SM, Shukla G, Flanders A, Kotrotsou A, Milchenko M, Liem S, Alexander GS, Lombardo J, Palmer JD, LaMontagne P, Nazeri A, Talbar S, Kulkarni U, Marcus D, Colen R, Davatzikos C, Erus G, Bakas S. Brain extraction on MRI scans in presence of diffuse glioma: Multi-institutional performance evaluation of deep learning methods and robust modality-agnostic training. Neuroimage 2020; 220:117081. [PMID: 32603860 PMCID: PMC7597856 DOI: 10.1016/j.neuroimage.2020.117081] [Received: 01/23/2020] [Revised: 05/24/2020] [Accepted: 06/19/2020]
Abstract
Brain extraction, or skull-stripping, is an essential pre-processing step in neuro-imaging that has a direct impact on the quality of all subsequent processing and analysis steps. It is also a key requirement in multi-institutional collaborations to comply with privacy-preserving regulations. Existing automated methods, including Deep Learning (DL) based methods that have obtained state-of-the-art results in recent years, have primarily targeted brain extraction without considering pathologically-affected brains. Accordingly, they perform sub-optimally when applied on magnetic resonance imaging (MRI) brain scans with apparent pathologies such as brain tumors. Furthermore, existing methods focus on using only T1-weighted MRI scans, even though multi-parametric MRI (mpMRI) scans are routinely acquired for patients with suspected brain tumors. In this study, we present a comprehensive performance evaluation of recent deep learning architectures for brain extraction, training models on mpMRI scans of pathologically-affected brains, with a particular focus on seeking a practically-applicable, low computational footprint approach, generalizable across multiple institutions, further facilitating collaborations. We identified a large retrospective multi-institutional dataset of n=3340 mpMRI brain tumor scans, with manually-inspected and approved gold-standard segmentations, acquired during standard clinical practice under varying acquisition protocols, both from private institutional data and public (TCIA) collections. To facilitate optimal utilization of rich mpMRI data, we further introduce and evaluate a novel "modality-agnostic training" technique that can be applied using any available modality, without need for model retraining. Our results indicate that the modality-agnostic approach obtains accurate results, providing a generic and practical tool for brain extraction on scans with brain tumors.
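The modality-agnostic training described above lets one model serve whichever MRI sequence is available at inference time. A purely hypothetical sketch of the sampling step such a scheme implies (the function and data names are illustrative, not from the paper's code):

```python
import random

# Hypothetical sketch: rather than fixing one input modality, each training
# sample is fed through the same single-channel model with one randomly
# chosen modality, so the trained model accepts any available modality later.
MODALITIES = ["t1", "t1ce", "t2", "flair"]

def pick_training_input(scan):
    """scan: dict mapping modality name -> image (placeholder strings here).
    Returns one randomly selected available modality for this iteration."""
    available = [m for m in MODALITIES if m in scan]
    return scan[random.choice(available)]

scan = {"t1": "T1-image", "flair": "FLAIR-image"}  # stand-ins for arrays
assert pick_training_input(scan) in ("T1-image", "FLAIR-image")
```

The key property is that missing modalities are simply skipped, so no retraining is needed when an institution acquires a different subset of sequences.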
Affiliation(s)
- Siddhesh Thakur
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA; Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, Maharashtra, India
- Jimit Doshi
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Sarthak Pati
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Saima Rathore
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Chiharu Sako
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Michel Bilello
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Sung Min Ha
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Gaurav Shukla
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA; Department of Radiation Oncology, Christiana Care Health System, Philadelphia, PA, USA; Department of Radiation Oncology, Sidney Kimmel Cancer Center, Thomas Jefferson University, Philadelphia, PA, USA
- Adam Flanders
- Department of Radiology, Thomas Jefferson University, Philadelphia, PA, USA
- Aikaterini Kotrotsou
- Department of Diagnostic Radiology, University of Texas MD Anderson Cancer Center, TX, USA
- Mikhail Milchenko
- Department of Radiology, Washington University, School of Medicine, St. Louis, MO, USA
- Spencer Liem
- Sidney Kimmel Medical College, Thomas Jefferson University, Philadelphia, PA, USA
- Gregory S Alexander
- Department of Radiation Oncology, University of Maryland, Baltimore, MD, USA
- Joseph Lombardo
- Department of Radiation Oncology, Sidney Kimmel Cancer Center, Thomas Jefferson University, Philadelphia, PA, USA; Sidney Kimmel Medical College, Thomas Jefferson University, Philadelphia, PA, USA
- Joshua D Palmer
- Department of Radiation Oncology, Sidney Kimmel Cancer Center, Thomas Jefferson University, Philadelphia, PA, USA; Department of Radiation Oncology, James Cancer Center, The Ohio State University, Columbus, OH, USA
- Pamela LaMontagne
- Department of Radiology, Washington University, School of Medicine, St. Louis, MO, USA
- Arash Nazeri
- Department of Radiology, Washington University, School of Medicine, St. Louis, MO, USA
- Sanjay Talbar
- Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, Maharashtra, India
- Uday Kulkarni
- Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, Maharashtra, India
- Daniel Marcus
- Department of Radiology, Washington University, School of Medicine, St. Louis, MO, USA
- Rivka Colen
- Department of Diagnostic Radiology, University of Texas MD Anderson Cancer Center, TX, USA
- Christos Davatzikos
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Guray Erus
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Spyridon Bakas
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA.
10
Baid U, Rane SU, Talbar S, Gupta S, Thakur MH, Moiyadi A, Mahajan A. Overall Survival Prediction in Glioblastoma With Radiomic Features Using Machine Learning. Front Comput Neurosci 2020; 14:61. [PMID: 32848682] [PMCID: PMC7417437] [DOI: 10.3389/fncom.2020.00061]
Abstract
Glioblastoma is a WHO grade IV brain tumor, which leads to poor overall survival (OS) of patients. For precise surgical and treatment planning, OS prediction of glioblastoma (GBM) patients is highly desired by clinicians and oncologists. Radiomic research attempts to predict disease prognosis, thus providing beneficial information for personalized treatment, from a variety of imaging features extracted from multiple MR images. In this study, first-order, intensity-based, volume- and shape-based, and textural radiomic features are extracted from fluid-attenuated inversion recovery (FLAIR) and T1ce MRI data. The region of interest is further decomposed with the stationary wavelet transform using low-pass and high-pass filtering. Radiomic features are then extracted from these decomposed images, which helps capture directional information. The efficiency of the proposed algorithm is evaluated on the Brain Tumor Segmentation (BraTS) challenge training, validation, and test datasets. The proposed approach achieved 0.695, 0.571, and 0.558 on the BraTS training, validation, and test datasets, respectively, and secured the third position in the BraTS 2018 challenge for the OS prediction task.
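As a rough illustration of the first-order radiomic features mentioned above, the sketch below computes a few standard intensity statistics over a flat list of region-of-interest voxel values; in the pipeline described, the same kind of extractor would also be run on the wavelet-decomposed images. This is a generic sketch, not the authors' implementation, and the feature set shown is only a small subset.

```python
import math

def first_order_features(roi: list) -> dict:
    """Basic first-order intensity statistics over ROI voxel values."""
    n = len(roi)
    mean = sum(roi) / n
    var = sum((v - mean) ** 2 for v in roi) / n
    std = math.sqrt(var)
    # Skewness of the intensity distribution (0 for a symmetric ROI).
    skew = 0.0 if std == 0 else sum(((v - mean) / std) ** 3 for v in roi) / n
    energy = sum(v * v for v in roi)  # sum of squared intensities
    return {"mean": mean, "std": std, "skewness": skew, "energy": energy}

features = first_order_features([1.0, 2.0, 2.0, 3.0])
```

Feature vectors like this, concatenated across modalities and wavelet sub-bands, are what the downstream machine-learning model consumes for OS prediction.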
Affiliation(s)
- Ujjwal Baid
- Department of Electronics and Telecommunication Engineering, Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, India
- Swapnil U Rane
- Department of Pathology, Tata Memorial Centre, ACTREC, HBNI, Navi-Mumbai, India
- Sanjay Talbar
- Department of Electronics and Telecommunication Engineering, Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, India
- Sudeep Gupta
- Department of Medical Oncology, Tata Memorial Centre, ACTREC, HBNI, Navi-Mumbai, India
- Meenakshi H Thakur
- Department of Radiodiagnosis and Imaging, Tata Memorial Centre, Tata Memorial Hospital, HBNI, Mumbai, India
- Aliasgar Moiyadi
- Department of Neurosurgery Services, Tata Memorial Centre, Tata Memorial Hospital, HBNI, Mumbai, India
- Abhishek Mahajan
- Department of Radiodiagnosis and Imaging, Tata Memorial Centre, Tata Memorial Hospital, HBNI, Mumbai, India
11
Baid U, Talbar S, Rane S, Gupta S, Thakur MH, Moiyadi A, Sable N, Akolkar M, Mahajan A. A Novel Approach for Fully Automatic Intra-Tumor Segmentation With 3D U-Net Architecture for Gliomas. Front Comput Neurosci 2020; 14:10. [PMID: 32132913] [PMCID: PMC7041417] [DOI: 10.3389/fncom.2020.00010]
Abstract
Purpose: Gliomas are the most common primary brain malignancies, with varying degrees of aggressiveness and prognosis. Understanding of tumor biology and intra-tumor heterogeneity is necessary for planning personalized therapy and predicting response to therapy. Accurate tumoral and intra-tumoral segmentation on MRI is the first step toward understanding the tumor biology through computational methods. The purpose of this study was to design a segmentation algorithm and evaluate its performance on pre-treatment brain MRIs obtained from patients with gliomas. Materials and Methods: In this study, we have designed a novel 3D U-Net architecture that segments radiologically identifiable sub-regions such as edema, enhancing tumor, and necrosis. A weighted patch-extraction scheme drawing from the tumor border regions is proposed to address the class imbalance between tumorous and non-tumorous patches. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. The Deep Convolutional Neural Network (DCNN) based architecture is trained on 285 patients, validated on 66 patients, and tested on 191 patients with glioma from the Brain Tumor Segmentation (BraTS) 2018 challenge dataset. Three-dimensional patches are extracted from the multi-channel BraTS training dataset to train the 3D U-Net architecture. The efficacy of the proposed approach is also tested on an independent dataset of 40 patients with high-grade glioma from our tertiary cancer center. Segmentation results are assessed in terms of Dice score, sensitivity, specificity, and Hausdorff 95 distance. Result: Our proposed architecture achieved Dice scores of 0.88, 0.83, and 0.75 for the whole tumor, tumor core, and enhancing tumor, respectively, on the BraTS validation dataset, and 0.85, 0.77, and 0.67 on the test dataset. The results were similar on the independent patient dataset from our hospital, achieving Dice scores of 0.92, 0.90, and 0.81 for the whole tumor, tumor core, and enhancing tumor, respectively. Conclusion: The results of this study show the potential of patch-based 3D U-Net for accurate intra-tumor segmentation. From experiments, it is observed that the weighted patch-based segmentation approach gives performance comparable to the pixel-based approach when there is a thin boundary between tumor subparts.
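A minimal 2D sketch of the weighted patch-extraction idea above (the paper works with 3D patches): patches are preferentially centred on tumor-border voxels, where tumorous and non-tumorous voxels are most evenly mixed, countering class imbalance. The function names and the toy mask are illustrative only.

```python
def border_voxels(mask: list) -> list:
    """Return (row, col) of tumor voxels with at least one background neighbour."""
    h, w = len(mask), len(mask[0])
    border = []
    for r in range(h):
        for c in range(w):
            if mask[r][c] != 1:
                continue
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < h and 0 <= nc < w and mask[nr][nc] == 0:
                    border.append((r, c))
                    break
    return border

def extract_patch(image: list, centre: tuple, size: int = 3) -> list:
    """Crop a size x size patch around `centre` (assumed to fit inside)."""
    r, c = centre
    half = size // 2
    return [row[c - half : c + half + 1] for row in image[r - half : r + half + 1]]

# Toy 5x5 mask with a 3x3 tumor blob: every tumor voxel except the centre
# touches background, so 8 border voxels are found.
mask = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
centres = border_voxels(mask)
patch = extract_patch(mask, centres[0])
```

Sampling training patches from `centres` rather than uniformly over the volume is one simple way to realise the weighting the abstract describes.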
Affiliation(s)
- Ujjwal Baid
- Department of Electronics and Telecommunication Engineering, Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, India
- Sanjay Talbar
- Department of Electronics and Telecommunication Engineering, Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, India
- Swapnil Rane
- Department of Pathology, Tata Memorial Centre, Tata Memorial Hospital, Mumbai, India
- Sudeep Gupta
- Department of Medical Oncology, Tata Memorial Centre, Tata Memorial Hospital, Mumbai, India
- Meenakshi H Thakur
- Department of Radiodiagnosis and Imaging, Tata Memorial Centre, Tata Memorial Hospital, Mumbai, India
- Aliasgar Moiyadi
- Department of Neurosurgery Services, Tata Memorial Centre, Tata Memorial Hospital, Mumbai, India
- Nilesh Sable
- Department of Radiodiagnosis and Imaging, Tata Memorial Centre, Tata Memorial Hospital, Mumbai, India
- Mayuresh Akolkar
- Department of Radiodiagnosis and Imaging, Tata Memorial Centre, Tata Memorial Hospital, Mumbai, India
- Abhishek Mahajan
- Department of Radiodiagnosis and Imaging, Tata Memorial Centre, Tata Memorial Hospital, Mumbai, India
12
Sapate S, Talbar S, Mahajan A, Sable N, Desai S, Thakur M. Breast cancer diagnosis using abnormalities on ipsilateral views of digital mammograms. Biocybern Biomed Eng 2020. [DOI: 10.1016/j.bbe.2019.04.008]
13
Thakur SP, Doshi J, Pati S, Ha SM, Sako C, Talbar S, Kulkarni U, Davatzikos C, Erus G, Bakas S. Skull-Stripping of Glioblastoma MRI Scans Using 3D Deep Learning. Brainlesion 2019; 11992:57-68. [PMID: 32577629] [PMCID: PMC7311100] [DOI: 10.1007/978-3-030-46640-4_6]
Abstract
Skull-stripping is an essential pre-processing step in computational neuro-imaging directly impacting subsequent analyses. Existing skull-stripping methods have primarily targeted non-pathologically-affected brains. Accordingly, they may perform suboptimally when applied to brain Magnetic Resonance Imaging (MRI) scans that have clearly discernible pathologies, such as brain tumors. Furthermore, existing methods focus on using only T1-weighted MRI scans, even though multi-parametric MRI (mpMRI) scans are routinely acquired for patients with suspected brain tumors. Here we present a performance evaluation of publicly available implementations of established 3D Deep Learning architectures for semantic segmentation (namely DeepMedic, 3D U-Net, FCN), with a particular focus on identifying a skull-stripping approach that performs well on brain tumor scans and also has a low computational footprint. We have identified a retrospective dataset of 1,796 mpMRI brain tumor scans, with corresponding manually-inspected and verified gold-standard brain tissue segmentations, acquired during standard clinical practice under varying acquisition protocols at the Hospital of the University of Pennsylvania. Our quantitative evaluation identified DeepMedic as the best performing method (Dice = 97.9, Hausdorff95 = 2.68). We release this pre-trained model through the Cancer Imaging Phenomics Toolkit (CaPTk) platform.
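The Dice score used in the evaluation above measures the overlap between a predicted and a reference binary mask as 2|A∩B| / (|A| + |B|); a short, generic sketch on flattened masks (not the authors' evaluation pipeline):

```python
def dice(pred: list, truth: list) -> float:
    """Dice overlap between two flat binary masks (1 = brain, 0 = background)."""
    inter = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    # Convention: two empty masks overlap perfectly.
    return 1.0 if total == 0 else 2.0 * inter / total

# One of two predicted voxels matches a single true voxel: 2*1 / (2+1) = 2/3.
score = dice([1, 1, 0, 0], [1, 0, 0, 0])
```

Papers such as this one often report Dice as a percentage (e.g. 97.9 rather than 0.979); the definition is the same.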
Affiliation(s)
- Siddhesh P Thakur
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Shri Guru Gobind Singhji Institute of Engineering and Technology (SGGS), Nanded, Maharashtra, India
- Jimit Doshi
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Sarthak Pati
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Sung Min Ha
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Chiharu Sako
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Sanjay Talbar
- Shri Guru Gobind Singhji Institute of Engineering and Technology (SGGS), Nanded, Maharashtra, India
- Uday Kulkarni
- Shri Guru Gobind Singhji Institute of Engineering and Technology (SGGS), Nanded, Maharashtra, India
- Christos Davatzikos
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Guray Erus
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Spyridon Bakas
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
14
Gandhamal A, Talbar S, Gajre S, Razak R, Hani AFM, Kumar D. Fully automated subchondral bone segmentation from knee MR images: Data from the Osteoarthritis Initiative. Comput Biol Med 2017; 88:110-125. [DOI: 10.1016/j.compbiomed.2017.07.008]
15
Gandhamal A, Talbar S, Gajre S, Hani AFM, Kumar D. Local gray level S-curve transformation – A generalized contrast enhancement technique for medical images. Comput Biol Med 2017; 83:120-133. [DOI: 10.1016/j.compbiomed.2017.03.001]