1. Multiscale topology optimization of pelvic bone for combined walking and running gait cycles. Comput Methods Biomech Biomed Engin 2024; 27:796-812. PMID: 37129885. DOI: 10.1080/10255842.2023.2205541.
Abstract
We propose a multiscale topology optimization procedure for the pelvic bone using weighted compliance minimization. In macroscale optimization, a level set-based method is used, which gives a binary structure. In microscale optimization, cubic lattice-based homogenization is done while keeping the global geometry fixed. For the macroscale, a volume constraint equal to the volume of the pelvic bone is imposed, whereas, for the microscale, a mass constraint equal to the mass of the pelvic bone is imposed. The optimal geometries are compared with the pelvic bone using different metrics and show good similarity with it. The designed geometries are additively manufactured and experimentally tested for stiffness.
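For the macroscale objective, weighted compliance minimization over the two gait cycles reduces to a convex combination of per-load-case compliances; a minimal sketch in which the case weights are design choices, not values from the paper:

```python
def weighted_compliance(compliances, weights):
    """Weighted-sum compliance objective over load cases (e.g. walking, running).

    Each compliance is u^T K u for one gait load case; the optimizer
    minimizes this scalar subject to the volume (macroscale) or mass
    (microscale) constraint described in the abstract.
    """
    total = sum(weights)
    return sum(w * c for w, c in zip(weights, compliances)) / total
```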
2. Semantically redundant training data removal and deep model classification performance: A study with chest X-rays. Comput Med Imaging Graph 2024; 115:102379. PMID: 38608333. DOI: 10.1016/j.compmedimag.2024.102379.
Abstract
Deep learning (DL) has demonstrated its innate capacity to independently learn hierarchical features from complex and multi-dimensional data. A common understanding is that its performance scales up with the amount of training data. However, the data must also exhibit variety to enable improved learning. In medical imaging data, semantic redundancy, which is the presence of similar or repetitive information, can occur due to the presence of multiple images that have highly similar presentations for the disease of interest. Also, the common use of augmentation methods to generate variety in DL training could limit performance when indiscriminately applied to such data. We hypothesize that semantic redundancy would therefore tend to lower performance and limit generalizability to unseen data and question its impact on classifier performance even with large data. We propose an entropy-based sample scoring approach to identify and remove semantically redundant training data and demonstrate using the publicly available NIH chest X-ray dataset that the model trained on the resulting informative subset of training data significantly outperforms the model trained on the full training set, during both internal (recall: 0.7164 vs 0.6597, p<0.05) and external testing (recall: 0.3185 vs 0.2589, p<0.05). Our findings emphasize the importance of information-oriented training sample selection as opposed to the conventional practice of using all available training data.
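The entropy-based scoring step can be sketched minimally as follows; the histogram-entropy score, the redundancy tolerance, and the `informative_subset` helper are illustrative assumptions, not the paper's exact algorithm:

```python
import math

def shannon_entropy(pixels, bins=256):
    """Shannon entropy (bits) of a grayscale intensity histogram."""
    hist = [0] * bins
    for p in pixels:
        hist[p] += 1
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in hist if c)

def informative_subset(images, tol=1e-3):
    """Keep one representative per group of near-identical entropy scores.

    `images` maps a name to a flat list of 8-bit pixel values. Images whose
    entropy scores differ by less than `tol` are treated as semantically
    redundant, and only the first is retained.
    """
    scored = sorted((shannon_entropy(px), name) for name, px in images.items())
    kept, last = [], None
    for score, name in scored:
        if last is None or score - last >= tol:
            kept.append(name)
            last = score
    return kept
```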
3. Uncovering the effects of model initialization on deep model generalization: A study with adult and pediatric chest X-ray images. PLOS Digital Health 2024; 3:e0000286. PMID: 38232121. DOI: 10.1371/journal.pdig.0000286.
Abstract
Model initialization techniques are vital for improving the performance and reliability of deep learning models in medical computer vision applications. While much literature exists on non-medical images, the impacts on medical images, particularly chest X-rays (CXRs) are less understood. Addressing this gap, our study explores three deep model initialization techniques: Cold-start, Warm-start, and Shrink and Perturb start, focusing on adult and pediatric populations. We specifically focus on scenarios with periodically arriving data for training, thereby embracing the real-world scenarios of ongoing data influx and the need for model updates. We evaluate these models for generalizability against external adult and pediatric CXR datasets. We also propose novel ensemble methods: F-score-weighted Sequential Least-Squares Quadratic Programming (F-SLSQP) and Attention-Guided Ensembles with Learnable Fuzzy Softmax to aggregate weight parameters from multiple models to capitalize on their collective knowledge and complementary representations. We perform statistical significance tests with 95% confidence intervals and p-values to analyze model performance. Our evaluations indicate models initialized with ImageNet-pretrained weights demonstrate superior generalizability over randomly initialized counterparts, contradicting some findings for non-medical images. Notably, ImageNet-pretrained models exhibit consistent performance during internal and external testing across different training scenarios. Weight-level ensembles of these models show significantly higher recall (p<0.05) during testing compared to individual models. Thus, our study accentuates the benefits of ImageNet-pretrained weight initialization, especially when used with weight-level ensembles, for creating robust and generalizable deep learning solutions.
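A weight-level ensemble combines parameter vectors rather than predictions. The sketch below uses a simple F-score-normalized average as a stand-in; the paper's F-SLSQP instead solves for the mixing coefficients with Sequential Least-Squares Quadratic Programming under a simplex constraint:

```python
def fscore_weighted_ensemble(param_sets, f_scores):
    """Combine model parameter vectors into one weight-level ensemble.

    Simplified stand-in for F-SLSQP: each model's flattened parameters are
    weighted by its normalized F-score; the actual method optimizes these
    mixing coefficients rather than fixing them.
    """
    total = sum(f_scores)
    coeffs = [f / total for f in f_scores]
    n_params = len(param_sets[0])
    return [sum(c * params[i] for c, params in zip(coeffs, param_sets))
            for i in range(n_params)]
```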
4. Evaluation of an artificial intelligence-based system for echocardiographic estimation of right atrial pressure. Int J Cardiovasc Imaging 2023; 39:2437-2450. PMID: 37682418. PMCID: PMC10692014. DOI: 10.1007/s10554-023-02941-8.
Abstract
Current noninvasive estimation of right atrial pressure (RAP) by inferior vena cava (IVC) measurement during echocardiography may have significant inter-rater variability due to different levels of observers' experience. Therefore, there is a need to develop new approaches to decrease the variability of IVC analysis and RAP estimation. This study aims to develop a fully automated artificial intelligence (AI)-based system for automated IVC analysis and RAP estimation. We presented a multi-stage AI system to identify the IVC view, select good quality images, delineate the IVC region and quantify its thickness, enabling temporal tracking of its diameter and collapsibility changes. The automated system was trained and tested on expert manual IVC and RAP reference measurements obtained from 255 patients during routine clinical workflow. The performance was evaluated using Pearson correlation and Bland-Altman analysis for IVC values, as well as macro accuracy and the chi-square test for RAP values. Our results show an excellent agreement (r=0.96) between automatically computed and manually measured IVC values, and Bland-Altman analysis showed a small bias of [Formula: see text]0.33 mm. Further, there is an excellent agreement ([Formula: see text]) between automatically estimated and manually derived RAP values, with a macro accuracy of 0.85. The proposed AI-based system accurately quantified the IVC diameter and collapsibility index, both of which are used for RAP estimation. This automated system could serve as a paradigm to perform IVC analysis in routine echocardiography and support various cardiac diagnostic applications.
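The collapsibility index driving the RAP estimate is a simple ratio; a sketch using the commonly cited echo cut-offs (2.1 cm diameter, 50% collapse), which may not be the exact rule the described system implements:

```python
def collapsibility_index(d_max_mm, d_min_mm):
    """IVC collapsibility index: fractional decrease in diameter with inspiration."""
    return (d_max_mm - d_min_mm) / d_max_mm

def estimate_rap(d_max_mm, ci):
    """Guideline-style RAP estimate (mmHg) from IVC size and collapsibility.

    Thresholds follow the commonly used echocardiographic convention
    (21 mm diameter, 50% collapse); the paper's system may differ.
    """
    if d_max_mm <= 21 and ci > 0.5:
        return 3   # normal RAP (0-5 mmHg range)
    if d_max_mm > 21 and ci < 0.5:
        return 15  # elevated RAP (10-20 mmHg range)
    return 8       # intermediate
```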
5. Can Deep Adult Lung Segmentation Models Generalize to the Pediatric Population? Expert Systems with Applications 2023; 229:120531. PMID: 37397242. PMCID: PMC10310063. DOI: 10.1016/j.eswa.2023.120531.
Abstract
Lung segmentation in chest X-rays (CXRs) is an important prerequisite for improving the specificity of diagnoses of cardiopulmonary diseases in a clinical decision support system. Current deep learning models for lung segmentation are trained and evaluated on CXR datasets in which the radiographic projections are captured predominantly from the adult population. However, the shape of the lungs is reported to be significantly different across the developmental stages from infancy to adulthood. This might result in age-related data domain shifts that would adversely impact lung segmentation performance when the models trained on the adult population are deployed for pediatric lung segmentation. In this work, our goal is to (i) analyze the generalizability of deep adult lung segmentation models to the pediatric population and (ii) improve performance through a stage-wise, systematic approach consisting of CXR modality-specific weight initializations, stacked ensembles, and an ensemble of stacked ensembles. To evaluate segmentation performance and generalizability, novel evaluation metrics consisting of mean lung contour distance (MLCD) and average hash score (AHS) are proposed in addition to the multi-scale structural similarity index measure (MS-SSIM), the intersection over union (IoU), Dice score, 95% Hausdorff distance (HD95), and average symmetric surface distance (ASSD). Our results showed a significant improvement (p < 0.05) in cross-domain generalization through our approach. This study could serve as a paradigm to analyze the cross-domain generalizability of deep segmentation models for other medical imaging modalities and applications.
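The overlap metrics above (Dice, IoU) reduce to simple set statistics on binary masks; a self-contained sketch on flat 0/1 lists:

```python
def dice_and_iou(mask_a, mask_b):
    """Dice score and IoU for two binary masks given as flat 0/1 lists."""
    inter = sum(a & b for a, b in zip(mask_a, mask_b))
    size_a, size_b = sum(mask_a), sum(mask_b)
    union = size_a + size_b - inter
    # empty-vs-empty masks count as perfect agreement
    dice = 2 * inter / (size_a + size_b) if size_a + size_b else 1.0
    iou = inter / union if union else 1.0
    return dice, iou
```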
6. Automatic Quantification of COVID-19 Pulmonary Edema by Self-supervised Contrastive Learning. In: Medical Image Learning with Limited and Noisy Data (MILLanD 2023), held in conjunction with MICCAI 2023, Vancouver, BC, Canada. 2023; 14307:128-137. PMID: 38415180. PMCID: PMC10896252. DOI: 10.1007/978-3-031-44917-8_12.
Abstract
We proposed a self-supervised machine learning method to automatically rate the severity of pulmonary edema in frontal chest X-ray radiographs (CXRs), which could potentially be related to COVID-19 viral pneumonia. For this, we used the modified radiographic assessment of lung edema (mRALE) scoring system. The new model was first optimized with the simple Siamese network (SimSiam) architecture, where a ResNet-50 pretrained on the ImageNet database was used as the backbone. The encoder projected a 2048-dimensional embedding as representation features to a downstream fully connected deep neural network for mRALE score prediction. A 5-fold cross-validation with 2,599 frontal CXRs was used to examine the new model's performance in comparison to a non-pretrained SimSiam encoder and a ResNet-50 trained from scratch. The mean absolute error (MAE) of the new model is 5.05 (95% CI 5.03-5.08), the mean squared error (MSE) is 66.67 (95% CI 66.29-67.06), and the Spearman's correlation coefficient (Spearman ρ) to the expert-annotated scores is 0.77 (95% CI 0.75-0.79). All the performance metrics of the new model are superior to the two comparators (P<0.01), and the MSE and Spearman ρ scores of the two comparators show no statistically significant difference (P>0.05). The model also achieved a prediction probability concordance of 0.811 and a quadratic weighted kappa of 0.739 with the medical expert annotations in external validation. We conclude that the self-supervised contrastive learning method is an effective strategy for automated mRALE scoring. It provides a new approach to improve machine learning performance and minimize expert knowledge involvement in quantitative medical image pattern learning.
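SimSiam trains by maximizing cosine similarity between one view's predictor output and the stop-gradient projection of the other view. A toy sketch of the symmetrized objective on plain Python vectors (the real model operates on 2048-dimensional ResNet-50 embeddings):

```python
import math

def neg_cosine(p, z):
    """Negative cosine similarity; z is treated as a constant (stop-gradient)."""
    norm_p = math.sqrt(sum(x * x for x in p))
    norm_z = math.sqrt(sum(x * x for x in z))
    return -sum(a * b for a, b in zip(p, z)) / (norm_p * norm_z)

def simsiam_loss(p1, z1, p2, z2):
    """Symmetrized SimSiam objective: D(p1, sg(z2))/2 + D(p2, sg(z1))/2.

    p1, p2 are predictor outputs and z1, z2 projector outputs for the two
    augmented views; the minimum of -1.0 is reached when views align.
    """
    return 0.5 * neg_cosine(p1, z2) + 0.5 * neg_cosine(p2, z1)
```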
7. Semantically Redundant Training Data Removal and Deep Model Classification Performance: A Study with Chest X-rays. arXiv 2023; arXiv:2309.09773v1 (preprint). PMID: 37986725. PMCID: PMC10659445.
Abstract
Deep learning (DL) has demonstrated its innate capacity to independently learn hierarchical features from complex and multi-dimensional data. A common understanding is that its performance scales up with the amount of training data. Another data attribute is the inherent variety. It follows, therefore, that semantic redundancy, which is the presence of similar or repetitive information, would tend to lower performance and limit generalizability to unseen data. In medical imaging data, semantic redundancy can occur due to the presence of multiple images that have highly similar presentations for the disease of interest. Further, the common use of augmentation methods to generate variety in DL training may be limiting performance when applied to semantically redundant data. We propose an entropy-based sample scoring approach to identify and remove semantically redundant training data. We demonstrate using the publicly available NIH chest X-ray dataset that the model trained on the resulting informative subset of training data significantly outperforms the model trained on the full training set, during both internal (recall: 0.7164 vs 0.6597, p<0.05) and external testing (recall: 0.3185 vs 0.2589, p<0.05). Our findings emphasize the importance of information-oriented training sample selection as opposed to the conventional practice of using all available training data.
8. Cross Dataset Analysis of Domain Shift in CXR Lung Region Detection. Diagnostics (Basel) 2023; 13:1068. PMID: 36980375. PMCID: PMC10047562. DOI: 10.3390/diagnostics13061068.
Abstract
Domain shift is one of the key challenges affecting reliability in medical imaging-based machine learning predictions. It is of significant importance to investigate this issue to gain insights into its characteristics toward determining controllable parameters to minimize its impact. In this paper, we report our efforts on studying and analyzing domain shift in lung region detection in chest radiographs. We used five chest X-ray datasets, collected from different sources, which have manual markings of lung boundaries in order to conduct extensive experiments toward this goal. We compared the characteristics of these datasets from three aspects: information obtained from metadata or an image header, image appearance, and features extracted from a pretrained model. We carried out experiments to evaluate and compare model performances within each dataset and across datasets in four scenarios using different combinations of datasets. We proposed a new feature visualization method to provide explanations for the applied object detection network on the obtained quantitative results. We also examined chest X-ray modality-specific initialization, catastrophic forgetting, and model repeatability. We believe the observations and discussions presented in this work could help to shed some light on the importance of the analysis of training data for medical imaging machine learning research, and could provide valuable guidance for domain shift analysis.
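One crude appearance-level proxy for shift between two datasets is a symmetrized KL divergence between their intensity or feature histograms; this single number is an illustrative assumption, not the paper's full three-aspect comparison:

```python
import math

def kl_divergence(p, q, eps=1e-9):
    """KL(p || q) between two discrete histograms that each sum to 1."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def domain_shift_score(hist_a, hist_b):
    """Symmetrized KL as a rough dataset-appearance shift score (0 = identical)."""
    return 0.5 * (kl_divergence(hist_a, hist_b) + kl_divergence(hist_b, hist_a))
```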
9. Assessing Inter-Annotator Agreement for Medical Image Segmentation. IEEE Access 2023; 11:21300-21312. PMID: 37008654. PMCID: PMC10062409. DOI: 10.1109/access.2023.3249759.
Abstract
Artificial Intelligence (AI)-based medical computer vision algorithm training and evaluations depend on annotations and labeling. However, variability between expert annotators introduces noise in training data that can adversely impact the performance of AI algorithms. This study aims to assess, illustrate and interpret the inter-annotator agreement among multiple expert annotators when segmenting the same lesion(s)/abnormalities on medical images. We propose the use of three metrics for the qualitative and quantitative assessment of inter-annotator agreement: 1) use of a common agreement heatmap and a ranking agreement heatmap; 2) use of the extended Cohen's kappa and Fleiss' kappa coefficients for a quantitative evaluation and interpretation of inter-annotator reliability; and 3) use of the Simultaneous Truth and Performance Level Estimation (STAPLE) algorithm, as a parallel step, to generate ground truth for training AI models and compute Intersection over Union (IoU), sensitivity, and specificity to assess the inter-annotator reliability and variability. Experiments are performed on two datasets, namely cervical colposcopy images from 30 patients and chest X-ray images from 336 tuberculosis (TB) patients, to demonstrate the consistency of inter-annotator reliability assessment and the importance of combining different metrics to avoid bias assessment.
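Fleiss' kappa, one of the proposed agreement metrics, can be computed directly from a subjects-by-categories table of rating counts:

```python
def fleiss_kappa(counts):
    """Fleiss' kappa for a subjects-by-categories table of rating counts.

    counts[i][j] = number of raters assigning subject i to category j;
    every subject must receive the same total number of ratings.
    """
    n_subjects = len(counts)
    n_raters = sum(counts[0])
    # mean per-subject observed agreement
    p_bar = sum(
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in counts
    ) / n_subjects
    # chance agreement from the category marginals
    total = n_subjects * n_raters
    p_e = sum(
        (sum(row[j] for row in counts) / total) ** 2
        for j in range(len(counts[0]))
    )
    return (p_bar - p_e) / (1 - p_e)
```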
10. Assessing the Impact of Image Resolution on Deep Learning for TB Lesion Segmentation on Frontal Chest X-rays. Diagnostics (Basel) 2023; 13:747. PMID: 36832235. PMCID: PMC9955202. DOI: 10.3390/diagnostics13040747.
Abstract
Deep learning (DL) models are state-of-the-art in segmenting anatomical and disease regions of interest (ROIs) in medical images. Particularly, a large number of DL-based techniques have been reported using chest X-rays (CXRs). However, these models are reportedly trained on reduced image resolutions for reasons related to the lack of computational resources. Literature is sparse in discussing the optimal image resolution to train these models for segmenting the tuberculosis (TB)-consistent lesions in CXRs. In this study, we investigated the performance variations with an Inception-V3 UNet model using various image resolutions with/without lung ROI cropping and aspect ratio adjustments and identified the optimal image resolution through extensive empirical evaluations to improve TB-consistent lesion segmentation performance. We used the Shenzhen CXR dataset for the study, which includes 326 normal patients and 336 TB patients. We proposed a combinatorial approach consisting of storing model snapshots, optimizing segmentation threshold and test-time augmentation (TTA), and averaging the snapshot predictions, to further improve performance with the optimal resolution. Our experimental results demonstrate that higher image resolutions are not always necessary; however, identifying the optimal image resolution is critical to achieving superior performance.
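The snapshot-averaging and TTA step can be sketched generically: run every saved snapshot over every augmented view, undo each augmentation on the prediction, and average. The callables below are illustrative placeholders for real models and transforms:

```python
def averaged_prediction(snapshots, augments, image):
    """Average segmentation probabilities over model snapshots and TTA variants.

    `snapshots` are callables mapping an image to a flat list of pixel
    probabilities; `augments` are (transform, inverse) pairs applied to the
    image and undone on the prediction so all outputs align before averaging.
    """
    preds = []
    for model in snapshots:
        for forward, inverse in augments:
            preds.append(inverse(model(forward(image))))
    n = len(preds)
    return [sum(p[i] for p in preds) / n for i in range(len(preds[0]))]
```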
11. Does image resolution impact chest X-ray based fine-grained Tuberculosis-consistent lesion segmentation? arXiv 2023; arXiv:2301.04032 (preprint). PMID: 36789135. PMCID: PMC9928051.
Abstract
Deep learning (DL) models are state-of-the-art in segmenting anatomical and disease regions of interest (ROIs) in medical images. Particularly, a large number of DL-based techniques have been reported using chest X-rays (CXRs). However, these models are reportedly trained on reduced image resolutions for reasons related to the lack of computational resources. Literature is sparse in discussing the optimal image resolution to train these models for segmenting the Tuberculosis (TB)-consistent lesions in CXRs. In this study, we (i) investigated the performance variations with an Inception-V3 UNet model using various image resolutions with/without lung ROI cropping and aspect ratio adjustments, and (ii) identified the optimal image resolution through extensive empirical evaluations to improve TB-consistent lesion segmentation performance. We used the Shenzhen CXR dataset for the study, which includes 326 normal patients and 336 TB patients. We proposed a combinatorial approach consisting of storing model snapshots, optimizing segmentation threshold and test-time augmentation (TTA), and averaging the snapshot predictions, to further improve performance with the optimal resolution. Our experimental results demonstrate that higher image resolutions are not always necessary; however, identifying the optimal image resolution is critical to achieving superior performance.
12. Data Characterization for Reliable AI in Medicine. In: Recent Trends in Image Processing and Pattern Recognition (RTIP2R 2022), Kingsville, TX, USA, December 1-2, 2022, Revised Selected Papers. 2023; 1704:3-11. PMID: 36780238. PMCID: PMC9912175. DOI: 10.1007/978-3-031-23599-3_1.
Abstract
Research in Artificial Intelligence (AI)-based medical computer vision algorithms bears promise to improve disease screening, diagnosis, and, subsequently, patient care. However, these algorithms are highly impacted by the characteristics of the underlying data. In this work, we discuss various data characteristics, namely Volume, Veracity, Validity, Variety, and Velocity, that impact the design, reliability, and evolution of machine learning in medical computer vision. Further, we discuss each characteristic and the recent works conducted in our research lab that informed our understanding of the impact of these characteristics on the design of medical decision-making algorithms and outcome reliability.
13. Advances in Deep Learning for Tuberculosis Screening using Chest X-rays: The Last 5 Years Review. J Med Syst 2022; 46:82. PMID: 36241922. PMCID: PMC9568934. DOI: 10.1007/s10916-022-01870-8.
Abstract
There has been an explosive growth in research over the last decade exploring machine learning techniques for analyzing chest X-ray (CXR) images for screening cardiopulmonary abnormalities. In particular, we have observed a strong interest in screening for tuberculosis (TB). This interest has coincided with the spectacular advances in deep learning (DL) that is primarily based on convolutional neural networks (CNNs). These advances have resulted in significant research contributions in DL techniques for TB screening using CXR images. We review the research studies published over the last five years (2016-2021). We identify data collections, methodical contributions, and highlight promising methods and challenges. Further, we discuss and compare studies and identify those that offer extension beyond binary decisions for TB, such as region-of-interest localization. In total, we systematically review 54 peer-reviewed research articles and perform meta-analysis.
14. Image Quality Classification for Automated Visual Evaluation of Cervical Precancer. In: Medical Image Learning with Limited and Noisy Data (MILLanD 2022), held in conjunction with MICCAI 2022, Singapore. 2022; 13559:206-217. PMID: 36315110. PMCID: PMC9614805. DOI: 10.1007/978-3-031-16760-7_20.
Abstract
Image quality control is a critical element in the process of data collection and cleaning. Both manual and automated analyses alike are adversely impacted by bad quality data. There are several factors that can degrade image quality and, correspondingly, there are many approaches to mitigate their negative impact. In this paper, we address image quality control toward our goal of improving the performance of automated visual evaluation (AVE) for cervical precancer screening. Specifically, we report efforts made toward classifying images into four quality categories ("unusable", "unsatisfactory", "limited", and "evaluable") and improving the quality classification performance by automatically identifying mislabeled and overly ambiguous images. The proposed new deep learning ensemble framework is an integration of several networks that consists of three main components: cervix detection, mislabel identification, and quality classification. We evaluated our method using a large dataset that comprises 87,420 images obtained from 14,183 patients through several cervical cancer studies conducted by different providers using different imaging devices in different geographic regions worldwide. The proposed ensemble approach achieved higher performance than the baseline approaches.
15. Real-time echocardiography image analysis and quantification of cardiac indices. Med Image Anal 2022; 80:102438. PMID: 35868819. PMCID: PMC9310146. DOI: 10.1016/j.media.2022.102438.
Abstract
Deep learning has a huge potential to transform echocardiography in clinical practice and point-of-care ultrasound testing by providing real-time analysis of cardiac structure and function. Automated echocardiography analysis benefits from the use of machine learning for tasks such as image quality assessment, view classification, cardiac region segmentation, and quantification of diagnostic indices. By taking advantage of high-performing deep neural networks, we propose a novel and efficient real-time system for echocardiography analysis and quantification. Our system uses a self-supervised modality-specific representation trained using a publicly available large-scale dataset. The trained representation is used to enhance the learning of target echo tasks with relatively small datasets. We also present a novel Trilateral Attention Network (TaNet) for real-time cardiac region segmentation. The proposed network uses a module for region localization and three lightweight pathways for encoding rich low-level, textural, and high-level features. Feature embeddings from these individual pathways are then aggregated for cardiac region segmentation. This network is fine-tuned using a joint loss function and training strategy. We extensively evaluate the proposed system and its components, which are echo view retrieval, cardiac segmentation, and quantification, using four echocardiography datasets. Our experimental results show a consistent improvement in the performance of echocardiography analysis tasks with enhanced computational efficiency that charts a path toward its adoption in clinical practice. Specifically, our results show superior real-time performance in retrieving good-quality echo from individual cardiac views, segmenting cardiac chambers with complex overlaps, and extracting cardiac indices that highly agree with the experts' values. The source code of our implementation can be found on the project's GitHub page.
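Once the chambers are segmented, standard cardiac indices follow from simple ratios; the formulas below are the usual clinical definitions, not the paper's code:

```python
def ejection_fraction(edv_ml, esv_ml):
    """Left-ventricular ejection fraction (%) from end-diastolic/end-systolic volumes."""
    return 100.0 * (edv_ml - esv_ml) / edv_ml

def fractional_area_change(eda_cm2, esa_cm2):
    """Right-ventricular fractional area change (%) from end-diastolic/end-systolic areas."""
    return 100.0 * (eda_cm2 - esa_cm2) / eda_cm2
```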
16. Annotations of Lung Abnormalities in the Shenzhen Chest X-ray Dataset for Computer-Aided Screening of Pulmonary Diseases. Data 2022; 7:95. PMID: 36381384. PMCID: PMC9645800. DOI: 10.3390/data7070095.
Abstract
Developments in deep learning techniques have led to significant advances in automated abnormality detection in radiological images and paved the way for their potential use in computer-aided diagnosis (CAD) systems. However, the development of CAD systems for pulmonary tuberculosis (TB) diagnosis is hampered by the lack of training data that is of good visual and diagnostic quality, of sufficient size, variety, and, where relevant, containing fine region annotations. This study presents a collection of annotations/segmentations of pulmonary radiological manifestations that are consistent with TB in the publicly available and widely used Shenzhen chest X-ray (CXR) dataset made available by the U.S. National Library of Medicine and obtained via a research collaboration with No. 3 People's Hospital, Shenzhen, China. The goal of releasing these annotations is to advance the state-of-the-art for image segmentation methods toward improving the performance of fine-grained segmentation of TB-consistent findings in digital chest X-ray images. The annotation collection comprises the following: 1) annotation files in JSON (JavaScript Object Notation) format that indicate locations and shapes of 19 lung pattern abnormalities for 336 TB patients; 2) mask files saved in PNG format for each abnormality per TB patient; 3) a CSV (comma-separated values) file that summarizes lung abnormality types and numbers per TB patient. To the best of our knowledge, this is the first collection of pixel-level annotations of TB-consistent findings in CXRs. Dataset: https://data.lhncbc.nlm.nih.gov/public/Tuberculosis-Chest-X-ray-Datasets/Shenzhen-Hospital-CXR-Set/Annotations/index.html.
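A consumer of such annotations typically loads the JSON per patient and tallies abnormality types into a CSV summary. The field names used here (`patient_id`, `abnormalities`, `type`) are illustrative assumptions; consult the released files for the actual schema:

```python
import csv
import io
import json

def summarize_abnormalities(json_text):
    """Count abnormality types per patient from one JSON annotation record.

    Assumes a hypothetical schema with `patient_id` and a list of
    `abnormalities`, each carrying a `type` field.
    """
    record = json.loads(json_text)
    counts = {}
    for ab in record["abnormalities"]:
        counts[ab["type"]] = counts.get(ab["type"], 0) + 1
    return record["patient_id"], counts

def write_summary_csv(rows):
    """Render per-patient (id, type, count) rows as CSV text."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["patient_id", "abnormality_type", "count"])
    writer.writerows(rows)
    return buf.getvalue()
```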
17. A Deep Modality-Specific Ensemble for Improving Pneumonia Detection in Chest X-rays. Diagnostics (Basel) 2022; 12:1442. PMID: 35741252. PMCID: PMC9221627. DOI: 10.3390/diagnostics12061442.
Abstract
Pneumonia is an acute respiratory infectious disease caused by bacteria, fungi, or viruses. Fluid-filled lungs due to the disease result in painful breathing difficulties and reduced oxygen intake. Effective diagnosis is critical for appropriate and timely treatment and improving survival. Chest X-rays (CXRs) are routinely used to screen for the infection. Computer-aided detection methods using conventional deep learning (DL) models for identifying pneumonia-consistent manifestations in CXRs have demonstrated superiority over traditional machine learning approaches. However, their performance is still inadequate to aid in clinical decision-making. This study improves upon the state of the art as follows. Specifically, we train a DL classifier on large collections of CXR images to develop a CXR modality-specific model. Next, we use this model as the classifier backbone in the RetinaNet object detection network. We also initialize this backbone using random weights and ImageNet-pretrained weights. Finally, we construct an ensemble of the best-performing models resulting in improved detection of pneumonia-consistent findings. Experimental results demonstrate that an ensemble of the top-3 performing RetinaNet models outperformed individual models in terms of the mean average precision (mAP) metric (0.3272, 95% CI: (0.3006,0.3538)) toward this task, which is markedly higher than the state of the art (mAP: 0.2547). This performance improvement is attributed to the key modifications in initializing the weights of classifier backbones and constructing model ensembles to reduce prediction variance compared to individual constituent models.
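Detection ensembles like this are scored by IoU-based box matching; a compact sketch of box IoU and precision at a fixed threshold (full mAP adds confidence-ranked interpolation on top of this):

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def precision_at_iou(preds, gts, thr=0.5):
    """Fraction of predicted boxes matching some ground-truth box at IoU >= thr."""
    if not preds:
        return 0.0
    hits = sum(1 for p in preds if any(box_iou(p, g) >= thr for g in gts))
    return hits / len(preds)
```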
|
18
|
Uncertainty Quantification in Segmenting Tuberculosis-Consistent Findings in Frontal Chest X-rays. Biomedicines 2022; 10:biomedicines10061323. [PMID: 35740345 PMCID: PMC9220007 DOI: 10.3390/biomedicines10061323] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2022] [Revised: 05/30/2022] [Accepted: 06/03/2022] [Indexed: 12/10/2022] Open
Abstract
Deep learning (DL) methods have demonstrated superior performance in medical image segmentation tasks. However, selecting a loss function that conforms to the data characteristics is critical for optimal performance. Further, the direct use of traditional DL models does not provide a measure of uncertainty in predictions. Even high-quality automated predictions for medical diagnostic applications demand uncertainty quantification to gain user trust. In this study, we aim to investigate the benefits of (i) selecting an appropriate loss function and (ii) quantifying uncertainty in predictions using a VGG16-based U-Net model with the Monte Carlo Dropout (MCD) method for segmenting Tuberculosis (TB)-consistent findings in frontal chest X-rays (CXRs). We determine an optimal uncertainty threshold based on several uncertainty-related metrics. This threshold is used to select and refer highly uncertain cases to an expert. Experimental results demonstrate that (i) the model trained with a modified Focal Tversky loss function delivered superior segmentation performance (mean average precision (mAP): 0.5710, 95% confidence interval (CI): (0.4021,0.7399)), (ii) the model with 30 MC forward passes during inference further improved and stabilized performance (mAP: 0.5721, 95% CI: (0.4032,0.7410)), and (iii) an uncertainty threshold of 0.7 is observed to be optimal to refer highly uncertain cases.
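The MCD procedure summarized above, several stochastic forward passes whose spread serves as the uncertainty estimate, can be sketched as follows. This is an illustrative toy, not the paper's implementation: the function names, the `toy_model`, and the pixel-fraction referral rule are all assumptions.

```python
import numpy as np

def mc_dropout_predict(stochastic_forward, x, n_passes=30):
    """Run n_passes stochastic forward passes (dropout kept active at
    inference); the mean is the prediction, and the per-pixel standard
    deviation serves as the uncertainty estimate."""
    probs = np.stack([stochastic_forward(x) for _ in range(n_passes)])
    return probs.mean(axis=0), probs.std(axis=0)

def should_refer(uncertainty, threshold=0.7, fraction=0.25):
    """Hypothetical referral rule: flag the case for expert review when
    more than `fraction` of pixels exceed the uncertainty threshold."""
    return bool(np.mean(uncertainty > threshold) > fraction)

# Toy stand-in for a segmentation net with dropout: a fixed map plus noise.
rng = np.random.default_rng(0)
toy_model = lambda x: np.clip(x + rng.normal(0.0, 0.05, x.shape), 0.0, 1.0)
mean_prob, uncertainty = mc_dropout_predict(toy_model, np.full((4, 4), 0.6))
```

Averaging the passes stabilizes the prediction (mirroring the abstract's improvement with 30 forward passes), while the spread flags regions the model is unsure about.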
|
19
|
DeBoNet: A deep bone suppression model ensemble to improve disease detection in chest radiographs. PLoS One 2022; 17:e0265691. [PMID: 35358235 PMCID: PMC8970404 DOI: 10.1371/journal.pone.0265691] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2021] [Accepted: 03/06/2022] [Indexed: 11/18/2022] Open
Abstract
Automatic detection of some pulmonary abnormalities using chest X-rays may be impacted adversely due to obscuring by bony structures like the ribs and the clavicles. Automated bone suppression methods would increase soft tissue visibility and enhance automated disease detection. We evaluate this hypothesis using a custom ensemble of convolutional neural network models, which we call DeBoNet, that suppresses bones in frontal CXRs. First, we train and evaluate variants of U-Nets, Feature Pyramid Networks, and other proposed custom models using a private collection of CXR images and their bone-suppressed counterparts. The DeBoNet, constructed using the top-3 performing models, outperformed the individual models in terms of peak signal-to-noise ratio (PSNR) (36.7977±1.6207), multi-scale structural similarity index measure (MS-SSIM) (0.9848±0.0073), and other metrics. Next, the best-performing bone-suppression model is applied to CXR images that are pooled from several sources, showing no abnormality and other findings consistent with COVID-19. The impact of bone suppression is demonstrated by evaluating the gain in performance in detecting pulmonary abnormality consistent with COVID-19 disease. We observe that the model trained on bone-suppressed CXRs (MCC: 0.9645, 95% confidence interval (0.9510, 0.9780)) significantly outperformed (p < 0.05) the model trained on non-bone-suppressed images (MCC: 0.7961, 95% confidence interval (0.7667, 0.8255)) in detecting findings consistent with COVID-19 indicating benefits derived from automatic bone suppression on disease classification. The code is available at https://github.com/sivaramakrishnan-rajaraman/Bone-Suppresion-Ensemble.
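PSNR, one of the metrics used above to rank the bone-suppression models, has a standard closed form; a minimal sketch follows (the function name is ours):

```python
import numpy as np

def psnr(reference, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio in dB between a reference image and a
    bone-suppressed reconstruction; higher means closer to the reference."""
    mse = np.mean((reference.astype(np.float64)
                   - reconstructed.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10((max_val ** 2) / mse)
```

For instance, two 8-bit images differing by exactly one gray level everywhere score 20·log10(255) ≈ 48.13 dB, close to the DeBoNet figure reported above.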
|
20
|
Detecting Tuberculosis-Consistent Findings in Lateral Chest X-Rays Using an Ensemble of CNNs and Vision Transformers. Front Genet 2022; 13:864724. [PMID: 35281798 PMCID: PMC8907925 DOI: 10.3389/fgene.2022.864724] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2022] [Accepted: 02/10/2022] [Indexed: 11/25/2022] Open
Abstract
Research on detecting Tuberculosis (TB) findings on chest radiographs (or chest X-rays: CXRs) using convolutional neural networks (CNNs) has demonstrated superior performance due to the emergence of publicly available, large-scale datasets with expert annotations and the availability of scalable computational resources. However, these studies use only the frontal CXR projections, i.e., the posterior-anterior (PA) and anterior-posterior (AP) views, for analysis and decision-making. Lateral CXRs, which have heretofore received little study, help detect clinically suspected pulmonary TB, particularly in children. Further, Vision Transformers (ViTs) with built-in self-attention mechanisms have recently emerged as a viable alternative to traditional CNNs. Although ViTs have demonstrated notable performance in several medical image analysis tasks, trade-offs in performance and computational efficiency exist between CNN and ViT models, necessitating a comprehensive analysis to select appropriate models for the problem under study. This study aims to detect TB-consistent findings in lateral CXRs by constructing an ensemble of CNN and ViT models. Several models are trained on lateral CXR data extracted from two large public collections to transfer modality-specific knowledge and fine-tune them for detecting findings consistent with TB. We observed that the weighted averaging ensemble of the predictions of the CNN and ViT models, using optimal weights computed with the Sequential Least-Squares Quadratic Programming method, delivered significantly superior performance (MCC: 0.8136, 95% confidence interval (CI): 0.7394, 0.8878, p < 0.05) compared to the individual models and other ensembles. We also interpreted the decisions of the CNN and ViT models using class-selective relevance maps and attention maps, respectively, and combined them to highlight the discriminative image regions contributing to the final output.
We observed that (i) the model accuracy is not related to disease region of interest (ROI) localization and (ii) the bitwise-AND of the heatmaps of the top-2-performing models delivered significantly superior ROI localization performance in terms of mean average precision [mAP@(0.1 0.6) = 0.1820, 95% CI: 0.0771,0.2869, p < 0.05], compared to other individual models and ensembles. The code is available at https://github.com/sivaramakrishnan-rajaraman/Ensemble-of-CNN-and-ViT-for-TB-detection-in-lateral-CXR.
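The optimal-weight step described above can be sketched with SciPy's SLSQP solver: minimize a loss over convex combination weights, constrained to sum to one and bounded in [0, 1]. This is an illustrative sketch under our own assumptions (log loss as the objective, toy two-model data), not the authors' code.

```python
import numpy as np
from scipy.optimize import minimize

def optimal_ensemble_weights(member_probs, labels):
    """member_probs: (n_models, n_samples) predicted probabilities.
    Find convex weights minimizing log loss via SLSQP, with a
    sum-to-one equality constraint and [0, 1] bounds per weight."""
    n = member_probs.shape[0]

    def loss(w):
        p = np.clip(w @ member_probs, 1e-12, 1 - 1e-12)
        return -np.mean(labels * np.log(p) + (1 - labels) * np.log(1 - p))

    res = minimize(
        loss,
        x0=np.full(n, 1.0 / n),          # start from equal weighting
        method="SLSQP",
        bounds=[(0.0, 1.0)] * n,
        constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0},
    )
    return res.x

# Toy example: model 0 is well calibrated, model 1 is weak/noisy.
labels = np.array([1, 0, 1, 1, 0, 0])
probs = np.array([
    [0.9, 0.1, 0.8, 0.85, 0.2, 0.15],   # good model
    [0.6, 0.55, 0.5, 0.45, 0.5, 0.6],   # weak model
])
w = optimal_ensemble_weights(probs, labels)
```

The solver shifts weight toward the better-calibrated member, which is the mechanism by which the weighted ensemble above can beat a simple average.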
|
21
|
Open World Active Learning for Echocardiography View Classification. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2022; 12033:120330J. [PMID: 36860349 PMCID: PMC9972485 DOI: 10.1117/12.2612578] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
Existing works for automated echocardiography view classification are designed under the assumption that the views in the testing set must belong to a limited number of views that have appeared in the training set. Such a design is called closed world classification. This assumption may be too strict for real-world environments that are open and often have unseen examples, drastically weakening the robustness of the classical view classification approaches. In this work, we developed an open world active learning approach for echocardiography view classification, where the network classifies images of known views into their respective classes and identifies images of unknown views. Then, a clustering approach is used to cluster the unknown views into various groups to be labeled by echocardiologists. Finally, the new labeled samples are added to the initial set of known views and used to update the classification network. This process of actively labeling unknown clusters and integrating them into the classification model significantly increases the efficiency of data labeling and the robustness of the classifier. Our results using an echocardiography dataset containing known and unknown views showed the superiority of the proposed approach as compared to the closed world view classification approaches.
|
22
|
Novel loss functions for ensemble-based medical image classification. PLoS One 2021; 16:e0261307. [PMID: 34968393 PMCID: PMC8718001 DOI: 10.1371/journal.pone.0261307] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2021] [Accepted: 11/29/2021] [Indexed: 01/08/2023] Open
Abstract
Medical images commonly exhibit multiple abnormalities. Predicting them requires multi-class classifiers whose training and desired reliable performance can be affected by a combination of factors, such as dataset size, data source, distribution, and the loss function used to train deep neural networks. Currently, the cross-entropy loss remains the de-facto loss function for training deep learning classifiers. This loss function, however, asserts equal learning from all classes, leading to a bias toward the majority class. Although the choice of the loss function impacts model performance, to the best of our knowledge, no existing literature performs a comprehensive analysis and selection of an appropriate loss function for the classification task under study. In this work, we benchmark various state-of-the-art loss functions, critically analyze model performance, and propose improved loss functions for a multi-class classification task. We select a pediatric chest X-ray (CXR) dataset that includes images with no abnormality (normal) and those exhibiting manifestations consistent with bacterial and viral pneumonia. We construct prediction-level and model-level ensembles to improve classification performance. Our results show that, compared to the individual models and the state-of-the-art literature, the weighted averaging of the predictions for the top-3 and top-5 model-level ensembles delivered significantly superior classification performance (p < 0.05) in terms of the MCC (0.9068, 95% confidence interval (0.8839, 0.9297)) metric. Finally, we performed localization studies to interpret model behavior and confirm that the individual models and ensembles learned task-specific features and highlighted disease-specific regions of interest. The code is available at https://github.com/sivaramakrishnan-rajaraman/multiloss_ensemble_models.
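The majority-class bias noted above, plain cross-entropy weighting every class equally, is commonly countered by per-class weights. The following is a generic class-weighted cross-entropy sketch, not one of the paper's proposed loss functions; the function name and toy data are ours.

```python
import numpy as np

def weighted_cross_entropy(probs, labels, class_weights):
    """probs: (n, c) softmax outputs; labels: (n,) integer classes.
    Per-class weights amplify the loss contribution of minority classes;
    plain cross-entropy is the special case with all weights equal."""
    p = np.clip(probs[np.arange(len(labels)), labels], 1e-12, 1.0)
    w = np.asarray(class_weights)[labels]
    return float(np.mean(-w * np.log(p)))

probs = np.array([[0.9, 0.1],
                  [0.2, 0.8]])
labels = np.array([0, 1])
plain = weighted_cross_entropy(probs, labels, [1.0, 1.0])       # standard CE
upweighted = weighted_cross_entropy(probs, labels, [1.0, 2.0])  # class 1 doubled
```

Up-weighting a minority class increases its penalty during training, nudging the optimizer away from majority-class shortcuts.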
|
23
|
Trilateral Attention Network for Real-Time Cardiac Region Segmentation. IEEE ACCESS : PRACTICAL INNOVATIONS, OPEN SOLUTIONS 2021; 9:118205-118214. [PMID: 35317287 PMCID: PMC8936584 DOI: 10.1109/access.2021.3107303] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
The accurate segmentation of cardiac images into anatomically meaningful regions is critical for the extraction of quantitative cardiac indices. The common pipeline for segmentation comprises regions of interest (ROIs) localization and segmentation stages that are independent of each other and typically performed using separate models. In this paper, we propose an end-to-end network, called Trilateral Attention Network (TaNet), for real-time region localization and segmentation. TaNet has a module for ROIs localization and three segmentation pathways: spatial pathway, handcrafted pathway, and context pathway. The localization module focuses segmentation attention on the desired region while learning the context relationship between different regions in the image. The localized regions are then sent to the three pathways for segmentation. The spatial pathway, which has regular convolutional kernels, is used to extract deep features at different levels of abstraction. The handcrafted pathway, which has hand-designed convolutional kernels, is used to extract a unique set of features complementary to the deep features. Finally, the context (or global) pathway is used to enlarge the receptive field. By jointly training TaNet for localization and segmentation, TaNet achieved superior performance, in terms of accuracy and speed, when evaluated on two echocardiography datasets for cardiac region segmentation.
|
24
|
UMS-Rep: Unified modality-specific representation for efficient medical image analysis. INFORMATICS IN MEDICINE UNLOCKED 2021; 24:100571. [PMID: 38577267 PMCID: PMC10994192 DOI: 10.1016/j.imu.2021.100571] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022] Open
Abstract
Medical image analysis typically includes several tasks such as enhancement, segmentation, and classification. Traditionally, these tasks are implemented using separate deep learning models for separate tasks, which is not efficient because it involves unnecessary training repetitions, demands greater computational resources, and requires a relatively large amount of labeled data. In this paper, we propose a multi-task training approach for medical image analysis, where individual tasks are fine-tuned simultaneously through relevant knowledge transfer using a unified modality-specific feature representation (UMS-Rep). We explore different fine-tuning strategies to demonstrate the impact of the strategy on the performance of target medical image tasks. We experiment with different visual tasks (e.g., image denoising, segmentation, and classification) to highlight the advantages offered with our approach for two imaging modalities, chest X-ray and Doppler echocardiography. Our results demonstrate that the proposed approach reduces the overall demand for computational resources and improves target task generalization and performance. Specifically, the proposed approach improves accuracy (up to ∼ 9% ↑) and decreases computational time (up to ∼ 86% ↓) as compared to the baseline approach. Further, our results prove that the performance of target tasks in medical images is highly influenced by the utilized fine-tuning strategy.
|
25
|
Improved Semantic Segmentation of Tuberculosis-Consistent Findings in Chest X-rays Using Augmented Training of Modality-Specific U-Net Models with Weak Localizations. Diagnostics (Basel) 2021; 11:diagnostics11040616. [PMID: 33808240 PMCID: PMC8065621 DOI: 10.3390/diagnostics11040616] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2021] [Revised: 03/25/2021] [Accepted: 03/28/2021] [Indexed: 11/16/2022] Open
Abstract
Deep learning (DL) has drawn tremendous attention for object localization and recognition in both natural and medical images. U-Net segmentation models have demonstrated superior performance compared to conventional hand-crafted feature-based methods. Medical image modality-specific DL models are better at transferring domain knowledge to a relevant target task than those pretrained on stock photography images. This characteristic helps improve model adaptation, generalization, and class-specific region of interest (ROI) localization. In this study, we train chest X-ray (CXR) modality-specific U-Nets and other state-of-the-art U-Net models for semantic segmentation of tuberculosis (TB)-consistent findings. Automated segmentation of such manifestations could help radiologists reduce errors and supplement decision-making while improving patient care and productivity. Our approach uses the publicly available TBX11K CXR dataset with weak TB annotations, typically provided as bounding boxes, to train a set of U-Net models. Next, we improve the results by augmenting the training data with weak localization, postprocessed into an ROI mask, from a DL classifier trained to classify CXRs as showing normal lungs or suspected TB manifestations. Test data are individually derived from the TBX11K CXR training distribution and other cross-institutional collections, including the Shenzhen TB and Montgomery TB CXR datasets. We observe that our augmented training strategy helped the CXR modality-specific U-Net models achieve superior performance with test data derived from the TBX11K CXR training distribution and cross-institutional collections (p < 0.05). We believe that this is the first study to i) use CXR modality-specific U-Nets for semantic segmentation of TB-consistent ROIs and ii) evaluate the segmentation performance while augmenting the training data with weak TB-consistent localizations.
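The postprocessing step above, turning weak bounding-box annotations into ROI masks for U-Net training, amounts to rasterizing boxes into a binary image. A minimal sketch (function name and box convention are our assumptions):

```python
import numpy as np

def boxes_to_mask(shape, boxes):
    """Rasterize weak bounding-box annotations, given as (x1, y1, x2, y2)
    with exclusive upper bounds, into a single binary ROI mask suitable
    as a segmentation training target."""
    mask = np.zeros(shape, dtype=np.uint8)
    for x1, y1, x2, y2 in boxes:
        mask[y1:y2, x1:x2] = 1  # rows are y, columns are x
    return mask

# Two non-overlapping weak annotations on a 16x16 image.
mask = boxes_to_mask((16, 16), [(2, 3, 6, 7), (10, 10, 14, 12)])
```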
|
26
|
Analyzing inter-reader variability affecting deep ensemble learning for COVID-19 detection in chest radiographs. PLoS One 2020; 15:e0242301. [PMID: 33180877 PMCID: PMC7660555 DOI: 10.1371/journal.pone.0242301] [Citation(s) in RCA: 27] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2020] [Accepted: 11/01/2020] [Indexed: 01/17/2023] Open
Abstract
Data-driven deep learning (DL) methods using convolutional neural networks (CNNs) demonstrate promising performance in natural image computer vision tasks. However, their use in medical computer vision tasks faces several limitations, viz., (i) adapting to visual characteristics that are unlike natural images; (ii) modeling random noise during training due to stochastic optimization and backpropagation-based learning strategy; (iii) challenges in explaining DL black-box behavior to support clinical decision-making; and (iv) inter-reader variability in the ground truth (GT) annotations affecting learning and evaluation. This study proposes a systematic approach to address these limitations through application to the pandemic-caused need for Coronavirus disease 2019 (COVID-19) detection using chest X-rays (CXRs). Specifically, our contribution highlights significant benefits obtained through (i) pretraining specific to CXRs in transferring and fine-tuning the learned knowledge toward improving COVID-19 detection performance; (ii) using ensembles of the fine-tuned models to further improve performance over individual constituent models; (iii) performing statistical analyses at various learning stages for validating results; (iv) interpreting learned individual and ensemble model behavior through class-selective relevance mapping (CRM)-based region of interest (ROI) localization; and, (v) analyzing inter-reader variability and ensemble localization performance using Simultaneous Truth and Performance Level Estimation (STAPLE) methods. We find that ensemble approaches markedly improved classification and localization performance, and that inter-reader variability and performance level assessment helps guide algorithm design and parameter optimization. To the best of our knowledge, this is the first study to construct ensembles, perform ensemble-based disease ROI localization, and analyze inter-reader variability and algorithm performance for COVID-19 detection in CXRs.
|
27
|
Malaria Screener: a smartphone application for automated malaria screening. BMC Infect Dis 2020; 20:825. [PMID: 33176716 PMCID: PMC7656677 DOI: 10.1186/s12879-020-05453-1] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/20/2020] [Accepted: 09/24/2020] [Indexed: 12/14/2022] Open
Abstract
Background: Light microscopy is often used for malaria diagnosis in the field. However, it is time-consuming and the quality of the results depends heavily on the skill of microscopists. Automating malaria light microscopy is a promising solution, but it still remains a challenge and an active area of research. Current tools are often expensive and involve sophisticated hardware components, which makes it hard to deploy them in resource-limited areas.
Results: We designed an Android mobile application called Malaria Screener, which makes smartphones an affordable yet effective solution for automated malaria light microscopy. The mobile app utilizes the high-resolution cameras and computing power of modern smartphones to screen both thin and thick blood smear images for P. falciparum parasites. Malaria Screener combines image acquisition, smear image analysis, and result visualization in its slide screening process, and is equipped with a database to provide easy access to the acquired data.
Conclusion: Malaria Screener makes the screening process faster, more consistent, and less dependent on human expertise. The app is modular, allowing other research groups to integrate their methods and models for image processing and machine learning, while acquiring and analyzing their data.
|
28
|
Iteratively Pruned Deep Learning Ensembles for COVID-19 Detection in Chest X-rays. IEEE ACCESS : PRACTICAL INNOVATIONS, OPEN SOLUTIONS 2020; 8:115041-115050. [PMID: 32742893 PMCID: PMC7394290 DOI: 10.1109/access.2020.3003810] [Citation(s) in RCA: 106] [Impact Index Per Article: 26.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/13/2020] [Accepted: 06/17/2020] [Indexed: 05/08/2023]
Abstract
We demonstrate use of iteratively pruned deep learning model ensembles for detecting pulmonary manifestation of COVID-19 with chest X-rays. This disease is caused by the novel Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) virus, also known as the novel Coronavirus (2019-nCoV). A custom convolutional neural network and a selection of ImageNet pretrained models are trained and evaluated at patient-level on publicly available CXR collections to learn modality-specific feature representations. The learned knowledge is transferred and fine-tuned to improve performance and generalization in the related task of classifying CXRs as normal, showing bacterial pneumonia, or COVID-19-viral abnormalities. The best performing models are iteratively pruned to reduce complexity and improve memory efficiency. The predictions of the best-performing pruned models are combined through different ensemble strategies to improve classification performance. Empirical evaluations demonstrate that the weighted average of the best-performing pruned models significantly improves performance resulting in an accuracy of 99.01% and area under the curve of 0.9972 in detecting COVID-19 findings on CXRs. The combined use of modality-specific knowledge transfer, iterative model pruning, and ensemble learning resulted in improved predictions. We expect that this model can be quickly adopted for COVID-19 screening using chest radiographs.
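The iterative-pruning idea above, repeatedly removing low-salience parameters to shrink a trained model, can be illustrated with generic magnitude pruning over one weight tensor. This sketch is our own simplification (the fine-tuning between rounds that real pipelines perform is omitted, and all names are assumptions):

```python
import numpy as np

def iterative_magnitude_prune(weights, target_sparsity=0.5, rounds=5):
    """Zero out the smallest-magnitude weights over several rounds,
    raising sparsity gradually toward `target_sparsity`. In a real
    pipeline the model would be fine-tuned after each round to recover
    accuracy before pruning further."""
    w = weights.copy()
    for r in range(1, rounds + 1):
        sparsity = target_sparsity * r / rounds
        k = int(w.size * sparsity)          # number of weights to drop
        if k == 0:
            continue
        threshold = np.sort(np.abs(w), axis=None)[k - 1]
        w[np.abs(w) <= threshold] = 0.0
    return w

rng = np.random.default_rng(42)
pruned = iterative_magnitude_prune(rng.normal(size=(64, 64)),
                                   target_sparsity=0.5)
```

Gradual schedules like this tend to preserve accuracy better than pruning the full fraction in one shot, which is the motivation for the iterative approach.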
|
29
|
Weakly Labeled Data Augmentation for Deep Learning: A Study on COVID-19 Detection in Chest X-Rays. Diagnostics (Basel) 2020; 10:E358. [PMID: 32486140 PMCID: PMC7345787 DOI: 10.3390/diagnostics10060358] [Citation(s) in RCA: 51] [Impact Index Per Article: 12.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2020] [Revised: 05/26/2020] [Accepted: 05/29/2020] [Indexed: 01/05/2023] Open
Abstract
The novel severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has caused a pandemic resulting in over 2.7 million infected individuals and over 190,000 deaths and growing. Assertions in the literature suggest that respiratory disorders due to COVID-19 commonly present with pneumonia-like symptoms which are radiologically confirmed as opacities. Radiology serves as an adjunct to the reverse transcription-polymerase chain reaction test for confirmation and evaluating disease progression. While computed tomography (CT) imaging is more specific than chest X-rays (CXR), its use is limited due to cross-contamination concerns. CXR imaging is commonly used in high-demand situations, placing a significant burden on radiology services. The use of artificial intelligence (AI) has been suggested to alleviate this burden. However, there is a dearth of sufficient training data for developing image-based AI tools. We propose increasing training data for recognizing COVID-19 pneumonia opacities using weakly labeled data augmentation. This follows from a hypothesis that the COVID-19 manifestation would be similar to that caused by other viral pathogens affecting the lungs. We expand the training data distribution for supervised learning through the use of weakly labeled CXR images, automatically pooled from publicly available pneumonia datasets, to classify them into those with bacterial or viral pneumonia opacities. Next, we use these selected images in a stage-wise, strategic approach to train convolutional neural network-based algorithms and compare against those trained with non-augmented data. Weakly labeled data augmentation expands the learned feature space in an attempt to encompass variability in unseen test distributions, enhance inter-class discrimination, and reduce the generalization error. 
Empirical evaluations demonstrate that simple weakly labeled data augmentation (Acc: 0.5555 and Acc: 0.6536) is better than baseline non-augmented training (Acc: 0.2885 and Acc: 0.5028) in identifying COVID-19 manifestations as viral pneumonia. Interestingly, adding COVID-19 CXRs to simple weakly labeled augmented training data significantly improves the performance (Acc: 0.7095 and Acc: 0.8889), suggesting that COVID-19, though viral in origin, creates a uniquely different presentation in CXRs compared with other viral pneumonia manifestations.
|
30
|
Assessment of an ensemble of machine learning models toward abnormality detection in chest radiographs. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2020; 2019:3689-3692. [PMID: 31946676 DOI: 10.1109/embc.2019.8856715] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Respiratory diseases account for a significant proportion of deaths and disabilities across the world. Chest X-ray (CXR) analysis remains a common diagnostic imaging modality for confirming intra-thoracic cardiopulmonary abnormalities. However, there remains an acute shortage of expert radiologists, particularly in under-resourced settings, resulting in severe interpretation delays. These issues can be mitigated by a computer-aided diagnostic (CADx) system to supplement decision-making and improve throughput while preserving and possibly improving the standard-of-care. Systems reported in the literature or popular media use handcrafted features and/or data-driven algorithms like deep learning (DL) to learn underlying data distributions. The remarkable success of convolutional neural networks (CNN) toward image recognition tasks has made them a promising choice for automated medical image analyses. However, CNNs suffer from high variance and may overfit due to their sensitivity to training data fluctuations. Ensemble learning helps to reduce this variance by combining predictions of multiple learning algorithms to construct complex, non-linear functions and improve robustness and generalization. This study aims to construct and assess the performance of an ensemble of machine learning (ML) models applied to the challenge of classifying normal and abnormal CXRs and significantly reducing the diagnostic load of radiologists and primary-care physicians.
|
31
|
Training deep learning algorithms with weakly labeled pneumonia chest X-ray data for COVID-19 detection. MEDRXIV : THE PREPRINT SERVER FOR HEALTH SCIENCES 2020:2020.05.04.20090803. [PMID: 32511448 PMCID: PMC7239073 DOI: 10.1101/2020.05.04.20090803] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
The novel Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) has caused a pandemic resulting in over 2.7 million infected individuals and over 190,000 deaths and growing. Respiratory disorders in COVID-19 caused by the virus commonly present as viral pneumonia-like opacities in chest X-ray images which are used as an adjunct to the reverse transcription-polymerase chain reaction test for confirmation and evaluating disease progression. The surge places high demand on medical services including radiology expertise. However, there is a dearth of sufficient training data for developing image-based automated decision support tools to alleviate radiological burden. We address this insufficiency by expanding training data distribution through use of weakly-labeled images pooled from publicly available CXR collections showing pneumonia-related opacities. We use the images in a stage-wise, strategic approach and train convolutional neural network-based algorithms to detect COVID-19 infections in CXRs. It is observed that weakly-labeled data augmentation improves performance with the baseline test data compared to non-augmented training by expanding the learned feature space to encompass variability in the unseen test distribution to enhance inter-class discrimination, reduce intra-class similarity and generalization error. Augmentation with COVID-19 CXRs from individual collections significantly improves performance compared to baseline non-augmented training and weakly-labeled augmentation toward detecting COVID-19 like viral pneumonia in the publicly available COVID-19 CXR collections. This underscores the fact that COVID-19 CXRs have a distinct pattern and hence distribution, unlike non-COVID-19 viral pneumonia and other infectious agents.
|
32
|
Detection and visualization of abnormality in chest radiographs using modality-specific convolutional neural network ensembles. PeerJ 2020; 8:e8693. [PMID: 32211231 PMCID: PMC7083159 DOI: 10.7717/peerj.8693] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2019] [Accepted: 02/05/2020] [Indexed: 11/20/2022] Open
Abstract
Convolutional neural networks (CNNs) trained on natural images are extremely successful in image classification and localization due to their superior automated feature extraction capability. In extending their use to biomedical recognition tasks, it is important to note that the visual features of medical images tend to be distinctly different from those of natural images. There are advantages offered through training these networks on large-scale collections of images from a common medical modality pertaining to the recognition task. Further, improved generalization in transferring knowledge across similar tasks is possible when the models are trained to learn modality-specific features and then suitably repurposed for the target task. In this study, we propose modality-specific ensemble learning toward improving abnormality detection in chest X-rays (CXRs). CNN models are trained on a large-scale CXR collection to learn modality-specific features and then repurposed for detecting and localizing abnormalities. Model predictions are combined using different ensemble strategies toward reducing prediction variance and sensitivity to the training data while improving overall performance and generalization. Class-selective relevance mapping (CRM) is used to visualize the learned behavior of the individual models and their ensembles. It localizes discriminative regions of interest (ROIs) showing abnormal regions and offers an improved explanation of model predictions. It was observed that the model ensembles demonstrate superior localization performance in terms of the Intersection over Union (IoU) and mean Average Precision (mAP) metrics compared to any individual constituent model.
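The IoU metric used above to score localization has a simple closed form for axis-aligned boxes; a minimal sketch (function name and box convention are ours):

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2):
    overlap area divided by the area of the union."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

A predicted ROI is typically counted as a correct localization when its IoU with the ground-truth box exceeds a chosen threshold; mAP then averages precision over such thresholds and detections.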
Collapse
|
33
|
Assessment of Data Augmentation Strategies Toward Performance Improvement of Abnormality Classification in Chest Radiographs. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2020; 2019:841-844. [PMID: 31946026 DOI: 10.1109/embc.2019.8857516] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
Image augmentation is commonly performed to mitigate class imbalance in datasets, compensate for insufficient training samples, or prevent model overfitting. Traditional augmentation (TA) techniques include various image transformations, such as rotation, translation, and channel splitting. Alternatively, the Generative Adversarial Network (GAN), given its proven ability to synthesize convincingly realistic images, has also been used to perform image augmentation. However, it is unclear whether the GAN augmentation (GA) strategy provides an advantage over TA for medical image classification tasks. In this paper, we study the usefulness of TA and GA for classifying abnormal chest X-ray (CXR) images. We first trained a progressive-growing GAN (PG-GAN) to synthesize high-resolution CXRs for performing GA. Then, we trained an abnormality classifier on three training sets individually: with TA, with GA, and with no augmentation (NA). Finally, we analyzed the classifier's performance in the three training cases, which led to the following conclusions: (1) the GA strategy is not always superior to TA for improving the classifier's performance; (2) in comparison to NA, however, both TA and GA lead to a significant performance improvement; and (3) increasing the quantity of images in the TA and GA strategies also improves the classifier's performance.
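The traditional-augmentation transforms mentioned above can be sketched on a toy 2-D image represented as nested lists. This is a hypothetical, library-free illustration of the geometric transforms, not the paper's pipeline:

```python
def hflip(img):
    """Horizontal flip: mirror each row."""
    return [row[::-1] for row in img]

def translate_right(img, dx, fill=0):
    """Shift each row right by dx pixels, padding with `fill`."""
    return [[fill] * dx + row[:len(row) - dx] for row in img]

def rotate90(img):
    """Rotate the image 90 degrees clockwise."""
    return [list(col) for col in zip(*img[::-1])]

img = [[1, 2],
       [3, 4]]
augmented = [hflip(img), translate_right(img, 1), rotate90(img)]
```

Each transform produces a new training sample that preserves the image's label, which is the defining property of TA.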
Collapse
|
34
|
Modality-specific deep learning model ensembles toward improving TB detection in chest radiographs. IEEE ACCESS : PRACTICAL INNOVATIONS, OPEN SOLUTIONS 2020; 8:27318-27326. [PMID: 32257736 PMCID: PMC7120763 DOI: 10.1109/access.2020.2971257] [Citation(s) in RCA: 35] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/05/2023]
Abstract
The proposed study evaluates the efficacy of knowledge transfer gained through an ensemble of modality-specific deep learning models toward improving the state of the art in tuberculosis (TB) detection. A custom convolutional neural network (CNN) and selected popular pretrained CNNs are trained to learn modality-specific features from large-scale, publicly available chest X-ray (CXR) collections, including (i) the RSNA dataset (normal = 8851, abnormal = 17833), (ii) the Pediatric pneumonia dataset (normal = 1583, abnormal = 4273), and (iii) the Indiana dataset (normal = 1726, abnormal = 2378). The knowledge acquired through modality-specific learning is transferred and fine-tuned for TB detection on the publicly available Shenzhen CXR collection (normal = 326, abnormal = 336). The predictions of the best-performing models are combined using different ensemble methods to demonstrate improved performance over any individual constituent model in classifying TB-infected and normal CXRs. The models are evaluated through cross-validation (n = 5) at the patient level to prevent overfitting and improve robustness and generalization. A stacked ensemble of the top-3 retrained models demonstrates promising performance (accuracy: 0.941; 95% confidence interval (CI): [0.899, 0.985]; area under the curve (AUC): 0.995; 95% CI: [0.945, 1.00]). One-way ANOVA analyses show no statistically significant differences in accuracy (P = .759) or AUC (P = .831) among the ensemble methods. Knowledge transferred through modality-specific learning of relevant features helped improve the classification. The ensemble model resulted in reduced prediction variance and sensitivity to training data fluctuations. Results from their combined use are superior to the state of the art.
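Patient-level cross-validation, as used above, means splitting on patients rather than on individual images, so no patient's images leak between training and validation. A minimal sketch with hypothetical patient IDs:

```python
def patient_level_folds(patient_ids, n_folds=5):
    """Partition unique patients (not images) into folds so that no
    patient can appear in both the training and validation splits."""
    unique = sorted(set(patient_ids))
    return [unique[i::n_folds] for i in range(n_folds)]

# Image-level ID list; note p1 and p3 each contribute two images.
ids = ["p1", "p1", "p2", "p3", "p3", "p4", "p5"]
folds = patient_level_folds(ids, n_folds=3)
```

Images are then assigned to the fold of their patient, which is what keeps the evaluation honest when one patient has multiple CXRs.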
Collapse
|
35
|
A novel stacked generalization of models for improved TB detection in chest radiographs. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2019; 2018:718-721. [PMID: 30440497 DOI: 10.1109/embc.2018.8512337] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Chest X-ray (CXR) analysis is a common part of the protocol for confirming active pulmonary tuberculosis (TB). However, many TB-endemic regions are severely resource-constrained in radiological services, impairing timely detection and treatment. Computer-aided diagnosis (CADx) tools can supplement decision-making while addressing the gap in expert radiological interpretation during mobile field screening. These tools use hand-engineered and/or convolutional neural network (CNN)-computed image features. The CNN, a class of deep learning (DL) models, has gained research prominence in visual recognition. Ensemble learning has been shown to construct non-linear decision-making functions that improve visual recognition. We create a stacked ensemble of classifiers with hand-engineered and CNN features toward improving TB detection in CXRs. The results obtained are highly promising and superior to the state of the art.
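The stacked-generalization idea above can be sketched in plain Python: base-classifier outputs become meta-features for a second-level learner. A tiny perceptron stands in for the meta-classifier, and all data are hypothetical (the paper's actual base learners use hand-engineered and CNN features):

```python
def stack_features(base_preds):
    """Turn per-model probability outputs into meta-feature rows."""
    return [list(sample) for sample in zip(*base_preds)]

def train_perceptron(X, y, epochs=50, lr=0.1):
    """Tiny perceptron meta-learner over the stacked features."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1 if sum(wj * xj for wj, xj in zip(w, xi)) + b > 0 else 0
            err = yi - pred
            w = [wj + lr * err * xj for wj, xj in zip(w, xi)]
            b += lr * err
    return w, b

def predict(w, b, xi):
    return 1 if sum(wj * xj for wj, xj in zip(w, xi)) + b > 0 else 0

# Two base models' abnormality probabilities for four CXRs:
base_preds = [[0.9, 0.8, 0.2, 0.1],   # base model A
              [0.7, 0.9, 0.3, 0.2]]   # base model B
X = stack_features(base_preds)        # meta-features, one row per CXR
y = [1, 1, 0, 0]                      # ground-truth labels
w, b = train_perceptron(X, y)
```

The meta-learner sees only the base models' outputs, so it learns how to weight and combine them rather than re-learning the image features.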
Collapse
|
36
|
Performance evaluation of deep neural ensembles toward malaria parasite detection in thin-blood smear images. PeerJ 2019; 7:e6977. [PMID: 31179181 PMCID: PMC6544011 DOI: 10.7717/peerj.6977] [Citation(s) in RCA: 38] [Impact Index Per Article: 7.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/05/2019] [Accepted: 04/14/2019] [Indexed: 01/21/2023] Open
Abstract
Background Malaria is a life-threatening disease caused by Plasmodium parasites that infect red blood cells (RBCs). Manual identification and counting of parasitized cells in microscopic thick/thin-film blood examination remains the common but burdensome method of disease diagnosis. Its diagnostic accuracy is adversely impacted by inter-/intra-observer variability, particularly in large-scale screening under resource-constrained settings. Introduction State-of-the-art computer-aided diagnostic tools based on data-driven deep learning algorithms, such as the convolutional neural network (CNN), have become the architecture of choice for image recognition tasks. However, CNNs suffer from high variance and may overfit due to their sensitivity to training data fluctuations. Objective The primary aim of this study is to reduce model variance and improve robustness and generalization by constructing model ensembles toward detecting parasitized cells in thin-blood smear images. Methods We evaluate the performance of custom and pretrained CNNs and construct an optimal model ensemble toward the challenge of classifying parasitized and normal cells in thin-blood smear images. Cross-validation studies are performed at the patient level to prevent data leakage into the validation set and reduce generalization error. The models are evaluated in terms of the following performance metrics: (a) accuracy; (b) area under the receiver operating characteristic (ROC) curve (AUC); (c) mean squared error (MSE); (d) precision; (e) F-score; and (f) Matthews Correlation Coefficient (MCC). Results The ensemble model constructed with VGG-19 and SqueezeNet outperformed the state of the art in several performance metrics toward classifying parasitized and uninfected cells to aid improved disease screening.
Conclusions Ensemble learning reduces model variance by optimally combining the predictions of multiple models, and decreases sensitivity to the specifics of the training data and the choice of training algorithm. Under real-world conditions, the model ensemble shows reduced variance and overfitting and leads to improved generalization.
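Of the metrics listed in the Methods, the Matthews Correlation Coefficient is the least standard; a plain-Python sketch for binary (0/1) labels:

```python
import math

def mcc(y_true, y_pred):
    """Matthews Correlation Coefficient from the binary confusion matrix."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0
```

MCC ranges from -1 (total disagreement) through 0 (chance) to +1 (perfect prediction), and unlike accuracy it remains informative under class imbalance.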
Collapse
|
37
|
Visual Interpretation of Convolutional Neural Network Predictions in Classifying Medical Image Modalities. Diagnostics (Basel) 2019; 9:diagnostics9020038. [PMID: 30987172 PMCID: PMC6627892 DOI: 10.3390/diagnostics9020038] [Citation(s) in RCA: 36] [Impact Index Per Article: 7.2] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/05/2019] [Revised: 03/29/2019] [Accepted: 04/01/2019] [Indexed: 11/19/2022] Open
Abstract
Deep learning (DL) methods are increasingly being applied to develop reliable computer-aided detection (CADe), diagnosis (CADx), and information retrieval algorithms. However, challenges in interpreting and explaining the learned behavior of DL models hinder their adoption and use in real-world systems. In this study, we propose a novel method called “Class-selective Relevance Mapping” (CRM) for localizing and visualizing discriminative regions of interest (ROIs) within a medical image. Such visualizations offer improved explanations of convolutional neural network (CNN)-based DL model predictions. We demonstrate the effectiveness of CRM in classifying medical imaging modalities toward automatically labeling them for visual information retrieval applications. CRM is based on the linear sum of incremental mean squared errors (MSEs) calculated at the output layer of the CNN model. It measures both the positive and negative contributions of each spatial element in the feature maps produced by the last convolution layer to the correct classification of an input image. A series of experiments on a “multi-modality” CNN model designed to classify seven different image modalities shows that the proposed method is significantly better at detecting and localizing discriminative ROIs than other state-of-the-art class-activation methods. Further, to visualize its effectiveness, we generate “class-specific” ROI maps by averaging the CRM scores of images in each modality class, and characterize the visual explanation through their differing size, shape, and location for our multi-modality CNN model, which achieved over 98% performance on a dataset constructed from publicly available images.
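The core CRM idea above — scoring each spatial element by the change in output-layer MSE when that element is removed — can be sketched in one dimension. The feature map, output-layer weights, and target value here are hypothetical stand-ins for a CNN's final layers, and the sketch is a simplification of the published method:

```python
def crm_scores(feature_map, weights, target):
    """Relevance of each spatial element: rise in output MSE when that
    element is zeroed out (simplified 1-D incremental-MSE sketch)."""
    def output(fm):
        # Hypothetical linear output layer over the feature map.
        return sum(w * v for w, v in zip(weights, fm))

    base_mse = (output(feature_map) - target) ** 2
    scores = []
    for i in range(len(feature_map)):
        masked = feature_map[:i] + [0.0] + feature_map[i + 1:]
        scores.append((output(masked) - target) ** 2 - base_mse)
    return scores

fmap = [0.5, 2.0, 0.1]       # activations from the last conv layer (mock)
weights = [0.3, 0.9, 0.1]    # output-layer weights for the target class (mock)
scores = crm_scores(fmap, weights, target=2.0)
```

A large score means the classification degrades sharply without that element, marking it as a discriminative spatial location.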
Collapse
|
38
|
Long-term outcome of diffuse large B-cell lymphoma: Impact of biosimilar rituximab and radiation. Indian J Cancer 2018; 54:430-435. [PMID: 29469072 DOI: 10.4103/ijc.ijc_241_17] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
INTRODUCTION Rituximab (R)-CHOP improves survival over CHOP in diffuse large B-cell lymphoma (DLBCL). The availability of biosimilar rituximab in India has increased access to this drug. We report on the impact of treatment on outcomes, with special emphasis on the impact of biosimilar rituximab and radiation. METHODS Outcomes of adults (age 15-60 years) treated with CHOP +/- rituximab +/- radiation were analyzed retrospectively with respect to baseline features, treatment, and event-free and overall survival (EFS and OS). RESULTS In the period 2000-2013, 444 patients (median age 47 years, range 15-60; males: 288 [65%]; Stage III/IV: 224 [50%]; age-adjusted International Prognostic Index [aaIPI] score 2 or 3 in 50%) received either CHOP (n = 325 [73%]) or R-CHOP (n = 119 [27%]) therapy. Biosimilar rituximab and the originator were used in 95 (80%) and 24 (20%) patients, respectively. Radiation was given in 134 (30%) patients (Stages I and II, 100/220 [45%]; Stages III and IV, 34/224 [15%]). After a median follow-up of 46 (0.2-126) months, the 5-year EFS and OS were 59% and 68%, respectively. The factors predicting inferior EFS and OS were age > 40 years, performance status 2-4, Stage III/IV, hemoglobin < 12 g/dL, aaIPI score 2 or 3, and nonuse of rituximab and radiation. Radiation used in early-stage disease benefitted all subgroups regardless of bulky disease, use of rituximab, or the number of cycles of chemotherapy. The addition of rituximab improved survival across all aaIPI categories. CONCLUSION The availability of biosimilar rituximab has increased access and improved survival of patients with DLBCL in India. Radiotherapy improved outcomes in early stages.
Collapse
|
39
|
Understanding the learned behavior of customized convolutional neural networks toward malaria parasite detection in thin blood smear images. J Med Imaging (Bellingham) 2018; 5:034501. [PMID: 30035153 DOI: 10.1117/1.jmi.5.3.034501] [Citation(s) in RCA: 28] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2018] [Accepted: 06/25/2018] [Indexed: 11/14/2022] Open
Abstract
Convolutional neural networks (CNNs) have become the architecture of choice for visual recognition tasks. However, these models are perceived as black boxes, since there is a lack of understanding of the behavior they learn from the underlying task of interest. This lack of transparency is a serious drawback, particularly in applications involving medical screening and diagnosis, since poorly understood model behavior could adversely impact subsequent clinical decision-making. Recently, researchers have begun working on this issue, and several methods have been proposed to visualize and understand the behavior of these models. We highlight the advantages offered by visualizing and understanding the weights, saliencies, class activation maps, and region-of-interest localizations in customized CNNs applied to the challenge of classifying parasitized and uninfected cells to aid malaria screening. We provide an explanation of the models' classification decisions. We characterize, evaluate, and statistically validate the performance of different customized CNNs, keeping every training subject's data separate from the validation set.
Collapse
|
40
|
Pre-trained convolutional neural networks as feature extractors toward improved malaria parasite detection in thin blood smear images. PeerJ 2018; 6:e4568. [PMID: 29682411 PMCID: PMC5907772 DOI: 10.7717/peerj.4568] [Citation(s) in RCA: 98] [Impact Index Per Article: 16.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/08/2018] [Accepted: 03/13/2018] [Indexed: 01/14/2023] Open
Abstract
Malaria is a blood disease caused by Plasmodium parasites transmitted through the bite of the female Anopheles mosquito. Microscopists commonly examine thick and thin blood smears to diagnose the disease and compute parasitemia. However, their accuracy depends on smear quality and on expertise in classifying and counting parasitized and uninfected cells. Such examination can be arduous for large-scale diagnosis, resulting in poor quality. State-of-the-art image analysis-based computer-aided diagnosis (CADx) methods, which apply machine learning (ML) techniques with hand-engineered features to microscopic images of the smears, demand expertise in analyzing morphological, textural, and positional variations of the region of interest (ROI). In contrast, Convolutional Neural Networks (CNNs), a class of deep learning (DL) models, promise highly scalable and superior results through end-to-end feature extraction and classification. Automated malaria screening using DL techniques could therefore serve as an effective diagnostic aid. In this study, we evaluate the performance of pretrained CNN-based DL models as feature extractors toward classifying parasitized and uninfected cells to aid improved disease screening. We experimentally determine the optimal model layers for feature extraction from the underlying data. Statistical validation of the results demonstrates that pretrained CNNs are a promising tool for feature extraction for this purpose.
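Using a pretrained network as a feature extractor, as described above, amounts to truncating the forward pass at an empirically chosen layer. A mock network of simple layer functions illustrates the mechanics; everything here (the layers, the input) is a hypothetical stand-in for a real pretrained CNN:

```python
def extract_features(layers, x, cut):
    """Forward the input through the first `cut` layers only and
    return the intermediate representation as a feature vector."""
    for layer in layers[:cut]:
        x = layer(x)
    return x

# Mock "pretrained" layers: each transforms a list of activations.
layers = [
    lambda v: [2 * a for a in v],         # conv-like scaling
    lambda v: [max(0.0, a) for a in v],   # ReLU
    lambda v: [sum(v) / len(v)],          # global average pool
]
features = extract_features(layers, [1.0, -2.0, 3.0], cut=2)
```

Sweeping `cut` over candidate layers and scoring a downstream classifier on each resulting feature set is one way to determine the optimal extraction layer empirically.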
Collapse
|
41
|
Phase II study of interim PET-CT-guided response-adapted therapy in advanced Hodgkin's lymphoma. Ann Oncol 2015; 26:1170-1174. [PMID: 25701453 DOI: 10.1093/annonc/mdv077] [Citation(s) in RCA: 33] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2014] [Accepted: 02/09/2015] [Indexed: 11/13/2022] Open
Abstract
BACKGROUND Combination chemotherapy with ABVD (doxorubicin, bleomycin, vinblastine, and dacarbazine) cures ∼70% of patients with advanced Hodgkin's lymphoma (aHL; stages IIB, III, and IV), while the more toxic escalated BEACOPP (EB; a combination of bleomycin, etoposide, doxorubicin, cyclophosphamide, vincristine, procarbazine, and prednisolone) increases cure rates to 85%. Patients with a positive interim positron emission tomography-computerized tomography (PET-CT) scan after two cycles (PET-2) of ABVD have very poor outcomes with continued ABVD. Intensifying therapy with EB in PET-2-positive patients ('response-adapted therapy') may improve cure rates, whereas negative patients can continue ABVD alone. PATIENTS AND METHODS Eligible patients with newly diagnosed aHL received two cycles of ABVD and underwent PET-2 (scored with the semi-quantitative 5-point visual criteria, the 'Deauville score'). PET-2-negative patients continued with four additional cycles of ABVD, whereas PET-2-positive patients received four cycles of EB. A phase II sample size of 50 was estimated, keeping the lower and higher proportions of rejection of event-free survival (EFS) as 70% and 85%, respectively. RESULTS Fifty patients [median age 28 (12-60) years; male : female, 39 : 11; stages: IIB, 3 (6%); III, 29 (58%); IV, 18 (36%); International Prognostic Score (IPS) 0-3: 34 (68%); 4-7: 16 (32%)] were enrolled; 49 underwent PET-2. Eight (16%) were PET-2-positive, whereas 41 (84%) were negative. Forty-seven were evaluable for EFS and all 50 for overall survival (OS). The 2-year EFS was 76% (95% CI: 68-83) and OS was 88% (95% CI: 82-94). PET-2 was strongly prognostic: 2-year EFS, negative versus positive, 82% versus 50%; P = 0.013. CONCLUSION The PET-2 response-adapted strategy could not achieve an EFS of 85% in aHL. However, escalated therapy improved outcomes in PET-2-positive patients compared with historical data.
TRIAL REGISTRATION CTRI/2012/06/002741 (http://www.ctri.nic.in) and NCT01304849 (http://www.clinicaltrials.gov).
Collapse
|
42
|
Abstract
Our previous studies have shown PTH to be an effective relaxant of smooth muscle throughout the mammalian gastrointestinal (GI) tract. Recently, we found PTHrP to be equally as potent and effective on the gut as PTH, and we hypothesized that PTHrP, rather than PTH, might be the natural ligand for the gut receptors that mediate GI smooth muscle relaxation. To approach this question, we asked whether rat GI tissue expresses mRNA for PTHrP. Using selective reverse transcription and PCR, we found PTHrP mRNA in smooth muscle throughout the rat GI tract, and in gastric and colonic mucosa as well. Our findings support the idea that PTHrP can be produced by GI tissues and that it may function there as an autocrine or paracrine factor. One of its actions may involve regulation of GI muscle tone.
Collapse
|
43
|
Performance Evaluation of Bio-Inspired Optimization Algorithms in Resolving Chromosomal Occlusions. JOURNAL OF MEDICAL IMAGING AND HEALTH INFORMATICS 2015. [DOI: 10.1166/jmihi.2015.1381] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
|
44
|
Design of a Functional Training Prototype for Neonatal Resuscitation. CHILDREN-BASEL 2014; 1:441-56. [PMID: 27417489 PMCID: PMC4928735 DOI: 10.3390/children1030441] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/02/2014] [Revised: 11/06/2014] [Accepted: 11/07/2014] [Indexed: 11/16/2022]
Abstract
Birth asphyxia is considered one of the leading causes of neonatal mortality around the world. Asphyxiated neonates require skilled resuscitation to survive the neonatal period. The project aims to train health professionals in basic newborn care using a prototype, with the ultimate objective of having one person trained in neonatal resuscitation at every delivery. The prototype is a user-friendly device with which one can be trained to perform neonatal resuscitation in resource-limited settings. It consists of a Force Sensing Resistor (FSR) that measures the pressure applied, interfaced with an Arduino®, which controls the Liquid Crystal Display (LCD) and Light Emitting Diode (LED) indications for pressure and compression counts. With the increase in population and the absence of proper medical care, the need for a neonatal resuscitation program is not well addressed. The proposed work offers a promising solution for training healthcare personnel to resuscitate newborn babies in low-resource settings.
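The prototype's core feedback logic — classifying an FSR reading against a target pressure band and counting compressions as threshold crossings — can be sketched in Python as a simulation. The thresholds and sensor readings are hypothetical; the actual device runs this kind of logic on the Arduino:

```python
LOW, HIGH = 300, 600  # hypothetical ADC band for adequate pressure

def classify(reading):
    """Map a raw FSR reading to feedback for the trainee."""
    if reading < LOW:
        return "too soft"
    if reading > HIGH:
        return "too hard"
    return "ok"

def count_compressions(readings, threshold=LOW):
    """Count rising edges through the threshold: one per compression."""
    count, pressed = 0, False
    for r in readings:
        if r >= threshold and not pressed:
            count += 1
            pressed = True
        elif r < threshold:
            pressed = False
    return count
```

Edge counting (rather than counting every above-threshold sample) is what prevents a single sustained press from registering as multiple compressions.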
Collapse
|
45
|
Classification of Denver System of Chromosomes Using Similarity Classifier Guided by OWA Operators. Curr Bioinform 2014. [DOI: 10.2174/1574893608666131231231238] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022]
|
46
|
Automated registration of sequential breath-hold dynamic contrast-enhanced MR images: a comparison of three techniques. Magn Reson Imaging 2011; 29:668-82. [PMID: 21531108 DOI: 10.1016/j.mri.2011.02.012] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2010] [Revised: 11/04/2010] [Accepted: 02/20/2011] [Indexed: 10/18/2022]
Abstract
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is increasingly in use as an investigational biomarker of response in cancer clinical studies. Proper registration of images acquired at different time points is essential for deriving diagnostic information from quantitative pharmacokinetic analysis of these data. Motion artifacts in the presence of time-varying intensity due to contrast enhancement make this registration problem challenging. DCE-MRI of chest and abdominal lesions is typically performed during sequential breath-holds, which introduces misregistration due to inconsistent diaphragm positions and also places constraints on temporal resolution vis-à-vis free-breathing. In this work, we have employed a computer-generated DCE-MRI phantom to compare the performance of two published methods, Progressive Principal Component Registration and Pharmacokinetic Model-Driven Registration, with Sequential Elastic Registration (SER) to register adjacent time-sample images using a published general-purpose elastic registration algorithm. In all three methods, a 3D rigid-body registration scheme with a mutual information similarity measure was used as a preprocessing step. The DCE-MRI phantom images were mathematically deformed to simulate misregistration, which was corrected using the three schemes. All three schemes were comparably successful in registering large regions of interest (ROIs) such as muscle, liver, and spleen. SER was superior in retaining tumor volume and shape, and in registering smaller but important ROIs such as tumor core and tumor rim. The performance of SER on clinical DCE-MRI data sets is also presented.
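The mutual-information similarity measure used in the rigid preprocessing step above rewards intensity pairings that co-occur consistently between the two images. A toy histogram-based estimate for already-discretized intensities can be sketched in plain Python (real registration implementations bin continuous intensities and interpolate):

```python
import math
from collections import Counter

def mutual_information(a, b):
    """Histogram-based MI (in nats) between two equally sized lists of
    discretized intensity values."""
    n = len(a)
    pa = Counter(a)            # marginal counts for image a
    pb = Counter(b)            # marginal counts for image b
    pab = Counter(zip(a, b))   # joint counts over aligned voxel pairs
    mi = 0.0
    for (x, y), c in pab.items():
        # p(x,y) * log( p(x,y) / (p(x) * p(y)) ), counts converted to probs
        mi += (c / n) * math.log(c * n / (pa[x] * pb[y]))
    return mi
```

MI peaks when the images are aligned (intensities predict each other) and drops toward zero for independent intensity patterns, which is why it works across contrast changes where direct intensity differences fail.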
Collapse
|
47
|
Predicting Subcutaneous Glucose Concentration in Humans: Data-Driven Glucose Modeling. IEEE Trans Biomed Eng 2009; 56:246-54. [DOI: 10.1109/tbme.2008.2005937] [Citation(s) in RCA: 96] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
|
48
|
An Allelic Series of Mutations in the Kit ligand Gene of Mice. II. Effects of Ethylnitrosourea-Induced Kitl Point Mutations on Survival and Peripheral Blood Cells of KitlSteel Mice. Genetics 2002; 162:341-53. [PMID: 12242245 PMCID: PMC1462233 DOI: 10.1093/genetics/162.1.341] [Citation(s) in RCA: 24] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022] Open
Abstract
The ligand for the Kit receptor tyrosine kinase is Kit ligand (Kitl; also known as mast cell growth factor, stem cell factor, and Steel factor), which is encoded at the Steel (Sl) locus of mice. Previous studies revealed that KitlSl mutations have semidominant effects; mild pigmentation defects and macrocytic, hypoplastic anemia occur in heterozygous mice, and more severe pigmentation defects and anemia occur in homozygotes. Lethality also occurs in mice homozygous for severe KitlSl mutations. We describe the effects of seven new N-ethyl-N-nitrosourea (ENU)-induced KitlSl mutations and two previously characterized severe KitlSl mutations on pigmentation, peripheral blood cells, and mouse survival. Mice heterozygous for each of the nine mutations had reduced coat pigmentation and macrocytosis of peripheral blood. In the case of some of these mutations, however, red blood cell (RBC) counts, hemoglobin concentrations, and hematocrits were normal in heterozygotes, even though homozygotes exhibited severely reduced RBC counts and lethality. In homozygous mice, the extent of anemia generally correlates with effects on viability for most KitlSl mutations; i.e., most mutations that cause lethality also cause a more severe anemia than that of mutations that allow viability. Interestingly, lethality and anemia were not directly correlated in the case of one KitlSl mutation.
Collapse
|
49
|
An Allelic Series of Mutations in the Kit ligand Gene of Mice. I. Identification of Point Mutations in Seven Ethylnitrosourea-Induced KitlSteel Alleles. Genetics 2002; 162:331-40. [PMID: 12242244 PMCID: PMC1462231 DOI: 10.1093/genetics/162.1.331] [Citation(s) in RCA: 20] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022] Open
Abstract
An allelic series of mutations is an extremely valuable genetic resource for understanding gene function. Here we describe eight mutant alleles at the Steel (Sl) locus of mice that were induced with N-ethyl-N-nitrosourea (ENU). The product of the Sl locus is Kit ligand (or Kitl; also known as mast cell growth factor, stem cell factor, and Steel factor), which is a member of the helical cytokine superfamily and is the ligand for the Kit receptor tyrosine kinase. Seven of the eight ENU-induced KitlSl alleles, of which five cause missense mutations, one causes a nonsense mutation and exon skipping, and one affects a splice site, were found to contain point mutations in Kitl. Interestingly, each of the five missense mutations affects residues that are within, or very near, conserved α-helical domains of Kitl. These ENU-induced mutants should provide important information on structural requirements for function of Kitl and other helical cytokines.
Collapse
|
50
|
Abstract
Clinically significant recurrence of lupus nephritis in the renal allograft is low, with an incidence of 1 to 3%, and usually occurs within the first 6 years after transplantation. We report an unusual case of a patient with end-stage renal disease caused by lupus nephritis who received a kidney transplant from a living relative; 13 years later, the patient had a severe recurrence of diffuse proliferative lupus nephritis. The patient relapsed after receiving intravenous cyclophosphamide therapy and had a partial response to oral mycophenolate mofetil. In this report we review the risk factors for recurrence of systemic lupus erythematosus in the kidney graft and the anti-lupus activity of mycophenolate mofetil.
Collapse
|