1
Peng J, Wang P, Pedersoli M, Desrosiers C. Boundary-aware information maximization for self-supervised medical image segmentation. Med Image Anal 2024; 94:103150. [PMID: 38574545 DOI: 10.1016/j.media.2024.103150] [Received: 01/26/2023] [Revised: 02/24/2024] [Accepted: 03/20/2024] [Indexed: 04/06/2024]
Abstract
Self-supervised representation learning can boost the performance of a pre-trained network on downstream tasks for which labeled data is limited. A popular method based on this paradigm, known as contrastive learning, works by constructing sets of positive and negative pairs from the data, and then pulling closer the representations of positive pairs while pushing apart those of negative pairs. Although contrastive learning has been shown to improve performance in various classification tasks, its application to image segmentation has been more limited. This stems in part from the difficulty of defining positive and negative pairs for dense feature maps without having access to pixel-wise annotations. In this work, we propose a novel self-supervised pre-training method that overcomes the challenges of contrastive learning in image segmentation. Our method leverages Invariant Information Clustering (IIC) as an unsupervised task to learn a local representation of images in the decoder of a segmentation network, but addresses three important drawbacks of this approach: (i) the difficulty of optimizing the loss based on mutual information maximization; (ii) the lack of clustering consistency for different random transformations of the same image; (iii) the poor correspondence of clusters obtained by IIC with region boundaries in the image. Toward this goal, we first introduce a regularized mutual information maximization objective that encourages the learned clusters to be balanced and consistent across different image transformations. We also propose a boundary-aware loss based on cross-correlation, which helps the learned clusters to be more representative of important regions in the image. Compared to contrastive learning applied to dense feature maps, our method does not require computing positive and negative pairs and also enhances interpretability through the visualization of learned clusters.
Comprehensive experiments involving four different medical image segmentation tasks reveal the high effectiveness of our self-supervised representation learning method. Our results show the proposed method to outperform by a large margin several state-of-the-art self-supervised and semi-supervised approaches for segmentation, reaching a performance close to full supervision with only a few labeled examples.
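The mutual-information objective underlying IIC can be made concrete with a minimal NumPy sketch. It computes the mutual information between soft cluster assignments of two transformed views of the same images; the balance weight `lam` and all names are illustrative, not the paper's exact regularized objective.

```python
import numpy as np

def iic_mutual_information(p1, p2, lam=1.0, eps=1e-8):
    """IIC-style objective: mutual information between soft cluster
    assignments (n samples x k clusters) of two views of the same images.
    lam > 1 up-weights the marginal-entropy terms, encouraging balanced
    clusters (a hypothetical stand-in for the paper's regularization)."""
    # Joint distribution over cluster pairs, symmetrized and normalized
    joint = p1.T @ p2 / p1.shape[0]
    joint = (joint + joint.T) / 2.0
    joint = joint / joint.sum()
    pi = joint.sum(axis=1, keepdims=True)  # marginal of view 1
    pj = joint.sum(axis=0, keepdims=True)  # marginal of view 2
    # I(X;Y) = sum_ij P_ij * (log P_ij - lam*log P_i - lam*log P_j)
    return float((joint * (np.log(joint + eps)
                           - lam * np.log(pi + eps)
                           - lam * np.log(pj + eps))).sum())
```

Maximizing this quantity pushes the two views toward consistent, confident, and balanced cluster assignments; for perfectly consistent balanced assignments over k clusters it attains log k.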
Affiliation(s)
- Jizong Peng
- ETS Montréal, 1100 Notre-Dame St W, Montreal H3C 1K3, QC, Canada
- Ping Wang
- ETS Montréal, 1100 Notre-Dame St W, Montreal H3C 1K3, QC, Canada
- Marco Pedersoli
- ETS Montréal, 1100 Notre-Dame St W, Montreal H3C 1K3, QC, Canada
2
Théberge A, Desrosiers C, Boré A, Descoteaux M, Jodoin PM. What matters in reinforcement learning for tractography. Med Image Anal 2024; 93:103085. [PMID: 38219499 DOI: 10.1016/j.media.2024.103085] [Received: 05/21/2023] [Revised: 12/15/2023] [Accepted: 01/05/2024] [Indexed: 01/16/2024]
Abstract
Recently, deep reinforcement learning (RL) has been proposed to learn the tractography procedure and train agents to reconstruct the structure of the white matter without manually curated reference streamlines. While the reported performance was competitive, the proposed framework is complex, and little is still known about the role and impact of its multiple parts. In this work, we thoroughly explore the different components of the framework, such as the choice of the RL algorithm, the seeding strategy, the input signal and the reward function, and shed light on their impact. Approximately 7,400 models were trained for this work, totalling nearly 41,000 h of GPU time. Our goal is to guide researchers eager to explore the possibilities of deep RL for tractography by exposing what works and what does not work with this category of approach. As such, we ultimately propose a series of recommendations concerning the choice of RL algorithm, the input to the agents, the reward function and more, to help future work using reinforcement learning for tractography. We also release the open-source codebase, trained models, and datasets for users and researchers wanting to explore reinforcement learning for tractography.
Affiliation(s)
- Antoine Théberge
- Faculté des Sciences, Université de Sherbrooke, Sherbrooke, QC, Canada, J1K 2R1
- Christian Desrosiers
- Département de génie logiciel et des TI, École de technologie supérieure, Montréal, QC, Canada, H3C 1K3
- Arnaud Boré
- Faculté des Sciences, Université de Sherbrooke, Sherbrooke, QC, Canada, J1K 2R1
- Maxime Descoteaux
- Faculté des Sciences, Université de Sherbrooke, Sherbrooke, QC, Canada, J1K 2R1
- Pierre-Marc Jodoin
- Faculté des Sciences, Université de Sherbrooke, Sherbrooke, QC, Canada, J1K 2R1
3
Liu J, Desrosiers C, Yu D, Zhou Y. Semi-Supervised Medical Image Segmentation Using Cross-Style Consistency With Shape-Aware and Local Context Constraints. IEEE Trans Med Imaging 2024; 43:1449-1461. [PMID: 38032771 DOI: 10.1109/tmi.2023.3338269] [Indexed: 12/02/2023]
Abstract
Despite the remarkable progress of deep learning-based semi-supervised medical image segmentation methods, their application to real-life clinical scenarios still faces considerable challenges. For example, insufficient labeled data often makes it difficult for networks to capture the complexity and variability of the anatomical regions to be segmented. To address these problems, we design a new semi-supervised segmentation framework that aspires to produce anatomically plausible predictions. Our framework comprises two parallel networks: a shape-agnostic and a shape-aware network. These networks learn from each other, enabling effective utilization of unlabeled data. The shape-aware network implicitly introduces shape guidance to capture fine-grained shape information, while the shape-agnostic network employs uncertainty estimation to obtain reliable pseudo-labels for its counterpart. We also employ a cross-style consistency strategy to enhance the networks' utilization of unlabeled data. It enriches the dataset to prevent overfitting and further eases the coupling of the two networks that learn from each other. Our architecture also incorporates a novel loss term that facilitates the learning of the local context of segmentation, thereby enhancing the overall accuracy of prediction. Experiments on three medical image datasets show that our method outperforms several state-of-the-art semi-supervised segmentation methods, particularly in its ability to capture shape. Code is available at https://github.com/igip-liu/SLC-Net.
4
Gaillochet M, Desrosiers C, Lombaert H. Active learning for medical image segmentation with stochastic batches. Med Image Anal 2023; 90:102958. [PMID: 37769549 DOI: 10.1016/j.media.2023.102958] [Received: 12/15/2022] [Revised: 09/01/2023] [Accepted: 09/04/2023] [Indexed: 10/03/2023]
Abstract
The performance of learning-based algorithms improves with the amount of labelled data used for training. Yet, manually annotating data is particularly difficult for medical image segmentation tasks because of the limited expert availability and intensive manual effort required. To reduce manual labelling, active learning (AL) targets the most informative samples from the unlabelled set to annotate and add to the labelled training set. On the one hand, most active learning works have focused on the classification or limited segmentation of natural images, despite active learning being highly desirable in the difficult task of medical image segmentation. On the other hand, uncertainty-based AL approaches notoriously offer sub-optimal batch-query strategies, while diversity-based methods tend to be computationally expensive. Over and above methodological hurdles, random sampling has proven an extremely difficult baseline to outperform when varying learning and sampling conditions. This work aims to take advantage of the diversity and speed offered by random sampling to improve the selection of uncertainty-based AL methods for segmenting medical images. More specifically, we propose to compute uncertainty at the level of batches instead of samples through an original use of stochastic batches (SB) during sampling in AL. Stochastic batch querying is a simple and effective add-on that can be used on top of any uncertainty-based metric. Extensive experiments on two medical image segmentation datasets show that our strategy consistently improves conventional uncertainty-based sampling methods. Our method can hence act as a strong baseline for medical image segmentation. Code is available at https://github.com/Minimel/StochasticBatchAL.git.
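One way to picture the stochastic-batch idea is the following sketch, which draws random candidate batches from the unlabelled pool and queries the one with the highest mean uncertainty. This is an illustration only; the paper's exact querying procedure may differ, and all names and parameters here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def select_stochastic_batch(uncertainty, batch_size, n_candidates=100):
    """Stochastic-batch querying (sketch): instead of taking the top-k most
    uncertain individual samples, draw random candidate batches and keep the
    one with the highest mean uncertainty. `uncertainty` holds one score per
    unlabelled sample (e.g. predictive entropy)."""
    n = len(uncertainty)
    best_batch, best_score = None, -np.inf
    for _ in range(n_candidates):
        batch = rng.choice(n, size=batch_size, replace=False)
        score = uncertainty[batch].mean()
        if score > best_score:
            best_batch, best_score = batch, score
    return best_batch
```

Because each candidate batch is a random draw, the selection inherits the diversity of random sampling while still favouring uncertain regions of the pool.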
Affiliation(s)
- Hervé Lombaert
- ETS Montréal, 1100 Notre-Dame St W, Montreal H3C 1K3, QC, Canada
5
Gopinath K, Desrosiers C, Lombaert H. Learning joint surface reconstruction and segmentation, from brain images to cortical surface parcellation. Med Image Anal 2023; 90:102974. [PMID: 37774534 DOI: 10.1016/j.media.2023.102974] [Received: 08/28/2021] [Revised: 09/14/2023] [Accepted: 09/16/2023] [Indexed: 10/01/2023]
Abstract
Reconstructing and segmenting cortical surfaces from MRI is essential to a wide range of brain analyses. However, most approaches follow a slow multi-step process, such as sequential spherical inflation and registration, which requires considerable computation time. To overcome the limitations of these multiple steps, we propose SegRecon, an integrated end-to-end deep learning method to jointly reconstruct and segment cortical surfaces directly from an MRI volume in a single step. We train a volume-based neural network to predict, for each voxel, the signed distances to multiple nested surfaces and their corresponding spherical representation in atlas space. This is, for instance, useful for jointly reconstructing and segmenting the white-to-gray-matter interface and the gray-matter-to-CSF (pial) surface. We evaluate the performance of our surface reconstruction and segmentation method with a comprehensive set of experiments on the MindBoggle, ABIDE and OASIS datasets. Our reconstruction error is found to be less than 0.52 mm and 0.97 mm in terms of average Hausdorff distance to the FreeSurfer-generated surfaces. Likewise, the parcellation results show over 4% improvement in average Dice with respect to FreeSurfer, in addition to a drastic speed-up from hours to seconds of computation on a standard desktop workstation.
6
Beizaee F, Bona M, Desrosiers C, Dolz J, Lodygensky G. Determining regional brain growth in premature and mature infants in relation to age at MRI using deep neural networks. Sci Rep 2023; 13:13259. [PMID: 37582862 PMCID: PMC10427665 DOI: 10.1038/s41598-023-40244-z] [Received: 09/26/2022] [Accepted: 08/07/2023] [Indexed: 08/17/2023]
Abstract
Neonatal MRIs are used increasingly in preterm infants. However, it is not always feasible to analyze this data. Having a tool that assesses brain maturation during this period of extraordinary changes would be immensely helpful. Approaches based on deep learning could solve this task since, once properly trained and validated, they can be used in practically any system and provide holistic quantitative information in a matter of minutes. However, one major deterrent for radiologists is that these tools are not easily interpretable. Indeed, it is important that the structures driving the results be identified and consistent with the available literature. To address these challenges, we propose an interpretable deep learning pipeline to predict postmenstrual age at scan, a key measure for assessing neonatal brain development. For this purpose, we train a state-of-the-art deep neural network to segment the brain into 87 different regions using normal preterm and term infants from the dHCP study. We then extract informative features for brain age estimation from the segmented MRIs and predict the brain age at scan with a regression model. The proposed framework achieves a mean absolute error of 0.46 weeks in predicting postmenstrual age at scan. While our model is based solely on structural T2-weighted images, the results are superior to recent, arguably more complex approaches. Furthermore, based on the knowledge extracted from the trained models, we found that the frontal and parietal lobes are among the most important structures for neonatal brain age estimation.
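The feature-extraction step of such a segment-then-regress pipeline can be sketched as below, assuming an integer label map with region labels 1..87 produced by the segmentation network. Function and parameter names are illustrative, not the authors' code.

```python
import numpy as np

def regional_volumes(labelmap, voxel_volume_mm3=1.0, n_regions=87):
    """Per-region volumes (mm^3) from a 3D integer segmentation label map,
    with labels 1..n_regions and 0 as background. These interpretable
    volumes can then feed a regression model predicting age at scan."""
    counts = np.bincount(labelmap.ravel(), minlength=n_regions + 1)
    return counts[1:n_regions + 1] * voxel_volume_mm3
```

Feeding such per-region features to a simple regressor keeps the pipeline interpretable: the model's coefficients directly indicate which structures drive the age prediction.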
Affiliation(s)
- Farzad Beizaee
- Software and IT Department, École de Technologie Supérieure, Montreal, QC, H3C 1K3, Canada
- Department of Pediatrics, CHU Sainte-Justine, University of Montreal, Montreal, QC, H3T 1C5, Canada
- Michele Bona
- Software and IT Department, École de Technologie Supérieure, Montreal, QC, H3C 1K3, Canada
- Christian Desrosiers
- Software and IT Department, École de Technologie Supérieure, Montreal, QC, H3C 1K3, Canada
- Jose Dolz
- Software and IT Department, École de Technologie Supérieure, Montreal, QC, H3C 1K3, Canada
- Gregory Lodygensky
- Department of Pediatrics, CHU Sainte-Justine, University of Montreal, Montreal, QC, H3T 1C5, Canada
- Canadian Neonatal Brain Platform, Montreal, QC, Canada
7
Chaddad A, Tan G, Liang X, Hassan L, Rathore S, Desrosiers C, Katib Y, Niazi T. Advancements in MRI-Based Radiomics and Artificial Intelligence for Prostate Cancer: A Comprehensive Review and Future Prospects. Cancers (Basel) 2023; 15:3839. [PMID: 37568655 PMCID: PMC10416937 DOI: 10.3390/cancers15153839] [Received: 06/07/2023] [Revised: 07/25/2023] [Accepted: 07/26/2023] [Indexed: 08/13/2023]
Abstract
The use of multiparametric magnetic resonance imaging (mpMRI) has become a common technique used in guiding biopsy and developing treatment plans for prostate lesions. While this technique is effective, non-invasive methods such as radiomics have gained popularity for extracting imaging features to develop predictive models for clinical tasks. The aim is to minimize invasive processes for improved management of prostate cancer (PCa). This study reviews recent research progress in MRI-based radiomics for PCa, including the radiomics pipeline and potential factors affecting personalized diagnosis. The integration of artificial intelligence (AI) with medical imaging is also discussed, in line with the development trend of radiogenomics and multi-omics. The survey highlights the need for more data from multiple institutions to avoid bias and generalize the predictive model. The AI-based radiomics model is considered a promising clinical tool with good prospects for application.
Affiliation(s)
- Ahmad Chaddad
- School of Artificial Intelligence, Guilin University of Electronic Technology, Guilin 541004, China
- The Laboratory for Imagery, Vision and Artificial Intelligence, École de Technologie Supérieure (ETS), Montreal, QC H3C 1K3, Canada
- Guina Tan
- School of Artificial Intelligence, Guilin University of Electronic Technology, Guilin 541004, China
- Xiaojuan Liang
- School of Artificial Intelligence, Guilin University of Electronic Technology, Guilin 541004, China
- Lama Hassan
- School of Artificial Intelligence, Guilin University of Electronic Technology, Guilin 541004, China
- Christian Desrosiers
- The Laboratory for Imagery, Vision and Artificial Intelligence, École de Technologie Supérieure (ETS), Montreal, QC H3C 1K3, Canada
- Yousef Katib
- Department of Radiology, Taibah University, Al Madinah 42361, Saudi Arabia
- Tamim Niazi
- Lady Davis Institute for Medical Research, McGill University, Montreal, QC H3T 1E2, Canada
8
Wang P, Peng J, Pedersoli M, Zhou Y, Zhang C, Desrosiers C. Shape-aware Joint Distribution Alignment for Cross-domain Image Segmentation. IEEE Trans Med Imaging 2023; PP:1-1. [PMID: 37027662 DOI: 10.1109/tmi.2023.3247941] [Indexed: 06/19/2023]
Abstract
We present an unsupervised domain adaptation method for image segmentation which aligns high-order statistics, computed for the source and target domains, encoding domain-invariant spatial relationships between segmentation classes. Our method first estimates the joint distribution of predictions for pairs of pixels whose relative position corresponds to a given spatial displacement. Domain adaptation is then achieved by aligning the joint distributions of source and target images, computed for a set of displacements. Two enhancements of this method are proposed. The first one uses an efficient multi-scale strategy that enables capturing long-range relationships in the statistics. The second one extends the joint distribution alignment loss to features in intermediate layers of the network by computing their cross-correlation. We test our method on the task of unpaired multi-modal cardiac segmentation, using the Multi-Modality Whole Heart Segmentation Challenge dataset, and on a prostate segmentation task in which images from two datasets are treated as different domains. Our results show the advantages of our method compared to recent approaches for cross-domain image segmentation. Code is available at https://github.com/WangPing521/Domain_adaptation_shape_prior.
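The first step, estimating a displacement-conditioned joint distribution of class predictions, can be sketched as follows. This is a simplified NumPy version for a single image and a single displacement (the method operates over a set of displacements and across domains); names are illustrative.

```python
import numpy as np

def displaced_joint(probs, disp):
    """Joint class distribution (sketch) over pixel pairs separated by a
    fixed displacement. `probs` is an (H, W, C) softmax map; returns a
    (C, C) matrix P[a, b] = normalized co-occurrence of class a at pixel p
    and class b at pixel p + disp, averaged over all valid pairs."""
    dy, dx = disp
    h, w, _ = probs.shape
    # Crop the two overlapping windows so that a[i, j] pairs with b[i, j]
    a = probs[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    b = probs[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
    joint = np.einsum('ija,ijb->ab', a, b)
    return joint / joint.sum()
```

Alignment then penalizes a divergence (e.g. the L1 distance) between the source and target joint matrices computed for each displacement, which transfers the spatial layout of classes without requiring target labels.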
9
Wang P, Peng J, Pedersoli M, Zhou Y, Zhang C, Desrosiers C. CAT: Constrained Adversarial Training for Anatomically-plausible Semi-supervised Segmentation. IEEE Trans Med Imaging 2023; PP:1-1. [PMID: 37022409 DOI: 10.1109/tmi.2023.3243069] [Indexed: 06/19/2023]
Abstract
Deep learning models for semi-supervised medical image segmentation have achieved unprecedented performance for a wide range of tasks. Despite their high accuracy, these models may however yield predictions that are considered anatomically impossible by clinicians. Moreover, incorporating complex anatomical constraints into standard deep learning frameworks remains challenging due to their non-differentiable nature. To address these limitations, we propose a Constrained Adversarial Training (CAT) method that learns how to produce anatomically plausible segmentations. Unlike approaches focusing solely on accuracy measures like Dice, our method considers complex anatomical constraints like connectivity, convexity, and symmetry which cannot be easily modeled in a loss function. The problem of non-differentiable constraints is solved using a Reinforce algorithm, which makes it possible to obtain a gradient for violated constraints. To generate constraint-violating examples on the fly, and thereby obtain useful gradients, our method adopts an adversarial training strategy which modifies training images to maximize the constraint loss, and then updates the network to be robust to these adversarial examples. The proposed method offers a generic and efficient way to add complex segmentation constraints on top of any segmentation network. Experiments on synthetic data and four clinically-relevant datasets demonstrate the effectiveness of our method in terms of segmentation accuracy and anatomical plausibility.
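The score-function (REINFORCE) trick mentioned above can be illustrated on a toy case: per-pixel Bernoulli segmentations with an arbitrary non-differentiable constraint loss. This is a generic sketch of the estimator, not the paper's implementation; names and the baseline choice are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def reinforce_constraint_grad(logits, constraint_loss, n_samples=500):
    """REINFORCE-style estimate (sketch) of the gradient of
    E[constraint_loss(s)] w.r.t. per-pixel Bernoulli logits:
    grad = E[(L(s) - baseline) * d log p(s) / d logits].
    Works even when constraint_loss is non-differentiable, since only
    sampled segmentations s are scored."""
    p = 1.0 / (1.0 + np.exp(-logits))          # Bernoulli probabilities
    grads, losses = [], []
    for _ in range(n_samples):
        s = (rng.random(p.shape) < p).astype(float)
        losses.append(constraint_loss(s))
        grads.append(s - p)                    # d log p(s) / d logits
    losses = np.array(losses)
    baseline = losses.mean()                   # variance-reduction baseline
    return np.mean([(l - baseline) * g for l, g in zip(losses, grads)], axis=0)
```

With a constraint that penalizes active pixels, the estimated gradient is positive for every pixel, matching the true gradient's sign: raising a logit raises the expected penalty.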
10
Liu B, Desrosiers C, Ben Ayed I, Dolz J. Segmentation with mixed supervision: Confidence maximization helps knowledge distillation. Med Image Anal 2023; 83:102670. [PMID: 36413905 DOI: 10.1016/j.media.2022.102670] [Received: 09/22/2021] [Revised: 10/12/2022] [Accepted: 10/14/2022] [Indexed: 11/09/2022]
Abstract
Despite achieving promising results in a breadth of medical image segmentation tasks, deep neural networks (DNNs) require large training datasets with pixel-wise annotations. Obtaining these curated datasets is a cumbersome process which limits the applicability of DNNs in scenarios where annotated images are scarce. Mixed supervision is an appealing alternative for mitigating this obstacle. In this setting, only a small fraction of the data contains complete pixel-wise annotations and other images have a weaker form of supervision, e.g., only a handful of pixels are labeled. In this work, we propose a dual-branch architecture, where the upper branch (teacher) receives strong annotations, while the bottom one (student) is driven by limited supervision and guided by the upper branch. Combined with a standard cross-entropy loss over the labeled pixels, our novel formulation integrates two important terms: (i) a Shannon entropy loss defined over the less-supervised images, which encourages confident student predictions in the bottom branch; and (ii) a Kullback-Leibler (KL) divergence term, which transfers the knowledge (i.e., predictions) of the strongly supervised branch to the less-supervised branch and guides the entropy (student-confidence) term to avoid trivial solutions. We show that the synergy between the entropy and KL divergence yields substantial improvements in performance. We also discuss an interesting link between Shannon-entropy minimization and standard pseudo-mask generation, and argue that the former should be preferred over the latter for leveraging information from unlabeled pixels. We evaluate the effectiveness of the proposed formulation through a series of quantitative and qualitative experiments using two publicly available datasets. Results demonstrate that our method significantly outperforms other strategies for semantic segmentation within a mixed-supervision framework, as well as recent semi-supervised approaches. 
Moreover, in line with recent observations in classification, we show that the branch trained with reduced supervision and guided by the top branch largely outperforms the latter. Our code is publicly available: https://github.com/by-liu/ConfKD.
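The two unlabeled-pixel terms described above, a Shannon entropy on the student branch and a KL divergence from the teacher branch, can be sketched as one combined loss. The weights `w_ent` and `w_kl` are hypothetical placeholders, not the paper's values.

```python
import numpy as np

def entropy_kl_loss(student_probs, teacher_probs,
                    w_ent=0.1, w_kl=1.0, eps=1e-8):
    """Unlabeled-pixel loss (sketch) combining (i) the Shannon entropy of
    the student branch, encouraging confident predictions, and (ii)
    KL(teacher || student), transferring the strongly-supervised branch's
    predictions. Inputs are per-pixel class distributions of shape
    (n_pixels, n_classes)."""
    ent = -(student_probs * np.log(student_probs + eps)).sum(axis=1).mean()
    kl = (teacher_probs * (np.log(teacher_probs + eps)
                           - np.log(student_probs + eps))).sum(axis=1).mean()
    return w_ent * ent + w_kl * kl
```

As the abstract argues, the KL term anchors the entropy term: confident student predictions are pulled toward the teacher's distribution rather than toward arbitrary trivial solutions.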
Affiliation(s)
- Ismail Ben Ayed
- ÉTS Montréal, Canada; Centre de recherche du Centre hospitalier de l'Université de Montréal (CRCHUM), Canada
- Jose Dolz
- ÉTS Montréal, Canada; Centre de recherche du Centre hospitalier de l'Université de Montréal (CRCHUM), Canada
11
Anctil-Robitaille B, Théberge A, Jodoin PM, Descoteaux M, Desrosiers C, Lombaert H. Manifold-aware synthesis of high-resolution diffusion from structural imaging. Front Neuroimaging 2022; 1:930496. [PMID: 37555146 PMCID: PMC10406190 DOI: 10.3389/fnimg.2022.930496] [Received: 04/28/2022] [Accepted: 08/16/2022] [Indexed: 08/10/2023]
Abstract
The physical and clinical constraints surrounding diffusion-weighted imaging (DWI) often limit the spatial resolution of the produced images to voxels up to eight times larger than those of T1w images. The detailed information contained in accessible high-resolution T1w images could help in the synthesis of diffusion images with a greater level of detail. However, the non-Euclidean nature of diffusion imaging hinders current deep generative models from synthesizing physically plausible images. In this work, we propose the first Riemannian network architecture for the direct generation of diffusion tensors (DT) and diffusion orientation distribution functions (dODFs) from high-resolution T1w images. Our integration of the log-Euclidean metric into a learning objective guarantees, unlike standard Euclidean networks, the mathematically valid synthesis of diffusion. Furthermore, our approach improves the fractional anisotropy mean squared error (FA MSE) between the synthesized diffusion and the ground truth by more than 23%, and the cosine similarity between principal directions by almost 5%, when compared to our baselines. We validate the generated diffusion by comparing the resulting tractograms to those obtained from real data. We observe similar fiber bundles with streamlines having <3% difference in length, <1% difference in volume, and a visually close shape. While our method is able to generate diffusion images from structural inputs in a high-resolution space within 15 s, we acknowledge and discuss the limits of diffusion inference relying solely on T1w images. Our results nonetheless suggest a relationship between the high-level geometry of the brain and its overall white matter architecture that remains to be explored.
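The log-Euclidean metric referred to above has a compact closed form: the distance between two symmetric positive-definite (SPD) diffusion tensors is the Frobenius norm of the difference of their matrix logarithms. A minimal NumPy sketch (function names are illustrative):

```python
import numpy as np

def spd_log(m):
    """Matrix logarithm of a symmetric positive-definite matrix via
    eigendecomposition: m = V diag(w) V^T with w > 0, so
    log(m) = V diag(log w) V^T."""
    w, v = np.linalg.eigh(m)
    return (v * np.log(w)) @ v.T

def log_euclidean_distance(a, b):
    """Log-Euclidean distance between two SPD tensors (e.g. 3x3 diffusion
    tensors): Frobenius norm of the difference of their logarithms."""
    return np.linalg.norm(spd_log(a) - spd_log(b))
```

Optimizing a loss built on this distance keeps predicted tensors on the SPD manifold in a mathematically valid way, which a plain Euclidean loss on tensor entries does not guarantee.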
Affiliation(s)
- Benoit Anctil-Robitaille
- The Shape Lab, Department of Computer and Software Engineering, ETS Montreal, Montreal, QC, Canada
- Antoine Théberge
- Sherbrooke Connectivity Imaging Laboratory (SCIL), Department of Computer Science, Sherbrooke University, Sherbrooke, QC, Canada
- Pierre-Marc Jodoin
- Sherbrooke Connectivity Imaging Laboratory (SCIL), Department of Computer Science, Sherbrooke University, Sherbrooke, QC, Canada
- Maxime Descoteaux
- Sherbrooke Connectivity Imaging Laboratory (SCIL), Department of Computer Science, Sherbrooke University, Sherbrooke, QC, Canada
- Christian Desrosiers
- The Shape Lab, Department of Computer and Software Engineering, ETS Montreal, Montreal, QC, Canada
- Hervé Lombaert
- The Shape Lab, Department of Computer and Software Engineering, ETS Montreal, Montreal, QC, Canada
12
Liu J, Cui Z, Desrosiers C, Lu S, Zhou Y. Grayscale self-adjusting network with weak feature enhancement for 3D lumbar anatomy segmentation. Med Image Anal 2022; 81:102567. [PMID: 35994969 DOI: 10.1016/j.media.2022.102567] [Received: 01/04/2022] [Revised: 07/11/2022] [Accepted: 08/04/2022] [Indexed: 11/15/2022]
Abstract
The automatic segmentation of lumbar anatomy is a fundamental problem for the diagnosis and treatment of lumbar disease. The recent development of deep learning techniques has led to remarkable progress in this task, including the possible segmentation of nerve roots, intervertebral discs, and dural sac in a single step. Despite these advances, lumbar anatomy segmentation remains a challenging problem due to the weak contrast and noise of input images, as well as the variability of intensities and size in lumbar structures across different subjects. To overcome these challenges, we propose a coarse-to-fine deep neural network framework for lumbar anatomy segmentation, which obtains a more accurate segmentation using two strategies. First, a progressive refinement process is employed to correct low-confidence regions by enhancing the feature representation in these regions. Second, a grayscale self-adjusting network (GSA-Net) is proposed to optimize the distribution of intensities dynamically. Experiments on datasets comprising 3D computed tomography (CT) and magnetic resonance (MR) images show the advantage of our method over current segmentation approaches and its potential for the diagnosis and treatment of lumbar disease.
Affiliation(s)
- Jinhua Liu
- School of Software, Shandong University, Jinan, China
- Zhiming Cui
- Department of Computer Science, The University of Hong Kong, Hong Kong Special Administrative Region of China
- Christian Desrosiers
- Software and IT Engineering Department, École de technologie supérieure, Montreal, Canada
- Shuyi Lu
- School of Software, Shandong University, Jinan, China
- Yuanfeng Zhou
- School of Software, Shandong University, Jinan, China
13
Abstract
Deep learning methods have shown outstanding potential in dermatology for skin lesion detection and identification; however, they usually require annotations beforehand and can only classify lesion classes seen in the training set. Moreover, large-scale, open-source medical datasets normally have far fewer annotated classes than encountered in real life, further aggravating the problem. This paper proposes a novel method called DNF-OOD, which applies a non-parametric, deep forest-based approach to the problem of out-of-distribution (OOD) detection. By leveraging a maximum probabilistic routing strategy and an over-confidence penalty term, the proposed method can achieve better performance on the task of detecting OOD skin lesion images, which is challenging due to the large intra-class variability in such images. We evaluate our OOD detection method on images from two large, publicly available skin lesion datasets, ISIC2019 and DermNet, and compare it against recently proposed approaches. Results demonstrate the potential of our DNF-OOD framework for detecting OOD skin images.
14
Chauvin L, Kumar K, Desrosiers C, Wells W, Toews M. Efficient Pairwise Neuroimage Analysis Using the Soft Jaccard Index and 3D Keypoint Sets. IEEE Trans Med Imaging 2022; 41:836-845. [PMID: 34699353 PMCID: PMC9022638 DOI: 10.1109/tmi.2021.3123252] [Indexed: 06/13/2023]
Abstract
We propose a novel pairwise distance measure between image keypoint sets, for the purpose of large-scale medical image indexing. Our measure generalizes the Jaccard index to account for soft set equivalence (SSE) between keypoint elements, via an adaptive kernel framework modeling uncertainty in keypoint appearance and geometry. A new kernel is proposed to quantify the variability of keypoint geometry in location and scale. Our distance measure may be estimated between O(N²) image pairs in [Formula: see text] operations via keypoint indexing. Experiments report the first results for the task of predicting family relationships from medical images, using 1010 T1-weighted MRI brain volumes of 434 families including monozygotic and dizygotic twins, siblings and half-siblings sharing 100%-25% of their polymorphic genes. Soft set equivalence and the keypoint geometry kernel improve upon standard hard set equivalence (HSE) and appearance kernels alone in predicting family relationships. Monozygotic twin identification is near 100%, and three subjects with uncertain genotyping are automatically paired with their self-reported families, the first reported practical application of image-based family identification. Our distance measure can also be used to predict group categories; sex is predicted with an AUC of 0.97. Software is provided for efficient fine-grained curation of large, generic image datasets.
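The soft generalization of the Jaccard index can be illustrated with a minimal sketch in which the hard membership test is replaced by a Gaussian kernel on keypoint distance. This is a simplified stand-in for the paper's adaptive appearance-and-geometry kernels; names and the kernel choice are assumptions.

```python
import numpy as np

def soft_jaccard(a, b, sigma=1.0):
    """Soft Jaccard index (sketch) between two keypoint sets given as
    (n, d) coordinate arrays. Each keypoint's best soft match in the other
    set contributes to a soft intersection size; as sigma -> 0 this
    approaches the standard Jaccard index on exact matches."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    k = np.exp(-d2 / (2 * sigma ** 2))              # soft equivalence kernel
    inter = (k.max(axis=1).sum() + k.max(axis=0).sum()) / 2  # soft |A ∩ B|
    union = len(a) + len(b) - inter                           # soft |A ∪ B|
    return inter / union
```

Identical sets score 1, and sets with no nearby keypoints score near 0, mirroring the hard Jaccard index while tolerating small geometric perturbations.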
15
Gopinath K, Desrosiers C, Lombaert H. Learnable Pooling in Graph Convolutional Networks for Brain Surface Analysis. IEEE Trans Pattern Anal Mach Intell 2022; 44:864-876. [PMID: 33006927] [DOI: 10.1109/tpami.2020.3028391] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Indexed: 06/11/2023]
Abstract
Brain surface analysis is essential to neuroscience; however, the complex geometry of the brain cortex hinders computational methods for this task. The difficulty arises from a discrepancy between 3D imaging data, which is represented in Euclidean space, and the non-Euclidean geometry of the highly-convoluted brain surface. Recent advances in machine learning have enabled the use of neural networks for non-Euclidean spaces. These facilitate the learning of surface data, yet pooling strategies often remain constrained to a single fixed graph. This paper proposes a new learnable graph pooling method for processing multiple surface-valued data to output subject-based information. The proposed method innovates by learning an intrinsic aggregation of graph nodes based on graph spectral embedding. We illustrate the advantages of our approach with in-depth experiments on two large-scale benchmark datasets. The ablation study in the paper illustrates the impact of various factors affecting our learnable pooling method. The flexibility of the pooling strategy is evaluated on four different prediction tasks, namely, subject-sex classification, regression of cortical region sizes, classification of Alzheimer's disease stages, and brain age regression. Our experiments demonstrate the superiority of our learnable pooling approach compared to other pooling techniques for graph convolutional networks, with results improving the state of the art in brain surface analysis.
16
Chaddad A, Daniel P, Zhang M, Rathore S, Sargos P, Desrosiers C, Niazi T. Deep radiomic signature with immune cell markers predicts the survival of glioma patients. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2020.10.117] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Indexed: 01/09/2023]
17
Chaddad A, Hassan L, Desrosiers C. Deep Radiomic Analysis for Predicting Coronavirus Disease 2019 in Computerized Tomography and X-Ray Images. IEEE Trans Neural Netw Learn Syst 2022; 33:3-11. [PMID: 34669582] [DOI: 10.1109/tnnls.2021.3119071] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Indexed: 05/14/2023]
Abstract
This article proposes to encode the distribution of features learned from a convolutional neural network (CNN) using a Gaussian mixture model (GMM). These parametric features, called GMM-CNN, are derived from chest computed tomography (CT) and X-ray scans of patients with coronavirus disease 2019 (COVID-19). We use the proposed GMM-CNN features as input to a robust classifier based on random forests (RFs) to differentiate between COVID-19 and other pneumonia cases. Our experiments assess the advantage of GMM-CNN features compared with standard CNN classification on test images. Using an RF classifier (80% of samples for training; 20% for testing), GMM-CNN features encoded with two mixture components provided significantly better performance than standard CNN classification. Specifically, our method achieved an accuracy in the range of 96.00%-96.70% and an area under the receiver operating characteristic (ROC) curve in the range of 99.29%-99.45%, with the best performance obtained by combining GMM-CNN features from both CT and X-ray images. Our results suggest that the proposed GMM-CNN features could improve the prediction of COVID-19 in chest CT and X-ray scans.
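The core of the GMM-CNN idea — summarizing a variable-size bag of CNN activations by the parameters of a fitted Gaussian mixture — can be sketched with a plain 1-D EM fit. This is a sketch under simplifying assumptions (scalar activations, two components); the paper fits mixtures on richer CNN feature maps:

```python
import math, random

def gmm_cnn_descriptor(acts, k=2, iters=60):
    """Fixed-length GMM-style descriptor of a bag of scalar activations:
    fit a k-component 1-D Gaussian mixture by EM, then concatenate the
    (weight, mean, std) parameters sorted by mean, so descriptors of
    different scans are comparable regardless of how many activations
    each scan produced."""
    n = len(acts)
    mu = [min(acts), max(acts)] if k == 2 else sorted(random.sample(acts, k))
    mean = sum(acts) / n
    var = [sum((x - mean) ** 2 for x in acts) / n + 1e-6] * k
    w = [1.0 / k] * k
    for _ in range(iters):
        # E-step: responsibility of each component for each activation
        resp = []
        for x in acts:
            p = [w[j] * math.exp(-0.5 * (x - mu[j]) ** 2 / var[j])
                 / math.sqrt(2 * math.pi * var[j]) for j in range(k)]
            s = sum(p)
            resp.append([pj / s for pj in p])
        # M-step: re-estimate weights, means and variances
        for j in range(k):
            nj = sum(r[j] for r in resp)
            w[j] = nj / n
            mu[j] = sum(r[j] * x for r, x in zip(resp, acts)) / nj
            var[j] = sum(r[j] * (x - mu[j]) ** 2 for r, x in zip(resp, acts)) / nj + 1e-6
    order = sorted(range(k), key=lambda j: mu[j])
    return ([w[j] for j in order] + [mu[j] for j in order]
            + [math.sqrt(var[j]) for j in order])

random.seed(0)  # synthetic bimodal "activations"
acts = [random.gauss(-2, 0.3) for _ in range(200)] + [random.gauss(3, 0.5) for _ in range(200)]
w1, w2, m1, m2, s1, s2 = gmm_cnn_descriptor(acts)
assert abs(m1 + 2) < 0.2 and abs(m2 - 3) < 0.2  # modes recovered
```

The resulting fixed-length vector can then feed any standard classifier, which is the role the random forest plays in the paper.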
18
Chaddad A, Li J, Lu Q, Li Y, Okuwobi IP, Tanougast C, Desrosiers C, Niazi T. Can Autism Be Diagnosed with Artificial Intelligence? A Narrative Review. Diagnostics (Basel) 2021; 11:2032. [PMID: 34829379] [PMCID: PMC8618159] [DOI: 10.3390/diagnostics11112032] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Received: 10/04/2021] [Revised: 10/31/2021] [Accepted: 10/31/2021] [Indexed: 11/16/2022]
Abstract
Radiomics with deep learning models has become popular in computer-aided diagnosis and has outperformed human experts on many clinical tasks. Specifically, radiomic models based on artificial intelligence (AI) use medical data (i.e., images, molecular data, clinical variables, etc.) for predicting clinical conditions such as autism spectrum disorder (ASD). In this review, we summarize and discuss the radiomic techniques used for ASD analysis. Currently, the limited radiomic work on ASD centers on variations in morphological features such as cortical thickness, an approach distinct from texture analysis; these techniques are based on imaging shape features that can be used with predictive models for predicting ASD. This review explores the progress of ASD-based radiomics, with a brief description of ASD and of the current non-invasive techniques used to classify ASD and healthy control (HC) subjects. New radiomic models based on deep learning techniques are also described. To bring texture analysis together with deep CNNs, further investigations with additional validation steps on various MRI sites are suggested.
Affiliation(s)
- Ahmad Chaddad
- School of Artificial Intelligence, Guilin University of Electronic Technology, Guilin 541004, China
- The Laboratory for Imagery, Vision and Artificial Intelligence, École de Technologie Supérieure (ETS), Montreal, QC H3C 1K3, Canada
- Jiali Li
- School of Artificial Intelligence, Guilin University of Electronic Technology, Guilin 541004, China
- Qizong Lu
- School of Artificial Intelligence, Guilin University of Electronic Technology, Guilin 541004, China
- Yujie Li
- School of Artificial Intelligence, Guilin University of Electronic Technology, Guilin 541004, China
- Idowu Paul Okuwobi
- School of Artificial Intelligence, Guilin University of Electronic Technology, Guilin 541004, China
- Camel Tanougast
- Laboratoire de Conception, Optimisation et Modélisation des Systèmes, University of Lorraine, 57070 Metz, France
- Christian Desrosiers
- The Laboratory for Imagery, Vision and Artificial Intelligence, École de Technologie Supérieure (ETS), Montreal, QC H3C 1K3, Canada
- Tamim Niazi
- Lady Davis Institute for Medical Research, McGill University, Montreal, QC H3T 1E2, Canada

19
Delisle PL, Anctil-Robitaille B, Desrosiers C, Lombaert H. Realistic image normalization for multi-domain segmentation. Med Image Anal 2021; 74:102191. [PMID: 34509168] [DOI: 10.1016/j.media.2021.102191] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Received: 09/18/2020] [Revised: 06/22/2021] [Accepted: 07/19/2021] [Indexed: 11/16/2022]
Abstract
Image normalization is a building block in medical image analysis. Conventional approaches are customarily employed on a per-dataset basis. This strategy, however, prevents the current normalization algorithms from fully exploiting the complex joint information available across multiple datasets. Consequently, ignoring such joint information has a direct impact on the processing of segmentation algorithms. This paper proposes to revisit the conventional image normalization approach by, instead, learning a common normalizing function across multiple datasets. Jointly normalizing multiple datasets is shown to yield consistent normalized images as well as improved image segmentation when intensity shifts are large. To do so, a fully automated adversarial and task-driven normalization approach is employed, as it facilitates the training of realistic and interpretable images while keeping performance on par with the state-of-the-art. The adversarial training of our network aims at finding the optimal transfer function to jointly improve both segmentation accuracy and the generation of realistic images. We have evaluated the performance of our normalizer on both infant and adult brain images from the iSEG, MRBrainS and ABIDE datasets. The results indicate that our contribution does provide improved realism to the normalized images, while retaining a segmentation accuracy on par with state-of-the-art learnable normalization approaches.
Affiliation(s)
- Herve Lombaert
- Department of Computer and Software Engineering, ETS Montreal, Canada

20
Abstract
Radiomics has shown remarkable potential for predicting the survival outcome for various types of cancers such as pancreatic ductal adenocarcinoma (PDAC). However, to date, there has been limited research using convolutional neural networks (CNN) with radiomic methods for this task, due to their requirement for large training sets. To overcome this issue, we propose a new type of radiomic descriptor modeling the distribution of learned features with a Gaussian mixture model (GMM). These parametric features (GMM-CNN) are computed from gross tumor volumes of PDAC patients defined semi-automatically in pre-operative computed tomography (CT) scans. We use the proposed GMM-CNN features as input to a robust classifier based on random forests (RF) to predict the survival outcome of patients with PDAC. Our experiments assess the advantage of GMM-CNN compared to employing the same 3D CNN model directly, standard radiomic features (i.e., histogram, texture and shape), conditional entropy (CENT) based on the 3D CNN, and clinical features (i.e., serum carbohydrate antigen 19-9 and neoadjuvant chemotherapy). Using the RF model (100 samples for training; 59 samples for validation), GMM-CNN features provided the highest area under the ROC curve (AUC) of 72.0% (p = 6.4 × 10−5), compared to 64.0% (p = 0.01) for the 3D CNN model output, 66.8% (p = 0.01) for standard radiomic features, 64.2% (p = 0.003) for CENT, and 57.6% (p = 0.3) for clinical variables. Our results suggest that the proposed GMM-CNN features used with an RF classifier can significantly improve the capacity to prognosticate PDAC patients prior to surgery via routinely-acquired imaging data.
21
Kim BN, Dolz J, Jodoin PM, Desrosiers C. Privacy-Net: An Adversarial Approach for Identity-Obfuscated Segmentation of Medical Images. IEEE Trans Med Imaging 2021; 40:1737-1749. [PMID: 33710953] [DOI: 10.1109/tmi.2021.3065727] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Indexed: 06/12/2023]
Abstract
This paper presents a client/server privacy-preserving network in the context of multicentric medical image analysis. Our approach is based on adversarial learning, which encodes images to obfuscate the patient identity while preserving enough information for a target task. Our novel architecture is composed of three components: 1) an encoder network which removes identity-specific features from input medical images, 2) a discriminator network that attempts to identify the subject from the encoded images, 3) a medical image analysis network which analyzes the content of the encoded images (segmentation in our case). By simultaneously fooling the discriminator and optimizing the medical analysis network, the encoder learns to remove privacy-specific features while keeping those essential for the target task. Our approach is illustrated on the problem of segmenting brain MRI from the large-scale Parkinson Progression Marker Initiative (PPMI) dataset. Using longitudinal data from PPMI, we show that the discriminator learns to heavily distort input images while allowing for highly accurate segmentation results. Our results also demonstrate that an encoder trained on the PPMI dataset can be used for segmenting other datasets, without the need for retraining. The code is made available at: https://github.com/bachkimn/Privacy-Net-An-Adversarial-Approach-forIdentity-Obfuscated-Segmentation-of-MedicalImages.
22
Wang P, Peng J, Pedersoli M, Zhou Y, Zhang C, Desrosiers C. Self-paced and self-consistent co-training for semi-supervised image segmentation. Med Image Anal 2021; 73:102146. [PMID: 34274692] [DOI: 10.1016/j.media.2021.102146] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Received: 02/10/2021] [Revised: 05/19/2021] [Accepted: 06/21/2021] [Indexed: 11/25/2022]
Abstract
Deep co-training has recently been proposed as an effective approach for image segmentation when annotated data is scarce. In this paper, we improve existing approaches for semi-supervised segmentation with a self-paced and self-consistent co-training method. To help distill information from unlabeled images, we first design a self-paced learning strategy for co-training that lets jointly-trained neural networks focus on easier-to-segment regions first, and then gradually consider harder ones. This is achieved via an end-to-end differentiable loss in the form of a generalized Jensen-Shannon divergence (JSD). Moreover, to encourage predictions from different networks to be both consistent and confident, we enhance this generalized JSD loss with an uncertainty regularizer based on entropy. The robustness of individual models is further improved using a self-ensembling loss that enforces their predictions to be consistent across different training iterations. We demonstrate the potential of our method on three challenging image segmentation problems with different image modalities, using a small fraction of labeled data. Results show clear advantages in terms of performance compared to the standard co-training baselines and recently proposed state-of-the-art approaches for semi-supervised segmentation.
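The generalized JSD at the heart of this method has a compact closed form: the entropy of the averaged prediction minus the average of the individual entropies. It is zero exactly when all networks agree, which makes it a natural agreement loss. A minimal sketch for K class distributions (e.g., the per-pixel outputs of K co-trained networks):

```python
import math

def entropy(p, eps=1e-12):
    """Shannon entropy of a discrete distribution."""
    return -sum(pi * math.log(max(pi, eps)) for pi in p)

def generalized_jsd(preds):
    """Generalized Jensen-Shannon divergence across K class distributions:
    H(mean of predictions) - mean of H(individual predictions).
    Non-negative by Jensen's inequality; zero iff all predictions agree."""
    k, c = len(preds), len(preds[0])
    mean = [sum(p[j] for p in preds) / k for j in range(c)]
    return entropy(mean) - sum(entropy(p) for p in preds) / k

p1, p2 = [0.9, 0.1], [0.1, 0.9]
assert generalized_jsd([p1, p1]) < 1e-9                       # agreement
assert generalized_jsd([p1, p2]) > generalized_jsd([p1, [0.8, 0.2]])
```

The paper's entropy regularizer can be bolted on by additionally penalizing `entropy(mean)`, pushing the consensus prediction to be confident as well as consistent.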
Affiliation(s)
- Ping Wang
- Department of Software and IT Engineering, Ecole de technologie supérieure, Montreal, H3C 1K3, Canada
- Jizong Peng
- Department of Software and IT Engineering, Ecole de technologie supérieure, Montreal, H3C 1K3, Canada
- Marco Pedersoli
- Department of Software and IT Engineering, Ecole de technologie supérieure, Montreal, H3C 1K3, Canada
- Yuanfeng Zhou
- School of Software, Shandong University, Jinan, 250101, China
- Caiming Zhang
- School of Software, Shandong University, Jinan, 250101, China
- Christian Desrosiers
- Department of Software and IT Engineering, Ecole de technologie supérieure, Montreal, H3C 1K3, Canada

23
Mei J, Desrosiers C, Frasnelli J. Machine Learning for the Diagnosis of Parkinson's Disease: A Review of Literature. Front Aging Neurosci 2021; 13:633752. [PMID: 34025389] [PMCID: PMC8134676] [DOI: 10.3389/fnagi.2021.633752] [Citation(s) in RCA: 70] [Impact Index Per Article: 23.3] [Received: 11/26/2020] [Accepted: 03/22/2021] [Indexed: 12/26/2022]
Abstract
Diagnosis of Parkinson's disease (PD) is commonly based on medical observations and assessment of clinical signs, including the characterization of a variety of motor symptoms. However, traditional diagnostic approaches may suffer from subjectivity as they rely on the evaluation of movements that are sometimes subtle to human eyes and therefore difficult to classify, leading to possible misclassification. In the meantime, early non-motor symptoms of PD may be mild and can be caused by many other conditions. Therefore, these symptoms are often overlooked, making diagnosis of PD at an early stage challenging. To address these difficulties and to refine the diagnosis and assessment procedures of PD, machine learning methods have been implemented for the classification of PD and healthy controls or patients with similar clinical presentations (e.g., movement disorders or other Parkinsonian syndromes). To provide a comprehensive overview of data modalities and machine learning methods that have been used in the diagnosis and differential diagnosis of PD, in this study, we conducted a literature review of studies published until February 14, 2020, using the PubMed and IEEE Xplore databases. A total of 209 studies were included, extracted for relevant information and presented in this review, with an investigation of their aims, sources of data, types of data, machine learning methods and associated outcomes. These studies demonstrate a high potential for adaptation of machine learning methods and novel biomarkers in clinical decision making, leading to increasingly systematic, informed diagnosis of PD.
Affiliation(s)
- Jie Mei
- Chemosensory Neuroanatomy Lab, Department of Anatomy, Université du Québec à Trois-Rivières (UQTR), Trois-Rivières, QC, Canada
- Christian Desrosiers
- Laboratoire d'Imagerie, de Vision et d'Intelligence Artificielle (LIVIA), Department of Software and IT Engineering, École de Technologie Supérieure, Montreal, QC, Canada
- Johannes Frasnelli
- Chemosensory Neuroanatomy Lab, Department of Anatomy, Université du Québec à Trois-Rivières (UQTR), Trois-Rivières, QC, Canada
- Centre de Recherche de l'Hôpital du Sacré-Coeur de Montréal, Centre Intégré Universitaire de Santé et de Services Sociaux du Nord-de-l'Île-de-Montréal (CIUSSS du Nord-de-l'Île-de-Montréal), Montreal, QC, Canada

24
Théberge A, Desrosiers C, Descoteaux M, Jodoin PM. Track-to-Learn: A general framework for tractography with deep reinforcement learning. Med Image Anal 2021; 72:102093. [PMID: 34023562] [DOI: 10.1016/j.media.2021.102093] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Received: 11/15/2020] [Revised: 04/15/2021] [Accepted: 04/22/2021] [Indexed: 11/26/2022]
Abstract
Diffusion MRI tractography is currently the only non-invasive tool able to assess the white-matter structural connectivity of a brain. Since its inception, it has been widely documented that tractography is prone to producing erroneous tracks while missing true positive connections. Recently, supervised learning algorithms have been proposed to learn the tracking procedure implicitly from data, without relying on anatomical priors. However, these methods rely on curated streamlines that are very hard to obtain. To remove the need for such data but still leverage the expressiveness of neural networks, we introduce Track-To-Learn: A general framework to pose tractography as a deep reinforcement learning problem. Deep reinforcement learning is a type of machine learning that does not depend on ground-truth data but rather on the concept of "reward". We implement and train algorithms to maximize returns from a reward function based on the alignment of streamlines with principal directions extracted from diffusion data. We show competitive results on known data and little loss of performance when generalizing to new, unseen data, compared to prior machine learning-based tractography algorithms. To the best of our knowledge, this is the first successful use of deep reinforcement learning for tractography.
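The reward described above — alignment of a tracking step with the principal diffusion directions extracted from the data — can be sketched as an absolute cosine similarity against the best-matching peak. This is a simplified version of the Track-to-Learn reward, ignoring its streamline-smoothness and masking terms:

```python
import math

def alignment_reward(step, peak_dirs):
    """Reward for one tracking step: absolute cosine similarity between the
    agent's step and the best-aligned principal diffusion direction at the
    current voxel. The absolute value is taken because fODF peaks carry no
    preferred sign."""
    def norm(v):
        n = math.sqrt(sum(x * x for x in v))
        return [x / n for x in v]
    s = norm(step)
    return max(abs(sum(a * b for a, b in zip(norm(p), s))) for p in peak_dirs)

peaks = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]       # two crossing-fiber peaks
assert abs(alignment_reward((2.0, 0.0, 0.0), peaks) - 1.0) < 1e-9
assert alignment_reward((1.0, 0.0, 0.0), peaks) > alignment_reward((1.0, 1.0, 1.0), peaks)
```

In the RL formulation, the agent's state is the local diffusion signal along the streamline, its action is the next step direction, and summing this per-step reward over a streamline is what the policy is trained to maximize.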
Affiliation(s)
- Antoine Théberge
- Faculté des Sciences, Université de Sherbrooke, Sherbrooke, QC, CA, J1K 2R1
- Christian Desrosiers
- Département de génie logiciel et des TI, École de technologie supérieure, Montréal, QC, CA, H3C 1K3
- Maxime Descoteaux
- Faculté des Sciences, Université de Sherbrooke, Sherbrooke, QC, CA, J1K 2R1
- Pierre-Marc Jodoin
- Faculté des Sciences, Université de Sherbrooke, Sherbrooke, QC, CA, J1K 2R1

25
Chaddad A, Hassan L, Desrosiers C. Deep CNN models for predicting COVID-19 in CT and x-ray images. J Med Imaging (Bellingham) 2021; 8:014502. [PMID: 33912622] [PMCID: PMC8071782] [DOI: 10.1117/1.jmi.8.s1.014502] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Received: 12/16/2020] [Accepted: 03/26/2021] [Indexed: 01/12/2023]
Abstract
Purpose: Coronavirus disease 2019 (COVID-19) is a new infection that has spread worldwide, with no automatic model to reliably detect its presence from images. We aim to investigate the potential of deep transfer learning to predict COVID-19 infection using chest computed tomography (CT) and x-ray images. Approach: Regions of interest (ROI) corresponding to ground-glass opacities (GGO), consolidations, and pleural effusions were labeled in 100 axial lung CT images from 60 COVID-19-infected subjects. These segmented regions were then employed as an additional input to six deep convolutional neural network (CNN) architectures (AlexNet, DenseNet, GoogleNet, NASNet-Mobile, ResNet18, and DarkNet), pretrained on natural images, to differentiate between COVID-19 and normal CT images. We also explored the models' ability to classify x-ray images as COVID-19, non-COVID-19 pneumonia, or normal. Performance on test images was measured with global accuracy and area under the receiver operating characteristic curve (AUC). Results: When using raw CT images as input to the tested models, the highest accuracy of 82% and AUC of 88.16% is achieved. Incorporating the three ROIs as additional model inputs further boosts performance to an accuracy of 82.30% and an AUC of 90.10% (DarkNet). For x-ray images, we obtained an outstanding AUC of 97% for classifying COVID-19 versus normal versus other. Combining chest CT and x-ray images, the DarkNet architecture achieves the highest accuracy of 99.09% and AUC of 99.89% in classifying COVID-19 from non-COVID-19. Our results confirm the ability of deep CNNs with transfer learning to predict COVID-19 in both chest CT and x-ray images. Conclusions: The proposed method could help radiologists increase the accuracy of their diagnosis and increase efficiency in COVID-19 management.
Affiliation(s)
- Ahmad Chaddad
- Guilin University of Electronic Technology, School of Artificial Intelligence, Guilin, China
- Lama Hassan
- Guilin University of Electronic Technology, School of Artificial Intelligence, Guilin, China

26
Kervadec H, Bouchtiba J, Desrosiers C, Granger E, Dolz J, Ben Ayed I. Boundary loss for highly unbalanced segmentation. Med Image Anal 2020; 67:101851. [PMID: 33080507] [DOI: 10.1016/j.media.2020.101851] [Citation(s) in RCA: 52] [Impact Index Per Article: 13.0] [Received: 10/01/2019] [Revised: 05/29/2020] [Accepted: 07/24/2020] [Indexed: 12/27/2022]
Abstract
Widely used loss functions for CNN segmentation, e.g., Dice or cross-entropy, are based on integrals over the segmentation regions. Unfortunately, for highly unbalanced segmentations, such regional summations have values that differ by several orders of magnitude across classes, which affects training performance and stability. We propose a boundary loss, which takes the form of a distance metric on the space of contours, not regions. This can mitigate the difficulties of highly unbalanced problems because it uses integrals over the interface between regions instead of unbalanced integrals over the regions. Furthermore, a boundary loss complements regional information. Inspired by graph-based optimization techniques for computing active-contour flows, we express a non-symmetric L2 distance on the space of contours as a regional integral, which entirely avoids local differential computations involving contour points. This yields a boundary loss expressed with the regional softmax probability outputs of the network, which can be easily combined with standard regional losses and implemented with any existing deep network architecture for N-D segmentation. We report comprehensive evaluations and comparisons on different unbalanced problems, showing that our boundary loss can yield significant increases in performance while improving training stability. Our code is publicly available.
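On a small grid, this regional formulation of the boundary loss reduces to summing the predicted foreground probabilities weighted by a precomputed signed distance map of the ground truth. A brute-force sketch (in practice the distance map comes from an efficient distance transform, computed once per label map):

```python
import math

def signed_distance(gt):
    """Brute-force signed distance of each pixel to the opposite region of a
    small binary mask: negative inside the object, positive outside. This
    approximates distance to the region boundary well enough for a sketch."""
    h, w = len(gt), len(gt[0])
    cells = [(i, j) for i in range(h) for j in range(w)]
    fg = [c for c in cells if gt[c[0]][c[1]]]
    bg = [c for c in cells if not gt[c[0]][c[1]]]
    dist = [[0.0] * w for _ in range(h)]
    for i, j in cells:
        opposite = bg if gt[i][j] else fg
        d = min(math.hypot(i - a, j - b) for a, b in opposite)
        dist[i][j] = -d if gt[i][j] else d
    return dist

def boundary_loss(probs, gt):
    """Boundary loss: mean of softmax foreground probabilities weighted by
    the signed distance map, so probability mass placed far outside the
    object costs more than mass near its boundary, and mass deep inside the
    object is rewarded."""
    dist = signed_distance(gt)
    h, w = len(gt), len(gt[0])
    return sum(dist[i][j] * probs[i][j] for i in range(h) for j in range(w)) / (h * w)

gt = [[1 if 2 <= i < 6 and 2 <= j < 6 else 0 for j in range(8)] for i in range(8)]
perfect = [[float(v) for v in row] for row in gt]
shifted = [row[2:] + row[:2] for row in perfect]   # misplaced prediction
assert boundary_loss(perfect, gt) < boundary_loss(shifted, gt)
```

Because the loss is linear in the network's probabilities, it is trivially differentiable and can be added to Dice or cross-entropy with a weighting schedule, which is how the paper deploys it.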
Affiliation(s)
- Hoel Kervadec
- ÉTS Montréal, CRCHUM (University of Montreal Hospital Centre), Canada
- Jihene Bouchtiba
- ÉTS Montréal, CRCHUM (University of Montreal Hospital Centre), Canada
- Eric Granger
- ÉTS Montréal, CRCHUM (University of Montreal Hospital Centre), Canada
- Jose Dolz
- ÉTS Montréal, CRCHUM (University of Montreal Hospital Centre), Canada
- Ismail Ben Ayed
- ÉTS Montréal, CRCHUM (University of Montreal Hospital Centre), Canada

27
Zhang M, Desrosiers C, Guo Y, Khundrakpam B, Al-Sharif N, Kiar G, Valdes-Sosa P, Poline JB, Evans A. Brain status modeling with non-negative projective dictionary learning. Neuroimage 2020; 206:116226. [PMID: 31593792] [DOI: 10.1016/j.neuroimage.2019.116226] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Received: 03/29/2019] [Revised: 09/01/2019] [Accepted: 09/24/2019] [Indexed: 02/02/2023]
Abstract
Accurate prediction of individuals' brain age is critical to establish a baseline for normal brain development. This study proposes to model brain development with a novel non-negative projective dictionary learning (NPDL) approach, which learns a discriminative representation of multi-modal neuroimaging data for predicting brain age. Our approach encodes the variability of subjects in different age groups using separate dictionaries, projecting features into a low-dimensional manifold such that information is preserved only for the corresponding age group. The proposed framework improves upon previous discriminative dictionary learning methods by incorporating orthogonality and non-negativity constraints, which remove representation redundancy and perform implicit feature selection. We study brain development on multi-modal brain imaging data from the PING dataset (N = 841, age = 3-21 years). The proposed analysis uses our NPDL framework to predict the age of subjects based on cortical measures from T1-weighted MRI and connectomes from diffusion weighted imaging (DWI). We also investigate the association between age prediction and cognition, and study the influence of gender on prediction accuracy. Experimental results demonstrate the usefulness of NPDL for modeling brain development.
Affiliation(s)
- Mingli Zhang
- Montreal Neurological Institute, McGill University, Montreal, H3A 2B4, Canada
- Christian Desrosiers
- Department of Software and IT Engineering, École de Technologie supérieure (ETS), Montreal, H3C 1K3, Canada
- Yuhong Guo
- School of Computer Science, Carleton University, Canada
- Noor Al-Sharif
- Montreal Neurological Institute, McGill University, Montreal, H3A 2B4, Canada
- Greg Kiar
- Montreal Neurological Institute, McGill University, Montreal, H3A 2B4, Canada
- Pedro Valdes-Sosa
- University of Electronic Science and Technology of China / Cuban Neuroscience Center, China
- Alan Evans
- Montreal Neurological Institute, McGill University, Montreal, H3A 2B4, Canada

28
Chauvin L, Kumar K, Wachinger C, Vangel M, de Guise J, Desrosiers C, Wells W, Toews M. Neuroimage signature from salient keypoints is highly specific to individuals and shared by close relatives. Neuroimage 2020; 204:116208. [PMID: 31546048] [PMCID: PMC6931906] [DOI: 10.1016/j.neuroimage.2019.116208] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Received: 06/27/2019] [Revised: 09/05/2019] [Accepted: 09/17/2019] [Indexed: 01/12/2023]
Abstract
Neuroimaging studies typically adopt a common feature space for all data, which may obscure aspects of neuroanatomy only observable in subsets of a population, e.g. cortical folding patterns unique to individuals or shared by close relatives. Here, we propose to model individual variability using a distinctive keypoint signature: a set of unique, localized patterns, detected automatically in each image by a generic saliency operator. The similarity of an image pair is then quantified by the proportion of keypoints they share using a novel Jaccard-like measure of set overlap. Experiments demonstrate the keypoint method to be highly efficient and accurate, using a set of 7536 T1-weighted MRIs pooled from four public neuroimaging repositories, including twins, non-twin siblings, and 3334 unique subjects. All same-subject image pairs are identified by a similarity threshold despite confounds including aging and neurodegenerative disease progression. Outliers reveal previously unknown data labeling inconsistencies, demonstrating the usefulness of the keypoint signature as a computational tool for curating large neuroimage datasets.
Affiliation(s)
- Christian Wachinger
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Boston, USA; Laboratory for Artificial Intelligence in Medical Imaging, University Hospital, LMU, Munich, Germany
- Marc Vangel
- Massachusetts General Hospital, Harvard Medical School, Boston, USA
- William Wells
- Brigham and Women's Hospital, Harvard Medical School, Boston, USA; Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Boston, USA

29
Chaddad A, Daniel P, Sabri S, Desrosiers C, Abdulkarim B. Integration of Radiomic and Multi-omic Analyses Predicts Survival of Newly Diagnosed IDH1 Wild-Type Glioblastoma. Cancers (Basel) 2019; 11:1148. [PMID: 31405148] [PMCID: PMC6721570] [DOI: 10.3390/cancers11081148] [Citation(s) in RCA: 35] [Impact Index Per Article: 7.0] [Received: 07/15/2019] [Revised: 08/05/2019] [Accepted: 08/08/2019] [Indexed: 12/21/2022]
Abstract
Predictors of patient outcome derived from gene methylation, mutation, or expression are severely limited in IDH1 wild-type glioblastoma (GBM). Radiomics offers an alternative insight into tumor characteristics which can provide complementary information for predictive models. The study aimed to evaluate whether predictive models which integrate radiomic, gene, and clinical (multi-omic) features together offer an increased capacity to predict patient outcome. A dataset comprising 200 IDH1 wild-type GBM patients, derived from The Cancer Imaging Archive (TCIA) (n = 71) and the McGill University Health Centre (n = 129), was used in this study. Radiomic features (n = 45) were extracted from tumor volumes then correlated to biological variables and clinical outcomes. By performing 10-fold cross-validation (n = 200) and utilizing independent training/testing datasets (n = 100/100), an integrative model was derived from multi-omic features and evaluated for predictive strength. Integrative models using a limited panel of radiomic (sum of squares variance, large zone/low gray emphasis, autocorrelation), clinical (therapy type, age), genetic (CIC, PIK3R1, FUBP1) and protein expression (p53, vimentin) features yielded a maximal AUC of 78.24% (p = 2.9 × 10−5). We posit that multi-omic models using the limited set of 'omic' features outlined above can improve the capacity to predict the outcome for IDH1 wild-type GBM patients.
Affiliation(s)
- Ahmad Chaddad
- Division of Radiation Oncology, Department of Oncology, McGill University, Montreal, QC H4A 3J1, Canada
- The Laboratory for Imagery, Vision and Artificial Intelligence, École de Technologie Supérieure (ETS), Montréal, QC H3C 1K3, Canada
- Paul Daniel
- Division of Radiation Oncology, Department of Oncology, McGill University, Montreal, QC H4A 3J1, Canada
- Siham Sabri
- Department of Pathology, McGill University, Montreal, QC H4A 3J1, Canada
- Research Institute of the McGill University Health Centre, Glen Site, Montreal, QC H4A 3J1, Canada
- Christian Desrosiers
- The Laboratory for Imagery, Vision and Artificial Intelligence, École de Technologie Supérieure (ETS), Montréal, QC H3C 1K3, Canada
- Bassam Abdulkarim
- Division of Radiation Oncology, Department of Oncology, McGill University, Montreal, QC H4A 3J1, Canada
- Research Institute of the McGill University Health Centre, Glen Site, Montreal, QC H4A 3J1, Canada
30
Gopinath K, Desrosiers C, Lombaert H. Graph Convolutions on Spectral Embeddings for Cortical Surface Parcellation. Med Image Anal 2019; 54:297-305. [DOI: 10.1016/j.media.2019.03.012] [Citation(s) in RCA: 12]
31
Dolz J, Gopinath K, Yuan J, Lombaert H, Desrosiers C, Ben Ayed I. HyperDense-Net: A Hyper-Densely Connected CNN for Multi-Modal Image Segmentation. IEEE Trans Med Imaging 2019; 38:1116-1126. [PMID: 30387726] [DOI: 10.1109/tmi.2018.2878669] [Citation(s) in RCA: 146]
Abstract
Recently, dense connections have attracted substantial attention in computer vision because they facilitate gradient flow and implicit deep supervision during training. In particular, DenseNet, which connects each layer to every other layer in a feed-forward fashion, has shown impressive performance in natural image classification tasks. We propose HyperDenseNet, a 3-D fully convolutional neural network that extends the definition of dense connectivity to multi-modal segmentation problems. Each imaging modality has a path, and dense connections occur not only between the pairs of layers within the same path but also between those across different paths. This contrasts with existing multi-modal CNN approaches, in which modeling several modalities relies entirely on a single joint layer (or level of abstraction) for fusion, typically either at the input or at the output of the network. The proposed network therefore has total freedom to learn more complex combinations between the modalities, within and in-between all levels of abstraction, which significantly enriches the learned representation. We report extensive evaluations over two different and highly competitive multi-modal brain tissue segmentation challenges, iSEG 2017 and MRBrainS 2013, with the former focusing on six-month infant data and the latter on adult images. HyperDenseNet yielded significant improvements over many state-of-the-art segmentation networks, ranking at the top on both benchmarks. We further provide a comprehensive experimental analysis of feature re-use, which confirms the importance of hyper-dense connections in multi-modal representation learning. Our code is publicly available.
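As a rough illustration of hyper-dense connectivity (a toy sketch, not the authors' implementation), the bookkeeping below shows how the input width of each layer grows when every layer receives the concatenated outputs of all earlier layers from both modality paths; the layer count and per-layer growth are made-up values.

```python
def hyperdense_in_channels(n_layers, growth, n_paths=2, n_input=1):
    """Toy channel bookkeeping: in a hyper-densely connected network, layer l
    of each path sees the concatenation of the inputs and all l earlier layer
    outputs from every path, so its input width grows with depth."""
    return [n_paths * (n_input + l * growth) for l in range(n_layers)]

# Hypothetical settings: 2 modalities, 32 feature maps produced per layer
print(hyperdense_in_channels(4, growth=32))  # [2, 66, 130, 194]
```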
32
Wang L, Nie D, Li G, Puybareau E, Dolz J, Zhang Q, Wang F, Xia J, Wu Z, Chen J, Thung KH, Bui TD, Shin J, Zeng G, Zheng G, Fonov VS, Doyle A, Xu Y, Moeskops P, Pluim JPW, Desrosiers C, Ayed IB, Sanroma G, Benkarim OM, Casamitjana A, Vilaplana V, Lin W, Li G, Shen D. Benchmark on Automatic 6-month-old Infant Brain Segmentation Algorithms: The iSeg-2017 Challenge. IEEE Trans Med Imaging 2019; 38:2219-2230. [PMID: 30835215] [PMCID: PMC6754324] [DOI: 10.1109/tmi.2019.2901712] [Citation(s) in RCA: 63]
Abstract
Accurate segmentation of infant brain magnetic resonance (MR) images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) is an indispensable foundation for the early study of brain growth patterns and morphological changes in neurodevelopmental disorders. Nevertheless, in the isointense phase (approximately 6-9 months of age), due to the ongoing myelination and maturation process, WM and GM exhibit similar levels of intensity in both T1-weighted (T1w) and T2-weighted (T2w) MR images, making tissue segmentation very challenging. Although many efforts have been devoted to brain segmentation, only a few studies have focused on the segmentation of 6-month infant brain images. With the aim of boosting methodological development in the community, the iSeg-2017 challenge (http://iseg2017.web.unc.edu) provides a set of 6-month infant subjects with manual labels for training and testing the participating methods. Among the 21 automatic segmentation methods participating in iSeg-2017, we review the 8 top-ranked teams in terms of Dice ratio, modified Hausdorff distance and average surface distance, and introduce their pipelines, implementations, and source codes. We further discuss limitations and possible future directions. We hope the dataset in iSeg-2017 and this review article can provide insights into methodological development for the community.
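The Dice ratio used to rank the challenge entries is straightforward to compute per tissue class; a minimal sketch on a toy label map (the tiny arrays and label assignments are invented for illustration):

```python
import numpy as np

def dice(seg, ref, label):
    """Dice overlap for one tissue label between a predicted segmentation
    and the manual reference: 2|A∩B| / (|A| + |B|)."""
    a, b = (seg == label), (ref == label)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy 2x3 label maps (0 = CSF, 1 = GM, 2 = WM in this made-up example)
pred  = np.array([[1, 1, 2], [0, 2, 2]])
truth = np.array([[1, 2, 2], [0, 2, 2]])
print(dice(pred, truth, 2))  # 2*3 / (3+4) ≈ 0.857
```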
33
Dolz J, Desrosiers C, Ben Ayed I. IVD-Net: Intervertebral Disc Localization and Segmentation in MRI with a Multi-modal UNet. Lecture Notes in Computer Science 2019. [DOI: 10.1007/978-3-030-13736-6_11] [Citation(s) in RCA: 19]
34
Dolz J, Ben Ayed I, Desrosiers C. Dense Multi-path U-Net for Ischemic Stroke Lesion Segmentation in Multiple Image Modalities. Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries 2019. [DOI: 10.1007/978-3-030-11723-8_27] [Citation(s) in RCA: 23]
35
Carass A, Cuzzocreo JL, Han S, Hernandez-Castillo CR, Rasser PE, Ganz M, Beliveau V, Dolz J, Ben Ayed I, Desrosiers C, Thyreau B, Romero JE, Coupé P, Manjón JV, Fonov VS, Collins DL, Ying SH, Onyike CU, Crocetti D, Landman BA, Mostofsky SH, Thompson PM, Prince JL. Comparing fully automated state-of-the-art cerebellum parcellation from magnetic resonance images. Neuroimage 2018; 183:150-172. [PMID: 30099076] [PMCID: PMC6271471] [DOI: 10.1016/j.neuroimage.2018.08.003] [Citation(s) in RCA: 55]
Abstract
The human cerebellum plays an essential role in motor control, is involved in cognitive functions (i.e., attention, working memory, and language), and helps to regulate emotional responses. Quantitative in-vivo assessment of the cerebellum is important in the study of several neurological diseases including cerebellar ataxia, autism, and schizophrenia. Different structural subdivisions of the cerebellum have been shown to correlate with differing pathologies. To further understand these pathologies, it is helpful to automatically parcellate the cerebellum at the highest fidelity possible. In this paper, we coordinated with colleagues around the world to evaluate automated cerebellum parcellation algorithms on two clinical cohorts, showing that the cerebellum can be parcellated to a high accuracy by newer methods. We characterize these various methods at four hierarchical levels: coarse (i.e., whole cerebellum and gross structures), lobe, subdivisions of the vermis, and the lobules. Due to the number of labels, the hierarchy of labels, the number of algorithms, and the two cohorts, we have restricted our analyses to the Dice measure of overlap. Under these conditions, machine-learning-based methods provide a collection of strategies that are efficient and deliver parcellations of a high standard across both cohorts, surpassing previous work in the area. In conjunction with the rank-sum computation, we identified an overall winning method.
Affiliation(s)
- Aaron Carass
- Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD, 21218, USA; Department of Computer Science, The Johns Hopkins University, Baltimore, MD, 21218, USA
- Jennifer L Cuzzocreo
- Department of Radiology, The Johns Hopkins School of Medicine, Baltimore, MD, 21287, USA
- Shuo Han
- Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, MD, 21218, USA; Laboratory of Behavioral Neuroscience, National Institute on Aging, National Institutes of Health, Baltimore, MD, 20892, USA
- Carlos R Hernandez-Castillo
- Consejo Nacional de Ciencia y Tecnología, Instituto de Neuroetología, Universidad Veracruzana, Xalapa, Mexico
- Paul E Rasser
- Priority Research Centre for Brain & Mental Health and Stroke & Brain Injury, University of Newcastle, Callaghan, NSW, Australia
- Melanie Ganz
- Neurobiology Research Unit, Rigshospitalet, Copenhagen, Denmark; Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
- Vincent Beliveau
- Neurobiology Research Unit, Rigshospitalet, Copenhagen, Denmark; Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark
- Jose Dolz
- Laboratory for Imagery, Vision, and Artificial Intelligence, École de Technologie Supérieure, Montreal, QC, Canada
- Ismail Ben Ayed
- Laboratory for Imagery, Vision, and Artificial Intelligence, École de Technologie Supérieure, Montreal, QC, Canada
- Christian Desrosiers
- Laboratory for Imagery, Vision, and Artificial Intelligence, École de Technologie Supérieure, Montreal, QC, Canada
- Benjamin Thyreau
- Institute of Development, Aging and Cancer, Tohoku University, Japan
- José E Romero
- Instituto Universitario de Tecnologías de la Información y Comunicaciones (ITACA), Universitat Politècnica de València, Camino de Vera s/n, 46022, Valencia, Spain
- Pierrick Coupé
- University of Bordeaux, LaBRI, UMR 5800, PICTURA, Talence, F-33400, France; CNRS, LaBRI, UMR 5800, PICTURA, Talence, F-33400, France
- José V Manjón
- Instituto Universitario de Tecnologías de la Información y Comunicaciones (ITACA), Universitat Politècnica de València, Camino de Vera s/n, 46022, Valencia, Spain
- Vladimir S Fonov
- Image Processing Laboratory, Montreal Neurological Institute, McGill University, Montreal, Quebec, Canada
- D Louis Collins
- Image Processing Laboratory, Montreal Neurological Institute, McGill University, Montreal, Quebec, Canada
- Sarah H Ying
- Department of Neurology, The Johns Hopkins School of Medicine, Baltimore, MD, 21287, USA
- Chiadi U Onyike
- Department of Psychiatry and Behavioral Sciences, The Johns Hopkins School of Medicine, Baltimore, MD, 21287, USA
- Deana Crocetti
- Center for Neurodevelopmental Medicine and Imaging Research, Kennedy Krieger Institute, Baltimore, MD, 21205, USA
- Bennett A Landman
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, 37235, USA
- Stewart H Mostofsky
- Center for Neurodevelopmental Medicine and Imaging Research, Kennedy Krieger Institute, Baltimore, MD, 21205, USA; Department of Neurology, The Johns Hopkins School of Medicine, Baltimore, MD, 21287, USA; Department of Psychiatry and Behavioral Sciences, The Johns Hopkins School of Medicine, Baltimore, MD, 21287, USA
- Paul M Thompson
- Imaging Genetics Center, Mark and Mary Stevens Institute for Neuroimaging and Informatics, Keck School of Medicine, University of Southern California, Marina del Rey, CA, 90292, USA; Departments of Neurology, Pediatrics, Psychiatry, Radiology, Engineering, and Ophthalmology, University of Southern California, Los Angeles, CA, 90033, USA
- Jerry L Prince
- Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD, 21218, USA; Department of Computer Science, The Johns Hopkins University, Baltimore, MD, 21218, USA
36
Dolz J, Xu X, Rony J, Yuan J, Liu Y, Granger E, Desrosiers C, Zhang X, Ben Ayed I, Lu H. Multiregion segmentation of bladder cancer structures in MRI with progressive dilated convolutional networks. Med Phys 2018; 45:5482-5493. [DOI: 10.1002/mp.13240] [Citation(s) in RCA: 45]
Affiliation(s)
- Jose Dolz
- Laboratory for Imagery, Vision and Artificial Intelligence (LIVIA), École de technologie supérieure, Montréal, QC H3C 1K3, Canada
- Xiaopan Xu
- School of Biomedical Engineering, Fourth Military Medical University, Xi'an, Shaanxi 710032, China
- Jérôme Rony
- Laboratory for Imagery, Vision and Artificial Intelligence (LIVIA), École de technologie supérieure, Montréal, QC H3C 1K3, Canada
- Jing Yuan
- School of Mathematics and Statistics, Xidian University, Xi'an 710071, China
- Yang Liu
- School of Biomedical Engineering, Fourth Military Medical University, Xi'an, Shaanxi 710032, China
- Eric Granger
- Laboratory for Imagery, Vision and Artificial Intelligence (LIVIA), École de technologie supérieure, Montréal, QC H3C 1K3, Canada
- Christian Desrosiers
- Laboratory for Imagery, Vision and Artificial Intelligence (LIVIA), École de technologie supérieure, Montréal, QC H3C 1K3, Canada
- Xi Zhang
- School of Biomedical Engineering, Fourth Military Medical University, Xi'an, Shaanxi 710032, China
- Ismail Ben Ayed
- Laboratory for Imagery, Vision and Artificial Intelligence (LIVIA), École de technologie supérieure, Montréal, QC H3C 1K3, Canada
- Hongbing Lu
- School of Biomedical Engineering, Fourth Military Medical University, Xi'an, Shaanxi 710032, China
37
Zhang M, Desrosiers C. High-quality Image Restoration Using Low-Rank Patch Regularization and Global Structure Sparsity. IEEE Trans Image Process 2018; 28:868-879. [PMID: 30296228] [DOI: 10.1109/tip.2018.2874284] [Citation(s) in RCA: 9]
Abstract
In recent years, approaches based on nonlocal self-similarity and global structure regularization have led to significant improvements in image restoration. Nonlocal self-similarity exploits the repetitiveness of small image patches as a powerful prior in the reconstruction process. Likewise, global structure regularization is based on the principle that the structure of objects in the image is represented by a relatively small portion of pixels. Enforcing this structural information to be sparse can thus reduce the occurrence of reconstruction artifacts. So far, most image restoration approaches have considered one of these two strategies, but not both. This paper presents a novel image restoration method that combines nonlocal self-similarity and global structure sparsity in a single efficient model. Groups of similar patches are reconstructed simultaneously, via an adaptive regularization technique based on the weighted nuclear norm. Moreover, global structure is preserved using an innovative strategy, which decomposes the image into a smooth component and a sparse residual, the latter regularized using the l1 norm. An optimization technique, based on the Alternating Direction Method of Multipliers (ADMM) algorithm, is used to recover corrupted images efficiently. The performance of the proposed method is evaluated on two important image restoration tasks: image completion and super-resolution. Experimental results show our method to outperform state-of-the-art approaches for these tasks, for various types and levels of image corruption.
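The weighted nuclear norm has a closed-form proximal step, weighted singular-value soft-thresholding, which is the core of such patch-group reconstructions. The sketch below is a simplified stand-in (the 2x2 "patch group" and the weights are arbitrary toy values, not the paper's adaptive scheme):

```python
import numpy as np

def svt(X, weights):
    """Weighted singular-value soft-thresholding: the proximal operator of the
    weighted nuclear norm, applied here to a matrix of stacked similar patches."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - weights, 0.0)) @ Vt

# Toy "patch group": nearly rank-1; shrinking the tiny second singular value
# removes the noise-like component while keeping the shared structure
G = np.array([[1.0, 1.00],
              [1.0, 1.01]])
low_rank = svt(G, weights=np.array([0.0, 0.05]))
print(np.linalg.matrix_rank(low_rank))  # 1
```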
38
Kumar K, Toews M, Chauvin L, Colliot O, Desrosiers C. Multi-modal brain fingerprinting: A manifold approximation based framework. Neuroimage 2018; 183:212-226. [PMID: 30099077] [DOI: 10.1016/j.neuroimage.2018.08.006] [Citation(s) in RCA: 18]
Abstract
This work presents an efficient framework, based on manifold approximation, for generating brain fingerprints from multi-modal data. The proposed framework represents images as bags of local features which are used to build a subject proximity graph. Compact fingerprints are obtained by projecting this graph in a low-dimensional manifold using spectral embedding. Experiments using the T1/T2-weighted MRI, diffusion MRI, and resting-state fMRI data of 945 Human Connectome Project subjects demonstrate the benefit of combining multiple modalities, with multi-modal fingerprints more discriminative than those generated from individual modalities. Results also highlight the link between fingerprint similarity and genetic proximity, monozygotic twins having more similar fingerprints than dizygotic or non-twin siblings. This link is also reflected in the differences of feature correspondences between twin/sibling pairs, occurring in major brain structures and across hemispheres. The robustness of the proposed framework to factors like image alignment and scan resolution, as well as the reproducibility of results on retest scans, suggest the potential of multi-modal brain fingerprinting for characterizing individuals in a large cohort analysis.
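Projecting a subject-proximity graph into a low-dimensional manifold corresponds to standard Laplacian spectral embedding; a minimal sketch on a made-up four-subject graph (not HCP data, and a simplification of the paper's pipeline):

```python
import numpy as np

def spectral_embedding(W, dim=2):
    """Embed a proximity graph (symmetric weight matrix W) into a
    low-dimensional space using the smallest non-trivial eigenvectors
    of the unnormalized graph Laplacian."""
    L = np.diag(W.sum(axis=1)) - W
    vals, vecs = np.linalg.eigh(L)          # eigenvalues in ascending order
    return vecs[:, 1:dim + 1]               # skip the constant eigenvector

# Made-up graph: two tightly linked pairs of "subjects", weak cross links
W = np.array([[0.0, 1.0, 0.1, 0.1],
              [1.0, 0.0, 0.1, 0.1],
              [0.1, 0.1, 0.0, 1.0],
              [0.1, 0.1, 1.0, 0.0]])
emb = spectral_embedding(W, dim=1)
# Subjects in the same pair share the sign of the first spectral coordinate
print(emb[0, 0] * emb[1, 0] > 0, emb[0, 0] * emb[2, 0] < 0)  # True True
```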
Affiliation(s)
- Kuldeep Kumar
- Laboratory for Imagery, Vision and Artificial Intelligence, École de technologie supérieure, 1100 Notre-Dame W., Montreal, QC, H3C 1K3, Canada; Inria Paris, Aramis Project-Team, 75013, Paris, France
- Matthew Toews
- Laboratory for Imagery, Vision and Artificial Intelligence, École de technologie supérieure, 1100 Notre-Dame W., Montreal, QC, H3C 1K3, Canada
- Laurent Chauvin
- Laboratory for Imagery, Vision and Artificial Intelligence, École de technologie supérieure, 1100 Notre-Dame W., Montreal, QC, H3C 1K3, Canada
- Olivier Colliot
- Sorbonne Universités, UPMC Univ Paris 06, Inserm, CNRS, Institut du cerveau et la moelle (ICM) - Hôpital Pitié-Salpêtrière, Boulevard de l'hôpital, F-75013, Paris, France; Inria Paris, Aramis Project-Team, 75013, Paris, France; AP-HP, Departments of Neurology and Neuroradiology, Hôpital Pitié-Salpêtrière, 75013, Paris, France
- Christian Desrosiers
- Laboratory for Imagery, Vision and Artificial Intelligence, École de technologie supérieure, 1100 Notre-Dame W., Montreal, QC, H3C 1K3, Canada
39
Mhiri M, Desrosiers C, Cheriet M. Convolutional pyramid of bidirectional character sequences for the recognition of handwritten words. Pattern Recognit Lett 2018. [DOI: 10.1016/j.patrec.2018.04.025] [Citation(s) in RCA: 4]
40
Chaddad A, Daniel P, Desrosiers C, Toews M, Abdulkarim B. Novel Radiomic Features Based on Joint Intensity Matrices for Predicting Glioblastoma Patient Survival Time. IEEE J Biomed Health Inform 2018; 23:795-804. [PMID: 29993848] [DOI: 10.1109/jbhi.2018.2825027] [Citation(s) in RCA: 45]
Abstract
This paper presents a novel set of image texture features generalizing standard grey-level co-occurrence matrices (GLCM) to multimodal image data through joint intensity matrices (JIMs). These are used to predict the survival of glioblastoma multiforme (GBM) patients from multimodal MRI data. The scans of 73 GBM patients from the Cancer Imaging Archive are used in our study. Necrosis, active tumor, and edema/invasion subregions of GBM phenotypes are segmented using the coregistration of contrast-enhanced T1-weighted (CE-T1) images and their corresponding fluid-attenuated inversion recovery (FLAIR) images. Texture features are then computed from the JIM of these GBM subregions, and a random forest model is employed to classify patients into short or long survival groups. Our survival analysis identified JIM features in necrotic (e.g., entropy and inverse-variance) and edema (e.g., entropy and contrast) subregions that are moderately correlated with survival time (i.e., Spearman rank correlation of 0.35). Moreover, nine features were found to be associated with GBM survival, with a hazard-ratio range of 0.38-2.1 and a significance level of p < 0.05 following Holm-Bonferroni correction. These features also led to the highest accuracy in a univariate analysis for predicting the survival group of patients, with AUC values in the range of 68-70%. Considering multiple features for this task, JIM features led to significantly higher AUC values than those based on standard GLCMs and gene expression. Furthermore, an AUC of 77.56% with p = 0.003 was achieved when combining JIM, GLCM, and gene expression features into a single radiogenomic signature. In summary, our study demonstrates the usefulness of modeling the joint intensity characteristics of CE-T1 and FLAIR images for predicting the prognosis of patients with GBM.
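The JIM construction itself is a direct extension of the GLCM: instead of counting grey-level pairs at a spatial offset within one image, it counts pairs of quantized intensities at corresponding voxels of two registered modalities. A toy sketch with made-up 2x2 "patches" and an entropy quantifier (one of the Haralick-style features mentioned above):

```python
import numpy as np

def joint_intensity_matrix(img_a, img_b, levels=4):
    """Joint intensity matrix (JIM): normalized co-occurrence counts of
    quantized intensities at corresponding voxels of two registered
    modalities. With img_b a shifted copy of img_a, this reduces to a GLCM."""
    jim = np.zeros((levels, levels))
    for a, b in zip(img_a.ravel(), img_b.ravel()):
        jim[a, b] += 1
    return jim / jim.sum()

def entropy(P):
    """Haralick-style entropy quantifier applied to the matrix."""
    p = P[P > 0]
    return float(-(p * np.log2(p)).sum())

# Made-up 2x2 "CE-T1" and "FLAIR" patches quantized to 4 grey levels
t1    = np.array([[0, 1], [2, 3]])
flair = np.array([[0, 1], [2, 2]])
J = joint_intensity_matrix(t1, flair)
print(entropy(J))  # 2.0 bits: the four voxels fall in four distinct cells
```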
41
Mhiri M, Abuelwafa S, Desrosiers C, Cheriet M. Hierarchical representation learning using spherical k-means for segmentation-free word spotting. Pattern Recognit Lett 2018. [DOI: 10.1016/j.patrec.2017.11.010] [Citation(s) in RCA: 3]
42
Chaddad A, Desrosiers C, Toews M, Abdulkarim B. Predicting survival time of lung cancer patients using radiomic analysis. Oncotarget 2017; 8:104393-104407. [PMID: 29262648] [PMCID: PMC5732814] [DOI: 10.18632/oncotarget.22251] [Citation(s) in RCA: 44]
Abstract
OBJECTIVES: This study investigates the prediction of non-small cell lung cancer (NSCLC) patient survival outcomes based on radiomic texture and shape features automatically extracted from tumor image data.
MATERIALS AND METHODS: Retrospective analysis involves CT scans of 315 NSCLC patients from The Cancer Imaging Archive (TCIA). A total of 24 image features are computed from labeled tumor volumes of patients within groups defined using NSCLC subtype and TNM staging information. Spearman's rank correlation, Kaplan-Meier estimation and log-rank tests were used to identify features related to long/short NSCLC patient survival groups. Automatic random forest classification was used to predict patient survival group from multivariate feature data. Significance is assessed at P < 0.05 following Holm-Bonferroni correction for multiple comparisons.
RESULTS: Significant correlations between radiomic features and survival were observed for four clinical groups (group, [absolute correlation range]): (large cell carcinoma (LCC), [0.35, 0.43]), (tumor size T2, [0.31, 0.39]), (non lymph node metastasis N0, [0.3, 0.33]), (TNM stage I, [0.39, 0.48]). Significant log-rank relationships between features and survival time were observed for three clinical groups (group, hazard ratio): (LCC, 3.0), (LCC, 3.9), (T2, 2.5) and (stage I, 2.9). Automatic survival prediction performance (i.e. below/above median) is superior for combined radiomic features with age-TNM in comparison to standard TNM clinical staging information (clinical group, mean area-under-the-ROC-curve (AUC)): (LCC, 75.73%), (N0, 70.33%), (T2, 70.28%) and (TNM-I, 76.17%).
CONCLUSION: Quantitative lung CT imaging features can be used as indicators of survival, in particular for patients with large cell carcinoma (LCC), primary tumor size T2 and no lymph node metastasis (N0).
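The Spearman rank correlations reported above are simply Pearson correlations computed on ranks; a minimal sketch (assuming no tied values, with invented feature and survival numbers rather than the study's data):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation = Pearson correlation of the ranks
    (this simple double-argsort ranking assumes no tied values)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))

# Invented feature values and survival times, perfectly monotone
feature  = np.array([1.0, 2.5, 3.1, 4.8, 5.0])
survival = np.array([10.0, 22.0, 31.0, 44.0, 60.0])
print(spearman_rho(feature, survival))  # 1.0
```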
Affiliation(s)
- Ahmad Chaddad
- Division of Radiation Oncology, McGill University, Montréal, Canada
- The Laboratory for Imagery, Vision and Artificial Intelligence, Ecole de Technologie Supérieure, Montréal, Canada
- Christian Desrosiers
- The Laboratory for Imagery, Vision and Artificial Intelligence, Ecole de Technologie Supérieure, Montréal, Canada
- Matthew Toews
- The Laboratory for Imagery, Vision and Artificial Intelligence, Ecole de Technologie Supérieure, Montréal, Canada
43
Abstract
PURPOSE: Early detection of blood vessel pathologies can be made through the evaluation of functional and structural abnormalities in the arteries, including the arterial distensibility measure. We propose a feasibility study on computing arterial distensibility automatically from monoplane 2D X-ray sequences, for both small arteries (such as the coronary arteries) and larger arteries (such as the aorta).
METHODS: To compute the distensibility measure, three steps were developed: first, a segment of an artery is extracted using our graph-based segmentation method; then, the same segment is tracked across the moving sequence using our spatio-temporal segmentation method, the Temporal Vessel Walker; finally, the diameter of the artery is measured automatically at each frame of the sequence based on the segmentation results.
RESULTS: The method was evaluated using one simulated sequence and patients' angiograms, four depicting the coronary arteries and three depicting the ascending aorta. Results on the simulated sequence achieved a Dice index of 98%, with a mean squared error in diameter measurement of [Formula: see text] mm. Results obtained from patients' X-ray sequences are consistent with manual assessment of the diameter by experts.
CONCLUSIONS: The proposed method automatically measures changes in the diameter of a specific segment of a blood vessel over the cardiac sequence, based on a monoplane 2D X-ray sequence. Such information could help physicians detect variations in arterial stiffness associated with early stages of various vasculopathies.
Affiliation(s)
- Faten M'hiri
- Department of Software and IT Engineering, École de technologie supérieure, Montreal, Canada
- Luc Duong
- Department of Software and IT Engineering, École de technologie supérieure, Montreal, Canada
- Christian Desrosiers
- Department of Software and IT Engineering, École de technologie supérieure, Montreal, Canada
- Nagib Dahdah
- Department of Cardiology, Sainte-Justine Hospital, Montreal, Canada
- Joaquim Miró
- Department of Cardiology, Sainte-Justine Hospital, Montreal, Canada
- Mohamed Cheriet
- Automated Production Engineering, École de technologie supérieure, Montreal, Canada
44
Fechter T, Adebahr S, Baltas D, Ben Ayed I, Desrosiers C, Dolz J. Esophagus segmentation in CT via 3D fully convolutional neural network and random walk. Med Phys 2017; 44:6341-6352. [DOI: 10.1002/mp.12593] [Citation(s) in RCA: 51]
Affiliation(s)
- Tobias Fechter
- Division of Medical Physics, Department of Radiation Oncology, Medical Center, Faculty of Medicine, University of Freiburg; German Cancer Consortium (DKTK) Partner Site Freiburg; German Cancer Research Center (DKFZ), Heidelberg, Germany
- Sonja Adebahr
- Department of Radiation Oncology, Medical Center, Faculty of Medicine, University of Freiburg; German Cancer Consortium (DKTK) Partner Site Freiburg; German Cancer Research Center (DKFZ), Heidelberg, Germany
- Dimos Baltas
- Division of Medical Physics, Department of Radiation Oncology, Medical Center, Faculty of Medicine, University of Freiburg; German Cancer Consortium (DKTK) Partner Site Freiburg; German Cancer Research Center (DKFZ), Heidelberg, Germany
- Ismail Ben Ayed
- Laboratory for Imagery, Vision and Artificial Intelligence (LIVIA), École de technologie supérieure, Montréal, Canada
- Christian Desrosiers
- Laboratory for Imagery, Vision and Artificial Intelligence (LIVIA), École de technologie supérieure, Montréal, Canada
- Jose Dolz
- Laboratory for Imagery, Vision and Artificial Intelligence (LIVIA), École de technologie supérieure, Montréal, Canada
45
Chaddad A, Desrosiers C, Toews M. Radiomic analysis of multi-contrast brain MRI for the prediction of survival in patients with glioblastoma multiforme. Annu Int Conf IEEE Eng Med Biol Soc 2017; 2016:4035-4038. [PMID: 28325002] [DOI: 10.1109/embc.2016.7591612] [Citation(s) in RCA: 17]
Abstract
Image texture features are effective at characterizing the microstructure of cancerous tissues. This paper proposes predicting the survival times of glioblastoma multiforme (GBM) patients using texture features extracted from multi-contrast brain MRI images. Texture features are derived locally from contrast enhancement, necrosis and edema regions in T1-weighted post-contrast and fluid-attenuated inversion-recovery (FLAIR) MRIs, based on the gray-level co-occurrence matrix representation. A statistical analysis based on the Kaplan-Meier method and log-rank test is used to identify the texture features related to the overall survival of GBM patients. Results are presented on a dataset of 39 GBM patients. For FLAIR images, four features (Energy, Correlation, Variance and Inverse of Variance) from contrast enhancement regions and one feature (Homogeneity) from edema regions were shown to be associated with survival times (p-value < 0.01). Likewise, in T1-weighted images, three features (Energy, Correlation, and Variance) from contrast enhancement regions were found to be useful for predicting the overall survival of GBM patients. These preliminary results show the advantages of texture analysis in predicting the prognosis of GBM patients from multi-contrast brain MRI.
46
Chaddad A, Desrosiers C, Hassan L, Tanougast C. Hippocampus and amygdala radiomic biomarkers for the study of autism spectrum disorder. BMC Neurosci 2017; 18:52. [PMID: 28821235] [PMCID: PMC6389224] [DOI: 10.1186/s12868-017-0373-0] [Citation(s) in RCA: 62]
Abstract
Background Emerging evidence suggests the presence of neuroanatomical abnormalities in subjects with autism spectrum disorder (ASD). Identifying anatomical correlates could thus prove useful for the automated diagnosis of ASD. Radiomic analyses based on MRI texture features have shown great potential for characterizing differences arising from tissue heterogeneity, and for identifying abnormalities related to these differences. However, only a limited number of studies have investigated the link between image texture and ASD. This paper proposes the study of texture features based on the grey-level co-occurrence matrix (GLCM) as a means of characterizing differences between ASD and development control (DC) subjects. Our study uses 64 T1-weighted MRI scans acquired from two groups of subjects: 28 subjects in a typical age range of 4–15 years (14 ASD and 14 DC, age-matched), and 36 subjects in a non-typical age range of 10–24 years (20 ASD and 16 DC). GLCMs are computed from manually labeled hippocampus and amygdala regions, and then encoded as texture features by applying 11 standard Haralick quantifier functions. Significance tests are performed to identify texture differences between ASD and DC subjects. An analysis using SVM and random forest classifiers is then carried out to find the most discriminative features, which are used to classify ASD versus DC subjects. Results Preliminary results show that all 11 features derived from the hippocampus (typical and non-typical age) and 4 features extracted from the amygdala (non-typical age) have significantly different distributions in ASD subjects compared to DC subjects, with a significance of p < 0.05 following Holm–Bonferroni correction.
Features derived from hippocampal regions also demonstrate high discriminative power for differentiating between ASD and DC subjects, with a classifier accuracy of 67.85%, sensitivity of 62.50%, specificity of 71.42%, and an area under the ROC curve (AUC) of 76.80% for age-matched subjects in the typical age range. Conclusions Results demonstrate the potential of hippocampal texture features as a biomarker for the diagnosis and characterization of ASD.
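The Holm–Bonferroni correction used for the significance tests above is a simple step-down procedure. A minimal sketch, with an illustrative function name of our choosing (statistics packages such as statsmodels provide this as a library call):

```python
def holm_bonferroni(pvals, alpha=0.05):
    """Step-down Holm procedure; returns True where the null hypothesis is rejected."""
    m = len(pvals)
    order = sorted(range(m), key=lambda k: pvals[k])   # indices from smallest p upward
    reject = [False] * m
    for rank, k in enumerate(order):
        if pvals[k] <= alpha / (m - rank):             # threshold relaxes at each step
            reject[k] = True
        else:
            break                                      # once one test fails, all larger p fail
    return reject
```

For example, with p-values [0.001, 0.04, 0.03] at alpha = 0.05, only the first test survives: 0.001 ≤ 0.05/3, but the next smallest, 0.03, exceeds 0.05/2.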
Affiliation(s)
- Ahmad Chaddad
- Laboratory for Imagery, Vision and Artificial Intelligence, École de Technologie Supérieure, Montreal, Canada
- Laboratory of Conception, Optimization and Modeling of Systems, University of Lorraine, Metz, France
- Christian Desrosiers
- Laboratory for Imagery, Vision and Artificial Intelligence, École de Technologie Supérieure, Montreal, Canada
- Lama Hassan
- Laboratory of Conception, Optimization and Modeling of Systems, University of Lorraine, Metz, France
- Camel Tanougast
- Laboratory of Conception, Optimization and Modeling of Systems, University of Lorraine, Metz, France
|
47
|
Haj-Hassan H, Chaddad A, Harkouss Y, Desrosiers C, Toews M, Tanougast C. Classifications of Multispectral Colorectal Cancer Tissues Using Convolution Neural Network. J Pathol Inform 2017; 8:1. [PMID: 28400990 PMCID: PMC5360018 DOI: 10.4103/jpi.jpi_47_16] [Citation(s) in RCA: 41] [Impact Index Per Article: 5.9] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2016] [Accepted: 01/20/2017] [Indexed: 11/04/2022] Open
Abstract
BACKGROUND Colorectal cancer (CRC) is the third most common cancer among men and women. Its diagnosis in early stages, typically done through the analysis of colon biopsy images, can greatly improve the chances of a successful treatment. This paper proposes using convolutional neural networks (CNNs) to predict three tissue types related to the progression of CRC: benign hyperplasia (BH), intraepithelial neoplasia (IN), and carcinoma (Ca). METHODS Multispectral biopsy images of thirty CRC patients were retrospectively analyzed. Images of tissue samples were divided into three groups based on their type (10 BH, 10 IN, and 10 Ca). An active contour model was used to segment image regions containing pathological tissues. Tissue samples were classified using a CNN containing convolution, max-pooling, and fully-connected layers. Available tissue samples were split into a training set, for learning the CNN parameters, and a test set, for evaluating its performance. RESULTS An accuracy of 99.17% was obtained on segmented image regions, outperforming existing approaches based on traditional feature extraction and classification techniques. CONCLUSIONS Experimental results demonstrate the effectiveness of CNNs for the classification of CRC tissue types, in particular when using presegmented regions of interest.
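The convolution / max-pooling / fully-connected pipeline described above can be illustrated with a single forward pass. This is a toy NumPy sketch of the layer mechanics, not the paper's trained network; the patch size, filter, and random weights are illustrative only (a real model would be trained with a framework such as PyTorch).

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, k):
    """Valid 2-D convolution (cross-correlation, as in CNN layers)."""
    kh, kw = k.shape
    out = np.empty((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def maxpool(x, s=2):
    """Non-overlapping s-by-s max-pooling."""
    h, w = (x.shape[0] // s) * s, (x.shape[1] // s) * s
    return x[:h, :w].reshape(h // s, s, w // s, s).max(axis=(1, 3))

patch = rng.random((16, 16))                 # stand-in for one tissue image patch
kernel = rng.standard_normal((3, 3))         # one (here random, normally learned) filter
h = np.maximum(conv2d(patch, kernel), 0.0)   # convolution + ReLU: 16x16 -> 14x14
p = maxpool(h)                               # max-pooling: 14x14 -> 7x7
W_fc = rng.standard_normal((3, p.size))      # fully-connected layer, 3 classes (BH, IN, Ca)
logits = W_fc @ p.ravel()
probs = np.exp(logits - logits.max())
probs /= probs.sum()                         # softmax class probabilities
```

The predicted tissue type is simply the class with the largest probability.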
Affiliation(s)
- Hawraa Haj-Hassan
- Laboratory of Conception, Optimization and Modelling of Systems, University of Lorraine, Metz, Lorraine, France
- Faculty of Engineering, Lebanese University, Beirut, Lebanon
- Ahmad Chaddad
- Laboratory of Conception, Optimization and Modelling of Systems, University of Lorraine, Metz, Lorraine, France
- Laboratory for Imagery, Vision and Artificial Intelligence, École de Technologie Supérieure, Montréal, Québec, Canada
- Christian Desrosiers
- Laboratory for Imagery, Vision and Artificial Intelligence, École de Technologie Supérieure, Montréal, Québec, Canada
- Matthew Toews
- Laboratory for Imagery, Vision and Artificial Intelligence, École de Technologie Supérieure, Montréal, Québec, Canada
- Camel Tanougast
- Laboratory of Conception, Optimization and Modelling of Systems, University of Lorraine, Metz, Lorraine, France
|
48
|
M'hiri F, Duong L, Desrosiers C, Leye M, Miró J, Cheriet M. A graph-based approach for spatio-temporal segmentation of coronary arteries in X-ray angiographic sequences. Comput Biol Med 2016; 79:45-58. [PMID: 27744180 DOI: 10.1016/j.compbiomed.2016.10.001] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2016] [Revised: 09/30/2016] [Accepted: 10/01/2016] [Indexed: 01/10/2023]
Abstract
The segmentation and tracking of coronary arteries (CAs) are critical steps for the computation of biophysical measurements in pediatric interventional cardiology. In the literature, most methods focus on either segmenting the vessel lumen or tracking the vessel centerline; they do not combine the segmentation and tracking of a specific CA. This paper introduces a novel algorithm for CA segmentation and tracking in 2D X-ray angiography sequences. The proposed algorithm is based on the Temporal Vessel Walker (TVW) segmentation method, which combines a graph-based formulation with temporal priors. Moreover, superpixel groups are used by TVW as image primitives to ensure a better extraction of the CA. The proposed algorithm, TVW with superpixels (SP-TVW), accurately segments and tracks the artery along the angiogram. Quantitative results over 12 sequences of young patients show the accuracy of the proposed framework, with a mean recall of 84% on this dataset. In addition, the proposed method achieved a Dice index of 70% in segmenting and tracking right coronary arteries and circumflex arteries. It surpasses the existing polyline method in tracking the CA centerline, with a more precise localization resulting in a smaller distance error of 0.23 mm compared to 0.94 mm.
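The Dice index and recall reported above are standard overlap measures between a predicted binary mask and a ground-truth mask. A minimal sketch of both, with illustrative function names (the segmentation algorithm itself is not reproduced here):

```python
import numpy as np

def dice(seg, gt):
    """Dice similarity coefficient between two binary segmentation masks."""
    seg, gt = np.asarray(seg, bool), np.asarray(gt, bool)
    inter = np.logical_and(seg, gt).sum()
    denom = seg.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

def recall(seg, gt):
    """Fraction of ground-truth vessel pixels recovered by the segmentation."""
    seg, gt = np.asarray(seg, bool), np.asarray(gt, bool)
    return np.logical_and(seg, gt).sum() / gt.sum() if gt.sum() else 1.0
```

A Dice index of 70% thus means that twice the overlap area equals 70% of the combined area of the two vessel masks.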
Affiliation(s)
- Faten M'hiri
- Department of Software and IT Engineering, École de technologie supérieure, Montreal, Canada
- Luc Duong
- Department of Software and IT Engineering, École de technologie supérieure, Montreal, Canada
- Christian Desrosiers
- Department of Software and IT Engineering, École de technologie supérieure, Montreal, Canada
- Mohamed Leye
- Department of Cardiology, Sainte-Justine Hospital, Montreal, Canada
- Joaquim Miró
- Department of Cardiology, Sainte-Justine Hospital, Montreal, Canada
- Mohamed Cheriet
- Automated Production Engineering, École de technologie supérieure, Montreal, Canada
|
49
|
Chaddad A, Desrosiers C, Hassan L, Tanougast C. A quantitative study of shape descriptors from glioblastoma multiforme phenotypes for predicting survival outcome. Br J Radiol 2016; 89:20160575. [PMID: 27781499 DOI: 10.1259/bjr.20160575] [Citation(s) in RCA: 33] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/19/2023] Open
Abstract
OBJECTIVE Predicting the survival outcome of patients with glioblastoma multiforme (GBM) is of key importance to clinicians for selecting the optimal course of treatment. The goal of this study was to evaluate the usefulness of geometric shape features, extracted from MR images, as a potential non-invasive way to characterize GBM tumours and predict the overall survival times of patients with GBM. METHODS The data of 40 patients with GBM were obtained from the Cancer Genome Atlas and the Cancer Imaging Archive. The T1-weighted post-contrast and fluid-attenuated inversion-recovery volumes of patients were co-registered and segmented to delineate regions corresponding to three GBM phenotypes: necrosis, active tumour and oedema/invasion. A set of two-dimensional shape features was then extracted slice-wise from each phenotype region and combined over slices to describe the three-dimensional shape of these phenotypes. Thereafter, a Kruskal-Wallis test was employed to identify shape features with significantly different distributions across phenotypes. Moreover, a Kaplan-Meier analysis was performed to find features strongly associated with GBM survival. Finally, a multivariate analysis based on the random forest model was used for predicting the survival group of patients with GBM. RESULTS Our analysis using the Kruskal-Wallis test showed that all but one shape feature had statistically significant differences across phenotypes (p-value < 0.05, following Holm-Bonferroni correction), justifying the analysis of GBM tumour shapes on a per-phenotype basis. Furthermore, the survival analysis based on the Kaplan-Meier estimator identified three features derived from necrotic regions (i.e. Eccentricity, Extent and Solidity) that were significantly correlated with overall survival (corrected p-value < 0.05; hazard ratios between 1.68 and 1.87).
In the multivariate analysis, features from necrotic regions gave the highest accuracy in predicting the survival group of patients, with a mean area under the receiver-operating characteristic curve (AUC) of 63.85%. Combining the features of all three phenotypes increased the mean AUC to 66.99%, suggesting that shape features from different phenotypes can be used in a synergistic manner to predict GBM survival. CONCLUSION Results show that shape features, in particular those extracted from necrotic regions, can be used effectively to characterize GBM tumours and predict the overall survival of patients with GBM. ADVANCES IN KNOWLEDGE Simple volumetric features have largely been used to characterize the different phenotypes of a GBM tumour (i.e. active tumour, oedema and necrosis). This study extends previous work by considering a wide range of shape features, extracted from different phenotypes, for the prediction of survival in patients with GBM.
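Two of the per-slice shape descriptors named above, Extent (region area over bounding-box area) and Eccentricity (of the ellipse with matching second-order moments), are easy to compute from a binary mask. A minimal NumPy sketch under our own naming conventions; Solidity would additionally require a convex hull, which is omitted here:

```python
import numpy as np

def shape_descriptors(mask):
    """Extent and Eccentricity of one binary region (two of the 2-D descriptors)."""
    ys, xs = np.nonzero(mask)
    area = xs.size
    bbox = (ys.max() - ys.min() + 1) * (xs.max() - xs.min() + 1)
    extent = area / bbox
    # eccentricity of the ellipse with the same second-order central moments
    mu20 = ((xs - xs.mean()) ** 2).mean()
    mu02 = ((ys - ys.mean()) ** 2).mean()
    mu11 = ((xs - xs.mean()) * (ys - ys.mean())).mean()
    d = np.sqrt((mu20 - mu02) ** 2 + 4.0 * mu11 ** 2)
    l1, l2 = (mu20 + mu02 + d) / 2.0, (mu20 + mu02 - d) / 2.0
    ecc = np.sqrt(1.0 - l2 / l1) if l1 > 0 else 0.0
    return extent, ecc

# sanity check on a radius-8 disk: circular, so eccentricity should be ~0
yy, xx = np.ogrid[:21, :21]
disk = (yy - 10) ** 2 + (xx - 10) ** 2 <= 64
extent, ecc = shape_descriptors(disk)
```

In the study these 2-D values are computed on every axial slice of a phenotype region and then aggregated to describe its 3-D shape.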
Affiliation(s)
- Ahmad Chaddad
- Laboratory for Imagery, Vision and Artificial Intelligence, University of Québec, École de Technologie Supérieure, Montréal, QC, Canada
- Laboratory of Conception, Optimization and Modeling of Systems, University of Lorraine, Metz, Lorraine, France
- Christian Desrosiers
- Laboratory for Imagery, Vision and Artificial Intelligence, University of Québec, École de Technologie Supérieure, Montréal, QC, Canada
- Lama Hassan
- Laboratory for Imagery, Vision and Artificial Intelligence, University of Québec, École de Technologie Supérieure, Montréal, QC, Canada
- Laboratory of Conception, Optimization and Modeling of Systems, University of Lorraine, Metz, Lorraine, France
- Camel Tanougast
- Laboratory of Conception, Optimization and Modeling of Systems, University of Lorraine, Metz, Lorraine, France
|
50
|
Chaddad A, Desrosiers C, Bouridane A, Toews M, Hassan L, Tanougast C. Multi Texture Analysis of Colorectal Cancer Continuum Using Multispectral Imagery. PLoS One 2016; 11:e0149893. [PMID: 26901134 PMCID: PMC4764026 DOI: 10.1371/journal.pone.0149893] [Citation(s) in RCA: 37] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/27/2015] [Accepted: 02/05/2016] [Indexed: 01/05/2023] Open
Abstract
PURPOSE This paper proposes to characterize the continuum of colorectal cancer (CRC) using multiple texture features extracted from multispectral optical microscopy images. Three types of pathological tissues (PT) are considered: benign hyperplasia, intraepithelial neoplasia and carcinoma. MATERIALS AND METHODS In the proposed approach, the region of interest containing PT is first extracted from multispectral images using active contour segmentation. This region is then encoded using texture features based on the Laplacian-of-Gaussian (LoG) filter, discrete wavelets (DW) and gray level co-occurrence matrices (GLCM). To assess the significance of textural differences between PT types, a statistical analysis based on the Kruskal-Wallis test is performed. The usefulness of texture features is then evaluated quantitatively in terms of their ability to predict PT types using various classifier models. RESULTS Preliminary results show significant texture differences between PT types, for all texture features (p-value < 0.01). Individually, GLCM texture features outperform LoG and DW features in terms of PT type prediction. However, higher performance can be achieved by combining all texture features, resulting in a mean classification accuracy of 98.92%, sensitivity of 98.12%, and specificity of 99.67%. CONCLUSIONS These results demonstrate the efficiency and effectiveness of combining multiple texture features for characterizing the continuum of CRC and discriminating between pathological tissues in multispectral images.
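Of the three texture encodings above, the Laplacian-of-Gaussian (LoG) response is the simplest to sketch: a band-pass filter whose response magnitude summarizes local texture at a given scale. The following is a minimal NumPy illustration with function names and parameter values of our choosing (libraries such as SciPy provide `gaussian_laplace` directly):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def log_kernel(size=9, sigma=1.4):
    """Discrete Laplacian-of-Gaussian kernel, shifted to sum to zero."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx ** 2 + yy ** 2
    k = (r2 - 2.0 * sigma ** 2) / sigma ** 4 * np.exp(-r2 / (2.0 * sigma ** 2))
    return k - k.mean()          # zero-sum: flat regions give (near-)zero response

def log_energy(img, k):
    """Mean absolute LoG response over the image: a simple per-region texture summary."""
    win = sliding_window_view(img, k.shape)   # all valid k-sized windows
    resp = (win * k).sum(axis=(-1, -2))       # filter response at each position
    return float(np.abs(resp).mean())
```

A homogeneous region yields a near-zero summary, while a textured region (e.g. heterogeneous tissue) yields a larger one; in the paper such responses are computed per spectral band and per pathological-tissue region.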
Affiliation(s)
- Ahmad Chaddad
- Laboratory for Imagery, Vision and Artificial Intelligence, École de Technologie Supérieure, Montréal, Québec, Canada
- Laboratory of Conception, Optimization and Modelling of Systems, University of Lorraine, Metz, Lorraine, France
- Christian Desrosiers
- Laboratory for Imagery, Vision and Artificial Intelligence, École de Technologie Supérieure, Montréal, Québec, Canada
- Ahmed Bouridane
- School of Computing, Engineering and Information Sciences, Northumbria University, Newcastle, United Kingdom
- Matthew Toews
- Laboratory for Imagery, Vision and Artificial Intelligence, École de Technologie Supérieure, Montréal, Québec, Canada
- Lama Hassan
- Laboratory of Conception, Optimization and Modelling of Systems, University of Lorraine, Metz, Lorraine, France
- Camel Tanougast
- Laboratory of Conception, Optimization and Modelling of Systems, University of Lorraine, Metz, Lorraine, France
|