1
Kelley W, Ngo N, Dalca AV, Fischl B, Zöllei L, Hoffmann M. Boosting Skull-Stripping Performance for Pediatric Brain Images. arXiv 2024; arXiv:2402.16634v1. [PMID: 38463507; PMCID: PMC10925384]
Abstract
Skull-stripping is the removal of background and non-brain anatomical features from brain images. While many skull-stripping tools exist, few target pediatric populations. With the emergence of multi-institutional pediatric data acquisition efforts to broaden the understanding of perinatal brain development, it is essential to develop robust and well-tested tools ready for the relevant data processing. However, the broad range of neuroanatomical variation in the developing brain, combined with additional challenges such as high motion levels, as well as shoulder and chest signal in the images, leaves many adult-specific tools ill-suited for pediatric skull-stripping. Building on an existing framework for robust and accurate skull-stripping, we propose developmental SynthStrip (d-SynthStrip), a skull-stripping model tailored to pediatric images. This framework exposes networks to highly variable images synthesized from label maps. Our model substantially outperforms pediatric baselines across scan types and age cohorts. In addition, the <1-minute runtime of our tool compares favorably to the fastest baselines. We distribute our model at https://w3id.org/synthstrip.
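The abstract's key idea is training on highly variable images synthesized from label maps rather than on acquired scans. A minimal sketch of that synthesis step, assuming a simple generator that draws a random mean intensity per anatomical label and adds noise (the actual d-SynthStrip generator also randomizes deformations, bias fields, resolution, and more):

```python
import numpy as np

def synthesize_image(label_map, rng=None):
    """Illustrative synthesis: map each label to a random mean
    intensity, then add Gaussian noise, yielding an image of
    arbitrary contrast from one anatomical label map."""
    rng = np.random.default_rng() if rng is None else rng
    image = np.zeros(label_map.shape, dtype=float)
    for lab in np.unique(label_map):
        # Random per-label intensity -> random "contrast".
        image[label_map == lab] = rng.uniform(0, 255)
    image += rng.normal(0, 5, size=label_map.shape)  # additive noise
    return image

# Toy 2D label map: background (0), brain (1), skull (2).
labels = np.zeros((8, 8), dtype=int)
labels[2:6, 2:6] = 1
labels[1, :] = 2
img = synthesize_image(labels, rng=np.random.default_rng(0))
```

Each call produces a different contrast from the same anatomy, which is what forces the network to learn contrast-invariant features.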
Affiliation(s)
- William Kelley
- Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA 02129, USA
- Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA
- Nathan Ngo
- Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA 02129, USA
- Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA
- Adrian V Dalca
- Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA 02129, USA
- Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA
- Department of Radiology, Harvard Medical School, Boston, MA 02115, USA
- Computer Science & Artificial Intelligence Laboratory, MIT, Cambridge, MA 02139, USA
- Bruce Fischl
- Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA 02129, USA
- Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA
- Department of Radiology, Harvard Medical School, Boston, MA 02115, USA
- Lilla Zöllei
- Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA 02129, USA
- Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA
- Department of Radiology, Harvard Medical School, Boston, MA 02115, USA
- Malte Hoffmann
- Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA 02129, USA
- Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA
- Department of Radiology, Harvard Medical School, Boston, MA 02115, USA
- Division of Health Sciences and Technology, MIT, Cambridge, MA 02139, USA
2
Hoffmann M, Billot B, Greve DN, Iglesias JE, Fischl B, Dalca AV. SynthMorph: Learning Contrast-Invariant Registration Without Acquired Images. IEEE Transactions on Medical Imaging 2022; 41:543-558. [PMID: 34587005; PMCID: PMC8891043; DOI: 10.1109/tmi.2021.3116879]
Abstract
We introduce a strategy for learning image registration without acquired imaging data, producing powerful networks agnostic to contrast introduced by magnetic resonance imaging (MRI). While classical registration methods accurately estimate the spatial correspondence between images, they solve an optimization problem for every new image pair. Learning-based techniques are fast at test time but limited to registering images with contrasts and geometric content similar to those seen during training. We propose to remove this dependency on training data by leveraging a generative strategy for diverse synthetic label maps and images that exposes networks to a wide range of variability, forcing them to learn more invariant features. This approach results in powerful networks that accurately generalize to a broad array of MRI contrasts. We present extensive experiments with a focus on 3D neuroimaging, showing that this strategy enables robust and accurate registration of arbitrary MRI contrasts even if the target contrast is not seen by the networks during training. We demonstrate registration accuracy surpassing the state of the art both within and across contrasts, using a single model. Critically, training on arbitrary shapes synthesized from noise distributions results in competitive performance, removing the dependency on acquired data of any kind. Additionally, since anatomical label maps are often available for the anatomy of interest, we show that synthesizing images from these dramatically boosts performance, while still avoiding the need for real intensity images. Our code is available at https://w3id.org/synthmorph.
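The abstract notes that even "arbitrary shapes synthesized from noise distributions" suffice for training. A minimal sketch of that idea, assuming a simple generator that smooths a white-noise field and quantizes it into discrete labels (illustrative only; the paper's generator also warps the maps and synthesizes paired intensity images from them):

```python
import numpy as np

def random_label_map(shape, num_labels=4, smooth=3, rng=None):
    """Illustrative shape synthesis: smooth white noise, then bin
    the field into discrete labels to obtain an arbitrary
    geometric label map for training."""
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.normal(size=shape)
    # Cheap separable box smoothing in place of a Gaussian filter.
    for axis in range(noise.ndim):
        for _ in range(smooth):
            noise = (noise + np.roll(noise, 1, axis)
                     + np.roll(noise, -1, axis)) / 3
    # Quantize the smoothed field into num_labels equal-mass bins.
    edges = np.quantile(noise, np.linspace(0, 1, num_labels + 1)[1:-1])
    return np.digitize(noise, edges)

lm = random_label_map((16, 16), rng=np.random.default_rng(0))
```

Sampling a fresh map per training step exposes the registration network to endless geometric variability without any acquired data.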