1
Martinez CS, Cuadra MB, Jorge J. BigBrain-MR: a new digital phantom with anatomically-realistic magnetic resonance properties at 100-µm resolution for magnetic resonance methods development. Neuroimage 2023; 273:120074. [PMID: 37004826] [DOI: 10.1016/j.neuroimage.2023.120074]
Abstract
The benefits, opportunities and growing availability of ultra-high field magnetic resonance imaging (MRI) for humans have prompted an expansion in research and development efforts towards increasingly more advanced high-resolution imaging techniques. To maximize their effectiveness, these efforts need to be supported by powerful computational simulation platforms that can adequately reproduce the biophysical characteristics of MRI, with high spatial resolution. In this work, we have sought to address this need by developing a novel digital phantom with realistic anatomical detail up to 100-µm resolution, including multiple MRI properties that affect image generation. This phantom, termed BigBrain-MR, was generated from the publicly available BigBrain histological dataset and lower-resolution in-vivo 7T-MRI data, using a newly-developed image processing framework that allows mapping the general properties of the latter into the fine anatomical scale of the former. Overall, the mapping framework was found to be effective and robust, yielding a diverse range of realistic "in-vivo-like" MRI contrasts and maps at 100-µm resolution. BigBrain-MR was then tested in three imaging applications (motion effects and interpolation, super-resolution imaging, and parallel imaging reconstruction) to investigate its properties, value and validity as a simulation platform. The results consistently showed that BigBrain-MR can closely approximate the behavior of real in-vivo data, more realistically and with more extensive features than a more classic option such as the Shepp-Logan phantom. Its flexibility in simulating different contrast mechanisms and artifacts may also prove valuable for educational applications. BigBrain-MR is therefore deemed a favorable choice to support methodological development and demonstration in brain MRI, and has been made freely available to the community.
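The core idea behind a phantom like this is that MR contrast can be synthesized from tissue property maps. As a generic illustration (not the authors' code), the classic spin-echo signal equation S = PD·(1 − e^(−TR/T1))·e^(−TE/T2) turns PD, T1, and T2 maps into image intensities; the tissue values below are approximate textbook numbers, used only for demonstration:

```python
import numpy as np

def spin_echo_signal(pd, t1, t2, tr, te):
    """Classic spin-echo signal equation:
    S = PD * (1 - exp(-TR/T1)) * exp(-TE/T2)."""
    return pd * (1.0 - np.exp(-tr / t1)) * np.exp(-te / t2)

# Toy two-tissue "phantom": white matter (WM) vs. gray matter (GM),
# approximate relaxation times at 3T, in milliseconds
pd = np.array([0.70, 0.82])      # relative proton density [WM, GM]
t1 = np.array([800.0, 1300.0])   # T1 (ms)
t2 = np.array([70.0, 100.0])     # T2 (ms)

# T1-weighted: short TR/TE accentuates T1 differences (WM brighter)
t1w = spin_echo_signal(pd, t1, t2, tr=500.0, te=15.0)
# T2-weighted: long TR/TE accentuates T2 differences (GM brighter)
t2w = spin_echo_signal(pd, t1, t2, tr=4000.0, te=90.0)
```

Applying the same equation voxel-wise to high-resolution property maps is what lets a single digital phantom generate many different "in-vivo-like" contrasts.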
2
Lipin M, Bennett J, Ying GS, Yu Y, Ashtari M. Improving the Quantification of the Lateral Geniculate Nucleus in Magnetic Resonance Imaging Using a Novel 3D-Edge Enhancement Technique. Front Comput Neurosci 2021; 15:708866. [PMID: 34924983] [PMCID: PMC8677828] [DOI: 10.3389/fncom.2021.708866]
Abstract
The lateral geniculate nucleus (LGN) is a small, inhomogeneous structure that relays major sensory inputs from the retina to the visual cortex. LGN morphology has been intensively studied due to various retinal diseases, as well as in the context of normal brain development. However, many of the methods used for LGN structural evaluations have not adequately addressed the challenges presented by the suboptimal routine MRI of this structure. Here, we propose a novel edge-enhancement method that allows for high reliability and accuracy in LGN morphometry using routine 3D-MRI protocols. This new algorithm is based on modeling a small brain structure as a polyhedron whose faces, edges, and vertices are fitted with one plane, the intersection of two planes, and the intersection of three planes, respectively. The algorithm dramatically increases the contrast-to-noise ratio between the LGN and its surrounding structures and doubles the original spatial resolution. To show the algorithm's efficacy, two raters (MA and ML) measured LGN volumes bilaterally in 19 subjects using the edge-enhanced LGN areas extracted from the 3D T1-weighted images. The averages of the left and right LGN volumes from the two raters were 175 ± 8 and 174 ± 9 mm3, respectively. The intra-class correlations between raters were 0.74 for the left and 0.81 for the right LGN volumes. The high-contrast edge-enhanced LGN images presented here, obtained from a 7-min routine 3T-MRI acquisition, are qualitatively comparable to previously reported LGN images acquired using a proton density sequence with 30–40 averages and 1.5 h of acquisition time. The proposed edge-enhancement algorithm is not limited to the LGN: it can significantly improve the contrast-to-noise ratio of any small, deep-seated gray matter structure that is prone to high levels of noise and partial volume effects, and can also increase its morphometric accuracy and reliability. An immensely useful feature of the proposed algorithm is that it can be applied retrospectively to noisy, low-contrast 3D brain images previously acquired as part of any routine clinical MRI visit.
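The contrast-to-noise ratio (CNR) that the abstract reports improving is conventionally computed as |μ_A − μ_B| / σ_noise, the intensity difference between two regions divided by the noise standard deviation. A minimal sketch with synthetic intensities (all values illustrative, not from the paper):

```python
import numpy as np

def cnr(region_a, region_b, noise):
    """Contrast-to-noise ratio between two regions:
    CNR = |mean(A) - mean(B)| / std(noise)."""
    return abs(np.mean(region_a) - np.mean(region_b)) / np.std(noise)

rng = np.random.default_rng(0)
structure = rng.normal(100.0, 5.0, size=500)    # intensities inside a small structure
surround = rng.normal(80.0, 5.0, size=500)      # surrounding tissue
noise = rng.normal(0.0, 5.0, size=500)          # background noise sample

value = cnr(structure, surround, noise)         # roughly (100 - 80) / 5 = 4
```

A doubling of CNR in such a metric makes a small structure like the LGN far easier to delineate against partial-volume-blurred surroundings.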
Affiliation(s)
- Mikhail Lipin, Department of Ophthalmology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
- Jean Bennett, Department of Ophthalmology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
- Gui-Shuang Ying, Center for Preventative Ophthalmology and Biostatistics, Department of Ophthalmology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
- Yinxi Yu, Center for Preventative Ophthalmology and Biostatistics, Department of Ophthalmology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
- Manzar Ashtari, Department of Ophthalmology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
3
Abadi E, Segars WP, Tsui BMW, Kinahan PE, Bottenus N, Frangi AF, Maidment A, Lo J, Samei E. Virtual clinical trials in medical imaging: a review. J Med Imaging (Bellingham) 2020; 7:042805. [PMID: 32313817] [PMCID: PMC7148435] [DOI: 10.1117/1.jmi.7.4.042805]
Abstract
The accelerating complexity and variety of medical imaging devices and methods have outpaced our ability to evaluate and optimize their design and clinical use. This is a significant and growing challenge for both scientific investigation and clinical application. Evaluations would ideally be done using clinical imaging trials; these experiments, however, are often impractical due to ethical limitations, expense, time requirements, or lack of ground truth. Virtual clinical trials (VCTs), also known as in silico imaging trials or virtual imaging trials, offer an alternative means to evaluate medical imaging technologies efficiently by simulating the patients, imaging systems, and interpreters. The field of VCTs has advanced steadily over the past decades in multiple areas. We summarize the major developments and current status of the field of VCTs in medical imaging, review the core components of a VCT (computational phantoms, simulators of different imaging modalities, and interpretation models), and highlight some applications of VCTs across various imaging modalities.
Affiliation(s)
- Ehsan Abadi, Duke University, Department of Radiology, Durham, North Carolina, United States
- William P. Segars, Duke University, Department of Radiology, Durham, North Carolina, United States
- Benjamin M. W. Tsui, Johns Hopkins University, Department of Radiology, Baltimore, Maryland, United States
- Paul E. Kinahan, University of Washington, Department of Radiology, Seattle, Washington, United States
- Nick Bottenus, Duke University, Department of Biomedical Engineering, Durham, North Carolina, United States; University of Colorado Boulder, Department of Mechanical Engineering, Boulder, Colorado, United States
- Alejandro F. Frangi, University of Leeds, School of Computing and School of Medicine, Leeds, United Kingdom
- Andrew Maidment, University of Pennsylvania, Department of Radiology, Philadelphia, Pennsylvania, United States
- Joseph Lo, Duke University, Department of Radiology, Durham, North Carolina, United States
- Ehsan Samei, Duke University, Department of Radiology, Durham, North Carolina, United States
4
Töger J, Sorensen T, Somandepalli K, Toutios A, Lingala SG, Narayanan S, Nayak K. Test-retest repeatability of human speech biomarkers from static and real-time dynamic magnetic resonance imaging. J Acoust Soc Am 2017; 141:3323. [PMID: 28599561] [PMCID: PMC5436977] [DOI: 10.1121/1.4983081]
Abstract
Static anatomical and real-time dynamic magnetic resonance imaging (RT-MRI) of the upper airway is a valuable method for studying speech production in research and clinical settings. The test-retest repeatability of quantitative imaging biomarkers is an important parameter, since it limits the effect sizes and intragroup differences that can be studied. Therefore, this study aims to present a framework for determining the test-retest repeatability of quantitative speech biomarkers from static MRI and RT-MRI, and apply the framework to healthy volunteers. Subjects (n = 8, 4 females, 4 males) are imaged in two scans on the same day, including static images and dynamic RT-MRI of speech tasks. The inter-study agreement is quantified using intraclass correlation coefficient (ICC) and mean within-subject standard deviation (σe). Inter-study agreement is strong to very strong for static measures (ICC: min/median/max 0.71/0.89/0.98, σe: 0.90/2.20/6.72 mm), poor to strong for dynamic RT-MRI measures of articulator motion range (ICC: 0.26/0.75/0.90, σe: 1.6/2.5/3.6 mm), and poor to very strong for velocities (ICC: 0.21/0.56/0.93, σe: 2.2/4.4/16.7 cm/s). In conclusion, this study characterizes repeatability of static and dynamic MRI-derived speech biomarkers using state-of-the-art imaging. The introduced framework can be used to guide future development of speech biomarkers. Test-retest MRI data are provided free for research use.
Affiliation(s)
- Johannes Töger, Tanner Sorensen, Krishna Somandepalli, Asterios Toutios, Sajan Goud Lingala, Shrikanth Narayanan, and Krishna Nayak: Ming Hsieh Department of Electrical Engineering, University of Southern California, 3740 McClintock Avenue, EEB 400, Los Angeles, California 90089-2560, USA