151
Largent A, Nunes JC, Lafond C, Périchon N, Castelli J, Rolland Y, Acosta O, de Crevoisier R. [MRI-based radiotherapy planning]. Cancer Radiother 2017; 21:788-798. [PMID: 28690126 DOI: 10.1016/j.canrad.2017.02.007]
Abstract
MRI-based radiotherapy planning is a topical subject due to the introduction of a new generation of treatment machines combining a linear accelerator and an MRI scanner. One obstacle to introducing MRI in this task is that MRI does not provide the tissue density information required for dose calculation. To cope with this issue, two strategies may be distinguished in the literature: either a synthetic CT scan is generated from the MRI to plan the dose, or a dose is computed from the MRI directly, based on physical underpinnings. Within the first group, three approaches appear. Bulk density mapping assigns a homogeneous density to each volume of interest manually defined on a patient's MRI. Machine learning-based approaches model the local relationship between CT and MRI image intensities from multiple training data sets and then apply the model to a new MRI. Atlas-based approaches use a co-registered training data set (CT-MRI) which is registered to a new MRI, creating a pseudo CT from spatial correspondences in a final fusion step. Within the second group, physics-based approaches aim at computing the dose directly from the hydrogen content of the tissues, as quantified by MRI. Except for the physics-based approach, all these methods generate a synthetic CT called a "pseudo CT", on which radiotherapy planning is finally performed. This literature review shows that atlas- and machine learning-based approaches appear dosimetrically more accurate. Bulk density approaches are not appropriate for bone localization. The fastest methods are machine learning-based and the slowest are atlas-based; the least automated are the bulk density assignment methods. The physics-based approaches appear very promising. Finally, validation of these methods is crucial for clinical practice, particularly in the perspective of adaptive radiotherapy delivered by a linear accelerator combined with an MRI scanner.
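The bulk-density strategy described above reduces to a lookup from manually contoured tissue classes to fixed densities. A minimal numpy sketch (the function name and HU values here are illustrative assumptions, not taken from the paper):

```python
import numpy as np

# Illustrative HU values for each contoured class (assumed for this sketch).
BULK_HU = {"air": -1000.0, "soft_tissue": 0.0, "bone": 700.0}

def bulk_density_pseudo_ct(labels):
    """Map an integer label volume (0=air, 1=soft tissue, 2=bone),
    e.g. from manual contours drawn on MRI, to a homogeneous-density
    pseudo CT by a simple lookup table."""
    lut = np.array([BULK_HU["air"], BULK_HU["soft_tissue"], BULK_HU["bone"]])
    return lut[labels]

labels = np.array([[0, 1], [1, 2]])
print(bulk_density_pseudo_ct(labels))
```

Because every voxel of a class receives the same value, intra-bone density variation is lost, which is consistent with the review's finding that bulk assignment is not appropriate for bone.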
Affiliation(s)
- A Largent, J-C Nunes, O Acosta: Laboratoire traitement du signal et de l'image, campus de Beaulieu, université de Rennes 1, 263, avenue du Général-Leclerc, 35042 Rennes, France; Inserm, UMR 1099, 263, avenue du Général-Leclerc, 35042 Rennes, France
- C Lafond, N Périchon: Département de radiothérapie, centre régional de lutte contre le cancer Eugène-Marquis, avenue de la Bataille-Flandres-Dunkerque, 35042 Rennes, France
- J Castelli, R de Crevoisier: Laboratoire traitement du signal et de l'image, université de Rennes 1; Département de radiothérapie, centre régional de lutte contre le cancer Eugène-Marquis; Inserm, UMR 1099, Rennes, France
- Y Rolland: Laboratoire traitement du signal et de l'image, université de Rennes 1; Département d'imagerie médicale, centre régional de lutte contre le cancer Eugène-Marquis, Rennes, France
152
Yang J, Jian Y, Jenkins N, Behr SC, Hope TA, Larson PEZ, Vigneron D, Seo Y. Quantitative Evaluation of Atlas-based Attenuation Correction for Brain PET in an Integrated Time-of-Flight PET/MR Imaging System. Radiology 2017; 284:169-179. [DOI: 10.1148/radiol.2017161603]
Affiliation(s)
- Jaewon Yang, Yiqiang Jian, Nathaniel Jenkins, Spencer C. Behr, Thomas A. Hope, Peder E. Z. Larson, Daniel Vigneron, Youngho Seo
- From the Department of Radiology and Biomedical Imaging, UCSF Physics Research Laboratory, University of California, San Francisco, 185 Berry St, Suite 350, San Francisco, CA 94143-0946 (J.Y., N.J., S.C.B., T.A.H., P.E.Z.L., D.V., Y.S.); GE Healthcare, Waukesha, Wis (Y.J.); and Department of Radiology, San Francisco VA Medical Center, San Francisco, Calif (T.A.H.)
153
Yang J, Wiesinger F, Kaushik S, Shanbhag D, Hope TA, Larson PEZ, Seo Y. Evaluation of Sinus/Edge-Corrected Zero-Echo-Time-Based Attenuation Correction in Brain PET/MRI. J Nucl Med 2017; 58:1873-1879. [PMID: 28473594 DOI: 10.2967/jnumed.116.188268]
Abstract
In brain PET/MRI, the major challenge of zero-echo-time (ZTE)-based attenuation correction (ZTAC) is the misclassification of air/tissue/bone mixtures or their boundaries. Our study aimed to evaluate a sinus/edge-corrected (SEC) ZTAC (ZTAC-SEC) relative to an uncorrected (UC) ZTAC (ZTAC-UC) and a CT atlas-based attenuation correction (ATAC). Methods: Whole-body 18F-FDG PET/MRI scans were obtained for 12 patients after PET/CT scans. Only data acquired at a bed station that included the head were used for this study. Using PET data from PET/MRI, we applied ZTAC-UC, ZTAC-SEC, ATAC, and the reference CT-based attenuation correction (CTAC) for PET attenuation correction. For ZTAC-UC, the bias-corrected and normalized ZTE was converted to pseudo CT with air (-1,000 HU for ZTE < 0.2), soft tissue (42 HU for ZTE > 0.75), and bone (-2,000 × [ZTE - 1] + 42 HU for 0.2 ≤ ZTE ≤ 0.75). Afterward, in the pseudo CT, sinuses/edges were automatically estimated as a binary mask through morphologic processing and edge detection; within the mask, the overestimated values were rescaled below 42 HU for ZTAC-SEC. For ATAC, the atlas deformed to the MR in-phase image was segmented into air, inner air, soft tissue, and continuous bone. For the quantitative evaluation, PET mean uptake values were measured in twenty 1-mL volumes of interest distributed throughout brain tissues. The PET uptake was compared using a paired t test, and an error histogram was used to show the distribution of voxel-based PET uptake differences. Results: Compared with CTAC, ZTAC-SEC achieved overall PET quantification accuracy (0.2% ± 2.4%, P = 0.23) similar to CTAC, outperforming ZTAC-UC (5.6% ± 3.5%, P < 0.01) and ATAC (-0.9% ± 5.0%, P = 0.03). Specifically, a substantial improvement with ZTAC-SEC (0.6% ± 2.7%, P < 0.01) was found in the cerebellum, compared with ZTAC-UC (8.1% ± 3.5%, P < 0.01) and ATAC (-4.1% ± 4.3%, P < 0.01). The histogram of voxel-based uptake differences demonstrated that ZTAC-SEC substantially reduced the magnitude and variation of errors compared with ZTAC-UC and ATAC. Conclusion: ZTAC-SEC can provide accurate PET quantification in brain PET/MRI, comparable to the accuracy achieved by CTAC, particularly in the cerebellum.
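The abstract states the ZTE-to-HU conversion explicitly, so the pseudo-CT step can be sketched directly (the function name is ours; the thresholds and the bone ramp are quoted from the abstract):

```python
import numpy as np

def zte_to_pseudo_ct(zte):
    """Piecewise ZTE-to-HU conversion as stated in the abstract:
    air (-1000 HU) for ZTE < 0.2, soft tissue (42 HU) for ZTE > 0.75,
    and bone (-2000 * (ZTE - 1) + 42 HU) for 0.2 <= ZTE <= 0.75."""
    zte = np.asarray(zte, dtype=float)
    hu = np.empty_like(zte)
    air = zte < 0.2
    soft = zte > 0.75
    bone = ~(air | soft)
    hu[air] = -1000.0
    hu[soft] = 42.0
    hu[bone] = -2000.0 * (zte[bone] - 1.0) + 42.0
    return hu

# air (-1000 HU), bone (1042 HU at ZTE=0.5), soft tissue (42 HU)
print(zte_to_pseudo_ct([0.1, 0.5, 0.9]))
```

Note the mapping is not continuous at the class boundaries, which is one reason the sinus/edge correction rescales overestimated values inside the mask.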
Affiliation(s)
- Jaewon Yang, Thomas A Hope, Peder E Z Larson, Youngho Seo: Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, California
154
Bahrami K, Shi F, Rekik I, Gao Y, Shen D. 7T-guided super-resolution of 3T MRI. Med Phys 2017; 44:1661-1677. [PMID: 28177548 DOI: 10.1002/mp.12132]
Abstract
PURPOSE: High-resolution MR images can depict rich details of brain anatomical structures and show subtle changes in longitudinal data. 7T MRI scanners can acquire MR images with higher resolution and better tissue contrast than routine 3T MRI scanners. However, 7T MRI scanners are currently more expensive and less available in clinical and research centers. To this end, we propose a method to generate super-resolution 3T MRI that resembles 7T MRI, called a 7T-like MR image in this paper. METHODS: First, we propose a mapping from 3T MRI to 7T MRI space using regression random forests. The mapped 3T MR images serve as intermediate results with an appearance similar to 7T MR images. Second, we predict the final higher-resolution 7T-like MR images based on sparse representation, using paired local dictionaries for the mapped 3T MR images and the 7T MR images. RESULTS: Based on 15 subjects with both 3T and 7T MR images, the 7T-like MR images predicted by our method best match the ground-truth 7T MR images, compared to other methods. Meanwhile, an experiment on brain tissue segmentation shows that our 7T-like MR images lead to the highest accuracy in the segmentation of WM, GM, and CSF brain tissues, compared to segmentations of 3T MR images as well as 7T-like MR images reconstructed by other methods. CONCLUSIONS: We propose a novel method for predicting high-resolution 7T-like MR images from low-resolution 3T MR images. Our predicted 7T-like MR images demonstrate better spatial resolution than 3T MR images and than the prediction results of the comparison methods. Such high-quality 7T-like MR images could better facilitate disease diagnosis and intervention.
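The second stage, sparse representation over paired local dictionaries, can be caricatured with a 1-sparse (nearest-atom) version: represent each input 3T patch by its closest 3T dictionary atom and output the paired 7T atom. This is a deliberately simplified stand-in for the paper's regression-forest mapping plus sparse coding; all names and values are illustrative:

```python
import numpy as np

def predict_7t_like(patches_3t, dict_3t, dict_7t):
    """For each input 3T patch (row vector), pick the closest atom in the
    3T dictionary and return the paired 7T atom: a 1-sparse toy version
    of coding over paired 3T/7T local dictionaries."""
    # Squared distances between every input patch and every 3T atom.
    d2 = ((patches_3t[:, None, :] - dict_3t[None, :, :]) ** 2).sum(axis=2)
    nearest = d2.argmin(axis=1)
    return dict_7t[nearest]

dict_3t = np.array([[0.0, 0.0], [1.0, 1.0]])   # toy 3T patch atoms
dict_7t = np.array([[0.0, 0.1], [2.0, 2.1]])   # paired "7T" appearances
patches = np.array([[0.9, 1.2]])
print(predict_7t_like(patches, dict_3t, dict_7t))  # -> [[2.  2.1]]
```

A real implementation would solve a group-sparse coding problem over many atoms and blend overlapping patch estimates, rather than copying a single atom.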
Affiliation(s)
- Khosro Bahrami, Feng Shi, Islem Rekik, Yaozong Gao: Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina, 27510, USA
- Dinggang Shen: Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina, 27510, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul, 02841, Republic of Korea
155
Zhang J, Zhang L, Xiang L, Shao Y, Wu G, Zhou X, Shen D, Wang Q. Brain Atlas Fusion from High-Thickness Diagnostic Magnetic Resonance Images by Learning-Based Super-Resolution. Pattern Recognition 2017; 63:531-541. [PMID: 29062159 PMCID: PMC5650249 DOI: 10.1016/j.patcog.2016.09.019]
Abstract
It is fundamentally important to fuse a brain atlas from magnetic resonance (MR) images for many imaging-based studies. Most existing works focus on fusing atlases from high-quality MR images. However, for low-quality diagnostic images (i.e., with high inter-slice thickness), the problem of atlas fusion has not yet been addressed. In this paper, we intend to fuse a brain atlas from the high-thickness diagnostic MR images that are prevalent in clinical routine. The main idea of our work is to extend conventional groupwise registration by incorporating a novel super-resolution strategy. The contribution of the proposed super-resolution framework is two-fold. First, each high-thickness subject image is reconstructed to be isotropic by patch-based sparsity learning. Then, the reconstructed isotropic image is enhanced for better quality through a random-forest-based regression model. In this way, the images obtained by the super-resolution strategy can be fused together by applying the groupwise registration method to construct the required atlas. Our experiments have shown that the proposed framework can effectively solve the problem of atlas fusion from low-quality brain MR images.
Affiliation(s)
- Jinpeng Zhang, Lei Xiang, Qian Wang: Med-X Research Institute, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Lichi Zhang: Med-X Research Institute, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China; Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, United States
- Yeqin Shao: Nantong University, Nantong, Jiangsu 226019, China
- Guorong Wu: Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, United States
- Xiaodong Zhou: Shanghai United Imaging Healthcare Co., Ltd., Shanghai 201815, China
- Dinggang Shen: Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, United States; Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
156
Yang X, Lei Y, Shu HK, Rossi P, Mao H, Shim H, Curran WJ, Liu T. Pseudo CT Estimation from MRI Using Patch-based Random Forest. Proc SPIE Int Soc Opt Eng 2017; 10133:101332Q. [PMID: 31607771 PMCID: PMC6788808 DOI: 10.1117/12.2253936]
Abstract
MR simulators have recently gained popularity because they avoid the radiation exposure incurred by the CT simulators used in radiation therapy planning. We propose a method for pseudo CT estimation from MR images based on a patch-based random forest. Patient-specific anatomical features are extracted from the aligned training images and adopted as signatures for each voxel. The most robust and informative features are identified by feature selection to train the random forest. The well-trained random forest is then used to predict the pseudo CT of a new patient. This prediction technique was tested with human brain images, and the prediction accuracy was assessed using the original CT images. Peak signal-to-noise ratio (PSNR) and feature similarity (FSIM) indexes were used to quantify the differences between the pseudo and original CT images. The experimental results showed that the proposed method can accurately generate pseudo CT images from MR images. In summary, we have developed a new pseudo CT prediction method based on a patch-based random forest, demonstrated its clinical feasibility, and validated its prediction accuracy. This pseudo CT prediction technique could be a useful tool for MRI-based radiation treatment planning and attenuation correction in a PET/MRI scanner.
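Of the two similarity indexes mentioned, PSNR has a simple closed form that is easy to restate (FSIM is considerably more involved and is omitted here); a minimal numpy sketch with made-up toy values:

```python
import numpy as np

def psnr(reference, estimate, data_range):
    """Peak signal-to-noise ratio in dB between a reference image
    (e.g. the original CT) and an estimate (e.g. the pseudo CT).
    data_range is the nominal dynamic range of the images."""
    diff = np.asarray(reference, float) - np.asarray(estimate, float)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

ct = np.array([0.0, 100.0, 1000.0])      # toy reference values
pseudo = np.array([0.0, 110.0, 990.0])   # toy pseudo-CT values
print(round(psnr(ct, pseudo, data_range=1000.0), 2))
```

Higher PSNR means a smaller mean squared error relative to the image's dynamic range.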
Affiliation(s)
- Xiaofeng Yang, Yang Lei, Hui-Kuo Shu, Peter Rossi, Tian Liu: Department of Radiation Oncology, Winship Cancer Institute
- Hui Mao: Department of Radiology and Imaging Sciences, Winship Cancer Institute, Emory University, Atlanta, GA
- Hyunsuk Shim: Department of Radiation Oncology, Winship Cancer Institute; Department of Radiology and Imaging Sciences, Winship Cancer Institute, Emory University, Atlanta, GA
157
Edmund JM, Nyholm T. A review of substitute CT generation for MRI-only radiation therapy. Radiat Oncol 2017; 12:28. [PMID: 28126030 PMCID: PMC5270229 DOI: 10.1186/s13014-016-0747-y]
Abstract
Radiotherapy based on magnetic resonance imaging as the sole modality (MRI-only RT) is an area of growing scientific interest due to the increasing use of MRI for both target and normal tissue delineation and the development of MR-based delivery systems. One major issue in MRI-only RT is the assignment of electron densities (ED) to MRI scans for dose calculation, and a similar need for attenuation correction exists for hybrid PET/MR systems. The ED-assigned MRI scan is here named a substitute CT (sCT). In this review, we report a collection of typical performance values for the main approaches to sCT generation encountered in the literature, as compared to CT. A literature search in the Scopus database yielded 254 papers, which were included in this investigation. A final set of 50 contributions fulfilling all inclusion criteria was categorized according to the applied method, the MRI sequence/contrast involved, the number of subjects included, and the anatomical site investigated. The latter included brain, torso, prostate and phantoms. Each contribution's geometric and/or dosimetric performance metrics were also noted. The majority of studies are carried out on the brain for 5-10 patients with PET/MR applications in mind, using a voxel-based method; T1-weighted images are most commonly applied. The overall dosimetric agreement is on the order of 0.3-2.5%. A strict gamma criterion of 1% and 1 mm has passing rates from 68 to 94%, while less strict criteria show pass rates > 98%. The mean absolute error (MAE) is between 80 and 200 HU for the brain and around 40 HU for the prostate. The Dice score for bone is between 0.5 and 0.95. Specificity and sensitivity are both reported in the upper 80s%, and correctly classified voxels average around 84%. The review shows that a variety of promising approaches exist that seem clinically acceptable even with standard clinical MRI sequences. A consistent reference frame for method benchmarking is probably necessary to move the field further towards widespread clinical implementation.
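Two of the review's benchmark quantities, the MAE in HU and the Dice score for bone, can be computed as follows (a sketch with made-up toy values; the HU > 200 bone threshold is an assumption for illustration, not a value from the review):

```python
import numpy as np

def mae_hu(ct, sct):
    """Mean absolute error in HU between the CT and the substitute CT (sCT)."""
    return np.mean(np.abs(np.asarray(ct, float) - np.asarray(sct, float)))

def dice(mask_a, mask_b):
    """Dice overlap of two binary masks (e.g. bone obtained by thresholding)."""
    a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

ct = np.array([-1000, 40, 900, 1200])   # toy CT voxels in HU
sct = np.array([-950, 20, 700, 1300])   # toy sCT voxels in HU
print(mae_hu(ct, sct))                  # mean of [50, 20, 200, 100]
print(dice(ct > 200, sct > 200))        # bone masks agree here
```

In the review's terms, an sCT with MAE around 40-200 HU and a bone Dice of 0.5-0.95 would be typical of current methods.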
Affiliation(s)
- Jens M Edmund: Radiotherapy Research Unit, Department of Oncology, Herlev & Gentofte Hospital, Copenhagen University, Herlev, Denmark; Niels Bohr Institute, Copenhagen University, Copenhagen, Denmark
- Tufve Nyholm: Department of Radiation Sciences, Umeå University, Umeå, SE-901 87, Sweden; Medical Radiation Physics, Department of Immunology, Genetics and Pathology, Uppsala University, Uppsala, Sweden
158
Abstract
PET/MR is a promising multimodality imaging approach. Attenuation is by far the largest correction required for quantitative PET imaging. MR-based attenuation correction has been extensively pursued in the past several years, especially for brain imaging. In this article, we review atlas-based and direct-imaging MR-based PET attenuation correction methods. The technical principles behind these methods are detailed, and their advantages and disadvantages are discussed.
Affiliation(s)
- Yasheng Chen: Department of Neurology, BJC Institute of Health - WUSM 09205, Washington University in St. Louis, St Louis, MO 63110, USA
- Hongyu An: Mallinckrodt Institute of Radiology, Washington University in St. Louis, 510 South Kingshighway, WPAV CCIR, CB 8131, St Louis, MO 63110, USA
159
Wu Z, Gao Y, Shi F, Jewells V, Shen D. Automatic Hippocampal Subfield Segmentation from 3T Multi-modality Images. Machine Learning in Medical Imaging (MLMI Workshop) 2016; 10019:229-236. [PMID: 28603791 PMCID: PMC5464731 DOI: 10.1007/978-3-319-47157-0_28]
Abstract
Hippocampal subfields play important and divergent roles in both memory formation and the early diagnosis of many neurological diseases, but automatic subfield segmentation is less explored due to their small size and poor image contrast. In this paper, we propose an automatic learning-based hippocampal subfield segmentation framework using multi-modality 3T MR images, including T1 MRI and resting-state fMRI (rs-fMRI). To do this, we first acquire both 3T and 7T T1 MRIs for each training subject, and the 7T T1 MRI is then linearly registered onto the 3T T1 MRI. Six hippocampal subfields are manually labeled on the aligned 7T T1 MRI, which has the 7T image contrast but sits in the 3T T1 space. Next, corresponding appearance and relationship features from both 3T T1 MRI and rs-fMRI are extracted to train a structured random forest as a multi-label classifier to conduct the segmentation. Finally, the subfield segmentation is iteratively refined by additional context features and updated relationship features. To our knowledge, this is the first work that addresses the challenging automatic segmentation of hippocampal subfields using routine 3T T1 MRI and rs-fMRI. The quantitative comparison between our results and the manual ground truth demonstrates the effectiveness of our method. We also find that (a) multi-modality features significantly improve subfield segmentation performance due to the complementary information among modalities, and (b) automatic segmentation results using 3T multi-modality images are partially comparable to those on 7T T1 MRI.
Affiliation(s)
- Zhengwang Wu, Yaozong Gao, Feng Shi, Valerie Jewells, Dinggang Shen: Department of Radiology and BRIC, UNC at Chapel Hill, Chapel Hill, NC, USA
160
Huang L, Jin Y, Gao Y, Thung KH, Shen D. Longitudinal clinical score prediction in Alzheimer's disease with soft-split sparse regression based random forest. Neurobiol Aging 2016; 46:180-91. [PMID: 27500865 PMCID: PMC5152677 DOI: 10.1016/j.neurobiolaging.2016.07.005]
Abstract
Alzheimer's disease (AD) is an irreversible neurodegenerative disease that affects a large population worldwide. Cognitive scores at multiple time points can be reliably used to evaluate the progression of the disease clinically. In recent studies, machine learning techniques have shown promising results for the prediction of AD clinical scores. However, current models have multiple limitations, such as the linearity assumption and the exclusion of subjects with missing data. Here, we present a nonlinear supervised sparse regression-based random forest (RF) framework to predict a variety of longitudinal AD clinical scores. Furthermore, we propose a soft-split technique that assigns probabilistic paths to a test sample in the RF for more accurate predictions. To benefit from the longitudinal scores in the study, unlike previous studies that often removed subjects with missing scores, we first estimate the missing scores with our proposed soft-split sparse regression-based RF and then utilize the estimated longitudinal scores at all previous time points to predict the scores at the next time point. The experimental results demonstrate that our proposed method is superior to the traditional RF and outperforms other state-of-the-art regression models. Our method can also be extended into a general regression framework to predict other disease scores.
Affiliation(s)
- Lei Huang, Yan Jin, Yaozong Gao, Kim-Han Thung: Department of Radiology, Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Dinggang Shen: Department of Radiology, Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea
161
Learning-Based Multimodal Image Registration for Prostate Cancer Radiation Therapy. Med Image Comput Comput Assist Interv (MICCAI) 2016; 9902:1-9. [PMID: 28975161 DOI: 10.1007/978-3-319-46726-9_1]
Abstract
Computed tomography (CT) is widely used for dose planning in the radiotherapy of prostate cancer. However, CT has low tissue contrast, making manual contouring difficult. In contrast, magnetic resonance (MR) imaging provides high tissue contrast and is thus ideal for manual contouring. If the MR image can be registered to the CT image of the same patient, the contouring accuracy of CT could be substantially improved, which could eventually lead to higher treatment efficacy. In this paper, we propose a learning-based approach for multimodal image registration. First, to fill the appearance gap between modalities, a structured random forest with an auto-context model is learned to synthesize MRI from CT and vice versa. Then, MRI-to-CT registration is steered in a dual manner, registering images with the same appearance: (1) registering the synthesized CT with the CT, and (2) registering the MRI with the synthesized MRI. Next, a dual-core deformation fusion framework is developed to iteratively and effectively combine these two registration results. Experiments on pelvic CT and MR images have shown the improved registration performance of our proposed method compared with existing non-learning-based registration methods.
162
Estimating CT Image from MRI Data Using 3D Fully Convolutional Networks. Deep Learning and Data Labeling for Medical Applications: LABELS 2016 and DLMIA 2016 (MICCAI 2016 Workshops), Athens, Greece, October 21, 2016, Proceedings 2016; 2016:170-178. [PMID: 29075680 DOI: 10.1007/978-3-319-46976-8_18]
Abstract
Computed tomography (CT) is critical for various clinical applications, e.g., radiotherapy treatment planning and PET attenuation correction. However, CT imaging exposes patients to radiation, which may cause side effects. Compared to CT, magnetic resonance imaging (MRI) is much safer and does not involve any radiation. Therefore, researchers have recently been motivated to estimate the CT image from the corresponding MR image of the same subject for radiotherapy planning. In this paper, we propose a 3D deep learning-based method to address this challenging problem. Specifically, a 3D fully convolutional neural network (FCN) is adopted to learn an end-to-end nonlinear mapping from MR image to CT image. Compared to a conventional convolutional neural network (CNN), the FCN generates structured output and can better preserve neighborhood information in the predicted CT image. We have validated our method on a real pelvic CT/MRI dataset. Experimental results show that our method is accurate and robust for predicting CT images from MR images, and also outperforms three state-of-the-art methods under comparison. In addition, parameters such as network depth and activation function are extensively studied to give insight into deep learning-based regression tasks in our application.
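The key property claimed for the FCN, image-shaped ("structured") output that preserves neighborhood information, comes from stacking "same"-padded 3D convolutions. A single such layer written out naively in numpy (a real FCN learns its kernels and stacks many channels and nonlinearities; this toy uses one fixed smoothing kernel):

```python
import numpy as np

def conv3d_same(vol, kernel):
    """One 'same'-padded 3D convolution layer: the input is an MR volume
    and the output is an equally sized volume, so each output voxel is a
    function of its local 3D neighborhood and spatial structure survives."""
    k = kernel.shape[0]
    p = k // 2
    padded = np.pad(vol, p, mode="edge")
    out = np.empty_like(vol, dtype=float)
    for z in range(vol.shape[0]):
        for y in range(vol.shape[1]):
            for x in range(vol.shape[2]):
                out[z, y, x] = np.sum(padded[z:z + k, y:y + k, x:x + k] * kernel)
    return out

mr = np.random.rand(4, 4, 4)                 # toy "MR" volume
smoothing = np.full((3, 3, 3), 1.0 / 27.0)   # toy kernel; an FCN learns these
pseudo_ct = conv3d_same(mr, smoothing)
print(pseudo_ct.shape)  # same spatial size as the input
```

This is the contrast with a patch-classification CNN, which would emit one value per extracted patch rather than a full volume in a single forward pass.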
|
163
|
Bahrami K, Shi F, Zong X, Shin HW, An H, Shen D. Reconstruction of 7T-Like Images From 3T MRI. IEEE TRANSACTIONS ON MEDICAL IMAGING 2016; 35:2085-97. [PMID: 27046894 PMCID: PMC5147737 DOI: 10.1109/tmi.2016.2549918] [Citation(s) in RCA: 41] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/11/2023]
Abstract
Ultra-high-field (7T) MR imaging provides higher resolution and better tissue contrast than routine 3T MRI, which may enable more accurate and earlier diagnosis of brain diseases. However, 7T MRI scanners are currently more expensive and less available in clinical and research centers. This motivates us to propose a method for reconstructing images approaching the quality of 7T MRI, called 7T-like images, from 3T MRI, improving both resolution and contrast. Post-processing tasks such as tissue segmentation can then be performed more accurately, and brain tissue details can be seen at higher resolution and contrast. To this end, we acquired a unique dataset of paired 3T and 7T images scanned from the same subjects, and propose a hierarchical reconstruction based on group sparsity in a novel multi-level canonical correlation analysis (CCA) space to improve the quality of a 3T MR image toward 7T-like MRI. First, overlapping patches are extracted from the input 3T MR image. Then, by retrieving the most similar patches from all aligned 3T and 7T images in the training set, paired 3T and 7T dictionaries are constructed for each patch; for training, we use pairs of 3T and 7T MR images from each training subject. We then propose multi-level CCA to map the paired 3T and 7T patch sets into a common space that increases their correlation. In this space, each input 3T MRI patch is sparsely represented by the 3T dictionary, and the obtained sparse coefficients are used together with the corresponding 7T dictionary to reconstruct the 7T-like patch. To maintain structural consistency between adjacent patches, group sparsity is employed. The reconstruction is performed with varying patch sizes in a hierarchical framework. Experiments were conducted on 13 subjects with both 3T and 7T MR images.
The results show that our method outperforms previous methods and recovers finer structural details. To place the proposed method in a medical application context, we also evaluated the influence of post-processing methods such as brain tissue segmentation on the reconstructed 7T-like MR images. The results show that our 7T-like images lead to higher segmentation accuracy for white matter (WM), gray matter (GM), cerebrospinal fluid (CSF), and skull than segmentation of 3T MR images.
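The core CCA step (mapping paired 3T and 7T patch sets into a common space where their correlation is maximized) can be sketched with standard whitening-plus-SVD CCA on synthetic paired features. The data here is synthetic with one shared latent component; the paper's multi-level, group-sparse construction is not reproduced.

```python
import numpy as np

def inv_sqrt(S, eps=1e-8):
    """Inverse matrix square root via eigendecomposition (for whitening)."""
    w, V = np.linalg.eigh(S)
    return V @ np.diag(1.0 / np.sqrt(np.maximum(w, eps))) @ V.T

def cca_first_pair(X, Y):
    """First pair of canonical directions for centered paired data X, Y."""
    n = X.shape[0]
    Sx, Sy, Sxy = X.T @ X / n, Y.T @ Y / n, X.T @ Y / n
    Wx, Wy = inv_sqrt(Sx), inv_sqrt(Sy)
    U, s, Vt = np.linalg.svd(Wx @ Sxy @ Wy)
    return Wx @ U[:, 0], Wy @ Vt[0], s[0]   # directions + canonical correlation

rng = np.random.default_rng(0)
z = rng.standard_normal(500)                        # shared latent structure
X = np.column_stack([z, rng.standard_normal(500)])  # toy "3T patch" features
Y = np.column_stack([rng.standard_normal(500),
                     z + 0.1 * rng.standard_normal(500)])  # toy "7T" features
X -= X.mean(0); Y -= Y.mean(0)

wx, wy, rho = cca_first_pair(X, Y)
print(rho)   # near 1: in the common space the paired sets are highly correlated
```

Once patches live in such a correlated common space, sparse coefficients estimated against the 3T dictionary transfer meaningfully to the paired 7T dictionary, which is the mechanism the abstract relies on.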
Affiliation(s)
- Khosro Bahrami
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27514 USA
- Feng Shi
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27514 USA
- Xiaopeng Zong
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27514 USA
- Hae Won Shin
- Departments of Neurology and Neurosurgery, University of North Carolina at Chapel Hill, Chapel Hill, NC 27514 USA
- Hongyu An
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27514 USA
- Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27514 USA
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
|
164
|
Gao Y, Shao Y, Lian J, Wang AZ, Chen RC, Shen D. Accurate Segmentation of CT Male Pelvic Organs via Regression-Based Deformable Models and Multi-Task Random Forests. IEEE TRANSACTIONS ON MEDICAL IMAGING 2016; 35:1532-43. [PMID: 26800531 PMCID: PMC4918760 DOI: 10.1109/tmi.2016.2519264] [Citation(s) in RCA: 48] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/06/2023]
Abstract
Segmenting male pelvic organs from CT images is a prerequisite for prostate cancer radiotherapy, and the efficacy of radiation treatment depends highly on segmentation accuracy. However, accurate segmentation of male pelvic organs is challenging due to the low tissue contrast of CT images, as well as large variations in the shape and appearance of the pelvic organs. Among existing segmentation methods, deformable models are the most popular, since shape priors can easily be incorporated to regularize the segmentation. Nonetheless, sensitivity to initialization often limits their performance, especially for segmenting organs with large shape variations. In this paper, we propose a novel approach to guide deformable models, making them robust to arbitrary initializations. Specifically, we learn a displacement regressor that predicts the 3D displacement from any image voxel to the target organ boundary based on the local patch appearance. This regressor provides a non-local external force for each vertex of the deformable model, thereby overcoming the initialization problem of traditional deformable models. To learn a reliable displacement regressor, two strategies are proposed: (1) a multi-task random forest learns the displacement regressor jointly with an organ classifier; (2) an auto-context model iteratively enforces structural information during voxel-wise prediction. Extensive experiments on planning CT scans of 313 patients show that our method achieves better results than alternative classification- or regression-based methods, as well as several other existing methods for CT pelvic organ segmentation.
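The "non-local external force" idea (a regressor that predicts the displacement from any voxel to the organ boundary) can be sketched in 1D. A 1-NN lookup on patch appearance stands in for the paper's multi-task random forest, and the smooth synthetic image is an illustrative assumption, not real CT data.

```python
import numpy as np

def patch(img, i, r=2):
    return img[i - r:i + r + 1]

def make_image(boundary, n=100):
    """Toy 1D 'image' whose appearance varies smoothly with the signed
    distance to the organ boundary (stand-in for local CT patch appearance)."""
    return np.tanh((np.arange(n) - boundary) / 40.0)

# Training: pair each voxel's patch appearance with its 1D displacement to
# the boundary (1-NN lookup, standing in for the multi-task random forest).
b_train = 50
img_train = make_image(b_train)
feats = np.array([patch(img_train, i) for i in range(2, 98)])
disps = np.array([b_train - i for i in range(2, 98)])

def predict_displacement(img, i):
    d = np.linalg.norm(feats - patch(img, i), axis=1)
    return int(disps[int(np.argmin(d))])

# Test image: same organ, boundary moved to 65. A vertex initialized far
# away (voxel 20) jumps to the boundary in a single non-local step, rather
# than crawling there like a gradient-driven deformable model would.
img_test = make_image(65)
vertex = 20
vertex += predict_displacement(img_test, vertex)
print(vertex)   # lands on the true boundary
```

Because the predicted displacement is valid from anywhere in the image, the external force does not depend on the model being initialized near the boundary, which is exactly the robustness the abstract claims.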
Affiliation(s)
- Yaozong Gao
- Department of Computer Science, the Department of Radiology and Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, NC 27599 USA
- Yeqin Shao
- Nantong University, Jiangsu 226019, China, and also with the Department of Radiology and Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, NC 27599 USA
- Jun Lian
- Department of Radiation Oncology, University of North Carolina, Chapel Hill, NC 27599 USA
- Andrew Z. Wang
- Department of Radiation Oncology, University of North Carolina, Chapel Hill, NC 27599 USA
- Ronald C. Chen
- Department of Radiation Oncology, University of North Carolina, Chapel Hill, NC 27599 USA
- Dinggang Shen
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, NC 27599 USA, and also with the Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
|