1
Shin M, Seo M, Lee K, Yoon K. Super-resolution techniques for biomedical applications and challenges. Biomed Eng Lett 2024; 14:465-496. [PMID: 38645589 PMCID: PMC11026337 DOI: 10.1007/s13534-024-00365-4]
Abstract
Super-resolution (SR) techniques have revolutionized the field of biomedical applications by revealing structures at resolutions beyond the limits of imaging or measuring tools. These techniques have been applied in various biomedical domains, including microscopy, magnetic resonance imaging (MRI), computed tomography (CT), X-ray, electroencephalography (EEG), and ultrasound. SR methods fall into two main categories: traditional non-learning-based methods and modern learning-based approaches. Across these applications, SR methodologies have been effectively applied to biomedical images, enhancing the visualization of complex biological structures. These methods have also been applied to biomedical data, improving computational precision and efficiency in biomedical simulations. The use of SR techniques has enabled more detailed and accurate analyses in diagnostics and research, which are essential for early disease detection and treatment planning. However, challenges such as computational demands, data interpretation complexities, and the lack of unified high-quality data persist. The article emphasizes these issues, underscoring the need for ongoing development of SR technologies to further improve biomedical research and patient care outcomes.
Affiliation(s)
- Minwoo Shin
- School of Mathematics and Computing (Computational Science and Engineering), Yonsei University, 50 Yonsei-Ro, Seodaemun-Gu, Seoul, 03722 Republic of Korea
- Minjee Seo
- School of Mathematics and Computing (Computational Science and Engineering), Yonsei University, 50 Yonsei-Ro, Seodaemun-Gu, Seoul, 03722 Republic of Korea
- Kyunghyun Lee
- School of Mathematics and Computing (Computational Science and Engineering), Yonsei University, 50 Yonsei-Ro, Seodaemun-Gu, Seoul, 03722 Republic of Korea
- Kyungho Yoon
- School of Mathematics and Computing (Computational Science and Engineering), Yonsei University, 50 Yonsei-Ro, Seodaemun-Gu, Seoul, 03722 Republic of Korea
2
Sample C, Rahmim A, Uribe C, Bénard F, Wu J, Fedrigo R, Clark H. Neural blind deconvolution for deblurring and supersampling PSMA PET. Phys Med Biol 2024; 69:085025. [PMID: 38513292 DOI: 10.1088/1361-6560/ad36a9]
Abstract
Objective. To simultaneously deblur and supersample prostate-specific membrane antigen (PSMA) positron emission tomography (PET) images using neural blind deconvolution. Approach. Blind deconvolution is a method of estimating the hypothetical 'deblurred' image along with the blur kernel (related to the point spread function) simultaneously. Traditional maximum a posteriori blind deconvolution methods require stringent assumptions and suffer from convergence to a trivial solution. A method of modelling the deblurred image and kernel with independent neural networks, called 'neural blind deconvolution', demonstrated success for deblurring 2D natural images in 2020. In this work, we adapt neural blind deconvolution to deblur PSMA PET images while simultaneously supersampling to double the original resolution. We compare this methodology with several interpolation methods in terms of resultant blind image quality metrics and test the model's ability to predict accurate kernels by re-running the model after applying artificial 'pseudokernels' to deblurred images. The methodology was tested on a retrospective set of 30 prostate patients as well as phantom images containing spherical lesions of various volumes. Main results. Neural blind deconvolution led to improvements in image quality over other interpolation methods in terms of blind image quality metrics, recovery coefficients, and visual assessment. Predicted kernels were similar between patients, and the model accurately predicted several artificially applied pseudokernels. Localization of activity in phantom spheres was improved after deblurring, allowing small lesions to be more accurately defined. Significance. The intrinsically low spatial resolution of PSMA PET leads to partial volume effects (PVEs) that negatively impact uptake quantification in small regions. The proposed method can be used to mitigate this issue and can be straightforwardly adapted for other imaging modalities.
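The forward model at the heart of this approach, y ≈ x ⊛ k with both the sharp image x and the kernel k unknown, can be illustrated without the neural parameterization by a toy alternating gradient scheme in 1-D. This is a minimal sketch under stated assumptions, not the paper's method: the network-based parameterization, supersampling, and PET specifics are all omitted, and the kernel constraints (nonnegativity, unit sum) are generic blind-deconvolution priors.

```python
import numpy as np

def blind_deconv_1d(y, n, ksize, iters=3000, lr_x=0.01, lr_k=0.001, seed=0):
    """Jointly estimate a sharp signal x and blur kernel k from y ~ conv(x, k)
    by alternating gradient steps on the data misfit ||conv(x, k) - y||^2
    (a toy stand-in for the neural parameterization in the paper)."""
    rng = np.random.default_rng(seed)
    x = rng.random(n)                    # deblurred-signal estimate
    k = np.full(ksize, 1.0 / ksize)      # kernel estimate (uniform init)
    for _ in range(iters):
        r = np.convolve(x, k) - y        # residual of the forward model
        x -= lr_x * 2 * np.correlate(r, k, mode="valid")  # gradient w.r.t. x
        r = np.convolve(x, k) - y
        k -= lr_k * 2 * np.correlate(r, x, mode="valid")  # gradient w.r.t. k
        k = np.clip(k, 0, None)          # blur kernels are nonnegative...
        k /= k.sum()                     # ...and normalized to sum to 1
    return x, k

# Toy example: recover a two-spike signal blurred by a small smoothing kernel.
true_x = np.zeros(32); true_x[8] = 1.0; true_x[20] = 0.6
true_k = np.array([0.25, 0.5, 0.25])
y = np.convolve(true_x, true_k)
x_hat, k_hat = blind_deconv_1d(y, n=32, ksize=3)
```

The alternating structure mirrors the paper's setup in spirit: the image update and kernel update each hold the other variable fixed, and the kernel constraints keep the optimization away from the trivial identity-kernel solution.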
Affiliation(s)
- Caleb Sample
- Department of Physics and Astronomy, Faculty of Science, University of British Columbia, Vancouver, BC, CA, Canada
- Department of Medical Physics, BC Cancer, Surrey, BC, CA, Canada
- Arman Rahmim
- Department of Physics and Astronomy, Faculty of Science, University of British Columbia, Vancouver, BC, CA, Canada
- Department of Radiology, Faculty of Medicine, University of British Columbia, Vancouver, BC, CA, Canada
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, CA, Canada
- Carlos Uribe
- Department of Radiology, Faculty of Medicine, University of British Columbia, Vancouver, BC, CA, Canada
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, CA, Canada
- Department of Functional Imaging, BC Cancer, Vancouver, BC, CA, Canada
- François Bénard
- Department of Radiology, Faculty of Medicine, University of British Columbia, Vancouver, BC, CA, Canada
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, CA, Canada
- Department of Molecular Oncology, BC Cancer, Vancouver, BC, CA, Canada
- Jonn Wu
- Department of Radiation Oncology, BC Cancer, Vancouver, BC, CA, Canada
- Department of Surgery, Faculty of Medicine, University of British Columbia, Vancouver, BC, CA, Canada
- Roberto Fedrigo
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, CA, Canada
- Faculty of Medicine, University of British Columbia, Vancouver, BC, CA, Canada
- Haley Clark
- Department of Physics and Astronomy, Faculty of Science, University of British Columbia, Vancouver, BC, CA, Canada
- Department of Medical Physics, BC Cancer, Surrey, BC, CA, Canada
- Department of Surgery, Faculty of Medicine, University of British Columbia, Vancouver, BC, CA, Canada
3
Yu M, Han M, Baek J. Impact of using sinogram domain data in the super-resolution of CT images on diagnostic information. Med Phys 2024; 51:2817-2833. [PMID: 37883787 DOI: 10.1002/mp.16807]
Abstract
BACKGROUND In recent times, deep-learning-based super-resolution (DL-SR) techniques for computed tomography (CT) images have shown outstanding results in terms of full-reference image quality (FR-IQ) metrics (e.g., root mean square error and structural similarity index metric), which assess IQ by measuring similarity to the high-resolution (HR) image. In addition, IQ can be evaluated via task-based IQ (Task-IQ) metrics that assess the ability to perform specific tasks. However, most proposed image-domain SR techniques cannot improve Task-IQ metrics, which quantify the amount of diagnostically relevant information. PURPOSE In CT imaging systems, sinogram domain data can be utilized for SR techniques. Therefore, this study investigates the impact of utilizing sinogram domain data on the ability to restore diagnostic information. METHODS We evaluated three DL-SR techniques: using image domain data (Image-SR), using sinogram domain data (Sinogram-SR), and using both sinogram and image domain data (Dual-SR). For Task-IQ evaluation, the Rayleigh discrimination task was used to assess diagnostic ability by focusing on resolving power, and an ideal observer (IO) can be used to perform the task. In this study, we used a convolutional neural network (CNN)-based IO that approximates IO performance. We compared the IO performances of the SR techniques according to data domain to evaluate their ability to restore discriminative information. RESULTS Overall, the low-resolution (LR) and SR images exhibit lower IO performance than HR owing to the discriminative information degraded when detector binning is used. Among the SR techniques, Image-SR does not outperform the LR image in IO performance, whereas Sinogram-SR and Dual-SR do. Furthermore, for Sinogram-SR, we confirm that FR-IQ and IO performance are positively correlated. These observations demonstrate that sinogram domain upsampling improves the representation of discriminative information in the image domain compared with LR and Image-SR. CONCLUSIONS Unlike Image-SR, Sinogram-SR can increase the amount of discriminative information present in the image domain. This demonstrates that, to improve discriminative information in terms of resolving power, sinogram domain processing is necessary.
Affiliation(s)
- Minwoo Yu
- Department of Artificial Intelligence, College of Computing, Yonsei University, Seoul, South Korea
- Minah Han
- Department of Artificial Intelligence, College of Computing, Yonsei University, Seoul, South Korea
- Bareunex Imaging, Inc., Seoul, South Korea
- Jongduk Baek
- Department of Artificial Intelligence, College of Computing, Yonsei University, Seoul, South Korea
- Bareunex Imaging, Inc., Seoul, South Korea
4
Yang G, Li C, Yao Y, Wang G, Teng Y. Quasi-supervised learning for super-resolution PET. Comput Med Imaging Graph 2024; 113:102351. [PMID: 38335784 DOI: 10.1016/j.compmedimag.2024.102351]
Abstract
Low resolution of positron emission tomography (PET) limits its diagnostic performance. Deep learning has been successfully applied to achieve super-resolution PET. However, commonly used supervised learning methods in this context require many pairs of low- and high-resolution (LR and HR) PET images. Although unsupervised learning utilizes unpaired images, the results are not as good as those obtained with supervised deep learning. In this paper, we propose a quasi-supervised learning method, a new type of weakly supervised learning, to recover HR PET images from LR counterparts by leveraging the similarity between unpaired LR and HR image patches. Specifically, LR image patches are taken from a patient as inputs, while the most similar HR patches from other patients are found as labels. The similarity between the matched HR and LR patches serves as a prior for network construction. Our proposed method can be implemented by designing a new network or modifying an existing one. As an example, in this study we modified the cycle-consistent generative adversarial network (CycleGAN) for super-resolution PET. Our numerical and experimental results qualitatively and quantitatively show the merits of our method relative to state-of-the-art methods. The code is publicly available at https://github.com/PigYang-ops/CycleGAN-QSDL.
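The pairing step the abstract describes — matching each LR patch from one patient to its most similar HR patch from other patients — can be sketched with a simple nearest-neighbour search. The 2x2 mean-downsampling comparison and squared-difference metric used here are illustrative assumptions, not the paper's exact similarity measure:

```python
import numpy as np

def pair_patches(lr_patches, hr_patches):
    """For each LR patch, find the most similar HR patch from an unpaired
    pool by comparing against a 2x2-mean-downsampled version of each HR
    patch, so the comparison is like-for-like on the LR grid."""
    n, h, w = hr_patches.shape
    hr_small = hr_patches.reshape(n, h // 2, 2, w // 2, 2).mean(axis=(2, 4))
    pairs = []
    for lr in lr_patches:
        d = ((hr_small - lr) ** 2).sum(axis=(1, 2))   # squared distances
        pairs.append((lr, hr_patches[int(np.argmin(d))]))
    return pairs

# Toy pools: 4x4 LR patches from one "patient", unpaired 8x8 HR patches
# from others; the matched pairs would then serve as quasi-supervised labels.
rng = np.random.default_rng(1)
hr_pool = rng.random((50, 8, 8))
lr_pool = rng.random((10, 4, 4))
pairs = pair_patches(lr_pool, hr_pool)
```

The resulting (LR, HR) pairs are what turns the unpaired setting into a weakly supervised one: the network trains on matched patches rather than true ground-truth pairs.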
Affiliation(s)
- Guangtong Yang
- College of Medicine and Biomedical Information Engineering, Northeastern University, 110004 Shenyang, China
- Chen Li
- College of Medicine and Biomedical Information Engineering, Northeastern University, 110004 Shenyang, China
- Yudong Yao
- Department of Electrical and Computer Engineering, Stevens Institute of Technology, Hoboken, NJ, USA
- Ge Wang
- Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA
- Yueyang Teng
- College of Medicine and Biomedical Information Engineering, Northeastern University, 110004 Shenyang, China
5
Balaji V, Song TA, Malekzadeh M, Heidari P, Dutta J. Artificial Intelligence for PET and SPECT Image Enhancement. J Nucl Med 2024; 65:4-12. [PMID: 37945384 PMCID: PMC10755520 DOI: 10.2967/jnumed.122.265000]
Abstract
Nuclear medicine imaging modalities such as PET and SPECT are confounded by high noise levels and low spatial resolution, necessitating postreconstruction image enhancement to improve their quality and quantitative accuracy. Artificial intelligence (AI) models such as convolutional neural networks, U-Nets, and generative adversarial networks have shown promising outcomes in enhancing PET and SPECT images. This review article presents a comprehensive survey of state-of-the-art AI methods for PET and SPECT image enhancement and seeks to identify emerging trends in this field. We focus on recent breakthroughs in AI-based PET and SPECT image denoising and deblurring. Supervised deep-learning models have shown great potential in reducing radiotracer dose and scan times without sacrificing image quality and diagnostic accuracy. However, the clinical utility of these methods is often limited by their need for paired clean and corrupt datasets for training. This has motivated research into unsupervised alternatives that can overcome this limitation by relying on only corrupt inputs or unpaired datasets to train models. This review highlights recently published supervised and unsupervised efforts toward AI-based PET and SPECT image enhancement. We discuss cross-scanner and cross-protocol training efforts, which can greatly enhance the clinical translatability of AI-based image enhancement tools. We also aim to address the looming question of whether the improvements in image quality generated by AI models lead to actual clinical benefit. To this end, we discuss works that have focused on task-specific objective clinical evaluation of AI models for image enhancement or incorporated clinical metrics into their loss functions to guide the image generation process. 
Finally, we discuss emerging research directions, which include the exploration of novel training paradigms, curation of larger task-specific datasets, and objective clinical evaluation that will enable the realization of the full translation potential of these models in the future.
Affiliation(s)
- Vibha Balaji
- Department of Biomedical Engineering, University of Massachusetts Amherst, Amherst, Massachusetts
- Tzu-An Song
- Department of Biomedical Engineering, University of Massachusetts Amherst, Amherst, Massachusetts
- Masoud Malekzadeh
- Department of Biomedical Engineering, University of Massachusetts Amherst, Amherst, Massachusetts
- Pedram Heidari
- Division of Nuclear Medicine and Molecular Imaging, Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts
- Joyita Dutta
- Department of Biomedical Engineering, University of Massachusetts Amherst, Amherst, Massachusetts
6
Schonfeld E, Mordekai N, Berg A, Johnstone T, Shah A, Shah V, Haider G, Marianayagam NJ, Veeravagu A. Machine Learning in Neurosurgery: Toward Complex Inputs, Actionable Predictions, and Generalizable Translations. Cureus 2024; 16:e51963. [PMID: 38333513 PMCID: PMC10851045 DOI: 10.7759/cureus.51963]
Abstract
Machine learning can predict neurosurgical diagnosis and outcomes, power imaging analysis, and perform robotic navigation and tumor labeling. State-of-the-art models can reconstruct and generate images, predict surgical events from video, and assist in intraoperative decision-making. In this review, we will detail the neurosurgical applications of machine learning, ranging from simple to advanced models, and their potential to transform patient care. As machine learning techniques, outputs, and methods become increasingly complex, their performance is often more impactful yet increasingly difficult to evaluate. We aim to introduce these advancements to the neurosurgical audience while suggesting major potential roadblocks to their safe and effective translation. Unlike the previous generation of machine learning in neurosurgery, the safe translation of recent advancements will be contingent on neurosurgeons' involvement in model development and validation.
Affiliation(s)
- Ethan Schonfeld
- Neurosurgery, Stanford University School of Medicine, Stanford, USA
- Alex Berg
- Neurosurgery, Stanford University School of Medicine, Stanford, USA
- Thomas Johnstone
- Neurosurgery, Stanford University School of Medicine, Stanford, USA
- Aaryan Shah
- School of Humanities and Sciences, Stanford University, Stanford, USA
- Vaibhavi Shah
- Neurosurgery, Stanford University School of Medicine, Stanford, USA
- Ghani Haider
- Neurosurgery, Stanford University School of Medicine, Stanford, USA
- Anand Veeravagu
- Neurosurgery, Stanford University School of Medicine, Stanford, USA
7
Gong K, Johnson K, El Fakhri G, Li Q, Pan T. PET image denoising based on denoising diffusion probabilistic model. Eur J Nucl Med Mol Imaging 2024; 51:358-368. [PMID: 37787849 PMCID: PMC10958486 DOI: 10.1007/s00259-023-06417-8]
Abstract
PURPOSE Due to various physical degradation factors and the limited counts received, PET image quality needs further improvement. The denoising diffusion probabilistic model (DDPM) is a distribution-learning-based model that transforms a normal distribution into a specific data distribution through iterative refinements. In this work, we proposed and evaluated different DDPM-based methods for PET image denoising. METHODS Under the DDPM framework, one way to perform PET image denoising is to provide the PET image and/or the prior image as the input. Another way is to supply the prior image as the network input with the PET image included in the refinement steps, which can fit scenarios with different noise levels. 150 brain [18F]FDG datasets and 140 brain [18F]MK-6240 (imaging neurofibrillary tangle deposition) datasets were utilized to evaluate the proposed DDPM-based methods. RESULTS Quantification showed that the DDPM-based frameworks with PET information included generated better results than the nonlocal mean, U-Net, and generative adversarial network (GAN)-based denoising methods. Adding an additional MR prior in the model helped achieve better performance and further reduced the uncertainty during image denoising. Solely relying on the MR prior while ignoring the PET information resulted in large bias. Regional and surface quantification showed that employing the MR prior as the network input while embedding the PET image as a data-consistency constraint during inference achieved the best performance. CONCLUSION DDPM-based PET image denoising is a flexible framework that can efficiently utilize prior information and achieve better performance than the nonlocal mean, U-Net, and GAN-based denoising methods.
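The iterative-refinement framework the abstract builds on can be made concrete with the closed-form DDPM forward step and the corresponding reverse-mean update, following the standard Ho et al. (2020) formulation. This is a schedule-only sketch: the trained noise-prediction network, PET data, and MR prior are omitted, with the true noise standing in as an oracle prediction.

```python
import numpy as np

# Standard DDPM linear noise schedule.
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
abar = np.cumprod(alphas)          # cumulative products: alpha-bar_t

def q_sample(x0, t, eps):
    """Forward diffusion: sample x_t ~ q(x_t | x_0) in closed form."""
    return np.sqrt(abar[t]) * x0 + np.sqrt(1.0 - abar[t]) * eps

def denoise_step(xt, t, eps_pred):
    """Deterministic part of one reverse refinement step, with eps_pred
    standing in for the trained network's noise prediction."""
    coef = betas[t] / np.sqrt(1.0 - abar[t])
    return (xt - coef * eps_pred) / np.sqrt(alphas[t])

# With the true noise as an oracle prediction, a single forward step at
# t = 0 followed by the matching reverse step recovers x_0.
rng = np.random.default_rng(0)
x0 = rng.random(16)                # stand-in for a (flattened) image
eps = rng.standard_normal(16)
x1 = q_sample(x0, 0, eps)
x0_rec = denoise_step(x1, 0, eps)
```

In the actual DDPM denoisers the abstract compares, `eps_pred` comes from a network conditioned on the noisy PET image and/or the MR prior, and the reverse step also adds scaled Gaussian noise at intermediate timesteps.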
Affiliation(s)
- Kuang Gong
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, 32611, FL, USA
- Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, 02114, MA, USA
- Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, Boston, 02114, MA, USA
- Keith Johnson
- Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, 02114, MA, USA
- Georges El Fakhri
- Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, 02114, MA, USA
- Quanzheng Li
- Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, Boston, 02114, MA, USA
- Tinsu Pan
- Department of Imaging Physics, University of Texas MD Anderson Cancer Center, Houston, 77030, TX, USA
8
Yu Y, She K, Liu J, Cai X, Shi K, Kwon OM. A super-resolution network for medical imaging via transformation analysis of wavelet multi-resolution. Neural Netw 2023; 166:162-173. [PMID: 37487412 DOI: 10.1016/j.neunet.2023.07.005]
Abstract
In recent years, deep learning super-resolution models for progressive reconstruction have achieved great success. However, these models, which build on multi-resolution analysis, largely ignore the information contained in the lower subspaces and do not explore the correlation between features in the wavelet and spatial domains, and therefore do not fully exploit the auxiliary information that multi-domain, multi-resolution analysis provides. To address this, we propose a super-resolution network based on the wavelet multi-resolution framework (WMRSR) to capture the auxiliary information contained in multiple subspaces and to model the interdependencies between spatial-domain and wavelet-domain features. Initially, the wavelet multi-resolution input (WMRI) is generated by combining wavelet sub-bands obtained from each subspace through wavelet multi-resolution analysis with the corresponding spatial-domain image content, and serves as input to the network. Then, the WMRSR captures the corresponding features from the WMRI in the wavelet and spatial domains, respectively, and fuses them adaptively, thus learning fully explored features across resolutions and domains. Finally, high-resolution images are gradually reconstructed in the wavelet multi-resolution framework by our convolution-based wavelet transform module, which is suitable for deep neural networks. Extensive experiments on two public datasets demonstrate that our method outperforms other state-of-the-art methods in terms of objective and visual quality.
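The wavelet multi-resolution decomposition underlying the WMRI input can be sketched with a single-level 2-D Haar transform, which splits an image into one approximation sub-band and three detail sub-bands and reconstructs it exactly. This is a generic illustration of the decomposition, not the WMRSR network itself:

```python
import numpy as np

def haar_dwt2(img):
    """One level of the 2-D Haar wavelet transform: returns the low-pass
    approximation plus horizontal/vertical/diagonal detail sub-bands."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0   # approximation (lower subspace)
    lh = (a + b - c - d) / 4.0   # horizontal details
    hl = (a - b + c - d) / 4.0   # vertical details
    hh = (a - b - c + d) / 4.0   # diagonal details
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Inverse transform: perfectly reconstructs the original image."""
    h, w = ll.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll + lh - hl - hh
    out[1::2, 0::2] = ll - lh + hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out

img = np.arange(64, dtype=float).reshape(8, 8)
ll, lh, hl, hh = haar_dwt2(img)          # four sub-bands, each 4x4
restored = haar_idwt2(ll, lh, hl, hh)    # matches img exactly
```

Applying `haar_dwt2` recursively to `ll` produces the nested subspaces whose sub-bands, combined with spatial-domain content, form a WMRI-style multi-resolution input.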
Affiliation(s)
- Yue Yu
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu, 610054, Sichuan, China
- Kun She
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu, 610054, Sichuan, China
- Jinhua Liu
- School of Mathematical and Computer Sciences, Shangrao Normal University, Shangrao 334001, Jiangxi, China
- Xiao Cai
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu, 610054, Sichuan, China
- Kaibo Shi
- School of Electronic Information and Electrical Engineering, Chengdu University, Chengdu, 610106, Sichuan, China
- O M Kwon
- School of Electrical Engineering, Chungbuk National University, Chungdae-ro, Seowon-Gu, 28644, Cheongju, South Korea
9
Kimberly WT, Sorby-Adams AJ, Webb AG, Wu EX, Beekman R, Bowry R, Schiff SJ, de Havenon A, Shen FX, Sze G, Schaefer P, Iglesias JE, Rosen MS, Sheth KN. Brain imaging with portable low-field MRI. Nat Rev Bioeng 2023; 1:617-630. [PMID: 37705717 PMCID: PMC10497072 DOI: 10.1038/s44222-023-00086-w]
Abstract
The advent of portable, low-field MRI (LF-MRI) heralds new opportunities in neuroimaging. Low power requirements and transportability have enabled scanning outside the controlled environment of a conventional MRI suite, enhancing access to neuroimaging for indications that are not well suited to existing technologies. Maximizing the information extracted from the reduced signal-to-noise ratio of LF-MRI is crucial to developing clinically useful diagnostic images. Progress in electromagnetic noise cancellation and machine learning reconstruction algorithms from sparse k-space data as well as new approaches to image enhancement have now enabled these advancements. Coupling technological innovation with bedside imaging creates new prospects in visualizing the healthy brain and detecting acute and chronic pathological changes. Ongoing development of hardware, improvements in pulse sequences and image reconstruction, and validation of clinical utility will continue to accelerate this field. As further innovation occurs, portable LF-MRI will facilitate the democratization of MRI and create new applications not previously feasible with conventional systems.
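The sparse k-space reconstruction problem the review highlights can be illustrated with the naive zero-filled inverse-FFT baseline that learned reconstruction methods improve upon. This is a generic sketch; the centered low-frequency mask and the keep fraction are arbitrary assumptions for illustration:

```python
import numpy as np

def zero_filled_recon(img, keep_fraction=0.33):
    """Simulate reconstruction from sparse k-space: keep only the lowest
    spatial frequencies of the centered 2-D FFT, zero the rest, and invert.
    Learned LF-MRI reconstruction methods aim to beat this baseline."""
    k = np.fft.fftshift(np.fft.fft2(img))            # centered k-space
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    kh, kw = int(h * keep_fraction / 2), int(w * keep_fraction / 2)
    mask[h // 2 - kh:h // 2 + kh, w // 2 - kw:w // 2 + kw] = True
    k_sparse = np.where(mask, k, 0)                  # discard high frequencies
    return np.real(np.fft.ifft2(np.fft.ifftshift(k_sparse)))

# A fully sampled reconstruction is exact; a sparse one is a blurred
# approximation, which is where learned priors add value.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
recon = zero_filled_recon(img)
```

In practice, machine-learning reconstruction replaces the zero-filling with a learned mapping from undersampled k-space to the image, recovering detail the mask discarded.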
Affiliation(s)
- W Taylor Kimberly
- Department of Neurology and the Center for Genomic Medicine, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Annabel J Sorby-Adams
- Department of Neurology and the Center for Genomic Medicine, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Andrew G Webb
- Department of Radiology, Leiden University Medical Center, Leiden, The Netherlands
- Ed X Wu
- Laboratory of Biomedical Imaging and Signal Processing, Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong SAR, China
- Rachel Beekman
- Division of Neurocritical Care and Emergency Neurology, Department of Neurology, Yale New Haven Hospital and Yale School of Medicine, Yale Center for Brain & Mind Health, New Haven, CT, USA
- Ritvij Bowry
- Departments of Neurosurgery and Neurology, McGovern Medical School, University of Texas Health Neurosciences, Houston, TX, USA
- Steven J Schiff
- Department of Neurosurgery, Yale School of Medicine, New Haven, CT, USA
- Adam de Havenon
- Division of Vascular Neurology, Department of Neurology, Yale New Haven Hospital and Yale School of Medicine, New Haven, CT, USA
- Francis X Shen
- Harvard Medical School Center for Bioethics, Harvard Law School, Boston, MA, USA
- Department of Psychiatry, Massachusetts General Hospital, Boston, MA, USA
- Gordon Sze
- Department of Radiology, Yale New Haven Hospital and Yale School of Medicine, New Haven, CT, USA
- Pamela Schaefer
- Division of Neuroradiology, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Juan Eugenio Iglesias
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Centre for Medical Image Computing, University College London, London, UK
- Computer Science and AI Laboratory, Massachusetts Institute of Technology, Boston, MA, USA
- Matthew S Rosen
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Kevin N Sheth
- Division of Neurocritical Care and Emergency Neurology, Department of Neurology, Yale New Haven Hospital and Yale School of Medicine, Yale Center for Brain & Mind Health, New Haven, CT, USA
10
Li J, Xi C, Dai H, Wang J, Lv Y, Zhang P, Zhao J. Enhanced PET imaging using progressive conditional deep image prior. Phys Med Biol 2023; 68:175047. [PMID: 37582392 DOI: 10.1088/1361-6560/acf091]
Abstract
Objective. Unsupervised learning-based methods have been proven to be an effective way to improve the image quality of positron emission tomography (PET) images when a large dataset is not available. However, when the gap between the input image and the target PET image is large, direct unsupervised learning can be challenging and easily leads to reduced lesion detectability. We aim to develop a new unsupervised learning method to improve lesion detectability in patient studies. Approach. We applied a deep progressive learning strategy to bridge the gap between the input image and the target image. The one-step unsupervised learning is decomposed into two unsupervised learning steps. The input image of the first network is an anatomical image, and the input image of the second network is a PET image with a low noise level. The output of the first network is also used as the prior image to generate the target image of the second network by an iterative reconstruction method. Results. The performance of the proposed method was evaluated through phantom and patient studies and compared with non-deep-learning, supervised learning, and unsupervised learning methods. The results showed that the proposed method was superior to non-deep-learning and unsupervised methods and comparable to the supervised method. Significance. A progressive unsupervised learning method was proposed that can improve image noise performance and lesion detectability.
Affiliation(s)
- Jinming Li
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- United Imaging Healthcare, Shanghai, People's Republic of China
- Chen Xi
- United Imaging Healthcare, Shanghai, People's Republic of China
- Houjiao Dai
- United Imaging Healthcare, Shanghai, People's Republic of China
- Jing Wang
- Department of Nuclear Medicine, Xijing Hospital, Fourth Military Medical University, Shaanxi, Xi'an, People's Republic of China
- Yang Lv
- United Imaging Healthcare, Shanghai, People's Republic of China
- Puming Zhang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- Jun Zhao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
11
Kazerouni A, Aghdam EK, Heidari M, Azad R, Fayyaz M, Hacihaliloglu I, Merhof D. Diffusion models in medical imaging: A comprehensive survey. Med Image Anal 2023; 88:102846. [PMID: 37295311 DOI: 10.1016/j.media.2023.102846]
Abstract
Denoising diffusion models, a class of generative models, have recently garnered immense interest across deep-learning problems. A diffusion probabilistic model defines a forward diffusion stage, in which the input data are gradually perturbed over several steps by adding Gaussian noise, and then learns to reverse that process to recover the desired noise-free data from noisy samples. Diffusion models are widely appreciated for their strong mode coverage and the quality of their generated samples, despite their known computational burden. Capitalizing on advances in computer vision, the field of medical imaging has likewise seen growing interest in diffusion models. To help researchers navigate this profusion, this survey provides a comprehensive overview of diffusion models in medical imaging. Specifically, we start with an introduction to the theoretical foundations and fundamental concepts behind diffusion models and the three generic diffusion modeling frameworks: diffusion probabilistic models, noise-conditioned score networks, and stochastic differential equations. We then provide a systematic taxonomy of diffusion models in the medical domain and propose a multi-perspective categorization based on application, imaging modality, organ of interest, and algorithm. To this end, we cover extensive applications of diffusion models in the medical domain, including image-to-image translation, reconstruction, registration, classification, segmentation, denoising, 2D/3D generation, anomaly detection, and other medically related challenges. Furthermore, we highlight the practical use cases of selected approaches, discuss the limitations of diffusion models in the medical domain, and propose several directions to meet the demands of this field. Finally, we gather the reviewed studies, with their available open-source implementations, in a GitHub repository that we aim to update regularly with the latest relevant papers.
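As a concrete anchor for the forward/reverse process described in this abstract, here is a minimal, illustrative sketch of the closed-form forward (noising) step of a diffusion probabilistic model. The linear beta schedule and all numeric values are assumptions chosen for illustration, not taken from the survey.

```python
import math
import random

def forward_diffuse(x0, t, betas, rng=random.Random(0)):
    # Closed-form sample from q(x_t | x_0):
    #   x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps,  eps ~ N(0, I)
    # where alpha_bar_t is the cumulative product of (1 - beta_s) for s < t.
    alpha_bar = 1.0
    for s in range(t):
        alpha_bar *= 1.0 - betas[s]
    sig, noise = math.sqrt(alpha_bar), math.sqrt(1.0 - alpha_bar)
    return [sig * v + noise * rng.gauss(0.0, 1.0) for v in x0]

# Illustrative linear schedule over T steps: the signal term decays toward pure noise.
T = 1000
betas = [1e-4 + (0.02 - 1e-4) * s / (T - 1) for s in range(T)]
noisy = forward_diffuse([1.0, -0.5, 0.25], T, betas)
```

A trained reverse network would then be applied step by step to recover a noise-free sample from such noisy inputs; that learned reverse pass is what the surveyed medical-imaging models implement.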
Affiliation(s)
- Amirhossein Kazerouni: School of Electrical Engineering, Iran University of Science and Technology, Tehran, Iran
- Moein Heidari: School of Electrical Engineering, Iran University of Science and Technology, Tehran, Iran
- Reza Azad: Faculty of Electrical Engineering and Information Technology, RWTH Aachen University, Aachen, Germany
- Ilker Hacihaliloglu: Department of Radiology, University of British Columbia, Vancouver, Canada; Department of Medicine, University of British Columbia, Vancouver, Canada
- Dorit Merhof: Faculty of Informatics and Data Science, University of Regensburg, Regensburg, Germany; Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
12.
Gong C, Jing C, Chen X, Pun CM, Huang G, Saha A, Nieuwoudt M, Li HX, Hu Y, Wang S. Generative AI for brain image computing and brain network computing: a review. Front Neurosci 2023; 17:1203104. [PMID: 37383107] [PMCID: PMC10293625] [DOI: 10.3389/fnins.2023.1203104]
Abstract
Recent years have witnessed significant advances in brain imaging techniques, which offer a non-invasive approach to mapping the structure and function of the brain. Concurrently, generative artificial intelligence (AI) has experienced substantial growth; it uses existing data to create new content whose underlying patterns resemble those of real-world data. The integration of these two domains, generative AI in neuroimaging, presents a promising avenue for brain imaging and brain network computing, particularly for extracting spatiotemporal brain features and reconstructing the topological connectivity of brain networks. This review therefore surveys the advanced models, tasks, challenges, and prospects of brain imaging and brain network computing techniques, and aims to provide a comprehensive picture of current generative AI techniques in brain imaging, focusing on novel methodological approaches and applications of related new methods. It discusses the fundamental theories and algorithms of four classic generative models and provides a systematic survey and categorization of tasks, including co-registration, super-resolution, enhancement, classification, segmentation, cross-modality, brain network analysis, and brain decoding. It also highlights the challenges and future directions of the latest work, in the expectation that future research will benefit.
Affiliation(s)
- Changwei Gong: Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Department of Computer Science, University of Chinese Academy of Sciences, Beijing, China
- Changhong Jing: Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Department of Computer Science, University of Chinese Academy of Sciences, Beijing, China
- Xuhang Chen: Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Department of Computer and Information Science, University of Macau, Macau, China
- Chi Man Pun: Department of Computer and Information Science, University of Macau, Macau, China
- Guoli Huang: Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Ashirbani Saha: Department of Oncology and School of Biomedical Engineering, McMaster University, Hamilton, ON, Canada
- Martin Nieuwoudt: Institute for Biomedical Engineering, Stellenbosch University, Stellenbosch, South Africa
- Han-Xiong Li: Department of Systems Engineering, City University of Hong Kong, Hong Kong, China
- Yong Hu: Department of Orthopaedics and Traumatology, The University of Hong Kong, Hong Kong, China
- Shuqiang Wang: Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Department of Computer Science, University of Chinese Academy of Sciences, Beijing, China
13.
Qiu D, Cheng Y, Wang X. Medical image super-resolution reconstruction algorithms based on deep learning: A survey. Comput Methods Programs Biomed 2023; 238:107590. [PMID: 37201252] [DOI: 10.1016/j.cmpb.2023.107590]
Abstract
BACKGROUND AND OBJECTIVE Given the demand for high-resolution (HR) medical images in clinical practice, super-resolution (SR) reconstruction algorithms that operate on low-resolution (LR) medical images have become a research hotspot. Such methods can significantly improve image resolution without upgrading hardware, so a review of them is of great value. METHODS Focusing on SR reconstruction algorithms specific to medical imaging, we organize the survey by subfield, covering magnetic resonance (MR), computed tomography (CT), and ultrasound images. First, we analyze the research progress of SR reconstruction algorithms, summarizing and comparing the different types. Second, we introduce the evaluation metrics used for SR reconstruction. Finally, we discuss likely directions for SR reconstruction technology in the medical field. RESULTS Deep learning-based medical image SR reconstruction can provide richer lesion information, relieve experts' diagnostic workload, and improve diagnostic efficiency and accuracy. CONCLUSION Deep learning-based medical image SR reconstruction helps improve image quality, supports expert diagnosis, and lays a solid foundation for subsequent computer-based analysis and recognition tasks, which is of great significance for improving diagnostic efficiency and realizing intelligent medical care.
Affiliation(s)
- Defu Qiu: Engineering Research Center of Intelligent Control for Underground Space, Ministry of Education, China University of Mining and Technology, Xuzhou 221116, China; School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 221116, China
- Yuhu Cheng: Engineering Research Center of Intelligent Control for Underground Space, Ministry of Education, China University of Mining and Technology, Xuzhou 221116, China; School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 221116, China
- Xuesong Wang: Engineering Research Center of Intelligent Control for Underground Space, Ministry of Education, China University of Mining and Technology, Xuzhou 221116, China; School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 221116, China
14.
Karamov R, Breite C, Lomov SV, Sergeichev I, Swolfs Y. Super-resolution processing of synchrotron CT images for automated fibre break analysis of unidirectional composites. Polymers (Basel) 2023; 15(9):2206. [PMID: 37177352] [PMCID: PMC10180951] [DOI: 10.3390/polym15092206]
Abstract
Fibre breaks govern the strength of unidirectional composite materials under tension. The progressive development of fibre breaks is studied using in situ X-ray computed tomography, especially with synchrotron radiation. However, even with synchrotron radiation, the resolution of time-resolved in situ images is insufficient for fully automated analysis under continuous mechanical deformation. We therefore investigate increasing the quality of low-resolution in situ scans by means of super-resolution (SR) using 3D deep learning techniques, thus facilitating subsequent fibre break identification. We trained generative adversarial networks (GANs) on datasets of statically acquired high-resolution (0.3 μm) and low-resolution (1.6 μm) images. These networks were then applied to a low-resolution (1.1 μm), noisy image of a continuously loaded specimen. The statistical parameters of the fibre breaks used for comparison are the number of individual breaks and the number of 2-plets and 3-plets per specimen volume. The fully automated process recovers on average 82% of the manually identified fibre breaks, while the semi-automated one reaches 92%. The developed approach allows the use of faster, low-resolution in situ tomography without losing the quality of the identified physical parameters.
Affiliation(s)
- Radmir Karamov: Center for Materials Technologies, Skolkovo Institute of Science and Technology, Bolshoy Boulevard 30, bld. 1, 121205 Moscow, Russia; Department of Materials Engineering, KU Leuven, Kasteelpark Arenberg 44, 3001 Leuven, Belgium
- Christian Breite: Department of Materials Engineering, KU Leuven, Kasteelpark Arenberg 44, 3001 Leuven, Belgium
- Stepan V Lomov: Department of Materials Engineering, KU Leuven, Kasteelpark Arenberg 44, 3001 Leuven, Belgium
- Ivan Sergeichev: Center for Materials Technologies, Skolkovo Institute of Science and Technology, Bolshoy Boulevard 30, bld. 1, 121205 Moscow, Russia
- Yentl Swolfs: Department of Materials Engineering, KU Leuven, Kasteelpark Arenberg 44, 3001 Leuven, Belgium
15.
Improving the diagnostic performance of computed tomography angiography for intracranial large arterial stenosis by a novel super-resolution algorithm based on multi-scale residual denoising generative adversarial network. Clin Imaging 2023; 96:1-8. [PMID: 36731372] [DOI: 10.1016/j.clinimag.2023.01.009]
Abstract
BACKGROUND Computed tomography angiography (CTA) is popular because it is rapid and accessible. However, CTA is inferior to digital subtraction angiography (DSA) in diagnosing intracranial artery stenosis or occlusion, and DSA is an invasive examination, so we sought to optimize the quality of cephalic CTA images. METHODS We trained a multi-scale residual denoising generative adversarial network (MRDGAN) on 5000 CTA images. Then, 71 CTA images with intracranial large arterial stenosis were processed with super-resolution GAN (SRGAN), enhanced super-resolution GAN (ESRGAN), and the trained MRDGAN, respectively. Peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) were computed for the SRGAN, ESRGAN, MRDGAN, and original CTA images. The quality of the MRDGAN and original images was visually assessed on a 4-point scale, and the diagnostic agreement of DSA with the MRDGAN and original images was analyzed. RESULTS PSNR was significantly higher for the MRDGAN CTA images (35.96 ± 1.51) than for the original (31.51 ± 1.43), SRGAN (25.75 ± 1.18), and ESRGAN (30.36 ± 1.05) images (all P < 0.001). SSIM was significantly higher for the MRDGAN images (0.95 ± 0.02) than for the SRGAN (0.88 ± 0.03) and ESRGAN (0.90 ± 0.02) images (all P < 0.01). The visual assessment score was significantly higher for the MRDGAN images (3.52 ± 0.58) than for the original images (2.39 ± 0.69) (P < 0.05). Diagnostic agreement between MRDGAN and DSA (κ = 0.89) was superior to that between the original images and DSA (κ = 0.62). CONCLUSION MRDGAN can effectively optimize original CTA images and improve their clinical diagnostic value for intracranial large artery stenosis.
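For reference, the PSNR values compared throughout this abstract follow directly from the mean squared error between a reference and a processed image. Below is a minimal, self-contained sketch; the pixel values are made up for illustration, and SSIM (the other reported metric) is omitted for brevity.

```python
import math

def psnr(ref, test, max_val=255.0):
    # Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE),
    # computed over equal-length flat lists of pixel values.
    mse = sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

# Two toy 4-pixel "images" differing by a constant 10 grey levels:
reference = [50.0, 100.0, 150.0, 200.0]
degraded = [v + 10.0 for v in reference]
# MSE = 100, so PSNR = 10 * log10(255^2 / 100), about 28.13 dB
```

Higher PSNR means the processed image is numerically closer to the reference, which is why the MRDGAN images (about 36 dB) are judged superior to the SRGAN and ESRGAN outputs above.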
16.
Iglesias JE, Billot B, Balbastre Y, Magdamo C, Arnold SE, Das S, Edlow BL, Alexander DC, Golland P, Fischl B. SynthSR: A public AI tool to turn heterogeneous clinical brain scans into high-resolution T1-weighted images for 3D morphometry. Sci Adv 2023; 9:eadd3607. [PMID: 36724222] [PMCID: PMC9891693] [DOI: 10.1126/sciadv.add3607]
Abstract
Every year, millions of brain magnetic resonance imaging (MRI) scans are acquired in hospitals across the world. These have the potential to revolutionize our understanding of many neurological diseases, but their morphometric analysis has not yet been possible due to their anisotropic resolution. We present an artificial intelligence technique, "SynthSR," that takes clinical brain MRI scans with any MR contrast (T1, T2, etc.), orientation (axial/coronal/sagittal), and resolution and turns them into high-resolution T1 scans that are usable by virtually all existing human neuroimaging tools. We present results on segmentation, registration, and atlasing of >10,000 scans of controls and patients with brain tumors, strokes, and Alzheimer's disease. SynthSR yields morphometric results that are very highly correlated with what one would have obtained with high-resolution T1 scans. SynthSR allows sample sizes that have the potential to overcome the power limitations of prospective research studies and shed new light on the healthy and diseased human brain.
Affiliation(s)
- Juan E. Iglesias: Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA; Centre for Medical Image Computing, Department of Computer Science, University College London, London, UK; Computer Science and Artificial Intelligence Laboratory (CSAIL), Massachusetts Institute of Technology, Cambridge, MA, USA
- Benjamin Billot: Centre for Medical Image Computing, Department of Computer Science, University College London, London, UK
- Yaël Balbastre: Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Colin Magdamo: Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Steven E. Arnold: Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Sudeshna Das: Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Brian L. Edlow: Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA; Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA; Center for Neurotechnology and Neurorecovery, Massachusetts General Hospital, Boston, MA, USA
- Daniel C. Alexander: Centre for Medical Image Computing, Department of Computer Science, University College London, London, UK
- Polina Golland: Computer Science and Artificial Intelligence Laboratory (CSAIL), Massachusetts Institute of Technology, Cambridge, MA, USA
- Bruce Fischl: Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA; Computer Science and Artificial Intelligence Laboratory (CSAIL), Massachusetts Institute of Technology, Cambridge, MA, USA
17.
Flaus A, Deddah T, Reilhac A, De Leiris N, Janier M, Merida I, Grenier T, McGinnity CJ, Hammers A, Lartizien C, Costes N. PET image enhancement using artificial intelligence for better characterization of epilepsy lesions. Front Med (Lausanne) 2022; 9:1042706. [PMID: 36465898] [PMCID: PMC9708713] [DOI: 10.3389/fmed.2022.1042706]
Abstract
INTRODUCTION [18F]fluorodeoxyglucose ([18F]FDG) brain PET is used clinically to detect the small areas of decreased uptake associated with epileptogenic lesions, e.g., focal cortical dysplasias (FCDs), but its performance is limited by spatial resolution and low contrast. We aimed to develop a deep learning-based PET image enhancement method using simulated PET to improve lesion visualization. METHODS We created 210 numerical brain phantoms (MRI segmented into 9 regions) and assigned 10 plausible activity distributions (e.g., GM/WM ratios), resulting in 2100 ground-truth high-quality (GT-HQ) PET phantoms. With a validated Monte Carlo PET simulator, we then created 2100 simulated standard-quality (S-SQ) [18F]FDG scans. We trained a ResNet on 80% of this dataset (10% used for validation) to learn the mapping from S-SQ to GT-HQ PET, outputting a predicted HQ (P-HQ) PET. For the remaining 10%, we assessed peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and root mean squared error (RMSE) against the GT-HQ PET, and computed recovery coefficients (RCs) and coefficients of variation (COVs) for GM and WM. We also created lesioned GT-HQ phantoms, S-SQ PET, and P-HQ PET with simulated small hypometabolic lesions characteristic of FCDs, and evaluated lesion detectability on S-SQ and P-HQ PET both visually and by measuring the relative lesion activity (RLA: measured activity in the reduced-activity ROI over that in the standard-activity ROI). Lastly, we applied the trained ResNet to 10 clinical epilepsy PET scans to predict the corresponding HQ PET and assessed image quality and reader confidence. RESULTS Compared with S-SQ PET, P-HQ PET improved PSNR, SSIM, and RMSE; significantly improved GM RCs (from 0.29 ± 0.03 to 0.79 ± 0.04) and WM RCs (from 0.49 ± 0.03 to 1 ± 0.05); mean COVs were not statistically different. Visual lesion detection improved from 38% to 75%, with the average RLA decreasing from 0.83 ± 0.08 to 0.67 ± 0.14. The visual quality of P-HQ clinical PET improved, as did reader confidence. CONCLUSION P-HQ PET showed improved image quality over S-SQ PET across several objective quantitative metrics and increased detectability of simulated lesions, and the model generalized to clinical data. Further evaluation is required to study the generalization of our method and to assess clinical performance in larger cohorts.
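The relative lesion activity (RLA) used in this abstract is a simple ratio of mean ROI activities, where lower values mean a more conspicuous hypometabolic lesion. A minimal sketch follows, with made-up voxel values chosen only to mirror the reported 0.83 (standard) versus 0.67 (enhanced) averages:

```python
def relative_lesion_activity(lesion_roi, standard_roi):
    # RLA = mean activity in the reduced-activity (lesion) ROI
    #       divided by mean activity in the standard-activity ROI.
    return (sum(lesion_roi) / len(lesion_roi)) / (sum(standard_roi) / len(standard_roi))

# Illustrative voxel activities (arbitrary units):
healthy_roi = [10.0, 10.1, 9.9]
lesion_standard = [8.2, 8.3, 8.4]   # blurred lesion in standard-quality PET
lesion_enhanced = [6.6, 6.7, 6.8]   # more contrast recovered after enhancement
```

Here `relative_lesion_activity(lesion_enhanced, healthy_roi)` is about 0.67 versus about 0.83 for the standard image, so the simulated lesion stands out more after enhancement.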
Affiliation(s)
- Anthime Flaus: Department of Nuclear Medicine, Hospices Civils de Lyon, Lyon, France; Faculté de Médecine Lyon Est, Université Claude Bernard Lyon 1, Lyon, France; King's College London and Guy's and St Thomas' PET Centre, School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom; Univ Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, CNRS, INSERM, CREATIS UMR 5220, Lyon, France; Lyon Neuroscience Research Center, INSERM U1028/CNRS UMR5292, Lyon, France; CERMEP-Life Imaging, Lyon, France
- Anthonin Reilhac: Brain Health Imaging Centre, Centre for Addiction and Mental Health (CAMH), Toronto, ON, Canada
- Nicolas De Leiris: Department of Nuclear Medicine, CHU Grenoble Alpes, University Grenoble Alpes, Grenoble, France; Laboratoire Radiopharmaceutiques Biocliniques, University Grenoble Alpes, INSERM, CHU Grenoble Alpes, Grenoble, France
- Marc Janier: Department of Nuclear Medicine, Hospices Civils de Lyon, Lyon, France; Faculté de Médecine Lyon Est, Université Claude Bernard Lyon 1, Lyon, France
- Thomas Grenier: Univ Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, CNRS, INSERM, CREATIS UMR 5220, Lyon, France
- Colm J. McGinnity: Univ Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, CNRS, INSERM, CREATIS UMR 5220, Lyon, France
- Alexander Hammers: King's College London and Guy's and St Thomas' PET Centre, School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
- Carole Lartizien: Univ Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, CNRS, INSERM, CREATIS UMR 5220, Lyon, France
- Nicolas Costes: Lyon Neuroscience Research Center, INSERM U1028/CNRS UMR5292, Lyon, France; CERMEP-Life Imaging, Lyon, France
18.
Khojaste-Sarakhsi M, Haghighi SS, Ghomi SF, Marchiori E. Deep learning for Alzheimer's disease diagnosis: A survey. Artif Intell Med 2022; 130:102332. [DOI: 10.1016/j.artmed.2022.102332]
19.
Visvikis D, Lambin P, Beuschau Mauridsen K, Hustinx R, Lassmann M, Rischpler C, Shi K, Pruim J. Application of artificial intelligence in nuclear medicine and molecular imaging: a review of current status and future perspectives for clinical translation. Eur J Nucl Med Mol Imaging 2022; 49:4452-4463. [PMID: 35809090] [PMCID: PMC9606092] [DOI: 10.1007/s00259-022-05891-w]
Abstract
Artificial intelligence (AI) will change the face of nuclear medicine and molecular imaging, as it will everyday life. In this review, we focus on the potential applications of AI in the field, from both a physical perspective (radiomics, underlying statistics, image reconstruction, and data analysis) and a clinical one (neurology, cardiology, oncology). Challenges in transferring research into clinical practice are discussed, as is the concept of explainable AI. Finally, we identify the areas where challenges must be addressed to introduce AI into nuclear medicine and molecular imaging in a reliable manner.
Affiliation(s)
- Philippe Lambin: The D-Lab, Department of Precision Medicine, GROW - School for Oncology, Maastricht University Medical Center (MUMC+), Maastricht, The Netherlands; Department of Radiology and Nuclear Medicine, GROW - School for Oncology, Maastricht University Medical Center (MUMC+), Maastricht, The Netherlands
- Kim Beuschau Mauridsen: Center of Functionally Integrative Neuroscience and MindLab, Department of Clinical Medicine, Aarhus University, Aarhus, Denmark; Department of Nuclear Medicine, University of Bern, Bern, Switzerland
- Roland Hustinx: GIGA-CRC in Vivo Imaging, University of Liège, GIGA, Avenue de l'Hôpital 11, 4000 Liege, Belgium
- Michael Lassmann: Klinik und Poliklinik für Nuklearmedizin, Universitätsklinikum Würzburg, Würzburg, Germany
- Christoph Rischpler: Department of Nuclear Medicine, University Hospital Essen, University of Duisburg-Essen, Essen, Germany
- Kuangyu Shi: Department of Nuclear Medicine, University of Bern, Bern, Switzerland; Department of Informatics, Technical University of Munich, Munich, Germany
- Jan Pruim: Medical Imaging Center, Department of Nuclear Medicine and Molecular Imaging, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
20.
Shokraei Fard A, Reutens DC, Vegh V. From CNNs to GANs for cross-modality medical image estimation. Comput Biol Med 2022; 146:105556. [DOI: 10.1016/j.compbiomed.2022.105556]
21.
Applications of generative adversarial networks (GANs) in positron emission tomography (PET) imaging: A review. Eur J Nucl Med Mol Imaging 2022; 49:3717-3739. [PMID: 35451611] [DOI: 10.1007/s00259-022-05805-w]
Abstract
PURPOSE This paper reviews recent applications of generative adversarial networks (GANs) in positron emission tomography (PET) imaging. Recent advances in deep learning (DL) and GANs have catalysed research into their applications in medical imaging modalities; as a result, several unique GAN topologies have emerged and been assessed experimentally over the last two years. METHODS This work extensively describes GAN architectures and their applications in PET imaging. Relevant publications were identified via approved publication indexing websites and repositories; Web of Science, Scopus, and Google Scholar were the major sources of information. RESULTS The search identified one hundred articles addressing PET imaging applications such as attenuation correction, denoising, scatter correction, artefact removal, image fusion, high-dose image estimation, super-resolution, segmentation, and cross-modality synthesis. These applications are presented together with the corresponding research works. CONCLUSION GANs are rapidly being adopted for PET imaging tasks. However, specific limitations must be eliminated for them to reach their full potential and gain the medical community's trust in everyday clinical practice.
22.
Artificial intelligence in gastrointestinal and hepatic imaging: past, present and future scopes. Clin Imaging 2022; 87:43-53. [DOI: 10.1016/j.clinimag.2022.04.007]
23.
Generative adversarial networks in brain imaging: A narrative review. J Imaging 2022; 8(4):83. [PMID: 35448210] [PMCID: PMC9028488] [DOI: 10.3390/jimaging8040083]
Abstract
Artificial intelligence (AI) is expected to have a major effect on radiology, having demonstrated remarkable progress in many clinical tasks, mostly regarding the detection, segmentation, classification, monitoring, and prediction of diseases. Generative adversarial networks (GANs) have been proposed as one of the most exciting applications of deep learning in radiology: a new approach that leverages adversarial learning to tackle a wide array of computer vision challenges. Brain radiology was one of the first fields where GANs found application. In neuroradiology, GANs open unexplored scenarios, enabling new processes such as image-to-image and cross-modality synthesis, image reconstruction, image segmentation, image synthesis, data augmentation, disease progression modelling, and brain decoding. In this narrative review, we provide an introduction to GANs in brain imaging, discussing their clinical potential, future clinical applications, and pitfalls that radiologists should be aware of.
24.
Pain CD, Egan GF, Chen Z. Deep learning-based image reconstruction and post-processing methods in positron emission tomography for low-dose imaging and resolution enhancement. Eur J Nucl Med Mol Imaging 2022; 49:3098-3118. [PMID: 35312031] [PMCID: PMC9250483] [DOI: 10.1007/s00259-022-05746-4]
Abstract
Image processing plays a crucial role in maximising the diagnostic quality of positron emission tomography (PET) images. Recently, deep learning methods developed across many fields have shown tremendous potential when applied to medical image enhancement, resulting in a rich and rapidly advancing literature on this subject. This review encapsulates methods for integrating deep learning into PET image reconstruction and post-processing for low-dose imaging and resolution enhancement. A brief introduction to conventional image processing techniques in PET is presented first. We then review methods that integrate deep learning into the image reconstruction framework, either as deep learning-based regularisation or as a fully data-driven mapping from measured signal to images. Deep learning-based post-processing methods for low-dose imaging, temporal resolution enhancement, and spatial resolution enhancement are also reviewed. Finally, the challenges associated with applying deep learning to enhance PET images in the clinical setting are discussed, and future research directions to address them are presented.
Affiliation(s)
- Cameron Dennis Pain: Monash Biomedical Imaging, Monash University, Melbourne, Australia; Department of Electrical and Computer Systems Engineering, Monash University, Melbourne, Australia
- Gary F Egan: Monash Biomedical Imaging, Monash University, Melbourne, Australia; Turner Institute for Brain and Mental Health, Monash University, Melbourne, Australia
- Zhaolin Chen: Monash Biomedical Imaging, Monash University, Melbourne, Australia; Department of Data Science and AI, Monash University, Melbourne, Australia
25.
Jeong JJ, Tariq A, Adejumo T, Trivedi H, Gichoya JW, Banerjee I. Systematic review of generative adversarial networks (GANs) for medical image classification and segmentation. J Digit Imaging 2022; 35:137-152. [PMID: 35022924] [PMCID: PMC8921387] [DOI: 10.1007/s10278-021-00556-w]
Abstract
In recent years, generative adversarial networks (GANs) have gained tremendous popularity for imaging-related tasks such as artificial image generation to support AI training. GANs are especially useful for medical imaging tasks, where training datasets are usually limited in size and heavily imbalanced against the diseased class. We present a systematic review, following the PRISMA guidelines, of recent GAN architectures used for medical image analysis, to help readers make an informed decision before employing GANs in medical image classification and segmentation models. We extracted 54 papers, published between January 2015 and August 2020 and meeting our inclusion criteria for meta-analysis, that highlight the capabilities and applications of GANs in medical imaging. Our results show four main GAN architectures used for segmentation or classification in medical imaging. We provide a comprehensive overview of recent trends in the application of GANs to clinical diagnosis through medical image segmentation and classification, and share experiences from task-based GAN implementations.
Affiliation(s)
- Jiwoong J Jeong: Department of Biomedical Informatics, Emory School of Medicine, Atlanta, USA
- Amara Tariq: Department of Biomedical Informatics, Emory School of Medicine, Atlanta, USA
- Hari Trivedi: Department of Radiology, Emory School of Medicine, Atlanta, USA
- Judy W Gichoya: Department of Radiology, Emory School of Medicine, Atlanta, USA
- Imon Banerjee: Department of Biomedical Informatics and Department of Radiology, Emory School of Medicine, Atlanta, USA
26
Bogdanovic B, Solari EL, Villagran Asiares A, McIntosh L, van Marwick S, Schachoff S, Nekolla SG. PET/MR Technology: Advancement and Challenges. Semin Nucl Med 2021;52:340-355. [PMID: 34969520] [DOI: 10.1053/j.semnuclmed.2021.11.014]
Abstract
The writing of this article coincided with the 11th anniversary of the installation of our PET/MR device in Munich, which was in fact the first fully integrated device in clinical use. During this time, we have observed many interesting behaviors, to put it kindly. More importantly, over this period our understanding of the system also improved, including its advantages and limitations from technical, logistical, and medical perspectives. The last decade of PET/MRI research has certainly been characterized by most sites looking for a "key application." There were many ideas in this context, both before and after the devices became available, some of which built on earlier work integrating data from separate devices. These involved validating classical PET methods against MRI (eg, perfusion or oncology diagnostics). More important, however, were the scenarios where intermodal synergies could be expected. In this review, we look back on this decade-long journey, at the challenges overcome and those still to come.
Affiliation(s)
- Borjana Bogdanovic: Department of Nuclear Medicine, Klinikum rechts der Isar, Technische Universität München, Munich, Germany
- Esteban Lucas Solari: Department of Nuclear Medicine, Klinikum rechts der Isar, Technische Universität München, Munich, Germany
- Alberto Villagran Asiares: Department of Nuclear Medicine, Klinikum rechts der Isar, Technische Universität München, Munich, Germany
- Lachlan McIntosh: Department of Physical Sciences, Peter MacCallum Cancer Centre, Melbourne, Australia
- Sandra van Marwick: Department of Nuclear Medicine, Klinikum rechts der Isar, Technische Universität München, Munich, Germany
- Sylvia Schachoff: Department of Nuclear Medicine, Klinikum rechts der Isar, Technische Universität München, Munich, Germany
- Stephan G Nekolla: Department of Nuclear Medicine, Klinikum rechts der Isar, Technische Universität München, Munich, Germany; DZHK (German Centre for Cardiovascular Research), partner site Munich Heart Alliance, Munich, Germany
27
Song TA, Yang F, Dutta J. Noise2Void: unsupervised denoising of PET images. Phys Med Biol 2021;66. [PMID: 34663767] [DOI: 10.1088/1361-6560/ac30a0]
Abstract
Objective: Elevated noise levels in positron emission tomography (PET) images lower image quality and quantitative accuracy and are a confounding factor for clinical interpretation. The objective of this paper is to develop a PET image denoising technique based on unsupervised deep learning. Significance: Recent advances in deep learning have ushered in a wide array of novel denoising techniques, several of which have been successfully adapted for PET image reconstruction and post-processing. The bulk of the deep learning research so far has focused on supervised learning schemes, which, for the image denoising problem, require paired noisy and noiseless/low-noise images. This requirement tends to limit the utility of these methods for medical applications, as paired training datasets are not always available. Furthermore, to achieve the best-case performance of these methods, it is essential that the datasets for training and subsequent real-world application have consistent image characteristics (e.g. noise, resolution, etc), which is rarely the case for clinical data. To circumvent these challenges, it is critical to develop unsupervised techniques that obviate the need for paired training data. Approach: In this paper, we have adapted Noise2Void, a technique that relies on corrupt images alone for model training, for PET image denoising and assessed its performance using PET neuroimaging data. Noise2Void is an unsupervised approach that uses a blind-spot network design. It requires only a single noisy image as its input and is therefore well suited for clinical settings. During the training phase, a single noisy PET image serves as both the input and the target. Here we present a modified version of Noise2Void based on a transfer learning paradigm that involves group-level pretraining followed by individual fine-tuning. Furthermore, we investigate the impact of incorporating an anatomical image as a second input to the network. Main Results: We validated our denoising technique using simulation data based on the BrainWeb digital phantom. We show that Noise2Void with pretraining and/or anatomical guidance leads to higher peak signal-to-noise ratios than traditional denoising schemes such as Gaussian filtering, anatomically guided non-local means filtering, and block-matching and 4D filtering. We used the Noise2Noise denoising technique as an additional benchmark. For clinical validation, we applied this method to human brain imaging datasets. The clinical findings were consistent with the simulation results, confirming the translational value of Noise2Void as a denoising tool.
Affiliation(s)
- Tzu-An Song: University of Massachusetts Lowell, Lowell, MA 01854, United States of America
- Fan Yang: University of Massachusetts Lowell, Lowell, MA 01854, United States of America
- Joyita Dutta: University of Massachusetts Lowell, Lowell, MA 01854, United States of America; Massachusetts General Hospital, Boston, MA 02114, United States of America
28
Zhang Z, Yu S, Qin W, Liang X, Xie Y, Cao G. Self-supervised CT super-resolution with hybrid model. Comput Biol Med 2021;138:104775. [PMID: 34666243] [DOI: 10.1016/j.compbiomed.2021.104775]
Abstract
Software-based methods can improve CT spatial resolution without changing the scanner hardware or increasing the radiation dose to the object. In this work, we aim to develop a deep learning (DL) based CT super-resolution (SR) method that can reconstruct low-resolution (LR) sinograms into high-resolution (HR) CT images. We mathematically analyzed the imaging processes in the CT SR problem and synergistically integrated an SR model in the sinogram domain and a deblurring model in the image domain into a hybrid model (SADIR). SADIR incorporates CT domain knowledge and is unrolled into a DL network (SADIR-Net). SADIR-Net is a self-supervised network that can be trained and tested with a single sinogram. SADIR-Net was evaluated through SR CT imaging of a Catphan700 physical phantom and a real porcine phantom, and its performance was compared to other state-of-the-art (SotA) DL-based CT SR methods. On both phantoms, SADIR-Net obtains the highest information fidelity criterion (IFC) and structural similarity index (SSIM) and the lowest root-mean-square error (RMSE). As to the modulation transfer function (MTF), SADIR-Net also obtains the best result, improving MTF50% by 69.2% and MTF10% by 69.5% compared with FBP. Put another way, the spatial resolutions at MTF50% and MTF10% from SADIR-Net reach 91.3% and 89.3% of the counterparts reconstructed from the HR sinogram with FBP. The results show that SADIR-Net can provide performance comparable to the other SotA methods for CT SR reconstruction, especially in the case of extremely limited training data or even no data at all. Thus, the SADIR method could find use in improving CT resolution without changing the scanner hardware or increasing the radiation dose to the object.
Affiliation(s)
- Zhicheng Zhang: Department of Radiation Oncology, Stanford University, Stanford, CA 94305-5847, USA; Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, China
- Shaode Yu: College of Information and Communication Engineering, Communication University of China, Beijing 100024, China
- Wenjian Qin: Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, China
- Xiaokun Liang: Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, China
- Yaoqin Xie: Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, China
- Guohua Cao: Virginia Polytechnic Institute & State University, Blacksburg, VA 24061, USA
29
Liu G, Cao Z, Xu Q, Zhang Q, Yang F, Xie X, Hao J, Shi Y, Bernhardt BC, He Y, Shi F, Lu G, Zhang Z. Recycling diagnostic MRI for empowering brain morphometric research - Critical & practical assessment on learning-based image super-resolution. Neuroimage 2021;245:118687. [PMID: 34732323] [DOI: 10.1016/j.neuroimage.2021.118687]
Abstract
Preliminary studies have shown the feasibility of deep learning (DL)-based super-resolution (SR) techniques for reconstructing thick-slice/gap diagnostic MR images into high-resolution isotropic data, which would be of great significance for the brain research field if the vast amount of diagnostic MRI data could be brought into brain morphometric studies. However, little evidence has addressed the practicability of this strategy, owing to the lack of a large sample of real data for constructing DL models. In this work, we employed a large cohort (n = 2052) with both low through-plane-resolution diagnostic and high-resolution isotropic brain MR images from the same subjects. By leveraging a series of SR approaches, including a proposed novel DL algorithm, the Structure Constrained Super Resolution Network (SCSRN), the diagnostic images were transformed into high-resolution isotropic data meeting the criteria of brain research for voxel-based and surface-based morphometric analyses. We comprehensively assessed image quality and the practicability of the reconstructed data in a variety of morphometric analysis scenarios, and further compared the performance of the SR approaches against the ground-truth high-resolution isotropic data. The results showed that (i) DL-based SR algorithms generally improve the quality of diagnostic images and render morphometric analysis more accurate, with the novel SCSRN approach performing best; (ii) accuracies vary across brain structures and methods; and (iii) performance gains were higher for voxel-based than for surface-based approaches. This study supports the view that DL-based image super-resolution can potentially recycle the huge amount of routine diagnostic brain MRI lying dormant in archives, turning it into useful data for morphometric research.
Affiliation(s)
- Gaoping Liu: Department of Diagnostic Radiology, Affiliated Jinling Hospital, Medical School of Nanjing University, #305 East Zhongshan Rd, Nanjing, Jiangsu 210002, China
- Zehong Cao: Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China; School of Biomedical Engineering, Southern Medical University, Guangzhou, China
- Qiang Xu: Department of Diagnostic Radiology, Affiliated Jinling Hospital, Medical School of Nanjing University, #305 East Zhongshan Rd, Nanjing, Jiangsu 210002, China
- Qirui Zhang: Department of Diagnostic Radiology, Affiliated Jinling Hospital, Medical School of Nanjing University, #305 East Zhongshan Rd, Nanjing, Jiangsu 210002, China
- Fang Yang: Department of Neurology, Jinling Hospital, Nanjing University School of Medicine, Nanjing 210002, China
- Xinyu Xie: Department of Diagnostic Radiology, Affiliated Jinling Hospital, Medical School of Nanjing University, #305 East Zhongshan Rd, Nanjing, Jiangsu 210002, China
- Jingru Hao: Department of Diagnostic Radiology, Affiliated Jinling Hospital, Medical School of Nanjing University, #305 East Zhongshan Rd, Nanjing, Jiangsu 210002, China
- Yinghuan Shi: Department of Computer Science and Technology, Nanjing University, Nanjing 210046, China
- Boris C Bernhardt: Multimodal Imaging and Connectome Analysis Laboratory, Montreal Neurological Institute and Hospital, McGill University, Montreal, Canada
- Yichu He: Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Feng Shi: Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Guangming Lu: Department of Diagnostic Radiology, Affiliated Jinling Hospital, Medical School of Nanjing University, #305 East Zhongshan Rd, Nanjing, Jiangsu 210002, China; State Key Laboratory of Analytical Chemistry for Life Science, Nanjing University, Nanjing 210093, China
- Zhiqiang Zhang: Department of Diagnostic Radiology, Affiliated Jinling Hospital, Medical School of Nanjing University, #305 East Zhongshan Rd, Nanjing, Jiangsu 210002, China; State Key Laboratory of Analytical Chemistry for Life Science, Nanjing University, Nanjing 210093, China
30
Liu J, Malekzadeh M, Mirian N, Song TA, Liu C, Dutta J. Artificial Intelligence-Based Image Enhancement in PET Imaging: Noise Reduction and Resolution Enhancement. PET Clin 2021;16:553-576. [PMID: 34537130] [PMCID: PMC8457531] [DOI: 10.1016/j.cpet.2021.06.005]
Abstract
High noise and low spatial resolution are two key confounding factors that limit the qualitative and quantitative accuracy of PET images. Artificial intelligence models for image denoising and deblurring are becoming increasingly popular for the postreconstruction enhancement of PET images. We present a detailed review of recent efforts for artificial intelligence-based PET image enhancement with a focus on network architectures, data types, loss functions, and evaluation metrics. We also highlight emerging areas in this field that are quickly gaining popularity, identify barriers to large-scale adoption of artificial intelligence models for PET image enhancement, and discuss future directions.
Affiliation(s)
- Juan Liu: Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Masoud Malekzadeh: Department of Electrical and Computer Engineering, University of Massachusetts Lowell, 1 University Avenue, Ball 301, Lowell, MA 01854, USA
- Niloufar Mirian: Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Tzu-An Song: Department of Electrical and Computer Engineering, University of Massachusetts Lowell, 1 University Avenue, Ball 301, Lowell, MA 01854, USA
- Chi Liu: Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Joyita Dutta: Department of Electrical and Computer Engineering, University of Massachusetts Lowell, 1 University Avenue, Ball 301, Lowell, MA 01854, USA; Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
31
Iglesias JE, Billot B, Balbastre Y, Tabari A, Conklin J, Gilberto González R, Alexander DC, Golland P, Edlow BL, Fischl B. Joint super-resolution and synthesis of 1 mm isotropic MP-RAGE volumes from clinical MRI exams with scans of different orientation, resolution and contrast. Neuroimage 2021;237:118206. [PMID: 34048902] [PMCID: PMC8354427] [DOI: 10.1016/j.neuroimage.2021.118206]
Abstract
Most existing algorithms for automatic 3D morphometry of human brain MRI scans are designed for data with near-isotropic voxels at approximately 1 mm resolution, and frequently have contrast constraints as well, typically requiring T1-weighted images (e.g., MP-RAGE scans). This limitation prevents the analysis of the millions of MRI scans acquired with large inter-slice spacing in clinical settings every year. In turn, the inability to quantitatively analyze these scans hinders the adoption of quantitative neuroimaging in healthcare, and also precludes research studies that could attain huge sample sizes and hence greatly improve our understanding of the human brain. Recent advances in convolutional neural networks (CNNs) are producing outstanding results in super-resolution and contrast synthesis of MRI. However, these approaches are very sensitive to the specific combination of contrast, resolution and orientation of the input images, and thus do not generalize to diverse clinical acquisition protocols, even within sites. In this article, we present SynthSR, a method to train a CNN that receives one or more scans with spaced slices, acquired with different contrast, resolution and orientation, and produces an isotropic scan of canonical contrast (typically a 1 mm MP-RAGE). The presented method does not require any preprocessing beyond rigid coregistration of the input scans. Crucially, SynthSR trains on synthetic input images generated from 3D segmentations, and can thus be used to train CNNs for any combination of contrasts, resolutions and orientations without high-resolution real images of the input contrasts. We test the images generated with SynthSR in an array of common downstream analyses and show that they can be reliably used for subcortical segmentation and volumetry, image registration (e.g., for tensor-based morphometry), and, if some image quality requirements are met, even cortical thickness morphometry. The source code is publicly available at https://github.com/BBillot/SynthSR.
Affiliation(s)
- Juan Eugenio Iglesias: Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, UK; Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, USA; Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Boston, USA
- Benjamin Billot: Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, UK
- Yaël Balbastre: Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, UK
- Azadeh Tabari: Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, USA; Department of Radiology, Massachusetts General Hospital, Boston, USA
- John Conklin: Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, USA; Department of Radiology, Massachusetts General Hospital, Boston, USA
- R Gilberto González: Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, USA; Neuroradiology Division, Massachusetts General Hospital, Boston, USA
- Daniel C Alexander: Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, UK
- Polina Golland: Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Boston, USA
- Brian L Edlow: Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, USA; Center for Neurotechnology and Neurorecovery, Massachusetts General Hospital, Boston, USA
- Bruce Fischl: Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, USA
32
Liu M, Wang K, Ji R, Ge SS, Chen J. Pose transfer generation with semantic parsing attention network for person re-identification. Knowl Based Syst 2021. [DOI: 10.1016/j.knosys.2021.107024]
33
Chlap P, Min H, Vandenberg N, Dowling J, Holloway L, Haworth A. A review of medical image data augmentation techniques for deep learning applications. J Med Imaging Radiat Oncol 2021;65:545-563. [PMID: 34145766] [DOI: 10.1111/1754-9485.13261]
Abstract
Research in artificial intelligence for radiology and radiotherapy has recently become increasingly reliant on deep learning-based algorithms. While the models these algorithms produce can significantly outperform more traditional machine learning methods, they rely on larger datasets being available for training. To address this issue, data augmentation has become a popular method for increasing the size of a training dataset, particularly in fields where large datasets are not typically available, which is often the case when working with medical images. Data augmentation aims to generate additional data that is used to train the model, and it has been shown to improve performance when the model is validated on a separate unseen dataset. Because this approach has become commonplace, to help readers understand the types of data augmentation techniques used in state-of-the-art deep learning models, we conducted a systematic review of the literature in which data augmentation was used on medical images (limited to CT and MRI) to train a deep learning model. Articles were categorised into basic, deformable, deep learning or other data augmentation techniques. As artificial intelligence models trained using augmented data make their way into the clinic, this review aims to give insight into these techniques and confidence in the validity of the models produced.
Affiliation(s)
- Phillip Chlap: South Western Sydney Clinical School, University of New South Wales, Sydney, New South Wales, Australia; Ingham Institute for Applied Medical Research, Sydney, New South Wales, Australia; Liverpool and Macarthur Cancer Therapy Centre, Liverpool Hospital, Sydney, New South Wales, Australia
- Hang Min: South Western Sydney Clinical School, University of New South Wales, Sydney, New South Wales, Australia; Ingham Institute for Applied Medical Research, Sydney, New South Wales, Australia; The Australian e-Health and Research Centre, CSIRO Health and Biosecurity, Brisbane, Queensland, Australia
- Nym Vandenberg: Institute of Medical Physics, University of Sydney, Sydney, New South Wales, Australia
- Jason Dowling: South Western Sydney Clinical School, University of New South Wales, Sydney, New South Wales, Australia; The Australian e-Health and Research Centre, CSIRO Health and Biosecurity, Brisbane, Queensland, Australia
- Lois Holloway: South Western Sydney Clinical School, University of New South Wales, Sydney, New South Wales, Australia; Ingham Institute for Applied Medical Research, Sydney, New South Wales, Australia; Liverpool and Macarthur Cancer Therapy Centre, Liverpool Hospital, Sydney, New South Wales, Australia; Institute of Medical Physics, University of Sydney, Sydney, New South Wales, Australia; Centre for Medical Radiation Physics, University of Wollongong, Wollongong, New South Wales, Australia
- Annette Haworth: Institute of Medical Physics, University of Sydney, Sydney, New South Wales, Australia
34
Cheng Z, Wen J, Huang G, Yan J. Applications of artificial intelligence in nuclear medicine image generation. Quant Imaging Med Surg 2021;11:2792-2822. [PMID: 34079744] [PMCID: PMC8107336] [DOI: 10.21037/qims-20-1078]
Abstract
Recently, the application of artificial intelligence (AI) in medical imaging (including nuclear medicine imaging) has developed rapidly. Most AI applications in nuclear medicine imaging have focused on diagnosis, treatment monitoring, and correlation analyses with pathology or specific gene mutations. AI can also be used for image generation to shorten the time of image acquisition, reduce the dose of injected tracer, and enhance image quality. This work provides an overview of the application of AI to image generation for single-photon emission computed tomography (SPECT) and positron emission tomography (PET), either without or with anatomical information [CT or magnetic resonance imaging (MRI)]. The review focuses on four aspects: imaging physics, image reconstruction, image postprocessing, and internal dosimetry. AI applications in generating attenuation maps, estimating scatter events, boosting image quality, and predicting internal dose maps are summarized and discussed.
Affiliation(s)
- Zhibiao Cheng: Department of Biomedical Engineering, School of Life Science, Beijing Institute of Technology, Beijing, China
- Junhai Wen: Department of Biomedical Engineering, School of Life Science, Beijing Institute of Technology, Beijing, China
- Gang Huang: Shanghai Key Laboratory of Molecular Imaging, Shanghai University of Medicine and Health Sciences, Shanghai, China
- Jianhua Yan: Shanghai Key Laboratory of Molecular Imaging, Shanghai University of Medicine and Health Sciences, Shanghai, China
35
Slart RHJA, Williams MC, Juarez-Orozco LE, Rischpler C, Dweck MR, Glaudemans AWJM, Gimelli A, Georgoulias P, Gheysens O, Gaemperli O, Habib G, Hustinx R, Cosyns B, Verberne HJ, Hyafil F, Erba PA, Lubberink M, Slomka P, Išgum I, Visvikis D, Kolossváry M, Saraste A. Position paper of the EACVI and EANM on artificial intelligence applications in multimodality cardiovascular imaging using SPECT/CT, PET/CT, and cardiac CT. Eur J Nucl Med Mol Imaging 2021;48:1399-1413. [PMID: 33864509] [PMCID: PMC8113178] [DOI: 10.1007/s00259-021-05341-z]
Abstract
In daily clinical practice, clinicians integrate available data to ascertain the diagnostic and prognostic probability of a disease or clinical outcome for their patients. For patients with suspected or known cardiovascular disease, several anatomical and functional imaging techniques are commonly performed to aid this endeavor, including coronary computed tomography angiography (CCTA) and nuclear cardiology imaging. Continuous improvement in positron emission tomography (PET), single-photon emission computed tomography (SPECT), and CT hardware and software has resulted in improved diagnostic performance and wide implementation of these imaging techniques in daily clinical practice. However, the human ability to interpret, quantify, and integrate these data sets is limited. The identification of novel markers and the application of machine learning (ML) algorithms, including deep learning (DL), to cardiovascular imaging techniques will further improve diagnosis and prognostication for patients with cardiovascular diseases. The goal of this position paper of the European Association of Nuclear Medicine (EANM) and the European Association of Cardiovascular Imaging (EACVI) is to provide an overview of the general concepts behind modern machine learning-based artificial intelligence, highlight currently preferred methods, practices, and computational models, and propose new strategies to support the clinical application of ML in the field of cardiovascular imaging using nuclear cardiology (hybrid) and CT techniques.
Collapse
Affiliation(s)
- Riemer H J A Slart
- Medical Imaging Centre, Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Hanzeplein 1, PO 9700 RB, Groningen, The Netherlands.
  - Faculty of Science and Technology Biomedical, Photonic Imaging, University of Twente, Enschede, The Netherlands
- Michelle C Williams
  - British Heart Foundation Centre for Cardiovascular Science, University of Edinburgh, Edinburgh, UK
  - Edinburgh Imaging facility QMRI, Edinburgh, UK
- Luis Eduardo Juarez-Orozco
  - Department of Cardiology, Division Heart & Lungs, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands
  - University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Christoph Rischpler
  - Department of Nuclear Medicine, University Hospital Essen, University of Duisburg-Essen, Essen, Germany
- Marc R Dweck
  - British Heart Foundation Centre for Cardiovascular Science, University of Edinburgh, Edinburgh, UK
  - Edinburgh Imaging facility QMRI, Edinburgh, UK
- Andor W J M Glaudemans
  - Medical Imaging Centre, Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Hanzeplein 1, PO 9700 RB, Groningen, The Netherlands
- Panagiotis Georgoulias
  - Department of Nuclear Medicine, Faculty of Medicine, University of Thessaly, University Hospital of Larissa, Larissa, Greece
- Olivier Gheysens
  - Department of Nuclear Medicine, Cliniques Universitaires Saint-Luc and Institute of Clinical and Experimental Research (IREC), Université catholique de Louvain (UCLouvain), Brussels, Belgium
- Gilbert Habib
  - APHM, Cardiology Department, La Timone Hospital, Marseille, France
  - IRD, APHM, MEPHI, IHU-Méditerranée Infection, Aix Marseille Université, Marseille, France
- Roland Hustinx
  - Division of Nuclear Medicine and Oncological Imaging, Department of Medical Physics, ULiège, Liège, Belgium
- Bernard Cosyns
  - Department of Cardiology, Centrum voor Hart en Vaatziekten, Universitair Ziekenhuis Brussel, 101 Laarbeeklaan, 1090 Brussels, Belgium
- Hein J Verberne
  - Department of Radiology and Nuclear Medicine, Amsterdam UMC, location AMC, University of Amsterdam, Amsterdam, The Netherlands
- Fabien Hyafil
  - Department of Nuclear Medicine, DMU IMAGINA, Georges-Pompidou European Hospital, Assistance Publique - Hôpitaux de Paris, F-75015 Paris, France
  - University of Paris, PARCC, INSERM, F-75006 Paris, France
- Paola A Erba
  - Medical Imaging Centre, Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Hanzeplein 1, PO 9700 RB, Groningen, The Netherlands
  - Department of Nuclear Medicine, University of Pisa, Pisa, Italy
  - Department of Translational Research and New Technology in Medicine, University of Pisa, Pisa, Italy
- Mark Lubberink
  - Department of Surgical Sciences/Radiology, Uppsala University, Uppsala, Sweden
  - Medical Physics, Uppsala University Hospital, Uppsala, Sweden
- Piotr Slomka
  - Department of Imaging, Medicine, and Biomedical Sciences, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Ivana Išgum
  - Department of Radiology and Nuclear Medicine, Amsterdam UMC, location AMC, University of Amsterdam, Amsterdam, The Netherlands
  - Department of Biomedical Engineering and Physics, Amsterdam UMC, location AMC, University of Amsterdam, 1105 AZ Amsterdam, The Netherlands
- Márton Kolossváry
  - MTA-SE Cardiovascular Imaging Research Group, Heart and Vascular Center, Semmelweis University, 68 Városmajor Street, Budapest, Hungary
- Antti Saraste
  - Turku PET Centre, Turku University Hospital, University of Turku, Turku, Finland
  - Heart Center, Turku University Hospital, Turku, Finland
36
Zaidi H, El Naqa I. Quantitative Molecular Positron Emission Tomography Imaging Using Advanced Deep Learning Techniques. Annu Rev Biomed Eng 2021; 23:249-276. [PMID: 33797938 DOI: 10.1146/annurev-bioeng-082420-020343] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
The widespread availability of high-performance computing and the popularity of artificial intelligence (AI) with machine learning and deep learning (ML/DL) algorithms at the helm have stimulated the development of many applications involving the use of AI-based techniques in molecular imaging research. Applications reported in the literature encompass various areas, including innovative design concepts in positron emission tomography (PET) instrumentation, quantitative image reconstruction and analysis techniques, computer-aided detection and diagnosis, as well as modeling and prediction of outcomes. This review reflects the tremendous interest in quantitative molecular imaging using ML/DL techniques during the past decade, ranging from the basic principles of ML/DL techniques to the various steps required for obtaining quantitatively accurate PET data, including algorithms used to denoise or correct for physical degrading factors as well as to quantify tracer uptake and metabolic tumor volume for treatment monitoring or radiation therapy treatment planning and response prediction. This review also addresses future opportunities and current challenges facing the adoption of ML/DL approaches and their role in multimodality imaging.
Affiliation(s)
- Habib Zaidi
  - Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, 1211 Geneva, Switzerland
  - Geneva Neuroscience Centre, University of Geneva, 1205 Geneva, Switzerland
  - Department of Nuclear Medicine and Molecular Imaging, University of Groningen, 9700 RB Groningen, The Netherlands
  - Department of Nuclear Medicine, University of Southern Denmark, DK-5000 Odense, Denmark
- Issam El Naqa
  - Department of Machine Learning, Moffitt Cancer Center, Tampa, Florida 33612, USA
  - Department of Radiation Oncology, University of Michigan, Ann Arbor, Michigan 48109, USA
  - Department of Oncology, McGill University, Montreal, Quebec H3A 1G5, Canada
37
Meikle SR, Sossi V, Roncali E, Cherry SR, Banati R, Mankoff D, Jones T, James M, Sutcliffe J, Ouyang J, Petibon Y, Ma C, El Fakhri G, Surti S, Karp JS, Badawi RD, Yamaya T, Akamatsu G, Schramm G, Rezaei A, Nuyts J, Fulton R, Kyme A, Lois C, Sari H, Price J, Boellaard R, Jeraj R, Bailey DL, Eslick E, Willowson KP, Dutta J. Quantitative PET in the 2020s: a roadmap. Phys Med Biol 2021; 66:06RM01. [PMID: 33339012 PMCID: PMC9358699 DOI: 10.1088/1361-6560/abd4f7] [Citation(s) in RCA: 25] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/08/2023]
Abstract
Positron emission tomography (PET) plays an increasingly important role in research and clinical applications, catalysed by remarkable technical advances and a growing appreciation of the need for reliable, sensitive biomarkers of human function in health and disease. Over the last 30 years, a large amount of the physics and engineering effort in PET has been motivated by the dominant clinical application during that period, oncology. This has led to important developments such as PET/CT, whole-body PET, 3D PET, accelerated statistical image reconstruction, and time-of-flight PET. Despite impressive improvements in image quality as a result of these advances, the emphasis on static, semi-quantitative 'hot spot' imaging for oncologic applications has meant that the capability of PET to quantify biologically relevant parameters based on tracer kinetics has not been fully exploited. More recent advances, such as PET/MR and total-body PET, have opened up the ability to address a vast range of new research questions, from which a future expansion of applications and radiotracers appears highly likely. Many of these new applications and tracers will, at least initially, require quantitative analyses that more fully exploit the exquisite sensitivity of PET and the tracer principle on which it is based. It is also expected that they will require more sophisticated quantitative analysis methods than those that are currently available. At the same time, artificial intelligence is revolutionizing data analysis and impacting the relationship between the statistical quality of the acquired data and the information we can extract from the data. In this roadmap, leaders of the key sub-disciplines of the field identify the challenges and opportunities to be addressed over the next ten years that will enable PET to realise its full quantitative potential, initially in research laboratories and, ultimately, in clinical practice.
Affiliation(s)
- Steven R Meikle
  - Sydney School of Health Sciences, Faculty of Medicine and Health, The University of Sydney, Australia
  - Brain and Mind Centre, The University of Sydney, Australia
- Vesna Sossi
  - Department of Physics and Astronomy, University of British Columbia, Canada
- Emilie Roncali
  - Department of Biomedical Engineering, University of California, Davis, United States of America
- Simon R Cherry
  - Department of Biomedical Engineering, University of California, Davis, United States of America
  - Department of Radiology, University of California, Davis, United States of America
- Richard Banati
  - Sydney School of Health Sciences, Faculty of Medicine and Health, The University of Sydney, Australia
  - Brain and Mind Centre, The University of Sydney, Australia
  - Australian Nuclear Science and Technology Organisation, Sydney, Australia
- David Mankoff
  - Department of Radiology, University of Pennsylvania, United States of America
- Terry Jones
  - Department of Radiology, University of California, Davis, United States of America
- Michelle James
  - Department of Radiology, Molecular Imaging Program at Stanford (MIPS), CA, United States of America
  - Department of Neurology and Neurological Sciences, Stanford University, CA, United States of America
- Julie Sutcliffe
  - Department of Biomedical Engineering, University of California, Davis, United States of America
  - Department of Internal Medicine, University of California, Davis, CA, United States of America
- Jinsong Ouyang
  - Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, United States of America
- Yoann Petibon
  - Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, United States of America
- Chao Ma
  - Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, United States of America
- Georges El Fakhri
  - Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, United States of America
- Suleman Surti
  - Department of Radiology, University of Pennsylvania, United States of America
- Joel S Karp
  - Department of Radiology, University of Pennsylvania, United States of America
- Ramsey D Badawi
  - Department of Biomedical Engineering, University of California, Davis, United States of America
  - Department of Radiology, University of California, Davis, United States of America
- Taiga Yamaya
  - National Institute of Radiological Sciences (NIRS), National Institutes for Quantum and Radiological Science and Technology (QST), Chiba, Japan
- Go Akamatsu
  - National Institute of Radiological Sciences (NIRS), National Institutes for Quantum and Radiological Science and Technology (QST), Chiba, Japan
- Georg Schramm
  - Department of Imaging and Pathology, Nuclear Medicine & Molecular Imaging, KU Leuven, Belgium
- Ahmadreza Rezaei
  - Department of Imaging and Pathology, Nuclear Medicine & Molecular Imaging, KU Leuven, Belgium
- Johan Nuyts
  - Department of Imaging and Pathology, Nuclear Medicine & Molecular Imaging, KU Leuven, Belgium
- Roger Fulton
  - Brain and Mind Centre, The University of Sydney, Australia
  - Department of Medical Physics, Westmead Hospital, Sydney, Australia
- André Kyme
  - Brain and Mind Centre, The University of Sydney, Australia
  - School of Biomedical Engineering, Faculty of Engineering and IT, The University of Sydney, Australia
- Cristina Lois
  - Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, United States of America
- Hasan Sari
  - Department of Radiology, Massachusetts General Hospital & Harvard Medical School, Boston, MA, United States of America
  - Athinoula A. Martinos Center, Massachusetts General Hospital & Harvard Medical School, Boston, MA, United States of America
- Julie Price
  - Department of Radiology, Massachusetts General Hospital & Harvard Medical School, Boston, MA, United States of America
  - Athinoula A. Martinos Center, Massachusetts General Hospital & Harvard Medical School, Boston, MA, United States of America
- Ronald Boellaard
  - Radiology and Nuclear Medicine, Cancer Center Amsterdam, Amsterdam University Medical Center, location VUMC, The Netherlands
- Robert Jeraj
  - Departments of Medical Physics, Human Oncology and Radiology, University of Wisconsin, United States of America
  - Faculty of Mathematics and Physics, University of Ljubljana, Slovenia
- Dale L Bailey
  - Sydney School of Health Sciences, Faculty of Medicine and Health, The University of Sydney, Australia
  - Department of Nuclear Medicine, Royal North Shore Hospital, Sydney, Australia
  - Faculty of Science, The University of Sydney, Australia
- Enid Eslick
  - Department of Nuclear Medicine, Royal North Shore Hospital, Sydney, Australia
- Kathy P Willowson
  - Department of Nuclear Medicine, Royal North Shore Hospital, Sydney, Australia
  - Faculty of Science, The University of Sydney, Australia
- Joyita Dutta
  - Department of Electrical and Computer Engineering, University of Massachusetts Lowell, United States of America
38
Torres-Velázquez M, Chen WJ, Li X, McMillan AB. Application and Construction of Deep Learning Networks in Medical Imaging. IEEE Transactions on Radiation and Plasma Medical Sciences 2021; 5:137-159. [PMID: 34017931 PMCID: PMC8132932 DOI: 10.1109/trpms.2020.3030611] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/22/2022]
Abstract
Deep learning (DL) approaches are part of the machine learning (ML) subfield concerned with the development of computational models to train artificial intelligence systems. DL models are characterized by automatically extracting high-level features from the input data to learn the relationship between matching datasets. Thus, their implementation offers an advantage over common ML methods that often require the practitioner to have some domain knowledge of the input data to select the best latent representation. As a result of this advantage, DL has been successfully applied within the medical imaging field to address problems, such as disease classification and tumor segmentation, for which it is difficult or impossible to determine which image features are relevant. Therefore, taking into consideration the positive impact of DL on the medical imaging field, this article reviews the key concepts associated with its evolution and implementation. The sections of this review summarize the milestones related to the development of the DL field, followed by a description of the elements of a deep neural network and an overview of its application within the medical imaging field. Subsequently, the key steps necessary to implement a supervised DL application are defined, and associated limitations are discussed.
Affiliation(s)
- Maribel Torres-Velázquez
  - Department of Biomedical Engineering, College of Engineering, University of Wisconsin-Madison, Madison, WI 53705, USA
- Wei-Jie Chen
  - Department of Electrical and Computer Engineering, College of Engineering, University of Wisconsin-Madison, Madison, WI 53705, USA
- Xue Li
  - Department of Radiology, University of Wisconsin School of Medicine and Public Health, University of Wisconsin-Madison, Madison, WI 53705, USA
- Alan B McMillan
  - Department of Radiology, University of Wisconsin School of Medicine and Public Health, University of Wisconsin-Madison, Madison, WI 53705, USA
  - Department of Medical Physics, University of Wisconsin-Madison, Madison, WI 53705, USA
39
Deep learning in Nuclear Medicine—focus on CNN-based approaches for PET/CT and PET/MR: where do we stand? Clin Transl Imaging 2021. [DOI: 10.1007/s40336-021-00411-6] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022]
40
Burgos N, Bottani S, Faouzi J, Thibeau-Sutre E, Colliot O. Deep learning for brain disorders: from data processing to disease treatment. Brief Bioinform 2020; 22:1560-1576. [PMID: 33316030 DOI: 10.1093/bib/bbaa310] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2020] [Revised: 10/09/2020] [Accepted: 10/13/2020] [Indexed: 12/19/2022] Open
Abstract
In order to reach precision medicine and improve patients' quality of life, machine learning is increasingly used in medicine. Brain disorders are often complex and heterogeneous, and several modalities such as demographic, clinical, imaging, genetics and environmental data have been studied to improve their understanding. Deep learning, a subfield of machine learning, provides complex algorithms that can learn from such various data. It has become state of the art in numerous fields, including computer vision and natural language processing, and is also increasingly applied in medicine. In this article, we review the use of deep learning for brain disorders. More specifically, we identify the main applications, the concerned disorders and the types of architectures and data used. Finally, we provide guidelines to bridge the gap between research studies and clinical routine.
41
Cao F, Yao K, Liang J. Deconvolutional neural network for image super-resolution. Neural Netw 2020; 132:394-404. [PMID: 33010715 DOI: 10.1016/j.neunet.2020.09.017] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2020] [Revised: 09/12/2020] [Accepted: 09/15/2020] [Indexed: 11/24/2022]
Abstract
This study builds a fully deconvolutional neural network (FDNN) and uses it to address single image super-resolution (SISR). Although SISR using deep neural networks has been a major research focus, the problem of reconstructing a high resolution (HR) image with an FDNN has received little attention. A few recent approaches toward SISR are to embed deconvolution operations into multilayer feedforward neural networks. This paper constructs a deep FDNN for SISR that possesses two remarkable advantages compared to existing SISR approaches. The first improves the network performance without increasing the depth of the network or embedding complex structures. The second replaces all convolution operations with deconvolution operations to implement an effective reconstruction. That is, the proposed FDNN only contains deconvolution layers and learns an end-to-end mapping from low resolution (LR) to HR images. Furthermore, to avoid the over-smoothing of the mean squared error loss, the trained image is treated as a probability distribution, and the Kullback-Leibler divergence is introduced into the final loss function to achieve enhanced recovery. Although the proposed FDNN has only 10 layers, extensive experiments show that it achieves better SISR performance than other state-of-the-art methods, including deep convolutional neural networks with 20 or 30 layers.
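The Kullback-Leibler term described in this abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function name `kl_sr_loss`, the per-image normalization scheme, and the `eps` smoothing constant are all assumptions made for the sketch.

```python
import numpy as np

def kl_sr_loss(sr, hr, eps=1e-8):
    """Treat each image as a probability distribution over its pixels and
    return KL(hr || sr), averaged over the batch.

    sr, hr: arrays of shape (batch, H, W) with non-negative intensities.
    """
    # Flatten spatial dimensions and normalize each image to sum to 1.
    p = hr.reshape(hr.shape[0], -1)
    q = sr.reshape(sr.shape[0], -1)
    p = p / (p.sum(axis=1, keepdims=True) + eps)
    q = q / (q.sum(axis=1, keepdims=True) + eps)
    # KL(p || q) = sum_i p_i * (log p_i - log q_i), smoothed by eps.
    return float(np.mean(np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=1)))
```

The loss is zero when the reconstruction matches the target and positive otherwise; per the abstract it is introduced into the final loss alongside the pixel-wise term, not used in isolation, and the normalization presumes non-negative intensities.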
Affiliation(s)
- Feilong Cao
  - Department of Applied Mathematics, College of Sciences, China Jiliang University, Hangzhou 310018, Zhejiang, China
- Kaixuan Yao
  - Key Laboratory of Computational Intelligence and Chinese Information Processing of Ministry of Education, School of Computer and Information Technology, Shanxi University, Taiyuan 030006, Shanxi, China
- Jiye Liang
  - Key Laboratory of Computational Intelligence and Chinese Information Processing of Ministry of Education, School of Computer and Information Technology, Shanxi University, Taiyuan 030006, Shanxi, China
42
Umehara K. [1. Deep Learning Super-resolution in Medical Imaging: What Is It and How to Use It]. Nihon Hoshasen Gijutsu Gakkai Zasshi 2020; 76:524-533. [PMID: 32435038 DOI: 10.6009/jjrt.2020_jsrt_76.5.524] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Affiliation(s)
- Kensuke Umehara
  - National Institutes for Quantum and Radiological Science and Technology