151
Deep learning for compressive sensing: a ubiquitous systems perspective. Artif Intell Rev 2022. [DOI: 10.1007/s10462-022-10259-5]
Abstract
Compressive sensing (CS) is a mathematically elegant tool for reducing the sensor sampling rate, potentially bringing context-awareness to a wider range of devices. Nevertheless, practical issues with the sampling and reconstruction algorithms prevent further proliferation of CS in real-world domains, especially among heterogeneous ubiquitous devices. Deep learning (DL) naturally complements CS for adapting the sampling matrix, reconstructing the signal, and learning from the compressed samples. While the CS-DL integration has received substantial research interest recently, it has not yet been thoroughly surveyed, nor has light been shed on the practical issues of bringing CS-DL to real-world implementations in the ubiquitous computing domain. In this paper we identify the main ways in which CS and DL can interplay, extract key ideas for making CS-DL efficient, outline major trends in the CS-DL research space, and derive guidelines for the future evolution of CS-DL within the ubiquitous computing domain.
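As a rough illustration of the sampling-and-reconstruction pipeline surveyed above, the sketch below compresses a sparse signal with a random sampling matrix and recovers it with plain ISTA; the matrix, sparsity level, and step size are illustrative assumptions rather than anything prescribed by the paper. A CS-DL system would replace the hand-tuned thresholding loop, and possibly the matrix Phi itself, with learned components.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, k = 256, 64, 8                         # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)   # k-sparse ground truth

Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random sampling (measurement) matrix
y = Phi @ x                                      # compressed measurements

# ISTA: iterative soft-thresholding for  min_z ||Phi z - y||^2 + lam * ||z||_1
lam = 0.05
step = 1.0 / np.linalg.norm(Phi, 2) ** 2         # 1 / Lipschitz constant of the gradient
z = np.zeros(n)
for _ in range(500):
    z = z - step * (Phi.T @ (Phi @ z - y))                       # gradient step
    z = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)     # soft threshold

print("relative reconstruction error:", np.linalg.norm(z - x) / np.linalg.norm(x))
```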
152
Liu J, Tian Y, Duzgol C, Akin O, Ağıldere AM, Haberal KM, Coşkun M. Virtual contrast enhancement for CT scans of abdomen and pelvis. Comput Med Imaging Graph 2022; 100:102094. [PMID: 35914340] [PMCID: PMC10227907] [DOI: 10.1016/j.compmedimag.2022.102094]
Abstract
Contrast agents are commonly used to highlight blood vessels, organs, and other structures in magnetic resonance imaging (MRI) and computed tomography (CT) scans. However, these agents may cause allergic reactions or nephrotoxicity, limiting their use in patients with kidney dysfunction. In this paper, we propose a generative adversarial network (GAN) based framework to automatically synthesize contrast-enhanced CTs directly from non-contrast CTs of the abdomen and pelvis. Respiratory and peristaltic motion can affect the pixel-level mapping of contrast enhancement, which makes this task more challenging than in other body parts. A perceptual loss is introduced to compare high-level semantic differences of the enhancement areas between the virtual and actual contrast-enhanced CT images. Furthermore, to synthesize intensity details accurately while preserving the texture structures of CT images, a dual-path training scheme is proposed to learn texture and structure features simultaneously. Experimental results on three contrast phases (arterial, portal, and delayed) show the potential to synthesize virtual contrast-enhanced CTs directly from non-contrast CTs of the abdomen and pelvis for clinical evaluation.
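The perceptual term mentioned above compares feature-space rather than pixel-space differences between the virtual and actual contrast-enhanced images. Below is a minimal sketch of such a loss, assuming a pretrained VGG-16 feature extractor and single-channel CT slices; the layer cut-off, weighting, and use of VGG are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16

class PerceptualLoss(nn.Module):
    """L1 distance between deep feature maps of synthetic and real contrast-enhanced CTs."""

    def __init__(self, cut: int = 16):
        super().__init__()
        feats = vgg16(weights="IMAGENET1K_V1").features[:cut].eval()
        for p in feats.parameters():
            p.requires_grad_(False)              # frozen feature extractor
        self.feats = feats
        self.l1 = nn.L1Loss()

    def forward(self, fake_ce: torch.Tensor, real_ce: torch.Tensor) -> torch.Tensor:
        # CT slices are single-channel; replicate to 3 channels for the ImageNet backbone
        f = self.feats(fake_ce.repeat(1, 3, 1, 1))
        r = self.feats(real_ce.repeat(1, 3, 1, 1))
        return self.l1(f, r)

# Typical use inside GAN training (generator G mapping non-contrast to contrast-enhanced CT):
#   loss_G = pixel_l1(G(nc_ct), ce_ct) + 0.1 * PerceptualLoss()(G(nc_ct), ce_ct) + adversarial_term
```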
Affiliation(s)
- Jingya Liu
- The City College of New York, New York, NY 10031, USA
- Yingli Tian
- The City College of New York, New York, NY 10031, USA
- Cihan Duzgol
- Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Oguz Akin
- Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
153
Ryu K, Alkan C, Vasanawala SS. Improving high frequency image features of deep learning reconstructions via k-space refinement with null-space kernel. Magn Reson Med 2022; 88:1263-1272. [PMID: 35426470] [PMCID: PMC9246879] [DOI: 10.1002/mrm.29261]
Abstract
PURPOSE Deep learning (DL) based reconstruction using unrolled neural networks has shown great potential in accelerating MRI. However, one of the major drawbacks is the loss of high-frequency details and textures in the output. The purpose of this study is to propose a novel refinement method that uses a null-space kernel to refine k-space and improve blurred image details and textures. METHODS The proposed method constrains the output of the DL reconstruction to comply with the linear neighborhood relationship calibrated in the auto-calibration lines. To demonstrate efficacy, we tested our refinement method on DL reconstructions under a variety of conditions (i.e., datasets, unrolled neural networks, and under-sampling schemes). Specifically, the method was tested on three large-scale public datasets (knee and brain) from fastMRI's multi-coil track. RESULTS The proposed scheme visually reduces the structural error in the k-space domain and enhances the homogeneity of the k-space intensity. Consequently, the reconstructed images are sharper, with enhanced details and textures. The proposed method also improves high-frequency image details (SSIM, GMSD) without sacrificing overall image error (PSNR). CONCLUSION Our findings imply that refining DL output using the proposed method may generally improve DL reconstruction, as tested with various large-scale datasets and networks.
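The refinement rests on the GRAPPA-like assumption that every k-space sample is approximately a fixed linear combination of its neighbors, with the combination weights calibrated from the auto-calibration (ACS) region. The sketch below estimates such weights by least squares and uses them to re-predict and blend the k-space of a DL output; the single-coil setting, 3x3 neighborhood, and blending factor are simplifying assumptions, not the paper's null-space formulation.

```python
import numpy as np

def calibrate_kernel(acs: np.ndarray) -> np.ndarray:
    """Fit weights so each ACS sample equals a weighted sum of its 3x3 neighbors (center excluded)."""
    H, W = acs.shape
    nbrs, targets = [], []
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            patch = acs[i - 1:i + 2, j - 1:j + 2].flatten()
            nbrs.append(np.delete(patch, 4))          # the 8 surrounding samples
            targets.append(acs[i, j])
    w, *_ = np.linalg.lstsq(np.array(nbrs), np.array(targets), rcond=None)
    return w

def refine_kspace(kspace: np.ndarray, w: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Blend the network's k-space with its kernel-predicted (calibration-consistent) version."""
    H, W = kspace.shape
    pred = kspace.copy()
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            patch = kspace[i - 1:i + 2, j - 1:j + 2].flatten()
            pred[i, j] = np.delete(patch, 4) @ w
    return alpha * pred + (1 - alpha) * kspace

# e.g. k_refined = refine_kspace(np.fft.fft2(dl_image), calibrate_kernel(acs_block))
```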
Affiliation(s)
- Kanghyun Ryu
- Department of Radiology, Stanford University, CA, USA
- Cagan Alkan
- Department of Electrical Engineering, Stanford University, CA, USA
154
Shoeibi A, Moridian P, Khodatars M, Ghassemi N, Jafari M, Alizadehsani R, Kong Y, Gorriz JM, Ramírez J, Khosravi A, Nahavandi S, Acharya UR. An overview of deep learning techniques for epileptic seizures detection and prediction based on neuroimaging modalities: Methods, challenges, and future works. Comput Biol Med 2022; 149:106053. [DOI: 10.1016/j.compbiomed.2022.106053]
155
DIIK-Net: A Full-resolution Cross-domain Deep Interaction Convolutional Neural Network for MR Image Reconstruction. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.09.048]
156
Shastri SK, Ahmad R, Metzler CA, Schniter P. Denoising Generalized Expectation-Consistent Approximation for MR Image Recovery. IEEE J Sel Areas Inf Theory 2022; 3:528-542. [PMID: 36970644] [PMCID: PMC10032362] [DOI: 10.1109/jsait.2022.3207109]
Abstract
To solve inverse problems, plug-and-play (PnP) methods replace the proximal step in a convex optimization algorithm with a call to an application-specific denoiser, often implemented using a deep neural network (DNN). Although such methods yield accurate solutions, they can be improved. For example, denoisers are usually designed/trained to remove white Gaussian noise, but the denoiser input error in PnP algorithms is usually far from white or Gaussian. Approximate message passing (AMP) methods provide white and Gaussian denoiser input error, but only when the forward operator is sufficiently random. In this work, for Fourier-based forward operators, we propose a PnP algorithm based on generalized expectation-consistent (GEC) approximation, a close cousin of AMP, that offers predictable error statistics at each iteration, as well as a new DNN denoiser that leverages those statistics. We apply our approach to magnetic resonance (MR) image recovery and demonstrate its advantages over existing PnP and AMP methods.
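For context, a bare-bones plug-and-play iteration for undersampled Fourier measurements looks like the sketch below, where the proximal step of a gradient method is simply replaced by a denoiser; the Gaussian-blur "denoiser", random mask, and step size are placeholders, not the GEC algorithm or the trained DNN denoiser proposed in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def pnp_recon(y, mask, n_iter=50, step=1.0, sigma=1.0):
    """Plug-and-play gradient iteration for y = mask * FFT(x) + noise.

    A plain Gaussian filter stands in for the trained DNN denoiser; the paper's
    GEC-based scheme additionally supplies the denoiser with predicted error statistics.
    """
    x = np.real(np.fft.ifft2(y))                 # zero-filled initial image
    for _ in range(n_iter):
        resid = mask * np.fft.fft2(x) - y        # data-fidelity residual in k-space
        x = x - step * np.real(np.fft.ifft2(resid))
        x = gaussian_filter(x, sigma)            # "denoiser" replaces the proximal map
    return x

# Example with 25% random sampling of a smooth 128x128 test image
rng = np.random.default_rng(1)
img = gaussian_filter(rng.standard_normal((128, 128)), 3)
mask = rng.random((128, 128)) < 0.25
rec = pnp_recon(mask * np.fft.fft2(img), mask)
```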
Affiliation(s)
- Saurav K Shastri
- Dept. of Electrical and Computer Engineering, The Ohio State University, Columbus, OH 43201, USA
- Rizwan Ahmad
- Dept. of Biomedical Engineering, The Ohio State University, Columbus, OH 43201, USA
- Philip Schniter
- Dept. of Electrical and Computer Engineering, The Ohio State University, Columbus, OH 43201, USA
157
Generation of realistic synthetic data using Multimodal Neural Ordinary Differential Equations. NPJ Digit Med 2022; 5:122. [PMID: 35986075] [PMCID: PMC9391444] [DOI: 10.1038/s41746-022-00666-x]
Abstract
Individual organizations, such as hospitals, pharmaceutical companies, and health insurance providers, are currently limited in their ability to collect data that are fully representative of a disease population. This can, in turn, negatively impact the generalization ability of statistical models and scientific insights. However, sharing data across different organizations is highly restricted by legal regulations. While federated data access concepts exist, they are technically and organizationally difficult to realize. An alternative approach would be to exchange synthetic patient data instead. In this work, we introduce the Multimodal Neural Ordinary Differential Equations (MultiNODEs), a hybrid, multimodal AI approach, which allows for generating highly realistic synthetic patient trajectories on a continuous time scale, hence enabling smooth interpolation and extrapolation of clinical studies. Our proposed method can integrate both static and longitudinal data, and implicitly handles missing values. We demonstrate the capabilities of MultiNODEs by applying them to real patient-level data from two independent clinical studies and simulated epidemiological data of an infectious disease.
158
Zhao J, Hou X, Pan M, Zhang H. Attention-based generative adversarial network in medical imaging: A narrative review. Comput Biol Med 2022; 149:105948. [PMID: 35994931] [DOI: 10.1016/j.compbiomed.2022.105948]
Abstract
As a popular probabilistic generative model, the generative adversarial network (GAN) has been successfully used not only in natural image processing but also in medical image analysis and computer-aided diagnosis. Despite its various advantages, the application of GANs in medical image analysis faces new challenges. The introduction of attention mechanisms, which resemble the human visual system in focusing on task-related local image areas for information extraction, has drawn increasing interest. Recently proposed transformer-based architectures that leverage the self-attention mechanism encode long-range dependencies and learn highly expressive representations. This motivates us to summarize the applications of transformer-based GANs for medical image analysis. We reviewed recent advances in techniques combining various attention modules with different adversarial training schemes, and their applications in medical segmentation, synthesis, and detection. Several recent studies have shown that attention modules can be effectively incorporated into a GAN model for detecting lesion areas and precisely extracting diagnosis-related feature information, thus providing a useful tool for medical image processing and diagnosis. This review indicates that research on GANs and attention mechanisms for medical imaging analysis is still at an early stage despite the great potential. We highlight that the attention-based generative adversarial network is an efficient and promising computational model for advancing future research and applications in medical image analysis.
Affiliation(s)
- Jing Zhao
- School of Engineering Medicine, Beihang University, Beijing, 100191, China; School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China
- Xiaoyuan Hou
- School of Engineering Medicine, Beihang University, Beijing, 100191, China; School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China
- Meiqing Pan
- School of Engineering Medicine, Beihang University, Beijing, 100191, China; School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China
- Hui Zhang
- School of Engineering Medicine, Beihang University, Beijing, 100191, China; Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education, Beijing, 100191, China; Key Laboratory of Big Data-Based Precision Medicine (Beihang University), Ministry of Industry and Information Technology of the People's Republic of China, Beijing, 100191, China
159
Shao W, Leung KH, Xu J, Coughlin JM, Pomper MG, Du Y. Generation of Digital Brain Phantom for Machine Learning Application of Dopamine Transporter Radionuclide Imaging. Diagnostics (Basel) 2022; 12:1945. [PMID: 36010295] [PMCID: PMC9406894] [DOI: 10.3390/diagnostics12081945]
Abstract
While machine learning (ML) methods may significantly improve image quality in SPECT imaging for the diagnosis and monitoring of Parkinson's disease (PD), they require a large amount of data for training. It is often difficult to collect a large population of patient data to support ML research, and the ground truth of lesions is also unknown. This paper leverages a generative adversarial network (GAN) to generate digital brain phantoms for training ML-based PD SPECT algorithms. A total of 594 PET 3D brain models from 155 patients (113 male and 42 female) were reviewed, and 1597 2D slices containing the full or a portion of the striatum were selected. Corresponding attenuation maps were also generated based on these images. The data were then used to develop a GAN for generating 2D brain phantoms, where each phantom consists of a radioactivity image and the corresponding attenuation map. Statistical methods including the histogram, Fréchet distance, and structural similarity were used to evaluate the generator based on 10,000 generated phantoms. When the generated phantoms and the training dataset were both passed to the discriminator, similar normal distributions were obtained, indicating that the discriminator was unable to distinguish the generated phantoms from the training data. The generated digital phantoms can be used for 2D SPECT simulation and serve as the ground truth for developing ML-based reconstruction algorithms. The cumulative experience from this work also lays the foundation for building a 3D GAN for the same application.
Affiliation(s)
- Wenyi Shao
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
- Kevin H. Leung
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
- Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
- Jingyan Xu
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
- Jennifer M. Coughlin
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
- Department of Psychiatry and Behavioral Sciences, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
- Martin G. Pomper
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
- Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
- Department of Psychiatry and Behavioral Sciences, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
- Yong Du
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
160
Liu X, Pang Y, Jin R, Liu Y, Wang Z. Dual-Domain Reconstruction Network with V-Net and K-Net for Fast MRI. Magn Reson Med 2022; 88:2694-2708. [PMID: 35942977] [DOI: 10.1002/mrm.29400]
Abstract
PURPOSE To introduce a dual-domain reconstruction network with V-Net and K-Net for accurate MR image reconstruction from undersampled k-space data. METHODS Most state-of-the-art reconstruction methods apply U-Net or cascaded U-Nets in the image domain and/or k-space domain. Nevertheless, these methods have the following problems: (1) directly applying U-Net in the k-space domain is not optimal for extracting features; (2) the classical image-domain-oriented U-Net is heavyweight and hence inefficient when cascaded many times to yield good reconstruction accuracy; (3) the classical image-domain-oriented U-Net does not make full use of the information in the encoder network for extracting features in the decoder network; and (4) existing methods are ineffective in simultaneously extracting and fusing features in the image domain and its dual k-space domain. To tackle these problems, we present three components: (1) V-Net, an image-domain encoder-decoder subnetwork that is more lightweight for cascading and effective in fully utilizing encoder features for decoding; (2) K-Net, a k-space domain subnetwork that is more suitable for extracting hierarchical features in the k-space domain; and (3) KV-Net, a dual-domain reconstruction network in which V-Nets and K-Nets are effectively combined and cascaded. RESULTS Extensive experimental results on the fastMRI dataset demonstrate that the proposed KV-Net can reconstruct high-quality images and outperform state-of-the-art approaches with fewer parameters. CONCLUSIONS To reconstruct images effectively and efficiently from incomplete k-space data, we have presented a dual-domain KV-Net that combines K-Nets and V-Nets. The KV-Net achieves better results with 9% and 5% of the parameters of comparable methods (XPD-Net and i-RIM, respectively).
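A dual-domain cascade of the kind described alternates a k-space subnetwork with an image-domain subnetwork, moving between domains with FFTs and re-imposing the measured samples after each stage. The sketch below illustrates that generic pattern with tiny residual convolutional blocks; it is not the paper's K-Net/V-Net architecture, and the block sizes and single-coil setting are assumptions.

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """Tiny residual stand-in for the k-space and image-domain sub-networks (2 ch = real/imag)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(32, 2, 3, padding=1))

    def forward(self, x):
        return x + self.net(x)

def c2r(x):   # complex (B,H,W) -> real 2-channel (B,2,H,W)
    return torch.stack([x.real, x.imag], dim=1)

def r2c(x):   # real 2-channel -> complex (B,H,W)
    return torch.complex(x[:, 0], x[:, 1])

class DualDomainCascade(nn.Module):
    def __init__(self, n_cascades: int = 3):
        super().__init__()
        self.knets = nn.ModuleList([ConvBlock() for _ in range(n_cascades)])
        self.vnets = nn.ModuleList([ConvBlock() for _ in range(n_cascades)])

    def forward(self, y, mask):
        """y: measured k-space (B,H,W) complex; mask: (B,H,W) 0/1 sampling mask."""
        k = y
        for knet, vnet in zip(self.knets, self.vnets):
            k = r2c(knet(c2r(k)))                     # refine in k-space
            img = r2c(vnet(c2r(torch.fft.ifft2(k))))  # refine in the image domain
            k = torch.fft.fft2(img)
            k = mask * y + (1 - mask) * k             # data consistency on sampled lines
        return torch.fft.ifft2(k).abs()
```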
Affiliation(s)
- Xiaohan Liu
- Tianjin Key Lab. of Brain Inspired Intelligence Technology, School of Electrical and Information Engineering, Tianjin University, Tianjin, People's Republic of China
- Yanwei Pang
- Tianjin Key Lab. of Brain Inspired Intelligence Technology, School of Electrical and Information Engineering, Tianjin University, Tianjin, People's Republic of China
- Ruiqi Jin
- Tianjin Key Lab. of Brain Inspired Intelligence Technology, School of Electrical and Information Engineering, Tianjin University, Tianjin, People's Republic of China
- Yu Liu
- Tianjin Key Lab. of Brain Inspired Intelligence Technology, School of Electrical and Information Engineering, Tianjin University, Tianjin, People's Republic of China
- Zhenchang Wang
- Beijing Friendship Hospital, Capital Medical University, Beijing, People's Republic of China
161
Shao W, Zhou B. Dielectric Breast Phantoms by Generative Adversarial Network. IEEE Trans Antennas Propag 2022; 70:6256-6264. [PMID: 36969506] [PMCID: PMC10038476] [DOI: 10.1109/tap.2021.3121149]
Abstract
To conduct research on machine-learning (ML) based microwave breast imaging (MBI), a large number of digital dielectric breast phantoms that can be used as training data (ground truth) are required but are difficult to obtain in practice. Although a few dielectric breast phantoms have been developed for research purposes, their number and diversity are limited and far from adequate for developing a robust ML algorithm for MBI. This paper presents a neural network method to generate 2D virtual breast phantoms that are similar to real ones, which can be used to develop ML-based MBI in the future. The generated phantoms are similar to, but distinct from, those used in training. Each phantom consists of several images, with each image representing the distribution of one dielectric parameter in the breast map. Statistical analysis was performed over 10,000 generated phantoms to investigate the performance of the generative network. With the generative network, one may generate an unlimited number of breast images with greater variation, making ML-based MBI more ready to deploy.
Affiliation(s)
- Wenyi Shao
- Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
162
Chen EZ, Wang P, Chen X, Chen T, Sun S. Pyramid Convolutional RNN for MRI Image Reconstruction. IEEE Trans Med Imaging 2022; 41:2033-2047. [PMID: 35192462] [DOI: 10.1109/tmi.2022.3153849]
Abstract
Fast and accurate MRI image reconstruction from undersampled data is crucial in clinical practice. Deep learning based reconstruction methods have shown promising advances in recent years. However, recovering fine details from undersampled data is still challenging. In this paper, we introduce a novel deep learning based method, Pyramid Convolutional RNN (PC-RNN), to reconstruct images from multiple scales. Based on the formulation of MRI reconstruction as an inverse problem, we design the PC-RNN model with three convolutional RNN (ConvRNN) modules to iteratively learn features at multiple scales. Each ConvRNN module reconstructs images at a different scale, and the reconstructed images are combined by a final CNN module in a pyramid fashion. The multi-scale ConvRNN modules learn a coarse-to-fine image reconstruction. Unlike other common reconstruction methods for parallel imaging, PC-RNN does not employ coil sensitivity maps for multi-coil data and directly models the multiple coils as multi-channel inputs. A coil compression technique is applied to standardize data with various coil numbers, leading to more efficient training. We evaluate our model on the fastMRI knee and brain datasets, and the results show that the proposed model outperforms other methods and can recover more details. The proposed method is one of the winning solutions in the 2019 fastMRI competition.
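Coil compression, mentioned at the end of the abstract, is commonly implemented with an SVD across the coil dimension so that data acquired with different coil counts can be standardized to a fixed number of virtual coils. A minimal sketch of that standard formulation (not necessarily the exact variant used by PC-RNN):

```python
import numpy as np

def coil_compress(kspace: np.ndarray, n_virtual: int = 8) -> np.ndarray:
    """SVD-based coil compression.

    kspace: (n_coils, H, W) complex multi-coil k-space.
    Returns (n_virtual, H, W) virtual-coil k-space capturing most of the signal energy,
    so that inputs with different coil counts share a common channel dimension.
    """
    n_coils, H, W = kspace.shape
    X = kspace.reshape(n_coils, -1)                  # coils x samples
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    compress = U[:, :n_virtual].conj().T             # compression matrix
    return (compress @ X).reshape(n_virtual, H, W)

# e.g. standardize 20-channel data to 8 virtual coils before feeding the network
rng = np.random.default_rng(0)
k20 = rng.standard_normal((20, 64, 64)) + 1j * rng.standard_normal((20, 64, 64))
k8 = coil_compress(k20, 8)
```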
163
Obama Y, Ohno Y, Yamamoto K, Ikedo M, Yui M, Hanamatsu S, Ueda T, Ikeda H, Murayama K, Toyama H. MR imaging for shoulder diseases: Effect of compressed sensing and deep learning reconstruction on examination time and imaging quality compared with that of parallel imaging. Magn Reson Imaging 2022; 94:56-63. [DOI: 10.1016/j.mri.2022.08.004]
164
Shao HC, Li T, Dohopolski MJ, Wang J, Cai J, Tan J, Wang K, Zhang Y. Real-time MRI motion estimation through an unsupervised k-space-driven deformable registration network (KS-RegNet). Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac762c]
Abstract
Purpose. Real-time three-dimensional (3D) magnetic resonance (MR) imaging is challenging because of slow MR signal acquisition, leading to highly under-sampled k-space data. Here, we proposed a deep learning-based, k-space-driven deformable registration network (KS-RegNet) for real-time 3D MR imaging. By incorporating prior information, KS-RegNet performs a deformable image registration between a fully-sampled prior image and on-board images acquired from highly under-sampled k-space data, to generate high-quality on-board images for real-time motion tracking. Methods. KS-RegNet is an end-to-end, unsupervised network consisting of an input data generation block, a subsequent U-Net core block, and following operations to compute data fidelity and regularization losses. The input data involved a fully-sampled, complex-valued prior image and the k-space data of an on-board, real-time MR image (MRI). From the k-space data, the under-sampled real-time MRI was reconstructed by the data generation block as input to the U-Net core. In addition, to train the U-Net core to learn the under-sampling artifacts, the k-space data of the prior image were intentionally under-sampled using the same readout trajectory as the real-time MRI and reconstructed to serve as an additional input. The U-Net core predicts a deformation vector field that deforms the prior MRI to the on-board real-time MRI. To avoid the adverse effects of quantifying image similarity on artifact-ridden images, the data fidelity loss of the deformation was evaluated directly in k-space. Results. Compared with Elastix and other deep learning network architectures, KS-RegNet demonstrated better and more stable performance. The average (±s.d.) DICE coefficients of KS-RegNet on a cardiac dataset for the 5-, 9-, and 13-spoke k-space acquisitions were 0.884 ± 0.025, 0.889 ± 0.024, and 0.894 ± 0.022, respectively; the corresponding average (±s.d.) center-of-mass errors (COMEs) were 1.21 ± 1.09, 1.29 ± 1.22, and 1.01 ± 0.86 mm, respectively. KS-RegNet also provided the best performance on an abdominal dataset. Conclusion. KS-RegNet allows real-time MRI generation with sub-second latency. It enables potential real-time MR-guided soft tissue tracking, tumor localization, and radiotherapy plan adaptation.
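The distinguishing step here is evaluating the data-fidelity loss of the deformation directly in k-space rather than on artifact-ridden images. A hedged sketch of that idea is shown below, with a plain bilinear warp and a Cartesian mask standing in for the network-predicted deformation and the spoke readout; the tensor shapes and the pixel-unit convention for the deformation field are assumptions.

```python
import torch
import torch.nn.functional as F

def kspace_fidelity_loss(prior_img, dvf, k_measured, mask):
    """Warp the prior image, FFT it, and compare with measured k-space at sampled locations.

    prior_img : (B,1,H,W) real-valued prior image
    dvf       : (B,2,H,W) deformation vector field in pixels (dx, dy)
    k_measured: (B,H,W) complex measured k-space of the on-board image
    mask      : (B,H,W) 0/1 mask marking the acquired readout locations
    """
    B, _, H, W = prior_img.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij")
    base = torch.stack([xs, ys], dim=-1).expand(B, H, W, 2)            # identity grid in [-1,1]
    disp = torch.stack([dvf[:, 0] * 2 / W, dvf[:, 1] * 2 / H], dim=-1)
    warped = F.grid_sample(prior_img, base + disp, align_corners=True)  # deformed prior

    k_warped = torch.fft.fft2(warped[:, 0].to(torch.complex64))
    return torch.mean(torch.abs(mask * (k_warped - k_measured)) ** 2)
```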
165
Beauferris Y, Teuwen J, Karkalousos D, Moriakov N, Caan M, Yiasemis G, Rodrigues L, Lopes A, Pedrini H, Rittner L, Dannecker M, Studenyak V, Gröger F, Vyas D, Faghih-Roohi S, Kumar Jethi A, Chandra Raju J, Sivaprakasam M, Lasby M, Nogovitsyn N, Loos W, Frayne R, Souza R. Multi-Coil MRI Reconstruction Challenge-Assessing Brain MRI Reconstruction Models and Their Generalizability to Varying Coil Configurations. Front Neurosci 2022; 16:919186. [PMID: 35873808] [PMCID: PMC9298878] [DOI: 10.3389/fnins.2022.919186]
Abstract
Deep-learning-based brain magnetic resonance imaging (MRI) reconstruction methods have the potential to accelerate the MRI acquisition process. Nevertheless, the scientific community lacks appropriate benchmarks to assess the MRI reconstruction quality of high-resolution brain images, and evaluate how these proposed algorithms will behave in the presence of small, but expected data distribution shifts. The multi-coil MRI (MC-MRI) reconstruction challenge provides a benchmark that aims at addressing these issues, using a large dataset of high-resolution, three-dimensional, T1-weighted MRI scans. The challenge has two primary goals: (1) to compare different MRI reconstruction models on this dataset and (2) to assess the generalizability of these models to data acquired with a different number of receiver coils. In this paper, we describe the challenge experimental design and summarize the results of a set of baseline and state-of-the-art brain MRI reconstruction models. We provide relevant comparative information on the current MRI reconstruction state-of-the-art and highlight the challenges of obtaining generalizable models that are required prior to broader clinical adoption. The MC-MRI benchmark data, evaluation code, and current challenge leaderboard are publicly available. They provide an objective performance assessment for future developments in the field of brain MRI reconstruction.
Affiliation(s)
- Youssef Beauferris
- (AI) Lab, Electrical and Software Engineering, University of Calgary, Calgary, AB, Canada
- Biomedical Engineering Graduate Program, University of Calgary, Calgary, AB, Canada
- Hotchkiss Brain Institute, University of Calgary, Calgary, AB, Canada
- Jonas Teuwen
- Department of Medical Imaging, Radboud University Medical Center, Nijmegen, Netherlands
- Department of Radiation Oncology, Netherlands Cancer Institute, Amsterdam, Netherlands
- Innovation Centre for Artificial Intelligence – Artificial Intelligence for Oncology, University of Amsterdam, Amsterdam, Netherlands
- Dimitrios Karkalousos
- Department of Biomedical Engineering and Physics, Amsterdam University Medical Centre, University of Amsterdam, Amsterdam, Netherlands
- Nikita Moriakov
- Department of Medical Imaging, Radboud University Medical Center, Nijmegen, Netherlands
- Department of Radiation Oncology, Netherlands Cancer Institute, Amsterdam, Netherlands
- Matthan Caan
- Department of Biomedical Engineering and Physics, Amsterdam University Medical Centre, University of Amsterdam, Amsterdam, Netherlands
- George Yiasemis
- Department of Radiation Oncology, Netherlands Cancer Institute, Amsterdam, Netherlands
- Innovation Centre for Artificial Intelligence – Artificial Intelligence for Oncology, University of Amsterdam, Amsterdam, Netherlands
- Lívia Rodrigues
- Medical Image Computing Lab, School of Electrical and Computer Engineering, University of Campinas, Campinas, Brazil
- Alexandre Lopes
- Institute of Computing, University of Campinas, Campinas, Brazil
- Helio Pedrini
- Institute of Computing, University of Campinas, Campinas, Brazil
- Letícia Rittner
- Medical Image Computing Lab, School of Electrical and Computer Engineering, University of Campinas, Campinas, Brazil
- Maik Dannecker
- Computer Aided Medical Procedures, Technical University of Munich, Munich, Germany
- Viktor Studenyak
- Computer Aided Medical Procedures, Technical University of Munich, Munich, Germany
- Fabian Gröger
- Computer Aided Medical Procedures, Technical University of Munich, Munich, Germany
- Devendra Vyas
- Computer Aided Medical Procedures, Technical University of Munich, Munich, Germany
- Amrit Kumar Jethi
- Department of Electrical Engineering, Indian Institute of Technology Madras, Chennai, India
- Jaya Chandra Raju
- Department of Electrical Engineering, Indian Institute of Technology Madras, Chennai, India
- Mohanasankar Sivaprakasam
- Department of Electrical Engineering, Indian Institute of Technology Madras, Chennai, India
- Healthcare Technology Innovation Centre, Indian Institute of Technology Madras, Chennai, India
- Mike Lasby
- (AI) Lab, Electrical and Software Engineering, University of Calgary, Calgary, AB, Canada
- Hotchkiss Brain Institute, University of Calgary, Calgary, AB, Canada
- Nikita Nogovitsyn
- Centre for Depression and Suicide Studies, St. Michael's Hospital, Toronto, ON, Canada
- Mood Disorders Program, Department of Psychiatry and Behavioural Neurosciences, McMaster University, Hamilton, ON, Canada
- Wallace Loos
- Biomedical Engineering Graduate Program, University of Calgary, Calgary, AB, Canada
- Hotchkiss Brain Institute, University of Calgary, Calgary, AB, Canada
- Radiology and Clinical Neurosciences, University of Calgary, Calgary, AB, Canada
- Seaman Family MR Research Centre, Foothills Medical Center, Calgary, AB, Canada
- Richard Frayne
- Biomedical Engineering Graduate Program, University of Calgary, Calgary, AB, Canada
- Hotchkiss Brain Institute, University of Calgary, Calgary, AB, Canada
- Radiology and Clinical Neurosciences, University of Calgary, Calgary, AB, Canada
- Seaman Family MR Research Centre, Foothills Medical Center, Calgary, AB, Canada
- Roberto Souza
- (AI) Lab, Electrical and Software Engineering, University of Calgary, Calgary, AB, Canada
- Biomedical Engineering Graduate Program, University of Calgary, Calgary, AB, Canada
- Hotchkiss Brain Institute, University of Calgary, Calgary, AB, Canada
166
Di Ianni T, Airan RD. Deep-fUS: A Deep Learning Platform for Functional Ultrasound Imaging of the Brain Using Sparse Data. IEEE Trans Med Imaging 2022; 41:1813-1825. [PMID: 35108201] [PMCID: PMC9247015] [DOI: 10.1109/tmi.2022.3148728]
Abstract
Functional ultrasound (fUS) is a rapidly emerging modality that enables whole-brain imaging of neural activity in awake and mobile rodents. To achieve sufficient blood flow sensitivity in the brain microvasculature, fUS relies on long ultrasound data acquisitions at high frame rates, posing high demands on the sampling and processing hardware. Here we develop an image reconstruction method based on deep learning that significantly reduces the amount of data necessary while retaining imaging performance. We trained convolutional neural networks to learn the power Doppler reconstruction function from sparse sequences of ultrasound data with compression factors of up to 95%. High-quality images from in vivo acquisitions in rats were used for training and performance evaluation. We demonstrate that time series of power Doppler images can be reconstructed with sufficient accuracy to detect the small changes in cerebral blood volume (~10%) characteristic of task-evoked cortical activation, even though the network was not formally trained to reconstruct such image series. The proposed platform may facilitate the development of this neuroimaging modality in any setting where dedicated hardware is not available or in clinical scanners.
168
Huang J, Wu Y, Wu H, Yang G. Fast MRI Reconstruction: How Powerful Transformers Are? Annu Int Conf IEEE Eng Med Biol Soc 2022; 2022:2066-2070. [PMID: 36085682] [DOI: 10.1109/embc48229.2022.9871475]
Abstract
Magnetic resonance imaging (MRI) is a widely used non-radiative and non-invasive method for clinical interrogation of organ structures and metabolism, with an inherently long scanning time. Methods based on k-space undersampling and deep learning based reconstruction have been popularised to accelerate the scanning process. This work focuses on investigating how powerful transformers are for fast MRI by exploiting and comparing different novel network architectures. In particular, a generative adversarial network (GAN) based Swin transformer (ST-GAN) was introduced for fast MRI reconstruction. To further preserve edge and texture information, an edge enhanced GAN based Swin transformer (EES-GAN) and a texture enhanced GAN based Swin transformer (TES-GAN) were also developed, where a dual-discriminator GAN structure was applied. We compared our proposed GAN based transformers, a standalone Swin transformer, and other convolutional neural network based GAN models in terms of the evaluation metrics PSNR, SSIM and FID. We showed that transformers work well for MRI reconstruction under different undersampling conditions. The utilisation of GAN's adversarial structure improves the quality of reconstructed images when the undersampling rate is 30% or higher. The code is publicly available at https://github.com/ayanglab/SwinGANMR.
169
Korkmaz Y, Dar SUH, Yurt M, Ozbey M, Cukur T. Unsupervised MRI Reconstruction via Zero-Shot Learned Adversarial Transformers. IEEE Trans Med Imaging 2022; 41:1747-1763. [PMID: 35085076] [DOI: 10.1109/tmi.2022.3147426]
Abstract
Supervised reconstruction models are characteristically trained on matched pairs of undersampled and fully-sampled data to capture an MRI prior, along with supervision regarding the imaging operator to enforce data consistency. To reduce supervision requirements, the recent deep image prior framework instead conjoins untrained MRI priors with the imaging operator during inference. Yet, canonical convolutional architectures are suboptimal in capturing long-range relationships, and priors based on randomly initialized networks may yield suboptimal performance. To address these limitations, here we introduce a novel unsupervised MRI reconstruction method based on zero-Shot Learned Adversarial TransformERs (SLATER). SLATER embodies a deep adversarial network with cross-attention transformers to map noise and latent variables onto coil-combined MR images. During pre-training, this unconditional network learns a high-quality MRI prior in an unsupervised generative modeling task. During inference, a zero-shot reconstruction is then performed by incorporating the imaging operator and optimizing the prior to maximize consistency to undersampled data. Comprehensive experiments on brain MRI datasets clearly demonstrate the superior performance of SLATER against state-of-the-art unsupervised methods.
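Zero-shot reconstruction of this general kind freezes a generatively pre-trained prior and, at inference time, optimizes its inputs so the generated image agrees with the undersampled measurements. The sketch below shows that generic loop for a single-coil Cartesian case; the latent size, optimizer settings, and the choice to optimize only the latent code are assumptions, and the actual SLATER model additionally adapts network weights and uses cross-attention transformer blocks.

```python
import torch

def zero_shot_recon(generator, y, mask, n_steps=500, lr=0.01):
    """Freeze a pre-trained generative prior and optimize its latent input for data consistency.

    generator: pre-trained network mapping a latent tensor to a complex-valued image (assumed)
    y:         (H, W) measured k-space; mask: (H, W) 0/1 sampling pattern
    """
    z = torch.randn(1, 128, requires_grad=True)          # latent code (size assumed)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(n_steps):
        opt.zero_grad()
        k = torch.fft.fft2(generator(z))                  # generated image -> k-space
        loss = torch.mean(torch.abs(mask * k - y) ** 2)   # consistency with measurements
        loss.backward()
        opt.step()
    return generator(z).detach()
```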
170
Jethi AK, Souza R, Ram K, Sivaprakasam M. Improving Fast MRI Reconstructions with Pretext Learning in Low-Data Regime. Annu Int Conf IEEE Eng Med Biol Soc 2022; 2022:2080-2083. [PMID: 36085855] [DOI: 10.1109/embc48229.2022.9871369]
Abstract
Supervised deep learning methods have shown great promise for making magnetic resonance (MR) imaging scans faster. However, these supervised deep learning models need large volumes of labelled data to learn valuable representations and produce high-fidelity MR image reconstructions. The data used to train these models are often fully-sampled raw MR data, retrospectively under-sampled to simulate different MR acquisition acceleration factors. Obtaining high-quality, fully sampled raw MR data is costly and time-consuming. In this paper, we exploit self-supervised learning by introducing a pretext method to boost feature learning using the more commonly available under-sampled MR data. Our experiments using different deep-learning-based reconstruction models in a low-data regime demonstrate that self-supervision ensures stable training and improves MR image reconstruction.
171
Karkalousos D, Noteboom S, Hulst HE, Vos FM, Caan MWA. Assessment of data consistency through cascades of independently recurrent inference machines for fast and robust accelerated MRI reconstruction. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac6cc2]
Abstract
Objective. Machine learning methods can learn how to reconstruct magnetic resonance images (MRI) and thereby accelerate acquisition, which is of paramount importance to the clinical workflow. Physics-informed networks incorporate the forward model of accelerated MRI reconstruction in the learning process. With increasing network complexity, robustness is not ensured when reconstructing data unseen during training. We aim to embed data consistency (DC) in deep networks while balancing the degree of network complexity. While doing so, we assess whether explicit or implicit enforcement of DC in varying network architectures is preferable for optimizing performance. Approach. We propose a scheme called Cascades of Independently Recurrent Inference Machines (CIRIM) to assess DC through unrolled optimization. Herein we assess DC both implicitly by gradient descent and explicitly by a designed term. Extensive comparison of the CIRIM to compressed sensing as well as other machine learning methods is performed: the End-to-End Variational Network (E2EVN), CascadeNet, KIKINet, LPDNet, RIM, IRIM, and UNet. Models were trained and evaluated on T1-weighted and FLAIR contrast brain data and T2-weighted knee data. Both 1D and 2D undersampling patterns were evaluated. Robustness was tested by reconstructing 7.5× prospectively undersampled 3D FLAIR MRI data of multiple sclerosis (MS) patients with white matter lesions. Main results. The CIRIM performed best when implicitly enforcing DC, while the E2EVN required an explicit DC formulation. Through its cascades, the CIRIM was able to score higher on structural similarity and PSNR compared to other methods, in particular under heterogeneous imaging conditions. In reconstructing MS patient data, prospectively acquired with a sampling pattern unseen during model training, the CIRIM maintained lesion contrast while efficiently denoising the images. Significance. The CIRIM showed highly promising generalization capabilities, maintaining a fair trade-off between reconstructed image quality and fast reconstruction times, which is crucial in the clinical workflow.
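The explicit/implicit data-consistency distinction can be illustrated with the two standard update forms used in unrolled networks: a hard replacement of the measured k-space samples versus a gradient step on the data-fidelity term. A schematic single-coil Cartesian sketch (not the CIRIM implementation):

```python
import numpy as np

def dc_explicit(x, y, mask):
    """Hard (explicit) data consistency: overwrite k-space at sampled locations with measurements."""
    k = np.fft.fft2(x)
    k = np.where(mask, y, k)
    return np.fft.ifft2(k)

def dc_implicit(x, y, mask, lam=1.0):
    """Soft (implicit) data consistency: gradient step on ||mask * FFT(x) - y||^2."""
    grad = np.fft.ifft2(mask * (mask * np.fft.fft2(x) - y))
    return x - lam * grad

# An unrolled cascade interleaves a learned regularizer (denoiser / recurrent block)
# with one of these two DC operations at every iteration.
```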
172
Yaqub M, Jinchao F, Arshid K, Ahmed S, Zhang W, Nawaz MZ, Mahmood T. Deep Learning-Based Image Reconstruction for Different Medical Imaging Modalities. Comput Math Methods Med 2022; 2022:8750648. [PMID: 35756423] [PMCID: PMC9225884] [DOI: 10.1155/2022/8750648]
Abstract
Image reconstruction in magnetic resonance imaging (MRI) and computed tomography (CT) is a mathematical process that generates images from data acquired at many different angles around the patient. Image reconstruction has a fundamental impact on image quality. In recent years, the literature has focused on deep learning and its applications in medical imaging, particularly image reconstruction. Due to the performance of deep learning models in a wide variety of vision applications, a considerable amount of work has recently been carried out on image reconstruction in medical images. MRI and CT remain scientifically appropriate imaging modes for identifying and diagnosing different diseases in this age of rapidly advancing technology. This study reviews a number of deep learning image reconstruction approaches and provides a comprehensive overview of the most widely used databases. We also present the challenges and promising future directions for medical image reconstruction.
Affiliation(s)
- Muhammad Yaqub
- Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Feng Jinchao
- Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Kaleem Arshid
- Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Shahzad Ahmed
- Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Wenqian Zhang
- Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Muhammad Zubair Nawaz
- College of Science and Shanghai Institute of Intelligent Electronics and Systems, Donghua University, 24105 Songjiang District, Shanghai, China
- Tariq Mahmood
- Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Division of Science and Technology, University of Education, Lahore, Pakistan
173
Wang L, Wang C, Wang F, Chu YH, Yang Z, Wang H. EPI phase error correction with deep learning (PEC-DL) at 7 T. Magn Reson Med 2022; 88:1775-1784. [PMID: 35696532] [DOI: 10.1002/mrm.29317]
Abstract
PURPOSE The phase mismatch between odd and even echoes in EPI causes Nyquist ghost artifacts. Existing ghost correction methods often suffer from severe residual artifacts and are ineffective with under-sampled k-space data. This study proposed a deep learning-based method (PEC-DL) to correct phase errors for DWI at 7 Tesla. METHODS The acquired k-space data were divided into two independent undersampled datasets according to their readout polarities. The proposed PEC-DL network then reconstructed two ghost-free images using the undersampled data without calibration or navigator data. The network was trained with fully sampled images and applied to two- and fourfold accelerated data. Healthy volunteers and patients with Moyamoya disease were recruited to validate the efficacy of the PEC-DL method. RESULTS The PEC-DL method was capable of mitigating the ghost artifacts in DWI in healthy volunteers as well as in patients with Moyamoya disease. The fourfold accelerated results showed much less distortion in the lesions of the Moyamoya patient in high b-value DWI and the corresponding ADC maps. The ghost-to-signal ratios were significantly lower in PEC-DL images compared to conventional linear phase correction, mini-entropy, and PEC-GRAPPA algorithms. CONCLUSION The proposed method can effectively eliminate ghost artifacts for fully sampled and up to fourfold accelerated EPI data without calibration or navigator data.
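The first processing step described (splitting the EPI k-space into two independent under-sampled sets by readout polarity) amounts to separating odd and even phase-encode lines, each internally consistent in phase but two-fold under-sampled. A minimal sketch of that split, assuming a single-coil Cartesian EPI ordering:

```python
import numpy as np

def split_by_readout_polarity(kspace: np.ndarray):
    """Separate EPI k-space lines acquired with positive vs. negative readout gradients.

    kspace: (n_pe, n_ro) complex EPI k-space. Even phase-encode lines are assumed to be
    positive-polarity echoes and odd lines negative-polarity (an assumption about ordering).
    Each returned k-space is internally consistent in phase but two-fold under-sampled.
    """
    k_pos = np.zeros_like(kspace)
    k_neg = np.zeros_like(kspace)
    k_pos[0::2] = kspace[0::2]      # positive-polarity echoes
    k_neg[1::2] = kspace[1::2]      # negative-polarity echoes
    return k_pos, k_neg

# Each half would then be reconstructed (here by the PEC-DL network, without navigators)
# and the two ghost-free images combined, e.g. by root-sum-of-squares.
```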
Affiliation(s)
- Lili Wang
- Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai, People's Republic of China
- Chengyan Wang
- Human Phenome Institute, Fudan University, Shanghai, People's Republic of China
- Fanwen Wang
- Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai, People's Republic of China
- Ying-Hua Chu
- MR Collaboration, Siemens Healthcare Ltd., Shanghai, People's Republic of China
- Zidong Yang
- Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai, People's Republic of China
- He Wang
- Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai, People's Republic of China; MR Collaboration, Siemens Healthcare Ltd., Shanghai, People's Republic of China
174
Seo S, Luu HM, Choi SH, Park SH. Simultaneously optimizing sampling pattern for joint acceleration of multi-contrast MRI using model-based deep learning. Med Phys 2022; 49:5964-5980. [PMID: 35678739] [DOI: 10.1002/mp.15790]
Abstract
BACKGROUND Acceleration of MR imaging (MRI) is a popular research area, and the use of deep learning for acceleration has become widespread in the MR community. Joint acceleration of multiple-acquisition MRI has been shown to be more effective than a single-acquisition approach. Likewise, optimization of the sampling pattern has demonstrated advantages over conventional undersampling patterns. However, optimizing the sampling patterns for joint acceleration of multiple-acquisition MRI has not been investigated well. PURPOSE To develop a model-based deep learning scheme to optimize sampling patterns for a joint acceleration of multi-contrast MRI. METHODS The proposed scheme combines sampling pattern optimization and multi-contrast MRI reconstruction. It extends the physics-guided joint model-based deep learning (J-MoDL) scheme to simultaneously optimize a separate sampling pattern for each of multiple contrasts for their joint reconstruction. Tests were performed with three contrasts: T2-weighted, FLAIR, and T1-weighted images. The proposed multi-contrast method was compared to (i) a single-contrast method with sampling optimization (baseline J-MoDL), (ii) a multi-contrast method without sampling optimization, and (iii) a multi-contrast method with a single common sampling pattern optimized for all contrasts. The optimized sampling patterns were analyzed for sampling-location overlap across contrasts. The scheme was also tested in a data-driven scenario, where the inversion between input and label was learned from the under-sampled data directly, and tested on knee datasets for a generalization test. RESULTS The proposed scheme demonstrated a quantitative and qualitative advantage over the single-contrast scheme with sampling pattern optimization and the multi-contrast scheme without sampling pattern optimization. Optimizing a separate sampling pattern for each contrast was superior to optimizing only one common sampling pattern for all contrasts. The proposed scheme showed less overlap in sampling locations than the single-contrast scheme. The main hypothesis also held in the data-driven situation. The brain-trained model worked well on knee images, demonstrating its generalizability. CONCLUSION Our study introduced an effective scheme that combines sampling optimization and multi-contrast acceleration. The seamless combination resulted in superior performance over the existing methods.
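Sampling-pattern optimization in the J-MoDL style treats the under-sampling mask itself as a trainable parameter, relaxed to continuous values so gradients can flow, and trains it jointly with the reconstruction network; extending this to multi-contrast data amounts to learning one mask per contrast. The sketch below shows one such parameterization; the sigmoid relaxation, line-sampling layout, and budget penalty are illustrative assumptions rather than the paper's exact scheme.

```python
import torch
import torch.nn as nn

class LearnableSampler(nn.Module):
    """One trainable phase-encode sampling mask per contrast, relaxed with a sigmoid."""

    def __init__(self, n_contrasts: int, n_lines: int, target_accel: float = 4.0):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(n_contrasts, n_lines))
        self.budget = n_lines / target_accel            # desired number of sampled lines

    def forward(self, kspace):
        # kspace: (B, n_contrasts, n_lines, n_readout) complex, fully sampled during training
        probs = torch.sigmoid(self.logits)              # soft mask in (0, 1)
        mask = probs[None, :, :, None]                  # broadcast over batch and readout
        under = kspace * mask                           # simulated under-sampling
        budget_penalty = ((probs.sum(dim=1) - self.budget) ** 2).mean()
        return under, mask, budget_penalty

# Training minimizes  recon_loss(recon_net(under, mask), target) + w * budget_penalty,
# so gradients shape a separate pattern per contrast; at test time the mask is binarized.
```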
Affiliation(s)
- Sunghun Seo
- Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
- Huan Minh Luu
- Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
- Seung Hong Choi
- Department of Radiology, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Republic of Korea
- Sung-Hong Park
- Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
175
Lee JH, Yi J, Kim JH, Ryu K, Han D, Kim S, Lee S, Kim DY, Kim DH. Accelerated 3D myelin water imaging using joint spatio-temporal reconstruction. Med Phys 2022; 49:5929-5942. [PMID: 35678751] [DOI: 10.1002/mp.15788]
Abstract
PURPOSE To enable acceleration in 3D multi-echo gradient echo (mGRE) acquisition for myelin water imaging (MWI) by combining joint parallel imaging (JPI) and joint deep learning (JDL). METHODS We implemented a multistep reconstruction process using both advanced parallel imaging and a deep learning network, which can utilize joint spatiotemporal components between the multi-echo images to further accelerate the 3D mGRE acquisition for MWI. In the first step, JPI was performed to estimate missing k-space lines. Next, JDL was implemented to reduce residual artifacts and produce a high-fidelity reconstruction by using variable splitting optimization consisting of a spatiotemporal denoiser block, a data consistency block, and a weighted average block. The proposed method was evaluated for MWI with 2D Cartesian uniform under-sampling for each echo, enabling scan times of approximately 2 min for 2 mm × 2 mm × 2 mm 3D coverage. RESULTS The proposed method showed acceptable MWI quality with improved quantitative values compared to both the JPI and JDL methods individually. The improved performance of the proposed method was demonstrated by the low normalized mean-square error and high-frequency error norm values of the reconstruction, with high similarity to the fully sampled MWI. CONCLUSION The joint spatiotemporal reconstruction approach combining JPI and JDL can achieve high acceleration factors for 3D mGRE-based MWI.
Affiliation(s)
- Jae-Hun Lee
- Department of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea
- Jaeuk Yi
- Department of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea
- Jun-Hyeong Kim
- Department of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea
- Kanghyun Ryu
- Department of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea; Department of Radiology, Stanford University, Stanford, California, USA
- Dongyeob Han
- Siemens Healthineers Ltd, Seoul, Republic of Korea
- Sewook Kim
- Department of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea
- Seul Lee
- Department of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea
- Deog Young Kim
- Department of Research Institute of Rehabilitation Medicine, Yonsei University College of Medicine, Seoul, Republic of Korea
- Dong-Hyun Kim
- Department of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea
176
Ranjan A, Lalwani D, Misra R. GAN for synthesizing CT from T2-weighted MRI data towards MR-guided radiation treatment. MAGMA 2022; 35:449-457. [PMID: 34741702] [DOI: 10.1007/s10334-021-00974-5]
Abstract
OBJECTIVE In the medical domain, cross-modality image synthesis suffers from multiple issues, such as context misalignment, image distortion, image blurriness, and loss of details. The fundamental objective of this study is to address these issues in estimating synthetic computed tomography (sCT) scans from T2-weighted magnetic resonance imaging (MRI) scans to achieve MRI-guided radiation treatment (RT). MATERIALS AND METHODS We proposed a conditional generative adversarial network (cGAN) with multiple residual blocks to estimate sCT from T2-weighted MRI scans, using a dataset of 367 paired brain MR-CT images. Several state-of-the-art deep learning models, including the Pix2Pix, U-Net, and autoencoder models, were also implemented to generate sCT, and their results were compared. RESULTS Results on the paired MR-CT image dataset demonstrate that the proposed model with nine residual blocks in the generator architecture yields the smallest mean absolute error (MAE) value of [Formula: see text] and mean squared error (MSE) value of [Formula: see text], and produces the largest Pearson correlation coefficient (PCC) value of [Formula: see text], SSIM value of [Formula: see text], and peak signal-to-noise ratio (PSNR) value of [Formula: see text]. We qualitatively evaluated our results by visually comparing the generated sCT to the original CT of the respective MRI input. DISCUSSION The quantitative and qualitative comparisons in this work demonstrate that a deep learning-based cGAN model can be used to estimate an sCT scan from a reference T2-weighted MRI scan. The overall accuracy of our proposed model outperforms different state-of-the-art deep learning-based models.
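The evaluation relies on standard paired-image metrics whose exact values are elided in the record above. For reference, the sketch below shows how MAE, MSE, PCC, and PSNR are typically computed between a synthetic and a reference CT slice; the peak-value convention is an assumption, and SSIM is omitted for brevity.

```python
import numpy as np

def paired_metrics(sct: np.ndarray, ct: np.ndarray, data_range=None):
    """MAE, MSE, Pearson correlation, and PSNR between a synthetic and a reference CT."""
    sct = sct.astype(np.float64)
    ct = ct.astype(np.float64)
    err = sct - ct
    mae = np.mean(np.abs(err))
    mse = np.mean(err ** 2)
    pcc = np.corrcoef(sct.ravel(), ct.ravel())[0, 1]
    peak = data_range if data_range is not None else ct.max() - ct.min()
    psnr = 10 * np.log10(peak ** 2 / mse)
    return {"MAE": mae, "MSE": mse, "PCC": pcc, "PSNR": psnr}
```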
Collapse
Affiliation(s)
- Amit Ranjan
- Department of Computer Science and Engineering, Indian Institute of Technology Patna, Bihta, 801103, India.
| | - Debanshu Lalwani
- Department of Computer Science and Engineering, Indian Institute of Technology Patna, Bihta, 801103, India
| | - Rajiv Misra
- Department of Computer Science and Engineering, Indian Institute of Technology Patna, Bihta, 801103, India
| |
Collapse
|
177
|
Song Y, Ren S, Lu Y, Fu X, Wong KKL. Deep learning-based automatic segmentation of images in cardiac radiography: A promising challenge. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 220:106821. [PMID: 35487181 DOI: 10.1016/j.cmpb.2022.106821] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/23/2020] [Revised: 04/08/2022] [Accepted: 04/17/2022] [Indexed: 06/14/2023]
Abstract
BACKGROUND Due to advances in medical imaging and computer technology, machine intelligence for analyzing clinical image data increases the probability of disease prevention and successful treatment. When diagnosing and detecting heart disease, medical imaging can provide high-resolution scans of every organ or tissue in the heart. Diagnostic results obtained with imaging methods are less susceptible to human error; such systems can process large amounts of patient information, assist doctors in the early detection and treatment of heart disease, and improve the understanding of heart disease symptoms, which is of great significance for clinical diagnosis. In a computer-aided diagnosis system, accurate segmentation of cardiac scan images is the basis and premise of subsequent thoracic function analysis and 3D image reconstruction. EXISTING TECHNIQUES This paper systematically reviews automatic methods, and some of their difficulties, for cardiac segmentation in radiographic images. Combined with recent advanced deep learning techniques, the feasibility of using deep learning network models for image segmentation is discussed, and commonly used deep learning frameworks are compared. DEVELOPED INSIGHTS There are many standard methods for medical image segmentation, such as traditional methods based on regions and edges and methods based on deep learning. Because medical images exhibit non-uniform grayscale, individual differences, artifacts, and noise, these segmentation methods have certain limitations, and it is difficult to obtain the required sensitivity and accuracy when segmenting the heart. Deep learning models have achieved good results in image segmentation. Accurate segmentation improves the accuracy of disease diagnosis and reduces subsequent irrelevant computation. SUMMARY There are two requirements for accurate segmentation of radiological images. One is to use image segmentation to advance the development of computer-aided diagnosis. The other is to achieve complete segmentation of the heart. When there are lesions or deformities in the heart, the radiographic images will show abnormalities, and the segmentation algorithm needs to segment the heart completely. With the advancement of deep learning and the enhancement of hardware performance, the required amount of computation will no longer be a restriction for real-time detection.
Collapse
Affiliation(s)
- Yucheng Song
- School of Computer Science and Engineering, Central South University, Changsha, China
| | - Shengbing Ren
- School of Computer Science and Engineering, Central South University, Changsha, China
| | - Yu Lu
- College of Big Data and Internet, Shenzhen Technology University, Shenzhen, China.
| | - Xianghua Fu
- College of Big Data and Internet, Shenzhen Technology University, Shenzhen, China
| | - Kelvin K L Wong
- School of Computer Science and Engineering, Central South University, Changsha, China.
| |
Collapse
|
178
|
Li W, Zhu A, Xu Y, Yin H, Hua G. A Fast Multi-Scale Generative Adversarial Network for Image Compressed Sensing. ENTROPY 2022; 24:e24060775. [PMID: 35741496 PMCID: PMC9222711 DOI: 10.3390/e24060775] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/17/2022] [Revised: 05/20/2022] [Accepted: 05/23/2022] [Indexed: 02/01/2023]
Abstract
Recently, deep neural network-based image compressed sensing methods have achieved impressive success in reconstruction quality. However, these methods (1) are limited in their sampling patterns and (2) usually suffer from high computational complexity. To this end, a fast multi-scale generative adversarial network (FMSGAN) is implemented in this paper. Specifically, (1) an effective multi-scale sampling structure is proposed. It contains four kernels of different sizes to decompose and sample images effectively, capturing different levels of spatial features at multiple scales. (2) An efficient lightweight multi-scale residual structure for deep image reconstruction is proposed to balance receptive field size and computational complexity. The key idea is to apply smaller convolution kernel sizes in the multi-scale residual structure to reduce the number of operations while maintaining the receptive field. Meanwhile, a channel attention structure is employed to enrich useful information. Moreover, perceptual loss is combined with MSE loss and adversarial loss as the optimization function to recover a finer image. Numerous experiments show that our FMSGAN achieves state-of-the-art image reconstruction quality with low computational complexity.
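To make the multi-scale sampling idea concrete, the sketch below uses four strided convolutions with different kernel sizes as learned sampling operators; the kernel sizes, stride, and channel counts are illustrative assumptions, not the FMSGAN configuration:

```python
import torch
import torch.nn as nn

class MultiScaleSampler(nn.Module):
    """Sketch of a learned multi-scale sampling stage: four strided convolutions with
    different kernel sizes each produce compressed measurements at one scale."""
    def __init__(self, base=8):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(1, base, kernel_size=k, stride=4, padding=k // 2, bias=False)
            for k in (3, 5, 7, 9)                 # four kernel sizes, illustrative values
        ])

    def forward(self, x):
        # each branch yields a (B, base, H/4, W/4) measurement map; concatenate along channels
        return torch.cat([b(x) for b in self.branches], dim=1)

meas = MultiScaleSampler()(torch.randn(2, 1, 64, 64))
print(meas.shape)                                 # torch.Size([2, 32, 16, 16])
```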
Collapse
Affiliation(s)
- Wenzong Li
- School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 221008, China; (W.L.); (Y.X.); (H.Y.)
| | - Aichun Zhu
- School of Computer Science and Technology, Nanjing Tech University, Nanjing 211800, China;
| | - Yonggang Xu
- School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 221008, China; (W.L.); (Y.X.); (H.Y.)
| | - Hongsheng Yin
- School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 221008, China; (W.L.); (Y.X.); (H.Y.)
| | - Gang Hua
- School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 221008, China; (W.L.); (Y.X.); (H.Y.)
- Correspondence:
| |
Collapse
|
179
|
Zhang J, Han L, Sun J, Wang Z, Xu W, Chu Y, Xia L, Jiang M. Compressed sensing based dynamic MR image reconstruction by using 3D-total generalized variation and tensor decomposition: k-t TGV-TD. BMC Med Imaging 2022; 22:101. [PMID: 35624425 PMCID: PMC9137209 DOI: 10.1186/s12880-022-00826-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2021] [Accepted: 05/18/2022] [Indexed: 11/10/2022] Open
Abstract
PURPOSE Compressed Sensing Magnetic Resonance Imaging (CS-MRI) is a promising technique to accelerate dynamic cardiac MR imaging (DCMRI). For DCMRI, CS-MRI usually exploits image sparsity and low-rank properties to reconstruct dynamic images from the undersampled k-space data. In this paper, a novel CS algorithm is investigated to improve dynamic cardiac MR image reconstruction quality while minimizing the amount of recorded k-space data. METHODS The sparse representation of 3D cardiac magnetic resonance data is implemented by synergistically integrating the 3D total generalized variation (3D-TGV) algorithm and high-order singular value decomposition (HOSVD)-based tensor decomposition, termed the k-t TGV-TD method. In the proposed method, the low-rank structure of the 3D dynamic cardiac MR data is captured with the HOSVD method, and the localized image sparsity is achieved by the 3D-TGV method. Moreover, the Fast Composite Splitting Algorithm (FCSA) method, combining variable splitting with operator splitting techniques, is employed to solve the low-rank and sparse problem. Two different cardiac MR datasets (cardiac perfusion and cine MR datasets) are used to evaluate the performance of the proposed method. RESULTS Compared with state-of-the-art methods, such as k-t SLR, 3D-TGV, HOSVD-based tensor decomposition and the low-rank plus sparse method, the proposed k-t TGV-TD method can offer improved reconstruction accuracy in terms of higher peak SNR (PSNR) and structural similarity index (SSIM). The proposed k-t TGV-TD method can achieve significantly better and more stable reconstruction results than state-of-the-art methods in terms of both PSNR and SSIM, especially for the cardiac perfusion MR dataset. CONCLUSIONS This work proved that the k-t TGV-TD method is an effective sparse representation approach for DCMRI, capable of significantly improving the reconstruction accuracy at different acceleration factors.
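The HOSVD-based low-rank modeling used above can be illustrated with a small NumPy sketch that truncates the mode bases of a toy (x, y, t) tensor; the ranks chosen here are arbitrary, and the full k-t TGV-TD algorithm additionally involves the 3D-TGV penalty and the FCSA solver, which are not shown:

```python
import numpy as np

def hosvd_truncate(tensor, ranks):
    """Truncated HOSVD of a (x, y, t) dynamic MR tensor: compute the leading singular
    vectors of each mode-n unfolding, project onto them, and expand back to obtain a
    low-multilinear-rank approximation."""
    factors = []
    for mode, r in enumerate(ranks):
        unfolded = np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)
        u, _, _ = np.linalg.svd(unfolded, full_matrices=False)
        factors.append(u[:, :r])                          # truncated mode-n basis
    core = tensor
    for mode, u in enumerate(factors):                    # project: core = tensor x_n U_n^T
        core = np.moveaxis(np.tensordot(u.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    approx = core
    for mode, u in enumerate(factors):                    # expand: approx = core x_n U_n
        approx = np.moveaxis(np.tensordot(u, np.moveaxis(approx, mode, 0), axes=1), 0, mode)
    return approx

dyn = np.random.rand(32, 32, 20)                          # toy (x, y, t) cardiac series
low_rank = hosvd_truncate(dyn, ranks=(16, 16, 5))
print(low_rank.shape)                                     # (32, 32, 20)
```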
Collapse
Affiliation(s)
- Jucheng Zhang
- Department of Clinical Engineering, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, 310019, People's Republic of China
| | - Lulu Han
- School of Information Science and Technology, Zhejiang Sci-Tech University, Hangzhou, 310018, People's Republic of China.,Zhejiang Aerospace HengJia Data Technology Co., Ltd., Jiaxing, People's Republic of China
| | - Jianzhong Sun
- Department of Radiology, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, 310027, People's Republic of China
| | - Zhikang Wang
- Department of Clinical Engineering, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, 310019, People's Republic of China
| | - Wenlong Xu
- Department of Biomedical Engineering, China Jiliang University, Hangzhou, 310018, People's Republic of China
| | - Yonghua Chu
- Department of Clinical Engineering, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, 310019, People's Republic of China
| | - Ling Xia
- Department of Biomedical Engineering, Zhejiang University, Hangzhou, 310027, People's Republic of China
| | - Mingfeng Jiang
- School of Information Science and Technology, Zhejiang Sci-Tech University, Hangzhou, 310018, People's Republic of China.
| |
Collapse
|
180
|
Recent Trends in AI-Based Intelligent Sensing. ELECTRONICS 2022. [DOI: 10.3390/electronics11101661] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/05/2023]
Abstract
In recent years, intelligent sensing has gained significant attention because of its autonomous decision-making ability to solve complex problems. Today, smart sensors complement and enhance the capabilities of human beings and have been widely embraced in numerous application areas. Artificial intelligence (AI) has made astounding growth in the domains of natural language processing, machine learning (ML), and computer vision. Methods based on AI enable a computer to learn and monitor activities by sensing the source of information in a real-time environment. The combination of these two technologies provides a promising solution for intelligent sensing. This survey provides a comprehensive summary of recent research on AI-based algorithms for intelligent sensing. This work also presents a comparative analysis of algorithms, models, influential parameters, available datasets, applications and projects in the area of intelligent sensing. Furthermore, we present a taxonomy of AI models along with cutting-edge approaches. Finally, we highlight challenges and open issues, followed by future research directions pertaining to this exciting and fast-moving field.
Collapse
|
181
|
Wu W, Hu D, Cong W, Shan H, Wang S, Niu C, Yan P, Yu H, Vardhanabhuti V, Wang G. Stabilizing deep tomographic reconstruction: Part A. Hybrid framework and experimental results. PATTERNS (NEW YORK, N.Y.) 2022; 3:100474. [PMID: 35607623 PMCID: PMC9122961 DOI: 10.1016/j.patter.2022.100474] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/26/2021] [Revised: 12/24/2021] [Accepted: 03/01/2022] [Indexed: 12/16/2022]
Abstract
A recent PNAS paper reveals that several popular deep reconstruction networks are unstable. Specifically, three kinds of instabilities were reported: (1) strong image artefacts from tiny perturbations, (2) small features missed in a deeply reconstructed image, and (3) decreased imaging performance with increased input data. Here, we propose an analytic compressed iterative deep (ACID) framework to address this challenge. ACID synergizes a deep network trained on big data, kernel awareness from compressed sensing (CS)-inspired processing, and iterative refinement to minimize the data residual relative to the real measurements. Our study demonstrates that the ACID reconstruction is accurate and stable, and sheds light on the convergence mechanism of the ACID iteration under a bounded relative error norm assumption. ACID not only stabilizes an unstable deep reconstruction network but is also resilient against adversarial attacks on the whole ACID workflow, being superior to classic sparsity-regularized reconstruction and eliminating the three kinds of instabilities.
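The feedback pattern described above (a trained network alternating with compressed-sensing-style corrections that shrink the data residual) can be sketched as follows; the soft-thresholding step, the identity "network", and the random forward operator are stand-ins chosen for illustration, not the ACID components:

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def acid_like_loop(y, A, net, n_iters=20, step=0.5, thresh=0.01):
    """y: measurements, A: forward operator matrix, net: callable 'deep' reconstructor.
    Alternate the network output with a gradient step on the data residual plus a sparsity step."""
    x = net(A.T @ y)                                   # initial deep reconstruction
    for _ in range(n_iters):
        residual = A @ x - y                           # data residual w.r.t. the real measurements
        x = x - step * (A.T @ residual)                # compressed-sensing-style correction
        x = soft_threshold(x, thresh)                  # kernel-aware sparsity step
        x = 0.5 * (x + net(x))                         # re-inject the network prior
    return x

# toy usage with a random underdetermined system and an identity 'network'
rng = np.random.default_rng(1)
A = rng.standard_normal((80, 200)) / np.sqrt(80)
x_true = soft_threshold(rng.standard_normal(200), 1.0)  # sparse ground truth
y = A @ x_true
x_hat = acid_like_loop(y, A, net=lambda z: z)
```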
Collapse
Affiliation(s)
- Weiwen Wu
- Biomedical Imaging Center, Center for Biotechnology and Interdisciplinary Studies, Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA
- School of Biomedical Engineering, Sun Yat-sen University, Shenzhen, Guangdong, China
- Department of Diagnostic Radiology, Li Ka Shing Faculty of Medicine, University of Hong Kong, Hong Kong SAR, China
| | - Dianlin Hu
- The Laboratory of Image Science and Technology, Southeast University, Nanjing, China
| | - Wenxiang Cong
- Biomedical Imaging Center, Center for Biotechnology and Interdisciplinary Studies, Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA
| | - Hongming Shan
- Biomedical Imaging Center, Center for Biotechnology and Interdisciplinary Studies, Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA
- Institute of Science and Technology for Brain-inspired Intelligence, Fudan University, Shanghai, China
| | - Shaoyu Wang
- Department of Electrical & Computer Engineering, University of Massachusetts Lowell, Lowell, MA, USA
| | - Chuang Niu
- Biomedical Imaging Center, Center for Biotechnology and Interdisciplinary Studies, Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA
| | - Pingkun Yan
- Biomedical Imaging Center, Center for Biotechnology and Interdisciplinary Studies, Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA
| | - Hengyong Yu
- Department of Electrical & Computer Engineering, University of Massachusetts Lowell, Lowell, MA, USA
| | - Varut Vardhanabhuti
- Department of Diagnostic Radiology, Li Ka Shing Faculty of Medicine, University of Hong Kong, Hong Kong SAR, China
| | - Ge Wang
- Biomedical Imaging Center, Center for Biotechnology and Interdisciplinary Studies, Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA
| |
Collapse
|
182
|
End-to-End Deep Learning Approach for Perfusion Data: A Proof-of-Concept Study to Classify Core Volume in Stroke CT. Diagnostics (Basel) 2022; 12:diagnostics12051142. [PMID: 35626298 PMCID: PMC9139580 DOI: 10.3390/diagnostics12051142] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2022] [Revised: 04/26/2022] [Accepted: 05/03/2022] [Indexed: 12/17/2022] Open
Abstract
(1) Background: CT perfusion (CTP) is used to quantify cerebral hypoperfusion in acute ischemic stroke. Conventional attenuation curve analysis is not standardized and might require input from expert users, hampering clinical application. This study aims to bypass conventional tracer-kinetic analysis with an end-to-end deep learning model that directly categorizes patients by stroke core volume from raw, slice-reduced CTP data. (2) Methods: In this retrospective analysis, we included patients with acute ischemic stroke due to proximal occlusion of the anterior circulation who underwent CTP imaging. A novel convolutional neural network was implemented to extract spatial and temporal features from time-resolved imaging data. In a classification task, the network categorized patients into small or large core. In ten-fold cross-validation, the network was repeatedly trained, evaluated, and tested using the area under the receiver operating characteristic curve (ROC-AUC). A final model was created in an ensemble approach and independently validated on an external dataset. (3) Results: 217 patients were included in the training cohort and 23 patients in the independent test cohort. Median core volume was 32.4 mL and was used as the threshold value for the binary classification task. Model performance yielded a mean (SD) ROC-AUC of 0.72 (0.10) for the test folds. External independent validation resulted in an ensembled mean ROC-AUC of 0.61. (4) Conclusions: In this proof-of-concept study, the proposed end-to-end deep learning approach bypasses conventional perfusion analysis and allows prediction of dichotomized infarction core volume solely from slice-reduced CTP images without underlying tracer-kinetic assumptions. Further studies can easily extend this approach to additional clinically relevant endpoints.
Collapse
|
183
|
Liu S, Schniter P, Ahmad R. MRI RECOVERY WITH A SELF-CALIBRATED DENOISER. PROCEEDINGS OF THE ... IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING. ICASSP (CONFERENCE) 2022; 2022:1351-1355. [PMID: 35645618 PMCID: PMC9134859 DOI: 10.1109/icassp43922.2022.9746785] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/03/2023]
Abstract
Plug-and-play (PnP) methods that employ application-specific denoisers have been proposed to solve inverse problems, including MRI reconstruction. However, training application-specific denoisers is not feasible for many applications due to the lack of training data. In this work, we propose a PnP-inspired recovery method that does not require data beyond the single, incomplete set of measurements. The proposed self-supervised method, called recovery with a self-calibrated denoiser (ReSiDe), trains the denoiser from the patches of the image being recovered. The denoiser training and a call to the denoising subroutine are performed in each iteration of a PnP algorithm, leading to a progressive refinement of the reconstructed image. For validation, we compare ReSiDe with a compressed sensing-based method and a PnP method with BM3D denoising using single-coil MRI brain data.
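For context, the self-calibration idea above (training the denoiser on patches of the image currently being recovered inside each PnP iteration) can be sketched in PyTorch as follows; the tiny two-layer denoiser, patch size, and single-coil Cartesian data-consistency step are illustrative assumptions rather than the authors' implementation:

```python
import torch
import torch.nn as nn

def extract_patches(img, size=16, stride=8):
    # img: (H, W) real tensor -> (N, 1, size, size) patch stack
    return img.unfold(0, size, stride).unfold(1, size, stride).reshape(-1, 1, size, size)

def train_self_denoiser(estimate, noise_sigma=0.05, epochs=5):
    """Fit a small CNN denoiser on patches of the current estimate itself:
    patches corrupted with synthetic noise are mapped back to the original patches."""
    net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(16, 1, 3, padding=1))
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    clean = extract_patches(estimate)
    for _ in range(epochs):
        noisy = clean + noise_sigma * torch.randn_like(clean)
        loss = nn.functional.mse_loss(net(noisy), clean)
        opt.zero_grad(); loss.backward(); opt.step()
    return net

def reside_like(y, mask, n_iters=5):
    """PnP-style loop: train the denoiser from the current estimate, denoise, enforce DC."""
    x = torch.fft.ifft2(y).real
    for _ in range(n_iters):
        denoiser = train_self_denoiser(x)                      # self-calibration step
        x = denoiser(x[None, None]).squeeze().detach()         # denoising step
        k = torch.fft.fft2(torch.complex(x, torch.zeros_like(x)))
        k[mask] = y[mask]                                      # data-consistency step
        x = torch.fft.ifft2(k).real
    return x

mask = torch.rand(64, 64) < 0.4                                # toy sampling mask
y = torch.fft.fft2(torch.randn(64, 64)) * mask                 # toy undersampled measurements
recon = reside_like(y, mask)
```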
Collapse
Affiliation(s)
- Sizhuo Liu
- Department of Biomedical Engineering, Ohio State University, Columbus OH, 43210, USA
| | - Philip Schniter
- Department of Electrical and Computer Engineering, Ohio State University, Columbus OH, 43210, USA
| | - Rizwan Ahmad
- Department of Biomedical Engineering, Ohio State University, Columbus OH, 43210, USA
| |
Collapse
|
184
|
Zhang C, Moeller S, Demirel OB, Uğurbil K, Akçakaya M. Residual RAKI: A hybrid linear and non-linear approach for scan-specific k-space deep learning. Neuroimage 2022; 256:119248. [PMID: 35487456 PMCID: PMC9179026 DOI: 10.1016/j.neuroimage.2022.119248] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2021] [Revised: 04/07/2022] [Accepted: 04/23/2022] [Indexed: 10/31/2022] Open
Abstract
Parallel imaging is the most clinically used acceleration technique for magnetic resonance imaging (MRI), in part due to its easy inclusion into routine acquisitions. In k-space based parallel imaging reconstruction, sub-sampled k-space data are interpolated using linear convolutions. At high acceleration rates these methods suffer from inherent noise amplification and reduced image quality. On the other hand, non-linear deep learning methods provide improved image quality at high acceleration, but the limited availability of training databases for different scans, as well as concerns about interpretability, hinder their adoption. In this work, we present an extension of Robust Artificial-neural-networks for k-space Interpolation (RAKI), called residual-RAKI (rRAKI), which achieves scan-specific machine learning reconstruction using a hybrid linear and non-linear methodology. In rRAKI, non-linear CNNs are trained jointly with a linear convolution implemented via a skip connection. In effect, the linear part provides a baseline reconstruction, while the non-linear CNN that runs in parallel provides further reduction of artifacts and noise arising from the linear part. The explicit split between the linear and non-linear aspects of the reconstruction also helps improve interpretability compared to purely non-linear methods. Experiments were conducted on the publicly available fastMRI datasets, as well as high-resolution anatomical imaging, comparing GRAPPA and its variants, compressed sensing, RAKI, Scan Specific Artifact Reduction in K-space (SPARK) and the proposed rRAKI. Additionally, highly-accelerated simultaneous multi-slice (SMS) functional MRI reconstructions were also performed, where the proposed rRAKI was compared to Read-out SENSE-GRAPPA and RAKI. Our results show that the proposed rRAKI method substantially improves image quality compared to conventional parallel imaging, and offers sharper images compared to SPARK and ℓ1-SPIRiT. Furthermore, rRAKI shows improved preservation of time-varying dynamics compared to both parallel imaging and RAKI in highly-accelerated SMS fMRI.
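The hybrid structure described above (a linear convolution in a skip connection plus a nonlinear CNN trained jointly on the same k-space input) might be sketched as follows in PyTorch; the coil count, kernel sizes, and channel widths are illustrative assumptions, not the rRAKI settings:

```python
import torch
import torch.nn as nn

class ResidualRAKIStyle(nn.Module):
    """Sketch of a hybrid k-space interpolator: a single linear convolution (GRAPPA-like
    baseline) in a skip connection, plus a small CNN that learns a nonlinear residual.
    Real/imaginary parts of the multi-coil k-space are stacked along the channel axis."""
    def __init__(self, coils=8):
        super().__init__()
        ch = 2 * coils                                          # real + imaginary per coil
        self.linear = nn.Conv2d(ch, ch, kernel_size=5, padding=2, bias=False)
        self.nonlinear = nn.Sequential(
            nn.Conv2d(ch, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, ch, 3, padding=1),
        )

    def forward(self, undersampled_kspace):
        baseline = self.linear(undersampled_kspace)             # linear interpolation branch
        return baseline + self.nonlinear(undersampled_kspace)   # residual artifact/noise removal

out = ResidualRAKIStyle()(torch.randn(1, 16, 128, 128))         # 8 coils, real+imag channels
```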
Collapse
Affiliation(s)
- Chi Zhang
- Electrical and Computer Engineering, University of Minnesota, Minneapolis, MN 55455, USA; Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN 55455, USA
| | - Steen Moeller
- Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN 55455, USA
| | - Omer Burak Demirel
- Electrical and Computer Engineering, University of Minnesota, Minneapolis, MN 55455, USA; Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN 55455, USA
| | - Kâmil Uğurbil
- Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN 55455, USA
| | - Mehmet Akçakaya
- Electrical and Computer Engineering, University of Minnesota, Minneapolis, MN 55455, USA; Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN 55455, USA.
| |
Collapse
|
185
|
Nam S, Kim D, Jung W, Zhu Y. Understanding the Research Landscape of Deep Learning in Biomedical Science: Scientometric Analysis. J Med Internet Res 2022; 24:e28114. [PMID: 35451980 PMCID: PMC9077503 DOI: 10.2196/28114] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2021] [Revised: 05/30/2021] [Accepted: 02/20/2022] [Indexed: 11/13/2022] Open
Abstract
BACKGROUND Advances in biomedical research using deep learning techniques have generated a large volume of related literature. However, there is a lack of scientometric studies that provide a bird's-eye view of them. This absence has led to a partial and fragmented understanding of the field and its progress. OBJECTIVE This study aimed to gain a quantitative and qualitative understanding of the scientific domain by analyzing diverse bibliographic entities that represent the research landscape from multiple perspectives and levels of granularity. METHODS We searched and retrieved 978 deep learning studies in biomedicine from the PubMed database. A scientometric analysis was performed by analyzing the metadata, content of influential works, and cited references. RESULTS In the process, we identified the current leading fields, major research topics and techniques, knowledge diffusion, and research collaboration. There was a predominant focus on applying deep learning, especially convolutional neural networks, to radiology and medical imaging, whereas a few studies focused on protein or genome analysis. Radiology and medical imaging also appeared to be the most significant knowledge sources and an important field in knowledge diffusion, followed by computer science and electrical engineering. A coauthorship analysis revealed various collaborations among engineering-oriented and biomedicine-oriented clusters of disciplines. CONCLUSIONS This study investigated the landscape of deep learning research in biomedicine and confirmed its interdisciplinary nature. Although it has been successful, we believe that there is a need for diverse applications in certain areas to further boost the contributions of deep learning in addressing biomedical research problems. We expect the results of this study to help researchers and communities better align their present and future work.
Collapse
Affiliation(s)
- Seojin Nam
- Department of Library and Information Science, Sungkyunkwan University, Seoul, Republic of Korea
| | - Donghun Kim
- Department of Library and Information Science, Sungkyunkwan University, Seoul, Republic of Korea
| | - Woojin Jung
- Department of Library and Information Science, Sungkyunkwan University, Seoul, Republic of Korea
| | - Yongjun Zhu
- Department of Library and Information Science, Yonsei University, Seoul, Republic of Korea
| |
Collapse
|
186
|
Berhane H, Scott MB, Barker AJ, McCarthy P, Avery R, Allen B, Malaisrie C, Robinson JD, Rigsby CK, Markl M. Deep learning-based velocity antialiasing of 4D-flow MRI. Magn Reson Med 2022; 88:449-463. [PMID: 35381116 PMCID: PMC9050855 DOI: 10.1002/mrm.29205] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2021] [Revised: 01/13/2022] [Accepted: 02/07/2022] [Indexed: 01/03/2023]
Abstract
Purpose To develop a convolutional neural network (CNN) for the robust and fast correction of velocity aliasing in 4D-flow MRI. Methods This study included 667 adult subjects with aortic 4D-flow MRI data with existing velocity aliasing (n = 362) and no velocity aliasing (n = 305). Additionally, 10 controls received back-to-back 4D-flow scans with systematically varied velocity-encoding sensitivity (vencs) at 60, 100, and 175 cm/s. The no-aliasing data sets were used to simulate velocity aliasing by reducing the venc to 40%-70% of the original, alongside a ground truth locating all aliased voxels (153 training, 152 testing). The 152 simulated and 362 existing aliasing data sets were used for testing and compared with a conventional velocity antialiasing algorithm. Dice scores were calculated to quantify CNN performance. For controls, the venc 175-cm/s scans were used as the ground truth and compared with the CNN-corrected venc 60 and 100 cm/s data sets. Results The CNN required 176 ± 30 s to run, compared with 162 ± 14 s for the conventional algorithm. The CNN showed excellent performance for the simulated data compared with the conventional algorithm (median range of Dice scores CNN: [0.89-0.99], conventional algorithm: [0.84-0.94], p < 0.001, across all simulated vencs) and detected more aliased voxels in existing velocity aliasing data sets (median detected CNN: 159 voxels [31-605], conventional algorithm: 65 [7-417], p < 0.001). For controls, the CNN showed Dice scores of 0.98 [0.95-0.99] and 0.96 [0.87-0.99] for venc = 60 cm/s and 100 cm/s, respectively, while flow comparisons showed moderate-excellent agreement. Conclusion Deep learning enabled fast and robust velocity anti-aliasing in 4D-flow MRI.
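Once aliased voxels have been located (by the CNN or otherwise), the correction itself amounts to shifting wrapped velocities by plus or minus twice the venc; a minimal NumPy sketch of this single-wrap correction, assuming only one wrap per voxel, is shown below:

```python
import numpy as np

def unalias_velocity(velocity, aliased_mask, venc):
    """Correct velocity aliasing at flagged voxels: a wrapped value is shifted by
    +/- 2*venc so that it moves back outside the measurable range [-venc, venc]."""
    corrected = velocity.copy()
    wraps = aliased_mask & (velocity < 0)          # negative wrap -> add 2*venc
    corrected[wraps] += 2 * venc
    wraps = aliased_mask & (velocity >= 0)         # positive wrap -> subtract 2*venc
    corrected[wraps] -= 2 * venc
    return corrected

venc = 60.0                                        # cm/s
true_v = np.array([30.0, 75.0, -90.0])             # 75 and -90 exceed the venc
measured = ((true_v + venc) % (2 * venc)) - venc   # wrapped (aliased) measurements
mask = np.array([False, True, True])               # e.g. aliased voxels flagged by the CNN
print(unalias_velocity(measured, mask, venc))      # -> [ 30.  75. -90.]
```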
Collapse
Affiliation(s)
- Haben Berhane
- Department of Biomedical Engineering, Northwestern University, Evanston, Illinois, USA
- Department of Radiology, Northwestern Medicine, Chicago, Illinois, USA
| | - Michael B. Scott
- Department of Biomedical Engineering, Northwestern University, Evanston, Illinois, USA
- Department of Radiology, Northwestern Medicine, Chicago, Illinois, USA
| | - Alex J. Barker
- Anschutz Medical Campus, University of Colorado, Aurora, Colorado, USA
| | - Patrick McCarthy
- Division of Cardiac Surgery, Northwestern Medicine, Chicago, Illinois, USA
| | - Ryan Avery
- Department of Radiology, Northwestern Medicine, Chicago, Illinois, USA
| | - Brad Allen
- Department of Radiology, Northwestern Medicine, Chicago, Illinois, USA
| | - Chris Malaisrie
- Division of Cardiac Surgery, Northwestern Medicine, Chicago, Illinois, USA
| | - Joshua D. Robinson
- Department of Medical Imaging, Lurie Children's Hospital of Chicago, Chicago, Illinois, USA
| | - Cynthia K. Rigsby
- Department of Radiology, Northwestern Medicine, Chicago, Illinois, USA
- Department of Medical Imaging, Lurie Children's Hospital of Chicago, Chicago, Illinois, USA
| | - Michael Markl
- Department of Biomedical Engineering, Northwestern University, Evanston, Illinois, USA
- Department of Radiology, Northwestern Medicine, Chicago, Illinois, USA
| |
Collapse
|
187
|
Peng Y, Chen Z, Zhu W, Shi F, Wang M, Zhou Y, Xiang D, Chen X, Chen F. Automatic zoning for retinopathy of prematurity with semi-supervised feature calibration adversarial learning. BIOMEDICAL OPTICS EXPRESS 2022; 13:1968-1984. [PMID: 35519283 PMCID: PMC9045915 DOI: 10.1364/boe.447224] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/01/2021] [Revised: 01/05/2022] [Accepted: 02/09/2022] [Indexed: 06/14/2023]
Abstract
Retinopathy of prematurity (ROP) is an eye disease that affects prematurely born infants with low birth weight and is one of the main causes of childhood blindness globally. In recent years, there have been many studies on automatic ROP diagnosis, mainly focusing on ROP screening such as "Yes/No ROP" or "Mild/Severe ROP" and presence/absence detection of "plus disease". Due to the lack of corresponding high-quality annotations, there are few studies on ROP zoning, which is one of the important indicators for evaluating the severity of ROP. Moreover, how to effectively utilize unlabeled data for model training is also worth studying. Therefore, we propose a novel semi-supervised feature calibration adversarial learning network (SSFC-ALN) for 3-level ROP zoning, which consists of two subnetworks: a generative network and a compound network. The generative network is a U-shape network for producing reconstructed images, and its output is taken as one of the inputs of the compound network. The compound network is obtained by extending a common classification network with a discriminator, introducing an adversarial mechanism into the whole training process. The definition of ROP indicates where and what to focus on in the fundus images, which is similar to the attention mechanism. Therefore, to further improve classification performance, a new attention-mechanism-based feature calibration module (FCM) is designed and embedded in the compound network. The proposed method was evaluated on 1013 fundus images of 108 patients with a 3-fold cross-validation strategy. Compared with other state-of-the-art classification methods, the proposed method achieves high classification performance.
Collapse
Affiliation(s)
- Yuanyuan Peng
- MIPAV Lab, School of Electronics and Information Engineering, Soochow University, Suzhou, Jiangsu 215006, China
| | - Zhongyue Chen
- MIPAV Lab, School of Electronics and Information Engineering, Soochow University, Suzhou, Jiangsu 215006, China
| | - Weifang Zhu
- MIPAV Lab, School of Electronics and Information Engineering, Soochow University, Suzhou, Jiangsu 215006, China
| | - Fei Shi
- MIPAV Lab, School of Electronics and Information Engineering, Soochow University, Suzhou, Jiangsu 215006, China
| | - Meng Wang
- MIPAV Lab, School of Electronics and Information Engineering, Soochow University, Suzhou, Jiangsu 215006, China
| | - Yi Zhou
- MIPAV Lab, School of Electronics and Information Engineering, Soochow University, Suzhou, Jiangsu 215006, China
| | - Daoman Xiang
- Guangzhou Women and Children's Medical Center, Guangzhou 510623, China
| | - Xinjian Chen
- MIPAV Lab, School of Electronics and Information Engineering, Soochow University, Suzhou, Jiangsu 215006, China
- State Key Laboratory of Radiation Medicine and Protection, Soochow University, Suzhou 215123, China
| | - Feng Chen
- Guangzhou Women and Children's Medical Center, Guangzhou 510623, China
| |
Collapse
|
188
|
Zufiria B, Qiu S, Yan K, Zhao R, Wang R, She H, Zhang C, Sun B, Herman P, Du Y, Feng Y. A feature-based convolutional neural network for reconstruction of interventional MRI. NMR IN BIOMEDICINE 2022; 35:e4231. [PMID: 31856431 DOI: 10.1002/nbm.4231] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/30/2019] [Revised: 11/04/2019] [Accepted: 11/05/2019] [Indexed: 06/10/2023]
Abstract
Real-time interventional MRI (I-MRI) could help to visualize the position of the interventional feature, thus improving patient outcomes in MR-guided neurosurgery. In particular, in deep brain stimulation, real-time visualization of the intervention procedure using I-MRI could improve the accuracy of electrode placement. However, the requirements of a high undersampling rate and fast reconstruction speed for real-time imaging pose a great challenge for reconstruction of the interventional images. Based on recent advances in deep learning (DL), we proposed a feature-based convolutional neural network (FbCNN) for reconstructing interventional images from golden-angle radially sampled data. The method is composed of two stages: (a) reconstruction of the interventional feature and (b) feature refinement and postprocessing. With only five radially sampled spokes, the interventional feature was reconstructed with a cascade CNN. The final interventional image was constructed from the refined feature and a fully sampled reference image. In a comparison with traditional reconstruction techniques and recent DL-based methods, only FbCNN was able to reconstruct both the interventional feature and the final interventional image. With a reconstruction time of ~500 ms per frame and an acceleration factor of ~80, FbCNN demonstrated its potential for application in real-time I-MRI.
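For reference, the golden-angle radial sampling pattern mentioned above can be generated with a few lines of NumPy; the matrix size and the acceleration estimate in the docstring are assumptions used only to relate five spokes to the ~80x acceleration quoted in the abstract:

```python
import numpy as np

GOLDEN_ANGLE = np.pi / ((1 + np.sqrt(5)) / 2)           # ~111.25 degrees, radial MRI golden angle

def golden_angle_spokes(n_spokes=5, samples_per_spoke=256, start_index=0):
    """Return normalized k-space coordinates (kx, ky) for consecutive golden-angle spokes.
    With an assumed 256 matrix, full radial sampling needs ~pi/2 * 256 ~ 402 spokes, so
    5 spokes correspond to roughly 80x undersampling."""
    angles = (start_index + np.arange(n_spokes)) * GOLDEN_ANGLE
    radius = np.linspace(-0.5, 0.5, samples_per_spoke)   # normalized radial coordinate
    kx = radius[None, :] * np.cos(angles)[:, None]
    ky = radius[None, :] * np.sin(angles)[:, None]
    return kx, ky                                        # each of shape (n_spokes, samples_per_spoke)

kx, ky = golden_angle_spokes()
```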
Collapse
Affiliation(s)
- Blanca Zufiria
- Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- KTH School of Engineering Sciences in Chemistry, Biotechnology and Health, KTH Royal Institute of Technology, Stockholm, Sweden
| | - Suhao Qiu
- Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
| | - Kang Yan
- Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
| | - Ruiyang Zhao
- Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
| | - Runke Wang
- Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
| | - Huajun She
- Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
| | - Chengcheng Zhang
- Department of Functional Neurosurgery, Ruijin Hospital affiliated to Shanghai Jiaotong University School of Medicine, Shanghai, China
| | - Bomin Sun
- Department of Functional Neurosurgery, Ruijin Hospital affiliated to Shanghai Jiaotong University School of Medicine, Shanghai, China
| | - Pawel Herman
- Division of Computational Science and Technology, KTH Royal Institute of Technology, Stockholm, Sweden
| | - Yiping Du
- Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
| | - Yuan Feng
- Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
| |
Collapse
|
189
|
Gong K, Han PK, El Fakhri G, Ma C, Li Q. Arterial spin labeling MR image denoising and reconstruction using unsupervised deep learning. NMR IN BIOMEDICINE 2022; 35:e4224. [PMID: 31865615 PMCID: PMC7306418 DOI: 10.1002/nbm.4224] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/06/2019] [Revised: 10/21/2019] [Accepted: 10/22/2019] [Indexed: 05/07/2023]
Abstract
Arterial spin labeling (ASL) imaging is a powerful magnetic resonance imaging technique that allows quantitative, non-invasive measurement of blood perfusion, and it has great potential for assessing tissue viability in various clinical settings. However, the clinical applications of ASL are currently limited by its low signal-to-noise ratio (SNR), limited spatial resolution, and long imaging time. In this work, we propose an unsupervised deep learning-based image denoising and reconstruction framework to improve the SNR and accelerate the imaging speed of high-resolution ASL imaging. The unique feature of the proposed framework is that it does not require any prior training pairs but only the subject's own anatomical prior, such as T1-weighted images, as network input. The neural network was trained from scratch during the denoising or reconstruction process, with noisy images or sparsely sampled k-space data as training labels. Performance of the proposed method was evaluated using in vivo experimental data obtained from 3 healthy subjects on a 3T MR scanner, using ASL images acquired with a 44-min acquisition time as the ground truth. Both qualitative and quantitative analyses demonstrate the superior performance of the proposed framework over the reference methods. In summary, our proposed unsupervised deep learning-based denoising and reconstruction framework can improve image quality and accelerate the imaging speed of ASL imaging.
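The training-data-free idea above resembles a deep-image-prior setup in which the subject's own anatomical image is the network input and the noisy ASL image is the label; the following PyTorch sketch illustrates that pattern with a deliberately small network and toy data, and is not the authors' architecture:

```python
import torch
import torch.nn as nn

def unsupervised_denoise(anatomical_prior, noisy_asl, iters=500, lr=1e-3):
    """Deep-image-prior-style sketch: a network is trained from scratch with the subject's
    own T1-weighted image as input and the noisy ASL image as the label; stopping early
    yields a denoised ASL estimate without any external training pairs."""
    net = nn.Sequential(
        nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(32, 1, 3, padding=1),
    )
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(iters):
        pred = net(anatomical_prior)
        loss = nn.functional.mse_loss(pred, noisy_asl)
        opt.zero_grad(); loss.backward(); opt.step()
    return net(anatomical_prior).detach()

t1w = torch.randn(1, 1, 64, 64)                   # anatomical prior (toy data)
asl = torch.randn(1, 1, 64, 64)                   # noisy ASL image (toy data)
denoised = unsupervised_denoise(t1w, asl, iters=50)
```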
Collapse
Affiliation(s)
| | | | | | - Chao Ma
- Correspondence: Chao Ma and Quanzheng Li, Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
| | - Quanzheng Li
- Correspondence: Chao Ma and Quanzheng Li, Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
| |
Collapse
|
190
|
Pawar K, Chen Z, Shah NJ, Egan GF. Suppressing motion artefacts in MRI using an Inception-ResNet network with motion simulation augmentation. NMR IN BIOMEDICINE 2022; 35:e4225. [PMID: 31865624 DOI: 10.1002/nbm.4225] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/03/2019] [Revised: 10/24/2019] [Accepted: 10/24/2019] [Indexed: 06/10/2023]
Abstract
The suppression of motion artefacts from MR images is a challenging task. The purpose of this paper was to develop a standalone novel technique to suppress motion artefacts in MR images using a data-driven deep learning approach. A simulation framework was developed to generate motion-corrupted images from motion-free images using randomly generated motion profiles. An Inception-ResNet deep learning network architecture was used as the encoder and was augmented with a stack of convolution and upsampling layers to form an encoder-decoder network. The network was trained on simulated motion-corrupted images to identify and suppress those artefacts attributable to motion. The network was validated on unseen simulated datasets and real-world experimental motion-corrupted in vivo brain datasets. The trained network was able to suppress the motion artefacts in the reconstructed images, and the mean structural similarity (SSIM) increased from 0.9058 to 0.9338. The network was also able to suppress the motion artefacts from the real-world experimental dataset, and the mean SSIM increased from 0.8671 to 0.9145. The motion correction of the experimental datasets demonstrated the effectiveness of the motion simulation generation process. The proposed method successfully removed motion artefacts and outperformed an iterative entropy minimization method in terms of the SSIM index and normalized root mean squared error, which were 5-10% better for the proposed method. In conclusion, a novel, data-driven motion correction technique has been developed that can suppress motion artefacts from motion-corrupted MR images. The proposed technique is a standalone, post-processing method that does not interfere with data acquisition or reconstruction parameters, thus making it suitable for routine clinical practice.
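A simple way to generate motion-corrupted training images of the kind described above is to compose k-space segments from randomly translated copies of a motion-free image, exploiting the Fourier shift theorem; the sketch below (NumPy, rigid in-plane translations only) illustrates this simulation idea under those simplifying assumptions:

```python
import numpy as np

def simulate_motion_corruption(image, n_segments=8, max_shift=3.0, seed=0):
    """Generate a motion-corrupted image from a motion-free one: groups of phase-encoding
    lines are taken from randomly translated copies of the image, mimicking inter-shot
    rigid motion during the acquisition."""
    rng = np.random.default_rng(seed)
    ny, nx = image.shape
    k_corrupted = np.zeros((ny, nx), dtype=complex)
    bounds = np.linspace(0, ny, n_segments + 1).astype(int)
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        dy, dx = rng.uniform(-max_shift, max_shift, size=2)     # random shift for this shot
        ky = np.fft.fftfreq(ny)[:, None]
        kx = np.fft.fftfreq(nx)[None, :]
        phase = np.exp(-2j * np.pi * (ky * dy + kx * dx))       # shift theorem in k-space
        k_shifted = np.fft.fft2(image) * phase
        k_corrupted[lo:hi, :] = k_shifted[lo:hi, :]             # keep only this segment's lines
    return np.abs(np.fft.ifft2(k_corrupted))

clean = np.zeros((128, 128)); clean[40:90, 40:90] = 1.0         # toy phantom
corrupted = simulate_motion_corruption(clean)
```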
Collapse
Affiliation(s)
- Kamlesh Pawar
- Monash Biomedical Imaging, Monash University, Melbourne, Australia
- School of Psychological Sciences, Monash University, Melbourne, Australia
| | - Zhaolin Chen
- Monash Biomedical Imaging, Monash University, Melbourne, Australia
| | - N Jon Shah
- Monash Biomedical Imaging, Monash University, Melbourne, Australia
- Research Centre Jülich, Institute of Medicine, Jülich, Germany
| | - Gary F Egan
- Monash Biomedical Imaging, Monash University, Melbourne, Australia
- School of Psychological Sciences, Monash University, Melbourne, Australia
| |
Collapse
|
191
|
Wang S, Ke Z, Cheng H, Jia S, Ying L, Zheng H, Liang D. DIMENSION: Dynamic MR imaging with both k-space and spatial prior knowledge obtained via multi-supervised network training. NMR IN BIOMEDICINE 2022; 35:e4131. [PMID: 31482598 DOI: 10.1002/nbm.4131] [Citation(s) in RCA: 40] [Impact Index Per Article: 13.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/16/2019] [Revised: 05/21/2019] [Accepted: 05/22/2019] [Indexed: 06/10/2023]
Abstract
Dynamic MR image reconstruction from incomplete k-space data has generated great research interest due to its capability of reducing scan time. Nevertheless, the reconstruction problem is still challenging due to its ill-posed nature. Most existing methods either suffer from long iterative reconstruction times or explore limited prior knowledge. This paper proposes a dynamic MR imaging method with both k-space and spatial prior knowledge integrated via multi-supervised network training, dubbed DIMENSION. Specifically, the DIMENSION architecture consists of a frequential prior network for updating the k-space with its network prediction and a spatial prior network for capturing image structures and details. Furthermore, a multi-supervised network training technique is developed to constrain the frequency domain information and the spatial domain information. Comparisons with the classical k-t FOCUSS, k-t SLR, L+S and state-of-the-art CNN-based methods on in vivo datasets show that our method can achieve improved reconstruction results in a shorter time.
Collapse
Affiliation(s)
- Shanshan Wang
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Ziwen Ke
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China
| | - Huitao Cheng
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Sen Jia
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Leslie Ying
- Department of Biomedical Engineering and the Department of Electrical Engineering, The State University of New York, Buffalo, NY, USA
| | - Hairong Zheng
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Dong Liang
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| |
Collapse
|
192
|
SOUP-GAN: Super-Resolution MRI Using Generative Adversarial Networks. Tomography 2022; 8:905-919. [PMID: 35448707 PMCID: PMC9027099 DOI: 10.3390/tomography8020073] [Citation(s) in RCA: 30] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2022] [Revised: 03/19/2022] [Accepted: 03/21/2022] [Indexed: 11/16/2022] Open
Abstract
There is a growing demand for high-resolution (HR) medical images for both clinical and research applications. Image quality is inevitably traded off with acquisition time, which in turn impacts patient comfort, examination costs, dose, and motion-induced artifacts. For many image-based tasks, increasing the apparent spatial resolution in the perpendicular plane to produce multi-planar reformats or 3D images is commonly used. Single-image super-resolution (SR) is a promising technique to provide HR images based on deep learning to increase the resolution of a 2D image, but there are few reports on 3D SR. Further, perceptual loss is proposed in the literature to better capture the textural details and edges versus pixel-wise loss functions, by comparing the semantic distances in the high-dimensional feature space of a pre-trained 2D network (e.g., VGG). However, it is not clear how one should generalize it to 3D medical images, and the attendant implications are unclear. In this paper, we propose a framework called SOUP-GAN: Super-resolution Optimized Using Perceptual-tuned Generative Adversarial Network (GAN), in order to produce thinner slices (e.g., higher resolution in the ‘Z’ plane) with anti-aliasing and deblurring. The proposed method outperforms other conventional resolution-enhancement methods and previous SR work on medical images based on both qualitative and quantitative comparisons. Moreover, we examine the model in terms of its generalization for arbitrarily user-selected SR ratios and imaging modalities. Our model shows promise as a novel 3D SR interpolation technique, providing potential applications for both clinical and research applications.
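One plausible way to generalize a 2D perceptual loss to 3D volumes, raised as an open question above, is to fold the slice dimension into the batch and compare VGG feature maps slice by slice; this PyTorch sketch is only an illustrative assumption (the layer cut-off, channel replication, and averaging strategy are arbitrary choices, and pre-trained VGG weights are downloaded by torchvision), not the SOUP-GAN formulation:

```python
import torch
import torchvision

class SliceWisePerceptualLoss(torch.nn.Module):
    """One possible 3D extension of 2D perceptual loss: run a pre-trained 2D VGG on each
    axial slice of the volume and average the feature-space distances across slices."""
    def __init__(self, layers=16):
        super().__init__()
        vgg = torchvision.models.vgg16(weights=torchvision.models.VGG16_Weights.IMAGENET1K_V1)
        self.features = vgg.features[:layers].eval()
        for p in self.features.parameters():
            p.requires_grad_(False)                    # loss network stays frozen

    def forward(self, pred_vol, target_vol):
        # volumes: (B, 1, D, H, W); fold depth into the batch and repeat to 3 channels
        b, _, d, h, w = pred_vol.shape
        pred = pred_vol.permute(0, 2, 1, 3, 4).reshape(b * d, 1, h, w).repeat(1, 3, 1, 1)
        target = target_vol.permute(0, 2, 1, 3, 4).reshape(b * d, 1, h, w).repeat(1, 3, 1, 1)
        return torch.nn.functional.mse_loss(self.features(pred), self.features(target))

loss = SliceWisePerceptualLoss()(torch.rand(1, 1, 8, 64, 64), torch.rand(1, 1, 8, 64, 64))
```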
Collapse
|
193
|
Liu X, Du H, Xu J, Qiu B. DBGAN: A dual-branch generative adversarial network for undersampled MRI reconstruction. Magn Reson Imaging 2022; 89:77-91. [PMID: 35339616 DOI: 10.1016/j.mri.2022.03.003] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2021] [Revised: 01/17/2022] [Accepted: 03/19/2022] [Indexed: 11/28/2022]
Abstract
Compressed sensing magnetic resonance imaging (CS-MRI) greatly accelerates the acquisition process and can yield high-quality reconstructed images. Deep learning was introduced into CS-MRI to further speed up the reconstruction process and improve image quality. Recently, generative adversarial networks (GANs) using a two-stage cascaded U-Net structure as the generator have been proven effective in MRI reconstruction. However, previous cascaded structures were limited to few feature-propagation channels and thus may lose information. In this paper, we propose a GAN-based model, DBGAN, for MRI reconstruction from undersampled k-space data. The model uses a cross-stage skip connection (CSSC) between two end-to-end cascaded U-Nets in our generator to widen the channels of feature propagation. To avoid discrepancy between training and inference, we replaced classical batch normalization (BN) with instance normalization (IN). A stage loss is included in the loss function to boost training performance. In addition, a bilinear interpolation decoder branch is introduced in the generator to supplement the missing information of the deconvolution decoder. Tested under five variant patterns with four undersampling rates on different modalities of MRI data, the quantitative results show that the DBGAN model achieves mean improvements of 3.65 dB in peak signal-to-noise ratio (PSNR) and 0.016 in normalized mean square error (NMSE) compared with state-of-the-art GAN-based methods on the T1-weighted brain dataset from the MICCAI 2013 grand challenge. The qualitative visual results show that our method reconstructs high-quality images on brain and knee MRI data of different modalities. Furthermore, DBGAN is light and fast: its parameters are fewer than half of those of state-of-the-art GAN-based methods, and each 256 × 256 image is reconstructed in 60 milliseconds, which is suitable for real-time processing.
Collapse
Affiliation(s)
- Xianzhe Liu
- Center for Biomedical Image, University of Science and Technology of China, Hefei, Anhui 230026, China
| | - Hongwei Du
- Center for Biomedical Image, University of Science and Technology of China, Hefei, Anhui 230026, China.
| | - Jinzhang Xu
- School of Electrical Engineering and Automation, Hefei University of Technology, Hefei, Anhui 230009, China
| | - Bensheng Qiu
- Center for Biomedical Image, University of Science and Technology of China, Hefei, Anhui 230026, China
| |
Collapse
|
194
|
Antunes N, Ferreira JC, Cardoso E. Generating personalized business card designs from images. MULTIMEDIA TOOLS AND APPLICATIONS 2022; 81:25051-25073. [PMID: 35342325 PMCID: PMC8939401 DOI: 10.1007/s11042-022-12416-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/02/2020] [Revised: 04/04/2021] [Accepted: 01/25/2022] [Indexed: 06/14/2023]
Abstract
Rising competition in the retail and hospitality sectors, especially in densely populated and touristic destinations is a growing concern for many business owners, who wish to deliver their brand communication strategy to the target audience. Many of these businesses rely on word-of-mouth marketing, delivering business cards to customers. Furthermore, the lack of a dedicated marketing team and budget for brand image consolidation and design creation often limits the brand expansion capability. The purpose of this study is to propose a novel system prototype that can suggest personalized designs for business cards, based on an existing business card picture. Using perspective transformation, text extraction and colour reduction techniques, we were able to obtain features from the original business card image and generate an alternative design, personalized for the end user. We have successfully been able to generate customized business cards for different business types, with textual information and a custom colour palette matching the original submitted image. All of the system modules were demonstrated to have positive results for the test cases and the proposal answered the main research question. Further research and development is required to adapt the current system to other marketing printouts, such as flyers or posters.
Collapse
Affiliation(s)
- Nuno Antunes
- ISTAR, Instituto Universitário de Lisboa (ISCTE-IUL), 1649-026 Lisboa, Portugal
- INOV Instituto de Engenharia de Sistemas e Computadores Inovação, Rua Alves Redol, 9, 1000-029 Lisbon, Portugal
| | - João Carlos Ferreira
- ISTAR, Instituto Universitário de Lisboa (ISCTE-IUL), 1649-026 Lisboa, Portugal
- INOV Instituto de Engenharia de Sistemas e Computadores Inovação, Rua Alves Redol, 9, 1000-029 Lisbon, Portugal
| | - Elsa Cardoso
- ISTAR, Instituto Universitário de Lisboa (ISCTE-IUL), 1649-026 Lisboa, Portugal
| |
Collapse
|
195
|
Feng J, Zhang W, Li Z, Jia K, Jiang S, Dehghani H, Pogue BW, Paulsen KD. Deep-learning based image reconstruction for MRI-guided near-infrared spectral tomography. OPTICA 2022; 9:264-267. [PMID: 35340570 PMCID: PMC8952193 DOI: 10.1364/optica.446576] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/02/2021] [Accepted: 01/12/2022] [Indexed: 05/02/2023]
Abstract
Non-invasive near-infrared spectral tomography (NIRST) can incorporate the structural information provided by simultaneous magnetic resonance imaging (MRI), and this significantly improves the resulting images of tissue function. However, the process of MRI guidance in NIRST has been time consuming because of the need for tissue-type segmentation and forward diffuse modeling of light propagation. To overcome these problems, a reconstruction algorithm for MRI-guided NIRST based on deep learning is proposed and validated with simulation and real patient imaging data for breast cancer characterization. In this approach, diffuse optical signals and MRI images were both used as inputs to the neural network, which simultaneously recovered the concentrations of oxy-hemoglobin, deoxy-hemoglobin, and water via end-to-end training on 20,000 sets of computer-generated simulation phantoms. The simulation phantom studies showed that the quality of the reconstructed images was improved compared to that obtained by other existing reconstruction methods. Reconstructed patient images show that the neural network, trained only on simulated data sets, can be directly used for differentiating malignant from benign breast tumors.
Collapse
Affiliation(s)
- Jinchao Feng
- Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Beijing Laboratory of Advanced Information Networks, Beijing 100124, China
- Thayer School of Engineering, Dartmouth College, Hanover, New Hampshire 03755, USA
| | - Wanlong Zhang
- Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Beijing Laboratory of Advanced Information Networks, Beijing 100124, China
| | - Zhe Li
- Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Beijing Laboratory of Advanced Information Networks, Beijing 100124, China
| | - Kebin Jia
- Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Beijing Laboratory of Advanced Information Networks, Beijing 100124, China
| | - Shudong Jiang
- Thayer School of Engineering, Dartmouth College, Hanover, New Hampshire 03755, USA
| | - Hamid Dehghani
- School of Computer Science, University of Birmingham, Birmingham, B15 2TT, UK
| | - Brian W. Pogue
- Thayer School of Engineering, Dartmouth College, Hanover, New Hampshire 03755, USA
| | - Keith D. Paulsen
- Thayer School of Engineering, Dartmouth College, Hanover, New Hampshire 03755, USA
| |
Collapse
|
196
|
Ismail TF, Strugnell W, Coletti C, Božić-Iven M, Weingärtner S, Hammernik K, Correia T, Küstner T. Cardiac MR: From Theory to Practice. Front Cardiovasc Med 2022; 9:826283. [PMID: 35310962 PMCID: PMC8927633 DOI: 10.3389/fcvm.2022.826283] [Citation(s) in RCA: 27] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2021] [Accepted: 01/17/2022] [Indexed: 01/10/2023] Open
Abstract
Cardiovascular disease (CVD) is the leading single cause of morbidity and mortality, causing over 17.9 million deaths worldwide per year with associated costs of over $800 billion. Improving prevention, diagnosis, and treatment of CVD is therefore a global priority. Cardiovascular magnetic resonance (CMR) has emerged as a clinically important technique for the assessment of cardiovascular anatomy, function, perfusion, and viability. However, diversity and complexity of imaging, reconstruction and analysis methods pose some limitations to the widespread use of CMR. Especially in view of recent developments in the field of machine learning that provide novel solutions to address existing problems, it is necessary to bridge the gap between the clinical and scientific communities. This review covers five essential aspects of CMR to provide a comprehensive overview ranging from CVDs to CMR pulse sequence design, acquisition protocols, motion handling, image reconstruction and quantitative analysis of the obtained data. (1) The basic MR physics of CMR is introduced. Basic pulse sequence building blocks that are commonly used in CMR imaging are presented. Sequences containing these building blocks are formed for parametric mapping and functional imaging techniques. Commonly perceived artifacts and potential countermeasures are discussed for these methods. (2) CMR methods for identifying CVDs are illustrated. Basic anatomy and functional processes are described to understand the cardiac pathologies and how they can be captured by CMR imaging. (3) The planning and conduct of a complete CMR exam which is targeted for the respective pathology is shown. Building blocks are illustrated to create an efficient and patient-centered workflow. Further strategies to cope with challenging patients are discussed. (4) Imaging acceleration and reconstruction techniques are presented that enable acquisition of spatial, temporal, and parametric dynamics of the cardiac cycle. The handling of respiratory and cardiac motion strategies as well as their integration into the reconstruction processes is showcased. (5) Recent advances on deep learning-based reconstructions for this purpose are summarized. Furthermore, an overview of novel deep learning image segmentation and analysis methods is provided with a focus on automatic, fast and reliable extraction of biomarkers and parameters of clinical relevance.
Collapse
Affiliation(s)
- Tevfik F. Ismail
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
- Cardiology Department, Guy's and St Thomas' Hospital, London, United Kingdom
| | - Wendy Strugnell
- Queensland X-Ray, Mater Hospital Brisbane, Brisbane, QLD, Australia
| | - Chiara Coletti
- Magnetic Resonance Systems Lab, Delft University of Technology, Delft, Netherlands
| | - Maša Božić-Iven
- Magnetic Resonance Systems Lab, Delft University of Technology, Delft, Netherlands
- Computer Assisted Clinical Medicine, Heidelberg University, Mannheim, Germany
| | - Sebastian Weingärtner
| | - Kerstin Hammernik
- Lab for AI in Medicine, Technical University of Munich, Munich, Germany
- Department of Computing, Imperial College London, London, United Kingdom
| | - Teresa Correia
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
- Centre of Marine Sciences, Faro, Portugal
| | - Thomas Küstner
- Medical Image and Data Analysis (MIDAS.lab), Department of Diagnostic and Interventional Radiology, University Hospital of Tübingen, Tübingen, Germany
| |
Collapse
|
197
|
Yurt M, Özbey M, Dar SUH, Tinaz B, Oguz KK, Çukur T. Progressively Volumetrized Deep Generative Models for Data-Efficient Contextual Learning of MR Image Recovery. Med Image Anal 2022; 78:102429. [DOI: 10.1016/j.media.2022.102429] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2021] [Revised: 03/14/2022] [Accepted: 03/18/2022] [Indexed: 10/18/2022]
|
198
|
Chen Z, Chen Y, Xie Y, Li D, Christodoulou AG. Data-Consistent non-Cartesian deep subspace learning for efficient dynamic MR image reconstruction. PROCEEDINGS. IEEE INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING 2022; 2022:10.1109/isbi52829.2022.9761497. [PMID: 35572068 PMCID: PMC9104888 DOI: 10.1109/isbi52829.2022.9761497] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/13/2023]
Abstract
Non-Cartesian sampling with subspace-constrained image reconstruction is a popular approach to dynamic MRI, but slow iterative reconstruction limits its clinical application. Data-consistent (DC) deep learning can accelerate reconstruction with good image quality, but it has not been formulated for non-Cartesian subspace imaging. In this study, we propose a DC non-Cartesian deep subspace learning framework for fast, accurate dynamic MR image reconstruction. Four novel DC formulations are developed and evaluated: two gradient descent approaches, a directly solved approach, and a conjugate gradient approach. We applied a U-Net model with and without DC layers to reconstruct T1-weighted images for cardiac MR Multitasking (an advanced multidimensional imaging method), comparing our results to the iteratively reconstructed reference. Experimental results show that the proposed framework significantly improves reconstruction accuracy over the U-Net model without DC, while significantly accelerating the reconstruction relative to conventional iterative reconstruction.
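To make the data-consistency idea concrete, here is a hedged sketch of the simplest kind of DC formulation mentioned above: a gradient-descent step on the data-fidelity term interleaved with a learned denoiser. The toy dense operator `A` (standing in for a non-Cartesian subspace encoding operator), the identity `denoise` placeholder, and the step size are assumptions, not the authors' implementation.

```python
# Toy unrolled reconstruction: alternate a network step with a DC gradient step.
import numpy as np

def dc_gradient_step(x, y, A, lam):
    """One gradient step on ||A x - y||^2: x <- x - lam * A^H (A x - y)."""
    return x - lam * (A.conj().T @ (A @ x - y))

def denoise(x):
    """Placeholder for the learned network (e.g., a U-Net on subspace coefficients)."""
    return x  # identity stand-in

rng = np.random.default_rng(1)
n_meas, n_coeff = 64, 16          # measurements vs. subspace coefficients
A = rng.standard_normal((n_meas, n_coeff)) + 1j * rng.standard_normal((n_meas, n_coeff))
x_true = rng.standard_normal(n_coeff) + 1j * rng.standard_normal(n_coeff)
y = A @ x_true                     # noiseless "k-space" data for the toy example

x = np.zeros(n_coeff, dtype=complex)
lam = 1.0 / np.linalg.norm(A, 2) ** 2   # step size from the operator norm
for _ in range(200):                    # alternate network and DC steps
    x = denoise(x)
    x = dc_gradient_step(x, y, A, lam)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

The directly solved and conjugate-gradient DC variants described in the abstract replace this single gradient step with an (approximate) solve of the regularized normal equations.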
Collapse
Affiliation(s)
- Zihao Chen
- Biomedical Imaging Research Institute, Cedars-Sinai Medical Center, Los Angeles, USA
- Department of Bioengineering, UCLA, Los Angeles, USA
| | - Yuhua Chen
- Biomedical Imaging Research Institute, Cedars-Sinai Medical Center, Los Angeles, USA
- Department of Bioengineering, UCLA, Los Angeles, USA
| | - Yibin Xie
- Biomedical Imaging Research Institute, Cedars-Sinai Medical Center, Los Angeles, USA
| | - Debiao Li
- Biomedical Imaging Research Institute, Cedars-Sinai Medical Center, Los Angeles, USA
- Department of Bioengineering, UCLA, Los Angeles, USA
| | - Anthony G Christodoulou
- Biomedical Imaging Research Institute, Cedars-Sinai Medical Center, Los Angeles, USA
- Department of Bioengineering, UCLA, Los Angeles, USA
| |
Collapse
|
199
|
Kumar A, Mahapatra RP. Detection and diagnosis of COVID-19 infection in lungs images using deep learning techniques. INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY 2022; 32:462-475. [PMID: 35465214 PMCID: PMC9015307 DOI: 10.1002/ima.22697] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/28/2020] [Revised: 11/24/2021] [Accepted: 12/19/2021] [Indexed: 06/14/2023]
Abstract
The world's science and technology communities have been challenged by the COVID-19 pandemic. Communities across the globe are trying to find real-time methods for the accurate detection and treatment of COVID-19-infected patients. The most important lesson from this pandemic is to detect infected patients as early as possible and provide them with accurate treatment. At present, the worldwide standard for detecting COVID-19 is reverse transcription-polymerase chain reaction (RT-PCR). This technique is costly and time-consuming, so a complementary method is required. This paper uses deep learning analysis to develop a system for identifying COVID-19 patients. The proposed technique is based on a convolutional neural network (CNN) and a deep neural network (DNN). Two models are proposed: a DNN designed on the basis of fractal features of the images, and a CNN designed using lung X-ray images. A segmentation process is used with the CNN architecture to locate the infected regions (tissues) in the lung images. The developed CNN architecture achieved a classification accuracy of 94.6% and a sensitivity of 90.5%, considerably better than the proposed DNN method, which achieved 84.11% accuracy and 84.7% sensitivity. The presented model detects infected regions with 94.6% accuracy, so the growth of the infected regions can be monitored and controlled. The designed model can also be used in post-COVID-19 analysis.
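For orientation, a hedged sketch of the general kind of CNN classifier described above is given below; the layer sizes, input resolution, and two-class setup are illustrative assumptions rather than the authors' architecture.

```python
# Toy CNN chest X-ray classifier with one dummy training step.
import torch
import torch.nn as nn

class SmallCovidCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)            # (N, 64, 1, 1) after global average pooling
        return self.classifier(x.flatten(1))

model = SmallCovidCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
images = torch.randn(4, 1, 224, 224)    # batch of 4 grayscale X-rays (random stand-ins)
labels = torch.randint(0, 2, (4,))      # 0 = normal, 1 = COVID-19 (toy labels)
loss = criterion(model(images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print("toy batch loss:", loss.item())
```

In the paper's setting, the segmentation step would additionally produce a mask of the infected tissue, from which region growth can be tracked over time.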
Collapse
Affiliation(s)
- Arun Kumar
- Department of ECE, Faculty of Engineering and Technology, SRM Institute of Science and Technology, Delhi-NCR Campus, Ghaziabad, India
| | - Rajendra Prasad Mahapatra
- Department of CSE, Faculty of Engineering and Technology, SRM Institute of Science and Technology, Delhi-NCR Campus, Ghaziabad, India
| |
Collapse
|
200
|
A Review of Deep Learning Methods for Compressed Sensing Image Reconstruction and Its Medical Applications. ELECTRONICS 2022. [DOI: 10.3390/electronics11040586] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Compressed sensing (CS) and its medical applications are active areas of research. In this paper, we review recent works that use deep learning methods to solve the CS problem for image reconstruction and for medical imaging, including computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET). We propose a novel framework to unify traditional iterative algorithms and deep learning approaches. In short, we define two projection operators, one toward the image prior and one toward data consistency, and any reconstruction algorithm can be decomposed into these two parts. Although deep learning methods can be divided into several categories, they all fit within this framework. We establish the relationships between different deep learning reconstruction methods and connect them to traditional methods through the proposed framework. This also indicates that the key to solving the CS problem and its medical applications is how the image prior is modeled. Based on the framework, we analyze current deep learning methods and point out important directions for future research.
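The decomposition into an image-prior operator and a data-consistency operator can be illustrated with a minimal sketch: the toy example below alternates the two operators (a gradient step toward the measurements and a soft-thresholding prior) to recover a sparse signal. All operators, dimensions, and parameters are illustrative assumptions, not a specific method from the review.

```python
# Toy CS reconstruction by alternating data-consistency and prior steps (ISTA-style).
import numpy as np

def project_data_consistency(x, y, A, step):
    """Move toward the measurements: gradient step on ||A x - y||^2."""
    return x - step * (A.T @ (A @ x - y))

def project_image_prior(x, thresh):
    """Toy prior operator: soft-thresholding, promoting a sparse reconstruction."""
    return np.sign(x) * np.maximum(np.abs(x) - thresh, 0.0)

rng = np.random.default_rng(0)
n, m, k = 256, 96, 8                     # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true                           # compressed measurements

x = np.zeros(n)
step = 1.0 / np.linalg.norm(A, 2) ** 2
for _ in range(500):                     # iterate the two operators
    x = project_data_consistency(x, y, A, step)
    x = project_image_prior(x, thresh=0.01)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

Deep learning variants keep the same alternation but replace the hand-crafted prior operator with a learned network, which is exactly the unifying view the review proposes.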
Collapse
|