1
Lin H, Zou J, Wang K, Feng Y, Xu C, Lyu J, Qin J. Dual-space high-frequency learning for transformer-based MRI super-resolution. Comput Methods Programs Biomed 2024; 250:108165. [PMID: 38631131] [DOI: 10.1016/j.cmpb.2024.108165]
Abstract
BACKGROUND AND OBJECTIVE Magnetic resonance imaging (MRI) provides rich, high-contrast detail of soft tissues, but MRI scanning is time-consuming. To accelerate MR imaging, a variety of Transformer-based single-image super-resolution methods have been proposed in recent years, achieving promising results thanks to their superior capability of capturing long-range dependencies. Nevertheless, most existing works prioritize the design of Transformer attention blocks to capture global information, while the local high-frequency details that are pivotal to faithful MRI restoration are neglected. METHODS In this work, we propose a high-frequency enhanced learning scheme to effectively improve the awareness of high-frequency information in current Transformer-based MRI single-image super-resolution methods. Specifically, we present two entirely plug-and-play modules designed to equip Transformer-based networks with the ability to recover high-frequency details from dual spaces: 1) in the feature space, we design a high-frequency block (Hi-Fe block), parallel to the Transformer-based attention layers, to extract rich high-frequency features; and 2) in the image intensity space, we tailor a high-frequency amplification module (HFA) to further refine the high-frequency details. By fully exploiting the merits of the two modules, our framework can recover abundant and diverse high-frequency information, rendering faithful MRI super-resolved results with fine details. RESULTS We integrated our modules with six Transformer-based models and conducted experiments across three datasets. The results indicate that our plug-and-play modules enhance the super-resolution performance of all foundational models to varying degrees, surpassing existing state-of-the-art single-image super-resolution networks.
CONCLUSION A comprehensive comparison of super-resolution images and high-frequency maps from various methods clearly demonstrates that our modules can restore high-frequency information, showing great potential for accelerated MRI reconstruction in clinical practice.
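The dual-space idea above, separating high-frequency content and then amplifying it, can be illustrated outside any network. A minimal numpy sketch, assuming an FFT low-pass mask with a hypothetical `radius` parameter and an unsharp-mask style `gain` (neither taken from the paper):

```python
import numpy as np

def high_frequency_split(img: np.ndarray, radius: int = 8):
    """Split an image into low- and high-frequency parts with an FFT mask.

    `radius` (illustrative parameter) is the half-width of the retained
    low-frequency band around the spectrum centre.
    """
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    mask = np.zeros((h, w))
    cy, cx = h // 2, w // 2
    mask[cy - radius:cy + radius, cx - radius:cx + radius] = 1.0
    low = np.fft.ifft2(np.fft.ifftshift(f * mask)).real
    high = img - low          # residual = high-frequency component
    return low, high

def amplify_high_frequency(img: np.ndarray, gain: float = 1.5, radius: int = 8):
    """Unsharp-mask style amplification: boost the high-frequency residual."""
    low, high = high_frequency_split(img, radius)
    return low + gain * high
```

With `gain = 1.0` the image is returned unchanged; larger gains sharpen edges, which is the intensity-space effect an HFA-style module aims for.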
Affiliation(s)
- Haoneng Lin, School of Nursing, The Hong Kong Polytechnic University, Hong Kong
- Jing Zou, School of Nursing, The Hong Kong Polytechnic University, Hong Kong
- Kang Wang, School of Nursing, The Hong Kong Polytechnic University, Hong Kong
- Yidan Feng, School of Nursing, The Hong Kong Polytechnic University, Hong Kong
- Cheng Xu, School of Nursing, The Hong Kong Polytechnic University, Hong Kong
- Jun Lyu, Brigham and Women's Hospital, Harvard Medical School, Boston, United States
- Jing Qin, School of Nursing, The Hong Kong Polytechnic University, Hong Kong
2
Chen X, Zheng H, Tang H, Li F. Multi-scale perceptual YOLO for automatic detection of clue cells and trichomonas in fluorescence microscopic images. Comput Biol Med 2024; 175:108500. [PMID: 38678942] [DOI: 10.1016/j.compbiomed.2024.108500]
Abstract
Vaginitis is a common disease among women and has a high recurrence rate. The primary diagnostic method is fluorescence microscopic inspection, but manual inspection is inefficient and can lead to false or missed detections, so automatic cell identification and localization in microscopic images are necessary. For vaginitis diagnosis, clue cells and trichomonas are two important indicators, and they are difficult to detect because of their differing scales and image characteristics. This study proposes a Multi-Scale Perceptual YOLO (MSP-YOLO) with a super-resolution reconstruction branch to meet the detection requirements of clue cells and trichomonas. Based on their scales and image characteristics, we added a super-resolution reconstruction branch to the detection network; this branch guides the detection branch to focus on subtle feature differences. Simultaneously, we proposed an attention-based feature fusion module injected with a dilated convolution group. This module makes the network attend to the non-centered features of large clue-cell targets, which enhances detection sensitivity. Experimental results show that the proposed MSP-YOLO improves sensitivity without compromising specificity. For clue cell and trichomonas detection, the proposed network achieved sensitivities of 0.706 and 0.910, respectively, 0.218 and 0.051 higher than those of the baseline model. In this study, the characteristics of the super-resolution reconstruction task are used to guide the network to effectively extract and process image features. The proposed network's increased sensitivity makes automatic vaginitis detection feasible.
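An auxiliary super-resolution branch of this kind effectively adds a reconstruction term to the detection objective. A toy numpy sketch of such a joint loss, where `weight` is a hypothetical hyper-parameter (not a value from the paper):

```python
import numpy as np

def joint_detection_sr_loss(det_loss, sr_pred, hr_target, weight=0.1):
    """Toy combined objective: detection loss plus a weighted L1
    reconstruction loss from the auxiliary SR branch.

    `weight` balances the two tasks; gradients of the SR term flow into
    the shared encoder, which is what 'guides' the detection branch.
    """
    sr_loss = np.mean(np.abs(sr_pred - hr_target))
    return det_loss + weight * sr_loss
```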
Affiliation(s)
- Xi Chen, School of Information and Communications Engineering, Xi'an Jiaotong University, Xi'an, 710049, Shaanxi, China
- Haoyue Zheng, School of Information and Communications Engineering, Xi'an Jiaotong University, Xi'an, 710049, Shaanxi, China
- Haodong Tang, School of Information and Communications Engineering, Xi'an Jiaotong University, Xi'an, 710049, Shaanxi, China
- Fan Li, School of Information and Communications Engineering, Xi'an Jiaotong University, Xi'an, 710049, Shaanxi, China
3
Lin J, Miao Q, Surawech C, Raman SS, Zhao K, Wu HH, Sung K. High-Resolution 3D MRI With Deep Generative Networks via Novel Slice-Profile Transformation Super-Resolution. IEEE Access 2023; 11:95022-95036. [PMID: 37711392] [PMCID: PMC10501177] [DOI: 10.1109/access.2023.3307577]
Abstract
High-resolution magnetic resonance imaging (MRI) sequences, such as 3D turbo or fast spin-echo (TSE/FSE) imaging, are clinically desirable but suffer from long scanning times and related blurring when reformatted into preferred orientations. Instead, multi-slice two-dimensional (2D) TSE imaging is commonly used because of its high in-plane resolution, but it is limited clinically by poor through-plane resolution due to elongated voxels and by the inability to generate multi-planar reformations due to staircase artifacts. Therefore, multiple 2D TSE scans are acquired in various orthogonal imaging planes, increasing the overall MRI scan time. In this study, we propose a novel slice-profile transformation super-resolution (SPTSR) framework with deep generative learning for through-plane super-resolution (SR) of multi-slice 2D TSE imaging. The deep generative networks were trained on low-resolution input synthesized via slice-profile downsampling (SP-DS), and the trained networks inferred on slice-profile-convolved (SP-conv) testing input for 5.5x through-plane SR. The network output was further slice-profile deconvolved (SP-deconv) to achieve isotropic super-resolution. Compared to the SMORE SR method and to networks trained by conventional downsampling, our SPTSR framework demonstrated the best overall image quality across 50 testing cases, as evaluated by two abdominal radiologists. Quantitative analysis cross-validated the expert reader study results. 3D simulation experiments confirmed the quantitative improvement of the proposed SPTSR and the effectiveness of the SP-deconv step against 3D ground truths. Ablation studies were conducted on the individual contributions of SP-DS and SP-conv, the network structure, the training dataset size, and different slice profiles.
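The SP-DS step can be sketched as convolving each through-plane line of a volume with a slice profile and then subsampling. A minimal numpy illustration, assuming a generic normalized profile (the paper's actual profile is sequence-specific):

```python
import numpy as np

def slice_profile_downsample(vol, profile, factor):
    """Synthesise thick-slice training data: convolve along the
    through-plane axis (axis 0) with a slice profile, then subsample.

    `profile` is assumed normalized (sums to 1); `factor` is the
    through-plane downsampling factor.
    """
    # convolve every through-plane 1D line with the profile
    blurred = np.apply_along_axis(
        lambda line: np.convolve(line, profile, mode="same"),
        axis=0, arr=vol)
    return blurred[::factor]           # keep every factor-th slice
```

Training pairs built this way match the physics of thick-slice acquisition more closely than naive decimation, which is the motivation for SP-DS.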
Affiliation(s)
- Jiahao Lin, Department of Radiological Sciences, University of California at Los Angeles, Los Angeles, CA 90095, USA; Department of Electrical and Computer Engineering, University of California at Los Angeles, Los Angeles, CA 90095, USA
- Qi Miao, Department of Radiological Sciences, University of California at Los Angeles, Los Angeles, CA 90095, USA; Department of Radiology, The First Affiliated Hospital of China Medical University, Shenyang, Liaoning 110001, China
- Chuthaporn Surawech, Department of Radiological Sciences, University of California at Los Angeles, Los Angeles, CA 90095, USA; Department of Radiology, Faculty of Medicine, Chulalongkorn University, Bangkok 10330, Thailand; Division of Diagnostic Radiology, Department of Radiology, King Chulalongkorn Memorial Hospital, Bangkok 10330, Thailand
- Steven S Raman, Department of Radiological Sciences, University of California at Los Angeles, Los Angeles, CA 90095, USA
- Kai Zhao, Department of Radiological Sciences, University of California at Los Angeles, Los Angeles, CA 90095, USA
- Holden H Wu, Department of Radiological Sciences, University of California at Los Angeles, Los Angeles, CA 90095, USA
- Kyunghyun Sung, Department of Radiological Sciences, University of California at Los Angeles, Los Angeles, CA 90095, USA
4
Qu X, Ren C, Yan G, Zheng D, Tang W, Wang S, Lin H, Zhang J, Jiang J. Deep-Learning-Based Ultrasound Sound-Speed Tomography Reconstruction with Tikhonov Pseudo-Inverse Priori. Ultrasound Med Biol 2022; 48:2079-2094. [PMID: 35922265] [PMCID: PMC10448397] [DOI: 10.1016/j.ultrasmedbio.2022.05.033]
Abstract
Ultrasound sound-speed tomography (USST) is a promising technology for breast imaging and breast cancer detection. Its reconstruction is a complex non-linear mapping from the projection data to the sound-speed image (SSI). Traditional reconstruction methods mainly include ray-based and waveform-based approaches: ray-based methods with linear approximation have low computational cost but low reconstruction quality, while full-waveform methods with a complex non-linear model have high quality but high cost. To achieve both high quality and low cost, we introduced the traditional linear approximation as prior knowledge into a deep neural network and treated the complex non-linear mapping of USST reconstruction as a combination of a linear mapping and a non-linear mapping. In the proposed method, the linear mapping was seamlessly implemented with a fully connected layer and initialized using the Tikhonov pseudo-inverse matrix, while the non-linear mapping was implemented using a U-shape Net (U-Net). On this basis, we proposed the Tikhonov U-shape net (TU-Net), in which the linear mapping is applied before the non-linear mapping, and the U-shape Tikhonov net (UT-Net), in which the non-linear mapping is applied before the linear mapping. We conducted simulations and experiments for evaluation. In the numerical simulation, the root-mean-squared error was 6.49 and 4.29 m/s for the UT-Net and TU-Net, the peak signal-to-noise ratio was 49.01 and 52.90 dB, the structural similarity was 0.9436 and 0.9761, and the reconstruction time was 10.8 and 11.3 ms, respectively. The SSIs obtained with the proposed methods exhibited high sound-speed accuracy. Both the UT-Net and the TU-Net achieved high quality at low computational cost.
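The linear-mapping initialization described above is the classical Tikhonov-regularized pseudo-inverse. A small numpy sketch, where `A` stands in for a linearized forward model and `lam` is a hypothetical regularization weight (not the paper's value):

```python
import numpy as np

def tikhonov_pseudo_inverse(A, lam=1e-2):
    """Regularised pseudo-inverse W = (A^T A + lam*I)^{-1} A^T.

    In a TU-Net-style design this matrix would initialise the weights of
    the fully connected 'linear mapping' layer, so the network starts
    from the ray-based linear solution x0 = W @ y.
    """
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T)
```

With `lam = 0` and a well-conditioned `A`, `W` reduces to the ordinary least-squares inverse; the regularizer stabilizes the ill-conditioned tomographic case.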
Affiliation(s)
- Xiaolei Qu, School of Instrumentation and Optoelectronics Engineering, Beihang University, Beijing, China
- Chujian Ren, School of Instrumentation and Optoelectronics Engineering, Beihang University, Beijing, China
- Guo Yan, School of Instrumentation and Optoelectronics Engineering, Beihang University, Beijing, China
- Dezhi Zheng, Research Institute for Frontier Science, Beihang University, Beijing, China
- Wenzhong Tang, School of Computer Science and Engineering, Beihang University, Beijing, China
- Shuai Wang, Research Institute for Frontier Science, Beihang University, Beijing, China
- Hongxiang Lin, Research Center for Healthcare Data Science, Zhejiang Lab, Hangzhou, China
- Jingya Zhang, School of Instrumentation and Optoelectronics Engineering, Beihang University, Beijing, China
- Jue Jiang, Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
5
Sparse Dictionary-Based Magnetic Resonance Superresolution Imaging with Joint Loss Function Learning. J Healthc Eng 2022; 2022:2206454. [PMID: 36072419] [PMCID: PMC9444480] [DOI: 10.1155/2022/2206454]
Abstract
Magnetic resonance images have important application value in disease diagnosis. Due to the particularity of the imaging mechanism, improving hardware-level resolution requires increasing radiation intensity and scan time; excess radiation can overheat the body and, in severe cases, denature proteins. This problem can be addressed by image super-resolution based on joint dictionary learning, which has good super-resolution performance. In dictionary learning, the loss function directly affects dictionary performance. The usual approach uses only the cascade error as the optimization function during dictionary training and does not consider the individual reconstruction errors of the high- and low-resolution dictionaries. To address this, the loss function of dictionary learning is optimized in this paper: while ensuring that the coefficients are sufficiently sparse, the high- and low-resolution dictionaries are trained separately to reduce the error generated by the joint high-/low-resolution dictionary block pairs and to improve high-resolution reconstruction accuracy. Experiments on neck and ankle MR images show that the proposed algorithm achieves better super-resolution reconstruction at x2 and x4 than bicubic interpolation, nearest-neighbor interpolation, and the original dictionary learning algorithm.
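Joint-dictionary super-resolution rests on sparse coding a low-resolution patch over one dictionary and reusing the same code with the paired high-resolution dictionary. A greedy, OMP-style numpy sketch under illustrative assumptions (toy dictionaries, a sparsity level `k`; not the paper's training procedure):

```python
import numpy as np

def sr_from_sparse_code(patch_lr, D_lr, D_hr, k=3):
    """Greedy sparse coding of an LR patch over D_lr, then HR
    reconstruction with the same support and coefficients over D_hr.

    D_lr and D_hr are paired dictionaries (columns = atoms); k atoms
    are selected by matching pursuit with a least-squares refit.
    """
    residual = patch_lr.copy()
    idx = []
    coefs = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(D_lr.T @ residual)))  # best-matching atom
        if j not in idx:
            idx.append(j)
        # refit coefficients on the selected atoms
        coefs, *_ = np.linalg.lstsq(D_lr[:, idx], patch_lr, rcond=None)
        residual = patch_lr - D_lr[:, idx] @ coefs
    # same sparse code, high-resolution atoms
    return D_hr[:, idx] @ coefs
```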
6
Li C, Li W, Liu C, Zheng H, Cai J, Wang S. Artificial intelligence in multi-parametric magnetic resonance imaging: A review. Med Phys 2022; 49:e1024-e1054. [PMID: 35980348] [DOI: 10.1002/mp.15936]
Abstract
Multi-parametric magnetic resonance imaging (mpMRI) is an indispensable tool in the clinical workflow for the diagnosis and treatment planning of various diseases. Machine learning-based artificial intelligence (AI) methods, especially those adopting deep learning, have been extensively employed for mpMRI image classification, segmentation, registration, detection, reconstruction, and super-resolution. The current availability of increasing computational power and fast-improving AI algorithms has empowered numerous computer-based systems for applying mpMRI to disease diagnosis, imaging-guided radiotherapy, patient risk and overall survival time prediction, and the development of advanced quantitative imaging technology for magnetic resonance fingerprinting. However, wide application of these systems in the clinic is still limited by a number of factors, including robustness, reliability, and interpretability. This survey aims to provide an overview for new researchers in the field as well as radiologists, with the hope that they can understand the general concepts, main application scenarios, and remaining challenges of AI in mpMRI.
Affiliation(s)
- Cheng Li, Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Wen Li, Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Chenyang Liu, Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Hairong Zheng, Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Jing Cai, Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Shanshan Wang, Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; Peng Cheng Laboratory, Shenzhen, 518066, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, 510080, China
7
Ye Y. Sparse representation optimization of Gaussian mixed feature of image based on convolution neural network. Soft Comput 2022. [DOI: 10.1007/s00500-021-06587-3]
8
A multiscale double-branch residual attention network for anatomical-functional medical image fusion. Comput Biol Med 2021; 141:105005. [PMID: 34763846] [DOI: 10.1016/j.compbiomed.2021.105005]
Abstract
Medical image fusion technology synthesizes complementary information from multimodal medical images. This technology is playing an increasingly important role in clinical applications. In this paper, we propose a new convolutional neural network, which is called the multiscale double-branch residual attention (MSDRA) network, for fusing anatomical-functional medical images. Our network contains a feature extraction module, a feature fusion module and an image reconstruction module. In the feature extraction module, we use three identical MSDRA blocks in series to extract image features. The MSDRA block has two branches. The first branch uses a multiscale mechanism to extract features of different scales with three convolution kernels of different sizes, while the second branch uses six 3 × 3 convolutional kernels. In addition, we propose the Feature L1-Norm fusion strategy to fuse the features obtained from the input images. Compared with the reference image fusion algorithms, MSDRA consumes less fusion time and achieves better results in visual quality and the objective metrics of Spatial Frequency (SF), Average Gradient (AG), Edge Intensity (EI), Quality-Aware Clustering (QAC), Variance (VAR), and Visual Information Fidelity for Fusion (VIFF).
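A common form of feature-level L1-norm fusion weights each source by its per-position L1 activity across channels. A generic numpy sketch of that rule (a standard formulation; not necessarily the paper's exact strategy):

```python
import numpy as np

def l1_norm_feature_fusion(feat_a, feat_b):
    """Fuse two feature maps of shape (channels, H, W) by weighting each
    spatial position with its relative channel-wise L1 activity.

    Positions where source A has stronger activations contribute more of
    A's features, and symmetrically for B.
    """
    act_a = np.sum(np.abs(feat_a), axis=0)        # (H, W) L1 activity of A
    act_b = np.sum(np.abs(feat_b), axis=0)        # (H, W) L1 activity of B
    w_a = act_a / (act_a + act_b + 1e-8)          # soft weight for A
    return w_a * feat_a + (1.0 - w_a) * feat_b    # broadcast over channels
```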
9
Huang B, Xiao H, Liu W, Zhang Y, Wu H, Wang W, Yang Y, Yang Y, Miller GW, Li T, Cai J. MRI super-resolution via realistic downsampling with adversarial learning. Phys Med Biol 2021; 66. [PMID: 34474407] [DOI: 10.1088/1361-6560/ac232e]
Abstract
Many deep learning (DL) frameworks have demonstrated state-of-the-art performance in the super-resolution (SR) task of magnetic resonance imaging, but most results have been achieved with simulated low-resolution (LR) images rather than LR images from real acquisitions. Because the generalizability of an SR network is limited, enhancement is not guaranteed for real LR images when the training LR images are unrealistic. In this study, we proposed a DL-based SR framework with an emphasis on data construction to achieve better performance on real LR MR images. The framework comprised two steps: (a) downsampling training using a generative adversarial network (GAN) to construct more realistic and perfectly matched LR/high-resolution (HR) pairs. The downsampling GAN took real LR and HR images as input; the generator translated HR images to LR images, and the discriminator distinguished patch-level differences between synthetic and real LR images. (b) SR training using an enhanced deep super-resolution network (EDSR). In controlled experiments, three EDSRs were trained using our proposed method, Gaussian blur, and k-space zero-filling, respectively. As for the data, liver MR images were obtained from 24 patients using breath-hold serial LR and HR scans (only HR images were used in the conventional methods). The k-space zero-filling group delivered almost no enhancement on the real LR images, and the Gaussian group produced a considerable number of artifacts. The proposed method exhibited significantly better resolution enhancement and fewer artifacts than the other two networks, outperforming the Gaussian method by 0.111 ± 0.016 in the structural similarity index and 2.76 ± 0.98 dB in the peak signal-to-noise ratio. The blind/reference-less image spatial quality evaluator metrics of the conventional Gaussian method and the proposed method were 46.6 ± 4.2 and 34.1 ± 2.4, respectively.
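Of the two conventional baselines contrasted with the learned downsampling, k-space zero-filling is easy to state exactly: retain only the central block of k-space and transform back, i.e. frequency-domain low-pass filtering. A minimal numpy sketch with an illustrative `keep` fraction:

```python
import numpy as np

def kspace_zero_fill_downsample(img, keep=0.5):
    """Simulate an LR image by cropping outer k-space and zero-filling.

    `keep` is the fraction of central k-space retained along each axis
    (an illustrative parameter). The output has the same matrix size but
    lower effective resolution.
    """
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    kh, kw = int(h * keep) // 2, int(w * keep) // 2
    mask = np.zeros((h, w))
    mask[h // 2 - kh:h // 2 + kh, w // 2 - kw:w // 2 + kw] = 1.0
    return np.fft.ifft2(np.fft.ifftshift(f * mask)).real
```

The abstract's point is that such analytically simulated LR images differ in character from real scanner LR acquisitions, which is why the GAN-learned downsampling generalizes better.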
Affiliation(s)
- Bangyan Huang, Department of Engineering and Applied Physics, University of Science and Technology of China, Hefei, People's Republic of China
- Haonan Xiao, Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, People's Republic of China
- Weiwei Liu, Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiation Oncology, Beijing Cancer Hospital and Institute, Peking University Cancer Hospital and Institute, Beijing, People's Republic of China
- Yibao Zhang, Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiation Oncology, Beijing Cancer Hospital and Institute, Peking University Cancer Hospital and Institute, Beijing, People's Republic of China
- Hao Wu, Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiation Oncology, Beijing Cancer Hospital and Institute, Peking University Cancer Hospital and Institute, Beijing, People's Republic of China
- Weihu Wang, Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiation Oncology, Beijing Cancer Hospital and Institute, Peking University Cancer Hospital and Institute, Beijing, People's Republic of China
- Yunhuan Yang, Department of Engineering and Applied Physics, University of Science and Technology of China, Hefei, People's Republic of China
- Yidong Yang, Department of Engineering and Applied Physics, University of Science and Technology of China, Hefei, People's Republic of China
- G Wilson Miller, Department of Radiology and Medical Imaging, The University of Virginia, Charlottesville, VA, United States of America
- Tian Li, Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, People's Republic of China
- Jing Cai, Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, People's Republic of China
10
Deep Learning-Based CT Imaging in Diagnosing Myeloma and Its Prognosis Evaluation. J Healthc Eng 2021; 2021:5436793. [PMID: 34552707] [PMCID: PMC8452442] [DOI: 10.1155/2021/5436793]
Abstract
Imaging examination plays an important role in the early diagnosis of myeloma. The study focused on the segmentation performance of deep learning-based models on CT images of myeloma and on the influence of different chemotherapy treatments on patient prognosis. Specifically, 186 patients with suspected myeloma were the research subjects. The U-Net model was adjusted to segment the CT images, and the Faster region-based convolutional neural network (Faster RCNN) model was then used to label the lesions. Patients were divided into a bortezomib group (group 1, n = 128) and a non-bortezomib group (group 2, n = 58). The biochemical indexes, blood routine indexes, and skeletal muscle of the two groups were compared before and after chemotherapy. The results showed that the improved U-Net model demonstrated good segmentation results, the Faster RCNN model could label the lesion area in the CT image, and the classification accuracy rate was as high as 99%. Compared with group 1, group 2 showed enlarged psoas major and erector spinae muscles after treatment and decreased bone marrow plasma cell content, blood M protein, urine 24-h light chain, pBNP, β2-microglobulin (β2MG), ALP, and white blood cell (WBC) levels (P < 0.05). In conclusion, deep learning is suggested for the segmentation and classification of CT images of myeloma, which can improve detection accuracy. Both chemotherapy regimens improved patient prognosis, but the effects of non-bortezomib chemotherapy were better.
11
Aghabiglou A, Eksioglu EM. Projection-based cascaded U-Net model for MR image reconstruction. Comput Methods Programs Biomed 2021; 207:106151. [PMID: 34052771] [DOI: 10.1016/j.cmpb.2021.106151]
Abstract
BACKGROUND AND OBJECTIVE Recent studies in deep learning reveal that the U-Net stands out among the diverse set of deep models as an effective network structure, especially for imaging inverse problems. Initially, the U-Net model was developed to solve segmentation problems for biomedical images using annotated datasets. In this paper, we study a novel application of the U-Net structure to the important inverse problem of MRI reconstruction. Deep networks are particularly efficient for speeding up MR image reconstruction by decreasing the data acquisition time, and they can significantly reduce the aliasing artifacts caused by undersampling in k-space. Our aim is to develop a novel and efficient cascaded U-Net framework for reconstructing MR images from undersampled k-space data, with improved reconstruction performance compared to competing methodologies. METHODS We propose a novel cascaded framework utilizing the U-Net as a sub-block. The introduced U-Net cascade is applied to the magnetic resonance image reconstruction problem, with the connection between cascaded U-Nets realized as a recently developed projection-based updated data-consistency layer. The structure is implemented in PyTorch, one of the standard environments for deep learning. The recently created fastMRI dataset, an important benchmark for MRI reconstruction, is used for training and testing. RESULTS We present simulation results comparing the novel method with a variety of competitive deep networks. The new cascaded U-Net structure's PSNR performance is on average 1.28 dB higher than the baseline U-Net; the improvement over the standard CNN is on average 3.32 dB.
CONCLUSIONS The proposed cascaded U-Net configuration results in improved reconstruction performance compared to the CNN, the cascaded CNN, and the singular U-Net structures, where the singular U-Net forms the baseline reconstruction method from the fastMRI package. The projection-based updated data-consistency layer also leads to improved quantitative (SSIM, PSNR, and NMSE) and qualitative results compared to the conventional data-consistency layer.
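The data-consistency idea between cascaded sub-networks can be sketched in its simplest "hard replacement" form: re-impose the measured k-space samples on the intermediate reconstruction. (The paper uses a projection-based updated variant; this numpy sketch shows only the generic mechanism.)

```python
import numpy as np

def data_consistency(x_rec, k_measured, mask):
    """Hard k-space data-consistency step between cascaded networks.

    x_rec      : current image-domain reconstruction
    k_measured : acquired (undersampled) k-space data
    mask       : boolean sampling mask (True where data were measured)
    """
    k_rec = np.fft.fft2(x_rec)
    # keep the network's estimate only at unmeasured locations
    k_dc = np.where(mask, k_measured, k_rec)
    return np.fft.ifft2(k_dc).real
```

Inserting such a layer after each sub-network guarantees the cascade never drifts away from the acquired measurements, which is why it improves both quantitative and qualitative results.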
Affiliation(s)
- Amir Aghabiglou, Graduate School of Science, Engineering and Technology, Istanbul Technical University, Istanbul, Turkey
- Ender M Eksioglu, Electronics and Communication Engineering Department, Istanbul Technical University, Istanbul, Turkey
12
Li G, Lv J, Tong X, Wang C, Yang G. High-Resolution Pelvic MRI Reconstruction Using a Generative Adversarial Network With Attention and Cyclic Loss. IEEE Access 2021; 9:105951-105964. [DOI: 10.1109/access.2021.3099695]