151. La Salvia M, Torti E, Leon R, Fabelo H, Ortega S, Martinez-Vega B, Callico GM, Leporati F. Deep Convolutional Generative Adversarial Networks to Enhance Artificial Intelligence in Healthcare: A Skin Cancer Application. Sensors (Basel) 2022; 22:6145. [PMID: 36015906] [PMCID: PMC9416026] [DOI: 10.3390/s22166145]
Abstract
In recent years, researchers have designed several artificial intelligence solutions for healthcare applications, many of which have evolved into functional tools for clinical practice. Furthermore, deep learning (DL) methods are well suited to processing the broad amounts of data acquired by wearable devices, smartphones, and other sensors employed in different medical domains. Conceived for diagnostic and surgical-guidance roles, hyperspectral imaging has emerged as a non-contact, non-ionizing, and label-free technology. However, the lack of large datasets for efficiently training models limits DL applications in the medical field, and its usage with hyperspectral images is still at an early stage. We propose a deep convolutional generative adversarial network to generate synthetic hyperspectral images of epidermal lesions, targeting skin cancer diagnosis and overcoming the challenge of small datasets for training DL architectures. Experimental results show the effectiveness of the proposed framework, which is capable of generating synthetic data to train DL classifiers.
Affiliation(s)
- Marco La Salvia
- Department of Electrical, Computer and Biomedical Engineering, University of Pavia, 27100 Pavia, Italy
- Emanuele Torti
- Department of Electrical, Computer and Biomedical Engineering, University of Pavia, 27100 Pavia, Italy
- Raquel Leon
- Research Institute for Applied Microelectronics (IUMA), University of Las Palmas de Gran Canaria (ULPGC), 35001 Las Palmas de Gran Canaria, Spain
- Himar Fabelo
- Research Institute for Applied Microelectronics (IUMA), University of Las Palmas de Gran Canaria (ULPGC), 35001 Las Palmas de Gran Canaria, Spain
- Samuel Ortega
- Research Institute for Applied Microelectronics (IUMA), University of Las Palmas de Gran Canaria (ULPGC), 35001 Las Palmas de Gran Canaria, Spain
- Norwegian Institute of Food, Fisheries and Aquaculture Research (Nofima), 6122 Tromsø, Norway
- Beatriz Martinez-Vega
- Research Institute for Applied Microelectronics (IUMA), University of Las Palmas de Gran Canaria (ULPGC), 35001 Las Palmas de Gran Canaria, Spain
- Gustavo M. Callico
- Research Institute for Applied Microelectronics (IUMA), University of Las Palmas de Gran Canaria (ULPGC), 35001 Las Palmas de Gran Canaria, Spain
- Francesco Leporati
- Department of Electrical, Computer and Biomedical Engineering, University of Pavia, 27100 Pavia, Italy
152. Zhao J, Hou X, Pan M, Zhang H. Attention-based generative adversarial network in medical imaging: A narrative review. Comput Biol Med 2022; 149:105948. [PMID: 35994931] [DOI: 10.1016/j.compbiomed.2022.105948]
Abstract
As a popular probabilistic generative model, the generative adversarial network (GAN) has been successfully used not only in natural image processing but also in medical image analysis and computer-aided diagnosis. Despite its various advantages, the application of GANs in medical image analysis faces new challenges. The introduction of attention mechanisms, which resemble the human visual system in focusing on task-related local image areas for information extraction, has drawn increasing interest. Recently proposed transformer-based architectures leverage the self-attention mechanism to encode long-range dependencies and learn highly expressive representations. This motivates us to summarize the applications of transformer-based GANs in medical image analysis. We reviewed recent advances in techniques combining various attention modules with different adversarial training schemes, and their applications in medical segmentation, synthesis and detection. Several recent studies have shown that attention modules can be effectively incorporated into a GAN model to detect lesion areas and extract diagnosis-related feature information precisely, providing a useful tool for medical image processing and diagnosis. This review indicates that research on GANs with attention mechanisms for medical imaging analysis is still at an early stage despite its great potential. We highlight that the attention-based generative adversarial network is an efficient and promising computational model for advancing future research and applications in medical image analysis.
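The building block this review centers on, scaled dot-product self-attention, can be sketched in a few lines of NumPy. This is a minimal illustration, not a module from any reviewed paper; the learned query/key/value projections of a real attention layer are replaced by identity projections for brevity:

```python
import numpy as np

def self_attention(x):
    """Scaled dot-product self-attention over a set of feature vectors.

    x: (n, d) array, one d-dimensional feature per spatial position.
    Identity query/key/value projections are assumed here; a trained
    module would apply learned linear maps before the dot products.
    """
    n, d = x.shape
    scores = x @ x.T / np.sqrt(d)                 # pairwise similarities
    scores -= scores.max(axis=1, keepdims=True)   # softmax stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)       # each row sums to 1
    # Every output position is a similarity-weighted mix of *all*
    # positions, which is how long-range dependencies are encoded.
    return attn @ x
```

Because each output attends to every input position, distant context can influence local feature extraction, which is the property the attention-equipped GAN generators and discriminators in this review exploit.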
Affiliation(s)
- Jing Zhao
- School of Engineering Medicine, Beihang University, Beijing, 100191, China; School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China
- Xiaoyuan Hou
- School of Engineering Medicine, Beihang University, Beijing, 100191, China; School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China
- Meiqing Pan
- School of Engineering Medicine, Beihang University, Beijing, 100191, China; School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China
- Hui Zhang
- School of Engineering Medicine, Beihang University, Beijing, 100191, China; Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education, Beijing, 100191, China; Key Laboratory of Big Data-Based Precision Medicine (Beihang University), Ministry of Industry and Information Technology of the People's Republic of China, Beijing, 100191, China
153. Liu H, Jin X, Liu L, Jin X. Low-Dose CT Image Denoising Based on Improved DD-Net and Local Filtered Mechanism. Comput Intell Neurosci 2022; 2022:2692301. [PMID: 35965772] [PMCID: PMC9365583] [DOI: 10.1155/2022/2692301]
Abstract
Low-dose CT (LDCT) imaging reduces radiation damage to patients; however, the unavoidable degradation under low-dose conditions, such as noise, streak artifacts, and over-smoothed details, impairs clinical diagnosis. LDCT image denoising is therefore a significant topic in medical image processing. This work proposes an improved DD-Net (DenseNet- and deconvolution-based network) combined with a local filtered mechanism: the DD-Net is enhanced by an improved residual dense block to strengthen its feature representation ability, and the local filtered mechanism and a gradient loss are employed to restore subtle structures. First, the LDCT image is input to the network to obtain the denoised image. The original loss between the denoised image and the normal-dose CT (NDCT) image is calculated, and the difference image between the NDCT image and the denoised image is obtained. Second, a mask image is generated by applying a threshold operation to the difference image, and filtered LDCT and NDCT images are obtained by elementwise multiplication of the LDCT and NDCT images with the mask. Third, the filtered image is input to the network to obtain the filtered denoised image, and the correction loss is calculated. Finally, the sum of the original loss and the correction loss is used to optimize the network. Because the combination of mean square error (MSE) and multiscale structural similarity (MS-SSIM) is insufficient to recover edge information, we introduce a gradient loss that penalizes errors in the high-frequency portion. Experimental results show that the proposed method achieves better performance than conventional schemes and most neural networks. Our source code is available at https://github.com/LHE-IT/Low-dose-CT-Image-Denoising/tree/main/Local Filtered Mechanism.
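The three-step loss described in this abstract can be sketched as follows. This is a NumPy toy version only: `denoise_fn` stands in for the improved DD-Net, the MSE is used in place of the paper's full MSE + MS-SSIM + gradient loss, and the `threshold` value is an assumed placeholder, not the paper's setting:

```python
import numpy as np

def local_filtered_loss(denoised, ndct, ldct, denoise_fn, threshold=0.05):
    """Sketch of the local filtered mechanism from the abstract.

    denoised : network output for the LDCT input
    ndct     : normal-dose reference image
    ldct     : low-dose input image
    denoise_fn : stand-in for the denoising network (here treated as
                 a plain function; training would backpropagate through it)
    """
    # Step 1: original loss between denoised and NDCT images.
    original_loss = np.mean((denoised - ndct) ** 2)

    # Step 2: difference image -> threshold -> binary mask,
    # then elementwise multiplication to get the filtered images.
    difference = np.abs(ndct - denoised)
    mask = (difference > threshold).astype(denoised.dtype)
    filtered_ldct = ldct * mask
    filtered_ndct = ndct * mask

    # Step 3: re-denoise the filtered input and compute the correction loss.
    filtered_denoised = denoise_fn(filtered_ldct)
    correction_loss = np.mean((filtered_denoised - filtered_ndct) ** 2)

    # The network is optimized with the sum of both losses.
    return original_loss + correction_loss
```

The mask concentrates the second pass on the regions where the first pass disagreed most with the reference, which is how the mechanism targets residual errors.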
Affiliation(s)
- Hongen Liu
- School of Software, Yunnan University, Kunming 650091, Yunnan, China
- Xin Jin
- School of Software, Yunnan University, Kunming 650091, Yunnan, China
- Ling Liu
- School of Software, Yunnan University, Kunming 650091, Yunnan, China
- Xin Jin
- School of Software, Yunnan University, Kunming 650091, Yunnan, China
- Engineering Research Center of Cyberspace, Yunnan University, Kunming 650000, Yunnan, China
154. Liu J, Jiang H, Ning F, Li M, Pang W. DFSNE-Net: Deviant feature sensitive noise estimate network for low-dose CT denoising. Comput Biol Med 2022; 149:106061. [DOI: 10.1016/j.compbiomed.2022.106061]
155. Fan J, Liu Z, Yang D, Qiao J, Zhao J, Wang J, Hu W. Multimodal image translation via deep learning inference model trained in video domain. BMC Med Imaging 2022; 22:124. [PMID: 35836126] [PMCID: PMC9281162] [DOI: 10.1186/s12880-022-00854-x]
Abstract
Background Current medical image translation is implemented in the image domain. Considering that medical image acquisition is essentially a temporally continuous process, we developed a novel image translation framework trained in the video domain for generating synthesized computed tomography (CT) images from cone-beam computed tomography (CBCT) images. Methods For a proof-of-concept demonstration, CBCT and CT images from 100 patients were collected to demonstrate the feasibility and reliability of the proposed framework. The CBCT and CT images were registered as paired samples and used as input data for supervised model training. A vid2vid framework based on the conditional GAN network, with carefully designed generators, discriminators and a new spatio-temporal learning objective, was applied to realize CBCT-CT image translation in the video domain. Four evaluation metrics, mean absolute error (MAE), peak signal-to-noise ratio (PSNR), normalized cross-correlation (NCC), and structural similarity (SSIM), were calculated on all real and synthetic CT images from 10 new test patients to assess model performance. Results The average values of the four metrics were 23.27 ± 5.53 (MAE), 32.67 ± 1.98 (PSNR), 0.99 ± 0.0059 (NCC), and 0.97 ± 0.028 (SSIM). Most pixel-wise Hounsfield unit differences between real and synthetic CT images were within 50 HU. The synthetic CT images agreed closely with the real CT images, with lower noise and fewer artifacts than the CBCT images. Conclusions We developed a deep-learning-based approach to medical image translation in the video domain. Although feasibility and reliability were demonstrated on CBCT-CT translation, the framework can be extended to other types of medical images. The current results suggest a promising method that may pave a new path for medical image translation research.
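Three of the four reported metrics have compact closed forms and can be sketched in NumPy (SSIM needs windowed local statistics and is omitted; the `data_range` argument is whatever intensity span the images were normalized to, which the abstract does not specify):

```python
import numpy as np

def mae(a, b):
    """Mean absolute error between two images."""
    return float(np.mean(np.abs(a - b)))

def psnr(a, b, data_range):
    """Peak signal-to-noise ratio in dB; `data_range` is the maximum
    possible pixel value for the chosen normalization."""
    mse = np.mean((a - b) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))

def ncc(a, b):
    """Normalized cross-correlation: 1.0 for a perfect linear match."""
    a0, b0 = a - a.mean(), b - b.mean()
    return float(np.sum(a0 * b0) /
                 (np.sqrt(np.sum(a0 ** 2)) * np.sqrt(np.sum(b0 ** 2))))
```

NCC close to 1 together with a high PSNR is what the reported 0.99 ± 0.0059 and 32.67 ± 1.98 values express: the synthetic CT tracks the real CT both linearly and pointwise.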
Affiliation(s)
- Jiawei Fan
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, 200032, People's Republic of China; Department of Oncology, Shanghai Medical College Fudan University, Shanghai, 200032, People's Republic of China; Shanghai Key Laboratory of Radiation Oncology, Shanghai, 200032, People's Republic of China
- Zhiqiang Liu
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Dong Yang
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, 200032, People's Republic of China; Department of Oncology, Shanghai Medical College Fudan University, Shanghai, 200032, People's Republic of China; Shanghai Key Laboratory of Radiation Oncology, Shanghai, 200032, People's Republic of China
- Jian Qiao
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, 200032, People's Republic of China; Department of Oncology, Shanghai Medical College Fudan University, Shanghai, 200032, People's Republic of China; Shanghai Key Laboratory of Radiation Oncology, Shanghai, 200032, People's Republic of China
- Jun Zhao
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, 200032, People's Republic of China; Department of Oncology, Shanghai Medical College Fudan University, Shanghai, 200032, People's Republic of China; Shanghai Key Laboratory of Radiation Oncology, Shanghai, 200032, People's Republic of China
- Jiazhou Wang
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, 200032, People's Republic of China; Department of Oncology, Shanghai Medical College Fudan University, Shanghai, 200032, People's Republic of China; Shanghai Key Laboratory of Radiation Oncology, Shanghai, 200032, People's Republic of China
- Weigang Hu
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, 200032, People's Republic of China; Department of Oncology, Shanghai Medical College Fudan University, Shanghai, 200032, People's Republic of China; Shanghai Key Laboratory of Radiation Oncology, Shanghai, 200032, People's Republic of China
156. Ng CKC. Artificial Intelligence for Radiation Dose Optimization in Pediatric Radiology: A Systematic Review. Children (Basel) 2022; 9:1044. [PMID: 35884028] [PMCID: PMC9320231] [DOI: 10.3390/children9071044]
Abstract
Radiation dose optimization is particularly important in pediatric radiology, as children are more susceptible to the potential harmful effects of ionizing radiation. However, only one narrative review about artificial intelligence (AI) for dose optimization in pediatric computed tomography (CT) has been published to date. The purpose of this systematic review is to answer the question "What are the AI techniques and architectures introduced in pediatric radiology for dose optimization, their specific application areas, and performances?" A literature search using electronic databases was conducted on 3 June 2022. Sixteen articles that met the selection criteria were included. The included studies showed that the deep convolutional neural network (CNN) was the most common AI technique and architecture used for dose optimization in pediatric radiology. All but three included studies evaluated AI performance in dose optimization of abdomen, chest, head, neck, and pelvis CT; CT angiography; and dual-energy CT through deep learning image reconstruction. Most studies demonstrated that AI could reduce radiation dose by 36-70% without loss of diagnostic information. Despite the dominance of commercially available AI models based on deep CNNs with promising outcomes, homegrown models could provide comparable performance. Further exploration of the value of AI for dose optimization in pediatric radiology is necessary, given the small sample sizes and narrow scopes of existing studies (only three modalities covered, CT, positron emission tomography/magnetic resonance imaging, and mobile radiography, and not all examination types).
Affiliation(s)
- Curtise K. C. Ng
- Curtin Medical School, Curtin University, GPO Box U1987, Perth, WA 6845, Australia; Tel.: +61-8-9266-7314; Fax: +61-8-9266-2377
- Curtin Health Innovation Research Institute (CHIRI), Faculty of Health Sciences, Curtin University, GPO Box U1987, Perth, WA 6845, Australia
157. Hsieh SS, Leng S, Yu L, Huber NR, McCollough CH. A minimum SNR criterion for computed tomography object detection in the projection domain. Med Phys 2022; 49:4988-4998. [PMID: 35754205] [PMCID: PMC9446706] [DOI: 10.1002/mp.15832]
Abstract
BACKGROUND A common rule of thumb for object detection is the Rose criterion, which states that a signal must be five standard deviations above background to be detectable to a human observer. The validity of the Rose criterion in CT imaging is limited due to the presence of correlated noise. Recent reconstruction and denoising methodologies are also able to restore apparent image quality in very noisy conditions, and the ultimate limits of these methodologies are not yet known. PURPOSE To establish a lower bound on the minimum achievable signal-to-noise ratio (SNR) for object detection, below which detection performance is poor regardless of reconstruction or denoising methodology. METHODS We consider a numerical observer that operates on projection data and has perfect knowledge of the background and the objects to be detected, and determine the minimum projection SNR necessary to achieve predetermined lesion-level sensitivity and case-level specificity targets. We define a set S of discrete signal objects that encompasses any lesion of interest and could include lesions of different sizes, shapes, and locations. The task is to determine which object of S is present, or to state the null hypothesis that no object is present. We constrain each object in S to have equivalent projection SNR and use Monte Carlo methods to calculate the required projection SNR. Because our calculations are performed in projection space, they impose an upper limit on the performance possible from reconstructed images. We chose S to be a collection of elliptical or circular low-contrast metastases and simulated detection of these objects in a parallel-beam system with Gaussian statistics. Unless otherwise stated, we assume a target of 80% lesion-level sensitivity and 80% case-level specificity and a search field of view of 6 cm by 6 cm by 10 slices.
RESULTS When S contains only a single object, our problem is equivalent to two-alternative forced choice (2AFC) and the required projection SNR is 1.7. When S consists of circular 6 mm lesions at different locations in space, the required projection SNR is 5.1. When S is extended to include ellipses and circles of different sizes, the required projection SNR increases to 5.3. The required SNR increases if the sensitivity target, specificity target, or search field of view increases. CONCLUSIONS Even with perfect knowledge of the background and target objects, the ideal observer still requires an SNR of approximately 5. This is a lower bound on the SNR that would be required in real conditions, where the background and target objects are not known perfectly. Algorithms that denoise lesions with less than 5 projection SNR, regardless of the denoising methodology, are expected to show vanishing effects or false positive lesions.
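The single-object case can be checked analytically: for an ideal matched-filter observer with Gaussian statistics, the required SNR equals the detectability index d' = z(sensitivity) + z(specificity). The sketch below verifies this under those stated assumptions (the multi-object search cases in the paper genuinely need Monte Carlo over locations and shapes, which is not reproduced here):

```python
from statistics import NormalDist
import random

def required_snr(sensitivity, specificity):
    """Minimum projection SNR (= d') for an ideal observer testing a
    single known object against the null hypothesis, assuming unit-
    variance Gaussian statistics for the matched-filter output."""
    z = NormalDist().inv_cdf
    return z(sensitivity) + z(specificity)

def simulate(snr, n=50_000, seed=0):
    """Monte Carlo check: the matched-filter statistic is N(snr, 1)
    with the lesion present and N(0, 1) without; threshold at snr/2
    balances sensitivity and specificity."""
    rng = random.Random(seed)
    thr = snr / 2
    sens = sum(rng.gauss(snr, 1) > thr for _ in range(n)) / n
    spec = sum(rng.gauss(0, 1) <= thr for _ in range(n)) / n
    return sens, spec
```

`required_snr(0.8, 0.8)` is about 1.68, consistent with the 1.7 reported for the 2AFC-equivalent case; the jump to 5.1-5.3 in the paper comes from the observer having to search over many candidate objects rather than one.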
Affiliation(s)
- Scott S Hsieh
- Department of Radiology, Mayo Clinic, Rochester, Minnesota, USA
- Shuai Leng
- Department of Radiology, Mayo Clinic, Rochester, Minnesota, USA
- Lifeng Yu
- Department of Radiology, Mayo Clinic, Rochester, Minnesota, USA
- Nathan R Huber
- Department of Radiology, Mayo Clinic, Rochester, Minnesota, USA
158. He F, Wang Y, Tao X, Zhu M, Hong Z, Bian Z, Ma J. [Low-dose helical CT projection data restoration using noise estimation]. Nan Fang Yi Ke Da Xue Xue Bao (Journal of Southern Medical University) 2022; 42:849-859. [PMID: 35790435] [DOI: 10.12122/j.issn.1673-4254.2022.06.08]
Abstract
OBJECTIVE To build a helical CT projection data restoration model for random low-dose levels. METHODS We used a noise estimation module to obtain a low-dose projection noise variance map, which guided projection data recovery by the restoration module. A filtered back-projection (FBP) algorithm was then used to reconstruct the images. The 3D wavelet group residual dense network (3DWGRDN) was adopted as the network architecture of both the noise estimation and restoration modules, trained with an asymmetric loss and total variation regularization. For validation, 1/10- and 1/15-dose helical CT data were restored using the proposed model and 3 other restoration models (IRLNet, REDCNN and MWResNet), and the results were compared visually and quantitatively. RESULTS The proposed model increased the structural similarity index by 5.79% to 17.46% compared with the other restoration algorithms (P < 0.05). Image quality scores rated by clinical radiologists were 7.19% to 17.38% higher for the proposed method than for the other restoration algorithms (P < 0.05). CONCLUSION The proposed method effectively suppresses noise and reduces artifacts in projection data at different low-dose levels while preserving the edges and fine details of the reconstructed CT images.
Affiliation(s)
- F He
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Pazhou Lab, Guangzhou 510330, China
- Y Wang
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Pazhou Lab, Guangzhou 510330, China
- X Tao
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- M Zhu
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Pazhou Lab, Guangzhou 510330, China
- Z Hong
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Pazhou Lab, Guangzhou 510330, China
- Z Bian
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- J Ma
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
159. Liao J, Huang L, Qu M, Chen B, Wang G. Artificial Intelligence in Coronary CT Angiography: Current Status and Future Prospects. Front Cardiovasc Med 2022; 9:896366. [PMID: 35783834] [PMCID: PMC9247240] [DOI: 10.3389/fcvm.2022.896366]
Abstract
Coronary heart disease (CHD) is the leading cause of mortality in the world, so early detection and treatment of CHD are crucial. Currently, coronary CT angiography (CCTA) is the preferred choice for CHD screening and diagnosis, but it cannot fully meet clinical needs in terms of examination quality, accuracy of reporting, and accuracy of prognosis analysis. In recent years, artificial intelligence (AI) has developed rapidly in the field of medicine, playing a key role in auxiliary diagnosis, disease mechanism analysis, and prognosis assessment, including a series of studies related to CHD. In this article, the application and research status of AI in CCTA are summarized and the prospects of this field are described.
Affiliation(s)
- Jiahui Liao
- Department of Radiology, Fifth Affiliated Hospital of Sun Yat-sen University, Zhuhai, China
- School of Biomedical Engineering, Guangzhou Xinhua University, Guangzhou, China
- Lanfang Huang
- Department of Radiology, Fifth Affiliated Hospital of Sun Yat-sen University, Zhuhai, China
- Meizi Qu
- Department of Radiology, Fifth Affiliated Hospital of Sun Yat-sen University, Zhuhai, China
- Binghui Chen (correspondence)
- Department of Radiology, Fifth Affiliated Hospital of Sun Yat-sen University, Zhuhai, China
- Guojie Wang
- Department of Radiology, Fifth Affiliated Hospital of Sun Yat-sen University, Zhuhai, China
160. Han Y, Wu D, Kim K, Li Q. End-to-end deep learning for interior tomography with low-dose x-ray CT. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac6560]
Abstract
Objective. There are several x-ray computed tomography (CT) scanning strategies used to reduce radiation dose, such as (1) sparse-view CT, (2) low-dose CT and (3) region-of-interest (ROI) CT (called interior tomography). To further reduce the dose, sparse-view and/or low-dose CT settings can be applied together with interior tomography. Interior tomography has various advantages in terms of reducing the number of detectors and decreasing the x-ray radiation dose. However, a large patient or a small field-of-view (FOV) detector can cause truncated projections, and then the reconstructed images suffer from severe cupping artifacts. In addition, although low-dose CT can reduce the radiation exposure dose, analytic reconstruction algorithms produce image noise. Recently, many researchers have utilized image-domain deep learning (DL) approaches to remove each artifact and demonstrated impressive performances, and the theory of deep convolutional framelets supports the reason for the performance improvement. Approach. In this paper, we found that it is difficult to solve coupled artifacts using an image-domain convolutional neural network (CNN) based on deep convolutional framelets. To address the coupled problem, we decouple it into two sub-problems: (i) image-domain noise reduction inside the truncated projection to solve the low-dose CT problem and (ii) extrapolation of the projection outside the truncated projection to solve the ROI CT problem. The decoupled sub-problems are solved directly with a novel proposed end-to-end learning method using dual-domain CNNs. Main results. We demonstrate that the proposed method outperforms the conventional image-domain DL methods, and a projection-domain CNN shows better performance than the image-domain CNNs commonly used by many researchers.
161. van Velzen SGM, de Vos BD, Noothout JMH, Verkooijen HM, Viergever MA, Išgum I. Generative models for reproducible coronary calcium scoring. J Med Imaging (Bellingham) 2022; 9:052406. [PMID: 35664539] [DOI: 10.1117/1.jmi.9.5.052406]
Abstract
Purpose: The coronary artery calcium (CAC) score, i.e., the amount of CAC quantified in computed tomography (CT), is a strong and independent predictor of coronary heart disease (CHD) events. However, CAC scoring suffers from limited interscan reproducibility, mainly because the clinical definition requires application of a fixed intensity threshold for segmentation of calcifications. This limitation is especially pronounced in non-electrocardiogram-synchronized CT, where lesions are more affected by cardiac motion and partial volume effects. Therefore, we propose a CAC quantification method that does not require a threshold for segmentation of CAC. Approach: Our method utilizes a generative adversarial network (GAN) in which a CT image with CAC is decomposed into an image without CAC and an image showing only CAC. The method, using a cycle-consistent GAN, was trained on 626 low-dose chest CTs and 514 radiotherapy treatment planning (RTP) CTs. Interscan reproducibility was compared to clinical calcium scoring in RTP CTs of 1662 patients, each having two scans. Results: The proposed method achieved a lower relative interscan difference in CAC mass: 47%, compared to 89% for manual clinical calcium scoring. The intraclass correlation coefficient of Agatston scores was 0.96 for the proposed method, compared to 0.91 for automatic clinical calcium scoring. Conclusions: The increased interscan reproducibility achieved by our method may lead to more reliable CHD risk categorization and improved accuracy of CHD event prediction.
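The fixed-threshold limitation the authors work around is easy to see in a toy version of threshold-based calcium mass scoring. Here 130 HU is the conventional clinical threshold; the calibration factor is a made-up placeholder (real values are scanner-specific), and this is an illustration of the problem, not the paper's GAN method:

```python
import numpy as np

def cac_mass(ct_hu, voxel_volume_mm3, threshold_hu=130.0, calibration=0.001):
    """Threshold-based calcium mass score: sum of HU over voxels at or
    above the threshold, scaled by voxel volume and a calibration factor."""
    lesion = ct_hu[ct_hu >= threshold_hu]
    return float(lesion.sum() * voxel_volume_mm3 * calibration)
```

A voxel measured at 140 HU in one scan and 125 HU in the other (cardiac motion, partial volume) is counted in the first and discarded entirely in the second, which is exactly the interscan variability the threshold-free GAN decomposition sidesteps.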
Affiliation(s)
- Sanne G M van Velzen
- Amsterdam UMC location University of Amsterdam, Department of Biomedical Engineering and Physics, Amsterdam, The Netherlands; Amsterdam Cardiovascular Sciences, Heart Failure and Arrhythmias, Amsterdam, The Netherlands; University of Amsterdam, Informatics Institute, Faculty of Science, Amsterdam, The Netherlands; Utrecht University, University Medical Center Utrecht, Image Sciences Institute, Utrecht, The Netherlands
- Bob D de Vos
- Amsterdam UMC location University of Amsterdam, Department of Biomedical Engineering and Physics, Amsterdam, The Netherlands; Amsterdam Cardiovascular Sciences, Heart Failure and Arrhythmias, Amsterdam, The Netherlands; University of Amsterdam, Informatics Institute, Faculty of Science, Amsterdam, The Netherlands
- Julia M H Noothout
- Amsterdam UMC location University of Amsterdam, Department of Biomedical Engineering and Physics, Amsterdam, The Netherlands; Amsterdam Cardiovascular Sciences, Heart Failure and Arrhythmias, Amsterdam, The Netherlands; University of Amsterdam, Informatics Institute, Faculty of Science, Amsterdam, The Netherlands
- Helena M Verkooijen
- University Medical Center Utrecht, Imaging Division, Utrecht, The Netherlands
- Max A Viergever
- Utrecht University, University Medical Center Utrecht, Image Sciences Institute, Utrecht, The Netherlands
- Ivana Išgum
- Amsterdam UMC location University of Amsterdam, Department of Biomedical Engineering and Physics, Amsterdam, The Netherlands; Amsterdam Cardiovascular Sciences, Heart Failure and Arrhythmias, Amsterdam, The Netherlands; University of Amsterdam, Informatics Institute, Faculty of Science, Amsterdam, The Netherlands; Amsterdam UMC location University of Amsterdam, Department of Radiology and Nuclear Medicine, Amsterdam, The Netherlands
162. Wu Q, Tang H, Liu H, Chen YC. Masked Joint Bilateral Filtering via Deep Image Prior for Digital X-ray Image Denoising. IEEE J Biomed Health Inform 2022; 26:4008-4019. [PMID: 35653453] [DOI: 10.1109/jbhi.2022.3179652]
Abstract
Medical image denoising faces great challenges. Although deep learning methods have shown great potential, their efficiency is severely affected by millions of trainable parameters. The non-linearity of neural networks also makes them difficult to interpret. Therefore, existing deep learning methods have been sparingly applied to clinical tasks. To this end, we integrate known filtering operators into deep learning and propose a novel Masked Joint Bilateral Filtering (MJBF) via deep image prior for digital X-ray image denoising. Specifically, MJBF consists of a deep image prior generator and an iterative filtering block. The deep image prior generator produces plentiful image priors by a multi-scale fusion network. The generated image priors serve as the guidance for the iterative filtering block, which is utilized for the actual edge-preserving denoising. The iterative filtering block contains three trainable Joint Bilateral Filters (JBFs), each with only 18 trainable parameters. Moreover, a masking strategy is introduced to reduce redundancy and improve the understanding of the proposed network. Experimental results on the ChestX-ray14 dataset and real data show that the proposed MJBF has achieved superior performance in terms of noise suppression and edge preservation. Tests on the portability of the proposed method demonstrate that this denoising modality is simple yet effective, and could have a clinical impact on medical imaging in the future.
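A joint bilateral filter of the kind MJBF stacks weights each neighbor by a spatial Gaussian and a range Gaussian evaluated on a guidance image, so edges present in the guide survive the smoothing. A minimal 1-D NumPy sketch (the function name, radius, and sigma values are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def joint_bilateral_filter_1d(signal, guide, radius=2, sigma_s=1.0, sigma_r=0.1):
    """Smooth `signal` while preserving edges found in `guide`.

    Each output sample is a normalized weighted average of its neighbors:
    the spatial weight penalizes distance, and the range weight penalizes
    intensity differences in the *guidance* signal.
    """
    n = len(signal)
    out = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        idx = np.arange(lo, hi)
        w_spatial = np.exp(-((idx - i) ** 2) / (2 * sigma_s ** 2))
        w_range = np.exp(-((guide[idx] - guide[i]) ** 2) / (2 * sigma_r ** 2))
        w = w_spatial * w_range
        out[i] = np.sum(w * signal[idx]) / np.sum(w)
    return out
```

In MJBF the filter weights are trainable (18 parameters per JBF) and the guide comes from the deep image prior generator; here both are fixed for illustration.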
Collapse
|
163
|
Qin M. A study on automatic correction of English grammar errors based on deep learning. JOURNAL OF INTELLIGENT SYSTEMS 2022. [DOI: 10.1515/jisys-2022-0052] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022] Open
Abstract
Grammatical error correction (GEC) is an important element in language learning. This article briefly introduces the application of the deep learning-based Transformer model to GEC. Then, to improve the performance of the model on GEC, it was optimized with a generative adversarial network (GAN). Experiments were conducted on two data sets. It was found that the performance of the GAN-combined Transformer model was significantly improved compared to the plain Transformer model. The F0.5 value of the optimized model was 53.87 on CoNLL-2014, which was 2.69 higher than that of the Transformer model; the generalized language evaluation understanding (GLEU) value of the optimized model was 61.77 on JFLEG, which was 8.81 higher than that of the Transformer model. The optimized model also had a favorable correction performance on an actual English essay. The experimental results verify the reliability of the GAN-combined Transformer model for automatic English GEC, suggesting that the model can be further promoted and applied in practice.
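For reference, the F0.5 metric reported above is the F-beta score with beta = 0.5, which weights precision twice as heavily as recall, the convention in GEC evaluation. A small sketch of the general formula (the function name is ours):

```python
def f_beta(precision, recall, beta=0.5):
    """F-beta score: (1 + b^2) * P * R / (b^2 * P + R).

    With beta = 0.5 precision counts twice as much as recall, the standard
    choice for grammatical error correction benchmarks such as CoNLL-2014.
    """
    if precision == 0 and recall == 0:
        return 0.0
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)
```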
Collapse
Affiliation(s)
- Mengyang Qin
- College of Tourism Management, Henan Vocational College of Agriculture , No. 38, Qingnianxi Road, Zhongmu County , Zhengzhou , Henan 451450 , China
| |
Collapse
|
164
|
Altini N, Prencipe B, Cascarano GD, Brunetti A, Brunetti G, Triggiani V, Carnimeo L, Marino F, Guerriero A, Villani L, Scardapane A, Bevilacqua V. Liver, kidney and spleen segmentation from CT scans and MRI with deep learning: A survey. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2021.08.157] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/18/2022]
|
165
|
Liu J, Kang Y, Xia Z, Qiang J, Zhang J, Zhang Y, Chen Y. MRCON-Net: Multiscale reweighted convolutional coding neural network for low-dose CT imaging. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 221:106851. [PMID: 35576686 DOI: 10.1016/j.cmpb.2022.106851] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/30/2021] [Revised: 03/28/2022] [Accepted: 04/30/2022] [Indexed: 06/15/2023]
Abstract
BACKGROUND AND OBJECTIVE Low-dose computed tomography (LDCT) has become increasingly important for alleviating X-ray radiation damage. However, reducing the administered radiation dose may lead to degraded CT images with amplified mottle noise and nonstationary streak artifacts. Previous studies have confirmed that deep learning (DL) is promising for improving LDCT imaging. However, most DL-based frameworks are built intuitively, lack interpretability, and suffer from image detail information loss, which remains a general challenge. METHODS A multiscale reweighted convolutional coding neural network (MRCON-Net) is developed to address the above problems. MRCON-Net is compact and more explainable than other networks. First, inspired by the learning-based reweighted iterative soft thresholding algorithm (ISTA), we extend traditional convolutional sparse coding (CSC) to its reweighted convolutional learning form. Second, we use dilated convolution to extract multiscale image features, allowing our single model to capture the correlations between features of different scales. Finally, to automatically adjust the elements in the feature code to correct the obtained solution, a channel attention (CA) mechanism is utilized to learn appropriate weights. RESULTS The visual results obtained based on the American Association of Physicists in Medicine (AAPM) Challenge and United Imaging Healthcare (UIH) clinical datasets confirm that the proposed model significantly reduces serious artifact noise while retaining the desired structures. Quantitative results show that the average structural similarity index measurement (SSIM) and peak signal-to-noise ratio (PSNR) achieved on the AAPM Challenge dataset are 0.9491 and 40.66, respectively, and the SSIM and PSNR achieved on the UIH clinical dataset are 0.915 and 42.44, respectively.
CONCLUSION Compared with recent state-of-the-art methods, the proposed model achieves subtle structure-enhanced LDCT imaging. In addition, through ablation studies, the components of the proposed model are validated to achieve performance improvements.
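The iterative soft thresholding algorithm (ISTA) that MRCON-Net unrolls alternates a gradient step with a shrinkage step. A minimal sketch of the classical, non-convolutional iteration for min_x 0.5*||Dx - y||^2 + lam*||x||_1, a simplified stand-in for the paper's reweighted convolutional variant (dimensions and parameters are illustrative):

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of the l1 norm: shrink each entry toward zero by t.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(D, y, lam=0.1, n_iter=200):
    """Classical ISTA for min_x 0.5*||D x - y||^2 + lam*||x||_1.

    MRCON-Net unrolls a reweighted, convolutional version of this loop,
    with channel attention learning the per-feature thresholds instead of
    fixing lam globally as done here.
    """
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ x - y)           # gradient of the data-fit term
        x = soft_threshold(x - grad / L, lam / L)
    return x
```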
Collapse
Affiliation(s)
- Jin Liu
- College of Computer and Information, Anhui Polytechnic University, Wuhu, China; Key Laboratory of Computer Network and Information Integration (Southeast University) Ministry of Education Nanjing, China.
| | - Yanqin Kang
- College of Computer and Information, Anhui Polytechnic University, Wuhu, China; Key Laboratory of Computer Network and Information Integration (Southeast University) Ministry of Education Nanjing, China
| | - Zhenyu Xia
- College of Computer and Information, Anhui Polytechnic University, Wuhu, China
| | - Jun Qiang
- College of Computer and Information, Anhui Polytechnic University, Wuhu, China
| | - JunFeng Zhang
- School of Computer and Information Engineering, Henan University of Economics and Law, Zhengzhou, China
| | - Yikun Zhang
- Key Laboratory of Computer Network and Information Integration (Southeast University) Ministry of Education Nanjing, China; School of Cyber Science and Engineering, Southeast University, Nanjing, China; School of Computer Science and Engineering, Southeast University, Nanjing, China
| | - Yang Chen
- Key Laboratory of Computer Network and Information Integration (Southeast University) Ministry of Education Nanjing, China; School of Cyber Science and Engineering, Southeast University, Nanjing, China; School of Computer Science and Engineering, Southeast University, Nanjing, China
| |
Collapse
|
166
|
Okamoto T, Kumakiri T, Haneishi H. Patch-based artifact reduction for three-dimensional volume projection data of sparse-view micro-computed tomography. Radiol Phys Technol 2022; 15:206-223. [PMID: 35622229 DOI: 10.1007/s12194-022-00661-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2022] [Revised: 04/27/2022] [Accepted: 04/28/2022] [Indexed: 11/27/2022]
Abstract
Micro-computed tomography (micro-CT) enables the non-destructive acquisition of three-dimensional (3D) morphological structures at the micrometer scale. Although it is expected to be used in pathology and histology to analyze the 3D microstructure of tissues, micro-CT imaging of tissue specimens requires a long scan time. A high-speed imaging method, sparse-view CT, can reduce the total scan time and radiation dose; however, it causes severe streak artifacts on tomographic images reconstructed with analytical algorithms due to insufficient sampling. In this paper, we propose an artifact reduction method for 3D volume projection data from sparse-view micro-CT. Specifically, we developed a patch-based lightweight fully convolutional network to estimate full-view 3D volume projection data from sparse-view 3D volume projection data. We evaluated the effectiveness of the proposed method using physically acquired datasets. The qualitative and quantitative results showed that the proposed method achieved high estimation accuracy and suppressed streak artifacts in the reconstructed images. In addition, we confirmed that the proposed method requires both short training and prediction times. Our study demonstrates that the proposed method has great potential for artifact reduction for 3D volume projection data under sparse-view conditions.
Collapse
Affiliation(s)
- Takayuki Okamoto
- Graduate School of Science and Engineering, Chiba University, Chiba, 263-8522, Japan.
| | - Toshio Kumakiri
- Graduate School of Science and Engineering, Chiba University, Chiba, 263-8522, Japan
| | - Hideaki Haneishi
- Center for Frontier Medical Engineering, Chiba University, Chiba, 263-8522, Japan
| |
Collapse
|
167
|
Sie-Min K, Zulkifley MA, Kamari NAM. Optimal Compact Network for Micro-Expression Analysis System. SENSORS 2022; 22:s22114011. [PMID: 35684628 PMCID: PMC9183082 DOI: 10.3390/s22114011] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/24/2022] [Revised: 05/22/2022] [Accepted: 05/23/2022] [Indexed: 02/04/2023]
Abstract
Micro-expression analysis is the study of subtle and fleeting facial expressions that convey genuine human emotions. Since such expressions cannot be controlled, many believe that it is an excellent way to reveal a human's inner thoughts. Analyzing micro-expressions manually is a very time-consuming and complicated task, hence many researchers have incorporated deep learning techniques to produce a more efficient analysis system. However, the insufficient amount of micro-expression data has limited the network's ability to be fully optimized, as overfitting is likely to occur if a deeper network is utilized. In this paper, a complete deep learning-based micro-expression analysis system is introduced that covers the two main components of a general automated system, spotting and recognition, along with an additional synthetic data augmentation element. For the spotting part, an optimized continuous labeling scheme is introduced to spot the apex frame in a video. Once the apex frames have been recognized, they are passed to the generative adversarial network to produce an additional set of augmented apex frames. Meanwhile, for the recognition part, a novel convolutional neural network, coined as Optimal Compact Network (OC-Net), is introduced for the purpose of emotion recognition. The proposed system achieved the best F1-score of 0.69 in categorizing the emotions with the highest accuracy of 79.14%. In addition, the generated synthetic data used in the training phase also contributed a performance improvement of at least 0.61% for all tested networks. Therefore, the proposed optimized and compact deep learning system is suitable for mobile-based micro-expression analysis to detect genuine human emotions.
Collapse
|
168
|
Zhou Q, Wen M, Ding M, Zhang X. Unsupervised despeckling of optical coherence tomography images by combining cross-scale CNN with an intra-patch and inter-patch based transformer. OPTICS EXPRESS 2022; 30:18800-18820. [PMID: 36221673 DOI: 10.1364/oe.459477] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/29/2022] [Accepted: 05/03/2022] [Indexed: 06/16/2023]
Abstract
Optical coherence tomography (OCT) has found wide application in the diagnosis of ophthalmic diseases, but the quality of OCT images is degraded by speckle noise. Convolutional neural network (CNN)-based methods have attracted much attention in OCT image despeckling. However, these methods generally need noisy-clean image pairs for training and have difficulty capturing global context information effectively. To address these issues, we have proposed a novel unsupervised despeckling method. This method uses a cross-scale CNN to extract local features and an intra-patch and inter-patch based transformer to extract and merge local and global feature information. Based on these extracted features, a reconstruction network is used to produce the final denoised result. The proposed network is trained using a hybrid unsupervised loss function, which combines the loss produced by Neighbor2Neighbor, the structural similarity between the despeckled results of the probabilistic non-local means method and our method, and the mean squared error between their features extracted by the VGG network. Experiments on two clinical OCT image datasets show that our method performs better than several popular despeckling algorithms in terms of visual evaluation and quantitative indices.
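The Neighbor2Neighbor term of the hybrid loss above trains the denoiser on two sub-images drawn from neighboring pixels of the same noisy input, so no clean target is needed: the sub-images share the underlying signal but carry independent noise. A simplified sketch of that sampler and loss term only (one of the paper's three loss components; all names are ours):

```python
import numpy as np

def neighbor_subsample(img, rng):
    """Split a noisy image into two sub-images by picking, in each 2x2 cell,
    two different neighboring pixels (a Neighbor2Neighbor-style sampler)."""
    h, w = img.shape
    h2, w2 = h // 2, w // 2
    g1 = np.empty((h2, w2))
    g2 = np.empty((h2, w2))
    for i in range(h2):
        for j in range(w2):
            cell = img[2 * i:2 * i + 2, 2 * j:2 * j + 2].ravel()
            a, b = rng.choice(4, size=2, replace=False)
            g1[i, j], g2[i, j] = cell[a], cell[b]
    return g1, g2

def n2n_loss(denoiser, noisy, rng):
    # Supervision comes from the *other* sub-image, not a clean reference.
    g1, g2 = neighbor_subsample(noisy, rng)
    return float(np.mean((denoiser(g1) - g2) ** 2))
```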
Collapse
|
169
|
Jung C, Abuhamad M, Mohaisen D, Han K, Nyang D. WBC image classification and generative models based on convolutional neural network. BMC Med Imaging 2022; 22:94. [PMID: 35596153 PMCID: PMC9121596 DOI: 10.1186/s12880-022-00818-1] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2021] [Accepted: 05/06/2022] [Indexed: 11/25/2022] Open
Abstract
Background Computer-aided methods for analyzing white blood cells (WBC) are popular due to the complexity of the manual alternatives. Recent works have shown highly accurate segmentation and detection of white blood cells from microscopic blood images. However, the classification of the observed cells is still a challenge, in part due to the distribution of the five types that affect the condition of the immune system. Methods (i) This work proposes W-Net, a CNN-based method for WBC classification. We evaluate W-Net on a real-world large-scale dataset that includes 6562 real images of the five WBC types. (ii) For further benefits, we generate synthetic WBC images using a Generative Adversarial Network to be shared for education and research purposes. Results (i) W-Net achieves an average accuracy of 97%. In comparison to state-of-the-art methods in the field of WBC classification, we show that W-Net outperforms other CNN- and RNN-based model architectures. Moreover, we show the benefits of using a pre-trained W-Net in a transfer learning context when fine-tuned to a specific task or adapted to another dataset. (ii) The synthetic WBC images are confirmed by experiments and a domain expert to have a high degree of similarity to the original images. The pre-trained W-Net and the generated WBC dataset are available to the community to facilitate reproducibility and follow-up research. Conclusion This work proposed W-Net, a CNN-based architecture with a small number of layers, to accurately classify the five WBC types. We evaluated W-Net on a real-world large-scale dataset and addressed several challenges such as the transfer learning property and the class imbalance. W-Net achieved an average classification accuracy of 97%. We synthesized a dataset of new WBC image samples using DCGAN, which we released to the public for education and research purposes.
Collapse
Affiliation(s)
- Changhun Jung
- Department of Cyber Security, Ewha Womans University, 52, Ewhayeodae-gil, Seodaemun-gu, Seoul, 03760, Republic of Korea
| | - Mohammed Abuhamad
- Department of Computer Science, Loyola University Chicago, 1032 W Sheridan Rd, Chicago, 60660, USA
| | - David Mohaisen
- Department of Computer Science, University of Central Florida, 4000 Central Florida Blvd, Orlando, FL, 32816, USA
| | - Kyungja Han
- Department of Laboratory Medicine and College of Medicine, The Catholic University of Korea Seoul St. Mary's Hospital, 222, Banpo-daero, Seocho-gu, Seoul, 06591, Republic of Korea
| | - DaeHun Nyang
- Department of Cyber Security, Ewha Womans University, 52, Ewhayeodae-gil, Seodaemun-gu, Seoul, 03760, Republic of Korea.
| |
Collapse
|
170
|
Precisely translating computed tomography diagnosis accuracy into therapeutic intervention by a carbon-iodine conjugated polymer. Nat Commun 2022; 13:2625. [PMID: 35551194 PMCID: PMC9098856 DOI: 10.1038/s41467-022-30263-1] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/06/2022] [Accepted: 04/23/2022] [Indexed: 12/24/2022] Open
Abstract
X-ray computed tomography (CT) has an important role in precision medicine. However, CT contrast agents with high efficiency and the ability to translate diagnostic accuracy into therapeutic intervention are scarce. Here, poly(diiododiacetylene) (PIDA), a conjugated polymer composed of only carbon and iodine atoms, is reported as an efficient CT contrast agent to bridge CT diagnostic imaging with therapeutic intervention. PIDA has a high iodine payload (>84 wt%), and the aggregation of nanofibrous PIDA can further amplify CT intensity and has improved geometrical and positional stability in vivo. Moreover, with a conjugated backbone, PIDA is deep blue in color, making it dually visible by both CT imaging and the naked eye. The performance of PIDA in CT-guided preoperative planning and visualization-guided surgery is validated using orthotopic xenograft rat models. In addition, PIDA surpasses the clinical fiducial markers used in imaging-guided radiotherapy in efficiency and biocompatibility, and exhibits successful guidance of robotic radiotherapy on Beagles, demonstrating clinical potential to translate CT diagnostic accuracy into therapeutic intervention for precision medicine.
Collapse
|
171
|
Sun J, Zhang Q, Du Y, Zhang D, Pretorius PH, King MA, Mok GSP. Dual gating myocardial perfusion SPECT denoising using a conditional generative adversarial network. Med Phys 2022; 49:5093-5106. [DOI: 10.1002/mp.15707] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2021] [Revised: 04/29/2022] [Accepted: 05/01/2022] [Indexed: 11/12/2022] Open
Affiliation(s)
- Jingzhang Sun
- Biomedical Imaging Laboratory (BIG), Department of Electrical and Computer Engineering Faculty of Science and Technology University of Macau Macau SAR China
| | - Qi Zhang
- Biomedical Imaging Laboratory (BIG), Department of Electrical and Computer Engineering Faculty of Science and Technology University of Macau Macau SAR China
- Department of Computer and Information Science Faculty of Science and Technology University of Macau Macau SAR China
| | - Yu Du
- Biomedical Imaging Laboratory (BIG), Department of Electrical and Computer Engineering Faculty of Science and Technology University of Macau Macau SAR China
| | - Duo Zhang
- Biomedical Imaging Laboratory (BIG), Department of Electrical and Computer Engineering Faculty of Science and Technology University of Macau Macau SAR China
- Research Center for Healthcare Data Science Zhejiang Lab Hangzhou Zhejiang China
| | - P. Hendrik Pretorius
- Department of Radiology University of Massachusetts Medical School Worcester USA
| | - Michael A. King
- Department of Radiology University of Massachusetts Medical School Worcester USA
| | - Greta S. P. Mok
- Biomedical Imaging Laboratory (BIG), Department of Electrical and Computer Engineering Faculty of Science and Technology University of Macau Macau SAR China
- Center for Cognitive and Brain Sciences Institute of Collaborative Innovation University of Macau Macau SAR China
| |
Collapse
|
172
|
Li H, Tuo X. Research on an English translation method based on an improved transformer model. JOURNAL OF INTELLIGENT SYSTEMS 2022. [DOI: 10.1515/jisys-2022-0038] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022] Open
Abstract
As translation needs expand, the performance of traditional models is increasingly unable to meet current demands. This article mainly studied the Transformer model. First, the structure and principle of the Transformer model were briefly introduced. Then, the model was optimized with a generative adversarial network (GAN) to improve its translation quality. Finally, experiments were carried out on the Linguistic Data Consortium (LDC) dataset. It was found that the average Bilingual Evaluation Understudy (BLEU) value of the improved Transformer model improved by 0.49, and the average perplexity value reduced by 10.06 compared with the baseline Transformer model, while the computation speed was not greatly affected. The translation results of the two example sentences showed that the translations of the improved Transformer model were closer to human translation. The experimental results verify that the improved Transformer model can improve translation quality and can be further promoted and applied in practice to meet real-life application needs.
Collapse
Affiliation(s)
- Hongxia Li
- Xi’an Innovation College, Yan’an University , Yan’an , Shaanxi 716000 , China
| | - Xin Tuo
- Xi’an Innovation College, Yan’an University , Yan’an , Shaanxi 716000 , China
| |
Collapse
|
173
|
Yang B, Zhou L, Chen L, Lu L, Liu H, Zhu W. Cycle-consistent learning-based hybrid iterative reconstruction for whole-body PET imaging. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac5bfb] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/19/2021] [Accepted: 03/09/2022] [Indexed: 11/11/2022]
Abstract
Objective. To develop a cycle-consistent learning-based hybrid iterative reconstruction (IR) method that takes only slightly longer than analytic reconstruction, while pursuing the image resolution and tumor quantification achievable by IR for whole-body PET imaging. Approach. We backproject the raw positron emission tomography (PET) data to generate a blurred activity distribution. From the backprojection to the IR label, a reconstruction mapping that approximates the deblurring filters for the point spread function and the physical effects of the PET system is unrolled to a neural network with stacked convolutional layers. By minimizing the cycle-consistent loss, we train the reconstruction and inverse mappings simultaneously. Main results. In the phantom study, the proposed method results in an absolute relative error (RE) of the mean activity of 4.0% ± 0.7% in the largest hot sphere, similar to the RE of the full-count IR and significantly smaller than that obtained by CycleGAN postprocessing. Achieving a noise reduction of 48.1% ± 0.5% relative to the low-count IR, the proposed method demonstrates advantages over the low-count IR and CycleGAN in terms of resolution maintenance, contrast recovery, and noise reduction. In the patient study, the proposed method obtains a noise reduction of 44.6% ± 8.0% for the lung and the liver, while maintaining the regional mean activity in both simulated lesions and real tumors. The run time of the proposed method is only half that of the conventional IR. Significance. The proposed cycle-consistent learning from the backprojection rather than the raw PET data or an IR result enables improved reconstruction accuracy, reduced memory requirements, and fast implementation speeds for clinical whole-body PET imaging.
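The cycle-consistent objective trains the reconstruction mapping and its inverse to undo each other, supplying supervision without pairing every backprojection with a label. A toy sketch in which single linear layers stand in for the paper's stacked convolutional mappings (purely illustrative):

```python
import numpy as np

def cycle_consistent_loss(x, W, V):
    """||V(W x) - x||^2 averaged over samples.

    W plays the role of the reconstruction mapping (backprojection -> image)
    and V its inverse; driving the round-trip error to zero forces the two
    mappings to invert each other, as in cycle-consistent training.
    """
    return float(np.mean((V @ (W @ x) - x) ** 2))
```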
Collapse
|
174
|
Jing J, Xia W, Hou M, Chen H, Liu Y, Zhou J, Zhang Y. Training low dose CT denoising network without high quality reference data. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac5f70] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2021] [Accepted: 03/21/2022] [Indexed: 11/12/2022]
Abstract
Objective. Currently, the field of low-dose CT (LDCT) denoising is dominated by supervised learning based methods, which need perfectly registered pairs of LDCT images and their corresponding clean reference images (normal-dose CT). However, training without clean labels is more practically feasible and significant, since it is clinically impossible to acquire a large amount of these paired samples. In this paper, a self-supervised denoising method is proposed for LDCT imaging. Approach. The proposed method does not require any clean images. In addition, the perceptual loss is used to achieve data consistency in the feature domain during the denoising process. Attention blocks used in the decoding phase can help further improve the image quality. Main results. In the experiments, we validate the effectiveness of our proposed self-supervised framework and compare our method with several state-of-the-art supervised and unsupervised methods. The results show that our proposed model is competitive with other methods in both qualitative and quantitative terms. Significance. Our framework can be directly applied to most denoising scenarios without collecting pairs of training data, which makes it more flexible for real clinical scenarios.
Collapse
|
175
|
Artificial Intelligence (Enhanced Super-Resolution Generative Adversarial Network) for Calcium Deblooming in Coronary Computed Tomography Angiography: A Feasibility Study. Diagnostics (Basel) 2022; 12:diagnostics12040991. [PMID: 35454039 PMCID: PMC9027004 DOI: 10.3390/diagnostics12040991] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2022] [Revised: 04/08/2022] [Accepted: 04/13/2022] [Indexed: 12/22/2022] Open
Abstract
Background: The presence of heavy calcification in the coronary artery always presents a challenge for coronary computed tomography angiography (CCTA) in assessing the degree of coronary stenosis due to blooming artifacts associated with calcified plaques. Our study purpose was to use an advanced artificial intelligence (enhanced super-resolution generative adversarial network [ESRGAN]) model to suppress the blooming artifact in CCTA and determine its effect on improving the diagnostic performance of CCTA in calcified plaques. Methods: A total of 184 calcified plaques from 50 patients who underwent both CCTA and invasive coronary angiography (ICA) were analysed with measurements of coronary lumen on the original CCTA, and three sets of ESRGAN-processed images including ESRGAN-high-resolution (ESRGAN-HR), ESRGAN-average and ESRGAN-median with ICA as the reference method for determining sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV). Results: ESRGAN-processed images improved the specificity and PPV at all three coronary arteries (LAD-left anterior descending, LCx-left circumflex and RCA-right coronary artery) compared to original CCTA with ESRGAN-median resulting in the highest values being 41.0% (95% confidence interval [CI]: 30%, 52.7%) and 26.9% (95% CI: 22.9%, 31.4%) at LAD; 41.7% (95% CI: 22.1%, 63.4%) and 36.4% (95% CI: 28.9%, 44.5%) at LCx; 55% (95% CI: 38.5%, 70.7%) and 47.1% (95% CI: 38.7%, 55.6%) at RCA; while corresponding values for original CCTA were 21.8% (95% CI: 13.2%, 32.6%) and 22.8% (95% CI: 20.8%, 24.9%); 12.5% (95% CI: 2.6%, 32.4%) and 27.6% (95% CI: 24.7%, 30.7%); 17.5% (95% CI: 7.3%, 32.8%) and 32.7% (95% CI: 29.6%, 35.9%) at LAD, LCx and RCA, respectively. There was no significant effect on sensitivity and NPV between the original CCTA and ESRGAN-processed images at all three coronary arteries. 
The area under the receiver operating characteristic curve was the highest with ESRGAN-median images at the RCA level with values being 0.76 (95% CI: 0.64, 0.89), 0.81 (95% CI: 0.69, 0.93), 0.82 (95% CI: 0.71, 0.94) and 0.86 (95% CI: 0.76, 0.96) corresponding to original CCTA and ESRGAN-HR, average and median images, respectively. Conclusions: This feasibility study shows the potential value of ESRGAN-processed images in improving the diagnostic value of CCTA for patients with calcified plaques.
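The sensitivity, specificity, PPV, and NPV figures above all follow from the standard 2x2 confusion matrix of CCTA readings against the ICA reference. A small sketch (the counts in the test are made up for illustration, not taken from the study):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard 2x2 confusion-matrix metrics for a diagnostic test
    compared against a reference standard (here, CCTA vs. ICA)."""
    return {
        "sensitivity": tp / (tp + fn),  # true positive rate
        "specificity": tn / (tn + fp),  # true negative rate
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }
```

Blooming artifacts inflate apparent stenosis, producing false positives; suppressing them raises tn and lowers fp, which is why the ESRGAN-processed images improve specificity and PPV while leaving sensitivity and NPV largely unchanged.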
Collapse
|
176
|
Abstract
We present a deep learning-based generative model for the enhancement of partially coherent diffractive images. In lensless coherent diffractive imaging, a highly coherent X-ray illumination is required to image an object at high resolution. Non-ideal experimental conditions result in a partially coherent X-ray illumination, lead to imperfections of coherent diffractive images recorded on a detector, and ultimately limit the capability of lensless coherent diffractive imaging. The previous approaches, relying on the coherence property of illumination, require preliminary experiments or expensive computations. In this article, we propose a generative adversarial network (GAN) model to enhance the visibility of fringes in partially coherent diffractive images. Unlike previous approaches, the model is trained to restore the latent sharp features from blurred input images without finding coherence properties of illumination. We demonstrate that the GAN model performs well with both coherent diffractive imaging and ptychography. It can be applied to a wide range of imaging techniques relying on phase retrieval of coherent diffraction patterns.
Collapse
|
177
|
Abstract
Due to sensor instability and atmospheric interference, hyperspectral images (HSIs) often suffer from different kinds of noise which degrade the performance of downstream tasks. Therefore, HSI denoising has become an essential part of HSI preprocessing. Traditional methods tend to tackle one specific type of noise and remove it iteratively, resulting in drawbacks including inefficiency when dealing with mixed noise. Most recently, deep neural network-based models, especially generative adversarial networks, have demonstrated promising performance in generic image denoising. However, in contrast to generic RGB images, HSIs often possess abundant spectral information; thus, it is non-trivial to design a denoising network to effectively explore both spatial and spectral characteristics simultaneously. To address the above issues, in this paper, we propose an end-to-end HSI denoising model via adversarial learning. More specifically, to capture the subtle noise distribution from both spatial and spectral dimensions, we designed a Residual Spatial-Spectral Module (RSSM) and embedded it in a UNet-like structure as the generator to obtain clean images. To distinguish the real image from the generated one, we designed a discriminator based on the Multiscale Feature Fusion Module (MFFM) to further improve the quality of the denoising results. The generator was trained with joint loss functions, including reconstruction loss, structural loss and adversarial loss. Moreover, considering the lack of publicly available training data for the HSI denoising task, we collected an additional benchmark dataset denoted as the Shandong Feicheng Denoising (SFD) dataset. We evaluated five types of mixed noise across several datasets in comparative experiments, and comprehensive experimental results on both simulated and real data demonstrate that the proposed model achieves competitive results against state-of-the-art methods.
For ablation studies, we investigated the structure of the generator as well as the training process with joint losses and different amounts of training data, further validating the rationality and effectiveness of the proposed method.
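The joint generator objective described above (reconstruction, structural, and adversarial terms) can be sketched generically. The weights and the correlation-based stand-in for the structural (SSIM-style) term below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def joint_loss(clean, denoised, d_fake, w_rec=1.0, w_struct=0.1, w_adv=0.01):
    """Weighted generator loss: reconstruction + structural + adversarial terms.

    The weights and the correlation-based structural term are illustrative choices.
    """
    rec = np.abs(clean - denoised).mean()                          # L1 reconstruction loss
    cov = ((clean - clean.mean()) * (denoised - denoised.mean())).mean()
    struct = 1.0 - cov / (clean.std() * denoised.std() + 1e-8)     # crude SSIM-style structural term
    adv = -np.log(d_fake + 1e-8)                                   # non-saturating adversarial term
    return w_rec * rec + w_struct * struct + w_adv * adv

x = np.arange(16.0).reshape(4, 4)
perfect = joint_loss(x, x, d_fake=1.0)            # identical output, discriminator fooled
bad = joint_loss(x, np.zeros_like(x), d_fake=0.5) # wrong output, discriminator not fooled
```

A perfect reconstruction that fools the discriminator drives all three terms toward zero, while a poor one is penalized by every term.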
|
178
|
Kalare K, Bajpai M, Sarkar S, Munshi P. Deep neural network for beam hardening artifacts removal in image reconstruction. APPL INTELL 2022. [DOI: 10.1007/s10489-021-02604-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
|
179
|
Monsour R, Dutta M, Mohamed AZ, Borkowski A, Viswanadhan NA. Neuroimaging in the Era of Artificial Intelligence: Current Applications. Fed Pract 2022; 39:S14-S20. [PMID: 35765692 PMCID: PMC9227741 DOI: 10.12788/fp.0231] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 08/17/2023]
Abstract
BACKGROUND Artificial intelligence (AI) in medicine has shown significant promise, particularly in neuroimaging. AI increases efficiency and reduces errors, making it a valuable resource for physicians. With the increasing amount of data processing and image interpretation required, the ability to use AI to augment and aid the radiologist could improve the quality of patient care. OBSERVATIONS AI can predict patient wait times, which may allow more efficient patient scheduling. Additionally, AI can save time for repeat magnetic resonance neuroimaging and reduce the time spent during imaging. AI has the ability to read computed tomography, magnetic resonance imaging, and positron emission tomography with reduced contrast or no contrast, without significant loss in sensitivity for detecting lesions. AI in neuroimaging does raise important ethical considerations and is subject to bias. It is vital that users understand the practical and ethical considerations of the technology. CONCLUSIONS The demonstrated applications of AI in neuroimaging are numerous and varied, and it is reasonable to assume that its implementation will increase as the technology matures. AI's use for detecting neurologic conditions holds promise in combating ever-increasing imaging volumes and providing timely diagnoses.
Affiliation(s)
- Robert Monsour
- University of South Florida Morsani College of Medicine, Tampa, Florida
| | - Mudit Dutta
- University of South Florida Morsani College of Medicine, Tampa, Florida
| | | | - Andrew Borkowski
- University of South Florida Morsani College of Medicine, Tampa, Florida
- James A. Haley Veterans’ Hospital, Tampa, Florida
| | - Narayan A. Viswanadhan
- University of South Florida Morsani College of Medicine, Tampa, Florida
- James A. Haley Veterans’ Hospital, Tampa, Florida
| |
|
180
|
Prasad S, Almekkawy M. DeepUCT: Complex cascaded deep learning network for improved ultrasound tomography. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac5296] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/17/2021] [Accepted: 02/07/2022] [Indexed: 11/12/2022]
Abstract
Ultrasound computed tomography is an inexpensive and radiation-free medical imaging technique used to quantify the tissue acoustic properties for advanced clinical diagnosis. Image reconstruction in ultrasound tomography is often modeled as an optimization scheme solved by iterative methods like full-waveform inversion. These iterative methods are computationally expensive, while the optimization problem is ill-posed and nonlinear. To address this problem, we propose to use deep learning to overcome the computational burden and ill-posedness, and achieve near real-time image reconstruction in ultrasound tomography. We aim to directly learn the mapping from the recorded time-series sensor data to a spatial image of acoustical properties. To accomplish this, we develop a deep learning model using two cascaded convolutional neural networks with an encoder–decoder architecture. We achieve a good representation by first extracting the intermediate mapping-knowledge and later utilizing this knowledge to reconstruct the image. This approach is evaluated on synthetic phantoms where simulated ultrasound data are acquired from a ring of transducers surrounding the region of interest. The measurement data are acquired by forward modeling the wave equation using the k-Wave toolbox. Our simulation results demonstrate that our proposed deep-learning method is robust to noise and significantly outperforms the state-of-the-art traditional iterative method both quantitatively and qualitatively. Furthermore, our model takes substantially less computational time than the conventional full-waveform inversion method.
|
181
|
Karar ME, Alotaibi B, Alotaibi M. Intelligent Medical IoT-Enabled Automated Microscopic Image Diagnosis of Acute Blood Cancers. SENSORS (BASEL, SWITZERLAND) 2022; 22:s22062348. [PMID: 35336523 PMCID: PMC8949784 DOI: 10.3390/s22062348] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/31/2022] [Revised: 03/06/2022] [Accepted: 03/15/2022] [Indexed: 05/03/2023]
Abstract
Blood cancer, or leukemia, has a negative impact on the blood and/or bone marrow of children and adults. Acute lymphocytic leukemia (ALL) and acute myeloid leukemia (AML) are two sub-types of acute leukemia. The Internet of Medical Things (IoMT) and artificial intelligence have allowed for the development of advanced technologies to assist in recently introduced medical procedures. Hence, in this paper, we propose a new intelligent IoMT framework for the automated classification of acute leukemias using microscopic blood images. The workflow of our proposed framework includes three main stages, as follows. First, blood samples are collected by wireless digital microscopy and sent to a cloud server. Second, the cloud server carries out automatic identification of the blood condition, either leukemia or healthy, utilizing our developed generative adversarial network (GAN) classifier. Finally, the classification results are sent to a hematologist for medical approval. The developed GAN classifier was successfully evaluated on two public data sets: ALL-IDB and ASH image bank. It achieved the best accuracy scores of 98.67% for binary classification (ALL or healthy) and 95.5% for multi-class classification (ALL, AML, and normal blood cells), when compared with existing state-of-the-art methods. The results of this study demonstrate the feasibility of our proposed IoMT framework for automated diagnosis of acute leukemias. Clinical realization of this blood diagnosis system is our future work.
Affiliation(s)
- Mohamed Esmail Karar
- College of Computing and Information Technology, Shaqra University, P.O. Box 33, Shaqra 11961, Saudi Arabia;
- Faculty of Electronic Engineering, Menoufia University, Menouf 32952, Egypt
| | - Bandar Alotaibi
- Department of Information Technology, University of Tabuk, Tabuk 47731, Saudi Arabia;
- Sensor Networks and Cellular Systems (SNCS) Research Center, University of Tabuk, Tabuk 47731, Saudi Arabia
| | - Munif Alotaibi
- College of Computing and Information Technology, Shaqra University, P.O. Box 33, Shaqra 11961, Saudi Arabia;
- Correspondence:
| |
|
182
|
Towards Accurate Skin Lesion Classification across All Skin Categories Using a PCNN Fusion-Based Data Augmentation Approach. COMPUTERS 2022. [DOI: 10.3390/computers11030044] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/01/2023]
Abstract
Deep learning models yield remarkable results in skin lesion analysis. However, these models require considerable amounts of data, while accessibility to images with annotated skin lesions is often limited, and the classes are often imbalanced. Data augmentation is one way to alleviate the lack of labeled data and class imbalance. This paper proposes a new data augmentation method based on an image fusion technique to construct a large dataset covering all existing skin tones. The fusion method uses a pulse-coupled neural network fusion strategy in a non-subsampled shearlet transform domain and consists of three steps: decomposition, fusion, and reconstruction. The dermoscopic dataset is obtained by combining the ISIC2019 and ISIC2020 Challenge datasets. A comparative study with current algorithms was performed to assess the effectiveness of the proposed one. The first experiment's results indicate that the proposed algorithm best preserves the lesion's dermoscopic structure and skin tone features. The second experiment, which consisted of training a convolutional neural network model with the augmented dataset, indicates a significant increase in accuracy of 15.69% and 15.38% for the tanned and brown skin categories, respectively. The model's precision, recall, and F1-score also increased. The obtained results indicate that the proposed augmentation method is suitable for dermoscopic images and can be used as a solution to the lack of dark skin images in the dataset.
|
183
|
Abstract
In clinical medical applications, sparse-view computed tomography (CT) imaging is an effective method for reducing radiation doses. The iterative reconstruction method is usually adopted for sparse-view CT. In the process of optimizing the iterative model, the approach of directly solving the quadratic penalty function of the objective function can be expected to perform poorly. Compared with the direct solution method, the alternating direction method of multipliers (ADMM) algorithm can avoid the ill-posed problem associated with the quadratic penalty function. However, the regularization items, sparsity transform, and parameters in the traditional ADMM iterative model need to be manually adjusted. In this paper, we propose a data-driven ADMM reconstruction method that can automatically optimize the above terms that are difficult to choose within an iterative framework. The main contribution of this paper is that a modified U-net represents the sparse transformation, and the prior information and related parameters are automatically trained by the network. Based on a comparison with other state-of-the-art reconstruction algorithms, the qualitative and quantitative results show the effectiveness of our method for sparse-view CT image reconstruction. The experimental results show that the proposed method performs well in streak artifact elimination and detail structure preservation. The proposed network can deal with a wide range of noise levels and has exceptional performance in low-dose reconstruction tasks.
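The classical ADMM recipe the paper builds on can be illustrated on a toy ℓ1-regularized problem. The sketch below uses the traditional hand-tuned formulation (a fixed sparsity penalty λ and multiplier parameter ρ), exactly the ingredients the proposed data-driven method learns automatically; the toy identity operator and values are assumptions for illustration:

```python
import numpy as np

def soft_threshold(v, k):
    """Elementwise soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam=0.1, rho=1.0, n_iter=200):
    """ADMM for min 0.5*||Ax - b||^2 + lam*||z||_1 subject to x = z."""
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    Atb = A.T @ b
    inv = np.linalg.inv(A.T @ A + rho * np.eye(n))  # cached factor for the x-update
    for _ in range(n_iter):
        x = inv @ (Atb + rho * (z - u))             # quadratic data-fit subproblem
        z = soft_threshold(x + u, lam / rho)        # sparsity subproblem
        u = u + x - z                               # dual (multiplier) update
    return z

# Toy problem: with A = I the lasso solution is soft-thresholding of b
A = np.eye(4)
b = np.array([3.0, 0.05, -2.0, 0.01])
x_hat = admm_lasso(A, b, lam=0.1)
```

Splitting the data-fit and sparsity terms this way avoids the ill-posedness of directly minimizing a quadratic penalty, which is the motivation the abstract gives for choosing ADMM.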
|
184
|
Brain stroke lesion segmentation using consistent perception generative adversarial network. Neural Comput Appl 2022. [DOI: 10.1007/s00521-021-06816-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
|
185
|
A Hemolysis Image Detection Method Based on GAN-CNN-ELM. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2022; 2022:1558607. [PMID: 35242201 PMCID: PMC8888064 DOI: 10.1155/2022/1558607] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/23/2021] [Accepted: 01/18/2022] [Indexed: 11/18/2022]
Abstract
Because manual hemolysis testing depends heavily on practical experience and is costly, the characteristics of hemolysis images are studied. A hemolysis image detection method based on generative adversarial networks (GANs) and convolutional neural networks (CNNs) with extreme learning machine (ELM) is proposed. First, image enhancement and data augmentation are performed on a sample set, and a GAN is used to expand the sample data volume. Second, a CNN is used to extract feature vectors from the processed images, and the labels are one-hot encoded. Third, the feature matrix is input to the ELM network, which is trained to minimize the error and obtain the optimal output weights. Finally, the image to be detected is input to the trained model, and the class with the greatest probability is selected as the final category. Through model comparison experiments, the results show that the hemolysis image detection method based on the GAN-CNN-ELM model is better than GAN-CNN, GAN-ELM, GAN-ELM-L1, GAN-SVM, GAN-CNN-SVM, and CNN-ELM in accuracy and speed, and the accuracy rate is 98.91%.
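The ELM stage is the simplest to sketch: input-to-hidden weights are random and never trained, and only the output weights are fit in closed form via a pseudoinverse. A minimal generic example on toy XOR data with one-hot labels (an illustration, not the paper's network):

```python
import numpy as np

def train_elm(X, Y, n_hidden=32, seed=0):
    """Extreme learning machine: random hidden layer, closed-form output weights."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input-to-hidden weights (fixed)
    b = rng.normal(size=n_hidden)                # random hidden biases (fixed)
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    beta = np.linalg.pinv(H) @ Y                 # least-squares output weights
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# XOR toy problem, labels one-hot encoded as in the paper's pipeline
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.eye(2)[[0, 1, 1, 0]]
W, b, beta = train_elm(X, Y)
pred = predict_elm(X, W, b, beta).argmax(axis=1)
```

Because only a linear least-squares problem is solved, training is very fast, which is the usual argument for pairing an ELM classifier with CNN-extracted features.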
|
186
|
DENOISING SWEPT SOURCE OPTICAL COHERENCE TOMOGRAPHY VOLUMETRIC SCANS USING A DEEP LEARNING MODEL. Retina 2022; 42:450-455. [PMID: 35175017 DOI: 10.1097/iae.0000000000003348] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
PURPOSE To evaluate the use of a deep learning noise reduction model on swept source optical coherence tomography volumetric scans. METHODS Three groups of images, including single-line highly averaged foveal scans (averaged images), foveal B-scans from volumetric scans using no averaging (unaveraged images), and deep learning denoised versions of the latter (denoised images), were obtained. We evaluated the potential increase in the signal-to-noise ratio by evaluating the contrast-to-noise ratio of the resultant images and measured the multiscale structural similarity index to determine whether the unaveraged and denoised images held true in structure to the averaged images. We evaluated the practical effects of denoising on a popular metric of choroidal vascularity known as the choroidal vascularity index. RESULTS Ten eyes of 10 subjects with a mean age of 31 years (range 24-64 years) were evaluated. The deep choroidal contrast-to-noise ratio mean values of the averaged and denoised image groups were similar (7.06 vs. 6.81, P = 0.75), and both groups had better maximum contrast-to-noise ratio mean values (27.65 and 46.34) than the unaveraged group (14.75; P = 0.001 and P < 0.001, respectively). The mean multiscale structural similarity index of the averaged-denoised images was significantly higher than the one from the averaged-unaveraged images (0.85 vs. 0.61, P < 0.001). Choroidal vascularity index values from averaged and denoised images were similar (71.81 vs. 71.16, P = 0.554). CONCLUSION Using three different metrics, we demonstrated that the deep learning denoising model can produce high-quality images that emulate, and may exceed, the quality of highly averaged scans.
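The contrast-to-noise ratio used above has several variants; a common definition (an assumption here, since the paper's exact formula is not reproduced) divides the mean difference between a region of interest and a background patch by the background noise:

```python
import numpy as np

def cnr(roi, background):
    """Contrast-to-noise ratio: mean contrast between regions over background noise."""
    return abs(roi.mean() - background.mean()) / background.std()

# Synthetic patches: a bright ROI at level 50 and a background at mean 10 with std 2
roi = np.full(100, 50.0)
bg = np.tile([8.0, 12.0], 50)
value = cnr(roi, bg)  # (50 - 10) / 2 = 20
```

With this definition, denoising raises CNR by shrinking the background standard deviation while leaving the mean contrast unchanged.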
|
187
|
Yang H, Dong B, Gu W, Wu S, Zhou W, Zhang X, Wang D. Transmission reconstruction algorithm by combining maximum-likelihood expectation maximization and a convolutional neural network for radioactive drum characterization. Appl Radiat Isot 2022; 184:110172. [DOI: 10.1016/j.apradiso.2022.110172] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2021] [Revised: 02/15/2022] [Accepted: 02/25/2022] [Indexed: 11/16/2022]
|
188
|
Posterior temperature optimized Bayesian models for inverse problems in medical imaging. Med Image Anal 2022; 78:102382. [DOI: 10.1016/j.media.2022.102382] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2021] [Revised: 11/09/2021] [Accepted: 02/01/2022] [Indexed: 11/21/2022]
|
189
|
Platscher M, Zopes J, Federau C. Image translation for medical image generation: Ischemic stroke lesion segmentation. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103283] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
|
190
|
Dinh TQ, Xiong Y, Huang Z, Vo T, Mishra A, Kim WH, Ravi SN, Singh V. Performing Group Difference Testing on Graph Structured Data From GANs: Analysis and Applications in Neuroimaging. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2022; 44:877-889. [PMID: 32763848 PMCID: PMC7867665 DOI: 10.1109/tpami.2020.3013433] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Generative adversarial networks (GANs) have emerged as a powerful generative model in computer vision. Given their impressive abilities in generating highly realistic images, they are also being used in novel ways in applications in the life sciences. This raises an interesting question when GANs are used in scientific or biomedical studies. Consider the setting where we are restricted to only using the samples from a trained GAN for downstream group difference analysis (and do not have direct access to the real data). Will we obtain similar conclusions? In this work, we explore if "generated" data, i.e., sampled from such GANs can be used for performing statistical group difference tests in cases versus controls studies, common across many scientific disciplines. We provide a detailed analysis describing regimes where this may be feasible. We complement the technical results with an empirical study focused on the analysis of cortical thickness on brain mesh surfaces in an Alzheimer's disease dataset. To exploit the geometric nature of the data, we use simple ideas from spectral graph theory to show how adjustments to existing GANs can yield improvements. We also give a generalization error bound by extending recent results on Neural Network Distance. To our knowledge, our work offers the first analysis assessing whether the Null distribution in "healthy versus diseased subjects" type statistical testing using data generated from the GANs coincides with the one obtained from the same analysis with real data. The code is available at https://github.com/yyxiongzju/GLapGAN.
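A standard way to run such a cases-versus-controls comparison, on real or GAN-generated samples alike, is a two-sample permutation test on the difference of group means. The sketch below is a generic illustration with toy data, not the paper's spectral-graph machinery:

```python
import numpy as np

def permutation_test(a, b, n_perm=2000, seed=0):
    """Two-sample permutation test on |mean(a) - mean(b)|; returns a p-value."""
    rng = np.random.default_rng(seed)
    observed = abs(a.mean() - b.mean())
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                    # random relabeling of group membership
        diff = abs(pooled[:len(a)].mean() - pooled[len(a):].mean())
        count += diff >= observed
    return (count + 1) / (n_perm + 1)          # add-one correction for validity

# Clearly separated groups vs. identical groups (toy data)
p_diff = permutation_test(np.arange(20) * 0.01, np.arange(20) * 0.01 + 10.0)
p_same = permutation_test(np.arange(20.0), np.arange(20.0))
```

The question the paper studies is precisely whether the null distribution implied by such a test changes when `a` and `b` are drawn from a trained GAN instead of the real cohorts.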
|
191
|
Fu Y, Zhang H, Morris ED, Glide-Hurst CK, Pai S, Traverso A, Wee L, Hadzic I, Lønne PI, Shen C, Liu T, Yang X. Artificial Intelligence in Radiation Therapy. IEEE TRANSACTIONS ON RADIATION AND PLASMA MEDICAL SCIENCES 2022; 6:158-181. [PMID: 35992632 PMCID: PMC9385128 DOI: 10.1109/trpms.2021.3107454] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/03/2023]
Abstract
Artificial intelligence (AI) has great potential to transform the clinical workflow of radiotherapy. Since the introduction of deep neural networks, many AI-based methods have been proposed to address challenges in different aspects of radiotherapy. Commercial vendors have started to release AI-based tools that can be readily integrated into the established clinical workflow. To show the recent progress in AI-aided radiotherapy, we have reviewed AI-based studies in five major aspects of radiotherapy, including image reconstruction, image registration, image segmentation, image synthesis, and automatic treatment planning. In each section, we summarized and categorized the recently published methods, followed by a discussion of the challenges, concerns, and future development. Given the rapid development of AI-aided radiotherapy, the efficiency and effectiveness of radiotherapy in the future could be substantially improved through intelligent automation of various aspects of radiotherapy.
Affiliation(s)
- Yabo Fu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
| | - Hao Zhang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
| | - Eric D. Morris
- Department of Radiation Oncology, University of California-Los Angeles, Los Angeles, CA 90095, USA
| | - Carri K. Glide-Hurst
- Department of Human Oncology, School of Medicine and Public Health, University of Wisconsin-Madison, Madison, WI 53792, USA
| | - Suraj Pai
- Maastricht University Medical Centre, Netherlands
| | | | - Leonard Wee
- Maastricht University Medical Centre, Netherlands
| | | | - Per-Ivar Lønne
- Department of Medical Physics, Oslo University Hospital, PO Box 4953 Nydalen, 0424 Oslo, Norway
| | - Chenyang Shen
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75002, USA
| | - Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
| | - Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
| |
|
192
|
Montoya JC, Zhang C, Li Y, Li K, Chen GH. Reconstruction of three-dimensional tomographic patient models for radiation dose modulation in CT from two scout views using deep learning. Med Phys 2022; 49:901-916. [PMID: 34908175 PMCID: PMC9080958 DOI: 10.1002/mp.15414] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2021] [Revised: 11/11/2021] [Accepted: 11/16/2021] [Indexed: 02/03/2023] Open
Abstract
BACKGROUND A tomographic patient model is essential for radiation dose modulation in x-ray computed tomography (CT). Currently, two-view scout images (also known as topograms) are used to estimate patient models with relatively uniform attenuation coefficients. These patient models do not account for the detailed anatomical variations of human subjects, and thus, may limit the accuracy of intraview or organ-specific dose modulations in emerging CT technologies. PURPOSE The purpose of this work was to show that 3D tomographic patient models can be generated from two-view scout images using deep learning strategies, and the reconstructed 3D patient models indeed enable accurate prescriptions of fluence-field modulated or organ-specific dose delivery in the subsequent CT scans. METHODS CT images and the corresponding two-view scout images were retrospectively collected from 4214 individual CT exams. The collected data were curated for the training of a deep neural network architecture termed ScoutCT-NET to generate 3D tomographic attenuation models from two-view scout images. The trained network was validated using a cohort of 55,136 images from 212 individual patients. To evaluate the accuracy of the reconstructed 3D patient models, radiation delivery plans were generated using ScoutCT-NET 3D patient models and compared with plans prescribed based on true CT images (gold standard) for both fluence-field-modulated CT and organ-specific CT. Radiation dose distributions were estimated using Monte Carlo simulations and were quantitatively evaluated using the Gamma analysis method. Modulated dose profiles were compared against state-of-the-art tube current modulation schemes. Impacts of ScoutCT-NET patient model-based dose modulation schemes on universal-purpose CT acquisitions and organ-specific acquisitions were also compared in terms of overall image appearance, noise magnitude, and noise uniformity.
RESULTS The results demonstrate that (1) The end-to-end trained ScoutCT-NET can be used to generate 3D patient attenuation models and demonstrate empirical generalizability. (2) The 3D patient models can be used to accurately estimate the spatial distribution of radiation dose delivered by standard helical CTs prior to the actual CT acquisition; compared to the gold-standard dose distribution, 95.0% of the voxels in the ScoutCT-NET based dose maps have acceptable gamma values for 5 mm distance-to-agreement and 10% dose difference. (3) The 3D patient models also enabled accurate prescription of fluence-field modulated CT to generate a more uniform noise distribution across the patient body compared to tube current-modulated CT. (4) ScoutCT-NET 3D patient models enabled accurate prescription of organ-specific CT to boost image quality for a given body region-of-interest under a given radiation dose constraint. CONCLUSION 3D tomographic attenuation models generated by ScoutCT-NET from two-view scout images can be used to prescribe fluence-field-modulated or organ-specific CT scans with high accuracy for the overall objective of radiation dose reduction or image quality improvement for a given imaging task.
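The gamma analysis used above combines a distance-to-agreement (DTA) criterion with a dose-difference criterion; a voxel passes when the combined gamma index is at most 1. A simplified 1D global version with the 5 mm / 10% tolerances quoted in the results (a generic sketch, not the authors' evaluation code) might look like:

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, spacing_mm, dta_mm=5.0, dd_frac=0.10):
    """Simplified 1D global gamma analysis: fraction of points with gamma <= 1."""
    x = np.arange(len(dose_ref)) * spacing_mm
    dd = dd_frac * dose_ref.max()              # global dose-difference tolerance
    passed = 0
    for i, d_ref in enumerate(dose_ref):
        dist = (x - x[i]) / dta_mm             # spatial term, normalized by DTA
        diff = (dose_eval - d_ref) / dd        # dose term, normalized by dose tolerance
        gamma = np.sqrt(dist**2 + diff**2).min()
        passed += gamma <= 1.0
    return passed / len(dose_ref)

# Identical reference and evaluated profiles pass everywhere
d = np.linspace(0.0, 100.0, 21)
rate = gamma_pass_rate(d, d, spacing_mm=1.0)
```

Because each reference point searches over nearby evaluated points, small spatial shifts of an otherwise correct dose profile can still pass, which is the intended leniency of the gamma metric.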
Affiliation(s)
- Juan C Montoya
- Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
| | - Chengzhu Zhang
- Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
| | - Yinsheng Li
- Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
| | - Ke Li
- Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
| | - Guang-Hong Chen
- Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
| |
|
193
|
Hsieh J. A novel simulation-driven reconstruction approach for X-ray computed tomography. Med Phys 2022; 49:2245-2258. [PMID: 35102555 DOI: 10.1002/mp.15502] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2021] [Revised: 01/17/2022] [Accepted: 01/21/2022] [Indexed: 11/09/2022] Open
Abstract
PURPOSE Radiation dose reduction is critical to the success of x-ray computed tomography (CT). Many advanced reconstruction techniques have been developed over the years to combat noise resulting from the low-dose CT scans. These algorithms rely on accurate local estimation of the image noise to determine reconstruction parameters or to select inferencing models. Because of difficulties in the noise estimation for heterogeneous objects, the performance of many algorithms is inconsistent and suboptimal. In this paper, we propose a novel approach to overcome such shortcoming. METHOD By injecting appropriate amount of noise in the CT raw data, a computer simulation approach is capable of accurately estimating the local statistics of the raw data and the local noise in the reconstructed images. This information is then used to guide the noise reduction process during the reconstruction. As an initial implementation, a scaling map is generated based on the noise predicted from the simulation and the noise estimated from existing reconstruction algorithms. Images generated with existing algorithms are subsequently modified based on the scaling map. In this study, both iterative reconstruction (IR) and deep learning image reconstruction (DLIR) algorithms are evaluated. RESULTS Phantom experiments were conducted to evaluate the performance of the simulation-based noise estimation in terms of the standard deviation and noise power spectrum (NPS). Quantitative results have demonstrated that the noise measured from the original image matches well with the noise estimated from the simulation. Clinical datasets were utilized to further confirm the accuracy of the proposed approach under more challenging conditions. To validate the performance of the proposed reconstruction approach, clinical scans were used. Performance comparison was carried out qualitatively and quantitatively. 
Two existing advanced reconstruction techniques, IR and DLIR, were evaluated against the proposed approach. Results have shown that the proposed approach outperforms existing IR and DLIR algorithms in terms of noise suppression and, equally importantly, noise uniformity across the entire imaging volume. Visual assessment of the images also reveals that the proposed approach does not suffer from the noise texture issues facing some of the existing reconstruction algorithms today. CONCLUSION Phantom and clinical results have demonstrated superior performance of the proposed approach with regard to noise reduction as well as noise homogeneity. Visual inspection of the noise texture further confirms the clinical utility of the proposed approach. Future enhancements of the current implementation are explored regarding image quality and computational efficiency. Because of the limited scope of this paper, detailed investigation of these enhancement features will be covered in a separate report.
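One plausible reading of the scaling-map step (a hypothetical sketch; the abstract does not spell out its formula) is to rescale the noise component of a reconstruction by the ratio of simulation-predicted noise to the noise estimated from the existing algorithm:

```python
import numpy as np

def apply_noise_scaling(img, smoothed, predicted_noise, estimated_noise):
    """Rescale the noise component of a reconstruction using a noise-ratio scaling map.

    Hypothetical illustration: `smoothed` stands for a noise-free estimate of the image,
    and the scalar noise levels could equally be per-voxel maps.
    """
    scale = predicted_noise / (estimated_noise + 1e-8)  # target-to-actual noise ratio
    return smoothed + scale * (img - smoothed)          # keep structure, rescale the noise

img = np.tile([2.0, -2.0], 50)   # pure "noise" with std 2 around a zero background
smoothed = np.zeros_like(img)    # idealized noise-free estimate
out = apply_noise_scaling(img, smoothed, predicted_noise=1.0, estimated_noise=2.0)
```

Under this reading, an accurate simulation-based noise prediction is what lets the scaling enforce uniform noise across the volume.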
|
194
|
Radiomics in Cardiovascular Disease Imaging: from Pixels to the Heart of the Problem. CURRENT CARDIOVASCULAR IMAGING REPORTS 2022. [DOI: 10.1007/s12410-022-09563-z] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
Abstract
Purpose of Review
This review of the literature aims to present potential applications of radiomics in cardiovascular radiology and, in particular, in cardiac imaging.
Recent Findings
Radiomics and machine learning represent a technological innovation that may be used to extract and analyze quantitative features from medical images. They aid in detecting hidden patterns in medical data, possibly leading to new insights into the pathophysiology of different medical conditions. In the recent literature, radiomics and machine learning have been investigated for numerous potential applications in cardiovascular imaging. They have been proposed to improve image acquisition and reconstruction, to automate segmentation of anatomical structures, and to automate characterization of cardiologic diseases.
Summary
The number of applications for radiomics and machine learning continues to rise, even though methodological and implementation issues still limit their use in daily practice. In the long term, they may have a positive impact on patient management.
|
195
|
Park HS, Jeon K, Lee J, You SK. Denoising of pediatric low dose abdominal CT using deep learning based algorithm. PLoS One 2022; 17:e0260369. [PMID: 35061701 PMCID: PMC8782418 DOI: 10.1371/journal.pone.0260369] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2021] [Accepted: 11/08/2021] [Indexed: 11/25/2022] Open
Abstract
OBJECTIVES To evaluate standard dose-like computed tomography (CT) images generated by a deep learning method, trained using unpaired low-dose CT (LDCT) and standard-dose CT (SDCT) images. MATERIALS AND METHODS LDCT (80 kVp, 100 mAs, n = 83) and SDCT (120 kVp, 200 mAs, n = 42) images were divided into training (42 LDCT and 42 SDCT) and validation (41 LDCT) sets. A generative adversarial network framework was used to train unpaired datasets. The trained deep learning method generated virtual SDCT images (VIs) from the original LDCT images (OIs). To test the proposed method, LDCT images (80 kVp, 262 mAs, n = 33) were collected from another CT scanner using iterative reconstruction (IR). Image analyses were performed to evaluate the qualities of VIs in the validation set and to compare the performance of deep learning and IR in the test set. RESULTS The noise of the VIs was the lowest in both validation and test sets (all p<0.001). The mean CT number of the VIs for the portal vein and liver was lower than that of OIs in both validation and test sets (all p<0.001) and was similar to those of SDCT. The contrast-to-noise ratio of portal vein and the signal-to-noise ratio (SNR) of portal vein and liver of VIs were higher than those of SDCT (all p<0.05). The SNR of VIs in test sets was the highest among three images. CONCLUSION The deep learning method trained by unpaired datasets could reduce the noise of LDCT images and showed performance similar to that of the iterative reconstruction algorithm (SAFIRE). It can be applied to LDCT images of older CT scanners without IR.
Collapse
Affiliation(s)
- Hyoung Suk Park
- National Institute for Mathematical Sciences, Daejeon, Republic of Korea
| | - Kiwan Jeon
- National Institute for Mathematical Sciences, Daejeon, Republic of Korea
| | - JeongEun Lee
- Department of Radiology, Chungnam National University College of Medicine, Daejeon, Republic of Korea
- Department of Radiology, Chungnam National University Hospital, Daejeon, Republic of Korea
| | - Sun Kyoung You
- Department of Radiology, Chungnam National University College of Medicine, Daejeon, Republic of Korea
- Department of Radiology, Chungnam National University Hospital, Daejeon, Republic of Korea
| |
Collapse
|
196
|
Hirata K, Sugimori H, Fujima N, Toyonaga T, Kudo K. Artificial intelligence for nuclear medicine in oncology. Ann Nucl Med 2022; 36:123-132. [PMID: 35028877 DOI: 10.1007/s12149-021-01693-6] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2021] [Accepted: 11/07/2021] [Indexed: 12/12/2022]
Abstract
As in all other medical fields, artificial intelligence (AI) is increasingly being used in nuclear medicine for oncology. Many articles discuss AI from the viewpoint of nuclear medicine, but few focus on nuclear medicine from the viewpoint of AI. Nuclear medicine images are characterized by their low spatial resolution and quantitative nature. Notably, AI was used in nuclear medicine even before the emergence of deep learning. AI applications can be divided into three categories by purpose: (1) assisted interpretation, i.e., computer-aided detection (CADe) or computer-aided diagnosis (CADx); (2) additional insight, i.e., information beyond what the radiologist's eye can extract, such as predicting genes and prognosis from images, a task closely related to the field of radiomics/radiogenomics; and (3) augmented images, i.e., image generation tasks. Before AI can be put to practical use, harmonization between facilities and the explainability of black-box models need to be resolved.
Collapse
Affiliation(s)
- Kenji Hirata
- Department of Diagnostic Imaging, Hokkaido University Graduate School of Medicine, Kita 15, Nishi 7, Kita-Ku, Sapporo, Hokkaido, 060-8638, Japan
- Department of Nuclear Medicine, Hokkaido University Hospital, Sapporo, Japan
- Division of Medical AI Education and Research, Hokkaido University Graduate School of Medicine, Sapporo, Japan
| | | | - Noriyuki Fujima
- Department of Diagnostic Imaging, Hokkaido University Graduate School of Medicine, Kita 15, Nishi 7, Kita-Ku, Sapporo, Hokkaido, 060-8638, Japan
- Department of Diagnostic and Interventional Radiology, Hokkaido University Hospital, Sapporo, Japan
| | - Takuya Toyonaga
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
| | - Kohsuke Kudo
- Department of Diagnostic Imaging, Hokkaido University Graduate School of Medicine, Kita 15, Nishi 7, Kita-Ku, Sapporo, Hokkaido, 060-8638, Japan
- Division of Medical AI Education and Research, Hokkaido University Graduate School of Medicine, Sapporo, Japan
- Department of Diagnostic and Interventional Radiology, Hokkaido University Hospital, Sapporo, Japan
- Global Center for Biomedical Science and Engineering, Hokkaido University Faculty of Medicine, Sapporo, Japan
| |
Collapse
|
197
|
Matsubara K, Ibaraki M, Nemoto M, Watabe H, Kimura Y. A review on AI in PET imaging. Ann Nucl Med 2022; 36:133-143. [PMID: 35029818 DOI: 10.1007/s12149-021-01710-8] [Citation(s) in RCA: 27] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2021] [Accepted: 12/09/2021] [Indexed: 12/16/2022]
Abstract
Artificial intelligence (AI) has been applied to various medical imaging tasks, such as computer-aided diagnosis. In particular, deep learning techniques such as the convolutional neural network (CNN) and the generative adversarial network (GAN) have been extensively used for medical image generation, and image generation with deep learning has been investigated in studies using positron emission tomography (PET). This article reviews studies that applied deep learning techniques to image generation in PET. We categorized these studies into three themes: (1) recovering full PET data from noisy data by denoising with deep learning, (2) PET image reconstruction and attenuation correction with deep learning, and (3) PET image translation and synthesis with deep learning. We introduce recent studies in each of these categories. Finally, we discuss the limitations of applying deep learning techniques to PET image generation and future prospects for the field.
Collapse
Affiliation(s)
- Keisuke Matsubara
- Department of Radiology and Nuclear Medicine, Research Institute for Brain and Blood Vessels, Akita Cerebrospinal and Cardiovascular Center, Akita, Japan
| | - Masanobu Ibaraki
- Department of Radiology and Nuclear Medicine, Research Institute for Brain and Blood Vessels, Akita Cerebrospinal and Cardiovascular Center, Akita, Japan
| | - Mitsutaka Nemoto
- Faculty of Biology-Oriented Science and Technology, and Cyber Informatics Research Institute, Kindai University, Wakayama, Japan
| | - Hiroshi Watabe
- Cyclotron and Radioisotope Center (CYRIC), Tohoku University, Miyagi, Japan
| | - Yuichi Kimura
- Faculty of Biology-Oriented Science and Technology, and Cyber Informatics Research Institute, Kindai University, Wakayama, Japan
| |
Collapse
|
198
|
Artificial Intelligence in Diagnostic Radiology: Where Do We Stand, Challenges, and Opportunities. J Comput Assist Tomogr 2022; 46:78-90. [PMID: 35027520 DOI: 10.1097/rct.0000000000001247] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
ABSTRACT Artificial intelligence (AI) is the most revolutionary development in the health care industry of the current decade, with diagnostic imaging holding the greatest share in that development. Machine learning and deep learning (DL) are subclasses of AI that show breakthrough performance in image analysis and have become the state of the art in image classification and recognition. Machine learning relies on the extraction of important characteristic features from images, whereas DL uses neural networks to solve such problems with better performance. In this review, we discuss the current applications of machine learning and DL in the field of diagnostic radiology. Deep learning applications can be divided into medical imaging analysis and applications beyond analysis. In medical imaging analysis, deep convolutional neural networks are used for image classification, lesion detection, and segmentation. Recurrent neural networks are used to extract information from electronic medical records and to augment convolutional neural networks in image classification. Generative adversarial networks have been used to generate high-resolution computed tomography and magnetic resonance images and to map computed tomography images from the corresponding magnetic resonance imaging. Beyond image analysis, DL can be used for quality control, workflow organization, and reporting. In this article, we review the most current AI models used in medical imaging research, providing a brief explanation of the various models described in the literature within the past 5 years. Emphasis is placed on DL models, as they represent the state of the art in imaging analysis.
Collapse
|
199
|
Cui X, Guo Y, Zhang X, Shangguan H, Liu B, Wang A. Artifact-Assisted multi-level and multi-scale feature fusion attention network for low-dose CT denoising. JOURNAL OF X-RAY SCIENCE AND TECHNOLOGY 2022; 30:875-889. [PMID: 35694948 DOI: 10.3233/xst-221149] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
BACKGROUND AND OBJECTIVE Since low-dose computed tomography (LDCT) images typically have higher noise that may affect the accuracy of disease diagnosis, the objective of this study is to develop and evaluate a new artifact-assisted feature fusion attention (AAFFA) network to extract and reduce image artifacts and noise in LDCT images. METHODS In the AAFFA network, a feature fusion attention block is constructed for local multi-scale artifact feature extraction and progressive fusion from coarse to fine. A multi-level fusion architecture based on skip connections and attention modules is also introduced for artifact feature extraction. Specifically, long-range skip connections are used to enhance and fuse artifact features at different depth levels. The fused shallower features then enter channel attention for better extraction of artifact features, while the fused deeper features are sent into pixel attention to focus on artifact pixel information. In addition, an artifact channel is designed to provide rich artifact features and guide the extraction of noise and artifact features. The AAPM LDCT Challenge dataset is used to train and test the network. Performance is evaluated by both visual observation and quantitative metrics, including peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and visual information fidelity (VIF). RESULTS Using the AAFFA network improves the averaged PSNR/SSIM/VIF values of AAPM LDCT images from 43.4961/0.9595/0.3926 to 48.2513/0.9859/0.4589, respectively. CONCLUSIONS The proposed AAFFA network effectively reduces noise and artifacts while preserving object edges. Assessment of visual quality and quantitative indices demonstrates significant improvement compared with other image denoising methods.
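PSNR, one of the metrics reported above, is computed directly from the mean squared error between the denoised image and its reference; a minimal sketch, assuming images normalized to a known data range (the challenge's exact normalization may differ):

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, data_range: float) -> float:
    """Peak signal-to-noise ratio in dB: 10 * log10(data_range^2 / MSE)."""
    diff = reference.astype(np.float64) - test.astype(np.float64)
    mse = float(np.mean(diff ** 2))
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * float(np.log10(data_range ** 2 / mse))

# Toy example: a noisy 64x64 image scored against its clean reference.
rng = np.random.default_rng(1)
clean = rng.uniform(0.0, 1.0, size=(64, 64))
noisy = np.clip(clean + 0.05 * rng.standard_normal(clean.shape), 0.0, 1.0)
print(round(psnr(clean, noisy, data_range=1.0), 1))  # higher is better
```

A denoiser that raises PSNR on a held-out set, as reported above, has brought its outputs closer to the reference images in the mean-squared-error sense.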
Collapse
Affiliation(s)
- Xueying Cui
- School of Applied Science, Taiyuan University of Science and Technology, Taiyuan, China
| | - Yingting Guo
- School of Applied Science, Taiyuan University of Science and Technology, Taiyuan, China
| | - Xiong Zhang
- School of Electronic Information Engineering, Taiyuan University of Science and Technology, Taiyuan, China
| | - Hong Shangguan
- School of Electronic Information Engineering, Taiyuan University of Science and Technology, Taiyuan, China
| | - Bin Liu
- School of Applied Science, Taiyuan University of Science and Technology, Taiyuan, China
| | - Anhong Wang
- School of Electronic Information Engineering, Taiyuan University of Science and Technology, Taiyuan, China
| |
Collapse
|
200
|
Hybrid System: PET/CT. Nucl Med Mol Imaging 2022. [DOI: 10.1016/b978-0-12-822960-6.00103-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022] Open
|