51. Hanneman K, Playford D, Dey D, van Assen M, Mastrodicasa D, Cook TS, Gichoya JW, Williamson EE, Rubin GD. Value Creation Through Artificial Intelligence and Cardiovascular Imaging: A Scientific Statement From the American Heart Association. Circulation 2024; 149:e296-e311. [PMID: 38193315 DOI: 10.1161/cir.0000000000001202]
Abstract
Multiple applications for machine learning and artificial intelligence (AI) in cardiovascular imaging are being proposed and developed. However, the processes involved in implementing AI in cardiovascular imaging are highly diverse, varying by imaging modality, patient subtype, features to be extracted and analyzed, and clinical application. This article establishes a framework that defines value from an organizational perspective, followed by a value chain analysis to identify the activities in which AI might produce the greatest incremental value creation. The various perspectives that should be considered are highlighted, including those of clinicians, imagers, hospitals, patients, and payers. Integrating the perspectives of all health care stakeholders is critical for creating value and ensuring the successful deployment of AI tools in a real-world setting. Different AI tools are summarized, along with the unique aspects of AI applications to various cardiac imaging modalities, including cardiac computed tomography, magnetic resonance imaging, and positron emission tomography. AI is applicable and has the potential to add value to cardiovascular imaging at every step along the patient journey, from selecting the most appropriate test to optimizing image acquisition and analysis, interpreting the results for classification and diagnosis, and predicting the risk for major adverse cardiac events.
52. Li Q, Li R, Li S, Wang T, Cheng Y, Zhang S, Wu W, Zhao J, Qiang Y, Wang L. Unpaired low-dose computed tomography image denoising using a progressive cyclical convolutional neural network. Med Phys 2024; 51:1289-1312. [PMID: 36841936 DOI: 10.1002/mp.16331]
Abstract
BACKGROUND Reducing the radiation dose from computed tomography (CT) can significantly reduce the radiation risk to patients. However, low-dose CT (LDCT) suffers from severe and complex noise interference that affects subsequent diagnosis and analysis. Recently, deep learning-based methods have shown superior performance in LDCT image-denoising tasks. However, most methods require many normal-dose and low-dose CT image pairs, which are difficult to obtain in clinical applications. Unsupervised methods, on the other hand, are more general. PURPOSE GAN-based deep learning methods have been widely used for unsupervised LDCT denoising, but their additional memory requirements hinder further clinical application. To this end, we propose a simpler multi-stage denoising framework trained using unpaired data, the progressive cyclical convolutional neural network (PCCNN), which removes noise from CT images in a latent space. METHODS The proposed PCCNN introduces a noise transfer model that transfers noise from LDCT to normal-dose CT (NDCT) images, so that denoised and noisy CT images can be generated from unpaired CT data. The denoising framework also contains a progressive module that effectively removes noise through multi-stage wavelet transforms without sacrificing high-frequency components such as edges and details. RESULTS We performed quantitative and qualitative evaluations against seven LDCT denoising algorithms, together with ablation experiments on each network module and loss function. On the AAPM dataset, compared with the contrasted unsupervised methods, our denoising framework shows excellent performance, increasing the peak signal-to-noise ratio (PSNR) from 29.622 to 30.671 and the structural similarity index (SSIM) from 0.8544 to 0.9199. The PCCNN denoising results were near-optimal and statistically significant. In the qualitative comparison, PCCNN introduces no additional blurring or artifacts; the resulting images have higher resolution, preserve details completely, and have an overall structural texture closer to NDCT. In visual assessments, PCCNN achieves a relatively balanced result in noise suppression, contrast retention, and lesion discrimination. CONCLUSIONS Extensive experimental validation shows that our scheme achieves reconstruction results comparable to supervised learning methods and performs well in image quality and medical diagnostic acceptability.
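The PSNR and SSIM figures reported in this abstract can be reproduced for any image pair with a short NumPy sketch. This uses a single-window (global) SSIM rather than the sliding-window variant most papers report, so absolute values may differ slightly; the image sizes and noise level below are purely illustrative:

```python
import numpy as np

def psnr(ref, img, data_range=1.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def global_ssim(ref, img, data_range=1.0):
    """SSIM computed over the whole image in one window (no sliding window)."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    x, y = ref.astype(np.float64), img.astype(np.float64)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
clean = rng.random((64, 64))                     # stand-in "NDCT" image in [0, 1]
noisy = np.clip(clean + 0.05 * rng.standard_normal(clean.shape), 0, 1)
print(psnr(clean, noisy), global_ssim(clean, noisy))
```

A denoiser would be judged by how much it raises both numbers toward the clean reference, exactly as in the AAPM comparison above.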
Affiliation(s)
- Qing Li
- College of Information and Computer, Taiyuan University of Technology, Taiyuan, China
- Runrui Li
- College of Information and Computer, Taiyuan University of Technology, Taiyuan, China
- Saize Li
- College of Information and Computer, Taiyuan University of Technology, Taiyuan, China
- Tao Wang
- College of Information and Computer, Taiyuan University of Technology, Taiyuan, China
- Yubin Cheng
- College of Information and Computer, Taiyuan University of Technology, Taiyuan, China
- Shuming Zhang
- College of Information and Computer, Taiyuan University of Technology, Taiyuan, China
- Wei Wu
- Department of Clinical Laboratory, Affiliated People's Hospital of Shanxi Medical University, Shanxi Provincial People's Hospital, Taiyuan, China
- Juanjuan Zhao
- College of Information and Computer, Taiyuan University of Technology, Taiyuan, China
- School of Information Engineering, Jinzhong College of Information, Jinzhong, China
- Yan Qiang
- College of Information and Computer, Taiyuan University of Technology, Taiyuan, China
- Long Wang
- School of Information Engineering, Jinzhong College of Information, Jinzhong, China
53. Bousse A, Kandarpa VSS, Rit S, Perelli A, Li M, Wang G, Zhou J, Wang G. Systematic Review on Learning-based Spectral CT. IEEE Transactions on Radiation and Plasma Medical Sciences 2024; 8:113-137. [PMID: 38476981 PMCID: PMC10927029 DOI: 10.1109/trpms.2023.3314131]
Abstract
Spectral computed tomography (CT) has recently emerged as an advanced version of medical CT and significantly improves conventional (single-energy) CT. Spectral CT has two main forms: dual-energy computed tomography (DECT) and photon-counting computed tomography (PCCT), which offer image improvement, material decomposition, and feature quantification relative to conventional CT. However, the inherent challenges of spectral CT, evidenced by data and image artifacts, remain a bottleneck for clinical applications. To address these problems, machine learning techniques have been widely applied to spectral CT. In this review, we present the state-of-the-art data-driven techniques for spectral CT.
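The material-decomposition capability mentioned above reduces, in its simplest idealized form, to solving a small linear system per pixel: the attenuation measured in each energy bin is modeled as a weighted sum of basis-material attenuations. The coefficients below are hypothetical stand-ins, not measured values, and real decomposition must cope with noise and spectral overlap (which is where the learning-based methods surveyed here come in):

```python
import numpy as np

# Hypothetical attenuation coefficients of two basis materials (water, iodine)
# in a low- and a high-energy bin; actual values depend on spectra and detector.
M = np.array([[0.25, 4.9],    # low-energy bin:  [water, iodine]
              [0.19, 2.1]])   # high-energy bin: [water, iodine]

def decompose(mu_low, mu_high):
    """Per-pixel two-material decomposition: solve M @ [water, iodine] = mu."""
    return np.linalg.solve(M, np.array([mu_low, mu_high]))

# Forward-simulate a pixel containing both materials, then recover the fractions.
w_true, i_true = 0.8, 0.02
mu_low, mu_high = M @ np.array([w_true, i_true])
w_est, i_est = decompose(mu_low, mu_high)
```

In the noiseless case the inversion is exact; with realistic noise the system becomes ill-conditioned, motivating the regularized and data-driven approaches this review covers.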
Affiliation(s)
- Alexandre Bousse
- LaTIM, Inserm UMR 1101, Université de Bretagne Occidentale, 29238 Brest, France
- Simon Rit
- Univ Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, UJM-Saint Étienne, CNRS, Inserm, CREATIS UMR 5220, U1294, F-69373, Lyon, France
- Alessandro Perelli
- Department of Biomedical Engineering, School of Science and Engineering, University of Dundee, DD1 4HN, UK
- Mengzhou Li
- Biomedical Imaging Center, Rensselaer Polytechnic Institute, Troy, New York, USA
- Guobao Wang
- Department of Radiology, University of California Davis Health, Sacramento, USA
- Jian Zhou
- CTIQ, Canon Medical Research USA, Inc., Vernon Hills, 60061, USA
- Ge Wang
- Biomedical Imaging Center, Rensselaer Polytechnic Institute, Troy, New York, USA
54. Sandeep B, Liu X, Huang X, Wang X, Mao L, Xiao Z. Feasibility of artificial intelligence its current status, clinical applications, and future direction in cardiovascular disease. Curr Probl Cardiol 2024; 49:102349. [PMID: 38103818 DOI: 10.1016/j.cpcardiol.2023.102349]
Abstract
In routine clinical practice, the diagnosis and treatment of cardiovascular disease (CVD) rely on data in a variety of formats, comprising invasive angiography, laboratory data, non-invasive imaging diagnostics, and patient history. Artificial intelligence (AI) is a field of computer science that aims to mimic human thought processes, learning capacity, and knowledge storage. In cardiovascular medicine, AI algorithms have been used to discover novel genotypes and phenotypes in established diseases, enhance patient care, improve cost-effectiveness, and lower readmission and mortality rates. AI will lead to a paradigm shift toward precision cardiovascular medicine in the near future. The promise of AI in cardiovascular medicine is immense; however, failure to recognize the challenges, or ignorance of them, may overshadow its potential clinical impact. AI can facilitate every stage of the cardiac imaging process, from acquisition and reconstruction to segmentation, measurement, interpretation, and the subsequent clinical pathways. Along with new possibilities, new threats arise; acknowledging and understanding them is as important as understanding the machine learning (ML) methodology itself. Therefore, attention is also paid to current opinions and guidelines regarding the validation and safety of AI. This paper provides an outline for clinicians of the relevant aspects of AI and machine learning, a selection of applications and methods in cardiology to date, and identifies how cardiovascular medicine could incorporate AI in the future. With progress continuing in this emerging technology, the impact on cardiovascular medicine is highlighted to provide insight for the practicing clinician and to identify potential patient benefits.
Affiliation(s)
- Bhushan Sandeep
- Department of Cardio-Thoracic Surgery, Chengdu Second People's Hospital, Chengdu, Sichuan 610017, China
- Xian Liu
- Department of Cardio-Thoracic Surgery, Chengdu Second People's Hospital, Chengdu, Sichuan 610017, China
- Xin Huang
- Department of Anesthesiology, West China Hospital of Medicine, Sichuan University, Chengdu, Sichuan 610017, China
- Xiaowei Wang
- Department of Cardio-Thoracic Surgery, Chengdu Second People's Hospital, Chengdu, Sichuan 610017, China
- Long Mao
- Department of Cardio-Thoracic Surgery, Chengdu Second People's Hospital, Chengdu, Sichuan 610017, China
- Zongwei Xiao
- Department of Cardio-Thoracic Surgery, Chengdu Second People's Hospital, Chengdu, Sichuan 610017, China
55. Cobanaj M, Corti C, Dee EC, McCullum L, Boldrini L, Schlam I, Tolaney SM, Celi LA, Curigliano G, Criscitiello C. Advancing equitable and personalized cancer care: Novel applications and priorities of artificial intelligence for fairness and inclusivity in the patient care workflow. Eur J Cancer 2024; 198:113504. [PMID: 38141549 PMCID: PMC11362966 DOI: 10.1016/j.ejca.2023.113504]
Abstract
Patient care workflows are highly multimodal and intertwined: the intersection of data outputs provided from different disciplines and in different formats remains one of the main challenges of modern oncology. Artificial Intelligence (AI) has the potential to revolutionize the current clinical practice of oncology owing to advancements in digitalization, database expansion, computational technologies, and algorithmic innovations that facilitate discernment of complex relationships in multimodal data. Within oncology, radiation therapy (RT) represents an increasingly complex working procedure, involving many labor-intensive and operator-dependent tasks. In this context, AI has gained momentum as a powerful tool to standardize treatment performance and reduce inter-observer variability in a time-efficient manner. This review explores the hurdles associated with the development, implementation, and maintenance of AI platforms and highlights current measures in place to address them. In examining AI's role in oncology workflows, we underscore that a thorough and critical consideration of these challenges is the only way to ensure equitable and unbiased care delivery, ultimately serving patients' survival and quality of life.
Affiliation(s)
- Marisa Cobanaj
- National Center for Radiation Research in Oncology, OncoRay, Helmholtz-Zentrum Dresden-Rossendorf, Dresden, Germany
- Chiara Corti
- Breast Oncology Program, Dana-Farber Brigham Cancer Center, Boston, MA, USA; Harvard Medical School, Boston, MA, USA; Division of New Drugs and Early Drug Development for Innovative Therapies, European Institute of Oncology, IRCCS, Milan, Italy; Department of Oncology and Hematology-Oncology (DIPO), University of Milan, Milan, Italy
- Edward C Dee
- Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Lucas McCullum
- Department of Radiation Oncology, MD Anderson Cancer Center, Houston, TX, USA
- Laura Boldrini
- Division of New Drugs and Early Drug Development for Innovative Therapies, European Institute of Oncology, IRCCS, Milan, Italy; Department of Oncology and Hematology-Oncology (DIPO), University of Milan, Milan, Italy
- Ilana Schlam
- Department of Hematology and Oncology, Tufts Medical Center, Boston, MA, USA; Harvard T.H. Chan School of Public Health, Boston, MA, USA
- Sara M Tolaney
- Breast Oncology Program, Dana-Farber Brigham Cancer Center, Boston, MA, USA; Harvard Medical School, Boston, MA, USA; Department of Medical Oncology, Dana-Farber Cancer Institute, Boston, MA, USA
- Leo A Celi
- Department of Medicine, Beth Israel Deaconess Medical Center, Boston, MA, USA; Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, MA, USA; Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, MA, USA
- Giuseppe Curigliano
- Division of New Drugs and Early Drug Development for Innovative Therapies, European Institute of Oncology, IRCCS, Milan, Italy; Department of Oncology and Hematology-Oncology (DIPO), University of Milan, Milan, Italy
- Carmen Criscitiello
- Division of New Drugs and Early Drug Development for Innovative Therapies, European Institute of Oncology, IRCCS, Milan, Italy; Department of Oncology and Hematology-Oncology (DIPO), University of Milan, Milan, Italy
56. Tan XI, Liu X, Xiang K, Wang J, Tan S. Deep Filtered Back Projection for CT Reconstruction. IEEE Access 2024; 12:20962-20972. [PMID: 39211346 PMCID: PMC11361368 DOI: 10.1109/access.2024.3357355]
Abstract
Filtered back projection (FBP) is a classic analytical algorithm for computed tomography (CT) reconstruction with high computational efficiency. However, images reconstructed by FBP often suffer from excessive noise and artifacts. The original FBP algorithm uses a window function to smooth signals and linear interpolation to estimate projection values at unsampled locations. In this study, we propose a novel framework named DeepFBP in which an optimized filter and an optimized nonlinear interpolation operator are learned with neural networks. Specifically, the learned filter can be considered the product of an optimized window function and the ramp filter, and the learned interpolation can be considered an optimized way to utilize projection information from nearby locations through nonlinear combination. The proposed method retains the high computational efficiency of the original FBP and achieves much better reconstruction quality at different noise levels. It also outperforms the TV-based statistical iterative algorithm, with computation time reduced by about two orders of magnitude, as well as state-of-the-art post-processing deep learning methods that have deeper and more complicated network structures.
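The windowed ramp filtering that DeepFBP replaces with a learned filter can be sketched in a few lines of NumPy. Parallel-beam geometry is assumed, and the Hann window and test signal are illustrative stand-ins; the paper's contribution is precisely to learn this filter rather than fix it by hand:

```python
import numpy as np

def filtered_projection(proj, window="hann"):
    """Filter one parallel-beam projection row with a windowed ramp filter,
    i.e. the classic FBP filtering step that DeepFBP learns instead."""
    n = proj.shape[-1]
    freqs = np.fft.fftfreq(n)            # normalized frequencies in [-0.5, 0.5)
    ramp = np.abs(freqs)                 # ideal ramp filter |f|
    if window == "hann":
        # Hann taper: 1 at DC, 0 at the Nyquist frequency, suppressing noise.
        ramp *= 0.5 * (1.0 + np.cos(2.0 * np.pi * freqs))
    return np.real(np.fft.ifft(np.fft.fft(proj) * ramp))

row = np.ones(128)                       # constant projection row
out = filtered_projection(row)           # ramp zeroes the DC term entirely
```

Back-projecting such filtered rows over all view angles completes the reconstruction; DeepFBP additionally learns the interpolation used during back projection.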
Affiliation(s)
- Xi Tan
- College of Electrical and Information Engineering, Hunan University of Technology, Zhuzhou 80305, China
- Xuan Liu
- School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan 430074, China
- Kai Xiang
- School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan 430074, China
- Jing Wang
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Shan Tan
- School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan 430074, China
57. Rudroff T. Artificial Intelligence's Transformative Role in Illuminating Brain Function in Long COVID Patients Using PET/FDG. Brain Sci 2024; 14:73. [PMID: 38248288 PMCID: PMC10813353 DOI: 10.3390/brainsci14010073]
Abstract
Cutting-edge brain imaging techniques, particularly positron emission tomography with Fluorodeoxyglucose (PET/FDG), are being used in conjunction with Artificial Intelligence (AI) to shed light on the neurological symptoms associated with Long COVID. AI, particularly deep learning algorithms such as convolutional neural networks (CNN) and generative adversarial networks (GAN), plays a transformative role in analyzing PET scans, identifying subtle metabolic changes, and offering a more comprehensive understanding of Long COVID's impact on the brain. It aids in early detection of abnormal brain metabolism patterns, enabling personalized treatment plans. Moreover, AI assists in predicting the progression of neurological symptoms, refining patient care, and accelerating Long COVID research. It can uncover new insights, identify biomarkers, and streamline drug discovery. Additionally, the application of AI extends to non-invasive brain stimulation techniques, such as transcranial direct current stimulation (tDCS), which have shown promise in alleviating Long COVID symptoms. AI can optimize treatment protocols by analyzing neuroimaging data, predicting individual responses, and automating adjustments in real time. While the potential benefits are vast, ethical considerations and data privacy must be rigorously addressed. The synergy of AI and PET scans in Long COVID research offers hope in understanding and mitigating the complexities of this condition.
Affiliation(s)
- Thorsten Rudroff
- Department of Health and Human Physiology, University of Iowa, Iowa City, IA 52242, USA; Tel.: +1-(319)-467-0363; Fax: +1-(319)-355-6669
- Department of Neurology, University of Iowa Hospitals and Clinics, Iowa City, IA 52242, USA
58. Li Z, Liu Y, Zhang P, Lu J, Gui Z. Decomposition iteration strategy for low-dose CT denoising. J Xray Sci Technol 2024; 32:493-512. [PMID: 38189738 DOI: 10.3233/xst-230272]
Abstract
In the medical field, computed tomography (CT) is a commonly used examination method, but the radiation generated increases the risk of illness in patients. Therefore, low-dose scanning schemes have attracted attention, in which noise reduction is essential. We propose a purposeful and interpretable decomposition iterative network (DISN) for low-dose CT denoising. This method aims to make the network design interpretable and improve the fidelity of details, rather than blindly designing or using deep CNN architecture. The experiment is trained and tested on multiple data sets. The results show that the DISN method can restore the low-dose CT image structure and improve the diagnostic performance when the image details are limited. Compared with other algorithms, DISN has better quantitative and visual performance, and has potential clinical application prospects.
Affiliation(s)
- Zhiyuan Li
- North University of China, Taiyuan, China
- State Key Laboratory of Dynamic Testing Technology, North University of China, Taiyuan, China
- Yi Liu
- North University of China, Taiyuan, China
- State Key Laboratory of Dynamic Testing Technology, North University of China, Taiyuan, China
- Pengcheng Zhang
- North University of China, Taiyuan, China
- State Key Laboratory of Dynamic Testing Technology, North University of China, Taiyuan, China
- Jing Lu
- North University of China, Taiyuan, China
- State Key Laboratory of Dynamic Testing Technology, North University of China, Taiyuan, China
- Zhiguo Gui
- North University of China, Taiyuan, China
- State Key Laboratory of Dynamic Testing Technology, North University of China, Taiyuan, China
59. Hsieh J. Synthetization of high-dose images using low-dose CT scans. Med Phys 2024; 51:113-125. [PMID: 37975625 DOI: 10.1002/mp.16833]
Abstract
BACKGROUND Radiation dose reduction has been the focus of many research activities in x-ray CT. Various approaches have been taken to minimize the dose to patients, ranging from the optimization of clinical protocols and refinement of scanner hardware design to the development of advanced reconstruction algorithms. Although significant progress has been made, more advancements in this area are needed to minimize the radiation risks to patients. PURPOSE Reconstruction algorithm-based dose reduction approaches focus mainly on the suppression of noise in the reconstructed images while preserving detailed anatomical structures. Such an approach effectively produces synthesized high-dose images (SHD) from data acquired with low-dose scans. A representative example is model-based iterative reconstruction (MBIR). Despite its widespread deployment, its full adoption in a clinical environment is often limited by an undesirable image texture. Recent studies have shown that deep learning image reconstruction (DLIR) can overcome this shortcoming. However, the limited availability of high-quality clinical images for training and validation is often the bottleneck for its development. In this paper, we propose a novel approach to generate SHD with existing low-dose clinical datasets that overcomes both the noise-texture issue and the data-availability issue. METHODS Our approach is based on the observation that noise in the image can be effectively reduced by performing image processing orthogonal to the imaging plane. This process essentially creates an equivalent thick-slice image (TSI), and the characteristics of the TSI depend on the nature of the image processing. An advantage of this approach is its potential to reduce the impact on noise texture. The resulting image, however, is likely corrupted by anatomical structural degradation due to partial volume effects.
Careful examination has shown that the differential signal between the original and the processed image contains sufficient information to identify regions where anatomical structures are modified. The differential signal, unfortunately, contains significant noise, which has to be removed. The noise removal can be accomplished by performing iterative noise reduction that preserves structural information. The processed differential signal is subsequently subtracted from the TSI to arrive at the SHD. RESULTS The algorithm was evaluated extensively with phantom and clinical datasets. For better visual inspection, difference images between the original and the SHD were generated and carefully examined; negligible residual structure could be observed. In addition to the qualitative inspection, quantitative analyses were performed on clinical images in terms of CT number consistency and noise reduction characteristics. Results indicate that no CT number bias is introduced by the proposed algorithm. In addition, noise reduction capability is consistent across different patient anatomical regions. Further, simulated water phantom scans were utilized in the generation of the noise power spectrum (NPS) to demonstrate the preservation of the noise texture. CONCLUSIONS We present a method to generate SHD datasets from regularly acquired low-dose CT scans. Images produced with the proposed approach exhibit excellent noise reduction with the desired noise texture. Extensive clinical and phantom studies have demonstrated the efficacy and robustness of our approach. Potential limitations of the current implementation are discussed and further research topics are outlined.
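The thick-slice/differential decomposition described in METHODS can be sketched with NumPy. A plain moving average along z stands in for the paper's orthogonal image processing, and the iterative denoising of the differential signal is omitted; volume size, kernel width, and the pure-noise phantom are all illustrative:

```python
import numpy as np

def thick_slice_decompose(vol, kernel=5):
    """Form a thick-slice image (TSI) by averaging `kernel` neighboring slices
    along z (orthogonal to the imaging plane), and return the differential
    signal vol - TSI, which mixes noise with partial-volume structure change."""
    pad = kernel // 2
    padded = np.pad(vol, ((pad, pad), (0, 0), (0, 0)), mode="edge")
    # Moving average over `kernel` slices centered on each z position.
    tsi = np.stack([padded[z:z + kernel].mean(axis=0)
                    for z in range(vol.shape[0])])
    differential = vol - tsi
    return tsi, differential

rng = np.random.default_rng(1)
vol = rng.standard_normal((20, 32, 32))   # pure-noise stand-in for a CT volume
tsi, diff = thick_slice_decompose(vol)    # TSI has visibly lower noise
```

In the paper's full method, the differential is iteratively denoised to retain only structural change before being subtracted from the TSI to form the SHD.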
Affiliation(s)
- Jiang Hsieh
- Independent Consultant, Brookfield, Wisconsin, USA
60. Kang HJ, Lee JM, Park SJ, Lee SM, Joo I, Yoon JH. Image Quality Improvement of Low-dose Abdominal CT using Deep Learning Image Reconstruction Compared with the Second Generation Iterative Reconstruction. Curr Med Imaging 2024; 20:e250523217310. [PMID: 37231764 DOI: 10.2174/1573405620666230525104809]
Abstract
BACKGROUND Whether deep learning-based CT reconstruction can improve lesion conspicuity on abdominal CT when the radiation dose is reduced is controversial. OBJECTIVE To determine whether deep-learning image reconstruction (DLIR) can provide better image quality and reduce radiation dose in contrast-enhanced abdominal CT compared with the second generation of adaptive statistical iterative reconstruction (ASiR-V). METHODS In this retrospective study, 102 patients were included who underwent abdominal CT using a DLIR-equipped 256-row scanner and routine CT with the same protocol on the same vendor's 64-row scanner within four months. The CT data from the 256-row scanner were reconstructed into ASiR-V with three blending levels (AV30, AV60, and AV100) and DLIR images with three strength levels (DLIR-L, DLIR-M, and DLIR-H). The routine CT data were reconstructed into AV30, AV60, and AV100. The contrast-to-noise ratio (CNR) of the liver, overall image quality, subjective noise, lesion conspicuity, and plasticity in the portal venous phase (PVP) of ASiR-V from both scanners and DLIR were compared. RESULTS The mean effective radiation dose of the PVP of the 256-row scanner was significantly lower than that of the routine CT (6.3±2.0 mSv for routine CT vs. 2.4±0.6 mSv; p < 0.001). The mean CNR, image quality, subjective noise scores, and lesion conspicuity of ASiR-V images from the 256-row scanner were significantly lower than those of ASiR-V images at the same blending factor from routine CT, but improved significantly with the DLIR algorithms. DLIR-H showed higher CNR, better image quality, and better subjective noise scores than AV30 from routine CT, whereas plasticity was significantly better for AV30. CONCLUSION DLIR can be used to improve image quality and reduce radiation dose in abdominal CT compared with ASiR-V.
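The contrast-to-noise ratio compared throughout this study can be computed from two regions of interest. The abstract does not state the exact formula used, so the definition below (ROI mean difference over background standard deviation) and the HU values are one common, illustrative choice:

```python
import numpy as np

def cnr(roi_lesion, roi_background):
    """Contrast-to-noise ratio as commonly defined in CT image-quality work:
    absolute mean difference of two ROIs over the background standard deviation.
    Other variants (e.g. pooled-noise denominators) exist."""
    return abs(roi_lesion.mean() - roi_background.mean()) / roi_background.std()

rng = np.random.default_rng(2)
liver = 60.0 + 5.0 * rng.standard_normal(1000)    # hypothetical liver HU samples
lesion = 90.0 + 5.0 * rng.standard_normal(1000)   # hypothetical enhancing lesion
value = cnr(lesion, liver)
```

A reconstruction that lowers the background noise (the denominator) while preserving the HU difference raises CNR, which is why DLIR strength levels are compared this way.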
Affiliation(s)
- Hyo-Jin Kang
- Department of Radiology, Seoul National University Hospital, Seoul, Korea
- Department of Radiology, Seoul National University College of Medicine, Seoul, Korea
- Jeong Min Lee
- Department of Radiology, Seoul National University Hospital, Seoul, Korea
- Department of Radiology, Seoul National University College of Medicine, Seoul, Korea
- Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul, Korea
- Sae Jin Park
- Department of Radiology, G&E alphadom medical center, Seongnam, Korea
- Sang Min Lee
- Department of Radiology, Cha Gangnam Medical Center, Seoul, Korea
- Ijin Joo
- Department of Radiology, Seoul National University Hospital, Seoul, Korea
- Department of Radiology, Seoul National University College of Medicine, Seoul, Korea
- Jeong Hee Yoon
- Department of Radiology, Seoul National University Hospital, Seoul, Korea
- Department of Radiology, Seoul National University College of Medicine, Seoul, Korea
61. Lin Z, Lei C, Yang L. Modern Image-Guided Surgery: A Narrative Review of Medical Image Processing and Visualization. Sensors (Basel) 2023; 23:9872. [PMID: 38139718 PMCID: PMC10748263 DOI: 10.3390/s23249872]
Abstract
Medical image analysis forms the basis of image-guided surgery (IGS) and many of its fundamental tasks. Driven by the growing number of medical imaging modalities, the medical imaging research community has developed methods and achieved functionality breakthroughs. However, with the overwhelming pool of information in the literature, it has become increasingly challenging for researchers to extract context-relevant information for specific applications, especially when many widely used methods exist in a variety of versions optimized for their respective application domains. Further equipped with sophisticated three-dimensional (3D) medical image visualization and digital reality technology, medical experts could improve their performance in IGS severalfold. The goal of this narrative review is to organize the key components of IGS in the aspects of medical image processing and visualization with new perspectives and insights. The literature search was conducted using mainstream academic search engines with a combination of keywords relevant to the field up until mid-2022. This survey systematically summarizes the basic, mainstream, and state-of-the-art medical image processing methods as well as how visualization technologies such as augmented/mixed/virtual reality (AR/MR/VR) are enhancing performance in IGS. Further, we hope that this survey will shed some light on the future of IGS in the face of challenges and opportunities for the research directions of medical image processing and visualization.
Collapse
Affiliation(s)
- Zhefan Lin: School of Mechanical Engineering, Zhejiang University, Hangzhou 310030, China; ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
- Chen Lei: ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
- Liangjing Yang: School of Mechanical Engineering, Zhejiang University, Hangzhou 310030, China; ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
62
Gao M, Fessler JA, Chan HP. Model-based deep CNN-regularized reconstruction for digital breast tomosynthesis with a task-based CNN image assessment approach. Phys Med Biol 2023; 68:245024. [PMID: 37988758] [PMCID: PMC10719554] [DOI: 10.1088/1361-6560/ad0eb4]
Abstract
Objective. Digital breast tomosynthesis (DBT) is a quasi-three-dimensional breast imaging modality that improves breast cancer screening and diagnosis because it reduces fibroglandular tissue overlap compared with 2D mammography. However, DBT suffers from noise and blur problems that can lower the detectability of subtle signs of cancer such as microcalcifications (MCs). Our goal is to improve the image quality of DBT in terms of image noise and MC conspicuity. Approach. We proposed a model-based deep convolutional neural network (DCNN) regularized reconstruction (MDR) method for DBT. It combines a model-based iterative reconstruction (MBIR) method, which models the detector blur and correlated noise of the DBT system, with a learning-based DCNN denoiser using the regularization-by-denoising framework. To facilitate task-based image quality assessment, we also proposed two DCNN tools for image evaluation: a noise estimator (CNN-NE) trained to estimate the root-mean-square (RMS) noise of the images, and an MC classifier (CNN-MC) serving as a DCNN model observer to evaluate the detectability of clustered MCs in human-subject DBTs. Main results. We demonstrated the efficacy of CNN-NE and CNN-MC on a set of physical phantom DBTs. Among the reconstruction methods studied, MDR achieved low RMS noise and the highest detection area under the receiver operating characteristic curve (AUC), as ranked by CNN-NE and CNN-MC on an independent test set of human-subject DBTs. Significance. CNN-NE and CNN-MC may serve as cost-effective surrogates for human observers, providing task-specific metrics for image quality comparisons. The proposed reconstruction method shows the promise of combining physics-based MBIR and learning-based DCNNs for DBT image reconstruction, which may ultimately enable lower dose and higher sensitivity and specificity for MC detection in breast cancer screening and diagnosis.
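The MDR method combines a physics-based data-fidelity term with a learned denoiser through the regularization-by-denoising (RED) framework. As a rough illustration of that idea only (not the authors' implementation), the following 1-D toy sketch runs a RED-style gradient iteration in which a simple moving-average filter stands in for the DCNN denoiser and an identity matrix stands in for the DBT forward model:

```python
import numpy as np

def smooth_denoiser(x, k=5):
    """Stand-in for a learned denoiser: a simple moving-average filter."""
    kernel = np.ones(k) / k
    return np.convolve(x, kernel, mode="same")

def red_reconstruct(y, A, lam=1.0, eta=0.1, iters=300):
    """Toy RED iteration: x <- x - eta * (A^T(Ax - y) + lam * (x - D(x)))."""
    x = A.T @ y  # crude initialization from the measurements
    for _ in range(iters):
        grad_fidelity = A.T @ (A @ x - y)          # physics-based data term
        grad_prior = lam * (x - smooth_denoiser(x))  # RED regularizer gradient
        x = x - eta * (grad_fidelity + grad_prior)
    return x

rng = np.random.default_rng(0)
n = 64
truth = np.zeros(n); truth[20:40] = 1.0            # piecewise-constant "object"
A = np.eye(n)                                      # identity forward model (pure denoising)
y = A @ truth + 0.5 * rng.standard_normal(n)       # noisy measurement
x_hat = red_reconstruct(y, A)
print("noisy MSE:", np.mean((y - truth) ** 2), "RED MSE:", np.mean((x_hat - truth) ** 2))
```

In the paper the denoiser is a trained DCNN and the forward model includes detector blur and correlated noise; both are simplified here so the iteration structure is visible.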
Affiliation(s)
- Mingjie Gao: Department of Radiology and Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI 48109, United States of America
- Jeffrey A Fessler: Department of Radiology and Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI 48109, United States of America
- Heang-Ping Chan: Department of Radiology, University of Michigan, Ann Arbor, MI 48109, United States of America
63
Im JY, Halliburton SS, Mei K, Perkins AE, Wong E, Roshkovan L, Sandvold OF, Liu LP, Gang GJ, Noël PB. Patient-derived PixelPrint phantoms for evaluating clinical imaging performance of a deep learning CT reconstruction algorithm. medRxiv [Preprint] 2023:2023.12.07.23299625. [PMID: 38106064] [PMCID: PMC10723564] [DOI: 10.1101/2023.12.07.23299625]
Abstract
Objective. Deep learning reconstruction (DLR) algorithms exhibit object-dependent resolution and noise performance; traditional geometric CT phantoms therefore cannot fully capture the clinical imaging performance of DLR. This study uses a patient-derived, 3D-printed PixelPrint lung phantom to evaluate a commercial DLR algorithm across a wide range of radiation dose levels. Approach. The lung phantom is based on a patient chest CT scan containing ground-glass opacities and was fabricated using PixelPrint 3D-printing technology. The phantom was placed inside two different-sized extension rings to mimic small and medium-sized patients and was scanned on a conventional CT scanner at exposures between 0.5 and 20 mGy. Each scan was reconstructed using filtered back projection (FBP), iterative reconstruction, and DLR at five levels of denoising. Image noise, contrast-to-noise ratio (CNR), root-mean-squared error (RMSE), structural similarity index (SSIM), and multi-scale SSIM (MS-SSIM) were calculated for each image. Main results. DLR outperformed FBP and iterative reconstruction on all measured metrics in both phantom sizes, with better performance at more aggressive denoising levels. DLR was estimated to reduce dose by 25-83% in the small phantom and by 50-83% in the medium phantom without decreasing image quality for any of the metrics measured in this study; these estimates are more conservative than those obtained when considering only noise and CNR with a non-anatomical physics phantom. Significance. DLR can produce diagnostic image quality at up to 83% lower radiation dose, which can improve the clinical utility and viability of lower dose CT scans. Furthermore, the PixelPrint phantom offers a more realistic testing environment with more realistic tissue structures than traditional CT phantoms, allowing structure-based image quality evaluation beyond noise- and contrast-based assessments.
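The phantom study scores each reconstruction by image noise, CNR, and RMSE, among other metrics. Below is a minimal numpy sketch of how those three metrics are commonly computed from regions of interest; the synthetic image and ROI positions are stand-ins, since the paper's data and exact ROI definitions are not reproduced here:

```python
import numpy as np

def roi_noise(img, roi):
    """Image noise: standard deviation inside a uniform background ROI."""
    return img[roi].std()

def cnr(img, signal_roi, background_roi):
    """Contrast-to-noise ratio between a lesion ROI and a background ROI."""
    contrast = img[signal_roi].mean() - img[background_roi].mean()
    return abs(contrast) / img[background_roi].std()

def rmse(img, reference):
    """Root-mean-squared error against a reference reconstruction."""
    return np.sqrt(np.mean((img - reference) ** 2))

rng = np.random.default_rng(42)
ref = np.zeros((64, 64)); ref[24:40, 24:40] = 100.0    # synthetic "lesion" on background
img = ref + 5.0 * rng.standard_normal(ref.shape)        # noisy reconstruction of it
lesion = (slice(28, 36), slice(28, 36))                 # ROI inside the lesion
background = (slice(0, 16), slice(0, 16))               # ROI in uniform background
print("noise:", roi_noise(img, background),
      "CNR:", cnr(img, lesion, background),
      "RMSE:", rmse(img, ref))
```

SSIM and MS-SSIM additionally compare local luminance, contrast, and structure over sliding windows and are left out of this sketch.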
64
Aromiwura AA, Settle T, Umer M, Joshi J, Shotwell M, Mattumpuram J, Vorla M, Sztukowska M, Contractor S, Amini A, Kalra DK. Artificial intelligence in cardiac computed tomography. Prog Cardiovasc Dis 2023; 81:54-77. [PMID: 37689230] [DOI: 10.1016/j.pcad.2023.09.001]
Abstract
Artificial intelligence (AI) is a broad discipline of computer science and engineering. Modern applications of AI encompass intelligent models and algorithms for automated data analysis and processing, data generation, and prediction, with applications in visual perception, speech understanding, and language translation. AI in healthcare uses machine learning (ML) and other predictive analytical techniques to help sort through vast amounts of data and generate outputs that aid in diagnosis, clinical decision support, workflow automation, and prognostication. Coronary computed tomography angiography (CCTA) is an ideal fit for these applications because of the vast amounts of data generated and analyzed during cardiac segmentation, coronary calcium scoring, plaque quantification, adipose tissue quantification, peri-operative planning, fractional flow reserve quantification, and cardiac event prediction. In the past 5 years, there has been an exponential increase in the number of studies exploring the use of AI for cardiac computed tomography (CT) image acquisition, de-noising, analysis, and prognosis. Beyond image processing, AI has also been applied to improve the imaging workflow in areas such as patient scheduling, urgent result notification, report generation, and report communication. In this review, we discuss algorithms applicable to AI and radiomic analysis; we then present a summary of current and emerging clinical applications of AI in cardiac CT. We conclude with AI's advantages and limitations in this new field.
Affiliation(s)
- Tyler Settle: Medical Imaging Laboratory, Department of Electrical and Computer Engineering, University of Louisville, Louisville, KY, USA
- Muhammad Umer: Division of Cardiology, Department of Medicine, University of Louisville, Louisville, KY, USA
- Jonathan Joshi: Center for Artificial Intelligence in Radiological Sciences (CAIRS), Department of Radiology, University of Louisville, Louisville, KY, USA
- Matthew Shotwell: Division of Cardiology, Department of Medicine, University of Louisville, Louisville, KY, USA
- Jishanth Mattumpuram: Division of Cardiology, Department of Medicine, University of Louisville, Louisville, KY, USA
- Mounica Vorla: Division of Cardiology, Department of Medicine, University of Louisville, Louisville, KY, USA
- Maryta Sztukowska: Clinical Trials Unit, University of Louisville, Louisville, KY, USA; University of Information Technology and Management, Rzeszow, Poland
- Sohail Contractor: Center for Artificial Intelligence in Radiological Sciences (CAIRS), Department of Radiology, University of Louisville, Louisville, KY, USA
- Amir Amini: Medical Imaging Laboratory, Department of Electrical and Computer Engineering, University of Louisville, Louisville, KY, USA; Center for Artificial Intelligence in Radiological Sciences (CAIRS), Department of Radiology, University of Louisville, Louisville, KY, USA
- Dinesh K Kalra: Division of Cardiology, Department of Medicine, University of Louisville, Louisville, KY, USA; Center for Artificial Intelligence in Radiological Sciences (CAIRS), Department of Radiology, University of Louisville, Louisville, KY, USA
65
Saha PK, Nadeem SA, Comellas AP. A Survey on Artificial Intelligence in Pulmonary Imaging. Wiley Interdiscip Rev Data Min Knowl Discov 2023; 13:e1510. [PMID: 38249785] [PMCID: PMC10796150] [DOI: 10.1002/widm.1510]
Abstract
Over the last decade, deep learning (DL) has driven a paradigm shift in computer vision and image recognition, creating widespread opportunities for using artificial intelligence in research as well as in industrial applications. DL has been extensively studied in medical imaging applications, including those related to pulmonary diseases. Chronic obstructive pulmonary disease, asthma, lung cancer, pneumonia, and, more recently, COVID-19 are common lung diseases, together affecting nearly 7.4% of the world's population. Pulmonary imaging has been widely investigated as a means of improving our understanding of disease etiologies, enabling early diagnosis, and assessing disease progression and clinical outcomes. DL has been broadly applied to pulmonary image processing challenges including classification, recognition, registration, and segmentation. This paper presents a survey of pulmonary diseases, the roles of imaging in translational and clinical pulmonary research, and applications of different DL architectures and methods in pulmonary imaging, with emphasis on DL-based segmentation of major pulmonary anatomies such as lung volumes, lung lobes, pulmonary vessels, and airways, as well as thoracic musculoskeletal anatomies related to pulmonary diseases.
Affiliation(s)
- Punam K Saha: Departments of Radiology and Electrical and Computer Engineering, University of Iowa, Iowa City, IA 52242
66
Wehbe RM, Katsaggelos AK, Hammond KJ, Hong H, Ahmad FS, Ouyang D, Shah SJ, McCarthy PM, Thomas JD. Deep Learning for Cardiovascular Imaging: A Review. JAMA Cardiol 2023; 8:1089-1098. [PMID: 37728933] [DOI: 10.1001/jamacardio.2023.3142]
Abstract
Importance Artificial intelligence (AI), driven by advances in deep learning (DL), has the potential to reshape the field of cardiovascular imaging (CVI). While DL for CVI is still in its infancy, research is accelerating to aid in the acquisition, processing, and/or interpretation of CVI across various modalities, with several commercial products already in clinical use. It is imperative that cardiovascular imagers are familiar with DL systems, including a basic understanding of how they work, their relative strengths compared with other automated systems, and possible pitfalls in their implementation. The goal of this article is to review the methodology and application of DL to CVI in a simple, digestible fashion toward demystifying this emerging technology. Observations At its core, DL is simply the application of a series of tunable mathematical operations that translate input data into a desired output. Based on artificial neural networks that are inspired by the human nervous system, there are several types of DL architectures suited to different tasks; convolutional neural networks are particularly adept at extracting valuable information from CVI data. We survey some of the notable applications of DL to tasks across the spectrum of CVI modalities. We also discuss challenges in the development and implementation of DL systems, including avoiding overfitting, preventing systematic bias, improving explainability, and fostering a human-machine partnership. Finally, we conclude with a vision of the future of DL for CVI. Conclusions and Relevance Deep learning has the potential to meaningfully affect the field of CVI. Rather than a threat, DL could be seen as a partner to cardiovascular imagers in reducing technical burden and improving efficiency and quality of care. High-quality prospective evidence is still needed to demonstrate how the benefits of DL CVI systems may outweigh the risks.
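The review's description of DL as "a series of tunable mathematical operations that translate input data into a desired output" can be made concrete with a toy example that is purely illustrative and unrelated to any specific CVI model: two affine layers with a tanh nonlinearity, tuned by gradient descent on a mean-squared error.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))
y = (X[:, 0] * X[:, 1]).reshape(-1, 1)     # toy target the operations must learn

# The "tunable mathematical operations": two affine maps and a nonlinearity.
W1, b1 = 0.5 * rng.standard_normal((2, 16)), np.zeros(16)
W2, b2 = 0.5 * rng.standard_normal((16, 1)), np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)               # operation 1 + nonlinearity
    return h, h @ W2 + b2                  # operation 2

_, pred0 = forward(X)
mse0 = float(np.mean((pred0 - y) ** 2))

lr = 0.05
for _ in range(2000):                      # "tuning" = gradient descent on MSE
    h, pred = forward(X)
    err = 2 * (pred - y) / len(X)          # gradient of MSE w.r.t. predictions
    dh = err @ W2.T * (1 - h ** 2)         # backpropagate through tanh
    W2 -= lr * (h.T @ err); b2 -= lr * err.sum(0)
    W1 -= lr * (X.T @ dh); b1 -= lr * dh.sum(0)

_, pred = forward(X)
mse = float(np.mean((pred - y) ** 2))
print("MSE before/after tuning:", round(mse0, 3), round(mse, 3))
```

A convolutional layer follows the same pattern but shares its weights across spatial positions, which is what makes CNNs well suited to imaging data.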
Affiliation(s)
- Ramsey M Wehbe: Division of Cardiology, Department of Medicine & Biomedical Informatics Center, Medical University of South Carolina, Charleston; Division of Cardiology, Department of Medicine, Northwestern University Feinberg School of Medicine, Chicago, Illinois
- Aggelos K Katsaggelos: Department of Computer and Electrical Engineering, Northwestern University, Evanston, Illinois
- Kristian J Hammond: Department of Computer Science, Northwestern University, Evanston, Illinois
- Ha Hong: Medtronic, Minneapolis, Minnesota
- Faraz S Ahmad: Division of Cardiology, Department of Medicine, Northwestern University Feinberg School of Medicine, Chicago, Illinois; Center for Health Information Partnerships, Institute for Public Health and Medicine, Northwestern University Feinberg School of Medicine, Chicago, Illinois; Center for Artificial Intelligence, Northwestern Medicine Bluhm Cardiovascular Institute, Chicago, Illinois
- David Ouyang: Division of Cardiology, Department of Medicine, Cedars-Sinai Medical Center, Los Angeles, California
- Sanjiv J Shah: Division of Cardiology, Department of Medicine, Northwestern University Feinberg School of Medicine, Chicago, Illinois; Center for Artificial Intelligence, Northwestern Medicine Bluhm Cardiovascular Institute, Chicago, Illinois
- Patrick M McCarthy: Division of Cardiac Surgery, Department of Surgery, Northwestern University Feinberg School of Medicine, Chicago, Illinois; Center for Artificial Intelligence, Northwestern Medicine Bluhm Cardiovascular Institute, Chicago, Illinois
- James D Thomas: Division of Cardiology, Department of Medicine, Northwestern University Feinberg School of Medicine, Chicago, Illinois; Center for Artificial Intelligence, Northwestern Medicine Bluhm Cardiovascular Institute, Chicago, Illinois
67
Wang CJ, Rost NS, Golland P. Spatial-Intensity Transforms for Medical Image-to-Image Translation. IEEE Trans Med Imaging 2023; 42:3362-3373. [PMID: 37285247] [PMCID: PMC10651358] [DOI: 10.1109/tmi.2023.3283948]
Abstract
Image-to-image translation has seen major advances in computer vision but can be difficult to apply to medical images, where imaging artifacts and data scarcity degrade the performance of conditional generative adversarial networks. We develop the spatial-intensity transform (SIT) to improve output image quality while closely matching the target domain. SIT constrains the generator to a smooth spatial transform (diffeomorphism) composed with sparse intensity changes. SIT is a lightweight, modular network component that is effective on various architectures and training schemes. Relative to unconstrained baselines, this technique significantly improves image fidelity, and our models generalize robustly to different scanners. Additionally, SIT provides a disentangled view of anatomical and textural changes for each translation, making it easier to interpret the model's predictions in terms of physiological phenomena. We demonstrate SIT on two tasks: predicting longitudinal brain MRIs in patients with various stages of neurodegeneration, and visualizing changes with age and stroke severity in clinical brain scans of stroke patients. On the first task, our model accurately forecasts brain aging trajectories without supervised training on paired scans. On the second task, it captures associations between ventricle expansion and aging, as well as between white matter hyperintensities and stroke severity. As conditional generative models become increasingly versatile tools for visualization and forecasting, our approach demonstrates a simple and powerful technique for improving robustness, which is critical for translation to clinical settings. Source code is available at github.com/clintonjwang/spatial-intensity-transforms.
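The core SIT decomposition, a smooth spatial warp composed with sparse intensity changes, can be illustrated in one dimension. This is a toy sketch with hand-specified displacement and intensity fields; in the paper both components are produced by a learned generator:

```python
import numpy as np

def apply_sit(x, displacement, intensity_delta):
    """Toy 1-D spatial-intensity transform: smoothly warp x, then add a
    sparse intensity change. This separates 'where anatomy moved'
    (displacement) from 'what changed in appearance' (intensity_delta)."""
    coords = np.arange(len(x), dtype=float)
    warped = np.interp(coords + displacement, coords, x)  # smooth spatial transform
    return warped + intensity_delta                       # sparse intensity change

x = np.zeros(100); x[40:60] = 1.0                         # a bright "structure"
coords = np.arange(100, dtype=float)
displacement = 3.0 * np.exp(-((coords - 50) ** 2) / 200)  # smooth local shift
delta = np.zeros(100); delta[70:73] = 0.5                 # sparse "lesion" appearing
y = apply_sit(x, displacement, delta)
print(y[45], y[70])
```

Because the two fields are explicit, one can inspect the displacement to interpret geometric change (e.g., ventricle expansion) separately from the sparse intensity map (e.g., a new hyperintensity), which is the interpretability benefit the abstract describes.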
68
Hong GS, Jang M, Kyung S, Cho K, Jeong J, Lee GY, Shin K, Kim KD, Ryu SM, Seo JB, Lee SM, Kim N. Overcoming the Challenges in the Development and Implementation of Artificial Intelligence in Radiology: A Comprehensive Review of Solutions Beyond Supervised Learning. Korean J Radiol 2023; 24:1061-1080. [PMID: 37724586] [PMCID: PMC10613849] [DOI: 10.3348/kjr.2023.0393]
Abstract
Artificial intelligence (AI) in radiology is a rapidly developing field, with several prospective clinical studies demonstrating its benefits in clinical practice. In 2022, the Korean Society of Radiology held a forum to discuss the challenges and drawbacks of AI development and implementation. Various barriers hinder the successful application and widespread adoption of AI in radiology, such as limited annotated data, data privacy and security, data heterogeneity, imbalanced data, model interpretability, overfitting, and integration with clinical workflows. In this review, possible solutions to these challenges are presented and discussed; these include training with longitudinal and multimodal datasets, dense training with multitask learning and multimodal learning, self-supervised contrastive learning, various image modifications and syntheses using generative models, explainable AI, causal learning, federated learning with large data models, and digital twins.
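One of the solutions listed, self-supervised contrastive learning, typically optimizes an InfoNCE-style objective over paired augmented views of the same image. A hedged numpy sketch of that loss follows; the random embeddings stand in for encoder outputs, and the temperature value is an arbitrary illustrative choice:

```python
import numpy as np

def info_nce(z1, z2, tau=0.1):
    """InfoNCE loss for N paired embeddings: row i of z1 and row i of z2
    are two augmented views of the same image. The loss is low when each
    row of z1 is most similar to its own partner in z2."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                      # cosine similarities / temperature
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))           # cross-entropy, positives on diagonal

rng = np.random.default_rng(0)
anchor = rng.standard_normal((8, 32))                          # one view per image
aligned = anchor + 0.05 * rng.standard_normal((8, 32))         # matched second views
shuffled = anchor[::-1] + 0.05 * rng.standard_normal((8, 32))  # mismatched pairing
print("aligned loss:", info_nce(anchor, aligned),
      "shuffled loss:", info_nce(anchor, shuffled))
```

Minimizing this loss pushes an encoder to map augmentations of the same scan together and different scans apart, which is how useful representations are learned without annotations.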
Affiliation(s)
- Gil-Sun Hong: Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Miso Jang: Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Sunggu Kyung: Department of Biomedical Engineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Kyungjin Cho: Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea; Department of Biomedical Engineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Jiheon Jeong: Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Grace Yoojin Lee: Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Keewon Shin: Laboratory for Biosignal Analysis and Perioperative Outcome Research, Biomedical Engineering Center, Asan Institute of Lifesciences, Asan Medical Center, Seoul, Republic of Korea
- Ki Duk Kim: Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Seung Min Ryu: Department of Orthopedic Surgery, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Joon Beom Seo: Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Sang Min Lee: Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Namkug Kim: Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea; Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
69
Yu M, Guo M, Zhang S, Zhan Y, Zhao M, Lukasiewicz T, Xu Z. RIRGAN: An end-to-end lightweight multi-task learning method for brain MRI super-resolution and denoising. Comput Biol Med 2023; 167:107632. [PMID: 39491379] [DOI: 10.1016/j.compbiomed.2023.107632]
Abstract
A common problem in deep-learning-based low-level vision for medical images is that most research is based on single-task learning (STL), dedicated to solving either low resolution or high noise, but not both. Our motivation is to design a model that can perform super-resolution (SR) and denoising (DN) simultaneously, to cope with the combination of low resolution and high noise encountered in practice. By improving an existing single-image super-resolution (SISR) network and introducing multi-task learning (MTL), we propose RIRGAN, an end-to-end lightweight MTL network based on a generative adversarial network (GAN) that uses residual-in-residual blocks (RIR-Blocks) for feature extraction and can accomplish SR and DN concurrently. The generator in RIRGAN is composed of several residual groups with a long skip connection (LSC), which helps form a very deep network and enables it to focus on learning high-frequency (HF) information. The introduction of a discriminator based on a relativistic average discriminator (RaD) greatly improves the discriminator's ability and gives the generated images more realistic detail. Meanwhile, the hybrid loss function not only gives RIRGAN its MTL capability but also balances attention between quantitative evaluation metrics and qualitative human visual assessment. Experimental results show that the quality of images restored by RIRGAN is superior to STL-based SR and DN methods in both subjective perception and objective evaluation metrics when processing medical images with degraded low-level vision. RIRGAN is thus better aligned with the practical requirements of medical imaging.
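The relativistic average discriminator mentioned above scores each sample relative to the average score of the opposite class, rather than in isolation. A minimal numpy sketch of the RaD discriminator loss follows; the critic outputs are hypothetical numbers for illustration, and the paper's full hybrid loss is not reproduced:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rad_loss_d(c_real, c_fake, eps=1e-12):
    """Relativistic average discriminator (RaD) loss: real samples should
    score above the *average* fake score, and fakes below the average
    real score. c_real / c_fake are raw critic outputs (logits)."""
    d_real = sigmoid(c_real - c_fake.mean())
    d_fake = sigmoid(c_fake - c_real.mean())
    return -np.mean(np.log(d_real + eps)) - np.mean(np.log(1.0 - d_fake + eps))

# Hypothetical critic outputs for a batch of real and generated images.
c_real = np.array([2.0, 1.5, 2.5, 1.8])
c_fake = np.array([-1.0, -0.5, -1.5, -0.8])
well_separated = rad_loss_d(c_real, c_fake)      # critic distinguishes the classes
overlapping = rad_loss_d(np.array([0.1, -0.1, 0.2, 0.0]),
                         np.array([0.0, 0.1, -0.2, 0.05]))  # critic is confused
print("separated:", well_separated, "overlapping:", overlapping)
```

Relative scoring keeps gradients informative for the generator even when the discriminator becomes strong, which is the practical reason RaD variants tend to yield sharper detail than a standard GAN discriminator.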
Affiliation(s)
- Miao Yu: State Key Laboratory of Reliability and Intelligence of Electrical Equipment, School of Health Sciences and Biomedical Engineering, Hebei University of Technology, Tianjin, China
- Miaomiao Guo: State Key Laboratory of Reliability and Intelligence of Electrical Equipment, School of Health Sciences and Biomedical Engineering, Hebei University of Technology, Tianjin, China
- Shuai Zhang: State Key Laboratory of Reliability and Intelligence of Electrical Equipment, School of Health Sciences and Biomedical Engineering, Hebei University of Technology, Tianjin, China
- Yuefu Zhan: Department of Radiology, Hainan Women and Children's Medical Center, Haikou, China
- Mingkang Zhao: State Key Laboratory of Reliability and Intelligence of Electrical Equipment, School of Health Sciences and Biomedical Engineering, Hebei University of Technology, Tianjin, China
- Thomas Lukasiewicz: Institute of Logic and Computation, Vienna University of Technology, Vienna, Austria; Department of Computer Science, University of Oxford, Oxford, United Kingdom
- Zhenghua Xu: State Key Laboratory of Reliability and Intelligence of Electrical Equipment, School of Health Sciences and Biomedical Engineering, Hebei University of Technology, Tianjin, China
70
Jiao J, Xiao X, Li Z. dm-GAN: Distributed multi-latent code inversion enhanced GAN for fast and accurate breast X-ray image automatic generation. Math Biosci Eng 2023; 20:19485-19503. [PMID: 38052611] [DOI: 10.3934/mbe.2023863]
Abstract
Breast cancer seriously threatens women's physical and mental health. Mammography is one of the most effective methods for breast cancer diagnosis, with artificial intelligence algorithms used to identify diverse breast masses. Popular intelligent diagnosis methods require a large number of breast images for training; however, collecting and labeling many breast images manually is extremely time consuming and inefficient. In this paper, we propose a distributed multi-latent code inversion enhanced generative adversarial network (dm-GAN) for fast, accurate, and automatic breast image generation. The proposed dm-GAN takes advantage of the generator and discriminator of the GAN framework to achieve automatic image generation. The new generator in dm-GAN adopts a multi-latent code inverse mapping method to simplify the data-fitting process of GAN generation and improve the accuracy of image generation, while a multi-discriminator structure is used to enhance discrimination accuracy. The experimental results show that the proposed dm-GAN automatically generates breast images with higher accuracy, achieving up to 1.84 dB higher peak signal-to-noise ratio (PSNR), 5.61% lower Fréchet inception distance (FID), and 1.38x faster generation than the state of the art.
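PSNR, one of the metrics reported above, is a simple function of the mean-squared error against a reference image. A small numpy sketch with synthetic images follows (FID, which requires a pretrained Inception network, is omitted):

```python
import numpy as np

def psnr(img, ref, data_range=1.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE).
    data_range is the maximum possible pixel value (1.0 for [0, 1] images)."""
    mse = np.mean((img.astype(float) - ref.astype(float)) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

rng = np.random.default_rng(1)
ref = rng.random((64, 64))                 # stand-in "real" image in [0, 1]
noisy = np.clip(ref + 0.05 * rng.standard_normal(ref.shape), 0, 1)
less_noisy = np.clip(ref + 0.02 * rng.standard_normal(ref.shape), 0, 1)
# Higher PSNR means the generated image is closer to the reference.
print("PSNR (more noise):", psnr(noisy, ref),
      "PSNR (less noise):", psnr(less_noisy, ref))
```

A gain such as the paper's 1.84 dB corresponds to a multiplicative reduction in MSE, since every 3 dB of PSNR halves the mean-squared error.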
Affiliation(s)
- Jiajia Jiao: College of Information Engineering, Shanghai Maritime University, Shanghai 201306, China
- Xiao Xiao: College of Information Engineering, Shanghai Maritime University, Shanghai 201306, China
- Zhiyu Li: Department of Medical Imaging, Shanghai East Hospital, Tongji University School of Medicine, Shanghai 201306, China
71
Patwari M, Gutjahr R, Marcus R, Thali Y, Calvarons AF, Raupach R, Maier A. Reducing the risk of hallucinations with interpretable deep learning models for low-dose CT denoising: comparative performance analysis. Phys Med Biol 2023; 68:19LT01. [PMID: 37733068] [DOI: 10.1088/1361-6560/acfc11]
Abstract
Objective. Reducing CT radiation dose is an often-proposed measure to enhance patient safety, which, however, results in increased image noise and thus degraded clinical image quality. Several deep learning methods have been proposed for low-dose CT (LDCT) denoising. The high risks posed by possible hallucinations in clinical images necessitate methods that aid the interpretation of deep learning networks. In this study, we use qualitative reader studies and quantitative radiomics studies to assess the perceived quality, signal preservation, and statistical feature preservation of LDCT volumes denoised by deep learning, and we compare interpretable deep learning methods with classical deep neural networks in clinical denoising performance. Approach. We conducted an image quality analysis study rating the denoised volumes on four criteria of perceived image quality. We subsequently conducted a lesion detection/segmentation study to assess the impact of denoising on signal detectability. Finally, a radiomic analysis study was performed to observe the quantitative and statistical similarity of the denoised images to standard-dose CT (SDCT) images. Main results. Certain deep-learning-based algorithms generate denoised volumes that are qualitatively inferior to SDCT volumes (p < 0.05). Contrary to previous literature, denoising the volumes did not reduce the accuracy of segmentation (p > 0.05). In most cases, the denoised volumes yielded radiomics features statistically similar to those generated from SDCT volumes (p > 0.05). Significance. Our results show that the denoised volumes have lower perceived quality than SDCT volumes, that noise and denoising do not significantly affect the detectability of abdominal lesions, and that denoised volumes contain statistically similar radiomics features to SDCT volumes.
Affiliation(s)
- Mayank Patwari: Pattern Recognition Lab, Friedrich-Alexander Universität Erlangen-Nürnberg, D-91058 Erlangen, Germany; CT Concepts, Siemens Healthineers AG, D-91301 Forchheim, Germany
- Ralf Gutjahr: CT Concepts, Siemens Healthineers AG, D-91301 Forchheim, Germany
- Roy Marcus: Balgrist University Hospital Zurich, 8008 Zurich, Switzerland; Faculty of Medicine, University of Zurich, 8032 Zurich, Switzerland; Cantonal Hospital of Lucerne, 6016 Lucerne, Switzerland
- Yannick Thali: Spital Zofingen AG, 4800 Zofingen, Switzerland; Cantonal Hospital of Lucerne, 6016 Lucerne, Switzerland
- Rainer Raupach: CT Concepts, Siemens Healthineers AG, D-91301 Forchheim, Germany
- Andreas Maier: Pattern Recognition Lab, Friedrich-Alexander Universität Erlangen-Nürnberg, D-91058 Erlangen, Germany
72
Choi K, Kim SH, Kim S. Self-supervised denoising of projection data for low-dose cone-beam CT. Med Phys 2023; 50:6319-6333. [PMID: 37079443] [DOI: 10.1002/mp.16421]
Abstract
BACKGROUND Convolutional neural networks (CNNs) have shown promising results in image denoising tasks. While most existing CNN-based methods depend on supervised learning by directly mapping noisy inputs to clean targets, high-quality references are often unavailable in interventional radiology settings such as cone-beam computed tomography (CBCT). PURPOSE In this paper, we propose a novel self-supervised learning method that reduces noise in projections acquired by ordinary CBCT scans. METHODS With a network that partially blinds its input, we can train the denoising model by mapping the partially blinded projections to the original projections. Additionally, we incorporate noise-to-noise learning into the self-supervised learning by mapping adjacent projections to the original projections. With standard image reconstruction methods such as FDK-type algorithms, we can reconstruct high-quality CBCT images from the projections denoised by our projection-domain denoising method. RESULTS In the head phantom study, we measure peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) values of the proposed method along with the other denoising methods and uncorrected low-dose CBCT data for a quantitative comparison in both the projection and image domains. The PSNR and SSIM values of our self-supervised denoising approach are 27.08 and 0.839, whereas those of uncorrected CBCT images are 15.68 and 0.103, respectively. In the retrospective study, we assess the quality of interventional patient CBCT images to evaluate the projection-domain and image-domain denoising methods. Both qualitative and quantitative results indicate that our approach can effectively produce high-quality CBCT images from low-dose projections in the absence of duplicate clean or noisy references. CONCLUSIONS Our self-supervised learning strategy is capable of restoring anatomical information while efficiently removing noise in CBCT projection data.
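The blind-spot idea described above, blinding part of the input and training the model to predict the original values at exactly those positions, can be sketched in numpy. In this toy version a fixed neighborhood average stands in for the CNN, and the specific masking scheme (replacing masked pixels with a neighbor's value) is one plausible choice rather than the authors' exact implementation:

```python
import numpy as np

def blind_spot_batch(proj, frac=0.05, rng=None):
    """Blind a random fraction of pixels so a model cannot learn the
    identity mapping: masked pixels are replaced with a neighbor's value
    and must be predicted from their surrounding context."""
    if rng is None:
        rng = np.random.default_rng()
    mask = rng.random(proj.shape) < frac
    blinded = proj.copy()
    blinded[mask] = np.roll(proj, 1, axis=0)[mask]  # one common blinding scheme
    return blinded, mask

def neighborhood_average(img):
    """Stand-in 'denoiser': average of the four neighbors, in place of a CNN."""
    return 0.25 * (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
                   np.roll(img, 1, 1) + np.roll(img, -1, 1))

def self_supervised_loss(proj, frac=0.05, rng=None):
    """Loss evaluated only at the blinded pixels, as in blind-spot training."""
    blinded, mask = blind_spot_batch(proj, frac, rng)
    pred = neighborhood_average(blinded)
    return np.mean((pred[mask] - proj[mask]) ** 2)

rng = np.random.default_rng(0)
clean = np.outer(np.sin(np.linspace(0, np.pi, 64)), np.ones(64))  # smooth "projection"
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
print("self-supervised loss:", self_supervised_loss(noisy, rng=rng))
```

Because the loss never compares a pixel with its own noisy value, minimizing it encourages prediction of the underlying signal from context, which is why no clean reference is needed.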
Affiliation(s)
- Kihwan Choi
- Bionics Research Center, Korea Institute of Science and Technology, Seoul, Republic of Korea
- Seung Hyoung Kim
- Department of Radiology, Yonsei University College of Medicine, Seoul, Republic of Korea
- Sungwon Kim
- Department of Radiology, Yonsei University College of Medicine, Seoul, Republic of Korea
73
Wei L, Yadav A, Hsu W. CTFlow: Mitigating Effects of Computed Tomography Acquisition and Reconstruction with Normalizing Flows. Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2023; 14226:413-422. [PMID: 38737498] [PMCID: PMC11086056] [DOI: 10.1007/978-3-031-43990-2_39]
Abstract
Mitigating the effects on image appearance of variations in computed tomography (CT) acquisition and reconstruction parameters is a challenging inverse problem. We present CTFlow, a normalizing-flow-based method for harmonizing CT scans acquired and reconstructed with different doses and kernels to a target scan. Unlike existing state-of-the-art image harmonization approaches that generate only a single output, flow-based methods learn the explicit conditional density and output the entire spectrum of plausible reconstructions, reflecting the underlying uncertainty of the problem. We demonstrate how normalizing flows reduce variability in image quality and in the performance of a machine learning algorithm for lung nodule detection. We evaluate CTFlow by 1) comparing it with other techniques on a denoising task using the AAPM-Mayo Clinical Low-Dose CT Grand Challenge dataset, and 2) demonstrating consistency in nodule detection performance across 186 real-world low-dose CT chest scans acquired at our institution. CTFlow performs better on the denoising task for both peak signal-to-noise ratio and perceptual quality metrics, and produces more consistent predictions across all dose and kernel conditions than generative adversarial network (GAN)-based image harmonization on a lung nodule detection task. The code is available at https://github.com/hsu-lab/ctflow.
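The "entire spectrum of plausible reconstructions" property rests on invertibility with a tractable Jacobian. A single affine coupling step, the building block of most conditional flows (a generic sketch, not CTFlow's architecture), makes this concrete:

```python
import numpy as np

def coupling_forward(x, s_net, t_net):
    """Affine coupling: the first half of x passes through unchanged and
    parameterizes an elementwise scale/shift of the second half. The
    log|det J| is simply sum(s), so the density stays tractable."""
    x1, x2 = np.split(x, 2)
    s, t = s_net(x1), t_net(x1)
    return np.concatenate([x1, x2 * np.exp(s) + t]), float(np.sum(s))

def coupling_inverse(z, s_net, t_net):
    """Exact inverse: recompute s, t from the untouched half and undo them."""
    z1, z2 = np.split(z, 2)
    s, t = s_net(z1), t_net(z1)
    return np.concatenate([z1, (z2 - t) * np.exp(-s)])
```

Sampling different latent codes for the transformed half and inverting yields multiple reconstructions consistent with the same conditioning, which is what lets a flow expose its predictive uncertainty rather than emit a single answer.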
Affiliation(s)
- Leihao Wei
- Department of Electrical & Computer Engineering, Samueli School of Engineering, University of California, Los Angeles, CA 90095, USA
- Medical & Imaging Informatics, Department of Radiological Sciences, David Geffen School of Medicine at UCLA, Los Angeles, CA 90024, USA
- Anil Yadav
- Medical & Imaging Informatics, Department of Radiological Sciences, David Geffen School of Medicine at UCLA, Los Angeles, CA 90024, USA
- William Hsu
- Medical & Imaging Informatics, Department of Radiological Sciences, David Geffen School of Medicine at UCLA, Los Angeles, CA 90024, USA
74
Chan HP, Helvie MA, Gao M, Hadjiiski L, Zhou C, Garver K, Klein KA, McLaughlin C, Oudsema R, Rahman WT, Roubidoux MA. Deep learning denoising of digital breast tomosynthesis: Observer performance study of the effect on detection of microcalcifications in breast phantom images. Med Phys 2023; 50:6177-6189. [PMID: 37145996] [PMCID: PMC10592580] [DOI: 10.1002/mp.16439]
Abstract
BACKGROUND The noise in digital breast tomosynthesis (DBT) includes x-ray quantum noise and detector readout noise. The total radiation dose of a DBT scan is kept at about the level of a digital mammogram, but the detector noise is increased due to acquisition of multiple projections. The high noise can degrade the detectability of subtle lesions, specifically microcalcifications (MCs). PURPOSE We previously developed a deep-learning-based denoiser to improve the image quality of DBT. In the current study, we conducted an observer performance study with breast radiologists to investigate the feasibility of using deep-learning-based denoising to improve the detection of MCs in DBT. METHODS We have a modular breast phantom set containing seven 1-cm-thick heterogeneous 50% adipose/50% fibroglandular slabs custom-made by CIRS, Inc. (Norfolk, VA). We made six 5-cm-thick breast phantoms embedded with 144 simulated MC clusters of four nominal speck sizes (0.125-0.150, 0.150-0.180, 0.180-0.212, 0.212-0.250 mm) at random locations. The phantoms were imaged with a GE Pristina DBT system using the automatic standard (STD) mode. The phantoms were also imaged with the STD+ mode, which increased the average glandular dose by 54%, to be used as a reference condition for comparison of radiologists' reading. Our previously trained and validated denoiser was deployed to the STD images to obtain a denoised DBT set (dnSTD). Seven breast radiologists participated as readers to detect the MCs in the DBT volumes of the six phantoms under the three conditions (STD, STD+, dnSTD), totaling 18 DBT volumes. Each radiologist read all 18 DBT volumes sequentially, arranged in a different order for each reader in a counter-balanced manner to minimize any potential reading-order effects. They marked the location of each detected MC cluster and provided a conspicuity rating and their confidence level for the perceived cluster. Visual grading characteristics (VGC) analysis was used to compare the conspicuity ratings and the confidence levels of the radiologists for the detection of MCs. RESULTS The average sensitivities over all MC speck sizes were 65.3%, 73.2%, and 72.3%, respectively, for the radiologists reading the STD, dnSTD, and STD+ volumes. The sensitivity for dnSTD was significantly higher than that for STD (p < 0.005, two-tailed Wilcoxon signed rank test) and comparable to that for STD+. The average false positive rates were 3.9 ± 4.6, 2.8 ± 3.7, and 2.7 ± 3.9 marks per DBT volume, respectively, for reading the STD, dnSTD, and STD+ images, but the difference between dnSTD and STD or STD+ did not reach statistical significance. The overall conspicuity ratings and confidence levels by VGC analysis for dnSTD were significantly higher than those for both STD and STD+ (p ≤ 0.001). The critical alpha value for significance was adjusted to 0.025 with Bonferroni correction. CONCLUSIONS This observer study using breast phantom images showed that deep-learning-based denoising has the potential to improve the detection of MCs in noisy DBT images and increase radiologists' confidence in differentiating noise from MCs without increasing radiation dose. Further studies are needed to evaluate the generalizability of these results to the wide range of DBTs from human subjects and patient populations in clinical settings.
Affiliation(s)
- Heang-Ping Chan
- Department of Radiology, University of Michigan, Ann Arbor, Michigan, USA
- Mark A Helvie
- Department of Radiology, University of Michigan, Ann Arbor, Michigan, USA
- Mingjie Gao
- Department of Radiology, University of Michigan, Ann Arbor, Michigan, USA
- Lubomir Hadjiiski
- Department of Radiology, University of Michigan, Ann Arbor, Michigan, USA
- Chuan Zhou
- Department of Radiology, University of Michigan, Ann Arbor, Michigan, USA
- Kim Garver
- Department of Radiology, University of Michigan, Ann Arbor, Michigan, USA
- Katherine A Klein
- Department of Radiology, University of Michigan, Ann Arbor, Michigan, USA
- Carol McLaughlin
- Department of Radiology, University of Michigan, Ann Arbor, Michigan, USA
- Rebecca Oudsema
- Department of Radiology, University of Michigan, Ann Arbor, Michigan, USA
- W Tania Rahman
- Department of Radiology, University of Michigan, Ann Arbor, Michigan, USA
75
Yuan J, Zhou F, Guo Z, Li X, Yu H. HCformer: Hybrid CNN-Transformer for LDCT Image Denoising. J Digit Imaging 2023; 36:2290-2305. [PMID: 37386333] [PMCID: PMC10501999] [DOI: 10.1007/s10278-023-00842-9]
Abstract
Low-dose computed tomography (LDCT) is an effective way to reduce radiation exposure for patients. However, it increases the noise of reconstructed CT images and affects the precision of clinical diagnosis. Most current deep learning-based denoising methods are built on convolutional neural networks (CNNs), which concentrate on local information and have limited capacity for modeling multiple structures. Transformer structures can compute each pixel's response on a global scale, but their extensive computation requirements prevent them from being widely used in medical image processing. To reduce the impact of LDCT scans on patients, this paper develops an image post-processing method that combines CNN and Transformer structures to obtain high-quality images from LDCT. A hybrid CNN-Transformer (HCformer) codec network model is proposed for LDCT image denoising. A neighborhood feature enhancement (NEF) module is designed to introduce local information into the Transformer's operation and strengthen the representation of adjacent-pixel information in the LDCT denoising task. The shifted-window method is utilized to lower the computational complexity of the network model and overcome the problems that come with computing multi-head self-attention (MSA) in a fixed window. Meanwhile, window/shifted-window multi-head self-attention (W/SW-MSA) is used alternately in successive Transformer layers to enable information interaction between them, which successfully decreases the Transformer's overall computational cost. The AAPM 2016 LDCT grand challenge dataset is employed for ablation and comparison experiments to demonstrate the viability of the proposed LDCT denoising method. Per the experimental findings, HCformer improves the image quality metrics SSIM, HuRMSE, and FSIM from 0.8017, 34.1898, and 0.6885 to 0.8507, 17.7213, and 0.7247, respectively, while preserving image details as it reduces noise. Both the qualitative and quantitative comparison results confirm that HCformer outperforms the other methods, and the ablation experiments confirm the contribution of each component. HCformer combines the advantages of CNN and Transformer and has great potential for LDCT image denoising and other tasks.
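The windowed-attention bookkeeping the abstract describes (fixed windows for W-MSA, a half-window cyclic shift for SW-MSA) is easy to sketch; the attention computation itself is omitted, and these helpers are an illustration rather than HCformer's exact code:

```python
import numpy as np

def window_partition(x, w):
    """Split an (H, W) feature map into non-overlapping w x w windows, so
    self-attention cost scales with the window size, not the whole image."""
    H, W = x.shape
    return (x.reshape(H // w, w, W // w, w)
             .transpose(0, 2, 1, 3)
             .reshape(-1, w, w))

def shift_windows(x, w):
    """Cyclic half-window shift applied before partitioning in alternate
    layers (the 'SW' step), letting information cross window boundaries."""
    return np.roll(x, shift=(-(w // 2), -(w // 2)), axis=(0, 1))
```

Alternating plain and shifted partitions is what replaces global attention's quadratic cost while still propagating context across the whole image over successive layers.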
Affiliation(s)
- Jinli Yuan
- The School of Electronic and Information Engineering, Hebei University of Technology, Tianjin, 300401 China
- Feng Zhou
- The School of Electronic and Information Engineering, Hebei University of Technology, Tianjin, 300401 China
- Zhitao Guo
- The School of Electronic and Information Engineering, Hebei University of Technology, Tianjin, 300401 China
- Xiaozeng Li
- The School of Electronic and Information Engineering, Hebei University of Technology, Tianjin, 300401 China
- Hengyong Yu
- The Department of Electrical and Computer Engineering, University of Massachusetts Lowell, Lowell, MA 01854 USA
76
Zhao F, Li D, Luo R, Liu M, Jiang X, Hu J. Self-supervised deep learning for joint 3D low-dose PET/CT image denoising. Comput Biol Med 2023; 165:107391. [PMID: 37717529] [DOI: 10.1016/j.compbiomed.2023.107391]
Abstract
Deep learning (DL)-based denoising of low-dose positron emission tomography (LDPET) and low-dose computed tomography (LDCT) has been widely explored. However, previous methods have focused only on single-modality denoising, neglecting the possibility of denoising LDPET and LDCT simultaneously with a single neural network, i.e., joint LDPET/LDCT denoising. Moreover, DL-based denoising methods generally require a large number of well-aligned low-dose/normal-dose (LD-ND) sample pairs, which can be difficult to obtain. To this end, we propose a self-supervised two-stage training framework named MAsk-then-Cycle (MAC) to achieve self-supervised joint LDPET/LDCT denoising. The first stage of MAC is masked autoencoder (MAE)-based pre-training; the second is self-supervised denoising training. Specifically, we propose a self-supervised denoising strategy named cycle self-recombination (CSR), which enables denoising without well-aligned sample pairs. Unlike methods that treat noise as a homogeneous whole, CSR disentangles noise into signal-dependent and signal-independent components. This is more in line with the actual imaging process and allows noises and signals to be recombined flexibly to generate new samples. These new samples contain implicit constraints that improve the network's denoising ability, and we design multiple loss functions based on these constraints to enable self-supervised training. We then design a CSR-based denoising network for joint 3D LDPET/LDCT denoising. Because existing self-supervised methods generally lack pixel-level constraints on networks, which can easily lead to additional artifacts, we perform MAE-based pre-training before denoising training to impose pixel-level constraints indirectly. Experiments on an LDPET/LDCT dataset demonstrate its superiority over existing methods. To our knowledge, this is the first self-supervised joint LDPET/LDCT denoising method; it requires no prior assumptions about the noise and is therefore more robust.
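The recombination idea — estimate each sample's signal, peel off the residual noise, and swap residuals between samples to synthesize new noisy inputs — can be sketched as below. This is a simplification: CSR additionally separates signal-dependent from signal-independent noise, which this toy version does not.

```python
import numpy as np

def recombine(noisy_a, noisy_b, denoise):
    """Swap estimated noise residuals between two noisy samples to create
    two new, statistically plausible training inputs."""
    sig_a, sig_b = denoise(noisy_a), denoise(noisy_b)
    res_a, res_b = noisy_a - sig_a, noisy_b - sig_b
    return sig_a + res_b, sig_b + res_a
```

A useful sanity check on any such recombination is that it conserves the total content of the pair: the two synthesized samples sum to the two originals.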
Affiliation(s)
- Feixiang Zhao
- State Key Laboratory of Geohazard Prevention and Geoenvironment Protection, Chengdu University of Technology, Chengdu, 610000, China
- Dongfen Li
- State Key Laboratory of Geohazard Prevention and Geoenvironment Protection, Chengdu University of Technology, Chengdu, 610000, China
- Rui Luo
- Department of Nuclear Medicine, Mianyang Central Hospital, Mianyang, 621000, China
- Mingzhe Liu
- State Key Laboratory of Geohazard Prevention and Geoenvironment Protection, Chengdu University of Technology, Chengdu, 610000, China
- Xin Jiang
- School of Data Science and Artificial Intelligence, Wenzhou University of Technology, Wenzhou, 325000, China
- Junjie Hu
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, 610065, China
77
Zhou S, Yang J, Konduri K, Huang J, Yu L, Jin M. Spatiotemporal denoising of low-dose cardiac CT image sequences using RecycleGAN. Biomed Phys Eng Express 2023; 9. [PMID: 37604139] [PMCID: PMC10593187] [DOI: 10.1088/2057-1976/acf223]
Abstract
Electrocardiogram (ECG)-gated multi-phase computed tomography angiography (MP-CTA) is frequently used for diagnosis of coronary artery disease. Radiation dose may become a potential concern as the scan needs to cover a wide range of cardiac phases during a heart cycle. A common method to reduce radiation is to limit the full-dose acquisition to a predefined range of phases while reducing the radiation dose for the rest. Our goal in this study is to develop a spatiotemporal deep learning method to enhance the quality of low-dose CTA images at phases acquired at reduced radiation dose. Recently, we demonstrated that a deep learning method, Cycle-Consistent generative adversarial networks (CycleGAN), could effectively denoise low-dose CT images through spatial image translation without labeled image pairs in both low-dose and full-dose image domains. As CycleGAN does not utilize the temporal information in its denoising mechanism, we propose to use RecycleGAN, which could translate a series of images ordered in time from the low-dose domain to the full-dose domain through an additional recurrent network. To evaluate RecycleGAN, we use the XCAT phantom program, a highly realistic simulation tool based on real patient data, to generate MP-CTA image sequences for 18 patients (14 for training, 2 for validation and 2 for test). Our simulation results show that RecycleGAN can achieve better denoising performance than CycleGAN based on both visual inspection and quantitative metrics. We further demonstrate the superior denoising performance of RecycleGAN using clinical MP-CTA images from 50 patients.
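The two constraints involved — CycleGAN's round trip and RecycleGAN's temporal "recycle" through a predictor P — can be written as losses over arbitrary mappings. The functions below are a schematic with toy callables, not the networks used in the paper:

```python
import numpy as np

def cycle_loss(x, G, F):
    """CycleGAN: translating to the other domain (G) and back (F) should
    return the input, which removes the need for paired data."""
    return float(np.abs(F(G(x)) - x).mean())

def recycle_loss(x_prev, x_next, G, F, P):
    """RecycleGAN: predict the next frame in the translated domain with P,
    map it back with F, and compare against the true next frame. This is
    the term that injects temporal information into the denoising."""
    return float(np.abs(F(P(G(x_prev))) - x_next).mean())
```

With consistent toy maps (G doubles, F halves, P advances time), both losses vanish, which is the fixed point the adversarial training pushes toward.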
Affiliation(s)
- Shiwei Zhou
- Department of Physics, University of Texas at Arlington, Arlington, TX, United States of America
- Jinyu Yang
- Department of Computer Science and Engineering, University of Texas at Arlington, Arlington, TX, United States of America
- Krishnateja Konduri
- Department of Bioengineering, University of Texas at Arlington, Arlington, TX, United States of America
- Junzhou Huang
- Department of Computer Science and Engineering, University of Texas at Arlington, Arlington, TX, United States of America
- Lifeng Yu
- Department of Radiology, Mayo Clinic, Rochester, MN, United States of America
- Mingwu Jin
- Department of Physics, University of Texas at Arlington, Arlington, TX, United States of America
78
Yu J, Zhang H, Zhang P, Zhu Y. Unsupervised learning-based dual-domain method for low-dose CT denoising. Phys Med Biol 2023; 68:185010. [PMID: 37567225] [DOI: 10.1088/1361-6560/acefa2]
Abstract
Objective. Low-dose CT (LDCT) is an important research topic in CT imaging because of its ability to reduce radiation damage in clinical diagnosis. In recent years, deep learning techniques have been widely applied to LDCT imaging and a large number of denoising methods have been proposed. However, one major challenge of supervised deep learning-based methods is the exact geometric pairing of datasets with different doses. The aim of this study is therefore to develop an unsupervised learning-based LDCT imaging method that addresses this challenge. Approach. We propose an unsupervised learning-based dual-domain method for LDCT denoising that consists of two stages. The first stage is projection-domain denoising, in which the unsupervised method Noise2Self is applied to denoise projection data whose noise is statistically independent and zero-mean. The second stage is an iterative enhancement approach, which combines prior information obtained from a generative model with an iterative reconstruction algorithm to enhance the details of the reconstructed image. Main results. Experimental results show that the proposed method outperforms the comparison methods in denoising effect; in particular, it achieves the highest SSIM. Significance. Our unsupervised learning-based method can be a promising alternative to traditional supervised methods for LDCT imaging, especially when the availability of labeled datasets is limited.
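Noise2Self's key property, J-invariance, can be demonstrated with a checkerboard partition: the denoiser is scored on each half while only seeing the other half, and for zero-mean independent noise this self-supervised MSE tracks the true (unknown) supervised loss. A small sketch with a neighbor-averaging stand-in denoiser (our illustration, not the paper's network):

```python
import numpy as np

def noise2self_loss(noisy, predict):
    """Evaluate `predict` on each checkerboard half with that half hidden;
    the squared error is measured only at the hidden pixels, so a denoiser
    that merely copies its input cannot score well."""
    H, W = noisy.shape
    cb = (np.add.outer(np.arange(H), np.arange(W)) % 2).astype(bool)
    total = 0.0
    for J in (cb, ~cb):
        masked = noisy.copy()
        masked[J] = 0.0                 # hide partition J from the denoiser
        total += float(((predict(masked) - noisy) ** 2)[J].mean())
    return total / 2.0
```

A horizontal-neighbor average is J-invariant for a checkerboard, since each pixel's left and right neighbors lie in the opposite half; on a noise-free constant image its loss is exactly zero.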
Affiliation(s)
- Jie Yu
- School of Mathematical Sciences, Capital Normal University, Beijing, 100048, People's Republic of China
- Huitao Zhang
- School of Mathematical Sciences, Capital Normal University, Beijing, 100048, People's Republic of China
- Shenzhen National Applied Mathematics Center, Southern University of Science and Technology, Shenzhen, 518055, People's Republic of China
- Peng Zhang
- School of Mathematical Sciences, Capital Normal University, Beijing, 100048, People's Republic of China
- Yining Zhu
- School of Mathematical Sciences, Capital Normal University, Beijing, 100048, People's Republic of China
- Shenzhen National Applied Mathematics Center, Southern University of Science and Technology, Shenzhen, 518055, People's Republic of China
79
Huang Z, Li W, Wang Y, Liu Z, Zhang Q, Jin Y, Wu R, Quan G, Liang D, Hu Z, Zhang N. MLNAN: Multi-level noise-aware network for low-dose CT imaging implemented with constrained cycle Wasserstein generative adversarial networks. Artif Intell Med 2023; 143:102609. [PMID: 37673577] [DOI: 10.1016/j.artmed.2023.102609]
Abstract
Low-dose CT techniques minimize the radiation exposure of patients, and hence the risk of radiation-induced cancer, by estimating high-resolution normal-dose CT images from low-dose acquisitions. In recent years, many deep learning methods have been proposed to solve this problem by building a mapping function between low-dose CT images and their high-dose counterparts. However, most of these methods ignore the effect of different radiation doses on the final CT images, which results in large differences in the intensity of the observable noise. What is more, the noise intensity of low-dose CT images differs significantly across medical device manufacturers. In this paper, we propose a multi-level noise-aware network (MLNAN), implemented with constrained cycle Wasserstein generative adversarial networks, to restore low-dose CT images under uncertain noise levels. In particular, the noise level is classified and the prediction is reused as a prior pattern in the generator networks, and the discriminator network incorporates noise-level determination. Under two dose-reduction strategies, experiments to evaluate the performance of the proposed method are conducted on two datasets: the simulated clinical AAPM challenge dataset and commercial CT datasets from United Imaging Healthcare (UIH). The experimental results illustrate the effectiveness of our proposed method in terms of noise suppression and structural detail preservation compared with several other deep learning-based methods, and ablation studies validate the contribution of the individual components to the afforded performance improvement. Further research is required for practical clinical applications and other medical modalities.
Affiliation(s)
- Zhenxing Huang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Wenbo Li
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Beijing 101408, China
- Yunling Wang
- Department of Radiology, First Affiliated Hospital of Xinjiang Medical University, Urumqi, 830011, China
- Zhou Liu
- Department of Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen, 518116, China
- Qiyang Zhang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Yuxi Jin
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Ruodai Wu
- Department of Radiology, Shenzhen University General Hospital, Shenzhen University Clinical Medical Academy, Shenzhen 518055, China
- Guotao Quan
- Shanghai United Imaging Healthcare, Shanghai 201807, China
- Dong Liang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Zhanli Hu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Na Zhang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
80
Guo K, Chen J, Qiu T, Guo S, Luo T, Chen T, Ren S. MedGAN: An adaptive GAN approach for medical image generation. Comput Biol Med 2023; 163:107119. [PMID: 37364533] [DOI: 10.1016/j.compbiomed.2023.107119]
Abstract
Generative adversarial networks (GANs) and their variants, as effective methods for generating visually appealing images, have shown great potential in medical imaging applications over the past decade. However, some issues remain insufficiently investigated: many models still suffer from mode collapse, vanishing gradients, and convergence failure. Considering that medical images differ from typical RGB images in complexity and dimensionality, we propose an adaptive generative adversarial network, MedGAN, to mitigate these issues. Specifically, we first use the Wasserstein loss as a convergence metric to measure the convergence degree of the generator and the discriminator. We then train MedGAN adaptively based on this metric. Finally, we generate medical images with MedGAN and use them to build few-shot medical data learning models for disease classification and lesion localization. On demodicosis, blister, molluscum, and parakeratosis datasets, our experimental results verify the advantages of MedGAN in model convergence, training speed, and visual quality of generated samples. We believe this approach can be generalized to other medical applications and contribute to radiologists' efforts in disease diagnosis. The source code can be downloaded at https://github.com/geyao-c/MedGAN.
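Using the Wasserstein loss as a convergence signal can be illustrated with a critic-gap monitor. The scheduling rule below is a toy assumption of ours, not the paper's exact adaptive policy:

```python
import numpy as np

def critic_gap(scores_real, scores_fake):
    """Empirical Wasserstein-style gap E[D(real)] - E[D(fake)]; a gap near
    zero means the critic can barely separate real from generated samples."""
    return float(np.mean(scores_real) - np.mean(scores_fake))

def adaptive_step(gap, tol=0.1):
    """Toy schedule (hypothetical): while the gap is small the critic is
    uninformative and gets trained; once it widens, the generator steps."""
    return "critic" if abs(gap) < tol else "generator"
```

The point of any such rule is to replace a fixed critic-to-generator update ratio with one driven by how informative the critic currently is.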
Affiliation(s)
- Kehua Guo
- School of Computer Science and Engineering, Central South University, Changsha 410083, China
- Jie Chen
- School of Computer Science and Engineering, Central South University, Changsha 410083, China
- Tian Qiu
- School of Computer Science and Engineering, Central South University, Changsha 410083, China
- Shaojun Guo
- National Innovation of Defense Technology, Academy of Military Sciences PLA China, Fengtai District, Beijing 100071, China
- Tao Luo
- Huawei Technologies Co., Ltd, Changsha 410006, China
- Tianyu Chen
- School of Computer Science and Engineering, Central South University, Changsha 410083, China
- Sheng Ren
- School of Computer Science and Engineering, Hunan University of Arts and Sciences, Changde 415000, China
81
Dieckmeyer M, Sollmann N, Kupfer K, Löffler MT, Paprottka KJ, Kirschke JS, Baum T. Computed Tomography of the Head: A Systematic Review on Acquisition and Reconstruction Techniques to Reduce Radiation Dose. Clin Neuroradiol 2023; 33:591-610. [PMID: 36862232] [PMCID: PMC10449676] [DOI: 10.1007/s00062-023-01271-5]
Abstract
In 1971, the first computed tomography (CT) scan was performed on a patient's brain. Clinical CT systems were introduced in 1974 and were dedicated to head imaging only. New technological developments, broader availability, and the clinical success of CT led to steady growth in examination numbers. The most frequent indications for non-contrast CT (NCCT) of the head include the assessment of ischemia and stroke, intracranial hemorrhage, and trauma, while CT angiography (CTA) has become the standard for first-line cerebrovascular evaluation. However, the resulting improvements in patient management and clinical outcomes come at the cost of radiation exposure, increasing the risk of secondary morbidity. Radiation dose optimization should therefore always be part of technical advancements in CT imaging. But how can the dose be optimized? What dose reduction can be achieved without compromising diagnostic value, and what is the potential of the upcoming technologies of artificial intelligence and photon-counting CT? In this article, we look for answers to these questions by reviewing dose reduction techniques with respect to the major clinical indications of NCCT and CTA of the head, including a brief perspective on what to expect from current and future developments in CT technology with respect to radiation dose optimization.
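The trade-off this review examines can be quantified with the standard dose bookkeeping: dose-length product DLP = CTDIvol x scan length, and effective dose E = k x DLP, where k is a region-specific conversion coefficient (commonly cited as about 0.0021 mSv/(mGy.cm) for an adult head; this value is an assumption here and should be checked against current guidelines):

```python
def effective_dose_head(ctdi_vol_mGy, scan_length_cm, k=0.0021):
    """Back-of-envelope effective dose for a head CT.
    DLP = CTDIvol * scan length; E = k * DLP, with k in mSv/(mGy*cm).
    The default k is a commonly cited adult-head coefficient (assumption)."""
    dlp = ctdi_vol_mGy * scan_length_cm
    return k * dlp
```

For example, a 60 mGy CTDIvol acquisition over 15 cm gives DLP = 900 mGy.cm and roughly 1.9 mSv, which is why even modest percentage reductions from denoising or photon-counting detectors matter at head-CT examination volumes.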
Affiliation(s)
- Michael Dieckmeyer
- Department of Diagnostic, Interventional and Pediatric Radiology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Nico Sollmann
- Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- TUM-Neuroimaging Center, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Department of Diagnostic and Interventional Radiology, University Hospital Ulm, Ulm, Germany
- Karina Kupfer
- Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Maximilian T. Löffler
- Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Department of Diagnostic and Interventional Radiology, University Medical Center Freiburg, Freiburg im Breisgau, Germany
- Karolin J. Paprottka
- Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Jan S. Kirschke
- Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- TUM-Neuroimaging Center, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Thomas Baum
- Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
82
Huang J, Chen K, Ren Y, Sun J, Wang Y, Tao T, Pu X. CDDnet: Cross-domain denoising network for low-dose CT image via local and global information alignment. Comput Biol Med 2023; 163:107219. [PMID: 37422942] [DOI: 10.1016/j.compbiomed.2023.107219]
Abstract
The domain shift problem has emerged as a challenge in cross-domain low-dose CT (LDCT) image denoising task, where the acquisition of a sufficient number of medical images from multiple sources may be constrained by privacy concerns. In this study, we propose a novel cross-domain denoising network (CDDnet) that incorporates both local and global information of CT images. To address the local component, a local information alignment module has been proposed to regularize the similarity between extracted target and source features from selected patches. To align the general information of the semantic structure from a global perspective, an autoencoder is adopted to learn the latent correlation between the source label and the estimated target label generated by the pre-trained denoiser. Experimental results demonstrate that our proposed CDDnet effectively alleviates the domain shift problem, outperforming other deep learning-based and domain adaptation-based methods under cross-domain scenarios.
Affiliation(s)
- Jiaxin Huang
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Kecheng Chen
- Department of Electrical Engineering, City University of Hong Kong, 999077, Hong Kong Special Administrative Region of China
- Yazhou Ren
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, China; Shenzhen Institute for Advanced Study, University of Electronic Science and Technology of China, Shenzhen, 518110, China
- Jiayu Sun
- West China Hospital, Sichuan University, Chengdu, 610044, China
- Yanmei Wang
- Institute of Traditional Chinese Medicine, Sichuan College of Traditional Chinese Medicine (Sichuan Second Hospital of TCM), Chengdu, 610075, China
- Tao Tao
- Institute of Traditional Chinese Medicine, Sichuan College of Traditional Chinese Medicine (Sichuan Second Hospital of TCM), Chengdu, 610075, China
- Xiaorong Pu
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, China; Shenzhen Institute for Advanced Study, University of Electronic Science and Technology of China, Shenzhen, 518110, China; NHC Key Laboratory of Nuclear Technology Medical Transformation, Mianyang Central Hospital, Mianyang, 621000, China.
83
Wunderlich A, Sklar J. Data-driven modeling of noise time series with convolutional generative adversarial networks. Mach Learn Sci Technol 2023; 4:10.1088/2632-2153/acee44. [PMID: 37693073 PMCID: PMC10484071 DOI: 10.1088/2632-2153/acee44]
Abstract
Random noise arising from physical processes is an inherent characteristic of measurements and a limiting factor for most signal processing and data analysis tasks. Given the recent interest in generative adversarial networks (GANs) for data-driven modeling, it is important to determine to what extent GANs can faithfully reproduce noise in target data sets. In this paper, we present an empirical investigation that aims to shed light on this issue for time series. Namely, we assess two general-purpose GANs for time series that are based on the popular deep convolutional GAN architecture: a direct time-series model and an image-based model that uses a short-time Fourier transform data representation. The GAN models are trained and quantitatively evaluated using distributions of simulated noise time series with known ground-truth parameters. Target time series distributions include a broad range of noise types commonly encountered in physical measurements, electronics, and communication systems: band-limited thermal noise, power law noise, shot noise, and impulsive noise. We find that GANs are capable of learning many noise types, although they predictably struggle when the GAN architecture is not well suited to some aspects of the noise, e.g., impulsive time series with extreme outliers. Our findings provide insights into the capabilities and potential limitations of current approaches to time-series GANs and highlight areas for further research. In addition, our battery of tests provides a useful benchmark to aid the development of deep generative models for time series.
Affiliation(s)
- Adam Wunderlich
- Communications Technology Laboratory, National Institute of Standards and Technology, Boulder, CO 80305, United States of America
- Jack Sklar
- Communications Technology Laboratory, National Institute of Standards and Technology, Boulder, CO 80305, United States of America
84
Baccarelli E, Scarpiniti M, Momenzadeh A. Twinned Residual Auto-Encoder (TRAE) - A new DL architecture for denoising super-resolution and task-aware feature learning from COVID-19 CT images. Expert Syst Appl 2023; 225:120104. [PMID: 37090446 PMCID: PMC10106117 DOI: 10.1016/j.eswa.2023.120104]
Abstract
The detection of the COronaVIrus Disease 2019 (COVID-19) from Computed Tomography (CT) scans has become a very important task in modern medical diagnosis. Unfortunately, typical resolutions of state-of-the-art CT scans are still not adequate for reliable and accurate automatic detection of COVID-19 disease. Motivated by this consideration, in this paper, we propose a novel architecture that jointly addresses the Single-Image Super-Resolution (SISR) and reliable classification problems for Low Resolution (LR) and noisy CT scans. Specifically, the proposed architecture is based on a pair of Twinned Residual Auto-Encoders (TRAE), which exploit the feature vectors and the SR images recovered by a Master AE to perform transfer learning and thereby improve the training of a "twinned" Follower AE. In addition, we develop a Task-Aware (TA) version of the basic TRAE architecture, namely the TA-TRAE, which further utilizes the set of feature vectors generated by the Follower AE for the joint training of an additional auxiliary classifier, so as to perform automated medical diagnosis on the basis of the available LR input images without human support. Experimental results and comparisons with a number of state-of-the-art CNN/GAN/CycleGAN benchmark SISR architectures, performed at ×2, ×4, and ×8 super-resolution (i.e., upscaling) factors, support the effectiveness of the proposed TRAE/TA-TRAE architectures. In particular, the detection accuracy attained by the proposed architectures outperforms that of the implemented CNN, GAN, and CycleGAN baselines by up to 9.0%, 6.5%, and 6.0%, respectively, at upscaling factors as high as ×8.
Affiliation(s)
- Enzo Baccarelli
- Department of Information Engineering, Electronics and Telecommunications (DIET), Sapienza University of Rome, Via Eudossiana 18, 00184 Rome, Italy
- Michele Scarpiniti
- Department of Information Engineering, Electronics and Telecommunications (DIET), Sapienza University of Rome, Via Eudossiana 18, 00184 Rome, Italy
- Alireza Momenzadeh
- Department of Information Engineering, Electronics and Telecommunications (DIET), Sapienza University of Rome, Via Eudossiana 18, 00184 Rome, Italy
85
Muller FM, Maebe J, Vanhove C, Vandenberghe S. Dose reduction and image enhancement in micro-CT using deep learning. Med Phys 2023; 50:5643-5656. [PMID: 36994779 DOI: 10.1002/mp.16385]
Abstract
BACKGROUND In preclinical settings, micro-computed tomography (CT) provides a powerful tool to acquire high-resolution anatomical images of rodents and offers the advantage of non-invasive in vivo assessment of disease progression and therapy efficacy. Much higher resolutions are needed in rodents to achieve discriminatory capabilities equivalent in scale to those in humans. High-resolution imaging, however, comes at the expense of increased scan times and higher doses. Specifically, with preclinical longitudinal imaging, there are concerns that dose accumulation may affect the experimental outcomes of animal models. PURPOSE Dose reduction efforts under the ALARA (as low as reasonably achievable) principle are thus a key point of attention. However, low-dose CT acquisitions inherently carry higher noise levels, which deteriorate image quality and negatively impact diagnostic performance. Many denoising techniques already exist, and deep learning (DL) has become increasingly popular for image denoising, but research has mostly focused on clinical CT, with limited studies conducted on preclinical CT imaging. We investigate the potential of convolutional neural networks (CNNs) for restoring high-quality micro-CT images from low-dose (noisy) images. The novelty of the CNN denoising frameworks presented in this work consists of utilizing image pairs with realistic CT noise present in the input as well as in the target image used for model training; a noisier image acquired with a low-dose protocol is matched to a less noisy image acquired with a higher-dose scan of the same mouse. METHODS Low- and high-dose ex vivo micro-CT scans of 38 mice were acquired. Two CNN models, based on a 2D and a 3D four-layer U-Net, were trained with a mean absolute error loss (30 training, 4 validation and 4 test sets). To assess denoising performance, ex vivo mice and phantom data were used. Both CNN approaches were compared to existing methods such as spatial filtering (Gaussian, Median, Wiener) and an iterative total-variation image reconstruction algorithm. Image quality metrics were derived from the phantom images. A first observer study (n = 23) was set up to rank the overall quality of differently denoised images. A second observer study (n = 18) estimated the dose reduction factor of the investigated 2D CNN method. RESULTS Visual and quantitative results show that both CNN algorithms exhibit superior performance in terms of noise suppression, structural preservation and contrast enhancement over the comparator methods. The quality scoring by 23 medical imaging experts also indicates that the investigated 2D CNN approach is consistently evaluated as the best-performing denoising method. Results from the second observer study and quantitative measurements suggest that CNN-based denoising could offer a 2-4× dose reduction, with an estimated dose reduction factor of about 3.2 for the considered 2D network. CONCLUSIONS Our results demonstrate the potential of DL in micro-CT for higher-quality imaging at low-dose acquisition settings. In the context of preclinical research, this offers promising future prospects for managing the cumulative severity effects of radiation in longitudinal studies.
Affiliation(s)
- Florence M Muller
- Medical Image and Signal Processing (MEDISIP), Department of Electronics and Information Systems, Faculty of Engineering and Architecture, Ghent University, Ghent, Belgium
- Jens Maebe
- Medical Image and Signal Processing (MEDISIP), Department of Electronics and Information Systems, Faculty of Engineering and Architecture, Ghent University, Ghent, Belgium
- Christian Vanhove
- Medical Image and Signal Processing (MEDISIP), Department of Electronics and Information Systems, Faculty of Engineering and Architecture, Ghent University, Ghent, Belgium
- Stefaan Vandenberghe
- Medical Image and Signal Processing (MEDISIP), Department of Electronics and Information Systems, Faculty of Engineering and Architecture, Ghent University, Ghent, Belgium
86
Wang Z, Nawaz M, Khan S, Xia P, Irfan M, Wong EC, Chan R, Cao P. Cross modality generative learning framework for anatomical transitive Magnetic Resonance Imaging (MRI) from Electrical Impedance Tomography (EIT) image. Comput Med Imaging Graph 2023; 108:102272. [PMID: 37515968 DOI: 10.1016/j.compmedimag.2023.102272]
Abstract
This paper presents a cross-modality generative learning framework for transitive magnetic resonance imaging (MRI) from electrical impedance tomography (EIT). The proposed framework is aimed at converting low-resolution EIT images to high-resolution wrist MRI images using a cascaded cycle generative adversarial network (CycleGAN) model. This model comprises three main components: the collection of initial EIT from the medical device, the generation of a high-resolution transitive EIT image from the corresponding MRI image for domain adaptation, and the coalescence of two CycleGAN models for cross-modality generation. The initial EIT image was generated at three different frequencies (70 kHz, 140 kHz, and 200 kHz) using a 16-electrode belt. Wrist T1-weighted images were acquired on a 1.5T MRI. A total of 19 normal volunteers were imaged using both EIT and MRI, which resulted in 713 paired EIT and MRI images. The cascaded CycleGAN, end-to-end CycleGAN, and Pix2Pix models were trained and tested on the same cohort. The proposed method achieved the highest accuracy in bone detection, with 0.97 for the proposed cascaded CycleGAN, 0.68 for end-to-end CycleGAN, and 0.70 for the Pix2Pix model. Visual inspection showed that the proposed method reduced bone-related errors in the MRI-style anatomical reference compared with end-to-end CycleGAN and Pix2Pix. Multifrequency EIT inputs reduced the testing normalized root mean squared error of MRI-style anatomical reference from 67.9% ± 12.7% to 61.4% ± 8.8% compared with that of single-frequency EIT. The mean conductivity values of fat and bone from regularized EIT were 0.0435 ± 0.0379 S/m and 0.0183 ± 0.0154 S/m, respectively, when the anatomical prior was employed. These results demonstrate that the proposed framework is able to generate MRI-style anatomical references from EIT images with a good degree of accuracy.
Affiliation(s)
- Zuojun Wang
- The Department of Diagnostic Radiology, The University of Hong Kong, Hong Kong.
- Mehmood Nawaz
- The Department of Biomedical Engineering, The Chinese University of Hong Kong, Hong Kong.
- Sheheryar Khan
- School of Professional Education and Executive Development, The Hong Kong Polytechnic University, Hong Kong
- Peng Xia
- The Department of Diagnostic Radiology, The University of Hong Kong, Hong Kong
- Muhammad Irfan
- Faculty of Electrical Engineering, Ghulam Ishaq Khan Institute of Engineering Sciences and Technology, Pakistan
- Peng Cao
- The Department of Diagnostic Radiology, The University of Hong Kong, Hong Kong.
87
Shan H, Vimieiro RB, Borges LR, Vieira MAC, Wang G. Impact of loss functions on the performance of a deep neural network designed to restore low-dose digital mammography. Artif Intell Med 2023; 142:102555. [PMID: 37316093 PMCID: PMC10267506 DOI: 10.1016/j.artmed.2023.102555]
Abstract
Digital mammography is currently the most common imaging tool for breast cancer screening. Although the benefits of using digital mammography for cancer screening outweigh the risks associated with the x-ray exposure, the radiation dose must be kept as low as possible while maintaining the diagnostic utility of the generated images, thus minimizing patient risks. Many studies have investigated the feasibility of dose reduction by restoring low-dose images using deep neural networks. In these cases, choosing the appropriate training database and loss function is crucial and impacts the quality of the results. In this work, we used a standard residual network (ResNet) to restore low-dose digital mammography images and evaluated the performance of several loss functions. For training purposes, we extracted 256,000 image patches from a dataset of 400 images of retrospective clinical mammography exams, where dose reduction factors of 75% and 50% were simulated to generate low- and standard-dose pairs. We validated the network in a real scenario by using a physical anthropomorphic breast phantom to acquire real low-dose and standard full-dose images in a commercially available mammography system, which were then processed through our trained model. We benchmarked our results against an analytical restoration model for low-dose digital mammography. Objective assessment was performed through the signal-to-noise ratio (SNR) and the mean normalized squared error (MNSE), decomposed into residual noise and bias. Statistical tests revealed that the use of the perceptual loss (PL4) resulted in statistically significant differences when compared to all other loss functions. Additionally, images restored using the PL4 achieved the residual noise closest to that of the standard dose. On the other hand, the perceptual loss PL3, the structural similarity index (SSIM) and one of the adversarial losses achieved the lowest bias for both dose reduction factors. The source code of our deep neural network is available at https://github.com/WANG-AXIS/LdDMDenoising.
Affiliation(s)
- Hongming Shan
- Institute of Science and Technology for Brain-inspired Intelligence and MOE Frontiers Center for Brain Science, Fudan University, Shanghai, China; Shanghai Center for Brain Science and Brain-inspired Technology, Shanghai, China; Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, USA.
- Rodrigo B Vimieiro
- Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, USA; Department of Electrical and Computer Engineering, São Carlos School of Engineering, University of São Paulo, São Carlos, Brazil.
- Lucas R Borges
- Department of Electrical and Computer Engineering, São Carlos School of Engineering, University of São Paulo, São Carlos, Brazil; Real Time Tomography, LLC, Villanova, USA.
- Marcelo A C Vieira
- Department of Electrical and Computer Engineering, São Carlos School of Engineering, University of São Paulo, São Carlos, Brazil.
- Ge Wang
- Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, USA.
88
Naderi M, Karimi N, Emami A, Shirani S, Samavi S. Dynamic-Pix2Pix: Medical image segmentation by injecting noise to cGAN for modeling input and target domain joint distributions with limited training data. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104877]
89
Li Z, Liu Y, Shu H, Lu J, Kang J, Chen Y, Gui Z. Multi-Scale Feature Fusion Network for Low-Dose CT Denoising. J Digit Imaging 2023; 36:1808-1825. [PMID: 36914854 PMCID: PMC10406773 DOI: 10.1007/s10278-023-00805-0]
Abstract
Computed tomography (CT) is an imaging technique extensively used in medicine, but an excessive radiation dose in a CT scan is harmful to the human body. Decreasing the radiation dose results in increased noise and artifacts in the reconstructed image, blurring internal tissue and edge details. To obtain high-quality CT images, we present a multi-scale feature fusion network (MSFLNet) for low-dose CT (LDCT) denoising. In our MSFLNet, we combine multiple feature extraction modules, effective noise reduction modules, and fusion modules constructed using an attention mechanism into a horizontally connected multi-scale structure as the overall architecture of the network, which is used to construct feature maps at all levels and scales. We define a novel composite loss function for LDCT denoising, composed of a pixel-level loss based on MS-SSIM-L1 and an edge-based edge loss. In short, our approach learns a rich set of features that combine contextual information from multiple scales while maintaining the spatial details of denoised CT images. Our experimental results indicate that, compared with existing methods, the new model achieves a peak signal-to-noise ratio (PSNR) of 33.6490 and a structural similarity (SSIM) of 0.9174 on CT images from the AAPM dataset, and it also achieves good results on the Piglet dataset at different doses. The results show that the method removes noise and artifacts while effectively preserving the structure and texture information of CT images.
Affiliation(s)
- Zhiyuan Li
- School of Information and Communication Engineering, North University of China, No.3, College Road, 030051, Taiyuan, Shanxi Province, China
- State Key Laboratory of Dynamic Testing Technology, North University of China, 030051, Taiyuan, China
- Yi Liu
- School of Information and Communication Engineering, North University of China, No.3, College Road, 030051, Taiyuan, Shanxi Province, China
- State Key Laboratory of Dynamic Testing Technology, North University of China, 030051, Taiyuan, China
- Huazhong Shu
- Jiangsu Provincial Joint International Research Laboratory of Medical Information Processing, Southeast University, 211189, Nanjing, Jiangsu, China
- Jing Lu
- School of Information and Communication Engineering, North University of China, No.3, College Road, 030051, Taiyuan, Shanxi Province, China
- State Key Laboratory of Dynamic Testing Technology, North University of China, 030051, Taiyuan, China
- Jiaqi Kang
- School of Information and Communication Engineering, North University of China, No.3, College Road, 030051, Taiyuan, Shanxi Province, China
- State Key Laboratory of Dynamic Testing Technology, North University of China, 030051, Taiyuan, China
- Yang Chen
- School of Computer Science and Engineering, Southeast University, 211189, Nanjing, Jiangsu, China
- Key Laboratory of Computer Network and Information Integration Ministry of Education, Southeast University, 211189, Nanjing, Jiangsu, China
- Zhiguo Gui
- School of Information and Communication Engineering, North University of China, No.3, College Road, 030051, Taiyuan, Shanxi Province, China.
- State Key Laboratory of Dynamic Testing Technology, North University of China, 030051, Taiyuan, China.
90
Hirairi T, Ichikawa K, Urikura A, Kawashima H, Tabata T, Matsunami T. Improvement of diagnostic performance of hyperacute ischemic stroke in head CT using an image-based noise reduction technique with non-black-boxed process. Phys Med 2023; 112:102646. [PMID: 37549457 DOI: 10.1016/j.ejmp.2023.102646]
Abstract
PURPOSE This study aims to investigate whether an image-based noise reduction (INR) technique with a conventional rule-based algorithm involving no black-boxed processes can outperform an existing hybrid-type iterative reconstruction (HIR) technique when applied to brain CT images for the diagnosis of early CT signs, which generally exhibit low-contrast lesions that are difficult to detect. METHODS The subjects comprised 27 patients having infarctions within 4.5 h of onset and 27 patients with no change in brain parenchyma. Images with thicknesses of 5 mm and 0.625 mm were reconstructed by HIR. Images with a thickness of 0.625 mm reconstructed by filtered back projection (FBP) were processed by INR. Contrast-to-noise ratios (CNRs) were calculated between gray and white matter; the lentiform nucleus and internal capsule; and infarcted and non-infarcted areas. Two radiologists subjectively evaluated the presence of hyperdense artery signs (HASs) and infarctions and visually scored three image-quality properties (0.625-mm HIR images were excluded because of their notably worse noise appearance). RESULTS The CNRs of INR were significantly better than those of HIR, with P < 0.001 for all the indicators. INR yielded significantly higher areas under the curve for both infarction and HAS detection than HIR (P < 0.001). INR also significantly improved the visual scores of all three indicators. CONCLUSION The INR incorporating a simple and reproducible algorithm was more effective than HIR in detecting early CT signs and can potentially be applied to CT images from a large variety of CT systems.
Affiliation(s)
- Tetsuya Hirairi
- Department of Radiological Technology, Juntendo University Shizuoka Hospital, 1129 Nagaoka, Izunokuni, Shizuoka, 410-2295, Japan.
- Katsuhiro Ichikawa
- Institute of Medical, Pharmaceutical and Health Sciences, Kanazawa University, 5-11-80 Kodatsuno, Kanazawa, Ishikawa, 920-0942, Japan.
- Atsushi Urikura
- Department of Radiological Technology, Radiological Diagnosis, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuuouku, Tokyo, 104-0045, Japan.
- Hiroki Kawashima
- Faculty of Health Sciences, Institute of Medical, Pharmaceutical and Health Sciences, Kanazawa University, 5-11-80 Kodatsuno, Kanazawa 920-0942, Japan.
- Takasumi Tabata
- Department of Radiology, Juntendo University Shizuoka Hospital, 1129 Nagaoka, Izunokuni, Shizuoka, 410-2295, Japan.
- Tamaki Matsunami
- Department of Radiology, Juntendo University Shizuoka Hospital, 1129 Nagaoka, Izunokuni, Shizuoka, 410-2295, Japan.
91
Dashtbani Moghari M, Sanaat A, Young N, Moore K, Zaidi H, Evans A, Fulton RR, Kyme AZ. Reduction of scan duration and radiation dose in cerebral CT perfusion imaging of acute stroke using a recurrent neural network. Phys Med Biol 2023; 68:165005. [PMID: 37327792 DOI: 10.1088/1361-6560/acdf3a]
Abstract
Objective. Cerebral CT perfusion (CTP) imaging is most commonly used to diagnose acute ischaemic stroke and support treatment decisions. Shortening CTP scan duration is desirable to reduce the accumulated radiation dose and the risk of patient head movement. In this study, we present a novel application of a stochastic adversarial video prediction approach to reduce CTP imaging acquisition time. Approach. A variational autoencoder and generative adversarial network (VAE-GAN) were implemented in a recurrent framework in three scenarios: to predict the last 8 (24 s), 13 (31.5 s) and 18 (39 s) image frames of the CTP acquisition from the first 25 (36 s), 20 (28.5 s) and 15 (21 s) acquired frames, respectively. The model was trained using 65 stroke cases and tested on 10 unseen cases. Predicted frames were assessed against the ground truth in terms of image quality and haemodynamic maps, bolus shape characteristics and volumetric analysis of lesions. Main results. In all three prediction scenarios, the mean percentage error between the area, full-width-at-half-maximum and maximum enhancement of the predicted and ground-truth bolus curves was less than 4 ± 4%. The best peak signal-to-noise ratio and structural similarity of the predicted haemodynamic maps were obtained for cerebral blood volume, followed (in order) by cerebral blood flow, mean transit time and time to peak. For the three prediction scenarios, the average volumetric error of the lesion was overestimated by 7%-15%, 11%-28% and 7%-22% for the infarct, penumbra and hypo-perfused regions, respectively, and the corresponding spatial agreement for these regions was 67%-76%, 76%-86% and 83%-92%. Significance. This study suggests that a recurrent VAE-GAN could potentially be used to predict a portion of CTP frames from truncated acquisitions, preserving the majority of clinical content in the images and potentially reducing the scan duration and radiation dose simultaneously by 65% and 54.5%, respectively.
Affiliation(s)
- Mahdieh Dashtbani Moghari
- School of Biomedical Engineering, Faculty of Engineering and Information Technologies, The University of Sydney, Sydney, Australia
- Amirhossein Sanaat
- Geneva University Hospitals, Division of Nuclear Medicine & Molecular Imaging, CH-1205 Geneva, Switzerland
- Noel Young
- Department of Radiology, Westmead Hospital, Sydney, Australia
- Medical imaging group, School of Medicine, Western Sydney University, Sydney, Australia
- Krystal Moore
- Department of Radiology, Westmead Hospital, Sydney, Australia
- Habib Zaidi
- Geneva University Hospitals, Division of Nuclear Medicine & Molecular Imaging, CH-1205 Geneva, Switzerland
- Andrew Evans
- Department of Aged Care & Stroke, Westmead Hospital, Sydney, Australia
- School of Health Sciences, University of Sydney, Sydney, Australia
- Roger R Fulton
- School of Health Sciences, University of Sydney, Sydney, Australia
- Department of Medical Physics, Westmead Hospital, Sydney, Australia
- The Brain & Mind Centre, The University of Sydney, Sydney, Australia
- Andre Z Kyme
- School of Biomedical Engineering, Faculty of Engineering and Information Technologies, The University of Sydney, Sydney, Australia
- The Brain & Mind Centre, The University of Sydney, Sydney, Australia
92
Liao S, Mo Z, Zeng M, Wu J, Gu Y, Li G, Quan G, Lv Y, Liu L, Yang C, Wang X, Huang X, Zhang Y, Cao W, Dong Y, Wei Y, Zhou Q, Xiao Y, Zhan Y, Zhou XS, Shi F, Shen D. Fast and low-dose medical imaging generation empowered by hybrid deep-learning and iterative reconstruction. Cell Rep Med 2023; 4:101119. [PMID: 37467726 PMCID: PMC10394257 DOI: 10.1016/j.xcrm.2023.101119]
Abstract
Fast and low-dose reconstructions of medical images are highly desired in clinical routines. We propose a hybrid deep-learning and iterative reconstruction (hybrid DL-IR) framework and apply it for fast magnetic resonance imaging (MRI), fast positron emission tomography (PET), and low-dose computed tomography (CT) image generation tasks. First, in a retrospective MRI study (6,066 cases), we demonstrate its capability of handling 3- to 10-fold under-sampled MR data, enabling organ-level coverage with only 10- to 100-s scan time; second, a low-dose CT study (142 cases) shows that our framework can successfully alleviate the noise and streak artifacts in scans performed with only 10% radiation dose (0.61 mGy); and last, a fast whole-body PET study (131 cases) allows us to faithfully reconstruct tumor-induced lesions, including small ones (<4 mm), from 2- to 4-fold-accelerated PET acquisition (30-60 s/bp). This study offers a promising avenue for accurate and high-quality image reconstruction with broad clinical value.
Affiliation(s)
- Shu Liao
  - Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200232, China
- Zhanhao Mo
  - Department of Radiology, China-Japan Union Hospital of Jilin University, Changchun 130033, China
- Mengsu Zeng
  - Department of Radiology, Shanghai Institute of Medical Imaging, Zhongshan Hospital, Fudan University, Shanghai 200032, China
- Jiaojiao Wu
  - Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200232, China
- Yuning Gu
  - School of Biomedical Engineering, ShanghaiTech University, Shanghai 201210, China
- Guobin Li
  - Shanghai United Imaging Healthcare Co., Ltd., Shanghai 201800, China
- Guotao Quan
  - Shanghai United Imaging Healthcare Co., Ltd., Shanghai 201800, China
- Yang Lv
  - Shanghai United Imaging Healthcare Co., Ltd., Shanghai 201800, China
- Lin Liu
  - Department of Radiology, China-Japan Union Hospital of Jilin University, Changchun 130033, China
- Chun Yang
  - Department of Radiology, Shanghai Institute of Medical Imaging, Zhongshan Hospital, Fudan University, Shanghai 200032, China
- Xinglie Wang
  - Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200232, China
- Xiaoqian Huang
  - Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200232, China
- Yang Zhang
  - Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200232, China
- Wenjing Cao
  - Shanghai United Imaging Healthcare Co., Ltd., Shanghai 201800, China
- Yun Dong
  - Shanghai United Imaging Healthcare Co., Ltd., Shanghai 201800, China
- Ying Wei
  - Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200232, China
- Qing Zhou
  - Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200232, China
- Yongqin Xiao
  - Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200232, China
- Yiqiang Zhan
  - Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200232, China
- Xiang Sean Zhou
  - Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200232, China
- Feng Shi
  - Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200232, China
- Dinggang Shen
  - Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200232, China
  - School of Biomedical Engineering, ShanghaiTech University, Shanghai 201210, China
  - Shanghai Clinical Research and Trial Center, Shanghai 200122, China
93
Jiao C, Ling D, Bian S, Vassantachart A, Cheng K, Mehta S, Lock D, Zhu Z, Feng M, Thomas H, Scholey JE, Sheng K, Fan Z, Yang W. Contrast-Enhanced Liver Magnetic Resonance Image Synthesis Using Gradient Regularized Multi-Modal Multi-Discrimination Sparse Attention Fusion GAN. Cancers (Basel) 2023; 15:3544. [PMID: 37509207 PMCID: PMC10377331 DOI: 10.3390/cancers15143544] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Received: 04/18/2023] [Revised: 07/03/2023] [Accepted: 07/05/2023] [Indexed: 07/30/2023]
Abstract
PURPOSE To provide abdominal contrast-enhanced MR image synthesis, we developed a gradient-regularized multi-modal multi-discrimination sparse attention fusion generative adversarial network (GRMM-GAN) to avoid repeated contrast injections to patients and to facilitate adaptive monitoring. METHODS With IRB approval, 165 abdominal MR studies from 61 liver cancer patients were retrospectively solicited from our institutional database. Each study included T2, T1 pre-contrast (T1pre), and T1 contrast-enhanced (T1ce) images. The GRMM-GAN synthesis pipeline consists of a sparse attention fusion network, an image gradient regularizer (GR), and a generative adversarial network with multi-discrimination. The studies were randomly divided into 115 for training, 20 for validation, and 30 for testing. The two pre-contrast MR modalities, T2 and T1pre images, were adopted as inputs in the training phase, and the T1ce image at the portal venous phase was used as the output. The synthesized T1ce images were compared with the ground-truth T1ce images. The evaluation metrics included peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and mean squared error (MSE). A Turing test and experts' contours evaluated the image synthesis quality. RESULTS The proposed GRMM-GAN model achieved a PSNR of 28.56, an SSIM of 0.869, and an MSE of 83.27, with statistically significant improvements (p < 0.05) over state-of-the-art model comparisons on all tested metrics. The average Turing test score was 52.33%, close to random guessing, supporting the model's potential for clinical application. In the tumor-specific region analysis, the average tumor contrast-to-noise ratio (CNR) of the synthesized MR images was not statistically significantly different from that of the real MR images. The average DICE between real and synthetic images was 0.90, compared with an inter-operator DICE of 0.91.
CONCLUSION We demonstrated a novel multi-modal MR image synthesis network, GRMM-GAN, for T1ce MR synthesis based on pre-contrast T1 and T2 MR images. GRMM-GAN shows promise for avoiding repeated contrast injections during radiation therapy.
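The PSNR and MSE figures quoted above follow the standard definitions; for reference, a minimal NumPy sketch (function names are ours, not from the paper):

```python
import numpy as np

def mse(ref, test):
    """Mean squared error between two same-sized images."""
    return float(np.mean((np.asarray(ref, float) - np.asarray(test, float)) ** 2))

def psnr(ref, test, data_range=255.0):
    """Peak signal-to-noise ratio in dB for a given dynamic range."""
    err = mse(ref, test)
    return float("inf") if err == 0 else 10.0 * np.log10(data_range ** 2 / err)

a = np.zeros((8, 8))
b = np.ones((8, 8))          # every pixel off by 1 -> MSE = 1
print(mse(a, b))             # 1.0
print(round(psnr(a, b), 2))  # 48.13 dB
```

Note that reported PSNR and MSE are typically averaged over cases separately, so a published (PSNR, MSE) pair need not satisfy the formula exactly.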
Affiliation(s)
- Changzhe Jiao
  - Department of Radiation Oncology, Keck School of Medicine of USC, Los Angeles, CA 90033, USA
  - Department of Radiation Oncology, UC San Francisco, San Francisco, CA 94143, USA
- Diane Ling
  - Department of Radiation Oncology, Keck School of Medicine of USC, Los Angeles, CA 90033, USA
- Shelly Bian
  - Department of Radiation Oncology, Keck School of Medicine of USC, Los Angeles, CA 90033, USA
- April Vassantachart
  - Department of Radiation Oncology, Keck School of Medicine of USC, Los Angeles, CA 90033, USA
- Karen Cheng
  - Department of Radiation Oncology, Keck School of Medicine of USC, Los Angeles, CA 90033, USA
- Shahil Mehta
  - Department of Radiation Oncology, Keck School of Medicine of USC, Los Angeles, CA 90033, USA
- Derrick Lock
  - Department of Radiation Oncology, Keck School of Medicine of USC, Los Angeles, CA 90033, USA
- Zhenyu Zhu
  - Guangzhou Institute of Technology, Xidian University, Guangzhou 510555, China
- Mary Feng
  - Department of Radiation Oncology, UC San Francisco, San Francisco, CA 94143, USA
- Horatio Thomas
  - Department of Radiation Oncology, UC San Francisco, San Francisco, CA 94143, USA
- Jessica E. Scholey
  - Department of Radiation Oncology, UC San Francisco, San Francisco, CA 94143, USA
- Ke Sheng
  - Department of Radiation Oncology, UC San Francisco, San Francisco, CA 94143, USA
- Zhaoyang Fan
  - Department of Radiology, Keck School of Medicine of USC, Los Angeles, CA 90033, USA
- Wensha Yang
  - Department of Radiation Oncology, Keck School of Medicine of USC, Los Angeles, CA 90033, USA
  - Department of Radiation Oncology, UC San Francisco, San Francisco, CA 94143, USA
94
Zhou Z, Inoue A, McCollough CH, Yu L. Self-trained deep convolutional neural network for noise reduction in CT. J Med Imaging (Bellingham) 2023; 10:044008. [PMID: 37636895 PMCID: PMC10449263 DOI: 10.1117/1.jmi.10.4.044008] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 03/12/2023] [Revised: 08/04/2023] [Accepted: 08/08/2023] [Indexed: 08/29/2023]
Abstract
Purpose Supervised deep convolutional neural network (CNN)-based methods have been actively used in clinical CT to reduce image noise. The networks of these methods are typically trained using paired high- and low-quality data from a massive number of patient and/or phantom images. This training process is tedious, and a network trained under one condition may not generalize to patient images acquired and reconstructed under different conditions. We propose a self-trained deep CNN (ST_CNN) method for noise reduction in CT that does not rely on pre-existing training datasets. Approach The ST_CNN training was accomplished using extensive data augmentation in the projection domain, and inference was applied to the data itself. Specifically, multiple independent noise insertions were applied to the original patient projection data to generate multiple realizations of low-quality projection data. Then, rotation augmentation was adopted for both the original and low-quality projection data by applying the rotation angle directly to the projection data, so that images were rotated at arbitrary angles without introducing additional bias. A large number of low- and high-quality images from the same patient were then reconstructed and paired for training the ST_CNN model. Results No significant difference was found between the ST_CNN and conventional CNN models in terms of peak signal-to-noise ratio and structural similarity index measure. The ST_CNN model outperformed the conventional CNN model in terms of noise texture and homogeneity in liver parenchyma, as well as subjective visualization of liver lesions. The ST_CNN may slightly sacrifice vessel sharpness compared with the conventional CNN model, but without affecting the visibility of peripheral vessels or the diagnosis of vascular pathology.
Conclusions The proposed ST_CNN method trained from the data itself may achieve similar image quality in comparison with conventional deep CNN denoising methods pre-trained on external datasets.
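The data-generation idea behind the self-training can be caricatured in a few lines of NumPy. The sketch below is our own simplification (a Gaussian surrogate for the scanner-specific projection-domain noise insertion, and circular view shifts standing in for rotation augmentation), not the paper's implementation:

```python
import numpy as np

def insert_noise(sino, dose_fraction, rng):
    """Simulate a lower-dose acquisition: the extra noise variance relative
    to the full-dose scan scales with (1/dose_fraction - 1)."""
    extra_sd = np.sqrt((1.0 / dose_fraction - 1.0) * np.maximum(sino, 1e-6))
    return sino + rng.normal(0.0, 1.0, sino.shape) * extra_sd

def self_training_pairs(sino, n_noise, view_shifts, dose_fraction, rng):
    """Build (high-quality, low-quality) sinogram pairs from one patient's
    own data: several noise realizations, each also 'rotated' by circularly
    shifting the projection-angle axis."""
    pairs = []
    for _ in range(n_noise):
        noisy = insert_noise(sino, dose_fraction, rng)
        for s in view_shifts:
            pairs.append((np.roll(sino, s, axis=0), np.roll(noisy, s, axis=0)))
    return pairs

rng = np.random.default_rng(0)
sino = rng.random((180, 64))                       # 180 views x 64 detector bins
pairs = self_training_pairs(sino, 3, [0, 45, 90], 0.25, rng)
print(len(pairs))  # 9 training pairs from a single scan
```

One scan thus yields n_noise × n_shifts training pairs without any external dataset.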
Affiliation(s)
- Zhongxing Zhou
  - Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
- Akitoshi Inoue
  - Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
- Lifeng Yu
  - Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
95
Lee D, Weinhardt F, Hommel J, Piotrowski J, Class H, Steeb H. Machine learning assists in increasing the time resolution of X-ray computed tomography applied to mineral precipitation in porous media. Sci Rep 2023; 13:10529. [PMID: 37386125 DOI: 10.1038/s41598-023-37523-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 02/03/2023] [Accepted: 06/22/2023] [Indexed: 07/01/2023]
Abstract
Many subsurface engineering technologies and natural processes cause porous-medium properties, such as porosity or permeability, to evolve over time. Studying and understanding such processes at the pore scale is strongly aided by visualizing the details of geometric and morphological changes in the pores. For realistic 3D porous media, X-ray computed tomography (XRCT) is the method of choice for visualization. However, the necessary high spatial resolution requires either access to limited high-energy synchrotron facilities or data-acquisition times considerably longer (e.g., hours) than the time scales of the processes changing the pore geometry (e.g., minutes). Thus, conventional benchtop XRCT technologies have so far often been too slow to capture dynamic processes, and interrupting experiments to perform XRCT scans is, in many instances, not a viable approach either. We propose a novel workflow for investigating dynamic precipitation processes in porous-media systems in 3D using conventional XRCT technology. Our workflow limits the data-acquisition time by reducing the number of projections and enhances the resulting lower-quality reconstructed images using machine-learning algorithms trained on images reconstructed from high-quality initial- and final-stage scans. We apply the proposed workflow to induced carbonate precipitation within a porous-media sample of sintered glass beads, increasing the temporal resolution sufficiently to study the evolution of precipitate accumulation with an available benchtop XRCT device.
Affiliation(s)
- Dongwon Lee
  - Institute of Applied Mechanics (CE), University of Stuttgart, Pfaffenwaldring 7, 70569, Stuttgart, Germany
- Felix Weinhardt
  - Institute for Modelling Hydraulic and Environmental Systems, University of Stuttgart, Pfaffenwaldring 61, 70569, Stuttgart, Germany
- Johannes Hommel
  - Institute for Modelling Hydraulic and Environmental Systems, University of Stuttgart, Pfaffenwaldring 61, 70569, Stuttgart, Germany
- Joseph Piotrowski
  - Agrosphere (IBG-3), Institute of Bio- and Geosciences, Forschungszentrum Jülich, 52425, Jülich, Germany
- Holger Class
  - Institute for Modelling Hydraulic and Environmental Systems, University of Stuttgart, Pfaffenwaldring 61, 70569, Stuttgart, Germany
- Holger Steeb
  - Institute of Applied Mechanics (CE), University of Stuttgart, Pfaffenwaldring 7, 70569, Stuttgart, Germany
  - SC SimTech, University of Stuttgart, Pfaffenwaldring 5, 70569, Stuttgart, Germany
96
Illimoottil M, Ginat D. Recent Advances in Deep Learning and Medical Imaging for Head and Neck Cancer Treatment: MRI, CT, and PET Scans. Cancers (Basel) 2023; 15:3267. [PMID: 37444376 PMCID: PMC10339989 DOI: 10.3390/cancers15133267] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Received: 04/17/2023] [Revised: 05/25/2023] [Accepted: 05/27/2023] [Indexed: 07/15/2023]
Abstract
Deep learning techniques have been developed for analyzing head and neck cancer imaging. This review covers deep learning applications in cancer imaging, emphasizing tumor detection, segmentation, classification, and response prediction. In particular, advanced deep learning techniques, such as convolutional autoencoders, generative adversarial networks (GANs), and transformer models, as well as the limitations of traditional imaging and the complementary roles of deep learning and traditional techniques in cancer management are discussed. Integration of radiomics, radiogenomics, and deep learning enables predictive models that aid in clinical decision-making. Challenges include standardization, algorithm interpretability, and clinical validation. Key gaps and controversies involve model generalizability across different imaging modalities and tumor types and the role of human expertise in the AI era. This review seeks to encourage advancements in deep learning applications for head and neck cancer management, ultimately enhancing patient care and outcomes.
Affiliation(s)
- Mathew Illimoottil
  - School of Medicine, University of Missouri-Kansas City, Kansas City, MO 64018, USA
- Daniel Ginat
  - Department of Radiology, The University of Chicago, Chicago, IL 60637, USA
97
van Velzen SGM, Dobrolinska MM, Knaapen P, van Herten RLM, Jukema R, Danad I, Slart RHJA, Greuter MJW, Išgum I. Automated cardiovascular risk categorization through AI-driven coronary calcium quantification in cardiac PET acquired attenuation correction CT. J Nucl Cardiol 2023; 30:955-969. [PMID: 35851642 PMCID: PMC10261233 DOI: 10.1007/s12350-022-03047-9] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Received: 04/04/2022] [Accepted: 05/30/2022] [Indexed: 12/17/2022]
Abstract
BACKGROUND We present an automatic method for coronary artery calcium (CAC) quantification and cardiovascular risk categorization in CT attenuation correction (CTAC) scans acquired at rest and stress during cardiac PET/CT. The method segments CAC according to visual assessment rather than the commonly used CT-number threshold. METHODS The method decomposes an image containing CAC into a synthetic image without CAC and an image showing only CAC. Extensive evaluation was performed in a set of 98 patients, each having rest and stress CTAC scans and a dedicated calcium scoring CT (CSCT). Standard manual calcium scoring in CSCT provided the reference standard. RESULTS The interscan reproducibility of CAC quantification computed as average absolute relative differences between CTAC and CSCT scan pairs was 75% and 85% at rest and stress using the automatic method compared to 121% and 114% using clinical calcium scoring. Agreement between automatic risk assessment in CTAC and clinical risk categorization in CSCT resulted in linearly weighted kappa of 0.65 compared to 0.40 between CTAC and CSCT using clinically used calcium scoring. CONCLUSION The increased interscan reproducibility achieved by our method may allow routine cardiovascular risk assessment in CTAC, potentially relieving the need for dedicated CSCT.
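The linearly weighted kappa used above to summarize agreement between ordinal risk categories has a compact closed form; a minimal reference implementation (our own, with illustrative category labels):

```python
import numpy as np

def linearly_weighted_kappa(rater_a, rater_b, n_cat):
    """Cohen's kappa with linear weights for ordinal categories 0..n_cat-1,
    as used for risk-category agreement between two scoring methods."""
    obs = np.zeros((n_cat, n_cat))
    for i, j in zip(rater_a, rater_b):
        obs[i, j] += 1.0
    obs /= obs.sum()
    # linear weights: full credit on the diagonal, partial credit nearby
    w = 1.0 - np.abs(np.subtract.outer(np.arange(n_cat), np.arange(n_cat))) / (n_cat - 1)
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))  # chance agreement
    po, pe = (w * obs).sum(), (w * exp).sum()
    return (po - pe) / (1.0 - pe)

# Perfect agreement across four risk categories gives kappa = 1.0
print(round(linearly_weighted_kappa([0, 1, 2, 3], [0, 1, 2, 3], 4), 3))  # 1.0
```

Linear weights give partial credit for near-miss categories, so adjacent-category disagreements are penalized less than distant ones.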
Affiliation(s)
- S G M van Velzen
  - Department of Biomedical Engineering and Physics, Amsterdam UMC location University of Amsterdam, Meibergdreef 123, 1105 AZ, Amsterdam, the Netherlands
  - Informatics Institute, University of Amsterdam, Amsterdam, the Netherlands
  - Amsterdam Cardiovascular Sciences, Heart Failure & Arrhythmias, Amsterdam, the Netherlands
- M M Dobrolinska
  - Medical Imaging Center, Departments of Radiology, Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, PO Box 30.001, 9700 RB, Groningen, the Netherlands
- P Knaapen
  - Department of Cardiology, VU University Medical Center, Amsterdam, the Netherlands
- R L M van Herten
  - Department of Biomedical Engineering and Physics, Amsterdam UMC location University of Amsterdam, Meibergdreef 123, 1105 AZ, Amsterdam, the Netherlands
  - Informatics Institute, University of Amsterdam, Amsterdam, the Netherlands
  - Amsterdam Cardiovascular Sciences, Heart Failure & Arrhythmias, Amsterdam, the Netherlands
- R Jukema
  - Department of Cardiology, VU University Medical Center, Amsterdam, the Netherlands
- I Danad
  - Department of Cardiology, VU University Medical Center, Amsterdam, the Netherlands
- R H J A Slart
  - Medical Imaging Center, Departments of Radiology, Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, PO Box 30.001, 9700 RB, Groningen, the Netherlands
  - Department of Biomedical Photonic Imaging, Faculty of Science and Technology, University of Twente, Drienerlolaan 5, 7522 NB, Enschede, the Netherlands
- M J W Greuter
  - Medical Imaging Center, Departments of Radiology, Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, PO Box 30.001, 9700 RB, Groningen, the Netherlands
  - Department of Robotics and Mechatronics, Faculty of Electrical Engineering, Mathematics & Computer Science, University of Twente, P.O. Box 217, 7500 AE, Enschede, the Netherlands
- I Išgum
  - Department of Biomedical Engineering and Physics, Amsterdam UMC location University of Amsterdam, Meibergdreef 123, 1105 AZ, Amsterdam, the Netherlands
  - Informatics Institute, University of Amsterdam, Amsterdam, the Netherlands
  - Amsterdam Cardiovascular Sciences, Heart Failure & Arrhythmias, Amsterdam, the Netherlands
  - Department of Radiology and Nuclear Medicine, Amsterdam UMC location University of Amsterdam, Amsterdam, the Netherlands
98
Li Z, Liu Y, Chen Y, Shu H, Lu J, Gui Z. Dual-domain fusion deep convolutional neural network for low-dose CT denoising. J Xray Sci Technol 2023:XST230020. [PMID: 37212059 DOI: 10.3233/xst-230020] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 05/23/2023]
Abstract
BACKGROUND In view of the health risks posed by X-ray radiation, the main goal of the present research is to achieve high-quality CT images while reducing X-ray dose. In recent years, convolutional neural networks (CNNs) have shown excellent performance in removing low-dose CT (LDCT) noise. However, previous work mainly focused on deepening CNN architectures and improving feature extraction, without fusing features from the frequency domain and the image domain. OBJECTIVE To address this issue, we propose and test a new LDCT image denoising method based on a dual-domain fusion deep convolutional neural network (DFCNN). METHODS This method operates in two domains: the DCT domain and the image domain. In the DCT domain, we design a new residual CBAM network to enhance the internal and external relations of different channels while reducing noise, promoting richer image structure information. For the image domain, we propose a top-down multi-scale codec network as a denoising network to obtain more acceptable edges and textures while capturing multi-scale information. The feature images of the two domains are then fused by a combination network. RESULTS The proposed method was validated on the Mayo dataset and the Piglet dataset. The denoising algorithm outperformed other state-of-the-art methods reported in previous studies on both subjective and objective evaluation indexes. CONCLUSIONS The results demonstrate that the new fusion model produces better denoising results, in both the image domain and the DCT domain, than models developed using features extracted from the image domain alone.
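To make the dual-domain idea concrete, here is a toy NumPy sketch (ours, not the paper's DFCNN: the two learned sub-networks are replaced by DCT-coefficient thresholding and a box filter, the fusion network by a plain average, and a square image is assumed):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
    k = np.arange(n)
    m = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n)) * np.sqrt(2.0 / n)
    m[0] /= np.sqrt(2.0)
    return m

def dual_domain_denoise(img, thresh=0.1):
    """Toy dual-domain denoising: hard-threshold small DCT coefficients
    (frequency domain), box-filter (image domain), then average the two."""
    n = img.shape[0]
    d = dct_matrix(n)
    coef = d @ img @ d.T                      # 2-D DCT
    coef[np.abs(coef) < thresh] = 0.0         # frequency-domain "denoising"
    freq_est = d.T @ coef @ d                 # inverse 2-D DCT
    pad = np.pad(img, 1, mode="edge")         # 3x3 box filter, image domain
    img_est = sum(pad[i:i + n, j:j + n]
                  for i in range(3) for j in range(3)) / 9.0
    return 0.5 * (freq_est + img_est)         # naive fusion of the two estimates
```

In the actual method each of the three stages is a trained CNN; the point here is only the structure: one estimate from the DCT domain, one from the image domain, fused at the end.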
Affiliation(s)
- Zhiyuan Li
  - School of Information and Communication Engineering, North University of China, Taiyuan, Shanxi Province, China
  - State Key Laboratory of Dynamic Testing Technology, North University of China, Taiyuan, China
- Yi Liu
  - School of Information and Communication Engineering, North University of China, Taiyuan, Shanxi Province, China
  - State Key Laboratory of Dynamic Testing Technology, North University of China, Taiyuan, China
- Yang Chen
  - School of Computer Science and Engineering, Southeast University, Nanjing, Jiangsu, China
- Huazhong Shu
  - Jiangsu Provincial Joint International Research Laboratory of Medical Information Processing, Southeast University, Nanjing, Jiangsu, China
- Jing Lu
  - School of Information and Communication Engineering, North University of China, Taiyuan, Shanxi Province, China
  - State Key Laboratory of Dynamic Testing Technology, North University of China, Taiyuan, China
- Zhiguo Gui
  - School of Information and Communication Engineering, North University of China, Taiyuan, Shanxi Province, China
  - State Key Laboratory of Dynamic Testing Technology, North University of China, Taiyuan, China
  - Shanxi Provincial Key Laboratory for Biomedical Imaging and Big Data, North University of China, Taiyuan, China
99
Guo Z, Liu Z, Barbastathis G, Zhang Q, Glinsky ME, Alpert BK, Levine ZH. Noise-resilient deep learning for integrated circuit tomography. Opt Express 2023; 31:15355-15371. [PMID: 37157639 DOI: 10.1364/oe.486213] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 05/10/2023]
Abstract
X-ray tomography is a non-destructive imaging technique that reveals the interior of an object from its projections at different angles. Under sparse-view and low-photon sampling, regularization priors are required to retrieve a high-fidelity reconstruction. Recently, deep learning has been used in X-ray tomography. The prior learned from training data replaces the general-purpose priors in iterative algorithms, achieving high-quality reconstructions with a neural network. Previous studies typically assume the noise statistics of test data are acquired a priori from training data, leaving the network susceptible to a change in the noise characteristics under practical imaging conditions. In this work, we propose a noise-resilient deep-reconstruction algorithm and apply it to integrated circuit tomography. By training the network with regularized reconstructions from a conventional algorithm, the learned prior shows strong noise resilience without the need for additional training with noisy examples, and allows us to obtain acceptable reconstructions with fewer photons in test data. The advantages of our framework may further enable low-photon tomographic imaging where long acquisition times limit the ability to acquire a large training set.
100
Qiu D, Cheng Y, Wang X. Medical image super-resolution reconstruction algorithms based on deep learning: A survey. Comput Methods Programs Biomed 2023; 238:107590. [PMID: 37201252 DOI: 10.1016/j.cmpb.2023.107590] [Citation(s) in RCA: 15] [Impact Index Per Article: 7.5] [Received: 05/30/2022] [Revised: 03/21/2023] [Accepted: 05/05/2023] [Indexed: 05/20/2023]
Abstract
BACKGROUND AND OBJECTIVE With the high-resolution (HR) requirements of medical images in clinical practice, super-resolution (SR) reconstruction algorithms based on low-resolution (LR) medical images have become a research hotspot. Such methods can significantly improve image resolution without upgrading hardware, so a review of them is of great significance. METHODS Focusing on SR reconstruction algorithms specific to medical imaging, organized by subfield (magnetic resonance (MR), computed tomography (CT), and ultrasound imaging), we first analyze the research progress of SR reconstruction algorithms and summarize and compare the different types of algorithms. Second, we introduce the evaluation metrics used with SR reconstruction algorithms. Finally, we discuss the development trend of SR reconstruction technology in the medical field. RESULTS Deep-learning-based medical image SR reconstruction can provide richer lesion information, relieve experts' diagnostic burden, and improve diagnostic efficiency and accuracy. CONCLUSION Deep-learning-based medical image SR reconstruction helps improve the quality of care, supports expert diagnosis, and lays a solid foundation for subsequent computer-aided analysis and recognition tasks; it is therefore of great significance for improving diagnostic efficiency and realizing intelligent medical care.
Affiliation(s)
- Defu Qiu
  - Engineering Research Center of Intelligent Control for Underground Space, Ministry of Education, China University of Mining and Technology, Xuzhou 221116, China
  - School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 221116, China
- Yuhu Cheng
  - Engineering Research Center of Intelligent Control for Underground Space, Ministry of Education, China University of Mining and Technology, Xuzhou 221116, China
  - School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 221116, China
- Xuesong Wang
  - Engineering Research Center of Intelligent Control for Underground Space, Ministry of Education, China University of Mining and Technology, Xuzhou 221116, China
  - School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 221116, China