1
Huang W, Zhang H, Guo H, Li W, Quan X, Zhang Y. ADDNS: An asymmetric dual deep network with sharing mechanism for medical image fusion of CT and MR-T2. Comput Biol Med 2023; 166:107531. [PMID: 37806056] [DOI: 10.1016/j.compbiomed.2023.107531]
Abstract
Medical images of different modalities have different semantic characteristics, and medical image fusion, which aims to improve visual quality and practical value, has become important in medical diagnostics. However, previous methods do not fully represent semantic and visual features, their generalization ability needs improvement, and the brightness-stacking artifact easily occurs during fusion. In this paper, we propose an asymmetric dual deep network with a sharing mechanism (ADDNS) for medical image fusion. In our asymmetric model-level dual framework, the primal U-Net part learns to fuse medical images of different modalities into a fusion image, while the dual U-Net part learns to invert the fusion task for multi-modal image reconstruction. This asymmetry of network settings not only enables ADDNS to fully extract semantic and visual features, but also reduces model complexity and accelerates convergence. Furthermore, the sharing mechanism, designed according to task relevance, also reduces model complexity and improves the generalization ability of our model. Finally, we use an intermediate supervision method to minimize the difference between the fusion image and the source images so as to prevent the brightness-stacking problem. Experimental results show that our algorithm achieves better quantitative and qualitative results than several state-of-the-art methods.
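The intermediate-supervision idea above, keeping the fused output close to both sources rather than to their sum, can be sketched as a simple loss term. This is an illustrative numpy sketch under our own naming (`intermediate_supervision_loss`, the weight `w` are assumptions), not the paper's actual objective:

```python
import numpy as np

def intermediate_supervision_loss(fused, src_a, src_b, w=0.5):
    """Penalize deviation of the fused image from both source images.

    Keeping fused intensities close to each source (rather than to their
    sum) discourages the brightness-stacking artifact. Images are float
    arrays of the same shape, scaled to [0, 1].
    """
    return w * np.mean((fused - src_a) ** 2) + (1 - w) * np.mean((fused - src_b) ** 2)

# Toy check: an averaged fusion scores lower than a brightness-stacked one.
a = np.full((8, 8), 0.4)
b = np.full((8, 8), 0.6)
avg = (a + b) / 2
stacked = np.clip(a + b, 0.0, 1.0)
assert intermediate_supervision_loss(avg, a, b) < intermediate_supervision_loss(stacked, a, b)
```

Minimizing such a term alongside the reconstruction losses of the dual branch is one plausible way to keep fused intensities within the range of the sources.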
Affiliation(s)
- Wanwan Huang
- College of Artificial Intelligence, Nankai University, Tianjin, 300350, China
- Han Zhang
- College of Artificial Intelligence, Nankai University, Tianjin, 300350, China
- Huike Guo
- College of Artificial Intelligence, Nankai University, Tianjin, 300350, China
- Wei Li
- College of Artificial Intelligence, Nankai University, Tianjin, 300350, China
- Xiongwen Quan
- College of Artificial Intelligence, Nankai University, Tianjin, 300350, China
- Yuzhi Zhang
- College of Software, Nankai University, Tianjin, 300350, China
2
Zhang G, Nie X, Liu B, Yuan H, Li J, Sun W, Huang S. A multimodal fusion method for Alzheimer's disease based on DCT convolutional sparse representation. Front Neurosci 2023; 16:1100812. [PMID: 36685238] [PMCID: PMC9853298] [DOI: 10.3389/fnins.2022.1100812]
Abstract
Introduction: The medical information contained in magnetic resonance imaging (MRI) and positron emission tomography (PET) has driven the development of intelligent diagnosis of Alzheimer's disease (AD) and multimodal medical imaging. To solve the problems of severe energy loss, low contrast of fused images, and spatial inconsistency in traditional sparse-representation-based multimodal medical image fusion methods, a multimodal fusion algorithm for Alzheimer's disease based on discrete cosine transform (DCT) convolutional sparse representation is proposed. Methods: The algorithm first performs a multi-scale DCT decomposition of the source medical images and uses the sub-images at different scales as training images. Different sparse coefficients are obtained by optimally solving the sub-dictionaries at different scales with the alternating direction method of multipliers (ADMM). Second, the coefficients of the high-frequency and low-frequency sub-images are fused using an improved L1-norm rule combined with a novel modified spatial frequency (NMSF) and then inverse-DCT-transformed to obtain the final fused images. Results and discussion: Extensive experimental results show that the proposed method performs well in contrast enhancement and in the retention of texture and contour information.
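As a rough illustration of DCT-domain fusion (single scale only; the paper's learned sub-dictionaries, ADMM solver, and NMSF rule are not reproduced here), one can transform both square images, average the low-frequency block, take the larger-magnitude coefficient elsewhere, and invert. All names below are our own:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix (C @ x applies the 1-D DCT to columns)."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def fuse_dct(img_a, img_b, low_frac=0.25):
    """Fuse two equal-sized square images in the DCT domain.

    Low frequencies (top-left block) are averaged; high frequencies take
    the coefficient with the larger magnitude. This is only a simplified
    stand-in for a multi-scale sparse-coding scheme.
    """
    n = img_a.shape[0]
    c = dct_matrix(n)
    A, B = c @ img_a @ c.T, c @ img_b @ c.T      # forward 2-D DCT
    cut = max(1, int(n * low_frac))
    fused = np.where(np.abs(A) >= np.abs(B), A, B)            # high-freq: max-abs
    fused[:cut, :cut] = (A[:cut, :cut] + B[:cut, :cut]) / 2   # low-freq: mean
    return c.T @ fused @ c                        # inverse 2-D DCT

a = np.random.default_rng(0).random((16, 16))
f = fuse_dct(a, a)  # fusing an image with itself must return it unchanged
assert np.allclose(f, a)
```

Because the DCT matrix is orthonormal, the round trip is exact, which the self-fusion check above exercises.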
Affiliation(s)
- Guo Zhang
- School of Communication and Information Engineering, Chongqing University of Posts and Telecommunications, Chongqing, China; School of Medical Information and Engineering, Southwest Medical University, Luzhou, China
- Xixi Nie
- Chongqing Key Laboratory of Image Cognition, College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing, China
- Bangtao Liu
- School of Medical Information and Engineering, Southwest Medical University, Luzhou, China
- Hong Yuan
- School of Medical Information and Engineering, Southwest Medical University, Luzhou, China
- Jin Li
- School of Medical Information and Engineering, Southwest Medical University, Luzhou, China
- Weiwei Sun
- School of Optoelectronic Engineering, Chongqing University of Posts and Telecommunications, Chongqing, China
- Shixin Huang
- School of Communication and Information Engineering, Chongqing University of Posts and Telecommunications, Chongqing, China; Department of Scientific Research, The People’s Hospital of Yubei District of Chongqing City, Yubei, China
4
Su Q, Wang F, Chen D, Chen G, Li C, Wei L. Deep convolutional neural networks with ensemble learning and transfer learning for automated detection of gastrointestinal diseases. Comput Biol Med 2022; 150:106054. [PMID: 36244302] [DOI: 10.1016/j.compbiomed.2022.106054]
Abstract
Gastrointestinal (GI) diseases are serious threats to human health, and their detection and treatment place a huge burden on medical institutions. Imaging-based methods are among the most important approaches for automated detection of GI diseases. Although deep neural networks have shown impressive performance in a number of imaging tasks, their application to the detection of GI diseases has not been sufficiently explored. In this study, we propose a novel and practical method to detect GI disease from wireless capsule endoscopy (WCE) images with convolutional neural networks. The proposed method uses three backbone networks, modified and fine-tuned by transfer learning, as feature extractors, and an integrated classifier built with ensemble learning is trained to detect GI diseases. The proposed method outperforms existing computational methods on the benchmark dataset, and a case study shows that it captures discriminative information in WCE images. This work demonstrates the potential of deep learning-based computer vision models for effective GI disease screening.
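The integration step, combining class probabilities from several fine-tuned backbones, is commonly done by soft voting. A minimal numpy sketch, assuming each backbone outputs per-class probabilities (the paper's exact ensemble rule may differ, and all names here are ours):

```python
import numpy as np

def soft_vote(prob_list, weights=None):
    """Average class probabilities from several backbone classifiers.

    prob_list: list of (n_samples, n_classes) arrays, one per backbone.
    A (weighted) average of softmax outputs is a standard way to combine
    independently fine-tuned feature extractors.
    """
    probs = np.stack(prob_list)                 # (n_models, n_samples, n_classes)
    if weights is None:
        weights = np.ones(len(prob_list)) / len(prob_list)
    avg = np.tensordot(weights, probs, axes=1)  # (n_samples, n_classes)
    return avg.argmax(axis=1)

# Two backbones disagree on sample 0; the more confident one wins.
p1 = np.array([[0.9, 0.1], [0.2, 0.8]])
p2 = np.array([[0.4, 0.6], [0.1, 0.9]])
print(soft_vote([p1, p2]))  # [0 1]
```

Weighting each backbone by its validation accuracy is a common refinement of the uniform average used above.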
Affiliation(s)
- Qiaosen Su
- School of Software, Shandong University, Jinan, China; Joint SDU-NTU Centre for Artificial Intelligence Research (C-FAIR), Shandong University, Jinan, China
- Fengsheng Wang
- School of Software, Shandong University, Jinan, China; Joint SDU-NTU Centre for Artificial Intelligence Research (C-FAIR), Shandong University, Jinan, China
- Chao Li
- Beidahuang Industry Group General Hospital, Harbin, China
- Leyi Wei
- School of Software, Shandong University, Jinan, China; Joint SDU-NTU Centre for Artificial Intelligence Research (C-FAIR), Shandong University, Jinan, China
5
Using of Laplacian Re-decomposition image fusion algorithm for glioma grading with SWI, ADC, and FLAIR images. Pol J Med Phys Eng 2021. [DOI: 10.2478/pjmpe-2021-0031]
Abstract
Introduction: Based on a tumor's growth potential and aggressiveness, gliomas are most often classified into low- or high-grade groups. Traditionally, tissue sampling is used to determine the glioma grade. The aim of this study is to evaluate the efficiency of the Laplacian Re-decomposition (LRD) medical image fusion algorithm for glioma grading with advanced magnetic resonance imaging (MRI) and to introduce the best image combination for glioma grading.
Material and methods: Sixty-one patients (17 low-grade and 44 high-grade) underwent susceptibility-weighted imaging (SWI), apparent diffusion coefficient (ADC) mapping, and fluid-attenuated inversion recovery (FLAIR) MRI. The LRD medical image fusion algorithm was used to fuse the different MRI images. To evaluate the effectiveness of LRD in classifying glioma grade, we compared receiver operating characteristic (ROC) curve parameters.
Results: The average relative signal contrasts (RSCs) of SWI and ADC maps in high-grade glioma are significantly lower than those in low-grade glioma. No significant difference was detected between low- and high-grade glioma on FLAIR images. In our study, the areas under the curve (AUCs) for low- and high-grade glioma differentiation on SWI and ADC maps were 0.871 and 0.833, respectively.
Conclusions: Fusing SWI and ADC maps with the LRD medical image fusion algorithm increases the AUC for low- and high-grade glioma separation to 0.978. Our work has led us to conclude that this combination reaches the highest diagnostic accuracy for low- and high-grade glioma differentiation, and that the LRD fusion algorithm can be used for glioma grading.
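The ROC analysis in studies like this one reduces to computing an AUC from binary grades (labels) and a contrast measure (scores). A minimal numpy implementation via the rank-sum (Mann-Whitney) identity, with hypothetical toy data rather than the study's measurements:

```python
import numpy as np

def auc(labels, scores):
    """Area under the ROC curve via the rank-sum identity (no tie handling)."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)  # ranks 1..n, ascending score
    n_pos, n_neg = labels.sum(), (~labels).sum()
    return (ranks[labels].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# e.g. high-grade (1) vs low-grade (0) with some per-patient contrast score
y = [0, 0, 1, 1]
s = [0.10, 0.40, 0.35, 0.80]
print(auc(y, s))  # 0.75
```

The identity counts the fraction of (positive, negative) pairs the score orders correctly, which is exactly the AUC.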
6
Preliminary study of multiple b-value diffusion-weighted images and T1 post enhancement magnetic resonance imaging images fusion with Laplacian Re-decomposition (LRD) medical fusion algorithm for glioma grading. Eur J Radiol Open 2021; 8:100378. [PMID: 34632000] [PMCID: PMC8487979] [DOI: 10.1016/j.ejro.2021.100378]
Abstract
Highlights: The LRD medical image fusion algorithm can be used with MRI images for glioma grading. Fusing DWI (b50) and T1 post-enhancement (T1Gd) images by LRD has the highest diagnostic value for glioma grading.
Background: The grade of a brain tumor is thought to be the most significant and crucial component in treatment management. Recent developments in medical imaging techniques have led to the introduction of non-invasive methods for brain tumor grading, such as different magnetic resonance imaging (MRI) protocols. Combining different MRI protocols with fusion algorithms is used to improve diagnostic accuracy for tumor grading. This paper investigated the efficiency of the Laplacian Re-decomposition (LRD) fusion algorithm for glioma grading. Procedures: In this study, 69 patients were examined with MRI, and T1 post-enhancement (T1Gd) and diffusion-weighted images (DWI) were obtained. To evaluate LRD performance for glioma grading, we compared receiver operating characteristic (ROC) curve parameters. Findings: We found that the average relative signal contrast (RSC) for high-grade gliomas is greater than that for low-grade gliomas in T1Gd images and all fused images. No significant difference in the RSCs of DWI images was observed between low-grade and high-grade gliomas. However, a significant RSC difference was detected between grades III and IV in the T1Gd, b50, and all fused images. Conclusions: This research suggests that T1Gd images are an appropriate imaging protocol for separating low-grade and high-grade gliomas. According to these findings, the LRD fusion algorithm may be used to increase the diagnostic value of T1Gd and DWI images for distinguishing grade III from grade IV glioma. In conclusion, this article has emphasized the significance of the LRD fusion algorithm as a tool for differentiating grade III and IV gliomas.
Key Words
- ADC, apparent diffusion coefficient
- AUC, area under the curve
- BOLD, blood oxygen level dependent imaging
- CBV, Cerebral Blood Volume
- DCE, Dynamic contrast enhancement
- DGR, Decision Graph Re-decomposition
- DWI, Diffusion-weighted imaging
- Diffusion-weighted images
- FA, flip angle
- Fusion algorithm
- GBM, glioblastomas
- GDIE, Gradient Domain Image Enhancement
- Glioma
- Grade
- IRS, Inverse Re-decomposition Scheme
- LEM, Local Energy Maximum
- LP, Laplacian Pyramid
- LRD, Laplacian Re-decomposition
- Laplacian Re-decomposition
- MLD, Maximum Local Difference
- MRI, magnetic resonance imaging
- MRS, Magnetic resonance spectroscopy
- MST, Multi-scale transform
- Magnetic resonance imaging
- NOD, Non-overlapping domain
- OD, overlapping domain
- PACS, picture archiving and communication system
- ROC, receiver operating characteristic curve
- ROI, regions of interest
- RSC, Relative Signal Contrast
- SCE, Susceptibility contrast enhancement
- T1Gd, T1 post enhancement
- TE, time of echo
- TI, time of inversion
- TR, repetition time
7
Faragallah OS, Muhammed AN, Taha TS, Geweid GG. PCA based SVD fusion for MRI and CT medical images. J Intell Fuzzy Syst 2021. [DOI: 10.3233/jifs-202884]
Abstract
This paper presents a new approach to multi-modal medical image fusion based on Principal Component Analysis (PCA) and Singular Value Decomposition (SVD). The main objective of the proposed approach is to facilitate its implementation on a hardware unit so that it works effectively at run time. To evaluate the presented approach, it was tested on four different cases of registered CT and MRI images. Eleven quality metrics (including mutual information and the universal image quality index) were used to evaluate the fused image obtained by the proposed approach and to compare it with the images obtained by other fusion approaches. In the experiments, the quality metrics show that the fused image obtained by the presented approach has better quality, proving the approach effective for medical image fusion, especially for MRI and CT images. The results also indicate that the approach reduces the processing time and the memory required during fusion, leading to a very cheap and fast hardware implementation.
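The PCA side of such a scheme is typically the classic eigenvector weighting rule: the principal component of the 2x2 covariance between the two flattened source images gives the fusion weights. A numpy sketch of that generic rule only (the paper's specific PCA+SVD combination is not reproduced, and the function names are ours):

```python
import numpy as np

def pca_fusion_weights(img_a, img_b):
    """Fusion weights from the principal eigenvector of the 2x2
    covariance of the two (flattened) source images."""
    data = np.stack([img_a.ravel(), img_b.ravel()])
    cov = np.cov(data)                      # 2x2 covariance matrix
    vals, vecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    v = np.abs(vecs[:, -1])                 # principal component
    return v / v.sum()                      # normalize to sum to 1

def pca_fuse(img_a, img_b):
    """Weighted sum of the sources using the PCA-derived weights."""
    w = pca_fusion_weights(img_a, img_b)
    return w[0] * img_a + w[1] * img_b

rng = np.random.default_rng(1)
a = rng.random((32, 32))
b = rng.random((32, 32))
w = pca_fusion_weights(a, b)
assert np.isclose(w.sum(), 1.0) and (w >= 0).all()
```

The image with more variance (often the one carrying more structural detail) receives the larger weight, which is why this rule is popular for cheap hardware-friendly fusion.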
Affiliation(s)
- Osama S. Faragallah
- Department of Information Technology, College of Computers and Information Technology, Taif University, Saudi Arabia
- Abdullah N. Muhammed
- Department of Computer Science and Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf, Egypt
- Taha S. Taha
- Department of Electronics and Communication Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf, Egypt
- Gamal G.N. Geweid
- Department of Biomedical Engineering, College of Engineering and Computer Sciences, Marshall University, Huntington, WV, USA
- Department of Electrical Engineering, Faculty of Engineering, Benha University, Benha, Egypt
8
Zhu R, Li X, Zhang X, Wang J. HID: The Hybrid Image Decomposition Model for MRI and CT Fusion. IEEE J Biomed Health Inform 2021; 26:727-739. [PMID: 34270437] [DOI: 10.1109/jbhi.2021.3097374]
Abstract
Multimodal medical image fusion can combine salient information from different source images of the same part and reduce information redundancy. In this paper, an efficient hybrid image decomposition (HID) method is proposed. It combines the advantages of spatial-domain and transform-domain methods and breaks through the limitations of algorithms based on a single category of features. Accurate separation of the base layer and texture details is conducive to more effective fusion rules. First, the source anatomical images are decomposed into a series of high frequencies and a low frequency via the nonsubsampled shearlet transform (NSST). Second, the low frequency is further decomposed using a designed optimization model based on structural similarity and the structure tensor to obtain an energy texture layer and a base layer. Then, a modified choosing-maximum (MCM) rule is designed to fuse the base layers, and the sum of modified Laplacian (SML) is used to fuse the high frequencies and energy texture layers. Finally, the fused low frequency is obtained by adding the fused energy texture layer and base layer, and the fused image is reconstructed by the inverse NSST. The superiority of the proposed method is verified by extensive experiments on 50 pairs of magnetic resonance imaging (MRI) and computed tomography (CT) images, among others, and by comparison with 12 state-of-the-art medical image fusion methods. The results demonstrate that the proposed hybrid decomposition model extracts texture information better than conventional models.
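The base/detail separation that hybrid methods rely on can be illustrated with a much simpler two-scale scheme: a mean filter gives the base layer, the residual gives the detail layer, and the layers are fused by averaging and max-abs selection respectively. This numpy sketch stands in for, and is far simpler than, the paper's NSST plus optimization model; every name in it is ours:

```python
import numpy as np

def box_blur(img, k=5):
    """Mean filter via an integral image, with edge-replicated padding."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    c = np.cumsum(np.cumsum(p, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))  # zero row/column so window sums index cleanly
    return (c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]) / (k * k)

def two_scale_fuse(img_a, img_b, k=5):
    """Fuse base layers by averaging and detail layers by max-abs selection,
    a generic spatial/transform hybrid in miniature."""
    base_a, base_b = box_blur(img_a, k), box_blur(img_b, k)
    det_a, det_b = img_a - base_a, img_b - base_b
    base = (base_a + base_b) / 2
    detail = np.where(np.abs(det_a) >= np.abs(det_b), det_a, det_b)
    return base + detail

a = np.random.default_rng(2).random((20, 20))
assert np.allclose(two_scale_fuse(a, a), a)  # self-fusion is the identity
```

Swapping the mean filter for an edge-preserving one (or a shearlet transform) changes what counts as "detail", which is exactly the design axis hybrid decomposition methods explore.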
9
Dinh PH. A novel approach based on Three-scale image decomposition and Marine predators algorithm for multi-modal medical image fusion. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102536]