1
Li X, Wan W, Zhou F, Cheng X, Jie Y, Tan H. Medical image fusion based on sparse representation and neighbor energy activity. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104353]
2
VANet: a medical image fusion model based on attention mechanism to assist disease diagnosis. BMC Bioinformatics 2022; 23:548. [PMID: 36536297 PMCID: PMC9762055 DOI: 10.1186/s12859-022-05072-4]
Abstract
BACKGROUND Modern biomedical imaging can present the morphological structure or functional metabolic information of organisms at different scales, such as the organ, tissue, cell, molecule and gene levels. However, each imaging mode has its own scope of application, advantages and disadvantages. To improve the role of medical images in disease diagnosis, fusing biomedical image information across imaging modes and scales has become an important research direction in medical imaging. Traditional medical image fusion methods are designed around activity-level measurement and fusion rules; they do not mine the contextual features of the different imaging modes, which limits the quality of the fused images. METHOD This paper proposes an attention-multiscale network medical image fusion model based on contextual features. The model selects five backbone modules of the VGG-16 network to build encoders that extract the contextual features of medical images. It builds an attention-mechanism branch to fuse global contextual features and a residual multiscale detail-processing branch to fuse local contextual features. Finally, a decoder performs cascade reconstruction of the features to obtain the fused image. RESULTS Ten sets of images related to five diseases were selected from the AANLIB database to validate the VANet model. Structural images were derived from high-resolution MR images; functional images were derived from SPECT and PET images, which are good at describing organ blood flow and tissue metabolism. Fusion experiments were performed on twelve fusion algorithms, including the VANet model, and eight metrics covering different aspects were selected to build a fusion quality evaluation system. Friedman's test and the post-hoc Nemenyi test were used for statistical comparison to demonstrate the superiority of the VANet model. CONCLUSIONS The VANet model fully captures and fuses the texture details and color information of the source images. In the fusion results, metabolic and structural information is well expressed, with no interference of color information with structure and texture; in the objective evaluation system, the metric values of the VANet model are generally higher than those of the other methods; in terms of efficiency, the model's time consumption is acceptable; in terms of scalability, the model is unaffected by the input order of the source images and can be extended to tri-modal fusion.
3
Kong W, Li C, Lei Y. Multimodal medical image fusion using convolutional neural network and extreme learning machine. Front Neurorobot 2022; 16:1050981. [PMID: 36467563 PMCID: PMC9708736 DOI: 10.3389/fnbot.2022.1050981]
Abstract
The emergence of multimodal medical imaging technology greatly increases the accuracy of clinical diagnosis and etiological analysis. Nevertheless, each medical imaging modality unavoidably has its own limitations, so the fusion of multimodal medical images can be an effective solution. In this paper, a novel fusion method for multimodal medical images exploiting a convolutional neural network (CNN) and an extreme learning machine (ELM) is proposed. As a typical representative of deep learning, the CNN has been gaining popularity in the field of image processing. However, CNNs often suffer from several drawbacks, such as high computational costs and intensive human intervention. To this end, a convolutional extreme learning machine (CELM) model is constructed by incorporating the ELM into the traditional CNN model. The CELM serves as an important tool to extract and capture features of the source images from a variety of angles, and the final fused image is obtained by integrating the significant features. Experimental results indicate that the proposed method not only helps enhance the accuracy of lesion detection and localization but is also superior to current state-of-the-art methods in terms of both subjective visual performance and objective criteria.
Affiliation(s)
- Weiwei Kong
- School of Computer Science and Technology, Xi'an University of Posts and Telecommunications, Xi'an, China
- Shaanxi Key Laboratory of Network Data Analysis and Intelligent Processing, Xi'an, China
- Xi'an Key Laboratory of Big Data and Intelligent Computing, Xi'an, China
- Chi Li
- School of Computer Science and Technology, Xi'an University of Posts and Telecommunications, Xi'an, China
- Shaanxi Key Laboratory of Network Data Analysis and Intelligent Processing, Xi'an, China
- Xi'an Key Laboratory of Big Data and Intelligent Computing, Xi'an, China
- Yang Lei
- College of Cryptography Engineering, Engineering University of PAP, Xi'an, China
4
NOSMFuse: An infrared and visible image fusion approach based on norm optimization and slime mold architecture. Appl Intell 2022. [DOI: 10.1007/s10489-022-03591-4]
5
Nandhini Abirami R, Durai Raj Vincent PM, Srinivasan K, Manic KS, Chang CY. Multimodal Medical Image Fusion of Positron Emission Tomography and Magnetic Resonance Imaging Using Generative Adversarial Networks. Behav Neurol 2022; 2022:6878783. [PMID: 35464043 PMCID: PMC9023223 DOI: 10.1155/2022/6878783]
Abstract
Multimodal medical image fusion combines images from the same or different modalities to improve the visual content of the image and support further operations such as image segmentation. Biomedical research and medical image analysis rely on fusion for higher-level analysis, and it helps medical practitioners visualize internal organs and tissues. In brain imaging, fusion lets practitioners simultaneously visualize hard structures such as the skull and soft structures such as tissue. Brain tumor segmentation can be performed accurately on the fused image: the area of a tumor can be located precisely using information from both Positron Emission Tomography (PET) and Magnetic Resonance (MR) images combined in a single fused image. This increases diagnostic accuracy and reduces the time consumed in diagnosing and locating the tumor. Functional information of the brain is available in PET, while the anatomy of brain tissue is available in the MR image; thus, spatial characteristics and functional information can be obtained from a single image using a robust multimodal medical image fusion model. The proposed approach uses a generative adversarial network (GAN) to fuse PET and MR images into a single image, and the results can be used for further medical analysis to locate the tumor and plan surgical procedures. The performance of the GAN-based model is evaluated using two metrics, the structural similarity index and mutual information; the proposed approach achieved a structural similarity index of 0.8551 and a mutual information of 2.8059.
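Of the two metrics reported above, mutual information is straightforward to estimate from a joint intensity histogram. A minimal sketch of that estimator (a generic illustration, not the authors' code; the histogram-based approach and the `bins` default are common conventions, not taken from the paper):

```python
import numpy as np

def mutual_information(a, b, bins=256):
    """Estimate mutual information (in bits) between two registered
    grayscale images from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                  # joint probability
    px = pxy.sum(axis=1, keepdims=True)        # marginal of a
    py = pxy.sum(axis=0, keepdims=True)        # marginal of b
    nz = pxy > 0                               # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px * py)[nz])))
```

A fused image that preserves more information from a source scores a higher MI against that source; the MI of an image with itself equals its entropy.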
Affiliation(s)
- R. Nandhini Abirami
- School of Information Technology and Engineering, Vellore Institute of Technology, Vellore 632014, India
- P. M. Durai Raj Vincent
- School of Information Technology and Engineering, Vellore Institute of Technology, Vellore 632014, India
- Kathiravan Srinivasan
- School of Computer Science and Engineering, Vellore Institute of Technology, Vellore, 632 014 Tamil Nadu, India
- K. Suresh Manic
- Department of Electrical and Communication Engineering, National University of Science and Technology, Muscat, Oman
- Chuan-Yu Chang
- Department of Computer Science and Information Engineering, National Yunlin University of Science and Technology, Yunlin 64002, Taiwan
- Service Systems Technology Center, Industrial Technology Research Institute, Hsinchu, Taiwan
6
Gupta M, Kumar N, Gupta N, Zaguia A. Fusion of multi-modality biomedical images using deep neural networks. Soft Comput 2022. [DOI: 10.1007/s00500-022-07047-2]
7
Multimodal medical image fusion based on multichannel coupled neural P systems and max-cloud models in spectral total variation domain. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.01.059]
8
Kong W, Miao Q, Liu R, Lei Y, Cui J, Xie Q. Multimodal medical image fusion using gradient domain guided filter random walk and side window filtering in framelet domain. Inf Sci (N Y) 2022. [DOI: 10.1016/j.ins.2021.11.033]
9
Zhu R, Li X, Huang S, Zhang X. Multimodal medical image fusion using adaptive co-occurrence filter-based decomposition optimization model. Bioinformatics 2022; 38:818-826. [PMID: 34664633 DOI: 10.1093/bioinformatics/btab721]
Abstract
MOTIVATION Medical image fusion has developed into an important technology that can effectively merge the significant information of multiple source images into one image. Fused images with abundant, complementary information are desirable, as they contribute to clinical diagnosis and surgical planning. RESULTS In this article, the concept of the skewness of pixel intensity (SPI) and a novel adaptive co-occurrence filter (ACOF)-based image decomposition optimization model are proposed to improve the quality of fused images. First, SPI is applied to the co-occurrence filter to design the ACOF. The initial base layers of the source images are obtained using the ACOF, which relies on the content of the images rather than a fixed scale. Then, the widely used iterative filter framework is replaced with an optimization model, constructed from the characteristics of the ideal base layer, to ensure that the base layer and detail layer are sufficiently separated and that the decomposition is computationally efficient. Finally, the fused images are generated by the designed fusion rules and linear addition. Experimental results demonstrate that the proposed method outperforms 22 state-of-the-art medical image fusion methods in terms of five objective indices and subjective evaluation, with higher computational efficiency. AVAILABILITY AND IMPLEMENTATION The code and data can be downloaded at https://github.com/zhunui/acof. SUPPLEMENTARY INFORMATION Supplementary data are available at Bioinformatics online.
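The skewness of pixel intensity (SPI) that drives the adaptive filter is, at its core, the third standardized moment of a patch's intensities. As an illustration only (the paper's exact SPI definition and how it parameterizes the co-occurrence filter are not spelled out in the abstract), a plain sample-skewness computation looks like:

```python
import numpy as np

def intensity_skewness(patch):
    """Sample skewness (third standardized moment) of a patch's pixel
    intensities; positive for a right-skewed intensity histogram."""
    x = np.asarray(patch, dtype=np.float64).ravel()
    mu, sigma = x.mean(), x.std()
    if sigma == 0:                 # flat patch: no skew to measure
        return 0.0
    return float(np.mean(((x - mu) / sigma) ** 3))
```

A symmetric intensity distribution scores near zero, while a patch dominated by a few bright pixels scores positive, which is the kind of local statistic an intensity-adaptive filter can key on.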
Affiliation(s)
- Rui Zhu
- Key Laboratory of Symbolic Computation and Knowledge Engineering, Ministry of Education, Jilin University, Changchun 130012, China; College of Computer Science and Technology, Jilin University, Changchun 130012, China
- Xiongfei Li
- Key Laboratory of Symbolic Computation and Knowledge Engineering, Ministry of Education, Jilin University, Changchun 130012, China; College of Computer Science and Technology, Jilin University, Changchun 130012, China
- Sa Huang
- Department of Radiology, the Second Hospital of Jilin University, Changchun 130041, China
- Xiaoli Zhang
- Key Laboratory of Symbolic Computation and Knowledge Engineering, Ministry of Education, Jilin University, Changchun 130012, China; College of Computer Science and Technology, Jilin University, Changchun 130012, China
10
Faragallah OS, Muhammed AN, Taha TS, Geweid GG. PCA based SVD fusion for MRI and CT medical images. J Intell Fuzzy Syst 2021. [DOI: 10.3233/jifs-202884]
Abstract
This paper presents a new approach to multi-modal medical image fusion based on Principal Component Analysis (PCA) and Singular Value Decomposition (SVD). The main objective of the proposed approach is to facilitate implementation on a hardware unit so that it works effectively at run time. To evaluate the approach, it was tested by fusing four different cases of registered CT and MRI images. Eleven quality metrics (including Mutual Information and the Universal Image Quality Index) were used to evaluate the fused image obtained by the proposed approach and to compare it with the images obtained by other fusion approaches. In the experiments, the quality metrics show that the fused image obtained by the presented approach has better quality, proving it effective for medical image fusion, especially of MRI and CT images. The results also indicate that the approach reduces the processing time and memory required during fusion, leading to a very cheap and fast hardware implementation.
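The PCA half of such an approach typically follows the classic PCA fusion rule: the dominant eigenvector of the covariance between the two source images supplies the mixing weights. A minimal sketch of that rule (a generic textbook illustration, not the authors' hardware-oriented implementation, which also incorporates SVD):

```python
import numpy as np

def pca_fusion(img1, img2):
    """Fuse two registered grayscale images with weights taken from the
    dominant eigenvector of their 2x2 joint covariance matrix."""
    data = np.stack([img1.ravel(), img2.ravel()]).astype(np.float64)
    cov = np.cov(data)                 # 2x2 covariance of the two images
    _, vecs = np.linalg.eigh(cov)      # eigh returns eigenvalues ascending
    v = np.abs(vecs[:, -1])            # dominant principal component
    w1, w2 = v / v.sum()               # normalize weights to sum to 1
    return w1 * img1 + w2 * img2
```

Because the weights are non-negative and sum to one, every fused pixel lies between the two corresponding source intensities.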
Affiliation(s)
- Osama S. Faragallah
- Department of Information Technology, College of Computers and Information Technology, Taif University, Saudi Arabia
- Abdullah N. Muhammed
- Department of Computer Science and Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf, Egypt
- Taha S. Taha
- Department of Electronics and Communication Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf, Egypt
- Gamal G.N. Geweid
- Department of Biomedical Engineering, College of Engineering and Computer Sciences, Marshall University, Huntington, WV, USA
- Department of Electrical Engineering, Faculty of Engineering, Benha University, Benha, Egypt
11
Zhu R, Li X, Zhang X, Wang J. HID: The Hybrid Image Decomposition Model for MRI and CT Fusion. IEEE J Biomed Health Inform 2021; 26:727-739. [PMID: 34270437 DOI: 10.1109/jbhi.2021.3097374]
Abstract
Multimodal medical image fusion can combine salient information from different source images of the same region and reduce information redundancy. In this paper, an efficient hybrid image decomposition (HID) method is proposed. It combines the advantages of spatial-domain and transform-domain methods and overcomes the limitations of algorithms based on a single category of features; the accurate separation of the base layer and texture details allows the fusion rules to work better. First, the source anatomical images are decomposed into a series of high frequencies and a low frequency via the nonsubsampled shearlet transform (NSST). Second, the low frequency is further decomposed using a designed optimization model based on structural similarity and the structure tensor to obtain an energy texture layer and a base layer. Then, the modified choosing maximum (MCM) rule is designed to fuse the base layers, and the sum of modified Laplacian (SML) is used to fuse the high frequencies and energy texture layers. Finally, the fused low frequency is obtained by adding the fused energy texture layer and base layer, and the fused image is reconstructed by the inverse NSST. The superiority of the proposed method is verified by extensive experiments on 50 pairs of magnetic resonance imaging (MRI) and computed tomography (CT) images, among others, with comparison against 12 state-of-the-art medical image fusion methods. The proposed hybrid decomposition model is demonstrated to extract texture information better than conventional ones.
12
Ju F, Sun Y, Gao J, Hu Y, Yin B. Kronecker-decomposable robust probabilistic tensor discriminant analysis. Inf Sci (N Y) 2021. [DOI: 10.1016/j.ins.2021.01.054]
13
Dinh PH. A novel approach based on three-scale image decomposition and marine predators algorithm for multi-modal medical image fusion. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102536]
14
Ilyas A, Farid MS, Khan MH, Grzegorzek M. Exploiting Superpixels for Multi-Focus Image Fusion. Entropy (Basel) 2021; 23:247. [PMID: 33670018 PMCID: PMC7926613 DOI: 10.3390/e23020247]
Abstract
Multi-focus image fusion is the process of combining focused regions of two or more images to obtain a single all-in-focus image. It is an important research area because a fused image is of high quality and contains more details than the source images. This makes it useful for numerous applications in image enhancement, remote sensing, object recognition, medical imaging, etc. This paper presents a novel multi-focus image fusion algorithm that proposes to group the local connected pixels with similar colors and patterns, usually referred to as superpixels, and use them to separate the focused and de-focused regions of an image. We note that these superpixels are more expressive than individual pixels, and they carry more distinctive statistical properties when compared with other superpixels. The statistical properties of superpixels are analyzed to categorize the pixels as focused or de-focused and to estimate a focus map. A spatial consistency constraint is ensured on the initial focus map to obtain a refined map, which is used in the fusion rule to obtain a single all-in-focus image. Qualitative and quantitative evaluations are performed to assess the performance of the proposed method on a benchmark multi-focus image fusion dataset. The results show that our method produces better quality fused images than existing image fusion techniques.
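The per-superpixel focus decision rests on a focus measure; energy of a Laplacian response is a common choice for such measures. A toy sketch of that idea (the statistical properties the paper actually analyzes per superpixel are not specified in the abstract, so this uses a generic Laplacian-energy measure over a rectangular region):

```python
import numpy as np

def laplacian_focus(region):
    """Generic focus measure: mean energy of a 4-neighbour discrete
    Laplacian. In-focus (sharp) regions score higher than blurred ones."""
    r = np.asarray(region, dtype=np.float64)
    lap = (-4.0 * r[1:-1, 1:-1]
           + r[:-2, 1:-1] + r[2:, 1:-1]      # vertical neighbours
           + r[1:-1, :-2] + r[1:-1, 2:])     # horizontal neighbours
    return float(np.mean(lap ** 2))
```

Comparing such a measure across co-located regions of the source images yields an initial focus map of the kind the paper then refines with its spatial consistency constraint.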
Affiliation(s)
- Areeba Ilyas
- Punjab University College of Information Technology, University of the Punjab, Lahore 54000, Pakistan; (A.I.); (M.H.K.)
- Muhammad Shahid Farid
- Punjab University College of Information Technology, University of the Punjab, Lahore 54000, Pakistan; (A.I.); (M.H.K.)
- Muhammad Hassan Khan
- Punjab University College of Information Technology, University of the Punjab, Lahore 54000, Pakistan; (A.I.); (M.H.K.)
- Marcin Grzegorzek
- Institute of Medical Informatics, University of Lübeck, Ratzeburger Allee 160, 23538 Lübeck, Germany;
15
Wang G, Li W, Huang Y. Medical image fusion based on hybrid three-layer decomposition model and nuclear norm. Comput Biol Med 2020; 129:104179. [PMID: 33360260 DOI: 10.1016/j.compbiomed.2020.104179]
Abstract
The aim of medical image fusion technology is to synthesize multiple-image information to assist doctors in making scientific decisions. Existing studies have focused on preserving image details while avoiding halo artifacts and color distortion. This paper proposes a novel medical image fusion algorithm with that objective. First, the input image is decomposed into structure, texture, and local mean brightness layers using a hybrid three-layer decomposition model that fully extracts the features of the original images without introducing artifacts. Second, the nuclear norm of each patch, obtained with a sliding window, is calculated to construct the weight maps of the structure and texture layers; the weight map of the local mean brightness layer is constructed by calculating the local energy. Finally, remapping functions are applied to enhance each fusion layer, and the final fused image is reconstructed with the inverse operation of the decomposition. Subjective and objective experiments confirm that the proposed algorithm has a distinct advantage over other state-of-the-art algorithms.
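The nuclear norm used for the structure- and texture-layer weight maps is simply the sum of a patch's singular values, which rewards patches with rich directional structure. A minimal sketch (illustrative; the paper's sliding-window size and weight-map normalization are not given in the abstract):

```python
import numpy as np

def nuclear_norm(patch):
    """Nuclear norm of an image patch: the sum of its singular values."""
    s = np.linalg.svd(np.asarray(patch, dtype=np.float64), compute_uv=False)
    return float(s.sum())
```

Sliding this over both source images and keeping, per location, the patch with the larger norm gives a simple activity-level comparison of the kind such weight-map fusion rules build on.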
Affiliation(s)
- Guofen Wang
- Chongqing Key Laboratory of Image Cognition, Chongqing University of Posts and Telecommunications, Chongqing, 400065, China
- Weisheng Li
- Chongqing Key Laboratory of Image Cognition, Chongqing University of Posts and Telecommunications, Chongqing, 400065, China.
- Yuping Huang
- Chongqing Key Laboratory of Image Cognition, Chongqing University of Posts and Telecommunications, Chongqing, 400065, China