1. Ravi J, Narmadha R. Optimized dual-tree complex wavelet transform aided multimodal image fusion with adaptive weighted average fusion strategy. Sci Rep 2024;14:30246. [PMID: 39632891] [PMCID: PMC11618366] [DOI: 10.1038/s41598-024-81594-6]
Abstract
Image fusion is generally utilized to retrieve significant data from a set of input images and combine it into a single informative output, enhancing the applicability and quality of the data. The analysis of multimodal image fusion, in which images from multiple modalities are combined into a single image while preserving exact details, is therefore an active research topic. Existing approaches, however, struggle to interpret the source images precisely, and they capture only local information without considering the wider context. To address these weaknesses, a multimodal image fusion model is developed based on a multi-resolution transform together with an optimization strategy. First, images are taken from standard public datasets and passed to the Optimized Dual-Tree Complex Wavelet Transform (ODTCWT) to obtain low-frequency and high-frequency coefficients. Certain DTCWT parameters are tuned with a hybridized heuristic strategy, the Probability of Fitness-based Honey Badger Squirrel Search Optimization (PF-HBSSO), to enhance the decomposition quality. The high-frequency coefficients are then fused with an adaptive weighted average fusion technique, whose weights are optimized by PF-HBSSO to achieve optimal fused results, while the low-frequency coefficients are combined by average fusion. Finally, the fused coefficients are reconstructed into an image using the inverse ODTCWT. The experimental evaluation of the designed multimodal image fusion illustrates the superiority that distinguishes this work from others.
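As a rough illustration of the decomposition-and-fusion pipeline this abstract describes, the following minimal Python sketch fuses two images in the DTCWT domain, assuming the open-source dtcwt package's Transform2d/Pyramid API; the fixed example weights stand in for the PF-HBSSO-optimized weights, which are not reproduced here.

```python
# Minimal DTCWT fusion sketch (assumes the "dtcwt" package; pip install dtcwt).
# The fixed weights w_a/w_b stand in for the PF-HBSSO-optimized weights.
import numpy as np
import dtcwt

def dtcwt_fuse(img_a, img_b, nlevels=3, w_a=0.6, w_b=0.4):
    """Fuse two equally sized grayscale images in the DTCWT domain."""
    transform = dtcwt.Transform2d()
    pa = transform.forward(img_a.astype(float), nlevels=nlevels)
    pb = transform.forward(img_b.astype(float), nlevels=nlevels)

    # Low-frequency coefficients: plain averaging, as in the paper.
    lowpass = 0.5 * (pa.lowpass + pb.lowpass)

    # High-frequency coefficients: weighted average per level and orientation.
    highpasses = tuple(w_a * ha + w_b * hb
                       for ha, hb in zip(pa.highpasses, pb.highpasses))

    # Image reconstruction via the inverse transform.
    return transform.inverse(dtcwt.Pyramid(lowpass, highpasses))
```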
Affiliation(s)
- Jampani Ravi
- Department of Electronics and Communication Engineering, Sathyabama Institute of Science and Technology, Semmancheri, Chennai, 600119, India.
- R Narmadha
- Department of Electronics and Communication Engineering, Sathyabama Institute of Science and Technology, Semmancheri, Chennai, 600119, India.

2. Tajmirriahi M, Rabbani H. A Review of EEG-based Localization of Epileptic Seizure Foci: Common Points with Multimodal Fusion of Brain Data. J Med Signals Sens 2024;14:19. [PMID: 39234592] [PMCID: PMC11373807] [DOI: 10.4103/jmss.jmss_11_24]
Abstract
Unexpected seizures significantly decrease the quality of life of epileptic patients. Seizure attacks are caused by hyperexcitability and anatomical lesions in particular regions of the brain, and cognitive impairments and memory deficits are their most common concomitant effects. In addition to seizure-reduction treatments, medical rehabilitation involving brain-computer interfaces and neurofeedback can improve cognition and quality of life in patients with focal epilepsy in most cases, particularly when resective epilepsy surgery has been considered as a treatment for drug-resistant epilepsy. Source estimation and precise localization of epileptic foci can improve such rehabilitation and treatment. Electroencephalography (EEG) monitoring and multimodal noninvasive neuroimaging techniques, such as ictal/interictal single-photon emission computerized tomography (SPECT) imaging and structural magnetic resonance imaging, are common practices for localizing epileptic foci and have been examined in numerous studies. In this article, we review the most recent research on EEG-based localization of seizure foci and discuss various methods, their advantages, limitations, and challenges, with a focus on model-based data processing and machine learning algorithms. In addition, we survey whether combined analysis of EEG monitoring and neuroimaging techniques, known as multimodal brain data fusion, can potentially increase the precision of seizure foci localization. To this end, we further review and summarize the key parameters and challenges of processing, fusing, and analyzing multiple source data, within the framework of model-based signal processing, for the development of a multimodal brain data analysis system. This article can serve as a valuable resource for neuroscience researchers developing EEG-based rehabilitation systems based on multimodal data analysis for focal epilepsy.
Affiliation(s)
- Mahnoosh Tajmirriahi
- Medical Image and Signal Processing Research Center, Isfahan University of Medical Sciences, Isfahan, Iran
- Hossein Rabbani
- Medical Image and Signal Processing Research Center, Isfahan University of Medical Sciences, Isfahan, Iran

3. Gu Y, Guan Y, Yu Z, Dong B. SegCoFusion: An Integrative Multimodal Volumetric Segmentation Cooperating With Fusion Pipeline to Enhance Lesion Awareness. IEEE J Biomed Health Inform 2023;27:5860-5871. [PMID: 37738185] [DOI: 10.1109/jbhi.2023.3318131]
Abstract
Multimodal volumetric segmentation and fusion are two valuable techniques for surgical treatment planning, image-guided interventions, tumor growth detection, radiotherapy map generation, and more. In recent years, deep learning has demonstrated excellent capability in both tasks, yet existing methods face bottlenecks. On the one hand, recent segmentation studies, especially the U-Net-style series, have reached a performance ceiling on segmentation tasks. On the other hand, it is almost impossible to capture a ground truth for fusion in multimodal imaging, owing to the differing physical principles of the imaging modalities. Hence, most existing studies in multimodal medical image fusion, which fuse only two modalities at a time with hand-crafted proportions, are subjective and task-specific. To address these concerns, this work proposes an integration of multimodal segmentation and fusion, named SegCoFusion, which consists of a novel feature frequency dividing network, FDNet, and a segmentation part that uses a dual-single path feature supplementing strategy to optimize the segmentation inputs and couple them with the fusion part. Focusing on multimodal brain tumor volumetric fusion and segmentation, the qualitative and quantitative results demonstrate that SegCoFusion can break the performance ceiling of both segmentation and fusion methods. Its effectiveness is further shown by comparison with state-of-the-art fusion methods on 2D two-modality fusion tasks, where it achieves better fusion performance. The proposed SegCoFusion therefore offers a novel perspective: cooperating with segmentation improves volumetric fusion performance and enhances lesion awareness.

4. Liu Y, Zang Y, Zhou D, Cao J, Nie R, Hou R, Ding Z, Mei J. An Improved Hybrid Network With a Transformer Module for Medical Image Fusion. IEEE J Biomed Health Inform 2023;27:3489-3500. [PMID: 37023161] [DOI: 10.1109/jbhi.2023.3264819]
Abstract
Medical image fusion technology is an essential component of computer-aided diagnosis, which aims to extract useful cross-modality cues from raw signals to generate high-quality fused images. Many advanced methods focus on designing fusion rules, but there is still room for improvement in cross-modal information extraction. To this end, we propose a novel encoder-decoder architecture with three technical novelties. First, we divide the medical images into two attributes, namely pixel intensity distribution attributes and texture attributes, and thus design two self-reconstruction tasks to mine as many specific features as possible. Second, we propose a hybrid network combining a CNN and a transformer module to model both long-range and short-range dependencies. Moreover, we construct a self-adaptive weight fusion rule that automatically measures salient features. Extensive experiments on a public medical image dataset and other multimodal datasets show that the proposed method achieves satisfactory performance.
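The following PyTorch sketch illustrates the general CNN-plus-transformer idea the abstract relies on: convolutions model short-range structure, while a self-attention layer over the flattened feature map models long-range dependencies. It is a generic toy module under assumed sizes, not the authors' architecture.

```python
# Generic CNN + transformer hybrid toy module (not the authors' network).
import torch
import torch.nn as nn

class HybridBlock(nn.Module):
    def __init__(self, channels=32, nhead=4):
        super().__init__()
        self.conv = nn.Sequential(                     # short-range dependencies
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )
        self.attn = nn.TransformerEncoderLayer(        # long-range dependencies
            d_model=channels, nhead=nhead, batch_first=True)

    def forward(self, x):                              # x: (B, 1, H, W)
        f = self.conv(x)
        b, c, h, w = f.shape
        seq = f.flatten(2).transpose(1, 2)             # (B, H*W, C) tokens
        seq = self.attn(seq)
        return seq.transpose(1, 2).view(b, c, h, w)

features = HybridBlock()(torch.randn(1, 1, 64, 64))   # smoke test
```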

5. Panigrahy C, Seal A, Gonzalo-Martín C, Pathak P, Jalal AS. Parameter adaptive unit-linking pulse coupled neural network based MRI–PET/SPECT image fusion. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104659]

6. Alpar O. A mathematical fuzzy fusion framework for whole tumor segmentation in multimodal MRI using Nakagami imaging. Expert Syst Appl 2023;216:119462. [DOI: 10.1016/j.eswa.2022.119462]

7. Diwakar M, Singh P, Ravi V, Maurya A. A Non-Conventional Review on Multi-Modality-Based Medical Image Fusion. Diagnostics (Basel) 2023;13:820. [PMID: 36899965] [PMCID: PMC10000748] [DOI: 10.3390/diagnostics13050820]
Abstract
Today, medical images play a crucial role in obtaining relevant medical information for clinical purposes, but their quality must be analyzed and improved. Various factors affect the quality of medical images at the time of image reconstruction. To obtain the most clinically relevant information, multi-modality-based image fusion is beneficial. Numerous multi-modality-based image fusion techniques exist in the literature, each with its own assumptions, merits, and limitations. This paper critically analyzes a sizable body of non-conventional work in multi-modality-based image fusion. Researchers often need help understanding multi-modality-based image fusion and choosing the approach appropriate to their particular purpose. Hence, this paper briefly introduces multi-modality-based image fusion and its non-conventional methods, and highlights the merits and downsides of multi-modality-based image fusion.
Affiliation(s)
- Manoj Diwakar
- Department of CSE, Graphic Era Deemed to be University, Dehradun 248002, India
- Prabhishek Singh
- School of Computer Science Engineering and Technology, Bennett University, Greater Noida 201310, India
- Vinayakumar Ravi
- Center for Artificial Intelligence, Prince Mohammad Bin Fahd University, Khobar 34754, Saudi Arabia
- Ankur Maurya
- School of Computer Science Engineering and Technology, Bennett University, Greater Noida 201310, India

8. Yang Y, Cao S, Wan W, Huang S. Multi-modal medical image super-resolution fusion based on detail enhancement and weighted local energy deviation. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104387]

9. Chen X, Xie H, Li Z, Cheng G, Leng M, Wang FL. Information fusion and artificial intelligence for smart healthcare: a bibliometric study. Inf Process Manag 2023. [DOI: 10.1016/j.ipm.2022.103113]

10. Lu F, Du L, Chen W, Jiang H, Yang C, Pu Y, Wu J, Zhu J, Chen T, Zhang X, Wu C. T1-T2 dual-modal magnetic resonance contrast-enhanced imaging for rat liver fibrosis stage. RSC Adv 2022;12:35809-35819. [PMID: 36545112] [PMCID: PMC9749127] [DOI: 10.1039/d2ra05913d]
Abstract
The development of an effective method for staging liver fibrosis has always been a hot topic in liver fibrosis research. In this paper, PEGylated ultrafine superparamagnetic iron oxide nanocrystals (SPIO@PEG) were developed for T1-T2 dual-modal contrast-enhanced magnetic resonance imaging (MRI) and combined with MATLAB-based image fusion for staging liver fibrosis in a rat model. First, SPIO@PEG was synthesized and characterized, physically and biologically, as a T1-T2 dual-mode MRI contrast agent. Second, in subsequent in vivo MR imaging of liver fibrosis in rats, conventional T1- and T2-weighted imaging and T1 and T2 mapping of the liver before and after intravenous administration of SPIO@PEG were systematically collected and analyzed. Third, the T1 and T2 mapping images were fused in MATLAB and each rat's hepatic fibrosis positive pixel ratio (PPR) was quantitatively measured. SPIO@PEG was shown to have an ultrafine core size (4.01 ± 0.16 nm), satisfactory biosafety, and T1-T2 dual-mode contrast effects under a 3.0 T MR scanner (r2/r1 = 3.51). According to the image fusion results, the SPIO@PEG contrast-enhanced PPR differs significantly among the stages of liver fibrosis (P < 0.05). The combination of T1-T2 dual-modal SPIO@PEG and MATLAB-based image fusion technology could be a promising method for diagnosing and staging liver fibrosis in the rat model, and PPR could serve as a non-invasive biomarker to diagnose and discriminate the stages of liver fibrosis.
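The positive pixel ratio reduces to a masked threshold count on the fused parameter maps. The paper performed the fusion in MATLAB; the numpy sketch below is an analogue in which the fusion rule (averaging min-max-normalized T1 and T2 maps) and the threshold are illustrative assumptions, not the authors' settings.

```python
# Hedged numpy analogue of the PPR measurement on fused T1/T2 maps.
import numpy as np

def positive_pixel_ratio(t1_map, t2_map, liver_mask, threshold=0.5):
    """PPR = fraction of liver-mask pixels whose fused value exceeds threshold."""
    norm = lambda m: (m - m.min()) / (m.max() - m.min() + 1e-12)
    fused = 0.5 * (norm(t1_map) + norm(t2_map))   # illustrative average fusion
    positive = (fused > threshold) & liver_mask
    return positive.sum() / max(int(liver_mask.sum()), 1)
```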
Affiliation(s)
- Fulin Lu
- Medical Imaging Key Laboratory of Sichuan Province, School of Medical Imaging, Affiliated Hospital of North Sichuan Medical College, Nanchong 637000, China
- Department of Radiology, Sichuan Academy of Medical Sciences, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, Chengdu 610072, China
- Liang Du
- Medical Imaging Key Laboratory of Sichuan Province, School of Medical Imaging, Affiliated Hospital of North Sichuan Medical College, Nanchong 637000, China
- Wei Chen
- Medical Imaging Key Laboratory of Sichuan Province, School of Medical Imaging, Affiliated Hospital of North Sichuan Medical College, Nanchong 637000, China
- Hai Jiang
- Medical Imaging Key Laboratory of Sichuan Province, School of Medical Imaging, Affiliated Hospital of North Sichuan Medical College, Nanchong 637000, China
- Chenwu Yang
- Medical Imaging Key Laboratory of Sichuan Province, School of Medical Imaging, Affiliated Hospital of North Sichuan Medical College, Nanchong 637000, China
- Yu Pu
- Medical Imaging Key Laboratory of Sichuan Province, School of Medical Imaging, Affiliated Hospital of North Sichuan Medical College, Nanchong 637000, China
- Jun Wu
- Medical Imaging Key Laboratory of Sichuan Province, School of Medical Imaging, Affiliated Hospital of North Sichuan Medical College, Nanchong 637000, China
- Jiang Zhu
- Medical Imaging Key Laboratory of Sichuan Province, School of Medical Imaging, Affiliated Hospital of North Sichuan Medical College, Nanchong 637000, China
- Tianwu Chen
- Medical Imaging Key Laboratory of Sichuan Province, School of Medical Imaging, Affiliated Hospital of North Sichuan Medical College, Nanchong 637000, China
- Xiaoming Zhang
- Medical Imaging Key Laboratory of Sichuan Province, School of Medical Imaging, Affiliated Hospital of North Sichuan Medical College, Nanchong 637000, China
- Changqiang Wu
- Medical Imaging Key Laboratory of Sichuan Province, School of Medical Imaging, Affiliated Hospital of North Sichuan Medical College, Nanchong 637000, China

11. Kong W, Li C, Lei Y. Multimodal medical image fusion using convolutional neural network and extreme learning machine. Front Neurorobot 2022;16:1050981. [PMID: 36467563] [PMCID: PMC9708736] [DOI: 10.3389/fnbot.2022.1050981]
Abstract
The emergence of multimodal medical imaging technology greatly increases the accuracy of clinical diagnosis and etiological analysis. Nevertheless, each medical imaging modality unavoidably has its own limitations, so the fusion of multimodal medical images may be an effective solution. In this paper, a novel fusion method for multimodal medical images exploiting a convolutional neural network (CNN) and an extreme learning machine (ELM) is proposed. As a typical representative of deep learning, the CNN has been gaining popularity in the field of image processing, but it often suffers from drawbacks such as high computational cost and intensive human intervention. To this end, a convolutional extreme learning machine (CELM) model is constructed by incorporating the ELM into the traditional CNN model. The CELM serves as an important tool to extract and capture the features of the source images from a variety of angles, and the final fused image is obtained by integrating the significant features. Experimental results indicate that the proposed method not only helps enhance the accuracy of lesion detection and localization, but is also superior to current state-of-the-art methods in terms of both subjective visual performance and objective criteria.
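The ELM ingredient is the part that removes iterative training: a fixed random hidden layer followed by a closed-form least-squares readout. A minimal numpy sketch of that readout follows; the sigmoid activation and layer size are illustrative, and the convolutional feature extraction that makes it a CELM is omitted.

```python
# Minimal extreme learning machine: random hidden layer + pseudoinverse readout.
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, Y, n_hidden=256):
    """X: (n, d) feature rows; Y: (n, k) targets."""
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights, never trained
    b = rng.standard_normal(n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))           # hidden activations
    beta = np.linalg.pinv(H) @ Y                     # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```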
Affiliation(s)
- Weiwei Kong
- School of Computer Science and Technology, Xi'an University of Posts and Telecommunications, Xi'an, China
- Shaanxi Key Laboratory of Network Data Analysis and Intelligent Processing, Xi'an, China
- Xi'an Key Laboratory of Big Data and Intelligent Computing, Xi'an, China
- Chi Li
- School of Computer Science and Technology, Xi'an University of Posts and Telecommunications, Xi'an, China
- Shaanxi Key Laboratory of Network Data Analysis and Intelligent Processing, Xi'an, China
- Xi'an Key Laboratory of Big Data and Intelligent Computing, Xi'an, China
- Yang Lei
- College of Cryptography Engineering, Engineering University of PAP, Xi'an, China

12. Ramprasad MVS, Rahman MZU, Bayleyegn MD. A Deep Probabilistic Sensing and Learning Model for Brain Tumor Classification With Fusion-Net and HFCMIK Segmentation. IEEE Open J Eng Med Biol 2022;3:178-188. [PMID: 36712319] [PMCID: PMC9870266] [DOI: 10.1109/ojemb.2022.3217186]
Abstract
Goal: Implementation of an artificial intelligence-based medical diagnosis tool for brain tumor classification, called BTFSC-Net. Methods: Medical images are preprocessed using a hybrid probabilistic Wiener filter (HPWF). A deep learning convolutional neural network (DLCNN) is utilized to fuse MRI and CT images with robust edge analysis (REA) properties, which identify the slopes and edges of the source images. Then, hybrid fuzzy c-means integrated k-means (HFCMIK) clustering is used to segment the disease-affected region from the fused image. Further, hybrid features such as texture, colour, and low-level features are extracted from the fused image using gray-level co-occurrence matrix (GLCM) and redundant discrete wavelet transform (RDWT) descriptors. Finally, a deep learning-based probabilistic neural network (DLPNN) is used to classify malignant and benign tumors. BTFSC-Net attained 99.21% segmentation accuracy and 99.46% classification accuracy. Conclusions: The simulations showed that BTFSC-Net outperformed existing methods.
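Of the descriptors named above, the GLCM features are straightforward to reproduce with scikit-image; in the sketch below, the distances, angles, and chosen properties are illustrative defaults rather than the paper's exact configuration.

```python
# GLCM texture descriptors with scikit-image (illustrative parameters).
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(img_u8):
    """img_u8: 2-D uint8 image; returns a small texture feature vector."""
    glcm = graycomatrix(img_u8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.array([graycoprops(glcm, p).mean() for p in props])
```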
Affiliation(s)
- M V S Ramprasad
- Koneru Lakshmaiah Education Foundation (K L University), Guntur 522302, India
- GITAM (Deemed to be University), Visakhapatnam, AP 522502, India
- Md Zia Ur Rahman
- Department of Electronics and Communication Engineering, Koneru Lakshmaiah Education Foundation (K L University), Vaddeswaram, Guntur 522502, India

13. Multimodal Brain Image Fusion Based on Improved Rolling Guidance Filter and Wiener Filter. Comput Math Methods Med 2022;2022:5691099. [PMID: 36277015] [PMCID: PMC9581680] [DOI: 10.1155/2022/5691099]
Abstract
Medical image fusion technology can integrate complementary information from different modality medical images to provide a more complete and accurate description of the diagnosed object, which is very helpful for image-guided clinical diagnosis and treatment. This paper proposes an effective brain image fusion framework based on an improved rolling guidance filter (IRGF). Firstly, the input images are decomposed into base layers and detail layers using the IRGF and a Wiener filter. Secondly, visual saliency maps of the input images are computed from pixel-level saliency values, and the weight maps of the detail layers are constructed with a max-absolute strategy and further smoothed with a Gaussian filter, so that the fused image appears more natural and better suits human visual perception. Lastly, the base layers are fused by a visual-saliency-map-based fusion rule, and the corresponding weight maps of the detail layers are fused via a weighted least squares optimization scheme. Experimental results show that our method is superior to several state-of-the-art methods in both subjective and objective assessments.
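A compressed two-scale version of this pipeline can be written in a few lines: decompose each input into base and detail layers, build a max-absolute detail weight map, and smooth it with a Gaussian before blending. In the sketch below, a plain Gaussian filter stands in for the IRGF/Wiener decomposition, the saliency-based base rule is reduced to averaging, and all parameters are illustrative.

```python
# Simplified two-scale fusion: Gaussian base/detail split, max-absolute detail
# weights smoothed with a Gaussian (a stand-in for the IRGF/Wiener pipeline).
import numpy as np
from scipy.ndimage import gaussian_filter

def two_scale_fuse(a, b, sigma_base=4.0, sigma_w=2.0):
    base_a, base_b = gaussian_filter(a, sigma_base), gaussian_filter(b, sigma_base)
    det_a, det_b = a - base_a, b - base_b               # detail layers

    # Max-absolute weights, Gaussian-smoothed so the result looks natural.
    w = gaussian_filter((np.abs(det_a) >= np.abs(det_b)).astype(float), sigma_w)

    return 0.5 * (base_a + base_b) + w * det_a + (1 - w) * det_b
```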

14. Pan-Logical Probabilistic Algorithms Based on Convolutional Neural Networks. Comput Intell Neurosci 2022;2022:8935906. [PMID: 35990166] [PMCID: PMC9385339] [DOI: 10.1155/2022/8935906]
Abstract
Universal logic is a new kind of flexible logic system that aims to address a variety of uncertain problems. In this study, the role of convolutional neural networks in assessing probabilistic pan-logic algorithms is investigated. Because the outputs of probabilistic algorithms are unpredictable and difficult to analyze, a generic logic probability algorithm analysis based on a convolutional neural network is suggested. The pan-logic probability algorithm is investigated using the error backpropagation (BP) algorithm and stochastic gradient descent (SGD). The experimental data presented in this research show that the BP algorithm of the convolutional neural network reaches an accuracy rate of 89 percent, and the error decreases as the number of iterations grows. The SGD experiments indicate that raising the algorithm's learning rate reduces the loss value of the function, bringing the algorithm analysis closer to the real behavior.

15. Kong W, Miao Q, Lei Y, Ren C. Guided filter random walk and improved spiking cortical model based image fusion method in NSST domain. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2021.11.060]

16. Goyal B, Dogra A, Khoond R, Al-Turjman F. An Efficient Medical Assistive Diagnostic Algorithm for Visualisation of Structural and Tissue Details in CT and MRI Fusion. Cognit Comput 2021. [DOI: 10.1007/s12559-021-09958-y]

17. Zuo Q, Zhang J, Yang Y. DMC-Fusion: Deep Multi-Cascade Fusion With Classifier-Based Feature Synthesis for Medical Multi-Modal Images. IEEE J Biomed Health Inform 2021;25:3438-3449. [PMID: 34038372] [DOI: 10.1109/jbhi.2021.3083752]
Abstract
Multi-modal medical image fusion is a challenging yet important task for precision diagnosis and surgical planning in clinical practice. Although single-feature fusion strategies such as Densefuse have achieved inspiring performance, they tend not to fully preserve the source image features. In this paper, a deep multi-cascade fusion framework with classifier-based feature synthesis is proposed to automatically fuse multi-modal medical images. It consists of a pre-trained autoencoder based on dense connections, a feature classifier, and a multi-cascade fusion decoder that fuses high-frequency and low-frequency features separately. The encoder and decoder are transferred from the MS-COCO dataset and pre-trained simultaneously on public multi-modal medical image datasets to extract features. Feature classification is conducted through Gaussian high-pass filtering and peak signal-to-noise ratio thresholding, after which the feature maps in each layer of the pre-trained Dense-Block and decoder are divided into high-frequency and low-frequency sequences. Specifically, in the proposed feature fusion block, a parameter-adaptive pulse coupled neural network and an l1-weighted rule are employed to fuse the high-frequency and low-frequency components, respectively. Finally, a novel multi-cascade fusion decoder is designed over the full decoding stage to selectively fuse useful information from different modalities. We also validate our approach on brain disease classification using the fused images, and a statistical significance test illustrates that the improvement in classification performance is due to the fusion. Experimental results demonstrate that the proposed method achieves state-of-the-art performance in both qualitative and quantitative evaluations.
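One way to read the feature-classification step is sketched below: each feature map is split with a Gaussian high-pass filter, and the PSNR between the map and its low-pass version decides whether it belongs to the low- or high-frequency sequence. This is an interpretation of the abstract with an assumed threshold, not the authors' code.

```python
# Hedged sketch of Gaussian high-pass splitting plus PSNR-threshold labeling.
import numpy as np
from scipy.ndimage import gaussian_filter

def classify_feature_map(fmap, sigma=2.0, psnr_threshold=30.0):
    low = gaussian_filter(fmap, sigma)
    high = fmap - low                                   # Gaussian high-pass residue
    mse = np.mean(high ** 2) + 1e-12
    peak = (fmap.max() - fmap.min()) + 1e-12
    psnr = 10 * np.log10(peak ** 2 / mse)
    # Little high-pass energy (high PSNR) -> treat the map as low-frequency.
    label = "low" if psnr >= psnr_threshold else "high"
    return label, low, high
```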

18. Li X, Zhou F, Tan H, Zhang W, Zhao C. Multimodal medical image fusion based on joint bilateral filter and local gradient energy. Inf Sci (N Y) 2021. [DOI: 10.1016/j.ins.2021.04.052]

19. Zhu R, Li X, Zhang X, Wang J. HID: The Hybrid Image Decomposition Model for MRI and CT Fusion. IEEE J Biomed Health Inform 2021;26:727-739. [PMID: 34270437] [DOI: 10.1109/jbhi.2021.3097374]
Abstract
Multimodal medical image fusion can combine salient information from different source images of the same region and reduce the redundancy of information. In this paper, an efficient hybrid image decomposition (HID) method is proposed. It combines the advantages of spatial-domain and transform-domain methods and overcomes the limitations of algorithms based on a single category of features; the accurate separation of the base layer from the texture details allows the fusion rules to work more effectively. First, the source anatomical images are decomposed into a series of high frequencies and a low frequency via the nonsubsampled shearlet transform (NSST). Second, the low frequency is further decomposed, using a designed optimization model based on structural similarity and the structure tensor, into an energy texture layer and a base layer. Then, a modified choosing maximum (MCM) rule is designed to fuse the base layers, while the sum of modified Laplacian (SML) is used to fuse the high frequencies and the energy texture layers. Finally, the fused low frequency is obtained by adding the fused energy texture layer and base layer, and the fused image is reconstructed by the inverse NSST. The superiority of the proposed method is verified by extensive experiments on 50 pairs of magnetic resonance imaging (MRI) and computed tomography (CT) images, among others, and by comparison with 12 state-of-the-art medical image fusion methods. It is demonstrated that the proposed hybrid decomposition model extracts texture information better than conventional ones.
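The SML activity measure used for the high-frequency and energy-texture fusion is standard and compact; a numpy sketch with an illustrative window size follows.

```python
# Sum of modified Laplacian (SML) and a pick-the-larger-activity fusion rule.
import numpy as np
from scipy.ndimage import uniform_filter

def sml(img, window=3):
    ml = (np.abs(2 * img - np.roll(img, 1, 0) - np.roll(img, -1, 0)) +
          np.abs(2 * img - np.roll(img, 1, 1) - np.roll(img, -1, 1)))
    return uniform_filter(ml, size=window) * window ** 2   # windowed sum of ML

def fuse_by_sml(coef_a, coef_b):
    """Keep, per pixel, the coefficient with the larger SML activity."""
    return np.where(sml(coef_a) >= sml(coef_b), coef_a, coef_b)
```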

20. Multi-modal medical image fusion based on equilibrium optimizer algorithm and local energy functions. Appl Intell 2021. [DOI: 10.1007/s10489-021-02282-w]

21. Nagaraja Kumar N, Jayachandra Prasad T, Satya Prasad K. Optimized Dual-Tree Complex Wavelet Transform and Fuzzy Entropy for Multi-Modal Medical Image Fusion: A Hybrid Meta-Heuristic Concept. J Mech Med Biol 2021. [DOI: 10.1142/s021951942150024x]
Abstract
In recent times, multi-modal medical image fusion has emerged as an important medical application tool. Its goal is to fuse multi-modal medical images from diverse imaging modalities into a single fused image, which physicians broadly utilize for the precise identification and treatment of diseases. This medical image fusion approach helps the physician perform combined diagnosis, interventional treatment, pre-operative planning, and intra-operative guidance in various medical applications by developing the corresponding information from clinical images of different modalities. In this paper, a novel multi-modal medical image fusion method is adopted. Initially, the images from two different modalities are processed with an optimized Dual-Tree Complex Wavelet Transform (DT-CWT), splitting them into high-frequency and low-frequency subbands. As an improvement to the conventional DT-CWT, the filter coefficients are optimized by a hybrid meta-heuristic algorithm named Hybrid Beetle and Salp Swarm Optimization (HBSSO), which merges the Salp Swarm Algorithm (SSA) and Beetle Swarm Optimization (BSO). The high-frequency subbands of the source images are fused by optimized type-2 fuzzy entropy, whose upper and lower membership limits are optimized by the same HBSSO; the optimized type-2 fuzzy entropy automatically selects the high-frequency coefficients. The low-frequency sub-images are fused by an averaging approach. The inverse optimized DT-CWT applied to the fused subband sets then yields the final fused medical image. The main objective of both the optimized DT-CWT and the optimized type-2 fuzzy entropy is to maximize the SSIM. The experimental results confirm that the developed approach outperforms existing fusion algorithms across diverse performance measures.
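The fitness that both optimized components maximize is the SSIM between the fused result and the sources; a sketch of such an objective follows, where fuse_with_params is a hypothetical stand-in for the optimized DT-CWT plus type-2 fuzzy entropy pipeline.

```python
# SSIM fitness for a candidate parameter vector; fuse_with_params is hypothetical.
from skimage.metrics import structural_similarity as ssim

def fitness(params, img_a, img_b, fuse_with_params):
    fused = fuse_with_params(img_a, img_b, params)   # hypothetical fusion call
    drange = max(img_a.max(), img_b.max()) - min(img_a.min(), img_b.min())
    # Average structural similarity against both source images.
    return 0.5 * (ssim(fused, img_a, data_range=drange) +
                  ssim(fused, img_b, data_range=drange))
```

A swarm optimizer such as the paper's HBSSO would evaluate this fitness for each candidate and keep the parameter vector with the highest score.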
Affiliation(s)
- K. Satya Prasad
- Rector of Vignan's Foundation for Science Technology and Research, Guntur, Andhra Pradesh, India

22. Liu Y, Zhang C, Li C, Cheng J, Zhang Y, Xu H, Song T, Zhao L, Chen X. A practical PET/CT data visualization method with dual-threshold PET colorization and image fusion. Comput Biol Med 2020;126:104050. [PMID: 33096422] [DOI: 10.1016/j.compbiomed.2020.104050]
Abstract
Multi-modal medical imaging has emerged as a general trend in clinical diagnosis and treatment planning. In recent years, great efforts have been made to investigate and develop dual-modality scanners, among which PET/CT is the most widespread in clinical practice. In this paper, we propose a simple yet effective PET/CT data visualization method that integrates the two modalities into composite data for better observation. The proposed method consists of three main steps. First, a PET colorization approach based on a dual-threshold scheme applies a pair of high and low thresholds to colorize the PET image. Then, to extract functional information from the PET image more adequately, instead of the traditional blending fashion that directly uses the CT image as the underlay, we merge the CT and PET images with a Laplacian pyramid (LP)-based image fusion approach to generate the underlay. Finally, the visualization result is obtained by blending the fused image with the colorized PET image. Experiments are conducted on 5 sets of PET/CT scans containing 200 paired slices in total. The ClearCanvas software, and a variant using the presented PET colorization approach but with the CT image as underlay, are adopted for comparison. Experimental results demonstrate that the proposed method achieves more promising performance in terms of both visual perception and quantitative assessment. The code of the proposed method is available online at https://github.com/yuliu316316/Visualization.
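The LP-based underlay generation can be sketched compactly with OpenCV; the pyramid depth and the per-level averaging rule below are illustrative simplifications, and the authors' released code at the link above is the reference implementation.

```python
# Compact Laplacian pyramid fusion sketch (illustrative rule, OpenCV pyramids).
import cv2
import numpy as np

def lp_fuse(ct, pet, levels=4):
    """ct, pet: float32 grayscale images of identical size."""
    def lap_pyr(img):
        gp = [img]
        for _ in range(levels):
            gp.append(cv2.pyrDown(gp[-1]))
        # Band-pass levels plus the coarsest Gaussian level.
        return [gp[i] - cv2.pyrUp(gp[i + 1], dstsize=gp[i].shape[::-1])
                for i in range(levels)] + [gp[-1]]

    fused = [0.5 * (a + b) for a, b in zip(lap_pyr(ct), lap_pyr(pet))]
    out = fused[-1]
    for band in reversed(fused[:-1]):                 # collapse the pyramid
        out = cv2.pyrUp(out, dstsize=band.shape[::-1]) + band
    return out
```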
Affiliation(s)
- Yu Liu
- Department of Biomedical Engineering, Hefei University of Technology, Hefei, 230009, China.
- Chao Zhang
- Department of Biomedical Engineering, Hefei University of Technology, Hefei, 230009, China
- Chang Li
- Department of Biomedical Engineering, Hefei University of Technology, Hefei, 230009, China
- Juan Cheng
- Department of Biomedical Engineering, Hefei University of Technology, Hefei, 230009, China
- Yadong Zhang
- The First Affiliated Hospital of Anhui Medical University, Hefei, 230022, China
- Huiqin Xu
- The First Affiliated Hospital of Anhui Medical University, Hefei, 230022, China
- Tao Song
- SenseTime Research, Shanghai, 200233, China
- Liang Zhao
- SenseTime Research, Shanghai, 200233, China
- Xun Chen
- Department of Neurosurgery, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230001, China; Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei, 230026, China