1
Ravi J, Narmadha R. Optimized dual-tree complex wavelet transform aided multimodal image fusion with adaptive weighted average fusion strategy. Sci Rep 2024; 14:30246. [PMID: 39632891] [PMCID: PMC11618366] [DOI: 10.1038/s41598-024-81594-6]
Abstract
Image fusion retrieves significant information from a set of input images and combines it into a single, more informative result, enhancing both the applicability and the quality of the data. Multimodal image fusion, which merges images from multiple modalities into a single image while preserving exact details, is therefore an active research topic. However, existing approaches struggle to interpret the source images precisely and capture only local information, ignoring the wider context. To address these weaknesses, a multimodal image fusion model is developed based on a multi-resolution transform combined with an optimization strategy. First, images are taken from standard public datasets and passed to the Optimized Dual-Tree Complex Wavelet Transform (ODTCWT) to obtain low-frequency and high-frequency coefficients. Certain DTCWT parameters are tuned with a hybridized heuristic strategy, the Probability of Fitness-based Honey Badger Squirrel Search Optimization (PF-HBSSO), to enhance decomposition quality. The high-frequency coefficients are then fused with an adaptive weighted average fusion technique, whose weights are also optimized by PF-HBSSO to achieve optimal fused results, while the low-frequency coefficients are combined by average fusion. Finally, the fused coefficients are reconstructed into an image using the inverse ODTCWT. The experimental evaluation of the designed multimodal image fusion illustrates a superiority that distinguishes this work from others.
Affiliation(s)
- Jampani Ravi
- Department of Electronics and Communication Engineering, Sathyabama Institute of Science and Technology, Semmancheri, Chennai, 600119, India
- R Narmadha
- Department of Electronics and Communication Engineering, Sathyabama Institute of Science and Technology, Semmancheri, Chennai, 600119, India
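The decompose/fuse/reconstruct pipeline described in this entry can be illustrated with the open-source `dtcwt` Python package. This is a minimal sketch, not the authors' code: the fixed weight `w` stands in for the PF-HBSSO-optimized adaptive weights, and default transform parameters replace the optimized ones.

```python
# Minimal sketch of a DTCWT-based fusion pipeline (not the authors' code).
# Assumes the open-source `dtcwt` package; the PF-HBSSO-optimized adaptive
# weights are replaced by a single placeholder weight `w`.
import dtcwt

def fuse_dtcwt(img_a, img_b, nlevels=3, w=0.5):
    # Images are assumed to be same-sized 2-D arrays with even dimensions.
    t = dtcwt.Transform2d()
    pa = t.forward(img_a.astype(float), nlevels=nlevels)
    pb = t.forward(img_b.astype(float), nlevels=nlevels)

    # Low-frequency coefficients: plain average fusion.
    low = 0.5 * (pa.lowpass + pb.lowpass)

    # High-frequency coefficients: weighted average per level
    # (the paper optimizes these weights with PF-HBSSO).
    highs = tuple(w * ha + (1.0 - w) * hb
                  for ha, hb in zip(pa.highpasses, pb.highpasses))

    # Reconstruct the fused image with the inverse transform.
    return t.inverse(dtcwt.Pyramid(low, highs))
```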
2
Allapakam V, Karuna Y. An ensemble deep learning model for medical image fusion with Siamese neural networks and VGG-19. PLoS One 2024; 19:e0309651. [PMID: 39441782] [PMCID: PMC11498686] [DOI: 10.1371/journal.pone.0309651]
Abstract
Multimodal medical image fusion methods, which combine complementary information from multiple medical imaging modalities, are among the most important and practical approaches in numerous clinical applications. Various conventional techniques have been developed for multimodality image fusion, but complex weight-map computation procedures, fixed fusion strategies, and a lack of contextual understanding remain difficult in both conventional and machine learning approaches, usually resulting in artefacts that degrade image quality. This work proposes an efficient hybrid learning model for medical image fusion that combines a pre-trained network (VGG-19) and a non-pre-trained Siamese neural network (SNN) through a stacking ensemble method. By leveraging the unique capabilities of each architecture, the model effectively preserves detailed information with high visual quality across numerous combinations of image modalities, with notably improved contrast, increased resolution, and fewer artefacts. The ensemble is also more robust in fusing the various combinations of source images publicly available from the Harvard Medical Image Fusion datasets, GitHub, and Kaggle. The proposed model is superior in visual quality and performance metrics to existing fusion methods in the literature such as PCA+DTCWT, NSCT, DWT, DTCWT+NSCT, GADCT, CNN, and VGG-19.
Affiliation(s)
- Venu Allapakam
- School of Electronics Engineering, Vellore Institute of Technology, Vellore, India
- Yepuganti Karuna
- School of Electronics Engineering, VIT-AP University, Amaravathi, India
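A common VGG-19-based weighting scheme conveys the flavor of feature-driven fusion used in this entry; the paper's actual model stacks VGG-19 with a Siamese network in an ensemble, which is not reproduced here. A minimal sketch, assuming torchvision's pretrained VGG-19:

```python
# Sketch of VGG-19 activity-level weighting for fusion (a common scheme;
# the paper's full model is a VGG-19 + Siamese-network stacking ensemble).
import torch
import torchvision.models as models

# Truncate VGG-19 after relu2_2 as a generic feature extractor.
vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features[:9].eval()

def fuse(img_a, img_b):
    """img_a, img_b: (1, 3, H, W) tensors scaled to [0, 1]."""
    with torch.no_grad():
        fa, fb = vgg(img_a), vgg(img_b)
    # Channel-wise L1 norm as the activity measure; a softmax over the
    # two sources gives per-pixel fusion weights.
    aa = fa.abs().sum(dim=1, keepdim=True)
    ab = fb.abs().sum(dim=1, keepdim=True)
    wa = torch.softmax(torch.cat([aa, ab], dim=1), dim=1)[:, :1]
    # Upsample the weight map back to the input resolution.
    wa = torch.nn.functional.interpolate(wa, size=img_a.shape[-2:],
                                         mode='bilinear', align_corners=False)
    return wa * img_a + (1 - wa) * img_b
```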
3
Tajmirriahi M, Rabbani H. A Review of EEG-based Localization of Epileptic Seizure Foci: Common Points with Multimodal Fusion of Brain Data. J Med Signals Sens 2024; 14:19. [PMID: 39234592] [PMCID: PMC11373807] [DOI: 10.4103/jmss.jmss_11_24]
Abstract
Unexpected seizures significantly decrease the quality of life of epileptic patients. Seizure attacks are caused by hyperexcitability and anatomical lesions in particular regions of the brain, and cognitive impairments and memory deficits are their most common concomitant effects. In addition to seizure-reduction treatments, medical rehabilitation involving brain-computer interfaces and neurofeedback can improve cognition and quality of life in most patients with focal epilepsy, in particular when resective epilepsy surgery is considered as a treatment for drug-resistant epilepsy. Source estimation and precise localization of epileptic foci can improve such rehabilitation and treatment. Electroencephalography (EEG) monitoring and multimodal noninvasive neuroimaging techniques such as ictal/interictal single-photon emission computerized tomography (SPECT) imaging and structural magnetic resonance imaging are common practices for the localization of epileptic foci and have been examined in several studies. In this article, we review the most recent research on EEG-based localization of seizure foci and discuss various methods, their advantages, limitations, and challenges, with a focus on model-based data processing and machine learning algorithms. In addition, we survey whether combined analysis of EEG monitoring and neuroimaging techniques, known as multimodal brain data fusion, can potentially increase the precision of seizure foci localization. To this end, we further review and summarize the key parameters and challenges of processing, fusing, and analyzing multiple source data, in the framework of model-based signal processing, for the development of a multimodal brain data analysis system. This article can serve as a valuable resource for neuroscience researchers developing EEG-based rehabilitation systems based on multimodal data analysis for focal epilepsy.
Affiliation(s)
- Mahnoosh Tajmirriahi
- Medical Image and Signal Processing Research Center, Isfahan University of Medical Sciences, Isfahan, Iran
- Hossein Rabbani
- Medical Image and Signal Processing Research Center, Isfahan University of Medical Sciences, Isfahan, Iran
4
Chen M, Li Y, Zhang K, Liu H. Protein coding regions prediction by fusing DNA shape features. N Biotechnol 2024; 80:21-26. [PMID: 38182076] [DOI: 10.1016/j.nbt.2023.12.006]
Abstract
Exons crucial for coding are often hidden within introns, and the two tend to vary greatly in length, so deep learning-based protein coding region prediction methods often perform poorly on structurally complex genomes. DNA shape information also plays a role in revealing the underlying logic of gene expression, yet current methods ignore the influence of DNA shape features when distinguishing coding from non-coding regions. We propose a method to predict protein-coding regions using the CNNS-BRNN model, which incorporates DNA shape features and improves the model's ability to distinguish between intronic and exonic features. We use a fusion coding technique that combines DNA shape features with traditional sequence features. Experiments show that this method outperforms the baseline in metrics such as AUC and F1 by 2.3% and 5.3%, respectively, and that the fusion coding scheme introducing DNA shape features yields a significant improvement in model performance.
Affiliation(s)
- Miao Chen
- Ocean University of China, College of Computer Science and Technology, Qingdao 266100, China
- Yangyang Li
- Ocean University of China, College of Computer Science and Technology, Qingdao 266100, China
- Kun Zhang
- Ocean University of China, College of Computer Science and Technology, Qingdao 266100, China
- Hao Liu
- Ocean University of China, College of Computer Science and Technology, Qingdao 266100, China
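The "fusion coding" idea of combining sequence features with per-base shape features can be illustrated with a toy encoder. The shape values below are random placeholders, whereas the paper derives real shape features; the CNNS-BRNN model itself is not sketched.

```python
# Sketch of fusing one-hot sequence features with per-base DNA shape
# features. The shape array here is a random placeholder; real pipelines
# derive such values from shape-prediction tools or lookup tables.
import numpy as np

BASES = {'A': 0, 'C': 1, 'G': 2, 'T': 3}

def one_hot(seq):
    x = np.zeros((len(seq), 4))
    for i, b in enumerate(seq):
        x[i, BASES[b]] = 1.0
    return x

def fuse_features(seq, shape_feats):
    """shape_feats: (len(seq), k) array, e.g. minor groove width, roll,
    propeller twist, and helix twist at each position."""
    return np.concatenate([one_hot(seq), shape_feats], axis=1)

seq = "ATGCGT"
shape = np.random.rand(len(seq), 4)     # placeholder shape values
print(fuse_features(seq, shape).shape)  # (6, 8): 4 sequence + 4 shape dims
```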
5
Mandracchia B, Liu W, Hua X, Forghani P, Lee S, Hou J, Nie S, Xu C, Jia S. Optimal sparsity allows reliable system-aware restoration of fluorescence microscopy images. Sci Adv 2023; 9:eadg9245. [PMID: 37647399] [PMCID: PMC10468132] [DOI: 10.1126/sciadv.adg9245]
Abstract
Fluorescence microscopy is one of the most indispensable and informative driving forces for biological research, but the extent of observable biological phenomena is essentially determined by the content and quality of the acquired images. To address the different noise sources that can degrade these images, we introduce an algorithm for multiscale image restoration through optimally sparse representation (MIRO). MIRO is a deterministic framework that models the acquisition process and uses pixelwise noise correction to improve image quality. Our study demonstrates that this approach yields a remarkable restoration of the fluorescence signal for a wide range of microscopy systems, regardless of the detector used (e.g., electron-multiplying charge-coupled device, scientific complementary metal-oxide semiconductor, or photomultiplier tube). MIRO improves current imaging capabilities, enabling fast, low-light optical microscopy, accurate image analysis, and robust machine intelligence when integrated with deep neural networks. This expands the range of biological knowledge that can be obtained from fluorescence microscopy.
Affiliation(s)
- Biagio Mandracchia
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Scientific-Technical Central Units, Instituto de Salud Carlos III (ISCIII), Majadahonda, Spain
- ETSI Telecomunicación, Universidad de Valladolid, Valladolid, Spain
- Wenhao Liu
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Xuanwen Hua
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Parvin Forghani
- Department of Pediatrics, School of Medicine, Emory University, Atlanta, GA, USA
- Soojung Lee
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Jessica Hou
- School of Biological Sciences, Georgia Institute of Technology, Atlanta, GA, USA
- Shuyi Nie
- School of Biological Sciences, Georgia Institute of Technology, Atlanta, GA, USA
- Parker H. Petit Institute for Bioengineering and Bioscience, Georgia Institute of Technology, Atlanta, GA, USA
- Chunhui Xu
- Department of Pediatrics, School of Medicine, Emory University, Atlanta, GA, USA
- Parker H. Petit Institute for Bioengineering and Bioscience, Georgia Institute of Technology, Atlanta, GA, USA
- Shu Jia
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Parker H. Petit Institute for Bioengineering and Bioscience, Georgia Institute of Technology, Atlanta, GA, USA
6
VANet: a medical image fusion model based on attention mechanism to assist disease diagnosis. BMC Bioinformatics 2022; 23:548. [PMID: 36536297] [PMCID: PMC9762055] [DOI: 10.1186/s12859-022-05072-4]
Abstract
BACKGROUND Today's biomedical imaging technology can present the morphological structure or functional metabolic information of organisms at different scales, such as organ, tissue, cell, molecule, and gene. However, different imaging modes have different application scopes, advantages, and disadvantages. To strengthen the role of medical images in disease diagnosis, the fusion of biomedical image information across imaging modes and scales has become an important research direction in medical imaging. Traditional medical image fusion methods focus on designing activity-level measurements and fusion rules; they do not mine the contextual features of the different imaging modes, which hinders improvement of fused-image quality. METHOD In this paper, an attention-multiscale network medical image fusion model based on contextual features is proposed. The model selects five backbone modules of the VGG-16 network to build encoders that obtain the contextual features of medical images. It builds an attention-mechanism branch to fuse global contextual features and designs a residual multiscale detail-processing branch to fuse local contextual features. Finally, the decoder performs cascade reconstruction of the features to obtain the fused image. RESULTS Ten sets of images related to five diseases are selected from the AANLIB database to validate the VANet model. Structural images are derived from high-resolution MR images, and functional images are derived from SPECT and PET images, which are good at describing organ blood-flow levels and tissue metabolism. Fusion experiments are performed on twelve fusion algorithms, including the VANet model. Eight metrics covering different aspects are selected to build a fusion-quality evaluation system for the fused images, and Friedman's test and the post-hoc Nemenyi test are introduced to statistically demonstrate the superiority of the VANet model. CONCLUSIONS The VANet model fully captures and fuses the texture details and color information of the source images. In the fusion results, metabolic and structural information is well expressed, and color information does not interfere with structure and texture; in terms of the objective evaluation system, the metric values of the VANet model are generally higher than those of other methods; in terms of efficiency, the time consumption of the model is acceptable; in terms of scalability, the model is not affected by the input order of the source images and can be extended to tri-modal fusion.
7
Tawfik N, Elnemr HA, Fakhr M, Dessouky MI, El-Samie FEA. Multimodal Medical Image Fusion Using Stacked Auto-encoder in NSCT Domain. J Digit Imaging 2022; 35:1308-1325. [PMID: 35768753] [PMCID: PMC9582113] [DOI: 10.1007/s10278-021-00554-y]
Abstract
Medical image fusion aims to merge the important information from images of the same organ acquired with different modalities to create a more informative fused image. In recent years, deep learning (DL) methods have achieved significant breakthroughs in image fusion because of their great efficiency, and they have become an active topic due to their strong feature extraction and data representation abilities. In this work, the stacked sparse auto-encoder (SSAE), a general category of deep neural networks, is exploited for medical image fusion. The SSAE is an efficient technique for unsupervised feature extraction with a high capability for complex data representation. The proposed fusion method is carried out as follows. First, the source images are decomposed into low- and high-frequency coefficient sub-bands with the non-subsampled contourlet transform (NSCT), a flexible multi-scale decomposition technique that is superior to traditional decompositions in several respects. The SSAE is then applied to the high-frequency coefficients to extract a sparse, deep feature representation. Spatial frequencies are computed on the obtained features and used to fuse the high-frequency coefficients, while a maximum-based fusion rule is applied to the low-frequency sub-band coefficients. The final integrated image is acquired by applying the inverse NSCT. The proposed method has been applied and assessed on various groups of medical image modalities, and experimental results prove that it effectively merges multimodal medical images while preserving detail information.
Affiliation(s)
- Nahed Tawfik
- Computers and Systems Department, Electronics Research Institute, Joseph Tito St, El Nozha, Huckstep, Cairo, Egypt
- Heba A Elnemr
- Department of Computer and Software Engineering, Misr University for Science and Technology, Giza, Egypt
- Mahmoud Fakhr
- Computers and Systems Department, Electronics Research Institute, Joseph Tito St, El Nozha, Huckstep, Cairo, Egypt
- Moawad I Dessouky
- Electronics and Electrical Communications Department, Faculty of Electronic Engineering, Menoufia University, Menouf, Egypt
- Fathi E Abd El-Samie
- Electronics and Electrical Communications Department, Faculty of Electronic Engineering, Menoufia University, Menouf, Egypt
- Department of Information Technology, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, Riyadh, 11671, Saudi Arabia
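The spatial-frequency fusion rule for high-frequency coefficients can be sketched as below. In the paper, SF is computed on SSAE feature representations rather than directly on the sub-bands, so this is a simplified stand-in.

```python
# Sketch of the spatial frequency (SF) measure and a choose-max fusion rule.
# Simplified: operates directly on coefficient sub-bands, whereas the paper
# computes SF on SSAE features.
import numpy as np

def spatial_frequency(x):
    rf2 = np.mean(np.diff(x, axis=1) ** 2)  # row frequency (along each row)
    cf2 = np.mean(np.diff(x, axis=0) ** 2)  # column frequency
    return np.sqrt(rf2 + cf2)

def fuse_by_sf(band_a, band_b):
    # Keep the sub-band whose content is more "active" by SF.
    if spatial_frequency(band_a) >= spatial_frequency(band_b):
        return band_a
    return band_b
```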
8
Deep learning with multiresolution handcrafted features for brain MRI segmentation. Artif Intell Med 2022; 131:102365. [DOI: 10.1016/j.artmed.2022.102365]
9
Li H, Bhatt M, Qu Z, Zhang S, Hartel MC, Khademhosseini A, Cloutier G. Deep learning in ultrasound elastography imaging: A review. Med Phys 2022; 49:5993-6018. [PMID: 35842833] [DOI: 10.1002/mp.15856]
Abstract
It is known that changes in the mechanical properties of tissues are associated with the onset and progression of certain diseases. Ultrasound elastography is a technique to characterize tissue stiffness using ultrasound imaging, either by measuring tissue strain with quasi-static or natural organ pulsation elastography, or by tracing a propagated shear wave induced by a source or a natural vibration with dynamic elastography. In recent years, deep learning has begun to emerge in ultrasound elastography research. In this review, several common deep learning frameworks from the computer vision community, such as the multilayer perceptron, convolutional neural network, and recurrent neural network, are described. Recent advances in ultrasound elastography using such deep learning techniques are then revisited in terms of algorithm development and clinical diagnosis. Finally, the current challenges and future developments of deep learning in ultrasound elastography are discussed.
Affiliation(s)
- Hongliang Li
- Laboratory of Biorheology and Medical Ultrasonics, University of Montreal Hospital Research Center, Montréal, Québec, Canada
- Institute of Biomedical Engineering, University of Montreal, Montréal, Québec, Canada
- Manish Bhatt
- Laboratory of Biorheology and Medical Ultrasonics, University of Montreal Hospital Research Center, Montréal, Québec, Canada
- Zhen Qu
- Laboratory of Biorheology and Medical Ultrasonics, University of Montreal Hospital Research Center, Montréal, Québec, Canada
- Shiming Zhang
- California Nanosystems Institute, University of California, Los Angeles, California, USA
- Martin C Hartel
- California Nanosystems Institute, University of California, Los Angeles, California, USA
- Ali Khademhosseini
- California Nanosystems Institute, University of California, Los Angeles, California, USA
- Guy Cloutier
- Laboratory of Biorheology and Medical Ultrasonics, University of Montreal Hospital Research Center, Montréal, Québec, Canada
- Institute of Biomedical Engineering, University of Montreal, Montréal, Québec, Canada
- Department of Radiology, Radio-Oncology and Nuclear Medicine, University of Montreal, Montréal, Québec, Canada
10
Ullah H, Zhao Y, Abdalla FYO, Wu L. Fast local Laplacian filtering based enhanced medical image fusion using parameter-adaptive PCNN and local features-based fuzzy weighted matrices. Appl Intell 2022. [DOI: 10.1007/s10489-021-02834-0]
11
Lakshmi A, Rajasekaran MP, Jeevitha S, Selvendran S. An Adaptive MRI-PET Image Fusion Model Based on Deep Residual Learning and Self-Adaptive Total Variation. Arab J Sci Eng 2022. [DOI: 10.1007/s13369-020-05201-2]
12
Image fusion algorithm based on unsupervised deep learning-optimized sparse representation. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103140]
13
A multiscale double-branch residual attention network for anatomical-functional medical image fusion. Comput Biol Med 2021; 141:105005. [PMID: 34763846] [DOI: 10.1016/j.compbiomed.2021.105005]
Abstract
Medical image fusion technology synthesizes complementary information from multimodal medical images. This technology is playing an increasingly important role in clinical applications. In this paper, we propose a new convolutional neural network, which is called the multiscale double-branch residual attention (MSDRA) network, for fusing anatomical-functional medical images. Our network contains a feature extraction module, a feature fusion module and an image reconstruction module. In the feature extraction module, we use three identical MSDRA blocks in series to extract image features. The MSDRA block has two branches. The first branch uses a multiscale mechanism to extract features of different scales with three convolution kernels of different sizes, while the second branch uses six 3 × 3 convolutional kernels. In addition, we propose the Feature L1-Norm fusion strategy to fuse the features obtained from the input images. Compared with the reference image fusion algorithms, MSDRA consumes less fusion time and achieves better results in visual quality and the objective metrics of Spatial Frequency (SF), Average Gradient (AG), Edge Intensity (EI), Quality-Aware Clustering (QAC), Variance (VAR), and Visual Information Fidelity for Fusion (VIFF).
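A toy PyTorch rendering of the double-branch idea in the MSDRA block follows; channel counts, activations, and the residual wiring here are illustrative assumptions, not the paper's exact configuration.

```python
# Illustrative sketch of the MSDRA double-branch block (not the paper's
# exact layer configuration).
import torch
import torch.nn as nn

class MultiscaleBranch(nn.Module):
    """Branch 1: parallel convolutions with three kernel sizes."""
    def __init__(self, ch):
        super().__init__()
        self.k3 = nn.Conv2d(ch, ch, 3, padding=1)
        self.k5 = nn.Conv2d(ch, ch, 5, padding=2)
        self.k7 = nn.Conv2d(ch, ch, 7, padding=3)
        self.mix = nn.Conv2d(3 * ch, ch, 1)  # merge the three scales

    def forward(self, x):
        y = torch.cat([self.k3(x), self.k5(x), self.k7(x)], dim=1)
        return self.mix(y)

class MSDRABlock(nn.Module):
    """Two branches plus a residual connection, as described in the entry."""
    def __init__(self, ch):
        super().__init__()
        self.branch1 = MultiscaleBranch(ch)
        # Branch 2: a stack of six 3x3 convolutions.
        self.branch2 = nn.Sequential(*[nn.Conv2d(ch, ch, 3, padding=1)
                                       for _ in range(6)])

    def forward(self, x):
        return torch.relu(self.branch1(x) + self.branch2(x) + x)
```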
14
CT and MRI image fusion algorithm based on hybrid ℓ0ℓ1 layer decomposing and two-dimensional variation transform. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.103024]
15
Zhu R, Li X, Zhang X, Wang J. HID: The Hybrid Image Decomposition Model for MRI and CT Fusion. IEEE J Biomed Health Inform 2021; 26:727-739. [PMID: 34270437] [DOI: 10.1109/jbhi.2021.3097374]
Abstract
Multimodal medical image fusion can combine salient information from different source images of the same part and reduce the redundancy of information. In this paper, an efficient hybrid image decomposition (HID) method is proposed. It combines the advantages of spatial-domain and transform-domain methods and breaks through the limitations of algorithms based on a single category of features; the accurate separation of the base layer and texture details allows the fusion rules to work more effectively. First, the source anatomical images are decomposed into a series of high frequencies and a low frequency via the nonsubsampled shearlet transform (NSST). Second, the low frequency is further decomposed using a designed optimization model based on structural similarity and the structure tensor to obtain an energy texture layer and a base layer. Then, the modified choosing maximum (MCM) rule is designed to fuse the base layers, and the sum of modified Laplacian (SML) is used to fuse the high frequencies and energy texture layers. Finally, the fused low frequency is obtained by adding the fused energy texture layer and base layer, and the fused image is reconstructed by the inverse NSST. The superiority of the proposed method is verified by extensive experiments on 50 pairs of magnetic resonance imaging (MRI) and computed tomography (CT) images, among others, and by comparison with 12 state-of-the-art medical image fusion methods. It is demonstrated that the proposed hybrid decomposition model has a better ability to extract texture information than conventional ones.
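The sum of modified Laplacian (SML) used above to fuse high frequencies and energy texture layers is a standard focus measure. A minimal sketch, where the window size and the wrap-around boundary handling are simplifying assumptions:

```python
# Sketch of the sum of modified Laplacian (SML) focus measure and a
# choose-max fusion rule. Boundary handling is simplified via np.roll.
import numpy as np
from scipy.ndimage import uniform_filter

def sml(x, win=3):
    # Modified Laplacian: |2f - f_up - f_down| + |2f - f_left - f_right|.
    ml = (np.abs(2 * x - np.roll(x, 1, axis=0) - np.roll(x, -1, axis=0)) +
          np.abs(2 * x - np.roll(x, 1, axis=1) - np.roll(x, -1, axis=1)))
    # Windowed sum (up to a constant factor) over a win x win neighborhood.
    return uniform_filter(ml, size=win)

def fuse_by_sml(a, b):
    mask = sml(a) >= sml(b)
    return np.where(mask, a, b)
```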
16
Das M, Gupta D, Radeva P, Bakde AM. Optimized CT-MR neurological image fusion framework using biologically inspired spiking neural model in hybrid ℓ1 − ℓ0 layer decomposition domain. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102535]
17
Valverde JM, Imani V, Abdollahzadeh A, De Feo R, Prakash M, Ciszek R, Tohka J. Transfer Learning in Magnetic Resonance Brain Imaging: A Systematic Review. J Imaging 2021; 7:66. [PMID: 34460516] [PMCID: PMC8321322] [DOI: 10.3390/jimaging7040066]
Abstract
(1) Background: Transfer learning refers to machine learning techniques that focus on acquiring knowledge from related tasks to improve generalization in the tasks of interest. In magnetic resonance imaging (MRI), transfer learning is important for developing strategies that address the variation in MR images from different imaging protocols or scanners. Additionally, transfer learning is beneficial for reutilizing machine learning models that were trained to solve different (but related) tasks to the task of interest. The aim of this review is to identify research directions, gaps in knowledge, applications, and widely used strategies among the transfer learning approaches applied in MR brain imaging; (2) Methods: We performed a systematic literature search for articles that applied transfer learning to MR brain imaging tasks. We screened 433 studies for their relevance, and we categorized and extracted relevant information, including task type, application, availability of labels, and machine learning methods. Furthermore, we closely examined brain MRI-specific transfer learning approaches and other methods that tackled issues relevant to medical imaging, including privacy, unseen target domains, and unlabeled data; (3) Results: We found 129 articles that applied transfer learning to MR brain imaging tasks. The most frequent applications were dementia-related classification tasks and brain tumor segmentation. The majority of articles utilized transfer learning techniques based on convolutional neural networks (CNNs). Only a few approaches utilized clearly brain MRI-specific methodology, and considered privacy issues, unseen target domains, or unlabeled data. We proposed a new categorization to group specific, widely-used approaches such as pretraining and fine-tuning CNNs; (4) Discussion: There is increasing interest in transfer learning for brain MRI. Well-known public datasets have clearly contributed to the popularity of Alzheimer's diagnostics/prognostics and tumor segmentation as applications. Likewise, the availability of pretrained CNNs has promoted their utilization. Finally, the majority of the surveyed studies did not examine in detail the interpretation of their strategies after applying transfer learning, and did not compare their approach with other transfer learning approaches.
Affiliation(s)
- Jussi Tohka
- A.I. Virtanen Institute for Molecular Sciences, University of Eastern Finland, 70150 Kuopio, Finland; (J.M.V.); (V.I.); (A.A.); (R.D.F.); (M.P.); (R.C.)
18
Li Q, Zeng J, Lin L, Zhang J, Zhu J, Yao L, Wang S, Du J, Wu Z. Mid-infrared spectra feature extraction and visualization by convolutional neural network for sugar adulteration identification of honey and real-world application. Lebensm Wiss Technol 2021; 140:110856. [DOI: 10.1016/j.lwt.2021.110856]
19
Sunitha T, Rajalakshmi R. Multi-modal image fusion technique for enhancing image quality with multi-scale decomposition algorithm. Comput Methods Biomech Biomed Eng Imaging Vis 2021. [DOI: 10.1080/21681163.2020.1830437]
Affiliation(s)
- T.O. Sunitha
- Department of Computer Applications, Manonmaniam Sundaranar University, Tirunelveli, India
- R. Rajalakshmi
- Department of Computer Science, Noorul Islam College of Arts and Sciences, Kumara Coil, India
20
Hu Q, Hu S, Zhang F. Multi-modality image fusion combining sparse representation with guidance filtering. Soft Comput 2021. [DOI: 10.1007/s00500-020-05448-9]
21
Elzeki OM, Abd Elfattah M, Salem H, Hassanien AE, Shams M. A novel perceptual two layer image fusion using deep learning for imbalanced COVID-19 dataset. PeerJ Comput Sci 2021; 7:e364. [PMID: 33817014] [PMCID: PMC7959632] [DOI: 10.7717/peerj-cs.364]
Abstract
BACKGROUND AND PURPOSE COVID-19 is a new viral strain that has disrupted life worldwide. The novel coronavirus is spreading rapidly across the world and poses a threat to people's health. Experimental medical tests and analyses have shown that lung infection occurs in almost all COVID-19 patients. Although computed tomography of the chest is a useful imaging method for diagnosing lung-related diseases, chest X-ray (CXR) is more widely available, mainly due to its lower cost. Deep learning (DL), one of the most popular artificial intelligence techniques, is an effective way to help doctors analyze the large numbers of CXR images, which is crucial to diagnostic performance. MATERIALS AND METHODS In this article, we propose a novel perceptual two-layer image fusion using DL to obtain more informative CXR images for a COVID-19 dataset. To assess the performance of the proposed algorithm, the dataset used for this work includes 87 CXR images acquired from 25 cases, all confirmed with COVID-19. Dataset preprocessing is needed to facilitate the role of the convolutional neural networks (CNN). Thus, hybrid decomposition and fusion of the Nonsubsampled Contourlet Transform (NSCT) and CNN_VGG19 as feature extractor were used. RESULTS Our experimental results show that the algorithm established here can reliably handle imbalanced COVID-19 datasets. Compared to the source COVID-19 dataset, the fused images contain more features and characteristics. For performance evaluation, six metrics are applied, namely QAB/F, QMI, PSNR, SSIM, SF, and STD, to assess various medical image fusion (MIF) methods. In QMI, PSNR, and SSIM, the proposed NSCT + CNN_VGG19 algorithm achieves the highest scores, and its fused images retain the largest amount of feature detail. We can deduce that the proposed fusion algorithm is efficient enough to generate CXR COVID-19 images that are more useful for the examiner to explore patient status. CONCLUSIONS A novel image fusion algorithm using DL for an imbalanced COVID-19 dataset is the crucial contribution of this work. Extensive experimental results show that the proposed NSCT + CNN_VGG19 algorithm outperforms competitive image fusion algorithms.
Affiliation(s)
- Omar M. Elzeki
- Faculty of Computers and Information Sciences, Mansoura University, Mansoura, Egypt
- Hanaa Salem
- Communications and Computers Engineering Department, Faculty of Engineering, Delta University for Science and Technology, Gamasa, Egypt
- Aboul Ella Hassanien
- Faculty of Computers and Artificial Intelligence, Cairo University, Cairo, Egypt
- Scientific Research Group in Egypt (SRGE), Cairo, Egypt
- Mahmoud Shams
- Faculty of Artificial Intelligence, Kafrelsheikh University, Kafrelsheikh, Egypt
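Four of the six reported metrics are straightforward to reproduce with scikit-image and NumPy; QAB/F and QMI need more machinery and are omitted. A sketch, assuming float images scaled to [0, 1] and a reference image for PSNR/SSIM:

```python
# Sketch of four of the six reported fusion metrics (QAB/F and QMI omitted).
# Assumes 2-D float images scaled to [0, 1].
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(fused, reference):
    sf = np.sqrt(np.mean(np.diff(fused, axis=1) ** 2) +  # row frequency
                 np.mean(np.diff(fused, axis=0) ** 2))   # column frequency
    return {
        "PSNR": peak_signal_noise_ratio(reference, fused, data_range=1.0),
        "SSIM": structural_similarity(reference, fused, data_range=1.0),
        "SF": sf,
        "STD": float(fused.std()),
    }
```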
22
Sangeetha Francelin Vinnarasi F, Daniel J, Anita Rose JT, Pugalenthi R. Deep learning supported disease detection with multi-modality image fusion. J Xray Sci Technol 2021; 29:411-434. [PMID: 33814482] [DOI: 10.3233/xst-210851]
Abstract
Multi-modal image fusion techniques aid medical experts in better disease diagnosis by providing adequate complementary information from multi-modal medical images, enhancing the effectiveness of medical disorder analysis and the classification of results. This study proposes a novel deep learning technique for the fusion of multi-modal medical images. The modified 2D Adaptive Bilateral Filters (M-2D-ABF) algorithm is used in image pre-processing to filter various types of noise. Contrast and brightness are improved by applying the proposed Energy-based CLAHE algorithm, which preserves the high-energy regions of the multimodal images. Images from two different modalities are first registered using mutual information, and the registered images are then fused into a single image. In the proposed fusion scheme, images are fused using the Siamese Neural Network and Entropy (SNNE)-based image fusion algorithm: the medical images are fused using a Siamese convolutional neural network structure, with fusion based on the score of the SoftMax layer and the entropy of the image. The fused image is segmented using the Fast Fuzzy C-Means Clustering algorithm (FFCMC) and Otsu thresholding. Finally, various features are extracted from the segmented regions and used for classification with a logistic regression classifier. Evaluation is performed on a publicly available benchmark dataset. Experimental results on various pairs of multi-modal medical images reveal that the proposed fusion and classification techniques compete with the state-of-the-art techniques reported in the literature.
Affiliation(s)
- Jesline Daniel
- St. Joseph's College of Engineering, OMR, Chennai, India
- J T Anita Rose
- St. Joseph's College of Engineering, OMR, Chennai, India
- R Pugalenthi
- St. Joseph's College of Engineering, OMR, Chennai, India
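The contrast/brightness step can be approximated with standard CLAHE in OpenCV; the paper's Energy-based CLAHE adds energy-region preservation that is not reproduced here.

```python
# Sketch of standard CLAHE preprocessing with OpenCV. The paper's
# "Energy-based CLAHE" variant adds energy-preserving logic not shown here.
import cv2

def enhance(gray_u8, clip_limit=2.0, tiles=(8, 8)):
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tiles)
    return clahe.apply(gray_u8)  # expects a single-channel uint8 image
```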
23
Kaur M, Singh D. Multi-modality medical image fusion technique using multi-objective differential evolution based deep neural networks. J Ambient Intell Humaniz Comput 2021; 12:2483-2493. [PMID: 32837596] [PMCID: PMC7414903] [DOI: 10.1007/s12652-020-02386-0]
Abstract
The advancements in automated diagnostic tools allow researchers to obtain more and more information from medical images. Recently, multi-modality images have been used to obtain more informative medical images, as they carry significantly more information than traditional medical images. However, constructing multi-modality images is not an easy task. The proposed approach initially decomposes the image into sub-bands using the non-subsampled contourlet transform (NSCT). Thereafter, an extreme version of Inception (Xception) is used for feature extraction from the source images, and multi-objective differential evolution is used to select the optimal features. The coefficient of determination and an energy-loss-based fusion function are then used to obtain the fused coefficients. Finally, the fused image is computed by applying the inverse NSCT. Extensive experimental results show that the proposed approach outperforms competitive multi-modality image fusion approaches.
Affiliation(s)
- Manjit Kaur
- Department of Computer and Communication Engineering, Manipal University Jaipur, Jaipur, India
- Computer Science Engineering, School of Engineering and Applied Sciences, Bennett University, Greater Noida, 201310, India
- Dilbag Singh
- Department of Computer Science and Engineering, Manipal University Jaipur, Jaipur, India
- Computer Science Engineering, School of Engineering and Applied Sciences, Bennett University, Greater Noida, 201310, India
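Feature selection by differential evolution can be sketched with SciPy. Note that SciPy's implementation is single-objective, so the paper's multi-objective criterion is scalarized here, and the data and cost function are toy stand-ins.

```python
# Sketch of evolving a feature-selection mask with differential evolution.
# SciPy's optimizer is single-objective, so the paper's multi-objective
# criterion is scalarized; the data and cost function are toy stand-ins.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
features = rng.normal(size=(100, 32))             # toy feature matrix
labels = (features[:, :4].sum(axis=1) > 0) * 1.0  # toy target

def cost(mask_cont):
    mask = mask_cont > 0.5                        # continuous -> binary mask
    if not mask.any():
        return 1e9
    # Scalarized objective: correlation-based fit, penalized by mask size.
    score = abs(np.corrcoef(features[:, mask].mean(axis=1), labels)[0, 1])
    return -score + 0.01 * mask.sum()

res = differential_evolution(cost, bounds=[(0, 1)] * 32, maxiter=50, seed=0)
print((res.x > 0.5).sum(), "features selected")
```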
24
Muzammil SR, Maqsood S, Haider S, Damaševičius R. CSID: A Novel Multimodal Image Fusion Algorithm for Enhanced Clinical Diagnosis. Diagnostics (Basel) 2020; 10:E904. [PMID: 33167376] [PMCID: PMC7694345] [DOI: 10.3390/diagnostics10110904]
Abstract
Technology-assisted clinical diagnosis has gained tremendous importance in modern day healthcare systems. To this end, multimodal medical image fusion has gained great attention from the research community. There are several fusion algorithms that merge Computed Tomography (CT) and Magnetic Resonance Images (MRI) to extract detailed information, which is used to enhance clinical diagnosis. However, these algorithms exhibit several limitations, such as blurred edges during decomposition, excessive information loss that gives rise to false structural artifacts, and high spatial distortion due to inadequate contrast. To resolve these issues, this paper proposes a novel algorithm, namely Convolutional Sparse Image Decomposition (CSID), that fuses CT and MR images. CSID uses contrast stretching and the spatial gradient method to identify edges in source images and employs cartoon-texture decomposition, which creates an overcomplete dictionary. Moreover, this work proposes a modified convolutional sparse coding method and employs improved decision maps and the fusion rule to obtain the final fused image. Simulation results using six datasets of multimodal images demonstrate that CSID achieves superior performance, in terms of visual quality and enriched information extraction, in comparison with eminent image fusion algorithms.
Affiliation(s)
- Shah Rukh Muzammil
- Department of Computer Science, City University of Science and Information Technology, Peshawar 25000, Pakistan; (S.R.M.); (S.H.)
- Sarmad Maqsood
- Department of Software Engineering, Kaunas University of Technology, Kaunas 51368, Lithuania
- Shahab Haider
- Department of Computer Science, City University of Science and Information Technology, Peshawar 25000, Pakistan; (S.R.M.); (S.H.)
- Robertas Damaševičius
- Department of Software Engineering, Kaunas University of Technology, Kaunas 51368, Lithuania
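The contrast stretching and spatial-gradient edge identification steps of CSID can be sketched in NumPy; the percentile limits are assumptions, and the cartoon-texture decomposition and convolutional sparse coding stages are omitted.

```python
# Sketch of CSID's first two steps: percentile contrast stretching and
# spatial-gradient edge identification (later stages omitted).
import numpy as np

def contrast_stretch(x, low_pct=1, high_pct=99):
    lo, hi = np.percentile(x, [low_pct, high_pct])
    return np.clip((x - lo) / (hi - lo + 1e-12), 0.0, 1.0)

def gradient_edges(x):
    gy, gx = np.gradient(x.astype(float))
    return np.hypot(gx, gy)  # gradient magnitude highlights edges
```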
25
GANFuse: a novel multi-exposure image fusion method based on generative adversarial networks. Neural Comput Appl 2020. [DOI: 10.1007/s00521-020-05387-4]
Abstract
In this paper, a novel multi-exposure image fusion method based on generative adversarial networks (termed GANFuse) is presented. Conventional multi-exposure image fusion methods improve their fusion performance by designing sophisticated activity-level measurements and fusion rules, but have had limited success in complex fusion tasks. Inspired by the recent FusionGAN, which first utilized generative adversarial networks (GAN) to fuse infrared and visible images and achieved promising performance, we improve its architecture and customize it for the task of extreme-exposure image fusion. Specifically, to keep the content of the extreme-exposure image pair in the fused image, we increase the number of discriminators, each differentiating between the fused image and one of the extreme-exposure inputs, while a generator network is trained to produce the fused image. Through this adversarial relationship between the generator and the discriminators, the fused image contains more information from both extreme-exposure inputs, yielding better fusion performance. In addition, the proposed method is an end-to-end, unsupervised learning model, which avoids hand-crafted feature design and does not require ground-truth images for training. We conduct qualitative and quantitative experiments on a public dataset, and the results show that the proposed model demonstrates better fusion ability than existing multi-exposure image fusion methods in both visual effect and evaluation metrics.
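The distinctive element of GANFuse, one generator trained against two discriminators (one per extreme-exposure input), can be sketched in PyTorch; the tiny networks and loss form below are illustrative stand-ins, not the paper's architecture.

```python
# Sketch of the one-generator / two-discriminator setup of GANFuse.
# The networks and loss form are toy stand-ins, not the paper's design.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())

def make_D():
    return nn.Sequential(nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                         nn.Flatten(), nn.LazyLinear(1))

D_under, D_over = make_D(), make_D()   # one discriminator per exposure
bce = nn.BCEWithLogitsLoss()

def g_loss(under, over):
    fused = G(torch.cat([under, over], dim=1))
    # The generator tries to make both discriminators accept the fused
    # image as "real" relative to their respective exposure.
    ones = torch.ones(under.size(0), 1)
    return bce(D_under(fused), ones) + bce(D_over(fused), ones)

def d_loss(D, real, fused):
    ones = torch.ones(real.size(0), 1)
    zeros = torch.zeros(real.size(0), 1)
    return bce(D(real), ones) + bce(D(fused.detach()), zeros)
```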
26
Liu S, Yin L, Miao S, Ma J, Cong S, Hu S. Multimodal Medical Image Fusion using Rolling Guidance Filter with CNN and Nuclear Norm Minimization. Curr Med Imaging 2020; 16:1243-1258. [PMID: 32807062] [DOI: 10.2174/1573405616999200817103920]
Abstract
BACKGROUND Medical image fusion is very important for the diagnosis and treatment of diseases. In recent years, a number of multi-modal medical image fusion algorithms have been proposed that present fine diagnostic detail more clearly and conveniently. Recently, nuclear norm minimization and deep learning have been used effectively in image processing. METHODS A multi-modality medical image fusion method using a rolling guidance filter (RGF) with convolutional neural network (CNN)-based feature mapping and nuclear norm minimization (NNM) is proposed. First, medical images are decomposed into base-layer and detail-layer components using RGF. Next, a pre-trained CNN model extracts the significant characteristics of the base-layer components, the activity-level measurement is computed from the regional energy of the CNN-based fusion maps, and the basic fused image is obtained. A detail fused image is then obtained by using NNM to fuse the detail-layer components. Finally, the basic and detail fused images are integrated into the fused result. RESULTS Comparison with state-of-the-art fusion algorithms indicates that the proposed algorithm achieves the best results in both visual evaluation and objective criteria. CONCLUSION The fusion algorithm using RGF and CNN-based feature mapping, combined with NNM, improves fusion quality and suppresses artifacts and blocking effects in the fused results.
Affiliation(s)
- Shuaiqi Liu
- College of Electronic and Information Engineering, Hebei University, Baoding, Hebei, China
- Lu Yin
- College of Electronic and Information Engineering, Hebei University, Baoding, Hebei, China
- Siyu Miao
- College of Electronic and Information Engineering, Hebei University, Baoding, Hebei, China
- Jian Ma
- College of Electronic and Information Engineering, Hebei University, Baoding, Hebei, China
- Shuai Cong
- Industrial and Commercial College, Hebei University, Baoding, Hebei, China
- Shaohai Hu
- College of Computer and Information, Beijing Jiaotong University, Beijing, China
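The nuclear norm minimization step corresponds to the standard singular value thresholding (SVT) proximal operator. A minimal sketch on a patch matrix; how patches are grouped into that matrix is the paper's detail and is omitted here.

```python
# Sketch of singular value thresholding (SVT), the standard proximal step
# for the nuclear norm: argmin_X 0.5*||X - M||_F^2 + tau*||X||_*.
import numpy as np

def svt(patch_matrix, tau):
    u, s, vt = np.linalg.svd(patch_matrix, full_matrices=False)
    s_thresh = np.maximum(s - tau, 0.0)  # soft-threshold singular values
    return (u * s_thresh) @ vt           # low-rank reconstruction
```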
27
A Review of Multimodal Medical Image Fusion Techniques. Comput Math Methods Med 2020; 2020:8279342. [PMID: 32377226] [PMCID: PMC7195632] [DOI: 10.1155/2020/8279342]
Abstract
Medical image fusion is the process of coalescing multiple images from multiple imaging modalities to obtain a fused image with a large amount of information, thereby increasing the clinical applicability of medical images. In this paper, we give an overview of multimodal medical image fusion methods, with emphasis on the most recent advances in the domain: (1) current fusion methods, including those based on deep learning; (2) the imaging modalities involved in medical image fusion; and (3) performance analysis of medical image fusion on mainstream datasets. We conclude that multimodal medical image fusion research has produced increasingly significant results and the field continues to grow, though many challenges remain.
28
Wang D, Tian F, Yang SX, Zhu Z, Jiang D, Cai B. Improved Deep CNN with Parameter Initialization for Data Analysis of Near-Infrared Spectroscopy Sensors. Sensors (Basel) 2020; 20:E874. [PMID: 32041366] [PMCID: PMC7038673] [DOI: 10.3390/s20030874]
Abstract
Near-infrared (NIR) spectral sensors can deliver the spectral response of light absorbed by materials, and data analysis technology based on NIR sensors has become a useful tool for quality identification. In this paper, an improved deep convolutional neural network (CNN) with batch normalization and MSRA (Microsoft Research Asia) initialization is proposed to discriminate tobacco cultivation regions using data collected from NIR sensors. The network is built with six convolutional layers and three fully connected layers, and the learning rate is controlled by an exponential decay method. One-dimensional kernels are applied as the convolution kernels to extract features. Meanwhile, L2 regularization and dropout are used to avoid overfitting, which improves the generalization ability of the network. Experimental results show that the proposed deep network structure can effectively extract the complex characteristics inside the spectrum, proving its excellent recognition performance on tobacco cultivation region discrimination and demonstrating that deep CNNs are well suited to information mining and analysis of big data.
Affiliation(s)
- Di Wang
- School of Information Science and Engineering, Chongqing Jiaotong University, Chongqing 400074, China
- Fengchun Tian
- School of Microelectronics and Communication Engineering, Chongqing University, Chongqing 400044, China
- Simon X. Yang
- School of Engineering, University of Guelph, Guelph, ON N1G 2W1, Canada
- Zhiqin Zhu
- College of Automation, Chongqing University of Posts and Telecommunications, Chongqing 400065, China; (Z.Z.); (D.J.)
- Daiyu Jiang
- College of Automation, Chongqing University of Posts and Telecommunications, Chongqing 400065, China; (Z.Z.); (D.J.)
- Bin Cai
- Guizhou Tobacco Rebaking Co. LTD, Guizhou 550025, China
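The described training recipe (1-D convolutions, batch normalization, MSRA/Kaiming initialization, L2 regularization, dropout, exponential learning-rate decay) maps directly onto PyTorch primitives. Depths, widths, and the 256-point spectrum length below are assumptions, not the paper's exact configuration.

```python
# Sketch of a 1-D spectral CNN with batch norm and MSRA (Kaiming) init.
# Layer sizes and the 256-point input length are illustrative assumptions.
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv1d(cin, cout, kernel_size=5, padding=2),
                         nn.BatchNorm1d(cout), nn.ReLU())

net = nn.Sequential(conv_block(1, 16), conv_block(16, 32), nn.MaxPool1d(2),
                    conv_block(32, 64), nn.Flatten(),
                    nn.Linear(64 * 128, 128), nn.ReLU(), nn.Dropout(0.5),
                    nn.Linear(128, 8))          # e.g. 8 cultivation regions

for m in net.modules():
    if isinstance(m, (nn.Conv1d, nn.Linear)):
        nn.init.kaiming_normal_(m.weight)       # MSRA initialization
        nn.init.zeros_(m.bias)

opt = torch.optim.SGD(net.parameters(), lr=0.01, weight_decay=1e-4)  # L2
sched = torch.optim.lr_scheduler.ExponentialLR(opt, gamma=0.95)      # decay
```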
29
Green Fluorescent Protein and Phase-Contrast Image Fusion via Generative Adversarial Networks. Comput Math Methods Med 2019; 2019:5450373. [PMID: 31885682] [PMCID: PMC6915023] [DOI: 10.1155/2019/5450373]
Abstract
In the field of cell and molecular biology, green fluorescent protein (GFP) images provide functional information embodying the molecular distribution of biological cells while phase-contrast images maintain structural information with high resolution. Fusion of GFP and phase-contrast images is of high significance to the study of subcellular localization, protein functional analysis, and genetic expression. This paper proposes a novel algorithm to fuse these two types of biological images via generative adversarial networks (GANs) by carefully taking their own characteristics into account. The fusion problem is modelled as an adversarial game between a generator and a discriminator. The generator aims to create a fused image that well extracts the functional information from the GFP image and the structural information from the phase-contrast image at the same time. The target of the discriminator is to further improve the overall similarity between the fused image and the phase-contrast image. Experimental results demonstrate that the proposed method can outperform several representative and state-of-the-art image fusion methods in terms of both visual quality and objective evaluation.
30
31
Object manipulation with a variable-stiffness robotic mechanism using deep neural networks for visual semantics and load estimation. Neural Comput Appl 2019. [DOI: 10.1007/s00521-019-04412-5]
32
Local bit-plane decoded convolutional neural network features for biomedical image retrieval. Neural Comput Appl 2019. [DOI: 10.1007/s00521-019-04279-6]