1. Wu D, Jiang J, Wang J, Zhou S, Qian K. Accuracy evaluation of dental CBCT and scanned model registration method based on pulp horn mapping surface: an in vitro proof-of-concept. BMC Oral Health 2024; 24:827. [PMID: 39034391] [PMCID: PMC11637213] [DOI: 10.1186/s12903-024-04565-3]
Abstract
BACKGROUND AND AIM A 3D fusion model of cone-beam computed tomography (CBCT) and oral scan data can be used for the accurate design of root canal access and guide plates in root canal therapy (RCT). However, the pose accuracy of the dental pulp and crown in data registration, which affects the precise implementation of clinical planning goals, has not been investigated. We aimed to establish a novel registration method based on a pulp horn mapping surface (PHMSR) and to evaluate the accuracy of PHMSR versus traditional methods for crown-pulp registration of CBCT and oral scan data. MATERIALS AND METHODS This in vitro study collected 8 groups of oral scan and CBCT data in which the left mandibular teeth were not missing; teeth No. 35 and No. 36 were selected as the target teeth. The CBCT and scanned models were processed to generate equivalent point clouds. For the PHMSR method, the similarity between the feature directions of the pulp horn and the surface normal vectors of the crown was used to determine the mapping points in the CBCT point cloud that have a great influence on the pulp pose. A small surface with adjustable parameters is reconstructed near each mapping point of the crown, and new matching point pairs between the points and the mapping surface are searched. The sparse iterative closest point (ICP) algorithm is used to solve the new matching point pairs. Then, in a C++ programming environment with the Point Cloud Library (PCL), the PHMSR, traditional sparse ICP, ICP, and coherent point drift (CPD) algorithms are used to register the point clouds under two different initial deviations. The root mean square error (RMSE) of the crown, the crown-pulp orientation deviation (CPOD), and the crown-pulp position deviation (CPPD) were calculated to evaluate the registration accuracy. Significance between groups was tested with a two-tailed paired t-test (p < 0.05). RESULTS The crown RMSE values of the sparse ICP method (0.257), the ICP method (0.217), and the CPD method (0.209) were not significantly different from that of the PHMSR method (0.250). The CPOD and CPPD values of the sparse ICP method (4.089 and 0.133), the ICP method (1.787 and 0.700), and the CPD method (1.665 and 0.718) were larger than those of the PHMSR method, which suggests that crown-pulp registration accuracy is higher with the PHMSR method. CONCLUSION Compared with the traditional methods, the PHMSR method yields smaller crown-pulp registration deviations within a clinically acceptable range; these results support the use of the PHMSR method instead of the traditional methods for clinical planning of root canal therapy.
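At its core, the comparison above scores rival registration algorithms by how well a rigid transform aligns two point clouds, measured with an RMSE over closest-point distances. The sketch below is a generic point-to-point ICP in Python with NumPy/SciPy, not the authors' PHMSR or sparse ICP implementation; the synthetic clouds, iteration count, and noise level are placeholders.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch/SVD)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def icp(source, target, iterations=30):
    """Basic point-to-point ICP; returns an aligned copy of source and the final RMSE."""
    tree = cKDTree(target)
    current = source.copy()
    for _ in range(iterations):
        _, idx = tree.query(current)                   # closest-point correspondences
        R, t = best_rigid_transform(current, target[idx])
        current = current @ R.T + t
    rmse = np.sqrt(np.mean(tree.query(current)[0] ** 2))
    return current, rmse

# toy usage: register a slightly rotated, jittered copy of a random cloud
rng = np.random.default_rng(0)
target = rng.normal(size=(500, 3))
theta = np.deg2rad(5.0)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
source = target @ Rz.T + 0.01 * rng.normal(size=target.shape)
aligned, rmse = icp(source, target)
print(f"crown-surface RMSE after ICP: {rmse:.4f}")
```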
Affiliation(s)
- Dianhao Wu
- The Key Laboratory of Advanced Manufacturing and Intelligent Technology, Ministry of Education, Harbin University of Science and Technology, NO. 52, Xuefu Road, Nangang Dist, Harbin, Heilongjiang Province, 150080, People's Republic of China
- Jingang Jiang
- The Key Laboratory of Advanced Manufacturing and Intelligent Technology, Ministry of Education, Harbin University of Science and Technology, NO. 52, Xuefu Road, Nangang Dist, Harbin, Heilongjiang Province, 150080, People's Republic of China.
- Jinke Wang
- The Robotics and its Engineering Research Center, Harbin University of Science and Technology, Harbin, Heilongjiang Province, 150080, China
- Shan Zhou
- The 2nd Affiliated Hospital of Harbin Medical University, No.246 Xuefu Road, Nangang District, Harbin, Heilongjiang Province, 150001, People's Republic of China
- Kun Qian
- The Peking University School of Stomatology, No.22 Zhongguancun South Street, Haidian District, Beijing, 100081, People's Republic of China
2. Yang Z, Lian J, Liu J. Infrared UAV Target Detection Based on Continuous-Coupled Neural Network. Micromachines 2023; 14:2113. [PMID: 38004970] [PMCID: PMC10673491] [DOI: 10.3390/mi14112113]
Abstract
The detection of unmanned aerial vehicles (UAVs) is of great significance to social communication security. Infrared detection technology has the advantage of being largely unaffected by environmental and other interfering factors and can detect UAVs in complex environments. Since infrared detection equipment is expensive and data collection is difficult, few UAV infrared images exist, making it difficult to train deep neural networks; in addition, infrared images contain background clutter and noise, such as heavy clouds and buildings, so both the signal-to-clutter ratio and the signal-to-noise ratio are low. These challenges make infrared UAV detection difficult for traditional methods. To solve these problems, this work drew upon the visual processing mechanism of the human brain to propose an effective framework for UAV detection in infrared images. The framework first determines the relevant parameters of the continuous-coupled neural network (CCNN) from the image's standard deviation, mean, and related statistics. It then feeds the image into the CCNN, groups the pixels through iteration, obtains the segmentation result through dilation and erosion, and finally obtains the detection result from the minimum circumscribed rectangle. The experimental results showed that, compared with the most advanced existing brain-inspired image-understanding methods, this framework achieves the best intersection over union (IoU; the overlap between the predicted segmentation and the label divided by their union) on UAV infrared images, with an average of 74.79% (up to 97.01%), and can effectively realize the task of UAV detection.
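The CCNN grouping step itself is not reproduced here; the sketch below assumes a binary mask already produced by such a network and illustrates the remaining pipeline stages named above (morphological cleanup, enclosing rectangle, and the IoU score), using SciPy's axis-aligned bounding boxes as a simplification of the minimum circumscribed rectangle.

```python
import numpy as np
from scipy import ndimage

def refine_and_box(mask, iterations=2):
    """Clean a binary UAV segmentation mask and return axis-aligned bounding boxes."""
    # closing (dilation then erosion) fills small holes; opening removes speckle noise
    cleaned = ndimage.binary_closing(mask, iterations=iterations)
    cleaned = ndimage.binary_opening(cleaned, iterations=iterations)
    labels, _ = ndimage.label(cleaned)                  # connected components
    boxes = ndimage.find_objects(labels)                # minimal enclosing slices per component
    return [(s[0].start, s[1].start, s[0].stop, s[1].stop) for s in boxes if s]

def iou(pred, gt):
    """Intersection over union of two binary masks, as used to score the framework."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 0.0
```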
Affiliation(s)
- Zhuoran Yang
- School of Information Science and Engineering, Lanzhou University, Lanzhou 730000, China
- Jing Lian
- School of Electronics and Information Engineering, Lanzhou Jiaotong University, Lanzhou 730070, China
- Jizhao Liu
- School of Information Science and Engineering, Lanzhou University, Lanzhou 730000, China
3. Panigrahy C, Seal A, Gonzalo-Martín C, Pathak P, Jalal AS. Parameter adaptive unit-linking pulse coupled neural network based MRI–PET/SPECT image fusion. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104659]
4. Zhou Q, Huang Z, Ding M, Zhang X. Medical Image Classification Using Light-Weight CNN With Spiking Cortical Model Based Attention Module. IEEE J Biomed Health Inform 2023; 27:1991-2002. [PMID: 37022371] [DOI: 10.1109/jbhi.2023.3241439]
Abstract
In the field of disease diagnosis where only a small dataset of medical images may be accessible, the light-weight convolutional neural network (CNN) has become popular because it can help to avoid the over-fitting problem and improve computational efficiency. However, the feature extraction capability of the light-weight CNN is inferior to that of its heavy-weight counterpart. Although the attention mechanism provides a feasible solution to this problem, existing attention modules, such as the squeeze and excitation module and the convolutional block attention module, have insufficient non-linearity, which limits the ability of the light-weight CNN to discover key features. To address this issue, we have proposed a spiking cortical model based global and local (SCM-GL) attention module. The SCM-GL module analyzes the input feature maps in parallel and decomposes each map into several components according to the relation between pixels and their neighbors. The components are weighted and summed to obtain a local mask. Besides, a global mask is produced by discovering the correlation between distant pixels in the feature map. The final attention mask is generated by combining the local and global masks, and it is multiplied by the original map so that the important components can be highlighted to facilitate accurate disease diagnosis. To assess the performance of the SCM-GL module, this module and some mainstream attention modules were embedded into popular light-weight CNN models for comparison. Experiments on the classification of brain MR, chest X-ray, and osteosarcoma image datasets demonstrate that the SCM-GL module can significantly improve the classification performance of the evaluated light-weight CNN models by enhancing their ability to discover suspected lesions, and it is generally superior to state-of-the-art attention modules in terms of accuracy, recall, specificity, and F1 score.
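The following PyTorch sketch is not the SCM-GL module (whose local mask comes from a spiking-cortical-model decomposition); it only illustrates the general pattern the abstract describes: build a local mask and a global mask from the input feature maps, combine them, and multiply the result with the original maps.

```python
import torch
import torch.nn as nn

class LocalGlobalAttention(nn.Module):
    """Generic local + global attention mask, in the spirit of (not identical to) SCM-GL."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        # local branch: depthwise conv relates each pixel to its neighbours
        self.local = nn.Conv2d(channels, channels, kernel_size=3,
                               padding=1, groups=channels, bias=False)
        # global branch: squeeze spatial dims, model channel-wise correlations
        self.global_fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
        )

    def forward(self, x):
        local_mask = torch.sigmoid(self.local(x))        # per-pixel weighting
        global_mask = torch.sigmoid(self.global_fc(x))   # per-channel weighting
        return x * local_mask * global_mask               # highlight suspected lesions

# usage on a dummy feature map from a light-weight CNN
feats = torch.randn(2, 32, 56, 56)
att = LocalGlobalAttention(32)
print(att(feats).shape)   # torch.Size([2, 32, 56, 56])
```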
5. Liu H, Liu M, Jiang X, Luo J, Song Y, Chu X, Zan G. Multimodal Image Fusion for X-ray Grating Interferometry. Sensors (Basel) 2023; 23:3115. [PMID: 36991826] [PMCID: PMC10053574] [DOI: 10.3390/s23063115]
Abstract
X-ray grating interferometry (XGI) can provide multiple image modalities by utilizing three different contrast mechanisms, namely attenuation, refraction (differential phase shift), and scattering (dark field), in a single dataset. Combining all three imaging modalities could create new opportunities for the characterization of material structure features that conventional attenuation-based methods are unable to probe. In this study, we proposed an image fusion scheme based on the non-subsampled contourlet transform and spiking cortical model (NSCT-SCM) to combine the tri-contrast images retrieved from XGI. It incorporated three main steps: (i) image denoising based on Wiener filtering, (ii) the NSCT-SCM tri-contrast fusion algorithm, and (iii) image enhancement using contrast-limited adaptive histogram equalization, adaptive sharpening, and gamma correction. Tri-contrast images of frog toes were used to validate the proposed approach. Moreover, the proposed method was compared with three other image fusion methods using several figures of merit. The experimental evaluation highlighted the efficiency and robustness of the proposed scheme, with less noise, higher contrast, more information, and better details.
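Steps (i) and (iii) of the pipeline rely on standard operations that can be sketched directly; the NSCT-SCM fusion core in step (ii) is replaced here by a plain per-pixel average purely as a placeholder, and the parameter values are illustrative.

```python
import numpy as np
from scipy.signal import wiener
from skimage import exposure, filters

def prepare_channel(img):
    """Step (i): Wiener denoising of one contrast channel (values scaled to [0, 1])."""
    img = (img - img.min()) / (img.max() - img.min() + 1e-12)
    return wiener(img, mysize=5)

def enhance(fused):
    """Step (iii): CLAHE, unsharp sharpening, then gamma correction."""
    out = exposure.equalize_adapthist(np.clip(fused, 0, 1), clip_limit=0.01)
    out = filters.unsharp_mask(out, radius=2, amount=1.0)
    return exposure.adjust_gamma(np.clip(out, 0, 1), gamma=0.8)

# placeholder fusion: a simple per-pixel average stands in for the NSCT-SCM rule
rng = np.random.default_rng(1)
attenuation, refraction, dark_field = (rng.random((128, 128)) for _ in range(3))
channels = [prepare_channel(c) for c in (attenuation, refraction, dark_field)]
fused = enhance(np.mean(channels, axis=0))
```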
Affiliation(s)
- Haoran Liu
- School of Data Science and Artificial Intelligence, Wenzhou University of Technology, Wenzhou 325000, China
- State Key Laboratory of Geohazard Prevention and Geoenvironment Protection, Chengdu University of Technology, Chengdu 610059, China
- Mingzhe Liu
- School of Data Science and Artificial Intelligence, Wenzhou University of Technology, Wenzhou 325000, China
- State Key Laboratory of Geohazard Prevention and Geoenvironment Protection, Chengdu University of Technology, Chengdu 610059, China
- Xin Jiang
- School of Data Science and Artificial Intelligence, Wenzhou University of Technology, Wenzhou 325000, China
- Jinglei Luo
- The Engineering & Technical College of Chengdu University of Technology, Leshan 614000, China
- Yuming Song
- The Engineering & Technical College of Chengdu University of Technology, Leshan 614000, China
- Xingyue Chu
- The Engineering & Technical College of Chengdu University of Technology, Leshan 614000, China
6. Yi Z, Lian J, Liu Q, Zhu H, Liang D, Liu J. Learning Rules in Spiking Neural Networks: A Survey. Neurocomputing 2023. [DOI: 10.1016/j.neucom.2023.02.026]
7. Reddy KR, Dhuli R. A Novel Lightweight CNN Architecture for the Diagnosis of Brain Tumors Using MR Images. Diagnostics (Basel) 2023; 13:312. [PMID: 36673122] [PMCID: PMC9858139] [DOI: 10.3390/diagnostics13020312]
Abstract
Over the last few years, brain tumor-related clinical cases have increased substantially, particularly in adults, due to environmental and genetic factors. If they are unidentified in the early stages, there is a risk of severe medical complications, including death. Early diagnosis of brain tumors therefore plays a vital role in treatment planning and improving a patient's condition. Brain tumors differ in form, properties, and treatment, and their manual identification and classification are complex, time-consuming, and error-prone. Based on these observations, we developed an automated methodology for detecting and classifying brain tumors using the magnetic resonance (MR) imaging modality. The proposed work includes three phases: pre-processing, classification, and segmentation. In the pre-processing, we started with the skull-stripping process through morphological and thresholding operations to eliminate non-brain matter such as skin, muscle, fat, and eyeballs. Then we employed image data augmentation to improve the model accuracy by minimizing overfitting. In the classification phase, we developed a novel lightweight convolutional neural network (lightweight CNN) model to extract features from skull-free augmented brain MR images and then classify them as normal or abnormal. Finally, we obtained infected tumor regions from the brain MR images in the segmentation phase using a fast-linking modified spiking cortical model (FL-MSCM). Based on this sequence of operations, our framework achieved a classification accuracy of 99.58% and a dice similarity coefficient (DSC) of 95.7%. The experimental results illustrate the efficiency of the proposed framework and its appreciable performance compared to existing techniques.
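A minimal sketch of the skull-stripping idea (thresholding plus morphology to keep only the brain region) is shown below, assuming a single 2D axial slice; the paper's exact operations, the augmentation, the lightweight CNN, and the FL-MSCM segmentation are not reproduced.

```python
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu

def skull_strip(slice_2d):
    """Rough brain-mask extraction from an axial MR slice by thresholding + morphology."""
    mask = slice_2d > threshold_otsu(slice_2d)            # separate head from background
    mask = ndimage.binary_erosion(mask, iterations=3)     # detach skull/scalp bridges
    labels, _ = ndimage.label(mask)
    sizes = np.bincount(labels.ravel())[1:]               # component sizes, background excluded
    if sizes.size == 0:
        return np.zeros_like(slice_2d)
    brain = labels == (np.argmax(sizes) + 1)               # keep the largest component
    brain = ndimage.binary_dilation(brain, iterations=3)   # restore the eroded margin
    return ndimage.binary_fill_holes(brain) * slice_2d     # zero out non-brain tissue
```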
8. Liu Y, Zhou D, Nie R, Hou R, Ding Z, Xia W, Li M. Green fluorescent protein and phase contrast image fusion via Spectral TV filter-based decomposition. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104265]
9. Hui M, Zhang J, Iu HHC, Yao R, Bai L. A novel intermittent sliding mode control approach to finite-time synchronization of complex-valued neural networks. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.09.111]
10. Effective Conversion of a Convolutional Neural Network into a Spiking Neural Network for Image Recognition Tasks. Appl Sci (Basel) 2022. [DOI: 10.3390/app12115749]
Abstract
Due to their energy efficiency, spiking neural networks (SNNs) have gradually been considered as an alternative to convolutional neural networks (CNNs) in various machine learning tasks. In image recognition tasks, leveraging the superior capability of CNNs, the CNN–SNN conversion is considered one of the most successful approaches to training SNNs. However, previous works assume that a rather long inference time period, called the inference latency, is allowed, with a trade-off between inference latency and accuracy. One of the main reasons for this phenomenon stems from the difficulty of determining a proper firing threshold for spiking neurons. The threshold determination procedure is called a threshold balancing technique in the CNN–SNN conversion approach. This paper proposes a CNN–SNN conversion method with a new threshold balancing technique that obtains converted SNN models with good accuracy even at low latency. The proposed method organizes the SNN models with soft-reset IF spiking neurons. The threshold balancing technique estimates the thresholds for spiking neurons based on the maximum input current in a layerwise and channelwise manner. The experimental results show that our converted SNN models attain even higher accuracy than the corresponding trained CNN model on the MNIST dataset at low latency. In addition, for the Fashion-MNIST and CIFAR-10 datasets, our converted SNNs show less conversion loss than other methods at low latencies. The proposed method can be beneficial in deploying efficient SNN models for recognition tasks on resource-limited systems because the inference latency is strongly associated with energy consumption.
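The threshold balancing idea, estimating one firing threshold per channel from the maximum input current observed on calibration data, can be sketched as a forward-hook pass over a trained CNN. This is a simplified reading of the technique rather than the authors' code; the soft-reset IF conversion itself is omitted, and the loader yielding (image, label) batches is an assumption.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def channelwise_thresholds(model, calib_loader, device="cpu"):
    """Estimate per-channel firing thresholds for IF neurons from the maximum
    input current (pre-activation) observed on calibration data."""
    thresholds, hooks = {}, []

    def make_hook(name):
        def hook(module, inp, out):
            # max over batch and spatial dims -> one value per output channel
            cur = out.amax(dim=(0, 2, 3)) if out.dim() == 4 else out.amax(dim=0)
            prev = thresholds.get(name)
            thresholds[name] = cur if prev is None else torch.maximum(prev, cur)
        return hook

    for name, m in model.named_modules():
        if isinstance(m, (nn.Conv2d, nn.Linear)):
            hooks.append(m.register_forward_hook(make_hook(name)))

    model.eval().to(device)
    for x, _ in calib_loader:          # assumed (image, label) batches
        model(x.to(device))
    for h in hooks:
        h.remove()
    return thresholds
```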
11. Kong W, Miao Q, Lei Y, Ren C. Guided filter random walk and improved spiking cortical model based image fusion method in NSST domain. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2021.11.060]
13. Wang G, Li W, Huang Y. Medical image fusion based on hybrid three-layer decomposition model and nuclear norm. Comput Biol Med 2020; 129:104179. [PMID: 33360260] [DOI: 10.1016/j.compbiomed.2020.104179]
Abstract
The aim of medical image fusion technology is to synthesize multiple-image information to assist doctors in making scientific decisions. Existing studies have focused on preserving image details while avoiding halo artifacts and color distortions. This paper proposes a novel medical image fusion algorithm based on this research objective. First, the input image is decomposed into structure, texture, and local mean brightness layers using a hybrid three-layer decomposition model that can fully extract the features of the original images without introducing artifacts. Second, the nuclear norms of the patches, which are obtained using a sliding window, are calculated to construct the weight maps of the structure and texture layers. The weight map of the local mean brightness layer is constructed by calculating the local energy. Finally, remapping functions are applied to enhance each fusion layer, and the final fused image is reconstructed with the inverse of the decomposition. Subjective and objective experiments confirm that the proposed algorithm has a distinct advantage over other state-of-the-art algorithms.
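As a rough illustration of the nuclear-norm weighting, the sketch below computes the nuclear norm (sum of singular values) of the patch around every pixel and uses it as an activity measure to choose between two source layers. The paper's actual weight-map construction and three-layer decomposition are not reproduced, and the patch size is arbitrary.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def nuclear_norm_map(layer, patch=7):
    """Per-pixel nuclear norm (sum of singular values) of the surrounding patch."""
    pad = patch // 2
    padded = np.pad(layer, pad, mode="reflect")
    windows = sliding_window_view(padded, (patch, patch))   # H x W x patch x patch
    s = np.linalg.svd(windows, compute_uv=False)             # batched SVD
    return s.sum(axis=-1)

def fuse_layers(a, b, patch=7):
    """Choose, per pixel, the source layer whose patch has the larger nuclear norm."""
    wa, wb = nuclear_norm_map(a, patch), nuclear_norm_map(b, patch)
    return np.where(wa >= wb, a, b)

# toy usage with two random "texture layers"
rng = np.random.default_rng(2)
fused = fuse_layers(rng.random((64, 64)), rng.random((64, 64)))
```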
Affiliation(s)
- Guofen Wang
- Chongqing Key Laboratory of Image Cognition, Chongqing University of Posts and Telecommunications, Chongqing, 400065, China
- Weisheng Li
- Chongqing Key Laboratory of Image Cognition, Chongqing University of Posts and Telecommunications, Chongqing, 400065, China.
- Yuping Huang
- Chongqing Key Laboratory of Image Cognition, Chongqing University of Posts and Telecommunications, Chongqing, 400065, China
14. Xie X, Wen S, Yan Z, Huang T, Chen Y. Designing pulse-coupled neural networks with spike-synchronization-dependent plasticity rule: image segmentation and memristor circuit application. Neural Comput Appl 2020. [DOI: 10.1007/s00521-020-04752-7]
15. Robust spiking cortical model and total-variational decomposition for multimodal medical image fusion. Biomed Signal Process Control 2020. [DOI: 10.1016/j.bspc.2020.101996]
16. Lobo JL, Oregi I, Bifet A, Del Ser J. Exploiting the stimuli encoding scheme of evolving Spiking Neural Networks for stream learning. Neural Netw 2020; 123:118-133. [DOI: 10.1016/j.neunet.2019.11.021]
17. Zheng B, Hu C, Yu J, Jiang H. Finite-time synchronization of fully complex-valued neural networks with fractional-order. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2019.09.048]
19. Lian J, Yang Z, Sun W, Guo Y, Zheng L, Li J, Shi B, Ma Y. An image segmentation method of a modified SPCNN based on human visual system in medical images. Neurocomputing 2019. [DOI: 10.1016/j.neucom.2018.12.007]
20. Brain CT and MRI medical image fusion using convolutional neural networks and a dual-channel spiking cortical model. Med Biol Eng Comput 2018; 57:887-900. [PMID: 30471068] [DOI: 10.1007/s11517-018-1935-8]
Abstract
The aim of medical image fusion is to improve clinical diagnosis accuracy, so the fused image is generated by preserving salient features and details of the source images. This paper designs a novel fusion scheme for CT and MRI medical images based on convolutional neural networks (CNNs) and a dual-channel spiking cortical model (DCSCM). Firstly, the non-subsampled shearlet transform (NSST) is utilized to decompose the source image into a low-frequency coefficient and a series of high-frequency coefficients. Secondly, the low-frequency coefficient is fused by the CNN framework, where the weight map is generated from a series of feature maps and an adaptive selection rule, and the high-frequency coefficients are fused by the DCSCM, where the modified average gradient of the high-frequency coefficients is adopted as the input stimulus of the DCSCM. Finally, the fused image is reconstructed by inverse NSST. Experimental results indicate that the proposed scheme performs well in both subjective visual performance and objective evaluation and is superior in detail retention and visual effect to other current typical schemes. Graphical abstract: a schematic diagram of the CT and MRI medical image fusion framework using a convolutional neural network and a dual-channel spiking cortical model.
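The sketch below mimics the overall decompose-fuse-reconstruct structure with an ordinary wavelet transform (PyWavelets) standing in for NSST, a simple average standing in for the CNN-derived low-frequency weight map, and a max-absolute rule standing in for the DCSCM; it illustrates the framework's shape, not the published method.

```python
import numpy as np
import pywt

def fuse_ct_mri(ct, mri, wavelet="db2", level=3):
    """Toy multi-scale fusion: average the low-frequency band, take the coefficient
    with larger absolute value in each high-frequency band, then reconstruct."""
    ca = pywt.wavedec2(ct, wavelet, level=level)
    cb = pywt.wavedec2(mri, wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]                      # low-frequency approximation
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                           for x, y in ((ha, hb), (va, vb), (da, db))))
    return pywt.waverec2(fused, wavelet)
```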
21. Yang Z, Lian J, Li S, Guo Y, Qi Y, Ma Y. Heterogeneous SPCNN and its application in image segmentation. Neurocomputing 2018. [DOI: 10.1016/j.neucom.2018.01.044]
22. Wang Z, Sun X, Yang Z, Zhang Y, Zhu Y, Ma Y. Leaf Recognition Based on DPCNN and BOW. Neural Process Lett 2018. [DOI: 10.1007/s11063-017-9635-1]
23. Guo Y, Yang Z, Ma Y, Lian J, Zhu L. Saliency motivated improved simplified PCNN model for object segmentation. Neurocomputing 2018. [DOI: 10.1016/j.neucom.2017.10.057]
24. Chacon-Murguia MI, Ramirez-Quintana JA. Bio-inspired architecture for static object segmentation in time varying background models from video sequences. Neurocomputing 2018. [DOI: 10.1016/j.neucom.2017.10.015]
25. Lian J, Shi B, Li M, Nan Z, Ma Y. An automatic segmentation method of a parameter-adaptive PCNN for medical images. Int J Comput Assist Radiol Surg 2017; 12:1511-1519. [PMID: 28477278] [DOI: 10.1007/s11548-017-1597-2]
Abstract
PURPOSE Since the pre-processing and initial segmentation steps for medical images directly affect the final segmentation results of the regions of interest, an automatic segmentation method based on a parameter-adaptive pulse-coupled neural network is proposed to integrate the above-mentioned two segmentation steps into one. This method has low computational complexity for different kinds of medical images and high segmentation precision. METHODS The method comprises four steps. Firstly, an optimal histogram threshold is used to determine the parameter [Formula: see text] for different kinds of images. Secondly, we acquire the parameter [Formula: see text] according to a simplified pulse-coupled neural network (SPCNN). Thirdly, we redefine the parameter V of the SPCNN model by the sub-intensity distribution range of firing pixels. Fourthly, we add an offset [Formula: see text] to improve the initial segmentation precision. RESULTS Compared with state-of-the-art algorithms, the new method achieves comparable performance in experiments on ultrasound images of the gallbladder and gallstones, magnetic resonance images of the left ventricle, and mammogram images of the left and right breast, with overall metrics of UM = 0.9845, CM = 0.8142, and TM = 0.0726. CONCLUSION The algorithm has great potential to achieve the pre-processing and initial segmentation steps in various medical images. This is a premise for assisting physicians to detect and diagnose clinical cases.
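For readers unfamiliar with the underlying model, one common form of the simplified PCNN (SPCNN) iteration is sketched below; the adaptive rules the paper uses to set its parameters from the histogram and firing statistics are not reproduced, and the constants shown are arbitrary defaults.

```python
import numpy as np
from scipy import ndimage

def spcnn_first_fire(img, beta=0.3, v_l=1.0, v_e=20.0, a_f=0.3, a_e=1.0, iters=20):
    """Simplified PCNN iteration; returns the first-firing-time map used for segmentation."""
    s = (img - img.min()) / (img.max() - img.min() + 1e-12)   # normalized stimulus
    kernel = np.array([[0.5, 1.0, 0.5], [1.0, 0.0, 1.0], [0.5, 1.0, 0.5]])
    u = np.zeros_like(s)                                       # internal activity
    e = np.ones_like(s)                                        # dynamic threshold
    y = np.zeros_like(s)                                       # pulse output
    first_fire = np.full(s.shape, iters + 1)
    for n in range(1, iters + 1):
        link = ndimage.correlate(y, kernel, mode="constant")    # linking input from neighbours
        u = np.exp(-a_f) * u + s * (1.0 + beta * v_l * link)
        y = (u > e).astype(float)                               # fire where activity exceeds threshold
        e = np.exp(-a_e) * e + v_e * y                          # raise threshold of fired neurons
        first_fire = np.where((y > 0) & (first_fire > iters), n, first_fire)
    return first_fire    # earlier firing ~ brighter / more salient regions
```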
Affiliation(s)
- Jing Lian
- School of Information Science and Engineering, Lanzhou University, Lanzhou, 730000, Gansu, China
- Bin Shi
- Equipment Management Department, Gansu Provincial Hospital, Lanzhou, 730000, Gansu, China
- Mingcong Li
- Biology Department, Lanhua No.1 High School, Lanzhou, 730060, Gansu, China
- Ziwei Nan
- School of Electronic and Information Engineering, Lanzhou Jiaotong University, Lanzhou, 730070, Gansu, China
- Yide Ma
- School of Information Science and Engineering, Lanzhou University, Lanzhou, 730000, Gansu, China.
26. Zhan K, Shi J, Teng J, Li Q, Wang M, Lu F. Linking synaptic computation for image enhancement. Neurocomputing 2017. [DOI: 10.1016/j.neucom.2017.01.031]
28. Lian J, Ma Y, Ma Y, Shi B, Liu J, Yang Z, Guo Y. Automatic gallbladder and gallstone regions segmentation in ultrasound image. Int J Comput Assist Radiol Surg 2017; 12:553-568. [PMID: 28063077] [DOI: 10.1007/s11548-016-1515-z]
Abstract
PURPOSE As gallbladder diseases including gallstones and cholecystitis are mainly diagnosed using ultrasonographic examinations, we propose a novel method to segment the gallbladder and gallstones in ultrasound images. METHODS The method is divided into five steps. Firstly, a modified Otsu algorithm is combined with anisotropic diffusion to reduce speckle noise and enhance image contrast; the Otsu algorithm distinctly separates the weak edge regions from the central region of the gallbladder. Secondly, a global morphology filtering algorithm is adopted to acquire the fine gallbladder region. Thirdly, a parameter-adaptive pulse-coupled neural network (PA-PCNN) is employed to obtain the high-intensity regions including gallstones. Fourthly, a modified region-growing algorithm is used to eliminate physicians' labeled regions and avoid over-segmentation of gallstones; it also has good self-adaptability within the growth cycle in light of the specified growing and terminating conditions. Fifthly, the smooth contours of the detected gallbladder and gallstones are obtained by locally weighted regression smoothing (LOESS). RESULTS We tested the proposed method on clinical data from Gansu Provincial Hospital of China and obtained encouraging results. For the gallbladder and gallstones, the average similarity percentage of contours (EVA), which combines Dice's similarity, overlap fraction, and overlap value, is 86.01% and 79.81%, respectively; the position error is 1.7675 and 0.5414 mm, respectively; and the runtime is 4.2211 and 0.6603 s, respectively. Our method thus achieves competitive performance compared with state-of-the-art methods. CONCLUSIONS The proposed method has the potential to assist physicians in diagnosing gallbladder disease rapidly and effectively.
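The speckle-reduction step can be illustrated with the classic Perona-Malik anisotropic diffusion shown below; it is a generic stand-in assuming a grayscale ultrasound slice, not the paper's exact combination with the modified Otsu algorithm, and the constants (and the periodic boundary handling via np.roll) are chosen for brevity.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=15, kappa=30.0, gamma=0.1):
    """Perona-Malik diffusion: smooths speckle while preserving strong edges."""
    out = img.astype(float).copy()
    for _ in range(n_iter):
        # one-sided differences toward the four neighbours
        dn = np.roll(out, -1, axis=0) - out
        ds = np.roll(out, 1, axis=0) - out
        de = np.roll(out, -1, axis=1) - out
        dw = np.roll(out, 1, axis=1) - out
        # edge-stopping function g = exp(-(|grad I| / kappa)^2)
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        out += gamma * (cn * dn + cs * ds + ce * de + cw * dw)
    return out
```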
Affiliation(s)
- Jing Lian
- School of Information Science and Engineering, Lanzhou University, Lanzhou, 730000, Gansu, China
- Yide Ma
- School of Information Science and Engineering, Lanzhou University, Lanzhou, 730000, Gansu, China.
- Yurun Ma
- School of Information Science and Engineering, Lanzhou University, Lanzhou, 730000, Gansu, China
- Bin Shi
- Equipment Management Department, Gansu Provincial Hospital, Lanzhou, 730000, Gansu, China
- Jizhao Liu
- School of Information Science and Engineering, Lanzhou University, Lanzhou, 730000, Gansu, China
- Zhen Yang
- School of Information Science and Engineering, Lanzhou University, Lanzhou, 730000, Gansu, China
- Yanan Guo
- School of Information Science and Engineering, Lanzhou University, Lanzhou, 730000, Gansu, China
29. Yang Z, Dong M, Guo Y, Gao X, Wang K, Shi B, Ma Y. A new method of micro-calcifications detection in digitized mammograms based on improved simplified PCNN. Neurocomputing 2016. [DOI: 10.1016/j.neucom.2016.08.068]
30. Zhang X, Ren J, Huang Z, Zhu F. Spiking Cortical Model Based Multimodal Medical Image Fusion by Combining Entropy Information with Weber Local Descriptor. Sensors (Basel) 2016; 16:E1503. [PMID: 27649190] [PMCID: PMC5038776] [DOI: 10.3390/s16091503]
Abstract
Multimodal medical image fusion (MIF) plays an important role in clinical diagnosis and therapy. Existing MIF methods tend to introduce artifacts, lead to loss of image details or produce low-contrast fused images. To address these problems, a novel spiking cortical model (SCM) based MIF method has been proposed in this paper. The proposed method can generate high-quality fused images using the weighting fusion strategy based on the firing times of the SCM. In the weighting fusion scheme, the weight is determined by combining the entropy information of pulse outputs of the SCM with the Weber local descriptor operating on the firing mapping images produced from the pulse outputs. The extensive experiments on multimodal medical images show that compared with the numerous state-of-the-art MIF methods, the proposed method can preserve image details very well and avoid the introduction of artifacts effectively, and thus it significantly improves the quality of fused images in terms of human vision and objective evaluation criteria such as mutual information, edge preservation index, structural similarity based metric, fusion quality index, fusion similarity metric and standard deviation.
Affiliation(s)
- Xuming Zhang
- Department of Biomedical Engineering, School of Life Science and Technology, Huazhong University of Science and Technology, No. 1037, Luoyu Road, Wuhan 430074, China.
- Jinxia Ren
- Department of Biomedical Engineering, School of Life Science and Technology, Huazhong University of Science and Technology, No. 1037, Luoyu Road, Wuhan 430074, China.
- Zhiwen Huang
- Department of Biomedical Engineering, School of Life Science and Technology, Huazhong University of Science and Technology, No. 1037, Luoyu Road, Wuhan 430074, China.
- Fei Zhu
- Department of Biomedical Engineering, School of Life Science and Technology, Huazhong University of Science and Technology, No. 1037, Luoyu Road, Wuhan 430074, China.
31. Guo Y, Dong M, Yang Z, Gao X, Wang K, Luo C, Ma Y, Zhang J. A new method of detecting micro-calcification clusters in mammograms using contourlet transform and non-linking simplified PCNN. Comput Methods Programs Biomed 2016; 130:31-45. [PMID: 27208519] [DOI: 10.1016/j.cmpb.2016.02.019]
Abstract
BACKGROUND AND OBJECTIVES Mammography analysis is an effective technology for early detection of breast cancer. Micro-calcification clusters (MCs) are a vital indicator of breast cancer, so the detection of MCs plays an important role in computer aided detection (CAD) systems; this paper proposes a new hybrid method to improve the MC detection rate in mammograms. METHODS The proposed method comprises three main steps: firstly, remove the label and pectoral muscle using largest-connected-region marking and region growing, and enhance MCs using a combination of the double top-hat transform and a grayscale-adjustment function; secondly, remove noise and other interference while retaining the significant information by modifying the contourlet coefficients with a nonlinear function; thirdly, detect MCs with a non-linking simplified pulse-coupled neural network. RESULTS In our work, we chose 118 mammograms (38 with micro-calcification clusters and 80 without micro-calcifications) from two open, commonly used databases, MIAS and JSMIT, to evaluate our algorithm, and we achieved a specificity of 94.7%, sensitivity of 96.3%, AUC of 97.0%, accuracy of 95.8%, MCC of 90.4%, MCC-PS of 61.3%, and CEI of 53.5%; these promising results clearly demonstrate that the proposed approach outperforms the current state-of-the-art algorithms. In addition, the method was verified on 20 mammograms from the People's Hospital of Gansu Province, and the detection results reveal that our method can accurately detect the calcifications in clinical application. CONCLUSIONS The proposed method is simple and fast, achieves a high detection rate, and could be used in CAD systems to assist physicians in breast cancer diagnosis in the future.
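The top-hat enhancement in the first step can be sketched with grayscale morphology: a white top-hat emphasizes small bright spots (candidate micro-calcifications) while a black top-hat suppresses broader dark clutter. The structuring-element sizes below are illustrative, and the paper's grayscale-adjustment function and contourlet stage are not included.

```python
import numpy as np
from scipy import ndimage

def enhance_microcalcifications(img, spot_size=9, background_size=25):
    """Double top-hat style enhancement of small bright structures in a mammogram."""
    img = img.astype(float)
    bright_spots = ndimage.white_tophat(img, size=spot_size)     # small bright details
    dark_clutter = ndimage.black_tophat(img, size=background_size)  # broader dark structures
    enhanced = img + bright_spots - dark_clutter
    return np.clip(enhanced, img.min(), img.max())
```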
Affiliation(s)
- Ya'nan Guo
- School of Information Science and Engineering, Lanzhou University, Lanzhou 730000, China.
- Min Dong
- School of Information Science and Engineering, Lanzhou University, Lanzhou 730000, China
- Zhen Yang
- School of Information Science and Engineering, Lanzhou University, Lanzhou 730000, China
- Xiaoli Gao
- School of Information Science and Engineering, Lanzhou University, Lanzhou 730000, China
- Keju Wang
- School of Information Science and Engineering, Lanzhou University, Lanzhou 730000, China
- Chongfan Luo
- School of Information Science and Engineering, Lanzhou University, Lanzhou 730000, China
- Yide Ma
- School of Information Science and Engineering, Lanzhou University, Lanzhou 730000, China
- Jiuwen Zhang
- School of Information Science and Engineering, Lanzhou University, Lanzhou 730000, China
32. Zhan K, Teng J, Shi J, Li Q, Wang M. Feature-Linking Model for Image Enhancement. Neural Comput 2016; 28:1072-1100. [DOI: 10.1162/neco_a_00832]
Abstract
Inspired by gamma-band oscillations and other neurobiological discoveries, neural network research has shifted its emphasis toward temporal coding, which uses the explicit times at which spikes occur as an essential dimension in neural representations. We present a feature-linking model (FLM) that uses the timing of spikes to encode information. The first spiking time of the FLM is applied to image enhancement, and the processing mechanisms are consistent with the human visual system. The enhancement algorithm boosts details while preserving the information of the input image. Experiments demonstrate the effectiveness of the proposed method.
Affiliation(s)
- Kun Zhan
- School of Information Science and Engineering, Lanzhou University, Lanzhou, Gansu 730000, China
- Jicai Teng
- School of Information Science and Engineering, Lanzhou University, Lanzhou, Gansu 730000, China
- Jinhui Shi
- School of Information Science and Engineering, Lanzhou University, Lanzhou, Gansu 730000, China
- Qiaoqiao Li
- School of Information Science and Engineering, Lanzhou University, Lanzhou, Gansu 730000, China
- Mingying Wang
- School of Information Science and Engineering, Lanzhou University, Lanzhou, Gansu 730000, China
33. Chen Y, Ma Y, Kim DH, Park SK. Region-Based Object Recognition by Color Segmentation Using a Simplified PCNN. IEEE Trans Neural Netw Learn Syst 2015; 26:1682-1697. [PMID: 25494514] [DOI: 10.1109/tnnls.2014.2351418]
Abstract
In this paper, we propose a region-based object recognition (RBOR) method to identify objects from complex real-world scenes. First, the proposed method performs color image segmentation by a simplified pulse-coupled neural network (SPCNN) for the object model image and the test image, and then conducts region-based matching between them; hence, we name it RBOR with SPCNN (SPCNN-RBOR). The values of the SPCNN parameters are set automatically by our previously proposed method for each object model. In order to reduce the effects of varying light intensity and take advantage of the SPCNN's high resolution at low intensities for optimized color segmentation, a transformation integrating normalized Red Green Blue (RGB) with opponent color spaces is introduced. A novel image segmentation strategy is suggested to group the pixels firing synchronously throughout all the transformed channels of an image. Based on the segmentation results, a series of adaptive thresholds, adjustable according to the specific object model, is employed to remove outlier region blobs, form potential clusters, and refine the clusters in test images. The proposed SPCNN-RBOR method overcomes the drawback of feature-based methods that inevitably include background information in local invariant feature descriptors when keypoints lie near object boundaries. A large number of experiments show that the proposed SPCNN-RBOR method is robust to diverse complex variations, even under partial occlusion and in highly cluttered environments. In addition, the SPCNN-RBOR method works well not only in identifying textured objects but also in identifying less-textured ones, significantly outperforming current feature-based methods.
34. Hegenbart S, Uhl A. A scale- and orientation-adaptive extension of Local Binary Patterns for texture classification. Pattern Recognit 2015; 48:2633-2644. [PMID: 26240440] [PMCID: PMC4416733] [DOI: 10.1016/j.patcog.2015.02.024]
Abstract
Local Binary Patterns (LBPs) have been used in a wide range of texture classification scenarios and have proven to provide a highly discriminative feature representation. A major limitation of LBP is its sensitivity to affine transformations. In this work, we present a scale- and rotation-invariant computation of LBP. Rotation invariance is achieved by explicit alignment of features at the extraction level, using a robust estimate of global orientation. Scale-adapted features are computed in reference to the estimated scale of an image, based on the distribution of scale-normalized Laplacian responses in a scale-space representation. Intrinsic-scale adaptation is performed to compute features independent of the intrinsic texture scale, leading to significantly increased discriminative power for a large number of texture classes. In a final step, the rotation- and scale-invariant features are combined in a multi-resolution representation, which significantly improves classification accuracy in texture classification scenarios with scaling and rotation.
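A plain multi-resolution LBP descriptor (uniform patterns pooled over several radii) is sketched below for orientation; it does not include the explicit orientation alignment and intrinsic-scale estimation that give the cited method its invariance.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def multiscale_lbp_histogram(gray, radii=(1, 2, 3), points_per_radius=8):
    """Concatenated uniform-LBP histograms over several radii as a texture descriptor."""
    feats = []
    for r in radii:
        p = points_per_radius * r
        codes = local_binary_pattern(gray, P=p, R=r, method="uniform")
        # 'uniform' codes take values 0 .. P+1, so P+2 histogram bins cover them all
        hist, _ = np.histogram(codes, bins=p + 2, range=(0, p + 2), density=True)
        feats.append(hist)
    return np.concatenate(feats)
```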
Affiliation(s)
- Sebastian Hegenbart
- Department of Computer Sciences, University of Salzburg, Jakob-Haringer Strasse 2, 5020 Salzburg, Austria
35. Medical Image Fusion Based on Rolling Guidance Filter and Spiking Cortical Model. Comput Math Methods Med 2015; 2015:156043. [PMID: 26146512] [PMCID: PMC4469768] [DOI: 10.1155/2015/156043]
Abstract
Medical image fusion plays an important role in the diagnosis and treatment of diseases, for example in image-guided radiotherapy and surgery. Although numerous medical image fusion methods have been proposed, most of these approaches are sensitive to noise and usually lead to distortion and information loss in the fused image. Furthermore, they lack universality when dealing with different kinds of medical images. In this paper, we propose a new medical image fusion method to overcome the aforementioned issues of the existing methods. It is achieved by combining the rolling guidance filter (RGF) with the spiking cortical model (SCM). Firstly, the saliency of the medical images is captured by the RGF. Secondly, a self-adaptive threshold of the SCM is obtained from the mean and variance of the source images. Finally, the fused image is obtained from the SCM stimulated by the RGF coefficients. Experimental results show that the proposed method is superior to other current popular ones in both subjective visual performance and objective criteria.
36. Zhou D, Zhou H, Gao C, Guo Y. Simplified parameters model of PCNN and its application to image segmentation. Pattern Anal Appl 2015. [DOI: 10.1007/s10044-015-0462-6]
37. A pulse coupled neural network segmentation algorithm for reflectance confocal images of epithelial tissue. PLoS One 2015; 10:e0122368. [PMID: 25816131] [PMCID: PMC4376773] [DOI: 10.1371/journal.pone.0122368]
Abstract
Automatic segmentation of nuclei in reflectance confocal microscopy images is critical for visualization and rapid quantification of nuclear-to-cytoplasmic ratio, a useful indicator of epithelial precancer. Reflectance confocal microscopy can provide three-dimensional imaging of epithelial tissue in vivo with sub-cellular resolution. Changes in nuclear density or nuclear-to-cytoplasmic ratio as a function of depth obtained from confocal images can be used to determine the presence or stage of epithelial cancers. However, low nuclear to background contrast, low resolution at greater imaging depths, and significant variation in reflectance signal of nuclei complicate segmentation required for quantification of nuclear-to-cytoplasmic ratio. Here, we present an automated segmentation method to segment nuclei in reflectance confocal images using a pulse coupled neural network algorithm, specifically a spiking cortical model, and an artificial neural network classifier. The segmentation algorithm was applied to an image model of nuclei with varying nuclear to background contrast. Greater than 90% of simulated nuclei were detected for contrast of 2.0 or greater. Confocal images of porcine and human oral mucosa were used to evaluate application to epithelial tissue. Segmentation accuracy was assessed using manual segmentation of nuclei as the gold standard.
38. Hegenbart S, Uhl A, Vécsei A. Survey on computer aided decision support for diagnosis of celiac disease. Comput Biol Med 2015; 65:348-358. [PMID: 25770906] [PMCID: PMC4593300] [DOI: 10.1016/j.compbiomed.2015.02.007]
Abstract
Celiac disease (CD) is a complex autoimmune disorder in genetically predisposed individuals of all age groups triggered by the ingestion of food containing gluten. A reliable diagnosis is of high interest in view of embarking on a strict gluten-free diet, which is the CD treatment modality of first choice. The gold standard for diagnosis of CD is currently based on a histological confirmation of serology, using biopsies performed during upper endoscopy. Computer aided decision support is an emerging option in medicine and endoscopy in particular. Such systems could potentially save costs and manpower while simultaneously increasing the safety of the procedure. Research focused on computer-assisted systems in the context of automated diagnosis of CD has started in 2008. Since then, over 40 publications on the topic have appeared. In this context, data from classical flexible endoscopy as well as wireless capsule endoscopy (WCE) and confocal laser endomicrosopy (CLE) has been used. In this survey paper, we try to give a comprehensive overview of the research focused on computer-assisted diagnosis of CD. The state-of-the-art research in automated diagnosis of celiac disease is presented. A systematic review of methods and techniques used in this field is given. Specific issues and challenges in the field are identified and discussed.
Affiliation(s)
- Sebastian Hegenbart
- Department of Computer Sciences, University of Salzburg, Jakob-Haringer Strasse, 5020 Salzburg, Austria.
- Andreas Uhl
- Department of Computer Sciences, University of Salzburg, Jakob-Haringer Strasse, 5020 Salzburg, Austria.
- Andreas Vécsei
- St. Anna Children's Hospital, Medical University Vienna, 1090 Vienna, Austria.
39. Uhl A, Wimmer G. A systematic evaluation of the scale invariance of texture recognition methods. Pattern Anal Appl 2015; 18:945-969. [PMID: 27034616] [PMCID: PMC4768293] [DOI: 10.1007/s10044-014-0435-1]
Abstract
A large variety of well-known scale-invariant texture recognition methods is tested with respect to their scale invariance. The scale invariance of these methods is estimated by comparing the results of two test setups. In the first test setup, the images of the training and evaluation sets are acquired under the same scale conditions, and in the second test setup, the images in the evaluation set are gathered under different scale conditions than those of the training set. For the first test setup, scale invariance is not needed, whereas for the second test setup, scale invariance is obviously crucial. The difference between the results of these two test setups indicates the scale invariance of a method (the higher the scale invariance, the lower the difference). The scale invariance of the methods is additionally estimated by analyzing the similarity of the feature vectors of images and their scaled versions. In addition to scale invariance, we also test possible viewpoint and illumination invariance of the methods. As texture databases for our tests we use the KTH-TIPS database and the CUReT database. The results imply that many of the considered methods are not as scale-invariant as expected.
Affiliation(s)
- Andreas Uhl
- Department of Computer Sciences, University of Salzburg, Jakob Haringerstrasse 2, 5020 Salzburg, Austria
- Georg Wimmer
- Department of Computer Sciences, University of Salzburg, Jakob Haringerstrasse 2, 5020 Salzburg, Austria
40. Tang Y, Wang Z, Gao H, Qiao H, Kurths J. On controllability of neuronal networks with constraints on the average of control gains. IEEE Trans Cybern 2014; 44:2670-2681. [PMID: 24733036] [DOI: 10.1109/tcyb.2014.2313154]
Abstract
Control gains play an important role in the control of a natural or a technical system since they reflect how much resource is required to optimize a certain control objective. This paper is concerned with the controllability of neuronal networks with constraints on the average value of the control gains injected in driver nodes, which are in accordance with engineering and biological backgrounds. In order to deal with the constraints on control gains, the controllability problem is transformed into a constrained optimization problem (COP). The introduction of the constraints on the control gains unavoidably leads to substantial difficulty in finding feasible as well as refining solutions. As such, a modified dynamic hybrid framework (MDyHF) is developed to solve this COP, based on an adaptive differential evolution and the concept of Pareto dominance. By comparing with statistical methods and several recently reported constrained optimization evolutionary algorithms (COEAs), we show that our proposed MDyHF is competitive and promising in studying the controllability of neuronal networks. Based on the MDyHF, we proceed to show the controlling regions under different levels of constraints. It is revealed that we should allocate the control gains economically when strong constraints are considered. In addition, it is found that as the constraints become more restrictive, the driver nodes are more likely to be selected from the nodes with a large degree. The results and methods presented in this paper will provide useful insights into developing new techniques to control a realistic complex network efficiently.
41. Zhang X, Li L, Zhu F, Hou W, Chen X. Spiking cortical model-based nonlocal means method for speckle reduction in optical coherence tomography images. J Biomed Opt 2014; 19:066005. [PMID: 24919448] [DOI: 10.1117/1.jbo.19.6.066005]
Abstract
Optical coherence tomography (OCT) images are usually degraded by significant speckle noise, which will strongly hamper their quantitative analysis. However, speckle noise reduction in OCT images is particularly challenging because of the difficulty in differentiating between noise and the information components of the speckle pattern. To address this problem, the spiking cortical model (SCM)-based nonlocal means method is presented. The proposed method explores self-similarities of OCT images based on rotation-invariant features of image patches extracted by SCM and then restores the speckled images by averaging the similar patches. This method can provide sufficient speckle reduction while preserving image details very well due to its effectiveness in finding reliable similar patches under high speckle noise contamination. When applied to the retinal OCT image, this method provides signal-to-noise ratio improvements of >16 dB with a small 5.4% loss of similarity.
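For comparison purposes, the standard intensity-patch nonlocal means filter that the cited method builds on can be applied to a B-scan as sketched below; the SCM-derived rotation-invariant patch features are not used here, and the filter parameters are typical defaults rather than tuned values.

```python
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

def despeckle_oct(bscan):
    """Plain nonlocal-means despeckling of an OCT B-scan (float image, values in [0, 1])."""
    sigma = float(estimate_sigma(bscan))          # rough noise level estimate
    return denoise_nl_means(bscan, patch_size=7, patch_distance=11,
                            h=1.15 * sigma, fast_mode=True)
```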
Affiliation(s)
- Xuming Zhang
- Huazhong University of Science and Technology, School of Life Science and Technology, 1037 Luoyu Road, Wuhan 430074, China
- Liu Li
- Huazhong University of Science and Technology, School of Life Science and Technology, 1037 Luoyu Road, Wuhan 430074, China
- Fei Zhu
- Huazhong University of Science and Technology, School of Life Science and Technology, 1037 Luoyu Road, Wuhan 430074, China
- Wenguang Hou
- Huazhong University of Science and Technology, School of Life Science and Technology, 1037 Luoyu Road, Wuhan 430074, China
- Xinjian Chen
- Soochow University, School of Electronics and Information, 1 Shizi Street, Suzhou 215006, China
43. Wang WX, Zhou WG, Zhao XM. Airplane extraction and identification by improved PCNN with wavelet transform and modified Zernike moments. Imaging Sci J 2013. [DOI: 10.1179/1743131x12y.0000000033]
44. Li J, Zou B, Ding L, Gao X. Image Segmentation with PCNN Model and Immune Algorithm. J Comput 2013; 8:2429-2436. [DOI: 10.4304/jcp.8.9.2429-2436]
45. A coarse-to-fine strategy for iterative segmentation using simplified pulse-coupled neural network. Soft Comput 2013. [DOI: 10.1007/s00500-013-1077-8]
46. Wang N, Ma Y, Zhan K, Yuan M. Multimodal Medical Image Fusion Framework Based on Simplified PCNN in Nonsubsampled Contourlet Transform Domain. J Multimed 2013; 8:270-276. [DOI: 10.4304/jmm.8.3.270-276]
47. Gao C, Zhou D, Guo Y. An Iterative Thresholding Segmentation Model Using a Modified Pulse Coupled Neural Network. Neural Process Lett 2013. [DOI: 10.1007/s11063-013-9291-z]
48. Hegenbart S, Uhl A, Vécsei A, Wimmer G. Scale invariant texture descriptors for classifying celiac disease. Med Image Anal 2013; 17:458-474. [PMID: 23481171] [PMCID: PMC4268896] [DOI: 10.1016/j.media.2013.02.001]
Abstract
Scale-invariant texture recognition methods are applied for the computer-assisted diagnosis of celiac disease. In particular, emphasis is given to techniques enhancing the scale invariance of multi-scale and multi-orientation wavelet transforms and to methods based on fractal analysis. After fine-tuning to specific properties of our celiac disease imagery database, which consists of endoscopic images of the duodenum, some scale-invariant (and often even viewpoint-invariant) methods provide classification results improving the current state of the art. However, not every investigated scale-invariant method can be applied successfully to our dataset. Therefore, the scale invariance of the employed approaches is explicitly assessed, and it is found that many of the analyzed methods are not as scale-invariant as they theoretically should be. The results imply that scale invariance is not a key feature required for successful classification of our celiac disease dataset.
50. Li X, Ma Y, Wang Z, Yu W. Geometry-Invariant Texture Retrieval Using a Dual-Output Pulse-Coupled Neural Network. Neural Comput 2012; 24:194-216. [DOI: 10.1162/neco_a_00194]
Abstract
This letter proposes a novel dual-output pulse coupled neural network model (DPCNN). The new model is applied to obtain a more stable texture description in the face of the geometric transformation. Time series, which are computed from output binary images of DPCNN, are employed as translation-, rotation-, scale-, and distortion-invariant texture features. In the experiments, DPCNN has been well tested by using Brodatz's album and the VisTex database. Several existing models are compared with the proposed DPCNN model. The experimental results, based on different testing data sets for images with different translations, orientations, scales, and affine transformations, show that our proposed model outperforms existing models in geometry-invariant texture retrieval. Furthermore, the robustness of DPCNN to noisy data is examined in the experiments.
Affiliation(s)
- Xiaojun Li
- School of Information Science and Engineering, Lanzhou University, Lanzhou, Gansu Province 730000, China
- Yide Ma
- School of Information Science and Engineering, Lanzhou University, Lanzhou, Gansu Province 730000, China
- Zhaobin Wang
- School of Information Science and Engineering, Lanzhou University, Lanzhou, Gansu Province 730000, China
- Wenrui Yu
- School of Information Science and Engineering, Lanzhou University, Lanzhou, Gansu Province 730000, China