1
Zhao H, Deng X, Shao H, Jiang Y. COVID-19 diagnostic prediction on chest CT scan images using hybrid quantum-classical convolutional neural network. J Biomol Struct Dyn 2024;42:3737-3746. PMID: 38600864. DOI: 10.1080/07391102.2023.2226215.
Abstract
Notwithstanding the extensive research efforts directed towards devising a dependable approach for the diagnosis of coronavirus disease 2019 (COVID-19), the inherent complexity and capriciousness of the virus continue to pose a formidable challenge to the precise identification of affected individuals. In light of this predicament, it is essential to devise a model for COVID-19 prediction utilizing chest computed tomography (CT) scans. To this end, we present a hybrid quantum-classical convolutional neural network (HQCNN) model, founded on stochastic quantum circuits, that can discern COVID-19 patients from chest CT images. Two publicly available chest CT image datasets were employed to evaluate the performance of our model. The experimental outcomes evinced diagnostic accuracies of 99.39% and 97.91%, along with precisions of 99.19% and 98.52%, respectively. These findings indicate that the proposed model surpasses recently published works in terms of performance, thus providing a superior ability to precisely predict COVID-19 positive instances. Communicated by Ramaswamy H. Sarma.
Affiliation(s)
- Haorong Zhao
- School of Computer, Jiangsu University of Science and Technology, Zhenjiang, Jiangsu, China
- Xing Deng
- School of Computer, Jiangsu University of Science and Technology, Zhenjiang, Jiangsu, China
- Haijian Shao
- School of Computer, Jiangsu University of Science and Technology, Zhenjiang, Jiangsu, China
- Department of Electrical and Computer Engineering, University of Nevada, Las Vegas, NV, USA
- Yingtao Jiang
- Department of Electrical and Computer Engineering, University of Nevada, Las Vegas, NV, USA
2
Mujahid O, Contreras I, Beneyto A, Vehi J. Generative deep learning for the development of a type 1 diabetes simulator. Communications Medicine 2024;4:51. PMID: 38493243. PMCID: PMC10944502. DOI: 10.1038/s43856-024-00476-0.
Abstract
BACKGROUND: Type 1 diabetes (T1D) simulators, crucial for advancing diabetes treatments, often fall short of capturing the entire complexity of the glucose-insulin system due to the imprecise approximation of the physiological models. This study introduces a simulation approach employing a conditional deep generative model. The aim is to overcome the limitations of existing T1D simulators by synthesizing virtual patients that more accurately represent the entire glucose-insulin system physiology.
METHODS: Our methodology utilizes a sequence-to-sequence generative adversarial network to simulate virtual T1D patients causally. Causality is embedded in the model by introducing shifted input-output pairs during training, with a 90-min shift capturing the impact of input insulin and carbohydrates on blood glucose. To validate our approach, we train and evaluate the model using three distinct datasets, consisting of 27, 12, and 10 T1D patients, respectively. In addition, we subject the trained model to further validation for closed-loop therapy, employing a state-of-the-art controller.
RESULTS: The generated patients display statistical similarity to real patients when evaluated on the time-in-range results for each of the standard blood glucose ranges in T1D management along with means and variability outcomes. When tested for causality, authentic causal links are identified between the insulin, carbohydrates, and blood glucose levels of the virtual patients. The trained generative model demonstrates behaviours that are closer to reality compared to conventional T1D simulators when subjected to closed-loop insulin therapy using a state-of-the-art controller.
CONCLUSIONS: These results highlight our approach's capability to accurately capture physiological dynamics and establish genuine causal relationships, holding promise for enhancing the development and evaluation of therapies in diabetes.
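The 90-minute shifted input-output pairing described in the methods can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the 5-minute sampling interval (so 90 min = 18 steps) and all names are assumptions.

```python
import numpy as np

def make_shifted_pairs(inputs, glucose, shift_steps=18):
    """Pair each input sample with the glucose value shifted forward by
    `shift_steps` samples (18 x 5 min = 90 min), so the model explains
    future glucose only from past insulin/carbohydrate inputs."""
    x = inputs[:-shift_steps]   # inputs up to time t
    y = glucose[shift_steps:]   # glucose from t + 90 min onward
    return x, y

# toy example: 100 samples of (insulin, carbs) inputs and a glucose trace
rng = np.random.default_rng(0)
u = rng.random((100, 2))
g = rng.random(100)
x, y = make_shifted_pairs(u, g)
print(x.shape, y.shape)  # (82, 2) (82,)
```

The shifted pairs `(x, y)` would then feed the sequence-to-sequence generator during training.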
Affiliation(s)
- Omer Mujahid
- Modelling, Identification and Control Engineering Laboratory, Institut d'Informatica i Aplicacions, Universitat de Girona, Girona, 17003, Girona, Spain
- Ivan Contreras
- Modelling, Identification and Control Engineering Laboratory, Institut d'Informatica i Aplicacions, Universitat de Girona, Girona, 17003, Girona, Spain
- Aleix Beneyto
- Modelling, Identification and Control Engineering Laboratory, Institut d'Informatica i Aplicacions, Universitat de Girona, Girona, 17003, Girona, Spain
- Josep Vehi
- Modelling, Identification and Control Engineering Laboratory, Institut d'Informatica i Aplicacions, Universitat de Girona, Girona, 17003, Girona, Spain
- Centro de Investigación Biomédica en Red de Diabetes y Enfermedades Metabólicas Asociadas (CIBERDEM), Girona, Spain
3
Guo X, Wang Z, Wu P, Li Y, Alsaadi FE, Zeng N. ELTS-Net: An enhanced liver tumor segmentation network with augmented receptive field and global contextual information. Comput Biol Med 2024;169:107879. PMID: 38142549. DOI: 10.1016/j.compbiomed.2023.107879.
Abstract
Liver cancer has one of the highest incidence rates among human malignancies, and late-stage liver cancer is essentially incurable, so early diagnosis and lesion localization are of important clinical value. This study proposes ELTS-Net, an enhanced network architecture based on the 3D U-Net model, to address the limitations of conventional image segmentation methods and the underutilization of spatial image features by the 2D U-Net structure. ELTS-Net expands upon the original network by incorporating dilated convolutions to increase the receptive field of the convolutional kernels. Additionally, an attention residual module, comprising an attention mechanism and residual connections, replaces the original convolutional module and serves as the primary component of the encoder and decoder. This design enables the network to capture global contextual information in both channel and spatial dimensions. Furthermore, deep supervision modules are integrated between different levels of the decoder network, providing additional feedback from deeper intermediate layers; this constrains the network weights to the target regions and optimizes segmentation results. Evaluation on the LiTS2017 dataset shows improvements over the baseline 3D U-Net model, achieving 95.2% liver segmentation accuracy and 71.9% tumor segmentation accuracy, improvements of 0.9% and 3.1%, respectively. The experimental results validate the superior segmentation performance of ELTS-Net compared to other models, offering valuable guidance for clinical diagnosis and treatment.
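The receptive-field gain from dilated convolutions mentioned above can be checked with the standard receptive-field recurrence for stacked convolutions (generic arithmetic, not code from the paper):

```python
def receptive_field(layers):
    """Receptive field of stacked convolutions.
    Each layer is a (kernel, stride, dilation) triple."""
    rf, jump = 1, 1
    for k, s, d in layers:
        k_eff = d * (k - 1) + 1        # dilated ("effective") kernel size
        rf += (k_eff - 1) * jump       # growth contributed by this layer
        jump *= s                      # cumulative stride
    return rf

plain   = receptive_field([(3, 1, 1)] * 3)                     # three plain 3x3 convs
dilated = receptive_field([(3, 1, 1), (3, 1, 2), (3, 1, 4)])   # dilations 1, 2, 4
print(plain, dilated)  # 7 15
```

With the same number of parameters, the dilated stack more than doubles the receptive field, which is the effect ELTS-Net exploits.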
Affiliation(s)
- Xiaoyue Guo
- College of Engineering, Peking University, Beijing 100871, China; Department of Instrumental and Electrical Engineering, Xiamen University, Fujian 361005, China
- Zidong Wang
- Department of Computer Science, Brunel University London, Uxbridge UB8 3PH, UK
- Peishu Wu
- Department of Instrumental and Electrical Engineering, Xiamen University, Fujian 361005, China
- Yurong Li
- College of Electrical Engineering and Automation, Fuzhou University, Fujian 350116, China; Fujian Key Lab of Medical Instrumentation & Pharmaceutical Technology, Fujian 350116, China
- Fuad E Alsaadi
- Communication Systems and Networks Research Group, Department of Electrical and Computer Engineering, Faculty of Engineering, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Nianyin Zeng
- Department of Instrumental and Electrical Engineering, Xiamen University, Fujian 361005, China
4
Khan SH, Iqbal J, Hassnain SA, Owais M, Mostafa SM, Hadjouni M, Mahmoud A. COVID-19 detection and analysis from lung CT images using novel channel boosted CNNs. Expert Systems with Applications 2023;229:120477. PMID: 37220492. PMCID: PMC10186852. DOI: 10.1016/j.eswa.2023.120477.
Abstract
In December 2019, COVID-19 emerged in Wuhan, China, and grew into a global pandemic affecting human life and the worldwide economy; an efficient diagnostic system is therefore required to control its spread. However, automatic diagnosis is challenged by the limited amount of labeled data, minor contrast variation, and high structural similarity between infection and background. In this regard, a new two-phase deep convolutional neural network (CNN) based diagnostic system is proposed to detect minute irregularities and analyze COVID-19 infection. In the first phase, a novel SB-STM-BRNet CNN is developed, incorporating a new channel Squeezed and Boosted (SB) block and a dilated convolutional Split-Transform-Merge (STM) block to detect COVID-19-infected lung CT images. The new STM blocks perform multi-path region-smoothing and boundary operations, which help to learn minor contrast variation and global COVID-19-specific patterns. Furthermore, diverse boosted channels are obtained using the SB and transfer learning concepts in STM blocks to learn texture variation between COVID-19-specific and healthy images. In the second phase, COVID-19-infected images are provided to the novel COVID-CB-RESeg segmentation CNN to identify and analyze COVID-19 infectious regions. The proposed COVID-CB-RESeg methodically employs region-homogeneity and heterogeneity operations in each encoder-decoder block and a boosted decoder using auxiliary channels to simultaneously learn the low illumination and the boundaries of the COVID-19-infected region. The proposed diagnostic system yields good performance, with accuracy of 98.21%, F-score of 98.24%, Dice similarity of 96.40%, and IoU of 98.85% for the COVID-19-infected region. It would reduce the burden on radiologists and strengthen their decisions for fast and accurate COVID-19 diagnosis.
Affiliation(s)
- Saddam Hussain Khan
- Department of Computer Systems Engineering, University of Engineering and Applied Science, Swat 19060, Pakistan
- Javed Iqbal
- Department of Computer Systems Engineering, University of Engineering and Applied Science, Swat 19060, Pakistan
- Syed Agha Hassnain
- Ocean College, Zhejiang University, Zheda Road 1, Zhoushan, Zhejiang 316021, China
- Muhammad Owais
- KUCARS and C2PS, Department of Electrical Engineering and Computer Science, Khalifa University, UAE
- Samih M Mostafa
- Computer Science Department, Faculty of Computers and Information, South Valley University, Qena 83523, Egypt
- Faculty of Industry and Energy Technology, New Assiut Technological University (N.A.T.U.), New Assiut City, Egypt
- Myriam Hadjouni
- Department of Computer Sciences, College of Computer and Information Science, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Amena Mahmoud
- Faculty of Computers and Information, Department of Computer Science, Kafrelsheikh University, Egypt
5
Hsieh C, Nobre IB, Sousa SC, Ouyang C, Brereton M, Nascimento JC, Jorge J, Moreira C. MDF-Net for abnormality detection by fusing X-rays with clinical data. Sci Rep 2023;13:15873. PMID: 37741833. PMCID: PMC10517966. DOI: 10.1038/s41598-023-41463-0.
Abstract
This study investigates the effects of including patients' clinical information on the performance of deep learning (DL) classifiers for disease location in chest X-ray images. Although current classifiers achieve high performance using chest X-ray images alone, consultations with practicing radiologists indicate that clinical data is highly informative and essential for interpreting medical images and making proper diagnoses. In this work, we propose a novel architecture consisting of two fusion methods that enable the model to simultaneously process patients' clinical data (structured data) and chest X-rays (image data). Since these data modalities are in different dimensional spaces, we propose a spatial arrangement strategy, spatialization, to facilitate the multimodal learning process in a Mask R-CNN model. We performed an extensive experimental evaluation using MIMIC-Eye, a dataset comprising different modalities: MIMIC-CXR (chest X-ray images), MIMIC IV-ED (patients' clinical data), and REFLACX (annotations of disease locations in chest X-rays). Results show that incorporating patients' clinical data in a DL model together with the proposed fusion methods improves the disease localization in chest X-rays by 12% in terms of Average Precision compared to a standard Mask R-CNN using chest X-rays alone. Further ablation studies also emphasize the importance of multimodal DL architectures and the incorporation of patients' clinical data in disease localization. In the interest of fostering scientific reproducibility, the architecture proposed within this investigation has been made publicly accessible (https://github.com/ChihchengHsieh/multimodal-abnormalities-detection).
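One plausible reading of the "spatialization" strategy is broadcasting each scalar clinical feature into a constant spatial plane so it can be concatenated channel-wise with image feature maps. The sketch below illustrates that idea under this assumption; the exact arrangement in the paper may differ, and the shapes and feature names are invented:

```python
import numpy as np

def spatialize(clinical, height, width):
    """Broadcast a vector of clinical features (F,) into an image-shaped
    tensor (F, H, W) of constant planes, so it can be concatenated
    channel-wise with image feature maps."""
    f = np.asarray(clinical, dtype=float)
    return np.broadcast_to(f[:, None, None], (f.size, height, width)).copy()

image_feats = np.zeros((64, 32, 32))          # e.g. CNN feature maps (illustrative)
clin = spatialize([0.7, 36.6, 98.0], 32, 32)  # normalised age, temperature, SpO2 (invented values)
fused = np.concatenate([image_feats, clin], axis=0)
print(fused.shape)  # (67, 32, 32)
```

After this arrangement, a standard convolutional detector can consume both modalities as a single multi-channel input.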
Affiliation(s)
- Chun Ouyang
- Queensland University of Technology, Brisbane, Australia
- Jacinto C Nascimento
- Institute for Systems and Robotics, Instituto Superior Técnico, University of Lisbon, Lisbon, Portugal
- Joaquim Jorge
- Instituto Superior Técnico, University of Lisbon, Lisbon, Portugal
- Catarina Moreira
- Queensland University of Technology, Brisbane, Australia
- Instituto Superior Técnico, University of Lisbon, Lisbon, Portugal
- Human Technology Institute, University of Technology Sydney, Ultimo, Australia
- INESC-ID, Lisbon, Portugal
6
Laghari AA, Sun Y, Alhussein M, Aurangzeb K, Anwar MS, Rashid M. Deep residual-dense network based on bidirectional recurrent neural network for atrial fibrillation detection. Sci Rep 2023;13:15109. PMID: 37704659. PMCID: PMC10499947. DOI: 10.1038/s41598-023-40343-x.
Abstract
Atrial fibrillation can lead to stroke, cerebral infarction, and other complications that seriously harm patients' health, and traditional deep learning methods have weak anti-interference and generalization ability. We therefore propose a novel deep residual-dense network with a bidirectional recurrent neural network (RNN) for atrial fibrillation detection. Combining a one-dimensional dense residual network with a bidirectional RNN simplifies the tedious feature-extraction steps and constructs an end-to-end neural network that achieves atrial fibrillation detection through data-driven feature learning. Meanwhile, an attention mechanism is utilized to fuse the different features and extract high-value information. In experiments, the method achieves an accuracy of 97.72%, with sensitivity and specificity of 93.09% and 98.71%, respectively, outperforming comparable methods.
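The attention-based feature fusion mentioned above can be illustrated, in its simplest generic form, as a softmax-weighted sum of feature vectors. This is a sketch of the idea, not the authors' layer; the scores would normally be learned:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())   # subtract max for numerical stability
    return e / e.sum()

def attention_fuse(features, scores):
    """Fuse N feature vectors (N, D) into one (D,) vector using
    softmax-normalised attention scores (N,)."""
    w = softmax(np.asarray(scores, dtype=float))
    return w @ np.asarray(features, dtype=float)

feats = np.array([[1.0, 0.0],    # feature from branch 1
                  [0.0, 1.0]])   # feature from branch 2
fused = attention_fuse(feats, scores=[2.0, 0.0])  # branch 1 dominates
print(fused)
```

Higher-scoring branches contribute more to the fused representation, which is how "high-value information" is emphasised.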
Affiliation(s)
- Asif Ali Laghari
- Software College, Shenyang Normal University, Shenyang, 110034, China
- Yanqiu Sun
- Liaoning University of Traditional Chinese Medicine, Shenyang, China
- Musaed Alhussein
- Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, P.O. Box 51178, Riyadh, 11543, Kingdom of Saudi Arabia
- Khursheed Aurangzeb
- Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, P.O. Box 51178, Riyadh, 11543, Kingdom of Saudi Arabia
- Mamoon Rashid
- Department of Computer Engineering, Faculty of Science and Technology, Vishwakarma University, Pune, 411048, India
7
Innani S, Dutande P, Baid U, Pokuri V, Bakas S, Talbar S, Baheti B, Guntuku SC. Generative adversarial networks based skin lesion segmentation. Sci Rep 2023;13:13467. PMID: 37596306. PMCID: PMC10439152. DOI: 10.1038/s41598-023-39648-8.
Abstract
Skin cancer is a serious condition that requires accurate diagnosis and treatment. One way to assist clinicians in this task is using computer-aided diagnosis tools that automatically segment skin lesions from dermoscopic images. We propose a novel adversarial learning-based framework called Efficient-GAN (EGAN) that uses an unsupervised generative network to generate accurate lesion masks. It consists of a generator module with a top-down squeeze excitation-based compound scaled path, an asymmetric lateral connection-based bottom-up path, and a discriminator module that distinguishes between original and synthetic masks. A morphology-based smoothing loss is also implemented to encourage the network to create smooth semantic boundaries of lesions. The framework is evaluated on the International Skin Imaging Collaboration Lesion Dataset. It outperforms the current state-of-the-art skin lesion segmentation approaches with a Dice coefficient, Jaccard similarity, and accuracy of 90.1%, 83.6%, and 94.5%, respectively. We also design a lightweight segmentation framework called Mobile-GAN (MGAN) that achieves performance comparable to EGAN with an order of magnitude fewer training parameters, resulting in faster inference for low-compute settings.
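The Dice coefficient and Jaccard similarity reported above are standard overlap metrics on binary masks; a minimal sketch of how they are computed:

```python
import numpy as np

def dice_jaccard(pred, target):
    """Dice coefficient and Jaccard index for binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = 2 * inter / (pred.sum() + target.sum())  # 2|A∩B| / (|A|+|B|)
    jaccard = inter / union                         # |A∩B| / |A∪B|
    return dice, jaccard

pred   = np.array([[1, 1, 0, 0]])   # predicted lesion mask (toy)
target = np.array([[1, 0, 1, 0]])   # ground-truth mask (toy)
d, j = dice_jaccard(pred, target)
print(d, j)  # 0.5 0.333...
```

Dice weights the intersection twice, so it is always at least as large as Jaccard for the same pair of masks.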
Affiliation(s)
- Shubham Innani
- Center of Excellence in Signal and Image Processing, Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, Maharashtra, India
- Prasad Dutande
- Center of Excellence in Signal and Image Processing, Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, Maharashtra, India
- Ujjwal Baid
- Center of Excellence in Signal and Image Processing, Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, Maharashtra, India
- Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Venu Pokuri
- Department of Computer and Information Science, University of Pennsylvania, Philadelphia, USA
- Spyridon Bakas
- Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Sanjay Talbar
- Center of Excellence in Signal and Image Processing, Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, Maharashtra, India
- Bhakti Baheti
- Center of Excellence in Signal and Image Processing, Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, Maharashtra, India
- Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Sharath Chandra Guntuku
- Department of Computer and Information Science, University of Pennsylvania, Philadelphia, USA
8
Johnson DR, Vaidhyanathan RU. Detection and localization of multi-scale and oriented objects using an enhanced feature refinement algorithm. Mathematical Biosciences and Engineering 2023;20:15219-15243. PMID: 37679178. DOI: 10.3934/mbe.2023681.
Abstract
Object detection is a fundamental aspect of computer vision, with numerous generic object detectors proposed by various researchers. The proposed work presents a novel single-stage rotation detector that can accurately detect oriented and multi-scale objects in diverse scenarios. This detector addresses the challenges faced by current rotation detectors, such as the detection of arbitrary orientations, densely arranged objects, and loss discontinuity. First, the detector adopts a progressive, coarse-to-fine regression scheme that uses both horizontal anchors (for speed and higher recall) and rotating anchors (for oriented objects) in cluttered backgrounds. Second, the proposed detector includes a feature refinement module that helps minimize problems related to feature angulation and reduces the number of bounding boxes generated. Finally, to address loss discontinuity, the proposed detector utilizes a newly formulated adjustable loss function that can be extended to both single-stage and two-stage detectors. The proposed detector shows outstanding performance on benchmark datasets and significantly outperforms other state-of-the-art methods in terms of speed and accuracy.
Affiliation(s)
- Deepika Roselind Johnson
- School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, Tamilnadu, India
9
Azeem M, Javaid S, Khalil RA, Fahim H, Althobaiti T, Alsharif N, Saeed N. Neural Networks for the Detection of COVID-19 and Other Diseases: Prospects and Challenges. Bioengineering (Basel) 2023;10:850. PMID: 37508877. PMCID: PMC10416184. DOI: 10.3390/bioengineering10070850.
Abstract
The ability of artificial neural networks (ANNs) to learn, correct errors, and transform large amounts of raw data into beneficial medical decisions for treatment and care has made them increasingly popular for enhancing patient safety and quality of care. This paper therefore reviews the critical role of ANNs in providing valuable insights for patients' healthcare decisions and efficient disease diagnosis. We study different types of ANNs in the existing literature that advance ANNs' adaptation for complex applications. Specifically, we investigate ANNs' advances for predicting viral, cancer, skin, and COVID-19 diseases. Furthermore, we propose a deep convolutional neural network (CNN) model called ConXNet, based on chest radiography images, to improve the detection accuracy of COVID-19 disease. ConXNet is trained and tested using a chest radiography image dataset obtained from Kaggle, achieving more than 97% accuracy and 98% precision, which is better than other existing state-of-the-art models, such as DeTraC, U-Net, COVID MTNet, and COVID-Net, having 93.1%, 94.10%, 84.76%, and 90% accuracy and 94%, 95%, 85%, and 92% precision, respectively. The results show that the ConXNet model performed significantly well for a relatively large dataset compared with the aforementioned models. Moreover, the ConXNet model reduces the time complexity by using dropout layers and batch normalization techniques. Finally, we highlight future research directions and challenges, such as the complexity of the algorithms, insufficient available data, privacy and security, and integration of biosensing with ANNs. These research directions require considerable attention for improving the scope of ANNs for medical diagnostic and treatment applications.
Affiliation(s)
- Muhammad Azeem
- School of Science, Engineering & Environment, University of Salford, Manchester M5 4WT, UK
- Shumaila Javaid
- Department of Control Science and Engineering, College of Electronics and Information Engineering, Tongji University, Shanghai 201804, China
- Ruhul Amin Khalil
- Department of Electrical Engineering, University of Engineering and Technology, Peshawar 25120, Pakistan
- Department of Electrical and Communication Engineering, United Arab Emirates University (UAEU), Al-Ain 15551, United Arab Emirates
- Hamza Fahim
- Department of Control Science and Engineering, College of Electronics and Information Engineering, Tongji University, Shanghai 201804, China
- Turke Althobaiti
- Department of Computer Science, Faculty of Science, Northern Border University, Arar 73222, Saudi Arabia
- Nasser Alsharif
- Department of Administrative and Financial Sciences, Ranyah University College, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia
- Nasir Saeed
- Department of Electrical and Communication Engineering, United Arab Emirates University (UAEU), Al-Ain 15551, United Arab Emirates
10
Zhang Y, Yuan Q, Muzzammil HM, Gao G, Xu Y. Image-guided prostate biopsy robots: A review. Mathematical Biosciences and Engineering 2023;20:15135-15166. PMID: 37679175. DOI: 10.3934/mbe.2023678.
Abstract
The incidence of prostate cancer (PCa) in men is increasing year by year, so early diagnosis of PCa is of great significance. Transrectal ultrasonography (TRUS)-guided biopsy is a common method for diagnosing PCa. The biopsy is performed manually by urologists, but the diagnostic rate is only 20%-30%, and its reliability and accuracy no longer meet clinical needs. Image-guided prostate biopsy robots offer a high degree of automation, do not depend on the skill and experience of the operator, and reduce the workload and operation time of urologists. Capable of delivering biopsy needles to pre-defined biopsy locations with minimal needle placement error, they make up for the shortcomings of traditional free-hand biopsy and improve its reliability and accuracy. The integration of medical imaging technology with robotic systems is an important means of accurate tumor localization, biopsy puncture path planning, and visualization. This paper reviews image-guided prostate biopsy robots. Following the existing literature, guidance modalities are divided into magnetic resonance imaging (MRI), ultrasound (US), and fused imaging. First, robot structures are introduced and compared by guidance modality, with actuator and material choices discussed alongside. Second, image-guided localization technology is discussed. Finally, the field is summarized and suggestions for future development are provided.
Affiliation(s)
- Yongde Zhang
- Key Laboratory of Advanced Manufacturing and Intelligent Technology, Ministry of Education, Harbin University of Science and Technology, Harbin 150080, China
- Foshan Baikang Robot Technology Co., Ltd, Nanhai District, Foshan City, Guangdong Province 528225, China
- Qihang Yuan
- Key Laboratory of Advanced Manufacturing and Intelligent Technology, Ministry of Education, Harbin University of Science and Technology, Harbin 150080, China
- Hafiz Muhammad Muzzammil
- Key Laboratory of Advanced Manufacturing and Intelligent Technology, Ministry of Education, Harbin University of Science and Technology, Harbin 150080, China
- Guoqiang Gao
- Key Laboratory of Advanced Manufacturing and Intelligent Technology, Ministry of Education, Harbin University of Science and Technology, Harbin 150080, China
- Yong Xu
- Department of Urology, the Third Medical Centre, Chinese PLA (People's Liberation Army) General Hospital, Beijing 100039, China
11
Li M, Qi Y, Pan G. Optimal Feature Analysis for Identification Based on Intracranial Brain Signals with Machine Learning Algorithms. Bioengineering (Basel) 2023;10:801. PMID: 37508828. PMCID: PMC10376518. DOI: 10.3390/bioengineering10070801.
Abstract
Biometrics, e.g., fingerprints, the iris, and the face, have been widely used to authenticate individuals. However, most biometrics are not cancellable, i.e., once these traditional biometrics are cloned or stolen, they cannot be replaced easily. Unlike traditional biometrics, brain biometrics are extremely difficult to clone or forge due to the natural randomness across different individuals, which makes them an ideal option for identity authentication. Most existing brain biometrics are based on an electroencephalogram (EEG), which typically demonstrates unstable performance due to the low signal-to-noise ratio (SNR). Thus, in this paper, we propose the use of intracortical brain signals, which have higher resolution and SNR, to realize the construction of a high-performance brain biometric. Significantly, this is the first study to investigate the features of intracortical brain signals for identification. Specifically, several features based on local field potential are computed for identification, and their performance is compared with different machine learning algorithms. The results show that frequency domain features and time-frequency domain features are excellent for intra-day and inter-day identification. Furthermore, the energy features perform best among all features, with 98% intra-day and 93% inter-day identification accuracy, which demonstrates the great potential of intracranial brain signals as biometrics. This paper may serve as guidance for future intracranial brain research and the development of more reliable and high-performance brain biometrics.
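Band-limited energy features of the kind evaluated above can be computed from a local field potential trace with an FFT power spectrum. This is a generic sketch: the sampling rate and band edges are illustrative, not the paper's:

```python
import numpy as np

def band_energy(signal, fs, low, high):
    """Energy of `signal` in the [low, high] Hz band, summed from the
    FFT power spectrum (Parseval-style band sum)."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= low) & (freqs <= high)
    return float(np.sum(np.abs(spec[mask]) ** 2) / len(signal))

fs = 1000.0                               # 1 kHz sampling (illustrative)
t = np.arange(0, 1, 1 / fs)
lfp = np.sin(2 * np.pi * 20 * t)          # toy LFP: a pure 20 Hz oscillation
low_band  = band_energy(lfp, fs, 10, 30)  # band containing the 20 Hz peak
high_band = band_energy(lfp, fs, 60, 90)  # band with essentially no energy
print(low_band > 100 * high_band)  # True
```

A vector of such per-band energies per channel is the kind of feature that can then be fed to standard classifiers for identification.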
Affiliation(s)
- Ming Li
- State Key Lab of Brain-Machine Intelligence, Hangzhou 310018, China
- College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China
- Yu Qi
- State Key Lab of Brain-Machine Intelligence, Hangzhou 310018, China
- Affiliated Mental Health Center & Hangzhou Seventh Peoples Hospital, MOE Frontier Science Center for Brain Science and Brain-Machine Integration, Zhejiang University School of Medicine, Hangzhou 310030, China
- Gang Pan
- State Key Lab of Brain-Machine Intelligence, Hangzhou 310018, China
- College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China
12
Yoshida Y, Hayashi Y, Shimada T, Hattori N, Tomita K, Miura RE, Yamao Y, Tateishi S, Iwadate Y, Nakada TA. Prehospital stroke-scale machine-learning model predicts the need for surgical intervention. Sci Rep 2023;13:9135. PMID: 37277424. DOI: 10.1038/s41598-023-36004-8.
Abstract
While the development of prehospital diagnosis scales has been reported in various regions, we have also developed a scale to predict stroke type using machine learning. In the present study, we aimed to assess for the first time a scale that predicts the need for surgical intervention across stroke types, including subarachnoid haemorrhage and intracerebral haemorrhage. A multicentre retrospective study was conducted within a secondary medical care area. Twenty-three items, including vitals and neurological symptoms, were analysed in adult patients suspected of having a stroke by paramedics. The primary outcome was a binary classification model for predicting surgical intervention based on eXtreme Gradient Boosting (XGBoost). Of the 1143 patients enrolled, 765 (70%) were used as the training cohort, and 378 (30%) were used as the test cohort. The XGBoost model predicted stroke requiring surgical intervention with high accuracy in the test cohort, with an area under the receiver operating characteristic curve of 0.802 (sensitivity 0.748, specificity 0.853). We found that simple survey items, such as the level of consciousness, vital signs, sudden headache, and speech abnormalities were the most significant variables for accurate prediction. This algorithm can be useful for prehospital stroke management, which is crucial for better patient outcomes.
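The reported test-set figures (AUC 0.802, sensitivity 0.748, specificity 0.853) can be computed from any model's scores with the standard definitions; a minimal sketch using toy labels and scores (not the study's data):

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp / (tp + fn), tn / (tn + fp)

def auc(y_true, scores):
    """Probability that a random positive outscores a random negative
    (ties count half) -- equivalent to the area under the ROC curve."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy labels (1 = needs surgical intervention) and model scores.
y = [1, 1, 1, 0, 0, 0, 0]
s = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2, 0.1]
sens, spec = sensitivity_specificity(y, [1 if v >= 0.5 else 0 for v in s])
roc_auc = auc(y, s)
```

In practice the scores would come from the trained XGBoost model; the threshold (0.5 here) trades sensitivity against specificity along the ROC curve.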
Affiliation(s)
- Yoichi Yoshida: Department of Neurosurgery, Chiba Municipal Kaihin Hospital, Chiba, Japan; Department of Neurological Surgery, Graduate School of Medicine, Chiba University, Chiba, Japan
- Yosuke Hayashi: Department of Emergency and Critical Care Medicine, Chiba University Graduate School of Medicine, 1-8-1 Inohana, Chuo, Chiba, 260-8677, Japan
- Tadanaga Shimada: Department of Emergency and Critical Care Medicine, Chiba University Graduate School of Medicine, 1-8-1 Inohana, Chuo, Chiba, 260-8677, Japan
- Noriyuki Hattori: Department of Emergency and Critical Care Medicine, Chiba University Graduate School of Medicine, 1-8-1 Inohana, Chuo, Chiba, 260-8677, Japan
- Keisuke Tomita: Department of Emergency and Critical Care Medicine, Chiba University Graduate School of Medicine, 1-8-1 Inohana, Chuo, Chiba, 260-8677, Japan
- Rie E Miura: Department of Emergency and Critical Care Medicine, Chiba University Graduate School of Medicine, 1-8-1 Inohana, Chuo, Chiba, 260-8677, Japan; SMART119 Inc., 7th Floor, Chiba Chuo Twin Building No. 2, 2-5-1 Chuo, Chiba, Japan
- Yasuo Yamao: Department of Emergency and Critical Care Medicine, Chiba University Graduate School of Medicine, 1-8-1 Inohana, Chuo, Chiba, 260-8677, Japan; SMART119 Inc., 7th Floor, Chiba Chuo Twin Building No. 2, 2-5-1 Chuo, Chiba, Japan
- Shino Tateishi: Department of Emergency and Critical Care Medicine, Chiba University Graduate School of Medicine, 1-8-1 Inohana, Chuo, Chiba, 260-8677, Japan; SMART119 Inc., 7th Floor, Chiba Chuo Twin Building No. 2, 2-5-1 Chuo, Chiba, Japan
- Yasuo Iwadate: Department of Neurological Surgery, Graduate School of Medicine, Chiba University, Chiba, Japan
- Taka-Aki Nakada: Department of Emergency and Critical Care Medicine, Chiba University Graduate School of Medicine, 1-8-1 Inohana, Chuo, Chiba, 260-8677, Japan; SMART119 Inc., 7th Floor, Chiba Chuo Twin Building No. 2, 2-5-1 Chuo, Chiba, Japan
|
13
|
Liu P, Zheng G. CVCL: Context-aware Voxel-wise Contrastive Learning for label-efficient multi-organ segmentation. Comput Biol Med 2023; 160:106995. [PMID: 37187134 DOI: 10.1016/j.compbiomed.2023.106995] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/26/2022] [Revised: 04/02/2023] [Accepted: 05/01/2023] [Indexed: 05/17/2023]
Abstract
Despite the significant performance improvement on multi-organ segmentation achieved with supervised deep learning-based methods, their label-hungry nature hinders applications in practical disease diagnosis and treatment planning. Because obtaining expert-level, accurate, densely annotated multi-organ datasets is challenging, label-efficient segmentation, such as partially supervised segmentation trained on partially labeled datasets or semi-supervised medical image segmentation, has attracted increasing attention recently. However, most of these methods neglect or underestimate the challenging unlabeled regions during model training. To this end, we propose a novel Context-aware Voxel-wise Contrastive Learning method, referred to as CVCL, to take full advantage of both labeled and unlabeled information in label-scarce datasets and improve multi-organ segmentation performance. Experimental results demonstrate that our proposed method achieves performance superior to other state-of-the-art methods.
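The abstract does not spell out the paper's exact contrastive objective; a generic InfoNCE-style loss over voxel embeddings, shown here as an illustrative sketch with toy two-dimensional embeddings, captures the basic pull-together/push-apart mechanics that voxel-wise contrastive learning relies on:

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(anchor, positive, negatives, tau=0.1):
    """Contrastive loss: pull the positive embedding toward the anchor and
    push the negatives away, via a temperature-scaled softmax over similarities."""
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    logits = [s / tau for s in sims]
    m = max(logits)  # stabilize the softmax numerically
    denom = sum(math.exp(l - m) for l in logits)
    return -(logits[0] - m - math.log(denom))

# Toy voxel embeddings: the positive is close to the anchor, negatives are not.
loss = info_nce([1.0, 0.0], [0.95, 0.05], [[0.0, 1.0], [-1.0, 0.0]])
```

The loss shrinks as the positive pair aligns and grows when a negative is more similar to the anchor than the positive is, which is exactly the gradient signal a contrastive trainer exploits.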
Affiliation(s)
- Peng Liu: Institute of Medical Robotics, School of Biomedical Engineering, Shanghai Jiao Tong University, No. 800, Dongchuan Road, Shanghai, 200240, China
- Guoyan Zheng: Institute of Medical Robotics, School of Biomedical Engineering, Shanghai Jiao Tong University, No. 800, Dongchuan Road, Shanghai, 200240, China
|
14
|
Kasbekar RS, Ji S, Clancy EA, Goel A. Optimizing the input feature sets and machine learning algorithms for reliable and accurate estimation of continuous, cuffless blood pressure. Sci Rep 2023; 13:7750. [PMID: 37173370 PMCID: PMC10181996 DOI: 10.1038/s41598-023-34677-9] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2023] [Accepted: 05/05/2023] [Indexed: 05/15/2023] Open
Abstract
The advent of mobile devices, wearables, and digital healthcare has unleashed a demand for accurate, reliable, and non-interventional ways to measure continuous blood pressure (BP). Many consumer products claim to measure BP with a cuffless device, but their lack of accuracy and reliability limits clinical adoption. Here, we demonstrate how multimodal feature datasets, comprising (i) pulse arrival time (PAT), (ii) pulse wave morphology (PWM), and (iii) demographic data, can be combined with optimized machine learning (ML) algorithms to estimate systolic BP (SBP), diastolic BP (DBP), and mean arterial pressure (MAP) within a 5 mmHg bias of the gold-standard intra-arterial BP, well within the acceptable limits of the IEC/ANSI 80601-2-30 (2018) standard. Furthermore, DBP values calculated using 126 datasets collected from 31 hemodynamically compromised patients had a standard deviation within 8 mmHg, while SBP and MAP values exceeded these limits. Using ANOVA and Levene's test for error means and standard deviations, we found significant differences among the various ML algorithms but no significant differences among the multimodal feature datasets. Optimized ML algorithms and key multimodal features obtained from larger real-world data (RWD) sets could enable more reliable and accurate estimation of continuous BP in cuffless devices, accelerating wider clinical adoption.
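As a rough illustration of the PAT feature only (not the study's actual ML pipeline, which combines PAT, PWM, and demographics), a simple least-squares fit of SBP against 1/PAT on hypothetical calibration pairs shows the basic cuffless-estimation idea that pressure falls as pulse arrival time lengthens:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical calibration pairs: pulse arrival time (s) vs. reference SBP (mmHg).
pat = [0.20, 0.22, 0.25, 0.28, 0.30]
sbp = [135.0, 128.0, 120.0, 112.0, 108.0]

# Physiology suggests SBP falls as PAT rises, so regress SBP on 1/PAT.
a, b = fit_line([1 / p for p in pat], sbp)
estimate = a * (1 / 0.24) + b  # predicted SBP for a new PAT of 0.24 s
```

A production model would be recalibrated per subject and validated against the IEC/ANSI limits the abstract cites; this sketch only shows why PAT is a usable predictor at all.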
Affiliation(s)
- Rajesh S Kasbekar: Department of Biomedical Engineering, Worcester Polytechnic Institute (WPI), Worcester, MA, USA
- Songbai Ji: Department of Biomedical Engineering, Worcester Polytechnic Institute (WPI), Worcester, MA, USA
- Edward A Clancy: Department of Biomedical Engineering, Worcester Polytechnic Institute (WPI), Worcester, MA, USA; Department of Electrical and Computer Engineering, Worcester Polytechnic Institute (WPI), Worcester, MA, USA
- Anita Goel: Nanobiosym Research Institute, Nanobiosym, Inc. and Department of Physics, Harvard University, Cambridge, MA, USA
|
15
|
Tsuji T, Hirata Y, Kusunose K, Sata M, Kumagai S, Shiraishi K, Kotoku J. Classification of chest X-ray images by incorporation of medical domain knowledge into operation branch networks. BMC Med Imaging 2023; 23:62. [PMID: 37161392 PMCID: PMC10169130 DOI: 10.1186/s12880-023-01019-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2022] [Accepted: 05/02/2023] [Indexed: 05/11/2023] Open
Abstract
BACKGROUND This study was conducted to alleviate a common difficulty in chest X-ray image diagnosis: The attention region in a convolutional neural network (CNN) does not often match the doctor's point of focus. The method presented herein, which guides the area of attention in CNN to a medically plausible region, can thereby improve diagnostic capabilities. METHODS The model is based on an attention branch network, which has excellent interpretability of the classification model. This model has an additional new operation branch that guides the attention region to the lung field and heart in chest X-ray images. We also used three chest X-ray image datasets (Teikyo, Tokushima, and ChestX-ray14) to evaluate the CNN attention area of interest in these fields. Additionally, after devising a quantitative method of evaluating improvement of a CNN's region of interest, we applied it to evaluation of the proposed model. RESULTS Operation branch networks maintain or improve the area under the curve to a greater degree than conventional CNNs do. Furthermore, the network better emphasizes reasonable anatomical parts in chest X-ray images. CONCLUSIONS The proposed network better emphasizes the reasonable anatomical parts in chest X-ray images. This method can enhance capabilities for image interpretation based on judgment.
Affiliation(s)
- Takumasa Tsuji: Graduate School of Medical Care and Technology, Teikyo University, 2-11-1 Kaga, Itabashi-Ku, Tokyo, 173-8605, Japan
- Yukina Hirata: Ultrasound Examination Center, Tokushima University Hospital, 2-50-1, Kuramoto, Tokushima, Japan
- Kenya Kusunose: Department of Cardiovascular Medicine, Tokushima University Hospital, 2-50-1, Kuramoto, Tokushima, Japan
- Masataka Sata: Department of Cardiovascular Medicine, Tokushima University Hospital, 2-50-1, Kuramoto, Tokushima, Japan
- Shinobu Kumagai: Central Radiology Division, Teikyo University Hospital, 2-11-1 Kaga, Itabashi-Ku, Tokyo, 173-8606, Japan
- Kenshiro Shiraishi: Department of Radiology, Teikyo University School of Medicine, 2-11-1 Kaga, Itabashi-Ku, Tokyo, 173-8605, Japan
- Jun'ichi Kotoku: Graduate School of Medical Care and Technology, Teikyo University, 2-11-1 Kaga, Itabashi-Ku, Tokyo, 173-8605, Japan; Central Radiology Division, Teikyo University Hospital, 2-11-1 Kaga, Itabashi-Ku, Tokyo, 173-8606, Japan
|
16
|
Alablani IAL, Alenazi MJF. COVID-ConvNet: A Convolutional Neural Network Classifier for Diagnosing COVID-19 Infection. Diagnostics (Basel) 2023; 13:diagnostics13101675. [PMID: 37238159 DOI: 10.3390/diagnostics13101675] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/27/2023] [Revised: 04/25/2023] [Accepted: 04/28/2023] [Indexed: 05/28/2023] Open
Abstract
The novel coronavirus (COVID-19) pandemic still has a significant impact on the health and well-being of the worldwide population. Effective patient screening, including radiological examination employing chest radiography as one of the main screening modalities, is an important step in the battle against the disease. Indeed, the earliest studies on COVID-19 found that patients infected with COVID-19 present with characteristic anomalies in chest radiography. In this paper, we introduce COVID-ConvNet, a deep convolutional neural network (DCNN) design suitable for detecting COVID-19 symptoms from chest X-ray (CXR) scans. The proposed deep learning (DL) model was trained and evaluated using 21,165 CXR images from the COVID-19 Database, a publicly available dataset. The experimental results demonstrate that our COVID-ConvNet model achieves a high prediction accuracy of 97.43% and outperforms recent related works by up to 5.9% in terms of prediction accuracy.
Affiliation(s)
- Ibtihal A L Alablani: Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, Riyadh P.O. Box 11451, Saudi Arabia
- Mohammed J F Alenazi: Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, Riyadh P.O. Box 11451, Saudi Arabia
|
17
|
Sultana A, Nahiduzzaman M, Bakchy SC, Shahriar SM, Peyal HI, Chowdhury MEH, Khandakar A, Arselene Ayari M, Ahsan M, Haider J. A Real Time Method for Distinguishing COVID-19 Utilizing 2D-CNN and Transfer Learning. SENSORS (BASEL, SWITZERLAND) 2023; 23:s23094458. [PMID: 37177662 PMCID: PMC10181786 DOI: 10.3390/s23094458] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/20/2023] [Revised: 04/19/2023] [Accepted: 04/21/2023] [Indexed: 05/15/2023]
Abstract
Rapid identification of COVID-19 can assist in making decisions for effective treatment and epidemic prevention. The PCR-based test is expert-dependent, time-consuming, and of limited sensitivity. By inspecting chest X-ray (CXR) images, COVID-19, pneumonia, and other lung infections can be detected in real time. The current state-of-the-art literature suggests that deep learning (DL) is highly advantageous in automatic disease classification utilizing CXR images. The goal of this study is to develop DL models for identifying COVID-19 and other lung disorders more efficiently. For this study, a dataset of 18,564 CXR images with seven disease categories was created from multiple publicly available sources. Four DL architectures, including the proposed CNN model and the pretrained VGG-16, VGG-19, and Inception-v3 models, were applied to identify healthy lungs and six lung diseases (fibrosis, lung opacity, viral pneumonia, bacterial pneumonia, COVID-19, and tuberculosis). Accuracy, precision, recall, F1 score, area under the curve (AUC), and testing time were used to evaluate the performance of these four models. The results demonstrated that the proposed CNN model outperformed all other DL models employed for seven-class classification, with an accuracy of 93.15% and average values for precision, recall, F1 score, and AUC of 0.9343, 0.9443, 0.9386, and 0.9939, respectively. The CNN model performed equally well when other multiclass classifications including normal and COVID-19 as the common classes were considered, yielding accuracy values of 98%, 97.49%, 97.81%, 96%, and 96.75% for two, three, four, five, and six classes, respectively. The proposed model can also identify COVID-19 with shorter training and testing times than other transfer learning models.
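The averaged precision, recall, and F1 values reported above follow from a multiclass confusion matrix; a minimal macro-averaged sketch (toy three-class labels, not the study's data):

```python
def macro_metrics(y_true, y_pred, classes):
    """Per-class precision/recall from one-vs-rest counts, macro-averaged;
    macro F1 is the mean of the per-class F1 scores."""
    precisions, recalls = [], []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precisions.append(tp / (tp + fp) if tp + fp else 0.0)
        recalls.append(tp / (tp + fn) if tp + fn else 0.0)
    f1s = [2 * p * r / (p + r) if p + r else 0.0
           for p, r in zip(precisions, recalls)]
    n = len(classes)
    return sum(precisions) / n, sum(recalls) / n, sum(f1s) / n

# Toy labels for three of the seven categories.
y_true = ["covid", "covid", "normal", "normal", "tb", "tb"]
y_pred = ["covid", "normal", "normal", "normal", "tb", "tb"]
prec, rec, f1 = macro_metrics(y_true, y_pred, ["covid", "normal", "tb"])
```

With seven classes the same one-vs-rest counting applies; only the class list grows.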
Affiliation(s)
- Abida Sultana: Department of Electrical & Computer Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Md Nahiduzzaman: Department of Electrical & Computer Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh; Department of Electrical Engineering, Qatar University, Doha 2713, Qatar
- Sagor Chandro Bakchy: Department of Electrical & Computer Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Saleh Mohammed Shahriar: Department of Electrical & Computer Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Hasibul Islam Peyal: Department of Electrical & Computer Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Amith Khandakar: Department of Electrical Engineering, Qatar University, Doha 2713, Qatar
- Mominul Ahsan: Department of Computer Science, University of York, Deramore Lane, Heslington, York YO10 5GH, UK
- Julfikar Haider: Department of Engineering, Manchester Metropolitan University, Chester Street, Manchester M1 5GD, UK
|
18
|
Cluster-aware multiplex InfoMax for unsupervised graph representation learning. Neurocomputing 2023. [DOI: 10.1016/j.neucom.2023.02.036] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/25/2023]
|
19
|
Xie T, Wang Z, Li H, Wu P, Huang H, Zhang H, Alsaadi FE, Zeng N. Progressive attention integration-based multi-scale efficient network for medical imaging analysis with application to COVID-19 diagnosis. Comput Biol Med 2023; 159:106947. [PMID: 37099976 PMCID: PMC10116157 DOI: 10.1016/j.compbiomed.2023.106947] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2023] [Revised: 03/30/2023] [Accepted: 04/15/2023] [Indexed: 04/28/2023]
Abstract
In this paper, a novel deep learning-based medical imaging analysis framework is developed, which aims to deal with the insufficient feature learning caused by the imperfect properties of imaging data. Named the multi-scale efficient network (MEN), the proposed method integrates different attention mechanisms to realize sufficient extraction of both detailed features and semantic information in a progressive learning manner. In particular, a fused-attention block is designed to extract fine-grained details from the input, where the squeeze-excitation (SE) attention mechanism is applied to make the model focus on potential lesion areas. A multi-scale low information loss (MSLIL)-attention block is proposed to compensate for potential global information loss and enhance the semantic correlations among features, where the efficient channel attention (ECA) mechanism is adopted. The proposed MEN is comprehensively evaluated on two COVID-19 diagnostic tasks, and the results show that, compared with some other advanced deep learning models, the proposed method is competitive in accurate COVID-19 recognition, yielding the best accuracies of 98.68% and 98.85% on the two tasks, respectively, and exhibits satisfactory generalization ability as well.
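The squeeze-excitation (SE) attention mentioned above has a simple core: squeeze each channel to a scalar by global average pooling, gate the channel vector through a small bottleneck, and rescale each channel by its sigmoid gate. A toy pure-Python sketch (the weights and shapes are illustrative, not the MEN model's):

```python
import math

def squeeze_excitation(feature_maps, w1, w2):
    """Channel attention: squeeze (global average pool), excite (bottleneck
    dense layer with ReLU, then expand with sigmoid), and rescale channels."""
    # Squeeze: one scalar per channel.
    z = [sum(sum(row) for row in fm) / (len(fm) * len(fm[0])) for fm in feature_maps]
    # Excitation: bottleneck layer with ReLU...
    h = [max(0.0, sum(wij * zj for wij, zj in zip(wi, z))) for wi in w1]
    # ...then per-channel sigmoid gates.
    gates = [1 / (1 + math.exp(-sum(wij * hj for wij, hj in zip(wi, h))))
             for wi in w2]
    # Reweight every channel by its gate.
    return [[[v * g for v in row] for row in fm]
            for fm, g in zip(feature_maps, gates)]

# Two 2x2 toy channels; toy weights give channel 0 a high gate, channel 1 a low one.
fmaps = [[[1.0, 1.0], [1.0, 1.0]], [[3.0, 3.0], [3.0, 3.0]]]
out = squeeze_excitation(fmaps, w1=[[0.5, 0.5]], w2=[[1.0], [-1.0]])
```

In a trained network the bottleneck weights are learned, so the gates come to emphasize informative channels, which is the "focus on potential lesion areas" effect the abstract describes.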
Affiliation(s)
- Tingyi Xie: School of Opto-electronic and Communication Engineering, Xiamen University of Technology, Xiamen 361024, China
- Zidong Wang: Department of Computer Science, Brunel University London, Uxbridge UB8 3PH, UK
- Han Li: Department of Instrumental and Electrical Engineering, Xiamen University, Fujian 361005, China
- Peishu Wu: Department of Instrumental and Electrical Engineering, Xiamen University, Fujian 361005, China
- Huixiang Huang: School of Opto-electronic and Communication Engineering, Xiamen University of Technology, Xiamen 361024, China
- Hongyi Zhang: School of Opto-electronic and Communication Engineering, Xiamen University of Technology, Xiamen 361024, China
- Fuad E Alsaadi: Communication Systems and Networks Research Group, Department of Electrical and Computer Engineering, Faculty of Engineering, King Abdulaziz University, Jeddah, Saudi Arabia
- Nianyin Zeng: Department of Instrumental and Electrical Engineering, Xiamen University, Fujian 361005, China
|
20
|
Türk F, Kökver Y. Detection of Lung Opacity and Treatment Planning with Three-Channel Fusion CNN Model. ARABIAN JOURNAL FOR SCIENCE AND ENGINEERING 2023:1-13. [PMID: 37361471 PMCID: PMC10103673 DOI: 10.1007/s13369-023-07843-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/01/2022] [Accepted: 03/20/2023] [Indexed: 06/28/2023]
Abstract
Lung opacities are extremely important for physicians to monitor and can have irreversible consequences for patients if misdiagnosed or confused with other findings. Therefore, long-term monitoring of the regions of lung opacity is recommended by physicians. Tracking the regional dimensions of images and classifying differences from other lung cases can provide significant ease to physicians. Deep learning methods can be easily used for the detection, classification, and segmentation of lung opacity. In this study, a three-channel fusion CNN model is applied to effectively detect lung opacity on a balanced dataset compiled from public datasets. The MobileNetV2 architecture is used in the first channel, the InceptionV3 model in the second channel, and the VGG19 architecture in the third channel. The ResNet architecture is used for feature transfer from the previous layer to the current layer. In addition to being easy to implement, the proposed approach can also provide significant cost and time advantages to physicians. Our accuracy values for two, three, four, and five classes on the newly compiled dataset for lung opacity classifications are found to be 92.52%, 92.44%, 87.12%, and 91.71%, respectively.
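The paper fuses its three channels at the feature level with ResNet-style transfer between layers; as a simpler decision-level stand-in, the idea of combining several backbones can be sketched by averaging their class probabilities. All numbers and the weighting scheme below are hypothetical:

```python
def fuse_predictions(channel_probs, weights=None):
    """Late fusion: combine per-channel class probabilities by (weighted)
    averaging, then pick the arg-max class."""
    n_models = len(channel_probs)
    n_classes = len(channel_probs[0])
    weights = weights or [1.0 / n_models] * n_models
    fused = [sum(w * probs[c] for w, probs in zip(weights, channel_probs))
             for c in range(n_classes)]
    return fused, max(range(n_classes), key=fused.__getitem__)

# Three hypothetical channel outputs (e.g. MobileNetV2, InceptionV3, VGG19 heads)
# scoring the classes [normal, lung opacity, COVID-19]:
p1 = [0.2, 0.7, 0.1]
p2 = [0.1, 0.6, 0.3]
p3 = [0.3, 0.5, 0.2]
fused, label = fuse_predictions([p1, p2, p3])
```

Averaging damps the idiosyncratic errors of any single backbone, which is the general motivation for multi-channel designs even when, as here, the actual paper fuses earlier in the network.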
Affiliation(s)
- Fuat Türk: Department of Computer Engineering, Çankırı Karatekin University, 18100 Çankırı, Turkey
- Yunus Kökver: Department of Computer Technologies, Elmadağ Vocational School, Ankara University, 06780 Ankara, Turkey
|
21
|
Wind power prediction based on periodic characteristic decomposition and multi-layer attention network. Neurocomputing 2023. [DOI: 10.1016/j.neucom.2023.02.061] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/06/2023]
|
22
|
Liu M, Wang Z, Li H, Wu P, Alsaadi FE, Zeng N. AA-WGAN: Attention augmented Wasserstein generative adversarial network with application to fundus retinal vessel segmentation. Comput Biol Med 2023; 158:106874. [PMID: 37019013 DOI: 10.1016/j.compbiomed.2023.106874] [Citation(s) in RCA: 10] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2023] [Revised: 03/15/2023] [Accepted: 03/30/2023] [Indexed: 04/03/2023]
Abstract
In this paper, a novel attention augmented Wasserstein generative adversarial network (AA-WGAN) is proposed for fundus retinal vessel segmentation, where a U-shaped network with attention augmented convolution and squeeze-excitation module is designed to serve as the generator. In particular, the complex vascular structures make some tiny vessels hard to segment, while the proposed AA-WGAN can effectively handle such imperfect data property, which is competent in capturing the dependency among pixels in the whole image to highlight the regions of interests via the applied attention augmented convolution. By applying the squeeze-excitation module, the generator is able to pay attention to the important channels of the feature maps, and the useless information can be suppressed as well. In addition, gradient penalty method is adopted in the WGAN backbone to alleviate the phenomenon of generating large amounts of repeated images due to excessive concentration on accuracy. The proposed model is comprehensively evaluated on three datasets DRIVE, STARE, and CHASE_DB1, and the results show that the proposed AA-WGAN is a competitive vessel segmentation model as compared with several other advanced models, which obtains the accuracy of 96.51%, 97.19% and 96.94% on each dataset, respectively. The effectiveness of the applied important components is validated by ablation study, which also endows the proposed AA-WGAN with considerable generalization ability.
|
23
|
Han Z, Huang H, Lu D, Fan Q, Ma C, Chen X, Gu Q, Chen Q. One-stage and lightweight CNN detection approach with attention: Application to WBC detection of microscopic images. Comput Biol Med 2023; 154:106606. [PMID: 36706565 DOI: 10.1016/j.compbiomed.2023.106606] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2022] [Revised: 01/01/2023] [Accepted: 01/22/2023] [Indexed: 01/24/2023]
Abstract
White blood cell (WBC) detection in microscopic images is indispensable in medical diagnostics; however, this work, based on manual checking, is time-consuming, labor-intensive, and easily results in errors. Using object detectors for WBCs with deep convolutional neural networks can be regarded as a feasible solution. In this paper, to improve the examination precision and efficiency, a one-stage and lightweight CNN detector with an attention mechanism for detecting microscopic WBC images, and a white blood cell detection vision system are proposed. The method integrates different optimizing strategies to strengthen the feature extraction capability through the combination of an improved residual convolution module, hybrid spatial pyramid pooling module, improved coordinate attention mechanism, efficient intersection over union (EIOU) loss and Mish activation function. Extensive ablation and contrast experiments on the latest public Raabin-WBC dataset verify the effectiveness and robustness of the proposed detector for achieving a better overall detection performance. It is also more efficient than other existing studies for blood cell detection on two additional classic public BCCD and LISC datasets. The novel detection approach is significant and flexible for medical technicians to use for blood cell microscopic examination in clinical practice.
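The EIOU loss mentioned above builds on the plain intersection-over-union of predicted and ground-truth boxes (EIOU adds penalty terms for center distance and width/height differences, omitted here); a minimal IoU sketch for axis-aligned boxes with toy coordinates:

```python
def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) axis-aligned boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Hypothetical predicted vs. ground-truth WBC boxes, in pixels.
pred, truth = (10, 10, 50, 50), (30, 30, 70, 70)
overlap = iou(pred, truth)
```

A detector's regression loss is typically 1 minus a (generalized) IoU term, so higher overlap means lower loss; EIOU refines this so that non-overlapping or badly shaped boxes still receive a useful gradient.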
Affiliation(s)
- Zhenggong Han: Key Laboratory of Advanced Manufacturing Technology, Ministry of Education, Guizhou University, Guiyang, Guizhou, 550025, China
- Haisong Huang: Key Laboratory of Advanced Manufacturing Technology, Ministry of Education, Guizhou University, Guiyang, Guizhou, 550025, China; Information Engineering Institute, Chongqing Vocational and Technical University of Mechatronics, Chongqing, 402760, China
- Dan Lu: Guizhou University of Traditional Chinese Medicine, Guiyang, Guizhou, 550025, China
- Qingsong Fan: Key Laboratory of Advanced Manufacturing Technology, Ministry of Education, Guizhou University, Guiyang, Guizhou, 550025, China
- Chi Ma: Key Laboratory of Advanced Manufacturing Technology, Ministry of Education, Guizhou University, Guiyang, Guizhou, 550025, China
- Xingran Chen: Key Laboratory of Advanced Manufacturing Technology, Ministry of Education, Guizhou University, Guiyang, Guizhou, 550025, China
- Qiang Gu: Key Laboratory of Advanced Manufacturing Technology, Ministry of Education, Guizhou University, Guiyang, Guizhou, 550025, China
- Qipeng Chen: Key Laboratory of Advanced Manufacturing Technology, Ministry of Education, Guizhou University, Guiyang, Guizhou, 550025, China
|
24
|
Zhang S, Chen D, Tang Y, Li X. Learning graph-based relationship of dual-modal features towards subject adaptive ASD assessment. Neurocomputing 2023. [DOI: 10.1016/j.neucom.2022.10.018] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
|
25
|
AGGN: Attention-based glioma grading network with multi-scale feature extraction and multi-modal information fusion. Comput Biol Med 2023; 152:106457. [PMID: 36571937 DOI: 10.1016/j.compbiomed.2022.106457] [Citation(s) in RCA: 18] [Impact Index Per Article: 18.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2022] [Revised: 12/06/2022] [Accepted: 12/19/2022] [Indexed: 12/24/2022]
Abstract
In this paper, a magnetic resonance imaging (MRI) oriented novel attention-based glioma grading network (AGGN) is proposed. By applying the dual-domain attention mechanism, both channel and spatial information can be considered to assign weights, which benefits highlighting the key modalities and locations in the feature maps. Multi-branch convolution and pooling operations are applied in a multi-scale feature extraction module to separately obtain shallow and deep features on each modality, and a multi-modal information fusion module is adopted to sufficiently merge low-level detailed and high-level semantic features, which promotes the synergistic interaction among different modality information. The proposed AGGN is comprehensively evaluated through extensive experiments, and the results have demonstrated the effectiveness and superiority of the proposed AGGN in comparison to other advanced models, which also presents high generalization ability and strong robustness. In addition, even without the manually labeled tumor masks, AGGN can present considerable performance as other state-of-the-art algorithms, which alleviates the excessive reliance on supervised information in the end-to-end learning paradigm.
|
26
|
Fu S, Tian Y, Tang J, Liu X. Cost-sensitive learning with modified Stein loss function. Neurocomputing 2023. [DOI: 10.1016/j.neucom.2023.01.052] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/13/2023]
|
27
|
Liu M, Wang Y, Palade V, Ji Z. Multi-View Subspace Clustering Network with Block Diagonal and Diverse Representation. Inf Sci (N Y) 2023. [DOI: 10.1016/j.ins.2022.12.104] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/05/2023]
|
28
|
Accurate iris segmentation and recognition using an end-to-end unified framework based on MADNet and DSANet. Neurocomputing 2023. [DOI: 10.1016/j.neucom.2022.10.064] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
|
29
|
AI meets UAVs: A survey on AI empowered UAV perception systems for precision agriculture. Neurocomputing 2023. [DOI: 10.1016/j.neucom.2022.11.020] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
|
30
|
Irkham I, Ibrahim AU, Nwekwo CW, Al-Turjman F, Hartati YW. Current Technologies for Detection of COVID-19: Biosensors, Artificial Intelligence and Internet of Medical Things (IoMT): Review. SENSORS (BASEL, SWITZERLAND) 2022; 23:426. [PMID: 36617023 PMCID: PMC9824404 DOI: 10.3390/s23010426] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/26/2022] [Revised: 12/14/2022] [Accepted: 12/21/2022] [Indexed: 06/17/2023]
Abstract
COVID-19 is no longer a global pandemic, thanks in part to the development and integration of different technologies for diagnosing and treating the disease. Technological advancement in the fields of molecular biology, electronics, computer science, artificial intelligence, the Internet of Things, nanotechnology, etc. has led to the development of molecular approaches and computer-aided diagnosis for the detection of COVID-19. This study provides a holistic view of COVID-19 detection based on (1) molecular diagnosis, which includes RT-PCR, antigen-antibody, and CRISPR-based biosensors, and (2) computer-aided detection based on AI-driven models, which include deep learning and transfer learning approaches. The review also provides a comparison between these two emerging technologies and open research issues for the development of smart IoMT-enabled platforms for the detection of COVID-19.
Collapse
Affiliation(s)
- Irkham Irkham
- Department of Chemistry, Faculty of Mathematics and Natural Sciences, Padjadjaran University, Bandung 40173, Indonesia
| | | | - Chidi Wilson Nwekwo
- Department of Biomedical Engineering, Near East University, Mersin 99138, Turkey
| | - Fadi Al-Turjman
- Research Center for AI and IoT, Faculty of Engineering, University of Kyrenia, Mersin 99138, Turkey
- Artificial Intelligence Engineering Department, AI and Robotics Institute, Near East University, Mersin 99138, Turkey
| | - Yeni Wahyuni Hartati
- Department of Chemistry, Faculty of Mathematics and Natural Sciences, Padjadjaran University, Bandung 40173, Indonesia
| |
|
31
|
Li H, Wu P, Wang Z, Mao J, Alsaadi FE, Zeng N. A generalized framework of feature learning enhanced convolutional neural network for pathology-image-oriented cancer diagnosis. Comput Biol Med 2022; 151:106265. [PMID: 36401968 DOI: 10.1016/j.compbiomed.2022.106265] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2022] [Revised: 10/24/2022] [Accepted: 10/30/2022] [Indexed: 11/11/2022]
Abstract
In this paper, a feature learning enhanced convolutional neural network (FLE-CNN) is proposed for cancer detection from histopathology images. To build a highly generalized computer-aided diagnosis (CAD) system, an information refinement unit employing depth-wise and point-wise convolutions is designed, in which a dual-domain attention mechanism focuses on the most informative regions. A residual fusion unit further integrates context information to extract highly discriminative features with strong representation ability. Experimental results demonstrate the merits of the proposed FLE-CNN: in a five-class cancer detection task it achieved average sensitivity, specificity, precision, accuracy, and F1 score of 0.9992, 0.9998, 0.9992, 0.9997, and 0.9992, improving on other advanced deep learning models by 1.23%, 0.31%, 1.24%, 0.5%, and 1.26%, respectively. Moreover, FLE-CNN provides satisfactory results in three other important diagnostic tasks, further validating it as a competitive CAD model with high generalization ability.
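The averaged indicators quoted above (sensitivity, specificity, precision, accuracy, F1) can be reproduced from a multi-class confusion matrix by macro-averaging one-vs-rest statistics. A minimal sketch, using an invented 3-class matrix for illustration rather than the paper's five-class results:

```python
def ovr_metrics(cm):
    """Macro-averaged one-vs-rest metrics from a square confusion matrix.

    cm[i][j] = number of samples with true class i predicted as class j.
    """
    n = len(cm)
    total = sum(sum(row) for row in cm)
    sens = spec = prec = acc = f1 = 0.0
    for k in range(n):
        tp = cm[k][k]
        fn = sum(cm[k]) - tp                       # true k, predicted other
        fp = sum(cm[i][k] for i in range(n)) - tp  # other, predicted k
        tn = total - tp - fn - fp
        se = tp / (tp + fn)
        sp = tn / (tn + fp)
        pr = tp / (tp + fp)
        sens += se; spec += sp; prec += pr
        acc += (tp + tn) / total
        f1 += 2 * pr * se / (pr + se)
    return {m: v / n for m, v in
            [("sensitivity", sens), ("specificity", spec),
             ("precision", prec), ("accuracy", acc), ("f1", f1)]}

# Synthetic 3-class confusion matrix (rows = true class, cols = predicted).
cm = [[48, 1, 1],
      [2, 46, 2],
      [0, 1, 49]]
print(ovr_metrics(cm))
```

One-vs-rest accuracy is averaged per class here, which is why it can exceed the plain fraction of correct predictions in multi-class settings.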
Affiliation(s)
- Han Li
- Department of Instrumental and Electrical Engineering, Xiamen University, Fujian 361005, China
| | - Peishu Wu
- Department of Instrumental and Electrical Engineering, Xiamen University, Fujian 361005, China
| | - Zidong Wang
- Department of Computer Science, Brunel University London, Uxbridge UB8 3PH, UK.
| | - Jingfeng Mao
- School of Electrical Engineering, Nantong University, Nantong 226019, China
| | - Fuad E Alsaadi
- Communication Systems and Networks Research Group, Department of Electrical and Computer Engineering, Faculty of Engineering, King Abdulaziz University, Jeddah 21589, Saudi Arabia
| | - Nianyin Zeng
- Department of Instrumental and Electrical Engineering, Xiamen University, Fujian 361005, China.
| |
|
32
|
Wang S, Wang B, Zhang Z, Heidari AA, Chen H. Class-Aware Sample Reweighting Optimal Transport for Multi-source Domain Adaptation. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.12.048] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
|
33
|
Gwo CY, Zhu DC, Zhang R. Brain white matter hyperintensity lesion characterization in 3D T2 fluid-attenuated inversion recovery magnetic resonance images: Shape, texture, and their correlations with potential growth. Front Neurosci 2022; 16:1028929. [PMID: 36507337 PMCID: PMC9731131 DOI: 10.3389/fnins.2022.1028929] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2022] [Accepted: 11/07/2022] [Indexed: 11/25/2022] Open
Abstract
Analyses of age-related white matter hyperintensity (WMH) lesions in T2 fluid-attenuated inversion recovery (FLAIR) magnetic resonance images (MRI) have focused mostly on the size and location of the lesions and rarely on their morphology. This work extends our prior analyses of the morphological characteristics and texture of WMH from 2D to 3D based on 3D T2 FLAIR images. A 3D Zernike transformation was used to characterize WMH shape, and a fuzzy logic method to characterize lesion texture. We then clustered 3D WMH lesions into groups based on their 3D shape and texture features. A potential growth index (PGI) for assessing dynamic changes in WMH lesions was developed based on the image texture features of the WMH lesion penumbra. WMH lesions of various sizes were segmented from brain images of 32 cognitively normal older adults and divided into two groups by size. Analyses of variance (ANOVAs) showed significant differences in PGI among WMH shape clusters (P = 1.57 × 10⁻³ for small lesions; P = 3.14 × 10⁻² for large lesions). Significant differences in PGI were also found among WMH texture clusters (P = 1.79 × 10⁻⁶). In conclusion, we present a novel approach to characterizing the morphology of 3D WMH lesions and explore the potential of PGI to assess their dynamic morphological changes.
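The cluster-wise significance tests reported above are one-way ANOVAs over PGI values. A minimal self-contained sketch of the F statistic such a test computes; the three clusters of PGI values below are invented for illustration, not the paper's data:

```python
def one_way_anova_f(groups):
    """One-way ANOVA F statistic: between-group vs. within-group variance.

    groups: list of lists of observations, one inner list per group.
    """
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    # Between-group sum of squares, weighted by group size.
    ssb = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares around each group mean.
    ssw = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    msb = ssb / (k - 1)   # k - 1 between-group degrees of freedom
    msw = ssw / (n - k)   # n - k within-group degrees of freedom
    return msb / msw

# Hypothetical PGI values for three shape clusters (illustrative only).
clusters = [[0.31, 0.28, 0.35, 0.30],
            [0.45, 0.49, 0.44, 0.47],
            [0.52, 0.55, 0.50, 0.57]]
print(one_way_anova_f(clusters))
```

The P values quoted in the abstract come from comparing this F statistic against an F distribution with (k − 1, n − k) degrees of freedom.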
Affiliation(s)
- Chih-Ying Gwo
- Department of Information Management, Chien Hsin University of Science and Technology, Taoyuan City, Taiwan
| | - David C. Zhu
- Department of Radiology, Cognitive Imaging Research Center, Michigan State University, East Lansing, MI, United States
- Department of Psychology, Cognitive Imaging Research Center, Michigan State University, East Lansing, MI, United States
| | - Rong Zhang
- Department of Neurology and Internal Medicine, University of Texas Southwestern Medical Center, Dallas, TX, United States
- Institute for Exercise and Environmental Medicine, Texas Health Presbyterian Hospital Dallas, Dallas, TX, United States
| |
|
34
|
Gao J, Gong M, Li X. Congested crowd instance localization with dilated convolutional swin transformer. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.09.113] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/31/2022]
|
35
|
Zang SS, Yu H, Song Y, Zeng R. Unsupervised Video Summarization Using Deep Non-Local Video Summarization Networks. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.11.028] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
|
36
|
Xia T, Chen X. Category-learning attention mechanism for short text filtering. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.08.076] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
37
|
Zhu F, Li Y, Shi Z, Shi W. TV-NARX and Coiflets WPT based time-frequency Granger causality with application to corticomuscular coupling in hand-grasping. Front Neurosci 2022; 16:1014495. [PMID: 36248661 PMCID: PMC9560889 DOI: 10.3389/fnins.2022.1014495] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2022] [Accepted: 09/12/2022] [Indexed: 11/21/2022] Open
Abstract
The study of the synchronous characteristics and functional connections between the cortex and muscles during hand-grasping movements is important in basic research, clinical diagnosis, and rehabilitation evaluation. Electroencephalogram (EEG) and electromyogram (EMG) signals from 15 healthy participants were used to analyze corticomuscular coupling during grasping of three different objects (card, ball, and cup), using a time-frequency Granger causality method based on a time-varying nonlinear autoregressive with exogenous input (TV-NARX) model and the Coiflets wavelet packet transform. The results show bidirectional coupling between cortex and muscles during grasping, mainly in the beta and gamma frequency bands: differences among grasping actions were statistically significant during movement execution in the beta band (p < 0.05) and during movement preparation in the gamma band (p < 0.1). The proposed method effectively characterizes EEG-EMG synchronization features and functional connections in different frequency bands during the movement preparation and execution phases in the time-frequency domain, and reveals how the sensorimotor system controls hand-grasping by regulating the intensity of synchronized neuronal oscillations.
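The Granger causality underlying the time-frequency analysis above asks whether past values of one signal improve prediction of another. A minimal time-domain sketch with one lag and synthetic signals, not the TV-NARX time-frequency method of the paper; `granger_f` and the coupling coefficients are invented for illustration:

```python
import random

def granger_f(x, y, eps=1e-12):
    """F-style statistic for "y Granger-causes x" with one lag.

    Compares the residual sum of squares of x_t ~ x_{t-1} (restricted)
    against x_t ~ x_{t-1} + y_{t-1} (unrestricted), both fit by
    ordinary least squares without intercept.
    """
    xt, x1, y1 = x[1:], x[:-1], y[:-1]
    n = len(xt)
    # Restricted model: x_t = a * x_{t-1}
    a = sum(p * q for p, q in zip(xt, x1)) / sum(p * p for p in x1)
    rss_r = sum((t - a * p) ** 2 for t, p in zip(xt, x1))
    # Unrestricted model via 2x2 normal equations.
    sxx = sum(p * p for p in x1)
    syy = sum(q * q for q in y1)
    sxy = sum(p * q for p, q in zip(x1, y1))
    sxt = sum(p * t for p, t in zip(x1, xt))
    syt = sum(q * t for q, t in zip(y1, xt))
    det = sxx * syy - sxy * sxy
    b = (sxt * syy - syt * sxy) / det
    c = (syt * sxx - sxt * sxy) / det
    rss_u = sum((t - b * p - c * q) ** 2
                for t, p, q in zip(xt, x1, y1))
    # One extra parameter; n - 2 residual degrees of freedom.
    return (rss_r - rss_u) / (rss_u / (n - 2) + eps)

random.seed(0)
# Synthetic coupled pair: x is driven by lagged y, so the statistic is large.
y = [random.gauss(0, 1) for _ in range(500)]
x = [0.0]
for t in range(1, 500):
    x.append(0.5 * x[t - 1] + 0.8 * y[t - 1] + 0.1 * random.gauss(0, 1))
print(granger_f(x, y))
```

The paper extends this basic idea with a time-varying nonlinear model and wavelet packet decomposition so that causality can be resolved per frequency band and movement phase.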
Affiliation(s)
- Feifei Zhu
- College of Electrical Engineering and Automation, Fuzhou University, Fuzhou, China
- Fujian Provincial Key Laboratory of Medical Instrument and Pharmaceutical Technology, Fuzhou University, Fuzhou, China
| | - Yurong Li
- College of Electrical Engineering and Automation, Fuzhou University, Fuzhou, China
- Fujian Provincial Key Laboratory of Medical Instrument and Pharmaceutical Technology, Fuzhou University, Fuzhou, China
- *Correspondence: Yurong Li
| | - Zhengyi Shi
- College of Electrical Engineering and Automation, Fuzhou University, Fuzhou, China
- Fujian Provincial Key Laboratory of Medical Instrument and Pharmaceutical Technology, Fuzhou University, Fuzhou, China
| | - Wuxiang Shi
- College of Electrical Engineering and Automation, Fuzhou University, Fuzhou, China
- Fujian Provincial Key Laboratory of Medical Instrument and Pharmaceutical Technology, Fuzhou University, Fuzhou, China
| |
|