51. Wang Y, Liu S, Wang H, Zhao Y, Zhang XD. Neuron devices: emerging prospects in neural interfaces and recognition. Microsystems & Nanoengineering 2022; 8:128. [PMID: 36507057] [PMCID: PMC9726942] [DOI: 10.1038/s41378-022-00453-4]
Abstract
Neuron interface devices can be used to explore the relationships between neuron firing and synaptic transmission, as well as to diagnose and treat neurological disorders such as epilepsy and Alzheimer's disease. It is crucial to develop neuron devices with high sensitivity, high biocompatibility, multifunctional integration, and high-speed data processing. During the past decades, researchers have made significant progress in neural electrodes, artificial sensory neuron devices, and neuromorphic optic neuron devices. The main part of the review is divided into two sections, providing an overview of recently developed neuron interface devices for recording electrophysiological signals, as well as applications in neuromodulation, simulating the human sensory system, and achieving memory and recognition. We mainly discuss the development, characteristics, functional mechanisms, and applications of neuron devices, and elucidate several key points for clinical translation. The present review highlights the advances of neuron devices in brain-computer interfaces and neuroscience research.
Affiliation(s)
- Yang Wang
- Tianjin Key Laboratory of Brain Science and Neural Engineering, Academy of Medical Engineering and Translational Medicine, Tianjin University, 300072 Tianjin, China
- Shuangjie Liu
- Tianjin Key Laboratory of Brain Science and Neural Engineering, Academy of Medical Engineering and Translational Medicine, Tianjin University, 300072 Tianjin, China
- Hao Wang
- Tianjin Key Laboratory of Brain Science and Neural Engineering, Academy of Medical Engineering and Translational Medicine, Tianjin University, 300072 Tianjin, China
- Yue Zhao
- Tianjin Key Laboratory of Brain Science and Neural Engineering, Academy of Medical Engineering and Translational Medicine, Tianjin University, 300072 Tianjin, China
- Xiao-Dong Zhang
- Tianjin Key Laboratory of Brain Science and Neural Engineering, Academy of Medical Engineering and Translational Medicine, Tianjin University, 300072 Tianjin, China
- Tianjin Key Laboratory of Low Dimensional Materials Physics and Preparing Technology, Institute of Advanced Materials Physics, School of Sciences, Tianjin University, 300350 Tianjin, China
52. Han T, Wu J, Luo W, Wang H, Jin Z, Qu L. Review of Generative Adversarial Networks in mono- and cross-modal biomedical image registration. Front Neuroinform 2022; 16:933230. [PMID: 36483313] [PMCID: PMC9724825] [DOI: 10.3389/fninf.2022.933230]
Abstract
Biomedical image registration refers to aligning corresponding anatomical structures among different images, which is critical to many tasks, such as brain atlas building, tumor growth monitoring, and image fusion-based medical diagnosis. However, high-throughput biomedical image registration remains challenging due to inherent variations in intensity, texture, and anatomy resulting from different imaging modalities, different sample preparation methods, or different developmental stages of the imaged subject. Recently, Generative Adversarial Networks (GANs) have attracted increasing interest in both mono- and cross-modal biomedical image registration due to their ability to reduce modality-induced appearance variance through adversarial training. This paper provides a comprehensive survey of GAN-based mono- and cross-modal biomedical image registration methods. According to their implementation strategies, we organize these methods into four categories: modality translation, symmetric learning, adversarial strategies, and joint training. The key concepts, main contributions, advantages, and disadvantages of the different strategies are summarized and discussed. Finally, we analyze the statistics of all the cited works from different points of view and identify future trends for GAN-based biomedical image registration studies.
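As an illustration of the modality-translation strategy surveyed here, the toy PyTorch sketch below trains a generator to map modality A images toward modality B so that a simple mono-modal similarity term can drive alignment; the tiny networks, loss weights, and random tensors are placeholders, not any surveyed method.

```python
# Minimal sketch of the "modality translation" idea: G translates A -> pseudo-B,
# a discriminator D enforces realism, and an L1 term stands in for the
# mono-modal similarity metric. Shapes and weights are illustrative.
import torch
import torch.nn as nn

G = nn.Sequential(  # toy A->B translator
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1))
D = nn.Sequential(  # toy discriminator for modality B
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Flatten(), nn.LazyLinear(1))

bce = nn.BCEWithLogitsLoss()
a, b = torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64)  # random stand-ins

fake_b = G(a)
loss_d = bce(D(b), torch.ones(4, 1)) + bce(D(fake_b.detach()), torch.zeros(4, 1))
loss_g = bce(D(fake_b), torch.ones(4, 1))     # adversarial term for G
loss_sim = nn.functional.l1_loss(fake_b, b)   # mono-modal similarity term
print(loss_d.item(), (loss_g + 10.0 * loss_sim).item())
```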
Affiliation(s)
- Tingting Han
- Ministry of Education Key Laboratory of Intelligent Computing and Signal Processing, Information Materials and Intelligent Sensing Laboratory of Anhui Province, Anhui University, Hefei, China
- Jun Wu
- Ministry of Education Key Laboratory of Intelligent Computing and Signal Processing, Information Materials and Intelligent Sensing Laboratory of Anhui Province, Anhui University, Hefei, China
- Wenting Luo
- Ministry of Education Key Laboratory of Intelligent Computing and Signal Processing, Information Materials and Intelligent Sensing Laboratory of Anhui Province, Anhui University, Hefei, China
- Huiming Wang
- Ministry of Education Key Laboratory of Intelligent Computing and Signal Processing, Information Materials and Intelligent Sensing Laboratory of Anhui Province, Anhui University, Hefei, China
- Zhe Jin
- School of Artificial Intelligence, Anhui University, Hefei, China
- Lei Qu
- Ministry of Education Key Laboratory of Intelligent Computing and Signal Processing, Information Materials and Intelligent Sensing Laboratory of Anhui Province, Anhui University, Hefei, China
- Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, China
- SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, China
53. Liu Y, Ota M, Han R, Siewerdsen JH, Liu TYA, Jones CK. Active shape model registration of ocular structures in computed tomography images. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac9a98]
Abstract
Purpose. The goal of this work is to create an active shape model segmentation method based on a statistical shape model of five regions of the globe on computed tomography (CT) scans and to use the method to distinguish normal globes from globes with injury. Methods. A set of 78 normal globes imaged with CT were manually segmented (vitreous cavity, lens, sclera, anterior chamber, and cornea) by two graders. A statistical shape model was created from the regions. An active shape model (ASM) was trained using the manual segmentations and the statistical shape model and was assessed using leave-one-out cross validation. The ASM was then applied to a set of globes with open globe injury (OGI), and the segmentations were compared to those of normal globes in terms of standard deviations away from normal. Results. The ASM segmentation compared well to ground truth, based on the Dice similarity coefficient in a leave-one-out experiment: 90.2% ± 2.1% for the cornea, 92.5% ± 3.5% for the sclera, 87.4% ± 3.7% for the vitreous cavity, 83.5% ± 2.3% for the anterior chamber, and 91.2% ± 2.4% for the lens. A preliminary set of CT scans of patients with OGI were segmented using the ASM, and the shape of each region was quantified. The sclera and vitreous cavity were statistically different in shape from normal. Zone 1 and Zone 2 globes were statistically different from normal in the cornea and anterior chamber. Both results are consistent with the definition of zonal injuries in OGI. Conclusion. The ASM results were reproducible and correlated accurately with manual segmentations, and the quantitative metrics derived from ASMs of globes with OGI are consistent with existing medical knowledge of structural deformation.
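The Dice similarity coefficient reported above is a standard overlap metric; a minimal sketch of its computation on binary masks (with synthetic masks standing in for segmentations) follows.

```python
# Dice similarity coefficient between two binary masks.
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """DSC = 2|P intersect T| / (|P| + |T|) for boolean masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

rng = np.random.default_rng(0)
truth = rng.random((128, 128)) > 0.5   # synthetic "ground truth" mask
pred = truth.copy()
pred[:4] = ~pred[:4]                   # perturb a few rows to mimic errors
print(f"DSC = {dice(pred, truth):.3f}")
```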
54. Gulakala R, Markert B, Stoffel M. Generative adversarial network based data augmentation for CNN based detection of Covid-19. Sci Rep 2022; 12:19186. [PMID: 36357530] [PMCID: PMC9647771] [DOI: 10.1038/s41598-022-23692-x]
Abstract
Covid-19 has been a global concern since 2019, crippling the world economy and health. Biological diagnostic tools have since been developed to identify the virus from bodily fluids, and since the virus causes pneumonia, which results in lung inflammation, its presence can also be detected using medical imaging by expert radiologists. The success of each diagnostic method is measured by the hit rate for identifying Covid infections. However, people's access to each diagnostic tool can be limited depending on the geographic region, and since Covid treatment is a race against time, the diagnosis duration plays an important role. Hospitals with X-ray facilities are widely distributed all over the world, so a method that screens lung X-ray images for possible Covid-19 infection suggests itself. Promising results have been achieved in the literature in automatically detecting the virus from medical images such as CT scans and X-rays using supervised artificial neural network algorithms. One of the major drawbacks of supervised learning models is that they require enormous amounts of data to train and to generalize to new data. In this study, we develop a Swish-activated, instance- and batch-normalized residual U-Net GAN with dense blocks and skip connections to create synthetic and augmented data for training. Owing to the instance normalization and Swish activation, the proposed GAN architecture handles the randomness in luminosity arising from different X-ray sources better than the classical architecture and generates realistic-looking synthetic data. Moreover, radiology equipment is generally not computationally powerful and cannot run state-of-the-art deep neural networks such as DenseNet and ResNet efficiently. Hence, we propose a novel CNN architecture that is 40% lighter and more accurate than such state-of-the-art CNN networks. Multi-class classification of the three classes of chest X-rays (CXR), i.e., Covid-19, healthy, and pneumonia, is performed using the proposed model, which achieved a test accuracy of 99.2% that has not been reached in any previous study in the literature. Based on the mentioned criteria for developing a Corona infection diagnosis, the present study proposes an artificial intelligence-based method that yields a rapid diagnostic tool for Covid infections based on generative adversarial and convolutional neural networks, identifying lung infections with 99% accuracy. This could lead to a support tool that helps in rapid diagnosis and an accessible Covid identification method using CXR images.
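The combination of instance normalization and Swish (SiLU) activation inside a residual convolutional block, as described in the abstract, can be sketched roughly as below; channel counts and layer arrangement are illustrative assumptions, not the published architecture.

```python
# A generic residual block with instance normalization (which normalizes each
# image and channel independently, helping with per-source luminosity shifts)
# and Swish/SiLU activation (x * sigmoid(x)).
import torch
import torch.nn as nn

class SwishINResBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.InstanceNorm2d(channels),
            nn.SiLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.InstanceNorm2d(channels))
        self.act = nn.SiLU()

    def forward(self, x):
        return self.act(x + self.body(x))  # residual skip connection

block = SwishINResBlock(32)
print(block(torch.rand(2, 32, 64, 64)).shape)  # torch.Size([2, 32, 64, 64])
```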
Affiliation(s)
- Rutwik Gulakala
- Institute of General Mechanics, RWTH Aachen University, Aachen, Germany
- Bernd Markert
- Institute of General Mechanics, RWTH Aachen University, Aachen, Germany
- Marcus Stoffel
- Institute of General Mechanics, RWTH Aachen University, Aachen, Germany
55. Bassi PRAS, Attux R. Covid-19 detection using chest X-rays: is lung segmentation important for generalization? Research on Biomedical Engineering 2022. [PMCID: PMC9628459] [DOI: 10.1007/s42600-022-00242-y]
Abstract
Purpose. We evaluated the generalization capability of deep neural networks (DNNs) in the task of classifying chest X-rays as Covid-19, normal, or pneumonia when trained on relatively small and mixed datasets. Methods. We proposed a DNN that performs lung segmentation and classification, stacking a segmentation module (U-Net), an original intermediate module, and a classification module (DenseNet201). To evaluate generalization capability, we tested the network with an external dataset (from distinct localities) and used Bayesian inference to estimate the probability distributions of performance metrics. Furthermore, we introduce a novel evaluation technique, which uses layer-wise relevance propagation (LRP) and Brixia scores to compare the DNN's grounds for decision with those of radiologists. Results. The proposed DNN achieved 0.917 AUC (area under the ROC curve) on the external test dataset, surpassing a DenseNet without segmentation, which showed 0.906 AUC. Bayesian inference indicated a mean accuracy of 76.1% with a [0.695, 0.826] 95% HDI (high-density interval, which concentrates 95% of the metric's probability mass) with segmentation and, without segmentation, 71.7% and [0.646, 0.786]. Conclusion. Employing an analysis based on LRP and Brixia scores, we discovered that the areas where radiologists found strong Covid-19 symptoms are the most important for the stacked DNN's classification. External validation showed smaller accuracies than internal validation, indicating difficulty in generalization, which is positively affected by lung segmentation. Finally, the performance on the external dataset and the analysis with LRP suggest that DNNs can successfully detect Covid-19 even when trained on small and mixed datasets.
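For readers unfamiliar with the reported interval, a 95% high-density interval can be estimated from posterior samples as the narrowest interval containing 95% of the probability mass; the sketch below uses a Beta posterior over a hypothetical accuracy, a common but assumed modeling choice, not the paper's exact model.

```python
# Estimate a 95% HDI from posterior samples of an accuracy metric.
import numpy as np

def hdi(samples: np.ndarray, mass: float = 0.95) -> tuple:
    """Narrowest interval containing `mass` of the sample probability mass."""
    s = np.sort(samples)
    n_kept = int(np.floor(mass * len(s)))
    widths = s[n_kept:] - s[: len(s) - n_kept]   # widths of candidate intervals
    start = int(np.argmin(widths))
    return s[start], s[start + n_kept]

rng = np.random.default_rng(1)
correct, total = 152, 200                        # hypothetical test outcomes
post = rng.beta(1 + correct, 1 + total - correct, size=100_000)  # Beta posterior
low, high = hdi(post)
print(f"mean acc = {post.mean():.3f}, 95% HDI = [{low:.3f}, {high:.3f}]")
```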
Affiliation(s)
- Pedro R. A. S. Bassi
- Department of Computer Engineering and Industrial Automation, School of Electrical and Computer Engineering, University of Campinas - UNICAMP, Campinas, SP 13083-970, Brazil; Present Address: Alma Mater Studiorum - University of Bologna, 40126 Bologna, BO, Italy; Present Address: Istituto Italiano di Tecnologia, 16163 Genoa, GE, Italy
- Romis Attux
- Department of Computer Engineering and Industrial Automation, School of Electrical and Computer Engineering, University of Campinas - UNICAMP, Campinas, SP 13083-970, Brazil
56. Naseem MT, Hussain T, Lee CS, Khan MA. Classification and Detection of COVID-19 and Other Chest-Related Diseases Using Transfer Learning. Sensors (Basel) 2022; 22:7977. [PMID: 36298328] [PMCID: PMC9610066] [DOI: 10.3390/s22207977]
Abstract
COVID-19 has infected millions of people worldwide over the past few years. The main technique used for COVID-19 detection is reverse transcription polymerase chain reaction, which is expensive, sensitive, and requires medical expertise. X-ray imaging is an alternative and more accessible technique. This study aimed to improve detection accuracy to create a computer-aided diagnostic tool. Combining artificial intelligence techniques with radiological imaging can help detect different diseases. This study proposes a technique for the automatic detection of COVID-19 and other chest-related diseases from digital chest X-ray images of suspected patients by applying transfer learning (TL) algorithms. For this purpose, two balanced datasets, Dataset-1 and Dataset-2, were created by combining four public databases and collecting images from recently published articles. Dataset-1 consisted of 6000 chest X-ray images, with 1500 for each class. Dataset-2 consisted of 7200 images, with 1200 for each class. To train and test the model, TL with nine pretrained convolutional neural networks (CNNs) was used, with augmentation as a preprocessing method. The network was trained using five classifiers: a two-class classifier (normal and COVID-19); a three-class classifier (normal, COVID-19, and viral pneumonia); a four-class classifier (normal, viral pneumonia, COVID-19, and tuberculosis (Tb)); a five-class classifier (normal, bacterial pneumonia, COVID-19, Tb, and pneumothorax); and a six-class classifier (normal, bacterial pneumonia, COVID-19, viral pneumonia, Tb, and pneumothorax). For two, three, four, five, and six classes, our model achieved maximum accuracies of 99.83%, 98.11%, 97.00%, 94.66%, and 87.29%, respectively.
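A generic sketch of this transfer-learning recipe, freezing an ImageNet-pretrained backbone and retraining a new classification head for the chest X-ray classes, is given below; the ResNet-50 backbone is an assumption for illustration (the study compares nine pretrained CNNs).

```python
# Transfer learning: freeze the pretrained feature extractor, train a new head.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 6  # e.g., the six-class setting described in the abstract
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)

for p in model.parameters():          # freeze the pretrained convolutional base
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, num_classes)  # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
x = torch.rand(2, 3, 224, 224)        # an augmented CXR batch would go here
print(model(x).shape)                 # torch.Size([2, 6])
```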
Affiliation(s)
- Muhammad Tahir Naseem
- Department of Electronic Engineering, Yeungnam University, Gyeongsan 38541, Korea
- Riphah School of Computing & Applied Sciences (RSCI), Riphah International University, Lahore 55150, Pakistan
- Tajmal Hussain
- Riphah School of Computing & Applied Sciences (RSCI), Riphah International University, Lahore 55150, Pakistan
- Chan-Su Lee
- Department of Electronic Engineering, Yeungnam University, Gyeongsan 38541, Korea
- Muhammad Adnan Khan
- Riphah School of Computing & Applied Sciences (RSCI), Riphah International University, Lahore 55150, Pakistan
57. Malik H, Anees T, Din M, Naeem A. CDC_Net: multi-classification convolutional neural network model for detection of COVID-19, pneumothorax, pneumonia, lung cancer, and tuberculosis using chest X-rays. Multimedia Tools and Applications 2022; 82:13855-13880. [PMID: 36157356] [PMCID: PMC9485026] [DOI: 10.1007/s11042-022-13843-7]
Abstract
Coronavirus disease (COVID-19) has adversely harmed the healthcare system and economy throughout the world. COVID-19 has symptoms similar to other chest disorders, such as lung cancer (LC), pneumothorax, tuberculosis (TB), and pneumonia, which might mislead clinical professionals in detecting the coronavirus. This motivates us to design a model to classify multiple chest infections. A chest X-ray is the most ubiquitous diagnostic imaging examination in medical practice, and chest X-ray examinations are the primary diagnostic tool for all of these chest infections. For the sake of saving human lives, paramedics and researchers are working tirelessly to establish a precise and reliable method for diagnosing COVID-19 at an early stage. However, the medical diagnosis of COVID-19 is exceedingly idiosyncratic and varied. In this work, a multi-classification method based on a deep learning (DL) model is developed and tested to automatically classify COVID-19, LC, pneumothorax, TB, and pneumonia from chest X-ray images. COVID-19 and the other chest tract disorders are diagnosed using a convolutional neural network (CNN) model, called CDC_Net, that incorporates residual-network ideas and dilated convolution. We used this model in conjunction with publicly available benchmark data to identify these diseases; to the authors' knowledge, this is the first time a single deep learning model has been used to diagnose five different chest ailments. In terms of classification accuracy, recall, precision, and F1-score, we compared the proposed model to three CNN-based pretrained models: VGG-19, ResNet-50, and Inception v3. CDC_Net attained an AUC of 0.9953 in identifying the various chest diseases, with an accuracy of 99.39%, a recall of 98.13%, and a precision of 99.42%, while VGG-19, ResNet-50, and Inception v3 achieved accuracies of 95.61%, 96.15%, and 95.16%, respectively, in classifying the multiple chest diseases. Using chest X-rays, the proposed model was found to be highly accurate in diagnosing chest diseases, and on our testing data set it shows significant performance compared with competing methods. Statistical analyses of the datasets using McNemar's and ANOVA tests also showed the robustness of the proposed model.
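The two ingredients named for CDC_Net, residual connections and dilated convolution, can be combined in a block like the following generic sketch; it is not the published CDC_Net architecture.

```python
# A residual block whose convolutions use dilation to enlarge the receptive
# field without extra parameters; the shortcut preserves gradient flow.
import torch
import torch.nn as nn

class DilatedResBlock(nn.Module):
    def __init__(self, channels: int, dilation: int = 2):
        super().__init__()
        # "same" padding for a 3x3 kernel with dilation d is d
        self.conv1 = nn.Conv2d(channels, channels, 3,
                               padding=dilation, dilation=dilation)
        self.conv2 = nn.Conv2d(channels, channels, 3,
                               padding=dilation, dilation=dilation)
        self.bn1, self.bn2 = nn.BatchNorm2d(channels), nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)   # residual shortcut

print(DilatedResBlock(16)(torch.rand(1, 16, 64, 64)).shape)
```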
Affiliation(s)
- Hassaan Malik
- Department of Computer Science, University of Management and Technology, Lahore, 54000 Pakistan
- Tayyaba Anees
- Department of Software Engineering, University of Management and Technology, Lahore, 54000 Pakistan
- Muizzud Din
- Department of Computer Science, Ghazi University, Dera Ghazi Khan, 32200 Pakistan
- Ahmad Naeem
- Department of Computer Science, University of Management and Technology, Lahore, 54000 Pakistan
58. Integrating patient symptoms, clinical readings, and radiologist feedback with computer-aided diagnosis system for detection of infectious pulmonary disease: a feasibility study. Med Biol Eng Comput 2022; 60:2549-2565. [DOI: 10.1007/s11517-022-02611-2]
59. Chen S, Qiu C, Yang W, Zhang Z. Combining edge guidance and feature pyramid for medical image segmentation. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103960]
60. Meedeniya D, Kumarasinghe H, Kolonne S, Fernando C, Díez IDLT, Marques G. Chest X-ray analysis empowered with deep learning: A systematic review. Appl Soft Comput 2022; 126:109319. [PMID: 36034154] [PMCID: PMC9393235] [DOI: 10.1016/j.asoc.2022.109319]
Abstract
Chest radiographs are widely used in the medical domain, and chest X-ray imaging currently plays an important role in the diagnosis of medical conditions such as pneumonia and COVID-19. Recent developments in deep learning techniques have led to promising performance in medical image classification and prediction tasks, and with the availability of chest X-ray datasets and emerging trends in data engineering techniques, there has been a growth in related publications. To date, only a few survey papers have addressed chest X-ray classification using deep learning techniques, and they lack an analysis of the trends of recent studies. This systematic review explores and provides a comprehensive analysis of studies that have used deep learning techniques to analyze chest X-ray images. We present state-of-the-art deep learning-based pneumonia and COVID-19 detection solutions, trends in recent studies, publicly available datasets, guidance for following a deep learning process, challenges, and potential future research directions in this domain. The findings and conclusions of the reviewed work are organized so that researchers and developers working in the same domain can use them to support decisions in their own research.
61. Yang L, Gu Y, Huo B, Liu Y, Bian G. A shape-guided deep residual network for automated CT lung segmentation. Knowl Based Syst 2022. [DOI: 10.1016/j.knosys.2022.108981]
62. Lung Field Segmentation in Chest X-ray Images Using Superpixel Resizing and Encoder–Decoder Segmentation Networks. Bioengineering (Basel) 2022; 9:351. [PMID: 36004876] [PMCID: PMC9404743] [DOI: 10.3390/bioengineering9080351]
Abstract
Lung segmentation of chest X-ray (CXR) images is a fundamental step in many diagnostic applications. Most lung field segmentation methods reduce the image size to speed up subsequent processing and then upsample the low-resolution result back to the original high resolution, which blurs the image boundaries during the downsampling and upsampling steps. In this paper, we incorporate lung field segmentation into a superpixel resizing framework to alleviate this problem. The framework upsamples the segmentation results using the superpixel boundary information obtained during downsampling. Using this method, not only can the computation time of high-resolution medical image segmentation be reduced, but the quality of the segmentation results can also be preserved. We evaluate the proposed method on the JSRT, LIDC-IDRI, and ANH datasets. The experimental results show that the proposed superpixel resizing framework outperforms traditional image resizing methods. Furthermore, combining the segmentation network with the superpixel resizing framework, the proposed method achieves better results with an average runtime of 4.6 s on CPU and 0.02 s on GPU.
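One plausible reading of the superpixel-based upsampling step is sketched below: a coarse mask is upsampled and then snapped to full-resolution superpixel boundaries by majority vote, so label edges follow image boundaries. All parameters and the test image are illustrative stand-ins, not the paper's implementation.

```python
# Refine a naively upsampled low-resolution mask using SLIC superpixels.
import numpy as np
from skimage.data import camera
from skimage.segmentation import slic
from skimage.transform import resize

image = camera()                                   # stand-in for a CXR
coarse = resize(image, (64, 64)) > 0.5             # stand-in coarse "mask"
naive = resize(coarse.astype(float), image.shape, order=0) > 0.5

segments = slic(image, n_segments=600, compactness=10, channel_axis=None)
refined = np.zeros_like(naive)
for sp in np.unique(segments):
    inside = segments == sp
    refined[inside] = naive[inside].mean() > 0.5   # majority vote per superpixel
print(refined.shape, refined.dtype)
```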
63. Sellergren AB, Chen C, Nabulsi Z, Li Y, Maschinot A, Sarna A, Huang J, Lau C, Kalidindi SR, Etemadi M, Garcia-Vicente F, Melnick D, Liu Y, Eswaran K, Tse D, Beladia N, Krishnan D, Shetty S. Simplified Transfer Learning for Chest Radiography Models Using Less Data. Radiology 2022; 305:454-465. [DOI: 10.1148/radiol.212482]
Affiliation(s)
- Andrew B. Sellergren, Christina Chen, Zaid Nabulsi, Yuanzhen Li, Aaron Maschinot, Aaron Sarna, Jenny Huang, Charles Lau, Sreenivasa Raju Kalidindi, Mozziyar Etemadi, Florencia Garcia-Vicente, David Melnick, Yun Liu, Krish Eswaran, Daniel Tse, Neeral Beladia, Dilip Krishnan, Shravya Shetty
- From Google Health, Google, 3400 Hillview Ave, Palo Alto, CA 94304 (A.B.S., C.C., Z.N., Y. Liu, K.E., D.T., N.B., S.S.); Google Research, Cambridge, Mass (Y. Li, A.M., A.S., J.H., D.K.); Google via Advanced Clinical, Deerfield, Ill (C.L.); Apollo Radiology International, Hyderabad, India (S.R.K.); and Northwestern Medicine, Chicago, Ill (M.E., F.G.V., D.M.)
64. Chandra TB, Singh BK, Jain D. Disease Localization and Severity Assessment in Chest X-Ray Images using Multi-Stage Superpixels Classification. Computer Methods and Programs in Biomedicine 2022; 222:106947. [PMID: 35749885] [PMCID: PMC9403875] [DOI: 10.1016/j.cmpb.2022.106947]
Abstract
BACKGROUND AND OBJECTIVES Chest X-ray (CXR) is a non-invasive imaging modality used in the prognosis and management of chronic lung disorders like tuberculosis (TB), pneumonia, and coronavirus disease (COVID-19). The radiomic features associated with different disease manifestations assist in detection, localization, and grading the severity of infected lung regions. The majority of existing computer-aided diagnosis (CAD) systems use these features for the classification task, and only a few works have been dedicated to disease localization and severity scoring. Moreover, existing deep learning approaches use class activation maps and saliency maps, which generate only rough localizations. This study aims to generate a compact disease boundary and infection map and to grade infection severity using the proposed multistage superpixel classification-based disease localization and severity assessment framework. METHODS The proposed method uses the simple linear iterative clustering (SLIC) technique to subdivide the lung field into small superpixels. Initially, different radiomic texture features and proposed shape features are extracted and combined to train different benchmark classifiers in a multistage framework. Subsequently, the predicted class labels are used to generate an infection map, mark the disease boundary, and grade the infection severity. The performance is evaluated using the publicly available Montgomery dataset and validated using Friedman average ranking and the Holm and Nemenyi post-hoc procedures. RESULTS The proposed multistage classification approach achieved accuracy (ACC) = 95.52%, F-measure (FM) = 95.48%, and area under the curve (AUC) = 0.955 for Stage-I and ACC = 85.35%, FM = 85.20%, AUC = 0.853 for Stage-II using the calibration dataset, and ACC = 93.41%, FM = 95.32%, AUC = 0.936 for Stage-I and ACC = 84.02%, FM = 71.01%, AUC = 0.795 for Stage-II using the validation dataset. The model also demonstrated an average Jaccard index (JI) of 0.82 and a Pearson's correlation coefficient (r) of 0.9589. CONCLUSIONS The classification results on the calibration and validation datasets confirm the promising performance of the proposed framework. The average JI shows promising potential to localize the disease, and the good agreement between the radiologist score and the predicted severity score (r) confirms the robustness of the method. Finally, the statistical tests justified the significance of the obtained results.
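A condensed sketch of this pipeline's shape, SLIC superpixels, per-superpixel features, a benchmark classifier, and a stitched infection map, is given below with toy features and labels; it is only a schematic of the multistage idea, not the published system.

```python
# SLIC superpixels -> per-superpixel features -> classifier -> infection map.
import numpy as np
from skimage.data import camera
from skimage.segmentation import slic
from sklearn.ensemble import RandomForestClassifier

image = camera().astype(float)                    # stand-in for a lung field
segments = slic(image, n_segments=300, compactness=10, channel_axis=None)

sp_ids = np.unique(segments)
feats = np.array([[image[segments == s].mean(),   # toy intensity feature
                   image[segments == s].std()]    # toy texture feature
                  for s in sp_ids])
labels = (feats[:, 0] > feats[:, 0].mean()).astype(int)  # toy "infected" labels

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(feats, labels)
pred = clf.predict(feats)
infection_map = pred[np.searchsorted(sp_ids, segments)]  # back to pixel grid
print(infection_map.shape)
```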
Affiliation(s)
- Tej Bahadur Chandra
- Department of Computer Applications, National Institute of Technology Raipur, Chhattisgarh, India
- Bikesh Kumar Singh
- Department of Biomedical Engineering, National Institute of Technology Raipur, Chhattisgarh, India
- Deepak Jain
- Department of Radiodiagnosis, Pt. Jawahar Lal Nehru Memorial Medical College, Raipur, Chhattisgarh, India
65. FGAM: A pluggable light-weight attention module for medical image segmentation. Comput Biol Med 2022; 146:105628. [DOI: 10.1016/j.compbiomed.2022.105628]
66. Jafar A, Hameed MT, Akram N, Waqas U, Kim HS, Naqvi RA. CardioNet: Automatic Semantic Segmentation to Calculate the Cardiothoracic Ratio for Cardiomegaly and Other Chest Diseases. J Pers Med 2022; 12:988. [PMID: 35743771] [PMCID: PMC9225197] [DOI: 10.3390/jpm12060988]
Abstract
Semantic segmentation for diagnosing chest-related diseases like cardiomegaly, emphysema, pleural effusion, and pneumothorax is a critical yet understudied tool for identifying chest anatomy. Among these, cardiomegaly is particularly dangerous, carrying a high risk of sudden death. An expert medical practitioner can diagnose cardiomegaly early using a chest radiograph (CXR). Cardiomegaly is a heart-enlargement condition that can be assessed by calculating the transverse cardiac diameter (TCD) and the cardiothoracic ratio (CTR). However, manual estimation of the CTR and other chest-related measurements requires considerable time from medical experts. Artificial intelligence can estimate cardiomegaly and related diseases by segmenting CXRs based on their anatomical semantics. Unfortunately, due to poor-quality images and variations in intensity, automatic segmentation of the lungs and heart in CXRs is challenging. Deep learning-based methods are being used for chest anatomy segmentation, but most consider only lung segmentation and require a great deal of training. This work presents CardioNet, a multiclass concatenation-based automatic semantic segmentation network explicitly designed to perform fine segmentation using fewer parameters than a conventional deep learning scheme. Furthermore, CardioNet is used to support the diagnosis of other chest-related diseases through semantic segmentation. CardioNet is evaluated using the publicly available JSRT (Japanese Society of Radiological Technology) dataset, which contains multiclass segmentations of the heart, lungs, and clavicle bones. In addition, our study examined lung segmentation using another publicly available dataset, Montgomery County (MC). The experimental results show that the proposed CardioNet model achieved acceptable accuracy and competitive results across all datasets.
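Given heart and lung masks such as CardioNet produces, the cardiothoracic ratio reduces to a ratio of widths; the sketch below uses toy masks, and the CTR > 0.5 cardiomegaly cutoff is cited as general clinical convention rather than a claim from this paper.

```python
# CTR = maximal transverse cardiac diameter / maximal thoracic diameter,
# both measured as horizontal extents of the respective masks.
import numpy as np

def max_width(mask: np.ndarray) -> int:
    cols = np.where(mask.any(axis=0))[0]          # columns touched by the mask
    return int(cols.max() - cols.min() + 1) if cols.size else 0

heart = np.zeros((256, 256), bool); heart[100:160, 90:150] = True   # toy masks
lungs = np.zeros((256, 256), bool); lungs[60:200, 40:220] = True

ctr = max_width(heart) / max_width(lungs)         # TCD / thoracic diameter
print(f"CTR = {ctr:.2f} -> {'cardiomegaly suspected' if ctr > 0.5 else 'normal'}")
```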
Affiliation(s)
- Abbas Jafar
- Department of Computer Engineering, Myongji University, Yongin 03674, Korea
- Muhammad Talha Hameed
- Department of Primary and Secondary Healthcare, Lahore 54000, Pakistan
- Nadeem Akram
- Department of Primary and Secondary Healthcare, Lahore 54000, Pakistan
- Umer Waqas
- Research and Development, AItheNutrigene, Seoul 06132, Korea
- Hyung Seok Kim
- School of Intelligent Mechatronics Engineering, Sejong University, Seoul 05006, Korea
- Rizwan Ali Naqvi
- Department of Unmanned Vehicle Engineering, Sejong University, Seoul 05006, Korea
67. Peng T, Wu Y, Qin J, Wu QJ, Cai J. H-ProSeg: Hybrid ultrasound prostate segmentation based on explainability-guided mathematical model. Computer Methods and Programs in Biomedicine 2022; 219:106752. [PMID: 35338887] [DOI: 10.1016/j.cmpb.2022.106752]
Abstract
BACKGROUND AND OBJECTIVE Accurate and robust prostate segmentation in transrectal ultrasound (TRUS) images is of great interest for image-guided prostate interventions and prostate cancer diagnosis. However, it remains a challenging task for various reasons, including a missing or ambiguous boundary between the prostate and surrounding tissues, the presence of shadow artifacts, intra-prostate intensity heterogeneity, and anatomical variations. METHODS Here, we present a hybrid method for prostate segmentation (H-ProSeg) in TRUS images, using a small number of radiologist-defined seed points as the prior points. This method consists of three subnetworks. The first subnetwork uses an improved principal curve-based model to obtain data sequences consisting of seed points and their corresponding projection indices. The second subnetwork uses an improved differential evolution-based artificial neural network for training to decrease the model error. The third subnetwork uses the parameters of the artificial neural network to provide a smooth mathematical description of the prostate contour. The performance of the H-ProSeg method was assessed in 55 brachytherapy patients using the Dice similarity coefficient (DSC), Jaccard similarity coefficient (Ω), and accuracy (ACC). RESULTS The H-ProSeg method achieved excellent segmentation accuracy, with DSC, Ω, and ACC values of 95.8%, 94.3%, and 95.4%, respectively. Under Gaussian noise (standard deviation of the Gaussian function, σ = 50), the DSC, Ω, and ACC values of the proposed method remained as high as 93.3%, 91.9%, and 93%, respectively. As σ increased from 10 to 50, the DSC, Ω, and ACC values fluctuated by at most approximately 2.5%, demonstrating the excellent robustness of our method. CONCLUSIONS Here, we present a hybrid method for accurate and robust prostate ultrasound image segmentation. The H-ProSeg method achieved superior performance compared with current state-of-the-art techniques. Knowledge of the precise boundaries of the prostate is crucial for the preservation of at-risk structures. The proposed models have the potential to improve prostate cancer diagnosis and therapeutic outcomes.
Affiliation(s)
- Tao Peng
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Yiyun Wu
- Department of Medical Technology, Jiangsu Province Hospital, Nanjing, Jiangsu, China
- Jing Qin
- Department of Nursing, The Hong Kong Polytechnic University, Hong Kong, China
- Qingrong Jackie Wu
- Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
- Jing Cai
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
68. Liu W, Luo J, Yang Y, Wang W, Deng J, Yu L. Automatic lung segmentation in chest X-ray images using improved U-Net. Sci Rep 2022; 12:8649. [PMID: 35606509] [PMCID: PMC9127108] [DOI: 10.1038/s41598-022-12743-y]
Abstract
Automatic segmentation of the lung region in chest X-rays (CXR) can help doctors diagnose many lung diseases. However, extreme lung shape changes and fuzzy lung regions caused by serious lung disease can cause automatic lung segmentation models to fail. We improved the U-Net network by using a pretrained EfficientNet-B4 as the encoder, and residual blocks and the LeakyReLU activation function in the decoder. The network can extract lung field features efficiently and avoids the gradient instability caused by the multiplication effect in gradient backpropagation. Compared with the traditional U-Net model, our method improves the Dice coefficient by about 2.5% and the Jaccard index by about 6% on two benchmark lung segmentation datasets, and by about 5% and 9%, respectively, on a private lung segmentation dataset. Comparative experiments show that our method can improve the accuracy of lung segmentation in CXR images with a lower standard deviation and good robustness.
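A rough sketch of the described decoder modification, a residual block with LeakyReLU applied after upsampling, meant to be paired with a pretrained EfficientNet-B4 encoder, is shown below; it is an interpretation of the abstract, not the authors' code.

```python
# A decoder building block: upsample, project channels, then a residual
# convolutional body with LeakyReLU activations.
import torch
import torch.nn as nn

class ResDecoderBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.proj = nn.Conv2d(in_ch, out_ch, 1)   # match channels for the skip
        self.body = nn.Sequential(
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch), nn.LeakyReLU(0.1),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch))
        self.act = nn.LeakyReLU(0.1)

    def forward(self, x):
        x = self.proj(self.up(x))
        return self.act(x + self.body(x))          # residual sum

print(ResDecoderBlock(64, 32)(torch.rand(1, 64, 32, 32)).shape)
```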
Affiliation(s)
- Wufeng Liu
- Henan University of Technology, Zhengzhou, 450001, China
- Jiaxin Luo
- Henan University of Technology, Zhengzhou, 450001, China
- Yan Yang
- Henan University of Technology, Zhengzhou, 450001, China
- Wenlian Wang
- Nanyang Central Hospital, Nanyang, 473009, China
- Junkui Deng
- Nanyang Central Hospital, Nanyang, 473009, China
- Liang Yu
- Henan University of Technology, Zhengzhou, 450001, China
69. Multiresolution Aggregation Transformer UNet Based on Multiscale Input and Coordinate Attention for Medical Image Segmentation. Sensors (Basel) 2022; 22:3820. [PMID: 35632229] [PMCID: PMC9145221] [DOI: 10.3390/s22103820]
Abstract
The latest medical image segmentation methods use UNet and transformer structures with great success. Multiscale feature fusion is one of the important factors affecting the accuracy of medical image segmentation. Existing transformer-based UNet methods do not comprehensively explore multiscale feature fusion, and there is still much room for improvement. In this paper, we propose a novel multiresolution aggregation transformer UNet (MRA-TUNet) based on multiscale input and coordinate attention for medical image segmentation. It realizes multiresolution aggregation in two respects: (1) on the input side, a multiresolution aggregation module fuses input image information at different resolutions, enhancing the input features of the network; (2) on the output side, an output feature selection module fuses output information at different scales to better extract both coarse-grained and fine-grained information. We also introduce a coordinate attention structure, for the first time in this setting, to further improve segmentation performance. We compare with state-of-the-art medical image segmentation methods on the automated cardiac diagnosis challenge and the 2018 atrial segmentation challenge. Our method achieved average Dice scores of 0.911 for the right ventricle (RV), 0.890 for the myocardium (Myo), 0.961 for the left ventricle (LV), and 0.923 for the left atrium (LA). The experimental results on the two datasets show that our method outperforms eight state-of-the-art medical image segmentation methods in Dice score, precision, and recall.
70. Deformable image registration with attention-guided fusion of multi-scale deformation fields. Appl Intell 2022. [DOI: 10.1007/s10489-022-03659-1]
Abstract
Deformable medical image registration plays a crucial role in theoretical research and clinical application. Traditional methods suffer from low registration accuracy and efficiency. Recent deep learning-based methods have made significant progress, especially those weakly supervised by anatomical segmentations, but performance still needs improvement, especially for images with large deformations. This work proposes a novel deformable image registration method based on an attention-guided fusion of multi-scale deformation fields. Specifically, we adopt a separately trained segmentation network to segment the regions of interest and remove interference from uninterested areas. We then construct a novel dense registration network that predicts deformation fields at multiple scales and combines them for final registration through an attention-weighted field fusion process. The proposed contour loss and an image structural similarity index (SSIM)-based loss further enhance model training through regularization. Compared to state-of-the-art methods on three benchmark datasets, our method achieves significant performance improvements in terms of average Dice similarity score (DSC), Hausdorff distance (HD), average symmetric surface distance (ASSD), and Jacobian coefficient (JAC). For example, the improvements on the SHEN dataset are 0.014, 5.134, 0.559, and 359.936, respectively.
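One plausible reading of the attention-weighted field fusion is a softmax weight map over scales that blends upsampled deformation fields pixel-wise, as sketched below; the scales, tensor shapes, and weighting inputs are assumptions, not the paper's network.

```python
# Blend per-scale deformation fields with pixel-wise softmax attention weights.
import torch
import torch.nn.functional as F

def fuse_fields(fields, weights_logits):
    """fields: list of (B, 2, H, W) deformation fields, already upsampled to a
    common resolution; weights_logits: (B, S, H, W), one channel per scale."""
    w = torch.softmax(weights_logits, dim=1)       # attention over S scales
    stacked = torch.stack(fields, dim=1)           # (B, S, 2, H, W)
    return (w.unsqueeze(2) * stacked).sum(dim=1)   # (B, 2, H, W)

B, H, W = 1, 64, 64
coarse = F.interpolate(torch.rand(B, 2, 16, 16), size=(H, W), mode="bilinear")
fine = torch.rand(B, 2, H, W)
fused = fuse_fields([coarse, fine], torch.rand(B, 2, H, W))
print(fused.shape)  # torch.Size([1, 2, 64, 64])
```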
71. Wen Y, Chen L, Deng Y, Zhang Z, Zhou C. Pixel-wise triplet learning for enhancing boundary discrimination in medical image segmentation. Knowl Based Syst 2022. [DOI: 10.1016/j.knosys.2022.108424]
72. Evaluating Explainable Artificial Intelligence for X-ray Image Analysis. Applied Sciences (Basel) 2022; 12:4459. [DOI: 10.3390/app12094459]
Abstract
The lack of justification for the results obtained by artificial intelligence (AI) algorithms has limited their use in the medical context. To increase the explainability of existing AI methods, explainable artificial intelligence (XAI) has been proposed. We performed a systematic literature review, based on the guidelines proposed by Kitchenham and Charters, of studies that applied XAI methods in X-ray-image-related tasks. We identified 141 studies relevant to the objective of this research from five different databases. For each study, we assessed the quality and then analyzed it according to a specific set of research questions. We determined two primary purposes for X-ray images: the detection of bone diseases and the detection of lung diseases. We found that most of the AI methods used were based on CNNs. We identified the different techniques used to increase the explainability of the models and grouped them by the kind of explainability obtained. We found that most of the articles did not evaluate the quality of the explainability obtained, undermining confidence in the explanations. Finally, we identify the current challenges and future directions of this subject and provide guidelines to practitioners and researchers for addressing the limitations and weaknesses that we detected.
73. Haghanifar A, Majdabadi MM, Choi Y, Deivalakshmi S, Ko S. COVID-CXNet: Detecting COVID-19 in frontal chest X-ray images using deep learning. Multimedia Tools and Applications 2022; 81:30615-30645. [PMID: 35431611] [PMCID: PMC8989406] [DOI: 10.1007/s11042-022-12156-z]
Abstract
One of the primary clinical observations for screening the novel coronavirus is capturing a chest x-ray image. In most patients, a chest x-ray contains abnormalities, such as consolidation, resulting from COVID-19 viral pneumonia. In this study, research is conducted on efficiently detecting imaging features of this type of pneumonia using deep convolutional neural networks in a large dataset. It is demonstrated that simple models, alongside the majority of pretrained networks in the literature, focus on irrelevant features for decision-making. In this paper, numerous chest x-ray images from several sources are collected, and one of the largest publicly accessible datasets is prepared. Finally, using the transfer learning paradigm, the well-known CheXNet model is utilized to develop COVID-CXNet. This powerful model is capable of detecting the novel coronavirus pneumonia based on relevant and meaningful features with precise localization. COVID-CXNet is a step towards a fully automated and robust COVID-19 detection system.
Affiliation(s)
- Arman Haghanifar
- Division of Biomedical Engineering, University of Saskatchewan, Saskatoon, SK, Canada
- Younhee Choi
- Department of Electrical & Computer Engineering, University of Saskatchewan, Saskatoon, SK, Canada
- Seokbum Ko
- Department of Electrical & Computer Engineering, University of Saskatchewan, Saskatoon, SK, Canada
74. Subramanian N, Elharrouss O, Al-Maadeed S, Chowdhury M. A review of deep learning-based detection methods for COVID-19. Comput Biol Med 2022; 143:105233. [PMID: 35180499] [PMCID: PMC8798789] [DOI: 10.1016/j.compbiomed.2022.105233]
Abstract
COVID-19 is a fast-spreading pandemic disease, and early detection is crucial for stopping the spread of infection. Lung images are used in the detection of coronavirus infection: both chest X-ray (CXR) and computed tomography (CT) images are available for detecting COVID-19. Deep learning methods have proven efficient and better performing in many computer vision and medical imaging applications. With the rise of the COVID-19 pandemic, researchers are using deep learning methods to detect coronavirus infection in lung images. In this paper, the currently available deep learning methods used to detect coronavirus infection in lung images are surveyed. The available methodologies, public datasets, the datasets used by each method, and evaluation metrics are summarized to help future researchers, and the evaluation metrics used by the methods are comprehensively compared.
Affiliation(s)
- Nandhini Subramanian
- Qatar University College of Engineering, Computer Science and Engineering, Qatar
- Omar Elharrouss
- Qatar University College of Engineering, Computer Science and Engineering, Qatar
- Somaya Al-Maadeed
- Qatar University College of Engineering, Computer Science and Engineering, Qatar
- Muhammed Chowdhury
- Qatar University College of Engineering, Computer Science and Engineering, Qatar
75. Chen J, Wu Y, Yang Y, Wen S, Shi K, Bermak A, Huang T. An Efficient Memristor-Based Circuit Implementation of Squeeze-and-Excitation Fully Convolutional Neural Networks. IEEE Transactions on Neural Networks and Learning Systems 2022; 33:1779-1790. [PMID: 33406044] [DOI: 10.1109/tnnls.2020.3044047]
Abstract
Recently, there has been a surge of interest in applying memristors to hardware implementations of deep neural networks due to various desirable properties of the memristor, such as nonvolatility, multivalued storage, and nanoscale size. Most existing neural network circuit designs, however, are based on generic frameworks that are not optimized for memristors. Furthermore, to the best of our knowledge, there are no existing efficient memristor-based implementations of complex neural network operators, such as deconvolutions and squeeze-and-excitation (SE) blocks, which are critical for achieving high accuracy in common medical image analysis applications such as semantic segmentation. This article proposes convolution-kernel first (CKF), an efficient scheme for designing memristor-based fully convolutional neural networks (FCNs). Compared with existing neural network circuits, CKF enables effective parameter pruning, which significantly reduces circuit power consumption. Furthermore, CKF includes novel, memristor-optimized implementations of deconvolution layers and SE blocks. Simulation results on real medical image segmentation tasks confirm that CKF obtains up to a 56.2% reduction in computations and a 33.62-W reduction in circuit power consumption after weight pruning while retaining high accuracy on the test set. Moreover, the pruning results can be applied directly to existing circuits without any modification of the corresponding system.
|
76
|
Peng T, Wang C, Zhang Y, Wang J. H-SegNet: hybrid segmentation network for lung segmentation in chest radiographs using mask region-based convolutional neural network and adaptive closed polyline searching method. Phys Med Biol 2022; 67. [PMID: 35287125 DOI: 10.1088/1361-6560/ac5d74] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2021] [Accepted: 03/14/2022] [Indexed: 12/24/2022]
Abstract
Chest x-ray (CXR) is one of the most commonly used imaging techniques for the detection and diagnosis of pulmonary diseases. One critical component in many computer-aided systems, for either detection or diagnosis in digital CXR, is the accurate segmentation of the lung. Due to the low-intensity contrast around the lung boundary and large inter-subject variance, it has been challenging to segment the lung accurately from structural CXR images. In this work, we propose an automatic Hybrid Segmentation Network (H-SegNet) for lung segmentation on CXR. The proposed H-SegNet consists of two key steps: (1) an image preprocessing step based on a deep learning model to automatically extract coarse lung contours; and (2) a refinement step that fine-tunes the coarse segmentation results using an improved principal-curve-based method coupled with an improved machine learning method. Experimental results on several public datasets show that the proposed method achieves superior segmentation results on lung CXRs compared with several state-of-the-art methods.
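The contour-extraction half of this two-step idea can be sketched in a few lines of OpenCV; the synthetic mask, and the use of simple polygon simplification in place of the paper's principal-curve refinement, are illustrative assumptions.

```python
import cv2
import numpy as np

# Stand-in for the deep model's coarse output: a synthetic binary lung mask.
mask = np.zeros((256, 256), dtype=np.uint8)
cv2.ellipse(mask, (90, 128), (50, 90), 0, 0, 360, 255, -1)   # "right lung"
cv2.ellipse(mask, (170, 128), (50, 90), 0, 0, 360, 255, -1)  # "left lung"

# Step 1 stand-in: extract a closed polyline (contour) per lung region.
# OpenCV >= 4 returns (contours, hierarchy).
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for c in contours:
    # Simplify the polyline; a principal-curve model would refine it instead.
    poly = cv2.approxPolyDP(c, 2.0, True)
    print("closed polyline with", len(poly), "vertices")
```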
Affiliation(s)
- Tao Peng, Department of Radiation Oncology, Medical Artificial Intelligence and Automation Laboratory, University of Texas Southwestern Medical Center, 2280 Inwood Road, Dallas, TX, United States of America
- Caishan Wang, Department of Ultrasound, The Second Affiliated Hospital of Soochow University, Suzhou, Jiangsu, People's Republic of China
- You Zhang, Department of Radiation Oncology, Medical Artificial Intelligence and Automation Laboratory, University of Texas Southwestern Medical Center, 2280 Inwood Road, Dallas, TX, United States of America
- Jing Wang, Department of Radiation Oncology, Medical Artificial Intelligence and Automation Laboratory, University of Texas Southwestern Medical Center, 2280 Inwood Road, Dallas, TX, United States of America
|
77
|
Prediction of Pulmonary Fibrosis Based on X-Rays by Deep Neural Network. JOURNAL OF HEALTHCARE ENGINEERING 2022; 2022:3845008. [PMID: 35378944 PMCID: PMC8976624 DOI: 10.1155/2022/3845008] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/30/2021] [Revised: 02/14/2022] [Accepted: 02/24/2022] [Indexed: 11/21/2022]
Abstract
As a fatal lung disease, pulmonary fibrosis causes irreversible damage to the lung, impairs normal lung function, and eventually leads to death. Its pathogenesis is not completely understood, and there is no radical cure; treatment mainly aims to slow the progression of fibrosis, so early detection allows earlier treatment and prolongs patients' lives. Clinically, the diagnosis of pulmonary fibrosis depends on imaging examinations, lung biopsy, lung function tests, and related assessments. Imaging data such as X-rays are a common examination in clinical medicine and play an important role in the prediction of pulmonary fibrosis: through X-rays, radiologists can clearly see the relevant lung lesions and make a diagnosis. Based on common medical image data, this paper designs models to predict pulmonary fibrosis. The model consists of two parts: first, a neural network segments the lung organs; second, an image classification network maps the lung image to a disease prediction. In both parts, this paper improves on previous research methods; by designing higher-performance neural networks, more optimized results are achieved on the key indicators, which can be applied to real scenarios of pulmonary fibrosis prediction.
|
78
|
Aljabri M, AlAmir M, AlGhamdi M, Abdel-Mottaleb M, Collado-Mesa F. Towards a better understanding of annotation tools for medical imaging: a survey. MULTIMEDIA TOOLS AND APPLICATIONS 2022; 81:25877-25911. [PMID: 35350630 PMCID: PMC8948453 DOI: 10.1007/s11042-022-12100-1] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/06/2021] [Revised: 08/04/2021] [Accepted: 01/03/2022] [Indexed: 05/07/2023]
Abstract
Medical imaging refers to several different technologies used to view the human body in order to diagnose, monitor, or treat medical conditions. Significant expertise is required to efficiently and correctly interpret the images generated by each of these technologies, which include, among others, radiography, ultrasound, and magnetic resonance imaging. Deep learning and machine learning techniques provide different solutions for medical image interpretation, including those associated with detection and diagnosis. Despite the huge success of deep learning algorithms in image analysis, training algorithms to reach human-level performance in these tasks depends on the availability of large amounts of high-quality training data, including high-quality annotations to serve as ground truth. Different annotation tools have been developed to assist with the annotation process. In this survey, we present the currently available annotation tools for medical imaging, including descriptions of their graphical user interfaces (GUIs) and supporting instruments. The main contribution of this study is an intensive review of popular annotation tools and a demonstration of their successful use in annotating medical imaging datasets, to guide researchers in this area.
Affiliation(s)
- Manar Aljabri, Department of Computer Science, Umm Al-Qura University, Mecca, Saudi Arabia
- Manal AlAmir, Department of Computer Science, Umm Al-Qura University, Mecca, Saudi Arabia
- Manal AlGhamdi, Department of Computer Science, Umm Al-Qura University, Mecca, Saudi Arabia
- Fernando Collado-Mesa, Department of Radiology, University of Miami Miller School of Medicine, Miami, FL, USA
|
79
|
Maity A, Nair TR, Mehta S, Prakasam P. Automatic lung parenchyma segmentation using a deep convolutional neural network from chest X-rays. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103398] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
|
80
|
Wang R, Chen S, Ji C, Fan J, Li Y. Boundary-Aware Context Neural Network for Medical Image Segmentation. Med Image Anal 2022; 78:102395. [DOI: 10.1016/j.media.2022.102395] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2020] [Revised: 02/07/2022] [Accepted: 02/12/2022] [Indexed: 12/13/2022]
|
81
|
Wang H, Gu H, Qin P, Wang J. U-shaped GAN for Semi-Supervised Learning and Unsupervised Domain Adaptation in High Resolution Chest Radiograph Segmentation. Front Med (Lausanne) 2022; 8:782664. [PMID: 35096877 PMCID: PMC8792862 DOI: 10.3389/fmed.2021.782664] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2021] [Accepted: 12/14/2021] [Indexed: 01/03/2023] Open
Abstract
Deep learning has achieved considerable success in medical image segmentation. However, applying deep learning in clinical environments often involves two problems: (1) scarcity of annotated data, as data annotation is time-consuming, and (2) varying attributes across datasets due to domain shift. To address these problems, we propose an improved generative adversarial network (GAN) segmentation model, called U-shaped GAN, for limited-annotation chest radiograph datasets. The semi-supervised learning approach and the unsupervised domain adaptation (UDA) approach are modeled in a unified framework for effective segmentation. We improve the GAN by replacing the traditional discriminator with a U-shaped net that predicts a label for each pixel. The proposed U-shaped net is designed for high-resolution radiographs (1,024 × 1,024) to achieve effective segmentation while taking the computational burden into account. Pointwise convolution is applied in U-shaped GAN for dimensionality reduction, which decreases the number of feature maps while retaining their salient features. Moreover, we design the U-shaped net with a pretrained ResNet-50 as the encoder to reduce the computational burden of training the encoder from scratch. A semi-supervised learning approach is proposed that learns from limited annotated data while exploiting additional unannotated data through a pixel-level loss. U-shaped GAN is extended to UDA by taking the source- and target-domain data as the annotated and unannotated data of the semi-supervised approach, respectively. Compared with previous models that deal with the aforementioned problems separately, U-shaped GAN is compatible with the varying data distributions of multiple medical centers, with efficient training and optimized performance. U-shaped GAN can be generalized to chest radiograph segmentation for clinical deployment. We evaluate U-shaped GAN on two chest radiograph datasets, where it significantly outperforms state-of-the-art models.
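A minimal sketch of the central design choice (a discriminator that emits a label per pixel rather than a single scalar, with a pointwise 1x1 convolution for channel reduction) is shown below; the tiny layer sizes are illustrative assumptions, and the paper's actual network uses a pretrained ResNet-50 encoder.

```python
import torch
import torch.nn as nn

class PixelwiseDiscriminator(nn.Module):
    """Tiny U-shaped discriminator sketch: one down/up level, per-pixel output."""

    def __init__(self, in_ch: int = 1, base: int = 16, n_classes: int = 2):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(in_ch, base, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base, base, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Pointwise (1x1) convolution reduces channels cheaply.
        self.bottleneck = nn.Conv2d(base, base // 2, kernel_size=1)
        self.dec = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(base // 2, base, 3, padding=1), nn.ReLU(inplace=True),
            # Per-pixel logits: one map per class, i.e., a label for each pixel.
            nn.Conv2d(base, n_classes, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.dec(self.bottleneck(self.enc(x)))

d = PixelwiseDiscriminator()
logits = d(torch.randn(2, 1, 128, 128))
print(logits.shape)  # torch.Size([2, 2, 128, 128]) -- a prediction per pixel
```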
Affiliation(s)
- Hongyu Wang, Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, China
- Hong Gu, Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, China
- Pan Qin, Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, China
- Jia Wang, Department of Surgery, The Second Hospital of Dalian Medical University, Dalian, China
|
82
|
Tahir AM, Qiblawey Y, Khandakar A, Rahman T, Khurshid U, Musharavati F, Islam MT, Kiranyaz S, Al-Maadeed S, Chowdhury MEH. Deep Learning for Reliable Classification of COVID-19, MERS, and SARS from Chest X-ray Images. Cognit Comput 2022; 14:1752-1772. [PMID: 35035591 PMCID: PMC8747861 DOI: 10.1007/s12559-021-09955-1] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/24/2020] [Accepted: 04/09/2021] [Indexed: 12/29/2022]
Abstract
Novel coronavirus disease (COVID-19) is an extremely contagious and quickly spreading coronavirus infection. Severe acute respiratory syndrome (SARS) and Middle East respiratory syndrome (MERS), which broke out in 2002 and 2012, respectively, belong to the same family of coronaviruses as the virus behind the current COVID-19 pandemic. This work aims to classify COVID-19, SARS, and MERS chest X-ray (CXR) images using deep convolutional neural networks (CNNs). To the best of our knowledge, this classification scheme has never been investigated in the literature. A unique database, called QU-COVID-family, was created, consisting of 423 COVID-19, 144 MERS, and 134 SARS CXR images. In addition, a robust COVID-19 recognition system was proposed that identifies lung regions using a CNN segmentation model (U-Net) and then classifies the segmented lung images as COVID-19, MERS, or SARS using a pre-trained CNN classifier. Furthermore, the Score-CAM visualization method was utilized to visualize the classification output and understand the reasoning behind the decisions of the deep CNNs. Several deep learning classifiers were trained and tested; four outperforming algorithms were reported: SqueezeNet, ResNet18, InceptionV3, and DenseNet201. Original and preprocessed images were used individually and all together as the input(s) to the networks. Two recognition schemes were considered: plain CXR classification and segmented CXR classification. For plain CXRs, InceptionV3 outperformed the other networks with a 3-channel scheme, achieving sensitivities of 99.5%, 93.1%, and 97% for classifying COVID-19, MERS, and SARS images, respectively. For segmented CXRs, InceptionV3 performed best using the original CXR dataset, achieving sensitivities of 96.94%, 79.68%, and 90.26%, respectively. Classification performance degrades with segmented CXRs compared with plain CXRs; however, the results are more reliable, as the network learns from the main region of interest and avoids irrelevant non-lung areas (heart, bones, or text), which was confirmed by the Score-CAM visualization. All networks showed high COVID-19 detection sensitivity (>96%) with the segmented lung images. This indicates the unique radiographic signature of COVID-19 cases in the eyes of AI, a distinction that is often challenging for medical doctors.
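The segment-then-classify scheme reduces to masking the CXR with the U-Net output before classification; the sketch below shows that pattern with stand-in tensors and a toy classifier in place of U-Net and the pre-trained CNNs.

```python
import torch
import torch.nn as nn

def classify_segmented(cxr, lung_mask, classifier):
    """Mask out non-lung regions, then classify (e.g., COVID-19 / MERS / SARS)."""
    masked = cxr * lung_mask          # zero out heart, bones, text, background
    return classifier(masked).softmax(dim=1)

# Stand-ins: a random CXR batch, a crude binary mask, and a tiny CNN classifier.
classifier = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 3),
)
cxr = torch.rand(4, 1, 256, 256)
mask = (torch.rand(4, 1, 256, 256) > 0.5).float()  # a U-Net output in practice
print(classify_segmented(cxr, mask, classifier))   # per-class probabilities
```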
Affiliation(s)
- Anas M. Tahir, Department of Electrical Engineering, Qatar University, 2713 Doha, Qatar
- Yazan Qiblawey, Department of Electrical Engineering, Qatar University, 2713 Doha, Qatar
- Amith Khandakar, Department of Electrical Engineering, Qatar University, 2713 Doha, Qatar
- Tawsifur Rahman, Department of Electrical Engineering, Qatar University, 2713 Doha, Qatar
- Uzair Khurshid, Department of Electrical Engineering, Qatar University, 2713 Doha, Qatar
- Farayi Musharavati, Mechanical & Industrial Engineering Department, Qatar University, 2713 Doha, Qatar
- M. T. Islam, Department of Electrical, Electronic & Systems Engineering, Universiti Kebangsaan Malaysia, 43600 Bangi, Selangor, Malaysia
- Serkan Kiranyaz, Department of Electrical Engineering, Qatar University, 2713 Doha, Qatar
- Somaya Al-Maadeed, Department of Computer Science and Engineering, Qatar University, 2713 Doha, Qatar
|
83
|
Agrawal T, Choudhary P. Segmentation and classification on chest radiography: a systematic survey. THE VISUAL COMPUTER 2022; 39:875-913. [PMID: 35035008 PMCID: PMC8741572 DOI: 10.1007/s00371-021-02352-7] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 11/01/2021] [Indexed: 06/14/2023]
Abstract
Chest radiography (X-ray) is the most common diagnostic method for pulmonary disorders, and a trained radiologist is required to interpret the radiographs. But even experienced radiologists can sometimes misinterpret the findings, which leads to the need for computer-aided detection and diagnosis. For decades, researchers detected pulmonary disorders automatically using traditional computer vision (CV) methods. The availability of large annotated datasets and computing hardware has since made it possible for deep learning to dominate the area; it is now the modus operandi for feature extraction, segmentation, detection, and classification tasks in medical imaging analysis. This paper focuses on research conducted using chest X-rays for lung segmentation and the detection/classification of pulmonary disorders on publicly available datasets. Studies that use Generative Adversarial Network (GAN) models for segmentation and classification on chest X-rays are also included, as GAN has gained the interest of the CV community for its ability to mitigate medical data scarcity. We also include research conducted before the popularity of deep learning models to give a clear picture of the field. Many surveys have been published, but none of them is dedicated to chest X-rays. This study will help readers learn about the existing techniques and approaches and their significance.
Affiliation(s)
- Tarun Agrawal, Department of Computer Science and Engineering, National Institute of Technology Hamirpur, Hamirpur, Himachal Pradesh 177005, India
- Prakash Choudhary, Department of Computer Science and Engineering, National Institute of Technology Hamirpur, Hamirpur, Himachal Pradesh 177005, India
|
84
|
Bourkache N, Laghrouche M, Lahdir M, Sidhom S. Images indexing and matched assessment of semantics and visuals similarities applied to a medical learning X-ray image base. JOURNAL OF X-RAY SCIENCE AND TECHNOLOGY 2022; 30:919-939. [PMID: 35754253 DOI: 10.3233/xst-221180] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
BACKGROUND: Medical diagnostic support systems are important tools in the field of radiology. However, the precision obtained when exploiting image datasets with high homogeneity needs to be improved. OBJECTIVE: To develop a new learning system dedicated to public health practitioners. This study presents an upgraded version dedicated to radiology experts for better clinical decision-making when diagnosing and treating patients (a CAD approach). METHODS: Our system is a hybrid approach based on matching the semantic and visual attributes of images; it combines two complementary subsystems to form an intermodal system. The first, named α, is based on semantic attributes: indexing and image retrieval rely on specific keywords. The second, named β, is based on low-level attributes: images are represented by vectors characterizing their digital content (color, texture, and shape). Our image database consists of 930 X-ray images, including 320 mammograms acquired from the mini-MIAS database of mammograms and 610 X-rays acquired from the Public Hospital Establishment (EPH-Rouiba, Algeria). Combining the two subsystems gives rise to the intermodal system: the α-subsystem provides an overall result (based on semantic descriptors), and the β-subsystem (low-level descriptors) then refines this result and increases relevance. RESULTS: Our system can perform a specific image search (in a database of images with very high homogeneity) with a precision of around 90% at a recall of 25%. The average (overall) precision of the system exceeds 70%. CONCLUSION: The results obtained are very encouraging and demonstrate the efficiency of our approach, particularly for the intermodal system.
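The α-then-β cascade can be sketched as keyword filtering followed by feature-distance ranking; the toy records and three-dimensional feature vectors below are illustrative stand-ins, not the system's actual descriptors.

```python
import numpy as np

# Toy index: each image has keywords (semantic, alpha-subsystem) and a
# low-level feature vector (visual, beta-subsystem). Values are illustrative.
database = [
    {"id": 0, "keywords": {"mammogram", "mass"},   "feat": np.array([0.9, 0.1, 0.3])},
    {"id": 1, "keywords": {"mammogram", "normal"}, "feat": np.array([0.2, 0.8, 0.5])},
    {"id": 2, "keywords": {"chest", "fracture"},   "feat": np.array([0.4, 0.4, 0.9])},
]

def intermodal_query(query_keywords, query_feat, db):
    # Alpha-subsystem: coarse result set from keyword matching.
    candidates = [r for r in db if r["keywords"] & query_keywords]
    # Beta-subsystem: refine by distance in low-level feature space.
    return sorted(candidates,
                  key=lambda r: np.linalg.norm(r["feat"] - query_feat))

hits = intermodal_query({"mammogram"}, np.array([0.85, 0.2, 0.25]), database)
print([h["id"] for h in hits])  # [0, 1] -- visually closest first
```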
Affiliation(s)
- Noureddine Bourkache, Laboratoire d'Analyse et Modélisation des Phénomènes Aléatoires (LAMPA), UMMTO, Tizi-Ouzou, Algeria
- Mourad Laghrouche, Laboratoire d'Analyse et Modélisation des Phénomènes Aléatoires (LAMPA), UMMTO, Tizi-Ouzou, Algeria
- Mourad Lahdir, Laboratoire d'Analyse et Modélisation des Phénomènes Aléatoires (LAMPA), UMMTO, Tizi-Ouzou, Algeria
- Sahbi Sidhom, Laboratoire Lorrain en Informatique et ses Applications (LORIA Lab), University of Lorraine (Nancy), France
|
85
|
Rajaraman S, Zamzmi G, Antani SK. Novel loss functions for ensemble-based medical image classification. PLoS One 2021; 16:e0261307. [PMID: 34968393 PMCID: PMC8718001 DOI: 10.1371/journal.pone.0261307] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2021] [Accepted: 11/29/2021] [Indexed: 01/08/2023] Open
Abstract
Medical images commonly exhibit multiple abnormalities. Predicting them requires multi-class classifiers whose training and reliable performance can be affected by a combination of factors, such as dataset size, data source, distribution, and the loss function used to train the deep neural network. Currently, cross-entropy loss remains the de facto loss function for training deep learning classifiers. This loss function, however, asserts equal learning from all classes, leading to a bias toward the majority class. Although the choice of loss function impacts model performance, to the best of our knowledge, no existing literature performs a comprehensive analysis and selection of an appropriate loss function for the classification task under study. In this work, we benchmark various state-of-the-art loss functions, critically analyze model performance, and propose improved loss functions for a multi-class classification task. We select a pediatric chest X-ray (CXR) dataset that includes images with no abnormality (normal) and images exhibiting manifestations consistent with bacterial and viral pneumonia. We construct prediction-level and model-level ensembles to improve classification performance. Our results show that, compared with the individual models and the state-of-the-art literature, weighted averaging of the predictions for the top-3 and top-5 model-level ensembles delivered significantly superior classification performance (p < 0.05) in terms of the MCC metric (0.9068; 95% confidence interval 0.8839-0.9297). Finally, we performed localization studies to interpret model behavior and confirmed that the individual models and ensembles learned task-specific features and highlighted disease-specific regions of interest. The code is available at https://github.com/sivaramakrishnan-rajaraman/multiloss_ensemble_models.
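A prediction-level weighted-average ensemble scored with MCC can be sketched as follows; the weights, probabilities, and labels are random stand-ins, not the paper's models or data.

```python
import numpy as np
from sklearn.metrics import matthews_corrcoef

# Per-model class-probability predictions for 6 samples and 3 classes
# (normal / bacterial / viral). Values are illustrative stand-ins.
rng = np.random.default_rng(0)
model_probs = rng.dirichlet(np.ones(3), size=(3, 6))  # 3 models x 6 samples
y_true = np.array([0, 1, 2, 1, 0, 2])

# Prediction-level ensemble: weighted average of the probability vectors.
weights = np.array([0.5, 0.3, 0.2])          # e.g., tuned on validation data
ensemble = np.tensordot(weights, model_probs, axes=1)  # shape (6, 3)
y_pred = ensemble.argmax(axis=1)

print("MCC:", matthews_corrcoef(y_true, y_pred))
```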
Affiliation(s)
- Ghada Zamzmi, National Library of Medicine, National Institutes of Health, Bethesda, MD, United States of America
- Sameer K. Antani, National Library of Medicine, National Institutes of Health, Bethesda, MD, United States of America
|
86
|
Arora R, Saini I, Sood N. Multi-label segmentation and detection of COVID-19 abnormalities from chest radiographs using deep learning. OPTIK 2021; 246:167780. [PMID: 34393275 PMCID: PMC8349421 DOI: 10.1016/j.ijleo.2021.167780] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/04/2021] [Revised: 07/09/2021] [Accepted: 08/03/2021] [Indexed: 06/01/2023]
Abstract
Due to COVID-19, demand for chest radiographs (CXRs) has increased exponentially. We therefore present a novel, fully automatic modified Attention U-Net (CXAU-Net) multi-class segmentation deep model that can detect common findings of COVID-19 in CXR images. The architectural design of this model includes three novelties: first, an Attention U-Net model with channel and spatial attention blocks is designed to precisely localize multiple pathologies; second, dilated convolutions improve the model's sensitivity to foreground pixels through an enlarged receptive field; and third, a newly proposed hybrid loss function combines both area and size information for optimizing the model. The proposed model achieves average accuracy, DSC, and Jaccard index scores of 0.951, 0.993, and 0.984 (image-based approach) and 0.921, 0.985, and 0.973 (patch-based approach) for multi-class segmentation on the Chest X-ray 14 dataset, as well as average DSC and Jaccard index scores of 0.998 and 0.989 for binary-class segmentation on the Japanese Society of Radiological Technology (JSRT) CXR dataset. These results illustrate that the proposed model outperforms state-of-the-art segmentation methods.
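The general pattern of such a hybrid loss, mixing a pixel-wise term with a region-overlap term, can be sketched as BCE plus soft Dice; this pairing is an assumption for illustration, not the paper's exact area-and-size formulation.

```python
import torch
import torch.nn.functional as F

def hybrid_loss(logits, target, alpha=0.5, eps=1e-6):
    """Illustrative hybrid segmentation loss: pixel-wise BCE + soft Dice.

    The paper combines area and size terms; this sketch only shows the
    common pattern of blending two complementary loss components.
    """
    bce = F.binary_cross_entropy_with_logits(logits, target)
    probs = torch.sigmoid(logits)
    inter = (probs * target).sum(dim=(1, 2, 3))
    union = probs.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    dice = 1 - ((2 * inter + eps) / (union + eps)).mean()
    return alpha * bce + (1 - alpha) * dice

logits = torch.randn(2, 1, 64, 64, requires_grad=True)
target = (torch.rand(2, 1, 64, 64) > 0.5).float()
print(hybrid_loss(logits, target).item())
```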
Affiliation(s)
- Ruchika Arora, Department of Electronics and Communication Engineering, Dr. B. R. Ambedkar National Institute of Technology Jalandhar, Jalandhar 144011, India
- Indu Saini, Department of Electronics and Communication Engineering, Dr. B. R. Ambedkar National Institute of Technology Jalandhar, Jalandhar 144011, India
- Neetu Sood, Department of Electronics and Communication Engineering, Dr. B. R. Ambedkar National Institute of Technology Jalandhar, Jalandhar 144011, India
|
87
|
Kaur A, Kaur L, Singh A. GA-UNet: UNet-based framework for segmentation of 2D and 3D medical images applicable on heterogeneous datasets. Neural Comput Appl 2021. [DOI: 10.1007/s00521-021-06134-z] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
|
88
|
Rashid N, Hossain MAF, Ali M, Islam Sukanya M, Mahmud T, Fattah SA. AutoCovNet: Unsupervised feature learning using autoencoder and feature merging for detection of COVID-19 from chest X-ray images. Biocybern Biomed Eng 2021; 41:1685-1701. [PMID: 34690398 PMCID: PMC8526490 DOI: 10.1016/j.bbe.2021.09.004] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2021] [Revised: 09/16/2021] [Accepted: 09/18/2021] [Indexed: 12/11/2022]
Abstract
With the onset of the COVID-19 pandemic, automated diagnosis has become one of the most trending research topics for faster mass screening. Deep learning-based approaches have been established as the most promising methods in this regard; however, the limited amount of labeled data is the main bottleneck of these data-hungry methods. In this paper, a two-stage deep CNN-based scheme is proposed to detect COVID-19 from chest X-ray images while achieving optimum performance with limited training images. In the first stage, an encoder-decoder autoencoder network is trained on chest X-ray images in an unsupervised manner, learning to reconstruct the X-ray images. An encoder-merging network is proposed for the second stage, consisting of different layers of the encoder model followed by a merging network. Here, the encoder model is initialized with the weights learned in the first stage, and the outputs from its different layers are used effectively by connecting them to the proposed merging network, which introduces an intelligent feature-merging scheme. Finally, the encoder-merging network is trained for feature extraction of the X-ray images in a supervised manner, and the resulting features are used in the classification layers of the proposed architecture. Considering the final classification task, an EfficientNet-B4 network is utilized in both stages. End-to-end training is performed for datasets containing the classes COVID-19, Normal, Bacterial Pneumonia, and Viral Pneumonia. The proposed method offers very satisfactory performance compared with state-of-the-art methods, achieving an accuracy of 90.13% on 4-class, 96.45% on 3-class, and 99.39% on 2-class classification.
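The two-stage pattern (unsupervised reconstruction pretraining, then reuse of the encoder in a supervised classifier) can be sketched with a toy network; the paper's actual backbone is EfficientNet-B4 with a multi-layer merging network, which this stand-in omits.

```python
import torch
import torch.nn as nn

# Stage 1: an autoencoder trained (unsupervised) to reconstruct X-rays.
encoder = nn.Sequential(
    nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
)
decoder = nn.Sequential(
    nn.ConvTranspose2d(16, 8, 2, stride=2), nn.ReLU(),
    nn.ConvTranspose2d(8, 1, 2, stride=2), nn.Sigmoid(),
)
x = torch.rand(4, 1, 64, 64)
recon = decoder(encoder(x))
loss = nn.functional.mse_loss(recon, x)   # reconstruction objective
loss.backward()

# Stage 2: reuse the pretrained encoder weights inside a classifier head.
classifier = nn.Sequential(
    encoder,                               # initialized from stage 1
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 4),                      # COVID / normal / bacterial / viral
)
print(classifier(x).shape)  # torch.Size([4, 4])
```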
Affiliation(s)
- Nayeeb Rashid, Department of EEE, BUET, ECE Building, West Palashi, Dhaka 1205, Bangladesh
- Mohammad Ali, Department of EEE, BUET, ECE Building, West Palashi, Dhaka 1205, Bangladesh
- Tanvir Mahmud, Department of EEE, BUET, ECE Building, West Palashi, Dhaka 1205, Bangladesh
|
89
|
Lee S, Summers RM. Clinical Artificial Intelligence Applications in Radiology: Chest and Abdomen. Radiol Clin North Am 2021; 59:987-1002. [PMID: 34689882 DOI: 10.1016/j.rcl.2021.07.001] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
Abstract
Organ segmentation, chest radiograph classification, and lung and liver nodule detection are some of the most popular artificial intelligence (AI) tasks in chest and abdominal radiology due to the wide availability of public datasets. AI algorithms have achieved performance comparable to humans, in less time, for several organ segmentation tasks and some lesion detection and classification tasks. This article reviews currently published work on AI applied to chest and abdominal radiology, including organ segmentation, lesion detection, classification, and prognosis prediction.
Affiliation(s)
- Sungwon Lee, Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Department of Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Building 10, Room 1C224D, 10 Center Drive, Bethesda, MD 20892-1182, USA
- Ronald M Summers, Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Department of Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Building 10, Room 1C224D, 10 Center Drive, Bethesda, MD 20892-1182, USA
|
90
|
Kobat MA, Kivrak T, Barua PD, Tuncer T, Dogan S, Tan RS, Ciaccio EJ, Acharya UR. Automated COVID-19 and Heart Failure Detection Using DNA Pattern Technique with Cough Sounds. Diagnostics (Basel) 2021; 11:1962. [PMID: 34829308 PMCID: PMC8620352 DOI: 10.3390/diagnostics11111962] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2021] [Revised: 10/17/2021] [Accepted: 10/19/2021] [Indexed: 01/22/2023] Open
Abstract
COVID-19 and heart failure (HF) are common disorders, and although they share some similar symptoms, they require different treatments. Accurate diagnosis of these disorders is crucial for disease management, including patient isolation to curb the spread of COVID-19. In this work, we aim to develop a computer-aided diagnostic system that can accurately differentiate three classes (normal, COVID-19, and HF) using cough sounds. A novel handcrafted model was used to classify COVID-19 vs. healthy (Case 1), HF vs. healthy (Case 2), and COVID-19 vs. HF vs. healthy (Case 3) automatically using deoxyribonucleic acid (DNA) patterns. The model was developed using cough sounds collected from 241 COVID-19 patients, 244 HF patients, and 247 healthy subjects using a mobile phone. To the best of our knowledge, this is the first work to automatically classify healthy subjects, HF patients, and COVID-19 patients using cough sound signals. Our proposed model comprises a graph-based local feature generator (DNA pattern), an iterative maximum relevance minimum redundancy (ImRMR) feature selector, and a k-nearest neighbor classifier. The model attained accuracies of 100.0%, 99.38%, and 99.49% for Case 1, Case 2, and Case 3, respectively. The developed system is completely automated and economical, and it can be utilized to accurately distinguish COVID-19 from HF using cough sounds.
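The back end of such a pipeline, feature selection followed by a kNN classifier, can be sketched with scikit-learn; random vectors stand in for the DNA-pattern features, and a simple univariate selector approximates ImRMR, which actually iterates on relevance and redundancy.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Stand-ins: 60 cough recordings, each already reduced to a 128-dim
# handcrafted feature vector (the role played by the DNA-pattern features).
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 128))
y = rng.integers(0, 3, size=60)   # 0=healthy, 1=COVID-19, 2=heart failure

# Univariate F-score selection approximates the ImRMR selection step.
model = make_pipeline(SelectKBest(f_classif, k=32),
                      KNeighborsClassifier(n_neighbors=5))
model.fit(X[:50], y[:50])
print(model.score(X[50:], y[50:]))  # held-out accuracy (random here)
```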
Affiliation(s)
- Mehmet Ali Kobat, Department of Cardiology, Firat University Hospital, Firat University, Elazig 23119, Turkey
- Tarik Kivrak, Department of Cardiology, Firat University Hospital, Firat University, Elazig 23119, Turkey
- Prabal Datta Barua, School of Management & Enterprise, University of Southern Queensland, Toowoomba, QLD 4350, Australia; Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, NSW 2007, Australia; Cogninet Brain Team, Cogninet Australia, Sydney, NSW 2010, Australia
- Turker Tuncer, Department of Digital Forensics Engineering, College of Technology, Firat University, Elazig 23119, Turkey
- Sengul Dogan, Department of Digital Forensics Engineering, College of Technology, Firat University, Elazig 23119, Turkey
- Ru-San Tan, Department of Cardiology, National Heart Centre Singapore, Singapore 169609, Singapore; Department of Cardiology, Duke-NUS Graduate Medical School, Singapore 169857, Singapore
- Edward J. Ciaccio, Department of Medicine, Celiac Disease Center, Columbia University Irving Medical Center, New York, NY 10032, USA
- U. Rajendra Acharya, Department of Electronics and Computer Engineering, Ngee Ann Polytechnic, Singapore 599489, Singapore; Department of Biomedical Engineering, School of Science and Technology, SUSS University, Clementi 599494, Singapore; Department of Biomedical Informatics and Medical Engineering, Asia University, Taichung 41354, Taiwan
|
91
|
|
92
|
Zhou YJ, Xie XL, Zhou XH, Liu SQ, Bian GB, Hou ZG. A Real-Time Multifunctional Framework for Guidewire Morphological and Positional Analysis in Interventional X-Ray Fluoroscopy. IEEE Trans Cogn Dev Syst 2021. [DOI: 10.1109/tcds.2020.3023952] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
|
93
|
Nabulsi Z, Sellergren A, Jamshy S, Lau C, Santos E, Kiraly AP, Ye W, Yang J, Pilgrim R, Kazemzadeh S, Yu J, Kalidindi SR, Etemadi M, Garcia-Vicente F, Melnick D, Corrado GS, Peng L, Eswaran K, Tse D, Beladia N, Liu Y, Chen PHC, Shetty S. Deep learning for distinguishing normal versus abnormal chest radiographs and generalization to two unseen diseases tuberculosis and COVID-19. Sci Rep 2021; 11:15523. [PMID: 34471144 PMCID: PMC8410908 DOI: 10.1038/s41598-021-93967-2] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2021] [Accepted: 07/01/2021] [Indexed: 01/20/2023] Open
Abstract
Chest radiography (CXR) is the most widely used thoracic clinical imaging modality and is crucial for guiding the management of cardiothoracic conditions. The detection of specific CXR findings has been the main focus of several artificial intelligence (AI) systems. However, the wide range of possible CXR abnormalities makes it impractical to detect every possible condition by building multiple separate systems, each of which detects one or more pre-specified conditions. In this work, we developed and evaluated an AI system to classify CXRs as normal or abnormal. For training and tuning the system, we used a de-identified dataset of 248,445 patients from a multi-city hospital network in India. To assess generalizability, we evaluated our system using 6 international datasets from India, China, and the United States. Of these datasets, 4 focused on diseases that the AI was not trained to detect: 2 with tuberculosis and 2 with coronavirus disease 2019. Our results suggest that an AI system trained on a large dataset containing a diverse array of CXR abnormalities generalizes to new patient populations and unseen diseases. In a simulated workflow in which the AI system prioritized abnormal cases, the turnaround time for abnormal cases was reduced by 7-28%. These results represent an important step toward evaluating whether AI can be safely used to flag cases in a general setting where previously unseen abnormalities exist. Lastly, to facilitate the continued development of AI models for CXR, we release our collected labels for the publicly available dataset.
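The simulated prioritization workflow amounts to reordering the reading worklist by the AI's abnormality score, as in this small sketch with made-up cases and scores.

```python
# Simulated triage: the AI's abnormality score reorders the reading worklist
# so likely-abnormal studies are read first. Cases and scores are illustrative.
worklist = [
    {"study": "cxr_001", "ai_score": 0.08},
    {"study": "cxr_002", "ai_score": 0.93},
    {"study": "cxr_003", "ai_score": 0.41},
    {"study": "cxr_004", "ai_score": 0.87},
]

prioritized = sorted(worklist, key=lambda s: s["ai_score"], reverse=True)
for rank, s in enumerate(prioritized, start=1):
    print(rank, s["study"], s["ai_score"])
# cxr_002 and cxr_004 reach the radiologist first, which is what shortens
# the turnaround time for abnormal cases in the simulated workflow.
```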
Affiliation(s)
- Charles Lau, Google Health via Advanced Clinical, Deerfield, USA
- Jie Yang, Google Health, Google, Palo Alto, USA
- Jin Yu, Google Health, Google, Palo Alto, USA
- Lily Peng, Google Health, Google, Palo Alto, USA
- Yun Liu, Google Health, Google, Palo Alto, USA
|
94
|
CNL-UNet: A novel lightweight deep learning architecture for multimodal biomedical image segmentation with false output suppression. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102959] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022]
|
95
|
Anatomic Point-Based Lung Region with Zone Identification for Radiologist Annotation and Machine Learning for Chest Radiographs. J Digit Imaging 2021; 34:922-931. [PMID: 34327625 DOI: 10.1007/s10278-021-00494-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2020] [Revised: 06/02/2021] [Accepted: 07/05/2021] [Indexed: 10/20/2022] Open
Abstract
Our objective is to investigate the reliability and usefulness of anatomic point-based lung zone segmentation on chest radiographs (CXRs) as a reference-standard framework and to evaluate the accuracy of automated point placement. Two hundred frontal CXRs were presented to two radiologists, who identified five anatomic points: two at the lung apices, one at the top of the aortic arch, and two at the costophrenic angles. Of these 1000 anatomic points, 161 (16.1%) were obscured (mostly by pleural effusions). Observer variation was investigated. Eight anatomic zones were then automatically generated from the manually placed anatomic points, and a prototype algorithm was developed that uses the point-based lung zone segmentation to detect cardiomegaly and the levels of the diaphragm and pleural effusions. A trained U-Net neural network was used to automatically place these five points within 379 CXRs of an independent database. Intra- and inter-observer variation in the mean distance between corresponding anatomic points was larger for obscured points (8.7 mm and 20 mm, respectively) than for visible points (4.3 mm and 7.6 mm, respectively). The computer algorithm using the point-based lung zone segmentation could diagnostically measure the cardiothoracic ratio, diaphragm position, and pleural effusions. The mean distance between corresponding points placed by the radiologist and by the neural network was 6.2 mm. The network identified 95% of the radiologist-indicated points, with only 3% of network-identified points being false positives. In conclusion, a reliable anatomic point-based lung segmentation method for CXRs has been developed, with expected utility for establishing reference standards for machine learning applications.
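One measurement the point-based zones enable is the cardiothoracic ratio; the sketch below derives a thoracic width from the two costophrenic-angle points, with illustrative coordinates and a hypothetical cardiac width, since cardiac borders are not among the five points.

```python
import numpy as np

# Five anatomic points (x, y) in pixels. Coordinates are illustrative, not
# from the paper: right/left apex, aortic-arch top, right/left
# costophrenic (CP) angle.
points = {
    "apex_r": np.array([210.0, 140.0]), "apex_l": np.array([540.0, 150.0]),
    "arch":   np.array([380.0, 300.0]),
    "cp_r":   np.array([150.0, 820.0]), "cp_l":   np.array([600.0, 830.0]),
}

# Thoracic width approximated as the horizontal CP-angle separation.
thoracic_width = abs(points["cp_l"][0] - points["cp_r"][0])

# The cardiac width would come from additional landmarks or segmentation;
# a hypothetical measured value is used here just to show the ratio.
cardiac_width = 210.0
ctr = cardiac_width / thoracic_width
print(f"CTR = {ctr:.2f}  (cardiomegaly is often flagged when CTR > 0.50)")
```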
|
96
|
Yousefi B, Kawakita S, Amini A, Akbari H, Advani SM, Akhloufi M, Maldague XPV, Ahadian S. Impartially Validated Multiple Deep-Chain Models to Detect COVID-19 in Chest X-ray Using Latent Space Radiomics. J Clin Med 2021; 10:3100. [PMID: 34300266 PMCID: PMC8304336 DOI: 10.3390/jcm10143100] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2021] [Revised: 07/01/2021] [Accepted: 07/07/2021] [Indexed: 12/31/2022] Open
Abstract
The COVID-19 pandemic continues to spread globally at a rapid pace, and early detection remains a challenge due to the virus's high infectivity and limited testing availability. Chest X-ray (CXR) is one of the most readily available imaging modalities in clinical routine and is often used for diagnostic purposes. Here, we propose computer-aided detection of COVID-19 in CXR imaging using deep and conventional radiomic features. First, we used a 2D U-Net model to segment the lung lobes. Then, we extracted deep latent-space radiomics by applying a deep convolutional autoencoder (ConvAE) with internal dense layers to extract low-dimensional deep radiomics. We used the Johnson-Lindenstrauss (JL) lemma, Laplacian scoring (LS), and principal component analysis (PCA) to reduce dimensionality in conventional radiomics. The generated low-dimensional deep and conventional radiomics were integrated to classify COVID-19 against pneumonia and healthy patients. We used 704 CXR images to train the entire model (i.e., U-Net, ConvAE, and feature selection in conventional radiomics) and then independently validated the whole system on a study cohort of 1597 cases. We trained and tested a random forest model for detecting COVID-19 cases through multivariate binary-class and multiclass classification. The maximal (full multivariate) model, using a combination of the two radiomic groups, yields cross-validated classification accuracies of 72.6% (69.4-74.4%) for multiclass and 89.6% (88.4-90.7%) for binary-class classification.
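The conventional-radiomics branch, dimensionality reduction feeding a random forest, can be sketched with scikit-learn; the feature matrix and labels below are random stand-ins, and PCA represents just one of the three reduction methods mentioned.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

# Stand-in radiomic matrix: 120 patients x 400 conventional features.
rng = np.random.default_rng(2)
X = rng.normal(size=(120, 400))
y = rng.integers(0, 3, size=120)  # 0=healthy, 1=pneumonia, 2=COVID-19

# PCA plays the dimensionality-reduction role described in the paper
# (alongside JL projection and Laplacian scoring); the random forest
# performs the multiclass decision.
model = make_pipeline(PCA(n_components=32),
                      RandomForestClassifier(n_estimators=200, random_state=0))
model.fit(X[:100], y[:100])
print(model.predict(X[100:]))
```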
Affiliation(s)
- Bardia Yousefi, Department of Electrical and Computer Engineering, Laval University, Quebec City, QC G1V 0A6, Canada
- Satoru Kawakita, Terasaki Institute for Biomedical Innovation, Los Angeles, CA 90024, USA
- Arya Amini, Department of Radiation Oncology, City of Hope Comprehensive Cancer Center, Duarte, CA 91010, USA
- Hamed Akbari, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, USA
- Shailesh M. Advani, Terasaki Institute for Biomedical Innovation, Los Angeles, CA 90024, USA
- Moulay Akhloufi, Department of Computer Science, Perception Robotics and Intelligent Machines (PRIME) Research Group, University of Moncton, New Brunswick, NB E1A 3E9, Canada
- Xavier P. V. Maldague, Department of Electrical and Computer Engineering, Laval University, Quebec City, QC G1V 0A6, Canada
- Samad Ahadian, Terasaki Institute for Biomedical Innovation, Los Angeles, CA 90024, USA
|
97
|
|
98
|
Qayyum A, Razzak I, Tanveer M, Kumar A. Depth-wise dense neural network for automatic COVID19 infection detection and diagnosis. ANNALS OF OPERATIONS RESEARCH 2021:1-21. [PMID: 34248242 PMCID: PMC8254442 DOI: 10.1007/s10479-021-04154-5] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 06/08/2021] [Indexed: 05/09/2023]
Abstract
Coronavirus (COVID-19) and its new strains have caused massive damage to society and brought panic worldwide. Automated analysis of medical images such as X-rays, CT, and MRI offers excellent potential for early diagnosis to augment the traditional healthcare strategy against COVID-19. However, identifying COVID-infected lungs in X-rays is challenging due to the high variation in infection characteristics and the low-intensity contrast between normal tissue and infections. To identify the infected area, we present in this work a novel depth-wise dense network that uniformly scales all dimensions and performs multilevel feature embedding, resulting in increased feature representations. The inclusion of depth-wise components and squeeze-and-excitation results in better performance by capturing a larger receptive field than the traditional convolutional layer, with almost the same number of parameters. To improve performance and enlarge the training set, we combined three large-scale datasets. Extensive experiments on benchmark X-ray datasets demonstrate the effectiveness of the proposed framework, which achieves 96.17% accuracy in comparison with cutting-edge methods primarily based on transfer learning.
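The depth-wise plus squeeze-and-excitation building block can be sketched in PyTorch as follows; the channel counts and layout are illustrative, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class DepthwiseSEBlock(nn.Module):
    """Depth-wise conv + squeeze-and-excitation: a larger receptive field per
    parameter than a standard convolution. Sizes are illustrative."""

    def __init__(self, ch: int = 32, reduction: int = 8):
        super().__init__()
        # groups=ch makes the convolution depth-wise (one filter per channel).
        self.dw = nn.Conv2d(ch, ch, kernel_size=3, padding=1, groups=ch)
        self.pw = nn.Conv2d(ch, ch, kernel_size=1)  # pointwise channel mixing
        self.se = nn.Sequential(                     # squeeze-and-excitation
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // reduction, ch, 1), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = torch.relu(self.pw(self.dw(x)))
        return y * self.se(y)   # channel-wise reweighting

block = DepthwiseSEBlock()
print(block(torch.randn(1, 32, 56, 56)).shape)  # torch.Size([1, 32, 56, 56])
```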
|
99
|
Abbas A, Abdelsamea MM, Gaber MM. 4S-DT: Self-Supervised Super Sample Decomposition for Transfer Learning With Application to COVID-19 Detection. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2021; 32:2798-2808. [PMID: 34038371 PMCID: PMC8544943 DOI: 10.1109/tnnls.2021.3082015] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/15/2020] [Revised: 01/23/2021] [Accepted: 05/17/2021] [Indexed: 05/12/2023]
Abstract
Due to the high availability of large-scale annotated image datasets, knowledge transfer from pretrained models has shown outstanding performance in medical image classification. However, building a robust image classification model for datasets with data irregularity or imbalanced classes can be very challenging, especially in the medical imaging domain. In this article, we propose a novel deep convolutional neural network, which we call the self-supervised super sample decomposition for transfer learning (4S-DT) model. 4S-DT encourages coarse-to-fine transfer learning from large-scale image recognition tasks to a specific chest X-ray image classification task using a generic self-supervised sample decomposition approach. Our main contribution is a novel self-supervised learning mechanism guided by a super sample decomposition of unlabeled chest X-ray images. 4S-DT helps improve the robustness of knowledge transfer via a downstream learning strategy with a class-decomposition (CD) layer that simplifies the local structure of the data, and it can deal with any irregularities in the image dataset by investigating class boundaries using the downstream CD mechanism. We used 50,000 unlabeled chest X-ray images to achieve our coarse-to-fine transfer learning, with COVID-19 detection as an exemplar application. 4S-DT achieved a high accuracy of 99.8% on the larger of the two datasets used in the experimental study and an accuracy of 97.54% on the smaller dataset, which was enriched with augmented images; all real COVID-19 cases were detected.
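The sample-decomposition idea, clustering unlabeled images into pseudo-classes that supervise a pretraining step, can be sketched with k-means; the random descriptors and cluster count are illustrative simplifications of 4S-DT.

```python
import numpy as np
from sklearn.cluster import KMeans

# Stand-in for "super sample decomposition": cluster unlabeled image
# descriptors into pseudo-classes, then pretrain a classifier on the
# cluster ids as self-supervised labels (a simplification of 4S-DT).
rng = np.random.default_rng(3)
unlabeled_feats = rng.normal(size=(500, 64))   # e.g., CNN descriptors of CXRs

kmeans = KMeans(n_clusters=10, n_init=10, random_state=0)
pseudo_labels = kmeans.fit_predict(unlabeled_feats)

print(np.bincount(pseudo_labels))  # pseudo-class sizes used for pretraining
```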
Affiliation(s)
- Asmaa Abbas, Department of Mathematics, University of Assiut, Asyut 71515, Egypt
- Mohammed M. Abdelsamea, School of Computing and Digital Technology, Birmingham City University, Birmingham B4 7AP, U.K.; Department of Computer Science, Faculty of Computers and Information, University of Assiut, Asyut 71515, Egypt
- Mohamed Medhat Gaber, School of Computing and Digital Technology, Birmingham City University, Birmingham B4 7AP, U.K.
|
100
|
Singh A, Lall B, Panigrahi B, Agrawal A, Agrawal A, Thangakunam B, Christopher D. Deep LF-Net: Semantic lung segmentation from Indian chest radiographs including severely unhealthy images. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102666] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
|