1. Spyridonos P, Gaitanis G, Likas A, Bassukas ID. A convolutional neural network based system for detection of actinic keratosis in clinical images of cutaneous field cancerization. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2022.104059
2. Implementation of an automated workflow for image-based seafloor classification with examples from manganese-nodule covered seabed areas in the Central Pacific Ocean. Sci Rep 2022; 12:15338. PMID: 36096920; PMCID: PMC9468037; DOI: 10.1038/s41598-022-19070-2
Abstract
Mapping and monitoring of seafloor habitats are key tasks for fully understanding ocean ecosystems and their resilience, which contributes towards sustainable use of ocean resources. Habitat mapping relies on seafloor classification, typically based on acoustic methods, and on ground truthing through direct sampling and optical imaging. With the increasing capability to record high-resolution underwater images, manual approaches for analyzing these images to create seafloor classifications are no longer feasible. Automated workflows have been proposed as a solution, in which algorithms assign pre-defined seafloor categories to each image. However, to provide consistent and repeatable analysis, these automated workflows need to address underwater illumination artefacts, variances in resolution, and class imbalances, all of which could bias the classification. Here, we present a generic implementation of an Automated and Integrated Seafloor Classification Workflow (AI-SCW). The workflow aims to classify the seafloor into habitat categories based on automated analysis of optical underwater images with only a minimal amount of human annotation. AI-SCW incorporates laser-point detection for scale determination and color normalization. It further includes semi-automatic generation of the training data set for fitting the seafloor classifier. As a case study, we applied the workflow to an example seafloor image dataset from the Belgian and German contract areas for manganese-nodule exploration in the Pacific Ocean. Based on this, we provide seafloor classifications along the camera deployment tracks and discuss the results in the context of seafloor multibeam bathymetry. Our results show that the seafloor in the Belgian area predominantly comprises densely distributed nodules, which are intermingled with qualitatively larger-sized nodules at local elevations and within depressions. The German area, on the other hand, primarily comprises nodules that only partly cover the seabed; these occur alongside turned-over sediment (artificial seafloor) caused by the settling plume from a dredging experiment conducted in the area.
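The color-normalization step of such a workflow is not detailed in the abstract; as a rough sketch, a gray-world white balance (a common baseline correction for underwater color casts, used here purely as an assumption, not AI-SCW's actual method) could look like:

```python
import numpy as np

def gray_world_normalize(img):
    """Scale each color channel so its mean matches the global mean,
    a simple correction for the blue/green cast of underwater light."""
    img = img.astype(np.float64)
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / channel_means
    return np.clip(img * gains, 0, 255).astype(np.uint8)

# Synthetic green-tinted tile: after normalization the channels balance.
tile = np.full((4, 4, 3), (40, 120, 60), dtype=np.uint8)
balanced = gray_world_normalize(tile)
```

Applying a common correction before classification keeps image statistics comparable across deployments, which is what makes a single trained classifier reusable along different camera tracks.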
3. An integrated framework for breast mass classification and diagnosis using stacked ensemble of residual neural networks. Sci Rep 2022; 12:12259. PMID: 35851592; PMCID: PMC9293883; DOI: 10.1038/s41598-022-15632-6
Abstract
A computer-aided diagnosis (CAD) system requires automated stages of tumor detection, segmentation, and classification, integrated sequentially into one framework to assist the radiologist with a final diagnosis decision. In this paper, we introduce the final step of breast mass classification and diagnosis using a stacked ensemble of residual neural network (ResNet) models (ResNet50V2, ResNet101V2, and ResNet152V2). The work addresses the tasks of classifying detected and segmented breast masses as malignant or benign, diagnosing the Breast Imaging Reporting and Data System (BI-RADS) assessment category with a score from 2 to 6, and classifying the shape as oval, round, lobulated, or irregular. The proposed methodology was evaluated on two publicly available datasets, the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM) and INbreast, and additionally on a private dataset. Comparative experiments were conducted on the individual models and on an average ensemble of models with an XGBoost classifier. Qualitative and quantitative results show that the proposed model achieved better performance for (1) pathology classification, with accuracies of 95.13%, 99.20%, and 95.88% on CBIS-DDSM, INbreast, and the private dataset, respectively; (2) BI-RADS category classification, with accuracies of 85.38%, 99%, and 96.08% on the same datasets; and (3) shape classification, with 90.02% on the CBIS-DDSM dataset. Our results demonstrate that the proposed integrated framework benefits from all automated stages and outperforms recent deep learning methodologies.
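The stacking idea, base models whose outputs feed a meta-classifier, can be sketched with scikit-learn. In this sketch, small random forests stand in for the ResNet50V2/101V2/152V2 backbones and a logistic-regression meta-learner stands in for XGBoost; both substitutions are assumptions made to keep the example dependency-free, not the paper's setup:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy tabular stand-in: in the paper these would be image features
# produced by the three ResNet variants.
X, y = make_classification(n_samples=300, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

base = [
    ("model_a", RandomForestClassifier(n_estimators=25, random_state=1)),
    ("model_b", RandomForestClassifier(n_estimators=25, random_state=2)),
    ("model_c", RandomForestClassifier(n_estimators=25, random_state=3)),
]
# The meta-learner is trained on cross-validated base-model predictions,
# so it learns how to weight each base model's opinion.
stack = StackingClassifier(estimators=base, final_estimator=LogisticRegression())
stack.fit(X_tr, y_tr)
acc = stack.score(X_te, y_te)
```

The design point of stacking, as opposed to simple averaging, is that the meta-learner can down-weight a base model that is systematically less reliable on certain inputs.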
4. Revising Cadastral Data on Land Boundaries Using Deep Learning in Image-Based Mapping. ISPRS International Journal of Geo-Information 2022. DOI: 10.3390/ijgi11050298
Abstract
One of the main concerns of land administration in developed countries is keeping the cadastral system up to date. The goal of this research was to develop an approach for detecting visible land boundaries and revising existing cadastral data using deep learning. A convolutional neural network (CNN), based on a modified architecture, was trained using the Berkeley segmentation data set 500 (BSDS500), which is available online and widely used for edge and boundary detection. The model was tested in two rural areas in Slovenia. The results were evaluated using recall, precision, and the F1 score, the latter being more appropriate for unbalanced classes. In terms of detection quality, balanced recall and precision resulted in F1 scores of 0.60 and 0.54 for Ponova vas and Odranci, respectively. With lower recall (completeness), the model was able to predict the boundaries with a precision (correctness) of 0.71 and 0.61. When the cadastral data were revised, the low values were interpreted to mean that the lower the recall, the greater the need to update the existing cadastral data. In the case of Ponova vas, the recall value was less than 0.1, which means that the predicted and cadastral boundaries barely overlapped. In Odranci, 21% of the predicted and cadastral boundaries overlapped. Since the direction of the lines was not a problem, the low recall value (0.21) was mainly due to overly fragmented plots. Overall, automatic methods are faster (once the model is trained) but less accurate than manual methods. For a rapid revision of existing cadastral boundaries, an automatic approach is certainly desirable for many national mapping and cadastral agencies, especially in developed countries.
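The recall/precision/F1 evaluation used above is standard; a minimal helper (with made-up counts, not the paper's data) shows how the three scores relate and why F1 balances the other two:

```python
def prf1(tp, fp, fn):
    """Precision, recall and F1 from counts of true positives,
    false positives and false negatives."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Illustrative boundary-pixel counts (hypothetical, not from the study):
p, r, f = prf1(tp=710, fp=290, fn=490)
```

Because F1 is the harmonic mean, it always lies between precision and recall and is dragged toward the lower of the two, which is why it is preferred over accuracy for unbalanced boundary/non-boundary classes.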
5. Mi Y, Liu Z, Zhao K, Wang S. Recognizing Micro Actions in Videos by Learning Multi-Layer Local Features. Pattern Recognit Lett 2022. DOI: 10.1016/j.patrec.2022.04.002
6. Spyridonos P, Gaitanis G, Likas A, Bassukas I. Characterizing Malignant Melanoma Clinically Resembling Seborrheic Keratosis Using Deep Knowledge Transfer. Cancers (Basel) 2021; 13:6300. PMID: 34944920; PMCID: PMC8699430; DOI: 10.3390/cancers13246300
Abstract
Simple Summary: Malignant melanomas (MMs) with atypical clinical presentation constitute a diagnostic pitfall, and false negatives carry the risk of diagnostic delay and improper disease management. Among the most common, challenging presentation forms of MMs are those that clinically resemble seborrheic keratosis (SK). On the other hand, SK may mimic melanoma, producing 'false positive' overdiagnosis and leading to needless excisions. The evolving efficiency of deep learning algorithms in image recognition and the availability of large image databases have accelerated the development of advanced computer-aided systems for melanoma detection. In the present study, we used image data from the International Skin Imaging Collaboration archive to explore the capacity of deep knowledge transfer in the challenging task of diagnosing the atypical skin tumors MM and SK.
Abstract: Malignant melanomas resembling seborrheic keratosis (SK-like MMs) are atypical, challenging-to-diagnose melanoma cases that carry the risk of delayed diagnosis and inadequate treatment. On the other hand, SK may mimic melanoma, producing a 'false positive' with unnecessary lesion excisions. The present study proposes a computer-based approach using dermoscopy images for the characterization of SK-like MMs. Dermoscopic images were retrieved from the International Skin Imaging Collaboration archive. Exploiting image embeddings from the pretrained convolutional network VGG16, we trained a support vector machine (SVM) classification model on a data set of 667 images. Optimal SVM hyperparameters were selected using Bayesian optimization. The classifier was tested on an independent data set of 311 images with atypical appearance: the MMs lacked a pigmented network and exhibited milia-like cysts, whereas the SKs lacked milia-like cysts and showed a pigmented network. Atypical MMs were characterized with a sensitivity and specificity of 78.6% and 84.5%, respectively. The advent of deep learning in image recognition has attracted the interest of computer science towards improved skin lesion diagnosis. Open-source, public-access archives of skin images further empower the implementation and validation of computer-based systems that might contribute significantly to complex clinical diagnostic problems such as the characterization of SK-like MMs.
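The embeddings-plus-SVM pipeline described above can be sketched as follows. Random vectors stand in for the VGG16 image embeddings (the feature-extraction step is omitted to keep the sketch self-contained), and a small grid search stands in for Bayesian optimization; both are stated assumptions, not the study's actual setup:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

# Stand-in for CNN image embeddings: 200 samples, 128-d vectors.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 128))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # separable toy labels

# Hyperparameter search over the usual RBF-SVM knobs; the paper uses
# Bayesian optimization, which explores the same space more efficiently.
search = GridSearchCV(
    SVC(kernel="rbf"),
    {"C": [0.1, 1.0, 10.0], "gamma": ["scale", 0.01]},
    cv=3,
)
search.fit(X, y)
best_acc = search.best_score_
```

The appeal of this transfer-learning recipe is that the expensive part (the pretrained CNN) is frozen, so only a lightweight classifier needs tuning on the modest dermoscopy data set.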
Affiliation(s)
- Panagiota Spyridonos: Department of Medical Physics, Faculty of Medicine, School of Health Sciences, University of Ioannina, 45110 Ioannina, Greece (corresponding author)
- George Gaitanis: Department of Skin and Venereal Diseases, Faculty of Medicine, School of Health Sciences, University of Ioannina, 45110 Ioannina, Greece
- Aristidis Likas: Department of Computer Science & Engineering, School of Engineering, University of Ioannina, 45110 Ioannina, Greece
- Ioannis Bassukas: Department of Skin and Venereal Diseases, Faculty of Medicine, School of Health Sciences, University of Ioannina, 45110 Ioannina, Greece (corresponding author)
7. Nair RR, Singh T, Sankar R, Gunndu K. Multi-modal medical image fusion using LMF-GAN - A maximum parameter infusion technique. Journal of Intelligent & Fuzzy Systems 2021. DOI: 10.3233/jifs-189860
Abstract
The multi-sensor, multi-modal, composite design of medical images merged into a single image contributes to identifying features that are relevant to medical diagnoses and treatments. Although current image fusion technologies, including conventional and deep learning algorithms, can produce superior fused images, they require huge volumes of images of various modalities. This may not be viable in situations where time efficiency is expected or the equipment is inadequate. This paper presents a modified end-to-end Generative Adversarial Network (GAN), termed the Loss Minimized Fusion Generative Adversarial Network (LMF-GAN), a triple-ConvNet deep learning architecture for the fusion of medical images with a limited sampling rate. In contrast to conventional convolutional networks, the encoding network is combined with a convolutional neural network layer and a dense block. The loss is minimized by training the GAN's discriminator with all the source images, learning more parameters to generate more features in the fused image. The LMF-GAN can produce fused images with clear textures through adversarial training of the generator and discriminator. The proposed fusion method achieves state-of-the-art quality in objective and subjective evaluation, in comparison with current fusion methods. The model was evaluated on standard data sets.
Affiliation(s)
- Rekha R. Nair: Department of Computer Science and Engineering, Amrita School of Engineering, Amrita Vishwa Vidyapeetham, Bengaluru, India
- Tripty Singh: Department of Computer Science and Engineering, Amrita School of Engineering, Amrita Vishwa Vidyapeetham, Bengaluru, India
- Rashmi Sankar: Department of Computer Science and Engineering, Amrita School of Engineering, Amrita Vishwa Vidyapeetham, Bengaluru, India
- Klement Gunndu: Department of Computer Science and Engineering, Amrita School of Engineering, Amrita Vishwa Vidyapeetham, Bengaluru, India
8. Maurya R, Pathak VK, Burget R, Dutta MK. Automated detection of bioimages using novel deep feature fusion algorithm and effective high-dimensional feature selection approach. Comput Biol Med 2021; 137:104862. PMID: 34534793; DOI: 10.1016/j.compbiomed.2021.104862
Abstract
The classification of bioimages plays an important role in several biological studies, such as subcellular localisation, phenotype identification, and other types of histopathological examination. The objective of the present study was to develop a computer-aided bioimage classification method for classifying bioimages across nine diverse benchmark datasets. A novel algorithm was developed that systematically fuses the features extracted from nine different convolutional neural network architectures. Systematic fusion of features boosts the performance of a classifier, but at the cost of the high dimensionality of the fused feature set. Therefore, non-discriminatory and redundant features need to be removed from the high-dimensional fused feature set to improve the classification performance and reduce the time complexity. To achieve this aim, a method based on analysis of variance and evolutionary feature selection was developed to select an optimal set of discriminatory features from the fused feature set. The proposed method was evaluated on nine different benchmark datasets. The experimental results showed that the proposed method achieved superior performance, with a significant reduction in the dimensionality of the fused feature set for most bioimage datasets. The performance of the proposed feature selection method was better than that of some of the most recent and classical feature selection methods. Thus, the proposed method is desirable because of its superior performance and high compression ratio, which significantly reduces computational complexity.
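The analysis-of-variance filtering stage described above can be sketched with scikit-learn's ANOVA F-test selector; the dataset sizes and the choice of k below are illustrative assumptions, and the paper's subsequent evolutionary search is omitted:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif

# A high-dimensional 'fused' feature set: 500 columns, few informative.
X, y = make_classification(n_samples=200, n_features=500,
                           n_informative=15, random_state=0)

# The ANOVA F-test scores each column by between-class vs within-class
# variance and keeps the k most discriminative ones.
selector = SelectKBest(f_classif, k=30).fit(X, y)
X_small = selector.transform(X)
compression = X.shape[1] / X_small.shape[1]
```

A cheap univariate filter like this prunes the bulk of redundant columns first, so the far more expensive evolutionary search only has to explore a much smaller space.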
Affiliation(s)
- Ritesh Maurya: Centre for Advanced Studies, Dr A.P.J. Abdul Kalam Technical University, Lucknow, India
- Radim Burget: Department of Telecommunications, Faculty of Electrical Engineering and Communication, Brno University of Technology, Czech Republic
- Malay Kishore Dutta: Centre for Advanced Studies, Dr A.P.J. Abdul Kalam Technical University, Lucknow, India
9. Identification of Neurodegenerative Diseases Based on Vertical Ground Reaction Force Classification Using Time-Frequency Spectrogram and Deep Learning Neural Network Features. Brain Sci 2021; 11:902. PMID: 34356136; PMCID: PMC8303978; DOI: 10.3390/brainsci11070902
Abstract
A novel identification algorithm using a deep learning approach was developed in this study to classify neurodegenerative diseases (NDDs) based on the vertical ground reaction force (vGRF) signal. The irregularity of NDD vGRF signals caused by gait abnormalities can indicate different force pattern variations compared to a healthy control (HC). The main purpose of this research is to help physicians in the early detection of NDDs, efficient treatment planning, and monitoring of disease progression. The detection algorithm comprises a preprocessing process, a feature transformation process, and a classification process. In the preprocessing process, the five-minute vertical ground reaction force signal was divided into 10, 30, and 60 s successive time windows. In the feature transformation process, the time-domain vGRF signal was modified into a time-frequency spectrogram using a continuous wavelet transform (CWT). Then, feature enhancement with principal component analysis (PCA) was utilized. Finally, a convolutional neural network, as a deep learning classifier, was employed in the classification process of the proposed detection algorithm and evaluated using leave-one-out cross-validation (LOOCV) and k-fold cross-validation (k-fold CV, k = 5). The proposed detection algorithm can effectively differentiate gait patterns based on a time-frequency spectrogram of a vGRF signal between HC subjects and patients with neurodegenerative diseases.
10. Loveymi S, Dezfoulian MH, Mansoorizadeh M. Automatic Generation of Structured Radiology Reports for Volumetric Computed Tomography Images Using Question-Specific Deep Feature Extraction and Learning. Journal of Medical Signals & Sensors 2021; 11:194-207. PMID: 34466399; PMCID: PMC8382036; DOI: 10.4103/jmss.jmss_21_20
Abstract
BACKGROUND: In today's modern medicine, the use of radiological imaging devices has become widespread in medical centers. Therefore, the need for accurate, reliable, and portable medical image analysis and understanding systems has been increasing constantly. Accompanying images with the required clinical information, in the form of structured reports, is very important, because images play a pivotal role in the detection, planning, and diagnosis of different diseases. Report writing can be error-prone, tedious, and labor-intensive for physicians and radiologists; to address these issues, there is a need for systems that generate medical image reports automatically and efficiently. Thus, automatic report generation systems are among the most desired applications. METHODS: This research proposes an automatic structured-radiology report generation system based on deep learning methods. Extracting useful and descriptive image features to model the conceptual contents of the images is one of the main challenges in this regard. Considering the ability of deep neural networks (DNNs) to solicit informative and effective features, as well as their lower resource requirements, tailored convolutional neural networks and MobileNets are employed as the main building blocks of the proposed system. To cope with challenges such as multi-slice medical images and the diversity of questions asked in a radiology report, our system develops volume-level and question-specific deep features using DNNs. RESULTS: We demonstrate the effectiveness of the proposed system on the ImageCLEF2015 Liver computed tomography (CT) annotation task, for filling in a structured radiology report about liver CT. The results confirm the efficiency of the proposed approach compared to classic annotation methods. CONCLUSION: We have proposed a question-specific DNN-based system for filling in structured radiology reports about medical images.
Affiliation(s)
- Samira Loveymi: Department of Computer Engineering, Bu-Ali Sina University, Hamedan, Iran
11. Artificial Intelligence Framework for Efficient Detection and Classification of Pneumonia Using Chest Radiography Images. J Med Biol Eng 2021. DOI: 10.1007/s40846-021-00631-1
12. Sannasi Chakravarthy S, Rajaguru H. Automatic Detection and Classification of Mammograms Using Improved Extreme Learning Machine with Deep Learning. Ing Rech Biomed 2021. DOI: 10.1016/j.irbm.2020.12.004
13. Kang T, Chen Y, Fazli S, Wallraven C. EEG-Based Prediction of Successful Memory Formation During Vocabulary Learning. IEEE Trans Neural Syst Rehabil Eng 2020; 28:2377-2389. PMID: 32915743; DOI: 10.1109/tnsre.2020.3023116
Abstract
Previous electroencephalography (EEG) and neuroimaging studies have found differences between brain signals for subsequently remembered and forgotten items during learning; it has even been shown that single-trial prediction of memorization success is possible with a few target items. Little attempt has been made, however, to validate these findings in an application-oriented context involving longer test spans and realistic learning materials encompassing more items. Hence, the present study investigates subsequent memory prediction within the application context of foreign-vocabulary learning. We employed an off-line, EEG-based paradigm in which Korean participants without prior German language experience learned 900 German words in paired-associate form. Our results, using convolutional neural networks optimized for EEG-signal analysis, show that above-chance classification is possible in this context, allowing us to predict during learning which of the words would be successfully remembered later.
14. Alejo D, Caballero F, Merino L. A Robust Localization System for Inspection Robots in Sewer Networks. Sensors 2019; 19:4946. PMID: 31766253; PMCID: PMC6891562; DOI: 10.3390/s19224946
Abstract
Sewers represent a very important infrastructure of cities whose state should be monitored periodically. However, the length of such infrastructure prevents sensor networks from being applicable. In this paper, we present a mobile platform (SIAR) designed to inspect the sewer network. It is capable of sensing gas concentrations and detecting failures in the network, such as cracks and holes in the floor and walls or zones where the water is not flowing. These alarms should be precisely geo-localized so that operators can perform the required corrective measures. To this end, this paper presents a robust localization system for global pose estimation in sewers. It makes use of prior information about the sewer network, including its topology, the different cross sections traversed, and the position of elements such as manholes. The system is based on a Monte Carlo Localization approach that fuses wheel and RGB-D odometry in the prediction stage. The update step takes the sewer network topology into account to discard wrong hypotheses. Additionally, the localization is further refined with novel update steps, proposed in this paper, which are activated whenever a discrete element in the sewer network is detected or the relative orientation of the robot over the sewer gallery can be estimated. Each part of the system has been validated with real data obtained from the sewers of Barcelona. The whole system obtains median localization errors on the order of one meter in all cases. Finally, the paper also includes comparisons with state-of-the-art Simultaneous Localization and Mapping (SLAM) systems that demonstrate the convenience of the approach.
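The Monte Carlo Localization core can be sketched as a 1-D particle filter along a gallery, with manholes as discrete landmarks; the map, noise levels, and range-to-nearest-manhole observation model below are illustrative assumptions, not the paper's actual sensor models:

```python
import numpy as np

rng = np.random.default_rng(42)
manholes = np.array([10.0, 25.0, 40.0])  # known landmark positions (m)

n = 2000
particles = rng.uniform(0.0, 50.0, n)    # uninformed prior over the gallery
weights = np.full(n, 1.0 / n)

true_pos = 8.0
for _ in range(5):
    # Predict: odometry reports ~1 m forward motion, with noise.
    true_pos += 1.0
    particles += 1.0 + rng.normal(0.0, 0.1, n)
    # Update: observed range to the nearest manhole (sigma = 0.2 m).
    z = np.min(np.abs(manholes - true_pos)) + rng.normal(0.0, 0.2)
    expected = np.min(np.abs(manholes[None, :] - particles[:, None]), axis=1)
    weights *= np.exp(-0.5 * ((z - expected) / 0.2) ** 2) + 1e-300
    weights /= weights.sum()
    # Systematic resampling keeps the particle set diverse.
    positions = (rng.random() + np.arange(n)) / n
    idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), n - 1)
    particles = particles[idx]
    weights = np.full(n, 1.0 / n)
```

Because range-to-nearest-manhole is symmetric, the posterior here stays multimodal (clusters near 13 m, 28 m, and 43 m); this is precisely why additional cues such as network topology and the robot's orientation over the gallery, as used in the paper, are needed to discard the wrong hypotheses.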
Affiliation(s)
- David Alejo: School of Engineering, Universidad Pablo de Olavide, 41012 Sevilla, Spain
- Fernando Caballero: Department of Systems Engineering and Automation, Universidad de Sevilla, 41009 Sevilla, Spain
- Luis Merino: School of Engineering, Universidad Pablo de Olavide, 41012 Sevilla, Spain (corresponding author; Tel.: +34-95-434-8350)
15. Application of Deep Learning for Delineation of Visible Cadastral Boundaries from Remote Sensing Imagery. Remote Sensing 2019. DOI: 10.3390/rs11212505
Abstract
Cadastral boundaries are often demarcated by objects that are visible in remote sensing imagery. Indirect surveying relies on the delineation of visible parcel boundaries from such images. Despite advances in automated detection and localization of objects from images, indirect surveying is rarely automated and relies on manual on-screen delineation. We have previously introduced a boundary delineation workflow, comprising image segmentation, boundary classification, and interactive delineation, which we applied to Unmanned Aerial Vehicle (UAV) data to delineate roads. In this study, we improve each of these steps. For image segmentation, we remove the need to reduce the image resolution, and we limit over-segmentation by reducing the number of segment lines by 80% through filtering. For boundary classification, we show how Convolutional Neural Networks (CNN) can be used for boundary line classification, eliminating the previous need for Random Forest (RF) feature generation and achieving 71% accuracy. For interactive delineation, we develop additional and more intuitive delineation functionalities that cover more application cases. We test our approach on larger and more varied data sets by applying it to UAV and aerial imagery of 0.02–0.25 m resolution from Kenya, Rwanda, and Ethiopia. We show that it is more effective in terms of clicks and time compared to manual delineation for parcels surrounded by visible boundaries. The strongest advantages are obtained for rural scenes delineated from aerial imagery, where the delineation effort per parcel requires 38% less time and 80% fewer clicks compared to manual delineation.