1
Furtado DP, Vieira EA, Nascimento WF, Inagaki KY, Bleuel J, Alves MAZ, Longo GO, Oliveira LS. #DeOlhoNosCorais: a polygonal annotated dataset to optimize coral monitoring. PeerJ 2023; 11:e16219. PMID: 37953792; PMCID: PMC10634335; DOI: 10.7717/peerj.16219.
Abstract
Corals are colonial animals within the phylum Cnidaria that form coral reefs, playing a significant role in marine environments by providing habitat for fish, mollusks, crustaceans, sponges, algae, and other organisms. Global climate change is causing more intense and frequent thermal stress events, during which corals lose their color because the symbiotic relationship with their photosynthetic endosymbionts breaks down. Given the importance of corals to the marine environment, monitoring coral reefs is critical to understanding their response to anthropogenic impacts. Most coral monitoring relies on underwater photographs, which can be costly to generate at large spatial scales and require time-consuming processing and analysis. The Marine Ecology Laboratory (LECOM) at the Federal University of Rio Grande do Norte (UFRN) developed the project "#DeOlhoNosCorais", which encourages users to post photos of coral reefs on social media (Instagram) under this hashtag, enabling people without previous scientific training to contribute to coral monitoring. The laboratory team identifies the species and gathers information on coral health along the Brazilian coast by analyzing each picture posted on social media. To optimize this process, we conducted baseline experiments for image classification and semantic segmentation, and analyzed the classification results of three different machine learning models using the Local Interpretable Model-agnostic Explanations (LIME) algorithm. The best results were achieved by combining EfficientNet for feature extraction with logistic regression for classification. For semantic segmentation, the U-Net Pix2Pix model produced a pixel-level accuracy of 86%. Our results indicate that this tool can enhance image selection for coral monitoring purposes and open several perspectives for improving classification performance.
Furthermore, our findings can be expanded by incorporating other datasets to create a tool that reduces the time and cost associated with analyzing coral reef images across various regions.
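The two-stage pipeline the abstract describes (a frozen CNN used only as a feature extractor, feeding a logistic-regression classifier) can be sketched in outline. This is a minimal stand-in, not the authors' code: the feature vectors below are toy substitutes for EfficientNet embeddings, and the trainer is a bare gradient-descent logistic regression.

```python
import math

def train_logreg(X, y, lr=0.5, epochs=500):
    """Plain batch gradient descent for binary logistic regression.
    X: feature vectors (stand-ins for CNN embeddings), y: 0/1 labels."""
    d = len(X[0])
    w = [0.0] * d
    b = 0.0
    n = len(X)
    for _ in range(epochs):
        gw = [0.0] * d
        gb = 0.0
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))  # sigmoid
            err = p - yi                    # gradient of the log-loss
            for j in range(d):
                gw[j] += err * xi[j]
            gb += err
        w = [wj - lr * gj / n for wj, gj in zip(w, gw)]
        b -= lr * gb / n
    return w, b

def predict(w, b, x):
    """Class 1 if the decision value is positive."""
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1 if z > 0 else 0

# Toy "embeddings": class 1 clusters near (2, 2), class 0 near (-2, -2).
X = [[2.1, 1.9], [1.8, 2.3], [-2.0, -1.7], [-1.9, -2.2]]
y = [1, 1, 0, 0]
w, b = train_logreg(X, y)
```

In the real pipeline, each row of `X` would be the embedding EfficientNet produces for one reef photo; only the lightweight classifier on top is trained.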
Affiliation(s)
- Daniel P. Furtado
- Department of Informatics, Federal University of Paraná, Curitiba, PR, Brazil
- Edson A. Vieira
- Department of Oceanography and Limnology, Federal University of Rio Grande do Norte, Natal, RN, Brazil
- Kelly Y. Inagaki
- Department of Oceanography and Limnology, Federal University of Rio Grande do Norte, Natal, RN, Brazil
- Jessica Bleuel
- Department of Oceanography and Limnology, Federal University of Rio Grande do Norte, Natal, RN, Brazil
- Guilherme O. Longo
- Department of Oceanography and Limnology, Federal University of Rio Grande do Norte, Natal, RN, Brazil
- Luiz S. Oliveira
- Department of Informatics, Federal University of Paraná, Curitiba, PR, Brazil
2
Kim SM, Cho GJ. Analysis of Various Facial Expressions of Horses as a Welfare Indicator Using Deep Learning. Vet Sci 2023; 10:283. PMID: 37104439; PMCID: PMC10141195; DOI: 10.3390/vetsci10040283.
Abstract
This study aimed to show that deep learning can be used effectively to identify various equine facial expressions as welfare indicators. A total of 749 horses (586 healthy and 163 experiencing pain) were investigated, and a model was developed to recognize facial expressions from images and classify them into four categories: resting horses (RH), horses with pain (HP), horses immediately after exercise (HE), and horseshoeing horses (HH). Normalization of equine facial posture revealed that profile views (99.45%) yielded higher accuracy than frontal views (97.59%). The eyes-nose-ears detection model achieved an accuracy of 98.75% in training, 81.44% in validation, and 88.10% in testing, for an average accuracy of 89.43%. Overall classification accuracy was high; however, the accuracy of pain classification was low. These results imply that horses may show a variety of facial expressions beyond pain, depending on the situation and on the degree and type of pain experienced. Automatic pain and stress recognition would greatly enhance the identification of pain and other emotional states, thereby improving the quality of equine welfare.
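The average accuracy reported for the eyes-nose-ears detection model can be checked directly from the three quoted figures:

```python
# Reported accuracies of the eyes-nose-ears detection model (from the abstract).
train_acc, val_acc, test_acc = 98.75, 81.44, 88.10

# Unweighted mean over the three splits, as quoted in the abstract.
average_acc = (train_acc + val_acc + test_acc) / 3
print(round(average_acc, 2))  # 89.43
```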
Affiliation(s)
- Su Min Kim
- College of Veterinary Medicine, Kyungpook National University, Daegu 41566, Republic of Korea
- Gil Jae Cho
- College of Veterinary Medicine, Kyungpook National University, Daegu 41566, Republic of Korea
3
Kalbhor M, Shinde S, Joshi H, Wajire P. Pap smear-based cervical cancer detection using hybrid deep learning and performance evaluation. Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization 2023. DOI: 10.1080/21681163.2022.2163704.
Affiliation(s)
- Madhura Kalbhor
- Department of Computer Engineering, Pimpri Chinchwad College of Engineering, Pune, India
- Swati Shinde
- Department of Computer Engineering, Pimpri Chinchwad College of Engineering, Pune, India
- Hrushikesh Joshi
- Department of Computer Engineering, Pimpri Chinchwad College of Engineering, Pune, India
- Pankaj Wajire
- Department of Computer Engineering, Pimpri Chinchwad College of Engineering, Pune, India
4
Alkhouri I, Atia G, Mikhael W. Fooling the Big Picture in Classification Tasks. Circuits, Systems, and Signal Processing 2022; 42:2385-2415. PMID: 36373009; PMCID: PMC9638414; DOI: 10.1007/s00034-022-02226-w.
Abstract
Minimally perturbed adversarial examples have been shown to drastically reduce the performance of one-stage classifiers while remaining imperceptible. This paper investigates the susceptibility of hierarchical classifiers, which use fine- and coarse-level output categories, to adversarial attacks. We formulate a program that encodes minimax constraints to induce misclassification of the coarse class of a hierarchical classifier (e.g., changing the prediction for a 'monkey' to a 'vehicle' instead of some 'animal'). We then develop solutions based on convex relaxations of this program, obtaining an algorithm based on the alternating direction method of multipliers with competitive performance in comparison with state-of-the-art solvers. We show the ability of our approach to fool the coarse classification through measures such as the relative loss in coarse classification accuracy and imperceptibility factors. Compared with perturbations generated for one-stage classifiers, fooling a classifier about the 'big picture' requires higher perturbation levels, which results in lower imperceptibility. We also examine the impact of different label groupings on the performance of the proposed attacks. Supplementary information: the online version contains supplementary material available at 10.1007/s00034-022-02226-w.
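The coarse/fine structure the attack targets amounts to a many-to-one label map, so whether a perturbation "fooled the big picture" is a single lookup: the attack succeeds at the coarse level only if the predicted fine label falls under a different coarse group. A minimal sketch (the grouping below is an illustrative assumption, not the paper's taxonomy):

```python
# Hypothetical fine-to-coarse label grouping, purely for illustration.
FINE_TO_COARSE = {
    "monkey": "animal",
    "dog": "animal",
    "truck": "vehicle",
    "car": "vehicle",
}

def coarse_fooled(true_fine: str, predicted_fine: str) -> bool:
    """True only if the coarse classes differ: predicting 'dog' for a
    'monkey' keeps the coarse label 'animal' intact, so the big picture
    is not fooled even though the fine prediction is wrong."""
    return FINE_TO_COARSE[true_fine] != FINE_TO_COARSE[predicted_fine]
```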
Affiliation(s)
- Ismail Alkhouri
- Department of Electrical and Computer Engineering, University of Central Florida, Orlando, FL 32816, USA
- George Atia
- Department of Electrical and Computer Engineering, University of Central Florida, Orlando, FL 32816, USA
- Wasfy Mikhael
- Department of Electrical and Computer Engineering, University of Central Florida, Orlando, FL 32816, USA
5
de Lope J, Graña M. Deep transfer learning-based gaze tracking for behavioral activity recognition. Neurocomputing 2022. DOI: 10.1016/j.neucom.2021.06.100.
6
Automatic Semantic Segmentation of Benthic Habitats Using Images from Towed Underwater Camera in a Complex Shallow Water Environment. Remote Sensing 2022. DOI: 10.3390/rs14081818.
Abstract
Underwater image segmentation is useful for benthic habitat mapping and monitoring, but manual annotation is time-consuming and tedious. We propose automated segmentation of benthic habitats using unsupervised semantic algorithms. Four such algorithms were tested for segmentation accuracy: Fast and Robust Fuzzy C-Means (FR), Superpixel-Based Fast Fuzzy C-Means (FF), Otsu clustering (OS), and K-means segmentation (KM). In addition, the YCbCr and Commission Internationale de l'Éclairage (CIE) LAB color spaces were evaluated for correcting variations in image illumination and shadow effects. Benthic habitat field data from a geo-located, high-resolution towed camera were used to evaluate the proposed algorithms in the Shiraho study area, located off Ishigaki Island, Japan, where six benthic habitat classes were mapped: corals (Acropora and Porites), blue corals (Heliopora coerulea), brown algae, other algae, sediments, and seagrass (Thalassia hemprichii). Analysis showed that the K-means clustering algorithm yielded the highest overall accuracy, although the difference between the KM and OS overall accuracies was statistically insignificant at the 5% level. The findings underscore the importance of eliminating underwater illumination variations and show that the red-difference chrominance (Cr) values of the YCbCr color space outperformed the alternatives for habitat segmentation. The proposed framework enhances the automation of benthic habitat classification.
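The Cr (red-difference chrominance) channel that performed best here is just a linear transform of RGB, and clustering pixels on Cr alone reduces to ordinary one-dimensional K-means. A minimal sketch, assuming the standard full-range BT.601 conversion coefficients and made-up toy pixel values:

```python
def cr_channel(r, g, b):
    """Red-difference chrominance, full-range BT.601 YCbCr conversion."""
    return 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b

def kmeans_1d(values, k=2, iters=20):
    """Tiny 1-D K-means for k=2: returns cluster centers, ascending."""
    centers = [min(values), max(values)]  # simple extreme-point init
    for _ in range(iters):
        groups = [[] for _ in centers]
        for v in values:
            i = min(range(len(centers)), key=lambda j: abs(v - centers[j]))
            groups[i].append(v)
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    return sorted(centers)

# Toy pixels: two reddish (coral-like) and two greenish (algae-like).
pixels = [(200, 60, 60), (190, 70, 50), (40, 160, 60), (50, 150, 70)]
crs = [cr_channel(*p) for p in pixels]
lo, hi = kmeans_1d(crs)  # two Cr cluster centers separating the groups
```

A neutral gray pixel maps to Cr = 128, and reddish pixels land well above greenish ones, which is why Cr separates coral from algal cover so cleanly.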
7
A CNN-RNN Combined Structure for Real-World Violence Detection in Surveillance Cameras. Applied Sciences 2022. DOI: 10.3390/app12031021.
Abstract
Surveillance cameras have been increasingly deployed in public and private spaces in recent years to improve the security of those areas. Although many companies still hire people to watch the camera feeds, a human observer is likely to miss some abnormal events due to error and fatigue, so manual monitoring can waste time and energy. Many researchers have therefore worked on surveillance data and proposed methods to detect abnormal events automatically, so that anything anomalous happening in front of the cameras can be flagged immediately. We introduce a model for detecting abnormal events in surveillance camera feeds. The model uses a well-known convolutional neural network (ResNet50) to extract the essential features of each frame of the input stream, followed by a recurrent architecture (ConvLSTM) that detects abnormal events in the resulting time series. In contrast with previous works, which mainly focused on hand-crafted datasets, our dataset consists of real-time surveillance camera feeds with different subjects and environments. We classify events as normal or abnormal and show the method's ability to assign the right category to each anomaly, grouping the data into three essential categories: events requiring firefighting services, thefts, and violent behaviour. We evaluated the proposed method on the UCF-Crime dataset and achieved an AUC of 81.71%, higher than other models such as C3D on the same dataset. Future work will focus on adding an attention layer to the existing model to detect more abnormal events.
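The AUC figure quoted for the UCF-Crime experiments is a threshold-free ranking metric: the probability that a randomly chosen abnormal clip receives a higher anomaly score than a randomly chosen normal one. A small stdlib implementation of that rank (Mann-Whitney) formulation; the scores below are made-up illustrations, not the paper's outputs:

```python
def auc(labels, scores):
    """Area under the ROC curve via the rank formulation: the fraction of
    positive/negative pairs where the positive outranks the negative,
    counting ties as half a win."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Illustrative anomaly scores for 3 abnormal (1) and 3 normal (0) clips.
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]
```

With these toy scores, 8 of the 9 positive/negative pairs are ranked correctly, so the AUC is 8/9; a perfect ranker scores 1.0.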
8
Ma L, He W, Petersen M, Chou KC, Lu X. Next-Generation Antimicrobial Resistance Surveillance System Based on the Internet-of-Things and Microfluidic Technique. ACS Sens 2021; 6:3477-3484. PMID: 34494420; DOI: 10.1021/acssensors.1c01453.
Abstract
Antimicrobial resistance (AMR) of foodborne pathogens is a global crisis for public health and economic growth. A real-time surveillance system is key to tracking the emergence of AMR bacteria and providing a comprehensive picture of AMR trends from farm to fork. However, current AMR surveillance systems, which integrate results from multiple laboratories using the conventional broth microdilution method, are labor-intensive and time-consuming. To address these challenges, we present an internet-of-things (IoT) system, comprising colorimetric microfluidic sensors, a custom-built portable incubator, and machine learning algorithms, to monitor AMR trends in real time. As a top-priority microbe that poses risks to human health, Campylobacter was selected as the bacterial model to demonstrate and validate IoT-assisted AMR surveillance. Image classification of the colorimetric sensors with the convolutional neural network ResNet50 achieved an accuracy of 99.5% in classifying bacterial growth/inhibition patterns. The IoT system was used in a small-scale survey, identifying eight Campylobacter isolates among 35 chicken samples, and its Campylobacter AMR profiles agreed 96% with the results of the conventional broth microdilution method. Data collected from the intelligent sensors were transmitted from local computers to a cloud server, facilitating real-time data collection and integration, and a web browser interface was developed to present spatial and temporal AMR trends to end users. This rapid, cost-effective, and portable approach can monitor, assess, and help mitigate the burden of bacterial AMR in the agri-food chain.
Affiliation(s)
- Luyao Ma
- Food, Nutrition and Health Program, Faculty of Land and Food Systems, The University of British Columbia, Vancouver, British Columbia V6T 1Z4, Canada
- Department of Food Science and Agricultural Chemistry, Faculty of Agricultural and Environmental Sciences, McGill University, Sainte-Anne-de-Bellevue, Quebec H9X 3V9, Canada
- Weidong He
- College of Computer Science, Chongqing University, Chongqing 400044, China
- Marlen Petersen
- Food, Nutrition and Health Program, Faculty of Land and Food Systems, The University of British Columbia, Vancouver, British Columbia V6T 1Z4, Canada
- Keng C. Chou
- Department of Chemistry, Faculty of Science, The University of British Columbia, Vancouver V6T 1Z1, Canada
- Xiaonan Lu
- Food, Nutrition and Health Program, Faculty of Land and Food Systems, The University of British Columbia, Vancouver, British Columbia V6T 1Z4, Canada
- Department of Food Science and Agricultural Chemistry, Faculty of Agricultural and Environmental Sciences, McGill University, Sainte-Anne-de-Bellevue, Quebec H9X 3V9, Canada
9
Enhancing Surface Fault Detection Using Machine Learning for 3D Printed Products. Applied System Innovation 2021. DOI: 10.3390/asi4020034.
Abstract
In the era of Industry 4.0, 3D printed products have gained momentum and are proving beneficial in terms of cost and time. These products are physically built layer by layer from digital Computer-Aided Design (CAD) inputs. Nonetheless, 3D printed products are still subject to defects caused by variation in material properties and structure, which degrades the quality of printed products, so detecting these errors at each layer is of prime importance. This paper provides a methodology for layer-wise anomaly detection using an ensemble of machine learning algorithms and pre-trained models. The proposed combination is trained offline and deployed online for fault detection. The work presents an experimental comparison of different pre-trained models paired with machine learning algorithms for monitoring and fault detection in Fused Deposition Modelling (FDM). The results showed that the combination of AlexNet and an SVM achieved the highest accuracy. The proposed fault detection approach has low experimental and computing costs and can easily be implemented for real-time fault detection.
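The offline-trained, online-deployed split described above boils down to scoring each printed layer's feature vector against a fixed decision function. A minimal stand-in for the online stage: in the real pipeline the features come from AlexNet and the weights from a trained SVM, whereas the weights and 2-D features below are purely illustrative.

```python
def svm_decision(w, b, x):
    """Linear SVM decision value: positive means 'faulty layer'."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def monitor_layers(w, b, layer_features):
    """Online monitoring loop: return the index of every flagged layer."""
    return [i for i, x in enumerate(layer_features) if svm_decision(w, b, x) > 0]

# Illustrative 2-D features per layer and a pre-trained decision boundary.
w, b = [1.0, -1.0], -0.5
layers = [[0.2, 0.3], [1.5, 0.1], [0.4, 0.6], [2.0, 0.2]]
faulty = monitor_layers(w, b, layers)
```

Because the expensive training happens offline, the online step is just this dot product per layer, which is what keeps the computing cost low enough for real-time use.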
10
Semiautomated Mapping of Benthic Habitats and Seagrass Species Using a Convolutional Neural Network Framework in Shallow Water Environments. Remote Sensing 2020. DOI: 10.3390/rs12234002.
Abstract
Benthic habitats are structurally complex and ecologically diverse ecosystems that are severely vulnerable to human stressors. Consequently, marine habitats must be mapped and monitored to provide the information necessary to understand ecological processes and guide management actions. In this study, we propose a semiautomated framework for the detection and mapping of benthic habitats and seagrass species using convolutional neural networks (CNNs). Benthic habitat field data from a geo-located towed camera and high-resolution satellite images were integrated to evaluate the proposed framework. Features extracted from pre-trained CNNs and a "bagging of features" (BOF) algorithm were used for benthic habitat and seagrass species detection. The correctly detected images were then used as ground-truth samples for training and validating CNNs with simple architectures, which were evaluated for their accuracy in mapping benthic habitats and seagrass species from high-resolution satellite images. Two study areas on Ishigaki Island, Japan, were used to evaluate the proposed model: seven benthic habitats were classified in the Shiraho area and four seagrass species were mapped in Fukido cove. The overall accuracy of benthic habitat detection in Shiraho and seagrass species detection in Fukido was 91.5% (7 classes) and 90.4% (4 species), respectively, while the overall accuracy of benthic habitat and seagrass mapping in Shiraho and Fukido was 89.9% and 91.2%, respectively.
11
Raphael A, Dubinsky Z, Iluz D, Benichou JIC, Netanyahu NS. Deep neural network recognition of shallow water corals in the Gulf of Eilat (Aqaba). Sci Rep 2020; 10:12959. PMID: 32737327; PMCID: PMC7395127; DOI: 10.1038/s41598-020-69201-w.
Abstract
We describe the application of deep learning to the recognition of corals in a shallow reef in the Gulf of Eilat, Red Sea. The project applies deep neural network analysis, based on thousands of underwater images, to the automatic recognition of some common species among the 100 species reported from the Eilat coral reefs. This is a challenging task: even within the same colony, corals exhibit significant within-species morphological variability related to age, depth, current, light, geographic location, and inter-specific competition, and because deep learning procedures operate on photographic images, the task is further complicated by image quality, distance from the object, viewing angle, and lighting conditions. We produced a large dataset of over 5,000 coral images classified into 11 species for the present automated deep learning classification scheme, and we demonstrate the efficiency and reliability of the method compared with painstaking manual classification. In particular, the method is readily adaptable to additional species, providing an excellent tool for future studies in the region that would allow real-time monitoring of the detrimental effects of global climate change and anthropogenic impacts on the coral reefs of the Gulf of Eilat and elsewhere, and would help assess the success of various bioremediation efforts.
Affiliation(s)
- Alina Raphael
- The Mina and Everard Goodman Faculty of Life Sciences, Bar-Ilan University, 5290002, Ramat-Gan, Israel
- Zvy Dubinsky
- The Mina and Everard Goodman Faculty of Life Sciences, Bar-Ilan University, 5290002, Ramat-Gan, Israel
- David Iluz
- The Mina and Everard Goodman Faculty of Life Sciences, Bar-Ilan University, 5290002, Ramat-Gan, Israel
- Department of Environmental Sciences and Agriculture, Beit Berl College, 4490500, Beit Berl, Israel
- Jennifer I C Benichou
- The Mina and Everard Goodman Faculty of Life Sciences, Bar-Ilan University, 5290002, Ramat-Gan, Israel
- Nathan S Netanyahu
- Department of Computer Science, Bar-Ilan University, 5290002, Ramat-Gan, Israel
12
Classification of VLF/LF Lightning Signals Using Sensors and Deep Learning Methods. Sensors 2020; 20:1030. PMID: 32075020; PMCID: PMC7070770; DOI: 10.3390/s20041030.
Abstract
Lightning waveforms play an important role in lightning observation, location, and lightning disaster investigation. Given the large amount of waveform data provided by existing real-time very low frequency/low frequency (VLF/LF) lightning waveform acquisition equipment, an automatic and accurate classification method for lightning waveforms is extremely important, and the widespread success of deep learning in image and speech recognition makes it a natural candidate. In this study, 50,000 lightning waveform samples were collected and divided into the following categories: positive cloud-to-ground flash, negative cloud-to-ground flash, cloud-to-ground flash with ionospheric reflection signal, positive narrow bipolar event, negative narrow bipolar event, positive pre-breakdown process, negative pre-breakdown process, continuous multi-pulse cloud flash, bipolar pulse, and skywave. A multi-layer one-dimensional convolutional neural network (1D-CNN) was designed to automatically extract VLF/LF lightning waveform features and distinguish the waveform types. The model achieved an overall accuracy of 99.11% on the lightning dataset and 97.55% on data from a single thunderstorm. Given its excellent performance, this model could be deployed in lightning sensors to assist lightning monitoring and positioning.
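The 1D-CNN's core operation, a learned kernel slid along the sampled waveform followed by a nonlinearity, can be sketched in a few lines. The kernel and waveform below are toy values, not the trained model's:

```python
def conv1d(signal, kernel):
    """Valid-mode 1-D convolution (cross-correlation, as CNN layers use)."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def relu(xs):
    """Rectified linear unit applied elementwise."""
    return [max(0.0, x) for x in xs]

# Toy waveform and a difference kernel that responds to sharp rises,
# roughly how an early CNN layer picks out pulse fronts in VLF/LF records.
waveform = [0.0, 0.0, 1.0, 1.0, 0.0, 0.0]
kernel = [-1.0, 1.0]
features = relu(conv1d(waveform, kernel))
```

Stacking many such filtered channels, with pooling and a final fully connected layer over the learned features, yields the multi-layer classifier the abstract describes.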