1. Minier L, Rouch J, Sabbagh B, Bertucci F, Parmentier E, Lecchini D, Sèbe F, Mathevon N, Emonet R. Visualization and quantification of coral reef soundscapes using CoralSoundExplorer software. PLoS Comput Biol 2025; 21:e1012050. PMID: 40208899; PMCID: PMC12017563; DOI: 10.1371/journal.pcbi.1012050.
Abstract
Despite hosting some of the highest concentrations of biodiversity and providing invaluable goods and services in the oceans, coral reefs are under threat from global change and local human impacts. Changes in living ecosystems often induce changes in their acoustic characteristics, but despite recent efforts in passive acoustic monitoring of coral reefs, rapid measurement and identification of changes in their soundscapes remain a challenge. Here we present the new open-source software CoralSoundExplorer, which is designed to study and monitor coral reef soundscapes. CoralSoundExplorer uses machine learning approaches and is designed to eliminate the need to extract conventional acoustic indices. To demonstrate CoralSoundExplorer's functionalities, we use and analyze a set of recordings from three coral reef sites, each with a different purpose (undisturbed site, tourist site, and boat site), located on the island of Bora-Bora in French Polynesia. We explain the CoralSoundExplorer analysis workflow, from raw sounds to ecological results, detailing and justifying each processing step. We detail the software settings and the graphical representations used for visual exploration of soundscapes and their temporal dynamics, along with the proposed analysis methods and metrics. We demonstrate that CoralSoundExplorer is a powerful tool for identifying disturbances affecting coral reef soundscapes, combining visualizations of the spatio-temporal distribution of sound recordings with new quantification methods to characterize soundscapes at different temporal scales.
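CoralSoundExplorer itself is open source; the snippet below is only a minimal sketch of the general embed-and-project workflow the abstract describes (recording clips turned into feature vectors, then projected to a 2-D map coloured by site), not the software's actual code. The directory layout `clips/<site>/*.wav`, the log-mel summary features, and the use of librosa and umap-learn are assumptions made for illustration.

```python
# Minimal sketch of an embed-and-visualize soundscape workflow (not CoralSoundExplorer itself).
# Assumed inputs: WAV clips organised as clips/<site>/<clip>.wav; paths and parameters are illustrative.
import glob, os
import numpy as np
import librosa
import umap                      # umap-learn
import matplotlib.pyplot as plt

def clip_features(path, sr=22050, n_mels=64):
    """Return a simple per-clip feature vector: mean and std of log-mel bands."""
    y, sr = librosa.load(path, sr=sr, mono=True)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    logmel = librosa.power_to_db(mel, ref=np.max)
    return np.concatenate([logmel.mean(axis=1), logmel.std(axis=1)])

paths = sorted(glob.glob("clips/*/*.wav"))          # hypothetical layout: clips/<site>/<file>.wav
sites = [os.path.basename(os.path.dirname(p)) for p in paths]
X = np.stack([clip_features(p) for p in paths])

# 2-D projection of the soundscape; each point is one recording clip.
emb = umap.UMAP(n_neighbors=15, min_dist=0.1, random_state=0).fit_transform(X)

for site in sorted(set(sites)):
    idx = [i for i, s in enumerate(sites) if s == site]
    plt.scatter(emb[idx, 0], emb[idx, 1], s=8, label=site)
plt.legend(); plt.title("Soundscape map (illustrative)"); plt.show()
```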
Affiliation(s)
- Lana Minier
  - PSL Université Paris, EPHE-UPVD-CNRS, UAR3278 CRIOBE, Moorea, French Polynesia
- Jérémy Rouch
  - ENES Bioacoustics Research Laboratory, University of Saint-Etienne, CRNL, CNRS, Inserm, Saint-Etienne, France
- Bamdad Sabbagh
  - ENES Bioacoustics Research Laboratory, University of Saint-Etienne, CRNL, CNRS, Inserm, Saint-Etienne, France
- Frédéric Bertucci
  - UMR MARBEC, University of Montpellier, CNRS, IFREMER, IRD, Sète, France
- Eric Parmentier
  - Laboratory of Functional and Evolutionary Morphology, FOCUS, University of Liège, Liège, Belgium
- David Lecchini
  - PSL Université Paris, EPHE-UPVD-CNRS, UAR3278 CRIOBE, Moorea, French Polynesia
- Frédéric Sèbe
  - ENES Bioacoustics Research Laboratory, University of Saint-Etienne, CRNL, CNRS, Inserm, Saint-Etienne, France
  - Office Français de la Biodiversité, Service Anthropisation et Fonctionnement des Ecosystèmes Terrestres, Direction de la Recherche et de l’Appui Scientifique, Gières, France
- Nicolas Mathevon
  - ENES Bioacoustics Research Laboratory, University of Saint-Etienne, CRNL, CNRS, Inserm, Saint-Etienne, France
  - Institut Universitaire de France, Paris, France
  - Ecole Pratique des Hautes Etudes, Chart Lab, PSL University, Paris, France
  - Department of Psychology, University of California, Berkeley, California, United States of America
- Rémi Emonet
  - Institut Universitaire de France, Paris, France
  - Université Jean Monnet Saint-Etienne, CNRS, Institut d’Optique Graduate School, Inria, Laboratoire Hubert Curien UMR 5516, Saint-Etienne, France
2. Williams B, Balvanera SM, Sethi SS, Lamont TA, Jompa J, Prasetya M, Richardson L, Chapuis L, Weschke E, Hoey A, Beldade R, Mills SC, Haguenauer A, Zuberer F, Simpson SD, Curnick D, Jones KE. Unlocking the soundscape of coral reefs with artificial intelligence: pretrained networks and unsupervised learning win out. PLoS Comput Biol 2025; 21:e1013029. PMID: 40294093; PMCID: PMC12064026; DOI: 10.1371/journal.pcbi.1013029.
Abstract
Passive acoustic monitoring can offer insights into the state of coral reef ecosystems at low cost and over extended periods. Comparison of whole-soundscape properties can rapidly deliver broad insights from acoustic data, in contrast to detailed but time-consuming analysis of individual bioacoustic events. However, a lack of effective automated analysis for whole-soundscape data has impeded progress in this field. Here, we show that machine learning (ML) can be used to unlock greater insights from reef soundscapes. We showcase this on a diverse set of tasks using three biogeographically independent datasets, each containing fish community (high or low), coral cover (high or low) or depth zone (shallow or mesophotic) classes. We show supervised learning can be used to train models that can identify ecological classes and individual sites from whole soundscapes. However, we report that unsupervised clustering achieves this whilst providing a more detailed understanding of ecological and site groupings within soundscape data. We also compare three different approaches for extracting feature embeddings from soundscape recordings for input into ML algorithms: acoustic indices commonly used by soundscape ecologists, a pretrained convolutional neural network (P-CNN) trained on 5.2 million hours of YouTube audio, and CNNs trained on each individual task (T-CNN). Although the T-CNN performs marginally better across tasks, we reveal that the P-CNN offers a powerful tool for generating insights from marine soundscape data, as it requires orders of magnitude fewer computational resources whilst achieving nearly comparable performance to the T-CNN, with significant performance improvements over the acoustic indices. Our findings have implications for soundscape ecology in any habitat.
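As an illustration of the comparison described above, the sketch below contrasts a supervised classifier with unsupervised clustering on the same embedding matrix. The embeddings and labels are random placeholders; in practice `X` would come from acoustic indices or a pretrained audio CNN, and `y` from the ecological classes (e.g., high vs low coral cover). This is not the paper's code.

```python
# Sketch of the supervised-vs-unsupervised comparison on whole-soundscape embeddings.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 128))          # placeholder embeddings: 200 recordings x 128 dims
y = rng.integers(0, 2, size=200)         # placeholder labels, e.g. high vs low coral cover

# Supervised route: can the embedding predict the ecological class?
clf_acc = cross_val_score(RandomForestClassifier(n_estimators=200, random_state=0),
                          X, y, cv=5).mean()

# Unsupervised route: do clusters recover the same grouping without labels?
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
ari = adjusted_rand_score(y, clusters)

print(f"supervised CV accuracy: {clf_acc:.2f}   unsupervised ARI: {ari:.2f}")
```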
Affiliation(s)
- Ben Williams
  - Centre for Biodiversity and Environment Research, Department of Genetics, Evolution and Environment, University College London, London, United Kingdom
  - Zoological Society of London, Regents Park, London, United Kingdom
- Santiago M. Balvanera
  - Centre for Biodiversity and Environment Research, Department of Genetics, Evolution and Environment, University College London, London, United Kingdom
- Sarab S. Sethi
  - Department of Life Sciences, Imperial College London, London, United Kingdom
- Timothy A.C. Lamont
  - Lancaster Environment Centre, Lancaster University, Lancaster, United Kingdom
- Laura Richardson
  - School of Ocean Sciences, Bangor University, Askew Street, Menai Bridge, Anglesey, United Kingdom
- Lucille Chapuis
  - School of Biological Sciences, University of Bristol, Bristol, United Kingdom
- Emma Weschke
  - School of Biological Sciences, University of Bristol, Bristol, United Kingdom
- Andrew Hoey
  - Australian Research Council Centre of Excellence for Coral Reef Studies, James Cook University, Townsville, Queensland, Australia
- Ricardo Beldade
  - Australian Research Council Centre of Excellence for Coral Reef Studies, James Cook University, Townsville, Queensland, Australia
  - Estación Costera de Investigaciones Marinas, Millennium Nucleus for Ecology and Conservation of Temperate Mesophotic Reef Ecosystems, Facultad de Ciencias Biológicas, Pontificia Universidad Católica de Chile, Santiago, Chile
- Suzanne C. Mills
  - CRIOBE, PSL Research University, Moorea, French Polynesia
  - Laboratoire d’Excellence “CORAIL”, Perpignan, France
- Stephen D. Simpson
  - School of Biological Sciences, University of Bristol, Bristol, United Kingdom
- David Curnick
  - Zoological Society of London, Regents Park, London, United Kingdom
- Kate E. Jones
  - Centre for Biodiversity and Environment Research, Department of Genetics, Evolution and Environment, University College London, London, United Kingdom
3. McCammon S, Formel N, Jarriel S, Mooney TA. Rapid detection of fish calls within diverse coral reef soundscapes using a convolutional neural network. J Acoust Soc Am 2025; 157:1665-1683. PMID: 40067342; DOI: 10.1121/10.0035829.
Abstract
The quantity of passive acoustic data collected in marine environments is rapidly expanding; however, the software developments required to meaningfully process large volumes of soundscape data have lagged behind. A significant bottleneck in the analysis of biological patterns in soundscape datasets is the human effort required to identify and annotate individual acoustic events, such as diverse and abundant fish sounds. This paper addresses this problem by training a YOLOv5 convolutional neural network (CNN) to automate the detection of tonal and pulsed fish calls in spectrogram data from five tropical coral reefs in the U.S. Virgin Islands, building on over 22 h of annotated data containing 55,015 fish calls. The network identified fish calls with a mean average precision of up to 0.633, while processing data more than 25× faster than it is recorded. We compare the CNN to human annotators on five datasets, including three reefs used for training and two that were not. CNN-detected call rates reflected baseline reef fish and coral cover observations, and both expected biological patterns (e.g., crepuscular choruses) and novel call patterns were identified. Given the importance of reef-fish communities, their bioacoustic patterns, and the impending biodiversity crisis, these results provide a vital and scalable means to assess reef community health.
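A rough sketch of this kind of pipeline is shown below, assuming spectrograms are rendered to images and a fine-tuned YOLOv5 weights file (`fish_call_yolov5.pt`, hypothetical) is available; the input file name and spectrogram parameters are likewise illustrative, and this is not the authors' trained network.

```python
# Illustrative spectrogram-plus-YOLOv5 detection pipeline (placeholder weights and file names).
import numpy as np
import librosa
import librosa.display
import matplotlib.pyplot as plt
import torch

def save_spectrogram(wav_path, png_path, sr=8000, n_fft=1024, hop=256):
    """Render a clip to a spectrogram image suitable for an object detector."""
    y, sr = librosa.load(wav_path, sr=sr, mono=True)
    S = librosa.amplitude_to_db(np.abs(librosa.stft(y, n_fft=n_fft, hop_length=hop)), ref=np.max)
    plt.figure(figsize=(6, 3))
    librosa.display.specshow(S, sr=sr, hop_length=hop, x_axis="time", y_axis="hz")
    plt.axis("off"); plt.savefig(png_path, bbox_inches="tight", pad_inches=0); plt.close()

save_spectrogram("reef_clip.wav", "reef_clip.png")          # hypothetical input file

# Load a YOLOv5 model with custom fish-call weights (placeholder path).
model = torch.hub.load("ultralytics/yolov5", "custom", path="fish_call_yolov5.pt")
detections = model("reef_clip.png").pandas().xyxy[0]        # one row per detected call box
print(detections[["xmin", "xmax", "ymin", "ymax", "confidence", "name"]])
```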
Affiliation(s)
- Seth McCammon
  - Applied Ocean Physics and Engineering Department, Woods Hole Oceanographic Institution, Woods Hole, Massachusetts 02543, USA
- Nathan Formel
  - Biology Department, Woods Hole Oceanographic Institution, Woods Hole, Massachusetts 02543, USA
- Sierra Jarriel
  - Biology Department, Woods Hole Oceanographic Institution, Woods Hole, Massachusetts 02543, USA
- T Aran Mooney
  - Biology Department, Woods Hole Oceanographic Institution, Woods Hole, Massachusetts 02543, USA
4. Carriger JF, Fisher WS. Exploring coral reef communities in Puerto Rico using Bayesian networks. Ecol Inform 2024; 82:102665. PMID: 39377040; PMCID: PMC11457097; DOI: 10.1016/j.ecoinf.2024.102665.
Abstract
Most coral reef studies focus on scleractinian (stony) corals to indicate reef condition, but there are other prominent assemblages that play a role in ecosystem structure and function. In Puerto Rico these include fish, gorgonians, and sponges. The U.S. Environmental Protection Agency conducted unique surveys of coral reef communities across the southern coast of Puerto Rico that included simultaneous measurement of all four assemblages. Evaluating the results from a community perspective demands endpoints for all four assemblages, so patterns of community structure were explored by probabilistic clustering of measured variables with Bayesian networks. Most variables were found to have stronger associations within than between taxa, but unsupervised structure learning identified three cross-taxa relationships with potential ecological significance. Clusters for each assemblage were constructed using an expectation-maximization algorithm that created a factor node jointly characterizing the density, size, and diversity of individuals in each taxon. The clusters were characterized by the measured variables, and relationships to variables for other taxa were examined, such as stony coral clusters with fish variables. Each of the factor nodes was then used to create a set of meta-factor clusters that further summarized the aggregate monitoring variables for the four taxa. Once identified, taxon-specific and meta-clusters represent patterns of community structure that can be examined on a regional or site-specific basis to better understand risk assessment, risk management, and the delivery of ecosystem services.
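The sketch below is a simplified stand-in for the clustering step: an EM-based Gaussian mixture per assemblage replaces the Bayesian-network factor node, and a cross-tabulation stands in for examining cross-taxa relationships in the learned network. All column names and data are invented for illustration; this is not the authors' Bayesian-network workflow.

```python
# Simplified stand-in for EM-based latent classes per assemblage and cross-taxa comparison.
import numpy as np
import pandas as pd
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
surveys = pd.DataFrame({
    "coral_density": rng.gamma(2, 2, 150), "coral_diversity": rng.normal(2, 0.5, 150),
    "fish_density": rng.gamma(3, 1, 150),  "fish_diversity":  rng.normal(1.5, 0.4, 150),
})

def assemblage_cluster(df, cols, k=3):
    """EM clustering of one assemblage's variables into k latent classes."""
    Z = StandardScaler().fit_transform(df[cols])
    return GaussianMixture(n_components=k, random_state=0).fit_predict(Z)

surveys["coral_class"] = assemblage_cluster(surveys, ["coral_density", "coral_diversity"])
surveys["fish_class"] = assemblage_cluster(surveys, ["fish_density", "fish_diversity"])

# Relationship between stony-coral classes and fish classes, analogous to inspecting
# cross-taxa links among factor nodes.
print(pd.crosstab(surveys["coral_class"], surveys["fish_class"]))
```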
Affiliation(s)
- John F. Carriger
  - U.S. Environmental Protection Agency, Office of Research and Development, Center for Environmental Solutions and Emergency Response, Cincinnati, OH 45268, USA
- William S. Fisher
  - U.S. Environmental Protection Agency, Office of Research and Development, Center for Environmental Measurement and Modeling, Gulf Breeze, FL 32561, USA
5. Zhou S. A method of water resources accounting based on deep clustering and attention mechanism under the background of integration of public health data and environmental economy. PeerJ Comput Sci 2023; 9:e1571. PMID: 37810344; PMCID: PMC10557482; DOI: 10.7717/peerj-cs.1571.
Abstract
Water resource accounting is a fundamental approach to the sophisticated management of basin water resources. Water quality plays a pivotal role in determining the liabilities associated with these resources, and evaluating it facilitates the computation of water resource liabilities during the accounting process. Traditional accounting methods rely on manual sorting and data analysis, which require significant human effort. To address this issue, we construct neural networks that leverage the feature extraction capabilities of convolutional operations and introduce a self-attention module, yielding an unsupervised deep clustering method. This method assists accounting tasks by automatically classifying the debt levels of water resources in different regions, thereby facilitating comprehensive water resource accounting. The method was evaluated on three datasets: the United States Postal Service (USPS), Heterogeneity Human Activity Recognition (HHAR), and Association for Computing Machinery (ACM) datasets. Evaluation using accuracy (ACC), normalized mutual information (NMI), and adjusted Rand index (ARI) yielded favorable results, surpassing K-means clustering, hierarchical clustering, and density-based constraint extension (DCE); the mean values of these metrics across the three datasets were 0.8474, 0.7582, and 0.7295, respectively.
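The sketch below only illustrates the general shape of such a model: a convolutional encoder with a self-attention block whose latent features are clustered without labels. The encoder is left untrained to keep the sketch short, and the layer sizes, omitted training objective, and placeholder data are assumptions, not the paper's architecture.

```python
# Shape-only sketch of a convolutional + self-attention encoder feeding K-means clustering.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

class ConvAttnEncoder(nn.Module):
    def __init__(self, hidden=32, latent=16):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, hidden, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.attn = nn.MultiheadAttention(embed_dim=hidden, num_heads=4, batch_first=True)
        self.head = nn.Linear(hidden, latent)

    def forward(self, x):                      # x: (batch, features)
        h = self.conv(x.unsqueeze(1))          # (batch, hidden, features)
        h = h.transpose(1, 2)                  # (batch, features, hidden) for attention
        h, _ = self.attn(h, h, h)              # self-attention over the feature axis
        return self.head(h.mean(dim=1))        # pooled latent vector

X = torch.randn(500, 64)                       # placeholder: 500 regional records x 64 indicators
with torch.no_grad():
    Z = ConvAttnEncoder()(X).numpy()           # untrained here; the paper trains end to end

# Unsupervised grouping into "debt level" classes (no labels used).
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(Z)
print(labels[:20])
```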
Affiliation(s)
- Shiya Zhou
  - Wuhan Technology and Business University, Wuhan, Hubei, China
6. Hua X, Cheng L, Zhang T, Li J. Interpretable deep dictionary learning for sound speed profiles with uncertainties. J Acoust Soc Am 2023; 153:877. PMID: 36859122; DOI: 10.1121/10.0017099.
Abstract
Uncertainties abound in sound speed profiles (SSPs) measured/estimated by modern ocean observing systems, which impede knowledge acquisition and downstream underwater applications. To reduce the SSP uncertainties and draw insights into specific ocean processes, an interpretable deep dictionary learning model is proposed for processing uncertain SSPs. In particular, two kinds of SSP uncertainties are considered: measurement errors, which generally take the form of Gaussian noise; and the disturbances/anomalies caused by potential ocean dynamics, which occur at specific depths and durations. To learn the generative patterns of these uncertainties while maintaining the interpretability of the resulting deep model, the adopted scheme first unrolls the classical K-singular value decomposition algorithm into a neural network, and trains this neural network in a supervised learning manner. The training data and model initializations are judiciously designed to incorporate the environmental properties of ocean SSPs. Experimental results demonstrate the superior performance of the proposed method over the classical baseline in mitigating noise corruption and in detecting and localizing SSP disturbances/anomalies.
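The paper unrolls K-SVD specifically; as a related illustration of the algorithm-unrolling idea, the sketch below unrolls ISTA for sparse coding (a standard LISTA-style construction), turning the dictionary-dependent matrices and thresholds into learnable layer parameters. It is a generic sketch under stated assumptions, not the authors' model.

```python
# LISTA-style unrolled sparse coding: a classical iterative algorithm becomes a trainable network.
import torch
import torch.nn as nn

class UnrolledSparseCoder(nn.Module):
    """Each layer applies z <- soft_threshold(W x + S z, theta), with W, S, theta learnable."""
    def __init__(self, signal_dim=50, code_dim=128, n_layers=8):
        super().__init__()
        self.W = nn.Linear(signal_dim, code_dim, bias=False)
        self.S = nn.Linear(code_dim, code_dim, bias=False)
        self.theta = nn.Parameter(torch.full((n_layers, code_dim), 0.1))
        self.n_layers = n_layers

    @staticmethod
    def soft(x, t):
        return torch.sign(x) * torch.relu(torch.abs(x) - t)

    def forward(self, x):                      # x: (batch, signal_dim), e.g. a noisy SSP anomaly
        z = self.soft(self.W(x), self.theta[0])
        for k in range(1, self.n_layers):
            z = self.soft(self.W(x) + self.S(z), self.theta[k])
        return z                               # sparse code; a decoder/dictionary maps it back

# Supervised training would pair noisy profiles with clean targets, as in the paper's setup.
model = UnrolledSparseCoder()
codes = model(torch.randn(4, 50))
print(codes.shape)                             # torch.Size([4, 128])
```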
Affiliation(s)
- Xinyun Hua
  - College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, 310027, China
- Lei Cheng
  - College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, 310027, China
- Ting Zhang
  - College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, 310027, China
- Jianlong Li
  - College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, 310027, China
7. De Salvio D, Bianco MJ, Gerstoft P, D'Orazio D, Garai M. Blind source separation by long-term monitoring: a variational autoencoder to validate the clustering analysis. J Acoust Soc Am 2023; 153:738. PMID: 36732230; DOI: 10.1121/10.0016887.
Abstract
Noise exposure influences the comfort and well-being of people in several contexts, such as work or learning environments. For instance, in offices, different kinds of noise can increase or reduce employees' productivity. Thus, the ability to separate sound sources in real contexts plays a key role in assessing sound environments. Long-term monitoring provides large amounts of data that can be analyzed through machine and deep learning algorithms. Building on previous work, an entire working day was recorded with a sound level meter. Both sound pressure levels and the digital audio recording were collected. Then, a dual clustering analysis was carried out to separate the two main sound sources experienced by workers: traffic and speech noises. The first method exploited the occurrences of sound pressure levels via Gaussian mixture models and K-means clustering. The second performed semi-supervised deep clustering by analyzing the latent space of a variational autoencoder. Results show that both approaches were able to separate the sound sources. Spectral matching and the latent space of the variational autoencoder validated the assumptions underlying the proposed clustering methods.
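The first clustering route described above can be sketched with scikit-learn: fit a two-component Gaussian mixture (and, for comparison, K-means) to the distribution of sound pressure levels and read off the component means for the two putative sources. The synthetic SPL series is a placeholder for the sound-level-meter data, not the study's measurements.

```python
# Two-component GMM and K-means on a synthetic sound-pressure-level series.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic level series: a steady traffic-like floor plus louder speech-like bursts (dB).
spl = np.concatenate([rng.normal(55, 2, 6000), rng.normal(68, 3, 2000)])
rng.shuffle(spl)
X = spl.reshape(-1, 1)

gmm_labels = GaussianMixture(n_components=2, random_state=0).fit_predict(X)
km_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

for name, labels in [("GMM", gmm_labels), ("K-means", km_labels)]:
    means = [X[labels == k].mean() for k in (0, 1)]
    print(f"{name}: component means ~ {sorted(np.round(means, 1))} dB")
```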
Affiliation(s)
- Domenico De Salvio
  - Department of Industrial Engineering (DIN), University of Bologna, Viale del Risorgimento 2, Bologna, 40136, Italy
- Michael J Bianco
  - NoiseLab, Scripps Institution of Oceanography, University of California San Diego, La Jolla, California 92037, USA
- Peter Gerstoft
  - NoiseLab, Scripps Institution of Oceanography, University of California San Diego, La Jolla, California 92037, USA
- Dario D'Orazio
  - Department of Industrial Engineering (DIN), University of Bologna, Viale del Risorgimento 2, Bologna, 40136, Italy
- Massimo Garai
  - Department of Industrial Engineering (DIN), University of Bologna, Viale del Risorgimento 2, Bologna, 40136, Italy
8. Stowell D. Computational bioacoustics with deep learning: a review and roadmap. PeerJ 2022; 10:e13152. PMID: 35341043; PMCID: PMC8944344; DOI: 10.7717/peerj.13152.
Abstract
Animal vocalisations and natural soundscapes are fascinating objects of study, and contain valuable evidence about animal behaviours, populations and ecosystems. They are studied in bioacoustics and ecoacoustics, with signal processing and analysis an important component. Computational bioacoustics has accelerated in recent decades due to the growth of affordable digital sound recording devices, and to huge progress in informatics such as big data, signal processing and machine learning. Methods are inherited from the wider field of deep learning, including speech and image processing. However, the tasks, demands and data characteristics are often different from those addressed in speech or music analysis. There remain unsolved problems, and tasks for which evidence is surely present in many acoustic signals, but not yet realised. In this paper I perform a review of the state of the art in deep learning for computational bioacoustics, aiming to clarify key concepts and identify and analyse knowledge gaps. Based on this, I offer a subjective but principled roadmap for computational bioacoustics with deep learning: topics that the community should aim to address, in order to make the most of future developments in AI and informatics, and to use audio data in answering zoological and ecological questions.
Affiliation(s)
- Dan Stowell
  - Department of Cognitive Science and Artificial Intelligence, Tilburg University, Tilburg, The Netherlands
  - Naturalis Biodiversity Center, Leiden, The Netherlands
9. Parsons MJG, Lin TH, Mooney TA, Erbe C, Juanes F, Lammers M, Li S, Linke S, Looby A, Nedelec SL, Van Opzeeland I, Radford C, Rice AN, Sayigh L, Stanley J, Urban E, Di Iorio L. Sounding the Call for a Global Library of Underwater Biological Sounds. Front Ecol Evol 2022. DOI: 10.3389/fevo.2022.810156.
Abstract
Aquatic environments encompass the world’s most extensive habitats, rich with sounds produced by a diversity of animals. Passive acoustic monitoring (PAM) is an increasingly accessible remote sensing technology that uses hydrophones to listen to the underwater world and represents an unprecedented, non-invasive method to monitor underwater environments. This information can assist in the delineation of biologically important areas via detection of sound-producing species or characterization of ecosystem type and condition, inferred from the acoustic properties of the local soundscape. At a time when worldwide biodiversity is in significant decline and underwater soundscapes are being altered as a result of anthropogenic impacts, there is a need to document, quantify, and understand biotic sound sources, potentially before they disappear. A significant step toward these goals is the development of a web-based, open-access platform that provides: (1) a reference library of known and unknown biological sound sources (by integrating and expanding existing libraries around the world); (2) a data repository portal for annotated and unannotated audio recordings of single sources and of soundscapes; (3) a training platform for artificial intelligence algorithms for signal detection and classification; and (4) a citizen science-based application for public users. Although these resources are often available individually at regional and taxon-specific scales, many are not sustained, and an enduring global database with an integrated platform has not yet been realized. We discuss the benefits such a program can provide, previous calls for global data-sharing and reference libraries, and the challenges that need to be overcome to bring together bio- and ecoacousticians, bioinformaticians, propagation experts, web engineers, and signal processing specialists (e.g., artificial intelligence) with the necessary support and funding to build a sustainable and scalable platform that could address the needs of all contributors and stakeholders into the future.
10. Goudarzi A, Spehr C, Herbold S. Expert decision support system for aeroacoustic source type identification using clustering. J Acoust Soc Am 2022; 151:1259. PMID: 35232112; DOI: 10.1121/10.0009322.
Abstract
This paper presents an Expert Decision Support System for the identification of time-invariant, aeroacoustic source types. The system comprises two steps: first, acoustic properties are calculated based on spectral and spatial information; second, clustering is performed based on these properties. The clustering aims to help and guide an expert toward quick identification of different source types, providing an understanding of how sources differ. This supports the expert in determining similar or atypical behavior. A variety of features are proposed for capturing the characteristics of the sources. These features represent aeroacoustic properties that can be interpreted both by the machine and by experts. The features are independent of the absolute Mach number, which enables the proposed method to cluster data measured at different flow configurations. The method is evaluated on deconvolved beamforming data from two scaled airframe half-model measurements. For this exemplary data, the proposed method results in clusters that mostly correspond to the source types identified by the authors. The clustering also provides, for each cluster, the mean feature values and the cluster hierarchy and, for each cluster member, a clustering confidence. This additional information makes the results transparent and allows the expert to understand the clustering choices.
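A hedged sketch of the clustering step: hierarchical clustering of Mach-independent source features, with the linkage providing the cluster hierarchy and per-member silhouette values standing in for the paper's own clustering-confidence measure. The feature matrix is a random placeholder, not the beamforming data.

```python
# Hierarchical clustering of source features with a hierarchy and per-member confidence proxy.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.metrics import silhouette_samples
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
# Placeholder: 60 deconvolved sources x 6 spectral/spatial features.
features = np.vstack([rng.normal(m, 0.5, size=(20, 6)) for m in (0.0, 2.0, 4.0)])
Z = StandardScaler().fit_transform(features)

link = linkage(Z, method="ward")               # the hierarchy an expert can inspect
labels = fcluster(link, t=3, criterion="maxclust")
confidence = silhouette_samples(Z, labels)     # per-member clustering confidence proxy

for k in np.unique(labels):
    members = labels == k
    print(f"cluster {k}: n={members.sum()}, mean features={Z[members].mean(axis=0).round(2)}, "
          f"mean confidence={confidence[members].mean():.2f}")
```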
Affiliation(s)
- C Spehr
  - German Aerospace Center (DLR), Germany
- S Herbold
  - Institute of Computer Science, University of Göttingen, Germany
11. Michalopoulou ZH, Gerstoft P, Kostek B, Roch MA. Introduction to the special issue on machine learning in acoustics. J Acoust Soc Am 2021; 150:3204. PMID: 34717489; DOI: 10.1121/10.0006783.
Abstract
The use of machine learning (ML) in acoustics has received much attention in the last decade. ML is unique in that it can be applied to all areas of acoustics, and it has transformative potential because it can extract statistically based new information about events observed in acoustic data. Acoustic data provide scientific and engineering insight ranging from biology and communications to ocean and Earth science. This special issue includes 61 papers, illustrating the very diverse applications of ML in acoustics.
Affiliation(s)
- Zoi-Heleni Michalopoulou
  - Department of Mathematical Sciences, New Jersey Institute of Technology, Newark, New Jersey 07102, USA
- Peter Gerstoft
  - Scripps Institution of Oceanography, University of California San Diego, La Jolla, California 92093, USA
- Bozena Kostek
  - Faculty of Electronics, Telecommunications and Informatics, Audio Acoustics Laboratory, Gdansk University of Technology (GUT), Gdansk, Poland
- Marie A Roch
  - Department of Computer Science, San Diego State University, San Diego, California 92182-7720, USA