1
Zhang JZ, Graf L, Banerjee A, Yeiser A, McHugh CI, Kymissis I, Lang JH, Olson ES, Nakajima HH. An Implantable Piezofilm Middle Ear Microphone: Performance in Human Cadaveric Temporal Bones. J Assoc Res Otolaryngol 2024; 25:53-61. [PMID: 38238525 PMCID: PMC10907555 DOI: 10.1007/s10162-024-00927-4] [Received: 02/28/2023] [Accepted: 12/31/2023]
Abstract
PURPOSE One of the major reasons that totally implantable cochlear implants are not readily available is the lack of good implantable microphones. An implantable microphone has the potential to provide a range of benefits over external microphones for cochlear implant users, including the filtering ability of the outer ear, cosmetics, and usability in all situations. This paper presents results from experiments in human cadaveric ears of a piezofilm microphone concept under development as a possible component of a future implantable microphone system for use with cochlear implants. This microphone, referred to here as a drum microphone (DrumMic), senses the robust and predictable motion of the umbo, the tip of the malleus. METHODS Performance was measured for five DrumMics inserted in four different human cadaveric temporal bones. Sensitivity, linearity, bandwidth, and equivalent input noise were measured during these experiments using a sound stimulus and measurement setup. RESULTS The sensitivity of the DrumMics was found to be tightly clustered across different microphones and ears despite differences in umbo and middle ear anatomy. The DrumMics behaved linearly across a large dynamic range (46 dB SPL to 100 dB SPL) and a wide bandwidth (100 Hz to 8 kHz). The equivalent input noise (over a bandwidth of 0.1-10 kHz) of the DrumMic and amplifier referenced to the ear canal was measured to be about 54 dB SPL in the temporal bone experiment and estimated to be 46 dB SPL after accounting for the pressure gain of the outer ear. CONCLUSION The results demonstrate that the DrumMic behaves robustly across ears and across fabrication. Its equivalent input noise (related to the lowest sound level measurable) approaches that of commercial hearing aid microphones.
To advance this demonstration of the DrumMic concept to a future prototype implantable in humans, work on encapsulation, biocompatibility, and connectorization will be required.
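The equivalent-input-noise figure quoted above can be illustrated with a short calculation: integrate the microphone-plus-amplifier output noise PSD over the band, refer it to the input through the sensitivity, and express the result in dB SPL re 20 µPa. The flat noise and sensitivity values below are invented placeholders for illustration, not figures from the paper.

```python
import numpy as np

def ein_db_spl(noise_psd_v2_hz, sensitivity_v_pa, freqs_hz):
    """Refer output voltage noise back to an input pressure via the
    sensitivity and express it in dB SPL re 20 uPa."""
    input_psd = noise_psd_v2_hz / sensitivity_v_pa**2   # Pa^2/Hz
    df = freqs_hz[1] - freqs_hz[0]                      # uniform grid assumed
    p_rms = np.sqrt(np.sum(input_psd) * df)             # band-limited rms pressure, Pa
    return 20 * np.log10(p_rms / 20e-6)

freqs = np.linspace(100, 10_000, 1000)   # the 0.1-10 kHz band from the abstract
noise = np.full_like(freqs, 1e-17)       # placeholder: flat 1e-17 V^2/Hz output noise
sens = np.full_like(freqs, 1e-3)         # placeholder: flat 1 mV/Pa sensitivity
print(f"EIN = {ein_db_spl(noise, sens, freqs):.1f} dB SPL")
```

With these made-up values the band-integrated noise works out to roughly 24 dB SPL; the point is the referral chain (PSD → input PSD → rms pressure → dB SPL), not the number.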
Affiliation(s)
- John Z Zhang
- Massachusetts Institute of Technology, Cambridge, USA
- Lukas Graf
- Harvard Medical School, Massachusetts Eye and Ear, Boston, USA
- Aaron Yeiser
- Massachusetts Institute of Technology, Cambridge, USA
2
Fahed VS, Doheny EP, Busse M, Hoblyn J, Lowery MM. Comparison of Acoustic Voice Features Derived From Mobile Devices and Studio Microphone Recordings. J Voice 2022:S0892-1997(22)00312-5. [PMID: 36379826 DOI: 10.1016/j.jvoice.2022.10.006] [Received: 08/15/2022] [Revised: 10/10/2022] [Accepted: 10/10/2022]
Abstract
OBJECTIVES/HYPOTHESIS Improvements in mobile device technology offer new opportunities for remote monitoring of voice for home and clinical assessment. However, there is a need to establish equivalence between features derived from signals recorded with mobile devices and with gold-standard microphone-preamplifiers. In this study, acoustic voice features from Android smartphone, tablet, and microphone-preamplifier recordings were compared. METHODS Data were recorded from 37 volunteers (20 female) with no history of speech disorder and six volunteers with Huntington's disease (HD) during sustained vowel (SV) phonation, reading passage (RP), and five-syllable repetition (SR) tasks. The following features were estimated: fundamental frequency median and standard deviation (F0 and SD F0), harmonics-to-noise ratio (HNR), local jitter, relative average perturbation of jitter (RAP), five-point period perturbation quotient (PPQ5), difference of differences of amplitudes and periods (DDA and DDP), shimmer, and amplitude perturbation quotients (APQ3, APQ5, and APQ11). RESULTS Bland-Altman analysis revealed good agreement between microphone and mobile devices for fundamental frequency, jitter, RAP, PPQ5, and DDP during all tasks, and a bias for HNR, shimmer, and its variants (APQ3, APQ5, APQ11, and DDA). Significant differences were observed between devices for HNR, shimmer, and its variants for all tasks. High correlation was observed between devices for all features except SD F0 for RP. Similar results were observed in the HD group for the SV and SR tasks. Biological sex had a significant effect on F0 and HNR during all tasks, and on jitter, RAP, PPQ5, DDP, and shimmer for RP and SR. No significant effect of age was observed. CONCLUSIONS Mobile devices showed good agreement with state-of-the-art, high-quality microphones during structured speech tasks for features derived from the frequency components of the audio recordings.
Caution should be taken when estimating HNR, shimmer, and its variants from recordings made with mobile devices.
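Two of the perturbation features compared in this abstract, local jitter and RAP, can be sketched directly from a sequence of glottal periods. The formulas below follow the common Praat-style definitions; the period values are synthetic, for illustration only.

```python
import numpy as np

def local_jitter(periods):
    """Mean absolute difference between consecutive periods,
    relative to the mean period."""
    p = np.asarray(periods, dtype=float)
    return np.mean(np.abs(np.diff(p))) / np.mean(p)

def rap(periods):
    """Relative average perturbation: each period compared with the
    3-point moving average centred on it, relative to the mean period."""
    p = np.asarray(periods, dtype=float)
    avg3 = (p[:-2] + p[1:-1] + p[2:]) / 3.0
    return np.mean(np.abs(p[1:-1] - avg3)) / np.mean(p)

periods = [0.0100, 0.0102, 0.0099, 0.0101, 0.0100]  # ~100 Hz voice, synthetic
print(f"jitter = {local_jitter(periods):.4f}, RAP = {rap(periods):.4f}")
```

Frequency-domain features like these depend only on period timing, which is one reason they transfer well across recording devices, whereas amplitude-based features (shimmer and its variants) inherit each device's gain and compression behaviour.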
Affiliation(s)
- Vitória S Fahed
- School of Electrical and Electronic Engineering, University College Dublin, Dublin, Ireland; Insight Centre for Data Analytics, University College Dublin, Dublin, Ireland
- Emer P Doheny
- School of Electrical and Electronic Engineering, University College Dublin, Dublin, Ireland; Insight Centre for Data Analytics, University College Dublin, Dublin, Ireland
- Monica Busse
- Centre for Trials Research, Cardiff University, Cardiff, UK
- Jennifer Hoblyn
- School of Medicine, Trinity College Dublin, Dublin, Ireland; Bloomfield Health Services, Dublin, Ireland
- Madeleine M Lowery
- School of Electrical and Electronic Engineering, University College Dublin, Dublin, Ireland; Insight Centre for Data Analytics, University College Dublin, Dublin, Ireland
3
Lim C, Kim J, Kim J, Kang BG, Nam Y. Estimation of respiratory rate in various environments using microphones embedded in face masks. J Supercomput 2022; 78:19228-19245. [PMID: 35754514 PMCID: PMC9206076 DOI: 10.1007/s11227-022-04622-0] [Accepted: 05/19/2022]
Abstract
Wearable health devices that track respiratory rate (RR) have drawn attention in the healthcare domain because they help healthcare workers monitor patients' health status continuously and non-invasively. However, to monitor health status outside professional healthcare settings, the reliability of such wearable devices needs to be evaluated in complex environments (e.g., public streets and transportation). Therefore, this study proposes a method to estimate RR from breathing sounds recorded by a microphone placed inside three types of masks: surgical, respirator (Korean Filter 94), and reusable. The Welch periodogram method was used to estimate the power spectral density of the breathing signals and thereby measure the RR. We evaluated the proposed method on data collected from 10 healthy participants in four different environments: indoor (office) and outdoor (public street, public bus, and subway). The method achieved errors as low as 0% for accuracy and repeatability in most cases. This research demonstrates that the acoustic-based method could be employed in a wearable device to monitor RR continuously, even outside the hospital environment.
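A minimal sketch of the Welch-periodogram step described above: estimate the PSD of the breathing signal and convert the dominant frequency within a plausible respiratory band into breaths per minute. The synthetic 15-breaths/min signal, the 100 Hz sample rate, and the 0.1-0.7 Hz search band are assumptions for illustration, not parameters from the paper.

```python
import numpy as np
from scipy.signal import welch

fs = 100.0                                    # assumed sample rate of the breathing signal
t = np.arange(0, 60, 1 / fs)                  # one minute of data
true_bpm = 15
x = np.sin(2 * np.pi * (true_bpm / 60) * t)   # synthetic breathing component
x += 0.3 * np.random.default_rng(0).standard_normal(t.size)  # background noise

f, psd = welch(x, fs=fs, nperseg=1600)        # Welch power spectral density
band = (f >= 0.1) & (f <= 0.7)                # 6-42 breaths/min search band
rr_bpm = 60 * f[band][np.argmax(psd[band])]   # dominant frequency -> RR
print(f"estimated RR: {rr_bpm:.1f} breaths/min")
```

Restricting the peak search to a physiological band is what gives the method robustness in noisy environments: traffic and speech energy mostly falls well above the respiratory band.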
Affiliation(s)
- Chhayly Lim
- Department of ICT Convergence, Soonchunhyang University, Asan 31538, South Korea
- Jungyeon Kim
- ICT Convergence Research Center, Soonchunhyang University, Asan 31538, South Korea
- Jeongseok Kim
- Department of ICT Convergence, Soonchunhyang University, Asan 31538, South Korea
- Byeong-Gwon Kang
- Department of Information and Communication Engineering, Soonchunhyang University, Asan 31538, South Korea
- Yunyoung Nam
- Department of Computer Science and Engineering, Soonchunhyang University, Asan 31538, South Korea
4
Pires IM, Garcia NM, Zdravevski E, Lameski P. Indoor and outdoor environmental data: A dataset with acoustic data acquired by the microphone embedded on mobile devices. Data Brief 2021; 36:107051. [PMID: 34007870 PMCID: PMC8111260 DOI: 10.1016/j.dib.2021.107051] [Received: 12/29/2020] [Revised: 03/16/2021] [Accepted: 04/07/2021]
Abstract
All mobile devices include a microphone that can be used for acoustic data acquisition. This article presents a dataset of acoustic signals related to nine environments, captured with the microphone embedded in off-the-shelf mobile devices. The mobile phone could be placed in a pants pocket, in a wristband, on a bedside table, on a table, or on other furniture. The data collection environments are bar, classroom, gym, kitchen, library, street, hall, living room, and bedroom. The data were collected by 25 individuals (15 men and 10 women) in different environments around the Covilhã and Fundão municipalities (Portugal). The microphone data were sampled at 44,100 Hz into an array of unsigned integer values in the range [0, 255], with an offset of 128 representing zero. The dataset provides at least 2000 samples of 5 s of data for each environment, corresponding to around 2.8 h per environment, stored in text files. In total, it includes at least 25.2 h of acoustic data for the application of data processing techniques, e.g., the Fast Fourier Transform (FFT), and of machine learning methods for different analyses.
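As a sketch of how the stored samples might be prepared for the FFT-based processing the authors mention: subtract the 128 offset, scale, window, and locate the spectral peak. The 1 kHz test tone below is a synthetic stand-in; real data would be read from the dataset's text files.

```python
import numpy as np

fs = 44_100                                    # sampling rate stated in the abstract
t = np.arange(4096) / fs
# synthetic stand-in for one chunk of stored samples: offset-128 unsigned values
raw = np.clip(np.round(128 + 100 * np.sin(2 * np.pi * 1000 * t)), 0, 255)

centred = (raw - 128) / 128.0                  # remove offset, scale to roughly [-1, 1)
spectrum = np.abs(np.fft.rfft(centred * np.hanning(centred.size)))
peak_hz = np.fft.rfftfreq(centred.size, 1 / fs)[np.argmax(spectrum)]
print(f"dominant frequency: {peak_hz:.0f} Hz")
```

The Hann window limits spectral leakage so the peak bin lands at the tone frequency to within one bin width (about 10.8 Hz for a 4096-point frame at 44.1 kHz).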
Affiliation(s)
- Ivan Miguel Pires
- Instituto de Telecomunicações, Universidade da Beira Interior, 6200-001 Covilhã, Portugal; Department of Computer Science, Polytechnic Institute of Viseu, 3504-510 Viseu, Portugal; UICISA:E Research Centre, School of Health, Polytechnic Institute of Viseu, 3504-510 Viseu, Portugal
- Nuno M Garcia
- Instituto de Telecomunicações, Universidade da Beira Interior, 6200-001 Covilhã, Portugal
- Eftim Zdravevski
- Faculty of Computer Science and Engineering, University Ss Cyril and Methodius, 1000 Skopje, North Macedonia
- Petre Lameski
- Faculty of Computer Science and Engineering, University Ss Cyril and Methodius, 1000 Skopje, North Macedonia
5
Abstract
Totally implantable cochlear implants may be able to address many of the problems cochlear implant users face around cosmetic appearance, discomfort, and restriction of activities. The major technological challenges that must be solved to develop a totally implantable device relate to implanted microphone performance. Previous attempts at implanting microphones for cochlear implants have not performed as well as conventional cochlear implant microphones and have also struggled with extraneous body or surface contact noise. Microphones can be implanted under the skin or act as sensors in the middle ear; evidence from middle ear implants suggests that body and contact noise can be overcome by converting ossicular chain movements into digital signals. This article reviews implantable microphone systems and discusses the technology behind them.
Affiliation(s)
- Alistair Mitchell-Innes
- University Hospital Birmingham NHS Foundation Trust, Mindelsohn Way, Edgbaston, Birmingham B15 2TH, UK
- Robert Morse
- School of Engineering, University of Warwick, Library Road, Coventry CV4 7AL, UK
- Richard Irving
- University Hospital Birmingham NHS Foundation Trust, Mindelsohn Way, Edgbaston, Birmingham B15 2TH, UK
- Philip Begg
- University Hospital Birmingham NHS Foundation Trust, Mindelsohn Way, Edgbaston, Birmingham B15 2TH, UK
6
Movahedi F, Kurosu A, Coyle JL, Perera S, Sejdić E. A comparison between swallowing sounds and vibrations in patients with dysphagia. Comput Methods Programs Biomed 2017; 144:179-187. [PMID: 28495001 PMCID: PMC5455149 DOI: 10.1016/j.cmpb.2017.03.009] [Received: 08/09/2016] [Revised: 01/27/2017] [Accepted: 03/09/2017]
Abstract
Cervical auscultation refers to the observation and analysis of sounds or vibrations captured during swallowing using either a stethoscope or acoustic/vibratory detectors. Microphones and accelerometers have recently become two common sensors used in modern cervical auscultation methods. There are open questions about whether swallowing signals recorded by these two sensors provide unique or complementary information about swallowing function, or whether the information they present is interchangeable. This study compares swallowing signals recorded by a microphone and a tri-axial accelerometer from 72 patients (mean age 63.94 ± 12.58 years; 42 male, 30 female) who underwent videofluoroscopic examination. The participants swallowed one or more boluses of liquids of different consistencies, including thin liquids, nectar-thick liquids, and pudding, given either as a comfortable self-selected volume from a cup or as a volume controlled by the examiner from a 5 ml spoon. A broad feature set was extracted in the time, information-theoretic, and frequency domains from each of the 881 swallows analyzed in this study. The swallowing sounds exhibited significantly higher frequency content and kurtosis values than the swallowing vibrations, whereas Lempel-Ziv complexity was lower for swallowing sounds than for swallowing vibrations. In conclusion, the information that microphones and accelerometers provide about swallowing function is unique, and the two transducers are not interchangeable. Consequently, the selection of transducer will be a vital step in future studies.
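The Lempel-Ziv complexity feature named in the abstract can be illustrated with a simple variant: binarise the signal about its median and count the phrases produced by a greedy LZ78-style parse. This is one of several common LZ complexity definitions and may differ from the exact variant the authors used; the two synthetic signals below merely stand in for broadband swallowing sounds and smoother vibrations.

```python
import numpy as np

def binarise(x):
    """Median-threshold a signal into a '0'/'1' symbol string."""
    m = np.median(x)
    return ''.join('1' if v > m else '0' for v in x)

def lz_phrase_count(bits):
    """Greedy LZ78-style parse: count distinct phrases seen so far."""
    phrases, current = set(), ''
    for b in bits:
        current += b
        if current not in phrases:   # new phrase completed
            phrases.add(current)
            current = ''
    return len(phrases) + (1 if current else 0)

rng = np.random.default_rng(42)
noisy = rng.standard_normal(2000)                        # broadband, "sound-like"
tonal = np.sin(2 * np.pi * 5 * np.arange(2000) / 2000)   # smooth, "vibration-like"
print(lz_phrase_count(binarise(noisy)), lz_phrase_count(binarise(tonal)))
```

An irregular signal produces many short novel phrases and thus a high count, while a slowly varying periodic signal produces long repeated runs and a low count, which is the intuition behind using LZ complexity to separate the two transducers' signals.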
Affiliation(s)
- Faezeh Movahedi
- Department of Electrical and Computer Engineering, Swanson School of Engineering, University of Pittsburgh, Pittsburgh, PA, USA
- Atsuko Kurosu
- Department of Communication Science and Disorders, School of Health and Rehabilitation Sciences, University of Pittsburgh, Pittsburgh, PA, USA
- James L Coyle
- Department of Communication Science and Disorders, School of Health and Rehabilitation Sciences, University of Pittsburgh, Pittsburgh, PA, USA
- Subashan Perera
- Department of Medicine, Division of Geriatric Medicine, University of Pittsburgh, Pittsburgh, PA, USA
- Ervin Sejdić
- Department of Electrical and Computer Engineering, Swanson School of Engineering, University of Pittsburgh, Pittsburgh, PA, USA