1. Extremely compact sources (ECS): a new potential field filtering method. Sci Rep 2024;14:11950. PMID: 38789581. DOI: 10.1038/s41598-024-62751-3.
Abstract
We present a new filtering method for potential fields, based on modelling the fields in terms of very compact solutions, i.e., sources that occupy the smallest allowable volume in the source domain. The selected solutions, which we call "Extremely Compact Sources" (ECS), form a sort of atomized model that still satisfies the non-unique inverse problem of gravity and magnetic fields. The ECS model is characterized not only by sparsity but also by large values of the physical property (density or magnetic susceptibility). The sparse nature of the model allows the definition of a highly localized filter, obtained by simply specifying the atoms to be selected in a given area. This feature allows managing tasks normally impossible with traditional filters, such as the separation of interfering anomalies with similar wavenumber content. In addition, the procedure can perform a very effective regional/residual separation. We demonstrate the method on synthetic cases and apply it to real gravity data from the Campi Flegrei volcanic area (Italy), where we use ECS filtering to isolate the gravity effect of the Mount Olibano dome.
2. Attentional focusing and filtering in multisensory categorization. Psychon Bull Rev 2024;31:708-720. PMID: 37673842. DOI: 10.3758/s13423-023-02370-7.
Abstract
Selective attention refers to the ability to focus on goal-relevant information while filtering out irrelevant information. In a multisensory context, how do people selectively attend to multiple inputs when making categorical decisions? Here, we examined the role of selective attention in cross-modal categorization in two experiments. In a speed categorization task, participants were asked to attend to visual or auditory targets and categorize them while ignoring other irrelevant stimuli. A response-time extended multinomial processing tree (RT-MPT) model was implemented to estimate the contribution of attentional focusing on task-relevant information and attentional filtering of distractors. The results indicated that the role of selective attention was modality-specific, with differences found in attentional focusing and filtering between visual and auditory modalities. Visual information could be focused on or filtered out more effectively, whereas auditory information was more difficult to filter out, causing greater interference with task-relevant performance. The findings suggest that selective attention plays a critical and differential role across modalities, which provides a novel and promising approach to understanding multisensory processing and attentional focusing and filtering mechanisms of categorical decision-making.
3. A comparison of different methods to maximise signal extraction when using central venous pressure to optimise atrioventricular delay after cardiac surgery. IJC Heart & Vasculature 2024;51:101382. PMID: 38496260. PMCID: PMC10944103. DOI: 10.1016/j.ijcha.2024.101382.
Abstract
Objective Our group has shown that central venous pressure (CVP) can optimise atrioventricular (AV) delay in temporary pacing (TP) after cardiac surgery. However, the signal-to-noise ratio (SNR) is influenced both by the methods used to mitigate the pressure effects of respiration and the number of heartbeats analysed. This paper systematically studies the effect of different analysis methods on SNR to maximise the accuracy of this technique. Methods We optimised AV delay in 16 patients with TP after cardiac surgery. Transitioning rapidly and repeatedly from a reference AV delay to different tested AV delays, we measured pressure differences before and after each transition. We analysed the resultant signals in different ways with the aim of maximising the SNR: (1) adjusting averaging window location (around versus after transition), (2) modifying window length (heartbeats analysed), and (3) applying different signal filtering methods to correct respiratory artefact. Results (1) The SNR was 27 % higher for averaging windows around the transition versus post-transition windows. (2) The optimal window length for CVP analysis was two respiratory cycle lengths versus one respiratory cycle length for optimising SNR for arterial blood pressure (ABP) signals. (3) Filtering with discrete wavelet transform improved SNR by 62 % for CVP measurements. When applying the optimal window length and filtering techniques, the correlation between ABP and CVP peak optima exceeded that of a single cycle length (R = 0.71 vs. R = 0.50, p < 0.001). Conclusion We demonstrated that utilising a specific set of techniques maximises the signal-to-noise ratio and hence the utility of this technique.
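The abstract's key processing steps, discrete-wavelet-transform denoising of a pressure signal and SNR comparison, can be sketched as follows. This is our illustrative pure-Python reconstruction (one-level Haar transform with soft thresholding), not the authors' implementation, and the threshold value is an assumption:

```python
# Illustrative sketch (not the paper's code): one-level Haar wavelet denoising
# of a pressure-like signal, standing in for the discrete wavelet transform
# filtering the abstract reports improved SNR by 62% for CVP measurements.
import math

def haar_denoise(signal, threshold):
    """One-level Haar DWT, soft-threshold the detail coefficients, invert."""
    approx = [(signal[i] + signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal) - 1, 2)]
    # Soft thresholding: shrink small (noise-dominated) detail coefficients.
    detail = [math.copysign(max(abs(d) - threshold, 0.0), d) for d in detail]
    out = []
    for a, d in zip(approx, detail):
        out.append((a + d) / math.sqrt(2))   # even-index sample
        out.append((a - d) / math.sqrt(2))   # odd-index sample
    return out

def snr_db(clean, noisy):
    """Signal-to-noise ratio in dB of `noisy` against a clean reference."""
    p_sig = sum(c * c for c in clean)
    p_err = sum((c - n) ** 2 for c, n in zip(clean, noisy)) or 1e-12
    return 10 * math.log10(p_sig / p_err)
```

For a slowly varying waveform corrupted by high-frequency noise, the detail coefficients carry mostly noise, so thresholding them raises the SNR while leaving the pressure trend intact.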
4. Implementation of Adaptive-Bayesian DStoch technique for obtaining winds from MST radar covering higher altitudes. Heliyon 2024;10:e26316. PMID: 38420412. PMCID: PMC10900930. DOI: 10.1016/j.heliyon.2024.e26316.
Abstract
It is challenging to estimate winds accurately at higher altitudes using VHF-MST radar. The current study introduces the Adaptive-Bayesian Deterministic Stochastics Technique (ADStoch), which implements an Empirical Bayesian 1D prediction method using stochastics to analyze radar signals. A new and robust estimator for empirical wavelet shrinkage with a Gaussian prior of nonzero mean for the wavelet coefficients is presented, which distinguishes the current prior from other priors. The mean parameters and the prior covariance hyperparameters are computed by a pseudo maximum likelihood method. Details on the implemented algorithm, developed from scratch using C#, are also presented. Based on an analysis of moments and quality, this technique outperforms the contemporary techniques discussed in this context at recovering signals buried in noise. The estimated wind is cross-validated for accuracy against the wind observed by a GPS radiosonde operated simultaneously. This technique can consistently extract 3D wind up to the 25.5 km-28.2 km range, improving on the conventional maximum altitude of 21.2 km in real time for the MST radar. It is concluded that the ADStoch analysis technique can effectively recover VHF-MST radar signals at significantly higher altitudes, which is helpful in various scientific investigations.
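The central idea, shrinking a wavelet coefficient toward a nonzero prior mean, can be sketched with the standard Gaussian posterior-mean formula. This is a toy stand-in for the paper's estimator: the moment-matching hyperparameter step below is our simplification of the pseudo maximum likelihood computation, not the authors' method:

```python
# Hypothetical sketch of the key idea: empirical Bayes shrinkage of wavelet
# coefficients under a Gaussian prior with NONZERO mean, which the abstract
# highlights as distinguishing this prior from common zero-mean priors.
def eb_shrink(coeff, prior_mean, prior_var, noise_var):
    """Posterior-mean estimate of a coefficient observed in Gaussian noise.

    Model: y ~ N(theta, noise_var), theta ~ N(prior_mean, prior_var).
    The estimate pulls the observation toward the prior mean.
    """
    w = prior_var / (prior_var + noise_var)   # shrinkage weight in [0, 1)
    return w * coeff + (1 - w) * prior_mean

def estimate_hyperparams(coeffs, noise_var):
    """Crude moment-matching stand-in for the pseudo maximum likelihood step:
    prior mean from the sample mean, prior variance from the excess variance."""
    n = len(coeffs)
    m = sum(coeffs) / n
    s2 = sum((c - m) ** 2 for c in coeffs) / n
    return m, max(s2 - noise_var, 0.0)
```

With a zero-mean prior, `eb_shrink` collapses to the usual linear shrinkage toward zero; a nonzero mean instead pulls coefficients toward the level the data themselves suggest.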
5. A unified filtering method for estimating asymmetric orientation distribution functions. Neuroimage 2024;287:120516. PMID: 38244878. DOI: 10.1016/j.neuroimage.2024.120516.
Abstract
Numerous filtering methods have been proposed for estimating asymmetric orientation distribution functions (ODFs) for diffusion magnetic resonance imaging (dMRI). It can be hard to make sense of all these different methods, which share similar features and result in similar outputs. In this work, we disentangle these many filtering methods proposed in the past and combine them into a novel, unified filtering equation. We also propose a self-supervised data-driven approach for calibrating the filtering parameter values. Our equation is implemented in an open-source GPU-accelerated Python software to facilitate its integration into any existing dMRI processing pipeline. Our method is applied on multi-shell multi-tissue fiber ODFs from the Human Connectome Project dataset (1.25 mm³ native resolution) and on single-shell single-tissue fiber ODFs from the Bilingualism and the Brain dataset (2.0 mm³ isotropic resolution) to evaluate the occurrence of asymmetric patterns at different spatial resolutions, representing cutting-edge and "clinical" research data. Asymmetry measures such as the asymmetric index (ASI) and our novel number of fiber directions (NuFiD) are then used to explain the behaviour of our method in these images. The contributions of this work are: (i) the disentanglement and unification of filtering methods for estimating asymmetric ODFs; (ii) a calibration method for automatically fixing the parameters governing the filtering; (iii) an open-source, efficient implementation of our unified filtering method for estimating asymmetric ODFs; (iv) a novel number of fiber directions (NuFiD) index for explaining asymmetric fiber configurations; and (v) a novel template of asymmetries, revealing that our filtering method estimates asymmetric configurations in at least 50% of the brain voxels (∼31% of the white matter and ∼63% of the gray matter).
6. Head Exposure to Acceleration Database in Sport (HEADSport): a kinematic signal processing method to enable instrumented mouthguard (iMG) field-based inter-study comparisons. BMJ Open Sport Exerc Med 2024;10:e001758. PMID: 38304714. PMCID: PMC10831454. DOI: 10.1136/bmjsem-2023-001758.
Abstract
Objective Instrumented mouthguard (iMG) systems use different signal processing approaches, limiting field-based inter-study comparisons, especially when artefacts are present in the signal. The objective of this study was to assess the frequency content and characteristics of head kinematic signals from head impact reconstruction laboratory and field-based environments to develop an artefact attenuation filtering method (HEADSport filter method). Methods Laboratory impacts (n=72) on a test-dummy headform ranging from 25 to 150 g were conducted and 126 rugby union players were equipped with iMGs for 209 player-matches. Power spectral density (PSD) characteristics of the laboratory impacts and on-field head acceleration events (HAEs; n=5694), such as the 95th percentile cumulative sum PSD frequency, were used to develop the HEADSport method. The HEADSport filter method was compared with two other common filtering approaches (Butterworth-200 Hz and CFC180 filters) through signal-to-noise ratio (SNR) and mixed linear effects models for laboratory and on-field events, respectively. Results The HEADSport filter method produced marginally higher SNR than the Butterworth-200 Hz and CFC180 filters, and on-field peak linear acceleration (PLA) and peak angular acceleration (PAA) values within the magnitude range tested in the laboratory. Median PLA and PAA (and outlier values) were higher for the CFC180 filter than for the Butterworth-200 Hz filter and the HEADSport filter method (p<0.01). Conclusion The HEADSport filter method could enable iMG field-based inter-study comparisons and is openly available at https://github.com/GTBiomech/HEADSport-Filter-Method.
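The statistic the abstract builds its filter design around, the 95th percentile cumulative-sum PSD frequency, can be sketched as below. This is our illustrative reconstruction with a naive DFT, not the HEADSport code, and the function names are ours:

```python
# Illustrative sketch: the smallest frequency below which 95% of a signal's
# spectral power lies -- the cumulative-sum PSD statistic the abstract uses
# to characterise head kinematic signals and set an artefact-attenuating filter.
import cmath
import math

def power_spectrum(x, fs):
    """Naive one-sided DFT power spectrum with frequency axis in Hz."""
    n = len(x)
    freqs, power = [], []
    for k in range(n // 2 + 1):
        s = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        freqs.append(k * fs / n)
        power.append(abs(s) ** 2)
    return freqs, power

def cumsum_psd_frequency(x, fs, fraction=0.95):
    """Smallest frequency below which `fraction` of the total power lies."""
    freqs, power = power_spectrum(x, fs)
    total = sum(power)
    running = 0.0
    for f, p in zip(freqs, power):
        running += p
        if running >= fraction * total:
            return f
    return freqs[-1]
```

A low-pass cut-off chosen just above this frequency keeps essentially all of the head-motion content while attenuating higher-frequency artefact.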
7. Fast and easy method to culture and obtain large populations of male nematodes. MethodsX 2023;11:102293. PMID: 37539340. PMCID: PMC10393782. DOI: 10.1016/j.mex.2023.102293.
Abstract
Caenorhabditis elegans is a model system widely used in fundamental research. Even though nematodes are easy to maintain in the laboratory, obtaining large populations of worms requires a lot of work and is time consuming. Furthermore, because C. elegans is mainly hermaphroditic, it is even more complicated to obtain large amounts of males, which makes high-throughput experiments using C. elegans males very challenging. To overcome these limitations, we developed affordable and rapid methods to: (1) grow large synchronous worm populations, and (2) easily obtain large amounts of males. We developed a culture method on plates to grow big synchronized worm populations with the standard incubators used in all worm labs. We also established an easy filtration method that yields large male populations in an hour. After filtering, the worm population contains more than 90% adult males and no adult hermaphrodites, since all the contaminants are larvae and embryos. The culture and filtering methods we developed are easy to implement and require a very limited investment in equipment and consumables beyond the standard ones present in worm labs. In addition, this filtering method could be applied to nematode species similar in size to C. elegans.
8. Evaluating overtaking and filtering maneuver of motorcyclists and car drivers using advanced trajectory data analysis. Int J Inj Contr Saf Promot 2023;30:530-546. PMID: 37343167. DOI: 10.1080/17457300.2023.2225162.
Abstract
The present paper compares the interactions of motorized two-wheelers (MTWs) and passenger cars with the rest of the traffic on urban roads during overtaking and filtering maneuvers. To better understand the filtering maneuvers of motorcyclists and car drivers, an attempt was made to propose a new measure, i.e. the pore size ratio. Additionally, the factors affecting lateral width acceptance for motorcyclists and car drivers while overtaking and filtering were studied using advanced trajectory data. A regression model was developed to identify the significant factors affecting motorcyclists' and car drivers' decisions to accept lateral width with the adjacent vehicle while performing overtaking and filtering maneuvers. Finally, a comparative analysis between machine learning and the probit model revealed that, in the present case, machine learning models perform better than the probit model in terms of discernment power. The findings of this study will help improve the capability of existing microsimulation tools.
9. A computational view of short-term plasticity and its implications for E-I balance. Curr Opin Neurobiol 2023;81:102729. PMID: 37245258. DOI: 10.1016/j.conb.2023.102729.
Abstract
Short-term plasticity (STP) and excitatory-inhibitory balance (EI balance) are both ubiquitous building blocks of brain circuits across the animal kingdom. The synapses involved in EI balance are also subject to short-term plasticity, and several experimental studies have shown that their effects overlap. Recent computational and theoretical work has begun to highlight the functional implications of the intersection of these motifs. The findings are nuanced: while there are general computational themes, such as pattern tuning, normalization, and gating, much of the richness of these interactions comes from region- and modality-specific tuning of STP properties. Together, these findings point towards the STP-EI balance combination as a versatile and highly efficient neural building block for a wide range of pattern-specific responses.
10. Strategic filtering of high-energy visible light expands neural correlates of functional vision particularly in older participants. Heliyon 2023;9:e17271. PMID: 37539228. PMCID: PMC10394902. DOI: 10.1016/j.heliyon.2023.e17271.
Abstract
In this study we assessed the neural correlates of functional vision while varying patterns of light filtration. Four filter conditions used relatively flat filtering across the visible spectrum while one filter was a step filter that selectively absorbed violet light (wavelengths below about 415 nm). Neural effects were quantified by measuring the BOLD response (T2*-based fMRI) while subjects performed a challenging visual task (judging gap direction in Landolt Cs that randomly varied in size). In general (based on a p < 0.01 directional criterion not corrected for aggregated error), as filtering increased (less interference by bright light), brain activity associated with the task also increased. This effect, even using the most conservative statistics, was most evident when using the violet filter (especially for the older subjects) despite only reducing the very highest energy portion of the visible spectrum. This finding suggests that filtering can increase neural activity associated with functional vision; such effects might be achievable through filtering just the highest visible energy (violet).
11. Age differences in the use of positive and negative cues to filter distracting information from working memory. Atten Percept Psychophys 2023;85:1207-1218. PMID: 37012577. DOI: 10.3758/s13414-023-02695-4.
Abstract
Previous research has demonstrated that as people age, visual working memory (VWM) declines. One potential explanation for this decline is that older adults are less able to ignore irrelevant information, which contributes to VWM filtering deficits. Most research examining age differences in filtering ability has used positive cues (indicating which items to pay attention to), but negative cues (indicating which items to ignore) may be even harder for older adults to implement as some work suggests that negatively cued items are first paid attention to before they are suppressed. The current study aimed to test whether older adults can use negative cues to filter irrelevant information from VWM. Across two experiments, young and older adults were presented with two (Experiment 1) or four (Experiment 2) display items, preceded by a neutral, negative, or positive cue. After a delay, participants reported the target's orientation in a continuous-response task. Results show that both groups benefitted from being provided with a cue (positive or negative) compared to no cue (i.e., neutral condition), but the benefit was smaller for negative cues. Thus, although negative cues aid in filtering of VWM, they are less effective than positive cues, possibly due to residual attention being directed towards distractor items.
12. The influence of smoothing techniques on the accuracy of the reference finite helical axis when applied to 2D-3D registrations. Med Biol Eng Comput 2023. PMID: 36914925. DOI: 10.1007/s11517-023-02813-2.
Abstract
High-speed biplanar videoradiography (HSBV) permits recording of 3D bone movements with sub-millimeter precision. 2D-3D registrations are performed to quantify bone movements, providing a series of affine transformation matrices (ATMs). These registrations may result in alignment errors that produce inaccurate kinematics. Smoothing techniques can be applied to the ATMs to reduce these inaccuracies. Which techniques are best for this application remains unknown. The purpose of this study was to investigate the performance of six smoothing techniques on ATMs obtained from HSBV. Performance was assessed by measuring the accuracy of three reference finite helical axis (rFHA) measures during a turntable rotation: orientation, dispersion, and rotation speed difference (RSD = rFHA RS - turntable RS). A 3D-printed femur and tibia were mounted to the turntable and rotations were recorded with HSBV. The rFHA was calculated for the bones using each smoothing technique and ranked using a Friedman test. The relative percent change from the unsmoothed data was reported. A spline filter with outlier removal (SPOUT) was ranked the best technique, producing the most accurate RSDs for the femur (-79.64%) and tibia (-70.59%). SPOUT was the top-performing smoothing technique. Further investigations using SPOUT are required for in vivo human movements.
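The quantity at the heart of this evaluation, the finite helical axis, can be extracted from a rotation matrix with the standard axis-angle formulas. The sketch below is our illustration of that extraction step only (not the smoothing pipeline), and assumes the rotation angle is away from 0 and 180 degrees:

```python
# Hedged sketch: the orientation and rotation angle of the finite helical
# axis (FHA) encoded by a 3x3 rotation matrix -- the measure whose accuracy
# the study uses to rank smoothing techniques. Pure Python, illustration only.
import math

def finite_helical_axis(R):
    """Unit rotation axis and angle (radians) of rotation matrix R.

    Angle from the trace; axis from the skew-symmetric part of R.
    Assumes 0 < angle < pi so the axis is well defined.
    """
    trace = R[0][0] + R[1][1] + R[2][2]
    angle = math.acos(max(-1.0, min(1.0, (trace - 1.0) / 2.0)))
    s = 2.0 * math.sin(angle)
    axis = ((R[2][1] - R[1][2]) / s,
            (R[0][2] - R[2][0]) / s,
            (R[1][0] - R[0][1]) / s)
    return axis, angle
```

Because the axis comes from small differences of matrix entries, registration noise in the ATMs propagates directly into FHA dispersion, which is why smoothing the ATMs first matters.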
13. A novel framework for the removal of pacing artifacts from bio-electrical recordings. Comput Biol Med 2023;155:106673. PMID: 36805227. DOI: 10.1016/j.compbiomed.2023.106673.
Abstract
BACKGROUND Electroceuticals provide clinical solutions for a range of disorders, including Parkinson's disease and cardiac arrhythmias, and are emerging as a potential treatment option for gastrointestinal disorders. However, pre-clinical investigations are challenged by the large stimulation artifacts registered in bio-electrical recordings. METHOD A generalized framework capable of isolating and suppressing stimulation artifacts with minimal intervention was developed. Stimulation artifacts with different pulse-parameters in synthetic and experimental cardiac and gastrointestinal signals were detected using a Hampel filter and reconstructed using 3 methods: i) autoregression, ii) weighted mean, and iii) linear interpolation. RESULTS Synthetic stimulation artifacts with amplitudes of 2 mV and 4 mV and pulse-widths of 50 ms, 100 ms, and 200 ms were successfully isolated and the artifact window size remained uninfluenced by the pulse-amplitude, but was influenced by pulse-width (e.g., the autoregression method resulted in an identical Root Mean Square Error (RMSE) of 1.64 mV for artifacts with 200 ms pulse-width and both 2 mV and 4 mV amplitudes). The performance of autoregression (RMSE = 1.45 ± 0.16 mV) and linear interpolation (RMSE = 1.22 ± 0.14 mV) methods were comparable and better than weighted mean (RMSE = 5.54 ± 0.56 mV) for synthetic data. However, for experimental recordings, artifact removal by autoregression was superior to both linear interpolation and weighted mean approaches in gastric, small intestinal and cardiac recordings. CONCLUSIONS A novel signal processing framework enabled efficient analysis of bio-electrical recordings with stimulation artifacts. This will allow the bio-electrical events induced by stimulation protocols to be efficiently and systematically evaluated, resulting in improved stimulation therapies.
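Two of the framework's stages, Hampel-filter detection of artifact samples and linear-interpolation reconstruction across them, can be sketched in a few lines. The window size and threshold below are illustrative choices, not the paper's parameters:

```python
# Minimal sketch of the framework's detect-and-reconstruct idea as we read it:
# a Hampel filter flags samples far from the local median (in robust MAD
# units), and flagged runs are rebuilt by linear interpolation.
import statistics

def hampel_flags(x, window=3, n_sigmas=3.0):
    """Flag samples deviating from the local median by > n_sigmas * sigma."""
    flags = [False] * len(x)
    for i in range(len(x)):
        lo, hi = max(0, i - window), min(len(x), i + window + 1)
        med = statistics.median(x[lo:hi])
        mad = statistics.median(abs(v - med) for v in x[lo:hi])
        sigma = 1.4826 * mad  # MAD -> standard-deviation scale (Gaussian)
        if sigma > 0 and abs(x[i] - med) > n_sigmas * sigma:
            flags[i] = True
    return flags

def interpolate_flagged(x, flags):
    """Replace flagged runs by linear interpolation between clean neighbours."""
    y = list(x)
    i = 0
    while i < len(y):
        if flags[i]:
            j = i
            while j < len(y) and flags[j]:
                j += 1
            left = y[i - 1] if i > 0 else y[j]
            right = y[j] if j < len(y) else left
            for k in range(i, j):
                t = (k - i + 1) / (j - i + 1)
                y[k] = left + t * (right - left)
            i = j
        else:
            i += 1
    return y
```

The autoregression variant the paper prefers for experimental data would replace the interpolation step with a model fitted to the clean samples preceding each artifact.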
14. Category learning is shaped by the multifaceted development of selective attention. J Exp Child Psychol 2023;226:105549. PMID: 36116317. DOI: 10.1016/j.jecp.2022.105549.
Abstract
Categories are a fundamental building block of cognition that simplify the multitude of entities we encounter into equivalence classes. By simplifying this barrage of inputs, categories support reasoning about and interacting with their members. For example, despite differences in size, color, and other features, we can treat members of the category of dogs as equivalent, and thus generalize information about any given dog to other dogs. Simplifying entities into categories in adulthood is supported by selective attention, in which people focus on category-relevant attributes, while filtering out category-irrelevant attributes. However, much category learning takes place in infancy and early childhood, when selective attention undergoes substantial development. We designed two experiments to disentangle the contributions of the focusing and filtering aspects of selective attention to category learning over development. Experiment 1 provided evidence that learning simple categories was accompanied by selective attention in both 4- and 5-year-old children and adults. Experiment 2 provided evidence that only focusing contributed to selective attention in 4-year-olds, whereas both focusing and filtering contributed to selective attention in 5-year-olds and adults. Thus, category learning may recruit different aspects of selective attention across development.
15. Evaluation of denoising techniques to remove speckle and Gaussian noise from dermoscopy images. Comput Biol Med 2023;152:106474. PMID: 36563540. DOI: 10.1016/j.compbiomed.2022.106474.
Abstract
Computerized methods provide analyses of skin lesions from dermoscopy images automatically. However, the images acquired from dermoscopy devices are noisy, causing low accuracy in automated methods. Therefore, various methods have been applied for denoising in the literature. There are some review-type papers about these methods. However, their authors have focused either on denoising with a specific approach or on denoising images other than dermoscopy images, which have different characteristics. It is not possible to determine which method is the most suitable for denoising dermoscopy images from the results presented in them. Therefore, a review of the denoising approaches applied to dermoscopy images is required and, to our knowledge, there is no such review-type paper. To fill this gap in the literature, the required review has been performed in this work. Also, in this work, the methods in the literature have been implemented using the same data sets containing images with speckle or Gaussian types of noise. The results have been analyzed not only visually but also quantitatively to compare the capabilities of the techniques. Our experiments indicated that each denoising technique has its own disadvantages and advantages. The main contributions of this paper are three-fold: (i) A comprehensive review of the denoising approaches applied to dermoscopy images has been presented. (ii) The denoising techniques have been implemented with the same images for meaningful comparisons. (iii) Both visual and quantitative analyses with different metrics have been performed and comparative performance evaluations have been presented.
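The evaluation loop the review describes, corrupt with Gaussian or speckle noise, denoise, score quantitatively, can be sketched as follows. The median filter here is just one classic candidate among the many techniques the review compares, and all parameters are illustrative:

```python
# Toy illustration of the paper's evaluation protocol: add Gaussian (additive)
# or speckle (multiplicative) noise to an image, denoise with a 3x3 median
# filter, and compare noisy vs. denoised quality with PSNR.
import math
import random

def add_noise(img, kind="gaussian", sigma=10.0, rng=None):
    rng = rng or random.Random(0)  # seeded for reproducibility
    if kind == "gaussian":
        return [[px + rng.gauss(0.0, sigma) for px in row] for row in img]
    # Speckle: pixel * (1 + n), n ~ N(0, (sigma/255)^2)
    return [[px * (1.0 + rng.gauss(0.0, sigma / 255.0)) for px in row]
            for row in img]

def median3x3(img):
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            block = [img[a][b] for a in range(max(0, i - 1), min(h, i + 2))
                               for b in range(max(0, j - 1), min(w, j + 2))]
            block.sort()
            out[i][j] = block[len(block) // 2]
    return out

def psnr(ref, test, peak=255.0):
    mse = sum((r - t) ** 2 for rr, tt in zip(ref, test)
              for r, t in zip(rr, tt)) / (len(ref) * len(ref[0]))
    return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)
```

Running every candidate denoiser through the same corrupt-then-score loop on the same images is what makes the review's comparisons meaningful.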
16. Systematic analysis of different low-pass filter cut-off frequencies on lumbar spine kinematics data and the impact on the agreement between accelerometers and an optoelectronic system. J Biomech 2022;145:111395. PMID: 36442430. DOI: 10.1016/j.jbiomech.2022.111395.
Abstract
A necessary step in the validation of accelerometers for the measurement of spine angles is to determine the levels of agreement with current gold standard methods. However, agreement may be a function of filtering parameters. We aimed to (1) systematically determine the effect of different filter frequency cut-offs on the peak range of motion (ROM) during forward bending as measured by accelerometers and an optoelectronic (OE) system, (2) explore the influence of filtering on agreement between systems, and (3) determine the difference in peak ROM measurement between these systems. Accelerometers and OE sensors were attached at L2, L4, and S1 of 20 asymptomatic female participants for a guided flexion trial. Signals were then iteratively low-pass filtered with cut-off frequencies ranging from 14 Hz to 1 Hz and peak range of motion outcome measures were compared between systems. Peak ROM was minimally affected by filter cut-off frequency for both the accelerometer and the OE system. The differences in peak ROM between different cut-off frequencies were at most 0.66° (median 0.18°, minimum 0.06°) for accelerometer-derived values and at most 0.23° (median 0.08°, minimum 0.03°) for the OE system. The maximum difference across the filtering frequencies was 0.62° and the largest difference between the two systems (with outliers removed) was 0.82°. Cut-off frequencies ranging from 14 to 1 Hz had little effect on peak lumbar spine ROM during low velocity (6°/s) forward bending, regardless of motion capture method. Filtering cut-off frequency had little effect on the differences between the accelerometer and OE system, and similar measurements can be achieved using accelerometers compared to OE systems.
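The study's manipulation, low-pass filtering an angle signal at several cut-offs and comparing the peak ROM each yields, can be sketched as below. A first-order IIR low-pass stands in here for the Butterworth filters typically used in biomechanics; this simplification and all numbers are ours:

```python
# Sketch (our simplification, not the study's pipeline): filter a slow bending
# signal at two cut-off frequencies and compare peak range of motion (ROM).
# For a movement this slow, both cut-offs barely change the peak ROM, which
# is the study's central observation.
import math

def lowpass(x, fc, fs):
    """First-order IIR low-pass (forward pass only), cut-off fc Hz at fs Hz."""
    dt = 1.0 / fs
    rc = 1.0 / (2 * math.pi * fc)       # RC relation for the analogue prototype
    alpha = dt / (rc + dt)
    y = [x[0]]
    for v in x[1:]:
        y.append(y[-1] + alpha * (v - y[-1]))
    return y

def peak_rom(angles):
    """Peak range of motion: span between the extreme angles."""
    return max(angles) - min(angles)
```

Because the forward-bend signal content sits well below even a 1 Hz cut-off, attenuation at the movement frequency is small for every cut-off tested, so the measured peak ROM is nearly cut-off independent.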
17. Wearable electroencephalography and multi-modal mental state classification: A systematic literature review. Comput Biol Med 2022;150:106088. PMID: 36137314. DOI: 10.1016/j.compbiomed.2022.106088.
Abstract
BACKGROUND Wearable multi-modal time-series classification applications outperform their best uni-modal counterparts and hold great promise. A modality that directly measures electrical correlates from the brain is electroencephalography. Due to varying noise sources, different key brain regions, key frequency bands, and signal characteristics like non-stationarity, techniques for data pre-processing and classification algorithms are task-dependent. METHOD Here, a systematic literature review on mental state classification for wearable electroencephalography is presented. Four search terms in different combinations were used for an in-title search. The search was executed on the 29th of June 2022, across Google Scholar, PubMed, IEEEXplore, and ScienceDirect. The 76 most relevant publications were set into context as the current state-of-the-art in mental state time-series classification. RESULTS Pre-processing techniques, features, and time-series classification models were analyzed. Across publications, a window length of one second was mainly chosen for classification and spectral features were utilized the most. The achieved performance per time-series classification model is analyzed, finding that linear discriminant analysis, decision tree, and k-nearest neighbors models outperform support-vector machines by a factor of up to 1.5. A historical analysis depicts future trends while under-reported aspects relevant to practical applications are discussed. CONCLUSIONS Five main conclusions are given, covering utilization of available area for electrode placement on the head, most often or scarcely utilized features and time-series classification model architectures, baseline reporting practices, as well as explainability and interpretability of Deep Learning. The importance of a 'test battery' assessing the influence of data pre-processing and multi-modality on time-series classification performance is emphasized.
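The review's two most common ingredients, spectral band-power features from one-second windows and a k-nearest-neighbours classifier, can be combined in a minimal sketch. The bands, labels, and signals below are illustrative inventions of ours, not taken from any reviewed study:

```python
# Minimal sketch: band-power features from a one-second window, classified
# with k-nearest neighbours -- the feature type and one of the model families
# the review found most used/best performing. All numbers are illustrative.
import cmath
import math

def band_power(x, fs, f_lo, f_hi):
    """Summed DFT power of `x` between f_lo and f_hi Hz (naive DFT)."""
    n = len(x)
    total = 0.0
    for k in range(n // 2 + 1):
        f = k * fs / n
        if f_lo <= f <= f_hi:
            s = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
            total += abs(s) ** 2
    return total

def knn_predict(train, query, k=3):
    """train: list of (feature_vector, label); majority vote of k nearest."""
    ranked = sorted(train, key=lambda fv: sum((a - b) ** 2
                                              for a, b in zip(fv[0], query)))
    votes = [label for _, label in ranked[:k]]
    return max(set(votes), key=votes.count)
```

Here one second of signal at the sampling rate gives one window; stacking such windows per recording yields the feature matrix a full pipeline would train on.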
|
18
|
Optimally filtering and matching processing for regional upstrokes to improve ultrasound transit time-based local PWV estimation. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 224:106997. [PMID: 35809369 DOI: 10.1016/j.cmpb.2022.106997] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/04/2021] [Revised: 06/19/2022] [Accepted: 06/30/2022] [Indexed: 06/15/2023]
Abstract
BACKGROUND AND OBJECTIVE Pulse wave velocity (PWV) is an important index for quantifying the elasticity of arteries. Local PWV estimates based on ultrasonic transit time (TT) methods, however, are affected by reflected waves and ultrasonic noise, biasing the spatiotemporal propagation of the time fiduciary point (TFP) positioning in the distension waveforms. In this study, an optimal filtering and matching processing for regional upstrokes is proposed to improve ultrasound TT-based local PWV estimation. METHOD (i) Smooth the pulse waves (PWs) using the Savitzky-Golay filter with one set of randomly combined parameters. (ii) Select an arbitrary region at the first beam's upstroke of the smoothed PWs as the curve template, then match it with the upstrokes of the other PWs by calculating the sum of squared differences (SSD) between the template and matching regions to find its similar regions. (iii) Update the filter parameters and the template using the moth-flame optimization (MFO) feedback for computing the new SSD value. When the new SSD value is smaller than the historical one, the latter is replaced. (iv) Repeat the above steps until the MFO algorithm converges to the minimum SSD value. (v) Output the optimal filter parameters and the locations of the regional curves corresponding to the minimum SSD value. The time delay of the PW propagation can then be detected by using the starting points of the regional curves as the TFPs. RESULTS We conducted a performance comparison with an advanced TT method through both simulation and clinical experiments. The results demonstrate that the proposed method achieves considerable reductions in both the normalized root mean square error ± the standard deviation (from 6.73 ± 2.27% to 1.57 ± 0.72%) and the coefficient of variation (from 13.39% to 8.87%). CONCLUSIONS The results of this study support that the proposed method may facilitate the early diagnosis and prevention of local arterial stiffness.
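Steps (i) and (ii) above (Savitzky-Golay smoothing followed by SSD template matching of upstroke regions) can be sketched as follows. The MFO parameter-update loop of steps (iii)-(v) is omitted, and the synthetic pulse waves, filter settings, and template location are assumptions, not the paper's data or tuned parameters.

```python
import numpy as np
from scipy.signal import savgol_filter

def match_upstroke(template, signal):
    """Find the offset in `signal` whose window best matches `template`
    by minimizing the sum of squared differences (SSD)."""
    n = len(template)
    ssd = np.array([np.sum((signal[i:i + n] - template) ** 2)
                    for i in range(len(signal) - n + 1)])
    return int(np.argmin(ssd)), float(ssd.min())

# Two synthetic pulse-wave upstrokes, the second delayed by 5 samples.
t = np.arange(200)
pw1 = np.tanh((t - 50) / 5.0) + 0.05 * np.sin(0.9 * t)
pw2 = np.tanh((t - 55) / 5.0) + 0.05 * np.sin(0.7 * t)

# Step (i): Savitzky-Golay smoothing (window/order would be tuned by the
# MFO loop in the paper; fixed values here for illustration).
s1 = savgol_filter(pw1, window_length=11, polyorder=3)
s2 = savgol_filter(pw2, window_length=11, polyorder=3)

# Step (ii): take a region around the first beam's upstroke as the curve
# template and locate its best SSD match in the second beam.
template = s1[40:60]
offset, _ = match_upstroke(template, s2)
transit_time = offset - 40  # delay (in samples) between the two beams
```

Dividing the inter-beam distance by this delay would then give the local PWV estimate.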
|
19
|
Eigenfunction martingale estimating functions and filtered data for drift estimation of discretely observed multiscale diffusions. STATISTICS AND COMPUTING 2022; 32:34. [PMID: 35527984 PMCID: PMC9001250 DOI: 10.1007/s11222-022-10081-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 05/25/2021] [Accepted: 01/20/2022] [Indexed: 06/14/2023]
Abstract
We propose a novel method for drift estimation of multiscale diffusion processes when a sequence of discrete observations is given. For the Langevin dynamics in a two-scale potential, our approach relies on the eigenvalues and the eigenfunctions of the homogenized dynamics. Our first estimator is derived from a martingale estimating function of the generator of the homogenized diffusion process. However, the unbiasedness of the estimator depends on the rate with which the observations are sampled. We therefore introduce a second estimator which relies also on filtering the data, and we prove that it is asymptotically unbiased independently of the sampling rate. A series of numerical experiments illustrate the reliability and efficiency of our different estimators.
|
20
|
Contactless monitoring of human respiration using infrared thermography and deep learning. Physiol Meas 2022; 43. [PMID: 35193123 DOI: 10.1088/1361-6579/ac57a8] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2021] [Accepted: 02/22/2022] [Indexed: 11/11/2022]
Abstract
OBJECTIVE To monitor the human respiration rate (RR) using infrared thermography (IRT) and artificial intelligence, in a completely non-invasive and automated manner. APPROACH The human breathing signals (BS) were obtained using IRT. The RR was monitored under extreme conditions by developing a deep learning (DL) based "Residual network 50 + Facial landmark detection" (ResNet 50+FLD) model. This model was built and evaluated on 10,000 thermograms and is the first work that documents the use of a DL classifier on a large thermal dataset for nostril tracking. Further, the acquired BS were filtered using the Moving average filter (MAF) and the Butterworth filter (BF). The novel "Breathing signal characterization algorithm" (BSCA) was proposed to obtain the RR in an automated manner. This algorithm is the first work that identifies the breaths in the thermal BS as regular, prolonged, or rapid, using machine learning (ML). Exploratory data analysis was performed to choose an appropriate ML algorithm for the BSCA. The performance of the BSCA was evaluated for both decision tree (DT) and support vector machine (SVM) models. MAIN RESULTS The ResNet 50+FLD model had validation and testing accuracies of 99.5% and 99.4%, respectively. The precision, sensitivity, specificity, F-measure, and G-mean values were computed as well. The comparative analysis of the filters revealed that the BF performed better than the MAF. The BSCA performed better with the SVM classifier than with the DT classifier, with validation and testing accuracies of 99.5% and 98.83%, respectively. SIGNIFICANCE The ever-increasing number of critical cases and the limited availability of skilled medical attendants advocate in favor of an automated and harmless health monitoring system. The proposed methodology eliminates the risk of infections that spread through contact. It can be used in darkness and in remote areas as well, where there is a lack of medical attendants.
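The signal-processing half of this pipeline (Butterworth filtering of a thermal breathing signal followed by automated RR extraction) can be sketched as below. The cutoff band, peak-detection settings, and synthetic nostril-temperature trace are assumptions for illustration, not the paper's actual BSCA.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def estimate_rr(signal, fs, low=0.1, high=0.85, order=4):
    """Band-pass the breathing signal with a Butterworth filter and count
    peaks to estimate respiration rate in breaths per minute. The cutoff
    band is an assumed range covering typical adult breathing rates."""
    b, a = butter(order, [low, high], btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, signal)  # zero-phase filtering
    peaks, _ = find_peaks(filtered, distance=int(fs / high), prominence=0.5)
    duration_min = len(signal) / fs / 60.0
    return len(peaks) / duration_min, filtered

# Synthetic trace: 0.25 Hz breathing (15 breaths/min) plus slow thermal
# drift and sensor noise, sampled at 10 Hz for 60 s.
fs = 10.0
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(0)
bs = np.sin(2 * np.pi * 0.25 * t) + 0.3 * t / 60 + 0.1 * rng.standard_normal(t.size)

rr, _ = estimate_rr(bs, fs)
```

The band-pass step removes both the slow drift and high-frequency noise, which is why the paper found the Butterworth filter preferable to a plain moving average.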
|
21
|
Abstract
Single-nucleotide polymorphisms (SNPs) have become the primary type of molecular genetic marker used in a diverse range of genetic and genomic studies. SNPs can be used to identify genomic regions linked to traits such as disease in genome-wide association studies, to understand population structure and diversity, or to understand mechanisms of genome evolution. One of the first steps of any SNP-based workflow, following SNP discovery, is quality control of SNP data. The protocol described here details how to perform quality control on SNP data to minimise errors in downstream analysis.
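Two of the most common SNP quality-control steps, filtering on per-SNP missingness and on minor allele frequency (MAF), can be sketched as follows. The genotype coding and thresholds are common illustrative defaults, not values prescribed by the protocol.

```python
import numpy as np

def snp_qc(genotypes, max_missing=0.1, min_maf=0.05):
    """Quality-control filter for a samples-by-SNPs genotype matrix coded
    as 0/1/2 alternate-allele counts, with np.nan for missing calls.
    Returns a boolean mask of SNPs that pass both filters."""
    missing_rate = np.isnan(genotypes).mean(axis=0)   # per-SNP missingness
    alt_freq = np.nanmean(genotypes, axis=0) / 2.0    # alternate allele frequency
    maf = np.minimum(alt_freq, 1.0 - alt_freq)        # fold to the minor allele
    return (missing_rate <= max_missing) & (maf >= min_maf)

# 10 samples x 3 SNPs: SNP 0 passes; SNP 1 has 30% missing calls;
# SNP 2 is monomorphic (MAF = 0), so it carries no information.
g = np.array([
    [0, 1, 0], [1, np.nan, 0], [2, np.nan, 0], [1, np.nan, 0], [0, 1, 0],
    [1, 0, 0], [0, 2, 0], [1, 1, 0], [2, 0, 0], [0, 1, 0],
])
keep = snp_qc(g)
```

Downstream analyses (association tests, population structure) would then be run only on the SNPs where `keep` is true.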
|
22
|
Design and circuit analysis of a single and dual band-notched UWB antenna using vertical stubs embedded in feedline. Heliyon 2021; 7:e08554. [PMID: 34917819 PMCID: PMC8668833 DOI: 10.1016/j.heliyon.2021.e08554] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2021] [Revised: 11/16/2021] [Accepted: 12/02/2021] [Indexed: 12/03/2022] Open
Abstract
Within the frequency band designated by the FCC for UWB systems, there are other frequency bands that are also designated for use by other technologies such as WLAN (5.15–5.35 GHz) and WiMAX (3.3–3.7 GHz). These systems can cause interference with UWB systems when operated at the same time. Therefore, an antenna operating in the UWB spectrum needs band-notch capabilities in order to mitigate interference from nearby communication systems operating within the UWB frequency band. In this paper, the notched bands are achieved by using vertical stubs protruding from a microstrip feedline. The antenna is etched on a 25 × 30 mm² substrate. Two antenna structures are presented: one is designed to notch an intended narrowband from 3.3–3.6 GHz, and the second is designed to include an additional band notch from 5.15–6 GHz. The simulations and measurements show that the proposed antennas achieve an ultra-wide bandwidth of 3–10.6 GHz with successful single and dual band-notches, good gain, and good group delay rejection in the notch bands. Stable radiation patterns with low cross polarization are also realized across the operating bandwidth. A detailed circuit-theory analysis of how the filtering is achieved is also presented in this work.
|
23
|
Incorporating outlier information into diffusion-weighted MRI modeling for robust microstructural imaging and structural brain connectivity analyses. Neuroimage 2021; 247:118802. [PMID: 34896584 DOI: 10.1016/j.neuroimage.2021.118802] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2021] [Revised: 11/01/2021] [Accepted: 12/09/2021] [Indexed: 11/28/2022] Open
Abstract
The white matter structures of the human brain can be represented using diffusion-weighted MRI tractography. Unfortunately, tractography is prone to find false-positive streamlines, causing a severe decline in its specificity and limiting its feasibility in accurate structural brain connectivity analyses. Filtering algorithms have been proposed to reduce the number of invalid streamlines, but the currently available filtering algorithms are not suitable for processing data that contain motion artefacts, which are typical in clinical research. We augmented the Convex Optimization Modelling for Microstructure Informed Tractography (COMMIT) algorithm to adjust for these signal drop-out motion artefacts. We demonstrate with comprehensive Monte-Carlo whole brain simulations and in vivo infant data that our robust algorithm is capable of properly filtering tractography reconstructions despite these artefacts. We evaluated the results using parametric and non-parametric statistics, and our results demonstrate that, if not accounted for, motion artefacts can have severe adverse effects in human brain structural connectivity analyses as well as in microstructural property mappings. In conclusion, the usage of robust filtering methods to mitigate motion-related errors in tractogram filtering is highly beneficial, especially in clinical studies with uncooperative patient groups such as infants. With our presented robust augmentation and open-source implementation, robust tractogram filtering is readily available.
|
24
|
A novel technique for automating stiffness measurement and emphasizing the main wave: Coherent-wave auto-selection (CHASE). Magn Reson Imaging 2021; 85:133-140. [PMID: 34687851 DOI: 10.1016/j.mri.2021.10.032] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/11/2021] [Revised: 05/17/2021] [Accepted: 10/17/2021] [Indexed: 11/20/2022]
Abstract
This study aims to develop and assess a new automated processing technique in MR elastography (MRE), namely coherent-wave auto-selection (CHASE). CHASE enables automatic selection of the region of interest (ROI) for stiffness measurement by extraction of the coherent wave region (CHASE ROI), and it improves the reconstruction of stiffness by a directional filter oriented along the main wave in each pixel (CHASE filtering). In this study, MRE of a phantom and of the liver of four healthy volunteers was performed. To investigate the potential of CHASE, this study assessed the CHASE according to three indices through the phantom study: 1) agreement on the ROI settings between CHASE and expert observers, 2) noise dependency, and 3) effect of the CHASE on stiffness variability within the CHASE ROI. The agreements on the ROI settings were analyzed by Cohen's kappa coefficient (κ). The noise dependency was analyzed by the mean absolute percentage errors (MAPEs) within the ROI between low (20%-80% amplitudes) and high vibration amplitudes (100% amplitude). The stiffness variability was assessed by standard deviation (SD) within the ROI. In the volunteer study, agreements on the ROI settings (or stiffness value) and stiffness variability within the CHASE ROI were assessed using κ-value (or intraclass correlation coefficient: ICC) and coefficient of variation, respectively. The results showed close agreement on the ROI settings and stiffness (κ-value: greater than 0.61 in both the phantom and volunteer studies, ICC: 0.97 in the volunteer study). The MAPEs within the CHASE ROI were much smaller than those in the whole region of the phantom (CHASE ROI vs. the whole region at 20% amplitude: 10.3% vs. 50.8%). Moreover, in both the phantom and volunteer studies, the stiffness variation within the CHASE ROI was smaller in the elastogram processed with CHASE filtering than in the unprocessed one. Our results demonstrated that the CHASE has high robustness against noise and the potential to provide ROI settings for stiffness measurement comparable to expert observers, as well as improve the reconstruction of stiffness.
|
25
|
Impact of processing demands at encoding, maintenance and retrieval in visual working memory. Cognition 2021; 214:104758. [PMID: 33984741 PMCID: PMC8346950 DOI: 10.1016/j.cognition.2021.104758] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2020] [Revised: 04/27/2021] [Accepted: 04/28/2021] [Indexed: 11/29/2022]
Abstract
There has been surprisingly little examination of how recall performance is affected by processing demands induced by retrieval cues, how manipulations at encoding interact with processing demands during maintenance or due to the retrieval cue, and how these are affected with aging. Here, we investigate these relationships by examining the fidelity of working memory recall across two delayed reproduction tasks with a continuous measure of report across the adult lifespan. Participants were asked to remember and subsequently reproduce from memory the identity and location of a probed item from the encoding display. In Experiment 1, we examined the effect of filtering irrelevant information at encoding and the impact of filtering distracting information at retrieval simultaneously. In Experiment 2, we tested how ignoring distracting information during maintenance or updating current contents with new information during this period affects recall. The results reveal that manipulating processing requirements induced by retrieval cues (by altering the nature of the retrieval foil) had a significant impact on memory recall: the presence of two previously viewed features from the encoding display in the retrieval foil led to a decrease in identification accuracy. Although irrelevant information can be filtered out well at encoding, both ignoring irrelevant information and updating the contents of memory during the maintenance delay had a detrimental effect on recall. These effects were similar across the lifespan, but older individuals were particularly affected by manipulations of processing demands at encoding as well as increasing set size of information to be retained in memory. Finally, analyses revealed that there were no systematic relationships between filtering performance at encoding, maintenance and retrieval, suggesting that these processing demands are independent of each other. Rather than filtering being a single, monolithic entity, the data suggest that it is better accounted for as distinctly dissociable cognitive processes that engage and articulate with different phases of working memory.
|
26
|
Rethinking resilience and development: A coevolutionary perspective. AMBIO 2021; 50:1304-1312. [PMID: 33566331 PMCID: PMC8116373 DOI: 10.1007/s13280-020-01485-8] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/30/2020] [Revised: 11/13/2020] [Accepted: 12/15/2020] [Indexed: 05/03/2023]
Abstract
The interdependence of social and ecological processes is broadly acknowledged in the pursuit to enhance human wellbeing and prosperity for all. Yet, development interventions continue to prioritise economic development and short-term goals with little consideration of social-ecological interdependencies, ultimately undermining resilience and therefore efforts to deliver development outcomes. We propose and advance a coevolutionary perspective for rethinking development and its relationship to resilience. The perspective rests on three propositions: (1) social-ecological relationships coevolve through processes of variation, selection and retention, which are manifest in practices; (2) resilience is the capacity to filter practices (i.e. to influence what is selected and retained); and (3) development is a coevolutionary process shaping pathways of persistence, adaptation or transformation. Development interventions affect and are affected by social-ecological relationships and their coevolutionary dynamics, with consequences for resilience, often with perverse outcomes. A coevolutionary approach enables development interventions to better consider social-ecological interdependencies and dynamics. Adopting a coevolutionary perspective, which we illustrate with a case on agricultural biodiversity, encourages a radical rethinking of how resilience and development are conceptualised and practiced across global to local scales.
|
27
|
TraceBase; A database structure for forensic trace analysis. Sci Justice 2021; 61:410-418. [PMID: 34172130 DOI: 10.1016/j.scijus.2021.03.001] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/07/2020] [Revised: 03/05/2021] [Accepted: 03/06/2021] [Indexed: 11/22/2022]
Abstract
A data structure is proposed that can store forensic data obtained by experts from different disciplines and acquired using different instruments. This data structure, called TraceBase, is congruent with the forensic examination in the laboratory. We describe the design as well as its planned introduction in casework. The back-end of TraceBase is based on PostgreSQL and can be accessed by front-end applications such as the open-source LibreOffice office suite. The back-end regulates the flexible and robust storage of data, as well as the relation between items, samples, and analyses. The front-end applications allow the user to enter or retrieve data in an easy fashion, while the modular structure ensures that different aspects, such as the data entry, the processing and reporting of entered data, can be optimised individually. Additional analyses can be introduced and linked to items or samples already present. The database is designed such that data from several sources, different forensic disciplines and data acquired by different analytical techniques can be entered. When data needs to be retrieved for further analysis, a subcollection can be filtered for use in a specific situation.
|
28
|
Filtering in tractography using autoencoders (FINTA). Med Image Anal 2021; 72:102126. [PMID: 34161915 DOI: 10.1016/j.media.2021.102126] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2020] [Revised: 04/20/2021] [Accepted: 05/26/2021] [Indexed: 10/21/2022]
Abstract
Current brain white matter fiber tracking techniques show a number of problems, including: generating large proportions of streamlines that do not accurately describe the underlying anatomy; extracting streamlines that are not supported by the underlying diffusion signal; and under-representing some fiber populations, among others. In this paper, we describe a novel autoencoder-based learning method to filter streamlines from diffusion MRI tractography, and hence, to obtain more reliable tractograms. Our method, dubbed FINTA (Filtering in Tractography using Autoencoders) uses raw, unlabeled tractograms to train the autoencoder, and to learn a robust representation of brain streamlines. Such an embedding is then used to filter undesired streamline samples using a nearest neighbor algorithm. Our experiments on both synthetic and in vivo human brain diffusion MRI tractography data obtain accuracy scores exceeding the 90% threshold on the test set. Results reveal that FINTA has a superior filtering performance compared to conventional, anatomy-based methods, and the RecoBundles state-of-the-art method. Additionally, we demonstrate that FINTA can be applied to partial tractograms without requiring changes to the framework. We also show that the proposed method generalizes well across different tracking methods and datasets, and shortens significantly the computation time for large (>1 M streamlines) tractograms. Together, this work brings forward a new deep learning framework in tractography based on autoencoders, which offers a flexible and powerful method for white matter filtering and bundling that could enhance tractometry and connectivity analyses.
|
29
|
A novel method to correct repolarization time estimation from unipolar electrograms distorted by standard filtering. Med Image Anal 2021; 72:102075. [PMID: 34020081 DOI: 10.1016/j.media.2021.102075] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2020] [Revised: 03/30/2021] [Accepted: 04/02/2021] [Indexed: 11/30/2022]
Abstract
Reliable patient-specific ventricular repolarization times (RTs) can identify regions of functional block or afterdepolarizations, indicating arrhythmogenic cardiac tissue and the risk of sudden cardiac death. Unipolar electrograms (UEs) record electric potentials, and the Wyatt method has been shown to be accurate for estimating RT from a UE. High-pass filtering is an important step in processing UEs, however, it is known to distort the T-wave phase of the UE, which may compromise the accuracy of the Wyatt method. The aim of this study was to examine the effects of high-pass filtering, and improve RT estimates derived from filtered UEs. We first generated a comprehensive set of UEs, corresponding to early and late activation and repolarization, that were then high-pass filtered with settings that mimicked the CARTO filter. We trained a deep neural network (DNN) to output a probabilistic estimation of RT and a measure of confidence, using the filtered synthetic UEs and their true RTs. Unfiltered ex-vivo human UEs were also filtered and the trained DNN used to estimate RT. Even a modest 2 Hz high-pass filter imposes a significant error on RT estimation using the Wyatt method. The DNN outperformed the Wyatt method in 62.75% of cases, and produced a significantly lower absolute error (p=8.99E-13), with a median of 16.91 ms, on 102 ex-vivo UEs. We also applied the DNN to patient UEs from CARTO, from which an RT map was computed. In conclusion, DNNs trained on synthetic UEs improve the RT estimation from filtered UEs, which leads to more reliable repolarization maps that help to identify patient-specific repolarization abnormalities.
|
30
|
Filtering Spatial Point Patterns Using Kernel Densities. SPATIAL STATISTICS 2021; 41:100487. [PMID: 33409121 PMCID: PMC7781288 DOI: 10.1016/j.spasta.2020.100487] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/15/2023]
Abstract
Understanding spatial inhomogeneity and clustering in point patterns arises in many contexts, ranging from disease outbreak monitoring to analyzing radiologically-based emphysema in biomedical images. This can often involve classifying individual points as being part of a feature/cluster or as being part of a background noise process. Existing methods for this task can struggle when there are differences in the size and/or density of individual clusters. In this work, we propose employing kernel density estimates of the underlying point process intensity function, using an existing data-driven approach to bandwidth selection, to separate feature points from noise. This is achieved by constructing a null distribution, either through asymptotic properties or Monte Carlo simulation, and comparing kernel density estimates to a given quantile of this distribution. We demonstrate that our method, termed Kernel Density and Simulation based Filtering (KDS-Filt), showed superior performance to existing alternative approaches, especially when there is inhomogeneity in cluster sizes and density. We also show the utility of KDS-Filt for identifying clinically relevant information about the spatial distribution of emphysema in lung computed tomography scans. The KDS-Filt methodology is available as part of the sncp R package, which can be downloaded at https://github.com/stop-pre16/sncp.
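The core idea, comparing the kernel density estimate at each point against a Monte Carlo null distribution built from complete spatial randomness (CSR), can be sketched as follows. This simplified version uses scipy's default bandwidth rather than the data-driven selection used in KDS-Filt, and the simulation count and quantile are illustrative choices.

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_filter(points, n_sim=200, q=95, seed=0):
    """Classify points (shape: 2 x n) as 'feature' when the kernel density
    estimate at their location exceeds the q-th percentile of densities
    obtained from CSR simulations over the data's bounding box."""
    rng = np.random.default_rng(seed)
    density = gaussian_kde(points)(points)
    lo = points.min(axis=1, keepdims=True)
    hi = points.max(axis=1, keepdims=True)
    null_vals = []
    for _ in range(n_sim):
        sim = rng.uniform(lo, hi, size=points.shape)  # CSR realization
        null_vals.append(gaussian_kde(sim)(sim))
    threshold = np.percentile(np.concatenate(null_vals), q)
    return density > threshold

# A tight cluster of 60 points plus 40 uniform background (noise) points.
rng = np.random.default_rng(1)
cluster = rng.normal(0.5, 0.02, size=(2, 60))
noise = rng.uniform(0, 1, size=(2, 40))
pts = np.hstack([cluster, noise])
is_feature = kde_filter(pts)
```

Because the threshold comes from the null distribution rather than a fixed density value, the same procedure adapts to point patterns of different overall intensity.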
|
31
|
[The Application Value of Artificial Intelligence-based Filtering and Interpolated Image Reconstruction Algorithm in Abdominal Magnetic Resonance Image Denoising]. SICHUAN DA XUE XUE BAO. YI XUE BAN = JOURNAL OF SICHUAN UNIVERSITY. MEDICAL SCIENCE EDITION 2021; 52:293-299. [PMID: 33829705 PMCID: PMC10408905 DOI: 10.12182/20210360104] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Subscribe] [Scholar Register] [Received: 10/13/2020] [Indexed: 02/05/2023]
Abstract
OBJECTIVE To compare the noise reduction performance of conventional filtering and artificial intelligence-based filtering and interpolation (AIFI), and to explore the optimal parameters for applying AIFI in the noise reduction of abdominal magnetic resonance imaging (MRI). METHODS Sixty patients who underwent upper abdominal MRI examination in our hospital were retrospectively included. The raw data of T1-weighted image (T1WI), T2-weighted image (T2WI), and dual-echo sequences were reconstructed with two image denoising techniques, conventional filtering and AIFI of different levels of intensity. The difference in objective image quality indicators, peak signal-to-noise ratio (pSNR) and image sharpness, between the different denoising techniques was compared. Two radiologists evaluated the image noise, contrast, sharpness, and overall image quality. Their scores were compared and the interobserver agreement was calculated. RESULTS Compared with the original images, improvements of varying degrees were shown in the pSNR and the sharpness of the images of the three sequences (T1WI, T2WI, and dual-echo) after denoising filtering and AIFI were used (all P<0.05). In addition, compared with conventional filtering, the objective quality scores of the reconstructed images were improved when conventional filtering was combined with AIFI reconstruction in the T1WI sequence, AIFI level≥3 was used in the T2WI and echo1 sequences, and AIFI level≥4 was used in the echo2 sequence (all P<0.05). The subjective scores given by the two radiologists for the image noise, contrast, sharpness, and overall image quality in each sequence of conventional filtering reconstruction, AIFI reconstruction (except for AIFI level=1), and two-method combination reconstruction were higher than those of the original images (all P<0.05). However, the image contrast scores were reduced for AIFI level=5. There was good interobserver agreement between the two radiologists (all r>0.75, P<0.05). After multidimensional comparison, the optimal parameters for using the AIFI technique for noise reduction in abdominal MRI were conventional filtering + AIFI level=3 in the T1WI sequence and AIFI level=4 in the T2WI and dual-echo sequences. CONCLUSION AIFI is superior to conventional filtering in image denoising at medium and high levels and is a promising noise reduction technique. The optimal parameters for using AIFI in abdominal MRI are conventional filtering + AIFI level=3 in the T1WI sequence and AIFI level=4 in the T2WI and dual-echo sequences.
|
32
|
FiNGS: high quality somatic mutations using filters for next generation sequencing. BMC Bioinformatics 2021; 22:77. [PMID: 33602113 PMCID: PMC7890800 DOI: 10.1186/s12859-021-03995-y] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2019] [Accepted: 02/02/2021] [Indexed: 01/15/2023] Open
Abstract
Background Somatic variant callers are used to find mutations in sequencing data from cancer samples. They are very sensitive and have high recall, but also may produce low precision data with a large proportion of false positives. Further ad hoc filtering is commonly performed after variant calling and before further analysis. Improving the filtering of somatic variants in a reproducible way represents an unmet need. We have developed Filters for Next Generation Sequencing (FiNGS), software written specifically to address these filtering issues. Results Developed and tested using publicly available sequencing data sets, we demonstrate that FiNGS reliably improves upon the precision of default variant caller outputs and performs better than other tools designed for the same task. Conclusions FiNGS provides researchers with a tool to reproducibly filter somatic variants that is simple to both deploy and use, with filters and thresholds that are fully configurable by the user. It ingests and emits standard variant call format (VCF) files and will slot into existing sequencing pipelines. It allows users to develop and implement their own filtering strategies and simple sharing of these with others.
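The kind of post-calling filtering that FiNGS performs can be illustrated with a toy depth/VAF filter. The field names, thresholds, and logic below are assumptions for illustration only, not FiNGS's actual filters, defaults, or configuration format.

```python
def filter_variants(variants, min_depth=10, min_vaf=0.05, max_normal_vaf=0.02):
    """Illustrative somatic-variant filters in the spirit of configurable
    post-calling filtering: thresholds on tumour read depth, tumour variant
    allele fraction (VAF), and matched-normal VAF (to flag likely germline
    or artefactual calls). All names and defaults are hypothetical."""
    passed = []
    for v in variants:
        if v["tumor_depth"] < min_depth:           # too few supporting reads
            continue
        if v["tumor_alt"] / v["tumor_depth"] < min_vaf:   # sub-threshold VAF
            continue
        if v["normal_alt"] / max(v["normal_depth"], 1) > max_normal_vaf:
            continue                               # present in the normal
        passed.append(v)
    return passed

calls = [
    {"tumor_depth": 80, "tumor_alt": 20, "normal_depth": 60, "normal_alt": 0},   # passes
    {"tumor_depth": 6,  "tumor_alt": 3,  "normal_depth": 50, "normal_alt": 0},   # low depth
    {"tumor_depth": 90, "tumor_alt": 30, "normal_depth": 70, "normal_alt": 10},  # germline-like
]
kept = filter_variants(calls)
```

In FiNGS itself such thresholds are user-configurable and the input/output are standard VCF files, which is what makes the filtering strategy reproducible and shareable.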
|
33
|
Abstract
OBJECTIVES Non-local mean (NLM) filtering has been broadly used for denoising of natural and medical images. The NLM filter relies on the redundant information, in the form of repeated patterns/textures, in the target image to discriminate the underlying structures/signals from noise. In PET (or SPECT) imaging, the raw data could be reconstructed using different parameters and settings, leading to different representations of the target image, which contain highly similar structures/signals to the target image contaminated with different noise levels (or properties). In this light, multiple-reconstruction NLM filtering (MR-NLM) is proposed, which relies on the redundant information provided by the different reconstructions of the same PET data (referred to as auxiliary images) to conduct the denoising process. METHODS Implementation of the MR-NLM approach involved the use of twelve auxiliary PET images (in addition to the target image) reconstructed using the same iterative reconstruction algorithm with different numbers of iterations and subsets. For each target voxel, the patches of voxels at the same location are extracted from the auxiliary PET images based on which the NLM denoising process is conducted. Through this, the exhaustive search scheme performed in the conventional NLM method to find similar patches of voxels is bypassed. The performance evaluation of the MR-NLM filter was carried out against the conventional NLM, Gaussian and bilateral post-reconstruction approaches using the experimental Jaszczak phantom and 25 whole-body PET/CT clinical studies. RESULTS The signal-to-noise ratio (SNR) in the experimental Jaszczak phantom study improved from 25.1 when using Gaussian filtering to 27.9 and 28.8 when the conventional NLM and MR-NLM methods were applied (p value < 0.05), respectively. 
In terms of quantification bias, the Gaussian filter led to a bias of 35.4%, while the NLM and MR-NLM approaches resulted in biases of 32.0% and 31.1% (p value < 0.05), respectively. The clinical studies further confirm the superior performance of the MR-NLM method, wherein the quantitative bias measured in malignant lesions (hot spots) decreased from -12.3 ± 2.3% when using the Gaussian filter to -3.5 ± 1.3% and -2.2 ± 1.2% when using the NLM and MR-NLM approaches (p value < 0.05), respectively. CONCLUSION The MR-NLM approach exhibited promising performance in terms of noise suppression and signal preservation for PET images, translating into a higher SNR than the conventional NLM approach. Despite this promising performance, the additional computational burden owing to the requirement of multiple PET reconstructions still needs to be addressed.
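The central idea — weighting same-location patches from several reconstructions of the same data instead of searching spatially — can be sketched in 1-D; patch size, the smoothing parameter h, and the toy signal are illustrative assumptions:

```python
import numpy as np

# 1-D sketch of multiple-reconstruction NLM: candidate patches come from the
# SAME voxel location in auxiliary reconstructions, bypassing exhaustive search.

def mr_nlm_1d(target, auxiliaries, half=2, h=0.5):
    n = len(target)
    out = np.empty(n)
    pad_t = np.pad(target, half, mode="edge")
    pads = [np.pad(a, half, mode="edge") for a in auxiliaries]
    for i in range(n):
        ref = pad_t[i:i + 2 * half + 1]
        weights, values = [1.0], [target[i]]        # include the target itself
        for pa, a in zip(pads, auxiliaries):
            patch = pa[i:i + 2 * half + 1]
            d2 = np.mean((ref - patch) ** 2)        # patch dissimilarity
            weights.append(np.exp(-d2 / h ** 2))
            values.append(a[i])
        weights = np.asarray(weights)
        out[i] = np.dot(weights, values) / weights.sum()
    return out

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 2 * np.pi, 64))
target = clean + rng.normal(0, 0.3, 64)             # "target image"
aux = [clean + rng.normal(0, s, 64) for s in (0.1, 0.2, 0.4)]  # "auxiliaries"
den = mr_nlm_1d(target, aux)
print(np.mean((den - clean) ** 2) < np.mean((target - clean) ** 2))
```

In the actual method the auxiliaries are the same PET data reconstructed with different iteration/subset settings, so their noise realizations differ while the underlying signal is shared — which is exactly what makes the same-location averaging effective.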
Collapse
|
34
|
Abstract
Since the outbreak of the COVID-19 pandemic, most countries have recommended their citizens to adopt social distance, hand hygiene, and face mask wearing. However, wearing face masks has not been well adopted by many citizens. While the reasons are complex, there is a general perception that the evidence to support face mask wearing is lacking, especially for the general public in a community setting. Face mask wearing can block or filter airborne virus-carrying particles through the working of colloid and interface science. This paper assesses current knowledge behind the design and functioning of face masks by reviewing the selection of materials, mask specifications, relevant laboratory tests, and respiratory virus transmission trials, with an overview of future development of reusable masks for the general public. This review highlights the effectiveness of face mask wearing in the prevention of COVID-19 infection.
Collapse
|
35
|
Automated Removal of Non-homologous Sequence Stretches with PREQUAL. Methods Mol Biol 2021; 2231:147-162. [PMID: 33289892 DOI: 10.1007/978-1-0716-1036-7_10] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Large-scale multigene datasets used in phylogenomics and comparative genomics often contain sequence errors inherited from source genomes and transcriptomes. These errors typically manifest as stretches of non-homologous characters and derive from sequencing, assembly, and/or annotation errors. The lack of automatic tools to detect and remove sequence errors leads to the propagation of these errors in large-scale datasets. PREQUAL is a command line tool that identifies and masks regions with non-homologous adjacent characters in sets of unaligned homologous sequences. PREQUAL uses a full probabilistic approach based on pair hidden Markov models. On the front end, PREQUAL is user-friendly and simple to use while also allowing full customization to adjust filtering sensitivity. It is primarily aimed at amino acid sequences but can handle protein-coding nucleotide sequences. PREQUAL is computationally efficient and shows high sensitivity and accuracy. In this chapter, we briefly introduce the motivation for PREQUAL and its underlying methodology, followed by a description of basic and advanced usage, and conclude with some notes and recommendations. PREQUAL fills an important gap in the current bioinformatics tool kit for phylogenomics, contributing toward increased accuracy and reproducibility in future studies.
Collapse
|
36
|
Loss of high- or low-frequency audibility can partially explain effects of hearing loss on emotional responses to non-speech sounds. Hear Res 2020; 401:108153. [PMID: 33360158 DOI: 10.1016/j.heares.2020.108153] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/06/2020] [Revised: 11/20/2020] [Accepted: 12/08/2020] [Indexed: 11/16/2022]
Abstract
Hearing loss can disrupt emotional responses to sound. However, the impact of stimulus modality (multisensory versus unisensory) on this disruption, and the underlying mechanisms responsible, are unclear. The purposes of this project were to evaluate the effects of stimulus modality and filtering on emotional responses to non-speech stimuli. It was hypothesized that low- and high-pass filtering would result in less extreme ratings, but only for unisensory stimuli. Twenty-four adults (22-34 years old; 12 male) with normal hearing participated. Participants made ratings of valence and arousal in response to pleasant, neutral, and unpleasant non-speech sounds and/or pictures. Each participant completed ratings of five stimulus modalities: auditory-only, visual-only, auditory-visual, filtered auditory-only, and filtered auditory-visual. Half of the participants rated low-pass filtered stimuli (800 Hz cutoff), and half of the participants rated high-pass filtered stimuli (2000 Hz cutoff). Combining auditory and visual modalities resulted in more extreme (more pleasant and more unpleasant) ratings of valence in response to pleasant and unpleasant stimuli. In addition, low- and high-pass filtering of sounds resulted in less extreme ratings of valence (less pleasant and less unpleasant) and arousal (less exciting) in response to both auditory-only and auditory-visual stimuli. These results suggest that changes in audible spectral information are partially responsible for the noted changes in emotional responses to sound that accompany hearing loss. The findings also suggest the effects of hearing loss will generalize to multisensory stimuli if the stimuli include sound, although further work is warranted to confirm this in listeners with hearing loss.
Collapse
|
37
|
Filtered correlation and allowed frequency spectra in dynamic functional connectivity. J Neurosci Methods 2020; 343:108837. [PMID: 32621916 DOI: 10.1016/j.jneumeth.2020.108837] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/17/2020] [Revised: 06/15/2020] [Accepted: 06/28/2020] [Indexed: 10/23/2022]
Abstract
BACKGROUND Dynamic functional connectivity enables us to study brain connectivity occurring at different frequencies. Techniques like sliding window correlation allow for the estimation of time varying connectivity and its frequency spectrum content. Since correlation is equal to the cosine of the phase difference (cos θ) between the activations of two brain regions, we assume that phase is the relevant functional connectivity feature and leave out any contamination from activation amplitudes. NEW METHOD First, this work studies the conditions under which time varying correlation can be separated from nuisance activation amplitudes that are not phase related. Second, we propose the filtered sliding window correlation to perform time varying estimation of the cosine phase (cos θ(t)) and nuisance filtering in one single step. RESULTS Mathematical models predict the correlation frequencies that should be filtered out to avoid overlap with the activation amplitude spectra. Filtered sliding window correlation excluded nuisance frequencies with an accurate estimation of time varying correlation. Real data outcomes empirically suggest that fMRI frequencies of interest extend up to 0.05 Hz. COMPARISON WITH EXISTING METHODS Compared with sliding window methods, the filtered sliding window correlation achieves better estimation for frequencies of interest. CONCLUSIONS The filtered sliding window correlation approach allows controlling for nuisance frequencies unrelated to time varying phase estimation.
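The plain (unfiltered) sliding window correlation that the proposed method refines can be sketched as follows; the window length, sampling interval, and signal frequencies are illustrative assumptions:

```python
import numpy as np

def sliding_window_corr(x, y, win=40):
    """Time-varying Pearson correlation over a sliding window."""
    n = len(x) - win + 1
    r = np.empty(n)
    for t0 in range(n):
        r[t0] = np.corrcoef(x[t0:t0 + win], y[t0:t0 + win])[0, 1]
    return r

t = np.arange(300) * 0.5                              # time in s (TR = 0.5 s assumed)
x = np.sin(2 * np.pi * 0.03 * t)                      # 0.03 Hz "activation"
y = np.sin(2 * np.pi * 0.03 * t + np.pi * (t > 75))   # phase flips halfway through
r = sliding_window_corr(x, y)
print(round(r[0], 2), round(r[-1], 2))                # 1.0 -1.0
```

The estimate tracks the phase relationship (cos θ(t)) flipping from +1 to -1; the paper's contribution is filtering this correlation time series so that amplitude-driven nuisance frequencies are excluded in the same step.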
Collapse
|
38
|
pipeComp, a general framework for the evaluation of computational pipelines, reveals performant single cell RNA-seq preprocessing tools. Genome Biol 2020; 21:227. [PMID: 32873325 PMCID: PMC7465801 DOI: 10.1186/s13059-020-02136-7] [Citation(s) in RCA: 38] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2020] [Accepted: 08/06/2020] [Indexed: 11/13/2022] Open
Abstract
We present pipeComp (https://github.com/plger/pipeComp), a flexible R framework for pipeline comparison that handles interactions between analysis steps and relies on multi-level evaluation metrics. We apply it to the benchmarking of single-cell RNA-sequencing analysis pipelines using simulated and real datasets with known cell identities, covering common methods of filtering, doublet detection, normalization, feature selection, denoising, dimensionality reduction, and clustering. pipeComp can easily integrate any other step, tool, or evaluation metric, allowing extensible benchmarks and easy application to other fields, as we demonstrate through a study of the impact of removal of unwanted variation on differential expression analysis.
Collapse
|
39
|
Non-fragile dissipative filtering of cyber-physical systems with random sensor delays. ISA TRANSACTIONS 2020; 104:115-121. [PMID: 31948683 DOI: 10.1016/j.isatra.2020.01.001] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/16/2018] [Revised: 01/02/2020] [Accepted: 01/02/2020] [Indexed: 06/10/2023]
Abstract
This paper considers the problem of non-fragile state estimation under a dissipativity constraint for a class of nonlinear cyber-physical systems (CPSs) with sensor delays. The dynamics of the considered CPSs are characterized by the well-known T-S fuzzy model, and system measurements are acquired by wireless sensors. The communication link between the filter and the plant is described by a relatively practical model, and sensor delays occurring in signal transmission are taken into consideration. A stochastic variable obeying the standard Bernoulli distribution is exploited to model the sensor delays encountered by the sensor measurements. With the help of a basis-dependent Lyapunov function and a predefined performance constraint, sufficient conditions are then developed to establish the stochastic stability as well as strict dissipativity of the resulting filtering error system. The existence of the corresponding filter is guaranteed, and the expressions of the desired filter parameters are given explicitly. Finally, the established theoretical results are validated by a tunnel diode circuit example, and corresponding simulations are also provided.
Collapse
|
40
|
Groupwise track filtering via iterative message passing and pruning. Neuroimage 2020; 221:117147. [PMID: 32673747 PMCID: PMC7780547 DOI: 10.1016/j.neuroimage.2020.117147] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2020] [Revised: 06/13/2020] [Accepted: 07/06/2020] [Indexed: 02/07/2023] Open
Abstract
Tractography is an important tool for the in vivo analysis of brain connectivity based on diffusion MRI data, but it also has well-known limitations in false positives and negatives for the faithful reconstruction of neuroanatomy. These problems persist even in the presence of strong anatomical priors in the form of multiple regions of interest (ROIs) to constrain the trajectories of fiber tractography. In this work, we propose a novel track filtering method by leveraging the groupwise consistency of fiber bundles that naturally exists across subjects. We first formalize our groupwise concept with a flexible definition that characterizes the consistency of a track with respect to other group members based on three important aspects: degree, affinity, and proximity. An iterative algorithm is then developed to dynamically update the localized consistency measure of all streamlines via message passing from a reference set, which then informs the pruning of outlier points from each streamline. In our experiments, we successfully applied our method to diffusion imaging data of varying resolutions from the Alzheimer's Disease Neuroimaging Initiative (ADNI) and Human Connectome Project (HCP) for the consistent reconstruction of three important fiber bundles in the human brain: the fornix, locus coeruleus pathways, and corticospinal tract. Both qualitative evaluations and quantitative comparisons showed that our method achieved significant improvement in enhancing the anatomical fidelity of fiber bundles.
Collapse
|
41
|
The comparative performance of DBS artefact rejection methods for MEG recordings. Neuroimage 2020; 219:117057. [PMID: 32540355 PMCID: PMC7443703 DOI: 10.1016/j.neuroimage.2020.117057] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2019] [Revised: 06/05/2020] [Accepted: 06/11/2020] [Indexed: 01/01/2023] Open
Abstract
Deep brain stimulation (DBS) can be a very effective treatment option for movement disorders and psychiatric diseases. To better understand DBS mechanisms, brain activity can be recorded using magnetoencephalography (MEG) with the stimulator turned on. However, DBS produces large artefacts that compromise MEG data quality, caused both by the applied current and by the movement of the wires connecting the stimulator with the electrode. Several methods to suppress the DBS artefact have been proposed in the literature, but a comparative study evaluating each method's effectiveness has so far been missing. In this study, we evaluate the performance of four artefact rejection methods on MEG data from phantom recordings with DBS acquired with an Elekta Neuromag and a CTF system: (i) Hampel filter, (ii) spectral signal space projection (S3P), (iii) independent component analysis with mutual information (ICA-MI), and (iv) temporal signal space separation (tSSS). In the sensor space, the largest increase in signal-to-noise ratio (SNR) was achieved by ICA-MI, while the best correspondence in terms of source activations was obtained by tSSS. LCMV beamforming alone was not sufficient to suppress the DBS-induced artefacts. Highlights: phantom MEG measurements with Elekta Neuromag and CTF MEG systems with DBS; systematic comparison of cleaning algorithms for removing the DBS artefact from MEG data; at the sensor level, ICA-MI yielded the best results; at the source level, tSSS provided the best correspondence to the recording without DBS.
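Of the four compared methods, the Hampel filter is the simplest to illustrate: it replaces samples that deviate from a local median by more than a few scaled median absolute deviations (MAD). The window size, threshold, and toy signal below are illustrative assumptions, not the study's settings:

```python
import numpy as np

def hampel(x, half=3, n_sigma=3.0):
    """Replace samples deviating from the local median by > n_sigma * MAD."""
    y = x.copy()
    k = 1.4826                                # MAD -> std scale for Gaussian data
    for i in range(len(x)):
        lo, hi = max(0, i - half), min(len(x), i + half + 1)
        med = np.median(x[lo:hi])
        mad = k * np.median(np.abs(x[lo:hi] - med))
        if mad > 0 and abs(x[i] - med) > n_sigma * mad:
            y[i] = med                        # outlier -> local median
    return y

sig = np.sin(np.linspace(0, 4 * np.pi, 200))  # smooth "physiological" signal
sig[50] += 10.0                               # a large spike artefact
clean = hampel(sig)
print(abs(clean[50]) < 0.2, abs(sig[50]) > 9)  # True True
```

Because the median and MAD are robust statistics, the spike is removed while the smooth signal passes through essentially untouched — the property that makes the Hampel filter attractive for sharp stimulation artefacts.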
Collapse
|
42
|
Directed avoidance and its effect on visual working memory. Cognition 2020; 201:104277. [PMID: 32276234 DOI: 10.1016/j.cognition.2020.104277] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2019] [Revised: 03/23/2020] [Accepted: 03/24/2020] [Indexed: 11/19/2022]
Abstract
Attentional control processes help to prioritize the storage of information in visual working memory (VWM) by gating what enters the system and influencing how precisely this information is stored. However, the extent to which such prioritization occurs deliberately, as opposed to incidentally, is poorly understood. In large part, this is because investigations of this matter have almost exclusively relied on comparisons of memory for exogenously cued items versus uncued items. To understand whether prioritization occurs independent of intention, though, it is essential to examine instances in which attended items are entirely task-irrelevant. Thus, in the current study we used a directed avoidance paradigm to examine VWM performance following the selection of an item known to be task-irrelevant. In Experiment 1, we confirmed that cueing the color of a non-target item paradoxically increases attention to the cued item when the target color is unknown, resulting in longer search times (in line with previous findings). In Experiments 2 and 3, we applied the same cueing procedure to a delayed-estimation task of VWM, but now found a non-target cueing benefit in which the recall of task-relevant items was improved by directed avoidance. We further found that this effect is not solely due to the reprioritization of cognitive resources during maintenance (Exp. 4), but involves additional control processes that 1) reallocate resources to relevant items at encoding, and 2) selectively stabilize such items during the transition from encoding to maintenance (Exp. 5). As such, we suggest that while attentionally selected items may initially be prioritized independent of importance, more controlled mechanisms reallocate resources on the basis of relevance when sufficient time is provided before the sensory information is removed or displaced.
Collapse
|
43
|
General principles of machine learning for brain-computer interfacing. HANDBOOK OF CLINICAL NEUROLOGY 2020; 168:311-328. [PMID: 32164862 DOI: 10.1016/b978-0-444-63934-9.00023-8] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
Abstract
Brain-computer interfaces (BCIs) are systems that translate brain activity patterns into commands that can be executed by an artificial device. This enables the possibility of controlling devices such as a prosthetic arm or exoskeleton, a wheelchair, typewriting applications, or games directly by modulating our brain activity. For this purpose, BCI systems rely on signal processing and machine learning algorithms to decode the brain activity. This chapter provides an overview of the main steps required to do so, including signal preprocessing, feature extraction and selection, and decoding. Given the large number of possible methods that can be used for these steps, a comprehensive review of them is beyond the scope of this chapter; instead, we focus on the general principles that should be taken into account, and discuss good practices for how these methods should be applied and evaluated for the proper design of reliable BCI systems.
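The pipeline stages named above (preprocessing → feature extraction → decoding) can be sketched end-to-end with a deliberately tiny example; the log-variance feature, nearest-mean decoder, and simulated two-channel "trials" are all illustrative assumptions, not the chapter's recommendations:

```python
import numpy as np

def band_power(trial):
    """Feature extraction: log variance per channel (a crude band-power proxy)."""
    return np.log(trial.var(axis=1))

def train_nearest_mean(trials, labels):
    """Decoding model: one mean feature vector per class."""
    feats = np.array([band_power(t) for t in trials])
    return {c: feats[labels == c].mean(axis=0) for c in np.unique(labels)}

def decode(model, trial):
    """Assign the class whose mean feature vector is closest."""
    f = band_power(trial)
    return min(model, key=lambda c: np.linalg.norm(f - model[c]))

def make_trial(c, rng):
    amp = np.array([2.0 if c == 1 else 0.5, 1.0])   # class modulates channel 0
    return rng.normal(size=(2, 256)) * amp[:, None]

rng = np.random.default_rng(5)
labels = np.array([0, 0, 0, 1, 1, 1])
trials = [make_trial(c, rng) for c in labels]       # labelled training trials
model = train_nearest_mean(trials, labels)
test = make_trial(1, rng)                           # held-out trial
print(decode(model, test))                          # prints 1
```

The good-practice point the chapter makes carries over directly even to this toy: the model must be fit on training trials only and evaluated on held-out trials, never on data used for fitting.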
Collapse
|
44
|
Automatic segment filtering procedure for processing non-stationary signals. J Biomech 2020; 101:109619. [PMID: 31952818 DOI: 10.1016/j.jbiomech.2020.109619] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/10/2019] [Revised: 12/20/2019] [Accepted: 01/06/2020] [Indexed: 11/16/2022]
Abstract
Computing time derivatives is a frequent stage in the processing of biomechanical data. Unfortunately, differentiation amplifies the high frequency noise inherent within the signal, hampering the accuracy of signal derivatives. A low-pass Butterworth filter is commonly used to reduce the sampled signal noise prior to differentiation. One hurdle lies in selecting an appropriate filter cut-off frequency which retains the signal of interest while reducing deleterious noise. Most biomechanics data processing approaches use the same cut-off frequency for the whole sampled signal, but the frequency components of a signal can vary with time. To accommodate such signals, the Automatic Segment Filtering Procedure (ASFP) is proposed, which uses different automatically determined Butterworth filter cut-off frequencies for separate segments of a sampled signal. The Teager-Kaiser Energy Operator of the signal is computed and used to determine segments of the signal with different energy content. The Autocorrelation-Based Procedure (ABP) is then used on each of these segments to determine filter cut-off frequencies. This new procedure was evaluated by estimating acceleration values from the test data set of Dowling (1985). The ASFP produced a root mean square error (RMSE) of 16.4 rad s-2 (26.6%), whereas a single ABP-determined filter cut-off frequency applied to the whole Dowling (1985) signal, representing the common approach, produced an RMSE of 25.5 rad s-2 (41.4%). As a point of comparison, a Generalized Cross-Validated Quintic Spline, a common non-Butterworth filter, produced an RMSE of 23.6 rad s-2 (38.4%). This new automatic approach is advantageous in biomechanics for preserving the high frequency content of non-stationary signals.
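The core mechanism — a different Butterworth cut-off per signal segment — can be sketched as follows. Here the segment boundaries and cut-offs are supplied by hand for illustration; the actual procedure derives the segments from the Teager-Kaiser Energy Operator and the cut-offs from the ABP:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def segment_filter(signal, fs, segments):
    """segments: list of (start, stop, cutoff_hz) tuples covering the signal."""
    out = np.empty_like(signal)
    for start, stop, fc in segments:
        b, a = butter(2, fc / (fs / 2))          # 2nd-order low-pass Butterworth
        out[start:stop] = filtfilt(b, a, signal[start:stop])  # zero-phase
    return out

fs = 100.0                                       # Hz (assumed)
t = np.arange(0, 4, 1 / fs)
slow = np.sin(2 * np.pi * 1 * t)                 # 1 Hz content in first half
fast = np.sin(2 * np.pi * 8 * t)                 # 8 Hz content in second half
raw = np.where(t < 2, slow, fast) \
    + 0.05 * np.random.default_rng(1).normal(size=t.size)
smoothed = segment_filter(raw, fs, [(0, 200, 3.0), (200, 400, 15.0)])
```

A single 3 Hz cut-off over the whole record would destroy the 8 Hz segment, while a single 15 Hz cut-off would under-smooth the 1 Hz segment — which is exactly the non-stationarity problem the segment-wise approach addresses.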
Collapse
|
45
|
High-precision tracking differentiator via generalized discrete-time optimal control. ISA TRANSACTIONS 2019; 95:144-151. [PMID: 31122694 DOI: 10.1016/j.isatra.2019.05.002] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/28/2018] [Revised: 04/28/2019] [Accepted: 05/03/2019] [Indexed: 06/09/2023]
Abstract
An enhanced discrete-time tracking differentiator (TD) with high precision, based on a discrete-time optimal control (DTOC) law, is proposed. The law takes the form of state feedback for a double-integral system and adopts the Isochronic Region approach, in which the control signal sequence is determined by a linearized criterion based on the position of the initial state point on the phase plane. The proposed control law can easily be extended to the TD design problem by combining the first state variable of the double-integral system with the desired trajectory. To improve the precision of the discretization model, we introduce a zero-order hold on the control signal. We also discuss the general form of the DTOC law by analysing the relationship between boundary transformations and boundary characteristic points. A comparison of simulation results from three different TDs shows that the new TD achieves better performance and higher precision in signal-tracking filtering and differentiation acquisition than existing TDs. Comparisons of the computational complexity of the proposed DTOC law and the conventional one are also presented. To confirm its utility, we processed raw phasor measurement unit data with the proposed TD. In the absence of complex power system modelling and historical data, the proposed TD was verified to be suitable for real-time synchrophasor estimation applications, especially when the states are corrupted by noise.
Collapse
|
46
|
Comparing a Distributed Parameter Model-Based System Identification Technique with More Conventional Methods for Inverse Problems. JOURNAL OF INVERSE AND ILL-POSED PROBLEMS 2019; 27:703-717. [PMID: 31885419 PMCID: PMC6934369 DOI: 10.1515/jiip-2018-0006] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Three methods for the estimation of blood or breath alcohol concentration (BAC/BrAC) from biosensor-measured transdermal alcohol concentration (TAC) are evaluated and compared. Specifically, we consider a system identification/quasi-blind deconvolution scheme based on a distributed parameter model with unbounded input and output for ethanol transport in the skin, and compare it to two more conventional system identification and filtering/deconvolution techniques for ill-posed inverse problems, one based on frequency domain methods and the other on a time series approach using an ARMA input/output model. Our basis for comparison is five statistical measures of interest to alcohol researchers and clinicians: peak BAC/BrAC, time of peak BAC/BrAC, the ascending and descending slopes of the BAC/BrAC curve, and the area underneath the BAC/BrAC curve.
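The deconvolution step shared by all three compared schemes — recovering an input signal from a smeared, noisy output given an impulse response — can be sketched with Tikhonov regularization, a standard remedy for this kind of ill-posedness. The exponential kernel, regularization weight, and square-pulse input are illustrative assumptions, not the paper's models:

```python
import numpy as np

def deconvolve(y, kernel, lam=1e-2):
    """Tikhonov-regularized least squares: argmin ||A u - y||^2 + lam ||u||^2."""
    n = len(y)
    # Lower-triangular Toeplitz convolution matrix built from the kernel.
    A = np.array([[kernel[i - j] if 0 <= i - j < len(kernel) else 0.0
                   for j in range(n)] for i in range(n)])
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

kernel = np.exp(-np.arange(10) / 3.0)
kernel /= kernel.sum()                        # assumed skin impulse response
true_u = np.concatenate([np.zeros(5), np.ones(10), np.zeros(15)])  # "BrAC" pulse
rng = np.random.default_rng(4)
y = np.convolve(true_u, kernel)[:30] + 0.01 * rng.normal(size=30)  # "TAC"
u_hat = deconvolve(y, kernel)
```

Without the `lam * np.eye(n)` term the solve amplifies measurement noise; with it, the recovered pulse is a stable, slightly smoothed estimate — the trade-off every ill-posed inverse method in the comparison must manage.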
Collapse
|
47
|
PERFect: PERmutation Filtering test for microbiome data. Biostatistics 2019; 20:615-631. [PMID: 29917060 PMCID: PMC6797060 DOI: 10.1093/biostatistics/kxy020] [Citation(s) in RCA: 26] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2017] [Revised: 04/24/2018] [Accepted: 04/29/2018] [Indexed: 12/22/2022] Open
Abstract
The human microbiota composition is associated with a number of diseases including obesity, inflammatory bowel disease, and bacterial vaginosis. Thus, microbiome research has the potential to reshape clinical and therapeutic approaches. However, raw microbiome count data require careful pre-processing steps that take into account both the sparsity of counts and the large number of taxa that are being measured. Filtering is defined as removing taxa that are present in a small number of samples and have small counts in the samples where they are observed. Despite progress in the number and quality of filtering approaches, there is no consensus on filtering standards and quality assessment. This can adversely affect downstream analyses and reproducibility of results across platforms and software. We introduce PERFect, a novel permutation filtering approach designed to address two unsolved problems in microbiome data processing: (i) define and quantify loss due to filtering by implementing thresholds and (ii) introduce and evaluate a permutation test for filtering loss to provide a measure of excessive filtering. Methods are assessed on three "mock experiment" data sets, where the true taxa compositions are known, and are applied to two publicly available real microbiome data sets. The method correctly removes contaminant taxa in the "mock" data sets, and quantifies and visualizes the corresponding filtering loss, providing a uniform data-driven filtering criterion for real microbiome data sets. In real data analyses PERFect tends to remove more taxa than existing approaches; this likely happens because the method is based on an explicit loss function, uses statistically principled testing, and takes into account correlation between taxa. The PERFect software is freely available at https://github.com/katiasmirn/PERFect.
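The "loss due to filtering" idea can be sketched in simplified form: measure how much of the squared covariance structure of the count matrix the removed taxa carried. This is only the loss-function half of the method (the permutation test is omitted), and the toy data and loss definition are simplifying assumptions:

```python
import numpy as np

# Simplified filtering loss: fraction of the squared covariance (Frobenius)
# structure lost when a set of taxa is removed.

def filtering_loss(counts, keep):
    """counts: samples x taxa matrix; keep: boolean mask over taxa to retain."""
    full = np.linalg.norm(counts.T @ counts, "fro") ** 2
    kept = np.linalg.norm(counts[:, keep].T @ counts[:, keep], "fro") ** 2
    return 1.0 - kept / full

rng = np.random.default_rng(2)
abundant = rng.poisson(50, size=(20, 5))      # taxa present in most samples
rare = rng.poisson(0.2, size=(20, 5))         # sparse, low-count taxa
counts = np.hstack([abundant, rare]).astype(float)
keep = np.array([True] * 5 + [False] * 5)
loss = filtering_loss(counts, keep)
print(loss < 0.01)   # removing the rare taxa loses almost no covariance signal
```

A near-zero loss indicates the removed taxa contributed essentially nothing to the correlation structure, so filtering them is defensible; a permutation test over such losses is what turns this into a principled stopping rule.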
Collapse
|
48
|
Filtering procedures for untargeted LC-MS metabolomics data. BMC Bioinformatics 2019; 20:334. [PMID: 31200644 PMCID: PMC6570933 DOI: 10.1186/s12859-019-2871-9] [Citation(s) in RCA: 61] [Impact Index Per Article: 12.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2019] [Accepted: 05/02/2019] [Indexed: 12/21/2022] Open
Abstract
Background Untargeted metabolomics datasets contain large proportions of uninformative features that can impede subsequent statistical analysis such as biomarker discovery and metabolic pathway analysis. Thus, there is a need for versatile and data-adaptive methods for filtering data prior to investigating the underlying biological phenomena. Here, we propose a data-adaptive pipeline for filtering metabolomics data that are generated by liquid chromatography-mass spectrometry (LC-MS) platforms. Our data-adaptive pipeline includes novel methods for filtering features based on blank samples, proportions of missing values, and estimated intra-class correlation coefficients. Results Using metabolomics datasets that were generated in our laboratory from samples of human blood, as well as two public LC-MS datasets, we compared our data-adaptive filtering method with traditional methods that rely on non-method specific thresholds. The data-adaptive approach outperformed traditional approaches in terms of removing noisy features and retaining high quality, biologically informative ones. The R code for running the data-adaptive filtering method is provided at https://github.com/courtneyschiffman/Metabolomics-Filtering. Conclusions Our proposed data-adaptive filtering pipeline is intuitive and effectively removes uninformative features from untargeted metabolomics datasets. It is particularly relevant for interrogation of biological phenomena in data derived from complex matrices associated with biospecimens. Electronic supplementary material The online version of this article (10.1186/s12859-019-2871-9) contains supplementary material, which is available to authorized users.
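Two of the three filters named above (the blank-sample filter and the missing-value filter) can be sketched as follows; the thresholds, the zero-as-missing convention, and the fold-change rule are illustrative assumptions, not the pipeline's data-adaptive choices:

```python
import numpy as np

def filter_features(X, is_blank, fold=2.0, max_missing=0.5):
    """X: samples x features intensity matrix with 0 marking missing values.
    Keep features clearly above blank levels and detected often enough."""
    bio = X[~is_blank]                                 # biological samples
    blank = X[is_blank]                                # blank (solvent) samples
    informative = bio.mean(axis=0) > fold * blank.mean(axis=0)
    present = (bio > 0).mean(axis=0) >= (1 - max_missing)
    return informative & present

rng = np.random.default_rng(3)
X = np.vstack([
    rng.uniform(900, 1100, size=(8, 3)),   # 8 biological samples, 3 features
    rng.uniform(10, 30, size=(2, 3)),      # 2 blank samples
])
X[:8, 2] = 0                               # feature 2 never detected in samples
is_blank = np.array([False] * 8 + [True] * 2)
print(filter_features(X, is_blank).tolist())  # [True, True, False]
```

The pipeline's third filter, based on estimated intra-class correlation coefficients from replicate samples, follows the same mask-per-feature pattern and would simply be AND-ed onto the result.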
Collapse
|
49
|
Wave front analysis for enhanced time-domain beamforming of point-like targets in optoacoustic imaging using a linear array. PHOTOACOUSTICS 2019; 14:67-76. [PMID: 31194149 PMCID: PMC6551558 DOI: 10.1016/j.pacs.2019.04.002] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/30/2018] [Revised: 01/07/2019] [Accepted: 04/03/2019] [Indexed: 05/20/2023]
Abstract
Using linear array transducers in combination with state-of-the-art multichannel electronics makes it possible to perform optoacoustic imaging with frame rates limited only by the laser pulse repetition frequency and the acoustic time of flight. However, characteristic image artefacts resulting from the limited view, together with a lower SNR compared with systems based on single-element focused transducers, represent a burden for the clinical acceptance of the technology. In this paper, we present a new method for improving image quality based on the analysis of signal amplitudes along summation curves during the delay-and-sum (DAS) beamforming process. The algorithm compares amplitude distributions along wave fronts with theoretical patterns from optoacoustic point sources. The method was validated on simulated and experimental phantom data as well as in-vivo data. On numeric and experimental phantom data, our approach improved the lateral resolution by more than a factor of two compared with conventional DAS. For instance, on experimental data from a wire phantom, our approach obtained a PSF in the range of 0.18-0.22 mm, against 0.48 mm for standard DAS. Furthermore, the SNR of a subcutaneous vessel 2.5 mm below the skin surface was improved by about 30 dB compared with standard DAS.
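The baseline the wave-front analysis builds on — delay-and-sum over a linear array with one-way (optoacoustic) times of flight — can be sketched for a single image pixel; array geometry, sampling rate, and speed of sound are illustrative assumptions:

```python
import numpy as np

def das_pixel(rf, x_elems, fs, c, px, pz):
    """Delay-and-sum: add each channel's sample at the pixel's time of flight."""
    value = 0.0
    for ch, xe in enumerate(x_elems):
        tof = np.hypot(px - xe, pz) / c       # one-way flight (optoacoustic)
        idx = int(round(tof * fs))
        if idx < rf.shape[1]:
            value += rf[ch, idx]
    return value

c, fs = 1500.0, 40e6                          # speed of sound (m/s), sampling (Hz)
x_elems = np.linspace(-5e-3, 5e-3, 16)        # 16-element linear array (m)
src = (0.0, 10e-3)                            # point source 10 mm deep
rf = np.zeros((16, 2000))                     # per-channel RF data
for ch, xe in enumerate(x_elems):             # ideal point-source wave front
    rf[ch, int(round(np.hypot(src[0] - xe, src[1]) / c * fs))] = 1.0
on = das_pixel(rf, x_elems, fs, c, *src)      # pixel on the source
off = das_pixel(rf, x_elems, fs, c, 3e-3, 7e-3)  # pixel away from it
print(on, on > off)                           # 16.0 True
```

The proposed method inspects the per-channel amplitudes gathered along such a summation curve before adding them, comparing their distribution against the pattern expected from a true point source to suppress limited-view artefacts.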
Collapse
|
50
|
A survey of neural network-based cancer prediction models from microarray data. Artif Intell Med 2019; 97:204-214. [PMID: 30797633 DOI: 10.1016/j.artmed.2019.01.006] [Citation(s) in RCA: 49] [Impact Index Per Article: 9.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2017] [Revised: 10/22/2018] [Accepted: 01/27/2019] [Indexed: 12/17/2022]
Abstract
Neural networks are powerful tools used widely for building cancer prediction models from microarray data. We review the most recently proposed models to highlight the roles of neural networks in predicting cancer from gene expression data. We identified articles published between 2013 and 2018 in scientific databases using keywords such as cancer classification, cancer analysis, cancer prediction, cancer clustering and microarray data. Analyzing the studies reveals that neural network methods have been used either for filtering (data engineering) the gene expressions in a step prior to prediction; for predicting the existence of cancer, the cancer type or the survivability risk; or for clustering unlabeled samples. This paper also discusses some practical issues that can be considered when building a neural network-based cancer prediction model. Results indicate that the functionality of the neural network determines its general architecture. However, the decision on the number of hidden layers, neurons, hyperparameters and the learning algorithm is made using trial-and-error techniques.
Collapse
|