1. Beyond basics: Key mutation selection features for successful tumor-informed ctDNA detection. Int J Cancer 2024. [PMID: 38623608 DOI: 10.1002/ijc.34964]
Abstract
Tumor-informed mutation-based approaches are frequently used for detection of circulating tumor DNA (ctDNA). Not all mutations make equally effective ctDNA markers. Our objective was to explore whether prioritizing mutations using mutational features, such as cancer cell fraction (CCF), multiplicity, and error rate, would improve the success rate of tumor-informed ctDNA analysis. Additionally, we aimed to develop a practical and easily implementable analysis pipeline for identifying and prioritizing candidate mutations from whole-exome sequencing (WES) data. We analyzed WES and ctDNA data from three tumor-informed ctDNA studies, one on bladder cancer (Cohort A) and two on colorectal cancer (Cohorts I and N), together including 390 patients. For each patient, a unique set of mutations (median mutations/patient: 6, interquartile range: 13, range: 1-46, total n = 4023) was used as markers of ctDNA. The tool PureCN was used to assess the CCF and multiplicity of each mutation. High-CCF mutations were detected more frequently than low-CCF mutations (Cohort A: odds ratio [OR] 20.6, 95% confidence interval [CI] 5.72-173, p = 1.73e-12; Cohort I: OR 2.24, 95% CI 1.44-3.52, p = 1.66e-04; Cohort N: OR 1.78, 95% CI 1.14-2.79, p = 7.86e-03). Detection likelihood was further improved by selecting mutations with a multiplicity of two or above (Cohort A: OR 1.55, 95% CI 1.14-2.11, p = 3.85e-03; Cohort I: OR 1.78, 95% CI 1.23-2.56, p = 1.34e-03; Cohort N: OR 1.94, 95% CI 1.63-2.31, p = 2.83e-14). Furthermore, selecting the mutations for which the ctDNA detection method had the lowest error rates improved the detection likelihood yet again, particularly when plasma cell-free DNA tumor fractions were below 0.1% (p = 2.1e-07). Selecting mutational markers with high CCF, high multiplicity, and low error rate thus significantly improves ctDNA detection likelihood.
We provide free access to the analysis pipeline, enabling others to perform qualified prioritization of mutations for tumor-informed ctDNA analysis.
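The detection odds ratios above come from 2×2 comparisons (e.g., detected vs. undetected for high- vs. low-CCF mutations). As a hedged illustration only (this is not the authors' pipeline, and the counts below are made up), a Wald-type OR with 95% CI can be computed from such a table as:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:
    a = detected high-CCF, b = undetected high-CCF,
    c = detected low-CCF,  d = undetected low-CCF."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# hypothetical counts, not data from the study
print(odds_ratio_ci(20, 10, 10, 20))
```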
2. Assessing quality of selection procedures: Lower bound of false positive rate as a function of inter-rater reliability. Br J Math Stat Psychol 2024. [PMID: 38623032 DOI: 10.1111/bmsp.12343]
Abstract
Inter-rater reliability (IRR) is one of the commonly used tools for assessing the quality of ratings from multiple raters. However, applicant selection procedures based on ratings from multiple raters usually result in a binary outcome: the applicant is either selected or not. This final outcome is not considered in IRR, which instead focuses on the ratings of the individual subjects or objects. We outline the connection between the ratings' measurement model (used for IRR) and a binary classification framework. We develop a simple way of approximating the probability of correctly selecting the best applicants, which allows us to compute error probabilities of the selection procedure (i.e., the false positive and false negative rate) or their lower bounds. We draw connections between IRR and binary classification metrics, showing that the binary classification metrics depend solely on the IRR coefficient and the proportion of selected applicants. We assess the performance of the approximation in a simulation study and apply it in an example comparing the reliability of multiple grant peer review selection procedures. We also discuss other possible uses of the explored connections in contexts such as educational testing, psychological assessment, and health-related measurement, and implement the computations in the R package IRR2FPR.
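The paper derives its bounds analytically; as a purely illustrative alternative (a brute-force simulation under an assumed normal latent-score model, not the authors' approximation or the IRR2FPR package), the false positive rate of a top-k selection can be estimated by Monte Carlo for a given reliability:

```python
import random
import statistics

def selection_fpr(reliability, n_applicants=100, n_selected=20,
                  n_sims=500, seed=0):
    """Monte-Carlo estimate of the false positive rate of a top-k
    selection based on ratings with a given reliability.
    Latent-score model: observed = sqrt(R)*true + sqrt(1-R)*noise,
    so the reliability R is the share of observed-score variance
    attributable to the true score."""
    rng = random.Random(seed)
    r = reliability ** 0.5
    e = (1 - reliability) ** 0.5
    fprs = []
    for _ in range(n_sims):
        true = [rng.gauss(0, 1) for _ in range(n_applicants)]
        obs = [r * t + e * rng.gauss(0, 1) for t in true]
        # truly best applicants vs. those actually selected
        top_true = set(sorted(range(n_applicants), key=lambda i: true[i])[-n_selected:])
        top_obs = sorted(range(n_applicants), key=lambda i: obs[i])[-n_selected:]
        fp = sum(1 for i in top_obs if i not in top_true)
        fprs.append(fp / n_selected)
    return statistics.mean(fprs)
```

With perfect reliability the simulated FPR is zero, and it grows as reliability drops, which is the qualitative relationship the paper formalizes.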
3. Penalized deep partially linear Cox models with application to CT scans of lung cancer patients. Biometrics 2024; 80:ujad024. [PMID: 38412302 PMCID: PMC10898596 DOI: 10.1093/biomtc/ujad024]
Abstract
Lung cancer is a leading cause of cancer mortality globally, highlighting the importance of understanding its mortality risks to design effective patient-centered therapies. The National Lung Screening Trial (NLST) employed computed tomography texture analysis, which provides objective measurements of texture patterns on CT scans, to quantify the mortality risks of lung cancer patients. Partially linear Cox models have gained popularity for survival analysis by dissecting the hazard function into parametric and nonparametric components, allowing for the effective incorporation of both well-established risk factors (such as age and clinical variables) and emerging risk factors (e.g., image features) within a unified framework. However, when the dimension of parametric components exceeds the sample size, the task of model fitting becomes formidable, while nonparametric modeling grapples with the curse of dimensionality. We propose a novel Penalized Deep Partially Linear Cox Model (Penalized DPLC), which incorporates the smoothly clipped absolute deviation (SCAD) penalty to select important texture features and employs a deep neural network to estimate the nonparametric component of the model. We prove the convergence and asymptotic properties of the estimator and compare it to other methods through extensive simulation studies, evaluating its performance in risk prediction and feature selection. The proposed method is applied to the NLST study dataset to uncover the effects of key clinical and imaging risk factors on patients' survival. Our findings provide valuable insights into the relationship between these factors and survival outcomes.
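The SCAD penalty mentioned in this abstract has a standard closed form (Fan and Li's formulation, with tuning constant a, conventionally 3.7); a minimal sketch of the penalty value for one coefficient, not tied to the paper's actual estimation code:

```python
def scad_penalty(beta, lam, a=3.7):
    """Smoothly clipped absolute deviation (SCAD) penalty for a
    single coefficient beta (requires a > 2; a = 3.7 is the
    conventional choice)."""
    b = abs(beta)
    if b <= lam:
        return lam * b                      # L1-like near zero
    elif b <= a * lam:
        # quadratic transition region
        return (2 * a * lam * b - b ** 2 - lam ** 2) / (2 * (a - 1))
    else:
        return lam ** 2 * (a + 1) / 2       # flat: large betas are not shrunk
```

Unlike the lasso, the penalty flattens out for large coefficients, which is what gives SCAD its near-unbiasedness for strong signals.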
4. The scientific reinvention of forensic science. Proc Natl Acad Sci U S A 2023; 120:e2301840120. [PMID: 37782789 PMCID: PMC10576124 DOI: 10.1073/pnas.2301840120]
Abstract
Forensic science is undergoing an evolution in which a long-standing "trust the examiner" focus is being replaced by a "trust the scientific method" focus. This shift, which is in progress and still partial, is critical to ensure that the legal system uses forensic information in an accurate and valid way. In this Perspective, we discuss the ways in which the move to a more empirically grounded scientific culture for the forensic sciences impacts testing, error rate analyses, procedural safeguards, and the reporting of forensic results. However, we caution that the ultimate success of this scientific reinvention likely depends on whether the courts begin to engage with forensic science claims in a more rigorous way.
5. Correlation between oculometric measures and clinical assessment in ALS patients participating in a phase IIb clinical drug trial. Amyotroph Lateral Scler Frontotemporal Degener 2023:1-7. [PMID: 37026395 DOI: 10.1080/21678421.2023.2196315]
Abstract
Objective: Oculometric measures (OM) can be extracted from eye movements during the presentation of visual stimuli. Studies have indicated the benefit of OM in the assessment of neurological disorders, including amyotrophic lateral sclerosis (ALS). We used a new software-based platform for the extraction of OM during patients' assessment. Our objective was to examine the correlation between OM and clinical assessment as part of a clinical drug trial. Methods: 32 ALS patients (mean age 60.75 ± 10.36 years, 13 females) were assessed using a validated score (ALSFRS-R) and a novel software-based oculometric platform (NeuraLight, Israel) as part of a clinical drug trial. Correlations of ALSFRS-R with OM were calculated and compared with matched healthy subjects' data (N = 129). Results: A moderate correlation was found between ALSFRS-R and corrective saccadic latency (R = 0.52, p = 0.002). Fixation time during smooth pursuit and peak velocity during pro-saccades were both worse in ALS patients versus healthy subjects (mean (SD) = 0.34 (0.06) vs. 0.3 (0.07), p = 0.01, and 0.41 (0.05) vs. 0.38 (0.07), p = 0.04, respectively). Patients with bulbar symptoms (N = 14) had a decreased pro-saccade gain compared with patients without bulbar symptoms (mean (SD) = 0.1 (0.04) vs. 0.93 (0.07), p = 0.01), and a larger error rate of anti-saccade movement (mean (SD) = 0.42 (0.21) vs. 0.28 (0.16), p = 0.04). Conclusions: Oculometric measures correlated with the clinical assessment and differed from the data of healthy subjects. Further studies are warranted to establish the role of oculometrics in the evaluation of patients with ALS and other neurodegenerative disorders, and their possible use in clinical trials.
6. Accuracy of comparison decisions by forensic firearms examiners. J Forensic Sci 2023; 68:86-100. [PMID: 36183147 PMCID: PMC10092368 DOI: 10.1111/1556-4029.15152]
Abstract
This black box study assessed the performance of forensic firearms examiners in the United States. It involved three different types of firearms and 173 volunteers who performed a total of 8640 comparisons of both bullets and cartridge cases. The overall false-positive error rate was estimated as 0.656% and 0.933% for bullets and cartridge cases, respectively, while the rate of false negatives was estimated as 2.87% and 1.87% for bullets and cartridge cases, respectively. The majority of errors were made by a limited number of examiners. Because chi-square tests of independence strongly suggest that error probabilities are not the same for each examiner, these are maximum-likelihood estimates based on the beta-binomial probability model and do not depend on an assumption of equal examiner-specific error rates. Corresponding 95% confidence intervals are (0.305%, 1.42%) and (0.548%, 1.57%) for false positives for bullets and cartridge cases, respectively, and (1.89%, 4.26%) and (1.16%, 2.99%) for false negatives for bullets and cartridge cases, respectively. The results of this study are consistent with prior studies, despite its comprehensive design and challenging specimens.
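The study's intervals come from a beta-binomial model that allows error rates to vary across examiners. As a simpler point of comparison only (an assumption, not the authors' method), a Wilson score interval for a raw pooled error rate, which treats all comparisons as sharing one error probability, can be computed as:

```python
import math

def wilson_ci(errors, n, z=1.96):
    """Wilson score 95% interval for a binomial proportion.
    Note: this assumes a single shared error rate across examiners,
    unlike the beta-binomial estimates used in the study."""
    p = errors / n
    denom = 1 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return center - half, center + half
```

Even with zero observed errors, the interval's upper bound stays well above zero, which is why error-rate studies report intervals rather than point estimates alone.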
7. Accuracy of forensic pathologists in incorporating post-mortem CT (PMCT) in forensic death investigation. J Forensic Sci 2022; 67:2351-2359. [PMID: 36069005 DOI: 10.1111/1556-4029.15131]
Abstract
Post-mortem computed tomography (PMCT) is now performed routinely in some medical examiner's offices, and the images are typically interpreted by forensic pathologists. In this study, the question of whether pathologists appropriately identify significant PMCT findings and incorporate them into the death investigation report and the cause and manner of death (COD and MOD) statements was addressed. We retrospectively reviewed 200 cases where PMCT was performed. The cases were divided into four categories: (1) full autopsy without radiology consultation (n = 77), (2) external exam without radiology consultation (n = 79), (3) full autopsy with radiology consultation (n = 26), (4) external exam with radiology consultation (n = 18). A radiologist (not the consult radiologist) read the PMCT images, and a pathologist (not the case pathologist) reviewed the case pathologist's post-mortem examination report in tandem to determine any PMCT findings omitted from the report. Omitted findings were classified into error types according to a modified Goldman classification including Major 1: Unrecognized fatal injury or pathology that would change COD and/or MOD, and Major 2: Unrecognized fatal injury or pathology that would not change COD and/or MOD. A total of 13 Major errors were identified (6.5%), and none definitively changed the MOD. All four Major-1 errors which could change the COD were found in Category 2. Of 9 Major-2 errors, 2 occurred in Category 1, 6 occurred in Category 2, and 1 occurred in Category 4. In conclusion, forensic pathologists who routinely utilize computed tomography (CT) interpret CT images well enough to reliably certify the COD and MOD.
8. Segmentation-Based Classification Deep Learning Model Embedded with Explainable AI for COVID-19 Detection in Chest X-ray Scans. Diagnostics (Basel) 2022; 12:2132. [PMID: 36140533 PMCID: PMC9497601 DOI: 10.3390/diagnostics12092132]
Abstract
Background and Motivation: COVID-19 has resulted in a massive loss of life during the last two years. The current imaging-based diagnostic methods for COVID-19 detection in multiclass pneumonia-type chest X-rays are not very successful in clinical practice due to high error rates. Our hypothesis states that if we can achieve a segmentation-based classification error rate <5%, typically adopted for 510(k) regulatory purposes, the diagnostic system can be adopted in clinical settings. Method: This study proposes 16 types of segmentation-based classification deep learning-based systems for automatic, rapid, and precise detection of COVID-19. The two deep learning-based segmentation networks, namely UNet and UNet+, along with eight classification models, namely VGG16, VGG19, Xception, InceptionV3, Densenet201, NASNetMobile, Resnet50, and MobileNet, were applied to select the best-suited combination of networks. Using the cross-entropy loss function, the system performance was evaluated by Dice, Jaccard, area-under-the-curve (AUC), and receiver operating characteristics (ROC) and validated using Grad-CAM in an explainable AI framework. Results: The best performing segmentation model was UNet, which exhibited an accuracy, loss, Dice, Jaccard, and AUC of 96.35%, 0.15%, 94.88%, 90.38%, and 0.99 (p-value <0.0001), respectively. The best performing segmentation-based classification model was UNet+Xception, which exhibited an accuracy, precision, recall, F1-score, and AUC of 97.45%, 97.46%, 97.45%, 97.43%, and 0.998 (p-value <0.0001), respectively. Our system outperformed existing methods for segmentation-based classification models; the mean improvement of the UNet+Xception system over all the remaining studies was 8.27%. Conclusion: Segmentation-based classification is a viable option, as the hypothesis (error rate <5%) holds true, and it can thus be adopted in clinical practice.
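The Dice and Jaccard overlap scores used above to evaluate segmentation have simple set-based definitions; a minimal sketch on flat binary masks (illustrative only, not the paper's evaluation code):

```python
def dice_jaccard(mask_a, mask_b):
    """Dice and Jaccard overlap scores for two binary masks,
    given as flat 0/1 sequences of equal length."""
    inter = sum(1 for x, y in zip(mask_a, mask_b) if x and y)
    size_a = sum(mask_a)
    size_b = sum(mask_b)
    union = size_a + size_b - inter
    # empty-vs-empty masks conventionally score 1.0 (perfect agreement)
    dice = 2 * inter / (size_a + size_b) if size_a + size_b else 1.0
    jaccard = inter / union if union else 1.0
    return dice, jaccard
```

The two metrics are monotonically related (Dice = 2J / (1 + J)), so they rank segmentations identically but on different scales.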
9. A Universal Testbed for IoT Wireless Technologies: Abstracting Latency, Error Rate and Stability from the IoT Protocol and Hardware Platform. Sensors (Basel) 2022; 22:4159. [PMID: 35684780 PMCID: PMC9185241 DOI: 10.3390/s22114159]
Abstract
IoT applications rely strongly on the performance of wireless communication networks. There is a wide variety of wireless IoT technologies and choosing one over another depends on the specific use case requirements—be they technical, implementation-related or functional factors. Among the technical factors, latency, error rate and stability are the main parameters that affect communication reliability. In this work, we present the design, development and validation of a Universal Testbed to experimentally measure these parameters, abstracting them from the wireless IoT technology protocols and hardware platforms. The Testbed setup, which is based on a Raspberry Pi 4, only requires the IoT device under test to have digital inputs. We evaluate the Testbed’s accuracy with a temporal characterisation—accumulated response delay—showing an error less than 290 µs, leading to a relative error around 3% for the latencies of most IoT wireless technologies, the latencies of which are usually on the order of tens of milliseconds. Finally, we validate the Testbed’s performance by comparing the latency, error and stability measurements with those expected for the most common IoT wireless technologies: 6LoWPAN, LoRaWAN, Sigfox, Zigbee, Wi-Fi, BLE and NB-IoT.
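The three parameters the testbed measures (latency, error rate, and stability) can be summarized from paired send/receive timestamps. A minimal sketch under assumed conventions (treating a missing or over-timeout reply as an error and jitter as the latency standard deviation), not the testbed's actual protocol or hardware interface:

```python
import statistics

def link_metrics(sent_ts, received_ts, timeout=1.0):
    """Latency, error rate and jitter ('stability') from paired
    send/receive timestamps in seconds. A receive of None, or a
    latency above `timeout`, counts as an error (hypothetical
    convention for this sketch)."""
    latencies = []
    errors = 0
    for s, r in zip(sent_ts, received_ts):
        if r is None or (r - s) > timeout:
            errors += 1
        else:
            latencies.append(r - s)
    return {
        "mean_latency": statistics.mean(latencies) if latencies else None,
        "jitter": statistics.stdev(latencies) if len(latencies) > 1 else 0.0,
        "error_rate": errors / len(sent_ts),
    }
```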
10. Individualized discovery of rare cancer drivers in global network context. eLife 2022; 11:e74010. [PMID: 35593700 PMCID: PMC9159755 DOI: 10.7554/elife.74010]
Abstract
Recent advances in genome sequencing have expanded the space of known cancer driver genes several-fold. However, most of this surge was based on computational analysis of somatic mutation frequencies and/or their impact on protein function. In contrast, experimental research has necessarily accounted for the functional context of mutations interacting with other genes and conferring cancer phenotypes; ultimately, it is such results that become the 'hard currency' of cancer biology. Our new method, NEAdriver, employs the knowledge accumulated thus far, in the form of a global interaction network and functionally annotated pathways, to recover known and predict novel driver genes. Driver discovery was individualized by accounting for mutations' co-occurrence in each tumour genome, as an alternative to summarizing information over whole cancer patient cohorts. For each somatic genome change, probabilistic estimates from two lanes of network analysis were combined into a joint likelihood of being a driver. Thus, the ability to detect previously unnoticed candidate driver events emerged from combining individual genomic context with a network perspective. The procedure was applied to the 10 largest cancer cohorts, followed by evaluation of error rates against previously established cancer gene sets. The discovered driver combinations were shown to be informative on cancer outcome. This revealed driver genes with individually sparse mutation patterns that would not be detectable by other computational methods and that relate to cancer biology domains poorly covered by previous analyses. In particular, recurrent mutations of collagen, laminin, and integrin genes were observed in adenocarcinoma and glioblastoma cancers. Considering constellation patterns of candidate drivers in individual cancer genomes opens a novel avenue for personalized cancer medicine.
11. Tooth hop variability in human and nonhuman bone: Effect on the estimation of saw blade TPI. J Forensic Sci 2021; 67:102-111. [PMID: 34585386 DOI: 10.1111/1556-4029.14897]
Abstract
Forensic research has demonstrated that tooth hop (TH) is a valuable measurement from saw-cut bones as it can be used to estimate teeth-per-inch (TPI) of a saw used in postmortem dismemberment cases. However, error rates for TPI estimation are still under development and knowledge of how bone tissue affects TH measurements remains unclear. The purpose of this research was to investigate the effects of tissue variability through the use of different taxa on the accuracy and precision of TH measurements in the bone to estimate TPI of the blade. A total of 1766 TH measurements were analyzed from human, pig, and deer long bones cut by two 7 TPI saw blades of different tooth type. Fifty distance-between-teeth measurements before and after sawing were collected directly from each blade for comparison to bone-measured TH to assess potential effects of tooth wear on TH variability. ANOVA and F tests were used to compare mean TH and variance, respectively, by saw-species (i.e., crosscut-deer, rip-deer) and species groups (i.e., all deer, all pig), with significance determined at the p < 0.05 level. TH measurements were converted to usable TPI ranges, which would typically be presented in a forensic report. It is concluded that significant differences in TH (mm) do not necessarily reflect significant differences in associated TPI ranges of suspect blades. Forensic reports should report mean TPI ± 1.5-2.5 TPI while providing a sample size indicating number of TH measured rather than just number of cuts or cut surfaces examined.
12. Improving the efficacy and reliability of rTMS language mapping by increasing the stimulation frequency. Hum Brain Mapp 2021; 42:5309-5321. [PMID: 34387388 PMCID: PMC8519874 DOI: 10.1002/hbm.25619]
Abstract
Repetitive TMS (rTMS) with a frequency of 5-10 Hz is widely used for language mapping. However, it may be accompanied by discomfort and is limited in the number and reliability of evoked language errors. Here, we systematically tested the influence of different stimulation frequencies (i.e., 10, 30, and 50 Hz) on the tolerability, number, reliability, and cortical distribution of language errors, aiming at improved language mapping. 15 right-handed, healthy subjects (m = 8, median age: 29 years) were investigated in two sessions, separated by 2-5 days. In each session, 10, 30, and 50 Hz rTMS were applied over the left hemisphere in a randomized order during a picture naming task. Overall, 30 Hz rTMS evoked significantly more errors (20 ± 12%) compared to 50 Hz (12 ± 8%; p < .01), whereas error rates were comparable between 30 or 50 Hz and 10 Hz (18 ± 11%). Across all conditions, a significantly higher error rate was found in Session 1 (19 ± 13%) compared to Session 2 (13 ± 7%, p < .05). The error rate was poorly reliable between sessions for 10 Hz (intraclass correlation coefficient, ICC = .315) and 30 Hz (ICC = .427), whereas 50 Hz showed moderate reliability (ICC = .597). Spatial reliability of language errors was low to moderate, with a tendency toward increased reliability for higher frequencies, for example, within frontal regions. Compared to 10 Hz, both 30 and 50 Hz were rated as less painful. Taken together, our data favor the use of rTMS protocols employing higher frequencies for evoking language errors reliably and with reduced discomfort, depending on the region of interest.
13.
Abstract
Humans can learn and produce skilled movement sequences from memory, yet the nature of sequence planning is not well understood. Previous computational and neurophysiological work suggests that movements in a sequence are planned as parallel graded activations and selected for output through competition. However, the relevance of this planning pattern to sequence production fluency and accuracy, as opposed to the temporal structure of sequences, is unclear. To resolve this question, we assessed the relative availability of constituent movements behaviorally during the preparation of motor sequences from memory. In three separate multisession experiments, healthy participants were trained to retrieve and produce four-element finger press sequences with particular timing according to an abstract sequence cue. We evaluated reaction time (RT) and error rate as markers of movement availability to constituent movement probes. Our results demonstrate that longer preparation time produces more pronounced differences in availability between adjacent sequence elements, whereas no effect was found for sequence speed or temporal grouping. Further, participants with larger position-dependent differences in movement availability tended to initiate correct sequences faster and with a higher temporal accuracy. Our results suggest that competitive preactivation is established gradually during sequence planning and predicts sequence skill, rather than the temporal structure of the motor sequence.

NEW & NOTEWORTHY: Sequence planning is an integral part of motor sequence control. Here, we demonstrate that the competitive state of sequential movements during sequence planning can be read out behaviorally through movement probes. We show that position-dependent differences in movement availability during planning reflect sequence preparedness and skill but not the timing of the planned sequence. Behavioral access to the preparatory state of movements may serve as a marker of sequence planning capacity.
14. Controlling type I error rates in multi-arm clinical trials: A case for the false discovery rate. Pharm Stat 2021; 20:109-116. [PMID: 32790026 PMCID: PMC7612170 DOI: 10.1002/pst.2059]
Abstract
Multi-arm trials are an efficient way of simultaneously testing several experimental treatments against a shared control group. As well as reducing the sample size required compared to running each trial separately, they have important administrative and logistical advantages. There has been debate over whether multi-arm trials should correct for the fact that multiple null hypotheses are tested within the same experiment. Previous opinions have ranged from no correction being required to a stringent correction being needed (controlling the family-wise error rate, FWER, i.e., the probability of making at least one type I error), with regulators arguing for the latter in confirmatory settings. In this article, we propose that controlling the false discovery rate (FDR) is a suitable compromise, with an appealing interpretation in multi-arm clinical trials. We investigate the properties of the different correction methods in terms of the positive and negative predictive value (respectively, how confident we are that a recommended treatment is effective and that a non-recommended treatment is ineffective), varying the number of arms and the proportion of treatments that are truly effective. Controlling the FDR provides good properties: it retains the high positive predictive value of FWER correction in situations where a low proportion of treatments is effective, and it also has a good negative predictive value in situations where a high proportion of treatments is effective. In a multi-arm trial testing distinct treatment arms, we recommend that sponsors and trialists consider use of the FDR.
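FDR control is most commonly implemented via the Benjamini-Hochberg step-up procedure; a minimal sketch (the standard algorithm, shown here for illustration, since the abstract argues for FDR control in general rather than prescribing an implementation):

```python
def benjamini_hochberg(pvalues, q=0.05):
    """Benjamini-Hochberg step-up procedure: return the set of
    hypothesis indices rejected while controlling the FDR at level q.
    Find the largest rank k with p_(k) <= k*q/m, then reject the
    k smallest p-values."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvalues[i] <= rank * q / m:
            k_max = rank
    return set(order[:k_max])
```

The step-up character matters: a p-value that fails its own threshold can still be rejected if a larger p-value further down the list passes its threshold.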
15. Beretta barrel fired bullet validation study. J Forensic Sci 2020; 66:547-556. [PMID: 33104244 DOI: 10.1111/1556-4029.14604]
Abstract
A report published in 2016 by the President's Council of Advisors on Science and Technology (PCAST) criticized studies that had been published regarding the discipline of firearm identification. This study was designed to answer some of these criticisms and involved 30 consecutively manufactured Beretta brand 9 mm Luger caliber barrels. The study had an "open set" design to help the discipline of firearm identification establish the "foundational validity" outlined in the PCAST report. Seventy-two qualified firearm examiners completed and submitted answers for this study, which included 15 knowns and 20 unknowns. An additional 5 firearms with characteristics similar to the Beretta barrels were also included as unknowns, providing "known non-match" comparisons. Test sets were created using the random function in Microsoft Excel. Collaborative Testing Services (CTS) funded and facilitated the study, distributed the tests, and collected the answers from qualified firearm examiners throughout the United States and the world. Firearm examiners were able to complete the test of fired bullets with a low error rate: the error rate for the corrected data was 0.08% (1 in 1250), with a lower confidence bound as low as 0.01% (1 in 10,000) and an upper confidence bound as high as 0.4% (1 in 250).
16. Results of the 3D Virtual Comparison Microscopy Error Rate (VCMER) Study for firearm forensics. J Forensic Sci 2020; 66:557-570. [PMID: 33104255 DOI: 10.1111/1556-4029.14602]
Abstract
The digital examination of scanned or measured 3D surface topography is referred to as Virtual Comparison Microscopy (VCM). Within the discipline of firearm and toolmark examination, VCM enables review and comparison of microscopic toolmarks on fired ammunition components. In the coming years, this technique may supplement and potentially replace the light comparison microscope as the primary instrument used for firearm and toolmark examination. This paper describes a VCM error rate and validation study involving 107 participants. The study included 40 test sets of fired cartridge cases from firearms with a variety of makes, models, and calibers. Participants used commercially available VCM software which allowed digital data distribution, specimen visualization, and submission of conclusions. The software also allowed participants to annotate areas of similarity and dissimilarity to support their conclusions. The primary cohort of 76 qualified United States and Canadian examiners that completed the study had an overall false-positive error rate of 3 errors from 693 comparisons (0.43%) and a false-negative error rate of 0 errors from 491 comparisons (0.0%). This accuracy is supplemented by the participants' surface annotations, which provide insight into the causes of errors and the overall consistency across the independent examinations conducted in the study. The ability to obtain highly accurate conclusions on test fires from a wide range of firearms supports the hypothesis that VCM is a useful tool within the crime laboratory.
|
17
|
Fingerprint error rate on close non-matches. J Forensic Sci 2020; 66:129-134. [PMID: 32990979 DOI: 10.1111/1556-4029.14580] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2020] [Revised: 08/28/2020] [Accepted: 09/01/2020] [Indexed: 02/06/2023]
Abstract
The accuracy of fingerprint identifications is critically important to the administration of criminal justice. Accuracy is challenging when two prints from different sources have many common features and few dissimilar features. Such print pairs, known as close non-matches (CNMs), are increasingly likely to arise as ever-growing databases are searched with greater frequency. In this study, 125 fingerprint agencies completed a mandatory proficiency test that included two pairs of CNMs. The false-positive error rates on the two CNMs were 15.9% (17 out of 107, 95% C.I.: 9.5%, 24.2%) and 28.1% (27 out of 96, 95% C.I.: 19.4%, 38.2%), respectively. These CNM error rates are (a) inconsistent with the popular notion that fingerprint evidence is nearly infallible, and (b) larger than error rates reported in leading fingerprint studies. We conclude that, when the risk of CNMs is high, the probative value of a reported fingerprint identification may be severely diminished due to an elevated false-positive error risk. We call for additional CNM research, including a replication and expansion of the present study using a representative selection of CNMs from database searches.
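The intervals reported above have the form of exact binomial (Clopper-Pearson) confidence limits. As an illustration (the study does not state which interval method it used, so this is an assumption), a short sketch reproduces the first interval using SciPy's beta quantiles:

```python
from scipy.stats import beta

def clopper_pearson(errors, n, conf=0.95):
    """Exact (Clopper-Pearson) confidence interval for a binomial rate."""
    alpha = 1.0 - conf
    lower = beta.ppf(alpha / 2, errors, n - errors + 1) if errors > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, errors + 1, n - errors) if errors < n else 1.0
    return lower, upper

# First close non-match: 17 erroneous identifications out of 107 attempts.
rate = 17 / 107
lo, hi = clopper_pearson(17, 107)
print(f"{rate:.1%} (95% CI: {lo:.1%}, {hi:.1%})")
```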
|
18
|
Assessment of Drugs Toxicity and Associated Biomarker Genes Using Hierarchical Clustering. Medicina (Kaunas) 2019; 55:451. [PMID: 31398888 PMCID: PMC6723056 DOI: 10.3390/medicina55080451] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2019] [Revised: 08/04/2019] [Accepted: 08/06/2019] [Indexed: 12/13/2022]
Abstract
Background and objectives: Assessment of drug toxicity and its associated biomarker genes is one of the most important tasks in the pre-clinical phase of the drug development pipeline as well as in toxicogenomic studies. A few statistical methods exist for assessing the toxicity of drug doses (DDs) and their associated biomarker genes, but they are computationally expensive because they estimate model parameters through EM (expectation-maximization) based iterative approaches. To overcome this problem, this paper proposes an alternative approach based on hierarchical clustering (HC). Methods and materials: There are several types of HC approaches, whose performance depends on the similarity/distance measure used. We therefore explored suitable combinations of distance measures and HC methods on Japanese Toxicogenomics Project (TGP) datasets, both for clustering/co-clustering DDs and genes and for detecting toxic DDs and their associated biomarker genes. Results: We observed that Ward's HC method with each of the Euclidean, Manhattan, and Minkowski distance measures produces better clustering/co-clustering results. For example, in the case of the glutathione metabolism pathway (GMP) dataset, LOC100359539/Rrm2, Gpx6, RGD1562107, Gstm4, Gstm3, G6pd, Gsta5, Gclc, Mgst2, Gsr, Gpx2, Gclm, Gstp1, LOC100912604/Srm, Gstm4, Odc1, Gsr, and Gss are the biomarker genes, and Acetaminophen_Middle, Acetaminophen_High, Methapyrilene_High, Nitrofurazone_High, Nitrofurazone_Middle, Isoniazid_Middle, and Isoniazid_High are their regulatory (associated) DDs, as identified by our proposed co-clustering algorithm using the Euclidean distance with Ward's method.
Similarly, for the peroxisome proliferator-activated receptor signaling pathway (PPAR-SP) dataset, Cpt1a, Cyp8b1, Cyp4a3, Ehhadh, Plin5, Plin2, Fabp3, Me1, Fabp5, LOC100910385, Cpt2, Acaa1a, Cyp4a1, LOC100365047, Cpt1a, LOC100365047, Angptl4, Aqp7, Cpt1c, Cpt1b, and Me1 are the biomarker genes, and Aspirin_Low, Aspirin_Middle, Aspirin_High, Benzbromarone_Middle, Benzbromarone_High, Clofibrate_Middle, Clofibrate_High, WY14643_Low, WY14643_High, WY14643_Middle, Gemfibrozil_Middle, and Gemfibrozil_High are their regulatory DDs. Conclusions: Overall, the proposed methods co-cluster genes and DDs and simultaneously detect biomarker genes and their regulatory DDs while consuming less time than the other methods mentioned. The results have been validated against the available literature and by functional annotation.
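The Ward/Euclidean combination highlighted above is directly available in SciPy. The following hypothetical sketch clusters a toy matrix of items (rows could stand for genes or drug doses; the data are synthetic, not the TGP datasets) and recovers two planted groups; note that SciPy's Ward linkage is only defined for the Euclidean metric:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Toy expression matrix: rows = items (e.g., genes or drug doses),
# columns = features; two planted groups with distinct means.
group_a = rng.normal(0.0, 0.3, size=(5, 4))
group_b = rng.normal(3.0, 0.3, size=(5, 4))
X = np.vstack([group_a, group_b])

# Ward's method with Euclidean distance (the best-performing
# combination reported in the study).
Z = linkage(X, method="ward", metric="euclidean")
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)
```

Cutting the resulting tree at two clusters separates the planted groups; on real data, co-clustering rows (genes) and columns (DDs) would apply the same linkage to the matrix and its transpose.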
|
19
|
Impact of retrospective data verification to prepare the ICON6 trial for use in a marketing authorization application. Clin Trials 2019; 16:502-511. [PMID: 31347385 PMCID: PMC6801797 DOI: 10.1177/1740774519862528] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
Background: The ICON6 trial (ISRCTN68510403) is a phase III academic-led, international, randomized, three-arm, double-blind, placebo-controlled trial of the addition of cediranib to chemotherapy in recurrent ovarian cancer. It investigated placebo during chemotherapy and maintenance (arm A), cediranib alongside chemotherapy followed by placebo maintenance (arm B), and cediranib throughout both periods (arm C). Results of the primary comparison showed a meaningful gain in progression-free survival (time to progression or death from any cause) when comparing arm A (placebo) with arm C (cediranib). As a consequence of the positive results, AstraZeneca engaged with the Medical Research Council trials unit to discuss regulatory submission using ICON6 as the single pivotal trial.
Methods: A relatively limited level of on-site monitoring, single data entry, and investigators' local evaluation of progression were used on trial. In order to submit a license application, it was decided that (a) extensive retrospective source data verification of medical records against case report forms should be performed, (b) further quality control checks for accuracy of data entry should be performed, and (c) blinded independent central review of the images used to define progression should be undertaken. To assess the value of these extra activities, we summarize their impact on both efficacy and safety outcomes.
Results: Data point changes were minimal; those key to the primary results had a 0.47% error rate (36/7686), and supporting data points had a 0.18% error rate (109/59,261). The impact of the source data verification and quality control processes was analyzed jointly. The conclusion drawn for the primary outcome measure of progression-free survival between arm A and arm C was unchanged: the log-rank test p-value changed only at the sixth decimal place, the hazard ratio remained 0.57 apart from a marginal change in its upper bound (0.74 to 0.73), and the median progression-free survival benefit from arm C remained 2.4 months. Separately, the blinded independent central review of progression scans was performed as a sensitivity analysis; estimates and p values varied slightly but overall demonstrated a difference between arms consistent with the initial result. Some increases in toxicity were observed, though these were generally minor, with the exception of hypertension; none of these increases were systematically biased toward one arm.
Conclusion: The conduct of this pragmatic, academic-sponsored trial was sufficient, as shown by the results remaining largely unchanged following retrospective verification, despite the trial not being designed for use in a marketing authorization. The burden of such comprehensive retrospective effort required to make the results of ICON6 acceptable to regulators is difficult to justify.
|
20
|
Modeling Saccadic Action Selection: Cortical and Basal Ganglia Signals Coalesce in the Superior Colliculus. Front Syst Neurosci 2019; 13:3. [PMID: 30814938 PMCID: PMC6381059 DOI: 10.3389/fnsys.2019.00003] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2018] [Accepted: 01/10/2019] [Indexed: 11/13/2022] Open
Abstract
The distributed nature of information processing in the brain creates a complex variety of decision making behavior. Likewise, computational models of saccadic decision making behavior are numerous and diverse. Here we present a generative model of saccadic action selection in the context of competitive decision making in the superior colliculus (SC) in order to investigate how independent neural signals may converge to interact and guide saccade selection, and to test if systematic variations can better replicate the variability in responses that are part of normal human behavior. The model was tasked with performing pro- and anti-saccades in order to replicate specific attributes of healthy human saccade behavior. Participants (ages 18-39) were instructed to either look toward (pro-saccade, well-practiced automated response) or away from (anti-saccade, combination of inhibitory and voluntary responses) a peripheral visual stimulus. They generated express and regular latency saccades in the pro-saccade task. In the anti-saccade task, correct reaction times were longer and participants occasionally looked at the stimulus (direction error) at either express or regular latencies. To gain a better understanding of the underlying neural processes that lead to saccadic action selection and response inhibition, we implemented 8 inputs inspired by systems neuroscience. These inputs reflected known sensory, automated, voluntary, and inhibitory components of cortical and basal ganglia activity that coalesces in the intermediate layers of the SC (SCi). The model produced bimodal reaction time distributions, where express and regular latency saccades had distinct modes, for both correct pro-saccades and direction errors in the anti-saccade task. Importantly, express and regular latency direction errors resulted from interactions of different inputs in the model. 
Express latency direction errors were due to a lack of pre-emptive fixation and inhibitory activity, which allowed sensory and automated inputs to initiate a stimulus-driven saccade. Regular latency errors occurred when the automated motor signals were stronger than the voluntary motor signals. While previous models have emulated fewer aspects of these behavioral findings, the simulations here focus on the interaction of a wide variety of physiologically based information integration, producing a richer set of natural behavioral variability.
|
21
|
Inhibition failures and late errors in the antisaccade task: influence of cue delay. J Neurophysiol 2018; 120:3001-3016. [PMID: 30110237 DOI: 10.1152/jn.00240.2018] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
In the antisaccade task participants are required to saccade in the opposite direction of a peripheral visual cue (PVC). This paradigm is often used to investigate inhibition of reflexive responses as well as voluntary response generation. However, it is not clear to what extent different versions of this task probe the same underlying processes. Here, we used the Stochastic Early Reaction, Inhibition, and late Action (SERIA) model to explore how the delay between task cue and PVC affects reaction time (RT) and error rate (ER) when pro- and antisaccade trials are randomly interleaved. Specifically, we contrasted a condition in which the task cue was presented before the PVC with a condition in which the PVC also served as task cue. Summary statistics indicate that ERs and RTs are reduced and contextual effects largely removed when the task is signaled before the PVC appears. The SERIA model accounts for RT and ER in both conditions, and does so better than other candidate models. Modeling demonstrates that voluntary pro- and antisaccades are frequent in both conditions. Moreover, early task cue presentation results in better control of reflexive saccades, leading to fewer fast antisaccade errors and more rapid correct prosaccades. Finally, high-latency errors are shown to be prevalent in both conditions. In summary, SERIA provides an explanation for the differences between the delayed and nondelayed antisaccade tasks. NEW & NOTEWORTHY In this article, we use a computational model to study the mixed antisaccade task. We contrast two conditions in which the task cue is presented either before or concurrently with the saccadic target. Modeling provides a highly accurate account of participants' behavior and demonstrates that a significant number of prosaccades are voluntary actions. Moreover, we provide a detailed quantitative analysis of the types of error that occur in pro- and antisaccade trials.
|
22
|
Pharmacist prescribing in critical care: an evaluation of the introduction of pharmacist prescribing in a single large UK teaching hospital. Eur J Hosp Pharm 2018; 25:e2-e6. [PMID: 31157059 PMCID: PMC6457156 DOI: 10.1136/ejhpharm-2017-001267] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2017] [Revised: 07/14/2017] [Accepted: 07/18/2017] [Indexed: 11/04/2022] Open
Abstract
OBJECTIVES To evaluate the introduction of pharmacist independent prescribing activity across three general critical care units within a single large UK teaching hospital. To identify the prescribing demographics, including the total number of prescriptions, the number prescribed by pharmacists, the reason for pharmacist prescription, the range of medications prescribed, the pharmacist prescribing error rate, and the extent of the pharmacist second 'clinical check'. METHODS Retrospective evaluation of e-prescribing across all general critical care units of a single large UK teaching hospital. All prescribing data were downloaded over a 1-month period (May to June 2016) with analysis of pharmacist prescribing activity including rate, indication, therapeutic class, and error rate. RESULTS In total, 5374 medicines were prescribed in 193 patients during the evaluated period. Prescribing pharmacists were available on the units on 60.4% (58/96) of days, during their working hours, and accounted for 576/5374 (10.7%) of medicines prescribed in 65.2% (126/193) of patients. The majority (342/576) of pharmacist prescriptions were for new medicines. Infections, central nervous system, and nutrition/blood were the top three British National Formulary (BNF) therapeutic categories, accounting for 63.4% (349/576) of all pharmacist prescriptions. The critical care pharmacist prescribing error rate was 0.18% (1/550). CONCLUSIONS Pharmacist independent prescribers demonstrated a high degree and wide-ranging scope of prescribing activity in general critical care patients. Pharmacists contributed a significant proportion of total prescribing, despite less than full service coverage. Prescribing activity was also safe, with a very low error rate recorded.
|
23
|
Do Residency Selection Factors Predict Radiology Resident Performance? Acad Radiol 2018; 25:397-402. [PMID: 29239834 DOI: 10.1016/j.acra.2017.09.020] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/05/2017] [Revised: 09/12/2017] [Accepted: 09/21/2017] [Indexed: 10/18/2022]
Abstract
RATIONALE AND OBJECTIVES The purpose of our study is to determine what information in medical student residency applications predicts radiology residency success as defined by objective clinical performance data. MATERIALS AND METHODS We performed a retrospective cohort study of residents who entered our institution's residency program through the National Resident Matching Program as postgraduate year 2 residents and completed the program over the past 2 years. Medical school grades, selection to Alpha Omega Alpha (AOA) Honor Society, United States Medical Licensing Examination (USMLE) scores, publication in peer-reviewed journals, and whether the applicant was from a peer institution were the variables examined. Clinical performance was determined by calculating each resident's cumulative major discordance rate for on-call cases the resident read and gave a preliminary interpretation. A major discordance was defined as a difference between the preliminary resident and the final attending interpretations that could immediately impact the care of the patient. A multivariate logistic regression was performed to determine significant variables. RESULTS Twenty-seven residents provided preliminary reports on call for 67,145 studies. The mean major discordance rate was 1.08% (range 0.34%-2.54%). Higher USMLE Step 1 scores, publication before residency, and election to AOA Honor Society were all statistically significant predictors of lower major discordance rates (P values 0.01, 0.01, and <0.001, respectively). CONCLUSIONS Overall resident performance was excellent. There are predictors that help select the better performing residents, namely higher USMLE Step 1 scores, one to two publications during medical school, and election to AOA in the junior year of medical school.
|
24
|
Abstract
Virus fitness is a complex parameter that results from the interaction of virus-specific characters (e.g. intracellular growth rate, adsorption rate, virion extracellular stability, and tolerance to mutations) with others that depend on the underlying fitness landscape and the internal structure of the whole population. Individual mutants usually have lower fitness values than the complex population from which they derive. When they are propagated and allowed to attain large population sizes for a sufficiently long time, they approach mutation-selection equilibrium with the concomitant fitness gains. The optimization process follows dynamics that vary among viruses, likely due to differences in any of the parameters that determine fitness values. As a consequence, when different mutants spread together, the number of generations experienced by each of them prior to co-propagation may determine its particular fate. In this work we attempt to clarify the effect of different levels of population diversity on the outcome of competition dynamics. To this end, we analyze the behavior of two mutants of the RNA bacteriophage Qβ that co-propagate with the wild-type virus. When both competitor viruses are clonal, the mutants rapidly outcompete the wild type. However, the outcome in competitions performed with partially optimized virus populations depends on the distance of the competitors to their clonal origin. We also implement a theoretical population dynamics model that describes the evolution of a heterogeneous population of individuals, each characterized by a fitness value, subjected to subsequent cycles of replication and mutation.
The experimental results are explained in the framework of our theoretical model under two non-excluding, likely complementary assumptions: (1) The relative advantage of both competitors changes as populations approach mutation-selection equilibrium, as a consequence of differences in their growth rates and (2) one of the competitors is more robust to mutations than the other. The main conclusion is that the nearness of an RNA virus population to mutation-selection equilibrium is a key factor determining the fate of particular mutants arising during replication.
|
25
|
Cold adaptation of tRNA nucleotidyltransferases: A tradeoff in activity, stability and fidelity. RNA Biol 2017; 15:144-155. [PMID: 29099323 DOI: 10.1080/15476286.2017.1391445] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022] Open
Abstract
Cold adaptation is an evolutionary process that has a dramatic impact on enzymatic activity. Increased flexibility of the protein structure represents the main evolutionary strategy for efficient catalysis and reaction rates in the cold, but is achieved at the expense of structural stability. This results in a significant activity-stability tradeoff, as has been observed for several metabolic enzymes. In polymerases, however, not only reaction rates but also fidelity plays an important role, as these enzymes have to synthesize copies of DNA and RNA that are as exact as possible. Here, we investigate the effects of cold adaptation on the highly accurate CCA-adding enzyme, an RNA polymerase that uses an internal amino acid motif within the flexible catalytic core as a template to synthesize the CCA triplet at tRNA 3'-ends. As the relative orientation of these residues determines nucleotide selection, we characterized how cold adaptation impacts template reading and fidelity. In a comparative analysis of closely related psychro-, meso-, and thermophilic enzymes, the cold-adapted polymerase shows a remarkably high error rate during CCA synthesis in vitro as well as in vivo. Accordingly, CCA-adding activity at low temperatures is not only achieved at the expense of structural stability, but also results in reduced polymerization fidelity.
|
26
|
Slowed Prosaccades and Increased Antisaccade Errors As a Potential Behavioral Biomarker of Multiple System Atrophy. Front Neurol 2017; 8:261. [PMID: 28676787 PMCID: PMC5476968 DOI: 10.3389/fneur.2017.00261] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/13/2017] [Accepted: 05/24/2017] [Indexed: 02/06/2023] Open
Abstract
Current clinical diagnostic tools are limited in their ability to accurately differentiate idiopathic Parkinson’s disease (PD) from multiple system atrophy (MSA) and other parkinsonian disorders early in the disease course, but eye movements may stand as objective and sensitive markers of disease differentiation and progression. To assess the use of eye movement performance for uniquely characterizing PD and MSA, subjects diagnosed with PD (N = 21), MSA (N = 11), and age-matched controls (C, N = 20) were tested on the prosaccade and antisaccade tasks using an infrared eye tracker. Twenty of these subjects were retested ~7 months later. Saccade latencies, error rates, and longitudinal changes in saccade latencies were measured. Both PD and MSA patients had greater antisaccade error rates than C subjects, but MSA patients exhibited longer prosaccade latencies than both PD and C patients. With repeated testing, antisaccade latencies improved over time, with benefits in C and PD but not MSA patients. In the prosaccade task, the normal latencies of the PD group show that basic sensorimotor oculomotor function remains intact in mid-stage PD, whereas the impaired latencies of the MSA group suggest additional degeneration earlier in the disease course. Changes in antisaccade latency appeared most sensitive to differences between MSA and PD across short time intervals. Therefore, in these mid-stage patients, increased antisaccade errors combined with slowed prosaccade latencies might serve as a useful marker for early differentiation between PD and MSA, and antisaccade performance as a measure of MSA progression. Together, our findings suggest that eye movements are promising biomarkers for early differentiation and progression of parkinsonian disorders.
|
27
|
The Development and Application of Random Match Probabilities to Firearm and Toolmark Identification. J Forensic Sci 2017; 62:619-625. [PMID: 28449257 DOI: 10.1111/1556-4029.13386] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/17/2016] [Revised: 06/26/2016] [Accepted: 07/17/2016] [Indexed: 11/30/2022]
Abstract
The field of firearms and toolmark analysis has encountered deep scrutiny of late, stemming from a handful of voices, primarily in the law and statistical communities. While strong scrutiny is a healthy and necessary part of any scientific endeavor, much of the current criticism leveled at firearm and toolmark analysis is, at best, misinformed and, at worst, punditry. One of the most persistent criticisms stems from the view that as the field lacks quantified random match probability data (or at least a firm statistical model) with which to calculate the probability of a false match, all expert testimony concerning firearm and toolmark identification or source attribution is unreliable and should be ruled inadmissible. However, this critique does not stem from the hard work of actually obtaining data and performing the scientific research required to support or reject current findings in the literature. Although there are sound reasons (described herein) why there is currently no unifying probabilistic model for the comparison of striated and impressed toolmarks as there is in the field of forensic DNA profiling, much statistical research has been, and continues to be, done to aid the criminal justice system. This research has thus far shown that error rate estimates for the field are very low, especially when compared to other forms of judicial error. The first purpose of this paper is to point out the logical fallacies in the arguments of a small group of pundits, who advocate a particular viewpoint but cloak it as fact and research. The second purpose is to give a balanced review of the literature regarding random match probability models and statistical applications that have been carried out in forensic firearm and toolmark analysis.
|
28
|
Automatic detection of malaria parasite in blood images using two parameters. Technol Health Care 2016; 24 Suppl 1:S33-9. [PMID: 26409536 DOI: 10.3233/thc-151049] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Malaria must be diagnosed quickly and accurately at the initial infection stage and treated early to cure it properly. The malaria diagnosis method using a microscope requires much labor and time from a skilled expert, and the diagnosis results vary greatly between individual diagnosticians. Therefore, to enable quick and accurate measurement of malaria parasite infection, studies have been conducted on automated classification techniques using various parameters. In this study, by measuring classification performance across changes in two parameters, we determined the parameter values that best distinguish normal from Plasmodium-infected red blood cells. To reduce the stain deviation of the acquired images, a principal component analysis (PCA) grayscale conversion method was used, and as parameters we used the malaria-infected area and the threshold value used in binarization. The best-performing parameter values were determined by fixing the cell threshold at 128 and selecting the malaria threshold value (72) with the lowest error rate for detecting Plasmodium-infected red blood cells.
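A minimal sketch of the two-step idea described above, PCA-based grayscale conversion followed by threshold binarization, on a synthetic two-color image (illustrative NumPy code, not the study's pipeline or data):

```python
import numpy as np

def pca_grayscale(rgb):
    """Project RGB pixels onto their first principal component.

    Unlike a fixed luminance formula, the projection axis adapts to
    each image, which can reduce stain-color variation.
    """
    pixels = rgb.reshape(-1, 3).astype(float)
    centered = pixels - pixels.mean(axis=0)
    # First principal component of the 3x3 channel covariance.
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    pc1 = eigvecs[:, -1]                      # largest eigenvalue is last
    gray = centered @ pc1
    # Rescale to 0..255 for thresholding.
    gray = (gray - gray.min()) / (np.ptp(gray) + 1e-12) * 255
    return gray.reshape(rgb.shape[:2])

def binarize(gray, threshold):
    return (gray > threshold).astype(np.uint8)

# Synthetic 4x4 "image": dark stained region vs. bright background.
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[:2] = (60, 40, 90)      # stained cells
img[2:] = (200, 190, 210)   # background
mask = binarize(pca_grayscale(img), threshold=128)
print(mask)
```

In the study, a cell threshold of 128 and a malaria threshold of 72 were the tuned values; here the threshold merely separates the two synthetic regions.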
|
29
|
Abstract
The sciatic functional index (SFI) is a popular parameter for peripheral nerve evaluation that relies on footprints obtained with ink and paper. Drawbacks include smearing artefacts and a lack of dynamic information during measurement. Modern applications use digitized systems that can deliver results with less analytical effort and fewer mice. However, the systems are expensive (€40,000). This study aimed to evaluate the applicability and precision of a self-made, low-cost infrared system for evaluating SFI in mice. Mice were subjected to unilateral sciatic nerve crush injury (crush group; n = 7) and sham operation (sham group; n = 4). They were evaluated on the day before surgery, the 2nd, 4th and 6th days after injury, and then every day up to the 23rd day after injury. We compared two SFI evaluation methods, i.e., conventional ink-and-paper SFI (C-SFI) and our infrared system (I-SFI). Our apparatus visualized footprints with totally internally reflected infrared light (950 nm) and a camera that can only detect this wavelength. Additionally we performed an analysis with the ladder beam walking test (LBWT) as a reference test. I-SFI assessment reduced the standard deviation by about 33 percent, from 11.6 to 7.8, and cut the variance around the baseline to 21 percent. The system thus requires fewer measurement repetitions and fewer animals, and cuts the cost of keeping the animals. The apparatus cost €321 to build. Our results show that the process of obtaining the SFI can be made more precise via digitization with a self-made, low-cost infrared system.
|
30
|
Attention Deficit/Hyperactivity Disorder (ADHD): age related change of completion time and error rates of Stroop test. THE KOBE JOURNAL OF MEDICAL SCIENCES 2015; 61:E19-E26. [PMID: 25868610] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
BACKGROUND Attention Deficit/Hyperactivity Disorder (ADHD) is a common neurobehavioral problem in children throughout the world. The Stroop test has been widely used for the evaluation of ADHD symptoms. However, the age-related change of Stroop test results has not been fully clarified. METHODS Sixty-five ADHD and 70 age-matched control children aged 6-13 years were enrolled in this study. ADHD was diagnosed based on DSM-IV criteria. We examined the completion time and error rates of the Congruent Stroop test (CST) and Incongruent Stroop test (IST) in ADHD and control children. RESULTS No significant difference was observed in the completion time for the CST or IST between ADHD and control children at 6-9 years old. However, ADHD children at 10-13 years old showed significantly delayed completion times for the CST and IST compared with controls of the same age. As for the error rates of the CST and IST, ADHD and control children at 6-9 years old showed no difference. However, error rates of the CST and IST in ADHD children at 10-13 years were significantly higher than those of controls of the same age. CONCLUSIONS Age may influence the results of the Stroop test in ADHD children. For the ages of 10-13 years, the Stroop test clearly separates ADHD children from control children, suggesting that it may be a useful screening tool for ADHD among preadolescent children.
|
31
|
Electronic inventory systems and barcode technology: impact on pharmacy technical accuracy and error liability. Hosp Pharm 2015. [PMID: 25684799 DOI: 10.1310/hjp5001-034] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
PURPOSE To measure the effects associated with sequential implementation of electronic medication storage and inventory systems and product verification devices on pharmacy technical accuracy and rates of potential medication dispensing errors in an academic medical center. METHODS During four 28-day periods of observation, pharmacists recorded all technical errors identified at the final visual check of pharmaceuticals prior to dispensing. Technical filling errors involving deviations from order-specific selection of product, dosage form, strength, or quantity were documented when dispensing medications using (a) a conventional unit dose (UD) drug distribution system, (b) an electronic storage and inventory system utilizing automated dispensing cabinets (ADCs) within the pharmacy, (c) ADCs combined with barcode (BC) verification, and (d) ADCs and BC verification utilized with changes in product labeling and individualized personnel training in systems application. RESULTS Using a conventional UD system, the overall incidence of technical error was 0.157% (24/15,271). Following implementation of ADCs, the comparative overall incidence of technical error was 0.135% (10/7,379; P = .841). Following implementation of BC scanning, the comparative overall incidence of technical error was 0.137% (27/19,708; P = .729). Subsequent changes in product labeling and intensified staff training in the use of BC systems were associated with a decrease in the rate of technical error to 0.050% (13/26,200; P = .002). CONCLUSIONS Pharmacy ADCs and BC systems provide complementary effects that improve technical accuracy and reduce the incidence of potential medication dispensing errors if this technology is used with comprehensive personnel training.
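The abstract reports phase-wise p values without naming the test. As an illustration (the choice of test is an assumption), a Fisher exact comparison of the first and last phases yields a similarly small p value:

```python
from scipy.stats import fisher_exact

# Potential dispensing errors / total doses checked, per phase
# (counts taken from the abstract).
baseline = (24, 15271)        # conventional unit-dose system
final = (13, 26200)           # ADCs + BC verification + retraining

# 2x2 table: [errors, error-free] per phase.
table = [
    [baseline[0], baseline[1] - baseline[0]],
    [final[0], final[1] - final[0]],
]
odds_ratio, p = fisher_exact(table)
print(f"error rates: {24/15271:.3%} vs {13/26200:.3%}, p = {p:.4f}")
```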
|
32
|
Abstract
Embryo screening for aneuploidy (AS) is part of preimplantation genetic diagnostics (PGD) and is aimed at improving the efficiency of assisted reproduction. Several current technologies, including the well-established fluorescence in situ hybridization (FISH) technique, allow screening of all chromosomes in a single cell. This study evaluates a novel 24-chromosome FISH technique protocol (FISH-24). A total of 337 embryos were analyzed using the traditional 9-chromosome FISH technique (FISH-9), while 251 embryos were evaluated using the new FISH-24 technique. Embryos deemed nontransferable on Day 3 were cultured in vitro to Day 5 of development, then fixed and reanalyzed according to the technique allocated to each treatment cycle (107 embryos analyzed by FISH-9 and 111 by FISH-24). The global error rate (discrepancy between Day 3 and Day 5 results for a single embryo) was 2.8% after FISH-9 and 3.6% after FISH-24, with a p value of 0.95. Thus, we have established and validated a 24-chromosome FISH-based single-cell aneuploidy screening technique. The error rate of FISH-24 is independent of the number of chromosomes analyzed and equivalent to that of FISH-9, making the technique a useful tool for chromosome segregation studies and clinical use.
|
33
|
Can post-error dynamics explain sequential reaction time patterns? Front Psychol 2012; 3:213. [PMID: 22811671 PMCID: PMC3397377 DOI: 10.3389/fpsyg.2012.00213] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/19/2012] [Accepted: 06/08/2012] [Indexed: 11/13/2022] Open
Abstract
We investigate human error dynamics in sequential two-alternative choice tasks. When subjects repeatedly discriminate between two stimuli, their error rates and reaction times (RTs) systematically depend on prior sequences of stimuli. We analyze these sequential effects on RTs, separating error and correct responses, and identify a sequential RT tradeoff: a sequence of stimuli which yields a relatively fast RT on error trials will produce a relatively slow RT on correct trials and vice versa. We reanalyze previous data and acquire and analyze new data in a choice task with stimulus sequences generated by a first-order Markov process having unequal probabilities of repetitions and alternations. We then show that relationships among these stimulus sequences and the corresponding RTs for correct trials, error trials, and averaged over all trials are significantly influenced by the probability of alternations; these relationships have not been captured by previous models. Finally, we show that simple, sequential updates to the initial condition and thresholds of a pure drift diffusion model can account for the trends in RT for correct and error trials. Our results suggest that error-based parameter adjustments are critical to modeling sequential effects.
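The pure drift diffusion model the authors start from can be simulated as a noisy random walk between two decision thresholds; with an unbiased starting point it predicts equal correct and error RTs, which is why the sequential updates to the initial condition and thresholds matter. A minimal illustrative simulation (parameter values are my own, not the paper's fits):

```python
import math
import random

def ddm_trial(drift=0.3, threshold=1.0, start=0.0, dt=0.001, noise=1.0):
    """Simulate one drift-diffusion trial via Euler-Maruyama.

    Evidence x drifts toward the correct (+threshold) boundary;
    returns (correct, rt_seconds).
    """
    x, t = start, 0.0
    while abs(x) < threshold:
        x += drift * dt + noise * math.sqrt(dt) * random.gauss(0, 1)
        t += dt
    return (x >= threshold, t)

random.seed(0)
trials = [ddm_trial() for _ in range(2000)]
acc = sum(correct for correct, _ in trials) / len(trials)
mean_rt = sum(rt for _, rt in trials) / len(trials)
```

Shifting `start` toward one boundary (or lowering a threshold) after an error, as the authors' sequential updates do, makes responses toward that boundary faster and more likely, producing the kind of sequential RT tradeoff described above.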
|