1. A Comprehensive Survey of Statistical Approaches for Differential Expression Analysis in Single-Cell RNA Sequencing Studies. Genes (Basel) 2021; 12:1947. PMID: 34946896; PMCID: PMC8701051; DOI: 10.3390/genes12121947.
Abstract
Single-cell RNA sequencing (scRNA-seq) is a recent high-throughput sequencing technique for studying gene expression at the single-cell level. Differential expression (DE) analysis is a major downstream analysis of scRNA-seq data, and performing it in the presence of noise from different sources remains a key challenge. Earlier practice addressed this by borrowing methods from bulk RNA-seq, which test for non-zero differences in average gene expression across cell populations. Later, several methods specifically designed for scRNA-seq were developed. To provide guidance on choosing an appropriate tool or developing a new one, it is necessary to study the performance of DE analysis methods comprehensively. Here, we review and classify DE approaches adapted from bulk RNA-seq practice as well as those specifically designed for scRNA-seq. We also evaluate the performance of 19 widely used methods in terms of 13 performance metrics on 11 real scRNA-seq datasets. Our findings suggest that some bulk RNA-seq methods are quite competitive with the single-cell methods, and that performance depends on the underlying model, the DE test statistic(s), and data characteristics. Further, no method emerges as globally best under any individual performance criterion; however, the multi-criteria and combined-data analyses indicate that DECENT and EBSeq are the best options for DE analysis. The results also reveal similarities among the tested methods in their detection of common DE genes. Our evaluation provides guidelines for selecting the tool that performs best under particular experimental settings in scRNA-seq.
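A bulk-style DE test of the kind described above can be sketched as a per-gene Wilcoxon rank-sum comparison between two cell populations. The toy counts and the spiked-in effect below are invented for illustration; this is not one of the 19 benchmarked methods.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
# Toy count matrix: 100 genes x 50 cells per population;
# gene 0 is simulated as differentially expressed.
group_a = rng.poisson(2.0, size=(100, 50))
group_b = rng.poisson(2.0, size=(100, 50))
group_b[0] = rng.poisson(8.0, size=50)

def de_pvalues(a, b):
    """Two-sided Wilcoxon rank-sum p-value for each gene."""
    return np.array([
        mannwhitneyu(ga, gb, alternative="two-sided").pvalue
        for ga, gb in zip(a, b)
    ])

pvals = de_pvalues(group_a, group_b)
```

Single-cell methods additionally model dropout and overdispersion, which this mean-shift sketch ignores.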
2. There's an App for That: Development of an Application to Operationalize the Global Diet Quality Score. J Nutr 2021; 151:176S-184S. PMID: 34689193; PMCID: PMC8542098; DOI: 10.1093/jn/nxab196.
Abstract
BACKGROUND The Global Diet Quality Score (GDQS) is a simple, standardized metric appropriate for population-based measurement of diet quality globally. OBJECTIVES We aimed to operationalize data collection by modifying the quantity-of-consumption cutoffs originally developed for the GDQS food groups and to statistically evaluate the performance of the operationalized GDQS relative to the original GDQS against nutrient adequacy and noncommunicable disease (NCD)-related outcomes. METHODS The GDQS application uses a 24-h open recall to collect a full list of all foods consumed during the previous day or night and automatically classifies them into the corresponding GDQS food groups. Respondents use a set of 10 cubes in a range of predetermined sizes to determine whether the quantity consumed per GDQS food group was below, or equal to or above, food group-specific cutoffs established in grams. Because there are only 10 cubes but as many as 54 cutoffs for the GDQS food groups, the operationalized cutoffs differ slightly from the original GDQS cutoffs. RESULTS A secondary analysis using 5 cross-sectional datasets comparing the GDQS with the original and operationalized cutoffs showed that the operationalized GDQS remained strongly correlated with nutrient adequacy and was equally sensitive to anthropometric and other clinical measures of NCD risk. In a secondary analysis of a longitudinal cohort study of Mexican teachers, there were no differences between the 2 modalities: the beta coefficients per 1-SD change in the original and operationalized GDQS scores were nearly identical for weight gain (-0.37 and -0.36, respectively; P < 0.001 for linear trend in both models) and of the same clinical order of magnitude for waist circumference (-0.52 and -0.44, respectively; P < 0.001 for linear trend in both models).
CONCLUSION The operationalized GDQS cutoffs did not change the performance of the GDQS and therefore are recommended for use to collect GDQS data in the future.
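The cube-based classification step described above reduces to a threshold check per food group. A minimal sketch follows; the two cutoff values and group names are placeholders, not the actual GDQS cutoffs.

```python
# Hypothetical gram cutoffs for two GDQS food groups (illustrative only;
# the real application holds up to 54 food-group-specific cutoffs).
CUTOFFS_G = {
    "citrus_fruits": 30.0,
    "dark_green_leafy_vegetables": 25.0,
}

def classify_quantity(food_group, grams):
    """Return 'below' or 'at_or_above' the food-group-specific cutoff,
    mirroring the binary judgment respondents make with the cubes."""
    return "below" if grams < CUTOFFS_G[food_group] else "at_or_above"
```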
3. TRsandflies: A Web-Based Software for the Morphometric Identification of Sand Flies in Turkey. Journal of Medical Entomology 2021; 58:1149-1156. PMID: 33331881; DOI: 10.1093/jme/tjaa275.
Abstract
Sand flies are vectors of several diseases, most notably cutaneous and visceral leishmaniasis (CL and VL). Twenty-nine sand fly species have been identified in previous faunistic studies carried out in 40 provinces of Turkey, and 24 species belonging to the genus Phlebotomus (Ph.) (Diptera: Psychodidae) have been proven or reported as possible vectors. This study aimed to develop new software to support researchers' decision making in the identification of sand flies, using data from entomological surveys previously conducted in Turkey. The software, called TRsandflies, includes 35 text boxes for parameters measured on specimens caught in the above-mentioned surveys, together with 130 photographs and distribution maps for the 24 species. It was implemented in the C# language with a MySQL database and provides three forms (pages) that allow the user to compare specimens with known species. In species identification trials with three repetitions, morphometric data from all previously collected sand fly species were included, except for specimens of the subgenus Transphlebotomus Artemiev & Neronov, 1984. Running the morphometric measurements of predetermined specimens through the program yielded an accurate prediction rate of 86.66% for male specimens and 71.66% for female specimens. We conclude that this web-based software could play an important role in reducing the errors that may arise with conventional identification methods.
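The comparison of a specimen against known species can be sketched as a nearest-reference match over morphometric features. The species names below are real Turkish taxa, but the measurement values are hypothetical and far fewer than the software's 35 parameters.

```python
import math

# Hypothetical reference measurements in mm (illustrative only).
REFERENCE = {
    "Phlebotomus papatasi": {"wing_length": 1.90, "pharynx_length": 0.17},
    "Phlebotomus sergenti": {"wing_length": 1.60, "pharynx_length": 0.15},
}

def identify(specimen):
    """Return the reference species closest to the specimen in
    Euclidean distance over the specimen's measured features."""
    def dist(ref):
        return math.sqrt(sum((specimen[k] - ref[k]) ** 2 for k in specimen))
    return min(REFERENCE, key=lambda sp: dist(REFERENCE[sp]))
```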
4. A comparison of imaging software and conventional cell counting in determining melanocyte density in photodamaged control sample and melanoma in situ biopsies. J Cutan Pathol 2020; 47:675-680. PMID: 32159867; DOI: 10.1111/cup.13681.
Abstract
BACKGROUND Objective methods for distinguishing melanoma in situ (MIS) from photodamaged skin (PS) are needed to guide treatment in patients with melanocytic proliferations. Melanocyte density (MD) could serve as an objective histopathological criterion in difficult cases. Calculating MD via manual cell counts (MCC) on immunohistochemical (IHC)-stained slides has been previously published. However, the clinical application of this method is questionable, as quantifying MD via MCC in difficult cases is time-consuming, especially in high-volume practices. METHODS ImageJ is image-processing software that determines cell counts from scanned slide images. In this study, we compared MCC with ImageJ-calculated MD in microphthalmia transcription factor-IHC-stained MIS biopsies and control PS acquired from the same patients. RESULTS We found a statistically significant difference in MD between PS and MIS as measured by both MCC and ImageJ (P < 0.01). Additionally, no statistically significant difference was found when comparing MD measurements recorded by ImageJ vs those determined by MCC. CONCLUSION MD as determined by ImageJ strongly correlates with MD calculated by MCC. We propose the use of ImageJ as a time-efficient, objective, and reproducible tool to assess MD.
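The statistical comparison of the two counting methods can be sketched on toy paired data: a correlation to show agreement plus a paired nonparametric test for a systematic difference. All measurement values below are invented for the example.

```python
import numpy as np
from scipy.stats import pearsonr, wilcoxon

rng = np.random.default_rng(1)
# Toy paired melanocyte densities (cells per unit length) for 30 specimens;
# ImageJ values are simulated as nearly identical to the manual counts.
manual = rng.normal(15.0, 3.0, size=30)
imagej = manual + rng.normal(0.0, 0.5, size=30)

r, _ = pearsonr(manual, imagej)      # agreement between the two methods
stat, p = wilcoxon(manual, imagej)   # paired test for a systematic difference
```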
5. Development of a Deep Learning Model to Identify Lymph Node Metastasis on Magnetic Resonance Imaging in Patients With Cervical Cancer. JAMA Netw Open 2020; 3:e2011625. PMID: 32706384; PMCID: PMC7382006; DOI: 10.1001/jamanetworkopen.2020.11625.
Abstract
IMPORTANCE Accurate, noninvasive preoperative identification of lymph node metastasis in patients with cervical cancer can avoid unnecessary surgical intervention and benefit treatment planning. OBJECTIVE To develop a deep learning model using preoperative magnetic resonance imaging (MRI) for prediction of lymph node metastasis in cervical cancer. DESIGN, SETTING, AND PARTICIPANTS This diagnostic study developed an end-to-end deep learning model to identify lymph node metastasis in cervical cancer using MRI. A total of 894 patients with stage IB to IIB cervical cancer who underwent radical hysterectomy and pelvic lymphadenectomy were reviewed. All patients received pelvic MRI within 2 weeks before the operation, had no concurrent cancers, and received no preoperative treatment. To achieve the optimal model, the diagnostic value of 3 MRI sequences was compared, and the outcomes in the intratumoral and peritumoral regions were explored. To mine tumor information at both the image and clinicopathologic levels, a hybrid model was built, and its prognostic value was assessed by Kaplan-Meier analysis. The deep learning model and hybrid model were developed on a primary cohort of 338 patients (218 from Sun Yat-sen University Cancer Center, Guangzhou, China, between January 2011 and December 2017, and 120 from Henan Provincial People's Hospital, Zhengzhou, China, between December 2016 and June 2018). The models were then evaluated on an independent validation cohort of 141 patients from Yunnan Cancer Hospital, Kunming, China, between January 2011 and December 2017. MAIN OUTCOMES AND MEASURES The primary diagnostic outcome was lymph node metastasis status, with pathologic characteristics diagnosed by lymphadenectomy; it was assessed by receiver operating characteristic analysis (area under the curve [AUC]). The secondary, clinical outcome was survival, assessed by Kaplan-Meier analysis. RESULTS A total of 479 patients (mean [SD] age, 49.1 [9.7] years) fulfilled the eligibility criteria and were enrolled in the primary (n = 338) and validation (n = 141) cohorts. A total of 71 patients (21.0%) in the primary cohort and 32 (22.7%) in the validation cohort had lymph node metastasis confirmed by lymphadenectomy. Among the 3 image sequences, the deep learning model that used both intratumoral and peritumoral regions on contrast-enhanced T1-weighted imaging showed the best performance (AUC, 0.844; 95% CI, 0.780-0.907). These results were further improved in a hybrid model that combined tumor image information mined by the deep learning model with MRI-reported lymph node status (AUC, 0.933; 95% CI, 0.887-0.979). Moreover, the hybrid model was significantly associated with disease-free survival from cervical cancer (hazard ratio, 4.59; 95% CI, 2.04-10.31; P < .001). CONCLUSIONS AND RELEVANCE The findings of this study suggest that deep learning can be used as a preoperative noninvasive tool to diagnose lymph node metastasis in cervical cancer.
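The reported AUCs come from ROC analysis. The Mann-Whitney identity gives a compact way to compute an AUC from labels and model scores; the values below are toy data, not from the study.

```python
import numpy as np

def auc(labels, scores):
    """AUC as the probability that a random positive outscores a random
    negative (ties count half) -- the Mann-Whitney formulation of ROC AUC."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))
```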
6. Refinement and validation of the IDIOM score for predicting the risk of gastrointestinal cancer in iron deficiency anaemia. BMJ Open Gastroenterol 2020; 7:e000403. PMID: 32444424; PMCID: PMC7247388; DOI: 10.1136/bmjgast-2020-000403.
Abstract
OBJECTIVE To refine and validate a model for predicting the risk of gastrointestinal (GI) cancer in iron deficiency anaemia (IDA) and to develop an app to facilitate use in clinical practice. DESIGN Three elements: (1) analysis of a dataset of 2390 cases of IDA to validate the predictive value of age, sex, blood haemoglobin concentration (Hb), mean cell volume (MCV) and iron studies on the probability of underlying GI cancer; (2) a pilot study of the benefit of adding faecal immunochemical testing (FIT) into the model; and (3) development of an app based on the model. RESULTS Age, sex and Hb were all strong, independent predictors of the risk of GI cancer, with ORs (95% CI) of 1.05 per year (1.03 to 1.07, p<0.00001), 2.86 for men (2.03 to 4.06, p<0.00001) and 1.03 for each g/L reduction in Hb (1.01 to 1.04, p<0.0001) respectively. An association with MCV was also revealed, with an OR of 1.03 for each fl reduction (1.01 to 1.05, p<0.02). The model was confirmed to be robust by an internal validation exercise. In the pilot study of high-risk cases, FIT was also predictive of GI cancer (OR 6.6, 95% CI 1.6 to 51.8), but the sensitivity was low at 23.5% (95% CI 6.8% to 49.9%). An app based on the model was developed. CONCLUSION This predictive model may help rationalise the use of investigational resources in IDA, by fast-tracking high-risk cases and, with appropriate safeguards, avoiding invasive investigation altogether in those at ultra-low predicted risk.
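The reported ORs translate directly into log-odds coefficients of a logistic model, which is how an app like this one could turn predictors into a risk estimate. The intercept and the reference points for the Hb and MCV deficits below are hypothetical placeholders, since the abstract does not report them.

```python
import math

# Log-odds per unit, taken from the reported odds ratios.
B_AGE = math.log(1.05)    # per year of age
B_MALE = math.log(2.86)   # male sex
B_HB = math.log(1.03)     # per g/L reduction in Hb
B_MCV = math.log(1.03)    # per fl reduction in MCV
INTERCEPT = -6.0          # hypothetical; not reported in the abstract

def gi_cancer_risk(age_yr, male, hb_deficit_gl, mcv_deficit_fl):
    """Sketch of a predicted probability of underlying GI cancer in IDA."""
    logit = (INTERCEPT + B_AGE * age_yr + B_MALE * male
             + B_HB * hb_deficit_gl + B_MCV * mcv_deficit_fl)
    return 1.0 / (1.0 + math.exp(-logit))
```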
7. Power Analysis and Sample Size Planning in ANCOVA Designs. Psychometrika 2020; 85:101-120. PMID: 31823115; PMCID: PMC8225521; DOI: 10.1007/s11336-019-09692-3.
Abstract
The analysis of covariance (ANCOVA) has notably proven to be an effective tool in a broad range of scientific applications. Despite the well-documented literature about its principal uses and statistical properties, the corresponding power analysis for the general linear hypothesis tests of treatment differences remains a less discussed issue. The frequently recommended procedure is a direct application of the ANOVA formula in combination with a reduced degrees of freedom and a correlation-adjusted variance. This article aims to explicate the conceptual problems and practical limitations of the common method. An exact approach is proposed for power and sample size calculations in ANCOVA with random assignment and multinormal covariates. Both theoretical examination and numerical simulation are presented to justify the advantages of the suggested technique over the current formula. The improved solution is illustrated with an example regarding the comparative effectiveness of interventions. In order to facilitate the application of the described power and sample size calculations, accompanying computer programs are also presented.
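The "frequently recommended procedure" that the article critiques, i.e., the ANOVA power formula with degrees of freedom reduced by the number of covariates and a correlation-adjusted noncentrality, can be sketched as below. This is the approximation under discussion, not the exact method the paper proposes.

```python
from scipy.stats import f as f_dist, ncf

def ancova_power_approx(n_per_group, k_groups, n_cov, effect_f, rho, alpha=0.05):
    """Approximate ANCOVA power: ANOVA formula with error df reduced by the
    number of covariates and variance shrunk by (1 - rho**2), where rho is
    the covariate-outcome correlation and effect_f is Cohen's f."""
    n_total = n_per_group * k_groups
    df1 = k_groups - 1
    df2 = n_total - k_groups - n_cov
    noncentrality = n_total * effect_f ** 2 / (1.0 - rho ** 2)
    critical = f_dist.ppf(1.0 - alpha, df1, df2)
    return 1.0 - ncf.cdf(critical, df1, df2, noncentrality)
```

As expected, power rises with sample size and with the covariate correlation.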
8. An Implementation and Visualization of the Tree-Based Scan Statistic for Safety Event Monitoring in Longitudinal Electronic Health Data. Drug Saf 2020; 42:727-741. PMID: 30617498; DOI: 10.1007/s40264-018-00784-0.
Abstract
INTRODUCTION Longitudinal electronic healthcare data hold great potential for drug safety surveillance. The tree-based scan statistic (TBSS), as implemented in the TreeScan® software, allows hypothesis-free signal detection in longitudinal data by grouping safety events according to branching, hierarchical data coding systems and then identifying signals of disproportionate recording (SDRs) among individual events or event groups. OBJECTIVE The objective of this analysis was to identify and visualize SDRs with the TBSS in historical data from patients using two antifungal drugs, itraconazole or terbinafine. By examining patients who used either drug, we provide a conceptual replication of a previous TBSS analysis, varying methodological choices and using a data source not previously used with the TBSS, the Optum Clinformatics™ claims database. With this analysis, we aimed to test a parsimonious design that could be the basis of a broadly applicable method for multiple drug and safety event pairs. METHODS The TBSS analysis examined incident events and any itraconazole or terbinafine use among US-based patients from 2002 through 2007. Event frequencies before and after the first day of drug exposure were compared over 14- and 56-day observation periods in a Bernoulli model with a self-controlled design. Safety events were classified into a hierarchical tree structure using the Clinical Classifications Software (CCS), which maps International Classification of Diseases, 9th Revision (ICD-9) codes to 879 diagnostic groups. Using the TBSS, the log likelihood ratios of observed versus expected events in all groups along the CCS hierarchy were compared, and groups of events occurring at disproportionately high frequencies were identified as potential SDRs; p-values for the potential SDRs were estimated with Monte Carlo permutation-based methods.
Output from TreeScan® was visualized and plotted as a network which followed the CCS tree structure. RESULTS Terbinafine use (n = 223,968) was associated with SDRs for diseases of the circulatory system (14- and 56-day p = 0.001) and heart (14-day p = 0.026 and 56-day p = 0.001) as well as coronary atherosclerosis and other heart disease (14-day p = 0.003 and 56-day p = 0.004). For itraconazole use (n = 36,025), the TBSS identified SDRs for coronary atherosclerosis and other heart disease (p = 0.002) and complications of an implanted or grafted device (14-day p = 0.001 and 56-day p < 0.05). Use of both drugs was associated with SDRs for diseases of the digestive system at 14 days (p < 0.05) and this SDR had been observed among terbinafine users in a previous TBSS analysis with a different data source. The TreeScan® visualization facilitated the identification of the atherosclerosis and other heart disease SDRs as well as highlighting the consistency of the SDR for diseases of the digestive system across drugs and data sources. CONCLUSION With the TBSS, we identified potential SDRs related to the circulatory system that may reflect the cardiac risk that was described in the itraconazole product label. SDRs for diseases of the digestive system among terbinafine users were also reported in a previous signal detection analysis, although other SDRs from the previous publications were not replicated. The TBSS visualizations aided in the understanding and interpretation of the TBSS output, including the comparisons to the previous publications. In this conceptual replication, differences in the results observed in our analysis and the previous analyses could be attributable to variation in modeling and design choices as well as factors that were intrinsic to the underlying data sources. 
The broad consistency, but far from perfect concordance, of our results with the known safety profile of these antifungals, including the risks from the itraconazole product label, supports the rationale for continued investigations of signal detection methods across differing data sources and populations.
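For the self-controlled Bernoulli model described above, the per-node test statistic is a log likelihood ratio comparing the observed fraction of events in the exposed window with its null expectation. A minimal sketch follows; a null probability p0 = 0.5 corresponds to equal-length pre- and post-exposure windows.

```python
import math

def bernoulli_llr(c, n, p0):
    """TBSS Bernoulli log likelihood ratio for one tree node: c of n events
    fell in the exposed window, against null probability p0. Returns 0 unless
    the observed fraction exceeds the expectation (one-sided signal)."""
    if n == 0 or c / n <= p0:
        return 0.0
    q = c / n
    llr = c * math.log(q / p0)
    if c < n:
        llr += (n - c) * math.log((1.0 - q) / (1.0 - p0))
    return llr
```

TreeScan® evaluates this statistic at every node of the coding tree and calibrates p-values by Monte Carlo permutation; this sketch covers only the per-node statistic.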
9. Bayesian spectrum deconvolution including uncertainties and model selection: application to X-ray emission data using WinBUGS. Radiation Protection Dosimetry 2019; 185:157-167. PMID: 30624720; DOI: 10.1093/rpd/ncy286.
Abstract
Spectrum deconvolution is an important task in ionizing-radiation measurements because pulse-height spectra, and in general the data from spectrometers or other measuring instruments, are determined by the convolution of the response function with the fluence spectrum. The method presented here for obtaining fluence spectra from measurements applies Bayesian parameter estimation to the deconvolution of X-ray emission data. The problem of choosing the optimal model among several candidates is also considered, as well as an approach for including contributions from various sources of uncertainty, both correlated and uncorrelated. The application is carried out using the Bayesian software WinBUGS.
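The forward model is a convolution: measured counts = response matrix x fluence spectrum. The sketch below simulates that model and unfolds it with non-negative least squares as a simple point-estimate stand-in; the paper instead samples the full posterior in WinBUGS, which also propagates uncertainties and supports model selection. All shapes and widths here are invented.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
n = 40
bins = np.arange(n)

# Toy Gaussian response: each true-energy bin smears into nearby measured bins.
R = np.exp(-0.5 * ((bins[:, None] - bins[None, :]) / 2.0) ** 2)
R /= R.sum(axis=0)

true_fluence = np.exp(-0.5 * ((bins - 25.0) / 4.0) ** 2)
measured = R @ true_fluence + rng.normal(0.0, 1e-3, n)

# Unfold with a non-negativity constraint (point estimate only).
fluence_hat, resid_norm = nnls(R, measured)
```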
10. Assessing a novel way to measure step count while walking using a custom mobile phone application. PLoS One 2018; 13:e0206828. PMID: 30399162; PMCID: PMC6219786; DOI: 10.1371/journal.pone.0206828.
Abstract
Introduction Walking speed has been associated with many clinical outcomes (e.g., frailty, mortality, and joint replacement need). Accurately measuring walking speed (stride length x step count/time) typically requires significant clinician/staff time or a gait lab with specialized equipment (i.e., electronic timers or motion capture). In the present study, our goal was to measure "step count" via smartphones through novel software and to compare it with the step tracking software that comes standard with iOS and Android smartphones, as a first step toward walking speed measurement. Methods Separate calibration and validation data collections were performed. Individuals in the calibration collection (n = 5) walked 20 m at normal and slow speed (<1.0 m/s), and appropriate settings for the novel mobile application were chosen to measure step count. Individuals in the validation collection (n = 52) walked 6 m, 10 m, and 20 m at normal and slow walking speeds. We compared the step difference (absolute difference) from observed step counts for the native step tracking software and for our novel software. We used generalized estimating equation-adjusted (participant-level) negative binomial regression models of absolute step difference from observed step counts to determine optimal settings (calibration) and subsequently to gauge the performance of the shake algorithm settings and the native step tracking software across distances and speeds (validation). Results For iOS/iPhone 6, when compared with the observed step count, the shake service (software-driven approach) significantly outperformed the embedded native step tracking software across all distances at slow speed, and at short distance (6 m) at normal speed. On the Android phone, the shake service outperformed the native step tracking software at slow speed at 6 and 20 meters, while it surpassed the native step tracking software only at 20 meters at normal speed.
Discussion Our software-based approach outperformed native step tracking software across various speeds and distances and carries the advantage of adjustable measurement parameters that can be further optimized for specific medical conditions. Such software applications will provide an effective way to capture standardized data across multiple commercial smartphone devices, facilitating the future capture of walking speed and other clinically important performance parameters that will influence clinical and home care in the era of value-based care.
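One common software-driven way to count steps, shown here as a stand-in for the app's "shake" algorithm (whose actual parameters are not given), is peak detection on the acceleration magnitude with a prominence threshold and a minimum gap between steps:

```python
import numpy as np
from scipy.signal import find_peaks

def count_steps(accel_magnitude, fs=50.0, min_prominence=0.8, max_cadence_hz=3.0):
    """Count steps as prominent peaks in the acceleration-magnitude signal,
    enforcing a minimum sample gap implied by a maximum plausible cadence."""
    min_gap = int(fs / max_cadence_hz)
    peaks, _ = find_peaks(accel_magnitude,
                          prominence=min_prominence, distance=min_gap)
    return len(peaks)

# Synthetic 10 s walk: 2 steps/s (20 steps) sampled at 50 Hz, plus noise.
fs = 50.0
t = np.arange(0, 10, 1.0 / fs)
rng = np.random.default_rng(3)
signal = 1.0 + np.sin(2 * np.pi * 2.0 * t) + 0.05 * rng.normal(size=t.size)
steps = count_steps(signal, fs=fs)
```

The adjustable prominence and cadence parameters mirror the kind of tunable settings the calibration phase selects.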
11.
Abstract
In recent years, the explosion of genomic data and bioinformatic tools has been accompanied by a growing conversation around reproducibility of results and usability of software. However, the actual state of the body of bioinformatics software remains largely unknown. The purpose of this paper is to investigate the state of source code in the bioinformatics community, specifically looking at relationships between code properties, development activity, developer communities, and software impact. To investigate these issues, we curated a list of 1,720 bioinformatics repositories on GitHub through their mention in peer-reviewed bioinformatics articles. Additionally, we included 23 high-profile repositories identified by their popularity in an online bioinformatics forum. We analyzed repository metadata, source code, development activity, and team dynamics using data made available publicly through the GitHub API, as well as article metadata. We found key relationships within our dataset, including: certain scientific topics are associated with more active code development and higher community interest in the repository; most of the code in the main dataset is written in dynamically typed languages, while most of the code in the high-profile set is statically typed; developer team size is associated with community engagement and high-profile repositories have larger teams; the proportion of female contributors decreases for high-profile repositories and with seniority level in author lists; and, multiple measures of project impact are associated with the simple variable of whether the code was modified at all after paper publication. In addition to providing the first large-scale analysis of bioinformatics code to our knowledge, our work will enable future analysis through publicly available data, code, and methods. Code to generate the dataset and reproduce the analysis is provided under the MIT license at https://github.com/pamelarussell/github-bioinformatics. 
Data are available at https://doi.org/10.17605/OSF.IO/UWHX8.
12. Sequence diagram refactoring using single and hybridized algorithms. PLoS One 2018; 13:e0202629. PMID: 30133518; PMCID: PMC6105025; DOI: 10.1371/journal.pone.0202629.
Abstract
Data mining and search-based algorithms have been applied to a wide range of problems because of their power and performance, and several studies have used them for refactoring. In this paper, we show how search-based algorithms can be used for sequence diagram refactoring. We also show how hybridizing the K-means and simulated annealing (SA) algorithms lets the two aid each other in solving this problem. Results show that search-based algorithms can refactor sequence diagrams successfully on both small and large case studies. In addition, the hybridized algorithm obtains good results on the selected quality metrics. Detailed analysis of the experiments reveals that the limitations of SA can be addressed by hybridizing it with the K-means algorithm.
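The SA half of the hybrid can be sketched as the standard annealing loop: accept worse solutions with probability exp(-delta/T) under a cooling schedule. The toy one-dimensional objective below stands in for a sequence-diagram quality metric; it is not the paper's actual fitness function.

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.95,
                        iters=2000, seed=4):
    """Minimize cost(x) by simulated annealing; return the best state found."""
    rng = random.Random(seed)
    x, c = x0, cost(x0)
    best, best_c = x, c
    t = t0
    for _ in range(iters):
        y = neighbor(x, rng)
        cy = cost(y)
        # Always accept improvements; accept worse moves with prob exp(-dE/T).
        if cy < c or rng.random() < math.exp(-(cy - c) / max(t, 1e-12)):
            x, c = y, cy
            if c < best_c:
                best, best_c = x, c
        t *= cooling
    return best, best_c

# Toy stand-in for a refactoring-quality objective: minimum at x = 3.
best, best_cost = simulated_annealing(lambda x: (x - 3.0) ** 2,
                                      lambda x, rng: x + rng.uniform(-0.5, 0.5),
                                      x0=10.0)
```

In the hybrid scheme, K-means would first group related model elements, and SA would then refine the grouping under the chosen quality metrics.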
13. Survey of Smartphone Use among Anaesthetists In Saolta University Health Care Group Midlands Setting. Irish Medical Journal 2018; 111:709. PMID: 30376227.
Abstract
BACKGROUND The use of smartphones in healthcare settings has become widespread. Although smartphone use offers several benefits for anaesthetists, it also has the potential to negatively affect their performance and hence patient care. OBJECTIVES To investigate anaesthetists' ownership and patterns of smartphone usage and to identify concerns and opinions about the potentially harmful effects of their use. METHODS We emailed an online survey to all anaesthetists working in the Saolta University Health Care Group. RESULTS A high proportion of anaesthetists (61.1%) owned 1-5 medical-related applications, and drug and medical references were the most commonly used category of application. DISCUSSION A growing number of useful medical-related apps have the potential to improve performance, and new ones continue to be developed. The low level of awareness of smartphone use policies indicates the need to raise awareness and develop guidelines that encourage the safe use of smartphones.
14. Population Pharmacokinetics and Exploratory Pharmacodynamics of Lorazepam in Pediatric Status Epilepticus. Clin Pharmacokinet 2017; 56:941-951. PMID: 27943220; PMCID: PMC5466505; DOI: 10.1007/s40262-016-0486-0.
Abstract
BACKGROUND Lorazepam is one of the preferred agents used for intravenous treatment of status epilepticus (SE). We combined data from two pediatric clinical trials to characterize the population pharmacokinetics of intravenous lorazepam in infants and children aged 3 months to 17 years with active SE or a history of SE. METHODS We developed a population pharmacokinetic model for lorazepam using the NONMEM software. We then assessed exploratory exposure-response relationships using the overall efficacy and safety study endpoints, and performed dosing simulations. RESULTS A total of 145 patients contributed 439 pharmacokinetic samples. The median (range) age and dose were 5.4 years (0.3-17.8) and 0.10 mg/kg (0.02-0.18), respectively. A two-compartment pharmacokinetic model with allometric scaling described the data well. In addition to total body weight (WT), younger age was associated with slightly higher weight-normalized clearance (CL). The following relationships characterized the typical values for the central compartment volume (V1), CL, peripheral compartment volume (V2), and intercompartmental CL (Q), using individual subject WT (kg) and age (years): V1 (L) = 0.879 * WT; CL (L/h) = 0.115 * (Age/4.7)^0.133 * WT^0.75; V2 (L) = 0.542 * V1; Q (L/h) = 1.45 * WT^0.75. No pharmacokinetic parameters were associated with clinical outcomes. Simulations suggest uniform pediatric dosing (0.1 mg/kg, to a maximum of 4 mg) can be used to achieve concentrations of 50-100 ng/mL in children with SE, which have been previously associated with effective seizure control. CONCLUSIONS The population pharmacokinetics of lorazepam were successfully described using a sparse sampling approach and a two-compartment model in pediatric patients with active SE.
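The covariate model above, with its exponents written out, can be coded directly; the weight and age used below are example inputs only, not study data.

```python
def typical_pk_params(weight_kg, age_yr):
    """Typical-value lorazepam PK parameters from the reported covariate
    model: allometric weight scaling plus a small age effect on CL."""
    v1 = 0.879 * weight_kg                                    # central volume, L
    cl = 0.115 * (age_yr / 4.7) ** 0.133 * weight_kg ** 0.75  # clearance, L/h
    v2 = 0.542 * v1                                           # peripheral volume, L
    q = 1.45 * weight_kg ** 0.75                              # intercompartmental CL, L/h
    return {"V1": v1, "CL": cl, "V2": v2, "Q": q}

params = typical_pk_params(weight_kg=20.0, age_yr=5.4)
```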
15. How can GPs drive software changes to improve healthcare for Aboriginal and Torres Strait Islander peoples? Australian Family Physician 2017; 46:249-253. PMID: 28376579.
Abstract
BACKGROUND Changes to the software used in general practice could improve the collection of the Aboriginal and Torres Strait Islander status of all patients, and boost access to healthcare measures specifically for Aboriginal and Torres Strait Islander peoples provided directly or indirectly by general practitioners (GPs). OBJECTIVE Despite longstanding calls for improvements to general practice software to better support Aboriginal and Torres Strait Islander health, little change has been made. The aim of this article is to promote software improvements by identifying desirable software attributes and encouraging GPs to promote their adoption. DISCUSSION Establishing strong links between collecting Aboriginal and Torres Strait Islander status, clinical decision supports, and uptake of GP-mediated health measures specifically for Aboriginal and Torres Strait Islander peoples - and embedding these links in GP software - is a long overdue reform. In the absence of government initiatives in this area, GPs are best placed to advocate for software changes, using the model described here as a starting point for action.
|
16
|
Abstract
Tests of self-control theory have examined a substantial number of criminal behaviors, but no study has examined the correlation of low self-control with software piracy. Using data collected from 302 university students, this study examined the correlation of low self-control with software piracy and the moderating role of associating with deviant peers in this correlation. Low self-control correlated with software piracy more strongly for students with high associations with deviant peers than for students with low associations with deviant peers. Analysis also indicated that the links between software piracy and both lack of moral attitude and favorable attitudes toward piracy differed across levels of association with deviant peers.
|
17
|
Learning management system and e-learning tools: an experience of medical students' usage and expectations. INTERNATIONAL JOURNAL OF MEDICAL EDUCATION 2016; 7:267-73. [PMID: 27544782 PMCID: PMC5018353 DOI: 10.5116/ijme.57a5.f0f5] [Citation(s) in RCA: 33] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/25/2016] [Accepted: 08/06/2016] [Indexed: 05/22/2023]
Abstract
OBJECTIVES To investigate medical students' utilization of and problems with a learning management system and its e-learning tools, as well as their expectations for future developments. METHODS A single-center online survey was carried out to investigate medical students' (n = 505) usage and perceptions of the learning management system Blackboard and the e-learning tools provided. Data were collected with a standardized questionnaire consisting of 70 items and analyzed by quantitative and qualitative methods. RESULTS The participants valued lecture notes (73.7%) and Wikipedia (74%) as their most important online sources for knowledge acquisition. Missing integration of e-learning into teaching was seen as the major pitfall (58.7%). The learning management system was mostly used for study information (68.3%), preparation of exams (63.3%) and lessons (54.5%). Clarity (98.3%), teaching-related contexts (92.5%) and ease of use of e-learning offers (92.5%) were rated highest. Interactivity was most important in free-text comments (n = 123). CONCLUSIONS The contents of a learning management system should support efficient learning. Interactivity of tools and their conceptual integration into face-to-face teaching are important for students. The learning management system was especially important for organizational purposes and the provision of learning materials. Teachers should be aware that free online sources such as Wikipedia enjoy high approval as sources of knowledge acquisition. This study provides an empirical basis for medical schools and teachers to improve their offerings in the field of digital learning for their students.
|
18
|
A Systematic Review of Omaha System Literature in Turkey. Stud Health Technol Inform 2016; 225:633-634. [PMID: 27332286] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
|
19
|
The Validity of Using E4D Compare's "% Comparison" to Assess Crown Preparations in Preclinical Dental Education. J Dent Educ 2015; 79:1445-1451. [PMID: 26632299] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
When a dental school is deciding which technology to introduce into a curriculum, it is important to identify the educational goals for the system. The authors' primary goal for the use of a computer-aided resource was to offer students another way to assess their performance, to enhance their learning, and to potentially decrease their learning curve in the preclinical environment prior to using the technique in clinical patient care. The aim of this study was to examine the validity of the "% Comparison" numbers derived from the E4D Compare software program. Three practical examinations were administered to a class of 82 students at one U.S. dental school over a six-week period. The grading of the practical examinations was performed with individual faculty members being responsible for evaluating specific aspects of each preparation. A digital image of each student's practical examination tooth was then obtained and compared to the digital image of an ideal preparation. The preparations were compared, and the "% Comparison" was recorded at five tolerance levels. Spearman's correlation coefficient (SCC) was used to measure the agreement in rankings between the faculty scores on practical exams 1-3 and the scores obtained using E4D Compare at the different tolerance levels. The SCC values for practical exams 2 and 3 were all between 0.2 and 0.4; for practical exam 1, the SCC values ranged from 0.47 to 0.56. There was no correlation between the faculty scores and the numbers given by the "% Comparison" of the software.
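The agreement measure used in this study, Spearman's correlation between faculty scores and "% Comparison" values, is just the Pearson correlation of ranks. A generic, tie-aware sketch in Python (this is a standard rank-correlation helper, not the study's actual analysis code):

```python
def spearman_rho(x, y):
    """Spearman's rho: Pearson correlation of the ranks,
    with average ranks assigned to tied values."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
                j += 1                     # extend the block of tied values
            avg = (i + j) / 2 + 1          # average 1-based rank for the tie block
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

With scores like those reported, values near 0.2-0.4 indicate weak rank agreement between graders and software.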
|
20
|
Technology Use for Diabetes Problem Solving in Adolescents with Type 1 Diabetes: Relationship to Glycemic Control. Diabetes Technol Ther 2015; 17:449-54. [PMID: 25826706 PMCID: PMC4504438 DOI: 10.1089/dia.2014.0422] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 01/12/2023]
Abstract
BACKGROUND This study examines technology use for problem solving in diabetes and its relationship to hemoglobin A1C (A1C). SUBJECTS AND METHODS A sample of 112 adolescents with type 1 diabetes completed measures assessing use of technologies for diabetes problem solving, including mobile applications, social technologies, and glucose software. Hierarchical regression was performed to identify the contribution of a new nine-item Technology Use for Problem Solving in Type 1 Diabetes (TUPS) scale to A1C, considering known clinical contributors to A1C. RESULTS Mean age for the sample was 14.5 (SD 1.7) years, mean A1C was 8.9% (SD 1.8%), 50% were female, and diabetes duration was 5.5 (SD 3.5) years. Cronbach's α reliability for TUPS was 0.78. In regression analyses, variables significantly associated with A1C were socioeconomic status (β = -0.26, P < 0.01), the Diabetes Adolescent Problem Solving Questionnaire (β = -0.26, P = 0.01), and TUPS (β = 0.26, P = 0.01). Aside from the Diabetes Self-Care Inventory--Revised, each block added significantly to the model R². The final model R² was 0.22 for modeling A1C (P < 0.001). CONCLUSIONS Results indicate a counterintuitive relationship between higher use of technologies for problem solving and higher A1C. Adolescents with poorer glycemic control may use technology in a reactive, as opposed to preventive, manner. Better understanding of the nature of technology use for self-management over time is needed to guide the development of technology-mediated problem solving tools for youth with type 1 diabetes.
|
21
|
Low cost, high performance processing of single particle cryo-electron microscopy data in the cloud. eLife 2015; 4:e06664. [PMID: 25955969 PMCID: PMC4440898 DOI: 10.7554/elife.06664] [Citation(s) in RCA: 30] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2015] [Accepted: 05/01/2015] [Indexed: 01/27/2023] Open
Abstract
The advent of a new generation of electron microscopes and direct electron detectors has realized the potential of single particle cryo-electron microscopy (cryo-EM) as a technique to generate high-resolution structures. Calculating these structures requires high performance computing clusters, a resource that may be limiting to many likely cryo-EM users. To address this limitation and facilitate the spread of cryo-EM, we developed a publicly available 'off-the-shelf' computing environment on Amazon's elastic cloud computing infrastructure. This environment provides users with single particle cryo-EM software packages and the ability to create computing clusters with 16-480+ CPUs. We tested our computing environment using a publicly available 80S yeast ribosome dataset and estimate that laboratories could determine high-resolution cryo-EM structures for $50 to $1500 per structure within a timeframe comparable to local clusters. Our analysis shows that Amazon's cloud computing environment may offer a viable computing environment for cryo-EM.
|
22
|
Use of Self-Service Query Tools Varies by Experience and Research Knowledge. Stud Health Technol Inform 2015; 216:1023. [PMID: 26262323 PMCID: PMC4684252] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
The lack of understanding of user experience with self-service query tools is a barrier to designing effective query tools and motivated this study. User actions were documented and transformed into networks of actions for qualitative analysis. Proficient use of self-service query tools requires significant technical experience. To decrease the user learning curve, additional user education is necessary for novice users.
|
23
|
Quantifying the Activities of Self-quantifiers: Management of Data, Time and Health. Stud Health Technol Inform 2015; 216:333-337. [PMID: 26262066] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
Current self-quantification systems (SQS) are limited in their ability to support the acquisition of health-related information essential for individuals to make informed decisions based on their health status. They do not offer services such as data handling and data aggregation in a single place, and using multiple types of tools for this purpose complicates data and health self-management for self-quantifiers. An online survey was used to elicit information from self-quantifiers about the methods they used to undertake key activities related to health self-management. This paper provides empirical evidence about self-quantifiers' time spent using different data collection, data handling, data analysis, and data sharing tools and draws implications for health self-management activities.
|
24
|
Dental radiography in New Zealand: digital versus film. THE NEW ZEALAND DENTAL JOURNAL 2013; 109:107-114. [PMID: 24027973] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Subscribe] [Scholar Register] [Indexed: 06/02/2023]
Abstract
UNLABELLED Digital x-ray systems offer advantages over conventional film systems, yet many dentists have not adopted digital technology. OBJECTIVES To assess New Zealand dental practitioners' use of--and preferences for--dental radiography systems. DESIGN Cross-sectional survey. SETTING General and specialist dental practice. PARTICIPANTS AND METHODS Postal questionnaire survey of a sample of 770 dentists (520 randomly selected general dental practitioners and all 250 specialists) listed in the 2012 NZ Dental Council Register. MAIN OUTCOME MEASURES Type of radiography systems used by dentists. Dentists' experiences and opinions of conventional film and digital radiography. RESULTS The participation rate was 55.2%. Digital radiography systems were used by 58.0% of participating dentists, most commonly among those aged 31-40 years. Users of digital radiography tended to report greater satisfaction with their radiography systems than users of conventional film. Two-thirds of film users were interested in switching to digital radiography in the near future. Reasons given by conventional film users for not using digital radiography included cost, difficulty in integrating with other software systems, concern about potential technical errors, and the size and nature of the intra-oral sensors. CONCLUSION Many dental practitioners have still not adopted digital radiography, yet its users are more satisfied with their radiography systems than are conventional film users. The latter may find changing to a digital system to be satisfying and rewarding.
MESH Headings
- Adult
- Attitude of Health Personnel
- Computer Systems/statistics & numerical data
- Costs and Cost Analysis
- Cross-Sectional Studies
- Dentists/psychology
- Electronic Health Records/statistics & numerical data
- Equipment Design
- Female
- General Practice, Dental/statistics & numerical data
- Humans
- Internet/statistics & numerical data
- Male
- Middle Aged
- New Zealand
- Personal Satisfaction
- Practice Patterns, Dentists'/statistics & numerical data
- Radiography, Dental/statistics & numerical data
- Radiography, Dental, Digital/economics
- Radiography, Dental, Digital/instrumentation
- Radiography, Dental, Digital/statistics & numerical data
- Software/statistics & numerical data
- Specialties, Dental/statistics & numerical data
- X-Ray Film/statistics & numerical data
|
25
|
Ultrasound imaging on picture archiving and communication systems: are radiologists satisfied? JOURNAL OF ULTRASOUND IN MEDICINE : OFFICIAL JOURNAL OF THE AMERICAN INSTITUTE OF ULTRASOUND IN MEDICINE 2013; 32:1377-1384. [PMID: 23887946 DOI: 10.7863/ultra.32.8.1377] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/02/2023]
Abstract
OBJECTIVES To evaluate whether picture archiving and communication systems (PACS) adequately satisfy radiologists' needs in ultrasound (US) imaging and which PACS functions may be inadequately implemented for handling US diagnosis. METHODS An electronic survey was sent to the membership of the Society of Radiologists in Ultrasound asking them to rate their PACS experience for different modalities, judge the quality of various PACS functions having an impact on US practice and diagnosis, indicate if they felt a need for US-related PACS functions to be implemented or improved, and rate PACS-related improvements for different components of their US practice. RESULTS Of the 161 respondents, 112 (70%) used a general radiology PACS. Of these respondents, only 53.2% gave a high rating to the US experience in PACS, significantly lower (P < .0001) than for computed tomography (85.2%), magnetic resonance imaging (84.4%), and radiography (83.2%). The functionality of US-specific display, image-processing, and data management PACS processes were graded significantly lower than basic PACS display functions. Only 0.9% of respondents highly rated PACS handling of 3-dimensional US volume data, whereas 92% highly rated the quality of the black-and-white US image display (P < .0001). Most respondents would like most of these US-specific functions implemented or improved, and most respondents stated that PACS has improved their US practice in different ways, although the contribution in more complex image analysis is lagging. CONCLUSIONS Radiologists with a special interest in US believe that the PACS experience for US is lacking. This research helps identify those specific tasks that may further improve work efficiency and diagnostic confidence.
|
26
|
Computer-aided auditing of prescription drug claims. Health Care Manag Sci 2013; 17:203-14. [PMID: 23821344 DOI: 10.1007/s10729-013-9247-x] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2013] [Accepted: 06/18/2013] [Indexed: 11/26/2022]
Abstract
We describe a methodology for identifying and ranking candidate audit targets from a database of prescription drug claims. The relevant audit targets may include various entities such as prescribers, patients and pharmacies, who exhibit certain statistical behavior indicative of potential fraud and abuse over the prescription claims during a specified period of interest. Our overall approach is consistent with related work in statistical methods for detection of fraud and abuse, but has a relative emphasis on three specific aspects: first, based on the assessment of domain experts, certain focus areas are selected and data elements pertinent to the audit analysis in each focus area are identified; second, specialized statistical models are developed to characterize the normalized baseline behavior in each focus area; and third, statistical hypothesis testing is used to identify entities that diverge significantly from their expected behavior according to the relevant baseline model. The application of this overall methodology to a prescription claims database from a large health plan is considered in detail.
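The third step described above, flagging entities that diverge significantly from their expected baseline behavior, can be sketched with a one-sided Poisson tail test. This is a minimal illustration under an assumed simple Poisson baseline; the paper's specialized per-focus-area models are richer, and the entity and variable names here are hypothetical:

```python
import math

def poisson_sf(k, lam):
    # P(X >= k) for X ~ Poisson(lam), via the complement of the cdf
    return 1.0 - sum(math.exp(-lam) * lam ** i / math.factorial(i)
                     for i in range(k))

def rank_audit_targets(observed, expected, alpha=0.01):
    """observed: entity -> observed claim count; expected: entity -> baseline
    expectation. Flags entities whose counts are improbably high under the
    Poisson baseline and ranks them by tail p-value (smallest first)."""
    flagged = []
    for ent, obs in observed.items():
        p = poisson_sf(obs, expected[ent])
        if p < alpha:
            flagged.append((ent, p))
    return sorted(flagged, key=lambda t: t[1])
```

An entity with 20 claims against a baseline expectation of 5 would be flagged; one matching its expectation would not.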
|
27
|
Discovering online learning barriers: survey of health educational stakeholders in dentistry. EUROPEAN JOURNAL OF DENTAL EDUCATION : OFFICIAL JOURNAL OF THE ASSOCIATION FOR DENTAL EDUCATION IN EUROPE 2013; 17:e126-e135. [PMID: 23279400 DOI: 10.1111/j.1600-0579.2012.00772.x] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 06/20/2012] [Indexed: 06/01/2023]
Abstract
OBJECTIVES Given the exponential explosion of online learning tools and the challenge to harness their influence in dental education, there is a need to determine the current status of online learning tools being adopted at dental schools, the barriers that thwart the potential of adopting these and to capture this information from each of the various stakeholders involved in dental online learning (administrators, instructors, students and software/hardware technicians). The aims of this exploratory study are threefold: first, to understand which online learning tools are currently being adopted at dental schools; second, to determine the barriers in adopting online learning in dental education; and third, to identify a way of better preparing stakeholders in their quest to encourage others at their institutions to adopt online learning tools. METHODS Seventy-two participants representing eight countries and 13 stakeholder groups in dentistry were invited to complete the online Survey of Barriers in Online Learning Education in Health Professional Schools. The survey was created for this study but generic to all healthcare education domains. Twenty participants completed the survey. RESULTS Results demonstrated that many online learning tools are being successfully adopted at dental schools, but computer-based assessment tools are the least successful. Added to this are challenges of support and resources for online learning tools. Participants offered suggestions for creating a blended (online and face-to-face) tutorial aimed at assisting stakeholders to help their dental schools adopt online learning tools. CONCLUSION The information from this study is essential in helping us to better prepare the next generation of dental providers in terms of adopting online learning tools.
This paper will not only provide strategies of how best to proceed, but also inspire participants with the necessary tools to move forward as they assist their clients with adopting and sustaining online learning tools and models.
|
28
|
Detecting software failures in the MAUDE database: a preliminary analysis. Stud Health Technol Inform 2013; 192:1098. [PMID: 23920872] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/02/2023]
Abstract
The MAUDE (Manufacturer and User Facility Device Experience) database was analyzed to identify challenges in detecting software failures causing medical device (MD) adverse events.
|
29
|
Personal health records are designed for people like us. Stud Health Technol Inform 2013; 192:1037. [PMID: 23920811] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/02/2023]
Abstract
Current approaches to designing, implementing and evaluating personal health record systems reflect the attributes and assumptions of well-educated and well-to-do users (People like Us: PLUs) rather than the needs of the most disadvantaged in society (the disempowered, disengaged and disconnected: DDDs). These electronic systems for increasing accessibility to personal health information may accentuate rather than mitigate the emerging eHealth divide. Using a PubMed review of literature on personal health record systems, we identified only seven of 73 papers, and one of 29 abstracts, which made specific mention of users who were disadvantaged by low literacy levels or difficulties with access to technology. This work is part of a larger study into personal health records and disadvantage.
|
30
|
Evaluating the performance of different procedures for constructing confidence intervals for coefficient alpha: a simulation study. THE BRITISH JOURNAL OF MATHEMATICAL AND STATISTICAL PSYCHOLOGY 2012; 65:467-498. [PMID: 22295951 DOI: 10.1111/j.2044-8317.2012.02038.x] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/31/2023]
Abstract
Reliability is one of the most important aspects of testing in educational and psychological measurement. The construction of confidence intervals for reliability coefficients has important implications for evaluating the accuracy of the sample estimate of reliability and for comparing different tests, scoring rubrics, or training procedures for raters or observers. The present simulation study evaluated and compared various parametric and non-parametric methods for constructing confidence intervals of coefficient alpha. Six factors were manipulated: number of items, number of subjects, population coefficient alpha, deviation from essentially parallel condition, item response distribution and type. The coverage and width of different confidence intervals were compared across simulation conditions.
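One of the simpler non-parametric procedures of the kind such simulation studies compare, a percentile-bootstrap confidence interval for coefficient alpha, can be sketched with the standard library alone. This is an illustrative sketch, not the paper's simulation code; the function names are ours:

```python
import random

def cronbach_alpha(data):
    """Coefficient alpha for data given as a list of subjects,
    each subject a list of k item scores."""
    k, n = len(data[0]), len(data)

    def var(xs):  # unbiased sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [var([row[j] for row in data]) for j in range(k)]
    total_var = var([sum(row) for row in data])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

def bootstrap_alpha_ci(data, n_boot=2000, level=0.95, seed=1):
    """Percentile-bootstrap CI: resample subjects with replacement,
    recompute alpha, and take empirical quantiles."""
    rng = random.Random(seed)
    stats = []
    while len(stats) < n_boot:
        sample = [rng.choice(data) for _ in data]
        if len({sum(row) for row in sample}) < 2:
            continue  # degenerate resample: total variance zero, alpha undefined
        stats.append(cronbach_alpha(sample))
    stats.sort()
    lo = stats[int((1 - level) / 2 * n_boot)]
    hi = stats[int((1 + level) / 2 * n_boot) - 1]
    return lo, hi
```

Parametric alternatives (e.g. F-distribution-based intervals) differ mainly in how the sampling variability of alpha is modeled rather than in the point estimate.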
|
31
|
Shots by STFM: value of immunization software to family medicine residency directors: a CERA study. Fam Med 2012; 44:716-718. [PMID: 23148004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/01/2023]
Abstract
BACKGROUND AND OBJECTIVES The Group on Immunization Education (GIE) of the Society of Teachers of Family Medicine (STFM) has developed Shots by STFM immunization software, which is available free of charge for a variety of platforms. It is routinely updated with the Center for Disease Control and Prevention's (CDC's) most recent immunization schedules. Successful development and marketing of teaching resources requires periodic evaluation of their use and value to their target audience. This study was undertaken to evaluate the 2011 version of Shots by STFM. METHODS Family medicine residency directors were surveyed about their use of Shots by STFM for teaching residents and their ratings of its features. RESULTS The response rate for the survey was 38% (172/452). While awareness of Shots by STFM among responding residency directors was low (57%), ratings by those using the resource were excellent. Thirty percent of respondents recommend or require their residents to use Shots by STFM. CONCLUSIONS Better marketing of Shots by STFM to family medicine residency directors seems to be indicated.
|
32
|
Abstract
INTRODUCTION Among the more than 1,500 "Apps" in the health sector, numerous medical "Apps" and "Apps" for visually impaired persons are available. METHODS An Internet survey was performed to identify available medical "Apps" and evaluate their usability. The corresponding web pages were evaluated and the described "Apps" assessed in a first analysis for medical seriousness and usability. Identified "Apps" were entered again as key words to search for the most current and comprehensive assessment. In addition, visually impaired persons possessing and using smartphones were asked about their personal and subjective experiences with, and preferences among, selected "Apps". RESULTS The more than 50 "Apps" examined can be subdivided into different categories: (A) stand-alone "Apps" and (B) global positioning system (GPS)-driven navigation "Apps". Many "Apps" have only been available for a short time, still have some initial technical problems, and are presently under further development. Medical "Apps" can support healthy and visually impaired people in many healthcare areas. Barrier-free access to these new technologies is essential for unhindered utilization of "Apps" by visually impaired persons. Many "Apps" developed by and for visually impaired people have achieved high acceptance and popularity in practical applications. CONCLUSION The use of "Apps" in medical healthcare, especially for visually impaired persons, has great potential to ease clinical care provision for visually impaired persons as smartphones spread and new technical developments emerge.
|
33
|
Abstract
Classical reliability theory assumes that individuals have identical true scores on both testing occasions, a condition described as stable. If some individuals' true scores are different on different testing occasions, described as unstable, the estimated reliability can be misleading. A model called stable unstable reliability theory (SURT) frames stability or instability as an empirically testable question. SURT assumes a mixed population of stable and unstable individuals in unknown proportions, with w_i the probability that individual i is stable. w_i becomes i's test score weight, which is used to form a weighted correlation coefficient r_w, which is reliability under SURT. If all w_i = 1, then r_w is the classical reliability coefficient; thus classical theory is a special case of SURT. Typically r_w is larger than the conventional reliability r, and confidence intervals on true scores are typically shorter than conventional intervals. r_w is computed with routines in a publicly available R package.
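The weighted correlation coefficient at the core of SURT has the standard weighted-Pearson form, sketched below in Python. The stability weights themselves must come from the SURT model fit (the abstract's R package); here they are simply inputs, so this shows only the reduction to the classical coefficient when all weights equal 1:

```python
def weighted_correlation(x, y, w):
    """Weighted Pearson correlation. With all weights equal to 1 this
    reduces to the classical (unweighted) coefficient, mirroring how
    classical reliability is a special case of SURT."""
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw
    cov = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y)) / sw
    vx = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x)) / sw
    vy = sum(wi * (yi - my) ** 2 for wi, yi in zip(w, y)) / sw
    return cov / (vx * vy) ** 0.5
```

Down-weighting an individual suspected of being unstable (weight near 0) removes their influence on the estimate, which is why the weighted coefficient is typically larger than the conventional one.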
|
34
|
Accuracy in parameter estimation for ANCOVA and ANOVA contrasts: sample size planning via narrow confidence intervals. THE BRITISH JOURNAL OF MATHEMATICAL AND STATISTICAL PSYCHOLOGY 2012; 65:350-370. [PMID: 22004142 DOI: 10.1111/j.2044-8317.2011.02029.x] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/31/2023]
Abstract
Contrasts of means are often of interest because they describe the effect size among multiple treatments. High-quality inference of population effect sizes can be achieved through narrow confidence intervals (CIs). Given the close relation between CI width and sample size, we propose two methods to plan the sample size for an ANCOVA or ANOVA study, so that a sufficiently narrow CI for the population (standardized or unstandardized) contrast of interest will be obtained. The standard method plans the sample size so that the expected CI width is sufficiently small. Since CI width is a random variable, the expected width being sufficiently small does not guarantee that the width obtained in a particular study will be sufficiently small. An extended procedure ensures with some specified, high degree of assurance (e.g., 90% of the time) that the CI observed in a particular study will be sufficiently narrow. We also discuss the rationale and usefulness of two different ways to standardize an ANCOVA contrast, and compare three types of standardized contrast in the ANCOVA/ANOVA context. All of the methods we propose have been implemented in the freely available MBESS package in R so that they can be easily applied by researchers.
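The "standard method" described above, choosing the smallest n whose expected CI width is below a target, can be sketched for an unstandardized contrast with equal group sizes. MBESS (in R) implements the exact t-based and assurance versions; the Python sketch below substitutes the normal quantile for the t quantile, so it slightly understates n for small samples, and the function name is ours:

```python
from statistics import NormalDist

def n_per_group_for_contrast_width(c, sigma, omega, level=0.95):
    """Smallest per-group n such that the large-sample (z-based) CI for the
    contrast sum(c_j * mu_j), with error SD sigma and equal group sizes,
    has full width <= omega."""
    z = NormalDist().inv_cdf(1 - (1 - level) / 2)
    n = 2
    # full CI width = 2 * z * sigma * sqrt(sum(c_j^2) / n)
    while 2 * z * sigma * (sum(cj ** 2 for cj in c) / n) ** 0.5 > omega:
        n += 1
    return n
```

For a two-group mean difference (c = [1, -1]) with sigma = 1, halving the target width roughly quadruples the required n, since width shrinks with the square root of n.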
|
35
|
Linkage-disequilibrium-based binning affects the interpretation of GWASs. Am J Hum Genet 2012; 90:727-33. [PMID: 22444669 DOI: 10.1016/j.ajhg.2012.02.025] [Citation(s) in RCA: 36] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/06/2011] [Revised: 02/16/2012] [Accepted: 02/27/2012] [Indexed: 12/13/2022] Open
Abstract
Genome-wide association studies (GWASs) are critically dependent on detailed knowledge of the pattern of linkage disequilibrium (LD) in the human genome. GWASs generate lists of variants, usually SNPs, ranked according to the significance of their association to a trait. Downstream analyses generally focus on the gene or genes that are physically closest to these SNPs and ignore their LD profile with other SNPs. We have developed a flexible R package (LDsnpR) that efficiently assigns SNPs to genes on the basis of both their physical position and their pairwise LD with other SNPs. We used the positional-binning and LD-based-binning approaches to investigate whether including these "LD-based" SNPs would affect the interpretation of three published GWASs on bipolar affective disorder (BP) and of the imputed versions of two of these GWASs. We show how including LD can be important for interpreting and comparing GWASs. In the published, unimputed GWASs, LD-based binning effectively "recovered" 6.1%-8.3% of Ensembl-defined genes. It altered the ranks of the genes and resulted in nonnegligible differences between the lists of the top 2,000 genes emerging from the two binning approaches. It also improved the overall gene-based concordance between independent BP studies. In the imputed datasets, although the increases in coverage (>0.4%) and rank changes were more modest, even greater concordance between the studies was observed, attesting to the potential of LD-based binning on imputed data as well. Thus, ignoring LD can result in the misinterpretation of the GWAS findings and have an impact on subsequent genetic and functional studies.
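The two binning strategies compared above can be illustrated with a toy assignment routine: positional binning first, then a single LD-based pass in which a SNP inherits the gene bins of any assigned SNP it is in strong LD with. This is a deliberately simplified sketch, not LDsnpR's algorithm or API; all argument names and the r² threshold are illustrative:

```python
def bin_snps(snp_pos, genes, ld_pairs, r2_min=0.5):
    """snp_pos: {snp: position}; genes: {gene: (start, end)};
    ld_pairs: {(snpA, snpB): r2}. Returns {gene: set of binned SNPs}."""
    bins = {g: set() for g in genes}
    assigned = {}
    # Pass 1: positional binning
    for s, p in snp_pos.items():
        for g, (lo, hi) in genes.items():
            if lo <= p <= hi:
                bins[g].add(s)
                assigned.setdefault(s, set()).add(g)
    # Pass 2: one-hop LD-based binning (each direction of every strong pair)
    for (a, b), r2 in ld_pairs.items():
        if r2 < r2_min:
            continue
        for src, dst in ((a, b), (b, a)):
            for g in assigned.get(src, ()):
                bins[g].add(dst)  # dst inherits src's gene bins
    return bins
```

In the example below, a SNP 300 kb outside the gene is "recovered" into the gene's bin by its LD partner inside the gene, while a weakly linked SNP is not.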
|
36
|
The value of reusing prior nested case-control data in new studies with different outcome. Stat Med 2012; 31:1291-302. [PMID: 22350833 DOI: 10.1002/sim.4494] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2010] [Accepted: 10/14/2011] [Indexed: 11/07/2022]
Abstract
Many epidemiological studies use a nested case-control (NCC) design to reduce cost while maintaining study power. However, because of the incidence density sampling used, reusing data from NCC studies for analysis of secondary outcomes is not straightforward. Recent methodological developments have opened the possibility for prior NCC data to be used to complement controls in a current study, thereby improving study efficiency. However, practical guidelines on the effectiveness of prior data relative to newly sampled subjects and the potential power gains are still lacking. Using simulated cohorts, we show in this paper how the efficiency of NCC studies that use a mixture of prior and newly sampled subjects depends on the number of newly sampled controls and prior subjects as well as the overlap in the distributions of the matching variables. We explore the feasibility and efficiency of a current study that gathers no controls, relying instead on prior data. Using the concept of effective number of controls, we show how researchers can assess the potential power gains from reusing prior data. We apply the method to analyses of anorexia and contralateral breast cancer in the Swedish population and show how power calculations can be done using publicly available software. This work has important applications in genetic and molecular epidemiology to make optimal use of costly exposure measurements.
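One way to see why reused prior controls give diminishing returns is the classic approximation that a 1:m matched design attains roughly m/(m+1) of the efficiency of a design with unlimited controls per case. The sketch below combines that with a purely hypothetical discounting of prior controls; the overlap weight is an illustrative assumption, not the paper's estimator.

```python
def relative_efficiency(m):
    """Approximate efficiency of a 1:m matched case-control design
    relative to an unlimited number of controls per case."""
    return m / (m + 1)

def effective_controls_per_case(m_new, m_prior, overlap_weight):
    """Hypothetical illustration: prior controls count only partially,
    discounted by the overlap of the matching-variable distributions."""
    return m_new + overlap_weight * m_prior

# Two newly sampled controls plus four prior controls at 50% effective weight
m_eff = effective_controls_per_case(2, 4, 0.5)
print(relative_efficiency(m_eff))
```

Moving from 2 to an effective 4 controls per case raises the approximate efficiency from 2/3 to 4/5, which is the kind of power gain the effective-number-of-controls concept lets researchers assess before committing to new sampling.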
|
37
|
Update of the NIOSH life table analysis system: a person-years analysis program for the Windows computing environment. Am J Ind Med 2011; 54:915-24. [PMID: 22068723 DOI: 10.1002/ajim.20999] [Citation(s) in RCA: 64] [Impact Index Per Article: 4.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 07/17/2011] [Indexed: 11/09/2022]
Abstract
BACKGROUND Person-years analysis is a fundamental tool of occupational epidemiology. A life table analysis system (LTAS), previously developed by the National Institute for Occupational Safety and Health, was limited by its platform and analysis and reporting capabilities. We describe the updating of LTAS for the Windows operating system (LTAS.NET) with improved properties. SOFTWARE DEVELOPMENT PROCESS A group of epidemiologists, programmers, and statisticians developed software, platform, and computing requirements. Statistical methods include the use of (indirectly) standardized mortality ratios, (directly) standardized rate ratios, confidence intervals, and P values based on the normal approximation and exact Poisson methods, and a trend estimator for linear exposure-response associations. SOFTWARE FEATURES We show examples using LTAS.NET to stratify and analyze multiple fixed and time-dependent variables. Data import, stratification, and reporting options are highly flexible. Users may export stratified data for Poisson regression modeling. CONCLUSIONS LTAS.NET incorporates improvements that will facilitate more complex person-years analysis of occupational cohort data.
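As a small illustration of the kind of quantity LTAS.NET reports, a standardized mortality ratio with Byar's normal approximation to the exact Poisson confidence limits can be computed directly. The sketch below is not LTAS.NET code, and the counts are invented.

```python
import math

def smr_byar(observed, expected, z=1.96):
    """Standardized mortality ratio with Byar's approximation to the
    exact Poisson confidence limits (illustrative, not LTAS.NET code)."""
    if observed == 0:
        raise ValueError("Byar's lower limit is undefined for 0 observed deaths")
    smr = observed / expected
    o = observed
    lower = o * (1 - 1 / (9 * o) - z / (3 * math.sqrt(o))) ** 3 / expected
    ou = observed + 1  # upper limit uses observed + 1
    upper = ou * (1 - 1 / (9 * ou) + z / (3 * math.sqrt(ou))) ** 3 / expected
    return smr, lower, upper

# Hypothetical cohort: 60 observed deaths vs. 48 expected from reference rates
smr, lo, hi = smr_byar(60, 48)
print(f"SMR = {smr:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

Here the interval includes 1, so the excess mortality would not be statistically significant at the 5% level despite an SMR of 1.25.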
|
38
|
Using SAS PROC TCALIS for multigroup structural equation modelling with mean structures. THE BRITISH JOURNAL OF MATHEMATICAL AND STATISTICAL PSYCHOLOGY 2011; 64:516-537. [PMID: 21973099 DOI: 10.1111/j.2044-8317.2010.02012.x] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/31/2023]
Abstract
Multigroup structural equation modelling (SEM) is a technique frequently used to evaluate measurement invariance in social and behavioural science research. Before version 9.2, SAS was incapable of handling multigroup SEM. However, this limitation is resolved in PROC TCALIS in SAS 9.2. For the purpose of illustration, this paper provides a step-by-step guide to programming the tests of measurement invariance and partial invariance using PROC TCALIS for multigroup SEM with mean structures. Fit indices and parameter estimates are validated, thus providing an alternative tool for researchers conducting both applied and simulated studies. Other new features (e.g., different types of modelling languages and estimation methods) and limitations (e.g., ordered-categorical SEM and multilevel SEM) of the TCALIS procedure are also briefly discussed.
|
39
|
Near-native protein loop sampling using nonparametric density estimation accommodating sparcity. PLoS Comput Biol 2011; 7:e1002234. [PMID: 22028638 PMCID: PMC3197639 DOI: 10.1371/journal.pcbi.1002234] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2011] [Accepted: 09/01/2011] [Indexed: 11/29/2022] Open
Abstract
Unlike the core structural elements of a protein, such as regular secondary structure, template-based modeling (TBM) has difficulty with loop regions, owing to their variability in sequence and structure as well as the sparse sampling available from a limited number of homologous templates. We present a novel, knowledge-based method for loop sampling that leverages homologous torsion angle information to estimate a continuous joint backbone dihedral angle density at each loop position. The φ,ψ distributions are estimated via a Dirichlet process mixture of hidden Markov models (DPM-HMM). Models are quickly generated from samples of these distributions and enriched using an end-to-end distance filter. The performance of the DPM-HMM method was evaluated against a diverse test set in a leave-one-out approach. Candidates as low as 0.45 Å RMSD, and with a worst case of 3.66 Å, were produced. For canonical loops such as the immunoglobulin complementarity-determining regions (mean RMSD <2.0 Å), the DPM-HMM method performs as well as or better than the best templates, demonstrating that our automated method recaptures these canonical loops without inclusion of any IgG-specific terms or manual intervention. In cases with poor or few good templates (mean RMSD >7.0 Å), this sampling method produces a population of loop structures to around 3.66 Å for loops of up to 17 residues. In a direct sampling comparison with the Loopy algorithm, our method demonstrates the ability to sample nearer to native structures for both the canonical CDRH1 and noncanonical CDRH3 loops. Lastly, under the realistic test conditions of the CASP9 experiment, successful application of the DPM-HMM to 90 loops from 45 TBM targets shows the general applicability of our sampling method to the loop modeling problem. These results demonstrate that our DPM-HMM provides an advantage by consistently sampling near-native loop structures.
The software used in this analysis is available for download at http://www.stat.tamu.edu/~dahl/software/cortorgles/. A protein's structure consists of elements of regular secondary structure connected by less regular stretches of loop segments. The irregularity of loop structure makes loop modeling quite challenging, and more accurate sampling of loop conformations has a direct impact on protein modeling, design, and function classification, as well as on the study of protein interactions. We have developed a method that extends a more comprehensive knowledge-based approach to producing models of the loop regions of protein structure. Most physical models cannot adequately sample the large conformational space, while the more discrete knowledge-based libraries are conformationally limited. To address both of these problems, we introduce a novel statistical method that produces a continuous yet weighted estimate of loop conformational space from a discrete library of structures by using a Dirichlet process mixture of hidden Markov models (DPM-HMM). Applied to loop structure sampling, the results of a number of tests demonstrate that our approach quickly generates large numbers of candidates with near-native loop conformations. Most significantly, in cases where the template sampling is sparse and/or far from the native conformation, the DPM-HMM method samples close to the native space and produces a population of accurate loop structures.
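The continuous, weighted view of torsion space can be evoked with a toy sampler that draws (φ, ψ) pairs from a weighted mixture of von Mises densities. This is only a stand-in for the per-position densities a DPM-HMM estimates; the component means, weights, and concentration below are invented (angles in radians, in the [0, 2π) convention of Python's random module).

```python
import random

def sample_dihedrals(components, n, seed=0):
    """Draw (phi, psi) pairs from a weighted mixture of von Mises
    densities -- a toy stand-in for the continuous per-position density
    a DPM-HMM estimates from homologous templates.
    components: list of (weight, mu_phi, mu_psi, kappa) tuples."""
    rng = random.Random(seed)
    weights = [w for w, *_ in components]
    out = []
    for _ in range(n):
        # Pick a mixture component, then sample both angles from it
        _, mu_phi, mu_psi, kappa = rng.choices(components, weights=weights)[0]
        out.append((rng.vonmisesvariate(mu_phi, kappa),
                    rng.vonmisesvariate(mu_psi, kappa)))
    return out

# Two hypothetical torsion clusters (means loosely helix-like and extended)
comps = [(0.7, 5.24, 5.50, 8.0), (0.3, 4.19, 2.44, 8.0)]
pairs = sample_dihedrals(comps, 100)
```

Because the density is continuous, the sampler can propose conformations between and around the discrete template values, which is the property that helps when templates are sparse or far from native.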
|
40
|
Quantifying the impact of fixed effects modeling of clusters in multiple imputation for cluster randomized trials. Biom J 2011; 53:57-74. [PMID: 21259309 PMCID: PMC3124925 DOI: 10.1002/bimj.201000140] [Citation(s) in RCA: 56] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
In cluster randomized trials (CRTs), identifiable clusters rather than individuals are randomized to study groups. Resulting data often consist of a small number of clusters with correlated observations within a treatment group. Missing data often present a problem in the analysis of such trials, and multiple imputation (MI) has been used to create complete data sets, enabling subsequent analysis with well-established analysis methods for CRTs. We discuss strategies for accounting for clustering when multiply imputing a missing continuous outcome, focusing on estimation of the variance of group means as used in an adjusted t-test or ANOVA. These analysis procedures are congenial to (can be derived from) a mixed effects imputation model; however, this imputation procedure is not yet available in commercial statistical software. An alternative approach that is readily available and has been used in recent studies is to include fixed effects for cluster, but the impact of using this convenient method has not been studied. We show that under this imputation model the MI variance estimator is positively biased and that smaller intraclass correlations (ICCs) lead to larger overestimation of the MI variance. Analytical expressions for the bias of the variance estimator are derived in the case of data missing completely at random, and cases in which data are missing at random are illustrated through simulation. Finally, various imputation methods are applied to data from the Detroit Middle School Asthma Project, a recent school-based CRT, and differences in inference are compared.
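The MI variance estimator discussed above is the total variance from Rubin's rules, which combine within- and between-imputation variability. A generic sketch follows (not the paper's code; the five estimates and variances are invented).

```python
def rubins_rules(estimates, variances):
    """Combine m completed-data estimates by Rubin's rules.

    Returns the pooled estimate and the total variance
    T = W + (1 + 1/m) * B, where W is the mean within-imputation
    variance and B the between-imputation variance of the estimates."""
    m = len(estimates)
    qbar = sum(estimates) / m
    w = sum(variances) / m
    b = sum((q - qbar) ** 2 for q in estimates) / (m - 1)
    return qbar, w + (1 + 1 / m) * b

# Five hypothetical imputed-data group-mean differences and their variances
est = [1.8, 2.1, 2.0, 1.9, 2.2]
var = [0.40, 0.38, 0.42, 0.41, 0.39]
qbar, total_var = rubins_rules(est, var)
```

The paper's finding can be read off the formula: an imputation model that adds spurious between-imputation spread (here, fixed effects for cluster) inflates B and hence T, with greater overestimation at smaller ICCs.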
|
41
|
[Application of SaTScan in detection of schistosomiasis clusters in marshland and lake areas]. ZHONGGUO XUE XI CHONG BING FANG ZHI ZA ZHI = CHINESE JOURNAL OF SCHISTOSOMIASIS CONTROL 2011; 23:28-31. [PMID: 22164371] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Subscribe] [Scholar Register] [Indexed: 05/31/2023]
Abstract
OBJECTIVE To detect schistosomiasis clusters in the marshland and lake areas in 2008, so as to provide a reference for schistosomiasis control and a methodology for detecting cluster areas of related diseases. METHODS SaTScan was used to detect schistosomiasis clusters based on a spatial database from GIS and related variables, including the number of current patients and the population in endemic areas. RESULTS A total of 5 clusters comprising 39 counties (districts) were detected by SaTScan; for 3 of these clusters, the RRs exceeded 3 and the log-likelihood ratios exceeded 1 000 (P < 0.05). The cluster with the highest RR and log-likelihood ratio was located at the boundary of Hubei and Hunan provinces and covered the largest area. Moving downstream along the Yangtze River from there, the areas and RRs of the 5 clusters decreased progressively. CONCLUSION The 5 provinces in the marshland and lake areas remain the key spatial clusters of schistosomiasis, especially near the boundary of Hubei and Hunan provinces.
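SaTScan's Poisson-model scan statistic scores each candidate window by a log-likelihood ratio; the standard Kulldorff form can be sketched as follows (the window counts below are invented for illustration).

```python
import math

def kulldorff_llr(c, e, C):
    """Log-likelihood ratio for a candidate cluster window under the
    Poisson model (Kulldorff's scan statistic). c: cases inside the
    window; e: expected cases inside under the null; C: total cases.
    Only windows with more cases than expected (c > e) score above 0."""
    if c <= e:
        return 0.0
    llr = c * math.log(c / e)
    if C > c:
        llr += (C - c) * math.log((C - c) / (C - e))
    return llr

# Hypothetical window: 400 of 1000 cases where only 200 were expected
print(round(kulldorff_llr(400, 200, 1000), 1))
```

The window maximizing this ratio is the most likely cluster, and its significance is then assessed by Monte Carlo replication of the null, which is how the reported P < 0.05 values arise.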
|
42
|
Role of mobile health in the care of culturally and linguistically diverse US populations. PERSPECTIVES IN HEALTH INFORMATION MANAGEMENT 2011; 8:1e. [PMID: 21307988 PMCID: PMC3035829] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Subscribe] [Scholar Register] [Indexed: 05/30/2023]
Abstract
Emerging trends in the health-related use of cell phones include the proliferation of mobile health applications for the care and monitoring of patients with chronic diseases and the rise in cell phone usage by Latinos and African Americans in the United States. This article reviews public policy in four areas with the goal of improving the care of patients belonging to culturally and linguistically diverse populations: 1) mobile health service access and the physician's duty of care, 2) affordability of and reimbursement for health-related services delivered via mobile phone, 3) protocols for mobile-health-enabled collection and distribution of patient health data, and 4) cultural and linguistic appropriateness of health-related messages delivered via cell phone. The review demonstrates the need for policy changes that would allow for reimbursement of both synchronous and asynchronous patient-provider communication, subsidize broadband access for lower-income patients, introduce standards for confidentiality of health data transmitted via cell phone as well as amplify existing cultural and linguistic standards to encompass mobile communication, and consider widespread public accessibility when certifying new technologies as "medical devices." Federal and state governments must take prompt action to ensure that the benefits of mobile health are accessible to all Americans.
|
43
|
A survey of analysis software for array-comparative genomic hybridisation studies to detect copy number variation. Hum Genomics 2010; 4:421-7. [PMID: 20846932 PMCID: PMC3525224 DOI: 10.1186/1479-7364-4-6-421] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2010] [Accepted: 08/27/2010] [Indexed: 11/10/2022] Open
Abstract
Copy number variants (CNVs) are a major source of variation among individuals and populations. Array-based comparative genomic hybridisation (aCGH) is a powerful method for detecting and comparing the copy numbers of DNA sequences at high resolution along the genome. In recent years, several informatics tools for accurate and efficient CNV detection and assessment have been developed. In this paper, most of the well-known algorithms and analysis software, along with the limitations of that software, are briefly reviewed.
|
44
|
Lab software: the global scene. MLO: MEDICAL LABORATORY OBSERVER 2009; 41:28-30. [PMID: 19753786] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Subscribe] [Scholar Register] [Indexed: 05/28/2023]
|
45
|
Accuracy of Computer Programs in Predicting Orthognathic Surgery Soft Tissue Response. J Oral Maxillofac Surg 2009; 67:751-9. [PMID: 19304030 DOI: 10.1016/j.joms.2008.11.006] [Citation(s) in RCA: 67] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2007] [Revised: 09/18/2008] [Accepted: 11/06/2008] [Indexed: 11/18/2022]
|
46
|
Analysis of family health history data collection patterns in consumer-oriented Web-based tools. AMIA ... ANNUAL SYMPOSIUM PROCEEDINGS. AMIA SYMPOSIUM 2008:982. [PMID: 18999269] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Subscribe] [Scholar Register] [Received: 03/14/2008] [Accepted: 06/17/2008] [Indexed: 05/27/2023]
Abstract
Current trends have brought resurgent interest in developing consumer-oriented tools that gather patient-entered clinical data. Family health history data has long been recognized as valuable for risk assessment in primary care, but has gained renewed attention recently as part of IT-oriented efforts in personalized medicine. In order to better understand the breadth of data collected in consumer-oriented web applications, we evaluated their collection patterns using the recommendations issued by the American Health Information Community (AHIC).
|
47
|
Three-dimensional accuracy of measurements made with software on cone-beam computed tomography images. Am J Orthod Dentofacial Orthop 2008; 134:112-6. [PMID: 18617110 DOI: 10.1016/j.ajodo.2006.08.024] [Citation(s) in RCA: 145] [Impact Index Per Article: 9.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2006] [Revised: 08/01/2006] [Accepted: 08/01/2006] [Indexed: 11/17/2022]
Abstract
INTRODUCTION The purpose of this article was to evaluate the accuracy of measurements made on 9- and 12-in cone-beam computed tomography (CBCT) images compared with measurements made on a coordinate measuring machine (CMM), the gold standard. METHODS Ten markers were placed on a synthetic mandible, and landmark coordinates and linear and angular measurements were determined with the CMM. Three-dimensional CBCT images, at 9 and 12 in, were taken of the mandible with a CBCT machine (NewTom 3G, Aperio Services, Verona, Italy), and landmark coordinates and linear and angular measurements were obtained with AMIRA (Mercury Computer Systems, Berlin, Germany) software. RESULTS The coordinate intrareliability correlation coefficient was almost perfect between the 3-dimensional CBCT images and the CMM measurements. With the Student t test, we found no statistically significant difference between linear and angular measurements from the CMM and the NewTom 3G images, which differed by less than 1 mm and 1°, respectively. CONCLUSIONS The NewTom 3G produces a 1-to-1 image-to-reality ratio.
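The linear and angular landmark measurements compared in studies like this reduce to elementary coordinate geometry once landmark coordinates are available (from the CMM or the segmented CBCT volume); a minimal sketch with made-up points:

```python
import math

def dist(p, q):
    """Euclidean distance between two 3D landmarks."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def angle_deg(a, vertex, b):
    """Angle at `vertex` formed by landmarks a and b, in degrees."""
    u = [x - y for x, y in zip(a, vertex)]
    v = [x - y for x, y in zip(b, vertex)]
    cosang = sum(x * y for x, y in zip(u, v)) / (dist(a, vertex) * dist(b, vertex))
    return math.degrees(math.acos(cosang))

# Hypothetical marker coordinates (mm) from two modalities would be
# compared measurement-by-measurement, e.g.:
print(dist((0, 0, 0), (3, 4, 0)))            # linear measurement
print(angle_deg((1, 0, 0), (0, 0, 0), (0, 1, 0)))  # angular measurement
```

Repeating such measurements on paired CMM and CBCT coordinate sets yields the per-measurement differences (here, <1 mm and <1°) that the Student t test evaluates.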
|
48
|
Phenostat: visualization and statistical tool for analysis of phenotyping data. Mamm Genome 2007; 18:677-81. [PMID: 17674099 DOI: 10.1007/s00335-007-9042-4] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2007] [Accepted: 05/29/2007] [Indexed: 10/23/2022]
Abstract
The effective extraction of information from multidimensional data sets derived from phenotyping experiments is a growing challenge in biology. Data visualization tools are important resources that can aid in exploratory data analysis of complex data sets. Phenotyping experiments of model organisms produce data sets in which a large number of phenotypic measures are collected for each individual in a group. A critical initial step in the analysis of such multidimensional data sets is the exploratory analysis of data distribution and correlation. To facilitate the rapid visualization and exploratory analysis of multidimensional complex trait data, we have developed a user-friendly, web-based software tool called Phenostat. Phenostat is composed of a dynamic graphical environment that allows the user to inspect the distribution of multiple variables in a data set simultaneously. Individuals can be selected by directly clicking on the graphs and thus displaying their identity, highlighting corresponding values in all graphs, allowing their inclusion or exclusion from the analysis. Statistical analysis is provided by R package functions. Phenostat is particularly suited for rapid distribution and correlation analysis of subsets of data. An analysis of behavioral and physiologic data stemming from a large mouse phenotyping experiment using Phenostat reveals previously unsuspected correlations. Phenostat is freely available to academic institutions and nonprofit organizations and can be used from our website at: (http://www.bioinfo.embl.it/phenostat/).
|
49
|
Supervised multivariate analysis of sequence groups to identify specificity determining residues. BMC Bioinformatics 2007; 8:135. [PMID: 17451607 PMCID: PMC1878507 DOI: 10.1186/1471-2105-8-135] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/14/2006] [Accepted: 04/23/2007] [Indexed: 11/29/2022] Open
Abstract
Background Proteins that evolve from a common ancestor can change functionality over time, and it is important to be able to identify the residues that cause this change. In this paper we show how a supervised multivariate statistical method, Between Group Analysis (BGA), can be used to identify these residues from families of proteins with different substrate specificities using multiple sequence alignments. Results We demonstrate the usefulness of this method on three different test cases. Two of these, the lactate/malate dehydrogenase family and the nucleotidyl cyclases, consist of two functional groups; the third, the serine proteases, consists of three groups. BGA was used to analyse and visualise these three families using two different encoding schemes for the amino acids. Conclusion The overall combination of methods in this paper is powerful and flexible while being computationally very fast and simple. BGA is especially useful because it can be used to analyse any number of functional classes. In the examples in this paper we used only 2 or 3 classes for demonstration purposes, but any number can be used and visualised.
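At its core, BGA is an ordination of group centroids with the original samples projected onto the centroid axes; positions whose loadings dominate an axis are candidate specificity-determining residues. A toy numerical sketch (not the authors' implementation; the data and encoding are invented):

```python
import numpy as np

def between_group_analysis(X, groups):
    """Toy Between Group Analysis: principal axes of the group centroids,
    with the original samples projected onto those axes. Rows of X are
    samples (e.g. numerically encoded alignment positions); `groups`
    gives the class label of each row."""
    X = np.asarray(X, dtype=float)
    Xc = X - X.mean(axis=0)
    labels = sorted(set(groups))
    centroids = np.array([Xc[[g == lab for g in groups]].mean(axis=0)
                          for lab in labels])
    # SVD of the centroids: directions that separate the groups
    _, _, vt = np.linalg.svd(centroids, full_matrices=False)
    axes = vt[: len(labels) - 1]   # at most (n_groups - 1) informative axes
    return Xc @ axes.T, axes       # sample scores, axis loadings

# Two hypothetical groups separated along the first feature only
X = [[0.0, 1.0], [0.2, 0.8], [5.0, 1.1], [5.2, 0.9]]
scores, axes = between_group_analysis(X, ["A", "A", "B", "B"])
```

In this example the first axis loads almost entirely on feature 0, flagging it as the discriminating position, while the two groups land on opposite sides of the axis.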
|
50
|
GENECODIS: a web-based tool for finding significant concurrent annotations in gene lists. Genome Biol 2007; 8:R3. [PMID: 17204154 PMCID: PMC1839127 DOI: 10.1186/gb-2007-8-1-r3] [Citation(s) in RCA: 510] [Impact Index Per Article: 30.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2006] [Revised: 09/29/2006] [Accepted: 01/04/2007] [Indexed: 12/01/2022] Open
Abstract
We present GENECODIS, a web-based tool that integrates different sources of information to search for annotations that frequently co-occur in a set of genes and to rank them by their statistical significance. The analysis of concurrent annotations provides significant information for the biological interpretation of high-throughput experiments and may outperform the results of standard methods for the functional analysis of gene lists. GENECODIS is publicly available at .
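Significance in this kind of concurrent-annotation analysis is typically assessed with a hypergeometric test on the number of list genes carrying a given annotation combination versus the genome background; GENECODIS's exact test and multiple-testing correction may differ, and the counts below are invented.

```python
from math import comb

def hypergeom_enrichment_p(k, n, K, N):
    """One-sided enrichment p-value P(X >= k) when drawing n genes from
    a background of N genes, of which K carry the annotation
    (combination) of interest."""
    total = comb(N, n)
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(n, K) + 1)) / total

# Hypothetical: 8 of 50 list genes share an annotation pair found in
# 100 of 20,000 background genes
p = hypergeom_enrichment_p(8, 50, 100, 20000)
print(f"{p:.3g}")
```

With an expected count of only 50 × 100 / 20000 = 0.25 genes under the null, observing 8 yields a vanishingly small p-value, which is how a concurrent annotation rises to the top of the ranking.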
|