251
|
Nkoy FL, Wolfe D, Hales JW, Lattin G, Rackham M, Maloney CG. Enhancing an existing clinical information system to improve study recruitment and census gathering efficiency. AMIA Annual Symposium Proceedings 2009; 2009:476-480. [PMID: 20351902] [PMCID: PMC2815472]
Abstract
Information technology can improve healthcare efficiency. We developed and implemented a simple and inexpensive tool, the "Automated Case Finding and Alerting System" (ACAS), using data from an existing clinical information system to facilitate identification of potentially eligible patients for clinical trials and of patient encounters for billing purposes. We validated the ACAS by calculating its level of agreement in patient identification with data generated by manual identification methods. There was substantial agreement between the two methods for both clinical trial recruitment (kappa: 0.84) and billing (kappa: 0.97). Automated identification occurred instantaneously vs. about 2 hours/day for clinical trial recruitment and 1 hour 10 minutes/day for billing, and was inexpensive ($98.95, one-time fee) compared to manual identification ($1,200/month for clinical trial recruitment and $670/month for billing). Automated identification was more efficient and cost-effective than manual identification methods. Repurposing clinical information beyond its traditional use has the potential to improve efficiency and decrease healthcare costs.
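The level of agreement reported in this abstract is Cohen's kappa. As an illustrative sketch only (the ACAS implementation itself is not reproduced here), kappa between automated and manual identification over the same patient list could be computed as:

```python
def cohen_kappa(x, y):
    """Cohen's kappa for two raters' labels over the same items."""
    assert len(x) == len(y) and len(x) > 0
    n = len(x)
    # Observed agreement: fraction of items where the raters agree.
    po = sum(a == b for a, b in zip(x, y)) / n
    # Chance agreement: product of each rater's marginal frequencies.
    cats = set(x) | set(y)
    pe = sum((x.count(c) / n) * (y.count(c) / n) for c in cats)
    return (po - pe) / (1 - pe)
```

On this scale, the reported 0.84 and 0.97 both indicate strong agreement between automated and manual methods.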
|
252
|
Zhu G, Zheng Y, Doermann D, Jaeger S. Signature detection and matching for document image retrieval. IEEE Transactions on Pattern Analysis and Machine Intelligence 2009; 31:2015-2031. [PMID: 19762928] [DOI: 10.1109/tpami.2008.237]
Abstract
As one of the most pervasive methods of individual identification and document authentication, signatures present convincing evidence and provide an important form of indexing for effective document image processing and retrieval in a broad range of applications. However, detection and segmentation of free-form objects such as signatures from cluttered backgrounds is currently an open document analysis problem. In this paper, we focus on two fundamental problems in signature-based document image retrieval. First, we propose a novel multiscale approach to jointly detecting and segmenting signatures from document images. Rather than focusing on local features that typically have large variations, our approach captures the structural saliency using a signature production model and computes the dynamic curvature of 2D contour fragments over multiple scales. This detection framework is general and computationally tractable. Second, we treat the problem of signature retrieval in the unconstrained setting of translation, scale, and rotation invariant nonrigid shape matching. We propose two novel measures of shape dissimilarity based on anisotropic scaling and registration residual error and present a supervised learning framework for combining complementary shape information from different dissimilarity metrics using LDA. We quantitatively study state-of-the-art shape representations, shape matching algorithms, measures of dissimilarity, and the use of multiple instances as queries in document image retrieval. We further demonstrate our matching techniques in offline signature verification. Extensive experiments using large real-world collections of English and Arabic machine-printed and handwritten documents demonstrate the excellent performance of our approaches.
|
253
|
Barron D, Blumenthal L, Bourque S, Brovarny N, Childress J, Clark JS, Criswell DL, Dillard J, Dougherty M, Gardenier M, Gryzbowski D, Hall T, Hardison M, Hecht J, Hjort B, Jackson K, Johnson M, Lerch DM, Maxim DW, Mozie DI, Osi I, Panzarella D, Pavlick JL, Qazen U, Ray S, Spurrell L, Stephens D, Sugg S, Vernon K, Waugh TE, Wiedemann LA. Electronic signature, attestation, and authorship (updated). Journal of AHIMA 2009; 80:62-69. [PMID: 19953798]
|
254
|
Djioua M, Plamondon R. A new algorithm and system for the characterization of handwriting strokes with delta-lognormal parameters. IEEE Transactions on Pattern Analysis and Machine Intelligence 2009; 31:2060-2072. [PMID: 19762931] [DOI: 10.1109/tpami.2008.264]
Abstract
In this paper, we present a new analytical method for estimating the parameters of Delta-Lognormal functions and characterizing handwriting strokes. According to the Kinematic Theory of rapid human movements, these parameters contain information on both the motor commands and the timing properties of a neuromuscular system. The new algorithm, called XZERO, exploits relationships between the zero crossings of the first and second time derivatives of a lognormal function and its four basic parameters. The methodology is described and then evaluated under various testing conditions. The new tool allows a greater variety of stroke patterns to be processed automatically. Furthermore, for the first time, the extraction accuracy is quantified empirically, taking advantage of the exponential relationships that link the dispersion of the extraction errors with its signal-to-noise ratio. A new extraction system which combines this algorithm with two other previously published methods is also described and evaluated. This system provides researchers involved in various domains of pattern analysis and artificial intelligence with new tools for the basic study of single strokes as primitives for understanding rapid human movements.
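For context, the Delta-Lognormal model of the Kinematic Theory describes a stroke's velocity magnitude as the difference of an agonist and an antagonist lognormal impulse response, v(t) = D1·Λ(t; t01, μ1, σ1) − D2·Λ(t; t02, μ2, σ2). The sketch below only evaluates those profiles (the parameter values in the test are arbitrary illustrations; this is not the XZERO extractor):

```python
import math

def lognormal(t, t0, mu, sigma):
    """Lognormal impulse response Lambda(t; t0, mu, sigma); zero for t <= t0."""
    if t <= t0:
        return 0.0
    z = (math.log(t - t0) - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi) * (t - t0))

def delta_lognormal(t, D1, D2, t01, t02, mu1, mu2, s1, s2):
    """Stroke velocity as agonist minus antagonist lognormal components."""
    return D1 * lognormal(t, t01, mu1, s1) - D2 * lognormal(t, t02, mu2, s2)
```

Since Λ is a probability density in t, each component integrates to 1, so D1 and D2 directly represent the amplitudes of the two commands.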
|
255
|
Bergmann T, Hadrys H, Breves G, Schierwater B. Character-based DNA barcoding: a superior tool for species classification. Berliner und Münchener Tierärztliche Wochenschrift 2009; 122:446-450. [PMID: 19999380]
Abstract
In zoonosis research, only correctly assigned host-agent-vector associations can lead to success. If most biological species on Earth, from agents to hosts and from prokaryotes to vertebrates, are still undetected, the development of a reliable and universal diversity-detection tool becomes a conditio sine qua non. In this context, modern molecular-genetic techniques have, at breathtaking speed, become acknowledged tools for the classification of life forms at all taxonomic levels. While previous DNA-barcoding techniques were criticised for several reasons (Moritz and Cicero, 2004; Rubinoff et al., 2006a, b; Rubinoff, 2006; Rubinoff and Haines, 2006), a new approach, so-called CAOS-barcoding (Character Attribute Organisation System), avoids most of the weak points. Traditional DNA-barcoding approaches are distance-based, i.e. they use genetic distances and tree-construction algorithms for the classification of species or lineages. This enforces the definition of limit values and prohibits a discrete, clear assignment. In comparison, the new character-based barcoding (CAOS-barcoding; DeSalle et al., 2005; DeSalle, 2006; Rach et al., 2008) works with discrete single characters and character combinations, which permit a clear, unambiguous classification. In Hannover (Germany) we are optimising this system and developing a semiautomatic high-throughput procedure for the hosts, agents and vectors being studied within the Zoonosis Centre of the "Stiftung Tierärztliche Hochschule Hannover". Our primary research concentrates on insects, the most successful and species-rich animal group on Earth (every fourth animal is a beetle). One subgroup, the winged insects (Pterygota), comprises the outstanding majority of all zoonosis-relevant animal vectors.
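Character-based barcoding rests on diagnostic character states: alignment positions where one taxon's set of states never overlaps another's. The toy sketch below illustrates only that core idea, much simplified relative to the CAOS software, and the sequences in the test are made up:

```python
def diagnostic_positions(group_a, group_b):
    """Alignment positions where the character states of two groups are disjoint."""
    length = len(group_a[0])
    assert all(len(s) == length for s in group_a + group_b)
    positions = []
    for i in range(length):
        states_a = {s[i] for s in group_a}
        states_b = {s[i] for s in group_b}
        if not (states_a & states_b):  # no shared state -> diagnostic character
            positions.append(i)
    return positions
```

Because assignment relies on presence or absence of discrete states rather than a distance threshold, no limit value has to be defined.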
|
256
|
Dupuy F, Casas J, Bagnères AG, Lazzari CR. OpenFluo: a free open-source software for optophysiological data analyses. J Neurosci Methods 2009; 183:195-201. [PMID: 19583983] [DOI: 10.1016/j.jneumeth.2009.06.031]
Abstract
Optophysiological imaging methods can be used to record the activity in vivo of groups of neurons from particular areas of the nervous system (e.g. the brain) or of cell cultures. Such methods are used, for example, to study the spatio-temporal coding and processing of sensory information. However, the data generated by optophysiological methods must be processed carefully if relevant results are to be obtained. The raw fluorescence data must be digitally filtered and analyzed appropriately to obtain activity maps and fluorescence time courses for single spots. We used a Matlab environment to implement the necessary procedures in a user-friendly manner. We developed OpenFluo, a program for people inexperienced in optophysiological methods and for advanced users wishing to perform simple, rapid data analyses without the need for complex, time-consuming programming procedures. This program will be made available as stand-alone software and as an open-source Matlab tool. It will therefore be possible for experienced users to integrate their own routines. We validated this software by assessing its ability to process both artificial recordings and real biological data corresponding to recordings of the honeybee brain.
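A core step in such optophysiological pipelines is converting raw fluorescence to relative change, ΔF/F0, against a pre-stimulus baseline. OpenFluo itself is a Matlab tool; purely as an illustration of that computation (with an arbitrary choice of baseline frames), the idea is:

```python
def delta_f_over_f(trace, baseline_frames):
    """Relative fluorescence change against the mean of the pre-stimulus frames."""
    f0 = sum(trace[:baseline_frames]) / baseline_frames
    return [(f - f0) / f0 for f in trace]
```

Normalizing by F0 makes responses comparable across spots with different resting fluorescence, which is what allows activity maps to be built from the per-spot time courses.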
|
257
|
Barnett LAK, Ebert DA, Cailliet GM. Assessment of the dorsal fin spine for chimaeroid (Holocephali: Chimaeriformes) age estimation. Journal of Fish Biology 2009; 75:1258-1270. [PMID: 20738613] [DOI: 10.1111/j.1095-8649.2009.02362.x]
Abstract
Previous attempts to age chimaeroids have not rigorously tested assumptions of dorsal fin spine growth dynamics. Here, novel imaging and data-analysis techniques revealed that the dorsal fin spine of the spotted ratfish Hydrolagus colliei is an unreliable structure for age estimation. Variation among individuals in the relationship between spine width and distance from the spine tip indicated that the technique of transverse sectioning may impart imprecision and bias to age estimates. The number of growth-band pairs observed by light microscopy in the inner dentine layer was not a good predictor of body size. Mineral density gradients, indicative of growth zones, were absent in the dorsal fin spine of H. colliei, decreasing the likelihood that the bands observed by light microscopy represent a record of growth with consistent periodicity. These results indicate that the hypothesis of aseasonal growth remains plausible and it should not be assumed that chimaeroid age is quantifiable by standard techniques.
|
258
|
D'Amato C, D'Andrea R, Bronnert J, Cook J, Foley M, Garret G, Gladden G, Hope K, Imel M, Johnson LM, Jorwic T, Jurek J, Karaman-Meacham C, Kelly CD, Kohn D, Lewis L, Marlow J, Millas B, Novak N, Parker E, Paterno M, Perron K, Peterson K, Peterson K, Piselli C, Ruhnau-Gee B, Safian S, Sanvik G, Scichilone R, Seger R, Viola AF, Von Saleski B, Yoder MJ. Transitioning ICD-10-CM/PCS data management processes. Journal of AHIMA 2009; 80:66-70. [PMID: 19839441]
|
259
|
D'Amato C, D'Andrea R, Bronnert J, Cook J, Foley M, Garret G, Gladden G, Hope K, Imel M, Johnson LM, Jorwic T, Jurek J, Karaman-Meacham C, Kelly CD, Kohn D, Lewis L, Marlow J, Millas B, Novak N, Parker E, Paterno M, Perron K, Peterson K, Peterson K, Piselli C, Ruhnau-Gee B, Safian S, Sanvik G, Scichilone R, Seger R, Viola AF, Von Saleski B, Yoder MJ. Planning organizational transition to ICD-10-CM/PCS. Journal of AHIMA 2009; 80:72-77. [PMID: 19839442]
|
260
|
Khoo BCC, Wilson SG, Worth GK, Perks U, Qweitin E, Spector TD, Price RI. A comparative study between corresponding structural geometric variables using 2 commonly implemented hip structural analysis algorithms applied to dual-energy X-ray absorptiometry images. J Clin Densitom 2009; 12:461-7. [PMID: 19880052] [DOI: 10.1016/j.jocd.2009.08.004]
Abstract
Hip structural analysis (HSA) has been developed over 20 yr, applied extensively in research, and has demonstrated useful outcomes associating bone structural geometry with bone fragility (research-HSA or r-HSA). In 2007, Hologic Inc. (Bedford, MA) incorporated HSA with some modifications as an option for Hologic dual-energy X-ray absorptiometry (DXA) scanners (clinical HSA or c-HSA). This brought HSA from the research environment into the clinical environment. This article reports a comparison of r-HSA and c-HSA implementations using DXA scans from a group of 191 females. Bland-Altman plots at the narrow-neck (NN) HSA region indicated higher r-HSA areal bone mineral density (mean difference: 0.27 g/cm²; 21.7% [of mean]); cross-sectional area (0.63 cm²; 18.7%); cross-sectional moment of inertia (0.26 cm⁴; 11.1%), and section modulus (0.22 cm³; 14.5%) compared with c-HSA. The converse was observed for NN subperiosteal width (-0.09 cm; 3.1%). High linear correlations (r² > 0.81) were found between r-HSA and c-HSA NN structural geometric outcomes, with the exception of neck shaft angle (r² > 0.47). As differences were significant (p < 0.001), slopes and intercepts are provided to enable linear transformations from r-HSA to corresponding c-HSA structural geometric data.
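The Bland-Altman comparison used above reduces to the mean of the paired differences plus 1.96 SD limits of agreement. A generic sketch of that calculation (not tied to the HSA data, which are only summarized in the abstract):

```python
from statistics import mean, stdev

def bland_altman(a, b):
    """Mean difference and 95% limits of agreement for paired measurements."""
    diffs = [x - y for x, y in zip(a, b)]
    md = mean(diffs)          # systematic bias between the two methods
    sd = stdev(diffs)         # spread of the disagreement
    return md, md - 1.96 * sd, md + 1.96 * sd
```

Unlike a correlation coefficient, this directly exposes systematic bias, which is why the r-HSA vs. c-HSA offsets are reported as mean differences rather than r² alone.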
|
261
|
Choi YJ, Lee BJ, Lim HC, Chung YS. Cross-calibration of iDXA and Prodigy on spine and femur scans in Korean adults. J Clin Densitom 2009; 12:450-5. [PMID: 19815436] [DOI: 10.1016/j.jocd.2009.08.001]
Abstract
In this study, the authors compared bone mineral density (BMD) determined using GE Lunar iDXA and Prodigy and derived cross-calibration equations for the 2 devices in Korean adults. One hundred subjects (66 women and 34 men) participated in this study. Bone mineral density of spine and femur was measured by iDXA and Prodigy dual-energy X-ray absorptiometry (GE Lunar, Madison, WI). Subjects were divided into 3 groups. The first group (30 subjects) was scanned twice using Prodigy for precision testing and then once using iDXA. The second group (30 subjects) was scanned twice using iDXA and then once using Prodigy. Cross-calibration equations were derived using these results. The derived equations were tested in the third group (40 subjects). Predicted values from calculations based on Prodigy findings were compared with measured iDXA data. A significant difference was found between the BMD determined using the 2 devices (p < 0.001). However, linear regression analysis showed a high level of agreement between the two (r² from 0.984 to 0.994, p < 0.001). Bland-Altman analysis revealed no significant correlations between Prodigy and iDXA. Cross-calibration equations decreased systematic errors between Prodigy and iDXA by 0.4% at the spine, 0.8% at the femoral neck, and 0.1% at the total femur. A high level of agreement was found between Prodigy and iDXA in Korean adults. Cross-calibration equations proved reliable based on comparisons of measured and calculated BMD values.
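A cross-calibration equation of this kind is an ordinary least-squares fit, iDXA BMD ≈ slope × Prodigy BMD + intercept. A minimal sketch of deriving such an equation (the numbers in the test are synthetic, not the study's data):

```python
def fit_line(x, y):
    """Least-squares slope and intercept for y ~ slope * x + intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)            # variance term
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))  # covariance term
    slope = sxy / sxx
    return slope, my - slope * mx
```

Applying the fitted slope and intercept to values measured on one device predicts what the other device would report, which is how the third group's measured and calculated BMD values were compared.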
|
262
|
Song H, Ladenson J, Turk J. Algorithms for automatic processing of data from mass spectrometric analyses of lipids. J Chromatogr B Analyt Technol Biomed Life Sci 2009; 877:2847-54. [PMID: 19131280] [PMCID: PMC2723176] [DOI: 10.1016/j.jchromb.2008.12.043]
Abstract
Lipidomics comprises large-scale studies of the structures, quantities, and functions of lipid molecular species. Recently developed mass spectrometric methods for lipid analyses, especially electrospray ionization (ESI) tandem mass spectrometry, permit identification and quantitation of an enormous variety of distinct lipid molecular species from small amounts of biological samples but generate a huge amount of experimental data within a brief interval. Processing such data sets so that comprehensible information is derived from them requires bioinformatics tools, and algorithms developed for proteomics and genomics have provided some strategies that can be directly adapted to lipidomics. The structural diversity and complexity of lipids, however, also requires the development and application of new algorithms and software tools that are specifically directed at processing data from lipid analyses. Several such tools are reviewed here, including LipidQA. This program employs searches of a fragment ion database constructed from acquired and theoretical spectra of a wide variety of lipid molecular species, and raw mass spectrometric data can be processed by the program to achieve identification and quantification of many distinct lipids in mixtures. Other approaches that are reviewed here include LIMSA (Lipid Mass Spectrum Analysis), SECD (Spectrum Extraction from Chromatographic Data), MPIS (Multiple Precursor Ion Scanning), FIDS (Fragment Ion Database Searching), LipidInspector, Lipid Profiler, FAAT (Fatty Acid Analysis Tool), and LIPID Arrays. Internet resources for lipid analyses are also summarized.
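The database-search step these tools share boils down to matching observed fragment m/z values against theoretical ones within a mass tolerance. A toy sketch of that matching (the entry names and m/z values in the test are illustrative inventions, not taken from LipidQA or any real database):

```python
def match_mz(observed, database, ppm=10.0):
    """Return database entries whose theoretical m/z lies within ppm of observed."""
    hits = []
    for name, theoretical in database.items():
        # Tolerance scales with mass: ppm parts-per-million of the theoretical value.
        if abs(observed - theoretical) <= theoretical * ppm * 1e-6:
            hits.append(name)
    return hits
```

Real tools layer isotope handling, adduct types, and intensity-based quantification on top of this, but the ppm-window lookup is the common core.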
|
263
|
Abstract
Current spike sorting methods focus on clustering neurons' characteristic spike waveforms. The resulting spike-sorted data are typically used to estimate how covariates of interest modulate the firing rates of neurons. However, when these covariates do modulate the firing rates, they provide information about spikes' identities, which thus far have been ignored for the purpose of spike sorting. This letter describes a novel approach to spike sorting, which incorporates both waveform information and tuning information obtained from the modulation of firing rates. Because it efficiently uses all the available information, this spike sorter yields lower spike misclassification rates than traditional automatic spike sorters. This theoretical result is verified empirically on several examples. The proposed method does not require additional assumptions; only its implementation is different. It essentially consists of performing spike sorting and tuning estimation simultaneously rather than sequentially, as is currently done. We used an expectation-maximization maximum likelihood algorithm to implement the new spike sorter. We present the general form of this algorithm and provide a detailed implementable version under the assumptions that neurons are independent and spike according to Poisson processes. Finally, we uncover a systematic flaw of spike sorting based on waveform information only.
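The letter's key point, that tuning modulates spike identity, can be caricatured with a MAP assignment that adds a waveform log-likelihood to a firing-rate term: for superposed Poisson neurons, the prior probability that a given spike came from neuron i is proportional to its current rate λi. The sketch below (one Gaussian waveform feature, made-up parameters) is far simpler than the paper's EM algorithm, but shows the mechanism:

```python
import math

def assign_spike(w, means, rates, sigma=1.0):
    """Index of the neuron maximizing log N(w; mean_i, sigma) + log rate_i."""
    scores = [
        -((w - m) ** 2) / (2 * sigma ** 2) + math.log(r)  # waveform + tuning terms
        for m, r in zip(means, rates)
    ]
    return scores.index(max(scores))
```

With a waveform feature w = 0.4 between neuron means 0 and 1, waveform alone favors neuron 0; a tenfold rate advantage for neuron 1 at the current stimulus flips the assignment, which is exactly the information a waveform-only sorter discards.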
|
264
|
Nakagawa A, Yamashita E, Yoshimura M, Suzuki M. [Development of data processing technique for diffraction data collection from micro-crystals]. Tanpakushitsu Kakusan Koso (Protein, Nucleic Acid, Enzyme) 2009; 54:1496-1498. [PMID: 21089578]
|
265
|
Hattie M. Bar coding: labs go 2D. MLO: Medical Laboratory Observer 2009; 41:32-33. [PMID: 19877399]
|
266
|
Hay SI, Tatem AJ, Graham AJ, Goetz SJ, Rogers DJ. Global environmental data for mapping infectious disease distribution. Advances in Parasitology 2009; 62:37-77. [PMID: 16647967] [PMCID: PMC3154638] [DOI: 10.1016/s0065-308x(05)62002-7]
Abstract
This contribution documents the satellite data archives, data processing methods and temporal Fourier analysis (TFA) techniques used to create the remotely sensed datasets on the DVD distributed with this volume. The aim is to provide a detailed reference guide to the genesis of the data, rather than a standard review. These remotely sensed data cover the entire globe at either 1 x 1 or 8 x 8 km spatial resolution. We briefly evaluate the relationships between the 1 x 1 and 8 x 8 km global TFA products to explore their inter-compatibility. The 8 x 8 km TFA surfaces are used in the mapping procedures detailed in the subsequent disease mapping reviews, since the 1 x 1 km products have been validated less widely. Details are also provided on additional, current and planned sensors that should be able to provide continuity with these environmental variable surfaces, as well as other sources of global data that may be used for mapping infectious disease.
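Temporal Fourier analysis of this kind summarizes a seasonal environmental time series by the amplitude and phase of its harmonics. A sketch of extracting one harmonic from a 12-sample (monthly) series by direct projection onto sine and cosine (synthetic data, not the DVD products described above):

```python
import math

def harmonic(series, cycles=1):
    """Amplitude and phase of the harmonic with `cycles` per series length."""
    n = len(series)
    a = (2 / n) * sum(v * math.cos(2 * math.pi * cycles * t / n)
                      for t, v in enumerate(series))
    b = (2 / n) * sum(v * math.sin(2 * math.pi * cycles * t / n)
                      for t, v in enumerate(series))
    return math.hypot(a, b), math.atan2(b, a)
```

Reducing each pixel's multi-year series to a few such amplitude/phase pairs is what makes globally gridded TFA surfaces compact enough to use as covariates in disease mapping.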
|
267
|
Jiang ST. How information technologies have improved both productivity and quality of health care. J Med Eng Technol 2009; 29:38-41. [PMID: 15764381] [DOI: 10.1080/03091900412331271130]
|
268
|
Song X, Wyrwicz AM. Unsupervised spatiotemporal fMRI data analysis using support vector machines. Neuroimage 2009; 47:204-12. [PMID: 19344772] [PMCID: PMC2807732] [DOI: 10.1016/j.neuroimage.2009.03.054]
Abstract
In this work we present a new support vector machine (SVM)-based method for fMRI data analysis. SVM has been shown to be a powerful, efficient data-driven tool in pattern recognition, and has been applied to the supervised classification of brain cognitive states in fMRI experiments. We examine the unsupervised mapping of activated brain regions using SVM. Specifically, the mapping process is formulated as an outlier detection problem of one-class SVM (OCSVM) that provides initial mapping results. These results are further refined by applying prototype selection and SVM reclassification. Multiple spatial and temporal features are extracted and selected to facilitate SVM learning. The proposed method was compared with correlation analysis (CA), t-test (TT), and spatial independent component analysis (SICA) methods using synthetic and experimental data. Our results show that the proposed method can provide more accurate and robust activation mapping than CA, TT and SICA, and is computationally more efficient than SICA. Besides its applicability to typical fMRI experiments, the proposed method is also a powerful tool in fMRI studies where a reliable quantification of activated brain regions is required.
|
269
|
Bellamy N, Wilson C, Hendrikz J, Patel B, Dennison S. Electronic data capture (EDC) using cellular technology: implications for clinical trials and practice, and preliminary experience with the m-Womac Index in hip and knee OA patients. Inflammopharmacology 2009; 17:93-9. [PMID: 19139830] [DOI: 10.1007/s10787-008-8045-4]
Abstract
AIM The capture, analysis and utilisation of health status information are attended by logistic considerations and interpretation challenges. We report a preliminary evaluation of cellular technology in capturing WOMAC NRS 3.1 Index data. METHODS A Java midlet for delivering the WOMAC NRS 3.1 Index on Nokia 6300, Motorola V3 and Samsung A711 mobile phones was developed by Exco InTouch. Following task orientation, patients completed the paper-based WOMAC (p-WOMAC) questionnaire and then the three mobile phone-based WOMAC (m-WOMAC) applications, in random order. RESULTS All 12 patients (age range = 55-82 years) successfully completed the m-WOMAC Index on each of the three phones, and all phones were found acceptable by patients. With respect to the m-WOMAC mean overall rank score, no significant difference was found between the 3 phones (Friedman's chi square (2 df) = 2.2, p = 0.34); however, the Motorola V3 was favoured with the best mean rank. The Pearson correlation between the average p-WOMAC and average m-WOMAC scores was 0.996. CONCLUSIONS Patient-reported ratings indicated that the m-WOMAC application performed well on all three phones. EDC provides unique opportunities for using quantitative measurement in both clinical practice and research.
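Friedman's chi-square, used above to compare the three phones, ranks each subject's scores across the conditions and tests whether the rank sums differ. A sketch without tie correction (the ranks in the test are synthetic, not the trial data):

```python
def friedman_chi2(data):
    """Friedman statistic for data[subject][treatment] scores (no tie handling)."""
    n, k = len(data), len(data[0])
    rank_sums = [0.0] * k
    for row in data:
        # Rank this subject's k scores from lowest (rank 1) to highest (rank k).
        order = sorted(range(k), key=lambda j: row[j])
        for rank, j in enumerate(order, start=1):
            rank_sums[j] += rank
    return 12 / (n * k * (k + 1)) * sum(r * r for r in rank_sums) - 3 * n * (k + 1)
```

Ranking within each subject removes between-subject level differences, which is why the test suits repeated measures such as one patient completing the same index on three phones.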
|
270
|
Yadav Y, Garey KW, Dao-Tran TK, Kaila V, Gbito KYE, DuPont HL. Automated system to identify Clostridium difficile infection among hospitalised patients. J Hosp Infect 2009; 72:337-41. [PMID: 19596490] [DOI: 10.1016/j.jhin.2009.04.017]
Abstract
The purpose of this study was to assess whether data on stool frequency collected electronically could identify patients at high risk of Clostridium difficile infection (CDI). All patients with reports of diarrhoea were assessed prospectively for number of stools per day and number of diarrhoea days. C. difficile testing was requested independently of the study investigators. Number of days with diarrhoea and maximum number of unformed stools were assessed as CDI predictors. A total of 605 patients were identified with active diarrhoea, of whom 64 (10.6%) were diagnosed with CDI. In univariate analysis, the maximum number of stools and the number of diarrhoea days were associated with increased risk of CDI. Compared to patients with three diarrhoea stools per day (CDI incidence: 6.3%), CDI incidence increased to 13.4% in patients with four or more diarrhoea stools per day [odds ratio (OR): 2.3; 95% confidence interval (CI): 1.3-4.2; P=0.0054]. Compared to patients with one day of diarrhoea (CDI incidence: 6.3%), CDI incidence increased to 17.4% in patients with two diarrhoea days (OR: 3.1; 95% CI: 1.7-5.6) and to 27.1% in patients with three or more diarrhoea days (OR: 5.5; 95% CI: 2.6-11.7). These results were validated using logistic regression, with number of days with diarrhoea identified as the most important predictor. Using an electronic data capture technique, the number of days of diarrhoea and the maximum number of diarrhoea stools in a 24 h period identified a patient population at high risk of CDI.
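The reported odds ratios follow directly from the incidence figures: OR = [p1/(1−p1)] / [p0/(1−p0)]. Recomputing them from the abstract's percentages reproduces the published values:

```python
def odds_ratio(p1, p0):
    """Odds ratio comparing incidence p1 against reference incidence p0."""
    return (p1 / (1 - p1)) / (p0 / (1 - p0))

# Reference: 6.3% CDI incidence (three stools/day, or one diarrhoea day).
print(round(odds_ratio(0.134, 0.063), 1))  # four or more stools/day -> 2.3
print(round(odds_ratio(0.174, 0.063), 1))  # two diarrhoea days      -> 3.1
print(round(odds_ratio(0.271, 0.063), 1))  # three or more days      -> 5.5
```

Note the OR overstates relative risk somewhat at these incidences (e.g. 13.4%/6.3% ≈ 2.1 as a risk ratio vs. OR 2.3), a standard property of odds ratios.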
|
271
|
Mann K, Röschke J. Influence of age on the interrelation between EEG frequency bands during NREM and REM sleep. Int J Neurosci 2009; 114:559-71. [PMID: 15195358] [DOI: 10.1080/00207450490422704]
Abstract
The age-dependence of temporal interrelations between distinct frequency bands of sleep EEG was investigated in a group of 59 healthy young and middle-aged males via cross correlation analysis. Based on global evaluation throughout the entire night, a highly significant decline of the delta/theta correlation with increasing age was found. A separate analysis for non-rapid eye movement (NREM) and rapid eye movement (REM) sleep revealed different changes with aging. During NREM sleep, the correlation between the delta and theta frequency bands decreased with increasing age. In contrast, during REM sleep, a stronger correlation became obvious between the theta, alpha, and beta frequency bands with increasing age, whereas the lower frequency components were not affected. These findings indicate that aging processes seem to interact with sleep EEG rhythms in a complex manner, where most conspicuous is a disintegration of the activities in the lower frequency range, both concerning the successive sleep cycles across the night and the micro-structure of NREM sleep.
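Cross-correlation analysis of the kind used here measures how strongly two band-power time series covary when one is shifted by a given lag. A minimal sketch for non-negative lags (the series in the test are synthetic, not sleep EEG):

```python
from statistics import mean, stdev

def xcorr(x, y, lag):
    """Pearson correlation between x[t] and y[t + lag], for lag >= 0."""
    xs = x[:len(x) - lag]
    ys = y[lag:]
    mx, my, sx, sy = mean(xs), mean(ys), stdev(xs), stdev(ys)
    n = len(xs)
    return sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / ((n - 1) * sx * sy)
```

Scanning the lag and recording where the correlation peaks is one way to quantify the temporal coupling between, say, delta and theta activity across the night.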
|
272
|
Al-Hajj Mohamad R, Likforman-Sulem L, Mokbel C. Combining slanted-frame classifiers for improved HMM-based Arabic handwriting recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 2009; 31:1165-1177. [PMID: 19443916] [DOI: 10.1109/tpami.2008.136]
Abstract
The problem addressed in this study is the offline recognition of handwritten Arabic city names. The names are assumed to belong to a fixed lexicon of about 1,000 entries. A state-of-the-art classical right-left hidden Markov model (HMM)-based recognizer (reference system) using the sliding-window approach is developed. The feature set includes both baseline-independent and baseline-dependent features. Analysis of the errors made by the recognizer shows that the inclination, overlap, and shifted positions of diacritical marks are major sources of error. In this paper, we propose an approach to cope with these problems. It relies on the combination of three homogeneous HMM-based classifiers. All classifiers have the same topology as the reference system and differ only in the orientation of the sliding window. We compare three combination schemes for these classifiers at the decision level. Our reported results on the benchmark IFN/ENIT database of Arabic Tunisian city names give a recognition accuracy higher than 90 percent and demonstrate the superiority of the neural-network-based combination. Our results also show that the combination of classifiers performs better than a single classifier dealing with slant-corrected images and that the approach is robust over a wide range of orientation angles.
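Decision-level combination of homogeneous classifiers can be as simple as the sum rule: add each classifier's scores per lexicon entry and take the argmax. The paper's best-performing combiner is a neural network; the sum rule below only illustrates the baseline idea, with made-up scores:

```python
def sum_rule(score_lists):
    """Combine classifiers by summing per-class scores; return best class index."""
    combined = [sum(scores) for scores in zip(*score_lists)]
    return combined.index(max(combined))
```

A class that no single classifier ranks first can still win once evidence from all three window orientations is pooled, which is the point of combining slanted-frame classifiers.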
|
273
|
Bennardello F, Fidone C, Cabibbo S, Calabrese S, Garozzo G, Cassarino G, Antolino A, Tavolino G, Zisa N, Falla C, Drago G, Di Stefano G, Bonomo P. Use of an identification system based on biometric data for patients requiring transfusions guarantees transfusion safety and traceability. Blood Transfusion 2009; 7:193-203. [PMID: 19657483] [PMCID: PMC2719271] [DOI: 10.2450/2009.0067-08]
Abstract
BACKGROUND One of the most serious risks of blood transfusions is an error in ABO blood group compatibility, which can cause a haemolytic transfusion reaction and, in the most severe cases, the death of the patient. The frequency and type of errors observed suggest that these are inevitable, in that mistakes are inherent to human nature, unless significant changes, including the use of computerised instruments, are made to procedures. METHODS In order to identify patients who are candidates for the transfusion of blood components and to guarantee the traceability of the transfusion, the Securblood system (BBS srl) was introduced. This system records the various stages of the transfusion process, the health care workers involved and any immediate transfusion reactions. The patients and staff are identified by fingerprinting or a bar code. The system was implemented within Ragusa hospital in 16 operative units (ordinary wards, day hospital, operating theatres). RESULTS In the period from August 2007 to July 2008, 7282 blood components were transfused within the hospital, of which 5606 (77%) using the Securblood system. Overall, 1777 patients were transfused. In this year of experience, no transfusion errors were recorded and each blood component was transfused to the right patient. We recorded 33 blocks of the terminals (involving 0.6% of the transfused blood components) which required the intervention of staff from the Service of Immunohaematology and Transfusion Medicine (SIMT). Most of the blocks were due to procedural errors. CONCLUSIONS The Securblood system guarantees complete traceability of the transfusion process outside the SIMT and eliminates the possibility of mistaken identification of patients or blood components. The use of fingerprinting to identify health care staff (nurses and doctors) and patients obliges the staff to carry out the identification procedures directly in the presence of the patient and guarantees the presence of the doctor at the start of the transfusion.
Collapse
|
274
|
Cao H, Govindaraju V. Preprocessing of low-quality handwritten documents using Markov random fields. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2009; 31:1184-1194. [PMID: 19443918 DOI: 10.1109/tpami.2008.126] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/27/2023]
Abstract
This paper presents a statistical approach to the preprocessing of degraded handwritten forms including the steps of binarization and form line removal. The degraded image is modeled by a Markov Random Field (MRF) where the hidden-layer prior probability is learned from a training set of high-quality binarized images and the observation probability density is learned on-the-fly from the gray-level histogram of the input image. We have modified the MRF model to drop the preprinted ruling lines from the image. We use the patch-based topology of the MRF and Belief Propagation (BP) for efficiency in processing. To further improve the processing speed, we prune unlikely solutions from the search space while solving the MRF. Experimental results show higher accuracy on two data sets of degraded handwritten images than previously used methods.
Collapse
|
275
|
Guru DS, Prakash HN. Online signature verification and recognition: an approach based on symbolic representation. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2009; 31:1059-1073. [PMID: 19372610 DOI: 10.1109/tpami.2008.302] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/27/2023]
Abstract
In this paper, we propose a new method of representing on-line signatures by interval valued symbolic features. Global features of on-line signatures are used to form interval-valued feature vectors. Methods for signature verification and recognition based on the symbolic representation are also proposed. We exploit the notions of writer-dependent threshold and introduce the concept of feature-dependent threshold to achieve a significant reduction in equal error rate. Several experiments are conducted to demonstrate the ability of the proposed scheme in discriminating the genuine signatures from the forgeries. We investigate the feasibility of the proposed representation scheme for signature verification and also signature recognition using all 16500 signatures from 330 individuals of the MCYT bimodal biometric database. Further, extensive experiments are conducted to evaluate the performance of the proposed methods by projecting features onto Eigenspace and Fisherspace. Unlike other existing signature verification methods, the proposed method is simple and efficient. The results of the experiments reveal that the proposed scheme outperforms several other existing verification methods, including the state-of-the-art method for signature verification.
Collapse
|
276
|
Liu JX, Chen YS, Chen LF. Accurate and robust extraction of brain regions using a deformable model based on radial basis functions. J Neurosci Methods 2009; 183:255-66. [PMID: 19467263 DOI: 10.1016/j.jneumeth.2009.05.011] [Citation(s) in RCA: 22] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/03/2009] [Revised: 05/09/2009] [Accepted: 05/14/2009] [Indexed: 11/18/2022]
Abstract
Brain extraction from head magnetic resonance (MR) images is a classification problem of segmenting image volumes into brain and non-brain regions. It is a difficult task due to the convoluted brain surface and the inapparent brain/non-brain boundaries in images. This paper presents an automated, robust, and accurate brain extraction method which utilizes a new implicit deformable model to well represent brain contours and to segment brain regions from MR images. This model is described by a set of Wendland's radial basis functions (RBFs) and has the advantages of compact support property and low computational complexity. Driven by the internal force for imposing the smoothness constraint and the external force for considering the intensity contrast across boundaries, the deformable model of a brain contour can efficiently evolve from its initial state toward its target by iteratively updating the RBF locations. In the proposed method, brain contours are separately determined on 2D coronal and sagittal slices. The results from these two views are generally complementary and are thus integrated to obtain a complete 3D brain volume. The proposed method was compared to four existing methods, Brain Surface Extractor, Brain Extraction Tool, Hybrid Watershed Algorithm, and Model-based Level Set, by using two sets of MR images as well as manual segmentation results obtained from the Internet Brain Segmentation Repository. Our experimental results demonstrated that the proposed approach outperformed these four methods when jointly considering extraction accuracy and robustness.
Collapse
|
277
|
Schroll H. [Data collection perspectives from patient care in general practice]. Ugeskr Laeger 2009; 171:1681-1684. [PMID: 19454209] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/27/2023]
Abstract
GPs in Denmark have a unique civil registration system with personal ID-numbers, a patient list system and a gatekeeper function. A piece of software (Data Capture) has been developed to automatically collect and send prescriptions, lab tests, expense items and diagnosis information from the physician's patient files to the Danish General Practice Database (DAMD). Furthermore, project-related information can be captured by a pop up screen. Data about the GPs' own quality in the field of patient care are sent back to the GP. Many research projects are currently being initiated on the basis of DAMD data.
Collapse
|
278
|
Yao H, Song JY, Ma XY, Liu C, Li Y, Xu HX, Han JP, Duan LS, Chen SL. Identification of Dendrobium species by a candidate DNA barcode sequence: the chloroplast psbA-trnH intergenic region. PLANTA MEDICA 2009; 75:667-9. [PMID: 19235685 DOI: 10.1055/s-0029-1185385] [Citation(s) in RCA: 76] [Impact Index Per Article: 5.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/11/2023]
Abstract
DNA barcoding is a novel technology that uses a standard DNA sequence to facilitate species identification. Although a consensus has not been reached regarding which DNA sequences can be used as the best plant barcodes, the psbA-trnH spacer region has been tested extensively in recent years. In this study, we hypothesize that the psbA-trnH spacer regions are also effective barcodes for Dendrobium species. We have sequenced the chloroplast psbA-trnH intergenic spacers of 17 Dendrobium species to test this hypothesis. The sequences were found to be significantly different from those of other species, with percentages of variation ranging from 0.3 % to 2.3 % and an average of 1.2 %. In contrast, the intraspecific variation among the Dendrobium species studied ranged from 0 % to 0.1 %. The sequence difference between the psbA-trnH sequences of 17 Dendrobium species and one Bulbophyllum odoratissimum ranged from 2.0 % to 3.1 %, with an average of 2.5 %. Our results support the notion that the psbA-trnH intergenic spacer region could be used as a barcode to distinguish various Dendrobium species and to differentiate Dendrobium species from other adulterating species.
Collapse
|
279
|
Graves A, Liwicki M, Fernández S, Bertolami R, Bunke H, Schmidhuber J. A novel connectionist system for unconstrained handwriting recognition. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2009; 31:855-68. [PMID: 19299860 DOI: 10.1109/tpami.2008.137] [Citation(s) in RCA: 317] [Impact Index Per Article: 21.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/22/2023]
Abstract
Recognizing lines of unconstrained handwritten text is a challenging task. The difficulty of segmenting cursive or overlapping characters, combined with the need to exploit surrounding context, has led to low recognition rates for even the best current recognizers. Most recent progress in the field has been made either through improved preprocessing or through advances in language modeling. Relatively little work has been done on the basic recognition algorithms. Indeed, most systems rely on the same hidden Markov models that have been used for decades in speech and handwriting recognition, despite their well-known shortcomings. This paper proposes an alternative approach based on a novel type of recurrent neural network, specifically designed for sequence labeling tasks where the data is hard to segment and contains long-range bidirectional interdependencies. In experiments on two large unconstrained handwriting databases, our approach achieves word recognition accuracies of 79.7 percent on online data and 74.1 percent on offline data, significantly outperforming a state-of-the-art HMM-based system. In addition, we demonstrate the network's robustness to lexicon size, measure the individual influence of its hidden layers, and analyze its use of context. Last, we provide an in-depth discussion of the differences between the network and HMMs, suggesting reasons for the network's superior performance.
Collapse
|
280
|
Preim B, Oeltze S, Mlejnek M, Gröeller E, Hennemuth A, Behrens S. Survey of the visual exploration and analysis of perfusion data. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2009; 15:205-220. [PMID: 19147886 DOI: 10.1109/tvcg.2008.95] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/27/2023]
Abstract
Dynamic contrast-enhanced image data (perfusion data) are used to characterize regional tissue perfusion. Perfusion data consist of a sequence of images, acquired after a contrast agent bolus is applied. Perfusion data are used for diagnostic purposes in oncology, ischemic stroke assessment or myocardial ischemia. The diagnostic evaluation of perfusion data is challenging, since the data is complex and exhibits various artifacts, e.g., motion artifacts. We provide an overview of existing methods to analyze and visualize CT and MR perfusion data. The integrated visualization of several 2D parameter maps, the 3D visualization of parameter volumes and exploration techniques are discussed. An essential aspect in the diagnosis of perfusion data is the correlation between perfusion data and derived time-intensity curves as well as with other image data, in particular with high resolution morphologic image data. We discuss visualization support with respect to the three major application areas: ischemic stroke diagnosis, breast tumor diagnosis and the diagnosis of coronary heart disease.
Collapse
|
281
|
Bhattacharya U, Chaudhuri BB. Handwritten numeral databases of Indian scripts and multistage recognition of mixed numerals. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2009; 31:444-457. [PMID: 19147874 DOI: 10.1109/tpami.2008.88] [Citation(s) in RCA: 39] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/27/2023]
Abstract
This article primarily concerns the problem of isolated handwritten numeral recognition of major Indian scripts. The principal contributions presented here are (a) pioneering development of two databases for handwritten numerals of two most popular Indian scripts, (b) a multistage cascaded recognition scheme using wavelet based multiresolution representations and multilayer perceptron classifiers and (c) application of (b) for the recognition of mixed handwritten numerals of three Indian scripts Devanagari, Bangla and English. The present databases include respectively 22,556 and 23,392 handwritten isolated numeral samples of Devanagari and Bangla collected from real-life situations and these can be made available free of cost to researchers of other academic Institutions. In the proposed scheme, a numeral is subjected to three multilayer perceptron classifiers corresponding to three coarse-to-fine resolution levels in a cascaded manner. If rejection occurred even at the highest resolution, another multilayer perceptron is used as the final attempt to recognize the input numeral by combining the outputs of three classifiers of the previous stages. This scheme has been extended to the situation when the script of a document is not known a priori or the numerals written on a document belong to different scripts. Handwritten numerals in mixed scripts are frequently found in Indian postal mails and table-form documents.
Collapse
|
282
|
Guo YZ, Guo YM. [Change data capture implementation in HIS]. ZHONGGUO YI LIAO QI XIE ZA ZHI = CHINESE JOURNAL OF MEDICAL INSTRUMENTATION 2009; 33:193-197. [PMID: 19771894] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Subscribe] [Scholar Register] [Indexed: 05/28/2023]
Abstract
The paper introduces a Change Data Capture implementation based on a change-track table in a hospital information system. It improves the efficiency of change data capture and distribution in the data platform.
Collapse
|
283
|
Kim W, Kim C. A new approach for overlay text detection and extraction from complex video scene. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2009; 18:401-411. [PMID: 19095537 DOI: 10.1109/tip.2008.2008225] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/27/2023]
Abstract
Overlay text brings important semantic clues in video content analysis such as video information retrieval and summarization, since the content of the scene or the editor's intention can be well represented by using inserted text. Most of the previous approaches to extracting overlay text from videos are based on low-level features, such as edge, color, and texture information. However, existing methods experience difficulties in handling texts with various contrasts or inserted in a complex background. In this paper, we propose a novel framework to detect and extract the overlay text from the video scene. Based on our observation that there exist transient colors between inserted text and its adjacent background, a transition map is first generated. Then candidate regions are extracted by a reshaping method and the overlay text regions are determined based on the occurrence of overlay text in each candidate. The detected overlay text regions are localized accurately using the projection of overlay text pixels in the transition map and the text extraction is finally conducted. The proposed method is robust to different character size, position, contrast, and color. It is also language independent. Overlay text region update between frames is also employed to reduce the processing time. Experiments are performed on diverse videos to confirm the efficiency of the proposed method.
Collapse
|
284
|
Hanauer DA, Wentzell K, Laffel N, Laffel LM. Computerized Automated Reminder Diabetes System (CARDS): e-mail and SMS cell phone text messaging reminders to support diabetes management. Diabetes Technol Ther 2009; 11:99-106. [PMID: 19848576 PMCID: PMC4504120 DOI: 10.1089/dia.2008.0022] [Citation(s) in RCA: 224] [Impact Index Per Article: 14.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
Abstract
BACKGROUND Cell phone text messaging, via the Short Messaging Service (SMS), offers the promise of a highly portable, well-accepted, and inexpensive modality for engaging youth and young adults in the management of their diabetes. This pilot and feasibility study compared two-way SMS cell phone messaging with e-mail reminders that were directed at encouraging blood glucose (BG) monitoring. METHODS Forty insulin-treated adolescents and young adults with diabetes were randomized to receive electronic reminders to check their BG levels via cell phone text messaging or e-mail reminders for a 3-month pilot study. Electronic messages were automatically generated, and participant replies with BG results were processed by the locally developed Computerized Automated Reminder Diabetes System (CARDS). Participants set their schedule for reminders on the secure CARDS website where they could also enter and review BG data. RESULTS Of the 40 participants, 22 were randomized to receive cell phone text message reminders and 18 to receive e-mail reminders; 18 in the cell phone group and 11 in the e-mail group used the system. Compared to the e-mail group, users in the cell phone group received more reminders (180.4 vs. 106.6 per user) and responded with BG results significantly more often (30.0 vs. 6.9 per user, P = 0.04). During the first month cell phone users submitted twice as many BGs as e-mail users (27.2 vs. 13.8 per user); by month 3, usage waned. CONCLUSIONS Cell phone text messaging to promote BG monitoring is a viable and acceptable option in adolescents and young adults with diabetes. However, maintaining interest levels for prolonged intervals remains a challenge.
Collapse
|
285
|
Chen L, Yang X, Liu Y, Zeng D, Tang Y, Yan B, Lin X, Liu L, Xu H, Zhou D. Quantitative and trajectory analysis of movement trajectories in supplementary motor area seizures of frontal lobe epilepsy. Epilepsy Behav 2009; 14:344-53. [PMID: 19100340 DOI: 10.1016/j.yebeh.2008.11.007] [Citation(s) in RCA: 14] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/11/2008] [Revised: 10/19/2008] [Accepted: 11/14/2008] [Indexed: 11/17/2022]
Abstract
The objectives of this study were to quantitatively analyze the movement trajectories of four types of supplementary motor area (SMA) seizures (hyperkinetic, tonic posturing, fencing posture, tonic head turning), and to compare the movement trajectories of SMA seizures with those of temporal lobe seizures and psychogenic nonepileptic seizures. Ten video/EEG recordings of each type of seizure were obtained. Imaging data collected by video/EEG monitoring were transformed into a digital matrix with image processing software and then transformed into a movement trajectory curve with MATLAB 6.5 software. From these movement trajectories, measurements of amplitude, frequency, proximal/distal limb amplitude ratio, and shoulder/abdominal amplitude ratio were calculated. One-way ANOVA revealed statistically significant differences in average amplitude, as well as proximal/distal limb amplitude ratios, in SMA seizures when compared with those of temporal lobe seizures and psychogenic nonepileptic seizures. This study proved the feasibility of quantitative analysis of SMA seizures and suggests it should be further evaluated for its capability to distinguish different seizure semiologies for the diagnosis of epilepsy.
Collapse
|
286
|
Steinherz T, Doermann D, Rivlin E, Intrator N. Offline loop investigation for handwriting analysis. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2009; 31:193-209. [PMID: 19110488 DOI: 10.1109/tpami.2008.68] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/27/2023]
Abstract
Resolution of different types of loops in handwritten script presents a difficult task and is an important step in many classic word recognition systems, writer modeling, and signature verification. When processing a handwritten script, a great deal of ambiguity occurs when strokes overlap, merge, or intersect. This paper presents a novel loop modeling and contour-based handwriting analysis that improves loop investigation. We show excellent results on various loop resolution scenarios, including axial loop understanding and collapsed loop recovery. We demonstrate our approach for loop investigation on several realistic data sets of static binary images and compare with the ground truth of the genuine online signal.
Collapse
|
287
|
Arnell KM, Joanisse MF, Klein RM, Busseri MA, Tannock R. Decomposing the relation between Rapid Automatized Naming (RAN) and reading ability. ACTA ACUST UNITED AC 2009; 63:173-84. [PMID: 19739900 DOI: 10.1037/a0015721] [Citation(s) in RCA: 45] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|
288
|
Abstract
Genome-wide association studies have opened a new era in the study of the genetic basis of common, multifactorial diseases and traits. Before the introduction of this approach only a handful of common genetic variants showed consistent association for any phenotype. Using genome-wide association, scores of novel and unsuspected loci have been discovered and later replicated for many complex traits. The principle is to genotype a dense set of common genetic variants across the genomes of individuals with phenotypic differences and examine whether genotype is associated with phenotype. Because the last common human ancestor was relatively recent and recombination events are concentrated in focal hotspots, most common variation in the human genome can be surveyed using a few hundred thousand variants acting as proxies for ungenotyped variants. Here, we describe the different steps of genome-wide association studies and use a recent study as example.
Collapse
|
289
|
Stenzhorn H, Pacheco EJ, Nohama P, Schulz S. Automatic mapping of clinical documentation to SNOMED CT. Stud Health Technol Inform 2009; 150:228-232. [PMID: 19745302] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/28/2023]
Abstract
Clinical documentation needs to be fine-grained to truthfully represent the history, development, and treatment of a patient. But natural language, as the main information carrier, is characterized by many issues, like idiosyncratic terminology, spelling and grammar errors, and a lack of grammatical structure. Therefore coding systems, like ICD-10, have been introduced, but their use varies highly among physicians, and they are often used incompletely or incorrectly. The almost exponential growth of clinical data is yet another problem. We present a new methodology to process this data: Through combining several natural language processing methods we extract morphemes from clinical texts and map them onto concepts from SNOMED CT. We first performed a manual analysis of clinical texts received from a university hospital and evaluated the issues found in them. Based on this we implemented a prototypical system which incorporates both the OpenNLP and the MorphoSaurus natural language processing systems.
Collapse
|
290
|
Shklovskiĭ-Kordi NE, Zingerman BV. [Electronic case history]. KLINICHESKAIA MEDITSINA 2009; 87:70-73. [PMID: 19348308] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Subscribe] [Scholar Register] [Indexed: 05/27/2023]
|
291
|
Abstract
Gene expression profiling provides unprecedented opportunities to study patterns of gene expression regulation, for example, in diseases or developmental processes. Bioinformatics analysis plays an important part of processing the information embedded in large-scale expression profiling studies and for laying the foundation for biological interpretation. Over the past years, numerous tools have emerged for microarray data analysis. One of the most popular platforms is Bioconductor, an open source and open development software project for the analysis and comprehension of genomic data, based on the R programming language. In this chapter, we use Bioconductor analysis packages on a heart development dataset to demonstrate the workflow of microarray data analysis from annotation, normalization, expression index calculation, and diagnostic plots to pathway analysis, leading to a meaningful visualization and interpretation of the data.
Collapse
|
292
|
Orlando DA, Brady SM, Koch JD, Dinneny JR, Benfey PN. Manipulating large-scale Arabidopsis microarray expression data: identifying dominant expression patterns and biological process enrichment. Methods Mol Biol 2009; 553:57-77. [PMID: 19588101 DOI: 10.1007/978-1-60327-563-7_4] [Citation(s) in RCA: 40] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/16/2022]
Abstract
A series of large-scale Arabidopsis thaliana microarray expression experiments profiling genome-wide expression across different developmental stages, cell types, and environmental conditions have resulted in tremendous amounts of gene expression data. This gene expression is the output of complex transcriptional regulatory networks and provides a starting point for identifying the dominant transcriptional regulatory modules acting within the plant. Highly co-expressed groups of genes are likely to be regulated by similar transcription factors. Therefore, finding these co-expressed groups can reduce the dimensionality of complex expression data into a set of dominant transcriptional regulatory modules. Determining the biological significance of these patterns is an informatics challenge and has required the development of new methods. Using these new methods we can begin to understand the biological information contained within large-scale expression data sets.
Collapse
|
293
|
Agaev FF, Akhundova IM, Gasymov IA, Abuzarov RM, Alikhanova NF. [Topicality of an individual computer-aided database for the analysis of epidemiological indices in Azerbaijan]. TUBERKULEZ I BOLEZNI LEGKIKH 2009:24-28. [PMID: 19697852] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Subscribe] [Scholar Register] [Indexed: 05/28/2023]
Abstract
The prevalence of drug-resistant tuberculosis, and especially multidrug-resistant tuberculosis, arouses special alarm, and these forms of tuberculosis are widespread in the countries of the former Soviet Union. To study this problem in the republic, the authors analyzed the records obtained by the Research Institute of Pulmonary Diseases from all TB facilities in 2000-2007 and the data of a test for drug sensitivity in Mycobacterium tuberculosis in the cohort of new cases of tuberculosis in 2006-2007. Sixty-nine (100%) TB service facilities submitted the records. A total of 33 019 new cases of tuberculosis in 2000-2007 were analyzed. The results of a test for drug resistance in MBT in 503 new cases were included in the study and analyzed. The analysis suggests that the data obtained from consolidated areas are partly conventional and of inadequate validity. In each of the 11 zones, there are areas with great variations in morbidity rates. This shows that it is necessary to conduct targeted monitoring of the epidemic situation in the regions and not to consolidate neighboring administrative areas for convenience during an analysis; it is expedient to divide the areas into adequately small units. Such point monitoring requires individualized electronic systems that provide the input of personified information on each new case of tuberculosis. It is recommended that an individualized electronic system for monitoring the basic epidemiological parameters, including the prevalence of drug-resistant tuberculosis, be introduced, taking into account the demographic, social, and geographical features of administrative areas.
Collapse
|
294
|
Lüthi U. [No traps, greater security]. KRANKENPFLEGE. SOINS INFIRMIERS 2009; 102:10-61. [PMID: 19202745] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Subscribe] [Scholar Register] [Indexed: 05/27/2023]
|
295
|
Solaro P, Pierangeli E, Pizzoni C, Boffi P, Scalese G. From computerized tomography data processing to rapid manufacturing of custom-made prostheses for cranioplasty. Case report. J Neurosurg Sci 2008; 52:113-116. [PMID: 18981986] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/27/2023]
Abstract
Cranioplasty is the surgical repair of a structural or morphological deformity of the skull, involving the resection, remolding and displacement of the bones of the head. As it pertains to abnormal head shape, cranioplasty is an operative procedure aimed at filling a gap in the cranial theca or replacing bone removed as a result of either trauma or infection, by means of a biocompatible artificial bony substitute. In the present paper the authors report a case of custom-made cranioplasty for the reconstruction of a large bilateral skull defect, based on advanced computerized tomography data processing and rapid prototyping (stereolithography) techniques.
Collapse
|
296
|
Fiszman M, Demner-Fushman D, Kilicoglu H, Rindflesch TC. Automatic summarization of MEDLINE citations for evidence-based medical treatment: a topic-oriented evaluation. J Biomed Inform 2008; 42:801-13. [PMID: 19022398 DOI: 10.1016/j.jbi.2008.10.002] [Citation(s) in RCA: 48] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2008] [Revised: 09/30/2008] [Accepted: 10/15/2008] [Indexed: 11/18/2022]
Abstract
As the number of electronic biomedical textual resources increases, it becomes harder for physicians to find useful answers at the point of care. Information retrieval applications provide access to databases; however, little research has been done on using automatic summarization to help navigate the documents returned by these systems. After presenting a semantic abstraction automatic summarization system for MEDLINE citations, we concentrate on evaluating its ability to identify useful drug interventions for 53 diseases. The evaluation methodology uses existing sources of evidence-based medicine as surrogates for a physician-annotated reference standard. Mean average precision (MAP) and a clinical usefulness score developed for this study were computed as performance metrics. The automatic summarization system significantly outperformed the baseline in both metrics. The MAP gain was 0.17 (p<0.01) and the increase in the overall score of clinical usefulness was 0.39 (p<0.05).
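The mean average precision (MAP) metric used in this evaluation can be computed as follows; this is a minimal sketch of the standard IR definition, not code from the cited system, and the function names are illustrative.

```python
def average_precision(ranked, relevant):
    """Average precision for one query: mean of precision@k taken
    at each rank k where a relevant item appears."""
    hits, score = 0, 0.0
    for k, item in enumerate(ranked, start=1):
        if item in relevant:
            hits += 1
            score += hits / k
    return score / len(relevant) if relevant else 0.0

def mean_average_precision(runs):
    """MAP over (ranked_list, relevant_set) pairs, one per query
    (here, one per disease topic)."""
    return sum(average_precision(r, rel) for r, rel in runs) / len(runs)
```

For example, a ranking that places the two relevant items first and third scores (1/1 + 2/3)/2 ≈ 0.83 for that query.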
|
297
|
Ye JL, Deng Y, Chen YW. [Research on the measuring method for invasive blood pressure and its effectiveness evaluation method]. ZHONGGUO YI LIAO QI XIE ZA ZHI = CHINESE JOURNAL OF MEDICAL INSTRUMENTATION 2008; 32:455-458. [PMID: 19253585] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Subscribe] [Scholar Register] [Indexed: 05/27/2023]
Abstract
This article introduces a measuring method for invasive blood pressure based on SecWave technology, together with a method for evaluating its effectiveness based on an IBP database and a simulator. In addition, quantified indexes are defined for accuracy evaluation, including static pressure accuracy, accuracy of dynamic pressure pulse-wave recognition, pulse rate, and response time, providing an objective reference method for evaluating invasive blood pressure measurements.
|
298
|
Lu S, Li L, Tan CL. Document image retrieval through word shape coding. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2008; 30:1913-1918. [PMID: 18787240 DOI: 10.1109/tpami.2008.89] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/26/2023]
Abstract
This paper presents a document retrieval technique that is capable of searching document images without OCR (optical character recognition). The proposed technique retrieves document images by a new word shape coding scheme, which captures the document content through annotating each word image by a word shape code. In particular, we annotate word images by using a set of topological shape features including character ascenders/descenders, character holes, and character water reservoirs. With the annotated word shape codes, document images can be retrieved by either query keywords or a query document image. Experimental results show that the proposed document image retrieval technique is fast, efficient, and tolerant to various types of document degradation.
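The word shape coding idea can be illustrated with a toy sketch. In the paper the codes are extracted from word images (ascenders/descenders, holes, water reservoirs); here we approximate the same notion on plain text by mapping each letter to a coarse shape class, which is an assumption made purely for illustration (note, e.g., that "b" is both an ascender and a hole, so the class order below is a simplification).

```python
# Coarse per-letter shape classes (an approximation of image-derived features).
ASCENDERS = set("bdfhklt")
DESCENDERS = set("gjpqy")
HOLES = set("aeo")  # letters with an enclosed hole only

def shape_code(word):
    """Annotate a word with a shape code, one class symbol per letter."""
    out = []
    for ch in word.lower():
        if ch in ASCENDERS:
            out.append("A")
        elif ch in DESCENDERS:
            out.append("D")
        elif ch in HOLES:
            out.append("O")
        else:
            out.append("x")
    return "".join(out)

def retrieve(query, documents):
    """Return documents containing a word whose shape code matches the query's."""
    q = shape_code(query)
    return [d for d in documents if any(shape_code(w) == q for w in d.split())]
```

Because matching operates on the short shape codes rather than on recognized characters, lookup stays fast and tolerant to degradation that would break OCR, at the cost of occasional collisions between visually similar words.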
|
299
|
Zheng Y, Barbu A, Georgescu B, Scheuering M, Comaniciu D. Four-chamber heart modeling and automatic segmentation for 3-D cardiac CT volumes using marginal space learning and steerable features. IEEE TRANSACTIONS ON MEDICAL IMAGING 2008; 27:1668-1681. [PMID: 18955181 DOI: 10.1109/tmi.2008.2004421] [Citation(s) in RCA: 222] [Impact Index Per Article: 13.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/27/2023]
Abstract
We propose an automatic four-chamber heart segmentation system for the quantitative functional analysis of the heart from cardiac computed tomography (CT) volumes. Two topics are discussed: heart modeling and automatic model fitting to an unseen volume. Heart modeling is a nontrivial task since the heart is a complex nonrigid organ. The model must be anatomically accurate, allow manual editing, and provide sufficient information to guide automatic detection and segmentation. Unlike previous work, we explicitly represent important landmarks (such as the valves and the ventricular septum cusps) among the control points of the model. The control points can be detected reliably to guide the automatic model fitting process. Using this model, we develop an efficient and robust approach for automatic heart chamber segmentation in 3-D CT volumes. We formulate the segmentation as a two-step learning problem: anatomical structure localization and boundary delineation. In both steps, we exploit the recent advances in learning discriminative models. A novel algorithm, marginal space learning (MSL), is introduced to solve the 9-D similarity transformation search problem for localizing the heart chambers. After determining the pose of the heart chambers, we estimate the 3-D shape through learning-based boundary delineation. The proposed method has been extensively tested on the largest dataset (with 323 volumes from 137 patients) ever reported in the literature. To the best of our knowledge, our system is the fastest with a speed of 4.0 s per volume (on a dual-core 3.2-GHz processor) for the automatic segmentation of all four chambers.
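The marginal space search strategy can be sketched as follows. This is a toy illustration of the idea only, not the authors' implementation: parameters are estimated in nested subspaces (position, then position + orientation, then position + orientation + scale), pruning to the top-scoring candidates at each stage instead of scanning the full 9-D space. The `score_*` callables stand in for the learned discriminative classifiers and are assumptions of this sketch.

```python
def msl_search(volume, score_pos, score_orient, score_scale,
               positions, orientations, scales, keep=50):
    """Coarse-to-fine search over a factored transformation space."""
    # Stage 1: rank candidate positions, keep only the best few.
    top_pos = sorted(positions, key=lambda p: -score_pos(volume, p))[:keep]
    # Stage 2: augment the survivors with orientation hypotheses.
    po = [(p, o) for p in top_pos for o in orientations]
    top_po = sorted(po, key=lambda c: -score_orient(volume, *c))[:keep]
    # Stage 3: augment with scale and return the best full transform.
    pos_full = [(p, o, s) for (p, o) in top_po for s in scales]
    return max(pos_full, key=lambda c: score_scale(volume, *c))
```

Pruning after each marginal stage is what makes the 9-D similarity-transform search tractable: only `keep` candidates survive into each product space rather than the full Cartesian grid.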
|
300
|
Koikkalainen J, Tölli T, Lauerma K, Antila K, Mattila E, Lilja M, Lötjönen J. Methods of artificial enlargement of the training set for statistical shape models. IEEE TRANSACTIONS ON MEDICAL IMAGING 2008; 27:1643-1654. [PMID: 18955179 DOI: 10.1109/tmi.2008.929106] [Citation(s) in RCA: 24] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/27/2023]
Abstract
Due to the small size of training sets, statistical shape models often over-constrain the deformation in medical image segmentation. Hence, artificial enlargement of the training set has been proposed as a solution to this problem, increasing the flexibility of the models. In this paper, different methods for artificially enlarging a training set were evaluated. Furthermore, the objectives were to study the effects of the size of the training set, to estimate the optimal number of deformation modes, to study the effects of different error sources, and to compare different deformation methods. The study was performed on a cardiac shape model consisting of the ventricles, atria, and epicardium, built from magnetic resonance (MR) volume images of 25 subjects. Both shape modeling and image segmentation accuracies were studied. The objectives were reached by utilizing different training sets and datasets, and two deformation methods. The evaluation proved that artificial enlargement of the training set improves both the modeling and segmentation accuracy. All but one of the enlargement techniques gave statistically significantly (p < 0.05) better segmentation results than the standard method without enlargement. The two best enlargement techniques were the nonrigid movement technique and the technique that combines principal component analysis (PCA) and a finite element model (FEM). The optimal number of deformation modes was found to be near 100 in our application. The active shape model segmentation gave better accuracy than segmentation based on simulated annealing optimization of the model weights.
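The core construction can be sketched in a few lines of NumPy: build a PCA shape model from training shape vectors, optionally enlarging the training set first with perturbed copies. The Gaussian jitter used below is a deliberately simple stand-in for the paper's enlargement schemes (nonrigid movement, PCA+FEM) and is an assumption of this sketch, as are the function names.

```python
import numpy as np

rng = np.random.default_rng(0)

def enlarge(shapes, copies=5, sigma=0.5):
    """Append `copies` jittered variants of each row of the (n_shapes,
    n_points*dim) training matrix. A toy perturbation model."""
    extra = [s + rng.normal(0, sigma, s.shape) for s in shapes for _ in range(copies)]
    return np.vstack([shapes, extra])

def build_model(shapes, n_modes):
    """Return the mean shape and the first n_modes PCA deformation modes."""
    mean = shapes.mean(axis=0)
    # Rows of vt are the principal deformation directions.
    _, _, vt = np.linalg.svd(shapes - mean, full_matrices=False)
    return mean, vt[:n_modes]
```

With only the original shapes, the model has at most n_shapes − 1 nonzero modes; enlargement raises the effective rank and so relaxes the over-constrained deformation space, which is the effect the paper quantifies.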
|