51. Swamidass SJ, Calhoun BT, Bittker JA, Bodycombe NE, Clemons PA. Enhancing the rate of scaffold discovery with diversity-oriented prioritization. Bioinformatics 2011;27:2271-8. [PMID: 21685049] [DOI: 10.1093/bioinformatics/btr369]
Abstract
MOTIVATION: In high-throughput screens (HTS) of small molecules for activity in an in vitro assay, it is common to search for active scaffolds, each with at least one example successfully confirmed as an active. The number of active scaffolds better reflects the success of a screen than the number of active molecules, yet many existing algorithms for deciding which hits should be sent for confirmatory testing neglect this concern.
RESULTS: We derived a new extension of a recently proposed economic framework, diversity-oriented prioritization (DOP), which aims, by changing which hits are sent for confirmatory testing, to maximize the number of scaffolds with at least one confirmed active. In both retrospective and prospective experiments, DOP accurately predicted the number of scaffold discoveries in a batch of confirmatory experiments, improved the rate of scaffold discovery by 8-17%, and was surprisingly robust to the size of the confirmatory test batches. As an extension of our previously reported economic framework, DOP can be used to decide the optimal number of hits to send for confirmatory testing by iteratively computing the cost of discovering an additional scaffold, the marginal cost of discovery.
CONTACT: swamidass@wustl.edu
SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
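The marginal-value idea lends itself to a compact illustration. Below is a minimal sketch (our toy reading, in Python, with hypothetical hit/scaffold data; the paper's economic framework is more elaborate) of greedily ordering hits by the expected number of new scaffolds each confirmation would add:

```python
# Hedged sketch of diversity-oriented prioritization (not the authors' code).
# Each hit has a predicted probability of confirming and a scaffold ID.
# Greedily pick the order that maximizes the expected number of NEW scaffolds.

def dop_rank(hits):
    """hits: list of (hit_id, scaffold, p_active). Returns hits in test order."""
    confirmed_prob = {}  # scaffold -> prob. that at least one earlier pick confirms
    order = []
    remaining = list(hits)
    while remaining:
        # Expected marginal scaffold discovery of testing this hit next:
        # p_active * P(no earlier pick from the same scaffold confirms).
        best = max(remaining,
                   key=lambda h: h[2] * (1.0 - confirmed_prob.get(h[1], 0.0)))
        order.append(best)
        remaining.remove(best)
        s, p = best[1], best[2]
        prev = confirmed_prob.get(s, 0.0)
        confirmed_prob[s] = 1.0 - (1.0 - prev) * (1.0 - p)
    return order

hits = [("h1", "scafA", 0.9), ("h2", "scafA", 0.8), ("h3", "scafB", 0.5)]
print([h[0] for h in dop_rank(hits)])  # h1, then h3 (new scaffold), then h2
```

Once one hit from a scaffold is likely to confirm, further hits from that scaffold drop in priority, which is what pushes confirmatory batches toward scaffold diversity.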
Affiliation(s)
- S Joshua Swamidass
- Division of Laboratory and Genomic Medicine, Department of Pathology and Immunology, Washington University School of Medicine, St Louis, MO, USA.
52. Casalino L, Magnani D, De Falco S, Filosa S, Minchiotti G, Patriarca EJ, De Cesare D. An Automated High Throughput Screening-Compatible Assay to Identify Regulators of Stem Cell Neural Differentiation. Mol Biotechnol 2011;50:171-80. [DOI: 10.1007/s12033-011-9413-7]
53.
Abstract
High-throughput screening (HTS) has been postulated in several quarters to be a contributory factor to the decline in productivity in the pharmaceutical industry. Moreover, it has been blamed for stifling the creativity that drug discovery demands. In this article, we aim to dispel these myths and present the case for the use of HTS as part of a proven scientific tool kit, the wider use of which is essential for the discovery of new chemotypes.
54. Dragiev P, Nadon R, Makarenkov V. Systematic error detection in experimental high-throughput screening. BMC Bioinformatics 2011;12:25. [PMID: 21247425] [PMCID: PMC3034671] [DOI: 10.1186/1471-2105-12-25]
Abstract
Background: High-throughput screening (HTS) is a key part of the drug discovery process, during which thousands of chemical compounds are screened and their activity levels measured in order to identify potential drug candidates (i.e., hits). Many technical, procedural or environmental factors can cause systematic measurement error or inequalities in the conditions in which the measurements are taken. Such systematic error has the potential to critically affect the hit selection process. Several error correction methods and software have been developed to address this issue in the context of experimental HTS [1-7]. Despite their power to reduce the impact of systematic error when applied to error-perturbed datasets, these methods have one disadvantage: they introduce a bias when applied to data not containing any systematic error [6]. Hence, one first needs to assess the presence of systematic error in a given HTS assay, and then apply a correction method only if its presence has been confirmed by statistical tests.
Results: We tested three statistical procedures for assessing the presence of systematic error in experimental HTS data: the χ2 goodness-of-fit test, Student's t-test, and the Kolmogorov-Smirnov test [8] preceded by the Discrete Fourier Transform (DFT) method [9]. We applied these procedures first to raw HTS measurements and then to estimated hit distribution surfaces. The three competing tests were used to analyse simulated datasets containing different types of systematic error, as well as a real HTS dataset, and their accuracy was compared under various error conditions.
Conclusions: A successful assessment of the presence of systematic error in experimental HTS assays is possible when the appropriate statistical methodology is used. In particular, researchers should carry out the t-test to determine whether systematic error is present in their HTS data before applying any error correction method. This important step can significantly improve the quality of selected hits.
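The recommended pre-test is simple to illustrate. A sketch (ours, not the authors' implementation; assumes a plain NumPy plate matrix) that flags a plate when any row or column mean deviates significantly from the rest:

```python
# Sketch: detect row/column systematic error on one plate with Welch t-tests
# (illustrative only; the paper evaluates this alongside chi2 and K-S tests).
import numpy as np
from scipy import stats

def has_systematic_error(plate, alpha=0.01):
    """plate: 2D array of raw measurements. True if any row or column
    mean differs significantly from the rest of the plate."""
    n_rows, n_cols = plate.shape
    n_tests = n_rows + n_cols                  # Bonferroni correction
    for axis, n in ((0, n_rows), (1, n_cols)):
        for i in range(n):
            line = plate[i, :] if axis == 0 else plate[:, i]
            rest = np.delete(plate, i, axis=axis).ravel()
            _, p = stats.ttest_ind(line, rest, equal_var=False)
            if p < alpha / n_tests:
                return True
    return False

rng = np.random.default_rng(0)
plate = rng.normal(100, 5, size=(16, 24))
plate[0, :] += 20                              # simulated edge effect
print(has_systematic_error(plate))             # True
```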
Affiliation(s)
- Plamen Dragiev
- Département d'informatique, Université du Québec à Montréal, Montreal (QC) H3C 3P8, Canada
55. Blomberg A. Measuring growth rate in high-throughput growth phenotyping. Curr Opin Biotechnol 2010;22:94-102. [PMID: 21095113] [DOI: 10.1016/j.copbio.2010.10.013]
Abstract
Growth rate is an important variable and parameter in biology, with a central role in evolutionary, functional genomics and systems biology studies. In this review, the pros and cons of the different technologies presently available for high-throughput measurement of growth rate are discussed. Growth rate can be measured in liquid microcultivation of individual strains, in competition between strains, as growing colonies on agar, and as division of individual cells, or it can be estimated from molecular reporters. Irrespective of methodology, statistical issues such as spatial biases and batch effects are crucial to investigate and correct for in order to ensure low false discovery rates. The rather low correlations between studies indicate that cross-laboratory comparison and standardization are pressing issues for assuring high-quality, comparable growth-rate data.
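For orientation, nearly all of the reviewed technologies reduce to the same core computation: the slope of log population size versus time during the exponential phase. A minimal sketch (our illustration with assumed OD data, not from the review):

```python
# Sketch: estimate maximum specific growth rate from an OD time series
# by a log-linear fit over a sliding window (illustrative).
import numpy as np

def growth_rate(t_hours, od, window=5):
    """Return the largest slope of ln(OD) vs time over any contiguous window."""
    log_od = np.log(od)
    best = 0.0
    for i in range(len(t_hours) - window + 1):
        slope, _ = np.polyfit(t_hours[i:i + window], log_od[i:i + window], 1)
        best = max(best, slope)
    return best  # per hour; doubling time = ln(2) / rate

t = np.arange(0, 10, 0.5)
od = 0.05 * np.exp(0.4 * t) / (1 + 0.05 * (np.exp(0.4 * t) - 1))  # logistic-like
print(round(growth_rate(t, od), 2))
```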
Affiliation(s)
- Anders Blomberg
- University of Gothenburg, Department of Cell and Molecular Biology, Lundberg Laboratory, Medicinaregatan 9C, Göteborg, Sweden.
56. Jain S, Heutink P. From single genes to gene networks: high-throughput-high-content screening for neurological disease. Neuron 2010;68:207-17. [PMID: 20955929] [DOI: 10.1016/j.neuron.2010.10.010]
Abstract
Neuronal development, function and the subsequent degeneration of the brain remain an enigma in both normal and pathologic states, and there is an urgent need for better targets for therapeutic intervention. Current techniques for deconstructing the architecture of brain- and disease-related pathways are best suited to following up single genes, and would take an impractical amount of time to pursue the leads emerging from the current wave of genetic and genomic data. New technical developments have made combined high-throughput-high-content (HT-HC) cellular screens possible; these have the potential to contextualize information gathered from genetic and genomic approaches into networks and functional biology, and can be utilized to identify therapeutic targets. Herein we discuss the potential impact of HT-HC cellular screens on medical neuroscience.
Affiliation(s)
- Shushant Jain
- Department of Clinical Genetics, VU University Medical Center Amsterdam, The Netherlands
57. Malo N, Hanley JA, Carlile G, Liu J, Pelletier J, Thomas D, Nadon R. Experimental Design and Statistical Methods for Improved Hit Detection in High-Throughput Screening. J Biomol Screen 2010;15:990-1000. [DOI: 10.1177/1087057110377497]
Abstract
Identification of active compounds in high-throughput screening (HTS) contexts can be substantially improved by applying classical experimental design and statistical inference principles to all phases of HTS studies. The authors present both experimental and simulated data to illustrate how true-positive rates can be maximized without increasing false-positive rates by the following analytical process. First, the use of robust data preprocessing methods reduces unwanted variation by removing row, column, and plate biases. Second, replicate measurements allow estimation of the magnitude of the remaining random error and the use of formal statistical models to benchmark putative hits relative to what is expected by chance. Receiver Operating Characteristic (ROC) analyses revealed superior power for data preprocessed by a trimmed-mean polish method combined with the RVM t-test, particularly for small- to moderate-sized biological hits.
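The preprocessing step can be sketched as follows (our simplified reading in Python; the authors' exact trimmed-mean polish and the RVM t-test have additional detail): iteratively subtract trimmed row and column means so positional biases are removed before hits are benchmarked:

```python
# Sketch: two-way trimmed-mean polish to remove row/column plate biases
# (simplified; see the paper for the exact procedure and the RVM t-test).
import numpy as np
from scipy import stats

def trimmed_mean_polish(plate, trim=0.1, n_iter=10):
    residuals = plate.astype(float).copy()
    for _ in range(n_iter):
        row_eff = stats.trim_mean(residuals, trim, axis=1)
        residuals -= row_eff[:, None]
        col_eff = stats.trim_mean(residuals, trim, axis=0)
        residuals -= col_eff[None, :]
    return residuals  # bias-corrected values, centered near zero

rng = np.random.default_rng(1)
plate = rng.normal(0, 1, (16, 24)) + np.linspace(0, 3, 24)[None, :]  # column drift
print(np.abs(trimmed_mean_polish(plate).mean(axis=0)).max())  # drift removed
```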
Affiliation(s)
- Nathalie Malo
- McGill University and Genome Quebec Innovation Centre, Montreal, Quebec, Canada
- Department of Epidemiology, Biostatistics, and Occupational Health, McGill University, Montreal, Quebec, Canada
- James A. Hanley
- Department of Epidemiology, Biostatistics, and Occupational Health, McGill University, Montreal, Quebec, Canada
- Graeme Carlile
- Department of Biochemistry, McGill University, Montreal, Quebec, Canada
- Jing Liu
- Department of Biochemistry, McGill University, Montreal, Quebec, Canada
- Jerry Pelletier
- Department of Biochemistry, McGill University, Montreal, Quebec, Canada
- David Thomas
- Department of Biochemistry, McGill University, Montreal, Quebec, Canada
- Robert Nadon
- McGill University and Genome Quebec Innovation Centre, Montreal, Quebec, Canada
- Department of Human Genetics, McGill University, Montreal, Quebec, Canada
58.
Abstract
High-throughput screening (HTS) is a common technique for both drug discovery and basic research, but researchers often struggle with how best to derive hits from HTS data. While a wide range of hit identification techniques exist, little information is available about their sensitivity and specificity, especially in comparison to each other. To address this, we have developed the open-source NoiseMaker software tool for the generation of realistically noisy virtual screens. By applying potential hit identification methods to NoiseMaker-simulated data and determining how many of the pre-defined true hits are recovered (as well as how many known non-hits are misidentified as hits), one can draw conclusions about the likely performance of these techniques on real data containing unknown true hits. Such simulations apply to a range of screens, such as those using small molecules, siRNAs, shRNAs, miRNA mimics or inhibitors, or gene over-expression; we demonstrate this utility by using it to explain apparently conflicting reports about the performance of the B-score hit identification method.
AVAILABILITY AND IMPLEMENTATION: NoiseMaker is written in C#, an ECMA and ISO standard language with compilers for multiple operating systems. Source code, a Windows installer and complete unit tests are available at http://sourceforge.net/projects/noisemaker. Full documentation and support are provided via an extensive help file and tool-tips, and the developers welcome user suggestions.
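The evaluation logic is easy to mimic in a few lines. A toy stand-in for the approach (illustrative only; the real NoiseMaker is a far more complete C# tool): plant known hits in synthetic data, add systematic bias, and measure how well a hit-calling rule recovers them:

```python
# Toy version of the NoiseMaker idea: simulate a screen with known hits,
# add noise and bias, and measure hit recovery of a naive calling rule.
import numpy as np

rng = np.random.default_rng(42)
n_wells, n_hits = 3840, 40
truth = np.zeros(n_wells, dtype=bool)
truth[rng.choice(n_wells, n_hits, replace=False)] = True

signal = rng.normal(0, 1, n_wells)                 # assay noise
signal[truth] += 3.0                               # true-hit effect size
signal += np.tile(np.linspace(-0.5, 0.5, 384), 10) # systematic plate bias

z = (signal - signal.mean()) / signal.std()        # naive z-score hit call
called = z > 2.5
sensitivity = (called & truth).sum() / truth.sum()
fp = (called & ~truth).sum()
print(f"recovered {sensitivity:.0%} of true hits, {fp} false positives")
```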
Affiliation(s)
- Phoenix Kwan
- Thermo Fisher Scientific, 2650 Crescent Drive, Lafayette, CO 80026, USA
59. Bushway PJ, Azimi B, Heynen-Genel S, Price JH, Mercola M. Hybrid median filter background estimator for correcting distortions in microtiter plate data. Assay Drug Dev Technol 2010;8:238-50. [PMID: 20230301] [DOI: 10.1089/adt.2009.0242]
Abstract
Microtiter plate (MTP) assays often exhibit distortions, such as those caused by edge-dependent drying and variation in robotic fluid handling. Distortions vary by assay system but can have both systematic components (patterns predictable from plate to plate) and random ones (sporadic and unpredictable). Random errors can be especially difficult to resolve by assay optimization alone, and the post-assay algorithms reported to date have smoothing effects that often blunt hits. We implemented a 5 × 5 bidirectional hybrid median filter (HMF) as a local background estimator to scale each data point to the MTP global background median, and compared it with a recently described Discrete Fourier Transform (DFT) technique for correcting errors in computationally and experimentally generated MTP datasets. Experimental data were generated from a 384-well fluorescent bioassay using cells engineered to express eGFP and DsRED. MTP arrays were produced with and without control treatments used to simulate hits in random wells. The HMF demonstrated the greatest improvements in MTP coefficients of variation and dynamic range (defined as the ratio of average hit amplitude to standard deviation, SD) for all synthetic and experimental MTPs examined. After applying the HMF to an MTP of eGFP signal from mouse insulinoma (MIN6) cells obtained with a plate reader, the assay coefficient of variation (CV) decreased from 8.0% in the raw dataset to 5.1% and the hit amplitudes were reduced by only 1%, while the DFT method increased the CV by 36.0% and reduced the hit amplitude by 21%. Our results thus show that the bidirectional HMF provides superior correction of MTP data distortions while preserving hit amplitudes and improving dynamic range. The software to perform HMF corrections of MTP data is available at http://bccg.burnham.org/HTS/HMF_Download_Page.aspx; the password is pbushway.
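The correction itself is compact. A sketch under our assumptions (a single-pass 3 × 3 hybrid median for brevity, where the published filter is 5 × 5 and bidirectional): estimate each well's local background as the hybrid median of its neighborhood, then rescale the well to the plate's global median:

```python
# Sketch of hybrid-median-filter background correction (our simplified
# 3x3, single-pass reading of the paper's 5x5 bidirectional filter).
import numpy as np

def hybrid_median(plate):
    p = np.pad(plate.astype(float), 1, mode="edge")
    out = np.empty(plate.shape, dtype=float)
    for i in range(plate.shape[0]):
        for j in range(plate.shape[1]):
            c = p[i + 1, j + 1]
            cross = [p[i, j + 1], p[i + 2, j + 1], p[i + 1, j], p[i + 1, j + 2], c]
            diag = [p[i, j], p[i, j + 2], p[i + 2, j], p[i + 2, j + 2], c]
            out[i, j] = np.median([np.median(cross), np.median(diag), c])
    return out

def hmf_correct(plate):
    background = hybrid_median(plate)             # local background estimate
    return plate * np.median(plate) / background  # rescale to global median

rng = np.random.default_rng(7)
plate = rng.normal(1000, 30, (16, 24))
plate[:, 0] *= 0.8                                # simulated edge artifact
print(np.median(hmf_correct(plate), axis=0)[:3].round())  # edge column restored
```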
Affiliation(s)
- Paul J Bushway
- Burnham Institute for Medical Research, La Jolla, California 92037, USA
60. Posner BA, Xi H, Mills JEJ. Enhanced HTS Hit Selection via a Local Hit Rate Analysis. J Chem Inf Model 2009;49:2202-10. [DOI: 10.1021/ci900113d]
Affiliation(s)
- Bruce A. Posner
- PGRD Groton Laboratories, Pfizer Inc., Groton, Connecticut 06340, PGRD Computational Sciences CoE, Pfizer Inc., Cambridge, Massachusetts 02139, and PGRD Sandwich Laboratories, Pfizer Inc., 32 Ramsgate Road, Sandwich, Kent CT13 9NJ, Great Britain
- Hualin Xi
- James E. J. Mills
61. Shapiro AB, Walkup GK, Keating TA. Correction for Interference by Test Samples in High-Throughput Assays. J Biomol Screen 2009;14:1008-16. [DOI: 10.1177/1087057109341768]
Abstract
In high-throughput biochemical assays performed in multiwell plates, the effect of test samples on the activity of the biochemical system is usually measured by optical means such as absorbance, fluorescence, luminescence or scintillation counting. The test sample often causes detection interference when it remains in the well during the measurement. Interference may be due to light absorption, fluorescence quenching, sample fluorescence, chemical interaction of the sample with a detection reagent, or depression of the meniscus. A simple method is described that corrects for such interference on a well-by-well basis: the interference is measured in a separate artifact assay plate, and an appropriate arithmetic correction is then applied to the measurement in the corresponding well of the activity assay plate. The correction procedure can be used for single-point screening or for potency measurements on serial dilutions of test samples.
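The arithmetic correction can be sketched as follows (our illustration with an assumed attenuation model; the paper derives mode-specific corrections for absorbance, fluorescence and other readouts):

```python
# Sketch of well-by-well interference correction (illustrative only).
import numpy as np

def correct_for_interference(activity_plate, artifact_plate, blank):
    """activity_plate: assay signal with test samples present.
    artifact_plate: the same samples in the detection system only (no assay).
    blank: artifact-plate signal with no sample (scalar or array)."""
    interference = artifact_plate / blank        # per-well attenuation factor
    return activity_plate / interference

activity = np.array([[900.0, 450.0]])            # second well quenches 2x
artifact = np.array([[1000.0, 500.0]])
print(correct_for_interference(activity, artifact, 1000.0))  # [[900. 900.]]
```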
62. Janzen WP, Popa-Burke IG. Advances in improving the quality and flexibility of compound management. J Biomol Screen 2009;14:444-51. [PMID: 19483148] [DOI: 10.1177/1087057109335262]
Abstract
The process of drug discovery has evolved considerably since the advent of high-throughput screening (HTS) in the 1980s. Experts and opinion leaders agree that the current trends in the field are a focus on increasing overall quality (of targets, screening and compounds), the use of multiple screening approaches for lead discovery, and more flexibility in the process. The associated need for increased flexibility and quality control, to support existing HTS paradigms as well as lower-throughput approaches such as fragment screening, computational chemistry, focused library building and centralized lead optimization support, has required an evolution in compound management (CM, also known as sample management or library management). Although much less peer-reviewed data has been published on CM, its historical links to HTS mean it has followed very similar trends. In recent years the focus in CM has increasingly been on compound quality and the flexibility of the process, as opposed to the number of compounds dispensed and the speed of dispensing, which were the standard metrics and indicators not so long ago. Ideally, to screen the highest-quality sample in every assay, one would start with a solid of confirmed identity and purity, prepare a solution of the correct concentration in water or a water-soluble, assay-compatible solvent allowing 100% solubilization, and screen it immediately in a biological assay. Neither CM nor screening has advanced sufficiently to deliver this ideal scenario, but many significant advances have been made in recent years, both in the quality of compounds in stores and in the flexibility of the process, and these are reviewed herein.
Affiliation(s)
- William P Janzen
- Center for Integrative Chemical Biology and Drug Discovery, Eshelman School of Pharmacy, University of North Carolina at Chapel Hill, USA
63.
Abstract
MOTIVATION: For genome-scale RNAi research, it is critical to investigate the sample size required to achieve a reasonably low false negative rate (FNR) and false positive rate.
RESULTS: The analysis in this article reveals that current sample-size designs contribute to the low signal-to-noise ratios observed in genome-scale RNAi projects. The analysis suggests that (i) an arrangement of 16 wells per plate is acceptable, and an arrangement of 20-24 wells per plate is preferable, for a negative control to be used for hit selection in a primary screen without replicates; and (ii) in a confirmatory screen or a primary screen with replicates, a sample size of 3 is not large enough, and there is a large reduction in FNRs when sample size increases from 3 to 4. To balance benefit against cost, any sample size between 4 and 11 is a reasonable choice: if the main focus is the selection of siRNAs with strong effects, a sample size of 4 or 5 is a good choice; if we want enough power to detect siRNAs with moderate effects, the sample size needs to be 8-11. These findings about sample size bring insight to the design of genome-scale RNAi screening experiments.
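The flavor of these sample-size results can be reproduced with a normal-approximation power calculation (a sketch with assumed effect sizes and test, not the paper's exact model):

```python
# Sketch: FNR vs replicate number for detecting an siRNA effect of size
# `effect_sd` (in SD units), one-sided z-test at level alpha. Illustrative.
from math import sqrt
from scipy.stats import norm

def fnr(effect_sd, n, alpha=0.05):
    crit = norm.ppf(1 - alpha)                   # one-sided critical value
    power = 1 - norm.cdf(crit - effect_sd * sqrt(n))
    return 1 - power

for n in (3, 4, 8, 11):
    print(n, round(fnr(2.0, n), 3))              # strong effect: 2 SD
```

Even this crude model reproduces the qualitative point above: the FNR drops sharply between n = 3 and n = 4 and flattens out toward n = 8-11.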
Affiliation(s)
- Xiaohua Douglas Zhang
- Biometrics Research and BARDS, Merck Research Laboratories, West Point, PA 19486, USA.
64.
Abstract
Screening is about making decisions on the modulating activity of a particular compound on a biological system. When a compound-testing experiment is repeated under the same conditions, or as close to the same conditions as possible, the observed results are never exactly the same: there is an apparently random and uncontrolled source of variability in the system under study. Nevertheless, randomness is not haphazard. In this context, we can see statistics as the science of decision making under uncertainty. The use of statistical tools in the analysis of screening experiments is thus the right approach to interpreting screening data, with the aim of making the data meaningful and converting them into valuable information that supports sound decision making.
In the HTS workflow, there are at least three key stages where decisions must be made on the basis of experimental data: (1) assay development (i.e., assessing whether an assay is good enough to be put into screening production for the identification of modulators of the target of interest); (2) the HTS campaign itself (i.e., monitoring that the screening process is performing at the expected quality, and assessing possible patterns in the experimental response that may bias and mislead hit identification); and (3) analysis of primary HTS data (i.e., flagging which compounds give a positive response in the assay, namely hit identification).
In this chapter we focus on how statistical tools can help with these three aspects. Assessment of assay quality is reviewed in other chapters, so Section 1 only briefly adds some further considerations. Section 2 reviews statistical process control, Section 3 covers methodologies for detecting and dealing with HTS patterns, and Section 4 describes approaches for the statistically guided selection of hits in HTS.
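As an example of the third stage (statistically guided hit flagging), here is a sketch of one common rule (our illustration, not necessarily GSK's workflow): robust Z scores computed from the sample field, with wells beyond a cutoff flagged as hits:

```python
# Sketch: robust Z-score hit flagging for one plate's sample wells
# (median/MAD instead of mean/SD, as commonly used in primary HTS).
import numpy as np

def robust_z(values):
    med = np.median(values)
    mad = np.median(np.abs(values - med)) * 1.4826  # consistent with SD
    return (values - med) / mad

rng = np.random.default_rng(3)
inhibition = rng.normal(0, 8, 352)               # 352 sample wells
inhibition[5] = 65.0                             # one real active
hits = np.where(robust_z(inhibition) > 3)[0]
print(hits)                                      # includes well 5
```

The median/MAD pair is preferred over mean/SD here because a handful of strong actives would otherwise inflate the threshold that is supposed to detect them.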
Affiliation(s)
- Isabel Coma
- Molecular Discovery Research, GlaxoSmithKline, Tres Cantos, Madrid, Spain
65. Coma I, Clark L, Diez E, Harper G, Herranz J, Hofmann G, Lennon M, Richmond N, Valmaseda M, Macarron R. Process Validation and Screen Reproducibility in High-Throughput Screening. J Biomol Screen 2009;14:66-76. [DOI: 10.1177/1087057108326664]
Abstract
The use of large-scale compound screening has become a key component of drug discovery projects in both the pharmaceutical and biotechnological industries. More recently, these activities have also been embraced by the academic community as a major tool for chemical genomics. High-throughput screening (HTS) constitutes a major step in initial drug discovery efforts and involves the use of large quantities of biological reagents, hundreds of thousands to millions of compounds, and expensive equipment. All these factors make it very important to evaluate, in advance of an HTS campaign, any potential issues related to the reproducibility of the experimentation and the quality of the results obtained at the end of these very costly activities. In this article, the authors describe how GlaxoSmithKline (GSK) has addressed the need for a true validation of the HTS process before embarking on full HTS campaigns. They present two different aspects of the so-called validation process: (1) optimization of the HTS workflow and its validation as a quality process, and (2) statistical evaluation of the HTS, focusing on the reproducibility of results and the ability to distinguish active from nonactive compounds in a vast collection of samples. The authors describe a variety of reproducibility indexes that are either innovative or adapted from generic medical diagnostic screening strategies, and they exemplify how these validation tools have been implemented in a number of case studies at GSK.
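One of the simplest reproducibility indexes can be sketched as follows (our example of a diagnostic-style concordance measure; the paper's indexes differ in detail): screen a validation set twice and compute Cohen's kappa on the active/inactive calls:

```python
# Sketch: reproducibility of active/inactive calls between two replicate
# runs of a validation screen, via Cohen's kappa (illustrative).
import numpy as np

def cohens_kappa(calls1, calls2):
    p_obs = np.mean(calls1 == calls2)
    p1, p2 = np.mean(calls1), np.mean(calls2)
    p_exp = p1 * p2 + (1 - p1) * (1 - p2)        # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

rng = np.random.default_rng(0)
true_active = rng.random(10000) < 0.01
run1 = true_active | (rng.random(10000) < 0.002)  # plus false positives
run2 = true_active | (rng.random(10000) < 0.002)
print(round(cohens_kappa(run1, run2), 2))         # high agreement
```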
Affiliation(s)
- Isabel Coma
- GlaxoSmithKline R&D Pharmaceuticals, Screening and Compound Profiling, Tres Cantos, Spain
- Liz Clark
- Screening and Compound Profiling, Harlow, UK
- Emilio Diez
- GlaxoSmithKline R&D Pharmaceuticals, Screening and Compound Profiling, Tres Cantos, Spain
- Gavin Harper
- Computational and Structural Chemistry, Stevenage, UK
- Jesus Herranz
- Computational and Structural Chemistry, Tres Cantos, Spain
- Glenn Hofmann
- Screening and Compound Profiling, Upper Providence, Collegeville, Pennsylvania
- Ricardo Macarron
- Compound Management, Upper Providence, Collegeville, Pennsylvania
66.
67. Chen YPP, Chen F. Identifying targets for drug discovery using bioinformatics. Expert Opin Ther Targets 2008;12:383-9. [PMID: 18348676] [DOI: 10.1517/14728222.12.4.383]
Abstract
BACKGROUND: Drug discovery is the process of discovering and designing drugs, including target identification, target validation, lead identification, lead optimization and introduction of the new drugs to the public. This process is very important, involving analysis of the causes of disease and the search for ways to tackle them.
OBJECTIVE: The problems to be faced include: (i) the process is long and expensive, potentially costing millions of dollars and taking a dozen years; and (ii) the accuracy of target identification is not good enough, which further delays the process. Introducing bioinformatics into the drug discovery process could contribute much to it. Bioinformatics is a booming subject combining biology with computer science: it can explore the causes of disease at the molecular level, explain disease phenomena from the perspective of the gene, and apply computer techniques such as data mining and machine learning to narrow the scope of analysis and enhance the accuracy of results, thereby reducing cost and time.
METHODS: Here we describe recent studies on how to apply bioinformatics techniques in the four phases of drug discovery, how these techniques improve the drug discovery process, and some possible difficulties that should be dealt with.
RESULTS: We conclude that combining bioinformatics with drug discovery is a very promising approach, although it currently faces many problems.
68. Ramadan N, Flockhart I, Booker M, Perrimon N, Mathey-Prevot B. Design and implementation of high-throughput RNAi screens in cultured Drosophila cells. Nat Protoc 2007;2:2245-64. [PMID: 17853882] [DOI: 10.1038/nprot.2007.250]
Abstract
This protocol describes the various steps and considerations involved in planning and carrying out RNA interference (RNAi) genome-wide screens in cultured Drosophila cells. We focus largely on the procedures that have been modified as a result of our experience over the past 3 years and of our better understanding of the underlying technology. Specifically, our protocol offers a set of suggestions and considerations for screen optimization and a step-by-step description of the procedures successfully used at the Drosophila RNAi Screening Center for screen implementation, data collection and analysis to identify potential hits. In addition, this protocol briefly covers postscreen analysis approaches that are often needed to finalize the hit list. Depending on the scope of the screen and subsequent analysis and validation involved, the full protocol can take anywhere from 3 months to 2 years to complete.
Affiliation(s)
- Nadire Ramadan
- Department of Genetics and Drosophila RNAi Screening Center, Harvard Medical School, Boston, Massachusetts 02115, USA
69. Perrin D, Frémaux C, Scheer A. Assay Development and Screening of a Serine/Threonine Kinase in an On-Chip Mode Using Caliper Nanofluidics Technology. J Biomol Screen 2006;11:359-68. [PMID: 16751332] [DOI: 10.1177/1087057106286653]
Abstract
Kinases are key targets for drug discovery. In screening in general, and in the kinase area especially, radioactivity-based assays tend to be replaced, for reasons of efficiency and cost, by alternative, mostly fluorescence-based assays. Today the limiting factor is rarely the number of data points that can be obtained but rather the quality of the data, enzyme availability and cost. In this article, the authors describe the development of an assay for a kinase screen based on the electrophoretic separation of fluorescent product and substrate using a Caliper-based nanofluidics environment in on-chip incubation mode. The authors present the results of screening a focused set of 32,000 compounds, together with confirmation data obtained in a filtration assay. In addition, they have made a small-scale comparison between the on-chip and off-chip nanofluidics screening modes. In their hands, the screen in on-chip mode is characterized by high precision, most likely due to the absence of liquid pipetting; an excellent confirmation rate (62%) in an independent assay format, namely filtration; and good sensitivity. The study led to the identification of 4 novel chemical series of inhibitors.
Affiliation(s)
- Dominique Perrin
- Molecular Screening and Cellular Pharmacology Department, Serono Pharmaceutical Research Institute, Geneva, Switzerland.