1. Rogers M, Sutton A, Campbell F, Whear R, Bethel A, Coon JT. Streamlining search methods to update evidence and gap maps: A case study using intergenerational interventions. Campbell Systematic Reviews 2024; 20:e1380. PMID: 38188228; PMCID: PMC10771710; DOI: 10.1002/cl2.1380.
Abstract
Background Evidence and Gap Maps (EGMs) should be regularly updated, but running update searches to find new studies for EGMs can be a time-consuming process. Search Summary Tables (SSTs) can help streamline searches by identifying which resources were most lucrative for identifying relevant articles and which were redundant. The aim of this study was to use an SST to streamline search methods for an EGM of studies about intergenerational activities. Methods To produce the EGM, 15 databases were searched; 8638 records were screened, and 500 studies were included in the final EGM. Using an SST, we determined which databases and search methods were the most efficient, in terms of sensitivity and specificity, for finding the included studies. We also investigated whether any database performed particularly well for returning particular study types. For the best-performing databases, we analysed the search terms used in order to streamline the strategies. Results No single database returned all of the studies included in the EGM. Out of 500 studies, PsycINFO returned 40% (n = 202), CINAHL 39% (n = 194), Ageline 35% (n = 174), MEDLINE 23% (n = 117), ERIC 20% (n = 100) and Embase 19% (n = 98). The HMIC database and the Conference Proceedings Citation Index-Science (via Web of Science) returned no studies that were included in the EGM. ProQuest Dissertations & Theses (PQDT) returned the highest number of unique studies (n = 42), followed by ERIC (n = 33) and Ageline (n = 29). Ageline returned the most randomised controlled trials (42%), followed by CINAHL (34%), MEDLINE (29%) and CENTRAL (29%). CINAHL, Ageline, MEDLINE and PsycINFO performed the best for locating systematic reviews (62%, 46% and 42%). CINAHL, PsycINFO and Ageline performed best for qualitative studies (41%, 40% and 34%). The Journal of Intergenerational Relationships returned more included studies than any other journal (16%).
No combination of search terms was found to balance specificity and sensitivity better than the original search strategies. However, the strategies could be shortened considerably without losing key, unique studies. Conclusion Using SSTs, we have developed a method for streamlining update searches for an EGM about intergenerational activities. For future updates we recommend that MEDLINE, PsycINFO, ERIC, Ageline, CINAHL and PQDT be searched, supplemented by hand-searching the Journal of Intergenerational Relationships and by backwards citation chasing on new systematic reviews. Using SSTs to analyse database efficiency could help streamline search updates for other EGMs.
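The database-efficiency analysis behind an SST can be reproduced with a short script. The sketch below is purely illustrative — the study identifiers and counts are invented, not taken from the EGM — and computes each database's sensitivity (the share of included studies it retrieved) together with its unique contribution:

```python
from typing import Dict, Set


def database_performance(retrieved: Dict[str, Set[str]], included: Set[str]) -> dict:
    """For each database, report sensitivity (% of included studies it found)
    and the number of included studies found *only* in that database."""
    report = {}
    for db, hits in retrieved.items():
        relevant = hits & included
        # Union of everything retrieved by all the *other* databases.
        others = set().union(*(h for d, h in retrieved.items() if d != db))
        report[db] = {
            "sensitivity": round(100 * len(relevant) / len(included), 1),
            "unique": len(relevant - others),
        }
    return report


# Hypothetical mini-example: three databases, four included studies.
retrieved = {
    "PsycINFO": {"s1", "s2", "s3"},
    "CINAHL": {"s2", "s3"},
    "PQDT": {"s4"},
}
included = {"s1", "s2", "s3", "s4"}
print(database_performance(retrieved, included))
```

With real SST data the same report identifies redundant resources (zero unique studies) as candidates for dropping from update searches.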
Affiliation(s)
- Morwenna Rogers
- Evidence Synthesis Team, NIHR ARC South West Peninsula (PenARC), University of Exeter Medical School, Exeter, UK
- Anthea Sutton
- SCHARR, University of Sheffield, Regent Court, Sheffield, UK
- Fiona Campbell
- Population Health Sciences Institute, Newcastle University, Newcastle, UK
- Rebecca Whear
- Evidence Synthesis Team, NIHR ARC South West Peninsula (PenARC), University of Exeter Medical School, Exeter, UK
- Alison Bethel
- Evidence Synthesis Team, NIHR ARC South West Peninsula (PenARC), University of Exeter Medical School, Exeter, UK
- Jo Thompson Coon
- Evidence Synthesis Team, NIHR ARC South West Peninsula (PenARC), University of Exeter Medical School, Exeter, UK
2. Escobar Liquitay CM, Garegnani L, Garrote V, Solà I, Franco JV. Search strategies (filters) to identify systematic reviews in MEDLINE and Embase. Cochrane Database Syst Rev 2023; 9:MR000054. PMID: 37681507; PMCID: PMC10485899; DOI: 10.1002/14651858.mr000054.pub2.
Abstract
BACKGROUND Bibliographic databases provide access to an international body of scientific literature in the health and medical sciences. Systematic reviews are an important source of evidence for clinicians, researchers, consumers, and policymakers, as they address a specific health-related question and use explicit methods to identify, appraise, and synthesize evidence from which conclusions can be drawn and decisions made. Methodological search filters help database end-users search the literature effectively, with different levels of sensitivity and specificity. Such filters have been developed for various study designs and have been found particularly useful for intervention studies; others have been developed for finding systematic reviews. Given the variety and number of available search filters for systematic reviews, a review of them is needed to provide evidence about their retrieval properties at the time they were developed. OBJECTIVES To systematically review empirical studies that report the development, evaluation, or comparison of search filters to retrieve reports of systematic reviews in MEDLINE and Embase. SEARCH METHODS We searched the following databases from inception to January 2023: MEDLINE; Embase; PsycINFO; Library, Information Science & Technology Abstracts (LISTA); and Science Citation Index (Web of Science). SELECTION CRITERIA We included studies if one of their primary objectives was the development, evaluation, or comparison of a search filter that could be used to retrieve systematic reviews in MEDLINE, Embase, or both. DATA COLLECTION AND ANALYSIS Two review authors independently extracted data using a pre-specified and piloted data extraction form based on the InterTASC Information Specialist Subgroup (ISSG) Search Filter Evaluation Checklist. MAIN RESULTS We identified eight studies that developed filters for MEDLINE and three studies that developed filters for Embase.
Most of the included studies were old, and some were limited to systematic reviews in specific clinical areas. Six included studies reported the sensitivity of their developed filter, seven reported precision, and six reported specificity. Only one study reported the number needed to read and the positive predictive value. None of the filters were designed to differentiate systematic reviews on the basis of their methodological quality. For MEDLINE, all filters showed similar sensitivity and precision, and one filter showed higher levels of specificity. For Embase, filters showed variable sensitivity and precision, with limited study reporting that may affect accuracy assessments. The reporting of these studies had some limitations, and assessments of their accuracy may suffer from indirectness, given that most filters were developed before the release of the PRISMA 2009 statement or were limited in their selection of systematic review topics. Search filters for MEDLINE: Three studies produced filters with sensitivity > 90% and variable degrees of precision; only one of these was developed and validated in a gold-standard database, which allowed the calculation of specificity. The other two search filters had lower levels of sensitivity, and one of them produced a filter with higher specificity (> 90%). All filters showed similar sensitivity and precision in external validation, except for one that was not externally validated and another that was conceptually derived and only externally validated. Search filters for Embase: We identified three studies that developed filters for this database. One developed filters with variable sensitivity and precision, including highly sensitive strategies (> 90%), but was not externally validated. Another produced a filter with lower sensitivity (72.7%) but high specificity (99.1%), with similar performance in external validation.
AUTHORS' CONCLUSIONS Studies reporting the development, evaluation, or comparison of search filters to retrieve reports of systematic reviews in MEDLINE showed similar sensitivity and precision, with one filter showing higher levels of specificity. For Embase, filters showed variable sensitivity and precision, with limited information about how the filter was produced, which leaves us uncertain about their performance assessments. Newer filters had limitations in their methods or scope, including very focused subject topics for their gold standards, limiting their applicability across other topics. Our findings highlight that consensus guidance on the conduct of search filters and standardized reporting of search filters are needed, as we found highly heterogeneous development methods, accuracy assessments and outcome selection. New strategies adaptable across interfaces could enhance their usability. Moreover, the performance of existing filters needs to be evaluated in light of the impact of reporting guidelines, including the PRISMA 2009, on how systematic reviews are reported. Finally, future filter developments should also consider comparing the filters against a common reference set to establish comparative performance and assess the quality of systematic reviews retrieved by strategies.
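The retrieval properties compared throughout this review reduce to a few ratios over a filter's validation counts against a gold-standard set. A minimal sketch of those ratios (my own illustration with invented counts, not code or data from the review):

```python
def filter_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard retrieval metrics for a search filter validated against a
    gold standard: tp = relevant retrieved, fp = irrelevant retrieved,
    fn = relevant missed, tn = irrelevant correctly excluded."""
    retrieved = tp + fp
    return {
        "sensitivity": tp / (tp + fn),   # recall of relevant records
        "precision": tp / retrieved,     # a.k.a. positive predictive value
        "specificity": tn / (tn + fp),   # exclusion of irrelevant records
        "nnr": retrieved / tp,           # number needed to read = 1/precision
    }


# Invented validation counts: 100 relevant and 900 irrelevant records.
m = filter_metrics(tp=90, fp=10, fn=10, tn=890)
```

Note the usual trade-off the review describes: raising sensitivity (fewer fn) generally admits more fp, lowering precision and raising the number needed to read.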
Affiliation(s)
- Luis Garegnani
- Research Department, Instituto Universitario Hospital Italiano de Buenos Aires, Buenos Aires, Argentina
- Virginia Garrote
- Central Library, Instituto Universitario Hospital Italiano de Buenos Aires, Buenos Aires, Argentina
- Ivan Solà
- Iberoamerican Cochrane Centre, Biomedical Research Institute Sant Pau (IIB Sant Pau), CIBER Epidemiología y Salud Pública (CIBERESP), Barcelona, Spain
- Juan VA Franco
- Institute of General Practice, Medical Faculty of the Heinrich-Heine-University Düsseldorf, Düsseldorf, Germany
3. Nick JM, Sarpy NL. An analysis of data sources and study registries used in systematic reviews. Worldviews Evid Based Nurs 2022; 19:450-457. PMID: 36380457; PMCID: PMC10099387; DOI: 10.1111/wvn.12614.
Abstract
BACKGROUND Reporting standards for data sources in systematic reviews (SRs) have been developed, yet research shows varying compliance in the methods section. When compliance is poor, replication of search results is difficult, and the reported data sources are ambiguous and prone to bias. AIMS This study captured author practices in choosing English- and non-English-language databases, listing all the databases searched, and incorporating study registries as part of the search strategy. METHODS Using an analytic, cross-sectional study design, volunteer data collectors (n = 107) searched one of two assigned English-language platforms for SRs on specified health conditions. All the data sources found in the methods section of each SR were documented and analyzed for patterns using bibliographic techniques. RESULTS The final sample comprised N = 199 SRs. The mean number of data sources per SR was 3.9 (SD 2), with a range of 1-10. Eighteen records (9%) used a single data source. Data sources in four leading languages were seen in the SRs: all (100%) used English-language sources, up to 8% used Chinese sources, and 4% included Spanish- or Portuguese-language sources. The four most frequently used data sources were (1) MEDLINE (98%), (2) Embase (65%), (3) Cochrane Library (56%), and (4) Web of Science (33%). Thirty percent of the SRs listed study registries. LINKING EVIDENCE TO ACTION Strategies to reduce bias and increase the rigor and reliability of SRs include comprehensive search practices: exploring non-English-language databases, using multiple data sources, and searching study registries. By following PRISMA-S guidelines to report data sources correctly, reproducibility can be achieved.
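The descriptive statistics reported here amount to tallying the data sources listed in each review's methods section. A toy sketch under that assumption, with invented records rather than the study's data:

```python
from collections import Counter
from statistics import mean, stdev

# Each inner list = data sources reported in one SR's methods section
# (hypothetical records for illustration only).
reviews = [
    ["MEDLINE", "Embase", "Cochrane Library"],
    ["MEDLINE"],
    ["MEDLINE", "Embase", "Web of Science", "CINAHL"],
]

counts = [len(r) for r in reviews]                      # sources per SR
usage = Counter(db for r in reviews for db in r)        # frequency of each source
single_source = sum(1 for r in reviews if len(r) == 1)  # SRs with one source

print(f"mean sources per SR: {mean(counts):.1f} (SD {stdev(counts):.1f})")
print(f"single-source SRs: {100 * single_source / len(reviews):.0f}%")
print(usage.most_common(2))
```

Scaled to the study's 199 records, the same tally yields the mean of 3.9 sources, the 9% single-source figure, and the MEDLINE/Embase/Cochrane/Web of Science ranking.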
Affiliation(s)
- Jan M Nick
- Loma Linda University - School of Nursing, Loma Linda, California, USA
- Nancy L Sarpy
- Loma Linda University - School of Nursing, Loma Linda, California, USA
4. A technical review of the ISPOR conference presentations database identified issues in the search interface and areas for future development. Int J Technol Assess Health Care 2022; 38:e29. PMID: 35256029; DOI: 10.1017/s0266462322000137.
5. Burns CS, Nix T, Shapiro RM, Huber JT. MEDLINE search retrieval issues: A longitudinal query analysis of five vendor platforms. PLoS One 2021; 16:e0234221. PMID: 33956834; PMCID: PMC8101950; DOI: 10.1371/journal.pone.0234221.
Abstract
This study compared data collected from a longitudinal query analysis of the MEDLINE database as hosted on five platforms: PubMed, EBSCOhost, Ovid, ProQuest, and Web of Science. The goal was to identify variations among the search results on the platforms after controlling for search query syntax. We devised twenty-nine cases of search queries, each comprising five semantically equivalent queries, one per MEDLINE platform. We ran our queries monthly for a year and collected search result counts to observe changes. We found that search results varied considerably by platform. Variations were due to trends in scholarly publication, such as publishing individual papers online first rather than in complete issues; metadata differences in bibliographic records; differences in the levels of specificity of the search fields provided by the platforms; and large fluctuations in monthly result counts for the same query. Database integrity and currency issues were also observed as each platform updated its MEDLINE data throughout the year. Biomedical bibliographic databases are used to inform clinical decision-making, create systematic reviews, and construct knowledge bases for clinical decision support systems. They serve as essential information retrieval and discovery tools, helping to identify and collect research data across a broad range of fields and research designs. This study should help clinicians, researchers, librarians, informationists, and others understand how these platforms differ and inform future work on their standardization.
Affiliation(s)
- C. Sean Burns
- School of Information Science, University of Kentucky, Lexington, Kentucky, United States of America
- Tyler Nix
- Taubman Health Sciences Library, University of Michigan, Ann Arbor, Michigan, United States of America
- Robert M. Shapiro
- Robert M. Fales Health Sciences Library - SEAHEC Medical Library, South East Area Health Education Center, Wilmington, North Carolina, United States of America
- Jeffrey T. Huber
- School of Information Science, University of Kentucky, Lexington, Kentucky, United States of America
6. Cooper C, Court R, Kotas E, Schauberger U. A technical review of three clinical trials register resources indicates where improvements to the search interfaces are needed. Res Synth Methods 2021; 12:384-393. PMID: 33555126; DOI: 10.1002/jrsm.1477.
Abstract
Clinical trials registers form an important part of the search for studies in systematic reviews of intervention effectiveness, but their search interfaces and functionality can make them challenging to search systematically and resource-intensive to search well. We report a technical review of the search interfaces of three leading trials register resources: ClinicalTrials.gov, the EU Clinical Trials Register, and the WHO International Clinical Trials Registry Platform. The technical review used a validated checklist to identify areas where the search interfaces of these resources performed well, adequately, or poorly, and to identify differences between the interfaces. The review found low overall scores for each interface (ClinicalTrials.gov 55/165, the EU Clinical Trials Register 25/165, the WHO International Clinical Trials Registry Platform 32/165). This finding suggests a need for joined-up dialogue between the producers of the registers and the researchers who search them via these interfaces. We also set out four proposed changes that might improve the search interfaces. Trials registers are an invaluable resource in systematic reviews of intervention effectiveness, and with the continued growth in systematic reviews and initiatives such as AllTrials, the need for these resources will only grow. We conclude that small changes to the search interfaces, and improved dialogue with providers, might improve the future search functionality of these valuable resources.
Affiliation(s)
- Chris Cooper
- Department of Clinical, Educational and Health Psychology, University College London, London, UK
- Rachel Court
- Warwick Medical School, University of Warwick, Coventry, UK
- Eleanor Kotas
- York Health Economics Consortium Ltd., YHEC, York, UK
7. Handsearching had best recall but poor efficiency when exporting to a bibliographic tool: case study. J Clin Epidemiol 2020; 123:39-48. DOI: 10.1016/j.jclinepi.2020.03.013.
8. McAllister JT III, Diaz NS. Hosting Inspec on Engineering Village or Web of Science. Reference Services Review 2020. DOI: 10.1108/rsr-08-2019-0050.
Abstract
Purpose
As library budgets continue to constrict, librarians will need to become more familiar with comparing database host platforms. This paper compares Inspec on Elsevier's Engineering Village (EV) and Clarivate's Web of Science (WOS) from a novice user's perspective. The main objectives are to identify some R1 institutions that subscribe to Inspec and to highlight key differences between the two platforms.
Design/methodology/approach
Information on Inspec was gathered from various sources, including the producer's (IET) website and the host platform websites of Elsevier and Clarivate Analytics. Additional evidence was collected from brochures and guides to illustrate the main features and differences that novice users would encounter.
Findings
Most institutions subscribe to Inspec via the EV platform. The study concludes that EV is preferable to WOS for hosting Inspec, owing to a more user-friendly interface, potentially lower cost, and faster platform updates in response to user needs.
Research limitations/implications
As database platforms change over time, areas such as content, interface, and features remain important for information professionals and librarians to monitor. This work can also help librarians plan and develop a process for comparing identical or similar databases on different platforms.
Originality/value
Much of the literature focuses on details unfamiliar to the novice user. This paper provides a unique perspective on how a novice user would weigh the attributes of one host platform against the other. The same review criteria can also be applied in other subjects and disciplines.
9. Gusenbauer M, Haddaway NR. Which academic search systems are suitable for systematic reviews or meta-analyses? Evaluating retrieval qualities of Google Scholar, PubMed, and 26 other resources. Res Synth Methods 2020; 11:181-217. PMID: 31614060; PMCID: PMC7079055; DOI: 10.1002/jrsm.1378.
Abstract
Rigorous evidence identification is essential for systematic reviews and meta-analyses (evidence syntheses) because the selection of relevant studies determines a review's outcome, validity, and explanatory power. Yet the search systems allowing access to this evidence provide varying levels of precision, recall, and reproducibility, and also demand different levels of effort. To date, it remains unclear which search systems are most appropriate for evidence synthesis and why. Advice on which search engines and bibliographic databases to choose for systematic searches is limited and lacks systematic, empirical performance assessments. This study investigates and compares the systematic search qualities of 28 widely used academic search systems, including Google Scholar, PubMed, and Web of Science. A novel, query-based method tests how well users are able to interact with each system and retrieve records from it. The study is the first to show the extent to which search systems can effectively and efficiently perform (Boolean) searches with regard to precision, recall, and reproducibility. We found substantial differences in the performance of search systems, meaning that their usability in systematic searches varies. Indeed, only half of the search systems analyzed, and only a few open-access databases, can be recommended for evidence syntheses without substantial caveats. In particular, our findings demonstrate why Google Scholar is inappropriate as a principal search system. We call for database owners to recognize the requirements of evidence synthesis and for academic journals to reassess quality requirements for systematic reviews. Our findings aim to support researchers in conducting better searches for better evidence synthesis.
Affiliation(s)
- Michael Gusenbauer
- Institute of Innovation Management, Johannes Kepler University Linz, Linz, Austria
- Neal R. Haddaway
- Stockholm Environment Institute, Linnégatan 87D, Stockholm, Sweden
- Africa Centre for Evidence, University of Johannesburg, Johannesburg, South Africa
10. Burns CS, Shapiro RM, Nix T, Huber JT. Search results outliers among MEDLINE platforms. J Med Libr Assoc 2019; 107:364-373. PMID: 31258442; PMCID: PMC6579582; DOI: 10.5195/jmla.2019.622.
Abstract
Objective Hypothetically, content in MEDLINE records is consistent across multiple platforms. Though platforms have different interfaces and requirements for query syntax, results should be similar when the syntax is controlled for across the platforms. The authors investigated how search result counts varied when searching records among five MEDLINE platforms. Methods We created 29 sets of search queries targeting various metadata fields and operators. Within search sets, we adapted 5 distinct, compatible queries to search 5 MEDLINE platforms (PubMed, ProQuest, EBSCOhost, Web of Science, and Ovid), totaling 145 final queries. The 5 queries were designed to be logically and semantically equivalent and were modified only to match platform syntax requirements. We analyzed the result counts and compared PubMed’s MEDLINE result counts to result counts from the other platforms. We identified outliers by measuring the result count deviations using modified z-scores centered around PubMed’s MEDLINE results. Results Web of Science and ProQuest searches were the most likely to deviate from the equivalent PubMed searches. EBSCOhost and Ovid were less likely to deviate from PubMed searches. Ovid’s results were the most consistent with PubMed’s but appeared to apply an indexing algorithm that resulted in lower retrieval sets among equivalent searches in PubMed. Web of Science exhibited problems with exploding or not exploding Medical Subject Headings (MeSH) terms. Conclusion Platform enhancements among interfaces affect record retrieval and challenge the expectation that MEDLINE platforms should, by default, be treated as MEDLINE. Substantial inconsistencies in search result counts, as demonstrated here, should raise concerns about the impact of platform-specific influences on search results.
Affiliation(s)
- Christopher Sean Burns
- Associate Professor, School of Information Science, University of Kentucky, Lexington, KY, USA
- Robert M Shapiro
- Assistant Professor, School of Information Science, University of Kentucky, Lexington, KY, USA
- Tyler Nix
- Informationist, Taubman Health Sciences Library, University of Michigan, Ann Arbor, MI, USA
- Jeffrey T Huber
- Professor, School of Information Science, University of Kentucky, Lexington, KY, USA
11. Spencer AJ, Eldredge JD. Roles for librarians in systematic reviews: a scoping review. J Med Libr Assoc 2018; 106:46-56. PMID: 29339933; PMCID: PMC5764593; DOI: 10.5195/jmla.2018.82.
Abstract
OBJECTIVE What roles do librarians and information professionals play in conducting systematic reviews? Librarians are increasingly called upon to be involved in systematic reviews, but no study has considered all the roles librarians can perform. This inventory of existing and emerging roles aids in defining librarians' systematic review services. METHODS For this scoping review, the authors conducted controlled vocabulary and text-word searches in the PubMed; Library, Information Science & Technology Abstracts; and CINAHL databases. We separately searched for articles published in the Journal of the European Association for Health Information and Libraries, Evidence Based Library and Information Practice, the Journal of the Canadian Health Libraries Association, and Hypothesis. We also text-word searched Medical Library Association annual meeting poster and paper abstracts. RESULTS We identified 18 different roles filled by librarians and other information professionals in conducting systematic reviews from 310 different articles, book chapters, and presented papers and posters. Some roles, such as searching, source selection, and teaching, were well known. Other, less documented roles included planning, question formulation, and peer review. We summarize these different roles and provide an accompanying bibliography of references for in-depth descriptions of them. CONCLUSION Librarians play central roles in systematic review teams, including roles that go beyond searching. This scoping review should encourage librarians who are fulfilling roles not captured here to document them in journal articles and poster and paper presentations.
12. Grant MJ. Thirty years of practitioner-based projects. Health Info Libr J 2014; 31:1-3. PMID: 24751224; DOI: 10.1111/hir.12058.