1. Safranek CW, Huang T, Wright DS, Wright CX, Socrates V, Sangal RB, Iscoe M, Chartash D, Taylor RA. Automated HEART score determination via ChatGPT: Honing a framework for iterative prompt development. J Am Coll Emerg Physicians Open 2024; 5:e13133. [PMID: 38481520; PMCID: PMC10936537; DOI: 10.1002/emp2.13133]
Abstract
Objectives: This study presents a design framework to enhance the accuracy with which large language models (LLMs) such as ChatGPT can extract insights from clinical notes. We illustrate this framework through prompt refinement for the automated determination of HEART (History, ECG, Age, Risk factors, Troponin) risk scores in chest pain evaluation. Methods: We developed a pipeline for LLM prompt testing that employs stochastic repeat testing and quantifies response errors relative to physician assessment. We evaluated the pipeline for automated HEART score determination on a limited set of 24 synthetic clinical notes representing four simulated patients. To assess whether iterative prompt design could improve the LLMs' ability to extract complex clinical concepts and apply rule-based logic to translate them into HEART subscores, we monitored diagnostic performance across prompt iterations. Results: Validation comprised three iterative rounds of prompt improvement for three HEART subscores, with 25 repeat trials totaling 1200 queries each for GPT-3.5 and GPT-4. For both models, the rate of erroneous, non-numerical subscore responses decreased from the initial to the final prompt design. Accuracy of numerical responses for HEART subscores (discrete 0-2 point scale) improved for GPT-4 from the initial to the final prompt iteration, with mean error decreasing from 0.16 to 0.10 points (95% confidence interval: 0.07-0.14). Conclusion: We established a framework for iterative prompt design in the clinical space. Although the results indicate potential for integrating LLMs into structured clinical note analysis, translation to real, large-scale clinical data with appropriate data privacy safeguards is needed.
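The repeat-testing pipeline the abstract describes can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the `query_llm` stub, the toy notes, and the reference subscores are hypothetical stand-ins for an actual LLM API call, real synthetic notes, and physician-assigned scores.

```python
import random
import statistics

def query_llm(prompt: str, note: str, seed: int) -> str:
    """Hypothetical stand-in for a chat-model API call. A real pipeline
    would query an LLM; here we simulate a stochastic response that is
    usually a valid 0-2 subscore but occasionally a non-numerical error."""
    rng = random.Random(f"{note}|{seed}")  # deterministic per (note, trial)
    if rng.random() < 0.05:
        return "Unable to determine"       # erroneous non-numerical answer
    return str(rng.choice([0, 1, 2]))      # numerical subscore

def evaluate_prompt(prompt, notes, reference, trials=25):
    """Repeat-test one prompt: tally non-numerical responses and compute
    mean absolute error of numerical answers vs. reference subscores."""
    errors, non_numerical = [], 0
    for seed in range(trials):
        for note, truth in zip(notes, reference):
            answer = query_llm(prompt, note, seed)
            if answer.strip() in {"0", "1", "2"}:
                errors.append(abs(int(answer) - truth))
            else:
                non_numerical += 1
    total = trials * len(notes)
    return {
        "non_numerical_rate": non_numerical / total,
        "mean_abs_error": statistics.mean(errors),
    }

metrics = evaluate_prompt("Score the History subscore (0-2).",
                          notes=["note A", "note B", "note C"],
                          reference=[2, 1, 0])
print(metrics)
```

Comparing these two metrics across successive prompt revisions is the iterative loop the abstract reports for GPT-3.5 and GPT-4.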
Affiliation(s)
- Conrad W. Safranek
  - Section for Biomedical Informatics and Data Science, Yale University School of Medicine, New Haven, Connecticut, USA
- Thomas Huang
  - Section for Biomedical Informatics and Data Science, Yale University School of Medicine, New Haven, Connecticut, USA
- Donald S. Wright
  - Department of Emergency Medicine, Yale University School of Medicine, New Haven, Connecticut, USA
- Catherine X. Wright
  - Department of Cardiovascular Medicine, Yale University School of Medicine, New Haven, Connecticut, USA
- Vimig Socrates
  - Section for Biomedical Informatics and Data Science, Yale University School of Medicine, New Haven, Connecticut, USA
- Rohit B. Sangal
  - Department of Emergency Medicine, Yale University School of Medicine, New Haven, Connecticut, USA
- Mark Iscoe
  - Section for Biomedical Informatics and Data Science, Yale University School of Medicine, New Haven, Connecticut, USA
  - Department of Emergency Medicine, Yale University School of Medicine, New Haven, Connecticut, USA
- David Chartash
  - Section for Biomedical Informatics and Data Science, Yale University School of Medicine, New Haven, Connecticut, USA
  - School of Medicine, University College Dublin, National University of Ireland, Dublin, Republic of Ireland
- R. Andrew Taylor
  - Section for Biomedical Informatics and Data Science, Yale University School of Medicine, New Haven, Connecticut, USA
  - Department of Emergency Medicine, Yale University School of Medicine, New Haven, Connecticut, USA
2. Gilson A, Safranek CW, Huang T, Socrates V, Chi L, Taylor RA, Chartash D. Correction: How Does ChatGPT Perform on the United States Medical Licensing Examination (USMLE)? The Implications of Large Language Models for Medical Education and Knowledge Assessment. JMIR Med Educ 2024; 10:e57594. [PMID: 38412478; PMCID: PMC10933712; DOI: 10.2196/57594]
Abstract
[This corrects the article DOI: 10.2196/45312.].
Affiliation(s)
- Aidan Gilson
  - Section for Biomedical Informatics and Data Science, Yale University School of Medicine, New Haven, CT, United States
  - Department of Emergency Medicine, Yale University School of Medicine, New Haven, CT, United States
- Conrad W Safranek
  - Section for Biomedical Informatics and Data Science, Yale University School of Medicine, New Haven, CT, United States
- Thomas Huang
  - Department of Emergency Medicine, Yale University School of Medicine, New Haven, CT, United States
- Vimig Socrates
  - Section for Biomedical Informatics and Data Science, Yale University School of Medicine, New Haven, CT, United States
  - Program of Computational Biology and Bioinformatics, Yale University, New Haven, CT, United States
- Ling Chi
  - Section for Biomedical Informatics and Data Science, Yale University School of Medicine, New Haven, CT, United States
- Richard Andrew Taylor
  - Section for Biomedical Informatics and Data Science, Yale University School of Medicine, New Haven, CT, United States
  - Department of Emergency Medicine, Yale University School of Medicine, New Haven, CT, United States
- David Chartash
  - Section for Biomedical Informatics and Data Science, Yale University School of Medicine, New Haven, CT, United States
  - School of Medicine, University College Dublin, National University of Ireland, Dublin, Ireland
3. Chartash D, Bruno MA. Algorithms in medical decision-making and in everyday life: what's the difference? Diagnosis (Berl) 2024. [PMID: 38386866; DOI: 10.1515/dx-2024-0010]
Abstract
Algorithms are a ubiquitous part of modern life. Although algorithms have been a component of medicine since the earliest efforts to deploy computers in clinical care, clinicians' uptake of decision support and of algorithms that address cognitive biases has been limited. This resistance is not confined to algorithmic clinical decision support; it extends to evidence-based and stochastic reasoning and to the implications of the forcing functions of the electronic medical record. Physicians' resistance to algorithmic support in clinical decision making stands in stark contrast to their general acceptance of algorithmic support in other aspects of life.
Affiliation(s)
- David Chartash
  - Section for Biomedical Informatics and Data Science, Yale University School of Medicine, New Haven, USA
  - School of Medicine, University College Dublin - National University of Ireland, Dublin, Republic of Ireland
- Michael A Bruno
  - Penn State Milton S. Hershey Medical Center and College of Medicine, Hershey, PA, USA
4. Taylor RA, Gilson A, Schulz W, Lopez K, Young P, Pandya S, Coppi A, Chartash D, Fiellin D, D’Onofrio G. Computational phenotypes for patients with opioid-related disorders presenting to the emergency department. PLoS One 2023; 18:e0291572. [PMID: 37713393; PMCID: PMC10503758; DOI: 10.1371/journal.pone.0291572]
Abstract
OBJECTIVE We aimed to discover computationally derived phenotypes of opioid-related patient presentations to the emergency department (ED) from clinical notes and structured electronic health record (EHR) data. METHODS This was a retrospective study of ED visits from 2013 to 2020 across ten sites within a regional healthcare network. We derived phenotypes from visits by patients ≥18 years of age with at least one prior or current documentation of an opioid-related diagnosis. Natural language processing was used to extract clinical entities from notes, which were combined with structured data within the EHR to create a set of features. We performed latent Dirichlet allocation (LDA) to identify topics within these features, and groups of patient presentations with similar attributes were identified by cluster analysis. RESULTS In total, 82,577 ED visits met inclusion criteria. Thirty topics were discovered, spanning substance use disorder, chronic conditions, mental health, and medical management. Clustering on these topics identified nine unique cohorts, with one-year survival ranging from 84.2% to 96.8%, rates of one-year ED return from 9% to 34%, rates of one-year opioid events from 10% to 17%, rates of medications for opioid use disorder from 17% to 43%, and a median Charlson comorbidity index of 2 to 8. Two groups of phenotypes were identified, related to chronic substance use disorder and acute overdose. CONCLUSIONS Our results indicate distinct phenotypic clusters with varying patient-oriented outcomes, which provide future targets for better allocation of resources and therapeutics. This highlights the heterogeneity of the overall population and the need to develop targeted interventions for each subpopulation.
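The topic-then-cluster pipeline described in the methods can be sketched with scikit-learn. This is a toy illustration under invented data, not the study's code: the six mini "notes", the vocabulary, and the small topic/cluster counts (the paper used 30 topics and found 9 cohorts) are all illustrative.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.cluster import KMeans

# Invented miniature "visit notes"; bag-of-words counts stand in for
# the NLP-extracted entities plus structured EHR features of the study.
notes = [
    "opioid overdose naloxone respiratory depression",
    "chronic pain oxycodone refill hypertension",
    "heroin use withdrawal anxiety depression",
    "fentanyl overdose unresponsive naloxone",
    "methadone maintenance follow up diabetes",
    "buprenorphine induction opioid use disorder",
]
counts = CountVectorizer().fit_transform(notes)

# LDA assigns each visit a distribution over latent topics.
lda = LatentDirichletAllocation(n_components=3, random_state=0)
topic_mix = lda.fit_transform(counts)

# Clustering the topic mixtures groups similar presentations into cohorts.
clusters = KMeans(n_clusters=2, random_state=0, n_init=10).fit_predict(topic_mix)
print(clusters)
```

Cohort-level outcomes (survival, ED returns, and so on) would then be summarized per cluster label.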
Affiliation(s)
- R. Andrew Taylor
  - Department of Emergency Medicine, Yale School of Medicine, New Haven, Connecticut, United States of America
  - Department of Biostatistics, Yale School of Public Health, New Haven, Connecticut, United States of America
  - Section of Biomedical Informatics and Data Science, Yale School of Medicine, New Haven, Connecticut, United States of America
- Aidan Gilson
  - Department of Emergency Medicine, Yale School of Medicine, New Haven, Connecticut, United States of America
- Wade Schulz
  - Section of Biomedical Informatics and Data Science, Yale School of Medicine, New Haven, Connecticut, United States of America
  - Department of Laboratory Medicine, Yale School of Medicine, New Haven, Connecticut, United States of America
- Kevin Lopez
  - Section of Biomedical Informatics and Data Science, Yale School of Medicine, New Haven, Connecticut, United States of America
- Patrick Young
  - Department of Laboratory Medicine, Yale School of Medicine, New Haven, Connecticut, United States of America
- Sameer Pandya
  - Department of Laboratory Medicine, Yale School of Medicine, New Haven, Connecticut, United States of America
- Andreas Coppi
  - Department of Internal Medicine, Yale School of Medicine, New Haven, Connecticut, United States of America
- David Chartash
  - Department of Biostatistics, Yale School of Public Health, New Haven, Connecticut, United States of America
  - Section of Biomedical Informatics and Data Science, Yale School of Medicine, New Haven, Connecticut, United States of America
  - School of Medicine, University College Dublin - National University of Ireland, Dublin, Ireland
- David Fiellin
  - Department of Internal Medicine, Yale School of Medicine, New Haven, Connecticut, United States of America
- Gail D’Onofrio
  - Department of Emergency Medicine, Yale School of Medicine, New Haven, Connecticut, United States of America
5. Safranek CW, Sidamon-Eristoff AE, Gilson A, Chartash D. The Role of Large Language Models in Medical Education: Applications and Implications. JMIR Med Educ 2023; 9:e50945. [PMID: 37578830; PMCID: PMC10463084; DOI: 10.2196/50945]
Abstract
Large language models (LLMs) such as ChatGPT have sparked extensive discourse within the medical education community, spurring both excitement and apprehension. Written from the perspective of medical students, this editorial offers insights gleaned through immersive interactions with ChatGPT, contextualized by ongoing research into the imminent role of LLMs in health care. Three distinct positive use cases for ChatGPT were identified: facilitating differential diagnosis brainstorming, providing interactive practice cases, and aiding in multiple-choice question review. These use cases can effectively help students learn foundational medical knowledge during the preclinical curriculum while reinforcing the learning of core Entrustable Professional Activities. Simultaneously, we highlight key limitations of LLMs in medical education, including their insufficient ability to teach the integration of contextual and external information, comprehend sensory and nonverbal cues, cultivate rapport and interpersonal interaction, and align with overarching medical education and patient care goals. Through interacting with LLMs to augment learning during medical school, students can gain an understanding of their strengths and weaknesses. This understanding will be pivotal as we navigate a health care landscape increasingly intertwined with LLMs and artificial intelligence.
Affiliation(s)
- Conrad W Safranek
  - Section for Biomedical Informatics and Data Science, Yale University School of Medicine, New Haven, CT, United States
- Aidan Gilson
  - Section for Biomedical Informatics and Data Science, Yale University School of Medicine, New Haven, CT, United States
- David Chartash
  - Section for Biomedical Informatics and Data Science, Yale University School of Medicine, New Haven, CT, United States
  - School of Medicine, University College Dublin, National University of Ireland, Dublin, Ireland
6. Newton AJH, Chartash D, Kleinstein SH, McDougal RA. A pipeline for the retrieval and extraction of domain-specific information with application to COVID-19 immune signatures. BMC Bioinformatics 2023; 24:292. [PMID: 37474900; PMCID: PMC10357743; DOI: 10.1186/s12859-023-05397-8]
Abstract
BACKGROUND The accelerating pace of biomedical publication has made it impractical to manually and systematically identify papers containing specific information and to extract that information. This is especially challenging when the information resides beyond titles or abstracts. For emerging science, with a limited set of known papers of interest and an incomplete information model, this is of pressing concern. A timely example in retrospect is the identification of immune signatures (coherent sets of biomarkers) driving differential SARS-CoV-2 infection outcomes. IMPLEMENTATION We built a classifier to identify papers containing domain-specific information from document embeddings of the title and abstract. To train this classifier with limited data, we developed an iterative process leveraging pre-trained SPECTER document embeddings, support vector machine (SVM) classifiers, and web-enabled expert review to progressively augment the training set. The resulting training set was used to create a classifier to identify papers containing domain-specific information. Finally, information was extracted from these papers through a semi-automated system that directly solicited the paper authors to respond via a web-based form. RESULTS We demonstrate a classifier that retrieves papers with human COVID-19 immune signatures with a positive predictive value of 86%. The type of immune signature (e.g., gene expression vs. other types of profiling) was also identified, with a positive predictive value of 74%. Semi-automated queries to the corresponding authors of these publications requesting signature information achieved a 31% response rate. CONCLUSIONS Our results demonstrate the efficacy of using an SVM classifier with document embeddings of the title and abstract to retrieve papers with domain-specific information, even when that information is rarely present in the abstract. Targeted author engagement based on classifier predictions offers a promising pathway to build a semi-structured representation of such information. Through this approach, partially automated literature mining can help rapidly create semi-structured knowledge repositories for automatic analysis of emerging health threats.
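The iterative train-review-augment loop can be sketched as below. This is a simulation under stated assumptions, not the paper's pipeline: random Gaussian vectors stand in for SPECTER title+abstract embeddings, and a label array stands in for the web-enabled expert reviewer.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

def embed(n, relevant):
    """Hypothetical embeddings: relevant and irrelevant papers cluster
    around different centers (stand-in for SPECTER vectors)."""
    center = np.full(16, 1.0) if relevant else np.full(16, -1.0)
    return center + rng.normal(scale=0.3, size=(n, 16))

# Small labeled seed set and a larger unlabeled candidate pool.
X_train = np.vstack([embed(5, True), embed(5, False)])
y_train = np.array([1] * 5 + [0] * 5)
pool = np.vstack([embed(20, True), embed(20, False)])
pool_truth = np.array([1] * 20 + [0] * 20)  # simulated expert review

for _ in range(3):  # a few augmentation rounds
    clf = LinearSVC().fit(X_train, y_train)
    # Send the classifier's most confident positives to "expert review",
    # then fold the reviewed labels back into the training set.
    scores = clf.decision_function(pool)
    picked = np.argsort(scores)[-5:]
    X_train = np.vstack([X_train, pool[picked]])
    y_train = np.concatenate([y_train, pool_truth[picked]])

preds = clf.predict(pool)
ppv = float((pool_truth[preds == 1] == 1).mean())  # positive predictive value
print(round(ppv, 2))
```

Positive predictive value on the retrieved set is the same metric the abstract reports (86% for signature papers).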
Affiliation(s)
- Adam J H Newton
  - Department of Physiology and Pharmacology, SUNY Downstate Health Sciences University, Brooklyn, NY 11203, USA
  - Yale Center for Medical Informatics, Yale School of Medicine, Yale University, New Haven, CT 06511, USA
  - Department of Biostatistics, Yale School of Public Health, Yale University, New Haven, CT 06511, USA
  - Department of Pathology, Yale School of Medicine, Yale University, New Haven, CT 06511, USA
- David Chartash
  - Yale Center for Medical Informatics, Yale School of Medicine, Yale University, New Haven, CT 06511, USA
  - Department of Biostatistics, Yale School of Public Health, Yale University, New Haven, CT 06511, USA
  - School of Medicine, University College Dublin - National University of Ireland, Dublin, Co. Dublin, Republic of Ireland
- Steven H Kleinstein
  - Department of Pathology, Yale School of Medicine, Yale University, New Haven, CT 06511, USA
  - Department of Immunobiology, Yale School of Medicine, Yale University, New Haven, CT 06511, USA
  - Program in Computational Biology and Bioinformatics, Yale University, New Haven, CT 06511, USA
- Robert A McDougal
  - Yale Center for Medical Informatics, Yale School of Medicine, Yale University, New Haven, CT 06511, USA
  - Department of Biostatistics, Yale School of Public Health, Yale University, New Haven, CT 06511, USA
  - Program in Computational Biology and Bioinformatics, Yale University, New Haven, CT 06511, USA
7. Gilson A, Safranek CW, Huang T, Socrates V, Chi L, Taylor RA, Chartash D. Authors' Reply to: Variability in Large Language Models' Responses to Medical Licensing and Certification Examinations. JMIR Med Educ 2023; 9:e50336. [PMID: 37440299; PMCID: PMC10375396; DOI: 10.2196/50336]
Affiliation(s)
- Aidan Gilson
  - Section for Biomedical Informatics and Data Science, Yale University School of Medicine, New Haven, CT, United States
  - Department of Emergency Medicine, Yale University School of Medicine, New Haven, CT, United States
- Conrad W Safranek
  - Section for Biomedical Informatics and Data Science, Yale University School of Medicine, New Haven, CT, United States
- Thomas Huang
  - Department of Emergency Medicine, Yale University School of Medicine, New Haven, CT, United States
- Vimig Socrates
  - Section for Biomedical Informatics and Data Science, Yale University School of Medicine, New Haven, CT, United States
  - Program of Computational Biology and Bioinformatics, Yale University, New Haven, CT, United States
- Ling Chi
  - Section for Biomedical Informatics and Data Science, Yale University School of Medicine, New Haven, CT, United States
- Richard Andrew Taylor
  - Section for Biomedical Informatics and Data Science, Yale University School of Medicine, New Haven, CT, United States
  - Department of Emergency Medicine, Yale University School of Medicine, New Haven, CT, United States
- David Chartash
  - Section for Biomedical Informatics and Data Science, Yale University School of Medicine, New Haven, CT, United States
  - School of Medicine, University College Dublin, National University of Ireland, Dublin, Ireland
8. Socrates V, Gilson A, Lopez K, Chi L, Taylor RA, Chartash D. Predicting relations between SOAP note sections: The value of incorporating a clinical information model. J Biomed Inform 2023; 141:104360. [PMID: 37061014; PMCID: PMC10197152; DOI: 10.1016/j.jbi.2023.104360]
Abstract
Physician progress notes are frequently organized into Subjective, Objective, Assessment, and Plan (SOAP) sections. The Assessment section synthesizes information recorded in the Subjective and Objective sections, and the Plan section documents tests and treatments to narrow the differential diagnosis and manage symptoms. Classifying the relationship between the Assessment and Plan sections has been suggested to provide valuable insight into clinical reasoning. In this work, we use a novel human-in-the-loop pipeline to classify the relationships between the Assessment and Plan sections of SOAP notes as a part of the n2c2 2022 Track 3 Challenge. In particular, we use a clinical information model constructed from both the entailment logic expected from the aforementioned Challenge and the problem-oriented medical record. This information model is used to label named entities as primary and secondary problems/symptoms, events and complications in all four SOAP sections. We iteratively train separate Named Entity Recognition models and use them to annotate entities in all notes/sections. We fine-tune a downstream RoBERTa-large model to classify the Assessment-Plan relationship. We evaluate multiple language model architectures, preprocessing parameters, and methods of knowledge integration, achieving a maximum macro-F1 score of 82.31%. Our initial model achieves top-2 performance during the challenge (macro-F1: 81.52%, competitors' macro-F1 range: 74.54%-82.12%). We improved our model by incorporating post-challenge annotations (S&O sections), outperforming the top model from the Challenge. We also used Shapley additive explanations to investigate the extent of language model clinical logic, under the lens of our clinical information model. We find that the model often uses shallow heuristics and nonspecific attention when making predictions, suggesting language model knowledge integration requires further research.
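The Assessment-Plan relation task the abstract describes can be framed as classifying a text pair. As a toy illustration only (not the authors' NER plus RoBERTa pipeline), the sketch below uses a TF-IDF and logistic-regression baseline on invented mini-examples; the `[SEP]` pairing convention and the two labels are simplified stand-ins for the n2c2 2022 Track 3 setup.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented (assessment, plan, relation) triples.
examples = [
    ("acute pneumonia with hypoxia", "start antibiotics and oxygen", "direct"),
    ("likely viral URI", "supportive care, fluids", "direct"),
    ("chronic stable angina", "continue home statin", "direct"),
    ("pneumonia improving", "dermatology referral for rash", "not relevant"),
    ("viral URI", "orthopedic follow-up for old fracture", "not relevant"),
    ("stable angina", "renew eyeglass prescription", "not relevant"),
]

# Concatenate the two sections with a separator so one text classifier
# sees both sides of the pair.
X = [f"{a} [SEP] {p}" for a, p, _ in examples]
y = [label for _, _, label in examples]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(X, y)

print(clf.predict(["acute pneumonia [SEP] start antibiotics"]))
```

A transformer model replaces the TF-IDF features in the real system, but the pair-in, relation-label-out framing is the same.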
Affiliation(s)
- Vimig Socrates
  - Section for Biomedical Informatics and Data Science, Yale University School of Medicine, 300 George St, New Haven, CT 06511, USA
  - Department of Emergency Medicine, Yale University School of Medicine, 464 Congress Ave #260, New Haven, CT 06519, USA
  - Program of Computational Biology and Bioinformatics, Yale University, 300 George St, New Haven, CT 06511, USA
- Aidan Gilson
  - Department of Emergency Medicine, Yale University School of Medicine, 464 Congress Ave #260, New Haven, CT 06519, USA
- Kevin Lopez
  - Section for Biomedical Informatics and Data Science, Yale University School of Medicine, 300 George St, New Haven, CT 06511, USA
  - Department of Emergency Medicine, Yale University School of Medicine, 464 Congress Ave #260, New Haven, CT 06519, USA
- Ling Chi
  - Department of Emergency Medicine, Yale University School of Medicine, 464 Congress Ave #260, New Haven, CT 06519, USA
- Richard Andrew Taylor
  - Section for Biomedical Informatics and Data Science, Yale University School of Medicine, 300 George St, New Haven, CT 06511, USA
  - Department of Emergency Medicine, Yale University School of Medicine, 464 Congress Ave #260, New Haven, CT 06519, USA
- David Chartash
  - Section for Biomedical Informatics and Data Science, Yale University School of Medicine, 300 George St, New Haven, CT 06511, USA
  - School of Medicine, University College Dublin - National University of Ireland, Dublin, Health Sciences Centre, Belfield, Dublin 4, Ireland
9. Gilson A, Safranek CW, Huang T, Socrates V, Chi L, Taylor RA, Chartash D. How Does ChatGPT Perform on the United States Medical Licensing Examination (USMLE)? The Implications of Large Language Models for Medical Education and Knowledge Assessment. JMIR Med Educ 2023; 9:e45312. [PMID: 36753318; PMCID: PMC9947764; DOI: 10.2196/45312]
Abstract
BACKGROUND Chat Generative Pre-trained Transformer (ChatGPT) is a 175-billion-parameter natural language processing model that can generate conversation-style responses to user input. OBJECTIVE This study aimed to evaluate the performance of ChatGPT on questions within the scope of the United States Medical Licensing Examination (USMLE) Step 1 and Step 2 exams, and to analyze its responses for user interpretability. METHODS We used 2 sets of multiple-choice questions to evaluate ChatGPT's performance, each with questions pertaining to Step 1 and Step 2. The first set was derived from AMBOSS, a commonly used question bank for medical students, which also provides statistics on question difficulty and on performance relative to the user base. The second set was the National Board of Medical Examiners (NBME) free 120 questions. ChatGPT's performance was compared with that of 2 other large language models, GPT-3 and InstructGPT. The text output of each ChatGPT response was evaluated across 3 qualitative metrics: logical justification of the answer selected, presence of information internal to the question, and presence of information external to the question. RESULTS Across the 4 data sets (AMBOSS-Step1, AMBOSS-Step2, NBME-Free-Step1, and NBME-Free-Step2), ChatGPT achieved accuracies of 44% (44/100), 42% (42/100), 64.4% (56/87), and 57.8% (59/102), respectively. ChatGPT outperformed InstructGPT by 8.15% on average across all data sets, while GPT-3 performed similarly to random chance. The model demonstrated a significant decrease in performance as question difficulty increased (P=.01) within the AMBOSS-Step1 data set. Logical justification for ChatGPT's answer selection was present in 100% of outputs for the NBME data sets, and information internal to the question was present in 96.8% (183/189) of all questions. The presence of information external to the question was 44.5% and 27% lower for incorrect answers relative to correct answers on the NBME-Free-Step1 (P<.001) and NBME-Free-Step2 (P=.001) data sets, respectively. CONCLUSIONS ChatGPT marks a significant improvement for natural language processing models on the task of medical question answering. By performing above a 60% threshold on the NBME-Free-Step1 data set, we show that the model achieves the equivalent of a passing score for a third-year medical student. Additionally, we highlight ChatGPT's capacity to provide logic and informational context for the majority of its answers. Taken together, these findings make a compelling case for the potential applications of ChatGPT as an interactive medical education tool to support learning.
Affiliation(s)
- Aidan Gilson
  - Section for Biomedical Informatics and Data Science, Yale University School of Medicine, New Haven, CT, United States
  - Department of Emergency Medicine, Yale University School of Medicine, New Haven, CT, United States
- Conrad W Safranek
  - Section for Biomedical Informatics and Data Science, Yale University School of Medicine, New Haven, CT, United States
- Thomas Huang
  - Department of Emergency Medicine, Yale University School of Medicine, New Haven, CT, United States
- Vimig Socrates
  - Section for Biomedical Informatics and Data Science, Yale University School of Medicine, New Haven, CT, United States
  - Program of Computational Biology and Bioinformatics, Yale University, New Haven, CT, United States
- Ling Chi
  - Section for Biomedical Informatics and Data Science, Yale University School of Medicine, New Haven, CT, United States
- Richard Andrew Taylor
  - Section for Biomedical Informatics and Data Science, Yale University School of Medicine, New Haven, CT, United States
  - Department of Emergency Medicine, Yale University School of Medicine, New Haven, CT, United States
- David Chartash
  - Section for Biomedical Informatics and Data Science, Yale University School of Medicine, New Haven, CT, United States
  - School of Medicine, University College Dublin, National University of Ireland, Dublin, Ireland
10. Levy DR, Sloss EA, Chartash D, Corley ST, Mishuris RG, Rosenbloom ST, Tiase VL. Reflections on the Documentation Burden Reduction AMIA Plenary Session through the Lens of 25 × 5. Appl Clin Inform 2023; 14:11-15. [PMID: 36356593; PMCID: PMC9812582; DOI: 10.1055/a-1976-2052]
Affiliation(s)
- Deborah R. Levy
  - Department of Veterans Affairs, Pain Research, Multimorbidities, and Education (PRIME) Center, VA-Connecticut, United States
  - Yale University School of Medicine, New Haven, Connecticut, United States
- Elizabeth A. Sloss
  - College of Nursing, University of Utah, Salt Lake City, Utah, United States
- David Chartash
  - Center for Medical Informatics, Yale University School of Medicine, New Haven, Connecticut, United States
- Sarah T. Corley
  - MITRE Corporation, Center for Government Effectiveness and Modernization, McLean, Virginia, United States
- Rebecca G. Mishuris
  - Section of General Internal Medicine, Department of Medicine, Boston University School of Medicine, Boston, Massachusetts, United States
- S. Trent Rosenbloom
  - Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, Tennessee, United States
- Victoria L. Tiase
  - Department of Biomedical Informatics, University of Utah, Salt Lake City, Utah, United States
11. Chartash D, Rosenman M, Wang K, Chen E. Informatics in Undergraduate Medical Education: Analysis of Competency Frameworks and Practices Across North America. JMIR Med Educ 2022; 8:e39794. [PMID: 36099007; PMCID: PMC9516378; DOI: 10.2196/39794]
Abstract
BACKGROUND With the advent of competency-based medical education, as well as Canadian efforts to include clinical informatics within undergraduate medical education, competency frameworks in the United States have not emphasized the skills associated with clinical informatics pertinent to the broader practice of medicine. OBJECTIVE By examining the competency frameworks with which undergraduate medical education in clinical informatics has been developed in Canada and the United States, we hypothesized that there is a gap: the lack of a unified competency set and frame for clinical informatics education across North America. METHODS We performed directional competency mapping between Canadian and American graduate clinical informatics competencies and general graduate medical education competencies. Directional competency mapping was performed between Canadian roles and American common program requirements using keyword matching at the subcompetency and enabling competency levels. In addition, for general graduate medical education competencies, the Physician Competency Reference Set developed for the Liaison Committee on Medical Education was used as a direct means of computing the ontological overlap between competency frameworks. RESULTS Upon mapping Canadian roles to American competencies via both undergraduate and graduate medical education competency frameworks, the difference in focus between the 2 countries can be thematically described as a difference between the concepts of clinical and management reasoning. 
CONCLUSIONS We suggest that the development or deployment of informatics competencies in undergraduate medical education should focus on 3 items: the teaching of diagnostic reasoning, such that the information tasks that comprise both clinical and management reasoning can be discussed; precision medical education, where informatics can provide for more fine-grained evaluation; and assessment methods to support traditional pedagogical efforts (both at the bedside and beyond). Assessment using cases or structured assessments (eg, Objective Structured Clinical Examinations) would help students draw parallels between clinical informatics and fundamental clinical subjects and would better emphasize the cognitive techniques taught through informatics.
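The directional keyword matching described in the METHODS can be sketched in a few lines; the roles, competency statements, and threshold below are invented for illustration and are not drawn from the actual CanMEDS or ACGME frameworks:

```python
# Hedged sketch of directional competency mapping via keyword matching.
# A source item (e.g., a Canadian role) maps to a target item (e.g., an
# American competency) when their statements share >= `threshold` keywords.

STOPWORDS = frozenset({"the", "of", "and", "to", "in", "a"})

def keywords(statement):
    """Lowercase, split, strip punctuation, and drop stopwords."""
    return {w.strip(".,;:") for w in statement.lower().split()} - STOPWORDS

def map_competencies(source, target, threshold=1):
    """Directional mapping: each source item lists the target items it matches."""
    return {
        s_name: [t_name for t_name, t_text in target.items()
                 if len(keywords(s_text) & keywords(t_text)) >= threshold]
        for s_name, s_text in source.items()
    }

# Invented example statements (not real framework text):
canadian_roles = {
    "Communicator": "effective communication with patients and families",
}
american_competencies = {
    "Interpersonal Skills": "interpersonal and communication skills with patients",
    "Medical Knowledge": "established and evolving biomedical knowledge",
}
result = map_competencies(canadian_roles, american_competencies)
```

Ontological overlap between whole frameworks could then be computed over these per-item match lists.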
Affiliation(s)
- David Chartash
- School of Medicine, University College Dublin - National University of Ireland, Dublin, Ireland
- Center for Medical Informatics, Yale University School of Medicine, New Haven, CT, United States
- Marc Rosenman
- Ann & Robert H Lurie Children's Hospital of Chicago, Chicago, IL, United States
- Northwestern University Feinberg School of Medicine, Chicago, IL, United States
- Karen Wang
- Center for Medical Informatics, Yale University School of Medicine, New Haven, CT, United States
- Elizabeth Chen
- Center for Biomedical Informatics, The Warren Alpert Medical School of Brown University, Providence, RI, United States
12
Chartash D, Hart L. Building the Generalist Physician to Support Adolescence and Emerging Adulthood: A Narrative Review. Cureus 2022; 14:e22533. [PMID: 35345691 PMCID: PMC8956274 DOI: 10.7759/cureus.22533]
Abstract
Undergraduate medical education serves as a foundation for the medical student to develop the skills of a generalist physician. Given the "blurring" of the demarcations between childhood and adulthood and the increased scope of pediatric practice, an extra layer has been added to medical education which seeks to address care across the lifespan. While approaches have been developed to teach this layer, clerkship reform has not focused on advancing the clinical science of adolescence. Furthermore, as we look towards the vanguard of entrustable professional activities (EPAs), transition care for the adolescent has received minimal attention. Drawing on prior examples of curriculum integration between specialties as well as solutions to complex care management from clinical reasoning, we suggest that developing the generalist physician requires attention to the combined medicine-pediatrics specialty.
13
Andrews J, Chartash D, Hay S. Gender bias in resident evaluations: Natural language processing and competency evaluation. Med Educ 2021; 55:1383-1387. [PMID: 34224606 DOI: 10.1111/medu.14593]
Abstract
BACKGROUND Research shows that female trainees experience evaluation penalties for gender non-conforming behaviour during medical training. Studies of medical education evaluations and performance scores do reflect a gender bias, though methodologies vary and results have not been consistent. OBJECTIVE We sought to examine the differences in word use, competency themes and length within written evaluations of internal medicine residents at scale, considering the impact of both faculty and resident gender. We hypothesised that female internal medicine residents receive more negative, and thematically different, feedback than male residents. METHODS This study utilised a corpus of 3864 individual responses to positive and negative questions over the course of six years (2012-2018) within Yale University School of Medicine's internal medicine residency. Researchers developed a sentiment model to assess the valence of evaluation responses. We then used natural language processing (NLP) to evaluate whether female versus male residents received more positive or negative feedback and whether that feedback focussed on different Accreditation Council for Graduate Medical Education (ACGME) core competencies. The evaluator-evaluatee gender dyad was analysed to assess its impact on the quantity and quality of feedback. RESULTS We found that female and male residents did not have substantively different numbers of positive or negative comments. While certain competencies were discussed more than others, gender did not seem to influence which competencies were discussed. Neither group of trainees received more written feedback overall, though female evaluators tended to write longer evaluations. CONCLUSIONS We conclude that when examined at scale, quantitative gender differences are not as prevalent as has been seen in qualitative work.
We suggest that further investigation of linguistic phenomena (such as context) is warranted to reconcile this finding with prior work.
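As a toy illustration of the kind of corpus comparison reported above (not the study's sentiment model or data), one can tally comment valence by resident gender before any statistical testing:

```python
# Toy valence tally by resident gender; the data are invented.
from collections import Counter

comments = [
    ("F", "positive"), ("F", "negative"), ("F", "positive"),
    ("M", "positive"), ("M", "negative"), ("M", "positive"),
]

tally = Counter(comments)  # (gender, valence) -> count

def negative_rate(gender):
    """Fraction of a gender's comments that are negative."""
    total = sum(n for (g, _), n in tally.items() if g == gender)
    return tally[(gender, "negative")] / total

f_neg = negative_rate("F")  # 1/3
m_neg = negative_rate("M")  # 1/3: no difference in this toy corpus
```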
Affiliation(s)
- Jane Andrews
- Department of Internal Medicine, The University of Texas Health Science Center at Houston John P and Katherine G McGovern Medical School, Houston, TX, USA
- David Chartash
- Center for Medical Informatics, Yale University School of Medicine, New Haven, CT, USA
- Seonaid Hay
- Department of Internal Medicine, Yale University School of Medicine, New Haven, CT, USA
14
Chartash D, Sharifi M, Emerson B, Frank R, Schoenfeld EM, Tanner J, Brandt C, Taylor RA. Documentation of Shared Decisionmaking in the Emergency Department. Ann Emerg Med 2021; 78:637-649. [PMID: 34340873 DOI: 10.1016/j.annemergmed.2021.04.038]
Abstract
STUDY OBJECTIVE While patient-centered communication and shared decisionmaking are increasingly recognized as vital aspects of clinical practice, little is known about their characteristics in real-world emergency department (ED) settings. We constructed a natural language processing tool to identify patient-centered communication as documented in ED notes and to describe visit-level, site-level, and temporal patterns within a large health system. METHODS This was a 2-part study involving (1) the development and validation of a natural language processing tool using regular expressions to identify shared decisionmaking and (2) a retrospective analysis using mixed effects logistic regression and trend analysis of shared decisionmaking and general patient discussion, applying the natural language processing tool to ED physician and advanced practice provider notes from 2013 to 2020. RESULTS Compared to chart review of 600 ED notes, the accuracy rates of the natural language processing tool for identification of shared decisionmaking and general patient discussion were 96.7% (95% confidence interval [CI] 94.9% to 97.9%) and 88.9% (95% CI 86.1% to 91.3%), respectively. The natural language processing tool identified shared decisionmaking in 58,246 (2.2%) and general patient discussion in 590,933 (22%) notes. From 2013 to 2020, natural language processing-detected shared decisionmaking increased 300% and general patient discussion increased 50%. We observed higher odds of shared decisionmaking documentation among physicians versus advanced practice providers (odds ratio [OR] 1.14, 95% CI 1.07 to 1.23) and among female versus male patients (OR 1.13, 95% CI 1.11 to 1.15). Black patients had lower odds of shared decisionmaking (OR 0.86, 95% CI 0.84 to 0.88) compared with White patients. Shared decisionmaking and general patient discussion were also associated with higher levels of triage and commercial insurance status. CONCLUSION In this study, we developed and validated a natural language processing tool using regular expressions to extract shared decisionmaking from ED notes and found multiple potential factors contributing to variation, including social, demographic, temporal, and presentation characteristics.
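A regular-expression detector of the kind validated in the study might be sketched as follows; these patterns are illustrative assumptions, not the authors' published expression set:

```python
# Minimal regex-based detector for shared-decisionmaking language in ED notes.
# The patterns below are illustrative, not the study's validated set.
import re

SDM_RE = re.compile(
    r"\bshared decision[- ]?making\b"
    r"|\bdiscussed (?:the )?(?:risks?,? (?:and )?benefits?|options)\b"
    r"|\bpatient (?:elected|opted|chose)\b",
    re.IGNORECASE,
)

def detect_sdm(note: str) -> bool:
    """True when the note contains shared-decisionmaking language."""
    return SDM_RE.search(note) is not None

notes = [
    "Discussed the risks and benefits of CT vs observation; patient elected observation.",
    "Patient presents with chest pain. EKG unremarkable.",
]
flags = [detect_sdm(n) for n in notes]  # [True, False]
```

In practice such a detector would be validated against chart review, as the study did across 600 notes.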
Affiliation(s)
- David Chartash
- Center for Medical Informatics, Yale University School of Medicine, New Haven, CT
- Mona Sharifi
- Center for Medical Informatics, Yale University School of Medicine, New Haven, CT; Department of Pediatrics, Yale University School of Medicine, New Haven, CT
- Beth Emerson
- Department of Pediatrics, Yale University School of Medicine, New Haven, CT; Department of Emergency Medicine, Yale University School of Medicine, New Haven, CT
- Robert Frank
- Department of Linguistics, Yale University, New Haven, CT
- Elizabeth M Schoenfeld
- Department of Emergency Medicine, University of Massachusetts Medical School - Baystate Institute for Healthcare Delivery and Population Science, Springfield, MA
- Jason Tanner
- Department of Emergency Medicine, Yale University School of Medicine, New Haven, CT
- Cynthia Brandt
- Center for Medical Informatics, Yale University School of Medicine, New Haven, CT; Department of Emergency Medicine, Yale University School of Medicine, New Haven, CT
- Richard A Taylor
- Center for Medical Informatics, Yale University School of Medicine, New Haven, CT; Department of Emergency Medicine, Yale University School of Medicine, New Haven, CT
15
Gilson AS, Chartash D, Chang D, Hawk K, D'Onofrio G, Haimovich AD, Fiellin DA, Taylor RA. Analysis of Health Trajectories Leading to Adverse Opioid-Related Events. AMIA Jt Summits Transl Sci Proc 2021; 2021:248-256. [PMID: 34457139 PMCID: PMC8378649]
Abstract
Identifying patient risk factors leading to adverse opioid-related events (AOEs) may enable targeted risk-based interventions, uncover potential causal mechanisms, and enhance prognosis. In this article, we aim to discover patient diagnosis, procedure, and medication event trajectories associated with AOEs using large-scale data mining methods. The individual temporally preceding factors associated with the highest relative risk (RR) for AOEs were opioid withdrawal therapy agents, toxic encephalopathy, problems related to housing and economic circumstances, and unspecified viral hepatitis, with RR of 33.4, 26.1, 19.9, and 18.7, respectively. Patient cohorts with a socioeconomic or mental health code had a larger RR for over 75% of all identified trajectories compared to the average population. By analyzing health trajectories leading to AOEs, we discover novel, temporally connected combinations of diagnoses and health service events that significantly increase risk of AOEs, including natural histories marked by socioeconomic and mental health diagnoses.
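The relative risks quoted above follow the standard cohort computation; the sketch below uses invented counts, not the study's data:

```python
# Relative risk (RR) of an adverse opioid-related event (AOE) for patients
# with vs without a temporally preceding factor. Counts are hypothetical.

def relative_risk(exposed_events, exposed_total, unexposed_events, unexposed_total):
    """RR = incidence among the exposed / incidence among the unexposed."""
    return (exposed_events / exposed_total) / (unexposed_events / unexposed_total)

# e.g., 20 AOEs among 100 exposed patients vs 10 AOEs among 1000 unexposed:
rr = relative_risk(20, 100, 10, 1000)  # about 20.0
```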
Affiliation(s)
- David Chartash
- Center for Medical Informatics, Yale School of Medicine, New Haven, CT
- David Chang
- Center for Medical Informatics, Yale School of Medicine, New Haven, CT
- Kathryn Hawk
- Department of Emergency Medicine, Yale School of Medicine, New Haven, CT
- Gail D'Onofrio
- Department of Emergency Medicine, Yale School of Medicine, New Haven, CT
- Department of Medicine, Yale School of Medicine, New Haven, CT
- Adrian D Haimovich
- Center for Medical Informatics, Yale School of Medicine, New Haven, CT
- Department of Emergency Medicine, Yale School of Medicine, New Haven, CT
- David A Fiellin
- Department of Emergency Medicine, Yale School of Medicine, New Haven, CT
- Yale School of Public Health, New Haven, CT
- Department of Medicine, Yale School of Medicine, New Haven, CT
- R Andrew Taylor
- Center for Medical Informatics, Yale School of Medicine, New Haven, CT
- Department of Emergency Medicine, Yale School of Medicine, New Haven, CT
16
Ligtenberg KG, Chartash D, Bosenberg M, Brandt C. Validation of the Data Quality of a Tumor Board Registry Through Assessment of Clinicopathologic Survival Outcomes in Melanoma Patients. AMIA Annu Symp Proc 2021; 2020:747-755. [PMID: 33936449 PMCID: PMC8075482]
Affiliation(s)
- Katherine Given Ligtenberg
- Department of Dermatology, Yale University School of Medicine, New Haven, CT
- Center for Outcomes Research and Evaluation, Yale University School of Medicine and Yale New Haven Hospital, New Haven, CT
- David Chartash
- Center for Medical Informatics, Yale University School of Medicine, New Haven, CT
- Marcus Bosenberg
- Department of Dermatology, Yale University School of Medicine, New Haven, CT
- Department of Pathology, Yale University School of Medicine, New Haven, CT
- Cynthia Brandt
- Center for Medical Informatics, Yale University School of Medicine, New Haven, CT
17
Chartash D, Paek H, Dziura JD, Ross BK, Nogee DP, Boccio E, Hines C, Schott AM, Jeffery MM, Patel MD, Platts-Mills TF, Ahmed O, Brandt C, Couturier K, Melnick E. Identifying Opioid Use Disorder in the Emergency Department: Multi-System Electronic Health Record-Based Computable Phenotype Derivation and Validation Study. JMIR Med Inform 2019; 7:e15794. [PMID: 31674913 PMCID: PMC6913746 DOI: 10.2196/15794]
Abstract
BACKGROUND Deploying accurate computable phenotypes in pragmatic trials requires a trade-off between precise and clinically sensible variable selection. In particular, evaluating the medical encounter to assess a pattern leading to clinically significant impairment or distress indicative of disease is a difficult modeling challenge for the emergency department. OBJECTIVE This study aimed to derive and validate an electronic health record-based computable phenotype to identify emergency department patients with opioid use disorder using physician chart review as a reference standard. METHODS A two-algorithm computable phenotype was developed and evaluated using structured clinical data across 13 emergency departments in two large health care systems. Algorithm 1 combined clinician and billing codes. Algorithm 2 used chief complaint structured data suggestive of opioid use disorder. To evaluate the algorithms in both internal and external validation phases, two emergency medicine physicians, with a third acting as adjudicator, reviewed a pragmatic sample of 231 charts: 125 internal validation (75 positive and 50 negative) and 106 external validation (56 positive and 50 negative). RESULTS Cohen kappa, measuring agreement between reviewers, for the internal and external validation cohorts was 0.95 and 0.93, respectively. In the internal validation phase, Algorithm 1 had a positive predictive value (PPV) of 0.96 (95% CI 0.863-0.995) and a negative predictive value (NPV) of 0.98 (95% CI 0.893-0.999), and Algorithm 2 had a PPV of 0.8 (95% CI 0.593-0.932) and an NPV of 1.0 (one-sided 97.5% CI 0.863-1). In the external validation phase, the phenotype had a PPV of 0.95 (95% CI 0.851-0.989) and an NPV of 0.92 (95% CI 0.807-0.978). CONCLUSIONS This phenotype detected emergency department patients with opioid use disorder with high predictive values and reliability.
Its algorithms were transportable across health care systems and have potential value for both clinical and research purposes.
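The validation arithmetic behind the reported predictive values is the usual confusion-matrix computation; the counts below are hypothetical, chosen only so that the totals (75 phenotype-positive and 50 phenotype-negative charts) and the resulting PPV of 0.96 and NPV of 0.98 match the internal-validation figures quoted above:

```python
# Positive and negative predictive values from chart-review counts.
# tp/fp/tn/fn are hypothetical illustrative counts, not the study's data.

def ppv(tp, fp):
    """Positive predictive value: P(disease | phenotype positive)."""
    return tp / (tp + fp)

def npv(tn, fn):
    """Negative predictive value: P(no disease | phenotype negative)."""
    return tn / (tn + fn)

tp, fp, tn, fn = 72, 3, 49, 1   # 75 phenotype-positive, 50 phenotype-negative
ppv_val = ppv(tp, fp)           # 0.96
npv_val = npv(tn, fn)           # 0.98
```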
Affiliation(s)
- David Chartash
- Yale Center for Medical Informatics, Yale University School of Medicine, New Haven, CT, United States
- Hyung Paek
- Information Technology Services, Yale New Haven Health, New Haven, CT, United States
- James D Dziura
- Department of Emergency Medicine, Yale University School of Medicine, New Haven, CT, United States
- Bill K Ross
- North Carolina Translational and Clinical Sciences Institute, University of North Carolina School of Medicine, Chapel Hill, NC, United States
- Daniel P Nogee
- Department of Emergency Medicine, Yale University School of Medicine, New Haven, CT, United States
- Eric Boccio
- Department of Emergency Medicine, Yale University School of Medicine, New Haven, CT, United States
- Cory Hines
- Department of Emergency Medicine, University of North Carolina School of Medicine, Chapel Hill, NC, United States
- Aaron M Schott
- Department of Emergency Medicine, University of North Carolina School of Medicine, Chapel Hill, NC, United States
- Molly M Jeffery
- Department of Emergency Medicine, Mayo Clinic, Rochester, MN, United States
- Department of Health Sciences Research, Mayo Clinic, Rochester, MN, United States
- Mehul D Patel
- Department of Emergency Medicine, University of North Carolina School of Medicine, Chapel Hill, NC, United States
- Timothy F Platts-Mills
- Department of Emergency Medicine, University of North Carolina School of Medicine, Chapel Hill, NC, United States
- Osama Ahmed
- Department of Emergency Medicine, Yale University School of Medicine, New Haven, CT, United States
- Cynthia Brandt
- Yale Center for Medical Informatics, Yale University School of Medicine, New Haven, CT, United States
- Department of Emergency Medicine, Yale University School of Medicine, New Haven, CT, United States
- Katherine Couturier
- Department of Emergency Medicine, Yale University School of Medicine, New Haven, CT, United States
- Edward Melnick
- Department of Emergency Medicine, Yale University School of Medicine, New Haven, CT, United States
18
Chartash D, Sassoon D, Muthu N. Physicians in the Era of Automation: The Case for Clinical Expertise. MDM Policy Pract 2019; 4:2381468319868968. [PMID: 31453366 PMCID: PMC6699007 DOI: 10.1177/2381468319868968]
Affiliation(s)
- David Chartash
- Center for Medical Informatics, Yale University School of Medicine, New Haven, Connecticut
- Daniel Sassoon
- Department of Radiology, University of Colorado at Denver Anschutz Medical Campus, Aurora, Colorado
- Naveen Muthu
- Department of Biomedical and Health Informatics, Children's Hospital of Philadelphia, and Department of Pediatrics, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, Pennsylvania
19
Wiehe SE, Rosenman MB, Chartash D, Lipscomb ER, Nelson TL, Magee LA, Fortenberry JD, Aalsma MC. A Solutions-Based Approach to Building Data-Sharing Partnerships. EGEMS (Wash DC) 2018; 6:20. [PMID: 30155508 PMCID: PMC6108450 DOI: 10.5334/egems.236]
Abstract
INTRODUCTION Although researchers recognize that sharing disparate data can improve population health, barriers (technical, motivational, economic, political, legal, and ethical) limit progress. In this paper, we aim to enhance the van Panhuis et al. framework of barriers to data sharing; we present a complementary solutions-based data-sharing process in order to encourage both emerging and established researchers, whether or not in academia, to engage in data-sharing partnerships. BRIEF DESCRIPTION OF MAJOR COMPONENTS We enhance the van Panhuis et al. framework in three ways. First, we identify the appropriate stakeholder(s) within an organization (e.g., criminal justice agency) with whom to engage in addressing each category of barriers. Second, we provide a representative sample of specific challenges that we have faced in our data-sharing partnerships with criminal justice agencies, local clinical systems, and public health. Third, and most importantly, we suggest solutions we have found successful for each category of barriers. We grouped our solutions into five core areas that cut across the barriers as well as stakeholder groups: Preparation, Clear Communication, Funding/Support, Non-Monetary Benefits, and Regulatory Assurances. Our solutions-based process model is complementary to the enhanced framework. An important feature of the process model is the cyclical, iterative process that undergirds it. Usually, interactions with new data-sharing partner organizations begin with the leadership team and progress to both the data management and legal teams; however, the process is not always linear. CONCLUSIONS AND NEXT STEPS Data sharing is a powerful tool in population health research, but significant barriers hinder such partnerships. Nevertheless, by aspiring to community-based participatory research principles, including partnership engagement, development, and maintenance, we have overcome barriers identified in the van Panhuis et al. framework and have achieved success with various data-sharing partnerships. In the future, systematically studying data-sharing partnerships to clarify which elements of a solutions-based approach are essential for successful partnerships may be helpful to academic and non-academic researchers. The organizational climate is certainly a factor worth studying also because it relates both to barriers and to the potential workability of solutions.
Affiliation(s)
- Marc B. Rosenman
- Indiana University School of Medicine, US
- Ann and Robert H. Lurie Children’s Hospital of Chicago, US
20
Lindvere L, Janik R, Dorr A, Chartash D, Sahota B, Sled JG, Stefanovic B. Cerebral microvascular network geometry changes in response to functional stimulation. Neuroimage 2013; 71:248-59. [PMID: 23353600 DOI: 10.1016/j.neuroimage.2013.01.011]
Abstract
The cortical microvessels are organized in an intricate, hierarchical, three-dimensional network. Superimposed on this anatomical complexity is the highly complicated signaling that drives the focal blood flow adjustments following a rise in the activity of surrounding neurons. The microvascular response to neuronal activation remains incompletely understood. We developed a custom two-photon fluorescence microscopy acquisition and analysis pipeline to obtain 3D maps of neuronal activation-induced changes in the geometry of the microvascular network of the primary somatosensory cortex of anesthetized rats. An automated, model-based tracking algorithm was employed to reconstruct the 3D microvascular topology and represent it as a graph. The changes in the geometry of this network were then tracked, over time, in the course of electrical stimulation of the contralateral forepaw. Both dilatory and constrictory responses were observed across the network. Early dilatory and late constrictory responses propagated from deeper to more superficial cortical layers, while the response of the vertices that showed initial constriction followed by later dilation spread from the cortical surface toward increasing cortical depths. Overall, larger caliber adjustments were observed deeper inside the cortex. This work yields the first characterization of the spatiotemporal pattern of geometric changes on the level of the cortical microvascular network as a whole and provides the basis for bottom-up modeling of the hemodynamically-weighted neuroimaging signals.
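Representing the microvascular network as a graph whose edges carry a vessel caliber, as the tracking algorithm does, can be illustrated with a toy sketch; the segment names and calibers below are invented:

```python
# Toy vessel graph: edges are vessel segments carrying a caliber (in µm).
# Compare a baseline map against a stimulation time point.

baseline = {("A", "B"): 5.0, ("B", "C"): 3.0, ("B", "D"): 4.0}
stimulus = {("A", "B"): 5.5, ("B", "C"): 2.7, ("B", "D"): 4.0}

def caliber_change(before, after):
    """Fractional caliber change per graph edge (vessel segment)."""
    return {edge: (after[edge] - before[edge]) / before[edge] for edge in before}

changes = caliber_change(baseline, stimulus)
dilated = [e for e, d in changes.items() if d > 0]       # segment A-B
constricted = [e for e, d in changes.items() if d < 0]   # segment B-C
```

Repeating this comparison at each acquisition time point yields the per-vertex dilation/constriction time courses the study analyzes.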
Affiliation(s)
- Liis Lindvere
- Imaging Research, Sunnybrook Research Institute, 2075 Bayview Avenue, Toronto, ON, Canada M4N 3M5