1
Coates A, Chung AQH, Lessard L, Grudniewicz A, Espadero C, Gheidar Y, Bemgal S, Da Silva E, Sauré A, King J, Fung-Kee-Fung M. The use and role of digital technology in learning health systems: A scoping review. Int J Med Inform 2023; 178:105196. [PMID: 37619395] [DOI: 10.1016/j.ijmedinf.2023.105196]
Abstract
OBJECTIVE The review aimed to identify which digital technologies are proposed or used within learning health systems (LHS) and to analyze the extent to which they support learning processes in LHS. MATERIALS AND METHODS Multiple databases and grey literature were searched with terms related to LHS. Manual searches and backward searches of reference lists were also undertaken. The review considered publications from 2007 to 2022. Records focusing on LHS, referring to one or more digital technologies, and describing how at least one digital technology could be used in LHS were included. RESULTS 2046 records were screened for inclusion and 154 records were included in the analysis. Twenty categories of digital technology were identified. The two most common ones across records were data recording and processing and electronic health records. Digital technology was primarily leveraged to support data access and aggregation and data analysis, two of the seven recognized learning processes within LHS learning cycles. DISCUSSION The results of the review show that a wide array of digital technologies is being leveraged to support learning cycles within LHS. Nevertheless, an over-reliance on a narrow set of technologies supporting knowledge discovery, a lack of direct evaluation of digital technologies and ambiguity in technology descriptions are hindering the realization of the LHS vision. CONCLUSION Future LHS research and initiatives should aim to integrate digital technology to support practice change and impact evaluation. The use of recognized evaluation methods for health information technology and more detailed descriptions of proposed technologies are also recommended.
Affiliation(s)
- Alison Coates
- Telfer School of Management, University of Ottawa, Ottawa, Canada
- Lysanne Lessard
- Telfer School of Management, University of Ottawa, Ottawa, Canada; Institut du Savoir Montfort - Research, Ottawa, Canada; LIFE Research Institute, University of Ottawa, Ottawa, Canada
- Agnes Grudniewicz
- Telfer School of Management, University of Ottawa, Ottawa, Canada; Institut du Savoir Montfort - Research, Ottawa, Canada
- Cathryn Espadero
- Telfer School of Management, University of Ottawa, Ottawa, Canada
- Yasaman Gheidar
- Telfer School of Management, University of Ottawa, Ottawa, Canada
- Sampath Bemgal
- Telfer School of Management, University of Ottawa, Ottawa, Canada
- Antoine Sauré
- Telfer School of Management, University of Ottawa, Ottawa, Canada
- James King
- Children's Hospital of Eastern Ontario, Ottawa, Canada
- Michael Fung-Kee-Fung
- Departments of Obstetrics-Gynaecology and Surgery, Faculty of Medicine, University of Ottawa, Ottawa, Canada; The Ottawa Hospital - General Campus, University of Ottawa/Ottawa Regional Cancer Centre, Ottawa, Canada
2
Ebnehoseini Z, Tabesh H, Jangi MJ, Deldar K, Mostafavi SM, Tara M. Investigating Evaluation Frameworks for Electronic Health Record: A Literature Review. Open Access Maced J Med Sci 2021. [DOI: 10.3889/oamjms.2021.3421]
Abstract
BACKGROUND: There are various electronic health record (EHR) evaluation frameworks with multiple dimensions and numerous sets of evaluation measures, and the coverage rate of evaluation measures within a common framework varies across studies.
AIM: This study provides a literature review of the current EHR evaluation frameworks and a model for measuring the coverage rate of evaluation measures in EHR frameworks.
METHODS: The current study was a comprehensive literature review and a critical appraisal study. The study was conducted in three phases. In Phase 1, a literature review of EHR evaluation frameworks was conducted. In Phase 2, a three-level hierarchical structure was developed, which includes three aspects, 12 dimensions, and 110 evaluation measures. Subsequently, evaluation measures in the identified studies were categorized based on the hierarchical structure. In Phase 3, relative frequency (RF) of evaluation measures in different dimensions and aspects for each of the identified studies were determined and categorized as follows: Appropriate, moderate, and low coverage.
RESULTS: Out of a total of 8276 retrieved articles, 62 studies were considered relevant. The RF range in the second and third levels of the hierarchical structure was 8.6%–91.94% and 0.2%–61%, respectively. “Ease of use” and “system quality” were the most frequent evaluation measure and dimension. Our results indicate that the identified studies cover at least one and at most nine evaluation dimensions, and that current evaluation frameworks focus more on the technology aspect. In almost all identified studies, evaluation measures related to the technology aspect were covered; however, evaluation measures related to the human and organization aspects were covered in 68% and 84% of the identified studies, respectively.
CONCLUSION: In this study, we systematically reviewed all literature presenting any type of EHR evaluation framework and analyzed and discussed their aspects and features. We believe that the findings of this study can help researchers to review and adopt the EHR evaluation frameworks for their own particular field of usage.
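The coverage-rate model described in the Methods and Results could be sketched as follows. This is a minimal illustration only: the band cut-off values are assumptions, since the abstract does not state the exact thresholds used for "appropriate", "moderate", and "low" coverage.

```python
def relative_frequency(n_covered, n_total):
    """Percentage of the framework's evaluation measures that a study covers."""
    return 100.0 * n_covered / n_total

def coverage_band(rf_percent, low_cut=30.0, high_cut=70.0):
    """Map a relative frequency (%) onto three coverage bands.
    The cut-off values are illustrative assumptions, not taken from the paper."""
    if rf_percent >= high_cut:
        return "appropriate"
    if rf_percent >= low_cut:
        return "moderate"
    return "low"
```

For example, a study covering 55 of the 110 evaluation measures has a relative frequency of 50% and would fall in the middle band under these assumed cut-offs.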
3
JointCalc: A web-based personalised patient decision support tool for joint replacement. Int J Med Inform 2020; 142:104217. [PMID: 32853974] [PMCID: PMC7607377] [DOI: 10.1016/j.ijmedinf.2020.104217]
Abstract
JointCalc is the first complete web-based decision support tool for joint replacement. User-centred design helps avoid common health information system design pitfalls. Modern software production methods synergise with and enable user-centred design. The JointCalc implementation supports claims of the high efficiency of eHealth.
Background and purpose Health information systems (HIS) are expected to be effective and efficient in improving healthcare services, but empirical observation reveals that most perform poorly on these metrics. Theoretical factors of HIS performance are widely studied, and solutions to mitigate poor performance have been proposed. In this paper we implement effective methods to eliminate some common drawbacks of HIS design and demonstrate the synergy between the methods. JointCalc, the first comprehensive patient-facing web-based decision support tool for joint replacement, is used as a case study for this purpose. Methods and results User-centred design and thorough end-user involvement are employed throughout the design and development of JointCalc. This is supported by modern software production paradigms, including continuous integration/continuous development, agile methods and service-oriented architecture. The adopted methods result in a user-approved application delivered well within the scope of the project. Conclusion This work supports the claims of the high potential efficiency of HIS. The methods identified are shown to be applicable in the production of an effective HIS whilst aiding development efficiency.
4
Talmon J, Ammenwerth E, Brender J, Rigby M, Nykanen P, de Keizer NF. Systematic Prioritization of the STARE-HI Reporting Items. Methods Inf Med 2018; 51:104-11. [DOI: 10.3414/me10-01-0072]
Abstract
Background: We previously devised and published a guideline for reporting health informatics evaluation studies, named STARE-HI, which is formally endorsed by IMIA and EFMI. Objective: To develop a prioritization framework of ranked reporting items to assist authors when reporting health informatics evaluation studies in space-restricted conference papers, and to apply this prioritization framework to measure the quality of recent health informatics conference papers on evaluation studies. Method: We deconstructed the STARE-HI guideline to identify reporting items. Via a web-based survey, we invited a total of 111 authors of health informatics evaluation studies, reviewers and editors of health informatics conference proceedings to score those reporting items on a scale ranging from “0 – not necessary in a conference paper” to “10 – essential in a conference paper”. From the responses we derived a mean priority score for each item. All evaluation papers published in the proceedings of MIE2006, Medinfo2007, MIE2008 and AMIA2008 were rated on these items by two reviewers, and from these ratings a priority-adjusted completeness score was computed for each paper. Results: We identified 104 reporting items from the STARE-HI guideline. The response rate for the survey was 59% (66 out of 111). The most important reporting items (mean score ≥ 9) were “Interpret the data and give an answer to the study question – (in Discussion)”, “Whether it is a laboratory, simulation or field study – (in Methods-study design)” and “Description of the outcome measure/evaluation criteria – (in Methods-study design)”. Within each reporting area, the statistically significantly more important reporting items were distinguished from the less important ones. Four reporting items had a mean score ≤ 6. The mean priority-adjusted completeness of evaluation papers at recent health informatics conferences was 48% (range 14–78%). Conclusion: We produced a ranked list of reporting items from STARE-HI according to their prioritized relevance for inclusion in space-limited conference papers. The priority-adjusted completeness scores demonstrated room for improvement in the analyzed conference papers. We believe that this prioritization framework is an aid to improving the quality and utility of conference papers on health informatics evaluation studies.
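The scoring scheme described above — a mean priority score per item from the survey, then a priority-adjusted completeness score per paper — can be sketched as follows. The linear weighting is an assumption for illustration; the abstract does not give the paper's exact formula.

```python
def mean_priority(scores):
    """Mean priority score for one reporting item from survey responses (0-10)."""
    return sum(scores) / len(scores)

def priority_adjusted_completeness(priorities, reported):
    """Completeness of a paper, weighting each STARE-HI reporting item by its
    mean priority score, so that omitting high-priority items costs more.

    priorities: mean priority score per item
    reported:   True/False per item, whether the paper reports it
    """
    weight_total = sum(priorities)
    weight_met = sum(p for p, r in zip(priorities, reported) if r)
    return 100.0 * weight_met / weight_total
```

Under this weighting, a paper reporting two items with priorities 9 and 6 but omitting an item with priority 3 would score 15/18 ≈ 83%, rather than the unweighted 2/3 ≈ 67%.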
5
Crawford PR, Lehmann HP, Sockolow PS. Health Services Research Evaluation Principles. Methods Inf Med 2018; 51:122-30. [DOI: 10.3414/me10-01-0066]
Abstract
Background: Our forthcoming national experiment in increased health information technology (HIT) adoption, funded by the American Recovery and Reinvestment Act of 2009, will require a comprehensive approach to evaluating HIT. The quality of HIT evaluation studies to date reveals a need for broader evaluation frameworks; their absence limits the generalizability of findings and the depth of lessons learned. Objective: To develop an informatics evaluation framework for health information technology (HIT) integrating components of health services research (HSR) evaluation and informatics evaluation, to address identified shortcomings in available HIT evaluation frameworks. Method: A systematic literature review updated and expanded the exhaustive review by Ammenwerth and de Keizer (AdK). From the retained studies, criteria were elicited and organized into classes within a framework. The resulting Health Information Technology Research-based Evaluation Framework (HITREF) was used to guide clinician satisfaction survey construction, multi-dimensional analysis of data, and interpretation of findings in an evaluation of a vanguard community health care EHR. Results: The updated review identified 128 electronic health record (EHR) evaluation studies and seven evaluation criteria not in AdK: EHR Selection/Development/Training; Patient Privacy Concerns; Unintended Consequences/Benefits; Functionality; Patient Satisfaction with EHR; Barriers/Facilitators to Adoption; and Patient Satisfaction with Care. HITREF was used productively and proved a complete evaluation framework, encompassing all themes that emerged. Conclusions: We recommend that future EHR evaluators consider adding a complete, research-based HIT evaluation framework, such as HITREF, to their suite of evaluation tools to monitor HIT challenges as the federal government strives to increase HIT adoption.
6
Rijo RPCL, Crepaldi NY, Bergamini F, Rodrigues LML, de Lima IB, da Silva Castro Perdoná G, Alves D. Impact assessment on patients' satisfaction and healthcare professionals' commitment of software supporting Directly Observed Treatment, Short-course: A protocol proposal. Health Informatics J 2017; 25:350-360. [PMID: 28612646] [DOI: 10.1177/1460458217712057]
Abstract
Doctors, nurses, and other healthcare professionals use software that affects patients. Directly Observed Treatment, Short-course (DOTS) is the name given to the tuberculosis control strategy recommended by the World Health Organization. The main goal of this work is to propose a protocol for evaluating the impact of healthcare software supporting DOTS on patients, healthcare professionals, and services. The proposed protocol consists of a set of instruments and steps; the instruments are reliable, validated existing questionnaires to be applied before and after use of the software tool. The literature points out the need for standards in software assessment. This is particularly critical when software affects patients directly. The present protocol is a universal tool for assessing the impact of software used to support the fight against the tragedy of tuberculosis, a domain where rigorous evaluation of IT in healthcare is highly recommended and of great importance.
Affiliation(s)
- Rui Pedro Charters Lopes Rijo
- Polytechnic Institute of Leiria, Portugal; Institute for Systems Engineering and Computers at Coimbra (INESC Coimbra), Portugal; Center for Health Technology and Services Research (CINTESIS), Portugal; University of São Paulo, Brazil
- Domingos Alves
- Ribeirão Preto Medical School of the University of São Paulo, Brazil
7
Høstgaard AMB, Bertelsen P, Nøhr C. Constructive eHealth evaluation: lessons from evaluation of EHR development in 4 Danish hospitals. BMC Med Inform Decis Mak 2017; 17:45. [PMID: 28427407] [PMCID: PMC5397829] [DOI: 10.1186/s12911-017-0444-2]
Abstract
BACKGROUND Information and communication sources in the healthcare sector are being replaced with new eHealth technologies. This has led to problems arising from a lack of awareness of the importance of end-user involvement in eHealth development and from the difficulties caused by using traditional summative evaluation methods. The Constructive eHealth evaluation method (CeHEM) offers a solution to these problems by providing an evaluation framework that supports and facilitates end-user involvement during all phases of eHealth development. The aim of this paper is to support this process by sharing experiences of the method as used during the introduction of electronic health records (EHR) in the North Denmark Region of Denmark. It is the first time the fully developed method, and experiences of using the CeHEM across all five phases of a full lifecycle framework, are presented. METHODS A case study evaluation of the EHR development process in the North Denmark Region was conducted from 2004 to 2010. The population consisted of clinicians, IT professionals, administrators, and vendors, and the study involved 4 hospitals in the region. Data were collected using questionnaires, observations, interviews, and insights gathered from relevant documents. RESULTS The evaluation showed a need for a) early involvement of clinicians, b) the best possible representation of clinicians, and c) workload reduction for those involved. The consequences of not providing these were a lack of ownership of decisions and negative attitudes towards the clinical benefits related to those decisions. Further, the results showed that by following the above recommendations, and by providing feedback to the 4 actor groups, the physicians' involvement was improved; as a result they took ownership of decisions and gained a positive attitude towards the clinical benefits. CONCLUSIONS The CeHEM has proven successful in formative evaluation of EHR development and can point to important issues that need to be addressed by management. The method provides a framework for feedback and learning during eHealth development and can thus support successful eHealth development in a broader context while building on a well-known success factor: end-user involvement in eHealth development.
Affiliation(s)
- Anna Marie Balling Høstgaard
- Department of Health Science and Technology, Aalborg University, Niels Jernesvej 14, 9220, Aalborg Øst, Aalborg, Denmark
- Pernille Bertelsen
- Department of Planning, Danish Centre for Health Informatics, Aalborg University, Vestre Havnepromenade 5, Aalborg, Denmark
- Christian Nøhr
- Department of Planning, Danish Centre for Health Informatics, Aalborg University, Vestre Havnepromenade 5, Aalborg, Denmark
8
Haux R, Koch S. Improving Bridging from Informatics Theory to Practice. Appl Clin Inform 2016; 6:748-56. [PMID: 26767067] [DOI: 10.4338/aci-2015-10-ra-0147]
Abstract
BACKGROUND In 1962, Methods of Information in Medicine (MIM) began to publish papers on the methodology and scientific fundamentals of managing data, information, and knowledge in biomedicine and health care. Meeting an increasing demand for research on the practical implementation of health information systems, the journal Applied Clinical Informatics (ACI) was launched in 2009. Both are official journals of the International Medical Informatics Association (IMIA). OBJECTIVES Based on prior analyses, we aimed to describe major topics published in MIM during 2014 and to explore whether MIM theory influenced ACI practice. We further aimed to describe lessons learned and to discuss possible editorial policies to improve bridging from theory to practice. METHODS We conducted a retrospective, observational study reviewing MIM articles published during 2014 (N=61) and analyzing the reference lists of ACI articles from 2014 (N=70). Lessons learned and opinions about MIM editorial policies were developed in consensus by the two authors, influenced by discussions with the journal's associate editors and editorial board members. RESULTS The publication topics of MIM in 2014 were broad, covering biomedical and health informatics, medical biometry and epidemiology. Important topics discussed were biosignal interpretation, boosting methodologies, citation analysis, health-enabling and ambient assistive technologies, health record banking, safety, and standards. Nine ACI practice articles from 2014 cited eighteen MIM theory papers from any year. These nine ACI articles mainly covered clinical documentation and medication-related decision support, and the methodological basis they cited was almost exclusively related to evaluation. We could show some direct links where theory impacted practice; these links are, however, few relative to the total number of papers published. CONCLUSIONS Editorial policies such as publishing systematic methodological reviews and clarifying the possible practical impact of theory-focused articles may improve bridging.
Affiliation(s)
- R Haux
- Peter L. Reichertz Institute for Medical Informatics, University of Braunschweig - Institute of Technology and Hannover Medical School, Germany
- S Koch
- Health Informatics Centre, Department of Learning, Informatics, Management and Ethics, Karolinska Institutet, Stockholm, Sweden
9
Choi M, Yang YL, Lee SM. Effectiveness of nursing management information systems: a systematic review. Healthc Inform Res 2014; 20:249-57. [PMID: 25405060] [PMCID: PMC4231174] [DOI: 10.4258/hir.2014.20.4.249]
Abstract
Objectives The purpose of this study was to review evaluation studies of nursing management information systems (NMISs) and their outcome measures to examine system effectiveness. Methods For the systematic review, a literature search of the PubMed, CINAHL, Embase, and Cochrane Library databases was conducted to retrieve original articles published between 1970 and 2014. Medical Subject Headings (MeSH) terms included informatics, medical informatics, nursing informatics, medical informatics application, and management information systems for information systems, and evaluation studies and nursing evaluation research for evaluation research. Additionally, the truncated terms manag*, admin*, and nurs* were combined. Title, abstract, and full-text reviews were completed by two reviewers. Year, author, type of management system, study purpose, study design, data source, system users, study subjects, and outcomes were then extracted from the selected articles. The quality and risk of bias of the finally selected studies were assessed with the Risk of Bias Assessment Tool for Non-randomized Studies (RoBANS) criteria. Results Out of the 2,257 retrieved articles, a total of six articles were selected. These included two scheduling programs, two nursing cost-related programs, and two patient care management programs. The outcome measurements included usefulness, time saving, satisfaction, cost, attitude, usability, data quality/completeness/accuracy, and personnel work patterns. User satisfaction, time saving, and usefulness mostly showed positive findings. Conclusions The study results suggest that NMISs were effective in saving time and useful in nursing care. Because the reviewed studies lacked quality, well-designed research, such as randomized controlled trials, should be conducted to evaluate the effectiveness of NMISs more objectively.
Affiliation(s)
- Mona Choi
- Nursing Policy Research Institute, College of Nursing, Yonsei University, Seoul, Korea
- You Lee Yang
- Nursing Policy Research Institute, College of Nursing, Yonsei University, Seoul, Korea
- Sun-Mi Lee
- College of Nursing, The Catholic University of Korea, Seoul, Korea
10
Sockolow PS, Bowles KH, Adelsberger MC, Chittams JL, Liao C. Impact of homecare electronic health record on timeliness of clinical documentation, reimbursement, and patient outcomes. Appl Clin Inform 2014; 5:445-62. [PMID: 25024760] [DOI: 10.4338/aci-2013-12-ra-0106]
Abstract
BACKGROUND Homecare is an important and effective way of managing chronic illnesses using skilled nursing care in the home. Unlike hospitals and ambulatory settings, clinicians visit patients at home at different times, independent of each other. Twenty-nine percent of the 10,000 homecare agencies in the United States have adopted point-of-care EHRs, yet relatively little is known about the growing use of homecare EHRs. OBJECTIVE Researchers compared workflow, financial billing, and patient outcomes before and after implementation to evaluate the impact of a homecare point-of-care EHR. METHODS The design was a pre/post observational study embedded in a mixed-methods study. The setting was a Philadelphia-based homecare agency with 137 clinicians. Data sources included: (1) clinician EHR documentation completion; (2) EHR usage data; (3) Medicare billing data; (4) an EHR nurse satisfaction survey; (5) clinician observations; (6) clinician interviews; and (7) patient outcomes. RESULTS Clinicians were satisfied with documentation timeliness and team communication. Following EHR implementation, 90% of notes were completed within the 1-day compliance interval (n = 56,702), compared with 30% of notes completed within the 7-day compliance interval in the pre-implementation period (n = 14,563; OR 19, p < .001). Productivity in the number of clinical notes documented increased almost 10-fold post-implementation. Days to Medicare claims fell from 100 days pre-implementation to 30 days post-implementation, while the census rose. The EHR implementation's impact on patient outcomes was limited to some behavioral outcomes. DISCUSSION Findings from this homecare EHR study indicated that clinician EHR use enabled a sustained increase in productivity of note completion, as well as timeliness of documentation and billing for reimbursement, with limited impact on improving patient outcomes. As EHR adoption increases to better meet the needs of the growing population of older people with chronic health conditions, these results can inform homecare EHR development and implementation.
Affiliation(s)
- P S Sockolow
- Drexel University College of Nursing and Health Professions, Philadelphia, PA, USA
- K H Bowles
- University of Pennsylvania School of Nursing, Philadelphia, PA, USA
- J L Chittams
- University of Pennsylvania School of Nursing, Philadelphia, PA, USA
- C Liao
- Temple University College of Health Professions and Social Work, Philadelphia, PA, USA
11
Middleton B, Bloomrosen M, Dente MA, Hashmat B, Koppel R, Overhage JM, Payne TH, Rosenbloom ST, Weaver C, Zhang J. Enhancing patient safety and quality of care by improving the usability of electronic health record systems: recommendations from AMIA. J Am Med Inform Assoc 2013; 20:e2-8. [PMID: 23355463] [PMCID: PMC3715367] [DOI: 10.1136/amiajnl-2012-001458]
Abstract
In response to mounting evidence that use of electronic medical record systems may cause unintended consequences, and even patient harm, the AMIA Board of Directors convened a Task Force on Usability to examine evidence from the literature and make recommendations. This task force was composed of representatives from both academic settings and vendors of electronic health record (EHR) systems. After a careful review of the literature and of vendor experiences with EHR design and implementation, the task force developed 10 recommendations in four areas: (1) human factors health information technology (IT) research, (2) health IT policy, (3) industry recommendations, and (4) recommendations for the clinician end-user of EHR software. These AMIA recommendations are intended to stimulate informed debate, provide a plan to increase understanding of the impact of usability on the effective use of health IT, and lead to safer and higher quality care with the adoption of useful and usable EHR systems.
Affiliation(s)
- Blackford Middleton
- Clinical Informatics Research and Development, Partners HealthCare System, Harvard Medical School, Wellesley, Massachusetts 02481, USA.
12
Community-based, interdisciplinary geriatric care team satisfaction with an electronic health record: a multimethod study. Comput Inform Nurs 2012; 30:300-11. [PMID: 22411417] [DOI: 10.1097/ncn.0b013e31823eb561]
Abstract
This multimethod study measured the impact of an electronic health record (EHR) on clinician satisfaction with clinical process. Subjects were 39 clinicians at a Program of All-inclusive Care for Elders (PACE) site in Philadelphia utilizing an EHR. Methods were guided by an evidence-based evaluation framework, the Health Information Technology Research-Based Evaluation Framework, covering assessment of clinician satisfaction through surveys, observations, follow-up interviews, and actual EHR use at two points in time. Mixed-methods analysis of the findings provided context for interpretation and improved validity. The study found that clinicians were satisfied with the EHR; however, satisfaction declined between the two time periods. EHR use was universal and widespread, and was differentiated by clinical role. Between time periods, EHR use increased in volume, with increased timeliness and decreased efficiency. As the first EHR evaluation at a PACE site from the perspective of the clinicians who use the system, this study provides insights into EHR use in the care of older people in community-based healthcare settings.
13
A new instrument for measuring clinician satisfaction with electronic health records. Comput Inform Nurs 2012; 29:574-85. [PMID: 21543972] [DOI: 10.1097/ncn.0b013e31821a1568]
Abstract
A new survey instrument was developed and validated to measure clinician (nurse) satisfaction with the impact of electronic health records on clinical process. The Health Information Technology Research-Based Evaluation Framework guided the selection of evaluation dimensions for the survey. Survey questions were gathered from existing health information technology satisfaction surveys that reflected individual evaluation concepts, such as efficiency or benefits. Decisions about data-gathering methods (e.g., item selection) were made based on reviews of the literature, surveys of clinician satisfaction with health information technology, and expert input. Preliminary instrument validation was accomplished using qualitative and statistical analysis of five repeated sets of responses from clinicians at the pilot site, field administrations repeated twice at electronic health record implementation and paper-based comparison sites, and analysis of convergent evidence from observations and interviews. Reliability was assessed on one sample: 30 graduate nursing students at the single pilot site. Validity was assessed on three separate samples: (1) graduate nursing students (n = 30), (2) a field test at a site with an electronic health record (n = 39), and (3) a field test at a paper-based site (n = 17). The implementation and comparison sites are Program of All-Inclusive Care for the Elderly sites that provide managed day care for the frail elderly. Survey responses were assessed for test-retest reliability, internal consistency, and content and construct validity. The instrument design enables its administration before and after electronic health record implementation. Work to date suggests the instrument is reliable and valid; it is offered to electronic health record evaluators for further testing and application.
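Internal consistency, one of the reliability checks mentioned in this abstract, is conventionally quantified with Cronbach's alpha. The sketch below is illustrative only (not the authors' code), and the respondents-by-items data layout is an assumption.

```python
from statistics import pvariance

def cronbach_alpha(scores):
    """Cronbach's alpha for the internal consistency of a survey.

    scores: one list of item scores per respondent (respondents x items).
    Uses population variance throughout, per the standard formula
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores)).
    """
    k = len(scores[0])                        # number of items
    items = list(zip(*scores))                # one tuple of scores per item
    item_var = sum(pvariance(i) for i in items)
    total_var = pvariance([sum(r) for r in scores])
    return (k / (k - 1)) * (1 - item_var / total_var)
```

When every respondent's items move together perfectly, alpha is 1.0; values around 0.7 or higher are commonly read as acceptable internal consistency for a new instrument.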
14. Nykänen P, Brender J, Talmon J, de Keizer N, Rigby M, Beuscart-Zephir MC, Ammenwerth E. Guideline for good evaluation practice in health informatics (GEP-HI). Int J Med Inform 2011; 80:815-27. [PMID: 21920809] [DOI: 10.1016/j.ijmedinf.2011.08.004]
Abstract
OBJECTIVE Development of a good practice guideline to plan and perform scientifically robust evaluation studies in health informatics. METHODS Issues to be addressed in evaluation studies were identified and guidance drafted based on the evaluation literature and on experiences by key players. Successive drafts of the guideline were discussed in several rounds by an increasing number of experts during conferences and by e-mail. At a fairly early point the guideline was put up for comments on the web. RESULTS Sixty issues were identified that are of potential relevance for planning, implementation and execution of an evaluation study in the health informatics domain. These issues cover all phases of an evaluation study: Preliminary outline, study design, operationalization of methods, project planning, execution and completion of the evaluation study. Issues of risk management and project control as well as reporting and publication of the evaluation results are also addressed. CONCLUSION A comprehensive list of issues is presented as a guideline for good evaluation practice in health informatics (GEP-HI). The strengths and weaknesses of the guideline are discussed. Application of this guideline will support better handling of an evaluation study, potentially leading to a higher quality of evaluation studies. This guideline is an important step towards building stronger evidence and thus to progress towards evidence-based health informatics.
Affiliation(s)
- Pirkko Nykänen
- University of Tampere, School of Information Sciences, eHealth Research, Finland.
15.
Abstract
Usability factors are a major obstacle to health information technology (IT) adoption. The purpose of this paper is to review and categorize health IT usability study methods and to provide practical guidance on health IT usability evaluation. A total of 2025 references evaluating health IT used by clinicians were initially retrieved from the Medline database for 2003 to 2009. Titles and abstracts were first reviewed for inclusion; full-text articles were then examined to determine final eligibility. 629 studies were categorized into the five stages of an integrated usability specification and evaluation framework based on a usability model and the system development life cycle (SDLC)-associated stages of evaluation. Theoretical and methodological aspects of 319 studies were extracted in greater detail; studies that focused on system validation (SDLC stage 2) were not assessed further. The number of studies by stage was: stage 1, task-based or user–task interaction, n=42; stage 2, system–task interaction, n=310; stage 3, user–task–system interaction, n=69; stage 4, user–task–system–environment interaction, n=54; and stage 5, user–task–system–environment interaction in routine use, n=199. The studies applied a variety of quantitative and qualitative approaches. Methodological issues included the lack of a theoretical framework/model, lack of detail regarding qualitative study approaches, a single evaluation focus, environmental factors not being evaluated in the early stages, and guideline adherence as the primary outcome for decision support system evaluations. Based on the findings, a three-level stratified view of health IT usability evaluation is proposed, and methodological guidance is offered based upon the type of interaction that is of primary interest in the evaluation.
Affiliation(s)
- Po-Yin Yen
- Department of Biomedical Informatics, The Ohio State University, Columbus, Ohio 43210, USA.
16. Lau F, Kuziemsky C, Price M, Gardner J. A review on systematic reviews of health information system studies. J Am Med Inform Assoc 2011; 17:637-45. [PMID: 20962125] [DOI: 10.1136/jamia.2010.004838]
Abstract
The purpose of this review is to consolidate existing evidence from published systematic reviews of health information system (HIS) evaluation studies to inform HIS practice and research. Fifty reviews published during 1994-2008 were selected for meta-level synthesis. These reviews covered five areas: medication management, preventive care, health conditions, data quality, and care process/outcome. After reconciliation for duplicates, a non-overlapping corpus of 1276 HIS studies remained. On the basis of a subset of 287 controlled HIS studies, there is some evidence for improved quality of care, though to varying degrees across topic areas. For instance, 31/43 (72%) controlled HIS studies had positive results using preventive care reminders, mostly through guideline adherence such as immunization and health screening. Key factors that influence HIS success included having in-house systems, developers as users, integrated decision support and benchmark practices, and addressing such contextual issues as provider knowledge and perception, incentives, and legislation/policy.
Affiliation(s)
- Francis Lau
- School of Health Information Science, University of Victoria, Victoria, British Columbia, Canada.
17. McMullen CK, Ash JS, Sittig DF, Bunce A, Guappone K, Dykstra R, Carpenter J, Richardson J, Wright A. Rapid assessment of clinical information systems in the healthcare setting: an efficient method for time-pressed evaluation. Methods Inf Med 2010; 50:299-307. [PMID: 21170469] [DOI: 10.3414/me10-01-0042]
Abstract
OBJECTIVE Recent legislation in the United States provides strong incentives for implementation of electronic health records (EHRs). The ensuing transformation in U.S. health care will increase demand for new methods to evaluate clinical informatics interventions. Timeline constraints and a rapidly changing environment will make traditional evaluation techniques burdensome. This paper describes an anthropological approach that provides a fast and flexible way to evaluate clinical information systems. METHODS Adapting mixed-method evaluation approaches from anthropology, we describe a rapid assessment process (RAP) for assessing clinical informatics interventions in health care that we developed and used during seven site visits to diverse community hospitals and primary care settings in the U.S. SETTING Our multidisciplinary team used RAP to evaluate factors that either encouraged people to use clinical decision support (CDS) systems or interfered with use of these systems in settings ranging from large urban hospitals to single-practitioner, private family practices in small towns. RESULTS Critical elements of the method include: 1) developing a fieldwork guide; 2) carefully selecting observation sites and participants; 3) thoroughly preparing for site visits; 4) partnering with local collaborators; 5) collecting robust data by using multiple researchers and methods; and 6) analyzing and reporting data in a structured manner helpful to the organizations being evaluated. CONCLUSIONS RAP, iteratively developed over the course of visits to seven clinical sites across the U.S., has succeeded in allowing a multidisciplinary team of informatics researchers to plan, gather and analyze data, and report results in a maximally efficient manner.
Affiliation(s)
- C K McMullen
- The Center for Health Research, Kaiser Permanente Northwest, 3800 N. Interstate Ave, Portland, Oregon 97227, USA.
18. Häyrinen K, Lammintakanen J, Saranto K. Evaluation of electronic nursing documentation—Nursing process model and standardized terminologies as keys to visible and transparent nursing. Int J Med Inform 2010; 79:554-64. [DOI: 10.1016/j.ijmedinf.2010.05.002]
19. Overview of the Health Informatics Research Field: A Bibliometric Approach. IFIP Advances in Information and Communication Technology 2010. [DOI: 10.1007/978-3-642-15515-4_5]
20. Cockcroft S. A media analysis approach to evaluating national health information infrastructure development. 2009. [DOI: 10.1108/13287260910983605]
21. De Rouck S, Jacobs A, Leys M. A methodology for shifting the focus of e-health support design onto user needs. Int J Med Inform 2008; 77:589-601. [DOI: 10.1016/j.ijmedinf.2007.11.004]
22. Ammenwerth E, Schnell-Inderst P, Machan C, Siebert U. The effect of electronic prescribing on medication errors and adverse drug events: a systematic review. J Am Med Inform Assoc 2008; 15:585-600. [PMID: 18579832] [DOI: 10.1197/jamia.m2667]
Abstract
The objective of this systematic review is to analyse the relative risk reduction in medication errors and adverse drug events (ADEs) achieved by computerized physician order entry (CPOE) systems. We included controlled field studies and pretest-posttest studies, evaluating all types of CPOE systems, drugs and clinical settings. We present the results in evidence tables, calculate the risk ratio with a 95% confidence interval and perform subgroup analyses for categorical factors, such as the level of care, patient group, type of drug, type of system, functionality of the system, comparison group type, study design, and the method for detecting errors. Of the 25 studies that analysed the effects on the medication error rate, 23 showed a significant relative risk reduction of 13% to 99%. Six of the nine studies that analysed the effects on potential ADEs showed a significant relative risk reduction of 35% to 98%. Four of the seven studies that analysed the effect on ADEs showed a significant relative risk reduction of 30% to 84%. Reporting quality and study quality were often insufficient to exclude major sources of bias. Studies on home-grown systems, studies comparing electronic prescribing to handwritten prescribing, and studies using manual chart review to detect errors seem to show a higher relative risk reduction than other studies. In conclusion, electronic prescribing appears to reduce the risk of medication errors and ADEs. However, studies differ substantially in their setting, design, quality, and results. To further improve the evidence base of health informatics, more randomized controlled trials (RCTs) are needed, especially to cover a wider range of clinical and geographic settings. In addition, the reporting quality of health informatics evaluation studies has to be substantially improved.
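For intuition on the risk-ratio calculation this review describes, here is a minimal sketch; the function name and the event counts in the example are hypothetical, not drawn from the included studies:

```python
import math

def risk_ratio_ci(events_exp, n_exp, events_ctl, n_ctl, z=1.96):
    """Relative risk with a Wald-type 95% CI computed on the log scale."""
    rr = (events_exp / n_exp) / (events_ctl / n_ctl)
    # Standard error of ln(RR) for two independent proportions
    se = math.sqrt(1 / events_exp - 1 / n_exp + 1 / events_ctl - 1 / n_ctl)
    lower = math.exp(math.log(rr) - z * se)
    upper = math.exp(math.log(rr) + z * se)
    return rr, lower, upper

# Hypothetical counts: 10/1000 medication errors with CPOE vs 50/1000 on paper
rr, lower, upper = risk_ratio_ci(10, 1000, 50, 1000)
```

An RR of 0.2 corresponds to an 80% relative risk reduction, within the range the stronger studies report; a confidence interval that excludes 1 indicates a significant effect.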
Affiliation(s)
- Elske Ammenwerth
- UMIT-University for Health Sciences, Medical Informatics and Technology Tyrol, Institute for Health Information Systems, Tyrol, Austria.
23. Yusof MM, Papazafeiropoulou A, Paul RJ, Stergioulas LK. Investigating evaluation frameworks for health information systems. Int J Med Inform 2008; 77:377-85. [PMID: 17904898] [DOI: 10.1016/j.ijmedinf.2007.08.004]
Abstract
BACKGROUND AND PURPOSE Evaluation of health information systems (HIS) enables the assessment of the extent to which HIS are fulfilling their objectives in supporting the services of healthcare delivery. This paper presents an overview of evaluation in health informatics and information systems. METHODS Literature review on discourses, dimensions and methods of HIS and IS evaluation. A critical appraisal of selected HIS and IS evaluation frameworks is undertaken in order to identify HIS evaluation dimensions and measures. The frameworks are compared based on their inclusion of human, organizational and technological factors. RESULTS We found that an increasing number of evaluation studies deal with two distinct trends in HIS evaluation: one considers human and organizational issues and the other is concerned with the employment of a subjectivist approach. Our review indicates that current evaluation methods complement each other in that they evaluate different aspects of HIS, and that they can be improved upon. CONCLUSIONS Evaluation is complex; it is easy to measure many things but not necessarily the right ones. Nevertheless, it is possible to devise an HIS evaluation framework with more comprehensive and specific measures that incorporates technological, human and organizational issues to facilitate HIS evaluation.
Affiliation(s)
- Maryati Mohd Yusof
- Faculty of Information Science and Technology, Universiti Kebangsaan Malaysia, 43600 Bangi, Selangor, Malaysia.
24. Pagliari C. Design and evaluation in eHealth: challenges and implications for an interdisciplinary field. J Med Internet Res 2007; 9:e15. [PMID: 17537718] [PMCID: PMC1913937] [DOI: 10.2196/jmir.9.2.e15]
Abstract
Much has been written about insufficient user involvement in the design of eHealth applications, the lack of evidence demonstrating impact, and the difficulties these bring for adoption. Part of the problem lies in the differing languages, cultures, motives, and operational constraints of producers and evaluators of eHealth systems and services. This paper reflects on the benefits of and barriers to interdisciplinary collaboration in eHealth, focusing particularly on the relationship between software developers and health services researchers. It argues that the common pattern of silo or parallel working may be ameliorated by developing mutual awareness and respect for each other's methods, epistemologies, and contextual drivers and by recognizing and harnessing potential synergies. Similarities and differences between models and techniques used in both communities are highlighted in order to illustrate the potential for integrated approaches and the strengths of unique paradigms. By sharing information about our research approaches and seeking to actively collaborate in the process of design and evaluation, the aim of achieving technologies that are truly user-informed, fit for context, high quality, and of demonstrated value is more likely to be realized. This may involve embracing new ways of working jointly that are unfamiliar to the stakeholders involved and that challenge disciplinary conventions. It also has policy implications for agencies commissioning research and development in this area.
Affiliation(s)
- Claudia Pagliari
- Division of Community Health Sciences, University of Edinburgh, Edinburgh, United Kingdom.
25. Ammenwerth E, de Keizer N. A viewpoint on evidence-based health informatics, based on a pilot survey on evaluation studies in health care informatics. J Am Med Inform Assoc 2007; 14:368-71. [PMID: 17329724] [PMCID: PMC2244873] [DOI: 10.1197/jamia.m2276]
Abstract
Concerned about evidence-based health informatics, the authors conducted a limited pilot survey attempting to determine how many IT evaluation studies in health care are never published, and why. A survey distributed to 722 academics had a low response rate, with 136 respondents giving instructive comments on 217 evaluation studies. Of those studies, half were published in international journals, and more than one-third were never published. Reasons for not publishing (with multiple reasons per study possible) included: "results not of interest for others" (1/3 of all studies), "publication in preparation" (1/3), "no time for publication" (1/5), "limited scientific quality of study" (1/6), "political or legal reasons" (1/7), and "study only conducted for internal use" (1/8). Those reasons for non-publication in health informatics resembled those reported in other fields. Publication bias (preference for positive studies) did not appear to be a major issue. The authors believe that widespread application of guidelines in conducting health informatics evaluation studies and utilization of a registry for evaluation study results could improve the evidence base of the field.
Affiliation(s)
- Elske Ammenwerth
- Institute for Health Information Systems, UMIT University for Health Sciences, Medical Informatics and Technology, Hall in Tyrol, Austria.
26. Dorr D, Bonner LM, Cohen AN, Shoai RS, Perrin R, Chaney E, Young AS. Informatics systems to promote improved care for chronic illness: a literature review. J Am Med Inform Assoc 2007; 14:156-63. [PMID: 17213491] [PMCID: PMC2213468] [DOI: 10.1197/jamia.m2255]
Abstract
OBJECTIVE To understand information systems components important in supporting team-based care of chronic illness through a literature search. DESIGN Systematic search of literature from 1996-2005 for evaluations of information systems used in the care of chronic illness. MEASUREMENTS The relationship of design, quality, information systems components, setting, and other factors with process, quality outcomes, and health care costs was evaluated. RESULTS In all, 109 articles were reviewed involving 112 information system descriptions. Chronic diseases targeted included diabetes (42.9% of reviewed articles), heart disease (36.6%), and mental illness (23.2%), among others. System users were primarily physicians, nurses, and patients. Sixty-seven percent of reviewed experiments had positive outcomes; 94% of uncontrolled, observational studies claimed positive results. Components closely correlated with positive experimental results were connection to an electronic medical record, computerized prompts, population management (including reports and feedback), specialized decision support, electronic scheduling, and personal health records. Barriers identified included costs, data privacy and security concerns, and failure to consider workflow. CONCLUSION The majority of published studies revealed a positive impact of specific health information technology components on chronic illness care. Implications for future research and system designs are discussed.
Affiliation(s)
- David Dorr
- Oregon Health & Science University, Department of Medical Informatics & Clinical Epidemiology, Portland, OR, USA.
27. de Keizer NF, Ammenwerth E. The quality of evidence in health informatics: how did the quality of healthcare IT evaluation publications develop from 1982 to 2005? Int J Med Inform 2007; 77:41-9. [PMID: 17208040] [DOI: 10.1016/j.ijmedinf.2006.11.009]
Abstract
OBJECTIVE To obtain an overview of study designs and study methods used in research evaluating IT in health care, to present a list of quality criteria by which all kinds of reported evaluation studies on IT systems in health care can be assessed, and to assess the quality of reported evaluation studies on IT in health care and its development over time (1982-2005). METHODS A generic 10-item list of quality indicators was developed based on existing literature on the quality of medical and medical informatics publications. It is applicable to all kinds of IT evaluation papers and not restricted to randomized controlled trials. One hundred and twenty explanatory papers evaluating the effects of an IT system in health care published between 1982 and 2005 were randomly selected from PubMed, the study designs and study methods were extracted, and the quality indicators were used to assess the quality of each paper by two independent raters. RESULTS The inter-rater agreement on the 10 quality indicators, as assessed by a pre-test with nine papers, was good (κ = 0.87). There was a trend towards more multi-centre studies and authors coming more frequently from various departments. About 70% of the studies used a design other than a randomized controlled trial (RCT). Forty percent of the studies combined at least two different data acquisition methods. The quality of IT evaluation papers, as defined by the quality indicators, improved only slightly over time (Spearman correlation coefficient rs = 0.19). The quality of RCT publications was significantly higher than the quality of non-RCT studies (p<0.001). CONCLUSION The continuous and dominant number of non-RCT studies reflects the various approaches applicable to evaluating IT systems in health care. Despite the increasing discussion on evidence-based health informatics, the quality of published evaluation studies on IT interventions in health care is still insufficient in some aspects. Journal editors and referees should take care that reports of IT system evaluations contain all aspects needed for a sufficient understanding and reproducibility of a paper. Publication guidelines should be developed to support more complete and better publications of IT evaluation papers.
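The inter-rater agreement statistic quoted in this abstract is Cohen's kappa. A minimal sketch of the two-rater computation (the ratings below are illustrative, not the study's data):

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa: observed agreement between two raters, corrected
    for the agreement expected by chance from their marginal frequencies."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    # Observed proportion of agreement
    p_obs = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement from each rater's label frequencies
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    p_exp = sum((freq_a[c] / n) * (freq_b[c] / n)
                for c in set(freq_a) | set(freq_b))
    return (p_obs - p_exp) / (1 - p_exp)

# Two raters scoring six papers as meeting (1) or failing (0) an indicator
kappa = cohens_kappa([1, 1, 1, 0, 0, 0], [1, 1, 0, 0, 0, 1])
```

A kappa of 0.87, as reported in the pre-test, indicates near-perfect agreement by common benchmarks (values above 0.8 are usually read as "almost perfect").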
Affiliation(s)
- N F de Keizer
- Department of Medical Informatics, J1b-114, Academic Medical Centre, Universiteit van Amsterdam, P.O. Box 22700, 1100 DE Amsterdam, The Netherlands.
28. Rigby M, Budgen D, Turner M, Kotsiopoulos I, Brereton P, Keane J, Bennett K, Russell M, Layzell P, Zhu F. A data-gathering broker as a future-orientated approach to supporting EPR users. Int J Med Inform 2006; 76:137-44. [PMID: 17010664] [DOI: 10.1016/j.ijmedinf.2006.07.012]
Abstract
With the continued expansion of electronic patient record systems ahead of comprehensive evidence, metrics, or future-proofing, health informatics in Europe and beyond is embarking on a faith-driven adventure that also risks data swamping of end-users. An alternative approach is an information broker system, drawing from departmental data sources. A 3-year study in health and social care has produced a first demonstrator that can search for specified information in heterogeneous distributed data stores, can copy it with source-specific permission, and can then merge the search results into one integrated picture in a real-time process which is also captured in an audit system. The research project has addressed a number of issues during the study, including updating the concepts of role-based access, semantic interoperability, and harnessing web-based services bound at the time of need. A demonstrator now exists, and provides a platform for further application and development research. This paper summarises how this opens up a viable alternative approach for the next generation of health record systems, enabling record searching and integration as and when they are needed for specific patient-related purposes, whilst being independent of organisations, diagnostic approaches, or service delivery structures, and reducing the risks of data swamping.
Affiliation(s)
- Michael Rigby
- Centre for Health Planning and Management, Darwin Building, Keele University, Keele, Staffordshire, UK.