1. Bumbarger NA, Jo SY, Cofield BJ, Haan O, Chatterjee D. DEXA Result Automation into Radiology Reports: An Implementation Guide for Radiologists, PACS Administrators, and Technicians. J Imaging Inform Med 2025. [PMID: 40011343; DOI: 10.1007/s10278-025-01451-4]
Abstract
Osteoporosis is prevalent among older adults and significantly increases fracture risk, with hip fractures often leading to reduced survival. Dual-energy X-ray absorptiometry (DEXA) is the gold standard for diagnosing osteoporosis. However, manual transcription of DEXA results into radiology reports is error-prone and time-consuming. This study explores the implementation of a vendor-neutral structured report (SR) system to automate data import from DEXA scans, aiming to improve efficiency and accuracy. The study used Nuance PowerScribe 360 and Hyland's PACSgear ModLink to automate DEXA data entry into radiology reports. ModLink translates DEXA results into structured data, which is mapped to customized report templates. Radiologists dictated reports using templates with and without the SR data being sent, and dictation times were compared between pre- and post-implementation measurements. The implementation of the SR system led to a significant reduction in report generation time, with radiologists achieving up to a fivefold decrease in dictation time: the slowest reader saw a 2.5-fold improvement and the fastest a fivefold improvement (p < 0.01). No errors in data mapping were observed, indicating reliable integration of the SR system. In light of the current radiologist shortage, the SR system demonstrated notable improvements in workflow efficiency without adding to technologist workload. The time savings and reduced transcription errors offer radiology practices a valuable tool to enhance productivity and patient care. Automating DEXA data transcription with a structured report system substantially improves efficiency, minimizes errors, and carries minimal implementation burden, representing a promising intervention for radiology practices facing increasing demand.
Affiliation(s)
- Nathan A Bumbarger
- Wilford Hall Ambulatory Surgical Center, Lackland Air Force Base, Bexar County, TX, USA
- Stephanie Y Jo, Brandon J Cofield, Olga Haan
- Department of Radiology, University of Maryland Medical Center, Baltimore, MD, USA
2. Deshpande P, Rasin A. Correlation Aware Relevance-Based Semantic Index for Clinical Big Data Repository. J Imaging Inform Med 2024; 37:2597-2611. [PMID: 38653911; PMCID: PMC11522240; DOI: 10.1007/s10278-024-01095-w]
Abstract
In this paper, we focus on indexing mechanisms for unstructured clinical big-data integrated repository systems. Clinical data are unstructured and heterogeneous, arriving in different files and formats, and accessing them efficiently and effectively is a critical challenge. Traditional indexing mechanisms are difficult to apply to unstructured data, especially when correlation information between clinical data elements must be identified. In this work, we developed a correlation-aware relevance-based index that retrieves clinical data by efficiently fetching the most relevant cases. In our previous work, we designed a methodology that categorizes medical data based on the semantics of data elements and merges them into an integrated repository, and we developed a data integration system that combines heterogeneous medical data sources and gives different users access to knowledge-based database repositories. Here, we designed an indexing system that uses semantic tags extracted from clinical data sources and medical ontologies to retrieve relevant data from database repositories and speed up data retrieval. Our objective is to provide an integrated biomedical database repository that can be used by radiologists as a reference, for patient care, or by researchers. We focus on a technique that performs data processing for integration, learns the semantic properties of data elements, and builds a correlation-aware topic index that facilitates efficient data retrieval. We generated semantic tags by identifying key elements from integrated clinical cases using topic modeling techniques, investigated a technique that identifies tags for merged categories and provides an index to fetch data from the integrated repository, and developed a topic coherence matrix that shows how well a topic is supported by the corpus of clinical cases and medical ontologies. Using the annotation index on the integrated database repository, we found more relevant results, with a 61% increase in recall. We evaluated the results with the help of experts and compared them with a naive index (an index built from all terms in the corpus). Our approach improved retrieval quality by returning the most relevant results and reduced retrieval time by applying the correlation-aware index to the integrated data repository. The topic-indexing approach proposed in this work identifies tags based on correlations between data elements, improves retrieval time, and returns the most relevant cases.
Affiliation(s)
- Priya Deshpande
- Department of Electrical and Computer Engineering, Marquette University, Milwaukee, WI, 53233, USA
3. Castagnoli F, Mencel J, Ap Dafydd D, Gough J, Drake B, Mcaddy NC, Withey SJ, Riddell AM, Koh DM, Shur JD. Response Evaluation Criteria in Gastrointestinal and Abdominal Cancers: Which to Use and How to Measure. Radiographics 2024; 44:e230047. [PMID: 38662587; DOI: 10.1148/rg.230047]
Abstract
As the management of gastrointestinal malignancy has evolved, tumor response assessment has expanded from size-based assessments to those that include tumor enhancement, in addition to functional data such as those derived from PET and diffusion-weighted imaging. Accurate interpretation of tumor response therefore requires knowledge of imaging modalities used in gastrointestinal malignancy, anticancer therapies, and tumor biology. Targeted therapies such as immunotherapy pose additional considerations due to unique imaging response patterns and drug toxicity; as a consequence, immunotherapy response criteria have been developed. Some gastrointestinal malignancies require assessment with tumor-specific criteria when assessing response, often to guide clinical management (such as watchful waiting in rectal cancer or suitability for surgery in pancreatic cancer). Moreover, anatomic measurements can underestimate therapeutic response when applied to molecular-targeted therapies or locoregional therapies in hypervascular malignancies such as hepatocellular carcinoma. In these cases, responding tumors may exhibit morphologic changes including cystic degeneration, necrosis, and hemorrhage, often without significant reduction in size. Awareness of pitfalls when interpreting gastrointestinal tumor response is required to correctly interpret response assessment imaging and guide appropriate oncologic management. Data-driven image analyses such as radiomics have been investigated in a variety of gastrointestinal tumors, such as identifying those more likely to respond to therapy or recur, with the aim of delivering precision medicine. Multimedia-enhanced radiology reports can facilitate communication of gastrointestinal tumor response by automatically embedding response categories, key data, and representative images. ©RSNA, 2024. Test Your Knowledge questions for this article are available in the supplemental material.
Affiliation(s)
- Francesca Castagnoli, Justin Mencel, Derfel Ap Dafydd, Jessica Gough, Brent Drake, Naami Charlotte Mcaddy, Samuel Joseph Withey, Angela Mary Riddell, Dow-Mu Koh, Joshua David Shur
- From the Departments of Radiology (F.C., D.a.D., N.C.M., S.J.W., A.M.R., D.M.K., J.D.S.), Oncology (J.M.), Radiotherapy (J.G.), and Nuclear Medicine (B.D.), Royal Marsden Hospital, Downs Road, Sutton SM2 5PT, UK; and Division of Radiotherapy and Imaging, The Institute of Cancer Research, London, UK (F.C., D.M.K.)
4. Kadom N, Lasiecka ZM, Nemeth AJ, Rykken JB, Lui YW, Seidenwurm D. Patient Engagement in Neuroradiology: A Narrative Review and Case Studies. AJNR Am J Neuroradiol 2024; 45:250-255. [PMID: 38216301; PMCID: PMC11286113; DOI: 10.3174/ajnr.a8077]
Abstract
The field of patient engagement in radiology is evolving and offers ample opportunities for neuroradiologists to become involved. The patient journey can serve as a model that inspires patient engagement initiatives. The patient journey in radiology may be viewed in 5 stages: 1) awareness that an imaging test is needed, 2) considering having a specific imaging test, 3) access to imaging, 4) imaging service delivery, and 5) ongoing care. Here, we describe patient engagement opportunities identified through a literature review, paired with case studies from practicing neuroradiologists.
Affiliation(s)
- Nadja Kadom
- Emory University School of Medicine (N.K.), Children's Healthcare of Atlanta, Atlanta, Georgia
- Alexander J Nemeth
- Northwestern University, Feinberg School of Medicine, Northwestern Memorial Hospital (A.J.N.), Chicago, Illinois
- Yvonne W Lui
- New York University, Grossman School of Medicine (Y.W.L.), New York, New York
5. Dutruel SP, Hentel KD, Hecht EM, Kadom N. Patient-Centered Radiology Communications: Engaging Patients as Partners. J Am Coll Radiol 2024; 21:7-18. [PMID: 37863150; DOI: 10.1016/j.jacr.2023.10.009]
Abstract
Patient-centered care is a model in which, by bringing the patient's perspective into the design and delivery of health care, we can better meet patients' needs and enhance the quality of care. Patient-centered care requires finding ways to communicate effectively with a diverse patient population with varying levels of health literacy, cultural backgrounds, and unique needs and preferences. Moreover, multimedia resources have the potential to inform and educate patients, promoting greater independence. In this review, we discuss the fundamentals of communication, the different modes used in radiology, and the key elements of effective communication. We then highlight five opportunities along the continuum of care in radiology practice in which improved communication can empower our patients and families and strengthen this partnership. Lastly, we discuss the importance of communication training for the workforce, of optimizing and seamlessly integrating technology solutions into our workflows, and of patient feedback in the design and delivery of care.
Affiliation(s)
- Silvina P Dutruel
- Department of Radiology, Weill Cornell Medical Center, New York, New York
- Keith D Hentel
- Professor, Clinical Radiology, Executive Vice Chairman, Department of Radiology; Vice President, Weill Cornell Imaging at New York-Presbyterian, New York, New York
- Elizabeth M Hecht
- Vice Chair for Academic Affairs, Department of Radiology, Weill Cornell Medical Center, New York, New York. https://twitter.com/ehecht_md
- Nadja Kadom
- Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, Georgia; Director of Quality, Department of Radiology, Children's Healthcare of Atlanta, Georgia; Interim Director of Quality, Department of Radiology, Emory Healthcare, Atlanta, Georgia; Chair, Practice and Performance Improvement Committee, ARRS; and Chair, Metrics Committee, ACR
6. Khosravi P, Schweitzer M. Artificial intelligence in neuroradiology: a scoping review of some ethical challenges. Front Radiol 2023; 3:1149461. [PMID: 37492387; PMCID: PMC10365008; DOI: 10.3389/fradi.2023.1149461]
Abstract
Artificial intelligence (AI) has great potential to increase accuracy and efficiency in many aspects of neuroradiology. It provides substantial opportunities for insights into brain pathophysiology, developing models to inform treatment decisions, and improving current prognostication as well as diagnostic algorithms. Concurrently, the autonomous use of AI models introduces ethical challenges regarding the scope of informed consent, risks associated with data privacy and protection, potential database biases, and questions of responsibility and liability that might arise. In this manuscript, we first provide a brief overview of AI methods used in neuroradiology and then segue into key methodological and ethical challenges. Specifically, we discuss the ethical principles affected by AI approaches to human neuroscience and provisions that might be imposed in this domain to ensure that the benefits of AI frameworks remain in alignment with ethics in research and healthcare in the future.
Affiliation(s)
- Pegah Khosravi
- Department of Biological Sciences, New York City College of Technology, CUNY, New York City, NY, United States
- Mark Schweitzer
- Office of the Vice President for Health Affairs, Wayne State University, Detroit, MI, United States
7. Jorg T, Halfmann MC, Arnhold G, Pinto Dos Santos D, Kloeckner R, Düber C, Mildenberger P, Jungmann F, Müller L. Implementation of structured reporting in clinical routine: a review of 7 years of institutional experience. Insights Imaging 2023; 14:61. [PMID: 37037963; PMCID: PMC10086081; DOI: 10.1186/s13244-023-01408-7]
Abstract
BACKGROUND: To evaluate the implementation process of structured reporting (SR) in a tertiary care institution over a period of 7 years.
METHODS: We analysed the content of our image database from January 2016 to December 2022 and compared the numbers of structured reports and free-text reports. For the ten most common SR templates, usage proportions were calculated on a quarterly basis. Annual modality-specific SR usage was calculated for ultrasound, CT, and MRI. During the implementation process, we surveyed radiologists and clinical referring physicians concerning their views on reporting in radiology.
RESULTS: As of December 2022, our reporting platform contained more than 22,000 structured reports. Use of the ten most common SR templates increased markedly since their implementation, leading to a mean SR usage of 77% in Q4 2022. The highest percentages of SR usage were shown for trauma CT, focused assessment with sonography for trauma (FAST), and prostate MRI: 97%, 95%, and 92%, respectively, in 2022. Overall modality-specific SR usage was 17% for ultrasound, 13% for CT, and 6% for MRI in 2022. Both radiologists and referring physicians were more satisfied with structured reports and rated SR better than free-text reporting (FTR) on various attributes.
CONCLUSIONS: The increasing SR usage during the period under review and the positive attitude towards SR among both radiologists and clinical referrers show that SR can be successfully implemented. We therefore encourage others to take this step in order to benefit from the advantages of SR.
KEY POINTS: 1. Structured reporting usage increased markedly since its implementation at our institution in 2016. 2. Mean usage for the ten most popular structured reporting templates was 77% in 2022. 3. Both radiologists and referring physicians preferred structured reports over free-text reports. 4. Our data show that structured reporting can be successfully implemented. 5. We strongly encourage others to implement structured reporting at their institutions.
Affiliation(s)
- Tobias Jorg, Moritz C Halfmann, Gordon Arnhold, Christoph Düber, Peter Mildenberger, Florian Jungmann, Lukas Müller
- Department of Diagnostic and Interventional Radiology, University Medical Center of the Johannes Gutenberg-University Mainz, Langenbeckstr. 1, 55131, Mainz, Germany
- Daniel Pinto Dos Santos
- Department of Radiology, University Hospital of Cologne, Cologne, Germany; Department of Radiology, University Hospital of Frankfurt, Frankfurt, Germany
- Roman Kloeckner
- Institute of Interventional Radiology, University Hospital Schleswig-Holstein - Campus Lübeck, Lübeck, Germany
8. McFarland JA, Huang J, Li Y, Gunn AJ, Morgan DE. Patient Engagement with Online Portals and Online Radiology Results. Curr Probl Diagn Radiol 2023; 52:106-109. [PMID: 36030140; DOI: 10.1067/j.cpradiol.2022.07.012]
Abstract
The purpose of this study was to examine patient portal enrollment and usage, with a specific focus on patients' utilization of online radiology reports. Oracle SQL (Austin, TX, USA) queries were used to extract portal enrollment data from the hospital system's EMR over a 13-month period, from March 1, 2017 through March 31, 2018. Enrollment status was collected along with patient information, including basic demographics and utilization patterns. For enrolled patients, interaction with the portal's "Radiology" work tab (RADTAB) was used as a surrogate for review of radiology results; as a comparator, interaction with the "Laboratory" work tab (LABTAB) was used as a surrogate for review of laboratory results. Statistical analysis was performed using chi-squared tests, Student's t-test, logistic regression, and multivariate analysis where appropriate. The population for analysis included 424,422 patients, of whom 138,783 (32.7%) were enrolled in the portal. Enrolled patients were older (P < 0.0001) and more likely to be female (P < 0.0001) and Caucasian (P < 0.0001), and had higher levels of educational attainment (P < 0.0001), higher annual household income (P < 0.0001), and more outpatient clinic visits (P < 0.0001). The proportion of enrolled patients who interacted with the LABTAB (47.2%) was significantly higher than the proportion who interacted with the RADTAB (27.1%) (P < 0.0001; Table 2). Patients who use the portal are thus more likely to use the Laboratory tab than the Radiology tab, and demographic differences do not account for this difference in usage. Further investigation is needed to better understand the reasons for these differing usage trends.
Affiliation(s)
- J Alex McFarland, Junjian Huang, Andrew J Gunn, Desiree E Morgan
- Department of Radiology, The University of Alabama at Birmingham, Birmingham, AL
- Yufeng Li
- Preventive Medicine, The University of Alabama at Birmingham, Birmingham, AL
9. Interactive Multimedia Reporting Technical Considerations: HIMSS-SIIM Collaborative White Paper. J Digit Imaging 2022; 35:817-833. [PMID: 35962150; PMCID: PMC9485305; DOI: 10.1007/s10278-022-00658-z]
Abstract
Despite technological advances in the analysis of digital images for medical consultations, many health information systems lack the ability to correlate textual descriptions of image findings with the actual images. Images and reports often reside in separate silos in the medical record throughout the process of image viewing, report authoring, and report consumption. Forward-thinking centers and early adopters have created interactive reports with multimedia elements and embedded hyperlinks that connect the narrative text with the related source images and measurements. Most of these solutions rely on proprietary single-vendor systems for viewing and reporting, in the absence of any encompassing industry standards to facilitate interoperability with the electronic health record (EHR) and other systems. International standards have enabled the digitization of image acquisition, storage, viewing, and structured reporting, and these provide the foundation to discuss enhanced reporting. Lessons learned in the digital transformation of radiology and pathology can serve as a basis for interactive multimedia reporting (IMR) across image-centric medical specialties. This paper describes the standards-based infrastructure and communications needed to fulfill recently defined clinical requirements, based on a consensus from an international workgroup of multidisciplinary medical specialists, informaticists, and industry participants. These efforts have led to the development of an Integrating the Healthcare Enterprise (IHE) profile that will serve as a foundation for interoperable interactive multimedia reporting.
10. Rethinking Clinical Trial Radiology Workflows and Student Training: Integrated Virtual Student Shadowing Experience, Education, and Evaluation. J Digit Imaging 2022; 35:723-731. [PMID: 35194736; PMCID: PMC8863390; DOI: 10.1007/s10278-022-00605-y]
Abstract
There is consistent demand for clinical exposure from students interested in radiology; however, the COVID-19 pandemic reduced the available options and limited student access to radiology departments. Additionally, there is increased demand on radiologists to manage more complex quantification in reports on patients enrolled in clinical trials. We present an online educational curriculum that addresses both of these gaps by virtually immersing students (radiology preprocessors, or RPs) into radiologists' workflows, where they identify and measure target lesions in advance of radiologists, streamlining report quantification. RPs at our institution, the National Institutes of Health (NIH), switched to remote work at the beginning of the COVID-19 pandemic. We accommodated them by transitioning our curriculum on cross-sectional anatomy and advanced PACS tools to a publicly available online curriculum. We describe collaborations between multiple academic research centers and industry through contributions of academic content to this curriculum. Further, we describe how we objectively assess educational effectiveness with cross-sectional anatomy quizzes and decreasing RP miss rates as students gain experience. Our RP curriculum generated significant interest, evidenced by a dozen academic and research institutes providing online presentations covering radiology modality basics and quantification in clinical trials. We report a decrease in RP miss rate percentage over a period of 1 year, including for one fully virtual RP. These results reflect training effectiveness through decreased discrepancies with radiologist reports and improved tumor identification over time. We present our RP curriculum and multicenter experience as a pilot in a clinical trial research setting. Students can obtain useful clinical radiology experience in a virtual learning environment by immersing themselves in a clinical radiologist's workflow. At the same time, they help radiologists improve patient care with more valuable quantitative reports, previously shown to improve radiologist efficiency. Students identify and measure lesions in clinical trials before radiologists do, and then review the radiologists' reports for self-evaluation against the included measurements. We consider our virtual approach a supplement to student education and a model for how artificial intelligence can improve patient care through more consistent quantification while improving radiologist efficiency.
11. Talking Points: Enhancing Communication Between Radiologists and Patients. Acad Radiol 2022; 29:888-896. [PMID: 33846062; DOI: 10.1016/j.acra.2021.02.026]
Abstract
Radiologists communicate along multiple pathways, using written, verbal, and non-verbal means, and radiology trainees must develop effective professional communication skills across all of these forms. This manuscript reviews evidence-based strategies for enhancing communication between radiologists and patients through direct communication, written means, and enhanced reporting. We highlight patient-centered communication efforts, the available evidence, and opportunities to engage learners and enhance the training and simulation efforts that improve communication with patients at all levels of clinical care.
12. Yousefirizi F, Decazes P, Amyar A, Ruan S, Saboury B, Rahmim A. AI-Based Detection, Classification and Prediction/Prognosis in Medical Imaging: Towards Radiophenomics. PET Clin 2021; 17:183-212. [PMID: 34809866; DOI: 10.1016/j.cpet.2021.09.010]
Abstract
Artificial intelligence (AI) techniques have significant potential to enable effective, robust, and automated image phenotyping, including the identification of subtle patterns. AI-based detection searches the image space to find regions of interest based on patterns and features. There is a spectrum of tumor histologies from benign to malignant that can be identified by AI-based classification approaches using image features. The extraction of minable information from images gives rise to the field of "radiomics" and can be explored via explicit (handcrafted/engineered) and deep radiomics frameworks. Radiomics analysis has the potential to be used as a noninvasive technique for the accurate characterization of tumors to improve diagnosis and treatment monitoring. This work reviews AI-based techniques, with a special focus on oncological PET and PET/CT imaging, for different detection, classification, and prediction/prognosis tasks. We also discuss the efforts needed to enable the translation of AI techniques to routine clinical workflows, as well as potential improvements and complementary techniques such as the use of natural language processing on electronic health records and neuro-symbolic AI techniques.
Affiliation(s)
- Fereshteh Yousefirizi
- Department of Integrative Oncology, BC Cancer Research Institute, 675 West 10th Avenue, Vancouver, British Columbia V5Z 1L3, Canada.
- Pierre Decazes
- Department of Nuclear Medicine, Henri Becquerel Centre, Rue d'Amiens - CS 11516 - 76038 Rouen Cedex 1, France; QuantIF-LITIS, Faculty of Medicine and Pharmacy, Research Building - 1st floor, 22 boulevard Gambetta, 76183 Rouen Cedex, France
- Amine Amyar
- QuantIF-LITIS, Faculty of Medicine and Pharmacy, Research Building - 1st floor, 22 boulevard Gambetta, 76183 Rouen Cedex, France; General Electric Healthcare, Buc, France
- Su Ruan
- QuantIF-LITIS, Faculty of Medicine and Pharmacy, Research Building - 1st floor, 22 boulevard Gambetta, 76183 Rouen Cedex, France
- Babak Saboury
- Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, MD, USA; Department of Computer Science and Electrical Engineering, University of Maryland, Baltimore County, Baltimore, MD, USA; Department of Radiology, Hospital of the University of Pennsylvania, Philadelphia, PA, USA
- Arman Rahmim
- Department of Integrative Oncology, BC Cancer Research Institute, 675 West 10th Avenue, Vancouver, British Columbia V5Z 1L3, Canada; Department of Radiology, University of British Columbia, Vancouver, British Columbia, Canada; Department of Physics, University of British Columbia, Vancouver, British Columbia, Canada
13
Yousefirizi F, Jha AK, Brosch-Lenz J, Saboury B, Rahmim A. Toward High-Throughput Artificial Intelligence-Based Segmentation in Oncological PET Imaging. PET Clin 2021; 16:577-596. [PMID: 34537131 DOI: 10.1016/j.cpet.2021.06.001] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Indexed: 12/15/2022]
Abstract
Artificial intelligence (AI) techniques for image-based segmentation have garnered much attention in recent years. Convolutional neural networks have shown impressive results and potential toward fully automated segmentation in medical imaging, and particularly PET imaging. To cope with the limited access to annotated data needed in supervised AI methods, given tedious and prone-to-error manual delineations, semi-supervised and unsupervised AI techniques have also been explored for segmentation of tumors or normal organs in single- and bimodality scans. This work reviews existing AI techniques for segmentation tasks and the evaluation criteria for translational AI-based segmentation efforts toward routine adoption in clinical workflows.
Affiliation(s)
- Fereshteh Yousefirizi
- Department of Integrative Oncology, BC Cancer Research Institute, 675 West 10th Avenue, Vancouver, British Columbia V5Z 1L3, Canada.
- Abhinav K Jha
- Department of Biomedical Engineering, Washington University in St. Louis, St Louis, MO 63130, USA; Mallinckrodt Institute of Radiology, Washington University School of Medicine, St Louis, MO 63110, USA
- Julia Brosch-Lenz
- Department of Integrative Oncology, BC Cancer Research Institute, 675 West 10th Avenue, Vancouver, British Columbia V5Z 1L3, Canada
- Babak Saboury
- Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, 9000 Rockville Pike, Bethesda, MD 20892, USA; Department of Computer Science and Electrical Engineering, University of Maryland Baltimore County, Baltimore, MD, USA; Department of Radiology, Hospital of the University of Pennsylvania, 3400 Spruce Street, Philadelphia, PA 19104, USA
- Arman Rahmim
- Department of Radiology, University of British Columbia, Vancouver, British Columbia, Canada; BC Cancer Research Institute, 675 West 10th Avenue, Office 6-112, Vancouver, British Columbia V5Z 1L3, Canada; Department of Physics, University of British Columbia, Vancouver, British Columbia, Canada
14
Jungmann F, Arnhold G, Kämpgen B, Jorg T, Düber C, Mildenberger P, Kloeckner R. A Hybrid Reporting Platform for Extended RadLex Coding Combining Structured Reporting Templates and Natural Language Processing. J Digit Imaging 2020; 33:1026-1033. [PMID: 32318897 DOI: 10.1007/s10278-020-00342-0] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Indexed: 01/26/2023]
Abstract
Structured reporting is a favorable and sustainable form of reporting in radiology. Among its advantages are better presentation, clearer nomenclature, and higher quality. By using MRRT-compliant templates, the content of the categorized items (e.g., select fields) can be automatically stored in a database, which allows further research and quality analytics based on established ontologies like RadLex® linked to the items. Additionally, it is relevant to provide free-text input for descriptions of findings and impressions in complex imaging studies or for the information included with the clinical referral. So far, however, this unstructured content cannot be categorized. We developed a solution to analyze and code these free-text parts of the templates in our MRRT-compliant reporting platform, using natural language processing (NLP) with RadLex® terms in addition to the already categorized items. The established hybrid reporting concept is working successfully. The NLP tool provides RadLex® codes with modifiers (affirmed, speculated, negated). Radiologists can confirm or reject codes provided by NLP before finalizing the structured report. Furthermore, users can suggest RadLex® codes from free text that is not correctly coded with NLP or can suggest to change the modifier. Analyzing free-text fields took 1.23 s on average. Hybrid reporting enables coding of free-text information in our MRRT-compliant templates and thus increases the amount of categorized data that can be stored in the database. This enhances the possibilities for further analyses, such as correlating clinical information with radiological findings or storing high-quality structured information for machine-learning approaches.
Affiliation(s)
- Florian Jungmann
- Department of Diagnostic and Interventional Radiology, University Medical Center of the Johannes Gutenberg University Mainz, Langenbeckstr. 1, 55131, Mainz, Germany.
- G Arnhold
- Department of Diagnostic and Interventional Radiology, University Medical Center of the Johannes Gutenberg University Mainz, Langenbeckstr. 1, 55131, Mainz, Germany
- B Kämpgen
- Empolis Information Management GmbH, Kaiserslautern, Germany
- T Jorg
- Department of Diagnostic and Interventional Radiology, University Medical Center of the Johannes Gutenberg University Mainz, Langenbeckstr. 1, 55131, Mainz, Germany
- C Düber
- Department of Diagnostic and Interventional Radiology, University Medical Center of the Johannes Gutenberg University Mainz, Langenbeckstr. 1, 55131, Mainz, Germany
- P Mildenberger
- Department of Diagnostic and Interventional Radiology, University Medical Center of the Johannes Gutenberg University Mainz, Langenbeckstr. 1, 55131, Mainz, Germany
- R Kloeckner
- Department of Diagnostic and Interventional Radiology, University Medical Center of the Johannes Gutenberg University Mainz, Langenbeckstr. 1, 55131, Mainz, Germany
15
Machine Learning and Deep Learning in Oncologic Imaging: Potential Hurdles, Opportunities for Improvement, and Solutions-Abdominal Imagers' Perspective. J Comput Assist Tomogr 2021; 45:805-811. [PMID: 34270486 DOI: 10.1097/rct.0000000000001183] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Indexed: 11/26/2022]
Abstract
The applications of machine learning in clinical radiology practice, and in oncologic imaging in particular, are steadily evolving. However, several hurdles stand in the way of widespread implementation, including the limited availability of large annotated data sets and the inconsistent methodology and terminology used to report findings on staging and follow-up imaging studies across a wide spectrum of solid tumors. This short review discusses these hurdles, opportunities for improvement, and potential solutions that can facilitate robust machine learning from the vast number of radiology reports and annotations generated by dictating radiologists.
16
Roth CJ, Clunie DA, Vining DJ, Berkowitz SJ, Berlin A, Bissonnette JP, Clark SD, Cornish TC, Eid M, Gaskin CM, Goel AK, Jacobs GC, Kwan D, Luviano DM, McBee MP, Miller K, Hafiz AM, Obcemea C, Parwani AV, Rotemberg V, Silver EL, Storm ES, Tcheng JE, Thullner KS, Folio LR. Multispecialty Enterprise Imaging Workgroup Consensus on Interactive Multimedia Reporting Current State and Road to the Future: HIMSS-SIIM Collaborative White Paper. J Digit Imaging 2021; 34:495-522. [PMID: 34131793 PMCID: PMC8329131 DOI: 10.1007/s10278-021-00450-5] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Received: 01/27/2021] [Revised: 03/05/2021] [Accepted: 03/19/2021] [Indexed: 12/20/2022]
Abstract
Diagnostic and evidential static image, video clip, and sound multimedia are captured during routine clinical care in cardiology, dermatology, ophthalmology, pathology, physiatry, radiation oncology, radiology, endoscopic procedural specialties, and other medical disciplines. Providers typically describe the multimedia findings in contemporaneous electronic health record clinical notes or associate a textual interpretative report. Visual communication aids commonly used to connect, synthesize, and supplement multimedia and descriptive text outside medicine remain technically challenging to integrate into patient care. Such beneficial interactive elements may include hyperlinks between text, multimedia elements, alphanumeric and geometric annotations, tables, graphs, timelines, diagrams, anatomic maps, and hyperlinks to external educational references that patients or provider consumers may find valuable. This HIMSS-SIIM Enterprise Imaging Community workgroup white paper outlines the current and desired clinical future state of interactive multimedia reporting (IMR). The workgroup adopted a consensus definition of IMR as “interactive medical documentation that combines clinical images, videos, sound, imaging metadata, and/or image annotations with text, typographic emphases, tables, graphs, event timelines, anatomic maps, hyperlinks, and/or educational resources to optimize communication between medical professionals, and between medical professionals and their patients.” This white paper also serves as a precursor for future efforts toward solving technical issues impeding routine interactive multimedia report creation and ingestion into electronic health records.
Affiliation(s)
- David J Vining
- Department of Abdominal Imaging, MD Anderson Cancer Center, Houston, TX, USA
- Seth J Berkowitz
- Department of Radiology, Beth Israel Deaconess Medical Center, Boston, MA, USA
- Alejandro Berlin
- Radiation Medicine Program, Princess Margaret Cancer Centre - University Health Network, Department of Radiation Oncology, University of Toronto, Toronto, ON, Canada
- Jean-Pierre Bissonnette
- Departments of Radiation Oncology and Medical Biophysics, University of Toronto, Toronto, ON, Canada
- Shawn D Clark
- University of Miami Hospitals and Clinics, Miami, FL, USA
- Toby C Cornish
- Department of Pathology, University of Colorado School of Medicine, Aurora, CO, USA
- Monief Eid
- eHealth & Digital Transformation Agency, Ministry of Health, Riyadh, Saudi Arabia
- Cree M Gaskin
- Department of Radiology and Medical Imaging, University of Virginia, Charlottesville, VA, USA
- David Kwan
- Health Technology and Information Management, Ontario Health (Cancer Care Ontario), Toronto, ON, Canada
- Damien M Luviano
- Department of Surgery, Virginia Tech Carilion School of Medicine, Roanoke, VA, USA
- Morgan P McBee
- Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, SC, USA
- Abdul Moiz Hafiz
- Division of Cardiology, Southern Illinois University School of Medicine, Springfield, IL, USA
- Ceferino Obcemea
- Radiation Research Program, National Cancer Institute, Bethesda, MD, USA
- Anil V Parwani
- Department of Pathology, The Ohio State University, Columbus, OH, USA
- Veronica Rotemberg
- Dermatology Service, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Erik S Storm
- Department of Radiology and Medical Education, Salem VA Medical Center, Salem, VA, USA
- James E Tcheng
- Department of Medicine, Division of Cardiology, Duke University, Durham, NC, USA
- Les R Folio
- Lead CT Radiologist, NIH Clinical Center, Bethesda, MD, USA
17
Ellenbogen AL, Patrie JT, Gaskin CM. Improving Patient Access to Medical Images by Integrating an Imaging Portal With the Electronic Health Record Patient Portal. J Am Coll Radiol 2021; 18:864-867. [DOI: 10.1016/j.jacr.2020.12.028] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Received: 11/23/2020] [Revised: 12/24/2020] [Accepted: 12/29/2020] [Indexed: 10/22/2022]
18
McCarthy N, Dahlan A, Cook TS, Hare NO, Ryan ML, St John B, Lawlor A, Curran KM. Enterprise imaging and big data: A review from a medical physics perspective. Phys Med 2021; 83:206-220. [DOI: 10.1016/j.ejmp.2021.04.004] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Received: 12/07/2020] [Revised: 03/24/2021] [Accepted: 04/06/2021] [Indexed: 02/04/2023]
19
Stember JN, Celik H, Krupinski E, Chang PD, Mutasa S, Wood BJ, Lignelli A, Moonis G, Schwartz LH, Jambawalikar S, Bagci U. Eye Tracking for Deep Learning Segmentation Using Convolutional Neural Networks. J Digit Imaging 2019; 32:597-604. [PMID: 31044392 PMCID: PMC6646645 DOI: 10.1007/s10278-019-00220-4] [Citation(s) in RCA: 28] [Impact Index Per Article: 5.6] [Indexed: 12/12/2022]
Abstract
Deep learning with convolutional neural networks (CNNs) has experienced tremendous growth in multiple healthcare applications and has been shown to have high accuracy in semantic segmentation of medical (e.g., radiology and pathology) images. However, a key barrier in the required training of CNNs is obtaining large-scale and precisely annotated imaging data. We sought to address the lack of annotated data with eye tracking technology. As a proof of principle, our hypothesis was that segmentation masks generated with the help of eye tracking (ET) would be very similar to those rendered by hand annotation (HA). Additionally, our goal was to show that a CNN trained on ET masks would be equivalent to one trained on HA masks, the latter being the current standard approach. Step 1: Screen captures of 19 publicly available radiologic images of assorted structures within various modalities were analyzed. ET and HA masks for all regions of interest (ROIs) were generated from these image datasets. Step 2: Utilizing a similar approach, ET and HA masks for 356 publicly available T1-weighted postcontrast meningioma images were generated. Three hundred six of these image + mask pairs were used to train a CNN with U-net-based architecture. The remaining 50 images were used as the independent test set. Step 1: ET and HA masks for the nonneurological images had an average Dice similarity coefficient (DSC) of 0.86 between each other. Step 2: Meningioma ET and HA masks had an average DSC of 0.85 between each other. After separate training using both approaches, the ET approach performed virtually identically to HA on the test set of 50 images. The former had an area under the curve (AUC) of 0.88, while the latter had AUC of 0.87. ET and HA predictions had trimmed mean DSCs compared to the original HA maps of 0.73 and 0.74, respectively. These trimmed DSCs between ET and HA were found to be statistically equivalent with a p value of 0.015. 
We have demonstrated that ET can create segmentation masks suitable for deep learning semantic segmentation. Future work will integrate ET to produce masks in a faster, more natural manner that distracts less from typical radiology clinical workflow.
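As a rough illustration (not taken from the paper), the Dice similarity coefficient used above to compare eye-tracking and hand-annotated masks is defined as DSC = 2|A ∩ B| / (|A| + |B|) over the foreground pixels of two binary masks; a minimal sketch:

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks.

    DSC = 2|A ∩ B| / (|A| + |B|), where A and B are the sets of
    foreground (nonzero) pixel indices. Two empty masks are treated
    as identical (DSC = 1.0) by convention.
    """
    a = {i for i, v in enumerate(mask_a) if v}
    b = {i for i, v in enumerate(mask_b) if v}
    if not a and not b:
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))

# Example: two flattened masks overlapping in 2 of 3 foreground pixels each
print(dice([1, 1, 1, 0], [0, 1, 1, 1]))  # 2*2/(3+3) = 0.6666666666666666
```

In practice the masks would be flattened 2-D arrays; a DSC near 0.85-0.86, as reported above, means the two masks agree on most foreground pixels.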
Affiliation(s)
- J N Stember
- Department of Radiology, Columbia University Medical Center - NYPH, New York, NY, 10032, USA.
- H Celik
- The National Institutes of Health, Clinical Center, Bethesda, MD, 20892, USA
- E Krupinski
- Department of Radiology & Imaging Sciences, Emory University, Atlanta, GA, 30322, USA
- P D Chang
- Department of Radiology, University of California, Irvine, CA, 92697, USA
- S Mutasa
- Department of Radiology, Columbia University Medical Center - NYPH, New York, NY, 10032, USA
- B J Wood
- The National Institutes of Health, Clinical Center, Bethesda, MD, 20892, USA
- A Lignelli
- Department of Radiology, Columbia University Medical Center - NYPH, New York, NY, 10032, USA
- G Moonis
- Department of Radiology, Columbia University Medical Center - NYPH, New York, NY, 10032, USA
- L H Schwartz
- Department of Radiology, Columbia University Medical Center - NYPH, New York, NY, 10032, USA
- S Jambawalikar
- Department of Radiology, Columbia University Medical Center - NYPH, New York, NY, 10032, USA
- U Bagci
- Center for Research in Computer Vision, University of Central Florida, 4328 Scorpius St. HEC 221, Orlando, FL, 32816, USA
20
Goldberg-Stein S, Chernyak V. Adding Value in Radiology Reporting. J Am Coll Radiol 2019; 16:1292-1298. [PMID: 31492407 DOI: 10.1016/j.jacr.2019.05.042] [Citation(s) in RCA: 31] [Impact Index Per Article: 6.2] [Received: 05/20/2019] [Revised: 05/23/2019] [Accepted: 05/25/2019] [Indexed: 12/29/2022]
Abstract
The major goal of the radiology report is to deliver timely, accurate, and actionable information to the patient care team and the patient. Structured reporting offers multiple advantages over traditional free-text reporting, including reduction in diagnostic error, comprehensiveness, adherence to national consensus guidelines, revenue capture, data collection, and research. Various technological innovations enhance integration of structured reporting into everyday clinical practice. This review discusses the benefits of innovations in radiology reporting to the clinical decision process, the patient experience, the cost of imaging, and the overall contributions to the health of the population. Future directions, including the use of artificial intelligence, are reviewed.
Affiliation(s)
- Victoria Chernyak
- Department of Radiology, Montefiore Medical Center, Bronx, New York.
21
Schaub SK, Ermoian RP, Wang CL, O'Malley RB, Kim EY, Shuman WP, Hendrickson K, Apisarnthanarax S. Bridging the Radiation Oncology and Diagnostic Radiology Communication Gap: A Survey to Determine Usefulness and Optimal Presentation of Radiotherapy Treatment Plans for Radiologists. Curr Probl Diagn Radiol 2020; 49:161-167. [PMID: 30885420 DOI: 10.1067/j.cpradiol.2019.02.009] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Received: 11/21/2018] [Revised: 02/13/2019] [Accepted: 02/14/2019] [Indexed: 11/22/2022]
Abstract
RATIONALE AND OBJECTIVES We hypothesized that giving radiologists easily accessible visual-spatial information on where radiation has been delivered may improve the accuracy of image interpretation and thereby the quality of patient care. We present a national sample of radiologists' opinions regarding the usefulness and optimal presentation of a system for accessing radiotherapy (RT) plans. METHODS An anonymous survey was sent to the members of the Association of University Radiologists, and descriptive statistics were performed. RESULTS Questionnaires were returned by 95 of 1383 members; 76% of respondents were attendings, and 94% practiced in an academic setting. Only 40% of radiologists reported that they knew most of the time whether a patient had received RT in the field scanned. A large majority of respondents (88%) felt that a history of prior radiation in a cancer patient was at least an occasional barrier to interpreting imaging findings in a clinically useful way. The following types of information were considered helpful when interpreting a scan: screenshots of the radiation plan (85%), scrollable DICOM data on the planning CT showing delivered RT dose lines (54%), and a written RT treatment summary (47%). Nearly all (89%) wanted the DICOM data available within the clinical radiology picture archiving and communication system (PACS). Radiologists expected easy access to RT plans to increase efficiency (76%) and accuracy (88%). CONCLUSION Diagnostic radiologists desire improved access to and integration of radiotherapy plans into the diagnostic radiology clinical workup in the form of visual-spatial data.
Affiliation(s)
- Stephanie K Schaub
- University of Washington, Department of Radiation Oncology, Seattle, WA.
- Ralph P Ermoian
- University of Washington, Department of Radiation Oncology, Seattle, WA
- Carolyn L Wang
- University of Washington, Department of Radiology, Seattle, WA
- Ryan B O'Malley
- University of Washington, Department of Radiology, Seattle, WA
- Edward Y Kim
- University of Washington, Department of Radiation Oncology, Seattle, WA
22
Willemink MJ, Koszek WA, Hardell C, Wu J, Fleischmann D, Harvey H, Folio LR, Summers RM, Rubin DL, Lungren MP. Preparing Medical Imaging Data for Machine Learning. Radiology 2020; 295:4-15. [PMID: 32068507 PMCID: PMC7104701 DOI: 10.1148/radiol.2020192224] [Citation(s) in RCA: 412] [Impact Index Per Article: 82.4] [Received: 10/14/2019] [Revised: 12/03/2019] [Accepted: 12/30/2019] [Indexed: 12/19/2022]
Abstract
Artificial intelligence (AI) continues to garner substantial interest in medical imaging. The potential applications are vast and include the entirety of the medical imaging life cycle from image creation to diagnosis to outcome prediction. The chief obstacles to development and clinical implementation of AI algorithms include availability of sufficiently large, curated, and representative training data that includes expert labeling (eg, annotations). Current supervised AI methods require a curation process for data to optimally train, validate, and test algorithms. Currently, most research groups and industry have limited data access based on small sample sizes from small geographic areas. In addition, the preparation of data is a costly and time-intensive process, the results of which are algorithms with limited utility and poor generalization. In this article, the authors describe fundamental steps for preparing medical imaging data in AI algorithm development, explain current limitations to data curation, and explore new approaches to address the problem of data availability.
Collapse
Affiliation(s)
- Martin J. Willemink
- From the Department of Radiology, Stanford University School of Medicine, 300 Pasteur Dr, S-072, Stanford, CA 94305-5105 (M.J.W., D.F., D.L.R., M.P.L.); Segmed, Menlo Park, Calif (M.J.W., W.A.K., C.H., J.W.); School of Engineering, Stanford University, Stanford, Calif (J.W.); Institute of Cognitive Neuroscience, University College London, London, England (H.H.); Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, Md (L.R.F.); Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, National Institutes of Health, Clinical Center, Bethesda, Md (R.M.S.); Department of Biomedical Data Science, Stanford University School of Medicine, Stanford, Calif (D.L.R.); and Stanford Center for Artificial Intelligence in Medicine and Imaging (AIMI), Stanford, Calif (M.P.L.)
| | - Wojciech A. Koszek
- From the Department of Radiology, Stanford University School of Medicine, 300 Pasteur Dr, S-072, Stanford, CA 94305-5105 (M.J.W., D.F., D.L.R., M.P.L.); Segmed, Menlo Park, Calif (M.J.W., W.A.K., C.H., J.W.); School of Engineering, Stanford University, Stanford, Calif (J.W.); Institute of Cognitive Neuroscience, University College London, London, England (H.H.); Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, Md (L.R.F.); Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, National Institutes of Health, Clinical Center, Bethesda, Md (R.M.S.); Department of Biomedical Data Science, Stanford University School of Medicine, Stanford, Calif (D.L.R.); and Stanford Center for Artificial Intelligence in Medicine and Imaging (AIMI), Stanford, Calif (M.P.L.)
| | - Cailin Hardell
- From the Department of Radiology, Stanford University School of Medicine, 300 Pasteur Dr, S-072, Stanford, CA 94305-5105 (M.J.W., D.F., D.L.R., M.P.L.); Segmed, Menlo Park, Calif (M.J.W., W.A.K., C.H., J.W.); School of Engineering, Stanford University, Stanford, Calif (J.W.); Institute of Cognitive Neuroscience, University College London, London, England (H.H.); Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, Md (L.R.F.); Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, National Institutes of Health, Clinical Center, Bethesda, Md (R.M.S.); Department of Biomedical Data Science, Stanford University School of Medicine, Stanford, Calif (D.L.R.); and Stanford Center for Artificial Intelligence in Medicine and Imaging (AIMI), Stanford, Calif (M.P.L.)
| | - Jie Wu
- From the Department of Radiology, Stanford University School of Medicine, 300 Pasteur Dr, S-072, Stanford, CA 94305-5105 (M.J.W., D.F., D.L.R., M.P.L.); Segmed, Menlo Park, Calif (M.J.W., W.A.K., C.H., J.W.); School of Engineering, Stanford University, Stanford, Calif (J.W.); Institute of Cognitive Neuroscience, University College London, London, England (H.H.); Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, Md (L.R.F.); Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, National Institutes of Health, Clinical Center, Bethesda, Md (R.M.S.); Department of Biomedical Data Science, Stanford University School of Medicine, Stanford, Calif (D.L.R.); and Stanford Center for Artificial Intelligence in Medicine and Imaging (AIMI), Stanford, Calif (M.P.L.)
| | - Dominik Fleischmann
- From the Department of Radiology, Stanford University School of Medicine, 300 Pasteur Dr, S-072, Stanford, CA 94305-5105 (M.J.W., D.F., D.L.R., M.P.L.); Segmed, Menlo Park, Calif (M.J.W., W.A.K., C.H., J.W.); School of Engineering, Stanford University, Stanford, Calif (J.W.); Institute of Cognitive Neuroscience, University College London, London, England (H.H.); Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, Md (L.R.F.); Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, National Institutes of Health, Clinical Center, Bethesda, Md (R.M.S.); Department of Biomedical Data Science, Stanford University School of Medicine, Stanford, Calif (D.L.R.); and Stanford Center for Artificial Intelligence in Medicine and Imaging (AIMI), Stanford, Calif (M.P.L.)
| | - Hugh Harvey
- From the Department of Radiology, Stanford University School of Medicine, 300 Pasteur Dr, S-072, Stanford, CA 94305-5105 (M.J.W., D.F., D.L.R., M.P.L.); Segmed, Menlo Park, Calif (M.J.W., W.A.K., C.H., J.W.); School of Engineering, Stanford University, Stanford, Calif (J.W.); Institute of Cognitive Neuroscience, University College London, London, England (H.H.); Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, Md (L.R.F.); Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, National Institutes of Health, Clinical Center, Bethesda, Md (R.M.S.); Department of Biomedical Data Science, Stanford University School of Medicine, Stanford, Calif (D.L.R.); and Stanford Center for Artificial Intelligence in Medicine and Imaging (AIMI), Stanford, Calif (M.P.L.)
| | - Les R. Folio
- From the Department of Radiology, Stanford University School of Medicine, 300 Pasteur Dr, S-072, Stanford, CA 94305-5105 (M.J.W., D.F., D.L.R., M.P.L.); Segmed, Menlo Park, Calif (M.J.W., W.A.K., C.H., J.W.); School of Engineering, Stanford University, Stanford, Calif (J.W.); Institute of Cognitive Neuroscience, University College London, London, England (H.H.); Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, Md (L.R.F.); Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, National Institutes of Health, Clinical Center, Bethesda, Md (R.M.S.); Department of Biomedical Data Science, Stanford University School of Medicine, Stanford, Calif (D.L.R.); and Stanford Center for Artificial Intelligence in Medicine and Imaging (AIMI), Stanford, Calif (M.P.L.)
| | - Ronald M. Summers
- From the Department of Radiology, Stanford University School of Medicine, 300 Pasteur Dr, S-072, Stanford, CA 94305-5105 (M.J.W., D.F., D.L.R., M.P.L.); Segmed, Menlo Park, Calif (M.J.W., W.A.K., C.H., J.W.); School of Engineering, Stanford University, Stanford, Calif (J.W.); Institute of Cognitive Neuroscience, University College London, London, England (H.H.); Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, Md (L.R.F.); Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, National Institutes of Health, Clinical Center, Bethesda, Md (R.M.S.); Department of Biomedical Data Science, Stanford University School of Medicine, Stanford, Calif (D.L.R.); and Stanford Center for Artificial Intelligence in Medicine and Imaging (AIMI), Stanford, Calif (M.P.L.)
- Daniel L. Rubin
- Matthew P. Lungren
24
Do HM, Spear LG, Nikpanah M, Mirmomen SM, Machado LB, Toscano AP, Turkbey B, Bagheri MH, Gulley JL, Folio LR. Augmented Radiologist Workflow Improves Report Value and Saves Time: A Potential Model for Implementation of Artificial Intelligence. Acad Radiol 2020; 27:96-105. [PMID: 31818390 DOI: 10.1016/j.acra.2019.09.014] [Citation(s) in RCA: 33] [Impact Index Per Article: 6.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2019] [Revised: 09/12/2019] [Accepted: 09/17/2019] [Indexed: 12/12/2022]
Abstract
RATIONALE AND OBJECTIVES Our primary aim was to improve radiology reports by increasing concordance of target lesion measurements with oncology records using radiology preprocessors (RPs). We also assessed whether RPs quantifying tumor burden sped notification of incidental actionable findings to referring clinicians and saved clinical radiologists exam interpretation time.
MATERIALS AND METHODS In this prospective quality improvement initiative, RPs annotated lesions before radiologist interpretation of CT exams. Clinical radiologists then hyperlinked approved measurements into interactive reports during interpretations. RPs evaluated concordance with our tumor measurement radiologist, the determinant of tumor burden. Actionable finding detection and notification times were also deduced. Clinical radiologist interpretation times were calculated from established average CT chest, abdomen, and pelvis interpretation times.
RESULTS RPs assessed 1287 body CT exams with 812 follow-up CT chest, abdomen, and pelvis studies, 95 (11.7%) of which had 241 verified target lesions. Concordance of target lesion measurements improved (67.8% vs. 22.5%). RPs detected 93.1% of incidental actionable findings, with faster clinician notification by a median time of 1 hour (range: 15 minutes-16 hours). Radiologist exam interpretation times decreased by 37%.
CONCLUSIONS This workflow resulted in three-fold improved target lesion measurement concordance with oncology records, earlier detection and faster notification of incidental actionable findings to referring clinicians, and decreased exam interpretation times for clinical radiologists. These findings demonstrate potential roles for automation (such as AI) to improve report value, worklist prioritization, and patient care.
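As a rough illustration of the concordance metric described above (not taken from the study), measurement agreement could be computed by counting lesions whose annotated size falls within a tolerance of the reference value. The function name, tolerance, and sample values below are hypothetical:

```python
def concordance_rate(annotated_mm, reference_mm, tolerance_mm=1.0):
    """Fraction of lesions whose annotated measurement falls within
    tolerance_mm of the reference (oncology-record) measurement."""
    assert len(annotated_mm) == len(reference_mm) and annotated_mm
    matches = sum(
        abs(a - r) <= tolerance_mm
        for a, r in zip(annotated_mm, reference_mm)
    )
    return matches / len(annotated_mm)

# Illustrative values only (not study data): three of four lesions
# agree within 1 mm, so the rate is 0.75.
rp_assisted = [21.0, 14.5, 33.0, 9.8]
reference = [21.3, 14.0, 33.5, 12.0]
rate = concordance_rate(rp_assisted, reference)
```

The actual study compared categorical agreement with oncology records rather than a fixed millimeter tolerance; the sketch only shows the shape of such a comparison.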
26
Radiologist Adoption of Interactive Multimedia Reporting Technology. J Am Coll Radiol 2018; 16:465-471. [PMID: 30545711 DOI: 10.1016/j.jacr.2018.10.009] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/05/2018] [Revised: 10/05/2018] [Accepted: 10/14/2018] [Indexed: 11/21/2022]
Abstract
PURPOSE To determine if radiologists find enough value in available interactive multimedia reporting technology to routinely adopt it into clinical practice.
MATERIALS AND METHODS Our institution's reporting application (Vue Reporting, Carestream Health) allows the incorporation of multimedia elements, including active hyperlinks, into clinical reports; we asked whether radiologists would find enough value in this technique to change their practice. We retrospectively reviewed 559,841 diagnostic reports issued July 2016 to February 2018 for the presence of text hyperlinks that interactively connect to imaging findings in the PACS. Results were subdivided by modality, reporting radiologist role (ie, resident, fellow, attending physician), and subspecialty. Average percentages over the final 6 months were chosen to represent established adoption rates.
RESULTS For each modality, the 6-month average percentages of reports containing hyperlinks to imaging findings, subdivided by the role of the radiologist who created the report, were as follows: CT: residents = 27.6%, fellows = 19.5%, attending physicians = 26.0%; MRI: residents = 26.6%, fellows = 8.7%, attending physicians = 5.1%; and PET/CT: residents = 53.3%, fellows = 46.7%, attending physicians = 19.4%. Rates were 0% to 4% among ultrasound, radiography, and nuclear medicine reports, regardless of radiologist role. The 6-month average percentages of CT and MRI reports with hyperlinks to imaging findings varied by subspecialty from 5.4% to 57.1%.
CONCLUSION Our radiologists found enough value in available interactive multimedia reporting technology to adopt it into their clinical practice, commonly inserting hyperlinks into their CT, PET/CT, and MRI reports to create interactive connections to key imaging findings in the PACS.
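The per-modality, per-role adoption rates reported above are fractions of reports containing hyperlinks within each group. A minimal sketch of that tabulation, with an entirely hypothetical input format (the study's actual data pipeline is not described):

```python
from collections import defaultdict

def adoption_rates(reports):
    """reports: iterable of (modality, role, has_hyperlink) tuples.
    Returns {(modality, role): fraction of reports with hyperlinks}."""
    totals = defaultdict(int)
    linked = defaultdict(int)
    for modality, role, has_link in reports:
        totals[(modality, role)] += 1
        if has_link:
            linked[(modality, role)] += 1
    return {key: linked[key] / totals[key] for key in totals}

# Illustrative records only (not study data):
sample = [
    ("CT", "resident", True),
    ("CT", "resident", False),
    ("MRI", "attending", False),
    ("MRI", "attending", False),
]
rates = adoption_rates(sample)
# rates[("CT", "resident")] is 0.5; rates[("MRI", "attending")] is 0.0
```

The study additionally averaged such monthly rates over the final 6 months to represent established adoption; that step is omitted here.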
27
Towards More Structure: Comparing TNM Staging Completeness and Processing Time of Text-Based Reports versus Fully Segmented and Annotated PET/CT Data of Non-Small-Cell Lung Cancer. CONTRAST MEDIA & MOLECULAR IMAGING 2018; 2018:5693058. [PMID: 30515067 PMCID: PMC6236664 DOI: 10.1155/2018/5693058] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/30/2018] [Revised: 09/10/2018] [Accepted: 09/26/2018] [Indexed: 12/25/2022]
Abstract
Results of PET/CT examinations are communicated as text-based reports which are frequently not fully structured. Incomplete or missing staging information can be a significant source of staging and treatment errors. We compared standard text-based reports to a manual full 3D-segmentation-based approach with respect to TNM completeness and processing time. TNM information was extracted retrospectively from 395 reports. Moreover, the RIS time stamps of these reports were analyzed. 2995 lesions using a set of 41 classification labels (TNM features + location) were manually segmented on the corresponding image data. Information content and processing time of reports and segmentations were compared using descriptive statistics and modelling. The TNM/UICC stage was mentioned explicitly in only 6% (n=22) of the text-based reports. In 22% (n=86), information was incomplete, most frequently affecting T stage (19%, n=74), followed by N stage (6%, n=22) and M stage (2%, n=9). Full NSCLC-lesion segmentation required a median time of 13.3 min, while the median of the shortest estimator of the text-based reporting time (R1) was 18.1 min (p=0.01). Tumor stage (UICC I/II: 5.2 min, UICC III/IV: 20.3 min, p < 0.001), lesion size (p < 0.001), and lesion count (n=1: 4.4 min, n=12: 37.2 min, p < 0.001) correlated significantly with the segmentation time, but not with the estimators of text-based reporting time. Numerous text-based reports are lacking staging information. A segmentation-based reporting approach tailored to the staging task improves report quality with manageable processing time and helps to avoid erroneous therapy decisions based on incomplete reports. Furthermore, segmented data may be used for multimedia enhancement and automatization.
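Extracting T, N, and M descriptors from free-text reports, as done retrospectively in the study above, can be approximated with pattern matching. The following is a simplified sketch, not the study's method; the patterns cover only common descriptor spellings and would miss many real-world variants:

```python
import re

# Hypothetical patterns for TNM descriptors (e.g. "cT2a", "N1", "M0").
TNM_PATTERNS = {
    "T": re.compile(r"\b[cp]?T(?:[0-4][a-c]?|is|x)\b", re.IGNORECASE),
    "N": re.compile(r"\b[cp]?N(?:[0-3][a-c]?|x)\b", re.IGNORECASE),
    "M": re.compile(r"\b[cp]?M(?:[01][a-c]?|x)\b", re.IGNORECASE),
}

def missing_tnm(report_text):
    """Return the set of TNM features not mentioned in the report."""
    return {
        feature
        for feature, pattern in TNM_PATTERNS.items()
        if not pattern.search(report_text)
    }

# This illustrative report mentions T and N stage but omits M stage,
# the pattern the study found least often missing.
missing = missing_tnm("PET/CT: primary mass consistent with cT2a N1 disease.")
```

A report flagged by such a check would correspond to the 22% of text-based reports the study found incomplete; a segmentation-based approach avoids the problem by making each TNM feature an explicit annotation label.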