1
Barufaldi B, Lago MA, Abadi E, Maidment ADA. Container applications for the development and integration of virtual imaging platforms. Med Phys 2025. [PMID: 40121543] [DOI: 10.1002/mp.17777]
Abstract
BACKGROUND: Virtual imaging trials (VITs) have made significant advancements through the development of realistic human anatomy models, scanner-specific simulations, and virtual image interpretation. To promote the widespread adoption of VITs in the medical imaging community, it is important to develop methods that unify and facilitate their use, ensuring reliable application across various imaging studies. PURPOSE: We developed a containerized environment to enhance collaboration and interoperability across VIT platforms. This environment integrates key components of two well-established breast imaging platforms (OpenVCT and VICTRE), enabling direct comparison between specific modules for simulating anthropomorphic phantoms, lesions, and x-ray images. METHODS: Wrappers were developed to simplify the setup and execution of the OpenVCT and VICTRE platforms and to ensure compatibility and interoperability across different software components. These wrappers streamline the installation of necessary packages, data formatting, and pipeline execution. The containerized environment was built using Docker images to provide resources for cross-platform integration. The breast anatomy generated by VICTRE was augmented using a simplex-based method from OpenVCT, providing additional texture modeling of the breast parenchyma. Power spectra (PS) were calculated to assess the texture complexity of the simulated breast tissue and to compare the outcomes. Lesion simulations were performed using breast models with calcifications and masses, allowing for a comparison of Monte Carlo (VICTRE) and raytracing (OpenVCT) imaging techniques. Key differences in x-ray attenuation models and image reconstruction methods were analyzed to evaluate the differences in the reconstructed images and overall image quality. RESULTS: The containerized approach simplified the setup and execution of the simulation platforms, embedding all the necessary packages and dependencies into the Docker images.
These containerized environments supported the simulation of anthropomorphic breast models and x-ray images using both Monte Carlo (VICTRE) and raytracing (OpenVCT) methods. The breast images generated using the conventional VICTRE method and the integrated simplex-based method from OpenVCT were visually comparable. The β estimates from the PS for both approaches were close to 3, as expected for mammographic images, with only minor differences observed in the high-frequency components of the spectra (a difference of 0.2). These differences were particularly evident in areas of high tissue density and in regions of interest containing lesions; variations in the acquisition geometry affected the lesion visualization, demonstrating slight differences between the Monte Carlo and raytracing simulations. Despite these differences, the overall performance of both methods in simulating images was similar, and the integrated environment provided a robust platform for comparing and optimizing imaging simulations. CONCLUSIONS: Containerized environments enable cross-platform comparisons and hybrid approaches. In this work, Docker images provided all the resources needed to simulate and compare the outcomes of breast phantom and x-ray image simulations, ensuring their robustness and reproducibility. The integration of VICTRE and OpenVCT methods allowed for data augmentation and provided resources for the selection of imaging methods. This work lays a foundation for future VIT advancements, ensuring that these resources remain credible, reproducible, and accessible to the research community.
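The β texture metric discussed in this abstract is the slope of the image power spectrum on log-log axes, estimated from a radially averaged 2D spectrum. A minimal NumPy sketch of that estimate (the function name, integer radial binning, and fit range are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def power_spectrum_beta(image):
    """Estimate the power-law exponent beta of a 2D image's power spectrum.

    Fits log10(PS) ~ -beta * log10(f) over the radially averaged spectrum,
    the conventional texture summary for mammographic parenchyma (beta ~ 3).
    """
    img = image - image.mean()                       # remove the DC component
    ps = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    ny, nx = ps.shape
    y, x = np.indices(ps.shape)
    r = np.hypot(x - nx // 2, y - ny // 2).astype(int)   # integer radial bins
    radial = np.bincount(r.ravel(), ps.ravel()) / np.bincount(r.ravel())
    f = np.arange(1, min(nx, ny) // 2)               # skip DC, stay in-band
    slope, _ = np.polyfit(np.log10(f), np.log10(radial[f]), 1)
    return -slope                                    # beta > 0 by convention

# synthetic isotropic texture with power ~ f^-3 (amplitude shaped as f^-1.5)
rng = np.random.default_rng(0)
n = 256
f2d = np.hypot(np.fft.fftfreq(n)[:, None], np.fft.fftfreq(n)[None, :])
f2d[0, 0] = 1.0                                      # avoid divide-by-zero at DC
amp = f2d ** -1.5
phase = np.exp(2j * np.pi * rng.random((n, n)))
tex = np.real(np.fft.ifft2(amp * phase))
print(f"beta ~ {power_spectrum_beta(tex):.2f}")
```

Because the synthetic texture is built with power falling as f^-3, the fitted exponent recovers a value near 3, mirroring the β ≈ 3 reported for mammographic images.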
Affiliation(s)
- Bruno Barufaldi
- Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Miguel A Lago
- Division of Imaging, Diagnostics and Software Reliability, OSEL, CDRH, U.S. Food and Drug Administration, Silver Spring, Maryland, USA
- Ehsan Abadi
- Department of Radiology, Duke University School of Medicine, Durham, North Carolina, USA
- Andrew D A Maidment
- Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
2
Abadi E, Barufaldi B, Lago M, Badal A, Mello-Thoms C, Bottenus N, Wangerin KA, Goldburgh M, Tarbox L, Beaucage-Gauvreau E, Frangi AF, Maidment A, Kinahan PE, Bosmans H, Samei E. Toward widespread use of virtual trials in medical imaging innovation and regulatory science. Med Phys 2024; 51:9394-9404. [PMID: 39369717] [PMCID: PMC11659034] [DOI: 10.1002/mp.17442]
Abstract
The rapid advancement in the field of medical imaging presents a challenge in keeping up to date with the necessary objective evaluations and optimizations for safe and effective use in clinical settings. These evaluations are traditionally done using clinical imaging trials, which, while effective, pose several limitations including high costs, ethical considerations for repetitive experiments, time constraints, and lack of ground truth. To tackle these issues, virtual trials (also known as in silico trials) have emerged as a promising alternative, using computational models of human subjects and imaging devices, along with observer models and analysis, to carry out experiments. To facilitate the widespread use of virtual trials within the medical imaging research community, a major need is to establish a common consensus framework that all can use. Based on the ongoing efforts of an AAPM Task Group (TG387), this article provides a comprehensive overview of the requirements for establishing virtual imaging trial frameworks, paving the way toward their widespread use within the medical imaging research community. These requirements include credibility, reproducibility, and accessibility. Credibility assessment involves verification, validation, uncertainty quantification, and sensitivity analysis, ensuring the accuracy and realism of computational models. A proper credibility assessment requires a clear context of use and the questions that the study is intended to objectively answer. For reproducibility and accessibility, this article highlights the need for detailed documentation, user-friendly software packages, and standard input/output formats. Challenges in data and software sharing, including proprietary data and inconsistent file formats, are discussed. Recommended solutions to enhance accessibility include containerized environments and data-sharing hubs, along with following standards such as CDISC (Clinical Data Interchange Standards Consortium).
By addressing challenges associated with credibility, reproducibility, and accessibility, virtual imaging trials can be positioned as a powerful and inclusive resource, advancing medical imaging innovation and regulatory science.
Affiliation(s)
- Ehsan Abadi
- Center for Virtual Imaging Trials, Carl E. Ravin Advanced Imaging Laboratories, Departments of Radiology and Electrical & Computer Engineering, Medical Physics Graduate Program, Duke University, Durham, North Carolina, USA
- Bruno Barufaldi
- Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Miguel Lago
- Division of Imaging, Diagnostics and Software Reliability, OSEL, CDRH, U.S. Food and Drug Administration, Silver Spring, Maryland, USA
- Andreu Badal
- Division of Imaging, Diagnostics and Software Reliability, OSEL, CDRH, U.S. Food and Drug Administration, Silver Spring, Maryland, USA
- Nick Bottenus
- Department of Mechanical Engineering, University of Colorado Boulder, Boulder, Colorado, USA
- Kristen A. Wangerin
- Research and Development, Pharmaceutical Diagnostics, GE HealthCare, Marlborough, Massachusetts, USA
- Lawrence Tarbox
- Department of Biomedical Informatics, College of Medicine, University of Arkansas for Medical Sciences, Little Rock, Arkansas, USA
- Erica Beaucage-Gauvreau
- Institute of Physics-based Modeling for in silico Health (iSi Health), KU Leuven, Leuven, Belgium
- Alejandro F. Frangi
- Christabel Pankhurst Institute, Division of Informatics, Imaging and Data Sciences, Department of Computer Science, University of Manchester, Manchester, UK
- Alan Turing Institute, British Library, London, UK
- Andrew Maidment
- Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Paul E. Kinahan
- Departments of Radiology, Bioengineering, and Physics, University of Washington, Seattle, Washington, USA
- Hilde Bosmans
- Departments of Radiology and Medical Radiation Physics, KU Leuven, Leuven, Belgium
- Ehsan Samei
- Center for Virtual Imaging Trials, Carl E. Ravin Advanced Imaging Laboratories, Departments of Radiology and Electrical & Computer Engineering, Medical Physics Graduate Program, Duke University, Durham, North Carolina, USA
3
Xue S, Fernández A, Carrasco M. Featural Representation and Internal Noise Underlie the Eccentricity Effect in Contrast Sensitivity. J Neurosci 2024; 44:e0743232023. [PMID: 38050093] [PMCID: PMC10860475] [DOI: 10.1523/jneurosci.0743-23.2023]
Abstract
Human visual performance for basic visual dimensions (e.g., contrast sensitivity and acuity) peaks at the fovea and decreases with eccentricity. This eccentricity effect is related to the larger visual cortical surface area corresponding to the fovea, but it is unknown whether differential feature tuning contributes to it. Here, we investigated two system-level computations underlying the eccentricity effect: featural representation (tuning) and internal noise. Observers (both sexes) detected a Gabor embedded in filtered white noise that appeared at the fovea or at one of four perifoveal locations. We used psychophysical reverse correlation to estimate the weights assigned by the visual system to a range of orientations and spatial frequencies (SFs) in noisy stimuli, which are conventionally interpreted as perceptual sensitivity to the corresponding features. We found higher sensitivity to task-relevant orientations and SFs at the fovea than at the perifovea, and no difference in selectivity for either orientation or SF. Concurrently, we measured response consistency using a double-pass method, which allowed us to infer the level of internal noise by implementing a noisy observer model. We found lower internal noise at the fovea than at the perifovea. Finally, individual variability in contrast sensitivity correlated with sensitivity to and selectivity for task-relevant features, as well as with internal noise. Moreover, the behavioral eccentricity effect mainly reflects the foveal advantage in orientation sensitivity compared with other computations. These findings suggest that the eccentricity effect stems from a better representation of task-relevant features and lower internal noise at the fovea than at the perifovea.
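The double-pass logic this abstract relies on can be illustrated with a tiny simulation: the same external (stimulus) noise is presented in both passes, so any disagreement between the observer's two responses must come from internal noise. A minimal sketch (signal strength, noise levels, and the yes/no decision rule are illustrative assumptions, not the study's parameters):

```python
import numpy as np

def double_pass_agreement(sigma_internal, n_trials=20000, seed=0):
    """Simulate a double-pass experiment for a simple noisy observer.

    The external noise sample is frozen across the two passes; only the
    internal noise is redrawn. Response agreement across passes falls as
    internal noise grows, which is how the double-pass method infers the
    internal-noise level.
    """
    rng = np.random.default_rng(seed)
    external = rng.normal(0.0, 1.0, n_trials)   # frozen stimulus noise
    signal = 0.5                                # fixed signal strength
    resp1 = signal + external + rng.normal(0, sigma_internal, n_trials) > 0
    resp2 = signal + external + rng.normal(0, sigma_internal, n_trials) > 0
    return (resp1 == resp2).mean()

# agreement drops monotonically as internal noise increases
for s in (0.0, 0.5, 1.0, 2.0):
    print(s, round(double_pass_agreement(s), 3))
```

With zero internal noise the two passes agree on every trial; fitting the observed agreement against curves like this one is what lets a noisy observer model recover the internal-noise level, here reported as lower at the fovea than at the perifovea.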
Affiliation(s)
- Shutian Xue
- Department of Psychology, New York University, New York, New York 10003
- Antonio Fernández
- Department of Psychology, New York University, New York, New York 10003
- Marisa Carrasco
- Department of Psychology, New York University, New York, New York 10003
- Center for Neural Science, New York University, New York, New York 10003
4
Klein DS, Lago MA, Abbey CK, Eckstein MP. A 2D Synthesized Image Improves the 3D Search for Foveated Visual Systems. IEEE Trans Med Imaging 2023; 42:2176-2188. [PMID: 37027767] [PMCID: PMC10476603] [DOI: 10.1109/tmi.2023.3246005]
Abstract
Current medical imaging increasingly relies on 3D volumetric data, making it difficult for radiologists to thoroughly search all regions of the volume. In some applications (e.g., digital breast tomosynthesis), the volumetric data are typically paired with a synthesized 2D image (2D-S) generated from the corresponding 3D volume. We investigate how this image pairing affects the search for spatially large and small signals. Observers searched for these signals in 3D volumes, in 2D-S images, and while viewing both. We hypothesize that lower spatial acuity in the observers' visual periphery hinders the search for the small signals in the 3D images. However, the inclusion of the 2D-S guides eye movements to suspicious locations, improving the observer's ability to find the signals in 3D. Behavioral results show that the 2D-S, used as an adjunct to the volumetric data, improves the localization and detection of the small (but not large) signal compared to 3D alone, with a concomitant reduction in search errors. To understand this process at a computational level, we implement a Foveated Search Model (FSM) that executes human eye movements and then processes points in the image with varying spatial detail based on their eccentricity from fixations. The FSM predicts human performance for both signals and captures the reduction in search errors when the 2D-S supplements the 3D search. Our experimental and modeling results delineate the utility of the 2D-S in 3D search: it reduces the detrimental impact of low-resolution peripheral processing by guiding attention to regions of interest, effectively reducing errors.
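The core ingredient of a foveated model like the FSM is a map of how much spatial detail survives at each image location for a given fixation. A minimal sketch of such a map, using an inverse-linear falloff with eccentricity as a common stand-in for cortical magnification (the function name, the pixels-per-degree conversion, and the half-resolution constant e2 are illustrative assumptions, not the paper's model):

```python
import numpy as np

def detail_map(shape, fixation, px_per_deg=30.0, e2=2.3):
    """Relative spatial detail retained at each pixel for one fixation.

    Uses an inverse-linear falloff 1 / (1 + e / e2), where e is retinal
    eccentricity in degrees and e2 is the eccentricity at which resolution
    halves. Values are 1.0 at the fixation point and decrease outward.
    """
    y, x = np.indices(shape)
    fy, fx = fixation
    ecc_deg = np.hypot(x - fx, y - fy) / px_per_deg  # eccentricity (deg)
    return 1.0 / (1.0 + ecc_deg / e2)

# detail map for a central fixation on a 512 x 512 image
m = detail_map((512, 512), fixation=(256, 256))
print(m[256, 256])          # full detail at fixation
print(round(float(m[256, 0]), 3))
```

Blurring or down-weighting image content according to such a map, fixation by fixation, is what lets a foveated model reproduce the small-signal misses in 3D search that the 2D-S helps correct.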
5
Drew T, Lavelle M, Kerr KF, Shucard H, Brunyé TT, Weaver DL, Elmore JG. More scanning, but not zooming, is associated with diagnostic accuracy in evaluating digital breast pathology slides. J Vis 2021; 21:7. [PMID: 34636845] [PMCID: PMC8525842] [DOI: 10.1167/jov.21.11.7]
Abstract
Diagnoses of medical images can invite strikingly diverse strategies for image navigation and visual search. In computed tomography screening for lung nodules, distinct strategies, termed scanning and drilling, relate to both radiologists' clinical experience and accuracy in lesion detection. Here, we examined associations between search patterns and accuracy for pathologists (N = 92) interpreting a diverse set of breast biopsy images. While changes in depth in volumetric images reveal new structures through movement in the z-plane, in digital pathology changes in depth are associated with increased magnification. Thus, "drilling" in radiology may be more appropriately termed "zooming" in pathology. We monitored eye movements and navigation through digital pathology slides to derive metrics of how quickly the pathologists moved through XY (scanning) and Z (zooming) space. Prior research on eye movements in depth has categorized clinicians as either "scanners" or "drillers." In contrast, we found no reliable categorical distinction in a clinician's tendency to scan or zoom while examining digital pathology slides. Thus, in the current work we treated scanning and zooming as continuous predictors rather than categorizing each clinician as either a "scanner" or a "zoomer." In contrast to prior work on volumetric chest images, we found significant associations between accuracy and scanning rate, but not zooming rate. These findings suggest fundamental differences in the relative value of information types and review behaviors across the two image formats. Our data suggest that pathologists gather critical information by scanning within a given plane of depth, whereas radiologists drill through depth to interrogate critical features.
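The continuous predictors described above can be derived from a viewport log. A minimal sketch, assuming a hypothetical log of timestamps, viewport centre coordinates, and magnification (the function name, log format, and log2-step zoom metric are all illustrative assumptions, not the study's instrumentation):

```python
import numpy as np

def review_rates(t, x, y, mag):
    """Scanning and zooming rates from a hypothetical slide-viewer log.

    t: timestamps (s); x, y: viewport centre in slide pixels at base
    magnification; mag: current magnification. Scanning rate is XY
    distance covered per second; zooming rate is magnification change,
    counted in log2 steps (doublings), per second.
    """
    t, x, y, mag = map(np.asarray, (t, x, y, mag))
    duration = t[-1] - t[0]
    scan = np.hypot(np.diff(x), np.diff(y)).sum() / duration
    zoom = np.abs(np.diff(np.log2(mag))).sum() / duration
    return scan, zoom

# toy log: mostly panning in XY, with one 2x -> 4x zoom step
t = [0.0, 1.0, 2.0, 3.0, 4.0]
x = [0, 300, 600, 600, 900]
y = [0, 0, 0, 0, 0]
mag = [2, 2, 2, 4, 4]
scan_rate, zoom_rate = review_rates(t, x, y, mag)
print(scan_rate, zoom_rate)
```

Treating these two rates as continuous regressors against diagnostic accuracy, rather than splitting clinicians into "scanners" and "zoomers," matches the analysis strategy the abstract describes.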
Affiliation(s)
- Trafton Drew
- Department of Psychology, University of Utah, Salt Lake City, UT, USA
- Mark Lavelle
- Department of Psychology, University of Utah, Salt Lake City, UT, USA
- Kathleen F Kerr
- Department of Biostatistics, University of Washington, Seattle, WA, USA
- Hannah Shucard
- Department of Biostatistics, University of Washington, Seattle, WA, USA
- Tad T Brunyé
- Department of Psychology, Tufts University, Medford, MA, USA
- Donald L Weaver
- Department of Pathology & Laboratory Medicine, University of Vermont, Burlington, VT, USA
- Joann G Elmore
- Department of Medicine, David Geffen School of Medicine at UCLA, Los Angeles, CA, USA
6
Lago MA, Abbey CK, Eckstein MP. Medical image quality metrics for foveated model observers. J Med Imaging (Bellingham) 2021; 8:041209. [PMID: 34423070] [DOI: 10.1117/1.jmi.8.4.041209]
Abstract
Purpose: A recently proposed model observer mimics the foveated nature of the human visual system by processing the entire image with varying spatial detail, executing eye movements, and scrolling through slices. The model can predict how human search performance changes with signal type and modality (2D versus 3D), yet its implementation is computationally expensive and time-consuming. Here, we evaluate various image quality metrics using extensions of the classic index of detectability expression and assess foveated model observers for search tasks. Approach: We evaluated foveated extensions of a channelized Hotelling model and a nonprewhitening matched filter model with an eye filter. The proposed methods involve calculating a model index of detectability (d′) for each retinal eccentricity and combining these with a weighting function into a single detectability metric. We assessed different versions of the weighting function that varied in the required measurements of the human observers' search (no measurements, eye movement patterns, size of the image, and median search times). Results: We show that the index of detectability across eccentricities, weighted using the eye movement patterns of observers, best predicted human 2D versus 3D search performance for a small microcalcification-like signal and a larger mass-like signal. The metric with a weighting function based on median search times was the second best at predicting the human results. Conclusions: The findings provide a set of model observer tools to evaluate image quality in the early stages of imaging system evaluation or design without implementing the more computationally complex foveated search model.
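The combination step this abstract describes, per-eccentricity d′ values merged by a weighting function into one metric, can be sketched in its simplest form as a weighted mean (the function name, the eccentricity bins, and the toy weights are illustrative assumptions; the paper evaluates several weighting functions, including fixation-pattern- and search-time-based ones):

```python
import numpy as np

def weighted_detectability(d_ecc, weights):
    """Combine per-eccentricity detectability into a single index.

    d_ecc: model d' at each retinal eccentricity bin; weights: relative
    importance of each bin, e.g. the fraction of fixations whose distance
    to the signal fell in that bin. Returns the weighted mean d'.
    """
    d = np.asarray(d_ecc, dtype=float)
    w = np.asarray(weights, dtype=float)
    return float((w * d).sum() / w.sum())

# toy values: d' falls with eccentricity; weights from fixation patterns
d_per_ecc = [3.0, 2.0, 1.0, 0.5]        # e.g. 0, 5, 10, 15 deg bins
fixation_weights = [0.4, 0.3, 0.2, 0.1]
print(weighted_detectability(d_per_ecc, fixation_weights))
```

Weights concentrated near the fovea (as when observers fixate close to the signal) pull the combined index toward the foveal d′, which is how an eye-movement-derived weighting can track search performance without running the full foveated search model.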
Affiliation(s)
- Miguel A Lago
- University of California at Santa Barbara, Department of Psychological and Brain Sciences, Santa Barbara, California, United States
- Craig K Abbey
- University of California at Santa Barbara, Department of Psychological and Brain Sciences, Santa Barbara, California, United States
- Miguel P Eckstein
- University of California at Santa Barbara, Department of Psychological and Brain Sciences, Santa Barbara, California, United States
- University of California at Santa Barbara, Department of Electrical and Computer Engineering, Santa Barbara, California, United States