1
Huang Y, Leotta NJ, Hirsch L, Lo Gullo R, Hughes M, Reiner J, Saphier NB, Myers KS, Panigrahi B, Ambinder E, Di Carlo P, Grimm LJ, Lowell D, Yoon S, Ghate SV, Parra LC, Sutton EJ. Cross-site Validation of AI Segmentation and Harmonization in Breast MRI. J Imaging Inform Med 2025; 38:1642-1652. [PMID: 39320547] [DOI: 10.1007/s10278-024-01266-9]
Abstract
This work aims to perform a cross-site validation of automated segmentation for breast cancers in MRI and to compare the performance to radiologists. A three-dimensional (3D) U-Net was trained to segment cancers in dynamic contrast-enhanced axial MRIs using a large dataset from Site 1 (n = 15,266; 449 malignant and 14,817 benign). Performance was validated on site-specific test data from this and two additional sites, and common publicly available testing data. Four radiologists from each of the three clinical sites provided two-dimensional (2D) segmentations as ground truth. Segmentation performance did not differ between the network and radiologists on the test data from Sites 1 and 2 or the common public data (median Dice score Site 1, network 0.86 vs. radiologist 0.85, n = 114; Site 2, 0.91 vs. 0.91, n = 50; common: 0.93 vs. 0.90). For Site 3, an affine input layer was fine-tuned using segmentation labels, resulting in comparable performance between the network and radiologist (0.88 vs. 0.89, n = 42). Radiologist performance differed on the common test data, and the network numerically outperformed 11 of the 12 radiologists (median Dice: 0.85-0.94, n = 20). In conclusion, a deep network with a novel supervised harmonization technique matches radiologists' performance in MRI tumor segmentation across clinical sites. We make code and weights publicly available to promote reproducible AI in radiology.
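The supervised harmonization described above amounts to a small trainable affine transform placed in front of a frozen pre-trained network. Below is a minimal, assumption-laden sketch of that idea in PyTorch, together with the Dice score used for evaluation; it is illustrative only, not the authors' released code (their public repository should be consulted for the actual implementation).

```python
# A minimal sketch, assuming PyTorch, of the supervised harmonization idea:
# a small trainable affine layer is prepended to a frozen, pre-trained
# segmentation network and fine-tuned with a few labeled cases from the
# new site. Illustrative reconstruction, not the authors' released code.
import torch
import torch.nn as nn

class AffineHarmonizer(nn.Module):
    """Per-channel affine intensity transform y = scale * x + shift."""
    def __init__(self, channels: int = 1):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(1, channels, 1, 1, 1))
        self.shift = nn.Parameter(torch.zeros(1, channels, 1, 1, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.scale * x + self.shift

def soft_dice(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Soft Dice score between a probability map and a binary mask."""
    inter = (pred * target).sum()
    return (2 * inter + eps) / (pred.sum() + target.sum() + eps)

# Hypothetical fine-tuning loop: only the affine layer is optimized.
# unet = load_pretrained_unet()         # hypothetical loader
# for p in unet.parameters():
#     p.requires_grad = False           # freeze the 3D U-Net
# harmonizer = AffineHarmonizer()
# opt = torch.optim.Adam(harmonizer.parameters(), lr=1e-3)
# loss = 1 - soft_dice(torch.sigmoid(unet(harmonizer(x))), y)
```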
Affiliation(s)
- Yu Huang
- Department of Biomedical Engineering, The City College of the City University of New York, 160 Convent Ave, New York, NY, 10031, USA
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, 10065, USA
- Nicholas J Leotta
- Department of Biomedical Engineering, The City College of the City University of New York, 160 Convent Ave, New York, NY, 10031, USA
- Lukas Hirsch
- Department of Biomedical Engineering, The City College of the City University of New York, 160 Convent Ave, New York, NY, 10031, USA
- Roberto Lo Gullo
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, 10065, USA
- Mary Hughes
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, 10065, USA
- Jeffrey Reiner
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, 10065, USA
- Nicole B Saphier
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, 10065, USA
- Kelly S Myers
- Department of Radiology and Radiological Science, Johns Hopkins Medicine, Baltimore, MD, 21224, USA
- Babita Panigrahi
- Department of Radiology and Radiological Science, Johns Hopkins Medicine, Baltimore, MD, 21224, USA
- Emily Ambinder
- Department of Radiology and Radiological Science, Johns Hopkins Medicine, Baltimore, MD, 21224, USA
- Philip Di Carlo
- Department of Radiology and Radiological Science, Johns Hopkins Medicine, Baltimore, MD, 21224, USA
- Lars J Grimm
- Department of Radiology, Duke University School of Medicine, Durham, NC, 27710, USA
- Dorothy Lowell
- Department of Radiology, Duke University School of Medicine, Durham, NC, 27710, USA
- Sora Yoon
- Department of Radiology, Duke University School of Medicine, Durham, NC, 27710, USA
- Sujata V Ghate
- Department of Radiology, Duke University School of Medicine, Durham, NC, 27710, USA
- Lucas C Parra
- Department of Biomedical Engineering, The City College of the City University of New York, 160 Convent Ave, New York, NY, 10031, USA
- Elizabeth J Sutton
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, 10065, USA
2
Pang Y, Huang T, Wang Q. AI and Data-Driven Advancements in Industry 4.0. Sensors (Basel) 2025; 25:2249. [PMID: 40218762] [PMCID: PMC11991204] [DOI: 10.3390/s25072249]
Abstract
Industrial artificial intelligence is rapidly evolving, driven by an unprecedented explosion of diverse data modalities [...].
Affiliation(s)
- Yan Pang
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Teng Huang
- School of Artificial Intelligence, Guangzhou University, Guangzhou 510700, China
- Qiong Wang
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
3
Hirsch L, Huang Y, Makse HA, Martinez DF, Hughes M, Eskreis-Winkler S, Pinker K, Morris EA, Parra LC, Sutton EJ. Early Detection of Breast Cancer in MRI Using AI. Acad Radiol 2025; 32:1218-1225. [PMID: 39482209] [PMCID: PMC11875922] [DOI: 10.1016/j.acra.2024.10.014]
Abstract
RATIONALE AND OBJECTIVES To develop and evaluate an AI algorithm that detects breast cancer in MRI scans up to one year before radiologists typically identify it, potentially enhancing early detection in high-risk women. MATERIALS AND METHODS A convolutional neural network (CNN) AI model, pre-trained on breast MRI data, was fine-tuned using a retrospective dataset of 3029 MRI scans from 910 patients, which contained 115 cancers diagnosed within one year of a negative MRI. The model aimed to identify these cancers, with the goal of predicting cancer development up to one year in advance. The network was fine-tuned and tested with 10-fold cross-validation. Mean age of patients was 52 years (range, 18-88 years), with an average follow-up of 4.3 years (range, 1-12 years). RESULTS The AI detected cancers one year earlier with an area under the ROC curve of 0.72 (0.67-0.76). Retrospective review by a radiologist of the top 10% of MRIs ranked highest-risk by the AI could have increased early detection by up to 30% (35/115, CI: 22.2-39.7%). A radiologist identified a visual correlate to the biopsy-proven cancer in 83 of the 115 prior-year MRIs (83/115, CI: 62.1-79.4%). The AI algorithm identified the anatomic region where cancer would later be detected in 66 cases (66/115, CI: 47.8-66.5%), with both agreeing in 54 cases (54/115, CI: 37.5-56.4%). CONCLUSION This novel AI-aided re-evaluation of "benign" breasts shows promise for improving early breast cancer detection with MRI. As datasets grow and image quality improves, this approach is expected to become even more impactful.
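For orientation, the two headline numbers above reduce to a few lines of evaluation code. The sketch below uses synthetic stand-in scores to illustrate the computation of ROC AUC and the sensitivity obtained when only the top decile of model-ranked exams is re-reviewed.

```python
# A toy sketch, with synthetic stand-in data (not study data): ROC AUC over
# per-exam risk scores, and the sensitivity achieved if a radiologist
# re-reviewed only the top 10% of exams ranked by the model.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 500)            # 1 = cancer diagnosed within a year
scores = 0.3 * labels + rng.random(500)     # stand-in model risk scores

auc = roc_auc_score(labels, scores)

k = int(0.10 * len(scores))                 # top 10% highest-risk exams
top_k = np.argsort(scores)[::-1][:k]
sensitivity = labels[top_k].sum() / labels.sum()
print(f"AUC = {auc:.2f}, sensitivity in top decile = {sensitivity:.2f}")
```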
Affiliation(s)
- Lukas Hirsch
- City College of New York, 160 Convent Ave, New York, New York 10031, USA
- Yu Huang
- City College of New York, 160 Convent Ave, New York, New York 10031, USA
- Hernan A Makse
- City College of New York, 160 Convent Ave, New York, New York 10031, USA
- Danny F Martinez
- Memorial Sloan Kettering Cancer Center, 300 E 66th St, Floors 1-4, New York, New York 10065, USA
- Mary Hughes
- Memorial Sloan Kettering Cancer Center, 300 E 66th St, Floors 1-4, New York, New York 10065, USA
- Sarah Eskreis-Winkler
- Memorial Sloan Kettering Cancer Center, 300 E 66th St, Floors 1-4, New York, New York 10065, USA
- Katja Pinker
- Memorial Sloan Kettering Cancer Center, 300 E 66th St, Floors 1-4, New York, New York 10065, USA
- Elizabeth A Morris
- Memorial Sloan Kettering Cancer Center, 300 E 66th St, Floors 1-4, New York, New York 10065, USA; University of California, Davis, 1 Shields Ave, Davis, California 95616, USA
- Lucas C Parra
- City College of New York, 160 Convent Ave, New York, New York 10031, USA
- Elizabeth J Sutton
- Memorial Sloan Kettering Cancer Center, 300 E 66th St, Floors 1-4, New York, New York 10065, USA
4
Weitz M, Pfeiffer JR, Patel S, Biancalana M, Pekis A, Kannan V, Kaklamanos E, Parker A, Bucksot JE, Romera JR, Alvin R, Zhang Y, Stefka AT, Lopez-Ramos D, Peterson JR, Antony AK, Zamora KW, Woodard S. Performance of an AI-powered visualization software platform for precision surgery in breast cancer patients. NPJ Breast Cancer 2024; 10:98. [PMID: 39543194] [PMCID: PMC11564706] [DOI: 10.1038/s41523-024-00696-6]
Abstract
Surgery remains the primary treatment modality in the management of early-stage invasive breast cancer. Artificial intelligence (AI)-powered visualization platforms offer the compelling potential to aid surgeons in evaluating the tumor's location and morphology within the breast and accordingly optimize their surgical approach. We sought to validate an AI platform that employs dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) to render three-dimensional (3D) representations of the tumor and 5 additional chest tissues, offering clear visualizations as well as functionalities for quantifying tumor morphology, tumor-to-landmark structure distances, excision volumes, and approximate surgical margins. This retrospective study assessed the visualization platform's performance on 100 cases with ground-truth labels vetted by 2 breast-specialized radiologists. We assessed features including automatic AI-generated clinical metrics (e.g., tumor dimensions) as well as visualization tools including convex hulls at desired margins around the tumor to help visualize lumpectomy volume. The statistical performance of the platform's automated features was robust and within the range of inter-radiologist variability. These detailed 3D tumor and surrounding multi-tissue depictions offer both qualitative and quantitative comprehension of cancer topology and may aid in formulating an optimal surgical approach for breast cancer treatment. We further establish the framework for broader data integration into the platform to enhance precision cancer care.
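One of the platform's quantitative features, the convex hull at a desired margin around the tumor, has a compact geometric formulation. The sketch below illustrates the concept with SciPy on a toy mask; the voxel spacing, margin handling, and mask are assumptions, not the platform's implementation.

```python
# A minimal sketch, under assumed isotropic voxel spacing and a toy mask,
# of a convex hull at a chosen margin around a 3D tumor mask, whose volume
# approximates an excision volume. Concept illustration only.
import numpy as np
from scipy.ndimage import binary_dilation
from scipy.spatial import ConvexHull

spacing_mm = 1.0                                # assumed isotropic voxel size
mask = np.zeros((64, 64, 64), dtype=bool)
mask[28:36, 28:36, 28:36] = True                # toy "tumor"

margin_mm = 5
expanded = binary_dilation(mask, iterations=int(round(margin_mm / spacing_mm)))

points = np.argwhere(expanded) * spacing_mm     # voxel indices -> mm coordinates
hull = ConvexHull(points)
print(f"approximate excision volume: {hull.volume / 1000:.1f} mL")
```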
Affiliation(s)
- Kathryn W Zamora
- Department of Radiology, University of Alabama at Birmingham School of Medicine, Birmingham, AL, USA
- Stefanie Woodard
- Department of Radiology, University of Alabama at Birmingham School of Medicine, Birmingham, AL, USA
5
Patra A, Biswas P, Behera SK, Barpanda NK, Sethy PK, Nanthaamornphong A. Transformative insights: Image-based breast cancer detection and severity assessment through advanced AI techniques. J Intell Syst 2024; 33. [DOI: 10.1515/jisys-2024-0172]
Abstract
In the realm of image-based breast cancer detection and severity assessment, this study delves into the transformative potential of advanced artificial intelligence (AI) techniques. By investigating image processing, machine learning (ML), and deep learning (DL), the research illuminates their combined impact on breast cancer diagnosis, offering insights into early identification and precise characterization of cancers. With a foundation in 125 research articles, this article presents a comprehensive overview of the current state of image-based breast cancer detection. Synthesizing the transformative role of AI, including image processing, ML, and DL, the review explores how these technologies collectively reshape the landscape of breast cancer diagnosis and severity assessment. An essential aspect highlighted is the synergy between advanced image processing methods and ML algorithms. This combination facilitates the automated examination of medical images, which is crucial for detecting minute anomalies indicative of breast cancer. The use of complex neural networks for feature extraction and pattern recognition in DL models further enhances diagnostic precision. Beyond diagnostic improvements, the review underscores the substantial influence of AI-driven methods on breast cancer treatment. The integration of AI not only increases diagnostic precision but also opens avenues for individualized treatment planning, marking a paradigm shift toward personalized medicine in breast cancer care. However, challenges persist, with issues related to data quality and interpretability requiring continued research efforts. Looking forward, the review envisions future directions for breast cancer identification and diagnosis, emphasizing the adoption of explainable AI techniques and global collaboration for data sharing. These initiatives promise to propel the field into a new era characterized by enhanced efficiency and precision in breast cancer care.
Affiliation(s)
- Ankita Patra
- Department of Electronics, Sambalpur University, Burla, Odisha, 768019, India
- Preesat Biswas
- Department of Electronics and Telecommunication Engineering, GEC Jagdalpur, C.G., 494001, India
- Santi Kumari Behera
- Department of Computer Science and Engineering, VSSUT, Burla, Odisha, 768018, India
- Prabira Kumar Sethy
- Department of Electronics, Sambalpur University, Burla, Odisha, 768019, India
- Aziz Nanthaamornphong
- College of Computing, Prince of Songkla University, Phuket Campus, Phuket 83120, Thailand
6
Wang L, Wang L, Kuai Z, Tang L, Ou Y, Wu M, Shi T, Ye C, Zhu Y. Progressive Dual Priori Network for Generalized Breast Tumor Segmentation. IEEE J Biomed Health Inform 2024; 28:5459-5472. [PMID: 38843066] [DOI: 10.1109/jbhi.2024.3410274]
Abstract
To promote the generalization ability of breast tumor segmentation models and to improve segmentation performance for breast tumors of small size, low contrast, and irregular shape, we propose a progressive dual priori network (PDPNet) to segment breast tumors from dynamic contrast-enhanced magnetic resonance images (DCE-MRI) acquired at different centers. PDPNet first crops tumor regions with a coarse-segmentation-based localization module; the breast tumor mask is then progressively refined using weak semantic priors and cross-scale correlation priors. To validate the effectiveness of PDPNet, we compared it with several state-of-the-art methods on multi-center datasets. The results showed that, compared with the second-best method, the DSC and HD95 of PDPNet improved by at least 5.13% and 7.58%, respectively, on the multi-center test sets. In addition, ablations demonstrated that the proposed localization module decreases the influence of normal tissues and therefore improves the generalization ability of the model. The weak semantic priors allow the model to focus on tumor regions so that small and low-contrast tumors are not missed, while the cross-scale correlation priors promote shape awareness for irregular tumors. Integrating them in a unified framework thus improved multi-center breast tumor segmentation performance.
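For reference, the two reported metrics have compact mask-based definitions; the sketch below implements them with SciPy distance transforms (standard definitions assumed; surface-based HD95 variants differ slightly, and this is not PDPNet code).

```python
# Generic, mask-based implementations (assumed standard definitions) of the
# Dice similarity coefficient (DSC) and the 95th-percentile Hausdorff
# distance (HD95) between two boolean 3D masks.
import numpy as np
from scipy.ndimage import distance_transform_edt

def dsc(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two boolean masks."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hd95(a: np.ndarray, b: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """HD95 between the foreground voxel sets of two boolean masks."""
    # Distance of every voxel to the nearest foreground voxel of each mask.
    d_to_a = distance_transform_edt(~a, sampling=spacing)
    d_to_b = distance_transform_edt(~b, sampling=spacing)
    # Directed distances from b's voxels to a, and from a's voxels to b.
    return float(np.percentile(np.concatenate([d_to_a[b], d_to_b[a]]), 95))
```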
7
Park GE, Kim SH, Nam Y, Kang J, Park M, Kang BJ. 3D Breast Cancer Segmentation in DCE-MRI Using Deep Learning With Weak Annotation. J Magn Reson Imaging 2024; 59:2252-2262. [PMID: 37596823] [DOI: 10.1002/jmri.28960]
Abstract
BACKGROUND Deep learning models require large-scale training to perform confidently, but obtaining annotated datasets in medical imaging is challenging. Weak annotation has emerged as a way to save time and effort. PURPOSE To develop a deep learning model for 3D breast cancer segmentation in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) using weak annotation with reliable performance. STUDY TYPE Retrospective. POPULATION Seven hundred and thirty-six women with breast cancer from a single institution, divided into the development (N = 544) and test (N = 192) datasets. FIELD STRENGTH/SEQUENCE 3.0-T, 3D fat-saturated gradient-echo axial T1-weighted FLASH volumetric interpolated breath-hold examination (VIBE) sequences. ASSESSMENT Two radiologists performed weak annotation of the ground truth using bounding boxes. Based on this, the ground truth annotation was completed through automatic and manual correction. The deep learning model, a 3D U-Net transformer (UNETR), was trained with this annotated dataset. The segmentation results of the test set were analyzed quantitatively and qualitatively, with the regions divided into the whole breast and the region of interest (ROI) within the bounding box. STATISTICAL TESTS As a quantitative method, we used the Dice similarity coefficient to evaluate the segmentation result. The volume correlation with the ground truth was evaluated with the Spearman correlation coefficient. Qualitatively, three readers independently evaluated the visual score on a four-point scale. A P-value <0.05 was considered statistically significant. RESULTS The deep learning model we developed achieved median Dice similarity scores of 0.75 and 0.89 for the whole breast and the ROI, respectively. The volume correlation coefficients with respect to the ground truth volume were 0.82 and 0.86 for the whole breast and the ROI, respectively. The mean visual score, as evaluated by three readers, was 3.4. DATA CONCLUSION The proposed deep learning model with weak annotation may show good performance for 3D segmentation of breast cancer using DCE-MRI. LEVEL OF EVIDENCE 3 TECHNICAL EFFICACY: Stage 2.
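The volume-agreement statistic above is straightforward to reproduce in outline; the sketch below computes the Spearman correlation between predicted and ground-truth volumes on synthetic stand-in data.

```python
# A toy sketch, with synthetic stand-in volumes (not study data), of the
# volume-agreement statistic: Spearman correlation between predicted and
# ground-truth tumor volumes across a test set.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
gt_vol = rng.lognormal(mean=2.0, sigma=0.8, size=192)      # mL, toy data
pred_vol = gt_vol * rng.normal(1.0, 0.15, size=192)        # noisy predictions

rho, p = spearmanr(pred_vol, gt_vol)
print(f"Spearman rho = {rho:.2f} (p = {p:.1e})")
```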
Affiliation(s)
- Ga Eun Park
- Department of Radiology, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
- Sung Hun Kim
- Department of Radiology, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
- Yoonho Nam
- Division of Biomedical Engineering, Hankuk University of Foreign Studies, Yongin, Republic of Korea
- Junghwa Kang
- Division of Biomedical Engineering, Hankuk University of Foreign Studies, Yongin, Republic of Korea
- Minjeong Park
- Division of Biomedical Engineering, Hankuk University of Foreign Studies, Yongin, Republic of Korea
- Bong Joo Kang
- Department of Radiology, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
8
Wyatt CR, Huang W. Editorial for "Deep Learning-Based Segmentation of Locally Advanced Breast Cancer on MRI in Relation to Residual Cancer Burden: A Multi-Institutional Cohort Study". J Magn Reson Imaging 2023; 58:1750-1751. [PMID: 36939778] [DOI: 10.1002/jmri.28680]
Affiliation(s)
- Cory R Wyatt
- Department of Diagnostic Radiology, Oregon Health & Science University, Portland, Oregon, USA
- Advanced Imaging Research Center, Oregon Health & Science University, Portland, Oregon, USA
- Wei Huang
- Advanced Imaging Research Center, Oregon Health & Science University, Portland, Oregon, USA
9
Ostmeier S, Axelrod B, Verhaaren BFJ, Christensen S, Mahammedi A, Liu Y, Pulli B, Li LJ, Zaharchuk G, Heit JJ. Non-inferiority of deep learning ischemic stroke segmentation on non-contrast CT within 16-hours compared to expert neuroradiologists. Sci Rep 2023; 13:16153. [PMID: 37752162] [PMCID: PMC10522706] [DOI: 10.1038/s41598-023-42961-x]
Abstract
We determined whether a convolutional neural network (CNN) deep learning model can accurately segment acute ischemic changes on non-contrast CT compared to neuroradiologists. Non-contrast CT (NCCT) examinations from 232 acute ischemic stroke patients who were enrolled in the DEFUSE 3 trial were included in this study. Three experienced neuroradiologists independently segmented the hypodensity that reflected the ischemic core on each scan. The neuroradiologist with the most experience (expert A) served as the ground truth for deep learning model training. Two additional neuroradiologists' (experts B and C) segmentations were used for data testing. The 232 studies were randomly split into training and test sets. The training set was further randomly divided into 5 folds with training and validation sets. A 3-dimensional CNN architecture was trained and optimized to predict the segmentations of expert A from NCCT. The performance of the model was assessed using a set of volume, overlap, and distance metrics with non-inferiority thresholds of 20%, 3 ml, and 3 mm, respectively. The optimized model trained on expert A was compared to test experts B and C. We used a one-sided Wilcoxon signed-rank test to test for non-inferiority of the model-expert agreement compared to the inter-expert agreement. The final model for the ischemic core segmentation task reached 0.46 ± 0.09 surface Dice at 5 mm tolerance and 0.47 ± 0.13 Dice when trained on expert A. Compared to the two test neuroradiologists, the model-expert agreement was non-inferior to the inter-expert agreement. In conclusion, the CNN accurately delineates the hypodense ischemic core on NCCT in acute ischemic stroke patients with an accuracy comparable to neuroradiologists.
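The non-inferiority analysis above translates into a short statistical recipe: shift the paired agreement differences by the chosen margin and apply a one-sided Wilcoxon signed-rank test. The sketch below illustrates this with toy numbers; it is not the authors' exact analysis code.

```python
# A hedged sketch, with toy numbers, of a one-sided Wilcoxon signed-rank
# non-inferiority test: model-vs-expert agreement is non-inferior to
# inter-expert agreement if the margin-shifted paired difference is
# significantly greater than zero.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(2)
inter_expert = rng.normal(0.45, 0.12, 60)      # toy Dice, expert B vs expert C
model_expert = rng.normal(0.47, 0.12, 60)      # toy Dice, model vs expert

margin = 0.20 * inter_expert.mean()            # 20% non-inferiority margin
shifted_diff = model_expert - (inter_expert - margin)
stat, p = wilcoxon(shifted_diff, alternative="greater")
print(f"one-sided p = {p:.3f} (p < 0.05 suggests non-inferiority)")
```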
Affiliation(s)
- Brian Axelrod
- Department of Computer Science, Stanford University, Stanford, USA
- Li-Jia Li
- Stanford School of Medicine, Stanford, USA
10
Müller-Franzes G, Müller-Franzes F, Huck L, Raaff V, Kemmer E, Khader F, Arasteh ST, Lemainque T, Kather JN, Nebelung S, Kuhl C, Truhn D. Fibroglandular tissue segmentation in breast MRI using vision transformers: a multi-institutional evaluation. Sci Rep 2023; 13:14207. [PMID: 37648728] [PMCID: PMC10468506] [DOI: 10.1038/s41598-023-41331-x]
Abstract
Accurate and automatic segmentation of fibroglandular tissue in breast MRI screening is essential for the quantification of breast density and background parenchymal enhancement. In this retrospective study, we developed and evaluated a transformer-based neural network for breast segmentation (TraBS) in multi-institutional MRI data, and compared its performance to the well-established convolutional neural network nnUNet. TraBS and nnUNet were trained and tested on 200 internal and 40 external breast MRI examinations using manual segmentations generated by experienced human readers. Segmentation performance was assessed in terms of the Dice score and the average symmetric surface distance. The Dice score for nnUNet was lower than for TraBS on the internal test set (0.909 ± 0.069 versus 0.916 ± 0.067, P < 0.001) and on the external test set (0.824 ± 0.144 versus 0.864 ± 0.081, P = 0.004). Moreover, the average symmetric surface distance was higher (i.e., worse) for nnUNet than for TraBS on the internal (0.657 ± 2.856 versus 0.548 ± 2.195, P = 0.001) and on the external test set (0.727 ± 0.620 versus 0.584 ± 0.413, P = 0.03). Our study demonstrates that transformer-based networks improve the quality of fibroglandular tissue segmentation in breast MRI compared to convolution-based models like nnUNet. These findings might help to enhance the accuracy of breast density and parenchymal enhancement quantification in breast MRI screening.
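For context, the average symmetric surface distance used above has a compact mask-based implementation; the sketch below follows the standard definition (assumed, not the TraBS code).

```python
# A generic implementation (assumed standard definition) of the average
# symmetric surface distance (ASSD) between two boolean 3D masks.
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def surface(mask: np.ndarray) -> np.ndarray:
    """Boolean map of the mask's surface voxels."""
    return mask & ~binary_erosion(mask)

def assd(a: np.ndarray, b: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    sa, sb = surface(a), surface(b)
    # Distance of every voxel to the nearest surface voxel of each mask.
    d_to_a = distance_transform_edt(~sa, sampling=spacing)
    d_to_b = distance_transform_edt(~sb, sampling=spacing)
    # Mean over both directed surface-to-surface distance sets.
    return float(np.concatenate([d_to_b[sa], d_to_a[sb]]).mean())
```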
Affiliation(s)
- Gustav Müller-Franzes
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH, Aachen, Germany
- Fritz Müller-Franzes
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH, Aachen, Germany
- Luisa Huck
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH, Aachen, Germany
- Vanessa Raaff
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH, Aachen, Germany
- Eva Kemmer
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH, Aachen, Germany
- Firas Khader
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH, Aachen, Germany
- Soroosh Tayebi Arasteh
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH, Aachen, Germany
- Teresa Lemainque
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH, Aachen, Germany
- Jakob Nikolas Kather
- Else Kroener Fresenius Center for Digital Health, Technical University, Dresden, Germany
- Department of Medicine III, University Hospital RWTH, Aachen, Germany
- Sven Nebelung
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH, Aachen, Germany
- Christiane Kuhl
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH, Aachen, Germany
- Daniel Truhn
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH, Aachen, Germany
11
Kokalj Ž, Džeroski S, Šprajc I, Štajdohar J, Draksler A, Somrak M. Machine learning-ready remote sensing data for Maya archaeology. Sci Data 2023; 10:558. [PMID: 37612295] [PMCID: PMC10447422] [DOI: 10.1038/s41597-023-02455-x]
Abstract
In our study, we set out to collect a multimodal annotated dataset for remote sensing of Maya archaeology that is suitable for deep learning. The dataset covers the area around Chactún, one of the largest ancient Maya urban centres in the central Yucatán Peninsula. The dataset includes five types of data records: raster visualisations and a canopy height model from airborne laser scanning (ALS) data, Sentinel-1 and Sentinel-2 satellite data, and manual data annotations. The manual annotations (used as binary masks) represent three different types of ancient Maya structures (class labels: buildings, platforms, and aguadas - artificial reservoirs) within the study area, their exact locations, and boundaries. The dataset is ready for use with machine learning, including convolutional neural networks (CNNs), for object recognition, object localization (detection), and semantic segmentation. We provide this dataset to help more research teams develop their own computer vision models for investigations of Maya archaeology or improve existing ones.
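As a sketch of what "machine learning-ready" means in practice, the snippet below pairs co-registered raster channels (ALS visualisations plus Sentinel bands) with a binary annotation mask as one training example. File names and the use of rasterio are hypothetical, for illustration only, and are not part of the dataset's documentation.

```python
# A hypothetical loading sketch: co-registered rasters are stacked into one
# input tensor and paired with a binary structure mask for segmentation
# training. Paths and the rasterio dependency are assumptions.
import numpy as np
import rasterio

def load_training_pair(als_path: str, s2_path: str, mask_path: str):
    with rasterio.open(als_path) as src:
        als = src.read()                 # (bands, H, W) ALS visualisations
    with rasterio.open(s2_path) as src:
        s2 = src.read()                  # (bands, H, W) Sentinel-2 bands
    with rasterio.open(mask_path) as src:
        mask = src.read(1) > 0           # (H, W) binary structure mask
    x = np.concatenate([als, s2], axis=0).astype(np.float32)
    return x, mask.astype(np.int64)
```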
Affiliation(s)
- Žiga Kokalj
- Research Centre of the Slovenian Academy of Sciences and Arts (ZRC SAZU), Novi trg 2, 1000, Ljubljana, Slovenia
- Sašo Džeroski
- Information and Communication Technologies, Jožef Stefan International Postgraduate School, Jamova cesta 39, 1000, Ljubljana, Slovenia
- Jožef Stefan Institute, Jamova cesta 39, 1000, Ljubljana, Slovenia
- Ivan Šprajc
- Research Centre of the Slovenian Academy of Sciences and Arts (ZRC SAZU), Novi trg 2, 1000, Ljubljana, Slovenia
- Jasmina Štajdohar
- Research Centre of the Slovenian Academy of Sciences and Arts (ZRC SAZU), Novi trg 2, 1000, Ljubljana, Slovenia
- Andrej Draksler
- Research Centre of the Slovenian Academy of Sciences and Arts (ZRC SAZU), Novi trg 2, 1000, Ljubljana, Slovenia
- Maja Somrak
- Research Centre of the Slovenian Academy of Sciences and Arts (ZRC SAZU), Novi trg 2, 1000, Ljubljana, Slovenia
- Information and Communication Technologies, Jožef Stefan International Postgraduate School, Jamova cesta 39, 1000, Ljubljana, Slovenia
12
Salih M, Austin C, Warty RR, Tiktin C, Rolnik DL, Momeni M, Rezatofighi H, Reddy S, Smith V, Vollenhoven B, Horta F. Embryo selection through artificial intelligence versus embryologists: a systematic review. Hum Reprod Open 2023; 2023:hoad031. [PMID: 37588797] [PMCID: PMC10426717] [DOI: 10.1093/hropen/hoad031]
Abstract
STUDY QUESTION What is the present performance of artificial intelligence (AI) decision support during embryo selection compared to standard embryo selection by embryologists? SUMMARY ANSWER AI consistently outperformed the clinical teams in all the studies focused on embryo morphology and clinical outcome prediction during embryo selection assessment. WHAT IS KNOWN ALREADY The ART success rate is ∼30%, with a worrying trend of increasing female age correlating with considerably worse results. As such, there have been ongoing efforts to address this low success rate through the development of new technologies. With the advent of AI, there is potential for machine learning to be applied in such a manner that areas limited by human subjectivity, such as embryo selection, can be enhanced through increased objectivity. Given the potential of AI to improve IVF success rates, it remains crucial to compare the performance of AI and embryologists during embryo selection. STUDY DESIGN SIZE DURATION The search was done across PubMed, EMBASE, Ovid Medline, and IEEE Xplore from 1 June 2005 up to and including 7 January 2022. Included articles were restricted to those written in English. Search terms utilized across all databases for the study were: ('Artificial intelligence' OR 'Machine Learning' OR 'Deep learning' OR 'Neural network') AND ('IVF' OR 'in vitro fertili*' OR 'assisted reproductive techn*' OR 'embryo'), where the character '*' instructs the search engine to include any completion of the search term. PARTICIPANTS/MATERIALS SETTING METHODS A literature search was conducted for literature relating to AI applications to IVF. Primary outcomes of interest were accuracy, sensitivity, and specificity of the embryo morphology grade assessments and the likelihood of clinical outcomes, such as clinical pregnancy after IVF treatments. Risk of bias was assessed using the Modified Downs and Black Checklist. MAIN RESULTS AND THE ROLE OF CHANCE Twenty articles were included in this review. There was no specific embryo assessment day across the studies; Day 1 through Day 5/6 of embryo development was investigated. The types of input for training AI algorithms were images and time-lapse (10/20), clinical information (6/20), and both images and clinical information (4/20). Each AI model demonstrated promise when compared to an embryologist's visual assessment. On average, the models predicted the likelihood of successful clinical pregnancy with greater accuracy than clinical embryologists, signifying greater reliability than human prediction. The AI models performed at a median accuracy of 75.5% (range 59-94%) in predicting embryo morphology grade. The correct prediction (ground truth) was defined from embryo images according to the embryologists' assessment, following the respective local guidelines. Using blind test datasets, the embryologists' prediction accuracy was 65.4% (range 47-75%) with the same ground truth provided by the original local assessment. Similarly, AI models had a median accuracy of 77.8% (range 68-90%) in predicting clinical pregnancy from patient clinical treatment information, compared to 64% (range 58-76%) when performed by embryologists. When both images/time-lapse and clinical information inputs were combined, the median accuracy of the AI models was higher at 81.5% (range 67-98%), while clinical embryologists had a median accuracy of 51% (range 43-59%).
LIMITATIONS REASONS FOR CAUTION The findings of this review are based on studies that have not been prospectively evaluated in a clinical setting. Additionally, a fair comparison of all the studies was deemed unfeasible owing to the heterogeneity of the studies, the development of the AI models, the databases employed, and the study design and quality. WIDER IMPLICATIONS OF THE FINDINGS AI holds considerable promise for the IVF field and embryo selection. However, there needs to be a shift in developers' perception of the clinical outcome from successful implantation towards ongoing pregnancy or live birth. Additionally, existing models focus on locally generated databases and many lack external validation. STUDY FUNDING/COMPETING INTERESTS This study was funded by the Monash Data Future Institute. All authors have no conflicts of interest to declare. REGISTRATION NUMBER CRD42021256333.
Affiliation(s)
- M Salih
- Department of Obstetrics and Gynaecology, Monash University, Clayton, Victoria, Australia
- Department of Data Science and Artificial Intelligence, Faculty of Information Technology, Monash University, Clayton, Victoria, Australia
- C Austin
- Department of Obstetrics and Gynaecology, Monash University, Clayton, Victoria, Australia
- Department of Data Science and Artificial Intelligence, Faculty of Information Technology, Monash University, Clayton, Victoria, Australia
- R R Warty
- Department of Obstetrics and Gynaecology, Monash University, Clayton, Victoria, Australia
- C Tiktin
- School of Engineering, RMIT University, Melbourne, Victoria, Australia
- D L Rolnik
- Department of Obstetrics and Gynaecology, Monash University, Clayton, Victoria, Australia
- Women’s and Newborn Program, Monash Health, Melbourne, Victoria, Australia
- M Momeni
- Department of Obstetrics and Gynaecology, Monash University, Clayton, Victoria, Australia
- H Rezatofighi
- Department of Data Science and Artificial Intelligence, Faculty of Information Technology, Monash University, Clayton, Victoria, Australia
- Monash Data Future Institute, Monash University, Clayton, Victoria, Australia
- S Reddy
- School of Medicine, Deakin University, Geelong, Victoria, Australia
- V Smith
- Department of Obstetrics and Gynaecology, Monash University, Clayton, Victoria, Australia
- B Vollenhoven
- Department of Obstetrics and Gynaecology, Monash University, Clayton, Victoria, Australia
- Women’s and Newborn Program, Monash Health, Melbourne, Victoria, Australia
- Monash IVF, Melbourne, Victoria, Australia
- F Horta
- Department of Obstetrics and Gynaecology, Monash University, Clayton, Victoria, Australia
- Monash Data Future Institute, Monash University, Clayton, Victoria, Australia
- City Fertility, Melbourne, Victoria, Australia
13
Zhao X, Bai JW, Guo Q, Ren K, Zhang GJ. Clinical applications of deep learning in breast MRI. Biochim Biophys Acta Rev Cancer 2023; 1878:188864. [PMID: 36822377] [DOI: 10.1016/j.bbcan.2023.188864]
Abstract
Deep learning (DL) is one of the most powerful data-driven machine-learning techniques in artificial intelligence (AI). It can automatically learn from raw data without manual feature selection. DL models have led to remarkable advances in data extraction and analysis for medical imaging. Magnetic resonance imaging (MRI) has proven useful in delineating the characteristics and extent of breast lesions and tumors. This review summarizes the current state-of-the-art applications of DL models in breast MRI. Many recent DL models were examined in this field, along with several advanced learning approaches and methods for data normalization and breast and lesion segmentation. For clinical applications, DL-based breast MRI models were proven useful in five aspects: diagnosis of breast cancer, classification of molecular types, classification of histopathological types, prediction of neoadjuvant chemotherapy response, and prediction of lymph node metastasis. For subsequent studies, further improvement in data acquisition and preprocessing is necessary, additional DL techniques in breast MRI should be investigated, and wider clinical applications need to be explored.
Affiliation(s)
- Xue Zhao
- Fujian Key Laboratory of Precision Diagnosis and Treatment in Breast Cancer, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; National Institute for Data Science in Health and Medicine, Xiamen University, Xiamen, China; Department of Breast-Thyroid-Surgery and Cancer Center, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Research Center of Clinical Medicine in Breast & Thyroid Cancers, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Key Laboratory of Endocrine-Related Cancer Precision Medicine, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China
- Jing-Wen Bai
- Fujian Key Laboratory of Precision Diagnosis and Treatment in Breast Cancer, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Research Center of Clinical Medicine in Breast & Thyroid Cancers, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Key Laboratory of Endocrine-Related Cancer Precision Medicine, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Department of Oncology, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Cancer Research Center, School of Medicine, Xiamen University, Xiamen, China
- Qiu Guo
- Department of Radiology, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China
- Ke Ren
- Department of Radiology, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China
- Guo-Jun Zhang
- Fujian Key Laboratory of Precision Diagnosis and Treatment in Breast Cancer, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Department of Breast-Thyroid-Surgery and Cancer Center, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Research Center of Clinical Medicine in Breast & Thyroid Cancers, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Key Laboratory of Endocrine-Related Cancer Precision Medicine, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Cancer Research Center, School of Medicine, Xiamen University, Xiamen, China
14
Rahimpour M, Saint Martin MJ, Frouin F, Akl P, Orlhac F, Koole M, Malhaire C. Visual ensemble selection of deep convolutional neural networks for 3D segmentation of breast tumors on dynamic contrast enhanced MRI. Eur Radiol 2023; 33:959-969. [PMID: 36074262] [PMCID: PMC9889463] [DOI: 10.1007/s00330-022-09113-7]
Abstract
OBJECTIVES To develop a visual ensemble selection of deep convolutional neural networks (CNN) for 3D segmentation of breast tumors using T1-weighted dynamic contrast-enhanced (T1-DCE) MRI. METHODS Multi-center 3D T1-DCE MRI scans (n = 141) were acquired for a cohort of patients diagnosed with locally advanced or aggressive breast cancer. Tumor lesions of 111 scans were equally divided between two radiologists and segmented for training. The additional 30 scans were segmented independently by both radiologists for testing. Three 3D U-Net models were trained using either post-contrast images or a combination of post-contrast and subtraction images fused at either the image or the feature level. Segmentation accuracy was evaluated quantitatively using the Dice similarity coefficient (DSC) and the Hausdorff distance (HD95) and scored qualitatively by a radiologist as excellent, useful, helpful, or unacceptable. Based on this score, a visual ensemble approach selecting the best segmentation among these three models was proposed. RESULTS The mean and standard deviation of DSC and HD95 between the two radiologists were 77.8 ± 10.0% and 5.2 ± 5.9 mm, respectively. Using the visual ensemble selection, a DSC of 78.1 ± 16.2% and an HD95 of 14.1 ± 40.8 mm were reached. The qualitative assessment was excellent in 50% of cases and excellent or useful in 77%. CONCLUSION Using subtraction images in addition to post-contrast images provided complementary information for 3D segmentation of breast lesions by CNN. A visual ensemble selection allowing the radiologist to select the most optimal segmentation obtained by the three 3D U-Net models achieved results comparable to inter-radiologist agreement, yielding 77% of segmented volumes considered excellent or useful. KEY POINTS • Deep convolutional neural networks were developed using T1-weighted post-contrast and subtraction MRI to perform automated 3D segmentation of breast tumors. • A visual ensemble selection allowing the radiologist to choose the best segmentation among the three 3D U-Net models outperformed each of the three models. • The visual ensemble selection provided clinically useful segmentations in 77% of cases, potentially allowing for a valuable reduction of the manual 3D segmentation workload for the radiologist and greatly facilitating quantitative studies on non-invasive biomarkers in breast MRI.
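The ensemble step itself is deliberately simple: the radiologist's four-level quality score decides which of the three candidate segmentations is kept for each case. A hypothetical sketch (names and data structures assumed, not the study's software):

```python
# A minimal sketch of the visual ensemble selection: the radiologist's
# four-level quality score picks the best of three candidate masks per
# case. Names, masks, and scores are hypothetical stand-ins.
QUALITY = {"excellent": 3, "useful": 2, "helpful": 1, "unacceptable": 0}

def select_best(candidates):
    """candidates: list of (model_name, mask, radiologist_score) tuples."""
    return max(candidates, key=lambda c: QUALITY[c[2]])

case = [
    ("post_contrast", "mask_A", "useful"),
    ("image_level_fusion", "mask_B", "excellent"),
    ("feature_level_fusion", "mask_C", "helpful"),
]
name, mask, score = select_best(case)
print(f"kept {name} ({score})")
```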
Affiliation(s)
- Marie-Judith Saint Martin
- Laboratoire d'Imagerie Translationnelle en Oncologie (LITO), U1288 Inserm, Université Paris-Saclay, Centre de Recherche de l'Institut Curie, Bâtiment 101B Rue de la Chaufferie, 91400, Orsay, France
- Frédérique Frouin
- Laboratoire d'Imagerie Translationnelle en Oncologie (LITO), U1288 Inserm, Université Paris-Saclay, Centre de Recherche de l'Institut Curie, Bâtiment 101B Rue de la Chaufferie, 91400, Orsay, France
- Pia Akl
- Department of Radiology, Hôpital Femme Mère Enfant, Hospices civils de Lyon, Lyon, France
- Fanny Orlhac
- Laboratoire d'Imagerie Translationnelle en Oncologie (LITO), U1288 Inserm, Université Paris-Saclay, Centre de Recherche de l'Institut Curie, Bâtiment 101B Rue de la Chaufferie, 91400, Orsay, France
- Michel Koole
- Department of Imaging and Pathology, KU Leuven, Leuven, Belgium
- Caroline Malhaire
- Laboratoire d'Imagerie Translationnelle en Oncologie (LITO), U1288 Inserm, Université Paris-Saclay, Centre de Recherche de l'Institut Curie, Bâtiment 101B Rue de la Chaufferie, 91400, Orsay, France
- Department of Radiology, Ensemble Hospitalier de l'Institut Curie, Paris, France
15
Fazekas S, Budai BK, Stollmayer R, Kaposi PN, Bérczi V. Artificial intelligence and neural networks in radiology – Basics that all radiology residents should know. Imaging 2022. [DOI: 10.1556/1647.2022.00104]
Abstract
The area of artificial intelligence is developing at a high rate. In the medical field, an extreme amount of data is created every day. As both the images and the reports are quantifiable, the field of radiology aspires to use these data to deliver better, more efficient clinical care. Artificial intelligence (AI) means the simulation of human intelligence by a system or machine. It has been developed to enable machines to "think": to learn, reason, predict, categorize, and solve problems concerning large amounts of data, and to make decisions more effectively than before. Different AI methods can help radiologists with pre-screening images and identifying features. In this review, we summarize the basic concepts that are needed to understand AI. As AI methods are expected to exceed the threshold for clinical usefulness soon, their use in medicine will become inevitable in the near future.
Affiliation(s)
- Szuzina Fazekas
- Department of Radiology, Medical Imaging Centre, Faculty of Medicine, Semmelweis University, Budapest, Hungary
- Bettina Katalin Budai
- Department of Radiology, Medical Imaging Centre, Faculty of Medicine, Semmelweis University, Budapest, Hungary
- Róbert Stollmayer
- Department of Radiology, Medical Imaging Centre, Faculty of Medicine, Semmelweis University, Budapest, Hungary
- Pál Novák Kaposi
- Department of Radiology, Medical Imaging Centre, Faculty of Medicine, Semmelweis University, Budapest, Hungary
- Viktor Bérczi
- Department of Radiology, Medical Imaging Centre, Faculty of Medicine, Semmelweis University, Budapest, Hungary
16
deSouza NM, van der Lugt A, Deroose CM, Alberich-Bayarri A, Bidaut L, Fournier L, Costaridou L, Oprea-Lager DE, Kotter E, Smits M, Mayerhoefer ME, Boellaard R, Caroli A, de Geus-Oei LF, Kunz WG, Oei EH, Lecouvet F, Franca M, Loewe C, Lopci E, Caramella C, Persson A, Golay X, Dewey M, O'Connor JPB, deGraaf P, Gatidis S, Zahlmann G. Standardised lesion segmentation for imaging biomarker quantitation: a consensus recommendation from ESR and EORTC. Insights Imaging 2022; 13:159. [PMID: 36194301] [PMCID: PMC9532485] [DOI: 10.1186/s13244-022-01287-4]
Abstract
BACKGROUND Lesion/tissue segmentation on digital medical images enables biomarker extraction, image-guided therapy delivery, treatment response measurement, and training/validation for developing artificial intelligence algorithms and workflows. To ensure data reproducibility, criteria for standardised segmentation are critical but currently unavailable. METHODS A modified Delphi process initiated by the European Imaging Biomarker Alliance (EIBALL) of the European Society of Radiology (ESR) and the European Organisation for Research and Treatment of Cancer (EORTC) Imaging Group was undertaken. Three multidisciplinary task forces addressed modality and image acquisition, segmentation methodology itself, and standards and logistics. Devised survey questions were fed via a facilitator to expert participants. The 58 respondents to Round 1 were invited to participate in Rounds 2-4, and subsequent rounds were informed by the responses of previous rounds. RESULTS/CONCLUSIONS Items with ≥ 75% consensus are considered a recommendation. These include system performance certification; thresholds for image signal-to-noise, contrast-to-noise, and tumour-to-background ratios; spatial resolution; and artefact levels. The use of direct, iterative, and machine- or deep-learning reconstruction methods and of a mixture of CE-marked and verified research tools was agreed, and the use of specified reference standards and validation processes was considered essential. Operator training and refreshment were considered mandatory for clinical trials and clinical research. Items with 60-74% agreement require reporting (site-specific accreditation for clinical research, minimal pixel number within the segmented lesion, use of post-reconstruction algorithms, operator refresher training for clinical practice). Items with ≤ 60% agreement are outside current recommendations for segmentation (frequency of system performance tests, use of only CE-marked tools, board certification of operators, frequency of operator refresher training). Recommendations by anatomical area are also specified.
Affiliation(s)
- Nandita M deSouza
- Division of Radiotherapy and Imaging, The Institute of Cancer Research and Royal Marsden NHS Foundation Trust, London, UK
- Aad van der Lugt
- Department of Radiology and Nuclear Medicine, Erasmus MC, University Medical Center, Rotterdam, The Netherlands
- Christophe M Deroose
- Nuclear Medicine, University Hospitals Leuven, Leuven, Belgium
- Nuclear Medicine and Molecular Imaging, Department of Imaging and Pathology, KU Leuven, Leuven, Belgium
- Luc Bidaut
- College of Science, University of Lincoln, Lincoln, LN6 7TS, UK
- Laure Fournier
- INSERM, Radiology Department, AP-HP, Hopital Europeen Georges Pompidou, Université de Paris, PARCC, 75015, Paris, France
- Lena Costaridou
- School of Medicine, University of Patras, University Campus, Rio, 26 500, Patras, Greece
- Daniela E Oprea-Lager
- Department of Radiology and Nuclear Medicine, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Elmar Kotter
- Department of Radiology, University Medical Center Freiburg, Freiburg, Germany
- Marion Smits
- Department of Radiology and Nuclear Medicine, Erasmus MC, University Medical Center, Rotterdam, The Netherlands
- Marius E Mayerhoefer
- Department of Biomedical Imaging and Image-Guided Therapy, Medical University of Vienna, Vienna, Austria
- Memorial Sloan Kettering Cancer Centre, New York, NY, USA
- Ronald Boellaard
- Department of Radiology and Nuclear Medicine, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Anna Caroli
- Department of Biomedical Engineering, Istituto di Ricerche Farmacologiche Mario Negri IRCCS, Bergamo, Italy
- Lioe-Fee de Geus-Oei
- Department of Radiology, Leiden University Medical Center, Leiden, The Netherlands
- Biomedical Photonic Imaging Group, University of Twente, Enschede, The Netherlands
- Wolfgang G Kunz
- Department of Radiology, University Hospital, LMU Munich, Munich, Germany
- Edwin H Oei
- Department of Radiology and Nuclear Medicine, Erasmus MC, University Medical Center, Rotterdam, The Netherlands
- Frederic Lecouvet
- Department of Radiology, Institut de Recherche Expérimentale et Clinique (IREC), Cliniques Universitaires Saint Luc, Université Catholique de Louvain (UCLouvain), 10 Avenue Hippocrate, 1200, Brussels, Belgium
- Manuela Franca
- Department of Radiology, Centro Hospitalar Universitário do Porto, Instituto de Ciências Biomédicas de Abel Salazar, University of Porto, Porto, Portugal
- Christian Loewe
- Division of Cardiovascular and Interventional Radiology, Department for Bioimaging and Image-Guided Therapy, Medical University of Vienna, Vienna, Austria
- Egesta Lopci
- Nuclear Medicine, IRCCS - Humanitas Research Hospital, via Manzoni 56, Rozzano, MI, Italy
- Caroline Caramella
- Radiology Department, Hôpital Marie Lannelongue, Institut d'Oncologie Thoracique, Université Paris-Saclay, Le Plessis-Robinson, France
- Anders Persson
- Department of Radiology, and Department of Health, Medicine and Caring Sciences, Center for Medical Image Science and Visualization (CMIV), Linköping University, Linköping, Sweden
- Xavier Golay
- Queen Square Institute of Neurology, University College London, London, UK
- Marc Dewey
- Department of Radiology, Charité Universitätsmedizin Berlin, Berlin, Germany
- James P B O'Connor
- Division of Radiotherapy and Imaging, The Institute of Cancer Research and Royal Marsden NHS Foundation Trust, London, UK
- Pim deGraaf
- Department of Radiology and Nuclear Medicine, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Sergios Gatidis
- Department of Radiology, University of Tubingen, Tübingen, Germany
- Gudrun Zahlmann
- Radiological Society of North America (RSNA), Oak Brook, IL, USA