1. Olaisen S, Smistad E, Espeland T, Hu J, Pasdeloup D, Østvik A, Aakhus S, Rösner A, Malm S, Stylidis M, Holte E, Grenne B, Løvstakken L, Dalen H. Automatic measurements of left ventricular volumes and ejection fraction by artificial intelligence: clinical validation in real time and large databases. Eur Heart J Cardiovasc Imaging 2024;25:383-395. PMID: 37883712; PMCID: PMC11024810; DOI: 10.1093/ehjci/jead280. Open access.
Abstract
AIMS
Echocardiography is a cornerstone in cardiac imaging, and left ventricular (LV) ejection fraction (EF) is a key parameter for patient management. Recent advances in artificial intelligence (AI) have enabled fully automatic measurements of LV volumes and EF both during scanning and in stored recordings. The aim of this study was to evaluate the impact of implementing AI measurements on acquisition and processing time and on test-retest reproducibility compared with standard clinical workflow, as well as to study the agreement with reference measurements in large internal and external databases.
METHODS AND RESULTS
Fully automatic measurements of LV volumes and EF by a novel AI software were compared with manual measurements in the following clinical scenarios: (i) real-time use during scanning of 50 consecutive patients, (ii) 40 subjects with repeated echocardiographic examinations and manual measurements by 4 readers, and (iii) large internal and external research databases of 1881 and 849 subjects, respectively. Real-time AI measurements significantly reduced the total acquisition and processing time by 77% (median 5.3 min, P < 0.001) compared with standard clinical workflow. Test-retest reproducibility of AI measurements was superior in inter-observer scenarios and non-inferior in intra-observer scenarios. AI measurements showed good agreement with reference measurements both in real time and in large research databases.
CONCLUSION
The software reduced the time taken to perform and volumetrically analyse routine echocardiograms without a decrease in accuracy compared with experts.
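The LV volumes and EF measured above are related by the standard volumetric formula EF = (EDV − ESV)/EDV; a minimal illustrative sketch (the function name and volumes are hypothetical, not taken from the study's AI software):

```python
def ejection_fraction(edv_ml: float, esv_ml: float) -> float:
    """LV ejection fraction (%) from end-diastolic (EDV) and end-systolic (ESV) volumes."""
    if not 0 < esv_ml <= edv_ml:
        raise ValueError("require 0 < ESV <= EDV")
    return 100.0 * (edv_ml - esv_ml) / edv_ml

# hypothetical volumes: EDV = 120 mL, ESV = 50 mL
print(round(ejection_fraction(120.0, 50.0), 1))  # 58.3
```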
Affiliation(s)
- Sindre Olaisen
  - Centre for Innovative Ultrasound Solutions, Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Prinsesse Kristinas Gate 3, 7030 Trondheim, Norway
- Erik Smistad
  - Centre for Innovative Ultrasound Solutions, Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Prinsesse Kristinas Gate 3, 7030 Trondheim, Norway
  - Medical Image Analysis, Health Research, SINTEF Digital, Trondheim, Norway
- Torvald Espeland
  - Centre for Innovative Ultrasound Solutions, Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Prinsesse Kristinas Gate 3, 7030 Trondheim, Norway
  - Clinic of Cardiology, St. Olavs Hospital, Trondheim University Hospital, Prinsesse Kristinas Gate 3, 7030 Trondheim, Norway
- Jieyu Hu
  - Centre for Innovative Ultrasound Solutions, Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Prinsesse Kristinas Gate 3, 7030 Trondheim, Norway
- David Pasdeloup
  - Centre for Innovative Ultrasound Solutions, Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Prinsesse Kristinas Gate 3, 7030 Trondheim, Norway
- Andreas Østvik
  - Centre for Innovative Ultrasound Solutions, Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Prinsesse Kristinas Gate 3, 7030 Trondheim, Norway
  - Medical Image Analysis, Health Research, SINTEF Digital, Trondheim, Norway
- Svend Aakhus
  - Centre for Innovative Ultrasound Solutions, Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Prinsesse Kristinas Gate 3, 7030 Trondheim, Norway
  - Clinic of Cardiology, St. Olavs Hospital, Trondheim University Hospital, Prinsesse Kristinas Gate 3, 7030 Trondheim, Norway
- Assami Rösner
  - Department of Cardiology, University Hospital of North Norway, Tromsø, Norway
  - Institute for Clinical Medicine, UiT, The Arctic University of Norway, Tromsø, Norway
- Siri Malm
  - Institute for Clinical Medicine, UiT, The Arctic University of Norway, Tromsø, Norway
  - Department of Cardiology, University Hospital of North Norway, UNN Harstad, Tromsø, Norway
- Michael Stylidis
  - Department of Cardiology, University Hospital of North Norway, Tromsø, Norway
  - Department of Community Medicine, UiT, The Arctic University of Norway, Tromsø, Norway
- Espen Holte
  - Centre for Innovative Ultrasound Solutions, Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Prinsesse Kristinas Gate 3, 7030 Trondheim, Norway
  - Clinic of Cardiology, St. Olavs Hospital, Trondheim University Hospital, Prinsesse Kristinas Gate 3, 7030 Trondheim, Norway
- Bjørnar Grenne
  - Centre for Innovative Ultrasound Solutions, Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Prinsesse Kristinas Gate 3, 7030 Trondheim, Norway
  - Clinic of Cardiology, St. Olavs Hospital, Trondheim University Hospital, Prinsesse Kristinas Gate 3, 7030 Trondheim, Norway
- Lasse Løvstakken
  - Centre for Innovative Ultrasound Solutions, Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Prinsesse Kristinas Gate 3, 7030 Trondheim, Norway
- Håvard Dalen
  - Centre for Innovative Ultrasound Solutions, Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Prinsesse Kristinas Gate 3, 7030 Trondheim, Norway
  - Clinic of Cardiology, St. Olavs Hospital, Trondheim University Hospital, Prinsesse Kristinas Gate 3, 7030 Trondheim, Norway
  - Department of Medicine, Levanger Hospital, Nord-Trøndelag Hospital Trust, Kirkegata 2, 7600 Levanger, Norway
2. Hu J, Olaisen SH, Smistad E, Dalen H, Lovstakken L. Automated 2-D and 3-D Left Atrial Volume Measurements Using Deep Learning. Ultrasound Med Biol 2024;50:47-56. PMID: 37813702; DOI: 10.1016/j.ultrasmedbio.2023.08.024.
Abstract
OBJECTIVE
Echocardiography, a critical tool for assessing left atrial (LA) volume, often relies on manual or semi-automated measurements. This study introduces a fully automated, real-time method for measuring LA volume in both 2-D and 3-D imaging, with the aim of offering accuracy comparable to that of expert assessments while saving time and reducing operator variability.
METHODS
We developed an automated pipeline comprising a network to identify the end-systole (ES) time point and robust 2-D and 3-D U-Nets for segmentation. We employed data sets of 789 2-D images and 286 3-D recordings and explored various training regimes, including recurrent networks and pseudo-labeling, to estimate volume curves.
RESULTS
Our baseline results revealed average volume differences of 2.9 mL for 2-D and 7.8 mL for 3-D compared with manual methods. The application of pseudo-labeling to all frames in the cine loop generally led to more robust volume curves and notably improved ES measurement in cases with limited data.
CONCLUSION
Our results highlight the potential of automated LA volume estimation in clinical practice. The proposed prototype application, capable of processing real-time data from a clinical ultrasound scanner, provides valuable temporal volume curve information in the echo lab.
Affiliation(s)
- Jieyu Hu
  - Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, Trondheim, Norway
- Sindre Hellum Olaisen
  - Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, Trondheim, Norway
- Erik Smistad
  - Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, Trondheim, Norway
  - SINTEF Medical Technology, Trondheim, Norway
- Håvard Dalen
  - Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, Trondheim, Norway
  - Clinic of Cardiology, St. Olav's University Hospital, Trondheim, Norway
  - Levanger Hospital, Nord-Trøndelag Hospital Trust, Levanger, Norway
- Lasse Løvstakken
  - Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, Trondheim, Norway
3. Salte IM, Østvik A, Olaisen SH, Karlsen S, Dahlslett T, Smistad E, Eriksen-Volnes TK, Brunvand H, Haugaa KH, Edvardsen T, Dalen H, Lovstakken L, Grenne B. Response to "Minimal Detectable Change and Reproducibility of Echocardiographic Strain: Implications for Clinical Practice". J Am Soc Echocardiogr 2023;36:1223-1224. PMID: 37640086; DOI: 10.1016/j.echo.2023.08.017.
Affiliation(s)
- Ivar M Salte
  - Department of Medicine, Hospital of Southern Norway, Kristiansand, Norway
  - Faculty of Medicine, University of Oslo, Oslo, Norway
- Andreas Østvik
  - Centre for Innovative Ultrasound Solutions and Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Trondheim, Norway
  - Medical Image Analysis, Health Research, SINTEF Digital, Trondheim, Norway
- Sindre H Olaisen
  - Centre for Innovative Ultrasound Solutions and Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Trondheim, Norway
- Sigve Karlsen
  - Faculty of Medicine, University of Oslo, Oslo, Norway
  - Department of Medicine, Hospital of Southern Norway, Arendal, Norway
- Thomas Dahlslett
  - Department of Medicine, Hospital of Southern Norway, Arendal, Norway
- Erik Smistad
  - Centre for Innovative Ultrasound Solutions and Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Trondheim, Norway
  - Medical Image Analysis, Health Research, SINTEF Digital, Trondheim, Norway
- Torfinn Kirknes Eriksen-Volnes
  - Centre for Innovative Ultrasound Solutions and Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Trondheim, Norway
  - Clinic of Cardiology, St. Olavs University Hospital, Trondheim, Norway
- Harald Brunvand
  - Department of Medicine, Hospital of Southern Norway, Arendal, Norway
- Kristina H Haugaa
  - Faculty of Medicine, University of Oslo, Oslo, Norway
  - ProCardio Center for Innovation, Department of Cardiology, Oslo University Hospital, Rikshospitalet, Oslo, Norway
  - Faculty of Medicine, Karolinska Institutet and Cardiovascular Division, Karolinska University Hospital, Stockholm, Sweden
- Thor Edvardsen
  - Faculty of Medicine, University of Oslo, Oslo, Norway
  - ProCardio Center for Innovation, Department of Cardiology, Oslo University Hospital, Rikshospitalet, Oslo, Norway
- Håvard Dalen
  - Centre for Innovative Ultrasound Solutions and Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Trondheim, Norway
  - Clinic of Cardiology, St. Olavs University Hospital, Trondheim, Norway
  - Levanger Hospital, Nord-Trøndelag Hospital Trust, Levanger, Norway
- Lasse Løvstakken
  - Centre for Innovative Ultrasound Solutions and Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Trondheim, Norway
- Bjørnar Grenne
  - Centre for Innovative Ultrasound Solutions and Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Trondheim, Norway
  - Clinic of Cardiology, St. Olavs University Hospital, Trondheim, Norway
4. Salte IM, Østvik A, Olaisen SH, Karlsen S, Dahlslett T, Smistad E, Eriksen-Volnes TK, Brunvand H, Haugaa KH, Edvardsen T, Dalen H, Lovstakken L, Grenne B. Deep Learning for Improved Precision and Reproducibility of Left Ventricular Strain in Echocardiography: A Test-Retest Study. J Am Soc Echocardiogr 2023;36:788-799. PMID: 36933849; DOI: 10.1016/j.echo.2023.02.017.
Abstract
AIMS
Assessment of left ventricular (LV) function by echocardiography is hampered by modest test-retest reproducibility. A novel artificial intelligence (AI) method based on deep learning provides fully automated measurements of LV global longitudinal strain (GLS) and may improve the clinical utility of echocardiography by reducing user-related variability. The aim of this study was to assess within-patient test-retest reproducibility of LV GLS measured by the novel AI method in repeated echocardiograms recorded by different echocardiographers and to compare the results to manual measurements.
METHODS
Two test-retest data sets (n = 40 and n = 32) were obtained at separate centers. Repeated recordings were acquired in immediate succession by 2 different echocardiographers at each center. For each data set, 4 readers measured GLS in both recordings using a semiautomatic method to construct test-retest interreader and intrareader scenarios. Agreement, mean absolute difference, and minimal detectable change (MDC) were compared to analyses by AI. In a subset of 10 patients, beat-to-beat variability in 3 cardiac cycles was assessed by 2 readers and AI.
RESULTS
Test-retest variability was lower with AI compared with interreader scenarios (data set I: MDC = 3.7 vs 5.5, mean absolute difference = 1.4 vs 2.1, respectively; data set II: MDC = 3.9 vs 5.2, mean absolute difference = 1.6 vs 1.9, respectively; all P < .05). There was bias in GLS measurements in 13 of 24 test-retest interreader scenarios (largest bias, 3.2 strain units). In contrast, there was no bias in measurements by AI. Beat-to-beat MDCs were 1.5, 2.1, and 2.3 for AI and the 2 readers, respectively. Processing time for analyses of GLS by the AI method was 7.9 ± 2.8 seconds.
CONCLUSION
A fast AI method for automated measurements of LV GLS reduced test-retest variability and removed bias between readers in both test-retest data sets. By improving precision and reproducibility, AI may increase the clinical utility of echocardiography.
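The GLS discussed above is conventionally defined as the relative shortening of LV myocardial length from end-diastole to end-systole, reported in negative strain units for a normally contracting ventricle; a minimal sketch under that standard definition (helper name and lengths are illustrative, not the study's AI pipeline):

```python
def gls_percent(length_ed_mm: float, length_es_mm: float) -> float:
    """Global longitudinal strain (%): relative change in LV myocardial length
    from end-diastole (ED) to end-systole (ES); negative when the wall shortens."""
    return 100.0 * (length_es_mm - length_ed_mm) / length_ed_mm

# hypothetical lengths: 150 mm at ED shortening to 120 mm at ES
print(gls_percent(150.0, 120.0))  # -20.0
```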
Affiliation(s)
- Ivar M Salte
  - Department of Medicine, Hospital of Southern Norway, Kristiansand, Norway
  - Faculty of Medicine, University of Oslo, Oslo, Norway
  - ProCardio Center for Innovation, Department of Cardiology, Oslo University Hospital, Rikshospitalet, Oslo, Norway
- Andreas Østvik
  - Centre for Innovative Ultrasound Solutions and Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Trondheim, Norway
  - Medical Image Analysis, Health Research, SINTEF Digital, Trondheim, Norway
- Sindre H Olaisen
  - Centre for Innovative Ultrasound Solutions and Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Trondheim, Norway
- Sigve Karlsen
  - Faculty of Medicine, University of Oslo, Oslo, Norway
  - Department of Medicine, Hospital of Southern Norway, Arendal, Norway
- Thomas Dahlslett
  - Department of Medicine, Hospital of Southern Norway, Arendal, Norway
- Erik Smistad
  - Centre for Innovative Ultrasound Solutions and Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Trondheim, Norway
  - Medical Image Analysis, Health Research, SINTEF Digital, Trondheim, Norway
- Torfinn K Eriksen-Volnes
  - Centre for Innovative Ultrasound Solutions and Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Trondheim, Norway
  - Clinic of Cardiology, St. Olavs University Hospital, Trondheim, Norway
- Harald Brunvand
  - Department of Medicine, Hospital of Southern Norway, Arendal, Norway
- Kristina H Haugaa
  - Faculty of Medicine, University of Oslo, Oslo, Norway
  - ProCardio Center for Innovation, Department of Cardiology, Oslo University Hospital, Rikshospitalet, Oslo, Norway
  - Faculty of Medicine, Karolinska Institutet and Cardiovascular Division, Karolinska University Hospital, Stockholm, Sweden
- Thor Edvardsen
  - Faculty of Medicine, University of Oslo, Oslo, Norway
  - ProCardio Center for Innovation, Department of Cardiology, Oslo University Hospital, Rikshospitalet, Oslo, Norway
- Håvard Dalen
  - Centre for Innovative Ultrasound Solutions and Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Trondheim, Norway
  - Clinic of Cardiology, St. Olavs University Hospital, Trondheim, Norway
  - Levanger Hospital, Nord-Trøndelag Hospital Trust, Levanger, Norway
- Lasse Løvstakken
  - Centre for Innovative Ultrasound Solutions and Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Trondheim, Norway
- Bjørnar Grenne
  - Centre for Innovative Ultrasound Solutions and Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Trondheim, Norway
  - Clinic of Cardiology, St. Olavs University Hospital, Trondheim, Norway
5. Netteland DF, Aarhus M, Smistad E, Sandset EC, Padayachy L, Helseth E, Brekken R. Noninvasive intracranial pressure assessment by optic nerve sheath diameter: Automated measurements as an alternative to clinician-performed measurements. Front Neurol 2023;14:1064492. PMID: 36816558; PMCID: PMC9928958; DOI: 10.3389/fneur.2023.1064492. Open access.
Abstract
Introduction
Optic nerve sheath diameter (ONSD) has shown promise as a noninvasive parameter for estimating intracranial pressure (ICP). In this study, we evaluated a novel automated method of measuring the ONSD in transorbital ultrasound imaging.
Methods
From adult traumatic brain injury (TBI) patients with invasive ICP monitoring, bedside manual ONSD measurements and ultrasound videos of the optic nerve sheath complex were simultaneously acquired. Automatic ONSD measurements were obtained by processing the ultrasound videos with a novel software based on a machine learning approach for segmentation of the optic nerve sheath. Agreement between manual and automated measurements, as well as their correlation to invasive ICP, was evaluated. Furthermore, the ability to distinguish dichotomized ICP for manual and automatic measurements of ONSD was compared, both for ICP dichotomized at ≥20 mmHg and at the 50th percentile (≥14 mmHg). Finally, we performed an exploratory subgroup analysis based on the software's judgment of optic nerve axis alignment to elucidate the reasons for variation in the agreement between automatic and manual measurements.
Results
A total of 43 ultrasound examinations were performed on 25 adult patients with TBI, resulting in 86 image sequences covering the right and left eyes. The median pairwise difference between automatically and manually measured ONSD was 0.06 mm (IQR -0.44 to 0.38 mm; p = 0.80). The manually measured ONSD showed a positive correlation with ICP, while the automatically measured ONSD showed a trend toward, but not a statistically significant, correlation with ICP. When examining the ability to distinguish dichotomized ICP, manual and automatic measurements performed with similar accuracy both for an ICP cutoff at 20 mmHg (manual: AUC 0.74, 95% CI 0.58-0.88; automatic: AUC 0.83, 95% CI 0.66-0.93) and for an ICP cutoff at 14 mmHg (manual: AUC 0.70, 95% CI 0.52-0.85; automatic: AUC 0.68, 95% CI 0.48-0.83). In the exploratory subgroup analysis, agreement between measurements was higher in the subgroup where the software rated the optic nerve axis alignment as good, compared with intermediate/poor.
Conclusion
The novel automated method of measuring the ONSD on ultrasound videos using segmentation of the optic nerve sheath showed reasonable agreement with manual measurements and performed equally well in distinguishing high and low ICP.
Affiliation(s)
- Dag Ferner Netteland
  - Department of Neurosurgery, Oslo University Hospital Ullevål, Oslo, Norway
  - Faculty of Medicine, University of Oslo, Oslo, Norway
- Mads Aarhus
  - Department of Neurosurgery, Oslo University Hospital Ullevål, Oslo, Norway
- Erik Smistad
  - Department of Health Research, Medical Technology, SINTEF, Trondheim, Norway
- Else Charlotte Sandset
  - Department of Neurology, Oslo University Hospital Ullevål, Oslo, Norway
  - The Norwegian Air Ambulance Foundation, Oslo, Norway
- Llewellyn Padayachy
  - Department of Neurosurgery, School of Medicine, Faculty of Health Sciences, University of Pretoria, Steve Biko Academic Hospital, Pretoria, South Africa
- Eirik Helseth
  - Department of Neurosurgery, Oslo University Hospital Ullevål, Oslo, Norway
  - Faculty of Medicine, University of Oslo, Oslo, Norway
- Reidar Brekken
  - Department of Health Research, Medical Technology, SINTEF, Trondheim, Norway
6. Pasdeloup D, Olaisen SH, Østvik A, Sabo S, Pettersen HN, Holte E, Grenne B, Stølen SB, Smistad E, Aase SA, Dalen H, Løvstakken L. Real-Time Echocardiography Guidance for Optimized Apical Standard Views. Ultrasound Med Biol 2023;49:333-346. PMID: 36280443; DOI: 10.1016/j.ultrasmedbio.2022.09.006.
Abstract
Measurements of cardiac function such as left ventricular ejection fraction and myocardial strain are typically based on 2-D ultrasound imaging. The reliability of these measurements depends on the correct pose of the transducer, such that the 2-D imaging plane properly aligns with the heart for standard measurement views, and is thus dependent on the operator's skills. We propose a deep learning tool that suggests transducer movements to help users navigate toward the required standard views while scanning. The tool can simplify echocardiography for less experienced users and improve image standardization for more experienced users. Training data were generated by slicing 3-D ultrasound volumes, which permits simulation of the movements of a 2-D transducer. Neural networks were further trained to calculate the transducer position in a regression fashion. The method was validated and tested on 2-D images from several data sets representative of a prospective clinical setting. The method proposed an adequate transducer movement 75% of the time when averaging over all degrees of freedom, and 95% of the time when considering transducer rotation alone. Real-time application examples illustrate the direct relation between the transducer movements, the ultrasound image, and the provided feedback.
Affiliation(s)
- David Pasdeloup
  - Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, Trondheim, Norway
- Sindre H Olaisen
  - Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, Trondheim, Norway
- Andreas Østvik
  - Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, Trondheim, Norway
  - SINTEF Medical Technology, Trondheim, Norway
- Sigbjorn Sabo
  - Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, Trondheim, Norway
- Håkon N Pettersen
  - Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, Trondheim, Norway
- Espen Holte
  - Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, Trondheim, Norway
  - Clinic of Cardiology, St. Olav's Hospital, Trondheim, Norway
- Bjørnar Grenne
  - Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, Trondheim, Norway
  - Clinic of Cardiology, St. Olav's Hospital, Trondheim, Norway
- Stian B Stølen
  - Clinic of Cardiology, St. Olav's Hospital, Trondheim, Norway
- Erik Smistad
  - Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, Trondheim, Norway
  - SINTEF Medical Technology, Trondheim, Norway
- Håvard Dalen
  - Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, Trondheim, Norway
  - Clinic of Cardiology, St. Olav's Hospital, Trondheim, Norway
- Lasse Løvstakken
  - Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, Trondheim, Norway
7. Saeboe S, Pettersen HN, Oestvik A, Pasdeloup D, Smistad E, Stoelen S, Grenne B, Loevstakken L, Holte E, Dalen H. Automated analyses and real-time guiding by deep learning to reduce test-retest variability of global longitudinal strain. Eur Heart J 2022. DOI: 10.1093/eurheartj/ehac544.005. Open access.
Abstract
Background
Global longitudinal strain (GLS) is recommended for assessment of left ventricular (LV) function. Test-retest variability of GLS depends on both the recordings and the analyses. Foreshortened LV recordings have been shown to reduce length measurements and increase GLS. Real-time guiding of operators and automated GLS analyses (auto-GLS) may improve echocardiographic test-retest reproducibility and workflow.
Purpose
We aimed to study whether a deep-learning (DL) software with real-time feedback of LV length during echocardiography combined with auto-GLS reduced the variability between sonographers and cardiologists. Secondly, we aimed to study the variability of manual and automated GLS.
Methods
Patients with mixed cardiac pathology were included. Inclusion criteria were sinus rhythm and no indication for ultrasound contrast. Each patient underwent three consecutive echocardiograms. The first and second examinations were performed by two of three randomized sonographers (Sonographer 1 and 2) and the third exam by one of four randomized cardiologists. All exams included standard apical views. Data were collected in two separate periods: in period 1, no operator used the DL guiding; in period 2, DL guiding was used by Sonographer 2. GLS was measured manually by all operators, each blinded to the others. One blinded expert reader measured reference LV length in the cardiologists' tri-plane recordings. LV foreshortening was calculated at end-diastole (reference length minus the operator's length). Auto-GLS was measured retrospectively in all examinations. One-way ANOVA was used to estimate within-patient variation for auto-GLS and manual measurements. Coefficients of variation (COV) between sonographers and cardiologists were calculated as the within-patient SD divided by the mean.
Results
In total, 88 patients (45% women) were included, with mean (SD) age 63 (16) years. Manual and automated GLS correlated well (Figure 1), while foreshortening of the LV showed non-significant inverse correlations with both GLS measurements (R: auto-GLS = −0.13 and manual GLS = −0.10, p ≥ 0.06). COVs for auto-GLS were not significantly reduced by real-time guiding to reduce foreshortening (COV: with DL guiding = 5.11% vs without DL guiding = 6.39%, p = 0.298). Compared with manual GLS measurements, auto-GLS had significantly less within-patient variation (within-patient SD: manual GLS = 1.46 vs auto-GLS = 1.16, p < 0.01). Similarly, auto-GLS showed >30% lower COVs compared with manual measurements (Table 1).
Conclusion
Real-time feedback by DL to reduce LV foreshortening was not significantly associated with reduced variation of GLS measurements, while fully automated DL analyses of GLS reduced test-retest variation between sonographers and cardiologists. This may allow for improved workflow and diagnostics in echocardiography.
Funding Acknowledgement
Type of funding sources: Public Institution(s). Main funding source(s): Norwegian University of Science and Technology, St. Olavs University Hospital, Central-Norway Health Authority
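The COV reported in this study is defined in the Methods as the within-patient SD divided by the mean; the sketch below reconstructs that stated formula for illustration only (it is not the study's analysis code, and pooling per-patient variances by their mean is an assumption — the study estimated within-patient variation by one-way ANOVA):

```python
import math
from statistics import mean, stdev

def within_patient_cov(measurements_per_patient):
    """COV (%) = within-patient SD / grand mean, per the abstract's definition.
    The within-patient SD is pooled as the root mean of per-patient variances."""
    variances = [stdev(vals) ** 2 for vals in measurements_per_patient]
    within_sd = math.sqrt(mean(variances))
    grand_mean = mean(v for vals in measurements_per_patient for v in vals)
    return 100.0 * within_sd / abs(grand_mean)

# hypothetical repeated GLS magnitudes for two patients
print(round(within_patient_cov([[18.0, 20.0], [22.0, 20.0]]), 2))  # 7.07
```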
Affiliation(s)
- S Saeboe
  - Norwegian University of Science and Technology, Department of Circulation and Medical Imaging, Trondheim, Norway
- H N Pettersen
  - Norwegian University of Science and Technology, Department of Circulation and Medical Imaging, Trondheim, Norway
- A Oestvik
  - Norwegian University of Science and Technology, Department of Circulation and Medical Imaging, Trondheim, Norway
- D Pasdeloup
  - Norwegian University of Science and Technology, Department of Circulation and Medical Imaging, Trondheim, Norway
- E Smistad
  - Norwegian University of Science and Technology, Department of Circulation and Medical Imaging, Trondheim, Norway
- S Stoelen
  - Norwegian University of Science and Technology, Department of Circulation and Medical Imaging, Trondheim, Norway
- B Grenne
  - Norwegian University of Science and Technology, Department of Circulation and Medical Imaging, Trondheim, Norway
- L Loevstakken
  - Norwegian University of Science and Technology, Department of Circulation and Medical Imaging, Trondheim, Norway
- E Holte
  - Norwegian University of Science and Technology, Department of Circulation and Medical Imaging, Trondheim, Norway
- H Dalen
  - Norwegian University of Science and Technology, Department of Circulation and Medical Imaging, Trondheim, Norway
8. Pedersen A, Smistad E, Rise TV, Dale VG, Pettersen HS, Nordmo TAS, Bouget D, Reinertsen I, Valla M. H2G-Net: A multi-resolution refinement approach for segmentation of breast cancer region in gigapixel histopathological images. Front Med (Lausanne) 2022;9:971873. PMID: 36186805; PMCID: PMC9515451; DOI: 10.3389/fmed.2022.971873. Open access.
Abstract
Over the past decades, histopathological cancer diagnostics has become more complex, and the increasing number of biopsies is a challenge for most pathology laboratories. Thus, development of automatic methods for evaluation of histopathological cancer sections would be of value. In this study, we used 624 whole slide images (WSIs) of breast cancer from a Norwegian cohort. We propose a cascaded convolutional neural network design, called H2G-Net, for segmentation of the breast cancer region in gigapixel histopathological images. The design involves a detection stage using a patch-wise method and a refinement stage using a convolutional autoencoder. To validate the design, we conducted an ablation study to assess the impact of selected components in the pipeline on tumor segmentation. Guiding segmentation using hierarchical sampling and deep heatmap refinement proved beneficial when segmenting the histopathological images. We found a significant improvement when using a refinement network for post-processing the generated tumor segmentation heatmaps. The overall best design achieved a Dice similarity coefficient of 0.933±0.069 on an independent test set of 90 WSIs. The design outperformed single-resolution approaches, such as cluster-guided, patch-wise high-resolution classification using MobileNetV2 (0.872±0.092) and a low-resolution U-Net (0.874±0.128). In addition, the design performed consistently on WSIs across all histological grades, and segmentation of a representative ×400 WSI took ~58 s using only the central processing unit. The findings demonstrate the potential of utilizing a refinement network to improve patch-wise predictions. The solution is efficient and does not require overlapping patch inference or ensembling. Furthermore, we showed that deep neural networks can be trained using a random sampling scheme that balances multiple different labels simultaneously, without the need to store patches on disk. Future work should involve more efficient patch generation and sampling, as well as improved clustering.
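The Dice similarity coefficient used above to report segmentation performance can be sketched as follows; this is a minimal pure-Python illustration of the metric, not the authors' implementation:

```python
def dice_coefficient(pred, target):
    """Dice similarity coefficient between two flat binary masks (0/1 sequences)."""
    assert len(pred) == len(target)
    intersection = sum(1 for p, t in zip(pred, target) if p and t)
    denom = sum(pred) + sum(target)
    if denom == 0:
        return 1.0  # both masks empty: defined here as perfect agreement
    return 2.0 * intersection / denom

# Example: 1 overlapping pixel out of 2 predicted and 1 true -> 2*1/(2+1)
score = dice_coefficient([1, 1, 0, 0], [1, 0, 0, 0])  # 2/3
```

In practice the masks would be full-resolution WSI label maps flattened to 1-D; the formula is unchanged.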
Affiliation(s)
- André Pedersen
- Department of Clinical and Molecular Medicine, Norwegian University of Science and Technology, Trondheim, Norway
- Clinic of Surgery, St. Olavs Hospital, Trondheim University Hospital, Trondheim, Norway
- Erik Smistad
- Department of Health Research, SINTEF Digital, Trondheim, Norway
- Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Trondheim, Norway
- Tor V. Rise
- Department of Clinical and Molecular Medicine, Norwegian University of Science and Technology, Trondheim, Norway
- Department of Pathology, St. Olavs Hospital, Trondheim University Hospital, Trondheim, Norway
- Vibeke G. Dale
- Department of Clinical and Molecular Medicine, Norwegian University of Science and Technology, Trondheim, Norway
- Department of Pathology, St. Olavs Hospital, Trondheim University Hospital, Trondheim, Norway
- Henrik S. Pettersen
- Department of Clinical and Molecular Medicine, Norwegian University of Science and Technology, Trondheim, Norway
- Department of Pathology, St. Olavs Hospital, Trondheim University Hospital, Trondheim, Norway
- Tor-Arne S. Nordmo
- Department of Computer Science, UiT The Arctic University of Norway, Tromsø, Norway
- David Bouget
- Department of Health Research, SINTEF Digital, Trondheim, Norway
- Ingerid Reinertsen
- Department of Health Research, SINTEF Digital, Trondheim, Norway
- Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Trondheim, Norway
- Marit Valla
- Department of Clinical and Molecular Medicine, Norwegian University of Science and Technology, Trondheim, Norway
- Clinic of Surgery, St. Olavs Hospital, Trondheim University Hospital, Trondheim, Norway
- Department of Pathology, St. Olavs Hospital, Trondheim University Hospital, Trondheim, Norway
- Clinic of Laboratory Medicine, St. Olavs Hospital, Trondheim University Hospital, Trondheim, Norway
9
Pettersen H, Sabo S, Pasdeloup D, Ostvik A, Smistad E, Stolen SB, Grenne B, Lovstakken L, Dalen H. The impact of real-time feedback by deep learning during echocardiographic scanning on test-retest variability of left ventricular systolic function measurements. Eur Heart J Cardiovasc Imaging 2022. [DOI: 10.1093/ehjci/jeab289.003]
Abstract
Funding Acknowledgements
Type of funding sources: Public grant(s) – National budget only. Main funding source(s): Norwegian University of Science and Technology, St. Olavs University Hospital, Central-Norway Health Authority
On behalf of
Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Trondheim, Norway
Background/introduction
Left ventricular (LV) ejection fraction (EF) is the most widely used measure of systolic cardiac function. LV foreshortening is a common problem in echocardiography and causes inaccuracies in the estimation of EF and end-diastolic volume (EDV). Guidance on LV length during scanning could improve quality but has not yet been available.
Purpose
To evaluate the impact of real-time feedback from a robust deep learning (DL) tool during echocardiographic scanning on test-retest variability in the assessment of EF and LV EDV.
Methods
Patients scheduled for echocardiography were included if they were in sinus rhythm and had no need for contrast. Three consecutive echocardiograms were performed: the first and second by two of three experienced sonographers and the third (reference) by one of four cardiologists, in random order. Data collection was divided into two periods. In the first period, sonographers were asked to provide high-quality echocardiograms for analyses of LV function, and no additional tool was provided. Thereafter, the sonographers were trained in use of the DL algorithm on 10 patients each. In the second period of inclusion, the real-time DL tool was used during scanning by the sonographer performing the second exam (Sonographer 2), while the first (Sonographer 1) had participated in the training but had no access to the tool. All exams included the standard apical views, and the reference exams also included tri-plane recordings of the LV.
All measurements were performed retrospectively, blinded to the other examinations. LV EF and EDV were measured in the four- and two-chamber views and averaged using the biplane method of discs. The coefficients of variation (CoV) for both LV EF and EDV (the two groups of sonographers vs the cardiologists) were compared before and after the introduction of DL.
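For context, the biplane method of discs and the coefficient of variation referenced above can be sketched roughly as follows. This is an illustrative outline under the standard geometric model (a stack of elliptic discs), not the study's measurement software; the function names and example values are assumptions:

```python
import math
from statistics import mean, stdev

def biplane_volume_ml(diam_4ch_mm, diam_2ch_mm, lv_length_mm):
    """Biplane method of discs: the LV is modelled as a stack of elliptic
    discs whose orthogonal diameters come from the 4- and 2-chamber views."""
    n = len(diam_4ch_mm)
    assert n == len(diam_2ch_mm)
    disc_height = lv_length_mm / n
    # Area of each elliptic disc: pi * (a/2) * (b/2)
    area_sum = sum(math.pi * (a / 2.0) * (b / 2.0)
                   for a, b in zip(diam_4ch_mm, diam_2ch_mm))
    return area_sum * disc_height / 1000.0  # mm^3 -> ml

def ejection_fraction(edv_ml, esv_ml):
    """EF (%) from end-diastolic and end-systolic volumes."""
    return 100.0 * (edv_ml - esv_ml) / edv_ml

def coefficient_of_variation(values):
    """CoV (%) = SD / mean, the test-retest metric compared in the study."""
    return 100.0 * stdev(values) / mean(values)
```

A uniform-cylinder input (equal diameters in both views) recovers the analytic cylinder volume, which is a convenient sanity check for the disc summation.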
Results
A total of 88 patients were included (45% women), 41 in period 1 and 47 in period 2. Mean (SD) age was 63 (16) years, LV EF was 53 (12) % and LV EDV was 126 (55) ml.
Main findings are shown in the table. There was no significant difference in CoV for either LV EF or EDV when using the DL tool. Compared with the first period, the sonographers not using the DL tool had poorer reproducibility of LV EDV in period 2 (p ≤0.02), while there was a trend towards reduced CoV for LV EDV for those using the algorithm (p = 0.11). With the DL algorithm, LV foreshortening was reduced by 2.4 mm (p <0.001) and, similarly, alignment of the mitral annulus was numerically improved (p = 0.10). Whether other markers of image quality changed is not known.
Conclusion
The novel real-time DL algorithm to reduce foreshortening provided more standardized recordings when used by experienced sonographers during scanning, but these changes did not result in a significant improvement in test-retest variation. Further development and investigation are needed to significantly reduce test-retest variability. (Abstract Table 1; Abstract Figure: DL_tool.)
Affiliation(s)
- H Pettersen
- Norwegian University of Science and Technology, Department of Circulation and Medical Imaging, Trondheim, Norway
- S Sabo
- Norwegian University of Science and Technology, Department of Circulation and Medical Imaging, Trondheim, Norway
- D Pasdeloup
- Norwegian University of Science and Technology, Department of Circulation and Medical Imaging, Trondheim, Norway
- A Ostvik
- Norwegian University of Science and Technology, Department of Circulation and Medical Imaging, Trondheim, Norway
- E Smistad
- Norwegian University of Science and Technology, Department of Circulation and Medical Imaging, Trondheim, Norway
- SB Stolen
- St Olavs Hospital, Clinic of Cardiology, Trondheim, Norway
- B Grenne
- Norwegian University of Science and Technology, Department of Circulation and Medical Imaging, Trondheim, Norway
- L Lovstakken
- Norwegian University of Science and Technology, Department of Circulation and Medical Imaging, Trondheim, Norway
- H Dalen
- Norwegian University of Science and Technology, Department of Circulation and Medical Imaging, Trondheim, Norway
- E Holte
- Norwegian University of Science and Technology, Department of Circulation and Medical Imaging, Trondheim, Norway
10
Saeboe S, Pettersen H, Pasdeloup D, Smistad E, Oestvik A, Stoelen S, Grenne B, Loevstakken L, Holte E, Dalen H. Real-time automatic feedback by deep learning to reduce apical foreshortening in echocardiography. Eur Heart J Cardiovasc Imaging 2022. [DOI: 10.1093/ehjci/jeab289.008]
Abstract
Funding Acknowledgements
Type of funding sources: Public Institution(s). Main funding source(s): Norwegian University of Science and Technology, St. Olavs University Hospital, Central-Norway Health Authority
Background
Left ventricular (LV) foreshortening is common in echocardiography and may impair reproducibility and the estimation of LV function. Feedback on foreshortening metrics such as LV length during scanning could improve accuracy; however, such a tool has not been available.
Purpose
To evaluate, for experienced sonographers, the effect of feedback on LV length measured in real time during imaging using a robust deep learning (DL)-based tool.
Methods
Consecutive patients (n = 108) with mixed cardiac pathology were included during two separate periods. Each patient underwent three echocardiograms. The first and second examinations were performed by two (of three) randomized sonographers and the third exam by one (of four) randomized cardiologists. All examinations included the three standard apical views, and the cardiologists' exams included tri-plane recordings for reference. The data collection was divided into two periods. In the first period, the sonographers were asked to provide high-quality echocardiograms for analyses of LV function. Subsequently, sonographers were trained in use of the real-time DL tool on 10 patients each. In period two, the algorithm was used during scanning by the sonographer performing the second exam. One expert reader measured LV length (subendocardial apex to mitral annular plane) in all exams. The cardiologists' tri-plane recordings were analyzed and used as reference. LV foreshortening was calculated at end-diastole (reference minus the operator's LV length). Each exam was classified as foreshortened if the difference was ≥4 mm.
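The foreshortening criterion described in the Methods (operator LV length compared with the tri-plane reference, flagged at ≥4 mm) amounts to a simple rule; a hypothetical sketch, with illustrative lengths:

```python
def foreshortening_mm(reference_length_mm, operator_length_mm):
    """End-diastolic foreshortening: reference LV length minus the operator's."""
    return reference_length_mm - operator_length_mm

def is_foreshortened(reference_length_mm, operator_length_mm, threshold_mm=4.0):
    """An exam is classified as foreshortened when the operator's LV length
    falls short of the tri-plane reference by at least threshold_mm."""
    return foreshortening_mm(reference_length_mm, operator_length_mm) >= threshold_mm

# Example: a 95 mm reference vs a 90 mm acquisition -> 5 mm, foreshortened
flag = is_foreshortened(95.0, 90.0)  # True
```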
Results
After excluding those not in sinus rhythm (n = 9) and those needing contrast echocardiography (n = 11), 88 patients (45% women) were included (41 in period 1 and 47 in period 2). Mean (SD) age was 63 (16) years. Main findings are shown in the Table. LV length measured by both sonographer groups was significantly foreshortened in period 1 (mean 2.6 and 3.5 mm), and both groups reduced LV foreshortening in period 2 (both p ≤0.02). Improvement was greatest for the sonographers using the real-time DL algorithm: their recordings were not significantly foreshortened compared with the reference (p = 0.53) and were less foreshortened than those of Sonographer 1 (p = 0.001). Similarly, sonographers had more foreshortened examinations than the cardiologists in period 1 (sonographer groups: 15 (37%) and 13 (32%) vs cardiologists: 5 (12%), both p ≤0.03). In period 2, there was no difference in the proportion of foreshortened examinations between the cardiologists and Sonographer 2 (0 vs 2 (4%), p = 0.5), while Sonographer 1 foreshortened more often (11 (23%), p ≤0.003).
Conclusion
Feedback on LV length during imaging significantly reduced foreshortening in echocardiographic acquisitions by sonographers and has the potential to improve image accuracy even in the hands of experienced operators. (Abstract Table; Abstract Figure: Deep Learning foreshortening application.)
Affiliation(s)
- S Saeboe
- Norwegian University of Science and Technology, Department of Circulation and Medical Imaging, Trondheim, Norway
- H Pettersen
- Norwegian University of Science and Technology, Department of Circulation and Medical Imaging, Trondheim, Norway
- D Pasdeloup
- Norwegian University of Science and Technology, Department of Circulation and Medical Imaging, Trondheim, Norway
- E Smistad
- Norwegian University of Science and Technology, Department of Circulation and Medical Imaging, Trondheim, Norway
- A Oestvik
- Norwegian University of Science and Technology, Department of Circulation and Medical Imaging, Trondheim, Norway
- S Stoelen
- Norwegian University of Science and Technology, Department of Circulation and Medical Imaging, Trondheim, Norway
- B Grenne
- Norwegian University of Science and Technology, Department of Circulation and Medical Imaging, Trondheim, Norway
- L Loevstakken
- Norwegian University of Science and Technology, Department of Circulation and Medical Imaging, Trondheim, Norway
- E Holte
- Norwegian University of Science and Technology, Department of Circulation and Medical Imaging, Trondheim, Norway
- H Dalen
- Norwegian University of Science and Technology, Department of Circulation and Medical Imaging, Trondheim, Norway
11
Fernandes JF, Loncaric F, Marciniak M, Gilbert A, Smistad E, Lovstakken L, Mcleod K, Sitges M, Lamata P. Automatic measurement of LV wall thickness from 2D cardiac echocardiography. Eur Heart J Cardiovasc Imaging 2022. [DOI: 10.1093/ehjci/jeab289.150]
Abstract
Funding Acknowledgements
Type of funding sources: Public grant(s) – EU funding. Main funding source(s): PIC from the European Union's Horizon 2020 Marie Skłodowska-Curie Actions ITN
Background
The wall thickness of the left ventricle (LV) is an important parameter in the diagnosis of hypertension and more specifically in hypertrophic cardiomyopathy. A user-dependent manual assessment of distances on 2D echocardiographic images is the current clinical gold-standard.
Purpose
To automate LV wall thickness measurements in 2D echocardiography in order to improve robustness and reduce the time of clinical reporting where wall thickness is required, such as in hypertrophy and the presence of basal septal hypertrophy (BSH) (1).
Methods
A dataset of 4-chamber (4CH) echocardiograms from 118 patients with a diagnosis of hypertension (2) was used for this study. The images were segmented automatically (3), extracting the blood pool and the myocardium. Based on the curvature of the complete myocardial contour, the valve annular regions were removed, leaving the endocardial and epicardial walls as independent structures. The wall thickness along the myocardium was calculated as the distance from each endocardial border pixel to the closest epicardial point (see Figure 1). A high-pass Gaussian filter was applied to remove high-frequency noise. Finally, the basal-to-mid septal wall thickness ratio that defines BSH (ratio ≥ 1.4) was computed as the maximum of the basal-septal segment divided by the minimum of the mid-septal segment. To validate the method for BSH detection, the septal wall thickness ratio was carefully measured by a clinical expert following the guidelines (2). Statistical agreement was assessed via linear correlation and Bland-Altman analysis.
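The per-pixel thickness measure described above (distance from each endocardial point to the nearest epicardial point) and the BSH ratio can be sketched as below. The contours and segment values are illustrative assumptions; a real pipeline would work on image coordinates and use a spatial index rather than brute force:

```python
import math

def wall_thickness(endo_points, epi_points):
    """For each endocardial contour point (x, y), the distance to the
    closest epicardial point (brute-force nearest neighbour)."""
    return [min(math.dist(e, p) for p in epi_points) for e in endo_points]

def bsh_ratio(basal_septal_mm, mid_septal_mm):
    """Basal-to-mid septal ratio defining BSH (ratio >= 1.4): maximum of the
    basal-septal segment over the minimum of the mid-septal segment."""
    return max(basal_septal_mm) / min(mid_septal_mm)

# Example: the endocardial point (0, 0) is 5 units from its nearest epicardial point
thickness = wall_thickness([(0, 0)], [(3, 4), (10, 0)])  # [5.0]
```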
Results
The automatic assessment of LV wall thickness along the myocardium is feasible in 2D echocardiography. The septal ratio showed excellent agreement with manual measurements (R2 = 0.94, bias = -0.01, see Figure 2), leading to detection of BSH in n = 19 vs n = 18 detected manually (1 false negative and 2 false positives). In comparison with the intra- and inter-observer variabilities of 12% and 42%, respectively, in the manual measurement (4), the automatic method had no variability for a given image acquisition.
Conclusions
The automatic measurement of myocardial wall thickness from 2D echocardiographic images is accurate and reproducible. Implementation of the methodology in clinical practice has the potential to improve and automate the assessment of hypertrophic cardiac conditions. (Abstract Figures: Pipeline for automatic measurement of WT; Agreement of BSH WT ratio.)
Affiliation(s)
- JF Fernandes
- King's College London, School of Biomedical Engineering and Imaging Sciences, London, United Kingdom of Great Britain & Northern Ireland
- F Loncaric
- Institute of Biomedical Research August Pi Sunyer (IDIBAPS), Barcelona, Spain
- M Marciniak
- King's College London, School of Biomedical Engineering and Imaging Sciences, London, United Kingdom of Great Britain & Northern Ireland
- A Gilbert
- GE Healthcare, GE Vingmed Ultrasound, Horten, Norway
- E Smistad
- Norwegian University of Science and Technology, Department of Circulation and Medical Imaging, Trondheim, Norway
- L Lovstakken
- Norwegian University of Science and Technology, Department of Circulation and Medical Imaging, Trondheim, Norway
- K Mcleod
- GE Healthcare, GE Vingmed Ultrasound, Horten, Norway
- M Sitges
- Barcelona Hospital Clinic, Cardiovascular Institute, Barcelona, Spain
- P Lamata
- King's College London, School of Biomedical Engineering and Imaging Sciences, London, United Kingdom of Great Britain & Northern Ireland
12
Pettersen HS, Belevich I, Røyset ES, Smistad E, Simpson MR, Jokitalo E, Reinertsen I, Bakke I, Pedersen A. Code-Free Development and Deployment of Deep Segmentation Models for Digital Pathology. Front Med (Lausanne) 2022; 8:816281. [PMID: 35155486 PMCID: PMC8829033 DOI: 10.3389/fmed.2021.816281] [Received: 11/16/2021] [Accepted: 12/24/2021]
Abstract
Application of deep learning on histopathological whole slide images (WSIs) holds promise for improving diagnostic efficiency and reproducibility but is largely dependent on the ability to write computer code or purchase commercial solutions. We present a code-free pipeline utilizing free-to-use, open-source software (QuPath, DeepMIB, and FastPathology) for creating and deploying deep learning-based segmentation models for computational pathology. We demonstrate the pipeline on a use case of separating epithelium from stroma in colonic mucosa. A dataset of 251 annotated WSIs, comprising 140 hematoxylin-eosin (HE)-stained and 111 CD3-immunostained colon biopsy WSIs, was developed through active learning using the pipeline. On a hold-out test set of 36 HE- and 21 CD3-stained WSIs, mean intersection over union scores of 95.5% and 95.3%, respectively, were achieved for epithelium segmentation. We demonstrate pathologist-level segmentation accuracy and clinically acceptable runtime performance, and show that pathologists without programming experience can create near state-of-the-art segmentation solutions for histopathological WSIs using only free-to-use software. The study further demonstrates the strength of open-source solutions in their ability to create generalizable, open pipelines, from which trained models and predictions can be seamlessly exported in open formats and thereby used in external solutions. All scripts, trained models, a video tutorial, and the full dataset of 251 WSIs with ~31k epithelium annotations are made openly available at https://github.com/andreped/NoCodeSeg to accelerate research in the field.
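The intersection-over-union metric reported above can be sketched as follows; this is a minimal illustration of the metric itself, not tied to the QuPath/FastPathology tooling:

```python
def iou(pred, target):
    """Intersection over union of two flat binary masks (0/1 sequences)."""
    intersection = sum(1 for p, t in zip(pred, target) if p and t)
    union = sum(1 for p, t in zip(pred, target) if p or t)
    return intersection / union if union else 1.0

def mean_iou(mask_pairs):
    """Mean IoU over (prediction, ground-truth) pairs, one pair per WSI."""
    scores = [iou(p, t) for p, t in mask_pairs]
    return sum(scores) / len(scores)
```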
Affiliation(s)
- Henrik Sahlin Pettersen
- Department of Pathology, St. Olavs Hospital, Trondheim University Hospital, Trondheim, Norway
- Department of Clinical and Molecular Medicine, Faculty of Medicine and Health Sciences, NTNU - Norwegian University of Science and Technology, Trondheim, Norway
- Clinic of Laboratory Medicine, St. Olavs Hospital, Trondheim University Hospital, Trondheim, Norway
- Ilya Belevich
- Electron Microscopy Unit, Institute of Biotechnology, Helsinki Institute of Life Science, University of Helsinki, Helsinki, Finland
- Elin Synnøve Røyset
- Department of Pathology, St. Olavs Hospital, Trondheim University Hospital, Trondheim, Norway
- Department of Clinical and Molecular Medicine, Faculty of Medicine and Health Sciences, NTNU - Norwegian University of Science and Technology, Trondheim, Norway
- Clinic of Laboratory Medicine, St. Olavs Hospital, Trondheim University Hospital, Trondheim, Norway
- Erik Smistad
- Department of Health Research, SINTEF Digital, Trondheim, Norway
- Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, NTNU - Norwegian University of Science and Technology, Trondheim, Norway
- Melanie Rae Simpson
- Department of Public Health and Nursing, Faculty of Medicine and Health Sciences, NTNU - Norwegian University of Science and Technology, Trondheim, Norway
- The Clinical Research Unit for Central Norway, Trondheim, Norway
- Eija Jokitalo
- Electron Microscopy Unit, Institute of Biotechnology, Helsinki Institute of Life Science, University of Helsinki, Helsinki, Finland
- Ingerid Reinertsen
- Department of Health Research, SINTEF Digital, Trondheim, Norway
- Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, NTNU - Norwegian University of Science and Technology, Trondheim, Norway
- Ingunn Bakke
- Department of Clinical and Molecular Medicine, Faculty of Medicine and Health Sciences, NTNU - Norwegian University of Science and Technology, Trondheim, Norway
- Clinic of Laboratory Medicine, St. Olavs Hospital, Trondheim University Hospital, Trondheim, Norway
- André Pedersen
- Department of Clinical and Molecular Medicine, Faculty of Medicine and Health Sciences, NTNU - Norwegian University of Science and Technology, Trondheim, Norway
- Department of Health Research, SINTEF Digital, Trondheim, Norway
- The Cancer Foundation, St. Olavs Hospital, Trondheim University Hospital, Trondheim, Norway
13
Salte IM, Østvik A, Smistad E, Melichova D, Nguyen TM, Karlsen S, Brunvand H, Haugaa KH, Edvardsen T, Lovstakken L, Grenne B. Artificial Intelligence for Automatic Measurement of Left Ventricular Strain in Echocardiography. JACC Cardiovasc Imaging 2021; 14:1918-1928. [PMID: 34147442 DOI: 10.1016/j.jcmg.2021.04.018] [Received: 01/20/2021] [Revised: 03/26/2021] [Accepted: 04/15/2021]
Abstract
OBJECTIVES This study sought to examine whether fully automated measurements of global longitudinal strain (GLS) using a novel motion estimation technology based on deep learning and artificial intelligence (AI) are feasible and comparable with a conventional speckle-tracking application. BACKGROUND GLS is an important parameter when evaluating left ventricular function. However, analyses of GLS are time consuming and demand expertise, and are thus underused in clinical practice. METHODS In this study, 200 patients with a wide range of left ventricular (LV) function were included. Three standard apical cine-loops were analyzed using the AI pipeline. The AI method measured GLS and was compared with a commercially available semiautomatic speckle-tracking software (EchoPAC v202, GE Healthcare, Chicago, Illinois). RESULTS The AI method succeeded in both correctly classifying all 3 standard apical views and timing cardiac events in 89% of patients. Furthermore, the method successfully performed automatic segmentation, motion estimation, and measurement of GLS in all examinations, across different cardiac pathologies and throughout the spectrum of LV function. GLS was -12.0 ± 4.1% for the AI method and -13.5 ± 5.3% for the reference method. Bias was -1.4 ± 0.3% (95% limits of agreement: 2.3 to -5.1), which is comparable with intervendor studies. The AI method eliminated measurement variability, and a complete GLS analysis was processed within 15 s. CONCLUSIONS Throughout the range of LV function, this novel AI method succeeds, without any operator input, in automatically identifying the 3 standard apical views, timing cardiac events, tracing the myocardium, performing motion estimation, and measuring GLS. Fully automated measurements based on AI could facilitate the clinical implementation of GLS.
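The agreement statistics quoted above (a bias with 95% limits of agreement) follow the Bland-Altman convention; a minimal sketch, with illustrative paired measurements:

```python
from statistics import mean, stdev

def bland_altman(method_a, method_b):
    """Bias (mean paired difference) and 95% limits of agreement
    (bias +/- 1.96 * SD of the differences) between two methods."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = mean(diffs)
    half_width = 1.96 * stdev(diffs)
    return bias, (bias - half_width, bias + half_width)
```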
Affiliation(s)
- Ivar M Salte
- Department of Medicine, Hospital of Southern Norway, Kristiansand, Norway; Faculty of Medicine, University of Oslo, Norway
- Andreas Østvik
- Centre for Innovative Ultrasound Solutions and Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Trondheim, Norway
- Erik Smistad
- Centre for Innovative Ultrasound Solutions and Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Trondheim, Norway
- Daniela Melichova
- Faculty of Medicine, University of Oslo, Norway; Department of Medicine, Hospital of Southern Norway, Arendal, Norway
- Thuy Mi Nguyen
- Department of Medicine, Hospital of Southern Norway, Kristiansand, Norway; Faculty of Medicine, University of Oslo, Norway
- Sigve Karlsen
- Department of Medicine, Hospital of Southern Norway, Arendal, Norway
- Harald Brunvand
- Department of Medicine, Hospital of Southern Norway, Arendal, Norway
- Kristina H Haugaa
- Faculty of Medicine, University of Oslo, Norway; Department of Cardiology, Oslo University Hospital, Rikshospitalet, Oslo, Norway
- Thor Edvardsen
- Faculty of Medicine, University of Oslo, Norway; Department of Cardiology, Oslo University Hospital, Rikshospitalet, Oslo, Norway
- Lasse Lovstakken
- Centre for Innovative Ultrasound Solutions and Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Trondheim, Norway
- Bjørnar Grenne
- Centre for Innovative Ultrasound Solutions and Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Trondheim, Norway; Clinic of Cardiology, St. Olavs Hospital, Trondheim, Norway.
14
Ostvik A, Salte IM, Smistad E, Nguyen TM, Melichova D, Brunvand H, Haugaa K, Edvardsen T, Grenne B, Lovstakken L. Myocardial Function Imaging in Echocardiography Using Deep Learning. IEEE Trans Med Imaging 2021; 40:1340-1351. [PMID: 33493114 DOI: 10.1109/tmi.2021.3054566]
Abstract
Deformation imaging in echocardiography has been shown to have better diagnostic and prognostic value than conventional anatomical measures such as ejection fraction. However, despite clinical availability and demonstrated efficacy, everyday clinical use remains limited at many hospitals. The reasons are complex, but practical robustness has been questioned, and a large inter-vendor variability has been demonstrated. In this work, we propose a novel deep learning-based framework for motion estimation in echocardiography and use it to fully automate myocardial function imaging. A motion estimator was developed based on a PWC-Net architecture, which achieved an average end-point error of (0.06±0.04) mm per frame using simulated data from an open-access database, on par with or better than the previously reported state of the art. We further demonstrate unique adaptability to image artifacts such as signal dropouts, made possible using trained models that incorporate relevant image augmentations. Further, a fully automatic pipeline consisting of cardiac view classification, event detection, myocardial segmentation, and motion estimation was developed and used to estimate left ventricular longitudinal strain in vivo. The method showed promise by achieving a mean deviation of (-0.7±1.6)% compared with a semi-automatic commercial solution for N=30 patients with relevant disease, within the expected limits of agreement. We thus believe that learning-based motion estimation can facilitate extended use of strain imaging in clinical practice.
15
Leclerc S, Smistad E, Ostvik A, Cervenansky F, Espinosa F, Espeland T, Rye Berg EA, Belhamissi M, Israilov S, Grenier T, Lartizien C, Jodoin PM, Lovstakken L, Bernard O. LU-Net: A Multistage Attention Network to Improve the Robustness of Segmentation of Left Ventricular Structures in 2-D Echocardiography. IEEE Trans Ultrason Ferroelectr Freq Control 2020; 67:2519-2530. [PMID: 32746187 DOI: 10.1109/tuffc.2020.3003403]
Abstract
Segmentation of cardiac structures is one of the fundamental steps in estimating volumetric indices of the heart. This step is still performed semiautomatically in clinical routine and is, thus, prone to interobserver and intraobserver variabilities. Recent studies have shown that deep learning has the potential to perform fully automatic segmentation. However, the current best solutions still suffer from a lack of robustness in terms of accuracy and number of outliers. The goal of this work is to introduce a novel network designed to improve the overall segmentation accuracy of left ventricular structures (endocardial and epicardial borders) while enhancing the estimation of the corresponding clinical indices and reducing the number of outliers. This network is based on a multistage framework in which the localization and segmentation steps are optimized jointly through an end-to-end scheme. Results obtained on a large open-access data set show that our method outperforms the current best-performing deep learning solution with a lighter architecture and achieved an overall segmentation error lower than the intraobserver variability for the epicardial border (i.e., on average a mean absolute error of 1.5 mm and a Hausdorff distance of 5.1 mm) with 11% of outliers. Moreover, we demonstrate that our method can closely reproduce the expert analysis for the end-diastolic and end-systolic left ventricular volumes, with a mean correlation of 0.96 and a mean absolute error of 7.6 ml. Concerning the ejection fraction of the left ventricle, results are more contrasted, with a mean correlation coefficient of 0.83 and a mean absolute error of 5.0%, producing scores that are slightly below the intraobserver margin. Based on this observation, areas for improvement are suggested.
16
Smistad E, Ostvik A, Salte IM, Melichova D, Nguyen TM, Haugaa K, Brunvand H, Edvardsen T, Leclerc S, Bernard O, Grenne B, Lovstakken L. Real-Time Automatic Ejection Fraction and Foreshortening Detection Using Deep Learning. IEEE Trans Ultrason Ferroelectr Freq Control 2020; 67:2595-2604. [PMID: 32175861 DOI: 10.1109/tuffc.2020.2981037] [Citation(s) in RCA: 29] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Volume and ejection fraction (EF) measurements of the left ventricle (LV) in 2-D echocardiography are associated with high uncertainty, not only due to interobserver variability of the manual measurement, but also due to ultrasound acquisition errors such as apical foreshortening. In this work, a real-time and fully automated EF measurement and foreshortening detection method is proposed. The method uses several deep learning components, such as view classification, cardiac cycle timing, segmentation, and landmark extraction, to measure the amount of foreshortening, LV volume, and EF. A dataset of 500 patients from an outpatient clinic was used to train the deep neural networks, while a separate dataset of 100 patients from another clinic was used for evaluation, where LV volume and EF were measured by an expert using clinical protocols and software. A quantitative analysis using 3-D ultrasound showed that EF is considerably affected by apical foreshortening, and that the proposed method can detect and quantify the amount of apical foreshortening. The bias and standard deviation of the automatic EF measurements were -3.6 ± 8.1%, while the mean absolute difference was measured at 7.2%, all of which are within the interobserver variability and comparable with related studies. The proposed real-time pipeline allows for a continuous acquisition and measurement workflow without user interaction, and has the potential to significantly reduce the time spent on analysis and the measurement error due to foreshortening, while providing quantitative volume measurements in the everyday echo lab.
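The EF value reported above follows the standard volumetric definition, EF = (EDV - ESV) / EDV x 100. A minimal sketch (function and variable names are illustrative, not from the paper's pipeline):

```python
def ejection_fraction(edv_ml: float, esv_ml: float) -> float:
    """Left ventricular ejection fraction (%) from end-diastolic (EDV)
    and end-systolic (ESV) volumes in millilitres."""
    if edv_ml <= 0:
        raise ValueError("EDV must be positive")
    return 100.0 * (edv_ml - esv_ml) / edv_ml

# For example, EDV = 120 ml and ESV = 60 ml give an EF of 50%.
```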
17
Salte IM, Oestvik A, Smistad E, Melichova D, Nguyen TM, Brunvand H, Edvardsen T, Loevstakken L, Grenne B. 545 Deep learning/artificial intelligence for automatic measurement of global longitudinal strain by echocardiography. Eur Heart J Cardiovasc Imaging 2020. [DOI: 10.1093/ehjci/jez319.279] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
Funding Acknowledgements
The Norwegian Health Association, the South-Eastern Norway Regional Health Authority, and the national program for clinical therapy research (KLINBEFORSK).
Background
Global longitudinal strain (GLS) by echocardiography has incremental prognostic value compared with left ventricular (LV) ejection fraction in patients with acute myocardial infarction and heart failure, and provides more reproducible measurements of LV function. Recent advances in machine learning for image analysis open the possibility of robust, fully automated tracing of the LV and measurement of GLS without any operator input. This could make real-time GLS possible and remove inter-reader variability, resulting in saved time and improved test-retest reliability. The aim of the present study was to investigate how measurements by this novel automatic method compare with conventional speckle-tracking analyses of GLS.
Methods
100 transthoracic echocardiographic examinations were included from a clinical database of patients with acute myocardial infarction or de-novo heart failure. Examinations were included consecutively and regardless of image quality. Simpson biplane LV ejection fraction ranged from 7 to 70%. Images of the three standard apical planes from each examination were analysed using our novel, fully automated GLS method based on deep learning. The automated GLS measurements were compared with conventional speckle-tracking GLS measurements of the same acquisitions using vendor-specific format and software (EchoPAC, GE Healthcare), performed by a single experienced observer.
Results
GLS was -11.6 ± 4.5% and -12.8 ± 5.0% for the deep learning method and the conventional method, respectively. Bland-Altman analysis showed a bias of -0.7 ± 1.9% and 95% limits of agreement of -4.6 to 3.1%. No clear value-dependent bias was found by visual inspection (Figure A). Feasibility of GLS measurement was 93% for the deep learning-based method and 99% for the conventional method. The limits of agreement found in our study are comparable to the findings of the intervendor comparison study by the EACVI/ASE/Industry Task Force to standardize deformation imaging.
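The Bland-Altman figures quoted above (bias and 95% limits of agreement) are computed from the paired differences of the two methods. A minimal sketch, assuming the standard bias ± 1.96 SD interval (function name is illustrative):

```python
import statistics

def bland_altman(method_a, method_b):
    """Bias (mean of paired differences) and 95% limits of agreement
    (bias +/- 1.96 * SD of the differences) for two equal-length
    series of paired measurements."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample standard deviation
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```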
Conclusion
This novel deep learning based method succeeds without any operator input to automatically identify and classify the three apical standard views, trace the myocardium, perform motion estimation and measure global longitudinal strain. This could further facilitate the clinical use of GLS as an important tool for enhancing clinical decision-making.
Abstract 545 Figure.
Affiliation(s)
- I M Salte, Hospital of Southern Norway, Department of Medicine, Kristiansand, Norway
- A Oestvik, Norwegian University of Science and Technology, Department of Circulation and Medical Imaging, Trondheim, Norway
- E Smistad, Norwegian University of Science and Technology, Department of Circulation and Medical Imaging, Trondheim, Norway
- D Melichova, Hospital of Southern Norway, Department of Medicine, Arendal, Norway
- T M Nguyen, Hospital of Southern Norway, Department of Medicine, Kristiansand, Norway
- H Brunvand, Hospital of Southern Norway, Department of Medicine, Arendal, Norway
- T Edvardsen, Oslo University Hospital, Department of Cardiology, Oslo, Norway
- L Loevstakken, Norwegian University of Science and Technology, Department of Circulation and Medical Imaging, Trondheim, Norway
- B Grenne, St Olavs Hospital, Clinic of Cardiology, Trondheim, Norway
18
Leclerc S, Smistad E, Pedrosa J, Ostvik A, Cervenansky F, Espinosa F, Espeland T, Berg EAR, Jodoin PM, Grenier T, Lartizien C, Dhooge J, Lovstakken L, Bernard O. Deep Learning for Segmentation Using an Open Large-Scale Dataset in 2D Echocardiography. IEEE Trans Med Imaging 2019; 38:2198-2210. [PMID: 30802851 DOI: 10.1109/tmi.2019.2900516] [Citation(s) in RCA: 158] [Impact Index Per Article: 31.6] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Delineation of the cardiac structures from 2D echocardiographic images is a common clinical task to establish a diagnosis. Over the past decades, the automation of this task has been the subject of intense research. In this paper, we evaluate how far state-of-the-art encoder-decoder deep convolutional neural network methods can go at assessing 2D echocardiographic images, i.e., segmenting cardiac structures and estimating clinical indices, on a dataset specifically designed for this objective. We therefore introduce the cardiac acquisitions for multi-structure ultrasound segmentation dataset, the largest publicly available and fully annotated dataset for the purpose of echocardiographic assessment. The dataset contains two- and four-chamber acquisitions from 500 patients, with reference measurements from one cardiologist on the full dataset and from three cardiologists on a fold of 50 patients. Results show that encoder-decoder-based architectures outperform state-of-the-art non-deep learning methods and faithfully reproduce the expert analysis for the end-diastolic and end-systolic left ventricular volumes, with a mean correlation of 0.95 and a mean absolute error of 9.5 ml. Concerning the ejection fraction of the left ventricle, results are more contrasted, with a mean correlation coefficient of 0.80 and a mean absolute error of 5.6%. Although these results are below the inter-observer scores, they remain slightly worse than the intra-observer ones. Based on this observation, areas for improvement are defined, which open the door for accurate and fully automatic analysis of 2D echocardiographic images.
19
Østvik A, Smistad E, Aase SA, Haugen BO, Lovstakken L. Real-Time Standard View Classification in Transthoracic Echocardiography Using Convolutional Neural Networks. Ultrasound Med Biol 2019; 45:374-384. [PMID: 30470606 DOI: 10.1016/j.ultrasmedbio.2018.07.024] [Citation(s) in RCA: 46] [Impact Index Per Article: 9.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/15/2018] [Revised: 06/22/2018] [Accepted: 07/23/2018] [Indexed: 06/09/2023]
Abstract
Transthoracic echocardiography examinations are usually performed according to a protocol comprising different probe postures providing standard views of the heart. These are used as a basis when assessing cardiac function, and it is essential that the morphophysiological representations are correct. Clinical analysis is often initialized with the current view, and automatic classification can thus be useful in improving today's workflow. In this article, convolutional neural networks (CNNs) are used to create classification models predicting up to seven different cardiac views. Data sets of 2-D ultrasound acquired from studies totaling more than 500 patients and 7000 videos were included. State-of-the-art accuracies of 98.3% ± 0.6% and 98.9% ± 0.6% on single frames and sequences, respectively, and real-time performance with 4.4 ± 0.3 ms per frame were achieved. Further, it was found that CNNs have the potential for use in automatic multiplanar reformatting and orientation guidance. Using 3-D data to train models applicable for 2-D classification, we achieved a median deviation of 4° ± 3° from the optimal orientations.
Collapse
Affiliation(s)
- Andreas Østvik, Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, Trondheim, Norway
- Erik Smistad, Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, Trondheim, Norway; SINTEF Medical Technology, Trondheim, Norway
- Bjørn Olav Haugen, Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, Trondheim, Norway
- Lasse Lovstakken, Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, Trondheim, Norway
20
Smistad E, Johansen KF, Iversen DH, Reinertsen I. Highlighting nerves and blood vessels for ultrasound-guided axillary nerve block procedures using neural networks. J Med Imaging (Bellingham) 2018; 5:044004. [PMID: 30840734 PMCID: PMC6228309 DOI: 10.1117/1.jmi.5.4.044004] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2018] [Accepted: 10/23/2018] [Indexed: 11/14/2022] Open
Abstract
Ultrasound images acquired during axillary nerve block procedures can be difficult to interpret. Highlighting the important structures, such as nerves and blood vessels, may be useful for the training of inexperienced users. A deep convolutional neural network is used to identify the musculocutaneous, median, ulnar, and radial nerves, as well as the blood vessels, in ultrasound images. A dataset of 49 subjects is collected and used for training and evaluation of the neural network. Several image augmentations, such as rotation, elastic deformation, shadows, and horizontal flipping, are tested. The neural network is evaluated using cross-validation. The results showed that the blood vessels were the easiest to detect, with a precision and recall above 0.8. Among the nerves, the median and ulnar nerves were the easiest to detect, with F-scores of 0.73 and 0.62, respectively. The radial nerve was the hardest to detect, with an F-score of 0.39. Image augmentations proved effective, increasing the F-score by as much as 0.13. A Wilcoxon signed-rank test showed that the improvements from rotation, shadow, and elastic deformation augmentations were significant, and that the combination of all augmentations gave the best result. The results are promising; however, there is more work to be done, as the precision and recall are still too low. A larger dataset, in combination with anatomical and temporal models, is most likely needed to improve accuracy.
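The precision, recall, and F-score values reported above are standard detection metrics; the F-score is the harmonic mean of precision and recall. A minimal sketch from raw detection counts (function name is illustrative):

```python
def f_score(tp: int, fp: int, fn: int) -> float:
    """F1-score from true positives, false positives, and false
    negatives: the harmonic mean of precision (tp / (tp + fp))
    and recall (tp / (tp + fn))."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```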
Affiliation(s)
- Erik Smistad, SINTEF Medical Technology, Trondheim, Norway; Norwegian University of Science and Technology, Department of Circulation and Medical Imaging, Trondheim, Norway
- Daniel Høyer Iversen, SINTEF Medical Technology, Trondheim, Norway; Norwegian University of Science and Technology, Department of Circulation and Medical Imaging, Trondheim, Norway
21
Smistad E, Iversen DH, Leidig L, Lervik Bakeng JB, Johansen KF, Lindseth F. Automatic Segmentation and Probe Guidance for Real-Time Assistance of Ultrasound-Guided Femoral Nerve Blocks. Ultrasound Med Biol 2017; 43:218-226. [PMID: 27727021 DOI: 10.1016/j.ultrasmedbio.2016.08.036] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/17/2016] [Revised: 05/13/2016] [Accepted: 08/30/2016] [Indexed: 06/06/2023]
Abstract
Ultrasound-guided regional anesthesia can be challenging, especially for inexperienced physicians. The goal of the proposed methods is to create a system that can assist a user in performing ultrasound-guided femoral nerve blocks. The system indicates in which direction the user should move the ultrasound probe to investigate the region of interest and to reach the target site for needle insertion. Additionally, the system provides automatic real-time segmentation of the femoral artery, the femoral nerve and the two layers fascia lata and fascia iliaca. This aids in interpretation of the 2-D ultrasound images and the surrounding anatomy in 3-D. The system was evaluated on 24 ultrasound acquisitions of both legs from six subjects. The estimated target site for needle insertion and the segmentations were compared with those of an expert anesthesiologist. Average target distance was 8.5 mm with a standard deviation of 2.5 mm. The mean absolute differences of the femoral nerve and the fascia segmentations were about 1-3 mm.
Affiliation(s)
- Erik Smistad, SINTEF Medical Technology, Trondheim, Norway; Norwegian University of Science and Technology, Trondheim, Norway
- Daniel Høyer Iversen, SINTEF Medical Technology, Trondheim, Norway; Norwegian University of Science and Technology, Trondheim, Norway
- Linda Leidig, Norwegian University of Science and Technology, Trondheim, Norway
- Frank Lindseth, SINTEF Medical Technology, Trondheim, Norway; Norwegian University of Science and Technology, Trondheim, Norway
22
Bernard O, Bosch JG, Heyde B, Alessandrini M, Barbosa D, Camarasu-Pop S, Cervenansky F, Valette S, Mirea O, Bernier M, Jodoin PM, Domingos JS, Stebbing RV, Keraudren K, Oktay O, Caballero J, Shi W, Rueckert D, Milletari F, Ahmadi SA, Smistad E, Lindseth F, van Stralen M, Wang C, Smedby O, Donal E, Monaghan M, Papachristidis A, Geleijnse ML, Galli E, D'hooge J. Standardized Evaluation System for Left Ventricular Segmentation Algorithms in 3D Echocardiography. IEEE Trans Med Imaging 2016; 35:967-977. [PMID: 26625409 DOI: 10.1109/tmi.2015.2503890] [Citation(s) in RCA: 46] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
Real-time 3D echocardiography (RT3DE) has been proven to be an accurate tool for left ventricular (LV) volume assessment. However, identification of the LV endocardium remains a challenging task, mainly because of the low tissue/blood contrast of the images combined with typical artifacts. Several semi- and fully automatic algorithms have been proposed for segmenting the endocardium in RT3DE data in order to extract relevant clinical indices, but a systematic and fair comparison between such methods has so far been impossible due to the lack of a publicly available common database. Here, we introduce a standardized evaluation framework to reliably evaluate and compare the performance of algorithms developed to segment the LV border in RT3DE. A database consisting of 45 multivendor cardiac ultrasound recordings, acquired at different centers with corresponding reference measurements from three experts, is made available. The algorithms from nine research groups were quantitatively evaluated and compared using the proposed online platform. The results showed that the best methods produce promising results with respect to the experts' measurements for the extraction of clinical indices, and that they offer good segmentation precision in terms of mean distance error in the context of the experts' variability range. The platform remains open for new submissions.
23
Smistad E, Lindseth F. Real-Time Automatic Artery Segmentation, Reconstruction and Registration for Ultrasound-Guided Regional Anaesthesia of the Femoral Nerve. IEEE Trans Med Imaging 2016; 35:752-761. [PMID: 26513782 DOI: 10.1109/tmi.2015.2494160] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
The goal is to create an assistant for ultrasound-guided femoral nerve block. By segmenting and visualizing important structures such as the femoral artery, we hope to improve the success of these procedures. This article is the first step towards this goal and presents novel real-time methods for identifying and reconstructing the femoral artery, and for registering a model of the surrounding anatomy to the ultrasound images. The femoral artery is modelled as an ellipse. The artery is first detected by a novel algorithm which initializes the artery tracking. This algorithm is completely automatic and requires no user interaction. Artery tracking is achieved with a Kalman filter. The 3D artery is reconstructed in real time with a novel algorithm and a tracked ultrasound probe. A mesh model of the surrounding anatomy was created from a CT dataset. Registration of this model is achieved by landmark registration using the centerpoints from the artery tracking and the femoral artery centerline of the model. The artery detection method was able to automatically detect the femoral artery and initialize the tracking in all 48 ultrasound sequences. The tracking algorithm achieved an average Dice similarity coefficient of 0.91, an absolute distance of 0.33 mm, and a Hausdorff distance of 1.05 mm. The mean registration error was 2.7 mm, while the average maximum error was 12.4 mm. The average runtimes were measured to be 38, 8, 46, and 0.2 milliseconds for the artery detection, tracking, reconstruction, and registration methods, respectively.
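The Dice similarity coefficient used to evaluate the tracking above is a standard overlap measure, 2|A∩B| / (|A| + |B|), between a predicted and a reference segmentation. A minimal sketch over flattened binary masks (function name is illustrative):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient for two binary masks given as
    equal-length sequences of 0/1 values: twice the intersection
    divided by the sum of the mask sizes."""
    intersection = sum(a & b for a, b in zip(mask_a, mask_b))
    return 2.0 * intersection / (sum(mask_a) + sum(mask_b))
```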
24
Smistad E, Løvstakken L. Vessel Detection in Ultrasound Images Using Deep Convolutional Neural Networks. Deep Learning and Data Labeling for Medical Applications 2016. [DOI: 10.1007/978-3-319-46976-8_4] [Citation(s) in RCA: 28] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/20/2023]
25
Reynisson PJ, Scali M, Smistad E, Hofstad EF, Leira HO, Lindseth F, Nagelhus Hernes TA, Amundsen T, Sorger H, Langø T. Airway Segmentation and Centerline Extraction from Thoracic CT - Comparison of a New Method to State of the Art Commercialized Methods. PLoS One 2015; 10:e0144282. [PMID: 26657513 PMCID: PMC4676651 DOI: 10.1371/journal.pone.0144282] [Citation(s) in RCA: 30] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2015] [Accepted: 11/15/2015] [Indexed: 12/18/2022] Open
Abstract
INTRODUCTION Our motivation is increased bronchoscopic diagnostic yield and optimized preparation for navigated bronchoscopy. In navigated bronchoscopy, virtual 3D airway visualization is often used to guide a bronchoscopic tool to peripheral lesions, synchronized with the real-time video bronchoscopy. Visualization approaches, segmentation times, and methods for navigated bronchoscopy differ. Time consumption and logistics are two essential aspects that need to be optimized when integrating such technologies in the interventional room. We compared three different approaches to obtain airway centerlines and surfaces. METHOD CT lung datasets of 17 patients were processed in Mimics (Materialise, Leuven, Belgium), which provides a Basic module and a Pulmonology module (beta version) (MPM), in OsiriX (Pixmeo, Geneva, Switzerland), and with our Tube Segmentation Framework (TSF) method. Both MPM and TSF were evaluated against reference segmentations. Automatic and manual settings allowed us to segment the airways and obtain 3D models as well as the centerlines in all datasets. We compared the different procedures by user interactions, such as the number of clicks needed to process the data, and by quantitative measures concerning the quality of the segmentations and centerlines, such as total branch length, number of branches, number of generations, and volume of the 3D model. RESULTS The TSF method was the most automatic, while the Mimics Pulmonology Module (MPM) and the Mimics Basic Module (MBM) resulted in the highest number of branches. MPM demanded the fewest clicks to process the data. We found that the freely available OsiriX was less accurate than the other methods regarding segmentation results; however, the TSF method provided results fastest regarding the number of clicks. The MPM was able to find the highest number of branches and generations. On the other hand, the TSF is fully automatic and provides the user with both a segmentation of the airways and the centerlines. Averages and standard deviations of the reference segmentation comparison for MPM and TSF correspond to the literature. CONCLUSION The TSF is able to segment the airways and extract the centerlines in a single step. The number of branches found is lower for the TSF method than in Mimics. OsiriX demands the highest number of clicks to process the data, its segmentation is often sparse, and extracting the centerline requires another software system. Two of the software systems, the TSF method and the MPM, performed satisfactorily with respect to use in preprocessing CT images for navigated bronchoscopy. According to the reference segmentation, both TSF and MPM are comparable with other segmentation methods. The level of automation and the resulting high number of branches, plus the fact that both the centerlines and the surface of the airways were extracted, are requirements we considered particularly important. The in-house method has the advantage of being an integrated part of a navigation platform for bronchoscopy, whilst the other methods can be considered preprocessing tools for a navigation system.
Affiliation(s)
- Pall Jens Reynisson, Dept. of Circulation and Medical Imaging, Norwegian University of Science and Technology (NTNU), Trondheim, Norway
- Marta Scali, Bio-Mechanical Engineering, Faculty of Mechanical Engineering, Delft University of Technology, Delft, Netherlands
- Erik Smistad, Dept. of Computer and Information Science, NTNU, Trondheim, Norway
- Håkon Olav Leira, Dept. of Circulation and Medical Imaging, NTNU, Trondheim, Norway; Dept. of Thoracic Medicine, St. Olavs Hospital, Trondheim, Norway
- Frank Lindseth, Dept. of Computer and Information Science, NTNU, Trondheim, Norway; Dept. of Medical Technology, SINTEF, Trondheim, Norway
- Toril Anita Nagelhus Hernes, Dept. of Circulation and Medical Imaging, NTNU, Trondheim, Norway
- Tore Amundsen, Dept. of Circulation and Medical Imaging, NTNU, Trondheim, Norway; Dept. of Thoracic Medicine, St. Olavs Hospital, Trondheim, Norway
- Hanne Sorger, Dept. of Circulation and Medical Imaging, NTNU, Trondheim, Norway; Dept. of Thoracic Medicine, St. Olavs Hospital, Trondheim, Norway
- Thomas Langø, Dept. of Medical Technology, SINTEF, Trondheim, Norway
26
Smistad E, Bozorgi M, Lindseth F. FAST: framework for heterogeneous medical image computing and visualization. Int J Comput Assist Radiol Surg 2015; 10:1811-22. [PMID: 25684594 DOI: 10.1007/s11548-015-1158-5] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2014] [Accepted: 01/30/2015] [Indexed: 10/24/2022]
Abstract
PURPOSE Computer systems are becoming increasingly heterogeneous in the sense that they consist of different processors, such as multi-core CPUs and graphics processing units. As the amount of medical image data increases, it is crucial to exploit the computational power of these processors. However, this is currently difficult due to several factors, such as driver errors, processor differences, and the need for low-level memory handling. This paper presents a novel FrAmework for heterogeneouS medical image compuTing and visualization (FAST). The framework aims to make it easier to simultaneously process and visualize medical images efficiently on heterogeneous systems. METHODS FAST uses common image processing programming paradigms and hides the details of memory handling from the user, while enabling the use of all processors and cores on a system. The framework is open source, cross-platform, and available online. RESULTS Code examples and performance measurements are presented to show the simplicity and efficiency of FAST. The results are compared with the Insight Toolkit (ITK) and the Visualization Toolkit (VTK) and show that the presented framework is faster, with up to a 20-fold speedup on several common medical imaging algorithms. CONCLUSIONS FAST enables efficient medical image computing and visualization on heterogeneous systems. Code examples and performance evaluations have demonstrated that the toolkit is both easy to use and performs better than existing frameworks, such as ITK and VTK.
Affiliation(s)
- Erik Smistad, Department of Computer and Information Science, Norwegian University of Science and Technology, Sem Saelandsvei 7-9, 7491 Trondheim, Norway; SINTEF Medical Technology, Trondheim, Norway
- Mohammadmehdi Bozorgi, Department of Computer and Information Science, Norwegian University of Science and Technology, Sem Saelandsvei 7-9, 7491 Trondheim, Norway
- Frank Lindseth, Department of Computer and Information Science, Norwegian University of Science and Technology, Sem Saelandsvei 7-9, 7491 Trondheim, Norway; SINTEF Medical Technology, Trondheim, Norway
27
Smistad E, Falch TL, Bozorgi M, Elster AC, Lindseth F. Medical image segmentation on GPUs--a comprehensive review. Med Image Anal 2014; 20:1-18. [PMID: 25534282 DOI: 10.1016/j.media.2014.10.012] [Citation(s) in RCA: 79] [Impact Index Per Article: 7.9] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/16/2014] [Revised: 10/08/2014] [Accepted: 10/23/2014] [Indexed: 01/01/2023]
Abstract
Segmentation of anatomical structures, from modalities like computed tomography (CT), magnetic resonance imaging (MRI), and ultrasound, is a key enabling technology for medical applications such as diagnostics, planning, and guidance. More efficient implementations are necessary, as most segmentation methods are computationally expensive and the amount of medical imaging data is growing. The increased programmability of graphics processing units (GPUs) in recent years has enabled their use in several areas. GPUs can solve large data-parallel problems at higher speed than the traditional CPU, while being more affordable and energy efficient than distributed systems. Furthermore, using a GPU enables concurrent visualization and interactive segmentation, where the user can help the algorithm achieve a satisfactory result. This review investigates the use of GPUs to accelerate medical image segmentation methods. A set of criteria for efficient use of GPUs is defined, and each segmentation method is rated accordingly. In addition, references to relevant GPU implementations and insight into GPU optimization are provided and discussed. The review concludes that most segmentation methods may benefit from GPU processing due to the methods' data-parallel structure and high thread count. However, factors such as synchronization, branch divergence, and memory usage can limit the speedup.
Affiliation(s)
- Erik Smistad, Norwegian University of Science and Technology, Sem Sælandsvei 7-9, 7491 Trondheim, Norway; SINTEF Medical Technology, Postboks 4760 Sluppen, 7465 Trondheim, Norway
- Thomas L Falch, Norwegian University of Science and Technology, Sem Sælandsvei 7-9, 7491 Trondheim, Norway
- Mohammadmehdi Bozorgi, Norwegian University of Science and Technology, Sem Sælandsvei 7-9, 7491 Trondheim, Norway
- Anne C Elster, Norwegian University of Science and Technology, Sem Sælandsvei 7-9, 7491 Trondheim, Norway
- Frank Lindseth, Norwegian University of Science and Technology, Sem Sælandsvei 7-9, 7491 Trondheim, Norway; SINTEF Medical Technology, Postboks 4760 Sluppen, 7465 Trondheim, Norway
28
Rudyanto RD, Kerkstra S, van Rikxoort EM, Fetita C, Brillet PY, Lefevre C, Xue W, Zhu X, Liang J, Öksüz I, Ünay D, Kadipaşaoğlu K, Estépar RSJ, Ross JC, Washko GR, Prieto JC, Hoyos MH, Orkisz M, Meine H, Hüllebrand M, Stöcker C, Mir FL, Naranjo V, Villanueva E, Staring M, Xiao C, Stoel BC, Fabijanska A, Smistad E, Elster AC, Lindseth F, Foruzan AH, Kiros R, Popuri K, Cobzas D, Jimenez-Carretero D, Santos A, Ledesma-Carbayo MJ, Helmberger M, Urschler M, Pienn M, Bosboom DGH, Campo A, Prokop M, de Jong PA, Ortiz-de-Solorzano C, Muñoz-Barrutia A, van Ginneken B. Comparing algorithms for automated vessel segmentation in computed tomography scans of the lung: the VESSEL12 study. Med Image Anal 2014; 18:1217-32. [PMID: 25113321 DOI: 10.1016/j.media.2014.07.003] [Citation(s) in RCA: 79] [Impact Index Per Article: 7.9] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2013] [Revised: 03/01/2014] [Accepted: 07/01/2014] [Indexed: 10/25/2022]
Abstract
The VESSEL12 (VESsel SEgmentation in the Lung) challenge objectively compares the performance of different algorithms to identify vessels in thoracic computed tomography (CT) scans. Vessel segmentation is fundamental in computer aided processing of data generated by 3D imaging modalities. As manual vessel segmentation is prohibitively time consuming, any real world application requires some form of automation. Several approaches exist for automated vessel segmentation, but judging their relative merits is difficult due to a lack of standardized evaluation. We present an annotated reference dataset containing 20 CT scans and propose nine categories to perform a comprehensive evaluation of vessel segmentation algorithms from both academia and industry. Twenty algorithms participated in the VESSEL12 challenge, held at International Symposium on Biomedical Imaging (ISBI) 2012. All results have been published at the VESSEL12 website http://vessel12.grand-challenge.org. The challenge remains ongoing and open to new participants. Our three contributions are: (1) an annotated reference dataset available online for evaluation of new algorithms; (2) a quantitative scoring system for objective comparison of algorithms; and (3) performance analysis of the strengths and weaknesses of the various vessel segmentation methods in the presence of various lung diseases.
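Quantitative comparison of a candidate segmentation against an annotated reference, as in this challenge, can be sketched with the Dice overlap coefficient, a standard segmentation metric. Note that VESSEL12 defined its own nine evaluation categories, so this is a generic illustration of objective scoring, not the challenge's exact scoring system; the masks below are made up.

```python
def dice(reference, prediction):
    """Dice overlap between two binary masks given as flat lists of 0/1.

    Returns 2|A ∩ B| / (|A| + |B|): 1.0 for a perfect match, 0.0 for no
    overlap. Two empty masks are treated as a perfect match.
    """
    inter = sum(r & p for r, p in zip(reference, prediction))
    total = sum(reference) + sum(prediction)
    return 2.0 * inter / total if total else 1.0

# Hypothetical flattened vessel masks: annotated reference vs. algorithm output.
ref  = [1, 1, 0, 0, 1]
pred = [1, 0, 0, 1, 1]
score = dice(ref, pred)  # 2*2 / (3+3) = 0.666...
```

A scalar overlap score like this is what makes algorithms from different groups directly rankable on a shared reference dataset, which is the core idea behind challenge-style evaluation.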
Affiliation(s)
- Rina D Rudyanto
- Center for Applied Medical Research, University of Navarra, Spain.
- Sjoerd Kerkstra
- Diagnostic Image Analysis Group, Radboud University Nijmegen Medical Centre, The Netherlands
- Eva M van Rikxoort
- Diagnostic Image Analysis Group, Radboud University Nijmegen Medical Centre, The Netherlands
- Marius Staring
- Division of Image Processing (LKEB), Leiden University Medical Center, The Netherlands
- Berend C Stoel
- Division of Image Processing (LKEB), Leiden University Medical Center, The Netherlands
- Anna Fabijanska
- Institute of Applied Computer Science, Lodz University of Technology, Poland
- Erik Smistad
- Norwegian University of Science and Technology, Norway
- Anne C Elster
- Norwegian University of Science and Technology, Norway
- Andres Santos
- Universidad Politécnica de Madrid, Spain; CIBER-BBN, Spain
- Michael Helmberger
- Graz University of Technology, Institute for Computer Vision and Graphics, Austria
- Martin Urschler
- Ludwig Boltzmann Institute for Clinical Forensic Imaging, Graz, Austria
- Michael Pienn
- Ludwig Boltzmann Institute for Lung Vascular Research, Graz, Austria
- Dennis G H Bosboom
- Diagnostic Image Analysis Group, Radboud University Nijmegen Medical Centre, The Netherlands
- Arantza Campo
- Pulmonary Department, Clínica Universidad de Navarra, University of Navarra, Spain
- Mathias Prokop
- Diagnostic Image Analysis Group, Radboud University Nijmegen Medical Centre, The Netherlands
- Pim A de Jong
- Department of Radiology, University Medical Center, Utrecht, The Netherlands
- Bram van Ginneken
- Diagnostic Image Analysis Group, Radboud University Nijmegen Medical Centre, The Netherlands
29

30
Smistad E, Elster AC, Lindseth F. GPU accelerated segmentation and centerline extraction of tubular structures from medical images. Int J Comput Assist Radiol Surg 2013; 9:561-75. [PMID: 24177985 DOI: 10.1007/s11548-013-0956-x] [Citation(s) in RCA: 31] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2013] [Accepted: 10/17/2013] [Indexed: 10/26/2022]
Abstract
PURPOSE To create a fast and generic method with sufficient quality for extracting tubular structures such as blood vessels and airways from different modalities (CT, MR and US) and organs (brain, lungs and liver) by utilizing the computational power of graphics processing units (GPUs). METHODS A cropping algorithm is used to remove unnecessary data from the datasets on the GPU. A model-based tube detection filter combined with a new parallel centerline extraction algorithm and a parallelized region growing segmentation algorithm is used to extract the tubular structures completely on the GPU. Accuracy of the proposed GPU method and centerline algorithm is compared with the ridge traversal and skeletonization/thinning methods using synthetic vascular datasets. RESULTS The implementation is tested on several datasets from three different modalities: airways from CT, blood vessels from MR, and 3D Doppler ultrasound. The results show that the method is able to extract airways and vessels in 3-5 s on a modern GPU and is less sensitive to noise than other centerline extraction methods. CONCLUSIONS Tubular structures such as blood vessels and airways can be extracted from various organs imaged by different modalities in a matter of seconds, even for large datasets.
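One building block of this pipeline, seeded region growing, can be sketched in its simplest serial, queue-based form; the paper's contribution is a parallelized GPU version of this step, so the code below shows only the underlying logic, and the toy image and thresholds are illustrative assumptions.

```python
from collections import deque

def region_grow(image, seed, lower, upper):
    """Serial seeded region growing on a 2D intensity grid.

    Starting from a seed pixel, flood outward through 4-connected
    neighbours whose intensity lies in [lower, upper], marking them 1.
    A GPU version processes many frontier pixels per pass instead of
    one queue element at a time.
    """
    h, w = len(image), len(image[0])
    segmented = [[0] * w for _ in range(h)]
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        if not (0 <= y < h and 0 <= x < w):
            continue  # outside the image
        if segmented[y][x] or not (lower <= image[y][x] <= upper):
            continue  # already labelled, or intensity out of range
        segmented[y][x] = 1
        queue.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return segmented

# Toy vessel cross-section: a bright tube (~200) in a dark background (~10).
img = [[10, 200, 10],
       [10, 200, 10],
       [10, 200, 200]]
mask = region_grow(img, (0, 1), 150, 255)
# mask -> [[0, 1, 0], [0, 1, 0], [0, 1, 1]]
```

Because each growth step depends on the previous frontier, this is one of the neighbourhood-dependent methods whose GPU speedup is bounded by the per-pass synchronization the serial queue hides.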
Affiliation(s)
- Erik Smistad
- Department of Computer and Information Science, Norwegian University of Science and Technology, Sem Saelandsvei 7-9, NO-7491 Trondheim, Norway