1. Yang K, Musio F, Ma Y, Juchler N, Paetzold JC, Al-Maskari R, Höher L, Li HB, Hamamci IE, Sekuboyina A, Shit S, Huang H, Prabhakar C, de la Rosa E, Waldmannstetter D, Kofler F, Navarro F, Menten M, Ezhov I, Rueckert D, Vos I, Ruigrok Y, Velthuis B, Kuijf H, Hämmerli J, Wurster C, Bijlenga P, Westphal L, Bisschop J, Colombo E, Baazaoui H, Makmur A, Hallinan J, Wiestler B, Kirschke JS, Wiest R, Montagnon E, Letourneau-Guillon L, Galdran A, Galati F, Falcetta D, Zuluaga MA, Lin C, Zhao H, Zhang Z, Ra S, Hwang J, Park H, Chen J, Wodzinski M, Müller H, Shi P, Liu W, Ma T, Yalçin C, Hamadache RE, Salvi J, Llado X, Estrada UMLT, Abramova V, Giancardo L, Oliver A, Liu J, Huang H, Cui Y, Lin Z, Liu Y, Zhu S, Patel TR, Tutino VM, Orouskhani M, Wang H, Mossa-Basha M, Zhu C, Rokuss MR, Kirchhoff Y, Disch N, Holzschuh J, Isensee F, Maier-Hein K, Sato Y, Hirsch S, Wegener S, Menze B. Benchmarking the CoW with the TopCoW Challenge: Topology-Aware Anatomical Segmentation of the Circle of Willis for CTA and MRA. ArXiv 2024: arXiv:2312.17670v3. [PMID: 38235066] [PMCID: PMC10793481]
Abstract
The Circle of Willis (CoW) is an important network of arteries connecting the major circulations of the brain. Its vascular architecture is believed to affect the risk, severity, and clinical outcome of serious neurovascular diseases. However, characterizing the highly variable CoW anatomy is still a manual and time-consuming expert task. The CoW is usually imaged by two angiographic modalities, magnetic resonance angiography (MRA) and computed tomography angiography (CTA), but public datasets with annotations of CoW anatomy remain limited, especially for CTA. Therefore, we organized the TopCoW Challenge in 2023 and released an annotated CoW dataset. The TopCoW dataset was the first public dataset with voxel-level annotations for the thirteen possible CoW vessel components, enabled by virtual-reality (VR) technology. It was also the first large dataset with paired MRA and CTA from the same patients. The TopCoW Challenge formalized the CoW characterization problem as a multiclass anatomical segmentation task with an emphasis on topological metrics. We invited submissions worldwide for the CoW segmentation task, attracting over 140 registered participants from four continents. The top-performing teams managed to segment many CoW components with Dice scores around 90%, but scored lower for the communicating arteries and rare variants, and topological mistakes occurred even in predictions with high Dice scores. Additional topological analysis revealed further areas for improvement in detecting certain CoW components and in matching CoW variant topology accurately. TopCoW represents a first attempt at benchmarking the CoW anatomical segmentation task for MRA and CTA, both morphologically and topologically.
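
The emphasis on per-component overlap suggests a simple reference implementation of the headline metric. The sketch below shows one way to compute class-wise Dice scores for a multiclass segmentation volume; it is an illustration under assumed label conventions, not the challenge's official evaluation code.

```python
import numpy as np

def dice_per_class(prediction: np.ndarray, reference: np.ndarray, labels) -> dict:
    """Compute the Dice coefficient separately for each class label."""
    scores = {}
    for label in labels:
        pred_mask = prediction == label
        ref_mask = reference == label
        denominator = pred_mask.sum() + ref_mask.sum()
        if denominator == 0:
            scores[label] = np.nan  # class absent in both volumes
        else:
            scores[label] = 2.0 * np.logical_and(pred_mask, ref_mask).sum() / denominator
    return scores

# Toy example with 13 vessel-component labels (0 = background).
rng = np.random.default_rng(0)
pred = rng.integers(0, 14, size=(64, 64, 64))
ref = rng.integers(0, 14, size=(64, 64, 64))
print(dice_per_class(pred, ref, labels=range(1, 14)))
```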

2. Wodzinski M, Marini N, Atzori M, Müller H. RegWSI: Whole slide image registration using combined deep feature- and intensity-based methods: Winner of the ACROBAT 2023 challenge. Comput Methods Programs Biomed 2024; 250:108187. [PMID: 38657383] [DOI: 10.1016/j.cmpb.2024.108187]
Abstract
BACKGROUND AND OBJECTIVE The automatic registration of differently stained whole slide images (WSIs) is crucial for improving diagnosis and prognosis by fusing complementary information emerging from different visible structures. It is also useful for quickly transferring annotations between consecutive or restained slides, thus significantly reducing annotation time and the associated costs. Nevertheless, the slide preparation is different for each stain and the tissue undergoes complex and large deformations. Therefore, a robust, efficient, and accurate registration method is highly desired by the scientific community and by hospitals specializing in digital pathology. METHODS We propose a two-step hybrid method consisting of (i) a deep learning- and feature-based initial alignment algorithm, and (ii) an intensity-based nonrigid registration using instance optimization. The proposed method does not require any fine-tuning to a particular dataset and can be used directly for any desired tissue type and stain. The registration time is low, allowing efficient registration even for large datasets. The method was proposed for the ACROBAT 2023 challenge organized during the MICCAI 2023 conference, where it scored 1st place, and is released as open-source software. RESULTS The proposed method is evaluated using three open datasets: (i) the Automatic Nonrigid Histological Image Registration Dataset (ANHIR), (ii) the Automatic Registration of Breast Cancer Tissue Dataset (ACROBAT), and (iii) the Hybrid Restained and Consecutive Histological Serial Sections Dataset (HyReCo). The target registration error (TRE) is used as the evaluation metric. We compare the proposed algorithm to other state-of-the-art solutions, showing considerable improvement. Additionally, we perform several ablation studies concerning the resolution used for registration and the robustness and stability of the initial alignment. The method achieves the most accurate results on the ACROBAT dataset, reaches cell-level registration accuracy for the restained slides from the HyReCo dataset, and is among the best methods evaluated on the ANHIR dataset. CONCLUSIONS The article presents an automatic and robust registration method that outperforms other state-of-the-art solutions. The method does not require any fine-tuning to a particular dataset and can be used out-of-the-box for numerous types of microscopic images. It is incorporated into the DeeperHistReg framework, allowing others to directly use it to register, transform, and save WSIs at any desired pyramid level (resolution up to 220k x 220k). We provide free access to the software, and the results are fully and easily reproducible. The proposed method is a significant contribution to improving WSI registration quality, thus advancing the field of digital pathology.
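
To make the two-step idea concrete, the sketch below shows a classical feature-based initial alignment using ORB keypoints and a RANSAC-fitted affine transform. This is a simplified stand-in for the deep feature matching described above, not the released DeeperHistReg implementation; function and parameter choices are assumptions.

```python
import cv2
import numpy as np

def initial_affine(source_gray: np.ndarray, target_gray: np.ndarray) -> np.ndarray:
    """Estimate a robust affine transform from matched keypoints."""
    orb = cv2.ORB_create(nfeatures=5000)
    kp_s, des_s = orb.detectAndCompute(source_gray, None)
    kp_t, des_t = orb.detectAndCompute(target_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_s, des_t), key=lambda m: m.distance)
    src = np.float32([kp_s[m.queryIdx].pt for m in matches])
    dst = np.float32([kp_t[m.trainIdx].pt for m in matches])
    # RANSAC rejects the outlier matches that are frequent in histology.
    matrix, _inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)
    return matrix
```

The estimated affine would then initialize the intensity-based nonrigid instance optimization that constitutes the second step.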
Affiliation(s)
- Marek Wodzinski: Institute of Informatics, University of Applied Sciences Western Switzerland, Sierre, Switzerland; Department of Measurement and Electronics, AGH University of Kraków, Krakow, Poland
- Niccolò Marini: Institute of Informatics, University of Applied Sciences Western Switzerland, Sierre, Switzerland
- Manfredo Atzori: Institute of Informatics, University of Applied Sciences Western Switzerland, Sierre, Switzerland; Department of Neuroscience, University of Padova, Padova, Italy
- Henning Müller: Institute of Informatics, University of Applied Sciences Western Switzerland, Sierre, Switzerland; Medical Faculty, University of Geneva, Geneva, Switzerland

3. Banzato T, Wodzinski M, Burti S, Vettore E, Müller H, Zotti A. An AI-based algorithm for the automatic evaluation of image quality in canine thoracic radiographs. Sci Rep 2023; 13:17024. [PMID: 37813976] [PMCID: PMC10562412] [DOI: 10.1038/s41598-023-44089-4]
Abstract
The aim of this study was to develop and test an artificial intelligence (AI)-based algorithm for detecting common technical errors in canine thoracic radiography. The algorithm was trained using a database of thoracic radiographs from three veterinary clinics in Italy, which were evaluated for image quality by three experienced veterinary diagnostic imagers. The algorithm was designed to classify the images as correct or as having one or more of the following errors: rotation, underexposure, overexposure, incorrect limb positioning, incorrect neck positioning, blurriness, cut-off, or the presence of foreign objects or medical devices. The algorithm correctly identified errors in thoracic radiographs with an overall accuracy of 81.5% in latero-lateral and 75.7% in sagittal images. The most accurately identified errors were limb mispositioning and underexposure, in both latero-lateral and sagittal images. The accuracy of the developed model in classifying technically correct radiographs was fair in latero-lateral and good in sagittal images. The authors conclude that their AI-based algorithm is a promising tool for improving the accuracy of radiographic interpretation by identifying technical errors in canine thoracic radiographs.
Affiliation(s)
- Tommaso Banzato: Department of Animal Medicine, Production and Health, University of Padua, Viale dell'Università 16, 35020 Legnaro, Padua, Italy
- Marek Wodzinski: Department of Measurement and Electronics, AGH University of Krakow, PL32059 Krakow, Poland; Information Systems Institute, University of Applied Sciences Western Switzerland (HES-SO Valais), 3960 Sierre, Switzerland
- Silvia Burti: Department of Animal Medicine, Production and Health, University of Padua, Viale dell'Università 16, 35020 Legnaro, Padua, Italy
- Eleonora Vettore: Department of Animal Medicine, Production and Health, University of Padua, Viale dell'Università 16, 35020 Legnaro, Padua, Italy
- Henning Müller: Information Systems Institute, University of Applied Sciences Western Switzerland (HES-SO Valais), 3960 Sierre, Switzerland
- Alessandro Zotti: Department of Animal Medicine, Production and Health, University of Padua, Viale dell'Università 16, 35020 Legnaro, Padua, Italy

4. Valente C, Wodzinski M, Guglielmini C, Poser H, Chiavegato D, Zotti A, Venturini R, Banzato T. Development of an artificial intelligence-based method for the diagnosis of the severity of myxomatous mitral valve disease from canine chest radiographs. Front Vet Sci 2023; 10:1227009. [PMID: 37808107] [PMCID: PMC10556456] [DOI: 10.3389/fvets.2023.1227009]
Abstract
An algorithm based on artificial intelligence (AI) was developed and tested to classify different stages of myxomatous mitral valve disease (MMVD) from canine thoracic radiographs. The radiographs were selected from the medical databases of two different institutions, considering dogs over 6 years of age that had undergone chest X-ray and echocardiographic examination. Only radiographs clearly showing the cardiac silhouette were considered. The convolutional neural network (CNN) was trained on right and left lateral and/or ventro-dorsal or dorso-ventral views. Each dog was classified according to the American College of Veterinary Internal Medicine (ACVIM) guidelines as stage B1, B2, or C + D. A ResNet18 CNN was used as the classification network, and the results were evaluated using confusion matrices, receiver operating characteristic curves, and t-SNE and UMAP projections. The area under the curve (AUC) showed good heart-CNN performance in determining the MMVD stage from the lateral views, with AUCs of 0.87, 0.77, and 0.88 for stages B1, B2, and C + D, respectively. The high accuracy of the algorithm in predicting the MMVD stage suggests that it could be a useful support tool in the interpretation of canine thoracic radiographs.
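
As an illustration of the setup described above (a pretrained ResNet18 adapted to the three ACVIM classes), a minimal PyTorch sketch follows. Hyperparameters, image size, and the training step are placeholder assumptions rather than the study's configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

# ImageNet-pretrained backbone with a new 3-class head (B1, B2, C+D).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 3)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch of radiograph crops.
images = torch.randn(8, 3, 224, 224)
stages = torch.randint(0, 3, (8,))
optimizer.zero_grad()
loss = criterion(model(images), stages)
loss.backward()
optimizer.step()
```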
Affiliation(s)
- Carlotta Valente: Department of Animal Medicine, Production and Health, University of Padua, Padua, Italy
- Marek Wodzinski: Department of Measurement and Electronics, AGH University of Science and Technology, Krakow, Poland; Information Systems Institute, University of Applied Sciences Western Switzerland (HES-SO Valais), Sierre, Switzerland
- Carlo Guglielmini: Department of Animal Medicine, Production and Health, University of Padua, Padua, Italy
- Helen Poser: Department of Animal Medicine, Production and Health, University of Padua, Padua, Italy
- Alessandro Zotti: Department of Animal Medicine, Production and Health, University of Padua, Padua, Italy
- Tommaso Banzato: Department of Animal Medicine, Production and Health, University of Padua, Padua, Italy

5. Li J, Ellis DG, Kodym O, Rauschenbach L, Rieß C, Sure U, Wrede KH, Alvarez CM, Wodzinski M, Daniol M, Hemmerling D, Mahdi H, Clement A, Kim E, Fishman Z, Whyne CM, Mainprize JG, Hardisty MR, Pathak S, Sindhura C, Gorthi RKSS, Kiran DV, Gorthi S, Yang B, Fang K, Li X, Kroviakov A, Yu L, Jin Y, Pepe A, Gsaxner C, Herout A, Alves V, Španěl M, Aizenberg MR, Kleesiek J, Egger J. Towards clinical applicability and computational efficiency in automatic cranial implant design: An overview of the AutoImplant 2021 cranial implant design challenge. Med Image Anal 2023; 88:102865. [PMID: 37331241] [DOI: 10.1016/j.media.2023.102865]
Abstract
Cranial implants are commonly used for surgical repair of craniectomy-induced skull defects. These implants are usually generated offline and may require days to weeks to become available. An automated implant design process combined with onsite manufacturing facilities can guarantee immediate implant availability and avoid secondary intervention. To address this need, the AutoImplant II challenge was organized in conjunction with MICCAI 2021, addressing the unmet clinical and computational requirements of automatic cranial implant design. The first edition of AutoImplant (AutoImplant I, 2020) demonstrated the general capabilities and effectiveness of data-driven approaches, including deep learning, for a skull shape completion task on synthetic defects. The second AutoImplant challenge (i.e., AutoImplant II, 2021) built upon the first by adding real clinical craniectomy cases as well as additional synthetic imaging data. The AutoImplant II challenge consisted of three tracks. Tracks 1 and 3 used skull images with synthetic defects to evaluate the ability of submitted approaches to generate implants that recreate the original skull shape. Track 3 consisted of the data from the first challenge (i.e., 100 cases for training and 110 for evaluation), while Track 1 provided 570 training and 100 validation cases aimed at evaluating skull shape completion algorithms on diverse defect patterns. Track 2 made progress over the first challenge by providing 11 clinically defective skulls and evaluating the submitted implant designs on these clinical cases. The submitted designs were evaluated quantitatively against post-craniectomy imaging data as well as by an experienced neurosurgeon. Submissions to these challenge tasks made substantial progress in addressing issues such as generalizability, computational efficiency, data augmentation, and implant refinement. This paper serves as a comprehensive summary and comparison of the submissions to the AutoImplant II challenge. Codes and models are available at https://github.com/Jianningli/Autoimplant_II.
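
A common way to derive the implant once a network has completed the skull is to subtract the defective skull from the completed one. The sketch below illustrates this post-processing step on binary volumes; the morphological cleanup is an assumed, simplified stand-in for the refinement steps mentioned above.

```python
import numpy as np
from scipy import ndimage

def implant_from_completion(completed: np.ndarray, defective: np.ndarray) -> np.ndarray:
    """Binary implant = completed-skull voxels absent from the defective skull."""
    implant = np.logical_and(completed.astype(bool), ~defective.astype(bool))
    # Remove spurious voxels produced by imperfect completion.
    implant = ndimage.binary_opening(implant, iterations=1)
    # Keep only the largest connected component.
    labeled, n = ndimage.label(implant)
    if n > 1:
        sizes = ndimage.sum(implant, labeled, range(1, n + 1))
        implant = labeled == (np.argmax(sizes) + 1)
    return implant
```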
Affiliation(s)
- Jianning Li: Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, 45131 Essen, Germany; Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Austria; Computer Algorithms for Medicine Laboratory, Graz, Austria
- David G Ellis: Department of Neurosurgery, University of Nebraska Medical Center, Omaha, NE 68198, USA
- Oldřich Kodym: Graph@FIT, Brno University of Technology, Brno, Czech Republic
- Laurèl Rauschenbach: Department of Neurosurgery and Spine Surgery, University Hospital Essen, Hufelandstrasse 55, 45147 Essen, Germany
- Christoph Rieß: Department of Neurosurgery and Spine Surgery, University Hospital Essen, Hufelandstrasse 55, 45147 Essen, Germany
- Ulrich Sure: Department of Neurosurgery and Spine Surgery, University Hospital Essen, Hufelandstrasse 55, 45147 Essen, Germany
- Karsten H Wrede: Department of Neurosurgery and Spine Surgery, University Hospital Essen, Hufelandstrasse 55, 45147 Essen, Germany
- Carlos M Alvarez: Department of Neurosurgery, University of Nebraska Medical Center, Omaha, NE 68198, USA
- Marek Wodzinski: AGH University of Science and Technology, Department of Measurement and Electronics, Krakow, Poland; University of Applied Sciences Western Switzerland (HES-SO Valais), Information Systems Institute, Sierre, Switzerland
- Mateusz Daniol: AGH University of Science and Technology, Department of Measurement and Electronics, Krakow, Poland
- Daria Hemmerling: AGH University of Science and Technology, Department of Measurement and Electronics, Krakow, Poland
- Hamza Mahdi: Sunnybrook Research Institute, Toronto, ON, Canada
- Evan Kim: Sunnybrook Research Institute, Toronto, ON, Canada
- Cari M Whyne: Sunnybrook Research Institute, Toronto, ON, Canada; Division of Orthopaedic Surgery, University of Toronto, Toronto, ON M5T 1P5, Canada
- James G Mainprize: Sunnybrook Research Institute, Toronto, ON, Canada; Calavera Surgical Design Inc., Toronto, ON, Canada
- Michael R Hardisty: Sunnybrook Research Institute, Toronto, ON, Canada; Division of Orthopaedic Surgery, University of Toronto, Toronto, ON M5T 1P5, Canada
- Shashwat Pathak: Department of Electrical Engineering, Indian Institute of Technology, Tirupati, India
- Chitimireddy Sindhura: Department of Electrical Engineering, Indian Institute of Technology, Tirupati, India
- Degala Venkata Kiran: Department of Mechanical Engineering, Indian Institute of Technology, Tirupati, India
- Subrahmanyam Gorthi: Department of Electrical Engineering, Indian Institute of Technology, Tirupati, India
- Bokai Yang: Department of Electrical and Computer Engineering, University of Alberta, Edmonton, AB T6G 2R3, Canada
- Ke Fang: Department of Electrical and Computer Engineering, University of Alberta, Edmonton, AB T6G 2R3, Canada
- Xingyu Li: Department of Electrical and Computer Engineering, University of Alberta, Edmonton, AB T6G 2R3, Canada
- Artem Kroviakov: Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Austria
- Lei Yu: Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Austria
- Yuan Jin: Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Austria; Computer Algorithms for Medicine Laboratory, Graz, Austria
- Antonio Pepe: Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Austria; Computer Algorithms for Medicine Laboratory, Graz, Austria
- Christina Gsaxner: Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Austria; Computer Algorithms for Medicine Laboratory, Graz, Austria
- Adam Herout: Graph@FIT, Brno University of Technology, Brno, Czech Republic
- Victor Alves: ALGORITMI Research Centre/LASI, University of Minho, Braga, Portugal
- Michele R Aizenberg: Department of Neurosurgery, University of Nebraska Medical Center, Omaha, NE 68198, USA
- Jens Kleesiek: Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, 45131 Essen, Germany
- Jan Egger: Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, 45131 Essen, Germany; Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Austria; Computer Algorithms for Medicine Laboratory, Graz, Austria

6. Hemmerling D, Wodzinski M, Orozco-Arroyave JR, Sztaho D, Daniol M, Jemiolo P, Wojcik-Pedziwiatr M. Vision Transformer for Parkinson's Disease Classification using Multilingual Sustained Vowel Recordings. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. [PMID: 38083719] [DOI: 10.1109/embc40787.2023.10340478]
Abstract
Parkinson's disease (PD) is the second most prevalent neurodegenerative disease in the world. Thus, the early detection of PD has recently been the subject of several scientific and commercial studies. In this paper, we propose a pipeline applying a Vision Transformer to mel-spectrograms for PD classification using multilingual sustained vowel recordings. Our transformer-based model shows great potential for using voice as a single-modality biomarker for automatic PD detection without language restrictions, across a wide range of vowels, achieving an F1-score of 0.78. The results of our study fall within the range of the estimated prevalence of voice and speech disorders in Parkinson's disease, which is 70-90%. Our study demonstrates high potential for adoption in clinical decision-making, allowing increasingly systematic and fast diagnosis of PD, with potential for use in telemedicine. Clinical relevance: There is an urgent need for a non-invasive biomarker of Parkinson's disease effective enough to detect the onset of the disease, so that neuroprotective treatment can be introduced at the earliest possible stage and the results of that intervention can be followed. Voice disorders in PD are very frequent and are expected to be utilized as an early diagnostic biomarker. Voice analysis using deep neural networks opens new opportunities to assess the symptoms of neurodegenerative diseases, support fast diagnosis, guide treatment initiation, and predict risk. The detection accuracy for voice biomarkers with our method reached close to the maximum achievable value.
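
A minimal sketch of the first stage of such a pipeline, turning a sustained-vowel recording into the mel-spectrogram "image" a Vision Transformer can consume, is shown below. The parameter values (sampling rate, FFT size, number of mel bands) are illustrative assumptions, not the paper's settings.

```python
import librosa
import numpy as np

def vowel_to_melspectrogram(path: str) -> np.ndarray:
    # Load the recording at an assumed 16 kHz sampling rate.
    signal, sr = librosa.load(path, sr=16000)
    mel = librosa.feature.melspectrogram(
        y=signal, sr=sr, n_fft=1024, hop_length=256, n_mels=128)
    mel_db = librosa.power_to_db(mel, ref=np.max)
    # Min-max normalize to [0, 1] so the spectrogram behaves like an image.
    return (mel_db - mel_db.min()) / (mel_db.max() - mel_db.min() + 1e-8)
```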

7. Jurgas A, Wodzinski M, Celniak W, Atzori M, Müller H. Artifact Augmentation for Learning-based Quality Control of Whole Slide Images. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. [PMID: 38082977] [DOI: 10.1109/embc40787.2023.10340997]
Abstract
The acquisition of whole slide images is prone to artifacts that can require human control and re-scanning, both in clinical workflows and in research-oriented settings. Quality control algorithms are a first step to overcome this challenge, as they limit the use of low-quality images. Developing quality control systems in histopathology is not straightforward, partly due to the limited availability of data related to this topic. We address the problem by proposing a tool to augment data with artifacts. The proposed method seamlessly generates and blends artifacts from an external library onto a given histopathology dataset. The datasets augmented with the blended artifacts are then used to train an artifact detection network in a supervised way. We use the YOLOv5 model for artifact detection with a slightly modified training pipeline. The proposed tool can be extended into a complete framework for the quality assessment of whole slide images. Clinical relevance: The proposed method may be useful for the initial quality screening of whole slide images. Each year, millions of whole slide images are acquired and digitized worldwide. Many of them contain artifacts that affect subsequent AI-oriented analysis. Therefore, a tool operating at the acquisition phase and improving the initial quality assessment is crucial to increase the performance of digital pathology algorithms, e.g., for early cancer diagnosis.
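
The core of the augmentation is pasting a library artifact onto a clean patch with soft blending. A simplified sketch of that blending step follows; the real tool additionally handles artifact selection, placement, and scaling, and the alpha-mask formulation here is an assumption.

```python
import numpy as np

def blend_artifact(patch: np.ndarray, artifact: np.ndarray,
                   alpha_mask: np.ndarray) -> np.ndarray:
    """Alpha-blend an RGB artifact into an RGB patch of the same size.

    alpha_mask is a float map in [0, 1]; 1 = pure artifact, 0 = pure tissue.
    """
    alpha = alpha_mask[..., None]  # broadcast over the color channels
    blended = alpha * artifact.astype(np.float32) + (1 - alpha) * patch.astype(np.float32)
    return blended.astype(patch.dtype)
```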

8. Zulfiqar M, Stanuch M, Wodzinski M, Skalski A. DRU-Net: Pulmonary Artery Segmentation via Dense Residual U-Network with Hybrid Loss Function. Sensors (Basel) 2023; 23:5427. [PMID: 37420595] [DOI: 10.3390/s23125427]
Abstract
Understanding the structure and topology of the pulmonary arteries is crucial to plan and conduct medical treatment in the thorax. Due to the complex anatomy of the pulmonary vessels, it is not easy to distinguish between arteries and veins. The pulmonary arteries have a complex structure with an irregular shape and closely adjacent tissues, which makes automatic segmentation a challenging task that calls for a deep neural network able to capture the topological structure of the artery tree. Therefore, in this study, a Dense Residual U-Net with a hybrid loss function is proposed. The network is trained on augmented computed tomography volumes to improve performance and prevent overfitting, and the hybrid loss function further improves segmentation quality. The results show an improvement in the Dice and HD95 scores over state-of-the-art techniques, with average scores of 0.8775 and 4.2624 mm, respectively. The proposed method will support physicians in the challenging task of preoperative planning of thoracic surgery, where the correct assessment of the arteries is crucial.
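
One common realization of such a hybrid loss is a weighted sum of soft Dice and cross-entropy terms. The sketch below assumes this composition with placeholder weights; the exact formulation used by DRU-Net is defined in the paper.

```python
import torch
import torch.nn.functional as F

def hybrid_loss(logits: torch.Tensor, target: torch.Tensor,
                dice_weight: float = 0.5, eps: float = 1e-6) -> torch.Tensor:
    """logits: (N, 1, D, H, W) raw scores; target: (N, 1, D, H, W) binary mask."""
    ce = F.binary_cross_entropy_with_logits(logits, target.float())
    probs = torch.sigmoid(logits)
    intersection = (probs * target).sum()
    dice = (2 * intersection + eps) / (probs.sum() + target.sum() + eps)
    # Soft Dice is maximized, so its loss contribution is (1 - dice).
    return dice_weight * (1 - dice) + (1 - dice_weight) * ce
```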
Affiliation(s)
- Manahil Zulfiqar: Department of Measurement and Electronics, AGH University of Science and Technology, 30-059 Krakow, Poland; MedApp S.A., 30-150 Krakow, Poland
- Maciej Stanuch: Department of Measurement and Electronics, AGH University of Science and Technology, 30-059 Krakow, Poland; MedApp S.A., 30-150 Krakow, Poland
- Marek Wodzinski: Department of Measurement and Electronics, AGH University of Science and Technology, 30-059 Krakow, Poland; MedApp S.A., 30-150 Krakow, Poland
- Andrzej Skalski: Department of Measurement and Electronics, AGH University of Science and Technology, 30-059 Krakow, Poland; MedApp S.A., 30-150 Krakow, Poland

9. Hering A, Hansen L, Mok TCW, Chung ACS, Siebert H, Hager S, Lange A, Kuckertz S, Heldmann S, Shao W, Vesal S, Rusu M, Sonn G, Estienne T, Vakalopoulou M, Han L, Huang Y, Yap PT, Brudfors M, Balbastre Y, Joutard S, Modat M, Lifshitz G, Raviv D, Lv J, Li Q, Jaouen V, Visvikis D, Fourcade C, Rubeaux M, Pan W, Xu Z, Jian B, De Benetti F, Wodzinski M, Gunnarsson N, Sjolund J, Grzech D, Qiu H, Li Z, Thorley A, Duan J, Grosbrohmer C, Hoopes A, Reinertsen I, Xiao Y, Landman B, Huo Y, Murphy K, Lessmann N, van Ginneken B, Dalca AV, Heinrich MP. Learn2Reg: Comprehensive Multi-Task Medical Image Registration Challenge, Dataset and Evaluation in the Era of Deep Learning. IEEE Trans Med Imaging 2023; 42:697-712. [PMID: 36264729] [DOI: 10.1109/tmi.2022.3213983]
Abstract
Image registration is a fundamental medical image analysis task, and a wide variety of approaches have been proposed. However, only a few studies have comprehensively compared medical image registration approaches across a wide range of clinically relevant tasks. This limits the development of registration methods, the adoption of research advances into practice, and fair benchmarking across competing approaches. The Learn2Reg challenge addresses these limitations by providing a multi-task medical image registration dataset for the comprehensive characterisation of deformable registration algorithms. Continuous evaluation is possible at https://learn2reg.grand-challenge.org. Learn2Reg covers a wide range of anatomies (brain, abdomen, and thorax), modalities (ultrasound, CT, MR), availability of annotations, and intra- and inter-patient registration settings. We established an easily accessible framework for training and validation of 3D registration methods, which enabled the compilation of results from over 65 individual method submissions by more than 20 unique teams. We used a complementary set of metrics, including robustness, accuracy, plausibility, and runtime, enabling unique insight into the current state of the art in medical image registration. This paper describes the datasets, tasks, evaluation methods, and results of the challenge, as well as further analysis of transferability to new datasets, the importance of label supervision, and resulting bias. While no single approach worked best across all tasks, many methodological aspects could be identified that push the performance of medical image registration to new state-of-the-art levels. Furthermore, we dispelled the common belief that conventional registration methods have to be much slower than deep-learning-based methods.

10. Marini N, Otalora S, Wodzinski M, Tomassini S, Dragoni AF, Marchand-Maillet S, Morales JPD, Duran-Lopez L, Vatrano S, Müller H, Atzori M. Data-driven color augmentation for H&E stained images in computational pathology. J Pathol Inform 2023; 14:100183. [PMID: 36687531] [PMCID: PMC9852546] [DOI: 10.1016/j.jpi.2022.100183]
Abstract
Computational pathology targets the automatic analysis of Whole Slide Images (WSI). WSIs are high-resolution digitized histopathology images, stained with chemical reagents to highlight specific tissue structures and scanned via whole slide scanners. The application of different parameters during WSI acquisition may lead to stain color heterogeneity, especially when samples are collected from several medical centers. Stain color heterogeneity often limits the robustness of methods developed to analyze WSIs, in particular Convolutional Neural Networks (CNN), the state-of-the-art algorithm for most computational pathology tasks. Stain color heterogeneity is still an unsolved problem, although several methods have been developed to alleviate it, such as Hue-Saturation-Contrast (HSC) color augmentation and stain augmentation methods. The goal of this paper is to present Data-Driven Color Augmentation (DDCA), a method to improve the efficiency of color augmentation methods by increasing the reliability of the samples used for training computational pathology models. During CNN training, a database including over 2 million H&E color variations collected from private and public datasets is used as a reference to discard augmented data whose color distributions do not correspond to realistic data. DDCA is applied to HSC color augmentation, stain augmentation, and H&E-adversarial networks in colon and prostate cancer classification tasks. DDCA is then compared with 11 state-of-the-art baseline methods for handling color heterogeneity, showing that it can substantially improve classification performance on unseen data with heterogeneous color variations.
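
The acceptance test at the heart of DDCA can be sketched as follows: estimate color-statistic bounds from a reference database of real H&E patches, then discard augmented patches whose statistics fall outside those bounds. Using per-channel means and percentile bounds is a simplifying assumption for illustration, not the paper's exact criterion.

```python
import numpy as np

def fit_reference_bounds(reference_patches, low=1.0, high=99.0):
    """Per-channel mean bounds estimated from a list of RGB patches."""
    means = np.stack([p.reshape(-1, 3).mean(axis=0) for p in reference_patches])
    return np.percentile(means, low, axis=0), np.percentile(means, high, axis=0)

def is_realistic(augmented_patch, bounds) -> bool:
    lo, hi = bounds
    mean = augmented_patch.reshape(-1, 3).mean(axis=0)
    return bool(np.all(mean >= lo) and np.all(mean <= hi))

# Augmented patches failing the test would be discarded and re-sampled.
```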
Affiliation(s)
- Niccolò Marini: Information Systems Institute, University of Applied Sciences Western Switzerland (HES-SO Valais), Sierre, Switzerland; Centre Universitaire d'Informatique, University of Geneva, Geneva, Switzerland
- Sebastian Otalora: Support Center for Advanced Neuroimaging, University Institute of Diagnostic and Interventional Neuroradiology, Bern, Switzerland
- Marek Wodzinski: Information Systems Institute, University of Applied Sciences Western Switzerland (HES-SO Valais), Sierre, Switzerland; Department of Measurement and Electronics, AGH University of Science and Technology, Krakow, Poland
- Selene Tomassini: Department of Information Engineering, Engineering Faculty, Università Politecnica delle Marche, Ancona, Italy
- Aldo Franco Dragoni: Department of Information Engineering, Engineering Faculty, Università Politecnica delle Marche, Ancona, Italy
- Juan Pedro Dominguez Morales: Robotics and Technology of Computers Lab., ETSII-EPS, Universidad de Sevilla, Sevilla, Spain; SCORE Lab, I3US, Universidad de Sevilla, Spain
- Lourdes Duran-Lopez: Robotics and Technology of Computers Lab., ETSII-EPS, Universidad de Sevilla, Sevilla, Spain; SCORE Lab, I3US, Universidad de Sevilla, Spain
- Simona Vatrano: Pathology Unit, Gravina Hospital Caltagirone ASP, Catania, Italy
- Henning Müller: Information Systems Institute, University of Applied Sciences Western Switzerland (HES-SO Valais), Sierre, Switzerland; Medical Faculty, University of Geneva, Geneva, Switzerland
- Manfredo Atzori: Information Systems Institute, University of Applied Sciences Western Switzerland (HES-SO Valais), Sierre, Switzerland; Department of Neurosciences, University of Padua, Padua, Italy

11. Marini N, Marchesin S, Otálora S, Wodzinski M, Caputo A, van Rijthoven M, Aswolinskiy W, Bokhorst JM, Podareanu D, Petters E, Boytcheva S, Buttafuoco G, Vatrano S, Fraggetta F, van der Laak J, Agosti M, Ciompi F, Silvello G, Müller H, Atzori M. Unleashing the potential of digital pathology data by training computer-aided diagnosis models without human annotations. NPJ Digit Med 2022; 5:102. [PMID: 35869179] [PMCID: PMC9307641] [DOI: 10.1038/s41746-022-00635-4]
Abstract
The digitalization of clinical workflows and the increasing performance of deep learning algorithms are paving the way towards new methods for tackling cancer diagnosis. However, the availability of medical specialists to annotate digitized images and free-text diagnostic reports does not scale with the need for large datasets required to train robust computer-aided diagnosis methods that can target the high variability of clinical cases and data produced. This work proposes and evaluates an approach to eliminate the need for manual annotations to train computer-aided diagnosis tools in digital pathology. The approach includes two components: automatically extracting semantically meaningful concepts from diagnostic reports, and using them as weak labels to train convolutional neural networks (CNNs) for histopathology diagnosis. The approach is trained (through 10-fold cross-validation) on 3,769 clinical images and reports provided by two hospitals, and tested on over 11,000 images from private and publicly available datasets. The CNN trained with automatically generated labels is compared with the same architecture trained with manual labels. Results show that combining text analysis and end-to-end deep neural networks allows building computer-aided diagnosis tools that reach solid performance (micro-accuracy = 0.908 at image level) based only on existing clinical data, without the need for manual annotations.

12. Wodzinski M, Daniol M, Socha M, Hemmerling D, Stanuch M, Skalski A. Deep learning-based framework for automatic cranial defect reconstruction and implant modeling. Comput Methods Programs Biomed 2022; 226:107173. [PMID: 36257198] [DOI: 10.1016/j.cmpb.2022.107173]
Abstract
BACKGROUND AND OBJECTIVE This article presents a robust, fast, and fully automatic method for personalized cranial defect reconstruction and implant modeling. METHODS We propose a two-step deep learning-based method using a modified U-Net architecture to perform the defect reconstruction, and a dedicated iterative procedure to improve the implant geometry, followed by an automatic generation of models ready for 3-D printing. We propose a cross-case augmentation based on imperfect image registration combining cases from different datasets. Additional ablation studies compare different augmentation strategies and other state-of-the-art methods. RESULTS We evaluate the method on three datasets introduced during the AutoImplant 2021 challenge, organized jointly with the MICCAI conference. We perform the quantitative evaluation using the Dice and boundary Dice coefficients, and the Hausdorff distance. The Dice coefficient, boundary Dice coefficient, and 95th percentile of the Hausdorff distance, averaged across all test sets, are 0.91, 0.94, and 1.53 mm, respectively. We perform an additional qualitative evaluation by 3-D printing and visualization in mixed reality to confirm the implant's usefulness. CONCLUSION The article proposes a complete pipeline that enables one to create a cranial implant model ready for 3-D printing. The described method is a greatly extended version of the method that scored 1st place in all AutoImplant 2021 challenge tasks. We freely release the source code, which, together with the open datasets, makes the results fully reproducible. The automatic reconstruction of cranial defects may enable manufacturing personalized implants in a significantly shorter time, possibly allowing the 3-D printing process to be performed directly during a given intervention. Moreover, we show the usability of the defect reconstruction in mixed reality, which may further reduce the surgery time.
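
Of the reported metrics, HD95 is the least standard to implement. A sketch of one way to compute it from binary volumes via distance transforms follows; the spacing handling and surface extraction are simplifying assumptions, not the paper's evaluation code, and both masks are assumed non-empty.

```python
import numpy as np
from scipy import ndimage

def hd95(mask_a: np.ndarray, mask_b: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    # Distance from every voxel to the nearest foreground voxel of each mask.
    dist_to_b = ndimage.distance_transform_edt(~b, sampling=spacing)
    dist_to_a = ndimage.distance_transform_edt(~a, sampling=spacing)
    # Surface voxels: foreground voxels with at least one background neighbor.
    surf_a = a & ~ndimage.binary_erosion(a)
    surf_b = b & ~ndimage.binary_erosion(b)
    distances = np.concatenate([dist_to_b[surf_a], dist_to_a[surf_b]])
    return float(np.percentile(distances, 95))
```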
Affiliation(s)
- Marek Wodzinski: Department of Measurement and Electronics, AGH University of Science and Technology, Krakow, Poland; MedApp S.A., Krakow, Poland; Information Systems Institute, University of Applied Sciences Western Switzerland, Sierre, Switzerland
- Mateusz Daniol: Department of Measurement and Electronics, AGH University of Science and Technology, Krakow, Poland; MedApp S.A., Krakow, Poland
- Miroslaw Socha: Department of Measurement and Electronics, AGH University of Science and Technology, Krakow, Poland
- Daria Hemmerling: Department of Measurement and Electronics, AGH University of Science and Technology, Krakow, Poland
- Maciej Stanuch: Department of Measurement and Electronics, AGH University of Science and Technology, Krakow, Poland; MedApp S.A., Krakow, Poland
- Andrzej Skalski: Department of Measurement and Electronics, AGH University of Science and Technology, Krakow, Poland; MedApp S.A., Krakow, Poland

13. Banzato T, Wodzinski M, Tauceri F, Donà C, Scavazza F, Müller H, Zotti A. An AI-Based Algorithm for the Automatic Classification of Thoracic Radiographs in Cats. Front Vet Sci 2021; 8:731936. [PMID: 34722699] [PMCID: PMC8554083] [DOI: 10.3389/fvets.2021.731936]
Abstract
An artificial intelligence (AI)-based computer-aided detection (CAD) algorithm to detect some of the most common radiographic findings in the feline thorax was developed and tested. The database used for training comprised radiographs acquired at two different institutions, and only correctly exposed and positioned radiographs were included. The radiographic findings included for training were: no findings, bronchial pattern, pleural effusion, mass, alveolar pattern, pneumothorax, and cardiomegaly. Multi-label convolutional neural networks (CNNs) were used to develop the CAD algorithm, and the performance of two different CNN architectures, ResNet 50 and Inception V3, was compared. Both architectures had an area under the receiver operating characteristic curve (AUC) above 0.9 for alveolar pattern, bronchial pattern, and pleural effusion, an AUC above 0.8 for no findings and pneumothorax, and an AUC above 0.7 for cardiomegaly. The AUC for mass was low (just above 0.5) for both architectures. No significant differences were evident in the diagnostic accuracy of the two architectures.
Affiliation(s)
- Tommaso Banzato: Department of Animal Medicine, Production and Health, University of Padua, Legnaro, Italy
- Marek Wodzinski: Department of Measurement and Electronics, AGH University of Science and Technology, Krakow, Poland; Information Systems Institute, University of Applied Sciences Western Switzerland (HES-SO Valais), Sierre, Switzerland
- Federico Tauceri: Department of Animal Medicine, Production and Health, University of Padua, Legnaro, Italy
- Chiara Donà: Department of Animal Medicine, Production and Health, University of Padua, Legnaro, Italy
- Filippo Scavazza: Department of Animal Medicine, Production and Health, University of Padua, Legnaro, Italy
- Henning Müller: Information Systems Institute, University of Applied Sciences Western Switzerland (HES-SO Valais), Sierre, Switzerland
- Alessandro Zotti: Department of Animal Medicine, Production and Health, University of Padua, Legnaro, Italy

14. Sikorska M, Skalski A, Wodzinski M, Witkowski A, Pellacani G, Ludzik J. Learning-based local quality assessment of reflectance confocal microscopy images for dermatology applications. Biocybern Biomed Eng 2021. [DOI: 10.1016/j.bbe.2021.05.009]

15. Wodzinski M, Ciepiela I, Kuszewski T, Kedzierawski P, Skalski A. Semi-Supervised Deep Learning-Based Image Registration Method with Volume Penalty for Real-Time Breast Tumor Bed Localization. Sensors (Basel) 2021; 21:4085. [PMID: 34198497] [PMCID: PMC8231789] [DOI: 10.3390/s21124085]
Abstract
Breast-conserving surgery requires supportive radiotherapy to prevent cancer recurrence. However, the task of localizing the tumor bed to be irradiated is not trivial. Automatic image registration could significantly aid tumor bed localization and lower the radiation dose delivered to the surrounding healthy tissues. This study proposes a novel image registration method dedicated to breast tumor bed localization that addresses the problem of missing data due to tumor resection and may be applied to real-time radiotherapy planning. We propose a deep learning-based nonrigid image registration method based on a modified U-Net architecture. The algorithm works simultaneously on several image resolutions to handle large deformations. Moreover, we propose a dedicated volume penalty that introduces medical knowledge about tumor resection into the registration process. The proposed method may be useful for improving real-time radiation therapy planning after tumor resection and thus reduce irradiation of the surrounding healthy tissues. The data used in this study consist of 30 computed tomography scans acquired in patients with diagnosed breast cancer, before and after tumor surgery. The method is evaluated using the target registration error between manually annotated landmarks, the ratio of tumor volume, and subjective visual assessment. We compare the proposed method to several other approaches and show that both the multilevel approach and the volume regularization improve the registration results. The mean target registration error is below 6.5 mm, and the relative volume ratio is close to zero. A registration time below 1 s enables real-time processing. These results show improvements compared to classical iterative methods and other learning-based approaches that do not introduce knowledge about tumor resection into the registration process. In future research, we plan to propose a method dedicated to automatic localization of missing regions that may be used to automatically segment tumors in the source image and scars in the target image.
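
One way to realize the volume penalty described above is to warp the annotated tumor mask with the predicted displacement field and drive its remaining volume towards zero. The PyTorch sketch below is an interpretation for illustration, with assumed tensor layouts, not the paper's exact term.

```python
import torch
import torch.nn.functional as F

def volume_penalty(tumor_mask: torch.Tensor, displacement: torch.Tensor,
                   grid: torch.Tensor) -> torch.Tensor:
    """tumor_mask: (N, 1, D, H, W); displacement/grid: (N, D, H, W, 3).

    grid holds the identity sampling coordinates in [-1, 1], as expected
    by torch.nn.functional.grid_sample for volumetric inputs.
    """
    warped = F.grid_sample(tumor_mask, grid + displacement, align_corners=False)
    # Relative remaining volume of the resected tumor; the target is zero.
    return warped.sum() / (tumor_mask.sum() + 1e-8)
```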
Affiliation(s)
- Marek Wodzinski: Department of Measurement and Electronics, AGH University of Science and Technology, PL30059 Kraków, Poland
- Izabela Ciepiela: Department of Radiotherapy, The Holycross Cancer Center, PL25734 Kielce, Poland
- Tomasz Kuszewski: Department of Medical Physics, The Holycross Cancer Center, PL25734 Kielce, Poland; Collegium Medicum, Institute of Health Sciences, Jan Kochanowski University, PL25369 Kielce, Poland
- Piotr Kedzierawski: Department of Radiotherapy, The Holycross Cancer Center, PL25734 Kielce, Poland; Collegium Medicum, Institute of Health Sciences, Jan Kochanowski University, PL25369 Kielce, Poland
- Andrzej Skalski: Department of Measurement and Electronics, AGH University of Science and Technology, PL30059 Kraków, Poland

16. Banzato T, Wodzinski M, Burti S, Osti VL, Rossoni V, Atzori M, Zotti A. Automatic classification of canine thoracic radiographs using deep learning. Sci Rep 2021; 11:3964. [PMID: 33597566] [PMCID: PMC7889925] [DOI: 10.1038/s41598-021-83515-3]
Abstract
The interpretation of thoracic radiographs is a challenging and error-prone task for veterinarians. Despite recent advancements in machine learning and computer vision, the development of computer-aided diagnostic systems for radiographs remains a challenging and unsolved problem, particularly in the context of veterinary medicine. In this study, a novel method based on a multi-label deep convolutional neural network (CNN) was developed for the classification of thoracic radiographs in dogs. All thoracic radiographs of dogs acquired between 2010 and 2020 at the institution were retrospectively collected. Radiographs were taken with two different radiograph acquisition systems and were divided into two data sets accordingly. One data set (Data Set 1) was used for training and testing, and the other (Data Set 2) was used to test the generalization ability of the CNNs. The radiographic findings used as non-mutually exclusive labels to train the CNNs were: unremarkable, cardiomegaly, alveolar pattern, bronchial pattern, interstitial pattern, mass, pleural effusion, pneumothorax, and megaesophagus. Two different CNNs, based on the ResNet-50 and DenseNet-121 architectures, respectively, were developed and tested. The CNN based on ResNet-50 had an area under the receiver operating characteristic curve (AUC) above 0.8 for all the included radiographic findings except for bronchial and interstitial patterns, on both Data Set 1 and Data Set 2. The CNN based on DenseNet-121 had a lower overall performance. Statistically significant differences in generalization ability between the two CNNs were evident, with the CNN based on ResNet-50 showing better performance for alveolar pattern, interstitial pattern, megaesophagus, and pneumothorax.
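
The multi-label setup can be sketched as a ResNet-50 backbone with one independent sigmoid output per finding, trained with binary cross-entropy since the labels are not mutually exclusive. The following snippet is a minimal illustration with placeholder hyperparameters, not the study's training code.

```python
import torch
import torch.nn as nn
from torchvision import models

FINDINGS = ["unremarkable", "cardiomegaly", "alveolar pattern",
            "bronchial pattern", "interstitial pattern", "mass",
            "pleural effusion", "pneumothorax", "megaesophagus"]

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(FINDINGS))
criterion = nn.BCEWithLogitsLoss()  # one independent binary task per finding

# One illustrative step on a dummy batch.
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, 2, (4, len(FINDINGS))).float()
loss = criterion(model(images), labels)
loss.backward()
```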
Affiliation(s)
- Tommaso Banzato: Department of Animal Medicine, Production and Health, University of Padua, 35020 Legnaro (PD), Italy
- Marek Wodzinski: Department of Measurement and Electronics, AGH University of Science and Technology, 32059 Kraków, Poland
- Silvia Burti: Department of Animal Medicine, Production and Health, University of Padua, 35020 Legnaro (PD), Italy
- Valentina Longhin Osti: Department of Animal Medicine, Production and Health, University of Padua, 35020 Legnaro (PD), Italy
- Valentina Rossoni: Department of Animal Medicine, Production and Health, University of Padua, 35020 Legnaro (PD), Italy
- Manfredo Atzori: Information Systems Institute, University of Applied Sciences Western Switzerland (HES-SO Valais), 3960 Sierre, Switzerland; Department of Neuroscience, University of Padua, 35128 Padua, Italy
- Alessandro Zotti: Department of Animal Medicine, Production and Health, University of Padua, 35020 Legnaro (PD), Italy

17. Wodzinski M, Skalski A. Multistep, automatic and nonrigid image registration method for histology samples acquired using multiple stains. Phys Med Biol 2021; 66:025006. [PMID: 33197906] [DOI: 10.1088/1361-6560/abcad7]
Abstract
The use of multiple dyes during histological sample preparation can reveal distinct tissue properties. However, since the slide preparation differs for each dye, the tissue slides are deformed, and nonrigid registration is required before further processing. The registration of histology images is complicated by: (i) the high resolution of histology images, (ii) complex, large, nonrigid deformations, and (iii) differences in appearance and partially missing data due to the use of multiple dyes. In this work, we propose a multistep, automatic, nonrigid image registration method dedicated to histology samples acquired with multiple stains. The proposed method consists of a feature-based affine registration, an exhaustive rotation alignment, an iterative, intensity-based affine registration, and a nonrigid alignment based on the modality-independent neighbourhood descriptor coupled with the Demons algorithm. A dedicated failure-detection mechanism makes the method fully automatic, without the need for any manual interaction. The described method was proposed by the AGH team during the Automatic Non-rigid Histological Image Registration (ANHIR) challenge. The ANHIR dataset consists of 481 image pairs annotated by histology experts, and the challenge submissions were evaluated using an independent, server-side evaluation tool. The main evaluation criterion was the target registration error normalized by the image diagonal. The median of median target registration errors is below 0.19%. The proposed method is currently second-best in terms of the average ranking of median target registration error, without statistically significant differences compared to the top-ranked method. We provide open access to the software and the parameters used, making the results fully reproducible.
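
The evaluation metric, the target registration error normalized by the image diagonal (rTRE), is straightforward to reproduce. A short sketch on landmark arrays follows; the array shapes are assumed conventions.

```python
import numpy as np

def relative_tre(warped_landmarks: np.ndarray, target_landmarks: np.ndarray,
                 image_shape) -> np.ndarray:
    """Per-landmark TRE normalized by the image diagonal; arrays are (N, 2)."""
    diagonal = np.sqrt(image_shape[0] ** 2 + image_shape[1] ** 2)
    tre = np.linalg.norm(warped_landmarks - target_landmarks, axis=1)
    return tre / diagonal  # report, e.g., the median over landmarks
```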
Affiliation(s)
- Marek Wodzinski: Department of Measurement and Electronics, AGH University of Science and Technology, al. Mickiewicza 30, PL30059 Cracow, Poland

18. Wodzinski M, Müller H. DeepHistReg: Unsupervised Deep Learning Registration Framework for Differently Stained Histology Samples. Comput Methods Programs Biomed 2021; 198:105799. [PMID: 33137701] [DOI: 10.1016/j.cmpb.2020.105799]
Abstract
BACKGROUND AND OBJECTIVE The use of several stains during histology sample preparation can be useful for fusing complementary information about different tissue structures. It reveals distinct tissue properties that, combined, may be useful for grading, classification, or 3-D reconstruction. Nevertheless, since the slide preparation is different for each stain and the procedure uses consecutive slices, the tissue undergoes complex and possibly large deformations. Therefore, a nonrigid registration is required before further processing. The nonrigid registration of differently stained histology images is a challenging task because: (i) the registration must be fully automatic, (ii) the histology images are extremely high-resolution, (iii) the registration should be as fast as possible, (iv) there are significant differences in tissue appearance, and (v) there are few unique features due to the repetitive texture. METHODS In this article, we propose a deep learning-based solution to histology registration. We describe a registration framework dedicated to high-resolution histology images that can perform the registration in real time. The framework consists of an automatic background segmentation, an iterative initial rotation search, and learning-based affine/nonrigid registration. RESULTS We evaluate our approach using an open dataset provided for the Automatic Non-rigid Histological Image Registration (ANHIR) challenge organized jointly with the IEEE ISBI 2019 conference, and compare our solution to the challenge participants using the server-side evaluation tool provided by the challenge organizers. Following the challenge evaluation criteria, we use the target registration error (TRE) as the evaluation metric. Our algorithm provides registration accuracy close to that of the best-scoring teams (median rTRE of 0.19% of the image diagonal) while being significantly faster (the average registration time is about 2 seconds). CONCLUSIONS The proposed framework provides results, in terms of the TRE, comparable to the best-performing state-of-the-art methods. However, it is significantly faster and thus potentially more useful in clinical practice, where large numbers of histology images are processed. The proposed method is of particular interest to researchers requiring accurate, real-time, nonrigid registration of high-resolution histology images, for whom the processing time of traditional, iterative methods is unacceptable. We provide free access to the software implementation of the method, including training and inference code, as well as pretrained models. Since the ANHIR dataset is open, this makes the results fully and easily reproducible.
Affiliation(s)
- Marek Wodzinski: Department of Measurement and Electronics, AGH University of Science and Technology, Kraków, Poland
- Henning Müller: Information Systems Institute, University of Applied Sciences Western Switzerland (HES-SO Valais), Sierre, Switzerland

19. Wodzinski M, Banzato T, Atzori M, Andrearczyk V, Cid YD, Müller H. Training Deep Neural Networks for Small and Highly Heterogeneous MRI Datasets for Cancer Grading. Annu Int Conf IEEE Eng Med Biol Soc 2020; 2020:1758-1761. [PMID: 33018338] [DOI: 10.1109/embc44109.2020.9175634]
Abstract
Using medical images recorded in clinical practice has the potential to be a game-changer in the application of machine learning for medical decision support. Thousands of medical images are produced in daily clinical activity, and the diagnoses made by medical doctors on these images represent a source of knowledge for training machine learning algorithms for scientific research or computer-aided diagnosis. However, the requirement of manual data annotation and the heterogeneity of images and annotations make it difficult to develop algorithms that are effective on images from different centers or sources (scanner manufacturers, protocols, etc.). The objective of this article is to explore the opportunities and limits of highly heterogeneous biomedical data, since many medical datasets are small and pose a challenge for machine learning techniques. In particular, we focus on a small dataset targeting meningioma grading. Meningioma grading is crucial for patient treatment and prognosis and is normally performed by histological examination, but recent articles showed that it can also be done on magnetic resonance images (MRI), i.e., non-invasively. Our dataset consists of 174 T1-weighted MRI images of patients with meningioma, divided into 126 benign and 48 atypical/anaplastic cases, acquired using 26 different MRI scanners and 125 acquisition protocols, which illustrates the enormous variability in the data. The preprocessing steps include tumor segmentation, spatial image normalization, and data augmentation based on color and affine transformations. The preprocessed cases are passed to a carefully trained 2-D convolutional neural network. Accuracy above 74% was obtained, with high-grade tumor recall also above 74%. The results are encouraging considering the limited size and high heterogeneity of the dataset. The proposed methodology can be useful for other problems involving the classification of small and highly heterogeneous datasets.
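
The color- and affine-based augmentation mentioned above can be sketched with standard torchvision transforms; the parameter ranges below are assumed placeholders rather than the study's settings.

```python
from torchvision import transforms

# Random affine and color perturbations applied to PIL images of the
# spatially normalized, tumor-centered slices.
augment = transforms.Compose([
    transforms.RandomAffine(degrees=15, translate=(0.1, 0.1),
                            scale=(0.9, 1.1), shear=5),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
])
```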
|
20
|
Borovec J, Kybic J, Arganda-Carreras I, Sorokin DV, Bueno G, Khvostikov AV, Bakas S, Chang EIC, Heldmann S, Kartasalo K, Latonen L, Lotz J, Noga M, Pati S, Punithakumar K, Ruusuvuori P, Skalski A, Tahmasebi N, Valkonen M, Venet L, Wang Y, Weiss N, Wodzinski M, Xiang Y, Xu Y, Yan Y, Yushkevich P, Zhao S, Munoz-Barrutia A. ANHIR: Automatic Non-Rigid Histological Image Registration Challenge. IEEE Trans Med Imaging 2020; 39:3042-3052. [PMID: 32275587 PMCID: PMC7584382 DOI: 10.1109/tmi.2020.2986331] [Citation(s) in RCA: 44] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/10/2023]
Abstract
The Automatic Non-rigid Histological Image Registration (ANHIR) challenge was organized to compare the performance of image registration algorithms on several kinds of microscopy histology images in a fair and independent manner. We assembled 8 datasets, containing 355 images with 18 different stains, resulting in 481 image pairs to be registered. Registration accuracy was evaluated using manually placed landmarks. In total, 256 teams registered for the challenge, 10 submitted results, and 6 participated in the workshop. Here, we present the results of 7 well-performing methods from the challenge together with 6 well-known existing methods. The best methods used a coarse but robust initial alignment followed by non-rigid registration, employed multiresolution schemes, and were carefully tuned for the data at hand. They outperformed off-the-shelf methods, mostly by being more robust. The best methods could successfully register over 98% of all landmarks, and their mean landmark registration accuracy (TRE) was 0.44% of the image diagonal. The challenge remains open to submissions, and all images are available for download.
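The evaluation metric can be stated concretely: the target registration error for a landmark is the distance between the warped source landmark and its target counterpart, and reporting it relative to the image diagonal gives the rTRE percentages quoted above. A minimal sketch, with illustrative names:

# Minimal sketch of landmark-based registration evaluation in the
# ANHIR style: per-landmark TRE normalized by the image diagonal.
# Function and variable names are illustrative.
import numpy as np

def relative_tre(warped_landmarks: np.ndarray,
                 target_landmarks: np.ndarray,
                 image_shape: tuple) -> np.ndarray:
    """Per-landmark TRE as a fraction of the image diagonal.
    Landmarks are (N, 2) arrays of (row, col) coordinates."""
    diagonal = np.sqrt(sum(s ** 2 for s in image_shape[:2]))
    errors = np.linalg.norm(warped_landmarks - target_landmarks, axis=1)
    return errors / diagonal

# Example: summarize with the median rTRE in percent.
# rtre = relative_tre(warped, target, image.shape)
# print(f"median rTRE: {100 * np.median(rtre):.2f}% of the diagonal")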
|
21
|
Wodzinski M, Pajak M, Skalski A, Witkowski A, Pellacani G, Ludzik J. Automatic Quality Assessment of Reflectance Confocal Microscopy Mosaics using Attention-Based Deep Neural Network. Annu Int Conf IEEE Eng Med Biol Soc 2020; 2020:1824-1827. [PMID: 33018354 DOI: 10.1109/embc44109.2020.9176557] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/06/2023]
Abstract
Skin cancers are the most common cancers, with a rising incidence, and a valid, early diagnosis may significantly reduce their morbidity and mortality. Reflectance confocal microscopy (RCM) is a relatively new, non-invasive imaging technique that allows screening lesions at cellular resolution. However, one of the main disadvantages of RCM is frequently occurring artifacts, which make the diagnostic process more time-consuming and hard to automate using, e.g., an end-to-end deep learning approach. A tool to automatically determine RCM mosaic quality could be beneficial both for lesion classification and for informing the user (dermatologist) about mosaic quality in real time, during the examination procedure. In this work, we propose an attention-based deep network to automatically determine whether a given RCM mosaic has acceptable quality. We achieved an accuracy above 87% on the test set, which may considerably improve further classification results and the RCM-based examination.
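The attention mechanism is not specified in the abstract; one common realization is attention-weighted pooling of CNN feature maps feeding a binary quality head. A minimal sketch under that assumption (the backbone and all names are illustrative, not the authors' architecture):

# Minimal sketch of an attention-pooled CNN for binary quality
# assessment of grayscale RCM mosaics. The attention mechanism and
# backbone are illustrative assumptions.
import torch
import torch.nn as nn

class AttentionQualityNet(nn.Module):
    def __init__(self, feature_dim: int = 64):
        super().__init__()
        self.backbone = nn.Sequential(          # small CNN feature extractor
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, feature_dim, 3, padding=1), nn.ReLU(),
        )
        self.attention = nn.Conv2d(feature_dim, 1, 1)  # spatial attention map
        self.classifier = nn.Linear(feature_dim, 2)    # acceptable vs. not

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = self.backbone(x)                        # (B, C, H, W)
        weights = torch.softmax(
            self.attention(features).flatten(2), dim=-1)   # (B, 1, H*W)
        pooled = (features.flatten(2) * weights).sum(-1)   # (B, C)
        return self.classifier(pooled)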
|
22
|
Wodzinski M, Skalski A, Hemmerling D, Orozco-Arroyave JR, Noth E. Deep Learning Approach to Parkinson's Disease Detection Using Voice Recordings and Convolutional Neural Network Dedicated to Image Classification. Annu Int Conf IEEE Eng Med Biol Soc 2020; 2019:717-720. [PMID: 31945997 DOI: 10.1109/embc.2019.8856972] [Citation(s) in RCA: 25] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
This study presents an approach to Parkinson's disease detection using vowels with sustained phonation and a ResNet architecture originally dedicated to image classification. We calculated spectrograms of the audio recordings and used them as image input to the ResNet architecture pre-trained using the ImageNet and SVD databases. To prevent overfitting, the dataset was strongly augmented in the time domain. The Parkinson's dataset (from the PC-GITA database) consists of 100 patients (50 healthy, 50 diagnosed with Parkinson's disease). Each patient was recorded 3 times. The accuracy obtained on the validation set is above 90%, which is comparable to current state-of-the-art methods. The results are promising because features learned on natural images turned out to transfer to artificial images representing the spectrogram of the voice signal. Moreover, we showed that Parkinson's disease can be detected successfully using only frequency-based features. A spectrogram is a visual representation of the frequency spectrum of a signal and allows the frequency content to be followed over time.
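The pipeline described above (spectrogram as image input to a pretrained ResNet) can be sketched as follows; the file name, library choices, and parameter values are illustrative assumptions, not the study's implementation.

# Minimal sketch of feeding a voice recording to an image-classification
# ResNet via a spectrogram. Library choices and parameters are
# illustrative assumptions.
import torch
import torchaudio
import torchvision

# "sustained_vowel.wav" is a hypothetical file name.
waveform, sample_rate = torchaudio.load("sustained_vowel.wav")
spectrogram = torchaudio.transforms.Spectrogram(n_fft=512)(waveform)
image = spectrogram.log1p()                      # compress dynamic range
if image.shape[0] == 1:
    image = image.repeat(3, 1, 1)                # grayscale -> 3 channels
image = torch.nn.functional.interpolate(
    image.unsqueeze(0), size=(224, 224), mode="bilinear")  # ResNet input size

model = torchvision.models.resnet18(weights="IMAGENET1K_V1")
model.fc = torch.nn.Linear(model.fc.in_features, 2)  # healthy vs. Parkinson's
logits = model(image)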
|
23
|
Wodzinski M, Skalski A, Witkowski A, Pellacani G, Ludzik J. Convolutional Neural Network Approach to Classify Skin Lesions Using Reflectance Confocal Microscopy. Annu Int Conf IEEE Eng Med Biol Soc 2019; 2019:4754-4757. [PMID: 31946924 DOI: 10.1109/embc.2019.8856731] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
We propose an approach based on a convolutional neural network to classify skin lesions using reflectance confocal microscopy (RCM) mosaics. Skin cancers are the most common type of cancer, and a correct, early diagnosis significantly lowers both morbidity and mortality. RCM is an in-vivo, non-invasive screening tool that produces virtual biopsies of skin lesions, but its proficient and safe use requires hard-to-obtain expertise. Therefore, it may be useful to have an additional tool to aid diagnosis. The proposed network is based on the ResNet architecture. The dataset consists of 429 RCM mosaics divided into 3 classes: melanoma, basal cell carcinoma, and benign naevi, with the ground truth confirmed by histopathological examination. The test set classification accuracy was 87%, higher than the accuracy achieved by medical confocal users. The results show that the proposed classification system can be a useful tool to aid early, noninvasive melanoma detection.
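As in the voice-spectrogram study above, the classifier is a ResNet adapted to a new label set; a minimal transfer-learning sketch with a frozen backbone and a new three-class head (hyperparameters and the freezing choice are illustrative assumptions, not the authors' training recipe):

# Minimal transfer-learning sketch for three-class RCM mosaic
# classification with a ResNet backbone. Freezing the backbone and the
# hyperparameters below are illustrative assumptions.
import torch
import torchvision

model = torchvision.models.resnet18(weights="IMAGENET1K_V1")
for param in model.parameters():
    param.requires_grad = False                 # freeze pretrained features
model.fc = torch.nn.Linear(model.fc.in_features, 3)  # melanoma / BCC / naevus

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = torch.nn.CrossEntropyLoss()

def training_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimization step on a batch of mosaics and class labels."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()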
|
24
|
Wodzinski M, Skalski A, Ciepiela I, Kuszewski T, Kedzierawski P, Gajda J. Improving oncoplastic breast tumor bed localization for radiotherapy planning using image registration algorithms. Phys Med Biol 2018; 63:035024. [PMID: 29293469 DOI: 10.1088/1361-6560/aaa4b1] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
Knowledge about tumor bed localization and its shape is a crucial factor in preventing irradiation of healthy tissues during supportive radiotherapy and, as a result, cancer recurrence. The localization process is especially hard for tumors located near soft tissues, which undergo complex, nonrigid deformations. Among them, breast cancer can be considered the most representative example. A natural approach to improving tumor bed localization is the use of image registration algorithms. However, this involves two aspects that are not common in typical medical image registration: the real deformation field is discontinuous, and there is no direct correspondence between the cancer and its bed in the source and target 3D images, respectively, because the tumor no longer exists during radiotherapy planning. Therefore, a traditional evaluation approach based on known, smooth deformations and the target registration error is not directly applicable. In this work, we propose alternative artificial deformations that model the tumor bed creation process. We perform a comprehensive evaluation of the most commonly used deformable registration algorithms: B-Splines free-form deformations (B-Splines FFD), different variants of the Demons algorithm, and TV-L1 optical flow. The evaluation procedure includes quantitative assessment on the dedicated artificial deformations, target registration error calculation, 3D contour propagation, and visual judgment by medical experts. The results demonstrate that the image registration methods currently applied in practice (rigid registration and B-Splines FFD) are not able to correctly reconstruct discontinuous deformation fields. We show that the symmetric Demons algorithm provides the most accurate soft-tissue alignment in terms of the ability to reconstruct the deformation field, target registration error, and relative tumor volume change, while B-Splines FFD and TV-L1 optical flow are not an appropriate choice for the breast tumor bed localization problem, even though their visual alignment seems better than for the Demons algorithm. However, no algorithm could recover the deformation field with sufficient accuracy in terms of vector length and rotation angle differences.
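For orientation, a symmetric-Demons registration of the kind the study found most accurate can be run in a few lines with SimpleITK; the file names and parameter values below are illustrative assumptions, not the study's configuration.

# Minimal sketch of symmetric-Demons deformable registration with
# SimpleITK. File names and parameter values are illustrative.
import SimpleITK as sitk

fixed = sitk.Cast(sitk.ReadImage("planning_ct.nii.gz"), sitk.sitkFloat32)
moving = sitk.Cast(sitk.ReadImage("preoperative_ct.nii.gz"), sitk.sitkFloat32)

demons = sitk.FastSymmetricForcesDemonsRegistrationFilter()
demons.SetNumberOfIterations(100)
demons.SetStandardDeviations(1.5)        # Gaussian smoothing of the field

displacement_field = demons.Execute(fixed, moving)
transform = sitk.DisplacementFieldTransform(displacement_field)
warped = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)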
Affiliation(s)
- Marek Wodzinski
- AGH University of Science and Technology, Department of Measurement and Electronics, al. A. Mickiewicza 30, PL-30059 Kraków, Poland. Author to whom any correspondence should be addressed.
|
25
|
Wodzinski M, Skalski A, Kedzierawski P, Kuszewski T, Ciepiela I. Usage of ICP Algorithm for Initial Alignment in B-Splines FFD Image Registration in Breast Cancer Radiotherapy Planning. Recent Developments and Achievements in Biocybernetics and Biomedical Engineering 2018. [DOI: 10.1007/978-3-319-66905-2_12] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/29/2022]
|