1
Nie Q, Zhang X, Hu Y, Gong M, Liu J. Medical image registration and its application in retinal images: a review. Vis Comput Ind Biomed Art 2024; 7:21. PMID: 39167337; PMCID: PMC11339199; DOI: 10.1186/s42492-024-00173-8.
Abstract
Medical image registration is vital for disease diagnosis and treatment because it can merge the diverse information of images captured at different times, from different angles, or with different modalities. Although several surveys have reviewed the development of medical image registration, they have not systematically summarized the existing methods. To this end, a comprehensive review of these methods is provided from traditional and deep-learning-based perspectives, aiming to help readers quickly understand the development of medical image registration. In particular, we review recent advances in retinal image registration, which has not attracted much attention. In addition, current challenges in retinal image registration are discussed, and insights and prospects for future research are provided.
Affiliation(s)
- Qiushi Nie
- Research Institute of Trustworthy Autonomous Systems and Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, 518055, China
- Xiaoqing Zhang
- Research Institute of Trustworthy Autonomous Systems and Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, 518055, China
- Center for High Performance Computing and Shenzhen Key Laboratory of Intelligent Bioinformatics, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Yan Hu
- Research Institute of Trustworthy Autonomous Systems and Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, 518055, China
- Mingdao Gong
- Research Institute of Trustworthy Autonomous Systems and Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, 518055, China
- Jiang Liu
- Research Institute of Trustworthy Autonomous Systems and Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, 518055, China.
- Singapore Eye Research Institute, Singapore, 169856, Singapore.
- State Key Laboratory of Ophthalmology, Optometry and Visual Science, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China.
2
Fan X, Li Z, Li Z, Wang X, Liu R, Luo Z, Huang H. Automated Learning for Deformable Medical Image Registration by Jointly Optimizing Network Architectures and Objective Functions. IEEE Trans Image Process 2023; 32:4880-4892. PMID: 37624710; DOI: 10.1109/TIP.2023.3307215.
Abstract
Deformable image registration plays a critical role in various medical image analysis tasks. A successful registration algorithm, whether derived from conventional energy optimization or from deep networks, requires considerable effort from computer experts to design a suitable registration energy or to carefully tune network architectures for the medical data available for a given registration task/scenario. This paper proposes an automated learning registration algorithm (AutoReg) that cooperatively optimizes both architectures and their corresponding training objectives, enabling non-computer experts to conveniently find off-the-shelf registration algorithms for various registration scenarios. Specifically, we establish a triple-level framework that embraces the search for both network architectures and objectives with cooperating optimization. Extensive experiments on multiple volumetric datasets and various registration scenarios demonstrate that AutoReg can automatically learn an optimal deep registration network for given volumes and achieve state-of-the-art performance. The automatically learned network also improves computational efficiency over the mainstream UNet architecture, reducing the runtime for a volume pair from 0.558 to 0.270 seconds on the same configuration.
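A toy sketch of the underlying idea, not the authors' triple-level scheme: score candidate architecture/objective pairs on held-out volume pairs and keep the best. Here `train_fn`, `val_score_fn`, and the search spaces are hypothetical placeholders.

```python
import itertools

def search_registration_setup(train_fn, val_score_fn, arch_space, loss_weight_space):
    """Toy cooperative search: evaluate each (architecture, loss-weighting)
    pair by training briefly and scoring registration quality (e.g., mean
    Dice of warped labels) on held-out volume pairs. AutoReg optimizes both
    jointly rather than exhaustively, but the objective is analogous."""
    best_setup, best_score = None, float("-inf")
    for arch, weights in itertools.product(arch_space, loss_weight_space):
        model = train_fn(arch, weights)   # short proxy training run
        score = val_score_fn(model)       # registration quality on validation pairs
        if score > best_score:
            best_setup, best_score = (arch, weights), score
    return best_setup, best_score
```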
3
Babaei S, Dai B, Abbey CK, Ambreen Y, Dobrucki WL, Insana MF. Monitoring Muscle Perfusion in Rodents During Short-Term Ischemia Using Power Doppler Ultrasound. Ultrasound Med Biol 2023; 49:1465-1475. PMID: 36967332; PMCID: PMC10106419; DOI: 10.1016/j.ultrasmedbio.2023.02.013.
Abstract
OBJECTIVE: The aim of this work was to evaluate the reliability of power Doppler ultrasound (PD-US) measurements made without contrast enhancement to monitor temporal changes in peripheral blood perfusion. METHODS: In pre-clinical rodent studies, we found that combinations of spatial registration and clutter filtering techniques applied to PD-US signals reproducibly tracked blood perfusion in skeletal muscle. Perfusion was monitored while hindlimb blood flow was modulated. First, in invasive studies, PD-US measurements in deep muscle were compared with laser speckle contrast imaging (LSCI) of superficial tissues made before, during, and after short-term arterial clamping. Then, in non-invasive studies, a pressure cuff was employed to generate longer-duration hindlimb ischemia. Here, B-mode imaging was also applied to measure flow-mediated dilation of the femoral artery while, simultaneously, PD-US was used to monitor downstream muscle perfusion and quantify reactive hyperemia. Measurements in adult male and female mice and rats, some with exercise conditioning, were included to explore biological variables. RESULTS: The PD-US methods were validated through comparisons with LSCI measurements. As expected, no significant differences were found between sexes or fitness levels in flow-mediated dilation or reactive hyperemia estimates, although post-ischemic perfusion was enhanced with exercise conditioning, suggesting there could be differences between the hyperemic responses of conduit and resistive vessels. CONCLUSION: Overall, we found that non-contrast PD-US imaging can reliably monitor relative spatiotemporal changes in muscle perfusion. This study supports the development of PD-US methods for monitoring perfusion changes in patients at risk for peripheral artery disease.
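The abstract does not commit to a specific clutter filter; a common choice for separating blood signal from slowly varying tissue in a frame stack is an SVD filter on the Casorati matrix, sketched here purely as an illustration.

```python
import numpy as np

def svd_clutter_filter(frames, n_clutter=2):
    """Remove slowly varying tissue signal from an ultrasound frame stack
    (frames: T x H x W) by zeroing the largest singular components of the
    Casorati matrix; the residual mean energy approximates a power Doppler
    map. One common clutter filter, not necessarily the paper's."""
    T, H, W = frames.shape
    casorati = frames.reshape(T, H * W).astype(np.float64)
    U, s, Vt = np.linalg.svd(casorati, full_matrices=False)
    s[:n_clutter] = 0.0                     # discard tissue/clutter subspace
    blood = (U * s) @ Vt                    # reconstruct blood-dominated signal
    return (blood ** 2).mean(axis=0).reshape(H, W)
```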
Affiliation(s)
- Somaye Babaei
- Department of Bioengineering, University of Illinois at Urbana-Champaign, Urbana, IL, USA
- Bingze Dai
- Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, IL, USA
- Craig K Abbey
- Department of Psychological and Brain Sciences, University of California, Santa Barbara, CA, USA
- Yamenah Ambreen
- Department of Bioengineering, University of Illinois at Urbana-Champaign, Urbana, IL, USA
- Wawrzyniec L Dobrucki
- Department of Bioengineering, University of Illinois at Urbana-Champaign, Urbana, IL, USA; Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL, USA; Department of Biomedical and Translational Sciences, Carle Illinois College of Medicine, University of Illinois at Urbana-Champaign, Urbana, IL, USA
- Michael F Insana
- Department of Bioengineering, University of Illinois at Urbana-Champaign, Urbana, IL, USA; Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL, USA
4
Liu X, Li S, Wang B, Xu L, Gao Z, Yang G. Motion estimation based on projective information disentanglement for 3D reconstruction of rotational coronary angiography. Comput Biol Med 2023; 157:106743. PMID: 36934532; DOI: 10.1016/j.compbiomed.2023.106743.
Abstract
The 2D projection space-based motion compensation reconstruction (2D-MCR) is a representative approach to 3D reconstruction of rotational coronary angiography owing to its high efficiency. However, because they lack accurate motion estimation for overlapping projection pixels, existing 2D-MCR methods may still exhibit under-sampling artifacts or lose accuracy in cases with strong cardiac motion. To overcome this, we propose a motion estimation approach based on projective information disentanglement (PID-ME) for 3D reconstruction of rotational coronary angiography. The reconstruction adopts the 2D-MCR framework and is referred to as 2D-PID-MCR. The PID-ME consists of two parts: generation of the reference projection sequence based on the fast simplified distance-driven projector (FSDDP) algorithm, and motion estimation and correction based on the projective average minimal distance (PAMD) measure. The FSDDP algorithm generates the reference projection sequence quickly and greatly accelerates the whole reconstruction. The PAMD model disentangles the projection information effectively and accurately estimates the motion of both overlapping and non-overlapping projection pixels. The main contribution of this study is the construction of 2D-PID-MCR, which overcomes the inherent limitations of existing 2D-MCR methods. Simulated and clinical experiments show that the PID-ME, consisting of FSDDP and PAMD, estimates the motion of the projection sequence accurately and efficiently. Our 2D-PID-MCR method outperforms state-of-the-art approaches in terms of accuracy and real-time performance.
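As a rough illustration of an average-minimal-distance measure between a target projection and a reference projection, here is a simplified 2D stand-in; the paper's disentanglement of overlapping pixels is not reproduced.

```python
import numpy as np
from scipy.spatial import cKDTree

def average_minimal_distance(P, Q):
    """Symmetric mean nearest-neighbour distance between two 2D point sets
    (e.g., projected vessel centerline points vs. a reference projection).
    A simplified stand-in for the paper's PAMD measure."""
    d_pq, _ = cKDTree(Q).query(P)   # each point of P to its closest point of Q
    d_qp, _ = cKDTree(P).query(Q)   # and vice versa, for symmetry
    return 0.5 * (d_pq.mean() + d_qp.mean())
```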
Affiliation(s)
- Xiujian Liu
- School of Biomedical Engineering, Sun Yat-sen University, Shenzhen, China
- Si Li
- School of Biomedical Engineering, Sun Yat-sen University, Shenzhen, China
- Bin Wang
- Department of Cardiology, the First Affiliated Hospital of Shantou University Medical College, Shantou, China; The Clinical Research Center of the First Affiliated Hospital of Shantou University Medical College, Shantou, China
- Lin Xu
- General Hospital of the Southern Theatre Command, PLA and The First School of Clinical Medicine, Southern Medical University, Guangzhou, China
- Zhifan Gao
- School of Biomedical Engineering, Sun Yat-sen University, Shenzhen, China
- Guang Yang
- National Heart and Lung Institute, Imperial College London, London, UK; Cardiovascular Research Centre, Royal Brompton Hospital, London, UK
5
Oonk JGM, Dobbe JGG, Strackee SD, Strijkers GJ, Streekstra GJ. Quantification of the methodological error in kinematic evaluation of the DRUJ using dynamic CT. Sci Rep 2023; 13:3159. PMID: 36823242; PMCID: PMC9950078; DOI: 10.1038/s41598-023-29726-2.
Abstract
Distal radio-ulnar joint (DRUJ) motion analysis using dynamic CT is gaining popularity. Following scanning and segmentation, 3D bone models are registered to (4D-)CT target frames. Imaging errors such as low signal-to-noise ratio (SNR), limited Z-coverage, and motion artefacts influence registration, causing misinterpretation of joint motion; this necessitates quantification of the methodological error. A cadaver arm and a dynamic phantom were subjected to multiple 4D-CT scans, while the tube charge-time product and phantom angular velocity were varied, to evaluate the effects of SNR and motion artefacts on registration accuracy and precision. 4D-CT Z-coverage is limited by the scanner. To quantify the effects of different Z-coverages on registration accuracy and precision, 4D-CT was simulated by acquiring multiple spiral 3D-CT scans of the cadaver arm, and Z-coverage was varied by clipping the 3D bone models prior to registration. The position of the radius relative to the ulna was obtained from the segmentation image; apparent relative displacement seen in the target images is caused by registration errors. Worst-case translations were 0.45, 0.08, and 1.1 mm for SNR-, Z-coverage-, and motion-related errors, respectively. Worst-case rotations were 0.41, 0.13, and 6.0 degrees. This study showed that quantifying the methodological error enables the composition of accurate and precise DRUJ motion scanning protocols.
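Worst-case translation and rotation errors of this kind can be computed by comparing an estimated bone pose against a reference pose. A minimal sketch, assuming 4x4 homogeneous rigid transforms:

```python
import numpy as np

def registration_error(T_true, T_est):
    """Translation (in the matrices' units) and rotation (degrees) of the
    residual rigid transform inv(T_est) @ T_true, i.e., how far the
    registered bone pose deviates from the reference pose."""
    D = np.linalg.inv(T_est) @ T_true
    t_err = np.linalg.norm(D[:3, 3])                         # residual translation
    cos_a = np.clip((np.trace(D[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    r_err = np.degrees(np.arccos(cos_a))                     # residual rotation angle
    return t_err, r_err
```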
Affiliation(s)
- J. G. M. Oonk
- Department of Biomedical Engineering and Physics, Amsterdam UMC, Meibergdreef 9, 1105 AZ Amsterdam, The Netherlands; Amsterdam Movement Sciences, Musculoskeletal Health, Restoration and Development, Amsterdam, The Netherlands
- J. G. G. Dobbe
- Department of Biomedical Engineering and Physics, Amsterdam UMC, Meibergdreef 9, 1105 AZ Amsterdam, The Netherlands; Amsterdam Movement Sciences, Musculoskeletal Health, Restoration and Development, Amsterdam, The Netherlands
- S. D. Strackee
- Department of Plastic-, Reconstructive- and Handsurgery, Amsterdam UMC, Meibergdreef 9, 1105 AZ Amsterdam, The Netherlands
- G. J. Strijkers
- Department of Biomedical Engineering and Physics, Amsterdam UMC, Meibergdreef 9, 1105 AZ Amsterdam, The Netherlands
- G. J. Streekstra
- Department of Biomedical Engineering and Physics, Amsterdam UMC, Meibergdreef 9, 1105 AZ Amsterdam, The Netherlands
6
Öfverstedt J, Lindblad J, Sladoje N. INSPIRE: Intensity and spatial information-based deformable image registration. PLoS One 2023; 18:e0282432. PMID: 36867617; PMCID: PMC9983883; DOI: 10.1371/journal.pone.0282432.
Abstract
We present INSPIRE, a top-performing general-purpose method for deformable image registration. INSPIRE brings distance measures that combine intensity and spatial information into an elastic B-splines-based transformation model and incorporates an inverse-inconsistency penalization supporting symmetric registration performance. We introduce several theoretical and algorithmic solutions that provide high computational efficiency, and thereby applicability of the proposed framework in a wide range of real scenarios. We show that INSPIRE delivers highly accurate, as well as stable and robust, registration results. We evaluate the method on a 2D dataset created from retinal images, characterized by the presence of networks of thin structures; here INSPIRE exhibits excellent performance, substantially outperforming the widely used reference methods. We also evaluate INSPIRE on the Fundus Image Registration Dataset (FIRE), which consists of 134 pairs of separately acquired retinal images, where it again substantially outperforms several domain-specific methods. Finally, we evaluate the method on four benchmark datasets of 3D magnetic resonance images of brains, for a total of 2088 pairwise registrations; a comparison with 17 other state-of-the-art methods reveals that INSPIRE provides the best overall performance. Code is available at github.com/MIDA-group/inspire.
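INSPIRE's own distance measure and inverse-consistency penalty live in the linked repository; the sketch below only shows the generic elastic B-spline transformation model it builds on, with a standard SimpleITK metric as a stand-in.

```python
import SimpleITK as sitk

def bspline_register(fixed, moving, mesh_size=(8, 8)):
    """Generic elastic B-spline registration in SimpleITK. INSPIRE replaces
    the off-the-shelf metric below with its combined intensity+spatial
    distance and adds an inverse-inconsistency penalty; see the authors'
    repository for the actual method."""
    fixed = sitk.Cast(fixed, sitk.sitkFloat32)
    moving = sitk.Cast(moving, sitk.sitkFloat32)
    tx = sitk.BSplineTransformInitializer(fixed, list(mesh_size))
    R = sitk.ImageRegistrationMethod()
    R.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    R.SetOptimizerAsLBFGSB(numberOfIterations=100)
    R.SetInterpolator(sitk.sitkLinear)
    R.SetInitialTransform(tx, inPlace=True)
    return R.Execute(fixed, moving)
```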
Affiliation(s)
- Johan Öfverstedt
- Department of Information Technology, Uppsala University, Uppsala, Sweden
- Joakim Lindblad
- Department of Information Technology, Uppsala University, Uppsala, Sweden
- Nataša Sladoje
- Department of Information Technology, Uppsala University, Uppsala, Sweden
7
Lu J, Öfverstedt J, Lindblad J, Sladoje N. Is image-to-image translation the panacea for multimodal image registration? A comparative study. PLoS One 2022; 17:e0276196. PMID: 36441754; PMCID: PMC9704666; DOI: 10.1371/journal.pone.0276196.
Abstract
Despite current advancements in the field of biomedical image processing, propelled by the deep learning revolution, multimodal image registration, due to its several challenges, is still often performed manually by specialists. The recent success of image-to-image (I2I) translation in computer vision applications and its growing use in biomedical areas provide a tempting possibility of transforming the multimodal registration problem into a potentially easier monomodal one. We conduct an empirical study of the applicability of modern I2I translation methods to the task of rigid registration of multimodal biomedical and medical 2D and 3D images. We compare the performance of four Generative Adversarial Network (GAN)-based I2I translation methods and one contrastive representation learning method, each combined with two representative monomodal registration methods, to judge the effectiveness of modality translation for multimodal image registration. We evaluate these method combinations on four publicly available multimodal (2D and 3D) datasets and compare them with the performance of registration achieved by several well-known approaches acting directly on multimodal image data. Our results suggest that, although I2I translation may be helpful when the modalities to register are clearly correlated, registration of modalities that express distinctly different properties of the sample is not well handled by the I2I translation approach. The evaluated representation learning method, which aims to find abstract image-like representations of the information shared between the modalities, manages better, and so does the Mutual Information maximisation approach, acting directly on the original multimodal images. We share our complete experimental setup as open-source (https://github.com/MIDA-group/MultiRegEval), including method implementations, evaluation code, and all datasets, for reproduction and further benchmarking.
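A minimal sketch of the evaluated two-stage scheme, with a hypothetical trained translator standing in for the I2I model and phase correlation standing in for the mono-modal registration step (translation-only):

```python
from skimage.registration import phase_cross_correlation

def register_via_translation(fixed_mod_a, moving_mod_b, translate_b_to_a):
    """Two-stage scheme studied in the paper: map the moving image into the
    fixed image's modality with an I2I model, then run a mono-modal
    registration. `translate_b_to_a` is a placeholder for a trained
    translator (e.g., a CycleGAN generator); phase correlation here only
    recovers a translation, while the paper evaluates full rigid methods."""
    moving_as_a = translate_b_to_a(moving_mod_b)
    shift, _, _ = phase_cross_correlation(fixed_mod_a, moving_as_a)
    return shift  # (dy, dx) estimated displacement
```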
Affiliation(s)
- Jiahao Lu
- MIDA Group, Department of Information Technology, Uppsala University, Uppsala, Sweden
- IMAGE Section, Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
- Johan Öfverstedt
- MIDA Group, Department of Information Technology, Uppsala University, Uppsala, Sweden
- Joakim Lindblad
- MIDA Group, Department of Information Technology, Uppsala University, Uppsala, Sweden
- Nataša Sladoje
- MIDA Group, Department of Information Technology, Uppsala University, Uppsala, Sweden
8
Tran MQ, Do T, Tran H, Tjiputra E, Tran QD, Nguyen A. Light-Weight Deformable Registration Using Adversarial Learning With Distilling Knowledge. IEEE Trans Med Imaging 2022; 41:1443-1453. PMID: 34990354; DOI: 10.1109/TMI.2022.3141013.
Abstract
Deformable registration is a crucial step in many medical procedures, such as image-guided surgery and radiation therapy. Most recent learning-based methods focus on improving accuracy by optimizing the non-linear spatial correspondence between the input images; as a result, these methods are computationally expensive and require modern graphics cards for real-time deployment. In this paper, we introduce a new light-weight deformable registration network that significantly reduces the computational cost while achieving competitive accuracy. In particular, we propose a new adversarial learning with distilling knowledge algorithm that successfully transfers meaningful information from an effective but expensive teacher network to a student network. We design the student network to be light-weight and well suited for deployment on a typical CPU. Extensive experimental results on different public datasets show that our proposed method achieves state-of-the-art accuracy while being significantly faster than recent methods. We further show that the use of our adversarial learning algorithm is essential for a time-efficient deformable registration method. Finally, our source code and trained models are available at https://github.com/aioz-ai/LDR_ALDK.
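A sketch of a single distillation training step along these lines, in PyTorch; the paper's adversarial discriminator is omitted, and `warp` is a user-supplied spatial-transformer function:

```python
import torch
import torch.nn.functional as F

def distillation_step(student, teacher, warp, fixed, moving, optimizer, alpha=0.5):
    """One sketched training step: the light-weight student mimics the
    teacher's predicted deformation field (distillation) while also
    minimizing image dissimilarity after warping. The adversarial term
    used in the paper is left out for brevity."""
    with torch.no_grad():
        flow_teacher = teacher(fixed, moving)   # expensive teacher network
    flow_student = student(fixed, moving)       # light-weight student
    kd_loss = F.mse_loss(flow_student, flow_teacher)      # match teacher's field
    sim_loss = F.mse_loss(warp(moving, flow_student), fixed)
    loss = sim_loss + alpha * kd_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)
```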
9
Infrared and Visible Image Registration Based on Automatic Robust Algorithm. Electronics 2022; 11:1674. DOI: 10.3390/electronics11111674.
Abstract
Image registration is the basis of subsequent image processing and has been widely utilized in computer vision. To accurately register infrared and visible images despite their differences in resolution, spectrum, and viewpoint, an automatic, robust infrared and visible image registration algorithm based on a deep convolutional network is proposed. The deep convolutional network precisely searches for and locates feature points, so that a large number of feature points can still be extracted even when the infrared image is blurred. Then, to achieve accurate feature point matching, a rough-to-fine matching algorithm is designed: rough matching is obtained from a location-orientation-scale-transform Euclidean distance, fine matching is then performed based on updated global optimization, and finally the image registration is realized. Experimental results show that the proposed algorithm offers better robustness and accuracy than several advanced registration algorithms.
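The paper's network is not reproduced here; the sketch below only illustrates how feature points can be read off a deep response map by non-maximum suppression, which keeps detections even on blurred infrared inputs.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def feature_points_from_response(response, n_points=500, nms_size=5):
    """Pick feature points as non-maximum-suppressed local maxima of a
    deep-network response map (e.g., the channel-wise norm of a conv
    feature stack). The network producing `response` is assumed, not shown."""
    local_max = (response == maximum_filter(response, size=nms_size))
    ys, xs = np.nonzero(local_max)
    order = np.argsort(response[ys, xs])[::-1][:n_points]  # strongest first
    return np.stack([xs[order], ys[order]], axis=1)        # (x, y) coordinates
```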
10
Öfverstedt J, Lindblad J, Sladoje N. Fast computation of mutual information in the frequency domain with applications to global multimodal image alignment. Pattern Recognit Lett 2022. DOI: 10.1016/j.patrec.2022.05.022.
11
Moshaei-Nezhad Y, Müller J, Oelschlägel M, Kirsch M, Tetzlaff R. Registration of IRT and visible light images in neurosurgery: analysis and comparison of automatic intensity-based registration approaches. Int J Comput Assist Radiol Surg 2022; 17:683-697. PMID: 35175502; DOI: 10.1007/s11548-022-02562-x.
Abstract
PURPOSE: The purpose of this study is to analyze and compare six automatic intensity-based registration methods for intraoperative infrared thermography (IRT) and visible light (VIS/RGB) imaging. The practical requirement is a small Euclidean distance between manually set landmarks in the reference and target images, as well as a high structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR) with respect to the reference image. METHODS: Preprocessing is applied to bring both image types to a similar intensity. A similarity transformation is employed to roughly align the IRT and visible light images, using two optimizers and two measures. Thereafter, because respiration and heartbeat displace the brain surface differently at different locations, two non-rigid transformations are applied, and finally a bicubic interpolation compensates for the resulting estimated transformation. Performance was assessed using eleven image datasets. The registration accuracy of the different computational approaches was assessed based on SSIM and PSNR; additionally, five concise landmarks per dataset were selected manually in the reference and target images, and the Euclidean distances between corresponding landmarks were compared. RESULTS: The results show that normalized intensity and the mutual information measure with a one-plus-one evolutionary optimizer, combined with Demon registration, yield improved accuracy and performance compared with all other methods tested here. Furthermore, the obtained results led to [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], and [Formula: see text] registrations for datasets 1, 2, 5, 7, and 8 with respect to the second-best result, as measured by the mean Euclidean distance of five landmarks. CONCLUSIONS: We conclude that the mutual information measure with a one-plus-one evolutionary optimizer, combined with Demon registration, achieves better accuracy and performance than the other methods examined here for automatic registration of IRT and visible light images in neurosurgery.
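SimpleITK exposes the same building blocks as the winning combination (Mattes mutual information, a one-plus-one evolutionary optimizer, Demons refinement); below is a sketch assembling them, not the authors' implementation, and their intensity normalization step is omitted.

```python
import SimpleITK as sitk

def register_irt_to_rgb(fixed_rgb_gray, moving_irt):
    """Sketch: similarity (rigid+scale) pre-alignment driven by mutual
    information with a one-plus-one evolutionary optimizer, followed by
    Demons non-rigid refinement of the locally displaced brain surface."""
    fixed = sitk.Cast(fixed_rgb_gray, sitk.sitkFloat32)
    moving = sitk.Cast(moving_irt, sitk.sitkFloat32)
    R = sitk.ImageRegistrationMethod()
    R.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    R.SetOptimizerAsOnePlusOneEvolutionary(numberOfIterations=200)
    R.SetInitialTransform(sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Similarity2DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY))
    R.SetInterpolator(sitk.sitkLinear)
    similarity = R.Execute(fixed, moving)
    pre = sitk.Resample(moving, fixed, similarity, sitk.sitkLinear)
    demons = sitk.DemonsRegistrationFilter()
    demons.SetNumberOfIterations(50)
    demons.SetStandardDeviations(1.0)
    field = demons.Execute(fixed, pre)   # dense displacement field
    return similarity, sitk.DisplacementFieldTransform(
        sitk.Cast(field, sitk.sitkVectorFloat64))
```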
Affiliation(s)
- Yahya Moshaei-Nezhad
- Institute of Circuits and Systems, Faculty of Electrical and Computer Engineering, Technische Universität Dresden, 01062, Dresden, Germany.
- Juliane Müller
- Carl Gustav Carus Faculty of Medicine, Anesthesiology and Intensive Care Medicine, Clinical Sensoring and Monitoring, Technische Universität Dresden, Fetscherstraße 74, 01307, Dresden, Germany
- Martin Oelschlägel
- Carl Gustav Carus Faculty of Medicine, Anesthesiology and Intensive Care Medicine, Clinical Sensoring and Monitoring, Technische Universität Dresden, Fetscherstraße 74, 01307, Dresden, Germany
- Matthias Kirsch
- Carl Gustav Carus Faculty of Medicine, Department of Neurosurgery, Technische Universität Dresden, Fetscherstraße 74, 01307, Dresden, Germany; Department of Neurosurgery, Asklepios Kliniken Schildautal, Karl-Herold-Str. 1, 38723, Seesen, Germany
- Ronald Tetzlaff
- Institute of Circuits and Systems, Faculty of Electrical and Computer Engineering, Technische Universität Dresden, 01062, Dresden, Germany
12
Pereira C, Park JH, Campelos S, Gullo I, Lemos C, Solorzano L, Martins D, Gonçalves G, Leitão D, Lee HJ, Kong SH, André A, Borges C, Almeida D, Wählby C, Almeida R, Kim WH, Carneiro F, Yang HK, Almeida GM, Oliveira C. Comparison of East-Asia and West-Europe cohorts explains disparities in survival outcomes and highlights predictive biomarkers of early gastric cancer aggressiveness. Int J Cancer 2021; 150:868-880. PMID: 34751446; DOI: 10.1002/ijc.33872.
Abstract
Surgical resection with lymphadenectomy and peri-operative chemotherapy is the universal mainstay of curative treatment for gastric cancer (GC) patients with loco-regional disease. However, GC survival remains asymmetric between Western and Eastern world regions. We hypothesize that this asymmetry derives from differential clinical management. We therefore collected chemo-naïve GC patient cohorts from Portugal and South Korea to explore specific immunophenotypic profiles related to disease aggressiveness, and clinicopathological factors potentially explaining the associated overall survival (OS) differences. Clinicopathological and survival data were collected from chemo-naïve surgical cohorts from Portugal (West-Europe cohort (WE-C); n=170) and South Korea (East-Asia cohort (EA-C); n=367) and correlated with immunohistochemical expression profiles of E-cadherin and CD44v6 obtained from consecutive tissue microarray sections. Survival analysis revealed a subset of 12.4% of WE-C patients, whose tumors concomitantly express E-cadherin_abnormal and CD44v6_very-high, displaying extremely poor OS, even at TNM stages I and II. These WE-C stage I and II patients were particularly aggressive compared to all others, invading deeper into the gastric wall (p=0.032) and more often permeating the vasculature (p=0.018) and nerves (p=0.009). A similar immunophenotypic profile was found in 11.9% of EA-C patients, but it was unrelated to survival. Stage I and II EA-C patients displaying both biomarkers also permeated more lymphatic vessels (p=0.003), promoting lymph-node (LN) metastasis (p=0.019), but were diagnosed on average 8 years earlier and submitted to more extensive LN dissection than WE-C patients. Concomitant E-cadherin_abnormal/CD44v6_very-high expression therefore predicts aggressiveness and poor survival of stage I and II GC submitted to conservative lymphadenectomy.
Affiliation(s)
- Carla Pereira
- i3S - Instituto de Investigação e Inovação em Saúde, Universidade do Porto, 4200-135, Porto, Portugal; Ipatimup - Institute of Molecular Pathology and Immunology of the University of Porto, 4200-135, Porto, Portugal; Doctoral Programme in Biomedicine, Faculty of Medicine of the University of Porto, 4200-319, Porto, Portugal
- Ji-Hyeon Park
- Department of Surgery, Seoul National University Hospital, Seoul, Korea
- Sofia Campelos
- i3S - Instituto de Investigação e Inovação em Saúde, Universidade do Porto, 4200-135, Porto, Portugal; Department of Pathology, Ipatimup Diagnostics, Institute of Molecular Pathology and Immunology, University of Porto, 4200-135, Porto, Portugal
- Irene Gullo
- i3S - Instituto de Investigação e Inovação em Saúde, Universidade do Porto, 4200-135, Porto, Portugal; Department of Pathology, Centro Hospitalar Universitário de São João, 4200-319, Porto, Portugal; Faculty of Medicine of the University of Porto, 4200-319, Porto, Portugal
- Carolina Lemos
- i3S - Instituto de Investigação e Inovação em Saúde, Universidade do Porto, 4200-135, Porto, Portugal; UniGENe, IBMC - Institute for Molecular and Cell Biology, 4200-135, Porto, Portugal; ICBAS - Instituto Ciências Biomédicas Abel Salazar, Universidade do Porto, 4050-313, Porto, Portugal
- Leslie Solorzano
- Center for Image Analysis, Dept. of IT and SciLifeLab, Uppsala University, SE-751 05, Uppsala, Sweden
- Diana Martins
- i3S - Instituto de Investigação e Inovação em Saúde, Universidade do Porto, 4200-135, Porto, Portugal; Ipatimup - Institute of Molecular Pathology and Immunology of the University of Porto, 4200-135, Porto, Portugal; Department of Biomedical Laboratory Sciences, ESTESC - Coimbra Health School, Polytechnic Institute of Coimbra, 3046-854, Coimbra, Portugal
- Gilza Gonçalves
- Ipatimup - Institute of Molecular Pathology and Immunology of the University of Porto, 4200-135, Porto, Portugal
- Dina Leitão
- Department of Pathology, Centro Hospitalar Universitário de São João, 4200-319, Porto, Portugal; Faculty of Medicine of the University of Porto, 4200-319, Porto, Portugal
- Hyuk-Joon Lee
- Department of Surgery, Seoul National University Hospital, Seoul, Korea; Cancer Research Institute, Seoul National University College of Medicine, Seoul, Korea; Department of Surgery, Seoul National University College of Medicine, Seoul, Korea
- Seong-Ho Kong
- Department of Surgery, Seoul National University Hospital, Seoul, Korea; Department of Surgery, Seoul National University College of Medicine, Seoul, Korea
- Ana André
- i3S - Instituto de Investigação e Inovação em Saúde, Universidade do Porto, 4200-135, Porto, Portugal; Ipatimup - Institute of Molecular Pathology and Immunology of the University of Porto, 4200-135, Porto, Portugal
- Clara Borges
- Medical Oncology Department, Centro Hospitalar Universitário de São João, 4200-319, Porto, Portugal
- Daniela Almeida
- Medical Oncology Department, Centro Hospitalar Universitário de São João, 4200-319, Porto, Portugal
- Carolina Wählby
- Center for Image Analysis, Dept. of IT and SciLifeLab, Uppsala University, SE-751 05, Uppsala, Sweden
- Raquel Almeida
- i3S - Instituto de Investigação e Inovação em Saúde, Universidade do Porto, 4200-135, Porto, Portugal; Ipatimup - Institute of Molecular Pathology and Immunology of the University of Porto, 4200-135, Porto, Portugal; Department of Pathology, Centro Hospitalar Universitário de São João, 4200-319, Porto, Portugal; Faculty of Sciences of the University of Porto, 4169-007, Porto, Portugal
- Woo Ho Kim
- Department of Pathology, Seoul National University College of Medicine, Seoul, Korea
- Fátima Carneiro
- i3S - Instituto de Investigação e Inovação em Saúde, Universidade do Porto, 4200-135, Porto, Portugal; Ipatimup - Institute of Molecular Pathology and Immunology of the University of Porto, 4200-135, Porto, Portugal; Department of Pathology, Centro Hospitalar Universitário de São João, 4200-319, Porto, Portugal; Faculty of Medicine of the University of Porto, 4200-319, Porto, Portugal
- Han-Kwang Yang
- Department of Surgery, Seoul National University Hospital, Seoul, Korea; Cancer Research Institute, Seoul National University College of Medicine, Seoul, Korea; Department of Surgery, Seoul National University College of Medicine, Seoul, Korea
- Gabriela M Almeida
- i3S - Instituto de Investigação e Inovação em Saúde, Universidade do Porto, 4200-135, Porto, Portugal; Ipatimup - Institute of Molecular Pathology and Immunology of the University of Porto, 4200-135, Porto, Portugal; Faculty of Medicine of the University of Porto, 4200-319, Porto, Portugal
- Carla Oliveira
- i3S - Instituto de Investigação e Inovação em Saúde, Universidade do Porto, 4200-135, Porto, Portugal; Ipatimup - Institute of Molecular Pathology and Immunology of the University of Porto, 4200-135, Porto, Portugal; Faculty of Medicine of the University of Porto, 4200-319, Porto, Portugal
13
Hou X, Gao Q, Wang R, Luo X. Satellite-Borne Optical Remote Sensing Image Registration Based on Point Features. Sensors (Basel) 2021; 21:2695. PMID: 33920434; PMCID: PMC8069145; DOI: 10.3390/s21082695.
Abstract
Technologies for image fusion, image splicing, and target recognition have developed rapidly, and as the basis of many such applications, the performance of image registration directly affects subsequent work. In this work, for feature-rich satellite-borne optical imagery such as panchromatic and multispectral images, the Harris corner algorithm is combined with the scale-invariant feature transform (SIFT) operator for feature point extraction. Our rough matching strategy uses a K-D (k-dimensional) tree combined with the best-bin-first (BBF) method, with the ratio of nearest-neighbor to second-nearest-neighbor distances as the similarity measure. Finally, a triangle-area representation (TAR) algorithm is utilized to eliminate false matches and ensure registration accuracy. The performance of the proposed algorithm is compared with existing popular algorithms. The experimental results indicate that, for visible light and multispectral satellite remote sensing images of different sizes and from different sources, the proposed algorithm excels in both accuracy and efficiency.
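A skeleton of this detection-and-matching pipeline with OpenCV; note the paper additionally detects Harris corners and prunes outliers with the TAR test, for which RANSAC is used below as a common stand-in.

```python
import cv2
import numpy as np

def match_remote_sensing(img1, img2, ratio=0.75):
    """SIFT descriptors matched with a FLANN KD-tree (whose approximate
    search is the best-bin-first strategy), filtered by the nearest/
    second-nearest distance ratio, then geometrically verified."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)
    flann = cv2.FlannBasedMatcher({"algorithm": 1, "trees": 5}, {"checks": 50})
    good = [m for m, n in flann.knnMatch(d1, d2, k=2)
            if m.distance < ratio * n.distance]          # ratio test
    src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H, good, mask
```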
Affiliation(s)
- Xinan Hou
- School of Electronic Engineering, Xidian University, Xi’an 710071, China;
| | - Quanxue Gao
- School of Telecommunications Engineering, Xidian University, Xi’an 710071, China;
| | - Rong Wang
- Yangtze Delta Region Institute (HuZhou), University of Electronic Science and Technology of China, Huzhou 313099, China;
| | - Xin Luo
- Yangtze Delta Region Institute (HuZhou), University of Electronic Science and Technology of China, Huzhou 313099, China;
- School of Resources and Environment, University of Electronic Science and Technology of China, Chengdu 611731, China
- Correspondence: ; Tel.: +86-028-61830279
| |
Collapse
|
14
Solorzano L, Pereira C, Martins D, Almeida R, Carneiro F, Almeida GM, Oliveira C, Wählby C. Towards Automatic Protein Co-Expression Quantification in Immunohistochemical TMA Slides. IEEE J Biomed Health Inform 2021; 25:393-402. PMID: 32750958; DOI: 10.1109/JBHI.2020.3008821.
Abstract
Immunohistochemical (IHC) analysis of tissue biopsies is currently used for clinical screening of solid cancers to assess protein expression. The large amount of image data produced from these tissue samples requires specialized computational pathology methods to perform integrative analysis. Even though proteins are traditionally studied independently, the study of protein co-expression may offer new insights towards patients' clinical and therapeutic decisions. To explore protein co-expression, we constructed a modular image analysis pipeline to spatially align tissue microarray (TMA) image slides, evaluate alignment quality, define tumor regions, and ultimately quantify protein expression, before and after tumor segmentation. The pipeline was built with open-source tools that can manage gigapixel slides. To evaluate the consensus between pathologist and computer, we characterized a cohort of 142 gastric cancer (GC) cases regarding the extent of E-cadherin and CD44v6 expression. We performed IHC analysis in consecutive TMA slides and compared the automated quantification with the pathologists' manual assessment. Our results show that automated quantification within tumor regions improves agreement with the pathologists' classification. A co-expression map was created to identify the cores co-expressing both proteins. The proposed pipeline provides not only computational tools that extend current pathology practice to explore co-expression, but also a framework for merging data and transferring information in learning-based approaches to pathology.
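A crude sketch of the per-tile expression-readout step, using colour deconvolution to unmix the DAB stain; the threshold is illustrative, and the paper's pipeline additionally aligns consecutive TMA slides and restricts quantification to segmented tumour regions.

```python
from skimage.color import rgb2hed

def dab_positive_fraction(rgb_tile, thresh=0.02):
    """Crude protein-expression readout for one IHC tile: unmix the DAB
    stain via Haematoxylin-Eosin-DAB colour deconvolution and report the
    fraction of pixels above a fixed optical-density threshold."""
    hed = rgb2hed(rgb_tile)          # channels: haematoxylin, eosin, DAB
    dab = hed[..., 2]
    return float((dab > thresh).mean())
```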
15
Qu Z, Li J, Bao KH, Si ZC. An Unordered Image Stitching Method Based on Binary Tree and Estimated Overlapping Area. IEEE Trans Image Process 2020; 29:6734-6744. PMID: 32406839; DOI: 10.1109/TIP.2020.2993134.
Abstract
To address the computational complexity and long run times of unordered image stitching, we present a method based on a binary tree and estimated overlapping areas that stitches images without a prescribed order. For image registration, the overlapping areas between input images are estimated so that feature point extraction and matching are performed only in these areas. For image stitching, we build a binary-tree model to stitch each pair of matched images without sorting. Compared with traditional methods, our method significantly reduces the time spent matching irrelevant image pairs and improves the efficiency of image registration and stitching. Moreover, the proposed binary-tree stitching model further reduces distortion of the panorama. Experimental results show that the number of feature points extracted in the estimated overlapping area is approximately 0.3-0.6 times that in the entire image using the same method, which greatly reduces the computational time of feature extraction and matching. Compared with the exhaustive image matching method, our approach takes only about one third of the time to find all matching images.
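A sketch of the pair-and-merge idea behind the binary-tree stitching model; `stitch_pair` is a placeholder for homography-based stitching of two matched, overlapping images.

```python
def stitch_binary_tree(images, stitch_pair):
    """Pair-and-merge stitching in the spirit of the binary-tree model:
    matched neighbours are stitched two at a time and the intermediate
    panoramas are merged level by level, which limits the accumulated
    distortion compared with chaining images left to right."""
    level = list(images)
    while len(level) > 1:
        level = [stitch_pair(level[i], level[i + 1]) if i + 1 < len(level)
                 else level[i]                        # odd leftover carried up
                 for i in range(0, len(level), 2)]
    return level[0]
```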
16
Yang F, Ding M, Zhang X. Non-Rigid Multi-Modal 3D Medical Image Registration Based on Foveated Modality Independent Neighborhood Descriptor. Sensors (Basel) 2019; 19:4675. PMID: 31661828; PMCID: PMC6864520; DOI: 10.3390/s19214675.
Abstract
Non-rigid multi-modal three-dimensional (3D) medical image registration is highly challenging due to the difficulty of constructing a similarity measure and solving for the non-rigid transformation parameters. A novel structural-representation-based registration method is proposed to address these problems. First, an improved modality independent neighborhood descriptor (MIND) based on foveated nonlocal self-similarity is designed for effective structural representation of 3D medical images, transforming multi-modal image registration into a mono-modal one; the sum of absolute differences between structural representations is computed as the similarity measure. Subsequently, a foveated-MIND-based spatial constraint is introduced into the Markov random field (MRF) optimization to reduce the number of transformation parameters and restrict the calculation of the energy function to the image region involving non-rigid deformation. Finally, accurate and efficient 3D medical image registration is realized by minimizing the similarity-measure-based MRF energy function. Extensive experiments on 3D positron emission tomography (PET), computed tomography (CT), and T1-, T2-, and proton density (PD)-weighted magnetic resonance (MR) images with synthetic deformation demonstrate that the proposed method achieves higher computational efficiency and registration accuracy, in terms of target registration error (TRE), than registration methods based on the hybrid L-BFGS-B and cat swarm optimization (HLCSO), the sum of squared differences on entropy images, the original MIND, and the self-similarity context (SSC) descriptor, except that it yields slightly larger TRE than HLCSO for CT-PET registration. Experiments on real MR and ultrasound images with unknown deformation have also been performed to demonstrate the practicality and superiority of the proposed method.
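For orientation, here is a minimal 2D version of the plain MIND descriptor; the paper's foveated variant reweights the self-similarity neighbourhood, which is not reproduced here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift

def mind_descriptor(img, offsets=((1, 0), (-1, 0), (0, 1), (0, -1)), sigma=0.8):
    """Minimal 2D MIND: per pixel, Gaussian-smoothed squared differences to
    spatially shifted copies of the image, normalized by a local variance
    estimate and mapped through exp(-D/V); comparing descriptors with the
    sum of absolute differences turns multi-modal into mono-modal matching."""
    img = img.astype(np.float64)
    D = np.stack([gaussian_filter((img - shift(img, off, mode="nearest")) ** 2,
                                  sigma) for off in offsets])
    V = D.mean(axis=0) + 1e-8            # local variance estimate per pixel
    mind = np.exp(-D / V)
    return mind / (mind.max(axis=0) + 1e-8)   # per-pixel normalization
```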
Affiliation(s)
- Feng Yang
- Department of Biomedical Engineering, School of Life Science and Technology, Ministry of Education Key Laboratory of Molecular Biophysics, Huazhong University of Science and Technology, Wuhan 430074, China.
- School of Computer and Electronics and Information, Guangxi University, Nanning 530004, China.
- Mingyue Ding
- Department of Biomedical Engineering, School of Life Science and Technology, Ministry of Education Key Laboratory of Molecular Biophysics, Huazhong University of Science and Technology, Wuhan 430074, China
- Xuming Zhang
- Department of Biomedical Engineering, School of Life Science and Technology, Ministry of Education Key Laboratory of Molecular Biophysics, Huazhong University of Science and Technology, Wuhan 430074, China