1. Carsuzaa F, Favier V, Seguin L, Turri-Zanoni M, Camarda AM, Verillaud B, Herman P, Borsetto D, Schreiber A, Taboni S, Rampinelli V, Vinciguerra A, Vural A, Liem X, Busato F, Renard S, Dupin C, Doré M, Graff P, Tao Y, Racadot S, Moya Plana A, Landis BN, Marcy PY, Patron V, de Gabory L, Orlandi E, Ferrari M, Thariat J. Consensus for a postoperative atlas of sinonasal substructures from a modified Delphi study to guide radiotherapy in sinonasal malignancies. Radiother Oncol 2025;206:110784. PMID: 39986542. DOI: 10.1016/j.radonc.2025.110784.
Abstract
BACKGROUND Endoscopic endonasal skull base surgery (EESBS) has reduced the morbidity of sinonasal and skull base tumor surgery. Postoperative radiation therapy (poRT) requires precise definition of target volumes, and histological-radiological correlation is needed to locate the tumor attachment on poRT CT scans. An accurate atlas of the structures resected or identified during EESBS could support the interdisciplinary postoperative management of patients, personalizing poRT through adequate radiation dose delivery. The objective of this study was to produce a consensus CT segmentation atlas with surgeons practicing EESBS and radiation oncologists.
METHODS The sinonasal structures relevant to poRT of sinonasal malignancies were determined by a two-round Delphi process conducted with a rating group of 25 European experts in sinonasal malignancies. The structures reaching consensus were retained, their anatomical limits were defined, and a draft atlas of these expert-selected structures was assembled. The atlas was then critically reviewed, discussed, and edited by two additional skull base surgeons and two radiation oncologists.
RESULTS After the two rating rounds, 46 structures obtained strong agreement, 7 obtained agreement, 5 were rejected, and 5 did not reach consensus. The atlas integrating all selected structures is provided.
CONCLUSION A consensus CT segmentation atlas may allow careful poRT planning that limits poRT morbidity while maintaining good local control. Prospective studies are necessary to validate this precision medicine-based approach.
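As an editorial illustration of the two-round Delphi outcome categories reported above (strong agreement, agreement, rejected, no consensus), the sketch below classifies a structure from its panel ratings. The 1-9 rating scale, the inclusion/exclusion cutoffs, and the agreement thresholds are illustrative assumptions, not the criteria used in the study.

```python
def classify_structure(ratings, include_cutoff=7, exclude_cutoff=3,
                       strong=0.9, agree=0.7):
    """Classify one candidate structure from a list of 1-9 Likert ratings.

    Thresholds are hypothetical: 'strong agreement' / 'agreement' when the
    fraction of raters scoring >= include_cutoff passes strong / agree,
    'rejected' when the fraction scoring <= exclude_cutoff passes agree.
    """
    n = len(ratings)
    include = sum(r >= include_cutoff for r in ratings) / n
    exclude = sum(r <= exclude_cutoff for r in ratings) / n
    if include >= strong:
        return "strong agreement"
    if include >= agree:
        return "agreement"
    if exclude >= agree:
        return "rejected"
    return "no consensus"
```

For a 25-expert panel, 24 raters scoring 9 and one scoring 6 would yield an inclusion fraction of 0.96 and hence "strong agreement" under these assumed thresholds.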
Affiliation(s)
- Florent Carsuzaa
- Department of Otolaryngology - Head & Neck Surgery, University Hospital of Poitiers, Poitiers, France; LITEC UR15560, University of Poitiers, Poitiers, France.
- Valentin Favier
- Department of Otolaryngology - Head & Neck Surgery, Hospital Gui de Chauliac, University Hospital of Montpellier, Montpellier, France; Research-team ICAR, Laboratory of Computer Science, Robotics and Microelectronics of Montpellier (LIRMM), University of Montpellier, French National Centre for Scientific Research (CNRS), Montpellier, France.
- Lise Seguin
- Department of Otolaryngology - Head & Neck Surgery, University Hospital of Poitiers, Poitiers, France
- Mario Turri-Zanoni
- Division of Otorhinolaryngology, Department of Biotechnology and Life Sciences, University of Insubria, ASST Lariana, Como, Italy
- Anna-Maria Camarda
- Clinical Department, National Center for Oncological Hadrontherapy (Fondazione CNAO), Pavia, Italy; Department of Clinical, Surgical, Diagnostic, and Pediatric Sciences, University of Pavia, Pavia, Italy
- Benjamin Verillaud
- Otorhinolaryngology and Skull Base Center, AP-HP, Hospital Lariboisière, Paris, France
- Philippe Herman
- Otorhinolaryngology and Skull Base Center, AP-HP, Hospital Lariboisière, Paris, France
- Daniele Borsetto
- Department of ENT, Cambridge University Hospitals NHS Trust, United Kingdom
- Alberto Schreiber
- Unit of Otorhinolaryngology Head and Neck Surgery, ASST Spedali Civili Brescia, Department of Medical and Surgical Specialties, Radiological Sciences, and Public Health, University of Brescia, Brescia, Italy
- Stefano Taboni
- Section of Otorhinolaryngology Head and Neck Surgery, Department of Neurosciences, University of Padua, Padua, Italy
- Vittorio Rampinelli
- Unit of Otorhinolaryngology Head and Neck Surgery, ASST Spedali Civili Brescia, Department of Medical and Surgical Specialties, Radiological Sciences, and Public Health, University of Brescia, Brescia, Italy
- Alperen Vural
- Department of Otolaryngology - Head & Neck Surgery, Istanbul University Cerrahpasa - Cerrahpasa Faculty of Medicine, Istanbul, Turkey
- Xavier Liem
- Department of Radiation Oncology, Oscar Lambret Center, Lille, France
- Fabio Busato
- Department of Radiation Oncology, Abano Terme Hospital, Padua, Italy
- Sophie Renard
- Department of Radiation Oncology, Institut de cancérologie de Lorraine, Vandœuvre-lès-Nancy, France
- Charles Dupin
- Department of Radiation Oncology, Bordeaux University Hospital, Bordeaux, France; Bordeaux University, BRIC (BoRdeaux Institute of OnCology), UMR1312, INSERM, University of Bordeaux, Bordeaux, France
- Mélanie Doré
- Department of Radiation Oncology, Institut de cancérologie de l'Ouest (ICO) centre René-Gauducheau, Saint-Herblain, France
- Pierre Graff
- Department of Radiation Oncology, Institut Curie, PSL Research University, Paris - Saint Cloud-Orsay, France
- Yungan Tao
- Department of Radiation Oncology, Gustave Roussy, Villejuif, France
- Séverine Racadot
- Department of Radiation Oncology, Centre Léon Bérard, Lyon, France
- Antoine Moya Plana
- Department of Otolaryngology - Head & Neck Surgery, Gustave Roussy, Villejuif, France
- Basile N Landis
- Rhinology-Olfactory Unit, Department of Otorhinolaryngology-Head and Neck Surgery, University Hospital of Geneva Medical School, Geneva, Switzerland
- Pierre-Yves Marcy
- PolyClinics ELSAN Group, Medipole Sud, Quartier Quiez, 83189 Ollioules, France
- Vincent Patron
- Department of Otolaryngology - Head & Neck Surgery, University Hospital of Caen, Caen, France
- Ludovic de Gabory
- Department of Otolaryngology and Head & Neck Surgery, Bordeaux University Hospital, Bordeaux, France
- Ester Orlandi
- Clinical Department, National Center for Oncological Hadrontherapy (Fondazione CNAO), Pavia, Italy
- Marco Ferrari
- Section of Otorhinolaryngology Head and Neck Surgery, Department of Neurosciences, University of Padua, Padua, Italy
- Juliette Thariat
- Centre François Baclesse, Comprehensive Cancer Center, Caen, France; Laboratoire de physique Corpusculaire IN2P3/ENSICAEN/CNRS UMR 6534, Normandie Université, Caen, France
2. Meyer S, Hu YC, Rimner A, Mechalakos J, Cerviño L, Zhang P. Deformable image registration uncertainty-encompassing dose accumulation for adaptive radiotherapy. Int J Radiat Oncol Biol Phys 2025:S0360-3016(25)00371-2. PMID: 40239820. DOI: 10.1016/j.ijrobp.2025.04.004.
Abstract
PURPOSE Deformable image registration (DIR) is an inherently ill-posed problem whose quality depends strongly on the algorithm and user input, which can severely affect its applications in adaptive radiotherapy. We propose an automated framework for integrating DIR uncertainty into dose accumulation.
METHODS A hyperparameter perturbation approach was applied to estimate an ensemble of deformation vector fields (DVFs) for a given CT to cone-beam CT (CBCT) DIR. For each voxel, a principal component analysis (PCA) was performed on the distribution of homologous points to construct voxel-specific DIR uncertainty confidence ellipsoids. During the resampling process for dose mapping, the dose within each ellipsoid was evaluated via interpolation to estimate the upper and lower dose limits for that voxel. We applied the proposed framework in a retrospective dose accumulation study of 20 lung cancer patients who underwent image-guided radiation therapy with weekly CBCTs.
RESULTS The average computational time was around 30 min, making the approach clinically feasible for automated offline evaluations. The uncertainty (i.e., largest ellipsoid semi-axis length) for the fifth-week CBCT DIR was 3.8±1.8 mm, 2.5±0.7 mm, 1.5±0.4 mm, 3.2±1.3 mm, and 4.5±1.8 mm for the GTV, esophagus, spinal cord, lungs, and heart, respectively. Confidence ellipsoids were markedly elongated, with the largest semi-axis 5.5 and 2.5 times longer than the other two axes. The dosimetric uncertainties were mainly within 4 Gy but exhibited significant spatial variation due to the interplay between dose gradient and DIR uncertainty. When DIR uncertainty was considered in dose accumulation, three cases exceeded institutional limits for dose-volume histogram metrics, highlighting the importance of accounting for the inherent uncertainty of DIR.
CONCLUSIONS This framework has the potential to facilitate the clinical implementation of dose accumulation, improving clinical decision-making in adaptive radiotherapy and enabling more personalized radiation therapy treatments.
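The per-voxel step described in the methods — an ensemble of DVF solutions gives a cloud of homologous points for each voxel, PCA on that cloud defines a confidence ellipsoid, and the dose is probed inside the ellipsoid to bound the accumulated dose — can be sketched as follows. This is a schematic NumPy illustration, not the authors' implementation; the 95% scaling factor and the random-sampling probe are assumptions.

```python
import numpy as np

def confidence_ellipsoid(points, scale=1.96):
    """PCA of an (N, 3) cloud of mapped positions for one voxel.

    Returns the cloud center, the ellipsoid semi-axis lengths (ascending),
    and the principal axes as columns of a 3x3 matrix.
    """
    center = points.mean(axis=0)
    cov = np.cov(points.T)
    eigvals, eigvecs = np.linalg.eigh(cov)       # eigenvalues ascending
    semi_axes = scale * np.sqrt(np.maximum(eigvals, 0.0))
    return center, semi_axes, eigvecs

def dose_bounds(points, dose_fn, scale=1.96, n_samples=500, seed=0):
    """Min/max dose over random sample points inside the voxel's ellipsoid."""
    center, semi_axes, axes = confidence_ellipsoid(points, scale)
    rng = np.random.default_rng(seed)
    u = rng.normal(size=(n_samples, 3))
    u /= np.linalg.norm(u, axis=1, keepdims=True)   # random unit directions
    r = rng.random(n_samples) ** (1 / 3)            # uniform radius in a ball
    offsets = (u * r[:, None] * semi_axes) @ axes.T  # stretch, then rotate
    doses = dose_fn(center + offsets)
    return doses.min(), doses.max()
```

Here `dose_fn` stands in for interpolation into the planned dose grid; in this sketch any vectorized function of (N, 3) positions works.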
Affiliation(s)
- Sebastian Meyer
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA.
- Yu-Chi Hu
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Andreas Rimner
- Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- James Mechalakos
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Laura Cerviño
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Pengpeng Zhang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
3. Zheng D, Preuss K, Milano MT, He X, Gou L, Shi Y, Marples B, Wan R, Yu H, Du H, Zhang C. Mathematical modeling in radiotherapy for cancer: a comprehensive narrative review. Radiat Oncol 2025;20:49. PMID: 40186295. PMCID: PMC11969940. DOI: 10.1186/s13014-025-02626-7.
Abstract
Mathematical modeling has long been a cornerstone of radiotherapy for cancer, guiding treatment prescription, planning, and delivery through versatile applications. As we enter the era of medical big data, where the integration of molecular, imaging, and clinical data at both the tumor and patient levels could promise more precise and personalized cancer treatment, the role of mathematical modeling has become even more critical. This comprehensive narrative review aims to summarize the main applications of mathematical modeling in radiotherapy, bridging the gap between classical models and the latest advancements. The review covers a wide range of applications, including radiobiology, clinical workflows, stereotactic radiosurgery/stereotactic body radiotherapy (SRS/SBRT), spatially fractionated radiotherapy (SFRT), FLASH radiotherapy (FLASH-RT), immune-radiotherapy, and the emerging concept of radiotherapy digital twins. Each of these areas is explored in depth, with a particular focus on how newer trends and innovations are shaping the future of radiation cancer treatment. By examining these diverse applications, this review provides a comprehensive overview of the current state of mathematical modeling in radiotherapy. It also highlights the growing importance of these models in the context of personalized medicine and multi-scale, multi-modal data integration, offering insights into how they can be leveraged to enhance treatment precision and patient outcomes. As radiotherapy continues to evolve, the insights gained from this review will help guide future research and clinical practice, ensuring that mathematical modeling continues to propel innovations in radiation cancer treatment.
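Among the classical radiobiology models such a review covers, the linear-quadratic (LQ) model and the derived biologically effective dose (BED) are the standard textbook starting point; a minimal sketch, with generic parameter values that are not taken from the article:

```python
import math

def lq_survival(dose_per_fraction, n_fractions, alpha=0.3, beta=0.03):
    """Surviving cell fraction after n fractions of d Gy each:
    S = exp(-n * (alpha*d + beta*d^2)), alpha in 1/Gy, beta in 1/Gy^2."""
    d = dose_per_fraction
    return math.exp(-n_fractions * (alpha * d + beta * d * d))

def bed(dose_per_fraction, n_fractions, alpha_beta=10.0):
    """Biologically effective dose: BED = n*d*(1 + d/(alpha/beta))."""
    d = dose_per_fraction
    return n_fractions * d * (1.0 + d / alpha_beta)
```

For example, with an alpha/beta of 10 Gy, a conventional 30 x 2 Gy course gives a BED of 60 * 1.2 = 72 Gy, while a hypofractionated 3 x 18 Gy SBRT course gives 54 * 2.8 = 151.2 Gy, which is why SBRT/SFRT/FLASH regimens require models beyond simple physical dose.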
Affiliation(s)
- Dandan Zheng
- Department of Radiation Oncology, Wilmot Cancer Institute, University of Rochester Medical Center, 601 Elmwood Avenue, Box 647, Rochester, NY, 14642, USA.
- Michael T Milano
- Department of Radiation Oncology, Wilmot Cancer Institute, University of Rochester Medical Center, 601 Elmwood Avenue, Box 647, Rochester, NY, 14642, USA
- Xiuxiu He
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, USA
- Lang Gou
- Department of Radiation Oncology, Wilmot Cancer Institute, University of Rochester Medical Center, 601 Elmwood Avenue, Box 647, Rochester, NY, 14642, USA
- Yu Shi
- School of Biological Sciences, University of Nebraska Lincoln, Lincoln, USA
- Brian Marples
- Department of Radiation Oncology, Wilmot Cancer Institute, University of Rochester Medical Center, 601 Elmwood Avenue, Box 647, Rochester, NY, 14642, USA
- Raphael Wan
- Department of Radiation Oncology, Wilmot Cancer Institute, University of Rochester Medical Center, 601 Elmwood Avenue, Box 647, Rochester, NY, 14642, USA
- Hongfeng Yu
- Department of Computer Science, University of Nebraska Lincoln, Lincoln, USA
- Huijing Du
- Department of Mathematics, University of Nebraska Lincoln, Lincoln, USA
- Chi Zhang
- School of Biological Sciences, University of Nebraska Lincoln, Lincoln, USA
4. Wang Z, Wang H, Ni D, Xu M, Wang Y. Encoding matching criteria for cross-domain deformable image registration. Med Phys 2025;52:2305-2315. PMID: 39688347. DOI: 10.1002/mp.17565.
Abstract
BACKGROUND Most existing deep learning-based registration methods are trained on single-type images to address same-domain tasks, resulting in performance degradation when applied to new scenarios. Retraining a model for each new scenario requires extra time and data, so efficient and accurate solutions for cross-domain deformable registration are in demand.
PURPOSE We argue that the tailor-made matching criteria in traditional registration methods are one of the main reasons they remain applicable across different domains. Motivated by this, we devise a registration-oriented encoder that models the matching criteria of image features and structural features, which boosts registration accuracy and adaptability.
METHODS Specifically, a general feature encoder (Encoder-G) is proposed to capture comprehensive medical image features, while a structural feature encoder (Encoder-S) is designed to encode structural self-similarity into the global representation. Moreover, by updating Encoder-S with one-shot learning, our method can effectively adapt to different domains. The efficacy of our method is evaluated using MRI images from three domains: brain (training/testing: 870/90 pairs), abdomen (training/testing: 1406/90 pairs), and cardiac (training/testing: 64770/870 pairs). The comparison methods include a traditional method (SyN) and cutting-edge deep networks. The evaluation metrics are the Dice similarity coefficient (DSC) and the average symmetric surface distance (ASSD).
RESULTS In the single-domain task, our method attains an average DSC of 68.9%/65.2%/72.8% and an ASSD of 9.75/3.82/1.30 mm on abdomen/cardiac/brain images, outperforming the second-best comparison methods by large margins. In the cross-domain task, without one-shot optimization, our method outperforms other deep networks in five of six cross-domain scenarios and even surpasses the symmetric image normalization method (SyN) in two scenarios. With one-shot optimization, our method surpasses SyN in all six cross-domain scenarios.
CONCLUSIONS Our method yields favorable results in the single-domain task while ensuring improved generalization and adaptation in the cross-domain task, demonstrating its feasibility for challenging cross-domain registration applications. The code is publicly available at https://github.com/JuliusWang-7/EncoderReg.
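The two evaluation metrics named in the abstract can be sketched for binary segmentation masks as follows. This is a generic NumPy implementation of DSC and ASSD, not the paper's code; the O(N^2) pairwise surface distance is fine for a sketch but would be replaced by a distance transform in practice.

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient of two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def _surface_points(mask):
    """Boundary voxels: foreground with at least one axis-neighbour background."""
    m = np.pad(mask.astype(bool), 1)        # pad so np.roll wrap hits zeros
    interior = m.copy()
    for ax in range(m.ndim):
        interior &= np.roll(m, 1, axis=ax) & np.roll(m, -1, axis=ax)
    return np.argwhere(m & ~interior).astype(float)

def assd(a, b):
    """Average symmetric surface distance (in voxel units)."""
    pa, pb = _surface_points(a), _surface_points(b)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())
```

For two identical masks DSC is 1 and ASSD is 0; a 4-voxel cube shifted by one voxel against itself gives DSC = 0.75.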
Affiliation(s)
- Zhuoyuan Wang
- Smart Medical Imaging, Learning and Engineering (SMILE) Lab, Medical UltraSound Image Computing (MUSIC) Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China
- Haiqiao Wang
- Smart Medical Imaging, Learning and Engineering (SMILE) Lab, Medical UltraSound Image Computing (MUSIC) Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China
- Dong Ni
- Smart Medical Imaging, Learning and Engineering (SMILE) Lab, Medical UltraSound Image Computing (MUSIC) Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China
- Ming Xu
- Department of Medical Ultrasound, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China
- Yi Wang
- Smart Medical Imaging, Learning and Engineering (SMILE) Lab, Medical UltraSound Image Computing (MUSIC) Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China
5. Windolf C, Yu H, Paulk AC, Meszéna D, Muñoz W, Boussard J, Hardstone R, Caprara I, Jamali M, Kfir Y, Xu D, Chung JE, Sellers KK, Ye Z, Shaker J, Lebedeva A, Raghavan RT, Trautmann E, Melin M, Couto J, Garcia S, Coughlin B, Elmaleh M, Christianson D, Greenlee JDW, Horváth C, Fiáth R, Ulbert I, Long MA, Movshon JA, Shadlen MN, Churchland MM, Churchland AK, Steinmetz NA, Chang EF, Schweitzer JS, Williams ZM, Cash SS, Paninski L, Varol E. DREDge: robust motion correction for high-density extracellular recordings across species. Nat Methods 2025;22:788-800. PMID: 40050699. DOI: 10.1038/s41592-025-02614-5.
Abstract
High-density microelectrode arrays have opened new possibilities for systems neuroscience, but brain motion relative to the array poses challenges for downstream analyses. We introduce DREDge (Decentralized Registration of Electrophysiology Data), a robust algorithm for the registration of noisy, nonstationary extracellular electrophysiology recordings. In addition to estimating motion from action potential data, DREDge enables automated, high-temporal-resolution motion tracking in local field potential data. In human intraoperative recordings, DREDge's local field potential-based tracking reliably recovered evoked potentials and single-unit spike sorting. In recordings of deep probe insertions in nonhuman primates, DREDge tracked motion across centimeters of tissue and several brain regions while mapping single-unit electrophysiological features. DREDge reliably improved motion correction in acute mouse recordings, especially in those made with a recent ultrahigh-density probe. Applying DREDge to recordings from chronic implantations in mice yielded stable motion tracking despite changes in neural activity between experimental sessions. These advances enable automated, scalable registration of electrophysiological data across species, probes and drift types, providing a foundation for downstream analyses of these rich datasets.
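The core idea behind this kind of motion registration — estimating how far the tissue has shifted along the probe by aligning activity-by-depth profiles from different time bins — can be illustrated with a toy cross-correlation search. This is an editorial sketch of the general principle, not the DREDge algorithm, which is decentralized and far more robust.

```python
import numpy as np

def estimate_shift(profile_a, profile_b, max_shift):
    """Return the integer depth-bin shift s maximizing the normalized
    correlation of the overlap, such that profile_a[i + s] ~ profile_b[i]."""
    best_shift, best_corr = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        if s >= 0:
            a_seg, b_seg = profile_a[s:], profile_b[:len(profile_b) - s]
        else:
            a_seg, b_seg = profile_a[:s], profile_b[-s:]
        a_seg = a_seg - a_seg.mean()
        b_seg = b_seg - b_seg.mean()
        denom = np.linalg.norm(a_seg) * np.linalg.norm(b_seg)
        if denom == 0:
            continue  # flat overlap: no evidence at this shift
        corr = float(a_seg @ b_seg) / denom
        if corr > best_corr:
            best_corr, best_shift = corr, s
    return best_shift
```

In a real pipeline the profiles would be amplitude- and depth-binned spike (or LFP) features, shifts would be estimated between many pairs of time bins, and the pairwise estimates combined into a single displacement trace.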
Affiliation(s)
- Charlie Windolf
- Department of Statistics, Columbia University, New York City, NY, USA.
- Zuckerman Institute, Columbia University, New York City, NY, USA.
- Han Yu
- Zuckerman Institute, Columbia University, New York City, NY, USA
- Department of Electrical Engineering, Columbia University, New York City, NY, USA
- Angelique C Paulk
- Department of Neurology, Center for Neurotechnology and Neurorecovery, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Domokos Meszéna
- Department of Neurology, Center for Neurotechnology and Neurorecovery, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Institute of Cognitive Neuroscience and Psychology, HUN-REN Research Centre for Natural Sciences, Budapest, Hungary
- William Muñoz
- Department of Neurosurgery, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Julien Boussard
- Department of Statistics, Columbia University, New York City, NY, USA
- Zuckerman Institute, Columbia University, New York City, NY, USA
- Richard Hardstone
- Department of Neurology, Center for Neurotechnology and Neurorecovery, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Irene Caprara
- Department of Neurosurgery, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Mohsen Jamali
- Department of Neurosurgery, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Yoav Kfir
- Department of Neurosurgery, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Duo Xu
- Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, CA, USA
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
- Jason E Chung
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
- Kristin K Sellers
- Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, CA, USA
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
- Zhiwen Ye
- Department of Neurobiology and Biophysics, University of Washington, Seattle, WA, USA
- Jordan Shaker
- Department of Neurobiology and Biophysics, University of Washington, Seattle, WA, USA
- R T Raghavan
- Center for Neural Science, New York University, New York City, NY, USA
- Eric Trautmann
- Zuckerman Institute, Columbia University, New York City, NY, USA
- Department of Neuroscience, Columbia University Medical Center, New York City, NY, USA
- Grossman Center for the Statistics of Mind, Columbia University, New York City, NY, USA
- CTRL-Labs at Reality Labs, Seattle, WA, USA
- Department of Neurological Surgery, University of California, Davis, Davis, CA, USA
- Max Melin
- David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, USA
- João Couto
- David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, USA
- Samuel Garcia
- Centre National de la Recherche Scientifique, Centre de Recherche en Neurosciences de Lyon, Lyon, France
- Brian Coughlin
- Department of Neurology, Center for Neurotechnology and Neurorecovery, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Margot Elmaleh
- NYU Neuroscience Institute and Department of Otolaryngology, New York University Langone Medical Center, New York City, NY, USA
- David Christianson
- Department of Neurosurgery, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Jeremy D W Greenlee
- Department of Neurosurgery, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Csaba Horváth
- Institute of Cognitive Neuroscience and Psychology, HUN-REN Research Centre for Natural Sciences, Budapest, Hungary
- Richárd Fiáth
- Institute of Cognitive Neuroscience and Psychology, HUN-REN Research Centre for Natural Sciences, Budapest, Hungary
- István Ulbert
- Institute of Cognitive Neuroscience and Psychology, HUN-REN Research Centre for Natural Sciences, Budapest, Hungary
- Department of Information Technology and Bionics, Péter Pázmány Catholic University, Budapest, Hungary
- Department of Neurosurgery and Neurointervention, Faculty of Medicine, Semmelweis University, Budapest, Hungary
- Michael A Long
- NYU Neuroscience Institute and Department of Otolaryngology, New York University Langone Medical Center, New York City, NY, USA
- J Anthony Movshon
- Center for Neural Science, New York University, New York City, NY, USA
- Michael N Shadlen
- Zuckerman Institute, Columbia University, New York City, NY, USA
- Howard Hughes Medical Institute, Chevy Chase, MD, USA
- Mark M Churchland
- Zuckerman Institute, Columbia University, New York City, NY, USA
- Department of Neuroscience, Columbia University Medical Center, New York City, NY, USA
- Grossman Center for the Statistics of Mind, Columbia University, New York City, NY, USA
- Kavli Institute for Brain Science, Columbia University, New York City, NY, USA
- Anne K Churchland
- David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, USA
- Nicholas A Steinmetz
- Department of Neurobiology and Biophysics, University of Washington, Seattle, WA, USA
- Edward F Chang
- Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, CA, USA
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
- Jeffrey S Schweitzer
- Department of Neurosurgery, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Ziv M Williams
- Department of Neurosurgery, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Sydney S Cash
- Department of Neurology, Center for Neurotechnology and Neurorecovery, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Liam Paninski
- Department of Statistics, Columbia University, New York City, NY, USA
- Zuckerman Institute, Columbia University, New York City, NY, USA
- Department of Neuroscience, Columbia University Medical Center, New York City, NY, USA
- Grossman Center for the Statistics of Mind, Columbia University, New York City, NY, USA
- Erdem Varol
- Department of Statistics, Columbia University, New York City, NY, USA
- Zuckerman Institute, Columbia University, New York City, NY, USA
- Department of Computer Science & Engineering, New York University, New York City, NY, USA
6. Yang L, Cao G, Zhang S, Zhang W, Sun Y, Zhou J, Zhong T, Yuan Y, Liu T, Liu T, Guo L, Yu Y, Jiang X, Li G, Han J, Zhang T. Contrastive machine learning reveals species-shared and -specific brain functional architecture. Med Image Anal 2025;101:103431. PMID: 39689450. DOI: 10.1016/j.media.2024.103431.
Abstract
A deep comparative analysis of the brain functional connectome across primate species has the potential to yield valuable insights for both scientific and clinical applications. However, the interspecies commonalities and differences are inherently entangled with each other and with other irrelevant factors. Here we develop a novel contrastive machine learning method, called shared-unique variation autoencoder (SU-VAE), to disentangle the species-shared and species-specific functional connectome variation between macaque and human brains on large-scale resting-state fMRI datasets. The method was validated by confirming that human-specific features are differentially related to cognitive scores, while features shared with macaque better capture sensorimotor ones. Projecting the disentangled connectomes onto the cortex revealed a gradient that reflects species divergence. In contrast to macaque, adding the human-specific connectomes to the shared ones enhanced network efficiency. We identified genes enriched for 'axon guidance' that could be related to the human-specific connectomes. The code containing the model and analysis can be found at https://github.com/BBBBrain/SU-VAE.
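The shared-unique latent split at the heart of such a model can be illustrated with a toy linear sketch: each connectome vector is encoded into a latent shared across species plus a species-specific latent, and the decoder combines both, so that zeroing the specific part ablates the species-specific contribution. Random linear maps stand in for the real (variational) networks here; this is purely schematic and not the SU-VAE implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
N_FEAT, N_SHARED, N_SPECIFIC = 64, 8, 4   # illustrative sizes

# Random stand-ins for trained encoder/decoder weights.
W_enc_shared = rng.normal(size=(N_FEAT, N_SHARED)) / np.sqrt(N_FEAT)
W_enc_specific = rng.normal(size=(N_FEAT, N_SPECIFIC)) / np.sqrt(N_FEAT)
W_dec = rng.normal(size=(N_SHARED + N_SPECIFIC, N_FEAT)) / np.sqrt(N_SHARED + N_SPECIFIC)

def encode(x):
    """Split a batch of connectome vectors into shared and specific latents."""
    return x @ W_enc_shared, x @ W_enc_specific

def decode(z_shared, z_specific):
    """Reconstruct from the concatenated shared + specific latents."""
    return np.concatenate([z_shared, z_specific], axis=-1) @ W_dec

x = rng.normal(size=(10, N_FEAT))                     # 10 toy connectomes
z_sh, z_sp = encode(x)
recon_full = decode(z_sh, z_sp)                       # shared + specific
recon_shared_only = decode(z_sh, np.zeros_like(z_sp))  # specific part ablated
```

Comparing `recon_full` against `recon_shared_only` mimics the paper's analysis style of asking what the species-specific component adds on top of the shared one.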
Affiliation(s)
- Li Yang
- School of Automation, Northwestern Polytechnic University, Xi'an, 710072, China
- Guannan Cao
- School of Automation, Northwestern Polytechnic University, Xi'an, 710072, China
- Songyao Zhang
- School of Automation, Northwestern Polytechnic University, Xi'an, 710072, China
- Weihan Zhang
- School of Automation, Northwestern Polytechnic University, Xi'an, 710072, China
- Yusong Sun
- School of Life Sciences and Technology, University of Electronic Science and Technology, Chengdu, 611731, China
- Jingchao Zhou
- School of Life Sciences and Technology, University of Electronic Science and Technology, Chengdu, 611731, China
- Tianyang Zhong
- School of Automation, Northwestern Polytechnic University, Xi'an, 710072, China
- Yixuan Yuan
- The Department of Electronic Engineering, The Chinese University of Hong Kong, 999077, Hong Kong, China
- Tao Liu
- School of Science, North China University of Science and Technology, Tangshan, 063210, China
- Tianming Liu
- School of Computing, The University of Georgia, Athens, 30602, USA
- Lei Guo
- School of Automation, Northwestern Polytechnic University, Xi'an, 710072, China
- Yongchun Yu
- Institutes of Brain Sciences, Fudan University, Shanghai, 200433, China
- Xi Jiang
- School of Life Sciences and Technology, University of Electronic Science and Technology, Chengdu, 611731, China
- Gang Li
- Radiology and Biomedical Research Imaging Center, The University of North Carolina at Chapel Hill, Chapel Hill, 27599, USA
- Junwei Han
- School of Automation, Northwestern Polytechnic University, Xi'an, 710072, China.
- Tuo Zhang
- School of Automation, Northwestern Polytechnic University, Xi'an, 710072, China.
7. Ortiz E, Rivera J, Granja M, Agudelo N, Hernández Hoyos M, Salazar A. Automated ASPECTS Segmentation and Scoring Tool: a Method Tailored for a Colombian Telestroke Network. J Imaging Inform Med 2025;38:1076-1090. PMID: 39284983. PMCID: PMC11950988. DOI: 10.1007/s10278-024-01258-9.
Abstract
To evaluate our two non-machine learning (non-ML)-based algorithmic approaches for detecting early ischemic infarcts on brain CT images of patients with acute ischemic stroke symptoms, tailored to our local population, to be incorporated in our telestroke software. One-hundred and thirteen acute stroke patients, excluding hemorrhagic, subacute, and chronic patients, with accessible brain CT images were divided into calibration and test sets. The gold standard was determined through consensus among three neuroradiologist. Four neuroradiologist independently reported Alberta Stroke Program Early CT Scores (ASPECTSs). ASPECTSs were also obtained using a commercial ML solution (CMLS), and our two methods, namely the Mean Hounsfield Unit (HU) relative difference (RELDIF) and the density distribution equivalence test (DDET), which used statistical analyze the of the HUs of each region and its contralateral side. Automated segmentation was perfect for cortical regions, while minimal adjustment was required for basal ganglia regions. For dichotomized-ASPECTSs (ASPECTS < 6) in the test set, the area under the receiver operating characteristic curve (AUC) was 0.85 for the DDET method, 0.84 for the RELDIF approach, 0.64 for the CMLS, and ranged from 0.71-0.89 for the neuroradiologist. The accuracy was 0.85 for the DDET method, 0.88 for the RELDIF approach, and was ranged from 0.83 - 0.96 for the neuroradiologist. Equivalence at a margin of 5% was documented among the DDET, RELDIF, and gold standard on mean ASPECTSs. Noninferiority tests of the AUC and accuracy of infarct detection revealed similarities between both DDET and RELDIF, and the CMLS, and with at least one neuroradiologist. 
The alignment of our methods with the evaluations of the neuroradiologists and the CMLS indicates their potential to serve as supportive tools in clinical settings, facilitating prompt and accurate stroke diagnosis, especially in health care systems, such as Colombia's, where neuroradiologists are scarce.
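The core of the RELDIF approach, comparing each ASPECTS region's mean attenuation with its mirrored contralateral counterpart, can be sketched in a few lines. This is a simplified illustration rather than the authors' implementation; the region masks, the 2% threshold, and all function names are assumptions:

```python
import numpy as np

def mean_hu_relative_difference(ct_slice, region_mask):
    """Relative difference in mean HU between an ASPECTS region and its
    mirror image across the midline (approximated by a left-right flip)."""
    mirror_mask = np.fliplr(region_mask)           # contralateral region
    mean_region = ct_slice[region_mask].mean()
    mean_mirror = ct_slice[mirror_mask].mean()
    return (mean_mirror - mean_region) / mean_mirror

def aspects_score(ct_slice, region_masks, threshold=0.02):
    """ASPECTS starts at 10 and loses one point for each region that is
    hypodense relative to the contralateral side by more than `threshold`."""
    flagged = sum(
        mean_hu_relative_difference(ct_slice, mask) > threshold
        for mask in region_masks
    )
    return 10 - flagged
```

A region whose mean HU drops well below its mirrored counterpart (an early ischemic hypodensity) decrements the score; real use would additionally require midline alignment and the ten standard ASPECTS region masks per hemisphere.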
Affiliation(s)
- Esteban Ortiz
- Systems and Computing Engineering Department, Universidad de los Andes, Bogotá, Colombia
- Juan Rivera
- Systems and Computing Engineering Department, Universidad de los Andes, Bogotá, Colombia
- Manuel Granja
- Department of Diagnostic Imaging, University Hospital Fundación Santa Fe de Bogotá, Bogotá, Colombia
- Nelson Agudelo
- Grupo Suomaya, Servicio Nacional de Aprendizaje (SENA), Bogotá, Colombia
- Antonio Salazar
- Electrophysiology and Telemedicine Laboratory, Universidad de los Andes, Bogotá, Colombia.
8
Dillon O, Lau B, Vinod SK, Keall PJ, Reynolds T, Sonke JJ, O'Brien RT. Real-time spatiotemporal optimization during imaging. COMMUNICATIONS ENGINEERING 2025; 4:61. [PMID: 40164691 PMCID: PMC11958730 DOI: 10.1038/s44172-025-00391-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/04/2024] [Accepted: 03/12/2025] [Indexed: 04/02/2025]
Abstract
High quality imaging is required for high quality medical care, especially in precision applications such as radiation therapy. Patient motion during image acquisition reduces image quality and is either accepted or dealt with retrospectively during image reconstruction. Here we formalize a general approach in which data acquisition is treated as a spatiotemporal optimization problem to solve in real time so that the acquired data has a specific structure that can be exploited during reconstruction. We provide results of the first-in-world clinical trial implementation of our spatiotemporal optimization approach, applied to respiratory correlated 4D cone beam computed tomography for lung cancer radiation therapy (NCT04070586, ethics approval 2019/ETH09968). Performing spatiotemporal optimization allowed us to maintain or improve image quality relative to the current clinical standard while reducing scan time by 63% and reducing scan radiation by 85%, improving clinical throughput and reducing the risk of secondary tumors. This result motivates application of the general spatiotemporal optimization approach to other types of patient motion such as cardiac signals and other modalities such as CT and MRI.
Affiliation(s)
- Owen Dillon
- University of Sydney, Faculty of Medicine and Health, Image X Institute, Sydney, Australia.
- Benjamin Lau
- University of Sydney, Faculty of Medicine and Health, Image X Institute, Sydney, Australia
- Shalini K Vinod
- University of New South Wales, South Western Sydney Clinical School & Ingham Institute for Applied Medical Research, Sydney, Australia
- Liverpool Cancer Therapy Centre, Liverpool Hospital, Liverpool, Australia
- Paul J Keall
- University of Sydney, Faculty of Medicine and Health, Image X Institute, Sydney, Australia
- Tess Reynolds
- University of Sydney, Faculty of Medicine and Health, Image X Institute, Sydney, Australia
- Jan-Jakob Sonke
- Department of Radiation Oncology, The Netherlands Cancer Institute, Amsterdam, The Netherlands
- Ricky T O'Brien
- Royal Melbourne Institute of Technology, School of Health and Biomedical Sciences, Medical Imaging Facility, Melbourne, Australia
9
Orellana B, Navazo I, Brunet P, Monclús E, Bendezú Á, Azpiroz F. Automatic colon segmentation on T1-FS MR images. Comput Med Imaging Graph 2025; 123:102528. [PMID: 40112651 DOI: 10.1016/j.compmedimag.2025.102528] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2024] [Revised: 03/07/2025] [Accepted: 03/07/2025] [Indexed: 03/22/2025]
Abstract
The volume and distribution of the colonic contents provide valuable insights into the effects of diet on the gut microbiota, relevant to both clinical diagnosis and research. Among Magnetic Resonance Imaging modalities, T2-weighted images allow the segmentation of the colon lumen, while fecal and gas contents can only be distinguished on the T1-weighted Fat-Sat modality. However, manual segmentation of T1-weighted Fat-Sat images is challenging, and no automatic segmentation methods are known. This paper proposes an unsupervised algorithm that provides an accurate T1-weighted Fat-Sat colon segmentation via the registration of an existing colon segmentation in the T2-weighted modality. The algorithm consists of two phases. It starts with a registration process based on a classical deformable registration method, followed by a novel Iterative Colon Registration process that utilizes a mesh deformation approach. This approach is guided by a probabilistic model that provides the likelihood of the colon boundary, followed by a shape-preservation process of the colon segmentation on T2-weighted images. The iterative process converges to an optimal fit for colon segmentation in T1-weighted Fat-Sat images. The segmentation algorithm has been tested on multiple datasets (154 scans) and acquisition machines (3) as part of the proof of concept for the proposed methodology. The quantitative evaluation was based on two metrics: the percentage of ground-truth-labeled feces correctly identified by our proposal (93±5%), and the volume variation between the existing colon segmentation in the T2-weighted modality and the colon segmentation computed in T1-weighted Fat-Sat images. Quantitative and medical evaluations demonstrated accuracy, usability, and stability across acquisition hardware, making the algorithm suitable for clinical application and research.
Affiliation(s)
- Bernat Orellana
- ViRVIG Group, UPC-BarcelonaTech, Llorens i Artigas, 4-6, Barcelona 08028, Spain.
- Isabel Navazo
- ViRVIG Group, UPC-BarcelonaTech, Llorens i Artigas, 4-6, Barcelona 08028, Spain.
- Pere Brunet
- ViRVIG Group, UPC-BarcelonaTech, Llorens i Artigas, 4-6, Barcelona 08028, Spain.
- Eva Monclús
- ViRVIG Group, UPC-BarcelonaTech, Llorens i Artigas, 4-6, Barcelona 08028, Spain.
- Álvaro Bendezú
- Digestive Department, Hospital Universitari General de Catalunya. Pedro i Pons 1, Sant Cugat del Vallès 08195, Spain.
- Fernando Azpiroz
- Digestive System Research Unit, University Hospital Vall d'Hebron, 08035 Barcelona, Spain; Departament de Medicina, Universitat Autònoma de Barcelona, 08193 Bellaterra, Spain; Centro de Investigación Biomédica en Red de Enfermedades Hepáticas y Digestivas (Ciberehd), Spain.
10
Tong MW, Zhou J, Akkaya Z, Majumdar S, Bhattacharjee R. Artificial intelligence in musculoskeletal applications: a primer for radiologists. Diagn Interv Radiol 2025; 31:89-101. [PMID: 39157958 PMCID: PMC11880867 DOI: 10.4274/dir.2024.242830] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2024] [Accepted: 07/11/2024] [Indexed: 08/20/2024]
Abstract
As an umbrella term, artificial intelligence (AI) covers machine learning and deep learning. This review aimed to elaborate on these terms to act as a primer for radiologists to learn more about the algorithms commonly used in musculoskeletal radiology. It also aimed to familiarize them with the common practices and issues in the use of AI in this domain.
Affiliation(s)
- Michelle W. Tong
- University of California San Francisco Department of Radiology and Biomedical Imaging, San Francisco, USA
- University of California San Francisco Department of Bioengineering, San Francisco, USA
- University of California Berkeley Department of Bioengineering, Berkeley, USA
- Jiamin Zhou
- University of California San Francisco Department of Orthopaedic Surgery, San Francisco, USA
- Zehra Akkaya
- University of California San Francisco Department of Radiology and Biomedical Imaging, San Francisco, USA
- Ankara University Faculty of Medicine Department of Radiology, Ankara, Türkiye
- Sharmila Majumdar
- University of California San Francisco Department of Radiology and Biomedical Imaging, San Francisco, USA
- University of California San Francisco Department of Bioengineering, San Francisco, USA
- Rupsa Bhattacharjee
- University of California San Francisco Department of Radiology and Biomedical Imaging, San Francisco, USA
11
Liu Y, Wang L, Ning X, Gao Y, Wang D. Enhancing unsupervised learning in medical image registration through scale-aware context aggregation. iScience 2025; 28:111734. [PMID: 39898031 PMCID: PMC11787544 DOI: 10.1016/j.isci.2024.111734] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2024] [Revised: 09/24/2024] [Accepted: 12/30/2024] [Indexed: 02/04/2025] Open
Abstract
Deformable image registration (DIR) is essential for medical image analysis, facilitating the establishment of dense correspondences between images to analyze complex deformations. Traditional registration algorithms often require significant computational resources due to iterative optimization, while deep learning approaches face challenges in managing diverse deformation complexities and task requirements. We introduce ScaMorph, an unsupervised learning model for DIR that employs scale-aware context aggregation, integrating multiscale mixed convolution with lightweight multiscale context fusion. This model effectively combines convolutional networks and vision transformers, addressing various registration tasks. We also present diffeomorphic variants of ScaMorph to maintain topological deformations. Extensive experiments on 3D medical images across five applications-atlas-to-patient and inter-patient brain magnetic resonance imaging (MRI) registration, inter-modal brain MRI registration, inter-patient liver computed tomography (CT) registration as well as inter-modal abdomen MRI-CT registration-demonstrate that our model significantly outperforms existing methods, highlighting its effectiveness and broader implications for enhancing medical image registration techniques.
Affiliation(s)
- Yuchen Liu
- School of Instrumentation Science and Opto-electronics Engineering, Beihang University, Beijing 100191, China
- Ling Wang
- Institute of Large-Scale Scientific Facility and Centre for Zero Magnetic Field Science, Beihang University, Beijing 100191, China
- Xiaolin Ning
- School of Instrumentation Science and Opto-electronics Engineering, Beihang University, Beijing 100191, China
- Institute of Large-Scale Scientific Facility and Centre for Zero Magnetic Field Science, Beihang University, Beijing 100191, China
- Hefei National Laboratory, Hefei 230000, China
- Yang Gao
- School of Instrumentation Science and Opto-electronics Engineering, Beihang University, Beijing 100191, China
- Institute of Large-Scale Scientific Facility and Centre for Zero Magnetic Field Science, Beihang University, Beijing 100191, China
- Hefei National Laboratory, Hefei 230000, China
- Defeng Wang
- School of Instrumentation Science and Opto-electronics Engineering, Beihang University, Beijing 100191, China
12
Liao R, F Williamson J, Xia T, Ge T, A O'Sullivan J. IConDiffNet: an unsupervised inverse-consistent diffeomorphic network for medical image registration. Phys Med Biol 2025; 70:055011. [PMID: 39746299 DOI: 10.1088/1361-6560/ada516] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2024] [Accepted: 01/02/2025] [Indexed: 01/04/2025]
Abstract
Objective. Deformable image registration (DIR) is critical in many medical imaging applications. Diffeomorphic transformations, which are smooth invertible mappings with smooth inverses, preserve topological properties and are an anatomically plausible means of constraining the solution space in many settings. Traditional iterative optimization-based diffeomorphic DIR algorithms are computationally costly and cannot consistently resolve large and complicated deformations in medical image registration. Convolutional neural network implementations can rapidly estimate the transformation through a pre-trained model. However, the structure of most neural networks for DIR fails to systematically enforce diffeomorphism and inverse consistency. In this paper, a novel unsupervised neural network structure is proposed to perform fast, accurate, and inverse-consistent diffeomorphic DIR. Approach. This paper introduces a novel unsupervised inverse-consistent diffeomorphic registration network termed IConDiffNet, which incorporates an energy constraint that minimizes the total energy expended during the deformation process. The IConDiffNet architecture consists of two symmetric paths, each employing multiple recursive cascaded updating blocks (neural networks) to handle different virtual time steps parameterizing the path from the initial undeformed image to the final deformation. These blocks estimate velocities corresponding to specific time steps, generating a series of smooth time-dependent velocity vector fields. Simultaneously, the inverse transformations are estimated by corresponding blocks in the inverse path. By integrating these time-dependent velocity fields from both paths, optimal forward and inverse transformations are obtained, aligning the image pair in both directions. Main results. Our proposed method was evaluated on a three-dimensional inter-patient image registration task with a large-scale brain MRI dataset containing 375 subjects. The proposed IConDiffNet achieves fast and accurate DIR with a higher DSC, a lower Hausdorff distance, and lower total energy spent during the deformation in the test dataset compared to competing state-of-the-art deep-learning diffeomorphic DIR approaches. Visualization shows that IConDiffNet produces more complicated transformations that better align structures than the VoxelMorph-Diff, SYMNet, and ANTs-SyN methods. Significance. The proposed IConDiffNet represents an advancement in unsupervised deep-learning-based DIR approaches. By ensuring inverse consistency and diffeomorphic properties in the outcome transformations, IConDiffNet offers a pathway to improved registration accuracy, particularly in clinical settings where diffeomorphic properties are crucial. Furthermore, the generality of IConDiffNet's network structure supports direct extension to diverse 3D image registration challenges. This adaptability is facilitated by the flexibility of the objective function used in optimizing the network, which can be tailored to suit different registration tasks.
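The time-stepped velocity fields described in this abstract belong to the same family as the textbook scaling-and-squaring integration of a stationary velocity field, which yields a diffeomorphic displacement by repeated self-composition. A minimal 2D NumPy sketch of that standard scheme (not the IConDiffNet code; all names are assumptions):

```python
import numpy as np

def bilinear_sample(field, coords):
    """Bilinearly interpolate a (H, W) field at float coords of shape (2, H, W)."""
    h, w = field.shape
    y = np.clip(coords[0], 0, h - 1)
    x = np.clip(coords[1], 0, w - 1)
    y0, x0 = np.floor(y).astype(int), np.floor(x).astype(int)
    y1, x1 = np.minimum(y0 + 1, h - 1), np.minimum(x0 + 1, w - 1)
    wy, wx = y - y0, x - x0
    return ((1 - wy) * (1 - wx) * field[y0, x0] + (1 - wy) * wx * field[y0, x1]
            + wy * (1 - wx) * field[y1, x0] + wy * wx * field[y1, x1])

def integrate_svf(velocity, n_steps=6):
    """Scaling and squaring: integrate a stationary velocity field (2, H, W)
    into a displacement field by repeated self-composition phi <- phi o phi."""
    disp = velocity / (2 ** n_steps)               # small initial step
    grid = np.mgrid[0:velocity.shape[1], 0:velocity.shape[2]].astype(float)
    for _ in range(n_steps):
        coords = grid + disp                       # where phi currently maps each voxel
        disp = disp + np.stack([bilinear_sample(disp[c], coords) for c in range(2)])
    return disp
```

For a stationary field, integrating `-velocity` yields the inverse map, which is one classical route to the inverse consistency this paper targets.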
Affiliation(s)
- Rui Liao
- Washington University in St. Louis, Saint Louis, MO 63130, United States of America
- Jeffrey F Williamson
- Washington University in St. Louis, Saint Louis, MO 63130, United States of America
- Tianyu Xia
- Peking University, Beijing, People's Republic of China
- Tao Ge
- Washington University in St. Louis, Saint Louis, MO 63130, United States of America
- Joseph A O'Sullivan
- Washington University in St. Louis, Saint Louis, MO 63130, United States of America
13
Han Y, Wang L, Huang Z, Zhang Y, Zheng X. A Novel 3D Magnetic Resonance Imaging Registration Framework Based on the Swin-Transformer UNet+ Model with 3D Dynamic Snake Convolution Scheme. J Imaging 2025; 11:54. [PMID: 39997556 PMCID: PMC11856140 DOI: 10.3390/jimaging11020054] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2025] [Revised: 02/05/2025] [Accepted: 02/08/2025] [Indexed: 02/26/2025] Open
Abstract
Transformer-based image registration methods have achieved notable success, but they still face challenges, such as difficulties in representing both global and local features, the inability of standard convolution operations to focus on key regions, and inefficiencies in restoring global context using the decoder. To address these issues, we extended the Swin-UNet architecture and incorporated dynamic snake convolution (DSConv) into the model, expanding it into three dimensions. This improvement enables the model to better capture spatial information at different scales, enhancing its adaptability to complex anatomical structures and their intricate components. Additionally, multi-scale dense skip connections were introduced to mitigate the spatial information loss caused by downsampling, enhancing the model's ability to capture both global and local features. We also introduced a novel optimization-based weakly supervised strategy, which iteratively refines the deformation field generated during registration, enabling the model to produce more accurate registered images. Building on these innovations, we proposed OSS DSC-STUNet+ (Swin-UNet+ with 3D dynamic snake convolution). Experimental results on the IXI, OASIS, and LPBA40 brain MRI datasets demonstrated up to a 16.3% improvement in Dice coefficient compared to five classical methods. The model exhibits outstanding performance in terms of registration accuracy, efficiency, and feature preservation.
Affiliation(s)
- Lei Wang
- School of Computer Science and Technology, Shandong University of Technology, Zibo 255049, China; (Y.H.); (Z.H.); (Y.Z.); (X.Z.)
14
Chen J, Liu Y, Wei S, Bian Z, Subramanian S, Carass A, Prince JL, Du Y. A survey on deep learning in medical image registration: New technologies, uncertainty, evaluation metrics, and beyond. Med Image Anal 2025; 100:103385. [PMID: 39612808 PMCID: PMC11730935 DOI: 10.1016/j.media.2024.103385] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/12/2023] [Revised: 10/27/2024] [Accepted: 11/01/2024] [Indexed: 12/01/2024]
Abstract
Deep learning technologies have dramatically reshaped the field of medical image registration over the past decade. The initial developments, such as regression-based and U-Net-based networks, established the foundation for deep learning in image registration. Subsequent progress has been made in various aspects of deep learning-based registration, including similarity measures, deformation regularizations, network architectures, and uncertainty estimation. These advancements have not only enriched the field of image registration but have also facilitated its application in a wide range of tasks, including atlas construction, multi-atlas segmentation, motion estimation, and 2D-3D registration. In this paper, we present a comprehensive overview of the most recent advancements in deep learning-based image registration. We begin with a concise introduction to the core concepts of deep learning-based image registration. Then, we delve into innovative network architectures, loss functions specific to registration, and methods for estimating registration uncertainty. Additionally, this paper explores appropriate evaluation metrics for assessing the performance of deep learning models in registration tasks. Finally, we highlight the practical applications of these novel techniques in medical imaging and discuss the future prospects of deep learning-based image registration.
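Two of the evaluation metrics this survey covers, Dice overlap of propagated labels and the fraction of folded voxels (voxels where the Jacobian determinant of the deformation is non-positive), are straightforward to compute. A minimal 2D sketch (function names are assumptions):

```python
import numpy as np

def dice(a, b):
    """Dice overlap of two boolean label masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def folding_fraction(disp):
    """Fraction of voxels where phi = id + u folds, i.e. where the Jacobian
    determinant of phi is non-positive, for a 2D displacement field (2, H, W)."""
    dy_u0, dx_u0 = np.gradient(disp[0])            # np.gradient returns per-axis derivatives
    dy_u1, dx_u1 = np.gradient(disp[1])
    jac_det = (1 + dy_u0) * (1 + dx_u1) - dx_u0 * dy_u1
    return float((jac_det <= 0).mean())
```

Dice rewards anatomical overlap after warping labels, while the folding fraction penalizes physically implausible, non-invertible deformations; registration papers typically report both.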
Affiliation(s)
- Junyu Chen
- Department of Radiology and Radiological Science, Johns Hopkins School of Medicine, MD, USA.
- Yihao Liu
- Department of Electrical and Computer Engineering, Johns Hopkins University, MD, USA
- Shuwen Wei
- Department of Electrical and Computer Engineering, Johns Hopkins University, MD, USA
- Zhangxing Bian
- Department of Electrical and Computer Engineering, Johns Hopkins University, MD, USA
- Shalini Subramanian
- Department of Radiology and Radiological Science, Johns Hopkins School of Medicine, MD, USA
- Aaron Carass
- Department of Electrical and Computer Engineering, Johns Hopkins University, MD, USA
- Jerry L Prince
- Department of Electrical and Computer Engineering, Johns Hopkins University, MD, USA
- Yong Du
- Department of Radiology and Radiological Science, Johns Hopkins School of Medicine, MD, USA
15
Zhang R, Mo H, Wang J, Jie B, He Y, Jin N, Zhu L. UTSRMorph: A Unified Transformer and Superresolution Network for Unsupervised Medical Image Registration. IEEE TRANSACTIONS ON MEDICAL IMAGING 2025; 44:891-902. [PMID: 39321000 DOI: 10.1109/tmi.2024.3467919] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/27/2024]
Abstract
Complicated image registration is a key issue in medical image analysis, and deep learning-based methods have achieved better results than traditional methods. These methods fall into ConvNet-based and Transformer-based approaches. Although ConvNets can effectively utilize local information to reduce redundancy via small-neighborhood convolution, their limited receptive field makes them unable to capture global dependencies. Transformers can establish long-distance dependencies via a self-attention mechanism; however, the intensive computation of the relationships among all tokens leads to high redundancy. We propose a novel unsupervised image registration method named the unified Transformer and superresolution (UTSRMorph) network, which can enhance feature representation learning in the encoder and generate detailed displacement fields in the decoder to overcome these problems. We first propose a fusion attention block to integrate the advantages of ConvNets and Transformers, which inserts a ConvNet-based channel attention module into a multihead self-attention module. The overlapping attention block, a novel cross-attention method, uses overlapping windows to obtain abundant correlations with match information from a pair of images. These blocks are then flexibly stacked into a new, powerful encoder. The decoder's generation of a high-resolution deformation displacement field from low-resolution features is treated as a superresolution process. Specifically, a superresolution module was employed to replace interpolation upsampling, which can overcome feature degradation. UTSRMorph was compared to state-of-the-art registration methods on 3D brain MR (OASIS, IXI) and MR-CT datasets (abdomen, craniomaxillofacial). The qualitative and quantitative results indicate that UTSRMorph achieves relatively better performance. The code and datasets are publicly available at https://github.com/Runshi-Zhang/UTSRMorph.
16
Duan T, Chen W, Ruan M, Zhang X, Shen S, Gu W. Unsupervised deep learning-based medical image registration: a survey. Phys Med Biol 2025; 70:02TR01. [PMID: 39667278 DOI: 10.1088/1361-6560/ad9e69] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2024] [Accepted: 12/12/2024] [Indexed: 12/14/2024]
Abstract
In recent decades, medical image registration technology has undergone significant development, becoming one of the core technologies in medical image analysis. With the rise of deep learning, deep learning-based medical image registration methods have achieved revolutionary improvements in processing speed and automation, showing great potential, especially in unsupervised learning. This paper briefly introduces the core concepts of deep learning-based unsupervised image registration, followed by an in-depth discussion of innovative network architectures and a detailed review of these studies, highlighting their unique contributions. Additionally, this paper explores commonly used loss functions, datasets, and evaluation metrics. Finally, we discuss the main challenges faced by various categories and propose potential future research topics. This paper surveys the latest advancements in unsupervised deep neural network-based medical image registration methods, aiming to help active readers interested in this field gain a deep understanding of this exciting area.
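The unsupervised objective common to the methods this survey reviews, an image-similarity term plus a weighted smoothness penalty on the predicted displacement field, fits in a few lines. A generic sketch (MSE similarity and the weight `lam` are illustrative choices, not from any specific paper):

```python
import numpy as np

def registration_loss(fixed, warped_moving, disp, lam=0.01):
    """Generic unsupervised DIR loss: MSE similarity between the fixed image
    and the warped moving image, plus a diffusion (squared-gradient)
    regularizer that discourages non-smooth displacement fields."""
    similarity = np.mean((fixed - warped_moving) ** 2)
    smoothness = sum(
        np.mean(g ** 2)
        for c in range(disp.shape[0])
        for g in np.gradient(disp[c])
    )
    return similarity + lam * smoothness
```

In practice the similarity term is often local normalized cross-correlation or mutual information rather than MSE, and `lam` is tuned per task; these choices are exactly the loss-function design space the survey catalogs.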
Affiliation(s)
- Taisen Duan
- School of Computer, Electronics and Information, Guangxi University, Nanning 530004, People's Republic of China
- Guangxi Key Laboratory of Multimedia Communications and Network Technology, Guangxi University, Nanning 530004, People's Republic of China
- Wenkang Chen
- School of Computer, Electronics and Information, Guangxi University, Nanning 530004, People's Republic of China
- Guangxi Key Laboratory of Multimedia Communications and Network Technology, Guangxi University, Nanning 530004, People's Republic of China
- Meilin Ruan
- School of Computer, Electronics and Information, Guangxi University, Nanning 530004, People's Republic of China
- Xuejun Zhang
- School of Computer, Electronics and Information, Guangxi University, Nanning 530004, People's Republic of China
- Guangxi Key Laboratory of Multimedia Communications and Network Technology, Guangxi University, Nanning 530004, People's Republic of China
- Shaofei Shen
- School of Computer, Electronics and Information, Guangxi University, Nanning 530004, People's Republic of China
- Guangxi Key Laboratory of Multimedia Communications and Network Technology, Guangxi University, Nanning 530004, People's Republic of China
- Weiyu Gu
- School of Computer, Electronics and Information, Guangxi University, Nanning 530004, People's Republic of China
- Guangxi Key Laboratory of Multimedia Communications and Network Technology, Guangxi University, Nanning 530004, People's Republic of China
17
Liu H, McKenzie E, Xu D, Xu Q, Chin RK, Ruan D, Sheng K. MUsculo-Skeleton-Aware (MUSA) deep learning for anatomically guided head-and-neck CT deformable registration. Med Image Anal 2025; 99:103351. [PMID: 39388843 PMCID: PMC11817760 DOI: 10.1016/j.media.2024.103351] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2023] [Revised: 06/05/2024] [Accepted: 09/16/2024] [Indexed: 10/12/2024]
Abstract
Deep-learning-based deformable image registration (DL-DIR) has demonstrated improved accuracy compared to time-consuming non-DL methods across various anatomical sites. However, DL-DIR is still challenging in heterogeneous tissue regions with large deformation. In fact, several state-of-the-art DL-DIR methods fail to capture the large, anatomically plausible deformation when tested on head-and-neck computed tomography (CT) images. These results allude to the possibility that such complex head-and-neck deformation may be beyond the capacity of a single network structure or a homogeneous smoothness regularization. To address the challenge of combined multi-scale musculoskeletal motion and soft tissue deformation in the head-and-neck region, we propose a MUsculo-Skeleton-Aware (MUSA) framework to anatomically guide DL-DIR by leveraging the explicit multiresolution strategy and the inhomogeneous deformation constraints between the bony structures and soft tissue. The proposed method decomposes the complex deformation into a bulk posture change and residual fine deformation. It can accommodate both inter- and intra-subject registration. Our results show that the MUSA framework can consistently improve registration accuracy and, more importantly, the plausibility of deformation for various network architectures. The code will be publicly available at https://github.com/HengjieLiu/DIR-MUSA.
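The decomposition MUSA describes, a bulk posture change composed with a residual fine deformation, amounts to applying two transforms in sequence. A toy 2D sketch of that composition (the affine bulk model and all names are assumptions, not the MUSA code):

```python
import numpy as np

def compose_bulk_and_residual(points, affine_2x3, residual_disp):
    """Map (N, 2) points through a bulk affine posture change, then add a
    residual fine displacement evaluated at the bulk-transformed points:
    phi(x) = A x + t + u(A x + t)."""
    homog = np.c_[points, np.ones(len(points))]    # (N, 3) homogeneous coords
    bulk = homog @ affine_2x3.T                    # (N, 2) after the 2x3 affine
    return bulk + residual_disp(bulk)
```

The point of such a split is that the regularization can differ between the two components, stiff for the bony bulk motion, smooth but flexible for the soft-tissue residual, which is the inhomogeneous constraint the abstract refers to.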
Affiliation(s)
- Hengjie Liu
- Physics and Biology in Medicine Graduate Program, University of California Los Angeles, Los Angeles, CA, USA; Department of Radiation Oncology, University of California Los Angeles, Los Angeles, CA, USA
- Elizabeth McKenzie
- Department of Radiation Oncology, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Di Xu
- UCSF/UC Berkeley Graduate Program in Bioengineering, University of California San Francisco, San Francisco, CA, USA; Department of Radiation Oncology, University of California San Francisco, San Francisco, CA, USA
- Qifan Xu
- UCSF/UC Berkeley Graduate Program in Bioengineering, University of California San Francisco, San Francisco, CA, USA; Department of Radiation Oncology, University of California San Francisco, San Francisco, CA, USA
- Robert K Chin
- Department of Radiation Oncology, University of California Los Angeles, Los Angeles, CA, USA
- Dan Ruan
- Physics and Biology in Medicine Graduate Program, University of California Los Angeles, Los Angeles, CA, USA; Department of Radiation Oncology, University of California Los Angeles, Los Angeles, CA, USA
- Ke Sheng
- UCSF/UC Berkeley Graduate Program in Bioengineering, University of California San Francisco, San Francisco, CA, USA; Department of Radiation Oncology, University of California San Francisco, San Francisco, CA, USA.
18
Zhang Z, Criscuolo ER, Hao Y, McKeown T, Yang D. A vessel bifurcation liver CT landmark pair dataset for evaluating deformable image registration algorithms. Med Phys 2025; 52:703-715. [PMID: 39504386 PMCID: PMC11915780 DOI: 10.1002/mp.17507] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2024] [Revised: 09/10/2024] [Accepted: 10/14/2024] [Indexed: 11/08/2024] Open
Abstract
PURPOSE Evaluating deformable image registration (DIR) algorithms is vital for enhancing algorithm performance and gaining clinical acceptance. However, there is a notable lack of dependable DIR benchmark datasets for assessing DIR performance except for lung images. To address this gap, we aim to introduce our comprehensive liver computed tomography (CT) DIR landmark dataset library. This dataset is designed for efficient and quantitative evaluation of various DIR methods for liver CTs, paving the way for more accurate and reliable image registration techniques. ACQUISITION AND VALIDATION METHODS Forty CT liver image pairs were acquired from several publicly available image archives and authors' institutions under institutional review board (IRB) approval. The images were processed with a semi-automatic procedure to generate landmark pairs: (1) for each case, liver vessels were automatically segmented on one image; (2) landmarks were automatically detected at vessel bifurcations; (3) corresponding landmarks in the second image were placed using two deformable image registration methods to avoid algorithm-specific biases; (4) a comprehensive validation process based on quantitative evaluation and manual assessment was applied to reject outliers and ensure the landmarks' positional accuracy. This workflow resulted in an average of ∼56 landmark pairs per image pair, comprising a total of 2220 landmarks for 40 cases. The general landmarking accuracy of this procedure was evaluated using digital phantoms and manual landmark placement. The landmark pair target registration errors (TRE) on digital phantoms were 0.37 ± 0.26 and 0.55 ± 0.34 mm respectively for the two selected DIR algorithms used in our workflow, with 97% of landmark pairs having TREs below 1.5 mm. The distances from the calculated landmarks to the averaged manual placement were 1.27 ± 0.79 mm. 
DATA FORMAT AND USAGE NOTES All data, including image files and landmark information, are publicly available at Zenodo (https://zenodo.org/records/13738577). Instructions for using our data can be found on our GitHub page at https://github.com/deshanyang/Liver-DIR-QA. POTENTIAL APPLICATIONS The landmark dataset generated in this work is the first collection of large-scale liver CT DIR landmarks prepared on real patient images. This dataset can provide researchers with a dense set of ground truth benchmarks for the quantitative evaluation of DIR algorithms within the liver.
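The validation above reports landmark accuracy as target registration error (TRE). As an illustration only (the dataset's own evaluation code lives in the linked GitHub repository, not here), a minimal per-landmark TRE computation might look like this; `target_registration_error` and the toy transform are hypothetical names:

```python
import numpy as np

def target_registration_error(fixed_pts, moving_pts, transform):
    """Per-landmark Euclidean distance (mm) between transformed fixed
    landmarks and their corresponding moving-image landmarks."""
    mapped = np.array([transform(p) for p in np.asarray(fixed_pts, float)])
    return np.linalg.norm(mapped - np.asarray(moving_pts, float), axis=1)

# Toy check: a pure 2 mm x-translation, recovered exactly by the "DIR".
fixed = np.array([[10.0, 20.0, 30.0], [15.0, 25.0, 35.0]])
moving = fixed + np.array([2.0, 0.0, 0.0])
tre = target_registration_error(fixed, moving,
                                lambda p: p + np.array([2.0, 0.0, 0.0]))
```

A perfect registration yields zero TRE at every landmark; the dataset's reported 0.37-0.55 mm values are the residuals of the two real DIR algorithms on digital phantoms.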
Affiliation(s)
- Zhendong Zhang
- Department of Radiation Oncology, Duke University, Durham, North Carolina, USA
- Yao Hao
- Department of Radiation Oncology, Washington University School of Medicine, St. Louis, Missouri, USA
- Trevor McKeown
- Department of Radiation Oncology, Duke University, Durham, North Carolina, USA
- Deshan Yang
- Department of Radiation Oncology, Duke University, Durham, North Carolina, USA
19
Shur JD, Porta N, Kafaei L, Pendower L, McCall J, Khan N, Oyen W, Koh DM, Johnston E. Evaluation of Local Tumor Outcomes Following Microwave Ablation of Colorectal Liver Metastases Using CT Imaging: A Comparison of Visual versus Quantitative Methods. Radiol Imaging Cancer 2025; 7:e230147. [PMID: 39853201 PMCID: PMC11791670 DOI: 10.1148/rycan.230147] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2023] [Revised: 11/01/2023] [Accepted: 11/19/2023] [Indexed: 01/26/2025]
Abstract
Purpose To compare visual versus quantitative ablation confirmation for identifying local tumor progression and residual tumor following microwave ablation (MWA) of colorectal liver metastases (CRLM). Materials and Methods This retrospective study included patients undergoing MWA of CRLM from October 2014 to February 2018. Two independent readers visually assessed pre- and postprocedure images, semiquantitatively scored for incomplete ablation using a six-point Likert scale, and extracted quantitative imaging metrics of minimal ablative margin (MAM) and percentage of tumor outside of the ablation zone, using both rigid and deformable registration. Diagnostic accuracy and intra- and interobserver agreement were assessed. Results The study included 60 patients (median age, 71 years [IQR, 60-74.5 years]; 38 male) with 97 tumors with a median diameter of 1.3 cm (IQR, 1.0-1.8 cm). Median follow-up time was 749 days (IQR, 330-1519 days). Median time to complete the rigid and deformable workflows was 3.0 minutes (IQR, 3.0-3.2 minutes) and 14.0 minutes (IQR, 13.9-14.4 minutes), respectively. MAM with deformable registration had the highest intra- and interobserver agreement, with Gwet AC1 of 0.92 and 0.67, respectively, significantly higher than the interobserver agreement of visual assessment (Gwet AC1, 0.18; P < .0001). Overall, quantitative methods using MAM had generally higher sensitivity, of up to 95.6% with deformable image registration, than visual methods (67.3%, P < .001), at the cost of lower specificity (40% vs 71.1%, P < .001). Conclusion Quantitative ablation margin metrics provide more reliable assessment of outcomes than visual comparison of pre- and postprocedure diagnostic images following MWA of CRLM. Keywords: Interventional-Body, Liver, Neoplasms, Ablation Techniques Supplemental material is available for this article. Published under a CC BY 4.0 license.
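Gwet's AC1, used above to quantify intra- and interobserver agreement, is a chance-corrected agreement coefficient that is more stable than Cohen's kappa when marginal prevalences are skewed. A minimal two-rater sketch (a generic implementation, not the study's statistical code) is:

```python
from collections import Counter

def gwet_ac1(ratings_a, ratings_b, categories):
    """Gwet's AC1 chance-corrected agreement for two raters.

    pa: observed proportion of agreement.
    pe: chance agreement, (1/(q-1)) * sum_k pi_k * (1 - pi_k), where
        pi_k is the average marginal proportion of category k.
    """
    n = len(ratings_a)
    q = len(categories)
    pa = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    counts = Counter(ratings_a) + Counter(ratings_b)
    pi = {k: counts[k] / (2 * n) for k in categories}
    pe = sum(p * (1 - p) for p in pi.values()) / (q - 1)
    return (pa - pe) / (1 - pe)
```

Perfect agreement gives AC1 = 1; values near 0 indicate agreement no better than chance, which is how the visual-assessment score of 0.18 should be read.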
Affiliation(s)
- Joshua D. Shur, Nuria Porta, Leila Kafaei, Laura Pendower, James McCall, Nasir Khan, Wim Oyen, Dow-Mu Koh, Edward Johnston
- From the Department of Radiology, Royal Marsden Hospital NHS Foundation Trust, 203 Fulham Road, London SW3 6JJ, England (J.D.S., L.K., L.P., J.M., N.K., D.M.K., E.J.); Institute of Cancer Research, London, England (N.P., D.M.K.); and Department of Radiology and Nuclear Medicine, Rijnstate Hospital, Arnhem, the Netherlands (W.O.)
20
付 麟, 朱 遥, 姚 宇. [The dual-stream feature pyramid network based on Mamba and convolution for brain magnetic resonance image registration]. Sheng Wu Yi Xue Gong Cheng Xue Za Zhi (Journal of Biomedical Engineering) 2024; 41:1177-1184. [PMID: 40000207 PMCID: PMC11955372 DOI: 10.7507/1001-5515.202405026] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Subscribe] [Scholar Register] [Received: 05/11/2024] [Revised: 11/11/2024] [Indexed: 02/27/2025]
Abstract
Deformable image registration plays a crucial role in medical image analysis. Despite various advanced registration models having been proposed, achieving accurate and efficient deformable registration remains challenging. Leveraging the recent outstanding performance of Mamba in computer vision, we introduced a novel model called MCRDP-Net. MCRDP-Net adopts a dual-stream network architecture that combines Mamba blocks and convolutional blocks to simultaneously extract global and local information from fixed and moving images. In the decoding stage, we employed a pyramid network structure to obtain high-resolution deformation fields, achieving efficient and precise registration. The effectiveness of MCRDP-Net was validated on the public brain registration datasets OASIS and IXI. Experimental results demonstrated significant advantages of MCRDP-Net in medical image registration, with DSC, HD95, and ASD reaching 0.815, 8.123, and 0.521 on the OASIS dataset and 0.773, 7.786, and 0.871 on the IXI dataset. In summary, MCRDP-Net demonstrates superior performance in deformable image registration, proving its potential in medical image analysis. It effectively enhances the accuracy and efficiency of registration, providing strong support for subsequent medical research and applications.
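The DSC (Dice similarity coefficient) reported above measures overlap between warped and fixed segmentation labels. A generic implementation (not the paper's evaluation code) can be sketched as:

```python
import numpy as np

def dice(seg_a, seg_b):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|), in [0, 1]."""
    a = np.asarray(seg_a, bool)
    b = np.asarray(seg_b, bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0
```

A DSC of 0.815, as on OASIS, means the warped and reference label maps share roughly 81.5% overlap by this measure.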
Affiliation(s)
- 麟杰 付, 遥遥 朱, 宇 姚
- Chengdu Institute of Computer Applications, Chinese Academy of Sciences, Chengdu 610213, P. R. China
- University of Chinese Academy of Sciences, Beijing 100049, P. R. China
21
Shakoorioskooie M, Granget E, Cocen O, Hovind J, Mannes D, Kaestner A, Brambilla L. Neutron tomography and image registration methods to study local physical deformations and attenuation variations in treated archaeological iron nail samples. APPLIED PHYSICS. A, MATERIALS SCIENCE & PROCESSING 2024; 130:849. [PMID: 39498273 PMCID: PMC11531447 DOI: 10.1007/s00339-024-07990-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 06/08/2024] [Accepted: 10/12/2024] [Indexed: 11/07/2024]
Abstract
This study presents a preliminary examination of the effects of post-excavation environmental changes on heavily corroded archaeological Roman iron nails using neutron tomography and image registration techniques. Roman nails were exposed to either a high relative humidity environment or fast thermal drying as primary experiments to show the power of this imaging technique to monitor and quantify the structural changes of corroded metal artifacts. This research employed a series of pre- and post-treatment tomography acquisitions (time series) complemented by advanced image registration methods. Based on mutual information (MI) metrics, we performed rigid body and affine image registrations to meticulously account for sample repositioning challenges and variations in imaging parameters. Using non-affine local registration results, in a second step, we detected localized expansion and shrinkage in the samples attributable to the imposed environmental changes. Specifically, we observed local shrinkage in the nail that was dried, mostly in its Transformed Medium (TM), the outer layer where corrosion products cement soil and sand particles. Conversely, the sample subjected to the high relative humidity environment exhibited localized expansion, with varying degrees of change across different regions. This work highlights the efficacy of our registration techniques in accommodating manual removal or loss of extraneous material (loosely adhering soil and TM layers around the nails) after the initial tomography, successfully capturing local structural changes with high precision. Using differential analysis on the accurately registered samples, we could also detect and volumetrically quantify the variation in moisture and detect changes in active corrosion sites (ACS) in the sample.
These preliminary experiments allowed us to advance and optimize the application of a neutron tomography and image registration workflow for future, more advanced experiments such as humidity fluctuations, corrosion removal through micro-blasting, dechlorination and other stabilization treatments.
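The rigid and affine registrations above are driven by a mutual information (MI) metric, which rewards alignments where the joint intensity histogram of the two volumes is sharply structured. A minimal histogram-based MI estimate (illustrative only; not the registration software used in the study) is:

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Histogram-based mutual information (in nats) between two images:
    MI = sum_xy p(x,y) * log(p(x,y) / (p(x) p(y)))."""
    joint, _, _ = np.histogram2d(np.ravel(img_a), np.ravel(img_b), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)  # marginal of image A
    py = pxy.sum(axis=0, keepdims=True)  # marginal of image B
    nz = pxy > 0                         # avoid log(0) terms
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

MI peaks when the two volumes are aligned (perfectly correlated intensities) and drops toward zero when the pairing of voxel intensities is effectively random, which is what an optimizer exploits during registration.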
Affiliation(s)
- Elodie Granget
- Haute Ecole Arc Conservation-Restauration, HES-SO University of Applied Sciences and Arts Western Switzerland, Espace de L’Europe 11, 2000 Neuchâtel, Switzerland
- Ocson Cocen
- Haute Ecole Arc Conservation-Restauration, HES-SO University of Applied Sciences and Arts Western Switzerland, Espace de L’Europe 11, 2000 Neuchâtel, Switzerland
- Tribology and Interfacial Chemistry Group, SCI-STI-SM, Institute of Materials, École Polytechnique Fédérale de Lausanne, Station 12, 1015 Lausanne, Switzerland
- Jan Hovind
- PSI Center for Neutron and Muon Sciences, Forschungsstrasse 111, 5232 Villigen, Switzerland
- David Mannes
- PSI Center for Neutron and Muon Sciences, Forschungsstrasse 111, 5232 Villigen, Switzerland
- Anders Kaestner
- PSI Center for Neutron and Muon Sciences, Forschungsstrasse 111, 5232 Villigen, Switzerland
- Laura Brambilla
- Haute Ecole Arc Conservation-Restauration, HES-SO University of Applied Sciences and Arts Western Switzerland, Espace de L’Europe 11, 2000 Neuchâtel, Switzerland
22
Liu S, Wei G, Fan Y, Chen L, Zhang Z. Multimodal registration network with multi-scale feature-crossing. Int J Comput Assist Radiol Surg 2024; 19:2269-2278. [PMID: 39285109 DOI: 10.1007/s11548-024-03258-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/06/2023] [Accepted: 08/20/2024] [Indexed: 11/07/2024]
Abstract
PURPOSE A critical piece of information for prostate intervention and cancer treatment is provided by the complementary medical imaging modalities of ultrasound (US) and magnetic resonance imaging (MRI). Therefore, MRI-US image fusion is often required during prostate examination to provide contrast-enhanced TRUS, in which image registration is a key step in multimodal image fusion. METHODS We propose a novel multi-scale feature-crossing network for the prostate MRI-US image registration task. We designed a feature-crossing module to enhance information flow in the hidden layer by integrating intermediate features between adjacent scales. Additionally, an attention block utilizing three-dimensional convolution interacts information between channels, improving the correlation between different modal features. We used 100 cases randomly selected from The Cancer Imaging Archive (TCIA) for our experiments. A fivefold cross-validation method was applied, dividing the dataset into five subsets. Four subsets were used for training, and one for testing, repeating this process five times to ensure each subset served as the test set once. RESULTS We test and evaluate our technique using fivefold cross-validation. The cross-validation trials result in a median target registration error of 2.20 mm on landmark centroids and a median Dice of 0.87 on prostate glands, both of which were better than the baseline model. In addition, the standard deviation of the dice similarity coefficient is 0.06, which suggests that the model is stable. CONCLUSION We propose a novel multi-scale feature-crossing network for the prostate MRI-US image registration task. A random selection of 100 cases from The Cancer Imaging Archive (TCIA) was used to test and evaluate our approach using fivefold cross-validation. The experimental results showed that our method improves the registration accuracy. 
After registration, the MRI and TRUS images were more similar in structure and morphology, and the location and morphology of the cancer were reflected more accurately in the images.
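The fivefold cross-validation protocol described above can be sketched as a deterministic partition of case indices; `kfold_splits` is a hypothetical helper for illustration, not code from the paper:

```python
import random

def kfold_splits(n_cases, k=5, seed=0):
    """Deterministic k-fold partition: every case lands in exactly one
    test fold; the remaining k-1 folds form the training set."""
    idx = list(range(n_cases))
    random.Random(seed).shuffle(idx)          # fixed seed -> reproducible
    folds = [idx[i::k] for i in range(k)]     # round-robin fold assignment
    return [([j for g, f in enumerate(folds) if g != i for j in f], folds[i])
            for i in range(k)]
```

With 100 TCIA cases this yields five (train, test) splits of 80/20 cases, so each case serves exactly once as test data, as in the paper's evaluation.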
Affiliation(s)
- Shuting Liu
- Business School, University of Shanghai for Science and Technology, Jungong Road, Shanghai, 200093, China
- Guoliang Wei
- Business School, University of Shanghai for Science and Technology, Jungong Road, Shanghai, 200093, China
- Yi Fan
- Puncture Intelligent Medical Technology Co Ltd, Xinzhuan Road, Shanghai, 201600, China
- Lei Chen
- Shanghai Sixth People's Hospital, Yishan Road, Shanghai, 200233, China
- Zhaodong Zhang
- Puncture Intelligent Medical Technology Co Ltd, Xinzhuan Road, Shanghai, 201600, China
23
Sha Q, Sun K, Jiang C, Xu M, Xue Z, Cao X, Shen D. Detail-preserving image warping by enforcing smooth image sampling. Neural Netw 2024; 178:106426. [PMID: 38878640 DOI: 10.1016/j.neunet.2024.106426] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2024] [Revised: 04/14/2024] [Accepted: 06/01/2024] [Indexed: 08/13/2024]
Abstract
Multi-phase dynamic contrast-enhanced magnetic resonance imaging image registration makes a substantial contribution to medical image analysis. However, existing methods (e.g., VoxelMorph, CycleMorph) often encounter the problem of image information misalignment in deformable registration tasks, posing challenges to practical application. To address this issue, we propose a novel smooth image sampling method to align full organic information to realize detail-preserving image warping. In this paper, we clarify that the phenomenon of image information mismatch is attributed to imbalanced sampling. Then, a sampling frequency map constructed by sampling frequency estimators is utilized to instruct smooth sampling by reducing the spatial gradient and the discrepancy between the all-ones matrix and the sampling frequency map. In addition, our estimator determines the sampling frequency of a grid voxel in the moving image by aggregating the sum of interpolation weights from warped non-grid sampling points in its vicinity, and vectorially constructs the sampling frequency map through projection and scattering. We evaluate the effectiveness of our approach through experiments on two in-house datasets. The results showcase that our method preserves nearly complete details with ideal registration accuracy compared with several state-of-the-art registration methods. Additionally, our method exhibits a statistically significant difference in the regularity of the registration field compared to other methods, at a significance level of p < 0.05. Our code will be released at https://github.com/QingRui-Sha/SFM.
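The estimator described above accumulates interpolation weights of warped (non-grid) sampling points onto nearby grid voxels to form the sampling frequency map. A 1-D sketch under that reading (a hypothetical simplification; the paper's actual construction is 3-D and vectorized) is:

```python
import numpy as np

def sampling_frequency_map(sample_locs, grid_size):
    """Scatter linear-interpolation weights of continuous 1-D sample
    locations onto integer grid voxels. A uniform map indicates balanced
    sampling; peaks and holes indicate over- and under-sampled voxels."""
    freq = np.zeros(grid_size)
    for x in sample_locs:
        i = int(np.floor(x))
        w = x - i                      # fractional offset to right neighbor
        if 0 <= i < grid_size:
            freq[i] += 1.0 - w         # weight on left grid voxel
        if 0 <= i + 1 < grid_size:
            freq[i + 1] += w           # weight on right grid voxel
    return freq
```

The smoothing loss in the paper then penalizes the spatial gradient of this map and its discrepancy from the all-ones matrix, pushing the warp toward balanced sampling.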
Affiliation(s)
- Qingrui Sha
- School of Biomedical Engineering, ShanghaiTech, Shanghai, China
- Kaicong Sun
- School of Biomedical Engineering, ShanghaiTech, Shanghai, China
- Caiwen Jiang
- School of Biomedical Engineering, ShanghaiTech, Shanghai, China
- Mingze Xu
- School of Science and Engineering, Chinese University of Hong Kong-Shenzhen, Guangdong, China
- Zhong Xue
- Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Xiaohuan Cao
- Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech, Shanghai, China; Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China; Shanghai Clinical Research and Trial Center, Shanghai, China
24
Casamitjana A, Mancini M, Robinson E, Peter L, Annunziata R, Althonayan J, Crampsie S, Blackburn E, Billot B, Atzeni A, Puonti O, Balbastre Y, Schmidt P, Hughes J, Augustinack JC, Edlow BL, Zöllei L, Thomas DL, Kliemann D, Bocchetta M, Strand C, Holton JL, Jaunmuktane Z, Iglesias JE. A next-generation, histological atlas of the human brain and its application to automated brain MRI segmentation. bioRxiv [Preprint] 2024:2024.02.05.579016. [PMID: 39282320 PMCID: PMC11398399 DOI: 10.1101/2024.02.05.579016] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Indexed: 09/21/2024]
Abstract
Magnetic resonance imaging (MRI) is the standard tool to image the human brain in vivo. In this domain, digital brain atlases are essential for subject-specific segmentation of anatomical regions of interest (ROIs) and spatial comparison of neuroanatomy from different subjects in a common coordinate frame. High-resolution digital atlases derived from histology (e.g., Allen atlas [7], BigBrain [13], Julich [15]) are currently the state of the art and provide exquisite 3D cytoarchitectural maps, but lack probabilistic labels throughout the whole brain. Here we present NextBrain, a next-generation probabilistic atlas of human brain anatomy built from serial 3D histology and corresponding highly granular delineations of five whole brain hemispheres. We developed AI techniques to align and reconstruct ~10,000 histological sections into coherent 3D volumes with joint geometric constraints (no overlap or gaps between sections), as well as to semi-automatically trace the boundaries of 333 distinct anatomical ROIs on all these sections. Comprehensive delineation on multiple cases enabled us to build the first probabilistic histological atlas of the whole human brain. Further, we created a companion Bayesian tool for automated segmentation of the 333 ROIs in any in vivo or ex vivo brain MRI scan using the NextBrain atlas. We showcase two applications of the atlas: automated segmentation of ultra-high-resolution ex vivo MRI and volumetric analysis of Alzheimer's disease and healthy brain ageing based on ~4,000 publicly available in vivo MRI scans. We publicly release: the raw and aligned data (including an online visualisation tool); the probabilistic atlas; the segmentation tool; and ground truth delineations for a 100 μm isotropic ex vivo hemisphere (that we use for quantitative evaluation of our segmentation method in this paper).
By enabling researchers worldwide to analyse brain MRI scans at a superior level of granularity without manual effort or highly specific neuroanatomical knowledge, NextBrain holds promise to increase the specificity of MRI findings and ultimately accelerate our quest to understand the human brain in health and disease.
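A Bayesian atlas-based segmentation of the kind described combines a per-voxel label prior from the probabilistic atlas with an intensity likelihood from the scan. A single-voxel sketch, assuming Gaussian intensity likelihoods (an illustrative assumption; this is not the released NextBrain tool), is:

```python
import numpy as np

def map_label(intensity, priors, means, stds):
    """Posterior over labels for one voxel: atlas prior times a Gaussian
    intensity likelihood, normalized across labels. Returns the MAP label
    and the full posterior."""
    priors = np.asarray(priors, float)   # atlas prior p(label | voxel)
    means = np.asarray(means, float)     # per-label intensity mean
    stds = np.asarray(stds, float)       # per-label intensity std
    lik = np.exp(-0.5 * ((intensity - means) / stds) ** 2) / stds
    post = priors * lik
    post /= post.sum()                   # normalize to a distribution
    return int(np.argmax(post)), post
```

Running this independently at every voxel, with the atlas deformably registered to the scan first, is the basic shape of probabilistic atlas segmentation; real tools additionally estimate the label intensity parameters from the image itself to stay contrast-adaptive.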
Affiliation(s)
- Adrià Casamitjana
- Department of Medical Physics and Biomedical Engineering, University College London, London, United Kingdom
- Research Institute of Computer Vision and Robotics, University of Girona, Girona, Spain
- Matteo Mancini
- Department of Medical Physics and Biomedical Engineering, University College London, London, United Kingdom
- Department of Cardiovascular, Endocrine-Metabolic Diseases and Aging, Italian National Institute of Health, Rome, Italy
- Cardiff University Brain Research Imaging Centre, Cardiff University, Cardiff, United Kingdom
- Eleanor Robinson
- Department of Medical Physics and Biomedical Engineering, University College London, London, United Kingdom
- Loïc Peter
- Department of Medical Physics and Biomedical Engineering, University College London, London, United Kingdom
- Roberto Annunziata
- Department of Medical Physics and Biomedical Engineering, University College London, London, United Kingdom
- Juri Althonayan
- Department of Medical Physics and Biomedical Engineering, University College London, London, United Kingdom
- Shauna Crampsie
- Department of Medical Physics and Biomedical Engineering, University College London, London, United Kingdom
- Emily Blackburn
- Department of Medical Physics and Biomedical Engineering, University College London, London, United Kingdom
- Benjamin Billot
- Department of Medical Physics and Biomedical Engineering, University College London, London, United Kingdom
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, United States
- Alessia Atzeni
- Department of Medical Physics and Biomedical Engineering, University College London, London, United Kingdom
- Oula Puonti
- Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital - Amager and Hvidovre, Copenhagen, Denmark
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, United States
- Yaël Balbastre
- Department of Medical Physics and Biomedical Engineering, University College London, London, United Kingdom
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, United States
- Peter Schmidt
- Advanced Research Computing Centre, University College London, London, United Kingdom
- James Hughes
- Advanced Research Computing Centre, University College London, London, United Kingdom
- Jean C Augustinack
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, United States
- Brian L Edlow
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, United States
- Center for Neurotechnology and Neurorecovery, Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, United States
- Lilla Zöllei
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, United States
- David L Thomas
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
- Neuroradiological Academic Unit, Department of Brain Repair and Rehabilitation, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
- Dorit Kliemann
- Department of Psychological and Brain Sciences, University of Iowa, Iowa City, IA, United States
- Martina Bocchetta
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
- Centre for Cognitive and Clinical Neuroscience, Division of Psychology, Department of Life Sciences, College of Health, Medicine and Life Sciences, Brunel University London, United Kingdom
- Catherine Strand
- Queen Square Brain Bank for Neurological Disorders, Department of Clinical and Movement Neurosciences, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
- Janice L Holton
- Queen Square Brain Bank for Neurological Disorders, Department of Clinical and Movement Neurosciences, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
- Zane Jaunmuktane
- Queen Square Brain Bank for Neurological Disorders, Department of Clinical and Movement Neurosciences, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
- Juan Eugenio Iglesias
- Department of Medical Physics and Biomedical Engineering, University College London, London, United Kingdom
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, United States
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, United States
25
Machura B, Kucharski D, Bozek O, Eksner B, Kokoszka B, Pekala T, Radom M, Strzelczak M, Zarudzki L, Gutiérrez-Becker B, Krason A, Tessier J, Nalepa J. Deep learning ensembles for detecting brain metastases in longitudinal multi-modal MRI studies. Comput Med Imaging Graph 2024; 116:102401. [PMID: 38795690 DOI: 10.1016/j.compmedimag.2024.102401] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2024] [Revised: 05/13/2024] [Accepted: 05/13/2024] [Indexed: 05/28/2024]
Abstract
Metastatic brain cancer is a condition characterized by the migration of cancer cells to the brain from extracranial sites. Notably, metastatic brain tumors surpass primary brain tumors in prevalence by a significant factor; they exhibit an aggressive growth potential and have the capacity to spread across diverse cerebral locations simultaneously. Magnetic resonance imaging (MRI) scans of individuals afflicted with metastatic brain tumors unveil a wide spectrum of characteristics. These lesions vary in size and quantity, spanning from tiny nodules to substantial masses captured within MRI. Patients may present with a limited number of lesions or an extensive burden of hundreds of them. Moreover, longitudinal studies may depict surgical resection cavities, as well as areas of necrosis or edema. Thus, the manual analysis of such MRI scans is difficult, user-dependent and cost-inefficient, and, importantly, it lacks reproducibility. We address these challenges and propose a pipeline for detecting and analyzing brain metastases in longitudinal studies, which benefits from an ensemble of various deep learning architectures originally designed for different downstream tasks (detection and segmentation). The experiments, performed over 275 multi-modal MRI scans of 87 patients acquired at 53 sites, coupled with rigorously validated manual annotations, revealed that our pipeline, built upon open-source tools to ensure its reproducibility, offers high-quality detection and allows for precisely tracking the disease progression. To objectively quantify the generalizability of models, we introduce a new data stratification approach that accommodates the heterogeneity of the dataset and is used to elaborate training-test splits in a data-robust manner, alongside a new set of quality metrics to objectively assess algorithms.
Our system provides a fully automatic and quantitative approach that may support physicians in a laborious process of disease progression tracking and evaluation of treatment efficacy.
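One common way to combine an ensemble of detection models, consistent with but not necessarily identical to the fusion used in this pipeline, is to average the per-voxel lesion probability maps and threshold the consensus:

```python
import numpy as np

def ensemble_detection(prob_maps, threshold=0.5):
    """Average per-voxel lesion probabilities from several models and
    threshold the consensus map into a binary detection mask."""
    consensus = np.mean(np.stack(prob_maps), axis=0)
    return consensus >= threshold

# Toy 2x2 "scans" scored by three hypothetical models.
m1 = np.array([[0.9, 0.2], [0.1, 0.8]])
m2 = np.array([[0.7, 0.4], [0.2, 0.9]])
m3 = np.array([[0.8, 0.3], [0.9, 0.7]])
mask = ensemble_detection([m1, m2, m3])
```

Averaging suppresses single-model false positives (the lone 0.9 in the bottom-left voxel is outvoted), which is one motivation for ensembling heterogeneous architectures.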
Affiliation(s)
- Damian Kucharski
- Graylight Imaging, Gliwice, Poland; Silesian University of Technology, Gliwice, Poland
- Oskar Bozek
- Department of Radiodiagnostics and Invasive Radiology, School of Medicine in Katowice, Medical University of Silesia in Katowice, Katowice, Poland
- Bartosz Eksner
- Department of Radiology and Nuclear Medicine, ZSM Chorzów, Chorzów, Poland
- Bartosz Kokoszka
- Department of Radiodiagnostics and Invasive Radiology, School of Medicine in Katowice, Medical University of Silesia in Katowice, Katowice, Poland
- Tomasz Pekala
- Department of Radiodiagnostics, Interventional Radiology and Nuclear Medicine, University Clinical Centre, Katowice, Poland
- Mateusz Radom
- Department of Radiology and Diagnostic Imaging, Maria Skłodowska-Curie National Research Institute of Oncology, Gliwice Branch, Gliwice, Poland
- Marek Strzelczak
- Department of Radiology and Diagnostic Imaging, Maria Skłodowska-Curie National Research Institute of Oncology, Gliwice Branch, Gliwice, Poland
- Lukasz Zarudzki
- Department of Radiology and Diagnostic Imaging, Maria Skłodowska-Curie National Research Institute of Oncology, Gliwice Branch, Gliwice, Poland
- Benjamín Gutiérrez-Becker
- Roche Pharma Research and Early Development, Informatics, Roche Innovation Center Basel, Basel, Switzerland
- Agata Krason
- Roche Pharma Research and Early Development, Early Clinical Development Oncology, Roche Innovation Center Basel, Basel, Switzerland
- Jean Tessier
- Roche Pharma Research and Early Development, Early Clinical Development Oncology, Roche Innovation Center Basel, Basel, Switzerland
- Jakub Nalepa
- Graylight Imaging, Gliwice, Poland; Silesian University of Technology, Gliwice, Poland
26
Cui X, Xu H, Liu J, Tian Z, Yang J. NCNet: Deformable medical image registration network based on neighborhood cross-attention combined with multi-resolution constraints. Biomed Phys Eng Express 2024; 10:055023. [PMID: 39084234 DOI: 10.1088/2057-1976/ad6992] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2024] [Accepted: 07/31/2024] [Indexed: 08/02/2024]
Abstract
Objective. Existing registration networks based on cross-attention designs usually divide the image pairs to be registered into patches for input. The division and merging operations on a series of patches make it difficult to maintain the topology of the deformation field and reduce the interpretability of the network. Therefore, our goal is to develop a new network architecture based on a cross-attention mechanism combined with a multi-resolution strategy to improve the accuracy and interpretability of medical image registration. Approach. We propose a new deformable image registration network, NCNet, based on neighborhood cross-attention combined with a multi-resolution strategy. The network structure mainly consists of a multi-resolution feature encoder, a multi-head neighborhood cross-attention module, and a registration decoder. The hierarchical feature extraction capability of our encoder is improved by introducing large-kernel parallel convolution blocks; the cross-attention module based on neighborhood calculation is used to reduce the impact on the topology of the deformation field, and double normalization is used to reduce its computational complexity. Main result. We performed atlas-based registration and inter-subject registration tasks on the public 3D brain magnetic resonance imaging datasets LPBA40 and IXI, respectively. Compared with the popular VoxelMorph method, our method improves the average DSC value by 7.9% and 3.6% on LPBA40 and IXI. Compared with the popular TransMorph method, our method improves the average DSC value by 4.9% and 1.3% on LPBA40 and IXI. Significance. We demonstrated the advantages of the neighborhood attention calculation method over the window attention calculation method based on partitioning patches, and analyzed the impact of the pyramid feature encoder and double normalization on network performance. This makes a valuable contribution to the further development of medical image registration methods.
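The DSC improvements reported above are measured with the Dice similarity coefficient between warped and fixed segmentation masks. A minimal NumPy sketch of that metric (illustrative only, not code from the paper):

```python
import numpy as np

def dice_coefficient(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient (DSC) between two binary label masks."""
    a = a.astype(bool)
    b = b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect overlap
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy example: two overlapping 2D masks
m1 = np.zeros((4, 4), dtype=bool); m1[:2, :2] = True   # 4 pixels
m2 = np.zeros((4, 4), dtype=bool); m2[:2, :3] = True   # 6 pixels
print(round(dice_coefficient(m1, m2), 3))  # 2*4/(4+6) = 0.8
```

In multi-label brain registration benchmarks such as LPBA40, this is typically computed per anatomical label and then averaged.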
Affiliation(s)
- Xinxin Cui
- School of Medical Information Engineering, Gansu University of Traditional Chinese Medicine, Lanzhou, 730000, People's Republic of China
| | - Hao Xu
- School of Medical Information Engineering, Gansu University of Traditional Chinese Medicine, Lanzhou, 730000, People's Republic of China
| | - Jing Liu
- School of Medical Information Engineering, Gansu University of Traditional Chinese Medicine, Lanzhou, 730000, People's Republic of China
| | - Zhenyu Tian
- School of Medical Information Engineering, Gansu University of Traditional Chinese Medicine, Lanzhou, 730000, People's Republic of China
| | - Jianlan Yang
- School of Medical Information Engineering, Gansu University of Traditional Chinese Medicine, Lanzhou, 730000, People's Republic of China
- Orthopedic Traumatology Hospital, Quanzhou, Fujian, 362000, People's Republic of China
| |
|
27
|
Hua Y, Xu K, Yang X. Variational image registration with learned prior using multi-stage VAEs. Comput Biol Med 2024; 178:108785. [PMID: 38925089 DOI: 10.1016/j.compbiomed.2024.108785] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2023] [Revised: 05/16/2024] [Accepted: 06/15/2024] [Indexed: 06/28/2024]
Abstract
Variational Autoencoders (VAEs) are an efficient variational inference technique coupled with a generative network. Owing to the uncertainty estimates provided by variational inference, VAEs have been applied to medical image registration. However, a critical problem in VAEs is that a simple prior cannot provide suitable regularization, which leads to a mismatch between the variational posterior and the prior. An optimal prior can close the gap between the true and variational posteriors. In this paper, we propose a multi-stage VAE to learn the optimal prior, which is the aggregated posterior. A lightweight VAE is used to generate the aggregated posterior as a whole; this is an effective way to estimate the distribution of the high-dimensional aggregated posterior that commonly arises in VAE-based medical image registration. A factorized telescoping classifier is trained to estimate the density ratio between a simple given prior and the aggregated posterior, aiming to calculate the KL divergence between the variational and aggregated posteriors more accurately. We analyze the KL divergence and find that the finer the factorization, the smaller the KL divergence; however, too fine a partition is not conducive to registration accuracy. Moreover, the diagonal hypothesis on the variational posterior's covariance ignores the relationships between latent variables in image registration. To address this issue, we learn a covariance matrix with low-rank structure to capture correlations between the dimensions of the variational posterior. The covariance matrix is further used as a measure to reduce the uncertainty of deformation fields. Experimental results on four public medical image datasets demonstrate that our proposed method outperforms other methods in negative log-likelihood (NLL) and achieves better registration accuracy.
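For the diagonal-Gaussian case the KL term discussed above has a closed form; a minimal sketch of that baseline computation (illustrative only — the paper's factorized density-ratio estimator is not reproduced here):

```python
import numpy as np

def kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p):
    """KL( q || p ) for diagonal Gaussians, summed over latent dimensions.

    Arguments are per-dimension means and log-variances of q and p."""
    var_q, var_p = np.exp(logvar_q), np.exp(logvar_p)
    per_dim = 0.5 * (logvar_p - logvar_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)
    return float(np.sum(per_dim))

# KL of a unit Gaussian against itself is zero
z = np.zeros(8)
print(kl_diag_gaussians(z, z, z, z))  # 0.0
```

When the prior is replaced by a learned aggregated posterior, as in the paper, this closed form no longer applies, which motivates the classifier-based ratio estimation the authors describe.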
Affiliation(s)
- Yong Hua
- College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, 518060, Guangdong, China
| | - Kangrong Xu
- College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, 518060, Guangdong, China
| | - Xuan Yang
- College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, 518060, Guangdong, China.
| |
|
28
|
Zhong H, Pursley JM, Rong Y. Deformable dose accumulation is required for adaptive radiotherapy practice. J Appl Clin Med Phys 2024; 25:e14457. [PMID: 39031438 DOI: 10.1002/acm2.14457] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2024] [Accepted: 07/08/2024] [Indexed: 07/22/2024] Open
Affiliation(s)
- Hualiang Zhong
- Department of Radiation Oncology, Medical College of Wisconsin, MILWAUKEE, Wisconsin, USA
| | - Jennifer M Pursley
- Department of Radiation Oncology, Massachusetts General Hospital, Boston, Massachusetts, USA
| | - Yi Rong
- Department of Radiation Oncology, Mayo Clinic Arizona, Phoenix, Arizona, USA
| |
|
29
|
Hernandez M, Ramon Julvez U. Insights into traditional Large Deformation Diffeomorphic Metric Mapping and unsupervised deep-learning for diffeomorphic registration and their evaluation. Comput Biol Med 2024; 178:108761. [PMID: 38908357 DOI: 10.1016/j.compbiomed.2024.108761] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2023] [Revised: 06/04/2024] [Accepted: 06/13/2024] [Indexed: 06/24/2024]
Abstract
This paper explores the connections between traditional Large Deformation Diffeomorphic Metric Mapping methods and unsupervised deep-learning approaches for non-rigid registration, particularly emphasizing diffeomorphic registration. The study provides useful insights and establishes connections between the methods, thereby facilitating a profound understanding of the methodological landscape. The methods considered in our study are extensively evaluated in T1w MRI images using traditional NIREP and Learn2Reg OASIS evaluation protocols with a focus on fairness, to establish equitable benchmarks and facilitate informed comparisons. Through a comprehensive analysis of the results, we address key questions, including the intricate relationship between accuracy and transformation quality in performance, the disentanglement of the influence of registration ingredients on performance, and the determination of benchmark methods and baselines. We offer valuable insights into the strengths and limitations of both traditional and deep-learning methods, shedding light on their comparative performance and guiding future advancements in the field.
Affiliation(s)
- Monica Hernandez
- Computer Science Department, University of Zaragoza, Spain; Aragon Institute on Engineering Research, Spain.
| | - Ubaldo Ramon Julvez
- Computer Science Department, University of Zaragoza, Spain; Aragon Institute on Engineering Research, Spain
| |
|
30
|
Yasuda N, Iwasawa T, Baba T, Misumi T, Cheng S, Kato S, Utsunomiya D, Ogura T. Evaluation of Progressive Architectural Distortion in Idiopathic Pulmonary Fibrosis Using Deformable Registration of Sequential CT Images. Diagnostics (Basel) 2024; 14:1650. [PMID: 39125526 PMCID: PMC11311668 DOI: 10.3390/diagnostics14151650] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2024] [Revised: 07/20/2024] [Accepted: 07/25/2024] [Indexed: 08/12/2024] Open
Abstract
BACKGROUND Monitoring the progression of idiopathic pulmonary fibrosis (IPF) using CT primarily focuses on assessing the extent of fibrotic lesions, without considering the distortion of lung architecture. OBJECTIVES To evaluate three-dimensional average displacement (3D-AD) quantification of lung structures using deformable registration of serial CT images as a parameter of local lung architectural distortion and predictor of IPF prognosis. MATERIALS AND METHODS Patients with IPF evaluated between January 2016 and March 2017 who had undergone CT at least twice were retrospectively included (n = 114). The 3D-AD was obtained by deformable registration of baseline and follow-up CT images. Computer-aided quantification software measured the fibrotic lesion volume. Cox regression analysis evaluated these variables to predict mortality. RESULTS The 3D-AD and the fibrotic lesion volume change were significantly larger in the subpleural lung region (5.2 mm (interquartile range (IQR): 3.6-7.1 mm) and 0.70% (IQR: 0.22-1.60%), respectively) than those in the inner region (4.7 mm (IQR: 3.0-6.4 mm) and 0.21% (IQR: 0.004-1.12%), respectively). Multivariable Cox analysis revealed that subpleural region 3D-AD and fibrotic lesion volume change were independent predictors of mortality (hazard ratio: 1.12 and 1.23; 95% confidence interval: 1.02-1.22 and 1.10-1.38; p = 0.01 and p < 0.001, respectively). CONCLUSIONS The 3D-AD quantification derived from deformable registration of serial CT images serves as a marker of lung architectural distortion and a prognostic predictor in patients with IPF.
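The 3D-AD metric above is, in essence, the mean displacement magnitude of a registration-derived deformation vector field within a lung region. A minimal sketch of that computation (the function and array layout are our own illustration, not the authors' software):

```python
import numpy as np

def mean_displacement(dvf: np.ndarray, mask: np.ndarray) -> float:
    """Average displacement magnitude (mm) of a dense vector field inside a region mask.

    dvf:  (X, Y, Z, 3) displacement field from deformable registration, in mm.
    mask: (X, Y, Z) boolean region (e.g. subpleural vs. inner lung)."""
    mag = np.linalg.norm(dvf, axis=-1)  # per-voxel displacement magnitude
    return float(mag[mask].mean())

# Uniform (3, 4, 0) mm shift -> magnitude 5 mm everywhere
dvf = np.zeros((5, 5, 5, 3)); dvf[..., 0] = 3.0; dvf[..., 1] = 4.0
mask = np.ones((5, 5, 5), dtype=bool)
print(mean_displacement(dvf, mask))  # 5.0
```

Separate subpleural and inner-lung masks would yield the regional comparison reported in the abstract.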
Affiliation(s)
- Naofumi Yasuda
- Department of Radiology, Kanagawa Cardiovascular and Respiratory Center, 6-16-1 Tomioka-Higashi, Kanazawa-ku, Yokohama 236-0051, Kanagawa, Japan;
- Department of Diagnostic Radiology, Yokohama City University Graduate School of Medicine, 3-9, Fukuura, Kanazawa-ku, Yokohama 236-0004, Kanagawa, Japan; (S.C.); (S.K.); (D.U.)
| | - Tae Iwasawa
- Department of Radiology, Kanagawa Cardiovascular and Respiratory Center, 6-16-1 Tomioka-Higashi, Kanazawa-ku, Yokohama 236-0051, Kanagawa, Japan;
- Department of Diagnostic Radiology, Yokohama City University Graduate School of Medicine, 3-9, Fukuura, Kanazawa-ku, Yokohama 236-0004, Kanagawa, Japan; (S.C.); (S.K.); (D.U.)
| | - Tomohisa Baba
- Department of Respiratory Medicine, Kanagawa Cardiovascular and Respiratory Center, 6-16-1 Tomioka-Higashi, Kanazawa-ku, Yokohama 236-0051, Kanagawa, Japan; (T.B.); (T.O.)
| | - Toshihiro Misumi
- Department of Biostatistics, Yokohama City University Graduate School of Medicine, 3-9, Fukuura, Kanazawa-ku, Yokohama 236-0004, Kanagawa, Japan;
| | - Shihyao Cheng
- Department of Diagnostic Radiology, Yokohama City University Graduate School of Medicine, 3-9, Fukuura, Kanazawa-ku, Yokohama 236-0004, Kanagawa, Japan; (S.C.); (S.K.); (D.U.)
| | - Shingo Kato
- Department of Diagnostic Radiology, Yokohama City University Graduate School of Medicine, 3-9, Fukuura, Kanazawa-ku, Yokohama 236-0004, Kanagawa, Japan; (S.C.); (S.K.); (D.U.)
| | - Daisuke Utsunomiya
- Department of Diagnostic Radiology, Yokohama City University Graduate School of Medicine, 3-9, Fukuura, Kanazawa-ku, Yokohama 236-0004, Kanagawa, Japan; (S.C.); (S.K.); (D.U.)
| | - Takashi Ogura
- Department of Respiratory Medicine, Kanagawa Cardiovascular and Respiratory Center, 6-16-1 Tomioka-Higashi, Kanazawa-ku, Yokohama 236-0051, Kanagawa, Japan; (T.B.); (T.O.)
| |
|
31
|
Zhang S, Lichti DD, Kuntze G, Ronsky JL. A Rigorous 2D-3D Registration Method for a High-Speed Bi-Planar Videoradiography Imaging System. Diagnostics (Basel) 2024; 14:1488. [PMID: 39061626 PMCID: PMC11276268 DOI: 10.3390/diagnostics14141488] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2024] [Revised: 07/06/2024] [Accepted: 07/08/2024] [Indexed: 07/28/2024] Open
Abstract
High-speed biplanar videoradiography can derive the dynamic bony translations and rotations required for joint cartilage contact mechanics to provide insights into the mechanical processes and mechanisms of joint degeneration or pathology. A key challenge is the accurate registration of 3D bone models (from MRI or CT scans) with 2D X-ray image pairs. Marker-based or model-based 2D-3D registration can be performed. The former has higher registration accuracy owing to corresponding marker pairs. The latter avoids bead implantation and uses radiograph intensity or features. A rigorous new method based on projection strategy and least-squares estimation that can be used for both methods is proposed and validated by a 3D-printed bone with implanted beads. The results show that it can achieve greater marker-based registration accuracy than the state-of-the-art RSA method. Model-based registration achieved a 3D reconstruction accuracy of 0.79 mm. Systematic offsets between detected edges in the radiographs and their actual position were observed and modeled to improve the reconstruction accuracy to 0.56 mm (tibia) and 0.64 mm (femur). This method is demonstrated on in vivo data, achieving a registration precision of 0.68 mm (tibia) and 0.60 mm (femur). The proposed method allows the determination of accurate 3D kinematic parameters that can be used to calculate joint cartilage contact mechanics.
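Marker-based 2D-3D registration as described above reduces, once corresponding bead coordinates are known, to a least-squares rigid alignment of point sets. As an illustrative sketch of that sub-problem (the classical Kabsch/Procrustes solution, not the authors' projection-based formulation):

```python
import numpy as np

def kabsch(P: np.ndarray, Q: np.ndarray):
    """Least-squares rigid transform (R, t) mapping point set P onto Q (both N x 3)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)          # 3x3 cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# Recover a known rotation + translation from noiseless "bead" coordinates
rng = np.random.default_rng(1)
P = rng.random((6, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
Q = P @ R_true.T + np.array([1.0, -2.0, 0.5])
R, t = kabsch(P, Q)
print(np.allclose(R, R_true), np.allclose(t, [1.0, -2.0, 0.5]))  # True True
```

With implanted beads and noisy detections, the residuals of this fit are what registration-accuracy figures such as those above quantify.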
Affiliation(s)
- Shu Zhang
- Department of Geomatics Engineering, University of Calgary, 2500 University Dr NW, Calgary, AB T2N 1N4, Canada;
| | - Derek D. Lichti
- Department of Geomatics Engineering, University of Calgary, 2500 University Dr NW, Calgary, AB T2N 1N4, Canada;
| | - Gregor Kuntze
- Department of Mechanical and Manufacturing Engineering, University of Calgary, 2500 University Dr NW, Calgary, AB T2N 1N4, Canada; (G.K.); (J.L.R.)
| | - Janet L. Ronsky
- Department of Mechanical and Manufacturing Engineering, University of Calgary, 2500 University Dr NW, Calgary, AB T2N 1N4, Canada; (G.K.); (J.L.R.)
| |
|
32
|
Neelakantan S, Mukherjee T, Myers KJ, Rizi R, Avazmohammadi R. Physics-informed motion registration of lung parenchyma across static CT images. ARXIV 2024:arXiv:2407.03457v1. [PMID: 39010873 PMCID: PMC11247911] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Download PDF] [Subscribe] [Scholar Register] [Indexed: 07/17/2024]
Abstract
Lung injuries, such as ventilator-induced lung injury and radiation-induced lung injury, can lead to heterogeneous alterations in the biomechanical behavior of the lungs. While imaging methods, e.g., X-ray and static computed tomography (CT), can point to regional alterations in lung structure between healthy and diseased tissue, they fall short of delineating timewise kinematic variations between the former and the latter. Image registration has gained recent interest as a tool to estimate the displacement experienced by the lungs during respiration via regional deformation metrics such as volumetric expansion and distortion. However, successful image registration commonly relies on a temporal series of image stacks with small displacements in the lungs across succeeding image stacks, which remains limited in static imaging. In this study, we have presented a finite element (FE) method to estimate strains from static images acquired at the end-expiration (EE) and end-inspiration (EI) timepoints, i.e., images with a large deformation between the two distant timepoints. Physiologically realistic loads were applied to the geometry obtained at EE to deform this geometry to match the geometry obtained at EI. The results indicated that the simulation could minimize the error between the two geometries. Using four-dimensional (4D) dynamic CT in a rat, the strain at an isolated transverse plane estimated by our method showed sufficient agreement with that estimated through non-rigid image registration that used all the timepoints. Through the proposed method, we can estimate the lung deformation at any timepoint between EE and EI. The proposed method offers a tool to estimate timewise regional deformation in the lungs using only static images acquired at EE and EI.
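Regional deformation metrics such as the volumetric expansion mentioned above are commonly derived from the Jacobian of the estimated displacement field: det(I + ∇u) gives the local volume ratio per voxel. A hedged NumPy sketch of that post-processing step (our own helper, not the paper's finite-element pipeline):

```python
import numpy as np

def jacobian_determinant(dvf: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> np.ndarray:
    """Per-voxel det(I + grad u) for a displacement field dvf of shape (X, Y, Z, 3)."""
    # grads[i][j] = d u_i / d x_j, via central finite differences
    grads = [np.gradient(dvf[..., i], *spacing, axis=(0, 1, 2)) for i in range(3)]
    J = np.empty(dvf.shape[:3] + (3, 3))
    for i in range(3):
        for j in range(3):
            J[..., i, j] = grads[i][j] + (1.0 if i == j else 0.0)
    return np.linalg.det(J)

# A pure 10% dilation, u(x) = 0.1 * x, gives det = 1.1**3 everywhere
coords = np.stack(np.meshgrid(*[np.arange(8.0)] * 3, indexing="ij"), axis=-1)
det = jacobian_determinant(0.1 * coords)
print(round(float(det.mean()), 3))  # 1.331
```

Values above 1 indicate local expansion (inspiration), below 1 local contraction; distortion measures build on the same Jacobian.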
Affiliation(s)
- Sunder Neelakantan
- Department of Biomedical Engineering, Texas A&M University, College Station, TX, USA
| | - Tanmay Mukherjee
- Department of Biomedical Engineering, Texas A&M University, College Station, TX, USA
| | - Kyle J Myers
- Hagler Institute for Advanced Study, Texas A&M University, College Station, TX, USA
| | - Rahim Rizi
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
| | - Reza Avazmohammadi
- Department of Biomedical Engineering, Texas A&M University, College Station, TX, USA
| |
|
33
|
Zhang J, Xie X, Cheng X, Li T, Zhong J, Hu X, Sun L, Yan H. Deep learning-based deformable image registration with bilateral pyramid to align pre-operative and follow-up magnetic resonance imaging (MRI) scans. Quant Imaging Med Surg 2024; 14:4779-4791. [PMID: 39022247 PMCID: PMC11250335 DOI: 10.21037/qims-23-1821] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2023] [Accepted: 05/23/2024] [Indexed: 07/20/2024]
Abstract
Background In clinical practice, the evaluation of brain tumor recurrence after surgery is based on the comparison of tumor regions between pre-operative and follow-up magnetic resonance imaging (MRI) scans. Accurate alignment of the MRI scans is important in this evaluation process. However, existing methods often fail to yield accurate alignment due to substantial appearance and shape changes of the tumor regions. This study aimed to address this misalignment through multimodal information and compensation for shape changes. Methods A deep learning-based deformable registration method using a bilateral pyramid to create multi-scale image features was developed. Moreover, morphology operations were employed to build correspondence between the surgical resection on the follow-up and pre-operative MRI scans. Results Compared with baseline methods, the proposed method achieved the lowest mean absolute error of 1.82 mm on the public BraTS-Reg 2022 dataset. Conclusions The results suggest that the proposed method is potentially useful for evaluating tumor recurrence after surgery. We verified its ability to extract and integrate the information of the second modality, and revealed the micro-level representation of tumor recurrence. This study can assist doctors in registering multi-sequence patient images, observing lesions and surrounding areas, and analyzing them to guide treatment planning.
Affiliation(s)
- Jingjing Zhang
- Key Laboratory of Intelligent Computing and Signal Processing of Ministry of Education, School of Electrical Engineering and Automation, Anhui University, Hefei, China
| | - Xin Xie
- Department of Radiation Oncology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, China
| | - Xuebin Cheng
- Key Laboratory of Intelligent Computing and Signal Processing of Ministry of Education, School of Electrical Engineering and Automation, Anhui University, Hefei, China
| | - Teng Li
- Key Laboratory of Intelligent Computing and Signal Processing of Ministry of Education, School of Electrical Engineering and Automation, Anhui University, Hefei, China
| | - Jinqin Zhong
- School of Internet, Anhui University, Hefei, China
| | - Xiaokun Hu
- Interventional Medicine Center, Affiliated Hospital of Qingdao University, Qingdao, China
| | - Lu Sun
- Department of Oncology, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan, China
| | - Hui Yan
- Department of Radiation Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| |
|
34
|
Neelakantan S, Mukherjee T, Myers K, Rizi R, Avazmohammadi R. Physics-informed motion registration of lung parenchyma across static CT images. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2024; 2024:1-4. [PMID: 40039407 DOI: 10.1109/embc53108.2024.10781530] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 03/06/2025]
Abstract
Lung injuries, such as ventilator-induced lung injury and radiation-induced lung injury, can lead to heterogeneous alterations in the biomechanical behavior of the lungs. While imaging methods, e.g., X-ray and static computed tomography (CT), can point to regional alterations in lung structure between healthy and diseased tissue, they fall short of delineating timewise kinematic variations between the former and the latter. Image registration has gained recent interest as a tool to estimate the displacement experienced by the lungs during respiration via regional deformation metrics such as volumetric expansion and distortion. However, successful image registration commonly relies on a temporal series of image stacks with small displacements in the lungs across succeeding image stacks, which remains limited in static imaging. In this study, we have presented a finite element (FE) method to estimate strains from static images acquired at the end-expiration (EE) and end-inspiration (EI) timepoints, i.e., images with a large deformation between the two distant timepoints. Physiologically realistic loads were applied to the geometry obtained at EE to deform this geometry to match the geometry obtained at EI. The results indicated that the simulation could minimize the error between the two geometries. Using four-dimensional (4D) dynamic CT in a rat, the strain at an isolated transverse plane estimated by our method showed sufficient agreement with that estimated through non-rigid image registration that used all the timepoints. Through the proposed method, we can estimate the lung deformation at any timepoint between EE and EI. The proposed method offers a tool to estimate timewise regional deformation in the lungs using only static images acquired at EE and EI.
|
35
|
Tzitzimpasis P, Ries M, Raaymakers BW, Zachiu C. Generalized div-curl based regularization for physically constrained deformable image registration. Sci Rep 2024; 14:15002. [PMID: 38951683 PMCID: PMC11217375 DOI: 10.1038/s41598-024-65896-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/04/2024] [Accepted: 06/25/2024] [Indexed: 07/03/2024] Open
Abstract
Variational image registration methods commonly employ a similarity metric and a regularization term that renders the minimization problem well-posed. However, many frequently used regularizations such as smoothness or curvature do not necessarily reflect the underlying physics that apply to anatomical deformations. This, in turn, can make the accurate estimation of complex deformations particularly challenging. Here, we present a new highly flexible regularization inspired by the physics of fluid dynamics which allows applying independent penalties on the divergence and curl of the deformations and/or their nth-order derivatives. The complexity of the proposed generalized div-curl regularization renders the problem particularly challenging for conventional optimization techniques. To this end, we develop a transformation model and an optimization scheme that uses the divergence and curl components of the deformation as control parameters for the registration. We demonstrate that the original unconstrained minimization problem reduces to a constrained problem for which we propose the use of the augmented Lagrangian method. Doing so, the equations of motion greatly simplify and become manageable. Our experiments indicate that the proposed framework can be applied to a variety of different registration problems and produces highly accurate deformations with the desired physical properties.
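The paper's generalized regularizer penalizes the divergence and curl of the deformation (and their higher derivatives) independently. As a simplified illustration of the zeroth-order 2D case only — the function names and finite-difference scheme are ours, not the authors' formulation:

```python
import numpy as np

def div_curl_2d(u: np.ndarray, spacing=(1.0, 1.0)):
    """Divergence and scalar curl of a 2D displacement field u of shape (X, Y, 2),
    indexed [x, y] with components (u_x, u_y)."""
    dux_dx, dux_dy = np.gradient(u[..., 0], *spacing)
    duy_dx, duy_dy = np.gradient(u[..., 1], *spacing)
    return dux_dx + duy_dy, duy_dx - dux_dy

def div_curl_penalty(u: np.ndarray, lam_div=1.0, lam_curl=1.0) -> float:
    """Independent quadratic penalties on divergence and curl."""
    div, curl = div_curl_2d(u)
    return lam_div * float((div ** 2).mean()) + lam_curl * float((curl ** 2).mean())

# A rigid in-plane rotation field is divergence-free with constant curl
x, y = np.meshgrid(np.arange(8.0), np.arange(8.0), indexing="ij")
rot = np.stack([-0.5 * y, 0.5 * x], axis=-1)
div, curl = div_curl_2d(rot)
print(round(float(div.mean()), 3), round(float(curl.mean()), 3))  # 0.0 1.0
```

Choosing the weights independently lets incompressible (divergence-penalized) and irrotational (curl-penalized) behavior be enforced separately, which is the flexibility the abstract highlights.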
Affiliation(s)
- Paris Tzitzimpasis
- Department of Radiotherapy, UMC Utrecht, 3584 CX, Utrecht, The Netherlands.
| | - Mario Ries
- Imaging Division, UMC Utrecht, 3584 CX, Utrecht, The Netherlands
| | - Bas W Raaymakers
- Department of Radiotherapy, UMC Utrecht, 3584 CX, Utrecht, The Netherlands
| | - Cornel Zachiu
- Department of Radiotherapy, UMC Utrecht, 3584 CX, Utrecht, The Netherlands
| |
|
36
|
Haloubi T, Thomas SA, Hines C, Dhaliwal K, Hopgood JR. Navigating Noise and Texture: Motion Compensation Methodology for Fluorescence Lifetime Imaging in Pulmonary Research. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2024; 2024:1-5. [PMID: 40039579 DOI: 10.1109/embc53108.2024.10781956] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 03/06/2025]
Abstract
In addressing the challenges in real-time Fluorescence Lifetime Imaging (FLIm)-Optical Endomicroscopy (OEM), particularly motion artefacts, this study introduces a comprehensive framework designed to enhance FLIm processing in in vivo studies. The framework focuses on improving image quality by selectively discarding uninformative frames and employing a novel registration technique. This technique integrates Normalised Cross Correlation (NCC) and the Channel and Spatial Reliability Tracker (CSRT) to consistently track the dominant correlation peak across temporal sequences of images, thus enhancing the reliability and precision of subsequent analyses. This approach has shown a significant improvement over existing registration methods in handling temporal FLIm motion artefacts. Our method overcomes the optimisation issues inherent in similarity-based registration and demonstrates a 17% enhancement in the Quality of Alignment (QA) metric and a 25% increase in the Structural Similarity Index Measure (SSIM) across various datasets. Clinical relevance: Our study introduces a significant advancement in FLIm imaging, with a novel method that increases the precision and reliability of the registration. This enhancement is crucial for the translational clinical research sphere, where precise, real-time imaging underpins the development of more effective diagnostics and treatments in pulmonary medicine.
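The NCC component of the tracking pipeline above can be illustrated with a zero-normalised cross-correlation on image patches (a simplified single-patch sketch; the paper's peak-tracking implementation is not shown here):

```python
import numpy as np

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Zero-normalised cross-correlation between two equally sized image patches.

    Returns a value in [-1, 1]; invariant to affine intensity changes."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom) if denom else 0.0

rng = np.random.default_rng(0)
patch = rng.random((16, 16))
print(round(ncc(patch, 2.0 * patch + 1.0), 3))  # 1.0  (gain/offset invariant)
print(round(ncc(patch, -patch), 3))             # -1.0
```

In template-matching registration, this score is evaluated over a grid of candidate offsets and the dominant peak is the one tracked across frames.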
|
37
|
Gazula H, Tregidgo HFJ, Billot B, Balbastre Y, Williams-Ramirez J, Herisse R, Deden-Binder LJ, Casamitjana A, Melief EJ, Latimer CS, Kilgore MD, Montine M, Robinson E, Blackburn E, Marshall MS, Connors TR, Oakley DH, Frosch MP, Young SI, Van Leemput K, Dalca AV, Fischl B, MacDonald CL, Keene CD, Hyman BT, Iglesias JE. Machine learning of dissection photographs and surface scanning for quantitative 3D neuropathology. eLife 2024; 12:RP91398. [PMID: 38896568 PMCID: PMC11186625 DOI: 10.7554/elife.91398] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/21/2024] Open
Abstract
We present open-source tools for three-dimensional (3D) analysis of photographs of dissected slices of human brains, which are routinely acquired in brain banks but seldom used for quantitative analysis. Our tools can: (1) 3D reconstruct a volume from the photographs and, optionally, a surface scan; and (2) produce a high-resolution 3D segmentation into 11 brain regions per hemisphere (22 in total), independently of the slice thickness. Our tools can be used as a substitute for ex vivo magnetic resonance imaging (MRI), which requires access to an MRI scanner, ex vivo scanning expertise, and considerable financial resources. We tested our tools on synthetic and real data from two NIH Alzheimer's Disease Research Centers. The results show that our methodology yields accurate 3D reconstructions, segmentations, and volumetric measurements that are highly correlated to those from MRI. Our method also detects expected differences between post mortem confirmed Alzheimer's disease cases and controls. The tools are available in our widespread neuroimaging suite 'FreeSurfer' (https://surfer.nmr.mgh.harvard.edu/fswiki/PhotoTools).
Affiliation(s)
- Harshvardhan Gazula
- Martinos Center for Biomedical Imaging, MGH and Harvard Medical SchoolCharlestownUnited States
| | - Henry FJ Tregidgo
- Centre for Medical Image Computing, University College London, London, United Kingdom
- Benjamin Billot
- Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, United States
- Yael Balbastre
- Martinos Center for Biomedical Imaging, MGH and Harvard Medical School, Charlestown, United States
- Rogeny Herisse
- Martinos Center for Biomedical Imaging, MGH and Harvard Medical School, Charlestown, United States
- Lucas J Deden-Binder
- Martinos Center for Biomedical Imaging, MGH and Harvard Medical School, Charlestown, United States
- Adria Casamitjana
- Centre for Medical Image Computing, University College London, London, United Kingdom
- Biomedical Imaging Group, Universitat Politècnica de Catalunya, Barcelona, Spain
- Erica J Melief
- BioRepository and Integrated Neuropathology (BRaIN) Laboratory and Precision Neuropathology Core, UW School of Medicine, Seattle, United States
- Caitlin S Latimer
- BioRepository and Integrated Neuropathology (BRaIN) Laboratory and Precision Neuropathology Core, UW School of Medicine, Seattle, United States
- Mitchell D Kilgore
- BioRepository and Integrated Neuropathology (BRaIN) Laboratory and Precision Neuropathology Core, UW School of Medicine, Seattle, United States
- Mark Montine
- BioRepository and Integrated Neuropathology (BRaIN) Laboratory and Precision Neuropathology Core, UW School of Medicine, Seattle, United States
- Eleanor Robinson
- Centre for Medical Image Computing, University College London, London, United Kingdom
- Emily Blackburn
- Centre for Medical Image Computing, University College London, London, United Kingdom
- Michael S Marshall
- Massachusetts Alzheimer Disease Research Center, MGH and Harvard Medical School, Charlestown, United States
- Theresa R Connors
- Massachusetts Alzheimer Disease Research Center, MGH and Harvard Medical School, Charlestown, United States
- Derek H Oakley
- Massachusetts Alzheimer Disease Research Center, MGH and Harvard Medical School, Charlestown, United States
- Matthew P Frosch
- Massachusetts Alzheimer Disease Research Center, MGH and Harvard Medical School, Charlestown, United States
- Sean I Young
- Martinos Center for Biomedical Imaging, MGH and Harvard Medical School, Charlestown, United States
- Koen Van Leemput
- Martinos Center for Biomedical Imaging, MGH and Harvard Medical School, Charlestown, United States
- Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland
- Adrian V Dalca
- Martinos Center for Biomedical Imaging, MGH and Harvard Medical School, Charlestown, United States
- Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, United States
- Bruce Fischl
- Martinos Center for Biomedical Imaging, MGH and Harvard Medical School, Charlestown, United States
- C Dirk Keene
- BioRepository and Integrated Neuropathology (BRaIN) Laboratory and Precision Neuropathology Core, UW School of Medicine, Seattle, United States
- Bradley T Hyman
- Massachusetts Alzheimer Disease Research Center, MGH and Harvard Medical School, Charlestown, United States
- Juan E Iglesias
- Martinos Center for Biomedical Imaging, MGH and Harvard Medical School, Charlestown, United States
- Centre for Medical Image Computing, University College London, London, United Kingdom
- Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, United States
38
Shahsavarani S, Lopez F, Ibarra-Castanedo C, Maldague XPV. Advanced Image Stitching Method for Dual-Sensor Inspection. Sensors (Basel) 2024; 24:3778. [PMID: 38931562] [PMCID: PMC11207425] [DOI: 10.3390/s24123778] [Received: 05/02/2024] [Revised: 05/24/2024] [Accepted: 06/05/2024] [Indexed: 06/28/2024]
Abstract
Efficient image stitching plays a vital role in the Non-Destructive Evaluation (NDE) of infrastructures. An essential challenge in the NDE of infrastructures is precisely visualizing defects within large structures. The existing literature predominantly relies on high-resolution close-distance images to detect surface or subsurface defects. While the automatic detection of all defect types represents a significant advancement, understanding the location and continuity of defects is imperative. It is worth noting that some defects may be too small to capture from a considerable distance. Consequently, multiple image sequences are captured and processed using image stitching techniques. Additionally, visible and infrared data fusion strategies prove essential for acquiring comprehensive information to detect defects across vast structures. Hence, there is a need for an effective image stitching method appropriate for infrared and visible images of structures and industrial assets, facilitating enhanced visualization and automated inspection for structural maintenance. This paper proposes an advanced image stitching method appropriate for dual-sensor inspections. The proposed image stitching technique employs self-supervised feature detection to enhance the quality and quantity of feature detection. Subsequently, a graph neural network is employed for robust feature matching. Ultimately, the proposed method results in image stitching that effectively eliminates perspective distortion in both infrared and visible images, a prerequisite for subsequent multi-modal fusion strategies. Our results substantially enhance the visualization capabilities for infrastructure inspection. Comparative analysis with popular state-of-the-art methods confirms the effectiveness of the proposed approach.
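As a rough illustration of the robust feature-matching stage that such stitching pipelines depend on (the paper uses self-supervised detection and a graph neural network, not the toy estimator below), a RANSAC-style consensus search over noisy keypoint correspondences can be sketched in a few lines; all names and values here are hypothetical:

```python
import math
import random

def ransac_translation(matches, threshold=2.0, iterations=200, seed=0):
    """Estimate a 2D translation from noisy keypoint matches.

    matches: list of ((x1, y1), (x2, y2)) correspondences.
    Returns (dx, dy) refined over the largest inlier set found.
    """
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iterations):
        (x1, y1), (x2, y2) = rng.choice(matches)
        dx, dy = x2 - x1, y2 - y1  # hypothesis from a single match
        inliers = [m for m in matches
                   if math.hypot(m[1][0] - m[0][0] - dx,
                                 m[1][1] - m[0][1] - dy) < threshold]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    # refine: average the displacement over all inliers
    dx = sum(b[0] - a[0] for a, b in best_inliers) / len(best_inliers)
    dy = sum(b[1] - a[1] for a, b in best_inliers) / len(best_inliers)
    return dx, dy

# Synthetic demo: true shift (10, -4) plus one gross outlier match.
good = [((x, y), (x + 10, y - 4)) for x, y in [(0, 0), (5, 3), (8, 1), (2, 7)]]
matches = good + [((1, 1), (50, 50))]  # outlier
print(ransac_translation(matches))     # ≈ (10.0, -4.0)
```

A real stitcher would fit a full homography to the inlier set rather than a pure translation, but the consensus logic is the same.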
Affiliation(s)
- Sara Shahsavarani
- Computer Vision and Systems Laboratory (CVSL), Department of Electrical and Computer Engineering, Faculty of Science and Engineering, Laval University, Quebec City, QC G1V 0A6, Canada
- Fernando Lopez
- TORNGATS, 200 Boul. du Parc-Technologique, Quebec City, QC G1P 4S3, Canada
- Clemente Ibarra-Castanedo
- Computer Vision and Systems Laboratory (CVSL), Department of Electrical and Computer Engineering, Faculty of Science and Engineering, Laval University, Quebec City, QC G1V 0A6, Canada
- Xavier P. V. Maldague
- Computer Vision and Systems Laboratory (CVSL), Department of Electrical and Computer Engineering, Faculty of Science and Engineering, Laval University, Quebec City, QC G1V 0A6, Canada
39
Lin YH, Chen LW, Wang HJ, Hsieh MS, Lu CW, Chuang JH, Chang YC, Chen JS, Chen CM, Lin MW. Quantification of Resection Margin following Sublobar Resection in Lung Cancer Patients through Pre- and Post-Operative CT Image Comparison: Utilizing a CT-Based 3D Reconstruction Algorithm. Cancers (Basel) 2024; 16:2181. [PMID: 38927887] [PMCID: PMC11201844] [DOI: 10.3390/cancers16122181] [Received: 04/21/2024] [Revised: 06/02/2024] [Accepted: 06/06/2024] [Indexed: 06/28/2024]
Abstract
Sublobar resection has emerged as a standard treatment option for early-stage peripheral non-small cell lung cancer. Achieving an adequate resection margin is crucial to prevent local tumor recurrence. However, gross measurement of the resection margin may lack accuracy due to the elasticity of lung tissue and interobserver variability. Therefore, this study aimed to develop an objective measurement method, the CT-based 3D reconstruction algorithm, to quantify the resection margin following sublobar resection in lung cancer patients through pre- and post-operative CT image comparison. An automated subvascular matching technique was first developed to ensure accuracy and reproducibility in the matching process. Following the extraction of matched feature points, another key technique involves calculating the displacement field within the image. This is particularly important for mapping discontinuous deformation fields around the surgical resection area. A transformation based on thin-plate spline is used for medical image registration. Upon completing the final step of image registration, the distance at the resection margin was measured. After developing the CT-based 3D reconstruction algorithm, we included 12 cases for resection margin distance measurement, comprising 4 right middle lobectomies, 6 segmentectomies, and 2 wedge resections. The outcomes obtained with our method revealed that the target registration error for all cases was less than 2.5 mm. Our method demonstrated the feasibility of measuring the resection margin following sublobar resection in lung cancer patients through pre- and post-operative CT image comparison. Further validation with a multicenter, large cohort, and analysis of clinical outcome correlation is necessary in future studies.
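The target registration error reported above (< 2.5 mm for all cases) is, at its core, the distance between corresponding landmarks after registration. A minimal sketch with hypothetical landmark coordinates, not the authors' evaluation code:

```python
import math

def target_registration_error(fixed_pts, moved_pts):
    """Mean and maximum Euclidean distance between corresponding 3D landmarks.

    fixed_pts / moved_pts: lists of (x, y, z) in mm; moved_pts are the
    moving-image landmarks after the registration transform has been applied.
    """
    dists = [math.dist(f, m) for f, m in zip(fixed_pts, moved_pts)]
    return sum(dists) / len(dists), max(dists)

# Hypothetical landmark pairs (mm) with a small residual misalignment.
fixed = [(10.0, 20.0, 30.0), (15.0, 22.0, 28.0), (11.0, 25.0, 33.0)]
moved = [(10.5, 20.0, 30.0), (15.0, 23.0, 28.0), (11.0, 25.0, 34.0)]
mean_tre, max_tre = target_registration_error(fixed, moved)
print(f"mean TRE = {mean_tre:.2f} mm, max = {max_tre:.2f} mm")  # mean ≈ 0.83 mm
```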
Affiliation(s)
- Yu-Hsuan Lin
- Department of Biomedical Engineering, College of Medicine and College of Engineering, National Taiwan University, Taipei 106, Taiwan
- Li-Wei Chen
- Department of Biomedical Engineering, College of Medicine and College of Engineering, National Taiwan University, Taipei 106, Taiwan
- Hao-Jen Wang
- Department of Biomedical Engineering, College of Medicine and College of Engineering, National Taiwan University, Taipei 106, Taiwan
- Min-Shu Hsieh
- Department of Pathology, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei 100, Taiwan
- Chao-Wen Lu
- Department of Surgery, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei 100, Taiwan
- Jen-Hao Chuang
- Department of Surgery, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei 100, Taiwan
- Yeun-Chung Chang
- Department of Medical Imaging, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei 100, Taiwan
- Jin-Shing Chen
- Department of Surgery, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei 100, Taiwan
- Chung-Ming Chen
- Department of Biomedical Engineering, College of Medicine and College of Engineering, National Taiwan University, Taipei 106, Taiwan
- Mong-Wei Lin
- Department of Surgery, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei 100, Taiwan
40
Wang H, Ni D, Wang Y. Recursive Deformable Pyramid Network for Unsupervised Medical Image Registration. IEEE Trans Med Imaging 2024; 43:2229-2240. [PMID: 38319758] [DOI: 10.1109/tmi.2024.3362968] [Indexed: 02/08/2024]
Abstract
Complicated deformation problems are frequently encountered in medical image registration tasks. Although various advanced registration models have been proposed, accurate and efficient deformable registration remains challenging, especially for large volumetric deformations. To this end, we propose a novel recursive deformable pyramid (RDP) network for unsupervised non-rigid registration. Our network is a pure convolutional pyramid that fully exploits the advantages of the pyramid structure itself without relying on any heavyweight attention or transformer modules. In particular, it uses a step-by-step recursion strategy, integrating high-level semantics to predict the deformation field from coarse to fine while keeping the deformation field plausible. Owing to the recursive pyramid strategy, the network can achieve deformable registration without separate affine pre-alignment. We compare the RDP network with several existing registration methods on three public brain magnetic resonance imaging (MRI) datasets: LPBA, Mindboggle, and IXI. Experimental results demonstrate that our network consistently outperforms the state of the art on Dice score, average symmetric surface distance, Hausdorff distance, and Jacobian metrics. Even for data without affine pre-alignment, the network maintains satisfactory performance in compensating for large deformations. The code is publicly available at https://github.com/ZAX130/RDP.
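Two of the evaluation metrics named here, Dice score and Hausdorff distance, are simple to state precisely. A minimal sketch over binary masks represented as sets of voxel indices (toy data, not the paper's evaluation code):

```python
import math

def dice(a, b):
    """Dice score between two binary masks given as sets of voxel indices."""
    if not a and not b:
        return 1.0
    return 2.0 * len(a & b) / (len(a) + len(b))

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two voxel sets (in voxel units)."""
    d_ab = max(min(math.dist(p, q) for q in b) for p in a)
    d_ba = max(min(math.dist(p, q) for q in a) for p in b)
    return max(d_ab, d_ba)

# Toy 2D masks: b is a shifted by one voxel along x.
a = {(x, y) for x in range(4) for y in range(4)}
b = {(x + 1, y) for x in range(4) for y in range(4)}
print(dice(a, b))       # 12 shared voxels of 16 + 16 -> 0.75
print(hausdorff(a, b))  # 1.0
```

Practical toolkits compute these on full label volumes (and the percentile variant HD95) rather than explicit point sets, but the definitions are the same.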
41
Wodzinski M, Marini N, Atzori M, Müller H. RegWSI: Whole slide image registration using combined deep feature- and intensity-based methods: Winner of the ACROBAT 2023 challenge. Comput Methods Programs Biomed 2024; 250:108187. [PMID: 38657383] [DOI: 10.1016/j.cmpb.2024.108187] [Received: 02/05/2024] [Revised: 04/05/2024] [Accepted: 04/17/2024] [Indexed: 04/26/2024]
Abstract
BACKGROUND AND OBJECTIVE The automatic registration of differently stained whole slide images (WSIs) is crucial for improving diagnosis and prognosis by fusing complementary information emerging from different visible structures. It is also useful for quickly transferring annotations between consecutive or restained slides, thus significantly reducing annotation time and associated costs. Nevertheless, slide preparation differs for each stain, and the tissue undergoes complex and large deformations. Therefore, a robust, efficient, and accurate registration method is highly desired by the scientific community and hospitals specializing in digital pathology. METHODS We propose a two-step hybrid method consisting of (i) a deep learning- and feature-based initial alignment algorithm and (ii) intensity-based nonrigid registration using instance optimization. The proposed method does not require any fine-tuning to a particular dataset and can be used directly for any desired tissue type and stain. The registration time is low, allowing efficient registration even for large datasets. The method was proposed for the ACROBAT 2023 challenge organized during the MICCAI 2023 conference and scored 1st place. The method is released as open-source software. RESULTS The proposed method is evaluated using three open datasets: (i) the Automatic Nonrigid Histological Image Registration Dataset (ANHIR), (ii) the Automatic Registration of Breast Cancer Tissue Dataset (ACROBAT), and (iii) the Hybrid Restained and Consecutive Histological Serial Sections Dataset (HyReCo). The target registration error (TRE) is used as the evaluation metric. We compare the proposed algorithm to other state-of-the-art solutions, showing considerable improvement. Additionally, we perform several ablation studies concerning the resolution used for registration and the robustness and stability of the initial alignment. The method achieves the most accurate results on the ACROBAT dataset, attains cell-level registration accuracy for the restained slides from the HyReCo dataset, and is among the best methods evaluated on the ANHIR dataset. CONCLUSIONS The article presents an automatic and robust registration method that outperforms other state-of-the-art solutions. The method does not require any fine-tuning to a particular dataset and can be used out-of-the-box for numerous types of microscopic images. The method is incorporated into the DeeperHistReg framework, allowing others to directly use it to register, transform, and save the WSIs at any desired pyramid level (resolution up to 220k x 220k). We provide free access to the software. The results are fully and easily reproducible. The proposed method is a significant contribution to improving WSI registration quality, thus advancing the field of digital pathology.
Affiliation(s)
- Marek Wodzinski
- Institute of Informatics, University of Applied Sciences Western Switzerland, Sierre, Switzerland; Department of Measurement and Electronics, AGH University of Kraków, Krakow, Poland
- Niccolò Marini
- Institute of Informatics, University of Applied Sciences Western Switzerland, Sierre, Switzerland
- Manfredo Atzori
- Institute of Informatics, University of Applied Sciences Western Switzerland, Sierre, Switzerland; Department of Neuroscience, University of Padova, Padova, Italy
- Henning Müller
- Institute of Informatics, University of Applied Sciences Western Switzerland, Sierre, Switzerland; Medical Faculty, University of Geneva, Geneva, Switzerland
42
Lu X, Liang X, Liu W, Miao X, Guan X. ReeGAN: MRI image edge-preserving synthesis based on GANs trained with misaligned data. Med Biol Eng Comput 2024; 62:1851-1868. [PMID: 38396277] [DOI: 10.1007/s11517-024-03035-w] [Received: 04/20/2023] [Accepted: 01/27/2024] [Indexed: 02/25/2024]
Abstract
As a crucial medical examination technique, the different modalities of magnetic resonance imaging (MRI) complement each other, offering multi-angle and multi-dimensional insights into the body's internal information. Therefore, research on MRI cross-modality conversion is of great significance, and many innovative techniques have been explored. However, most methods are trained on well-aligned data, and the impact of misaligned data has not received sufficient attention. Additionally, many methods focus on transforming the entire image and ignore crucial edge information. To address these challenges, we propose a generative adversarial network based on multi-feature fusion, which effectively preserves edge information while training on noisy data. Notably, we treat images under limited-range random transformations as noisy labels and use an additional small auxiliary registration network to help the generator adapt to the noise distribution. Moreover, we inject auxiliary edge information to improve the quality of the synthesized target-modality images. Our goal is to find the best solution for cross-modality conversion. Comprehensive experiments and ablation studies demonstrate the effectiveness of the proposed method.
Affiliation(s)
- Xiangjiang Lu
- Guangxi Key Lab of Multi-Source Information Mining & Security, School of Computer Science and Engineering & School of Software, Guangxi Normal University, Guilin, 541004, China
- Xiaoshuang Liang
- Guangxi Key Lab of Multi-Source Information Mining & Security, School of Computer Science and Engineering & School of Software, Guangxi Normal University, Guilin, 541004, China
- Wenjing Liu
- Guangxi Key Lab of Multi-Source Information Mining & Security, School of Computer Science and Engineering & School of Software, Guangxi Normal University, Guilin, 541004, China
- Xiuxia Miao
- Guangxi Key Lab of Multi-Source Information Mining & Security, School of Computer Science and Engineering & School of Software, Guangxi Normal University, Guilin, 541004, China
- Xianglong Guan
- Guangxi Key Lab of Multi-Source Information Mining & Security, School of Computer Science and Engineering & School of Software, Guangxi Normal University, Guilin, 541004, China
43
Rao F, Lyu T, Feng Z, Wu Y, Ni Y, Zhu W. A landmark-supervised registration framework for multi-phase CT images with cross-distillation. Phys Med Biol 2024; 69:115059. [PMID: 38768601] [DOI: 10.1088/1361-6560/ad4e01] [Received: 01/04/2024] [Accepted: 05/20/2024] [Indexed: 05/22/2024]
Abstract
Objective. Multi-phase computed tomography (CT) has become a leading modality for identifying hepatic tumors. Nevertheless, misalignment between the images of different phases poses a challenge to accurately identifying and analyzing the patient's anatomy. Conventional registration methods typically concentrate on either intensity-based or landmark-based features in isolation, thus limiting the accuracy of the registration process. Method. We establish a nonrigid cycle-registration network that leverages semi-supervised learning, in which a point-distance term based on the Euclidean distance between registered landmark points is introduced into the loss function. Additionally, a cross-distillation strategy incorporating response-based knowledge about the distances between feature points is proposed in network training to further improve registration performance. Results. We conducted experiments on multi-centered liver CT datasets to evaluate the performance of the proposed method. The results demonstrate that our method outperforms baseline methods in terms of target registration error. Additionally, Dice scores of the warped tumor masks were calculated; our method consistently achieved the highest scores among all compared methods, specifically 82.9% and 82.5% on the hepatocellular carcinoma and intrahepatic cholangiocarcinoma datasets, respectively. Significance. The superior registration performance indicates its potential to serve as an important tool in hepatic tumor identification and analysis.
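The landmark-supervised idea, adding a Euclidean point-distance term to the registration loss, can be sketched as follows; the weighting, names, and values are illustrative assumptions, not the paper's implementation:

```python
import math

def point_distance_term(warped_landmarks, fixed_landmarks):
    """Mean Euclidean distance between warped and fixed landmark points."""
    d = [math.dist(w, f) for w, f in zip(warped_landmarks, fixed_landmarks)]
    return sum(d) / len(d)

def total_loss(similarity_loss, warped_landmarks, fixed_landmarks, lam=0.1):
    """Semi-supervised loss: intensity similarity plus weighted landmark term."""
    return similarity_loss + lam * point_distance_term(warped_landmarks,
                                                       fixed_landmarks)

# Hypothetical landmarks: one pair still 2 mm apart after warping.
warped = [(10.0, 5.0, 2.0), (4.0, 4.0, 4.0)]
fixed = [(10.0, 5.0, 4.0), (4.0, 4.0, 4.0)]
print(total_loss(0.42, warped, fixed))  # ≈ 0.42 + 0.1 * 1.0 = 0.52
```

In a training loop the landmark coordinates would be pushed through the predicted deformation field each iteration, so the term stays differentiable with respect to the field.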
Affiliation(s)
- Fan Rao
- Research Center for Augmented Intelligence, Zhejiang Lab, Hangzhou 310000, People's Republic of China
- Tianling Lyu
- Research Center for Augmented Intelligence, Zhejiang Lab, Hangzhou 310000, People's Republic of China
- Zhan Feng
- Department of Radiology, College of Medicine, The First Affiliated Hospital, Zhejiang University, Hangzhou 311100, People's Republic of China
- Yuanfeng Wu
- Research Center for Augmented Intelligence, Zhejiang Lab, Hangzhou 310000, People's Republic of China
- Yangfan Ni
- Research Center for Augmented Intelligence, Zhejiang Lab, Hangzhou 310000, People's Republic of China
- Wentao Zhu
- Research Center for Augmented Intelligence, Zhejiang Lab, Hangzhou 310000, People's Republic of China
44
Zhou D, Yu C, Liu W, Liu F. Registration of multimodal bone images based on edge similarity metaheuristic. Comput Biol Med 2024; 174:108379. [PMID: 38631115] [DOI: 10.1016/j.compbiomed.2024.108379] [Received: 12/02/2023] [Revised: 03/09/2024] [Accepted: 03/24/2024] [Indexed: 04/19/2024]
Abstract
OBJECTIVE Blurry medical images affect the accuracy and efficiency of multimodal image registration, and existing methods require further improvement. METHODS We propose an edge-based similarity registration method optimised for multimodal medical images, especially bone images, using a balance optimiser. First, we use GPU (graphics processing unit) rendering simulation to convert computed tomography data into digitally reconstructed radiographs. Second, we introduce the improved cascaded edge network (ICENet), a convolutional neural network that extracts edge information from blurry medical images. Then, the bilateral Gaussian-weighted similarity of pairs of X-ray images and digitally reconstructed radiographs is measured. The balance optimiser is iteratively applied to estimate the best pose and perform image registration. RESULTS Experimental results show that, on average, the proposed method with ICENet outperforms other edge detection networks by 20%, 12%, 18.83%, and 11.93% in overall Dice similarity, overall intersection over union, peak signal-to-noise ratio, and structural similarity index, respectively, with a registration success rate of up to 90% and an average reduction of 220% in registration time. CONCLUSION The proposed method with ICENet can achieve a high registration success rate even for blurry medical images, and its efficiency and robustness are higher than those of existing methods. SIGNIFICANCE Our proposal may be suitable for supporting medical diagnosis, radiation therapy, image-guided surgery, and other clinical applications.
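Of the image-quality metrics reported here, peak signal-to-noise ratio (PSNR) is fully determined by its definition, 10·log10(MAX²/MSE). A minimal sketch on toy intensity lists (illustrative only, not the paper's evaluation pipeline):

```python
import math

def psnr(reference, test, max_value=255.0):
    """Peak signal-to-noise ratio (dB) between two equally sized images,
    given here as flat lists of pixel intensities."""
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    if mse == 0:
        return math.inf  # identical images
    return 10.0 * math.log10(max_value ** 2 / mse)

# Toy 8-bit intensities: a reference row and a slightly degraded copy.
ref = [0, 64, 128, 255]
deg = [2, 62, 130, 251]
print(f"{psnr(ref, deg):.2f} dB")  # MSE = 7 -> ≈ 39.68 dB
```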
Affiliation(s)
- Dibin Zhou
- School of Information Science and Technology, Hangzhou Normal University, Zhejiang, China
- Chen Yu
- School of Information Science and Technology, Hangzhou Normal University, Zhejiang, China
- Wenhao Liu
- School of Information Science and Technology, Hangzhou Normal University, Zhejiang, China
- Fuchang Liu
- School of Information Science and Technology, Hangzhou Normal University, Zhejiang, China
45
Maria Antony AN, Narisetti N, Gladilin E. Linel2D-Net: A deep learning approach to solving 2D linear elastic boundary value problems on image domains. iScience 2024; 27:109519. [PMID: 38595795] [PMCID: PMC11002675] [DOI: 10.1016/j.isci.2024.109519] [Received: 01/04/2024] [Revised: 02/02/2024] [Accepted: 03/14/2024] [Indexed: 04/11/2024]
Abstract
Efficient solution of physical boundary value problems (BVPs) remains a challenging task demanded in many applications. Conventional numerical methods require time-consuming domain discretization and solving techniques with limited throughput. Here, we present an efficient data-driven deep neural network (DNN) approach to non-iteratively solving arbitrary 2D linear elastic BVPs. Our results show that a U-Net-based surrogate model trained on a representative set of reference finite-difference method (FDM) solutions can accurately emulate linear elastic material behavior, with manifold applications in deformable modeling and simulation.
Affiliation(s)
- Anto Nivin Maria Antony
- Leibniz Institute of Plant Genetics and Crop Plant Research, OT Gatersleben, Corrensstr. 3, 06466 Seeland, Germany
- Narendra Narisetti
- Leibniz Institute of Plant Genetics and Crop Plant Research, OT Gatersleben, Corrensstr. 3, 06466 Seeland, Germany
- Evgeny Gladilin
- Leibniz Institute of Plant Genetics and Crop Plant Research, OT Gatersleben, Corrensstr. 3, 06466 Seeland, Germany
46
Ahmad N, Dahlberg H, Jönsson H, Tarai S, Guggilla RK, Strand R, Lundström E, Bergström G, Ahlström H, Kullberg J. Voxel-wise body composition analysis using image registration of a three-slice CT imaging protocol: methodology and proof-of-concept studies. Biomed Eng Online 2024; 23:42. [PMID: 38614974] [PMCID: PMC11015680] [DOI: 10.1186/s12938-024-01235-x] [Received: 09/28/2023] [Accepted: 04/02/2024] [Indexed: 04/15/2024]
Abstract
BACKGROUND Computed tomography (CT) is an imaging modality commonly used for studies of internal body structures and very useful for detailed studies of body composition. The aim of this study was to develop and evaluate a fully automatic image registration framework for inter-subject CT slice registration, and to use the results, in a set of proof-of-concept studies, for voxel-wise statistical body composition analysis (Imiomics) of correlations between imaging and non-imaging data. METHODS The study utilized three single-slice CT images of the liver, abdomen, and thigh from two large cohort studies, SCAPIS and IGT. The registration method developed and evaluated used the CT images together with image-derived tissue and organ segmentation masks. To evaluate its performance, a set of baseline three-single-slice CT images (from 2780 subjects, comprising 8285 slices) from the SCAPIS and IGT cohorts was registered. Vector magnitude and intensity magnitude error, indicating inverse consistency, were used for evaluation. The registration results were further used for voxel-wise analysis of associations between the CT images (represented by tissue volume from Hounsfield units and the Jacobian determinant) and explicit measurements of various tissues, fat depots, and organs collected in both cohort studies. RESULTS Our findings demonstrated that the key organs and anatomical structures were registered appropriately. The inverse-consistency measures, vector magnitude and intensity magnitude error, were on average less than 3 mm and 50 Hounsfield units, respectively. Registration followed by Imiomics analysis enabled the examination of associations between various explicit measurements (liver, spleen, abdominal muscle, visceral adipose tissue (VAT), subcutaneous adipose tissue (SAT), thigh SAT, intermuscular adipose tissue (IMAT), and thigh muscle) and the voxel-wise image information. CONCLUSION The developed and evaluated framework allows accurate registration of the collected three single-slice CT images and enables detailed voxel-wise studies of associations between body composition and associated diseases and risk factors.
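Inverse consistency, the criterion behind the vector-magnitude error used here, says that composing the forward and backward displacement fields should return each point to where it started. A toy sketch on a discrete grid with nearest-neighbour sampling (hypothetical field values, not the authors' framework):

```python
import math

def inverse_consistency_error(forward, backward):
    """Mean vector-magnitude inverse-consistency error for 2D fields.

    forward / backward: dicts mapping grid point (x, y) -> displacement
    (dx, dy). The backward displacement is sampled at the nearest grid
    point to x + forward(x); a perfect inverse pair composes to zero.
    """
    errors = []
    for (x, y), (dx, dy) in forward.items():
        # land at the warped position, rounded to the nearest grid point
        wx, wy = round(x + dx), round(y + dy)
        if (wx, wy) not in backward:
            continue  # warped outside the sampled grid
        bx, by = backward[(wx, wy)]
        errors.append(math.hypot(dx + bx, dy + by))
    return sum(errors) / len(errors)

# Toy fields: a uniform shift of (+1, 0) and its exact inverse.
grid = [(x, y) for x in range(4) for y in range(4)]
fwd = {p: (1.0, 0.0) for p in grid}
bwd = {p: (-1.0, 0.0) for p in grid}
print(inverse_consistency_error(fwd, bwd))  # 0.0 for an exact inverse pair
```

Real implementations interpolate the backward field rather than rounding, but the composed-residual idea is identical.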
Affiliation(s)
- Nouman Ahmad
- Radiology, Department of Surgical Sciences, Uppsala University, Uppsala, Sweden
- Hugo Dahlberg
- Radiology, Department of Surgical Sciences, Uppsala University, Uppsala, Sweden
- Hanna Jönsson
- Radiology, Department of Surgical Sciences, Uppsala University, Uppsala, Sweden
- Sambit Tarai
- Radiology, Department of Surgical Sciences, Uppsala University, Uppsala, Sweden
- Robin Strand
- Radiology, Department of Surgical Sciences, Uppsala University, Uppsala, Sweden
- Department of Information Technology, Uppsala University, Uppsala, Sweden
- Elin Lundström
- Radiology, Department of Surgical Sciences, Uppsala University, Uppsala, Sweden
- Göran Bergström
- Department of Molecular and Clinical Medicine, Institute of Medicine, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Department of Clinical Physiology, Sahlgrenska University Hospital, Region Västra Götaland, Gothenburg, Sweden
- Håkan Ahlström
- Radiology, Department of Surgical Sciences, Uppsala University, Uppsala, Sweden
- Antaros Medical, Mölndal, Sweden
- Joel Kullberg
- Radiology, Department of Surgical Sciences, Uppsala University, Uppsala, Sweden
- Antaros Medical, Mölndal, Sweden
47
Murr M, Bernchou U, Bubula-Rehm E, Ruschin M, Sadeghi P, Voet P, Winter JD, Yang J, Younus E, Zachiu C, Zhao Y, Zhong H, Thorwarth D. A multi-institutional comparison of retrospective deformable dose accumulation for online adaptive magnetic resonance-guided radiotherapy. Phys Imaging Radiat Oncol 2024; 30:100588. [PMID: 38883145] [PMCID: PMC11176923] [DOI: 10.1016/j.phro.2024.100588] [Received: 01/16/2024] [Revised: 05/07/2024] [Accepted: 05/08/2024] [Indexed: 06/18/2024]
Abstract
Background and Purpose Application of different deformable dose accumulation (DDA) solutions makes institutional comparisons after online-adaptive magnetic resonance-guided radiotherapy (OA-MRgRT) challenging. The aim of this multi-institutional study was to analyze the accuracy and agreement of DDA implementations in OA-MRgRT. Material and Methods One gold standard (GS) case, deformed with a biomechanical model, and five clinical cases, consisting of prostate (2x), cervix, liver, and lymph node cancer treated with OA-MRgRT, were analyzed. Six centers conducted DDA using institutional implementations. Deformable image registration (DIR) and DDA results were compared using the contour metrics Dice similarity coefficient (DSC), surface-DSC, and Hausdorff distance (HD95%), and accumulated dose-volume histograms (DVHs) analyzed via intraclass correlation coefficient (ICC) and clinical dosimetric criteria (CDC). Results For the GS, median DDA errors ranged from 0.0 to 2.8 Gy across contours and implementations. DIR of the clinical cases resulted in DSC > 0.8 for up to 81.3% of contours, with surface-DSC values varying by implementation. The maximum HD95% of 73.3 mm was found for the duodenum in the liver case. Although DVH ICC > 0.90 was found after DDA for all but two contours, relevant absolute CDC differences were observed in the clinical cases: prostate I/II showed maximum differences in bladder V28Gy (10.2/7.6%), while for cervix, liver, and lymph node the highest differences were found for rectum D2cm3 (2.8 Gy), duodenum Dmax (7.1 Gy), and rectum D0.5cm3 (4.6 Gy). Conclusion Overall, high agreement was found between the different DIR and DDA implementations. Case- and algorithm-dependent differences were observed, leading to potentially clinically relevant results. Larger studies are needed to define future DDA guidelines.
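The clinical dosimetric criteria compared above (e.g. bladder V28Gy, rectum D2cm3) are standard DVH statistics: VxGy is the percentage of a structure receiving at least x Gy, and Dxcm3 is the minimum dose delivered to the hottest x cm³. A minimal sketch with hypothetical per-voxel doses (not any center's DDA software):

```python
def v_dose(doses, threshold_gy):
    """VxGy: percentage of the structure volume receiving >= threshold_gy."""
    return 100.0 * sum(d >= threshold_gy for d in doses) / len(doses)

def d_volume(doses, volume_cm3, voxel_cm3):
    """Dxcm3: minimum dose (Gy) delivered to the hottest x cm^3."""
    n = max(1, round(volume_cm3 / voxel_cm3))  # voxels in the hot volume
    return sorted(doses, reverse=True)[n - 1]

# Hypothetical accumulated bladder doses (Gy), voxel volume 0.5 cm^3.
doses = [12.0, 18.5, 25.0, 27.9, 28.0, 30.2, 31.5, 29.1]
print(v_dose(doses, 28.0))        # 4 of 8 voxels -> 50.0 %
print(d_volume(doses, 2.0, 0.5))  # hottest 4 voxels; 4th highest = 28.0 Gy
```

Because these criteria sit on steep parts of the DVH, small registration differences in the accumulated dose grid can shift them by clinically relevant amounts, which is exactly the effect the study quantifies.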
Affiliation(s)
- Martina Murr
- Section for Biomedical Physics, Department of Radiation Oncology, University of Tübingen, Germany
- Uffe Bernchou
- Department of Clinical Research, University of Southern Denmark, Odense, Denmark
- Laboratory of Radiation Physics, Odense University Hospital, Denmark
- Mark Ruschin
- Department of Radiation Oncology, Odette Cancer Centre, Sunnybrook Health Sciences Centre, University of Toronto, Toronto, Ontario, Canada
- Parisa Sadeghi
- Radiation Medicine Program, Princess Margaret Cancer Centre, University Health Network, Toronto, Ontario, Canada
- Jeff D Winter
- Radiation Medicine Program, Princess Margaret Cancer Centre, University Health Network, Toronto, Ontario, Canada
- Jinzhong Yang
- Department of Radiation Physics, the University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Eyesha Younus
- Department of Radiation Oncology, Odette Cancer Centre, Sunnybrook Health Sciences Centre, University of Toronto, Toronto, Ontario, Canada
- Department of Radiation Oncology, Mayo Clinic, 200 First Street SW, Rochester, MN, 55905, USA
- Cornel Zachiu
- University Medical Centre Utrecht, Department of Radiotherapy, 3584 CX Utrecht, the Netherlands
- Yao Zhao
- Department of Radiation Physics, the University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Hualiang Zhong
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI, USA
- Daniela Thorwarth
- Section for Biomedical Physics, Department of Radiation Oncology, University of Tübingen, Germany
Collapse
48
Yan Z, Ji J, Ma J, Cao W. HGCMorph: joint discontinuity-preserving and pose-learning via GNN and capsule networks for deformable medical images registration. Phys Med Biol 2024; 69:075032. [PMID: 38373349 DOI: 10.1088/1361-6560/ad2a96] [Received: 07/26/2023] [Accepted: 02/19/2024] [Indexed: 02/21/2024]
Abstract
Objective. This study aims to enhance medical image registration by addressing the limitations of existing approaches that rely on spatial transformations through U-Net, ConvNets, or Transformers. The objective is to develop a novel architecture that combines ConvNets, graph neural networks (GNNs), and capsule networks to improve the accuracy and efficiency of medical image registration, and that can also handle rotational registration. Approach. We propose a deep learning-based approach, named HGCMorph, which can be utilized in both unsupervised and semi-supervised manners. It leverages a hybrid framework that integrates ConvNets and GNNs to capture lower-level features, specifically short-range attention, while also utilizing capsule networks (CapsNets) to model abstract higher-level features, including entity properties such as position, size, orientation, deformation, and texture. This hybrid framework aims to provide a comprehensive representation of anatomical structures and their spatial relationships in medical images. Main results. The results demonstrate the superiority of HGCMorph over existing state-of-the-art (SOTA) deep learning-based methods in both qualitative and quantitative evaluations. In unsupervised training, our model outperforms the recent SOTA method TransMorph, achieving a 7%/38% increase in Dice score coefficient (DSC) and a 2%/7% improvement in negative Jacobian determinant on the OASIS and LPBA40 datasets, respectively. Furthermore, HGCMorph achieves improved registration accuracy in semi-supervised training. In addition, when dealing with complex 3D rotations and secondary random deformations, our method still achieves the best performance. We also tested our method on lung datasets, such as the Japanese Society of Radiological Technology (JSRT), Montgomery, and Shenzhen datasets. Significance. The significance lies in the innovative design for medical image registration.
HGCMorph offers a novel framework that overcomes the limitations of existing methods by efficiently capturing both local and abstract features, leading to enhanced registration accuracy and discontinuity-preserving and pose-learning abilities. The incorporation of capsule networks introduces valuable improvements, making the proposed method a valuable contribution to the field of medical image analysis. HGCMorph not only advances the SOTA methods but also has the potential to improve various medical applications that rely on accurate image registration.
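The negative Jacobian determinant reported in this abstract is a standard measure of folding in a predicted deformation field: the fraction of voxels where the transform phi(x) = x + u(x) locally inverts orientation. A minimal NumPy sketch of that metric (the function name is hypothetical; this is not the paper's code) is:

```python
import numpy as np

def neg_jacobian_fraction(disp):
    """Fraction of voxels where det(I + grad(u)) < 0, i.e. where the
    deformation phi(x) = x + u(x) folds. disp has shape (3, D, H, W),
    in voxel units."""
    # grads[i, j] = d u_i / d x_j, each of shape (D, H, W)
    grads = np.stack([np.stack(np.gradient(disp[i]), axis=0) for i in range(3)])
    # rearrange to (D, H, W, 3, 3) and add the identity at every voxel
    jac = grads.transpose(2, 3, 4, 0, 1) + np.eye(3)
    det = np.linalg.det(jac)          # stacked determinants, shape (D, H, W)
    return float((det < 0).mean())

# identity transform: no folding anywhere
print(neg_jacobian_fraction(np.zeros((3, 4, 4, 4))))   # 0.0

# a reflection along x (u_x = -2x gives phi_x = -x) folds everywhere
disp = np.zeros((3, 4, 4, 4))
disp[0] = -2.0 * np.arange(4)[:, None, None]
print(neg_jacobian_fraction(disp))   # 1.0
```

A lower fraction means a smoother, more physically plausible registration, which is why improvements on this metric are reported alongside DSC.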
Affiliation(s)
- Zhiyue Yan
- State Key Laboratory of Radio Frequency Heterogeneous Integration, Shenzhen University, Shenzhen 518060, Guangdong Province, People's Republic of China
- Jianhua Ji
- State Key Laboratory of Radio Frequency Heterogeneous Integration, Shenzhen University, Shenzhen 518060, Guangdong Province, People's Republic of China
- Jia Ma
- The Second People's Hospital of Futian District, Shenzhen 518049, Guangdong Province, People's Republic of China
- Wenming Cao
- State Key Laboratory of Radio Frequency Heterogeneous Integration, Shenzhen University, Shenzhen 518060, Guangdong Province, People's Republic of China
49
Wu Y, Wang Z, Chu Y, Peng R, Peng H, Yang H, Guo K, Zhang J. Current Research Status of Respiratory Motion for Thorax and Abdominal Treatment: A Systematic Review. Biomimetics (Basel) 2024; 9:170. [PMID: 38534855 DOI: 10.3390/biomimetics9030170] [Received: 01/22/2024] [Revised: 02/29/2024] [Accepted: 03/09/2024] [Indexed: 03/28/2024]
Abstract
Malignant tumors have become one of the most serious public health problems threatening human safety and health, among which chest and abdominal diseases account for the largest proportion. Early diagnosis and treatment can effectively improve the survival rate of patients. However, respiratory motion in the chest and abdomen leads to uncertainty in the shape, volume, and location of a tumor, making treatment of the chest and abdomen difficult. Therefore, compensation for respiratory motion is very important in clinical treatment. The purpose of this review is to discuss the research and development of respiratory motion monitoring and prediction in thoracic and abdominal surgery and to introduce the current research status. The integration of modern respiratory motion compensation technology with advanced sensor detection technology, medical-image-guided therapy, and artificial intelligence is discussed and analyzed. Future research on intraoperative thoracic and abdominal respiratory motion compensation should move toward non-invasive, non-contact, low-dose, and intelligent solutions. The complexity of the surgical environment, the constraints on the accuracy of existing image guidance devices, and the latency of data transmission all remain technical challenges.
Affiliation(s)
- Yuwen Wu
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Zhisen Wang
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China
- Yuyi Chu
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Renyuan Peng
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China
- Haoran Peng
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China
- Hongbo Yang
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China
- Kai Guo
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China
- Juzhong Zhang
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China
50
Vinodkumar PK, Karabulut D, Avots E, Ozcinar C, Anbarjafari G. Deep Learning for 3D Reconstruction, Augmentation, and Registration: A Review Paper. Entropy (Basel) 2024; 26:235. [PMID: 38539747 PMCID: PMC10968962 DOI: 10.3390/e26030235] [Received: 11/13/2023] [Revised: 03/01/2024] [Accepted: 03/05/2024] [Indexed: 11/11/2024]
Abstract
Research groups in computer vision, graphics, and machine learning have dedicated substantial attention to 3D object reconstruction, augmentation, and registration. Deep learning is the predominant method used in artificial intelligence for addressing computer vision challenges. However, deep learning on three-dimensional data presents distinct obstacles and is still in its nascent phase. There have been significant advancements in deep learning specifically for three-dimensional data, offering a range of ways to address these issues. This study offers a comprehensive examination of the latest advancements in deep learning methodologies. We examine many benchmark models for the tasks of 3D object registration, augmentation, and reconstruction, and thoroughly analyse their architectures, advantages, and constraints. In summary, this report provides a comprehensive overview of recent advancements in three-dimensional deep learning and highlights unresolved research areas that will need to be addressed in the future.
Affiliation(s)
- Prasoon Kumar Vinodkumar
- iCV Lab, Institute of Technology, University of Tartu, 50090 Tartu, Estonia
- Dogus Karabulut
- iCV Lab, Institute of Technology, University of Tartu, 50090 Tartu, Estonia
- Egils Avots
- iCV Lab, Institute of Technology, University of Tartu, 50090 Tartu, Estonia
- Cagri Ozcinar
- iCV Lab, Institute of Technology, University of Tartu, 50090 Tartu, Estonia
- Gholamreza Anbarjafari
- iCV Lab, Institute of Technology, University of Tartu, 50090 Tartu, Estonia
- PwC Advisory, 00180 Helsinki, Finland
- iVCV OÜ, 51011 Tartu, Estonia
- Institute of Higher Education, Yildiz Technical University, Beşiktaş, Istanbul 34349, Turkey