1
Li Y, Li H, Chen W, O’Riordan K, Mani N, Qi Y, Liu T, Mani S, Ozcan A. Deep learning-based detection of bacterial swarm motion using a single image. Gut Microbes 2025; 17:2505115. PMID: 40366861; PMCID: PMC12080278; DOI: 10.1080/19490976.2025.2505115.
Abstract
Motility is a fundamental characteristic of bacteria. Distinguishing between swarming and swimming, the two principal forms of bacterial movement, holds significant conceptual and clinical relevance. Conventionally, the detection of bacterial swarming involves inoculating samples on an agar surface and observing colony expansion, which is qualitative, time-intensive, and requires additional testing to rule out other motility forms. A recent methodology that differentiates swarming and swimming motility in bacteria using circular confinement offers a rapid approach to detecting swarming. However, it still heavily depends on the observer's expertise, making the process labor-intensive, costly, slow, and susceptible to inevitable human bias. To address these limitations, we developed a deep learning-based swarming classifier that rapidly and autonomously predicts swarming probability using a single blurry image. Compared with traditional video-based, manually processed approaches, our method is particularly suited for high-throughput environments and provides objective, quantitative assessments of swarming probability. The swarming classifier demonstrated in our work was trained on Enterobacter sp. SM3 and showed good performance when blindly tested on new swarming (positive) and swimming (negative) test images of SM3, achieving a sensitivity of 97.44% and a specificity of 100%. Furthermore, this classifier demonstrated robust external generalization capabilities when applied to unseen bacterial species, such as Serratia marcescens DB10 and Citrobacter koseri H6. This competitive performance indicates the potential to adapt our approach for diagnostic applications through portable devices, which would facilitate rapid, objective, on-site screening for bacterial swarming motility, potentially enhancing the early detection and treatment assessment of various diseases, including inflammatory bowel diseases (IBD) and urinary tract infections (UTI).
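The abstract does not specify the network architecture, so the following is only a minimal sketch of the general idea it describes: a small convolutional classifier that maps a single grayscale image to a swarming probability. The layer sizes, input resolution, and names are illustrative assumptions, not the authors' model.

```python
# Minimal sketch (not the authors' network): a small CNN that maps a single
# grayscale image of a confined bacterial sample to a swarming probability.
import torch
import torch.nn as nn

class SwarmClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),            # global pooling -> 64 features
        )
        self.head = nn.Linear(64, 1)

    def forward(self, x):
        z = self.features(x).flatten(1)
        return torch.sigmoid(self.head(z))      # swarming probability in [0, 1]

model = SwarmClassifier()
image = torch.rand(1, 1, 256, 256)              # hypothetical 256x256 single image
print(f"swarming probability: {model(image).item():.2f}")
```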
Affiliation(s)
- Yuzhu Li
  - Electrical and Computer Engineering Department, University of California, Los Angeles, CA, USA
  - Bioengineering Department, University of California, Los Angeles, CA, USA
  - California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
- Hao Li
  - Department of Medicine, Genetics and Molecular Pharmacology, Albert Einstein College of Medicine, Bronx, NY, USA
- Weijie Chen
  - Department of Medicine, Genetics and Molecular Pharmacology, Albert Einstein College of Medicine, Bronx, NY, USA
- Keelan O’Riordan
  - Electrical and Computer Engineering Department, University of California, Los Angeles, CA, USA
  - Department of Physics and Astronomy, University of California, Los Angeles, CA, USA
- Neha Mani
  - Department of Biochemistry and Molecular Biophysics, Columbia University, New York, NY, USA
- Yuxuan Qi
  - Electrical and Computer Engineering Department, University of California, Los Angeles, CA, USA
  - Department of Computer Science, University of California, Los Angeles, CA, USA
- Tairan Liu
  - Electrical and Computer Engineering Department, University of California, Los Angeles, CA, USA
  - Bioengineering Department, University of California, Los Angeles, CA, USA
  - California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
- Sridhar Mani
  - Department of Medicine, Genetics and Molecular Pharmacology, Albert Einstein College of Medicine, Bronx, NY, USA
- Aydogan Ozcan
  - Electrical and Computer Engineering Department, University of California, Los Angeles, CA, USA
  - Bioengineering Department, University of California, Los Angeles, CA, USA
  - California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
  - Department of Surgery, University of California, Los Angeles, CA, USA
2
Rolfe SM, Mao D, Maga AM. Streamlining asymmetry quantification in fetal mouse imaging: A semi-automated pipeline supported by expert guidance. Dev Dyn 2025. PMID: 40421888; DOI: 10.1002/dvdy.70028.
Abstract
BACKGROUND Asymmetry is a key feature of numerous developmental disorders and, in phenotypic screens, is often used as a readout for environmental or genetic perturbations. A better understanding of the genetic basis of asymmetry and its relationship to disease susceptibility will help unravel the complex genetic and environmental factors, and their interactions, that increase risk in a range of developmental disorders. Large-scale imaging datasets offer opportunities to work with the sample sizes necessary to detect and quantify differences in morphology beyond severe deformities, but they also pose challenges for manual phenotyping protocols. RESULTS We introduce a tool for quantifying asymmetry in 3D images and apply it to explore the role of genes contributing to abnormal asymmetry by deep phenotyping 3D fetal microCT images from knockout strains acquired as part of the Knockout Mouse Phenotyping Program. Four knockout strains (Ccdc186, Acvr2a, Nhlh1, and Fam20c) were identified with highly significant asymmetry in craniofacial regions, making them good candidates for further analysis. CONCLUSION In this work, we demonstrate an open-source, semi-automated tool that quantifies the asymmetry of craniofacial structures and integrates expert anatomical knowledge. This tool can detect abnormally asymmetric phenotypes in fetal mice to explore the relationship between facial asymmetry, perturbed development, and developmental instability.
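The abstract does not spell out how the asymmetry score itself is computed; the following is only an illustrative sketch of one common landmark-based approach (reflect paired landmarks across the midsagittal plane and average the residual distances), with made-up coordinates and a hypothetical function name.

```python
# Illustrative sketch only (not the published pipeline): score craniofacial
# asymmetry from paired 3D landmarks by reflecting the left-side points across
# the midsagittal (x = 0) plane and measuring their distance to the right side.
import numpy as np

def asymmetry_score(left_pts: np.ndarray, right_pts: np.ndarray) -> float:
    """left_pts, right_pts: (N, 3) arrays of paired landmarks in a
    specimen-aligned coordinate system with x = 0 as the midplane."""
    mirrored = left_pts * np.array([-1.0, 1.0, 1.0])     # reflect across x = 0
    return float(np.mean(np.linalg.norm(mirrored - right_pts, axis=1)))

left = np.array([[3.1, 1.0, 0.5], [2.8, 2.2, 1.1]])      # toy landmarks (mm)
right = np.array([[-3.0, 1.1, 0.5], [-2.9, 2.0, 1.2]])
print(asymmetry_score(left, right))
```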
Affiliation(s)
- S M Rolfe
  - Center for Developmental Biology and Regenerative Medicine, Seattle Children's Research Institute, Seattle, Washington, USA
- D Mao
  - Department of Pediatrics, University of Washington, Seattle, Washington, USA
- A M Maga
  - Center for Developmental Biology and Regenerative Medicine, Seattle Children's Research Institute, Seattle, Washington, USA
  - Department of Pediatrics, University of Washington, Seattle, Washington, USA
3
Li C, Sultan RI, Bagher-Ebadian H, Qiang Y, Thind K, Zhu D, Chetty IJ. Enhancing CT image segmentation accuracy through ensemble loss function optimization. Med Phys 2025. PMID: 40275531; DOI: 10.1002/mp.17848.
Abstract
BACKGROUND In CT-based medical image segmentation, the choice of loss function profoundly impacts the training efficacy of deep neural networks. Traditional loss functions like cross entropy (CE), Dice, Boundary, and TopK each have unique strengths and limitations, often introducing biases when used individually. PURPOSE This study aims to enhance segmentation accuracy by optimizing ensemble loss functions, thereby addressing the biases and limitations of single loss functions and their linear combinations. METHODS We implemented a comprehensive evaluation of loss function combinations by integrating CE, Dice, Boundary, and TopK loss functions through both loss-level linear combination and model-level ensemble methods. Our approach utilized two state-of-the-art 3D segmentation architectures, Attention U-Net (AttUNet) and SwinUNETR, to test the impact of these methods. The study was conducted on two large CT dataset cohorts: an institutional dataset containing pelvic organ segmentations, and a public dataset consisting of multiple organ segmentations. All the models were trained from scratch with different loss settings, and performance was evaluated using Dice similarity coefficient (DSC), Hausdorff distance (HD), and average surface distance (ASD). In the ensemble approach, both static averaging and learnable dynamic weighting strategies were employed to combine the outputs of models trained with different loss functions. RESULTS Extensive experiments revealed the following: (1) the linear combination of loss functions achieved results comparable to those of single loss-driven methods; (2) compared to the best non-ensemble methods, ensemble-based approaches resulted in a 2%-7% increase in DSC scores, along with notable reductions in HD (e.g., a 19.1% reduction for rectum segmentation using SwinUNETR) and ASD (e.g., a 49.0% reduction for prostate segmentation using AttUNet); (3) the learnable ensemble approach with optimized weights produced finer details in predicted masks, as confirmed by qualitative analyses; and (4) the learnable ensemble consistently outperforms the static ensemble across most metrics (DSC, HD, ASD) for both AttUNet and SwinUNETR architectures. CONCLUSIONS Our findings support the efficacy of using ensemble models with optimized weights to improve segmentation accuracy, highlighting the potential for broader applications in automated medical image analysis.
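As a concrete illustration of the two strategies compared above, the sketch below shows (i) a loss-level linear combination of cross entropy and a soft Dice term and (ii) a model-level ensemble that fuses probability maps from separately trained models with learnable weights. The Boundary and TopK losses and the paper's exact weighting scheme are not reproduced; everything here is an assumption-laden PyTorch sketch.

```python
# Sketch of (i) a linear loss combination and (ii) a learnable output-level ensemble.
import torch
import torch.nn as nn
import torch.nn.functional as F

def soft_dice_loss(logits, target_onehot, eps=1e-6):
    probs = torch.softmax(logits, dim=1)
    dims = tuple(range(2, logits.ndim))                    # spatial dims
    inter = (probs * target_onehot).sum(dims)
    denom = probs.sum(dims) + target_onehot.sum(dims)
    return 1.0 - ((2 * inter + eps) / (denom + eps)).mean()

def combined_loss(logits, target, w_ce=0.5, w_dice=0.5):
    onehot = F.one_hot(target, logits.shape[1]).movedim(-1, 1).float()
    return w_ce * F.cross_entropy(logits, target) + w_dice * soft_dice_loss(logits, onehot)

class LearnableEnsemble(nn.Module):
    """Fuse softmax outputs of models trained with different loss functions."""
    def __init__(self, n_models):
        super().__init__()
        self.w = nn.Parameter(torch.zeros(n_models))       # learnable fusion weights

    def forward(self, prob_maps):                          # list of (B, C, D, H, W) tensors
        weights = torch.softmax(self.w, dim=0)
        return sum(w * p for w, p in zip(weights, prob_maps))

logits = torch.randn(2, 4, 16, 16, 16)                     # toy 3D prediction
target = torch.randint(0, 4, (2, 16, 16, 16))              # toy label map
print(combined_loss(logits, target).item())
```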
Affiliation(s)
- Chengyin Li
  - Department of Computer Science, Wayne State University, Detroit, Michigan, USA
  - Department of Radiation Oncology, Henry Ford Health, Detroit, Michigan, USA
- Rafi Ibn Sultan
  - Department of Computer Science, Wayne State University, Detroit, Michigan, USA
- Hassan Bagher-Ebadian
  - Department of Radiation Oncology, Henry Ford Health, Detroit, Michigan, USA
  - Department of Radiology, Michigan State University, East Lansing, Michigan, USA
  - Department of Osteopathic Medicine, Michigan State University, East Lansing, Michigan, USA
  - Department of Physics, Oakland University, Rochester, Michigan, USA
- Yao Qiang
  - Department of Computer Science, Wayne State University, Detroit, Michigan, USA
- Kundan Thind
  - Department of Radiation Oncology, Henry Ford Health, Detroit, Michigan, USA
- Dongxiao Zhu
  - Department of Computer Science, Wayne State University, Detroit, Michigan, USA
- Indrin J Chetty
  - Department of Radiation Oncology, Cedars-Sinai Medical Center, Los Angeles, California, USA
4
Kimura T, Takiguchi K, Tsukita S, Muto M, Chiba H, Sato N, Kofunato Y, Ishigame T, Kenjo A, Tanaka H, Marubashi S. Development of anatomically accurate digital organ models for surgical simulation and training. PLoS One 2025; 20:e0320816. PMID: 40203219; PMCID: PMC11981654; DOI: 10.1371/journal.pone.0320816.
Abstract
Advancements in robotics and other technological innovations have accelerated the development of surgical procedures, increasing the demand for training environments that accurately replicate human anatomy. This study developed a system that utilizes the AutoSegmentator extension of 3D Slicer, based on nnU-Net, a state-of-the-art deep learning framework for automatic organ extraction, to import automatically extracted organ surface data into CAD software along with original DICOM-derived images. This system allows medical experts to manually refine the automatically extracted data, making it more accurate and closer to the ideal dataset. First, Python programming is used to automatically generate and save JPEG-format image data from DICOM data for display in Blender. Next, DICOM data imported into 3D Slicer is processed by AutoSegmentator to extract surface data of 104 organs in bulk, which is then exported in STL format. In Blender, a custom-developed Python script aligns the image data and organ surface data within the same 3D space, ensuring accurate spatial coordinates. By using Blender's CAD functionality within this space, the automatically extracted organ boundaries can be manually adjusted based on the image data, resulting in more precise organ surface data. Additionally, organs and blood vessels that cannot be automatically extracted can be newly created and added by referencing the image data. Through this process, a comprehensive anatomical dataset encompassing all required organs and blood vessels can be constructed. The dataset created with this system is easily customizable and can be applied to various surgical simulations, including 3D-printed simulators, hybrid simulators that incorporate animal organs, and surgical simulators utilizing augmented reality (AR). Furthermore, this system is built entirely using open-source, free software, providing high reproducibility, flexibility, and accessibility. By using this system, medical professionals can actively participate in the design and data processing of surgical simulation systems, leading to shorter development times and reduced costs.
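The first step described above, exporting JPEG images from DICOM data for display alongside the STL surfaces in Blender, might look roughly like the sketch below. The file layout, intensity rescaling, and function name are hypothetical and are not taken from the authors' script.

```python
# Illustrative sketch: export each DICOM slice as an 8-bit JPEG.
from pathlib import Path
import numpy as np
import pydicom
from PIL import Image

def dicom_series_to_jpegs(dicom_dir: str, out_dir: str) -> None:
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for path in sorted(Path(dicom_dir).glob("*.dcm")):
        ds = pydicom.dcmread(str(path))
        img = ds.pixel_array.astype(np.float32)
        img = (img - img.min()) / max(np.ptp(img), 1e-6) * 255.0   # simple min-max rescale
        Image.fromarray(img.astype(np.uint8)).save(out / f"{path.stem}.jpg")

dicom_series_to_jpegs("ct_series/", "jpeg_slices/")    # hypothetical paths
```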
Affiliation(s)
- Takashi Kimura
  - Department of Hepato-Biliary-Pancreatic and Transplant Surgery, Fukushima Medical University, Fukushima-city, Fukushima, Japan
- Kazuaki Takiguchi
  - Department of Pediatric Surgery, Fukushima Medical University, Fukushima-city, Fukushima, Japan
- Shigeyuki Tsukita
  - Department of Hepato-Biliary-Pancreatic and Transplant Surgery, Fukushima Medical University, Fukushima-city, Fukushima, Japan
- Makoto Muto
  - Department of Hepato-Biliary-Pancreatic and Transplant Surgery, Fukushima Medical University, Fukushima-city, Fukushima, Japan
- Hiroto Chiba
  - Department of Hepato-Biliary-Pancreatic and Transplant Surgery, Fukushima Medical University, Fukushima-city, Fukushima, Japan
- Naoya Sato
  - Department of Hepato-Biliary-Pancreatic and Transplant Surgery, Fukushima Medical University, Fukushima-city, Fukushima, Japan
- Yasuhide Kofunato
  - Department of Hepato-Biliary-Pancreatic and Transplant Surgery, Fukushima Medical University, Fukushima-city, Fukushima, Japan
- Teruhide Ishigame
  - Department of Hepato-Biliary-Pancreatic and Transplant Surgery, Fukushima Medical University, Fukushima-city, Fukushima, Japan
- Akira Kenjo
  - Department of Hepato-Biliary-Pancreatic and Transplant Surgery, Fukushima Medical University, Fukushima-city, Fukushima, Japan
- Hideaki Tanaka
  - Department of Pediatric Surgery, Fukushima Medical University, Fukushima-city, Fukushima, Japan
- Shigeru Marubashi
  - Department of Hepato-Biliary-Pancreatic and Transplant Surgery, Fukushima Medical University, Fukushima-city, Fukushima, Japan
5
Tkachev S, Brosalov V, Kit O, Maksimov A, Goncharova A, Sadyrin E, Dalina A, Popova E, Osipenko A, Voloshin M, Karnaukhov N, Timashev P. Unveiling Another Dimension: Advanced Visualization of Cancer Invasion and Metastasis via Micro-CT Imaging. Cancers (Basel) 2025; 17:1139. PMID: 40227647; PMCID: PMC11988112; DOI: 10.3390/cancers17071139.
Abstract
Invasion and metastasis are well-known hallmarks of cancer, with metastatic disease accounting for 60% to 90% of cancer-related deaths [...].
Affiliation(s)
- Sergey Tkachev
  - Institute for Regenerative Medicine, Sechenov University, 119992 Moscow, Russia
- Oleg Kit
  - National Medical Research Centre for Oncology, 344037 Rostov-on-Don, Russia
- Alexey Maksimov
  - National Medical Research Centre for Oncology, 344037 Rostov-on-Don, Russia
- Anna Goncharova
  - National Medical Research Centre for Oncology, 344037 Rostov-on-Don, Russia
- Evgeniy Sadyrin
  - Laboratory of Mechanics of Biocompatible Materials, Don State Technical University, 344003 Rostov-on-Don, Russia
- Alexandra Dalina
  - Center for Precision Genome Editing and Genetic Technologies for Biomedicine, Engelhardt Institute of Molecular Biology, Russian Academy of Sciences, 119334 Moscow, Russia
- Elena Popova
  - Federal Research and Clinical Center of Specialized Medical Care and Medical Technologies, 115682 Moscow, Russia
- Anton Osipenko
  - Department of Pharmacology, Siberian State Medical University, 634050 Tomsk, Russia
- Mark Voloshin
  - A.S. Loginov Moscow Clinical Scientific Center, 111123 Moscow, Russia
- Nikolay Karnaukhov
  - A.S. Loginov Moscow Clinical Scientific Center, 111123 Moscow, Russia
  - Institute of Clinical Morphology and Digital Pathology, Sechenov University, 119991 Moscow, Russia
- Peter Timashev
  - Institute for Regenerative Medicine, Sechenov University, 119992 Moscow, Russia
6
Lagzouli A, Pivonka P, Cooper DML, Sansalone V, Othmani A. A robust deep learning approach for segmenting cortical and trabecular bone from 3D high resolution µCT scans of mouse bone. Sci Rep 2025; 15:8656. PMID: 40082604; PMCID: PMC11906900; DOI: 10.1038/s41598-025-92954-1.
Abstract
Recent advancements in deep learning have significantly enhanced the segmentation of high-resolution microcomputed tomography (µCT) bone scans. In this paper, we present the dual-branch attention-based hybrid network (DBAHNet), a deep learning architecture designed for automatically segmenting the cortical and trabecular compartments in 3D µCT scans of mouse tibiae. DBAHNet's hierarchical structure combines transformers and convolutional neural networks to capture long-range dependencies and local features for improved contextual representation. We trained DBAHNet on a limited dataset of 3D µCT scans of mouse tibiae and evaluated its performance on a diverse dataset collected from seven different research studies. This evaluation covered variations in resolutions, ages, mouse strains, drug treatments, surgical procedures, and mechanical loading. DBAHNet demonstrated excellent performance, achieving high accuracy, particularly in challenging scenarios with significantly altered bone morphology. The model's robustness and generalization capabilities were rigorously tested under diverse and unseen conditions, confirming its effectiveness in the automated segmentation of high-resolution µCT mouse tibia scans. Our findings highlight DBAHNet's potential to provide reliable and accurate 3D µCT mouse tibia segmentation, thereby enhancing and accelerating preclinical bone studies in drug development. The model and code are available at https://github.com/bigfahma/DBAHNet.
Affiliation(s)
- Amine Lagzouli
  - School of Mechanical, Medical, and Process Engineering, Queensland University of Technology, Brisbane, Australia
  - Univ Paris Est Créteil, Univ Gustave Eiffel, CNRS, UMR 8208, MSME, F-94010 Créteil, France
- Peter Pivonka
  - School of Mechanical, Medical, and Process Engineering, Queensland University of Technology, Brisbane, Australia
- David M L Cooper
  - Department of Anatomy, Physiology, and Pharmacology, University of Saskatchewan, Saskatoon, SK, Canada
- Vittorio Sansalone
  - Univ Paris Est Créteil, Univ Gustave Eiffel, CNRS, UMR 8208, MSME, F-94010 Créteil, France
- Alice Othmani
  - LISSI, Université Paris-Est Créteil (UPEC), 94400 Vitry-sur-Seine, France
7
Keshavarz P, Nezami N, Yazdanpanah F, Khojaste-Sarakhsi M, Mohammadigoldar Z, Azami M, Hajati A, Ebrahimian Sadabad F, Chiang J, McWilliams JP, Lu DSK, Raman SS. Prediction of treatment response and outcome of transarterial chemoembolization in patients with hepatocellular carcinoma using artificial intelligence: A systematic review of efficacy. Eur J Radiol 2025; 184:111948. PMID: 39892373; DOI: 10.1016/j.ejrad.2025.111948.
Abstract
PURPOSE To perform a systematic literature review of the efficacy of different AI models in predicting HCC treatment response to transarterial chemoembolization (TACE), including overall survival (OS) and time to progression (TTP). METHODS This systematic review was performed according to the Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) guidelines through May 2, 2024. RESULTS The systematic review included 23 studies with 4,486 HCC patients. The AI algorithm receiver operating characteristic (ROC) area under the curve (AUC) for predicting HCC response to TACE based on mRECIST criteria ranged from 0.55 to 0.97. Radiomics-based models outperformed non-radiomics models (AUC: 0.79, 95% CI: 0.75-0.82 vs. 0.73, 95% CI: 0.61-0.77, respectively). The best-performing ML methods for predicting TACE response in HCC patients were CNN, GB, SVM, and RF, with AUCs of 0.88 (0.79-0.97), 0.82 (0.71-0.89), 0.80 (0.60-0.87), and 0.80 (0.55-0.96), respectively. Of all predictive feature models, those combining clinico-radiologic features (ALBI grade, BCLC stage, AFP level, tumor diameter, distribution, and peritumoral arterial enhancement) had higher AUCs than models based on clinical characteristics alone (0.79, 0.73-0.89, p = 0.04 for CT + clinical features and 0.81, 0.75-0.88, p = 0.017 for MRI + clinical features, versus 0.60, 0.55-0.75 for clinical characteristics alone). CONCLUSION Integrating clinico-radiologic features enhances AI models' predictive performance for HCC patient response to TACE, with CNN, GB, SVM, and RF methods outperforming others. Key predictive clinico-radiologic features include ALBI grade, BCLC stage, AFP level, tumor diameter, distribution, and peritumoral arterial enhancement. Multi-institutional studies are needed to improve AI model accuracy, address heterogeneity, and resolve validation issues.
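The classic model families compared in the review (RF, GB, SVM; CNNs aside) can be benchmarked with standard tooling. The sketch below shows the general ROC-AUC comparison workflow on synthetic placeholder features; it does not use any of the reviewed studies' data or pipelines.

```python
# Hedged sketch: compare ROC AUCs of a few classic classifiers on toy features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))          # placeholder clinico-radiologic features
y = rng.integers(0, 2, size=200)       # toy responder / non-responder labels

models = {
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "GB": GradientBoostingClassifier(random_state=0),
    "SVM": SVC(probability=True, random_state=0),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean ROC AUC = {auc:.2f}")
```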
Affiliation(s)
- Pedram Keshavarz
  - Department of Radiological Sciences, David Geffen School of Medicine at The University of California, Los Angeles (UCLA), Los Angeles, CA, USA
- Nariman Nezami
  - Department of Radiology, MedStar Georgetown University Hospital, Washington, DC 20007, USA
  - Georgetown University School of Medicine, Washington, DC 20007, USA
  - Lombardi Comprehensive Cancer Center, Washington, DC 20007, USA
- Zahra Mohammadigoldar
  - Department of Radiological Sciences, David Geffen School of Medicine at The University of California, Los Angeles (UCLA), Los Angeles, CA, USA
- Mobin Azami
  - Department of Diagnostic & Interventional Radiology, New Hospitals Ltd., Tbilisi 0114, Georgia
- Azadeh Hajati
  - Department of Radiology, Division of Abdominal Imaging, Harvard Medical School, Boston, MA 02114, USA
- Jason Chiang
  - Department of Radiological Sciences, David Geffen School of Medicine at The University of California, Los Angeles (UCLA), Los Angeles, CA, USA
- Justin P McWilliams
  - Department of Radiological Sciences, David Geffen School of Medicine at The University of California, Los Angeles (UCLA), Los Angeles, CA, USA
- David S K Lu
  - Department of Radiological Sciences, David Geffen School of Medicine at The University of California, Los Angeles (UCLA), Los Angeles, CA, USA
- Steven S Raman
  - Department of Radiological Sciences, David Geffen School of Medicine at The University of California, Los Angeles (UCLA), Los Angeles, CA, USA
8
Jiang L, Xu D, Xu Q, Chatziioannou A, Iwamoto KS, Hui S, Sheng K. Robust Automated Mouse Micro-CT Segmentation Using Swin UNEt TRansformers. Bioengineering (Basel) 2024; 11:1255. PMID: 39768073; PMCID: PMC11673508; DOI: 10.3390/bioengineering11121255.
Abstract
Image-guided mouse irradiation is essential to understand interventions involving radiation prior to human studies. Our objective is to employ Swin UNEt TRansformers (Swin UNETR) to segment native micro-CT and contrast-enhanced micro-CT scans and benchmark the results against 3D no-new-Net (nnU-Net). Swin UNETR reformulates mouse organ segmentation as a sequence-to-sequence prediction task using a hierarchical Swin Transformer encoder to extract features at five resolution levels, and it connects to a Fully Convolutional Neural Network (FCNN)-based decoder via skip connections. The models were trained and evaluated on open datasets, with data separation based on individual mice. Further evaluation on an external mouse dataset acquired on a different micro-CT with lower kVp and higher imaging noise was also employed to assess model robustness and generalizability. The results indicate that Swin UNETR consistently outperforms nnU-Net and AIMOS in terms of the average dice similarity coefficient (DSC) and the Hausdorff distance (HD95p), except in two mice for intestine contouring. This superior performance is especially evident in the external dataset, confirming the model's robustness to variations in imaging conditions, including noise and quality, and thereby positioning Swin UNETR as a highly generalizable and efficient tool for automated contouring in pre-clinical workflows.
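For readers reproducing this kind of benchmark, the two reported metrics can be computed with MONAI roughly as follows. This is a sketch assuming MONAI is installed and that predictions and labels are one-hot tensors; the shapes and values are toy placeholders, not the authors' evaluation pipeline.

```python
# Sketch of DSC and 95th-percentile Hausdorff distance with MONAI metrics,
# using toy one-hot volumes (background + one organ).
import torch
from monai.metrics import DiceMetric, HausdorffDistanceMetric

pred_fg = torch.zeros(1, 1, 64, 64, 64)
gt_fg = torch.zeros(1, 1, 64, 64, 64)
pred_fg[..., 20:40, 20:40, 20:40] = 1            # predicted organ block
gt_fg[..., 22:42, 20:40, 20:40] = 1              # ground-truth block, shifted 2 voxels

pred = torch.cat([1 - pred_fg, pred_fg], dim=1)  # one-hot (B, C=2, D, H, W)
gt = torch.cat([1 - gt_fg, gt_fg], dim=1)

dsc = DiceMetric(include_background=False)(pred, gt)
hd95 = HausdorffDistanceMetric(include_background=False, percentile=95)(pred, gt)
print(f"DSC = {dsc.mean().item():.3f}, HD95 = {hd95.mean().item():.1f} voxels")
```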
Affiliation(s)
- Lu Jiang
  - Department of Radiation Oncology, University of California San Francisco, San Francisco, CA 94115, USA
- Di Xu
  - Department of Radiation Oncology, University of California San Francisco, San Francisco, CA 94115, USA
- Qifan Xu
  - Department of Radiation Oncology, University of California San Francisco, San Francisco, CA 94115, USA
- Arion Chatziioannou
  - Department of Molecular and Medical Pharmacology, University of California Los Angeles, Los Angeles, CA 90095, USA
- Keisuke S. Iwamoto
  - Department of Radiation Oncology, University of California Los Angeles, Los Angeles, CA 90095, USA
- Susanta Hui
  - Department of Radiation Oncology, City of Hope, Duarte, CA 91010, USA
- Ke Sheng
  - Department of Radiation Oncology, University of California San Francisco, San Francisco, CA 94115, USA
9
Pregowska A, Roszkiewicz A, Osial M, Giersig M. How scanning probe microscopy can be supported by artificial intelligence and quantum computing? Microsc Res Tech 2024; 87:2515-2539. PMID: 38864463; DOI: 10.1002/jemt.24629.
Abstract
The impact of Artificial Intelligence (AI) is rapidly expanding, revolutionizing both science and society. It is applied to practically all areas of life, science, and technology, including materials science, which continuously requires novel tools for effective materials characterization. One of the widely used techniques is scanning probe microscopy (SPM). SPM has fundamentally changed materials engineering, biology, and chemistry by providing tools for atomic-precision surface mapping. Despite its many advantages, it also has some drawbacks, such as long scanning times or the possibility of damaging soft-surface materials. In this paper, we focus on the potential for supporting SPM-based measurements, with an emphasis on the application of AI-based algorithms, especially Machine Learning-based algorithms, as well as quantum computing (QC). It has been found that AI can be helpful in automating experimental processes in routine operations, algorithmically searching for optimal sample regions, and elucidating structure-property relationships. Thus, it contributes to increasing the efficiency and accuracy of optical nanoscopy scanning probes. Moreover, the combination of AI-based algorithms and QC may have enormous potential to enhance the practical application of SPM. The limitations of the AI-QC-based approach are also discussed. Finally, we outline a research path for improving AI-QC-powered SPM. RESEARCH HIGHLIGHTS: Artificial intelligence and quantum computing as support for scanning probe microscopy. The analysis indicates a research gap in the field of scanning probe microscopy. The research aims to shed light on AI-QC-powered scanning probe microscopy.
Affiliation(s)
- Agnieszka Pregowska
  - Department of Information and Computational Science, Institute of Fundamental Technological Research, Polish Academy of Sciences, Warsaw, Poland
- Agata Roszkiewicz
  - Department of Information and Computational Science, Institute of Fundamental Technological Research, Polish Academy of Sciences, Warsaw, Poland
- Magdalena Osial
  - Department of Information and Computational Science, Institute of Fundamental Technological Research, Polish Academy of Sciences, Warsaw, Poland
- Michael Giersig
  - Department of Information and Computational Science, Institute of Fundamental Technological Research, Polish Academy of Sciences, Warsaw, Poland
10
Kampfer S, Dobiasch S, Combs SE, Wilkens JJ. Development of a PTV margin for preclinical irradiation of orthotopic pancreatic tumors derived from a well-known recipe for humans. Z Med Phys 2024; 34:533-541. PMID: 37225604; PMCID: PMC11624325; DOI: 10.1016/j.zemedi.2023.03.005.
Abstract
In human radiotherapy, a safety margin (PTV margin) is essential for successful irradiation and is usually part of clinical treatment planning. In preclinical radiotherapy research with small animals, most of these uncertainties and inaccuracies are present as well, but according to the literature a margin is only rarely used. In addition, there is little experience regarding the appropriate margin size, which should be carefully investigated and considered, since sparing of organs at risk and normal tissue is affected. Here we estimate the margin needed for preclinical irradiation by adapting the well-known human margin recipe of van Herk et al. to the dimensions and requirements of specimens on a small animal radiation research platform (SARRP). We adjusted the factors of the described formula to the specific challenges of an orthotopic pancreatic tumor mouse model to establish an appropriate margin concept. The SARRP was used with its image-guidance capability for arc irradiation with a field size of 10 × 10 mm² over 5 fractions. Our goal was to irradiate the clinical target volume (CTV) of at least 90% of our mice with at least 95% of the prescribed dose. By carefully analyzing all relevant factors, we obtain a CTV-to-planning target volume (PTV) margin of 1.5 mm for our preclinical setup. The stated safety margin is strongly dependent on the exact experimental setting and has to be adjusted for other setups. The few values stated in the literature correspond well to our result. Even if using margins in the preclinical setting adds a challenge, we consider it crucial to use them to produce reliable results and improve the efficacy of radiotherapy.
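For reference, the margin recipe alluded to above (van Herk et al.) is usually quoted in the form below, where the goal is that the CTV receives at least 95% of the prescribed dose in 90% of subjects. The specific rescaling of the coefficients to the preclinical SARRP setup is not given in the abstract and is not reproduced here.

```latex
% Widely quoted CTV-to-PTV margin recipe (van Herk et al.):
% \Sigma = combined standard deviation of systematic (preparation) errors,
% \sigma = combined standard deviation of random (execution) errors.
\[
  M_{\mathrm{PTV}} = 2.5\,\Sigma + 0.7\,\sigma
\]
```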
Affiliation(s)
- Severin Kampfer
  - Department of Radiation Oncology, School of Medicine and Klinikum rechts der Isar, Technical University of Munich (TUM), Munich, Germany
  - Physics Department, Technical University of Munich (TUM), Garching, Germany
- Sophie Dobiasch
  - Department of Radiation Oncology, School of Medicine and Klinikum rechts der Isar, Technical University of Munich (TUM), Munich, Germany
  - Institute of Radiation Medicine (IRM), Department of Radiation Sciences (DRS), Helmholtz Zentrum München, Neuherberg, Germany
- Stephanie E Combs
  - Department of Radiation Oncology, School of Medicine and Klinikum rechts der Isar, Technical University of Munich (TUM), Munich, Germany
  - Institute of Radiation Medicine (IRM), Department of Radiation Sciences (DRS), Helmholtz Zentrum München, Neuherberg, Germany
  - German Cancer Consortium (DKTK), Partner Site Munich, Germany
- Jan J Wilkens
  - Department of Radiation Oncology, School of Medicine and Klinikum rechts der Isar, Technical University of Munich (TUM), Munich, Germany
  - Physics Department, Technical University of Munich (TUM), Garching, Germany
  - Institute of Radiation Medicine (IRM), Department of Radiation Sciences (DRS), Helmholtz Zentrum München, Neuherberg, Germany
11
Liu J, Zhang Y, Wang K, Yavuz MC, Chen X, Yuan Y, Li H, Yang Y, Yuille A, Tang Y, Zhou Z. Universal and extensible language-vision models for organ segmentation and tumor detection from abdominal computed tomography. Med Image Anal 2024; 97:103226. PMID: 38852215; DOI: 10.1016/j.media.2024.103226.
Abstract
The advancement of artificial intelligence (AI) for organ segmentation and tumor detection is propelled by the growing availability of computed tomography (CT) datasets with detailed, per-voxel annotations. However, these AI models often struggle with flexibility for partially annotated datasets and extensibility for new classes due to limitations in the one-hot encoding, architectural design, and learning scheme. To overcome these limitations, we propose a universal, extensible framework enabling a single model, termed Universal Model, to deal with multiple public datasets and adapt to new classes (e.g., organs/tumors). Firstly, we introduce a novel language-driven parameter generator that leverages language embeddings from large language models, enriching semantic encoding compared with one-hot encoding. Secondly, the conventional output layers are replaced with lightweight, class-specific heads, allowing Universal Model to simultaneously segment 25 organs and six types of tumors and ease the addition of new classes. We train our Universal Model on 3410 CT volumes assembled from 14 publicly available datasets and then test it on 6173 CT volumes from four external datasets. Universal Model achieves first place on six CT tasks in the Medical Segmentation Decathlon (MSD) public leaderboard and leading performance on the Beyond The Cranial Vault (BTCV) dataset. In summary, Universal Model exhibits remarkable computational efficiency (6× faster than other dataset-specific models), demonstrates strong generalization across different hospitals, transfers well to numerous downstream tasks, and more importantly, facilitates the extensibility to new classes while alleviating the catastrophic forgetting of previously learned classes. Codes, models, and datasets are available at https://github.com/ljwztc/CLIP-Driven-Universal-Model.
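The idea of a language-driven parameter generator can be sketched as follows: a small MLP maps a fixed text embedding per class to the weights of a lightweight per-class segmentation head, so new classes are added by adding embeddings rather than output channels. This is a conceptual sketch with assumed dimensions and names; it is not the released CLIP-Driven Universal Model code.

```python
# Conceptual sketch: text embeddings -> parameters of per-class 1x1x1 conv heads.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LanguageDrivenHead(nn.Module):
    def __init__(self, text_dim=512, feat_dim=48):
        super().__init__()
        self.feat_dim = feat_dim
        self.param_gen = nn.Sequential(
            nn.Linear(text_dim, 256), nn.ReLU(),
            nn.Linear(256, feat_dim + 1),        # per-class conv weight + bias
        )

    def forward(self, features, text_emb):       # features: (B, F, D, H, W)
        params = self.param_gen(text_emb)        # (num_classes, F + 1)
        weight = params[:, :self.feat_dim].view(-1, self.feat_dim, 1, 1, 1)
        bias = params[:, self.feat_dim]
        return torch.sigmoid(F.conv3d(features, weight, bias))

head = LanguageDrivenHead()
feats = torch.rand(1, 48, 8, 8, 8)               # toy decoder features
text = torch.rand(3, 512)                        # 3 hypothetical class embeddings
print(head(feats, text).shape)                   # -> torch.Size([1, 3, 8, 8, 8])
```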
Affiliation(s)
- Jie Liu
  - City University of Hong Kong, Hong Kong
- Yixiao Zhang
  - Johns Hopkins University, United States of America
- Kang Wang
  - University of California, San Francisco, United States of America
- Mehmet Can Yavuz
  - University of California, San Francisco, United States of America
- Xiaoxi Chen
  - University of Illinois Urbana-Champaign, United States of America
- Yang Yang
  - University of California, San Francisco, United States of America
- Alan Yuille
  - Johns Hopkins University, United States of America
- Zongwei Zhou
  - Johns Hopkins University, United States of America
|
12
Delgado-Rodriguez P, Lamanna-Rama N, Saande C, Aldabe R, Soto-Montenegro ML, Munoz-Barrutia A. Multiscale and multimodal evaluation of autosomal dominant polycystic kidney disease development. Commun Biol 2024; 7:1183. PMID: 39300231; DOI: 10.1038/s42003-024-06868-1.
Abstract
Autosomal Dominant Polycystic Kidney Disease (ADPKD) is the most prevalent genetic kidney disorder, producing structural abnormalities and impaired function. This research investigates its evolution in mouse models, utilizing a combination of histology imaging, Computed Tomography (CT), and Magnetic Resonance Imaging (MRI) to evaluate its progression thoroughly. ADPKD was induced in mice via PKD2 gene knockout, followed by image acquisition at different stages. Histology data provide two-dimensional details, such as the cystic area ratio, whereas CT and MRI facilitate three-dimensional temporal monitoring. Our approach allows the affected tissue to be quantified at different disease stages through multiple quantitative metrics. A pivotal point occurs at approximately ten weeks after induction, marked by a swift acceleration in disease progression and a notable increase in cyst formation. This multimodal strategy augments our comprehension of ADPKD dynamics and suggests the possibility of employing higher-resolution imaging in the future for more accurate volumetric analyses.
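The histology metric mentioned above reduces to a simple ratio of segmented areas; the sketch below illustrates the calculation on toy masks and does not reproduce the paper's segmentation procedure.

```python
# Cystic area ratio = (cyst pixels) / (total kidney section pixels), on toy masks.
import numpy as np

def cystic_area_ratio(cyst_mask: np.ndarray, kidney_mask: np.ndarray) -> float:
    return float(cyst_mask.sum() / max(kidney_mask.sum(), 1))

kidney = np.ones((100, 100), dtype=bool)                 # toy kidney section
cysts = np.zeros_like(kidney)
cysts[10:30, 10:40] = True                               # toy cystic region
print(cystic_area_ratio(cysts, kidney))                  # -> 0.06
```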
Affiliation(s)
- Pablo Delgado-Rodriguez
  - Bioengineering Department, Universidad Carlos III de Madrid, Madrid, Spain
  - Instituto de Investigacion Sanitaria Gregorio Marañon (IiSGM), Madrid, Spain
- Nicolás Lamanna-Rama
  - Instituto de Investigacion Sanitaria Gregorio Marañon (IiSGM), Madrid, Spain
  - Instituto de Investigacion Sanitaria Fundación Jimenez Diaz (IIS - FJD), Madrid, Spain
- Cassondra Saande
  - Division of Gene Therapy and Regulation of Gene Expression, Centre for Applied Medical Research (CIMA), University of Navarra, Pamplona, Spain
- Rafael Aldabe
  - Division of Gene Therapy and Regulation of Gene Expression, Centre for Applied Medical Research (CIMA), University of Navarra, Pamplona, Spain
- María L Soto-Montenegro
  - Instituto de Investigacion Sanitaria Gregorio Marañon (IiSGM), Madrid, Spain
  - CIBER de Salud Mental (CIBERSAM), Madrid, Spain
  - High Performance Research Group in Physiopathology and Pharmacology of the Digestive System (NeuGut), University Rey Juan Carlos (URJC), Alcorcon, Spain
- Arrate Munoz-Barrutia
  - Bioengineering Department, Universidad Carlos III de Madrid, Madrid, Spain
  - Instituto de Investigacion Sanitaria Gregorio Marañon (IiSGM), Madrid, Spain
13
Shih CP, Tang WC, Chen P, Chen BC. Applications of Lightsheet Fluorescence Microscopy by High Numerical Aperture Detection Lens. J Phys Chem B 2024; 128:8273-8289. PMID: 39177503; PMCID: PMC11382282; DOI: 10.1021/acs.jpcb.4c01721.
Abstract
This Review explores the evolution, improvements, and recent applications of Light Sheet Fluorescence Microscopy (LSFM) in biological research using a high numerical aperture detection objective (lens) for imaging subcellular structures. The Review begins with an overview of the development of LSFM, tracing its evolution from its inception to its current state and emphasizing key milestones and technological advancements over the years. Subsequently, we will discuss various improvements of LSFM techniques, covering advancements in hardware such as illumination strategies, optical designs, and sample preparation methods that have enhanced imaging capabilities and resolution. The advancements in data acquisition and processing are also included, which provides a brief overview of the recent development of artificial intelligence. Fluorescence probes that were commonly used in LSFM will be highlighted, together with some insights regarding the selection of potential probe candidates for future LSFM development. Furthermore, we also discuss recent advances in the application of LSFM with a focus on high numerical aperture detection objectives for various biological studies. For sample preparation techniques, there are discussions regarding fluorescence probe selection, tissue clearing protocols, and some insights into expansion microscopy. Integrated setups such as adaptive optics, single objective modification, and microfluidics will also be some of the key discussion points in this Review. We hope that this comprehensive Review will provide a holistic perspective on the historical development, technical enhancements, and cutting-edge applications of LSFM, showcasing its pivotal role and future potential in advancing biological research.
Affiliation(s)
- Chun-Pei Shih
  - Institute of Physics, Academia Sinica, Taipei 11529, Taiwan
  - Department of Chemistry, National Taiwan University, Taipei 106319, Taiwan
  - Nano Science and Technology Program, Taiwan International Graduate Program, Academia Sinica and National Taiwan University, Taipei 11529, Taiwan
- Wei-Chun Tang
  - Research Center for Applied Sciences, Academia Sinica, Taipei 11529, Taiwan
- Peilin Chen
  - Institute of Physics, Academia Sinica, Taipei 11529, Taiwan
  - Research Center for Applied Sciences, Academia Sinica, Taipei 11529, Taiwan
- Bi-Chang Chen
  - Research Center for Applied Sciences, Academia Sinica, Taipei 11529, Taiwan
14
Zwijnen AW, Watzema L, Ridwan Y, van Der Pluijm I, Smal I, Essers J. Self-adaptive deep learning-based segmentation for universal and functional clinical and preclinical CT image analysis. Comput Biol Med 2024; 179:108853. PMID: 39013341; DOI: 10.1016/j.compbiomed.2024.108853.
Abstract
BACKGROUND Methods to monitor cardiac functioning non-invasively can accelerate preclinical and clinical research into novel treatment options for heart failure. However, manual image analysis of cardiac substructures is resource-intensive and error-prone. While automated methods exist for clinical CT images, translating these to preclinical μCT data is challenging. We employed deep learning to automate the extraction of quantitative data from both CT and μCT images. METHODS We collected a public dataset of cardiac CT images of human patients, as well as acquired μCT images of wild-type and accelerated aging mice. The left ventricle, myocardium, and right ventricle were manually segmented in the μCT training set. After template-based heart detection, two separate segmentation neural networks were trained using the nnU-Net framework. RESULTS The mean Dice score of the CT segmentation results (0.925 ± 0.019, n = 40) was superior to that achieved by state-of-the-art algorithms. Automated and manual segmentations of the μCT training set were nearly identical. The estimated median Dice score (0.940) on the test set was comparable to that of existing methods. The automated volume metrics were similar to manual expert observations. In aging mice, the ejection fraction had significantly decreased and myocardial volume had increased by 24 weeks of age. CONCLUSIONS With further optimization, automated data extraction expands the application of (μ)CT imaging, while reducing subjectivity and workload. The proposed method efficiently measures the left and right ventricular ejection fraction and myocardial mass. With uniform translation between image types, cardiac functioning in diastolic and systolic phases can be monitored in both animals and humans.
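The functional metrics mentioned above follow directly from the segmentations; for example, ejection fraction can be derived from end-diastolic and end-systolic voxel counts as in the toy calculation below. The voxel size and counts are made-up numbers, not study data.

```python
# Toy ejection-fraction calculation from segmented ventricular volumes.
voxel_mm = 0.05                                  # hypothetical 50 µm isotropic µCT voxels
voxel_volume_ul = voxel_mm ** 3                  # 1 mm^3 equals 1 µL
edv = 420_000 * voxel_volume_ul                  # end-diastolic voxel count (made up)
esv = 180_000 * voxel_volume_ul                  # end-systolic voxel count (made up)
ef = (edv - esv) / edv * 100                     # ejection fraction in %
print(f"EDV = {edv:.1f} uL, ESV = {esv:.1f} uL, EF = {ef:.0f}%")
```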
Affiliation(s)
- Anne-Wietje Zwijnen
  - Department of Molecular Genetics, Erasmus University Medical Center, Rotterdam, the Netherlands
- Yanto Ridwan
  - AMIE Core Facility, Erasmus Medical Center, Rotterdam, the Netherlands
- Ingrid van Der Pluijm
  - Department of Molecular Genetics, Erasmus University Medical Center, Rotterdam, the Netherlands
  - Department of Vascular Surgery, Erasmus University Medical Center, Rotterdam, the Netherlands
- Ihor Smal
  - Department of Cell Biology, Erasmus University Medical Center, Rotterdam, the Netherlands
- Jeroen Essers
  - Department of Molecular Genetics, Erasmus University Medical Center, Rotterdam, the Netherlands
  - Department of Vascular Surgery, Erasmus University Medical Center, Rotterdam, the Netherlands
  - Department of Radiotherapy, Erasmus University Medical Center, Rotterdam, the Netherlands
15
Kuntner C, Alcaide C, Anestis D, Bankstahl JP, Boutin H, Brasse D, Elvas F, Forster D, Rouchota MG, Tavares A, Teuter M, Wanek T, Zachhuber L, Mannheim JG. Optimizing SUV Analysis: A Multicenter Study on Preclinical FDG-PET/CT Highlights the Impact of Standardization. Mol Imaging Biol 2024; 26:668-679. PMID: 38907124; PMCID: PMC11281957; DOI: 10.1007/s11307-024-01927-9.
Abstract
PURPOSE Preclinical imaging, with translational potential, lacks a standardized method for defining volumes of interest (VOIs), impacting data reproducibility. The aim of this study was to determine the interobserver variability of VOI sizes and standardized uptake values (SUVmean and SUVmax) of different organs using the same [18F]FDG-PET and PET/CT datasets analyzed by multiple observers. In addition, the effect of a standardized analysis approach was evaluated. PROCEDURES In total, 12 observers (4 beginners and 8 experts) analyzed identical preclinical [18F]FDG-PET-only and PET/CT datasets according to their local default image analysis protocols for multiple organs. Furthermore, a standardized protocol was defined, including detailed information on the respective VOI size and position for multiple organs, and all observers reanalyzed the PET/CT datasets following this protocol. RESULTS Without standardization, significant differences in SUVmean and SUVmax were found among the observers. Coregistering CT images with PET images improved comparability only to a limited extent. The introduction of a standardized protocol detailing the VOI size and position for multiple organs reduced interobserver variability and enhanced comparability. CONCLUSIONS The protocol offered clear guidelines and was particularly beneficial for beginners, resulting in improved comparability of SUVmean and SUVmax values for various organs. The study suggests that incorporating an additional VOI template could further enhance the comparability of findings in preclinical imaging analyses.
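For context, the SUV metrics compared across observers follow the standard definition: VOI activity concentration divided by injected activity per gram of body weight (assuming a tissue density of 1 g/mL and decay-corrected values). The sketch below uses toy numbers, not the study data.

```python
# SUVmean / SUVmax from a VOI of activity-concentration values (kBq/mL).
import numpy as np

def suv(voi_kbq_per_ml: np.ndarray, injected_kbq: float, body_weight_g: float):
    factor = injected_kbq / body_weight_g                 # injected activity per gram
    suv_map = voi_kbq_per_ml / factor
    return float(suv_map.mean()), float(suv_map.max())

voi = np.random.default_rng(1).uniform(50, 150, size=(10, 10, 10))     # toy kBq/mL values
suv_mean, suv_max = suv(voi, injected_kbq=8000.0, body_weight_g=25.0)  # e.g. 8 MBq, 25 g mouse
print(f"SUVmean = {suv_mean:.2f}, SUVmax = {suv_max:.2f}")
```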
Affiliation(s)
- Claudia Kuntner
  - Department of Biomedical Imaging and Image-Guided Therapy, Medical University of Vienna, Waehringer Guertel 18-20, 1090 Vienna, Austria
  - Medical Imaging Cluster (MIC), Medical University of Vienna, Vienna, Austria
- Herve Boutin
  - Division of Neuroscience & Experimental Psychology, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, UK
  - INSERM, UMR 1253, iBrain, Université de Tours, Tours, France
- David Brasse
  - Institut Pluridisciplinaire Hubert Curien, UMR7178, Université de Strasbourg, CNRS, Strasbourg, France
- Filipe Elvas
  - Molecular Imaging Center Antwerp, University of Antwerpen, Antwerp, Belgium
- Duncan Forster
  - Division of Informatics, Imaging and Data Sciences, Manchester Molecular Imaging Centre, The University of Manchester, Manchester, UK
- Thomas Wanek
  - Department of Biomedical Imaging and Image-Guided Therapy, Medical University of Vienna, Waehringer Guertel 18-20, 1090 Vienna, Austria
- Lena Zachhuber
  - Department of Biomedical Imaging and Image-Guided Therapy, Medical University of Vienna, Waehringer Guertel 18-20, 1090 Vienna, Austria
- Julia G Mannheim
  - Department of Preclinical Imaging and Radiopharmacy, Werner Siemens Imaging Center, Eberhard-Karls University Tuebingen, Tuebingen, Germany
  - Cluster of Excellence iFIT (EXC 2180) "Image Guided and Functionally Instructed Tumor Therapies", Tuebingen, Germany
16
Ertürk A. Deep 3D histology powered by tissue clearing, omics and AI. Nat Methods 2024; 21:1153-1165. PMID: 38997593; DOI: 10.1038/s41592-024-02327-1.
Abstract
To comprehensively understand tissue and organism physiology and pathophysiology, it is essential to create complete three-dimensional (3D) cellular maps. These maps require structural data, such as the 3D configuration and positioning of tissues and cells, and molecular data on the constitution of each cell, spanning from the DNA sequence to protein expression. While single-cell transcriptomics is illuminating the cellular and molecular diversity across species and tissues, the 3D spatial context of these molecular data is often overlooked. Here, I discuss emerging 3D tissue histology techniques that add the missing third spatial dimension to biomedical research. Through innovations in tissue-clearing chemistry, labeling and volumetric imaging that enhance 3D reconstructions and their synergy with molecular techniques, these technologies will provide detailed blueprints of entire organs or organisms at the cellular level. Machine learning, especially deep learning, will be essential for extracting meaningful insights from the vast data. Further development of integrated structural, molecular and computational methods will unlock the full potential of next-generation 3D histology.
Affiliation(s)
- Ali Ertürk
  - Institute for Tissue Engineering and Regenerative Medicine, Helmholtz Zentrum München, Neuherberg, Germany
  - Institute for Stroke and Dementia Research, Klinikum der Universität München, Ludwig-Maximilians University, Munich, Germany
  - School of Medicine, Koç University, İstanbul, Turkey
  - Deep Piction GmbH, Munich, Germany
17
Jiang L, Xu D, Xu Q, Chatziioannou A, Iwamoto KS, Hui S, Sheng K. Exploring Automated Contouring Across Institutional Boundaries: A Deep Learning Approach with Mouse Micro-CT Datasets. ArXiv [Preprint] 2024: arXiv:2405.18676v1. PMID: 38855547; PMCID: PMC11160888.
Abstract
Image-guided mouse irradiation is essential to understand interventions involving radiation prior to human studies. Our objective is to employ Swin UNEt Transformers (Swin UNETR) to segment native micro-CT and contrast-enhanced micro-CT scans and benchmark the results against 3D no-new-Net (nnU-Net). Swin UNETR reformulates mouse organ segmentation as a sequence-to-sequence prediction task, using a hierarchical Swin Transformer encoder to extract features at 5 resolution levels, and connects to a Fully Convolutional Neural Network (FCNN)-based decoder via skip connections. The models were trained and evaluated on open datasets, with data separation based on individual mice. Further evaluation on an external mouse dataset acquired on a different micro-CT with lower kVp and higher imaging noise was also employed to assess model robustness and generalizability. Results indicate that Swin UNETR consistently outperforms nnU-Net and AIMOS in terms of average dice similarity coefficient (DSC) and Hausdorff distance (HD95p), except in two mice of intestine contouring. This superior performance is especially evident in the external dataset, confirming the model's robustness to variations in imaging conditions, including noise and quality, thereby positioning Swin UNETR as a highly generalizable and efficient tool for automated contouring in pre-clinical workflows.
Affiliation(s)
- Lu Jiang
  - Department of Radiation Oncology, University of California San Francisco
- Di Xu
  - Department of Radiation Oncology, University of California San Francisco
- Qifan Xu
  - Department of Radiation Oncology, University of California San Francisco
- Arion Chatziioannou
  - Department of Molecular and Medical Pharmacology, University of California Los Angeles
- Susanta Hui
  - Department of Radiation Oncology, City of Hope
- Ke Sheng
  - Department of Radiation Oncology, University of California San Francisco
18
Liu H, Xu Z, Gao R, Li H, Wang J, Chabin G, Oguz I, Grbic S. COSST: Multi-Organ Segmentation With Partially Labeled Datasets Using Comprehensive Supervisions and Self-Training. IEEE Trans Med Imaging 2024; 43:1995-2009. PMID: 38224508; DOI: 10.1109/tmi.2024.3354673.
Abstract
Deep learning models have demonstrated remarkable success in multi-organ segmentation but typically require large-scale datasets with all organs of interest annotated. However, medical image datasets are often low in sample size and only partially labeled, i.e., only a subset of organs are annotated. Therefore, it is crucial to investigate how to learn a unified model on the available partially labeled datasets to leverage their synergistic potential. In this paper, we systematically investigate the partial-label segmentation problem with theoretical and empirical analyses on the prior techniques. We revisit the problem from a perspective of partial label supervision signals and identify two signals derived from ground truth and one from pseudo labels. We propose a novel two-stage framework termed COSST, which effectively and efficiently integrates comprehensive supervision signals with self-training. Concretely, we first train an initial unified model using two ground truth-based signals and then iteratively incorporate the pseudo label signal to the initial model using self-training. To mitigate performance degradation caused by unreliable pseudo labels, we assess the reliability of pseudo labels via outlier detection in latent space and exclude the most unreliable pseudo labels from each self-training iteration. Extensive experiments are conducted on one public and three private partial-label segmentation tasks over 12 CT datasets. Experimental results show that our proposed COSST achieves significant improvement over the baseline method, i.e., individual networks trained on each partially labeled dataset. Compared to the state-of-the-art partial-label segmentation methods, COSST demonstrates consistent superior performance on various segmentation tasks and with different training data sizes.
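The pseudo-label filtering idea described above can be illustrated with a generic outlier detector on latent features. The sketch below uses scikit-learn's IsolationForest on synthetic vectors; it is a stand-in for the idea, not the COSST implementation or its latent-space criterion.

```python
# Flag unreliable pseudo-labels as latent-space outliers and drop them before
# the next self-training iteration (toy data).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 64))                 # per-case latent features (toy)
latent[:25] += 6.0                                  # simulate 25 unreliable cases

detector = IsolationForest(contamination=0.05, random_state=0).fit(latent)
keep = detector.predict(latent) == 1                # +1 = inlier, -1 = outlier
print(f"kept {keep.sum()} / {len(keep)} pseudo-labeled cases for self-training")
```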
19
Triki Z, Zhou T, Argyriou E, Sousa de Novais E, Servant O, Kolm N. Social complexity affects cognitive abilities but not brain structure in a Poeciliid fish. Behav Ecol 2024; 35:arae026. PMID: 38638166; PMCID: PMC11025466; DOI: 10.1093/beheco/arae026.
Abstract
Some cognitive abilities are suggested to be the result of a complex social life, allowing individuals to achieve higher fitness through advanced strategies. However, most evidence is correlative. Here, we provide an experimental investigation of how group size and composition affect brain and cognitive development in the guppy (Poecilia reticulata). For 6 months, we reared sexually mature females in one of 3 social treatments: a small conspecific group of 3 guppies; a large heterospecific group of 3 guppies and 3 splash tetras (Copella arnoldi), a species that co-occurs with the guppy in the wild; or a large conspecific group of 6 guppies. We then tested the guppies' performance in self-control (inhibitory control), operant conditioning (associative learning), and cognitive flexibility (reversal learning) tasks. Using X-ray imaging, we measured their brain size and major brain regions. Larger groups of 6 individuals, both conspecific and heterospecific, showed better cognitive flexibility than smaller groups but no difference in the self-control and operant conditioning tests. Interestingly, while social manipulation had no significant effect on brain morphology, relatively larger telencephalons were associated with better cognitive flexibility. This suggests that alternative mechanisms beyond brain region size enabled greater cognitive flexibility in individuals from larger groups. Although there is no clear evidence for an impact on brain morphology, our research shows that living in larger social groups can enhance cognitive flexibility. This indicates that the social environment plays a role in the cognitive development of guppies.
Affiliation(s)
- Zegni Triki
- Behavioral Ecology Division, Institute of Ecology and Evolution, University of Bern, Baltzerstrasse 6, 3012 Bern, Switzerland
- Department of Zoology, Stockholm University, Svante Arrheniusväg 18 B, 10691, Stockholm, Sweden
| | - Tunhe Zhou
- Brain Imaging Centre, Stockholm University, Svante Arrheniusväg 16 A, 10691, Stockholm, Sweden
| | - Elli Argyriou
- Department of Zoology, Stockholm University, Svante Arrheniusväg 18 B, 10691, Stockholm, Sweden
| | - Edson Sousa de Novais
- Behavioural Ecology Laboratory, Faculty of Science, University of Neuchâtel, Emile-Argand 11, 2000 Neuchâtel, Switzerland
| | - Oriane Servant
- Department of Zoology, Stockholm University, Svante Arrheniusväg 18 B, 10691, Stockholm, Sweden
| | - Niclas Kolm
- Department of Zoology, Stockholm University, Svante Arrheniusväg 18 B, 10691, Stockholm, Sweden
20
Dhaliwal A, Ma J, Zheng M, Lyu Q, Rajora MA, Ma S, Oliva L, Ku A, Valic M, Wang B, Zheng G. Deep learning for automatic organ and tumor segmentation in nanomedicine pharmacokinetics. Theranostics 2024; 14:973-987. [PMID: 38250039 PMCID: PMC10797295 DOI: 10.7150/thno.90246] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2023] [Accepted: 11/17/2023] [Indexed: 01/23/2024] Open
Abstract
Rationale: Multimodal imaging provides important pharmacokinetic and dosimetry information during nanomedicine development and optimization. However, accurate quantitation is time-consuming, resource intensive, and requires anatomical expertise. Methods: We present NanoMASK: a 3D U-Net adapted deep learning tool capable of rapid, automatic organ segmentation of multimodal imaging data that can output key clinical dosimetry metrics without manual intervention. This model was trained on 355 manually-contoured PET/CT data volumes of mice injected with a variety of nanomaterials and imaged over 48 hours. Results: NanoMASK produced 3-dimensional contours of the heart, lungs, liver, spleen, kidneys, and tumor with high volumetric accuracy (pan-organ average %DSC of 92.5). Pharmacokinetic metrics including %ID/cc, %ID, and SUVmax achieved correlation coefficients exceeding R = 0.987 and relative mean errors below 0.2%. NanoMASK was applied to novel datasets of lipid nanoparticles and antibody-drug conjugates with a minimal drop in accuracy, illustrating its generalizability to different classes of nanomedicines. Furthermore, 20 additional auto-segmentation models were developed using training data subsets based on image modality, experimental imaging timepoint, and tumor status. These were used to explore the fundamental biases and dependencies of auto-segmentation models built on a 3D U-Net architecture, revealing significant differential impacts on organ segmentation accuracy. Conclusions: NanoMASK is an easy-to-use, adaptable tool for improving accuracy and throughput in imaging-based pharmacokinetic studies of nanomedicine. It has been made publicly available to all readers for automatic segmentation and pharmacokinetic analysis across a diverse array of nanoparticles, expediting agent development.
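For readers unfamiliar with the dosimetry metrics quoted above, the sketch below shows how %ID, %ID/cc, and SUVmax follow from a PET activity volume and a binary organ mask. The formulas are the standard definitions and the variable names and units are illustrative assumptions, not NanoMASK code.

```python
# Standard pharmacokinetic metrics from a PET volume and an organ mask.
# Toy data only; not derived from the NanoMASK datasets.
import numpy as np

def organ_metrics(pet_bq_per_cc, organ_mask, voxel_volume_cc,
                  injected_dose_bq, body_weight_g):
    activity = pet_bq_per_cc[organ_mask]                 # Bq/cc inside the organ
    organ_bq = activity.sum() * voxel_volume_cc          # total organ activity (Bq)
    pct_id = 100.0 * organ_bq / injected_dose_bq         # % injected dose in the organ
    pct_id_per_cc = pct_id / (organ_mask.sum() * voxel_volume_cc)
    # SUV = tissue concentration / (injected dose / body weight), weight in g ~ cc
    suv = activity / (injected_dose_bq / body_weight_g)
    return pct_id, pct_id_per_cc, float(suv.max())       # (%ID, %ID/cc, SUVmax)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    pet = rng.uniform(0, 5e3, size=(64, 64, 64))         # toy PET volume, Bq/cc
    mask = np.zeros_like(pet, dtype=bool)
    mask[20:30, 20:30, 20:30] = True                     # toy "liver" mask
    print(organ_metrics(pet, mask, voxel_volume_cc=0.001,
                        injected_dose_bq=5e6, body_weight_g=25.0))
```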
Affiliation(s)
- Alex Dhaliwal
- Princess Margaret Cancer Centre, University Health Network, 101 College Street, Toronto, M5G 1L7, Ontario, Canada
- Department of Medical Biophysics, University of Toronto, 101 College Street, Toronto, M5G 1L7, Ontario, Canada
| | - Jun Ma
- Department of Laboratory Medicine and Pathobiology, University of Toronto, 1 King's College Circle, Toronto, M5S 1A8, Ontario, Canada
- Peter Munk Cardiac Centre, University Health Network, 190 Elizabeth St, Toronto, M5G 2C4, Ontario, Canada
- Vector Institute for Artificial Intelligence, 661 University Avenue, Toronto, M4G 1M1, Ontario, Canada
| | - Mark Zheng
- Princess Margaret Cancer Centre, University Health Network, 101 College Street, Toronto, M5G 1L7, Ontario, Canada
| | - Qing Lyu
- Department of Computer Science, University of Toronto, 101 College Street, Toronto, M5G 1L7, Ontario, Canada
| | - Maneesha A. Rajora
- Princess Margaret Cancer Centre, University Health Network, 101 College Street, Toronto, M5G 1L7, Ontario, Canada
- Institute of Biomedical Engineering, University of Toronto, 101 College Street, Toronto, M5G 1L7, Ontario, Canada
| | - Shihao Ma
- Department of Computer Science, University of Toronto, 101 College Street, Toronto, M5G 1L7, Ontario, Canada
- Vector Institute for Artificial Intelligence, 661 University Avenue, Toronto, M4G 1M1, Ontario, Canada
| | - Laura Oliva
- Techna Institute, University Health Network, 190 Elizabeth Street, Toronto, M5G 2C4, Ontario, Canada
| | - Anthony Ku
- Department of Radiology, Stanford University, 1201 Welch Road, Stanford, 94305-5484, California, United States of America
| | - Michael Valic
- Princess Margaret Cancer Centre, University Health Network, 101 College Street, Toronto, M5G 1L7, Ontario, Canada
- Institute of Biomedical Engineering, University of Toronto, 101 College Street, Toronto, M5G 1L7, Ontario, Canada
| | - Bo Wang
- Department of Laboratory Medicine and Pathobiology, University of Toronto, 1 King's College Circle, Toronto, M5S 1A8, Ontario, Canada
- Peter Munk Cardiac Centre, University Health Network, 190 Elizabeth St, Toronto, M5G 2C4, Ontario, Canada
- Department of Computer Science, University of Toronto, 101 College Street, Toronto, M5G 1L7, Ontario, Canada
- Vector Institute for Artificial Intelligence, 661 University Avenue, Toronto, M4G 1M1, Ontario, Canada
| | - Gang Zheng
- Princess Margaret Cancer Centre, University Health Network, 101 College Street, Toronto, M5G 1L7, Ontario, Canada
- Department of Medical Biophysics, University of Toronto, 101 College Street, Toronto, M5G 1L7, Ontario, Canada
- Peter Munk Cardiac Centre, University Health Network, 190 Elizabeth St, Toronto, M5G 2C4, Ontario, Canada
21
Azad R, Kazerouni A, Heidari M, Aghdam EK, Molaei A, Jia Y, Jose A, Roy R, Merhof D. Advances in medical image analysis with vision Transformers: A comprehensive review. Med Image Anal 2024; 91:103000. [PMID: 37883822 DOI: 10.1016/j.media.2023.103000] [Citation(s) in RCA: 58] [Impact Index Per Article: 58.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2023] [Revised: 09/30/2023] [Accepted: 10/11/2023] [Indexed: 10/28/2023]
Abstract
The remarkable performance of the Transformer architecture in natural language processing has recently also triggered broad interest in Computer Vision. Among other merits, Transformers have been shown to be capable of learning long-range dependencies and spatial correlations, a clear advantage over convolutional neural networks (CNNs), which have so far been the de facto standard in Computer Vision. Thus, Transformers have become an integral part of modern medical image analysis. In this work, we provide an encyclopedic review of the applications of Transformers in medical imaging. Specifically, we present a systematic and thorough review of relevant recent Transformer literature for different medical image analysis tasks, including classification, segmentation, detection, registration, synthesis, and clinical report generation. For each of these applications, we investigate the novelty, strengths, and weaknesses of the different proposed strategies and develop taxonomies highlighting key properties and contributions. Further, where applicable, we outline current benchmarks on different datasets. Finally, we summarize key challenges and discuss different future research directions. In addition, we provide the cited papers with their corresponding implementations at https://github.com/mindflow-institue/Awesome-Transformer.
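As a concrete reminder of what the reviewed architectures share, here is a minimal patch-embedding plus self-attention block in PyTorch: every patch attends to every other patch, which is the long-range-dependency property contrasted with CNNs above. All sizes and hyperparameters are arbitrary illustrative choices, not any specific model from the review.

```python
# Minimal Vision-Transformer-style block: patchify, embed, self-attend.
import torch
import torch.nn as nn

class TinyViTBlock(nn.Module):
    def __init__(self, img_size=64, patch=8, dim=128, heads=4):
        super().__init__()
        self.embed = nn.Conv2d(1, dim, kernel_size=patch, stride=patch)  # patchify + project
        n_patches = (img_size // patch) ** 2
        self.pos = nn.Parameter(torch.zeros(1, n_patches, dim))          # learned positions
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):                        # x: (B, 1, H, W), e.g. a CT slice
        tokens = self.embed(x).flatten(2).transpose(1, 2) + self.pos     # (B, N, dim)
        q = self.norm(tokens)
        attended, _ = self.attn(q, q, q)         # each patch attends to all patches
        return tokens + attended                 # residual connection

if __name__ == "__main__":
    block = TinyViTBlock()
    out = block(torch.randn(2, 1, 64, 64))
    print(out.shape)                             # torch.Size([2, 64, 128])
```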
Affiliation(s)
- Reza Azad
- Faculty of Electrical Engineering and Information Technology, RWTH Aachen University, Aachen, Germany
| | - Amirhossein Kazerouni
- School of Electrical Engineering, Iran University of Science and Technology, Tehran, Iran
| | - Moein Heidari
- School of Electrical Engineering, Iran University of Science and Technology, Tehran, Iran
| | | | - Amirali Molaei
- School of Computer Engineering, Iran University of Science and Technology, Tehran, Iran
| | - Yiwei Jia
- Faculty of Electrical Engineering and Information Technology, RWTH Aachen University, Aachen, Germany
| | - Abin Jose
- Faculty of Electrical Engineering and Information Technology, RWTH Aachen University, Aachen, Germany
| | - Rijo Roy
- Faculty of Electrical Engineering and Information Technology, RWTH Aachen University, Aachen, Germany
| | - Dorit Merhof
- Faculty of Informatics and Data Science, University of Regensburg, Regensburg, Germany; Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany.
22
Xie Y, Zhang J, Xia Y, Shen C. Learning From Partially Labeled Data for Multi-Organ and Tumor Segmentation. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2023; 45:14905-14919. [PMID: 37672381 DOI: 10.1109/tpami.2023.3312587] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/08/2023]
Abstract
Medical image benchmarks for the segmentation of organs and tumors suffer from the partial-labeling issue due to the intensive cost of labor and expertise required for annotation. Current mainstream approaches follow the practice of one network solving one task. With this pipeline, not only is performance limited by the typically small dataset of a single task, but the computation cost also increases linearly with the number of tasks. To address this, we propose a Transformer-based dynamic on-demand network (TransDoDNet) that learns to segment organs and tumors on multiple partially labeled datasets. Specifically, TransDoDNet has a hybrid backbone composed of a convolutional neural network and a Transformer. A dynamic head enables the network to accomplish multiple segmentation tasks flexibly. Unlike existing approaches that fix kernels after training, the kernels in the dynamic head are generated adaptively by the Transformer, which employs the self-attention mechanism to model long-range organ-wise dependencies and decodes the organ embedding that represents each organ. We create a large-scale partially labeled Multi-Organ and Tumor Segmentation benchmark, termed MOTS, and demonstrate the superior performance of TransDoDNet over competing methods on seven organ and tumor segmentation tasks. This study also provides a general 3D medical image segmentation model, which has been pre-trained on the large-scale MOTS benchmark and has demonstrated advanced performance over current predominant self-supervised learning methods.
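The dynamic-head idea can be illustrated with a toy example in which a per-organ convolution kernel is generated from an organ embedding instead of being fixed after training. The sketch below is a heavily simplified assumption (a single generated 1x1x1 kernel from a linear controller), not the TransDoDNet implementation.

```python
# Sketch of a dynamic segmentation head: kernels are predicted from an organ
# embedding and applied to shared decoder features. Shapes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicSegHead(nn.Module):
    def __init__(self, feat_ch=32, embed_dim=64):
        super().__init__()
        # Controller maps an organ embedding to the weights + bias of a 1x1x1 conv.
        self.controller = nn.Linear(embed_dim, feat_ch + 1)

    def forward(self, feats, organ_embedding):
        # feats: (1, C, D, H, W) shared decoder features, one volume at a time
        params = self.controller(organ_embedding)           # (C + 1,)
        weight = params[:-1].view(1, feats.shape[1], 1, 1, 1)
        bias = params[-1:].contiguous()
        logits = F.conv3d(feats, weight, bias)               # (1, 1, D, H, W)
        return torch.sigmoid(logits)                         # foreground probability

if __name__ == "__main__":
    head = DynamicSegHead()
    feats = torch.randn(1, 32, 16, 32, 32)
    liver_embedding = torch.randn(64)      # would come from the Transformer in the paper
    print(head(feats, liver_embedding).shape)                # torch.Size([1, 1, 16, 32, 32])
```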
23
Jiang L, Ramesh P, Neph R, Sheng K. Technical note: Multi-MATE, a high-throughput platform for automated image-guided small-animal irradiation. Med Phys 2023; 50:7383-7389. [PMID: 37341036 PMCID: PMC10733545 DOI: 10.1002/mp.16563] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2022] [Accepted: 06/07/2023] [Indexed: 06/22/2023] Open
Abstract
BACKGROUND Small animal irradiation is essential to study the radiation response of new interventions before or in parallel to human therapy. Image-guided radiotherapy (IGRT) and intensity-modulated radiotherapy (IMRT) have recently been adopted in small animal irradiation to more closely mimic human treatments. However, these sophisticated techniques demand time, resources, and expertise that are often impractical. PURPOSE We propose a high-throughput, high-precision platform named Multiple Mouse Automated Treatment Environment (Multi-MATE) to streamline image-guided small animal irradiation. METHODS Multi-MATE consists of six parallel, hexagonally arranged channels, each equipped with a transfer railing, a 3D-printed immobilization pod, and an electromagnetic control unit, computer-controlled via an Arduino interface. The mouse immobilization pods are transferred along the railings between the home position outside the radiation field and the imaging/irradiation position at the irradiator isocenter. In the proposed workflow, all six immobilization pods are transferred to the isocenter for parallel CBCT scans and treatment planning, and the pods are then sequentially transported to the imaging/irradiation position for dose delivery. The positioning reproducibility of Multi-MATE is evaluated using CBCT and radiochromic films. RESULTS While parallelizing and automating image-guided small animal radiation delivery, Multi-MATE achieved an average pod position reproducibility of 0.17 ± 0.04 mm in the superior-inferior direction, 0.20 ± 0.04 mm in the left-right direction, and 0.12 ± 0.02 mm in the anterior-posterior direction in repeated CBCT tests. Additionally, in image-guided dose delivery tasks, Multi-MATE demonstrated a positioning reproducibility of 0.17 ± 0.06 mm in the superior-inferior direction and 0.19 ± 0.06 mm in the left-right direction. CONCLUSIONS We designed, fabricated, and tested a novel automated irradiation platform, Multi-MATE, to accelerate and automate image-guided small animal irradiation. The automated platform minimizes human operation and achieves high setup reproducibility and image-guided dose delivery accuracy. Multi-MATE thus removes a major barrier to implementing high-precision preclinical radiation research.
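Setup reproducibility numbers of the form "0.17 ± 0.04 mm" are mean ± SD of per-repeat pod displacements along each axis. A tiny helper, assuming displacements measured from repeated CBCT registrations; the toy data and the use of absolute displacements are assumptions, not the authors' analysis.

```python
# Summarize repeated setup displacements per axis as "mean ± SD mm".
import numpy as np

def reproducibility(offsets_mm: np.ndarray) -> dict:
    """offsets_mm: (n_repeats, 3) displacements along the SI, LR, AP axes."""
    mags = np.abs(offsets_mm)
    mean = mags.mean(axis=0)
    sd = mags.std(axis=0, ddof=1)
    return {axis: f"{m:.2f} ± {s:.2f} mm"
            for axis, m, s in zip(("SI", "LR", "AP"), mean, sd)}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    offsets = rng.normal(0.0, 0.2, size=(10, 3))   # 10 repeated setups (toy values)
    print(reproducibility(offsets))
```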
Affiliation(s)
- Lu Jiang
- Department of Radiation Oncology, University of California, Los Angeles, 90095, USA
| | - Pavitra Ramesh
- Department of Radiation Oncology, University of California, Los Angeles, 90095, USA
| | - Ryan Neph
- Department of Radiation Oncology, University of California, Los Angeles, 90095, USA
| | - Ke Sheng
- Department of Radiation Oncology, University of California, San Francisco, 94115, USA
24
Ding Y, Yang F, Han M, Li C, Wang Y, Xu X, Zhao M, Zhao M, Yue M, Deng H, Yang H, Yao J, Liu Y. Multi-center study on predicting breast cancer lymph node status from core needle biopsy specimens using multi-modal and multi-instance deep learning. NPJ Breast Cancer 2023; 9:58. [PMID: 37443117 DOI: 10.1038/s41523-023-00562-x] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2022] [Accepted: 06/26/2023] [Indexed: 07/15/2023] Open
Abstract
The objective of our study is to develop a deep learning model based on clinicopathological data and digital pathological images of core needle biopsy specimens for predicting breast cancer lymph node metastasis. We collected 3701 patients from the Fourth Hospital of Hebei Medical University and 190 patients from four medical centers in Hebei Province. Integrating clinicopathological data and image features, we built a multi-modal and multi-instance (MMMI) deep learning model to obtain the final prediction. For predicting the presence or absence of lymph node metastasis, the AUC was 0.770, 0.709, and 0.809 based on the clinicopathological features, WSI, and MMMI, respectively. For the four-category classification of lymph node status (no metastasis, isolated tumor cells (ITCs), micrometastasis, and macrometastasis), predictions based on clinicopathological features, WSI, and MMMI were compared. The AUCs for no metastasis were 0.770, 0.709, and 0.809, respectively; for ITCs, 0.619, 0.531, and 0.634; for micrometastasis, 0.636, 0.617, and 0.691; and for macrometastasis, 0.748, 0.691, and 0.758. The MMMI model achieved the highest prediction accuracy. Across the different molecular types of breast cancer, MMMI demonstrated better prediction accuracy for every category of lymph node status, especially in triple-negative breast cancer (TNBC). In the external validation sets, MMMI also showed better prediction accuracy for the four-category classification, with AUCs of 0.725, 0.757, 0.525, and 0.708, respectively. In summary, we developed a breast cancer lymph node metastasis prediction model based on the MMMI approach; testing across all cases showed high overall predictive ability.
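A hedged sketch of what a multi-modal, multi-instance fusion model of this kind can look like: patch-level WSI features are attention-pooled into a slide-level feature and concatenated with clinicopathological variables before a 4-class head. The four output classes follow the abstract, but the architecture, dimensions, and names below are illustrative assumptions, not the authors' model.

```python
# Toy multi-modal, multi-instance classifier: attention pooling over WSI
# patch features, fused with clinical features, 4 node-status classes.
import torch
import torch.nn as nn

class MMMIClassifier(nn.Module):
    def __init__(self, patch_dim=512, clin_dim=12, n_classes=4):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(patch_dim, 128), nn.Tanh(), nn.Linear(128, 1))
        self.head = nn.Linear(patch_dim + clin_dim, n_classes)

    def forward(self, patch_feats, clin_feats):
        # patch_feats: (n_patches, patch_dim) for one biopsy; clin_feats: (clin_dim,)
        weights = torch.softmax(self.attn(patch_feats), dim=0)   # (n_patches, 1)
        slide_feat = (weights * patch_feats).sum(dim=0)          # attention-pooled WSI feature
        fused = torch.cat([slide_feat, clin_feats], dim=0)
        return self.head(fused)                                  # logits over 4 node statuses

if __name__ == "__main__":
    model = MMMIClassifier()
    logits = model(torch.randn(200, 512), torch.randn(12))
    print(logits.shape)                                          # torch.Size([4])
```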
Affiliation(s)
- Yan Ding
- Department of Pathology, The Fourth Hospital of Hebei Medical University, 050011, Shijiazhuang, Hebei, China
| | - Fan Yang
- AI Lab, Tencent, 518057, Shenzhen, China
| | - Mengxue Han
- Department of Pathology, The Fourth Hospital of Hebei Medical University, 050011, Shijiazhuang, Hebei, China
| | - Chunhui Li
- Department of Pathology, Chengde Medical University Affiliated Hospital, 067000, Chengde, Hebei, China
| | - Yanan Wang
- Department of Pathology, Affiliated Hospital of Hebei University, 071000, Baoding, Hebei, China
| | - Xin Xu
- Department of Pathology, Xingtai People's Hospital, 054000, Xingtai, Hebei, China
| | - Min Zhao
- Department of Pathology, First Hospital of Qinhuangdao, 066000, Qinhuangdao, Hebei, China
| | - Meng Zhao
- Department of Pathology, The Fourth Hospital of Hebei Medical University, 050011, Shijiazhuang, Hebei, China
| | - Meng Yue
- Department of Pathology, The Fourth Hospital of Hebei Medical University, 050011, Shijiazhuang, Hebei, China
| | - Huiyan Deng
- Department of Pathology, The Fourth Hospital of Hebei Medical University, 050011, Shijiazhuang, Hebei, China
| | - Huichai Yang
- Department of Pathology, The Fourth Hospital of Hebei Medical University, 050011, Shijiazhuang, Hebei, China
| | | | - Yueping Liu
- Department of Pathology, The Fourth Hospital of Hebei Medical University, 050011, Shijiazhuang, Hebei, China.
25
Streeter SS, Zuurbier RA, diFlorio-Alexander RM, Hansberry MT, Maloney BW, Pogue BW, Wells WA, Paulsen KD, Barth RJ. Breast-Conserving Surgery Margin Guidance Using Micro-Computed Tomography: Challenges When Imaging Radiodense Resection Specimens. Ann Surg Oncol 2023; 30:4097-4108. [PMID: 37041429 PMCID: PMC10600965 DOI: 10.1245/s10434-023-13364-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2022] [Accepted: 02/27/2023] [Indexed: 04/13/2023]
Abstract
BACKGROUND Breast-conserving surgery (BCS) is an integral component of early-stage breast cancer treatment, but costly reexcision procedures are common due to the high prevalence of cancer-positive margins on primary resections. A need exists to develop and evaluate improved methods of margin assessment to detect positive margins intraoperatively. METHODS A prospective trial was conducted through which micro-computed tomography (micro-CT) with radiological interpretation by three independent readers was evaluated for BCS margin assessment. Results were compared to standard-of-care intraoperative margin assessment (i.e., specimen palpation and radiography [abbreviated SIA]) for detecting cancer-positive margins. RESULTS Six hundred margins from 100 patients were analyzed. Twenty-one margins in 14 patients were pathologically positive. On analysis at the specimen-level, SIA yielded a sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of 42.9%, 76.7%, 23.1%, and 89.2%, respectively. SIA correctly identified six of 14 margin-positive cases with a 23.5% false positive rate (FPR). Micro-CT readers achieved sensitivity, specificity, PPV, and NPV ranges of 35.7-50.0%, 55.8-68.6%, 15.6-15.8%, and 86.8-87.3%, respectively. Micro-CT readers correctly identified five to seven of 14 margin-positive cases with an FPR range of 31.4-44.2%. If micro-CT scanning had been combined with SIA, up to three additional margin-positive specimens would have been identified. DISCUSSION Micro-CT identified a similar proportion of margin-positive cases as standard specimen palpation and radiography, but due to difficulty distinguishing between radiodense fibroglandular tissue and cancer, resulted in a higher proportion of false positive margin assessments.
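The diagnostic metrics reported above all derive from a 2x2 table of margin calls versus final pathology. A small helper makes the definitions explicit; the counts in the example are hypothetical, chosen only to roughly match the reported specimen-level rates (e.g., 6 of 14 positive cases detected gives 42.9% sensitivity), not taken from the trial's raw data.

```python
# Diagnostic accuracy metrics from confusion-matrix counts.
def margin_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    sens = tp / (tp + fn)          # positive margins correctly flagged
    spec = tn / (tn + fp)          # negative margins correctly cleared
    return {
        "sensitivity_%": round(100 * sens, 1),
        "specificity_%": round(100 * spec, 1),
        "PPV_%": round(100 * tp / (tp + fp), 1),
        "NPV_%": round(100 * tn / (tn + fn), 1),
        "FPR_%": round(100 * (1 - spec), 1),
    }

if __name__ == "__main__":
    # Hypothetical specimen-level counts for illustration only.
    print(margin_metrics(tp=6, fp=20, tn=66, fn=8))
```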
Affiliation(s)
- Samuel S Streeter
- Thayer School of Engineering, Dartmouth College, Hanover, NH, USA.
- Department of Orthopaedics, Dartmouth Health, Lebanon, NH, USA.
| | - Rebecca A Zuurbier
- Department of Radiology, Geisel School of Medicine, Dartmouth College, Hanover, NH, USA
- Dartmouth Cancer Center, Dartmouth Health, Lebanon, NH, USA
| | - Roberta M diFlorio-Alexander
- Department of Radiology, Geisel School of Medicine, Dartmouth College, Hanover, NH, USA
- Dartmouth Cancer Center, Dartmouth Health, Lebanon, NH, USA
| | - Mark T Hansberry
- Department of Radiology, Geisel School of Medicine, Dartmouth College, Hanover, NH, USA
| | | | - Brian W Pogue
- Thayer School of Engineering, Dartmouth College, Hanover, NH, USA
- Dartmouth Cancer Center, Dartmouth Health, Lebanon, NH, USA
- Department of Medical Physics, University of Wisconsin, Madison, WI, USA
| | - Wendy A Wells
- Dartmouth Cancer Center, Dartmouth Health, Lebanon, NH, USA
- Department of Pathology and Laboratory Medicine, Geisel School of Medicine, Dartmouth College, Hanover, NH, USA
| | - Keith D Paulsen
- Thayer School of Engineering, Dartmouth College, Hanover, NH, USA
- Dartmouth Cancer Center, Dartmouth Health, Lebanon, NH, USA
| | - Richard J Barth
- Dartmouth Cancer Center, Dartmouth Health, Lebanon, NH, USA.
- Department of Surgery, Geisel School of Medicine, Dartmouth College, Hanover, NH, USA.
26
Arús BA, Cosco ED, Yiu J, Balba I, Bischof TS, Sletten EM, Bruns OT. Shortwave infrared fluorescence imaging of peripheral organs in awake and freely moving mice. Front Neurosci 2023; 17:1135494. [PMID: 37274204 PMCID: PMC10232761 DOI: 10.3389/fnins.2023.1135494] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/31/2022] [Accepted: 04/24/2023] [Indexed: 06/06/2023] Open
Abstract
Extracting biological information from awake and unrestrained mice is imperative to in vivo basic and pre-clinical research. Accordingly, imaging methods which preclude invasiveness, anesthesia, and/or physical restraint enable more physiologically relevant biological data extraction by eliminating these extrinsic confounders. In this article, we discuss the recent development of shortwave infrared (SWIR) fluorescent imaging to visualize peripheral organs in freely-behaving mice, as well as propose potential applications of this imaging modality in the neurosciences.
Affiliation(s)
- Bernardo A. Arús
- Helmholtz Pioneer Campus, Helmholtz Zentrum München, Neuherberg, Germany
- German Cancer Research Center (DKFZ), Heidelberg, Germany
- National Center for Tumor Diseases (NCT/UCC), Dresden, Germany
- Medizinische Fakultät and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- Helmholtz-Zentrum Dresden-Rossendorf (HZDR), Dresden, Germany
| | - Emily D. Cosco
- Helmholtz Pioneer Campus, Helmholtz Zentrum München, Neuherberg, Germany
- Department of Chemistry and Biochemistry, University of California, Los Angeles, Los Angeles, CA, United States
| | - Joycelyn Yiu
- Helmholtz Pioneer Campus, Helmholtz Zentrum München, Neuherberg, Germany
| | - Ilaria Balba
- Helmholtz Pioneer Campus, Helmholtz Zentrum München, Neuherberg, Germany
| | - Thomas S. Bischof
- Helmholtz Pioneer Campus, Helmholtz Zentrum München, Neuherberg, Germany
- German Cancer Research Center (DKFZ), Heidelberg, Germany
- National Center for Tumor Diseases (NCT/UCC), Dresden, Germany
- Medizinische Fakultät and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- Helmholtz-Zentrum Dresden-Rossendorf (HZDR), Dresden, Germany
| | - Ellen M. Sletten
- Department of Chemistry and Biochemistry, University of California, Los Angeles, Los Angeles, CA, United States
| | - Oliver T. Bruns
- Helmholtz Pioneer Campus, Helmholtz Zentrum München, Neuherberg, Germany
- German Cancer Research Center (DKFZ), Heidelberg, Germany
- National Center for Tumor Diseases (NCT/UCC), Dresden, Germany
- Medizinische Fakultät and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- Helmholtz-Zentrum Dresden-Rossendorf (HZDR), Dresden, Germany
27
Arús BA, Cosco ED, Yiu J, Balba I, Bischof TS, Sletten EM, Bruns OT. Shortwave infrared (SWIR) fluorescence imaging of peripheral organs in awake and freely moving mice. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.04.26.538387. [PMID: 37163051 PMCID: PMC10168299 DOI: 10.1101/2023.04.26.538387] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/11/2023]
Abstract
Extracting biological information from awake and unrestrained mice is imperative to in vivo basic and pre-clinical research. Accordingly, imaging methods which preclude invasiveness, anesthesia, and/or physical restraint enable more physiologically relevant biological data extraction by eliminating these extrinsic confounders. In this article we discuss the recent development of shortwave infrared (SWIR) fluorescent imaging to visualize peripheral organs in freely-behaving mice, as well as propose potential applications of this imaging modality in the neurosciences.
28
Verhaegen F, Butterworth KT, Chalmers AJ, Coppes RP, de Ruysscher D, Dobiasch S, Fenwick JD, Granton PV, Heijmans SHJ, Hill MA, Koumenis C, Lauber K, Marples B, Parodi K, Persoon LCGG, Staut N, Subiel A, Vaes RDW, van Hoof S, Verginadis IL, Wilkens JJ, Williams KJ, Wilson GD, Dubois LJ. Roadmap for precision preclinical x-ray radiation studies. Phys Med Biol 2023; 68:06RM01. [PMID: 36584393 DOI: 10.1088/1361-6560/acaf45] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2022] [Accepted: 12/30/2022] [Indexed: 12/31/2022]
Abstract
This Roadmap paper covers the field of precision preclinical x-ray radiation studies in animal models. It is mostly focused on models for cancer and normal tissue response to radiation, but also discusses other disease models. The recent technological evolution in imaging, irradiation, dosimetry and monitoring that have empowered these kinds of studies is discussed, and many developments in the near future are outlined. Finally, clinical translation and reverse translation are discussed.
Affiliation(s)
- Frank Verhaegen
- MAASTRO Clinic, Radiotherapy Division, GROW-School for Oncology and Reproduction, Maastricht University Medical Centre+, Maastricht, The Netherlands
- SmART Scientific Solutions BV, Maastricht, The Netherlands
| | - Karl T Butterworth
- Patrick G. Johnston, Centre for Cancer Research, Queen's University Belfast, Belfast, Northern Ireland, United Kingdom
| | - Anthony J Chalmers
- School of Cancer Sciences, University of Glasgow, Glasgow G61 1QH, United Kingdom
| | - Rob P Coppes
- Departments of Biomedical Sciences of Cells & Systems, Section Molecular Cell Biology and Radiation Oncology, University Medical Center Groningen, University of Groningen, 9700 AD Groningen, The Netherlands
| | - Dirk de Ruysscher
- MAASTRO Clinic, Radiotherapy Division, GROW-School for Oncology and Reproduction, Maastricht University Medical Centre+, Maastricht, The Netherlands
| | - Sophie Dobiasch
- Department of Radiation Oncology, Technical University of Munich (TUM), School of Medicine and Klinikum rechts der Isar, Germany
- Department of Medical Physics, Institute of Radiation Medicine (IRM), Department of Radiation Sciences (DRS), Helmholtz Zentrum München, Germany
| | - John D Fenwick
- Department of Medical Physics & Biomedical Engineering University College LondonMalet Place Engineering Building, London WC1E 6BT, United Kingdom
| | | | | | - Mark A Hill
- MRC Oxford Institute for Radiation Oncology, University of Oxford, ORCRB Roosevelt Drive, Oxford OX3 7DQ, United Kingdom
| | - Constantinos Koumenis
- Department of Radiation Oncology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
| | - Kirsten Lauber
- Department of Radiation Oncology, University Hospital, LMU München, Munich, Germany
- German Cancer Consortium (DKTK), Partner site Munich, Germany
| | - Brian Marples
- Department of Radiation Oncology, University of Rochester, NY, United States of America
| | - Katia Parodi
- German Cancer Consortium (DKTK), Partner site Munich, Germany
- Department of Medical Physics, Faculty of Physics, Ludwig-Maximilians-Universität München, Garching b. Munich, Germany
| | | | - Nick Staut
- SmART Scientific Solutions BV, Maastricht, The Netherlands
| | - Anna Subiel
- National Physical Laboratory, Medical Radiation Science Hampton Road, Teddington, Middlesex, TW11 0LW, United Kingdom
| | - Rianne D W Vaes
- MAASTRO Clinic, Radiotherapy Division, GROW-School for Oncology and Reproduction, Maastricht University Medical Centre+, Maastricht, The Netherlands
| | | | - Ioannis L Verginadis
- Department of Radiation Oncology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
| | - Jan J Wilkens
- Department of Radiation Oncology, Technical University of Munich (TUM), School of Medicine and Klinikum rechts der Isar, Germany
- Physics Department, Technical University of Munich (TUM), Germany
| | - Kaye J Williams
- Division of Pharmacy and Optometry, University of Manchester, Manchester, United Kingdom
| | - George D Wilson
- Department of Radiation Oncology, Beaumont Health, MI, United States of America
- Henry Ford Health, Detroit, MI, United States of America
| | - Ludwig J Dubois
- The M-Lab, Department of Precision Medicine, GROW-School for Oncology and Reproduction, Maastricht University, Maastricht, The Netherlands
29
Kushwaha A, Mourad RF, Heist K, Tariq H, Chan HP, Ross BD, Chenevert TL, Malyarenko D, Hadjiiski LM. Improved Repeatability of Mouse Tibia Volume Segmentation in Murine Myelofibrosis Model Using Deep Learning. Tomography 2023; 9:589-602. [PMID: 36961007 PMCID: PMC10037585 DOI: 10.3390/tomography9020048] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2023] [Revised: 03/02/2023] [Accepted: 03/03/2023] [Indexed: 03/09/2023] Open
Abstract
A murine model of myelofibrosis in the tibia was used in a co-clinical trial to evaluate segmentation methods for application of image-based biomarkers to assess disease status. The dataset (32 mice with 157 3D MRI scans, including 49 test-retest pairs scanned on consecutive days) was split into approximately 70% training, 10% validation, and 20% test subsets. Two expert annotators (EA1 and EA2) performed manual segmentations of the mouse tibia (EA1: all data; EA2: test and validation). Attention U-net (A-U-net) model performance was assessed for accuracy with respect to the EA1 reference using the average Jaccard index (AJI), volume intersection ratio (AVI), volume error (AVE), and Hausdorff distance (AHD) for four training scenarios: full training, two half-splits, and a single-mouse subset. The repeatability of computer versus expert segmentations of tibia volume in test-retest pairs was assessed by the within-subject coefficient of variation (%wCV). A-U-net models trained on the full and half-split training sets achieved similar average accuracy (with respect to EA1 annotations) on the test set: AJI = 83-84%, AVI = 89-90%, AVE = 2-3%, and AHD = 0.5-0.7 mm, exceeding EA2 accuracy: AJI = 81%, AVI = 83%, AVE = 14%, and AHD = 0.3 mm. The A-U-net model repeatability, wCV [95% CI] = 3 [2, 5]%, was notably better than that of the expert annotators, EA1: 5 [4, 9]% and EA2: 8 [6, 13]%. The developed deep learning model effectively automates murine bone marrow segmentation with accuracy comparable to human annotators and substantially improved repeatability.
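Two of the quantities used above, written out explicitly: the Jaccard index between a predicted and reference mask, and a standard paired-measurement estimator of the within-subject coefficient of variation (%wCV) for test-retest volumes. The wCV estimator shown is the usual paired form and is an assumption about, not a copy of, the authors' code.

```python
# Jaccard index and test-retest %wCV for volume measurements.
import numpy as np

def jaccard(pred: np.ndarray, ref: np.ndarray) -> float:
    pred, ref = pred.astype(bool), ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    union = np.logical_or(pred, ref).sum()
    return inter / union if union else 1.0

def percent_wcv(test: np.ndarray, retest: np.ndarray) -> float:
    # test/retest: paired volume measurements from scans on consecutive days
    pair_mean = (test + retest) / 2.0
    within_var = (test - retest) ** 2 / 2.0          # per-pair within-subject variance
    return 100.0 * np.sqrt(np.mean(within_var / pair_mean ** 2))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    v1 = rng.normal(30.0, 3.0, size=49)              # e.g. tibia volumes (toy values)
    v2 = v1 * rng.normal(1.0, 0.03, size=49)         # retest with ~3% variation
    print(f"%wCV = {percent_wcv(v1, v2):.1f}")
```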
30
Rolfe SM, Whikehart SM, Maga AM. Deep learning enabled multi-organ segmentation of mouse embryos. Biol Open 2023; 12:bio059698. [PMID: 36802342 PMCID: PMC9990908 DOI: 10.1242/bio.059698] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2023] [Accepted: 01/13/2023] [Indexed: 02/23/2023] Open
Abstract
The International Mouse Phenotyping Consortium (IMPC) has generated a large repository of three-dimensional (3D) imaging data from mouse embryos, providing a rich resource for investigating phenotype/genotype interactions. While the data are freely available, the computing resources and human effort required to segment these images for analysis of individual structures can create a significant hurdle for research. In this paper, we present an open-source, deep learning-enabled tool, Mouse Embryo Multi-Organ Segmentation (MEMOS), that estimates a segmentation of 50 anatomical structures with support for manually reviewing, editing, and analyzing the estimated segmentation in a single application. MEMOS is implemented as an extension of the 3D Slicer platform and is designed to be accessible to researchers without coding experience. We validate the performance of MEMOS-generated segmentations through comparison to state-of-the-art atlas-based segmentation and quantification of previously reported anatomical abnormalities in a Cbx4 knockout strain. This article has an associated First Person interview with the first author of the paper.
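Analyses of this kind compare per-structure volumes across strains; given any multi-label segmentation with integer labels, structure volumes can be tabulated as sketched below. The label IDs and placeholder names are assumptions for illustration, not the MEMOS atlas labels.

```python
# Per-structure volume tabulation from an integer labelmap.
import numpy as np

def structure_volumes(labelmap: np.ndarray, voxel_volume_mm3: float) -> dict:
    ids, counts = np.unique(labelmap[labelmap > 0], return_counts=True)
    return {int(i): float(c) * voxel_volume_mm3 for i, c in zip(ids, counts)}

if __name__ == "__main__":
    seg = np.zeros((40, 40, 40), dtype=np.int32)
    seg[5:15, 5:15, 5:15] = 1          # placeholder structure 1
    seg[20:30, 20:25, 20:30] = 2       # placeholder structure 2
    print(structure_volumes(seg, voxel_volume_mm3=0.027))
```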
Affiliation(s)
- S. M. Rolfe
- Center for Developmental Biology and Regenerative Medicine, Seattle Children's Research Institute, Seattle, WA 98101, USA
| | - S. M. Whikehart
- Center for Developmental Biology and Regenerative Medicine, Seattle Children's Research Institute, Seattle, WA 98101, USA
| | - A. M. Maga
- Center for Developmental Biology and Regenerative Medicine, Seattle Children's Research Institute, Seattle, WA 98101, USA
- Department of Pediatrics, University of Washington, Seattle, WA 98105, USA
31
Bilic P, Christ P, Li HB, Vorontsov E, Ben-Cohen A, Kaissis G, Szeskin A, Jacobs C, Mamani GEH, Chartrand G, Lohöfer F, Holch JW, Sommer W, Hofmann F, Hostettler A, Lev-Cohain N, Drozdzal M, Amitai MM, Vivanti R, Sosna J, Ezhov I, Sekuboyina A, Navarro F, Kofler F, Paetzold JC, Shit S, Hu X, Lipková J, Rempfler M, Piraud M, Kirschke J, Wiestler B, Zhang Z, Hülsemeyer C, Beetz M, Ettlinger F, Antonelli M, Bae W, Bellver M, Bi L, Chen H, Chlebus G, Dam EB, Dou Q, Fu CW, Georgescu B, Giró-I-Nieto X, Gruen F, Han X, Heng PA, Hesser J, Moltz JH, Igel C, Isensee F, Jäger P, Jia F, Kaluva KC, Khened M, Kim I, Kim JH, Kim S, Kohl S, Konopczynski T, Kori A, Krishnamurthi G, Li F, Li H, Li J, Li X, Lowengrub J, Ma J, Maier-Hein K, Maninis KK, Meine H, Merhof D, Pai A, Perslev M, Petersen J, Pont-Tuset J, Qi J, Qi X, Rippel O, Roth K, Sarasua I, Schenk A, Shen Z, Torres J, Wachinger C, Wang C, Weninger L, Wu J, Xu D, Yang X, Yu SCH, Yuan Y, Yue M, Zhang L, Cardoso J, Bakas S, Braren R, Heinemann V, Pal C, Tang A, Kadoury S, Soler L, van Ginneken B, Greenspan H, Joskowicz L, Menze B. The Liver Tumor Segmentation Benchmark (LiTS). Med Image Anal 2023; 84:102680. [PMID: 36481607 PMCID: PMC10631490 DOI: 10.1016/j.media.2022.102680] [Citation(s) in RCA: 189] [Impact Index Per Article: 94.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2021] [Revised: 09/27/2022] [Accepted: 10/29/2022] [Indexed: 11/18/2022]
Abstract
In this work, we report the set-up and results of the Liver Tumor Segmentation Benchmark (LiTS), which was organized in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI) 2017 and the International Conferences on Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2017 and 2018. The image dataset is diverse and contains primary and secondary tumors with varied sizes and appearances and various lesion-to-background contrast levels (hyper-/hypo-dense), created in collaboration with seven hospitals and research institutions. Seventy-five submitted liver and liver tumor segmentation algorithms were trained on a set of 131 computed tomography (CT) volumes and were tested on 70 unseen test images acquired from different patients. We found that no single algorithm performed best for both liver and liver tumors in the three events. The best liver segmentation algorithm achieved a Dice score of 0.963, whereas, for tumor segmentation, the best algorithms achieved Dice scores of 0.674 (ISBI 2017), 0.702 (MICCAI 2017), and 0.739 (MICCAI 2018). Retrospectively, we performed additional analysis on liver tumor detection and revealed that not all top-performing segmentation algorithms worked well for tumor detection. The best liver tumor detection method achieved a lesion-wise recall of 0.458 (ISBI 2017), 0.515 (MICCAI 2017), and 0.554 (MICCAI 2018), indicating the need for further research. LiTS remains an active benchmark and resource for research, e.g., contributing the liver-related segmentation tasks to http://medicaldecathlon.com/. In addition, both data and online evaluation are accessible via https://competitions.codalab.org/competitions/17094.
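Lesion-wise recall of the kind reported above can be computed by treating connected components of the reference tumor mask as individual lesions and counting a lesion as detected when the prediction overlaps it sufficiently. The 50% overlap criterion in the sketch is an assumed illustrative threshold, which may differ from the benchmark's exact rule.

```python
# Lesion-wise detection recall via connected components of the reference mask.
import numpy as np
from scipy import ndimage

def lesion_recall(pred: np.ndarray, ref: np.ndarray, min_overlap: float = 0.5) -> float:
    labels, n_lesions = ndimage.label(ref > 0)       # one label per reference lesion
    if n_lesions == 0:
        return 1.0
    detected = 0
    for lesion_id in range(1, n_lesions + 1):
        lesion = labels == lesion_id
        overlap = np.logical_and(pred > 0, lesion).sum() / lesion.sum()
        detected += overlap > min_overlap
    return detected / n_lesions

if __name__ == "__main__":
    ref = np.zeros((64, 64, 64), dtype=np.uint8)
    ref[10:15, 10:15, 10:15] = 1                     # lesion A
    ref[40:44, 40:44, 40:44] = 1                     # lesion B
    pred = np.zeros_like(ref)
    pred[10:15, 10:15, 10:15] = 1                    # only lesion A found
    print(lesion_recall(pred, ref))                  # 0.5
```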
Affiliation(s)
- Patrick Bilic
- Department of Informatics, Technical University of Munich, Germany
| | - Patrick Christ
- Department of Informatics, Technical University of Munich, Germany
| | - Hongwei Bran Li
- Department of Informatics, Technical University of Munich, Germany; Department of Quantitative Biomedicine, University of Zurich, Switzerland.
| | | | - Avi Ben-Cohen
- Department of Biomedical Engineering, Tel-Aviv University, Israel
| | - Georgios Kaissis
- Institute for AI in Medicine, Technical University of Munich, Germany; Institute for diagnostic and interventional radiology, Klinikum rechts der Isar, Technical University of Munich, Germany; Department of Computing, Imperial College London, London, United Kingdom
| | - Adi Szeskin
- School of Computer Science and Engineering, the Hebrew University of Jerusalem, Israel
| | - Colin Jacobs
- Department of Medical Imaging, Radboud University Medical Center, Nijmegen, The Netherlands
| | | | - Gabriel Chartrand
- The University of Montréal Hospital Research Centre (CRCHUM) Montréal, Québec, Canada
| | - Fabian Lohöfer
- Institute for diagnostic and interventional radiology, Klinikum rechts der Isar, Technical University of Munich, Germany
| | - Julian Walter Holch
- Department of Medicine III, University Hospital, LMU Munich, Munich, Germany; Comprehensive Cancer Center Munich, Munich, Germany; Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Wieland Sommer
- Department of Radiology, University Hospital, LMU Munich, Germany
| | - Felix Hofmann
- Department of General, Visceral and Transplantation Surgery, University Hospital, LMU Munich, Germany; Department of Radiology, University Hospital, LMU Munich, Germany
| | - Alexandre Hostettler
- Department of Surgical Data Science, Institut de Recherche contre les Cancers de l'Appareil Digestif (IRCAD), France
| | - Naama Lev-Cohain
- Department of Radiology, Hadassah University Medical Center, Jerusalem, Israel
| | | | | | | | - Jacob Sosna
- Department of Radiology, Hadassah University Medical Center, Jerusalem, Israel
| | - Ivan Ezhov
- Department of Informatics, Technical University of Munich, Germany
| | - Anjany Sekuboyina
- Department of Informatics, Technical University of Munich, Germany; Department of Quantitative Biomedicine, University of Zurich, Switzerland
| | - Fernando Navarro
- Department of Informatics, Technical University of Munich, Germany; Department of Radiation Oncology and Radiotherapy, Klinikum rechts der Isar, Technical University of Munich, Germany; TranslaTUM - Central Institute for Translational Cancer Research, Technical University of Munich, Germany
| | - Florian Kofler
- Department of Informatics, Technical University of Munich, Germany; Institute for diagnostic and interventional neuroradiology, Klinikum rechts der Isar,Technical University of Munich, Germany; Helmholtz AI, Helmholtz Zentrum München, Neuherberg, Germany; TranslaTUM - Central Institute for Translational Cancer Research, Technical University of Munich, Germany
| | - Johannes C Paetzold
- Department of Computing, Imperial College London, London, United Kingdom; Institute for Tissue Engineering and Regenerative Medicine, Helmholtz Zentrum München, Neuherberg, Germany
| | - Suprosanna Shit
- Department of Informatics, Technical University of Munich, Germany
| | - Xiaobin Hu
- Department of Informatics, Technical University of Munich, Germany
| | - Jana Lipková
- Brigham and Women's Hospital, Harvard Medical School, USA
| | - Markus Rempfler
- Department of Informatics, Technical University of Munich, Germany
| | - Marie Piraud
- Department of Informatics, Technical University of Munich, Germany; Helmholtz AI, Helmholtz Zentrum München, Neuherberg, Germany
| | - Jan Kirschke
- Institute for diagnostic and interventional neuroradiology, Klinikum rechts der Isar,Technical University of Munich, Germany
| | - Benedikt Wiestler
- Institute for diagnostic and interventional neuroradiology, Klinikum rechts der Isar,Technical University of Munich, Germany
| | - Zhiheng Zhang
- Department of Hepatobiliary Surgery, the Affiliated Drum Tower Hospital of Nanjing University Medical School, China
| | | | - Marcel Beetz
- Department of Informatics, Technical University of Munich, Germany
| | | | - Michela Antonelli
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
| | | | | | - Lei Bi
- School of Computer Science, the University of Sydney, Australia
| | - Hao Chen
- Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, China
| | - Grzegorz Chlebus
- Fraunhofer MEVIS, Bremen, Germany; Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
| | - Erik B Dam
- Department of Computer Science, University of Copenhagen, Denmark
| | - Qi Dou
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
| | - Chi-Wing Fu
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
| | | | - Xavier Giró-I-Nieto
- Signal Theory and Communications Department, Universitat Politecnica de Catalunya, Catalonia, Spain
| | - Felix Gruen
- Institute of Control Engineering, Technische Universität Braunschweig, Germany
| | - Xu Han
- Department of computer science, UNC Chapel Hill, USA
| | - Pheng-Ann Heng
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
| | - Jürgen Hesser
- Mannheim Institute for Intelligent Systems in Medicine, department of Medicine Mannheim, Heidelberg University, Germany; Interdisciplinary Center for Scientific Computing (IWR), Heidelberg University, Germany; Central Institute for Computer Engineering (ZITI), Heidelberg University, Germany
| | | | - Christian Igel
- Department of Computer Science, University of Copenhagen, Denmark
| | - Fabian Isensee
- Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany; Helmholtz Imaging, Germany
| | - Paul Jäger
- Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany; Helmholtz Imaging, Germany
| | - Fucang Jia
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, China
| | - Krishna Chaitanya Kaluva
- Medical Imaging and Reconstruction Lab, Department of Engineering Design, Indian Institute of Technology Madras, India
| | - Mahendra Khened
- Medical Imaging and Reconstruction Lab, Department of Engineering Design, Indian Institute of Technology Madras, India
| | | | - Jae-Hun Kim
- Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine, South Korea
| | | | - Simon Kohl
- Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Tomasz Konopczynski
- Interdisciplinary Center for Scientific Computing (IWR), Heidelberg University, Germany
| | - Avinash Kori
- Medical Imaging and Reconstruction Lab, Department of Engineering Design, Indian Institute of Technology Madras, India
| | - Ganapathy Krishnamurthi
- Medical Imaging and Reconstruction Lab, Department of Engineering Design, Indian Institute of Technology Madras, India
| | - Fan Li
- Sensetime, Shanghai, China
| | - Hongchao Li
- Department of Computer Science, Guangdong University of Foreign Studies, China
| | - Junbo Li
- Philips Research China, Philips China Innovation Campus, Shanghai, China
| | - Xiaomeng Li
- Department of Electrical and Electronic Engineering, The University of Hong Kong, China
| | - John Lowengrub
- Departments of Mathematics, Biomedical Engineering, University of California, Irvine, USA; Center for Complex Biological Systems, University of California, Irvine, USA; Chao Family Comprehensive Cancer Center, University of California, Irvine, USA
| | - Jun Ma
- Department of Mathematics, Nanjing University of Science and Technology, China
| | - Klaus Maier-Hein
- Pattern Analysis and Learning Group, Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany; Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany; Helmholtz Imaging, Germany
| | | | - Hans Meine
- Fraunhofer MEVIS, Bremen, Germany; Medical Image Computing Group, FB3, University of Bremen, Germany
| | - Dorit Merhof
- Institute of Imaging & Computer Vision, RWTH Aachen University, Germany
| | - Akshay Pai
- Department of Computer Science, University of Copenhagen, Denmark
| | - Mathias Perslev
- Department of Computer Science, University of Copenhagen, Denmark
| | - Jens Petersen
- Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Jordi Pont-Tuset
- Eidgenössische Technische Hochschule Zurich (ETHZ), Zurich, Switzerland
| | - Jin Qi
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, China
| | - Xiaojuan Qi
- Department of Electrical and Electronic Engineering, The University of Hong Kong, China
| | - Oliver Rippel
- Institute of Imaging & Computer Vision, RWTH Aachen University, Germany
| | | | - Ignacio Sarasua
- Institute for diagnostic and interventional radiology, Klinikum rechts der Isar, Technical University of Munich, Germany; Department of Child and Adolescent Psychiatry, Ludwig-Maximilians-Universität, Munich, Germany
| | - Andrea Schenk
- Fraunhofer MEVIS, Bremen, Germany; Institute for Diagnostic and Interventional Radiology, Hannover Medical School, Hannover, Germany
| | - Zengming Shen
- Beckman Institute, University of Illinois at Urbana-Champaign, USA; Siemens Healthineers, USA
| | - Jordi Torres
- Barcelona Supercomputing Center, Barcelona, Spain; Universitat Politecnica de Catalunya, Catalonia, Spain
| | - Christian Wachinger
- Department of Informatics, Technical University of Munich, Germany; Institute for diagnostic and interventional radiology, Klinikum rechts der Isar, Technical University of Munich, Germany; Department of Child and Adolescent Psychiatry, Ludwig-Maximilians-Universität, Munich, Germany
| | - Chunliang Wang
- Department of Biomedical Engineering and Health Systems, KTH Royal Institute of Technology, Sweden
| | - Leon Weninger
- Institute of Imaging & Computer Vision, RWTH Aachen University, Germany
| | - Jianrong Wu
- Tencent Healthcare (Shenzhen) Co., Ltd, China
| | | | - Xiaoping Yang
- Department of Mathematics, Nanjing University, China
| | - Simon Chun-Ho Yu
- Department of Imaging and Interventional Radiology, Chinese University of Hong Kong, Hong Kong, China
| | - Yading Yuan
- Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, NY, USA
| | - Miao Yue
- CGG Services (Singapore) Pte. Ltd., Singapore
| | - Liping Zhang
- Department of Imaging and Interventional Radiology, Chinese University of Hong Kong, Hong Kong, China
| | - Jorge Cardoso
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
| | - Spyridon Bakas
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, PA, USA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, USA; Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, PA, USA
| | - Rickmer Braren
- German Cancer Consortium (DKTK), Germany; Institute for diagnostic and interventional radiology, Klinikum rechts der Isar, Technical University of Munich, Germany; Comprehensive Cancer Center Munich, Munich, Germany
| | - Volker Heinemann
- Department of Hematology/Oncology & Comprehensive Cancer Center Munich, LMU Klinikum Munich, Germany
| | | | - An Tang
- Department of Radiology, Radiation Oncology and Nuclear Medicine, University of Montréal, Canada
| | | | - Luc Soler
- Department of Surgical Data Science, Institut de Recherche contre les Cancers de l'Appareil Digestif (IRCAD), France
| | - Bram van Ginneken
- Department of Medical Imaging, Radboud University Medical Center, Nijmegen, The Netherlands
| | - Hayit Greenspan
- Department of Biomedical Engineering, Tel-Aviv University, Israel
| | - Leo Joskowicz
- School of Computer Science and Engineering, the Hebrew University of Jerusalem, Israel
| | - Bjoern Menze
- Department of Informatics, Technical University of Munich, Germany; Department of Quantitative Biomedicine, University of Zurich, Switzerland
32
Ferl GZ, Barck KH, Patil J, Jemaa S, Malamut EJ, Lima A, Long JE, Cheng JH, Junttila MR, Carano RA. Automated segmentation of lungs and lung tumors in mouse micro-CT scans. iScience 2022; 25:105712. [PMID: 36582483 PMCID: PMC9792881 DOI: 10.1016/j.isci.2022.105712] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2022] [Revised: 10/28/2022] [Accepted: 11/29/2022] [Indexed: 12/12/2022] Open
Abstract
Here, we have developed an automated image processing algorithm for segmenting lungs and individual lung tumors in in vivo micro-computed tomography (micro-CT) scans of mouse models of non-small cell lung cancer and lung fibrosis. Over 3000 scans acquired across multiple studies were used to train/validate a 3D U-net lung segmentation model and a Support Vector Machine (SVM) classifier to segment individual lung tumors. The U-net lung segmentation algorithm can be used to estimate changes in soft tissue volume within lungs (primarily tumors and blood vessels), whereas the trained SVM is able to discriminate between tumors and blood vessels and identify individual tumors. The trained segmentation algorithms (1) significantly reduce time required for lung and tumor segmentation, (2) reduce bias and error associated with manual image segmentation, and (3) facilitate identification of individual lung tumors and objective assessment of changes in lung and individual tumor volumes under different experimental conditions.
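The second stage described above (discriminating tumors from blood vessels among candidate soft-tissue objects) can be sketched with a standard SVM pipeline. The three features used here (volume, elongation, mean intensity) are illustrative guesses, not the paper's actual feature set, and the toy data are synthetic.

```python
# Toy tumor-versus-vessel classification of candidate objects with an SVM.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Candidate-object features: [volume_mm3, elongation, mean_HU] (synthetic)
vessels = np.column_stack([rng.normal(2, 0.5, 100), rng.normal(6, 1, 100), rng.normal(-300, 40, 100)])
tumors = np.column_stack([rng.normal(8, 2, 100), rng.normal(1.5, 0.3, 100), rng.normal(-100, 40, 100)])
X = np.vstack([vessels, tumors])
y = np.array([0] * 100 + [1] * 100)                  # 0 = vessel, 1 = tumor

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, y)
candidate = np.array([[7.5, 1.4, -120.0]])           # compact, relatively dense object
print("tumor" if clf.predict(candidate)[0] == 1 else "vessel")
```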
Affiliation(s)
- Gregory Z. Ferl
- Preclinical & Translational PKPD, Genentech, South San Francisco, CA 94080, USA,Department of Translational Imaging, Genentech, South San Francisco, CA 94080, USA,Corresponding author
| | - Kai H. Barck
- Department of Translational Imaging, Genentech, South San Francisco, CA 94080, USA,Corresponding author
| | - Jasmine Patil
- Genetic Science Group, Thermo Fisher Scientific, South San Francisco, CA 94080, USA
| | - Skander Jemaa
- Data, Analytics and Imaging, Product Development, Genentech, South San Francisco, CA 94080, USA
| | - Evelyn J. Malamut
- Preclinical & Translational PKPD, Genentech, South San Francisco, CA 94080, USA
| | - Anthony Lima
- Department of Translational Oncology, Genentech, South San Francisco, CA 94080, USA
| | - Jason E. Long
- ORIC Pharmaceuticals, South San Francisco, CA 94080, USA
| | - Jason H. Cheng
- Department of Translational Oncology, Genentech, South San Francisco, CA 94080, USA
| | | | - Richard A.D. Carano
- Data, Analytics and Imaging, Product Development, Genentech, South San Francisco, CA 94080, USA
33
Vincenzi E, Fantazzini A, Basso C, Barla A, Odone F, Leo L, Mecozzi L, Mambrini M, Ferrini E, Sverzellati N, Stellari FF. A fully automated deep learning pipeline for micro-CT-imaging-based densitometry of lung fibrosis murine models. Respir Res 2022; 23:308. [DOI: 10.1186/s12931-022-02236-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2022] [Accepted: 10/15/2022] [Indexed: 11/13/2022] Open
Abstract
Idiopathic pulmonary fibrosis, the archetype of pulmonary fibrosis (PF), is a chronic lung disease with a poor prognosis, characterized by progressive worsening of lung function. Although histology is still the gold standard for PF assessment in preclinical practice, histological data typically involve less than 1% of total lung volume and are not amenable to longitudinal studies. A miniaturized version of computed tomography (µCT) has been introduced to radiologically examine the lung in preclinical murine models of PF. The linear relationship between X-ray attenuation and tissue density allows lung densitometry over the total lung volume. However, the huge density changes caused by PF usually require manual segmentation by trained operators, limiting µCT deployment in preclinical routine. Deep learning approaches have achieved state-of-the-art performance in medical image segmentation. In this work, we propose a fully automated deep learning approach to segment the right and left lungs in µCT images and subsequently derive lung densitometry. Our pipeline first employs a convolutional network (CNN) for pre-processing at low resolution and then a 2.5D CNN for higher-resolution segmentation, combining the computational advantage of 2D with the ability to address 3D spatial coherence without compromising accuracy. Finally, the lungs are divided into compartments based on air content assessed by density. We validated this pipeline on 72 mice with different grades of PF, achieving a Dice score of 0.967 on the test set. Our tests demonstrate that this automated tool allows for rapid and comprehensive analysis of µCT scans of PF murine models, thus laying the ground for its wider exploitation in preclinical settings.
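The final densitometry step (dividing the segmented lungs into compartments by air content) can be sketched as Hounsfield-unit binning inside the lung mask. The cut-offs below are commonly used aeration thresholds and are assumptions for illustration; the authors' exact bins may differ.

```python
# Lung densitometry: bin voxels inside the lung mask by HU-based aeration.
import numpy as np

BINS = {
    "hyperinflated": (-1000, -900),
    "normo-aerated": (-900, -500),
    "hypo-aerated": (-500, -100),
    "non-aerated": (-100, 100),
}

def lung_densitometry(ct_hu: np.ndarray, lung_mask: np.ndarray, voxel_volume_mm3: float) -> dict:
    values = ct_hu[lung_mask.astype(bool)]
    report = {}
    for name, (lo, hi) in BINS.items():
        n = np.count_nonzero((values >= lo) & (values < hi))
        report[name] = {"volume_mm3": n * voxel_volume_mm3,
                        "fraction": n / values.size}
    return report

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ct = rng.normal(-650, 180, size=(32, 64, 64))     # toy lung HU values
    mask = np.ones_like(ct, dtype=bool)
    for name, v in lung_densitometry(ct, mask, voxel_volume_mm3=0.02).items():
        print(f"{name:>14}: {100 * v['fraction']:.1f}%")
```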
34
Salimi Y, Shiri I, Akhavanallaf A, Mansouri Z, Sanaat A, Pakbin M, Ghasemian M, Arabi H, Zaidi H. Deep Learning-based Calculation of Patient Size and Attenuation Surrogates from Localizer Image: Toward Personalized Chest CT Protocol Optimization. Eur J Radiol 2022; 157:110602. [DOI: 10.1016/j.ejrad.2022.110602] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2022] [Revised: 11/02/2022] [Accepted: 11/06/2022] [Indexed: 11/13/2022]
35. High-resolution micro-CT for 3D infarct characterization and segmentation in mice stroke models. Sci Rep 2022; 12:17471. [PMID: 36261475 PMCID: PMC9582034 DOI: 10.1038/s41598-022-21494-9]
Abstract
Characterization of brain infarct lesions in rodent models of stroke is crucial for assessing stroke pathophysiology and therapy outcome. Until recently, the analysis of brain lesions was performed using two techniques: (1) histological methods, such as TTC (triphenyltetrazolium chloride) staining, a time-consuming and inaccurate process; or (2) MRI, a faster, 3D imaging method that comes at a high cost. In the last decade, high-resolution micro-CT for 3D sample analysis has become a simple, fast, and cheaper solution. Here, we describe the application of brain contrast agents (osmium tetroxide and inorganic iodine) for high-resolution micro-CT imaging for precise localization and quantification of the ischemic lesion and edema in mouse preclinical stroke models. We used the intraluminal transient MCAO (middle cerebral artery occlusion) mouse stroke model to identify and quantify the ischemic lesion and edema, and to segment core and penumbra regions at different time points after ischemia, by manual and automatic methods. In the transient-ischemic-attack (TIA) mouse model, we can quantify the degeneration of striatal myelinated fibers. Of note, whole-brain 3D reconstructions allow brain atlas co-registration to identify the affected brain areas and correlate them with functional impairment. This methodology proves to be a breakthrough in the field by providing a precise and detailed assessment of stroke outcomes in preclinical animal studies.
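Once the lesion and the two hemispheres are segmented, lesion volume and hemispheric swelling (edema) follow from simple voxel counting. The sketch below uses a commonly applied swelling-based edema correction; it is an assumed illustration, not the exact formula of the cited protocol.

```python
import numpy as np

def infarct_metrics(lesion_mask, ipsi_mask, contra_mask, voxel_vol_mm3):
    """Lesion volume, hemispheric swelling and edema-corrected lesion volume
    from boolean masks defined on the same image grid."""
    v_lesion = lesion_mask.sum() * voxel_vol_mm3
    v_ipsi = ipsi_mask.sum() * voxel_vol_mm3
    v_contra = contra_mask.sum() * voxel_vol_mm3
    swelling = (v_ipsi - v_contra) / v_contra            # edema as relative swelling
    v_lesion_corrected = v_lesion * v_contra / v_ipsi    # compensate for swelling
    return v_lesion, swelling, v_lesion_corrected
```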
36. Liu Y, Gargesha M, Scott B, Tchilibou Wane AO, Wilson DL. Deep learning multi-organ segmentation for whole mouse cryo-images including a comparison of 2D and 3D deep networks. Sci Rep 2022; 12:15161. [PMID: 36071089 PMCID: PMC9452525 DOI: 10.1038/s41598-022-19037-3]
Abstract
Cryo-imaging provides 3D whole-mouse microscopic color anatomy and fluorescence images that enable biotechnology applications (e.g., stem cell and metastatic cancer studies). In this report, we compared three methods of organ segmentation: 2D U-Net with 2D slices and 3D U-Net with either 3D whole-mouse volumes or 3D patches. We evaluated the brain, thymus, lung, heart, liver, stomach, spleen, left and right kidney, and bladder. Trained on 63 mice, the 2D-slice approach had the best performance, with median Dice scores of >0.9 and median Hausdorff distances of <1.2 mm in eightfold cross-validation for all organs except the bladder, which is a problem organ due to variable filling and poor contrast. Results were comparable to those of a second analyst on the same data. Regression analyses were performed to fit learning curves, which showed that the 2D-slice approach can succeed with fewer samples. Review and editing of the 2D-slice segmentation results reduced human operator time from ~2 h to ~25 min, with reduced inter-observer variability. As demonstrations, we used organ segmentation to evaluate size changes in liver disease and to quantify the distribution of therapeutic mesenchymal stem cells in organs. With a 48-GB GPU, we determined that extra GPU RAM improved the performance of 3D deep learning because we could train at a higher resolution.
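The Dice and Hausdorff figures quoted above can be reproduced for any pair of predicted and reference masks with a few lines of NumPy/SciPy. The sketch below computes the Hausdorff distance over all mask voxels scaled by the voxel spacing, which is one common convention and not necessarily the authors' exact implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def dice(a, b):
    """Dice similarity coefficient between two boolean masks."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff_mm(a, b, spacing_mm):
    """Symmetric Hausdorff distance (mm) between the voxel sets of two masks."""
    pa = np.argwhere(a) * np.asarray(spacing_mm)
    pb = np.argwhere(b) * np.asarray(spacing_mm)
    d_ab = cKDTree(pb).query(pa)[0]     # nearest-neighbour distances A -> B
    d_ba = cKDTree(pa).query(pb)[0]     # nearest-neighbour distances B -> A
    return max(d_ab.max(), d_ba.max())
```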
Affiliation(s)
- Yiqiao Liu: Department of Biomedical Engineering, Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH, 44106, USA
- Bryan Scott: BioInVision Inc, Suite E 781 Beta Drive, Cleveland, OH, 44143, USA
- Arthure Olivia Tchilibou Wane: Department of Biomedical Engineering, Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH, 44106, USA
- David L Wilson: Department of Biomedical Engineering, Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH, 44106, USA; BioInVision Inc, Suite E 781 Beta Drive, Cleveland, OH, 44143, USA; Department of Radiology, Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH, 44106, USA
37. Wehrse E, Klein L, Rotkopf LT, Stiller W, Finke M, Echner G, Glowa C, Heinze S, Ziener CH, Schlemmer HP, Kachelrieß M, Sawall S. Ultrahigh resolution whole body photon counting computed tomography as a novel versatile tool for translational research from mouse to man. Z Med Phys 2022:S0939-3889(22)00066-6. [PMID: 35868888 DOI: 10.1016/j.zemedi.2022.06.002]
Abstract
X-ray computed tomography (CT) is a cardinal tool in clinical practice. It provides cross-sectional images within seconds. The recent introduction of clinical photon-counting CT allowed for an increase in spatial resolution by more than a factor of two, resulting in a pixel size in the center of rotation of about 150 µm. This level of spatial resolution is on the order of that of dedicated preclinical micro-CT systems. However, so far, the need for different dedicated clinical and preclinical systems often hinders the rapid translation of early research results to applications in humans. This drawback might be overcome by ultra-high resolution (UHR) clinical photon-counting CT, unifying preclinical and clinical research capabilities in a single machine. Herein, the prototype of a clinical UHR PCD CT (SOMATOM CounT, Siemens Healthineers, Forchheim, Germany) was used. The system comprises a conventional energy-integrating detector (EID) and a novel photon-counting detector (PCD). While the EID provides a pixel size of 0.6 mm in the center of rotation, the PCD provides a pixel size of 0.25 mm. Additionally, it provides a quantification of photon energies by sorting them into up to four distinct energy bins. This acquisition of multi-energy data allows for a multitude of applications, e.g. pseudo-monochromatic imaging. In particular, we examine the relation between spatial resolution, image noise, and administered radiation dose for a multitude of use cases. These cases include ultra-high-resolution and multi-energy acquisitions of mice administered a prototype bismuth-based contrast agent (nanoPET Pharma, Berlin, Germany), as well as larger animals and actual patients. The clinical EID provides a spatial resolution of about 9 lp/cm (modulation transfer function at 10%, MTF10%), while UHR allows for the acquisition of images with up to 16 lp/cm, allowing for the visualization of all relevant anatomical structures in preclinical and clinical specimens. The spectral capabilities of the system enable a variety of applications previously not available in preclinical research, such as pseudo-monochromatic imaging. Clinical ultra-high-resolution photon-counting CT has the potential to unify preclinical and clinical research on a single system, enabling versatile imaging of specimens and individuals ranging from mice to man.
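As a quick sanity check on the quoted resolution figures, the detector pixel size at the isocenter sets a Nyquist-type upper bound on the resolvable spatial frequency. The calculation below is a back-of-the-envelope illustration only; the reported MTF10% values additionally depend on the focal spot, reconstruction kernel, and other system factors.

```python
def nyquist_lp_per_cm(pixel_size_mm: float) -> float:
    """Nyquist-limited spatial frequency (line pairs per cm) for a given
    pixel size at the isocenter."""
    return 1.0 / (2.0 * pixel_size_mm / 10.0)

# ~20 lp/cm upper bound for the 0.25 mm PCD pixels; the measured 16 lp/cm
# at MTF10% sits below this bound, as expected.
print(nyquist_lp_per_cm(0.25))
```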
Affiliation(s)
- E Wehrse: Division of Radiology, German Cancer Research Center (DKFZ), Heidelberg, Germany; Medical Faculty, Ruprecht-Karls-University Heidelberg, Heidelberg, Germany
- L Klein: Department of Physics and Astronomy, Heidelberg University, Heidelberg, Germany; Division of X-ray Imaging and CT, German Cancer Research Center (DKFZ), Heidelberg, Germany
- L T Rotkopf: Division of Radiology, German Cancer Research Center (DKFZ), Heidelberg, Germany
- W Stiller: Diagnostic and Interventional Radiology (DIR), Heidelberg University Hospital, Heidelberg, Germany
- M Finke: Diagnostic and Interventional Radiology (DIR), Heidelberg University Hospital, Heidelberg, Germany
- G Echner: Division of Medical Physics in Radiation Oncology, German Cancer Research Center (DKFZ), Heidelberg, Germany; Heidelberg Institute for Radiation Oncology (HIRO), National Center for Radiation Research in Oncology (NCRO), Heidelberg, Germany
- C Glowa: Division of Medical Physics in Radiation Oncology, German Cancer Research Center (DKFZ), Heidelberg, Germany; Department of Radiation Oncology and Radiotherapy, University Hospital Heidelberg, Heidelberg, Germany; Heidelberg Institute for Radiation Oncology (HIRO), National Center for Radiation Research in Oncology (NCRO), Heidelberg, Germany
- S Heinze: Institute of Forensic and Traffic Medicine, University Hospital Heidelberg, Heidelberg, Germany
- C H Ziener: Division of Radiology, German Cancer Research Center (DKFZ), Heidelberg, Germany
- H-P Schlemmer: Division of Radiology, German Cancer Research Center (DKFZ), Heidelberg, Germany
- M Kachelrieß: Medical Faculty, Ruprecht-Karls-University Heidelberg, Heidelberg, Germany; Division of X-ray Imaging and CT, German Cancer Research Center (DKFZ), Heidelberg, Germany
- S Sawall: Medical Faculty, Ruprecht-Karls-University Heidelberg, Heidelberg, Germany; Division of X-ray Imaging and CT, German Cancer Research Center (DKFZ), Heidelberg, Germany
38. Synchrotron X-ray biosample imaging: opportunities and challenges. Biophys Rev 2022; 14:625-633. [DOI: 10.1007/s12551-022-00964-4]
39. Cong W, Li M, Guo X, Wang G. Estimating optical parameters of biological tissues with photon-counting micro-CT. J Opt Soc Am A Opt Image Sci Vis 2022; 39:841-846. [PMID: 36215445 PMCID: PMC9552592 DOI: 10.1364/josaa.451319]
Abstract
Wavelength-dependent absorption and scattering properties determine the fluorescence photon transport in biological tissues and image resolution of optical molecular tomography. Currently, these parameters are computed from optically measured data. For small animal imaging, estimation of optical parameters is a large-scale optimization problem, which is highly ill-posed. In this paper, we propose a new, to the best of our knowledge, approach to estimate optical parameters of biological tissues with photon-counting micro-computed tomography (micro-CT). From photon-counting x-ray data, multi-energy micro-CT images can be reconstructed to perform multi-organ segmentation and material decomposition in terms of tissue constituents. The concentration and characteristics of major tissue constituents can be utilized to calculate the optical absorption and scattering coefficients of the involved tissues. In our study, we perform numerical simulation, phantom experiments, and in vivo animal studies to calculate the optical parameters using our proposed approach. The results show that our approach can estimate optical parameters of tissues with a relative error of <10%, accurately mapping the optical parameter distributions in a small animal.
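The final step described above, mapping constituent concentrations to wavelength-dependent optical coefficients, can be approximated by a concentration-weighted mixing rule. The sketch below is a generic illustration with made-up spectra and volume fractions; the actual constituent properties and model used in the cited work are not reproduced here.

```python
import numpy as np

wavelengths_nm = np.array([650.0, 700.0, 750.0, 800.0])

# Placeholder absorption spectra per constituent (1/cm at unit volume fraction).
constituent_mu_a = {
    "water": np.array([0.0030, 0.0060, 0.0260, 0.0200]),
    "lipid": np.array([0.0010, 0.0010, 0.0012, 0.0011]),
}
# Volume fractions obtained from material decomposition of multi-energy micro-CT
volume_fraction = {"water": 0.75, "lipid": 0.20}          # assumed values

mu_a = sum(volume_fraction[k] * constituent_mu_a[k] for k in volume_fraction)
print(dict(zip(wavelengths_nm, mu_a)))   # effective absorption vs. wavelength
```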
40. Sforazzini F, Salome P, Moustafa M, Zhou C, Schwager C, Rein K, Bougatf N, Kudak A, Woodruff H, Dubois L, Lambin P, Debus J, Abdollahi A, Knoll M. Deep Learning-based Automatic Lung Segmentation on Multiresolution CT Scans from Healthy and Fibrotic Lungs in Mice. Radiol Artif Intell 2022; 4:e210095. [PMID: 35391764 PMCID: PMC8980878 DOI: 10.1148/ryai.210095]
Abstract
PURPOSE To develop a model to accurately segment mouse lungs with varying levels of fibrosis and investigate its applicability to mouse images with different resolutions. MATERIALS AND METHODS In this experimental retrospective study, a U-Net was trained to automatically segment lungs on mouse CT images. The model was trained (n = 1200), validated (n = 300), and tested (n = 154) on longitudinally acquired and semiautomatically segmented CT images, which included both healthy and irradiated mice (group A). A second independent group of 237 mice (group B) was used for external testing. The Dice score coefficient (DSC) and Hausdorff distance (HD) were used as metrics to quantify segmentation accuracy. Transfer learning was applied to adapt the model to high-spatial-resolution mouse micro-CT segmentation (n = 20; group C [n = 16 for training and n = 4 for testing]). RESULTS The trained model yielded a high median DSC in both test datasets: 0.984 (interquartile range [IQR], 0.977-0.988) in group A and 0.966 (IQR, 0.955-0.972) in group B. The median HD in both test datasets was 0.47 mm (IQR, 0-0.51 mm [group A]) and 0.31 mm (IQR, 0.30-0.32 mm [group B]). Spatially resolved quantification of differences toward reference masks revealed two hot spots close to the air-tissue interfaces, which are particularly prone to deviation. Finally, for the higher-resolution mouse CT images, the median DSC was 0.905 (IQR, 0.902-0.929) and the median 95th percentile of the HD was 0.33 mm (IQR, 2.61-2.78 mm). CONCLUSION The developed deep learning-based method for mouse lung segmentation performed well independently of disease state (healthy, fibrotic, emphysematous lungs) and CT resolution. Keywords: Deep Learning, Lung Fibrosis, Radiation Therapy, Segmentation, Animal Studies, CT, Thorax, Lung. Supplemental material is available for this article. Published under a CC BY 4.0 license.
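The transfer-learning step, adapting a lung model trained on standard-resolution CT to a handful of high-resolution micro-CT scans, typically amounts to reloading the pretrained weights and fine-tuning at a low learning rate, optionally with early layers frozen. The PyTorch sketch below uses a deliberately tiny stand-in network and synthetic tensors; the architecture, data, and hyperparameters are assumptions for illustration, not the authors' recipe.

```python
import torch
from torch import nn

class TinySegNet(nn.Module):
    """Minimal encoder/decoder stand-in for a lung segmentation CNN."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.ReLU())
        self.decoder = nn.Conv3d(8, 2, 1)          # 2 classes: background / lung
    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinySegNet()
# model.load_state_dict(torch.load("pretrained.pt"))  # hypothetical pretrained weights

for p in model.encoder.parameters():                  # freeze low-level features
    p.requires_grad = False

opt = torch.optim.Adam([p for p in model.parameters() if p.requires_grad], lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(2, 1, 16, 16, 16)                     # stand-in micro-CT patches
y = torch.randint(0, 2, (2, 16, 16, 16))              # stand-in label masks
for _ in range(5):                                    # short fine-tuning loop
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```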
41. Retson TA, Hasenstab KA, Kligerman SJ, Jacobs KE, Yen AC, Brouha SS, Hahn LD, Hsiao A. Reader Perceptions and Impact of AI on CT Assessment of Air Trapping. Radiol Artif Intell 2022; 4:e210160. [PMID: 35391767 DOI: 10.1148/ryai.2021210160]
Abstract
Quantitative imaging measurements can be facilitated by artificial intelligence (AI) algorithms, but how they might impact decision-making and be perceived by radiologists remains uncertain. After creation of a dedicated inspiratory-expiratory CT examination and concurrent deployment of a quantitative AI algorithm for assessing air trapping, five cardiothoracic radiologists retrospectively evaluated severity of air trapping on 17 examination studies. Air trapping severity of each lobe was evaluated in three stages: qualitatively (visually); semiquantitatively, allowing manual region-of-interest measurements; and quantitatively, using results from an AI algorithm. Readers were surveyed on each case for their perceptions of the AI algorithm. The algorithm improved interreader agreement (intraclass correlation coefficients: visual, 0.28; semiquantitative, 0.40; quantitative, 0.84; P < .001) and improved correlation with pulmonary function testing (forced expiratory volume in 1 second-to-forced vital capacity ratio) (visual r = -0.26, semiquantitative r = -0.32, quantitative r = -0.44). Readers perceived moderate agreement with the AI algorithm (Likert scale average, 3.7 of 5), a mild impact on their final assessment (average, 2.6), and a neutral perception of overall utility (average, 3.5). Though the AI algorithm objectively improved interreader consistency and correlation with pulmonary function testing, individual readers did not immediately perceive this benefit, revealing a potential barrier to clinical adoption. Keywords: Technology Assessment, Quantification © RSNA, 2021.
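Interreader agreement of the kind reported above is usually quantified with an intraclass correlation coefficient across readers. The function below implements one common single-rater, two-way random-effects formulation, ICC(2,1), and is shown only as an illustration on synthetic scores, not as the study's statistical code.

```python
import numpy as np

def icc2_1(scores):
    """ICC(2,1) (two-way random effects, single rater) for an
    n_subjects x n_raters matrix of scores."""
    x = np.asarray(scores, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ms_rows = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # subjects
    ms_cols = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # raters
    resid = x - x.mean(axis=1, keepdims=True) - x.mean(axis=0, keepdims=True) + grand
    ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Synthetic severity scores: 6 lobes x 5 readers
scores = [[1, 2, 1, 2, 1],
          [3, 3, 2, 3, 3],
          [4, 4, 5, 4, 4],
          [2, 1, 2, 2, 2],
          [0, 1, 0, 0, 1],
          [5, 5, 4, 5, 5]]
print(round(icc2_1(scores), 2))
```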
Affiliation(s)
- Tara A Retson, Seth J Kligerman, Kathleen E Jacobs, Andrew C Yen, Sharon S Brouha, Lewis D Hahn, Albert Hsiao: Department of Radiology, University of California, San Diego, 9452 Medical Center Dr, 4th Floor, La Jolla, CA 92037
- Kyle A Hasenstab: Department of Mathematics and Statistics, San Diego State University, San Diego, Calif
42. Emerging and future use of intra-surgical volumetric X-ray imaging and adjuvant tools for decision support in breast-conserving surgery. Curr Opin Biomed Eng 2022; 22. [DOI: 10.1016/j.cobme.2022.100382]
43. Virtual monoenergetic micro-CT imaging in mice with artificial intelligence. Sci Rep 2022; 12:2324. [PMID: 35149703 PMCID: PMC8837804 DOI: 10.1038/s41598-022-06172-0]
Abstract
Micro cone-beam computed tomography (µCBCT) imaging is of utmost importance for carrying out extensive preclinical research in rodents. The imaging of animals is an essential step prior to preclinical precision irradiation, but also in the longitudinal assessment of treatment outcomes. However, imaging artifacts such as beam hardening will occur due to the low energetic nature of the X-ray imaging beam (i.e., 60 kVp). Beam hardening artifacts are especially difficult to resolve in a ‘pancake’ imaging geometry with stationary source and detector, where the animal is rotated around its sagittal axis, and the X-ray imaging beam crosses a wide range of thicknesses. In this study, a seven-layer U-Net based network architecture (vMonoCT) is adopted to predict virtual monoenergetic X-ray projections from polyenergetic X-ray projections. A Monte Carlo simulation model is developed to compose a training dataset of 1890 projection pairs. Here, a series of digital anthropomorphic mouse phantoms was derived from the reference DigiMouse phantom as simulation geometry. vMonoCT was trained on 1512 projection pairs (= 80%) and tested on 378 projection pairs (= 20%). The percentage error calculated for the test dataset was 1.7 ± 0.4%. Additionally, the vMonoCT model was evaluated on a retrospective projection dataset of five mice and one frozen cadaver. It was found that beam hardening artifacts were minimized after image reconstruction of the vMonoCT-corrected projections, and that anatomically incorrect gradient errors were corrected in the cranium up to 15%. Our results disclose the potential of Artificial Intelligence to enhance the µCBCT image quality in biomedical applications. vMonoCT is expected to contribute to the reproducibility of quantitative preclinical applications such as precision irradiations in X-ray cabinets, and to the evaluation of longitudinal imaging data in extensive preclinical studies.
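The 1.7 ± 0.4% figure quoted above is a projection-domain error between the network output and its Monte Carlo reference. A generic mean absolute percentage error of this kind can be written as below; the exact error definition used in the cited study may differ.

```python
import numpy as np

def mean_percentage_error(predicted, reference, eps=1e-6):
    """Mean absolute percentage difference between a predicted virtual
    monoenergetic projection and its Monte Carlo reference projection."""
    predicted = np.asarray(predicted, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return 100.0 * np.mean(np.abs(predicted - reference) / (np.abs(reference) + eps))

# Toy example on random projection data
rng = np.random.default_rng(1)
ref = rng.uniform(0.1, 1.0, size=(128, 128))
pred = ref * (1.0 + rng.normal(0.0, 0.02, size=ref.shape))
print(f"{mean_percentage_error(pred, ref):.2f} %")
```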
44. Lappas G, Wolfs CJA, Staut N, Lieuwes NG, Biemans R, van Hoof SJ, Dubois LJ, Verhaegen F. Automatic contouring of normal tissues with deep learning for preclinical radiation studies. Phys Med Biol 2022; 67. [PMID: 35061600 DOI: 10.1088/1361-6560/ac4da3]
Abstract
Objective. Delineation of relevant normal tissues is a bottleneck in image-guided precision radiotherapy workflows for small animals. A deep learning (DL) model for automatic contouring using standardized 3D micro cone-beam CT (μCBCT) volumes as input is proposed, to provide a fully automatic, generalizable method for normal tissue contouring in preclinical studies. Approach. A 3D U-net was trained to contour organs in the head (whole brain, left/right brain hemisphere, left/right eye) and thorax (complete lungs, left/right lung, heart, spinal cord, thorax bone) regions. As an important preprocessing step, Hounsfield units (HUs) were converted to mass density (MD) values, to remove the energy dependency of the μCBCT scanner and improve generalizability of the DL model. Model performance was evaluated quantitatively by Dice similarity coefficient (DSC), mean surface distance (MSD), 95th percentile Hausdorff distance (HD95p), and center of mass displacement (ΔCoM). For qualitative assessment, DL-generated contours (for 40 and 80 kV images) were scored (0: unacceptable, manual re-contouring needed; 5: no adjustments needed). An uncertainty analysis using Monte Carlo dropout uncertainty was performed for delineation of the heart. Main results. The proposed DL model and accompanying preprocessing method provide high quality contours, with in general median DSC > 0.85, MSD < 0.25 mm, HD95p < 1 mm and ΔCoM < 0.5 mm. The qualitative assessment showed very few contours needed manual adaptations (40 kV: 20/155 contours, 80 kV: 3/155 contours). The uncertainty of the DL model is small (within 2%). Significance. A DL-based model dedicated to preclinical studies has been developed for multi-organ segmentation in two body sites. For the first time, a method independent of image acquisition parameters has been quantitatively evaluated, resulting in sub-millimeter performance, while qualitative assessment demonstrated the high quality of the DL-generated contours. The uncertainty analysis additionally showed that inherent model variability is low.
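The preprocessing step highlighted above, converting Hounsfield units to mass density to decouple the model from scanner energy settings, is in essence a piecewise-linear calibration. The sketch below uses generic air/lung/water/bone anchor points as assumptions; the cited work derives its own scanner-specific calibration curve.

```python
import numpy as np

# Generic HU -> mass density calibration anchors (assumed, not the authors' curve)
HU_POINTS = np.array([-1000.0, -700.0, 0.0, 1500.0])
MD_POINTS = np.array([0.001, 0.30, 1.00, 1.90])      # g/cm^3

def hu_to_mass_density(hu_volume):
    """Piecewise-linear conversion of a CT volume from HU to mass density.
    Values outside the calibration range are clamped to the end points."""
    return np.interp(hu_volume, HU_POINTS, MD_POINTS)

print(hu_to_mass_density(np.array([-1000.0, -500.0, 0.0, 1000.0])))
```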
Affiliation(s)
- Georgios Lappas: Department of Radiation Oncology (Maastro), GROW-School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, The Netherlands
- Cecile J A Wolfs: Department of Radiation Oncology (Maastro), GROW-School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, The Netherlands
- Nick Staut: Department of Radiation Oncology (Maastro), GROW-School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, The Netherlands; SmART Scientific Solutions BV, Maastricht, The Netherlands
- Natasja G Lieuwes: The M-Lab, Department of Precision Medicine, GROW-School for Oncology and Developmental Biology, Maastricht University, Maastricht, The Netherlands
- Rianne Biemans: The M-Lab, Department of Precision Medicine, GROW-School for Oncology and Developmental Biology, Maastricht University, Maastricht, The Netherlands
- Ludwig J Dubois: The M-Lab, Department of Precision Medicine, GROW-School for Oncology and Developmental Biology, Maastricht University, Maastricht, The Netherlands
- Frank Verhaegen: Department of Radiation Oncology (Maastro), GROW-School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, The Netherlands; SmART Scientific Solutions BV, Maastricht, The Netherlands
45. Malimban J, Lathouwers D, Qian H, Verhaegen F, Wiedemann J, Brandenburg S, Staring M. Deep learning-based segmentation of the thorax in mouse micro-CT scans. Sci Rep 2022; 12:1822. [PMID: 35110676 PMCID: PMC8810936 DOI: 10.1038/s41598-022-05868-7]
Abstract
For image-guided small animal irradiations, the whole workflow of imaging, organ contouring, irradiation planning, and delivery is typically performed in a single session requiring continuous administration of anaesthetic agents. Automating contouring leads to a faster workflow, which limits exposure to anaesthesia and thereby reduces its impact on experimental results and on animal wellbeing. Here, we trained the 2D and 3D U-Net architectures of no-new-Net (nnU-Net) for autocontouring of the thorax in mouse micro-CT images. We trained the models only on native CTs and evaluated their performance using an independent testing dataset (i.e., native CTs not included in the training and validation). Unlike previous studies, we also tested the model performance on an external dataset (i.e., contrast-enhanced CTs) to see how well the models predict on CTs completely different from what they were trained on. We also assessed the interobserver variability using the generalized conformity index ([Formula: see text]) among three observers, providing a stronger human baseline for evaluating automated contours than previous studies. Lastly, we showed the benefit in contouring time compared to manual contouring. The results show that the 3D models of nnU-Net achieve superior segmentation accuracy and are more robust to unseen data than the 2D models. For all target organs, the mean surface distance (MSD) and the Hausdorff distance (95p HD) of the best performing model for this task (nnU-Net 3d_fullres) are within 0.16 mm and 0.60 mm, respectively. These values are below the minimum required contouring accuracy of 1 mm for small animal irradiations and improve significantly upon the state-of-the-art 2D U-Net-based AIMOS method. Moreover, the conformity indices of the 3d_fullres model also compare favourably to the interobserver variability for all target organs, whereas the 2D models perform poorly in this regard. Importantly, the 3d_fullres model offers a 98% reduction in contouring time.
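The multi-observer agreement metric referred to above, the generalized conformity index, is commonly defined as the sum of pairwise intersections divided by the sum of pairwise unions over all observer pairs. The sketch below follows that common (Kouwenhoven-style) definition, which is assumed here rather than taken from the cited paper's code.

```python
import numpy as np
from itertools import combinations

def generalized_conformity_index(masks):
    """Generalized conformity index for a list of boolean masks, one per observer:
    sum of pairwise intersection volumes over sum of pairwise union volumes."""
    inter = sum(np.logical_and(a, b).sum() for a, b in combinations(masks, 2))
    union = sum(np.logical_or(a, b).sum() for a, b in combinations(masks, 2))
    return inter / union

# Three synthetic observer contours of the same organ
base = np.zeros((32, 32, 32), dtype=bool)
base[8:24, 8:24, 8:24] = True
obs2 = np.roll(base, 1, axis=0)
obs3 = np.roll(base, -1, axis=1)
print(round(generalized_conformity_index([base, obs2, obs3]), 3))
```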
Affiliation(s)
- Justin Malimban: Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, 9700 RB, Groningen, The Netherlands
- Danny Lathouwers: Department of Radiation Science and Technology, Faculty of Applied Sciences, Delft University of Technology, 2629 JB, Delft, The Netherlands
- Haibin Qian: Department of Medical Biology, Amsterdam University Medical Centers (Location AMC) and Cancer Center Amsterdam, 1105 AZ, Amsterdam, The Netherlands
- Frank Verhaegen: Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Center, 6229 ER, Maastricht, The Netherlands
- Julia Wiedemann: Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, 9700 RB, Groningen, The Netherlands; Department of Biomedical Sciences of Cells and Systems-Section Molecular Cell Biology, University Medical Center Groningen, University of Groningen, 9700 RB, Groningen, The Netherlands
- Sytze Brandenburg: Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, 9700 RB, Groningen, The Netherlands
- Marius Staring: Department of Radiology, Leiden University Medical Center, 2333 ZA, Leiden, The Netherlands
46. Lappas G, Staut N, Lieuwes NG, Biemans R, Wolfs CJ, van Hoof SJ, Dubois LJ, Verhaegen F. Inter-observer variability of organ contouring for preclinical studies with cone beam Computed Tomography imaging. Phys Imaging Radiat Oncol 2022; 21:11-17. [PMID: 35111981 PMCID: PMC8790504 DOI: 10.1016/j.phro.2022.01.002]
Abstract
Background and purpose: In preclinical radiation studies, there is great interest in quantifying the radiation response of healthy tissues. Manual contouring has a significant impact on treatment planning because of the variation introduced by human interpretation, which results in inconsistencies when assessing normal tissue volumes. Evaluation of these discrepancies can provide a better understanding of the limitations of the current preclinical radiation workflow. In the present work, the interobserver variability (IOV) in manual contouring of rodent normal tissues on cone-beam computed tomography in the head and thorax regions was evaluated. Materials and methods: Two animal technicians performed manual (assisted) contouring of normal tissues located within the thorax and head regions of rodents, 20 cases per body site. The mean surface distance (MSD), displacement of the center of mass (ΔCoM), Dice similarity coefficient (DSC), and 95th percentile Hausdorff distance (HD95) were calculated between the contours of the two observers to evaluate the IOV. Results: For the thorax organs, the right lung had the lowest IOV (ΔCoM: 0.08 ± 0.04 mm, DSC: 0.96 ± 0.01, MSD: 0.07 ± 0.01 mm, HD95: 0.20 ± 0.03 mm), while the spinal cord had the highest IOV (ΔCoM: 0.5 ± 0.3 mm, DSC: 0.81 ± 0.05, MSD: 0.14 ± 0.03 mm, HD95: 0.8 ± 0.2 mm). Regarding the head organs, the right eye demonstrated the lowest IOV (ΔCoM: 0.12 ± 0.08 mm, DSC: 0.93 ± 0.02, MSD: 0.15 ± 0.04 mm, HD95: 0.29 ± 0.07 mm), while the complete brain had the highest IOV (ΔCoM: 0.2 ± 0.1 mm, DSC: 0.94 ± 0.02, MSD: 0.3 ± 0.1 mm, HD95: 0.5 ± 0.1 mm). Conclusions: Our findings reveal small IOV, within the sub-mm range, for thorax and head normal tissues in rodents. The set of contours can serve as a basis for developing an automated delineation method for, e.g., treatment planning.
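Of the four IOV metrics above, the centre-of-mass displacement is the simplest to compute from two observers' masks. A minimal SciPy-based sketch, assuming both masks live on the same voxel grid with a known spacing:

```python
import numpy as np
from scipy.ndimage import center_of_mass

def com_displacement_mm(mask_a, mask_b, spacing_mm):
    """ΔCoM (mm) between two observers' contours of the same organ,
    given boolean masks on a common grid and the voxel spacing (z, y, x)."""
    ca = np.array(center_of_mass(mask_a.astype(float))) * np.asarray(spacing_mm)
    cb = np.array(center_of_mass(mask_b.astype(float))) * np.asarray(spacing_mm)
    return float(np.linalg.norm(ca - cb))

a = np.zeros((40, 40, 40), dtype=bool)
a[10:20, 10:20, 10:20] = True
b = np.roll(a, 2, axis=2)                      # second observer, shifted contour
print(com_displacement_mm(a, b, spacing_mm=(0.2, 0.2, 0.2)))   # -> 0.4 mm
```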
Affiliation(s)
- Georgios Lappas: Department of Radiation Oncology (MAASTRO), GROW – School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, the Netherlands
- Nick Staut: Department of Radiation Oncology (MAASTRO), GROW – School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, the Netherlands; The M-Lab, Department of Precision Medicine, GROW – School for Oncology and Developmental Biology, Maastricht University, Maastricht, the Netherlands
- Rianne Biemans: SmART Scientific Solutions BV, Maastricht, the Netherlands
- Cecile J.A. Wolfs: Department of Radiation Oncology (MAASTRO), GROW – School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, the Netherlands
- Stefan J. van Hoof: The M-Lab, Department of Precision Medicine, GROW – School for Oncology and Developmental Biology, Maastricht University, Maastricht, the Netherlands
- Frank Verhaegen (corresponding author): Department of Radiation Oncology (MAASTRO), GROW – School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, the Netherlands; The M-Lab, Department of Precision Medicine, GROW – School for Oncology and Developmental Biology, Maastricht University, Maastricht, the Netherlands
47. Richardson DS, Guan W, Matsumoto K, Pan C, Chung K, Ertürk A, Ueda HR, Lichtman JW. Tissue clearing. Nat Rev Methods Primers 2021; 1:84. [PMID: 35128463 PMCID: PMC8815095 DOI: 10.1038/s43586-021-00080-9]
Abstract
Tissue clearing of gross anatomical samples was first described over a century ago and has only recently found widespread use in the field of microscopy. This renaissance has been driven by the application of modern knowledge of optical physics and chemical engineering to the development of robust and reproducible clearing techniques, the arrival of new microscopes that can image large samples at cellular resolution and computing infrastructure able to store and analyze large data volumes. Many biological relationships between structure and function require investigation in three dimensions and tissue clearing therefore has the potential to enable broad discoveries in the biological sciences. Unfortunately, the current literature is complex and could confuse researchers looking to begin a clearing project. The goal of this Primer is to outline a modular approach to tissue clearing that allows a novice researcher to develop a customized clearing pipeline tailored to their tissue of interest. Further, the Primer outlines the required imaging and computational infrastructure needed to perform tissue clearing at scale, gives an overview of current applications, discusses limitations and provides an outlook on future advances in the field.
Affiliation(s)
- Douglas S. Richardson: Harvard Center for Biological Imaging, Harvard University, Cambridge, MA, USA; Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA, USA
- Webster Guan: Department of Chemical Engineering, MIT, Cambridge, MA, USA
- Katsuhiko Matsumoto: Department of Systems Pharmacology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan; Laboratory for Synthetic Biology, RIKEN Center for Biosystems Dynamics Research, Osaka, Japan
- Chenchen Pan: Institute for Stroke and Dementia Research, Klinikum der Universität München, Ludwig Maximilians University of Munich, Munich, Germany; Graduate School of Systemic Neurosciences (GSN), Munich, Germany; Munich Cluster for Systems Neurology (SyNergy), Munich, Germany
- Kwanghun Chung: Department of Systems Pharmacology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan; Picower Institute for Learning and Memory, MIT, Cambridge, MA, USA; Institute for Medical Engineering and Science, Massachusetts Institute of Technology, Cambridge, MA, USA; Department of Brain and Cognitive Sciences, MIT, Cambridge, MA, USA; Broad Institute of Harvard University and MIT, Cambridge, MA, USA; Center for Nanomedicine, Institute for Basic Science (IBS), Seoul, Republic of Korea; Nano Biomedical Engineering (Nano BME) Graduate Program, Yonsei-IBS Institute, Yonsei University, Seoul, Republic of Korea
- Ali Ertürk: Institute for Stroke and Dementia Research, Klinikum der Universität München, Ludwig Maximilians University of Munich, Munich, Germany; Graduate School of Systemic Neurosciences (GSN), Munich, Germany; Munich Cluster for Systems Neurology (SyNergy), Munich, Germany
- Hiroki R. Ueda: Department of Systems Pharmacology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan; Laboratory for Synthetic Biology, RIKEN Center for Biosystems Dynamics Research, Osaka, Japan
- Jeff W. Lichtman: Harvard Center for Biological Imaging, Harvard University, Cambridge, MA, USA; Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA, USA; Center for Brain Science, Harvard University, Cambridge, MA, USA
48. He B, Yin D, Chen X, Luo H, Xiao D, He M, Wang G, Fang C, Liu L, Jia F. A study of generalization and compatibility performance of 3D U-Net segmentation on multiple heterogeneous liver CT datasets. BMC Med Imaging 2021; 21:178. [PMID: 34819022 PMCID: PMC8611902 DOI: 10.1186/s12880-021-00708-y]
Abstract
BACKGROUND Most existing algorithms have focused on segmentation of several public liver CT datasets scanned regularly (no pneumoperitoneum, horizontal supine position). This study primarily segmented datasets with unconventional liver shapes and intensities induced by contrast phases, irregular scanning conditions, and different scanning subjects (pigs and patients with large pathological tumors), which together formed the multiple heterogeneity of the datasets used in this study. METHODS The multiple heterogeneous datasets used in this paper include: (1) one public contrast-enhanced CT dataset and one public non-contrast CT dataset; (2) a contrast-enhanced dataset with abnormal liver shape (very long left liver lobes) and large liver tumors with abnormal presentation due to microvascular invasion; (3) one artificial pneumoperitoneum dataset acquired under pneumoperitoneum in three scanning positions (horizontal/left/right recumbent); and (4) two porcine datasets (Bama and domestic type) that contain pneumoperitoneum cases but show large anatomical discrepancies with humans. The study aimed to investigate the segmentation performance of 3D U-Net in terms of: (1) generalization ability between multiple heterogeneous datasets, assessed by cross-testing experiments; and (2) compatibility when hybrid-training on all datasets under different sampling and encoder-layer-sharing schemes. We further investigated the compatibility of encoder levels by setting a separate level for each dataset (i.e., dataset-wise convolutions) while sharing the decoder. RESULTS Models trained on different datasets had different segmentation performance. The prediction accuracy between the LiTS dataset and the Zhujiang dataset was about 0.955 and 0.958, which shows their good mutual generalization ability, as both are contrast-enhanced clinical patient datasets scanned regularly. For the datasets scanned under pneumoperitoneum, the corresponding datasets scanned without pneumoperitoneum showed good generalization ability. A dataset-wise convolution module at high encoder levels can mitigate the dataset imbalance problem. These experimental results will help researchers design solutions when segmenting such special datasets. CONCLUSIONS (1) Regularly scanned datasets generalize well to irregularly scanned ones. (2) Hybrid training is beneficial, but the dataset imbalance problem always exists due to the multi-domain heterogeneity. The higher encoder levels encoded more domain-specific information than the lower levels and were thus less compatible across our datasets.
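The dataset-wise convolution idea, a private convolution at a chosen encoder level for each dataset while the rest of the network is shared, can be sketched in a few lines of PyTorch. The layer sizes and dataset names below are placeholders for illustration, not the architecture of the cited work.

```python
import torch
from torch import nn

class DatasetWiseSegNet(nn.Module):
    """Shared low-level encoder and shared decoder, with one dataset-specific
    convolution block at a high encoder level (dataset-wise convolution)."""
    def __init__(self, dataset_ids):
        super().__init__()
        self.shared_enc = nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.ReLU())
        self.dataset_enc = nn.ModuleDict({
            d: nn.Sequential(nn.Conv3d(8, 16, 3, padding=1), nn.ReLU())
            for d in dataset_ids})
        self.shared_dec = nn.Conv3d(16, 2, 1)       # shared decoder head
    def forward(self, x, dataset_id):
        h = self.shared_enc(x)
        h = self.dataset_enc[dataset_id](h)         # pick the dataset's branch
        return self.shared_dec(h)

net = DatasetWiseSegNet(["LiTS", "Zhujiang", "porcine"])
out = net(torch.randn(1, 1, 16, 16, 16), "porcine")
print(out.shape)   # torch.Size([1, 2, 16, 16, 16])
```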
Affiliation(s)
- Baochun He: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China
- Dalong Yin: Department of Hepatobiliary Surgery, The First Affiliated Hospital, Harbin Medical University, Harbin, China; Department of Hepatobiliary Surgery, The First Affiliated Hospital, University of Science and Technology of China, Hefei, China
- Xiaoxia Chen: Department of Radiology, The Third Medical Center, General Hospital of PLA, Beijing, China
- Huoling Luo: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China
- Deqiang Xiao: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China
- Mu He: First Hepatobiliary Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Guisheng Wang: Department of Radiology, The Third Medical Center, General Hospital of PLA, Beijing, China
- Chihua Fang: First Hepatobiliary Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Lianxin Liu: Department of Hepatobiliary Surgery, The First Affiliated Hospital, Harbin Medical University, Harbin, China; Department of Hepatobiliary Surgery, The First Affiliated Hospital, University of Science and Technology of China, Hefei, China
- Fucang Jia: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China; Pazhou Lab, Guangzhou, China
49. Almagro J, Messal HA, Zaw Thin M, van Rheenen J, Behrens A. Tissue clearing to examine tumour complexity in three dimensions. Nat Rev Cancer 2021; 21:718-730. [PMID: 34331034 DOI: 10.1038/s41568-021-00382-w]
Abstract
The visualization of whole organs and organisms through tissue clearing and fluorescence volumetric imaging has revolutionized the way we look at biological samples. Its application to solid tumours is changing our perception of tumour architecture, revealing signalling networks and cell interactions critical in tumour progression, and provides a powerful new strategy for cancer diagnostics. This Review introduces the latest advances in tissue clearing and three-dimensional imaging, examines the challenges in clearing epithelia - the tissue of origin of most malignancies - and discusses the insights that tissue clearing has brought to cancer research, as well as the prospective applications to experimental and clinical oncology.
Affiliation(s)
- Jorge Almagro: Adult Stem Cell Laboratory, The Francis Crick Institute, London, UK
- Hendrik A Messal: Department of Molecular Pathology, Oncode Institute, Netherlands Cancer Institute, Amsterdam, The Netherlands
- May Zaw Thin: Cancer Stem Cell Laboratory, Institute of Cancer Research, London, UK
- Jacco van Rheenen: Department of Molecular Pathology, Oncode Institute, Netherlands Cancer Institute, Amsterdam, The Netherlands
- Axel Behrens: Adult Stem Cell Laboratory, The Francis Crick Institute, London, UK; Cancer Stem Cell Laboratory, Institute of Cancer Research, London, UK; Convergence Science Centre and Division of Cancer, Department of Surgery and Cancer, Imperial College London, London, UK
50. Park J, Choi B, Ko J, Chun J, Park I, Lee J, Kim J, Kim J, Eom K, Kim JS. Deep-Learning-Based Automatic Segmentation of Head and Neck Organs for Radiation Therapy in Dogs. Front Vet Sci 2021; 8:721612. [PMID: 34552975 PMCID: PMC8450455 DOI: 10.3389/fvets.2021.721612]
Abstract
Purpose: This study was conducted to develop a deep learning-based automatic segmentation (DLBAS) model of head and neck organs for radiotherapy (RT) in dogs, and to evaluate its feasibility for delineation in RT planning. Materials and Methods: The segmentation targeted 15 potential organs at risk (OARs) in the head and neck of dogs. Post-contrast computed tomography (CT) was performed in 90 dogs. The training and validation sets comprised 80 CT data sets, including 20 test sets. The accuracy of the segmentation was assessed using both the Dice similarity coefficient (DSC) and the Hausdorff distance (HD), referencing the expert contours as the ground truth. An additional 10 clinical test sets with relatively large displacement or deformation of organs were selected for verification in cancer patients. To evaluate the applicability in cancer patients and the impact of expert intervention, three methods were compared: HA, DLBAS, and the readjustment of the data predicted via DLBAS on the clinical test sets (HA_DLBAS). Results: The DLBAS model (in the 20 test sets) showed reliable DSC and HD values; it also had a short contouring time of ~3 s. The average (mean ± standard deviation) DSC (0.83 ± 0.04) and HD (2.71 ± 1.01 mm) values were similar to those of previous human studies. The DLBAS was highly accurate in cases without large displacement of head and neck organs. However, in the 10 clinical test sets the DLBAS showed lower DSC (0.78 ± 0.11) and higher HD (4.30 ± 3.69 mm) values than in the test sets. The HA_DLBAS was comparable to both HA (DSC: 0.85 ± 0.06 and HD: 2.74 ± 1.18 mm) and DLBAS, and presented better comparison metrics with smaller statistical deviations (DSC: 0.94 ± 0.03 and HD: 2.30 ± 0.41 mm). In addition, the contouring time of HA_DLBAS (30 min) was less than that of HA (80 min). Conclusion: The HA_DLBAS method and the proposed DLBAS were highly consistent and robust in their performance. Thus, DLBAS has great potential as a standalone or supportive tool for the key processes in RT planning.
Affiliation(s)
- Jeongsu Park: Department of Veterinary Medical Imaging, College of Veterinary Medicine, Konkuk University, Seoul, South Korea
- Byoungsu Choi: Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, South Korea
- Jaeeun Ko: Department of Veterinary Medical Imaging, College of Veterinary Medicine, Konkuk University, Seoul, South Korea
- Jaehee Chun: Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, South Korea
- Inkyung Park: Department of Integrative Medicine, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, South Korea
- Juyoung Lee: Department of Integrative Medicine, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, South Korea
- Jayon Kim: Department of Veterinary Medical Imaging, College of Veterinary Medicine, Konkuk University, Seoul, South Korea
- Jaehwan Kim: Department of Veterinary Medical Imaging, College of Veterinary Medicine, Konkuk University, Seoul, South Korea
- Kidong Eom: Department of Veterinary Medical Imaging, College of Veterinary Medicine, Konkuk University, Seoul, South Korea
- Jin Sung Kim: Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, South Korea
|