1
Liu J, Zheng Y, Lin L, Guo J, Lv Y, Yuan J, Zhai H, Chen X, Shen L, Li L, Bai S, Han H. A robust transformer-based pipeline of 3D cell alignment, denoise and instance segmentation on electron microscopy sequence images. J Plant Physiol 2024; 297:154236. PMID: 38621330. DOI: 10.1016/j.jplph.2024.154236. [Received: 03/01/2024] [Revised: 03/15/2024] [Accepted: 03/20/2024] [Indexed: 04/17/2024]
Abstract
Germline cells are critical for transmitting genetic information to subsequent generations in biological organisms. While their differentiation from somatic cells during embryonic development is well documented in most animals, the regulatory mechanisms that initiate plant germline cells are not well understood. Thoroughly investigating the complex morphological transformations of their ultrastructure over developmental time requires nanoscale 3D reconstruction of entire plant tissues, achievable exclusively through electron microscopy imaging. This paper presents a full-process framework for reconstructing large-volume plant tissue from serial electron microscopy images. The framework ensures end-to-end direct output of reconstruction results, including topological networks and morphological analysis. The proposed 3D cell alignment, denoise, and instance segmentation pipeline (3DCADS) leverages deep learning to provide a cell instance segmentation workflow for electron microscopy image series, ensuring accurate and robust 3D cell reconstructions with high computational efficiency. The pipeline involves five stages: registration of serial electron microscopy images; image enhancement and denoising; semantic segmentation using a Transformer-based neural network; instance segmentation through a supervoxel-based clustering algorithm; and automated analysis and statistical assessment of the reconstruction results, with mapping of topological connections. The precision of 3DCADS was validated on a plant-tissue ground-truth dataset, where it outperformed both traditional and deep learning baselines in overall accuracy. The framework was applied to the reconstruction of early meiosis stages in the anthers of Arabidopsis thaliana, yielding a topological connectivity network and an analysis of morphological parameters and characteristics of cell distribution. The experiment underscores the potential of 3DCADS for biological tissue identification and its significance for quantitative analysis of plant cell development, which is crucial for examining samples across different genetic phenotypes and mutations in plant development. Additionally, the paper discusses the regulatory mechanisms of Arabidopsis thaliana's germline cells and the development of stamen cells before meiosis, offering new insights into the transition from somatic to germline cell fate in plants.
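The supervoxel-based clustering stage can be illustrated with a minimal sketch (not the authors' implementation): supervoxels become nodes of an adjacency graph, and adjacent pairs whose affinity exceeds a threshold are merged into cell instances with union-find. The affinity values below are hypothetical stand-ins for scores a semantic segmentation stage might produce.

```python
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra

def cluster_supervoxels(n_supervoxels, edges, threshold=0.5):
    """Merge supervoxels into cell instances.

    edges: list of (supervoxel_a, supervoxel_b, affinity) for adjacent
    supervoxel pairs; affinity in [0, 1], high = likely the same cell.
    Returns one instance id per supervoxel, numbered consecutively.
    """
    uf = UnionFind(n_supervoxels)
    # Process strongest affinities first so confident merges happen early.
    for a, b, affinity in sorted(edges, key=lambda e: -e[2]):
        if affinity >= threshold:
            uf.union(a, b)
    roots = {}
    return [roots.setdefault(uf.find(sv), len(roots))
            for sv in range(n_supervoxels)]

# Toy adjacency graph: supervoxels 0-1-2 belong to one cell, 3-4 to another.
edges = [(0, 1, 0.9), (1, 2, 0.8), (2, 3, 0.2), (3, 4, 0.95)]
print(cluster_supervoxels(5, edges))  # [0, 0, 0, 1, 1]
```

The weak 0.2 affinity between supervoxels 2 and 3 keeps the two cells separate; everything else merges.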
Affiliation(s)
- Jiazheng Liu
- School of Future Technology, University of Chinese Academy of Sciences, Beijing 101408, China; Key Laboratory of Brain Cognition and Brain-inspired Intelligence Technology, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; Team of Microscale Reconstruction and Intelligent Analysis, Laboratory of Brain-AI, Institute of Automation, Chinese Academy of Sciences, Beijing 101499, China
- Yafeng Zheng
- College of Life Sciences, Peking University, Beijing 100871, China; State Key Laboratory of Protein and Plant Gene Research, Beijing 100871, China
- Limei Lin
- Key Laboratory of Brain Cognition and Brain-inspired Intelligence Technology, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; Team of Microscale Reconstruction and Intelligent Analysis, Laboratory of Brain-AI, Institute of Automation, Chinese Academy of Sciences, Beijing 101499, China
- Jingyue Guo
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 101408, China; Key Laboratory of Brain Cognition and Brain-inspired Intelligence Technology, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; Team of Microscale Reconstruction and Intelligent Analysis, Laboratory of Brain-AI, Institute of Automation, Chinese Academy of Sciences, Beijing 101499, China
- Yanan Lv
- Key Laboratory of Brain Cognition and Brain-inspired Intelligence Technology, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; Team of Microscale Reconstruction and Intelligent Analysis, Laboratory of Brain-AI, Institute of Automation, Chinese Academy of Sciences, Beijing 101499, China
- Jingbin Yuan
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 101408, China; Key Laboratory of Brain Cognition and Brain-inspired Intelligence Technology, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; Team of Microscale Reconstruction and Intelligent Analysis, Laboratory of Brain-AI, Institute of Automation, Chinese Academy of Sciences, Beijing 101499, China
- Hao Zhai
- School of Future Technology, University of Chinese Academy of Sciences, Beijing 101408, China; Key Laboratory of Brain Cognition and Brain-inspired Intelligence Technology, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; Team of Microscale Reconstruction and Intelligent Analysis, Laboratory of Brain-AI, Institute of Automation, Chinese Academy of Sciences, Beijing 101499, China
- Xi Chen
- Key Laboratory of Brain Cognition and Brain-inspired Intelligence Technology, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; Team of Microscale Reconstruction and Intelligent Analysis, Laboratory of Brain-AI, Institute of Automation, Chinese Academy of Sciences, Beijing 101499, China
- Lijun Shen
- Key Laboratory of Brain Cognition and Brain-inspired Intelligence Technology, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; Team of Microscale Reconstruction and Intelligent Analysis, Laboratory of Brain-AI, Institute of Automation, Chinese Academy of Sciences, Beijing 101499, China
- LinLin Li
- Key Laboratory of Brain Cognition and Brain-inspired Intelligence Technology, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; Team of Microscale Reconstruction and Intelligent Analysis, Laboratory of Brain-AI, Institute of Automation, Chinese Academy of Sciences, Beijing 101499, China.
- Shunong Bai
- College of Life Sciences, Peking University, Beijing 100871, China; State Key Laboratory of Protein and Plant Gene Research, Beijing 100871, China.
- Hua Han
- School of Future Technology, University of Chinese Academy of Sciences, Beijing 101408, China; Key Laboratory of Brain Cognition and Brain-inspired Intelligence Technology, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; Team of Microscale Reconstruction and Intelligent Analysis, Laboratory of Brain-AI, Institute of Automation, Chinese Academy of Sciences, Beijing 101499, China.
2
Schmidt M, Motta A, Sievers M, Helmstaedter M. RoboEM: automated 3D flight tracing for synaptic-resolution connectomics. Nat Methods 2024; 21:908-913. PMID: 38514779. PMCID: PMC11093750. DOI: 10.1038/s41592-024-02226-5. [Received: 09/08/2022] [Accepted: 02/26/2024] [Indexed: 03/23/2024]
Abstract
Mapping neuronal networks from three-dimensional electron microscopy (3D-EM) data still poses substantial reconstruction challenges, in particular for thin axons. Currently available automated image segmentation methods require manual proofreading for many types of connectomic analysis. Here we introduce RoboEM, an artificial intelligence-based self-steering 3D 'flight' system trained to navigate along neurites using only 3D-EM data as input. Applied to 3D-EM data from mouse and human cortex, RoboEM substantially improves automated state-of-the-art segmentations and can replace manual proofreading for more complex connectomic analysis problems, yielding computational annotation cost for cortical connectomes about 400-fold lower than the cost of manual error correction.
Affiliation(s)
- Martin Schmidt
- Department of Connectomics, Max Planck Institute for Brain Research, Frankfurt, Germany.
- Alessandro Motta
- Department of Connectomics, Max Planck Institute for Brain Research, Frankfurt, Germany
- Meike Sievers
- Department of Connectomics, Max Planck Institute for Brain Research, Frankfurt, Germany
- Faculty of Science, Radboud University, Nijmegen, the Netherlands
- Moritz Helmstaedter
- Department of Connectomics, Max Planck Institute for Brain Research, Frankfurt, Germany.
3
Chen Y, Chazalon J, Carlinet E, Ôn Vũ Ngoc M, Mallet C, Perret J. Automatic vectorization of historical maps: A benchmark. PLoS One 2024; 19:e0298217. PMID: 38359045. PMCID: PMC10868791. DOI: 10.1371/journal.pone.0298217. [Received: 07/18/2023] [Accepted: 01/21/2024] [Indexed: 02/17/2024]
Abstract
Shape vectorization is a key stage of the digitization of large-scale historical maps, especially city maps that exhibit complex and valuable details. Having access to digitized buildings, building blocks, street networks and other geographic content opens numerous new approaches for historical studies such as change tracking, morphological analysis and density estimation. In the context of the digitization of Paris atlases created in the 19th and early 20th centuries, we have designed a supervised pipeline that reliably extracts closed shapes from historical maps. This pipeline is based on a supervised edge-filtering stage using deep filters and a closed-shape extraction stage using a watershed transform. It likely relies on multiple suboptimal methodological choices that hamper vectorization performance in terms of accuracy and completeness. This paper comprehensively and objectively investigates which solutions are the most adequate among the numerous possibilities. The following contributions are introduced: (i) we propose an improved training protocol for map digitization; (ii) we introduce a joint optimization of the edge detection and shape extraction stages; (iii) we compare the performance of state-of-the-art deep edge filters with topology-preserving loss functions, including vision transformers; (iv) we evaluate the end-to-end deep learnable watershed against the Meyer watershed. We subsequently design the critical path for a fully automatic extraction of key elements of historical maps. All the data, code, and benchmark results are freely available at https://github.com/soduco/Benchmark_historical_map_vectorization.
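The closed-shape extraction stage rests on the watershed transform. The following toy sketch implements a minimal Meyer-style priority-flood watershed over a small edge-probability map; the map, seeds, and function names are illustrative, not taken from the benchmark code.

```python
import heapq

def watershed(elevation, markers):
    """Minimal priority-flood (Meyer-style) watershed.

    elevation: 2D list of edge probabilities (high = likely boundary).
    markers:   2D list, 0 = unlabeled, >0 = seed label for a shape.
    Each basin grows from its seed, claiming low-elevation (non-edge)
    pixels first, so basins meet at the high-probability edges.
    """
    h, w = len(elevation), len(elevation[0])
    labels = [row[:] for row in markers]
    heap, order = [], 0
    for y in range(h):
        for x in range(w):
            if markers[y][x]:
                heapq.heappush(heap, (elevation[y][x], order, y, x))
                order += 1
    while heap:
        _, _, y, x = heapq.heappop(heap)
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and labels[ny][nx] == 0:
                labels[ny][nx] = labels[y][x]  # claimed by the nearer basin
                heapq.heappush(heap, (elevation[ny][nx], order, ny, nx))
                order += 1
    return labels

# A vertical "edge" of high probability splits the map into two shapes.
elevation = [
    [0.0, 0.1, 0.9, 0.1, 0.0],
    [0.1, 0.0, 0.9, 0.0, 0.1],
    [0.0, 0.1, 0.9, 0.1, 0.0],
]
markers = [
    [1, 0, 0, 0, 2],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
]
result = watershed(elevation, markers)
print(result[1])  # pixels left of the edge -> 1, right of it -> 2
```

In the benchmarked pipeline the elevation map would come from a learned edge filter rather than being hand-written.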
Affiliation(s)
- Yizi Chen
- EPITA Research Lab. (LRE), Kremlin-Bicêtre, France
- LASTIG, Univ Gustave Eiffel, IGN, ENSG, Saint-Mande, France
- IKG, ETHZ, Zürich, Switzerland
- Clément Mallet
- LASTIG, Univ Gustave Eiffel, IGN, ENSG, Saint-Mande, France
- Julien Perret
- LASTIG, Univ Gustave Eiffel, IGN, ENSG, Saint-Mande, France
- LaDéHiS, CRH, EHESS, Paris, France
4
Chang GH, Wu MY, Yen LH, Huang DY, Lin YH, Luo YR, Liu YD, Xu B, Leong KW, Lai WS, Chiang AS, Wang KC, Lin CH, Wang SL, Chu LA. Isotropic multi-scale neuronal reconstruction from high-ratio expansion microscopy with contrastive unsupervised deep generative models. Comput Methods Programs Biomed 2024; 244:107991. PMID: 38185040. DOI: 10.1016/j.cmpb.2023.107991. [Received: 10/02/2023] [Revised: 12/10/2023] [Accepted: 12/19/2023] [Indexed: 01/09/2024]
Abstract
BACKGROUND AND OBJECTIVE: Current methods for image reconstruction from high-ratio expansion microscopy (ExM) data are limited by anisotropic optical resolution and the requirement for extensive manual annotation, creating a significant bottleneck in the analysis of complex neuronal structures. METHODS: We devised an approach called the IsoGAN model, which uses a contrastive unsupervised generative adversarial network to sidestep these constraints. The model leverages multi-scale and isotropic neuron/protein/blood-vessel morphology data to generate high-fidelity 3D representations of these structures, eliminating the need for rigorous manual annotation and supervision. IsoGAN introduces simplified structures with idealized morphologies as shape priors to ensure high consistency of the generated neuronal profiles across all points in space and scalability to arbitrarily large volumes. RESULTS: The IsoGAN model accurately reconstructed complex neuronal structures, as evidenced by the consistency between axial and lateral views and a reduction in erroneous imaging artifacts, and can be applied to a variety of biological samples. CONCLUSION: With its ability to generate detailed 3D neuron/protein/blood-vessel structures from significantly fewer axial-view images, IsoGAN can streamline image reconstruction while preserving the necessary detail, addressing existing limitations in high-throughput morphology analysis across different structures.
Affiliation(s)
- Gary Han Chang
- Institute of Medical Device and Imaging, College of Medicine, National Taiwan University, Taipei, Taiwan, ROC; Graduate School of Advanced Technology, National Taiwan University, Taipei, Taiwan, ROC.
- Meng-Yun Wu
- Institute of Medical Device and Imaging, College of Medicine, National Taiwan University, Taipei, Taiwan, ROC
- Ling-Hui Yen
- Department of Biomedical Engineering and Environmental Sciences, National Tsing Hua University, Hsinchu, Taiwan, ROC; Brain Research Center, National Tsing Hua University, Hsinchu, Taiwan, ROC
- Da-Yu Huang
- Institute of Medical Device and Imaging, College of Medicine, National Taiwan University, Taipei, Taiwan, ROC
- Ya-Hui Lin
- Department of Biomedical Engineering and Environmental Sciences, National Tsing Hua University, Hsinchu, Taiwan, ROC; Brain Research Center, National Tsing Hua University, Hsinchu, Taiwan, ROC
- Yi-Ru Luo
- Department of Biomedical Engineering and Environmental Sciences, National Tsing Hua University, Hsinchu, Taiwan, ROC; Brain Research Center, National Tsing Hua University, Hsinchu, Taiwan, ROC
- Ya-Ding Liu
- Department of Biomedical Engineering and Environmental Sciences, National Tsing Hua University, Hsinchu, Taiwan, ROC; Brain Research Center, National Tsing Hua University, Hsinchu, Taiwan, ROC
- Bin Xu
- Department of Psychiatry, Columbia University, New York, NY 10032, USA
- Kam W Leong
- Department of Biomedical Engineering, Columbia University, New York, NY 10032, USA
- Wen-Sung Lai
- Department of Psychology, National Taiwan University, Taipei, Taiwan, ROC
- Ann-Shyn Chiang
- Brain Research Center, National Tsing Hua University, Hsinchu, Taiwan, ROC; Institute of System Neuroscience, National Tsing Hua University, Hsinchu, Taiwan, ROC
- Kuo-Chuan Wang
- Department of Neurosurgery, National Taiwan University Hospital, Taipei, Taiwan, ROC
- Chin-Hsien Lin
- Department of Neurosurgery, National Taiwan University Hospital, Taipei, Taiwan, ROC
- Shih-Luen Wang
- Department of Physics and Center for Interdisciplinary Research on Complex Systems, Northeastern University, Boston, MA 02115, USA
- Li-An Chu
- Department of Biomedical Engineering and Environmental Sciences, National Tsing Hua University, Hsinchu, Taiwan, ROC; Brain Research Center, National Tsing Hua University, Hsinchu, Taiwan, ROC.
5
Kato S, Hotta K. Expanded tube attention for tubular structure segmentation. Int J Comput Assist Radiol Surg 2023. PMID: 38112883. DOI: 10.1007/s11548-023-03038-2. [Received: 03/17/2023] [Accepted: 11/13/2023] [Indexed: 12/21/2023]
Abstract
PURPOSE: Semantic segmentation of tubular structures, such as blood vessels and cell membranes, is a very difficult task: predicted regions often break apart in the middle. The problem arises because tubular ground truth is very thin, so the number of foreground pixels is extremely unbalanced relative to the background. METHODS: We present a novel training method that uses pseudo-labels generated by morphological transformation, together with an attention module built on thickened pseudo-labels, called the expanded tube attention (ETA) module. With the ETA module, the network first learns the thickened regions from the pseudo-labels and then gradually learns the thin original regions, transferring information from the thickened regions as an attention map. RESULTS: In experiments on retinal vessel image datasets with various evaluation measures, we confirmed that the proposed method with ETA modules improved clDice accuracy compared with conventional methods. CONCLUSIONS: We demonstrated that the proposed expanded tube attention module with thickened pseudo-labels enables easy-to-hard learning.
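The pseudo-label generation step, thickening thin tubular ground truth by morphological transformation, can be sketched in plain Python (an illustrative toy, not the authors' code):

```python
def dilate(mask, iterations=1):
    """Binary dilation with a 3x3 (8-connected) structuring element.

    Thickens a thin tubular label into an easier pseudo-label; training
    can then proceed easy-to-hard, from the thick pseudo-label back to
    the original thin label.
    """
    h, w = len(mask), len(mask[0])
    for _ in range(iterations):
        out = [[0] * w for _ in range(h)]
        for y in range(h):
            for x in range(w):
                # A pixel turns on if any 8-neighbor (or itself) is on.
                if any(mask[ny][nx]
                       for ny in range(max(0, y - 1), min(h, y + 2))
                       for nx in range(max(0, x - 1), min(w, x + 2))):
                    out[y][x] = 1
        mask = out
    return mask

# A one-pixel-wide "vessel" becomes a three-pixel-wide pseudo-label.
thin = [[0, 0, 0, 0],
        [1, 1, 1, 1],
        [0, 0, 0, 0]]
thick = dilate(thin)
print(thick)  # [[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]]
```

In practice the dilation radius (or iteration count) would be scheduled down over training epochs, which is the easy-to-hard idea the abstract describes.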
Affiliation(s)
- Sota Kato
- Department of Electrical, Information, Materials and Materials Engineering, Meijo University, Tempaku-ku, Nagoya, Aichi, 468-8502, Japan.
- Kazuhiro Hotta
- Department of Electrical and Electronic Engineering, Meijo University, Tempaku-ku, Nagoya, Aichi, Japan
6
Franco-Barranco D, Lin Z, Jang WD, Wang X, Shen Q, Yin W, Fan Y, Li M, Chen C, Xiong Z, Xin R, Liu H, Chen H, Li Z, Zhao J, Chen X, Pape C, Conrad R, Nightingale L, de Folter J, Jones ML, Liu Y, Ziaei D, Huschauer S, Arganda-Carreras I, Pfister H, Wei D. Current Progress and Challenges in Large-Scale 3D Mitochondria Instance Segmentation. IEEE Trans Med Imaging 2023; 42:3956-3971. PMID: 37768797. PMCID: PMC10753957. DOI: 10.1109/tmi.2023.3320497. [Indexed: 09/30/2023]
Abstract
In this paper, we present the results of the MitoEM challenge on 3D mitochondria instance segmentation from electron microscopy images, organized in conjunction with the IEEE-ISBI 2021 conference. Our benchmark dataset consists of two large-scale 3D volumes, one from human and one from rat cortex tissue, which are 1,986 times larger than previously used datasets. At the time of paper submission, 257 participants had registered for the challenge, 14 teams had submitted results, and six teams participated in the challenge workshop. Here, we present the eight top-performing approaches from the challenge participants, along with our own baseline strategies. After the challenge, annotation errors in the ground truth were corrected without altering the final ranking. Additionally, we present a retrospective evaluation of the scoring system, which revealed that: 1) the challenge metric was permissive toward false positive predictions; and 2) size-based grouping of instances did not correctly categorize mitochondria of interest. We therefore propose a new scoring system that better reflects the correctness of the segmentation results. Although several of the top methods compare favorably to our own baselines, substantial errors remain unsolved for mitochondria with challenging morphologies. The challenge therefore remains open for submission and automatic evaluation, with all volumes available for download.
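Scoring debates like the ones above hinge on how predicted and ground-truth instances are matched. A common scheme, shown here as an illustrative sketch rather than the challenge's exact metric, matches instances greedily by intersection-over-union at a 0.5 threshold and counts true/false positives and false negatives:

```python
def iou(a, b):
    """Intersection-over-union of two instances given as voxel-id sets."""
    return len(a & b) / len(a | b)

def match_instances(preds, gts, thresh=0.5):
    """Greedy one-to-one matching of predicted vs ground-truth instances.

    preds, gts: dicts mapping instance id -> set of voxel ids.
    Returns (true positives, false positives, false negatives).
    """
    # Consider all pairs, best overlaps first.
    pairs = sorted(((iou(p, g), pid, gid)
                    for pid, p in preds.items()
                    for gid, g in gts.items()),
                   reverse=True)
    used_p, used_g = set(), set()
    tp = 0
    for score, pid, gid in pairs:
        if score >= thresh and pid not in used_p and gid not in used_g:
            used_p.add(pid)
            used_g.add(gid)
            tp += 1
    return tp, len(preds) - tp, len(gts) - tp

preds = {0: {1, 2, 3, 4}, 1: {10, 11}}    # second prediction is spurious
gts = {0: {1, 2, 3, 5}, 1: {20, 21, 22}}  # second mitochondrion is missed
tp, fp, fn = match_instances(preds, gts)
print(tp, fp, fn)  # 1 1 1
```

Raising or lowering `thresh` is exactly the kind of knob the retrospective evaluation examines: a permissive threshold inflates true-positive counts for sloppy predictions.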
Affiliation(s)
- Daniel Franco-Barranco
- Department of Computer Science and Artificial Intelligence, University of the Basque Country (UPV/EHU), 20018 San Sebastián, Spain, and also with the Donostia International Physics Center (DIPC), 20018 San Sebastián, Spain
- Zudi Lin
- Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), Harvard University, Allston, MA 02134 USA
- Won-Dong Jang
- Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), Harvard University, Allston, MA 02134 USA
- Xueying Wang
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA 02138 USA
- Qijia Shen
- Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, OX3 9DU Oxford, U.K
- Wenjie Yin
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA 02138 USA
- Yutian Fan
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA 02138 USA
- Mingxing Li
- Department of Electronic Engineering and Information Science (EEIS), University of Science and Technology of China, Anhui 230026, China
- Chang Chen
- Department of Electronic Engineering and Information Science (EEIS), University of Science and Technology of China, Anhui 230026, China
- Zhiwei Xiong
- Department of Electronic Engineering and Information Science (EEIS), University of Science and Technology of China, Anhui 230026, China
- Rui Xin
- Institute of Image Processing and Pattern Recognition, Department of Automation, Shanghai Jiao Tong University, Shanghai 200240, China
- Hao Liu
- Institute of Image Processing and Pattern Recognition, Department of Automation, Shanghai Jiao Tong University, Shanghai 200240, China
- Huai Chen
- Institute of Image Processing and Pattern Recognition, Department of Automation, Shanghai Jiao Tong University, Shanghai 200240, China
- Zhili Li
- National Engineering Laboratory for Brain-Inspired Intelligence Technology and Application, University of Science and Technology of China, Anhui 230026, China
- Jie Zhao
- National Engineering Laboratory for Brain-Inspired Intelligence Technology and Application, University of Science and Technology of China, Anhui 230026, China
- Xuejin Chen
- National Engineering Laboratory for Brain-Inspired Intelligence Technology and Application, University of Science and Technology of China, Anhui 230026, China
- Constantin Pape
- European Molecular Biology Laboratory (EMBL), 69117 Heidelberg, Germany. He is now with the Institute for Computer Science, Georg-August-Universität Göttingen, Göttingen, Germany
- Ryan Conrad
- Center for Molecular Microscopy, Center for Cancer Research, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892 USA, and also with the Cancer Research Technology Program, Frederick National Laboratory for Cancer Research, Frederick, MD 21701 USA
- Yanling Liu
- Advanced Biomedical Computational Science Group, Frederick National Laboratory for Cancer Research, Frederick, MD 21701 USA
- Dorsa Ziaei
- Advanced Biomedical Computational Science Group, Frederick National Laboratory for Cancer Research, Frederick, MD 21701 USA
- Ignacio Arganda-Carreras
- Department of Computer Science and Artificial Intelligence, University of the Basque Country (UPV/EHU), 20018 San Sebastián, Spain, also with the Donostia International Physics Center (DIPC), 20018 San Sebastián, Spain, also with the IKERBASQUE, Basque Foundation for Science, 48009 Bilbao, Spain, and also with the Biofisika Institute, 48940 Leioa, Spain
- Hanspeter Pfister
- Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), Harvard University, Allston, MA 02134 USA
- Donglai Wei
- Computer Science Department, Boston College, Chestnut Hill, MA 02467 USA
7
Dorkenwald S, Li PH, Januszewski M, Berger DR, Maitin-Shepard J, Bodor AL, Collman F, Schneider-Mizell CM, da Costa NM, Lichtman JW, Jain V. Multi-layered maps of neuropil with segmentation-guided contrastive learning. Nat Methods 2023; 20:2011-2020. PMID: 37985712. PMCID: PMC10703674. DOI: 10.1038/s41592-023-02059-8. [Received: 11/18/2022] [Accepted: 10/02/2023] [Indexed: 11/22/2023]
Abstract
Maps of the nervous system that identify individual cells along with their type, subcellular components and connectivity have the potential to elucidate fundamental organizational principles of neural circuits. Nanometer-resolution imaging of brain tissue provides the necessary raw data, but inferring cellular and subcellular annotation layers is challenging. We present segmentation-guided contrastive learning of representations (SegCLR), a self-supervised machine learning technique that produces representations of cells directly from 3D imagery and segmentations. When applied to volumes of human and mouse cortex, SegCLR enables accurate classification of cellular subcompartments and achieves performance equivalent to a supervised approach while requiring 400-fold fewer labeled examples. SegCLR also enables inference of cell types from fragments as small as 10 μm, which enhances the utility of volumes in which many neurites are truncated at boundaries. Finally, SegCLR enables exploration of layer 5 pyramidal cell subtypes and automated large-scale analysis of synaptic partners in mouse visual cortex.
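The contrastive objective underlying methods like SegCLR can be illustrated with a minimal InfoNCE-style loss in plain Python (a pedagogical sketch under our own simplifications, not the published implementation): each embedding should be closer to its positive, such as another view of the same segmentation fragment, than to the other embeddings in the batch.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE loss for one anchor embedding.

    Low when the anchor is similar to its positive (e.g. a different
    view of the same segment) and dissimilar to the negatives.
    """
    logits = [cosine(anchor, positive) / temperature]
    logits += [cosine(anchor, n) / temperature for n in negatives]
    m = max(logits)  # stabilized log-sum-exp
    log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_denom - logits[0]  # -log softmax of the positive

anchor = [1.0, 0.0]
good_positive = [0.9, 0.1]   # nearly the same direction as the anchor
bad_positive = [0.0, 1.0]    # orthogonal to the anchor
negatives = [[-1.0, 0.2], [0.3, -1.0]]
assert info_nce(anchor, good_positive, negatives) < info_nce(anchor, bad_positive, negatives)
print("aligned positive gives lower loss")
```

In SegCLR the embeddings come from a network applied to 3D imagery masked by the segmentation; here they are hand-picked 2D vectors purely to show the loss behavior.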
Affiliation(s)
- Sven Dorkenwald
- Google Research, Mountain View, CA, USA
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Computer Science Department, Princeton University, Princeton, NJ, USA
- Daniel R Berger
- Department of Molecular and Cellular Biology, Center for Brain Science, Harvard, Cambridge, MA, USA
- Jeff W Lichtman
- Department of Molecular and Cellular Biology, Center for Brain Science, Harvard, Cambridge, MA, USA
- Viren Jain
- Google Research, Mountain View, CA, USA.
8
Handler A, Zhang Q, Pang S, Nguyen TM, Iskols M, Nolan-Tamariz M, Cattel S, Plumb R, Sanchez B, Ashjian K, Shotland A, Brown B, Kabeer M, Turecek J, DeLisle MM, Rankin G, Xiang W, Pavarino EC, Africawala N, Santiago C, Lee WCA, Xu CS, Ginty DD. Three-dimensional reconstructions of mechanosensory end organs suggest a unifying mechanism underlying dynamic, light touch. Neuron 2023; 111:3211-3229.e9. PMID: 37725982. PMCID: PMC10773061. DOI: 10.1016/j.neuron.2023.08.023. [Received: 05/05/2023] [Revised: 07/31/2023] [Accepted: 08/22/2023] [Indexed: 09/21/2023]
Abstract
Across mammalian skin, structurally complex and diverse mechanosensory end organs respond to mechanical stimuli and enable our perception of dynamic, light touch. How forces act on morphologically dissimilar mechanosensory end organs of the skin to gate the requisite mechanotransduction channel Piezo2 and excite mechanosensory neurons is not understood. Here, we report high-resolution reconstructions of the hair follicle lanceolate complex, Meissner corpuscle, and Pacinian corpuscle and the subcellular distribution of Piezo2 within them. Across all three end organs, Piezo2 is restricted to the sensory axon membrane, including axon protrusions that extend from the axon body. These protrusions, which are numerous and elaborate extensively within the end organs, tether the axon to resident non-neuronal cells via adherens junctions. These findings support a unified model for dynamic touch in which mechanical stimuli stretch hundreds to thousands of axon protrusions across an end organ, opening proximal, axonal Piezo2 channels and exciting the neuron.
Affiliation(s)
- Annie Handler
- Department of Neurobiology, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA; Howard Hughes Medical Institute, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA
- Qiyu Zhang
- Department of Neurobiology, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA; Howard Hughes Medical Institute, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA
- Song Pang
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA 20147, USA
- Tri M Nguyen
- Department of Neurobiology, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA
- Michael Iskols
- Department of Neurobiology, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA; Howard Hughes Medical Institute, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA
- Michael Nolan-Tamariz
- Department of Neurobiology, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA; Howard Hughes Medical Institute, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA
- Stuart Cattel
- Department of Neurobiology, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA; Howard Hughes Medical Institute, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA
- Rebecca Plumb
- Department of Neurobiology, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA; Howard Hughes Medical Institute, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA
- Brianna Sanchez
- Department of Neurobiology, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA; Howard Hughes Medical Institute, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA
- Karyl Ashjian
- Department of Neurobiology, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA; Howard Hughes Medical Institute, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA
- Aria Shotland
- Department of Neurobiology, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA; Howard Hughes Medical Institute, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA
- Bartianna Brown
- Department of Neurobiology, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA; Howard Hughes Medical Institute, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA
- Madiha Kabeer
- Department of Neurobiology, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA; Howard Hughes Medical Institute, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA
- Josef Turecek
- Department of Neurobiology, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA; Howard Hughes Medical Institute, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA
- Michelle M DeLisle
- Department of Neurobiology, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA; Howard Hughes Medical Institute, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA
- Genelle Rankin
- Department of Neurobiology, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA; Howard Hughes Medical Institute, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA
- Wangchu Xiang
- Department of Neurobiology, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA; Howard Hughes Medical Institute, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA
| | - Elisa C Pavarino
- Department of Neurobiology, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA; Howard Hughes Medical Institute, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA
| | - Nusrat Africawala
- Department of Neurobiology, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA; Howard Hughes Medical Institute, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA
| | - Celine Santiago
- Department of Neurobiology, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA; Howard Hughes Medical Institute, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA
| | - Wei-Chung Allen Lee
- Department of Neurobiology, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA; F.M. Kirby Neurobiology Center, Boston Children's Hospital, Boston, MA, USA
| | - C Shan Xu
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA 20147, USA
| | - David D Ginty
- Department of Neurobiology, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA; Howard Hughes Medical Institute, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA.
| |
|
9
|
Aswath A, Alsahaf A, Giepmans BNG, Azzopardi G. Segmentation in large-scale cellular electron microscopy with deep learning: A literature survey. Med Image Anal 2023; 89:102920. [PMID: 37572414 DOI: 10.1016/j.media.2023.102920] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2022] [Revised: 07/05/2023] [Accepted: 07/31/2023] [Indexed: 08/14/2023]
Abstract
Electron microscopy (EM) enables high-resolution imaging of tissues and cells using 2D and 3D imaging techniques. Because manual segmentation of large-scale EM datasets is laborious and time-consuming, automated segmentation approaches are crucial. This review focuses on the progress of deep learning-based segmentation techniques in large-scale cellular EM over the last six years, during which significant progress has been made in both semantic and instance segmentation. A detailed account is given of the key datasets that contributed to the proliferation of deep learning in 2D and 3D EM segmentation. The review covers supervised, unsupervised, and self-supervised learning methods and examines how these algorithms were adapted to the task of segmenting cellular and sub-cellular structures in EM images. The special challenges posed by such images, like heterogeneity and spatial complexity, are described, along with the network architectures that overcame some of them. Moreover, an overview of the evaluation measures used to benchmark EM datasets in various segmentation tasks is provided. Finally, an outlook on current trends and future prospects of EM segmentation is given, especially regarding large-scale models and unlabeled images for learning generic features across EM datasets.
Affiliation(s)
- Anusha Aswath
- Bernoulli Institute of Mathematics, Computer Science and Artificial Intelligence, University of Groningen, Groningen, The Netherlands; Department of Biomedical Sciences of Cells and Systems, University of Groningen, University Medical Center Groningen, Groningen, The Netherlands.
| | - Ahmad Alsahaf
- Department of Biomedical Sciences of Cells and Systems, University of Groningen, University Medical Center Groningen, Groningen, The Netherlands
| | - Ben N G Giepmans
- Department of Biomedical Sciences of Cells and Systems, University of Groningen, University Medical Center Groningen, Groningen, The Netherlands
| | - George Azzopardi
- Bernoulli Institute of Mathematics, Computer Science and Artificial Intelligence, University of Groningen, Groningen, The Netherlands
| |
|
10
|
Wu J, Xia Y, Wang X, Wei Y, Liu A, Innanje A, Zheng M, Chen L, Shi J, Wang L, Zhan Y, Zhou XS, Xue Z, Shi F, Shen D. uRP: An integrated research platform for one-stop analysis of medical images. FRONTIERS IN RADIOLOGY 2023; 3:1153784. [PMID: 37492386 PMCID: PMC10365282 DOI: 10.3389/fradi.2023.1153784] [Citation(s) in RCA: 10] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 01/30/2023] [Accepted: 03/31/2023] [Indexed: 07/27/2023]
Abstract
Introduction: Medical image analysis is of tremendous importance in serving clinical diagnosis, treatment planning, and prognosis assessment. However, the image analysis process usually involves multiple modality-specific software tools and relies on rigorous manual operations, which is time-consuming and potentially poorly reproducible. Methods: We present an integrated platform, the uAI Research Portal (uRP), to achieve one-stop analysis of multimodal images such as CT, MRI, and PET for clinical research applications. The proposed uRP adopts a modularized architecture to be multifunctional, extensible, and customizable. Results and Discussion: The uRP offers three advantages: 1) it spans a wealth of image-processing algorithms, including semi-automatic delineation, automatic segmentation, registration, classification, quantitative analysis, and image visualization, to realize a one-stop analytic pipeline; 2) it integrates a variety of functional modules that can be directly applied, combined, or customized for specific application domains, such as brain, pneumonia, and knee-joint analyses; and 3) it enables full-stack analysis of one disease, including diagnosis, treatment planning, and prognosis assessment, as well as full-spectrum coverage of multiple disease applications. With the continuous development and inclusion of advanced algorithms, we expect this platform to greatly simplify the clinical scientific research process and promote more and better discoveries.
Affiliation(s)
- Jiaojiao Wu
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| | - Yuwei Xia
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| | - Xuechun Wang
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| | - Ying Wei
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| | - Aie Liu
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| | - Arun Innanje
- Department of Research and Development, United Imaging Intelligence Co., Ltd., Cambridge, MA, United States
| | - Meng Zheng
- Department of Research and Development, United Imaging Intelligence Co., Ltd., Cambridge, MA, United States
| | - Lei Chen
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| | - Jing Shi
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| | - Liye Wang
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| | - Yiqiang Zhan
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| | - Xiang Sean Zhou
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| | - Zhong Xue
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| | - Feng Shi
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| | - Dinggang Shen
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Shanghai Clinical Research and Trial Center, Shanghai, China
| |
|
11
|
Handler A, Zhang Q, Pang S, Nguyen TM, Iskols M, Nolan-Tamariz M, Cattel S, Plumb R, Sanchez B, Ashjian K, Shotland A, Brown B, Kabeer M, Turecek J, Rankin G, Xiang W, Pavarino EC, Africawala N, Santiago C, Lee WCA, Shan Xu C, Ginty DD. Three-dimensional reconstructions of mechanosensory end organs suggest a unifying mechanism underlying dynamic, light touch. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.03.17.533188. [PMID: 36993253 PMCID: PMC10055218 DOI: 10.1101/2023.03.17.533188] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/19/2023]
Abstract
Specialized mechanosensory end organs within mammalian skin (hair follicle-associated lanceolate complexes, Meissner corpuscles, and Pacinian corpuscles) enable our perception of light, dynamic touch1. In each of these end organs, fast-conducting mechanically sensitive neurons, called Aβ low-threshold mechanoreceptors (Aβ LTMRs), associate with resident glial cells, known as terminal Schwann cells (TSCs) or lamellar cells, to form complex axon ending structures. Lanceolate-forming and corpuscle-innervating Aβ LTMRs share a low threshold for mechanical activation, a rapidly adapting (RA) response to force indentation, and high sensitivity to dynamic stimuli1-6. How mechanical stimuli lead to activation of the requisite mechanotransduction channel Piezo2 (refs. 7-15) and to Aβ RA-LTMR excitation across the morphologically dissimilar mechanosensory end organ structures is not understood. Here, we report the precise subcellular distribution of Piezo2 and high-resolution, isotropic 3D reconstructions of all three end organs formed by Aβ RA-LTMRs, determined by large-volume enhanced Focused Ion Beam Scanning Electron Microscopy (FIB-SEM) imaging. We found that within each end organ, Piezo2 is enriched along the sensory axon membrane and is minimally or not expressed in TSCs and lamellar cells. We also observed a large number of small cytoplasmic protrusions enriched along the Aβ RA-LTMR axon terminals associated with hair follicles, Meissner corpuscles, and Pacinian corpuscles. These axon protrusions reside in close proximity to axonal Piezo2, occasionally contain the channel, and often form adherens junctions with nearby non-neuronal cells. Our findings support a unified model for Aβ RA-LTMR activation in which axon protrusions anchor Aβ RA-LTMR axon terminals to specialized end organ cells, enabling mechanical stimuli to stretch the axon at hundreds to thousands of sites across an individual end organ, leading to activation of proximal Piezo2 channels and excitation of the neuron.
|
12
|
Gallusser B, Maltese G, Di Caprio G, Vadakkan TJ, Sanyal A, Somerville E, Sahasrabudhe M, O’Connor J, Weigert M, Kirchhausen T. Deep neural network automated segmentation of cellular structures in volume electron microscopy. J Cell Biol 2023; 222:e202208005. [PMID: 36469001 PMCID: PMC9728137 DOI: 10.1083/jcb.202208005] [Citation(s) in RCA: 9] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2022] [Revised: 10/03/2022] [Accepted: 11/14/2022] [Indexed: 12/12/2022] Open
Abstract
Volume electron microscopy is an important imaging modality in contemporary cell biology. Identification of intracellular structures is a laborious process limiting the effective use of this potentially powerful tool. We resolved this bottleneck with automated segmentation of intracellular substructures in electron microscopy (ASEM), a new pipeline to train a convolutional neural network to detect structures of a wide range in size and complexity. We obtained dedicated models for each structure based on a small number of sparsely annotated ground truth images from only one or two cells. Model generalization was improved with a rapid, computationally effective strategy to refine a trained model by including a few additional annotations. We identified mitochondria, Golgi apparatus, endoplasmic reticulum, nuclear pore complexes, caveolae, clathrin-coated pits, and vesicles imaged by focused ion beam scanning electron microscopy. We uncovered a wide range of membrane-nuclear pore diameters within a single cell and derived morphological metrics from clathrin-coated pits and vesicles, consistent with the classical constant-growth assembly model.
Affiliation(s)
- Benjamin Gallusser
- Program in Cellular and Molecular Medicine, Boston Children’s Hospital, Boston, MA
- Institute of Bioengineering, School of Life Sciences, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
| | - Giorgio Maltese
- Program in Cellular and Molecular Medicine, Boston Children’s Hospital, Boston, MA
| | - Giuseppe Di Caprio
- Program in Cellular and Molecular Medicine, Boston Children’s Hospital, Boston, MA
- Department of Pediatrics, Harvard Medical School, Boston, MA
| | - Tegy John Vadakkan
- Program in Cellular and Molecular Medicine, Boston Children’s Hospital, Boston, MA
| | - Anwesha Sanyal
- Program in Cellular and Molecular Medicine, Boston Children’s Hospital, Boston, MA
- Department of Cell Biology, Harvard Medical School, Boston, MA
| | - Elliott Somerville
- Program in Cellular and Molecular Medicine, Boston Children’s Hospital, Boston, MA
| | - Mihir Sahasrabudhe
- Program in Cellular and Molecular Medicine, Boston Children’s Hospital, Boston, MA
- Université Paris-Saclay, CentraleSupélec, Mathématiques et Informatique pour la Complexité et les Systèmes, Gif-sur-Yvette, France
| | - Justin O’Connor
- Department of Biological Chemistry & Molecular Pharmacology, Harvard Medical School, Boston, MA
| | - Martin Weigert
- Institute of Bioengineering, School of Life Sciences, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
| | - Tom Kirchhausen
- Program in Cellular and Molecular Medicine, Boston Children’s Hospital, Boston, MA
- Department of Pediatrics, Harvard Medical School, Boston, MA
- Department of Cell Biology, Harvard Medical School, Boston, MA
| |
|
13
|
Galbraith CG. Pumping up the volume. J Cell Biol 2023; 222:e202212042. [PMID: 36696087 PMCID: PMC9930139 DOI: 10.1083/jcb.202212042] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/26/2023] Open
Abstract
The time and cost of annotating ground-truth images and network training are major challenges to utilizing machine learning to automate the mining of volume electron microscopy data. In this issue, Gallusser et al. (2023. J. Cell Biol. https://doi.org/10.1083/jcb.202208005) present a less computationally intense pipeline to detect a single type of organelle using a limited number of loosely annotated images.
Affiliation(s)
- Catherine G. Galbraith
- Oregon Health and Science University, Portland, OR, USA
- Quantitative and Systems Biology Program in Biomedical Engineering and The Knight Cancer Institute, Portland, OR, USA
| |
|
14
|
Artificial intelligence gives neuron reconstruction a performance boost. Nat Methods 2023; 20:189-190. [PMID: 36604608 DOI: 10.1038/s41592-022-01712-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/07/2023]
|
15
|
Conrad R, Narayan K. Instance segmentation of mitochondria in electron microscopy images with a generalist deep learning model trained on a diverse dataset. Cell Syst 2023; 14:58-71.e5. [PMID: 36657391 PMCID: PMC9883049 DOI: 10.1016/j.cels.2022.12.006] [Citation(s) in RCA: 13] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2022] [Revised: 10/10/2022] [Accepted: 12/14/2022] [Indexed: 01/19/2023]
Abstract
Mitochondria are extremely pleomorphic organelles. Automatically annotating each one accurately and precisely in any 2D or volume electron microscopy (EM) image is an unsolved computational challenge. Current deep learning-based approaches train models on images that provide limited cellular context, precluding generality. To address this, we amassed a highly heterogeneous, unlabeled cellular EM dataset of ∼1.5 × 10⁶ 2D images and segmented ∼135,000 mitochondrial instances therein. MitoNet, a model trained on these resources, performs well on challenging benchmarks and on previously unseen volume EM datasets containing tens of thousands of mitochondria. We release a Python package and napari plugin, empanada, to rapidly run inference, visualize, and proofread instance segmentations.
Affiliation(s)
- Ryan Conrad
- Center for Molecular Microscopy, Center for Cancer Research, National Cancer Institute, National Institutes of Health, Bethesda 20892, Maryland, USA; Cancer Research Technology Program, Frederick National Laboratory for Cancer Research, Frederick 21702, Maryland, USA
| | - Kedar Narayan
- Center for Molecular Microscopy, Center for Cancer Research, National Cancer Institute, National Institutes of Health, Bethesda 20892, Maryland, USA; Cancer Research Technology Program, Frederick National Laboratory for Cancer Research, Frederick 21702, Maryland, USA
| |
|
16
|
Sheridan A, Nguyen TM, Deb D, Lee WCA, Saalfeld S, Turaga SC, Manor U, Funke J. Local shape descriptors for neuron segmentation. Nat Methods 2023; 20:295-303. [PMID: 36585455 PMCID: PMC9911350 DOI: 10.1038/s41592-022-01711-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2021] [Accepted: 11/01/2022] [Indexed: 12/31/2022]
Abstract
We present an auxiliary learning task for the problem of neuron segmentation in electron microscopy volumes. The auxiliary task consists of the prediction of local shape descriptors (LSDs), which we combine with conventional voxel-wise direct neighbor affinities for neuron boundary detection. The shape descriptors capture local statistics about the neuron to be segmented, such as diameter, elongation, and direction. In a study comparing several existing methods across various specimens, imaging techniques, and resolutions, auxiliary learning of LSDs consistently increases the segmentation accuracy of affinity-based methods over a range of metrics. Furthermore, the addition of LSDs brings affinity-based segmentation methods on par with the current state of the art for neuron segmentation (flood-filling networks) while being two orders of magnitude more efficient, a critical requirement for the processing of future petabyte-sized datasets.
Affiliation(s)
- Arlo Sheridan
- HHMI Janelia, Ashburn, VA, USA; Waitt Advanced Biophotonics Center, Salk Institute for Biological Studies, La Jolla, CA, USA
| | - Tri M. Nguyen
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA
| | | | - Wei-Chung Allen Lee
- F.M. Kirby Neurobiology Center, Boston Children’s Hospital, Harvard Medical School, Boston, MA, USA
| | | | | | - Uri Manor
- Waitt Advanced Biophotonics Center, Salk Institute for Biological Studies, La Jolla, CA, USA
| | | |
|
17
|
Structured cerebellar connectivity supports resilient pattern separation. Nature 2023; 613:543-549. [PMID: 36418404 DOI: 10.1038/s41586-022-05471-w] [Citation(s) in RCA: 15] [Impact Index Per Article: 15.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/26/2021] [Accepted: 10/20/2022] [Indexed: 11/25/2022]
Abstract
The cerebellum is thought to help detect and correct errors between intended and executed commands1,2 and is critical for social behaviours, cognition and emotion3-6. Computations for motor control must be performed quickly to correct errors in real time and should be sensitive to small differences between patterns for fine error correction while being resilient to noise7. Influential theories of cerebellar information processing have largely assumed random network connectivity, which increases the encoding capacity of the network's first layer8-13. However, maximizing encoding capacity reduces the resilience to noise7. To understand how neuronal circuits address this fundamental trade-off, we mapped the feedforward connectivity in the mouse cerebellar cortex using automated large-scale transmission electron microscopy and convolutional neural network-based image segmentation. We found that both the input and output layers of the circuit exhibit redundant and selective connectivity motifs, which contrast with prevailing models. Numerical simulations suggest that these redundant, non-random connectivity motifs increase the resilience to noise at a negligible cost to the overall encoding capacity. This work reveals how neuronal network structure can support a trade-off between encoding capacity and redundancy, unveiling principles of biological network architecture with implications for the design of artificial neural networks.
|
18
|
Chequer Charan D, Hua Y, Wang H, Huang W, Wang F, Elgoyhen AB, Boergens KM, Di Guilmi MN. Volume electron microscopy reveals age-related circuit remodeling in the auditory brainstem. Front Cell Neurosci 2022; 16:1070438. [PMID: 36589288 PMCID: PMC9799098 DOI: 10.3389/fncel.2022.1070438] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2022] [Accepted: 11/21/2022] [Indexed: 12/23/2022] Open
Abstract
The medial nucleus of the trapezoid body (MNTB) is an integral component of the auditory brainstem circuitry involved in sound localization. The giant presynaptic nerve terminal with multiple active zones, the calyx of Held (CH), is a hallmark of this nucleus and mediates fast and synchronized glutamatergic synaptic transmission. To delineate how these synaptic structures adapt to reduced auditory afferents due to aging, we acquired and reconstructed circuitry-level volumes of mouse MNTB at different ages (3 weeks and 6, 18, and 24 months) using serial block-face electron microscopy. We used C57BL/6J, the most widely used inbred mouse strain for transgenic lines, which displays a type of age-related hearing loss. We found that MNTB neurons decrease in density with age. Surprisingly, we observed that approximately 10% of MNTB neurons, on average, were poly-innervated across the mouse lifespan, predominantly in the low-frequency region. Moreover, tonotopy-dependent heterogeneity in CH morphology was observed in young but not in older mice. In conclusion, our data support the notion that age-related hearing impairments can be, in part, a direct consequence of structural alterations and circuit remodeling in the brainstem.
Affiliation(s)
- Daniela Chequer Charan
- Instituto de Investigaciones en Ingeniería Genética y Biología Molecular, Dr. Héctor N. Torres, INGEBI-CONICET, Buenos Aires, Argentina
| | - Yunfeng Hua
- Shanghai Institute of Precision Medicine, Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Haoyu Wang
- Shanghai Institute of Precision Medicine, Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Wenqing Huang
- Shanghai Institute of Precision Medicine, Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Fangfang Wang
- Shanghai Institute of Precision Medicine, Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Ana Belén Elgoyhen
- Instituto de Investigaciones en Ingeniería Genética y Biología Molecular, Dr. Héctor N. Torres, INGEBI-CONICET, Buenos Aires, Argentina
| | - Kevin M. Boergens
- Department of Physics, The University of Illinois at Chicago, Chicago, IL, United States
| | - Mariano N. Di Guilmi
- Instituto de Investigaciones en Ingeniería Genética y Biología Molecular, Dr. Héctor N. Torres, INGEBI-CONICET, Buenos Aires, Argentina
| |
|
19
|
Deng S, Huang W, Chen C, Fu X, Xiong Z. A Unified Deep Learning Framework for ssTEM Image Restoration. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:3734-3746. [PMID: 35905070 DOI: 10.1109/tmi.2022.3194984] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Serial section transmission electron microscopy (ssTEM) reveals biological information at the nanometer scale and plays an important role in ultrastructural analysis. However, due to imperfect preparation of biological samples, ssTEM images are usually degraded by various artifacts that greatly challenge subsequent analysis and visualization. In this paper, we introduce a unified deep learning framework for ssTEM image restoration which addresses three main types of artifacts, i.e., Support Film Folds (SFF), Staining Precipitates (SP), and Missing Sections (MS). To achieve this goal, we first model the appearance of SFF and SP artifacts by conducting comprehensive analyses of the statistics of real degraded images, from which we can then simulate a large number of paired images (degraded/artifact-free) for training a deep restoration network. Then, we design a coarse-to-fine restoration network consisting of three modules, i.e., interpolation, correction, and fusion. The interpolation module exploits the adjacent artifact-free images for an initial restoration, while the correction module resorts to the degraded image itself to rectify the artifacts. Finally, the fusion module jointly utilizes the above two results to further improve restoration fidelity. Experimental results on both synthetic and real test data validate the significantly improved performance of our proposed framework over existing solutions, in terms of both image restoration fidelity and neuron segmentation accuracy. To the best of our knowledge, this is the first unified deep learning framework for restoring ssTEM images degraded by different types of artifacts. Code is available at https://github.com/sydeng99/ssTEM-restoration.
|
20
|
Dorkenwald S, Turner NL, Macrina T, Lee K, Lu R, Wu J, Bodor AL, Bleckert AA, Brittain D, Kemnitz N, Silversmith WM, Ih D, Zung J, Zlateski A, Tartavull I, Yu SC, Popovych S, Wong W, Castro M, Jordan CS, Wilson AM, Froudarakis E, Buchanan J, Takeno MM, Torres R, Mahalingam G, Collman F, Schneider-Mizell CM, Bumbarger DJ, Li Y, Becker L, Suckow S, Reimer J, Tolias AS, Macarico da Costa N, Reid RC, Seung HS. Binary and analog variation of synapses between cortical pyramidal neurons. eLife 2022; 11:e76120. [PMID: 36382887 PMCID: PMC9704804 DOI: 10.7554/elife.76120] [Citation(s) in RCA: 19] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/13/2021] [Accepted: 11/15/2022] [Indexed: 11/17/2022] Open
Abstract
Learning from experience depends at least in part on changes in neuronal connections. We present the largest map of connectivity to date between cortical neurons of a defined type (layer 2/3 [L2/3] pyramidal cells in mouse primary visual cortex), which was enabled by automated analysis of serial section electron microscopy images with improved handling of image defects (250 × 140 × 90 μm³ volume). We used the map to identify constraints on the learning algorithms employed by the cortex. Previous cortical studies modeled a continuum of synapse sizes by a log-normal distribution. A continuum is consistent with most neural network models of learning, in which synaptic strength is a continuously graded analog variable. Here, we show that synapse size, when restricted to synapses between L2/3 pyramidal cells, is well modeled by the sum of a binary variable and an analog variable drawn from a log-normal distribution. Two synapses sharing the same presynaptic and postsynaptic cells are known to be correlated in size. We show that the binary variables of the two synapses are highly correlated, while the analog variables are not. Binary variation could be the outcome of a Hebbian or other synaptic plasticity rule depending on activity signals that are relatively uniform across neuronal arbors, while analog variation may be dominated by other influences such as spontaneous dynamical fluctuations. We discuss the implications for the longstanding hypothesis that activity-dependent plasticity switches synapses between bistable states.
Affiliation(s)
- Sven Dorkenwald
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
- Computer Science Department, Princeton University, Princeton, United States
| | - Nicholas L Turner
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
- Computer Science Department, Princeton University, Princeton, United States
| | - Thomas Macrina
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
- Computer Science Department, Princeton University, Princeton, United States
| | - Kisuk Lee
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
- Brain & Cognitive Sciences Department, Massachusetts Institute of Technology, Cambridge, United States
| | - Ran Lu
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
| | - Jingpeng Wu
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
| | - Agnes L Bodor
- Allen Institute for Brain Science, Seattle, United States
| | | | | | - Nico Kemnitz
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
| | | | - Dodam Ih
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
| | - Jonathan Zung
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
| | - Aleksandar Zlateski
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
| | - Ignacio Tartavull
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
| | - Szi-Chieh Yu
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
| | - Sergiy Popovych
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
- Computer Science Department, Princeton University, Princeton, United States
| | - William Wong
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
| | - Manuel Castro
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
| | - Chris S Jordan
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
| | - Alyssa M Wilson
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
| | - Emmanouil Froudarakis
- Department of Neuroscience, Baylor College of Medicine, Houston, United States
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, United States
| | | | - Marc M Takeno
- Allen Institute for Brain Science, Seattle, United States
| | - Russel Torres
- Allen Institute for Brain Science, Seattle, United States
| | | | | | | | | | - Yang Li
- Allen Institute for Brain Science, Seattle, United States
| | - Lynne Becker
- Allen Institute for Brain Science, Seattle, United States
| | - Shelby Suckow
- Allen Institute for Brain Science, Seattle, United States
| | - Jacob Reimer
- Department of Neuroscience, Baylor College of Medicine, Houston, United States
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, United States
| | - Andreas S Tolias
- Department of Neuroscience, Baylor College of Medicine, Houston, United States
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, United States
- Department of Electrical and Computer Engineering, Rice University, Houston, United States
| | | | - R Clay Reid
- Allen Institute for Brain Science, Seattle, United States
| | - H Sebastian Seung
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
- Computer Science Department, Princeton University, Princeton, United States
| |
21
Shi F, Hu W, Wu J, Han M, Wang J, Zhang W, Zhou Q, Zhou J, Wei Y, Shao Y, Chen Y, Yu Y, Cao X, Zhan Y, Zhou XS, Gao Y, Shen D. Deep learning empowered volume delineation of whole-body organs-at-risk for accelerated radiotherapy. Nat Commun 2022; 13:6566. [PMID: 36323677] [PMCID: PMC9630370] [DOI: 10.1038/s41467-022-34257-x] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Received: 04/21/2022] [Accepted: 10/19/2022] [Indexed: 11/05/2022]
Abstract
In radiotherapy for cancer patients, delineating organs-at-risk (OARs) and tumors is an indispensable process, yet it is the most time-consuming step because manual delineation by radiation oncologists is always required. Herein, we propose a lightweight deep learning framework for radiotherapy treatment planning (RTP), named RTP-Net, to enable automatic, rapid, and precise initialization of whole-body OARs and tumors. Briefly, the framework implements a cascaded coarse-to-fine segmentation, with an adaptive module for both small and large organs, and attention mechanisms for organs and boundaries. Our experiments show three merits: 1) extensive evaluation on 67 delineation tasks over a large-scale dataset of 28,581 cases; 2) comparable or superior accuracy, with an average Dice of 0.95; 3) near real-time delineation (<2 s) in most tasks. This framework could be utilized to accelerate the contouring process in the All-in-One radiotherapy scheme and thus greatly shorten the turnaround time of patients.
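The cascaded coarse-to-fine scheme described in this abstract can be illustrated with a minimal sketch. The `coarse_seg` and `fine_seg` callables below are hypothetical stand-ins for the two networks (the actual RTP-Net models are not reproduced here): a coarse pass on a downsampled volume localizes the organ, and the fine pass runs only inside the cropped region of interest.

```python
import numpy as np

def coarse_to_fine(volume, coarse_seg, fine_seg, scale=4, margin=2):
    """Generic coarse-to-fine cascade. `coarse_seg`/`fine_seg` are
    callables returning binary masks (stand-ins for the networks)."""
    # 1) Coarse pass on a downsampled volume to localize the organ.
    small = volume[::scale, ::scale, ::scale]
    coarse = coarse_seg(small)
    zs, ys, xs = np.nonzero(coarse)
    if zs.size == 0:  # organ not found at coarse scale
        return np.zeros_like(volume, dtype=bool)
    # 2) Map the coarse bounding box back to full resolution, pad a margin.
    lo = [max(0, int(a.min()) * scale - margin) for a in (zs, ys, xs)]
    hi = [min(s, (int(a.max()) + 1) * scale + margin)
          for a, s in zip((zs, ys, xs), volume.shape)]
    crop = volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
    # 3) Fine pass only inside the cropped region of interest.
    out = np.zeros_like(volume, dtype=bool)
    out[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] = fine_seg(crop)
    return out
```

Restricting the fine network to the coarse bounding box is the design choice that keeps per-organ inference near real time, especially for small organs.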
Affiliation(s)
- Feng Shi
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| | - Weigang Hu
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
| | - Jiaojiao Wu
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| | - Miaofei Han
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| | - Jiazhou Wang
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
| | - Wei Zhang
- Radiotherapy Business Unit, Shanghai United Imaging Healthcare Co., Ltd., Shanghai, China
| | - Qing Zhou
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| | - Jingjie Zhou
- Radiotherapy Business Unit, Shanghai United Imaging Healthcare Co., Ltd., Shanghai, China
| | - Ying Wei
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| | - Ying Shao
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| | - Yanbo Chen
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| | - Yue Yu
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| | - Xiaohuan Cao
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| | - Yiqiang Zhan
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| | - Xiang Sean Zhou
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| | - Yaozong Gao
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| | - Dinggang Shen
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China; School of Biomedical Engineering, ShanghaiTech University, Shanghai, China; Shanghai Clinical Research and Trial Center, Shanghai, China
| |
22
Oner D, Kozinski M, Citraro L, Dadap NC, Konings AG, Fua P. Promoting Connectivity of Network-Like Structures by Enforcing Region Separation. IEEE Trans Pattern Anal Mach Intell 2022; 44:5401-5413. [PMID: 33881988] [DOI: 10.1109/tpami.2021.3074366] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 06/12/2023]
Abstract
We propose a novel, connectivity-oriented loss function for training deep convolutional networks to reconstruct network-like structures, such as roads and irrigation canals, from aerial images. The main idea behind our loss is to express the connectivity of roads, or canals, in terms of the disconnections they create between background regions of the image. In simple terms, a gap in a predicted road causes two background regions that lie on opposite sides of a ground-truth road to touch in the prediction. Our loss function is designed to prevent such unwanted connections between background regions, and therefore to close the gaps in predicted roads. It also prevents false-positive roads and canals by penalizing unwarranted disconnections of background regions. To capture even short, dead-ending road segments, we evaluate the loss in small image crops. We show, in experiments on two standard road benchmarks and a new dataset of irrigation canals, that convnets trained with our loss function recover road connectivity so well that it suffices to skeletonize their output to produce state-of-the-art maps. A distinct advantage of our approach is that the loss can be plugged into any existing training setup without further modifications.
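The region-separation idea can be made concrete with a toy checker: count how many connected components of the predicted background merge pixels from different ground-truth background regions, which happens exactly when a predicted road has a gap. This is an illustrative, non-differentiable sketch in plain Python, not the paper's actual loss (which is differentiable and evaluated on crops).

```python
from collections import deque

def connectivity_violations(pred_road, gt_bg_labels):
    """Count predicted-background components that merge pixels from
    different ground-truth background regions (i.e. road gaps).
    pred_road: 2D 0/1 grid; gt_bg_labels: 2D grid with 0 on the
    ground-truth road and region ids (1, 2, ...) on the background."""
    h, w = len(pred_road), len(pred_road[0])
    seen = [[False] * w for _ in range(h)]
    violations = 0
    for sy in range(h):
        for sx in range(w):
            if pred_road[sy][sx] or seen[sy][sx]:
                continue
            # BFS over one connected component of predicted background.
            regions, queue = set(), deque([(sy, sx)])
            seen[sy][sx] = True
            while queue:
                y, x = queue.popleft()
                if gt_bg_labels[y][x]:
                    regions.add(gt_bg_labels[y][x])
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w and not seen[ny][nx] \
                            and not pred_road[ny][nx]:
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            if len(regions) > 1:  # a gap let distinct GT regions touch
                violations += 1
    return violations
```

A perfect road prediction keeps every ground-truth background region in its own component, so the count is zero; each gap that bridges two regions adds a violation.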
23
Kievits AJ, Lane R, Carroll EC, Hoogenboom JP. How innovations in methodology offer new prospects for volume electron microscopy. J Microsc 2022; 287:114-137. [PMID: 35810393] [PMCID: PMC9546337] [DOI: 10.1111/jmi.13134] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 04/22/2022] [Revised: 06/29/2022] [Accepted: 07/06/2022] [Indexed: 11/29/2022]
Abstract
Detailed knowledge of biological structure has been key in understanding biology at several levels of organisation, from organs to cells and proteins. Volume electron microscopy (volume EM) provides high resolution 3D structural information about tissues on the nanometre scale. However, the throughput rate of conventional electron microscopes has limited the volume size and number of samples that can be imaged. Recent improvements in methodology are currently driving a revolution in volume EM, making possible the structural imaging of whole organs and small organisms. In turn, these recent developments in image acquisition have created or stressed bottlenecks in other parts of the pipeline, like sample preparation, image analysis and data management. While the progress in image analysis is stunning due to the advent of automatic segmentation and server-based annotation tools, several challenges remain. Here we discuss recent trends in volume EM, emerging methods for increasing throughput and implications for sample preparation, image analysis and data management.
Affiliation(s)
- Arent J. Kievits
- Imaging Physics, Delft University of Technology, Delft 2624CJ, The Netherlands
| | - Ryan Lane
- Imaging Physics, Delft University of Technology, Delft 2624CJ, The Netherlands
| | | | | |
24
Peddie CJ, Genoud C, Kreshuk A, Meechan K, Micheva KD, Narayan K, Pape C, Parton RG, Schieber NL, Schwab Y, Titze B, Verkade P, Aubrey A, Collinson LM. Volume electron microscopy. Nat Rev Methods Primers 2022; 2:51. [PMID: 37409324] [PMCID: PMC7614724] [DOI: 10.1038/s43586-022-00131-9] [Citation(s) in RCA: 35] [Impact Index Per Article: 17.5] [Accepted: 05/10/2022] [Indexed: 07/07/2023]
Abstract
Life exists in three dimensions, but until the turn of the century most electron microscopy methods provided only 2D image data. Recently, electron microscopy techniques capable of delving deep into the structure of cells and tissues have emerged, collectively called volume electron microscopy (vEM). Developments in vEM have been dubbed a quiet revolution as the field evolved from established transmission and scanning electron microscopy techniques, so early publications largely focused on the bioscience applications rather than the underlying technological breakthroughs. However, with an explosion in the uptake of vEM across the biosciences and fast-paced advances in volume, resolution, throughput and ease of use, it is timely to introduce the field to new audiences. In this Primer, we introduce the different vEM imaging modalities, the specialized sample processing and image analysis pipelines that accompany each modality and the types of information revealed in the data. We showcase key applications in the biosciences where vEM has helped make breakthrough discoveries and consider limitations and future directions. We aim to show new users how vEM can support discovery science in their own research fields and inspire broader uptake of the technology, finally allowing its full adoption into mainstream biological imaging.
Affiliation(s)
- Christopher J. Peddie
- Electron Microscopy Science Technology Platform, The Francis Crick Institute, London, UK
| | - Christel Genoud
- Electron Microscopy Facility, Faculty of Biology and Medicine, University of Lausanne, Lausanne, Switzerland
| | - Anna Kreshuk
- Cell Biology and Biophysics Unit, European Molecular Biology Laboratory, Heidelberg, Germany
| | - Kimberly Meechan
- Cell Biology and Biophysics Unit, European Molecular Biology Laboratory, Heidelberg, Germany
- Present address: Faculty of Biosciences, Heidelberg University, Heidelberg, Germany
| | - Kristina D. Micheva
- Department of Molecular and Cellular Physiology, Stanford University, Palo Alto, CA, USA
| | - Kedar Narayan
- Center for Molecular Microscopy, Center for Cancer Research, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Cancer Research Technology Program, Frederick National Laboratory for Cancer Research, Frederick, MD, USA
| | - Constantin Pape
- Cell Biology and Biophysics Unit, European Molecular Biology Laboratory, Heidelberg, Germany
| | - Robert G. Parton
- The Institute for Molecular Bioscience, The University of Queensland, Brisbane, Queensland, Australia
- Centre for Microscopy and Microanalysis, The University of Queensland, Brisbane, Queensland, Australia
| | - Nicole L. Schieber
- Centre for Microscopy and Microanalysis, The University of Queensland, Brisbane, Queensland, Australia
| | - Yannick Schwab
- Cell Biology and Biophysics Unit/ Electron Microscopy Core Facility, European Molecular Biology Laboratory, Heidelberg, Germany
| | | | - Paul Verkade
- School of Biochemistry, University of Bristol, Bristol, UK
| | - Aubrey Aubrey
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
| | - Lucy M. Collinson
- Electron Microscopy Science Technology Platform, The Francis Crick Institute, London, UK
| |
25
Peng T, Wu Y, Qin J, Wu QJ, Cai J. H-ProSeg: Hybrid ultrasound prostate segmentation based on explainability-guided mathematical model. Comput Methods Programs Biomed 2022; 219:106752. [PMID: 35338887] [DOI: 10.1016/j.cmpb.2022.106752] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Received: 07/31/2021] [Revised: 02/16/2022] [Accepted: 03/11/2022] [Indexed: 06/14/2023]
Abstract
BACKGROUND AND OBJECTIVE: Accurate and robust prostate segmentation in transrectal ultrasound (TRUS) images is of great interest for image-guided prostate interventions and prostate cancer diagnosis. However, it remains a challenging task for various reasons, including a missing or ambiguous boundary between the prostate and surrounding tissues, the presence of shadow artifacts, intra-prostate intensity heterogeneity, and anatomical variations. METHODS: Here, we present a hybrid method for prostate segmentation (H-ProSeg) in TRUS images, using a small number of radiologist-defined seed points as the prior points. This method consists of three subnetworks. The first subnetwork uses an improved principal-curve-based model to obtain data sequences consisting of seed points and their corresponding projection index. The second subnetwork uses an improved differential-evolution-based artificial neural network for training to decrease the model error. The third subnetwork uses the parameters of the artificial neural network to explain the smooth mathematical description of the prostate contour. The performance of the H-ProSeg method was assessed in 55 brachytherapy patients using Dice similarity coefficient (DSC), Jaccard similarity coefficient (Ω), and accuracy (ACC) values. RESULTS: The H-ProSeg method achieved excellent segmentation accuracy, with DSC, Ω, and ACC values of 95.8%, 94.3%, and 95.4%, respectively. Even under heavy Gaussian noise (standard deviation of the Gaussian function, σ = 50), the DSC, Ω, and ACC values remained as high as 93.3%, 91.9%, and 93%, respectively. As σ increased from 10 to 50, the DSC, Ω, and ACC values fluctuated by a maximum of approximately 2.5%, demonstrating the excellent robustness of our method. CONCLUSIONS: Here, we present a hybrid method for accurate and robust prostate ultrasound image segmentation. The H-ProSeg method achieved superior performance compared with current state-of-the-art techniques. Knowledge of the precise boundaries of the prostate is crucial for the conservation of risk structures. The proposed models have the potential to improve prostate cancer diagnosis and therapeutic outcomes.
Affiliation(s)
- Tao Peng
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
| | - Yiyun Wu
- Department of Medical Technology, Jiangsu Province Hospital, Nanjing, Jiangsu, China
| | - Jing Qin
- Department of Nursing, The Hong Kong Polytechnic University, Hong Kong, China
| | - Qingrong Jackie Wu
- Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
| | - Jing Cai
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China.
| |
26
Candelabrum cells are ubiquitous cerebellar cortex interneurons with specialized circuit properties. Nat Neurosci 2022; 25:702-713. [PMID: 35578131] [PMCID: PMC9548381] [DOI: 10.1038/s41593-022-01057-x] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Received: 04/09/2021] [Accepted: 03/21/2022] [Indexed: 01/22/2023]
Abstract
To understand how the cerebellar cortex transforms mossy fiber (MF) inputs into Purkinje cell (PC) outputs, it is vital to delineate the elements of this circuit. Candelabrum cells (CCs) are enigmatic interneurons of the cerebellar cortex that have been identified based on their morphology, but their electrophysiological properties, synaptic connections and function remain unknown. Here, we clarify these properties using electrophysiology, single-nucleus RNA sequencing, in situ hybridization and serial electron microscopy in mice. We find that CCs are the most abundant PC layer interneuron. They are GABAergic, molecularly distinct and present in all cerebellar lobules. Their high resistance renders CC firing highly sensitive to synaptic inputs. CCs are excited by MFs and granule cells and are strongly inhibited by PCs. CCs in turn primarily inhibit molecular layer interneurons, which leads to PC disinhibition. Thus, inputs, outputs and local signals converge onto CCs to allow them to assume a unique role in controlling cerebellar output.
27
Ma B, Xu Y, Chen J, Puquan P, Ban X, Wang H, Xue W. Deep learning based object tracking for 3D microstructure reconstruction. Methods 2022; 204:172-178. [DOI: 10.1016/j.ymeth.2022.04.001] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Received: 01/23/2022] [Revised: 03/14/2022] [Accepted: 04/05/2022] [Indexed: 10/18/2022]
28
Turner NL, Macrina T, Bae JA, Yang R, Wilson AM, Schneider-Mizell C, Lee K, Lu R, Wu J, Bodor AL, Bleckert AA, Brittain D, Froudarakis E, Dorkenwald S, Collman F, Kemnitz N, Ih D, Silversmith WM, Zung J, Zlateski A, Tartavull I, Yu SC, Popovych S, Mu S, Wong W, Jordan CS, Castro M, Buchanan J, Bumbarger DJ, Takeno M, Torres R, Mahalingam G, Elabbady L, Li Y, Cobos E, Zhou P, Suckow S, Becker L, Paninski L, Polleux F, Reimer J, Tolias AS, Reid RC, da Costa NM, Seung HS. Reconstruction of neocortex: Organelles, compartments, cells, circuits, and activity. Cell 2022; 185:1082-1100.e24. [PMID: 35216674] [PMCID: PMC9337909] [DOI: 10.1016/j.cell.2022.01.023] [Citation(s) in RCA: 51] [Impact Index Per Article: 25.5] [Received: 12/03/2020] [Revised: 07/26/2021] [Accepted: 01/27/2022] [Indexed: 12/31/2022]
Abstract
We assembled a semi-automated reconstruction of L2/3 mouse primary visual cortex from ∼250 × 140 × 90 μm³ of electron microscopic images, including pyramidal and non-pyramidal neurons, astrocytes, microglia, oligodendrocytes and precursors, pericytes, vasculature, nuclei, mitochondria, and synapses. Visual responses of a subset of pyramidal cells are included. The data are publicly available, along with tools for programmatic and three-dimensional interactive access. Brief vignettes illustrate the breadth of potential applications relating structure to function in cortical circuits and neuronal cell biology. Mitochondria and synapse organization are characterized as a function of path length from the soma. Pyramidal connectivity motif frequencies are predicted accurately using a configuration model of random graphs. Pyramidal cells receiving more connections from nearby cells exhibit stronger and more reliable visual responses. Sample code shows data access and analysis.
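The configuration-model comparison mentioned in the abstract can be sketched as a toy estimate under a standard directed configuration model (the function name and the normalization are illustrative assumptions, not the authors' code): with total edge count E, the chance of an i→j connection is approximated by out_i·in_j/E, so the expected count of reciprocally connected pairs is the sum over pairs of p_ij·p_ji.

```python
def expected_bidirectional_pairs(out_deg, in_deg):
    """Expected number of reciprocally connected node pairs under a
    directed configuration model, with p_ij ~ out_deg[i]*in_deg[j]/E."""
    E = sum(out_deg)  # total number of directed edges
    n = len(out_deg)
    total = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            p_ij = out_deg[i] * in_deg[j] / E
            p_ji = out_deg[j] * in_deg[i] / E
            total += p_ij * p_ji  # chance the pair is bidirectional
    return total
```

Comparing such expected motif counts against observed counts is the standard way to ask whether reciprocity exceeds what degree sequences alone predict.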
Affiliation(s)
- Nicholas L Turner
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA; Computer Science Department, Princeton University, Princeton, NJ 08544, USA
| | - Thomas Macrina
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA; Computer Science Department, Princeton University, Princeton, NJ 08544, USA
| | - J Alexander Bae
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA; Electrical and Computer Engineering Department, Princeton University, Princeton, NJ 08544, USA
| | - Runzhe Yang
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA; Computer Science Department, Princeton University, Princeton, NJ 08544, USA
| | - Alyssa M Wilson
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA
| | | | - Kisuk Lee
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA; Brain & Cognitive Sciences Department, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
| | - Ran Lu
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA
| | - Jingpeng Wu
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA
| | - Agnes L Bodor
- Allen Institute for Brain Science, Seattle, WA 98109, USA
| | | | | | - Emmanouil Froudarakis
- Department of Neuroscience, Baylor College of Medicine, Houston, TX 77030, USA; Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX 77030, USA
| | - Sven Dorkenwald
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA; Computer Science Department, Princeton University, Princeton, NJ 08544, USA
| | | | - Nico Kemnitz
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA
| | - Dodam Ih
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA
| | | | - Jonathan Zung
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA; Computer Science Department, Princeton University, Princeton, NJ 08544, USA
| | - Aleksandar Zlateski
- Electrical Engineering and Computer Science Department, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
| | - Ignacio Tartavull
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA
| | - Szi-Chieh Yu
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA
| | - Sergiy Popovych
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA; Computer Science Department, Princeton University, Princeton, NJ 08544, USA
| | - Shang Mu
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA
| | - William Wong
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA
| | - Chris S Jordan
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA
| | - Manuel Castro
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA
| | - JoAnn Buchanan
- Allen Institute for Brain Science, Seattle, WA 98109, USA
| | | | - Marc Takeno
- Allen Institute for Brain Science, Seattle, WA 98109, USA
| | - Russel Torres
- Allen Institute for Brain Science, Seattle, WA 98109, USA
| | | | - Leila Elabbady
- Allen Institute for Brain Science, Seattle, WA 98109, USA
| | - Yang Li
- Allen Institute for Brain Science, Seattle, WA 98109, USA
| | - Erick Cobos
- Department of Neuroscience, Baylor College of Medicine, Houston, TX 77030, USA; Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX 77030, USA
| | - Pengcheng Zhou
- Department of Statistics, Columbia University, New York, NY 10027, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY 10027, USA; Grossman Center for the Statistics of Mind, Columbia University, New York, NY 10027, USA
| | - Shelby Suckow
- Allen Institute for Brain Science, Seattle, WA 98109, USA
| | - Lynne Becker
- Allen Institute for Brain Science, Seattle, WA 98109, USA
| | - Liam Paninski
- Department of Statistics, Columbia University, New York, NY 10027, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY 10027, USA; Grossman Center for the Statistics of Mind, Columbia University, New York, NY 10027, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA; Department of Neuroscience, Columbia University, New York, NY 10027, USA; Kavli Institute for Brain Science at Columbia University, New York, NY 10027, USA
| | - Franck Polleux
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA; Department of Neuroscience, Columbia University, New York, NY 10027, USA; Kavli Institute for Brain Science at Columbia University, New York, NY 10027, USA
| | - Jacob Reimer
- Department of Neuroscience, Baylor College of Medicine, Houston, TX 77030, USA; Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX 77030, USA
| | - Andreas S Tolias
- Department of Neuroscience, Baylor College of Medicine, Houston, TX 77030, USA; Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX 77030, USA; Department of Electrical and Computer Engineering, Rice University, Houston, TX 77005, USA
| | - R Clay Reid
- Allen Institute for Brain Science, Seattle, WA 98109, USA
| | | | - H Sebastian Seung
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA; Computer Science Department, Princeton University, Princeton, NJ 08544, USA.
| |
29
Wildenberg G, Sorokina A, Koranda J, Monical A, Heer C, Sheffield M, Zhuang X, McGehee D, Kasthuri B. Partial connectomes of labeled dopaminergic circuits reveal non-synaptic communication and axonal remodeling after exposure to cocaine. eLife 2021; 10:e71981. [PMID: 34965204] [PMCID: PMC8716107] [DOI: 10.7554/elife.71981] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Received: 07/06/2021] [Accepted: 11/29/2021] [Indexed: 12/15/2022]
Abstract
Dopaminergic (DA) neurons exert profound influences on behavior, including addiction. However, how DA axons communicate with target neurons, and how those communications change with drug exposure, remains poorly understood. We leverage cell type-specific labeling with large volume serial electron microscopy to detail DA connections in the nucleus accumbens (NAc) of the mouse (Mus musculus) before and after exposure to cocaine. We find that individual DA axons contain different varicosity types based on their vesicle contents. The spatial ordering of varicosities along individual axons further suggests that varicosity types are non-randomly organized. DA axon varicosities rarely make specific synapses (<2%, 6/410), but instead are more likely to form spinule-like structures (15%, 61/410) with neighboring neurons. Days after a brief exposure to cocaine, DA axons were extensively branched relative to controls, formed blind-ended 'bulbs' filled with mitochondria, and were surrounded by elaborated glia. Finally, mitochondrial lengths increased by ~2.2 times relative to control only in DA axons and NAc spiny dendrites after cocaine exposure. We conclude that DA axonal transmission in the NAc is unlikely to be mediated via classical synapses and that the major locus of anatomical plasticity of DA circuits after exposure to cocaine is large-scale axonal rearrangement with correlated changes in mitochondria.
Affiliation(s)
- Gregg Wildenberg
- Department of Neurobiology, University of Chicago, Chicago, United States; Argonne National Laboratory, Lemont, United States
| | - Anastasia Sorokina
- Department of Neurobiology, University of Chicago, Chicago, United States; Argonne National Laboratory, Lemont, United States
| | - Jessica Koranda
- Department of Neurobiology, University of Chicago, Chicago, United States
| | - Alexis Monical
- Department of Anesthesia & Critical Care, University of Chicago, Chicago, United States
| | - Chad Heer
- Department of Neurobiology, University of Chicago, Chicago, United States
| | - Mark Sheffield
- Department of Neurobiology, University of Chicago, Chicago, United States
| | - Xiaoxi Zhuang
- Department of Neurobiology, University of Chicago, Chicago, United States
| | - Daniel McGehee
- Department of Anesthesia & Critical Care, University of Chicago, Chicago, United States
| | - Bobby Kasthuri
- Department of Neurobiology, University of Chicago, Chicago, United States; Argonne National Laboratory, Lemont, United States
| |
30
Lee K, Lu R, Luther K, Seung HS. Learning and Segmenting Dense Voxel Embeddings for 3D Neuron Reconstruction. IEEE Trans Med Imaging 2021; 40:3801-3811. [PMID: 34270419] [PMCID: PMC8692755] [DOI: 10.1109/tmi.2021.3097826] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Indexed: 05/30/2023]
Abstract
We show dense voxel embeddings learned via deep metric learning can be employed to produce a highly accurate segmentation of neurons from 3D electron microscopy images. A "metric graph" on a set of edges between voxels is constructed from the dense voxel embeddings generated by a convolutional network. Partitioning the metric graph with long-range edges as repulsive constraints yields an initial segmentation with high precision, with substantial accuracy gain for very thin objects. The convolutional embedding net is reused without any modification to agglomerate the systematic splits caused by complex "self-contact" motifs. Our proposed method achieves state-of-the-art accuracy on the challenging problem of 3D neuron reconstruction from the brain images acquired by serial section electron microscopy. Our alternative, object-centered representation could be more generally useful for other computational tasks in automated neural circuit reconstruction.
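The "metric graph" construction can be sketched in 2D as follows (an illustrative sketch only; the paper operates on 3D volumes with embeddings produced by a convolutional network, and the function name and offsets convention here are assumptions): for each spatial offset, the edge weight between a voxel and its offset neighbour is the Euclidean distance between their embedding vectors, so small weights suggest the pair belongs to one object.

```python
import numpy as np

def metric_graph_weights(emb, offsets):
    """emb: (C, H, W) dense pixel embeddings. For each non-negative
    offset (dy, dx), return an (H-dy, W-dx) array of Euclidean
    distances between each pixel's embedding and its neighbour's."""
    C, H, W = emb.shape
    graphs = {}
    for dy, dx in offsets:
        a = emb[:, : H - dy, : W - dx]   # edge start points
        b = emb[:, dy:, dx:]             # edge end points, shifted
        graphs[(dy, dx)] = np.linalg.norm(a - b, axis=0)
    return graphs
```

Short-range edges with small distances act as attractive cues and long-range edges with large distances as repulsive constraints when the graph is subsequently partitioned.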
31
Whole-cell organelle segmentation in volume electron microscopy. Nature 2021; 599:141-146. [PMID: 34616042] [DOI: 10.1038/s41586-021-03977-3] [Citation(s) in RCA: 96] [Impact Index Per Article: 32.0] [Received: 11/13/2020] [Accepted: 08/31/2021] [Indexed: 12/12/2022]
Abstract
Cells contain hundreds of organelles and macromolecular assemblies. Obtaining a complete understanding of their intricate organization requires the nanometre-level, three-dimensional reconstruction of whole cells, which is only feasible with robust and scalable automatic methods. Here, to support the development of such methods, we annotated up to 35 different cellular organelle classes, ranging from endoplasmic reticulum to microtubules to ribosomes, in diverse sample volumes from multiple cell types imaged at a near-isotropic resolution of 4 nm per voxel with focused ion beam scanning electron microscopy (FIB-SEM). We trained deep learning architectures to segment these structures in 4 nm and 8 nm per voxel FIB-SEM volumes, validated their performance and showed that automatic reconstructions can be used to directly quantify previously inaccessible metrics including spatial interactions between cellular components. We also show that such reconstructions can be used to automatically register light and electron microscopy images for correlative studies. We have created an open data and open-source web repository, 'OpenOrganelle', to share the data, computer code and trained models, which will enable scientists everywhere to query and further improve automatic reconstruction of these datasets.
32
Wolf S, Bailoni A, Pape C, Rahaman N, Kreshuk A, Köthe U, Hamprecht FA. The Mutex Watershed and its Objective: Efficient, Parameter-Free Graph Partitioning. IEEE Trans Pattern Anal Mach Intell 2021; 43:3724-3738. [PMID: 32175858] [DOI: 10.1109/tpami.2020.2980827] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Indexed: 06/10/2023]
Abstract
Image partitioning, or segmentation without semantics, is the task of decomposing an image into distinct segments, or equivalently to detect closed contours. Most prior work either requires seeds, one per segment; or a threshold; or formulates the task as multicut / correlation clustering, an NP-hard problem. Here, we propose an efficient algorithm for graph partitioning, the "Mutex Watershed". Unlike seeded watershed, the algorithm can accommodate not only attractive but also repulsive cues, allowing it to find a previously unspecified number of segments without the need for explicit seeds or a tunable threshold. We also prove that this simple algorithm solves to global optimality an objective function that is intimately related to the multicut / correlation clustering integer linear programming formulation. The algorithm is deterministic, very simple to implement, and has empirically linearithmic complexity. When presented with short-range attractive and long-range repulsive cues from a deep neural network, the Mutex Watershed gives the best results currently known for the competitive ISBI 2012 EM segmentation benchmark.
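The algorithm admits a compact sketch: edges are visited in order of decreasing priority; an attractive edge merges two clusters unless a mutex constraint forbids it, and a repulsive edge installs a mutex constraint between the clusters. This is a naive illustration with explicit mutex bookkeeping, not the paper's optimized union-find implementation.

```python
def mutex_watershed(n_nodes, edges):
    """edges: iterable of (u, v, priority, is_attractive).
    Returns a cluster label (root id) for every node."""
    parent = list(range(n_nodes))

    def find(x):  # union-find root lookup with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mutex = set()  # frozensets of root pairs that must stay separated
    for u, v, _, attractive in sorted(edges, key=lambda e: -e[2]):
        ru, rv = find(u), find(v)
        if ru == rv:
            continue
        if attractive:
            if frozenset((ru, rv)) not in mutex:
                parent[rv] = ru  # merge clusters
                # migrate rv's mutex constraints to the new root ru
                for pair in [p for p in mutex if rv in p]:
                    (other,) = pair - {rv}
                    mutex.discard(pair)
                    mutex.add(frozenset((ru, other)))
        else:
            mutex.add(frozenset((ru, rv)))
    return [find(i) for i in range(n_nodes)]
```

Because stronger edges are committed first, no seeds or thresholds are needed: a strong repulsive cue processed early permanently blocks later, weaker attractive cues from bridging the two clusters.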
|
33
|
Whole-body integration of gene expression and single-cell morphology. Cell 2021; 184:4819-4837.e22. [PMID: 34380046] [PMCID: PMC8445025] [DOI: 10.1016/j.cell.2021.07.017]
Abstract
Animal bodies are composed of cell types with unique expression programs that implement their distinct locations, shapes, structures, and functions. Based on these properties, cell types assemble into specific tissues and organs. To systematically explore the link between cell-type-specific gene expression and morphology, we registered an expression atlas to a whole-body electron microscopy volume of the nereid Platynereis dumerilii. Automated segmentation of cells and nuclei identifies major cell classes and establishes a link between gene activation, chromatin topography, and nuclear size. Clustering of segmented cells according to gene expression reveals spatially coherent tissues. In the brain, genetically defined groups of neurons match ganglionic nuclei with coherent projections. Besides interneurons, we uncover sensory-neurosecretory cells in the nereid mushroom bodies, which thus qualify as sensory organs. They furthermore resemble the vertebrate telencephalon by molecular anatomy. We provide an integrated browser as a Fiji plugin for remote exploration of all available multimodal datasets.
- A cellular atlas integrates gene expression and ultrastructure for an entire annelid
- Morphometry of all segmented cells, nuclei, and chromatin categorizes cell classes
- Molecular anatomy and projectome of head ganglionic nuclei and mushroom bodies
- An open-source browser for multimodal big image data exploration and analysis
|
34
|
An effective AI integrated system for neuron tracing on anisotropic electron microscopy volume. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102829]
|
35
|
Buhmann J, Sheridan A, Malin-Mayor C, Schlegel P, Gerhard S, Kazimiers T, Krause R, Nguyen TM, Heinrich L, Lee WCA, Wilson R, Saalfeld S, Jefferis GSXE, Bock DD, Turaga SC, Cook M, Funke J. Automatic detection of synaptic partners in a whole-brain Drosophila electron microscopy data set. Nat Methods 2021; 18:771-774. [PMID: 34168373] [PMCID: PMC7611460] [DOI: 10.1038/s41592-021-01183-7]
Abstract
We develop an automatic method for synaptic partner identification in insect brains and use it to predict synaptic partners in a whole-brain electron microscopy dataset of the fruit fly. The predictions can be used to infer a connectivity graph with high accuracy, thus allowing fast identification of neural pathways. To facilitate circuit reconstruction using our results, we develop CIRCUITMAP, a user interface add-on for the circuit annotation tool CATMAID.
Affiliation(s)
- Julia Buhmann
- HHMI Janelia Research Campus, Ashburn, VA, USA
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Stephan Gerhard
- Harvard Medical School, Boston, MA, USA
- UniDesign Solutions GmbH, Zürich, Switzerland
- Renate Krause
- HHMI Janelia Research Campus, Ashburn, VA, USA
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Matthew Cook
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Jan Funke
- HHMI Janelia Research Campus, Ashburn, VA, USA
|
36
|
Conrad R, Narayan K. CEM500K, a large-scale heterogeneous unlabeled cellular electron microscopy image dataset for deep learning. eLife 2021; 10:e65894. [PMID: 33830015] [PMCID: PMC8032397] [DOI: 10.7554/elife.65894]
Abstract
Automated segmentation of cellular electron microscopy (EM) datasets remains a challenge. Supervised deep learning (DL) methods that rely on region-of-interest (ROI) annotations yield models that fail to generalize to unrelated datasets. Newer unsupervised DL algorithms require relevant pre-training images; however, pre-training on currently available EM datasets is computationally expensive and shows little value for unseen biological contexts, as these datasets are large and homogeneous. To address this issue, we present CEM500K, a nimble 25 GB dataset of 0.5 × 10⁶ unique 2D cellular EM images curated from nearly 600 three-dimensional (3D) and 10,000 two-dimensional (2D) images from >100 unrelated imaging projects. We show that models pre-trained on CEM500K learn features that are biologically relevant and resilient to meaningful image augmentations. Critically, we evaluate transfer learning from these pre-trained models on six publicly available and one newly derived benchmark segmentation task and report state-of-the-art results on each. We release the CEM500K dataset, pre-trained models and curation pipeline for model building and further expansion by the EM community. Data and code are available at https://www.ebi.ac.uk/pdbe/emdb/empiar/entry/10592/ and https://git.io/JLLTz.
Affiliation(s)
- Ryan Conrad
- Center for Molecular Microscopy, Center for Cancer Research, National Cancer Institute, National Institutes of Health, Bethesda, United States
- Cancer Research Technology Program, Frederick National Laboratory for Cancer Research, Frederick, United States
- Kedar Narayan
- Center for Molecular Microscopy, Center for Cancer Research, National Cancer Institute, National Institutes of Health, Bethesda, United States
- Cancer Research Technology Program, Frederick National Laboratory for Cancer Research, Frederick, United States
|
37
|
Xing R, Niu S, Gao X, Liu T, Fan W, Chen Y. Weakly supervised serous retinal detachment segmentation in SD-OCT images by two-stage learning. Biomedical Optics Express 2021; 12:2312-2327. [PMID: 33996231] [PMCID: PMC8086451] [DOI: 10.1364/boe.416167]
Abstract
Automated lesion segmentation is one of the important tasks for the quantitative assessment of retinal diseases in SD-OCT images. Recently, deep convolutional neural networks (CNN) have shown promising advancements in the field of automated image segmentation, whereas they always benefit from large-scale datasets with high-quality pixel-wise annotations. Unfortunately, obtaining accurate annotations is expensive in both human effort and finance. In this paper, we propose a weakly supervised two-stage learning architecture to detect and further segment central serous chorioretinopathy (CSC) retinal detachment with only image-level annotations. Specifically, in the first stage, a Located-CNN is designed to detect the location of lesion regions in the whole SD-OCT retinal images and highlight the distinguishing regions. To generate usable pseudo pixel-level labels, the conventional level set method is employed to refine the distinguishing regions. In the second stage, we customize the active-contour loss function in deep networks to achieve effective segmentation of the lesion area. A challenging dataset is used to evaluate our proposed method, and the results demonstrate that the proposed method consistently outperforms several current models trained with different levels of supervision, and is even competitive with those relying on stronger supervision. To the best of our knowledge, we are the first to achieve CSC segmentation in SD-OCT images using weakly supervised learning, which can greatly reduce the labeling effort.
Affiliation(s)
- Ruiwen Xing
- School of Information Science and Engineering, University of Jinan, Jinan 250022, China
- Shandong Provincial Key Laboratory of Network-based Intelligent Computing, Jinan 250022, China
- Sijie Niu
- School of Information Science and Engineering, University of Jinan, Jinan 250022, China
- Shandong Provincial Key Laboratory of Network-based Intelligent Computing, Jinan 250022, China
- Xizhan Gao
- School of Information Science and Engineering, University of Jinan, Jinan 250022, China
- Shandong Provincial Key Laboratory of Network-based Intelligent Computing, Jinan 250022, China
- Tingting Liu
- Shandong Eye Hospital, State Key Laboratory Cultivation Base, Shandong Provincial Key Laboratory of Ophthalmology, Shandong Eye Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan 250014, China
- Wen Fan
- Department of Ophthalmology, The First Affiliated Hospital with Nanjing Medical University, Nanjing 210094, China
- Yuehui Chen
- School of Information Science and Engineering, University of Jinan, Jinan 250022, China
- Shandong Provincial Key Laboratory of Network-based Intelligent Computing, Jinan 250022, China
|
38
|
Yuan Z, Ma X, Yi J, Luo Z, Peng J. HIVE-Net: Centerline-aware hierarchical view-ensemble convolutional network for mitochondria segmentation in EM images. Computer Methods and Programs in Biomedicine 2021; 200:105925. [PMID: 33508773] [DOI: 10.1016/j.cmpb.2020.105925]
Abstract
BACKGROUND AND OBJECTIVE: With the advancement of electron microscopy (EM) imaging technology, neuroscientists can investigate the function of various intracellular organelles, e.g., mitochondria, at the nanoscale. Semantic segmentation of EM images is an essential step toward efficiently obtaining reliable morphological statistics. Despite the great success achieved using deep convolutional neural networks (CNNs), they still produce coarse segmentations with many discontinuities and false positives for mitochondria segmentation.
METHODS: In this study, we introduce a centerline-aware multitask network that uses the centerline as an intrinsic shape cue of mitochondria to regularize the segmentation. Since the application of 3D CNNs to large medical volumes is usually hindered by their substantial computational cost and storage overhead, we introduce a novel hierarchical view-ensemble convolution (HVEC), a simple alternative to 3D convolution that learns 3D spatial contexts using more efficient 2D convolutions. The HVEC enables both decomposing and sharing multi-view information, leading to increased learning capacity.
RESULTS: Extensive validation on two challenging benchmarks shows that the proposed method performs favorably against state-of-the-art methods in accuracy and visual quality, but with a greatly reduced model size. The proposed model also shows significantly improved generalization ability, especially when trained with quite limited amounts of training data. A detailed sensitivity analysis and ablation study confirm the robustness of the proposed model and the effectiveness of the proposed modules.
CONCLUSIONS: The experiments highlight that the proposed architecture combines simplicity and efficiency with an increased capacity for learning spatial contexts. Moreover, incorporating shape cues such as centerline information is a promising approach to improving the performance of mitochondria segmentation.
Affiliation(s)
- Zhimin Yuan
- College of Computer Science and Technology, Huaqiao University, Xiamen 361021, China
- Xiaofen Ma
- Department of Medical Imaging, Guangdong Second Provincial General Hospital, Guangzhou 510317, China
- Jiajin Yi
- College of Computer Science and Technology, Huaqiao University, Xiamen 361021, China
- Zhengrong Luo
- College of Computer Science and Technology, Huaqiao University, Xiamen 361021, China
- Jialin Peng
- College of Computer Science and Technology, Huaqiao University, Xiamen 361021, China
- Xiamen Key Laboratory of Computer Vision and Pattern Recognition, Huaqiao University, Xiamen 361021, China
|
39
|
Abdollahzadeh A, Belevich I, Jokitalo E, Sierra A, Tohka J. DeepACSON automated segmentation of white matter in 3D electron microscopy. Commun Biol 2021; 4:179. [PMID: 33568775] [PMCID: PMC7876004] [DOI: 10.1038/s42003-021-01699-w]
Abstract
Tracing the entirety of ultrastructures in large three-dimensional electron microscopy (3D-EM) images of brain tissue requires automated segmentation techniques. Current segmentation techniques use deep convolutional neural networks (DCNNs) and rely on high-contrast cellular membranes and high-resolution EM volumes. Segmenting low-resolution, large EM volumes, on the other hand, requires methods that account for the severe membrane discontinuities that are inescapable at low resolution. Therefore, we developed DeepACSON, which performs DCNN-based semantic segmentation and shape-decomposition-based instance segmentation. DeepACSON instance segmentation exploits the tubularity of myelinated axons and decomposes under-segmented myelinated axons into their constituent axons. We applied DeepACSON to ten EM volumes of rats after sham-operation or traumatic brain injury, segmenting hundreds of thousands of long-span myelinated axons, thousands of cell nuclei, and millions of mitochondria with excellent evaluation scores. DeepACSON quantified the morphology and spatial aspects of white matter ultrastructures, capturing nanoscopic morphological alterations five months after the injury.
Affiliation(s)
- Ali Abdollahzadeh
- A. I. Virtanen Institute for Molecular Sciences, University of Eastern Finland, Kuopio, Finland
- Ilya Belevich
- Electron Microscopy Unit, Institute of Biotechnology, University of Helsinki, Helsinki, Finland
- Eija Jokitalo
- Electron Microscopy Unit, Institute of Biotechnology, University of Helsinki, Helsinki, Finland
- Alejandra Sierra
- A. I. Virtanen Institute for Molecular Sciences, University of Eastern Finland, Kuopio, Finland
- Jussi Tohka
- A. I. Virtanen Institute for Molecular Sciences, University of Eastern Finland, Kuopio, Finland
|
40
|
Kuan AT, Phelps JS, Thomas LA, Nguyen TM, Han J, Chen CL, Azevedo AW, Tuthill JC, Funke J, Cloetens P, Pacureanu A, Lee WCA. Dense neuronal reconstruction through X-ray holographic nano-tomography. Nat Neurosci 2020; 23:1637-1643. [PMID: 32929244] [PMCID: PMC8354006] [DOI: 10.1038/s41593-020-0704-9]
Abstract
Imaging neuronal networks provides a foundation for understanding the nervous system, but resolving dense nanometer-scale structures over large volumes remains challenging for light microscopy (LM) and electron microscopy (EM). Here we show that X-ray holographic nano-tomography (XNH) can image millimeter-scale volumes with sub-100-nm resolution, enabling reconstruction of dense wiring in Drosophila melanogaster and mouse nervous tissue. We performed correlative XNH and EM to reconstruct hundreds of cortical pyramidal cells and show that more superficial cells receive stronger synaptic inhibition on their apical dendrites. By combining multiple XNH scans, we imaged an adult Drosophila leg with sufficient resolution to comprehensively catalog mechanosensory neurons and trace individual motor axons from muscles to the central nervous system. To accelerate neuronal reconstructions, we trained a convolutional neural network to automatically segment neurons from XNH volumes. Thus, XNH bridges a key gap between LM and EM, providing a new avenue for neural circuit discovery.
Affiliation(s)
- Aaron T Kuan
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA
- Jasper S Phelps
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA
- Program in Neuroscience, Harvard University, Boston, MA, USA
- Logan A Thomas
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA
- Tri M Nguyen
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA
- Julie Han
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA
- Chiao-Lin Chen
- Department of Genetics, Harvard Medical School, Boston, MA, USA
- Anthony W Azevedo
- Department of Physiology and Biophysics, University of Washington, Seattle, WA, USA
- John C Tuthill
- Department of Physiology and Biophysics, University of Washington, Seattle, WA, USA
- Jan Funke
- HHMI Janelia Research Campus, Ashburn, VA, USA
- Alexandra Pacureanu
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA
- ESRF, The European Synchrotron, Grenoble, France
- Wei-Chung Allen Lee
- F.M. Kirby Neurobiology Center, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA
|
41
|
Wei D, Lin Z, Franco-Barranco D, Wendt N, Liu X, Yin W, Huang X, Gupta A, Jang WD, Wang X, Arganda-Carreras I, Lichtman JW, Pfister H. MitoEM Dataset: Large-scale 3D Mitochondria Instance Segmentation from EM Images. Med Image Comput Comput Assist Interv 2020; 12265:66-76. [PMID: 33283212] [PMCID: PMC7713709] [DOI: 10.1007/978-3-030-59722-1_7]
Abstract
Electron microscopy (EM) allows the identification of intracellular organelles such as mitochondria, providing insights for clinical and scientific studies. However, public mitochondria segmentation datasets only contain hundreds of instances with simple shapes. It is unclear if existing methods achieving human-level accuracy on these small datasets are robust in practice. To this end, we introduce the MitoEM dataset, a 3D mitochondria instance segmentation dataset with two (30 μm)³ volumes from human and rat cortices respectively, 3,600× larger than previous benchmarks. With around 40K instances, we find a great diversity of mitochondria in terms of shape and density. For evaluation, we tailor the implementation of the average precision (AP) metric for 3D data, achieving a 45× speedup. On MitoEM, we find existing instance segmentation methods often fail to correctly segment mitochondria with complex shapes or close contacts with other instances. Thus, our MitoEM dataset poses new challenges to the field. We release our code and data: https://donglaiw.github.io/page/mitoEM/index.html.
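The instance-matching step behind AP-style metrics like the one this abstract tailors for 3D can be illustrated with a naive greedy IoU matcher. The sketch below (invented names, O(n²) over label pairs; the paper's optimized implementation is far more efficient) counts true positives, false positives and false negatives between ground-truth and predicted label volumes:

```python
import numpy as np

def match_instances(gt, pred, iou_thresh=0.5):
    """Greedy IoU matching of instance labels; returns (tp, fp, fn).

    gt, pred: integer label volumes, 0 = background. Illustrative sketch only.
    """
    gt_ids = [g for g in np.unique(gt) if g != 0]
    pred_ids = [p for p in np.unique(pred) if p != 0]
    matched_gt, tp = set(), 0
    for p in pred_ids:
        pmask = pred == p
        best_iou, best_g = 0.0, None
        for g in gt_ids:
            if g in matched_gt:
                continue
            gmask = gt == g
            inter = np.logical_and(pmask, gmask).sum()
            union = np.logical_or(pmask, gmask).sum()
            iou = inter / union if union else 0.0
            if iou > best_iou:
                best_iou, best_g = iou, g
        if best_iou >= iou_thresh:          # a prediction counts only above the IoU cutoff
            tp += 1
            matched_gt.add(best_g)
    fp = len(pred_ids) - tp                 # unmatched predictions
    fn = len(gt_ids) - len(matched_gt)      # unmatched ground-truth instances
    return tp, fp, fn
```

Sweeping `iou_thresh` (or a prediction-score threshold) over such counts is what yields the precision-recall curve that AP summarizes.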
Affiliation(s)
- Ignacio Arganda-Carreras
- Donostia International Physics Center
- University of the Basque Country
- Ikerbasque, Basque Foundation for Science
|
42
|
Liu S, Yin L, Miao S, Ma J, Cong S, Hu S. Multimodal Medical Image Fusion using Rolling Guidance Filter with CNN and Nuclear Norm Minimization. Curr Med Imaging 2020; 16:1243-1258. [PMID: 32807062] [DOI: 10.2174/1573405616999200817103920]
Abstract
BACKGROUND: Medical image fusion is very important for the diagnosis and treatment of diseases. In recent years, a number of multi-modal medical image fusion algorithms have been proposed that present diagnostic context more clearly and conveniently. Recently, nuclear norm minimization and deep learning have been used effectively in image processing.
METHODS: A multi-modality medical image fusion method using a rolling guidance filter (RGF) with convolutional neural network (CNN)-based feature mapping and nuclear norm minimization (NNM) is proposed. First, we decompose the medical images into base-layer and detail-layer components using RGF. Next, we obtain the basic fused image through a pretrained CNN model: the pretrained CNN extracts the significant characteristics of the base-layer components, and the activity-level measurement is computed from the regional energy of the CNN-based fusion maps. Then, a detail fused image is obtained via NNM; that is, we use NNM to fuse the detail-layer components. Finally, the basic and detail fused images are combined into the fused result.
RESULTS: In comparison with state-of-the-art fusion algorithms, the experimental results indicate that this fusion algorithm performs best in both visual evaluation and objective metrics.
CONCLUSION: The fusion algorithm using RGF and CNN-based feature mapping, combined with NNM, improves fusion quality and suppresses artifacts and blocking effects in the fused results.
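The base/detail fusion skeleton this abstract describes can be sketched in a few lines. Note the heavy substitutions: a Gaussian filter stands in for the rolling guidance filter, plain averaging stands in for the CNN-based activity measure, and a max-absolute rule stands in for NNM; the function name and parameters are ours, purely for illustration.

```python
import numpy as np
from scipy import ndimage as ndi

def fuse(img_a, img_b, sigma=2.0):
    """Toy two-scale image fusion: decompose, fuse each scale, recombine."""
    # base layers: low-frequency content (stand-in for the RGF decomposition)
    base_a = ndi.gaussian_filter(img_a, sigma)
    base_b = ndi.gaussian_filter(img_b, sigma)
    # detail layers: what the smoothing removed
    det_a, det_b = img_a - base_a, img_b - base_b
    # fuse base layers by averaging (stand-in for CNN-derived activity weights)
    base = 0.5 * (base_a + base_b)
    # fuse detail layers by keeping the stronger response (stand-in for NNM)
    detail = np.where(np.abs(det_a) >= np.abs(det_b), det_a, det_b)
    return base + detail
```

The point of the two-scale split is that large structures and fine textures can be fused with different rules, which is exactly where the paper plugs in RGF, the CNN, and NNM respectively.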
Affiliation(s)
- Shuaiqi Liu
- College of Electronic and Information Engineering, Hebei University, Baoding, Hebei, China
- Lu Yin
- College of Electronic and Information Engineering, Hebei University, Baoding, Hebei, China
- Siyu Miao
- College of Electronic and Information Engineering, Hebei University, Baoding, Hebei, China
- Jian Ma
- College of Electronic and Information Engineering, Hebei University, Baoding, Hebei, China
- Shuai Cong
- Industrial and Commercial College, Hebei University, Baoding, Hebei, China
- Shaohai Hu
- College of Computer and Information, Beijing Jiaotong University, Beijing, China
|
43
|
Peng J, Yuan Z. Mitochondria Segmentation From EM Images via Hierarchical Structured Contextual Forest. IEEE J Biomed Health Inform 2020; 24:2251-2259. [DOI: 10.1109/jbhi.2019.2961792]
|
44
|
Wolny A, Cerrone L, Vijayan A, Tofanelli R, Barro AV, Louveaux M, Wenzl C, Strauss S, Wilson-Sánchez D, Lymbouridou R, Steigleder SS, Pape C, Bailoni A, Duran-Nebreda S, Bassel GW, Lohmann JU, Tsiantis M, Hamprecht FA, Schneitz K, Maizel A, Kreshuk A. Accurate and versatile 3D segmentation of plant tissues at cellular resolution. eLife 2020; 9:e57613. [PMID: 32723478] [PMCID: PMC7447435] [DOI: 10.7554/elife.57613]
Abstract
Quantitative analysis of plant and animal morphogenesis requires accurate segmentation of individual cells in volumetric images of growing organs. In recent years, deep learning has provided robust automated algorithms that approach human performance, with applications to bio-image analysis now starting to emerge. Here, we present PlantSeg, a pipeline for volumetric segmentation of plant tissues into cells. PlantSeg employs a convolutional neural network to predict cell boundaries and graph partitioning to segment cells based on the neural network predictions. PlantSeg was trained on fixed and live plant organs imaged with confocal and light sheet microscopes. PlantSeg delivers accurate results and generalizes well across different tissues, scales, and acquisition settings, even on non-plant samples. We present results of PlantSeg applications in diverse developmental contexts. PlantSeg is free and open-source, with both a command line and a user-friendly graphical interface.
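The boundary-map-to-instances step that the abstract describes can be illustrated on a toy scale. In this sketch the trained boundary-prediction network and the graph-partitioning stage are both replaced by something far simpler (a synthetic boundary-probability map and connected components; the function name and threshold are our own invention), just to show the shape of the pipeline:

```python
import numpy as np
from scipy import ndimage as ndi

def boundaries_to_instances(boundary_prob, threshold=0.5):
    """Label connected low-boundary-probability regions as cell instances.

    boundary_prob: array of per-pixel boundary probabilities in [0, 1].
    Returns (label image, number of instances). Illustrative stand-in only.
    """
    interior = boundary_prob < threshold      # pixels confidently inside a cell
    labels, num_cells = ndi.label(interior)   # one integer id per connected region
    return labels, num_cells
```

In the real pipeline the boundary probabilities come from a 3D U-Net, and the partitioning stage is what repairs gaps in predicted walls that would make naive connected components leak between cells.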
Affiliation(s)
- Adrian Wolny
- Heidelberg Collaboratory for Image Processing, Heidelberg University, Heidelberg, Germany
- EMBL, Heidelberg, Germany
- Lorenzo Cerrone
- Heidelberg Collaboratory for Image Processing, Heidelberg University, Heidelberg, Germany
- Athul Vijayan
- School of Life Sciences Weihenstephan, Technical University of Munich, Freising, Germany
- Rachele Tofanelli
- School of Life Sciences Weihenstephan, Technical University of Munich, Freising, Germany
- Marion Louveaux
- Centre for Organismal Studies, Heidelberg University, Heidelberg, Germany
- Christian Wenzl
- Centre for Organismal Studies, Heidelberg University, Heidelberg, Germany
- Sören Strauss
- Department of Comparative Development and Genetics, Max Planck Institute for Plant Breeding Research, Cologne, Germany
- David Wilson-Sánchez
- Department of Comparative Development and Genetics, Max Planck Institute for Plant Breeding Research, Cologne, Germany
- Rena Lymbouridou
- Department of Comparative Development and Genetics, Max Planck Institute for Plant Breeding Research, Cologne, Germany
- Constantin Pape
- Heidelberg Collaboratory for Image Processing, Heidelberg University, Heidelberg, Germany
- EMBL, Heidelberg, Germany
- Alberto Bailoni
- Heidelberg Collaboratory for Image Processing, Heidelberg University, Heidelberg, Germany
- George W Bassel
- School of Life Sciences, University of Warwick, Coventry, United Kingdom
- Jan U Lohmann
- Centre for Organismal Studies, Heidelberg University, Heidelberg, Germany
- Miltos Tsiantis
- Department of Comparative Development and Genetics, Max Planck Institute for Plant Breeding Research, Cologne, Germany
- Fred A Hamprecht
- Heidelberg Collaboratory for Image Processing, Heidelberg University, Heidelberg, Germany
- Kay Schneitz
- School of Life Sciences Weihenstephan, Technical University of Munich, Freising, Germany
- Alexis Maizel
- Centre for Organismal Studies, Heidelberg University, Heidelberg, Germany
|
45
|
Computer Vision and Deep Learning Techniques for the Analysis of Drone-Acquired Forest Images, a Transfer Learning Study. Remote Sensing 2020. [DOI: 10.3390/rs12081287]
Abstract
Unmanned Aerial Vehicles (UAV) are becoming an essential tool for evaluating the status of, and changes in, forest ecosystems. This is especially important in Japan due to the sheer magnitude and complexity of the forest area, made up mostly of natural mixed broadleaf deciduous forests. Additionally, Deep Learning (DL) is becoming more popular for forestry applications because it allows expert human knowledge to be incorporated into the automatic image processing pipeline. In this paper we study and quantify issues related to the use of DL with our own UAV-acquired images in forestry applications, such as the effect of Transfer Learning (TL), the choice of Deep Learning architecture, and whether a simple patch-based framework can produce results across different practical problems. We use two Deep Learning architectures (ResNet50 and UNet), two in-house datasets (winter and coastal forest) and focus on two separate problem formalizations (Multi-Label Patch or MLP classification, and semantic segmentation). Our results show that Transfer Learning is necessary to obtain satisfactory outcomes in the MLP classification of deciduous vs. evergreen trees in the winter orthomosaic dataset (with a 9.78% improvement from no transfer learning to transfer learning from a general-purpose dataset). We also observe a further 2.7% improvement when Transfer Learning is performed from a dataset that is closer to our type of images. Finally, we demonstrate the applicability of the patch-based framework with the ResNet50 architecture in a different and complex example: detection of the invasive broadleaf deciduous black locust (Robinia pseudoacacia) in an evergreen coniferous black pine (Pinus thunbergii) coastal forest typical of Japan. In this case we detect images containing the invasive species with 75% true positives (TP) and 9% false positives (FP), while detection of native trees reached 95% TP and 10% FP.
|
46
|
Bates AS, Manton JD, Jagannathan SR, Costa M, Schlegel P, Rohlfing T, Jefferis GSXE. The natverse, a versatile toolbox for combining and analysing neuroanatomical data. eLife 2020; 9:e53350. [PMID: 32286229] [PMCID: PMC7242028] [DOI: 10.7554/elife.53350]
Abstract
To analyse neuron data at scale, neuroscientists expend substantial effort reading documentation, installing dependencies and moving between analysis and visualisation environments. To facilitate this, we have developed a suite of interoperable open-source R packages called the natverse. The natverse allows users to read local and remote data, perform popular analyses including visualisation and clustering and graph-theoretic analysis of neuronal branching. Unlike most tools, the natverse enables comparison across many neurons of morphology and connectivity after imaging or co-registration within a common template space. The natverse also enables transformations between different template spaces and imaging modalities. We demonstrate tools that integrate the vast majority of Drosophila neuroanatomical light microscopy and electron microscopy connectomic datasets. The natverse is an easy-to-use environment for neuroscientists to solve complex, large-scale analysis challenges as well as an open platform to create new code and packages to share with the community.
Affiliation(s)
- James D Manton
- Neurobiology Division, MRC Laboratory of Molecular Biology, Cambridge, United Kingdom
- Sridhar R Jagannathan
- Drosophila Connectomics Group, Department of Zoology, University of Cambridge, Cambridge, United Kingdom
- Marta Costa
- Neurobiology Division, MRC Laboratory of Molecular Biology, Cambridge, United Kingdom
- Drosophila Connectomics Group, Department of Zoology, University of Cambridge, Cambridge, United Kingdom
- Philipp Schlegel
- Neurobiology Division, MRC Laboratory of Molecular Biology, Cambridge, United Kingdom
- Drosophila Connectomics Group, Department of Zoology, University of Cambridge, Cambridge, United Kingdom
- Torsten Rohlfing
- SRI International, Neuroscience Program, Center for Health Sciences, Menlo Park, United States
- Gregory SXE Jefferis
- Neurobiology Division, MRC Laboratory of Molecular Biology, Cambridge, United Kingdom
- Drosophila Connectomics Group, Department of Zoology, University of Cambridge, Cambridge, United Kingdom
|
47
|
Ishii S, Lee S, Urakubo H, Kume H, Kasai H. Generative and discriminative model-based approaches to microscopic image restoration and segmentation. Microscopy (Oxf) 2020; 69:79-91. [PMID: 32215571 PMCID: PMC7141893 DOI: 10.1093/jmicro/dfaa007] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2019] [Revised: 02/02/2020] [Accepted: 02/17/2020] [Indexed: 11/14/2022] Open
Abstract
Image processing is one of the most important applications of recent machine learning (ML) technologies. Convolutional neural networks (CNNs), a popular deep learning-based ML architecture, have been developed for image processing applications. However, the application of ML to microscopic images remains limited: microscopic images are often 3D/4D, so the image sizes can be very large, and the images may suffer from serious noise introduced by the optics. In this review, three types of feature reconstruction applications to microscopic images are discussed, each of which fully utilizes recent advancements in ML technologies. First, multi-frame super-resolution is introduced, based on the formulation of statistical generative model-based techniques such as Bayesian inference. Second, data-driven image restoration is introduced, based on supervised discriminative model-based ML techniques; in this application, CNNs are demonstrated to exhibit preferable restoration performance. Third, image segmentation based on data-driven CNNs is introduced. Image segmentation has become immensely popular for object segmentation based on electron microscopy (EM); therefore, we focus on EM image processing.
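As a concrete toy example of the generative-model viewpoint (an illustration of the general principle, not code from the review): under i.i.d. Gaussian noise and a flat prior, the MAP estimate of a static scene from K registered frames is the per-pixel mean, which shrinks the residual noise by a factor of about sqrt(K):

```python
import numpy as np

def multiframe_estimate(frames):
    """MAP estimate of a static scene from registered noisy frames,
    assuming i.i.d. Gaussian noise and a flat prior: the per-pixel mean."""
    return np.mean(frames, axis=0)

rng = np.random.default_rng(0)
truth = rng.random((64, 64))
frames = truth + rng.normal(scale=0.5, size=(16, 64, 64))  # 16 noisy frames

restored = multiframe_estimate(frames)
noise_single = np.std(frames[0] - truth)       # ~0.5
noise_restored = np.std(restored - truth)      # ~0.5 / sqrt(16)
```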
Collapse
Affiliation(s)
- Shin Ishii
- Graduate School of Informatics, Kyoto University, Kyoto 606-8501, Japan
- ATR Neural Information Analysis Laboratories, Kyoto 619-0288, Japan
- International Research Center for Neurointelligence, The University of Tokyo, Tokyo 113-0033, Japan
- Sehyung Lee
- Graduate School of Informatics, Kyoto University, Kyoto 606-8501, Japan
- International Research Center for Neurointelligence, The University of Tokyo, Tokyo 113-0033, Japan
- Hidetoshi Urakubo
- Graduate School of Informatics, Kyoto University, Kyoto 606-8501, Japan
- Hideaki Kume
- International Research Center for Neurointelligence, The University of Tokyo, Tokyo 113-0033, Japan
- Graduate School of Medicine, The University of Tokyo, Tokyo 113-0033, Japan
- Haruo Kasai
- International Research Center for Neurointelligence, The University of Tokyo, Tokyo 113-0033, Japan
- Graduate School of Medicine, The University of Tokyo, Tokyo 113-0033, Japan
48
Koziński M, Mosinska A, Salzmann M, Fua P. Tracing in 2D to reduce the annotation effort for 3D deep delineation of linear structures. Med Image Anal 2019; 60:101590. [PMID: 31841949 DOI: 10.1016/j.media.2019.101590] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/21/2018] [Revised: 10/19/2019] [Accepted: 10/21/2019] [Indexed: 11/16/2022]
Abstract
The difficulty of obtaining annotations to build training databases still slows down the adoption of recent deep learning approaches for biomedical image analysis. In this paper, we show that we can train a Deep Net to perform 3D volumetric delineation given only 2D annotations in Maximum Intensity Projections (MIP) of the training volumes. This significantly reduces the annotation time: We conducted a user study that suggests that annotating 2D projections is on average twice as fast as annotating the original 3D volumes. Our technical contribution is a loss function that evaluates a 3D prediction against annotations of 2D projections. It is inspired by space carving, a classical approach to reconstructing complex 3D shapes from arbitrarily-positioned cameras. It can be used to train any deep network with volumetric output, without the need to change the network's architecture. Substituting the loss is all it takes to enable 2D annotations in an existing training setup. In extensive experiments on 3D light microscopy images of neurons and retinal blood vessels, and on Magnetic Resonance Angiography (MRA) brain scans, we show that, when trained on projection annotations, deep delineation networks perform as well as when they are trained using costlier 3D annotations.
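The core idea of the loss function described above can be sketched in a few lines. The following is a schematic numpy version (the paper's actual loss supports multiple projection directions and is implemented inside a differentiable training framework, which this sketch omits): the 3D prediction is max-projected along a ray axis and scored against the 2D annotation with binary cross-entropy, so each voxel is penalised only through the ray maximum, echoing space carving.

```python
import numpy as np

def mip_loss(pred3d, ann2d, axis=0, eps=1e-7):
    """Binary cross-entropy between the maximum-intensity projection of a
    3D probability volume and a 2D annotation of that projection."""
    proj = np.clip(pred3d.max(axis=axis), eps, 1 - eps)
    return -np.mean(ann2d * np.log(proj) + (1 - ann2d) * np.log(1 - proj))

# A volume whose projection matches the 2D annotation incurs a low loss
ann = np.array([[1.0, 0.0], [0.0, 1.0]])
pred = np.full((3, 2, 2), 0.01)
pred[1] = np.array([[0.99, 0.01], [0.01, 0.99]])
```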
Affiliation(s)
- Mateusz Koziński
- Computer Vision Laboratory, École Polytechnique Fédérale de Lausanne, Station 15, Lausanne CH-1015, Switzerland.
- Agata Mosinska
- Computer Vision Laboratory, École Polytechnique Fédérale de Lausanne, Station 15, Lausanne CH-1015, Switzerland
- Mathieu Salzmann
- Computer Vision Laboratory, École Polytechnique Fédérale de Lausanne, Station 15, Lausanne CH-1015, Switzerland
- Pascal Fua
- Computer Vision Laboratory, École Polytechnique Fédérale de Lausanne, Station 15, Lausanne CH-1015, Switzerland
49
Pape C, Matskevych A, Wolny A, Hennies J, Mizzon G, Louveaux M, Musser J, Maizel A, Arendt D, Kreshuk A. Leveraging Domain Knowledge to Improve Microscopy Image Segmentation With Lifted Multicuts. FRONTIERS IN COMPUTER SCIENCE 2019. [DOI: 10.3389/fcomp.2019.00006] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
50
Schubert PJ, Dorkenwald S, Januszewski M, Jain V, Kornfeld J. Learning cellular morphology with neural networks. Nat Commun 2019; 10:2736. [PMID: 31227718 PMCID: PMC6588634 DOI: 10.1038/s41467-019-10836-3] [Citation(s) in RCA: 31] [Impact Index Per Article: 6.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/09/2018] [Accepted: 05/30/2019] [Indexed: 01/10/2023] Open
Abstract
Reconstruction and annotation of volume electron microscopy data sets of brain tissue is challenging but can reveal invaluable information about neuronal circuits. Significant progress has recently been made in automated neuron reconstruction as well as automated detection of synapses. However, methods for automating the morphological analysis of nanometer-resolution reconstructions are less established, despite the diversity of possible applications. Here, we introduce cellular morphology neural networks (CMNs), based on multi-view projections sampled from automatically reconstructed cellular fragments of arbitrary size and shape. Using unsupervised training, we infer morphology embeddings (Neuron2vec) of neuron reconstructions and train CMNs to identify glia cells in a supervised classification paradigm; the glia predictions are then used to resolve neuron reconstruction errors. Finally, we demonstrate that CMNs can be used to identify subcellular compartments and the cell types of neuron reconstructions.
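A minimal sketch of the multi-view sampling idea (a deliberate simplification: orthographic 2D density histograms of a point cloud under random rotations, whereas the paper renders views of reconstructed cell fragments) shows how 3D morphology becomes a stack of 2D images that an ordinary 2D CNN can consume:

```python
import numpy as np

def multiview_images(points, n_views=3, res=32, seed=0):
    """Render n_views orthographic 2D projections of an (N, 3) point cloud
    from random viewpoints, as normalised res x res density images."""
    rng = np.random.default_rng(seed)
    views = []
    for _ in range(n_views):
        q, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # random orthogonal view
        rotated = points @ q.T
        img, _, _ = np.histogram2d(rotated[:, 0], rotated[:, 1], bins=res)
        views.append(img / max(img.max(), 1.0))       # normalise to [0, 1]
    return np.stack(views)  # (n_views, res, res), ready for a 2D CNN

cloud = np.random.default_rng(1).normal(size=(500, 3))
views = multiview_images(cloud)
```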
Affiliation(s)
- Philipp J Schubert
- Max Planck Institute of Neurobiology, Electrons - Photons - Neurons, 82152, Planegg-Martinsried, Germany.
- Sven Dorkenwald
- Max Planck Institute of Neurobiology, Electrons - Photons - Neurons, 82152, Planegg-Martinsried, Germany
- Viren Jain
- Google AI, Mountain View, 94043, CA, USA
- Joergen Kornfeld
- Max Planck Institute of Neurobiology, Electrons - Photons - Neurons, 82152, Planegg-Martinsried, Germany.