1. Darling C, Kumar S, Alexandrov Y, de Faye J, Almagro Santiago J, Rýdlová A, Bugeon L, Dallman MJ, Behrens AJ, French PMW, McGinty J. Optical projection tomography implemented for accessibility and low cost (OPTImAL). Philos Trans A Math Phys Eng Sci 2024; 382:20230101. [PMID: 38826047] [DOI: 10.1098/rsta.2023.0101] [Received: 01/25/2024] [Accepted: 03/21/2024] [Indexed: 06/04/2024]
Abstract
Optical projection tomography (OPT) is a three-dimensional mesoscopic imaging modality that can use absorption or fluorescence contrast and is widely applied to fixed and live samples at the mm–cm scale. For fluorescence OPT, we present OPT implemented for accessibility and low cost (OPTImAL), an open-source research-grade implementation of modular OPT hardware and software designed to be widely accessible through the use of low-cost components, including light-emitting diode (LED) excitation and cooled complementary metal-oxide-semiconductor (CMOS) cameras. Both the hardware and software are modular and flexible in their implementation, enabling rapid switching between sample size scales and supporting compressive sensing to reconstruct images from undersampled sparse OPT data, e.g. to facilitate rapid imaging with low photobleaching/phototoxicity. We also explore a simple implementation of focal scanning OPT to achieve higher resolution, which entails the use of a fan-beam geometry reconstruction method to account for variation in magnification. This article is part of the Theo Murphy meeting issue 'Open, reproducible hardware for microscopy'.
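The reconstruction underlying fluorescence OPT follows standard parallel-beam tomography: each camera frame supplies one row of a per-slice sinogram, and each slice is recovered by (filtered) backprojection. As a purely illustrative numpy sketch, not the OPTImAL code, with the function name and the unfiltered-backprojection simplification being assumptions of this example:

```python
import numpy as np

def backproject(sinogram, angles):
    """Unfiltered backprojection for a parallel-beam geometry.

    sinogram : (n_angles, n_det) array, one projection per row
    angles   : view angles in radians
    Returns an (n_det, n_det) reconstruction (blurred by 1/r, since
    no ramp filter is applied in this simplified sketch).
    """
    n = sinogram.shape[1]
    c = (n - 1) / 2.0
    ys, xs = np.mgrid[0:n, 0:n]
    xs, ys = xs - c, ys - c
    recon = np.zeros((n, n))
    for proj, theta in zip(sinogram, angles):
        # detector coordinate each pixel projects onto for this view
        s = xs * np.cos(theta) + ys * np.sin(theta)
        idx = np.clip(np.round(s + c).astype(int), 0, n - 1)
        recon += proj[idx]
    return recon / len(angles)
```

Applying a ramp (Ram-Lak) filter to each projection before the loop turns this into filtered backprojection; compressive-sensing reconstruction, as used for undersampled OPT data, instead solves a regularized inverse problem over the same forward model.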
Affiliation(s)
- C Darling
  - Physics Department, Imperial College London, London SW7 2AZ, UK
- S Kumar
  - Physics Department, Imperial College London, London SW7 2AZ, UK
  - Francis Crick Institute, London NW1 1AT, UK
- Y Alexandrov
  - Physics Department, Imperial College London, London SW7 2AZ, UK
  - Francis Crick Institute, London NW1 1AT, UK
- J de Faye
  - Cancer Stem Cell Laboratory, Institute of Cancer Research, London SW7 3RP, UK
- J Almagro Santiago
  - Cancer Stem Cell Laboratory, Institute of Cancer Research, London SW7 3RP, UK
- A Rýdlová
  - Department of Life Sciences, Imperial College London, London SW7 2AZ, UK
- L Bugeon
  - Department of Life Sciences, Imperial College London, London SW7 2AZ, UK
- M J Dallman
  - Department of Life Sciences, Imperial College London, London SW7 2AZ, UK
- A J Behrens
  - Cancer Stem Cell Laboratory, Institute of Cancer Research, London SW7 3RP, UK
  - CRUK Convergence Science Centre & Division of Cancer, Department of Surgery and Cancer, Imperial College, London, UK
- P M W French
  - Physics Department, Imperial College London, London SW7 2AZ, UK
  - Francis Crick Institute, London NW1 1AT, UK
- J McGinty
  - Physics Department, Imperial College London, London SW7 2AZ, UK
  - Francis Crick Institute, London NW1 1AT, UK
2. Obando M, Bassi A, Ducros N, Mato G, Correia TM. Model-based deep learning framework for accelerated optical projection tomography. Sci Rep 2023; 13:21735. [PMID: 38066010] [PMCID: PMC10709405] [DOI: 10.1038/s41598-023-47650-3] [Received: 09/01/2023] [Accepted: 11/16/2023] [Indexed: 12/18/2023]
Abstract
In this work, we propose a model-based deep learning reconstruction algorithm for optical projection tomography (ToMoDL) to greatly reduce acquisition and reconstruction times. The proposed method iterates over a data-consistency step and an image-domain artefact-removal step performed by a convolutional neural network. A preprocessing stage is also included to avoid potential misalignment between the sample's center of rotation and the detector. The algorithm is trained on a database of wild-type zebrafish (Danio rerio) at different stages of development to minimise the mean square error for a fixed number of iterations. Using a cross-validation scheme, we compare the results to other reconstruction methods, such as filtered backprojection, compressed sensing and a direct deep learning method in which the pseudo-inverse solution is corrected by a U-Net. The proposed method performs as well as or better than the alternatives. For a highly reduced number of projections, only the U-Net method provides images comparable to those obtained with ToMoDL. However, ToMoDL performs much better when the amount of data available for training is limited, given that its number of trainable parameters is smaller.
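The alternation the abstract describes, a gradient-based data-consistency update followed by a learned artefact-removal operator, can be sketched in a few lines. Here the trained CNN is replaced by a simple soft-thresholding stand-in and the OPT forward model by a generic matrix, so this is a schematic of the unrolled structure rather than ToMoDL itself; all names are assumptions of this example:

```python
import numpy as np

def unrolled_recon(A, b, n_iters=200, threshold=1e-4):
    """Schematic model-based unrolled reconstruction.

    Alternates a gradient data-consistency step on ||Ax - b||^2 with a
    denoising step. In ToMoDL the denoiser is a trained CNN; here a
    soft-threshold stands in for it, which makes the loop plain ISTA.
    """
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # safe gradient step size
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        x = x - step * A.T @ (A @ x - b)        # data consistency
        x = np.sign(x) * np.maximum(np.abs(x) - threshold, 0.0)  # "denoiser"
    return x
```

In the learned version, the fixed soft-threshold is swapped for a network trained to minimise mean square error over the unrolled iterations, which is why the parameter count stays small.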
Affiliation(s)
- Marcos Obando
  - Medical Physics Department, Centro Atómico Bariloche and Instituto Balseiro, 8400, Bariloche, Argentina
- Andrea Bassi
  - Dipartimento di Fisica, Politecnico di Milano, Piazza Leonardo da Vinci 32, I-20133, Milano, Italy
- Nicolas Ducros
  - University of Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, UJM Saint-Etienne, CREATIS CNRS UMR 5220, Inserm U1294, Lyon, France
  - IUF, Institut Universitaire de France, Paris, France
- Germán Mato
  - Medical Physics Department, Centro Atómico Bariloche and Instituto Balseiro, 8400, Bariloche, Argentina
- Teresa M Correia
  - Centre of Marine Sciences (CCMAR), Universidade do Algarve, Campus de Gambelas, 8005-139, Faro, Portugal
  - School of Biomedical Engineering and Imaging Sciences, King's College London, SE1 7EH, London, United Kingdom
3. Sun J, Zhao F, Zhu L, Liu B, Fei P. Optical projection tomography reconstruction with few views using highly-generalizable deep learning at sinogram domain. Biomed Opt Express 2023; 14:6260-6270. [PMID: 38420331] [PMCID: PMC10898583] [DOI: 10.1364/boe.500152] [Received: 07/14/2023] [Revised: 10/31/2023] [Accepted: 10/31/2023] [Indexed: 03/02/2024]
Abstract
Optical projection tomography (OPT) reconstruction using a minimal number of measured views offers the potential to significantly reduce excitation dosage and greatly enhance temporal resolution in biomedical imaging. However, traditional tomographic reconstruction algorithms exhibit severe quality degradation, e.g. streak artifacts, when the number of views is reduced. In this study, we introduce a novel method for evaluating domain complexity, and thereby show that the sinogram domain exhibits lower complexity than the conventional spatial domain. We then achieve robust deep-learning-based reconstruction with a feedback-based data initialization method in the sinogram domain, whose strong generalization ability notably improves overall performance for OPT image reconstruction. This learning-based approach, termed SinNet, enables 4-view OPT reconstructions of diverse biological samples with robust generalization. It surpasses conventional OPT reconstruction approaches in terms of peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) metrics, showing its potential to augment widely used OPT techniques.
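Working in the sinogram domain means recovering the missing projection rows before any backprojection. The simplest non-learned baseline for that recovery, per-detector-bin interpolation across angle, can be written directly; SinNet replaces this step with a trained network, so the sketch below (function name assumed, for illustration only) merely shows the domain the method operates in:

```python
import numpy as np

def upsample_sinogram(sparse_sino, sparse_angles, dense_angles):
    """Fill in missing views of a few-view sinogram by linear
    interpolation along the angular axis, one detector bin at a time.
    A classical stand-in for learned sinogram-domain recovery."""
    n_det = sparse_sino.shape[1]
    dense = np.empty((len(dense_angles), n_det))
    for j in range(n_det):
        dense[:, j] = np.interp(dense_angles, sparse_angles, sparse_sino[:, j])
    return dense
```

The appeal of this domain is visible even here: sinogram columns vary smoothly with angle, so modest models generalize across sample types better than image-domain artefact removal tends to.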
Affiliation(s)
- Jiahao Sun
  - School of Optical and Electronic Information, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, 430074, China
- Fang Zhao
  - School of Optical and Electronic Information, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, 430074, China
- Lanxin Zhu
  - School of Optical and Electronic Information, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, 430074, China
- BinBing Liu
  - School of Optical and Electronic Information, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, 430074, China
- Peng Fei
  - School of Optical and Electronic Information, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, 430074, China
  - Advanced Biomedical Imaging Facility, Huazhong University of Science and Technology, Wuhan, 430074, China
4. Darling C, Davis SPX, Kumar S, French PMW, McGinty J. Single-shot optical projection tomography for high-speed volumetric imaging of dynamic biological samples. J Biophotonics 2023; 16:e202200232. [PMID: 36087031] [DOI: 10.1002/jbio.202200232] [Received: 07/21/2022] [Revised: 09/07/2022] [Accepted: 09/08/2022] [Indexed: 06/15/2023]
Abstract
A single-shot adaptation of Optical Projection Tomography (OPT) for high-speed volumetric snapshot imaging of dynamic mesoscopic biological samples is presented. Conventional OPT has been applied to in vivo imaging of animal models such as D. rerio, but the sequential acquisition of projection images typically requires samples to be immobilized during the acquisition. A proof-of-principle system capable of single-shot tomography of a ~1 mm³ volume is presented, demonstrating camera-limited rates of up to 62.5 volumes/s, which has been applied to 3D imaging of a freely swimming zebrafish embryo. This is achieved by recording eight projection views simultaneously on four low-cost CMOS cameras. With no stage required to rotate the sample, this single-shot OPT system can be implemented with a component cost of under £5000. The system design can be adapted to different-sized fields of view and may be applied to a broad range of dynamic samples, including high-throughput flow cytometry applied to model organisms and fluid dynamics studies.
Affiliation(s)
- Connor Darling
  - Photonics Group, Department of Physics, Imperial College London, London, UK
- Samuel P X Davis
  - Photonics Group, Department of Physics, Imperial College London, London, UK
- Sunil Kumar
  - Photonics Group, Department of Physics, Imperial College London, London, UK
  - Francis Crick Institute, London, UK
- Paul M W French
  - Photonics Group, Department of Physics, Imperial College London, London, UK
  - Francis Crick Institute, London, UK
- James McGinty
  - Photonics Group, Department of Physics, Imperial College London, London, UK
  - Francis Crick Institute, London, UK
5. Zhang H, Liu B, Fei P. Self-supervised next view prediction for limited-angle optical projection tomography. Biomed Opt Express 2022; 13:5952-5961. [PMID: 36733724] [PMCID: PMC9872893] [DOI: 10.1364/boe.472762] [Received: 08/08/2022] [Revised: 10/01/2022] [Accepted: 10/05/2022] [Indexed: 06/18/2023]
Abstract
Optical projection tomography captures 2-D projections of rotating biological samples and computationally reconstructs 3-D structures from these projections, where hundreds of views spanning an angular range of π radians are desired for a reliable reconstruction. Limited-angle tomography attempts to recover the structure of the sample using fewer projection angles. However, the results are far from satisfactory due to the missing wedge of information. Here we introduce a novel view prediction technique that extends the angular range of captured views for limited-angle tomography. Using a self-supervised technique that learns the relationship between the captured limited-angle views, unseen views can be computationally synthesized without any prior label data. Combined with an optical tomography system, the proposed approach can robustly generate new projections of unknown biological samples and extends the angular range of the projections from the original 60° to nearly 180°, thereby yielding high-quality 3-D reconstructions of samples even with highly incomplete measurements.
Affiliation(s)
- Hao Zhang
  - School of Optical and Electronic Information, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan 430074, China
- BinBing Liu
  - School of Optical and Electronic Information, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan 430074, China
- Peng Fei
  - School of Optical and Electronic Information, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan 430074, China
6. Koudounas P, Koniaris E, Manolis I, Asvestas P, Kostopoulos S, Cavouras D, Glotsos D. Three-dimensional tissue volume generation in conventional brightfield microscopy. Microsc Res Tech 2022; 85:2913-2923. [PMID: 35510792] [DOI: 10.1002/jemt.24141] [Received: 02/11/2022] [Revised: 04/08/2022] [Accepted: 04/22/2022] [Indexed: 11/07/2022]
Abstract
The purpose of this study is to develop and automate a series of steps enabling digital 3D tissue volume generation in conventional brightfield microscopy for histopathology applications. Tissue samples were retrieved from the General Hospital of Athens "Hippocration", Greece. Samples were placed on a microtome that produced consecutive 2 μm sections. Each section was stained using hematoxylin and eosin and placed on microscope slides. A histopathologist specified the region of interest (ROI) on each slide. A 2D image was created from each ROI using a LEICA DM2500 microscope with a LEICA DFC 420C camera. The 3D volume was created by stacking consecutive 2D images using a deep learning image interpolation method. The reconstructed 3D tissue volumes were evaluated by an expert histopathologist. Results showed that the 3D volumes may reveal information that is not clearly visible, or even undetectable, in conventional 2D brightfield images. In contrast to other 3D tissue imaging technologies, the proposed method (a) does not depend on the distance of the sample from the objectives, producing 3D tissue volumes at any desired magnification, (b) does not require a special instrument and may be implemented with any conventional brightfield microscope, and (c) can be used for any given routine application, not only for specialized clinical studies. The proposed study provides the basis for a feasible, low-cost and time-efficient upgrade of any standard 2D microscope into a 3D imaging instrument that may enhance the quality of diagnostic assessments in histopathology.
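The volume-generation step, synthesizing slices between consecutive physical sections, can be illustrated with a linear blend standing in for the paper's deep learning interpolator; the function name and blending rule below are assumptions of this sketch, not the authors' method:

```python
import numpy as np

def stack_with_interpolation(sections, n_between=1):
    """Stack consecutive 2-D section images into a 3-D volume,
    inserting n_between synthetic slices between each physical pair
    by linear blending (a stand-in for learned interpolation)."""
    sections = [np.asarray(s, dtype=float) for s in sections]
    slices = []
    for a, b in zip(sections[:-1], sections[1:]):
        slices.append(a)
        for k in range(1, n_between + 1):
            t = k / (n_between + 1)
            slices.append((1.0 - t) * a + t * b)
    slices.append(sections[-1])
    return np.stack(slices)
```

A learned interpolator improves on this blend chiefly where structures shift laterally between sections, which simple averaging renders as ghosting.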
Affiliation(s)
- Panteleimon Koudounas
  - Medical Image and Signal Processing Laboratory, Department of Biomedical Engineering, University of West Attica, Athens, Greece
- Ioannis Manolis
  - Department of Biomedical Sciences, University of West Attica, Athens, Greece
- Panteleimon Asvestas
  - Medical Image and Signal Processing Laboratory, Department of Biomedical Engineering, University of West Attica, Athens, Greece
- Spiros Kostopoulos
  - Medical Image and Signal Processing Laboratory, Department of Biomedical Engineering, University of West Attica, Athens, Greece
- Dionisis Cavouras
  - Medical Image and Signal Processing Laboratory, Department of Biomedical Engineering, University of West Attica, Athens, Greece
- Dimitris Glotsos
  - Medical Image and Signal Processing Laboratory, Department of Biomedical Engineering, University of West Attica, Athens, Greece
7. Wang H, Wang N, Xie H, Wang L, Zhou W, Yang D, Cao X, Zhu S, Liang J, Chen X. Two-stage deep learning network-based few-view image reconstruction for parallel-beam projection tomography. Quant Imaging Med Surg 2022; 12:2535-2551. [PMID: 35371942] [PMCID: PMC8923870] [DOI: 10.21037/qims-21-778] [Received: 08/04/2021] [Accepted: 12/20/2021] [Indexed: 08/30/2023]
Abstract
BACKGROUND: Projection tomography (PT) is a very important and valuable method for fast volumetric imaging with isotropic spatial resolution. Sparse-view or limited-angle reconstruction-based PT can greatly reduce data acquisition time, lower radiation doses, and simplify sample fixation modes. However, few techniques can currently achieve image reconstruction based on few-view projection data, which is especially important for in vivo PT in living organisms. METHODS: A two-stage deep learning network (TSDLN)-based framework was proposed for parallel-beam PT reconstructions using few-view projections. The framework is composed of a reconstruction network (R-net) and a correction network (C-net). The R-net is a generative adversarial network (GAN) used to complete image information from the direct back-projection (BP) of a sparse signal, bringing the reconstructed image close to reconstruction results obtained from fully projected data. The C-net is a U-net array that denoises the compensation result to obtain a high-quality reconstructed image. RESULTS: The accuracy and feasibility of the proposed TSDLN-based framework in few-view projection-based reconstruction were first evaluated with simulations, using images from the DeepLesion public dataset. The framework exhibited better reconstruction performance than traditional analytic and iterative reconstruction algorithms, especially in cases using sparse-view projection images. For example, with as few as two projections, the TSDLN-based framework reconstructed high-quality images very close to the original image, with structural similarities greater than 0.8. By using previously acquired optical PT (OPT) data in the TSDLN-based framework trained on computed tomography (CT) data, we further exemplified the migration capabilities of the TSDLN-based framework. The results showed that when the number of projections was reduced to 5, the contours and distribution information of the samples in question could still be seen in the reconstructed images. CONCLUSIONS: The simulations and experimental results showed that the TSDLN-based framework has strong reconstruction abilities using few-view projection images, and has great potential for application to in vivo PT.
Affiliation(s)
- Huiyuan Wang
  - Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi’an, China
  - Xi’an Key Laboratory of Intelligent Sensing and Regulation of Trans-scale Life Information, Xi’an, China
- Nan Wang
  - Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi’an, China
  - Xi’an Key Laboratory of Intelligent Sensing and Regulation of Trans-scale Life Information, Xi’an, China
- Hui Xie
  - Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi’an, China
  - Xi’an Key Laboratory of Intelligent Sensing and Regulation of Trans-scale Life Information, Xi’an, China
- Lin Wang
  - School of Computer Science and Engineering, Xi’an University of Technology, Xi’an, China
- Wangting Zhou
  - Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi’an, China
  - Xi’an Key Laboratory of Intelligent Sensing and Regulation of Trans-scale Life Information, Xi’an, China
- Defu Yang
  - Research Center for Healthcare Data Science, Zhejiang Lab, Hangzhou, China
- Xu Cao
  - Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi’an, China
  - Xi’an Key Laboratory of Intelligent Sensing and Regulation of Trans-scale Life Information, Xi’an, China
- Shouping Zhu
  - Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi’an, China
  - Xi’an Key Laboratory of Intelligent Sensing and Regulation of Trans-scale Life Information, Xi’an, China
- Jimin Liang
  - School of Electronic Engineering, Xidian University, Xi’an, China
- Xueli Chen
  - Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi’an, China
  - Xi’an Key Laboratory of Intelligent Sensing and Regulation of Trans-scale Life Information, Xi’an, China
8. Guglielmi L, Heliot C, Kumar S, Alexandrov Y, Gori I, Papaleonidopoulou F, Barrington C, East P, Economou AD, French PMW, McGinty J, Hill CS. Smad4 controls signaling robustness and morphogenesis by differentially contributing to the Nodal and BMP pathways. Nat Commun 2021; 12:6374. [PMID: 34737283] [PMCID: PMC8569018] [DOI: 10.1038/s41467-021-26486-3] [Received: 03/17/2021] [Accepted: 10/07/2021] [Indexed: 12/25/2022]
Abstract
The transcriptional effector SMAD4 is a core component of the TGF-β family signaling pathways. However, its role in vertebrate embryo development remains unresolved. To address this, we deleted Smad4 in zebrafish and investigated the consequences of this on signaling by the TGF-β family morphogens, BMPs and Nodal. We demonstrate that in the absence of Smad4, dorsal/ventral embryo patterning is disrupted due to the loss of BMP signaling. However, unexpectedly, Nodal signaling is maintained, but lacks robustness. This Smad4-independent Nodal signaling is sufficient for mesoderm specification, but not for optimal endoderm specification. Furthermore, using Optical Projection Tomography in combination with 3D embryo morphometry, we have generated a BMP morphospace and demonstrate that Smad4 mutants are morphologically indistinguishable from embryos in which BMP signaling has been genetically/pharmacologically perturbed. Smad4 is thus differentially required for signaling by different TGF-β family ligands, which has implications for diseases where Smad4 is mutated or deleted.
Affiliation(s)
- Luca Guglielmi
  - Developmental Signalling Laboratory, The Francis Crick Institute, London, NW1 1AT, UK
- Claire Heliot
  - Developmental Signalling Laboratory, The Francis Crick Institute, London, NW1 1AT, UK
- Sunil Kumar
  - Advanced Light Microscopy, The Francis Crick Institute, London, NW1 1AT, UK
- Yuriy Alexandrov
  - Advanced Light Microscopy, The Francis Crick Institute, London, NW1 1AT, UK
- Ilaria Gori
  - Developmental Signalling Laboratory, The Francis Crick Institute, London, NW1 1AT, UK
- Christopher Barrington
  - Bioinformatics and Biostatistics Facility, The Francis Crick Institute, London, NW1 1AT, UK
- Philip East
  - Bioinformatics and Biostatistics Facility, The Francis Crick Institute, London, NW1 1AT, UK
- Andrew D Economou
  - Developmental Signalling Laboratory, The Francis Crick Institute, London, NW1 1AT, UK
- Paul M W French
  - Department of Physics, Imperial College London, SW7 2AZ, London, UK
- James McGinty
  - Department of Physics, Imperial College London, SW7 2AZ, London, UK
- Caroline S Hill
  - Developmental Signalling Laboratory, The Francis Crick Institute, London, NW1 1AT, UK
9. Shen H, Gao J. Portable deep learning singlet microscope. J Biophotonics 2020; 13:e202000013. [PMID: 32125774] [DOI: 10.1002/jbio.202000013] [Received: 01/13/2020] [Revised: 02/26/2020] [Accepted: 02/29/2020] [Indexed: 06/10/2023]
Abstract
The use of as few lenses as possible, the defining feature of a singlet imaging system, aids the development of portable and cost-effective microscopes. In this article we propose a novel method for monochromatic/color singlet microscopy that combines a single aspheric lens with deep learning computational imaging. The designed singlet aspheric lens approximates a linear signal system, meaning that the modulation-transfer-function curves at all field positions (5 mm diagonally) almost coincide with each other. The purpose of this linear-signal-system design is to further improve the resolution of the microscope using a deep learning algorithm. As a proof of concept, we built a singlet microscope based on our method that weighs only 400 g. Experimental data and results from a USAF-1951 resolution target and a biological sample (an Equisetum arvense strobilus, longitudinal section) show that the performance of the proposed singlet microscope is competitive with a commercial microscope using a 4X/NA 0.1 objective lens. We believe that our idea and method will guide the design of more cost-effective and powerful singlet imaging systems.
Affiliation(s)
- Hua Shen
  - School of Electronic and Optical Engineering, Nanjing University of Science and Technology, Nanjing, China
  - Department of Material Science and Engineering, University of California Los Angeles, Los Angeles, California, USA
- Jinming Gao
  - School of Electronic and Optical Engineering, Nanjing University of Science and Technology, Nanjing, China