1
Sivaraman VB, Imran M, Wei Q, Muralidharan P, Tamplin MR, Grumbach IM, Kardon RH, Wang JK, Zhou Y, Shao W. RetinaRegNet: A zero-shot approach for retinal image registration. Comput Biol Med 2025; 186:109645. [PMID: 39813746] [DOI: 10.1016/j.compbiomed.2024.109645] [Received: 09/10/2024] [Revised: 12/01/2024] [Accepted: 12/28/2024] [Indexed: 01/18/2025]
Abstract
Retinal image registration is essential for monitoring eye diseases and planning treatments, yet it remains challenging due to large deformations, minimal overlap, and varying image quality. To address these challenges, we propose RetinaRegNet, a multi-stage image registration model with zero-shot generalizability across multiple retinal imaging modalities. RetinaRegNet begins by extracting image features using a pretrained latent diffusion model. Feature points are sampled from the fixed image using a combination of the SIFT algorithm and random sampling. For each sampled point, its corresponding point in the moving image is estimated by cosine similarities between diffusion feature vectors of that point and all pixels in the moving image. Outliers in point correspondences are detected by an inverse consistency constraint, ensuring consistency in both forward and backward directions. Outliers with large distances between true and estimated points are further removed by a transformation-based outlier detector. The resulting point correspondences are then used to estimate a geometric transformation between the two images. We use a two-stage registration framework for robust and accurate alignment: the first stage estimates a homography for global alignment, and the second stage estimates a third-order polynomial transformation to capture local deformations. We evaluated RetinaRegNet on three imaging modalities: color fundus, fluorescein angiography, and laser speckle flowgraphy. Across these datasets, it consistently outperformed state-of-the-art methods, achieving AUC scores of 0.901, 0.868, and 0.861, respectively. RetinaRegNet's zero-shot performance highlights its potential as a valuable tool for tracking disease progression and evaluating treatment efficacy. Our code is publicly available at: https://github.com/mirthAI/RetinaRegNet.
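The correspondence step described above (cosine similarity between diffusion features, filtered by an inverse consistency check) can be sketched in a few lines. This is an illustrative simplification, not the authors' code: each image is reduced to a small set of indexed feature vectors, and `best_match`, `inverse_consistent`, and the index-based tolerance are hypothetical names and shortcuts, whereas the paper compares each sampled point's feature against all pixels of the moving image and measures spatial distance.

```python
import numpy as np

def best_match(query, features):
    """Return the index of the row in `features` most cosine-similar to `query`."""
    q = query / np.linalg.norm(query)
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    return int(np.argmax(f @ q))

def inverse_consistent(p_idx, fixed_feats, moving_feats, tol=0):
    """Match a fixed-image feature forward into the moving image, then match
    the result back; keep the correspondence only if the round trip lands
    within `tol` of the starting index (the inverse consistency constraint)."""
    q_idx = best_match(fixed_feats[p_idx], moving_feats)     # fixed -> moving
    back_idx = best_match(moving_feats[q_idx], fixed_feats)  # moving -> fixed
    return q_idx, abs(back_idx - p_idx) <= tol
```

Correspondences that fail the round-trip test would be discarded before fitting the homography and polynomial transforms.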
Affiliation(s)
- Vishal Balaji Sivaraman: Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL, 32610, United States
- Muhammad Imran: Department of Medicine, University of Florida, Gainesville, FL, 32610, United States
- Qingyue Wei: Department of Computational and Mathematical Engineering, Stanford University, Stanford, CA, 94305, United States
- Preethika Muralidharan: Department of Health Outcomes and Biomedical Informatics, University of Florida, Gainesville, FL, 32610, United States
- Michelle R Tamplin: Department of Internal Medicine, University of Iowa, Iowa City, IA, 52242, United States; Iowa City VA Center for the Prevention and Treatment of Visual Loss, Iowa City, IA, 52242, United States
- Isabella M Grumbach: Department of Internal Medicine, University of Iowa, Iowa City, IA, 52242, United States; Iowa City VA Center for the Prevention and Treatment of Visual Loss, Iowa City, IA, 52242, United States; Department of Radiation Oncology, University of Iowa, Iowa City, IA, 52242, United States
- Randy H Kardon: Iowa City VA Center for the Prevention and Treatment of Visual Loss, Iowa City, IA, 52242, United States; Department of Ophthalmology and Visual Sciences, University of Iowa, Iowa City, IA, 52242, United States
- Jui-Kai Wang: Iowa City VA Center for the Prevention and Treatment of Visual Loss, Iowa City, IA, 52242, United States; Department of Ophthalmology and Visual Sciences, University of Iowa, Iowa City, IA, 52242, United States
- Yuyin Zhou: Department of Computer Science and Engineering, University of California, Santa Cruz, CA, 95064, United States
- Wei Shao: Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL, 32610, United States; Department of Medicine, University of Florida, Gainesville, FL, 32610, United States; Department of Health Outcomes and Biomedical Informatics, University of Florida, Gainesville, FL, 32610, United States; Intelligent Clinical Care Center, University of Florida, Gainesville, FL, 32610, United States
2
Ahmed W, Liatsis P. LHU-VT: A Lightweight Hypercomplex U-Net with Vessel Thickness-Guided Dice Loss for retinal vessel segmentation. Comput Biol Med 2025; 185:109470. [PMID: 39667053] [DOI: 10.1016/j.compbiomed.2024.109470] [Received: 08/20/2024] [Revised: 11/11/2024] [Accepted: 11/22/2024] [Indexed: 12/14/2024]
Abstract
Vision loss is often caused by retinal disorders, such as age-related macular degeneration and diabetic retinopathy, where early indicators like microaneurysms and hemorrhages appear as changes in retinal blood vessels. Accurate segmentation of these vessels in retinal images is essential for early diagnosis. However, retinal vessel segmentation presents challenges due to complex vessel structures, low contrast, and dense branching patterns, which are further complicated in resource-limited settings requiring lightweight solutions. To address these challenges, we propose a novel Lightweight Hypercomplex U-Net (LHUN) with Vessel Thickness-Guided Dice Loss (VTDL), collectively called LHU-VT. LHUN utilizes hypercomplex octonions to capture intricate patterns and cross-channel relationships in fundus images, reducing parameter count and enabling edge deployment. The VTDL component applies vessel thickness-guided weights to address class imbalance, thereby enhancing segmentation accuracy. Our experiments show that LHU-VT significantly outperforms current methods, achieving up to 2.4× fewer FLOPs, 4.4× fewer parameters, and a 2.6× smaller model size. The model achieves AUC scores of 0.9938, 0.9879, 0.9988, and 0.9808 on the CHASE, DRIVE, STARE, and HRF benchmark datasets, respectively.
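The abstract does not specify how VTDL weights pixels by vessel thickness, so the sketch below folds a hypothetical inverse-thickness weighting into a soft Dice loss; `thickness_weighted_dice` and the 1/(t+1) scheme are assumptions for illustration, not the published loss.

```python
import numpy as np

def thickness_weighted_dice(pred, target, thickness, eps=1e-6):
    """Soft Dice loss with per-pixel weights that emphasize thin vessels.

    pred, target : arrays in [0, 1] of the same shape
    thickness    : per-pixel vessel thickness estimate (e.g. from a
                   distance transform); thinner vessels get larger weights
    """
    w = 1.0 / (thickness + 1.0)          # hypothetical weighting scheme
    inter = np.sum(w * pred * target)
    denom = np.sum(w * pred) + np.sum(w * target)
    return 1.0 - (2.0 * inter + eps) / (denom + eps)
```

The weights counteract class imbalance by preventing thick, easy-to-segment trunks from dominating the loss.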
Affiliation(s)
- Waqar Ahmed: Silo AI, Helsinki, Finland; Department of Computer Science, Khalifa University, Abu Dhabi, United Arab Emirates
- Panos Liatsis: Department of Computer Science, Khalifa University, Abu Dhabi, United Arab Emirates
3
Bahr T, Vu TA, Tuttle JJ, Iezzi R. Deep Learning and Machine Learning Algorithms for Retinal Image Analysis in Neurodegenerative Disease: Systematic Review of Datasets and Models. Transl Vis Sci Technol 2024; 13:16. [PMID: 38381447] [PMCID: PMC10893898] [DOI: 10.1167/tvst.13.2.16] [Received: 08/30/2023] [Accepted: 11/26/2023] [Indexed: 02/22/2024]
Abstract
Purpose: Retinal images contain rich biomarker information for neurodegenerative disease. Recently, deep learning models have been used for automated neurodegenerative disease diagnosis and risk prediction using retinal images with good results.
Methods: In this review, we systematically report studies with datasets of retinal images from patients with neurodegenerative diseases, including Alzheimer's disease, Huntington's disease, Parkinson's disease, amyotrophic lateral sclerosis, and others. We also review and characterize the models in the current literature which have been used for classification, regression, or segmentation problems using retinal images in patients with neurodegenerative diseases.
Results: Our review found several existing datasets and models with various imaging modalities, primarily in patients with Alzheimer's disease, with most datasets on the order of tens to a few hundred images. We found limited data available for the other neurodegenerative diseases. Although cross-sectional imaging data for Alzheimer's disease are becoming more abundant, datasets with longitudinal imaging of any disease are lacking.
Conclusions: The use of bilateral and multimodal imaging together with metadata seems to improve model performance; thus, multimodal bilateral image datasets with patient metadata are needed. We identified several deep learning tools that have been useful in this context, including feature extraction algorithms specifically for retinal images, retinal image preprocessing techniques, transfer learning, feature fusion, and attention mapping. Importantly, we also consider the limitations common to these models in real-world clinical applications.
Translational Relevance: This systematic review evaluates the deep learning models and retinal features relevant in the evaluation of retinal images of patients with neurodegenerative disease.
Affiliation(s)
- Tyler Bahr: Mayo Clinic, Department of Ophthalmology, Rochester, MN, USA
- Truong A. Vu: University of the Incarnate Word, School of Osteopathic Medicine, San Antonio, TX, USA
- Jared J. Tuttle: University of Texas Health Science Center at San Antonio, Joe R. and Teresa Lozano Long School of Medicine, San Antonio, TX, USA
- Raymond Iezzi: Mayo Clinic, Department of Ophthalmology, Rochester, MN, USA
4
McGarry SD, Adjekukor C, Ahuja S, Greysson-Wong J, Vien I, Rinker KD, Childs SJ. Vessel Metrics: A software tool for automated analysis of vascular structure in confocal imaging. Microvasc Res 2024; 151:104610. [PMID: 37739214] [DOI: 10.1016/j.mvr.2023.104610] [Received: 05/17/2023] [Revised: 08/31/2023] [Accepted: 09/09/2023] [Indexed: 09/24/2023]
Abstract
Images contain a wealth of information that is often under-analyzed in biological studies. Developmental models of vascular disease are a powerful way to quantify developmentally regulated vessel phenotypes to identify the roots of the disease process. We present Vessel Metrics, a software tool specifically designed to analyze developmental vascular microscopy images, which expedites the analysis of vascular images and provides consistency between research groups. We developed a segmentation algorithm that robustly quantifies different image types, developmental stages, organisms, and disease models at an accuracy level similar to a human observer. We validate the algorithm on confocal, lightsheet, and two-photon microscopy data in a zebrafish model expressing fluorescent protein in the endothelial nuclei. The tool accurately segments data taken by multiple scientists on varying microscopes. We validate vascular parameters such as vessel density, network length, and diameter across developmental stages, genetic mutations, and drug treatments, and show a favorable comparison to other freely available software tools. Additionally, we validate the tool in a mouse model. Vessel Metrics reduces the time to analyze experimental results, improves repeatability within and between institutions, and expands the percentage of a given vascular network analyzable in experiments.
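As a rough illustration of the kind of parameters Vessel Metrics reports, the sketch below derives vessel density, network length, and a mean-diameter estimate from a binary mask and a precomputed centerline. This is not the published implementation: the helper name is hypothetical, and the area/length diameter estimate is a deliberate simplification.

```python
import numpy as np

def vessel_metrics(mask, skeleton):
    """Crude vascular metrics from a binary vessel mask and its skeleton.

    mask     : binary array, 1 = vessel pixel
    skeleton : binary array, 1-pixel-wide centerline of `mask`
               (e.g. produced by skimage.morphology.skeletonize)
    """
    density = mask.sum() / mask.size              # vessel area fraction
    length = skeleton.sum()                       # network length, in pixels
    mean_diameter = mask.sum() / max(length, 1)   # area / centerline length
    return {"density": density, "length": length, "mean_diameter": mean_diameter}
```

On a calibrated image, pixel counts would be converted to physical units before reporting.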
Affiliation(s)
- Sean D McGarry: Alberta Children's Hospital Research Institute, University of Calgary, T2N 4N1, Canada; Libin Institute, University of Calgary, T2N 4N1, Canada; Department of Biochemistry and Molecular Biology, University of Calgary, T2N 4N1, Canada
- Cynthia Adjekukor: Alberta Children's Hospital Research Institute, University of Calgary, T2N 4N1, Canada; Libin Institute, University of Calgary, T2N 4N1, Canada; Department of Biochemistry and Molecular Biology, University of Calgary, T2N 4N1, Canada
- Suchit Ahuja: Alberta Children's Hospital Research Institute, University of Calgary, T2N 4N1, Canada; Libin Institute, University of Calgary, T2N 4N1, Canada; Department of Biochemistry and Molecular Biology, University of Calgary, T2N 4N1, Canada
- Jasper Greysson-Wong: Alberta Children's Hospital Research Institute, University of Calgary, T2N 4N1, Canada; Libin Institute, University of Calgary, T2N 4N1, Canada; Department of Biochemistry and Molecular Biology, University of Calgary, T2N 4N1, Canada
- Idy Vien: Alberta Children's Hospital Research Institute, University of Calgary, T2N 4N1, Canada; Libin Institute, University of Calgary, T2N 4N1, Canada; Department of Biochemistry and Molecular Biology, University of Calgary, T2N 4N1, Canada
- Kristina D Rinker: Centre for Bioengineering Research and Education, Schulich School of Engineering, University of Calgary, Calgary, AB, Canada; Department of Chemical and Petroleum Engineering, Schulich School of Engineering, University of Calgary, Calgary, AB, Canada
- Sarah J Childs: Alberta Children's Hospital Research Institute, University of Calgary, T2N 4N1, Canada; Libin Institute, University of Calgary, T2N 4N1, Canada; Department of Biochemistry and Molecular Biology, University of Calgary, T2N 4N1, Canada
5
Han Q, Wang H, Hou M, Weng T, Pei Y, Li Z, Chen G, Tian Y, Qiu Z. HWA-SegNet: Multi-channel skin lesion image segmentation network with hierarchical analysis and weight adjustment. Comput Biol Med 2023; 152:106343. [PMID: 36481758] [DOI: 10.1016/j.compbiomed.2022.106343] [Received: 06/27/2022] [Revised: 10/30/2022] [Accepted: 11/21/2022] [Indexed: 11/30/2022]
Abstract
Convolutional neural networks (CNNs) show excellent performance in accurate medical image segmentation. However, small samples with insufficient feature expression, irregularly shaped segmentation targets, and inaccurate judgment of edge texture are persistent problems in skin lesion image segmentation. To solve these problems, the discrete Fourier transform (DFT) is introduced to enrich the input data, and a CNN architecture (HWA-SegNet) is proposed in this paper. Firstly, the DFT is improved to analyze the features of skin lesion images, and multi-channel data are extended for each image. Secondly, a hierarchical dilated analysis module is constructed to understand the semantic features across the multiple channels. Finally, the pre-prediction results are fine-tuned using a weight adjustment structure with fully connected layers to obtain higher-accuracy predictions. We then tested 520 skin lesion images from the ISIC 2018 dataset. Extensive experimental results show that HWA-SegNet improves the average Dice Similarity Coefficient from 88.30% to 91.88%, Sensitivity from 89.29% to 92.99%, and Jaccard similarity index from 81.15% to 85.90% compared with U-Net. Compared with the state-of-the-art method, the Jaccard similarity index and Specificity are close, but the Dice Similarity Coefficient is higher. These results show that the data augmentation strategy based on the improved DFT and HWA-SegNet is effective for skin lesion image segmentation.
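One plausible reading of the DFT-based channel expansion is to stack frequency-domain views alongside the original image. The sketch below (log-magnitude and phase of the 2-D DFT) is an assumption for illustration, not the paper's exact "improved DFT" transform.

```python
import numpy as np

def dft_channels(image):
    """Expand a single-channel image into extra frequency-domain channels.

    Returns the original image stacked with the log-magnitude and phase of
    its centered 2-D DFT, giving a (3, H, W) multi-channel input.
    """
    spec = np.fft.fftshift(np.fft.fft2(image))
    magnitude = np.log1p(np.abs(spec))   # compress the large dynamic range
    phase = np.angle(spec)
    return np.stack([image, magnitude, phase], axis=0)
```

A segmentation network would then consume the stacked channels in place of the raw grayscale input.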
Affiliation(s)
- Qi Han: School of Intelligent Technology and Engineering, Chongqing University of Science and Technology, Chongqing 401331, China
- Hongyi Wang: School of Intelligent Technology and Engineering, Chongqing University of Science and Technology, Chongqing 401331, China
- Mingyang Hou: School of Intelligent Technology and Engineering, Chongqing University of Science and Technology, Chongqing 401331, China
- Tengfei Weng: School of Intelligent Technology and Engineering, Chongqing University of Science and Technology, Chongqing 401331, China
- Yangjun Pei: School of Intelligent Technology and Engineering, Chongqing University of Science and Technology, Chongqing 401331, China
- Zhong Li: School of Intelligent Technology and Engineering, Chongqing University of Science and Technology, Chongqing 401331, China
- Guorong Chen: School of Intelligent Technology and Engineering, Chongqing University of Science and Technology, Chongqing 401331, China
- Yuan Tian: School of Intelligent Technology and Engineering, Chongqing University of Science and Technology, Chongqing 401331, China
- Zicheng Qiu: School of Intelligent Technology and Engineering, Chongqing University of Science and Technology, Chongqing 401331, China
6
Liu H, Fu Y, Zhang S, Liu J, Wang Y, Wang G, Fang J. GCHA-Net: Global context and hybrid attention network for automatic liver segmentation. Comput Biol Med 2023; 152:106352. [PMID: 36481761] [DOI: 10.1016/j.compbiomed.2022.106352] [Received: 08/05/2022] [Revised: 11/15/2022] [Accepted: 11/23/2022] [Indexed: 11/27/2022]
Abstract
Liver segmentation is a critical step in liver cancer diagnosis and surgical planning. The U-Net architecture is one of the most efficient deep networks for medical image segmentation. However, the successive downsampling operators in U-Net cause a loss of spatial information. To solve this problem, we propose a global context and hybrid attention network, called GCHA-Net, to adaptively capture structural and detailed features. To capture global features, a global attention module (GAM) is designed to model interdependencies along the channel and positional dimensions. To capture local features, a feature aggregation module (FAM) is designed, in which a local attention module (LAM) captures spatial information. LAM makes the model focus on local liver regions and suppresses irrelevant information. The experimental results on the LiTS2017 dataset show that the dice per case (DPC) and dice global (DG) values for the liver were 96.5% and 96.9%, respectively. Compared with state-of-the-art models, our model has superior performance in liver segmentation. We also evaluated on the 3Dircadb dataset, where our model obtains the highest accuracy among closely related models. These results show that the proposed model effectively captures global context information and builds correlations between different convolutional layers. The code is available at: https://github.com/HuaxiangLiu/GCAU-Net.
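The DPC and DG metrics quoted above are commonly defined as the mean of per-volume Dice scores versus Dice computed over all volumes pooled together. A minimal sketch, assuming those standard definitions (the abstract does not spell them out):

```python
import numpy as np

def dice(pred, target, eps=1e-6):
    """Soft Dice score between two binary (or soft) arrays."""
    inter = np.sum(pred * target)
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def dice_per_case(preds, targets):
    """DPC: mean of Dice scores computed per volume."""
    return float(np.mean([dice(p, t) for p, t in zip(preds, targets)]))

def dice_global(preds, targets):
    """DG: Dice over all volumes pooled into one comparison."""
    return dice(np.concatenate([p.ravel() for p in preds]),
                np.concatenate([t.ravel() for t in targets]))
```

DPC weights every case equally, so a single badly segmented small liver drags it down more than it drags down DG.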
Affiliation(s)
- Huaxiang Liu: Institute of Intelligent Information Processing, Taizhou University, Taizhou, 318000, Zhejiang, China
- Youyao Fu: Institute of Intelligent Information Processing, Taizhou University, Taizhou, 318000, Zhejiang, China
- Shiqing Zhang: Institute of Intelligent Information Processing, Taizhou University, Taizhou, 318000, Zhejiang, China
- Jun Liu: College of Mechanical Engineering, Quzhou University, Quzhou, 324000, Zhejiang, China
- Yong Wang: School of Aeronautics and Astronautics, Sun Yat Sen University, Guangzhou, 510275, Guangdong, China
- Guoyu Wang: Institute of Intelligent Information Processing, Taizhou University, Taizhou, 318000, Zhejiang, China
- Jiangxiong Fang: Institute of Intelligent Information Processing, Taizhou University, Taizhou, 318000, Zhejiang, China; College of Mechanical Engineering, Quzhou University, Quzhou, 324000, Zhejiang, China
7
Zhang Z, Jiang Y, Qiao H, Wang M, Yan W, Chen J. SIL-Net: A Semi-Isotropic L-shaped network for dermoscopic image segmentation. Comput Biol Med 2022; 150:106146. [PMID: 36228460] [DOI: 10.1016/j.compbiomed.2022.106146] [Received: 08/04/2022] [Revised: 09/13/2022] [Accepted: 09/24/2022] [Indexed: 11/28/2022]
Abstract
BACKGROUND: Dermoscopic image segmentation using deep learning algorithms is a critical technology for skin cancer detection and therapy. This is a spatially equivariant task that relies heavily on Convolutional Neural Networks (CNNs), which lose effective features during cascaded down-sampling and up-sampling. Recently, isotropic vision architectures have emerged that eliminate the cascade procedures of CNNs and demonstrate superior performance; nevertheless, they cannot be used for segmentation directly. Based on these observations, this work explores an efficient architecture that preserves the advantages of the isotropic design while remaining suitable for clinical dermoscopic diagnosis.
METHODS: We introduce a novel Semi-Isotropic L-shaped network (SIL-Net) for dermoscopic image segmentation. First, we propose a Patch Embedding Weak Correlation (PEWC) module to address the lack of interaction between adjacent patches in the standard Patch Embedding process. Second, a plug-and-play, zero-parameter Residual Spatial Mirror Information (RSMI) path is proposed to supplement effective features during up-sampling and refine the lesion boundaries. Third, to further reconstruct deep features and obtain refined lesion regions, a Depth Separable Transpose Convolution (DSTC) based up-sampling module is designed.
RESULTS: The proposed architecture obtains state-of-the-art performance on the dermoscopy benchmark datasets ISIC-2017, ISIC-2018, and PH2, with Dice coefficients (DICE) of 89.63%, 93.47%, and 95.11% and Mean Intersection over Union (MIoU) of 82.02%, 88.21%, and 90.81%, respectively. Furthermore, the robustness and generalizability of our method are demonstrated through additional experiments on standard intestinal polyp datasets (CVC-ClinicDB and Kvasir-SEG).
CONCLUSION: Our findings demonstrate that SIL-Net not only has great potential for precise segmentation of the lesion region but also exhibits strong generalizability and robustness, meeting the requirements for clinical diagnosis. Notably, our method shows state-of-the-art performance on all five datasets, highlighting the effectiveness of the semi-isotropic design mechanism.
Affiliation(s)
- Zequn Zhang: College of Computer Science and Engineering, Northwest Normal University, Lanzhou 730070, China
- Yun Jiang: College of Computer Science and Engineering, Northwest Normal University, Lanzhou 730070, China
- Hao Qiao: College of Computer Science and Engineering, Northwest Normal University, Lanzhou 730070, China
- Meiqi Wang: College of Computer Science and Engineering, Northwest Normal University, Lanzhou 730070, China
- Wei Yan: College of Computer Science and Engineering, Northwest Normal University, Lanzhou 730070, China
- Jie Chen: College of Computer Science and Engineering, Northwest Normal University, Lanzhou 730070, China
8
Mukhlif AA, Al-Khateeb B, Mohammed MA. An extensive review of state-of-the-art transfer learning techniques used in medical imaging: Open issues and challenges. J Intell Syst 2022. [DOI: 10.1515/jisys-2022-0198] [Indexed: 11/15/2022]
Abstract
Deep learning techniques, which build on convolutional neural networks, have shown excellent results in a variety of areas, including image processing and interpretation. However, as the depth of these networks grows, so does the demand for the large amount of labeled data required to train them. The medical field in particular suffers from a lack of images, because obtaining labeled medical images in healthcare is difficult and expensive and requires specialized expertise; the labeling process may also be error-prone and time-consuming. Current research has identified transfer learning as a viable solution to this problem: it allows knowledge gained from a previous task to be transferred to improve and tackle a new problem. This study aims to conduct a comprehensive survey of recent studies that address this problem and of the most important metrics used to evaluate these methods. In addition, this study identifies open problems in transfer learning techniques and highlights problems with medical datasets and potential issues that can be addressed in future research. According to our review, many researchers use models pre-trained on the ImageNet dataset (VGG16, ResNet, Inception v3) in applications such as skin cancer, breast cancer, and diabetic retinopathy classification. These techniques require further investigation because the models were trained on natural, non-medical images. Many researchers also use data augmentation techniques to expand their datasets and avoid overfitting, yet few studies have compared performance with and without data augmentation. Accuracy, recall, precision, F1 score, receiver operating characteristic curve, and area under the curve (AUC) were the most widely used measures in these studies. Furthermore, we identified problems in the melanoma and breast cancer datasets and suggested corresponding solutions.
Affiliation(s)
- Abdulrahman Abbas Mukhlif: Computer Science Department, College of Computer Science and Information Technology, University of Anbar, 31001, Ramadi, Anbar, Iraq
- Belal Al-Khateeb: Computer Science Department, College of Computer Science and Information Technology, University of Anbar, 31001, Ramadi, Anbar, Iraq
- Mazin Abed Mohammed: Computer Science Department, College of Computer Science and Information Technology, University of Anbar, 31001, Ramadi, Anbar, Iraq
9
Minimally parametrized segmentation framework with dual metaheuristic optimisation algorithms and FCM for detection of anomalies in MR brain images. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103866] [Indexed: 11/19/2022]