1. PSMA PET/MR is a New Imaging Option for Identifying Glioma Recurrence and Predicting Prognosis. Recent Pat Anticancer Drug Discov 2024; 19:383-395. PMID: 38214322; DOI: 10.2174/1574892818666230519150401.
Abstract
BACKGROUND Glioma is characterized by a high recurrence rate, and traditional imaging methods (including magnetic resonance imaging, MRI) perform poorly at distinguishing recurrence from treatment-related changes (TRCs). Prostate-specific membrane antigen (PSMA) (US10815200B2, Deutsches Krebsforschungszentrum, German Cancer Research Center) is a type II transmembrane glycoprotein overexpressed in the glioma vascular endothelium and a promising target for imaging and therapy. OBJECTIVE The study aimed to assess the performance of PSMA positron emission tomography/magnetic resonance (PET/MR) for diagnosing recurrence and predicting prognosis in glioma patients. MATERIALS AND METHODS Patients suspected of glioma recurrence who underwent 18F-PSMA-1007 PET/MR were prospectively enrolled. Eight metabolic parameters and fifteen texture features of the lesion were extracted from PSMA PET/MR. The ability of PSMA PET/MR to diagnose glioma recurrence was investigated and compared with conventional MRI. Diagnostic agreement was assessed using Cohen's κ, and the predictive parameters of PSMA PET/MR were obtained. The Kaplan-Meier method and a Cox proportional hazards model were used to analyze recurrence-free survival (RFS) and overall survival (OS). Finally, PSMA expression was analyzed by immunohistochemistry (IHC). RESULTS Nineteen patients with a mean age of 48.11 ± 15.72 years were assessed. The maximum tumor-to-parotid ratio (TPRmax) and texture features extracted from PET and T1-weighted contrast-enhanced (T1-CE) MR differed between recurrence and TRCs (all p < 0.05). PSMA PET/MR and conventional MRI exhibited comparable power in diagnosing recurrence, with specificity and PPV of 100%; the concordance between the two modalities was fair (κ = 0.542, p = 0.072). The optimal cutoffs of the metabolic parameters, including standardized uptake values (SUVmax, SUVmean, and SUVpeak) and TPRmax, for predicting recurrence were 3.35, 1.73, 1.99, and 0.17, respectively, with areas under the curve (AUC) ranging from 0.767 to 0.817 (all p < 0.05). In grade 4 glioblastoma (GBM) patients, SUVmax, SUVmean, SUVpeak, TBRmax, TBRmean, and TPRmax showed improved AUCs (0.833-0.867, all p < 0.05). Patients with SUVmax, SUVmean, or SUVpeak above the cutoff had significantly shorter RFS (all p < 0.05), and patients with SUVmean, SUVpeak, or TPRmax above the cutoff had significantly shorter OS (all p < 0.05). PSMA expression in the glioma vascular endothelium was observed in ten of eleven (90.9%) patients, with moderate-to-high levels in all GBM cases (6/6, 100%). CONCLUSION This preliminary study shows that multiparameter PSMA PET/MR is useful for identifying glioma (especially GBM) recurrence, providing excellent tumor-to-background contrast together with information on tumor heterogeneity, recurrence prediction, and prognosis, although it did not improve diagnostic performance compared with conventional MRI. Larger studies are required to define its potential clinical application in this setting.
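The cutoff analysis described above (an optimal threshold on a metabolic parameter, chosen from the ROC curve) can be sketched in a few lines of plain Python. This is a generic illustration, not the study's method description: the Youden-index criterion is assumed as the cutoff rule, and the scores/labels below are toy values, not the study's measurements.

```python
def roc_auc_and_youden_cutoff(scores, labels):
    """Rank-based AUC (Mann-Whitney form) and the Youden-optimal cutoff.

    scores: metabolic parameter values (e.g. SUVmax);
    labels: 1 = recurrence, 0 = treatment-related change.
    Returns (auc, best_cutoff).
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    # AUC = P(score_pos > score_neg) + 0.5 * P(tie)
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    auc = wins / (len(pos) * len(neg))
    # Youden's J = sensitivity + specificity - 1, maximized over candidate cutoffs
    best_j, best_cut = -1.0, None
    for cut in sorted(set(scores)):
        sens = sum(p >= cut for p in pos) / len(pos)
        spec = sum(n < cut for n in neg) / len(neg)
        if sens + spec - 1 > best_j:
            best_j, best_cut = sens + spec - 1, cut
    return auc, best_cut

# Illustrative toy data (NOT the study's measurements)
scores = [1.2, 1.8, 2.4, 3.1, 3.6, 4.0, 0.9, 1.1]
labels = [0, 0, 1, 1, 1, 1, 0, 0]
auc, cutoff = roc_auc_and_youden_cutoff(scores, labels)
```

With perfectly separated toy data the AUC is 1.0 and the chosen cutoff is the smallest positive-class score, mirroring how per-parameter cutoffs such as those for SUVmax or TPRmax are derived.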
2. Research on Recognition of Coal and Gangue Based on Laser Speckle Images. Sensors (Basel) 2023; 23:9113. PMID: 38005501; PMCID: PMC10674464; DOI: 10.3390/s23229113.
Abstract
Coal gangue image recognition is a critical technology for achieving automatic separation in coal processing, characterized by its rapid, environmentally friendly, and energy-saving nature. However, the response characteristics of coal and gangue vary greatly under different illuminance conditions, which poses challenges to the stability of feature extraction and recognition, especially when strict illuminance requirements are necessary, leading to fluctuating recognition accuracy in industrial environments. To address these issues and improve the accuracy and stability of image recognition under variable illuminance, we propose a novel coal gangue recognition method based on laser speckle images. Firstly, we studied the inter-class separability and intra-class compactness of the collected laser speckle images of coal and gangue by extracting gray and texture features from them, and analyzed how well laser speckle images represent the differences between the two minerals. Subsequently, coal gangue recognition was achieved using an SVM classifier on the extracted features; the fused-feature approach achieved a recognition accuracy of 94.4%, further evidencing the feasibility of the method. Lastly, we conducted a comparative experiment between natural images and laser speckle images for coal gangue recognition using the same features. The average accuracy of laser speckle image recognition under various lighting conditions was 96.7%, with a standard deviation of 1.7%, significantly surpassing the accuracy obtained from natural coal and gangue images. The results showed that the proposed laser speckle image features can facilitate more stable coal gangue recognition under varying illumination, providing a new, reliable method for accurate classification of coal and gangue in industrial mine environments.
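The pipeline above (extract gray-level features from each speckle patch, then classify) can be sketched minimally. The paper uses an SVM; a nearest-centroid rule is substituted here purely as a stand-in so the sketch stays dependency-free, and the 2x2 "patches" are invented toy data.

```python
import statistics

def gray_features(img):
    """Mean and standard deviation of pixel intensities: the simplest
    of the gray-level features used to separate coal from gangue."""
    flat = [p for row in img for p in row]
    return (statistics.mean(flat), statistics.pstdev(flat))

def nearest_centroid_predict(train, labels, sample):
    """Tiny stand-in classifier (the paper uses an SVM): assign the class
    whose mean feature vector is closest in Euclidean distance."""
    cents = {}
    for c in set(labels):
        feats = [f for f, y in zip(train, labels) if y == c]
        cents[c] = [sum(v) / len(v) for v in zip(*feats)]
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(cents, key=lambda c: dist(cents[c], sample))

# Toy 2x2 "speckle" patches: coal darker/low-contrast, gangue brighter
coal = [[[10, 12], [11, 13]], [[9, 11], [10, 12]]]
gangue = [[[80, 120], [60, 140]], [[90, 130], [70, 110]]]
X = [gray_features(p) for p in coal + gangue]
y = ["coal", "coal", "gangue", "gangue"]
pred = nearest_centroid_predict(X, y, gray_features([[12, 14], [11, 13]]))
```

The point of the sketch is the feature step, not the classifier: any margin-based learner (such as the SVM used in the study) consumes the same feature vectors.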
3. Application of Feature Definition and Quantification in Biological Sequence Analysis. Curr Genomics 2023; 24:64-65. PMID: 37994326; PMCID: PMC10662379; DOI: 10.2174/1389202924666230816150732.
Abstract
Biological sequence analysis is the most fundamental work in bioinformatics. Many methods have been developed for it, including sequence alignment-based methods and alignment-free methods. In addition, some sequence analysis methods are based on the definition and quantification of features of the sequence itself. This editorial introduces these methods of biological sequence analysis and explores the significance of feature definition and quantification in the study of biological sequences.
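One concrete example of "feature definition and quantification" for sequences, in the alignment-free family the editorial mentions, is turning a sequence into a fixed-length k-mer frequency vector. This is a generic illustration of the idea, not a method from the editorial itself.

```python
from collections import Counter
from itertools import product

def kmer_features(seq, k=2, alphabet="ACGT"):
    """Quantify a biological sequence as a fixed-length k-mer frequency
    vector: one simple alignment-free feature definition.  The vector has
    len(alphabet)**k entries, in lexicographic k-mer order, summing to 1."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = max(sum(counts.values()), 1)
    return [counts["".join(km)] / total for km in product(alphabet, repeat=k)]

vec = kmer_features("ACGTACGT", k=2)
```

Because every sequence maps to a vector of the same length, such features let sequences of different lengths be compared with ordinary vector distances, with no alignment step.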
4. Texture-based speciation of otitis media-related bacterial biofilms from optical coherence tomography images using supervised classification. Res Sq 2023:rs.3.rs-3466690. PMID: 37961282; PMCID: PMC10635317; DOI: 10.21203/rs.3.rs-3466690/v1.
Abstract
Otitis media (OM) is primarily a bacterial middle-ear infection prevalent among children worldwide. In recurrent and/or chronic OM cases, antibiotic-resistant bacterial biofilms can develop in the middle ear. A biofilm related to OM typically contains one or multiple bacterial strains, the most common of which include Haemophilus influenzae, Streptococcus pneumoniae, Moraxella catarrhalis, Pseudomonas aeruginosa, and Staphylococcus aureus. Optical coherence tomography (OCT) has been used clinically to visualize the presence of bacterial biofilms in the middle ear. This study used OCT to compare microstructural image texture features from primary bacterial biofilms in vitro and in vivo. The proposed method applied supervised machine-learning frameworks (SVM, random forest (RF), and XGBoost) to classify and speciate multiclass bacterial biofilms from texture features extracted from OCT B-scan images obtained from in vitro cultures and from clinically obtained in vivo images of human subjects. Our findings show that optimized SVM-RBF and XGBoost classifiers can help distinguish bacterial biofilms by incorporating clinical knowledge into classification decisions. Furthermore, both classifiers achieved an AUC (area under the receiver operating characteristic curve) of more than 95% in detecting each biofilm class. These results demonstrate the potential for differentiating OM-causing bacterial biofilms through texture analysis of OCT images and a machine-learning framework, which could provide additional clinically relevant data during real-time in vivo characterization of ear infections.
5. Evaluating the Gray Level Co-Occurrence Matrix-Based Texture Features of Magnetic Resonance Images for Glioblastoma Multiform Patients' Treatment Response Assessment. J Med Signals Sens 2023; 13:261-271. PMID: 37809020; PMCID: PMC10559301; DOI: 10.4103/jmss.jmss_50_22.
Abstract
Background Medical images of cancer patients are usually evaluated qualitatively by clinical specialists, which makes diagnostic accuracy subjective and dependent on the clinician's skill. Quantitative methods based on texture feature analysis may facilitate such evaluations. This study aimed to analyze gray-level co-occurrence matrix (GLCM)-based texture features extracted from axial T1 magnetic resonance (MR) images of glioblastoma multiforme (GBM) patients to determine the distinctive features specific to treatment response or disease progression. Methods Twenty GLCM-based texture features, in addition to mean, standard deviation, entropy, RMS, kurtosis, and skewness, were extracted from step I MR images (obtained 72 h after surgery) and step II MR images (obtained three months later). Patients were classified manually as responsive or non-responsive to treatment based on the radiological evaluation of step II images. Texture features extracted from step I and step II images were analyzed to determine the distinctive features for the responsive and progressive-disease groups. MATLAB 2020 was used for feature extraction, and SPSS version 26 for statistical analysis; p < 0.05 was considered statistically significant. Results Although no step I texture features differed significantly between the two groups, almost all step II GLCM-based texture features, in addition to entropy M and skewness, were significantly different between the responsive and progressive-disease groups. Conclusions GLCM-based texture features extracted from MR images of GBM patients can be used in automated algorithms to quantitatively predict or interpret treatment response, complementing qualitative evaluations.
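The first-order statistics this study extracts alongside the GLCM features (mean, standard deviation, RMS, skewness, kurtosis, entropy) have standard textbook definitions, sketched below for a flat list of pixel intensities. The input list is an invented example, and population (biased) moment formulas are assumed.

```python
import math
from collections import Counter

def first_order_features(pixels):
    """First-order statistics of an intensity distribution: mean, std, RMS,
    skewness, kurtosis, and Shannon entropy of the intensity histogram."""
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    std = math.sqrt(var)
    rms = math.sqrt(sum(p * p for p in pixels) / n)
    skew = sum((p - mean) ** 3 for p in pixels) / (n * std ** 3) if std else 0.0
    kurt = sum((p - mean) ** 4 for p in pixels) / (n * var ** 2) if var else 0.0
    probs = [c / n for c in Counter(pixels).values()]
    entropy = -sum(p * math.log2(p) for p in probs)  # bits
    return {"mean": mean, "std": std, "rms": rms,
            "skewness": skew, "kurtosis": kurt, "entropy": entropy}

f = first_order_features([1, 1, 2, 2, 3, 3, 4, 4])
```

For the symmetric toy histogram above, skewness is 0 and the entropy of four equally likely gray levels is exactly 2 bits, which is a quick sanity check on the implementation.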
6. Estimating potassium in potato plants based on multispectral images acquired from unmanned aerial vehicles. Front Plant Sci 2023; 14:1265132. PMID: 37810376; PMCID: PMC10551631; DOI: 10.3389/fpls.2023.1265132.
Abstract
Plant potassium content (PKC) is a crucial indicator of crop potassium nutrient status and is vital for making informed fertilization decisions in the field. This study aims to enhance the accuracy of PKC estimation during key potato growth stages by using vegetation indices (VIs) and spatial structure features derived from UAV-based multispectral sensors. Specifically, the fraction of vegetation coverage (FVC), gray-level co-occurrence matrix texture, and multispectral VIs were extracted from multispectral images acquired at the potato tuber formation, tuber growth, and starch accumulation stages. Linear regression and stepwise multiple linear regression analyses were conducted to investigate how VIs, both individually and in combination with spatial structure features, affect potato PKC estimation. The findings lead to the following conclusions: (1) estimating potato PKC using multispectral VIs is feasible but requires further improvements in accuracy; (2) augmenting VIs with either the FVC or texture features makes potato PKC estimation more accurate than using single VIs; and (3) integrating VIs with both the FVC and texture features further improves the accuracy of potato PKC estimation, with R2 values of 0.63, 0.84, and 0.80 for the three growth stages, respectively, and corresponding root mean square errors of 0.44%, 0.29%, and 0.25%. Overall, these results highlight the potential of integrating canopy spectral information and spatial structure information obtained from multispectral sensors mounted on unmanned aerial vehicles for monitoring crop growth and assessing potassium nutrient status, with significant implications for agricultural management.
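The linear-regression workflow above (fit a feature-to-PKC model, then report R2 and RMSE) can be sketched in plain Python. The VI/PKC pairs below are illustrative values only, not the study's data, and a single-predictor ordinary least-squares fit stands in for the stepwise multiple regression.

```python
def fit_linear(xs, ys):
    """Ordinary least-squares fit y = a + b*x, returning (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def r2_rmse(xs, ys, a, b):
    """Goodness-of-fit metrics of the kind reported in the study: R^2 and RMSE."""
    preds = [a + b * x for x in xs]
    my = sum(ys) / len(ys)
    ss_res = sum((y - p) ** 2 for y, p in zip(ys, preds))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot, (ss_res / len(ys)) ** 0.5

# Toy vegetation-index vs. plant-potassium-content pairs (illustrative only)
vi = [0.2, 0.4, 0.6, 0.8]
pkc = [1.0, 2.0, 3.0, 4.0]
a, b = fit_linear(vi, pkc)
r2, rmse = r2_rmse(vi, pkc, a, b)
```

Adding FVC or texture features, as the study does, simply means fitting the same kind of model with more predictor columns; the R2/RMSE reporting is unchanged.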
7. Fractal dimension: analyzing its potential as a neuroimaging biomarker for brain tumor diagnosis using machine learning. Front Physiol 2023; 14:1201617. PMID: 37528895; PMCID: PMC10390093; DOI: 10.3389/fphys.2023.1201617.
Abstract
Purpose: The main purpose of this study was to comprehensively investigate the potential of fractal dimension (FD) measures in discriminating brain gliomas into low-grade glioma (LGG) and high-grade glioma (HGG) by examining tumor constituents and non-tumorous gray matter (GM) and white matter (WM) regions. Methods: Retrospective magnetic resonance imaging (MRI) data of 42 glioma patients (LGG, n = 27 and HGG, n = 15) were used in this study. Using MRI, we calculated different FD measures based on the general structure, boundary, and skeleton aspects of the tumorous and non-tumorous brain GM and WM regions. Texture features, namely, angular second moment, contrast, inverse difference moment, correlation, and entropy, were also measured in the tumorous and non-tumorous regions. The efficacy of FD features was assessed by comparing them with texture features. Statistical inference and machine learning approaches were used on the aforementioned measures to distinguish LGG and HGG patients. Results: FD measures from tumorous and non-tumorous regions were able to distinguish LGG and HGG patients. Among the 15 different FD measures, the general structure FD values of enhanced tumor regions yielded high accuracy (93%), sensitivity (97%), specificity (98%), and area under the receiver operating characteristic curve (AUC) score (98%). Non-tumorous GM skeleton FD values also yielded good accuracy (83.3%), sensitivity (100%), specificity (60%), and AUC score (80%) in classifying the tumor grades. These measures were also found to be significantly (p < 0.05) different between LGG and HGG patients. On the other hand, among the 25 texture features, enhanced tumor region features, namely, contrast, correlation, and entropy, revealed significant differences between LGG and HGG. In machine learning, the enhanced tumor region texture features yielded high accuracy, sensitivity, specificity, and AUC score. 
Conclusion: A comparison between texture and FD features revealed that FD analysis on different aspects of the tumorous and non-tumorous components not only distinguished LGG and HGG patients with high statistical significance and classification accuracy but also provided better insights into glioma grade classification. Therefore, FD features can serve as potential neuroimaging biomarkers for glioma.
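A common way to compute a fractal dimension like those compared above is box counting: cover the structure with boxes of decreasing size s, count occupied boxes N(s), and take the slope of log N(s) against log(1/s). This generic sketch (not the paper's specific FD variants) uses a filled square, whose dimension should come out as 2.

```python
import math

def box_counting_dimension(points, sizes=(1, 2, 4, 8)):
    """Estimate fractal dimension of a 2D binary point set by box counting:
    least-squares slope of log(N(s)) versus log(1/s)."""
    xs, ys = [], []
    for s in sizes:
        boxes = {(x // s, y // s) for x, y in points}  # occupied s-by-s boxes
        xs.append(math.log(1.0 / s))
        ys.append(math.log(len(boxes)))
    n = len(sizes)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    den = sum((a - mx) ** 2 for a in xs)
    return num / den

# A filled 16x16 square is a plain 2D region, so its dimension is ~2
square = [(x, y) for x in range(16) for y in range(16)]
dim = box_counting_dimension(square)
```

Tumor boundaries and skeletons, as analyzed in the study, yield non-integer slopes between 1 and 2, which is what makes FD a useful shape descriptor.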
8. Non-destructive monitoring of maize LAI by fusing UAV spectral and textural features. Front Plant Sci 2023; 14:1158837. PMID: 37063231; PMCID: PMC10102429; DOI: 10.3389/fpls.2023.1158837.
Abstract
Leaf area index (LAI) is an essential indicator for crop growth monitoring and yield prediction. Real-time, non-destructive, and accurate monitoring of crop LAI is of great significance for intelligent decision-making on crop fertilization and irrigation, as well as for predicting and providing early warning of grain productivity. This study investigates the feasibility of using spectral and texture features from unmanned aerial vehicle (UAV) multispectral imagery, combined with machine learning modeling methods, for maize LAI estimation. Remote sensing monitoring of maize LAI was carried out on a UAV high-throughput phenotyping platform using different maize varieties as the research target. Firstly, spectral parameters and texture features were extracted from the UAV multispectral images, and the Normalized Difference Texture Index (NDTI), Difference Texture Index (DTI), and Ratio Texture Index (RTI) were constructed by linear calculation of texture features. Then, the correlations between LAI and the spectral parameters, texture features, and texture indices were analyzed, and the image features with strong correlations were screened out. Finally, combined with machine learning methods, LAI estimation models with different types of input variables were constructed, and the effect of combining image features on LAI estimation was evaluated. The results revealed that vegetation indices based on the red (650 nm), red-edge (705 nm), and NIR (842 nm) bands had high correlation coefficients with LAI, and the correlation between the linearly transformed texture features and LAI was significantly improved. Moreover, machine learning models combining spectral and texture features performed best: Support Vector Machine (SVM) models of vegetation and texture indices were the best in terms of fit, stability, and estimation accuracy (R2 = 0.813, RMSE = 0.297, RPD = 2.084). The results of this study help improve the efficiency of maize variety selection and provide a reference for UAV high-throughput phenotyping technology in fine crop management at the field plot scale.
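The texture indices named above (DTI, RTI, NDTI) combine two texture-feature values by difference, ratio, and normalized difference, in direct analogy with spectral vegetation indices such as NDVI. The formulations below are the commonly assumed ones; the exact band/feature pairings used in the study are not reproduced here, and the input values are illustrative.

```python
def dti(t1, t2):
    """Difference Texture Index: plain difference of two texture features."""
    return t1 - t2

def rti(t1, t2):
    """Ratio Texture Index: ratio of two texture features."""
    return t1 / t2

def ndti(t1, t2):
    """Normalized Difference Texture Index, by analogy with NDVI:
    (T1 - T2) / (T1 + T2), bounded in [-1, 1] for positive inputs."""
    return (t1 - t2) / (t1 + t2)

# Two illustrative texture-feature values, e.g. from different bands
t_a, t_b = 0.5, 0.3
indices = {"DTI": dti(t_a, t_b), "RTI": rti(t_a, t_b), "NDTI": ndti(t_a, t_b)}
```

The normalization in NDTI is what makes such indices less sensitive to a common multiplicative factor in both inputs, which is one reason linearly transformed texture features correlated better with LAI.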
9. Multiparametric Quantitative Analysis of Photodamage to Skin Using Optical Coherence Tomography. Sensors (Basel) 2023; 23:3589. PMID: 37050649; PMCID: PMC10098911; DOI: 10.3390/s23073589.
Abstract
Ultraviolet (UV) irradiation causes 90% of photodamage to skin, and long-term UV exposure is the largest threat to skin health. To study the mechanism of UV-induced photodamage and the repair of sunburnt skin, the key problem is how to evaluate UV-induced photodamage non-destructively and continuously. In this study, a method to quantitatively analyze the structural and tissue optical parameters of artificial skin (AS) using optical coherence tomography (OCT) was proposed as a way to evaluate photodamage non-destructively and continuously. AS surface roughness was obtained from the characteristic peaks of the OCT intensity signal, which also served as the basis for quantifying AS cuticle thickness using Dijkstra's algorithm. Local texture features within the AS were obtained with the gray-level co-occurrence matrix method, and a modified depth-resolved algorithm based on a single-scattering model was used to quantify the 3D scattering coefficient distribution within the AS. A multiparameter assessment of AS photodamage was carried out, and the results were compared with MTT assay results and H&E staining. The UV photodamage experiments showed that, compared with normally cultured AS, the cuticle of the photodamaged model was thicker (56.5%) and had greater surface roughness (14.4%). The angular second moment was greater and the correlation smaller, in agreement with the H&E staining microscopy, and both showed a good linear relationship with the UV irradiation dose, illustrating the potential of OCT for measuring internal structural damage. The tissue scattering coefficient of the AS correlated well with the MTT results and can be used to quantify damage to bioactivity. The experimental results also demonstrate the anti-photodamage efficacy of the vitamin C factor.
Quantitative analysis of structural and tissue optical parameters of AS by OCT enables the non-destructive and continuous detection of AS photodamage in multiple dimensions.
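Under the single-scattering model mentioned above, the depth-resolved OCT signal decays as I(z) = I0 * exp(-2 * mu_s * z), so the scattering coefficient mu_s can be recovered from the slope of ln(I) versus depth. The sketch below fits a noiseless synthetic A-scan; the depths, units, and mu_s value are invented for illustration, and the study's modified depth-resolved algorithm is more elaborate than this plain log-linear fit.

```python
import math

def fit_scattering_coefficient(depths, intensities):
    """Fit the single-scattering OCT model I(z) = I0 * exp(-2*mu_s*z)
    by least-squares regression of ln(I) on depth; returns mu_s."""
    logs = [math.log(i) for i in intensities]
    n = len(depths)
    mz, ml = sum(depths) / n, sum(logs) / n
    slope = sum((z - mz) * (l - ml) for z, l in zip(depths, logs)) / \
            sum((z - mz) ** 2 for z in depths)
    return -slope / 2.0  # ln I = ln I0 - 2*mu_s*z

# Synthetic A-scan sampled from the model with mu_s = 1.5 (per mm, say)
mu_true = 1.5
zs = [0.1 * k for k in range(1, 6)]                # depths in mm
Is = [math.exp(-2 * mu_true * z) for z in zs]      # model intensities
mu_est = fit_scattering_coefficient(zs, Is)
```

On real A-scans the same fit is applied over a depth window after averaging, since speckle noise makes single-pixel slopes unreliable.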
10. Reproducibility of the principal component analysis (PCA)-based data-driven respiratory gating on texture features in non-small cell lung cancer patients with 18F-FDG PET/CT. J Appl Clin Med Phys 2023; 24:e13967. PMID: 36943700; PMCID: PMC10161026; DOI: 10.1002/acm2.13967.
Abstract
OBJECTIVE Texture analysis is one of the lung cancer countermeasures in the field of radiomics. Even though image quality affects texture features, the reproducibility of principal component analysis (PCA)-based data-driven respiratory gating (DDG) on texture features remains poorly understood. Hence, this study aimed to clarify the reproducibility of PCA-based DDG on texture features in non-small cell lung cancer (NSCLC) patients with 18F-fluorodeoxyglucose (18F-FDG) positron emission tomography/computed tomography (PET/CT). METHODS Twenty patients with NSCLC who underwent 18F-FDG PET/CT in routine clinical practice were retrospectively analyzed. Each patient's PET data were reconstructed into two PET groups: no gating (NG-PET) and PCA-based DDG gating (DDG-PET). Forty-six image features were analyzed using LIFEx software. Reproducibility was evaluated using Lin's concordance correlation coefficient (ρc) and percentage difference (%Diff). Non-reproducibility was defined as having unacceptable strength (ρc < 0.8) and a %Diff of >10%. NG-PET and DDG-PET were compared using the Wilcoxon signed-rank test. RESULTS A total of 3/46 (6.5%) image features had unacceptable strength, and 9/46 (19.6%) image features had a %Diff of >10%. Significant differences between the NG-PET and DDG-PET groups were confirmed in only 4/46 (8.7%) of the high-%Diff image features. CONCLUSION Although applying DDG affected several texture features, most image features had adequate reproducibility. PCA-based DDG-PET can be routinely used as interchangeable images for texture feature extraction in NSCLC patients.
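Lin's concordance correlation coefficient used above has a closed form: ρc = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))^2), which penalizes both scatter and systematic offset between paired measurements. The sketch below also includes a %Diff computation; note that %Diff definitions vary across papers, and the mean-absolute-percentage form here is an assumption, as are the toy feature values.

```python
def lin_ccc(x, y):
    """Lin's concordance correlation coefficient between paired measurements
    (e.g. one texture feature measured on NG-PET vs DDG-PET images)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * cov / (vx + vy + (mx - my) ** 2)

def percent_diff(x, y):
    """Mean absolute percentage difference between paired reconstructions
    (one common definition; formulations differ across studies)."""
    return 100 * sum(abs(a - b) / a for a, b in zip(x, y)) / len(x)

# Toy paired feature values: perfect agreement gives ccc = 1 and %Diff = 0
ng = [1.0, 2.0, 3.0, 4.0]
ddg = [1.0, 2.0, 3.0, 4.0]
ccc_identical = lin_ccc(ng, ddg)
pdiff_identical = percent_diff(ng, ddg)
```

Unlike Pearson's r, which is 1 for any exact linear relation, ρc drops below 1 whenever the DDG values are shifted or rescaled relative to the ungated ones, which is exactly the kind of disagreement a reproducibility study must detect.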
11. Impact of Aggregation Methods for Texture Features on Their Robustness Performance: Application to Nasopharyngeal 18F-FDG PET/CT. Cancers (Basel) 2023; 15:932. PMID: 36765889; PMCID: PMC9913076; DOI: 10.3390/cancers15030932.
Abstract
PURPOSE This study aims to investigate the impact of the aggregation methods used to generate texture features on their robustness in nasopharyngeal carcinoma (NPC), based on 18F-FDG PET/CT images. METHODS 128 NPC patients were enrolled, and 95 texture features were extracted for each patient, covering six feature families under different aggregation methods. For GLCM and GLRLM features, six aggregation methods were considered; for GLSZM, GLDZM, NGTDM, and NGLDM features, three. The robustness of the features affected by aggregation methods was assessed by the pair-wise intra-class correlation coefficient (ICC). Furthermore, the effects of discretization and partial volume correction (PVC) on the percentage of ICC categories of all texture features were evaluated using the overall ICC instead of the pair-wise ICC. RESULTS Twelve features had excellent pair-wise ICCs across aggregation methods: joint average, sum average, autocorrelation, long run emphasis, high grey level run emphasis, short run high grey level emphasis, long run high grey level emphasis, run length variance, SZM high grey level emphasis, DZM high grey level emphasis, high grey level count emphasis, and dependence count percentage. For GLCM and GLRLM features, 19/25 and 14/16 features, respectively, showed excellent pair-wise ICCs across aggregation methods (averaged and merged) on the same dimensional features (2D, 2.5D, or 3D). Different discretization levels and partial volume corrections led to consistent robustness of texture features affected by aggregation methods. CONCLUSION Different dimensional features with the same aggregation methods showed worse robustness than same-dimensional features with different aggregation methods. Different discretization levels and PVC algorithms had a negligible effect on the percentage of ICC categories of all texture features.
12. Intelligent image analysis recognizes important orchid viral diseases. Front Plant Sci 2022; 13:1051348. PMID: 36531380; PMCID: PMC9755359; DOI: 10.3389/fpls.2022.1051348.
Abstract
Phalaenopsis orchids are one of Taiwan's most important export commodities. Most orchids are planted and grown in greenhouses. Early detection of orchid diseases is crucial to orchid farmers during cultivation. At present, orchid viral diseases are generally identified by manual observation and the grower's experienced judgment. The most commonly used assays for virus identification are nucleic acid amplification and serology; however, these are neither time- nor cost-efficient. Therefore, this study aimed to create a system for automatically identifying the common viral diseases in orchids from orchid images. Our method includes the following steps: image preprocessing by color space transformation and gamma correction, detection of leaves by a U-net model, removal of non-leaf fragment areas by connected-component labeling, acquisition of leaf texture features, and disease identification by a two-stage model integrating a random forest model and an inception network (deep learning) model. The proposed system achieved excellent accuracies of 0.9707 for image segmentation of orchid leaves and 0.9180 for disease identification. Furthermore, the system outperformed naked-eye identification on the easily misidentified categories [cymbidium mosaic virus (CymMV) and odontoglossum ringspot virus (ORSV)], with an accuracy of 0.842 using the two-stage model versus 0.667 by naked-eye identification. This system would benefit orchid disease recognition in Phalaenopsis cultivation.
13. Image Enhancement of Maritime Infrared Targets Based on Scene Discrimination. Sensors (Basel) 2022; 22:5873. PMID: 35957429; PMCID: PMC9371148; DOI: 10.3390/s22155873.
Abstract
Infrared image enhancement technology can effectively improve image quality and enhance target saliency, and it is a critical component of marine target search and tracking systems. However, the imaging quality of maritime infrared images is easily affected by weather and sea conditions, suffering from low contrast and weak target contour information. At the same time, targets are disturbed by sea clutter of varying intensity, so their characteristics also differ and cannot be handled by a single algorithm. To address these problems, the relationship between the directional texture features of the target and the roughness of the sea surface is analyzed in depth. According to the texture roughness of the waves, the image scene is adaptively divided into calm and rough sea surfaces. Clutter suppression and feature fusion strategies are then set using a Gabor filter at a specific frequency and the gradient-based target feature extraction operator proposed in this paper, yielding multi-scale fused target feature images for the two scene types, which serve as guide images for guided filtering. The original image is decomposed into a target layer and a background layer to extract target features while avoiding image distortion. The blurred background around the target contour is extracted by Gaussian filtering based on the potential target region, eliminating the edge blur caused by the target's heat conduction. Finally, an enhanced image is obtained by fusing the target and background layers with appropriate weights.
The experimental results show that, compared with the current image enhancement method, the method proposed in this paper can improve the clarity and contrast of images, enhance the detectability of targets in distress, remove sea surface clutter while retaining the natural environment features in the background, and provide more information for target detection and continuous tracking in maritime search and rescue.
14. Cotton Yield Estimation Based on Vegetation Indices and Texture Features Derived From RGB Image. Front Plant Sci 2022; 13:925986. PMID: 35783985; PMCID: PMC9240637; DOI: 10.3389/fpls.2022.925986.
Abstract
Yield monitoring is an important parameter to evaluate cotton productivity during cotton harvest. Nondestructive and accurate yield monitoring is of great significance to cotton production. Unmanned aerial vehicle (UAV) remote sensing has fast and repetitive acquisition ability. The visible vegetation indices has the advantages of low cost, small amount of calculation and high resolution. The combination of the UAV and visible vegetation indices has been more and more applied to crop yield monitoring. However, there are some shortcomings in estimating cotton yield based on visible vegetation indices only as the similarity between cotton and mulch film makes it difficult to differentiate them and yields may be saturated based on vegetation index estimates near harvest. Texture feature is another important remote sensing information that can provide geometric information of ground objects and enlarge the spatial information identification based on original image brightness. In this study, RGB images of cotton canopy were acquired by UAV carrying RGB sensors before cotton harvest. The visible vegetation indices and texture features were extracted from RGB images for cotton yield monitoring. Feature parameters were selected in different methods after extracting the information. Linear and nonlinear methods were used to build cotton yield monitoring models based on visible vegetation indices, texture features and their combinations. The results show that (1) vegetation indices and texture features extracted from the ultra-high-resolution RGB images obtained by UAVs were significantly correlated with the cotton yield; (2) The best model was that combined with vegetation indices and texture characteristics RF_ELM model, verification set R 2 was 0.9109, and RMSE was 0.91277 t.ha-1. rRMSE was 29.34%. 
In conclusion, these results demonstrate that a UAV carrying an RGB sensor has potential for cotton yield monitoring and can provide a theoretical basis and technical support for field cotton production evaluation.
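The abstract does not list which visible-band indices were computed; purely as an illustration, the sketch below (Python with NumPy) computes the widely used Excess Green index (ExG = 2g − r − b on brightness-normalized chromatic coordinates), one common choice for separating green canopy from soil or mulch film:

```python
import numpy as np

def excess_green(rgb):
    """Excess Green (ExG) visible vegetation index from an RGB image.

    rgb: float array of shape (H, W, 3) with values in [0, 1].
    Uses chromatic coordinates (channel / channel sum) so the index
    is robust to overall brightness.
    """
    total = rgb.sum(axis=2)
    total[total == 0] = 1e-9          # avoid division by zero on black pixels
    r, g, b = (rgb[..., i] / total for i in range(3))
    return 2 * g - r - b              # high for green canopy, ~0 for gray film/soil

# toy 2x2 image: green canopy pixels and gray (mulch-film-like) pixels
img = np.array([[[0.1, 0.8, 0.1], [0.5, 0.5, 0.5]],
                [[0.2, 0.7, 0.1], [0.9, 0.9, 0.9]]])
exg = excess_green(img)
```

Other visible indices (e.g. ExR or VARI) follow the same per-pixel arithmetic on the normalized channels.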
|
15
|
Quantitative Response of Gray-Level Co-Occurrence Matrix Texture Features to the Salinity of Cracked Soda Saline-Alkali Soil. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2022; 19:ijerph19116556. [PMID: 35682139 PMCID: PMC9180774 DOI: 10.3390/ijerph19116556] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/14/2022] [Revised: 05/19/2022] [Accepted: 05/26/2022] [Indexed: 12/10/2022]
Abstract
Desiccation cracking during water evaporation is a common phenomenon in soda saline–alkali soils and is mainly determined by soil salinity. Quantitative measurement of the surface cracking status of soda saline–alkali soils is therefore highly significant for a range of applications. Texture features can help to determine the mechanical properties of soda saline–alkali soils, thus improving the understanding of the mechanism of desiccation cracking in saline–alkali soils. This study aims to provide a new standard for describing the surface cracking conditions of soda saline–alkali soil on the basis of gray-level co-occurrence matrix (GLCM) texture analysis and to quantitatively study the responses of GLCM texture features to soil salinity. To achieve this, images of 200 field soil samples with different surface cracks were processed, and their GLCMs were calculated under different parameters, including direction, gray level, and step size. Correlation analysis was then conducted between texture features and electrical conductivity (EC) values. The results indicated that direction had little effect on the GLCM texture features and that four selected texture features, contrast (CON), angular second moment (ASM), entropy (ENT), and homogeneity (HOM), were most correlated with EC under a gray level of 2 and a step size of 1 pixel. The results also showed that logarithmic models can accurately describe the relationships between EC values and the GLCM texture features of soda saline–alkali soils in the Songnen Plain of China, with calibration R2 ranging from 0.88 to 0.92 and RMSE from 2.12 × 10−4 to 9.68 × 10−3. This study can therefore enhance the understanding of desiccation cracking of salt-affected soil and help improve the detection accuracy of soil salinity.
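As a concrete illustration of the texture measures named above, the following minimal NumPy sketch builds a symmetric horizontal GLCM at the study's reported settings (gray level 2, step size 1 pixel) and computes CON, ASM, ENT, and HOM from it. The quantization scheme and the natural-log entropy are assumptions for the sketch, not details taken from the paper:

```python
import numpy as np

def glcm_features(img, levels=2, step=1):
    """Symmetric horizontal gray-level co-occurrence matrix and the four
    features used in the study: contrast, ASM, entropy, homogeneity."""
    # quantize the image to `levels` gray levels
    q = np.floor(img.astype(float) / (img.max() + 1e-9) * levels).astype(int)
    q = np.clip(q, 0, levels - 1)
    glcm = np.zeros((levels, levels))
    for i, j in zip(q[:, :-step].ravel(), q[:, step:].ravel()):
        glcm[i, j] += 1
        glcm[j, i] += 1                       # symmetric co-occurrences
    p = glcm / glcm.sum()                     # normalize to joint probabilities
    idx_i, idx_j = np.indices(p.shape)
    con = np.sum(p * (idx_i - idx_j) ** 2)            # contrast (CON)
    asm = np.sum(p ** 2)                              # angular second moment (ASM)
    ent = -np.sum(p[p > 0] * np.log(p[p > 0]))        # entropy (ENT)
    hom = np.sum(p / (1.0 + (idx_i - idx_j) ** 2))    # homogeneity (HOM)
    return con, asm, ent, hom

# a checkerboard (maximally "cracked" pattern at this scale)
checker = np.indices((8, 8)).sum(axis=0) % 2
con, asm, ent, hom = glcm_features(checker)
```

A cracked, high-frequency surface drives CON up and HOM down, which is the intuition behind correlating these features with salinity-driven cracking.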
|
16
|
Quantifying invasiveness of clinical stage IA lung adenocarcinoma with computed tomography texture features. J Thorac Cardiovasc Surg 2022; 163:805-815.e3. [PMID: 33541730 DOI: 10.1016/j.jtcvs.2020.12.092] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/20/2020] [Revised: 11/21/2020] [Accepted: 12/11/2020] [Indexed: 02/05/2023]
Abstract
OBJECTIVES The study objectives were to establish and validate a nomogram for predicting pathological invasiveness in clinical stage IA lung adenocarcinoma, based on computed tomography texture features, and to help identify patients potentially unsuitable for sublobar resection. METHODS Patients with clinical stage IA lung adenocarcinoma who underwent surgery at Guangdong Provincial People's Hospital between January 2015 and October 2018 were retrospectively reviewed. All surgically resected nodules were pathologically classified into less-invasive and invasive cohorts. Each nodule was manually segmented, and its computerized texture features were extracted. Clinicopathological and computed tomography texture features were compared between the 2 cohorts. A nomogram for distinguishing pathological invasiveness was established and validated. RESULTS Among 428 enrolled patients, 249 were diagnosed with invasive pathological subtypes. Smoking status (odds ratio, 2.906; 95% confidence interval, 1.285-6.579; P = .011), mean computed tomography attenuation value (odds ratio, 1.005; 95% confidence interval, 1.002-1.007; P < .001), and entropy (odds ratio, 8.536; 95% confidence interval, 3.478-20.951; P < .001) were identified as independent predictors of pathological invasiveness by multivariate logistic regression analysis. The nomogram showed good calibration (P = .182) with an area under the curve of 0.849 when validated on the testing set. Decision curve analysis indicated the potential clinical usefulness of the model relative to treat-all and treat-none scenarios. Compared with intraoperative frozen section, the nomogram performed better in diagnosing pathological invasiveness (area under the curve, 0.815 vs 0.670; P = .00095). CONCLUSIONS We established and validated a well-calibrated nomogram to compute the probability of invasiveness of clinical stage IA lung adenocarcinoma, which may inform decisions about the extent of resection.
|
17
|
[Performance of the Combined Model Based on Both Clinicopathological and CT Texture Features in Predicting Liver Metastasis of High-risk Gastrointestinal Stromal Tumors]. ZHONGGUO YI XUE KE XUE YUAN XUE BAO. ACTA ACADEMIAE MEDICINAE SINICAE 2022; 44:53-59. [PMID: 35300765 DOI: 10.3881/j.issn.1000-503x.14051] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Objective To investigate the performance of a combined model based on both clinicopathological features and CT texture features in predicting liver metastasis of high-risk gastrointestinal stromal tumors (GISTs). Methods High-risk GISTs confirmed by pathology from January 2015 to December 2020 were analyzed retrospectively, including 153 cases from the Cancer Hospital of the University of Chinese Academy of Sciences and 51 cases from the Shaoxing Central Hospital. The cases were randomly assigned to a training set (n=142) and a test set (n=62) at a ratio of 7:3. According to the results of operation or puncture, they were classified into a liver metastasis group (76 cases) and a non-metastasis group (128 cases). ITK-SNAP was employed to delineate the volume of interest of the stromal tumors. Least absolute shrinkage and selection operator (LASSO) regression was employed to screen out effective features. Multivariate logistic regression was adopted to construct models based on clinicopathological features, texture features extracted from CT scans, and both (the combined model), respectively. Receiver operating characteristic (ROC) curves and calibration curves were established to evaluate the predictive performance of the models, and the areas under the curve (AUC) were compared by the Delong test.
Results Body mass index (BMI), tumor size, Ki-67, tumor site, abdominal mass, gastrointestinal bleeding, and CA125 level showed statistically significant differences between groups (all P<0.05). A total of 107 texture features were extracted from CT images, from which 13 and 7 texture features were selected by LASSO from CT plain scans and CT enhanced scans, respectively. The AUCs for the training set and the test set, respectively, were 0.870 and 0.855 for the model based on clinicopathological features, 0.918 and 0.836 for the model based on texture features extracted from CT plain scans, 0.920 and 0.846 for the model based on texture features extracted from CT enhanced scans, and 0.930 and 0.889 for the combined model based on both clinicopathological features and texture features extracted from CT plain scans. The Delong test demonstrated no significant difference in AUC between the models based on texture features extracted from CT plain scans and CT enhanced scans (P=0.762), whereas the AUC of the combined model differed significantly from those of the clinicopathological feature-based model and the texture feature-based model (P=0.001 and P=0.023, respectively). Conclusion Texture features extracted from CT plain scans can predict liver metastasis of high-risk GISTs, and the model combining clinicopathological features with CT texture features has the best prediction performance.
|
18
|
Dynamic PET Imaging Using Dual Texture Features. Front Comput Neurosci 2022; 15:819840. [PMID: 35069162 PMCID: PMC8782430 DOI: 10.3389/fncom.2021.819840] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2021] [Accepted: 12/10/2021] [Indexed: 11/16/2022] Open
Abstract
Purpose: This study aims to explore the impact of adding texture features on the results of dynamic positron emission tomography (PET) reconstruction. Methods: We improved a reconstruction method that incorporates dual radiomic texture features. In this method, multiple short time frames are summed to obtain composite frames, and the image reconstructed from the composite frames is used as the prior image. Texture features are extracted from the prior image using the gray level-gradient co-occurrence matrix (GGCM) and the gray-level run length matrix (GLRLM). The prior information comprises the intensity of the prior image, the inverse difference moment of the GGCM, and the long-run low gray-level emphasis of the GLRLM. Results: Computer simulations show that, compared with traditional maximum-likelihood reconstruction, the proposed method achieves a higher signal-to-noise ratio (SNR) in dynamically reconstructed PET images. Compared with similar methods, the proposed algorithm yields a better normalized mean squared error (NMSE) and contrast recovery coefficient (CRC) at the tumor in the reconstructed image. Simulation studies on clinical patient images show that the method also reconstructs high-uptake lesions more accurately. Conclusion: Adding texture features to dynamic PET reconstruction makes the reconstructed images more accurate at the tumor.
|
19
|
Novel Texture Feature Descriptors Based on Multi-Fractal Analysis and LBP for Classifying Breast Density in Mammograms. J Imaging 2021; 7:jimaging7100205. [PMID: 34677291 PMCID: PMC8540831 DOI: 10.3390/jimaging7100205] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2021] [Revised: 09/26/2021] [Accepted: 10/01/2021] [Indexed: 11/16/2022] Open
Abstract
This paper investigates the usefulness of multi-fractal analysis and local binary patterns (LBP) as texture descriptors for classifying mammogram images into breast density categories. Multi-fractal analysis is also used in the pre-processing step to segment the region of interest (ROI). We use four multi-fractal measures and the LBP method to extract texture features and compare their classification performance in experiments. In addition, a feature descriptor combining multi-fractal features and multi-resolution LBP (MLBP) features is proposed and evaluated to improve classification accuracy. An autoencoder network and principal component analysis (PCA) are used to reduce feature redundancy in the classification model. A full-field digital mammogram (FFDM) dataset, INBreast, which contains 409 mammogram images, is used in our experiment. BI-RADS density labels given by radiologists are used as the ground truth to evaluate the classification results of the proposed methods. Experimental results show that the proposed feature descriptor based on multi-fractal features and LBP results in higher classification accuracy than the individual texture feature sets.
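As a sketch of one ingredient above, the following NumPy toy implements the basic (non-circular) 3x3 LBP descriptor; the 59-bin "uniform" variant mentioned in the paper additionally merges all non-uniform codes into a single bin, which this sketch omits:

```python
import numpy as np

def lbp_histogram(img, bins=256):
    """Basic 3x3 local binary pattern: each pixel's 8 neighbors are
    thresholded at the center value and read off as an 8-bit code;
    the normalized histogram of codes is the texture descriptor."""
    img = img.astype(float)
    center = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]     # clockwise ring of neighbors
    codes = np.zeros_like(center, dtype=int)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes += (neigh >= center).astype(int) << bit
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / hist.sum()                         # normalized descriptor

# a flat patch: every neighbor equals the center, so every code is 255
flat = np.ones((16, 16))
h = lbp_histogram(flat)
```

Real pipelines compute this per ROI and feed the histograms (possibly at several resolutions, as in MLBP) to a classifier.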
|
20
|
Identification of oral precancerous and cancerous tissue by swept source optical coherence tomography. Lasers Surg Med 2021; 54:320-328. [PMID: 34342365 DOI: 10.1002/lsm.23461] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 07/11/2021] [Indexed: 12/11/2022]
Abstract
BACKGROUND AND OBJECTIVES Distinguishing cancer from precancerous lesions is critical and challenging in oral medicine. As a noninvasive method, optical coherence tomography (OCT) offers real-time, in vivo, large-depth imaging. Texture information hidden in OCT images can provide an important aid to improving diagnostic accuracy. The aim of this study is to explore a reliable and accurate OCT-based method for the screening and diagnosis of human oral diseases, especially oral cancer. MATERIALS AND METHODS Fresh ex vivo oral tissues, including normal mucosa, leukoplakia with epithelial hyperplasia (LEH), and oral squamous cell carcinoma (OSCC), were imaged intraoperatively by a homemade OCT system, and 58 texture features were extracted to create computational models of these tissues. A principal component analysis algorithm was employed to optimize the combination of texture feature vectors. An artificial neural network (ANN)-based identification method was proposed, and sensitivity/specificity were calculated statistically to evaluate classification performance. RESULTS A total of 71 sites of three types of oral tissues were measured, and 5176 OCT images of the three tissue types were used in this study. Classification based on the ANN achieved an average accuracy of 98.17%. The sensitivity/specificity for normal mucosa, LEH, and OSCC were 98.17%/98.38%, 93.81%/98.54%, and 98.11%/99.04%, respectively. CONCLUSION The high accuracies, sensitivities, and specificities demonstrate that texture-based analysis can identify oral precancerous and cancerous tissue in OCT images and has the potential to help surgeons screen and diagnose disease effectively.
|
21
|
MRI-aided kernel PET image reconstruction method based on texture features. Phys Med Biol 2021; 66. [PMID: 34192685 DOI: 10.1088/1361-6560/ac1024] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/11/2020] [Accepted: 06/30/2021] [Indexed: 11/11/2022]
Abstract
We investigate the reconstruction of low-count positron emission tomography (PET) projections, an important but challenging task. Using the texture feature extraction method of radiomics, i.e. the gray-level co-occurrence matrix (GLCM), texture features can be extracted from magnetic resonance imaging images with high spatial resolution. In this work, we propose a kernel reconstruction method combining autocorrelation texture features derived from the GLCM. The new kernel function includes the correlations of both the intensity and the texture features of the prior image. By regarding the GLCM as a discrete approximation of a probability density function, an asymptotically gray-level-invariant autocorrelation texture feature is generated, which maintains the accuracy of texture features extracted from small image regions by reducing the number of quantized image gray levels. A computer simulation shows that, for low-count PET reconstruction, the proposed method effectively reduces noise in the reconstructed image compared to the maximum-likelihood expectation maximization (MLEM) method and improves image quality and tumor-region accuracy compared to the original kernel method. A simulation study on clinical patient images also shows that the proposed method improves whole-image quality and reconstructs high-uptake lesions more accurately than the original kernel method.
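For context, the MLEM baseline that kernel methods build on can be sketched in a few lines. This toy version uses a hypothetical 2x2 system matrix rather than a real scanner model, and shows the standard multiplicative update:

```python
import numpy as np

def mlem(A, y, n_iter=200):
    """Maximum-likelihood expectation maximization for y ~ Poisson(A @ x).
    Multiplicative update: x <- x / (A^T 1) * A^T (y / (A x))."""
    x = np.ones(A.shape[1])                   # nonnegative initial image
    sens = A.T @ np.ones(A.shape[0])          # sensitivity image A^T 1
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12)  # measured / forward-projected
        x = x / sens * (A.T @ ratio)
    return x

# toy 2-pixel "scanner": noiseless data generated from a known image
A = np.array([[1.0, 0.5],
              [0.2, 1.0]])
x_true = np.array([3.0, 2.0])
x_hat = mlem(A, A @ x_true)
```

Kernel methods replace the voxel basis with prior-derived kernels (here, intensity plus GLCM texture correlations) inside the same iterative loop.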
|
22
|
Water Extraction Method Based on Multi- Texture Feature Fusion of Synthetic Aperture Radar Images. SENSORS 2021; 21:s21144945. [PMID: 34300685 PMCID: PMC8309776 DOI: 10.3390/s21144945] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/01/2021] [Revised: 07/17/2021] [Accepted: 07/18/2021] [Indexed: 11/17/2022]
Abstract
Lakes play an important role in the water ecosystem on earth and are vulnerable to climate change and human activities. Thus, detecting water changes is of great significance for ecosystem assessment, disaster warning, and water conservancy projects. In this paper, the dynamic changes of Poyang Lake are monitored by Synthetic Aperture Radar (SAR). To extract water from SAR images and monitor water changes, a water extraction algorithm composed of texture feature extraction, feature fusion, and target segmentation was proposed. First, the fractal dimension and lacunarity were calculated to construct the texture feature set of a water object. Then, an iterated function system (IFS) was constructed to fuse the texture features into composite feature vectors. Finally, lake water was segmented by the multifractal spectrum method. Experimental results showed that the proposed algorithm accurately extracted water targets from SAR images of different regions and different imaging modes. Compared with common algorithms such as fuzzy C-means (FCM), the accuracy of the proposed algorithm is significantly improved, exceeding 98%, and the algorithm can accurately segment complex coastlines despite mountain-shadow interference. In addition, a dynamic analysis of changes in the water area of the Poyang Lake Basin was carried out with local hydrological data, and the extraction results matched the hydrological data well. This study provides an accurate monitoring method for lake water under complex backgrounds.
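Of the texture features named above, the box-counting fractal dimension is the easiest to sketch. The following NumPy toy (lacunarity and the IFS fusion step are omitted) estimates it from the slope of log N(s) versus log s:

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Box-counting fractal dimension of a binary mask.
    Counts occupied s-by-s boxes at each scale s and fits
    log N(s) = -D log s + c by least squares."""
    counts = []
    for s in sizes:
        h, w = mask.shape
        # trim so the grid tiles exactly, then pool each s x s box with max
        m = mask[:h - h % s, :w - w % s]
        pooled = m.reshape(m.shape[0] // s, s, m.shape[1] // s, s).max(axis=(1, 3))
        counts.append(pooled.sum())
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

# sanity checks: a filled square has dimension 2, a straight line dimension 1
filled = np.ones((64, 64), dtype=int)
line = np.zeros((64, 64), dtype=int); line[32, :] = 1
d_area, d_line = box_counting_dimension(filled), box_counting_dimension(line)
```

Water surfaces and shadow/land textures in SAR imagery yield different (D, lacunarity) pairs, which is what makes the feature set discriminative.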
|
23
|
Combined Use of Texture Features and Morphological Classification Based on Dynamic Contrast-enhanced MR Imaging: Differentiating Benign and Malignant Breast Masses with High Negative Predictive Value. Magn Reson Med Sci 2021; 21:485-498. [PMID: 34176860 PMCID: PMC9316135 DOI: 10.2463/mrms.mp.2020-0160] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022] Open
Abstract
Purpose: We evaluated the diagnostic performance of texture features of dynamic contrast-enhanced (DCE) MRI for breast cancer diagnosis, with the discriminator optimized to maximize specificity while restricting the negative predictive value (NPV) to greater than 98%. Methods: Histologically proven benign and malignant mass lesions on DCE MRI were enrolled retrospectively. The training and testing sets consisted of 166 masses (49 benign, 117 malignant) and 50 masses (15 benign, 35 malignant), respectively. Lesions were classified via MRI review by a radiologist into 4 shape types: smooth (S-type, 34 masses in the training set and 8 in the testing set), irregular without rim enhancement (I-type, 60 in training and 14 in testing), irregular with rim enhancement (R-type, 56 in training and 22 in testing), and spicula (16 in training and 6 in testing). Spicula were immediately classified as malignant. For the remaining masses, 298 texture features were calculated using a parametric map of DCE MRI in 3D mass regions. Masses were classified as malignant or benign using two thresholds on a feature pair. On the training set, several feature pairs and their thresholds were selected and optimized for each mass shape type to maximize specificity under the restriction NPV > 98%. NPV and specificity were computed on the testing set by comparison with histopathologic results and averaged over the selected feature pairs. Results: In the training set, 27, 12, and 15 texture feature pairs were selected for S-type, I-type, and R-type masses, respectively, and thresholds were determined. In the testing set, the average NPV and specificity using the selected texture features were 99.0% and 45.2%, respectively, compared to an NPV of 85.7% and specificity of 40.0% for visually assessed MRI category-based diagnosis.
Conclusion: We therefore suggest that the NPV of the described texture-feature-based method is similar to or greater than that of MRI category-based diagnosis.
|
24
|
An Evaluation of the Effectiveness of Image-based Texture Features Extracted from Static B-mode Ultrasound Images in Distinguishing between Benign and Malignant Ovarian Masses. ULTRASONIC IMAGING 2021; 43:124-138. [PMID: 33629652 DOI: 10.1177/0161734621998091] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Significant successes in machine learning approaches to image analysis for various applications have energized strong interest in automated diagnostic support systems for medical images. An evolving in-depth understanding of the way carcinogenesis changes the texture of the cellular networks of a mass/tumor has been informing such diagnostic systems with more suitable image texture features and extraction methods. Several texture features have recently been applied to discriminating malignant and benign ovarian masses by analysing B-mode ultrasound images of the ovary, with different levels of performance. However, comparative performance evaluation of these reported features using common sets of clinically approved images is lacking. This paper presents an empirical evaluation of seven commonly used texture features (histograms, moments of histogram, local binary patterns [256-bin and 59-bin], histograms of oriented gradients, fractal dimensions, and Gabor filter), using a collection of 242 ultrasound scan images of ovarian masses with various pathological characteristics. The evaluation examines not only the effectiveness of classification schemes based on the individual texture features but also the effectiveness of various combinations of these schemes using simple majority-rule decision-level fusion. Support vector machine classifiers trained on the individual texture features without any specific pre-processing achieve accuracies between 75% and 85%, with the seven moments and the 256-bin LBP at the lower end and the Gabor filter at the upper end. Combining the classification results of the top k (k = 3, 5, 7) best-performing features further improves the overall accuracy to between 86% and 90%. These evaluation results demonstrate that each of the investigated image-based texture features provides informative support in distinguishing benign from malignant ovarian masses.
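The majority-rule decision-level fusion described above amounts to a per-sample vote across classifiers. A minimal sketch follows; the three classifier rows are labeled purely for illustration, not taken from the paper's experiments:

```python
import numpy as np

def majority_fusion(predictions):
    """Decision-level fusion by simple majority rule.
    predictions: (n_classifiers, n_samples) array of 0/1 labels
    (0 = benign, 1 = malignant). An odd number of voters avoids ties."""
    arr = np.asarray(predictions)
    votes = arr.sum(axis=0)                     # malignant votes per sample
    return (votes * 2 > arr.shape[0]).astype(int)

# three texture-feature classifiers disagreeing on 4 masses
preds = np.array([[1, 0, 1, 0],    # e.g. a Gabor-filter SVM (illustrative)
                  [1, 1, 0, 0],    # e.g. an HOG SVM (illustrative)
                  [0, 1, 1, 0]])   # e.g. an LBP SVM (illustrative)
fused = majority_fusion(preds)
```

Fusing the top-k schemes this way is what lifts the reported accuracy from the 75-85% range to 86-90%.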
|
25
|
Whole-Tumor Histogram and Texture Imaging Features on Magnetic Resonance Imaging Combined With Epstein-Barr Virus Status to Predict Disease Progression in Patients With Nasopharyngeal Carcinoma. Front Oncol 2021; 11:610804. [PMID: 33767984 PMCID: PMC7986723 DOI: 10.3389/fonc.2021.610804] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2020] [Accepted: 02/05/2021] [Indexed: 11/13/2022] Open
Abstract
Purpose: We aimed to investigate whether Epstein-Barr virus (EBV) status produces differences on MRI by examining histogram and texture imaging features, and to determine the predictive value of pretreatment MRI texture analyses incorporating EBV status for disease progression (PD) in patients with primary nasopharyngeal carcinoma (NPC). Materials and Methods: Eighty-one patients with primary T2-T4 NPC and known EBV status who underwent contrast-enhanced MRI were included in this retrospective study. Whole-tumor histogram and texture features were extracted from pretreatment T1-weighted imaging (T1WI), T2-weighted imaging (T2WI), and contrast-enhanced (CE)-T1WI images. Mann-Whitney U-tests were performed to identify differences in histogram and texture parameters between EBV DNA-positive and EBV DNA-negative NPC images. The effects of clinical variables as well as histogram and texture features were estimated using univariate and multivariate logistic regression analyses. Receiver operating characteristic (ROC) curve analysis was used to predict EBV status and PD. Finally, an integrated model with the best performance was built. Results: Of the 81 patients included, 54 had EBV DNA-positive NPC and 27 had EBV DNA-negative NPC. EBV DNA-positive patients had a higher overall stage (P = 0.016), more lymphatic metastases (P < 0.0001), and more frequent distant metastases (P = 0.026) than EBV DNA-negative patients. Tumor volume, T1WISkewness, and T2WIKurtosis showed significant differences between the two groups. The combination of the three features achieved an AUC of 0.783 [95% confidence interval (CI) 0.678-0.888] with a sensitivity and specificity of 70.4 and 74.1%, respectively, in differentiating EBV DNA-positive from EBV DNA-negative tumors. The combination of overall stage, tumor volume, T2WIKurtosis, and EBV status was the most effective model for predicting PD in patients with primary NPC.
The overall accuracy was 84.6%, with a sensitivity and specificity of 93.8 and 66.2%, respectively (AUC, 0.800; 95% CI 0.700-0.900). Conclusion: This study demonstrates that MRI-based radiological features and EBV status can serve as an aid for evaluating PD, helping to tailor treatment to the characteristics of individual patients.
|
26
|
Estimating the Growing Stem Volume of Coniferous Plantations Based on Random Forest Using an Optimized Variable Selection Method. SENSORS 2020; 20:s20247248. [PMID: 33348807 PMCID: PMC7766647 DOI: 10.3390/s20247248] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/29/2020] [Revised: 12/08/2020] [Accepted: 12/14/2020] [Indexed: 11/16/2022]
Abstract
Forest growing stem volume (GSV) reflects the richness of forest resources as well as the quality of forest ecosystems. Remote sensing technology enables robust and efficient GSV estimation, as it greatly reduces survey time and cost while facilitating periodic monitoring. Given their red edge bands and short revisit period, Sentinel-2 images were selected for GSV estimation in the Wangyedian forest farm, Inner Mongolia, China. The variable combination was shown to significantly affect the accuracy of the estimation model. After extracting spectral variables, texture features, and topographic factors, a stepwise random forest (SRF) method was proposed to select variable combinations and establish random forest regressions (RFR) for GSV estimation. The linear stepwise regression (LSR), Boruta, Variable Selection Using Random Forests (VSURF), and random forest (RF) methods were then used as references for comparison with the proposed SRF in predictor selection and GSV estimation. Combined with the observed GSV data and the Sentinel-2 images, distributions of GSV were generated by the RFR models with the variable combinations determined by LSR, RF, Boruta, VSURF, and SRF. The results show that the texture features of Sentinel-2's red edge bands can significantly improve the accuracy of GSV estimation. The SRF method can effectively select the optimal variable combination, and the SRF-based model achieved the highest estimation accuracy, with decreases in relative root mean square error of 16.4%, 14.4%, 16.3%, and 10.6% compared with the LSR-, RF-, Boruta-, and VSURF-based models, respectively. The GSV distribution generated by the SRF-based model matched the field observations well. The results of this study are expected to provide a reference for GSV estimation of coniferous plantations.
|
27
|
Texture Feature Comparison Between Step-and-Shoot and Continuous-Bed-Motion 18F-FDG PET. J Nucl Med Technol 2020; 49:58-64. [PMID: 33020230 DOI: 10.2967/jnmt.120.246157] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/04/2020] [Accepted: 08/11/2020] [Indexed: 11/16/2022] Open
Abstract
Our objective was to investigate differences in texture features between step-and-shoot (SS) and continuous-bed-motion (CBM) imaging in phantom and clinical studies. Methods: A National Electrical Manufacturers Association body phantom was filled with 18F-FDG solution at a sphere-to-background ratio of 4:1. SS and CBM acquisitions were performed with the same acquisition duration, and the data were reconstructed using 3-dimensional ordered-subset expectation maximization with time-of-flight algorithms. Texture features were extracted using the software LIFEx. A volume of interest was delineated on the 22-, 28-, and 37-mm spheres with a threshold of 42% of the maximum SUV. The voxel intensities were discretized using 2 resampling methods, namely fixed bin size and fixed bin number discretization, with the discrete resampling values set to 64 and 128. In total, 31 texture features were calculated with the gray-level co-occurrence matrix (GLCM), gray-level run length matrix, neighborhood gray-level difference matrix, and gray-level zone length matrix. The texture features of the SS and CBM images were compared for all settings using the paired t test and the coefficient of variation. In a clinical study, 27 lesions from 20 patients were examined using the same acquisition and image processing as in the phantom study. The percentage difference (%Diff) and correlation between the texture features from SS and CBM images were calculated to evaluate agreement between the 2 scanning techniques. Results: In the phantom study, 11 features exhibited no significant difference between SS and CBM images and a coefficient of variation of no more than 10%, depending on resampling conditions, whereas entropy and dissimilarity from the GLCM fulfilled these criteria for all settings. In the clinical study, entropy and dissimilarity from the GLCM exhibited a low %Diff and excellent correlation in all resampling conditions.
The %Diff of entropy was lower than that of dissimilarity. Conclusion: Differences between the texture features of SS and CBM images varied depending on the type of feature. Because GLCM entropy exhibits minimal differences between SS and CBM images irrespective of resampling conditions, it may be the optimal feature for reducing the differences between the 2 scanning techniques.
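The two discretization schemes compared above can be sketched directly. The bin width of 0.25 SUV below is a common radiomics convention, not a value taken from the paper:

```python
import numpy as np

def discretize_fixed_bin_number(vals, n_bins=64):
    """Fixed bin number: rescale the VOI intensity range onto n_bins levels,
    so bin width adapts to each lesion's min/max."""
    lo, hi = vals.min(), vals.max()
    q = np.floor((vals - lo) / (hi - lo + 1e-12) * n_bins).astype(int)
    return np.clip(q, 0, n_bins - 1)

def discretize_fixed_bin_size(vals, bin_size=0.25, lo=0.0):
    """Fixed bin size: quantize with a constant SUV width per bin,
    so levels are comparable across lesions and patients."""
    return np.floor((vals - lo) / bin_size).astype(int)

suv = np.array([0.4, 1.1, 2.3, 4.2, 7.9])          # toy VOI voxel SUVs
levels_fbn = discretize_fixed_bin_number(suv, n_bins=64)
levels_fbs = discretize_fixed_bin_size(suv, bin_size=0.25)
```

Because the two schemes map the same voxels to different gray levels, matrix-based features such as GLCM entropy can shift between them, which is why the study tests both.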
|
28
|
Diagnosis of the Severity of Fusarium Head Blight of Wheat Ears on the Basis of Image and Spectral Feature Fusion. SENSORS (BASEL, SWITZERLAND) 2020; 20:E2887. [PMID: 32443656 PMCID: PMC7287655 DOI: 10.3390/s20102887] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/16/2020] [Revised: 04/03/2020] [Accepted: 04/08/2020] [Indexed: 01/10/2023]
Abstract
Fusarium head blight (FHB), one of the most prevalent and damaging infectious diseases of wheat, affects the quality and safety of associated food. In this study, to enable early and accurate monitoring of FHB, a diagnostic model of disease severity was proposed based on fused image and spectral features. First, hyperspectral images of FHB-infected wheat ears were collected over the 400-1000 nm spectral range, and the wheat ear and lesion regions were segmented based on image features. Twelve sensitive bands were extracted using the successive projection algorithm, the gray-scale co-occurrence matrix, and the RGB color model. Four texture features were extracted from each feature-band image as texture variables, and nine color feature variables were extracted from the R, G, and B component images. Texture features with high correlation, together with the color features, were selected via correlation analysis as the final model-building parameters. Finally, the particle swarm optimization support vector machine (PSO-SVM) algorithm was used to build disease-severity diagnosis models of FHB with different combinations of characteristic variables. The experimental results showed that the PSO-SVM model based on fused spectral and color features was optimal, with training and prediction set accuracies of 95% and 92%, respectively. The method based on fused image and spectral features can accurately and effectively diagnose the severity of FHB, thereby providing a technical basis for its timely and effective control and for precise pesticide application.
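The PSO half of PSO-SVM is a generic optimizer. The sketch below implements the standard PSO velocity/position updates but, to stay self-contained, minimizes a stand-in quadratic objective rather than SVM cross-validation error; the parameter values w, c1, and c2 are conventional defaults, not the paper's settings:

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=20, n_iter=100,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimization.
    v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x);  x <- clip(x + v)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# stand-in objective with minimum at (1, -2); in PSO-SVM, f would instead
# evaluate SVM cross-validation error over (C, gamma)
obj = lambda p: (p[0] - 1) ** 2 + (p[1] + 2) ** 2
best, best_val = pso_minimize(obj, bounds=[(-5, 5), (-5, 5)])
```

In PSO-SVM, each particle encodes a candidate (C, gamma) pair and the swarm converges on the hyperparameters that minimize validation error.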
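Several entries in this list rely on gray-level co-occurrence matrix (GLCM) texture measures like the ones used here. A minimal numpy sketch; the quantization level, pixel offset, and feature set are illustrative choices, not taken from the study:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Symmetric, normalized gray-level co-occurrence matrix for one pixel
    offset. Assumes img.max() > 0."""
    q = (img.astype(float) / img.max() * (levels - 1)).astype(int)  # quantize
    h, w = q.shape
    m = np.zeros((levels, levels))
    for y in range(h - dy):
        for x in range(w - dx):
            m[q[y, x], q[y + dy, x + dx]] += 1
    m = m + m.T                       # make symmetric
    return m / m.sum()                # normalize to joint probabilities

def texture_features(p):
    """Four classic Haralick-style measures from a normalized GLCM."""
    i, j = np.indices(p.shape)
    nz = p[p > 0]
    return {
        "contrast": float(((i - j) ** 2 * p).sum()),
        "homogeneity": float((p / (1.0 + (i - j) ** 2)).sum()),
        "energy": float((p ** 2).sum()),
        "entropy": float(-(nz * np.log2(nz)).sum()),
    }
```

A flat image gives zero contrast and maximal homogeneity, while a checkerboard maximizes contrast, which is the behavior these measures are chosen for.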
|
29
|
[Multi-feature Extraction and Classification of Breast Tumor in Ultrasound Image]. ZHONGGUO YI LIAO QI XIE ZA ZHI = CHINESE JOURNAL OF MEDICAL INSTRUMENTATION 2020; 44:294-301. [PMID: 32762200] [DOI: 10.3969/j.issn.1671-7104.2020.04.003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 06/11/2023]
Abstract
OBJECTIVE Feature extraction of breast tumors is very important for detecting benign and malignant breast tumors in ultrasound images. Traditional quantitative descriptions of breast tumors have shortcomings, such as inaccuracy, so a simple and accurate feature extraction method was studied. METHODS In this paper, a new boundary feature extraction method was proposed. First, the shape histogram of ultrasound breast tumors was constructed. Second, the relevant boundary feature factors were calculated from a local point of view, including the sum, peak, and standard deviation of the maximum curvature. Based on the boundary, shape, and texture features, a linear support vector machine classifier for benign and malignant breast tumor recognition was constructed. RESULTS The accuracy of boundary features in classifying benign and malignant breast tumors was 82.69%; that of shape features was 73.08%; and that of texture features was 63.46%. The classification accuracy of the three fused features was 86.54%. CONCLUSIONS The classification accuracy of boundary features was higher than that of texture and shape features. The multi-feature classification method achieved the highest accuracy because it describes benign and malignant tumors from different angles. The research results have practical value.
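The boundary factors above are built on the local curvature of the tumor contour. A small numpy sketch of discrete curvature on a closed contour, with sum/peak/standard-deviation summaries as one plausible reading of the listed factors (the exact definitions are not given in the abstract):

```python
import numpy as np

def _dperiodic(a):
    """Centered finite difference on a closed (periodic) sequence."""
    return (np.roll(a, -1) - np.roll(a, 1)) / 2.0

def contour_curvature(points):
    """Signed curvature k = (x'y'' - y'x'') / (x'^2 + y'^2)^(3/2) along a
    closed contour given as an (N, 2) array of (x, y) points."""
    x, y = points[:, 0], points[:, 1]
    dx, dy = _dperiodic(x), _dperiodic(y)
    ddx, ddy = _dperiodic(dx), _dperiodic(dy)
    denom = (dx ** 2 + dy ** 2) ** 1.5
    return (dx * ddy - dy * ddx) / np.where(denom == 0, 1.0, denom)

def boundary_features(points):
    """Sum, peak, and standard deviation of |curvature| along the boundary."""
    k = np.abs(contour_curvature(points))
    return {"curv_sum": float(k.sum()),
            "curv_peak": float(k.max()),
            "curv_std": float(k.std())}
```

On a sampled circle of radius R the estimate recovers the constant curvature 1/R, which is a quick sanity check for implementations like this.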
|
30
|
Bleeding detection in wireless capsule endoscopy videos - Color versus texture features. J Appl Clin Med Phys 2019; 20:141-154. [PMID: 31251460] [PMCID: PMC6698770] [DOI: 10.1002/acm2.12662] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Received: 12/25/2017] [Revised: 10/15/2018] [Accepted: 05/26/2019] [Indexed: 12/22/2022]
Abstract
Wireless capsule endoscopy (WCE) is an effective technology for diagnosing various lesions and abnormalities of the gastrointestinal (GI) tract. Because of the long time required to pass through the GI tract, the resulting WCE data stream contains a large number of frames, making it a tedious job for clinical experts to visually check each and every frame of a complete patient's video footage. In this paper, an automated technique for bleeding detection based on color and texture features is proposed. The approach combines color information, which is an essential feature for the initial detection of frames with bleeding, with texture, which plays an important role in extracting more information from the lesions captured in the frames and allows the system to distinguish finely between borderline cases. The detection algorithm uses machine-learning-based classification methods; it can efficiently distinguish between bleeding and non-bleeding frames and perform pixel-level segmentation of bleeding areas in WCE frames. The experimental studies demonstrate the performance of the proposed bleeding detection method in terms of detection accuracy, where it is at least as good as state-of-the-art approaches. In this research, we conducted a broad comparison of state-of-the-art features and classification methods, allowing an efficient and flexible WCE video processing system to be built.
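The color cue for initial bleeding detection can be illustrated with a simple red-dominance test per pixel; the channel ratio and thresholds below are invented for illustration and are not the paper's:

```python
import numpy as np

def bleeding_mask(rgb, ratio_thresh=1.5, red_min=80):
    """Flag pixels whose red channel strongly dominates green+blue -- a crude
    color cue for candidate bleeding pixels in an (H, W, 3) RGB frame.
    Both thresholds are illustrative, not tuned values from the paper."""
    r = rgb[..., 0].astype(float)
    gb = rgb[..., 1].astype(float) + rgb[..., 2].astype(float)
    return (r > red_min) & (r / np.maximum(gb, 1.0) > ratio_thresh)
```

In a full pipeline a mask like this would only propose candidates; texture features and a trained classifier then separate true bleeding from reddish mucosa, which is the borderline-case role the abstract describes.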
|
31
|
Multi-feature fusion method for medical image retrieval using wavelet and bag-of-features. Comput Assist Surg (Abingdon) 2019; 24:72-80. [PMID: 30689441] [DOI: 10.1080/24699322.2018.1560087] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 10/27/2022]
Abstract
Color, texture, and shape are the features commonly used by retrieval systems. However, many medical images carry little color information, so discriminative texture and shape features must be extracted to obtain satisfactory retrieval results. To increase the credibility of the retrieval process, multiple features can be combined for medical image retrieval; however, more features require more processing time, which decreases retrieval speed. In this paper, wavelet decomposition is adopted to generate images at different resolutions. Bag-of-features, texture, and LBP features are extracted from three different-level wavelet images. Finally, the similarity measure function is obtained by fusing these three types of features. Experimental results show that the proposed multi-feature fusion method achieves higher retrieval accuracy with acceptable retrieval time.
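A basic 3x3 local binary pattern (LBP), one of the three feature types fused above, can be sketched as follows; this is the plain 8-neighbor, 256-bin variant, and the paper's exact LBP configuration is not specified in the abstract:

```python
import numpy as np

def lbp_histogram(img):
    """Plain 3x3 LBP: threshold the 8 neighbors at the center value, read them
    as an 8-bit code, and return the normalized 256-bin code histogram."""
    img = img.astype(np.int32)
    h, w = img.shape
    c = img[1:-1, 1:-1]                               # interior centers
    codes = np.zeros(c.shape, dtype=np.int32)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]      # clockwise from top-left
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbor >= c).astype(np.int32) << bit
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()
```

The histogram is what gets fused with the bag-of-features and texture descriptors: it is a fixed-length vector regardless of image size, so similarity can be computed directly between histograms.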
|
32
|
Label-free dynamic segmentation and morphological analysis of subcellular optical scatterers. JOURNAL OF BIOMEDICAL OPTICS 2018; 23:1-11. [PMID: 30251486] [DOI: 10.1117/1.jbo.23.9.096004] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Received: 04/04/2018] [Accepted: 08/13/2018] [Indexed: 06/08/2023]
Abstract
Imaging without fluorescent protein labels or dyes presents significant advantages for studying living cells without confounding staining artifacts and with minimal sample preparation. Here, we combine label-free optical scatter imaging with digital segmentation and processing to create dynamic subcellular masks, which highlight significantly scattering objects within the cells' cytoplasms. The technique is tested by quantifying organelle morphology and redistribution during cell injury induced by calcium overload. Objects within the subcellular mask are first analyzed individually. We show that the objects' aspect ratio and degree of orientation ("orientedness") decrease in response to calcium overload, while they remain unchanged in untreated control cells. These changes are concurrent with mitochondrial fission and rounding observed by fluorescence, and are consistent with our previously published data demonstrating scattering changes associated with mitochondrial rounding during calcium injury. In addition, we show that the magnitude of the textural features associated with the spatial distribution of the masked objects' orientedness values changes by more than 30% in the calcium-treated cells, compared with no change or changes of less than 10% in untreated controls, reflecting dynamic changes in the overall spatial distribution and arrangement of subcellular scatterers in response to injury. Taken together, our results suggest that our method successfully provides label-free morphological signatures associated with cellular injury. Thus, we propose that dynamically segmenting and analyzing the morphology and organizational patterns of subcellular scatterers as a function of time can be used to quantify changes in a given cellular condition or state.
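The aspect ratio and "orientedness" measures above can be derived from the second-order moments of each masked object; a sketch under that assumption (the paper's precise definitions may differ):

```python
import numpy as np

def shape_metrics(mask):
    """Aspect ratio and an eccentricity-style 'orientedness' of a binary
    object, from the eigenvalues of the pixel-coordinate covariance
    (second-order central moments)."""
    ys, xs = np.nonzero(mask)
    cov = np.cov(np.vstack([xs, ys]).astype(float))
    evals = np.sort(np.linalg.eigvalsh(cov))[::-1]        # major, minor axis
    aspect = float(np.sqrt(evals[0] / max(evals[1], 1e-12)))
    # 0 for a disc/square, approaching 1 for a line-like object
    orientedness = float(np.sqrt(max(0.0, 1.0 - evals[1] / max(evals[0], 1e-12))))
    return aspect, orientedness
```

Both metrics drop toward their isotropic values (1 and 0) as an elongated object rounds up, which is the direction of change the study reports under calcium overload.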
|
33
|
Radiomics strategy for glioma grading using texture features from multiparametric MRI. J Magn Reson Imaging 2018; 48:1518-1528. [PMID: 29573085] [DOI: 10.1002/jmri.26010] [Citation(s) in RCA: 120] [Impact Index Per Article: 20.0] [Received: 01/26/2018] [Accepted: 03/01/2018] [Indexed: 12/14/2022]
Abstract
BACKGROUND Accurate glioma grading plays an important role in the clinical management of patients and is also the basis of current molecular stratification. PURPOSE/HYPOTHESIS To verify the superiority of radiomics features extracted from multiparametric MRI for glioma grading and to evaluate the grading potential of different MRI sequences and parametric maps. STUDY TYPE Retrospective; radiomics. POPULATION A total of 153 patients, including 42, 33, and 78 patients with Grade II, III, and IV gliomas, respectively. FIELD STRENGTH/SEQUENCE 3.0T MRI/T1-weighted images before and after contrast enhancement, T2-weighted, multi-b-value diffusion-weighted, and 3D arterial spin labeling images. ASSESSMENT After multiparametric MRI preprocessing, high-throughput features were derived from patients' volumes of interest (VOIs). Support vector machine-based recursive feature elimination was adopted to find the optimal features for the low-grade glioma (LGG) vs. high-grade glioma (HGG) and Grade III vs. IV classification tasks. Support vector machine (SVM) classifiers were then established using the optimal features. Accuracy and area under the curve (AUC) were used to assess grading efficiency. STATISTICAL TESTS Student's t-tests or chi-square tests were applied to the clinical characteristics to determine whether significant intergroup differences existed. RESULTS Patients' ages differed significantly between the LGG and HGG groups (P < 0.01). For each patient, 420 texture and 90 histogram parameters were derived from 10 VOIs of multiparametric MRI. SVM models were established using 30 and 28 optimal features for classifying LGGs from HGGs and Grade III from IV, respectively. The accuracies/AUCs were 96.8%/0.987 for classifying LGGs from HGGs and 98.1%/0.992 for classifying Grade III from IV, more promising than using histogram parameters or a single MRI sequence.
DATA CONCLUSION Texture features were more effective for noninvasively grading gliomas than histogram parameters. The combined application of multiparametric MRI provided a higher grading efficiency. The proposed radiomic strategy could facilitate clinical decision-making for patients with varied glioma grades. LEVEL OF EVIDENCE 3 Technical Efficacy: Stage 2 J. Magn. Reson. Imaging 2018;48:1518-1528.
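Support vector machine-based recursive feature elimination, the selection step used in this study, is available in scikit-learn; a sketch on synthetic stand-in data (sample/feature counts and the RFE step size are illustrative):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

# Synthetic stand-in for a radiomics matrix (rows: patients, columns: features).
X, y = make_classification(n_samples=120, n_features=50, n_informative=8,
                           random_state=0)

# Linear SVM + recursive feature elimination: repeatedly drop the `step`
# features with the smallest |coef_| until the requested subset size remains.
selector = RFE(estimator=SVC(kernel="linear"), n_features_to_select=10, step=5)
selector.fit(X, y)
optimal_idx = np.flatnonzero(selector.support_)   # indices of the kept features
```

The reduced matrix `selector.transform(X)` is what would then train the final SVM classifier; in practice the subset size is tuned by cross-validation rather than fixed in advance.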
|
34
|
Fast Detection of Striped Stem-Borer (Chilo suppressalis Walker) Infested Rice Seedling Based on Visible/Near-Infrared Hyperspectral Imaging System. SENSORS 2017; 17:s17112470. [PMID: 29077040] [PMCID: PMC5713110] [DOI: 10.3390/s17112470] [Citation(s) in RCA: 27] [Impact Index Per Article: 3.9] [Received: 09/20/2017] [Revised: 10/22/2017] [Accepted: 10/24/2017] [Indexed: 11/16/2022]
Abstract
Striped stem-borer (SSB) infestation is one of the most serious sources of damage to rice growth, so a rapid and non-destructive method for early SSB detection is essential for protecting rice. In this study, hyperspectral imaging combined with chemometrics was used to detect early SSB infestation in rice and identify the degree of infestation (DI). Visible/near-infrared hyperspectral images (spectral range 380 nm to 1030 nm) were taken of healthy rice plants and of plants infested by SSB for 2, 4, 6, 8, and 10 days. A total of 17 characteristic wavelengths were selected from the spectral data extracted from the hyperspectral images by the successive projection algorithm (SPA). Principal component analysis (PCA) was applied to the hyperspectral images, and 16 textural features based on the gray-level co-occurrence matrix (GLCM) were extracted from the first two principal component (PC) images. A back-propagation neural network (BPNN) was used to establish infestation-degree evaluation models based on the full spectra, the characteristic wavelengths, the textural features, and their fusion, respectively. The BPNN model based on a fusion of characteristic wavelengths and textural features achieved the best performance, with classification accuracies over 95% on both the calibration and prediction sets. The accuracy for each infestation degree was satisfactory, although the accuracy for rice samples infested for 2 days was slightly lower. Overall, this study demonstrates the feasibility of hyperspectral imaging for detecting early SSB infestation and identifying degrees of infestation.
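The successive projection algorithm (SPA) used here for wavelength selection greedily picks the band least collinear with those already chosen; a compact numpy sketch (in practice the starting band and subset size are chosen by validation):

```python
import numpy as np

def spa(X, k, start=0):
    """Successive projections algorithm: greedily pick k columns (wavelengths)
    of X, each time choosing the column with the largest norm after projecting
    out the span of the columns already selected."""
    Xp = X.astype(float).copy()
    selected = [start]
    for _ in range(k - 1):
        v = Xp[:, selected[-1]].copy()
        v /= np.linalg.norm(v)
        Xp = Xp - np.outer(v, v @ Xp)      # project every column off v
        norms = np.linalg.norm(Xp, axis=0)
        norms[selected] = -1.0             # never re-pick a chosen column
        selected.append(int(norms.argmax()))
    return selected
```

Because each pick is made orthogonal to the previous ones, a band that is nearly a scaled copy of an already-selected band has almost nothing left after projection and is skipped in favor of genuinely new information.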
|
35
|
A Methodology for Texture Feature-based Quality Assessment in Nucleus Segmentation of Histopathology Image. J Pathol Inform 2017; 8:38. [PMID: 28966837] [PMCID: PMC5609357] [DOI: 10.4103/jpi.jpi_43_17] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Received: 05/19/2017] [Accepted: 07/11/2017] [Indexed: 12/31/2022]
Abstract
CONTEXT Image segmentation pipelines are often sensitive to algorithm input parameters. Algorithm parameters optimized for one set of images do not necessarily produce good-quality segmentation results for other images. Even within an image, some regions may not be well segmented due to a number of factors, including multiple pieces of tissue with distinct characteristics, differences in tissue staining, normal versus tumor regions, and tumor heterogeneity. Evaluation of the quality of segmentation results is an important step in image analysis. Manual quality assessment is very labor intensive with large image datasets, because a whole-slide tissue image may contain hundreds of thousands of nuclei; semi-automatic mechanisms are needed to help researchers and application developers efficiently detect image regions with bad segmentations. AIMS Our goal is to develop and evaluate a machine-learning-based, semi-automated workflow for assessing the quality of nucleus segmentation results in a large set of whole-slide tissue images. METHODS We propose a quality-control methodology in which machine-learning algorithms are trained with image intensity and texture features to produce a classification model. This model is applied to image patches in a whole-slide tissue image to predict the quality of nucleus segmentation in each patch. The training step involves the selection and labeling of regions by a pathologist in a set of images to create the training dataset. The image regions are partitioned into patches, a set of intensity and texture features is computed for each patch, and a classifier is trained with the features and the pathologist's labels, generating a classification model. The classification step applies this model to unlabeled test images: each test image is partitioned into patches, and the model predicts each patch's label.
RESULTS The proposed methodology was evaluated by assessing the segmentation quality of a segmentation method applied to images from two cancer types in The Cancer Genome Atlas: WHO Grade II lower-grade glioma (LGG) and lung adenocarcinoma (LUAD). The results show that our method performs well in predicting patches with good-quality segmentations, achieving F1 scores of 84.7% for LGG and 75.43% for LUAD. CONCLUSIONS As image scanning technologies advance, large volumes of whole-slide tissue images will become available for research and clinical use. Efficient approaches for assessing the quality and robustness of output from computerized image analysis workflows will become increasingly critical to extracting useful quantitative information from tissue images. Our work demonstrates the feasibility of machine-learning-based, semi-automated techniques to assist researchers and algorithm developers in this process.
|
36
|
Radiomics assessment of bladder cancer grade using texture features from diffusion-weighted imaging. J Magn Reson Imaging 2017; 46:1281-1288. [PMID: 28199039] [DOI: 10.1002/jmri.25669] [Citation(s) in RCA: 100] [Impact Index Per Article: 14.3] [Received: 12/21/2016] [Accepted: 01/30/2017] [Indexed: 12/12/2022]
Abstract
PURPOSE To 1) describe textural features from diffusion-weighted images (DWI) and apparent diffusion coefficient (ADC) maps that can distinguish low-grade bladder cancer from high-grade, and 2) propose a radiomics-based strategy for cancer grading using texture features. MATERIALS AND METHODS In all, 61 patients with bladder cancer (29 in high- and 32 in low-grade groups) were enrolled in this retrospective study. Histogram- and gray-level co-occurrence matrix (GLCM)-based radiomics features were extracted from cancerous volumes of interest (VOIs) on DWI and corresponding ADC maps of each patient acquired from 3.0T magnetic resonance imaging (MRI). A Mann-Whitney U-test was applied to select features with significant differences between low- and high-grade groups (P < 0.05). Then support vector machine with recursive feature elimination (SVM-RFE) and classification strategy was adopted to find an optimal feature subset and then to establish a classification model for grading. RESULTS A total 102 features were derived from each VOI and among them, 47 candidate features were selected, which showed significant intergroup differences (P < 0.05). By the SVM-RFE method, an optimal feature subset including 22 features was further selected from candidate features. The SVM classifier using the optimal feature subset achieved the best performance in bladder cancer grading, with an area under the receiver operating characteristic curve, accuracy, sensitivity, and specificity of 0.861, 82.9%, 78.4%, and 87.1%, respectively. CONCLUSION Textural features from DWI and ADC maps can reflect the difference between low- and high-grade bladder cancer, especially those GLCM features from ADC maps. The proposed radiomics strategy using these features, combined with the SVM classifier, may better facilitate image-based bladder cancer grading preoperatively. LEVEL OF EVIDENCE 3 Technical Efficacy: Stage 2 J. Magn. Reson. Imaging 2017;46:1281-1288.
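The Mann-Whitney U screening step used above can be sketched with scipy; the synthetic feature matrices below stand in for the DWI/ADC radiomics features, and no multiple-comparison correction is applied, matching the candidate-selection role described in the abstract:

```python
import numpy as np
from scipy.stats import mannwhitneyu

def screen_features(X_low, X_high, alpha=0.05):
    """Return the indices of feature columns whose low- vs high-grade
    distributions differ under a two-sided Mann-Whitney U test (p < alpha)."""
    keep = []
    for j in range(X_low.shape[1]):
        p = mannwhitneyu(X_low[:, j], X_high[:, j],
                         alternative="two-sided").pvalue
        if p < alpha:
            keep.append(j)
    return keep
```

The surviving candidate features would then go into SVM-RFE for the final subset, as in the study's two-stage pipeline.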
|
37
|
Automatic identification of parathyroid in optical coherence tomography images. Lasers Surg Med 2017; 49:305-311. [PMID: 28129441] [DOI: 10.1002/lsm.22622] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.1] [Accepted: 11/23/2016] [Indexed: 01/15/2023]
Abstract
BACKGROUND AND OBJECTIVE The identification and preservation of the parathyroid is a major problem in thyroid surgery. To address this problem, optical coherence tomography (OCT) was employed as a real-time, non-invasive, high-resolution imaging technique. This study demonstrates an effective and fast method to automatically distinguish parathyroid tissue from thyroid, lymph node, and adipose tissue in ex vivo OCT images. METHODS OCT images were obtained from parathyroid, thyroid, lymph node, and adipose tissue. A classification and identification system based on texture feature analysis and a back-propagation artificial neural network (BP-ANN) was established to classify and automatically identify the four tissue types. RESULTS A total of 248 OCT images were taken from 16 patients undergoing thyroidectomy. The classification accuracies for parathyroid, thyroid, lymph node, and adipose tissue were 99.21%, 98.43%, 97.65%, and 98.43%, respectively. CONCLUSION The proposed method can distinguish among parathyroid, thyroid, lymph node, and adipose tissue automatically and effectively. Compared with human identification, it has better accuracy and reliability, and its performance in identifying parathyroid among the other tissues is satisfactory. Lasers Surg. Med. 49:305-311, 2017. © 2017 Wiley Periodicals, Inc.
|
38
|
Texture Feature Extraction and Analysis for Polyp Differentiation via Computed Tomography Colonography. IEEE TRANSACTIONS ON MEDICAL IMAGING 2016; 35:1522-31. [PMID: 26800530] [PMCID: PMC4891231] [DOI: 10.1109/tmi.2016.2518958] [Citation(s) in RCA: 29] [Impact Index Per Article: 3.6] [Indexed: 05/05/2023]
Abstract
Image textures in computed tomography colonography (CTC) have great potential for differentiating non-neoplastic from neoplastic polyps and can thus advance the current detection-only CTC paradigm to a new level with diagnostic capability. However, image textures are frequently compromised, particularly in low-dose CT imaging. Furthermore, texture feature extraction may vary with the polyp's spatial orientation, yielding inconsistent results. To address these issues, this study proposes an adaptive approach to extract and analyze texture features for polyp differentiation. First, derivative (e.g., gradient and curvature) operations are performed on the CT intensity image to amplify the textures with adequate noise control. Then the Haralick co-occurrence matrix (CM) is used to calculate texture measures along each of the 13 directions (defined by the first- and second-order image voxel neighbors) through the polyp volume in the intensity, gradient, and curvature images. Instead of taking the mean and range of each CM measure over the 13 directions as the usual Haralick texture features, the Karhunen-Loeve transform is performed to map the 13 directions into an orthogonal coordinate system, so that the resulting texture features are less dependent on polyp orientation. These simple ideas for amplifying textures and stabilizing spatial variation had a significant impact on the differentiation task in experiments using 384 polyp datasets, of which 52 were non-neoplastic polyps and the rest neoplastic. Measured by the area under the receiver operating characteristic curve, the approach achieved a differentiation capability of 0.8016, indicating the diagnostic feasibility of CTC.
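The Karhunen-Loeve step above is a PCA rotation of the 13-direction measure vectors, producing decorrelated components ordered by variance; a numpy sketch:

```python
import numpy as np

def kl_transform(M):
    """Karhunen-Loeve (PCA) transform of an (n_polyps x 13) matrix holding one
    CM measure computed along 13 directions: rotate onto the eigenvectors of
    the covariance so components are decorrelated and ordered by variance."""
    Mc = M - M.mean(axis=0)                    # center each direction
    cov = np.cov(Mc.T)
    evals, evecs = np.linalg.eigh(cov)
    order = np.argsort(evals)[::-1]            # descending variance
    return Mc @ evecs[:, order], evals[order]
```

Because the rotation depends only on the data's covariance, a consistent reorientation of the polyp reshuffles the 13 raw directions but leaves the principal components largely unchanged, which is the orientation-stability argument made above.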
|
39
|
Spectrum and Image Texture Features Analysis for Early Blight Disease Detection on Eggplant Leaves. SENSORS 2016; 16:s16050676. [PMID: 27187387] [PMCID: PMC4883367] [DOI: 10.3390/s16050676] [Citation(s) in RCA: 31] [Impact Index Per Article: 3.9] [Received: 02/20/2016] [Revised: 04/30/2016] [Accepted: 05/05/2016] [Indexed: 11/16/2022]
Abstract
This study investigated both spectrum and texture features for detecting early blight disease on eggplant leaves. Hyperspectral images for healthy and diseased samples were acquired covering the wavelengths from 380 to 1023 nm. Four gray images were identified according to the effective wavelengths (408, 535, 624 and 703 nm). Hyperspectral images were then converted into RGB, HSV and HLS images. Finally, eight texture features (mean, variance, homogeneity, contrast, dissimilarity, entropy, second moment and correlation) based on gray level co-occurrence matrix (GLCM) were extracted from gray images, RGB, HSV and HLS images, respectively. The dependent variables for healthy and diseased samples were set as 0 and 1. K-Nearest Neighbor (KNN) and AdaBoost classification models were established for detecting healthy and infected samples. All models obtained good results with the classification rates (CRs) over 88.46% in the testing sets. The results demonstrated that spectrum and texture features were effective for early blight disease detection on eggplant leaves.
|
40
|
Advances in feature selection methods for hyperspectral image processing in food industry applications: a review. Crit Rev Food Sci Nutr 2016; 55:1368-82. [PMID: 24689555] [DOI: 10.1080/10408398.2013.871692] [Citation(s) in RCA: 50] [Impact Index Per Article: 6.3] [Indexed: 10/25/2022]
Abstract
There is increased interest in applications of hyperspectral imaging (HSI) for assessing food quality, safety, and authenticity. HSI provides an abundance of spatial and spectral information about foods by combining spectroscopy and imaging, resulting in hundreds of contiguous wavebands for each spatial position of a food sample, a situation known as the curse of dimensionality. Feature selection algorithms are therefore desirable for decreasing the computational burden and increasing prediction accuracy, which is especially relevant in the development of online applications. Recently, a variety of feature selection algorithms have been proposed, which can be categorized into three groups by search strategy: complete search, heuristic search, and random search. This review introduces the fundamentals of each algorithm, illustrates its applications to hyperspectral data analysis in the food field, and discusses the advantages and disadvantages of these algorithms. It is hoped that this review will provide a guideline for feature selection and data processing in the future development of hyperspectral imaging techniques for foods.
|
41
|
Quantitative analysis on collagen of dermatofibrosarcoma protuberans skin by second harmonic generation microscopy. SCANNING 2015; 37:1-5. [PMID: 25369371] [DOI: 10.1002/sca.21172] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Received: 08/01/2014] [Revised: 09/24/2014] [Accepted: 10/06/2014] [Indexed: 06/04/2023]
Abstract
Dermatofibrosarcoma protuberans (DFSP) is a skin cancer often mistaken for benign tumors, and incomplete resection of DFSP results in tumor recurrence. Quantitative characterization of collagen alteration in the skin tumor is therefore essential for developing a diagnostic technique. In this study, second harmonic generation (SHG) microscopy was performed to obtain images of human DFSP skin and normal skin. Structure and texture analysis methods were then applied to determine the differences in texture characteristics between the two skin types, establishing a link between collagen alteration and the tumor. The results suggest that combining SHG microscopy with texture analysis is a feasible and effective way to characterize skin tumors such as DFSP.
|
42
|
HyMaP: A hybrid magnitude-phase approach to unsupervised segmentation of tumor areas in breast cancer histology images. J Pathol Inform 2013; 4:S1. [PMID: 23766931] [PMCID: PMC3678741] [DOI: 10.4103/2153-3539.109802] [Citation(s) in RCA: 20] [Impact Index Per Article: 1.8] [Received: 01/21/2013] [Accepted: 01/21/2013] [Indexed: 11/04/2022]
Abstract
BACKGROUND Segmentation of areas containing tumor cells in standard H&E histopathology images of the breast (and several other tissues) is a key task for computer-assisted assessment and grading of histopathology slides. Good segmentation of tumor regions is also vital for automated scoring of immunohistochemically stained slides, to restrict the scoring or analysis to areas containing tumor cells only and avoid potentially misleading results from analysis of stromal regions. Furthermore, detection of mitotic cells is critical for calculating key measures such as the mitotic index, a key criterion for grading several types of cancers including breast cancer. We show that tumor segmentation allows detection and quantification of mitotic cells from standard H&E slides with a high degree of accuracy without the need for special stains, in turn making the whole process more cost-effective. METHOD Based on tissue morphology, breast histology image contents can be divided into four regions: Tumor, Hypocellular Stroma (HypoCS), Hypercellular Stroma (HyperCS), and tissue fat (Background). Background is removed during preprocessing on the basis of color thresholding, while HypoCS and HyperCS regions are segmented by computing features from the magnitude and phase spectra in the frequency domain, respectively, and performing unsupervised segmentation on these features. RESULTS All images in the database were hand-segmented by two expert pathologists. The algorithms considered here are evaluated on three pixel-wise accuracy measures: precision, recall, and F1-score. The segmentation results obtained by combining HypoCS and HyperCS yield high F1-scores of 0.86 and 0.89 with respect to the ground truth. CONCLUSIONS In this paper, we show that segmentation of breast histopathology images into hypocellular stroma and hypercellular stroma can be achieved using magnitude and phase spectra in the frequency domain.
The segmentation leads to demarcation of tumor margins leading to improved accuracy of mitotic cell detection.
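The magnitude/phase split at the heart of HyMaP can be illustrated with a 2-D FFT; the summary statistics below are illustrative, not the paper's actual features:

```python
import numpy as np

def spectral_features(patch):
    """Separate a patch's 2-D FFT into magnitude and phase spectra and
    summarize each; HyMaP derives its HypoCS/HyperCS features from these two
    spectra, but the particular summaries below are illustrative only.
    Assumes the patch is not all zeros."""
    F = np.fft.fftshift(np.fft.fft2(patch.astype(float)))
    mag, phase = np.abs(F), np.angle(F)
    p = mag.ravel() / mag.sum()        # magnitude spectrum as a distribution
    p = p[p > 0]
    return {
        "mag_energy": float((mag ** 2).mean()),
        "mag_entropy": float(-(p * np.log2(p)).sum()),
        "phase_std": float(phase.std()),
    }
```

A uniform patch concentrates all magnitude at the DC bin (near-zero spectral entropy), while a textured patch spreads energy across frequencies; that contrast is what makes frequency-domain features usable for separating stroma types.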
|
43
|
A computer-aided diagnosis system for quantitative scoring of extent of lung fibrosis in scleroderma patients. Clin Exp Rheumatol 2010; 28:S26-S35. [PMID: 21050542] [PMCID: PMC3177564] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 04/16/2010] [Accepted: 09/22/2010] [Indexed: 05/30/2023]
Abstract
OBJECTIVES To evaluate an improved quantitative lung fibrosis score based on a computer-aided diagnosis (CAD) system that classifies CT pixels with the visual semi-quantitative pulmonary fibrosis score in patients with scleroderma-related interstitial lung disease (SSc-ILD). METHODS High-resolution, thin-section CT images were obtained and analysed on 129 subjects with SSc-ILD (36 men, 93 women; mean age 48.8±12.1 years) who underwent baseline CT in the prone position at full inspiration. The CAD system segmented each lung of each patient into 3 zones. A quantitative lung fibrosis (QLF) score was established via 5 steps: 1) images were denoised; 2) images were grid sampled; 3) the characteristics of grid intensities were converted into texture features; 4) texture features classified pixels as fibrotic or non-fibrotic, with fibrosis defined by a reticular pattern with architectural distortion; and 5) fibrotic pixels were reported as percentages. Quantitative scores were obtained from 709 zones with complete data and then compared with ordinal scores from two independent expert radiologists. ROC curve analyses were used to measure performance. RESULTS When the two radiologists agreed that fibrosis affected more than 1% or 25% of a zone or zones, the areas under the ROC curves for QLF score were 0.86 and 0.96, respectively. CONCLUSIONS Our technique exhibited good accuracy for detecting fibrosis at a threshold of both 1% (i.e. presence or absence of pulmonary fibrosis) and a clinically meaningful threshold of 25% extent of fibrosis in patients with SSc-ILD.
|
44
|
Abstract
Since 1983, a long-term clinical trial of esophageal carcinoma chemoprevention has been conducted in a high-risk area of China. From this study, 25 patients with severe esophageal dysplasia who received no therapy were selected for analysis. After five years of follow-up, 14 cases had progressed to esophageal carcinoma, while the other 11 remained stable. Three Papanicolaou smears were used for each case: one from the esophageal cytological examination at the beginning, and two from re-examinations three and five years later, respectively. About 100 visually normal intermediate cells were randomly collected per slide by high-resolution image analysis, and more than 100 features (morphologic, densitometric, and textural) were extracted. Classifications were made by stepwise linear discriminant analysis at the single-cell level as well as at the specimen level, using up to ten features. In all three comparisons of patients with progression and with regression (at the time of diagnosis, three years after diagnosis, and five years later), the correct cell classification rates were about 70%. The subsequent specimen classifications by means of the a posteriori probability (APOP) distribution of the cells in each case led to 80% correct classification. All selected features reflected the chromatin structure of the nuclei. The results demonstrate that the chromatin structure of esophageal epithelial cells in patients with severe dysplasia differs between cases with and without progression. These findings suggest that image analysis could be applied in clinical trials to identify dysplasia patients at higher risk of progression, in order to reduce the number of patients requiring therapy.
|