1. Baking releases microplastics from polyethylene terephthalate bakeware as detected by optical photothermal infrared and quantum cascade laser infrared. Science of the Total Environment 2024; 924:171408. PMID: 38432360. DOI: 10.1016/j.scitotenv.2024.171408.
Abstract
The use of plastic bakeware is a potential source of human exposure to microplastics (MPs). However, characterizing MPs remains a challenge. This study employs optical photothermal infrared (O-PTIR) and quantum cascade laser infrared (QCL-IR) techniques to characterize polyethylene terephthalate (PET) MPs shed from PET bakeware during the baking process. The bakeware, filled with ultrapure water, underwent baking cycles at 220 °C for 20 min, 60 min, and three consecutive cycles of 60 min each. Subsequently, particles present in the ultrapure water were collected on an Al2O3 filter. O-PTIR and QCL-IR were used to characterize the PET MPs collected on the filter. Analysis revealed that QCL-IR spectra exhibited broader absorption peaks than O-PTIR spectra. Notably, MP spectra obtained from both techniques displayed common absorption peaks around 1119, 1623, 1341 and 1725 cm-1. The dominant size of PET MPs detected by O-PTIR and QCL-IR was 1-3 μm and 5-20 μm, respectively. The quantity of PET MPs identified with O-PTIR was 18 times greater than that with QCL-IR, which was attributed to differences in spatial resolution, sampling methods for spectra collection, and data analysis between the two methods. Importantly, findings from both techniques highlighted a notably large quantity of MPs released from PET bakeware, particularly after three 60-min baking cycles, suggesting a substantial increase in the potential ingestion of MPs in scenarios involving extended baking durations. These findings can guide consumers in minimizing microplastic intake by limiting baking times when using PET bakeware. Additionally, the study yields valuable insights into the application of O-PTIR and QCL-IR for MP detection, potentially inspiring advancements in MP detection methodologies through cutting-edge technologies.
2. Machine learning aided single cell image analysis improves understanding of morphometric heterogeneity of human mesenchymal stem cells. Methods 2024; 225:62-73. PMID: 38490594. DOI: 10.1016/j.ymeth.2024.03.005.
Abstract
The multipotent stem cells of our body have been widely harnessed in biotherapeutics. However, because they are derived from multiple anatomical sources and different tissues, human mesenchymal stem cells (hMSCs) are a heterogeneous population showing ambiguity in their in vitro behavior. Intra-clonal population heterogeneity has also been identified, and pre-clinical mechanistic studies suggest that these factors cumulatively diminish the therapeutic effects of hMSC transplantation. Although various biomarkers identify these specific stem cell populations, recent artificial intelligence-based methods have capitalized on the cellular morphologies of hMSCs, opening a new approach to understanding their attributes. A robust and rapid platform is required to accommodate and eliminate the heterogeneity observed in the cell population and to standardize the quality of hMSC therapeutics globally. Here, we report our primary findings of morphological heterogeneity observed within and across two sources of hMSCs, namely stem cells from human exfoliated deciduous teeth (SHEDs) and human Wharton's jelly mesenchymal stem cells (hWJ MSCs), using real-time single-cell images generated during immunophenotyping by imaging flow cytometry (IFC). We used ImageJ software to identify and compare the two types of hMSCs using statistically significant, biologically relevant morphometric descriptors. To expand on these insights, we applied deep learning methods and report the development of a convolutional neural network-based image classifier that streamlines the entire procedure, using transfer learning for binary classification and achieving an accuracy of 97.54%. We also critically discuss the challenges, compare solutions, and outline future directions for machine learning in hMSC classification for biotherapeutics.
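A minimal sketch of a transfer-learning binary image classifier along the lines described above, written with TensorFlow/Keras. The directory layout, the ResNet50 backbone, the input size and the hyperparameters are illustrative assumptions, not details taken from the paper.

```python
# Transfer-learning sketch for binary hMSC classification (illustrative only;
# folder names, backbone choice and settings are assumptions).
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (224, 224)

train_ds = tf.keras.utils.image_dataset_from_directory(
    "cells/train", image_size=IMG_SIZE, batch_size=32, label_mode="binary")
val_ds = tf.keras.utils.image_dataset_from_directory(
    "cells/val", image_size=IMG_SIZE, batch_size=32, label_mode="binary")

# Frozen ImageNet backbone plus a small trainable head (transfer learning).
base = tf.keras.applications.ResNet50(include_top=False, weights="imagenet",
                                      input_shape=IMG_SIZE + (3,), pooling="avg")
base.trainable = False

model = models.Sequential([
    layers.Rescaling(1.0 / 255),          # simple scaling stand-in for preprocessing
    base,
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),  # e.g. SHED vs hWJ-MSC
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```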
3. Extraction of Lilium davidii var. unicolor Planting Information Based on Deep Learning and Multi-Source Data. Sensors (Basel) 2024; 24:1543. PMID: 38475077. DOI: 10.3390/s24051543.
Abstract
Accurate extraction of crop acreage is an important element of digital agriculture. This study uses Sentinel-2A, Sentinel-1, and DEM data as sources to construct a multidimensional feature dataset encompassing spectral features, vegetation indices, texture features, terrain features, and radar features. The Relief-F algorithm is applied for feature selection to identify the optimal feature dataset, and a combination of deep learning and random forest (RF) classification is used to identify lilies in Qilihe District and Yuzhong County of Lanzhou City, obtain their planting structure, and analyze their spatial distribution characteristics in Gansu Province. The findings indicate that terrain features contribute significantly to ground object classification, with classification accuracy peaking when the feature dataset contains 36 features. The precision of the deep learning classification method exceeds that of RF, with an overall classification accuracy and kappa coefficient of 95.9% and 0.934, respectively. The Lanzhou lily planting area is 137.24 km2, and its distribution is mainly concentrated and contiguous. The study's findings can serve as a solid scientific foundation for adjusting and optimizing Lanzhou City's lily planting structure and provide a data basis for local lily yield forecasting, development, and application.
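The Relief-F selection step can be prototyped compactly. Below is a minimal Relief-style weight estimator in NumPy (one nearest hit and one nearest miss per sampled instance); it is a simplified sketch, not the authors' implementation, and only the top-k cut-off of 36 features is taken from the abstract.

```python
# Minimal Relief-style feature weighting (single nearest hit / miss per sample).
import numpy as np

def relief_weights(X, y, n_samples=200, random_state=0):
    rng = np.random.default_rng(random_state)
    X = (X - X.min(axis=0)) / (np.ptp(X, axis=0) + 1e-12)   # scale features to [0, 1]
    w = np.zeros(X.shape[1])
    idx = rng.choice(len(X), size=min(n_samples, len(X)), replace=False)
    for i in idx:
        same = (y == y[i])
        same[i] = False                                       # exclude the sample itself
        diff = (y != y[i])
        hit = X[same][np.argmin(np.abs(X[same] - X[i]).sum(axis=1))]
        miss = X[diff][np.argmin(np.abs(X[diff] - X[i]).sum(axis=1))]
        w += np.abs(X[i] - miss) - np.abs(X[i] - hit)         # reward class separation
    return w / len(idx)

# Keep the top-k features (the abstract reports 36 as optimal), then classify, e.g.
# top = np.argsort(relief_weights(X, y))[::-1][:36]
```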
4. Development of an automated optimal distance feature-based decision system for diagnosing knee osteoarthritis using segmented X-ray images. Heliyon 2023; 9:e21703. PMID: 38027947. PMCID: PMC10665756. DOI: 10.1016/j.heliyon.2023.e21703.
Abstract
Knee osteoarthritis (KOA) is a leading cause of disability and physical inactivity. It is a degenerative joint disease that affects the cartilage, which cushions the bones and protects them from rubbing against each other during motion. If not treated early, it may lead to knee replacement. In this regard, early diagnosis of KOA is necessary for better treatment. Nevertheless, manual KOA detection is time-consuming and error-prone for large data hubs. In contrast, an automated detection system aids the specialist in diagnosing KOA grades accurately and quickly. The main objective of this study is therefore to create an automated decision system that can analyze KOA and classify the severity grades using features extracted from segmented X-ray images. In this study, two different datasets were collected from the Mendeley and Kaggle databases and combined to generate a large data hub containing five classes: Grade 0 (Healthy), Grade 1 (Doubtful), Grade 2 (Minimal), Grade 3 (Moderate), and Grade 4 (Severe). Several image processing techniques were employed to segment the region of interest (ROI). These included Gradient-weighted Class Activation Mapping (Grad-CAM) to detect the ROI, cropping the ROI portion, applying histogram equalization (HE) to improve contrast, brightness, and image quality, and noise reduction (using Otsu thresholding, inverting the image, and morphological closing). In addition, a focus filtering method was used to eliminate unwanted images. Then, six feature sets (morphological, GLCM, statistical, texture, LBP, and proposed features) were generated from the segmented ROIs. After evaluating the statistical significance of the features and the selection methods, the optimal feature set (six prominent distance features) was selected, and five machine learning (ML) models were employed. Additionally, a decision-making strategy based on the six optimal features is proposed. The XGBoost model outperformed the other models with 99.46% accuracy using the six distance features, and the proposed decision-making strategy was validated on 30 test images.
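A condensed sketch of the ROI clean-up chain named above (histogram equalization, Otsu thresholding, inversion, morphological closing) using OpenCV. The file name, kernel size and the final masking step are assumptions; Grad-CAM cropping, focus filtering and the distance features are omitted.

```python
# Sketch of the knee X-ray ROI preprocessing chain described above (OpenCV).
import cv2

roi = cv2.imread("knee_roi.png", cv2.IMREAD_GRAYSCALE)      # cropped Grad-CAM ROI (placeholder file)

equalized = cv2.equalizeHist(roi)                            # contrast / brightness enhancement

# Otsu threshold, then invert so the structures of interest become foreground.
_, mask = cv2.threshold(equalized, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
mask = cv2.bitwise_not(mask)

# Morphological closing fills small holes and suppresses speckle noise.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
clean = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

# Keep only the cleaned region of the equalized image for feature extraction.
segmented = cv2.bitwise_and(equalized, equalized, mask=clean)
cv2.imwrite("knee_roi_segmented.png", segmented)
```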
5. A comparative study of Machine Learning-based classification of Tomato fungal diseases: Application of GLCM texture features. Heliyon 2023; 9:e21697. PMID: 38027996. PMCID: PMC10656238. DOI: 10.1016/j.heliyon.2023.e21697.
Abstract
Globally, agriculture remains an important source of food and economic development. Due to various plant diseases, farmers continue to suffer huge yield losses in both quality and quantity. In this study, we explored the potential of using Artificial Neural Networks, K-Nearest Neighbors, Random Forest, and Support Vector Machine to classify tomato fungal leaf diseases: Alternaria, Curvularia, Helminthosporium, and Lasiodiplodi based on Gray Level Co-occurrence Matrix texture features. Small differences between symptoms of these diseases make it difficult to use the naked eye to obtain better results in detecting and distinguishing these diseases. The Artificial Neural Network outperformed other classifiers with an overall accuracy of 94% and average scores of 93.6% for Precision, 93.8% for Recall, and 93.8% for F1-score. Generally, the models confused samples originally belonging to Helminthosporium with Curvularia. The extracted texture features show great potential to classify the different tomato leaf fungal diseases. The results of this study show that texture characteristics of the Gray Level Co-occurrence Matrix play a critical role in the establishment of tomato leaf disease classification systems and can facilitate the implementation of preventive measures by farmers, resulting in enhanced yield quality and quantity.
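A short sketch, assuming a scikit-image/scikit-learn environment, of how GLCM texture descriptors can be extracted from leaf patches and fed to classifiers like those compared above. The distances, angles and model settings are illustrative, not the study's exact configuration.

```python
# GLCM texture features from a grayscale leaf patch, plus a small classifier comparison.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def glcm_features(gray_patch):
    """gray_patch: 2-D uint8 array of a leaf lesion region."""
    glcm = graycomatrix(gray_patch, distances=[1, 2],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "correlation", "energy", "homogeneity", "dissimilarity"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# Assumed usage with a list of patches and disease labels:
# X = np.array([glcm_features(p) for p in patches]); y = labels
# for clf in (MLPClassifier(max_iter=2000), RandomForestClassifier(300)):
#     print(type(clf).__name__, cross_val_score(clf, X, y, cv=5).mean())
```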
6. Assessment of Soybean Lodging Using UAV Imagery and Machine Learning. Plants (Basel) 2023; 12:2893. PMID: 37631105. PMCID: PMC10458648. DOI: 10.3390/plants12162893.
Abstract
Plant lodging is one of the most important phenotypes for soybean breeding programs. Soybean lodging is conventionally evaluated visually by breeders, which is time-consuming and subject to human error. This study aimed to investigate the potential of unmanned aerial vehicle (UAV)-based imagery and machine learning in assessing the lodging conditions of soybean breeding lines. A UAV imaging system equipped with an RGB (red-green-blue) camera was used to collect imagery data of 1266 four-row plots in a soybean breeding field at the reproductive stage. Soybean lodging scores were visually assessed by experienced breeders, and the scores were grouped into four classes, i.e., non-lodging, moderate lodging, high lodging, and severe lodging. UAV images were stitched to build orthomosaics, and soybean plots were segmented using a grid method. Twelve image features were extracted from the collected images to assess the lodging scores of each breeding line. Four models, i.e., extreme gradient boosting (XGBoost), random forest (RF), K-nearest neighbor (KNN) and artificial neural network (ANN), were evaluated to classify soybean lodging classes. Five data preprocessing methods were used to treat the imbalanced dataset and improve classification accuracy. Results indicate that the preprocessing method SMOTE-ENN consistently performs well for all four classifiers (XGBoost, RF, KNN, and ANN), achieving the highest overall accuracy (OA), the lowest misclassification rate, and higher F1-scores and Kappa coefficients. This suggests that Synthetic Minority Oversampling with Edited Nearest Neighbor (SMOTE-ENN) may be a good preprocessing method for this classification task on imbalanced datasets. Furthermore, an overall accuracy of 96% was obtained using the SMOTE-ENN dataset and the ANN classifier. The study indicated that an imagery-based classification model could be implemented in a breeding program to differentiate the soybean lodging phenotype and classify lodging scores effectively.
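A hedged sketch of the SMOTE-ENN rebalancing step followed by an ANN classifier, using imbalanced-learn and scikit-learn. The synthetic data below only stands in for the 12 plot-level image features and four imbalanced lodging classes; class proportions and network size are assumptions.

```python
# Rebalance lodging classes with SMOTE-ENN, then train an ANN classifier.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score, f1_score
from imblearn.combine import SMOTEENN

# Stand-in data: 12 features, 4 imbalanced classes (not the study's real plots).
X, y = make_classification(n_samples=1266, n_features=12, n_informative=8,
                           n_classes=4, weights=[0.55, 0.25, 0.15, 0.05],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, test_size=0.2,
                                          random_state=42)

# Oversample minority classes (SMOTE), then edit noisy/boundary samples (ENN).
X_bal, y_bal = SMOTEENN(random_state=42).fit_resample(X_tr, y_tr)

ann = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=42)
ann.fit(X_bal, y_bal)
pred = ann.predict(X_te)
print("OA:", accuracy_score(y_te, pred),
      "| kappa:", cohen_kappa_score(y_te, pred),
      "| macro-F1:", f1_score(y_te, pred, average="macro"))
```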
7. Identification of Solid and Liquid Materials Using Acoustic Signals and Frequency-Graph Features. Entropy (Basel) 2023; 25:1170. PMID: 37628200. PMCID: PMC10453644. DOI: 10.3390/e25081170.
Abstract
Material identification is playing an increasingly important role in various sectors such as industry, petrochemicals, and mining, as well as in our daily lives. In recent years, material identification has been utilized for security checks, waste sorting, etc. However, current methods for identifying materials require direct contact with the target and specialized equipment that can be costly, bulky, and not easily portable. Past proposals for addressing this limitation relied on non-contact material identification methods, such as Wi-Fi-based and radar-based methods, which can identify materials with high accuracy without physical contact; however, they are not easily integrated into portable devices. This paper introduces a novel non-contact material identification method based on acoustic signals. Different from previous work, our design leverages the built-in microphone and speaker of smartphones as the transceiver to identify target materials. The fundamental idea of our design is that acoustic signals, when propagated through different materials, reach the receiver via multiple paths, producing distinct multipath profiles. These profiles can serve as fingerprints for material identification. We captured these profiles by transmitting acoustic signals and calculating channel impulse response (CIR) measurements, and then extracted image features from the time-frequency domain feature graphs, including histogram of oriented gradients (HOG) and gray-level co-occurrence matrix (GLCM) features. Furthermore, we adopted the error-correcting output code (ECOC) learning method combined with majority voting to identify target materials. We built a prototype using three Android smartphones. The results from three different solid and liquid materials in varied multipath environments reveal that our design can achieve average identification accuracies of 90% and 97%.
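A sketch of the feature-extraction and ECOC classification idea described above: HOG and GLCM descriptors computed from a CIR time-frequency graph, then a multi-class error-correcting output code wrapper around binary SVMs. Array shapes, window sizes and the base estimator are assumptions.

```python
# HOG + GLCM descriptors from a CIR time-frequency graph, classified via ECOC.
import numpy as np
from skimage.feature import hog, graycomatrix, graycoprops
from sklearn.multiclass import OutputCodeClassifier
from sklearn.svm import SVC

def graph_features(tf_graph):
    """tf_graph: 2-D float array in [0, 1] (time-frequency magnitude of the CIR)."""
    h = hog(tf_graph, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    g8 = (tf_graph * 255).astype(np.uint8)
    glcm = graycomatrix(g8, distances=[1], angles=[0, np.pi / 2], levels=256,
                        symmetric=True, normed=True)
    texture = np.hstack([graycoprops(glcm, p).ravel()
                         for p in ("contrast", "energy", "homogeneity")])
    return np.hstack([h, texture])

# ECOC turns the multi-class material problem into several binary SVMs whose
# outputs are combined by code-word (majority-style) matching.
ecoc = OutputCodeClassifier(SVC(kernel="rbf", gamma="scale"), code_size=2.0, random_state=0)
# Assumed usage: X = np.array([graph_features(g) for g in graphs]); ecoc.fit(X, labels)
```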
8. Gray-Level Co-occurrence Matrix Analysis of Nuclear Textural Patterns in Laryngeal Squamous Cell Carcinoma: Focus on Artificial Intelligence Methods. Microscopy and Microanalysis 2023; 29:1220-1227. PMID: 37749686. DOI: 10.1093/micmic/ozad042.
Abstract
Gray-level co-occurrence matrix (GLCM) and discrete wavelet transform (DWT) analyses are two contemporary computational methods that can identify discrete changes in cell and tissue textural features. Previous research has indicated that these methods may be applicable in pathology for the identification and classification of various types of cancer. In this study, we present findings that squamous epithelial cells in laryngeal carcinoma, which appear morphologically intact during conventional pathohistological evaluation, have distinct nuclear GLCM and DWT features. The average values of nuclear GLCM indicators of these cells, such as angular second moment, inverse difference moment, and textural contrast, differ substantially from those in noncancerous tissue. In this work, we also propose machine learning models based on random forests and support vector machines that can be successfully trained to separate the cells using GLCM and DWT quantifiers as input data. We show that, based on a limited cell sample, these models have relatively good classification accuracy and discriminatory power, which makes them suitable candidates for the future development of AI-based sensors potentially applicable in laryngeal carcinoma diagnostic protocols.
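A brief sketch of the kind of nuclear texture quantifiers discussed above — GLCM angular second moment, inverse difference moment and contrast, plus 2-D DWT sub-band energies — computed with scikit-image and PyWavelets. The wavelet family, decomposition depth and input format are assumptions.

```python
# Nuclear GLCM indicators plus DWT sub-band energies for a segmented nucleus image.
import numpy as np
import pywt
from skimage.feature import graycomatrix, graycoprops

def nuclear_texture(nucleus):
    """nucleus: 2-D uint8 grayscale crop of a single segmented nucleus (assumed input)."""
    glcm = graycomatrix(nucleus, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    asm = graycoprops(glcm, "ASM").mean()            # angular second moment
    idm = graycoprops(glcm, "homogeneity").mean()    # inverse difference moment
    contrast = graycoprops(glcm, "contrast").mean()  # textural contrast

    # Two-level discrete wavelet transform: energy of each detail sub-band.
    coeffs = pywt.wavedec2(nucleus.astype(float), "db1", level=2)
    dwt_energy = [float(np.mean(np.square(band)))
                  for level in coeffs[1:] for band in level]
    return [asm, idm, contrast] + dwt_energy

# Vectors like these can then feed random forest / SVM classifiers, e.g.
# sklearn.ensemble.RandomForestClassifier, as described above.
```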
9. Conditional random field-recurrent neural network segmentation with optimized deep learning for brain tumour classification using magnetic resonance imaging. The Imaging Science Journal 2023. DOI: 10.1080/13682199.2023.2178611.
10. Prognostic value of 18F-FDG PET/CT-based radiomics combining dosiomics and dose volume histogram for head and neck cancer. EJNMMI Res 2023; 13:14. PMID: 36779997. PMCID: PMC9925656. DOI: 10.1186/s13550-023-00959-6.
Abstract
OBJECTIVES: By comparing the prognostic performance of 18F-FDG PET/CT-based radiomics combined with dose features [including dosiomics features and dose volume histogram (DVH) features] against that of conventional radiomics in head and neck cancer (HNC), multidimensional prognostic models were constructed to investigate overall survival (OS) in HNC. MATERIALS AND METHODS: A total of 220 cases from four centres in the Cancer Imaging Archive public dataset were used in this study. For each case, 2260 radiomics features, 1116 dosiomics features and 8 DVH features were extracted and grouped into seven different models: PET, CT, Dose, PET+CT, PET+Dose, CT+Dose and PET+CT+Dose. Features were selected by univariate Cox and Spearman correlation coefficients, and the selected features were entered into the least absolute shrinkage and selection operator (LASSO)-Cox model. A nomogram was constructed to visually analyse the prognostic impact of the incorporated dose features. The C-index and Kaplan-Meier curves (log-rank analysis) were used to evaluate and compare these models. RESULTS: The cases from the four centres were divided into three different training and validation sets according to the hospitals. The PET+CT+Dose model had C-indexes of 0.873 (95% CI 0.812-0.934), 0.759 (95% CI 0.663-0.855) and 0.835 (95% CI 0.745-0.925) in the three validation sets, respectively, outperforming the other models overall. The PET+CT+Dose model also classified patients into high- and low-risk groups well in all three experimental settings (p < 0.05). CONCLUSION: The multidimensional model combining radiomics, dosiomics and DVH features showed high prognostic performance for predicting OS in patients with HNC.
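A compact sketch of the survival-modelling step: a Spearman-based redundancy filter followed by an L1-penalised (LASSO) Cox model and its C-index, using lifelines. The DataFrame layout (feature columns plus 'time' and 'event') and the penalty value are assumptions, and the univariate-Cox prefilter is omitted.

```python
# Spearman redundancy filter + LASSO-Cox model with C-index (illustrative sketch).
import pandas as pd
from scipy.stats import spearmanr
from lifelines import CoxPHFitter

def drop_correlated(features, threshold=0.9):
    """Drop one of each pair of features with |Spearman rho| above threshold."""
    rho = pd.DataFrame(spearmanr(features).correlation,
                       index=features.columns, columns=features.columns).abs()
    keep = []
    for col in rho.columns:
        if all(rho.loc[col, k] < threshold for k in keep):
            keep.append(col)
    return features[keep]

def fit_lasso_cox(df, duration_col="time", event_col="event", penalty=0.1):
    feats = drop_correlated(df.drop(columns=[duration_col, event_col]))
    data = pd.concat([feats, df[[duration_col, event_col]]], axis=1)
    cph = CoxPHFitter(penalizer=penalty, l1_ratio=1.0)      # pure L1 -> LASSO-Cox
    cph.fit(data, duration_col=duration_col, event_col=event_col)
    return cph, cph.concordance_index_                      # model and training C-index

# Assumed usage: model, c_index = fit_lasso_cox(radiomics_dose_dataframe)
```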
11. Artificially intelligent differential diagnosis of enlarged lymph nodes with random vector functional link network plus. Med Eng Phys 2023; 111:103939. PMID: 36792248. DOI: 10.1016/j.medengphy.2022.103939.
Abstract
Differential diagnosis of enlarged lymph nodes (ELNs) is essential for the treatment of affected patients. Though multi-modal ultrasound including B-mode, Doppler ultrasound, elastography and contrast-enhanced ultrasound (CEUS) can enhance diagnostic performance for ELNs, the scenario of having only single- or dual-modal data is often encountered. In this study, an artificially intelligent diagnosis model based on learning using privileged information (LUPI) was proposed to aid the differential diagnosis of ELNs when only single- or dual-modal images are available. In our model, B-mode alone, or combined with another modality, was used as the standard information (SI), and the remaining modalities were used as the privileged information (PI). The model was constructed through the combination of the SI and PI in the training stage. By learning from the training samples, a random vector functional link network with privileged information (RVFL+) was obtained, which was then used to classify testing samples containing only the SI. Results showed that the accuracy, precision and Youden's index of the RVFL+ model, using B-mode with elastography as the SI and CEUS as the PI, reached 78.4%, 92.4% and 54.9%, respectively, an increase of 14.0%, 8.4% and 24.5% compared with the model using B-mode as the SI without the PI. The LUPI-based method can therefore improve diagnostic performance for ELNs.
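For orientation, a minimal plain RVFL classifier in NumPy: a fixed random hidden expansion, direct input-output links and a closed-form ridge readout. This is the standard RVFL, not the authors' RVFL+ code; the privileged-information extension additionally uses the PI modality as a correcting term during training only.

```python
# Minimal random vector functional link (RVFL) classifier sketch.
import numpy as np

class RVFL:
    def __init__(self, n_hidden=100, reg=1e-2, seed=0):
        self.n_hidden, self.reg, self.rng = n_hidden, reg, np.random.default_rng(seed)

    def _features(self, X):
        H = np.tanh(X @ self.W + self.b)                    # random nonlinear expansion
        return np.hstack([X, H, np.ones((len(X), 1))])      # direct links + hidden + bias

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        self.classes_ = np.unique(y)
        T = (y[:, None] == self.classes_[None, :]).astype(float)   # one-hot targets
        D = self._features(X)
        # Closed-form ridge-regression solution for the output weights.
        self.beta = np.linalg.solve(D.T @ D + self.reg * np.eye(D.shape[1]), D.T @ T)
        return self

    def predict(self, X):
        return self.classes_[np.argmax(self._features(X) @ self.beta, axis=1)]
```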
12. Machine learning based gray-level co-occurrence matrix early warning system enables accurate detection of colorectal cancer pelvic bone metastases on MRI. Front Oncol 2023; 13:1121594. PMID: 37035167. PMCID: PMC10073745. DOI: 10.3389/fonc.2023.1121594.
Abstract
Objective: Colorectal cancer patients with pelvic bone metastasis face high mortality, so timely diagnosis and intervention to improve prognosis are particularly important. This study therefore aimed to build a bone metastasis prediction model based on a gray-level co-occurrence matrix (GLCM)-based score to guide clinical diagnosis and treatment. Methods: We retrospectively included 614 colorectal cancer patients who underwent pelvic multiparametric magnetic resonance imaging (MRI) from January 2015 to January 2022 in the gastrointestinal surgery department of Gezhouba Central Hospital of Sinopharm. The GLCM-based score and machine learning algorithms, namely an artificial neural network model (ANNM), random forest model (RFM), decision tree model (DTM) and support vector machine model (SVMM), were used to build prediction models of bone metastasis in colorectal cancer patients. The effectiveness evaluation of each model mainly included decision curve analysis (DCA), the area under the receiver operating characteristic curve (AUROC) and the clinical influence curve (CIC). Results: We captured fourteen categories of GLCM-based radiomics data for variable screening of the bone metastasis prediction models. Among them, Haralick_90, IV_0, IG_90, Haralick_30, CSV, Entropy and Haralick_45 were significantly related to the risk of bone metastasis and were listed as candidate variables for the machine learning prediction models. The prediction performance of the RFM combined with Haralick_90, Haralick_all, IV_0, IG_90, IG_0, Haralick_30, CSV, Entropy and Haralick_45 in the training set and internal validation set was AUC 0.926 (95% CI 0.873-0.979) and AUC 0.919 (95% CI 0.868-0.970), respectively. The prediction performance of the other four prediction models ranged from AUC 0.716 (95% CI 0.663-0.769) to AUC 0.912 (95% CI 0.859-0.965). Conclusion: The automatic segmentation model based on diffusion-weighted imaging (DWI) and deep learning can accurately segment the pelvic bone structure, and the subsequently established radiomics model, especially the RFM algorithm, can effectively detect bone metastases within the pelvis, providing a new method for automatically evaluating the pelvic bone turnover of colorectal cancer patients.
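A small sketch of the decision curve analysis (DCA) used above to compare the models: the net benefit of acting on predicted probabilities across a range of threshold probabilities. Inputs are assumed arrays of true labels and predicted probabilities from any of the fitted models.

```python
# Decision curve analysis: net benefit across threshold probabilities.
import numpy as np

def net_benefit(y_true, y_prob, thresholds):
    y_true = np.asarray(y_true)
    y_prob = np.asarray(y_prob)
    n = len(y_true)
    nb = []
    for pt in thresholds:
        pred = y_prob >= pt
        tp = np.sum(pred & (y_true == 1))
        fp = np.sum(pred & (y_true == 0))
        nb.append(tp / n - fp / n * pt / (1 - pt))   # standard net-benefit formula
    return np.array(nb)

thresholds = np.linspace(0.01, 0.60, 60)
# Assumed usage:
# nb_model = net_benefit(y_test, model_probabilities, thresholds)
# nb_all   = net_benefit(y_test, np.ones(len(y_test)), thresholds)   # "treat all" reference
# The model is useful over the thresholds where nb_model exceeds both nb_all and zero.
```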
13. On the Quantification of Visual Texture Complexity. J Imaging 2022; 8:248. PMID: 36135413. PMCID: PMC9505268. DOI: 10.3390/jimaging8090248.
Abstract
Complexity is one of the major attributes of the visual perception of texture. However, very little is known about how humans visually interpret texture complexity. A psychophysical experiment was conducted to visually quantify the seven texture attributes of a series of textile fabrics: complexity, color variation, randomness, strongness, regularity, repetitiveness, and homogeneity. It was found that the observers could discriminate between the textures with low and high complexity using some high-level visual cues such as randomness, color variation, strongness, etc. The results of principal component analysis (PCA) on the visual scores of the above attributes suggest that complexity and homogeneity could be essentially the underlying attributes of the same visual texture dimension, with complexity at the negative extreme and homogeneity at the positive extreme of this dimension. We chose to call this dimension visual texture complexity. Several texture measures including the first-order image statistics, co-occurrence matrix, local binary pattern, and Gabor features were computed for images of the textiles in sRGB, and four luminance-chrominance color spaces (i.e., HSV, YCbCr, Ohta’s I1I2I3, and CIELAB). The relationships between the visually quantified texture complexity of the textiles and the corresponding texture measures of the images were investigated. Analyzing the relationships showed that simple standard deviation of the image luminance channel had a strong correlation with the corresponding visual ratings of texture complexity in all five color spaces. Standard deviation of the energy of the image after convolving with an appropriate Gabor filter and entropy of the co-occurrence matrix, both computed for the image luminance channel, also showed high correlations with the visual data. In this comparison, sRGB, YCbCr, and HSV always outperformed the I1I2I3 and CIELAB color spaces. The highest correlations between the visual data and the corresponding image texture features in the luminance-chrominance color spaces were always obtained for the luminance channel of the images, and one of the two chrominance channels always performed better than the other. This result indicates that the arrangement of the image texture elements that impacts the observer’s perception of visual texture complexity cannot be represented properly by the chrominance channels. This must be carefully considered when choosing an image channel to quantify the visual texture complexity. Additionally, the good performance of the luminance channel in the five studied color spaces proves that variations in the luminance of the texture, or as one could call the luminance contrast, plays a crucial role in creating visual texture complexity.
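Two of the image measures reported above to track visual texture complexity — the standard deviation of the luminance channel and the standard deviation of Gabor-filter energy — can be computed as sketched below with scikit-image. The file name, Gabor frequency and orientation are placeholders, not the study's settings.

```python
# Luminance standard deviation and Gabor-energy standard deviation of a textile image.
import numpy as np
from skimage import io, color
from skimage.filters import gabor

rgb = io.imread("textile.png")[..., :3] / 255.0      # placeholder file name
luminance = color.rgb2lab(rgb)[..., 0]               # CIELAB L* as the luminance channel

lum_std = luminance.std()                            # simple first-order statistic

# Gabor response at one assumed frequency/orientation; energy = magnitude of response.
real, imag = gabor(luminance, frequency=0.2, theta=0)
gabor_energy_std = np.sqrt(real ** 2 + imag ** 2).std()

print(f"luminance std = {lum_std:.3f}, Gabor energy std = {gabor_energy_std:.3f}")
```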
14. Marine Oil Spill Detection with X-Band Shipborne Radar Using GLCM, SVM and FCM. Remote Sensing 2022. DOI: 10.3390/rs14153715.
Abstract
Marine oil spills have a significant adverse impact on the economy, ecology, and human health. Rapid and effective oil spill monitoring is extraordinarily important for controlling marine pollution. A marine oil spill detection scheme based on X-band shipborne radar images and machine learning is proposed here. First, the original shipborne radar image, collected during the Dalian 7.16 oil spill accident, was transformed into a Cartesian coordinate system and noise-suppressed. Then, texture features and an SVM were used to identify the effective monitoring region of ocean waves. Third, FCM was applied to classify the oil films and ocean waves. Finally, the oil spill detection result was transformed back to a polar coordinate system. Compared with an improved active contour model and another SVM-based oil spill detection method, our method performed more intelligently. It can provide data support for marine oil spill emergency response.
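A compact fuzzy c-means (FCM) step of the kind used above to separate oil films from wave clutter, written directly in NumPy rather than with a dedicated library. The two-cluster setting and the texture-feature input are assumptions.

```python
# Minimal fuzzy c-means clustering for separating two classes (e.g. oil film vs waves).
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, n_iter=100, seed=0):
    """X: (n_samples, n_features) array; returns cluster centers and fuzzy memberships."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)                       # random initial memberships
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]      # weighted cluster centers
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))
        U /= U.sum(axis=1, keepdims=True)                   # membership update
    return centers, U

# Assumed usage: features = (n_pixels, n_texture_features) array from the wave region;
# labels = np.argmax(fuzzy_cmeans(features)[1], axis=1) separates oil films from waves.
```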
15. Dynamic Monitoring of Desertification in Ningdong Based on Landsat Images and Machine Learning. Sustainability 2022. DOI: 10.3390/su14127470.
Abstract
The ecological stability of mining areas in Northwest China has been threatened by desertification for a long time. Remote sensing information combined with machine learning algorithms can effectively monitor and evaluate desertification. However, due to the fact that the geological environment of a mining area is easily affected by factors such as resource exploitation, it is challenging to accurately grasp the development process of desertification in a mining area. In order to better play the role of remote sensing technology and machine learning algorithms in the monitoring of desertification in mining areas, based on Landsat images, we used a variety of machine learning algorithms and feature combinations to monitor desertification in Ningdong coal base. The performance of each monitoring model was evaluated by various performance indexes. Then, the optimal monitoring model was selected to extract the long-time desertification information of the base, and the spatial-temporal characteristics of desertification were discussed in many aspects. Finally, the factors driving desertification change were quantitatively studied. The results showed that random forest with the best feature combination had better recognition performance than other monitoring models. Its accuracy was 87.2%, kappa was 0.825, Macro-F1 was 0.851, and AUC was 0.961. In 2003–2017, desertification land in Ningdong increased first and then slowly improved. In 2021, the desertification situation deteriorated. The driving force analysis showed that human economic activities such as coal mining have become the dominant factor in controlling the change of desert in Ningdong coal base, and the change of rainfall plays an auxiliary role. The study comprehensively analyzed the spatial-temporal characteristics and driving factors of desertification in Ningdong coal base. It can provide a scientific basis for combating desertification and for the construction of green mines.
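The performance indexes reported above (overall accuracy, kappa, macro-F1, AUC) can be computed in a few lines with scikit-learn; the sketch below assumes predicted labels and class probabilities from the desertification classifier.

```python
# Evaluation metrics for a multi-class desertification classifier.
from sklearn.metrics import accuracy_score, cohen_kappa_score, f1_score, roc_auc_score

def report(y_true, y_pred, y_score):
    """y_score: (n_samples, n_classes) predicted class probabilities."""
    print("accuracy :", accuracy_score(y_true, y_pred))
    print("kappa    :", cohen_kappa_score(y_true, y_pred))
    print("macro-F1 :", f1_score(y_true, y_pred, average="macro"))
    print("AUC (ovr):", roc_auc_score(y_true, y_score, multi_class="ovr"))
```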
16. Geographic Scene Understanding of High-Spatial-Resolution Remote Sensing Images: Methodological Trends and Current Challenges. Applied Sciences (Basel) 2022. DOI: 10.3390/app12126000.
Abstract
As one of the primary means of Earth observation, high-spatial-resolution remote sensing images can describe the geometry, texture and structure of objects in detail. It has become a research hotspot to recognize the semantic information of objects, analyze the semantic relationship between objects and then understand the more abstract geographic scenes in high-spatial-resolution remote sensing images. Based on the basic connotation of geographic scene understanding of high-spatial-resolution remote sensing images, this paper firstly summarizes the keystones in geographic scene understanding, such as various semantic hierarchies, complex spatial structures and limited labeled samples. Then, the achievements in the processing strategies and techniques of geographic scene understanding in recent years are reviewed from three layers: visual semantics, object semantics and concept semantics. On this basis, the new challenges in the research of geographic scene understanding of high-spatial-resolution remote sensing images are analyzed, and future research prospects have been proposed.
17. A Convolutional Neural Networks-Based Approach for Texture Directionality Detection. Sensors (Basel) 2022; 22:562. PMID: 35062522. PMCID: PMC8778371. DOI: 10.3390/s22020562.
Abstract
The perceived texture directionality is an important, not fully explored image characteristic. In many applications texture directionality detection is of fundamental importance. Several approaches have been proposed, such as the fast Fourier-based method. We recently proposed a method based on the interpolated grey-level co-occurrence matrix (iGLCM), robust to image blur and noise but slower than the Fourier-based method. Here we test the applicability of convolutional neural networks (CNNs) to texture directionality detection. To obtain the large amount of training data required, we built a training dataset consisting of synthetic textures with known directionality and varying perturbation levels. Subsequently, we defined and tested shallow and deep CNN architectures. We present the test results focusing on the CNN architectures and their robustness with respect to image perturbations. We identify the best performing CNN architecture, and compare it with the iGLCM, the Fourier and the local gradient orientation methods. We find that the accuracy of CNN is lower, yet comparable to the iGLCM, and it outperforms the other two methods. As expected, the CNN method shows the highest computing speed. Finally, we demonstrate the best performing CNN on real-life images. Visual analysis suggests that the learned patterns generalize to real-life image data. Hence, CNNs represent a promising approach for texture directionality detection, warranting further investigation.
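A sketch of how a labelled synthetic training set with known directionality and controllable perturbation can be generated, in the spirit of the dataset described above. The grating model, noise type and parameter ranges are assumptions, not the authors' generator.

```python
# Synthetic directional texture with a known dominant angle and tunable noise.
import numpy as np

def directional_texture(size=128, angle_deg=30.0, frequency=0.15, noise=0.2, seed=0):
    rng = np.random.default_rng(seed)
    yy, xx = np.mgrid[0:size, 0:size]
    theta = np.deg2rad(angle_deg)
    # Sinusoidal grating oriented along angle_deg, plus additive Gaussian perturbation.
    grating = np.sin(2 * np.pi * frequency * (xx * np.cos(theta) + yy * np.sin(theta)))
    img = grating + noise * rng.standard_normal((size, size))
    img = (img - img.min()) / (img.max() - img.min())        # normalise to [0, 1]
    return img.astype(np.float32), angle_deg                  # image and its direction label

# A labelled CNN training set can then be built by sampling angles and noise levels:
# data = [directional_texture(angle_deg=a, noise=n, seed=i)
#         for i, (a, n) in enumerate(zip(np.random.uniform(0, 180, 5000),
#                                        np.random.uniform(0.0, 0.5, 5000)))]
```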
18. Modeling of texture quantification and image classification for change prediction due to COVID lockdown using Skysat and Planetscope imagery. Modeling Earth Systems and Environment 2021; 8:2767-2792. PMID: 34458559. PMCID: PMC8384559. DOI: 10.1007/s40808-021-01258-6.
Abstract
This research work models two methods together to provide maximum information about a study area. The quantification of image texture is performed using the grey level co-occurrence matrix (GLCM) technique. Image classification-based object-based change detection (OBCD) methods are used to visually represent the developed transformation in the study area. Pre-COVID and post-COVID (during lockdown) panchromatic images of Connaught Place, New Delhi, are investigated in this research work to develop a model for the study area. Texture classification of the study area is performed based on visual texture features for eight distances and four orientations. Six different image classification methodologies are used for mapping the study area: parallelepiped classification (PC), minimum distance classification (MDC), maximum likelihood classification (MLC), spectral angle mapper (SAM), spectral information divergence (SID) and support vector machine (SVM). GLCM calculations have provided a pattern in the texture features contrast, correlation, ASM and IDM. Maximum classification accuracies of 83.68% and 73.65% are obtained for the pre-COVID and post-COVID image data through the MLC classification technique. Finally, a model is presented to analyze before and after COVID images to get complete information about the study area numerically and visually.
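For reference, a sketch of the per-class Gaussian maximum likelihood classification (MLC) rule that produced the best accuracies above, applied to pixel feature vectors. This is the textbook formulation in NumPy, not the software actually used; the training-sample dictionary is an assumed input drawn from user-defined regions of interest.

```python
# Gaussian maximum likelihood classification of pixel feature vectors.
import numpy as np

def fit_mlc(samples_per_class):
    """samples_per_class: dict {class_name: (n_i, d) array of training pixels}."""
    stats = {}
    for name, S in samples_per_class.items():
        mu = S.mean(axis=0)
        cov = np.cov(S, rowvar=False) + 1e-6 * np.eye(S.shape[1])   # regularised covariance
        stats[name] = (mu, np.linalg.inv(cov), np.linalg.slogdet(cov)[1])
    return stats

def classify_mlc(pixels, stats):
    """pixels: (n, d) array; assigns each pixel to the class with maximal log-likelihood."""
    names = list(stats)
    scores = np.empty((len(pixels), len(names)))
    for j, name in enumerate(names):
        mu, inv_cov, logdet = stats[name]
        diff = pixels - mu
        # Gaussian log-likelihood up to a constant: -0.5 * (log|C| + Mahalanobis distance).
        scores[:, j] = -0.5 * (logdet + np.einsum("nd,dk,nk->n", diff, inv_cov, diff))
    return np.array(names)[np.argmax(scores, axis=1)]
```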