1
Huang S, Lu Z, Shi Y, Dong J, Hu L, Yang W, Huang C. A Novel Method for Filled/Unfilled Grain Classification Based on Structured Light Imaging and Improved PointNet++. Sensors (Basel, Switzerland) 2023; 23:6331. [PMID: 37514625] [PMCID: PMC10384795] [DOI: 10.3390/s23146331]
Abstract
China is the largest producer and consumer of rice, and the classification of filled/unfilled rice grains is of great significance for rice breeding and genetic analysis. Traditional filled/unfilled grain identification is generally manual, with low efficiency, poor repeatability, and low precision. In this study, we propose a novel method for filled/unfilled grain classification based on structured light imaging and an improved PointNet++. First, 3D point cloud data of rice grains were obtained by structured light imaging. Dedicated processing algorithms were then developed for single-grain segmentation and for data enhancement with normal vectors. Finally, the PointNet++ network was improved by adding an additional Set Abstraction layer and combining max pooling of the normal vectors to classify filled/unfilled rice grain point clouds. To verify model performance, the improved PointNet++ was compared with six machine learning methods, PointNet, and PointConv. The best machine learning model was XGBoost, with a classification accuracy of 91.99%, while the improved PointNet++ reached 98.50%, outperforming PointNet (93.75%) and PointConv (92.25%). In conclusion, this study demonstrates a novel and effective method for filled/unfilled grain recognition.
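The normal-vector data enhancement mentioned above can be illustrated in a few lines. This is a minimal numpy sketch (not the authors' code): per-point normals are estimated by PCA over the k nearest neighbours and concatenated with the coordinates into the kind of 6-D (x, y, z, nx, ny, nz) input a PointNet++-style network consumes.

```python
import numpy as np

def estimate_normals(points, k=8):
    """Estimate a unit normal per point: PCA of the k nearest neighbours;
    the eigenvector with the smallest eigenvalue approximates the normal."""
    normals = np.empty_like(points)
    for i, p in enumerate(points):
        dist = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(dist)[:k]]
        # Columns of eigh() are sorted by ascending eigenvalue.
        _, eigvecs = np.linalg.eigh(np.cov(nbrs.T))
        normals[i] = eigvecs[:, 0]
    return normals

def augment_with_normals(points, k=8):
    """Return an N x 6 array (x, y, z, nx, ny, nz) as network input."""
    return np.hstack([points, estimate_normals(points, k)])
```

For points sampled from a plane, the estimated normals align with the plane's normal, which is the sanity check usually applied to this kind of estimator.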
Affiliation(s)
- Shihao Huang
- College of Engineering, Huazhong Agricultural University, Wuhan 430070, China
- Shenzhen Institute of Nutrition and Health, Huazhong Agricultural University, Wuhan 430070, China
- Shenzhen Branch, Guangdong Laboratory for Lingnan Modern Agriculture, Genome Analysis Laboratory of the Ministry of Agriculture, Agricultural Genomics Institute at Shenzhen, Chinese Academy of Agricultural Sciences, Shenzhen 518000, China
- Zhihao Lu
- College of Engineering, Huazhong Agricultural University, Wuhan 430070, China
- Yuxuan Shi
- College of Engineering, Huazhong Agricultural University, Wuhan 430070, China
- Jiale Dong
- College of Engineering, Huazhong Agricultural University, Wuhan 430070, China
- Lin Hu
- College of Engineering, Huazhong Agricultural University, Wuhan 430070, China
- Wanneng Yang
- National Key Laboratory of Crop Genetic Improvement, National Center of Plant Gene Research (Wuhan), Huazhong Agricultural University, Wuhan 430070, China
- Chenglong Huang
- College of Engineering, Huazhong Agricultural University, Wuhan 430070, China
- Shenzhen Institute of Nutrition and Health, Huazhong Agricultural University, Wuhan 430070, China
- Shenzhen Branch, Guangdong Laboratory for Lingnan Modern Agriculture, Genome Analysis Laboratory of the Ministry of Agriculture, Agricultural Genomics Institute at Shenzhen, Chinese Academy of Agricultural Sciences, Shenzhen 518000, China
2
Jasińska A, Pyka K, Pastucha E, Midtiby HS. A Simple Way to Reduce 3D Model Deformation in Smartphone Photogrammetry. Sensors (Basel, Switzerland) 2023; 23:728. [PMID: 36679525] [PMCID: PMC9860635] [DOI: 10.3390/s23020728]
Abstract
The term smartphone photogrammetry has recently gained popularity, suggesting that photogrammetry may become a simple measurement tool available to virtually every smartphone user. This research was undertaken to clarify whether it is appropriate to use the Structure from Motion-Multi-View Stereo (SfM-MVS) procedure with self-calibration, as is done in Uncrewed Aerial Vehicle photogrammetry. First, the geometric stability of smartphone cameras was tested: fourteen smartphones were calibrated on a checkerboard test field, and the process was repeated multiple times. Two observations were made: (1) most smartphone cameras have lower stability of the internal orientation parameters than a Digital Single-Lens Reflex (DSLR) camera, and (2) the principal distance and the position of the principal point change constantly. Then, based on images from two selected smartphones, 3D models of a small sculpture were developed using the SfM-MVS method in both self-calibration and pre-calibration variants. Comparing the resulting models with a reference DSLR-created model showed that introducing calibration obtained on the test field instead of self-calibration improves the geometry of the 3D models; in particular, deformations of local concavities and convexities decreased. In conclusion, there is real potential in smartphone photogrammetry, but it also has its limits.
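The practical consequence of an unstable principal point can be seen directly from the pinhole model: a drift of the principal point displaces every projected image point by the same amount, which self-calibration can then absorb into the wrong parameters. A small numpy sketch with illustrative (assumed) intrinsics:

```python
import numpy as np

def project(K, X):
    """Project 3-D camera-frame points X (N x 3) through intrinsics K."""
    x = (K @ X.T).T
    return x[:, :2] / x[:, 2:3]

# Illustrative (assumed) intrinsics: f = 3000 px, principal point (2000, 1500).
K = np.array([[3000.0, 0.0, 2000.0],
              [0.0, 3000.0, 1500.0],
              [0.0, 0.0, 1.0]])
# A 5 px drift of the principal point, as unstable smartphone optics can show.
K_drift = K.copy()
K_drift[0, 2] += 5.0

X = np.array([[0.1, 0.2, 1.0],
              [0.0, 0.0, 2.0]])
shift = project(K_drift, X) - project(K, X)   # every point moves by (5, 0) px
```

Because the displacement is depth-independent, a pre-calibrated principal point removes this systematic error regardless of scene geometry.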
Affiliation(s)
- Aleksandra Jasińska
- Faculty of Geo-Data Science, Geodesy, and Environmental Engineering, AGH University of Science and Technology, al. Mickiewicza 30, 30-059 Cracow, Poland
- Krystian Pyka
- Faculty of Geo-Data Science, Geodesy, and Environmental Engineering, AGH University of Science and Technology, al. Mickiewicza 30, 30-059 Cracow, Poland
- Elżbieta Pastucha
- UAS Center, The Maersk Mc-Kinney Moller Institute, University of Southern Denmark, Campusvej 55, 5230 Odense, Denmark
- Henrik Skov Midtiby
- UAS Center, The Maersk Mc-Kinney Moller Institute, University of Southern Denmark, Campusvej 55, 5230 Odense, Denmark
3
Trojnacki M, Dąbek P, Jaroszek P. Analysis of the Influence of the Geometrical Parameters of the Body Scanner on the Accuracy of Reconstruction of the Human Figure Using the Photogrammetry Technique. Sensors (Basel, Switzerland) 2022; 22:9181. [PMID: 36501882] [PMCID: PMC9739902] [DOI: 10.3390/s22239181]
Abstract
This article concerns research on the HUBO full-body scanner, including the analysis and selection of the scanner's geometrical parameters to obtain the highest possible accuracy of human-figure reconstruction. In the scanner version analyzed here, smartphone cameras are used as sensors, and the photogrammetry technique is applied to process the collected photos into a 3D model. As part of the work, dependencies between the geometrical parameters of the scanner are derived, which significantly reduces the number of degrees of freedom in their selection. Based on these dependencies, a numerical analysis is carried out; as a result, initial values of the geometrical parameters are pre-selected and the distribution of scanner cameras is visualized. In the experimental research, the influence of selected scanner parameters on scanning accuracy is analyzed. A specially prepared dummy was used instead of a real human, which ensured the constancy of the scanned object. Reconstruction accuracy was assessed against a reference 3D model obtained with a scanner of superior measurement uncertainty. On this basis, a method for selecting the scanner's geometrical parameters was finally verified, leading to an arrangement of cameras around a human that guarantees high reconstruction accuracy. Additionally, quality rates were used to quantify the results, taking into account not only the scanner's measurement uncertainty but also the processing time and the resulting efficiency.
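The camera-distribution step can be pictured with a toy geometric helper. This is a hypothetical sketch, not the paper's derived dependencies: it simply places n cameras evenly on a ring of a chosen radius and height, each yawed toward the subject at the centre.

```python
import math

def ring_cameras(n, radius, height):
    """Place n cameras evenly on a horizontal ring of the given radius,
    each yawed to face the ring centre (where the person stands)."""
    cams = []
    for i in range(n):
        theta = 2.0 * math.pi * i / n
        x, y = radius * math.cos(theta), radius * math.sin(theta)
        yaw = math.atan2(0.0 - y, 0.0 - x)  # look toward the origin
        cams.append((x, y, height, yaw))
    return cams
```

In the paper's setting, n, radius, and height are exactly the kind of free parameters whose values are pre-selected by numerical analysis and then verified experimentally.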
Affiliation(s)
- Przemysław Dąbek
- ŁUKASIEWICZ Research Network—Industrial Research Institute for Automation and Measurements PIAP, Al. Jerozolimskie 202, 02-486 Warsaw, Poland
4
Photogrammetric Method to Determine Physical Aperture and Roughness of a Rock Fracture. Sensors (Basel, Switzerland) 2022; 22:4165. [PMID: 35684786] [PMCID: PMC9185246] [DOI: 10.3390/s22114165]
Abstract
Rock discontinuities play an important role in the behavior of rock masses and strongly affect their mechanical and hydrological properties, such as strength and permeability. The surface roughness and physical aperture of rock joints are vital characteristics for joint shear strength and fluid flow. This study presents a method to digitally measure the physical aperture of a rock fracture digitized using photogrammetry. A 50 cm × 50 cm sample of Kuru grey granite with a through-going fracture was digitized; the data were collected using a high-resolution digital camera and four low-cost cameras. The aperture and surface roughness were measured, and the influence of camera type and 3D model rasterization on the measurement results was quantified. The results showed that low-cost cameras and smartphones can be used to generate 3D models for accurate measurement of the physical aperture and roughness of rock fractures. However, the selection of an appropriate rasterization grid interval plays a key role in accurate estimation. For measuring the physical aperture from photogrammetric 3D models, reducing the rasterization grid interval yields less scattered measurements, and a small interval of 0.1 mm is recommended. For roughness measurements, increasing the grid interval reduces measurement errors, so a larger interval of 0.5 mm is recommended for high-resolution smartphones and 1 mm for other low-cost cameras.
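The rasterization step this abstract hinges on can be sketched simply: grid the two fracture surfaces in the XY plane at a chosen cell size and take the aperture as the vertical separation of matched cells. A minimal numpy illustration, assuming mean-z binning (the paper's exact rasterization scheme may differ):

```python
import numpy as np

def rasterize(points, cell):
    """Grid a fracture-surface point cloud in the XY plane, keeping the
    mean z per cell; `cell` is the rasterization grid interval."""
    ij = np.floor(points[:, :2] / cell).astype(int)
    ij -= ij.min(axis=0)
    shape = tuple(ij.max(axis=0) + 1)
    counts = np.zeros(shape)
    sums = np.zeros(shape)
    # np.add.at accumulates correctly even with repeated cell indices.
    np.add.at(counts, (ij[:, 0], ij[:, 1]), 1.0)
    np.add.at(sums, (ij[:, 0], ij[:, 1]), points[:, 2])
    grid = np.full(shape, np.nan)          # NaN marks empty cells
    mask = counts > 0
    grid[mask] = sums[mask] / counts[mask]
    return grid

def mean_aperture(upper, lower):
    """Mean vertical separation between matched upper/lower rasters."""
    return np.nanmean(upper - lower)
```

Shrinking `cell` toward the recommended 0.1 mm increases the number of cells and so reduces the scatter of the aperture estimate, at the cost of more empty cells in sparse regions.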
5
Qin Z, Zhang Z, Hua X, Yang W, Liang X, Zhai R, Huang C. Cereal grain 3D point cloud analysis method for shape extraction and filled/unfilled grain identification based on structured light imaging. Sci Rep 2022; 12:3145. [PMID: 35210561] [PMCID: PMC8873360] [DOI: 10.1038/s41598-022-07221-4]
Abstract
Cereals are the main food for mankind, and grain shape extraction and filled/unfilled grain recognition are meaningful for crop breeding and genetic analysis. The conventional measuring method is mainly manual, which is inefficient, labor-intensive, and subjective. Therefore, a novel method was proposed to extract the phenotypic traits of cereal grains based on point clouds. First, a structured light scanner was used to obtain grain point cloud data. Then, single-grain segmentation was accomplished by image preprocessing, plane fitting, and region-growing clustering. The length, width, thickness, surface area, and volume were calculated by analysis algorithms specified for grain point clouds. To demonstrate the method, experimental materials including rice, wheat, and corn were tested. Compared with manual measurement, the average measurement errors of grain length, width, and thickness were 2.07%, 0.97%, and 1.13%, and the average measurement efficiency was about 9.6 s per grain. In addition, grain identification models were built with 25 grain phenotypic traits using 6 machine learning methods. The best accuracy for filled/unfilled grain classification was 90.184%; the best accuracy for indica/japonica identification was 99.950%, while accuracy for identifying different varieties was only 47.252%. Therefore, this method proved to be an efficient and effective approach for crop research.
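Extracting length, width, and thickness from a segmented grain point cloud can be illustrated with a PCA-based sketch. This is an illustrative stand-in, not the paper's specified algorithms: the three dimensions are taken as the extents of the cloud along its principal axes.

```python
import numpy as np

def grain_dimensions(points):
    """Length, width and thickness of a grain point cloud as its extents
    along the principal axes of the covariance matrix."""
    centred = points - points.mean(axis=0)
    # eigh() sorts eigenvalues ascending; reverse the columns so the
    # extents come out ordered length >= width >= thickness.
    _, eigvecs = np.linalg.eigh(np.cov(centred.T))
    proj = centred @ eigvecs[:, ::-1]
    extents = proj.max(axis=0) - proj.min(axis=0)
    return tuple(extents)
```

Because the principal axes are recovered from the data, the measurement is independent of how the grain is oriented on the scanner stage.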
Affiliation(s)
- Zhijie Qin
- College of Engineering, Huazhong Agricultural University, Wuhan, 430070, People's Republic of China
- Zhongfu Zhang
- College of Engineering, Huazhong Agricultural University, Wuhan, 430070, People's Republic of China
- Xiangdong Hua
- College of Engineering, Huazhong Agricultural University, Wuhan, 430070, People's Republic of China
- Wanneng Yang
- National Key Laboratory of Crop Genetic Improvement, National Center of Plant Gene Research (Wuhan), Huazhong Agricultural University, Wuhan, 430070, People's Republic of China
- Xiuying Liang
- College of Engineering, Huazhong Agricultural University, Wuhan, 430070, People's Republic of China
- Ruifang Zhai
- College of Informatics, Huazhong Agricultural University, Wuhan, 430070, People's Republic of China
- Chenglong Huang
- College of Engineering, Huazhong Agricultural University, Wuhan, 430070, People's Republic of China