1
Koshiishi Y, Tanaka S, Iwauchi Y, Baba K. Impact of scanning range and image count on the precision of digitally recorded intermaxillary relationships in interocclusal record using intraoral scanner. J Oral Sci 2024; 66:111-115. PMID: 38403675. DOI: 10.2334/josnusd.23-0379.
Abstract
PURPOSE: The effect of scan range and the number of scanned images on the precision of in vivo intermaxillary relationship reproduction was evaluated using digital scans acquired with an intraoral scanner.
METHODS: The study involved 15 participants with normal occlusion. Two interocclusal recording settings were employed with the intraoral scanner (TRIOS 4): 'MIN,' covering the minimal scan range of the first molar region, and 'MAX,' covering the range from the right first premolar to the right second molar. These settings were combined with three image counts, resulting in six experimental conditions. Interocclusal recordings were performed four times for each condition. Dimensional discrepancies between datasets were analyzed using three-dimensional morphometric software and compared using two-way analysis of variance.
RESULTS: Median dimensional discrepancies (interquartile range, IQR) of 39.2 (30.7-49.4), 42.2 (32.6-49.3), 30.3 (26.8-44.1), 20.1 (16.0-34.8), 21.8 (19.0-25.1), and 26.6 (19.9-34.5) µm were found for MIN/200, MIN/400, MIN/600, MAX/200, MAX/400, and MAX/600, respectively. Significant differences in dimensional discrepancies according to scan range were found; the Wilcoxon signed-rank test showed significant differences between MAX and MIN (P < 0.01).
CONCLUSION: Scan range may affect the precision of intermaxillary relationship reproduction; scanning the most extensive region practically achievable is therefore recommended.
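The paired comparison reported above (median/IQR summaries plus a Wilcoxon signed-rank test between conditions) can be sketched with synthetic data. The sample values and the normal-approximation z-score below are illustrative assumptions, not the study's data or software:

```python
import numpy as np

# Synthetic paired discrepancy samples (µm) standing in for the MAX (wide)
# and MIN (narrow) scan-range conditions; constructed so MIN is worse.
rng = np.random.default_rng(0)
max_range = rng.normal(22, 5, 15)                 # wide range: smaller errors
min_range = max_range + rng.uniform(5, 30, 15)    # narrow range: larger errors

def median_iqr(x):
    """Median and interquartile range, the summary statistics reported."""
    q1, med, q3 = np.percentile(x, [25, 50, 75])
    return med, q3 - q1

def wilcoxon_z(a, b):
    """Wilcoxon signed-rank statistic W (sum of ranks of positive paired
    differences), converted to a z-score via the normal approximation."""
    d = np.asarray(a) - np.asarray(b)
    d = d[d != 0]
    n = len(d)
    ranks = np.abs(d).argsort().argsort() + 1     # ranks 1..n (no tie handling)
    w = ranks[d > 0].sum()
    mu = n * (n + 1) / 4                          # mean of W under H0
    sigma = np.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    return (w - mu) / sigma

z = wilcoxon_z(min_range, max_range)
print(median_iqr(min_range), median_iqr(max_range), round(z, 2))  # z → 3.41
```

With all 15 paired differences positive, W reaches its maximum of n(n+1)/2 = 120, giving a z-score far beyond the usual significance threshold, mirroring the P < 0.01 result.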
Affiliation(s)
- Yusuke Koshiishi, Department of Prosthodontics, School of Dentistry, Showa University
- Shinpei Tanaka, Department of Prosthodontics, School of Dentistry, Showa University
- Yotaro Iwauchi, Department of Prosthodontics, School of Dentistry, Showa University
- Kazuyoshi Baba, Department of Prosthodontics, School of Dentistry, Showa University
2
Ntiyakunze J, Inoue T. Segmentation of Structural Elements from 3D Point Cloud Using Spatial Dependencies for Sustainability Studies. Sensors (Basel) 2023; 23:1924. PMID: 36850520. PMCID: PMC9959029. DOI: 10.3390/s23041924.
Abstract
The segmentation of point clouds obtained from existing buildings makes it possible to perform detailed structural analysis and overall life-cycle assessment of buildings. The major challenge in dealing with existing buildings is the presence of diverse and large amounts of occluding objects, which limits the segmentation process. In this study, we use unsupervised methods that integrate knowledge about the structural forms of buildings and their spatial dependencies to segment points into common structural classes. We first develop a novel approach that joins remotely disconnected patches, which arise from data missing due to occluding objects, using pairs of detected planar patches. Afterward, segmentation approaches are introduced to classify the pairs of refined planes into floor slabs, floor beams, walls, and columns. Finally, we test our approach on a large dataset with high levels of occlusion and compare it to recent segmentation methods. Compared with many other segmentation methods, our approach shows good results in segmenting structural elements by their constituent surfaces. Potential areas of improvement, particularly in segmenting the wall and beam classes, are highlighted for further study.
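A minimal sketch of the kind of test involved in joining remotely disconnected planar patches: two patches are merged when their plane parameters agree within tolerances. The plane representation (unit normal n and offset d with n·x = d) and both thresholds are assumptions for illustration, not the paper's actual criteria:

```python
import numpy as np

def coplanar(n1, d1, n2, d2, ang_tol=np.deg2rad(5), dist_tol=0.05):
    """Decide whether two planar patches (unit normal n, offset d in the
    plane equation n.x = d) belong to the same surface. Illustrative
    stand-in for a patch-joining test; thresholds are assumed values."""
    if np.dot(n1, n2) < 0:          # make the normals agree in direction
        n2, d2 = -n2, -d2
    angle = np.arccos(np.clip(np.dot(n1, n2), -1.0, 1.0))
    return bool(angle < ang_tol and abs(d1 - d2) < dist_tol)

# Two wall patches separated by an occluding cabinet: nearly the same
# plane, so they should be joined into one surface.
joined = coplanar(np.array([1.0, 0.0, 0.0]), 2.00,
                  np.array([0.999, 0.04, 0.0]), 2.02)
print(joined)  # True: same orientation and offset within tolerance
```

In a full pipeline this pairwise test would be applied across all detected patches, with connected groups of mutually coplanar patches merged before classification into slabs, beams, walls, and columns.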
3
|
Automatic Extraction of Indoor Structural Information from Point Clouds. Remote Sensing 2021. DOI: 10.3390/rs13234930.
Abstract
We propose a method to automatically extract building interior structure information, including ceilings, floors, and walls. Our approach outperforms previous methods in the following respects. First, we propose an approach based on principal component analysis (PCA) to find the ground plane, which is taken as the new Cartesian reference plane. Second, to reduce the complexity of data processing, the data are projected into two dimensions and transformed into a binary image using an improved radius outlier removal (ROR) filter. Third, a traditional thinning algorithm is adopted to extract the image skeleton. We then propose a method for calculating slopes from nearest-neighbor points, and lines are represented by these slopes to obtain information about the interior planes. Finally, the extracted outlines are restored to a three-dimensional structure. The proposed method was evaluated in multiple scenarios, and the results show that it is accurate in indoor environments (a maximum error of 0.03 m across three scenarios).
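The PCA idea in the first step can be illustrated generically: for a roughly planar point set, the eigenvector of the point covariance matrix with the smallest eigenvalue is the plane normal, which can then define the new vertical axis. This is a sketch of the standard technique, not the paper's implementation:

```python
import numpy as np

def ground_plane_normal(points):
    """Estimate the dominant plane's normal via PCA: the eigenvector of
    the covariance matrix with the smallest eigenvalue is the direction
    of least spread, i.e. the normal of a planar point set."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    return eigvecs[:, 0]                     # smallest-variance direction

# Synthetic floor patch: points spread in x-y with tiny z noise.
rng = np.random.default_rng(1)
floor = np.column_stack([rng.uniform(0, 5, 1000),
                         rng.uniform(0, 5, 1000),
                         rng.normal(0, 0.005, 1000)])
n = ground_plane_normal(floor)
print(np.abs(n))   # ≈ [0, 0, 1]: the z-axis, i.e. the floor normal
```

The recovered normal is sign-ambiguous (PCA cannot distinguish up from down), so a real pipeline would orient it, e.g. toward the scanner position.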
4
|
Automated Data Acquisition in Construction with Remote Sensing Technologies. Applied Sciences (Basel) 2020. DOI: 10.3390/app10082846.
Abstract
Near-real-time tracking of construction operations and timely progress reporting are essential for effective management of construction projects. This not only mitigates the potential negative impact of schedule delays and cost overruns but also helps improve safety on site. Such timely tracking circumvents the drawbacks of conventional methods of data acquisition, which are manual, labor-intensive, and not reliable enough for many construction purposes. To address these issues, a wide range of automated site data acquisition methods, including remote sensing (RS) technologies, has been introduced. This review article describes the capabilities and limitations of various scenarios employing RS enabling technologies for localization, with a focus on multi-sensor data fusion models. In particular, we consider the integration of real-time location systems (RTLSs), including GPS and UWB, with other sensing technologies such as RFID, WSN, and digital imaging for use in construction. This integrated use of technologies, along with information models (e.g., BIM models), is expected to enhance the efficiency of automated site data acquisition. It is also hoped that this review will prompt researchers to investigate fusion-based data capturing and processing.
5
|
Abstract
RFID (radio frequency identification) offers a way to identify objects without any contact. However, positioning accuracy is limited, since RFID provides neither distance nor bearing information about the tag. This paper proposes a new approach for the localization of moving objects using a particle filter, incorporating RFID phase measurements and laser-based clustering of 2D laser range data. First, we calculate the phase-based velocity of the moving object from the RFID phase difference. Meanwhile, we separate the laser range data into clusters and compute the distance-based velocity and moving direction of each cluster. We then compute the similarity between the two velocities and select the K clusters with the best similarity scores. The particles are predicted according to the velocity and moving direction of the laser clusters. Finally, we update the particle weights based on the K clusters and thereby localize the moving objects. The feasibility of this approach is validated on a Scitos G5 service robot, and the results show a localization accuracy of up to 0.25 m.
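The phase-to-velocity step can be sketched as below. Because the backscatter signal travels to the tag and back, one wavelength of path change corresponds to 4π of phase change. The carrier frequency, the sign convention (positive = tag receding), and the simple 2π unwrapping are illustrative assumptions, not details from the paper:

```python
import math

C = 3e8            # speed of light (m/s)
FREQ = 920e6       # assumed UHF RFID carrier frequency (illustrative)
WAVELENGTH = C / FREQ

def phase_velocity(phase1, phase2, dt):
    """Radial velocity from two RFID phase readings (radians) taken dt
    seconds apart. Round-trip propagation means the phase advances by
    4*pi per wavelength of distance change; positive result = receding."""
    dphi = phase2 - phase1
    # Readers report phase modulo 2*pi, so wrap the difference to (-pi, pi].
    dphi = (dphi + math.pi) % (2 * math.pi) - math.pi
    return dphi * WAVELENGTH / (4 * math.pi * dt)

# A tag receding 0.025 m in 0.1 s: phase grows by 4*pi*d / wavelength,
# so the recovered radial velocity is 0.25 m/s.
v = phase_velocity(0.0, 4 * math.pi * 0.025 / WAVELENGTH, 0.1)
print(round(v, 3))  # 0.25
```

Note that phase alone gives only radial velocity, which is why the paper fuses it with the direction and speed of laser clusters before predicting particles.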
6
|
Kamimura E, Tanaka S, Takaba M, Tachi K, Baba K. In vivo evaluation of inter-operator reproducibility of digital dental and conventional impression techniques. PLoS One 2017; 12:e0179188. PMID: 28636642. PMCID: PMC5479543. DOI: 10.1371/journal.pone.0179188.
Abstract
PURPOSE: The aim of this study was to evaluate and compare, in vivo, the inter-operator reproducibility of three-dimensional (3D) images of teeth captured by a digital impression technique and a conventional impression technique.
MATERIALS AND METHODS: Twelve participants with complete natural dentition were included in this study. A digital impression of the mandibular molars of each participant was made by two operators with different levels of clinical experience (3 or 16 years) using an intraoral scanner (Lava COS, 3M ESPE). A silicone impression was also made by the same operators using the double-mix impression technique (Imprint3, 3M ESPE). Stereolithography (STL) data were exported directly from the Lava COS system, while STL data of a plaster model made from the silicone impression were captured with a 3D laboratory scanner (D810, 3Shape). For each impression technique, the STL datasets recorded by the two operators were compared using 3D evaluation software and superimposed using a best-fit algorithm (least-squares method; PolyWorks, InnovMetric Software). Inter-operator reproducibility, evaluated as the average discrepancy between corresponding 3D data, was compared between the two techniques (Wilcoxon signed-rank test).
RESULTS: Visual inspection of the superimposed datasets revealed that discrepancies between repeated digital impressions were smaller than those observed with silicone impressions. Statistical analysis confirmed this, revealing a significantly smaller average inter-operator discrepancy with the digital impression technique (0.014 ± 0.02 mm) than with the conventional technique (0.023 ± 0.01 mm).
CONCLUSION: The results of this in vivo study suggest that the inter-operator reproducibility of a digital impression technique may be better than that of a conventional impression technique and is independent of the clinical experience of the operator.
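The best-fit superimposition step (least-squares rigid alignment of two scans before measuring discrepancies) can be sketched with the Kabsch algorithm, a standard closed-form least-squares solution; this is a generic illustration, not PolyWorks' implementation, and it assumes the two point sets are already in correspondence:

```python
import numpy as np

def best_fit_transform(A, B):
    """Least-squares rigid transform (Kabsch algorithm) aligning point
    set A onto B: center both sets, take the SVD of the cross-covariance,
    and build the rotation, correcting for reflections."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t

# Synthetic "second scan": rotate and translate a point set, then recover
# the motion and check that the alignment residual vanishes.
rng = np.random.default_rng(2)
A = rng.normal(size=(100, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
B = A @ R_true.T + np.array([0.5, -0.2, 1.0])
R, t = best_fit_transform(A, B)
residual = np.abs(A @ R.T + t - B).max()
print(residual)   # ≈ 0 on noise-free data
```

After such an alignment, the remaining per-point distances are exactly the kind of average discrepancy the study reports (0.014 mm vs. 0.023 mm).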
Affiliation(s)
- Emi Kamimura, Department of Prosthodontics, School of Dentistry, Showa University, Tokyo, Japan
- Shinpei Tanaka, Department of Prosthodontics, School of Dentistry, Showa University, Tokyo, Japan
- Masayuki Takaba, Department of Prosthodontics, School of Dentistry, Showa University, Tokyo, Japan
- Keita Tachi, Department of Prosthodontics, School of Dentistry, Showa University, Tokyo, Japan
- Kazuyoshi Baba, Department of Prosthodontics, School of Dentistry, Showa University, Tokyo, Japan
7
|
Valero E, Adán A, Cerrada C. Evolution of RFID Applications in Construction: A Literature Review. Sensors 2015; 15:15988-16008. PMID: 26151210. PMCID: PMC4541864. DOI: 10.3390/s150715988.
Abstract
Radio frequency identification (RFID) technology has been widely used in the field of construction over the last two decades. Essentially, RFID facilitates control over a wide variety of processes at different stages of the lifecycle of a building, from its conception to its occupancy. The main objective of this paper is to present a review of RFID applications in the construction industry, pointing out existing developments, limitations, and gaps. The paper covers the establishment of RFID technology in the main stages of a facility's lifecycle: planning and design, construction and commissioning, and operation and maintenance. Concerning this last stage, an RFID application aimed at facilitating the identification of pieces of furniture in scanned inhabited environments is presented. Conclusions and future directions are presented at the end of the paper.
Affiliation(s)
- Enrique Valero, School of Energy, Geoscience, Infrastructure and Society, Heriot-Watt University, Edinburgh EH14 4AS, UK
- Antonio Adán, Visual Computing and Robotics Lab, Universidad de Castilla-La Mancha, Paseo de la Universidad, 4, 13071 Ciudad Real, Spain
- Carlos Cerrada, Escuela Técnica Superior de Ingeniería Informática, Universidad Nacional de Educación a Distancia, Juan del Rosal, 16, 28040 Madrid, Spain
8
|
Adán A, Quintana B, Vázquez AS, Olivares A, Parra E, Prieto S. Towards the automatic scanning of indoors with robots. Sensors 2015; 15:11551-74. PMID: 25996513. PMCID: PMC4481921. DOI: 10.3390/s150511551.
Abstract
This paper is framed within the research fields of 3D digitization and intelligent 3D data processing. Our objective is to develop a set of techniques for the automatic creation of simple three-dimensional indoor models with mobile robots. The document presents the principal steps of the process, the experimental setup, and the results achieved. We distinguish between the stages concerning intelligent data acquisition and 3D data processing; this paper focuses on the first stage. We show how the mobile robot, which carries a 3D scanner, is able, on the one hand, to make decisions about the next best scanner position and, on the other, to navigate autonomously in the scene with the help of the data collected from earlier scans. The robot imposes a stopping criterion when the accumulated point cloud covers the essential parts of the scene. After this stage, millions of 3D data points are converted into a simplified 3D indoor model. The system has been tested under real conditions indoors with promising results. Future work will extend the method to much more complex and larger scenarios.
Affiliation(s)
- Antonio Adán, Blanca Quintana, Andres S Vázquez, Alberto Olivares, Eduardo Parra, Samuel Prieto: Visual Computing and Robotics Lab, Universidad de Castilla-La Mancha (UCLM), Paseo de la Universidad, 4, Ciudad Real 13071, Spain
9
|
Lorenzo-Navarro J, Castrillón-Santana M, Hernández-Sosa D. On the use of simple geometric descriptors provided by RGB-D sensors for re-identification. Sensors 2013; 13:8222-38. PMID: 23807686. PMCID: PMC3758592. DOI: 10.3390/s130708222.
Abstract
The re-identification problem has commonly been tackled using appearance features based on salient points and color information. In this paper, we focus on the possibilities that simple geometric features obtained from depth images captured with RGB-D cameras may offer for the task, particularly when working under severe illumination conditions. The results achieved for different sets of simple geometric features extracted in a top-view setup suggest that they provide useful descriptors for the re-identification task, and they can be integrated in an ambient intelligence environment as part of a sensor network.
Affiliation(s)
- Javier Lorenzo-Navarro, SIANI, Universidad de Las Palmas de Gran Canaria, Campus de Tafira, Las Palmas de Gran Canaria 35017, Spain
10
|
Automatic method for building indoor boundary models from dense point clouds collected by laser scanners. Sensors 2012; 12:16099-115. PMID: 23443369. PMCID: PMC3571773. DOI: 10.3390/s121216099.
Abstract
In this paper we present a method that automatically yields boundary representation (B-rep) models of indoor environments after processing dense point clouds collected by laser scanners from key locations throughout an existing facility. Our objective is particularly focused on providing unified models that contain the shape, location, and relationships of primitive structural elements of inhabited scenes, such as walls, ceilings, and floors. We propose a discretization of the space in order to accurately segment the 3D data and generate complete B-rep models of indoor environments in which faces, edges, and vertices are coherently connected. The approach has been tested in real scenarios with laser scanner data, yielding promising results. We have evaluated the results in depth, analyzing how reliably these elements can be detected and how accurately they are modeled.
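One ingredient of such a space discretization, locating the floor and ceiling as density peaks along the vertical axis, can be sketched as follows. The histogram approach, bin size, and synthetic room are illustrative assumptions rather than the paper's actual pipeline, which goes on to segment walls and connect faces, edges, and vertices into a B-rep:

```python
import numpy as np

def floor_ceiling_heights(points, bin_size=0.05):
    """Locate the floor and ceiling as the strongest z-density peaks
    below and above mid-height: horizontal surfaces concentrate many
    points into a thin band of z values."""
    z = points[:, 2]
    mid = (z.min() + z.max()) / 2

    def peak(vals):
        edges = np.arange(vals.min(), vals.max() + bin_size, bin_size)
        counts, edges = np.histogram(vals, bins=edges)
        return edges[counts.argmax()] + bin_size / 2   # bin center

    return peak(z[z < mid]), peak(z[z >= mid])

# Synthetic room scan: dense floor at z = 0, ceiling at z = 2.5, one wall.
rng = np.random.default_rng(3)
pts = np.vstack([
    np.column_stack([rng.uniform(0, 4, 4000), rng.uniform(0, 4, 4000),
                     rng.normal(0.0, 0.01, 4000)]),    # floor points
    np.column_stack([rng.uniform(0, 4, 4000), rng.uniform(0, 4, 4000),
                     rng.normal(2.5, 0.01, 4000)]),    # ceiling points
    np.column_stack([rng.uniform(0, 4, 800), np.zeros(800),
                     rng.uniform(0.0, 2.5, 800)]),     # one wall
])
floor_z, ceil_z = floor_ceiling_heights(pts)
print(floor_z, ceil_z)   # near 0.0 and 2.5
```

The same discretize-and-vote idea extends to walls by histogramming along horizontal directions; the detected planes then supply the faces from which edges and vertices of the B-rep are derived.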