1
Yin W, Zang X, Wu L, Zhang X, Zhao J. A Distortion Correction Method Based on Actual Camera Imaging Principles. Sensors (Basel) 2024; 24:2406. PMID: 38676023; PMCID: PMC11053859; DOI: 10.3390/s24082406.
Abstract
In human-robot collaboration systems, high-precision distortion correction of the camera, an essential sensor, is a crucial prerequisite for accomplishing the task. Traditionally, lens distortion is estimated either jointly with the camera model parameters or separately from the camera model. However, when distortion is optimized jointly with the camera model parameters, mutual compensation between parameters can lead to numerical instability, while existing correction methods decoupled from the camera model struggle to guarantee correction accuracy. To address this problem, this study proposes a model-independent lens distortion correction method based on the central region of the image, derived from the actual imaging principles of the camera lens. The method builds on the idea that a structured image preserves its ratios under perspective transformation, and uses local image information from the central region to correct the image as a whole. The method is verified in both low-distortion and high-distortion cases, in simulation and in real experiments. The results show that it outperforms competing methods in accuracy and stability on both training and testing data.
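The conventional baseline this method departs from, a parametric radial distortion model optimized jointly with the camera parameters, can be sketched as follows. The two-coefficient Brown radial model and its fixed-point inversion below are a generic illustration, not the paper's model-independent procedure, and the coefficient values are made up:

```python
def distort(xu, yu, k1, k2):
    """Apply the two-coefficient Brown radial model to normalized, undistorted coordinates."""
    r2 = xu * xu + yu * yu
    f = 1.0 + k1 * r2 + k2 * r2 * r2
    return xu * f, yu * f

def undistort(xd, yd, k1, k2, iters=20):
    """Invert the radial model by fixed-point iteration (converges for mild distortion)."""
    xu, yu = xd, yd  # initial guess: undistorted == distorted
    for _ in range(iters):
        r2 = xu * xu + yu * yu
        f = 1.0 + k1 * r2 + k2 * r2 * r2
        xu, yu = xd / f, yd / f
    return xu, yu

# round-trip check on one normalized image point (hypothetical coefficients)
xu, yu = 0.3, -0.2
xd, yd = distort(xu, yu, k1=-0.1, k2=0.01)
xr, yr = undistort(xd, yd, k1=-0.1, k2=0.01)
```

The fixed-point inversion is the same scheme commonly used to undo radial distortion when only the forward model is available.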
Affiliation(s)
- Wenxin Yin
- State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin 150001, China
2
Park J, Gagneur JD, Chungbin SJ, Rong Y, Lim SB, Chan MF. Resolving signal drift in the wall-mounted camera of the RGSC system. J Appl Clin Med Phys 2024; 25:e14291. PMID: 38306504; DOI: 10.1002/acm2.14291.
Abstract
PURPOSE: To present a modified calibration method that reduces signal drift caused by table sag in Respiratory Gating for Scanner (RGSC) systems with a wall-mounted camera. MATERIALS AND METHODS: Approximately 70 kg of solid-water phantoms was evenly distributed on the CT couch to mimic a patient's weight. New calibration measurements were performed at nine points, combining three lateral positions (the CT isocenter and ±10 cm laterally from it) with three longitudinal positions (the CT isocenter and ±30 cm or ±40 cm from it). The new calibration was tested in two hospitals. RESULTS: Implementing the new weighted calibration method at the extended distance improved results during the DIBH scan, reducing the drift from 3 mm to within 1 mm. The extended calibration positions exhibited similarly reduced drift in both hospitals, reinforcing the method's robustness and its potential applicability across centers. CONCLUSION: The proposed solution minimizes the systematic error in radiation delivery for patients undergoing motion management with wall-mounted-camera RGSC systems, especially in conjunction with a bariatric CT couchtop.
Affiliation(s)
- Jeonghoon Park
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, Basking Ridge, New Jersey, USA
- Justin D Gagneur
- Department of Radiation Oncology, Mayo Clinic, Phoenix, Arizona, USA
- Yi Rong
- Department of Radiation Oncology, Mayo Clinic, Phoenix, Arizona, USA
- Seng Boh Lim
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, Basking Ridge, New Jersey, USA
- Maria F Chan
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, Basking Ridge, New Jersey, USA
3
Li S, Yoon HS. Enhancing Camera Calibration for Traffic Surveillance with an Integrated Approach of Genetic Algorithm and Particle Swarm Optimization. Sensors (Basel) 2024; 24:1456. PMID: 38474992; DOI: 10.3390/s24051456.
Abstract
Recent advancements in sensor technologies, coupled with signal processing and machine learning, have enabled real-time traffic control systems to effectively adapt to changing traffic conditions. Cameras, as sensors, offer a cost-effective means to determine the number, location, type, and speed of vehicles, aiding decision-making at traffic intersections. However, the effective use of cameras for traffic surveillance requires proper calibration. This paper proposes a new optimization-based method for camera calibration. In this approach, initial calibration parameters are established using the Direct Linear Transformation (DLT) method. Then, optimization algorithms are applied to further refine the calibration parameters for the correction of nonlinear lens distortions. A significant enhancement in the optimization process is achieved through the integration of the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) into a combined Integrated GA and PSO (IGAPSO) technique. The effectiveness of this method is demonstrated through the calibration of eleven roadside cameras at three different intersections. The experimental results show that when compared to the baseline DLT method, the vehicle localization error is reduced by 22.30% with GA, 22.31% with PSO, and 25.51% with IGAPSO.
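As a rough illustration of the PSO half of the IGAPSO refinement, the sketch below minimizes a toy two-parameter cost standing in for the reprojection error of the distortion coefficients. The swarm settings, the cost function, and its optimum are illustrative assumptions, not the paper's configuration:

```python
import random

def pso(cost, dim, n=30, iters=200, lo=-1.0, hi=1.0, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Basic particle swarm optimization: minimize `cost` over [lo, hi]^dim."""
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]  # positions
    V = [[0.0] * dim for _ in range(n)]                                # velocities
    P = [x[:] for x in X]                                              # personal bests
    pcost = [cost(x) for x in X]
    g = min(range(n), key=lambda i: pcost[i])
    G, gcost = P[g][:], pcost[g]                                       # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d] + c1 * r1 * (P[i][d] - X[i][d])
                           + c2 * r2 * (G[d] - X[i][d]))
                X[i][d] += V[i][d]
            c = cost(X[i])
            if c < pcost[i]:
                P[i], pcost[i] = X[i][:], c
                if c < gcost:
                    G, gcost = X[i][:], c
    return G, gcost

# toy "residual" surface, minimized at the made-up coefficients (0.2, -0.05)
cost = lambda k: (k[0] - 0.2) ** 2 + (k[1] + 0.05) ** 2
best, err = pso(cost, dim=2)
```

In the paper's pipeline, `cost` would instead evaluate the vehicle localization or reprojection error of a candidate set of distortion parameters seeded by the DLT solution.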
Affiliation(s)
- Shenglin Li
- Department of Mechanical Engineering, The University of Alabama, Tuscaloosa, AL 35487, USA
- Hwan-Sik Yoon
- Department of Mechanical Engineering, The University of Alabama, Tuscaloosa, AL 35487, USA
4
Zhang S, Fu Q. Wand-Based Calibration of Unsynchronized Multiple Cameras for 3D Localization. Sensors (Basel) 2024; 24:284. PMID: 38203146; PMCID: PMC10781378; DOI: 10.3390/s24010284.
Abstract
Three-dimensional (3D) localization plays an important role in visual sensor networks. However, the frame rate and flexibility of existing vision-based localization systems are limited by the use of synchronized multiple cameras. To this end, this paper develops an indoor 3D localization system based on unsynchronized multiple cameras. First, we propose a calibration method for unsynchronized perspective/fish-eye cameras based on timestamp matching and pixel fitting, using a wand under general motion. With the multi-camera calibration result, we then design a localization method for the unsynchronized multi-camera system based on the extended Kalman filter (EKF). Finally, extensive experiments demonstrate the effectiveness of the established 3D localization system. The results provide valuable insights into camera calibration and 3D localization with unsynchronized multiple cameras in visual sensor networks.
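With a linear constant-velocity motion model and a position-only measurement, the EKF reduces to the ordinary Kalman filter sketched below. The 1D state, noise levels, and simulated track are illustrative assumptions, not the paper's multi-camera formulation:

```python
import random

def kf_step(x, v, P, z, dt=0.1, q=1e-3, r=0.05):
    """One predict/update cycle of a 1D constant-velocity Kalman filter.
    State is (position x, velocity v); P is the 2x2 covariance as nested lists."""
    # predict: x' = x + dt*v, P' = F P F^T + qI  with F = [[1, dt], [0, 1]]
    x = x + dt * v
    P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
          P[0][1] + dt * P[1][1]],
         [P[1][0] + dt * P[1][1], P[1][1] + q]]
    # update with position measurement z (H = [1, 0], measurement noise r)
    S = P[0][0] + r
    K0, K1 = P[0][0] / S, P[1][0] / S
    y = z - x
    x, v = x + K0 * y, v + K1 * y
    P = [[(1 - K0) * P[0][0], (1 - K0) * P[0][1]],
         [P[1][0] - K1 * P[0][0], P[1][1] - K1 * P[0][1]]]
    return x, v, P

# track a target moving at 1 unit/s from noisy "camera" measurements
rng = random.Random(1)
x_est, v_est, P = 0.0, 0.0, [[1.0, 0.0], [0.0, 1.0]]
truth = 0.0
for _ in range(300):
    truth += 1.0 * 0.1
    z = truth + rng.gauss(0, 0.05)
    x_est, v_est, P = kf_step(x_est, v_est, P, z)
```

A real unsynchronized multi-camera EKF would additionally time-stamp each measurement and propagate the state by each camera's individual measurement interval.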
Affiliation(s)
- Sujie Zhang
- Tianjin College, University of Science and Technology Beijing, Tianjin 301830, China;
- Qiang Fu
- School of Intelligence Science and Technology, University of Science and Technology Beijing, Beijing 100083, China
- Institute of Artificial Intelligence, University of Science and Technology Beijing, Beijing 100083, China
- Key Laboratory of Intelligent Bionic Unmanned Systems, Ministry of Education, University of Science and Technology Beijing, Beijing 100083, China
5
Zheng H, Duan F, Li T, Li J, Niu G, Cheng Z, Li X. A Stable, Efficient, and High-Precision Non-Coplanar Calibration Method: Applied for Multi-Camera-Based Stereo Vision Measurements. Sensors (Basel) 2023; 23:8466. PMID: 37896558; PMCID: PMC10610649; DOI: 10.3390/s23208466.
Abstract
Traditional non-coplanar calibration methods, represented by Tsai's method, are difficult to apply in multi-camera-based stereo vision measurements because of insufficient calibration accuracy, inconvenient operation, etc. Based on projective theory and matrix transformation theory, a novel mathematical model is established to characterize the transformation from targets' 3D affine coordinates to cameras' image coordinates. Novel non-coplanar calibration methods for both monocular and binocular camera systems are then proposed. To further improve stability and accuracy, a novel circular feature point extraction method based on a regional Otsu algorithm and a radial section scanning method is proposed to precisely extract the circular feature points. Experiments verify that the novel calibration methods are easy to operate and more accurate than several classical methods, including Tsai's and Zhang's. Intrinsic and extrinsic parameters of multi-camera systems can be calibrated simultaneously. The circular feature point extraction algorithm is stable and precise, and effectively improves calibration accuracy for both coplanar and non-coplanar methods. Real stereo measurement experiments demonstrate that the proposed calibration and feature extraction methods have high accuracy and stability, and can further serve complex shape and deformation measurements, for instance stereo-DIC measurements.
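The Otsu step of the feature extraction can be illustrated with the classic histogram form of Otsu's method, which picks the threshold maximizing between-class variance. The synthetic bimodal histogram below is a made-up stand-in for a real image region:

```python
def otsu_threshold(hist):
    """Otsu's method on a grayscale histogram: return the threshold (last index
    assigned to the dark class) that maximizes between-class variance."""
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w0 = 0.0       # weight of the dark class so far
    sum0 = 0.0     # intensity sum of the dark class so far
    best_t, best_var = 0, -1.0
    for t in range(len(hist)):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0 = sum0 / w0                  # dark-class mean
        m1 = (sum_all - sum0) / w1      # bright-class mean
        var = w0 * w1 * (m0 - m1) ** 2  # between-class variance (scaled)
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# synthetic bimodal histogram: dark peak at 40-60, bright peak at 190-210
hist = [0] * 256
for i in range(40, 61):
    hist[i] = 100
for i in range(190, 211):
    hist[i] = 100
t = otsu_threshold(hist)
```

Applying this per region ("regional Otsu") rather than globally makes the binarization robust to uneven illumination across the target.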
Affiliation(s)
- Hao Zheng
- State Key Laboratory of Precision Measurement Technology and Instruments, Tianjin University, Tianjin 300072, China
- Fajie Duan
- State Key Laboratory of Precision Measurement Technology and Instruments, Tianjin University, Tianjin 300072, China
- Tianyu Li
- State Key Laboratory of Precision Measurement Technology and Instruments, Tianjin University, Tianjin 300072, China
- Jiaxin Li
- State Key Laboratory of Precision Measurement Technology and Instruments, Tianjin University, Tianjin 300072, China
- Guangyue Niu
- State Key Laboratory of Precision Measurement Technology and Instruments, Tianjin University, Tianjin 300072, China
- Zhonghai Cheng
- State Key Laboratory of Precision Measurement Technology and Instruments, Tianjin University, Tianjin 300072, China
- Xin Li
- China North Engine Research Institute, Tianjin 30040, China
6
Kim J, Kim C, Yoon S, Choi T, Sull S. RBF-Based Camera Model Based on a Ray Constraint to Compensate for Refraction Error. Sensors (Basel) 2023; 23:8430. PMID: 37896523; PMCID: PMC10610825; DOI: 10.3390/s23208430.
Abstract
A camera equipped with a transparent shield can be modeled using the pinhole camera model and residual error vectors defined by the difference between the estimated ray from the pinhole camera model and the actual three-dimensional (3D) point. To calculate the residual error vectors, we employ sparse calibration data consisting of 3D points and their corresponding 2D points on the image. However, the observation noise and sparsity of the 3D calibration points pose challenges in determining the residual error vectors. To address this, we first fit Gaussian Process Regression (GPR) operating robustly against data noise to the observed residual error vectors from the sparse calibration data to obtain dense residual error vectors. Subsequently, to improve performance in unobserved areas due to data sparsity, we use an additional constraint; the 3D points on the estimated ray should be projected to one 2D image point, called the ray constraint. Finally, we optimize the radial basis function (RBF)-based regression model to reduce the residual error vector differences with GPR at the predetermined dense set of 3D points while reflecting the ray constraint. The proposed RBF-based camera model reduces the error of the estimated rays by 6% on average and the reprojection error by 26% on average.
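The core idea of densifying sparse residual error vectors can be sketched with simple Gaussian kernel weighting, a simpler stand-in for the paper's fitted GPR and RBF models; the residual field below is fabricated for illustration:

```python
import math

def residual_at(p, pts, residuals, sigma=0.5):
    """Kernel-weighted estimate of the 3D residual vector at point p,
    from sparse calibration points `pts` with observed vector `residuals`."""
    wsum = 0.0
    acc = [0.0, 0.0, 0.0]
    for q, r in zip(pts, residuals):
        d2 = sum((a - b) ** 2 for a, b in zip(p, q))
        w = math.exp(-d2 / (2.0 * sigma * sigma))  # Gaussian kernel weight
        wsum += w
        for k in range(3):
            acc[k] += w * r[k]
    return [a / wsum for a in acc]

# fabricated sparse calibration grid with residual field r(x, y, z) = (0.1*x, 0, 0)
pts = [(float(x), float(y), 0.0) for x in range(3) for y in range(3)]
residuals = [(0.1 * p[0], 0.0, 0.0) for p in pts]
est = residual_at((1.0, 1.0, 0.0), pts, residuals, sigma=0.3)
```

The paper additionally regularizes such interpolation with the ray constraint, so that all 3D points along one estimated ray project to the same 2D pixel.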
Affiliation(s)
- Sanghoon Sull
- School of Electrical Engineering, Korea University, Seoul 02841, Republic of Korea
7
Hao Y, Tai VC, Tan YC. A Systematic Stereo Camera Calibration Strategy: Leveraging Latin Hypercube Sampling and 2k Full-Factorial Design of Experiment Methods. Sensors (Basel) 2023; 23:8240. PMID: 37837069; PMCID: PMC10575035; DOI: 10.3390/s23198240.
Abstract
This research aimed to optimize the camera calibration process by identifying the optimal distance and angle for capturing checkered board images, with a specific focus on understanding the factors that influence the reprojection error (ϵRP). The objective was to improve calibration efficiency by exploring the impacts of distance and orientation factors and the feasibility of independently manipulating these factors. The study employed Zhang's camera calibration method, along with the 2k full-factorial analysis method and the Latin Hypercube Sampling (LHS) method, to identify the optimal calibration parameters. Three calibration methods were devised: calibration with distance factors (D, H, V), orientation factors (R, P, Y), and the combined two influential factors from both sets of factors. The calibration study was carried out with three different stereo cameras. The results indicate that D is the most influential factor, while H and V are nearly equally influential for method A; P and R are the two most influential orientation factors for method B. Compared to Zhang's method alone, on average, methods A, B, and C reduce ϵRP by 25%, 24%, and 34%, respectively. However, method C requires about 10% more calibration images than methods A and B combined. For applications where lower value of ϵRP is required, method C is recommended. This study provides valuable insights into the factors affecting ϵRP in calibration processes. The proposed methods can be used to improve the calibration accuracy for stereo cameras for the applications in object detection and ranging. The findings expand our understanding of camera calibration, particularly the influence of distance and orientation factors, making significant contributions to camera calibration procedures.
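The LHS component can be sketched independently of the calibration factors. The implementation below is the standard construction, stratifying each dimension into n intervals and pairing strata at random; the dimension count and sample size are arbitrary choices for illustration:

```python
import random

def latin_hypercube(n, dims, seed=0):
    """Draw n samples in [0, 1)^dims such that, per dimension, each of the
    n equal-width strata contains exactly one sample."""
    rng = random.Random(seed)
    cols = []
    for _ in range(dims):
        perm = list(range(n))
        rng.shuffle(perm)                               # random stratum order
        cols.append([(p + rng.random()) / n for p in perm])  # jitter within stratum
    return [tuple(c[i] for c in cols) for i in range(n)]

# e.g. 10 design points over 3 normalized factors (say D, H, V rescaled to [0, 1))
samples = latin_hypercube(10, 3)
```

Each sample would then be rescaled to the physical ranges of the distance or orientation factors before capturing the corresponding checkerboard images.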
Affiliation(s)
- Yanan Hao
- Department of Electronic Engineering, Taiyuan Institute of Technology, Taiyuan 030008, China
- Faculty of Engineering, Built Environment and Information Technology, SEGI University, Petaling Jaya 47810, Malaysia
- Vin Cent Tai
- Faculty of Engineering, Built Environment and Information Technology, SEGI University, Petaling Jaya 47810, Malaysia
- Yong Chai Tan
- Faculty of Engineering, Built Environment and Information Technology, SEGI University, Petaling Jaya 47810, Malaysia
8
Hu H, Zhang R, Fong T, Rhodin H, Murphy TH. Standardized 3D test object for multi-camera calibration during animal pose capture. Neurophotonics 2023; 10:046602. PMID: 37942210; PMCID: PMC10629347; DOI: 10.1117/1.nph.10.4.046602.
Abstract
Accurate capture of animal behavior and posture requires multiple cameras to reconstruct three-dimensional (3D) representations. Typically, a paper ChArUco (or checker) board works well for correcting distortion and calibrating for 3D reconstruction in stereo vision. However, measuring the error in two dimensions (2D) is prone to bias related to the placement of the 2D board in 3D. We propose a procedure that visually validates camera placement and can also provide guidance on the positioning of cameras and the potential advantages of using multiple cameras. Specifically, we propose a 3D-printable test object for validating multi-camera surround-view calibration in small-animal video capture arenas. The 3D-printed object has no bias toward a particular dimension and is designed to minimize occlusions. Using the calibrated test object provides an estimate of 3D reconstruction accuracy. The approach reveals that for complex specimens such as mice, some view angles are more important than others for accurate capture of keypoints. Our method ensures accurate 3D camera calibration for surround image capture of laboratory mice and other specimens.
Affiliation(s)
- Hao Hu
- University of British Columbia, Department of Psychiatry, Kinsmen Laboratory of Neurological Research, Vancouver, British Columbia, Canada
- University of British Columbia, Djavad Mowafaghian Centre for Brain Health, Vancouver, British Columbia, Canada
- Roark Zhang
- University of British Columbia, Department of Psychiatry, Kinsmen Laboratory of Neurological Research, Vancouver, British Columbia, Canada
- University of British Columbia, Djavad Mowafaghian Centre for Brain Health, Vancouver, British Columbia, Canada
- Tony Fong
- University of British Columbia, Department of Psychiatry, Kinsmen Laboratory of Neurological Research, Vancouver, British Columbia, Canada
- University of British Columbia, Djavad Mowafaghian Centre for Brain Health, Vancouver, British Columbia, Canada
- Helge Rhodin
- University of British Columbia, Department of Computer Science, Vancouver, British Columbia, Canada
- Timothy H. Murphy
- University of British Columbia, Department of Psychiatry, Kinsmen Laboratory of Neurological Research, Vancouver, British Columbia, Canada
- University of British Columbia, Djavad Mowafaghian Centre for Brain Health, Vancouver, British Columbia, Canada
9
Gutiérrez-Moizant R, Boada MJL, Ramírez-Berasategui M, Al-Kaff A. Novel Bayesian Inference-Based Approach for the Uncertainty Characterization of Zhang's Camera Calibration Method. Sensors (Basel) 2023; 23:7903. PMID: 37765959; PMCID: PMC10535815; DOI: 10.3390/s23187903.
Abstract
Camera calibration is necessary for many machine vision applications. Calibration methods are based on linear or non-linear optimization techniques that aim to find the best estimate of the camera parameters. One of the most commonly used methods in computer vision for calibrating the intrinsic camera parameters and lens distortion (interior orientation) is Zhang's method. The uncertainty of the camera parameters is normally estimated by assuming that their variability can be explained by the images of the different poses of a checkerboard. However, the degree of reliability of both the best parameter values and their associated uncertainties has not yet been verified. Inaccurate estimates of intrinsic and extrinsic parameters during camera calibration may introduce additional biases in post-processing. We therefore propose a novel Bayesian inference-based approach that allows the degree of certainty of Zhang's camera calibration procedure to be evaluated. For this purpose, the a priori probability was assumed to be the one estimated by Zhang, and the intrinsic parameters were recalibrated by Bayesian inversion. The uncertainty of the intrinsic parameters was found to differ from that estimated with Zhang's method. However, the major source of inaccuracy is the procedure for calculating the extrinsic parameters. The novel Bayesian inference-based approach significantly improves the reliability of the predicted image points, as it optimizes the extrinsic parameters.
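A toy version of Bayesian recalibration of a single intrinsic parameter: a random-walk Metropolis sampler drawing from a hypothetical focal-length posterior. The likelihood, the prior centered on a made-up Zhang point estimate, and all numbers are invented for illustration and are not the paper's model:

```python
import math
import random

def metropolis(logpost, x0, n=20000, step=2.0, seed=0):
    """Random-walk Metropolis sampling from a 1D log-posterior."""
    rng = random.Random(seed)
    x, lp = x0, logpost(x0)
    out = []
    for _ in range(n):
        xp = x + rng.gauss(0.0, step)          # propose a nearby value
        lpp = logpost(xp)
        if math.log(rng.random()) < lpp - lp:  # accept with prob min(1, ratio)
            x, lp = xp, lpp
        out.append(x)
    return out

# hypothetical: Gaussian likelihood around f = 800 px (sigma 5), broad Gaussian
# prior around a point estimate of 795 px (sigma 50), mimicking Zhang's output
logpost = lambda f: -0.5 * ((f - 800.0) / 5.0) ** 2 - 0.5 * ((f - 795.0) / 50.0) ** 2
samples = metropolis(logpost, x0=780.0)
post = samples[5000:]                           # discard burn-in
mean = sum(post) / len(post)
```

The posterior spread of `post` is the kind of parameter uncertainty the paper contrasts with the covariance reported by Zhang's least-squares fit.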
Affiliation(s)
- Ramón Gutiérrez-Moizant
- Mechanical Engineering Department, Universidad Carlos III de Madrid, Avda. de la Universidad 30, 28911 Leganés, Spain
- María Jesús L Boada
- Mechanical Engineering Department, Universidad Carlos III de Madrid, Avda. de la Universidad 30, 28911 Leganés, Spain
- María Ramírez-Berasategui
- Mechanical Engineering Department, Universidad Carlos III de Madrid, Avda. de la Universidad 30, 28911 Leganés, Spain
- Abdulla Al-Kaff
- Systems Engineering and Automation, Universidad Carlos III de Madrid, Avda. de la Universidad 30, 28911 Leganés, Spain
10
Lohner SA, Nothelfer S, Kienle A. Generic and Model-Based Calibration Method for Spatial Frequency Domain Imaging with Parameterized Frequency and Intensity Correction. Sensors (Basel) 2023; 23:7888. PMID: 37765944; PMCID: PMC10534425; DOI: 10.3390/s23187888.
Abstract
Spatial frequency domain imaging (SFDI) is well established in biology and medicine for non-contact, wide-field imaging of optical properties and 3D topography. Especially for turbid media with displaced, tilted, or irregularly shaped surfaces, reliable quantitative measurement of diffuse reflectance requires efficient calibration and correction methods. In this work, we present a generic and hardware-independent calibration routine for SFDI setups based on the pinhole camera model for both projection and detection. Using a two-step geometric and intensity calibration, we obtain an imaging model that efficiently and accurately determines 3D topography and diffuse reflectance for subsequently measured samples, taking into account their relative distance and orientation to the camera and projector, as well as the distortions of the optical system. Derived correction procedures for position- and orientation-dependent changes in spatial frequency and intensity allow the determination of the effective scattering coefficient μs' and the absorption coefficient μa when measuring a spherical optical phantom at three different measurement positions and at nine wavelengths, with average errors of 5% and 12%, respectively. Model-based calibration allows the imaging properties of the entire SFDI system to be characterized without prior knowledge, enabling the future development of a digital twin for synthetic data generation or more robust evaluation methods.
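The pinhole model underlying this calibration maps a world point through the extrinsics [R|t] and the intrinsic matrix K to a pixel. A minimal sketch with hypothetical parameter values (identity rotation, camera 2 units from the plane, made-up focal lengths and principal point):

```python
def project(K, R, t, Xw):
    """Pinhole projection: pixel coordinates of world point Xw via u ~ K [R|t] Xw."""
    # transform into camera coordinates
    Xc = [sum(R[i][j] * Xw[j] for j in range(3)) + t[i] for i in range(3)]
    # perspective division to normalized image coordinates
    x, y = Xc[0] / Xc[2], Xc[1] / Xc[2]
    # apply intrinsics (zero skew assumed)
    u = K[0][0] * x + K[0][2]
    v = K[1][1] * y + K[1][2]
    return u, v

K = [[800.0, 0.0, 320.0],   # fx, skew, cx  (hypothetical values)
     [0.0, 800.0, 240.0],   # fy, cy
     [0.0, 0.0, 1.0]]
R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
t = [0.0, 0.0, 2.0]
u, v = project(K, R, t, [0.1, -0.05, 0.0])
```

In the SFDI setting, the same model is fitted twice, once for the camera and once for the projector, so that fringe frequency and intensity can be corrected per surface point.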
Affiliation(s)
- Stefan A Lohner
- Institut für Lasertechnologien in der Medizin und Meßtechnik an der Universität Ulm, Helmholtzstr. 12, D-89081 Ulm, Germany
- Steffen Nothelfer
- Institut für Lasertechnologien in der Medizin und Meßtechnik an der Universität Ulm, Helmholtzstr. 12, D-89081 Ulm, Germany
- Alwin Kienle
- Institut für Lasertechnologien in der Medizin und Meßtechnik an der Universität Ulm, Helmholtzstr. 12, D-89081 Ulm, Germany
11
Eppenga R, Snaauw G, Kuhlmann K, van der Heijden F, Ruers T, Nijkamp J. An improved camera model for oblique-viewing laparoscopes: high reprojection accuracy independent of telescope rotation. Phys Med Biol 2023; 68:185007. PMID: 37582390; DOI: 10.1088/1361-6560/acf08f.
Abstract
Objective. Oblique-viewing laparoscopes are popular in laparoscopic surgeries where the target anatomy is located in narrow areas. Their viewing direction can be shifted by telescope rotation without changing the laparoscope pose. This rotation also changes the laparoscope camera parameters that are estimated by camera calibration and needed to reproject an anatomical model onto the laparoscopic view, creating augmented reality (AR). The aim of this study was to develop a camera model that accounts for these changes, achieving high reprojection accuracy for any telescope rotation.
Approach. Camera parameters were acquired by calibrations encompassing a wide telescope rotation range. For those parameters showing periodic changes upon rotation, interpolation models were created and used to establish an updatable camera model. With this model, corner points of a tracked checkerboard were reprojected onto the checkerboard laparoscopic images at random rotation angles. Root-mean-square reprojection errors (RMSEs) were calculated between the reprojected and imaged corner points.
Main results. Reprojection RMSEs were low and approximately independent of the telescope rotation angle over a wide rotation range of 320°. The mean reprojection RMSE was 2.8±0.7 pixels for a conventional laparoscope and 3.6±0.7 pixels for a chip-on-the-tip (COTT) laparoscope, corresponding to 0.3±0.1 mm and 0.4±0.1 mm in world coordinates, respectively. Worst-case reprojection errors were about 9 pixels (0.8 mm) for both laparoscopes.
Significance. The camera model developed in this study improves on existing models for oblique-viewing laparoscopes because it provides high reprojection accuracy independent of the telescope rotation angle and is applicable to both conventional and chip-on-the-tip oblique-viewing laparoscopes. The work presented here is an important step towards creating accurate AR in image-guided interventions where oblique-viewing laparoscopes are used, while giving the surgeon the flexibility to rotate the telescope to any desired angle.
Acronyms. CC: camera coordinates; CCToolbox: camera calibration toolbox; COTT: chip-on-the-tip; CS: camera sensor; DD: decentering distortion; FL: focal length; OTS: optical tracking system; PP: principal point; RD: radial distortion; SI: supplementary information; tHE: hand-eye translation component.
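The reported reprojection RMSE is the root-mean-square 2D distance between reprojected and imaged corner points; a minimal sketch with made-up point sets:

```python
import math

def reprojection_rmse(projected, imaged):
    """RMSE (in pixels) between reprojected and detected 2D corner points."""
    se = sum((u - ui) ** 2 + (v - vi) ** 2
             for (u, v), (ui, vi) in zip(projected, imaged))
    return math.sqrt(se / len(projected))

# fabricated example: two corners, offset by (0, 3) and (-4, 0) pixels
pred = [(10.0, 10.0), (20.0, 10.0)]
obs = [(10.0, 13.0), (16.0, 10.0)]
err = reprojection_rmse(pred, obs)
```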
Affiliation(s)
- Roeland Eppenga
- Department of Surgical Oncology, The Netherlands Cancer Institute, Amsterdam, The Netherlands
- Gerard Snaauw
- Department of Surgical Oncology, The Netherlands Cancer Institute, Amsterdam, The Netherlands
- Koert Kuhlmann
- Department of Surgical Oncology, The Netherlands Cancer Institute, Amsterdam, The Netherlands
- Theo Ruers
- Department of Surgical Oncology, The Netherlands Cancer Institute, Amsterdam, The Netherlands
- Nanobiophysics Group, Faculty TNW, University of Twente, Enschede, The Netherlands
- Jasper Nijkamp
- Department of Surgical Oncology, The Netherlands Cancer Institute, Amsterdam, The Netherlands
12
Choi T, Yoon S, Kim J, Sull S. Noniterative Generalized Camera Model for Near-Central Camera System. Sensors (Basel) 2023; 23:5294. PMID: 37300020; DOI: 10.3390/s23115294.
Abstract
This paper proposes a near-central camera model and a solution approach for it. 'Near-central' refers to cases in which the rays do not converge to a single point, yet do not have the severely arbitrary directions of fully non-central cases. Conventional calibration methods are difficult to apply in such cases. Although the generalized camera model can be applied, dense observation points are required for accurate calibration, and the approach is computationally expensive in the iterative projection framework. We developed a noniterative ray correction method based on sparse observation points to address this problem. First, we established a smoothed three-dimensional (3D) residual framework using a backbone to avoid the iterative framework. Second, we interpolated the residual by applying local inverse distance weighting on the nearest neighbors of a given point. The 3D smoothed residual vectors prevent excessive computation and the loss of accuracy that can occur in inverse projection; moreover, 3D vectors can represent ray directions more accurately than 2D entities. Synthetic experiments show that the proposed method achieves prompt and accurate calibration: the depth error is reduced by approximately 63% on the bumpy shield dataset, and the proposed approach is roughly two orders of magnitude faster than the iterative methods.
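The local inverse-distance-weighting step can be sketched in its standard scalar form. The neighbor count k, the power, and the sample field below are illustrative assumptions; the paper applies the idea to 3D residual vectors rather than scalars:

```python
def idw(p, pts, vals, k=4, power=2.0, eps=1e-12):
    """Inverse-distance-weighted interpolation from the k nearest sample points."""
    dists = []
    for q, v in zip(pts, vals):
        d2 = sum((a - b) ** 2 for a, b in zip(p, q))
        dists.append((d2, v))
    dists.sort(key=lambda item: item[0])       # nearest neighbors first
    num = den = 0.0
    for d2, v in dists[:k]:
        w = 1.0 / (d2 ** (power / 2.0) + eps)  # weight ~ 1 / distance^power
        num += w * v
        den += w
    return num / den

# fabricated linear field v = x sampled at the unit-square corners
pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
vals = [0.0, 1.0, 0.0, 1.0]
mid = idw((0.5, 0.5), pts, vals)       # equidistant neighbors -> plain average
corner = idw((1.0, 0.0), pts, vals)    # at a sample point -> its own value
```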
Affiliation(s)
- Taehyeon Choi
- School of Electrical Engineering, Korea University, 145 Anam-ro, Seongbuk-gu, Seoul 02841, Republic of Korea
- Seongwook Yoon
- School of Electrical Engineering, Korea University, 145 Anam-ro, Seongbuk-gu, Seoul 02841, Republic of Korea
- Jaehyun Kim
- School of Electrical Engineering, Korea University, 145 Anam-ro, Seongbuk-gu, Seoul 02841, Republic of Korea
- Sanghoon Sull
- School of Electrical Engineering, Korea University, 145 Anam-ro, Seongbuk-gu, Seoul 02841, Republic of Korea
13
Li S, Yoon HS. Vehicle Localization in 3D World Coordinates Using Single Camera at Traffic Intersection. Sensors (Basel) 2023; 23:3661. PMID: 37050721; PMCID: PMC10098535; DOI: 10.3390/s23073661.
Abstract
Optimizing traffic control systems at traffic intersections can reduce the network-wide fuel consumption, as well as emissions of conventional fuel-powered vehicles. While traffic signals have been controlled based on predetermined schedules, various adaptive signal control systems have recently been developed using advanced sensors such as cameras, radars, and LiDARs. Among these sensors, cameras can provide a cost-effective way to determine the number, location, type, and speed of the vehicles for better-informed decision-making at traffic intersections. In this research, a new approach for accurately determining vehicle locations near traffic intersections using a single camera is presented. For that purpose, a well-known object detection algorithm called YOLO is used to determine vehicle locations in video images captured by a traffic camera. YOLO draws a bounding box around each detected vehicle, and the vehicle location in the image coordinates is converted to the world coordinates using camera calibration data. During this process, a significant error between the center of a vehicle's bounding box and the real center of the vehicle in the world coordinates is generated due to the angled view of the vehicles by a camera installed on a traffic light pole. As a means of mitigating this vehicle localization error, two different types of regression models are trained and applied to the centers of the bounding boxes of the camera-detected vehicles. The accuracy of the proposed approach is validated using both static camera images and live-streamed traffic video. Based on the improved vehicle localization, it is expected that more accurate traffic signal control can be made to improve the overall network-wide energy efficiency and traffic flow at traffic intersections.
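Converting a detected bounding-box point from image coordinates to world coordinates on the road plane is commonly done with a plane-to-plane homography obtained from camera calibration; the matrix below is a fabricated example (pure scale and shift), not the paper's calibration data:

```python
def apply_homography(H, u, v):
    """Map pixel (u, v) to ground-plane coordinates via a 3x3 homography H."""
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return x / w, y / w   # divide out the projective scale

# hypothetical calibration result: 0.02 m per pixel plus an offset
H = [[0.02, 0.0, -5.0],
     [0.0, 0.02, -3.0],
     [0.0, 0.0, 1.0]]
X, Y = apply_homography(H, 400, 300)
```

The regression correction described in the abstract would then be applied on top of such a mapping, shifting the bounding-box center toward the vehicle's true ground-plane center.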
14
Lu R, Wang Z, Zou Z. Accurate Calibration of a Large Field of View Camera with Coplanar Constraint for Large-Scale Specular Three-Dimensional Profile Measurement. Sensors (Basel) 2023; 23:3464. [PMID: 37050524 PMCID: PMC10099204 DOI: 10.3390/s23073464] [Received: 02/11/2023] [Revised: 03/22/2023] [Accepted: 03/22/2023] [Indexed: 06/19/2023]
Abstract
In the vision-based inspection of specular or shiny surfaces, the camera pose with respect to a reference plane is often computed by analyzing images of calibration grids reflected in the surface. To achieve high calibration precision, the calibration target should be large enough to cover the whole field of view (FOV). For a camera with a large FOV, a small target yields only a locally optimal solution, yet a large target is difficult to manufacture, transport, and handle. To solve this problem, an improved calibration method based on a coplanar constraint is proposed for cameras with a large FOV. First, with the aid of an auxiliary plane mirror, the position of the calibration grid and the tilt angle of the mirror are varied several times to capture a series of mirrored calibration images. Second, the initial camera parameters are calculated from each group of mirrored calibration images. Finally, with the coplanar constraint between the calibration grid poses added, the external parameters between the camera and the reference plane are optimized via the Levenberg-Marquardt (LM) algorithm. The experimental results show that the proposed camera calibration method has good robustness and accuracy.
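The final refinement step in this abstract is a Levenberg-Marquardt optimization. As a sketch of how such damped least-squares refinement works (a toy exponential-fit residual, not the paper's extrinsic-parameter cost function):

```python
import numpy as np

def levenberg_marquardt(residual, p0, n_iter=50, lam=1e-3):
    """Minimal Levenberg-Marquardt loop with a forward-difference
    Jacobian: a damped Gauss-Newton step, with the damping relaxed
    on accepted steps and increased on rejected ones."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r = residual(p)
        J = np.empty((r.size, p.size))
        for j in range(p.size):
            dp = np.zeros_like(p)
            dp[j] = 1e-7
            J[:, j] = (residual(p + dp) - r) / 1e-7
        A = J.T @ J + lam * np.eye(p.size)
        step = np.linalg.solve(A, -J.T @ r)
        if np.sum(residual(p + step) ** 2) < np.sum(r ** 2):
            p, lam = p + step, lam * 0.5   # accept step, relax damping
        else:
            lam *= 10.0                    # reject step, increase damping
    return p

# Toy example: recover (a, b) of y = a * exp(-b * x) from noiseless data.
x = np.linspace(0, 4, 20)
y = 2.0 * np.exp(-0.7 * x)
res = lambda p: p[0] * np.exp(-p[1] * x) - y
a, b = levenberg_marquardt(res, [1.0, 0.1])
print(a, b)
```

In practice one would use a library implementation (e.g. `scipy.optimize.least_squares`) rather than hand-rolling the loop.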
Affiliation(s)
- Rongsheng Lu
- School of Instrument Science and Opto-Electronics Engineering, Hefei University of Technology, Hefei 230009, China
- Anhui Province Key Laboratory of Measuring Theory and Precision Instrument, Hefei University of Technology, Hefei 230009, China
- Zhizhuo Wang
- School of Instrument Science and Opto-Electronics Engineering, Hefei University of Technology, Hefei 230009, China
- Zhiting Zou
- School of Instrument Science and Opto-Electronics Engineering, Hefei University of Technology, Hefei 230009, China
15
Patonis P. Methodology and Tool Development for Mobile Device Cameras Calibration and Evaluation of the Results. Sensors (Basel) 2023; 23:1538. [PMID: 36772578 PMCID: PMC9921959 DOI: 10.3390/s23031538] [Received: 01/11/2023] [Revised: 01/24/2023] [Accepted: 01/26/2023] [Indexed: 06/18/2023]
Abstract
In this paper, a procedure for calibrating the image sensors of mobile devices and evaluating the results was developed and implemented in a software application. Two calibration methods were used, an OpenCV function and a photogrammetric method, both based on the same camera model. To evaluate the calibration results, a method is proposed that uses single-image rectification to examine the performance of the calibration parameters in a practical, visually verifiable way. Based on an experimental study, a standard is proposed for the number and shooting angles of the photographs that should be used in calibration. During development, problems related to processing large images and automating the workflow were solved. Finally, the procedure and software application were tested in a case study.
Affiliation(s)
- Photis Patonis
- School of Rural & Surveying Engineering, Aristotle University of Thessaloniki, Univ. Box 439, GR-54 124 Thessaloniki, Greece
16
Jasińska A, Pyka K, Pastucha E, Midtiby HS. A Simple Way to Reduce 3D Model Deformation in Smartphone Photogrammetry. Sensors (Basel) 2023; 23:728. [PMID: 36679525 PMCID: PMC9860635 DOI: 10.3390/s23020728] [Received: 12/14/2022] [Revised: 01/03/2023] [Accepted: 01/04/2023] [Indexed: 06/17/2023]
Abstract
Recently, the term smartphone photogrammetry has gained popularity, suggesting that photogrammetry may become a simple measurement tool available to virtually every smartphone user. This research was undertaken to clarify whether it is appropriate to use the Structure from Motion-Multi-View Stereo (SfM-MVS) procedure with self-calibration, as is done in Uncrewed Aerial Vehicle photogrammetry. First, the geometric stability of smartphone cameras was tested: fourteen smartphones were calibrated on a checkerboard test field, and the process was repeated multiple times. Two observations emerged: (1) most smartphone cameras have lower stability of the interior orientation parameters than a Digital Single-Lens Reflex (DSLR) camera, and (2) the principal distance and the position of the principal point change constantly. Then, based on images from two selected smartphones, 3D models of a small sculpture were developed with the SfM-MVS method, in both self-calibration and pre-calibration variants. Comparing the resulting models with a reference DSLR-based model showed that using calibration obtained on the test field instead of self-calibration improves the geometry of the 3D models; in particular, deformations of local concavities and convexities decreased. In conclusion, smartphone photogrammetry has real potential, but it also has its limits.
Affiliation(s)
- Aleksandra Jasińska
- Faculty of Geo-Data Science, Geodesy, and Environmental Engineering, AGH University of Science and Technology, al. Mickiewicza 30, 30-059 Cracow, Poland
- Krystian Pyka
- Faculty of Geo-Data Science, Geodesy, and Environmental Engineering, AGH University of Science and Technology, al. Mickiewicza 30, 30-059 Cracow, Poland
- Elżbieta Pastucha
- UAS Center, The Maersk Mc-Kinney Moller Institute, University of Southern Denmark, Campusvej 55, 5230 Odense, Denmark
- Henrik Skov Midtiby
- UAS Center, The Maersk Mc-Kinney Moller Institute, University of Southern Denmark, Campusvej 55, 5230 Odense, Denmark
17
Lin KY, Tseng YH, Chiang KW. Interpretation and Transformation of Intrinsic Camera Parameters Used in Photogrammetry and Computer Vision. Sensors (Basel) 2022; 22:9602. [PMID: 36559969 PMCID: PMC9787778 DOI: 10.3390/s22249602] [Received: 11/09/2022] [Revised: 12/04/2022] [Accepted: 12/05/2022] [Indexed: 06/17/2023]
Abstract
The precision modelling of intrinsic camera geometry is a common issue in the fields of photogrammetry (PH) and computer vision (CV). However, the two fields model intrinsic camera geometry differently, which has led researchers to adopt different definitions of intrinsic camera parameters (ICPs), including focal length, principal point, radial distortion, decentring distortion, affinity and shear. These ICPs are indispensable for vision-based measurements, and the differences can confuse researchers from one field when using ICPs obtained from a camera calibration software package developed in the other. This paper clarifies the ICP definitions used in each field and proposes an ICP transformation algorithm. The originality of this study lies in its use of least-squares adjustment, applying image points expressed with the ICPs defined in the PH and CV image frames to convert a complete set of ICPs. This ICP transformation method is more rigorous than the simplified formulas used in conventional methods, and selecting suitable image points can increase the accuracy of the resulting adjustment model. In addition, the proposed method enables users to mix software from the PH and CV fields. To validate the transformation algorithm, two cameras with different view angles were calibrated using typical camera calibration software packages from each field to obtain ICPs. Experimental results demonstrate that the proposed algorithm can convert ICPs derived from different software packages; both the PH-to-CV and CV-to-PH transformations were executed using complete mathematical camera models. We also compared the rectified images and distortion plots generated using different ICPs. Furthermore, comparison with a state-of-the-art method confirms the improved performance of ICP conversions between the PH and CV models.
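The simplified closed-form conversion that this paper improves upon can be illustrated as follows: photogrammetric intrinsics in millimetres (origin at the image centre, y-axis up) become CV pixel intrinsics (origin top-left, y-axis down). The numbers are hypothetical, and a real conversion would also carry over the distortion terms:

```python
import numpy as np

def ph_to_cv_intrinsics(f_mm, xp_mm, yp_mm, pix_mm, width, height):
    """Convert PH intrinsics (focal length and principal point in mm,
    centre origin, y up) to a CV pixel-unit camera matrix K
    (top-left origin, y down). Simplified sketch only: the paper's
    least-squares adjustment replaces exactly this kind of formula."""
    fx = fy = f_mm / pix_mm                # focal length in pixels
    cx = width / 2.0 + xp_mm / pix_mm      # shift from image centre
    cy = height / 2.0 - yp_mm / pix_mm     # y-axis flip
    return np.array([[fx, 0.0, cx],
                     [0.0, fy, cy],
                     [0.0, 0.0, 1.0]])

K = ph_to_cv_intrinsics(f_mm=16.0, xp_mm=0.02, yp_mm=-0.01,
                        pix_mm=0.004, width=4000, height=3000)
print(K[0, 0], K[0, 2], K[1, 2])
```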
18
Bräuer-Burchardt C, Ramm R, Kühmstedt P, Notni G. The Duality of Ray-Based and Pinhole-Camera Modeling and 3D Measurement Improvements Using the Ray-Based Model. Sensors (Basel) 2022; 22:7540. [PMID: 36236639 PMCID: PMC9573748 DOI: 10.3390/s22197540] [Received: 08/12/2022] [Revised: 09/27/2022] [Accepted: 10/01/2022] [Indexed: 06/16/2023]
Abstract
Geometrical camera modeling is the precondition for 3D-reconstruction tasks using photogrammetric sensor systems. The purpose of this study is to describe an approach to possible accuracy improvements using the ray-based camera model. The relations between the common pinhole model and the generally valid ray-based camera model are shown, and a new approach to implementing and calibrating the ray-based model is introduced. Using a simple laboratory setup consisting of two cameras and a projector, experimental measurements were performed. The experiments showed that the common pinhole model can easily be transformed into a ray-based model and that calibration can be performed with the ray-based model. These initial results indicate the model's potential for considerable accuracy improvements, especially for sensor systems using wide-angle lenses or with deep 3D measurement volumes. Several approaches for further improvement and for the practical use of high-precision optical 3D measurements are also presented.
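For reference, the common pinhole model that the ray-based model generalizes projects a world point X as x ~ K[R|t]X. A minimal sketch with made-up intrinsics (illustrative only, not the paper's setup):

```python
import numpy as np

def project_pinhole(K, R, t, X):
    """Project world points X (N x 3) with the pinhole model:
    transform to the camera frame, apply K, then divide by depth."""
    Xc = (R @ X.T).T + t            # world -> camera frame
    x = (K @ Xc.T).T                # apply intrinsics
    return x[:, :2] / x[:, 2:3]     # perspective division

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 5.0])   # camera 5 units in front
X = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
print(project_pinhole(K, R, t, X))  # [[320. 240.] [480. 240.]]
```

A ray-based model would instead store one calibrated viewing ray per pixel rather than the shared projection centre assumed here.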
Affiliation(s)
- Christian Bräuer-Burchardt
- Department Imaging and Sensing, Fraunhofer Institute for Applied Optics and Precision Engineering IOF, Albert-Einstein-Str. 7, D-07745 Jena, Germany
- Roland Ramm
- Department Imaging and Sensing, Fraunhofer Institute for Applied Optics and Precision Engineering IOF, Albert-Einstein-Str. 7, D-07745 Jena, Germany
- Peter Kühmstedt
- Department Imaging and Sensing, Fraunhofer Institute for Applied Optics and Precision Engineering IOF, Albert-Einstein-Str. 7, D-07745 Jena, Germany
- Gunther Notni
- Department Imaging and Sensing, Fraunhofer Institute for Applied Optics and Precision Engineering IOF, Albert-Einstein-Str. 7, D-07745 Jena, Germany
- Machine Engineering Faculty, Technical University Ilmenau, Ehrenbergstraße 29, D-98693 Ilmenau, Germany
19
Pak A, Reichel S, Burke J. Machine-Learning-Inspired Workflow for Camera Calibration. Sensors (Basel) 2022; 22:6804. [PMID: 36146154 PMCID: PMC9501149 DOI: 10.3390/s22186804] [Received: 08/02/2022] [Revised: 08/31/2022] [Accepted: 08/31/2022] [Indexed: 06/16/2023]
Abstract
The performance of modern digital cameras approaches physical limits and enables high-precision measurements in optical metrology and in computer vision. All camera-assisted geometrical measurements are fundamentally limited by the quality of camera calibration. Unfortunately, this procedure is often effectively considered a nuisance: calibration data are collected in a non-systematic way and lack quality specifications; imaging models are selected in an ad hoc fashion without proper justification; and calibration results are evaluated, interpreted, and reported inconsistently. We outline an (arguably more) systematic and metrologically sound approach to calibrating cameras and characterizing the calibration outcomes that is inspired by typical machine learning workflows and practical requirements of camera-based measurements. Combining standard calibration tools and the technique of active targets with phase-shifted cosine patterns, we demonstrate that the imaging geometry of a typical industrial camera can be characterized with sub-mm uncertainty up to distances of a few meters even with simple parametric models, while the quality of data and resulting parameters can be known and controlled at all stages.
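The active-target technique mentioned above encodes position in phase-shifted cosine patterns. A sketch of the standard four-step decoding (the paper's full pipeline adds phase unwrapping and model fitting on top of this):

```python
import numpy as np

def decode_four_step(I0, I1, I2, I3):
    """Recover the wrapped phase from four cosine patterns shifted by
    pi/2 each: I_k = A + B*cos(phi + k*pi/2). The differences cancel
    the offset A and amplitude B, leaving atan2(2B sin, 2B cos)."""
    return np.arctan2(I3 - I1, I0 - I2)

# Synthetic check on a known phase field within (-pi, pi).
phi = np.linspace(-3.0, 3.0, 7)
A, B = 128.0, 100.0
frames = [A + B * np.cos(phi + k * np.pi / 2) for k in range(4)]
rec = decode_four_step(*frames)
print(np.allclose(rec, phi))  # True
```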
Affiliation(s)
- Alexey Pak
- Fraunhofer Institute of Optronics, System Technologies, and Image Exploitation IOSB, Fraunhoferstraße 1, 76131 Karlsruhe, Germany
- Steffen Reichel
- Hochschule Pforzheim, Tiefenbronner Straße 65, 75175 Pforzheim, Germany
- Jan Burke
- Fraunhofer Institute of Optronics, System Technologies, and Image Exploitation IOSB, Fraunhoferstraße 1, 76131 Karlsruhe, Germany
20
Jin Z, Li Z, Gan T, Fu Z, Zhang C, He Z, Zhang H, Wang P, Liu J, Ye X. A Novel Central Camera Calibration Method Recording Point-to-Point Distortion for Vision-Based Human Activity Recognition. Sensors (Basel) 2022; 22:3524. [PMID: 35591215 PMCID: PMC9105339 DOI: 10.3390/s22093524] [Received: 03/23/2022] [Revised: 04/28/2022] [Accepted: 04/29/2022] [Indexed: 06/02/2023]
Abstract
The camera is the main sensor in vision-based human activity recognition, and high-precision calibration of its distortion is an important prerequisite for the task. Current studies have shown that multi-parameter model methods achieve higher accuracy than traditional methods in camera calibration. However, these methods need hundreds or even thousands of images to optimize the camera model, which limits their practical use. Here, we propose a novel point-to-point camera distortion calibration method that requires only dozens of images to obtain a dense distortion rectification map. We designed an objective function based on the deformation between the original images and the projection of reference images, which eliminates the effect of distortion when optimizing camera parameters. Dense correspondences between the original images and the projection of the reference images are calculated by digital image correlation (DIC). Experiments indicate that our method obtains results comparable to a multi-parameter model method using a large number of pictures, and reduces the reprojection error by 28.5% compared with the polynomial distortion model.
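A dense rectification map of the kind this method produces is consumed by resampling each output pixel from a mapped input position. A sketch of that remapping step with bilinear interpolation (illustrative, not the authors' DIC-based construction of the map):

```python
import numpy as np

def remap_bilinear(img, map_x, map_y):
    """Output pixel (r, c) samples the input image at the continuous
    position (map_y[r, c], map_x[r, c]) via bilinear interpolation."""
    x0 = np.clip(np.floor(map_x).astype(int), 0, img.shape[1] - 2)
    y0 = np.clip(np.floor(map_y).astype(int), 0, img.shape[0] - 2)
    fx, fy = map_x - x0, map_y - y0
    return ((1 - fy) * (1 - fx) * img[y0, x0] +
            (1 - fy) * fx * img[y0, x0 + 1] +
            fy * (1 - fx) * img[y0 + 1, x0] +
            fy * fx * img[y0 + 1, x0 + 1])

img = np.arange(16, dtype=float).reshape(4, 4)
# Identity map shifted half a pixel to the right.
ys, xs = np.mgrid[0:4, 0:4].astype(float)
out = remap_bilinear(img, xs + 0.5, ys)
print(out[1, 1])  # 5.5 (midway between img[1,1]=5 and img[1,2]=6)
```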
Affiliation(s)
- Ziyi Jin
- Biosensor National Special Laboratory, Key Laboratory of Biomedical Engineering of Ministry of Education, Zhejiang University, Hangzhou 310027, China; (Z.J.); (T.G.); (Z.F.); (C.Z.); (Z.H.); (H.Z.); (P.W.)
- College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou 310027, China
- Zhixue Li
- Independent Researcher, 181 Gaojiao Road, Yuhang District, Hangzhou 311122, China
- Tianyuan Gan
- Biosensor National Special Laboratory, Key Laboratory of Biomedical Engineering of Ministry of Education, Zhejiang University, Hangzhou 310027, China; (Z.J.); (T.G.); (Z.F.); (C.Z.); (Z.H.); (H.Z.); (P.W.)
- College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou 310027, China
- Zuoming Fu
- Biosensor National Special Laboratory, Key Laboratory of Biomedical Engineering of Ministry of Education, Zhejiang University, Hangzhou 310027, China; (Z.J.); (T.G.); (Z.F.); (C.Z.); (Z.H.); (H.Z.); (P.W.)
- College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou 310027, China
- Chongan Zhang
- Biosensor National Special Laboratory, Key Laboratory of Biomedical Engineering of Ministry of Education, Zhejiang University, Hangzhou 310027, China; (Z.J.); (T.G.); (Z.F.); (C.Z.); (Z.H.); (H.Z.); (P.W.)
- College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou 310027, China
- Zhongyu He
- Biosensor National Special Laboratory, Key Laboratory of Biomedical Engineering of Ministry of Education, Zhejiang University, Hangzhou 310027, China; (Z.J.); (T.G.); (Z.F.); (C.Z.); (Z.H.); (H.Z.); (P.W.)
- College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou 310027, China
- Hong Zhang
- Biosensor National Special Laboratory, Key Laboratory of Biomedical Engineering of Ministry of Education, Zhejiang University, Hangzhou 310027, China; (Z.J.); (T.G.); (Z.F.); (C.Z.); (Z.H.); (H.Z.); (P.W.)
- College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou 310027, China
- Peng Wang
- Biosensor National Special Laboratory, Key Laboratory of Biomedical Engineering of Ministry of Education, Zhejiang University, Hangzhou 310027, China; (Z.J.); (T.G.); (Z.F.); (C.Z.); (Z.H.); (H.Z.); (P.W.)
- College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou 310027, China
- Jiquan Liu
- College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou 310027, China
- Xuesong Ye
- Biosensor National Special Laboratory, Key Laboratory of Biomedical Engineering of Ministry of Education, Zhejiang University, Hangzhou 310027, China; (Z.J.); (T.G.); (Z.F.); (C.Z.); (Z.H.); (H.Z.); (P.W.)
- College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou 310027, China
21
Dan X, Gong Q, Zhang M, Li T, Li G, Wang Y. Chessboard Corner Detection Based on EDLines Algorithm. Sensors (Basel) 2022; 22:3398. [PMID: 35591087 PMCID: PMC9106018 DOI: 10.3390/s22093398] [Received: 04/13/2022] [Revised: 04/24/2022] [Accepted: 04/26/2022] [Indexed: 02/04/2023]
Abstract
To improve the robustness and accuracy of corner detection, this paper proposes a camera-calibration method based on the EDLines algorithm for the automatic detection of chessboard corners. The EDLines algorithm first performs straight-line detection on the calibration image. The characteristic breaks of the lines at the corners are then used to filter the detected lines and remove background lines outside the chessboard. The pixels in a rectangular area around each filtered line are sorted by gray gradient, and the sorted results are used to refit the line. The intersections of the fitted lines are taken as the initial corner coordinates and refined to sub-pixel accuracy. Finally, the corner points are ordered through conversion between pixel-coordinate systems. Experiments with varying camera exposure times and complex imaging backgrounds show that the algorithm produces neither missed nor redundant corner detections. The average reprojection error is less than 0.05 pixels, so the method can be used in practical calibration.
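The corner-seeding step — fitting the filtered chessboard lines and intersecting them — can be sketched as follows (noiseless synthetic pixel tracks here, not actual EDLines output):

```python
import numpy as np

def fit_line(pts):
    """Total-least-squares line fit: returns (centroid, unit direction),
    the principal direction of the centred points via SVD."""
    c = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - c)
    return c, vt[0]

def intersect(c1, d1, c2, d2):
    """Solve c1 + s*d1 = c2 + t*d2 for the intersection point."""
    A = np.column_stack([d1, -d2])
    s, _ = np.linalg.solve(A, c2 - c1)
    return c1 + s * d1

# Two pixel tracks crossing at (10, 10), standing in for a filtered
# horizontal and vertical chessboard edge.
horiz = np.array([[x, 10.0] for x in range(5, 16)])
vert = np.array([[10.0, y] for y in range(5, 16)])
corner = intersect(*fit_line(horiz), *fit_line(vert))
print(corner)  # [10. 10.]
```

The intersection then serves as the initial coordinate for sub-pixel refinement.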
Affiliation(s)
- Xizuo Dan
- School of Electrical Engineering and Automation, Anhui University, Hefei 230601, China; (X.D.); (Q.G.); (M.Z.); (T.L.)
- Qicheng Gong
- School of Electrical Engineering and Automation, Anhui University, Hefei 230601, China; (X.D.); (Q.G.); (M.Z.); (T.L.)
- Mei Zhang
- School of Electrical Engineering and Automation, Anhui University, Hefei 230601, China; (X.D.); (Q.G.); (M.Z.); (T.L.)
- Tao Li
- School of Electrical Engineering and Automation, Anhui University, Hefei 230601, China; (X.D.); (Q.G.); (M.Z.); (T.L.)
- Guihua Li
- School of Electrical Engineering and Automation, Anhui University, Hefei 230601, China; (X.D.); (Q.G.); (M.Z.); (T.L.)
- Correspondence: Tel.: +86-13956932686
- Yonghong Wang
- School of Instrument Science and Opto-Electronics Engineering, Hefei University of Technology, Hefei 230009, China
22
McFadden D, Amos B, Heintzmann R. Quality control of image sensors using gaseous tritium light sources. Philos Trans A Math Phys Eng Sci 2022; 380:20210130. [PMID: 35152762 PMCID: PMC7613193 DOI: 10.1098/rsta.2021.0130] [Received: 06/16/2021] [Accepted: 11/14/2021] [Indexed: 05/19/2023]
Abstract
We propose a practical method for radiometrically calibrating cameras using widely available gaseous tritium light sources (betalights). Along with the gain (conversion factor) and read-noise level, the predictable photon flux of the source allows us to gauge the quantum efficiency. The design is easily reproducible with a 3D printer and three inexpensive parts. Suitable for common image sensors, we believe the method has the potential to be a useful tool in microscopy facilities and optical laboratories alike. This article is part of the theme issue 'Super-resolution structured illumination microscopy (part 2)'.
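The gain and read-noise estimation here rests on the photon-transfer relation: the temporal variance of a pixel grows linearly with its mean signal, with slope equal to the gain. A sketch on simulated sensor data (the paper instead exploits the betalight's known flux; flux levels and noise values below are made up):

```python
import numpy as np

def estimate_gain(means, variances):
    """Photon-transfer estimate: fit variance vs. mean linearly;
    the slope is the gain, the intercept the read-noise variance."""
    g, read_var = np.polyfit(means, variances, 1)
    return g, read_var

rng = np.random.default_rng(0)
gain, read_sigma = 2.0, 3.0
means, variances = [], []
for flux in (100, 400, 1600, 6400):
    # Simulated sensor: Poisson shot noise scaled by gain, plus read noise.
    frames = gain * rng.poisson(flux, 50000) + rng.normal(0, read_sigma, 50000)
    means.append(frames.mean())
    variances.append(frames.var())
g, rv = estimate_gain(np.array(means), np.array(variances))
print(g)
```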
Affiliation(s)
- David McFadden
- Institute of Physical Chemistry and Abbe Center of Photonics, Friedrich-Schiller-University, Jena, Germany
- Leibniz Institute of Photonic Technology, Albert-Einstein-Straße 9, Jena 07745, Germany
- Jena Center for Soft Matter (JCSM), Friedrich Schiller University Jena, Jena, Germany
- Brad Amos
- Medical Research Council (MRC) Laboratory of Molecular Biology, Cambridge, UK
- Rainer Heintzmann
- Institute of Physical Chemistry and Abbe Center of Photonics, Friedrich-Schiller-University, Jena, Germany
- Leibniz Institute of Photonic Technology, Albert-Einstein-Straße 9, Jena 07745, Germany
- Jena Center for Soft Matter (JCSM), Friedrich Schiller University Jena, Jena, Germany
23
Fryskowska-Skibniewska A, Delis P, Kedzierski M, Matusiak D. The Conception of Test Fields for Fast Geometric Calibration of the FLIR VUE PRO Thermal Camera for Low-Cost UAV Applications. Sensors (Basel) 2022; 22:2468. [PMID: 35408084 PMCID: PMC9003006 DOI: 10.3390/s22072468] [Received: 02/18/2022] [Revised: 03/15/2022] [Accepted: 03/21/2022] [Indexed: 02/05/2023]
Abstract
The dynamic evolution of photogrammetry has led to numerous methods of geometric camera calibration, mostly based on flat targets (test fields) with features that can be distinguished in the images. Geometric calibration of thermal cameras for UAVs is an active research field that attracts numerous researchers. Because of their low price and general availability, non-metric cameras are increasingly used for measurement purposes; apart from resolution, non-metric sensors have no other known parameters, so the commonly applied process is self-calibration, which determines approximate elements of the camera's interior orientation. The purpose of this work was to analyze the possibilities of geometric calibration of thermal UAV cameras using the proposed test-field patterns and materials. The experiment was conducted on a FLIR VUE PRO thermal camera dedicated to UAV platforms. The authors propose a selection of image-processing methods (histogram equalization, thresholding, brightness correction) to improve the quality of the thermograms. These three methods achieved effectiveness of 94%, 81%, and 80%, respectively (over 80% on average), compared with 42% for unprocessed thermograms and 38% for high-pass filtering; only high-pass filtering did not improve the results. The final results of the proposed method and test-field structure were verified on selected geometric calibration algorithms. The results of this fast and low-cost calibration are satisfactory, especially regarding the automation of the process: after geometric correction, the standard deviations obtained with the thermogram sharpness-enhancement methods are two to three times better than those without any correction.
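Histogram equalization, the most effective of the pre-processing methods above, can be sketched as follows (an illustrative implementation, not the authors' code):

```python
import numpy as np

def equalize_hist(img):
    """Global histogram equalization of an 8-bit image: remap grey
    levels through the normalized cumulative histogram, so occupied
    levels spread over the full 0..255 range."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cmin = cdf[hist > 0].min()                        # first occupied level
    lut = (cdf - cmin) * 255 // max(cdf[-1] - cmin, 1)
    return lut.clip(0, 255).astype(np.uint8)[img]

# Low-contrast test image: grey values squeezed into 100..131.
img = (np.arange(64).reshape(8, 8) // 2 + 100).astype(np.uint8)
out = equalize_hist(img)
print(out.min(), out.max())  # 0 255
```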
24
Samson É, Laurendeau D, Parizeau M. Calibration of Stereo Pairs Using Speckle Metrology. Sensors (Basel) 2022; 22:1784. [PMID: 35270930 PMCID: PMC8914707 DOI: 10.3390/s22051784] [Received: 12/21/2021] [Revised: 02/11/2022] [Accepted: 02/14/2022] [Indexed: 06/14/2023]
Abstract
The accuracy of 3D reconstruction for metrology applications using active stereo pairs depends on the quality of the calibration of the system. Active stereo pairs are generally composed of cameras mounted on tilt/pan mechanisms separated by a constant or variable baseline. This paper presents a calibration approach based on speckle metrology that allows the separation of translation and rotation in the estimation of extrinsic parameters. To achieve speckle-based calibration, a device called an Almost Punctual Speckle Source (APSS) is introduced. Using the APSS, a thorough method for the calibration of extrinsic parameters of stereo pairs is described. Experimental results obtained with a stereo system called the Agile Stereo Pair (ASP) demonstrate that speckle-based calibration achieves better reconstruction performance than methods using standard calibration procedures. Although the experiments were performed with a specific stereo pair, such as the ASP, which is described in the paper, the speckle-based calibration approach using the APSS can be transposed to other stereo setups.
Affiliation(s)
- Denis Laurendeau
- Electrical and Computer Engineering, Faculty of Science and Engineering, Université Laval, Quebec City, QC G1V 0A6, Canada; (É.S.); (M.P.)
25
Heo J, Kwon Y(J). 3D Vehicle Trajectory Extraction Using DCNN in an Overlapping Multi-Camera Crossroad Scene. Sensors (Basel) 2021; 21:7879. [PMID: 34883887 PMCID: PMC8659789 DOI: 10.3390/s21237879] [Received: 10/20/2021] [Revised: 11/15/2021] [Accepted: 11/24/2021] [Indexed: 11/16/2022]
Abstract
The 3D vehicle trajectory in complex traffic conditions, such as crossroads and heavy traffic, is practically very useful for autonomous driving. To accurately extract the 3D vehicle trajectory from a perspective camera at a crossroad, where vehicles span an angular range of 360 degrees, problems such as the narrow visual angle of a single-camera scene, vehicle occlusion at low camera perspectives, and the lack of physical vehicle information must be solved. In this paper, we propose a method for estimating the 3D bounding boxes of vehicles and extracting trajectories using a deep convolutional neural network (DCNN) in an overlapping multi-camera crossroad scene. First, traffic data were collected using overlapping multi-cameras to obtain a wide range of trajectories around the crossroad. Then, 3D bounding boxes of vehicles were estimated and tracked in each single-camera scene with DCNN models (YOLOv4, multi-branch CNN) combined with camera calibration. Using this information, the 3D vehicle trajectory was extracted on the ground plane of the crossroad by fusing the results from the overlapping cameras with a homography matrix. Finally, the errors of the extracted trajectories were corrected through simple linear interpolation and regression, and the accuracy of the proposed method was verified against ground-truth data. Compared with previously reported methods, our approach is shown to be more accurate and more practical.
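The trajectory-correction step described above — linear interpolation of gaps followed by regression smoothing — might look like this in outline (hypothetical data and polynomial degree, not the paper's parameters):

```python
import numpy as np

def fill_and_smooth(t, xs, deg=2):
    """Fill missing trajectory samples (NaN) by linear interpolation,
    then smooth the result with a low-order polynomial regression."""
    t = np.asarray(t, dtype=float)
    xs = np.asarray(xs, dtype=float)
    ok = ~np.isnan(xs)
    filled = np.interp(t, t[ok], xs[ok])   # linear gap filling
    coeffs = np.polyfit(t, filled, deg)    # regression smoothing
    return np.polyval(coeffs, t)

t = np.arange(6)
x = np.array([0.0, 1.0, np.nan, 9.0, 16.0, 25.0])  # x = t^2 with a gap
out = fill_and_smooth(t, x)
print(out)
```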
26
Xiong P, Wang S, Wang W, Ye Q, Ye S. Model-Independent Lens Distortion Correction Based on Sub-Pixel Phase Encoding. Sensors (Basel) 2021; 21:7465. [PMID: 34833544 PMCID: PMC8624224 DOI: 10.3390/s21227465] [Received: 10/07/2021] [Revised: 11/04/2021] [Accepted: 11/07/2021] [Indexed: 11/16/2022]
Abstract
Lens distortion introduces deviations in visual measurement and positioning. Distortion can be minimized by optimizing the lens and selecting high-quality optical glass, but it cannot be completely eliminated. Most existing correction methods rely on accurate distortion models and stable image features. However, the distortion is usually a mixture of the radial and tangential distortion of the lens group, which makes it difficult for a mathematical model to fit the non-uniform distortion accurately. This paper proposes a new model-independent method for correcting complex lens distortion. Using horizontal and vertical stripe patterns as the calibration target, the sub-pixel phase distribution makes the image distortion visible, and the correction parameters are obtained directly from the pixel distribution. A quantitative evaluation method suitable for model-independent approaches is also proposed, which computes the error solely from feature points of the corrected image itself. Experiments show that the method can accurately correct distortion with only eight images, with an error of 0.39 pixels, providing a simple approach to complex lens distortion correction.
Collapse
Affiliation(s)
- Pengbo Xiong
- Institute of Ultra-Precision Optoelectronic Instrument Engineering, Harbin Institute of Technology, Harbin 150001, China; (P.X.); (S.W.); (Q.Y.); (S.Y.)
- Key Lab of Ultra-Precision Intelligent Instrumentation, Harbin Institute of Technology, Ministry of Industry and Information Technology, Harbin 150001, China
| | - Shaokai Wang
- Institute of Ultra-Precision Optoelectronic Instrument Engineering, Harbin Institute of Technology, Harbin 150001, China; (P.X.); (S.W.); (Q.Y.); (S.Y.)
| | - Weibo Wang
- Institute of Ultra-Precision Optoelectronic Instrument Engineering, Harbin Institute of Technology, Harbin 150001, China; (P.X.); (S.W.); (Q.Y.); (S.Y.)
- Key Lab of Ultra-Precision Intelligent Instrumentation, Harbin Institute of Technology, Ministry of Industry and Information Technology, Harbin 150001, China
- Postdoctoral Research Station of Optical Engineering, Harbin Institute of Technology, Harbin 150001, China
- Correspondence:
- Qixin Ye
- Institute of Ultra-Precision Optoelectronic Instrument Engineering, Harbin Institute of Technology, Harbin 150001, China; (P.X.); (S.W.); (Q.Y.); (S.Y.)
- Key Lab of Ultra-Precision Intelligent Instrumentation, Harbin Institute of Technology, Ministry of Industry and Information Technology, Harbin 150001, China
- Shujiao Ye
- Institute of Ultra-Precision Optoelectronic Instrument Engineering, Harbin Institute of Technology, Harbin 150001, China; (P.X.); (S.W.); (Q.Y.); (S.Y.)
- Key Lab of Ultra-Precision Intelligent Instrumentation, Harbin Institute of Technology, Ministry of Industry and Information Technology, Harbin 150001, China
27
Tosti F, Nardinocchi C, Wahbeh W, Ciampini C, Marsella M, Lopes P, Giuliani S. Human height estimation from highly distorted surveillance image. J Forensic Sci 2021; 67:332-344. [PMID: 34596235 PMCID: PMC9291900 DOI: 10.1111/1556-4029.14888] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2021] [Revised: 07/27/2021] [Accepted: 08/25/2021] [Indexed: 11/30/2022]
Abstract
Video surveillance cameras (VSCs) are an important source of information during investigations, especially if used as a tool for the extraction of verified and reliable forensic measurements. In this study, some aspects of human height extraction from VSC video frames are analyzed with the aim of identifying and mitigating error sources that can strongly affect the measurement, in particular those introduced by the lens distortion present in wide-field-of-view lenses such as those of VSCs. A weak model, which is not able to properly describe and correct the lens distortion, could introduce systematic errors. This study focuses on camera calibration to verify human height extraction by the Amped FIVE software, which is adopted by the forensic science laboratories of the Carabinieri Force (RaCIS), Italy. A stable and reliable camera calibration approach is needed, since investigators have to deal with different cameras while inspecting the crime scene. The performance of the software in correcting distorted images is compared with a single-view self-calibration technique. Both approaches were applied to several frames acquired by a fish-eye camera and then used to measure the height of five different people. Moreover, two actual cases, both characterized by common low-resolution and distorted images, were also analyzed. The heights of four known persons were measured and used as reference values for validation. Results show no significant difference between the two calibration approaches when working with the fish-eye camera in the test field, while differences were found in the measurements on the actual cases.
Affiliation(s)
- Wissam Wahbeh
- IDIBAU, University of Applied Sciences and Arts Northwestern Switzerland, Muttenz, Switzerland
28
Karashchuk P, Rupp KL, Dickinson ES, Walling-Bell S, Sanders E, Azim E, Brunton BW, Tuthill JC. Anipose: A toolkit for robust markerless 3D pose estimation. Cell Rep 2021; 36:109730. [PMID: 34592148 PMCID: PMC8498918 DOI: 10.1016/j.celrep.2021.109730] [Citation(s) in RCA: 56] [Impact Index Per Article: 18.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2020] [Revised: 06/15/2021] [Accepted: 08/27/2021] [Indexed: 01/12/2023] Open
Abstract
Quantifying movement is critical for understanding animal behavior. Advances in computer vision now enable markerless tracking from 2D video, but most animals move in 3D. Here, we introduce Anipose, an open-source toolkit for robust markerless 3D pose estimation. Anipose is built on the 2D tracking method DeepLabCut, so users can expand their existing experimental setups to obtain accurate 3D tracking. It consists of four components: (1) a 3D calibration module, (2) filters to resolve 2D tracking errors, (3) a triangulation module that integrates temporal and spatial regularization, and (4) a pipeline to structure processing of large numbers of videos. We evaluate Anipose on a calibration board as well as mice, flies, and humans. By analyzing 3D leg kinematics tracked with Anipose, we identify a key role for joint rotation in motor control of fly walking. To help users get started with 3D tracking, we provide tutorials and documentation at http://anipose.org/.
Affiliation(s)
- Pierre Karashchuk
- Neuroscience Graduate Program, University of Washington, Seattle, WA, USA
- Katie L. Rupp
- Department of Physiology and Biophysics, University of Washington, Seattle, WA, USA
- Evyn S. Dickinson
- Department of Physiology and Biophysics, University of Washington, Seattle, WA, USA
- Sarah Walling-Bell
- Department of Physiology and Biophysics, University of Washington, Seattle, WA, USA
- Elischa Sanders
- Molecular Neurobiology Laboratory, The Salk Institute for Biological Studies, La Jolla, CA, USA
- Eiman Azim
- Molecular Neurobiology Laboratory, The Salk Institute for Biological Studies, La Jolla, CA, USA
- Bingni W. Brunton
- Department of Biology, University of Washington, Seattle, WA, USA; Senior author; Correspondence: (B.W.B.), (J.C.T.)
- John C. Tuthill
- Department of Physiology and Biophysics, University of Washington, Seattle, WA, USA; Senior author; Lead contact; Correspondence: (B.W.B.), (J.C.T.)
29
Roncella R, Forlani G. UAV Block Geometry Design and Camera Calibration: A Simulation Study. Sensors (Basel) 2021; 21:s21186090. [PMID: 34577297 PMCID: PMC8473092 DOI: 10.3390/s21186090] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/05/2021] [Revised: 09/06/2021] [Accepted: 09/07/2021] [Indexed: 11/21/2022]
Abstract
Acknowledged guidelines and standards such as those formerly governing project planning in analogue aerial photogrammetry are still missing in UAV photogrammetry. The reasons are many, from a great variety of projects goals to the number of parameters involved: camera features, flight plan design, block control and georeferencing options, Structure from Motion settings, etc. Above all, perhaps, stands camera calibration with the alternative between pre- and on-the-job approaches. In this paper we present a Monte Carlo simulation study where the accuracy estimation of camera parameters and tie points’ ground coordinates is evaluated as a function of various project parameters. A set of UAV (Unmanned Aerial Vehicle) synthetic photogrammetric blocks, built by varying terrain shape, surveyed area shape, block control (ground and aerial), strip type (longitudinal, cross and oblique), image observation and control data precision has been synthetically generated, overall considering 144 combinations in on-the-job self-calibration. Bias in ground coordinates (dome effect) due to inaccurate pre-calibration has also been investigated. Under the test scenario, the accuracy gap between different block configurations can be close to an order of magnitude. Oblique imaging is confirmed as key requisite in flat terrain, while ground control density is not. Aerial control by accurate camera station positions is overall more accurate and efficient than GCP in flat terrain.
30
Liu L, Xie J, Tang X, Ren C, Chen J, Liu R. Coarse-to-Fine Image Matching-Based Footprint Camera Calibration of the GF-7 Satellite. Sensors (Basel) 2021; 21:s21072297. [PMID: 33805992 PMCID: PMC8037635 DOI: 10.3390/s21072297] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/21/2021] [Revised: 03/22/2021] [Accepted: 03/23/2021] [Indexed: 11/22/2022]
Abstract
The GF-7 satellite is China’s first high-resolution stereo mapping satellite to reach sub-meter resolution, equipped with new-type payloads such as an area-array footprint camera that can achieve synchronized acquisition of laser spots. When the satellite is in space, camera parameters may vary due to launch vibration and environmental changes, so on-orbit geometric calibration must be performed. Using data from the GF-7 satellite, this paper constructs a geometric imaging model of the area-array footprint camera based on the two-dimensional direction angle, and proposes a coarse-to-fine “LPM-SIFT + Phase correlation” matching strategy for the automatic extraction of calibration control points. A single-image calibration experiment shows that the on-orbit geometric calibration model of the footprint camera constructed in this paper is correct and effective. The proposed matching method is used to register the footprint images with DOM (Digital Orthophoto Map) reference data to obtain dense control points. Compared with the calibration result obtained using a small number of manually collected control points, the root mean square error (RMSE) of the control-point residuals is improved from half a pixel to one-third of a pixel, and the RMSE of same-orbit checkpoints in image space is improved from 1 pixel to 0.7 pixels. It can be concluded that using the proposed coarse-to-fine image matching method to extract control points can significantly improve the on-orbit calibration accuracy of the footprint camera on the GF-7 satellite.
Affiliation(s)
- Lirong Liu
- Land Satellite Remote Sensing Application Center, MNR, Beijing 100048, China; (L.L.); (X.T.); (J.C.)
- Junfeng Xie
- Land Satellite Remote Sensing Application Center, MNR, Beijing 100048, China; (L.L.); (X.T.); (J.C.)
- Correspondence: Tel.: +86-10-6841-2292
- Xinming Tang
- Land Satellite Remote Sensing Application Center, MNR, Beijing 100048, China; (L.L.); (X.T.); (J.C.)
- Chaofeng Ren
- College of Geological Engineering and Geomatics, Chang’an University, Xi’an 710054, China;
- Jiyi Chen
- Land Satellite Remote Sensing Application Center, MNR, Beijing 100048, China; (L.L.); (X.T.); (J.C.)
- Ren Liu
- The School of Earth Sciences and Engineering, Hohai University, Nanjing 211100, China;
31
Vila O, Boada I, Raba D, Farres E. A Method to Compensate for the Errors Caused by Temperature in Structured-Light 3D Cameras. Sensors (Basel) 2021; 21:s21062073. [PMID: 33809467 PMCID: PMC7999897 DOI: 10.3390/s21062073] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/09/2021] [Revised: 03/08/2021] [Accepted: 03/12/2021] [Indexed: 11/16/2022]
Abstract
Although low-cost red-green-blue-depth (RGB-D) cameras are factory calibrated, proper calibration strategies have to be applied to meet the accuracy requirements of many industrial applications. Generally, these strategies do not consider the effect of temperature on the camera measurements. The aim of this paper is to evaluate this effect for an Orbbec Astra camera. To analyze this camera’s performance, an experimental study in a thermal chamber has been carried out. From this experiment, it has been seen that the errors produced can be modeled as a hyperbolic paraboloid function. To compensate for this error, a two-step method that first computes the error and then corrects it has been proposed. To compute the error, two possible strategies are proposed, one based on the infrared distortion map and the other on the depth map. The proposed method has been tested in an experimental scenario with different Orbbec Astra cameras and also in a real environment. In both cases, its good performance has been demonstrated. In addition, the method has been compared with the Kinect v1, achieving similar results. Therefore, the proposed method corrects the error due to temperature, is simple, requires a low computational cost, and might be applicable to other similar cameras.
Affiliation(s)
- Oriol Vila
- Graphics and Imaging Laboratory, University of Girona, 17003 Girona, Spain
- Insylo Technologies S.L., 17003 Girona, Spain; (D.R.); (E.F.)
- Correspondence: (O.V.); (I.B.)
- Imma Boada
- Graphics and Imaging Laboratory, University of Girona, 17003 Girona, Spain
- Correspondence: (O.V.); (I.B.)
- David Raba
- Insylo Technologies S.L., 17003 Girona, Spain; (D.R.); (E.F.)
- Esteve Farres
- Insylo Technologies S.L., 17003 Girona, Spain; (D.R.); (E.F.)
32
Schollemann F, Barbosa Pereira C, Rosenhain S, Follmann A, Gremse F, Kiessling F, Czaplik M, Abreu de Souza M. An Anatomical Thermal 3D Model in Preclinical Research: Combining CT and Thermal Images. Sensors (Basel) 2021; 21:1200. [PMID: 33572091 DOI: 10.3390/s21041200] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/10/2020] [Revised: 01/25/2021] [Accepted: 02/05/2021] [Indexed: 12/14/2022]
Abstract
Even though animal trials are a controversial topic, they provide knowledge about diseases and the course of infections in a medical context. To refine the detection of abnormalities that can cause pain and stress to the animal as early as possible, new processes must be developed. Due to its noninvasive nature, thermal imaging is increasingly used for severity assessment in animal-based research. Within a multimodal approach, thermal images combined with anatomical information could be used to simulate the inner temperature profile, thereby allowing the detection of deep-seated infections. This paper presents the generation of anatomical thermal 3D models, forming the underlying multimodal model in this simulation. These models combine anatomical 3D information based on computed tomography (CT) data with a registered thermal shell measured with infrared thermography. The process of generating these models consists of data acquisition (both thermal images and CT), camera calibration, image processing methods, and structure from motion (SfM), among others. Anatomical thermal 3D models were successfully generated using three anesthetized mice. Due to the image processing improvement, the process was also realized for areas with few features, which increases the transferability of the process. The result of this multimodal registration in 3D space can be viewed and analyzed within a visualization tool. Individual CT slices can be analyzed axially, sagittally, and coronally with the corresponding superficial skin temperature distribution. This is an important and successfully implemented milestone on the way to simulating the internal temperature profile. Using this temperature profile, deep-seated infections and inflammation can be detected in order to reduce animal suffering.
33
Van Crombrugge I, Penne R, Vanlanduit S. Extrinsic Camera Calibration with Line-Laser Projection. Sensors (Basel) 2021; 21:s21041091. [PMID: 33562538 PMCID: PMC7914869 DOI: 10.3390/s21041091] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/13/2021] [Revised: 01/27/2021] [Accepted: 02/01/2021] [Indexed: 11/27/2022]
Abstract
Knowledge of precise camera poses is vital for multi-camera setups. Camera intrinsics can be obtained for each camera separately in lab conditions. For fixed multi-camera setups, the extrinsic calibration can only be done in situ. Usually, some markers are used, like checkerboards, requiring some level of overlap between cameras. In this work, we propose a method for cases with little or no overlap. Laser lines are projected on a plane (e.g., floor or wall) using a laser line projector. The pose of the plane and cameras is then optimized using bundle adjustment to match the lines seen by the cameras. To find the extrinsic calibration, only a partial overlap between the laser lines and the field of view of the cameras is needed. Real-world experiments were conducted both with and without overlapping fields of view, resulting in rotation errors below 0.5°. We show that the accuracy is comparable to other state-of-the-art methods while offering a more practical procedure. The method can also be used in large-scale applications and can be fully automated.
34
Weng J, Zhou W, Ma S, Qi P, Zhong J. Model-Free Lens Distortion Correction Based on Phase Analysis of Fringe-Patterns. Sensors (Basel) 2020; 21:E209. [PMID: 33396238 PMCID: PMC7795500 DOI: 10.3390/s21010209] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/29/2020] [Revised: 12/26/2020] [Accepted: 12/27/2020] [Indexed: 11/16/2022]
Abstract
Existing lens correction methods handle distortion correction through one or more specific image distortion models. However, distortion determination may fail when an unsuitable model is used, so model-based methods have inherent drawbacks. A model-free lens distortion correction based on the phase analysis of fringe patterns is proposed in this paper. Firstly, the mathematical relationship between the distortion displacement and the modulated phase of the sinusoidal fringe pattern is established in theory. By phase demodulation analysis of the fringe pattern, the distortion displacement map can be determined point by point for the whole distorted image, so the image correction is achieved according to the distortion displacement map in a model-free manner. Furthermore, the distortion center, which is important for obtaining an optimal result, is measured automatically from the instantaneous frequency distribution according to the character of the distortion. Numerical simulations and experiments with a wide-angle lens are carried out to validate the method.
Affiliation(s)
- Jiawen Weng
- Department of Applied Physics, South China Agricultural University, Guangzhou 510642, China; (J.W.); (S.M.)
- Weishuai Zhou
- Department of Optoelectronic Engineering, Jinan University, Guangzhou 510632, China;
- Simin Ma
- Department of Applied Physics, South China Agricultural University, Guangzhou 510642, China; (J.W.); (S.M.)
- Pan Qi
- Department of Electronics Engineering, Guangdong Communication Polytechnic, Guangzhou 510650, China;
- Jingang Zhong
- Department of Optoelectronic Engineering, Jinan University, Guangzhou 510632, China;
- Guangdong Provincial Key Laboratory of Optical Fiber Sensing and Communications, Guangzhou 510650, China
35
Tang X, Song H, Wang W, Yang Y. Vehicle Spatial Distribution and 3D Trajectory Extraction Algorithm in a Cross-Camera Traffic Scene. Sensors (Basel) 2020; 20:s20226517. [PMID: 33202659 PMCID: PMC7698096 DOI: 10.3390/s20226517] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/30/2020] [Revised: 11/13/2020] [Accepted: 11/13/2020] [Indexed: 11/16/2022]
Abstract
The three-dimensional trajectory data of vehicles have important practical meaning for traffic behavior analysis. To solve the problems of narrow visual angle in single-camera scenes and lack of continuous trajectories in 3D space by current cross-camera trajectory extraction methods, we propose an algorithm of vehicle spatial distribution and 3D trajectory extraction in this paper. First, a panoramic image of a road with spatial information is generated based on camera calibration, which is used to convert cross-camera perspectives into 3D physical space. Then, we choose YOLOv4 to obtain 2D bounding boxes of vehicles in cross-camera scenes. Based on the above information, 3D bounding boxes around vehicles are built with geometric constraints which are used to obtain projection centroids of vehicles. Finally, by calculating the spatial distribution of projection centroids in the panoramic image, 3D trajectories of vehicles are extracted. The experimental results indicate that our algorithm can effectively complete vehicle spatial distribution and 3D trajectory extraction in various traffic scenes, which outperforms other comparison algorithms.
36
Paulus S, Mahlein AK. Technical workflows for hyperspectral plant image assessment and processing on the greenhouse and laboratory scale. Gigascience 2020; 9:5894826. [PMID: 32815537 PMCID: PMC7439585 DOI: 10.1093/gigascience/giaa090] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/27/2020] [Revised: 06/26/2020] [Accepted: 08/04/2020] [Indexed: 11/13/2022] Open
Abstract
Background: The use of hyperspectral cameras is well established in the field of plant phenotyping, especially as part of high-throughput routines in greenhouses. Nevertheless, the workflows used differ depending on the applied camera, the plants being imaged, the experience of the users, and the measurement set-up. Results: This review describes a general workflow for the assessment and processing of hyperspectral plant data at greenhouse and laboratory scale. Aiming at a detailed description of possible error sources, a comprehensive literature review of ways to overcome these errors and influences is provided. The processing of hyperspectral plant data, starting from the hardware sensor calibration, through the software processing steps to overcome sensor inaccuracies, to the preparation for machine learning, is shown and described in detail. Furthermore, plant traits extracted from spectral hypercubes are categorized to standardize the terms used when describing hyperspectral traits in plant phenotyping. A scientific data perspective is introduced, covering information for the canopy, single organs, plant development, and also combined traits coming from spectral and 3D measuring devices. Conclusions: This publication provides a structured overview of implementing hyperspectral imaging in biological studies at greenhouse and laboratory scale. Workflows have been categorized to define a trait-level scale according to their metrological level and processing complexity. A general workflow is shown to outline procedures and requirements for providing fully calibrated data of the highest quality. This is essential for differentiating the smallest changes in the hyperspectral reflectance of plants, to track and trace hyperspectral development as a response to biotic or abiotic stresses.
Affiliation(s)
- Stefan Paulus
- Institute of Sugar Beet Research, Holtenser Landstr. 77, 37079 Göttingen, Germany
- Anne-Katrin Mahlein
- Institute of Sugar Beet Research, Holtenser Landstr. 77, 37079 Göttingen, Germany
37
Ricolfe-Viala C, Esparza A. Depth-Dependent High Distortion Lens Calibration. Sensors (Basel) 2020; 20:s20133695. [PMID: 32630342 PMCID: PMC7374366 DOI: 10.3390/s20133695] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/01/2020] [Revised: 06/22/2020] [Accepted: 06/26/2020] [Indexed: 11/29/2022]
Abstract
Accurate correction of highly distorted images is a very complex problem. Several lens distortion models exist and are adjusted using different techniques. Usually, regardless of the chosen model, a single distortion model is adjusted to undistort images, and the camera-to-calibration-template distance is not considered. Several authors have presented the depth dependency of lens distortion, but none of them have treated it with highly distorted images. This paper presents an analysis of the depth dependency of distortion in strongly distorted images. The division model, which is able to represent high distortion with only one parameter, is modified to represent a depth-dependent high-distortion lens model. The proposed calibration method obtains more accurate results when compared to existing calibration methods.
Affiliation(s)
- Carlos Ricolfe-Viala
- Instituto de Automática e Informática Industrial, Universitat Politècnica de València, Camino de Vera s/n, 46022 Valencia, Spain
- Correspondence:
- Alicia Esparza
- Department of Systems Engineering and Automatic Control, Universitat Politècnica de València, Camino de Vera s/n, 46022 Valencia, Spain;
38
Liu X, Lu R. Testing System for the Mechanical Properties of Small-Scale Specimens Based on 3D Microscopic Digital Image Correlation. Sensors (Basel) 2020; 20:E3530. [PMID: 32580343 DOI: 10.3390/s20123530] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/13/2020] [Revised: 06/10/2020] [Accepted: 06/18/2020] [Indexed: 11/16/2022]
Abstract
The testing of the mechanical properties of materials on a small scale is difficult because of the small specimen size and the difficulty of measuring the full-field strain. To tackle this problem, a testing system for investigating the mechanical properties of small-scale specimens, based on three-dimensional (3D) microscopic digital image correlation (DIC) combined with a micro tensile machine, is proposed. Firstly, the testing system is described in detail, including the design of the micro tensile machine and the 3D microscopic DIC method. Then, the effects of different shape functions on the matching accuracy obtained by the inverse compositional Gauss-Newton (IC-GN) algorithm are investigated, and the numerical experiment results verify that the error due to undermatched shape functions is far larger than that of overmatched shape functions. The reprojection error is shown to be smaller than before when employing the modified iteratively weighted radial alignment constraint method. Both displacement and uniaxial measurements were performed to demonstrate the 3D microscopic DIC method and the testing system built. The experimental results confirm that the testing system can accurately measure the full-field strain and mechanical properties of small-scale specimens.
39
Kowalski C, Arizpe-Gomez P, Fifelski C, Brinkmann A, Hein A. Design of a Supportive Transfer Robot System for Caregivers to Reduce Physical Strain During Nursing Activities. Stud Health Technol Inform 2020; 270:1245-1246. [PMID: 32570601 DOI: 10.3233/shti200384] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
The number of people in need of long-term care is rising, and personnel scarcity is already foreseeable. The shortage of caregivers is further increased by early retirement attributed to the major health burden of working at high speed with heavy lifting. Since nursing staff in many cases work beyond their physical strain limits during routine activities at the bed, and existing systems do not counteract this trend, we investigate in the present work whether the concept of a collaborative robotic support system can contribute to the physical relief of nursing staff and make it possible to stay below the physical strain limits.
Affiliation(s)
- Anna Brinkmann
- Carl von Ossietzky University Oldenburg, Oldenburg, Germany
- Andreas Hein
- Carl von Ossietzky University Oldenburg, Oldenburg, Germany
40
Lee K, Hwang I, Kim YM, Lee H, Kang M, Yu J. Real-Time Weld Quality Prediction Using a Laser Vision Sensor in a Lap Fillet Joint during Gas Metal Arc Welding. Sensors (Basel) 2020; 20:E1625. [PMID: 32183310 DOI: 10.3390/s20061625] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/21/2020] [Revised: 03/04/2020] [Accepted: 03/11/2020] [Indexed: 11/17/2022]
Abstract
Nondestructive test (NDT) technology is required in the gas metal arc (GMA) welding process to secure weld robustness and to monitor the welding quality in real-time. In this study, a laser vision sensor (LVS) is designed and fabricated, and an image processing algorithm is developed and implemented to extract precise laser lines on tested welds. A camera calibration method based on a gyro sensor is used to cope with the complex motion of the welding robot. Data are obtained based on GMA welding experiments at various welding conditions for the estimation of quality prediction models. Deep neural network (DNN) models are developed based on external bead shapes and welding conditions to predict the internal bead shapes and the tensile strengths of welded joints.
41
Puerto P, Estala B, Mendikute A. A Study on the Uncertainty of a Laser Triangulator Considering System Covariances. Sensors (Basel) 2020; 20:E1630. [PMID: 32183368 DOI: 10.3390/s20061630] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/08/2020] [Revised: 02/14/2020] [Accepted: 03/11/2020] [Indexed: 11/26/2022]
Abstract
A laser triangulation system, which is composed of a camera and a laser, calculates distances between objects intersected by the laser plane. Even though there are commercial triangulation systems, developing a new system allows the design to be adapted to the needs, in addition to allowing dimensions or processing times to be optimized; however the disadvantage is that the real accuracy is not known. The aim of the research is to identify and discuss the relevance of the most significant error sources in laser triangulator systems, predicting their error contribution to the final joint measurement accuracy. Two main phases are considered in this study, namely the calibration and measurement processes. The main error sources are identified and characterized throughout both phases, and a synthetic error propagation methodology is proposed to study the measurement accuracy. As a novelty in uncertainty analysis, the present approach encompasses the covariances of correlated system variables, characterizing both phases for a laser triangulator. An experimental methodology is adopted to evaluate the measurement accuracy in a laser triangulator, comparing it with the values obtained with the synthetic error propagation methodology. The relevance of each error source is discussed, as well as the accuracy of the error propagation. A linearity value of 40 µm and maximum error of 0.6 mm are observed for a 100 mm measuring range, with the camera calibration phase being the main error contributor.
| 42 |
Madeira T, Oliveira M, Dias P. Enhancement of RGB-D Image Alignment Using Fiducial Markers. Sensors (Basel) 2020; 20:s20051497. [PMID: 32182872 PMCID: PMC7085533 DOI: 10.3390/s20051497] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/06/2020] [Revised: 03/05/2020] [Accepted: 03/06/2020] [Indexed: 11/16/2022]
Abstract
Three-dimensional (3D) reconstruction methods generate a 3D textured model by combining data from several captures. As such, the geometrical transformations between these captures are required. The process of computing or refining these transformations is referred to as alignment. It is often a difficult problem to handle, in particular due to a lack of accuracy in the matching of features. We propose an optimization framework that takes advantage of fiducial markers placed in the scene. Since these markers are robustly detected, the problem of incorrect feature matching is overcome. The proposed procedure is capable of enhancing the 3D models created using consumer-level hand-held RGB-D cameras, reducing the visual artefacts caused by misalignments. One problem inherent to this solution is that the scene is polluted by the markers. Therefore, a tool was developed to allow their removal from the texture of the scene. Results show that our optimization framework is able to significantly reduce alignment errors between captures, which results in visually appealing reconstructions. Furthermore, the markers used to enhance the alignment are seamlessly removed from the final model texture.
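As a rough sketch of the alignment building block (the closed-form part only; the paper's contribution is a full optimization framework over many captures): given matched marker centres in two captures, the rigid transformation between them follows from an SVD-based least-squares fit (Kabsch). The data below are synthetic.

```python
import numpy as np

def rigid_transform(P, Q):
    """Best-fit rotation R and translation t mapping points P onto Q
    (Kabsch algorithm): minimizes sum ||R p_i + t - q_i||^2."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                       # proper rotation, det = +1
    t = cq - R @ cp
    return R, t

# Marker centres detected in two captures (synthetic example).
rng = np.random.default_rng(0)
P = rng.normal(size=(6, 3))
angle = np.pi / 5
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
Q = P @ R_true.T + np.array([0.1, -0.2, 0.3])
R, t = rigid_transform(P, Q)
residual = np.linalg.norm(P @ R.T + t - Q)
```

Because fiducial markers yield reliable correspondences, a fit like this is not thrown off by the feature-matching errors mentioned above.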
Affiliation(s)
- Tiago Madeira
- Institute of Electronics and Informatics Engineering of Aveiro, University of Aveiro, 3810-193 Aveiro, Portugal; (M.O.); (P.D.)
- Correspondence:
- Miguel Oliveira
- Institute of Electronics and Informatics Engineering of Aveiro, University of Aveiro, 3810-193 Aveiro, Portugal; (M.O.); (P.D.)
- Department of Mechanical Engineering, University of Aveiro, 3810-193 Aveiro, Portugal
- Paulo Dias
- Institute of Electronics and Informatics Engineering of Aveiro, University of Aveiro, 3810-193 Aveiro, Portugal; (M.O.); (P.D.)
- Department of Electronics, Telecommunications and Informatics, University of Aveiro, 3810-193 Aveiro, Portugal
| 43 |
Itu R, Danescu RG. A Self-Calibrating Probabilistic Framework for 3D Environment Perception Using Monocular Vision. Sensors (Basel) 2020; 20:s20051280. [PMID: 32120868 PMCID: PMC7085646 DOI: 10.3390/s20051280] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/27/2019] [Revised: 02/20/2020] [Accepted: 02/25/2020] [Indexed: 11/27/2022]
Abstract
Cameras are sensors that are available everywhere and to everyone, and they can easily be placed inside vehicles. While stereovision setups of two or more synchronized cameras have the advantage of directly extracting 3D information, a single camera can easily be set up behind the windshield (like a dashcam) or above the dashboard, usually as the internal camera of a mobile phone placed there for navigation assistance. This paper presents a framework for extracting and tracking 3D obstacle data from the surrounding environment of a vehicle in traffic, using a generic camera as the sensor. The system combines the strength of Convolutional Neural Network (CNN)-based segmentation with a generic probabilistic model of the environment, the dynamic occupancy grid. The main contributions of this paper are the following: a method for generating the probabilistic measurement model from monocular images, based on CNN segmentation, which takes into account the particularities, uncertainties, and limitations of monocular vision; a method for automatic calibration of the extrinsic and intrinsic parameters of the camera, without the need for user assistance; and the integration of automatic calibration and measurement-model generation into a scene-tracking system that is able to work with any camera to perceive the obstacles in real traffic. The presented system can be easily fitted to any vehicle, working standalone or together with other sensors, to enhance environment perception capabilities and improve traffic safety.
| 44 |
Barone F, Marrazzo M, Oton CJ. Camera Calibration with Weighted Direct Linear Transformation and Anisotropic Uncertainties of Image Control Points. Sensors (Basel) 2020; 20:E1175. [PMID: 32093348 DOI: 10.3390/s20041175] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/12/2019] [Revised: 12/29/2019] [Accepted: 12/30/2019] [Indexed: 12/03/2022]
Abstract
Camera calibration is a crucial step for computer vision in many applications. For example, adequate calibration is required in infrared thermography inside gas turbines for blade temperature measurements, for associating each pixel with the corresponding point on the blade's 3D model. The blade has to be used as the calibration frame, but it is always only partially visible, and thus there are few control points. We propose and test a method that exploits the anisotropic uncertainty of the control points and improves the calibration in conditions where the number of control points is limited. Assuming a bivariate Gaussian 2D distribution of the position error of each control point, we set uncertainty areas around the control points' positions, which are ellipses (with specific axis lengths and rotations) within which the control points are supposed to lie. We use these ellipses to set a weight matrix to be used in a weighted Direct Linear Transformation (wDLT). We present the mathematical formalism for this modified calibration algorithm, and we apply it to calibrate a camera from a picture of a well-known object in different situations, comparing its performance to the standard DLT method and showing that the wDLT algorithm provides a more robust and precise solution. We finally discuss the quantitative improvements of the algorithm by varying the modules of random deviations in the control points' positions and with partial occlusion of the object.
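A sketch of the weighting idea on synthetic data (using the basic DLT formulation; the paper's wDLT derivation is more complete): each control point contributes two linear equations, which are whitened by the inverse square root of that point's 2x2 position covariance, so the uncertain axis of an elongated ellipse is down-weighted.

```python
import numpy as np

def weighted_dlt(X, x, covs):
    """Estimate the 3x4 projection matrix P from 3D points X (n x 3) and
    image points x (n x 2), whitening each point's two DLT equations by
    the inverse square root of its 2x2 position covariance."""
    rows = []
    for Xw, (u, v), S in zip(X, x, covs):
        Xh = np.append(Xw, 1.0)
        A = np.zeros((2, 12))
        A[0, 0:4] = Xh
        A[0, 8:12] = -u * Xh
        A[1, 4:8] = Xh
        A[1, 8:12] = -v * Xh
        w_eig, V = np.linalg.eigh(S)          # ellipse axes and orientation
        W = V @ np.diag(1.0 / np.sqrt(w_eig)) @ V.T   # S^{-1/2}
        rows.append(W @ A)
    A = np.vstack(rows)
    _, _, Vt = np.linalg.svd(A)               # null vector of weighted system
    return Vt[-1].reshape(3, 4)

# Synthetic camera and 6 known control points (non-coplanar).
P_true = np.hstack([np.diag([800.0, 800.0, 1.0]),
                    np.array([[320.0], [240.0], [1.0]])])
X = np.array([[0, 0, 5], [1, 0, 6], [0, 1, 7],
              [1, 1, 5], [-1, 0.5, 6], [0.5, -1, 8.0]])
xh = np.hstack([X, np.ones((6, 1))]) @ P_true.T
x = xh[:, :2] / xh[:, 2:]
covs = [np.diag([1.0, 2.0])] * 6              # anisotropic uncertainty ellipses
P_est = weighted_dlt(X, x, covs)
P_est /= P_est[2, 3]                          # fix the projective scale
err = np.linalg.norm(P_est - P_true / P_true[2, 3])
```

With noiseless data the weighted and unweighted solutions coincide; the weights matter once the control points carry direction-dependent noise.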
| 45 |
Elias M, Eltner A, Liebold F, Maas HG. Assessing the Influence of Temperature Changes on the Geometric Stability of Smartphone- and Raspberry Pi Cameras. Sensors (Basel) 2020; 20:E643. [PMID: 31979284 DOI: 10.3390/s20030643] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/03/2019] [Revised: 01/17/2020] [Accepted: 01/20/2020] [Indexed: 12/01/2022]
Abstract
Knowledge of the interior and exterior camera orientation parameters is required to establish the relationship between 2D image content and 3D object data. Camera calibration is used to determine the interior orientation parameters, which are valid as long as the camera remains stable. However, information about the temporal stability of low-cost cameras, such as those in smartphones, under the physical impact of temperature changes is still missing. This study investigates, on the one hand, the influence of heat-dissipating smartphone components on the geometric integrity of the implemented cameras and, on the other hand, the impact of ambient temperature changes on the geometry of uncoupled low-cost cameras, considering a Raspberry Pi camera module exposed to controlled changes in thermal radiation. If these impacts are neglected, transferring image measurements into object space will lead to erroneous measurements due to the strong correlation between temperature and the camera's geometric stability. Monte-Carlo simulation is used to simulate temperature-related variations of the interior orientation parameters and to assess the extent of the potential errors in the 3D data, which range from a few millimetres up to five centimetres in the X- and Y-directions for a target positioned 10 m from the camera, with the Z-axis aligned with the camera's depth direction.
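The Monte-Carlo step can be sketched as follows, perturbing a single interior-orientation parameter (the focal length) and propagating the perturbation to object space for a target 10 m away, as in the paper's setup; the drift magnitude and camera values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
f = 3500.0              # nominal focal length, px (illustrative)
Z = 10.0                # target distance, m (as in the paper's setup)
u = 500.0               # image coordinate, px from the principal point
X_nominal = u * Z / f   # back-projected X coordinate at the target plane

# Simulated temperature-related focal-length drift (std. dev. 5 px, invented).
drifts = rng.normal(0.0, 5.0, size=10_000)
X_perturbed = u * Z / (f + drifts)
errors_mm = (X_perturbed - X_nominal) * 1000.0
max_error_mm = np.abs(errors_mm).max()
```

Sampling all interior-orientation parameters jointly (principal point, distortion coefficients) follows the same pattern with a multivariate drift model.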
| 46 |
Kalia M, Mathur P, Navab N, Salcudean SE. Marker-less real-time intra-operative camera and hand-eye calibration procedure for surgical augmented reality. Healthc Technol Lett 2019; 6:255-260. [PMID: 32038867 PMCID: PMC6952262 DOI: 10.1049/htl.2019.0094] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2019] [Accepted: 10/02/2019] [Indexed: 12/28/2022] Open
Abstract
Accurate medical Augmented Reality (AR) rendering requires two calibrations: a camera intrinsic matrix estimation and a hand-eye transformation. We present a unified, practical, marker-less, real-time system to estimate both of these transformations during surgery. For camera calibration, we perform calibrations at multiple distances from the endoscope pre-operatively to parametrize the camera intrinsic matrix as a function of distance from the endoscope. Then, we retrieve the camera parameters intra-operatively by estimating the distance of the surgical site from the endoscope in less than 1 s. Unlike in prior work, our method does not require the endoscope to be taken out of the patient. For the hand-eye calibration, as opposed to conventional methods that require the identification of a marker, we make use of a tool-tip rendered in 3D. As the surgeon moves the instrument and observes the offset between the actual and the rendered tool-tip, they can select points of high visual error and manually bring the instrument tip to match the virtual rendered tool-tip. To evaluate the hand-eye calibration, 5 subjects carried out the hand-eye calibration procedure on a da Vinci robot. An average Target Registration Error of approximately 7 mm was achieved with just three data points.
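The pre-operative parametrization can be sketched as a simple lookup table: calibrate at several working distances, then interpolate the intrinsics intra-operatively from the estimated distance of the surgical site. All numbers below are invented; the paper's actual parametrization may differ.

```python
import numpy as np

# Pre-operative calibrations at several endoscope-to-scene distances (invented).
calib_distances = np.array([20.0, 40.0, 60.0, 80.0, 100.0])    # mm
focal_lengths = np.array([910.0, 880.0, 862.0, 850.0, 843.0])  # px

def intrinsics_at(distance_mm):
    """Interpolate the focal length for the current working distance and
    assemble a pinhole intrinsic matrix (fixed principal point assumed)."""
    f = np.interp(distance_mm, calib_distances, focal_lengths)
    cx, cy = 640.0, 360.0
    return np.array([[f, 0.0, cx],
                     [0.0, f, cy],
                     [0.0, 0.0, 1.0]])

# Intra-operative lookup once the site distance has been estimated.
K = intrinsics_at(50.0)
```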
Affiliation(s)
- Megha Kalia
- Robotics and Control Lab, Electrical and Computer Engineering, University of British Columbia, 2329 West Mall, Vancouver, BC V6T 1Z4, Canada
- Computer Aided Medical Procedures, Technical University of Munich, Boltzmannstraße 15, 85748 Garching bei München, Germany
- Prateek Mathur
- Robotics and Control Lab, Electrical and Computer Engineering, University of British Columbia, 2329 West Mall, Vancouver, BC V6T 1Z4, Canada
- Nassir Navab
- Computer Aided Medical Procedures, Technical University of Munich, Boltzmannstraße 15, 85748 Garching bei München, Germany
- Septimiu E Salcudean
- Robotics and Control Lab, Electrical and Computer Engineering, University of British Columbia, 2329 West Mall, Vancouver, BC V6T 1Z4, Canada
| 47 |
Komagata H, Kakinuma E, Ishikawa M, Shinoda K, Kobayashi N. Semi-Automatic Calibration Method for a Bed-Monitoring System Using Infrared Image Depth Sensors. Sensors (Basel) 2019; 19:E4581. [PMID: 31640256 DOI: 10.3390/s19204581] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/20/2019] [Revised: 10/16/2019] [Accepted: 10/18/2019] [Indexed: 11/21/2022]
Abstract
With the aging of society, the number of fall accidents in hospitals and care facilities has increased, and some of these accidents happen around beds. To help prevent accidents, mat and clip sensors have been used in these facilities, but they can be invasive, and their purpose may be misinterpreted. In recent years, research has been conducted on using an infrared-image depth sensor as a bed-monitoring system for detecting a patient getting up, exiting the bed, and/or falling; however, some manual calibration was initially required to set up the sensor in each instance. We propose a bed-monitoring system that retains the infrared-image depth sensors but uses semi-automatic rather than manual calibration in each situation where it is applied. Our automated methods robustly calculate the bed region, the surrounding floor, and the sensor location and attitude, and they can recognize the spatial position of the patient even when the sensor is attached but unconstrained. We also propose a means of reconfiguring the spatial position that accounts for occlusion by parts of the bed and for the center of gravity of the patient's body. Experimental results of multi-view calibration and motion simulation showed that our methods are effective for recognizing the spatial position of the patient.
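One geometric building block of such semi-automatic calibration, estimating the floor (or bed) plane from the depth sensor's point cloud, can be sketched as a least-squares plane fit; the data are synthetic, and the paper's actual bed-region algorithm is more involved.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through 3D points: returns a unit normal n and
    the centroid c, with n . (p - c) ~ 0 for points p on the plane."""
    c = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - c)
    n = Vt[-1]                      # direction of least variance
    return n / np.linalg.norm(n), c

# Synthetic depth-sensor points on a slightly tilted floor, plus noise.
rng = np.random.default_rng(1)
xy = rng.uniform(-2, 2, size=(500, 2))
z = 0.02 * xy[:, 0] + 1.5 + rng.normal(0, 1e-3, size=500)
pts = np.column_stack([xy, z])
n, c = fit_plane(pts)
n = n if n[2] > 0 else -n          # orient the normal consistently
# Signed height of a point (e.g. the patient's body centre) above the plane.
height_above_floor = n @ (np.array([0.5, 0.5, 1.8]) - c)
```

The patient's height above the fitted plane then gives the spatial position needed for get-up or fall detection.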
| 48 |
Zhang X, Zeinali Y, Story BA, Rajan D. Measurement of Three-Dimensional Structural Displacement Using a Hybrid Inertial Vision-Based System. Sensors (Basel) 2019; 19:E4083. [PMID: 31546595 DOI: 10.3390/s19194083] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/13/2019] [Revised: 09/06/2019] [Accepted: 09/17/2019] [Indexed: 11/24/2022]
Abstract
Accurate three-dimensional displacement measurement of bridges and other structures has received significant attention in recent years. The main challenges of such measurements include the cost and the need for a scalable array of instrumentation. This paper presents a novel Hybrid Inertial Vision-Based Displacement Measurement (HIVBDM) system that can measure three-dimensional structural displacements using a monocular charge-coupled device (CCD) camera, a stationary calibration target, and an attached tilt sensor. The HIVBDM system does not require the camera to be stationary during the measurements; the camera movements, i.e., rotations and translations, during the measurement process are compensated for by using a stationary calibration target in the field of view (FOV) of the camera. An attached tilt sensor is further used to refine the camera movement compensation and to better infer the global three-dimensional structural displacements. The HIVBDM system is evaluated on both short-term and long-term synthetic static structural displacements in a simulated indoor experimental environment. In the experiments, at a 9.75 m operating distance between the monitoring camera and the monitored structure, the proposed HIVBDM system achieves an average Root Mean Square Error (RMSE) of 1.440 mm on the in-plane structural translations and of 2.904 mm on the out-of-plane structural translations.
| 49 |
Yang YS. Measurement of Dynamic Responses from Large Structural Tests by Analyzing Non-Synchronized Videos. Sensors (Basel) 2019; 19:s19163520. [PMID: 31405251 PMCID: PMC6721229 DOI: 10.3390/s19163520] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/04/2019] [Revised: 08/08/2019] [Accepted: 08/09/2019] [Indexed: 11/23/2022]
Abstract
Image analysis techniques have been employed to measure displacements, deformation, and crack propagation, and to support structural health monitoring. With the rapid development and wide application of digital imaging technology, consumer digital cameras are commonly used for making such measurements because of their satisfactory imaging resolution, video recording capability, and relatively low cost. However, three-dimensional dynamic response monitoring and measurement on large-scale structures pose the challenges of camera calibration and synchronization to image analysis. Without a satisfactory camera position and orientation obtained from calibration, and without well-synchronized imaging, significant errors would occur in the dynamic responses obtained by image analysis and stereo triangulation. This paper introduces two camera calibration approaches that are suitable for large-scale structural experiments, as well as a synchronization method that estimates the time difference between two cameras to further minimize the error of stereo triangulation. Two structural experiments are used to verify the calibration approaches and the synchronization method in acquiring dynamic responses. The results demonstrate the performance and the accuracy improvement achieved by using the proposed methods.
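The synchronization idea, estimating the time difference between two cameras, can be sketched (not the paper's exact estimator) by locating the peak of the cross-correlation between the motion signals the two cameras measure.

```python
import numpy as np

def estimate_offset(sig_a, sig_b, dt):
    """Time delay of sig_b relative to sig_a from the peak of their
    cross-correlation (dt = sampling interval). A minimal sketch for two
    cameras observing the same structural motion."""
    a = sig_a - sig_a.mean()
    b = sig_b - sig_b.mean()
    corr = np.correlate(a, b, mode="full")
    k = np.argmax(corr) - (len(b) - 1)   # shift of sig_a relative to sig_b
    return -k * dt                       # positive: sig_b lags sig_a

# Two "camera" measurements of the same decaying vibration at 100 Hz,
# the second delayed by 7 samples (0.07 s).
dt = 0.01
t = np.arange(0, 10, dt)
motion = np.sin(2 * np.pi * 1.3 * t) * np.exp(-0.1 * t)
cam_a = motion[7:]
cam_b = motion[:-7]
offset = estimate_offset(cam_a, cam_b, dt)
```

Sub-sample refinement (e.g. interpolating around the correlation peak) would be the natural next step for minimizing stereo-triangulation error.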
| 50 |
Zhang Z, Zhao R, Liu E, Yan K, Ma Y. A Convenient Calibration Method for LRF-Camera Combination Systems Based on a Checkerboard. Sensors (Basel) 2019; 19:s19061315. [PMID: 30884756 PMCID: PMC6470948 DOI: 10.3390/s19061315] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/06/2019] [Revised: 03/01/2019] [Accepted: 03/11/2019] [Indexed: 11/16/2022]
Abstract
In this paper, a simple, high-precision calibration method is proposed for the widely used LRF-camera (laser range finder and camera) combined measurement system. The method can be applied not only to mainstream 2D and 3D LRF-camera systems, but also to calibrate newly developed 1D LRF-camera combined systems. It only requires a calibration board and at least three recorded sets of data. First, the camera parameters and distortion coefficients are decoupled via the distortion center. Then, the spatial coordinates of the laser spots are solved using line and plane constraints, and the LRF-camera extrinsic parameters are estimated. In addition, we establish a cost function for optimizing the system. Finally, the calibration accuracy and characteristics of the method are analyzed through simulation experiments, and the validity of the method is verified through the calibration of a real system.
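The "line and plane constraints" step has a simple geometric core that can be sketched as intersecting the camera ray through a laser-spot pixel with the calibration-board plane; the intrinsics and plane below are illustrative values, and the paper additionally estimates these quantities and refines everything jointly.

```python
import numpy as np

def laser_spot_on_board(pixel, K, n, d):
    """Back-project a laser-spot pixel to 3D using the plane constraint
    n . X = d of the calibration board (all in camera coordinates)."""
    ray = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])  # viewing ray
    s = d / (n @ ray)                 # scale at which the ray meets the plane
    return s * ray

K = np.array([[800.0, 0.0, 320.0],    # illustrative pinhole intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
n = np.array([0.0, 0.0, 1.0])         # board facing the camera
d = 500.0                             # board 500 mm away
X = laser_spot_on_board((400.0, 300.0), K, n, d)
```

Repeating this for the laser spots at several board poses gives the 3D point pairs from which the LRF-camera extrinsics can be estimated.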
Affiliation(s)
- Zhuang Zhang
- Institute of Optics and Electronics of Chinese Academy of Sciences, Chengdu 610209, China
- University of Chinese Academy of Sciences, Beijing 100149, China
- Rujin Zhao
- Institute of Optics and Electronics of Chinese Academy of Sciences, Chengdu 610209, China
- Enhai Liu
- Institute of Optics and Electronics of Chinese Academy of Sciences, Chengdu 610209, China
- Kun Yan
- Institute of Optics and Electronics of Chinese Academy of Sciences, Chengdu 610209, China
- University of Chinese Academy of Sciences, Beijing 100149, China
- Yuebo Ma
- Institute of Optics and Electronics of Chinese Academy of Sciences, Chengdu 610209, China
- University of Chinese Academy of Sciences, Beijing 100149, China
|