1. Eckenhoff K, Geneva P, Huang G. MIMC-VINS: A Versatile and Resilient Multi-IMU Multi-Camera Visual-Inertial Navigation System. IEEE T ROBOT 2021. [DOI: 10.1109/tro.2021.3049445]

2. Garg S, Suenderhauf N, Milford M. Semantic–geometric visual place recognition: a new perspective for reconciling opposing views. Int J Rob Res 2019. [DOI: 10.1177/0278364919839761]
   Affiliation(s):
   - Sourav Garg: Australian Centre for Robotic Vision, Queensland University of Technology, Brisbane, Australia
   - Niko Suenderhauf: Australian Centre for Robotic Vision, Queensland University of Technology, Brisbane, Australia
   - Michael Milford: Australian Centre for Robotic Vision, Queensland University of Technology, Brisbane, Australia

3. Liu J, Hao K, Ding Y, Yang S, Gao L. Multi-State Self-Learning Template Library Updating Approach for Multi-Camera Human Tracking in Complex Scenes. INT J PATTERN RECOGN 2017. [DOI: 10.1142/s0218001417550163]
   Abstract: In multi-camera video tracking, the tracking scene and target appearance can become complex, and current tracking methods use entirely different databases and evaluation criteria. Herein, for the first time to our knowledge, we present a universally applicable template library updating approach for multi-camera human tracking, called multi-state self-learning template library updating (RS-TLU), which can be applied in different multi-camera tracking algorithms. In RS-TLU, self-learning divides tracking results into three states (steady, gradually changing, and suddenly changing) based on the similarity of objects to historical and instantaneous templates, because each state requires a different decision strategy. The tracking results for each state are then judged and learned using motion and occlusion information. Finally, the correct template is chosen from the robust template library. We evaluate the proposed method on three databases and 42 test videos, counting false positives, false matches, and missed tracking targets. Experimental results on 15 complex scenes demonstrate that, compared with state-of-the-art algorithms, RS-TLU increases the number of correct target templates and reduces the number of similar and erroneous templates in the template library.
   Affiliation(s):
   - Jian Liu: Engineering Research Center of Digitized Textile & Apparel Technology, Ministry of Education, College of Information Science and Technology, Donghua University, Shanghai 201620, P. R. China
   - Kuangrong Hao: Engineering Research Center of Digitized Textile & Apparel Technology, Ministry of Education, College of Information Science and Technology, Donghua University, Shanghai 201620, P. R. China
   - Yongsheng Ding: Engineering Research Center of Digitized Textile & Apparel Technology, Ministry of Education, College of Information Science and Technology, Donghua University, Shanghai 201620, P. R. China
   - Shiyu Yang: Engineering Research Center of Digitized Textile & Apparel Technology, Ministry of Education, College of Information Science and Technology, Donghua University, Shanghai 201620, P. R. China
   - Lei Gao: CSIRO, Private Mail Bag 2, Glen Osmond, SA 5064, Australia

4. Liu J, Hao K, Ding Y, Yang S, Gao L. Moving human tracking across multi-camera based on artificial immune random forest and improved colour-texture feature fusion. THE IMAGING SCIENCE JOURNAL 2017. [DOI: 10.1080/13682199.2017.1319608]

5. Paton M, Pomerleau F, MacTavish K, Ostafew CJ, Barfoot TD. Expanding the Limits of Vision-based Localization for Long-term Route-following Autonomy. J FIELD ROBOT 2016. [DOI: 10.1002/rob.21669]
   Affiliation(s):
   - Michael Paton: Institute for Aerospace Studies, University of Toronto, Toronto, ON, Canada
   - François Pomerleau: Institute for Aerospace Studies, University of Toronto, Toronto, ON, Canada
   - Kirk MacTavish: Institute for Aerospace Studies, University of Toronto, Toronto, ON, Canada
   - Chris J. Ostafew: Institute for Aerospace Studies, University of Toronto, Toronto, ON, Canada
   - Timothy D. Barfoot: Institute for Aerospace Studies, University of Toronto, Toronto, ON, Canada