Learning to perceive objects for autonomous navigation

Jing Peng, Bir Bhanu

Research output: Contribution to journal › Article


Abstract

Current machine perception techniques, which typically use segmentation followed by object recognition, lack the robustness required to cope with the large variety of situations encountered in real-world navigation. Many existing techniques are brittle in the sense that even minor changes in the expected task environment (e.g., different lighting conditions, geometrical distortion, etc.) can severely degrade the performance of the system or even make it fail completely. In this paper we present a system that achieves robust performance by using local reinforcement learning to induce a highly adaptive mapping from input images to segmentation strategies for successful recognition. This is accomplished by using the confidence level of model matching as reinforcement to drive learning. Local reinforcement learning gives rise to greater improvement in recognition performance. The system is verified through experiments on a large set of real images of traffic signs.
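The core idea described in the abstract, choosing a segmentation strategy per input and using the model-matching confidence as the reinforcement signal, can be sketched as a simple bandit-style learner. This is a minimal illustrative sketch, not the paper's actual algorithm: the names (`SEG_THRESHOLDS`, `match_confidence`, the image dictionaries) and the stand-in matching function are all hypothetical, and the real system learns over image features rather than class labels.

```python
import random

# Candidate segmentation strategies (here, hypothetical threshold settings).
SEG_THRESHOLDS = [0.2, 0.4, 0.6, 0.8]


def match_confidence(image, threshold):
    # Stand-in for "segment, then match against the object model".
    # Returns a confidence in [0, 1]; each fake image has a threshold
    # that segments it best. A real system would run actual matching.
    return max(0.0, 1.0 - abs(image["best"] - threshold))


def learn(images, episodes=2000, alpha=0.1, epsilon=0.1):
    # One action-value estimate per (image class, strategy) pair --
    # "local" in the sense that learning is tied to the kind of input,
    # rather than one global policy for all images.
    q = {(img["cls"], t): 0.0 for img in images for t in SEG_THRESHOLDS}
    rng = random.Random(0)
    for _ in range(episodes):
        img = rng.choice(images)
        if rng.random() < epsilon:                 # explore a strategy
            t = rng.choice(SEG_THRESHOLDS)
        else:                                      # exploit the best so far
            t = max(SEG_THRESHOLDS, key=lambda s: q[(img["cls"], s)])
        r = match_confidence(img, t)               # matching confidence = reward
        q[(img["cls"], t)] += alpha * (r - q[(img["cls"], t)])
    return q


if __name__ == "__main__":
    images = [{"cls": "stop", "best": 0.6}, {"cls": "yield", "best": 0.2}]
    q = learn(images)
    best = {c: max(SEG_THRESHOLDS, key=lambda s: q[(c, s)])
            for c in ("stop", "yield")}
    print(best)  # learned segmentation strategy per image class
```

Under this toy setup the learner converges to a different segmentation strategy for each image class, which is the adaptive input-to-strategy mapping the abstract describes.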

Original language: English
Pages (from-to): 187-201
Number of pages: 15
Journal: Autonomous Robots
Volume: 6
Issue number: 2
DOIs
State: Published - 1 Jan 1999
