Personalize Vision-based Human Following for Mobile Robots by Learning from Human-Driven Demonstrations

Lihua Jiang, Weitian Wang, Yi Chen, Yunyi Jia

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Human following is an important feature in various human-mobile-robot collaboration applications. Vision-based approaches that achieve human following through visual servoing control are commonly adopted. Such approaches, however, require the desired human-following parameters to be pre-defined, and they must extract features online from the acquired images to compute the human-following parameters that serve as the feedback of the visual servoing control. This paper proposes a novel visual servoing control based on non-vector space control theory, which enables the robot to personalize its desired human-following parameters as a desired image learned from human-driven demonstrations. The approach provides an easy and intuitive way for humans to personalize mobile robots to complete human-following tasks in the manner they prefer. Experimental results demonstrate the effectiveness and advantages of the proposed approach.
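To illustrate the idea described in the abstract, the sketch below shows one way a controller could use a whole desired image, recorded during a human-driven demonstration, as its set-valued reference instead of pre-defined, feature-based parameters. All function names, the binarization step, and the proportional control law are illustrative assumptions for this sketch, not the paper's actual non-vector-space formulation.

```python
import numpy as np

def image_set(img, thresh=0.5):
    """Binarize an intensity image into a set of 'on' pixels (a boolean mask).

    Hypothetical helper: treating the image as a pixel set is what lets the
    controller work without extracting geometric features from it.
    """
    return img > thresh

def set_difference_error(current, desired):
    """Size of the symmetric difference between the two pixel sets.

    This is the kind of set-valued error a non-vector-space controller
    would drive to zero: it is 0 exactly when the current view matches
    the demonstrated desired image.
    """
    return int(np.logical_xor(image_set(current), image_set(desired)).sum())

def servo_step(current, desired, gain=1e-3):
    """One illustrative proportional step: command magnitude scales with
    how far the current image set is from the demonstrated one."""
    return gain * set_difference_error(current, desired)
```

In this sketch, personalization amounts to simply swapping in a different demonstrated `desired` image: no following distance or offset parameters are ever specified explicitly.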

Original language: English
Title of host publication: RO-MAN 2018 - 27th IEEE International Symposium on Robot and Human Interactive Communication
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 726-731
Number of pages: 6
ISBN (Electronic): 9781538679807
DOIs
State: Published - 6 Nov 2018
Event: 27th IEEE International Symposium on Robot and Human Interactive Communication, RO-MAN 2018 - Nanjing, China
Duration: 27 Aug 2018 - 31 Aug 2018

Publication series

Name: RO-MAN 2018 - 27th IEEE International Symposium on Robot and Human Interactive Communication

Conference

Conference: 27th IEEE International Symposium on Robot and Human Interactive Communication, RO-MAN 2018
Country: China
City: Nanjing
Period: 27/08/18 - 31/08/18
