Modeling and learning of object placing tasks from human demonstrations in smart manufacturing

Yi Chen, Weitian Wang, Zhujun Zhang, Venkat N. Krovi, Yunyi Jia

Research output: Contribution to journal › Conference article › peer-review


Abstract

In this paper, we present a framework that enables a robot to learn how to place objects onto a workpiece by learning from human demonstrations in smart manufacturing. In the proposed framework, a rational scene dictionary (RSD) corresponding to the keyframes of the task (KFT) is used to identify the general object-action-location relationships, and a contour based on Generalized Voronoi Diagrams (GVD) is used to determine the relative position and orientation between the object and the corresponding workpiece in the final state. In the learning phase, we track the image segments throughout the human demonstration. Whenever the spatial relations of some segments change discontinuously, the state change is recorded in the RSD. The KFT is abstracted by traversing and searching the RSD, while the relative position and orientation of the object and the corresponding mount are represented by GVD-based contours for the keyframes. When the object, or the relative position and orientation between the object and the workpiece, changes, the GVD, as well as the shape of the contours extracted from it, changes accordingly. The Fourier Descriptor (FD) is applied to describe these differences in contour shape. The proposed framework is validated through experimental results.
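To make the FD step concrete, the sketch below (not the authors' code; function names and the invariance normalization are illustrative assumptions) shows one standard way to encode a closed contour, such as one extracted from a GVD, as a Fourier Descriptor that is invariant to translation, scale, rotation, and starting point, so two contour shapes can be compared by a simple distance.

```python
# A minimal sketch of the Fourier Descriptor (FD) step described in the
# abstract. Assumptions: the contour is a closed, ordered (N, 2) array of
# (x, y) boundary points; invariance is obtained by the usual FFT
# normalizations. This is illustrative, not the paper's implementation.
import numpy as np

def fourier_descriptor(contour: np.ndarray, n_coeffs: int = 16) -> np.ndarray:
    """Return an FD for an ordered (N, 2) array of boundary points."""
    # Represent each boundary point as a complex number x + iy.
    z = contour[:, 0] + 1j * contour[:, 1]
    coeffs = np.fft.fft(z)
    # Drop the DC term (removes translation dependence), keep the first
    # n_coeffs harmonics, normalize by the first harmonic's magnitude
    # (removes scale), and keep only magnitudes (removes rotation and
    # starting-point dependence).
    coeffs = coeffs[1:n_coeffs + 1]
    return np.abs(coeffs) / (np.abs(coeffs[0]) + 1e-12)

def contour_distance(c1: np.ndarray, c2: np.ndarray) -> float:
    """Euclidean distance between FDs; small values mean similar shapes."""
    return float(np.linalg.norm(fourier_descriptor(c1) - fourier_descriptor(c2)))
```

In a pipeline like the one described, such a distance could flag when the contour around an object-workpiece pair has changed shape between keyframes, i.e., when the placement relationship differs.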

Original language: English
Journal: SAE Technical Papers
Volume: 2019-April
Issue number: April
DOIs
State: Published - 2 Apr 2019
Event: SAE World Congress Experience, WCX 2019 - Detroit, United States
Duration: 9 Apr 2019 - 11 Apr 2019

