TY - JOUR
T1 - Modeling and learning of object placing tasks from human demonstrations in smart manufacturing
AU - Chen, Yi
AU - Wang, Weitian
AU - Zhang, Zhujun
AU - Krovi, Venkat N.
AU - Jia, Yunyi
N1 - Publisher Copyright:
© 2019 SAE International. All Rights Reserved.
PY - 2019/4/2
Y1 - 2019/4/2
N2 - In this paper, we present a framework for a robot to learn how to place objects onto a workpiece by learning from humans in smart manufacturing. In the proposed framework, the rational scene dictionary (RSD) corresponding to the keyframes of task (KFT) is used to identify general object-action-location relationships. A contour based on Generalized Voronoi Diagrams (GVD) is used to determine the relative position and orientation between the object and the corresponding workpiece at the final state. In the learning phase, we keep tracking the image segments in the human demonstration. Whenever a spatial relation among some segments changes in a discontinuous way, the state change is recorded in the RSD. The KFT is abstracted by traversing and searching the RSD, while the relative position and orientation of the object and the corresponding mount are represented by GVD-based contours for the keyframes. When the object, or the relative position and orientation between the object and the workpiece, changes, the GVD and the shape of the contours extracted from it also change. The Fourier Descriptor (FD) is applied to describe these differences in the shape of the contours in the GVD. The proposed framework is validated through experimental results.
UR - http://www.scopus.com/inward/record.url?scp=85064703405&partnerID=8YFLogxK
U2 - 10.4271/2019-01-0700
DO - 10.4271/2019-01-0700
M3 - Conference article
AN - SCOPUS:85064703405
SN - 0148-7191
VL - 2019-April
JO - SAE Technical Papers
JF - SAE Technical Papers
IS - April
T2 - SAE World Congress Experience, WCX 2019
Y2 - 9 April 2019 through 11 April 2019
ER -