Abstract
Human-robot collaborative assembly is a next-generation manufacturing paradigm in which the complementary strengths of humans and robots can be fully leveraged. To enable robots to collaborate effectively with humans, much as humans collaborate with one another, robot learning from human demonstrations has been adopted to learn assembly tasks. However, existing feature-based approaches require careful feature design and extraction and are usually complex to extend with task contexts, while existing learning-based approaches typically demand a large amount of manual effort for data labeling and also rarely consider task contexts. This article proposes a dual-input deep learning approach that incorporates task contexts into the robot learning from human demonstration process to assist humans in assembly. In addition, online automated data labeling during human demonstration is proposed to reduce the training effort. Experimental validation on a realistic human-robot model car assembly task, with safety-aware execution designs, demonstrates the effectiveness and advantages of the proposed approaches.
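The abstract does not detail the network architecture, but the "dual-input" idea can be illustrated with a minimal sketch: one branch encodes the demonstration observation, a second branch encodes a task-context vector, and the two embeddings are fused to predict the next assembly action. All names, layer choices, and dimensions below are illustrative assumptions, not the paper's actual design.

```python
# Hypothetical sketch of a dual-input network (PyTorch); the branch
# structure, sizes, and inputs are assumptions for illustration only.
import torch
import torch.nn as nn

class DualInputNet(nn.Module):
    def __init__(self, context_dim=16, num_actions=10):
        super().__init__()
        # Branch 1: convolutional encoder for a demonstration image.
        self.image_branch = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> (batch, 32)
        )
        # Branch 2: small MLP for a task-context vector
        # (e.g., current assembly step, parts already placed).
        self.context_branch = nn.Sequential(
            nn.Linear(context_dim, 32), nn.ReLU(),
        )
        # Fusion head: concatenate both embeddings, predict the next action.
        self.head = nn.Sequential(
            nn.Linear(32 + 32, 64), nn.ReLU(),
            nn.Linear(64, num_actions),
        )

    def forward(self, image, context):
        z = torch.cat(
            [self.image_branch(image), self.context_branch(context)], dim=1
        )
        return self.head(z)

# Usage: one RGB frame plus a 16-dim context vector -> action logits.
model = DualInputNet()
logits = model(torch.randn(1, 3, 64, 64), torch.randn(1, 16))
```

Fusing a learned observation embedding with an explicit context vector is one common way to condition a policy on task state; the paper's actual fusion mechanism may differ.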
| Original language | English |
|---|---|
| Pages (from-to) | 728-738 |
| Number of pages | 11 |
| Journal | IEEE Transactions on Systems, Man, and Cybernetics: Systems |
| Volume | 52 |
| Issue number | 2 |
| DOIs | |
| State | Published - 1 Feb 2022 |
Keywords
- Assembly
- deep learning
- human demonstration
- human-robot collaboration
- robot learning