Abstract
Most robots are programmed to carry out specific tasks routinely with minor variations. However, a growing number of applications from SMEs require robots to work alongside human workers. To smooth the collaborative task flow and improve collaboration efficiency, it is preferable for the robot to infer what kind of assistance a human coworker needs and to take the right action at the right time. This paper proposes a prediction-based human-robot collaboration model for assembly scenarios. An embedded learning-from-demonstration technique enables the robot to understand various task descriptions and customized working preferences. A state-enhanced convolutional long short-term memory (ConvLSTM)-based framework is formulated to extract high-level spatiotemporal features from the shared workspace and to predict future actions, facilitating fluent task transitions. This model allows the robot to adapt itself to predicted human actions and enables proactive assistance during collaboration. We applied the model to a seat-assembly experiment on a scale-model vehicle, in which the robot inferred the human worker's intentions, predicted the coworker's future actions, and supplied assembly parts accordingly. Compared with state-of-the-art methods without prediction awareness, the proposed framework has been verified to yield smoother collaboration, shorter idle times, and support for a wider range of working styles.
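To illustrate the general pattern behind the ConvLSTM-based prediction described above, the sketch below stacks convolutional LSTM layers to extract spatiotemporal features from a frame sequence of the workspace and outputs a distribution over future actions. This is a minimal illustration assuming TensorFlow/Keras; the input dimensions, number of actions, and layer sizes are hypothetical placeholders, and it omits the paper's state-enhancement mechanism rather than reproducing the authors' actual architecture.

```python
# Minimal ConvLSTM action-prediction sketch (illustrative only, not the
# authors' model). Assumes sequences of 16 RGB frames at 64x64 resolution
# and N_ACTIONS candidate future actions -- all hypothetical values.
import tensorflow as tf
from tensorflow.keras import layers, models

N_ACTIONS = 8  # hypothetical number of assembly actions

model = models.Sequential([
    layers.Input(shape=(16, 64, 64, 3)),           # (time, height, width, channels)
    layers.ConvLSTM2D(32, kernel_size=3, padding="same",
                      return_sequences=True),       # spatiotemporal features, full sequence
    layers.BatchNormalization(),
    layers.ConvLSTM2D(32, kernel_size=3, padding="same",
                      return_sequences=False),      # collapse the temporal dimension
    layers.GlobalAveragePooling2D(),                # pool spatial map to a feature vector
    layers.Dense(N_ACTIONS, activation="softmax"),  # distribution over next actions
])
model.compile(optimizer="adam", loss="categorical_crossentropy")
```

In a setup like this, the robot controller would feed a sliding window of recent workspace frames to the model and use the predicted next action to fetch the corresponding assembly part ahead of time.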
| Original language | English |
| --- | --- |
| Article number | 4279 |
| Journal | Sensors (Switzerland) |
| Volume | 22 |
| Issue number | 11 |
| DOIs | |
| State | Published - 1 Jun 2022 |
Keywords
- action prediction
- assembly
- deep learning
- human demonstration
- human-robot collaboration
- robot learning
- spatiotemporal