TY - GEN
T1 - Assisting Humans in Human-Robot Collaborative Assembly Contexts through Deep Q-Learning
AU - Modery, Garrett
AU - Wang, Weitian
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - Collaborative robots, affectionately referred to as 'cobots,' function alongside their human counterparts to help them complete a specific task. This differs from traditional systems, in which machines are assigned their own jobs and are often locked behind cages to prevent human access for the sake of safety. By removing these barriers and introducing collaborative systems, a new level of versatility and productivity is opened up in the contexts in which they are employed. Interest in human-robot interaction has grown in recent years, and alongside it the topic of teaching and learning from demonstration has been investigated. Several implementation methods have been developed for this topic, and while they are potentially effective, they still have gaps in versatility. Thus, we propose a different method of robot learning from demonstrations through the use of deep Q-networks. These networks permit effective learning not only from human demonstration data but also from direct feedback given by the collaborating user. The proposed solution is experimentally implemented in real-world human-robot collaborative tasks. Preliminary results and analysis suggest the competitive performance of the proposed approach. Future work of this study is also discussed.
AB - Collaborative robots, affectionately referred to as 'cobots,' function alongside their human counterparts to help them complete a specific task. This differs from traditional systems, in which machines are assigned their own jobs and are often locked behind cages to prevent human access for the sake of safety. By removing these barriers and introducing collaborative systems, a new level of versatility and productivity is opened up in the contexts in which they are employed. Interest in human-robot interaction has grown in recent years, and alongside it the topic of teaching and learning from demonstration has been investigated. Several implementation methods have been developed for this topic, and while they are potentially effective, they still have gaps in versatility. Thus, we propose a different method of robot learning from demonstrations through the use of deep Q-networks. These networks permit effective learning not only from human demonstration data but also from direct feedback given by the collaborating user. The proposed solution is experimentally implemented in real-world human-robot collaborative tasks. Preliminary results and analysis suggest the competitive performance of the proposed approach. Future work of this study is also discussed.
KW - algorithm
KW - collaborative tasks
KW - human-robot interaction
KW - learning from human demonstrations
KW - robotics
UR - http://www.scopus.com/inward/record.url?scp=105002704460&partnerID=8YFLogxK
U2 - 10.1109/URTC65039.2024.10937588
DO - 10.1109/URTC65039.2024.10937588
M3 - Conference contribution
AN - SCOPUS:105002704460
T3 - URTC 2024 - 2024 IEEE MIT Undergraduate Research Technology Conference, Proceedings
BT - URTC 2024 - 2024 IEEE MIT Undergraduate Research Technology Conference, Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2024 IEEE MIT Undergraduate Research Technology Conference, URTC 2024
Y2 - 11 October 2024 through 13 October 2024
ER -