Positioning Stabilization With Reinforcement Learning for Multi-Step Robot Positioning Tasks in Nvidia Omniverse (Tagungsband)

Open Access

  • false

Peer Reviewed

  • false

Abstract

  • For many years, universal robots have been widely adopted across a broad range of industries to meet diverse manufacturing requirements. The growing complexity of manufacturing environments has created increasing demand for automated programming approaches for robotic systems, where artificial intelligence methods have recently proven effective. Despite these advancements, the development of AI-driven controllers for universal robots performing complex, multi-step tasks still faces several challenges, including stability issues. This paper addresses these challenges by leveraging reinforcement learning to automate robot programming, with a particular focus on ensuring the stability of robot movements during multi-step positioning for inspection purposes. Our research formulates the multi-step robot positioning task as a reinforcement learning problem and develops a reward function that accounts for the robot's stability at the checkpoints. We conducted experiments using four different RL methods, namely PPO, TRPO, SAC, and TD3. Our findings indicate that TRPO outperforms the other methods, converging to an optimal controller. This study contributes to the field of robotics by providing a robust approach to enhancing the stability and efficiency of robot programming in complex manufacturing environments.
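The abstract does not give the paper's actual reward function, so the following is only an illustrative sketch of the general idea it describes: a dense term that drives the end effector toward the next checkpoint, plus a bonus near the checkpoint that is reduced by residual motion, so the agent is rewarded for arriving *and holding still*. All names, tolerances, and weights (`pos_tol`, `vel_weight`) are assumptions, not values from the paper.

```python
import numpy as np

def checkpoint_stability_reward(ee_pos, ee_vel, checkpoint,
                                pos_tol=0.01, vel_weight=0.5):
    """Illustrative reward for stable multi-step positioning (not the paper's exact formula).

    ee_pos, ee_vel: end-effector position and velocity (3-vectors)
    checkpoint:     target position for the current step
    pos_tol:        distance (m) within which the checkpoint counts as reached
    vel_weight:     how strongly residual motion at the checkpoint is penalized
    """
    dist = float(np.linalg.norm(np.asarray(ee_pos) - np.asarray(checkpoint)))
    reward = -dist  # dense shaping: closer to the checkpoint is better
    if dist < pos_tol:
        # stability bonus: full credit only if the arm is at rest at the checkpoint
        reward += 1.0 - vel_weight * float(np.linalg.norm(ee_vel))
    return reward
```

A reward of this shape could be plugged into any of the compared algorithms (PPO, TRPO, SAC, TD3), e.g. via a Gymnasium-style environment's `step()` method; the stability term is what distinguishes "passing through" a checkpoint from settling at it.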