Solving a Dynamic Scheduling Problem for a Manufacturing System with Reinforcement Learning

Open Access

  • false

Peer Reviewed

  • false

Abstract

  • A recent trend in the manufacturing community toward greater customization capabilities and shorter production cycles has necessitated an increased level of flexibility and, consequently, more complexity in the development of control mechanisms. This increased complexity, together with the presence of uncertainty in such problems, has prompted researchers to pursue intelligent controllers that can adapt to changes in these dynamic environments. One common problem in manufacturing systems is production scheduling, particularly when job assignment must be optimized under uncertainty. In recent years, reinforcement learning has become increasingly popular for developing intelligent systems due to its ability to handle uncertainty. In this paper, we develop an intelligent controller for our IIoT Test Bed at the HTW Dresden for production scheduling with uncertain operation times. We employ three state-of-the-art reinforcement learning methods, namely DDQN, PPO, and RecurrentPPO, to perform dynamic production scheduling. The results show that the reinforcement learning agents converge to solutions that are superior to those of traditional heuristic methods for the complex problem of scheduling under uncertainty.
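
  • As a rough illustration of the kind of setup the abstract describes, the sketch below trains a PPO agent on a toy job-assignment environment with stochastic operation times. It assumes the Stable-Baselines3 library (which provides PPO; RecurrentPPO is available via sb3-contrib) and a hypothetical ToySchedulingEnv defined here for illustration only; it is not the paper's IIoT test bed or its actual experimental configuration.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO


class ToySchedulingEnv(gym.Env):
    """Toy stand-in for a dynamic scheduling problem: each arriving job is
    assigned to one of several machines whose operation times are stochastic.
    This is an illustrative sketch, not the paper's IIoT test bed."""

    def __init__(self, n_machines=3, n_jobs=20, seed=None):
        super().__init__()
        self.n_machines = n_machines
        self.n_jobs = n_jobs
        self.rng = np.random.default_rng(seed)
        # Observation: fraction of jobs remaining plus the current load of each machine.
        self.observation_space = spaces.Box(
            low=0.0, high=np.inf, shape=(n_machines + 1,), dtype=np.float32
        )
        # Action: index of the machine to which the next job is assigned.
        self.action_space = spaces.Discrete(n_machines)

    def _obs(self):
        return np.concatenate(
            ([self.jobs_left / self.n_jobs], self.loads)
        ).astype(np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.loads = np.zeros(self.n_machines, dtype=np.float32)
        self.jobs_left = self.n_jobs
        return self._obs(), {}

    def step(self, action):
        # Uncertain operation time: each machine has a different mean duration.
        duration = self.rng.exponential(scale=1.0 + 0.5 * int(action))
        self.loads[action] += duration
        self.jobs_left -= 1
        terminated = self.jobs_left == 0
        # Penalise processing time per step and the final makespan proxy.
        reward = -float(self.loads.max()) if terminated else -float(duration)
        return self._obs(), reward, terminated, False, {}


if __name__ == "__main__":
    env = ToySchedulingEnv(seed=0)
    model = PPO("MlpPolicy", env, verbose=0)
    model.learn(total_timesteps=10_000)
```

  • A DDQN-style agent or a RecurrentPPO policy (from sb3-contrib) could be trained on the same environment interface; the environment, reward shaping, and hyperparameters above are assumptions for illustration, not values reported in the paper.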