Preprint / Version 1

To Centralize or Decentralize? Examining Cyber-Physical Control on the Manufacturing Shop Floor

Authors

  • Akash Agrawal, Carnegie Mellon University
  • Sung Jun Won
  • Tushar Sharma
  • Mayuri Deshpande
  • Christopher McComb

DOI:

https://doi.org/10.31224/3567

Keywords:

Reinforcement Learning, Multi-agent systems, Intelligent Manufacturing, Autonomous mobile robots, Job scheduling

Abstract

Multi-agent Reinforcement Learning (RL) frameworks for job scheduling and navigation control of autonomous mobile robots are becoming increasingly common as a means of increasing productivity in manufacturing. Centralized and decentralized frameworks have emerged as the two dominant archetypes for these systems. However, the tradeoffs between these competing archetypes in terms of efficiency, stability, robustness, accuracy, generalizability, and scalability are not well understood. This work investigates the time efficiency, learning stability, and robustness to operational disruptions of an exemplar decentralized RL framework in comparison to a centralized RL framework. Specifically, several policies with increasing computational budgets are trained using both frameworks and then evaluated on the throughput and safety of the shop floor in static and dynamic tests. We observe that the decentralized framework yields a high-performing policy at a significantly lower training budget than the centralized one. However, the centralized framework exhibits superior learning stability as well as robustness to the initialization of robot locations in static testing. Furthermore, we compare the robustness of the frameworks in dynamic tests, finding that the decentralized framework provides better compensation for processing delays and failures in real time.
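The core architectural distinction in the abstract can be sketched in a few lines. The toy policies below are purely illustrative (they are not the paper's trained RL policies, and all names and the state encoding are assumptions): a centralized controller maps the full shop-floor state to one joint action for all robots, while in the decentralized setup each robot maps only its local observation to its own action.

```python
# Illustrative sketch of centralized vs. decentralized control for
# autonomous mobile robots; placeholder logic, not the paper's method.
N_ROBOTS = 3
N_ACTIONS = 4  # e.g., stay at station or move to one of three stations


def centralized_policy(global_state):
    """One controller observes the full shop-floor state and emits a
    joint action (one action per robot) in a single decision."""
    # Placeholder decision rule: derive each robot's action from the
    # global state plus the robot's index.
    return [(sum(global_state) + i) % N_ACTIONS for i in range(N_ROBOTS)]


def decentralized_policy(local_obs):
    """Each robot runs its own copy of this policy on its local
    observation only (e.g., its own position and nearby job queue)."""
    return sum(local_obs) % N_ACTIONS


# Global state here is just each robot's station index; each robot's
# local observation is its own slice of that state.
global_state = [2, 0, 5]
joint_action = centralized_policy(global_state)          # one decision
per_robot_actions = [decentralized_policy([s]) for s in global_state]
```

The sketch also hints at the scaling tradeoff discussed in the abstract: the centralized controller's input and output grow with the number of robots, whereas each decentralized policy's interface stays fixed as robots are added.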

Downloads

Download data is not yet available.

Posted

2024-02-26