Planning and Learning For Decentralized MDPs With Event Driven Rewards

Decentralized (PO)MDPs provide a rigorous framework for sequential multiagent decision making under uncertainty. However, their high computational complexity limits their practical impact. To address scalability and real-world applicability, we focus on settings where a large number of agents interact primarily through complex joint rewards that depend on their entire histories of states and actions. Such history-based rewards capture the notion of events or tasks: the team reward is given only when the joint task is completed. Algorithmically, we contribute:

  • A nonlinear programming (NLP) formulation for such an event-based planning model. Empirically, however, NLP solvers were slow, failed to scale as the number of agents increased, and often ran out of memory.
  • A probabilistic inference approach based on expectation maximization (EM) that scales much better than NLP solvers to large numbers of agents.
  • A policy-gradient-based multiagent reinforcement learning approach that scales well even for exponential state spaces (see the sketch after this list).

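To make the event-driven reward structure concrete, below is a minimal, self-contained sketch (Python with NumPy) of independent-learner policy gradients under a history-based team reward. It is an illustration only, built on assumed toy settings (a small line-world, per-agent target cells, vanilla REINFORCE with a mean-return baseline); it is not the formulation or algorithm developed in this work.

    # Illustrative sketch (NOT the paper's algorithm): agents move on a small
    # line of cells, and the team reward depends on the joint history -- it is
    # 1 only if every agent has visited its (assumed) target cell at some
    # point during the episode, and 0 otherwise.
    import numpy as np

    rng = np.random.default_rng(0)
    n_agents, n_cells, horizon = 3, 5, 8     # assumed toy sizes
    targets = [4, 0, 2]                      # assumed per-agent target cells
    n_actions = 2                            # 0: move left, 1: move right

    # Independent tabular softmax policies, one per agent: theta[i, s, a]
    theta = np.zeros((n_agents, n_cells, n_actions))

    def softmax(x):
        z = np.exp(x - x.max())
        return z / z.sum()

    def rollout(theta):
        """Sample one episode; return per-agent (state, action) trajectories
        and the single event-driven team reward."""
        states = [n_cells // 2] * n_agents
        visited = [False] * n_agents          # history feature: target visited?
        trajs = [[] for _ in range(n_agents)]
        for _ in range(horizon):
            for i in range(n_agents):
                probs = softmax(theta[i, states[i]])
                a = rng.choice(n_actions, p=probs)
                trajs[i].append((states[i], a))
                states[i] = np.clip(states[i] + (1 if a == 1 else -1),
                                    0, n_cells - 1)
                visited[i] |= (states[i] == targets[i])
        # Team reward is given only when the joint task is completed.
        return trajs, float(all(visited))

    def reinforce(theta, iters=300, batch=32, lr=0.5):
        """REINFORCE with a mean-return baseline; each agent updates its own
        policy from the shared, event-driven team reward."""
        for it in range(iters):
            score_grads, returns = [], []
            for _ in range(batch):
                trajs, R = rollout(theta)
                g = np.zeros_like(theta)
                for i in range(n_agents):
                    for s, a in trajs[i]:
                        probs = softmax(theta[i, s])
                        g[i, s] -= probs
                        g[i, s, a] += 1.0     # gradient of log pi_i(a | s)
                score_grads.append(g)
                returns.append(R)
            baseline = np.mean(returns)
            grad = sum((R - baseline) * g
                       for g, R in zip(score_grads, returns)) / batch
            theta += lr * grad                # gradient ascent on team return
            if (it + 1) % 100 == 0:
                print(f"iter {it + 1}: mean team reward {np.mean(returns):.2f}")
        return theta

    if __name__ == "__main__":
        reinforce(theta)

Note that each agent conditions only on its own local state, while the single sparse reward is a function of the joint history, which mirrors the event-based setting described above.
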
Our inference- and RL-based advances enable us to solve a large real-world multiagent coverage problem, modeling the schedule coordination of agents in a real urban subway network, where other approaches fail to scale.