Speaker
Rayadurgam Srikant
Abstract
We consider policy optimization methods in reinforcement learning settings where the state space is arbitrarily large, or even countably infinite. The motivation arises from control problems in communication networks, matching markets, and other queueing systems. Specifically, we consider the popular Natural Policy Gradient (NPG) algorithm, which has been studied in the past only under the assumption that the cost is bounded and the state space is finite, neither of which holds for the aforementioned control problems. Assuming a Lyapunov drift condition, which is naturally satisfied in some cases and can be satisfied in other cases at a small cost in performance, we design a state-dependent step-size rule that dramatically improves the performance of NPG for our intended applications. In addition to experimentally verifying the performance improvement, we also theoretically show that the iteration complexity of NPG can be made independent of the size of the state space. The key analytical tool we use is the connection between NPG step sizes and the solution to Poisson’s equation. In particular, we provide policy-independent bounds on the solution to Poisson’s equation, which are then used to guide the choice of NPG step sizes.
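A minimal sketch of the objects mentioned above, in generic notation that is assumed here rather than taken from the talk: with a softmax (tabular) policy parameterization, NPG with a state-dependent step size $\eta(s)$ updates the policy multiplicatively,
\[
\pi_{t+1}(a \mid s) \;\propto\; \pi_t(a \mid s)\,\exp\!\bigl(-\eta(s)\,Q^{\pi_t}(s,a)\bigr),
\]
where $Q^{\pi_t}$ is a relative state-action cost function (the minus sign reflects cost minimization). For a policy $\pi$ with transition kernel $P_\pi$, one-step cost $c_\pi$, and average cost $J^\pi$, Poisson's equation asks for a relative value function $V^\pi$ satisfying
\[
V^\pi(s) \;=\; c_\pi(s) - J^\pi + \sum_{s'} P_\pi(s' \mid s)\,V^\pi(s'),
\]
and a Foster-Lyapunov drift condition, roughly $\mathbb{E}\bigl[\Phi(s_{t+1}) \mid s_t = s\bigr] - \Phi(s) \le -\delta$ for a nonnegative function $\Phi$ outside a finite set of states, is the standard route to bounding $V^\pi$ in terms of $\Phi$ without reference to the particular policy. The policy-independent bounds on the solution to Poisson's equation mentioned in the abstract are what then guide the choice of $\eta(s)$.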
Bio
R. Srikant is one of two co-Directors of the C3.ai Digital Transformation Institute, a Grainger Distinguished Chair in Engineering, and a Professor in the Department of Electrical and Computer Engineering and in the Coordinated Science Lab. His research interests include machine learning, applied probability, stochastic control, and communication networks.
He is the recipient of the 2015 INFOCOM Achievement Award, the 2019 IEEE Koji Kobayashi Computers and Communications Award, and the 2021 ACM SIGMETRICS Achievement Award. He has also received several Best Paper awards, including the 2015 INFOCOM Best Paper Award, the 2017 Applied Probability Society Best Publication Award, and the 2017 WiOpt Best Paper Award. He was the Editor-in-Chief of the IEEE/ACM Transactions on Networking from 2013 to 2017 and is currently an Area Editor for Mathematics of Operations Research. More than twenty of his former advisees are on the faculty of top universities in the US and around the world, and the rest are in leadership or R&D positions in leading companies.