From Markov Decision Process to Reinforcement Learning
Release time: May 27, 2019

Topic: From Markov Decision Process to Reinforcement Learning

Speaker: Prof. Chi-Guhn Lee, University of Toronto

Time: 10:00 AM, May 29, 2019

Venue: Room 320, Weimin Building

Abstract:

In this talk I will share my personal research journey from Markov decision processes, an elegant mathematical framework for sequential decision making, to reinforcement learning, a branch of machine learning for sequential decision making. The story goes back to a dynamic pricing problem for a condominium building in Toronto, in which we tried to set optimal prices for 350 units with varying features. While we were able to formulate this challenging optimization problem as a Markov decision process and prove the optimality of the pricing scheme, we were doomed by the curse of dimensionality. This set us on a new journey into a well-studied area of machine learning: reinforcement learning. I will present multiple case studies involving reinforcement learning before moving to a few projects that focus more on theoretical investigation of various aspects of this machine learning method.
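
For context, below is a minimal value-iteration sketch of a toy pricing MDP. The state, price levels, and sale probabilities are invented for illustration and are not the speaker's actual formulation; the point is that scaling the state from a single unsold-unit count to 350 heterogeneous units with many features is exactly where the curse of dimensionality mentioned in the abstract strikes.

import numpy as np

# Toy pricing MDP (illustrative only): state = number of unsold units,
# action = price level charged this period, at most one unit sells per period.
n_units = 10                                 # kept tiny so the value table fits in memory
prices = [1.0, 1.5, 2.0]                     # hypothetical price levels
sell_prob = {1.0: 0.8, 1.5: 0.5, 2.0: 0.2}   # hypothetical chance a unit sells at each price
gamma = 0.95                                 # discount factor

V = np.zeros(n_units + 1)                    # V[s] = value with s unsold units
for _ in range(1000):
    V_new = np.zeros_like(V)
    for s in range(1, n_units + 1):
        # Bellman backup: expected revenue now plus discounted future value
        q = [sell_prob[p] * (p + gamma * V[s - 1]) + (1 - sell_prob[p]) * gamma * V[s]
             for p in prices]
        V_new[s] = max(q)
    if np.max(np.abs(V_new - V)) < 1e-8:     # stop once the values have converged
        V = V_new
        break
    V = V_new

print("Optimal value by number of unsold units:", np.round(V, 3))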

Biography of the Speaker:

Chi-Guhn Lee is a professor and the Director of the Centre for Maintenance Optimization & Reliability Engineering (C-MORE) in the Department of Mechanical and Industrial Engineering at the University of Toronto. He received his Ph.D. in Industrial & Operations Engineering from the University of Michigan, Ann Arbor, and joined the University of Toronto faculty in 2001. Prior to his Ph.D. studies, he spent over three years at Samsung SDS in Seoul, Korea, leading a project on a reusable OOP library for rapid prototyping of system integration software. Professor Lee has done both theoretical and applied research in dynamic optimization under uncertainty. His theoretical work includes accelerated value iteration algorithms for Markov decision processes, progressive basis-function approximation of the value function space, multivariate Bayesian control chart optimization, and optimal learning with multi-armed bandit models. His applied interests are diverse, ranging from supply chain optimization and financial engineering to dynamic pricing and healthcare optimization. In recent years, he and his team have actively incorporated machine learning algorithms into their research portfolio; in particular, he is currently active in reinforcement learning, inverse reinforcement learning, and deep reinforcement learning. Professor Lee is an associate editor of Enterprise Information Systems and the International Journal of Industrial Engineering and serves on several other editorial boards.



School of Reliability and Systems Engineering
