
Markov Decision Processes, Classically and Coalgebraically

February 14, 2017 @ 1:00 pm - 3:00 pm

Speaker: Frank Feys (TU Delft)
Abstract:

Markov Decision Processes (MDPs) provide a formal framework for modeling sequential decision making when outcomes are uncertain. MDPs have been applied in various areas, including robotics, agriculture, and power system economics, as well as finance and investment theory. An MDP is a state-based system where, in every state, an agent makes a choice of action that results in a reward and a probabilistic transition to a next state. The special instance in which there are no rewards and the agent has only one choice in each state is the well-known Markov chain. In an MDP, the objective of the agent is to make choices such that the expected total rewards, the long-term values, are maximized. A rule that dictates to the agent, in each state, which choice to make in order to achieve such an "optimal outcome" is called an optimal policy. A remarkable result of the theory of Markov Decision Processes is that an optimal policy always exists. In this talk, we review the classical theory of MDPs, including two algorithms for finding optimal policies: Policy Iteration and Value Iteration. Finally, we also discuss how Markov Decision Processes can be aptly analyzed using coalgebraic methods.
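To make the Value Iteration algorithm mentioned in the abstract concrete, below is a minimal sketch on a toy MDP. The states, actions, rewards, transition probabilities, and discount factor are hypothetical illustrations, not taken from the talk; the sketch simply iterates the Bellman optimality update until the values converge and then reads off a greedy policy.

```python
# Toy MDP (hypothetical): in state s, action a gives reward R[s][a] and moves
# to state t with probability P[s][a][t].
P = {
    "s0": {"stay": {"s0": 1.0}, "go": {"s1": 0.9, "s0": 0.1}},
    "s1": {"stay": {"s1": 1.0}, "go": {"s0": 1.0}},
}
R = {
    "s0": {"stay": 0.0, "go": 1.0},
    "s1": {"stay": 2.0, "go": 0.0},
}
gamma = 0.9  # discount factor


def value_iteration(P, R, gamma, tol=1e-8):
    """Repeatedly apply the Bellman optimality update until convergence."""
    V = {s: 0.0 for s in P}
    while True:
        V_new = {
            s: max(
                R[s][a] + gamma * sum(p * V[t] for t, p in P[s][a].items())
                for a in P[s]
            )
            for s in P
        }
        if max(abs(V_new[s] - V[s]) for s in P) < tol:
            return V_new
        V = V_new


def greedy_policy(V, P, R, gamma):
    """Extract a policy by choosing, in each state, the action with the best one-step lookahead."""
    return {
        s: max(
            P[s],
            key=lambda a: R[s][a] + gamma * sum(p * V[t] for t, p in P[s][a].items()),
        )
        for s in P
    }


V = value_iteration(P, R, gamma)
print("values:", V)
print("policy:", greedy_policy(V, P, R, gamma))
```

In this toy instance the iteration converges to the long-term values and the greedy extraction yields an optimal policy, illustrating the existence result stated in the abstract.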

Details

Date:
February 14, 2017
Time:
1:00 pm - 3:00 pm

Venue

Room J, Building 31
Jaffalaan 5, Delft, Netherlands