Multi-Task Reinforcement Learning with Imagination-based Agents

Deep Reinforcement Learning (DRL) has proved to be a powerful technique that allows autonomous agents to learn optimal behaviours (policies) in unknown and complex environments, guided only by rewards and penalties [1].
By extending DRL with formal specifications expressed in Temporal Logic (TL), researchers have developed algorithms able to learn multiple tasks at the same time [2].
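To make this concrete, tasks can be written as temporal-logic formulas over propositions the agent observes, and a formula can be "progressed" after every step to track how much of the task remains. The sketch below is illustrative only: it covers a small hand-picked LTL fragment with an ad-hoc tuple representation, and is not the formalism or the code of [2].

    def progress(formula, true_props):
        """Rewrite a temporal formula after observing the set of currently true propositions."""
        if formula in ("True", "False"):
            return formula
        op = formula[0]
        if op == "prop":                       # atomic proposition, e.g. ("prop", "at_goal")
            return "True" if formula[1] in true_props else "False"
        if op in ("and", "or"):
            left = progress(formula[1], true_props)
            right = progress(formula[2], true_props)
            if op == "and":
                if "False" in (left, right):
                    return "False"
                if left == "True":
                    return right
                if right == "True":
                    return left
            else:
                if "True" in (left, right):
                    return "True"
                if left == "False":
                    return right
                if right == "False":
                    return left
            return (op, left, right)
        if op == "eventually":                 # F phi: progressed to prog(phi) or F phi
            inner = progress(formula[1], true_props)
            if inner == "True":
                return "True"
            if inner == "False":
                return formula
            return ("or", inner, formula)
        if op == "always_not":                 # G !p: a simple safety pattern
            return "False" if formula[1] in true_props else formula
        raise ValueError(f"unsupported operator: {op}")

    # Two tasks over the same environment: "eventually reach the goal", and
    # "get the key and then open the door, while never stepping on lava".
    task_reach = ("eventually", ("prop", "at_goal"))
    task_key_door = ("and",
                     ("eventually", ("and", ("prop", "has_key"),
                                     ("eventually", ("prop", "door_open")))),
                     ("always_not", "on_lava"))

    print(progress(task_reach, {"at_goal"}))     # "True": the task is accomplished
    print(progress(task_key_door, {"on_lava"}))  # "False": the safety constraint is violated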

The early success of model-free RL [1], i.e., the family of RL algorithms [3] that simply "react" to their environment without building an explicit model of it or planning ahead, explains why the great majority of work on Safe RL and Temporal Logic has focused on model-free methods.

In contrast, model-based RL agents try to understand their surroundings and build their own model of the world around them. They then use this model to "imagine" what will happen in the future under the different actions available, and choose the policy with the maximum expected reward. Earlier model-based methods lagged behind their model-free counterparts. However, promising new contributions have sparked renewed interest in this area: MuZero [4], an algorithm that plays Atari, Go, Chess and Shogi at super-human level; and PlaNet [5] and Dreamer [6], which achieved impressive results on modelling and locomotion tasks in visual-input environments while requiring significantly less training data than model-free agents.
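As a rough illustration of this "imagine, then act" loop, the sketch below plans with a learned model by sampling random action sequences, simulating them entirely inside the model, and executing the first action of the best imagined sequence. It is a generic random-shooting planner under assumed learned components (dynamics_model, reward_model), not the actual algorithm of [4], [5] or [6].

    import numpy as np

    def plan_by_imagination(state, dynamics_model, reward_model,
                            n_candidates=256, horizon=12, n_actions=4, rng=None):
        """Pick an action by imagining random action sequences with a learned model."""
        rng = rng or np.random.default_rng()
        plans = rng.integers(0, n_actions, size=(n_candidates, horizon))
        returns = np.zeros(n_candidates)
        for i, plan in enumerate(plans):
            imagined_state = state
            for action in plan:
                # The real environment is never touched here: the next state and the
                # reward are both predicted by the agent's own learned model of the world.
                imagined_state = dynamics_model(imagined_state, action)
                returns[i] += reward_model(imagined_state, action)
        best_plan = plans[np.argmax(returns)]
        return best_plan[0]  # execute only the first action, then replan (MPC-style)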

Your contribution: The goal of this project is to design a new method that combines Model-based RL with Temporal-Logic-based specifications in multi-task, safety-aware scenarios. You will first explore the literature on RL with TL and on Model-based RL to become familiar with the topics. Then you will implement an algorithm combining these components and apply it to a multi-task scenario similar to the one in [2].
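Purely to make the goal concrete, one hypothetical way the two ingredients sketched above could fit together is to score imagined rollouts by progressing an LTL task formula on the propositions predicted for each imagined state; predict_props stands for an assumed learned labelling function, and this is a starting point for discussion rather than a prescribed design.

    def imagined_task_return(state, plan, dynamics_model, predict_props, task_formula):
        """Score an imagined action sequence by whether it satisfies an LTL task."""
        formula = task_formula
        for action in plan:
            state = dynamics_model(state, action)                # imagined transition
            formula = progress(formula, predict_props(state))    # progress() as sketched above
            if formula == "True":
                return 1.0   # task completed in imagination
            if formula == "False":
                return -1.0  # task failed or safety constraint violated in imagination
        return 0.0           # undecided within the planning horizon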

[1] Mnih, V., Kavukcuoglu, K., Silver, D., et al. (2015). Human-level control through deep reinforcement learning. Nature, 518(7540), 529-533. https://web.stanford.edu/class/psych209/Reading/MnihEtAlHassibis15NatureControlDeepRL.pdf

[2] Toro Icarte, R., Klassen, T. Q., Valenzano, R., & McIlraith, S. A. (2018). Teaching multiple tasks to an RL agent using LTL. In Proceedings of the International Conference on Autonomous Agents and MultiAgent Systems (AAMAS). http://www.cs.toronto.edu/~rntoro/docs/LPOPL.pdf

[3] Sutton, R. S., & Barto, A. G. (2018). Reinforcement learning: An introduction. MIT press.

[4] Schrittwieser, J., Antonoglou, I., Hubert, T., Simonyan, K., Sifre, L., Schmitt, S., … & Lillicrap, T. (2019). Mastering atari, go, chess and shogi by planning with a learned model. arXiv preprint arXiv:1911.08265.

[5] Hafner, D., Lillicrap, T., Fischer, I., Villegas, R., Ha, D., Lee, H., & Davidson, J. (2019, May). Learning latent dynamics for planning from pixels. In International Conference on Machine Learning (pp. 2555-2565). PMLR.

[6] Hafner, D., Lillicrap, T., Ba, J., & Norouzi, M. (2019). Dream to control: Learning behaviors by latent imagination. arXiv preprint arXiv:1912.01603.

Project ID

STAI-CDT-2023-IC-5

Supervisor

Dr Francesco Belardinelli (https://www.doc.ic.ac.uk/~fbelard/)

Category

Logic, Verification