During my master's I trained a population of LSTMs to trade profitably in a simulated stock market. Between my master's and PhD I worked as a Research Analyst for a financial services firm, though the experience was very different from that of academic research.
My initial PhD project was on data-backed decision making, looking at how to incorporate non-monotonic reasoning and sub-symbolic reasoning techniques. This has since wandered into the field of opponent modelling: constructing a mental model of other agents using probabilistic inference and machine learning.
There are a number of reasons I was interested in the CDT. Having worked in a financial services firm for a few months after my master's, where there was a constant refrain about the dangers of AI traders and the influx of AI into the financial services industry, a CDT focusing on safe and trusted AI techniques seemed very interesting. Beyond that, I must confess that I was keen to be part of a CDT for my PhD. The experience of having a cohort makes things quite a bit easier: while you are still very much alone with respect to your own research, having a group of like-minded people around you to bounce ideas off and to support each other is invaluable. Aside from the support of a cohort, the masterclasses on various aspects of safe and trusted AI are fantastic for wider research purposes and give a good overview of the field of AI in general.