The rise of fake news and misinformation is a threat to our societies. Even though we cannot always quantify its effect, it is clear that misinformation polarises society, often leads to violence, and promotes racism. The results can be devastating, ranging from political instability to genocide. Much of today's fake news detection relies on human intervention, which is often too slow to stop the spread; reliable automated fake news detection is needed. To make matters more difficult, a platform such as Facebook or Twitter cannot simply delete suspicious messages without providing an explanation to its users. This is what this project aims to facilitate.
This project aims to develop AI methods to ensure that intelligent algorithms that are used to identify fake news and the sources of fake news are themselves trustworthy and are able to provide human-understandable explanations for their decisions.
In particular, in the first part of the project, the student will design a model of how fake news propagates in social networks, modelling the network as a graph whose nodes are the agents and whose edges represent relationships such as one agent following another (Twitter) or two agents being “friends” (Facebook).
Some of the agents are malicious, and their aim is to propagate fake news. To build a realistic model, the student will analyse existing data (e.g., Twitter Database ), read relevant literature describing the fake news mechanisms employed (e.g., ) and talk to engineers working on fake news detection. At this stage, Diego Seaz-Trumper from the Wikimedia Foundation has agreed to meet regularly and assist with this. We will also reach out to Twitter, Facebook and LinkedIn.
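As a minimal sketch of the kind of propagation model the first part of the project could start from, the snippet below simulates an independent-cascade style diffusion over a toy follower graph with one malicious seed agent. The graph, agent names, and spreading probability are all illustrative assumptions, not real data or the project's committed design.

```python
import random

# Toy social network as a directed graph: FOLLOWERS[u] lists the agents
# who follow u, so content posted by u can reach them. All names and
# probabilities here are hypothetical placeholders.
FOLLOWERS = {
    "mallory": ["alice", "bob"],   # "mallory" is the malicious agent
    "alice":   ["carol", "dave"],
    "bob":     ["dave"],
    "carol":   [],
    "dave":    ["erin"],
    "erin":    [],
}

def simulate_cascade(seeds, p=0.5, rng=None):
    """Independent-cascade model: each newly reached agent gets one
    chance to pass the fake story on to each follower with probability p."""
    rng = rng or random.Random(0)
    infected = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for agent in frontier:
            for follower in FOLLOWERS[agent]:
                if follower not in infected and rng.random() < p:
                    infected.add(follower)
                    nxt.append(follower)
        frontier = nxt
    return infected

# Which agents does a fake story seeded at "mallory" eventually reach?
print(sorted(simulate_cascade(["mallory"], p=0.9)))
```

Running the cascade many times with different random seeds would give the expected reach of a malicious agent, which is the quantity later parts of the project need to estimate.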
The second part of the project aims to develop efficient AI methods that block fake news automatically in an explainable way. Using a graph/network of influences makes it possible to identify where in the network interventions will have the most impact on the diffusion of information, and thus where to focus resources for the greatest effect.
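One simple way to make "where interventions have the most impact" concrete is to estimate, by Monte-Carlo simulation, how much blocking each single agent shrinks the expected spread of a story. The sketch below does this on a hypothetical follower graph; the graph, probabilities, and the greedy single-node criterion are illustrative assumptions only.

```python
import random

# Hypothetical follower graph: FOLLOWERS[u] lists who sees what u posts.
FOLLOWERS = {
    "src": ["hub", "a"],
    "hub": ["b", "c", "d"],   # a highly connected relay account
    "a":   [],
    "b":   [], "c": [], "d": [],
}

def expected_spread(seed, blocked=frozenset(), p=0.8, runs=2000, seed_rng=0):
    """Monte-Carlo estimate of how many agents a story seeded at `seed`
    reaches under an independent-cascade model, with `blocked` agents
    removed from the network (the intervention)."""
    rng = random.Random(seed_rng)
    total = 0
    for _ in range(runs):
        infected = {seed}
        frontier = [seed]
        while frontier:
            nxt = []
            for u in frontier:
                for v in FOLLOWERS[u]:
                    if v not in infected and v not in blocked and rng.random() < p:
                        infected.add(v)
                        nxt.append(v)
            frontier = nxt
        total += len(infected)
    return total / runs

# Rank single-node interventions by how much they reduce the expected spread.
baseline = expected_spread("src")
impact = {v: baseline - expected_spread("src", blocked=frozenset({v}))
          for v in FOLLOWERS if v != "src"}
best = max(impact, key=impact.get)
print(best)
```

Blocking the well-connected relay account dominates here, which is exactly the kind of conclusion an explainable system should be able to justify to users: the intervention point is chosen because of its measurable effect on diffusion, not opaquely.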
For this, it will be crucial to develop a form of symbolic knowledge representation that is human-readable. This will be the third part of the project. More precisely, the project will choose a particular application domain where fake news occurs, such as epidemiology or interest-rate policies, and draw upon any existing computational representation available for the domain. The AI techniques used for knowledge representation will be drawn from Natural Language Processing (NLP) and Computational Argumentation, along with causal models such as Bayesian Belief Networks (Bayes nets). A causal model of the application domain will enable automated reasoning over social influence graphs, for example about the impact of network interventions.
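To illustrate how a causal model could support reasoning about interventions, the sketch below enumerates a three-node Bayes net (Exposed → Believes → Shares) and compares the sharing rate before and after an intervention that blocks exposure. The structure and all probabilities are invented for illustration; a real model would be learned from the domain data gathered in the first part of the project.

```python
from itertools import product

# Toy causal model of fake-news uptake as a three-node Bayes net:
#   Exposed -> Believes -> Shares
# All probabilities are illustrative placeholders, not estimates.
P_EXPOSED = 0.5
P_BELIEVES = {True: 0.7, False: 0.1}    # P(Believes | Exposed)
P_SHARES   = {True: 0.6, False: 0.05}   # P(Shares | Believes)

def p_shares(do_exposed=None):
    """P(Shares), computed by enumerating the joint distribution.
    `do_exposed` models an intervention (e.g. a platform blocking the
    story) that fixes the Exposed node to a value."""
    total = 0.0
    for e, b in product([True, False], repeat=2):
        if do_exposed is not None and e != do_exposed:
            continue
        p_e = 1.0 if do_exposed is not None else (P_EXPOSED if e else 1 - P_EXPOSED)
        p_b = P_BELIEVES[e] if b else 1 - P_BELIEVES[e]
        total += p_e * p_b * P_SHARES[b]
    return total

print(round(p_shares(), 3))                  # baseline sharing rate: 0.27
print(round(p_shares(do_exposed=False), 3))  # after blocking exposure: 0.105
```

Because each number in the model is attached to a named, human-readable variable, the drop in sharing rate can be explained symbolically ("blocking exposure lowers belief, which lowers sharing"), which is the kind of explanation the project aims to surface to users.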