Large Language Models (LLMs) have in recent years reached mainstream users and NLP researchers alike, frequently topping leaderboards across a wide variety of NLP tasks [1] and showing the public what AI can do. However, their statistical, next-most-probable-token approach has raised a number of concerns about their safety and trustworthiness: they are generally not explainable, in the sense that they cannot offer explanations for their outputs, and they display little to no numeracy or abstract reasoning capability [2]. Without these, their outputs remain model-free and potentially unsafe.
Reasoning and explanations feature prominently in the longer tradition of AI, for example in tractable fragments of First-Order Logic such as Description Logics, among them the ALC logic [3]. Reasoners for these logics can represent and process symbols denoting real-world entities and their relationships in a semantically meaningful way, often through graph-like structures called knowledge graphs (KGs) [4]. They rely on well-understood, tractable algorithms and are often embedded in KGs as a mechanism for automated inference, derivation of new facts, and generation of explanations for semantic consistency. However, these explanations are fairly technical, typically expressed in the symbols and operators of Description Logics, and can only be understood by experts.
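As a minimal illustration of the kind of inference such reasoners perform, the following Python sketch forward-chains two simple rules (transitivity of subclass axioms, and inheritance of types along them) over a toy knowledge graph, recording the premises behind each derived fact. The triples, rule set, and function names are illustrative assumptions, not part of any specific reasoner or standard.

```python
# A minimal forward-chaining sketch over a toy knowledge graph.
# The triples and rules below are illustrative only; real DL reasoners
# (e.g. tableau-based ones) handle far richer axioms.

def forward_chain(triples):
    """Apply two simple rules to a fixed point, recording derivations."""
    kg = set(triples)
    explanations = {}  # derived fact -> premises that produced it
    changed = True
    while changed:
        changed = False
        for (s, p, o) in list(kg):
            for (s2, p2, o2) in list(kg):
                if p2 != "subClassOf" or s2 != o:
                    continue
                # Rule 1: subClassOf is transitive.
                if p == "subClassOf":
                    fact = (s, "subClassOf", o2)
                # Rule 2: instances inherit their class's superclasses.
                elif p == "type":
                    fact = (s, "type", o2)
                else:
                    continue
                if fact not in kg:
                    kg.add(fact)
                    explanations[fact] = [(s, p, o), (s2, p2, o2)]
                    changed = True
    return kg, explanations

triples = [
    ("alice", "type", "Professor"),
    ("Professor", "subClassOf", "FacultyMember"),
    ("FacultyMember", "subClassOf", "Person"),
]
kg, why = forward_chain(triples)
print(("alice", "type", "Person") in kg)  # True: derived, not asserted
print(why[("alice", "type", "Person")])   # the supporting premises
```

The recorded premises are exactly the symbolic explanations mentioned above: correct, but expressed in logical triples rather than in language a non-expert would follow.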
This project leverages the enormous potential of combining LLMs with KGs [5]. We propose to develop a new KG reasoner based on LLM technology, capable of guaranteeing the correct derivation of facts through established reasoning mechanisms (e.g. the tableau algorithm) while verbalising forward-chaining reasoning outputs with LLMs for domain-specific entities and relationships. This will contribute to making reasoner outputs and derivations more explainable, giving them a natural-language character that humans can understand more readily. At the same time, the reasoner will be integrated into state-of-the-art, open-source LLMs with the goal of improving their reasoning capabilities.
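A possible shape for the proposed verbalisation step is sketched below, under stated assumptions: a symbolic derivation (a derived fact plus its premises, as produced by a reasoner such as the one sketched above) is turned into a prompt, and an LLM renders it as a plain-English explanation. The `llm` parameter is a hypothetical stand-in for any text-completion client; the prompt wording and the stub are assumptions for illustration.

```python
# A sketch of the proposed verbalisation step: a symbolic derivation
# is turned into a prompt, and an LLM explains it in natural language.
# `llm` is a hypothetical stand-in for any LLM client (hosted or local).

def verbalise_derivation(fact, premises, llm):
    """Ask an LLM to explain a derived fact from its supporting premises."""
    premise_text = "\n".join(f"- {s} {p} {o}" for (s, p, o) in premises)
    prompt = (
        "The following facts are known:\n"
        f"{premise_text}\n"
        f"From these, a reasoner derived: {fact[0]} {fact[1]} {fact[2]}.\n"
        "Explain this derivation in one plain-English sentence, "
        "without using logical symbols."
    )
    return llm(prompt)

# Example usage with the toy derivation above; a trivial stub stands in
# for a real model so the sketch stays self-contained:
stub = lambda prompt: ("Alice is a person because she is a faculty member, "
                       "and every faculty member is a person.")
fact = ("alice", "type", "Person")
premises = [("alice", "type", "FacultyMember"),
            ("FacultyMember", "subClassOf", "Person")]
print(verbalise_derivation(fact, premises, stub))
```

The key design point is the division of labour: correctness is guaranteed by the symbolic reasoner, while the LLM is used only to rephrase an already-verified derivation, so fluency never comes at the cost of soundness.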
We foresee experiments in a number of different use cases, including one on temporal reasoning and the understanding of changing semantics over time [6].