When using complex algorithms to make decisions within autonomous systems, the weak link is the abstract model used by the algorithms: any errors in the model may lead to unanticipated behaviour, potentially risking successful task completion, human safety, or legal compliance. In an AI decision-support setting, with a human in the loop, oversight may catch when the system is in error and intervene accordingly. In a system with greater autonomy, including one that must react rapidly to a changing environment, this is typically not possible.

AI planning is one example of an algorithm used in such systems. Many AI planners use a version of the Planning Domain Definition Language (PDDL) for specifying the abstract model from which a plan is derived. The advantage of PDDL planners is that they are domain-independent, capable of reasoning with PDDL models of a wide range of problems. The caveat is that PDDL is a technical language, focused on the needs of planners rather than the needs of domain-expert stakeholders. As a result, domain stakeholders' intuition of what needs modelling cannot be directly captured in PDDL. Instead, modelling requires mediation by someone with expertise in AI planning, PDDL, and the abilities and drawbacks of different planners, but who is typically not an expert in the specific problem domain. This creates a disconnect between domain stakeholders and the abstract model used by the AI planner, making it difficult for domain experts to assess the safety of any plans produced and, thus, to trust the results obtained by the AI planner.

For example, expressing the domain-level concept of a deadline (i.e., a time by which a certain action must be finished) involves multiple lines of PDDL code scattered across the model specification, following a highly specific template (see the sketch below). Once the PDDL has been written, it is difficult to (a) identify that a deadline is being modelled, and (b) verify that this has indeed been done correctly.

This PhD project will explore the use of language engineering to develop domain-specific AI planners. Specifically, we will develop domain-specific modelling languages (DSMLs) for expressing planning models in terms that can be (a) understood and manipulated directly by domain experts and (b) automatically translated into PDDL. This addresses the problems above: by using DSMLs, domain experts can directly create, inspect, and manipulate abstract planning models, increasing trust in the correctness of these models. A gap still remains between these models and the PDDL eventually used by the AI planners. However, generating the PDDL automatically, rather than manually, creates the opportunity to verify the translation process, increasing trust in, and the safety of, the resulting PDDL.

To enable scalable verification, the translation will be built incrementally from small, well-understood components that can be automatically combined to express more complex concepts and modelling patterns (see the second sketch below). Each component within the translation pipeline will be small and individually verifiable, simplifying the overall verification task and building up a library of verified modelling components for easy reuse in domain-specific planning languages.
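As a concrete illustration of the deadline pattern described above, the following is a minimal sketch of one common PDDL encoding, using durative actions and timed initial literals (PDDL 2.2). All action, predicate, and type names are illustrative inventions, not taken from any specific domain, and supporting declarations (e.g., the :predicates section) are elided:

    ;; Domain file: the action may only finish while the deadline
    ;; window is still open.
    (:durative-action unload-cargo
      :parameters (?v - vehicle ?c - cargo)
      :duration (= ?duration 10)
      :condition (and (at start (loaded ?v ?c))
                      (at end (deadline-open)))
      :effect (and (at start (not (loaded ?v ?c)))
                   (at end (unloaded ?c))))

    ;; Problem file: a timed initial literal closes the window at t = 30.
    (:init (deadline-open)
           (at 30 (not (deadline-open))))

Note that nothing in this encoding mentions the word "deadline": the intent is spread across the domain and problem files and is only recoverable by someone who recognises the template, which is exactly the readability and verification problem the project targets.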
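To make the envisioned DSML approach concrete, the second sketch shows how a single domain-level statement might be expanded into the fragments above by a small, independently verified translation component. The DSML syntax shown is entirely hypothetical, invented here purely for illustration:

    ;; Hypothetical DSML statement, written directly by a domain expert:
    ;;   deadline: unload-cargo must finish by time 30
    ;;
    ;; A verified "deadline" component would expand this into:
    (at end (deadline-open))         ; extra condition on unload-cargo (domain file)
    (deadline-open)                  ; initial state (problem file)
    (at 30 (not (deadline-open)))    ; timed initial literal (problem file)

Because each such component has a single, well-defined expansion, it can be verified once and then reused wherever the pattern occurs, so combinations of components can be checked compositionally rather than re-verifying each generated model from scratch.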
Project ID
STAI-CDT-2021-KCL-4
Supervisor
Steffen Zschaler
steffen.zschaler@kcl.ac.uk
http://www.steffen-zschaler.de/