AI Planning is concerned with producing plans that are guaranteed to achieve a robot's goals, provided that pre-specified assumptions about the environment in which it operates hold. However, no matter how detailed these assumptions are or how complex the resulting plan is, unexpected events at runtime can cause the robot to fail to achieve its goals.
For robots to adapt safely when such events occur, they must be able to reassess 1) the assumptions upon which they depend, 2) the achievability of their goals under the new environmental conditions, and 3) the quality of the plan that will achieve the goals or their variants.
Symbolic learning and AI planning have studied these problems in isolation and can enable robots to adapt sequentially and iteratively: revise the assumptions until the goals are realizable, and then synthesize a plan. However, such a procedure can be costly and lacks convergence guarantees (e.g., convergence may require more iterations than is reasonable for a robot waiting to decide how to proceed in the face of an unexpected event).
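The sequential revise-then-plan procedure can be illustrated with a toy sketch. This is not the hybrid approach proposed here, and all names, the navigation domain, and the revision rule (simply retracting assumptions contradicted by observation) are hypothetical simplifications for illustration:

```python
from collections import deque

# Toy domain: rooms connected by doors; every door is assumed open
# unless a runtime observation says otherwise.
ASSUMED_OPEN = {("A", "B"), ("B", "C"), ("A", "C")}

def synthesize_plan(start, goal, open_doors):
    """BFS over rooms, using only doors currently assumed open.
    Returns a room sequence, or None if the goal is unrealizable."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        room = path[-1]
        if room == goal:
            return path
        for a, b in open_doors:
            nxt = b if a == room else a if b == room else None
            if nxt is not None and nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

def adapt(start, goal, assumed_open, observed_closed):
    """Sequential adaptation: 1) revise the assumptions contradicted by
    runtime observations, 2) attempt plan synthesis under the revised
    assumptions (failure means the goal is unrealizable as stated)."""
    assumptions = set(assumed_open)
    for door in observed_closed:
        assumptions.discard(door)  # retract each contradicted assumption
    plan = synthesize_plan(start, goal, assumptions)
    return plan, assumptions

# Unexpected event: door A-C observed closed; the robot re-routes via B.
plan, revised = adapt("A", "C", ASSUMED_OPEN, {("A", "C")})
```

In this toy setting each contradicted assumption is simply retracted, so the loop trivially terminates; with richer assumption languages the revision step itself becomes a search, which is where the cost and convergence concerns above arise.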
This PhD will focus on the theory and implementation of a new hybrid symbolic learning and planning approach for assured runtime adaptation in the face of unexpected events.