Led by King’s College London in collaboration with Imperial College London, the UKRI Centre for Doctoral Training in Safe and Trusted Artificial Intelligence focuses on the use of symbolic AI techniques for ensuring the safety and trustworthiness of AI systems. Symbolic AI techniques provide an explicit language for representing, analysing and reasoning about systems and their behaviours. Explicit models can be verified, and solutions based on them can be guaranteed to be safe and correct; they can also provide human-understandable explanations and support user collaboration and interaction with AI – key for developing trust in a system.
Research and training programme
Students carry out research on the application of symbolic AI techniques to ensure the safety and trustworthiness of AI systems. King’s College London and Imperial College London are renowned for their expertise in symbolic AI and host some of the world’s leaders in areas such as logic and verification, automated planning, computational argumentation, norms, provenance, and human-centred AI. This depth and breadth of AI expertise is complemented by related expertise in technical areas such as cybersecurity and data science, and by expertise in the implications and applications of AI in areas such as security studies and defence, business, law, ethics and philosophy, social sciences, digital humanities, natural sciences and medicine.
Alongside their individual PhD project, students engage in diverse training activities in three areas: technical skills training; training in responsible research and innovation for AI; and transferable skills training.
Typical training activities include:
- Technical training in model-based techniques for safe and trusted AI
- Interdisciplinary training on responsible research and innovation for AI
- Training on the philosophy and ethics of AI
- Public engagement training
- Entrepreneurial mindset training
- A group project, run in collaboration with the Centre’s industrial partners
- Regular seminars and masterclasses on broad-ranging topics relevant to the development of safe and trusted AI
- A hackathon, framed around challenges co-developed with the Centre’s industrial partners
- Diversity and inclusion training, including mentoring practices, the impact of diversity and inclusion on group dynamics, and inclusive strategies for good research practice
- Internship opportunities at the Centre’s partner organisations
The training programme has been designed with input from the Centre’s industrial partners, ensuring that the skills it develops are relevant and valuable to industry.
Centres for Doctoral Training are unique in their cohort-based approach to PhD training. Students engage closely with their peers, improving the student experience and supporting the development of broad-minded and interactive graduates. Students regularly come together for training activities, shared lab space is provided, and there is a range of cohort-building activities, including an annual academic retreat and peer support networks.
Conferences are integral to a student’s PhD experience; this may be via attending as a delegate, delivering a presentation or being part of an organising committee. The Centre’s students attend conferences all over the world in their respective research areas, offering the opportunity to network with experts, develop further knowledge, and gain insight into professional career opportunities. The Centre also hosts a Safe and Trusted Artificial Intelligence Conference, with internationally renowned keynote speakers, providing opportunities for students to gain experience of conference organisation.
Engagement with the Centre’s diverse industrial partners is a key part of the student experience. The partners deliver training, provide real-world problems for students to work on, sponsor specific projects, attend events showcasing the Centre’s research, and offer placement opportunities. This engagement ensures the skills students develop are relevant and valuable to industry and society at large, while also informing and supporting UK industry in producing state-of-the-art safe and trusted AI solutions.
Public engagement is an important part of responsible research and innovation. Students are encouraged to engage different groups of people in their research, to ensure maximum impact and to inform future work. All students receive innovative public engagement training and the Centre has funding to support students in delivering public engagement activities. The Centre works closely with Science Gallery London and with King’s Culture to ensure students have opportunities and support to develop public engagement activities.
Equality, diversity and inclusion
The Centre is committed to providing an inclusive environment in which diverse students can thrive. Diversity is crucial for enabling world leading research, impact and teaching, and an inclusive environment allows people to contribute their best.
The Centre leadership team has identified five key Equality, Diversity and Inclusion (EDI) objectives to focus its work in this area.
- Encourage a diverse population of students.
- Provide an inclusive working, learning and research environment for all staff and students.
- Ensure no student or staff member is disadvantaged because of any protected, or other personal, characteristic.
- Embed EDI in the workings of the Centre.
- Seek, share and promote good practice in EDI.
The Centre particularly encourages applications from women, disabled and Black, Asian and Minority Ethnic (BAME) candidates, who are currently under-represented in the sector.
Students are registered at either King’s College London or Imperial College London, depending on the institution of their main supervisor, where they have access to a range of sector-leading facilities and resources. All students can also access a range of relevant training offered by the Department of Informatics at King’s College London and the Department of Computing at Imperial College London.