Apply Now

The UKRI Centre for Doctoral Training in Safe and Trusted Artificial Intelligence has approximately 12 fully funded doctoral studentships available each year. Please see the Application Timeline below for application round deadlines for entry in October 2023.

Committed to providing an inclusive environment in which diverse students can thrive, we particularly encourage applications from women, disabled and Black, Asian and Minority Ethnic (BAME) candidates, who are currently under-represented in the sector.

How to apply
Step One:

What are we offering? Find out about Fees and Funding.

Step Two:

Check Entry Requirements.

Step Three:

Select from available projects for September 2023 entry.

  • You should identify three preferred projects.
  • Alternatively, it is possible for you to propose your own project, relevant to the area of model-based safe and trusted artificial intelligence as described on these pages.
Step Four:

Write your research proposal
You should write a 3 – 4 page Research Statement on the project you have listed as your first choice, which will be evaluated as part of the application process. Irrespective of whether you are proposing your own project or applying for an existing one, your statement should incorporate the following:

  • your initial ideas on the particular challenges you would be interested in addressing within the project in the context of Safe and Trusted Artificial Intelligence,
  • a brief review of the relevant state of the art, identifying any limitations or open questions, and
  • your initial ideas on what research you might carry out towards addressing these challenges, referring to existing research literature where appropriate.

If you have listed your own project proposal as your first choice, then you should make clear its relevance to the theme of Safe and Trusted Artificial Intelligence.

Step Five:

Submit a PhD application to the relevant institution/s:

You should pay very careful attention to all the details given in the links above, and to the instructions on the King’s and Imperial online application forms. As a preliminary indication, whichever institution(s) you are applying to, you will need:

• Your research proposal (which must be submitted with your application).
• Supporting documentation, like transcripts of previous qualifications, academic reference(s), and proof of English language qualifications if English is not your first language.

If you are proposing your own project, we encourage you to submit applications to both institutions, since this will allow greater flexibility in identifying potential supervisors.

You may contact a prospective PhD supervisor to informally discuss your ideas before submitting an application, but you should bear in mind that funding decisions will only be made after the applications have been received and processed by both King’s College London and Imperial College London, and by the Centre’s admissions team.

Step Six:

As soon as you have uploaded your institutional application, please complete a Centre Applicant Information Form. You will need to include the reference number from your institutional application. 

NB: Occasionally, there is a short delay between uploading your institutional application and receiving your reference number. You will have met the application deadline so long as your institutional application has been uploaded on time, but don’t forget to upload the Centre Applicant Information Form as soon as you receive the reference number – failure to complete this form may result in your application not being considered.

Step Seven:

Relax! We look forward to receiving your application.

Entry requirements
Applicants will normally be expected to have a distinction at MSc level (or equivalent) in computer science or a related discipline. However, in exceptional cases we may consider other qualifications (including at undergraduate level), and all applications will be considered on their merits as appropriate to the individual case. Applications from individuals with non-standard backgrounds (e.g. those from industry or returning from a career break) are encouraged, as are applications from women, disabled and Black, Asian and Minority Ethnic (BAME) candidates, who are currently under-represented in the sector. All applicants will need to demonstrate enthusiasm and aptitude for the programme.

It is not necessary that an applicant has completed their current course of study before applying. If an applicant has not completed their current course of study, any offer may be conditional on the eventual degree classification.

Applicants must have a good command of English and be able to apply it in an academic environment. Therefore, those who have not been educated in English will usually be required to provide certificated proof of competence in English language before starting their studies. Applicants should have an IELTS Score of 6.5 overall with a minimum of 6.0 in each skill, or a TOEFL iBT score of 92 overall with a minimum of 23 in writing and 20 in each of the other skills. Equivalent language qualifications may also be considered, see Band D of the King’s College London English Language Requirements, and Accepted English Qualifications (at Standard level) in the Imperial College London English Language Requirements.

Fees and funding

The Centre will fund up to 15 studentships each year, depending on the support available. Each studentship will be funded for 4 years. Funding includes tuition fees, stipend and a Research Training Support Grant (RTSG).

    • Tuition fees: Funded students will have full fees covered at the appropriate rate, whether home or international.
    • Stipend: A tax-free stipend set at the UKRI rate plus London-weighting (for 2022-23, students studying full-time in London receive a stipend of £19,668).
    • RTSG: A generous allowance will be provided for research consumables and additional training, and for attending UK and international conferences.

Who can apply?

Any prospective doctoral student wishing to study at the Centre, including prospective international students, can apply for a UKRI studentship. All UKRI-funded doctoral students will be eligible for the full award – both the stipend to support living costs, and fees at the UK research organisation rate. However, UKRI international studentships are limited to 30 percent of the total cohort and places will be competitive.

For further guidance on fee status, visit the:

Home Students

Home students will be eligible for a full UKRI award, including fees and stipend. To be classed as a Home student, candidates must meet the following criteria:

  •  be a UK National (meeting residency requirements), or
  •  have settled status, or
  •  have pre-settled status (meeting residency requirements), or
  •  have indefinite leave to remain or enter.

If a candidate does not meet the criteria above, they will be classed as an International student.

International students

International students applying to the Centre are eligible for full UKRI-funded studentships covering fees at the overseas rate and stipend. Studentships for international students are limited in number and competitive, and we encourage strong international candidates to apply.

Please note that there may be other costs which will not be covered by the studentship or CDT, such as visa fees, healthcare surcharge and relocation costs.


Part-time students

It is possible to apply to the Centre to study on a part-time basis and we welcome applications from people who are unable to study full-time due to managing, for example, caring responsibilities, a disability or chronic illness. Because of the nature of the Centre and its training programme, the demands on part-time students are somewhat different to those made of part-time students on a standard PhD programme. All part-time students enrolled in the Centre are required to:

  • Commit a minimum of 50% full-time-equivalent time to their PhD and the CDT programme.
  • Maintain a regular physical presence in the department during normal working hours.
  • Attend all compulsory elements of the Centre, including all training activities and all cohort building activities. This may sometimes necessitate full-time attendance over a period (for example, full-time attendance at the Centre Summer School will be expected over a 3 – 4 day period), and such activities may fall outside a student’s typical part-time hours. (Note that the Centre has a Carers’ Fund, which students may apply to in order to cover caring costs incurred by attendance of Centre activities that fall outside of normal hours.)

Part-time students will be supported by a pro-rata studentship in line with their mode of registration (assuming eligibility for a studentship as per UKRI Terms and Conditions).

If you are interested in the possibility of part-time study within the Centre please send an email to  in advance of the application deadline in order to discuss this. The Centre is unable to consider part-time applications from applicants who do not do this.

Note that the demands of cohort-based training, and the requirement for a minimum of 50% full time equivalent, mean that this programme is unfortunately not suitable for those wishing to combine a part-time PhD with a full-time career role. In this context, we believe it would be very difficult to effectively participate in this programme on a part-time basis if you are working more than 3 days per week and, moreover, we believe that to be successful, you will ideally be working much less than this. If you intend to combine a PhD with paid work that prevents full engagement with the cohort-based training programme offered by our Centre, you may instead wish to consider opportunities on a standard part-time PhD programme. Please see information on the Computer Science Research MPhil/PhD at the Dept of Informatics at King’s or PhDs in the Dept of Computing at Imperial.

Students in full-time employment

Because part-time students are required to study with a minimum of 50% full-time equivalent, the Centre is unfortunately unable to consider part-time applications from applicants in full-time work. Students in full-time employment are also not eligible for a studentship of any kind.

Self-funded students

We also welcome applications from students (Home/EU/International) who have secured their own funding or are in receipt of alternative scholarships.

If you are a self-funded student and wish to study within the Centre please send an email to in advance of the application deadline in order to discuss this.

Application Timeline
The next cohort will enter in October 2023. Please see the dates for the recruitment rounds in the table below. 

Application Deadline      Notification Expected By
28 November 2022          February 2023
6 February 2023           May 2023
3 April 2023              July 2023
5 June 2023               August 2023
Application Checklist
Please check that you complete the following steps correctly:

1. Submit a PhD application to the relevant institution/s

2. Complete a Centre Applicant Information Form.

What happens next
Once you submit your complete Centre application, it will be considered by the Centre selection committee. If you meet the eligibility requirements, your application will be discussed at the next selection panel and you may be contacted by supervisors of your preferred projects for interview.

Any questions relating to the Centre should be sent to Note that this email address is not monitored outside of working hours, so any questions relating to an application should be sent well in advance of the application deadline.


Frequently Asked Questions - Eligibility/Background
What are the minimal computing skills you ask for? Is a programming intensive background a must?

Our students come from a variety of backgrounds including, but not limited to, computer science. Programming skills are not a pre-requisite, but candidates must demonstrate sufficient technical skills and knowledge to cope with the programme. We consider each case individually, and award places on merit.

What is considered a related discipline? 

A relevant scientific or technical discipline could be computer science, mathematics or physics.

Do I need to have a strong AI background?

We require candidates to have sufficient technical knowledge to demonstrate they can cope with the programme and some expertise that is applicable to safe and trusted AI.

Do you expect candidates to have published papers already?

No, this is a training programme and we do not expect applicants to already have publications. Many of our successful applicants have not published a paper before applying to the programme. Of course, applicants who have already published, should mention this in their application.

Can International students apply for the grant?

We can support a small number of international students with a full studentship at the Overseas rate (including a stipend, tuition fees and a generous allowance for research related expenses). International students may constitute up to 30% of our cohort and so these funded studentships are competitive. See more information in the Fees and Funding section above.

What are the new funding rules for candidates from the EU?

Please see the Fees and Funding section above for information about funding for EU candidates. We also advise applicants to connect with a King’s Advisor or contact Postgraduate Application Enquiries at Imperial for more detailed questions about eligibility for funding.

I have questions about English language proficiency. For example, “What level IELTs do I need?”; “Can I apply and provide proof of English proficiency later as part of a conditional offer?”; “Am I exempt from taking an English proficiency test if I have already studied at degree level in an English-speaking country?”

Please check our Entry Requirements. More information about English language requirements is provided under Band D of the King’s College London English Language Requirements, and Accepted English Qualifications (at Standard level) in the Imperial College London English Language Requirements. If you still have individual queries about your English language proficiency, please use the institutional contact details linked from the pages How to make an application to King’s and How to make an application to Imperial.

I am returning to education. Are you open to applications from people in my situation?

Yes. Applications from individuals with non-standard backgrounds (for example, those from industry or returning from a career break) are actively encouraged.

What does the Centre do to support equality, diversity and inclusion? 

The Centre is committed to providing an inclusive environment in which diverse students can thrive. Diversity is crucial for enabling world leading research, impact and teaching, and an inclusive environment allows people to contribute their best. The Centre has identified five key Equality Diversity and Inclusion Objectives to focus our work in this area which you can read about on our Programme information pages. We are keen to receive applications from women, disabled, and Black, Asian and Minority Ethnic (BAME) candidates, who are currently under-represented in the sector.

What support is provided for students who are parents or carers? 

The Centre welcomes applications from parents and carers.

We are committed to ensuring an inclusive interview process. In addition to the travel reimbursement available to all applicants, we are pleased to reimburse caring costs for a dependent child or adult should these be incurred as a result of attending interview.

Once students join the Centre, we have in place, as standard, funds to support care costs incurred from attending activities outside of normal working hours. We encourage parents and carers who are considering applying to email to discuss your individual needs and how the Centre might support you if your application is successful (for example, with flexible working arrangements, or part-time study).

There is also institutional support for students who are carers at King’s and at Imperial, and the terms of the UKRI grant makes provision for maternity, paternity, adoption and shared parental leave.

Is this programme feasible for disabled students, e.g. with mobility problems?

We welcome applications from disabled students. Please contact us to discuss how we can meet your individual needs:

How many current students also work elsewhere, and how do they balance the PhD with this? 

There are lots of opportunities for work available within Imperial and King’s. For example, many of our students enjoy paid work as Teaching Assistants (TAs). Please note that UKRI recommend that funded doctoral students undertake no more than six hours paid work per week, and it is always important for students to discuss any other activities of this kind with their supervisors in the first instance.

Can I study part time?

It is possible to apply to the Centre to study on a part-time basis and we welcome applications from people who are unable to study full-time due to managing, for example, caring responsibilities, a disability, or chronic illness. Because of the nature of the Centre and its training programme, the demands on part-time students are somewhat different to those made of part-time students on a standard PhD programme. Please see further information in the Fees and Funding section.

Is part-time funding pro-rated from full-time studentship funding?

Yes, see further information in the Fees and Funding section above.

Can I do this course and work full time?

This is covered in the Fees and Funding section. Because part-time students are required to study with a minimum of 50% full-time equivalent, the Centre is unfortunately unable to consider part-time applications from applicants who are in full-time work.

Can existing PhD students apply? 

If you are already studying towards a PhD degree (and meet the eligibility requirements for the CDT) you can apply for a place with the STAI CDT. If you were offered a place with the Centre, in order to accept you would have to withdraw from your current PhD programme and start a new PhD with the Centre. If you receive funding for your current PhD, there may be implications around this (e.g., you might be required to pay back any funding received). It is your responsibility to check any funding implications with your current funder.

Frequently Asked Questions - Application
See also How to Apply

What would make my application stand out? What do you particularly want me to mention? How detailed should the proposal be?

Your research proposal is your opportunity to show your interest in and ideas about the selected research project. You should demonstrate a good level of understanding of the project area. Use the research proposal as an opportunity to show your ideas, skills and motivation. You will find guidance about writing your proposal in the sections on How to make an application to King’s and How to make an application to Imperial.

Should the research proposal include a personal statement on why I am suitable for the project?

No. Our application process only requires a 3 – 4 page Research Proposal on the project you have listed as your first choice. You will find guidance about writing your proposal in the sections on How to make an application to King’s and How to make an application to Imperial.

Can I apply for a studentship before contacting a potential supervisor?

It can be helpful to contact a prospective PhD supervisor to informally discuss your ideas before submitting an application, but it is not mandatory to do so. See also How to apply.

I would like to suggest my own research proposal. Is this possible?

Yes. Here are a few tips if submitting your own proposal:

  • We encourage you to make applications to both institutions, since this will allow greater flexibility in identifying potential supervisors.
  • If you have already identified a potential supervisor, you may want to make contact and discuss the idea before submitting your proposal. (See the information about academics at the Department of Informatics at King’s and at the Department of Computing at Imperial for details about potential supervisors.)
  • In your Research Proposal you must make clear the relevance of your project to the theme of Safe and Trusted Artificial Intelligence.

How are research proposals assessed given that they are written for one project but a candidate may be interested in multiple projects?

Applicants should submit one research proposal about their preferred project. The objective of that proposal is to allow the CDT selection panel and any potential supervisors to understand the research interests and ideas of a candidate, and to assess their capability to explain and communicate them. One research proposal, even if written for a different project, is enough for supervisors to make an initial assessment of a candidate. If a supervisor is interested in a candidate, they may decide to have an informal conversation and/or ask them to write another proposal about their specific project(s).

Do you require academic references?

We require two references. These may be academic referees or relevant employer referees from research institutions/companies. Note that academic referees must have university email addresses and employer referees should have the official email address of the company (gmail, hotmail etc addresses are not acceptable). If you are applying to King’s College, and already have two academic references, you can scan and upload these to the online application instead of providing contact details (note that the references must be signed and on headed paper). If you are applying to Imperial College, you must provide referee contact details.

Please remember that it is your responsibility to ensure we have received the references by the application deadline; be sure to start your application before the deadline and contact your referees to let them know we will be requesting a reference from them.

UKRI CDT in Safe and Trusted AI is a collaboration between King’s and Imperial. Which institution will award my degree? 

A student’s PhD registration will be made at the institution that employs their lead supervisor. Therefore, if a student’s lead supervisor is based at King’s, the student will be registered as a King’s student and their final PhD award will be from King’s. If a student’s lead supervisor is based at Imperial, the student will be registered as an Imperial student and the final PhD award will be from Imperial.

Is it possible to start my PhD programme in the Spring?

As the Centre provides an integrated training programme with activities that follow the academic year, we only accept entrants in September of each year.

Is it possible to reapply with an edited/improved research proposal after an unsuccessful application in a previous round?

Yes, candidates who were unsuccessful in previous rounds are encouraged to apply with an improved research proposal. We advise candidates to consider the feedback they were given when making those improvements.

Are we only allowed to apply for advertised projects or could we also email potential supervisors that have not uploaded a project yet? 

As indicated under How to Apply, it is possible for you to propose your own project. While not a requirement, it’s generally helpful to have identified potential supervisors and to have discussed the project with them.

Frequently Asked Questions - Studentship Interviews
Where and when will interviews take place?

You can follow our expected timelines for recruitment activity by viewing our Application Timeline. Following submission of an application, shortlisted candidates will be invited to attend an interview, either at King’s College London or Imperial College London or virtually, with the supervisors of the project for which you have applied (or with the supervisor identified as being a good fit for your proposed project) and a supporting panel of fellow academics. The supervisors will find a date and time for the interview that is mutually convenient with you.

What can I expect from the interview?

Interviews typically take up to one hour and you will be asked questions so that the academics can find out more about you, your research interest and your skill set. It is likely that you will be asked questions around the following areas:

  • your academic background and other experience relevant to the PhD project;
  • your suitability in relation to the Centre’s research aims;
  • your suitability in relation to the Centre model, which adopts a cohort-based approach and an integrated training programme;
  • your technical aptitude for the Centre (and this may involve reading scientific papers and solving problems);
  • your specific research interests;
  • your motivations for doing a PhD; and
  • your prior knowledge of AI and related areas.

Supervisors may ask you to carry out a specific form of assessment such as (but not limited to) reviewing a paper, preparing a presentation, or completing a technical test. A supervisor may also ask you to prepare something specific to their own research agenda (particularly if being interviewed by a supervisor whose project is the second or third choice in your application).

Following an initial interview, further interviews may be arranged as a follow-up if required by the project supervisors or by the Centre Directors.

You will also be given the opportunity to ask questions about the Centre training programme and any other element of the PhD project or institution at which you (and your PhD project) will be hosted.

What should I wear to an interview?

We want you to be comfortable in your interview so feel free to dress as you wish. It is unlikely that the academics leading the interview will be wearing formal office wear so don’t feel pressured to do so. We want you to feel relaxed so you can perform at your best.

Does the Centre financially reimburse candidates for attending an interview?

If we ask candidates to attend campus, we reimburse travel costs. To discuss the process for reimbursement, please contact the Centre Manager via once you have been invited to interview for a studentship.

Frequently Asked Questions - Post-Interviews
When will I find out about the outcome of my application?

Please have a look at the Application Timeline for the date by which you will be informed via email about the outcome of your application in the particular round in which you applied. We may occasionally need to defer decisions about your application, in which case you will be contacted by email and notified of the delay by the original deadline as detailed in the Application Timeline.

Note that we are unable to offer individual feedback on written applications, but candidates who are interviewed can request feedback from their interview by contacting; it is at the panel’s discretion to provide feedback to candidates.

I have been offered a studentship, what happens next?

If you are offered a place, you will first receive an offer of funding from the Centre Office, and you will need to accept this via email by a specific deadline detailed in the email. Following acceptance of the studentship funding, the King’s Apply Portal or Imperial Application Portal will be updated, and you will receive an offer letter from the relevant institution’s Admissions Team. The offer you receive will be an offer of a place on this specific programme. It is important for candidates to accept the offer made via the institution’s admissions portal.

You must therefore accept:

  • the offer of funding from the Centre Office; and
  • the offer of a place from King’s College London or Imperial College London via the Admissions Team of the institution at which you will be registered for your PhD.

Can I keep in touch before joining the Centre?

The Centre Office will send regular communications from when you accept a studentship with us through to when you join us at our Induction in late September/early October. The Centre Manager will send you paperwork over the summer months to complete before joining the Centre, and King’s or Imperial (depending on the institution at which you are accepted) will send you enrolment information from August. It is recommended that you keep in touch with your supervisor and the Centre Office, and send along any queries you have after accepting a place on the programme.

When is Induction?

We will confirm the date of our Centre Induction event in mid to late August (and this is distinct from, and additional to, any induction from the host department and/or institution). It is most likely to take place in the first week of October, and you will meet the Centre Team, Centre Directors, and some of our current students.

Frequently Asked Questions - Other
What are indicative job prospects after study at the CDT?

At this point (we have only been running for two years at the time of writing in July 2021), we can’t give information about where our students go after graduating from our Centre. However, we do have experience with students from our institutions in similar or related topics, who are sought after in different areas. Some have gone on to become academic researchers and followed an academic career in prestigious universities, some have secured roles with AI startups and some have gone on to work with and for some of the larger AI technology companies. We also have students who have gone on to lead on technology and AI for major banks, management consultancies and major organisations in other industry sectors. We know that there’s a great (and increasing) demand for students with the skills and training that we provide, which is one of the reasons so many companies want to partner with us. Indeed, we are working closely with our industry partners to provide a greater link for our students to different kinds of organisation in advance of finishing their PhDs. The opportunities are truly endless right now.

In addition to the stipend and tuition fees, what additional funding is offered to students?

Each scholarship includes a generous Research Training Support Grant for attending international and UK conferences, for research consumables and for additional training. In addition, funded students will be offered a laptop for the duration of their studies when they start.

Do you only do symbolic AI or also learning based AI?

Proposals involving data-driven methods such as machine learning are welcome, but they must be explicit and clear about their symbolic components and how these enable safer and more trustworthy AI. Some machine learning can also be symbolic!

To what extent does the STAI CDT help students to get a PhD internship with one of the partner companies?

We work with our partners and others to advertise opportunities for industry placements to our students.

How much flexibility does a candidate have in pursuing their own research interests after being assigned to a project?

Candidates should apply for proposals that cover their own research interests. While the nature of research may naturally lead to adjustments in the course of the PhD, variations of the original proposal need to be agreed by supervisors and remain within the focus areas of the CDT.

Can I study on taught modules outside of the programme?

There is a set of core STAI modules that students are required to take as part of their training. In addition, students can choose from a range of other options at King’s or Imperial according to their development needs. These are routinely reviewed and discussed with supervisors before the beginning of a new semester.

Is the amount of funding on offer sufficient to sustain oneself in London?

Our stipend is comparable to other similarly funded PhD opportunities in London. Some of our students also engage in limited optional teaching activities which provide an extra stream of income (as well as valuable experience). In addition, funds are also available for attending national and international conferences and for other research activities.

Available Projects

  • Multi-context architectures of neuro-symbolic AI systems for Mental Health

    Project ID: STAI-CDT-2023-KCL-15
    Themes: Argumentation
    Supervisor: Hector Menendez, Dr Mariana Pinto da Costa

    Mental care systems require patient assessment and diagnosis. Thus, creating a reliable artificial intelligence that provides a mental state examination (MSE) requires proper verification that guarantees accuracy and...

    Read more

  • Enhancing Trustworthiness of Neural Networks for Online Adaptive Radiotherapy

    Project ID: STAI-CDT-2023-IC-10
    Themes: Reasoning
    Supervisor: Prof Wayne Luk

    Magnetic Resonance (MR)-guided online adaptive radiotherapy has the potential to revolutionise cancer treatment. It exploits the soft-tissue contrast of MR images obtained right before the patient’s radiation treatment to...

    Read more

  • Teaching Large Language Models To Perform Complex Reasoning

    Project ID: STAI-CDT-2023-IC-9
    Themes: AI Planning, Logic
    Supervisor: Dr Marek Rei

    Large language models have become the main backbone of most state-of-the-art NLP systems. By pre-training on very large datasets with unsupervised objectives, these models are able to learn good representations for language...

    Read more

  • Extending Large Language Models Through Querying Symbolic Systems

    Project ID: STAI-CDT-2023-IC-8
    Themes: AI Planning
    Supervisor: Dr Marek Rei

    Large language models have become the main backbone of most state-of-the-art NLP systems. By pre-training on very large datasets with unsupervised objectives, these models are able to learn good representations for language...

    Read more

  • Ensuring Trustworthy AI through Verification and Validation in ML Implementations: Compilers and Libraries

    Project ID: STAI-CDT-2023-KCL-30
    Themes: Logic, Verification
    Supervisor: Dr Hector Menendez Benito, Dr Karine Even Mendoza

    The issue of machine learning trust is a pressing concern that has brought together multiple communities to tackle it. With the increasing use of tools such as ChatGPT and the identification of fairness issues, ensuring the...

    Read more

  • Automated verification and robustification of tree-based models for safe and robust decision making

    Project ID: STAI-CDT-2023-IC-7
    Themes: Verification
    Supervisor: Prof Alessio Lomuscio

    Advances in machine learning have enabled the development of numerous applications requiring the automation of tasks, such as computer vision, that were previously thought impossible to tackle. Although the success was...

    Read more

  • Reasoning about Stochastic Games of Imperfect Information

    Project ID: STAI-CDT-2023-IC-6
    Themes: Logic, Verification
    Supervisor: Dr Francesco Belardinelli

    In many games the outcome of the players’ actions is given stochastically rather than deterministically, e.g., in card games, board games with dice (Risk!), etc. However, the literature of logic-based languages for...

    Read more

  • Multi-Task Reinforcement Learning with Imagination-based Agents

    Project ID: STAI-CDT-2023-IC-5
    Themes: Logic, Verification
    Supervisor: Dr Francesco Belardinelli

    Deep Reinforcement Learning (DRL) has proved to be a powerful technique that allows autonomous agents to learn optimal behaviours (aka policies) in unknown and complex environments through models of rewards and...

    Read more

  • From Verification to Mitigation: Managing Critical Phase Transitions in Multi-Agent Systems

    Project ID: STAI-CDT-2023-KCL-29
    Themes: AI Planning, Verification
    Supervisor: Dr Stefanos Leonardos, Dr. William Knottenbelt (Imperial College London)

    Background: With recent technological advancements, multi-agent interactions have become increasingly complex, ranging from deep learning models and powerful neural networks to blockchain-based cryptoeconomies. However, as...

    Read more

  • Common Sense Planning (for Robotics)

    Project ID: STAI-CDT-2023-KCL-28
    Themes: AI Planning
    Supervisor: Dr Gerard Canal, Dr Albert Meroño-Peñuela

    Task Planning (also known as Symbolic Planning or AI Planning) has proved to be a very useful technique to tackle the decision-making problem in robotics. Given a set of task goals, the planner can come up with a set of...

    Read more

  • Neurosymbolic approaches to causal representation learning

    Project ID: STAI-CDT-2023-KCL-26
    Themes: Logic, Verification
    Supervisor: David Watson

    Causal reasoning is essential to decision-making in real-world problems. However, observational data is rarely sufficient to infer causal relationships or estimate treatment effects due to confounding signals. Pearl (2009)...

    Read more

  • Fast Reinforcement Learning using Memory-Augmented Neural Networks

    Project ID: STAI-CDT-2023-KCL-27
    Themes: Norms, Reasoning
    Supervisor: Yali Du, Albert Meroño Peñuela

    Reinforcement learning resembles human learning, with intelligence accumulated through experience. To attain expert human-level performance on tasks such as Atari video games or chess, deep RL systems have required many...

    Read more

  • Verification of Neuro-Symbolic Multi-Agent Systems in Uncertain Environments

    Project ID: STAI-CDT-2023-KCL-28
    Themes: Multi-agent systems, Verification
    Supervisor: Nicola Paoletti

    The field of neuro-symbolic systems is an exciting area of research that combines the power of machine learning with the rigour of symbolic reasoning. Neural systems have shown great promise in a wide range of applications,...

    Read more

  • Incentive-aware digital twins for finance

    Project ID: STAI-CDT-2023-KCL-14
    Themes: Game Theory, Multi-agent systems
    Supervisor: Carmine Ventre

    Modern financial markets represent fertile soil for AI systems. As of October 2019, at least two thirds of UK financial services companies use AI, with its growing adoption in trading, risk management and pricing....

    Read more

  • Verification of Matching Algorithms for Social Welfare

    Project ID: STAI-CDT-2023-KCL-19
    Themes: Logic, Verification
    Supervisor: Mohammad Abdulaziz

    Matching is a fundamental problem in combinatorial optimisation with multiple applications in AI, like in belief propagation [10], multi-agent resource allocation algorithms [6], and constraint solving [16], and in...

    Read more

  • Verifying Geometric Learning Machines for Generalisation, Robustness and Compression

    Project ID: STAI-CDT-2023-IC-2
    Themes: Verification
    Supervisor: Tolga Birdal

    As part of the model-based approaches to safe and trusted AI, this project aims to shed light on the phenomenon of robust generalisation as a trade-off in geometric deep networks. Unfortunately, classical learning theory...

    Read more

  • Causal Temporal Logic

    Project ID: STAI-CDT-2023-KCL-18
    Themes: Logic
    Supervisor: Nicola Paoletti

    Temporal logic (TL) is arguably the primary language for formal specification and reasoning about system correctness and safety. It enables the specification and verification of properties such as “will the agent...

    Read more

  • Improving Robustness of Pre-Trained Language Models

    Project ID: STAI-CDT-2023-KCL-25
    Themes: Logic, Norms, Reasoning
    Supervisor: Yulan He

    Recent efforts in Natural Language Understanding (NLU) have been largely exemplified in tasks such as natural language inference, reading comprehension and question answering. We have witnessed the shift of paradigms in NLP...

    Read more

  • Explanations of Medical Images

    Project ID: STAI-CDT-2023-KCL-24
    Themes: Logic, Verification
    Supervisor: Hana Chockler

    We developed a framework for causal explanations of image classifiers based on the principled approach of actual causality [1] and responsibility [2], the latter pioneered by Dr Chockler. Our framework already resulted in a...

    Read more

  • Multiple Explanations of AI image classifiers

    Project ID: STAI-CDT-2023-KCL-23
    Themes: Logic, Verification
    Supervisor: Hana Chockler

    We developed a framework for causal explanations of image classifiers based on the principled approach of actual causality [1] and responsibility [2], the latter pioneered by Dr Chockler. Our framework already resulted in a...

    Read more

  • Integrating Sub-symbolic and Symbolic Reasoning for Value Alignment

    Project ID: STAI-CDT-2023-KCL-22
    Themes: Logic
    Supervisor: Sanjay Modgil, Odinaldo Rodrigues

    An important long-term concern regarding the ethical impact of AI is the so-called ‘value alignment problem’; that is, how to ensure that the decisions of autonomous AIs are aligned with human values. Addressing...

    Read more

  • Extracting interpretable symbolic representations from neural networks using information theory and causal abstraction

    Project ID: STAI-CDT-2023-IC-4
    Themes: Logic, Norms, Reasoning
    Supervisor: Pedro Mediano

    Neurosymbolic systems seek to combine the strengths of two major classes of AI algorithms: neural networks, able to recognise patterns in unstructured data, and logic-based systems, capable of powerful reasoning. One of the...

    Read more

  • Learning and deploying safe and trustworthy models of data provenance

    Project ID: STAI-CDT-2023-KCL-21
    Themes: AI Provenance, Logic
    Supervisor: Albert Meroño Peñuela, Luc Moreau

    Our modern lives are increasingly governed by ubiquitous AI systems and an abundance of digital data. More and more products and services are providing us with better tools and recommendations for our professional,...

    Read more

  • Generative modelling with neural probabilistic circuits

    Project ID: STAI-CDT-2023-KCL-20
    Themes: AI Planning, Logic, Verification
    Supervisor: David Watson

    The current state of the art in generative modelling is dominated by neural networks. Despite their impressive performance on many benchmark tasks, these algorithms do not provide tractable inference for common and...

    Read more

  • Understanding Distribution Shift with Logic-based Reasoning and Verification

    Project ID: STAI-CDT-2023-KCL-17
    Themes: Logic, Reasoning
    Supervisor: Fabio Pierazzi

    Data-driven approaches have proven powerful in a variety of domains, from computer vision to NLP. However, in some domains – such as attack detection in security – the arms race between...

    Read more

  • Towards Sharp Generalization Guarantees for All-data Training through Scenario Approach

    Project ID: STAI-CDT-2023-IC-1
    Themes: Verification
    Supervisor: Dario Paccagnan

    In recent years, AI has achieved tremendous success in many complex decision making tasks. However, when deploying these systems in the real world, safety concerns restrict — often severely — their adoption. One concrete...

    Read more

  • Dealing with Imperfect Rationality in AI Systems

    Project ID: STAI-CDT-2023-KCL-13
    Themes: Reasoning
    Supervisor: Carmine Ventre

    AI systems often collect their input from humans. For example, parents are asked to input their preferences over primary schools before a centralised algorithm allocates children to schools. Should the AI trust the input...

    Read more

  • Trusted Collective Intelligence through Norms, Ontologies and Provenance

    Project ID: STAI-CDT-2023-KCL-12
    Themes: AI Provenance, Norms
    Supervisor: Elena Simperl, Dr Timothy Neate

    Collective intelligence (CI) communities are among the greatest examples of collaboration, capability, and creativity of the digital age. CI communities allow large groups of individuals to work together towards a shared...

    Read more

  • Automatic Testing and Fixing Learning-based Conversational Agents with Knowledge Graphs

    Project ID: STAI-CDT-2023-KCL-10
    Themes: Norms, Verification
    Supervisor: Jie Zhang, Mohammad Mousavi

    Background: Learning-based conversational agents can generate conversations that violate basic logical rules and common sense, which can seriously affect user experience and lead to mistrust and frustration. To create...

    Read more

  • Data Bias Evaluation and Mitigation via Rule-based Classification

    Project ID: STAI-CDT-2023-KCL-9
    Themes: Norms
    Supervisor: Jie Zhang, Gunel Jahangirova

    Motivation: Training data can be severely biased. The existing metrics of data bias are based on data balance situations conditioned on protected attributes. This is coarse-grained and does not consider the relationship...

    Read more

  • Explainability of Agent-based Models as a Tool for Validation and Exploration

    Project ID: STAI-CDT-2023-KCL-11
    Themes: Argumentation, Verification
    Supervisor: Dr Steffen Zschaler, Dr Katie Bentley

    Agent-based models (ABMs) are an AI technique to help improve our understanding of complex real-world interactions and their “emergent behaviours”. ABMs are used to develop and test theories or to explore how interventions...

    Read more

  • Computational Social Choice and Machine Learning for Ethical Decision Making

    Project ID: STAI-CDT-2023-KCL-5
    Themes: AI Planning, Argumentation, Norms, Reasoning
    Supervisor: Maria Polukarov

    The problem of ethical decision making presents a grand challenge for modern AI research. Arguably, the main obstacle to automating ethical decisions is the lack of a formal specification of ground-truth ethical principles,...

    Read more

  • Detecting Deception and Manipulation in Planning and Explanation Systems

    Project ID: STAI-CDT-2023-KCL-2
    Themes: AI Planning
    Supervisor: Martim Brandao

    Planning algorithms are used in a variety of contexts, from navigation apps to recommendation algorithms, robot vacuums, autonomous vehicles, etc. Companies using such algorithms have financial incentives to manipulate (or...

    Read more

  • Safe Reinforcement Learning from Human Feedback

    Project ID: STAI-CDT-2023-KCL-4
    Themes: Verification
    Supervisor: Yali Du

    Reinforcement learning (RL) has become a new paradigm for solving complex decision-making problems. However, it presents numerous safety concerns in real world decision making, such as unsafe exploration, unrealistic reward...

    Read more

  • A Critical and Inclusive Approach to Robotics

    Project ID: STAI-CDT-2023-KCL-3
    Themes: AI Planning
    Supervisor: Martim Brandao

    Robots are already being used in warehouses, factories, supermarkets, homes, hazardous sites and other applications. While many issues of stereotypes, disparate impact, and harmful impact of AI have been brought to the...

    Read more

  • Formal Reasoning about Golog Programs

    Project ID: STAI-CDT-2022-KCL-10
    Themes: AI Planning, Logic, Verification
    Supervisor: Mohammad Abdulaziz

    Constructing a world-model is a fundamental part of model-based AI, e.g. planning. Usually, such a model is constructed by a human modeller and it should capture the modeller’s intuitive understanding of the world dynamics...

    Read more

  • Trusted AI for Safe Stop and Search

    Project ID: STAI-CDT-2022-KCL-9
    Themes: Reasoning, Verification
    Supervisor: Mohammad Mousavi, Rita Borgo

    The main objective of this project is to develop AI techniques to analyse the behaviour recorded in the past Stop and Search (S&S) operations. The AI system will be used to inform future operations, avoid unnecessary...

    Read more

  • Detecting fake news

    Project ID: STAI-CDT-2022-KCL-8
    Themes: Reasoning
    Supervisor: Frederik Mallmann-Trenn

    The rise of fake news and misinformation is a threat to our societies. Even though we are not always able to quantify the effect of misinformation, it is clear that it is polarising society and often leads to violence...

    Read more

  • Composable Neural Networks

    Project ID: STAI-CDT-2022-IC-5
    Themes: Verification
    Supervisor: Nicolas Wu, Matthew Williams

    Deep learning has shown huge potential in terms of delivering AI with real-world impact. Most current projects are built in either PyTorch, Tensorflow, or similar platforms. These tend to be written in languages where the...

    Read more

  • Co-Evolution of Symbolic AI with Data and Specification

    Project ID: STAI-CDT-2022-KCL-6
    Themes: Verification
    Supervisor: Jan Oliver Ringert, Mohammad Mousavi

    Trusted autonomous systems (TAS) rely on AI components that perform critical tasks for stakeholders that have to rely on the services provided by the system, e.g., self-driving cars or intelligent robotic systems. Two...

    Read more

  • Causal Decentralised Finance

    Project ID: STAI-CDT-2022-KCL-3
    Themes: Logic, Norms, Verification
    Supervisor: Hana Chockler

    The goal of this project is to develop a causality-based framework for the analysis of decentralised finance (DeFi), based on the principled approach of actual causality [1] and responsibility [2], the latter pioneered by...

    Read more

  • Building Abstract Representations to Check Multi-Agent Deep Reinforcement-Learning Behaviors

    Project ID: STAI-CDT-2022-IC-3
    Themes: Logic, Verification
    Supervisor: Francesco Belardinelli

    Reinforcement Learning, and its extension Deep Reinforcement Learning (DRL), are Machine Learning (ML) techniques that allow autonomous agents to learn optimal behaviours (called policies) in unknown and complex...

    Read more

  • Explainable Reinforcement Learning with Causality

    Project ID: STAI-CDT-2022-IC-4
    Themes: Logic, Verification
    Supervisor: Francesco Belardinelli

    Reinforcement Learning (RL) is a technique widely used to allow agents to learn behaviours based on a reward/punishment mechanism [1]. In combination with methods from deep learning, RL is currently applied in a number of...

    Read more

  • Verified Multi-Agent Programming with Actor Models

    Project ID: STAI-CDT-2022-ICL-1
    Themes: Logic, Verification
    Supervisor: Prof Nobuko Yoshida

    Today, most computer applications are developed as ensembles of concurrent multi-agents (or components), that communicate via message passing across some network. Modern programming languages and toolkits provide...

    Read more

  • Creating and evolving knowledge graphs at scale for explainable AI

    Project ID: STAI-CDT-2022-KCL-1
    Themes: AI Provenance, Argumentation, Verification
    Supervisor: Prof Elena Simperl

    Knowledge graphs and knowledge bases are forms of symbolic knowledge representations used across AI applications. Both refer to a set of technologies that organise data for easier access, capture information about people,...

    Read more

  • Neuro-Symbolic Policy Learning and Representation for Interpretable and Formally-Verifiable Reinforcement Learning

    Project ID: STAI-CDT-2021-IC-24
    Themes: AI Planning, Logic, Verification
    Supervisor: Francesco Belardinelli

    The growing societal impact of AI-based systems has brought with it a set of risks and concerns [1, 2]. Indeed, unintended and harmful behaviours may emerge from the application of machine learning (ML) algorithms, including...

    Read more

  • Run-time Verification for Safe and Verifiable AI

    Project ID: STAI-CDT-2021-IC-23
    Themes: AI Planning, Logic, Verification
    Supervisor: Francesco Belardinelli

    The growing societal impact of AI-based systems has brought with it a set of risks and concerns [1, 2]. Indeed, unintended and harmful behaviours may emerge from the application of machine learning (ML) algorithms, including...

    Read more

  • Reward Synthesis from Logical Specifications

    Project ID: STAI-CDT-2021-IC-22
    Themes: AI Planning, Logic, Verification
    Supervisor: Francesco Belardinelli

    The growing societal impact of AI-based systems has brought with it a set of risks and concerns [1, 2]. Indeed, unintended and harmful behaviours may emerge from the application of machine learning (ML) algorithms, including...

    Read more

  • Specification, diagnosis and repair for deep learning systems

    Project ID: STAI-CDT-2021-IC-21
    Themes: Logic, Verification
    Supervisor: Dalal Alrajeh

    Recent times have witnessed a flurry of advancements in ML, enabling its widespread application in domains such as healthcare, security and autonomous vehicles. However, its deployment has also come at a cost, resulting in...

    Read more

  • Synthesizing and revising plans for autonomous robot adaptation

    Project ID: STAI-CDT-2021-IC-20
    Themes: AI Planning, Logic, Verification
    Supervisor: Dalal Alrajeh

    AI Planning is concerned with producing plans that are guaranteed to achieve a robot’s goals, assuming the pre-specified assumptions about the environment in which it operates hold. However, no matter how detailed these...

    Read more

  • Symbolic machine learning techniques for explainable AI

    Project ID: STAI-CDT-2021-KCL-15
    Themes: AI Planning, Verification
    Supervisor: Kevin Lano

    Machine learning (ML) approaches such as encoder-decoder networks and LSTM have been successfully used for numerous tasks involving translation or prediction of information (Otter et al, 2020). However, the knowledge...

    Read more

  • Trustful Ontology Engineering and Reasoning through Provenance

    Project ID: STAI-CDT-2021-KCL-12
    Themes: AI Provenance, Logic
    Supervisor: Albert Meroño Peñuela

    Ontologies have become fundamental AI artifacts in providing knowledge to intelligent systems. The concepts and relationships formalised in these ontologies are frequently used to semantically annotate data, helping...

    Read more

  • Goal-based explanations for autonomous systems and robots

    Project ID: STAI-CDT-2021-KCL-11
    Themes: AI Planning
    Supervisor: Gerard Canal, Andrew Coles

    Autonomous systems such as robots may become another appliance found in our homes and workplaces. In order to have such systems helping humans to perform their tasks, they must be as autonomous as possible, to prevent...

    Read more

  • A Novel Model-driven AI Paradigm for Intrusion Detection

    Project ID: STAI-CDT-2021-KCL-6
    Themes: Logic, Verification
    Supervisor: Fabio Pierazzi

    This project aims to investigate, design and develop new model-driven methods for AI-based network intrusion detection systems. The emphasis is on designing an AI model that is able to verify and explain its safety...

    Read more

  • Neural-symbolic Reinforcement Learning.

    Project ID: STAI-CDT-2021-IC-2
    Themes: AI Planning, Logic
    Supervisor: Alessandra Russo

    Recent advances in deep reinforcement learning (DRL) have allowed computer programs to beat humans at complex games like Chess or Go years before the original projections. However, the SOTA in DRL misses out on some of the...

    Read more

  • Towards Trusted Epidemic Simulation

    Project ID: STAI-CDT-2021-IC-4
    Themes: Verification
    Supervisor: Wayne Luk

    Agent-based models (ABMs) are powerful methods to describe the spread of epidemics. An ABM treats each susceptible individual as an agent in a simulated world. The simulation algorithm of the model tracks the health status...

    Read more

  • Enhancing Scale and Performance of Safe and Trusted Multi-Agent Planning

    Project ID: STAI-CDT-2021-IC-5
    Themes: AI Planning
    Supervisor: Wayne Luk

    Cooperative Multi-Agent Planning (MAP) is a topic in symbolic artificial intelligence (AI). In a cooperative MAP system, multiple agents collaborate to achieve a common goal. A cooperative MAP solver produces...

    Read more

  • Verifying Safety and Reliability of Robotic Swarms

    Project ID: STAI-CDT-2021-IC-6
    Themes: Logic, Verification
    Supervisor: Alessio Lomuscio

    The effective development and deployment of single-robot systems is known to be increasingly problematic in a variety of application domains including search and rescue, remote exploration, de-mining, etc. These and other...

    Read more

  • Safe Rational Interactions in Data-driven Control

    Project ID: STAI-CDT-2021-IC-8
    Themes: AI Planning, Logic, Verification
    Supervisor: Alessio Lomuscio, David Angeli

    In autonomous and multi-agent systems players are normally assumed rational and cooperating or competing in groups to achieve their overall objectives. Useful methods to study the resulting interactions come from game...

    Read more

  • Verification of neural-symbolic agent-based systems

    Project ID: STAI-CDT-2021-IC-9
    Themes: Logic, Verification
    Supervisor: Alessio Lomuscio

    Considerable work has been carried out in the past two decades on Verification of Multi-Agent Systems. Various methods based on binary-decision diagrams, bounded model checking, abstraction, symmetry reduction have been...

    Read more

  • Abstract Interpretation for Safe Machine Learning

    Project ID: STAI-CDT-2021-IC-10
    Themes: Logic, Verification
    Supervisor: Sergio Maffeis

    Machine learning (ML) techniques such as Support Vector Machines, Random Forests and Neural Networks are being applied with great success to a wide range of complex and sometimes safety-critical tasks. Recent research in...

    Read more

  • Argumentation-based Interactive Explainable Scheduling

    Project ID: STAI-CDT-2021-IC-11
    Themes: Argumentation
    Supervisor: Ruth Misener

    AI is continuing to make progress in many settings, fuelled by data availability and computational power, but it is widely acknowledged that it cannot fully benefit society without addressing its widespread inability to...

    Read more

  • Correct-by-construction domain-specific AI planners

    Project ID: STAI-CDT-2021-KCL-4
    Themes: AI Planning, Verification
    Supervisor: Steffen Zschaler

    When using complex algorithms to make decisions within autonomous systems, the weak link is the abstract model used by the algorithms: any errors in the model may lead to unanticipated behaviour potentially risking...

    Read more

  • Probabilistic Abstract Interpretation of Deep Neural Networks

    Project ID: STAI-CDT-2021-IC-14
    Themes: Verification
    Supervisor: Herbert Wiklicky

    The extraction of (symbolic) rules which describe the operation of (deep) neural networks which have been trained to perform a certain task is central to explaining their inner workings in order to judge their...

    Read more