The UKRI Centre for Doctoral Training (CDT) in Safe and Trusted Artificial Intelligence (STAI) brings together world-leading experts from King’s College London and Imperial College London to train a new generation of researchers in safe and trusted artificial intelligence (AI).
An AI system is considered to be safe when we can provide some assurance about the correctness of its behaviour, and trustworthy if we have well-placed confidence in its decision making.
What we offer
We offer a unique four-year PhD programme, focussed on the use of symbolic AI techniques for safe and trusted AI. Symbolic techniques provide an explicit language for representing, analysing and reasoning about systems and their behaviours. Explicit models can be verified, solutions based on them can be guaranteed to be safe and correct, and they can provide human-understandable explanations and support user collaboration and interaction.
Alongside your individual PhD project, you will engage in a range of training activities, both in state-of-the-art AI techniques and in the ethical, societal, and legal implications of AI in research and industrial settings. You will graduate as an expert in safe and trusted AI, able to consider the wider implications of AI systems, to recognise safety and trust as a key part of the AI development process, and equipped to meet the needs of industry, academia, and the public sector.
We will fund approximately 12 students to join in October 2022. Studentships are four-year awards, including tuition fees, a tax-free stipend set at the UKRI rate plus London weighting, and a generous allowance for consumables and travel.
How to Apply
The final application deadline for entry in October 2022 is 15 June 2022 (23:59).
We are committed to providing an inclusive environment in which diverse students can thrive, and we particularly encourage applications from women, disabled candidates, and Black, Asian and Minority Ethnic (BAME) candidates, who are currently under-represented in the sector.
For queries please contact: firstname.lastname@example.org