AI Safety Course

Do you want to help make AI beneficial to humanity? Explore AI Safety in our comprehensive 8-week program, following a Governance or Alignment Track.
Run with AI Safety Collab.
Course runs from 4th August to 27th October 2025
Apply by: 20th July 2025, 23:59 (all time zones)

In this course you’ll explore the risks posed by advanced AI systems and approaches to addressing these risks, following our Governance or Alignment Track.
The course is organised by AI Safety Collab and Safe AI London, a project of Arcadia Impact.

The course runs for 8 weeks from the week starting 4th August, with an optional 4-week project phase afterwards. The time commitment is about 5 hours per week: 2 hours of reading, 1 hour of exercises, and 1.5-2 hours of discussion sessions.
It’s open to applications from anyone interested in learning about the topic; you don’t need a technical background to participate, although some technical experience can be helpful for the Alignment Track. Discussion sessions will be arranged after you apply and will likely be online, with in-person events arranged in London.
Further details can be found on the application form. The application form is shared across the whole course, so make sure to mention that you’re based in London so we can arrange in-person sessions.

If you already have some background knowledge in AI Safety, we are also looking for course facilitators. Express interest in facilitating through this form.

Curriculum

AI Governance Track

The AI Governance Track follows the curriculum by BlueDot Impact, covering approaches to reducing AI risks through standards and regulation, and through foreign policy to promote AI safety internationally.
Each week will cover:

  1. How AI systems work

  2. The Promise and Perils of AI

  3. Foundations of AI Governance

  4. The AI Policy Toolkit

  5. How AI is Governed Today

  6. Governance at Frontier AI Companies

  7. Proposals to Govern AI

  8. Contributing to AI Governance

After the course, there is an optional project phase.


AI Alignment Track

The AI Alignment Track follows the curriculum by AI Safety Atlas covering progress in AI and technical approaches to reducing risks.
Each week will cover:

  1. Capabilities

  2. Risks

  3. Strategies

  4. Governance

  5. Evaluations

  6. Specification

  7. Generalisation

  8. Scalable Oversight

  9. Interpretability (optional)