AI Governance Reading Group

Explore policy and governance approaches to reducing risks posed by advanced AI systems.
Applications are now closed.
Express interest in future iterations

Understand the risks posed by the development of advanced AI systems and the policy and governance approaches to reducing these risks, including standards and regulations. The course follows a curriculum by BlueDot Impact, with input from researchers at the Centre for the Governance of AI, the Centre for the Study of Existential Risk, and Harvard University.

Each week consists of 2–4 hours of work, split between independent readings and a group discussion. Discussions are held in person on a university campus or in central London, and are facilitated by a professional or a SAIL organiser with previous experience in AI governance.

Details

  • We don’t yet have a specific number of applications we can accept, although we don’t expect to be able to accept everyone who applies.

  • We don’t plan to run any virtual sessions, although we recommend checking out the virtual course by BlueDot Impact. You can also see other local groups here, and if you would like help connecting with local or online groups, you can get in touch with us.

  • The course doesn’t assume any prior knowledge of machine learning or of policy and regulation.

  • The discussion sessions will be held in person on the Imperial, UCL, LSE, or KCL campus, or in a coworking space near Farringdon; further details will be shared upon acceptance onto the course.

  • We currently plan to run another in-person course starting in October this year. You can express interest in joining future iterations by filling out this form.

  • We run socials, hackathons, and other events. You can see all the events we run here.

  • If you are more interested in the technical side of AI safety, we recommend joining our technical AGI Safety reading group.

  • The course content is available online here.


Curriculum

  • The first week covers the technical basics of machine learning, the dominant approach to AI.

  • AI could pose risks from misuse or accidents, and many AI researchers and notable figures have warned that we should treat extinction risk from AI as a global priority. While there is substantial disagreement about the severity of these risks, responsible policymaking should be informed by the reasons many experts are concerned.

  • This week looks at why it might be difficult to develop advanced AI aligned with human values, even if the developers of the system are trying to do so.

  • This week looks at how we can address the risks discussed so far by regulating industrial-scale AI development.

  • If some governments enforce AI regulations, AI developers in countries without such regulations could still cause global harms. This week looks at how policymakers can guard against this by slowing the proliferation of advanced AI to countries without adequate regulation, or by reaching agreements with multiple governments to put guardrails in place.

  • This week builds on last week’s readings by looking at how governments can establish international agreements on AI safety regulation.

  • This week looks at additional governance proposals including AI companies’ policies, “CERN for AI”, slowing down development, and developing positive visions for AI.

  • The final readings look into the work currently happening in the field and how you could contribute.