About

AI Policy Bulletin publishes policy-relevant perspectives on frontier AI governance. 

With time, we will explore many different formats for research communication and commentary: research summaries, opinions, interviews, explainers, and more.

There’s a lot going on in AI policy, and it can be hard to keep up with all the new research and commentary. We want to make important information easier to access, whether by distilling a thorough report into something shorter, pulling disparate threads into a single story, or pointing you to our favourite pieces from around the web.

We are learning by doing. Reach out to us at admin@aipolicybulletin.org if you have feedback or would like to collaborate.

Editorial line

We are:

  • Open to all perspectives. We want to create a space where people from different AI policy communities can exchange ideas and share their insights.
  • Focused on cutting-edge AI developments. We take seriously AI’s transformative potential.
  • Ready to spread the word. We want to help authors reach their target audience and have a bigger impact in the world.

We’re trying to make sense of the landscape amidst high uncertainty, and we want people who care about policy to be better equipped to assess, shape, and respond to the situation as it unfolds. We’ve created AI Policy Bulletin to support thoughtful discourse and help make the pieces we find most useful accessible to more people.

Key assumptions

When we select pieces, we do so with the following assumptions:

  • Society is currently ill-equipped to shape the trajectory of AI in a responsible way. Many good perspectives and solutions have been and will be proposed, and we want decision-makers to know about them and take action where appropriate.
  • The AI industry is under-regulated compared to most other critical industries (e.g. aviation, finance). The optimal level of regulation is unclear: it could come too soon or too late, it could target problems best solved by other means, and it could spur innovation or slow it down; the rate of innovation itself could be a net positive or a net negative for society, depending on the nature of AI capabilities.
  • Whenever possible, policy should be informed by evidence. However, holding regulatory action to too high an evidentiary standard can mean failing to take appropriate action. Given the uncertainty about the trajectory of AI development, arriving at evidence-based AI policy may first require evidence-seeking policy.
  • AI might match or exceed human capabilities in our lifetimes.
  • Financial and geostrategic considerations are driving more competitive dynamics between major global powers. Cutting corners in the name of beating an adversary makes negative outcomes from AI misuse and accidents more likely to materialize. AI-enabled societal catastrophes, such as extreme concentration of power, are on the table.

Write for us

Peer reviewers

Each piece is reviewed by two or more of our volunteer peer reviewers.

If you’re interested in joining our review team, please fill out this form.

Alex Petropoulos
Advanced AI Policy Researcher, CFG
David Atkinson
Lecturer, University of Texas
Demetrius Floudas
AI@Cam Unit, University of Cambridge
Jacob Schaal
Head of Policy, Encode; London School of Economics
Joan O'Bryan
Lecturer, Harvard University
Keller Scholl
Contractor, AI Governance
Kevin Wei
Technology and Security Policy Fellow, RAND
Radhika Bajpai
AI Governance & Cybersecurity Leader
Rocket Drew
AI Reporter, The Information
Sam Manning
Senior Research Fellow, Centre for the Governance of AI
Sharon Matzkin
Researcher, University of Haifa
Sienka Dounia
AI Safety Content Associate, Successif
Tao Burga
Non-Resident Fellow, Institute for Progress
Uma Kalkar
Policy advisor to the OECD on Global Risks
Vaibhav Garg
Cybersecurity Executive Director, Comcast
Vinay Hiremath
Research Scholar, Centre for the Governance of AI
Yogasai Gazula
AI Policy & Delivery, Responsible AI Institute
Zach Stein-Perlman
Founder, AI Lab Watch

Copy editors

Amber Dawn
Freelance writer and editor
Kimya Ness
Freelance writer and editor
Noah Knapp
Publishing Editor, AIPB

If you’re a copy editor interested in contract work, contact us.

Founding Team

The magazine was founded and is run by a team of volunteers.

Alex Lintz
Anine Andresen
Jamie Bernardi
Kristina Fort
Max Räuker