About
AI Policy Bulletin publishes policy-relevant perspectives on frontier AI governance.
With time, we will explore many different formats for research communication and commentary: research summaries, opinions, interviews, explainers, and more.
There’s a lot going on in AI policy, and it can be hard to navigate all the new research and commentary coming out. We want to make important information easier to access, whether by distilling a thorough report into something shorter, pulling disparate threads into a single story, or pointing you to our favourite pieces from around the web.
We are learning by doing. Reach out to us at admin@aipolicybulletin.org if you have feedback or would like to collaborate.
Editorial line
We are:
- Open to all perspectives. We want to create a space where people from different AI policy communities can exchange ideas and share their insights.
- Focused on cutting-edge AI developments. We take seriously AI’s transformative potential.
- Ready to spread the word. We want to help authors reach their target audience and have a bigger impact in the world.
We’re trying to make sense of the landscape amidst high uncertainty, and we want people who care about policy to be better equipped to assess, shape, and respond to the situation as it unfolds. We’ve created AI Policy Bulletin to support thoughtful discourse and help make the pieces we find most useful accessible to more people.
Key assumptions
When we select pieces, we do so with the following assumptions:
- Society is currently ill-equipped to shape the trajectory of AI in a responsible way. Many good perspectives and solutions have been and will be proposed, and we want decision-makers to know about them and take action where it's appropriate to do so.
- The AI industry is under-regulated compared to most other critical international industries (aviation, finance). It’s not clear what the optimal level of regulation is: it could come too soon or too late, it could target problems that are better solved by other means, and it could spur innovation or slow it down. The rate of innovation itself could be a net positive or a net negative for society, depending on the nature of AI capabilities.
- Whenever possible, policy should be informed by evidence. However, holding regulatory action to too high an evidentiary standard can mean failing to take appropriate action. Given the uncertainty about the trajectory of AI development, arriving at evidence-based AI policy may first require evidence-seeking policy.
- AI might match or exceed human capabilities in our lifetimes.
- Financial and geostrategic considerations are leading to more competitive dynamics between major global powers. Cutting corners in the name of beating the adversary makes negative outcomes from AI misuse and accidents more likely to materialize. AI-enabled social catastrophes like extreme power concentration are on the table.
Peer reviewers
Each piece is reviewed by two or more of our volunteer peer reviewers.
If you’re interested in joining our review team, please fill out this form.
Copy editors
If you’re a copy editor interested in contract work, contact us.
Founding team
The magazine was founded and is run by a team of volunteers.