Publish with us
Reach your audience by publishing with the AI Policy Bulletin. We publish the latest AI policy thinking, written with decision makers in mind.
What is an AI Policy Bulletin piece?
We publish writing that offers valuable insights for ensuring that AI has a positive impact on society. Our primary focus is on how to govern advanced general-purpose AI systems, especially taking into account likely near-term developments in AI capabilities.
If you’re unsure whether your piece fits this framing, please do reach out; we’re happy to discuss it as these content guidelines evolve.
Why write for us?
We help you by providing:
- Impact: We cultivate high-quality content on different social media platforms, publish a newsletter, and share your piece with an expert audience. We care about your piece reaching the right people at the right time.
- Audience: We are building an AI policy-focused magazine with a readership in international institutions and governments.
- Editorial Support: We will provide the help you need to communicate your point in the most actionable and policy-relevant way. We work with experienced editors to provide feedback on content and style and make your piece stand out.
How to submit a pitch
We welcome pitches (1-3 paragraphs) on various topics related to AI policy.
Your pitch should fit one of the formats below. All formats should include a strong, clear, evidence-based argument and enough detail for us to understand the direction of your piece. When considering pitches, we also assess your relevant expertise and track record. If you’re newer to writing, that’s OK, but we may ask for a more complete piece before making a publishing decision.
A pitch that gets us excited answers:
- What is your core argument?
- What evidence will you use to support your argument?
- Who is your audience, and why do they need to hear this right now?
- How does your expertise connect to this piece, from the readers’ perspective?
Writing Formats
In your pitch, please let us know which format best suits your piece. If you’re not sure, we’re happy to advise.
- Essays: We want to publish timeless essays with a clear thesis and arguments, exploring an AI policy-related topic in depth. We are mainly interested in essays of up to 2,500 words.
- Policy proposals: We are open to publishing novel AI policy ideas. These should be well-researched and specific. Reach out if you have a policy idea in the works.
- Opinions and commentaries: We want to publish well-argued opinions and commentaries on relevant AI policy questions and events, in less than 1,000 words.
- Research summaries: We want to publish insightful summaries of AI policy research projects that distil the key messages of the research in less than 1,500 words.
- Reviews: We want to publish reviews of interesting cultural pieces or events relevant to AI policy that are up to 1,000 words long, for example, book reviews or conference reviews.
- Interviews: We may commission interviews with AI policy thought leaders. Reach out if you want to write an interview for us.
Pieces we'd like to publish
Some examples of pieces we’d have been excited to publish include:
- This review of Dario Amodei’s Machines of Loving Grace by Dr Seán Ó hÉigeartaigh.
- This research summary by Jeffrey Ladish and Lennart Heim.
- This interview by Matt Clancy and Tamay Besiroglu.
- This policy pitch by Julia Willemyns, Haydn Belfield, and Tom Milton.
A non-exhaustive list of topics we'd be excited to receive pitches on:
- How much emphasis should there be on model weight security in AI legislation?
- Will AI run on an advertising model? What would that imply?
- Who's legally responsible for the actions of AI agents?
- The story behind California Senate Bill 1047 and future possibilities for U.S. state-level legislation
- EU AI Act simplified: Likely impacts on AI safety given current draft codes
- How does the AI control paradigm differ from alignment, and what are the strategic implications?
- What do Anthropic and OpenAI mean when they say their next models could present high risk by helping create chemical and biological weapons?
- Robot technology in 2025: Current state, developments, and trajectory
- AI Manhattan project proposals: Who has put ideas forward and how do they compare?
- AI labor market impacts: Assessing the evidence from 2023-2025
- Interpretability research breakthroughs: What's working and what isn't?
- Summary and assessment of the Frontier Model Forum’s information-sharing agreement
- AI capabilities evaluations: What's the theory of change?
- Early research retrospective: How well has initial AI governance research aged?
- What level of model security is realistic to aim for?