Dean Ball Joins the Trump Administration as Senior Policy Advisor for AI & Emerging Tech

Summary
  • Policy researcher and tech analyst Dean Ball will serve as Senior Policy Advisor for AI and Emerging Technology at the White House.
  • A prolific voice in the AI policy space, Ball has argued that regulators ought to approach AI policy as adaptation to an unfolding paradigm rather than seeking to intervene directly.
  • Ball has advocated for certainty for developers, durability across U.S. states, and regulating companies rather than models or uses.

Dean Ball: Trump AI advisor

Until his appointment, Dean Ball was best known as the author of the Hyperdimensional Substack, co-host of the AI Summer podcast, and a Research Fellow with the AI & Progress Project at GMU’s Mercatus Center.

Now, Ball joins the Office of Science and Technology Policy (OSTP), which advises the U.S. President on the effects of science and technology on domestic and international affairs.

Though he’s been writing publicly about AI for less than two years, Ball has already become a prolific voice in the space—amassing a loyal following and publishing dozens of opinion pieces. He’s argued consistently that AI is unlikely to pose existential risks to humanity, promoted the idea that AI agents could drive more strategic decision-making in the U.S. economy, and advocated for an AI governance approach that limits state overreach into individual autonomy.

Key themes from Dean Ball's writing

  • Policy should create certainty for AI developers. Ball argues that one of the largest obstacles for AI firms—especially those deploying agentic systems—is regulatory uncertainty. He sees clear, forward-looking policy as essential to accelerating adoption across sectors.
  • Policy must be durable across U.S. states. Rather than locking in a one-size-fits-all approach, Ball supports Congressional frameworks that enable U.S. states to experiment with diverse governance models—without allowing a fragmented patchwork to emerge.
  • AI is an industrial revolution, not a crisis to contain. Ball resists the idea that AI policy should revolve around passing a single, decisive law that addresses a specific risk. Instead, he argues that the government should build tools, make bets, and shape the evolving landscape over time, accepting the uncertainty inherent in technological transformation.
  • Transparency is key. Ball was largely skeptical of the initial clauses drawn up in California’s famous Senate Bill 1047 proposal, though he came to see the final version (focused on transparency and liability) as substantially improved. In a TIME magazine piece taking stock of the debate, Ball and co-author Daniel Kokotajlo note that transparency carries a lower downside and is more likely to gain widespread support than other potential regulations, and “is the key to making AI go well.”
  • Outcomes-based regulation is a mistake. He opposes regulatory models that aim to constrain AI systems based on how they are used and what societal outcomes they may lead to, viewing this kind of legislation as ill-suited to the dynamic and complex nature of AI technology.

Notable writings from Hyperdimensional

Here’s What I Think We Should Do

In this overview of his core views, Ball supports preserving key Biden-era Executive Order provisions like federal reporting requirements for labs and data centers, while calling for a narrowed focus for the U.S. AI Safety Institute. He argues that the major problems in making AI go well are primarily scientific and engineering problems, rather than regulatory ones. The role of government should be to institute a basic standard of transparency for frontier labs, refraining from attempting to directly control what is akin to an industrial revolution.

"There is no way to pass 'a law,' or a set of laws, to control an industrial revolution. That is not what laws are for. Laws are the rules of the game, not the game itself."

Putting Private AI Governance into Action

Ball envisions a model in which government-authorized private entities take on oversight and evaluation roles. While ultimately accountable to public institutions, these organizations could experiment with novel standards-setting and compliance approaches—bringing a level of flexibility traditional regulators may lack.

On Algorithmic Impact Assessments

Referencing the rise of environmental impact assessments in the 20th century, Ball critiques proposals to introduce analogous frameworks for AI. He questions whether such assessments can be meaningfully applied to systems as complex, fast-evolving, and context-dependent as modern AI.

Where We Are Headed (Part One)

Without understanding where AI might take science, the economy, and society in 20 years, finding the motivation and justification to regulate it now can be difficult. Ball lays out the assumptions behind his policy proposals by envisioning what the world will look like in the coming decades, a future that, in his telling, involves real trade-offs.

Authors
Noah Knapp
Publishing Editor, AIPB

Have something to share? Please reach out to us with your pitch.

If we decide to publish your piece, we will provide thorough editorial support.