8 Comments
Synthetic Civilization

This gets closer than most AI writing because it treats the real problem as institutional, not merely technical.

More intelligence does not automatically mean more freedom. It means the governance layer matters even more.

Andy Hall

Agreed. The ownership problem is crucial.

Chris L

Some relevant resources you might find interesting:

• ‘AI for societal uplift’ as a path to victory - https://www.lesswrong.com/posts/kvZyCJ4qMihiJpfCr/ai-for-societal-uplift-as-a-path-to-victory

• AI tools for collective epistemics - https://newsletter.forethought.org/p/ai-tools-for-collective-epistemics

• Angel on the shoulder AI tools - https://newsletter.forethought.org/p/angel-on-the-shoulder-ai-tools

• Future of Life Foundation - Fellowship on Human Reasoning - https://www.flf.org/fellowship

• International AI projects should promote differential AI development - https://newsletter.forethought.org/p/international-ai-projects-should

Andy Hall

Thank you! These are excellent.

William

I love the optimism, Andy! I think there's a place for both optimism and clear-eyed pessimism. People said it was inevitable that kids would have phones in school too, and yet the tide is turning and schools are going phone-free — not out of moral panic, but from an understanding of the studies showing that kids learn better, retain information better, and interact with other students better without phones available during the school day. I would not dismiss the growing protest movement so easily either. They have real concerns about the risks of superintelligent AI. We're already seeing job loss (1) and warning signs of rogue AI agents (2), and the opponents only have the same tired "it's inevitable" line as a defense.

So, to engage in the spirit of the post, I would like to propose something constructive that could enhance the thoughtful framework you've laid out, drawing from my experience implementing AI in scientific research. What we've learned is that AI can bring great advantages, allowing researchers to get a structured overview of a research field in minutes instead of weeks.

We're also learning from feedback about using generative AI in writing, as opposed to discovery, synthesis, and reading. There we have to be a little more careful: while it's obviously great to have a copyeditor on hand, and a great translator for people who don't speak the lingua franca of science, there are places where it makes less sense. These are areas where the process is as important as the outcome — producing systematic reviews, for example. These documents change policies and clinical practices affecting everyone, and the systematic review community has been very careful to build relationships with stakeholders: patient communities, clinicians, policymakers, and so on. The reason systematic reviews have the practice-changing power they do is that those stakeholders have been part of the process for designing how systematic reviews are written. If we started generating them without following this structured process, they'd lose a lot of their power, even if they said largely the same thing as a review produced through the traditional process. I wrote in more detail about this: https://synthesis.williamgunn.org/2025/05/07/ai-generated-systematic-reviews-are-they-possible/

I suspect the same dynamic — the process being as important as the outcome — will be an important consideration in many other areas of governance, and attending to it would facilitate adoption of AI where it makes sense much more quickly than casting all hesitation as naive Luddism. Does that sound reasonable?

1. https://jobloss.ai

2. https://fortune.com/2026/03/27/rogue-ai-agents-autonomous-safety/

Andy Hall

Thank you! Yes absolutely. We talked about this a lot back in the day with social media governance — the idea of “process legitimacy.” I think it’s very important.

We already see signs of this with people wanting humans in the loop for hiring decisions, for example. There are a bunch of governance decisions that I suspect will fall into this bucket: most obviously huge justice decisions like applying the death penalty, but probably a wide range of other value-laden decisions too.

It would be great to develop a framework for (a) where AI can improve policymaking and governance at all, and then (b) of those, which are the ones where the outcome matters and the process doesn't. Those would be the first to target with AI improvements.

I’ll be thinking more about this! Thanks for raising it.

panem et circenses

Great piece! Love the optimism and the direction.

Everything comes with trade-offs. Technology adoption makes the basics simpler but also raises the bar. There isn't a fixed ceiling — Jevons teaches an important lesson here.

Also, as you mentioned, the path is long, but it will be a great achievement for policymaking and law.

Andy Hall

Yeah, let's hope! We need more people working on it.