6 Comments
The Human Playbook

Hey Daniel, I share many of your concerns and I’m especially aligned with your call for a citizen-led movement to oversee AI. We need that. But I think the conversation needs even more nuance, especially from those of us working closely with the technology.

Yes, the major AI labs are racing toward dominance. Many are built on extractive models: trained on unconsented data, accelerating climate harm, prioritizing scale over alignment, and consolidating power under the guise of “safety.” That monopoly of intelligence is real.

But here’s the paradox: pausing development could actually entrench their control. We’ve seen this before with Musk calling for a freeze just when OpenAI was ahead.

So the path forward can’t just be policy. It’s values, systems, and participation. Not all AI has to follow the same trajectory. There are already alternative models: open, decentralized, community-led, building with and for people rather than to replace them. I’ve seen promising grassroots labs and initiatives across Europe and Africa.

M. Cameron Harris

Thanks, Gil. This is Mark. I agree the conversation around AI development and implementation needs more nuance. To flesh out my own awareness of other considerations, I'm reading Reid Hoffman's 'Superagency: What could possibly go right with our AI future' (2025). I want to see how an articulate proponent of unfettered development justifies that stance.

I viscerally feel the bind Allenby and Sarewitz wrote about in 'The Techno-Human Condition' (2011, MIT), a great read for gaining appreciation of the difficulties of sense-making and societal responses in truly complex, dynamic risk-and-opportunity spaces. To a significant degree, we're trying to identify appropriate individual, community, team, enterprise, and government postures toward rapidly changing technologies in minimally comprehensible contexts with unclear interaction dynamics. And this comes at a time when multiple risk factors are trending in troubling directions, with their own complex interactions.

The much-maligned 'precautionary principle' strikes me as obviously applicable with regard to AI development (thus the loose comparisons with developing nuclear weapons and pathogens). I understand that principle, roughly, as: if the severity of an event's possible effects is unacceptable, it's reasonable to take action to preclude that event, even when its probability is low and the associated technologies hold great promise.

Law and policy currently strike no balance between sufficiently limiting the risks associated with AI, on one side, and the potential benefits to the public, over the near and long terms, from openness to innovation, on the other. Public safety and wellbeing should be a guiding value and metric.

I don't know where that balance is, but I sense current development does not give meaningful consideration to the values that make for desirable existence for most humans.

The Human Playbook

Thanks, Cameron. I’m really glad you brought up The Techno-Human Condition; it’s an excellent source for the challenge we’re facing. On Hoffman, I’m actually with him in spirit. I do believe in the idea of AI enhancing human agency. But I think he is overestimating the system’s ability to deliver on that promise. The current ecosystem is accelerating toward extraction. Opportunities aren’t bubbling up; instead, capital seems to flow toward hype-driven bets or scammy exit plays. That doesn’t create his concept of “superagency”, it creates more illusion.

I also want to offer a gentle challenge on the catastrophic comparisons (e.g. nuclear weapons). From where I sit, working closely with real-world AI applications, the gap between the promise and actual results is massive. The tech isn’t trivial, and we shouldn’t ignore it, but it is far, far away from being a bomb. That framing has been a tactic to push capital allocation and pump the market in a reckless way. There is a very small group of actors positioning themselves as both the disruptors and the saviors, and we should be careful about disseminating their propaganda (lots of incredible research has come from MIT and other thinkers in the field; see also Apple’s recent paper on LLMs and their limitations).

Meanwhile, those of us working day to day see much more modest outcomes, especially in enterprise use cases, where adoption is being pushed hard … (that’s the only way to make money in the space, and it’s where the narrative of human replacement works, since it promises enterprises massive profits). Results have been so mild that that’s part of why many labs have stopped talking so loudly about “AGI”. The framing created expectations they haven’t been able to meet, and public scrutiny around that gap is growing. So now we see a quieter pivot toward more abstract “safety” language or vague “alignment” missions, while the real use cases remain fairly narrow (writing, code, productivity tools). Anyhow, that’s just my take.

Daniel Pinchbeck

Please join our call on Sunday - let's build a citizen movement to oversee AI: https://danielpinchbeck.substack.com/p/ai-and-humanitys-future-bc1

Margo

Pope Leo did call AI a threat to humanity, but he’s about the only one.

M. Cameron Harris

There are others, but their warnings are like whispers in a rock concert.
