Imagine a different developmental history for the nuclear bomb. Instead of tightly constrained, utterly secretive projects involving tiny portions of humanity, imagine governments, companies, universities, and even individuals around the world openly competing to be first to detonate a device some thought might ignite the atmosphere. If it managed not to destroy all life, think of the power that ‘courageous’ inventor would have to dominate known reality!
Imagine a huge prize in a global competition for the first individual or group to develop a highly contagious pathogen capable of eradicating humanity. There are virtually no government controls. In fact, if your idea is promising, your government and venture capitalists will line up to fund your lab.
That is close to the situation with AI development today, with one exception: the Manhattan Project, and similar efforts in other countries, were explicitly seeking to develop a new, inconceivably destructive weapon. AI, by comparison, has appeared in the guise of entertaining and largely positive applications, like solving complex protein-folding challenges that have stymied important medical breakthroughs.
AI, like nearly all technologies, but far more so, offers tantalizing benefits while simultaneously harboring the darkest destructive potentials. Developers have shared some of their concerns about the risks of large language models (LLMs), the popular form of AI most people have interacted with, but the hype over their more dazzling outputs drowns out the warnings. Governments have proven as eager as greed-driven investors to be first to command the most powerful AIs. We see no meaningful precautions or controls in the USA, China, Russia, or Europe.
I participated in Daniel Pinchbeck's July series of interactive webinars and discussions investigating AI’s current and emerging capabilities, opportunities, and risks. A number of insider guest speakers covered a variety of perspectives, beginning with the cautiously hopeful and tracking toward the decidedly fatalistic. The more ‘inside’ the speakers were (Silicon Valley and the like), the more alarmed and pessimistic they were.
The emotional trajectory of the group followed that of the writers in last year's prize-winning short film, Writing Doom, about a writers' room starting work on a new season of a popular streaming show. Aside from the showrunner (the team leader), they are initially bored with the project. Then a studio staffer introduces a new addition to the group: a fan-fiction writer who is also a PhD candidate in machine learning.
To get a feel for where Daniel's group landed during yesterday's final scheduled webinar, or simply to understand the conundrum humanity now faces, take 28 minutes to watch the film.
Many in the group continue to press for pro-human, pro-Earth-life outcomes that prevent the worst potentials of the current ungoverned race to create an alien super-intelligence: an inscrutable mind that will have zero capacity to care about humans or other living organisms.
Because of the chilling conclusion of the webinars, Daniel scheduled an ad hoc follow-on discussion for this Sunday. The link is in his recent Substack post. If you are interested in the existential threats posed by the potential near-term advent of artificial general intelligence (AGI), and by its potential quick follow-on, artificial super-intelligence (ASI), consider joining the conversation.
Near-term AI developments have the potential to eclipse all other threats to life and wellbeing combined. It's past time for people to set aside their divisions and cooperate to force politicians and sociopathic corporate officers to prevent AGI development.
Many groups are working to understand and mitigate the risks AI poses, but companies, universities, government agencies, and, we can be certain, criminal enterprises are sprinting to get the first AGI and ASI out of the gate. Regardless of their varied motives, their approaches are alike in their recklessness. We need to increase the focus on lasting human wellbeing immensely. We have to ramp up pressure to limit further development and to isolate advanced AI projects from the internet and from critical infrastructure.
In the next few months, I will write some posts and notes digging into specific issues and ways to engage with relevant decision-makers. In the meantime, please don’t wait for me, or any other sole voice. Please educate yourself, your family and friends, colleagues, and anyone who will listen.
Hey Daniel, I share many of your concerns and I’m especially aligned with your call for a citizen-led movement to oversee AI. We need that. But I think the conversation needs even more nuance, especially from those of us working closely with the technology.
Yes, the major AI labs are racing toward dominance. Many are built on extractive models: trained on unconsented data, accelerating climate harm, prioritizing scale over alignment, and consolidating power under the guise of “safety.” That monopoly of intelligence is real.
But here’s the paradox: pausing development could actually entrench their control. We’ve seen this before, with Musk calling for a freeze just when OpenAI was ahead.
So the path forward can’t just be policy. It’s values, systems, and participation. Not all AI has to follow the same trajectory. There are already alternative models: open, decentralized, community-led, building with and for people rather than replacing them. I’ve seen promising grassroots labs and initiatives across Europe and Africa.
Please join our call on Sunday. Let's build a citizen movement to oversee AI: https://danielpinchbeck.substack.com/p/ai-and-humanitys-future-bc1