
This is an interview with Aza Raskin, co-founder of the Center for Humane Technology and co-founder of the Earth Species Project. His work has focused on the societal impacts of technology systems and how incentives shape large-scale human behavior.
In this episode, Aza frames AGI governance as part of a broader pattern: when technology confers new forms of power, it creates races to exploit that power, and without coordination those races tend toward harmful outcomes. The implications of AI, in his view, extend beyond technical risk into the manipulation of language, relationships, and the very substrate of human coordination.
This episode refers to the following essays and resources:
-- Craig Mundie – Co-Evolution with AI: Industry First, Regulators Later (AGI Governance, Episode 8): https://danfaggella.com/mundie1/
Listen to this episode on The Trajectory Podcast: https://podcasts.apple.com/us/podcast/the-trajectory/id1739255954
Watch the full episode on YouTube: https://youtu.be/UF9geTZpG5A
See the full article from this episode: https://danfaggella.com/raskin1
...
About The Trajectory:
AGI and man-machine merger are going to radically expand the process of life beyond humanity -- so how can we ensure a good trajectory for future life?
From Yoshua Bengio to Nick Bostrom, from Michael Levin to Peter Singer, we discuss how to positively influence the trajectory of posthuman life with the greatest minds in AI, biology, philosophy, and policy.
Ask questions of our speakers in our live Philosophy Circle calls:
https://bit.ly/PhilosophyCircle
Stay in touch:
-- Newsletter: bit.ly/TrajectoryTw
-- X: x.com/danfaggella
-- Blog: danfaggella.com/trajectory
-- YouTube: youtube.com/@trajectoryai

The Trajectory