Superintelligence should scare us only insofar as it grants superpowers. Protecting against specific harms of specific plausible powers may be our best strategy for preventing catastrophes.
For much of the AI safety community, the central question has been "when will it happen?!" That question is futile: we don't have a coherent description of what "it" is, much less of how "it" would come about. Fortunately, a prediction wouldn't be useful anyway: an AI apocalypse is possible, so we should try to avert it regardless of the timeline.
You can support the podcast and get episodes a week early by backing the Patreon:
If you like the show, consider buying me a coffee:
Original music by Kevin MacLeod.
This podcast is released under a Creative Commons Attribution-NonCommercial 4.0 International License.