
Ben: A host of individuals and organizations — Nick Bostrom, Bill Joy, the Lifeboat Foundation, the Singularity Institute, and the Millennium Project, to name just a few — have recently been raising the issue of the “existential risks” that advanced technologies may pose to the human race. I know you’ve thought about the topic a fair bit as well, both from the standpoint of your own AI work and more broadly. Could you share the broad outlines of your thinking in this regard?

Steve: I don’t like the phrase “existential risk” for several reasons. It presupposes that we are clear about exactly what “existence” we are risking. Today, we have a clear understanding of what it means for an animal to die or a species to go extinct. But as new technologies allow us to change our genomes and our physical structures, it will become much less clear when we have lost something precious. Death and extinction become much more amorphous concepts in the presence of extensive self-modification.

Source article HERE