DETAILS
|
Nate Soares discusses the scramble to create superhuman AI that has us on a path to extinction. But it's not too late to change course.
In 2023, hundreds of AI luminaries signed an open letter warning that artificial intelligence poses a serious risk of human extinction. Since then, the AI race has only intensified. Companies & countries are rushing to build machines that will be smarter than any person. And the world is devastatingly unprepared for what would come next.
For decades, two signatories of that letter, Eliezer Yudkowsky & Nate Soares, have studied how smarter-than-human intelligences will think, behave, & pursue their objectives. Their research says that sufficiently smart AIs will develop goals of their own that put them in conflict with us, and that if it comes to conflict, an artificial superintelligence would crush us. The contest wouldn't even be close.
How could a machine superintelligence wipe out our entire species? Why would it want to? Would it want anything at all? Join Nate Soares, author of the new book "If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All", for a walk through the theory & the evidence, as he presents one possible extinction scenario & explains what it would take for humanity to survive.