Can We Stop the AI Apocalypse? | Eliezer Yudkowsky
Artificial Intelligence (AI) researcher Eliezer Yudkowsky makes the case for why we should view AI as an existential threat to humanity. Rep. Crenshaw covers the basics of AI and how the new AI model, GPT-4, represents a revolutionary leap forward in the technology. Eliezer outlines the most likely scenarios if AI becomes self-aware and unconstrained – from rogue programs that blackmail targets to self-replicating nanorobots. They discuss building global coalitions to rein in AI development and how China views AI. And they explore first steps Congress could take to limit AI's capacity for harm while still enabling its promising advances in research and development.
Eliezer Yudkowsky is a co-founder of and research fellow at the Machine Intelligence Research Institute, a private research nonprofit based in Berkeley, California. Follow him on Twitter at @ESYudkowsky.