
Roman Yampolskiy: Dangers of Superintelligent AI | Lex Fridman Podcast #431
Roman Yampolskiy discusses multiple existential risks from AGI, including ikigai risk, in which superintelligent AI could render human existence meaningless.

Roman Yampolskiy is an AI safety researcher and computer scientist at the University of Louisville who specializes in existential risks from artificial intelligence. He is the author of the book AI: Unexplainable, Unpredictable, Uncontrollable, which examines the challenges and dangers of superintelligent AI systems.