I'm new to AI risk
Start here if you want clear, non-hype framing on stakes, timelines, and what responsible action looks like.
Library
A unified map of catalog episodes, Spotlight queue picks, TED talks, and in-site editorials (briefings and maps), with the same shelf and intent filters across all of them. Leader profiles and timelines live under Leaders Watch and are also included in site search.
What do you want to focus on?
Route by intent for a guided shortlist, or retrieve directly by keyword and theme in one place.
Start Here
Question-led pathways with a tightly curated shortlist from across podcasts, documents, and talks.
80,000 Hours Podcast
5 May 2023
A core-safety conversation with Tom Davidson on how quickly AI could transform the world, surfacing the assumptions, failure paths, and strategic choices that matter most for real-world deployment.
Spectrum trail (transcript): med 0 · avg -1 · 163 segs
AXRP
4 Oct 2024
A core-safety conversation with Jaime Sevilla on Forecasting AI, surfacing the assumptions, failure paths, and strategic choices that matter most for real-world deployment.
Spectrum trail (transcript): med 0 · avg 2 · 95 segs
Diary of a CEO
3 Dec 2025
Preparedness and timeline risk framed as an immediate operating concern for mainstream audiences.
Future of Life Institute Podcast
7 Jan 2026
A core-safety conversation on How to Avoid Two AI Catastrophes: Domination and Chaos (with Nora Ammann), surfacing the assumptions, failure paths, and strategic choices that matter most for real-world deployment.
Spectrum trail (transcript): med 0 · avg -3 · 85 segs
Future of Life Institute Podcast
1 Sep 2025
A core-safety conversation on What Markets Tell Us About AI Timelines (with Basil Halperin), surfacing the assumptions, failure paths, and strategic choices that matter most for real-world deployment.
Spectrum trail (transcript): med 0 · avg 1 · 91 segs
Future of Life Institute Podcast
12 Dec 2025
A core-safety conversation on Why the AI Race Undermines Safety (with Steven Adler), surfacing the assumptions, failure paths, and strategic choices that matter most for real-world deployment.
Spectrum trail (transcript): med 0 · avg -4 · 89 segs
Lex Fridman Podcast
30 Mar 2023
A high-stakes existential-risk interview focused on loss-of-control arguments and emergency policy posture.
Spectrum trail (transcript): med 0 · avg -3 · 161 segs
Lex Fridman Podcast
2 Jun 2024
A high-risk framing interview centered on existential failure modes, loss-of-control arguments, and mitigation limits.
Spectrum trail (transcript): med -10 · avg -7 · 107 segs
Lex Fridman Podcast
1 Feb 2026
A long-form strategic review of frontier-model progress, open-vs-closed dynamics, China-US competition, and AGI timelines.
Spectrum trail (transcript): med 0 · avg 0 · 254 segs
Lex Fridman Podcast
9 Dec 2018
A foundational interview on alignment, value uncertainty, and control risks in advanced AI systems.
Spectrum trail (transcript): med 0 · avg -4 · 64 segs
TED Talks
Date pending
Tegmark translates AI risk into decision architecture: who sets objectives, who governs deployment, and how societies keep agency as systems become more capable. He separates speculative myths from immediate governance design choices.
TED Talks
Date pending
Harris presents frontier AI as a collective governance decision rather than a technical inevitability. The talk links social-media deployment failures to current AI rollout dynamics, arguing that capability acceleration without institutional counterweights produces predictable public harm.