Library

The AI Safety Map

A unified map of catalog episodes, Spotlight queue picks, TED talks, and in-site editorials (briefings and maps), with the same shelf and intent filters applied across all of them. Leader profiles and timelines live under Leaders Watch (use the pill below) and are also included in site search.

882 library items · Perspective Map framework · Leaders Watch · 5 guided shelves · Sorted A-Z by source title
Filtered view active: I'm new to AI risk · Return to full Library

What do you want to focus on?

Route by intent for a guided shortlist, or retrieve directly by keyword and theme in one place.


Focus matched: I'm new to AI risk


Start Here

Question-led pathways with a tightly curated shortlist from across podcasts, documents, and talks.

Civilisational risk and strategy

80,000 Hours Podcast

5 May 2023

Tom Davidson on how quickly AI could transform the world

A conversation with Tom Davidson on how quickly AI could transform the world, surfacing the assumptions, failure paths, and strategic choices that matter most for real-world deployment.

Spectrum trail (transcript): Med 0 · avg -1 · 163 segs

ai-safety · 80000-hours · core-safety
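The per-card spectrum stats (median, average, and segment count over a transcript) can be reproduced with a short sketch. The per-segment scores, the field names, and the assumed -10 (risk-forward) to +10 (opportunity) scale are illustrative assumptions, not documented by the site.

```python
from statistics import mean, median

# Hypothetical per-segment sentiment scores for one transcript,
# on an assumed -10 (risk-forward) .. +10 (opportunity) scale.
segments = [-3, 0, 1, -2, 0, 4, -1]

stats = {
    "med": median(segments),       # median segment score
    "avg": round(mean(segments)),  # average, rounded as displayed
    "segs": len(segments),         # number of transcript segments
}
print(f"Med {stats['med']} · avg {stats['avg']} · {stats['segs']} segs")
```

An odd segment count keeps the median an integer, matching the integer values shown on the cards.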

AXRP

4 Oct 2024

Jaime Sevilla on Forecasting AI

A conversation with Jaime Sevilla on forecasting AI progress, surfacing the assumptions, failure paths, and strategic choices that matter most for real-world deployment.

Spectrum trail (transcript): Med 0 · avg 2 · 95 segs

ai-safety · timelines · axrp

Diary of a CEO

3 Dec 2025

You Are Not Prepared For 2027

Preparedness and timeline risk framed as an immediate operating concern for mainstream audiences.

Spectrum: Risk-forward · Mixed · Opportunity
Spotlight · ai-safety · diary-of-a-ceo · mainstream-bridge

Future of Life Institute Podcast

7 Jan 2026

How to Avoid Two AI Catastrophes: Domination and Chaos (with Nora Ammann)

A conversation with Nora Ammann on avoiding two AI catastrophes, domination and chaos, surfacing the assumptions, failure paths, and strategic choices that matter most for real-world deployment.

Spectrum trail (transcript): Med 0 · avg -3 · 85 segs

Signal Room · Featured · ai-safety · fli · core-safety

Future of Life Institute Podcast

1 Sep 2025

What Markets Tell Us About AI Timelines (with Basil Halperin)

A conversation with Basil Halperin on what markets tell us about AI timelines, surfacing the assumptions, failure paths, and strategic choices that matter most for real-world deployment.

Spectrum trail (transcript): Med 0 · avg 1 · 91 segs

ai-safety · timelines · fli

Future of Life Institute Podcast

12 Dec 2025

Why the AI Race Undermines Safety (with Steven Adler)

A conversation with Steven Adler on why the AI race undermines safety, surfacing the assumptions, failure paths, and strategic choices that matter most for real-world deployment.

Spectrum trail (transcript): Med 0 · avg -4 · 89 segs

Signal Room · Featured · ai-safety · fli · core-safety

Lex Fridman Podcast

30 Mar 2023

Eliezer Yudkowsky on Dangers of AI and End of Civilization (Lex Fridman)

A high-stakes existential-risk interview focused on loss-of-control arguments and emergency policy posture.

Spectrum trail (transcript): Med 0 · avg -3 · 161 segs

Featured · ai-safety · lex-fridman · ai-risk

Lex Fridman Podcast

2 Jun 2024

Roman Yampolskiy on Dangers of Superintelligent AI (Lex Fridman)

A high-risk framing interview centered on existential failure modes, loss-of-control arguments, and mitigation limits.

Spectrum trail (transcript): Med -10 · avg -7 · 107 segs

Featured · ai-safety · lex-fridman · ai-risk

Lex Fridman Podcast

1 Feb 2026

State of AI in 2026: LLMs, Scaling Laws, Agents, and AGI (Lex Fridman)

A long-form strategic review of frontier-model progress, open-vs-closed dynamics, China-US competition, and AGI timelines.

Spectrum trail (transcript): Med 0 · avg 0 · 254 segs

Featured · ai-safety · lex-fridman · timelines

Lex Fridman Podcast

9 Dec 2018

Stuart Russell on the Long-Term Future of AI (Lex Fridman)

A foundational interview on alignment, value uncertainty, and control risks in advanced AI systems.

Spectrum trail (transcript): Med 0 · avg -4 · 64 segs

Featured · ai-safety · lex-fridman · alignment

TED Talks

Date pending

Max Tegmark — How to get empowered, not overpowered, by AI

Tegmark translates AI risk into decision architecture: who sets objectives, who governs deployment, and how societies keep agency as systems become more capable. He separates speculative myths from immediate governance design choices.

Spectrum: Risk-forward · Mixed · Opportunity
Featured · TED · ai-risk · governance · policy

TED Talks

Date pending

Tristan Harris — Why AI is our ultimate test and greatest invitation

Harris presents frontier AI as a collective governance decision rather than a technical inevitability. The talk links social-media deployment failures to current AI rollout dynamics, arguing that capability acceleration without institutional counterweights produces predictable public harm.

Spectrum: Risk-forward · Mixed · Opportunity
Featured · TED · ai-risk · civilisational-risk · governance