I need technical alignment depth
Deep technical material on evals, interpretability, and control methods for reducing frontier-system risk.
Library · Path active (12)
A unified map of catalog episodes, Spotlight queue picks, TED talks, and in-site editorials (briefings and maps), with the same shelf and intent filters applied across all of them. Leader profiles and timelines live under Leaders Watch and are also included in site search.
Start Here
Question-led pathways with a tightly curated shortlist from across podcasts, documents, and talks.
AXRP
20 Jan 2025
This conversation examines core safety through Adria Garriga-Alonso on Detecting AI Scheming, surfacing the assumptions, failure paths, and strategic choices that matter most for real-world deployment.
Spectrum trail (transcript)
Med 0 · avg -2 · 25 segs
AXRP
11 Apr 2024
This conversation examines technical alignment through AI Control with Buck Shlegeris and Ryan Greenblatt, surfacing the assumptions, failure paths, and strategic choices that matter most for real-world deployment.
Med -6 · avg -9 · 174 segs
AXRP
1 Mar 2025
This conversation examines technical alignment through David Duvenaud on Sabotage Evaluations and the Post-AGI Future, surfacing the assumptions, failure paths, and strategic choices that matter most for real-world deployment.
Med -9 · avg -7 · 21 segs
AXRP
1 Dec 2024
This conversation examines technical alignment through Evan Hubinger on Model Organisms of Misalignment, surfacing the assumptions, failure paths, and strategic choices that matter most for real-world deployment.
Med -6 · avg -7 · 120 segs
AXRP
31 Mar 2022
Auto-discovered from AXRP. Editorial summary pending review.
Med -6 · avg -8 · 79 segs
80,000 Hours Podcast
2 Oct 2018
This conversation examines technical alignment with Paul Christiano: how OpenAI is developing real solutions to the AI alignment problem, and his vision of humanity progressively handing decision-making over to AI systems. It surfaces the assumptions, failure paths, and strategic choices that matter most for real-world deployment.
Med 0 · avg -4 · 283 segs
AXRP
2 May 2023
This conversation examines technical alignment through Interpretability for Engineers with Stephen Casper, surfacing the assumptions, failure paths, and strategic choices that matter most for real-world deployment.
Med 0 · avg -4 · 108 segs
AXRP
4 Feb 2023
This conversation examines technical alignment through Mechanistic Interpretability with Neel Nanda, surfacing the assumptions, failure paths, and strategic choices that matter most for real-world deployment.
Med 0 · avg -1 · 182 segs
AXRP
12 Apr 2023
This conversation examines technical alignment through Reform AI Alignment with Scott Aaronson, surfacing the assumptions, failure paths, and strategic choices that matter most for real-world deployment.
Med 0 · avg -5 · 120 segs
AXRP
27 Jul 2023
This conversation examines technical alignment through Superalignment with Jan Leike, surfacing the assumptions, failure paths, and strategic choices that matter most for real-world deployment.
Med -10 · avg -7 · 112 segs
Future of Life Institute Podcast
6 Feb 2026
This conversation examines technical alignment through Can AI Do Our Alignment Homework? (with Ryan Kidd), surfacing the assumptions, failure paths, and strategic choices that matter most for real-world deployment.
Med -6 · avg -6 · 121 segs