Anthony Aguirre on the Future of Life Institute
Why this matters
Auto-discovered candidate. Editorial positioning to be finalized.
Summary
Auto-discovered from AXRP. Editorial summary pending review.
Perspective map
The amber marker shows the most Risk-forward score. The white marker shows the most Opportunity-forward score. The black marker shows the median perspective for this library item. Tap the band, a marker, or the track to open the transcript there.
An explanation of the Perspective Map framework can be found here.
Episode arc by segment
Early → late · height = spectrum position · colour = band
Risk-forward · Mixed · Opportunity-forward
Each bar is tinted by where its score sits on the same strip as above (amber → cyan midpoint → white). Same lexicon as the headline. Bars are evenly spaced in transcript order (not clock time).
Across 20 full-transcript segments: median -7 · mean -8 · spread -23 to 0 (p10–p90: -20 to 0) · 15% risk-forward, 85% mixed, 0% opportunity-forward slices.
Mixed leaning, primarily in the Governance lens. Evidence mode: interview. Confidence: medium.
- Emphasizes safety
- Emphasizes AI safety
- Full transcript scored in 20 sequential slices (median slice -7).
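For readers curious how the slice statistics above might be produced, here is a minimal sketch in Python. The scores, band thresholds, and field names below are illustrative placeholders only, assumed for the example; they are not the episode's actual slice data or the site's real scoring pipeline.

```python
# Minimal sketch: aggregating per-slice perspective scores into headline stats.
# All values below are illustrative assumptions, not real data.
from statistics import median, mean, quantiles

# Hypothetical scores for 20 sequential transcript slices
# (more negative = more risk-forward, more positive = more opportunity-forward).
slice_scores = [-23, -20, -18, -15, -12, -10, -9, -8, -8, -7,
                -7, -6, -5, -4, -3, -2, -1, 0, 0, 0]

def band(score, risk_cut=-15, opp_cut=15):
    """Classify a slice into a band; the cut-offs here are guesses for illustration."""
    if score <= risk_cut:
        return "risk-forward"
    if score >= opp_cut:
        return "opportunity-forward"
    return "mixed"

bands = [band(s) for s in slice_scores]
deciles = quantiles(slice_scores, n=10)  # deciles[0] ~ p10, deciles[-1] ~ p90

summary = {
    "median": median(slice_scores),
    "mean": round(mean(slice_scores), 1),
    "spread": (min(slice_scores), max(slice_scores)),
    "p10_p90": (deciles[0], deciles[-1]),
    "pct_by_band": {b: round(100 * bands.count(b) / len(bands)) for b in set(bands)},
}
print(summary)
```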
Editor note
Auto-ingested from daily feed check. Review for editorial curation.
Episode transcript
YouTube captions (auto or uploaded) · video GkMkoZvYshk · stored Apr 2, 2026 · 569 caption segments
Captions are an imperfect primary: they can mis-hear names and technical terms. Use them alongside the audio and publisher materials when verifying claims.
No editorial assessment file yet. Add content/resources/transcript-assessments/anthony-aguirre-on-the-future-of-life-institute.json when you have a listen-based summary.
Interviewer: Hello, everyone. This is one of a series of short interviews that I've been conducting at the Bay Area Alignment Workshop, which is run by FAR.AI. Links to what we're discussing are, as usual, in the description. A transcript is, as usual, available at axrp.net, and as usual, if you want to support the podcast, you can do so at patreon.com/axrpodcast. Well, let's continue to the interview.

All right, well, I'm currently speaking with Anthony Aguirre.

Anthony Aguirre: Nice to be here.

Interviewer: For those who aren't as familiar with you and what you do, can you say a little bit about yourself?

Aguirre: Yeah. So I'm a theoretical physicist by original training. I still hold a professorship at UC Santa Cruz, and I've studied a whole range of theoretical physics things: the early universe, gravity, foundations of quantum mechanics, all sorts of fun stuff. But about 10 years ago or so, I, along with another colleague, Max Tegmark, and a few others, started to think about the long-term future, and in particular transformative technologies and the role that they would be playing, and decided to start up a nonprofit thinking about transformative technologies and what could be done to steer them in maybe a more favorable direction. At that point AI was just starting to work a little bit, and things like AGI were clearly decades away, if not centuries in some people's opinion, so this was a more theoretical and pleasantly abstract set of things to think about. But nonetheless we felt like it was coming, and we started a bunch of activities, and that was the birth of the Future of Life Institute. Now, 10 years later, I'm executive director of the Future of Life Institute pretty much full-time. So I've got a split between the Future of Life Institute, my academic appointment and my students there, and a couple of other hats that I wear, including Metaculus.

Interviewer: How does Metaculus fit into all of that? I imagine once you're both a professor and helping run FLI, I'm surprised you had the time to start this other thing.

Aguirre: Well, I started Metaculus at the same time as FLI, so I didn't know what I was getting into at that point.

Interviewer: Classic way people do things.

Aguirre: Yes. So Metaculus actually started with FLI for a reason: in thinking about the future and how we could steer it in more positive directions, I felt like, well, then we have to know a little bit about what might happen in the future, and conditionally, if we did X and Y, what might that do? And also, how do we build an ability to make predictions, and how do I identify people that are really good at making predictions and at modeling the world? So it started up, at some level, to be a thing that would be of service to the Future of Life Institute, but also to everybody else who's thinking about the future and wants to do better planning and better decision-making.

Interviewer: How useful to FLI has Metaculus been?

Aguirre: Surprisingly little, I have found, actually. I think the big lesson that I've taken from Metaculus is this: once you've really carefully defined a question, like "will X happen by date Y?", and you know exactly what X is, and you've defined it so that everybody agrees what X is, and so on, then whether that thing is 70% likely or 80% likely, nobody cares. It maybe matters if you're doing something very quantitative; if you were working for a hedge fund or something, you care about 70 versus 80. But at some level nobody cares, and whether it's 70 or 80 doesn't really change what you do in almost any decision. But the process of getting to the point of knowing exactly what X is and having a well-defined question, keeping track of who makes good predictions and who doesn't, thinking about what it is that I want to decide, or what decision I want to take, and what I need to know in order to make that decision, and turning that into very concrete operational questions that can happen or not happen: those things are really valuable. So I think the interesting pieces, some to FLI and some elsewhere, haven't been so much the outputs, the actual predictions, but going through the process of making the questions: if this is what we really want to understand, how do we decompose that into well-defined things that can go on Metaculus, almost independent of how those things actually turn out?

Interviewer: So you say 70 versus 80 doesn't really matter. I imagine 10% versus 90% might matter. Does it ever come up that a thing turns out to be 90% and you thought it was 10, or vice versa?

Aguirre: Very occasionally. I would say it's rare. The Ukraine war was one where Metaculus had a much higher prediction than I think most people had, was right, and was useful to some people. I think there were some people who actually moved out of Ukraine because of the Metaculus prediction; we felt good about that. Not many people take Metaculus that seriously, but I think there were a few. Also, 1% versus 10% is of course a huge difference, which is less appreciated if you're not thinking about probabilities all the time, but it is a huge difference, and I think there are some of those. A lot of it is also that once the reputation accrues, then it gets taken more seriously. So I think a lot of people in the AI community take Metaculus predictions about AGI arrival seriously at some level, because Metaculus does have a track record, they know it's a bunch of people that are highly knowledgeable and thinking really carefully and technically about it, and it's a way of aggregating lots of wisdom from the right sort of people. So I think the fact that there's an AGI question that is at 2030 versus 2035 on Metaculus does make quite a big difference to the sort of things that AI safety is thinking about. Or a probability of 30% versus 70%, say, of human-level AGI by date X is a reasonably big deal. So I think there are some examples where the outcomes or the outputs really matter, but there are very few where I've said, "I think this is probably 90% probable", put it into Metaculus, and had it come back "no, 10%". Usually the numbers are not that surprising.

Interviewer: Fair enough, fair enough. So before we get into the stuff you do at FLI: we're currently at this alignment workshop being run by FAR.AI. How are you finding the workshop?

Aguirre: The workshop has been enjoyable in the way that these workshops are, which is that lots of people that I like and respect and want to talk to are here. It usually is less about the program than about the actual in-person physical gathering and who you can corner in various breaks. So from that perspective it's been great; I've had great conversations.

Interviewer: Awesome. So yeah, let's talk about FLI. You're currently the executive director. At a high level, what's FLI up to these days?

Aguirre: FLI is always evolving. I think we started out primarily as a more academic group: we funded the first technical AI safety research grants back in 2015, and we were much more of a convener. We tried to bring together academics and nonprofit people and industry people and, a little bit, policymakers, and get them talking to each other before any of them were talking to each other.

Interviewer: Yeah, you ran the precursors to these alignment workshops. They were on tropical islands. I went to one of those and it was quite fun.

Aguirre: The Puerto Rico one, probably?

Interviewer: Yeah, yeah.

Aguirre: Well, there were two, but probably the second one. So we did a number of convenings, partly to get technical people together, but also partly social, to get different groups together who weren't talking to each other. Those groups are talking together a lot more now. They're also talking together a lot less than they used to, in the sense that there were lots of heads of labs that you could get into one room and have them smiling and agreeing with each other in a way that doesn't really happen nowadays, now that they're sort of bitter rivals and, at some level, enemies, some of them. So that was a different time. For the last few years, first, we've gotten more focused on AI as opposed to other transformative technologies. We still have a long history in nuclear, and we are thinking about nuclear issues, but more where nuclear intersects AI, or where bio intersects AI, or where weaponry intersects AI. AI is obviously the thing that is looming most quickly in all of our minds, so we've focused on that, I hope and think appropriately. I mean, if it doesn't turn out to be AI, it'll be because there's a quick pandemic first or a nuclear war first, and we hope that doesn't happen, fingers crossed. So in terms of AI, we're doing some work still supporting technical research, a lot more work doing policy research and policy engagement, and some level of advocacy. We participated heavily in the EU AI Act, we participated in a number of sessions in the US Congress, we gave testimony on Capitol Hill and were part of the Schumer hearings, so we have basically had a presence of really trying to bring expertise about AI safety to policymakers in various forms. So there's that policy expert and informative role. We've also taken on some level of a policy advocacy role, I would say especially starting with the open letter of early 2023. That was a bit more of a strong position that we took. We had taken positions before: we've taken positions on autonomous weapons, that we should basically not be developing them, and we've taken positions on nuclear weapons. But those were, I would say, less directly contrary to the aims of the major companies doing things than the pause letter was. So I think since the pause letter, we've taken a little bit more of an advocacy role, with a point of view about AI and AI risk, and pushed a little bit more for the things that we feel are needed given the level of risk.

Interviewer: So just at a high level, what do you think is needed? What policies do you advocate for?

Aguirre: Broadly speaking, in what we've written formally and what you can find on our website, we believe that there should ultimately be a mandatory licensing system for AI systems at the frontier level, and that it should require more and more stringent safety guarantees as those systems get more and more powerful, and essentially zero for very lightweight or very narrow systems. We think that at the moment AGI is being developed in a rather unsafe race dynamic by large, un-overseen companies, and that this is not good. So we still call for a pause: we still think we should pause AGI development now, until we have a much better system in place for managing the risks of it. And we also think that you should not develop superintelligence until or unless you can make really strong guarantees, like provable safety guarantees, about the system, which are not possible to make now. So we should not be developing superintelligence anytime soon, until safety techniques are radically more capable than they are now. So that's the baseline. We are also quite concerned about issues of concentration of power, whether that power is concentrated in a handful of AI companies that have huge economic dominance, or in a government that uses AI to suppress dissent and surveil and all of those things, or in AI itself, in the sense of not necessarily one AI taking over everything, but lots of deferring of decisions to a network of AI systems, so that human decision-making and agency is largely lost.

Interviewer: Talking about the licensing thing first: you mentioned some sort of licensing scheme where for larger models, or not necessarily larger but more powerful models, you have to do things to get a license. What are you imagining being the requirements for a license?

Aguirre: Again, I think it should ratchet up as the systems become more powerful and potentially threatening. But for the systems we have now, I would say it's sort of the same thing that is happening: there should be evaluations of their capabilities and their risks, but they should be either done or checked by a disinterested third party, and there should be a stamp of approval on that before the system is actually deployed. The evaluations are often done before the system is deployed now, but if they found anything dangerous, it's unclear what would happen at this point. So they should actually be required, they should actually have teeth, and they should involve an independent third party. And this is more or less what we do with every other product that might cause danger in our society. We do it with airplanes, we do it with cars, we do it with drugs, everything else: you develop the thing, you make a case that the thing is safe at some reasonable and quantitatively defined level, and then you roll it out to the consumer. So this is not some crazy new thing. It just feels weird because we've had an unbelievably unregulated tech space, and I think in many ways that has been fine and in some ways it's been problematic. But now that we're getting to systems that are actually potentially quite dangerous, we need to be adopting some of the techniques we've developed for other actually dangerous systems.

Interviewer: You mentioned that in the limit of superintelligence, you would want some sort of guaranteed, provable guarantees of safety. To me this is reminiscent of Yoshua Bengio's line of work, I guess, on guaranteed safe AI. There was some position paper.

Aguirre: Yes.

Interviewer: I'm afraid I didn't actually read it, but have you been in contact with them, or...?

Aguirre: Sure, yeah. So there's a whole program that Max Tegmark has been pushing, that Yoshua's been involved in, that Steve Omohundro has been pushing; davidad has his own version of this. So I think there are a few very ambitious programs that ask: what would it look like to actually have safety when there's a system that is potentially much more intelligent than us? That is not a problem that obviously even has a solution. At first blush, when you ask, "how do I control, or ensure is safe for me, a system far more intelligent than me?", the most obvious answer is: you don't. If you have 10 kindergarteners and they bring in a corporate CEO to help them solve problems for them, there's no sense in which those kindergarteners are going to be controlling that CEO. The CEO is just more knowledgeable, more worldly, more persuasive, more effective at everything than the 10 kindergarteners. And so I think that's a problem that is unsolvable for those kindergarteners. Superintelligence may be an unsolvable problem in that same way, or it might not; I think we don't know. So I think the requirement should be very high that we really believe the problem is solved, and that we can really reassure ourselves that the problem is solved, before we go ahead with that, because it's not obvious that the problem is solvable. And I think we're doing something rather unwise by going ahead and assuming that there will be a solution to superintelligence alignment or control in time, when it's not at all obvious that it's even possible in principle, let alone that we know how to do it in practice.

Interviewer: I'm wondering, and maybe this relates to FLI's role as a research funder: you support a bunch of PhD fellowships, and I guess you also run grant rounds. Is there a particular focus in the kinds of work that you want to fund, or is it more broad-based?

Aguirre: It's pretty broad. We decide on different things as priorities and then try to put both institutional resources and fiscal resources behind them. So the AI safety fellowships are part of the idea that we need to field-build in technical AI safety; lots of people agree with that, and that's our contribution to it. We just ran a request for proposals on concentration of power, because we feel like that's something that lots of people talk about and worry about but aren't really doing much about, certainly not at the research level. So there we wanted to look for things that aren't necessarily going to happen by themselves and really do need independent or philanthropic dollars; that's an example of that. Others could be more niche technical projects. We're funding things in compute security and governance now, things that probably will come into being on their own, but probably much later than we would like them to. So there the idea is to accelerate the timeline for things that everybody agrees are good. Everybody agrees that security is good, to first approximation, but everybody also agrees that our security, in most cases, isn't what it should be, so we're trying to make that better for high-powered AI. So it's a mixture of different things, where we decide that there's something we see as underfunded but important, and we design some sort of program to address it.

Interviewer: One thing you said is that you support things fiscally as well as in some other way; I forget what exactly you said.

Aguirre: Yeah, so we give away grants, but we also do joint programs with others. The Future of Life Institute also has a sister organization, the Future of Life Foundation, whose role is to incubate new organizations. That might look like seed funding, but it also looks like finding leadership, designing what the institution does, providing operational support in the early days, and things like that. Or we might have an organization that we're collaborating with, and we might help them out with communications, or coordinate with them on media production, or whatever. So that's the sort of thing: some of our institutional resources going to help some other project that other people are doing.

Interviewer: Gotcha, makes sense. And the Future of Life Foundation, am I right that that's somewhat of a new organization?

Aguirre: That's pretty new. It now has a staff of two; it just recently doubled. So it's just getting started, really, but it has sort of fully launched one thing, which is CARMA. Don't ask me what the acronym means, because I will get it wrong, but it is a technical AI policy shop that Richard Mallah is leading. It has also contributed to, and taken over, a project that was originally funded out of the Future of Life Institute, now called Wise Ancestors, which is looking at non-human extinction and what we can do about that: can we back up some of the genetic data to the hard drive, as well as prevent some things from going extinct? So that's a little bit of a different angle on extinction and x-risk, but one that we found could be interesting and useful. And there are a bunch of things in the hopper, but yeah, it's just getting started.

Interviewer: Gotcha. If there's some founder who may be interested in doing something and is curious about what kinds of organizations you want to kickstart, is there a list on some website that they can look at?

Aguirre: There's not a public list, but I would totally encourage them to get in touch with either myself or Josh Jacobson; fli.org will give them the contact information for that. We're eager to meet people who are excited about founding new organizations, and we'd love to talk with them about what they're thinking, what we're thinking, and whether there's an interesting match there.

Interviewer: Great. Well, thanks very much for chatting with me today.

Aguirre: Yeah, thanks for having me. It's a pleasure.

Interviewer: This episode was edited by Kate Brunotts, and Amber Dawn Ace helped with transcription. The opening and closing themes are by Jack Garratt. Financial support for this episode was provided by the Long-Term Future Fund, along with patrons such as Alexey Malafeev. To read a transcript of the episode, or to learn how to support the podcast yourself, you can visit axrp.net. Finally, if you have any feedback about this podcast, you can email me at feedback@axrp.net.