Future of Life Institute Podcast · Civilisational risk and strategy · Featured pick

Lennart Heim on Compute Governance

Why this matters

Governance capacity is now part of the technical safety stack; this episode helps translate risk analysis into policy that can actually be implemented.

Summary

This conversation examines AI governance through the lens of compute, surfacing the assumptions, failure paths, and strategic choices that matter most for real-world deployment.

Perspective map

Mixed · Governance · Medium confidence · Transcript-informed

The amber marker shows the most Risk-forward score. The white marker shows the most Opportunity-forward score. The black marker shows the median perspective for this library item. Tap the band, a marker, or the track to open the transcript there.

An explanation of the Perspective Map framework can be found here.

Episode arc by segment

Early → late · height = spectrum position · colour = band

Risk-forward · Mixed · Opportunity-forward

Each bar is tinted by where its score sits on the same strip as above (amber → cyan midpoint → white). Same lexicon as the headline. Bars are evenly spaced in transcript order (not clock time).
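The tinting described above can be sketched as a simple piecewise linear interpolation. This is an illustrative sketch, not the site's actual rendering code: the specific RGB values for amber, cyan, and white and the −100…+100 score scale are assumptions.

```python
def lerp(a, b, t):
    """Linearly interpolate between two RGB tuples, 0 <= t <= 1."""
    return tuple(round(x + (y - x) * t) for x, y in zip(a, b))

# Assumed palette and score range; the real site may use different values.
AMBER = (255, 191, 0)    # most risk-forward
CYAN = (0, 183, 194)     # mixed midpoint
WHITE = (255, 255, 255)  # most opportunity-forward

def score_to_rgb(score, lo=-100, hi=100):
    """Map a slice score to a bar colour on the amber -> cyan -> white strip."""
    score = max(lo, min(hi, score))  # clamp out-of-range scores
    if score <= 0:
        return lerp(AMBER, CYAN, (score - lo) / (0 - lo))
    return lerp(CYAN, WHITE, score / hi)
```

For example, `score_to_rgb(-100)` yields pure amber, `score_to_rgb(0)` the cyan midpoint, and `score_to_rgb(100)` white; intermediate scores blend between the nearest two anchors.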

Start → End

Across 55 full-transcript segments: median 0 · mean -4 · spread -165 (p10–p90 -100) · 0% risk-forward, 100% mixed, 0% opportunity-forward slices.

Slice bands
55 slices · p10–p90 -100

Mixed leaning, primarily in the Governance lens. Evidence mode: interview. Confidence: medium.

  • Emphasizes governance
  • Emphasizes policy
  • Full transcript scored in 55 sequential slices (median slice 0).
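The slice statistics quoted above (median, mean, p10–p90, band percentages) can be reproduced with a short sketch. The score list and the ±33 band cutoffs below are hypothetical; the page does not state how it separates risk-forward from mixed slices, and the percentile method here is a simple nearest-rank approximation.

```python
import statistics

def summarize(scores, band=33):
    """Summarize per-slice scores the way the slice-band readout does.

    `band` is an assumed cutoff: scores at or below -band count as
    risk-forward, at or above +band as opportunity-forward, else mixed.
    """
    s = sorted(scores)
    p10 = s[max(0, round(0.10 * (len(s) - 1)))]  # nearest-rank approximation
    p90 = s[min(len(s) - 1, round(0.90 * (len(s) - 1)))]
    return {
        "median": statistics.median(s),
        "mean": statistics.mean(s),
        "p10": p10,
        "p90": p90,
        "risk": sum(x <= -band for x in s),
        "mixed": sum(-band < x < band for x in s),
        "opportunity": sum(x >= band for x in s),
    }

# Hypothetical slice scores, not the episode's real data.
r = summarize([-50, -10, 0, 10, 50])
```

With the hypothetical scores above this reports a median and mean of 0 with one risk-forward, three mixed, and one opportunity-forward slice.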

Editor note

Anchor episode for the AI Safety Map: high signal, durable framing, and immediate relevance to leadership decisions.

ai-safety · governance · fli · policy

Play on sAIfe Hands

Episode transcript

YouTube captions (auto or uploaded) · video iCxJUDDvq94 · stored Apr 2, 2026 · 1,571 caption segments

Captions are an imperfect primary: they can mis-hear names and technical terms. Use them alongside the audio and publisher materials when verifying claims.

No editorial assessment file yet. Add content/resources/transcript-assessments/lennart-heim-on-compute-governance.json when you have a listen-based summary.

Host: Welcome to the Future of Life Institute Podcast. I'm here with Lennart Heim. Lennart, could you introduce yourself?

Heim: Sure, thanks for having me. My name is Lennart Heim. I'm a researcher at the Centre for the Governance of AI, GovAI for short, and I'm working on a research stream we call compute governance. I'm asking questions like: where and why is compute a particularly promising node for AI governance? Can I do something to computational infrastructure which might allow us to achieve more beneficial outcomes? For example, looking at hardware mechanisms which could support such a regime, or asking whether we can use compute monitoring to get more responsible actors out there in the AI world.

Host: To start with, it's important to specify exactly what you're worried about here. So why, in general, should we worry about AI? What's your preferred framing for thinking about AI risk?

Heim: My preferred way is grouping it into three categories: misuse risk, accident risk, and structural risk. Misuse is if somebody uses GPT-4 to send a bunch of phishing mails. Accident risk is, for example, a self-driving car crashing somewhere, or scenarios where AI systems try to do something on their own, some malicious task. And structural risks are the things underneath the surface: say we all slowly fall in love with a chatbot. Maybe that might even be good, I don't know, if we're having a good time, but there are many reasons to believe something like this might eventually be bad. It's the slowly creeping things we should definitely look out for.

Host: Perhaps we could say a little bit more about structural risk. What are you actually worried about here?

Heim: When we talk about AI systems, we're often talking about a general-purpose technology. It's going to be plugged into many parts of how our economy works, how nation states compete with each other, even from a military perspective. These kinds of structures can change over time, and they change the dynamics. You could say nuclear weapons introduced some kind of structural risk: they exist, and they changed the dynamics of warfare. AI might be similar in a lot of these cases; it might be a structural risk regarding the military, how nation states compete, and how we go about war. For other structural risks, think about recommender systems at Facebook: that might be a structural risk if they turn out to be really addictive but leave us all really miserable. So it's a wide variety of slow-moving things where it's not immediately clear they're bad, but over time we see them playing out badly.

Host: What is the AI triad?

Heim: The AI triad, which I also like to call the AI production function, basically describes a function with certain inputs, and these inputs are three things, hence triad: compute, data, and algorithms. By compute I mean computational infrastructure: data centers, your smartphone, your computer, which we need to train and execute these systems. Data is what we eventually train these systems on, be it a bunch of images or a bunch of text. And algorithms generally describes machine learning systems: deep learning within machine learning, and within that, for example, Transformer architectures or other algorithms which eventually power these systems. They're trained on the data using the compute; we throw all of this into the AI production function and out we get AI systems with certain capabilities. The AI production function tries to think about how much these individual inputs matter and what they can tell us about the capabilities, the output of the production function.

Host: And why have you decided to focus on the compute aspect, on compute governance? What is it that makes compute specifically interesting from a governance perspective?

Heim: One interesting trend is the role of compute over time. Richard Sutton described it as "the bitter lesson": we keep trying to come up with really clever algorithms modeling the brain, but the bitter lesson is that general methods leveraging more compute just keep working. Better systems turn out to be bigger, and bigger systems mean more compute. We did an investigation looking at the training compute usage of cutting-edge systems, those with the best capabilities, those with a lot of citations, and what we found was that the compute used for training is doubling every six months. Something doubling every six months tells you this is a really important input; that was the empirical motivation. Beyond that, I usually describe compute as having some unique properties, and a unique state of affairs, which make it a particularly interesting governance node, which I'm happy to elaborate on.

Host: Yes, please tell us.

Heim: So, unique properties. Compute is a rivalrous good, and it's basically excludable. If you're crunching numbers on your computer, nobody else can crunch numbers on your computer; if a data center is fully utilized, nobody else can use it. This is in contrast to data and algorithms: if I have data or an algorithm, I
press Ctrl+C and Ctrl+V, and there we are, I've got it twice. This does not work with compute. I wish it would work like this; otherwise we wouldn't need this complex supply chain. It's also excludable: if I'm not giving you access to computational resources, you can't have them. Even if I hack into a data center, I can only use the computational resources that (a) nobody else is currently using, and (b) only until the point they kick me out. Whereas if I hack somewhere else and try to steal the model, the algorithm, or the data, then I have it, and I cannot be excluded later on, because it can just be copied. The other aspect is quantifiability: compute is somewhat nicely measurable, and there are various ways of measuring it. I can ask how many AI chips of which type an actor has access to, because, given that compute is important, how many chips they have might tell me something about their AI capabilities, or what kind of access to AI capabilities they have. I can try to measure the training compute I just described, but I can also just look at where the data centers are, who's using them, how many chips are there, and which actors are involved. Those who have a lot of compute are the AI actors I should be thinking about, should be trying to govern, and should make sure are building responsible AI systems. These are also fundamental properties which I claim stay the same over time; that's just a fundamental fact about compute. And it's particularly interesting compared to data and algorithms, which are way harder to measure, and from which you can't actually exclude anyone.

Host: So compute is basically physical, and that makes it easier to control. You're not going to have compute leaking online like the weights of a language model can; you're not going to have it leaking like a dataset. Could it be the case that, for example, the knowledge used to produce these chips could be leaked online, and that would make compute less controllable in a sense? Or is the production of compute very dependent on specific hardware, and perhaps local knowledge in the factories and so on?

Heim: This is the second point, which I call the state of affairs: compute is produced by this really complex, concentrated supply chain. If certain IP leaked online, could I just build the next TSMC in my garage? My claim is: actually, no, you can't. It's really, really hard. This is a thing a lot of people are thinking about right now. You might want to ask certain employees at ASML what they're doing in their work, and they might not be able to tell you; these companies pay a lot of attention to making sure no IP leaks from these places. We have a history of certain people leaking such IP, providing it to China, and China trying to use it to eventually produce some chips from it, and it's still hard. I think that's still the case: if all of ASML's IP, or TSMC's IP, leaked tomorrow, it would definitely make it easier to build the machines which build the chips, or eventually to build the chips and the fabrication plants, but I still don't think it's sufficient. There's still the tacit knowledge of the people who build these machines, and we're talking about the most complex machines in the world; actually setting all this up just seems really, really hard. We might all wish for a world where all the organizational knowledge is written down, but guess what: sometimes we need to talk to each other to bring certain things across, or it's literally "sorry, get out of the way, I need to tighten the bolt on this machine," because there's some specific way of going about it. A symbolic example.

Host: So it's very difficult, perhaps even if you wanted to, to leak all of the intellectual property from these companies, because perhaps many of the engineers cannot describe what it is specifically they're doing, or a document detailing what they're trying to do would never encapsulate all of the knowledge they have about it?

Heim: Indeed, and everybody just works on one small part. We're talking about one machine built by thousands of people. There's literally some person just responsible for, I don't know, the one bolt that sets the mirror in the right place, and another person just making sure the laser always gets exactly 12 volts. That alone keeps a person busy. So one person might know how one component works, but they rarely have the overview of how it all works. This is an endeavor of the whole of ASML, and to some degree an endeavor of the whole of humanity, to build these chips: we all come together and put the work into certain boxes. Different people design it, different people build the machines, different people fabricate and eventually assemble it. There's definitely some crosstalk, but within each of these units it's really, really hard to replicate it just based off stolen IP. And people are definitely trying this right now: people are trying to hack into ASML, TSMC, and all the other companies, trying to get as much IP as they can, and I hope they have pretty good cybersecurity to prevent this.

Host: Do you think these companies are actually investing enough in cybersecurity, compared to how valuable their IP might be? Of course, there's a general
answer that no one ever invests enough in cybersecurity, but do you think it's extra important for these companies?

Heim: (a) I think it's extra important. (b) Cybersecurity is really, really hard, as anybody with a background in it will tell you, and it gets really, really hard if you need to deal with nation states. It's not some script kiddies trying to hack you; it's a nation state trying to get certain IP. Are ASML and TSMC really trying hard? Yes. I actually just watched a documentary about ASML on the plane, and the CEO was complaining that they spend more and more, certain millions, just on securing their IP. Well, eventually it's in their interest: lucky them, they're the only ones building these machines, and if many of us could get the designs, it would not be good from their business perspective. So they're clearly trying. We also have the unfortunate case of Nvidia being hacked, actually just by some script kiddies, where a bunch of the data was leaked: "this is what the code looks like." I think Nvidia should probably have had better cybersecurity, and should maybe scale it up a lot, because Nvidia is just a design firm: you can just steal the designs and then copy them. There's more to gain if you get the IP; there's still some tacit knowledge, but there's more to gain. And eventually all of these companies are somewhat involved if you think about AGI and AI: if it turns out these systems are going to be really capable in the future, they need to step up their information security game. That's a key priority within AI governance and within AI labs: ideally to be as secure as the military, or the NSA, or whoever.

Host: Okay, so when we talk about compute governance in general, one option is that we want to know what certain companies and labs are doing, to monitor how much compute they're using, for example. In general, which options do we have available for monitoring compute usage?

Heim: One way of monitoring is what they sometimes just report themselves: "hey, we trained this new system and used this much compute." Part of the reason I'm working on this is that some AI labs taught me exactly this: "we trained this system, it's this big, it used this much compute," and, oh, interesting, that's a lot of compute; it means you spent single-digit millions just on training these systems. They used to just put this out there, but I think we're hitting a new era now where they stop doing this: everyone learned that compute is important, it's an important input, so now they stop publishing it, because you might also push dangerous memes and give away certain IP. The general claim is that more compute usually means more capabilities, and more capabilities should mean more responsibilities; that's what I'm advocating for. So to those who train the biggest models, I say: you train the biggest models, you have the most compute, you should also bear the most responsibilities here. This could be self-reported: a company just tells someone, ideally some actor in the government. I think it would be a great act of foresight to say, "hey, government, we just trained system X, it used this much compute, it has these capabilities; brace yourself, prepare," something along those lines, so they can have some foresight. It would be good for the government to know when something like GPT-4 is being released, because it might have certain implications for society. I don't think it's a really big deal right now, but in the future it will be more of a deal, with security implications, and the government should eventually be briefed on this beforehand. We should probably also just mandate it: every use of compute above a certain threshold needs to be reported, "just tell me what's going on," as the first step. Later on you could even mandate certain audits: "hey, you're training this big system, could we please have a risk assessment before you actually deploy it to the world?" And at the latest stage you could even deny people permission to train certain systems: they ask if they can train a certain system, and you ask, "who are you, actually? Are you a responsible lab?", and then we have some ways of eventually assessing this. Again, you could imagine this as a tiered system, with different monitoring under different regimes we eventually ask people to comply with. It's unclear to me now what is warranted, but I think we should definitely set up and put in place some kind of infrastructure so we can get started if it is warranted, and I think we have more than enough reasons right now to believe it might be warranted in the future.

Host: It's easier to prove that you have something, say that you have three servers, than it is to prove that you don't have something. So it's pretty difficult to prove that you don't have, you know, some extra compute hidden in the basement. What mechanisms do we have here to try to verify that a lab doesn't have more compute than they say they do?

Heim: As you say, it's easy to verify that you have that much compute. You could even imagine: run this algorithm; I know you have that much compute, so it should take you ten seconds, and if I don't get the answer in ten seconds, I'm going to ask you where your compute went. Whereas if they're hiding something in the basement, I think what we can do here is build on top of the
supply chain. The supply chain is really concentrated. There are basically two types, actually really just one type, of GPU sitting out there, which is Nvidia's, at least if we're talking about hardware you can buy; there's also still proprietary hardware like TPUs and others. So we check on the supply chain: there are only so many fabs producing these chips, and it's "hey, you're producing these chips, where are they going? I'm just going to write it down." And then: "I know you have this many chips, so I'm going to ask you to prove it from time to time; maybe I visit you, maybe I send you a proof of work." There are different mechanisms to eventually achieve this, but we know how to do traditional supply chain tracking, and this is needed anyway right now for US export control reasons, which we might chat about later: you just need to know where these types of chips go and who has access to them. So we can do this with traditional supply chain tracking methods, or we can also think about other methods, like hardware that only works if it tells you where it is. That might be more intrusive, but something along these lines.

Host: So geo-tracking for hardware, and basically geo-locking for hardware.

Heim: You could imagine something like this, if it's eventually warranted. And again, I'm only talking about a small subset of all the chips being produced; within AI chips, I'm talking about the highest-performing chips which go to data centers. I'm not talking about geo-locking anybody's smartphone or computer. We're talking about industrial hardware, probably owned by a handful of actors, because cloud compute providers and a few others own a majority of all of this AI compute, if you want to call it that.

Host: So one of the things that makes compute interesting from a governance perspective is that it's very concentrated. A huge data center is physical, it's physically large, you can probably almost see it from a plane. What if, say, I'm a lab and I want to get around certain restrictions? I don't want someone monitoring my use of compute. Could I distribute my computing over a large network and then train my AI models like that? Perhaps it would be much less efficient, but it would perhaps also allow me to circumvent certain monitoring regimes.

Heim: First of all, I would wish that nobody would try to circumvent this. Maybe it's wishful thinking: "hey, actually there's no downside to this, there's only benefit, everybody's going to be doing this, we ideally mandate it," and everybody's on the same page. But you're right: when you impose restrictions, policies, regulations, whatever, people try to evade them. For this type of evasion, I think a better term is decentralized training; training is always distributed across many GPUs, but now we're thinking about putting some GPUs in location A and some other GPUs in location B. The question is whether it's feasible. There is research on this, people are trying it, it is feasible, and there is a penalty. The question is whether the penalty is worth it: if you want to stay economically competitive, and your competitors are not doing this and can train, say, 40% cheaper, it might not be competitive, and then you'd rather follow the regulations. And if we go back to the previous point, where I'm initially tracking where all the AI chips go to prevent this type of smuggling or misuse: there might be two different data centers, and I can ask both, "hey, who are actually your customers?", and then, "oh, you both gave that much compute to customer A," so now I have some idea of the biggest thing customer A could try to train. We need different mechanisms there, but this is definitely one of the key evasion routes we need to look out for. I'm pretty pessimistic that you can just train at home, where everybody donates their GPU: these systems need to talk to each other at high speed, and, depending on where you live, most people have kind of shitty internet at home, so it's really hard to transfer all of this data. The data center is this engine which is way more efficient; it's always going to be more cost-effective to do it in a data center.

Host: You mentioned these export restrictions that the US has recently imposed on exports to China specifically, but perhaps also other countries?

Heim: It's mostly China, I think. I'm not sure the same rules apply to Russia, but there are definitely sanctions on Russia where more countries play along; Taiwan as well has sanctions on Russia.

Host: So what are these export controls on China?

Heim: I think we can scope it into roughly four buckets. First, they prevented certain chips from going to China. These chips are classical AI data center chips, such as the A100 or H100; they're not allowed to be exported to China anymore. Second, they stopped the selling of semiconductor manufacturing equipment, for example from this ASML company we discussed: it's not allowed to sell these capabilities to China anymore, and the Dutch actually play along, they agreed to this, so China cannot produce their own chips. Now, if you stop selling your chips, they're incentivized to produce their own chips, and if you say "I don't give you access to the machines which produce the chips," then they're trying to build their own machines which build their
chips. But, third, the US now also has an export ban on the equipment which helps you build the machines which build the chips. And lastly, they also stopped actual US persons: I think anybody with a US passport is no longer allowed to work at certain companies that fit into this broader scheme of the Chinese semiconductor industry. That's a big deal. They're basically trying to cut off China's access to producing sovereign semiconductor chips. China can still buy most chips; the chips which are actually banned and restricted are a limited number, mostly, in fact only, chips used in data centers for AI workloads and maybe some other workloads.

Host: If we want to take inspiration from these export restrictions and say that something similar could apply to a US company, what would be required for you to say this is a good idea? How reckless would you need to see a company behaving before you would pull the trigger on, say, not giving them access to chips? This seems like a pretty serious action. You talked about how with greater compute comes greater responsibility, but perhaps also, with greater power over companies comes greater responsibility. So what should we see from a US company before you would be interested in doing something like this?

Heim: As you said, this is a pretty intrusive measure, and it's a pretty blunt tool; that's not to be forgotten. Would I pull this trigger? I actually don't think so. The reason the US eventually did this is that other legal tools just don't work. You can write as many angry letters as you want, it just doesn't work: "hey, we're selling you chips, please don't give them to the army"; guess what, we figured out they end up at the army anyway. So you pull this blunt tool because it's the only thing you can do. What's useful to think about here is some kind of tech stack. At the top I have services: AI workloads, facial recognition, things like that. Below that I have software: computer vision, machine learning, this kind of thing. And this builds on top of hardware. What the US government eventually wants to achieve here is to prevent certain misuse and abuse of these systems: being used against human rights, being used by the army, and a bunch of other things. They can write an angry letter saying "please stop doing this," but if you're still legally allowed to do it in your own country, that's not going to work. In your own country you have different tools: if the US government sends an angry letter to OpenAI or anybody else, well, they'd better follow it, because they're sitting in the same jurisdiction, so you probably don't need to use these blunt tools of cutting off access to compute. My argument is that compute is one of the most enforceable nodes, and sometimes you need enforceable nodes, as you say, when particular actors are particularly reckless. Another example to think about: people running Silk Road-style online drug markets. You'd send them angry letters, but first of all, who do you send the letter to? You often don't actually know who's running it. And if you can't figure out who runs it, you use this blunt tool, which I'd file under compute governance: just unplug it. They try to figure out where the server sits, because that's sometimes easier than finding the person who runs it, and then they just unplug it: "well, we couldn't find you, you're doing this illegal thing, you didn't respond to our letters (we couldn't even send them), so we're just going to unplug the server." So from an enforcement perspective, compute is really, really useful, and the reason is that it works as a backup option if the other things don't work, or maybe just as a layer of defense. And next to this enforcement tool there are the other reasons, like quantifiability and so on, but they look a bit different from this enforcement angle I just talked about.

Host: You could imagine a scenario where, say, a governance organization and a company disagree about how powerful a model would be if it were trained. We could say GPT-8, where there's a disagreement about which capabilities this model will have. And this is a general feature of these machine learning models: we don't always know which capabilities are going to arise from them. We've been surprised before; I have, probably you have too, many people have been surprised by the progress we've seen. So given that we don't know which capabilities will arise beforehand, how do we know which training runs to restrict?

Heim: Yeah, that seems to be a hard question. I'll point to two things which are somewhat useful here. The first I would actually call an open domain of research. The problem is that if you're trying to stop a training run from happening, you can't do what we usually do: usually we do governance based on a model, saying "this is a high-risk model because it makes decisions about who gets a loan or who gets which medical advice," and then we make decisions based on that and run some evals or whatever. We can't do that here. So what might be interesting is properties of the training run. One question I could ask is: how big is the training run going to be? Because, again, historically, bigger training runs and bigger models have had more capabilities, so all things being equal, those are the things which should be governed more, which should be used more responsibly in the future. We might also learn more about certain aspects of training
runs which are particularly dangerous right so once you maybe invent an architecture where you have like some sub architecture like which enables your AI systems to have goals something wrong design so you have like certain reinforcement learning components or you have like certain online learning components these kinds of things I'm like well all things being equal systems with these kinds of properties should be more of concern right so I want some kind of verification of properties of training runs and like then I can just see what passes and this is like people should try to learn more about this and ideally we also have it in a way where it can be done automated so the company doesn't need to release the source code like just like nobody sees them we do it like in a privacy preserving way number way how we do it how we usually deal with these kinds of governance things is we make it about the organization we make it about the owner right if I'm gonna sell you a gun and verse gonna do a background check well I don't know how you're gonna use it right but the minimum thing which I can do is like well one have you done before are you a responsible actor and this I think opens up this whole government like this whole work stream of corporate governance which my colleague is working on it's like what makes up a responsible AGI lab what are the things they should be doing so I load them to train a certain system I could be saying hey you're only allowed to trade a certain system if you have an internal risk management system right this this might be what good government looks like we have the same for banks you're only allowed to open a bank if you announce certain regulations this is just like I'm not Reinventing the wheel we have this before I'm just saying well maybe you should make this about AI developers AI labs and maybe compute as a way of verifying this I think what I'm trying to push is like this whole notion when I click we need some kind of AI driver's 
license. Not everybody is allowed to drive a truck — I'm not allowed to drive a truck, and I think that's a really good idea; I should do some training before I drive one. The same goes for AI systems: not everybody should train them. Some corporate structures are simply more responsible than others, and we know which features make that up. It won't be foolproof, but we learn over time, and we could get started right now: these features make an organization more responsible, those make it less responsible.

But those are the more heavy-handed measures — monitoring and restricting compute for certain companies. You've also explored the option of promoting safety work. How would this go — would you subsidize compute for companies you have reason to believe are doing important safety work?

That would be one example: telling companies, "you're the good guys, I'll give you more compute." The problem is: does safety work scale with compute? And we generally have a really blurry line between what is safety work and what isn't. But it's definitely worth exploring — if I can restrict access, I can also give more access to others. One prominent example: nearly all the compute-intensive systems — which means most of the systems with the most capabilities — are coming out of research labs, out of corporations, not out of academia anymore. Academia simply doesn't have the funds, the teams, and a bunch of other things needed to build these systems. What the US is currently trying to do — it's called the National AI Research Resource — is figure out whether we can just give academics more resources, more compute. You might then argue that academics have a somewhat different nature of research compared to, for example, industry: industry is eventually trying to make money, whereas academic research is more diverse and more likely to produce public goods, so maybe academics would be more incentivized toward beneficial AI systems. It's worth taking a closer look — it's not obvious to me; they might just continue being part of this race of building bigger and bigger models. I don't think that's the best idea: it's really expensive, and it's not the best competitive advantage for the NAIRR or for academics. Rather, they should do research that is simply beneficial — which could include the more diverse research that might help us make progress on safety, or particular parts of safety. Be it interpretability or whatever — pick your favorite thing, and then we can argue over whether it's actually safety; that's always an ongoing topic. They might also do a type of work I'd call applied scrutiny: download LLaMA, or use the GPT-4 API, and really point to the failure modes of these models — and giving them compute for this might actually help. There's a proposal I wrote with my colleague Markus where we said the NAIRR should also give academics access to pre-trained models, so they can take these models apart. And I'm not only talking about computer scientists here; I'm talking about the whole scientific domain that is now basically involved in AI research: what does this model mean for my domain? Take it apart, point to the failure modes, so OpenAI or whoever can eventually make the model better.

Perhaps the top labs would be interested in having this data, because discovering failure modes is one of those things that blurs the line between safety and capabilities work: if a researcher at a university discovers a bug or a failure mode of a model, that could be used to improve the model's capabilities, but also its safety. There might be a win-win situation here, where this notion of promoting safety work is less of a stick than restricting compute — more of a carrot approach.

I think so. Just look at Twitter — everybody is basically beta testing GPT-4 and trying to figure out what's happening there, and ideally you should get paid for it, or at least get a PhD out of it, something along those lines. You should not beta test — alpha test — on the whole of society; maybe start with a small set of people. OpenAI did do something here, right — they delayed their release — but maybe it's not sufficient; maybe we want even more responsibility in the future. And actually, I don't want the labs to manage this — that's not a democratic decision. I want the government to decide what responsibility means and what it doesn't.

Could you talk a little bit about tech-supported diplomacy? I imagine this means something like exporting these governance options globally — tell me what it means.

I mean something along the lines of using tech to reduce social costs. A lot of times we do stupid stuff because we don't have enough information, because nobody wants to tell us — like, how many nuclear missiles does X have? If we both had credible commitments, that would help us a lot, and we've invented tools for it. When I talk about tech-assisted approaches to AI, I mean something like: can we implement hardware-enabled mechanisms which allow actors to make certain credible commitments,
where they actually have technical proofs: "Hey, last year I trained this many systems with this much compute," and we can say, "cool, this looks good" — or, "this one model I'd like to take a closer look at." With credible commitments, labs could cooperate with each other: if the labs all agree, "hey, let's slow down, let's go slower," you want that commitment to be credible — you want somebody able to check on it. You could imagine hardware-enabled mechanisms which prove, "here's the monthly report on how DeepMind used their compute," and then OpenAI checks it: cool, they held up their commitment. This is great because it reduces the social cost of trusting each other, compared with just telling each other and sending each other happy emails. You could imagine this across nation states, across labs, across many different actors — credible commitments about AI development, with compute as the tool that enables it, so you cannot circumvent it. Hardware is not impossible to hack — absolutely not — but it's definitely harder to hack than software.

In terms of technical solutions here, so that we don't have to rely on trust, what's available? I'm imagining perhaps some cryptography — something that's credibly neutral between labs, between companies, or even between countries. This has been a problem for decades in the nuclear space, where it's difficult for countries to trust each other and there is no neutral ground or technical way to prove, for example, how many nuclear warheads you have. So what are our options?

Let's piggyback on the nuclear case. We technically do have possibilities — you can count the number of nukes; they're hard to hide. And in the case of Iran and other countries, we measure the level of enrichment of uranium, and if they enrich it too far, the alarm bells go off and somebody does something. So we have some ways: "yep, this uranium is only enriched to power-plant levels, fine" — or, "you guys, didn't we decide not to do this?" That's why we have on-site inspections, and they even set up physical devices there which monitor continuously. That's the same thing I'm pointing toward here: things which are continuously monitored, which we trust but eventually need to verify with on-site inspections or something along those lines.

Okay, so we've explored compute as a way to govern AI, but there's the question of whether compute might become less and less relevant, because newer models will require less and less of it. Do you think cutting-edge models will continue to be limited by computing hardware?

I think so — we have some reasons to believe it. Historically — even though we don't know the exact numbers — I'm hereby claiming that GPT-4 probably used more compute than GPT-3, and probably so much compute that I could not train it in my basement. What's definitely right, as you said, is that over time, to achieve capability X, the compute required goes down. But what we've historically seen is that people keep pushing compute, and capabilities, further anyway. The question is: what is the capability level where things start to be worrisome — where it actually needs to be governed? And if the compute needed for that level goes down over time, what do
we do about this? For example, maybe ten years from now I'll be able to train GPT-4 at home — and then the question is how good GPT-6 is compared to GPT-4. Is GPT-4 basically worthless because GPT-6 is already out there? That's an important question, and so is the more general notion of how the offense–defense balance of the cutting edge plays out. If we're lucky, the cutting-edge systems can defend against the other systems — that's what we need to see. It's the case for cyber weapons; is it the case for AI systems that, say, develop dangerous pathogens, anything along those lines?

If we look at the compute trends: say I have the newest gaming computer — when, approximately, would I be able to train a GPT-4-level model at home?

We can try to crunch the numbers. I don't know exactly — GPT-4 is something like 10 to the power of 25 FLOP, and your GPU at home currently has, what, maybe 300 teraflops, assuming you're getting an A100. If Moore's law continues, you could probably do it at home within the next six years, if you're happy to wait a couple of months — and algorithmic efficiency keeps driving the required compute down; algorithmic efficiency matters more here than how your hardware develops. You get some hardware gains over a couple of years, but algorithmic efficiency has already made training GPT-4-class models way cheaper. So I guess this will be possible at home at some point — or at least, at some point it's going to be much cheaper. We could crunch the numbers; I can't do it in my head right now, but it's definitely possible to do.

It's not super important whether it's literally at home. But for something more attainable for an average person — say, training a GPT-4-level model for five thousand dollars or so — that's perhaps coming within six years. That's not a long time. Of course in AI terms six years is a million years, but in terms of real-world impact it's pretty close. So what's the hope here — that GPT-6, or a model like that, will be able to detect the misinformation or the phishing attacks coming out of GPT-4 models trained at home?

You could imagine something along those lines — the defense from these newer models being really, really good. You could also imagine a world where AI compute becomes more specialized, so the GPU you have at home in the future is actually not that useful for training these new systems. It depends how things develop; we already see some divergence — your GPU at home looks different from an Nvidia A100. So let's not count on the six years; with exponential growth we also sometimes make exponential errors. I'm happy to follow up with the numbers — I can't do them in my head right now. That's what the empirics will show: whether we can keep pushing by spending ever more compute on these kinds of systems. I never said this is going to be easy — nobody ever said AI governance, or hardly anything here, is going to be easy. What I'm saying is: look, here's an interesting governance node. It has some unique properties; it's good at some things and bad at others. Compute is not the solution to the AI governance problem — it's part of the solution. It might give you some nice tools, tools to get international agreements, and hopefully some of my colleagues are figuring out those international agreements, and agreements between labs, and the people in government, and we all work together. I'm giving one piece of the puzzle: here's compute, here's how it might help, and here's how it might be one defense layer out of
many.

Let's touch on some potential problems with compute governance — in a sense, problems with governance in general. If we're talking about who does the governing of compute: US labs would probably be a bit worried if their compute were governed by China, and Chinese labs might be worried if their compute were governed by the US government. It seems that for this to work you'd need something close to a global system for compute governance. So who does the governing, and how do you get everyone on board?

How do you get everyone on board — well, what we can say right now is that the US is governing China's compute. That's happening, whether or not you agree with it; that's the status quo. How do they do it? They leverage certain chokepoints across the supply chain, and they do it with allies — if the Dutch and the Japanese wouldn't play along, it wouldn't work. So you already have three countries getting together and trying to achieve this, using, unfortunately, the type of blunt tool this requires. Who eventually does this kind of compute governance? First, it will always be hard for countries to build their own sovereign semiconductor supply chain. A lot of times you get regulatory flight — they say they'll build up their own whatever — but that's not going to work for compute. A bunch of people ask me, "if you try to control cocaine, they'll just make it in Mexico and smuggle it across." Yep, seems right — but making cocaine is way easier than building these kinds of chips, and that helps here. So it looks like it will always be an effort of essentially the whole globalized world to build these supply chains, and eventually we can all coordinate and agree on this. And then there are other chokepoints: the current leading chip designers are Nvidia and AMD. So I might not even need all the governments to sign up — I might only need a responsible Nvidia which, at least in the beginning, gives chips the features to eventually do something along these lines. Maybe we don't even need to mandate it in the beginning — maybe we mandate it later — but Nvidia should start thinking right now about which features might be needed. Maybe not; maybe it all turns out to be okay and we don't need them. But I think we should definitely prepare for these kinds of mechanisms being used. And eventually — I'm not saying Nvidia should control it — eventually we want governments and democratic systems to control this. That's a general theme we keep coming back to: it seems better if democratic societies decide this than if a dictator decides who gets how much compute. Lastly, there are ideas where no single actor or entity controls the compute: you could set up a third party. We have the IAEA for nuclear issues; maybe in the future you have an international compute bank which does this based on an agreement — maybe the UN, maybe whoever. You could also think about mechanisms that are self-sustaining: "I'm selling you this chip, and the chip is going to do X if you do Y" — the chip simply will not allow training runs above a certain size. So it's not the US government pressing the button to
stop it; it's more like, "no, this is what we agreed on; this is just what the chip does — there's no way around it." Of course, this is really hard from a security perspective; you might need on-site inspection regimes for it to eventually work.

Would you be worried that a government, under the pretense of worrying about AI risk, uses these AI governance measures to, say, cement its military advantage? This could be any government, but say the US government says "we need to restrict other countries' ability to make AI progress because we're worried about AI risk," when really what it's trying to do is maintain US hegemony and US military power.

That's definitely something one should be worried about: cutting off everyone else's access and just putting all the chips in our own pocket, or monitoring every computer except the US military's. Eventually we need something everybody signs on to. And again, the US doesn't hold the whole supply chain — many countries have chokepoints across it — and that's one way you can pressure each other, or maybe not pressure, but talk each other into this, and eventually get there.

What about on the company side? Again, I'm worried about institutions pretending to have good intentions about preventing AI risk while having different underlying motivations. Here I'm imagining the current top players in the field using AI governance regulation to avoid competition or to maintain their market dominance — an economic phenomenon that we have, in my opinion, seen in other areas. Could we also see it here?

That's a real problem with any type of regulation: certain regulations favor big players, because big players have an easier time complying. But the status quo is already that certain labs are leading, and it's not the case that anyone who enters the field can compete. Just as an example, all the major AI labs are partnered with a cloud computing provider — compute is important, and it looks like you need a special partnership. OpenAI is with Microsoft; Microsoft has its own compute; Google has its own compute, and DeepMind sits within Google and uses Google Cloud; Hugging Face is with AWS, Amazon Web Services. This is already a barrier to entry: providers can only have so many partners, and anybody else who wants access to cloud compute — which is basically the only somewhat price-competitive way to get access — faces bigger costs because they don't have the special partnerships. So the field already faces this dilemma to some degree. And eventually it all trades off. Do I want a great competitive environment where everybody can compete and we get the best AI systems? Or do I say: actually, I don't want this competitive environment for AI systems, because it could be a race to the precipice, and that doesn't look good. So I might accept the cost that bigger companies have more power — but then I want them to verify what they do; I want them to have proofs, via their hardware, of these verifiable commitments. We should not just say, "oh, OpenAI and DeepMind are the good guys, they play along." I do think we live in a comparatively happy world where they take certain risks seriously — that's great — but I'm not saying they take them seriously enough, and so we add another layer on top, where we mandate this kind of
stuff.

Do you worry that governing compute could drive innovation from more responsible countries to less responsible countries? Say the more responsible countries implement some form of AI governance, and this just drives AI progress to less scrupulous countries.

Regulatory flight is a thing: people leave countries, people move to places with lower taxes — but they don't move just anywhere. Look at OpenAI and the others: they're not sitting in the country with the lowest taxes, even though they have incentives to. You want to sit in the Bay, because the talent is there, because people want to live there; your AI developers want to live in a nice place — they care about this kind of stuff. So they can't go just anywhere. And some of these companies simply feel they are American; maybe that stops mattering once incentives change. But independent of that, if they don't play along, there's the unique thing about compute: the concentrated supply chain. "We'll just stop AI chips going to country X" — that's the thing we'd do. If somebody sets up an AI haven with no regulations on the Bahamas or somewhere along those lines, then the chips stop going to the Bahamas. It's a blunt tool, but it's eventually the tool we'd need to use — it might be the only tool we have — and it's better than some other measures you could imagine.

Of course, all of this depends on how seriously we believe AI risk should be taken. The two of us agree it should be taken pretty seriously, but not everyone does, and the willingness to use these tools depends on the risk we see. That's in a sense a whole separate discussion I've had a number of times on this podcast. Here's a worrisome scenario: say that in the sixties the US was on a path to producing a lot of nuclear power plants, and thereby getting clean, green, and cheap energy — but this was prevented by regulation that made it very difficult to build new nuclear power plants. Perhaps that regulation was well-intentioned, and perhaps there was some real danger, but perhaps there was also some conflation between nuclear weapons and nuclear power plants, or exaggerated fears about the actual dangers of nuclear, and so on. Could this whole AI worry lead us to a similar situation, where we could have benefited enormously from all the innovation we would have gotten from AI, but because we were more worried than the evidence actually warranted, we killed off an industry?

That's definitely a downside — a downside with anything. How can you mitigate it? You change your mind over time. What I'm trying to propose is that we don't go full throttle on all of these measures; we should think about a tiered system that helps with this. Some things are warranted right now: companies reporting their training runs and their compute usage can be argued for right now, at least as an optional measure, along with discussing with the government how powerful these AI systems are. Are there more extreme things that might become warranted? Some we already see playing out: the US is governing the compute of China. So we'd better have a good idea of why it's happening, and try to change our minds over time. If it turns out that (a) it doesn't work, or (b) it doesn't achieve the goal, I hope they change their minds, so the other side gets the AI chips back and can do good things with them. We just need to continuously look at the evidence we have. Maybe I'm a bit naive there — maybe I'm too optimistic that we can keep doing this — but that's what my job is about: continuously checking the risk landscape. I'm trying to prepare for things which might happen; if they don't happen, hooray — don't get me wrong, that would be great — and then I try to roll back, or I have measures which I simply never activate. That's why I want a tiered system, a step ladder, something along those lines. In general we should err on the side of better safe than sorry, and compared to nuclear power plants, the stakes for AI are significantly higher than the stakes of the mistakes we made with nuclear power — that seems fairly intuitive. I'm fine with taking some cost on these things if we have sufficient evidence. And one thing people forget: right now we're racing ahead with these kinds of systems and we have no clue how they work. Twitter is currently figuring it out — every day we find something new, some emergent capability in how these systems behave. As long as that's the case, it seems totally fine to me to push the brake pedal and think more carefully about these things.

Lennart, thank you for coming on — this has been super interesting for me.

Thanks for having me.
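The training-run gating idea Heim describes — reviewing runs based on declared properties and operator licensing rather than on a finished model — can be sketched in code. Everything here is invented for illustration: the threshold, the flags, and the review outcomes are hypothetical assumptions, not any real or proposed regulation.

```python
# Hypothetical sketch of verifying training-run properties before
# allocating compute. Thresholds and flags are illustrative only.
from dataclasses import dataclass


@dataclass
class TrainingRunDeclaration:
    total_flop: float            # declared training compute
    uses_rl_components: bool     # e.g. reinforcement-learning components
    uses_online_learning: bool   # keeps learning after deployment
    operator_licensed: bool      # the "AI driver's license" idea


FLOP_REVIEW_THRESHOLD = 1e25     # illustrative, not a real limit


def review_outcome(run: TrainingRunDeclaration) -> str:
    """Return a coarse decision for a declared run."""
    if not run.operator_licensed:
        return "deny: unlicensed operator"
    if run.total_flop >= FLOP_REVIEW_THRESHOLD:
        return "hold: large run, requires case-by-case review"
    if run.uses_rl_components or run.uses_online_learning:
        return "hold: flagged architecture properties"
    return "allow"
```

A small, licensed run with no flagged properties would pass; a run above the threshold, or one declaring goal-enabling components, would be held for review. The point of the sketch is only the shape of the check — real criteria would need the open research Heim calls for.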
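The "hardware-enabled credible commitment" idea — a monthly, verifiable compute-usage report — can be sketched with a message-authentication primitive. A real scheme would rely on hardware roots of trust and asymmetric attestation; the shared key and report fields below are purely illustrative assumptions.

```python
# Minimal sketch: a device signs a monthly compute-usage report so
# another party can detect tampering. Illustrative only — real
# attestation would use hardware-backed asymmetric keys.
import hashlib
import hmac
import json

DEVICE_KEY = b"illustrative-shared-secret"  # stands in for a hardware key


def sign_report(report: dict) -> str:
    """Produce an authentication tag over a canonical report encoding."""
    payload = json.dumps(report, sort_keys=True).encode()
    return hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()


def verify_report(report: dict, tag: str) -> bool:
    """Check that the report matches the tag (constant-time compare)."""
    return hmac.compare_digest(sign_report(report), tag)


report = {"month": "2026-03", "total_flop": 2.4e23, "runs_over_1e24": 0}
tag = sign_report(report)
assert verify_report(report, tag)                          # untouched report passes
assert not verify_report(dict(report, runs_over_1e24=3), tag)  # tampering is caught
```

This captures only the "credible, checkable report" property from the conversation; circumvention resistance is exactly the part that needs the hardware and on-site inspection regimes Heim mentions.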
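The back-of-envelope Heim declines to do in his head can be run explicitly. All inputs are assumptions taken loosely from the conversation: roughly 10^25 FLOP for a GPT-4-scale run, roughly 300 TFLOPS sustained on a home GPU, and hardware and algorithmic efficiency each improving by 2x roughly every two years. Under these particular assumptions the "six years" figure looks optimistic; changing any input shifts the answer a lot.

```python
# Back-of-envelope: wall-clock months to run a GPT-4-scale training job
# at home, n years from now. All constants are rough assumptions.

TARGET_FLOP = 1e25            # assumed GPT-4 training compute
HOME_FLOPS = 3e14             # assumed sustained home-GPU throughput (~300 TFLOPS)
SECONDS_PER_MONTH = 30 * 24 * 3600


def months_to_train(years_from_now: float) -> float:
    """Months of continuous compute needed, given assumed progress rates."""
    hw_gain = 2 ** (years_from_now / 2)    # hardware doubling every ~2 years
    algo_gain = 2 ** (years_from_now / 2)  # required FLOP halving every ~2 years
    effective_flop = TARGET_FLOP / algo_gain
    effective_rate = HOME_FLOPS * hw_gain
    return effective_flop / effective_rate / SECONDS_PER_MONTH


for years in (0, 6, 10, 14):
    # roughly 12860, 201, 12.6, and 0.8 months respectively
    print(years, round(months_to_train(years), 1))
```

So with these assumptions, a months-long home run becomes plausible on a ten-to-fifteen-year horizon rather than six — which is consistent with Heim's own caveat not to count on the six years, and with his point that algorithmic efficiency dominates the outcome.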

Related conversations

AXRP

15 Feb 2026

Guive Assadi on AI Property Rights

This conversation examines governance through Guive Assadi on AI Property Rights, surfacing the assumptions, failure paths, and strategic choices that matter most for real-world deployment.

Same shelf or editorial thread

Med 0 · avg -1 · 136 segs

AXRP

28 Jun 2025

Peter Salib on AI Rights for Human Safety

This conversation examines governance through Peter Salib on AI Rights for Human Safety, surfacing the assumptions, failure paths, and strategic choices that matter most for real-world deployment.

Same shelf or editorial thread

Med 0 · avg -3 · 196 segs

AXRP

27 Nov 2023

AI Governance with Elizabeth Seger

This conversation examines governance through AI Governance with Elizabeth Seger, surfacing the assumptions, failure paths, and strategic choices that matter most for real-world deployment.

Same shelf or editorial thread

Med -7 · avg -8 · 110 segs

AXRP

7 Aug 2025

Tom Davidson on AI-enabled Coups

This conversation examines core safety through Tom Davidson on AI-enabled Coups, surfacing the assumptions, failure paths, and strategic choices that matter most for real-world deployment.

Same shelf or editorial thread

Med 0 · avg -5 · 133 segs

Counterbalance on this topic

Ranked with the mirror rule described in the methodology: picks sit closer to the opposite side of the spectrum from your score on the same axis (lens alignment preferred). Each card plots your position and the pick together.